
OpenTelemetry: Unifying Application and Infrastructure Observability

Explore how OpenTelemetry revolutionizes observability by unifying application and infrastructure monitoring and empowering developers with open standards.

In this insightful Q&A, Goutham Veeramachaneni, a long-time Prometheus maintainer and Product Manager at Grafana Labs, shares his unique perspective on the transformative impact of OpenTelemetry (OTel) in the observability landscape. Veeramachaneni discusses how OTel is standardizing telemetry data and inspiring new open-source data collectors and workflows that bridge the gap between application and infrastructure monitoring. He offers valuable insights into the evolving ecosystem, the challenges ahead, and the exciting possibilities for developers in composing more effective telemetry data pipelines.


Q: As a Long-Time Prometheus Maintainer, What’s Your Take on the Overall Impact That OpenTelemetry Has Had on the Market?

A: It’s given developers and platform teams much greater ownership of their data, along with flexibility and freedom they didn’t have before. Previously, with no universal open standard for telemetry data, proprietary vendors built mousetraps designed to make migrating to other solutions painful, which was insane. Those vendors had little incentive to innovate or compete because their mousetraps locked users in so effectively. They spoke their own protocols and collected their own metrics, and there was no standardization. OpenTelemetry has already forced the entire market to standardize on the OTLP protocol and its ecosystem of SDKs and APIs. That has taken the power away from vendors and created a dynamic, open standard where everyone collaborates, driving a ton of innovation.


Q: What Is the Most Exciting New Progress You’ve Seen With OTel in the Last Year?

A: I’m excited to see how Prometheus and OTel are coming together, and all the momentum behind bringing application observability to the same level of standardization and consistency as infrastructure observability. Prometheus is such a staple in infrastructure observability that everyone uses it, either directly or as a flavor of Prometheus from a vendor. One reason Prometheus is so popular is that there is an exporter for just about every infrastructure component, backed by a massive community. Until OTel, however, no such standardization and velocity of innovation existed in application observability, because building auto-instrumentation agents takes a lot of effort, and only the prominent vendors with teams of 20 people working on those agents had that skill. So application observability was full of proprietary protocols and metrics-collection methods, with no standardization. Now OTel has created a foothold by establishing a standard for monitoring your applications, and it is implementing that standard across all the popular languages.
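
As a concrete illustration of that standardization, here is a minimal sketch, in Python with a hypothetical service and counter name, of what vendor-neutral instrumentation looks like through the OTel SDK; the same OTLP output can also come from OTel's zero-code auto-instrumentation agents:

```python
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.exporter.otlp.proto.http.metric_exporter import OTLPMetricExporter

# Ship metrics over OTLP/HTTP; localhost:4318 is the conventional
# OTLP/HTTP port for a local collector or OTLP-capable backend.
exporter = OTLPMetricExporter(endpoint="http://localhost:4318/v1/metrics")
reader = PeriodicExportingMetricReader(exporter)
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))

# The instrumentation itself is vendor-neutral: swapping backends means
# changing the exporter endpoint, not the application code.
meter = metrics.get_meter("checkout-service")   # hypothetical service name
orders = meter.create_counter("orders_processed")  # hypothetical metric
orders.add(1, {"payment.method": "card"})
```

Because every vendor now speaks OTLP, this same snippet works unchanged whether the data lands in an open-source or a commercial backend.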


Q: What’s the Implication of Application and Infrastructure Observability Coming Together, and What Needs To Happen for Us To Get There?

A: Well, we already have a standard: just as Prometheus has its exporters, in the application observability world you now have all these SDKs and auto-instrumentation agents generating OpenTelemetry data. Prometheus has made many strides in the past year toward ingesting OTel data, so infrastructure and application metrics can now sit side by side in the same system. Prometheus 3.0, coming later this year, has OpenTelemetry support as one of its main features and focus areas.
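
In practice, that means an OTel SDK or collector can push OTLP straight into Prometheus. A minimal sketch, assuming a recent Prometheus with its OTLP receiver enabled (behind a feature flag such as --enable-feature=otlp-write-receiver on the 2.x line; check the docs for your version) and a hypothetical hostname:

```python
from opentelemetry.exporter.otlp.proto.http.metric_exporter import OTLPMetricExporter

# Target Prometheus's OTLP ingestion endpoint directly, with no
# intermediate translation layer. The host is hypothetical, and the
# OTLP receiver must be enabled on the Prometheus side.
exporter = OTLPMetricExporter(
    endpoint="http://prometheus.example.com:9090/api/v1/otlp/v1/metrics"
)
```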


However, the story doesn’t end there. You need to be able to correlate the metrics easily. For example, you need to be able to relate a spike in errors in your RED metrics to CPU saturation on the node the application is running on. This is tricky because the conventions for Prometheus and OpenTelemetry don’t line up yet. I believe this is what the community will focus on in the next year: ensuring you can seamlessly correlate and navigate the data between the two worlds of application monitoring with OTel and infrastructure monitoring with Prometheus.
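
One concrete piece of that correlation work is resource identity. Prometheus's OTLP translation maps the OTel resource attribute service.name to the job label and service.instance.id to instance (with remaining resource attributes exposed via a target_info series), so setting them deliberately is what makes application metrics joinable with node-level metrics. A minimal sketch with hypothetical values:

```python
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.resources import Resource

# Hypothetical service identity. On the Prometheus side, the OTLP
# translation maps service.name -> job and service.instance.id ->
# instance, which is the join key back to infrastructure metrics;
# k8s.node.name is the OTel semantic convention for the node the
# workload runs on.
resource = Resource.create({
    "service.name": "checkout-service",
    "service.instance.id": "checkout-7d9f5-abc12",
    "k8s.node.name": "node-03",
})

provider = MeterProvider(resource=resource)
```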


Finally, there is also the sticking point of collecting this data. While you can collect Prometheus metrics with the OpenTelemetry Collector, you’ll convert the Prometheus data into OTLP and then back into Prometheus format (since the datastore is Prometheus). Today, this carries a high CPU overhead, and it’s something we want to optimize. This is also why I am excited about Alloy, Grafana’s new open-source collector. Alloy comes from a Prometheus-first heritage and embeds several infrastructure Prometheus exporters. It is also an OTel Collector distribution and supports collecting and processing OTel data efficiently. It shines because, if you collect Prometheus data and your final destination is also Prometheus, it avoids the CPU cost of a round trip through OTLP in the pipeline.


That’s one of the beauties of open standards and having something stable to build against: you can use the OTel Collector directly, use a new collector like Alloy that is optimized for making Prometheus and OTLP work together more seamlessly, or try any other collector. OTel has created a buyer’s market where developers have enormous optionality in which observability tools they use, while knowing that they own the telemetry data underneath and that OTel itself is doing the heavy lifting of language support.


Q: Can You Describe the Historical Disconnect Between Application and Infrastructure Observability?

A: Today, if you look at the systems you’re trying to debug, you have the infrastructure (MySQL, Postgres, other databases, and node-level data like which hosts you’re running on and how much memory and CPU they’re using) and then the applications and containers running on top. When you get paged because users are seeing errors, you suddenly have a set of dashboards showing a list of applications and how each one is performing. It’s easy to see where an error is occurring (say, in MySQL), but going from there to finding out what is wrong with MySQL is not easy.


Because the application speaks OpenTelemetry metrics while MySQL speaks Prometheus metrics, there is no standardization between the two. It takes a lot of expertise to find the correct instance of a running service and make these kinds of correlations, and this is going to get much easier as we see deeper native integration between Prometheus and OpenTelemetry.


Q: What Are Some of the Problem Domains You See OTel Tackling in the Future That Are Yet To Be Solved for Telemetry Data?

A: To get anything across the goal line in terms of standardization, you have to build consensus across many people and groups; that’s one of the things that’s so remarkable about what OTel has accomplished. Once specifications are settled, execution and innovation on top of them become easy. I see some realms where it’s inevitable that OpenTelemetry will have a similar impact on standardizing telemetry data, but where things are still so early that consensus is still a way out. Front-end (real user) monitoring is one example. We need to see more databases and logging systems adopt the OTel logging specification. Semantic conventions for LLMs are still largely proprietary.


Observing messaging queues and the applications built on top of them is still in its early days. There’s also a ton of innovation in the CI/CD space, which needs better metrics to understand how long builds are taking; that’s another exciting opportunity area for OTel.
