Bridging the Observability Gap for Modern Cloud Architectures

Upgrades to the Dynatrace observability platform leverage AI and expanded data pipelines to accelerate cloud-native development by simplifying complexity.

Cloud-native architectures have brought immense complexity along with increased business agility. With that complexity come fragility and a lack of transparency into system performance and reliability. At Perform 2024, Dynatrace announced three major platform enhancements aimed squarely at bridging this observability gap for engineering teams.

According to Steve Tack, SVP of Product Management at Dynatrace, a key goal is to "help organizations adopt new technologies confidently." By leveraging Davis AI and other core platform capabilities, Dynatrace provides intelligent observability and automation from code to production to help teams build, run, and optimize modern cloud-native applications.

A central theme across the announcements is using AI to increase developer productivity and autonomy. As Tack notes, "You can't expect developers to worry about Kubernetes configuration. I want to remove things from the developers' care so they can focus on being productive." He points to Dell as an example, where Dynatrace has helped improve developer productivity significantly by eliminating mundane tasks.

Validating AI-Generated Code

During our interview, Tack highlighted an insightful study comparing the quality of code generated by AI models with code written by humans. As AI assistants like GitHub Copilot become more prevalent, software teams need confidence that auto-generated code meets the necessary standards.

Key findings on AI-generated code quality:

  • Code complexity was lower in AI-generated files compared to human-authored files in the same projects

  • AI-generated code had better style guideline adherence overall

  • Test coverage was lower for AI-generated code

  • No significant difference in security vulnerabilities was found

As Tack noted, this demonstrates both the promise and current limitations of AI code generation. While AI promises improved productivity, teams need robust observability to validate the resulting applications' quality, security, and efficiency.

Taming Generative AI Complexity

One of the most forward-looking announcements was Dynatrace AI Observability, providing end-to-end monitoring for generative AI workloads across the full stack — from infrastructure to models to orchestration. As cutting-edge as generative AI is, Tack warns it can also increase fragility. "Organizations need AI observability that covers every aspect of their generative AI solutions to overcome these challenges. Dynatrace is extending its observability and AI leadership to meet this need."

For development teams beginning to leverage generative AI models like GPT-3, this observability will provide guardrails by monitoring model performance, cost efficiency, and compliance. Ryan Berry of OneStream explains how Dynatrace AI Observability helps them confidently build ML applications — "to ensure our services supporting these critical workloads are reliable and perform well."
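To make the monitoring dimensions concrete, here is a minimal, illustrative sketch (not the Dynatrace API; all names are hypothetical) of how a team might track per-request token usage, cost, and latency for a generative AI service:

```python
# Illustrative only: per-request usage tracking for an LLM-backed service.
# PRICE_PER_1K_TOKENS is an assumed example rate, not a real price.
PRICE_PER_1K_TOKENS = 0.002


class LLMUsageTracker:
    """Accumulates token, cost, and latency metrics per model request."""

    def __init__(self):
        self.requests = []

    def record(self, prompt_tokens, completion_tokens, latency_ms):
        total = prompt_tokens + completion_tokens
        self.requests.append({
            "tokens": total,
            "cost": total / 1000 * PRICE_PER_1K_TOKENS,
            "latency_ms": latency_ms,
        })

    def total_cost(self):
        return sum(r["cost"] for r in self.requests)

    def avg_latency(self):
        return sum(r["latency_ms"] for r in self.requests) / len(self.requests)
```

A real AI observability platform would capture these signals automatically and correlate them with model quality and compliance data; the sketch only shows the kind of raw telemetry involved.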

Trusting Data for Analytics

The second major announcement aims to help both data teams and developers trust the accuracy of analytics by providing data observability of both Dynatrace-native data and external sources. As Kulvir Gahunia of TELUS states, "New Dynatrace data observability capabilities will help ensure the data from these custom sources is also high-quality fuel for our analytics and automation."

Monitoring key aspects like data freshness, volume, and lineage can detect issues proactively before they impact downstream analytics and decisions. As Bernd Greifeneder, Founder and CTO at Dynatrace explains, "A valuable analytics solution must detect issues in the data that fuels analytics and automation as early as possible."
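The freshness and volume checks described above can be sketched in a few lines. This is an illustrative example under assumed conventions (records carrying a UTC ingest timestamp, a rolling volume baseline), not Dynatrace's implementation:

```python
from datetime import datetime, timedelta, timezone


def check_freshness(records, max_age_minutes=15):
    """Flag a dataset as stale if its newest record is older than the threshold."""
    now = datetime.now(timezone.utc)
    newest = max(r["timestamp"] for r in records)
    return (now - newest) <= timedelta(minutes=max_age_minutes)


def check_volume(record_count, baseline, tolerance=0.5):
    """Flag a batch whose size deviates sharply from a rolling baseline count."""
    return abs(record_count - baseline) <= tolerance * baseline
```

Running checks like these at ingest time is what allows a pipeline to raise a data-quality alert before stale or truncated data reaches dashboards and automated decisions downstream.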

Taming Data Complexity

Another announcement targets the mushrooming volume and variety of monitoring and business data from hybrid cloud environments. The new OpenPipeline technology provides a single data ingestion pipeline with a much higher throughput to manage petabyte-scale volumes. 

Crucially, OpenPipeline retains full context as data streams in from sources like logs, metrics, and traces. This enables much richer analytics by understanding dependencies between events. Alex Hibbitt from PhotoBox Group explains how this will extend their use of Dynatrace: "It enables us to manage data from a broad spectrum of sources alongside real-time data collected natively in Dynatrace, all in one single platform, allowing us to make better-informed decisions."
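Retaining context as data streams in means, for example, that log lines can be grouped under the trace they belong to. A minimal sketch of that kind of correlation, assuming logs and spans share a trace ID (illustrative only, not OpenPipeline internals):

```python
def correlate(logs, spans):
    """Group log records under the trace span that carries the same trace_id."""
    by_trace = {s["trace_id"]: {"span": s, "logs": []} for s in spans}
    for record in logs:
        entry = by_trace.get(record.get("trace_id"))
        if entry:
            entry["logs"].append(record)
    return by_trace
```

Once logs, metrics, and traces stay linked this way, an analytics query can answer questions like "which service's spans surround these error logs" without a separate, lossy join step.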

By taming the complexity of hybrid cloud data, Dynatrace OpenPipeline also aims to ease regulated industries' security and compliance burden and reduce costs by avoiding duplicate copies of data. As Tack summarizes, "We’re enabling our customers to evaluate data streams five to ten times faster than legacy technologies."

The Bottom Line

These Observability 2.0 enhancements aim to abstract away complexity, increase developer productivity, and provide trusted analytics — ultimately helping Dynatrace customers innovate faster. Tack notes that "generative AI does increase the accessibility, usage, productivity, and efficiency" of developers. By providing robust observability of new technologies like generative AI and analytics over fast-changing hybrid cloud environments, Dynatrace hopes to accelerate cloud-native application development in the enterprise.
