
Making Waves: Dynatrace Perform 2024 Ushers in New Era of Observability

Observability, automation, and sustainability took center stage, with key announcements around reducing carbon, ensuring AI reliability, and maximizing engineering efficiency.



Dynatrace welcomed thousands of in-person and virtual attendees to its annual Perform conference in Las Vegas this week. The overarching theme was “Make Waves,” conveying both the tectonic shifts happening across industries and the opportunities for organizations to drive transformational impact.


True to the cutting-edge nature of the company, Dynatrace made several major announcements that will help enterprises tackle some of today’s most pressing challenges around cloud complexity, AI adoption, security threats, and sustainability commitments. Let’s dive into the key developments.


Reducing the IT Carbon Footprint

With climate change accelerating, reducing carbon emissions has become a business imperative. However, IT infrastructures are extremely complex, making it difficult for enterprises to quantify and optimize their footprints at scale.


Dynatrace Carbon Impact is purpose-built to address this challenge. It translates highly granular observability data, like compute utilization metrics, into accurate sustainability impacts per data center, cloud provider region, host cluster, and individual workload.


Teams can instantly identify “hot spots” representing the highest energy waste and emissions for focused efficiency gains. For example, Carbon Impact may reveal an overload of duplicate microservices, dragging down utilization rates across critical application resources.


It also suggests precise optimization actions based on cloud architectures and dependencies, like eliminating grossly underutilized instances. Moreover, its continuous monitoring tracks sustainability KPIs over time, so teams can verify the effect of measures like rightsizing initiatives or green coding enhancements.
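
To make the underlying arithmetic concrete, here is a minimal sketch of how utilization metrics might be translated into an emissions estimate. It is a hypothetical illustration, not Dynatrace’s actual model, and the power-draw and grid-intensity figures are placeholders.

```python
# Hypothetical sketch: turning compute utilization into a carbon estimate.
# This is NOT Dynatrace's model; power draw and grid-intensity figures are
# illustrative placeholders.

def estimate_co2_kg(avg_cpu_util: float,
                    host_max_power_w: float,
                    hours: float,
                    grid_intensity_kg_per_kwh: float,
                    pue: float = 1.5) -> float:
    """Estimate emissions for one host over a time window."""
    # Assume power scales roughly linearly with utilization (simplification).
    avg_power_w = avg_cpu_util * host_max_power_w
    # Apply the data center's power usage effectiveness (PUE) overhead.
    energy_kwh = avg_power_w * hours / 1000 * pue
    # Convert energy to CO2 using the region's grid carbon intensity.
    return energy_kwh * grid_intensity_kg_per_kwh

# Example: a host at 12% average utilization for a week in a 0.4 kg/kWh region.
print(estimate_co2_kg(0.12, 300, 24 * 7, 0.4))
```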


According to Dynatrace customer Lloyds Banking Group, which aims to cut its operational carbon emissions by 75% by 2030, these capabilities create “the visibility and impact across IT ecosystems needed to optimize infrastructure efficiency.”


As businesses pursue environmental goals amidst cloud scale and complexity, Carbon Impact makes observability the key enabler to reaching those targets.


Making Observability Work for AI

Artificial intelligence holds tremendous promise, but new observability challenges arise as the adoption of complex technologies like large language models and generative AI accelerates.


These modern AI workloads can behave unexpectedly, carry proprietary IP inside models that hampers visibility, and operate as black boxes in which failures are hard to trace. Their on-demand consumption models also make resource usage hard to predict and control.


Dynatrace AI Observability is purpose-built to overcome these hurdles. It instruments the entire AI stack, including infrastructure like GPU clusters, ML pipelines, model governance systems, and AI apps.


This full-stack observability, combined with explanatory models from Davis AI, delivers precise insights into the provenance and behavior of AI systems. Teams can pinpoint the root causes of model degradation and quantify model accuracy.


For large language models like GPT in particular, Dynatrace traces query patterns and token consumption to prevent overages, and as models iteratively learn from new data, it monitors for harmful drift. This governance ensures models operate reliably and cost-effectively at enterprise scale.
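
As a rough illustration of the token-level telemetry this kind of tracing depends on, the sketch below wraps a stand-in model call and records usage against a budget. It is a generic, hypothetical example, not Dynatrace’s instrumentation or any vendor’s SDK.

```python
# Hypothetical sketch of token-usage tracking around an LLM call.
# `call_model` is a stand-in for any LLM client, not a real API.
from dataclasses import dataclass, field

def call_model(prompt: str):
    """Stub model call; a real client would also return usage metadata."""
    completion = "stub answer"
    return completion, len(prompt.split()), len(completion.split())

@dataclass
class TokenBudget:
    limit: int                      # max tokens allowed in the window
    used: int = 0
    events: list = field(default_factory=list)

    def record(self, prompt_tokens: int, completion_tokens: int) -> None:
        self.used += prompt_tokens + completion_tokens
        self.events.append((prompt_tokens, completion_tokens))
        if self.used > self.limit:
            # A real system would raise an alert or throttle traffic here.
            print(f"ALERT: token budget exceeded ({self.used}/{self.limit})")

def traced_completion(budget: TokenBudget, prompt: str) -> str:
    completion, prompt_toks, completion_toks = call_model(prompt)
    budget.record(prompt_toks, completion_toks)
    return completion

budget = TokenBudget(limit=10)
for q in ["summarize the quarterly report", "draft a reply to the customer"]:
    traced_completion(budget, q)
```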


Observability is no longer optional in an environment demanding responsible and secure AI rollouts across industries. Dynatrace equips businesses to drive generative AI and ML innovation with confidence.


Driving Analytics and Automation at Scale

Modern cloud-native environments generate massive data streams that are difficult for enterprises to manage smoothly, let alone extract value from. Constrained bandwidth and storage compound the issue, while ad hoc observability pipelines and data quality defects create headaches for practitioners.


Dynatrace OpenPipeline elegantly solves these challenges. It offers a single, high-powered route to funnel all observability, security, and business telemetry pouring from dynamic cloud workloads into value-driving analytics and automation platforms like Dynatrace.


Leveraging patent-pending accelerated processing algorithms combined with instant query capabilities, OpenPipeline can evaluate staggering data volumes in flight, up to 5-10 times faster than alternatives, unlocking real-time analytics use cases that were previously unachievable. There is no need for clumsy sampling approximations.


It also enriches telemetry with full topology context for precise answers while allowing teams to seamlessly filter, route, and transform data on ingest based on specific analytics or compliance needs. OpenPipeline even helps reduce duplicate streams by up to 30% to minimize bandwidth demands and required data warehouse storage capacity.
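
To illustrate the kind of ingest-time processing described here, the sketch below filters, masks, and deduplicates a stream of log records before they are routed onward. It is a generic, hypothetical pipeline, not OpenPipeline’s actual configuration syntax.

```python
# Hypothetical sketch of ingest-time filtering, transformation, and dedup.
# Field names and rules are illustrative, not OpenPipeline configuration.
import hashlib
import re
from typing import Iterable, Iterator

def process(records: Iterable[dict]) -> Iterator[dict]:
    seen = set()
    for rec in records:
        # Filter: drop debug noise before it consumes bandwidth and storage.
        if rec.get("level") == "DEBUG":
            continue
        # Transform: mask email addresses on ingest for compliance.
        rec["message"] = re.sub(r"\S+@\S+", "<redacted>", rec["message"])
        # Deduplicate: skip records already seen in this window.
        digest = hashlib.sha256(
            f'{rec["source"]}|{rec["message"]}'.encode()).hexdigest()
        if digest in seen:
            continue
        seen.add(digest)
        yield rec

stream = [
    {"level": "INFO", "source": "svc-a", "message": "user alice@example.com logged in"},
    {"level": "DEBUG", "source": "svc-a", "message": "cache miss"},
    {"level": "INFO", "source": "svc-a", "message": "user alice@example.com logged in"},
]
for rec in process(stream):
    print(rec)
```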


For developer, SRE, and data engineering teams struggling to build custom pipelines that handle massive, myriad data sources across today's heterogeneous enterprise stacks, OpenPipeline brings simplicity and performance, allowing more focus on extracting insights.


Ensuring Analytics and Automation Quality

Making decisions or triggering critical workflows based on bad data can spell disaster for organizations. However, maintaining flawless data quality gets exponentially harder as cloud scale and complexity mushroom.


Luckily for Dynatrace platform users, Data Observability helps eliminate these worries. It leverages Davis AI and other Dynatrace modules to automatically track key telemetry health metrics on ingest, including freshness, volume patterns, distribution outliers, and schema changes.
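
As a rough sketch of what such health checks can look like, the snippet below evaluates freshness, volume, and schema drift for a batch of records. The thresholds and field names are hypothetical and do not reflect how Davis AI derives its baselines.

```python
# Hypothetical data-health checks: freshness, volume, and schema drift.
# Thresholds are illustrative; a real platform learns baselines automatically.
import time

def check_batch(records: list[dict],
                expected_schema: set[str],
                expected_volume: int,
                max_lag_s: float = 300.0) -> list[str]:
    issues = []
    # Freshness: how stale is the newest record?
    newest = max(r["timestamp"] for r in records)
    if time.time() - newest > max_lag_s:
        issues.append("stale data: newest record older than freshness SLO")
    # Volume: flag large deviation from the expected batch size.
    if abs(len(records) - expected_volume) > 0.5 * expected_volume:
        issues.append(f"volume anomaly: {len(records)} vs ~{expected_volume}")
    # Schema: fields appearing or disappearing relative to the baseline.
    fields = set().union(*(r.keys() for r in records))
    if fields != expected_schema:
        issues.append(f"schema change: {fields ^ expected_schema}")
    return issues

batch = [
    {"timestamp": time.time() - 30, "user": "a", "amount": 4.2},
    {"timestamp": time.time() - 10, "user": "b", "amount": 7.0},
]
print(check_batch(batch, expected_schema={"timestamp", "user", "amount"},
                  expected_volume=2))
```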


Any anomalies threatening downstream analytics and automation fidelity trigger alerts for investigation, made easy by lineage tracking that pinpoints root sources even across interconnected data pipelines. Teams save countless hours because they no longer need to manually piece together where data defects originated.


But beyond reactive governance, Dynatrace Data Observability also proactively optimizes analytics by continually assessing the relevance and utilization of data feeds. Teams can confidently retire unused streams wasting resources or identify new sources to incorporate for better insights and models.


For developers building custom data integrations and architects managing business-critical analytics, worry-free data means more efficient delivery of value and innovation for the business. Data Observability grants the peace of mind that historical and real-time data fueling crucial automation are fully trustworthy.


The Path to Software Perfection

Across the board, Dynatrace Perform 2024 indicated how AI and automation will reshape performance engineering. Founder and CTO Bernd Greifeneder summarized it perfectly: “We built Dynatrace to help customers automate because that is how you get to software perfection. These advances give teams the answers and governance to prevent problems automatically versus manual fixes.”


Dynatrace Perform attendees are excited about observability’s next paradigm shift. 


