As agentic AI moves from pilot projects to live production environments, organizations are increasingly focusing on how to run these systems safely at scale, rather than debating their adoption. A new report by Dynatrace highlights how large enterprises are integrating agentic AI into core operational functions, including IT operations, cybersecurity, data processing, and customer support.
According to the report, 70 percent of respondents are already using AI agents in IT operations and system monitoring, with nearly half applying them across both internal and external processes. Investment in agentic AI is also rising, with many organizations spending between $2 million and $5 million annually, particularly for applications linked to reliability and operational performance.
Despite growing adoption, implementation remains uneven. Half of the surveyed organizations reported agentic AI projects running in production for limited use cases, while 44 percent described broader deployments within select departments. Most teams manage between two and ten active projects, with IT operations, cybersecurity, and data processing leading in production readiness.
Security, data privacy, and technical performance are key criteria for scaling projects. Observability and control mechanisms are central to ensuring safe rollout, as teams face challenges in monitoring autonomous agent behavior and tracing downstream effects. The report notes that as agentic AI systems interact across multiple tools, models, and datasets, real-time insight into decisions becomes critical to avoid unexpected outcomes and link technical signals to business results.
Observability tools are widely adopted, with nearly 70 percent of respondents using them during implementation and over half during development and operations. These tools monitor training data quality, detect anomalies, validate outputs, and ensure compliance. Despite rising autonomy, human oversight remains standard, with over two-thirds of AI decisions reviewed by a person. Validation methods include data checks, output review, and monitoring for drift, while fully autonomous agents without supervision remain rare.
When assessing success, organizations prioritize reliability and resilience, with 60 percent citing technical performance as their top metric. Operational efficiency, developer productivity, and customer satisfaction also rank highly. Teams continue to rely on a mix of automated and manual monitoring methods, including logs, metrics, traces, and manual review of agent-to-agent communications.
Looking forward, the report emphasizes the need for governance, standardized metrics, and consistent guardrails to guide autonomous actions. Observability serves as the linking mechanism across the AI lifecycle, ensuring systems perform reliably in real-world conditions. “Organizations are not slowing adoption because they question the value of AI, but because scaling autonomous systems safely requires confidence that those systems will behave as intended,” said Alois Reitbauer, Chief Technology Strategist at Dynatrace.