New data reveals 73% of enterprises won’t ship an AI agent without monitoring and alerting, yet 63.4% cite lack of monitoring and observability as a top barrier to wider AI deployment.
Monte Carlo today announced new Agent Observability capabilities that give AI and data teams unified visibility across the full lifecycle of AI agents.
Enterprises racing to deploy AI agents are discovering they lack visibility into how those agents actually operate in production environments. That gap is eroding trust: 53% of enterprises expect to significantly rebuild or redesign AI agent systems they have already deployed, according to Monte Carlo’s new survey of AI engineering leaders and practitioners.
The stakes are just as high going in, as enterprises cite secure data handling (68%), clear performance and latency expectations (62.7%) and monitoring with alerting for failures (72.7%) as top requirements before an agent goes live. Yet most lack the tools to meet them.
To address this gap, Monte Carlo’s Agent Observability is now the only solution on the market to provide unified visibility across four critical pillars that determine whether AI agents can operate reliably in production: context, performance, behavior and outputs. By monitoring these interconnected elements within a single platform, AI and data teams can understand not only what an agent produces, but also why it produced it and whether the underlying system is operating as intended.
Without this end-to-end observability across the entire data and agent stack, teams struggle to detect hallucinations, diagnose performance issues, validate workflow execution or identify the root cause of failures. As a result, many promising AI initiatives stall before reaching production, limiting the ability of enterprises to realize meaningful outcomes from AI.
“AI agents are moving into production faster than most companies are prepared for,” said Barr Moses, co-founder and CEO of Monte Carlo. “The future isn’t coming — it’s already here. If you’re deploying agents without a production-grade observability system that monitors context, performance, behavior and outputs, you’re flying blind. The companies that build trustworthy AI systems will move ahead quickly, and everyone else will fall further behind.”
Customer Spotlight
Axios is using Monte Carlo Agent Observability to ensure accuracy and efficiency in its AI-powered content tagging initiatives. The company uses OpenAI to automatically tag articles so that advertising is relevant and stories reach the right audiences. Axios initially built a manual validation process using a second OpenAI call, but managing costs and gaining visibility into telemetry and logs was challenging. Monte Carlo gives Axios the observability needed to expand across 12 additional LLM applications.
With Monte Carlo’s newest capabilities, enterprises can evaluate agents before deployment, monitor performance and costs in production, validate complex agent workflows and continuously assess output quality — enabling organizations to deploy AI agents with greater confidence and control.
New capabilities include:
Context: Validating the Data and Signals Agents Rely On
AI agents are only as reliable as the data and context they retrieve. Monte Carlo now enables teams to evaluate AI-generated fields directly against source data stored in their data warehouse, helping organizations verify that AI outputs accurately reflect the underlying data.
Teams can configure custom prompt-based evaluations on warehouse tables, automatically detecting errors and hallucinations before they impact downstream systems.
Expanded support for Google BigQuery and AWS Athena enables organizations building agents on GCP and AWS to implement agent observability directly within their existing cloud data environments.
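As an illustration of the idea behind prompt-based evaluations on warehouse tables, the sketch below checks an AI-generated field against its source column. All names here are hypothetical stand-ins, and `call_llm_judge` simulates an LLM judgment with a simple word-overlap heuristic; this is not Monte Carlo’s API.

```python
# Illustrative sketch: validate AI-generated fields against source data.
# `call_llm_judge` is a hypothetical stand-in for a real LLM call;
# here it flags summaries that introduce words absent from the source.

EVAL_PROMPT = (
    "Does the generated summary '{generated}' faithfully reflect the "
    "source text '{source}'? Answer PASS or FAIL."
)

def call_llm_judge(prompt: str) -> str:
    # Stand-in for an LLM call. Extract the two quoted fields and
    # pass only if every generated word appears in the source.
    generated = prompt.split("'")[1]
    source = prompt.split("'")[3]
    faithful = set(generated.lower().split()) <= set(source.lower().split())
    return "PASS" if faithful else "FAIL"

def evaluate_rows(rows):
    """Run the prompt-based check on each (source, generated) row."""
    results = []
    for source, generated in rows:
        prompt = EVAL_PROMPT.format(generated=generated, source=source)
        results.append(call_llm_judge(prompt) == "PASS")
    return results

rows = [
    ("Q3 revenue grew 12 percent", "revenue grew 12 percent"),     # faithful
    ("Q3 revenue grew 12 percent", "revenue fell sharply in Q3"),  # hallucinated
]
print(evaluate_rows(rows))  # [True, False]
```

In practice the judgment step would be an actual model call and the rows would come from a warehouse query, but the shape of the check — source column in, pass/fail verdict out — is the same.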
Performance: Monitoring Cost, Latency and Operational Efficiency
New Agent Metric Monitors track signals such as latency, token usage, duration and error rates, helping teams detect performance regressions and operational anomalies early. Trace-level monitoring surfaces cost and telemetry across entire agent workflows rather than individual steps.
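The kind of regression detection described above can be sketched as a simple statistical check: compare each trace’s latency to a trailing baseline and flag large deviations. This is an assumption-laden illustration of the concept, not Monte Carlo’s monitoring logic.

```python
# Illustrative sketch (not Monte Carlo's implementation): flag latency
# regressions by comparing each trace to a trailing window of prior traces.

from statistics import mean, stdev

def latency_anomalies(latencies_ms, window=5, threshold=3.0):
    """Return indices of traces whose latency exceeds the trailing
    baseline by more than `threshold` standard deviations."""
    flagged = []
    for i in range(window, len(latencies_ms)):
        baseline = latencies_ms[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (latencies_ms[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Steady ~200 ms traces, then a regression at index 8.
traces = [195, 201, 198, 204, 199, 202, 197, 200, 950, 203]
print(latency_anomalies(traces))  # [8]
```

The same pattern applies to token usage, duration, and error rates: track a baseline per metric, alert on outliers.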
Behavior: Ensuring Agents Follow Intended Workflows
As agent workflows grow more complex, verifying that agents execute tasks as intended becomes increasingly difficult. Nearly one-third of organizations say they could not disable or roll back a harmful AI agent within minutes, and 14% say they could not do it at all, Monte Carlo’s survey found.
Monte Carlo introduces Agent Trajectory Monitors, which allow teams to validate the order, frequency and relationships between steps within agent workflows. These monitors ensure required tools are used, expected steps occur and unintended loops or skipped tasks are detected early.
This gives teams confidence that agents are operating safely within defined workflows and governance policies.
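A minimal sketch of the trajectory checks described above, assuming an agent run is recorded as an ordered list of step names. The rule shapes (required steps, ordering constraints, loop limits) mirror the release’s description, but the function and its signature are hypothetical, not Monte Carlo’s API.

```python
# Illustrative sketch (hypothetical API): validate that an agent trajectory
# contains required steps, respects ordering constraints, and avoids
# runaway loops.

def validate_trajectory(steps, required, order, max_repeats=3):
    """Check a list of step names against simple trajectory rules.
    `order` is a list of (earlier, later) constraints."""
    violations = []
    for tool in required:
        if tool not in steps:
            violations.append(f"missing required step: {tool}")
    for earlier, later in order:
        if earlier in steps and later in steps:
            if steps.index(earlier) > steps.index(later):
                violations.append(f"{earlier} ran after {later}")
    for step in set(steps):
        if steps.count(step) > max_repeats:
            violations.append(f"possible loop: {step} ran {steps.count(step)} times")
    return violations

trace = ["plan", "search", "search", "summarize"]
print(validate_trajectory(
    trace,
    required=["plan", "summarize"],
    order=[("plan", "summarize")],
))  # [] — no violations
```

A trace that skipped "plan" or repeated "search" dozens of times would instead return a list of violations for alerting.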
Outputs: Evaluating Agent Quality Before and After Deployment
To ensure consistent output quality, Monte Carlo now supports pre-production agent evaluations that test agents against a “golden dataset” of prompts and expected outputs before deployment.
Integrated into CI/CD workflows, these evaluations help teams detect regressions caused by prompt changes, model updates or code modifications.
In production, Agent Evaluation Monitors continuously assess output quality using LLM-based or rule-based checks, alerting teams when quality thresholds are not met.
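A golden-dataset gate of the kind described above can be sketched as a pass-rate check that fails a CI build below a threshold. The dataset, agent stub, and threshold here are all hypothetical placeholders.

```python
# Illustrative sketch (hypothetical names, not Monte Carlo's API): a CI-style
# regression gate that runs an agent against a "golden dataset" of prompts
# and expected outputs.

GOLDEN_SET = [
    {"prompt": "2 + 2", "expected": "4"},
    {"prompt": "capital of France", "expected": "Paris"},
]

def agent_under_test(prompt: str) -> str:
    # Stand-in for the real agent invocation.
    return {"2 + 2": "4", "capital of France": "Paris"}.get(prompt, "")

def golden_pass_rate(agent, golden_set):
    """Fraction of golden cases where the agent's output matches exactly."""
    passed = sum(1 for case in golden_set
                 if agent(case["prompt"]).strip() == case["expected"])
    return passed / len(golden_set)

rate = golden_pass_rate(agent_under_test, GOLDEN_SET)
print(f"pass rate: {rate:.0%}")  # pass rate: 100%
assert rate >= 0.95, "golden-set regression: failing the build"
```

Run on every prompt change, model update, or code modification, a gate like this catches regressions before they ship; real evaluations would typically use fuzzier matching (LLM-based or rule-based scoring) rather than exact string equality.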
Simplifying Deployment for Enterprise Teams
Monte Carlo also introduced a Monte Carlo-hosted OpenTelemetry deployment option for Agent Observability in AWS. This allows organizations to onboard agent observability without deploying and managing their own OpenTelemetry collectors, reducing infrastructure complexity while enabling telemetry data to remain within the customer’s AWS environment.
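For an agent already instrumented with OpenTelemetry, pointing at a hosted collector is typically a configuration change using the standard OTLP environment variables defined by the OpenTelemetry specification. The endpoint, header, and service name below are hypothetical placeholders, not real Monte Carlo values.

```python
# Illustrative sketch: direct an OpenTelemetry-instrumented agent at a hosted
# collector via the standard OTLP env vars (defined by the OpenTelemetry
# spec). Endpoint and key are hypothetical placeholders.

import os

def configure_otlp_export(endpoint: str, api_key: str):
    """Set the standard OTLP env vars an OpenTelemetry SDK reads at startup."""
    os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = endpoint
    os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = f"x-api-key={api_key}"
    os.environ["OTEL_SERVICE_NAME"] = "my-agent"  # hypothetical service name

configure_otlp_export("https://collector.example.com:4318", "REDACTED")
print(os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"])
```

Because the SDK picks these up at startup, no collector needs to run inside the application’s own infrastructure.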
To learn more or request a demo, visit https://www.montecarlodata.com/platform/agent-observability/.
About Monte Carlo
Monte Carlo created the data + AI observability category to help enterprises drive mission-critical business initiatives with trusted data + AI. NASDAQ, Honeywell, Roche and hundreds of other data teams rely on Monte Carlo to detect and resolve data + AI issues at scale. Named a “New Relic for data” by Forbes, Monte Carlo is rated as the #1 data + AI observability solution by G2 Crowd, Gartner Peer Reviews, GigaOm, ISG, and others.
View source version on businesswire.com: https://www.businesswire.com/news/home/20260312700545/en/
Contacts
Media Contact
Diana Puckett
prformontecarlo@bospar.com
