Making Deploys Safe Shouldn't Be Hard

March 31, 2026

Eight years ago, we shipped Continuous Verification (CV) to solve one of the most miserable parts of a great engineer’s job: babysitting deployments.

The idea was simple but powerful. At 3:00 AM, your best engineers shouldn't be staring at dashboards waiting to see if a release went sideways. CV was designed to think like those engineers, watching your APM metrics, scanning your logs, and making the call for you. Roll forward or roll back, automatically, based on what the data actually said.

It worked. Customers loved it. Hundreds of teams stopped losing sleep over deployments.

But somewhere along the way, we noticed a new problem creeping in: setting up CV had become its own burden.

The Configuration Problem

To get value from Continuous Verification, you had to know what to look for. Which metrics matter for this service? Which log patterns indicate trouble? Which thresholds separate a blip from a real incident?

Teams adopting Argo Rollouts tell us the same thing: wiring up automatic verification with its analysis templates runs into exactly these challenges.
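To make that configuration burden concrete, here is what a typical Argo Rollouts AnalysisTemplate looks like. The structure is standard Argo Rollouts; the Prometheus address, query, and threshold are illustrative — and choosing them correctly is precisely the homework the engineer has to do per service:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: error-rate-check
spec:
  args:
    - name: service-name
  metrics:
    - name: error-rate
      interval: 1m
      failureLimit: 3
      # You must already know the right threshold for this service.
      successCondition: result[0] < 0.05
      provider:
        prometheus:
          address: http://prometheus.monitoring:9090
          # And you must already know the right query.
          query: |
            sum(rate(http_requests_total{service="{{args.service-name}}",code=~"5.."}[5m]))
            / sum(rate(http_requests_total{service="{{args.service-name}}"}[5m]))
```

Every field above — query, success condition, failure limit, interval — is a decision someone has to make before verification produces any value.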

For teams with deep observability expertise, this was fine. For everyone else—and honestly, for experienced teams onboarding new services—it added friction that shouldn't exist. We’d solved the hardest part of deployments, but we’d left engineers with a new "homework assignment" just to get started.

That’s what AI Verification & Rollback is designed to fix.

What’s New: AI That Knows What to Look For

AI Verification & Rollback builds directly on the CV foundation you already trust, but adds a layer of intelligence before the analysis even begins. Instead of requiring you to define your metrics and log queries upfront, the system queries your observability provider—via MCP server—at the moment of deployment to determine what actually matters for the service you just deployed.
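As a rough illustration of that runtime selection step, the sketch below shows the shape of the idea: given a catalog of metrics reported by an observability provider, pick out the signals worth watching for the service just deployed. The catalog, heuristic, and metric names are hypothetical stand-ins, not Harness's actual logic or a real MCP call.

```python
# Illustrative sketch only: selecting relevant signals for a just-deployed
# service from an observability catalog. In the real product, an MCP server
# queries the provider at deploy time; here a dict stands in for that result.

def select_signals(service: str, catalog: dict[str, list[str]]) -> list[str]:
    """Return the metrics worth watching for `service`, favoring
    golden-signal-style names (latency, errors, traffic)."""
    golden = ("latency", "error", "saturation", "rps", "throughput")
    candidates = catalog.get(service, [])
    return [m for m in candidates if any(g in m for g in golden)]

# Hypothetical catalog, as a provider query might report it at deploy time.
catalog = {
    "checkout": ["checkout.latency.p99", "checkout.error.rate",
                 "checkout.cache.hits", "checkout.rps"],
}

print(select_signals("checkout", catalog))
# -> ['checkout.latency.p99', 'checkout.error.rate', 'checkout.rps']
```

The point of the sketch is the inversion it shows: the list of signals is derived from live observability data at deploy time, rather than hand-written into the pipeline beforehand.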

What that means in practice:

  • No pre-configuration required: You don’t need to define metrics or log patterns before adding verification to a pipeline. The AI identifies what’s relevant based on your observability data at runtime.
  • Step-by-step transparency: Rather than a black-box pass/fail, you see the full reasoning—which metrics were collected, how logs were aggregated, and why the AI reached its conclusion. Every step is visible.
  • Plain-language summaries: When a verification fails, you don't just get a red indicator. You get an analysis that explains exactly what went wrong and which data led to that determination—the same explanation a senior engineer would walk you through.
  • Integrated into your existing pipelines: Adding AI V&R is as simple as dropping the step into your pipeline. It configures itself for the application being deployed.

At our user conference six months ago, we showed this running live—triggering a real deployment, watching the MCP server query Dynatrace for relevant signals, and walking through a live failure analysis that caught a bad release within minutes. The response was immediate. Engineers got it instantly, because it matched how they already think about post-deploy monitoring.

What’s Changed Since the Preview

We’ve spent the past six months hardening what we showed you. A few highlights:

  • Broader observability support: The preview demoed Dynatrace integration. We've since expanded MCP server support to cover additional observability and logging providers, so more teams can take advantage without waiting for their toolchain to be supported.
  • More pipeline step coverage: The initial build focused on post-deployment verification. We've extended the capability to support additional pipeline steps, giving you AI-driven analysis at more points in your delivery workflow.
  • Production hardening: What you see today is meaningfully more robust than what ran on stage in the fall: improved reliability, better handling of edge cases, and refinements to the analysis and summary output based on real usage.

Where We Are Today

We're not declaring CV legacy today. AI Verification & Rollback is not yet a full replacement for traditional Continuous Verification across all use cases and customer configurations. CV remains the right choice for many teams, and we're committed to supporting it.

Bottom line: AI V&R is ready for many teams to use. It's available now, and for teams setting up verification for the first time—or looking to reduce the operational overhead of maintaining verification configs—it's the faster, smarter path forward.

If you've been putting off Continuous Verification because of the configuration overhead, this is the version you were waiting for.

Ready to stop babysitting your releases? Drop the AI V&R step into your next pipeline and see what it finds.

How is your team currently handling the "3:00 AM dashboard stare"—and how much time would you save if the pipeline just told you why it rolled back?

Eric Minick

Eric Minick is an internationally recognized expert in software delivery with experience in Continuous Delivery, DevOps, and Agile practices, working as a developer, marketer, and product manager. Eric is the co-author of “AI Native Software Delivery” (O’Reilly) and is cited or acknowledged in the books “Continuous Integration,” “Agile Conversations,” and “Team Topologies.” Today, Eric works on the Harness product management team to bring its solutions to market. Eric joined Harness from CodeLogic, where he was Head of Product.
