November 12, 2025

The AI Visibility Problem: When Speed Outruns Security

Harness surveyed 500 security practitioners and decision makers in the United States, the UK, Germany, and France, all responsible for securing AI-native applications. The resulting report, The State of AI-Native Application Security 2025, dives deep into AI visibility and the changing landscape of security vulnerabilities.

If 2024 was the year AI started quietly showing up in our workflows, 2025 was the year it kicked the door down.

AI-generated code and AI-powered workflows have become part of nearly every software team’s daily rhythm. Developers are moving faster than ever, automation is woven into every step, and new assistants seem to appear in the pipeline every week.

I’ve spent most of my career on both sides of the equation, first in security and then leading engineering teams, and I’ve seen plenty of “next big things” come and go. But this shift feels different. Developers are generating twice as much code in half the time. It’s a massive leap forward, and a wake-up call for how we think about security.

The Question Everyone’s Asking

The question I hear most often is, “Has AI made coding less secure?”

Honestly, not really. The code itself isn’t necessarily worse — in fact, a lot of it’s surprisingly good. The real issue isn’t the quality of the code. It’s the sheer volume of it. More code means more surface area: more endpoints, more integrations, more places for something to go wrong.

In our latest report, The State of AI-Native Application Security 2025, 82% of security practitioners said AI-native applications are the new frontier for cybercriminals, and 63% believe these apps are more vulnerable than traditional ones.

It’s like a farmer suddenly planting five times more crops. The soil hasn’t changed, but now there’s five times more to water, tend, and protect from bugs. The same applies to software. Five times more code doesn’t just mean five times more innovation — it means five times more vulnerabilities to manage.

And the tools we’ve relied on for years weren’t built for this. Traditional security systems were designed for static codebases that changed every few months, not adaptive, learning models that evolve daily. They simply can’t keep pace.

And this is where visibility collapses.

The AI Visibility Problem

In our research, 63% of security practitioners said they have no visibility into where large language models are being used across their organizations. That’s the real crisis — not bad actors or broken tools, but the lack of understanding about what’s actually running and where AI is operating.

When a developer spins up a new AI assistant on their laptop or an analyst scripts a quick workflow in an unapproved tool, it’s not because they want to create risk. It’s because they want to move faster. The intent is good, but the oversight just isn’t there yet.

The problem is that our governance and visibility models haven’t caught up. Traditional security tools were built for systems we could fully map and predict. You can’t monitor a generative model the same way you monitor a server — it behaves differently, evolves differently, and requires a different kind of visibility.

Security Has to Move Closer to Engineering

Security has to live where engineering lives — inside the pipeline, not outside it.

That’s why we’re focused on everything after code: using AI to continuously test, validate, and secure applications after the code is written. Because asking humans to manually keep up with AI speed is a losing game.

If security stays at a checkpoint after development, we’ll always be behind. The future is continuous — continuous delivery, continuous validation, continuous visibility.

Developers Don’t Need to Slow Down — They Need Guardrails

In the same report, 74% of security leaders said developers view security as a barrier to innovation. I get it — security has a reputation for saying “no.” But the future of software delivery depends on us saying “yes, and safely.”

Developers shouldn’t have to slow down. They need guardrails that let them move quickly without losing control. That means automation that quietly scans for secrets, flags risky dependencies, and tests AI-generated code in real time — all without interrupting the creative flow.
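
To make that concrete, here is a minimal sketch of the kind of pre-merge guardrail that can run as a pipeline stage: it diffs a branch against main and flags secret-like strings and deny-listed packages in the newly added lines. The regex patterns and package names are illustrative assumptions, not any real scanner’s rule set; production tools ship far richer detection and live dependency advisory feeds.

```python
#!/usr/bin/env python3
"""Minimal guardrail sketch: scan a branch diff for secret-like strings
and deny-listed dependencies before the change reaches main."""

import re
import subprocess
import sys

# Illustrative patterns only; real secret scanners ship far larger rule sets.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "hardcoded API key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"
    ),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

# Hypothetical deny-list; in practice this would come from an advisory feed.
RISKY_PACKAGES = {"leftpad-typo", "requests3"}


def added_lines(base: str = "origin/main") -> list[str]:
    """Return only the lines added relative to the base branch."""
    diff = subprocess.run(
        ["git", "diff", base, "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [
        line[1:]
        for line in diff.splitlines()
        if line.startswith("+") and not line.startswith("+++")
    ]


def main() -> int:
    findings = []
    for line in added_lines():
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append(f"possible {name}: {line.strip()[:60]}")
        for pkg in RISKY_PACKAGES:
            if pkg in line:
                findings.append(f"deny-listed dependency referenced: {pkg}")
    for finding in findings:
        print(f"[guardrail] {finding}")
    return 1 if findings else 0  # non-zero exit fails the pipeline stage


if __name__ == "__main__":
    sys.exit(main())
```

Wired into CI as a required check, a stage like this fails fast on obvious leaks and stays invisible when the diff is clean, which is the point: the guardrail only interrupts when something is actually wrong.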

AI isn’t replacing developers; it’s amplifying them. The teams that learn to work with it effectively will outpace everyone else.

Seeing What Matters

We’re generating more innovation than ever before, but if we can’t see where AI is working or what it’s touching, we’re flying blind.

Visibility is the foundation:

  • Map where AI exists across your workflows, models, and pipelines (a starting-point sketch follows this list).
  • Automate validation so issues are caught continuously, not just at release time.
  • Embed governance early, not as an afterthought.
  • Align security and development around shared goals and shared ownership.
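
As a starting point for the first item above, here is a small sketch that walks a source tree and reports which files import well-known LLM client packages. The package list is an assumption you would extend for your own stack, and a real inventory would also need to cover configuration files, API gateways, and services written in other languages.

```python
#!/usr/bin/env python3
"""Inventory sketch: walk a source tree and record where LLM client
packages are imported, as a first pass at mapping AI usage."""

import re
import sys
from pathlib import Path

# Well-known LLM SDKs; extend with whatever your organization actually uses.
LLM_PACKAGES = ("openai", "anthropic", "langchain", "transformers")

IMPORT_RE = re.compile(
    rf"^\s*(?:import|from)\s+({'|'.join(LLM_PACKAGES)})\b",
    re.MULTILINE,
)


def scan(root: Path) -> dict[str, list[str]]:
    """Map each package name to the files that import it."""
    hits: dict[str, list[str]] = {}
    for path in root.rglob("*.py"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for match in IMPORT_RE.finditer(text):
            hits.setdefault(match.group(1), []).append(str(path))
    return hits


if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for pkg, files in sorted(scan(root).items()):
        print(f"{pkg}: {len(files)} file(s)")
        for f in files:
            print(f"  {f}")
```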

AI isn’t creating chaos — it’s revealing the chaos that was already there. And that’s an opportunity. Once you can see it, you can fix it.

You can read the full State of AI-Native Application Security 2025 report here.
