Chapters
Try It For Free
March 4, 2026

How to Build AI-Native Security Resilience (And Finally Get Developers And Security On The Same Team) | Harness Blog

Developers and security professionals have struggled to get on the same page for what seems like forever and AI is only making that divide larger, according to results from our State of AI-Native Application Security 2025 research report.

AI applications are spreading through organizations at a fast rate, in many cases becoming the new “shadow IT” - 62% of our survey takers said they can’t identify where the LLMs are in their organizations, with 75% saying they’re potentially creating much greater risks than ever before. All told, 61% of those surveyed said two-third of their organizations' newly built applications are being designed with AI components.

But are those apps secure? Likely not: 62% of respondents believe AI apps are more vulnerable to cybercriminals than traditional IT applications and over two-thirds of survey takers report already experiencing an attack on an AI application.

And, unfortunately, dev and sec teams aren’t facing this problem together, at least according to our findings. Survey takers said:

  • Developers lack time and training: 62% say devs are too busy to implement comprehensive AI-native security, and the same percentage say they lack the necessary expertise.
  • Speed and security are mismatched: 75% believe AI applications evolve faster than security can keep up.
  • Collaboration breakdowns are widening the gap: Only 34% of developers notify security before starting AI projects, and just 53% before going live.
  • Perception remains a barrier: 74% of security leaders say developers view security as a blocker to AI innovation.

But, organizations can unlock the value of their AI investments *and* make them more secure at the same time, while, (bonus!), bringing security pros and developers together - if they commit to building AI-native security resilience. This is a mindset and culture shift, perhaps of monumental proportions, but we promise the payoff is worth it. Here’s how to get started:

Lay the groundwork with shared governance

Manual reviews are tedious, prone to human error, and can double or triple the wait times for approval. To break that cycle, opt for Policy as Code rather than manual reviews, building something that engineering and security agree upon beforehand. That could look like security defining policies that devs embed into CI/CD pipelines and violations that trigger automated feedback rather than blocking progress.

This is a great place to start - or stress - a true “shift left” mentality.

Make AI components discoverable

AI components can’t be secure if they’re not seen. Teams need to monitor and log all AI components, of course, but the organization needs to make it as easy as possible to use safe and sanctioned AI tools. Shadow AI only gets worse when the “official tools” are difficult to use.

Detect anomalies by tracking AI implementations in real-time

Normal rules won’t apply here, so instead teams need to look at model behavior (sudden spikes or abnormal token usage), security signals (prompt injection patterns or hidden tool calls), and operational (cost anomalies or context window size spikes). Also consider building real-time guardrails with policy automation that can throttle model calls or downgrade agent permissions.

Test dynamically against AI-specific threats

Up your testing game with specific threat catalogs including OWASP Top 10 for LLM Apps and MITRE ATLAS and don’t forget the TEVV concepts. A dedicated security test harness can be particularly helpful here, as can adversarial “prompt fuzzing.”

Don’t forget to protect what’s already in production

In the immortal words of Fox Mulder “trust no one,” or in this case, don’t trust *any* of the AI inputs and outputs. Enforce data classification and context boundaries, secure the model interaction layer, and make sure to monitor the behavior and not just the infrastructure.

FAQs on AI-Native Security Resilience

What does "AI-native security resilience" actually mean in practice?

AI-native security resilience means security isn't a gate at the end of the pipeline — it's woven into every stage of delivery. Harness uses contextual insights and agentic workflows to detect and mitigate risks from build to post-deployment, covering everything from application and API discovery to AI-powered threat prevention.

How does Harness help security and developer teams work from the same playbook?

The merger of Harness and Traceable enables software teams to seamlessly develop, deploy, and secure applications, ensuring security is embedded at every stage of the software lifecycle. By unifying DevOps and AppSec in a single platform, both teams operate with the same pipeline context — eliminating the handoff friction that traditionally breaks collaboration.

How does Harness reduce the burden on developers when it comes to fixing vulnerabilities?

Harness AI streamlines the process of fixing vulnerabilities, enabling developers and security personnel to manage security backlogs, address critical issues promptly, and generate code suggestions and pull requests to remediate issues directly from the security testing orchestration (STO) module.

With shadow AI becoming a major enterprise risk, how does Harness help organizations stay in control?

Harness addresses AI visibility through the Software Delivery Knowledge Graph — a contextual layer that maps a company's security policies, compliance requirements, infrastructure, and development practices — so AI agents can enforce guardrails automatically, rather than relying on developers to remember them.

Adam Arellano

For over 15 years, I have elevated enterprise cloud, AI, and cybersecurity capabilities by leading strategic initiatives at the heart of achieving core business goals and missions. From modernizing Veritone’s technology stacks and helping PayPal Ventures companies excel to evolving the product security architecture at Binti, I bring a career worth of success in transforming technology cornerstones. More than just an information security executive, I am a steadfast advocate of building infectiously collaborative working cultures that readily promote DEI initiatives and professional growth. I am also fulfilled when helping burgeoning startups achieve their most imperative goals; I use my extensive technical background and proven business acumen to guide these organizations through potentially tumultuous funding and growth periods. Here is a small sample of what I offer as a strategic technology and business leader: 💡 Venture Capital & Startup Consulting 🛡️ Cybersecurity Architecture Expertise 📝 FedRAMP & DoD Guidance 📊 Fractional CISO services 🎤 Keynote Speaking

Similar Blogs

AI Test Automation
Harness AI