May 4, 2026

AI in Software Delivery: Engineering Excellence or Just Market Hype? | Harness Blog

AWS re:Invent 2025 made one thing very clear: enterprise interest in AI is no longer theoretical. The conversation has moved beyond curiosity. Teams are actively experimenting, leaders are looking for production-ready use cases, and engineering organizations are trying to figure out where AI can create real leverage across software delivery, security, platform engineering, and operations.

That part is real. But after five interviews at the event, I came away with a more important takeaway: AI is not removing the need for engineering discipline. It is increasing it. Many of the challenges organizations are now running into with AI are not really AI problems at all. They are governance problems, process problems, data problems, platform problems, and measurement problems. AI is just making them harder to ignore.

Pattern #1: AI Is Accelerating Output, but Also Exposing Weak Systems

A lot of the market conversation still centers on speed. Faster code generation, faster documentation, faster testing support, faster issue resolution, faster delivery. And there is truth in that. Across the interviews, there was broad agreement that AI is already creating meaningful value across the software development lifecycle, especially by helping teams move faster through repetitive work.

But speed by itself is not the breakthrough. What matters is whether the system around that speed is strong enough to absorb it.

Tim Knapp, who leads Slalom's product engineering capability in Chicago, put it bluntly: you cannot layer AI on top of broken processes and expect transformation. Many enterprises are still operating from a waterfall mindset dressed up in modern tooling, and that mismatch becomes more expensive the faster AI pushes output through the pipeline. As Tim described it, every team is feeling the AI imperative right now, but the people and the processes are still trying to figure out how to adapt to a technology landscape that just changed drastically.

If engineering organizations already struggle with inconsistent processes, weak standards, poor documentation, siloed ownership, or unclear governance, then AI does not solve those issues. It amplifies them. The question is no longer just, "Can AI help teams move faster?" The better question is, "Can our engineering system handle what AI is about to accelerate?"

Pattern #2: The Most Practical Value Is Removing Friction, Not Replacing Engineers

One of the more grounded themes I heard was that AI's immediate value is often less glamorous than the headlines suggest. It is not that developers disappear. It is that developers spend less time on drag.

Eric Baran, who manages Amazon's global financial services developer platform business, described this clearly. The teams he works with are finding the most impact not from AI-generated features, but from offloading all the surrounding work (test creation, pipeline configuration, infrastructure templating, deployment patterns) that was eating into developers' days. As Eric put it, many developers were only writing lines of code on actual features for an hour or two out of their day before AI entered the picture. The rest was operational weight. When AI reduces that weight, developers get meaningful time back to focus on what actually differentiates the product.

This is where some of the loudest AI narratives miss the mark. The most valuable use of AI in engineering may not be autonomous software creation. It may be freeing teams from the work that keeps real innovation from happening.

Pattern #3: More Generated Code Creates a Bigger Governance Problem

It is easy to celebrate increased output. It is harder to govern it. That tension came up again and again.

Ron Miller, editor of the Fast Forward newsletter, shared a story that captured this perfectly. He overheard two developers in a Miami coffee shop. One of them was bleary-eyed, telling his friend he had been up all night reading 10,000 lines of code. His buddy suggested using AI to review it. The response: no, I have to know my code. Ron's point was sharp. Even though AI is generating code faster, someone still has to understand what is moving into production, and that responsibility does not shrink just because the volume grows.

Eric Baran echoed this from the enterprise side. His financial services clients are seeing an explosion of code coming off AI engines, but the regulatory and governance requirements have not changed. Teams that already struggled to keep up with compliance before AI are now moving even faster into territory they cannot fully audit. As Eric described it, customers keep coming back to the table saying they need to get better at actually getting this code out, and making sure their audit and governance teams can confirm it was built to spec.

This is where many organizations will get stuck. They will assume the bottleneck is still code creation, when in reality the bottleneck is shifting toward validation, governance, and operational trust. The winners in the next phase of AI adoption will not just be the teams that can generate faster. They will be the teams that can govern faster.
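One way to "govern faster" is to make governance itself automated and auditable. As a minimal sketch of that idea, the snippet below gates generated code on a couple of mechanical policy checks before it can merge. The specific rules shown (a provenance header convention and a forbidden-call list) are hypothetical examples, not a standard; real compliance gates would plug into scanners, SBOM tooling, and audit trails.

```python
# Illustrative policy gate for AI-generated code. The provenance header
# convention and the forbidden-call list are made-up examples of the kind
# of mechanical, auditable checks a real pipeline could enforce.

FORBIDDEN_CALLS = ("eval(", "exec(", "os.system(")
REQUIRED_HEADER = "# Generated-by:"  # hypothetical provenance tag

def check_generated_code(source: str) -> list[str]:
    """Return a list of policy violations; an empty list means the gate passes."""
    violations = []
    first_line = source.lstrip().splitlines()[0] if source.strip() else ""
    if not first_line.startswith(REQUIRED_HEADER):
        violations.append("missing provenance header")
    for lineno, line in enumerate(source.splitlines(), start=1):
        for call in FORBIDDEN_CALLS:
            if call in line:
                violations.append(f"line {lineno}: forbidden call {call!r}")
    return violations

snippet = "# Generated-by: some-model\nresult = os.system('rm -rf /tmp/x')\n"
print(check_generated_code(snippet))
```

The point is not the rules themselves but the shape: checks that run on every change, produce a machine-readable record, and scale with the volume of generated code instead of relying on a human reading 10,000 lines overnight.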

Speed without trust is not transformation. It is just a faster risk.

Pattern #4: Platform Engineering Is Becoming Even More Strategic

If AI increases the pace of software creation, platform engineering becomes more important, not less. Someone still has to create the paved road: make the secure path the easy path, reduce cognitive load, standardize workflows without creating more friction, and design systems where governance is built in rather than layered on after the fact.

Hasif Calp, a technology leader with over 16 years at Cisco who holds both platform engineering and security leadership roles, used a phrase that stuck with me: frictionless security. His argument is that most security friction comes not from the controls themselves, but from how they are implemented. When organizations rely on slow approval chains and human gates that overwhelm the people in them, the result is not better security. It is a clicking exercise where nobody fully understands the implications of what they are approving. Hasif advocated instead for making it easy to do the right thing through platform engineering, so that good security and fast delivery are not in conflict.

The more power AI gives teams, the more important it becomes to have strong internal platforms, clear golden paths, embedded controls, and systems that guide good behavior by default. Not security by slowdown, not security by ticket queue, but security that fits naturally into how software is built and delivered. That is the right goal.

That model was valuable before AI. It becomes essential with AI.

Pattern #5: Context and Judgment Are Becoming Premium Skills

One of the more underrated themes from these interviews was that AI does not just raise the value of technical infrastructure. It also raises the value of human clarity.

Ron Miller made a compelling case for this. As a writer, he has watched the narrative around AI and communication skills closely, and he pushed back hard on the idea that writing becomes less important in an AI-driven world. His argument is exactly the opposite. If you are building an agent or designing a prompt that will drive real automation, the quality of how you articulate intent matters enormously. You have to understand a process deeply, and then you have to be able to communicate it in a way that models can act on. That is a writing skill, and it is becoming more important, not less.

Tim Knapp built on this from the engineering side with the concept of context engineering. His framing was vivid. Every time you make a call to an LLM or invoke an agent in your IDE, you might as well be pulling somebody brand new off the street who knows nothing about what you are trying to do. If organizations do not invest in structuring and maintaining layers of context alongside their codebases, the AI will not perform. Tim described this as an emerging discipline that teams are just beginning to take seriously, and one that could become a real differentiator.
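Tim's framing can be made concrete: instead of sending a bare request to a model, a team maintains layers of context (conventions, architecture notes, domain rules) and assembles them into every call. The sketch below is an illustrative assumption about how such a store might look; the layer names and prompt format are invented for the example, not an established API.

```python
# Sketch of "context engineering": maintain named layers of project context
# and assemble them into every model call, so the model is never the
# "stranger off the street" who knows nothing about the codebase.
# The layer names and section format here are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class ContextStore:
    layers: dict[str, str] = field(default_factory=dict)

    def set_layer(self, name: str, content: str) -> None:
        """Add or update one maintained layer of context."""
        self.layers[name] = content

    def build_prompt(self, task: str) -> str:
        """Prepend every context layer to the task before it reaches the model."""
        sections = [f"## {name}\n{content}" for name, content in self.layers.items()]
        sections.append(f"## Task\n{task}")
        return "\n\n".join(sections)

store = ContextStore()
store.set_layer("Conventions", "Services expose REST; errors follow RFC 7807.")
store.set_layer("Architecture", "Orders service talks to Billing via a queue.")

print(store.build_prompt("Add a refund endpoint to the Orders service."))
```

The discipline is less about the code and more about the maintenance: someone has to keep those layers accurate as the system evolves, which is exactly the emerging practice Tim describes.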

The future is not just model-driven. It is context-driven.

Pattern #6: The Old Enterprise Habits Are Getting More Expensive

AI may be new. Enterprise dysfunction is not. Another pattern that came through clearly is that many organizations are still carrying old habits that were expensive before AI and become even more expensive after it. Slow approval chains, waterfall thinking in modern clothes, measurement that creates theater instead of clarity, security models that rely too heavily on human gates, buying tools without changing operating rhythms, and mistaking experimentation for operational adoption.

Ron Miller captured the broader landscape well. Most of the CIOs and CTOs he talks to know they have to move toward AI, but many are still stuck in experimentation mode rather than production deployment. The knowing and the doing remain far apart.

Piyush Dewan, a director of software engineering at BridgeBio Medicines, offered a concrete example of how process rigidity undermines delivery. He described how over-engineered agile methodologies (the capacity planning rituals, the strict sprint predictability expectations) often cause teams to lose sight of what they actually need to deliver and how they should innovate. In his view, the emphasis on process compliance can become its own form of waste.

Tim Knapp reinforced this point by recommending that organizations remove agile project metrics from QBRs entirely. Story points and defect counts are useful internal signals for delivery teams, but they need heavy context to mean anything, and at the QBR level, they often create more theater than clarity. What matters at that altitude is whether timelines were met, how scope was managed, and what outcomes were actually delivered.

This is part of why so many leaders are now talking about centers of excellence, shared governance, and tighter collaboration between platform, security, and engineering leadership. AI does not just require new tools. It requires a more mature operating model around those tools.

Trying a model is easy. Running the business differently is the hard part.

The Bigger Takeaway: AI Is Testing the Operating Model

This is what all five interviews really pointed to. The AI era is not just about adopting new capabilities. It is about whether the organization itself is ready to operate differently. That includes better internal platforms, clearer governance, stronger delivery controls, more usable system context, tighter alignment between engineering, security, and operations, and a more disciplined way of measuring what is actually improving.

The organizations that benefit most from AI will not just be the ones that experiment the fastest. They will be the ones that modernize the system around the experimentation. That is the difference between a promising demo and durable advantage.

Where This Goes Next

If I had to summarize what I heard at AWS re:Invent in one sentence, it would be this: AI is not bypassing engineering excellence. It is making it more necessary.

Yes, the AI tools are getting better. Yes, the use cases are becoming more tangible. Yes, teams are finding real value. But none of that removes the need for strong platforms, clear governance, trustworthy delivery systems, and disciplined operating models. If anything, it raises the bar.

The next wave of competitive advantage will not come from using AI in isolated ways. It will come from building an engineering organization that can turn AI into reliable, scalable, governed outcomes. That is a much harder challenge than generating more code. And it is the one that matters.

Thomas Dockstader

Thomas Dockstader is the Director of EngX Advisory, where he helps engineering organizations measure what matters and turn delivery signals into meaningful improvement.
