
Today, we're thrilled to announce a significant leap forward in our commitment to AI-driven innovation. Harness, a leader in AI-native software delivery, is proud to introduce three powerful AI agents designed to transform how teams create, test, and deliver software.
Since introducing Continuous Verification in 2018, Harness has been at the forefront of leveraging AI and machine learning to enhance software delivery processes. Our latest announcement reinforces our position as an industry pioneer, offering a comprehensive suite of AI-powered tools that address critical challenges across the entire software delivery lifecycle (SDLC).
Our vision is a multi-agent architecture embedded directly into the fabric of the Harness platform. We’re building a powerful library of ‘assistants’ designed to make software delivery faster, more efficient, and more enjoyable for developers. These AI-driven agents will work seamlessly within our platform, handling everything from automating complex tasks to providing real-time insights, freeing developers to focus on what they do best: creating innovative software.
Let's explore the capabilities of these new AI agents and see how they will reshape the future of software delivery.
The Harness AI QA Assistant is a game-changer in the world of software testing. This generative AI agent is purpose-built to simplify end-to-end automation and accelerate the transition from manual to automated testing. End-to-end tests have long been plagued by slow authoring experiences that yield brittle tests, which must be reworked every time the UI changes.

By harnessing the power of AI, this assistant offers a range of benefits that can dramatically improve your testing processes:
Sign up today for early access to the AI QA Assistant.
Crafting pipelines can be challenging. You need to consider your core build and deployment activities, as well as best practices around security scans, testing, quality gates, and more. The new Harness AI DevOps Assistant will make creating great pipelines much easier.

The introduction of the AI DevOps Assistant marks a significant milestone in our mission to simplify and streamline the software delivery process for the world’s developers. By automating complex tasks and providing intelligent insights, this capability empowers teams to focus on innovation rather than getting bogged down in pipeline management intricacies.
Sign up today for early access to the AI DevOps Assistant.
The Harness AI Code Assistant accelerates developer productivity by streamlining coding processes and providing instant access to relevant information. This intelligent tool integrates seamlessly into the development workflow, offering a range of features that enhance coding efficiency and quality:

The Harness AI Code Assistant is more than just a coding tool; it's a comprehensive solution that enhances developer productivity, improves code quality, and fosters a more efficient and collaborative development environment. The AI Code Assistant is available today for all Harness customers at no additional charge.
Software delivery is changing fast. Generative AI has helped organizations code faster than ever. The rest of the delivery pipeline must keep up to take full advantage of these efficiencies.
These tools, the Harness AI QA Assistant, AI DevOps Assistant, and AI Code Assistant, represent more than just technological advancements. They embody a shift in how we approach software development, testing, and delivery. By automating routine tasks, providing intelligent assistance, and offering deep insights into development processes, these AI agents eliminate toil, freeing up human creativity and expertise to focus on solving complex problems and driving innovation.
As we move forward, the integration of AI into software delivery processes will become increasingly crucial for organizations looking to maintain a competitive edge. The ability to deliver high-quality software faster, more reliably, and with greater insight will be a key differentiator in the digital marketplace.
Harness is committed to leading this AI-driven transformation of the software delivery landscape. We invite you to join us on this exciting journey toward a future where AI and human expertise work in harmony to create exceptional software experiences.
Stay tuned for more updates as we continue to innovate and shape the future of software delivery. If you want to try any of these capabilities early, sign up here.
Check out Event: Revolutionizing Software Testing with AI
Check out Harness AI Code Agent
Explore more resources: 3 Ways to Optimize Software Delivery and Operational Efficiency


Your developer productivity initiative didn't collapse because the data was wrong. It stalled because it couldn't answer the business question.
Leadership asked, "So what?"
You presented improved cycle time, higher deployment frequency, lower change failure rate. The dashboards were polished and the trends were moving in the right direction. And still, the room was unconvinced, because the real question was never about operational motion. It was whether engineering was driving measurable business impact.
The best engineering organizations stopped treating productivity as an internal reporting exercise a long time ago. They don't measure to validate effort. They measure to demonstrate outcomes, treating productivity as a strategic capability rather than a compliance artifact. That framing shift is the difference between a dashboard that gets ignored and a measurement system that actually influences investment decisions.
Most engineering productivity programs fail at the measurement selection stage. Teams track what is easy to instrument instead of what influences strategic outcomes: lines of code shipped, tickets closed, pull requests merged. These are activity signals. They describe motion, not value creation.
Even widely respected metrics become vanity indicators when stripped of context. Deployment frequency sounds impressive until you ask what those deployments actually delivered. Lead time looks strong until you realize the shipped features didn't move adoption or revenue. Change failure rate improves, but customer experience stays flat. The numbers go up and the business question remains unanswered.
What's needed is a translation layer between technical execution and business impact. This doesn't mean abandoning quantitative rigor. It means recognizing that metrics only matter when they're connected to outcomes. Deployment frequency is not the goal; sustainable value delivery is. Lead time is not the strategy; responsiveness to market demand is. The difference is subtle, but it's decisive.
High-performing teams measure how engineering execution influences customer value, product velocity, operational risk, and strategic alignment. They treat metrics as decision inputs, not performance theater.
Data without workflow context creates false conclusions. A pull request sitting in review for three days may look like inefficiency, but the cause matters enormously. Is it architectural complexity? Reviewer overload? Cross-timezone coordination? A critical design discussion that needed to happen? Without workflow visibility, metrics flatten nuance into noise and teams start optimizing the wrong bottlenecks.
Consider two teams. One deploys ten times per week with frequent rollbacks. Another deploys five times per week with zero incidents. Raw deployment frequency rewards the first team. Risk-adjusted delivery performance favors the second. Without context, your metrics are quietly incentivizing the wrong behavior, rewarding operational debt over operational discipline.
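The contrast between those two teams can be sketched numerically. This is a toy scoring model, not a Harness feature: the rollback penalty factor of 2 is an arbitrary illustrative assumption.

```python
from dataclasses import dataclass

@dataclass
class TeamWeek:
    deploys: int
    rollbacks: int

def raw_frequency(week: TeamWeek) -> int:
    # Raw deployment frequency counts every deploy, good or bad.
    return week.deploys

def risk_adjusted(week: TeamWeek) -> int:
    # Subtract a penalty per rollback: each rollback consumes the deploy
    # it reverts plus the remediation work. The factor of 2 is an
    # illustrative assumption, not an industry standard.
    return week.deploys - 2 * week.rollbacks

team_a = TeamWeek(deploys=10, rollbacks=3)  # fast, but churning
team_b = TeamWeek(deploys=5, rollbacks=0)   # slower, but stable

print(raw_frequency(team_a), raw_frequency(team_b))   # raw metric favors A
print(risk_adjusted(team_a), risk_adjusted(team_b))   # adjusted metric favors B
```

Any reasonable penalty weight flips the ranking here, which is the point: the raw number rewards churn, and only a risk-adjusted view surfaces the discipline.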
Developer productivity measurement at scale means connecting commits to pipelines, pipelines to releases, releases to incidents, and incidents back to customer impact. Only then can you distinguish between healthy experimentation and accumulating debt, between intentional technical debt reduction and systemic inefficiency. If review time improves but deployment frequency stays flat, you didn't accelerate delivery. You shifted the bottleneck. True engineering intelligence exposes those dynamics instead of hiding them behind aggregate scores.
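The commit-to-impact chain above can be made concrete with a toy join across hypothetical event tables. All identifiers and fields here are invented for illustration; a real system would query its SCM, CI/CD, and incident stores.

```python
# Hypothetical event tables, each stage keyed to the previous one.
commits = {"c1": {"pipeline": "p1"}, "c2": {"pipeline": "p2"}}
pipelines = {"p1": {"release": "r1"}, "p2": {"release": "r1"}}
releases = {"r1": {"incidents": ["i1"]}}
incidents = {"i1": {"customers_affected": 40}}

def customer_impact_of_commit(commit_id: str) -> int:
    # Walk commit -> pipeline -> release -> incidents -> customer impact.
    pipeline = commits[commit_id]["pipeline"]
    release = pipelines[pipeline]["release"]
    total = 0
    for incident_id in releases.get(release, {}).get("incidents", []):
        total += incidents[incident_id]["customers_affected"]
    return total

print(customer_impact_of_commit("c1"))
```

The interesting work in practice is not the join itself but keeping these keys consistent across tools, which is exactly what siloed instrumentation fails to do.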
Most organizations measure productivity within team silos and then wonder why platform investments underperform. A backend team increasing throughput doesn't create value if frontend teams can't integrate efficiently. An infrastructure team reducing pipeline time doesn't accelerate delivery if governance constraints slow application releases downstream. A platform investment only matters if it compounds velocity across the teams that depend on it.
Engineering productivity is systemic. High-functioning organizations measure it that way, instrumenting handoffs between systems rather than just activity within them. They track how long work waits between functions, analyze how architectural decisions in one domain impact velocity in another, and measure whether platform capabilities are translating into application-level acceleration.
This is where productivity measurement shifts from operational reporting to strategic intelligence. The question stops being whether individual teams are busy and starts being whether the organization is aligned. Whether platform investments are landing. Whether architectural decisions are compounding velocity or quietly constraining it. Those answers don't come from point-in-time dashboards. They emerge from trend analysis across repositories, pipelines, and organizational boundaries.
DORA metrics provide a delivery health baseline: deployment frequency, lead time for changes, change failure rate, and time to restore service. Think of them as the vital signs of your software delivery operation, answering whether the delivery engine is healthy enough to support strategic execution.
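As a minimal sketch, the four vital signs might be computed from deployment records like this. The record layout, the seven-day window, and the timestamps are all assumptions for illustration.

```python
from datetime import datetime
from statistics import median

# Hypothetical records: (commit_time, deploy_time, caused_failure, restored_time)
deployments = [
    (datetime(2024, 5, 1, 9), datetime(2024, 5, 1, 15), False, None),
    (datetime(2024, 5, 2, 10), datetime(2024, 5, 3, 11), True,
     datetime(2024, 5, 3, 13)),
    (datetime(2024, 5, 4, 8), datetime(2024, 5, 4, 12), False, None),
]

def four_keys(deploys, window_days=7):
    lead_times = [deploy - commit for commit, deploy, _, _ in deploys]
    failures = [d for d in deploys if d[2]]
    restore_times = [restored - deploy for _, deploy, _, restored in failures]
    return {
        "deploys_per_day": len(deploys) / window_days,
        "median_lead_time_hours": median(lead_times).total_seconds() / 3600,
        "change_failure_rate": len(failures) / len(deploys),
        "median_restore_hours": median(restore_times).total_seconds() / 3600,
    }

print(four_keys(deployments))
```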
But delivery health alone doesn't guarantee sustainable performance. The SPACE framework extends that baseline by capturing satisfaction, performance, activity, communication, and efficiency. It acknowledges what throughput metrics often miss: that sustainable velocity requires healthy teams, manageable cognitive load, and real alignment between effort and impact.
The warning signs are predictable once you know how to read them. High DORA scores alongside declining satisfaction is a burnout signal. Strong activity metrics with weak communication indicators point to silo formation. Efficient deployment paired with persistent incident volume suggests fragility hiding beneath a healthy-looking surface.
The most effective engineering organizations don't choose between DORA and SPACE. They integrate them. DORA confirms the delivery engine is functioning. SPACE confirms that function is sustainable and human. Together, they create a multi-dimensional view of engineering effectiveness that balances speed, quality, resilience, and team health, transforming productivity measurement from throughput tracking into something closer to strategic foresight.
Most engineering intelligence platforms prioritize visibility without context. They surface metrics but fail to connect them to workflow realities or business outcomes, and that's exactly where they fall short.
Harness SEI treats measuring developer productivity as a strategic capability. By integrating with source control systems, CI/CD pipelines, and issue tracking platforms, it creates a unified view of delivery performance across the engineering ecosystem, connecting commits to execution, execution to release, and release to reliability.
The more important distinction is what the platform doesn't do. It doesn't reduce productivity to individual surveillance or flatten team performance into leaderboard comparisons. A team showing slower cycle times because they're paying down technical debt is not underperforming. A platform team with lower deployment frequency because they're building foundational infrastructure is not failing. In isolation, those signals look negative. In context, they're strategic. Harness SEI is built to surface that context, giving engineering leaders visibility into whether platform improvements are compounding velocity, whether architectural investments are reducing friction, and whether delivery health is genuinely supporting strategic goals.
The best engineering organizations don't measure productivity to justify headcount. They measure it to demonstrate value creation, and that shift changes the entire conversation.
When your developer productivity measurement framework connects technical activity to strategic results, you stop defending engineering costs and start demonstrating engineering value. You show that faster deployments enabled a faster market response. That reduced change failure rates lowered operational costs. That improved cycle times allowed the team to deliver more customer value with the same resources.
The common thread across DORA, SPACE, and platforms like Harness SEI is the same principle: context matters more than raw numbers. Optimizing for faster deployments in isolation is tactical. Optimizing for sustainable, risk-adjusted, business-aligned delivery is strategic.
The next time leadership asks whether engineering is productive, you won't reach for activity charts. You'll respond with impact evidence: trend lines tied to business outcomes, insights grounded in workflow context, metrics that influence decision-making rather than just filling reporting cycles.
That is the difference between tracking productivity and understanding it. Between measuring motion and proving impact.
Explore Harness SEI or review implementation details. For teams evaluating long-term fit, review the SEI roadmap.


Matthew Skelton is the CEO & CTO of Conflux and a featured speaker at this year’s DevOps Modernization Summit. Ahead of our annual summit, Matthew has shared his hot takes on AI, DORA, and the key to successful automation. We’ve summarized his thoughts below – or watch for yourself.
The AI gold rush is in full swing. Every engineering leader is under pressure to adopt it, measure it, and show ROI on it. But here's the uncomfortable truth most people aren't saying out loud: AI is having a massive impact on software engineering — and it's still not delivering real value. Most engineering teams start with the tool, then hunt for a use case. That's exactly wrong.
"It's really important for us to come back to the idea of starting with the outcomes first, then working back towards understanding how we'd use AI to empower teams to be effective stewards of value, to reduce cognitive load, to shorten time to do things that are not value add," Matthew shares.
Until you flip that equation — outcomes first, tools second — AI is just expensive noise. Know what problem you're solving before you touch the tooling.
Here's one nobody wants to admit at the all-hands: spinning up AI to generate mountains of code isn't always a productivity win. Sometimes it's just a liability transfer.
"We're not going to use AI to generate mountains of code that then has to be retested and where we find all the security bugs. But we can use it to aid teams to focus on their mission more effectively," according to Matthew.
More code means more review, more vulnerabilities, more cognitive load on already-stretched developers, creating a velocity paradox. The teams winning with AI aren't using it to ship more — they're using it to do less of what doesn't matter.
DORA metrics are everywhere. Deployment frequency. Lead time. MTTR. Change failure rate. And they're being misused by almost everyone who tracks them.
"DORA metrics are output metrics. We shouldn't be trying to drive them directly. We need to be looking at the fundamental capabilities — improving our capabilities and expect to see the DORA metrics change,” he says.
Optimizing for the metric instead of the capability is how you get teams gaming numbers while software quality quietly deteriorates. DORA metrics are a thermometer — not a treatment plan.
And there's another inconvenient truth: "The context for using DORA metrics is quite specific — it's teams that have end-to-end responsibility for value flow. And lots of organizations are not in that place."
If your teams don't own the full value stream, DORA might just be the wrong measuring stick entirely.
The metrics you push on need to be "safe to optimize." Choosing the wrong metrics doesn't just give you bad data — it actively drives behavior you don't want.
"The specific metrics you want to choose very much depend on the context that you're talking about. We need people with a high degree of awareness of the operating context to select the right metrics to empower leaders to be able to push those levers," he states.
Cookie-cutter metric frameworks applied without context are how you end up with fast deployments of broken software. Context is everything.
The pace of change in technology, regulation, and market conditions has blown past what any team can manage through manual inspection.
"The rate of change of technology, of regulatory requirements, of market and economic trading relationships — the rate of change of all these things is too fast for us to have manual inspection of things like security compliance and regulatory compliance," Matthew says.
If your compliance and security processes still depend on humans checking boxes at the end of a release cycle, you're not managing risk — you're manufacturing it. Compliance has to be baked into the platform. Full stop.
Here's the nuance that gets lost when teams rush to automate compliance into their delivery platforms: the technology is the easy part.
According to Matthew: "This has to be baked in. But it has to be baked in in a way which builds trust with the people who are, in some cases, on the hook for things like security compliance and regulatory compliance — particularly in financial services."
"In addition to baking compliance into a platform, we need to have a social dynamic inside the organization that builds that trust so that people feel confident that what the platform is doing and controlling is what's needed."
You can automate every security gate in your CI/CD pipeline, but if the compliance team doesn't trust the platform, they'll route around it. Governance is a people problem as much as a technology problem. Build the trust, or the automation won't stick.
Engineering excellence in 2026 doesn't go to the team with the most AI tools or the prettiest DORA dashboard. It goes to the teams who are ruthlessly honest about where they're generating real value — and brave enough to act on what the data is actually telling them.
Start with outcomes. Pick metrics that are safe to optimize. Automate compliance with trust baked in alongside it. And stop using AI to generate problems you'll have to fix later.
Want more hot takes? Join this year’s DevOps Modernization Summit and hear straight from industry leaders.


Engineering organizations are waking up to something that used to be optional: measurement.
Not vanity dashboards. Not a quarterly “engineering metrics review” that no one prepares for. Real measurement that connects delivery speed, quality, and reliability to business outcomes and decision-making.
That shift is a good sign. It means engineering leaders are taking the craft seriously.
But there are two patterns I keep seeing across the industry that turn this good intention into a slow-motion failure. Both patterns look reasonable on paper. Both patterns are expensive. And both patterns lead to the same outcome: a metrics tool becomes shelfware, trust erodes, and leaders walk away thinking, “Metrics do not work here.”
Engineering metrics do work. But only when leaders use them the right way, for the right purpose, with the right operating rhythm.
Here are the two patterns, and how to address them.
This is the silent killer.
An engineering executive buys a measurement platform and rolls it out to directors and managers with a message like: “Now you’ll have visibility. Use this to improve.”
Then the executive who sponsored the initiative rarely uses the tool themselves.
No consistent review cadence. No decisions being made with the data. No visible examples of metrics guiding priorities. No executive-level questions that force a new standard of clarity.
What happens next is predictable.
Managers and directors conclude that engineering metrics are optional. They might log in at first. They might explore the dashboards. But soon the tool becomes “another thing” competing with real work. And because leadership is not driving the behavior, the culture defaults to the old way: opinions, anecdotes, and local optimization.
If leaders are not driving direction with data, why would managers choose to?
This is not a tooling problem. It is a leadership ownership problem.
If measurement is important, the most senior leaders must model it.
That does not mean micromanaging teams through numbers. It means creating a clear expectation that engineering metrics are part of how the organization thinks, communicates, and makes decisions.
Here is what executive ownership looks like in practice:
When executives do this, managers follow. Not because they are told to, but because the organization has made measurement real.
This is the other trap, and it is even more common.
There is a false belief that if an organization has DORA metrics, improvements in throughput and quality will automatically follow. Like measurement itself is the intervention.
But measurement does not create performance. It reveals performance.
A tool can tell you:
Those are powerful signals. But they do not change anything on their own.
If the system that produces those numbers stays the same, the numbers stay the same.
This is why organizations buy tools, instrument everything, and still feel stuck. They measured the pain, but never built the discipline to diagnose and treat the cause.
If you want metrics to lead to improvement, you need two things:
Without definitions, metrics turn into arguments. Everyone interprets the same number differently, then stops trusting the system.
Without a practice, metrics turn into observation. You notice, you nod, then you go back to work.
The purpose of measurement is not to create pressure. It is to create clarity. Clarity about where the system is constrained, what tradeoffs you are making, and whether your interventions actually helped.
Here is the shift that unlocks everything:
The goal is not to measure engineers.
The goal is to measure the system.
More specifically, the goal is to prove whether a change you made actually improved outcomes.
A change could be:
If you cannot measure movement after you make a change, you are operating on opinions and hope.
If you can measure movement, you can run engineering like a disciplined improvement engine.
This is where DORA metrics become extremely valuable, when they are used as confirmation and learning, not as a scoreboard.
The best leaders I have worked with do not hand leadership over to dashboards. They use metrics as confirmation of what they already sense, and as a way to test assumptions.
That is the role of measurement. It turns gut feel into validated understanding, then turns interventions into provable outcomes.
If you want measurement to drive real improvement, here is a straightforward structure that scales.
Use DORA as a baseline, but make definitions explicit:
This prevents endless debates and keeps the organization aligned.
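One way to make definitions explicit is to pin them as executable predicates that every dashboard and report shares. The specific choices below (production-only deploys, Sev1/Sev2 incidents counting as failures) are illustrative, not prescriptions:

```python
def counts_as_deployment(event: dict) -> bool:
    # Explicit choice: only production deploys count,
    # and config-only changes are excluded.
    return event["env"] == "production" and not event.get("config_only", False)

def counts_as_change_failure(deploy: dict) -> bool:
    # Explicit choice: a failure is a rollback, or a Sev1/Sev2
    # incident attributed to the deploy.
    return (deploy.get("rolled_back", False)
            or deploy.get("incident_severity") in ("sev1", "sev2"))

events = [
    {"env": "production", "config_only": False, "rolled_back": False},
    {"env": "staging", "config_only": False, "rolled_back": True},
    {"env": "production", "config_only": False, "incident_severity": "sev1"},
]

deploys = [e for e in events if counts_as_deployment(e)]
failures = [d for d in deploys if counts_as_change_failure(d)]
print(len(deploys), len(failures))
```

When the definition lives in one shared place like this, the argument about what "counts" happens once, in a pull request, instead of in every review meeting.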
You do not need a heavy process. You need consistency.
A strong starting point:
A metric without a lever becomes a complaint.
Examples:
This is the part most organizations skip.
Pick one change. Implement it. Measure before and after. Learn. Repeat.
Improvement becomes a system, not a motivational speech.
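The measure-before-and-after loop can be reduced to one small function. The metric samples here are invented, and the intervention named in the comment is purely hypothetical:

```python
from statistics import mean

def improvement(before: list[float], after: list[float]) -> float:
    """Percent change in the mean of a metric after an intervention.
    Negative is an improvement for lower-is-better metrics like cycle time."""
    return (mean(after) - mean(before)) / mean(before) * 100

# Hypothetical review-time samples (hours) before and after adopting a
# reviewer-rotation policy.
before = [30.0, 42.0, 36.0]
after = [20.0, 26.0, 26.0]
print(round(improvement(before, after), 1))
```

A real version would also check sample sizes and variance before declaring victory, but even this minimal shape forces the question "compared to what baseline?" every time.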
This brings us back to Pattern #1.
If executives use the tool and drive decisions with it, measurement becomes real. If they do not, the tool becomes optional, and optional always loses.
The organizations that do this well eventually stop talking about “metrics adoption.” They talk about “how we run the business.”
Measurement becomes part of how engineering communicates with leadership, how priorities get set, how teams remove friction, and how investment decisions are made.
And the biggest shift is this: They stop expecting a measurement tool to fix problems. They use measurement to prove that the problems are being fixed.
That is the point. Not dashboards, not reporting, not performance theater: Clarity, decisions, experiments, and outcomes.
In the end, measurement is not the transformation. It is the instrument panel that tells you whether your transformation is working.



Engineering organizations today don’t lack data—they lack clarity. Delivery timelines, developer activity, and code quality metrics are scattered across systems, making it hard to answer simple but critical questions: Where are we losing time? Are we investing in the right work? Who needs support or coaching?
This is where Harness Software Engineering Insights (SEI) steps in. Unlike traditional dashboards, SEI offers opinionated, role-based insights that connect engineering execution with business value.
In this post, we’ll walk through a proven rollout framework, real customer success stories, and a practical guide for any organization looking to implement an engineering metrics program (EMP) that actually drives impact.

Rolling out SEI without a clear objective is like configuring CI/CD pipelines without deployment goals. Before diving into dashboards or metrics, align internally on what you’re trying to improve.
Most organizations fall into one or more of the following categories:
💡 A powerful first step is simply asking: What are the top 3 decisions you wish you could make with data but currently can't?

Once your objectives are clear, it’s time to define the key performance indicators (KPIs) that reflect progress. At Harness, we recommend starting with 5 core metrics that align with your goals:
These metrics aren’t just about numbers—they tell a story. And SEI’s pre-built dashboards help visualize that story from day one.

Out-of-the-box data isn’t enough—you need context. SEI allows deep configuration across integrations, people, and workflows to ensure accuracy and actionability.
Start with the essentials: Jira or ADO (issue tracking), GitHub or Bitbucket (SCM), Jenkins or Harness CI (build/deploy). Validate data ingestion and set up monitoring for failed syncs.
Merge developer identities across systems and tag them with meaningful metadata: Role, Team, Location, Manager, and Employee Type (FTE, contractor). This enables advanced filtering, benchmarking, and team-level coaching.
Use Asset-Based Collections for things like repositories or services (ideal for DORA/Sprint metrics) and People-Based Collections for teams, departments, or geographies (perfect for Dev Insights, Trellis, and Business Alignment).
SEI lets you build custom profiles for DORA metrics, Business Alignment, and Trellis. These profiles allow you to set your own definitions for “Lead Time,” “MTTR,” or what constitutes “New Work.” Configurable widgets ensure the insights match your team’s workflows—not the other way around.
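The SEI profiles themselves are configured in the product, but the stakes of choosing a "Lead Time" definition are easy to show with a toy calculation (all timestamps invented):

```python
from datetime import datetime

pr = {
    "first_commit": datetime(2024, 5, 1, 9, 0),
    "pr_opened": datetime(2024, 5, 2, 14, 0),
    "deployed": datetime(2024, 5, 3, 10, 0),
}

# Two reasonable "Lead Time" definitions give very different numbers
# for the same pull request, which is why the start point must be explicit.
lead_from_first_commit = pr["deployed"] - pr["first_commit"]
lead_from_pr_open = pr["deployed"] - pr["pr_opened"]

print(lead_from_first_commit.total_seconds() / 3600)  # hours from first commit
print(lead_from_pr_open.total_seconds() / 3600)       # hours from PR open
```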

One of SEI’s most valuable capabilities is persona-based reporting. Not every stakeholder needs to see every metric. Instead, create tailored views based on what matters to them.
| Persona | Primary Metrics | Cadence |
|---|---|---|
| CTO / VP Engineering | DORA, Effort Allocation, Innovation % | Quarterly |
| Director of Engineering | Sprint Trends, PR Cycle Time, MTTR | Monthly |
| Engineering Manager | Coding Days, PR Approval Rate, Rework | Weekly |
| Scrum Master / TPM | Commit-to-Done, Scope Creep, Sprint Hygiene | Weekly/Daily |
| Product Manager | Feature Delivery Lead Time, KTLO vs. New Work | Bi-weekly |
By aligning metrics to what stakeholders actually care about, you reduce dashboard fatigue and increase engagement.

Rolling out dashboards isn’t enough—you need cadence and accountability.
Successful SEI customers establish regular reviews, such as:
Each dashboard or collection should have an owner, responsible for interpreting and acting on the insights.

Once the foundation is in place, go deeper. SEI allows you to scale insight delivery across the org by:
This is how SEI becomes more than a dashboard—it becomes your engineering operating system.

Data without goals is directionless. Use SEI to establish stretch goals tied to organizational outcomes.
Here are common SEI-aligned OKRs:
Because SEI continuously measures these metrics, you can track OKR progress in real time.
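Real-time OKR tracking can be reduced to a simple progress function over a continuously measured metric. The clamping behavior and the hypothetical cycle-time numbers below are illustrative assumptions:

```python
def okr_progress(baseline: float, target: float, current: float) -> float:
    """Fraction of the way from baseline to target, clamped to [0, 1].
    Works for lower-is-better and higher-is-better metrics alike."""
    span = target - baseline
    done = current - baseline
    return max(0.0, min(1.0, done / span))

# Hypothetical OKR: cut median PR cycle time from 48h to 24h; currently 36h.
print(okr_progress(baseline=48, target=24, current=36))
```

Because the function clamps, a regression past the baseline reads as 0% rather than a confusing negative number, a deliberate design choice worth debating for your own dashboards.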

🧭 Objective:
Improve engineering velocity without compromising security or code quality, while ensuring more effort is spent on new feature development.
📈 Key Results:
💥 Impact:
By using SEI’s Dev Insights and Business Alignment dashboards, the customer was able to shift engineering focus toward innovation. Reducing the unapproved PR backlog improved code review discipline, while faster PR cycle times helped the team deliver secure, high-quality features faster.

🧭 Objective:
Accelerate delivery cadence, reduce lead times, and establish a baseline for operational resilience across distributed teams.
📈 Key Results:
💥 Impact:
SEI enabled visibility into every stage of the SDLC — from PRs to production. Dashboards helped engineering leadership identify workflow bottlenecks, while improved cycle time allowed the team to launch features continuously. The organization was also able to define new goals around MTTR reduction for future sprints.

🧭 Objective:
Improve release predictability, reduce change failure rates, and maintain quality during large-scale technology transformations.
📈 Key Results:
💥 Impact:
Using SEI’s DORA and Sprint Insights dashboards, engineering teams surfaced high-risk areas and improved review discipline. Leadership used Business Alignment reports to visualize time allocation, allowing them to rebalance priorities between legacy maintenance and innovation initiatives — critical for de-risking digital transformation.

🧭 Objective:
Improve collaboration and execution within hybrid teams (FTEs and contractors), while accelerating delivery with fewer blockers.
📈 Key Results:
💥 Impact:
SEI helped the customer restructure their hybrid engineering model by revealing top contributors, low-collaboration patterns, and team-specific bottlenecks. By tagging contributors by type, team, and location, the organization realigned review ownership and improved handoff speed across distributed groups.

🧭 Objective:
Reduce production risk while accelerating feature releases in a highly agile environment.
📈 Key Results:
💥 Impact:
SEI’s DORA metrics helped the team move from reactive issue management to proactive release planning. With improved scope hygiene and PR discipline, the organization was able to deliver features at a faster pace while maintaining platform stability — a crucial balance in gaming environments where user experience is paramount.

🧭 Objective:
Speed up secure development without compromising engineering discipline or quality during rapid team expansion.
📈 Key Results:
💥 Impact:
The customer used SEI to quantify the tradeoff between speed and review quality. By highlighting areas with excessive unapproved PRs and scope creep, the team set up opinionated OKRs to strike a balance between velocity and sustainability. Trellis and Dev Insights dashboards were used to coach developers and improve overall workflow consistency.
The most successful engineering organizations don’t just collect metrics—they operationalize them. Harness SEI enables your teams to go beyond dashboards and build a culture of insight, accountability, and impact.
By following a structured rollout, aligning metrics to personas, and setting outcome-focused OKRs, SEI can become the backbone of your engineering excellence strategy.
About the Author
Adeeb Valiulla leads the Quality Assurance & Resilience, Cost & Productivity function at Harness, where he works closely with Fortune 500 customers to drive engineering efficiency, improve developer experience, and align software delivery efforts with business outcomes. With a focus on measurable insights, Adeeb helps organizations turn engineering data into actionable intelligence that fuels continuous improvement. He brings a unique blend of technical depth and strategic vision, helping teams unlock their full potential through data-driven transformation.
Check out: Harness Software Engineering Insights, Harness Software Engineering Insights Feature


Every so often, a piece of research lands in your inbox that makes you pause and think, “Yeah, this is exactly what I’ve been seeing, but couldn’t articulate.”
Microsoft’s recent study, “Time Warp: The Gap Between Developers’ Ideal vs Actual Workweeks in an AI-Driven Era” is that kind of read. It maps a disconnect I’ve heard developers vent about in 1:1s and retro meetings: the constant struggle between what they’re doing and what they wish they were doing.
In this article, we talk about that exact gap. And more importantly, what we as product leaders can learn from it.
Let’s start with the uncomfortable truth. Developers are spending a surprising amount of time not developing. Microsoft’s study shows:
None of this is shocking if you’ve worked with engineers up close. But seeing it quantified is a reality check. We’ve built org structures and workflows that slowly chip away at the flow state.

When developers talk about their “perfect week,” it’s surprisingly consistent.
They want more time for deep work, heads-down coding, solving real problems, and making architecture decisions that actually move the product forward.
They want collaboration, but the kind that’s quick, intentional, and actually helps, not an endless stream of pings, meetings, and status updates. They’re not asking to go off into a cave. They still value teamwork. But the ask is simple: less noise, more impact.
When there’s a big gap between how their week actually goes vs. how they wish it would, satisfaction drops. It’s not just about efficiency, it’s about identity. Developers want to feel like builders, not just operators moving tickets from “In Progress” to “Done.” That’s what’s really at stake when we talk about developer experience.
One of the most interesting parts of Microsoft's research is how it frames AI not as some future disruption, but as a tool developers are already leaning on today to reclaim their time. Developers who regularly use AI tools (like GitHub Copilot, code assistants, and auto-summarization tools) are seeing a closer match between how they want to spend their week and how they actually spend it. Not because AI is doing their jobs for them but because it’s helping clear the clutter.
That’s the real product insight: AI isn’t just another feature you bolt onto a dev tool. When it’s done right, it acts as a force multiplier. It automates the repetitive, low-value work that usually derails focus so developers can stay in their flow state longer. But the flip side is just as important: if AI adds complexity or noise, it becomes another source of interruption. We have to be deliberate about where and how we apply it.
It’s really tempting to jump straight into solution mode. But the first move isn't to fix it. It’s to understand.
Before we throw new tools, new processes, or new initiatives at the problem, we need to take a real, honest look at what the developer experience actually feels like today.

Here’s a simple way to break it down:
We recommend starting with a simple 3-step approach.
Start with the basics. You don’t need dozens of dashboards; what you need is a few key signals that reveal where energy is leaking:
At Harness, we believe in anchoring to DORA metrics first because they tell you whether your team is predictably delivering value:
DORA gives you the high-level outcomes but not the full story. Once you establish that baseline, you drill deeper into flow metrics to uncover where energy is leaking day-to-day. Specifically, track:
And just as important: Pair quantitative metrics with qualitative feedback.
Look for the early warning signs: Long review queues. Increased context-switching. Meetings that nobody wants but everyone attends. Friction shows up before velocity drops; you just have to know where to look.
Numbers alone aren’t enough. Metrics tell you what’s happening but they’ll never tell you why.
You might see a spike in pull request cycle times.
Is it because teams are slacking off?
Or because reviewers are spread across too many projects?
Or because no one knows who’s responsible for the next action?
You need real conversations. You need to hear the "why" directly from the people living it every day.
Treat data as a conversation starter, not a final answer. The goal isn’t just to measure experience, it's to understand it.
Once you understand the landscape, act deliberately.
The goal isn’t more activity, it's more effective workweeks.
This is exactly the problem we built Harness Software Engineering Insights (SEI) to solve.
We don’t believe metrics alone fix anything.
Our real North Star isn’t a better report card. It’s a better developer week.
Metrics are just the starting line. The real value comes from creating environments where developers spend more time building, solving, and innovating and less time stuck in endless loops of coordination and rework.
Building great products isn’t just about shipping roadmaps and features faster.
It’s about building environments and systems where people can do their best, most meaningful work.
If our developers feel stuck in a “Time Warp,” consumed by a week full of meetings, blockers, and busy work, then no amount of AI, velocity tracking, or sprint burndowns will fix morale.
The next frontier isn’t just shipping faster. It’s about helping developers reclaim better weeks. And this is where leadership plays a critical role. With Harness SEI, we empower leaders with crystal-clear visibility into how their teams actually work, highlighting where flow is breaking down, where friction builds, and where leaders can step in to remove barriers. The goal isn’t just to optimize metrics; it’s to free developers to do what they love best: building, creating, and solving meaningful problems.
Helping developers close the gap between the real and the ideal week isn’t just about improving productivity metrics. It’s about restoring a sense of purpose, ownership, and flow: the very things that make engineering such a creative, energizing craft. And I think that’s a future worth obsessing over.
Learn more: The causes of developer downtime and how to address them


Engineering leadership used to be about gut feel, strong opinions, and shipping fast. But that playbook is expiring—quickly.
The world we’re building software in today is fundamentally different. Economic pressure, AI disruption, rising complexity, and the demand for hyper-efficiency have converged. Old-school metrics, instinct-led prioritization, and managing by velocity charts won’t cut it.
What today’s engineering leaders need isn’t more dashboards. They need clarity. They need trust. They need a new way to lead.
And most of all? They need to stop guessing.

You shouldn’t have to start every leadership meeting explaining what your teams are working on, why something slipped, or where time is going.
With Harness Software Engineering Insights (SEI), you don’t guess. You know.
You see where bottlenecks are forming. You know when PRs are aging in silence. You understand whether your teams are overcommitted, burned out, or executing beautifully. You know the tradeoffs being made between tech debt, features, and KTLO—before someone asks.
SEI replaces opinions with insight. It surfaces the friction you can’t see in a sprint report, and helps you make smarter decisions based on what’s actually happening—not what you hope is happening.
Because in the new era of engineering, clarity is leadership.

But when you only measure output—story points, releases, burnup—you miss the nuance. You miss the tradeoffs. You miss the why behind the work.
Harness SEI helps leaders tell the complete story:
This is the story your CFO, CPO, and CEO need to hear—not how many tickets you closed last sprint.
Engineering deserves to be understood. SEI makes it possible.

Let’s be honest: we’re no longer in a “hire at all costs” era. Efficiency is the new growth. The mandate is clear:
And that’s not a burden—it’s an opportunity.
With Harness SEI, leaders can finally quantify engineering capacity, align work with outcomes, and invest where it matters most. You can see which teams are stretched too thin, where tech debt is slowing you down, and which initiatives are driving measurable business value.
This isn’t about pushing harder. It’s about working smarter, leading sharper, and delivering more strategically.

Great engineering happens when teams have clarity, focus, and space to build. But too often, they’re stuck in the weeds—fighting fires, filling out status reports, and guessing what matters.
With SEI, that changes.
This frees up energy for real engineering. It protects time for hackathons, R&D spikes, creative sprints—the things that move the business forward and keep developers fulfilled.
Because in a world full of AI and automation, the one thing we can’t afford to lose is human creativity.
SEI helps you protect it—by getting rid of everything that wastes it.

Burnout doesn’t start with bad code. It starts with bad leadership.
When developers don’t know where their work is going, why it matters, or what success looks like, morale suffers. When they’re forced to do status updates instead of shipping, they disengage. When PRs sit for days, they lose momentum.
SEI enables developers to see how their work connects to outcomes. It enables faster feedback, less friction, and clearer focus.
And for leaders? It means fewer surprises, better retention, and more meaningful 1:1s.

The best engineering leaders of the next decade won’t just be great technologists; they’ll be clear communicators, business strategists, and defenders of engineering best practices.
They’ll lead with data, empathy, and decisiveness.
They’ll connect effort to impact.
They’ll stop guessing. And they’ll lead better because of it.
If you're ready to lead in this new era, Harness SEI is your competitive advantage.


For too long, engineering has been seen as a black box—an opaque function that takes in business requirements and delivers software without clear visibility into the process. But in today’s data-driven, business-first world, engineering leaders must do more than execute; they must influence, align, and communicate with executive peers to drive business outcomes.
CTOs, VPs of Engineering, and other technical leaders who can effectively translate engineering metrics into business impact gain a seat at the strategic table. Instead of reacting to business requests, they help shape company priorities, resource allocation, and long-term growth strategies.
But here’s the challenge: Traditional engineering metrics don’t resonate with executives. Story points, commit counts, and deployment logs mean little to a CFO, CMO, or CEO. To gain influence, engineering leaders need to frame their work in business terms—think predictability, customer impact, cost efficiency, and revenue acceleration.
That’s where Harness Software Engineering Insights (SEI) comes in. SEI transforms engineering metrics into clear, actionable insights that bridge the gap between technical execution and business strategy. This blog will show you how to use SEI to speak the language of executives, drive cross-functional alignment, and elevate engineering’s strategic role in your organization.
Before presenting engineering metrics, it’s critical to understand what matters to your executive peers. Different leaders prioritize different business drivers, and aligning your communication style accordingly makes your insights more relevant and impactful.

| Executive | Key Priorities | How Engineering Metrics Apply |
|---|---|---|
| CEO (Chief Executive Officer) | Revenue growth, competitive differentiation, innovation | Engineering’s impact on faster time-to-market, scalability, and business alignment |
| CFO (Chief Financial Officer) | Cost efficiency, budget predictability, ROI | Engineering capacity, cost of technical debt, and efficiency improvements |
| CRO (Chief Revenue Officer) | Sales velocity, customer retention, revenue expansion | Feature delivery timelines, system reliability, customer-impacting defects |
| CPO (Chief Product Officer) | Product roadmap execution, user experience, feature adoption | Lead Time for Change, deployment frequency, engineering capacity for innovation |
| CMO (Chief Marketing Officer) | Digital transformation, campaign execution, website/app performance | Site reliability, system uptime, infrastructure scalability, release predictability |
🔹 Takeaway: Before presenting engineering data, frame it in terms of the business goals that resonate with each executive stakeholder.
Many engineering leaders fall into the trap of reporting on vanity metrics—like total commits, number of deployments, or story points completed—without connecting them to business outcomes.
The key is choosing the right metrics that executives care about. Harness SEI helps track engineering performance across three core areas:

Let’s explore which SEI metrics best support each area.
🎯 How to Communicate It: “Over the past quarter, engineering has improved on-time delivery from 67% to 85%, reducing last-minute delays and improving cross-team alignment.”
🎯 How to Communicate It: “Currently, 54% of engineering work is dedicated to new feature development, while 32% is spent on maintenance and 14% on technical debt reduction.”
🎯 How to Communicate It: “We’ve reduced Lead Time for Change from 14 days to 9 days, improving our ability to respond to market demands faster.”
🎯 How to Communicate It: “New engineers ramp up to full productivity in 6 weeks on average, down from 8 weeks last year.”
Harness SEI provides efficiency, productivity and alignment dashboards that make engineering metrics clear, visual, and actionable for executives.
SEI’s DORA, Sprint Insights, and Business Alignment Dashboards provide high-level summaries while allowing leaders to drill into details when needed.
Rather than waiting for executives to ask, SEI highlights risks upfront (e.g., increasing cycle time, declining deployment frequency) and identifies bottlenecks.
Numbers alone don’t drive action—framing metrics as stories does. SEI allows engineering leaders to present data in a way that connects to business goals and influences decisions.

Engineering is no longer just about writing code—it’s about driving business value. By using Harness SEI to track and communicate on-time delivery, engineering capacity, deployment frequency, and business alignment, engineering leaders can:
✅ Influence executive decisions by aligning engineering work with company priorities.
✅ Improve collaboration across teams by providing visibility into engineering efforts.
✅ Proactively drive impact instead of reacting to business requests.
Ready to communicate engineering’s impact more effectively? Start leveraging SEI today to gain visibility, efficiency, and alignment across your organization.
👉 Learn more about Harness SEI here.


Developer productivity has become a critical factor in today's fast-paced software development world. Organizations constantly seek methods to enhance productivity, improve engineering efficiency, and align their development teams with strategic business goals. But navigating the complexities of developer productivity isn't always straightforward.
In this blog, we’ll hear from Adeeb Valiulla, Director of Engineering Excellence at Harness, as we answer some of the most pressing questions on developer productivity to help you optimize your teams and processes effectively.
Developer productivity refers to the efficiency and effectiveness with which software developers deliver high-quality software solutions. It encompasses the speed and quality of coding, reliability of deployments, the ability to quickly recover from failures, and alignment of development efforts with strategic business goals. High developer productivity means achieving more impactful outcomes with fewer resources, enabling organizations to stay competitive and agile in rapidly evolving markets.

Developer productivity directly impacts an organization's ability to deliver software quickly, reliably, and with high quality. High productivity enhances agility, reduces costs, accelerates feature delivery, and ultimately drives customer satisfaction and competitive advantage. Improving productivity not only benefits the business but also increases developer satisfaction by removing bottlenecks and empowering teams.
“In the hardware technology industry, a well-known global hardware company implemented an engineering metrics program under Harness’s and my guidance. This led to significantly boosted developer productivity. Their PR cycle time improved dramatically from nearly 3 days to under an hour, greatly enhancing delivery speed and agility.”

Yes, software developer productivity can be effectively measured. While measuring productivity isn't always simple due to the complexity of software development, several key metrics have emerged as valuable indicators:
These metrics, when applied carefully and contextually, provide actionable insights into developer productivity.
“In the Gaming Industry, Harness’ holistic approach to productivity, which emphasizes consistent developer engagement and effective scope management, enabled a gaming company to manage scope creep and improve their weekly coding days significantly. This strengthened their development workflow and productivity.”

Generative AI certainly has the potential to improve developer productivity, but the verdict is still out on whether GenAI provides any significant net improvements. GenAI certainly helps developers write code faster by automating repetitive coding tasks, enhancing code reviews, predicting potential errors, and accelerating problem-solving. The vision is that AI-powered tools will help developers write cleaner, more reliable code faster, freeing them to focus on strategic, high-value tasks.

However, the time saved by using GenAI is not guaranteed to net out as a productivity gain once you account for the new challenges GenAI brings, such as learning to prompt effectively, time spent reviewing and fixing the code it produces, and the potential system and software delivery lifecycle (SDLC) bottlenecks that come with the increased pace of new code that must be handled, deployed, and tested.
Tools such as Harness Software Engineering Insights (SEI) and AI Productivity Insights (AIPI) can help measure how, where, and with whom AI is having an impact (both positive and potentially negative) so that you can optimize the likelihood that GenAI will improve your developer productivity.
Additionally, most GenAI developer tooling has focused on AI coding assistants. However, coding is only 30-40% of the work required to get software updates and enhancements delivered; the rest sits in the pipeline and SDLC stages mentioned above. This leaves 60-70% of the overall process that GenAI is not yet helping with. The Harness AI-Native Software Delivery Platform provides many AI agents that help automate roughly 40% of the non-coding portion of the SDLC.

Measuring developer productivity involves:
When measuring developer productivity, focus on outcome-based metrics rather than activity counts. DORA metrics (deployment frequency, lead time, change failure rate, and recovery time) provide valuable insights into team performance and delivery efficiency. Complement these with contextual data like PR cycle times, coding days per week, and the ratio of building versus waiting time.
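To make that baseline concrete, here is a minimal Python sketch that computes the four DORA metrics from a handful of hypothetical deployment and incident records. The record shapes and dates are invented for illustration; they are not an SEI data format.

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: (commit_time, deploy_time, caused_failure)
deploys = [
    (datetime(2024, 5, 1, 9), datetime(2024, 5, 2, 14), False),
    (datetime(2024, 5, 3, 10), datetime(2024, 5, 3, 16), True),
    (datetime(2024, 5, 6, 8), datetime(2024, 5, 7, 11), False),
    (datetime(2024, 5, 8, 9), datetime(2024, 5, 8, 15), False),
]
# Hypothetical incidents: (opened, resolved)
incidents = [(datetime(2024, 5, 3, 17), datetime(2024, 5, 3, 20))]

days_observed = 7
deployment_frequency = len(deploys) / days_observed          # deploys per day
lead_time = sum((d - c for c, d, _ in deploys), timedelta()) / len(deploys)
change_failure_rate = sum(1 for *_, failed in deploys if failed) / len(deploys)
mttr = sum((r - o for o, r in incidents), timedelta()) / len(incidents)

print(f"Deployment frequency: {deployment_frequency:.2f}/day")
print(f"Lead time for change: {lead_time}")
print(f"Change failure rate: {change_failure_rate:.0%}")
print(f"MTTR: {mttr}")
```

The same four aggregations scale from a toy list to an event warehouse; the hard part in practice is attributing events cleanly, which is what the SEI integrations handle.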
Harness SEI implements dashboards that visualize these metrics by role, enabling managers to identify bottlenecks, engineers to track personal progress, and executives to monitor overall delivery health. To learn more, read our blog on Persona-Based Metrics.
Remember that measurement should drive improvement, not punishment—create a psychologically safe environment where data informs positive change rather than triggering defensive behavior.
Improving developer productivity requires a multi-faceted approach that addresses both technical and organizational constraints. Start by eliminating common friction points: reduce build times through better CI/CD pipelines, implement robust code review processes that prevent bottlenecks, and adopt standardized development environments that minimize "it works on my machine" issues. Investment in developer tooling often yields outsized returns.
Improving developer productivity requires:
Creating focused work environments is equally crucial. Research shows that developers need uninterrupted blocks of at least 2-3 hours to reach flow state—the mental zone where complex problem-solving happens most efficiently. Consider implementing "no-meeting days" or core collaboration hours to protect deep work time. Google's approach of 20% innovation time and Atlassian's "ShipIt Days" demonstrate how structured creative periods can boost both productivity and engagement.
Finally, regularly audit and reduce technical debt; Etsy's practice of dedicating 20% of engineering resources to infrastructure improvements ensures their codebase remains maintainable as it grows. The most productive engineering cultures view developer experience as a product itself—one that requires continuous investment and refinement.
“In the cybersecurity sector, teams following Harness’ Engineering Metrics Program consistently averaged over 4.5 coding days per week, demonstrating high developer engagement and productivity.”

In Agile environments, a deeper analysis of key metrics provides valuable insights into developer productivity:
Sprint Velocity serves as more than just a workload counter—it's a team's productivity fingerprint. High-performing teams focus less on increasing raw velocity and more on velocity stability, which indicates predictable delivery. By tracking velocity variance across sprints (aiming for less than 20% fluctuation), teams can identify external factors disrupting productivity. Leading organizations complement this with complexity-adjusted velocity, weighting story points based on technical challenge to reveal where teams excel or struggle with certain types of work.
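The stability check described above is a few lines of arithmetic. The sketch below uses hypothetical sprint numbers and measures fluctuation as the coefficient of variation against the 20% guideline:

```python
from statistics import mean, pstdev

# Hypothetical story points completed over the last six sprints
velocities = [42, 38, 45, 40, 36, 44]

avg = mean(velocities)
# Coefficient of variation: std dev as a percentage of the mean
fluctuation_pct = pstdev(velocities) / avg * 100

print(f"Average velocity: {avg:.1f} points")
print(f"Velocity fluctuation: {fluctuation_pct:.1f}%")
if fluctuation_pct < 20:
    print("Stable: fluctuation is under the 20% guideline")
```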
Sprint Burndown Charts reveal productivity patterns beyond simple progress tracking. Teams should analyze the chart's shape—a consistently flat line followed by steep drops indicates batched work and potential bottlenecks, while a jagged but steady decline suggests healthier continuous delivery. Advanced teams overlay their burndown with blocker indicators, clearly marking when and why progress stalled, creating accountability for removing impediments quickly.
Commit to Done Ratio offers insights into planning accuracy and execution capability. The most productive teams maintain ratios above 80% while avoiding artificial padding of estimates. By categorizing incomplete work (technical obstacles, scope changes, or estimation errors), teams can systematically address root causes rather than symptoms. Some organizations track this metric over multiple sprints to identify trends and measure the effectiveness of process improvements.
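A simple way to operationalize the ratio and the root-cause categorization might look like this (the sprint ledger below is hypothetical):

```python
# Hypothetical sprint ledger: points committed vs. completed,
# with incomplete work tagged by root cause
committed = 40
done = 34
incomplete_reasons = {"technical obstacle": 3, "scope change": 2, "estimation error": 1}

ratio = done / committed
print(f"Commit-to-done ratio: {ratio:.0%}")  # healthy teams stay above 80%
for reason, points in sorted(incomplete_reasons.items(), key=lambda kv: -kv[1]):
    print(f"  carried over ({reason}): {points} pts")
```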
PR Cycle Time deserves granular analysis, as code review often becomes a hidden productivity drain. Break this metric into component parts—time to first review, rounds of feedback, and time to final merge—to pinpoint specific improvement areas. Top-performing teams establish service-level objectives for each stage (e.g., initial reviews within 4 hours), supported by automated notifications and team norms. This detailed approach turns PR management from a black box into a well-optimized workflow with predictable throughput.
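Breaking cycle time into stages is straightforward once you have the PR's timeline events. This sketch uses an invented timeline for a single pull request and a hypothetical 4-hour first-review SLO:

```python
from datetime import datetime, timedelta

# Hypothetical timeline events for one pull request
opened = datetime(2024, 5, 6, 9, 0)
first_review = datetime(2024, 5, 6, 15, 30)
merged = datetime(2024, 5, 8, 11, 0)
review_rounds = 3

time_to_first_review = first_review - opened
review_to_merge = merged - first_review
total_cycle = merged - opened

print(f"Time to first review: {time_to_first_review}")
print(f"Review rounds: {review_rounds}")
print(f"Review to merge: {review_to_merge}")
print(f"Total PR cycle time: {total_cycle}")

slo = timedelta(hours=4)  # example service-level objective for first review
if time_to_first_review > slo:
    print("SLO breached: first review took longer than 4 hours")
```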
Harness SEI provides robust tracking of developer productivity by:
Harness SEI empowers teams to enhance productivity by clearly visualizing critical productivity metrics.

Adeeb emphasizes that:
Improving developer productivity requires a holistic and human-centric approach. It's not merely about tools and metrics but fundamentally about creating an environment where developers can consistently deliver high-quality output without unnecessary friction.
According to Adeeb, the key factors include:
Harness' approach advocates for an integrated strategy that aligns technology, processes, and culture, emphasizing developer well-being as central to sustainable productivity improvements.
Harnessing the right insights and strategies can transform your software development processes, driving efficiency, innovation, and growth. Ready to elevate your developer productivity to the next level? Discover the power of Harness Software Engineering Insights (SEI) and start achieving measurable improvements today.
Request a meeting or demo
Learn more: The causes of developer downtime and how to address them


Today, we're thrilled to announce a significant leap forward in our commitment to AI-driven innovation. Harness, a leader in AI-native software delivery, is proud to introduce three powerful AI agents designed to transform how teams create, test, and deliver software.
Since introducing Continuous Verification in 2018, Harness has been at the forefront of leveraging AI and machine learning to enhance software delivery processes. Our latest announcement reinforces our position as an industry pioneer, offering a comprehensive suite of AI-powered tools that address critical challenges across the entire software delivery lifecycle (SDLC).
Our vision is a multi-agent architecture embedded directly into the fabric of the Harness platform. We’re building a powerful library of ‘assistants’ designed to make software delivery faster, more efficient, and more enjoyable for developers. These AI-driven agents will work seamlessly within our platform, handling everything from automating complex tasks to providing real-time insights, freeing developers to focus on what they do best: creating innovative software.
Let's explore the capabilities of these new AI agents and see how they will reshape the future of software delivery.
The Harness AI QA Assistant is a game-changer in the world of software testing. This generative AI agent is purpose-built to simplify end-to-end automation and accelerate the transition from manual to automated testing. End-to-end tests have been plagued by slow authoring experiences that yield brittle tests, which need to be tended to every time the UI changes.

By harnessing the power of AI, this assistant offers a range of benefits that can dramatically improve your testing processes:
Sign up today for early access to the AI QA Assistant.
Crafting pipelines can be challenging. You need to consider your core build and deployment activities, as well as best practices around security scans, testing, quality gates, and more. The new Harness AI DevOps Assistant will make creating great pipelines much easier.

The introduction of the AI DevOps Assistant marks a significant milestone in our mission to simplify and streamline the software delivery process for the world’s developers. By automating complex tasks, and providing intelligent insights, this capability empowers teams to focus on innovation rather than getting bogged down in pipeline management intricacies.
Sign up today for early access to the AI DevOps Assistant.
The Harness AI Code Assistant accelerates developer productivity by streamlining coding processes and providing instant access to relevant information. This intelligent tool integrates seamlessly into the development workflow, offering a range of features that enhance coding efficiency and quality:

The Harness AI Code Assistant is more than just a coding tool; it's a comprehensive solution that enhances developer productivity, improves code quality, and fosters a more efficient and collaborative development environment. The AI Code Assistant is available today for all Harness customers at no additional charge.
Software delivery is changing fast. Generative AI has helped organizations code faster than ever. The rest of the delivery pipeline must keep up to take full advantage of these efficiencies.
These tools (the Harness AI QA Assistant, AI DevOps Assistant, and AI Code Assistant) represent more than just technological advancements. They embody a shift in how we approach software development, testing, and delivery. By automating routine tasks, providing intelligent assistance, and offering deep insights into development processes, these AI agents eliminate toil, freeing up human creativity and expertise to focus on solving complex problems and driving innovation.
As we move forward, the integration of AI into software delivery processes will become increasingly crucial for organizations looking to maintain a competitive edge. The ability to deliver high-quality software faster, more reliably, and with greater insight will be a key differentiator in the digital marketplace.
Harness is committed to leading this AI-driven transformation of the software delivery landscape. We invite you to join us on this exciting journey toward a future where AI and human expertise work in harmony to create exceptional software experiences.
Stay tuned for more updates as we continue to innovate and shape the future of software delivery. If you want to try any of these capabilities early, sign up here.
Check out the event: Revolutionizing Software Testing with AI
Check out the Harness AI Code Agent
Explore more resources: 3 Ways to Optimize Software Delivery and Operational Efficiency


AI-based coding assistants like Google Gemini Code Assist, GitHub Copilot, and others are becoming increasingly popular. However, the efficacy of these tools is still unknown. Engineering leaders want to understand how effective these tools are and how much they should invest in them.
Harness AI Productivity Insights is a new (beta) capability in Software Engineering Insights that helps engineering leaders understand the productivity gains unlocked by leveraging AI coding tools.
This targeted solution empowers engineering leaders to generate comprehensive comparison reports across diverse developer cohorts. It facilitates insightful analyses, such as evaluating the impact of AI Coding Tools on productivity by comparing developers who leverage these tools against those who don't. Additionally, it allows for comparisons between different points in time, tracking how developers' performance evolves as they adopt and grow their proficiency with AI Coding tools.

Customers can choose different types of comparison reports. The most common reports are comparing cohorts of developers who use coding assistants and those who don’t. Other supported types of comparison reports include comparing cohorts of developers with different metadata, for example senior engineers versus junior engineers, or comparing the same set of developers at different points in time.
For every report, customers can flexibly define the comparison cohorts either through manual selection or by utilizing existing metadata filters.

Customers can run multiple reports at any time. Reports will be saved and available to share within the organization.

Each report analyzes the productivity scores of both cohorts, calculating the productivity gain of the second cohort relative to the first. The analysis encompasses various facets of performance, including velocity and quality metrics. Additionally, the solution offers the option to gather qualitative insights through surveys distributed to all cohort members, enriching the quantitative data with user feedback.
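As a rough illustration of the cohort comparison, the sketch below computes the relative gain of one cohort over another. The per-developer scores are invented, and the actual product computes a richer composite across velocity and quality metrics:

```python
from statistics import mean

# Hypothetical composite productivity scores (0-100) for two cohorts
cohort_without_ai = [62, 58, 70, 65, 61]
cohort_with_ai = [71, 69, 78, 74, 70]

baseline = mean(cohort_without_ai)
treated = mean(cohort_with_ai)
gain_pct = (treated - baseline) / baseline * 100

print(f"Baseline cohort mean: {baseline:.1f}")
print(f"AI-assisted cohort mean: {treated:.1f}")
print(f"Relative productivity gain: {gain_pct:+.1f}%")
```

With invented numbers like these, the comparison is only directional; in a real report, cohort selection bias (who chose to adopt the tools) matters as much as the arithmetic, which is why the survey option is valuable.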

AI Productivity Insights relies on source code management (SCM) systems for metrics collection. Customers can seamlessly integrate their preferred SCM platforms through convenient one-click integrations. To gain insights into AI Coding Tool usage, the solution also offers one-click integrations with these tools, enabling comprehensive data collection and analysis across the development ecosystem.
Let us know you are interested. We'd love to show you more and hear your feedback.
By integrating ServiceNow with Harness SEI, you can:
This integration provides a new data source for Harness SEI, enabling a more comprehensive and accurate measurement of your software delivery performance.
The SEI ServiceNow integration offers two authentication methods:
Choose the method that best suits your requirements and follow our ServiceNow integration help doc for detailed setup instructions.
This integration now allows you to monitor activity and measure crucial metrics regarding your change requests and incidents from the ServiceNow platform. You can consolidate reporting, combining ServiceNow data with other metrics from Harness SEI, and create customizable dashboards (i.e., Insights) that focus on the metrics most crucial to your team's success.
A key advantage of this integration is its robust support for DORA metrics such as Deployment Frequency, Change Failure Rate and Mean Time to Restore.
The DORA Mean Time To Restore metric helps you understand how quickly your team can recover from failures. By configuring a DORA Workflow Profile with the ServiceNow integration, you can precisely measure the time between incident creation and resolution.
This report measures the duration between incident creation and service restoration; in other words, it tracks the time from when the incident was opened to when it was closed.
With this information, you can set and track Mean Time to Restore (MTTR) goals, driving continuous improvement in your team's ability to address and resolve issues quickly.
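The calculation described above, the mean of (closed minus created) across incidents, can be sketched directly. The timestamps and record shape here are illustrative, not the ServiceNow or SEI schema.

```python
# Sketch: MTTR as the average duration from incident creation to closure.
from datetime import datetime, timedelta

def mean_time_to_restore(incidents: list[tuple[datetime, datetime]]) -> timedelta:
    """Average duration from incident creation to incident closure."""
    total = sum(((closed - created) for created, closed in incidents), timedelta())
    return total / len(incidents)

incidents = [
    (datetime(2024, 3, 1, 9, 0), datetime(2024, 3, 1, 13, 0)),   # 4 h
    (datetime(2024, 3, 2, 10, 0), datetime(2024, 3, 2, 12, 0)),  # 2 h
    (datetime(2024, 3, 3, 8, 0), datetime(2024, 3, 3, 14, 0)),   # 6 h
]
print(mean_time_to_restore(incidents))  # 4:00:00
```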

Understanding your deployment cadence is key to achieving continuous delivery. You can define DORA profiles using the ServiceNow integration for tracking how often you deploy. You have the flexibility to track deployments as either Change Requests or Incidents, though using Change Requests is recommended for more accurate deployment tracking.

The DORA Deployment Frequency report will display metrics on how often change requests are resolved. This enables you to perform trend analysis, helping you see how your change requests resolution frequency changes over time. With this information, teams can identify patterns and optimize their processes, moving towards a more efficient continuous delivery model.
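Counting resolved change requests per period, treating each resolution as one deployment, is the essence of this report. The sketch below buckets resolutions by ISO week; the dates and bucketing granularity are illustrative assumptions.

```python
# Sketch: deployment frequency as resolved change requests per ISO week.
from collections import Counter
from datetime import date

def deployments_per_week(resolved_dates: list[date]) -> dict[str, int]:
    """Bucket resolved change requests by ISO year and week."""
    counts = Counter(
        f"{d.isocalendar().year}-W{d.isocalendar().week:02d}"
        for d in resolved_dates
    )
    return dict(counts)

resolved = [date(2024, 3, 4), date(2024, 3, 6), date(2024, 3, 12)]
print(deployments_per_week(resolved))  # {'2024-W10': 2, '2024-W11': 1}
```

Plotting these weekly counts over time is the trend analysis the report provides.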
You can set up the DORA profile definition for Change Failure Rate to monitor failed deployments from the ServiceNow platform. This works by linking change requests to incidents: change requests represent the total deployments (a resolved change request means a completed deployment), while an incident linked to a resolved change request indicates a failure caused by that deployment.
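The linkage just described reduces to a simple ratio: resolved change requests with a linked incident, divided by all resolved change requests. A minimal sketch, with an assumed record shape rather than the real ServiceNow fields:

```python
# Sketch: change failure rate from change-request/incident linkage.

def change_failure_rate(change_requests: list[dict]) -> float:
    """Fraction of resolved change requests that caused an incident."""
    resolved = [cr for cr in change_requests if cr["resolved"]]
    failed = [cr for cr in resolved if cr["linked_incident"]]
    return len(failed) / len(resolved)

crs = [
    {"id": "CR1", "resolved": True, "linked_incident": False},
    {"id": "CR2", "resolved": True, "linked_incident": True},
    {"id": "CR3", "resolved": True, "linked_incident": False},
    {"id": "CR4", "resolved": True, "linked_incident": False},
]
print(change_failure_rate(crs))  # 0.25
```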

This integration bridges the gap between operational data in ServiceNow and development metrics in Harness SEI, providing a holistic view of the entire software delivery lifecycle.
With these insights at your fingertips, you can make more informed decisions, prioritize improvements effectively, and ultimately deliver better software faster and more reliably. Contact Harness Support to try this out today.
As a developer or development manager, you know how important it is to measure productivity. With your software development team racing against the clock to deliver a new feature, you're probably keen on boosting productivity and ensuring your team hits every milestone as planned. However, it's not uncommon for sprints to fail, and the process can break down in various ways.
When sprint results are broken, it can have a significant impact on the quality of the product being developed. One of the most significant challenges faced by developers working in agile environments is burnout. Developer burnout can occur when team members feel overwhelmed by the amount of work assigned to them during a sprint.
This can happen due to various reasons such as:
To avoid burnout, it's essential to plan sprints carefully, taking into account the team's capacity, skill sets, and potential roadblocks. Effective sprint planning involves setting achievable goals, prioritizing tasks based on their importance and urgency, estimating tasks accurately, allocating resources efficiently, and monitoring progress. To accomplish all of this, you need to have a clear understanding of your team's capabilities, strengths, and limitations.
By considering these factors and using relevant metrics, you can create a well-planned sprint that sets your team up for success and helps prevent burnout.
But with so many different metrics to choose from, it can be tough to know where to start. That's why we've put together this list of the top 3 sprint metrics to measure sprint success. These metrics are easy to understand and straightforward, and they will give you valuable insights into how your team is performing.
Developer churn in a sprint refers to the degree of change experienced in the set of tasks or work items allocated to a development team during a sprint cycle. More specifically, churn represents the total number of task additions, deletions, or modifications made after the initial commitment phase of the sprint. A higher level of churn indicates increased instability and fluctuation within the sprint scope, which often leads to several negative consequences impacting both productivity and morale.
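Per the definition above, churn is a count of scope changes made after the commitment point. A minimal sketch, with an assumed event shape rather than any real issue-tracker schema:

```python
# Sketch: sprint churn as the count of task additions, removals, and
# modifications recorded after the sprint's commitment snapshot.

def sprint_churn(events: list[dict], commitment_ts: int) -> int:
    """Total scope changes recorded after the commitment timestamp."""
    changes = {"added", "removed", "modified"}
    return sum(
        1 for e in events
        if e["timestamp"] > commitment_ts and e["type"] in changes
    )

events = [
    {"timestamp": 1, "type": "added"},     # before commitment: not churn
    {"timestamp": 5, "type": "added"},
    {"timestamp": 6, "type": "modified"},
    {"timestamp": 7, "type": "removed"},
]
print(sprint_churn(events, commitment_ts=2))  # 3
```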
For example, let's say your team is working on a new feature that requires several stages of development, including design, coding, testing, and review. If the tasks associated with this feature are modified more often than expected, it may indicate that there are issues with communication between teams or that certain stages of the process lack clarity. By tracking Developer Churn, you can pinpoint these issues and make changes to improve efficiency.
Another essential metric to track developer productivity is comparing what the team planned to deliver versus what they actually completed within a given sprint. This comparison offers an overview of the team's ability to commit and adhere to realistic goals while also revealing potential bottlenecks or process improvements needed.
Let's say your development team plans to complete 60 story points worth of work during a two-week sprint. At the end of the sprint, the team managed to complete only 50 story points. In this scenario, the "planned" value was 60 story points, but the "delivered" value was only 50 story points. This result indicates that there might be some challenges with estimating task complexity or managing time constraints.
The difference between the planned and delivered values could trigger discussions about improving estimation techniques, setting more realistic targets, or identifying any obstacles hindering the team from meeting its goals. Over multiple sprints, tracking these metrics will provide insights into whether the gap between planned and delivered values decreases over time, indicating improvement in productivity and efficiency.
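Using the 60-point versus 50-point example above, the comparison boils down to a completion rate and a gap. A small sketch:

```python
# Sketch: planned vs. delivered story points for a sprint.

def completion_rate(planned_points: int, delivered_points: int) -> float:
    """Fraction of the sprint commitment actually delivered."""
    return delivered_points / planned_points

planned, delivered = 60, 50
print(f"{completion_rate(planned, delivered):.1%}")  # 83.3%
print(planned - delivered, "story points slipped")   # 10 story points slipped
```

Tracking this rate across sprints shows whether the gap is narrowing over time.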
Velocity is a measure of how much work your team completes during a given period, usually a sprint or iteration. It's calculated by summing up the story points completed during a sprint. Velocity helps you understand how much work your team can handle in a given period and allows you to plan future sprints accordingly.
For example, if your team has a velocity of 50 story points per sprint, you know that you can expect them to complete around 50 story points worth of work in a two-week sprint. This information can help you prioritize tasks and allocate resources effectively, ensuring that your team stays on track and delivers quality results.
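A common way to use velocity for planning, as in the example above, is to average the points completed over recent sprints. A minimal sketch with illustrative numbers:

```python
# Sketch: average velocity over recent sprints as a capacity estimate.

def velocity(completed_points_per_sprint: list[int]) -> float:
    """Average story points completed per sprint."""
    return sum(completed_points_per_sprint) / len(completed_points_per_sprint)

recent = [48, 52, 50]  # last three sprints
print(velocity(recent))  # 50.0 -> plan roughly 50 points for the next sprint
```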
Measuring these metrics accurately is crucial to gain meaningful insights into your team's performance and identify areas for improvement.
Here are some ways to measure these metrics accurately using Harness SEI:




By using these reports on Harness SEI, you can measure sprint metrics accurately and gain insights into your team's performance.
To learn more, schedule a demo with our experts.


Executives often ask a crucial question: "What value is your team bringing to the organization?" As an engineering team, you should develop your own metrics to demonstrate your team's growth and contributions, just as marketing and sales have their own metrics for deals and leads.
This blog will explain the benefits of creating and managing a developer metrics dashboard, which can help you gain insight into your engineering team's work and identify areas that require attention. We will examine the problems with outdated tools for measuring developer productivity and provide solutions to overcome them, so you can accurately assess the business value your engineering team brings.
Understanding the health and productivity of your team is essential for any engineering organization. To achieve this, you can use Developer Insights to show your team's value and performance through metrics. Like a player's career graph, these dashboards show how efficient and productive your developers are.
Having reliable, up-to-date, organized, user-friendly data with the right measurements is crucial. Although many teams use metrics, only a few use the right ones. Choosing the right metrics is crucial for understanding your team's productivity and efficiency accurately.
Here are the top four metrics that can help executives understand your organization's true status.
Measuring development efficiency is important. Cycle time is a key metric that gives a brief overview of all stages involved. But only looking at the total number can be limiting, as there might be many reasons for a long cycle time.
To better understand the problem and find its main cause, it's best to divide the process into different stages.
This process involves several stages: the time to make the first commit, the time to create a Pull Request, the activity and approval time within the PR, and finally the time to merge the item into the main codebase. By analyzing each stage separately, you can identify the specific areas where your development process struggles, then make a plan to fix the problems that are slowing down your team's productivity.
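Given timestamps for each milestone, the per-stage breakdown is just a series of differences. This is a sketch with assumed stage names following the text above, not an actual SEI API:

```python
# Sketch: cycle time broken into consecutive stages, in hours.
from datetime import datetime

def stage_durations(ts: dict[str, datetime]) -> dict[str, float]:
    """Hours spent in each consecutive stage of the development cycle."""
    order = ["work_started", "first_commit", "pr_created", "pr_approved", "merged"]
    return {
        f"{a} -> {b}": (ts[b] - ts[a]).total_seconds() / 3600
        for a, b in zip(order, order[1:])
    }

ts = {
    "work_started": datetime(2024, 5, 1, 9, 0),
    "first_commit": datetime(2024, 5, 1, 15, 0),
    "pr_created":   datetime(2024, 5, 2, 11, 0),
    "pr_approved":  datetime(2024, 5, 3, 10, 0),
    "merged":       datetime(2024, 5, 3, 12, 0),
}
for stage, hours in stage_durations(ts).items():
    print(f"{stage}: {hours:.0f} h")
```

Here the PR approval stage (23 hours) dominates the total, which would point the team at review latency rather than coding time.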

Workload is the term used to describe the number of tasks that a developer is handling at any given time. When a developer has too many tasks, they may switch between them frequently. This frequent switching can lower productivity and eventually lead to burnout.
You can track the amount of work assigned to developers and in progress. This will help you determine who is overloaded. You can then adjust priorities to avoid harming productivity.
Moreover, tracking active work can help you determine whether your team's tasks align with your business goals. You can use this information to reorganize priorities and ensure that your team is working efficiently towards your goals.
Studies show that smaller pull requests from developers help reduce cycle time. This may come as a surprise, but it makes sense once you think about it.
Reviewers are more inclined to promptly handle smaller PRs as they are aware that they can finish them more swiftly. If you notice that the pickup and review times for your team's PRs are taking too long, try monitoring the size of the PRs. Then, you can help developers keep their PRs within a certain size, which will reduce your cycle time.
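One simple way to monitor this is to bucket PRs by size and compare average review time per bucket. The size threshold and record shape below are illustrative assumptions, not SEI defaults:

```python
# Sketch: average review hours for small vs. large pull requests.

def avg_review_hours_by_size(prs: list[dict], small_max: int = 200) -> dict[str, float]:
    """Average review time per size bucket, splitting at small_max lines changed."""
    buckets: dict[str, list[float]] = {"small": [], "large": []}
    for pr in prs:
        key = "small" if pr["lines_changed"] <= small_max else "large"
        buckets[key].append(pr["review_hours"])
    return {k: sum(v) / len(v) for k, v in buckets.items() if v}

prs = [
    {"lines_changed": 80,  "review_hours": 3.0},
    {"lines_changed": 150, "review_hours": 5.0},
    {"lines_changed": 900, "review_hours": 26.0},
]
print(avg_review_hours_by_size(prs))  # {'small': 4.0, 'large': 26.0}
```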
Rework refers to any changes made to existing code, regardless of its age, including alterations, fixes, enhancements, or optimizations. Rework metrics let developers measure the amount of change made to existing code and assess code stability, change frequency, and development efficiency.
By measuring the amount of changes made to existing code, developers can assess the quality of their development efforts. They find code problems, improve development, and prevent future rework.
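A common formulation of this measurement is the share of changed lines that touch pre-existing code rather than add new code. The commit record shape here is an illustrative assumption:

```python
# Sketch: rework ratio = lines modifying existing code / all changed lines.

def rework_ratio(commits: list[dict]) -> float:
    """Fraction of changed lines that modify existing code."""
    reworked = sum(c["existing_lines_changed"] for c in commits)
    total = sum(c["existing_lines_changed"] + c["new_lines"] for c in commits)
    return reworked / total

commits = [
    {"new_lines": 120, "existing_lines_changed": 30},
    {"new_lines": 60,  "existing_lines_changed": 90},
]
print(f"{rework_ratio(commits):.0%}")  # 40%
```

A rising ratio over time can flag unstable areas of the codebase worth deeper investigation.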
As the common adage suggests, acknowledging that you have a problem is the first step toward improvement. However, it's equally important to identify the problem accurately; otherwise, improvement will be impossible.
This is especially true for software teams. Complicated processes in an engineering team can easily fail and finding the main problem is often difficult. That's where metrics come in.
A Developer Insight (i.e. the Dashboard) displays your engineering team's progress and helps identify areas where developers may struggle. By identifying the problem areas, you can provide solutions to improve the developer experience, which ultimately increases their productivity.
Even with the best metrics, a dashboard needs to be accurate, current, unified, and easy to understand; otherwise, it may not be very useful.
Harness SEI can help you create an end-to-end developer insight (i.e. Dashboard) with all the necessary metrics. The distinguishing factor of Harness SEI is its ability to link your git data and Jira data together. This helps you understand how your development resources are used, find obstacles for developers, and evaluate your organization's plan efficiency.
Once you understand what's going on with your teams, you can set targets to create an action plan for your developers. For example, you can reduce your PR sizes.
You can also use various reports on Harness SEI to measure and track your cycle time and lead time.
By providing a comprehensive set of essential parameters, including code quality, code volume, speed, impact, proficiency, and collaboration, SEI enables engineering teams to gain deeper insights into their workflows.
The Trellis Score, a proprietary scoring mechanism developed by SEI, offers an effective way to quantify team productivity. With this information at hand, engineering teams can leverage SEI Insights to pinpoint areas requiring improvement, whether they relate to people, processes, or tools. Ultimately, SEI empowers organizations to optimize their development efforts, leading to increased efficiency and higher-quality outputs.
To learn more, schedule a demo with our experts.


In the ever-evolving landscape of software development, the significance of producing high-caliber code is undeniable. This is where Harness Software Engineering Insights (SEI) shines, guiding teams toward elevated software quality, enhanced productivity, and overall excellence. Here, we delve deep into the pivotal role of SEI's Quality Module in aiding teams to gauge, supervise, and uplift their code quality.
The Trellis Framework: At the heart of SEI's transformative potential is the industry-validated Trellis Framework. This intricate design provides a comprehensive analysis of over 20 factors from various Software Development Life Cycle (SDLC) tools, enabling teams to efficiently track and optimize developer productivity.

Lagging indicators are retrospective measures, offering insights into past performance. Let's break down these metrics:
Defect Escape Rate: This metric, crucial for understanding production misses, measures the percentage of defects that go undetected during testing and reach the customer in production. A higher defect escape rate can signal poor quality control, leading to customer dissatisfaction.
Escapes per Story Point or Ticket: This indicates the number of defects per unit of work delivered. An elevated number here can point to quality lapses in development.
Change Failure Rate: This metric measures the percentage of changes leading to failures, indicating the robustness of the product.
Severity of Escapes: This highlights the seriousness of defects, with higher severity demanding urgent attention.
APM Tools - Uptime: These measure product availability and performance; a higher uptime percentage is indicative of good product quality.
Customer Feedback: Direct customer feedback, both positive and negative, provides valuable insights into product quality.
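The first two lagging indicators above reduce to simple ratios. This sketch states them in general terms with illustrative counts; the function names are assumptions, not SEI APIs:

```python
# Sketch: defect escape rate and escapes per story point.

def defect_escape_rate(escaped: int, caught_pre_release: int) -> float:
    """Share of all known defects that reached the customer."""
    return escaped / (escaped + caught_pre_release)

def escapes_per_story_point(escaped: int, story_points_delivered: int) -> float:
    """Escaped defects per unit of work delivered."""
    return escaped / story_points_delivered

print(f"{defect_escape_rate(5, 45):.0%}")   # 10%
print(escapes_per_story_point(5, 250))      # 0.02
```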
Leading indicators predict current or future performance. We explore these further:
SonarQube Issues: This includes code smells, vulnerabilities, and code coverage. Issues flagged here can indicate quality concerns in the codebase.
Coverage % by Repos: Evaluating code coverage percentage across various repositories.
Automation Test Coverage: A higher percentage here suggests a robust, reliable product.
Coding Hygiene: Measures such as code reviews and comments improve code maintainability and reduce defect risks.
Program Hygiene: This includes acceptance criteria and clear documentation to ensure the product meets requirements.
Development vs Test Time Ratio: A balanced ratio is crucial for product quality.

Automated Test Cases by Type: Categorizing test cases into functional, regression, performance, or destructive types.

Test Cases by Scenario: Differentiating between positive or negative scenarios.
Automated Test Cases by Component View: Providing a component-wise breakdown.

TestRail Test Trend Report - Automation Trend: Showcasing the trend of total, automated, and automatable test cases.

SEI's architecture integrates with CI/CD tools, offering over 40 third-party integrations. This structured approach aids in goal-setting and decision-making, driving teams towards engineering excellence.
Beyond metrics, SEI assists in resource allocation optimization, aligning resources with business objectives for efficient project delivery.
SEI’s dashboards provide a holistic view of the software factory, highlighting key metrics and KPIs for better collaboration and workflow management.
Harness Software Engineering Insights, with its Quality module, stands as a beacon for development teams, combining metrics, insights, and tools for superior code quality. To learn more, schedule a demo with our experts.
In the realm of software development, ensuring a robust and streamlined Software Development Life Cycle (SDLC) is paramount. While many focus on the technical intricacies and methodologies, there's an underlying aspect that holds equal importance: hygiene in SDLC processes. This blog delves into the significance of hygiene within SDLC and how it can pave the way for deriving valuable insights and metrics.
In essence, hygiene in SDLC refers to the practices, procedures, and protocols adopted to maintain the integrity, reliability, and effectiveness of the software development process. It is important to maintain hygiene across all aspects of the SDLC. It starts at the very beginning, with understanding and documenting requirements from various stakeholders, and then flows through the phases from design to implementation, where decisions are made according to best practices and code quality is maintained.
Hygiene in SDLC serves as a foundational pillar that significantly influences the quality, reliability, and sustainability of software solutions. By emphasizing standardized practices, fostering cross-functional collaboration, and proactively addressing risks, hygiene paves the way for delivering software solutions that are robust, secure, and aligned with stakeholder expectations. This adherence not only enhances software quality and security but also fosters a culture of excellence, innovation, and accountability.
Maintaining hygiene in SDLC is not merely about adherence to protocols; it's about fostering a culture of excellence and continuous improvement. Here's how hygiene directly contributes to generating valuable insights:
As the saying goes, "only the things that get measured can be improved." This underscores the importance of establishing metrics and benchmarks to enhance hygiene within the Software Development Life Cycle (SDLC). By systematically evaluating and optimizing key areas, organizations can foster a culture of excellence and continuous improvement.


To harness the full potential of SDLC hygiene, organizations must cultivate a culture that prioritizes quality, collaboration, and continuous improvement. This entails:
Hygiene in SDLC is not a mere procedural aspect; it's a foundational pillar that underpins the success and sustainability of software development endeavors. By prioritizing hygiene and leveraging it as a catalyst for generating valuable insights and metrics, organizations can navigate the complexities of software development with confidence, agility, and foresight.
To explore how SEI can transform your software development process, we invite you to schedule a demo with our experts.