
Today, we're thrilled to announce a significant leap forward in our commitment to AI-driven innovation. Harness, a leader in AI-native software delivery, is proud to introduce three powerful AI agents designed to transform how teams create, test, and deliver software.
Since introducing Continuous Verification in 2018, Harness has been at the forefront of leveraging AI and machine learning to enhance software delivery processes. Our latest announcement reinforces our position as an industry pioneer, offering a comprehensive suite of AI-powered tools that address critical challenges across the entire software delivery lifecycle (SDLC).
Our vision is a multi-agent architecture embedded directly into the fabric of the Harness platform. We’re building a powerful library of ‘assistants’ designed to make software delivery faster, more efficient, and more enjoyable for developers. These AI-driven agents will work seamlessly within our platform, handling everything from automating complex tasks to providing real-time insights, freeing developers to focus on what they do best: creating innovative software.
Let's explore the capabilities of these new AI agents and see how they will reshape the future of software delivery.
The Harness AI QA Assistant is a game-changer in the world of software testing. This generative AI agent is purpose-built to simplify end-to-end automation and accelerate the transition from manual to automated testing. End-to-end tests have long been plagued by slow authoring experiences that yield brittle tests, which must be reworked every time the UI changes.

By harnessing the power of AI, this assistant offers a range of benefits that can dramatically improve your testing processes.
Sign up today for early access to the AI QA Assistant.
Crafting pipelines can be challenging. You need to consider your core build and deployment activities, as well as best practices around security scans, testing, quality gates, and more. The new Harness AI DevOps Assistant will make creating great pipelines much easier.

The introduction of the AI DevOps Assistant marks a significant milestone in our mission to simplify and streamline the software delivery process for the world’s developers. By automating complex tasks and providing intelligent insights, this capability empowers teams to focus on innovation rather than getting bogged down in pipeline management intricacies.
Sign up today for early access to the AI DevOps Assistant.
The Harness AI Code Assistant accelerates developer productivity by streamlining coding processes and providing instant access to relevant information. This intelligent tool integrates seamlessly into the development workflow, offering a range of features that enhance coding efficiency and quality.

The Harness AI Code Assistant is more than just a coding tool; it's a comprehensive solution that enhances developer productivity, improves code quality, and fosters a more efficient and collaborative development environment. The AI Code Assistant is available today for all Harness customers at no additional charge.
Software delivery is changing fast. Generative AI has helped organizations code faster than ever. The rest of the delivery pipeline must keep up to take full advantage of these efficiencies.
These tools, the Harness AI QA Assistant, AI DevOps Assistant, and AI Code Assistant, represent more than just technological advancements. They embody a shift in how we approach software development, testing, and delivery. By automating routine tasks, providing intelligent assistance, and offering deep insights into development processes, these AI agents eliminate toil, freeing up human creativity and expertise to focus on solving complex problems and driving innovation.
As we move forward, the integration of AI into software delivery processes will become increasingly crucial for organizations looking to maintain a competitive edge. The ability to deliver high-quality software faster, more reliably, and with greater insight will be a key differentiator in the digital marketplace.
Harness is committed to leading this AI-driven transformation of the software delivery landscape. We invite you to join us on this exciting journey toward a future where AI and human expertise work in harmony to create exceptional software experiences.
Stay tuned for more updates as we continue to innovate and shape the future of software delivery. If you want to try any of these capabilities early, sign up here.
Check out Event: Revolutionizing Software Testing with AI
Check out Harness AI Code Agent
Explore more resources: 3 Ways to Optimize Software Delivery and Operational Efficiency



Engineering organizations today don’t lack data—they lack clarity. Delivery timelines, developer activity, and code quality metrics are scattered across systems, making it hard to answer simple but critical questions: Where are we losing time? Are we investing in the right work? Who needs support or coaching?
This is where Harness Software Engineering Insights (SEI) steps in. Unlike traditional dashboards, SEI offers opinionated, role-based insights that connect engineering execution with business value.
In this post, we’ll walk through a proven rollout framework, real customer success stories, and a practical guide for any organization looking to implement an engineering metrics program (EMP) that actually drives impact.

Rolling out SEI without a clear objective is like configuring CI/CD pipelines without deployment goals. Before diving into dashboards or metrics, align internally on what you’re trying to improve.
Most organizations’ objectives fall into a handful of common categories.
💡 A powerful first step is simply asking: What are the top 3 decisions you wish you could make with data but currently can't?

Once your objectives are clear, it’s time to define the key performance indicators (KPIs) that reflect progress. At Harness, we recommend starting with five core metrics that align with your goals.
These metrics aren’t just about numbers—they tell a story. And SEI’s pre-built dashboards help visualize that story from day one.

Out-of-the-box data isn’t enough—you need context. SEI allows deep configuration across integrations, people, and workflows to ensure accuracy and actionability.
Start with the essentials: Jira or ADO (issue tracking), GitHub or Bitbucket (SCM), Jenkins or Harness CI (build/deploy). Validate data ingestion and set up monitoring for failed syncs.
Merge developer identities across systems and tag them with meaningful metadata: Role, Team, Location, Manager, and Employee Type (FTE, contractor). This enables advanced filtering, benchmarking, and team-level coaching.
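To make identity merging concrete, here is a minimal sketch of the idea. This is not SEI's actual schema or API; the record fields and `Contributor` model are hypothetical, chosen only to illustrate collapsing per-tool identities into one tagged person keyed by email:

```python
from dataclasses import dataclass, field

@dataclass
class Contributor:
    """One person, unified across tools (illustrative model, not SEI's schema)."""
    email: str                                  # canonical key used to merge identities
    aliases: set = field(default_factory=set)   # e.g. GitHub login, Jira account id
    role: str = ""
    team: str = ""
    location: str = ""
    manager: str = ""
    employee_type: str = "FTE"                  # FTE or contractor

def merge_identities(records):
    """Collapse per-tool identity records into one Contributor per email."""
    people = {}
    for rec in records:
        email = rec["email"].lower()            # normalize the merge key
        person = people.setdefault(email, Contributor(email=email))
        person.aliases.add(rec["alias"])
        # any metadata present on a record overrides the defaults
        for attr in ("role", "team", "location", "manager", "employee_type"):
            if rec.get(attr):
                setattr(person, attr, rec[attr])
    return people

records = [
    {"email": "ada@example.com", "alias": "ada-gh", "team": "Platform", "employee_type": "FTE"},
    {"email": "Ada@example.com", "alias": "ada.jira", "role": "Senior Engineer"},
]
people = merge_identities(records)
# one merged contributor carrying both aliases and the combined metadata
```

Once every contributor carries Role, Team, Location, Manager, and Employee Type tags, filtering and benchmarking by any of those dimensions becomes a simple lookup.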
Use Asset-Based Collections for things like repositories or services (ideal for DORA/Sprint metrics) and People-Based Collections for teams, departments, or geographies (perfect for Dev Insights, Trellis, and Business Alignment).
SEI lets you build custom profiles for DORA metrics, Business Alignment, and Trellis. These profiles allow you to set your own definitions for “Lead Time,” “MTTR,” or what constitutes “New Work.” Configurable widgets ensure the insights match your team’s workflows—not the other way around.

One of SEI’s most valuable capabilities is persona-based reporting. Not every stakeholder needs to see every metric. Instead, create tailored views based on what matters to them.
| Persona | Primary Metrics | Cadence |
|---|---|---|
| CTO / VP Engineering | DORA, Effort Allocation, Innovation % | Quarterly |
| Director of Engineering | Sprint Trends, PR Cycle Time, MTTR | Monthly |
| Engineering Manager | Coding Days, PR Approval Rate, Rework | Weekly |
| Scrum Master / TPM | Commit-to-Done, Scope Creep, Sprint Hygiene | Weekly/Daily |
| Product Manager | Feature Delivery Lead Time, KTLO vs. New Work | Bi-weekly |
By aligning metrics to what stakeholders actually care about, you reduce dashboard fatigue and increase engagement.

Rolling out dashboards isn’t enough—you need cadence and accountability.
Successful SEI customers establish regular reviews on a defined cadence.
Each dashboard or collection should have an owner, responsible for interpreting and acting on the insights.

Once the foundation is in place, go deeper. SEI allows you to scale insight delivery across the entire organization.
This is how SEI becomes more than a dashboard—it becomes your engineering operating system.

Data without goals is directionless. Use SEI to establish stretch goals tied to organizational outcomes.
Because SEI continuously measures these metrics, you can track OKR progress in real time.

🧭 Objective:
Improve engineering velocity without compromising security or code quality, while ensuring more effort is spent on new feature development.
📈 Key Results:
💥 Impact:
By using SEI’s Dev Insights and Business Alignment dashboards, the customer was able to shift engineering focus toward innovation. Reducing the backlog of unapproved PRs improved code review discipline, while faster PR cycle times helped the team deliver secure, high-quality features faster.

🧭 Objective:
Accelerate delivery cadence, reduce lead times, and establish a baseline for operational resilience across distributed teams.
📈 Key Results:
💥 Impact:
SEI enabled visibility into every stage of the SDLC — from PRs to production. Dashboards helped engineering leadership identify workflow bottlenecks, while improved cycle time allowed the team to launch features continuously. The organization was also able to define new goals around MTTR reduction for future sprints.

🧭 Objective:
Improve release predictability, reduce change failure rates, and maintain quality during large-scale technology transformations.
📈 Key Results:
💥 Impact:
Using SEI’s DORA and Sprint Insights dashboards, engineering teams surfaced high-risk areas and improved review discipline. Leadership used Business Alignment reports to visualize time allocation, allowing them to rebalance priorities between legacy maintenance and innovation initiatives — critical for de-risking digital transformation.

🧭 Objective:
Improve collaboration and execution within hybrid teams (FTEs and contractors), while accelerating delivery with fewer blockers.
📈 Key Results:
💥 Impact:
SEI helped the customer restructure their hybrid engineering model by revealing top contributors, low-collaboration patterns, and team-specific bottlenecks. By tagging contributors by type, team, and location, the organization realigned review ownership and improved handoff speed across distributed groups.

🧭 Objective:
Reduce production risk while accelerating feature releases in a highly agile environment.
📈 Key Results:
💥 Impact:
SEI’s DORA metrics helped the team move from reactive issue management to proactive release planning. With improved scope hygiene and PR discipline, the organization was able to deliver features at a faster pace while maintaining platform stability — a crucial balance in gaming environments where user experience is paramount.

🧭 Objective:
Speed up secure development without compromising engineering discipline or quality during rapid team expansion.
📈 Key Results:
💥 Impact:
The customer used SEI to quantify the tradeoff between speed and review quality. By highlighting areas with excessive unapproved PRs and scope creep, the team set up opinionated OKRs to strike a balance between velocity and sustainability. Trellis and Dev Insights dashboards were used to coach developers and improve overall workflow consistency.
The most successful engineering organizations don’t just collect metrics—they operationalize them. Harness SEI enables your teams to go beyond dashboards and build a culture of insight, accountability, and impact.
By following a structured rollout, aligning metrics to personas, and setting outcome-focused OKRs, SEI can become the backbone of your engineering excellence strategy.
About the Author
Adeeb Valiulla leads the Quality Assurance & Resilience, Cost & Productivity function at Harness, where he works closely with Fortune 500 customers to drive engineering efficiency, improve developer experience, and align software delivery efforts with business outcomes. With a focus on measurable insights, Adeeb helps organizations turn engineering data into actionable intelligence that fuels continuous improvement. He brings a unique blend of technical depth and strategic vision, helping teams unlock their full potential through data-driven transformation.


Every so often, a piece of research lands in your inbox that makes you pause and think, “Yeah, this is exactly what I’ve been seeing, but couldn’t articulate.”
Microsoft’s recent study, “Time Warp: The Gap Between Developers’ Ideal vs Actual Workweeks in an AI-Driven Era” is that kind of read. It maps a disconnect I’ve heard developers vent about in 1:1s and retro meetings: the constant struggle between what they’re doing and what they wish they were doing.
In this article, we talk about that exact gap. And more importantly, what we as product leaders can learn from it.
Let’s start with the uncomfortable truth. Developers are spending a surprising amount of time not developing, and Microsoft’s study puts numbers to it.
None of this is shocking if you’ve worked with engineers up close. But seeing it quantified is a reality check. We’ve built org structures and workflows that slowly chip away at the flow state.

When developers talk about their “perfect week,” it’s surprisingly consistent.
They want more time for deep work, heads-down coding, solving real problems, and making architecture decisions that actually move the product forward.
They want collaboration, but the kind that’s quick, intentional, and actually helps, not an endless stream of pings, meetings, and status updates. They’re not asking to go off into a cave. They still value teamwork. But the ask is simple: less noise, more impact.
When there’s a big gap between how their week actually goes vs. how they wish it would, satisfaction drops. It’s not just about efficiency, it’s about identity. Developers want to feel like builders, not just operators moving tickets from “In Progress” to “Done.” That’s what’s really at stake when we talk about developer experience.
One of the most interesting parts of Microsoft's research is how it frames AI not as some future disruption, but as a tool developers are already leaning on today to reclaim their time. Developers who regularly use AI tools (like GitHub Copilot, code assistants, and auto-summarization tools) are seeing a closer match between how they want to spend their week and how they actually spend it. Not because AI is doing their jobs for them, but because it’s helping clear the clutter.
That’s the real product insight: AI isn’t just another feature you bolt onto a dev tool. When it’s done right, it acts as a force multiplier. It automates the repetitive, low-value work that usually derails focus so developers can stay in their flow state longer. But the flip side is just as important: if AI adds complexity or noise, it becomes another source of interruption. We have to be deliberate about where and how we apply it.
It’s really tempting to jump straight into solution mode. But the first move isn't to fix it. It’s to understand.
Before we throw new tools, new processes, or new initiatives at the problem, we need to take a real, honest look at what the developer experience actually feels like today.

Here’s a simple way to break it down. We recommend starting with a simple three-step approach.
Start with the basics. You don’t need dozens of dashboards; what you need is a few key signals that reveal where energy is leaking.
At Harness, we believe in anchoring to DORA metrics first because they tell you whether your team is predictably delivering value.
DORA gives you the high-level outcomes but not the full story. Once you establish that baseline, drill deeper into flow metrics to uncover where energy is leaking day-to-day.
And just as important: Pair quantitative metrics with qualitative feedback.
Look for the early warning signs: Long review queues. Increased context-switching. Meetings that nobody wants but everyone attends. Friction shows up before velocity drops; you just have to know where to look.
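One of those early warning signs, long review queues, is straightforward to monitor automatically. As a sketch of what such a check could look like (the PR record fields here are illustrative, not any specific tool's API):

```python
from datetime import datetime, timedelta

def stale_reviews(open_prs, now, max_wait_hours=24):
    """Flag PRs whose review has been pending longer than the threshold.

    open_prs: list of dicts with 'id' and 'review_requested_at' (datetime).
    The schema is a hypothetical example, not a real SCM API response.
    """
    threshold = timedelta(hours=max_wait_hours)
    return [pr["id"] for pr in open_prs
            if now - pr["review_requested_at"] > threshold]

now = datetime(2024, 5, 10, 12, 0)
prs = [
    {"id": 101, "review_requested_at": datetime(2024, 5, 8, 9, 0)},   # ~51h waiting
    {"id": 102, "review_requested_at": datetime(2024, 5, 10, 9, 0)},  # 3h waiting
]
# stale_reviews(prs, now) -> [101]
```

A daily run of a check like this surfaces friction while it is still a queue problem, before it shows up as a velocity drop.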
Numbers alone aren’t enough. Metrics tell you what’s happening but they’ll never tell you why.
You might see a spike in pull request cycle times.
Is it because teams are slacking off?
Or because reviewers are spread across too many projects?
Or because no one knows who’s responsible for the next action?
You need real conversations. You need to hear the "why" directly from the people living it every day.
Treat data as a conversation starter, not a final answer. The goal isn’t just to measure experience, it's to understand it.
Once you understand the landscape, act deliberately.
The goal isn’t more activity; it’s more effective workweeks.
This is exactly the problem we built Harness Software Engineering Insights (SEI) to solve.
We don’t believe metrics alone fix anything.
Our real North Star isn’t a better report card. It’s a better developer week.
Metrics are just the starting line. The real value comes from creating environments where developers spend more time building, solving, and innovating and less time stuck in endless loops of coordination and rework.
Building great products isn’t just about shipping roadmaps and features faster.
It’s about building environments and systems where people can do their best, most meaningful work.
If our developers feel stuck in a “Time Warp,” consumed by a week full of meetings, blockers, and busy work, then no amount of AI, velocity tracking, or sprint burndowns will fix morale.
The next frontier isn’t just shipping faster. It’s about helping developers reclaim better weeks. And this is where leadership plays a critical role. With Harness SEI, we empower leaders with crystal-clear visibility into how their teams actually work, highlighting where flow is breaking down, where friction builds, and where leaders can step in to remove barriers. The goal isn’t just to optimize metrics; it’s to free developers to do what they love best: building, creating, and solving meaningful problems.
Helping developers close the gap between the real and the ideal week isn’t just about improving productivity metrics. It’s about restoring a sense of purpose, ownership, and flow, the very things that make engineering such a creative, energizing craft. And I think that’s a future worth obsessing over.
Learn more: The causes of developer downtime and how to address them


Engineering leadership used to be about gut feel, strong opinions, and shipping fast. But that playbook is expiring—quickly.
The world we’re building software in today is fundamentally different. Economic pressure, AI disruption, rising complexity, and the demand for hyper-efficiency have converged. Old-school metrics, instinct-led prioritization, and managing by velocity charts won’t cut it.
What today’s engineering leaders need isn’t more dashboards. They need clarity. They need trust. They need a new way to lead.
And most of all? They need to stop guessing.

You shouldn’t have to start every leadership meeting explaining what your teams are working on, why something slipped, or where time is going.
With Harness Software Engineering Insights (SEI), you don’t guess. You know.
You see where bottlenecks are forming. You know when PRs are aging in silence. You understand whether your teams are overcommitted, burned out, or executing beautifully. You know the tradeoffs being made between tech debt, features, and KTLO—before someone asks.
SEI replaces opinions with insight. It surfaces the friction you can’t see in a sprint report, and helps you make smarter decisions based on what’s actually happening—not what you hope is happening.
Because in the new era of engineering, clarity is leadership.

But when you only measure output—story points, releases, burnup—you miss the nuance. You miss the tradeoffs. You miss the why behind the work.
Harness SEI helps leaders tell the complete story.
This is the story your CFO, CPO, and CEO need to hear—not how many tickets you closed last sprint.
Engineering deserves to be understood. SEI makes it possible.

Let’s be honest: we’re no longer in a “hire at all costs” era. Efficiency is the new growth, and the mandate is clear.
And that’s not a burden—it’s an opportunity.
With Harness SEI, leaders can finally quantify engineering capacity, align work with outcomes, and invest where it matters most. You can see which teams are stretched too thin, where tech debt is slowing you down, and which initiatives are driving measurable business value.
This isn’t about pushing harder. It’s about working smarter, leading sharper, and delivering more strategically.

Great engineering happens when teams have clarity, focus, and space to build. But too often, they’re stuck in the weeds—fighting fires, filling out status reports, and guessing what matters.
With SEI, that changes.
This frees up energy for real engineering. It protects time for hackathons, R&D spikes, creative sprints—the things that move the business forward and keep developers fulfilled.
Because in a world full of AI and automation, the one thing we can’t afford to lose is human creativity.
SEI helps you protect it—by getting rid of everything that wastes it.

Burnout doesn’t start with bad code. It starts with bad leadership.
When developers don’t know where their work is going, why it matters, or what success looks like, morale suffers. When they’re forced to do status updates instead of shipping, they disengage. When PRs sit for days, they lose momentum.
SEI enables developers to see how their work connects to outcomes. It enables faster feedback, less friction, and clearer focus.
And for leaders? It means fewer surprises, better retention, and more meaningful 1:1s.

The best engineering leaders of the next decade won’t just be great technologists; they’ll be clear communicators, business strategists, and defenders of engineering best practices.
They’ll lead with data, empathy, and decisiveness.
They’ll connect effort to impact.
They’ll stop guessing. And they’ll lead better because of it.
If you're ready to lead in this new era, Harness SEI is your competitive advantage.


For too long, engineering has been seen as a black box—an opaque function that takes in business requirements and delivers software without clear visibility into the process. But in today’s data-driven, business-first world, engineering leaders must do more than execute; they must influence, align, and communicate with executive peers to drive business outcomes.
CTOs, VPs of Engineering, and other technical leaders who can effectively translate engineering metrics into business impact gain a seat at the strategic table. Instead of reacting to business requests, they help shape company priorities, resource allocation, and long-term growth strategies.
But here’s the challenge: Traditional engineering metrics don’t resonate with executives. Story points, commit counts, and deployment logs mean little to a CFO, CMO, or CEO. To gain influence, engineering leaders need to frame their work in business terms—think predictability, customer impact, cost efficiency, and revenue acceleration.
That’s where Harness Software Engineering Insights (SEI) comes in. SEI transforms engineering metrics into clear, actionable insights that bridge the gap between technical execution and business strategy. This blog will show you how to use SEI to speak the language of executives, drive cross-functional alignment, and elevate engineering’s strategic role in your organization.
Before presenting engineering metrics, it’s critical to understand what matters to your executive peers. Different leaders prioritize different business drivers, and aligning your communication style accordingly makes your insights more relevant and impactful.

| Executive | Key Priorities | How Engineering Metrics Apply |
|---|---|---|
| CEO (Chief Executive Officer) | Revenue growth, competitive differentiation, innovation | Engineering’s impact on faster time-to-market, scalability, and business alignment |
| CFO (Chief Financial Officer) | Cost efficiency, budget predictability, ROI | Engineering capacity, cost of technical debt, and efficiency improvements |
| CRO (Chief Revenue Officer) | Sales velocity, customer retention, revenue expansion | Feature delivery timelines, system reliability, customer-impacting defects |
| CPO (Chief Product Officer) | Product roadmap execution, user experience, feature adoption | Lead Time for Change, deployment frequency, engineering capacity for innovation |
| CMO (Chief Marketing Officer) | Digital transformation, campaign execution, website/app performance | Site reliability, system uptime, infrastructure scalability, release predictability |
🔹 Takeaway: Before presenting engineering data, frame it in terms of the business goals that resonate with each executive stakeholder.
Many engineering leaders fall into the trap of reporting on vanity metrics—like total commits, number of deployments, or story points completed—without connecting them to business outcomes.
The key is choosing the right metrics that executives care about. Harness SEI helps track engineering performance across three core areas.

Let’s explore which SEI metrics best support each area.
🎯 How to Communicate It: “Over the past quarter, engineering has improved on-time delivery from 67% to 85%, reducing last-minute delays and improving cross-team alignment.”
🎯 How to Communicate It: “Currently, 54% of engineering work is dedicated to new feature development, while 32% is spent on maintenance and 14% on technical debt reduction.”
🎯 How to Communicate It: “We’ve reduced Lead Time for Change from 14 days to 9 days, improving our ability to respond to market demands faster.”
🎯 How to Communicate It: “New engineers ramp up to full productivity in 6 weeks on average, down from 8 weeks last year.”
Harness SEI provides efficiency, productivity, and alignment dashboards that make engineering metrics clear, visual, and actionable for executives.
SEI’s DORA, Sprint Insights, and Business Alignment Dashboards provide high-level summaries while allowing leaders to drill into details when needed.
Rather than waiting for executives to ask, SEI highlights risks upfront (e.g., increasing cycle time, declining deployment frequency) and identifies bottlenecks.
Numbers alone don’t drive action—framing metrics as stories does. SEI allows engineering leaders to present data in a way that connects to business goals and influences decisions.

Engineering is no longer just about writing code—it’s about driving business value. By using Harness SEI to track and communicate on-time delivery, engineering capacity, deployment frequency, and business alignment, engineering leaders can:
✅ Influence executive decisions by aligning engineering work with company priorities.
✅ Improve collaboration across teams by providing visibility into engineering efforts.
✅ Proactively drive impact instead of reacting to business requests.
Ready to communicate engineering’s impact more effectively? Start leveraging SEI today to gain visibility, efficiency, and alignment across your organization.
👉 Learn more about Harness SEI here.


Developer productivity has become a critical factor in today's fast-paced software development world. Organizations constantly seek methods to enhance productivity, improve engineering efficiency, and align their development teams with strategic business goals. But navigating the complexities of developer productivity isn't always straightforward.
In this blog, we’ll hear from Adeeb Valiulla, Director of Engineering Excellence at Harness, as we answer some of the most pressing questions on developer productivity to help you optimize your teams and processes effectively.
Developer productivity refers to the efficiency and effectiveness with which software developers deliver high-quality software solutions. It encompasses the speed and quality of coding, reliability of deployments, the ability to quickly recover from failures, and alignment of development efforts with strategic business goals. High developer productivity means achieving more impactful outcomes with fewer resources, enabling organizations to stay competitive and agile in rapidly evolving markets.

Developer productivity directly impacts an organization's ability to deliver software quickly, reliably, and with high quality. High productivity enhances agility, reduces costs, accelerates feature delivery, and ultimately drives customer satisfaction and competitive advantage. Improving productivity not only benefits the business but also increases developer satisfaction by removing bottlenecks and empowering teams.
“In the hardware technology industry, a well-known global hardware company implemented an engineering metrics program under Harness’s and my guidance. This led to significantly boosted developer productivity. Their PR cycle time improved dramatically from nearly 3 days to under an hour, greatly enhancing delivery speed and agility.”

Yes, software developer productivity can be effectively measured. While measuring productivity isn't always simple due to the complexity of software development, several key metrics have emerged as valuable indicators.
These metrics, when applied carefully and contextually, provide actionable insights into developer productivity.
“In the Gaming Industry, Harness’ holistic approach to productivity, which emphasizes consistent developer engagement and effective scope management, enabled a gaming company to manage scope creep and improve their weekly coding days significantly. This strengthened their development workflow and productivity.”

Generative AI certainly has the potential to improve developer productivity, but the jury is still out on whether it provides significant net improvements. GenAI helps developers write code faster by automating repetitive coding tasks, enhancing code reviews, predicting potential errors, and accelerating problem-solving. The vision is that AI-powered tools will help developers write cleaner, more reliable code faster, freeing them to focus on strategic, high-value tasks.

However, the time saved by GenAI is not guaranteed to translate into a net productivity gain once you account for the new challenges it brings: learning to prompt effectively, the time spent reviewing and fixing the code it produces, and the system and software delivery lifecycle (SDLC) bottlenecks that can occur when an increased volume of new code must be handled, deployed, and tested.
Tools such as Harness Software Engineering Insights (SEI) and AI Productivity Insights (AIPI) can help measure how, where, and with whom AI is having an impact (both positive and negative), so you can maximize the likelihood that GenAI improves your developer productivity.
Additionally, most GenAI developer tooling has focused on AI coding assistants. However, coding is only 30-40% of the work required to get software updates and enhancements delivered (the pipeline and SDLC stages mentioned above), leaving 60-70% of the overall process that GenAI is not yet helping with. The Harness AI-Native Software Delivery Platform provides many AI agents that help automate roughly 40% of the non-coding portion of the SDLC.

When measuring developer productivity, focus on outcome-based metrics rather than activity counts. DORA metrics (deployment frequency, lead time, change failure rate, and recovery time) provide valuable insights into team performance and delivery efficiency. Complement these with contextual data like PR cycle times, coding days per week, and the ratio of building versus waiting time.
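To make those four DORA metrics concrete, here is a minimal sketch of how they could be computed from a list of deployment records. The record schema is an assumption for illustration, not SEI's data model:

```python
def dora_metrics(deployments, period_days):
    """Compute the four DORA metrics from deployment records.

    Each deployment dict (hypothetical schema) carries:
      lead_time_hours  - commit-to-production time for the change
      failed           - whether the deployment caused a failure in production
      restore_hours    - time to recover, for failed deployments
    """
    n = len(deployments)
    failures = [d for d in deployments if d["failed"]]
    return {
        "deployment_frequency_per_week": n / (period_days / 7),
        "lead_time_hours": sum(d["lead_time_hours"] for d in deployments) / n,
        "change_failure_rate": len(failures) / n,
        "mttr_hours": (sum(d["restore_hours"] for d in failures) / len(failures))
                      if failures else 0.0,
    }

deploys = [
    {"lead_time_hours": 20, "failed": False, "restore_hours": 0},
    {"lead_time_hours": 30, "failed": True,  "restore_hours": 2},
    {"lead_time_hours": 10, "failed": False, "restore_hours": 0},
    {"lead_time_hours": 40, "failed": True,  "restore_hours": 4},
]
m = dora_metrics(deploys, period_days=28)
# 4 deploys over 4 weeks -> 1.0/week; lead time 25h; CFR 0.5; MTTR 3h
```

The same shape extends naturally to the contextual metrics mentioned above (PR cycle time, coding days per week) by swapping in the relevant event timestamps.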
Harness SEI implements dashboards that visualize these metrics by role, enabling managers to identify bottlenecks, engineers to track personal progress, and executives to monitor overall delivery health. To learn more, read our blog on Persona-Based Metrics.
Remember that measurement should drive improvement, not punishment—create a psychologically safe environment where data informs positive change rather than triggering defensive behavior.
Improving developer productivity requires a multi-faceted approach that addresses both technical and organizational constraints. Start by eliminating common friction points: reduce build times through better CI/CD pipelines, implement robust code review processes that prevent bottlenecks, and adopt standardized development environments that minimize "it works on my machine" issues. Investment in developer tooling often yields outsized returns.
Creating focused work environments is equally crucial. Research shows that developers need uninterrupted blocks of at least 2-3 hours to reach flow state—the mental zone where complex problem-solving happens most efficiently. Consider implementing "no-meeting days" or core collaboration hours to protect deep work time. Google's approach of 20% innovation time and Atlassian's "ShipIt Days" demonstrate how structured creative periods can boost both productivity and engagement.
Finally, regularly audit and reduce technical debt; Etsy's practice of dedicating 20% of engineering resources to infrastructure improvements ensures their codebase remains maintainable as it grows. The most productive engineering cultures view developer experience as a product itself—one that requires continuous investment and refinement.
“In the cybersecurity sector, teams following Harness’ Engineering Metrics Program consistently averaged over 4.5 coding days per week, demonstrating high developer engagement and productivity.”

In Agile environments, a deeper analysis of key metrics provides valuable insights into developer productivity:
Sprint Velocity serves as more than just a workload counter—it's a team's productivity fingerprint. High-performing teams focus less on increasing raw velocity and more on velocity stability, which indicates predictable delivery. By tracking velocity variance across sprints (aiming for less than 20% fluctuation), teams can identify external factors disrupting productivity. Leading organizations complement this with complexity-adjusted velocity, weighting story points based on technical challenge to reveal where teams excel or struggle with certain types of work.
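As a rough sketch of how velocity stability can be checked (the function and data here are illustrative, not SEI's actual implementation), the coefficient of variation gives a simple proxy for the less-than-20%-fluctuation guideline:

```python
from statistics import mean, stdev

def velocity_stability(velocities, max_cv=0.20):
    """Check whether sprint velocity fluctuates within an acceptable band.

    Uses the coefficient of variation (std dev / mean) as a simple proxy
    for the "< 20% fluctuation" guideline: a low CV means predictable delivery.
    """
    avg = mean(velocities)
    cv = stdev(velocities) / avg  # relative fluctuation across sprints
    return {"average": avg, "cv": round(cv, 3), "stable": cv <= max_cv}

# A team completing 48-52 points per sprint is highly predictable,
# while one swinging between 30 and 70 points is not.
steady = velocity_stability([50, 48, 52, 49, 51])
erratic = velocity_stability([30, 65, 40, 70, 35])
```

A stable average is a better planning input than any single sprint's raw number.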
Sprint Burndown Charts reveal productivity patterns beyond simple progress tracking. Teams should analyze the chart's shape—a consistently flat line followed by steep drops indicates batched work and potential bottlenecks, while a jagged but steady decline suggests healthier continuous delivery. Advanced teams overlay their burndown with blocker indicators, clearly marking when and why progress stalled, creating accountability for removing impediments quickly.
Commit to Done Ratio offers insights into planning accuracy and execution capability. The most productive teams maintain ratios above 80% while avoiding artificial padding of estimates. By categorizing incomplete work (technical obstacles, scope changes, or estimation errors), teams can systematically address root causes rather than symptoms. Some organizations track this metric over multiple sprints to identify trends and measure the effectiveness of process improvements.
PR Cycle Time deserves granular analysis, as code review often becomes a hidden productivity drain. Break this metric into component parts—time to first review, rounds of feedback, and time to final merge—to pinpoint specific improvement areas. Top-performing teams establish service-level objectives for each stage (e.g., initial reviews within 4 hours), supported by automated notifications and team norms. This detailed approach turns PR management from a black box into a well-optimized workflow with predictable throughput.
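To make the component breakdown concrete, here is a minimal sketch (illustrative names and timestamps; real data would come from your SCM) that splits a PR's cycle time into stages and checks the four-hour first-review objective mentioned above:

```python
from datetime import datetime

def pr_cycle_breakdown(opened, first_review, merged, first_review_slo_hours=4):
    """Split PR cycle time into component stages (in hours) and flag
    whether the first review met the service-level objective."""
    to_first_review = (first_review - opened).total_seconds() / 3600
    review_to_merge = (merged - first_review).total_seconds() / 3600
    return {
        "time_to_first_review_h": to_first_review,
        "review_to_merge_h": review_to_merge,
        "total_h": to_first_review + review_to_merge,
        "first_review_slo_met": to_first_review <= first_review_slo_hours,
    }

breakdown = pr_cycle_breakdown(
    opened=datetime(2024, 5, 1, 9, 0),
    first_review=datetime(2024, 5, 1, 15, 0),  # six hours to first review
    merged=datetime(2024, 5, 2, 9, 0),         # eighteen more hours to merge
)
```

Tracking each stage separately shows whether the bottleneck is review pickup or the feedback loop itself.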
Harness SEI provides robust tracking of developer productivity, empowering teams to improve by clearly visualizing these critical metrics.

Adeeb emphasizes that improving developer productivity requires a holistic and human-centric approach. It's not merely about tools and metrics but fundamentally about creating an environment where developers can consistently deliver high-quality output without unnecessary friction.
According to Adeeb, the key factors include:
Harness' approach advocates for an integrated strategy that aligns technology, processes, and culture, emphasizing developer well-being as central to sustainable productivity improvements.
Harnessing the right insights and strategies can transform your software development processes, driving efficiency, innovation, and growth. Ready to elevate your developer productivity to the next level? Discover the power of Harness Software Engineering Insights (SEI) and start achieving measurable improvements today.
Request a meeting or demo
Learn more: The causes of developer downtime and how to address them



The introduction of the AI DevOps Assistant marks a significant milestone in our mission to simplify and streamline the software delivery process for the world’s developers. By automating complex tasks and providing intelligent insights, this capability empowers teams to focus on innovation rather than getting bogged down in pipeline management intricacies.
Sign up today for early access to the AI DevOps Assistant.
The Harness AI Code Assistant accelerates developer productivity by streamlining coding processes and providing instant access to relevant information. This intelligent tool integrates seamlessly into the development workflow, offering a range of features that enhance coding efficiency and quality:

The Harness AI Code Assistant is more than just a coding tool; it's a comprehensive solution that enhances developer productivity, improves code quality, and fosters a more efficient and collaborative development environment. The AI Code Assistant is available today for all Harness customers at no additional charge.
Software delivery is changing fast. Generative AI has helped organizations code faster than ever. The rest of the delivery pipeline must keep up to take full advantage of these efficiencies.
These tools (the Harness AI QA Assistant, AI DevOps Assistant, and AI Code Assistant) represent more than just technological advancements. They embody a shift in how we approach software development, testing, and delivery. By automating routine tasks, providing intelligent assistance, and offering deep insights into development processes, these AI agents eliminate toil, freeing up human creativity and expertise to focus on solving complex problems and driving innovation.
As we move forward, the integration of AI into software delivery processes will become increasingly crucial for organizations looking to maintain a competitive edge. The ability to deliver high-quality software faster, more reliably, and with greater insight will be a key differentiator in the digital marketplace.
Harness is committed to leading this AI-driven transformation of the software delivery landscape. We invite you to join us on this exciting journey toward a future where AI and human expertise work in harmony to create exceptional software experiences.
Stay tuned for more updates as we continue to innovate and shape the future of software delivery. If you want to try any of these capabilities early, sign up here.
Check out the event: Revolutionizing Software Testing with AI
Check out the Harness AI Code Agent
Explore more resources: 3 Ways to Optimize Software Delivery and Operational Efficiency


AI-based coding assistants like Google Gemini Code Assist, GitHub Copilot, and others are becoming increasingly popular. However, the efficacy of these tools is still unknown. Engineering leaders want to understand how effective these tools are and how much they should invest in them.
Harness AI Productivity Insights is a new (beta) capability in Software Engineering Insights that helps engineering leaders understand the productivity gains unlocked by leveraging AI coding tools.
This targeted solution empowers engineering leaders to generate comprehensive comparison reports across diverse developer cohorts. It facilitates insightful analyses, such as evaluating the impact of AI Coding Tools on productivity by comparing developers who leverage these tools against those who don't. Additionally, it allows for comparisons between different points in time, tracking how developers' performance evolves as they adopt and grow their proficiency with AI Coding tools.

Customers can choose different types of comparison reports. The most common reports compare cohorts of developers who use coding assistants against those who don’t. Other supported report types include comparing cohorts of developers with different metadata (for example, senior engineers versus junior engineers) or comparing the same set of developers at different points in time.
For every report, customers can flexibly define the comparison cohorts either through manual selection or by utilizing existing metadata filters.

Customers can run multiple reports at any time. Reports will be saved and available to share within the organization.

Each report analyzes the productivity scores of both cohorts, calculating the productivity gain of the second cohort relative to the first. The analysis encompasses various facets of performance, including velocity and quality metrics. Additionally, the solution offers the option to gather qualitative insights through surveys distributed to all cohort members, enriching the quantitative data with user feedback.
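The core of such a comparison is simple: the relative gain of one cohort's average score over the other's. A minimal sketch (the metric and numbers are hypothetical, not AIPI's actual scoring):

```python
from statistics import mean

def relative_gain(baseline_scores, comparison_scores):
    """Productivity gain of the comparison cohort relative to the
    baseline cohort, as a signed percentage."""
    base = mean(baseline_scores)
    comp = mean(comparison_scores)
    return round((comp - base) / base * 100, 1)

# Hypothetical example: weekly merged-PR counts for developers
# without (baseline) and with an AI coding assistant.
gain = relative_gain([4, 5, 6, 5], [6, 7, 5, 6])
```

The same calculation applies whether the cohorts are tool users versus non-users, or the same developers at two points in time.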

AI Productivity Insights relies on source code management (SCM) systems for metrics collection. Customers can seamlessly integrate their preferred SCM platforms through convenient one-click integrations. To gain insights into AI Coding Tool usage, the solution also offers one-click integrations with these tools, enabling comprehensive data collection and analysis across the development ecosystem.
Let us know you are interested. We'd love to show you more and hear your feedback.
By integrating ServiceNow with Harness SEI, you can:
This integration provides a new data source for Harness SEI, enabling a more comprehensive and accurate measurement of your software delivery performance.
The SEI ServiceNow integration offers two authentication methods:
Choose the method that best suits your requirements and follow our ServiceNow integration help doc for detailed setup instructions.
This integration now allows you to monitor activity and measure crucial metrics regarding your change requests and incidents from the ServiceNow platform. You can consolidate reporting, combining ServiceNow data with other metrics from Harness SEI, and create customizable dashboards (i.e., Insights) that focus on the metrics most crucial to your team's success.
A key advantage of this integration is its robust support for DORA metrics such as Deployment Frequency, Change Failure Rate, and Mean Time to Restore.
The DORA Mean Time To Restore metric helps you understand how quickly your team can recover from failures. By configuring a DORA Workflow Profile with the ServiceNow integration, you can precisely measure the time between incident creation and resolution.
This report measures the duration from when an incident was created to when service was restored, i.e., the time between incident creation and incident closure.
With this information, you can set and track Mean Time to Restore (MTTR) goals, driving continuous improvement in your team's ability to address and resolve issues quickly.
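Conceptually, the calculation is an average over incident lifespans. A minimal sketch (illustrative data; in practice the timestamps come from ServiceNow via the DORA Workflow Profile):

```python
from datetime import datetime

def mean_time_to_restore(incidents):
    """MTTR in hours: the average of (closed - created) across
    resolved incidents."""
    durations = [
        (closed - created).total_seconds() / 3600
        for created, closed in incidents
    ]
    return sum(durations) / len(durations)

incidents = [
    (datetime(2024, 5, 1, 10, 0), datetime(2024, 5, 1, 14, 0)),  # 4 hours
    (datetime(2024, 5, 3, 9, 0), datetime(2024, 5, 3, 11, 0)),   # 2 hours
]
mttr = mean_time_to_restore(incidents)
```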

Understanding your deployment cadence is key to achieving continuous delivery. You can define DORA profiles using the ServiceNow integration to track how often you deploy. You have the flexibility to track deployments as either Change Requests or Incidents, though using Change Requests is recommended for more accurate deployment tracking.

The DORA Deployment Frequency report will display metrics on how often change requests are resolved. This enables you to perform trend analysis, helping you see how your change requests resolution frequency changes over time. With this information, teams can identify patterns and optimize their processes, moving towards a more efficient continuous delivery model.
You can set up the DORA profile definition for Change Failure Rate to monitor the failed deployments from the ServiceNow platform. This links change requests to incidents. Change requests represent the total deployments (when a change request is resolved, it means a deployment is completed). Incidents indicate a failure caused by these deployments (when a change request is resolved but later causes an incident).
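In other words, the rate is the share of deployments (resolved change requests) that were later linked to an incident. A small sketch with hypothetical IDs:

```python
def change_failure_rate(change_request_ids, incident_links):
    """CFR as a percentage: resolved change requests later linked to an
    incident, divided by all resolved change requests (deployments)."""
    deployments = set(change_request_ids)
    failed = {link for link in incident_links if link in deployments}
    return len(failed) / len(change_request_ids) * 100

# Ten deployments; incidents traced back to two distinct change requests
# (CHG7 caused two separate incidents but counts as one failed change).
cfr = change_failure_rate(
    change_request_ids=[f"CHG{i}" for i in range(10)],
    incident_links=["CHG3", "CHG7", "CHG7"],
)
```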

This integration bridges the gap between operational data in ServiceNow and development metrics in Harness SEI, providing a holistic view of the entire software delivery lifecycle.
With these insights at your fingertips, you can make more informed decisions, prioritize improvements effectively, and ultimately deliver better software faster and more reliably. Contact Harness Support to try this out today.
As a developer or development manager, you know how important it is to measure productivity. With your software development team racing against the clock to deliver a new feature, you're probably keen on boosting productivity and ensuring your team hits every sprint milestone as planned. However, it's not uncommon for sprints to fail, and the process can break down in various ways.
When sprint results are broken, it can have a significant impact on the quality of the product being developed. One of the most significant challenges faced by developers working in agile environments is burnout. Developer burnout can occur when team members feel overwhelmed by the amount of work assigned to them during a sprint.
This can happen due to various reasons such as:
To avoid burnout, it's essential to plan sprints carefully, taking into account the team's capacity, skill sets, and potential roadblocks. Effective sprint planning involves setting achievable goals, prioritizing tasks based on their importance and urgency, estimating tasks accurately, allocating resources efficiently, and monitoring progress. To accomplish all of this, you need to have a clear understanding of your team's capabilities, strengths, and limitations.
By considering these factors and using relevant metrics, you can create a well-planned sprint that sets your team up for success and helps prevent burnout.
But with so many different metrics to choose from, it can be tough to know where to start. That's why we've put together this list of the top three sprint metrics for measuring sprint success. These metrics are straightforward and easy to understand, and they will give you valuable insights into how your team is performing.
Developer churn in a sprint refers to the degree of change experienced in the set of tasks or work items allocated to a development team during a sprint cycle. More specifically, churn represents the total number of task additions, deletions, or modifications made after the initial commitment phase of the sprint. A higher level of churn indicates increased instability and fluctuation within the sprint scope, which often leads to several negative consequences impacting both productivity and morale.
For example, let's say your team is working on a new feature that requires several stages of development, including design, coding, testing, and review. If the tasks associated with this feature are modified more often than expected, it may indicate issues with communication between teams or a lack of clarity in certain stages of the process. By tracking Developer Churn, you can pinpoint these issues and make changes to improve efficiency.
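A simple way to quantify churn is to compare the set of tasks committed at sprint start with the set at sprint end. This sketch (with hypothetical task IDs) counts additions and removals; detecting modifications would additionally require per-task field comparisons:

```python
def sprint_churn(committed_tasks, final_tasks):
    """Scope churn after sprint commitment: tasks added to or removed
    from the sprint, as a percentage of the original commitment."""
    committed, final = set(committed_tasks), set(final_tasks)
    added = final - committed
    removed = committed - final
    churn_rate = (len(added) + len(removed)) / len(committed) * 100
    return {"added": sorted(added), "removed": sorted(removed),
            "churn_rate_pct": churn_rate}

# T3 was dropped mid-sprint; T6 and T7 were injected after commitment.
result = sprint_churn(
    committed_tasks=["T1", "T2", "T3", "T4", "T5"],
    final_tasks=["T1", "T2", "T4", "T5", "T6", "T7"],
)
```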
Another essential metric to track developer productivity is comparing what the team planned to deliver versus what they actually completed within a given sprint. This comparison offers an overview of the team's ability to commit and adhere to realistic goals while also revealing potential bottlenecks or process improvements needed.
Let's say your development team plans to complete 60 story points worth of work during a two-week sprint. At the end of the sprint, the team managed to complete only 50 story points. In this scenario, the "planned" value was 60 story points, but the "delivered" value was only 50 story points. This result indicates that there might be some challenges with estimating task complexity or managing time constraints.
The difference between the planned and delivered values could trigger discussions about improving estimation techniques, setting more realistic targets, or identifying any obstacles hindering the team from meeting its goals. Over multiple sprints, tracking these metrics will provide insights into whether the gap between planned and delivered values decreases over time, indicating improvement in productivity and efficiency.
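The planned-versus-delivered comparison reduces to a simple ratio. A minimal sketch (the 80% target is the commonly cited guideline, not a hard rule):

```python
def commit_to_done(planned_points, delivered_points, target_pct=80):
    """Ratio of delivered to planned story points for a sprint."""
    ratio = delivered_points / planned_points * 100
    return {"ratio_pct": round(ratio, 1), "on_target": ratio >= target_pct}

# 60 story points planned, 50 delivered
result = commit_to_done(planned_points=60, delivered_points=50)
```

Tracked over several sprints, a shrinking gap signals improving estimation and execution.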
Velocity is a measure of how much work your team completes during a given period, usually a sprint or iteration. It's calculated by summing the story points completed in a sprint; averaging that total across recent sprints tells you how much work your team can handle in a given period and allows you to plan future sprints accordingly.
For example, if your team has a velocity of 50 story points per sprint, you know that you can expect them to complete around 50 story points worth of work in a two-week sprint. This information can help you prioritize tasks and allocate resources effectively, ensuring that your team stays on track and delivers quality results.
Measuring these metrics accurately is crucial to gain meaningful insights into your team's performance and identify areas for improvement.
Here are some ways to measure these metrics accurately using Harness SEI:




By using these reports on Harness SEI, you can measure sprint metrics accurately and gain insights into your team's performance.
To learn more, schedule a demo with our experts.


Executives often ask a crucial question - "What value is your team bringing to the organization?" As an engineering team, you should develop your own metrics to demonstrate your team's growth and contributions. This is necessary because marketing and sales have their own metrics for deals and leads.
This blog will explain the benefits of creating and managing a Developer metrics dashboard. It can help gain insights into the engineering team's work and identify areas that require attention. We will examine the problems with outdated tools for measuring developer productivity and provide solutions to overcome them. This way, you can accurately assess the business value your engineering team brings.
Understanding the health and productivity of your team is essential for any engineering organization. To achieve this, you can use Developer Insights to show your team's value and performance through metrics. Like a player's career graph, these dashboards show how efficient and productive your developers are.
Having reliable, up-to-date, organized, user-friendly data with the right measurements is crucial. Although many teams use metrics, only a few use the right ones. Choosing the right metrics is crucial for understanding your team's productivity and efficiency accurately.
Here are the top four metrics that can help executives understand your organization's true status.
Measuring development efficiency is important. Cycle time is a key metric that gives a brief overview of all stages involved. But only looking at the total number can be limiting, as there might be many reasons for a long cycle time.
To better understand the problem and find its root cause, it's best to divide the process into stages: the time to first commit, the time to create a Pull Request, the activity within the PR, the PR approval time, and finally the time to merge the item into the main codebase. Analyzing each stage separately helps you identify exactly where your development process is struggling, so you can make a plan to fix the problems that are slowing down your team's productivity.

Workload is the term used to describe the number of tasks that a developer is handling at any given time. When a developer has too many tasks, they may switch between them frequently. This frequent switching can lower productivity and eventually lead to burnout.
You can track the amount of work assigned to developers and in progress. This will help you determine who is overloaded. You can then adjust priorities to avoid harming productivity.
Moreover, tracking active work can help you determine whether your team's tasks align with your business goals. You can use this information to reorganize priorities and ensure that your team is working efficiently towards your goals.
According to studies, smaller pull requests help reduce cycle time. This may come as a surprise, but it makes sense once you think about it.
Reviewers are more inclined to pick up smaller PRs promptly because they know they can finish them quickly. If you notice that the pickup and review times for your team's PRs are taking too long, try monitoring the size of the PRs. Then you can help developers keep their PRs within a certain size, which will reduce your cycle time.
Rework refers to any change made to existing code, regardless of its age, including alterations, fixes, enhancements, and optimizations. Rework metrics let developers measure how much existing code is being changed, and thereby assess code stability, change frequency, and development efficiency.
By measuring the amount of changes made to existing code, developers can assess the quality of their development efforts. They find code problems, improve development, and prevent future rework.
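One simple way to operationalize this (a sketch; how changed lines are classified as touching existing versus new code, e.g. by line age in blame data, is up to the team):

```python
def rework_ratio(commit_changes):
    """Percentage of changed lines that modify existing code rather than
    add new code. Each entry is (lines_touching_existing, lines_new)."""
    reworked = sum(existing for existing, _ in commit_changes)
    total = sum(existing + new for existing, new in commit_changes)
    return reworked / total * 100

# Three commits: two mostly adding new code, one fix-heavy commit.
ratio = rework_ratio([(10, 90), (0, 50), (40, 10)])
```

A rising ratio can indicate unstable code that keeps being revisited.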
As the common adage suggests, acknowledging that you have an issue is the initial step towards improvement. However, it's equally important to identify the problem accurately, or else improvement will be impossible.
This is especially true for software teams. Complicated processes in an engineering team can easily fail and finding the main problem is often difficult. That's where metrics come in.
A Developer Insight (i.e. the Dashboard) displays your engineering team's progress and helps identify areas where developers may struggle. By identifying the problem areas, you can provide solutions to improve the developer experience, which ultimately increases their productivity.
Even with the best metrics, a dashboard is only useful if it is accurate, current, unified, and easy to understand.
Harness SEI can help you create an end-to-end developer insight (i.e. Dashboard) with all the necessary metrics. The distinguishing factor of Harness SEI is its ability to link your git data and Jira data together. This helps you understand how your development resources are used, find obstacles for developers, and evaluate your organization's plan efficiency.
Once you understand what's going on with your teams, you can set targets to create an action plan for your developers. For example, you can reduce your PR sizes.
You can also use various reports on Harness SEI to measure and track your cycle time and lead time.
By providing a comprehensive set of essential parameters, including code quality, code volume, speed, impact, proficiency, and collaboration, SEI enables engineering teams to gain deeper insights into their workflows.
The Trellis Score, a proprietary scoring mechanism developed by SEI, offers an effective way to quantify team productivity. With this information at hand, engineering teams can leverage SEI Insights to pinpoint areas requiring improvement, whether they relate to people, processes, or tools. Ultimately, SEI empowers organizations to optimize their development efforts, leading to increased efficiency and higher-quality outputs.
To learn more, schedule a demo with our experts.


In the ever-evolving landscape of software development, the significance of producing high-caliber code is undeniable. This is where Harness Software Engineering Insights (SEI) shines, guiding teams toward elevated software quality, enhanced productivity, and overall excellence. Here, we delve deep into the pivotal role of SEI's Quality Module in aiding teams to gauge, supervise, and uplift their code quality.
The Trellis Framework: At the heart of SEI's transformative potential is the industry-validated Trellis Framework. This intricate design provides a comprehensive analysis of over 20 factors from various Software Development Life Cycle (SDLC) tools, enabling teams to efficiently track and optimize developer productivity.

Lagging indicators are retrospective measures, offering insights into past performance. Let's break down these metrics:
Defect Escape Rate: This metric, crucial for understanding production misses, measures the percentage of defects that go undetected before release and reach the customer. A higher defect escape rate can signal poor quality control, leading to customer dissatisfaction.
Escapes per Story Point or Ticket: This indicates the number of defects per unit of work delivered. An elevated number here can point to quality lapses in development.
Change Failure Rate: This metric measures the percentage of changes leading to failures, indicating the robustness of the product.
Severity of Escapes: This highlights the seriousness of defects, with higher severity demanding urgent attention.
APM Tools - Uptime: Measuring product availability and performance, a higher uptime percentage is indicative of good product quality.
Customer Feedback: Direct customer feedback, both positive and negative, provides valuable insights into product quality.
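The first two lagging indicators above reduce to simple ratios. A minimal sketch with hypothetical counts:

```python
def defect_escape_rate(escaped_defects, total_defects):
    """Percentage of all defects that slipped past pre-release testing
    and were found in production."""
    return escaped_defects / total_defects * 100

def escapes_per_story_point(escaped_defects, story_points_delivered):
    """Escaped defects normalized by the amount of work delivered."""
    return escaped_defects / story_points_delivered

# 6 of 40 defects escaped to production across 120 delivered story points
rate = defect_escape_rate(6, 40)
density = escapes_per_story_point(6, 120)
```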
Leading indicators predict current or future performance. We explore these further:
SonarQube Issues: This includes code smells, vulnerabilities, and code coverage. Issues flagged here can indicate quality concerns in the codebase.
Coverage % by Repos: Evaluating code coverage percentage across various repositories.
Automation Test Coverage: A higher percentage here suggests a robust, reliable product.
Coding Hygiene: Measures such as code reviews and comments improve code maintainability and reduce defect risks.
Program Hygiene: This includes acceptance criteria and clear documentation to ensure the product meets requirements.
Development vs Test Time Ratio: A balanced ratio is crucial for product quality.

Automated Test Cases by Type: Categorizing test cases into functional, regression, performance, or destructive types.

Test Cases by Scenario: Differentiating between positive or negative scenarios.
Automated Test Cases by Component View: Providing a component-wise breakdown.

TestRail Test Trend Report - Automation Trend: Showcasing the trend of total, automated, and automatable test cases.

SEI's architecture integrates with CI/CD tools, offering over 40 third-party integrations. This structured approach aids in goal-setting and decision-making, driving teams towards engineering excellence.
Beyond metrics, SEI assists in resource allocation optimization, aligning resources with business objectives for efficient project delivery.
SEI’s dashboards provide a holistic view of the software factory, highlighting key metrics and KPIs for better collaboration and workflow management.
Harness Software Engineering Insights, with its Quality module, stands as a beacon for development teams, combining metrics, insights, and tools for superior code quality. To learn more, schedule a demo with our experts.
In the realm of software development, ensuring a robust and streamlined Software Development Life Cycle (SDLC) is paramount. While many focus on the technical intricacies and methodologies, there's an underlying aspect that holds equal importance: hygiene in SDLC processes. This blog delves into the significance of hygiene within SDLC and how it can pave the way for deriving valuable insights and metrics.
In essence, hygiene in SDLC refers to the practices, procedures, and protocols adopted to maintain the integrity, reliability, and effectiveness of the software development process. It is important to maintain hygiene across all aspects of the SDLC, starting at the very beginning with understanding and documenting requirements from stakeholders, and flowing through the phases from design to implementation, where decisions are made according to best practices and code quality is maintained.
Hygiene in SDLC serves as a foundational pillar that significantly influences the quality, reliability, and sustainability of software solutions. By emphasizing standardized practices, fostering cross-functional collaboration, and proactively addressing risks, hygiene paves the way for delivering software solutions that are robust, secure, and aligned with stakeholder expectations. This adherence not only enhances software quality and security but also fosters a culture of excellence, innovation, and accountability.
Maintaining hygiene in SDLC is not merely about adherence to protocols; it's about fostering a culture of excellence and continuous improvement. Here's how hygiene directly contributes to generating valuable insights:
As the saying goes, "only the things that get measured can be improved." This underscores the importance of establishing metrics and benchmarks to enhance hygiene within the Software Development Life Cycle (SDLC). By systematically evaluating and optimizing key areas, organizations can foster a culture of excellence and continuous improvement.


To harness the full potential of SDLC hygiene, organizations must cultivate a culture that prioritizes quality, collaboration, and continuous improvement. This entails:
Hygiene in SDLC is not a mere procedural aspect; it's a foundational pillar that underpins the success and sustainability of software development endeavors. By prioritizing hygiene and leveraging it as a catalyst for generating valuable insights and metrics, organizations can navigate the complexities of software development with confidence, agility, and foresight.
To explore how SEI can transform your software development process, we invite you to schedule a demo with our experts.


Pioneering initiative brings together 300 senior engineering leaders to codify best practices and elevate engineering excellence
San Francisco, Sept 21, 2023 - Harness, the Modern Software Delivery Platform® company, is proud to announce the establishment of the industry-wide Engineering Excellence Collective™, a groundbreaking engineering leadership community. The Collective comprises 300 esteemed senior engineering leaders, CTOs, and MDs from leading global organizations, including Broadcom, CrowdStrike, Encora, NetApp, Palo Alto Networks, Pipe, Wells Fargo, Oracle, Xactly, OutSystems, Pure Storage Inc. and many more. The Collective represents an unprecedented collaborative effort to drive innovation and advance industry-wide initiatives, the first of which is a comprehensive Engineering Excellence Maturity Model. To learn more about the Collective and for information about joining, please visit engineeringx.org.
“I am thrilled to participate in this amazing initiative to define and codify best practices. This is a big gap for engineering leaders. This model serves as a guidepost for understanding how to move the needle and challenge ourselves to get better. It really scratches my itch to understand how other leaders are solving similar challenges and how we can learn from the collective wisdom of engineering leaders,” said Karan Gupta, VP of Engineering at Palo Alto Networks.
“It was a pleasure collaborating with some of the top minds in the field to put together this much needed compendium of best practices,” said Preeti Iyer, Distinguished Engineer, SVP Enterprise Architecture at Wells Fargo.
The Engineering Excellence Collective’s aim is to foster an environment of knowledge exchange and best practice sharing among top engineering minds to stimulate industry progress. At the heart of this initiative is the group’s first major initiative, the Engineering Excellence Maturity Model, a cutting-edge framework that outlines 11 crucial pillars essential for achieving engineering excellence in software development. These include:
This comprehensive model provides a prescriptive guide to elevate engineering practices and drive excellence and developer experience within organizations. Each pillar encompasses a set of capabilities necessary to reach the full potential of engineering achievement. The Engineering Excellence Maturity Model doesn't replace established frameworks like DORA or SPACE; instead, it supplements them by offering specific guidance on the capabilities and processes needed to improve DORA metrics.
Harness has used the groundbreaking Engineering Excellence Maturity Model to create an innovative survey assessment tool. The Harness Engineering Excellence Survey Assessment, publicly launched today, empowers engineering teams to gauge their excellence across each facet and obtain an overall Engineering Excellence Maturity Score. Teams can also compare their scores against industry benchmarks, receive tailored recommendations, and prioritize improvement areas. For more information, please visit the Engineering Excellence Maturity Assessment.
"Engineering leadership can be a lonely job. We are changing that by creating a community for engineering leaders to learn from each other and codify best practices. The initial initiatives we've worked on – the assessment and the maturity model – give engineering leaders a unique perspective on the strengths and weaknesses in their engineering practices, and put together a holistic plan of improvement prioritized by areas that will be the most impactful,” said Nishant Doshi, GM of Software Engineering Insights at Harness. “This initiative is a collaborative effort to bridge gaps in codifying engineering best practices and provide peer-based learning opportunities that foster progress, growth, and innovation.”
Harness is committed to promoting industry-wide innovation and excellence. The initiative demonstrates Harness's dedication to providing the tools and resources necessary for engineering leaders to navigate the complex landscape of engineering excellence.
For more information about the Engineering Excellence Collective and the Engineering Excellence Maturity Model, visit www.engineeringx.org. We welcome new members who are passionate about driving engineering excellence in their organizations.
About Harness
Harness is the leading end-to-end platform for complete software delivery. It provides a simple, safe, and secure way for engineering and DevOps teams to release applications into production. Harness uses machine learning to detect the quality of deployments and automatically roll back failed ones, saving time and reducing the need for custom scripting and manual oversight, giving engineers their nights and weekends back. Harness is based in San Francisco. Please visit www.harness.io to learn more.


Developer productivity is a critical factor in the success of any software development project. The continuous evolution of software development practices has led to the emergence of innovative tools aimed at streamlining the coding process. GitHub Copilot, introduced by GitHub in collaboration with OpenAI, is one such tool that utilizes advanced AI models to assist developers in generating code snippets, suggesting contextually relevant code, and providing coding insights. To scale developer efficiency, one of our customers adopted GitHub Copilot, leading to increased collaboration and shortened development cycles, as demonstrated by Harness SEI's comprehensive analysis.
Before implementing GitHub Copilot, developer teams grappled with challenges primarily centered around pull requests (PRs) activity and cycle time in their software development processes. The existing workflow exhibited limited PR activity, leading to isolated development efforts and sluggish code review cycles. This hindered collaboration among developers and extended the time taken to integrate changes. Additionally, the cycle time from task initiation to deployment was longer than desired, resulting in delayed feature releases and impacting the product’s ability to swiftly respond to market demands.
Manual code reviews were time-consuming and inconsistent, exacerbating the efficiency challenges.
These issues collectively created bottlenecks in collaboration, resource allocation, and timely delivery of software solutions.
This study investigated the impact of GitHub Copilot on developer productivity, with a focus on the number of pull requests (PRs) and cycle time, through a comparative analysis conducted with Harness SEI. The study was guided by the expertise of the Harness Software Engineering Insights (SEI) team and involved a sample of 50 developers from a customer. It took place over several months: in the first two months, the developers worked without GitHub Copilot's assistance; in the remaining months, they used GitHub Copilot as an integrated tool in their coding workflow. Throughout the study, various performance metrics were collected and analyzed to gauge Copilot's impact.
The study measured the impact of GitHub Copilot on two important metrics:
The average number of PRs is a critical indicator of development activity and collaboration. The analysis revealed a significant increase of 10.6% in the average number of PRs during the period when developers used GitHub Copilot compared to the period when it was disabled. This increase suggests that GitHub Copilot can improve collaboration: developers using Copilot can iterate more rapidly, leading to more code review and integration.
Cycle time, defined as the time taken to complete a development cycle from the initiation of a task to its deployment, is a fundamental measure of development efficiency. The study demonstrated an average reduction in cycle time of 3.5 hours during the period when developers used GitHub Copilot, a 2.4% improvement over the period when Copilot was not used. This reduction suggests that Copilot's assistance in generating code snippets and offering coding suggestions contributes to quicker task completion and, ultimately, shorter development cycles.
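The percent-change calculations behind these figures can be reproduced with a short script. The baseline and comparison values below are illustrative stand-ins, since the study's raw per-developer data is not published; only the percentage changes mirror the reported results.

```python
def percent_change(before: float, after: float) -> float:
    """Relative change from a baseline period to a comparison period."""
    return (after - before) / before * 100

# Illustrative values only -- not the study's raw data.
avg_prs_before, avg_prs_after = 8.5, 9.4    # average PRs per developer
cycle_before, cycle_after = 145.8, 142.3    # cycle time in hours (3.5h reduction)

pr_change = percent_change(avg_prs_before, avg_prs_after)
cycle_change = percent_change(cycle_before, cycle_after)

print(f"PR activity change: {pr_change:+.1f}%")   # +10.6%
print(f"Cycle time change:  {cycle_change:+.1f}%") # -2.4%
```

Note that a 3.5-hour reduction corresponding to a 2.4% improvement implies a baseline cycle time of roughly 146 hours, which is how the illustrative numbers above were chosen.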
GitHub Copilot has demonstrated its potential to transform software development. The increase in pull requests (PRs) and the reduction in cycle time are two key metrics that reflect its positive impact on developer productivity.
Harness SEI was used to facilitate this study. In summary, the study demonstrates GitHub Copilot's capability to significantly improve developer productivity. However, there is still more to uncover: we are conducting further experiments and a more thorough analysis of the data already collected, looking into heterogeneous effects and potential effects on code quality. We plan to share our findings in future case studies.
To understand developer productivity and unlock such actionable metrics and insights, please schedule a demo of the Harness Software Engineering Insights module here: https://www.harness.io/demo/software-engineering-insights.


As teams scale, the role of "Process" becomes a central topic, eliciting both strong support and vehement opposition. Processes can sometimes feel burdensome and ineffective, yet they're indispensable for seamless growth and concerted progress. The challenge lies in distinguishing between good and bad processes and finding the equilibrium between the need for consistency and the freedom to innovate. To unravel this, let's first examine the pitfalls that make processes cumbersome and prone to failure.
In the rapidly expanding business landscape, numerous new business cases arise daily, causing teams to traverse these 9 stages repeatedly. Put simply, what works for a small group might not suit a larger one.
Mismatched Processes vs. Amplifying Processes
Not all processes are created equal, yet there's no such thing as an inherently good or bad process. A process either mismatches the specific business context or has the potential to enhance efficiency, output, or cost-effectiveness tenfold.
The Perception Quadrant of New Processes
Introducing a new process typically triggers skepticism or optimism among teams. This fresh process could either end up being a misfit or a 10X enhancer.
Initially, skepticism prevails when a new process is introduced, especially if it is imposed from a centralized decision-making point. Engineering managers may resist the new process, doubting its applicability to their unique business context, whether accurately or not. The process could indeed amplify their outcomes tenfold, but uncertainty clouds their judgment.
How advocacy for a new process fares depends on the organization's openness to change. If past processes were met with skepticism and proved to be misfits, subsequent decisions will be met with even more doubt. This breeds a damaging culture and suboptimal outcomes, a phenomenon that is all too common.
The solution lies in Continuous Adaptability Driven by Actionable Data.
Actionable Data:
Every introduced process requires instrumented data to gauge whether it's a 10X boost or a misfit. Examples include:
Technical Debt Sprint Introduction: Improved defect rates, reduced support tickets, and heightened customer NPS scores due to enhanced communication.
Products like Harness Software Engineering Insights can provide actionable insights for testing process effectiveness.
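One lightweight way to operationalize "actionable data" is to compare each instrumented metric before and after a process change and let the aggregate improvement decide whether the process is an amplifier or a misfit. The sketch below is hypothetical: the metric names, values, and improvement threshold are illustrative assumptions, not an SEI API.

```python
# Hypothetical sketch: classify a process change from before/after metrics.
# Metric names, sample values, and the 5% threshold are illustrative assumptions.

def evaluate_process(before: dict, after: dict, min_improvement_pct: float = 5.0) -> str:
    """Label a process change based on average percent improvement across metrics."""
    # Metrics where a decrease is an improvement (e.g. defect rate).
    lower_is_better = {"defect_rate", "support_tickets"}
    changes = []
    for name in before:
        delta_pct = (after[name] - before[name]) / before[name] * 100
        if name in lower_is_better:
            delta_pct = -delta_pct  # a drop counts as improvement
        changes.append(delta_pct)
    avg_improvement = sum(changes) / len(changes)
    return "amplifier" if avg_improvement >= min_improvement_pct else "misfit"

# After introducing a technical-debt sprint (illustrative numbers):
before = {"defect_rate": 4.2, "support_tickets": 120, "nps": 31}
after = {"defect_rate": 3.1, "support_tickets": 95, "nps": 38}
print(evaluate_process(before, after))  # amplifier
```

The design choice worth noting is the explicit handling of lower-is-better metrics: without negating those deltas, a falling defect rate would wrongly count against the process.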
Continuous Adaptability:
Statements like "It's always been done like this" or "Other teams are doing it this way" reflect struggles with adaptability. While standardization may or may not be effective, continuous adaptability, data utilization, and questioning the "why" are potent tools for managing process edge cases. Leaders must recognize when existing processes falter in new contexts and iterate promptly.
The gravest error is halting process iteration, which leads to institutionalization and forgetting the process's original purpose.
To explore Harness SEI's capabilities, consider scheduling a quick demo.