
Today, we're thrilled to announce a significant leap forward in our commitment to AI-driven innovation. Harness, a leader in AI-native software delivery, is proud to introduce three powerful AI agents designed to transform how teams create, test, and deliver software.
Since introducing Continuous Verification in 2018, Harness has been at the forefront of leveraging AI and machine learning to enhance software delivery processes. Our latest announcement reinforces our position as an industry pioneer, offering a comprehensive suite of AI-powered tools that address critical challenges across the entire software delivery lifecycle (SDLC).
Our vision is a multi-agent architecture embedded directly into the fabric of the Harness platform. We’re building a powerful library of ‘assistants’ designed to make software delivery faster, more efficient, and more enjoyable for developers. These AI-driven agents will work seamlessly within our platform, handling everything from automating complex tasks to providing real-time insights, freeing developers to focus on what they do best: creating innovative software.
Let's explore the capabilities of these new AI agents and see how they will reshape the future of software delivery.
The Harness AI QA Assistant is a game-changer in the world of software testing. This generative AI agent is purpose-built to simplify end-to-end automation and accelerate the transition from manual to automated testing. End-to-end tests have been plagued by slow authoring experiences that yield brittle tests, which need to be tended to every time the UI changes.

By harnessing the power of AI, this assistant offers a range of benefits that can dramatically improve your testing processes:
Sign up today for early access to the AI QA Assistant.
Crafting pipelines can be challenging. You need to consider your core build and deployment activities, as well as best practices around security scans, testing, quality gates, and more. The new Harness AI DevOps Assistant will make creating great pipelines much easier.

The introduction of the AI DevOps Assistant marks a significant milestone in our mission to simplify and streamline the software delivery process for the world’s developers. By automating complex tasks, and providing intelligent insights, this capability empowers teams to focus on innovation rather than getting bogged down in pipeline management intricacies.
Sign up today for early access to the AI DevOps Assistant.
The Harness AI Code Assistant accelerates developer productivity by streamlining coding processes and providing instant access to relevant information. This intelligent tool integrates seamlessly into the development workflow, offering a range of features that enhance coding efficiency and quality:

The Harness AI Code Assistant is more than just a coding tool; it's a comprehensive solution that enhances developer productivity, improves code quality, and fosters a more efficient and collaborative development environment. The AI Code Assistant is available today for all Harness customers at no additional charge.
Software delivery is changing fast. Generative AI has helped organizations code faster than ever. The rest of the delivery pipeline must keep up to take full advantage of these efficiencies.
These tools - the Harness AI QA Assistant, AI DevOps Assistant, and AI Code Assistant - represent more than just technological advancements. They embody a shift in how we approach software development, testing, and delivery. By automating routine tasks, providing intelligent assistance, and offering deep insights into development processes, these AI agents eliminate toil, freeing up human creativity and expertise to focus on solving complex problems and driving innovation.
As we move forward, the integration of AI into software delivery processes will become increasingly crucial for organizations looking to maintain a competitive edge. The ability to deliver high-quality software faster, more reliably, and with greater insight will be a key differentiator in the digital marketplace.
Harness is committed to leading this AI-driven transformation of the software delivery landscape. We invite you to join us on this exciting journey toward a future where AI and human expertise work in harmony to create exceptional software experiences.
Stay tuned for more updates as we continue to innovate and shape the future of software delivery. If you want to try any of these capabilities early, sign up here.
Check out the event: Revolutionizing Software Testing with AI
Check out the Harness AI Code Agent
Explore more resources: 3 Ways to Optimize Software Delivery and Operational Efficiency


Kubernetes is a powerhouse of modern infrastructure — elastic, resilient, and beautifully abstracted. It lets you scale with ease, roll out deployments seamlessly, and sleep at night knowing your apps are self-healing.
But if you’re not careful, it can also silently drain your cloud budget.
In most teams, cost comes as an afterthought — only noticed when the monthly cloud bill starts to resemble a phone number. The truth is simple:
Kubernetes isn’t expensive by default.
Inefficient scheduling decisions are.
These inefficiencies don’t stem from massive architectural mistakes. They’re small, hidden, configuration-level choices that pile up into significant cloud waste.
In this post, let’s unpack the hidden costs lurking in your Kubernetes clusters and how you can take control using smarter scheduling, bin packing, right-sizing, and better node selection.
Most teams play it safe by over-provisioning resource requests — sometimes doubling or tripling what the workload needs. This leads to wasted CPU and memory that sit idle but still cost money, because the scheduler reserves them anyway.
Your cluster is “full” — but your nodes are barely sweating.
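To make this concrete, here is an illustrative spec (the numbers are hypothetical) for a service whose observed usage peaks around 200m CPU and 300Mi of memory. Right-sizing means reserving close to what the workload actually uses, with modest headroom, rather than a defensive 1 CPU / 2Gi:

```yaml
# Illustrative values only - base requests on observed usage, not guesses
resources:
  requests:
    cpu: 250m         # roughly p95 observed usage plus headroom
    memory: 384Mi
  limits:
    memory: 512Mi     # cap memory; CPU limits are often omitted to avoid throttling
```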

Kubernetes’s default scheduler optimizes for availability and spreading, not cost. As a result, workloads are often spread across more nodes than necessary. This leads to fragmented resource usage, like:

Choosing the wrong instance type can be surprisingly expensive:
But without node affinity, taints, or custom scheduling, workloads might not land where they should.
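As a sketch of the kind of scheduling control this refers to, a node affinity rule can steer a workload onto a specific instance family (the instance type below is only an example; substitute whatever is priced right for your workload):

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: node.kubernetes.io/instance-type   # well-known node label
              operator: In
              values:
                - m6g.large                           # example instance type
```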
Old cron jobs, demo deployments, and failed jobs that never got cleaned up — they all add up. Worse, they might be on expensive nodes or keeping the autoscaler from scaling down.
Mixing too many node types across zones, architectures, or families without careful coordination leads to bin-packing failure. A pod that fits only one node type can prevent the scale-down of others, leading to stranded resources.
Many Kubernetes environments run 24/7 by default, even when there is little or no real activity. Development clusters, staging environments, and non-critical workloads often sit idle for large portions of the day, quietly accumulating cost.
This is one of the most overlooked cost traps.
Even a well-sized cluster becomes expensive if it runs continuously while doing nothing.
Because this waste doesn’t show up as obvious inefficiency — no failed pods, no over-provisioned nodes — it often goes unnoticed until teams review monthly cloud bills. By then, the cost is already sunk.
Idle infrastructure is still infrastructure you pay for.
Kubernetes doesn’t natively optimize for cost, but you can make it do so.
Encourage consolidation by:
In addition to affinity and anti-affinity, teams can use topology spread constraints to control the explicit distribution of pods across zones or nodes. While they’re often used for high availability, overly strict spread requirements can work against bin-packing and prevent efficient scale-down, making them another lever that needs cost-aware tuning.
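For illustration, a softer constraint keeps pods roughly balanced across zones without blocking bin packing or scale-down (the label and maxSkew values are placeholders):

```yaml
topologySpreadConstraints:
  - maxSkew: 2                          # tolerate some imbalance instead of forcing 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway   # soft constraint: prefer spreading, never block scheduling
    labelSelector:
      matchLabels:
        app: web                        # placeholder label
```

Using ScheduleAnyway instead of DoNotSchedule keeps availability as a preference while still letting the scheduler consolidate when capacity is tight.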

Most of us have been in a situation where resources run 24/7 but are barely used, racking up costs even when everything is idle. A tried and proven way to avoid this is to scale those resources down, either on a schedule or based on idleness.
Harness CCM Kubernetes AutoStopping lets you scale down your Kubernetes workloads, Auto Scaling Groups, VMs, and more based on either their activity or fixed schedules, protecting you from these idle costs.
Cluster Orchestrator can help you scale down the entire cluster, or specific node pools, when they are not needed, based on schedules.
It’s often shocking how many pods can run on half the resources they’re requesting. Instead of guessing resource requests:
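One practical option, sketched here under the assumption that the Kubernetes Vertical Pod Autoscaler (VPA) is installed in the cluster, is to run VPA in recommendation-only mode and use its suggestions to set requests (names are hypothetical):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: checkout-vpa            # hypothetical name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: checkout              # hypothetical workload
  updatePolicy:
    updateMode: "Off"           # recommend only; don't evict or resize pods automatically
```

`kubectl describe vpa checkout-vpa` then shows recommended requests you can review and apply.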

Make architecture and pricing work in your favor:
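For example, if you run a provisioner such as Karpenter, interruption-tolerant workloads can express a soft preference for spot capacity (the label key below is Karpenter's; other provisioners and clouds use different labels):

```yaml
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
            - key: karpenter.sh/capacity-type   # provisioner-specific; varies by tooling and cloud
              operator: In
              values: ["spot"]
```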


Instead of 10 specialized pools, consider:
One overlooked reason why Kubernetes cost optimization is hard is that most scaling decisions are opaque. Nodes appear and disappear, but teams rarely know why a particular scale-up or scale-down happened.
Was it CPU fragmentation? A pod affinity rule? A disruption budget? A cost constraint?
Without decision-level visibility, teams are forced to guess — and that makes cost optimization feel risky instead of intentional.
Cost-aware systems work best when they don’t just act, but explain. Clear event-level insights into why a node was added, removed, or preserved help teams build trust, validate policies, and iterate safely on optimization strategies.


One of the most effective ways to eliminate idle cost is time- or activity-based scaling. Instead of keeping clusters and workloads always on, resources can be scaled down when they are not needed and restored only when activity resumes.
With Harness CCM Kubernetes AutoStopping, teams can automatically scale down Kubernetes workloads, Auto Scaling Groups, VMs, and other resources based on usage signals or fixed schedules. This removes idle spend without requiring manual intervention.
Cluster Orchestrator extends this concept to the cluster level. It enables scheduled scale-down of entire clusters or specific node pools, making it practical to turn off unused capacity during nights, weekends, or other predictable idle windows.
Sometimes, the biggest savings come from not running infrastructure at all when it isn’t needed.

Cost is not just a financial problem. It’s an engineering challenge — and one that we, as developers, can tackle with the same tools we use for performance, resilience, and scalability.
Start small. Review a few workloads. Test new node types. Measure bin-packing efficiency weekly.

You don’t need to sacrifice performance — just be intentional with your cluster design.
Check out Cluster Orchestrator by Harness CCM today!
Kubernetes doesn’t have to be expensive — it just has to be run smarter.


As cloud adoption continues to rise, efficient cost management demands a robust and automated strategy. Native cloud provider recommendations, while helpful, often have limitations — they primarily focus on vendor-specific optimizations and may not fully align with unique business requirements. Additionally, cloud providers have little incentive to highlight cost-saving opportunities beyond a certain extent, making it essential for organisations to implement customised, independent cost optimization strategies.
At Harness, we developed a Policy-Based Cloud Cost Optimization Recommendations Engine that is highly customisable and operates across AWS, Azure, and Google Cloud. This engine leverages YAML-based policies powered by Cloud Custodian, allowing organisations to define and execute cost-saving rules at scale. The system continuously analyses cloud resources, estimates potential savings, and provides actionable recommendations, ensuring cost efficiency across cloud environments.
Cloud Custodian, an open-source CNCF-backed tool, is at the core of our policy-based engine. It enables defining governance rules in YAML, which are then executed as API calls against cloud accounts. This allows seamless policy execution across different cloud environments.
The system relies on detailed billing and usage reports from cloud providers to calculate cost savings:
The solution leverages Cloud Custodian to define YAML-based policies that identify cloud resources based on specific filters. The cost of these resources is retrieved from relevant cost data sources (AWS Cost and Usage Report (CUR), Azure Billing Report, and GCP Cost Usage Data). The identified cost is then multiplied by the predefined savings percentage to estimate the potential savings from the recommendation.

The diagram above illustrates the workflow of the recommendation engine. It begins with user-defined or Harness-defined cloud custodian policies, which are executed across various accounts and regions. The Harness application processes these policies, fetches cost data from cloud provider reports (AWS CUR, Azure Billing Report, GCP Cost Usage Data), and computes savings. The final output is a set of cost-saving recommendations that help users optimize their cloud spending.
Below is an example YAML rule that deletes unattached Amazon Elastic Block Store (EBS) volumes. When this policy is executed against any account and region, it filters out and deletes all unattached EBS volumes.
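A representative policy of this kind looks roughly like the following; the policy name is arbitrary, and in practice you would likely add age or tag filters before enabling the delete action:

```yaml
policies:
  - name: delete-unattached-ebs-volumes
    resource: aws.ebs
    filters:
      - Attachments: []       # volume is not attached to any instance
      - State: available
    actions:
      - delete
```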
Harness CCM’s Policy-Based Recommendation Engine offers an intelligent, automated, and scalable approach to optimizing cloud costs. Unlike native cloud provider tools, it is designed for multi-cloud environments, allowing organisations to define custom cost-saving policies and gain transparent, data-driven insights for continuous optimization.
With over 50 built-in policies and full support for user-defined rules, Harness enables businesses to maximise savings, enhance cost visibility, and automate cloud cost management at scale. By reducing unnecessary cloud spend, companies can reinvest those savings into innovation, growth, and core business initiatives — rather than increasing the profits of cloud vendors.
Sign up for Harness CCM today and experience the power of automated cloud cost optimization firsthand!



Engineering organizations today don’t lack data—they lack clarity. Delivery timelines, developer activity, and code quality metrics are scattered across systems, making it hard to answer simple but critical questions: Where are we losing time? Are we investing in the right work? Who needs support or coaching?
This is where Harness Software Engineering Insights (SEI) steps in. Unlike traditional dashboards, SEI offers opinionated, role-based insights that connect engineering execution with business value.
In this post, we’ll walk through a proven rollout framework, real customer success stories, and a practical guide for any organization looking to implement an engineering metrics program (EMP) that actually drives impact.

Rolling out SEI without a clear objective is like configuring CI/CD pipelines without deployment goals. Before diving into dashboards or metrics, align internally on what you’re trying to improve.
Most organizations fall into one or more of the following categories:
💡 A powerful first step is simply asking: What are the top 3 decisions you wish you could make with data but currently can't?

Once your objectives are clear, it’s time to define the key performance indicators (KPIs) that reflect progress. At Harness, we recommend starting with 5 core metrics that align with your goals:
These metrics aren’t just about numbers—they tell a story. And SEI’s pre-built dashboards help visualize that story from day one.

Out-of-the-box data isn’t enough—you need context. SEI allows deep configuration across integrations, people, and workflows to ensure accuracy and actionability.
Start with the essentials: Jira or ADO (issue tracking), GitHub or Bitbucket (SCM), Jenkins or Harness CI (build/deploy). Validate data ingestion and set up monitoring for failed syncs.
Merge developer identities across systems and tag them with meaningful metadata: Role, Team, Location, Manager, and Employee Type (FTE, contractor). This enables advanced filtering, benchmarking, and team-level coaching.
Use Asset-Based Collections for things like repositories or services (ideal for DORA/Sprint metrics) and People-Based Collections for teams, departments, or geographies (perfect for Dev Insights, Trellis, and Business Alignment).
SEI lets you build custom profiles for DORA metrics, Business Alignment, and Trellis. These profiles allow you to set your own definitions for “Lead Time,” “MTTR,” or what constitutes “New Work.” Configurable widgets ensure the insights match your team’s workflows—not the other way around.

One of SEI’s most valuable capabilities is persona-based reporting. Not every stakeholder needs to see every metric. Instead, create tailored views based on what matters to them.
| Persona | Primary Metrics | Cadence |
|---|---|---|
| CTO / VP Engineering | DORA, Effort Allocation, Innovation % | Quarterly |
| Director of Engineering | Sprint Trends, PR Cycle Time, MTTR | Monthly |
| Engineering Manager | Coding Days, PR Approval Rate, Rework | Weekly |
| Scrum Master / TPM | Commit-to-Done, Scope Creep, Sprint Hygiene | Weekly/Daily |
| Product Manager | Feature Delivery Lead Time, KTLO vs. New Work | Bi-weekly |
By aligning metrics to what stakeholders actually care about, you reduce dashboard fatigue and increase engagement.

Rolling out dashboards isn’t enough—you need cadence and accountability.
Successful SEI customers establish regular reviews, such as:
Each dashboard or collection should have an owner, responsible for interpreting and acting on the insights.

Once the foundation is in place, go deeper. SEI allows you to scale insight delivery across the org by:
This is how SEI becomes more than a dashboard—it becomes your engineering operating system.

Data without goals is directionless. Use SEI to establish stretch goals tied to organizational outcomes.
Here are common SEI-aligned OKRs:
Because SEI continuously measures these metrics, you can track OKR progress in real time.

🧭 Objective:
Improve engineering velocity without compromising security or code quality, while ensuring more effort is spent on new feature development.
📈 Key Results:
💥 Impact:
By using SEI’s Dev Insights and Business Alignment dashboards, the customer was able to shift engineering focus toward innovation. Unapproved PR backlog reductions improved code review discipline, while faster PR cycle times helped the team deliver secure, high-quality features faster.

🧭 Objective:
Accelerate delivery cadence, reduce lead times, and establish a baseline for operational resilience across distributed teams.
📈 Key Results:
💥 Impact:
SEI enabled visibility into every stage of the SDLC — from PRs to production. Dashboards helped engineering leadership identify workflow bottlenecks, while improved cycle time allowed the team to launch features continuously. The organization was also able to define new goals around MTTR reduction for future sprints.

🧭 Objective:
Improve release predictability, reduce change failure rates, and maintain quality during large-scale technology transformations.
📈 Key Results:
💥 Impact:
Using SEI’s DORA and Sprint Insights dashboards, engineering teams surfaced high-risk areas and improved review discipline. Leadership used Business Alignment reports to visualize time allocation, allowing them to rebalance priorities between legacy maintenance and innovation initiatives — critical for de-risking digital transformation.

🧭 Objective:
Improve collaboration and execution within hybrid teams (FTEs and contractors), while accelerating delivery with fewer blockers.
📈 Key Results:
💥 Impact:
SEI helped the customer restructure their hybrid engineering model by revealing top contributors, low-collaboration patterns, and team-specific bottlenecks. By tagging contributors by type, team, and location, the organization realigned review ownership and improved handoff speed across distributed groups.

🧭 Objective:
Reduce production risk while accelerating feature releases in a highly agile environment.
📈 Key Results:
💥 Impact:
SEI’s DORA metrics helped the team move from reactive issue management to proactive release planning. With improved scope hygiene and PR discipline, the organization was able to deliver features at a faster pace while maintaining platform stability — a crucial balance in gaming environments where user experience is paramount.

🧭 Objective:
Speed up secure development without compromising engineering discipline or quality during rapid team expansion.
📈 Key Results:
💥 Impact:
The customer used SEI to quantify the tradeoff between speed and review quality. By highlighting areas with excessive unapproved PRs and scope creep, the team set up opinionated OKRs to strike a balance between velocity and sustainability. Trellis and Dev Insights dashboards were used to coach developers and improve overall workflow consistency.
The most successful engineering organizations don’t just collect metrics—they operationalize them. Harness SEI enables your teams to go beyond dashboards and build a culture of insight, accountability, and impact.
By following a structured rollout, aligning metrics to personas, and setting outcome-focused OKRs, SEI can become the backbone of your engineering excellence strategy.
About the Author
Adeeb Valiulla leads the Quality Assurance & Resilience, Cost & Productivity function at Harness, where he works closely with Fortune 500 customers to drive engineering efficiency, improve developer experience, and align software delivery efforts with business outcomes. With a focus on measurable insights, Adeeb helps organizations turn engineering data into actionable intelligence that fuels continuous improvement. He brings a unique blend of technical depth and strategic vision, helping teams unlock their full potential through data-driven transformation.


Engineering leadership used to be about gut feel, strong opinions, and shipping fast. But that playbook is expiring—quickly.
The world we’re building software in today is fundamentally different. Economic pressure, AI disruption, rising complexity, and the demand for hyper-efficiency have converged. Old-school metrics, instinct-led prioritization, and managing by velocity charts won’t cut it.
What today’s engineering leaders need isn’t more dashboards. They need clarity. They need trust. They need a new way to lead.
And most of all? They need to stop guessing.

You shouldn’t have to start every leadership meeting explaining what your teams are working on, why something slipped, or where time is going.
With Harness Software Engineering Insights (SEI), you don’t guess. You know.
You see where bottlenecks are forming. You know when PRs are aging in silence. You understand whether your teams are overcommitted, burned out, or executing beautifully. You know the tradeoffs being made between tech debt, features, and KTLO—before someone asks.
SEI replaces opinions with insight. It surfaces the friction you can’t see in a sprint report, and helps you make smarter decisions based on what’s actually happening—not what you hope is happening.
Because in the new era of engineering, clarity is leadership.

But when you only measure output—story points, releases, burnup—you miss the nuance. You miss the tradeoffs. You miss the why behind the work.
Harness SEI helps leaders tell the complete story:
This is the story your CFO, CPO, and CEO need to hear—not how many tickets you closed last sprint.
Engineering deserves to be understood. SEI makes it possible.

Let’s be honest: we’re no longer in a “hire at all costs” era. Efficiency is the new growth. The mandate is clear:
And that’s not a burden—it’s an opportunity.
With Harness SEI, leaders can finally quantify engineering capacity, align work with outcomes, and invest where it matters most. You can see which teams are stretched too thin, where tech debt is slowing you down, and which initiatives are driving measurable business value.
This isn’t about pushing harder. It’s about working smarter, leading sharper, and delivering more strategically.

Great engineering happens when teams have clarity, focus, and space to build. But too often, they’re stuck in the weeds—fighting fires, filling out status reports, and guessing what matters.
With SEI, that changes.
This frees up energy for real engineering. It protects time for hackathons, R&D spikes, creative sprints—the things that move the business forward and keep developers fulfilled.
Because in a world full of AI and automation, the one thing we can’t afford to lose is human creativity.
SEI helps you protect it—by getting rid of everything that wastes it.

Burnout doesn’t start with bad code. It starts with bad leaders.
When developers don’t know where their work is going, why it matters, or what success looks like, morale suffers. When they’re forced to do status updates instead of shipping, they disengage. When PRs sit for days, they lose momentum.
SEI enables developers to see how their work connects to outcomes. It enables faster feedback, less friction, and clearer focus.
And for leaders? It means fewer surprises, better retention, and more meaningful 1:1s.

The best engineering leaders of the next decade won’t just be great technologists; they’ll be clear communicators, business strategists, and defenders of engineering best practices.
They’ll lead with data, empathy, and decisiveness.
They’ll connect effort to impact.
They’ll stop guessing. And they’ll lead better because of it.
If you're ready to lead in this new era, Harness SEI is your competitive advantage.


For too long, engineering has been seen as a black box—an opaque function that takes in business requirements and delivers software without clear visibility into the process. But in today’s data-driven, business-first world, engineering leaders must do more than execute; they must influence, align, and communicate with executive peers to drive business outcomes.
CTOs, VPs of Engineering, and other technical leaders who can effectively translate engineering metrics into business impact gain a seat at the strategic table. Instead of reacting to business requests, they help shape company priorities, resource allocation, and long-term growth strategies.
But here’s the challenge: Traditional engineering metrics don’t resonate with executives. Story points, commit counts, and deployment logs mean little to a CFO, CMO, or CEO. To gain influence, engineering leaders need to frame their work in business terms—think predictability, customer impact, cost efficiency, and revenue acceleration.
That’s where Harness Software Engineering Insights (SEI) comes in. SEI transforms engineering metrics into clear, actionable insights that bridge the gap between technical execution and business strategy. This blog will show you how to use SEI to speak the language of executives, drive cross-functional alignment, and elevate engineering’s strategic role in your organization.
Before presenting engineering metrics, it’s critical to understand what matters to your executive peers. Different leaders prioritize different business drivers, and aligning your communication style accordingly makes your insights more relevant and impactful.

| Executive | Key Priorities | How Engineering Metrics Apply |
|---|---|---|
| CEO (Chief Executive Officer) | Revenue growth, competitive differentiation, innovation | Engineering’s impact on faster time-to-market, scalability, and business alignment |
| CFO (Chief Financial Officer) | Cost efficiency, budget predictability, ROI | Engineering capacity, cost of technical debt, and efficiency improvements |
| CRO (Chief Revenue Officer) | Sales velocity, customer retention, revenue expansion | Feature delivery timelines, system reliability, customer-impacting defects |
| CPO (Chief Product Officer) | Product roadmap execution, user experience, feature adoption | Lead Time for Change, deployment frequency, engineering capacity for innovation |
| CMO (Chief Marketing Officer) | Digital transformation, campaign execution, website/app performance | Site reliability, system uptime, infrastructure scalability, release predictability |
🔹 Takeaway: Before presenting engineering data, frame it in terms of the business goals that resonate with each executive stakeholder.
Many engineering leaders fall into the trap of reporting on vanity metrics—like total commits, number of deployments, or story points completed—without connecting them to business outcomes.
The key is choosing the right metrics that executives care about. Harness SEI helps track engineering performance across three core areas:

Let’s explore which SEI metrics best support each area.
🎯 How to Communicate It: “Over the past quarter, engineering has improved on-time delivery from 67% to 85%, reducing last-minute delays and improving cross-team alignment.”
🎯 How to Communicate It: “Currently, 54% of engineering work is dedicated to new feature development, while 32% is spent on maintenance and 14% on technical debt reduction.”
🎯 How to Communicate It: “We’ve reduced Lead Time for Change from 14 days to 9 days, improving our ability to respond to market demands faster.”
🎯 How to Communicate It: “New engineers ramp up to full productivity in 6 weeks on average, down from 8 weeks last year.”
Harness SEI provides efficiency, productivity and alignment dashboards that make engineering metrics clear, visual, and actionable for executives.
SEI’s DORA, Sprint Insights, and Business Alignment Dashboards provide high-level summaries while allowing leaders to drill into details when needed.
Rather than waiting for executives to ask, SEI highlights risks upfront (e.g., increasing cycle time, declining deployment frequency) and identifies bottlenecks.
Numbers alone don’t drive action—framing metrics as stories does. SEI allows engineering leaders to present data in a way that connects to business goals and influences decisions.

Engineering is no longer just about writing code—it’s about driving business value. By using Harness SEI to track and communicate on-time delivery, engineering capacity, deployment frequency, and business alignment, engineering leaders can:
✅ Influence executive decisions by aligning engineering work with company priorities.
✅ Improve collaboration across teams by providing visibility into engineering efforts.
✅ Proactively drive impact instead of reacting to business requests.
Ready to communicate engineering’s impact more effectively? Start leveraging SEI today to gain visibility, efficiency, and alignment across your organization.
👉 Learn more about Harness SEI here.


Developer productivity has become a critical factor in today's fast-paced software development world. Organizations constantly seek methods to enhance productivity, improve engineering efficiency, and align their development teams with strategic business goals. But navigating the complexities of developer productivity isn't always straightforward.
In this blog, we’ll hear from Adeeb Valiulla, Director of Engineering Excellence at Harness, as we answer some of the most pressing questions on developer productivity to help you optimize your teams and processes effectively.
Developer productivity refers to the efficiency and effectiveness with which software developers deliver high-quality software solutions. It encompasses the speed and quality of coding, reliability of deployments, the ability to quickly recover from failures, and alignment of development efforts with strategic business goals. High developer productivity means achieving more impactful outcomes with fewer resources, enabling organizations to stay competitive and agile in rapidly evolving markets.

Developer productivity directly impacts an organization's ability to deliver software quickly, reliably, and with high quality. High productivity enhances agility, reduces costs, accelerates feature delivery, and ultimately drives customer satisfaction and competitive advantage. Improving productivity not only benefits the business but also increases developer satisfaction by removing bottlenecks and empowering teams.
“In the hardware technology industry, a well-known global hardware company implemented an engineering metrics program under Harness’s and my guidance. This led to significantly boosted developer productivity. Their PR cycle time improved dramatically from nearly 3 days to under an hour, greatly enhancing delivery speed and agility.”

Yes, software developer productivity can be effectively measured. While measuring productivity isn't always simple due to the complexity of software development, several key metrics have emerged as valuable indicators:
These metrics, when applied carefully and contextually, provide actionable insights into developer productivity.
“In the Gaming Industry, Harness’ holistic approach to productivity, which emphasizes consistent developer engagement and effective scope management, enabled a gaming company to manage scope creep and improve their weekly coding days significantly. This strengthened their development workflow and productivity.”

Generative AI certainly has the potential to improve developer productivity, but the verdict is still out on whether it provides significant net improvements. GenAI helps developers write code faster by automating repetitive coding tasks, enhancing code reviews, predicting potential errors, and accelerating problem-solving. The vision is that AI-powered tools will help developers write cleaner, more reliable code faster, freeing them to focus on strategic, high-value tasks.

However, the time saved by using GenAI is not guaranteed to net as a productivity gain vs. new challenges GenAI brings, such as learning to prompt optimally, time spent learning and fixing the code it produces, and the potential system and software delivery lifecycle (SDLC) bottlenecks that can occur with the increased pace of new code that needs to be handled, deployed, and tested.
Tools such as Harness Software Engineering Insights (SEI) and AI Productivity Insights (AIPI) can help measure how, where, and with whom AI is having an impact (both positive and potentially negative), so that you can improve the likelihood that GenAI has a positive effect on your developer productivity.
Additionally, most GenAI developer tooling has focused on AI coding assistants. However, coding is only 30-40% of the work required to get software updates and enhancements delivered (the pipeline and SDLC stages mentioned above). That leaves 60-70% of the overall process that GenAI is not yet helping with. The Harness AI-Native Software Delivery Platform provides many AI agents that help automate about 40% of the non-coding portion of the SDLC.

Measuring developer productivity involves:
When measuring developer productivity, focus on outcome-based metrics rather than activity counts. DORA metrics (deployment frequency, lead time, change failure rate, and recovery time) provide valuable insights into team performance and delivery efficiency. Complement these with contextual data like PR cycle times, coding days per week, and the ratio of building versus waiting time.
Harness SEI implements dashboards that visualize these metrics by role, enabling managers to identify bottlenecks, engineers to track personal progress, and executives to monitor overall delivery health. To learn more, read our blog on Persona-Based Metrics.
Remember that measurement should drive improvement, not punishment—create a psychologically safe environment where data informs positive change rather than triggering defensive behavior.
Improving developer productivity requires a multi-faceted approach that addresses both technical and organizational constraints. Start by eliminating common friction points: reduce build times through better CI/CD pipelines, implement robust code review processes that prevent bottlenecks, and adopt standardized development environments that minimize "it works on my machine" issues. Investment in developer tooling often yields outsized returns.
Improving developer productivity requires:
Creating focused work environments is equally crucial. Research shows that developers need uninterrupted blocks of at least 2-3 hours to reach flow state—the mental zone where complex problem-solving happens most efficiently. Consider implementing "no-meeting days" or core collaboration hours to protect deep work time. Google's approach of 20% innovation time and Atlassian's "ShipIt Days" demonstrate how structured creative periods can boost both productivity and engagement.
Finally, regularly audit and reduce technical debt; Etsy's practice of dedicating 20% of engineering resources to infrastructure improvements ensures their codebase remains maintainable as it grows. The most productive engineering cultures view developer experience as a product itself—one that requires continuous investment and refinement.
“In the cybersecurity sector, teams following Harness’ Engineering Metrics Program consistently averaged over 4.5 coding days per week, demonstrating high developer engagement and productivity.”

In Agile environments, a deeper analysis of key metrics provides valuable insights into developer productivity:
Sprint Velocity serves as more than just a workload counter—it's a team's productivity fingerprint. High-performing teams focus less on increasing raw velocity and more on velocity stability, which indicates predictable delivery. By tracking velocity variance across sprints (aiming for less than 20% fluctuation), teams can identify external factors disrupting productivity. Leading organizations complement this with complexity-adjusted velocity, weighting story points based on technical challenge to reveal where teams excel or struggle with certain types of work.
Sprint Burndown Charts reveal productivity patterns beyond simple progress tracking. Teams should analyze the chart's shape—a consistently flat line followed by steep drops indicates batched work and potential bottlenecks, while a jagged but steady decline suggests healthier continuous delivery. Advanced teams overlay their burndown with blocker indicators, clearly marking when and why progress stalled, creating accountability for removing impediments quickly.
Commit to Done Ratio offers insights into planning accuracy and execution capability. The most productive teams maintain ratios above 80% while avoiding artificial padding of estimates. By categorizing incomplete work (technical obstacles, scope changes, or estimation errors), teams can systematically address root causes rather than symptoms. Some organizations track this metric over multiple sprints to identify trends and measure the effectiveness of process improvements.
PR Cycle Time deserves granular analysis, as code review often becomes a hidden productivity drain. Break this metric into component parts—time to first review, rounds of feedback, and time to final merge—to pinpoint specific improvement areas. Top-performing teams establish service-level objectives for each stage (e.g., initial reviews within 4 hours), supported by automated notifications and team norms. This detailed approach turns PR management from a black box into a well-optimized workflow with predictable throughput.
Harness SEI provides robust tracking of developer productivity by:
Harness SEI empowers teams to enhance productivity by clearly visualizing critical productivity metrics.

Adeeb emphasizes that improving developer productivity requires a holistic and human-centric approach: it’s not merely about tools and metrics but fundamentally about creating an environment where developers can consistently deliver high-quality output without unnecessary friction.
According to Adeeb, the key factors include:
Harness' approach advocates for an integrated strategy that aligns technology, processes, and culture, emphasizing developer well-being as central to sustainable productivity improvements.
Harnessing the right insights and strategies can transform your software development processes, driving efficiency, innovation, and growth. Ready to elevate your developer productivity to the next level? Discover the power of Harness Software Engineering Insights (SEI) and start achieving measurable improvements today.
Request a meeting or demo
Learn more: The causes of developer downtime and how to address them


Cloud cost management is crucial for organizations seeking to optimize their cloud spending while achieving maximum return on investment. With the rapid growth of cloud services, managing costs has become increasingly complex, and data teams often struggle to track and analyze spending effectively. This complexity makes it essential for organizations to implement effective cost reporting processes that can provide visibility into cloud expenses and enable informed decision-making.
Cloud cost reporting is critical for tracking, analyzing, and controlling cloud expenditures to ensure that the investment in cloud services aligns with business goals. Here’s why cloud cost reporting is essential and how it supports better decision-making, cost control, and overall financial management.
Harness Cloud Cost Management (CCM) offers comprehensive reporting tools designed to help businesses gain visibility and control over their cloud expenses. Harness CCM has several components that contribute to its reporting capabilities, making it easier to track, analyze, and optimize cloud costs across various platforms.
The anomaly detection feature in CCM helps organizations proactively monitor and manage cloud expenses by identifying instances of abnormally high costs.
Perspectives allow users to organize cloud resources in ways that align with specific business needs, such as by department, project, or region.
CCM's dashboards provide an interactive platform for visualizing and analyzing cloud cost data. Users can create custom dashboards to monitor various metrics relevant to their business, aiding in data-driven decision-making.
The Cost Categories feature in CCM enables users to organize and allocate costs effectively. By grouping expenses by business units, projects, or departments, users can gain a detailed view of where money is being spent. This feature is ideal for organizations that need to allocate cloud costs accurately across various internal groups or external clients.
Learn more about Cloud Cost Management by Harness, or book a demo today.


Cloud cost automation refers to the use of automated tools and processes to manage and optimize cloud spending. It involves the implementation of technologies that automatically analyze billing data, track resource utilization, and manage cloud resources in real-time. By automating tasks such as resource provisioning, scaling, and monitoring, organizations can efficiently control their cloud costs without manual intervention.
Cloud cost optimization can be achieved using cloud cost management tools. These tools track and categorize all cloud-related expenses, attributing them to the respective teams responsible for their consumption. This promotes accountability, encouraging teams to use resources judiciously while discouraging wasteful practices.
Ultimately, by implementing effective cloud cost management strategies and leveraging appropriate tools, organizations can achieve greater financial efficiency and align their cloud spending with business objectives and key results (OKRs). This proactive approach not only safeguards profit margins but also positions organizations for sustainable growth in a dynamic cloud landscape.
Utilizing external tools for cloud cost management brings a range of significant advantages that enhance financial efficiency and strategic alignment for organizations leveraging cloud services. Here are some of the key benefits:
Selecting the right cloud cost management tool is essential for optimizing your cloud spending and ensuring operational efficiency. Here are some key factors to consider in more detail:
Backstage is an open-source platform developed by Spotify that helps manage and centralize software development infrastructure. It's designed to serve as a developer portal, providing a unified interface for accessing tools, services, documentation, and resources within an organization.
Backstage is gaining traction for centralizing the developer experience, offering a unified portal for tools and services. Its service catalog improves discoverability, while templates automate workflows and reduce developer effort. As an open-source platform, it supports customization and integrations, making it a key solution for improving productivity and standardizing practices in modern software development.
Within this powerful ecosystem, we're excited to introduce Harness's Cloud Cost Management Plugin. This new addition to the Backstage platform brings comprehensive cloud cost visibility directly into your developer portal, addressing a critical need in modern software development.

As organizations increasingly rely on cloud infrastructure, managing and optimizing cloud costs has become a crucial aspect of software development. However, many teams struggle with limited visibility into their cloud expenses, difficulty in aligning costs with business contexts, and the time-consuming process of accessing and interpreting cost data.
Cloud Cost Management’s plugin offers several powerful features to enhance your Backstage experience:
Each of these features was carefully designed to empower development teams with the information they need to make cost-effective decisions and manage cloud resources more efficiently.

Installing and configuring the Harness Cloud Cost Management Plugin for Backstage is straightforward. Follow these steps to get started:
1. Install the plugin package in your Backstage app:

```bash
yarn add @harnessio/backstage-plugin-harness-ccm
```

2. Add the CCM content to your entity page (typically `packages/app/src/components/catalog/EntityPage.tsx` in a standard Backstage app):

```tsx
import {
  isHarnessCcmAvailable,
  EntityCcmContent,
} from '@harnessio/backstage-plugin-harness-ccm';

const ccmContent = (
  <EntitySwitch.Case if={isHarnessCcmAvailable}>
    {/* component name must match the export imported above */}
    <EntityCcmContent />
  </EntitySwitch.Case>
);
```

3. Annotate the component's `catalog-info.yaml` with the Perspective you want to display:

```yaml
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  annotations:
    harness.io/perspective-url: <harness_ccm_perspective_url>
```
Replace <harness_ccm_perspective_url> with your actual Harness Perspective URL. The plugin will use the group by, aggregation, time range, and visualization settings defined in this Perspective.
Harness’s Internal Developer Portal (IDP) offers a seamless integration with the CCM plugin, enhancing the development process with cost management insights.
By embedding cloud cost visibility directly into daily workflows, the CCM plugin ensures developers can make informed, cost-conscious decisions without needing to switch between tools.
Harness’s Cloud Cost Management (CCM) Plugin for Backstage brings real-time cost tracking and financial insights directly into the developer’s environment. This integration allows teams to make cost-efficient decisions, optimize cloud spend, and accelerate software delivery—all within a unified portal, driving productivity and financial control.
To learn more about the Harness Cloud Cost Management plugin, check out the plugin repository on GitHub. Sign up for free to start using Harness CCM and IDP today.
Explore more resources about CCM: Automating Cloud Cost Management
Learn about Harness CCM in comparison to some other tools: Harness CCM vs. Anodot, Harness CCM vs. Cast AI, Harness CCM vs. Finout, Harness CCM vs. CloudCheckr, Harness CCM vs. Zesty, Harness CCM vs. AWS Cost Management, Harness CCM vs. DoiT, Harness CCM vs. Aurea CloudFix, Harness CCM vs. AWS, Harness CCM vs. Stacklet


Cloud Cost Governance is the strategic framework that organizations adopt to manage, control, and optimize cloud spending effectively. It encompasses a set of practices, policies, and tools aimed at ensuring that cloud resources are used in a financially responsible manner while still aligning with broader business objectives.
Cloud Cost Governance is about gaining visibility into cloud expenditure, implementing cost-saving measures, and enforcing accountability across teams. It enables companies to monitor their cloud usage in real-time, identify inefficiencies, and set clear budgets to prevent cost overruns. This governance model plays a vital role in maintaining financial prudence while leveraging the flexibility and scalability of cloud platforms.
Cloud Cost Governance is essential for organizations seeking to harness the full potential of cloud computing while maintaining financial discipline. As cloud environments become more complex, businesses need structured approaches to ensure they remain cost-effective, compliant, and aligned with strategic goals. Here’s why Cloud Cost Governance is crucial:
Harness Cloud Asset Governance automatically eliminates cloud waste, ensuring compliance and freeing engineers to focus on innovation. Think of managing your cloud infrastructure like you would a busy shared refrigerator. Over time, if no one takes charge, things get messy—forgotten resources pile up, and inefficiencies grow, just like expired food taking up valuable space. This clutter not only wastes money but creates compliance and security risks, much like a neglected fridge could lead to health hazards.
Harness Cloud Asset Governance offers a better solution by automating cloud asset governance and providing clear visibility into cloud spend and efficiency. This tool helps organizations prevent cloud waste, optimize costs, and ensure resources align with corporate standards, all through a policy-driven governance-as-code approach.
Harness Cloud Asset Governance leverages Cloud Custodian, a widely adopted CNCF-backed open-source tool designed to streamline multi-cloud governance. While Cloud Custodian excels in policy support, it has some limitations: no GUI, no centralized reporting, and high management overhead, among other challenges. Harness eliminates these pain points by integrating AI Development Assistant (AIDA™), a natural language interface that simplifies policy creation and offers out-of-the-box governance rules.
Harness Cloud Asset Governance automates cloud cost governance, reduces cloud waste, and ensures compliance through a robust governance-as-code approach, ultimately empowering organizations to focus on innovation and efficiency.
Explore resources: Tackling Cloud Spend Challenges at Discover Dollar


AI-based coding assistants like Google Gemini Code Assist, GitHub Copilot, and others are becoming increasingly popular. However, the efficacy of these tools is still unknown. Engineering leaders want to understand how effective they are and how much they should invest in them.
Harness AI Productivity Insights is a new (beta) capability in Software Engineering Insights that helps engineering leaders understand the productivity gains unlocked by leveraging AI coding tools.
This targeted solution empowers engineering leaders to generate comprehensive comparison reports across diverse developer cohorts. It facilitates insightful analyses, such as evaluating the impact of AI Coding Tools on productivity by comparing developers who leverage these tools against those who don't. Additionally, it allows for comparisons between different points in time, tracking how developers' performance evolves as they adopt and grow their proficiency with AI Coding tools.

Customers can choose different types of comparison reports. The most common reports compare cohorts of developers who use coding assistants against those who don’t. Other supported comparisons include cohorts of developers with different metadata (for example, senior engineers versus junior engineers), or the same set of developers at different points in time.
For every report, customers can flexibly define the comparison cohorts either through manual selection or by utilizing existing metadata filters.

Customers can run multiple reports at any time. Reports will be saved and available to share within the organization.

Each report analyzes the productivity scores of both cohorts, calculating the productivity gain of the second cohort relative to the first. The analysis encompasses various facets of performance, including velocity and quality metrics. Additionally, the solution offers the option to gather qualitative insights through surveys distributed to all cohort members, enriching the quantitative data with user feedback.

AI Productivity Insights relies on source code management (SCM) systems for metrics collection. Customers can seamlessly integrate their preferred SCM platforms through convenient one-click integrations. To gain insights into AI Coding Tool usage, the solution also offers one-click integrations with these tools, enabling comprehensive data collection and analysis across the development ecosystem.
Let us know you are interested. We'd love to show you more and hear your feedback.
Cloud cost visibility is the process of tracking, analyzing, and understanding the expenses associated with cloud services, including applications, resources, and infrastructure. It provides organizations with a clear view of how their cloud resources are utilized and how much they are spending. This visibility is essential for making informed decisions about optimizing cloud costs.
Cloud cost visibility is not just about having access to the data but also about organizing and presenting it in a way that makes it easy to understand and act upon - in the form of dashboards, graphs or any other representation. Well-structured and well-analyzed data enables businesses to gain insights into cloud usage patterns, identify inefficiencies, and take corrective actions. With accurate visibility, organizations can forecast future cloud costs and allocate resources more efficiently, ensuring they are not overspending or underutilizing their cloud environment.
The first step is understanding that cloud cost visibility is a group effort. Building a FinOps team with members from development, operations, engineering, and finance ensures that everyone understands what visibility is, how it works, and what results it will yield. This collaboration helps align cloud cost management practices with business objectives. Identify all the decision-makers involved in this process and work out how they can collaborate to get the most out of your cloud spend.
Data is the backbone of cloud cost visibility. For the most accurate insights, make sure the data is collected from all of your cloud service providers, is accurate and up to date, and is granular enough to derive conclusions and patterns. Storing large amounts of data can be a challenge, but it is essential for historical analysis, comparisons, and setting budgets for the future.
Granular data helps create detailed reports, which in turn let you monitor which cloud resources are consuming the most budget, see how Reserved Instances are utilized, and track trends over time. You can have a huge amount of data, but converting it into dashboards and reports is critical for visualizing cloud cost data and extracting actionable insights. The key idea is simple - data must be arranged in a way that’s easy to interpret and act on.
Tags act as labels that tie cloud resources to specific departments, projects, or teams. A well-defined tagging strategy involves consistently labeling all resources so that no cloud spend goes untracked or misallocated. Manual tagging can become cumbersome, but automating the tagging process alleviates the burden on development teams. An effective tagging strategy makes it easier to forecast, budget, and identify cost-saving opportunities.
Anomaly detection, driven by AI and machine learning, allows you to stay on top of unexpected cloud usage spikes even when you’re not actively monitoring your reports. Anomaly detection tools detect instances of abnormally high costs and promptly notify users of these occurrences. You can use tools like Harness CCM to detect cost anomalies for your Kubernetes clusters and cloud accounts. CCM compares previous cloud cost spending with current spending, using statistical anomaly detection techniques and forecasting at scale to determine cost anomalies. These methods can detect various types of anomalies, such as one-time cost spikes and gradual or consistent cost increases.
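As a rough illustration of the idea (not the actual Harness CCM algorithm), a baseline-versus-current check might look like the following Python sketch; the z-score threshold and minimum increase are made-up parameters:

from statistics import mean, stdev

def is_cost_anomaly(daily_costs, today, z_threshold=3.0, min_increase=100.0):
    # Flag today's spend if it sits far above the recent baseline.
    baseline = mean(daily_costs)
    spread = stdev(daily_costs)
    z = (today - baseline) / spread if spread else float("inf")
    return z > z_threshold and (today - baseline) > min_increase

history = [410, 395, 420, 405, 415, 400, 412]
print(is_cost_anomaly(history, 980))  # True: a one-time cost spike
print(is_cost_anomaly(history, 430))  # False: normal day-to-day variation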
Effective budgeting and forecasting are key to achieving full cloud cost visibility. A robust cloud cost management system should allow you to track past, present, and future spending. AI-powered tools can help predict future cloud costs based on past usage trends, enabling more accurate financial planning. Budget alerts, proactive notifications, and tailored forecasts ensure that every team has the visibility needed to manage cloud costs efficiently.
At Harness, we leverage the power of AI and machine learning to streamline cloud cost management. Harness Cloud Cost Management (CCM) provides comprehensive tracking of cloud spending, enabling optimized expenditure.
From CCM Dashboards that help you visualize cloud cost data across clusters and cloud accounts, to Anomaly Detection that flags abnormally high costs and promptly notifies users, Harness ensures everything is automatically managed, allowing you to focus on your core business while significantly reducing costs.
Learn more about Cloud Cost Management by Harness, or book a demo today.


AWS Cloud Cost Management refers to the processes, tools, and practices used to plan, organize, report, analyze, and control the usage of Amazon Web Services (AWS) resources and their associated costs. Rather than simply categorizing its tools as cost management, AWS uses the term "cloud financial management," encompassing a broader range of services and optimization techniques.
The AWS cost management process involves the following:
AWS Cost Management is essential for organizations looking to optimize their cloud spending and enhance financial control. By implementing best practices for AWS Cost Management, companies can maximize their cloud investments, ensuring optimal performance while minimizing unnecessary expenses.
Amazon Web Services (AWS) offers a suite of free tools designed to help businesses monitor, analyze, and control their cloud costs. Below, we explore some of the key tools offered by AWS that can help you achieve better financial control over your cloud resources.
View the entire details here: AWS Cost Management

To help reduce cloud costs, AWS offers customers special discounted rates compared with on-demand costs. These discounts mostly take the form of RIs (Reserved Instances) or SPs (Savings Plans).
To maximize these benefits, analyze your usage patterns to determine which option best meets your needs. This proactive approach helps ensure that you are making cost-effective commitments. You can also use external tools to manage your commitments in the cloud.
Regularly reviewing and optimizing resource utilization is very important for AWS cloud cost management. Tools like AWS Trusted Advisor and AWS Compute Optimizer can identify underutilized or idle resources, and you can also use external tools like Harness CCM to optimize resource usage. Right-sizing instances and implementing Auto Scaling ensure that resource allocation matches demand, with no waste or added cost. Striking the right balance between provisioning enough resources and avoiding overprovisioning prevents unnecessary expense.
For organizations with multiple AWS accounts, consolidating them into a single organization can simplify billing and AWS cost management. With consolidated billing, you can view combined AWS costs across all accounts, which will, in turn, provide a clearer understanding of overall spending. This way, you can also access the volume discounts offered by AWS.
Spot Instances are a cost-effective way to leverage spare Amazon EC2 compute capacity at significant discounts compared to On-Demand prices. Spot Instances offer discounts of up to 90% for the same performance as On-Demand instances, which makes them a great option for cost savings.
Please note, however, that their availability is subject to Amazon's two-minute interruption notice. For Amazon EKS, Harness helps you work around this with the Cluster Orchestrator for EKS (currently in Beta). The Harness Cluster Orchestrator for Amazon Elastic Kubernetes Service (EKS), a component of the Harness Cloud Cost Management (CCM) module, scales EKS cluster nodes according to actual workload requirements. Additionally, by leveraging CCM’s distributed Spot orchestration capability, you can save up to 90% on cloud costs with Amazon EC2 Spot Instances.
At Harness, we leverage the power of AI and machine learning to streamline cloud cost management. Harness Cloud Cost Management (CCM) provides comprehensive tracking of cloud spending, enabling optimized expenditure. From Recommendations that help you better manage and allocate resources to AutoStopping rules that automatically shut down idle resources when not in use, Harness ensures everything is automatically managed, allowing you to focus on your core business while significantly reducing costs. Learn more about Cloud Cost Management by Harness, or book a demo today.


Traditional cloud resource recommendations are typically prescriptive, often providing limited options for fine-tuning. But what if you needed a recommendation for a niche product that's not widely used in your organization? Or maybe you want to heavily customize each parameter or introduce another dimension, like network throughput, into the recommendation process? Unfortunately, there's no straightforward way to achieve this without writing custom scripts.
Custom Recommendations, powered by Cloud Asset Governance, enable you to create tailor-made recommendations with just a simple YAML policy, and Harness AI can generate these policies automatically. Custom recommendations leverage the power of policy-as-code and the simplicity of recommendation workflows to simplify lifecycle management.
Recommendations go beyond cost optimization, extending to various use cases such as security, compliance, and tag automation. For example, if you wanted to create a custom recommendation to optimize the startup performance of Lambda functions, you could use a straightforward policy that identifies candidates for SnapStart, generating a list of these candidates as a custom recommendation. Here’s an example policy for the use case mentioned:
policies:
  - name: aws-lambda-java-snapstart-off
    resource: lambda
    description: |
      Lambda SnapStart for Java can improve startup performance for latency-sensitive applications by up to 10x at no extra cost, typically with no changes to your function code. The largest contributor to startup latency (often referred to as cold start time) is the time that Lambda spends initializing the function, which includes loading the function's code, starting the runtime, and initializing the function code. With SnapStart, Lambda initializes your function when you publish a function version. Lambda takes a Firecracker microVM snapshot of the memory and disk state of the initialized execution environment, encrypts the snapshot, and caches it for low-latency access. When you invoke the function version for the first time, and as the invocations scale up, Lambda resumes new execution environments from the cached snapshot instead of initializing them from scratch, improving startup latency. You can use SnapStart only on published function versions and aliases that point to versions. You can't use SnapStart on a function's unpublished version ($LATEST).
    filters:
      - type: value
        key: Runtime
        op: regex
        value: '^Java.*'
      - type: value
        key: SnapStart.OptimizationStatus
        op: eq
        value: "Off"
      - not:
          - type: value
            key: Version
            op: eq
            value: "$LATEST"
By building Cloud Asset Governance on top of the open-source Cloud Custodian, we can leverage its extensive existing coverage and seamlessly integrate new capabilities as they emerge. This approach allows us to support all major cloud assets across leading cloud providers, enabling us to offer recommendations that are tailor-made to any cloud resource.
AWS resource coverage includes EC2 instances, S3 buckets, Lambda functions, RDS instances, and CloudFormation stacks. (Comprehensive list)
Azure Resource Coverage includes Virtual Machines (VMs), Storage accounts, App services, Cosmos DB accounts, and Key Vaults. (Comprehensive list)
GCP Resource Coverage includes Compute Engine instances, Cloud Storage buckets, App Engine applications, Cloud SQL instances, and Cloud IAM policies. (Comprehensive list)
policies:
  - name: delete-underutilized-redshift-cluster
    resource: redshift
    filters:
      - type: metrics
        name: CPUUtilization
        days: 7
        period: 86400
        value: 5
        op: less-than
    actions:
      - delete
If your Redshift costs are skyrocketing, a quick way to reduce them is by identifying and eliminating underutilized instances. Begin by looking at clusters whose CPU usage has been below 5% over the last seven days. These low-utilization clusters are prime candidates for deletion. You can leverage a YAML-based policy, like the one above, to define the tuning parameters and translate the rule into a custom recommendation with a simple toggle.

The engine will pick this up, generate recommendations across the estate, and surface them to the user. You can leverage recommendation workflows to get users to take action by leveraging ticketing integrations and moving recommendations across various states of active, ignored, and completed.

As cloud adoption continues to accelerate, intelligent cost optimization has become essential. At Harness, we recognize that effective cost management requires knowledge tailored to each team, department, and organization. To support this, Harness Cloud Cost Management (CCM) equips enterprises with the tools needed to manage recommendations with a high degree of control, fostering democratization across the board.
Visit the Harness CCM web page or book a demo today to discover how custom recommendations can shift your team's perspective on incorporating cost as a crucial dimension for optimization.


Azure cost optimization refers to optimizing and reducing the costs associated with Azure cloud services. This includes various strategies and techniques to enhance resource utilization, minimize waste, and align expenses with an organization's financial goals. Cost optimization can be achieved by analyzing usage patterns, identifying inefficiencies, and using various tools to cut costs without compromising performance. Azure cost optimization is not only about saving money but also about strategically managing investments and maximizing the returns of using Azure cloud services.
Manually overseeing and implementing all the cost optimization techniques can be daunting, which is why Harness provides a comprehensive suite of features designed to streamline Azure cost optimization. Harness CCM leverages machine learning and AI to manage your cloud expenses effectively, minimizing waste and maximizing efficiency. From Recommendations that help you better manage and allocate resources to AutoStopping rules that automatically shut down idle resources when not in use, Harness ensures everything is automatically managed, allowing you to focus on your core business while significantly reducing costs.
For organizations utilizing AWS EKS clusters, the Cluster Orchestrator (currently in Beta) optimizes both performance and cost. Additionally, the Commitment Orchestrator simplifies the management of Reserved Instances and Savings Plans.
Together, these features help you achieve cloud cost optimization, better performance, and enhanced cost efficiency.
Learn more about Cloud Cost Management by Harness, or book a demo today.
By integrating ServiceNow with Harness SEI, you can:
This integration provides a new data source for Harness SEI, enabling a more comprehensive and accurate measurement of your software delivery performance.
The SEI ServiceNow integration offers two authentication methods:
Choose the method that best suits your requirements and follow our ServiceNow integration help doc for detailed setup instructions.
This integration now allows you to monitor activity and measure crucial metrics for your change requests and incidents from the ServiceNow platform. You can consolidate reporting, combining ServiceNow data with other metrics from Harness SEI, and create customizable dashboards (i.e., Insights) that focus on the metrics most crucial to your team's success.
A key advantage of this integration is its robust support for DORA metrics such as Deployment Frequency, Change Failure Rate and Mean Time to Restore.
The DORA Mean Time To Restore metric helps you understand how quickly your team can recover from failures. By configuring a DORA Workflow Profile with the ServiceNow integration, you can precisely measure the time between incident creation and resolution.
This report measures the duration from when an incident was created to when the service was restored - in other words, the time from incident creation to incident closure.
With this information, you can set and track Mean Time to Restore (MTTR) goals, driving continuous improvement in your team's ability to address and resolve issues quickly.
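As a simple illustration of the calculation (the data shape below is hypothetical, not the ServiceNow or SEI schema), MTTR is just the average time from incident creation to incident closure:

from datetime import datetime

def mean_time_to_restore_hours(incidents):
    # Each incident is a (created, closed) timestamp pair; illustrative only.
    durations = [(closed - created).total_seconds() / 3600
                 for created, closed in incidents]
    return sum(durations) / len(durations)

incidents = [
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 13, 30)),  # restored in 4.5 hours
    (datetime(2024, 5, 3, 22, 0), datetime(2024, 5, 4, 1, 0)),   # restored in 3 hours
]
print(mean_time_to_restore_hours(incidents))  # 3.75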

Understanding your deployment cadence is key to achieving continuous delivery. You can define DORA profiles using the ServiceNow integration to track how often you deploy. You have the flexibility to track deployments as either Change Requests or Incidents, though Change Requests are recommended for more accurate deployment tracking.

The DORA Deployment Frequency report will display metrics on how often change requests are resolved. This enables you to perform trend analysis, helping you see how your change request resolution frequency changes over time. With this information, teams can identify patterns and optimize their processes, moving toward a more efficient continuous delivery model.
You can set up the DORA profile definition for Change Failure Rate to monitor the failed deployments from the ServiceNow platform. This links change requests to incidents. Change requests represent the total deployments (when a change request is resolved, it means a deployment is completed). Incidents indicate a failure caused by these deployments (when a change request is resolved but later causes an incident).
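In other words, Change Failure Rate is the share of deployments (resolved change requests) that later caused an incident. A minimal sketch with made-up counts:

def change_failure_rate(resolved_change_requests, linked_incidents):
    # Illustrative only: percentage of deployments that resulted in an incident.
    return linked_incidents / resolved_change_requests * 100

print(change_failure_rate(40, 3))  # 7.5% of deployments caused a failure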

This integration bridges the gap between operational data in ServiceNow and development metrics in Harness SEI, providing a holistic view of the entire software delivery lifecycle.
With these insights at your fingertips, you can make more informed decisions, prioritize improvements effectively, and ultimately deliver better software faster and more reliably. Contact Harness Support to try this out today.


As organizations increasingly rely on cloud services to power their digital transformation initiatives, managing and optimizing cloud costs has become a top priority. Mismanaged commitments, lack of governance and not leveraging heavily discounted excess cloud capacity can quickly lead to inflated cloud bills, hindering innovation and growth.
To address these challenges head-on, we’re excited to launch three powerful new capabilities that add to our comprehensive cloud cost optimization and FinOps solution designed to help organizations take control of their spiraling cloud costs. Expanding on our current cloud cost management capabilities, this powerful trio empowers organizations to master cloud cost optimization through governance-as-code, automated commitment management, and efficient cluster node auto-scaling.

Cloud Asset Governance leverages the power of policy-as-code to automate cost management, security, and compliance tasks across your multi-cloud environments. This feature enables you to easily create and enforce governance policies that eliminate cloud waste, ensure adherence to security standards, and maintain continuous compliance. Out-of-the-box policies, such as upgrading to cheaper and faster Amazon Elastic Block Store (EBS) volume types, make it effortless to optimize your cloud resources from day one. And, with the help of the Harness AI Development Assistant (AIDA™), you get an AI-powered partner for streamlining rule creation and governance. AIDA can help you create custom rules with simple language prompts, validate those rules, and provide insights into existing rules, making governance easier than ever.
You can leverage Cloud Asset Governance for any cloud resource across AWS, Azure, and Google Cloud Platform. The rules you create can contain any filters and perform any actions. All of this is through simple YAML policies.

Our Commitment Orchestrator empowers you to maximize the commitment coverage of your AWS EC2 compute spend and the savings you realize. Commitment Orchestrator uses machine learning to forecast compute spend at an instance family level and automates the purchasing and management of Reserved Instances (RIs) and Savings Plans to match compute spend patterns over time. By continuously optimizing your commitment utilization, it proactively identifies opportunities for reallocation or exchanges and ensures comprehensive commitment coverage of your compute resources and utilization of your commitment purchases. It also provides optional manual approval, account exclusions, and configuration options for more granular control.

Our Cluster Orchestrator for Amazon Elastic Kubernetes Service (EKS) provides workload-driven intelligent node autoscaling, enabling you to manage your cluster infrastructure efficiently. With built-in Spot orchestration capabilities, you can achieve up to 90% cost savings on compute costs by running your workloads on Spot instances without compromising on availability. It offers automated optimization and a simplified approach to cluster node resizing. It also provides distributed spot orchestration, with the ability to run replicas of the same workload across both spot and on-demand nodes, with a base on-demand count configured to ensure stability while still leveraging the significant cost savings from spot pricing.
As cloud adoption continues to accelerate, intelligent cost optimization has become a necessity. Harness is at the forefront of this movement, providing enterprises with the tools they need to intelligently manage and optimize their cloud resources, commitments, and container environments.
Together, these three new capabilities form a comprehensive solution for intelligent cloud cost optimization, adding to our existing features. Cloud Asset Governance ensures continuous compliance, Commitment Orchestrator maximizes savings from cloud commitments, and Cluster Orchestrator optimizes container orchestration and Spot orchestration.
To learn more about these powerful new capabilities and how they can benefit your organization, visit the CCM webpage or book a demo today.


In the dynamic world of cloud computing, balancing the introduction of new features with managing costs is a constant challenge for businesses. Harness, a leading DevOps platform, is taking a bold step towards simplifying this delicate balance by integrating its Feature Flags and Cloud Cost Management (CCM) modules. This strategic move aims to empower customers to effortlessly identify potential cost anomalies resulting from changes in feature flag statuses.
When a feature is toggled on or off, it can impact running costs significantly. For example, enabling a feature that introduces caching using GCP Memorystore might enhance user experience but can lead to increased data storage costs and higher request volumes. The goal of this integration is to make it seamless for customers to pinpoint instances where enabling a feature may result in cost anomalies.
Cloud Cost Management (CCM)
What is CCM?
CCM provides detailed insights into resource consumption, allowing engineers and DevOps teams to monitor costs hourly.
What is a Perspective?
Perspectives group resources in meaningful ways, offering a unified view of cloud cost data across environments. Users can create perspectives based on various criteria, such as account, environment, service, region, product, label, namespace, workload, etc.
What is an Anomaly?
CCM's anomaly detection alerts users to significant increases in cloud costs, helping track potential waste and unexpected charges. It compares current spending with previous cloud cost data to detect anomalies.
How it Works
Each feature flag environment will be associated with a CCM perspective. When a flag is changed within an environment, it will be added to a watch list for 24 hours. Every hour, CCM will be queried to identify anomalies in the associated perspective. If an anomaly is reported, the Feature Flags module will correlate it with any flags currently on the watch list. The Feature Flags UI will display the presence of anomalies, enabling users to quickly identify which flags might be related to reported anomalies.
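For illustration, the flow described above could be sketched roughly as follows; the function names and the CCM query callable are hypothetical placeholders, not the Harness APIs:

from datetime import datetime, timedelta

WATCH_WINDOW = timedelta(hours=24)
watch_list = {}  # flag identifier -> (environment, time the flag was changed)

def on_flag_changed(flag, environment):
    # A changed flag is watched for 24 hours.
    watch_list[flag] = (environment, datetime.utcnow())

def hourly_anomaly_check(query_ccm_anomalies):
    # Every hour: query CCM for anomalies in the perspective associated with
    # each watched environment and correlate them with recent flag changes.
    now = datetime.utcnow()
    correlations = []
    for flag, (environment, changed_at) in list(watch_list.items()):
        if now - changed_at > WATCH_WINDOW:
            del watch_list[flag]  # watch window expired
            continue
        for anomaly in query_ccm_anomalies(environment):
            correlations.append((flag, anomaly))  # surfaced in the Feature Flags UI
    return correlations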
Customer Value
Customers will benefit from enhanced visibility into their cloud spend's relationship with feature changes. For instance, if a flag change in a production environment results in a tripled cloud spend, users can quickly identify potential correlations, whether positive or negative.
Harness's integration of Feature Flags and Cloud Cost Management represents a leap forward in providing customers with actionable insights to optimize both feature development and cloud costs. This harmonious approach aligns seamlessly with the ever-evolving landscape of cloud computing, empowering organizations to make informed decisions and achieve the delicate balance between innovation and cost management.
Learn more with comparison guide: Harness CCM vs. Flexera
Sign up for a free trial of Harness Feature Flags today and start experiencing the benefits of reliable release management together with Cloud Cost Management.


As cloud usage expands exponentially, companies are struggling to rein in their ballooning cloud costs. Optimization often feels like a never-ending battle, with one-time fixes providing only temporary relief in complex cloud environments that constantly change. The key to sustainable cost management is implementing proactive governance - setting intelligent policies to automatically curb waste and enforce resource efficiency as the cloud evolves. But realistically, manually creating and updating those policies is incredibly time-consuming, prone to oversights, and rarely keeps up with the cloud's rapid growth. This process leaves costs accumulating unchecked.
That's where Harness Cloud Asset Governance comes in. As part of the Harness Cloud Cost Management (CCM) suite, it is an automated solution that establishes and maintains optimal policies to govern your cloud usage and cost. Built on top of Cloud Custodian, Harness Cloud Asset Governance continuously analyzes resource consumption and spending across your entire cloud environment. It identifies opportunities for efficiency and savings and then automatically generates precise policies tailored to your infrastructure and workloads. These smart policies enforce actions like resource scaling, rightsizing, shutdown scheduling, and more to optimize usage and eliminate waste. With the help of the Harness AI Development Assistant (AIDA™), the result is proactive, optimized cost management that finally makes the cloud work for you - without the grueling manual effort.
Harness AIDA™ stands for Harness AI Development Assistant, and specifically within CCM, it's your AI-powered partner for streamlining rule creation and governance. With AIDA, you’ll get numerous benefits:

Let's break down how you'd actually use AIDA to turn cloud cost chaos into well-governed efficiency:
To learn more about creating rules with AIDA, check out our documentation.
AIDA isn't just about writing code faster. It also democratizes cost optimization by enabling your entire team to comprehend and contribute.
AI solutions like Harness AIDA™ aim to empower and augment human capabilities. By handling tedious, repetitive tasks like configuring complex cost allocation rules, AIDA frees up time for FinOps and engineering teams to focus on more strategic efforts. As cloud environments scale rapidly, Harness CCM’s capacity for governance and AI-powered automation ensures consistent oversight no matter the footprint size or complexity. And by providing clear, explainable logic behind each cost allocation decision, AIDA demystifies cloud finances across the wider organization. Cost-conscious practices can now become a shared responsibility.
If you're ready to move from reactive cloud cost firefighting to proactive, AI-assisted governance, try Harness CCM today or book a personalized demo. It's time to take back control and make those cloud bills a lot less scary.
A cloud PoC, or proof of concept, is a way to test and demonstrate the feasibility of using cloud computing to solve a specific business or technical problem. It can also be used to evaluate different cloud providers and services to determine which one is the best fit for your needs.
A cloud PoC can help you:
When choosing an application for your cloud PoC, there are a few things to keep in mind:
If you are new to cloud computing, there are a few things you need to do to get started:
Harness Cloud Cost Management (CCM) is a platform that helps organizations optimize their cloud costs. It provides visibility into cloud spending, recommendations for cost savings, and automated cost optimization measures.
Harness Cloud Cost Management can be used to support cloud PoCs in a number of ways:
A cloud PoC is a valuable tool for evaluating the feasibility of using cloud computing to solve your business or technical problems. By choosing the right application for your PoC and using Harness Cloud Cost Management to optimize your costs, you can ensure that your PoC is a success.
Here is an example of how Harness CCM can be used to support a cloud PoC for a machine learning application:
A company is considering migrating its machine learning workload to the cloud. To validate this decision, the company decides to conduct a cloud PoC.
The company chooses Harness CCM to estimate the cost of running its machine learning application in the cloud. Harness Cloud Cost Management provides the company with a detailed estimate of the cost of computing, storage, and networking resources.
The company then deploys its machine learning application to the cloud using Harness CCM, which then tracks the company's cloud spending during the PoC.
Once the PoC is complete, the company uses Harness Cloud Cost Management to analyze its cloud spending and identify any potential cost-saving opportunities. The company then uses the insights from the PoC to make a decision about whether or not to migrate its machine learning workload to the cloud.
Harness Cloud Cost Management can be used to support cloud PoCs for a wide variety of applications, including machine learning, web applications, and enterprise applications.
Want to learn more about how Harness CCM can support your cloud initiatives? Explore our resources or book a demo to reach out to our team.
As a developer or development manager, you know how important it is to measure productivity. With your software development team racing against the clock to deliver a new feature, you're probably keen on boosting productivity and ensuring your team hits every sprint milestone as planned. However, it's not uncommon for sprints to fail, and the process can break down in various ways.
When sprint results are broken, it can have a significant impact on the quality of the product being developed. One of the most significant challenges faced by developers working in agile environments is burnout. Developer burnout can occur when team members feel overwhelmed by the amount of work assigned to them during a sprint.
This can happen due to various reasons such as:
To avoid burnout, it's essential to plan sprints carefully, taking into account the team's capacity, skill sets, and potential roadblocks. Effective sprint planning involves setting achievable goals, prioritizing tasks based on their importance and urgency, estimating tasks accurately, allocating resources efficiently, and monitoring progress. To accomplish all of this, you need to have a clear understanding of your team's capabilities, strengths, and limitations.
By considering these factors and using relevant metrics, you can create a well-planned sprint that sets your team up for success and helps prevent burnout.
But with so many different metrics to choose from, it can be tough to know where to start. That's why we've put together this list of the top 3 sprint metrics for measuring sprint success. These metrics are straightforward, easy to understand, and will give you valuable insights into how your team is performing.
Developer churn in a sprint refers to the degree of change experienced in the set of tasks or work items allocated to a development team during a sprint cycle. More specifically, churn represents the total number of task additions, deletions, or modifications made after the initial commitment phase of the sprint. A higher level of churn indicates increased instability and fluctuation within the sprint scope, which often leads to several negative consequences impacting both productivity and morale.
For example, let's say your team is working on a new feature that requires several stages of development, including design, coding, testing, and review. If the tasks associated with this feature are modified more often than expected, it may indicate issues with communication between teams or a lack of clarity in certain stages of the process. By tracking developer churn, you can pinpoint these issues and make changes to improve efficiency.
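Conceptually, churn is just a count of scope changes after the sprint commitment. Here is a minimal sketch; the event shape and day numbers are hypothetical:

def sprint_churn(events, committed_at):
    # Count task additions, removals, and modifications made after commitment.
    return sum(1 for event in events
               if event["day"] > committed_at
               and event["type"] in {"added", "removed", "modified"})

events = [
    {"type": "added",    "day": 0},  # added before commitment, not churn
    {"type": "added",    "day": 3},
    {"type": "modified", "day": 5},
]
print(sprint_churn(events, committed_at=1))  # 2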
Another essential metric to track developer productivity is comparing what the team planned to deliver versus what they actually completed within a given sprint. This comparison offers an overview of the team's ability to commit and adhere to realistic goals while also revealing potential bottlenecks or process improvements needed.
Let's say your development team plans to complete 60 story points worth of work during a two-week sprint. At the end of the sprint, the team managed to complete only 50 story points. In this scenario, the "planned" value was 60 story points, but the "delivered" value was only 50 story points. This result indicates that there might be some challenges with estimating task complexity or managing time constraints.
The difference between the planned and delivered values could trigger discussions about improving estimation techniques, setting more realistic targets, or identifying any obstacles hindering the team from meeting its goals. Over multiple sprints, tracking these metrics will provide insights into whether the gap between planned and delivered values decreases over time, indicating improvement in productivity and efficiency.
Velocity is a measure of how much work your team completes during a given period, usually a sprint or iteration. It's calculated by summing up the story points completed during the sprint (some teams also normalize this per sprint day). Velocity helps you understand how much work your team can handle in a given period and allows you to plan future sprints accordingly.
For example, if your team has a velocity of 50 story points per sprint, you know that you can expect them to complete around 50 story points worth of work in a two-week sprint. This information can help you prioritize tasks and allocate resources effectively, ensuring that your team stays on track and delivers quality results.
Measuring these metrics accurately is crucial to gain meaningful insights into your team's performance and identify areas for improvement.
Here are some ways to measure these metrics accurately using Harness SEI:




By using these reports on Harness SEI, you can measure sprint metrics accurately and gain insights into your team's performance.
To learn more, schedule a demo with our experts.
Cloud Custodian is a widely used open-source cloud management tool, backed by the CNCF, that helps organizations enforce policies and automate actions to maintain a well-managed cloud environment. It operates on the principle of declarative, YAML-based policies. With support for multiple cloud providers, including AWS, Azure, and Google Cloud, Cloud Custodian enables users to maintain consistent policies and governance practices across diverse cloud environments, making it particularly appealing for organizations embracing a multi-cloud strategy.
While Cloud Custodian comes with all the goodness of battle testing by the community and can detect and auto-remediate issues, it does come with its own set of challenges. Let's dive into the key challenges organizations run into when leveraging Cloud Custodian at scale to manage their assets.
Harness Cloud Asset Governance leverages all of the goodness of Cloud Custodian, such as its comprehensive coverage of governance policy support across cloud providers, while solving the challenges that come with self-hosting Cloud Custodian.
Harness provides a rich set of preconfigured governance-as-code rules that are easy to implement out of the box. But who doesn't like customization, and how do we support it?
We leverage our AI Development Assistant (AIDA™) to power Cloud Asset Governance with a natural language interface that eliminates the need to understand YAML syntax to author policies.

Harness Cloud Asset Governance serves as a fully managed and scalable rule execution engine. This allows you to concentrate on establishing guardrails, while Harness takes care of the intricacies of management overhead. In addition, Cloud Asset Governance provides detailed Role-Based Access Control and Audit trails. This feature empowers you to precisely assign access permissions, determining who has the authority to execute specific policies and in which cloud accounts.
Moreover, Harness includes a user-friendly visual interface, minimizing friction and improving the usability of Cloud Custodian. This interface streamlines the process of reviewing policy evaluations at any point in history and provides a clear view of the outcomes of those evaluations.

At times, determining what to execute and understanding how to save costs through policy guardrails can be challenging. Even with contextual knowledge, the question remains: how do we disseminate this understanding throughout the entire team? This is where Out-of-the-Box Recommendations come into play. We conduct policy assessments in the background to pinpoint cost-saving opportunities and present them through the visual interface.

In summary, while Cloud Custodian offers robust cloud management capabilities, it comes with notable challenges, including the absence of a graphical interface, scalability issues, and limitations in reporting and security features. Harness Cloud Asset Governance steps in as a strategic enhancement, retaining the strengths of Cloud Custodian while mitigating its drawbacks.
Harness introduces preconfigured governance-as-code rules, simplifying policy implementation, and distinguishes itself through the integration of AI Development Assistant (AIDA™) for a natural language interface during policy authoring. With a fully managed and scalable rule execution engine, Harness ensures organizations can establish effective guardrails without grappling with operational complexities. The platform's user-friendly visual interface, Role-Based Access Control, and detailed Audit trails contribute to a seamless and efficient governance experience, providing centralized visibility and precise access management. By choosing Harness Cloud Asset Governance, organizations can optimize their cloud governance, overcoming the challenges associated with self-hosting Cloud Custodian while enjoying enhanced customization and usability.
Transform your path to a well-managed cloud with governance-as-code and try Harness Cloud Asset Governance now to receive automatic recommendations that can save you money, improve compliance, and reduce security risks. Book a demo to learn more!


Executives often ask a crucial question - "What value is your team bringing to the organization?" As an engineering team, you should develop your own metrics to demonstrate your team's growth and contributions. This is necessary because marketing and sales have their own metrics for deals and leads.
This blog will explain the benefits of creating and managing a Developer metrics dashboard. It can help gain insights into the engineering team's work and identify areas that require attention. We will examine the problems with outdated tools for measuring developer productivity and provide solutions to overcome them. This way, you can accurately assess the business value your engineering team brings.
Understanding the health and productivity of your team is essential for any engineering organization. To achieve this, you can use Developer Insights to show your team's value and performance through metrics. Like a player's career graph, these dashboards show how efficient and productive your developers are.
Having reliable, up-to-date, organized, user-friendly data with the right measurements is crucial. Although many teams use metrics, only a few use the right ones. Choosing the right metrics is crucial for understanding your team's productivity and efficiency accurately.
Here are the top four metrics that can help executives understand your organization's true status.
Measuring development efficiency is important. Cycle time is a key metric that gives a brief overview of all stages involved. But only looking at the total number can be limiting, as there might be many reasons for a long cycle time.
To better understand the problem and find its main cause, it's best to divide the process into different stages.
These stages include the time it takes to make the first commit, the time to create a pull request, the activity and approval time within the PR, and finally the time it takes to merge the change into the main codebase. You can analyze each stage separately.
This will help you identify the specific areas where your development process is struggling, so you can make a plan to fix the problems that are slowing down your team's productivity.

Workload is the term used to describe the number of tasks that a developer is handling at any given time. When a developer has too many tasks, they may switch between them frequently. This frequent switching can lower productivity and eventually lead to burnout.
You can track the amount of work assigned to developers and in progress. This will help you determine who is overloaded. You can then adjust priorities to avoid harming productivity.
Moreover, tracking active work can help you determine whether your team's tasks align with your business goals. You can use this information to reorganize priorities and ensure that your team is working efficiently towards your goals.
According to studies, smaller pull requests help reduce cycle time. This may come as a surprise, but it makes sense once you think about it.
Reviewers are more inclined to promptly handle smaller PRs as they are aware that they can finish them more swiftly. If you notice that the pickup and review times for your team's PRs are taking too long, try monitoring the size of the PRs. Then, you can help developers keep their PRs within a certain size, which will reduce your cycle time.
Rework refers to any changes made to existing code, regardless of its age. This may include alterations, fixes, enhancements, or optimizations. Rework metrics let developers measure how much existing code is being changed and assess code stability, change frequency, and development efficiency.
By measuring the amount of change made to existing code, developers can assess the quality of their development efforts, find code problems, improve the development process, and prevent future rework.
As the common adage suggests, acknowledging that you have an issue is the initial step towards improvement. However, it's equally important to identify the problem accurately, or else improvement will be impossible.
This is especially true for software teams. Complicated processes in an engineering team can easily fail and finding the main problem is often difficult. That's where metrics come in.
A Developer Insight (i.e., a dashboard) displays your engineering team's progress and helps identify areas where developers may struggle. By identifying the problem areas, you can provide solutions to improve the developer experience, which ultimately increases productivity.
Even with the best metrics, a dashboard that isn't accurate, current, unified, and easy to understand won't be very useful.
Harness SEI can help you create an end-to-end developer insight (i.e. Dashboard) with all the necessary metrics. The distinguishing factor of Harness SEI is its ability to link your git data and Jira data together. This helps you understand how your development resources are used, find obstacles for developers, and evaluate your organization's plan efficiency.
Once you understand what's going on with your teams, you can set targets to create an action plan for your developers. For example, you can reduce your PR sizes.
You can also use various reports on Harness SEI to measure and track your cycle time and lead time.
By providing a comprehensive set of essential parameters, including code quality, code volume, speed, impact, proficiency, and collaboration, SEI enables engineering teams to gain deeper insights into their workflows.
The Trellis Score, a proprietary scoring mechanism developed by SEI, offers an effective way to quantify team productivity. With this information at hand, engineering teams can leverage SEI Insights to pinpoint areas requiring improvement, whether they relate to people, processes, or tools. Ultimately, SEI empowers organizations to optimize their development efforts, leading to increased efficiency and higher-quality outputs.
To learn more, schedule a demo with our experts.


In the ever-evolving landscape of software development, the significance of producing high-caliber code is undeniable. This is where Harness Software Engineering Insights (SEI) shines, guiding teams toward elevated software quality, enhanced productivity, and overall excellence. Here, we delve deep into the pivotal role of SEI's Quality Module in aiding teams to gauge, supervise, and uplift their code quality.
The Trellis Framework: At the heart of SEI's transformative potential is the industry-validated Trellis Framework. This intricate design provides a comprehensive analysis of over 20 factors from various Software Development Life Cycle (SDLC) tools, enabling teams to efficiently track and optimize developer productivity.

Lagging indicators are retrospective measures, offering insights into past performance. Let's break down these metrics:
Defect Escape Rate: This metric, crucial for understanding production misses, measures the percentage of defects that go undetected before release and reach the customer. A higher defect escape rate can signal poor quality control, leading to customer dissatisfaction; a simple calculation sketch follows these definitions.
Escapes per Story Point or Ticket: This indicates the number of defects per unit of work delivered. An elevated number here can point to quality lapses in development.
Change Failure Rate: This metric measures the percentage of changes leading to failures, indicating the robustness of the product.
Severity of Escapes: This highlights the seriousness of defects, with higher severity demanding urgent attention.
APM Tools - Uptime: This measures product availability and performance; a higher uptime percentage indicates good product quality.
Customer Feedback: Direct customer feedback, both positive and negative, provides valuable insights into product quality.
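To make the first two lagging indicators concrete, here is a minimal sketch of the underlying arithmetic; the counts are invented for illustration:

def defect_escape_rate(escaped_defects, total_defects):
    # Percentage of all defects that escaped pre-release testing.
    return escaped_defects / total_defects * 100

def escapes_per_story_point(escaped_defects, story_points_delivered):
    # Escaped defects per unit of delivered work.
    return escaped_defects / story_points_delivered

print(defect_escape_rate(4, 50))        # 8.0% of defects reached customers
print(escapes_per_story_point(4, 120))  # about 0.03 escapes per story point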
Leading indicators predict current or future performance. We explore these further:
SonarQube Issues: This includes code smells, vulnerabilities, and code coverage. Issues flagged here can indicate quality concerns in the codebase.
Coverage % by Repos: Evaluating code coverage percentage across various repositories.
Automation Test Coverage: A higher percentage here suggests a robust, reliable product.
Coding Hygiene: Measures such as code reviews and comments improve code maintainability and reduce defect risks.
Program Hygiene: This includes acceptance criteria and clear documentation to ensure the product meets requirements.
Development vs Test Time Ratio: A balanced ratio is crucial for product quality.

Automated Test Cases by Type: Categorizing test cases into functional, regression, performance, or destructive types.

Test Cases by Scenario: Differentiating between positive or negative scenarios.
Automated Test Cases by Component View: Providing a component-wise breakdown.

TestRail Test Trend Report - Automation Trend: Showcasing the trend of total, automated, and automatable test cases.

SEI's architecture integrates with CI/CD tools, offering over 40 third-party integrations. This structured approach aids in goal-setting and decision-making, driving teams towards engineering excellence.
Beyond metrics, SEI assists in resource allocation optimization, aligning resources with business objectives for efficient project delivery.
SEI’s dashboards provide a holistic view of the software factory, highlighting key metrics and KPIs for better collaboration and workflow management.
Harness Software Engineering Insights, with its Quality module, stands as a beacon for development teams, combining metrics, insights, and tools for superior code quality. To learn more, schedule a demo with our experts.
In the realm of software development, ensuring a robust and streamlined Software Development Life Cycle (SDLC) is paramount. While many focus on the technical intricacies and methodologies, there's an underlying aspect that holds equal importance: hygiene in SDLC processes. This blog delves into the significance of hygiene within SDLC and how it can pave the way for deriving valuable insights and metrics.
In essence, hygiene in SDLC refers to the practices, procedures, and protocols adopted to maintain the integrity, reliability, and effectiveness of the software development process. It is important to maintain hygiene across all aspects of the SDLC. It starts at the very beginning, with understanding and documenting requirements from various stakeholders, and then flows through the phases from design to implementation, where decisions are made based on best practices and code quality is maintained.
Hygiene in SDLC serves as a foundational pillar that significantly influences the quality, reliability, and sustainability of software solutions. By emphasizing standardized practices, fostering cross-functional collaboration, and proactively addressing risks, hygiene paves the way for delivering software solutions that are robust, secure, and aligned with stakeholder expectations. This adherence not only enhances software quality and security but also fosters a culture of excellence, innovation, and accountability.
Maintaining hygiene in SDLC is not merely about adherence to protocols; it's about fostering a culture of excellence and continuous improvement. Here's how hygiene directly contributes to generating valuable insights:
As the saying goes, "only the things that get measured can be improved." This underscores the importance of establishing metrics and benchmarks to enhance hygiene within the Software Development Life Cycle (SDLC). By systematically evaluating and optimizing key areas, organizations can foster a culture of excellence and continuous improvement.


To harness the full potential of SDLC hygiene, organizations must cultivate a culture that prioritizes quality, collaboration, and continuous improvement. This entails:
Hygiene in SDLC is not a mere procedural aspect; it's a foundational pillar that underpins the success and sustainability of software development endeavors. By prioritizing hygiene and leveraging it as a catalyst for generating valuable insights and metrics, organizations can navigate the complexities of software development with confidence, agility, and foresight.
To explore how SEI can transform your software development process, we invite you to schedule a demo with our experts.


Deciding whether to adopt and implement a new product or build an in-house solution is a critical decision in any organization. Especially when the desired solution requires comprehensive features and deep functionality, pouring vast amounts of engineering time and resources into a homemade tool can be counterproductive. It's often more prudent to channel that investment and your team's time into bolstering your core business offerings.
So, why should you buy instead of build for cloud cost reporting? Building a chargeback and showback system for cloud costs may appear straightforward. However, a comprehensive cost reporting solution requires careful consideration of several aspects before deciding.
The short answer is that both are processes that ask the departments using shared IT resources to understand and be accountable for the costs of using those resources.
It sounds straightforward, but for anyone who has seen a cloud bill, you know that it really isn’t. Attributing costs to the right teams is a big challenge.
Leveraging simple Business Intelligence (BI) tools for cloud cost management may seem efficient at first, but they fall short when it comes to allocating shared costs across various departments, teams or applications across the organization. Without the ability to accurately attribute costs, chargeback and showback end up being best guess efforts from a centralized reporting team.
Harness CCM’s Cost Categories offers a comprehensive solution for cost attribution, and is adept at handling intricate cost allocation tasks including multiple strategies to allocate shared costs transparently. With Cost Categories, you can quickly and easily create chargeback and showback reports with a high degree of accuracy.

Once you’ve begun to accurately report and chargeback/showback cost per department, the obvious next step is taking initiative to reduce those costs. An integral part of efficiently managing cloud costs is right-sizing over-provisioned resources and cleaning up unused cloud resources. While building an in-house solution might provide raw cost data, effectively linking these costs with right-sizing recommendations does not come out of the box with a vanilla BI tool.
Harness CCM doesn't just tell you what you're spending, it also provides actionable recommendations on how to optimize cloud resources. By associating cloud costs with optimization recommendations, businesses can identify wastage, make informed adjustments, and realize significant savings while ensuring optimal performance.

Cost anomalies can indicate underlying issues or inefficiencies, and waiting for monthly chargeback/showback reports could result in thousands of dollars of surprise cloud spend. Detecting these anomalies is just the first step; understanding their financial implications is crucial for effective cloud cost management. With an in-house solution, this association can be a manual, time-consuming review process.
In contrast, Harness CCM not only provides cost anomaly detection out-of-the-box but also provides a clear view of their financial impact as they occur. This enables businesses to quickly address and rectify costly unplanned spikes in cloud consumption, ensuring both financial prudence and operational efficiency.
By definition, chargeback and showback reports show you what happened in the past. Planning for the future is imperative for efficient cloud cost management. Unfortunately, in-house systems often lack the advanced capabilities to integrate budgeting and forecasting functionalities. Vanilla BI tools usually do not provide the ability to create, manage and track cloud cost budgets.
Harness CCM fills this gap, allowing teams to set budgets, anticipate future costs, and make informed financial decisions. Additionally, CCM enables you to create group budgets or cascading budgets at multiple levels across your organization that are interrelated.

In a fast-paced business environment, teams and projects shift, merge, or divide. Managing cloud costs for such dynamic entities requires a system that's just as agile. An in-house solution might struggle to keep up with the changing definitions of teams, projects, departments, and business units, or whatever construct you use for your chargeback/showback cloud cost reporting.
Harness CCM’s Cost Categories offers the flexibility to effortlessly manage the definitions of these constructs across cost reports and dashboards as teams and projects evolve. This ensures that your chargeback/showback reports stay fresh and accurate over time.

Beyond just costs, understanding resource utilization metrics like CPU usage can offer insights into efficiency and areas of potential optimization. Harness CCM shines here, offering an integrated view of both costs and critical utilization metrics via cloud provider APIs out-of-the-box, something a DIY approach or standalone BI tool could struggle to incorporate and maintain, particularly given the ever-changing nature of cloud provider APIs and integrations.
Crafting a system that dynamically detects, understands, and ingests new dimensions in the Cost and Usage Report (CUR) or billing export of AWS, Azure, or GCP is rarely straightforward, but it’s essential for accurate chargeback and showback reporting. If your team doesn’t keep up with these changes, your reports can quickly lose accuracy.
This is not a simple one-time configuration; cloud providers introduce changes constantly, demanding ongoing updates and attention. Opting for Harness CCM eliminates this manual effort, ensuring that such changes are seamlessly integrated without the fuss.
As your business grows, so does the data you're processing. Building an in-house solution means constantly addressing performance bottlenecks and scalability issues as reports expand. Harness CCM is designed with scalability in mind, effortlessly adapting to increasing data loads and ensuring consistent performance even as your reports grow in size and complexity.
Together, these reasons strongly suggest that investing in Harness CCM offers tangible benefits over the long, arduous journey of building and maintaining an in-house cloud cost management solution.
Looking for an easy, out-of-the-box solution for chargeback and showback of your company's cloud costs? Book a demo of Harness CCM or sign up for free.


Managing cloud costs is always a challenge, especially when we need engineers to take time away from other work to look at and take action on cost savings recommendations. Historically, this was also true for my company, Quizlet, which is a global learning platform that provides engaging AI-powered study tools to help people practice and master whatever they are learning.
Quizlet implemented Harness Cloud Cost Management some time ago for its cost savings automation features, which went well, but we had a number of recommendations for additional cost savings that weren’t being acted upon. As a Senior Platform Engineer on the Platform team, I was tasked with finding a solution to the problem.
Knowing our organization, we understood that to empower our engineers to take action on cloud costs, we needed to make things as easy as possible, with as few clicks as possible for the owners of the cloud services. This led us to create an automated workflow that opens pull requests for the Harness recommendations, focusing on our microservices as a starting point.
Before I go into detail about the process and how it works, it’s important to point out that simply creating the automated pull requests in our GitHub repository wasn’t enough to empower the engineers to take action. They don’t live in the repository on a daily basis, so they weren’t seeing the PRs or reviewing them. At the start, we had zero engagement from any of the 10+ engineering teams that used cloud infrastructure for their applications.
The first step I took to increase engagement was to integrate with Slack so that all new PRs were posted to a dedicated Slack channel that the microservice owners were part of. This immediately raised the visibility of the new PRs and gave the service owners and the Platform team a place to discuss the impact and implementation of the PRs as needed.
The second step was to present this new workflow to the engineering management team to get their support for the engineers taking action on these PRs as they came in. Between these two actions, we went from zero engagement to 75-80% engagement on the automated PRs! On average, these PRs saved us 40% of our previous spend on the Kubernetes workloads they were applied to.
The automation runs once a week: for the development environment services on the first day, and for the production environment services the day after. It’s a Google Cloud Function triggered by Google Cloud Scheduler, architected to integrate with the Harness API (to pull the recommendations), our microservices infrastructure GitHub repository (to retrieve and edit the configuration files), and Slack (to post the recommendation PRs to a shared microservices Slack channel).
This is what the architecture looks like:

We’ve configured the automation to pull only the recommendations from the Harness API that exceed a dollar threshold set by the admin. This keeps the engineers focused on recommendations that have a more material impact on costs, so that we strike a balance between cost savings and engineering productivity.
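To make this concrete, here is a simplified sketch of the weekly entry point: a Cloud Scheduler-triggered Cloud Function that pulls recommendations and applies the dollar threshold. The endpoint path, query parameters, and response fields are illustrative placeholders rather than the exact Harness API contract or our production code.

```python
# Sketch of the weekly Cloud Function: pull CCM recommendations from the
# Harness API and keep only those above a configurable dollar threshold.
# NOTE: the endpoint path, params, and response fields are placeholders.
import os
import requests

HARNESS_API_URL = "https://app.harness.io/ccm/api/recommendations"  # placeholder path
MIN_MONTHLY_SAVINGS = float(os.environ.get("MIN_MONTHLY_SAVINGS", "100"))


def fetch_recommendations(environment: str) -> list[dict]:
    """Return workload recommendations whose estimated savings exceed the threshold."""
    resp = requests.get(
        HARNESS_API_URL,
        headers={"x-api-key": os.environ["HARNESS_API_KEY"]},
        params={"env": environment},
        timeout=30,
    )
    resp.raise_for_status()
    recs = resp.json().get("recommendations", [])
    return [r for r in recs if r.get("estimatedMonthlySavings", 0) >= MIN_MONTHLY_SAVINGS]


def weekly_job(request):
    """HTTP-triggered Cloud Function invoked by Cloud Scheduler."""
    env = request.args.get("env", "development")
    recommendations = fetch_recommendations(env)
    # ...hand each recommendation to the PR-creation step sketched later...
    return f"Processed {len(recommendations)} recommendations for {env}", 200
```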
Because we have a well-defined repository structure for our microservices, it’s relatively easy to automate searching the repo for the configuration files to retrieve and edit. The Harness recommendation includes the name of the Kubernetes workload, allowing us to navigate directly to the correct directory for the service. Over time we’ve ended up using both Helm and Kustomize to configure our microservices, so the automation also needs to differentiate between the two in order to find the correct config file path and the values to modify (see the sketch below). This adds a small amount of complexity that wouldn’t be necessary if we used a single Kubernetes configuration tool.
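As a rough illustration, the check can be as simple as looking for the files each tool conventionally uses. The directory layout and file names below are assumptions based on common conventions, not our exact repository structure.

```python
# Sketch: decide whether a service is Helm- or Kustomize-managed, and return
# the config file that carries its resource requests. Paths are assumptions.
from pathlib import Path


def find_config_file(repo_root: Path, workload_name: str) -> Path:
    """Locate the resource-requests config for a workload, Helm or Kustomize."""
    service_dir = repo_root / "services" / workload_name  # assumed layout
    helm_values = service_dir / "values.yaml"
    kustomization = service_dir / "kustomization.yaml"

    if helm_values.exists():
        return helm_values  # Helm chart: edit values.yaml
    if kustomization.exists():
        # Kustomize: edit the patch that carries resource requests/limits
        return service_dir / "patches" / "resources.yaml"  # assumed patch path
    raise FileNotFoundError(f"No Helm or Kustomize config found for {workload_name}")
```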
Once the file is retrieved, the recommended changes are written into the YAML configuration file, the file is pushed to GitHub, and a PR is created. The weekly PRs are automatically assigned to the service owners via the infrastructure repository’s CODEOWNERS file, so it’s important to keep this file up to date as ownership changes over time. Finally, notifications are sent to Slack for all PRs created during that week’s scheduled job.
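A simplified sketch of this step using PyGithub might look like the following. The repository name, YAML keys, and recommendation fields are placeholders for illustration, not our exact implementation; reviewer assignment still comes from CODEOWNERS.

```python
# Sketch: write the recommended requests into the YAML config on a new branch
# and open a PR. Repo name, YAML keys, and recommendation fields are assumed.
import yaml
from github import Github


def open_rightsizing_pr(token: str, path: str, recommendation: dict) -> str:
    repo = Github(token).get_repo("example-org/microservices-infra")  # placeholder repo
    branch = f"ccm/{recommendation['workload']}-rightsize"

    # Branch off the default branch
    default = repo.get_branch(repo.default_branch)
    repo.create_git_ref(ref=f"refs/heads/{branch}", sha=default.commit.sha)

    # Apply the recommended CPU/memory requests to the config file
    contents = repo.get_contents(path, ref=branch)
    config = yaml.safe_load(contents.decoded_content)
    config["resources"]["requests"] = recommendation["recommendedRequests"]  # assumed keys
    repo.update_file(
        contents.path,
        f"Right-size {recommendation['workload']} per Harness CCM recommendation",
        yaml.safe_dump(config),
        contents.sha,
        branch=branch,
    )

    # Open the PR; CODEOWNERS assigns the service owners as reviewers
    pr = repo.create_pull(
        title=f"Right-size {recommendation['workload']}",
        body=f"Estimated monthly savings: ${recommendation['estimatedMonthlySavings']}",
        head=branch,
        base=repo.default_branch,
    )
    return pr.html_url
```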
With the Slack integration, it is very easy to engage with the service owners and answer any questions they may have about the PRs or the potential impacts to prod if implemented. We’ve automated the creation of an easy-to-digest PR description for these recommendations, again to make things as easy as possible for the service owners. Once the PR is reviewed, it gets approved (or rejected) by the service owner, and merged when ready.
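The notification itself is a simple Slack API call. A minimal sketch using slack_sdk, with a placeholder channel name and message format, might look like this:

```python
# Sketch: post a new right-sizing PR to the shared microservices channel.
# The channel name and message wording are assumptions, not the real setup.
from slack_sdk import WebClient


def notify_channel(bot_token: str, pr_url: str, workload: str, savings: float) -> None:
    client = WebClient(token=bot_token)
    client.chat_postMessage(
        channel="#microservices-cost-savings",  # assumed channel name
        text=(
            f"New right-sizing PR for `{workload}`: {pr_url}\n"
            f"Estimated monthly savings: ${savings:,.2f}"
        ),
    )
```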

Engineers frequently ask similar questions about the PRs and recommendations, so we created an FAQ to answer the most common ones.
We couldn’t have made this automation and the new reporting processes work without the right tools in place, and Harness Cloud Cost Management has really enabled us to do more with our cloud. You can read more about Harness’ Automated Cost Savings features, or talk to Harness sales to learn more about how it’s working for us.


Developer productivity is a critical factor in the success of any software development project. The continuous evolution of software development practices has led to the emergence of innovative tools aimed at streamlining the coding process. GitHub Copilot, introduced by GitHub in collaboration with OpenAI, is one such tool that utilizes advanced AI models to assist developers in generating code snippets, suggesting contextually relevant code, and providing coding insights. To scale developer efficiency, one of our customers adopted GitHub Copilot, leading to increased collaboration and shortened development cycles, as demonstrated by Harness SEI's comprehensive analysis.
Before implementing GitHub Copilot, developer teams grappled with challenges primarily centered around pull requests (PRs) activity and cycle time in their software development processes. The existing workflow exhibited limited PR activity, leading to isolated development efforts and sluggish code review cycles. This hindered collaboration among developers and extended the time taken to integrate changes. Additionally, the cycle time from task initiation to deployment was longer than desired, resulting in delayed feature releases and impacting the product’s ability to swiftly respond to market demands.
Manual code reviews were time-consuming and inconsistent, exacerbating the efficiency challenges.
These issues collectively created bottlenecks in collaboration, resource allocation, and timely delivery of software solutions.
In this study, we investigated the impact of GitHub Copilot on developer productivity, focusing on the number of pull requests (PRs) and cycle time, through a comparative analysis conducted with Harness SEI. The study was guided by the Harness Software Engineering Insights (SEI) team and involved a sample of 50 developers from a customer organization. The study took place over multiple months: in the first two months the developers worked without GitHub Copilot's assistance, and in the following months they used GitHub Copilot as an integrated tool in their coding workflow. Throughout the study, various performance metrics were collected and analyzed to gauge Copilot's impact.
The study measured the impact of GitHub Copilot on two important metrics:
The average number of PRs is a critical indicator of development activity and collaboration. The analysis revealed a significant increase of 10.6% in the average number of PRs during the month when developers utilized GitHub Copilot compared to the month when Copilot was disabled. This increase suggests that GitHub Copilot can help to improve collaboration, as developers using Copilot can potentially iterate more rapidly, leading to increased code review and integration.
Cycle time is defined as the time taken to complete a development cycle from the initiation of a task to its deployment. It is a fundamental measure of development efficiency. The study demonstrated a reduction in cycle time by an average of 3.5 hours during the month when developers leveraged GitHub Copilot, representing a 2.4% improvement compared to the month when Copilot was not used. This reduction suggests that GitHub Copilot's assistance in generating code snippets and offering coding suggestions contributes to quicker task completion and ultimately shorter development cycles.
GitHub Copilot has demonstrated its potential to transform software development. The increase in pull requests (PRs) and the reduction in cycle time are two key metrics that show its positive impact on developer productivity.
Harness SEI was used to facilitate this study. To summarize, the study demonstrates GitHub Copilot's ability to significantly improve developer productivity. However, there is still more to uncover. We are conducting further experiments and a more thorough analysis of the data already collected, looking into heterogeneous effects and potential effects on code quality. We plan to share our findings in future case studies.
To understand developer productivity and unlock such actionable metrics and insights, please schedule a demo of the Harness Software Engineering Insights module here: https://www.harness.io/demo/software-engineering-insights.


As teams scale, the role of "Process" becomes a central topic, eliciting both strong support and vehement opposition. Processes can sometimes feel burdensome and ineffective, yet they're indispensable for seamless growth and concerted progress. The challenge lies in distinguishing between good and bad processes and finding the equilibrium between the need for consistency and the freedom to innovate. To unravel this, let's first examine the pitfalls that make processes cumbersome and prone to failure.
In the rapidly expanding business landscape, numerous new business cases arise daily, causing teams to traverse these 9 stages repeatedly. Put simply, what works for a small group might not suit a larger one.
Mismatched Processes vs. Amplifying Processes
Not all processes are created equal, yet there is no such thing as an inherently good or bad process. A process either mismatches the specific business context or has the potential to enhance efficiency, output, or cost-effectiveness tenfold.
The Perception Quadrant of New Processes
Introducing a new process typically triggers skepticism or optimism among teams. This fresh process could either end up being a misfit or a 10X enhancer.
Skepticism usually prevails when a new process is introduced, especially if it is imposed from a centralized decision-making point. Engineering managers might resist, questioning the new process's applicability to their unique business context, sometimes rightly and sometimes wrongly. The new process could indeed amplify their outcomes tenfold, but uncertainty clouds their judgment.
The fate of a new process depends on the organization's openness to change. If past processes were met with skepticism and proved to be misfits, subsequent decisions will be met with even more doubt. This breeds a damaging culture and suboptimal outcomes, a phenomenon that is all too common.
The solution lies in Continuous Adaptability Driven by Actionable Data.
Actionable Data:
Every introduced process requires instrumented data to gauge whether it's a 10X boost or a misfit. Examples include:
Technical Debt Sprint Introduction: Improved defect rates, reduced support tickets, and heightened customer NPS scores due to enhanced communication.
Products like Harness Software Engineering Insights can provide actionable insights for testing process effectiveness.
Continuous Adaptability:
Statements like "It's always been done like this" or "Other teams are doing it this way" reflect struggles with adaptability. Whether or not standardization is effective in a given context, continuous adaptability, data utilization, and questioning the "why" are potent tools for managing process edge cases. Leaders must recognize when existing processes falter in new contexts and iterate promptly.
The gravest error is halting process iteration, leading to institutionalization and forgetting the process's initial purpose.
To explore Harness SEI's capabilities, consider scheduling a quick demo.