
At Harness, we know developer velocity depends on everyday workflow. That is why we reimagined Harness Code with a faster, cleaner, and more intuitive experience that helps engineers stay in flow from the first clone to the final merge.
Smarter Pull Request Reviews
Review diffs and conversations without constant context switching. Inline comments, keyboard shortcuts, and faster file rendering help you focus on the code instead of the clicks.

Faster File Tree and Change Listing
The new file browser is optimized for large repositories. You can search, jump, and scan changes instantly even when working with thousands of files.

Seamless Repo Navigation
Move between branches, commits, and repositories without losing your scroll position or comment state.

Unified Harness Design System
The entire interface now uses the same design system as the rest of the Harness platform, which reduces the learning curve and makes navigation feel natural.
Every inefficiency in the developer experience is a hidden tax on velocity. Harness Code removes those blockers so your teams can stay in flow and ship faster.
All 500-plus Harness engineers are already using the new experience, proving it scales in real enterprise environments.
Adopting the new experience is effortless:
There is nothing to migrate. Simply click 'Opt In', and your repositories, permissions, and integrations will continue to work as before.
The new Harness Code experience is only the beginning. We’re continuing to invest in developer-first features that make Harness Code not just a repository, but the heartbeat of your software delivery pipeline.
If you have been looking for a modern, developer-first alternative to GitHub or GitLab that integrates directly with your CI/CD pipelines, now is the time to try it.
👉 Start your Harness Code trial today and experience a repo that helps you move faster and deliver more.
Learn more: Workflow Management, What Is a Developer Platform
Harness Cloud is a fully managed Continuous Integration (CI) platform that allows teams to run builds on Harness-managed virtual machines (VMs) pre-configured with tools, packages, and settings typically used in CI pipelines. In this blog, we'll dive into the four core pillars of Harness Cloud: Speed, Governance, Reliability, and Security. By the end of this post, you'll understand how Harness Cloud streamlines your CI process, saves time, ensures better governance, and provides reliable, secure builds for your development teams.
Harness Cloud delivers blazing-fast builds on multiple platforms, including Linux, macOS, Windows, and mobile operating systems. With Harness Cloud, your builds run in isolation on pre-configured VMs managed by Harness. This means you don’t have to waste time setting up or maintaining your infrastructure. Harness handles the heavy lifting, allowing you to focus on writing code instead of waiting for builds to complete.
The speed of your CI pipeline is crucial for agile development, and Harness Cloud gives you just that—quick, efficient builds that scale according to your needs. With starter pipelines available for various programming languages, you can get up and running quickly without having to customize your environment.
One of the most critical aspects of any enterprise CI/CD process is governance. With Harness Cloud, you can rest assured that your builds are running in a controlled environment. Harness Cloud makes it easier to manage your build infrastructure with centralized configurations and a clear, auditable process. This improves visibility and reduces the complexity of managing your CI pipelines.
Harness also gives you access to the latest features as soon as they’re rolled out. This early access enables teams to stay ahead of the curve, trying out new functionality without worrying about maintaining the underlying infrastructure. By using Harness Cloud, you're ensuring that your team is always using the latest CI innovations.
Reliability is paramount when it comes to build systems. With Harness Cloud, you can trust that your builds run smoothly and consistently. Harness manages, maintains, and updates the virtual machines (VMs), so you don't have to worry about patching, system failures, or hardware-related issues. This hands-off approach reduces the risk of downtime and build interruptions, ensuring that your development process is as seamless as possible.
By using Harness-managed infrastructure, you gain the peace of mind that comes with a fully supported, reliable platform. Whether you're running a handful of builds or thousands, Harness ensures they’re executed with the same level of reliability and uptime.
Security is at the forefront of Harness Cloud. With Harness managing your build infrastructure, you don't need to worry about the complexities of securing your own build machines. Harness ensures that all the necessary security protocols are in place to protect your code and the environment in which it runs.
Harness Cloud's commitment to security includes achieving SLSA Level 3 compliance, which ensures the integrity of the software supply chain by generating and verifying provenance for build artifacts. This compliance is achieved through features like isolated build environments and strict access controls, ensuring each build runs in a secure, tamper-proof environment.
For details, read the blog An In-depth Look at Achieving SLSA Level-3 Compliance with Harness.
Harness Cloud also enables secure connectivity to on-prem services and tools, allowing teams to safely integrate with self-hosted artifact repositories, source control systems, and other critical infrastructure. By leveraging Secure Connect, Harness ensures that these connections are encrypted and controlled, eliminating the need to expose internal resources to the public internet. This provides a seamless and secure way to incorporate on-prem dependencies into your CI workflows without compromising security.
Harness Cloud makes it easy to run and scale your CI pipelines without the headache of managing infrastructure. By focusing on the four pillars—speed, governance, reliability, and security—Harness ensures that your development pipeline runs efficiently and securely.
Harness CI and Harness Cloud give you:
✅ Blazing-fast builds—8X faster than traditional CI solutions
✅ A unified platform—Run builds on any language, any OS, including mobile
✅ Native SCM—Harness Code Repository is free and comes packed with built-in governance & security
If you're ready to experience a fully managed, high-performance CI environment, give Harness Cloud a try today.
As software projects scale, build times often become a major bottleneck, especially when using tools like Bazel. Bazel is known for its speed and scalability, handling large codebases with ease. However, even the most optimized build tools can be slowed down by inefficient CI pipelines. In this blog, we’ll dive into how Bazel’s build capabilities can be taken to the next level with Harness CI. By leveraging features like Build Intelligence and caching, Harness CI helps maximize Bazel's performance, ensuring faster builds and a more efficient development cycle.
Harness CI integrates seamlessly with Bazel, taking full advantage of its strengths and enhancing performance. The best part? As a user, you don’t have to provide any additional configuration to leverage the build intelligence feature. Harness CI automatically configures the remote cache for your Bazel builds, optimizing the process from day one.
Harness CI’s Build Intelligence ensures that Bazel builds are as fast and efficient as possible. While Bazel has its own caching mechanisms, Harness CI takes this a step further by automatically configuring and optimizing the remote cache, reducing build times without any manual setup.
This automatic configuration means that you can benefit from faster, more efficient builds right away—without having to tweak cache settings or worry about how to handle build artifacts across multiple machines.
Harness CI seamlessly integrates with Bazel’s caching system, automatically handling the configuration of remote caches. So, when you run a build, Harness CI makes sure that any unchanged files are skipped, and only the necessary tasks are executed. If there are any changes, only those parts of the project are rebuilt, making the process significantly faster.
For example, when building the bazel-gazelle project, Harness CI ensures that any unchanged files are cached and reused in subsequent builds, reducing the need for unnecessary recompilation. All this happens automatically in the background without requiring any special configuration from the user.
We compared the performance of Bazel builds using Harness CI and GitHub Actions, and the results were clear: Harness CI, with its automatic configuration and optimized caching, delivered up to 4x faster builds than GitHub Actions. The automatic configuration of the remote cache made a significant difference, helping Bazel avoid redundant tasks and speeding up the build process.
Results:

Bazel is an excellent tool for large-scale builds, but it becomes even more powerful when combined with Harness CI and Harness Cloud. By automatically configuring remote caches and applying build intelligence, Harness CI ensures that your Bazel builds are as fast and efficient as possible, without requiring any additional configuration from you.
By combining other Harness CI intelligence features like Cache Intelligence, Docker Layer Caching, and Test Intelligence, you can speed up your Bazel projects by up to 8x. With this hyper-optimized build infrastructure, you can experience lightning-fast builds on Harness Cloud at reasonable costs. This seamless integration allows you to spend less time waiting for builds and more time focusing on delivering quality code.
If you're looking to speed up your Bazel builds, give Harness CI a try today and experience the difference!



Your developers complain about 20-minute builds while your cloud bill spirals out of control. Pipeline sprawl across teams creates security gaps you can't even see. These aren't separate problems. They're symptoms of a lack of actionable data on what actually drives velocity and cost.
The right CI metrics transform reactive firefighting into proactive optimization. With analytics data from Harness CI, platform engineering leaders can cut build times, control spend, and maintain governance without slowing teams down.
Platform teams who track the right CI metrics can quantify exactly how much developer time they're saving, control cloud spending, and maintain security standards while preserving development velocity. The importance of tracking CI/CD metrics lies in connecting pipeline performance directly to measurable business outcomes.
Build time, queue time, and failure rates directly translate to developer hours saved or lost. Research shows that 78% of developers feel more productive with CI, and most want builds under 10 minutes. Tracking median build duration and 95th percentile outliers can reveal your productivity bottlenecks.
Harness CI delivers builds up to 8X faster than traditional tools, turning this insight into action.
Cost per build and compute minutes by pipeline eliminate the guesswork from cloud spending. AWS CodePipeline charges $0.002 per action-execution-minute, making monthly costs straightforward to calculate from your pipeline metrics.
Measuring across teams helps you spot expensive pipelines, optimize resource usage, and justify infrastructure investments with concrete ROI.
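For a quick back-of-the-envelope version of that calculation (the volumes below are purely illustrative):

import requests  # not needed here; standard arithmetic only

# Illustrative monthly cost estimate derived from pipeline metrics.
RATE_PER_ACTION_MINUTE = 0.002   # AWS CodePipeline, USD per action-execution-minute

builds_per_month = 4_000
avg_actions_per_build = 5
avg_minutes_per_action = 6

action_minutes = builds_per_month * avg_actions_per_build * avg_minutes_per_action
monthly_cost = action_minutes * RATE_PER_ACTION_MINUTE
cost_per_build = monthly_cost / builds_per_month

print(f"${monthly_cost:,.2f}/month, ${cost_per_build:.3f}/build")
# -> $240.00/month, $0.060/build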
SBOM completeness, artifact integrity, and policy pass rates ensure your software supply chain meets security standards without creating development bottlenecks. NIST and related EO 14028 guidance emphasize machine-readable SBOMs and automated hash verification for all artifacts.
However, measurement consistency remains challenging. A recent systematic review found that SBOM tooling variance creates significant detection gaps, with tools reporting between 43,553 and 309,022 vulnerabilities across the same 1,151 SBOMs.
Standardized metrics help you monitor SBOM generation rates and policy enforcement without manual oversight.
Not all metrics deserve your attention. Platform engineering leaders managing 200+ developers need measurements that reveal where time, money, and reliability break down, and where to fix them first.
So what does this look like in practice? Let's examine the specific metrics.
Build duration becomes most valuable when you track both median (p50) and 95th percentile (p95) times rather than simple averages. Research shows that timeout builds have a median duration of 19.7 minutes compared to 3.4 minutes for normal builds. That’s over five times longer.
While p50 reveals your typical developer experience, p95 exposes the worst-case delays that reduce productivity and impact developer flow. These outliers often signal deeper issues like resource constraints, flaky tests, or inefficient build steps that averages would mask. Tracking trends in both percentiles over time helps you catch regressions before they become widespread problems. Build analytics platforms can surface when your p50 increases gradually or when p95 spikes indicate new bottlenecks.
Keep builds under seven minutes to maintain developer engagement. Anything over 15 minutes triggers costly context switching. By monitoring both typical and tail performance, you optimize for consistent, fast feedback loops that keep developers in flow. Intelligent test selection reduces overall build durations by up to 80% by selecting and running only tests affected by the code changes, rather than running all tests.
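Computing both percentiles from raw build durations takes only a couple of lines; a minimal sketch using Python's standard library:

import statistics

# Build durations in minutes, e.g. exported from your CI analytics API.
durations = [3.1, 2.8, 4.0, 3.4, 3.6, 2.9, 19.7, 3.2, 3.5, 21.3]

cuts = statistics.quantiles(durations, n=100)   # 99 percentile cut points
p50, p95 = cuts[49], cuts[94]

print(f"p50 = {p50:.1f} min, p95 = {p95:.1f} min")
# p50 reflects the typical developer experience; p95 exposes the timeout-style
# outliers that a simple average would hide.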

An example of build durations dashboard (on Harness)
Queue time measures how long builds wait before execution begins. This is a direct indicator of insufficient build capacity. When developers push code, builds shouldn't sit idle while runners or compute resources are tied up. Research shows that heterogeneous infrastructure with mixed processing speeds creates excessive queue times, especially when job routing doesn't account for worker capabilities. Queue time reveals when your infrastructure can't handle developer demand.
Rising queue times signal it's time to scale infrastructure or optimize resource allocation. Per-job waiting time thresholds directly impact throughput and quality outcomes. Platform teams can reduce queue time by moving to Harness Cloud's isolated build machines, implementing intelligent caching, or adding parallel execution capacity. Analytics dashboards track queue time trends across repositories and teams, enabling data-driven infrastructure decisions that keep developers productive.
Build success rate measures the percentage of builds that complete successfully over time, revealing pipeline health and developer confidence levels. When teams consistently see success rates above 90% on their default branches, they trust their CI system to provide reliable feedback. Frequent failures signal deeper issues — flaky tests that pass and fail randomly, unstable build environments, or misconfigured pipeline steps that break under specific conditions.
Tracking success rate trends by branch, team, or service reveals where to focus improvement efforts. Slicing metrics by repository and pipeline helps you identify whether failures cluster around specific teams using legacy test frameworks or services with complex dependencies. This granular view separates legitimate experimental failures on feature branches from stability problems that undermine developer productivity and delivery confidence.

An example of Build Success/Failure Rate Dashboard (on Harness)
Mean time to recovery measures how fast your team recovers from failed builds and broken pipelines, directly impacting developer productivity. Research shows organizations with mature CI/CD implementations see MTTR improvements of over 50% through automated detection and rollback mechanisms. When builds fail, developers experience context switching costs, feature delivery slows, and team velocity drops. The best-performing teams recover from incidents in under one hour, while others struggle with multi-hour outages that cascade across multiple teams.
Automated alerts and root cause analysis tools slash recovery time by eliminating manual troubleshooting, reducing MTTR from 20 minutes to under 3 minutes for common failures. Harness CI's AI-powered troubleshooting surfaces failure patterns and provides instant remediation suggestions when builds break.
Flaky tests pass or fail non-deterministically on the same code, creating false signals that undermine developer trust in CI results. Research shows 59% of developers experience flaky tests monthly, weekly, or daily, while 47% of restarted failing builds eventually passed. This creates a cycle where developers waste time investigating false failures, rerunning builds, and questioning legitimate test results.
Tracking flaky test rate helps teams identify which tests exhibit unstable pass/fail behavior, enabling targeted stabilization efforts. Harness CI automatically detects problematic tests through failure rate analysis, quarantines flaky tests to prevent false alarms, and provides visibility into which tests exhibit the highest failure rates. This reduces developer context switching and restores confidence in CI feedback loops.
Cost per build divides your monthly CI infrastructure spend by the number of successful builds, revealing the true economic impact of your development velocity. CI/CD pipelines consume 15-40% of overall cloud infrastructure budgets, with per-run compute costs ranging from $0.40 to $4.20 depending on application complexity, instance type, region, and duration. This normalized metric helps platform teams compare costs across different services, identify expensive outliers, and justify infrastructure investments with concrete dollar amounts rather than abstract performance gains.
Automated caching and ephemeral infrastructure deliver the biggest cost reductions per build. Intelligent caching automatically stores dependencies and Docker layers. This cuts repeated download and compilation time that drives up compute costs.
Ephemeral build machines eliminate idle resource waste. They spin up fresh instances only when builds are queued, then terminate immediately after completion. Combine these approaches with right-sized compute types to reduce infrastructure costs by 32-43% compared to oversized instances.
Cache hit rate measures what percentage of build tasks can reuse previously cached results instead of rebuilding from scratch. When teams achieve high cache hit rates, they see dramatic build time reductions. Docker builds can drop from five to seven minutes to under 90 seconds with effective layer caching. Smart caching of dependencies like node_modules, Docker layers, and build artifacts creates these improvements by avoiding expensive regeneration of unchanged components.
Harness Build and Cache Intelligence eliminates the manual configuration overhead that traditionally plagues cache management. It handles dependency caching and Docker layer reuse automatically. No complex cache keys or storage management required.
Measure cache effectiveness by comparing clean builds against fully cached runs. Track hit rates over time to justify infrastructure investments and detect performance regressions.
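Both views of cache effectiveness are easy to compute once you can export per-step cache results and run durations (the figures below are illustrative):

# Per-step cache results for one pipeline run: True = restored from cache.
step_cache_hits = [True, True, False, True, True, True, False, True]
hit_rate = sum(step_cache_hits) / len(step_cache_hits)

# Compare a clean (cold-cache) run against a fully cached run of the same pipeline.
clean_build_seconds = 420    # roughly seven minutes with nothing cached
cached_build_seconds = 85    # under 90 seconds with layers and dependencies reused

time_saved = 1 - cached_build_seconds / clean_build_seconds
print(f"cache hit rate: {hit_rate:.0%}, time saved vs clean build: {time_saved:.0%}")
# -> cache hit rate: 75%, time saved vs clean build: 80%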
Test cycle time measures how long it takes to run your complete test suite from start to finish. This directly impacts developer productivity because longer test cycles mean developers wait longer for feedback on their code changes. When test cycles stretch beyond 10-15 minutes, developers often switch context to other tasks, losing focus and momentum. Recent research shows that optimized test selection can accelerate pipelines by 5.6x while maintaining high failure detection rates.
Smart test selection optimizes these feedback loops by running only tests relevant to code changes. Harness CI Test Intelligence can slash test cycle time by up to 80% using AI to identify which tests actually need to run. This eliminates the waste of running thousands of irrelevant tests while preserving confidence in your CI deployments.
Categorizing pipeline issues into domains like code problems, infrastructure incidents, and dependency conflicts transforms chaotic build logs into actionable insights. Harness CI's AI-powered troubleshooting provides root cause analysis and remediation suggestions for build failures. This helps platform engineers focus remediation efforts on root causes that impact the most builds rather than chasing one-off incidents.

Visualizing issue distribution reveals whether problems are systemic or isolated events. Organizations using aggregated monitoring can distinguish between infrastructure spikes and persistent issues like flaky tests. Harness CI's analytics surface which pipelines and repositories have the highest failure rates. Platform teams can reduce overall pipeline issues by 20-30%.
Artifact integrity coverage measures the percentage of builds that produce signed, traceable artifacts with complete provenance documentation. This tracks whether each build generates Software Bills of Materials (SBOMs), digital signatures, and documentation proving where artifacts came from. While most organizations sign final software products, fewer than 20% deliver provenance data and only 3% consume SBOMs for dependency management. This makes the metric a leading indicator of supply chain security maturity.
Harness CI automatically generates SBOMs and attestations for every build, ensuring 100% coverage without developer intervention. The platform's SLSA L3 compliance capabilities generate verifiable provenance and sign artifacts using industry-standard frameworks. This eliminates the manual processes and key management challenges that prevent consistent artifact signing across CI pipelines.
Tracking CI metrics effectively requires moving from raw data to measurable improvements. The most successful platform engineering teams build a systematic approach that transforms metrics into velocity gains, cost reductions, and reliable pipelines.
Tag every pipeline with service name, team identifier, repository, and cost center. This standardization creates the foundation for reliable aggregation across your entire CI infrastructure. Without consistent tags, you can't identify which teams drive the highest costs or longest build times.
Implement naming conventions that support automated analysis. Use structured formats like team-service-environment for pipeline names and standardize branch naming patterns. Centralize this metadata using automated tag enforcement to ensure organization-wide visibility.
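One lightweight way to enforce that convention automatically, assuming a team-service-environment naming scheme and the tags listed above, is a check along these lines run as a governance step:

import re

# Enforce a team-service-environment naming convention and required tags.
NAME_PATTERN = re.compile(r"^[a-z0-9]+-[a-z0-9]+-(dev|staging|prod)$")
REQUIRED_TAGS = {"service", "team", "repository", "cost_center"}

def validate_pipeline(name, tags):
    """Return a list of metadata violations for one pipeline."""
    problems = []
    if not NAME_PATTERN.match(name):
        problems.append(f"name '{name}' does not match team-service-environment")
    missing = REQUIRED_TAGS - tags.keys()
    if missing:
        problems.append(f"missing tags: {sorted(missing)}")
    return problems

print(validate_pipeline("payments-checkout-prod", {"service": "checkout", "team": "payments"}))
# -> ["missing tags: ['cost_center', 'repository']"]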
Modern CI platforms eliminate manual metric tracking overhead. Harness CI provides dashboards that automatically surface build success rates, duration trends, and failure patterns in real-time. Teams can also integrate with monitoring stacks like Prometheus and Grafana for live visualization across multiple tools.
Configure threshold-based alerts for build duration spikes or failure rate increases. This shifts you from fixing issues after they happen to preventing them entirely.
Focus on p95 and p99 percentiles rather than averages to identify critical performance outliers. Drill into failure causes and flaky tests to prioritize fixes with maximum developer impact. Categorize pipeline failures by root cause — environment issues, dependency problems, or test instability — then target the most frequent culprits first.
Benchmark cost per build and cache hit rates to uncover infrastructure savings. Optimized caching and build intelligence can reduce build times by 30-40% while cutting cloud expenses.
Standardize CI pipelines using centralized templates and policy enforcement to eliminate pipeline sprawl. Store reusable templates in a central repository and require teams to extend from approved templates. This reduces maintenance overhead while ensuring consistent security scanning and artifact signing.
Establish Service Level Objectives (SLOs) for your most impactful metrics: build duration, queue time, and success rate. Set measurable targets like "95% of builds complete within 10 minutes" to drive accountability. Automate remediation wherever possible — auto-retry for transient failures, automated cache invalidation, and intelligent test selection to skip irrelevant tests.
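Expressed as a check over exported build data, an SLO like "95% of builds complete within 10 minutes" reduces to a few lines:

# Build durations in minutes for the evaluation window (illustrative data).
durations = [4.2, 6.1, 3.8, 9.5, 12.0, 5.5, 7.3, 4.9, 8.8, 6.7]

SLO_THRESHOLD_MINUTES = 10
SLO_TARGET = 0.95   # 95% of builds should finish within the threshold

within = sum(d <= SLO_THRESHOLD_MINUTES for d in durations) / len(durations)
status = "OK" if within >= SLO_TARGET else "BREACH"
print(f"{within:.0%} of builds within {SLO_THRESHOLD_MINUTES} min (target {SLO_TARGET:.0%}): {status}")
# -> 90% of builds within 10 min (target 95%): BREACH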
The difference between successful platform teams and those drowning in dashboards comes down to focus. Elite performers track build duration, queue time, flaky test rates, and cost per build because these metrics directly impact developer productivity and infrastructure spend.
Start with the measurements covered in this guide, establish baselines, and implement governance that prevents pipeline sprawl. Focus on the metrics that reveal bottlenecks, control costs, and maintain reliability — then use that data to optimize continuously.
Ready to transform your CI metrics from vanity to velocity? Experience how Harness CI accelerates builds while cutting infrastructure costs.
Platform engineering leaders often struggle with knowing which metrics actually move the needle versus creating metric overload. These answers focus on metrics that drive measurable improvements in developer velocity, cost control, and pipeline reliability.
Actionable metrics directly connect to developer experience and business outcomes. Build duration affects daily workflow, while deployment frequency impacts feature delivery speed. Vanity metrics look impressive, but don't guide decisions. Focus on measurements that help teams optimize specific bottlenecks rather than general health scores.
Build duration, queue time, and flaky test rate directly affect how fast developers get feedback. While coverage monitoring dominates current practices, build health and time-to-fix-broken-builds offer the highest productivity gains. Focus on metrics that reduce context switching and waiting.
Cost per build and cache hit rate reveal optimization opportunities that maintain quality while cutting spend. Intelligent caching and optimized test selection can significantly reduce both build times and infrastructure costs. Running only relevant tests instead of entire suites cuts waste without compromising coverage.
Begin with pipeline metadata standardization using consistent tags for service, team, and cost center. Most CI platforms provide basic metrics through built-in dashboards. Start with DORA metrics, then add build-specific measurements as your monitoring matures.
Daily monitoring of build success rates and queue times enables immediate issue response. Weekly reviews of build duration trends and monthly cost analysis drive strategic improvements. Automated alerts for threshold breaches prevent small problems from becoming productivity killers.



Modern unit testing in CI/CD can help teams avoid slow builds by using smart strategies. Choosing the right tests, running them in parallel, and using intelligent caching all help teams get faster feedback while keeping code quality high.
Platforms like Harness CI use AI-powered test intelligence to reduce test cycles by up to 80%, showing what’s possible with the right tools. This guide shares practical ways to speed up builds and improve code quality, from basic ideas to advanced techniques that also lower costs.
Knowing what counts as a unit test is key to building software delivery pipelines that work.
A unit test looks at a single part of your code, such as a function, class method, or a small group of related components. The main point is to test one behavior at a time. Unit tests differ from integration tests in that they exercise only the logic of your code, which makes it much easier to pinpoint the cause when a test fails.
Unit tests should only check code that you wrote and not things like databases, file systems, or network calls. This separation makes tests quick and dependable. Tests that don't rely on outside services run in milliseconds and give the same results no matter where they are run, like on your laptop or in a CI pipeline.
Unit tests are one of the most important parts of continuous integration in CI/CD pipelines because they surface problems immediately after code changes. Because they are so fast, developers can run them many times a minute while coding. This keeps feedback loops tight, which makes bugs easier to find and stops them from reaching later stages of the pipeline.
Teams that run full test suites on every commit catch problems early by focusing on three things: making tests fast, choosing the right tests, and keeping tests organized. Good unit testing helps developers stay productive and keeps builds running quickly.
Deterministic Tests for Every Commit
Unit tests should finish in seconds, not minutes, so that they can be quickly checked. Google's engineering practices say that tests need to be "fast and reliable to give engineers immediate feedback on whether a change has broken expected behavior." To keep tests from being affected by outside factors, use mocks, stubs, and in-memory databases. Keep commit builds to less than ten minutes, and unit tests should be the basis of this quick feedback loop.
As projects get bigger, running all tests on every commit can slow teams down. Test Impact Analysis looks at coverage data to figure out which tests really check the code that has been changed. AI-powered test selection chooses the right tests for you, so you don't have to guess or sort them by hand.
To get the most out of your infrastructure, use selective execution and run tests at the same time. Divide test suites into equal-sized groups and run them on different machines simultaneously. Smart caching of dependencies, build files, and test results helps you avoid doing the same work over and over. When used together, these methods cut down on build time a lot while keeping coverage high.
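As a rough sketch of the splitting step, assuming you track per-file test durations, a greedy longest-first assignment keeps the shards close to equal:

import heapq

def split_into_shards(test_durations, shard_count):
    """Greedy longest-first assignment: each test file goes to the lightest shard."""
    shards = [(0.0, i, []) for i in range(shard_count)]   # (total seconds, index, files)
    heapq.heapify(shards)
    for test, seconds in sorted(test_durations.items(), key=lambda kv: -kv[1]):
        total, idx, files = heapq.heappop(shards)
        files.append(test)
        heapq.heappush(shards, (total + seconds, idx, files))
    return [files for _, _, files in sorted(shards, key=lambda s: s[1])]

durations = {"test_api.py": 120, "test_models.py": 95, "test_auth.py": 60, "test_utils.py": 30}
print(split_into_shards(durations, 2))
# -> [['test_api.py', 'test_utils.py'], ['test_models.py', 'test_auth.py']]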
Standardized Organization for Scale
Using consistent names, tags, and organization for tests helps teams track performance and keep quality high as they grow. Set clear rules for test types (like unit, integration, or smoke) and use names that show what each test checks. Analytics dashboards can spot flaky tests, slow tests, and common failures. This helps teams improve test suites and keep things running smoothly without slowing down developers.
A good unit test uses the Arrange-Act-Assert pattern. For example, you might test a function that calculates order totals with discounts:
def test_apply_discount_to_order_total():
    # Arrange: Set up test data
    order = Order(items=[Item(price=100), Item(price=50)])
    discount = PercentageDiscount(10)

    # Act: Execute the function under test
    final_total = order.apply_discount(discount)

    # Assert: Verify expected outcome
    assert final_total == 135  # 150 - 10% discount

In the Arrange phase, you set up the objects and data you need. In the Act phase, you call the method you want to test. In the Assert phase, you check if the result is what you expected.
Testing Edge Cases
Real-world code needs to handle more than just the usual cases. Your tests should also check edge cases and errors:
def test_apply_discount_with_empty_cart_returns_zero():
    order = Order(items=[])
    discount = PercentageDiscount(10)
    assert order.apply_discount(discount) == 0

def test_apply_discount_rejects_negative_percentage():
    order = Order(items=[Item(price=100)])
    with pytest.raises(ValueError):
        PercentageDiscount(-5)

Notice the naming style: test_apply_discount_rejects_negative_percentage clearly shows what's being tested and what should happen. If this test fails in your CI pipeline, you'll know right away what went wrong, without searching through logs.
When teams want faster builds and fewer late-stage bugs, the benefits of unit testing are clear. Good unit tests help speed up development and keep quality high.
When you use smart test execution in modern CI/CD pipelines, these benefits get even bigger.
Disadvantages of Unit Testing: Recognizing the Trade-Offs
Unit testing is valuable, but knowing its limits helps teams choose the right testing strategies. These downsides matter most when you’re trying to make CI/CD pipelines faster and more cost-effective.
Research shows that automatically generated tests can be harder to understand and maintain. Studies also show that statement coverage doesn’t always mean better bug detection.
Industry surveys show that many organizations have trouble with slow test execution and unclear ROI for unit testing. Smart teams solve these problems by choosing the right tests, using smart caching, and working with modern CI platforms that make testing faster and more reliable.
Developers use unit tests in three main ways that affect build speed and code quality. These practices turn testing into a tool that catches problems early and saves time on debugging.
Before they start coding, developers write unit tests. They use test-driven development (TDD) to make the design better and cut down on debugging. According to research, TDD finds 84% of new bugs, while traditional testing only finds 62%. This method gives you feedback right away, so failing tests help you decide what to do next.
Unit tests act like automated guards that catch bugs when code changes. Developers write tests to reproduce reported bugs, then confirm the fixes by re-running those tests. Automated tools now generate test cases from issue reports, succeeding 30.4% of the time at producing a test that fails for the exact problem reported. To stop fixed bugs from coming back, teams run these regression tests in CI pipelines.
Good developer testing doesn't look at infrastructure or glue code; it looks at business logic, edge cases, and public interfaces. Testing public methods and properties is best; private details that change often should be left out. Test doubles help developers keep business logic separate from systems outside of their control, which makes tests more reliable. Integration and system tests are better for checking how parts work together, especially when it comes to things like database connections and full workflows.
Slow, unreliable tests can slow down CI and hurt productivity, while also raising costs. The following proven strategies help teams check code quickly and cut both build times and cloud expenses.
Choosing between manual and automated unit testing directly affects how fast and reliable your pipeline is.
Manual Unit Testing: Flexibility with Limitations
Manual unit testing means developers write and run tests by hand, usually early in development or when checking tricky edge cases that need human judgment. This works for old systems where automation is hard or when you need to understand complex behavior. But manual testing can’t be repeated easily and doesn’t scale well as projects grow.
Automated Unit Testing: Speed and Consistency at Scale
Automated testing transforms test execution into fast, repeatable processes that integrate seamlessly with modern development workflows. Modern platforms leverage AI-powered optimization to run only relevant tests, cutting cycle times significantly while maintaining comprehensive coverage.
Why High-Velocity Teams Prioritize Automation
Fast-moving teams use automated unit testing to keep up speed and quality. Manual testing is still useful for exploring and handling complex cases, but automation handles the repetitive checks that make deployments reliable and regular.
Difference Between Unit Testing and Other Types of Testing
Knowing the difference between unit, integration, and other test types helps teams build faster and more reliable CI/CD pipelines. Each type has its own purpose and trade-offs in speed, cost, and confidence.
Unit Tests: Fast and Isolated Validation
Unit tests are the most important part of your testing plan. They test single functions, methods, or classes without using any outside systems. You can run thousands of unit tests in just a few minutes on a good machine. This keeps you from having problems with databases or networks and gives you the quickest feedback in your pipeline.
Integration Tests: Validating Component Interactions
Integration testing makes sure that the different parts of your system work together. There are two main types of tests: narrow tests that use test doubles to check specific interactions (like testing an API client with a mock service) and broad tests that use real services (like checking your payment flow with real payment processors). Integration tests use real infrastructure to find problems that unit tests might miss.
End-to-End Tests: Complete User Journey Validation
At the top of the testing pyramid are end-to-end tests, which simulate complete user journeys through your app. They provide the most confidence that the whole system works together, but they are slow to run, expensive to maintain, and often brittle; a bug that a unit test flags in seconds can take far longer to track down through an end-to-end failure.
The Test Pyramid: Balancing Speed and Coverage
The best testing strategy uses a pyramid: many small, fast unit tests at the bottom, some integration tests in the middle, and just a few end-to-end tests at the top.
Modern development teams use a unit testing workflow that balances speed and quality. Knowing this process helps teams spot slow spots and find ways to speed up builds while keeping code reliable.
Before making changes, developers write code on their own computers and run unit tests. They run tests on their own computers to find bugs early, and then they push the code to version control so that CI pipelines can take over. This step-by-step process helps developers stay productive by finding problems early, when they are easiest to fix.
Once code is in the pipeline, automation tools run unit tests on every commit and give feedback right away. If a test fails, the pipeline stops deployment and lets developers know right away. This automation stops bad code from getting into production. Research shows this method can cut critical defects by 40% and speed up deployments.
Modern CI platforms use Test Intelligence to only run the tests that are affected by code changes in order to speed up this process. Parallel testing runs test groups in different environments at the same time. Smart caching saves dependencies and build files so you don't have to do the same work over and over. These steps can help keep coverage high while lowering the cost of infrastructure.
Teams analyze test results through dashboards that track failure rates, execution times, and coverage trends. Analytics platforms surface patterns like flaky tests or slow-running suites that need attention. This data drives decisions about test prioritization, infrastructure scaling, and process improvements. Regular analysis ensures the unit testing approach continues to deliver value as codebases grow and evolve.
Using the right unit testing techniques can turn unreliable tests into a reliable way to speed up development. These proven methods help teams trust their code and keep CI pipelines running smoothly:
These methods work together to build test suites that catch real bugs and stay easy to maintain as your codebase grows.
As we've covered in the CI/CD workflow discussion, the first step to good unit testing is isolation: testing your code without relying on outside systems that might be slow or unavailable. Dependency injection helps here because it lets you swap in test doubles for real dependencies when you run tests.
It is easier for developers to choose the right test double if they know the differences between them. Fakes are simple working versions, such as in-memory databases. Stubs return set data that can be used to test queries. Mocks keep track of what happens so you can see if commands work as they should.
This method keeps tests quick and accurate no matter when or where you run them. When teams use good isolation, tests run around 60% faster and there are far fewer flaky failures slowing down development.
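A small example of the pattern, with an in-memory fake injected in place of a real database-backed repository (the class names are illustrative):

class FakeUserRepository:
    """In-memory fake standing in for a database-backed repository."""
    def __init__(self, users):
        self._users = dict(users)

    def get(self, user_id):
        return self._users.get(user_id)

class GreetingService:
    """Business logic under test; the repository is injected, not hard-coded."""
    def __init__(self, repository):
        self._repository = repository

    def greet(self, user_id):
        user = self._repository.get(user_id)
        return f"Hello, {user}!" if user else "Hello, guest!"

def test_greet_known_user():
    service = GreetingService(FakeUserRepository({1: "Ada"}))
    assert service.greet(1) == "Hello, Ada!"

def test_greet_unknown_user_falls_back_to_guest():
    service = GreetingService(FakeUserRepository({}))
    assert service.greet(99) == "Hello, guest!"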
Beyond isolation, teams need ways to increase test coverage without a proportional increase in effort. Property-based testing lets you define rules that should always hold, then automatically generates hundreds of test cases. This method is great for finding edge cases and boundary conditions that manually written tests might miss.
Parameterized testing gives you similar benefits, but you have more control over the inputs. You don't have to write extra code to run the same test with different data. Tools like xUnit's Theory and InlineData make this possible. This helps find more bugs and makes it easier to keep track of your test suite.
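In pytest, the same idea looks like the sketch below, reusing the illustrative Order, Item, and PercentageDiscount classes from the earlier examples:

import pytest

@pytest.mark.parametrize(
    "prices, percent, expected_total",
    [
        ([100, 50], 10, 135),   # 150 minus 10%
        ([100], 0, 100),        # no discount applied
        ([], 10, 0),            # empty cart
    ],
)
def test_apply_discount_totals(prices, percent, expected_total):
    order = Order(items=[Item(price=p) for p in prices])
    assert order.apply_discount(PercentageDiscount(percent)) == expected_total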
Both methods work best when you choose the right tests to run. You only run the tests you need, so platforms that know which tests matter for each code change give you full coverage without slowing things down.
The last technique addresses complicated output, such as JSON responses or generated code. Golden tests and snapshot testing make this easier by saving the expected output as reference files, so you don't have to write complicated assertions by hand.
If your code’s output changes, the test fails and shows what’s different. This makes it easy to spot mistakes, and you can approve real changes by updating the snapshot. This method works well for testing APIs, config generators, or any code that creates structured output.
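A minimal hand-rolled golden test, with no extra libraries and with illustrative paths and payload, might look like this:

import json
from pathlib import Path

GOLDEN_FILE = Path("tests/golden/order_summary.json")

def build_order_summary():
    """Stand-in for the code under test that produces structured output."""
    return {"items": 2, "subtotal": 150, "discount": "10%", "total": 135}

def test_order_summary_matches_golden_file():
    actual = build_order_summary()
    if not GOLDEN_FILE.exists():
        # First run or intentional change: write the snapshot, then review it like code.
        GOLDEN_FILE.parent.mkdir(parents=True, exist_ok=True)
        GOLDEN_FILE.write_text(json.dumps(actual, indent=2, sort_keys=True))
    expected = json.loads(GOLDEN_FILE.read_text())
    assert actual == expected   # a failing diff means the output changed; update the file to approve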
Teams that use full automated testing frameworks see code coverage go up by 32.8% and catch 74.2% more bugs per build. Golden tests help by making it easier to check complex cases that would otherwise need manual testing.
The main thing is to balance thoroughness with easy maintenance. Golden tests should check real behavior, not details that change often. When you get this balance right, you’ll spend less time fixing bugs and more time building features.
Picking the right unit testing tools helps your team write tests efficiently, instead of wasting time on flaky tests or slow builds. The best frameworks work well with your language and fit smoothly into your CI/CD process.
Modern teams use these frameworks along with CI platforms that offer analytics and automation. This mix of good tools and smart processes turns testing from a bottleneck into a productivity boost.
Smart unit testing can turn CI/CD from a bottleneck into an advantage. When tests are fast and reliable, developers spend less time waiting and more time releasing code. Harness Continuous Integration uses Test Intelligence, automated caching, and isolated build environments to speed up feedback without losing quality.
Want to speed up your team? Explore Harness CI and see what's possible.


For a long time, CI/CD has been “configuration as code.” You define a pipeline, commit the YAML, sync it to your CI/CD platform, and run it. That pattern works really well for workflows that are mostly stable.
But what happens when the workflow can’t be stable?
In cases like that, forcing teams to pre-save a pipeline definition, either in the UI or in a repo, turns into a bottleneck.
Today, I want to introduce you to Dynamic Pipelines in Harness.
Dynamic Pipelines let you treat Harness as an execution engine. Instead of having to pre-save pipeline configurations before you can run them, you can generate Harness pipeline YAML on the fly (from a script, an internal developer portal, or your own code) and execute it immediately via API.
To be clear, dynamic pipelines are an advanced capability. Pipelines that rewrite themselves on the fly are not typically needed and should generally be avoided; they're more complex than you want most of the time. But when you need this power, you really need it, and you want it implemented well.
Here are some situations where you may want to consider using dynamic pipelines.
You can build a custom UI, or plug into something like Backstage, to onboard teams and launch workflows. Your portal asks a few questions, generates the corresponding Harness YAML behind the scenes, and sends it to Harness for execution.
Your portal owns the experience. Harness owns the orchestration: execution, logs, state, and lifecycle management. While mature pipeline reuse strategies favor consistent templates behind your IDP's entry points, some organizations may use dynamic pipelines for certain classes of applications where generating that extra flexibility automatically is worth it.
Moving CI/CD platforms often stalls on the same reality: “we have a lot of pipelines.”
With Dynamic Pipelines, you can build translators that read existing pipeline definitions (for example, Jenkins or Drone configurations), convert them into Harness YAML programmatically, and execute them natively. That enables a more pragmatic migration path, incremental rather than a big-bang rewrite. It even supports parallel execution where both systems are in place for a short period of time.
We’re entering an era where more of the delivery workflow is decided at runtime, sometimes by policy, sometimes by code, sometimes by AI-assisted systems. The point isn’t “fully autonomous delivery.” It’s intelligent automation with guardrails.
If an external system determines that a specific set of tests or checks is required for a particular change, it can assemble the pipeline YAML dynamically and run it. That's a practical step toward more programmatic stage and step generation over time. For that to work, the underlying DevOps platform must support dynamic pipelining. Harness does.
Dynamic execution is primarily API-driven, and there are two common patterns.
You execute a pipeline by passing the full YAML payload directly in the API request.
Workflow: your tool generates valid Harness YAML → calls the Dynamic Execution API → Harness runs the pipeline.
Result: the run starts immediately, and the execution history is tagged as dynamically executed.
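As a rough sketch of that workflow in Python (the endpoint path, query parameter, and pipeline YAML below are placeholders rather than the exact API contract; refer to the Dynamic Execution API documentation for the real request shape):

import requests

HARNESS_API_KEY = "pat.************"          # placeholder Harness API key
ACCOUNT_ID = "your_account_identifier"        # placeholder account identifier
URL = "https://app.harness.io/<dynamic-execution-endpoint>"   # see the API docs

# YAML generated on the fly by your portal, translator, or automation.
pipeline_yaml = """
pipeline:
  name: generated-build
  identifier: generated_build
  stages: []   # assembled at runtime by your tooling
"""

response = requests.post(
    URL,
    params={"accountIdentifier": ACCOUNT_ID},
    headers={"x-api-key": HARNESS_API_KEY, "Content-Type": "application/yaml"},
    data=pipeline_yaml,
    timeout=30,
)
response.raise_for_status()
print(response.json())   # execution identifier, status, and so on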
You can designate specific stages inside a parent pipeline as Dynamic. At runtime, the parent pipeline fetches or generates a YAML payload and injects it into that stage.
This is useful for hybrid setups where most of the pipeline stays static and governed, but one stage is assembled at runtime.
A reasonable question is: “If I can inject YAML, can I bypass security?”
Bottom line: no.
Dynamic pipelines are still subject to the same Harness governance controls as any other pipeline execution.
This matters because speed and safety aren’t opposites if you build the right guardrails—a theme that shows up consistently in DORA’s research and in what high-performing teams do in practice.
To use Dynamic Pipelines, enable Allow Dynamic Execution for Pipelines at both the account level and on each pipeline you want to run dynamically.
Once that’s on, you can start building custom orchestration layers on top of Harness, portals, translators, internal services, or automation that generates pipelines at runtime.
The takeaway here is simple: Dynamic Pipelines unlock new “paved path” and programmatic CI/CD patterns without giving up governance. I’m excited to see what teams build with it.
Ready to try it? Check out the API documentation and run your first dynamic pipeline.


We're thrilled to share some exciting news: Harness has been granted U.S. Patent US20230393818B2 (originally published as US20230393818A1) for our configuration file editor with an intelligent code-based interface and a visual interface.
This patent represents a significant step forward in how engineering teams interact with CI/CD pipelines. It formalizes a new way of managing configurations - one that is both developer-friendly and enterprise-ready - by combining the strengths of code editing with the accessibility of a visual interface.
👉 If you haven’t seen it yet, check out our earlier post on the Harness YAML Editor for context.
In modern DevOps, YAML is everywhere. Pipelines, infrastructure-as-code, Kubernetes manifests, you name it. YAML provides flexibility and expressiveness for DevOps pipelines, but it comes with drawbacks: it's easy to misconfigure, unforgiving about indentation and syntax, and hard to validate by eye at scale.
The result? Developers spend countless hours fixing misconfigurations, chasing down syntax errors, and debugging pipelines that failed for reasons unrelated to their code.
We knew there had to be a better way.
The patent covers a hybrid editor that blends the best of two worlds: the precision and power of a code editor and the accessibility of a visual interface. What makes this unique is the schema stitching approach, which drives both views, and the validation behind them, from the same underlying schema. This ensures consistency, prevents invalid configurations, and gives users real-time feedback as they author pipelines.
This isn’t just a UX improvement - it’s a strategic shift with broad implications.
New developers no longer need to memorize every YAML field or indentation nuance. Autocomplete and inline hints guide them through configuration, while the visual editor provides an easy starting point. A wall of YAML can be hard to understand; a visual pipeline is easy to grok immediately.
Schema-based validation catches misconfigurations before they break builds or deployments. Teams save time, avoid unnecessary rollbacks, and maintain higher confidence in their pipelines.
By offering both a code editor and a visual editor, the tool becomes accessible to a wider audience - developers, DevOps engineers, and even less technical stakeholders like product managers or QA leads who need visibility.
Here’s a simple example:
Let’s say your pipeline YAML requires specifying a container image.
image: ubuntu:20.04
But what if you accidentally typed ubunty:20.04? In a traditional editor, the pipeline might fail later at runtime. With schema-based validation and the visual editor layered on top, the typo is flagged while you're still authoring the pipeline, long before it runs.
Multiply this by hundreds of fields, across dozens of microservices, and the value becomes clear.
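As a rough illustration of the underlying idea (not the Harness implementation), here is how a schema check over a pipeline document catches that kind of typo before anything runs; the schema and field are deliberately simplified:

from jsonschema import Draft7Validator   # pip install jsonschema

# Simplified, illustrative schema; a real pipeline schema is stitched together
# from many module and step schemas.
PIPELINE_SCHEMA = {
    "type": "object",
    "properties": {
        "image": {
            "type": "string",
            # Only accept known base images; "ubunty:20.04" fails this pattern.
            "pattern": r"^(ubuntu|alpine|debian):[\w.\-]+$",
        }
    },
    "required": ["image"],
}

config = {"image": "ubunty:20.04"}   # the typo from the example above

for error in Draft7Validator(PIPELINE_SCHEMA).iter_errors(config):
    print(f"{'/'.join(map(str, error.path))}: {error.message}")
# -> image: 'ubunty:20.04' does not match '^(ubuntu|alpine|debian):...'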
We're in a new era of software delivery, and this patent speaks directly to it by creating a foundation for intelligent, schema-driven configuration tooling that Harness can continue to build on. With that foundation secured, the door is open to innovate further.
This isn't just about YAML: DevOps configuration must be intuitive, resilient, and scalable to enable faster, safer, and more delightful software delivery.
This milestone wouldn’t have been possible without the incredible collaboration of our product, engineering, and legal teams. And of course, our customers. The feedback they provided shaped the YAML editor into what it is today.
This patent is more than a legal win. It’s validation of an idea: that developer experience matters just as much as functionality. By bridging the gap between raw power and accessibility, we’re making CI/CD pipelines faster to build, safer to run, and easier to adopt.
At Harness, we invest aggressively in R&D to solve our customers' most complex problems. What truly matters is delivering capabilities that improve the lives of developers and platform teams, enabling them to innovate more quickly.
We're thrilled that this particular innovation, born from solving the real-world pain of YAML, has been formally recognized as a unique invention. It's the perfect example of our commitment to leading the industry and delivering tangible value, not just features.
👉 Curious to see it in action? Explore the Harness YAML Editor and share your feedback.


An airgapped environment enforces strict outbound policies, preventing external network communication. This setup enhances security but presents challenges for cross-cloud data synchronization.
A proxy server is a lightweight, high-performance intermediary facilitating outbound requests from workloads in restricted environments. It acts as a bridge, enabling controlled external communication.
ClickHouse is an open-source, column-oriented OLAP (Online Analytical Processing) database known for its high-performance analytics capabilities.
This article explores how to seamlessly sync data from BigQuery, Google Cloud’s managed analytics database, to ClickHouse running in an AWS-hosted airgapped Kubernetes cluster using proxy-based networking.
Deploying ClickHouse in airgapped environments presents challenges in syncing data across isolated cloud infrastructures such as GCP, Azure, or AWS.
In our setup, ClickHouse is deployed via Helm charts in an AWS Kubernetes cluster, with strict outbound restrictions. The goal is to sync data from a BigQuery table (GCP) to ClickHouse (AWS K8S), adhering to airgap constraints.
The solution leverages a corporate proxy server to facilitate communication. By injecting a custom proxy configuration into ClickHouse, we enable HTTP/HTTPS traffic routing through the proxy, allowing controlled outbound access.
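Before touching ClickHouse itself, it helps to sanity-check the proxy path from inside the cluster. A minimal check (the proxy address below is a placeholder for your corporate proxy service) might be:

import requests

# Placeholder proxy address; substitute your corporate proxy service endpoint.
PROXIES = {"https": "http://proxy.internal:3128"}

# A public, unauthenticated Google API endpoint used purely as a reachability check.
# With the proxy configured, this request should show up in the proxy logs instead
# of being blocked by the cluster's egress policy.
resp = requests.get(
    "https://www.googleapis.com/discovery/v1/apis",
    proxies=PROXIES,
    timeout=10,
)
print(resp.status_code)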


Observed proxy logs confirming outbound requests were successfully relayed to GCP.

Left window shows query to BigQuery and right window shows proxy logs — the request forwarding through proxy server
This approach successfully enabled secure communication between ClickHouse (AWS) and BigQuery (GCP) in an airgapped environment, and the ConfigMap-based proxy configuration kept the setup easy to manage and reproduce.
By leveraging ClickHouse’s extensible configuration system and Kubernetes, we overcame strict network isolation to enable cross-cloud data workflows in constrained environments. This architecture can be extended to other cloud-native workloads requiring external data synchronization in airgapped environments.


It’s 2025 and if you work as a software engineer, you probably have access to an AI coding assistant at work. In this blog, I’ll share with you my experience working on a project to change the API endpoints of an existing codebase while making heavy use of an AI code assistant.
There's a lot of research on how AI code assistants affect the day-to-day work of a software engineer, and the picture is about as clear as mud. Many people also have their own experience of AI tooling causing massive headaches: 'AI slop' that is difficult to understand and only tangentially related to the original problem they were trying to address, filling up their codebase and making it impossible to understand what the code is (or is supposed to be) doing.
I was part of the Split team that was acquired by Harness in Summer 2024. I had been maintaining an API wrapper for the Split APIs for a few years at this point. This allowed our users to take their existing Python codebases and easily automate management of Split feature flags, users, groups, segments, and other administrative entities. We were getting about 12–13,000 downloads per month, which isn't an enormous amount of traffic but not bad for someone who's not officially on a software engineering team.
The Python API client's architecture is straightforward: instantiating it constructs a client object that holds an API key and an optional base URL configuration. Each API is served by what's called a 'microclient', which handles the behavior of that endpoint and returns resources of that type during create, read, and update calls.

API Client Architecture

Example showing the call sequence of instantiating the API Client and making a list call
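In code, the shape of that architecture looks roughly like the sketch below; the class names, response format, and default base URL are simplified stand-ins rather than the client's actual public API:

from dataclasses import dataclass
import requests

@dataclass
class Workspace:
    """Minimal stand-in for a resource object returned by a microclient."""
    id: str
    name: str

class WorkspaceMicroClient:
    """Handles one endpoint family, returning Workspace resources."""
    def __init__(self, session, base_url):
        self._session = session
        self._base_url = base_url

    def list(self):
        resp = self._session.get(f"{self._base_url}/workspaces")
        resp.raise_for_status()
        return [Workspace(w["id"], w["name"]) for w in resp.json()["objects"]]

class Client:
    """Top-level client: shares the API key and base URL across all microclients."""
    def __init__(self, config):
        session = requests.Session()
        session.headers["Authorization"] = f"Bearer {config['apikey']}"
        base_url = config.get("base_url", "https://api.split.io/internal/api/v2")
        self.workspaces = WorkspaceMicroClient(session, base_url)
        # ...one microclient per endpoint family: users, groups, segments, and so on.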
As part of the migration of Split into the Harness platform, Split will be deprecating some of its API endpoints. Those endpoints, such as Users and Groups, will instead be maintained under the banner of the Harness platform. Split customers are going to be migrated to access their Split app from within Harness, so Users, Groups, and Split Projects will be managed in Harness going forward, meaning that Harness endpoints will have to be used.

How to mate the API Client with the proper endpoints for customers post Harness Migration?
With respect to API keys, Split API keys will continue to work for the existing Split endpoints, even after migration to Harness. Harness API keys will work for everything and will be required for the Harness endpoints post-migration.

I had some great help from the former Split (now Harness FME) PMM and Engineering teams who took on the task of actually feeding me the relevant APIs from the Harness API Docs. This gave me a good starting point to understand what I might need to do.
Essentially, to have similar control over Harness's Role-Based Access Control (RBAC) and project information as we had in Split, I'd need to utilize the corresponding Harness APIs.
Not all Split accounts will be migrating at once to the Harness platform — this will be over a period of a few months. This means that we will have to support both API access styles for at least some period of time. I also know that I still have my normal role at Harness supporting onboarding customers using our FME SDKs and don’t have a lot of free time to re-write an API client from scratch, so I got to thinking about what my options were.
I really wanted to make the API transition as seamless as possible for my API client users. So the first thing I figured was that I would need a way to determine if the API key being used was from a migrated account. Unfortunately, after discussing with some folks there simply wasn’t going to be time for building out an endpoint like this for what will be, at most, a period of a few months. As such my first design decision was how to determine which ‘mode’ the Client was going to use, the existing mode with access to the older Split API endpoints, or the ‘new’ mode with those endpoints deprecated and a collection of new Harness endpoints available.
I decided this was going to be done with a flag on instantiation. Since the API client’s constructor already accepted a configuration object as its argument, I thought this would be pretty straightforward: the existing constructor call would simply gain an additional option, roughly as sketched below.
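The following is a hedged sketch of that change, not the library’s exact signature. The option names (harness_mode, harness_token, account_identifier) come straight from the design described in this post, but the surrounding code is illustrative.

```python
from splitapiclient.main import get_client  # import path may differ

# Existing usage: Split mode with a Split Admin API key.
client = get_client({'apikey': 'your-split-admin-api-key'})

# New usage: opt in to Harness mode with a Harness API key.
client = get_client({
    'harness_mode': True,
    'harness_token': 'your-harness-api-key',   # sent via the x-api-key header
    'account_identifier': 'your-account-id',   # required by many Harness endpoints
})
```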
Now I was left thinking about how I would actually implement this.
Recently, Harness employees were given access to the Windsurf IDE with Claude. Since I could use the help, I figured I would sign on and see whether it could help me build out my code changes faster.
I had used Claude, ChatGPT, DeepSeek, and various other AI assistants through their websites for small-scale problem solving (e.g., fill in this function, help me with this error, write me a shell script that does XYZ), but I had never actually worked with something integrated into the IDE.
So I fired up Windsurf and put in a pretty ambitious prompt to see what it was capable of doing.
Split has been acquired by harness and now the harness apis will be used for some of these endpoints. I will need to implement a seperate ‘harness_mode’ boolean that is passed in at the api constructor. In harness mode there will be new endpoints available and the existing split endpoints for users, groups, restrictions, all endpoints except ‘get’ for workspaces, and all endpoints for apikeys when the type == ‘admin’ will be deprecated. I will still need to have the apikey endpoint available for type==’client_side’ and ‘server_side’ keys.
It then whirred to work and, quite frankly, I was really impressed with the results. However, it didn’t quite understand what I wanted. The Harness endpoints are completely different in structure, methods, and base URL. The result was that the existing microclients gained Harness methods and Harness placeholders in their URLs, which wasn’t going to work. I should have told the AI that I really wanted separate microclients and separate resources for Harness. I reverted the changes and went back to the drawing board (but I’ll get back to this later).
OpenAPI
My second idea was to generate API code from the Harness API docs themselves. Harness’s API docs have an OpenAPI specification available, and there are tools that can generate API clients from these specifications. However, it became clear that the client-generation tooling doesn’t make it easy to filter a spec down to a subset of endpoints. Harness has nearly 300 API endpoints across its rich collection of modules and features, and its nearly 10 MB OpenAPI spec would actually crash the OpenAPI generator — it was too big. So I spent some time writing code to strip the OpenAPI spec JSON down to just the endpoints I needed.
Here, the AI tooling was also helpful. I asked
how can I filter a openapi json by either tag or by endpoint resource path?
can this also remove components that aren’t part of the endpoints with tags
could you also have it remove unused tags
But the problem ended up being that the OpenAPI spec is more complex than I initially thought, with references, parameters, and dependencies between objects. So it wasn’t going to be as simple as listing the endpoints I needed and sending them to the generator.
I kept running the generated filter script and then the OpenAPI generator, and went through a few loops of running the script, hitting an error, and sending it back to the AI assistant.
By the end I did have a script that could do the filtering, but even the filtered spec ended up being too big for the OpenAPI generator. You can see that code here
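For a sense of what that filtering involves, here is a simplified sketch of stripping an OpenAPI JSON down to a set of tags. It is not the actual script linked above; in particular, it does not chase the nested $ref dependencies in components, which is exactly the part that made the real problem harder. The tag names are illustrative.

```python
import json

def filter_spec_by_tags(spec_path, keep_tags, out_path):
    """Keep only the operations whose tags intersect keep_tags."""
    with open(spec_path) as f:
        spec = json.load(f)

    kept_paths = {}
    for path, operations in spec.get("paths", {}).items():
        kept_ops = {
            method: op for method, op in operations.items()
            if isinstance(op, dict) and set(op.get("tags", [])) & keep_tags
        }
        if kept_ops:
            kept_paths[path] = kept_ops

    spec["paths"] = kept_paths
    # Prune the top-level tag list to the tags still in use.
    spec["tags"] = [t for t in spec.get("tags", []) if t.get("name") in keep_tags]

    with open(out_path, "w") as f:
        json.dump(spec, f, indent=2)

filter_spec_by_tags("harness-openapi.json", {"User", "User Group", "Project"}, "filtered.json")
```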
As a test, I did generate code for just one endpoint (harness_user) and reviewed the generated Python. One thing that was immediately clear was that it was structured wildly differently from the API client I already have. There are also dozens of warnings inside the generated code telling you not to make any changes or updates to it, and I was not familiar with the codebase.
Whether manually or with an AI assistant, stitching these together was not going to be easy, so I shelved this idea as well.
As an aside, I think this is worth noting: an AI code assistant can’t help you when you don’t know how to specify what you want or what the outcome should look like. I needed a better understanding of what I was trying to accomplish.
One of the things I had in mind was that I really wanted to make the transition as seamless as possible. Once my idea of automatic mode selection was dashed, I still thought I could, through heroic effort, automate the creation of the existing Split Python classes via the Harness APIs.
I took a deep dive into this idea and came back with the conclusion that it would simply be too burdensome to implement and wouldn’t really give users what they need.
For example, to create an API key in Split, we had just one API endpoint with a JSON body:
However, Harness has a very rich RBAC model and, with multiple modules, a far more flexible structure of Service Accounts, API Keys, and individual tokens. Harness’s model allows for easy key rotation and lets the API key act as a container for the actual token string that is used for authentication in the APIs.
Shown more simply in the diagrams below:

Observe the difference in structure of API Key authentication and generation
Now the Python microclient for generating API keys for Split currently makes calls structured like so:
To replicate this, the client in ‘Harness Mode’ would have to create a Service Account, an API Key, and a Token all at once, and automatically map roles to the created service account, while remaining seamless to the user, roughly as in the sketch below.
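Here is a purely hypothetical illustration of that chain. The microclient names, arguments, and role identifier are invented for the sketch; the point is how many coordinated calls hide behind one Split-style operation.

```python
def create_legacy_style_api_key(client, account_id, name):
    # One Split-style "create an API key" call would need all of this in Harness mode.
    sa = client.service_accounts.create(name=name, account_identifier=account_id)
    client.role_assignments.create(principal=sa.identifier, role='_account_viewer')
    key = client.api_keys.create(parent=sa.identifier, name=name)
    token = client.tokens.create(apikey=key.identifier, name=name + '-token')
    return token.value  # the only piece the Split-style caller actually wanted
```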
This is a tall task, and being pragmatic, I don’t see that as a real sustainable solution for developers using my library as they get more familiar with the Harness platform. They’re going to want to use Harness objects natively.
This is especially true of the delete method of the current client:
The Harness method for deleting a token takes the token identifier, not the token value itself, making this signature impossible to reproduce with Harness’s APIs. And even if I could delete a token, would I want to delete the token but keep the service account and API key? Would I need to replicate the roles and role assignments that Split has? Much of this is very undefined.
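A tiny, hypothetical comparison of the two signatures (names invented for illustration):

```python
def delete_key_split_style(client, api_key_value):
    # Split style: the caller passes the API key value itself.
    client.apikeys.delete(api_key_value)

def delete_token_harness_style(client, token_identifier):
    # Harness style: the caller passes the token's identifier, not its value,
    # and the service account and API key above it still exist afterwards.
    client.tokens.delete(token_identifier)
```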
Wanting to keep things as straightforward and maintainable as possible, and wanting users to start thinking in terms of Harness’s API schema, I settled on a design decision.
We would have a ‘Harness Mode’ that explicitly deprecates the Split API microclients and resources and activates a separate client that uses Harness API endpoints and resources. The endpoints that are unchanged would still use the Split endpoints and API keys.

Now that I had a better understanding of how I wanted to design this, I felt I could write a better prompt.
Split has been acquired by harness and now the harness apis will be used for some of these endpoints. I will need to implement a seperate ‘harness_mode’ boolean that is passed in at the api constructor. In harness mode there will be new endpoints available and the existing split endpoints for users, groups, restrictions, all endpoints except ‘get’ for workspaces, and all endpoints for apikeys when the type == ‘admin’ will be deprecated. I will still need to have the apikey endpoint available for type==’client_side’ and ‘server_side’ keys. Make seperate microclients in harness mode for the following resources:
harness_user, harness_project, harness_group, role, role_assignment, service_account, and token
Ensure that that the harness_mode has a seperate harness_token key that it uses. It uses x-api-key as the header for auth and not bearer authentication
Claude then whirred away, and this time the results were much better. With the separate microclients I had a much better structure to build my code on, and it helped clarify how I would continue building.

The next thing I asked it to do was to create resources for all of my microclient objects.

The next thing I did was a big mistake: I asked it to create tests for all of my microclients and resources. Creating the tests before I had finished implementing my code meant that when the tests and the implementation disagreed, the AI had no way of knowing which one was right. I spent a lot of time troubleshooting test failures until I finally decided to delete all of the test files and write the tests much later in the development cycle, once the microclients and resources were reasonably implemented. DO NOT have the AI write BOTH your tests and your code before you have had the chance to review either of them, or you will be in a world of pain, spending hours trying to figure out what you actually want.
This was an enormous time saver for me. Having the project essentially built with custom scaffolding for me was just amazing.
The next thing to do was fill in the resources. Each resource is essentially a schema, an __init__ that loads the data returned from the endpoints, and accessors to get the fields out of that data.
I was able to pull the schemas from the apidocs.harness.io site pretty easily.
Here’s an example of the AI-generated code for the Harness group resource.
I did a few things here: I had the AI generate a generalizable getter and dict export driven by the schema itself, essentially letting me paste a schema into the resource and have the methods it needs generated automatically.
Here’s an example of that pattern for the Harness user class.
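The original snippets aren’t reproduced here, but a minimal sketch of the pattern might look like the following. The field names are loosely based on a user object and should be treated as illustrative, not as the exact Harness schema.

```python
class HarnessUser:
    # Paste the schema here; the accessors and export are derived from it.
    _SCHEMA = {
        'uuid': 'string',
        'name': 'string',
        'email': 'string',
        'locked': 'boolean',
        'disabled': 'boolean',
    }

    def __init__(self, data=None, client=None):
        self._client = client
        data = data or {}
        # Keep only the fields named in the schema.
        self._fields = {field: data.get(field) for field in self._SCHEMA}

    def __getattr__(self, name):
        # Generalizable getter: any schema field becomes an attribute.
        if name in self._SCHEMA:
            return self._fields[name]
        raise AttributeError(name)

    def export_dict(self):
        # Dict export driven by the same schema.
        return dict(self._fields)


user = HarnessUser({'uuid': 'abc123', 'email': 'dev@example.com'})
print(user.email)
print(user.export_dict())
```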
Once this was done for all of my resources, I had the AI create tests for these resources and went through a few iterations before my tests passed.
The microclients were a bit more challenging, partly because the methods are fundamentally different in many cases between the Split and Harness ways of managing these HTTP resources.
There was more manual work here and not as much automation. That being said, the AI had a lot of helpful autocompletes.
For example, the harness_user microclient class started with a default list of placeholder endpoint paths. If I changed one of them to the proper endpoint (ng/api/user) and pressed tab, the assistant would automatically fix the other endpoints. Small things like that really added up as I went through and manually wired up endpoints and looped over the arrays returned from the GET endpoints. The AI tooling really does speed up the implementation.
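Purely for illustration, an endpoint table in that microclient might be shaped like this. The structure and every path except ng/api/user (mentioned above) are assumptions made for the sketch, not Harness’s actual routes.

```python
_ENDPOINTS = {
    'list':   {'method': 'GET',    'path': 'ng/api/user'},
    'get':    {'method': 'GET',    'path': 'ng/api/user/{identifier}'},
    'delete': {'method': 'DELETE', 'path': 'ng/api/user/{identifier}'},
}
```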
Once the microclients were finished, I had the AI create tests and worked through running them, making sure we had coverage, that the tests made sense, and that they exercised all of the microclient endpoints (including pagination for the list endpoints).
The last thing to clean up was the base client. The AI had created a separate main harness_apiclient that is instantiated when Harness mode is enabled. I reviewed the deprecation code to ensure that deprecation warnings really only fire when specified, cleaned up some extraneous code around supporting other base URLs, and set the proper Harness base URL.
I then asked the AI to let me pass in an account_identifier at the client level, since many of the Harness endpoints require it, so users don’t need to pass that field on every single microclient request.
Finally, I had the AI write me a comprehensive test script that exercises all endpoints in both Harness mode and Split mode. I ran it against a Harness account and a Split account to verify success. I fixed a few minor issues, but ultimately it worked very well and was straightforward to use.
After this whole project, I’d like to leave the reader with a few learnings. The first is that your AI assistant still requires you to have a good sense of code smell. If something looks wrong, or the implementation in your head differs from what it produced, don’t hesitate to back up and revert its changes. Better to be safe than sorry.
You really need to have the design in your head and constantly be comparing it to what the AI is building for you when you ask it questions. Don’t just accept it — interrogate it. Save and commit often so that you can revert to known states.
Do not have it create both your tests and implementations at the same time. Only have it do one until you are finished with it and then have it do the other.
You do not want to keep asking it for things without an understanding of what you want the outcome to look like. Keep your hand on the revert button, and don’t be afraid to roll back to earlier parts of your conversation with the AI. If you do not review the code coming out of your assistant, you will be in a world of trouble. Coding with an AI assistant still relies on those senior and staff engineer skillsets, perhaps more than ever given the sheer volume of code it can generate. Design is more important than ever.
If you’re familiar with the legend of John Henry, the railroad worker who challenged a steam drilling machine with his hammer: with an AI assistant, I feel like I’ve been handed the steam drill. This really is the way to huge gains in efficiency in the production of software.

Learn how to work with your robot and be successful
I’m very excited for the future and for how AI code assistants will grow and become part and parcel of the standard software development workflow. I know it saved me a lot of time and spared me a lot of frustration and headaches.


Jenkins has been a mainstay in CI/CD, helping teams across the globe automate their build, test, and deployment workflows for over a decade. But with the explosion of AI-generated code, software delivery is expected to accelerate, and the need for faster, more reliable releases has never been greater. Jenkins is showing its age. Organizations now find themselves wrestling with bloated, hard-to-maintain Jenkins pipelines, excessive infrastructure demands, and operational drag that stifles innovation.
It’s time for a change. With Harness, you can modernize your pipelines in just one day using our patented migration tool, and with our AI DevOps platform you can dramatically reduce complexity, accelerate deployments, and free your teams to focus on what matters: delivering value at speed.
For years, Jenkins was synonymous with CI/CD flexibility. Its open source roots, rich plugin ecosystem, and ubiquity made it the go-to for teams taking their first steps into automation. But today’s environment is different. Organizations are running hundreds or thousands of Jenkins jobs, many of which are barely used yet still consume precious resources. Jenkins setups are notorious for their appetite for RAM and CPU, often requiring a dedicated team just to stay operational.
The challenges are clear:
More and more code is being generated, especially with AI, but software delivery has remained the bottleneck. Modern software delivery demands more than pipeline automation: it needs AI on the delivery side to complement AI-powered code generation, so it can scale effortlessly, streamline governance, and empower developers with AI-driven insights and self-service capabilities.
Harness is the AI for Software Delivery: Harness has a suite of purpose-built AI agents that help you deliver software fast and in a secure manner while integrating seamlessly with Kubernetes, cloud runtimes, and on-prem environments, which results in:
Migrating off Jenkins may seem daunting, especially for organizations with years of accumulated pipeline logic and custom scripts. But Harness provides a structured, phased approach that minimizes risk and accelerates adoption. The focus is on modernizing your DevOps, not just migrating. In our experience, if you have thousands of Jenkins pipelines, you only need a fraction of them, so we don’t migrate each Jenkins pipeline one-for-one. We consolidate them into smart templates so that the overhead of maintaining pipelines is minimized.
Phase 1: Assess and Plan
Identify high-value, high-impact pipelines for initial migration. With our patented Jenkins Migration tool, you can automatically analyze your existing setup and prioritize the most critical workloads for modernization. Harness CI/CD specialists guide you at every step.
Phase 2: Pilot and Optimize
Migrate a single pipeline end-to-end to Harness, leveraging built-in template libraries, AI-generated workflows, and Harness’s differentiated features. Compare performance, reliability, and developer experience before scaling.
Phase 3: Scale and Sunset
Once your migration plan is proven, expand modernization and adoption across teams. Achieve significant Harness adoption in weeks, not months, and progressively sunset your Jenkins infrastructure, switching off the maintenance drain for good without any downtime.
Leading technology company Ancestry.com reduced its pipeline sprawl by 80:1 after migrating from Jenkins to Harness, cutting pipeline maintenance costs by 72%, accelerating time-to-market, and improving pipeline reliability.
Meanwhile, Citigroup leverages Harness to support 20,000 engineers. By automating tests and security scans with strong policy controls, Citi goes from build to running in production in under 7 minutes.
Jenkins served its purpose in the era of server farms and script-heavy automation. But the pace of software delivery has changed, and so should your toolchain. Modernize your pipelines without disrupting your delivery velocity to prepare for an AI-native world and unlock the next era of DevOps efficiency, security, and developer happiness.
Take the first step toward DevOps spring cleaning. Let expert CI/CD specialists guide you, migrate your first pipeline at no cost, and experience the difference an AI DevOps platform can make.
It’s time to break free from Jenkins and build & deploy the future, faster.


Software delivery isn’t slowing down, and neither is Harness AI. Today, we’re introducing powerful new capabilities that bring context-aware, agentic intelligence to your DevOps workflows. From natural language pipeline generation to AI-driven troubleshooting and policy enforcement, Harness AI now delivers even deeper automation that adapts to your environment, understands your standards, and removes bottlenecks before they start.
These capabilities, built into the Harness Platform, reflect our belief that AI is the foundation for how modern teams deliver software at scale.
“When we founded Harness, we believed AI would be a core pillar of modern software delivery,” said Jyoti Bansal, CEO and co-founder of Harness. “These new capabilities bring that vision to life, helping engineering teams move faster, with more intelligence, and less manual work. This is AI built for the real world of software delivery, governed, contextual, and ready to scale.”
Let’s take a closer look.
Imagine a scenario where an engineer starts at your organization and can create production-ready CI/CD pipelines that align with organizational standards on day one! That’s one of many use cases that Harness AI can help achieve. The AI doesn’t just generate generic pipelines; it pulls from your existing templates, tool configurations, environments, and governance policies to ensure every pipeline matches your internal standards. It’s like having a DevOps engineer on call 24/7 who already knows how your system works.
Easy to get started with your organization-specific pipelines
Teams today face a triple threat: faster code generation (thanks to AI coding assistants), increasingly fragmented toolchains, and mounting compliance requirements. Most pipelines can’t keep up with the increased volume of generated code.
Harness AI is purpose-built to meet these challenges. By applying large language models, a proprietary knowledge graph, and deep platform context, it helps your teams:
| Capability | What It Does |
|---|---|
| Pipeline Creation via Natural Language | Describe your app in plain English. Get a complete, production-ready CI/CD pipeline without YAML editing. |
| Automated Troubleshooting & Remediation | AI analyzes logs, pinpoints root causes, and recommends (or applies) fixes, cutting mean time to resolution. |
| Policy-as-Code via AI | Write and enforce OPA policies using natural language. Harness AI turns intent into governance, instantly. |
| Context-Aware Config Generation | AI understands your environments, Harness-specific constructs, secrets, and standards, and builds everything accordingly. |
| Multi-Product Coverage | Supports CI, CD, Infrastructure as Code Management, Security Testing Orchestration, and more, delivering consistent automation across your stack. |
| LLM Optimization | Harness dynamically selects the best LLM for each task from within a pool of LLMs, which also helps with fallback in case one of the LLMs is unavailable. |
| Enterprise-Grade Guardrails | Every AI action is RBAC-controlled, fully auditable, and embedded directly in the Harness UI; no extra setup needed. |
Watch the demo
Organizations using Harness AI are already seeing dramatic improvements across their DevOps pipelines:

Harness AI isn’t an add-on or a side tool. It’s woven directly into the Harness Platform, designed to support every stage of software delivery, from build to deploy to optimize.
Just smarter workflows, fewer manual steps, and a faster path from idea to impact.
AI shouldn’t add complexity. It should eliminate it.
These new capabilities are available now. Whether you’re onboarding new teams, enforcing security policies, or resolving pipeline issues faster, Harness AI is here to reduce toil and accelerate your path to production.
Harness AI is available for all Harness customers. Read the documentation here. Get started today!


As a platform engineer, your goal is to create reliable "golden paths" that make it easy for development teams to do the right thing. Harness Templates are a cornerstone of this effort, allowing you to codify best practices for building and deploying software.
But this model has a classic failure point. What happens when a team’s needs diverge, even slightly, from your standardized template?
It’s a depressingly common story. A team needs to use a specific security scanner or a niche testing tool not included in your golden path. Faced with a rigid template, they have one option: detach. They copy the pipeline YAML, make their changes, and move on.
While this solves their immediate need, it quietly dismantles your governance strategy. The moment they detach, they stop receiving critical updates you make to the central template. The result is pipeline sprawl - a collection of slightly different pipelines that are difficult to manage, secure, and improve over time. Your golden path becomes a dead end.
We see the end result of this over and over again. Companies come to us after years of this with hundreds, if not thousands, of pipeline configurations, and it’s completely unmanageable. After moving to Harness Continuous Delivery and adopting proper templating, they reduce their management overhead dramatically; Ancestry.com reported an 80-to-1 improvement.
Templates are great, but will one point of variance derail adoption? It shouldn’t.
This "all or nothing" scenario forces a false choice between standardization and developer autonomy. The solution is to find a point of compromise. In Harness, this is the insert block.
An insert block is a designated location within a template where a consumer can inject their own steps or stages. It allows you to keep the 90% of the template that is truly standard locked down, while providing controlled flexibility for the 10% that requires variance.
You define the core process, and teams fill in the blanks where it makes sense. We aren’t weakening standards; we are making them practical enough to be adopted universally.
Let’s apply this to a real-world scenario. Your platform policy requires a security scan, but different teams use different tools.

The outcome is efficient. Your governance objective, ensuring a security scan always runs, is met. The development team retains autonomy over its choice of tooling. Most importantly, they remain linked to the template, automatically inheriting all future improvements you make.
This model works for any point of known variance in your software delivery lifecycle:
This flexibility is a powerful tool, but it should be used with intention. Full standardization is still the most efficient model if you can achieve it. The goal isn't to create countless variations.
The insert block is a strategic solution for managing known, high-variance points in your delivery process. It’s for those specific scenarios where being too rigid causes more problems than it solves. It’s a pragmatic compromise to prevent fragmentation.
Ultimately, platform teams succeed when their tools are willingly adopted. By offering controlled points of flexibility, you eliminate the primary reason teams abandon your golden paths. You can build templates that are both robust and practical, achieving a state of flexible governance that actually works.
If you’re tired of watching your templates fragment, it’s time to explore a more flexible approach.
You can learn more about configuring Insert Blocks in the Harness documentation here.


At Harness, we’ve always believed software delivery should be intelligent, efficient, and secure. That’s why AI has been part of our DNA since day one. We first brought AI into software delivery when we introduced Continuous Verification in 2017. That same vision is behind our latest innovation: Harness MCP Server.
This isn’t just another integration tool. It’s a new way for AI agents – whether it’s Claude Desktop, Windsurf, Cursor, or something you’ve built yourself – to securely connect with your Harness workflows. No brittle glue code. No custom APIs. Just smart, consistent connections between your agents and the tools that power your software delivery lifecycle.
Let’s break it down. The Harness MCP Server runs in your environment and acts as a translator between your AI tools and our platform. It’s a lightweight local gateway that implements the Model Context Protocol (MCP) – an open standard designed to help AI agents securely access external developer services through a consistent, structured interface.
Our customers have repeatedly told us they’re excited to start getting real value from their AI investments, but having secure access to their own data remains a major roadblock. They want to build their own agents, but lack a simple, reliable way to connect them to workflows. Our MCP Server unlocks exactly that.
“Our customers are building agents, but they don’t need another plugin – they need AI with context. That means access to delivery data from pipelines, environments, and logs. The Harness MCP Server gives them a clean, reliable way to pull that data into their own tools, without fragile integrations. It’s a simple protocol, but it unlocks a lot. And it reflects a broader shift – from AI as a standalone layer to AI as part of the software delivery workflow. We believe that shift is foundational to where DevOps is headed."
—Sanjay Nagraj, SVP Engineering at Harness
Our MCP Server makes it easy for your AI agents to do more than just observe. They can take action! By exposing a growing set of structured, secure tool sets—including pipelines, repositories, logs, and artifact registries—MCP gives agents consistent access to the same systems your teams already use to build, test, and deploy software. MCP turns Harness into a plug-and-play backend for your AI. Here’s how it works.

Adapters and glue code slow teams down. But with our MCP server, you don’t need to worry about juggling different adapters or writing custom logic for each Harness service. A single standardized protocol gives agents access to pipelines, pull requests, logs, repositories, artifact registries, and more – all through one consistent interface.
Let’s say a customer success engineer needs to check whether a recent release went out for a specific client. Using their AI agent, the MCP Server will fetch the release data instantly, so they don’t need to waste time pinging their dev team or digging through dashboards.
We didn’t just build the MCP Server for our own platform – we built it for yours. The same MCP server that powers Harness’ AI agents is available to our customers, making it easy to reuse the same patterns across multiple AI agents and environments. That consistency reduces drift, simplifies maintenance, and cuts down overhead.
A platform engineer, for example, can build a Slack bot that alerts teams to failed builds and surfaces logs. With MCP, it connects in minutes – no custom APIs, no complex auth flows – just the same server we use internally.
Innovation never stands still – but your code shouldn’t break just to keep up with it. With our MCP Server, you can add new tool sets and endpoints without changing your agent code. Simply update your server. And because it's open and forkable, teams can extend functionality to support additional services, internal tools, or custom workflows.
Consider a development team integrating a data source from a product they rely on into VS Code to suggest which pipeline to trigger based on file changes. As their processes evolve, they can keep expanding the agent’s capabilities without ever touching the core agent logic.
Security teams need confidence that AI integrations won’t compromise their standards. That’s why our MCP Server is built with enterprise-grade controls from the start. It uses JSON-RPC 2.0 for structured, efficient communication and integrates with Harness’s granular RBAC model so that teams can manage access with precision and prevent unauthorized access. API keys are handled directly in the platform, and no sensitive data is ever sent to the LLM. It’s built to reflect the same security posture customers already trust in Harness.
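Under the hood, that communication is ordinary JSON-RPC 2.0. As a rough illustration (the method name follows the MCP convention for tool calls, but the tool name and arguments here are placeholders, not the Harness MCP Server’s actual tool set), a request from an agent might be framed like this:

```python
# Illustrative JSON-RPC 2.0 framing for an MCP tool call.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "list_pipelines",                 # hypothetical tool name
        "arguments": {"project": "my-project"},   # hypothetical arguments
    },
}
```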
Take a security team that needs to restrict an agent’s access. With MCP, they can configure the server so the agent is limited to deployment logs – giving support teams the insights they need without opening up the broader system.
AI is changing how software gets built – but today’s agents are only as helpful as the systems they can safely access. For DevOps and platform teams, this marks a shift from siloed automation to coordinated, AI-driven execution. Instead of building and maintaining custom connectors, teams can now focus on enabling agents to interact with their delivery stack safely, consistently, and at scale.
With the Harness MCP Server, we’re giving developers what they’ve asked for: a more innovative way to connect AI to the software delivery process, without compromising security or speed.
Curious how it all works? Watch our walkthrough video to see the MCP Server in action and learn how AI agents can securely interact with your Harness workflows.
🧠 Visit the Harness Developer Hub to get started.
Harness Cloud is a fully managed Continuous Integration (CI) platform that allows teams to run builds on Harness-managed virtual machines (VMs) pre-configured with tools, packages, and settings typically used in CI pipelines. In this blog, we'll dive into the four core pillars of Harness Cloud: Speed, Governance, Reliability, and Security. By the end of this post, you'll understand how Harness Cloud streamlines your CI process, saves time, ensures better governance, and provides reliable, secure builds for your development teams.
Harness Cloud delivers blazing-fast builds on multiple platforms, including Linux, macOS, Windows, and mobile operating systems. With Harness Cloud, your builds run in isolation on pre-configured VMs managed by Harness. This means you don’t have to waste time setting up or maintaining your infrastructure. Harness handles the heavy lifting, allowing you to focus on writing code instead of waiting for builds to complete.
The speed of your CI pipeline is crucial for agile development, and Harness Cloud gives you just that—quick, efficient builds that scale according to your needs. With starter pipelines available for various programming languages, you can get up and running quickly without having to customize your environment.
One of the most critical aspects of any enterprise CI/CD process is governance. With Harness Cloud, you can rest assured that your builds are running in a controlled environment. Harness Cloud makes it easier to manage your build infrastructure with centralized configurations and a clear, auditable process. This improves visibility and reduces the complexity of managing your CI pipelines.
Harness also gives you access to the latest features as soon as they’re rolled out. This early access enables teams to stay ahead of the curve, trying out new functionality without worrying about maintaining the underlying infrastructure. By using Harness Cloud, you're ensuring that your team is always using the latest CI innovations.
Reliability is paramount when it comes to build systems. With Harness Cloud, you can trust that your builds run smoothly and consistently. Harness manages, maintains, and updates the virtual machines (VMs), so you don't have to worry about patching, system failures, or hardware-related issues. This hands-off approach reduces the risk of downtime and build interruptions, ensuring that your development process is as seamless as possible.
By using Harness-managed infrastructure, you gain the peace of mind that comes with a fully supported, reliable platform. Whether you're running a handful of builds or thousands, Harness ensures they’re executed with the same level of reliability and uptime.
Security is at the forefront of Harness Cloud. With Harness managing your build infrastructure, you don't need to worry about the complexities of securing your own build machines. Harness ensures that all the necessary security protocols are in place to protect your code and the environment in which it runs.
Harness Cloud's commitment to security includes achieving SLSA Level 3 compliance, which ensures the integrity of the software supply chain by generating and verifying provenance for build artifacts. This compliance is achieved through features like isolated build environments and strict access controls, ensuring each build runs in a secure, tamper-proof environment.
For details, read the blog An In-depth Look at Achieving SLSA Level-3 Compliance with Harness.
Harness Cloud also enables secure connectivity to on-prem services and tools, allowing teams to safely integrate with self-hosted artifact repositories, source control systems, and other critical infrastructure. By leveraging Secure Connect, Harness ensures that these connections are encrypted and controlled, eliminating the need to expose internal resources to the public internet. This provides a seamless and secure way to incorporate on-prem dependencies into your CI workflows without compromising security.
Harness Cloud makes it easy to run and scale your CI pipelines without the headache of managing infrastructure. By focusing on the four pillars—speed, governance, reliability, and security—Harness ensures that your development pipeline runs efficiently and securely.
Harness CI and Harness Cloud give you:
✅ Blazing-fast builds—8X faster than traditional CI solutions
✅ A unified platform—Run builds on any language, any OS, including mobile
✅ Native SCM—Harness Code Repository is free and comes packed with built-in governance & security
If you're ready to experience a fully managed, high-performance CI environment, give Harness Cloud a try today.
As software projects scale, build times often become a major bottleneck, especially when using tools like Bazel. Bazel is known for its speed and scalability, handling large codebases with ease. However, even the most optimized build tools can be slowed down by inefficient CI pipelines. In this blog, we’ll dive into how Bazel’s build capabilities can be taken to the next level with Harness CI. By leveraging features like Build Intelligence and caching, Harness CI helps maximize Bazel's performance, ensuring faster builds and a more efficient development cycle.
Harness CI integrates seamlessly with Bazel, taking full advantage of its strengths and enhancing performance. The best part? As a user, you don’t have to provide any additional configuration to leverage the build intelligence feature. Harness CI automatically configures the remote cache for your Bazel builds, optimizing the process from day one.
Harness CI’s Build Intelligence ensures that Bazel builds are as fast and efficient as possible. While Bazel has its own caching mechanisms, Harness CI takes this a step further by automatically configuring and optimizing the remote cache, reducing build times without any manual setup.
This automatic configuration means that you can benefit from faster, more efficient builds right away—without having to tweak cache settings or worry about how to handle build artifacts across multiple machines.
Harness CI seamlessly integrates with Bazel’s caching system, automatically handling the configuration of remote caches. So, when you run a build, Harness CI makes sure that any unchanged files are skipped, and only the necessary tasks are executed. If there are any changes, only those parts of the project are rebuilt, making the process significantly faster.
For example, when building the bazel-gazelle project, Harness CI ensures that any unchanged files are cached and reused in subsequent builds, reducing the need for unnecessary recompilation. All this happens automatically in the background without requiring any special configuration from the user.
We compared the performance of Bazel builds using Harness CI and GitHub Actions, and the results were clear: Harness CI, with its automatic configuration and optimized caching, delivered up to 4x faster builds than GitHub Actions. The automatic configuration of the remote cache made a significant difference, helping Bazel avoid redundant tasks and speeding up the build process.
Results:

Bazel is an excellent tool for large-scale builds, but it becomes even more powerful when combined with Harness CI and Harness Cloud. By automatically configuring remote caches and applying build intelligence, Harness CI ensures that your Bazel builds are as fast and efficient as possible, without requiring any additional configuration from you.
By combining other Harness CI intelligence features like Cache Intelligence, Docker Layer Caching, and Test Intelligence, you can speed up your Bazel projects by up to 8x. With the hyper-optimized build infrastructure, you can experience lightning-fast builds on Harness Cloud at reasonable cost. This seamless integration allows you to spend less time waiting for builds and more time focusing on delivering quality code.
If you're looking to speed up your Bazel builds, give Harness CI a try today and experience the difference!


As authentication complexity grows, have you ever wondered how effortlessly you can switch between different applications without remembering multiple passwords? All you need to do is click an option like "Sign in with <any social account>," and with just one click you are authorized and the application is ready for use.
Sounds simple, right? But behind the scenes, you're actually using a powerful authentication method that not only verifies your identity but also determines what actions you're allowed to perform within the application.
In this article, we will focus on one authentication method in particular: OpenID Connect (OIDC). We will discuss what it is, how it works, how companies leverage it for security and user experience, and how Harness uses OIDC to ensure secure software deployments.
OIDC is an authentication protocol built on top of OAuth 2.0. It provides a standardized way to verify user identities for an application while ensuring both security and convenience.
It acts as a digital identity card for the internet, enabling secure access to different applications without the need to manage multiple passwords. It simplifies the authentication process by adding an identity layer to OAuth 2.0.
OIDC ensures that only legitimate users can log into an application, while OAuth 2.0 determines what actions and permissions the user is allowed to perform. One of the benefits of OIDC is its short-lived tokens, which reduce the blast radius if a credential is ever exposed.
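To make the identity layer concrete, here is a minimal sketch of what an application sees inside an OIDC ID token. This is for illustration only; real code must verify the token’s signature against the provider’s published keys before trusting any claim.

```python
import base64
import json

def peek_claims(id_token: str) -> dict:
    # An ID token is a JWT: header.payload.signature. Decode the payload only.
    payload_b64 = id_token.split('.')[1]
    payload_b64 += '=' * (-len(payload_b64) % 4)   # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Typical claims: 'iss' (who issued the token), 'sub' (who the user is),
# 'aud' (which application it is for), and 'exp' (when it expires).
```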
To understand the functionality, let’s begin by dissecting the five essential components and examining how they interact cohesively.
OIDC offers users a seamless experience while guaranteeing secure access to various applications. The following key stages define the typical user journey with OIDC:
Using a similar workflow, developers can create more intuitive and secure applications that enhance user satisfaction while maintaining robust security measures.

| Feature | Traditional Auth | OAuth 2.0 | OpenID Connect (OIDC) |
|---|---|---|---|
| Main Purpose | Basic login | Granting access permissions | Verifying identity + permissions |
| Real-World Analogy | Separate keys for each door | Hotel key card | ID card + key card |
| User Experience | Enter username & password for each site | "Allow this app to access your X" | "Sign in with Google/Facebook" |
| What You Get | Logged-in status only | Access token (permission slip) | ID token (identity proof) + Access token |
| Stores Passwords? | Yes, each app stores passwords | No | No |
| Best Used For | Simple, standalone apps | Allowing apps to access data on behalf of users | Modern apps needing identity verification |
| When to Use | Offline systems, regulatory needs, full login control | API access, authorization without identity verification | Single Sign-On (SSO), modern web & mobile apps |
Harness provides multiple options for using OpenID Connect (OIDC); however, in this section, we will focus on executing a pipeline with OIDC. We also encourage you to explore Single Sign-On (SSO) with OIDC, as we plan to share configuration details in future blog posts.
Secure your pipelines by configuring OIDC to ensure deployments run in a specific authorized environment. This section will guide you through configuring OIDC in GCP, requiring setup in both Harness and the cloud platform for secure pipeline execution.
To begin the configuration process, we will need the following.
Replace <YOUR_ACCOUNT_ID> with the Harness account ID you obtained in step 1.




In today's digital world, security and user convenience are more important than ever. OpenID Connect (OIDC) provides a powerful solution that simplifies authentication while ensuring secure access to applications. With OIDC, users can log in seamlessly without managing multiple passwords, and organizations can strengthen security and streamline user management.
As we’ve seen, OIDC not only verifies user identities but also manages their permissions, making it essential for modern applications. When integrated with platforms like Harness, OIDC enhances security in software deployments, allowing teams to focus on innovation instead of authentication challenges.
As digital identity continues to evolve, adopting OIDC will be key for businesses looking to provide a secure and user-friendly experience. Implementing OIDC ensures organizations are prepared for the demands of today’s digital landscape, paving the way for a more secure and efficient future.
When building complex software projects, slow build times can become a major bottleneck, impacting developer productivity and resource efficiency. The goal of this blog post is to demonstrate that Harness CI is fast for building large-scale projects. Using benchmarks and a sample Gradle project, we’ll showcase how Harness CI optimizes build performance with Build Intelligence.
Gradle is a powerful build automation tool widely used across different programming languages and platforms. Whether you're building Java, Kotlin, or Android projects—or even large-scale distributed systems—Gradle’s flexibility and efficiency make it a go-to choice for developers. However, as projects grow in complexity, build times can become a major bottleneck, leading to wasted resources and slower development cycles.
This is where Harness CI comes in, providing Build Intelligence, Test Intelligence, and Cache Intelligence to optimize and accelerate Gradle builds. In this blog, we’ll explore how Build Intelligence works, using the Spring Framework as an example of a large Java project that benefits from Harness CI’s optimizations.
Gradle is designed to handle complex dependency management and incremental builds, but traditional CI pipelines often don’t take full advantage of its optimizations. Common issues include:
Harness CI Intelligence helps solve these issues by reusing build artifacts, running only relevant tests, and caching dependencies efficiently. Let’s dive into how these features enhance Gradle builds.
Build Intelligence in Harness CI speeds up Gradle builds by caching outputs of previous runs and retrieving them when inputs haven’t changed. This avoids redundant work, significantly reducing build times.
Harness CI integrates with Gradle’s caching mechanism to store and reuse outputs from cacheable tasks, such as compiling source code and generating artifacts.
Example with Spring Framework:
- Faster Builds: Reduces build times by reusing cached outputs.
- Efficient Resource Usage: Minimizes CPU and memory usage by skipping redundant tasks.
- Seamless Integration: Works out-of-the-box with Gradle and Bazel.
Here’s a sample pipeline with build intelligence in action:
To demonstrate the performance improvements provided by Harness CI’s Build Intelligence, we benchmarked build times for a Spring Framework Gradle build across different CI platforms. The test involved multiple builds with incremental changes to measure caching efficiency and execution speed. Here’s one of those benchmarks when compared against GitHub Actions.

Gradle’s powerful build system enables efficient dependency management and incremental builds, but traditional CI pipelines often fail to take full advantage of these optimizations. Harness CI bridges this gap with Build Intelligence, Test Intelligence, and Cache Intelligence, significantly reducing build times and improving efficiency.
From our benchmarks, Harness CI not only accelerates Gradle builds but also optimizes resource usage, making it an ideal choice for teams working on large-scale projects like Spring Framework. By integrating Harness CI with Gradle, developers can spend less time waiting for builds and more time delivering high-quality code.
If you're looking to speed up your Gradle builds, give Harness CI a try today and experience the difference!