
KubeCon 2025 Atlanta is here! For the next four days, Atlanta is the undisputed center of the cloud native universe. The buzz is palpable, but this year, one question seems to be hanging over every keynote, session, and hallway track: AI.
We've all seen the impressive demos. But as developers and engineers, we have to ask the hard questions. Can AI actually help us ship code better? Can it make our complex CI/CD pipelines safer, faster, and more intelligent? Or is it just another layer of hype we have to manage?
At Harness, we believe AI is the key to solving software delivery's biggest challenges. And we're not just talking about it—we're here to show you the code with Harness AI, purpose-built to bring intelligence and automation to every step of the delivery process.
We are thrilled to team up with Google Cloud to present a special lightning talk on Agentic AI and its practical use in CI/CD. This is where the hype stops and the engineering begins.
Join our Director of Product Marketing, Chinmay Gaikwad, for this deep-dive session.

Chinmay will be on hand to demonstrate how Agentic AI is moving from a concept to a practical, powerful tool for building and securing enterprise-grade pipelines. Be sure to stop by, ask questions, and get personalized guidance.
AI is our big theme, but we're everywhere this week, focusing on the core problems you face. Here's where to find us.
1. Main Event: The Harness Home Base (Nov 11-13)
This is our command center. Come by Booth #522 to see live demos of our Agentic AI in action. You can also talk to our engineers about the full Harness platform, including how we integrate with OpenTofu, empower platform engineering teams, and help you get a handle on cloud costs. Plus, we have the best swag at the show.
2. Co-located Event: Platform Engineering Day (Nov 10)
As a Platinum Sponsor, we're kicking off the week with a deep focus on building Internal Developer Platforms (IDPs). Stop by Booth #Z45 to chat about building "golden paths" that developers will actually love and how to prove the value of your platform.
3. Co-located Event: OpenTofu Day (Nov 10)
We are incredibly proud to be a Gold Sponsor of OpenTofu Day. As one of the top contributors to the OpenTofu project, our engineers are in the trenches helping shape the future of open-source Infrastructure as Code.
The momentum is undeniable:
Our engineers have contributed major features like the AzureRM backend rewrite and the new Azure Key Provider, and we serve on the Technical Steering Committee. Come find us in Room B203 to meet the team and talk all things IaC.
Can't wait? Download the digital copy of The Practical Guide to Modernizing Infrastructure Delivery and AI-Native Software Delivery right now.
KubeCon 2025 Atlanta is about what's next. This year, "what's next" is practical AI, smarter platforms, and open collaboration. We're at the center of all three.
See you on the floor!

Harness GitOps builds on the Argo CD model by packaging a Harness GitOps Agent with Argo CD components and integrating them into the Harness platform. The result is a GitOps architecture that preserves the Argo reconciliation loop while adding visibility, audit, and control through Harness SaaS.
At the center of the architecture is the Argo CD cluster, sometimes called the control cluster. This is where both the Harness GitOps Agent and Argo CD’s core components run:
The control cluster can be deployed in two models:
The Argo CD Application Controller applies manifests to one or more target clusters by talking to their Kubernetes API servers.
Developers push declarative manifests (YAML, Helm, or Kustomize) into a Git repository. The GitOps Agent and repo-server fetch these manifests. The Application Controller continuously reconciles the cluster state against the desired state. Importantly, clusters never push changes back into Git. The repository remains the single source of truth. Harness configuration, including pipeline definitions, can also be stored in Git, providing a consistent Git-based experience.
While the GitOps loop runs entirely in the control cluster and target clusters, the GitOps Agent makes outbound-only connections to Harness SaaS.
Harness SaaS provides:
All sensitive configuration data, such as repository credentials, certificates, and cluster secrets, remains in the GitOps Agent’s namespace as Kubernetes Secrets and ConfigMaps. Harness SaaS only stores a metadata snapshot of the GitOps setup (Applications, ApplicationSets, Clusters, Repositories, etc.), never the sensitive data itself. Unlike some SaaS-first approaches, Harness never requires secrets to leave your cluster, and all credentials and certificates remain confined to your Kubernetes namespace.

In short: a developer commits, Argo fetches and reconciles, and the GitOps Agent reports status back to Harness SaaS for governance and visibility.
This is the pure GitOps architecture: Git defines the desired state, Argo CD enforces it, and Harness provides governance and observability without altering the core reconciliation model.

Most organizations operate more than one Kubernetes cluster, often spread across multiple environments and regions. In this model, each region has its own Argo CD control cluster. The control cluster runs the Harness GitOps Agent alongside core Argo CD components and reconciles the desired state into one or more target clusters such as dev, QA, or prod.
The flow is straightforward:
Harness SaaS aggregates data from all control clusters, giving teams a single view and a single place to drive rollouts:
This setup preserves the familiar Argo CD reconciliation loop inside each control cluster while extending it with Harness’ governance, observability, and promotion pipelines across regions.
Note: Some enterprises run multiple Argo CD control clusters per region for scale or isolation. Harness SaaS can aggregate across any number of clusters, whether you have two or two hundred.
Harness GitOps lets you scale from single clusters to a fleet-wide GitOps model with unified dashboards, governance, and pipelines that promote with confidence and roll back everything when needed. Ready to see it in your stack? Get started with Harness GitOps and bring enterprise-grade control to your Argo CD deployments.

DevOps governance is a crucial aspect of modern software delivery. To ensure that software releases are secure and compliant, it is pivotal to embed governance best practices within your software delivery platform. At Harness, the feature that powers this capability is Open Policy Agent (OPA), a policy engine that enables fine-grained access control over your Harness entities.
In this blog post, we’ll explain how to use OPA to ensure that your DevOps team follows best practices when releasing software to production. More specifically, we’ll explain how to write these policies in Harness.

Every time an API request comes to Harness, the service sends the API request to the policy agent. The policy agent uses three things to evaluate whether the request can be made: the contents of the request, the target entity of the request, and the policy set(s) on that entity. After evaluating the policies in those policy sets, the agent simply outputs a JSON object.
If the API request should be denied, the JSON object looks like:
{
  "deny": [<reason>]
}
And if the API request should be allowed, the JSON object is empty:
{}
Let’s now dive a bit deeper to look into how to actually write these policies.
OPA policies are written using Rego, a declarative language that allows you to reason about information in structured documents. Let’s take an example of a possible practice that you’d want to enforce within your Continuous Delivery pipelines. Let’s say you don’t want to be making HTTP calls to outside services within your deployment environment and want to enforce the practice: “Every pipeline’s Deployment Stage shouldn’t have an HTTP step”
Now, let’s look at the policy below that enforces this rule:
package pipeline

deny[msg] {
    # Check for a deployment stage ...
    input.pipeline.stages[i].stage.type == "Deployment"
    # For that deployment stage, check if there's an Http step ...
    input.pipeline.stages[i].stage.spec.execution.steps[j].step.type == "Http"
    # Show a human-friendly error message
    msg := "Deployment pipeline should not have HTTP step"
}
First you’ll notice some interesting things about this policy language. The first line declares that the policy is part of a package (“package pipeline”) and then the next line:
deny[msg] {
is declaring a “deny block,” which tells the agent that if all the statements inside the block are true, the deny result should be populated with the message variable.
Then you’ll notice that the next line checks to see if there’s a deployment stage:
input.pipeline.stages[i].stage.type == "Deployment"
You may be thinking, there’s a variable “i” that was never declared! We’ll get to that later in the blog but for now just know that what OPA will do here is try to see if there’s any number i for which this statement is true. If there is, it will assign that number to “i” and move on to the next line,
input.pipeline.stages[i].stage.spec.execution.steps[j].step.type == "Http"
Just like above, here OPA will now look for any j for which the statement above is true. If there are values of i and j for which these lines are true, then OPA will finally move on to the last line:
msg := "Deployment pipeline should not have HTTP step"
which sets the message variable to that string. So for the following input:
{
  "pipeline": {
    "name": "abhijit-test-2",
    "identifier": "abhijittest2",
    "tags": {},
    "projectIdentifier": "abhijittestproject",
    "orgIdentifier": "default",
    "stages": [
      {
        "stage": {
          "name": "my-deployment",
          "identifier": "my-deployment",
          "description": "",
          "type": "Deployment",
          "spec": {
            "execution": {
              "steps": [
                {
                  "step": {
                    "name": "http",
                    "identifier": "http",
                    "type": "Http",
                    …
}
The output will be:
{
  "deny": ["Deployment pipeline should not have HTTP step"]
}
Ok, so that might have made some sense at a high level, but let’s really get a bit deeper into how to write these policies. Let’s look into how Rego works under the hood and get you to a point where you can write Rego policies for your use cases in Harness.
You may have noticed that throughout this blog we’ve been referring to Rego as a “declarative language,” but what exactly does that mean? Most programming languages are “imperative,” meaning each line of code explicitly states what needs to be done. In a declarative language, you describe the conditions you care about, and at run time the engine walks through the data to find matches. With Rego, you define a set of conditions, and OPA searches the input data to see whether those conditions are satisfied. Let’s see what this means with a simple example.
Imagine you have the following Rego policy:
x if input.user == "alex"
y if input.tokens > 100
and the engine gets the following input:
{
  "user": "alex",
  "tokens": 200
}
The Policy engine will take the input and evaluate the policy line by line. Since both statements are true for the input shown, the policy engine will output:
{
  "x": true,
  "y": true
}
Now, both of these were simple rules that could be defined in one line each. But you often want to do something a bit more complex. In fact, most rules in Rego are written using the following syntax:
variable_name := value {
    condition 1
    condition 2
    ...
}
The way to read this is, the variable is assigned the value if all the conditions within the block are met.
So let’s go back to the simple statements we had above. Let’s say we want our policy engine to allow a request only if the user is alex and if they have more than 100 tokens left. Thus our policy would look like:
allow := true {
    input.user == "alex"
    input.tokens > 100
}
It would return true for the following input request:
{
  "user": "alex",
  "tokens": 200
}
But false for either of the following
{
  "user": "bob",
  "tokens": 200
}
{
  "user": "alex",
  "tokens": 50
}
Now let’s look at something a bit more complicated. Let’s say you want to write a policy that denies a “delete” action unless the user has the “admin” role. This is what the policy would look like (note: this policy is for illustrative purposes only and will not work in Harness):
deny {
    input.action == "delete"
    not user_is_admin
}

user_is_admin {
    input.role == "admin"
}
So the first line will match only if the input is a delete action. The second line will then evaluate the “user_is_admin” rule which checks to see if the role field is “admin” and if not, the deny will get triggered. So for the following input:
{
  "action": "delete",
  "role": "non-admin"
}
The policy agent will return:
{"deny": true}
because the role was not "admin". But for the following input:
{
  "action": "delete",
  "role": "admin"
}
The policy agent will return:
{}
So far we’ve only seen instances of a rego policy taking in input and checking some fields within that input. Let’s see how variables, sets, and arrays are defined. Let’s say you only want to allow a code owner to trigger a pipeline. If that’s the case then the following policy will do the trick (note: this policy is for illustrative purposes only and will not work in Harness):
code_owners = {"albert", "beth", "claire"}

deny[msg] {
    triggered_by = input.triggered_by
    not code_owners[triggered_by]
    msg := "Triggered user is not permitted to run publish CI"
}
Here, the first line defines the code_owners variable as a set with three names. We then enter the deny block. Remember, for the deny block to evaluate to true, all three lines within the block need to evaluate to true. The first line of the block assigns input.triggered_by (the user who triggered the pipeline) to the triggered_by variable. The next line
not code_owners[triggered_by]
checks that the code_owners set does not contain the triggered_by value. Finally, if that line evaluates to true, the last line is run, where msg is set and the deny result is established.
Now let’s look at an example of a policy that contains an array. Let’s say you want to ensure that the last stage of every Harness pipeline is an Approval stage. The policy below will ensure that’s the case (this policy will work in Harness):
package pipeline

deny[msg] {
    arr_len = count(input.pipeline.stages)
    not input.pipeline.stages[arr_len-1].stage.type == "Approval"
    msg := "Last stage must be an approval stage"
}
The first line assigns the length of the stages array to the variable "arr_len", and the next line checks that the last stage in the pipeline is an Approval stage.
Ok, let’s look at another slightly more complicated policy that’ll work in Harness. Let’s say you want to write a policy: “For all pipelines where there’s a Deployment stage, it must be immediately followed by an Approval stage.”
deny[msg] {
    input.pipeline.stages[i].stage.type == "Deployment"
    not input.pipeline.stages[i + 1].stage.type == "Approval"
    msg := "Every Deployment stage must be immediately followed by an Approval stage"
}
The first line matches all values of ‘i’ for which the stage type is ‘Deployment’. The next line then checks whether there’s any value of i for which the stage at i+1 is not an ‘Approval’ stage. If for any i those two statements are true, then the deny block gets evaluated to true.
Finally, Rego also supports objects and dictionaries. An object is an unordered key-value collection. The key can be of any type and so can the value.
user_albert = {
"admin":true, "employee_id": 12, "state": "Texas"
}
To access any of this object’s attributes you simply use the “.” notation (i.e. user_albert.state).
You can also create dictionaries as follows:
users_dictionary = {
"albert": user_albert,
"bob": user_bob ...
}
And access each entry using the following syntax: users_dictionary["albert"]
Of course in order to be able to write these policies correctly you need to know the types of objects that you can apply them on and the schema of these objects:
A simple way to figure out how to refer to deeply nested attributes within an object’s schema is shown in the gif below.

Visit our documentation to get started with OPA today!



Modern unit testing in CI/CD can help teams avoid slow builds by using smart strategies. Choosing the right tests, running them in parallel, and using intelligent caching all help teams get faster feedback while keeping code quality high.
Platforms like Harness CI use AI-powered test intelligence to reduce test cycles by up to 80%, showing what’s possible with the right tools. This guide shares practical ways to speed up builds and improve code quality, from basic ideas to advanced techniques that also lower costs.
Knowing what counts as a unit test is key to building software delivery pipelines that work.
A unit test looks at a single part of your code, such as a function, class method, or a small group of related components. The main point is to test one behavior at a time. Unit tests differ from integration tests because they exercise your code’s logic in isolation, which makes it much easier to pinpoint the cause when a test fails.
Unit tests should only check code that you wrote and not things like databases, file systems, or network calls. This separation makes tests quick and dependable. Tests that don't rely on outside services run in milliseconds and give the same results no matter where they are run, like on your laptop or in a CI pipeline.
Unit tests are one of the most important parts of continuous integration in CI/CD pipelines because they surface problems immediately after code changes. Because they are so fast, developers can run them many times a minute while they are coding. This keeps feedback loops very tight, which makes bugs easier to find and stops them from reaching later stages of the pipeline.
Teams that run full test suites on every commit catch problems early by focusing on three things: making tests fast, choosing the right tests, and keeping tests organized. Good unit testing helps developers stay productive and keeps builds running quickly.
Deterministic Tests for Every Commit
Unit tests should finish in seconds, not minutes, so that they can be quickly checked. Google's engineering practices say that tests need to be "fast and reliable to give engineers immediate feedback on whether a change has broken expected behavior." To keep tests from being affected by outside factors, use mocks, stubs, and in-memory databases. Keep commit builds to less than ten minutes, and unit tests should be the basis of this quick feedback loop.
As projects get bigger, running all tests on every commit can slow teams down. Test Impact Analysis looks at coverage data to figure out which tests really check the code that has been changed. AI-powered test selection chooses the right tests for you, so you don't have to guess or sort them by hand.
To get the most out of your infrastructure, use selective execution and run tests at the same time. Divide test suites into equal-sized groups and run them on different machines simultaneously. Smart caching of dependencies, build files, and test results helps you avoid doing the same work over and over. When used together, these methods cut down on build time a lot while keeping coverage high.
Standardized Organization for Scale
Using consistent names, tags, and organization for tests helps teams track performance and keep quality high as they grow. Set clear rules for test types (like unit, integration, or smoke) and use names that show what each test checks. Analytics dashboards can spot flaky tests, slow tests, and common failures. This helps teams improve test suites and keep things running smoothly without slowing down developers.
A good unit test uses the Arrange-Act-Assert pattern. For example, you might test a function that calculates order totals with discounts:
def test_apply_discount_to_order_total():
    # Arrange: Set up test data
    order = Order(items=[Item(price=100), Item(price=50)])
    discount = PercentageDiscount(10)
    # Act: Execute the function under test
    final_total = order.apply_discount(discount)
    # Assert: Verify expected outcome
    assert final_total == 135  # 150 - 10% discount
In the Arrange phase, you set up the objects and data you need. In the Act phase, you call the method you want to test. In the Assert phase, you check if the result is what you expected.
Testing Edge Cases
Real-world code needs to handle more than just the usual cases. Your tests should also check edge cases and errors:
import pytest

def test_apply_discount_with_empty_cart_returns_zero():
    order = Order(items=[])
    discount = PercentageDiscount(10)
    assert order.apply_discount(discount) == 0

def test_apply_discount_rejects_negative_percentage():
    order = Order(items=[Item(price=100)])
    with pytest.raises(ValueError):
        PercentageDiscount(-5)
Notice the naming style: test_apply_discount_rejects_negative_percentage clearly shows what’s being tested and what should happen. If this test fails in your CI pipeline, you’ll know right away what went wrong, without searching through logs.
When teams want faster builds and fewer late-stage bugs, the benefits of unit testing are clear. Good unit tests help speed up development and keep quality high.
When you use smart test execution in modern CI/CD pipelines, these benefits get even bigger.
Disadvantages of Unit Testing: Recognizing the Trade-Offs
Unit testing is valuable, but knowing its limits helps teams choose the right testing strategies. These downsides matter most when you’re trying to make CI/CD pipelines faster and more cost-effective.
Research shows that automatically generated tests can be harder to understand and maintain. Studies also show that statement coverage doesn’t always mean better bug detection.
Industry surveys show that many organizations have trouble with slow test execution and unclear ROI for unit testing. Smart teams solve these problems by choosing the right tests, using smart caching, and working with modern CI platforms that make testing faster and more reliable.
Developers use unit tests in three main ways that affect build speed and code quality. These practices turn testing into a tool that catches problems early and saves time on debugging.
Developers write unit tests before they start coding, using test-driven development (TDD) to improve the design and cut down on debugging. According to research, TDD finds 84% of new bugs, while traditional testing only finds 62%. This method gives you feedback right away, so failing tests help you decide what to do next.
Unit tests are like automated guards that catch bugs when code changes. Developers write tests to recreate bugs that have been reported, and then they confirm the fixes work by running those tests again afterward. Automated tools can now generate test cases from issue reports, and they succeed 30.4% of the time at producing a test that fails for the exact problem that was reported. To stop bugs that have already been fixed from coming back, teams run these regression tests in CI pipelines.
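As a rough sketch of that practice, here is what a small pytest-style regression test could look like. The parse_env_name function and the bug it guards against are hypothetical, defined inline only so the example is self-contained and runnable; in a real codebase you would import the production code instead.

def parse_env_name(raw: str) -> str:
    """Normalize an environment name (the fixed behavior trims whitespace)."""
    return raw.strip().lower()

def test_regression_env_name_with_trailing_space_matches():
    # Reproduces a hypothetical bug report: "prod " with a trailing space was
    # treated as a different environment than "prod". The fix normalizes the
    # input, and keeping this test in CI stops the bug from coming back.
    assert parse_env_name("prod ") == "prod"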
Good developer testing doesn't look at infrastructure or glue code; it looks at business logic, edge cases, and public interfaces. Testing public methods and properties is best; private details that change often should be left out. Test doubles help developers keep business logic separate from systems outside of their control, which makes tests more reliable. Integration and system tests are better for checking how parts work together, especially when it comes to things like database connections and full workflows.
Slow, unreliable tests can slow down CI and hurt productivity, while also raising costs. The following proven strategies help teams check code quickly and cut both build times and cloud expenses.
Choosing between manual and automated unit testing directly affects how fast and reliable your pipeline is.
Manual Unit Testing: Flexibility with Limitations
Manual unit testing means developers write and run tests by hand, usually early in development or when checking tricky edge cases that need human judgment. This works for old systems where automation is hard or when you need to understand complex behavior. But manual testing can’t be repeated easily and doesn’t scale well as projects grow.
Automated Unit Testing: Speed and Consistency at Scale
Automated testing transforms test execution into fast, repeatable processes that integrate seamlessly with modern development workflows. Modern platforms leverage AI-powered optimization to run only relevant tests, cutting cycle times significantly while maintaining comprehensive coverage.
Why High-Velocity Teams Prioritize Automation
Fast-moving teams use automated unit testing to keep up speed and quality. Manual testing is still useful for exploring and handling complex cases, but automation handles the repetitive checks that make deployments reliable and regular.
Difference Between Unit Testing and Other Types of Testing
Knowing the difference between unit, integration, and other test types helps teams build faster and more reliable CI/CD pipelines. Each type has its own purpose and trade-offs in speed, cost, and confidence.
Unit Tests: Fast and Isolated Validation
Unit tests are the most important part of your testing plan. They test single functions, methods, or classes without using any outside systems. You can run thousands of unit tests in just a few minutes on a good machine. This keeps you from having problems with databases or networks and gives you the quickest feedback in your pipeline.
Integration Tests: Validating Component Interactions
Integration testing makes sure that the different parts of your system work together. There are two main types of tests: narrow tests that use test doubles to check specific interactions (like testing an API client with a mock service) and broad tests that use real services (like checking your payment flow with real payment processors). Integration tests use real infrastructure to find problems that unit tests might miss.
End-to-End Tests: Complete User Journey Validation
The top of the testing pyramid is end-to-end tests. They mimic complete user journeys through your app. These tests give the most confidence that the whole system works, but they take a long time to run and are hard to debug. A unit test can find a bug in seconds, while an end-to-end test may take days to surface the same bug, and the tests themselves can be brittle.
The Test Pyramid: Balancing Speed and Coverage
The best testing strategy uses a pyramid: many small, fast unit tests at the bottom, some integration tests in the middle, and just a few end-to-end tests at the top.
Modern development teams use a unit testing workflow that balances speed and quality. Knowing this process helps teams spot slow spots and find ways to speed up builds while keeping code reliable.
Developers write code and run unit tests on their own machines to find bugs early, then push the code to version control so that CI pipelines can take over. This step-by-step process helps developers stay productive by catching problems early, when they are easiest to fix.
Once code is in the pipeline, automation tools run unit tests on every commit and give feedback right away. If a test fails, the pipeline stops deployment and lets developers know right away. This automation stops bad code from getting into production. Research shows this method can cut critical defects by 40% and speed up deployments.
Modern CI platforms use Test Intelligence to only run the tests that are affected by code changes in order to speed up this process. Parallel testing runs test groups in different environments at the same time. Smart caching saves dependencies and build files so you don't have to do the same work over and over. These steps can help keep coverage high while lowering the cost of infrastructure.
Teams analyze test results through dashboards that track failure rates, execution times, and coverage trends. Analytics platforms surface patterns like flaky tests or slow-running suites that need attention. This data drives decisions about test prioritization, infrastructure scaling, and process improvements. Regular analysis ensures the unit testing approach continues to deliver value as codebases grow and evolve.
Using the right unit testing techniques can turn unreliable tests into a reliable way to speed up development. These proven methods help teams trust their code and keep CI pipelines running smoothly:
These methods work together to build test suites that catch real bugs and stay easy to maintain as your codebase grows.
As we've discussed with CI/CD workflows, the first step to good unit testing is isolation. This means testing your code without depending on outside systems that might be slow or unavailable. Dependency injection helps here because it lets you swap in test doubles for real dependencies when you run tests.
It is easier for developers to choose the right test double if they know the differences between them. Fakes are simple working versions, such as in-memory databases. Stubs return set data that can be used to test queries. Mocks keep track of what happens so you can see if commands work as they should.
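For a concrete picture, here is a minimal sketch of a stub and a mock built with Python's standard unittest.mock; the UserService class is hypothetical and exists only to make the example self-contained. (A fake, by contrast, would be a small working implementation, such as an in-memory repository.)

from unittest.mock import Mock

# Hypothetical service under test: it looks a user up and sends a greeting.
class UserService:
    def __init__(self, repo, notifier):
        self.repo = repo
        self.notifier = notifier

    def greet(self, user_id):
        user = self.repo.find(user_id)
        self.notifier.send(user, "Welcome back!")
        return user

def test_greet_with_stub_and_mock():
    # Stub: returns canned data so the query path can be tested.
    repo_stub = Mock()
    repo_stub.find.return_value = "alex"
    # Mock: records what happened so the command path can be verified.
    notifier_mock = Mock()

    service = UserService(repo=repo_stub, notifier=notifier_mock)

    assert service.greet(42) == "alex"
    notifier_mock.send.assert_called_once_with("alex", "Welcome back!")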
This approach keeps tests quick and accurate no matter when or where you run them. When teams isolate their tests well, test runs are 60% faster and there are far fewer flaky failures slowing development down.
Beyond isolation, teams need ways to increase test coverage without writing more test code. Property-based testing lets you state rules that should always hold true, and it automatically generates hundreds of test cases. This approach is great at finding edge cases and boundary conditions that hand-written tests might miss.
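As an illustrative sketch, here is how this could look with the Hypothesis library for Python; the clamp_discount function is hypothetical and defined inline so the example runs on its own.

from hypothesis import given, strategies as st

# Hypothetical function under test: clamp a discount percentage into 0-100.
def clamp_discount(percentage: float) -> float:
    return max(0.0, min(100.0, percentage))

# Property: whatever number Hypothesis generates, the result stays in range.
@given(st.floats(allow_nan=False, allow_infinity=False))
def test_clamped_discount_is_always_within_bounds(percentage):
    result = clamp_discount(percentage)
    assert 0.0 <= result <= 100.0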
Parameterized testing gives you similar benefits, but you have more control over the inputs. You don't have to write extra code to run the same test with different data. Tools like xUnit's Theory and InlineData make this possible. This helps find more bugs and makes it easier to keep track of your test suite.
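Since the earlier examples in this post use pytest, here is a hedged sketch of the same idea with pytest.mark.parametrize, pytest's counterpart to xUnit's Theory and InlineData; the apply_percentage helper is hypothetical and defined inline so the example is self-contained.

import pytest

# Hypothetical helper under test: apply a percentage discount to a total.
def apply_percentage(total: float, percent: float) -> float:
    return round(total * (1 - percent / 100), 2)

@pytest.mark.parametrize(
    "total, percent, expected",
    [
        (150.0, 10, 135.0),   # the discount example from earlier in this post
        (100.0, 0, 100.0),    # no discount
        (50.0, 100, 0.0),     # full discount
    ],
)
def test_apply_percentage(total, percent, expected):
    # One test body, several input/expected pairs supplied by the decorator.
    assert apply_percentage(total, percent) == expected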
Both methods pair well with selective test execution: platforms that know which tests matter for each code change run only the tests you need, giving you full coverage without slowing things down.
The last step is testing complicated output, such as JSON responses or generated code. Golden tests and snapshot testing make this easier by saving the expected output as reference files, so you don’t have to write complicated assertions.
If your code’s output changes, the test fails and shows what’s different. This makes it easy to spot mistakes, and you can approve real changes by updating the snapshot. This method works well for testing APIs, config generators, or any code that creates structured output.
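Here is a minimal golden-test sketch that needs no extra framework: the expected output lives in a committed reference file, and the test compares fresh output against it. The build_report function and the golden-file path are hypothetical.

import json
from pathlib import Path

# Hypothetical generator of structured output; in real code this would be the
# API serializer or config generator whose output you want to pin down.
def build_report(service: str, healthy: bool) -> dict:
    return {"service": service, "healthy": healthy, "schema_version": 1}

# Hypothetical location of the committed reference ("golden") file.
GOLDEN_FILE = Path(__file__).parent / "golden" / "report.json"

def test_report_matches_golden_file():
    actual = build_report("checkout", healthy=True)
    expected = json.loads(GOLDEN_FILE.read_text())
    # A failure means either a regression or an intentional change; for the
    # latter, regenerate the golden file and commit the updated snapshot.
    assert actual == expected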
Teams that use full automated testing frameworks see code coverage go up by 32.8% and catch 74.2% more bugs per build. Golden tests help by making it easier to check complex cases that would otherwise need manual testing.
The main thing is to balance thoroughness with easy maintenance. Golden tests should check real behavior, not details that change often. When you get this balance right, you’ll spend less time fixing bugs and more time building features.
Picking the right unit testing tools helps your team write tests efficiently, instead of wasting time on flaky tests or slow builds. The best frameworks work well with your language and fit smoothly into your CI/CD process.
Modern teams use these frameworks along with CI platforms that offer analytics and automation. This mix of good tools and smart processes turns testing from a bottleneck into a productivity boost.
Smart unit testing can turn CI/CD from a bottleneck into an advantage. When tests are fast and reliable, developers spend less time waiting and more time releasing code. Harness Continuous Integration uses Test Intelligence, automated caching, and isolated build environments to speed up feedback without losing quality.
Want to speed up your team? Explore Harness CI and see what's possible.


Filtering data is at the heart of developer productivity. Whether you’re looking for failed builds, debugging a service or analysing deployment patterns, the ability to quickly slice and dice execution data is critical.
At Harness, users across CI, CD and other modules rely on filtering to navigate complex execution data by status, time range, triggers, services and much more. While our legacy filtering worked, it had major pain points — hidden drawers, inconsistent behaviour and lost state on refresh — that slowed both developers and users.
This blog dives into how we built a new Filters component system in React: a reusable, type-safe and feature-rich framework that powers the filtering experience on the Execution Listing page (and beyond).
Our old implementation revealed several weaknesses as Harness scaled:
These problems shaped our success criteria: discoverability, smooth UX, consistent behaviour, reusable design and decoupled components.

Building a truly reusable and powerful filtering system required exploration and iteration. Our journey involved several key stages and learning from the pitfalls of each:
The first iteration shifted to React functional components but kept logic centralised in the FilterFramework. Each filter was conditionally rendered based on a visibleFilters array, and the framework fetched filter options and passed them down as props.
COMPONENT FilterFramework:
STATE activeFilters = {}
STATE visibleFilters = []
STATE filterOptions = {}
ON visibleFilters CHANGE:
FOR EACH filter IN visibleFilters:
IF filterOptions[filter] NOT EXISTS:
options = FETCH filterData(filter)
filterOptions[filter] = options
ON activeFilters CHANGE:
makeAPICall(activeFilters)
RENDER:
<AllFilters setVisibleFilters={setVisibleFilters} />
IF 'services' IN visibleFilters:
<DropdownFilter
name="Services"
options={filterOptions.services}
onAdd={updateActiveFilters}
onRemove={removeFromVisible}
/>
IF 'environments' IN visibleFilters:
<DropdownFilter ... />
Pitfalls: Adding new filters required changes in multiple places, creating a maintenance nightmare and poor developer experience. The framework had minimal control over filter implementation, lacked proper abstraction and scattered filter logic across the codebase, making it neither “stupid-proof” nor scalable.
The second iteration improved on this by accepting filters as children and using React.cloneElement to inject callbacks (onAdd, onRemove) from the parent framework. This gave developers a cleaner API for adding filters.
React.Children.map(children, child => {
  if (visibleFilters.includes(child.props.filterKey)) {
    return React.cloneElement(child, {
      onAdd: (label, value) => {
        activeFilters[child.props.filterKey].push({ label, value });
      },
      onRemove: () => {
        delete activeFilters[child.props.filterKey];
      }
    });
  }
  return child;
});
Pitfalls: React.cloneElement is an expensive operation that causes performance issues with frequent re-renders and it’s considered an anti-pattern by the React team. The approach tightly coupled filters to the framework’s callback signature, made prop flow implicit and difficult to debug and created type safety issues since TypeScript struggles with dynamically injected props.
The winning design uses React Context API to provide filter state and actions to child components. Individual filters access setValue and removeFilter via useFiltersContext() hook. This decouples filters from the framework while maintaining control.
COMPONENT Filters({ children, onChange }):
STATE filtersMap = {} // { search: { value, query, state } }
STATE filtersOrder = [] // ['search', 'status']
FUNCTION updateFilter(key, newValue):
serialized = parser.serialize(newValue) // Type → String
filtersMap[key] = { value: newValue, query: serialized }
updateURL(serialized)
onChange(allValues)
ON URL_CHANGE:
parsed = parser.parse(urlString) // String → Type
filtersMap[key] = { value: parsed, query: urlString }
RENDER:
<Context.Provider value={{ updateFilter, filtersMap }}>
{children}
</Context.Provider>
END COMPONENT
Benefits: This solution eliminated the performance overhead of cloneElement, decoupled filters from framework internals and made it easy to add new filters without touching framework code. The Context API provides clear data flow that’s easy to debug and test, with type safety through TypeScript.

The Context API in React unlocks something truly powerful — Inversion of Control (IoC). This design principle is about delegating control to a framework instead of managing every detail yourself. It’s often summed up by the Hollywood Principle: “Don’t call us, we’ll call you.”
In React, this translates to building flexible components that let the consumer decide what to render, while the component itself handles how and when it happens.
Our Filters framework applies this principle: you don’t have to manage when to update state or synchronise the URL. You simply define your filter components and the framework orchestrates the rest — ensuring seamless, predictable updates without manual intervention.
Our Filters framework demonstrates Inversion of Control in three key ways.
The result? A single, reusable Filters component that works across pipelines, services, deployments or repositories. By separating UI logic from business logic, we gain flexibility, testability and cleaner architecture — the true power of Inversion of Control.
COMPONENT DemoPage:
STATE filterValues
FilterHandler = createFilters()
FUNCTION applyFilters(data, filters):
result = data
IF filters.onlyActive == true:
result = result WHERE item.status == "Active"
RETURN result
filteredData = applyFilters(SAMPLE_DATA, filterValues)
RENDER:
<RouterContextProvider>
<FilterHandler onChange = (updatedFilters) => SET filterValues = updatedFilters>
// Dropdown to add filters dynamically
<FilterHandler.Dropdown>
RENDER FilterDropdownMenu with available filters
</FilterHandler.Dropdown>
// Active filters section
<FilterHandler.Content>
<FilterHandler.Component parser = booleanParser filterKey = "onlyActive">
RENDER CustomActiveOnlyFilter
</FilterHandler.Component>
</FilterHandler.Content>
</FilterHandler>
RENDER DemoTable(filteredData)
</RouterContextProvider>
END COMPONENT
One of the key technical challenges in building a filtering system is URL synchronization. Browsers only understand strings, yet our applications deal with rich data types — dates, booleans, arrays and more. Without a structured solution, each component would need to manually convert these values, leading to repetitive, error-prone code.
The solution is our parser interface, a lightweight abstraction with just two methods: parse and serialize.
parse converts a URL string into the type your app needs. serialize does the opposite, turning that typed value back into a string for the URL. This bidirectional system runs automatically — parsing when filters load from the URL and serialising when users update filters.
const booleanParser: Parser<boolean> = {
parse: (value: string) => value === 'true', // "true" → true
serialize: (value: boolean) => String(value) // true → "true"
}
At the heart of our framework lies the FiltersMap — a single, centralized object that holds the complete state of all active filters. It acts as the bridge between your React components and the browser, keeping UI state and URL state perfectly in sync.
Each entry in the FiltersMap contains three key fields:
You might ask — why store both the typed value and its string form? The answer is performance and reliability. If we only stored the URL string, every re-render would require re-parsing, which quickly becomes inefficient for complex filters like multi-selects. By storing both, we parse only once — when the value changes — and reuse the typed version afterward. This ensures type safety, faster URL synchronization and a clean separation between UI behavior and URL representation. The result is a system that’s predictable, scalable, and easy to maintain.
interface FilterType<T = any> {
value?: T // The actual filter value
query?: string // Serialized string for URL
state: FilterStatus // VISIBLE | FILTER_APPLIED | HIDDEN
}
Let’s trace how a filter value moves through the system — from user interaction to URL synchronization.
It all starts when a user interacts with a filter component — for example, selecting a date. This triggers an onChange event with a typed value, such as a Date object. Before updating the state, the parser’s serialize method converts that typed value into a URL-safe string.
The framework then updates the FiltersMap with both versions: the typed value and the serialized query.
From here, two things happen simultaneously: the URL is updated with the serialized string, and the onChange callback fires, passing typed values back to the parent component — allowing the app to immediately fetch data or update visualizations.
The reverse flow works just as seamlessly. When the URL changes — say, the user clicks the back button — the parser’s parse method converts the string back into a typed value, updates the FiltersMap and triggers a re-render of the UI.
All of this happens within milliseconds, enabling a smooth, bidirectional synchronization between the application state and the URL — a crucial piece of what makes the Filters framework feel so effortless.

For teams tackling similar challenges — complex UI state management, URL synchronization and reusable component design — this architecture offers a practical blueprint to build upon. The patterns used are not specific to Harness; they are broadly applicable to any modern frontend system that requires scalable, stateful and user-driven filtering.
The team’s core objectives — discoverability, smooth UX, consistent behavior, reusable design and decoupled elements — directly shaped every architectural decision. Through Inversion of Control, the framework manages the when and how of state updates, lifecycle events and URL synchronization, while developers define the what — business logic, API calls and filter behavior.
By treating the URL as part of the filter state, the architecture enables shareability, bookmarkability and native browser history support. The Context API serves as the control distribution layer, removing the need for prop drilling and allowing deeply nested components to seamlessly access shared logic and state.
Ultimately, Inversion of Control also paved the way for advanced capabilities such as saved filters, conditional rendering, and sticky filters — all while keeping the framework lightweight and maintainable. This approach demonstrates how clear objectives and sound architectural principles can lead to scalable, elegant solutions in complex UI systems.


Welcome back to the quarterly update series! Catch up on the latest Harness Continuous Delivery innovations and enhancements with this quarter's Q4 2025 release. For full context, check out our previous updates:
Q4 2025 builds on last quarter's foundation of performance, observability, and governance with "big-swing" platform upgrades that make shipping across VMs/Kubernetes safer, streamline artifacts/secrets, and scale GitOps without operational drag.
Harness now supports deploying and managing Google Cloud Managed Instance Groups (MIGs), bringing a modern CD experience to VM-based workloads. Instead of stitching together instance templates, backend services, and cutovers in the GCP console, you can run repeatable pipelines that handle the full deployment lifecycle—deploy, validate, and recover quickly when something goes wrong.
For teams that want progressive delivery, Harness also supports blue-green deployments with Cloud Service Mesh traffic management. Traffic can be shifted between stable and stage environments using HTTPRoute or GRPCRoute, enabling controlled rollouts like 10% → 50% → 100% with checkpoints along the way. After the initial setup, Harness keeps the stable/stage model consistent and can swap roles once the new version is fully promoted, so you’re not re-planning the mechanics every release.
Learn more about MIG
AWS CDK deployments can now target multiple AWS accounts using a single connector by overriding the region and assuming a step-level IAM role. This is a big quality-of-life improvement for orgs that separate “build” and “run” accounts or segment by business unit, but still want one standardized connector strategy.
Learn more about Multi-account AWS CDK deployments
ECS blue-green deployments now auto-discover the correct stage target group when it’s not provided, selecting the target group with 0% traffic and failing fast if weights are ambiguous. This reduces the blast radius of a very real operational footgun: accidentally deploying into (and modifying) the live production target group during a blue/green cycle.
Learn more about automated ECS blue-green shifting
Improved resiliency for Azure WebApp workflows impacted by API rate limits, reducing flaky behavior and improving overall deployment stability in environments that hit throttling.
HAR is now supported as a native artifact source for all CD deployment types except Helm, covering both container images and packaged artifacts (Maven, npm, NuGet, and generic). Artifact storage and consumption can live inside the same platform that orchestrates the deployment, which simplifies governance and reduces integration sprawl.
Because HAR is natively integrated, you don’t need a separate connector just to pull artifacts. Teams can standardize how artifacts are referenced, keep tagging/digest strategies consistent, and drive more predictable “what exactly are we deploying?” audits across environments.
Learn more about Harness Artifact Registry
Terraform steps now support authenticating to GCP using GCP connector credentials, including Manual Credentials, Inherit From Delegate, and OIDC authentication methods. This makes it much easier to run consistent IaC workflows across projects without bespoke credential handling in every pipeline.
Learn more about GCP connector
AWS connectors now support configuring AssumeRole session duration for cross-account access. This is vital when you have longer-running operations (large Terraform applies, multi-region deployments, or complex blue/green flows) and want the session window to match reality.
Learn more about AWS connector
Harness now supports Terragrunt 0.78.0+ (including the v1.x line), with automatic detection and the correct command formats. If you’ve been waiting to upgrade Terragrunt without breaking pipeline behavior, this closes a major gap.
Learn more about Terragrunt
Vault JWT auth now includes richer claims such as pipeline, connector, service, environment, and environment type identifiers. This enables more granular Vault policies, better secret isolation between environments, and cleaner multi-tenant setups.
CV now supports custom webhook notifications for verification sub-tasks, sending real-time updates for data collection and analysis (with correlation IDs) and allowing delivery via Platform or Delegate. This is a strong building block for teams that want deeper automation around verification outcomes and richer external observability workflows.
Learn more about Custom webhook notifications
You can now query metrics and logs from a different GCP project than the connector’s default by specifying a project ID. This reduces connector sprawl in multi-project organizations and keeps monitoring setups aligned with how GCP estates are actually structured.
Learn more about cross-project GCP Operations
This quarter’s pipeline updates focused on making executions easier to monitor, triggers more resilient, and dynamic pipeline patterns more production-ready. If you manage a large pipeline estate (or rely heavily on PR-driven automation), these changes reduce operational blind spots and help pipelines keep moving even when parts of the system don't function as expected.
Pipelines can now emit a dedicated notification event when execution pauses for approvals, manual interventions, or file uploads. This makes “human-in-the-loop” gates visible in the same places you already monitor pipeline health, and helps teams avoid pipelines silently idling until someone notices.
Harness now supports Bitbucket Cloud Workspace API Tokens in the native connector experience. This is especially useful for teams moving off deprecated app password flows and looking for an authentication model that’s easier to govern and rotate.
Learn more about Bitbucket Cloud Connector
Pipeline metadata is now exported to the data platform, enabling knowledge graph style use cases and richer cross-entity insights. This lays the foundation for answering questions like “what deploys what,” “where is this template used,” and “which pipelines are affected if we change this shared component.”
Pull request triggers can now load InputSets from the source branch of the pull request. This is a big unlock for teams that keep pipeline definitions and trigger/config repositories decoupled, or that evolve InputSets alongside code changes in feature branches.
Learn more about Dynamic InputSet Branch Resolutions
Trigger processing is now fault-tolerant; a failure in one trigger’s evaluation no longer blocks other triggers in the same processing flow. This improves reliability during noisy event bursts and prevents one faulty trigger from suppressing otherwise valid automations.
Added API visibility into “Referenced By” relationships for CD objects, making it easier to track template adoption and understand downstream impact. This is particularly useful for platform teams that maintain shared templates and need to measure usage, plan migrations, or audit dependencies across orgs and projects.
Harness now includes detection and recovery mechanisms for pipeline executions that get stuck, reducing reliance on manual support intervention. The end result is fewer long-running “zombie” executions and better overall system reliability for critical delivery workflows.
Dynamic Stages can now source pipeline YAML directly from Git, with support for connector, branch, file path, and optional commit pinning. Since these values can be expression-driven and resolved at runtime, teams can implement powerful patterns like environment-specific stage composition, governed reuse of centrally managed YAML, and safer rollouts via pinned versions.
Learn more about Dynamic Stages
ApplicationSets are built for a problem every GitOps team eventually hits: one app template, dozens or hundreds of targets. Instead of managing a growing pile of near-duplicate GitOps applications, ApplicationSets act like an application factory—one template plus generators that produce the child applications.
With first-class ApplicationSet support, Harness adds an enhanced UI wizard and deeper platform integration. That includes Service/Environment integration (via standard labels), better RBAC alignment, validation/preview of manifests, and a cleaner operational experience for creating and managing ApplicationSets over time.
Learn more about ApplicationSets
You can now use Harness secret expressions directly inside Kubernetes manifests in a GitOps flow using the Harness Argo CD Config Management Plugin. The key shift is where resolution happens: secrets are resolved during Argo CD’s manifest rendering phase, which supports a pure GitOps pattern without requiring a Harness Delegate to decrypt secrets.
The developer experience is straightforward. You reference secrets using expressions like <+secrets.getValue("...")>, commit the manifest, and the plugin injects resolved values as Argo CD renders the manifests for deployment.
Learn more about Harness secret expressions
Harness GitOps now supports Argo Rollouts, unlocking advanced progressive delivery strategies like canary and blue/green with rollout-specific controls. For teams that want more than “sync and hope,” this adds a structured mechanism to shift traffic gradually, validate behavior, and roll back based on defined criteria.
This pairs naturally with pipeline orchestration. You can combine rollouts with approvals and monitoring gates to enforce consistency in how progressive delivery is executed across services and environments.
Learn more about Argo Rollouts support
Want to try these GitOps capabilities hands-on?
Check out the GitOps Samples repo for ready-to-run examples you can fork, deploy, and adapt to your own workflows.
Explore GitOps-Samples
That wraps up our Q4 2025 Continuous Delivery update. Across CD, Continuous Verification, Pipelines, and GitOps, the theme this quarter was simple: make releases safer by default, reduce operational overhead, and help teams scale delivery without scaling complexity.
If you want to dive deeper, check the “Learn more” links throughout this post and the documentation they point to. We’d also love to hear what’s working (and what you want next); share feedback in your usual channels or reach out through Harness Support.


GitOps has become the default model for deploying applications on Kubernetes. Tools like Argo CD have made it simple to declaratively define desired state, sync it to clusters, and gain confidence that what’s running matches what’s in Git.
And for a while, it works exceptionally well.
Most teams that adopt GitOps experience a familiar pattern: a successful pilot, strong early momentum, and growing trust in automated delivery. But as adoption spreads across more teams, environments, and clusters, cracks begin to form. Troubleshooting slows down. Governance becomes inconsistent. Delivery workflows sprawl across scripts and tools.
This is the point many teams describe as “Argo not scaling.”
In reality, they’ve hit what we call the Argo ceiling.
The Argo ceiling isn’t a flaw in Argo CD. It’s a predictable inflection point that appears when GitOps is asked to operate at scale without a control plane.
The Argo ceiling is the moment when GitOps delivery starts to lose cohesion as scale increases.
Argo CD is intentionally designed to be cluster-scoped. That design choice is one of its strengths: it keeps the system simple, reliable, and aligned with Kubernetes’ model. But as organizations grow, that same design introduces friction.
Teams move from:
At that point, GitOps still works — but operating GitOps becomes harder. Visibility fragments. Orchestration logic leaks into scripts. Governance depends on human process instead of platform guarantees.
The Argo ceiling isn’t a hard limit. It’s the point where teams realize they need more structure around GitOps to keep moving forward.

One of the first pain points teams encounter is visibility.
Argo CD provides excellent insight within a single cluster. But as environments multiply, troubleshooting often turns into dashboard hopping. Engineers find themselves logging into multiple Argo CD instances just to answer basic questions:
What teams usually want instead is:
Argo CD doesn’t try to be a global control plane, so this gap is expected. When teams start asking for cross-cluster visibility, it’s often the first sign they’ve hit the Argo ceiling.
As GitOps adoption grows, orchestration gaps start to appear. Teams need to handle promotions, validations, approvals, notifications, and integrations with external systems.
In practice, many organizations fill these gaps with:
These scripts usually start small and helpful. But as they grow, they begin to:
This is a classic Argo ceiling symptom. Orchestration lives outside the platform instead of being modeled as a first-class, observable workflow. Over time, GitOps starts to feel less like a modern delivery model and more like scripted CI/CD from a decade ago.
Promotion is another area where teams feel friction.
Argo CD is excellent at syncing desired state, but it doesn’t model the full lifecycle of a release. As a result, promotions often involve:
These steps slow delivery and increase cognitive load, especially as the number of applications and environments grows.
Git is the source of truth in GitOps — but secrets don’t belong in Git.
At small scale, teams manage this tension with conventions and external secret stores. At larger scale, this often turns into a patchwork of approaches:
The result is secret sprawl and operational risk. Managing secrets becomes harder precisely when consistency matters most.
Finally, audits become painful.
Change records are scattered across Git repos, CI systems, approval tools, and human processes. Reconstructing who changed what, when, and why turns into a forensic exercise.
At this stage, compliance depends more on institutional memory than on reliable system guarantees.
When teams hit the Argo ceiling, the instinctive response is often to add more tooling:
Unfortunately, this usually makes things worse.
The problem isn’t a lack of tools. It’s a lack of structure. Scaling GitOps requires rethinking how visibility, orchestration, and governance are handled — not piling on more glue code.
Before introducing solutions, it’s worth stepping back and defining the principles that make GitOps sustainable at scale.
One of the biggest mistakes teams make is repeating the same logic in every Argo CD cluster.
Instead, control should be centralized:
At the same time, application ownership remains decentralized. Teams still own their services and repositories — but the rules of the road are consistent everywhere.
GitOps should feel modern, not like scripted CI/CD.
Delivery is more than “sync succeeded.” Real workflows include:
These should be modeled as structured, observable workflows — not hidden inside scripts that only a few people understand.
Many teams start enforcing rules through:
That approach doesn’t scale.
In mature GitOps environments, guardrails are enforced automatically:
Git remains the source of truth, but compliance becomes a platform guarantee instead of a human responsibility.
These challenges point to a common conclusion: GitOps at scale needs a control plane.
Git excels at versioning desired state, but it doesn’t provide:
A control plane complements GitOps by sitting above individual clusters. It doesn’t replace Argo CD — it coordinates and governs it.
Harness provides a control plane that allows teams to scale GitOps without losing control.
Harness gives teams a single place to see deployments across clusters and environments. Failures can be correlated back to the same change or release, dramatically reducing time to root cause.

Instead of relying on scripts, Harness models delivery as structured workflows:

This keeps orchestration visible, reusable, and safe to evolve over time.
Kubernetes and Argo CD can tell you whether a deployment technically succeeded — but not whether the application is actually behaving correctly.
Harness customers use AI-assisted deployment verification to analyze metrics, logs, and signals automatically. Rather than relying on static thresholds or manual checks, the system evaluates real behavior and can trigger rollbacks when anomalies are detected.
This builds on ideas from progressive delivery (such as Argo Rollouts analysis) while making verification consistent and governable across teams and environments.
Harness GitOps Secret Expressions address the tension between GitOps and secret management:
This keeps Git clean while making secret handling consistent and auditable.
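As a rough sketch of the pattern (the exact expression syntax and supported manifest types are covered in the Harness GitOps Secret Expressions documentation, and the secret name here is illustrative), the manifest committed to Git references a secret by expression rather than by value, and the value is resolved from your configured secret manager at deploy time:

apiVersion: v1
kind: Secret
metadata:
  name: payments-db
type: Opaque
stringData:
  # Resolved at sync time from the secret manager; the literal value never lands in Git.
  password: <+secrets.getValue("prod_db_password")>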
The Argo ceiling isn’t a failure of GitOps — it’s a sign of success.
Teams hit it when GitOps adoption grows faster than the systems around it. Argo CD remains a powerful foundation, but at scale it needs a control plane to provide visibility, orchestration, and governance.
GitOps doesn’t break at scale.
Unmanaged GitOps does.
Ready to move past the Argo ceiling? Watch the on-demand session to learn how teams scale GitOps with confidence.


For a long time, CI/CD has been “configuration as code.” You define a pipeline, commit the YAML, sync it to your CI/CD platform, and run it. That pattern works really well for workflows that are mostly stable.
But what happens when the workflow can’t be stable?
In all of those cases, forcing teams to pre-save a pipeline definition, either in the UI or in a repo, turns into a bottleneck.
Today, I want to introduce you to Dynamic Pipelines in Harness.
Dynamic Pipelines let you treat Harness as an execution engine. Instead of having to pre-save pipeline configurations before you can run them, you can generate Harness pipeline YAML on the fly (from a script, an internal developer portal, or your own code) and execute it immediately via API.
To be clear, dynamic pipelines are an advanced functionality. Pipelines that rewrite themselves on the fly are not typically needed and should generally be avoided. They're more complex than you want most of the time. But when you need this power, you really need it, and you want it implemented well.
Here are some situations where you may want to consider using dynamic pipelines.
You can build a custom UI, or plug into something like Backstage, to onboard teams and launch workflows. Your portal asks a few questions, generates the corresponding Harness YAML behind the scenes, and sends it to Harness for execution.
Your portal owns the experience. Harness owns the orchestration: execution, logs, state, and lifecycle management. While mature pipeline reuse strategies favor consistent templates behind your IDP entry points, some organizations use dynamic pipelines for certain classes of applications to generate more flexible workflows automatically.
Moving CI/CD platforms often stalls on the same reality: “we have a lot of pipelines.”
With Dynamic Pipelines, you can build translators that read existing pipeline definitions (for example, Jenkins or Drone configurations), convert them into Harness YAML programmatically, and execute them natively. That enables a more pragmatic migration path, incremental rather than a big-bang rewrite. It even supports parallel execution where both systems are in place for a short period of time.
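As a simplified illustration of the translation idea (the fields shown are trimmed down, the connector reference is an assumption, and a real translator also has to map triggers, secrets, and plugins), a basic Drone build step and the roughly equivalent Harness Run step it might generate could look like this:

# Drone step (source)
steps:
  - name: build
    image: golang:1.22
    commands:
      - go test ./...
      - go build ./...

# Roughly equivalent Harness CI Run step (generated)
- step:
    type: Run
    name: build
    identifier: build
    spec:
      connectorRef: account.dockerhub   # assumed registry connector
      image: golang:1.22
      shell: Sh
      command: |
        go test ./...
        go build ./...

The translator's job is mechanical mapping like this, repeated across hundreds of definitions, which is exactly the kind of work worth automating once rather than hand-porting.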
We’re entering an era where more of the delivery workflow is decided at runtime, sometimes by policy, sometimes by code, sometimes by AI-assisted systems. The point isn’t “fully autonomous delivery.” It’s intelligent automation with guardrails.
If an external system determines that a specific set of tests or checks is required for a particular change, it can assemble the pipeline YAML dynamically and run it. That's a practical step toward more programmatic stage and step generation over time. For that to work, the underlying DevOps platform must support dynamic pipelining. Harness does.
Dynamic execution is primarily API-driven, and there are two common patterns.
You execute a pipeline by passing the full YAML payload directly in the API request.
Workflow: your tool generates valid Harness YAML → calls the Dynamic Execution API → Harness runs the pipeline.
Result: the run starts immediately, and the execution history is tagged as dynamically executed.
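As a minimal sketch of such a payload (project, org, and step details are illustrative, and the structure simply mirrors standard Harness pipeline YAML), a portal or script might generate and submit something like:

pipeline:
  name: onboard payments service
  identifier: onboard_payments_service
  projectIdentifier: my_project
  orgIdentifier: default
  stages:
    - stage:
        name: smoke check
        identifier: smoke_check
        type: Custom
        spec:
          execution:
            steps:
              - step:
                  type: Http
                  name: health check
                  identifier: health_check
                  timeout: 30s
                  spec:
                    url: https://payments.internal.example.com/health
                    method: GET

Because the payload is ordinary pipeline YAML, the governance controls described below apply to it just as they would to a saved pipeline.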
You can designate specific stages inside a parent pipeline as Dynamic. At runtime, the parent pipeline fetches or generates a YAML payload and injects it into that stage.
This is useful for hybrid setups:
A reasonable question is: “If I can inject YAML, can I bypass security?”
Bottom line: no.
Dynamic pipelines are still subject to the same Harness governance controls, including:
This matters because speed and safety aren’t opposites if you build the right guardrails—a theme that shows up consistently in DORA’s research and in what high-performing teams do in practice.
To use Dynamic Pipelines, enable Allow Dynamic Execution for Pipelines at both:
Once that's on, you can start building custom orchestration layers on top of Harness: portals, translators, internal services, or automation that generates pipelines at runtime.
The takeaway here is simple: Dynamic Pipelines unlock new “paved path” and programmatic CI/CD patterns without giving up governance. I’m excited to see what teams build with it.
Ready to try it? Check out the API documentation and run your first dynamic pipeline.


Cloud migration has shifted from a tactical relocation exercise to a strategic modernization program. Enterprise teams no longer view migration as just the movement of compute and storage from one cloud to another. Instead, they see it as an opportunity to redesign infrastructure, streamline delivery practices, strengthen governance, and improve cost control, all while reducing manual effort and operational risk. This is especially true in regulated industries like banking and insurance, where compliance and reliability are essential.
This first installment in our cloud migration series introduces the high-level concepts and the automation framework that enables enterprise-scale transitions, without disrupting ongoing delivery work. Later entries will explore the technical architecture behind Infrastructure as Code Management (IaCM), deployment patterns for target clouds, Continuous Integration (CI) and Continuous Delivery (CD) modernization, and the financial operations required to keep migrations predictable.

Many organizations begin their migration journey with the assumption that only applications need to move. In reality, cloud migration affects five interconnected areas: infrastructure provisioning, application deployment workflows, CI and CD systems, governance and security policies, and cost management. All five layers must evolve together, or the migration unintentionally introduces new risks instead of reducing them.
Infrastructure and networking must be rebuilt in the target cloud with consistent, automated controls. Deployment workflows often require updates to support new environments or adopt GitOps practices. Legacy CI and CD tools vary widely across teams, which complicates standardization. Governance controls differ by cloud provider, so security models and policies must be reintroduced. Finally, cost structures shift when two clouds run in parallel, which can cause unpredictability without proper visibility.
Cloud migration is often motivated by a combination of compliance requirements, access to more suitable managed services, performance improvements, or cost efficiency goals. Some organizations move to support a multi-cloud strategy while others want to reduce dependence on a single provider. In many cases, migration becomes an opportunity to correct architectural debt accumulated over years.
Azure to AWS is one example of this pattern, but it is not the only one. Organizations regularly move between all major cloud providers as their business and regulatory conditions evolve. What remains consistent is the need for predictable, auditable, and secure migration processes that minimize engineering toil.
The complexity of enterprise systems is the primary factor that makes cloud migration difficult. Infrastructure, platform, security, and application teams must coordinate changes across multiple domains. Old and new cloud environments often run side by side for months, and workloads need to operate reliably in both until cutover is complete.
Another challenge comes from the variety of CI and CD tools in use. Large organizations rarely rely on a single system. Azure DevOps, Jenkins, GitHub Actions, Bitbucket, and custom pipelines often coexist. Standardizing these workflows is part of the migration itself, and often a prerequisite for reliability at scale.
Security and policy enforcement also require attention. When two clouds differ in their identity models, network boundaries, or default configurations, misconfigurations can easily be introduced. Finally, cost becomes a concern when teams pay for two clouds at once. Without visibility, migration costs rise faster than expected.
Harness addresses these challenges by providing an automation layer that unifies infrastructure provisioning, application deployment, governance, and cost analysis. This creates a consistent operating model across both the current and target clouds.
Harness Internal Developer Portal (IDP) provides a centralized view of service inventory, ownership, and readiness, helping teams track standards and best-practice adoption throughout the migration lifecycle. Harness Infrastructure as Code Management (IaCM) defines and provisions target environments and enforces policies through OPA, ensuring every environment is created consistently and securely. It helps teams standardize IaC, detect drift, and manage approvals. Harness Continuous Delivery (CD) introduces consistent, repeatable deployment practices across clouds and supports progressive delivery techniques that reduce cutover risk. GitOps workflows create clear audit trails. Harness Cloud Cost Management (CCM) allows teams to compare cloud costs, detect anomalies, and govern spend during the transition before costs escalate.
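To make the policy layer concrete, the kind of guardrail IaCM enforces through OPA is ordinary Rego evaluated against a plan. The sketch below is illustrative only: it assumes the plan is exposed in the standard terraform show -json shape (resource_changes), and the package name and message are made up for the example.

package tfplan_guardrails

# Deny any plan that would destroy a resource; destructive changes
# should go through an explicit approval instead (illustrative rule).
deny[msg] {
  rc := input.resource_changes[_]
  rc.change.actions[_] == "delete"
  msg := sprintf("plan would destroy %v; destructive changes require manual approval", [rc.address])
}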
A successful, low-risk cloud migration usually follows a predictable pattern. Teams begin by modeling both clouds using IaC so the target environment can be provisioned safely. Harness IaCM then creates the new cloud infrastructure while the existing cloud remains active. Once environments are ready, teams modernize their pipelines. This process is platform agnostic and applies whether the legacy pipelines were built in Azure DevOps, Jenkins, GitHub Actions, Bitbucket, or other systems. The new pipelines can run in parallel to ensure reliability before switching over.
Workloads typically migrate in waves. Stateless services move first, followed by stateful systems and other dependent components. Parallel runs between the source and target clouds provide confidence in performance, governance adherence, and deployment stability without slowing down release cycles. Throughout this process, Harness CCM monitors cloud costs to prevent unexpected increases. After the migration is complete, teams can strengthen stability using feature flags, chaos experiments, or security testing.

When migration is guided by automation and governance, enterprises see fewer failures, smoother transitions, and faster time-to-value. Timelines become more predictable because infrastructure and pipelines follow consistent patterns. Security and compliance improve as policy enforcement becomes automated. Cost visibility allows leaders to justify business cases and track savings. Most importantly, engineering teams end up with a more modern, efficient, and unified operating model in the target cloud.
The next blog in this series will examine how to design target environments using Harness IaCM, including patterns for enforcing consistent, compliant baseline configurations. Later entries will explore pipeline modernization, cloud deployment patterns, cost governance, and reliability practices for post-migration operations.


Atlanta did not just host KubeCon this year. It hosted the beginning of a new chapter for cloud native. Over four incredible days, more than 9,000 attendees celebrated 10 years of CNCF, explored the next generation of open source innovation, and most of all, grappled with the question echoing through every keynote, booth, and hallway track:
What does AI native really mean for our community?
KubeCon week kicked off with OpenTofu Day, one of the few co-located events this year, and Harness had a strong presence as contributors and community leaders. Larry Bordowitz, a core maintainer, joined technical discussions on backend improvements and enterprise interoperability and participated in a panel on OpenTofu’s roadmap under CNCF governance. Roger Simms, on the OpenTofu Technical Steering Committee, met with maintainers and platform teams to align on long-term standards for vendor neutral IaC. We also released the Practical Guide to Modernizing Infrastructure Delivery, showing how teams like TransUnion and Fidelity scaled IaC, reduced toil, and built secure GitOps workflows in preparation for AI driven infrastructure delivery.

The themes of the day, such as openness, trust, and modernization, were echoed in talks like Fabrizio Sgura's session on 5 Security Tips for Terraform, where he highlighted Harness's native integrations with SCS and STO. This kind of independent validation matters because it shows the industry increasingly recognizes that modern IaC requires secure, integrated pipelines, not just template execution. OpenTofu Day underscored one message: the future of IaC is open, and Harness is committed to leading that future through contribution, community partnership, and enterprise-ready tooling.
The opening keynote marked a milestone: 10 years of CNCF and a clear signal that the next decade will be defined by AI workloads running natively on Kubernetes.
We saw the unveiling of the Kubernetes AI Conformance Program, a community-driven initiative ensuring that AI and ML workloads run consistently across clouds and hardware accelerators. The live demo, deploying a Gemma model on a Kubernetes 1.34 cluster using Dynamic Resource Allocation (DRA), showed how GPUs and TPUs are now first-class citizens in Kubernetes. The message was clear: the cloud native community is laying the foundation for standardized AI infrastructure across every cloud and cluster.
This community is trying to standardize AI inference workloads across every cloud and system out there.
That sentiment aligns perfectly with Harness’s vision that AI is no longer an add-on to software delivery; it is becoming the delivery substrate itself.
At the Google Cloud Booth, Chinmay Gaikwad, Harness’s Director of Product Marketing, presented “Creating Enterprise-ready CI/CD Using Agentic AI,” demonstrating how AI is making continuous delivery faster and more secure at enterprise scale. Harness’s extensive knowledge graph caught the attention of the audience.


In one of the most memorable keynotes of the week, Joseph Sandoval (Adobe) described our evolution from Linux to Cloud Native to AI Native. He introduced the idea of the Agent Economy, where autonomous systems observe, reason, and act, and emphasized that Kubernetes must now evolve to orchestrate them.
This transformation requires more intelligent scheduling, deeper observability, and verified agent identity using SPIFFE/SPIRE. Sandoval’s call to action summed up the mood of the entire conference:
AI native is not going to be built by one project or one company. It will be built by all of you. The community is the accelerator.
This perfectly reflects what we showcased at Harness: Agentic AI as an engineering accelerator, powering adaptive, intelligent pipelines that continuously learn and optimize delivery.
In the keynote “Supply Chain Reaction: A Cautionary Tale in K8s Security,” speakers S. Potter and A.G. Veytia delivered a wake-up call on supply chain attacks. A compromised compiler injected crypto-mining malware into a “clean” image, bypassing traditional vulnerability scanners.
Your image has zero vulnerable components, so there is nothing to report. It is zero-CVE malware.
Their advice was practical: secure the supply chain through attestation (SLSA), policy enforcement, and artifact signing with Sigstore. Harness users saw this principle in action at our booth, where we demonstrated policy-as-code and provenance-aware delivery in our CI/CD pipelines.
Apple introduced Apple Containerization, a new framework that runs Linux containers directly on macOS using lightweight microVMs. Each container boots a minimal Linux kernel, runs as a static executable, and starts in under a second. This design combines the security of VMs with the speed of containers, creating safer, more private development environments.
Beyond the keynotes, technical sessions reinforced a clear trend: intelligent, automated, and developer-first platforms are the future.
At Booth #522, the Harness team showcased over 17 products spanning CI, CD, IDP, IaCM, security, testing, and FinOps, sparking conversations on the practical side of AI in DevOps, Testing, Security, and FinOps.
Harness's visual pipeline creation and AI-assisted pipeline generation were hot topics at the booth.

Harness supports many CNCF projects, including ArgoCD, Backstage, LitmusChaos, and OpenTofu. The enthusiasm was electric, and our booth became a hub for deep technical conversations, spontaneous demos, and, yes, the best swag of the show.

If KubeCon 2024 was about AI awareness, KubeCon 2025 was about AI activation. From Kubernetes AI Conformance to Agentic orchestration, cloud native is evolving into the control plane for intelligent systems.
At Harness, we are building for that future where:
Could not make it to Atlanta? Experience the technology everyone was talking about.
KubeCon 2025 proved one thing: the future of cloud native is intelligent, adaptive, and collaborative. And at Harness, we are building it.


We're thrilled to share some exciting news: Harness has been granted U.S. Patent US20230393818B2 (originally published as US20230393818A1) for our configuration file editor with an intelligent code-based interface and a visual interface.
This patent represents a significant step forward in how engineering teams interact with CI/CD pipelines. It formalizes a new way of managing configurations - one that is both developer-friendly and enterprise-ready - by combining the strengths of code editing with the accessibility of a visual interface.
👉 If you haven’t seen it yet, check out our earlier post on the Harness YAML Editor for context.
In modern DevOps, YAML is everywhere. Pipelines, infrastructure-as-code, Kubernetes manifests, you name it. YAML provides flexibility and expressiveness for DevOps pipelines, but it comes with drawbacks:
The result? Developers spend countless hours fixing misconfigurations, chasing down syntax errors, and debugging pipelines that failed for reasons unrelated to their code.
We knew there had to be a better way.
The patent covers a hybrid editor that blends the best of two worlds:
What makes this unique is the schema stitching approach:
This ensures consistency, prevents invalid configurations, and gives users real-time feedback as they author pipelines.
This isn’t just a UX improvement - it’s a strategic shift with broad implications.
New developers no longer need to memorize every YAML field or indentation nuance. Autocomplete and inline hints guide them through configuration, while the visual editor provides an easy starting point. A wall of YAML can be hard to understand; a visual pipeline is easy to grok immediately.
Schema-based validation catches misconfigurations before they break builds or deployments. Teams save time, avoid unnecessary rollbacks, and maintain higher confidence in their pipelines.
By offering both a code editor and a visual editor, the tool becomes accessible to a wider audience - developers, DevOps engineers, and even less technical stakeholders like product managers or QA leads who need visibility.
Here’s a simple example:
Let’s say your pipeline YAML requires specifying a container image.
image: ubuntu:20.04
But what if you accidentally typed ubunty:20.04? In a traditional editor, the pipeline might fail later at runtime.
Now add the visual editor:
Multiply this by hundreds of fields, across dozens of microservices, and the value becomes clear.
We’re in a new era of software delivery:
This patent directly addresses these trends by creating a foundation for intelligent, schema-driven configuration tooling. It allows Harness to:
With this patent secured, the door is open to innovate further:
Ultimately, this isn't just about YAML: DevOps configuration must be intuitive, resilient, and scalable to enable faster, safer, and more delightful software delivery.
This milestone wouldn’t have been possible without the incredible collaboration of our product, engineering, and legal teams. And of course, our customers. The feedback they provided shaped the YAML editor into what it is today.
This patent is more than a legal win. It’s validation of an idea: that developer experience matters just as much as functionality. By bridging the gap between raw power and accessibility, we’re making CI/CD pipelines faster to build, safer to run, and easier to adopt.
At Harness, we invest aggressively in R&D to solve our customers' most complex problems. What truly matters is delivering capabilities that improve the lives of developers and platform teams, enabling them to innovate more quickly.
We're thrilled that this particular innovation, born from solving the real-world pain of YAML, has been formally recognized as a unique invention. It's the perfect example of our commitment to leading the industry and delivering tangible value, not just features.
👉 Curious to see it in action? Explore the Harness YAML Editor and share your feedback.


Welcome back to the quarterly update series! Catch up on the latest Harness Continuous Delivery innovations and enhancements with this quarter’s Q3 2025 release. For full context, check out our previous updates:
The third quarter of 2025 has been all about deepening control, strengthening integrations, and enhancing deployment reliability across the Harness CD ecosystem. Here’s a roundup of everything new this quarter.
Harness Continuous Delivery now supports automated deployment for Salesforce, enabling teams to deploy Salesforce DX projects and pre-built unlocked packages (2GP) directly from version control systems like Git, GitHub, GitLab, and Bitbucket. The integration offers flexible authentication options and lets teams define target Salesforce orgs as infrastructure, empowering seamless and secure metadata deployments through automated pipelines.
Users can now perform basic deployments using Serverless Framework V4, with Node.js 22 runtime support and authentication via the SERVERLESS_ACCESS_KEY environment variable. Rollback to V3 is available if needed.
Explore complete details in the product documentation.
A new Skip Traffic Shift option in the Google Cloud Run Deploy step allows creating new revisions without an immediate traffic shift. The Traffic Shift step now supports assigning multiple tags to revisions, making traffic routing and management easier.
Explore complete details in the product documentation.
Harness extends containerized execution with support for VM infrastructure, enabling container step groups to run on Linux VMs as an alternative to Kubernetes clusters. This feature is controlled by a feature flag and is available upon request.
Explore complete details in the product documentation.
The Host field in Canary Traffic Routing now accepts a comma-separated list of hosts, enhancing flexibility in routing configurations.
ECS deployments now respect the folder path option for scaling policies, correctly selecting all files in the specified folder rather than expecting a single file path.
Explore full details in the product documentation.
Harness now provides the option to disable automatic uninstall on a first release failure during Helm deployments, allowing easier debugging and investigation without losing the failed release state.
Explore full details in the product documentation.
Connect Azure Cloud Provider connectors to Azure Container Registry (ACR) for seamless deployment of Azure Functions.
Explore complete details in the product documentation.
Harness continuous deployment (CD) now supports Azure connectors in all Terraform steps (plan, apply, destroy, etc.), extending the flexibility of infrastructure-as-code for hybrid and multi-cloud deployments.
Explore complete details in the product documentation.
Harness now supports Dynatrace Grail Logs as a health source for deployment verification. By integrating Dynatrace connectors configured with a Platform URL and Platform Token, you can monitor logs in real-time during deployments—helping to identify issues earlier and ensuring greater environmental stability.
Explore complete details in the product documentation.
Manual intervention during deployment failures is now smarter. When a Run step fails and you choose to roll back the stage manually, Harness will automatically execute the rollback stage action instead of marking the workflow as failed. This enhancement brings greater resiliency and control to recovery workflows.
Configure CPU and memory resource requests directly in the Container Step UI to ensure optimized and reliable execution of resource-intensive tasks.
Select user groups when configuring Email Steps to enhance collaboration and communication workflows.
Explore complete details in the product documentation.
Harness now treats GitOps ApplicationSets as key native entities, allowing for easy creation via the UI or REST APIs, with support for all Argo CD generator types. ApplicationSets integrate with Harness Services and Environments for RBAC, deployment tracking, and unified visibility. Features include importing existing ApplicationSets, Terraform management, PR pipeline automation, sync policies, audit logging, and hierarchical application views—making large-scale GitOps management simpler and more powerful.
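For readers newer to ApplicationSets, the sketch below is a plain Argo CD ApplicationSet using a list generator, the kind of object you can now create and manage through the Harness UI or APIs (cluster names, URLs, repository, and namespaces are illustrative):

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: payments
  namespace: argocd
spec:
  generators:
    - list:
        elements:
          - cluster: dev
            url: https://dev-cluster.example.com
          - cluster: prod
            url: https://prod-cluster.example.com
  template:
    metadata:
      name: 'payments-{{cluster}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/example/gitops-config.git
        targetRevision: main
        path: apps/payments/{{cluster}}
      destination:
        server: '{{url}}'
        namespace: payments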
Support for secret expressions now extends beyond Helm files to resource manifests, specifically secrets, enabling direct injection from external secret management systems, such as Vault, into Argo CD applications.
New flows are in development to handle the clean deletion of GitOps entities, automating deletion processes and reducing the need for manual intervention.
The Service Summary page now provides an improved view of deployments through GitOps apps, showing artifact and chart versions deployed, which helps with enhanced visibility and troubleshooting.
The GitOps Cluster Detail Page now has improved navigation, detailed credential information, inline editing, and an app listing pane. At the same time, optimizations to reconcile thousands of GitOps applications improve scalability and reduce processing time.
Each of these enhancements demonstrates Harness’s dedication to simplifying complex deployments while empowering teams with secure, scalable, and highly flexible continuous delivery tools.
As we look ahead to the next update cycle, we anticipate further advancements in performance, observability, and governance across both Continuous Delivery (CD) and GitOps domains.
See the innovations in action! Explore our Harness Q3 Fall Release playlist on YouTube.


Harness GitOps builds on the Argo CD model by packaging a Harness GitOps Agent with Argo CD components and integrating them into the Harness platform. The result is a GitOps architecture that preserves the Argo reconciliation loop while adding visibility, audit, and control through Harness SaaS.
At the center of the architecture is the Argo CD cluster, sometimes called the control cluster. This is where both the Harness GitOps Agent and Argo CD’s core components run:
The control cluster can be deployed in two models:
The Argo CD Application Controller applies manifests to one or more target clusters by talking to their Kubernetes API servers.
Developers push declarative manifests (YAML, Helm, or Kustomize) into a Git repository. The GitOps Agent and repo-server fetch these manifests. The Application Controller continuously reconciles the cluster state against the desired state. Importantly, clusters never push changes back into Git. The repository remains the single source of truth. Harness configuration, including pipeline definitions, can also be stored in Git, providing a consistent Git-based experience.
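As a concrete reference point for this loop, a minimal Argo CD Application manifest of the kind the Application Controller reconciles looks like the sketch below (repository URL, paths, and names are illustrative):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payments
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops-config.git
    targetRevision: main
    path: apps/payments
  destination:
    server: https://kubernetes.default.svc
    namespace: payments
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

With automated sync and self-heal enabled, any drift in the target cluster is reverted to what Git declares, which is the reconciliation behavior described above.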
While the GitOps loop runs entirely in the control cluster and target clusters, the GitOps Agent makes outbound-only connections to Harness SaaS.
Harness SaaS provides:
All sensitive configuration data, such as repository credentials, certificates, and cluster secrets, remains in the GitOps Agent's namespace as Kubernetes Secrets and ConfigMaps. Harness SaaS only stores a metadata snapshot of the GitOps setup (Applications, ApplicationSets, Clusters, Repositories, etc.), never the sensitive data itself. Unlike some SaaS-first approaches, Harness never requires secrets to leave your cluster, and all credentials and certificates remain confined to your Kubernetes namespace.

In short: a developer commits, Argo fetches and reconciles, and the GitOps Agent reports status back to Harness SaaS for governance and visibility.
This is the pure GitOps architecture: Git defines the desired state, Argo CD enforces it, and Harness provides governance and observability without altering the core reconciliation model.

Most organizations operate more than one Kubernetes cluster, often spread across multiple environments and regions. In this model, each region has its own Argo CD control cluster. The control cluster runs the Harness GitOps Agent alongside core Argo CD components and reconciles the desired state into one or more target clusters such as dev, QA, or prod.
The flow is straightforward:
Harness SaaS aggregates data from all control clusters, giving teams a single view and a single place to drive rollouts:
This setup preserves the familiar Argo CD reconciliation loop inside each control cluster while extending it with Harness’ governance, observability, and promotion pipelines across regions.
Note: Some enterprises run multiple Argo CD control clusters per region for scale or isolation. Harness SaaS can aggregate across any number of clusters, whether you have two or two hundred.
Harness GitOps lets you scale from single clusters to a fleet-wide GitOps model with unified dashboards, governance, and pipelines that promote with confidence and roll back everything when needed. Ready to see it in your stack? Get started with Harness GitOps and bring enterprise-grade control to your Argo CD deployments.


DevOps governance is a crucial aspect of modern software delivery. To ensure that software releases are secure and compliant, it is pivotal to embed governance best practices within your software delivery platform. At Harness, the feature that powers this capability is Open Policy Agent (OPA), a policy engine that enables fine-grained access control over your Harness entities.
In this blog post, we’ll explain how to use OPA to ensure that your DevOps team follows best practices when releasing software to production. More specifically, we’ll explain how to write these policies in Harness.

Every time an API request comes to Harness, the service sends the API request to the policy agent. The policy agent uses three things to evaluate whether the request can be made: the contents of the request, the target entity of the request, and the policy set(s) on that entity. After evaluating the policies in those policy sets, the agent simply outputs a JSON object.
If the API request should be denied, the JSON object looks like:
{
  "deny": [<reason>]
}
If the API request should be allowed, the JSON object is empty:
{}
Let's now dive a bit deeper and look at how to actually write these policies.
OPA policies are written using Rego, a declarative language that allows you to reason about information in structured documents. Let’s take an example of a possible practice that you’d want to enforce within your Continuous Delivery pipelines. Let’s say you don’t want to be making HTTP calls to outside services within your deployment environment and want to enforce the practice: “Every pipeline’s Deployment Stage shouldn’t have an HTTP step”
Now, let’s look at the policy below that enforces this rule:
package pipeline
deny[msg] {
  # Check for a deployment stage ...
  input.pipeline.stages[i].stage.type == "Deployment"
  # For that deployment stage check if there's an Http step ...
  input.pipeline.stages[i].stage.spec.execution.steps[j].step.type == "Http"
  # Show a human-friendly error message
  msg := "Deployment pipeline should not have HTTP step"
}
First you’ll notice some interesting things about this policy language. The first line declares that the policy is part of a package (“package pipeline”) and then the next line:
deny[msg] {
is declaring a "deny block," which tells the agent that if all the statements in the block are true, it should populate deny with the message variable.
Then you’ll notice that the next line checks to see if there’s a deployment stage:
input.pipeline.stages[i].stage.type == "Deployment"
You may be thinking, there’s a variable “i” that was never declared! We’ll get to that later in the blog but for now just know that what OPA will do here is try to see if there’s any number i for which this statement is true. If there is, it will assign that number to “i” and move on to the next line,
input.pipeline.stages[i].stage.spec.execution.steps[j].step.type == "Http"
Just like above, here OPA will look for any j for which the statement above is true. If there are values of i and j for which these lines are true, then OPA will finally move on to the last line:
msg := "Deployment pipeline should not have HTTP step"
which sets the message variable to that string. So for the following input:
{
  "pipeline": {
    "name": "abhijit-test-2",
    "identifier": "abhijittest2",
    "tags": {},
    "projectIdentifier": "abhijittestproject",
    "orgIdentifier": "default",
    "stages": [
      {
        "stage": {
          "name": "my-deployment",
          "identifier": "my-deployment",
          "description": "",
          "type": "Deployment",
          "spec": {
            "execution": {
              "steps": [
                {
                  "step": {
                    "name": "http",
                    "identifier": "http",
                    "type": "Http",
                    …
}
The output will be:
{
  "deny": ["Deployment pipeline should not have HTTP step"]
}
OK, so that might have made some sense at a high level, but let's dig a bit deeper into how to write these policies. Let's look at how Rego works under the hood and get you to a point where you can write Rego policies for your own use cases in Harness.
You may have noticed that throughout this blog we've been referring to Rego as a "declarative language," but what does that actually mean? Most programming languages are "imperative," meaning each line of code explicitly states what needs to be done. In a declarative language, you instead describe the conditions you care about, and at run time the engine walks through a data source looking for a match. With Rego, you define a set of conditions, and OPA searches the input data to see whether those conditions are matched. Let's see what this means with a simple example.
Imagine you have the following Rego policy:
x if input.user == "alex"
y if input.tokens > 100
and the engine gets the following input:
{
  "user": "alex",
  "tokens": 200
}
The Policy engine will take the input and evaluate the policy line by line. Since both statements are true for the input shown, the policy engine will output:
{
  "x": true,
  "y": true
}
Now, both of these were simple rules that could be defined in one line each. But you often want to do something a bit more complex. In fact, most rules in Rego are written using the following syntax:
variable_name := value {
  condition 1
  condition 2
  ...
}
The way to read this is, the variable is assigned the value if all the conditions within the block are met.
So let’s go back to the simple statements we had above. Let’s say we want our policy engine to allow a request only if the user is alex and if they have more than 100 tokens left. Thus our policy would look like:
allow := true {
  input.user == "alex"
  input.tokens > 100
}
It would return true for the following input request:
{
  "user": "alex",
  "tokens": 200
}
But false for either of the following:
{
  "user": "bob",
  "tokens": 200
}
{
  "user": "alex",
  "tokens": 50
}
Now let’s look at something a bit more complicated. Let’s say you want to write a policy to allow a “delete” action if the user has the “admin” permission attached. This is what the policy would look like (note: this policy is for illustrative purposes only and will not work in Harness)
deny {
  input.action == "delete"
  not user_is_admin
}

user_is_admin {
  input.role == "admin"
}
So the first line will match only if the input is a delete action. The second line will then evaluate the “user_is_admin” rule which checks to see if the role field is “admin” and if not, the deny will get triggered. So for the following input:
{
  "action": "delete",
  "role": "non-admin"
}
The policy agent will return:
{"deny": true}
because the role was not "admin". But for the following input:
{
  "action": "delete",
  "role": "admin"
}
The policy agent will return:
{}
So far we've only seen instances of a Rego policy taking in input and checking some fields within that input. Let's see how variables, sets, and arrays are defined. Let's say you only want to allow a code owner to trigger a pipeline. If that's the case, then the following policy will do the trick (note: this policy is for illustrative purposes only and will not work in Harness):
code_owners = {"albert", "beth", "claire"}

deny[msg] {
  triggered_by = input.triggered_by
  not code_owners[triggered_by]
  msg := "Triggered user is not permitted to run publish CI"
}
Here, on line 1 we are defining the code owners variable as a set with three names. We are then entering the deny block. Remember, for the deny block to evaluate to true, all three lines within the block need to evaluate to true. The first line sets the “triggered_by” variable to see who triggered the pipeline. The next line
not code_owners[triggered_by]
checks whether the code_owners set does not contain the value of triggered_by. Finally, if that line evaluates to true, the next line runs: the message value is set and the deny result is established.
Now let’s look at an example of a policy that contains an array. Let’s say you want to ensure that every last step of a Harness pipeline is an “Approval” step. The policy below will ensure that’s the case (this policy will work in Harness):
package pipeline

deny[msg] {
  arr_len = count(input.pipeline.stages)
  not input.pipeline.stages[arr_len-1].stage.type == "Approval"
  msg := "Last stage must be an approval stage"
}
The first line assigns the length of the stages array to the variable "arr_len", and the next line checks that the last stage in the pipeline is an Approval stage.
Ok, let’s look at another slightly more complicated policy that’ll work in Harness. Let’s say you want to write a policy: “For all pipelines where there’s a Deployment step, it is immediately followed by an Approval step”
deny[msg] {
  input.pipeline.stages[i].stage.type == "Deployment"
  not input.pipeline.stages[i + 1].stage.type == "Approval"
  msg := "Deployment stage must be immediately followed by an Approval stage"
}
The first line matches all values of ‘i’ for which the stage type is ‘Deployment’. The next line then checks whether there’s any value of i for which the stage at i+1 is not an ‘Approval’ stage. If for any i those two statements are true, then the deny block gets evaluated to true.
Finally, Rego also supports objects and dictionaries. An object is an unordered key-value collection. The key can be of any type and so can the value.
user_albert = {
  "admin": true, "employee_id": 12, "state": "Texas"
}
To access any of this object's attributes you simply use the "." notation (for example, user_albert.state).
You can also create dictionaries as follows:
users_dictionary = {
  "albert": user_albert,
  "bob": user_bob ...
}
And you access each entry using the following syntax: users_dictionary["albert"]
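Tying this back to policies, a rule could use the dictionary inside a deny block. The example below is purely illustrative (it is not a Harness-specific check) and denies any request from a user whose entry is missing or not marked as admin:

deny[msg] {
  not users_dictionary[input.user].admin
  msg := sprintf("user %v does not have admin permissions", [input.user])
}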
Of course in order to be able to write these policies correctly you need to know the types of objects that you can apply them on and the schema of these objects:
A simple way to figure out how to refer to deeply nested attributes within an object’s schema is shown in the gif below.

Visit our documentation to get started with OPA today!


Choosing the right tool for automating software deployment is a critical decision for any engineering team. While proprietary software offers a managed, out-of-the-box experience, many organizations find themselves drawn to the power and flexibility of the open-source ecosystem.
Open source deployment software gives you direct control over your pipelines and the freedom to innovate without being tied to a vendor's roadmap. This guide will explore the most impactful open source software deploy tools available today. We'll examine their different philosophies and strengths, and help you build a deployment strategy that is both powerful and pragmatic.
At its core, a deployment tool automates getting your software into a runtime environment - typically from an artifact registry of some sort. Open source deployment tools make the source code for this process available for you to inspect, modify, and extend.
The primary benefit is this freedom to tailor the tool to your exact needs. This can lead to lower direct costs and a vibrant community you can lean on for support. However, this trade-off means investing your team's time into the setup, maintenance, and scaling of the solution. Understanding that balance is key.
The world of open source deployment is rich with options, but a few key players represent the major approaches and philosophies you'll encounter.
It’s impossible to talk about CI/CD without mentioning Jenkins. It’s one of the most established and widely used open-source automation servers. Many teams began their automation journey using Jenkins to build their code and naturally extended it to handle deployments.
Born at Netflix, Spinnaker is a heavyweight contender designed for large-scale, multi-cloud continuous delivery. It’s built on a pipeline-first model for deploying to cloud providers like AWS, GCP, and Azure.
Flux is a lightweight GitOps tool that lives inside your Kubernetes cluster. It was one of the first projects to champion the idea that your Git repository should be the single source of truth for your cluster's state.
Like Flux, Argo CD is a declarative, GitOps-style tool for Kubernetes. It has gained immense popularity for its intuitive user interface and clear visualization of the application state.
Harness offers a source-available, open-source version of its powerful Continuous Delivery platform. It provides a more holistic, pipeline-centric view of deployment that goes beyond simple synchronization.
This is where the strategic choices get interesting. The most effective deployment setups often don't rely on a single tool but instead combine the strengths of open source with the enterprise-grade features of a commercial platform.
Tools like Argo CD are good at their core task of moving the bits. But modern software delivery is more than just kubectl apply. You need to consider:
This is where a full continuous delivery tool like Harness CD comes in. Harness embraces open source. You can use Harness as a control plane that integrates directly with tools like Argo CD or Flux. Let Argo handle the GitOps synchronization while Harness orchestrates the end-to-end pipeline - enforcing governance, running automated verification, and providing a unified view of your entire software delivery lifecycle. It's the classic "best of both worlds" scenario.
So, how do you decide? Start by assessing your team's reality.
The best approach is often incremental. Start with a powerful open-source engine that fits your immediate needs. As your requirements for governance and orchestration grow, integrate it into a broader platform like Harness. This allows you to maintain the flexibility of open source while gaining the enterprise-grade capabilities you need to scale securely and efficiently.


Welcome to Harness’s Q2 Feature Launchpad! As software delivery demands grow increasingly complex, our mission remains steadfast: to empower engineering teams with smarter, safer, and faster tools that simplify and supercharge continuous delivery.
This quarter, we’ve unveiled a powerful suite of enhancements across Kubernetes, Helm, AWS, Terraform, and more — each designed to streamline workflows, boost deployment confidence, and enable rapid recovery when issues arise.
Dive in to discover how Harness is helping you take full control over your delivery pipelines, with new capabilities that reduce risk, remove friction, and put you in the driver’s seat for seamless, automated releases. Whether you’re deploying containers, serverless functions, or infrastructure as code, these Q2 updates unlock exciting possibilities to innovate faster and with greater peace of mind.
Let’s explore the game-changing features you can start using today — backed by official documentation links to help you hit the ground running.
Declarative Rollback: Enhanced Reliability
Rolling back a deployment should always restore your system to a known safe state. With Declarative Rollback, Harness now tracks hashes of config and secret manifests in your Kubernetes workloads. This means that during rollback, all configuration and secret changes are accurately reapplied or reverted—reducing drift and preventing misconfigurations.
Demo Video
Helm Test Flag Support on Deploy Commands
Testing is crucial before putting changes into production. Now, you can directly add the Helm test flag into your Harness deployment steps. This lets you automatically run chart-defined tests after deploying, catching issues immediately and fostering greater release confidence—no manual intervention required.
Demo Video
Granular Helm ‘Command Flags’ at Step Level
Sometimes, you need custom Helm command-line flags—like debug settings or custom timeouts—during deployment. You can now specify these flags at individual deployment steps within native Helm workflows, giving you total flexibility and fine-tuned control over every Helm operation.
Demo Video
Automated EKS Token Refresh for Seamless Deployments
AWS EKS tokens expire regularly, sometimes interrupting deployments. Harness now detects token expiry and transparently refreshes your credentials mid-deployment—reducing failed runs and keeping your CI/CD pipelines flowing without manual reauthentication.
Post Deployment Rollback Artifact Labeling
To simplify operational audits, artifacts involved in Helm post-production rollbacks are now distinctly marked as “N/A” for traceability. This helps teams quickly identify which artifacts were actually deployed after a rollback—supporting compliance and incident reviews.
Parallel ASG Rollbacks: Advanced Recovery
Large AWS operations often deploy to multiple Auto Scaling Groups (ASGs) in a single pipeline stage. Harness now supports parallel rollback across multiple ASGs—reversing several deployments simultaneously. This enables safer, coordinated, and faster disaster recovery in complex environments.
Demo Video
Canary Support for Lambda Deployments
Serverless deployments need gradual rollouts too. Harness brings native canary support for AWS Lambda, letting you direct a percentage of traffic to the new version(s) before full rollout. Discover issues early, minimize blast radius, and deliver serverless updates with new levels of safety.
Demo Video
Upload Artifacts to S3 from CD Pipelines
Artifact management got easier: you can now upload deployment artifacts directly to Amazon S3 as part of your CD pipeline. This supports archiving, analytics, and direct distribution from a centralized, secure location.
Auto-Create Workspace for Remote Backends
When using Terraform with remote backends (like S3 or Terraform Cloud), Harness now automatically creates new workspaces if they don’t exist. No need for manual setup—simply define your infra as code and Harness handles backend initialization, reducing setup errors and saving engineering hours.
Demo Video
Cross-Scope Access for GCP Connectors
Managing resources across multiple GCP projects? Harness GCP connectors now allow cross-scope (or cross-project) authentication, so teams can securely operate in different Google Cloud environments from a single pipeline, streamlining multi-project management.
Connector Type Variables for Dynamic Expressions
Use connector type directly in your pipeline expressions, supporting even more dynamic configuration—such as branching logic or conditional steps based on the type of connector in use.
Native OIDC Step for Secure Token Exchange
Cloud security and federation are easier than ever. Introducing a native OIDC step: automatically consume connectors and generate OpenID Connect tokens right in your pipeline, making it simpler to integrate with cloud providers and external identity systems securely.
Demo Video
Exclude Pipeline Triggering User from Approvals
To enforce separation of duties, you can now prevent the user who started a pipeline from also approving it. This helps teams comply with regulatory requirements and ensures unbiased approval workflows.
Demo Video
Manual Refresh for Jira & ServiceNow Approval Steps
Harness users can now manually “refresh” the approval status from Jira or ServiceNow during pipeline execution—ensuring approval gates reflect the latest ticket state, even if updates happen after the pipeline starts.
Enhanced Email Notification Subjects
Email notifications now include the service name and environment directly in the subject line, so that stakeholders and on-call engineers can instantly assess what’s impacted—accelerating incident response and triage.
Redesigned GitOps Agent Details Page
The GitOps Agent Details page now features a cleaner UI, including paginated and sortable application lists, inline editing with a Save button, dynamic page titles that reflect the agent name, and clearly displayed agent versions. Mapped projects now link directly to their corresponding Harness projects for easier navigation.
Advanced Filtering for GitOps Agents
Managing GitOps agents is now easier with new filter options for Cluster ID, Project, Tag, and Agent Version. The UI also supports saved filters and dynamic dropdowns—improving usability for teams managing large environments.
Improved GitOps Application List View
The Application List now includes popovers with copy-to-clipboard buttons, server-side sorting for better performance, and preserved sort preferences when navigating back from detail views. Health and Sync statuses also include hoverable metadata for quick insights.
Interactive Health Status Bar in GitOps Overview
Clicking a health status (e.g., “Healthy”) on the Overview page now filters the dashboard to show only apps in that state—making troubleshooting and monitoring more efficient.
Label-Based Filtering in Sync Steps
You can now filter applications by label within GitOps Sync steps, allowing for more granular and controlled sync operations during deployments.
Accurate License Usage Metrics
To ensure fair and accurate billing, degraded and unhealthy pods are now excluded from GitOps license usage calculations—giving you a more realistic view of your usage footprint.
Fail Fast Mode—Faster Feedback and Safer Deployments
“Fail fast” means Harness will immediately stop all further stages in your pipeline if a failure is detected, rather than waiting for all parallel executions to finish. This saves resources, speeds incident feedback, and avoids unnecessary downstream actions.
Reliable Webhook Trigger Execution via Queue Service
Harness now processes custom webhook triggers through the Queue Service for better scalability and isolation, ensuring reliable performance even under high load.
View Pipeline Metadata from Templates
You can now view metadata and advanced settings for pipeline templates directly in Pipeline Studio. The view is read-only and reflects only what's defined in the YAML.
Filter Executions by Build ID or Execution ID
Easily find past pipeline runs using filters for Build ID or Execution ID, including support for comma-separated values and saved filters.
Filter Executions by Input Sets
Input Sets used in a pipeline run are now displayed as clickable links in the Inputs tab and the execution summary, helping you trace inputs quickly.
Demo Video
Enhanced Notification Templates with Error Messages
You can now include the <+notification.errorMessage> expression in notification templates to show failure details from the pipeline, stage, or step.
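Here's a quick sketch of how the expression might appear in a custom notification body. The surrounding template wrapper is simplified and illustrative; only the <+notification.errorMessage> expression comes from this release:

```yaml
# Illustrative sketch: the template wrapper below is simplified and not the
# exact notification template schema; the <+notification.errorMessage>
# expression is the one described above.
template:
  name: Pipeline Failure Alert
  identifier: pipeline_failure_alert
  versionLabel: v1
  type: Notification                     # illustrative template type
  spec:
    body: |
      Pipeline <+pipeline.name> failed.
      Error: <+notification.errorMessage>
```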
Switch Git Connector When Saving Templates
While saving a new template, you can now switch the Git connector—such as changing from project-level to account-level—without having to start over.
GitX Webhook Setup Without Connector Registration
Harness now allows GitX webhooks to be registered without requiring direct webhook creation in the Git provider, supporting intermediary service integration.
Granular Create/Edit Permissions for Pipelines and Templates
Harness now supports separate Create and Edit permissions for pipelines and templates, giving teams more control over access without over-provisioning.
Project-Level Pipeline Execution Concurrency
You can now prioritize execution concurrency by project. Assign High-Priority and Low-Priority projects to reserve slots for critical workloads.
Demo Video
Custom Git Status Checks for PR Pipelines
Customize the Git status checks sent during PR-triggered pipelines. Skipped stages now report “Success,” ensuring PR merges aren’t blocked unnecessarily.
View Template Version in Execution
You can now see the exact template version used in any pipeline execution, helping you track changes and ensure traceability.
Barrier Synchronization Across Parent and Child Pipelines
Barriers defined in a parent pipeline can now be reused by child pipelines via runtime inputs, allowing coordinated rollouts and consistent behavior across pipeline chains.
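A rough sketch, assuming the standard Barrier step and Flow Control section: the parent pipeline declares the barrier, and the child pipeline's Barrier step takes its reference as a runtime input so both sides synchronize on the same identifier (all names below are placeholders):

```yaml
# Parent pipeline (placeholder names): declares the shared barrier under Flow Control.
pipeline:
  name: Parent Rollout
  identifier: parent_rollout
  flowControl:
    barriers:
      - name: deploy-sync
        identifier: deploy_sync
  # ... stages that run the child pipeline and pass the barrier identifier as a runtime input ...
---
# Child pipeline step: the barrier reference is a runtime input, so the parent
# can supply its barrier identifier when it triggers the child.
- step:
    type: Barrier
    name: Wait For Sibling Deploys
    identifier: wait_for_sibling_deploys
    timeout: 10m
    spec:
      barrierRef: <+input>
```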
Harness is committed to delivering the tools engineering teams need to build, test, and ship with increased velocity and confidence. To dive deeper into any of these enhancements, click the links above or visit the Harness Developer Hub. Stay tuned—Q3 will bring even more innovation!
Learn about Effective feature release strategies for successful software delivery


Software delivery isn’t slowing down, and neither is Harness AI. Today, we’re introducing powerful new capabilities that bring context-aware, agentic intelligence to your DevOps workflows. From natural language pipeline generation to AI-driven troubleshooting and policy enforcement, Harness AI now delivers even deeper automation that adapts to your environment, understands your standards, and removes bottlenecks before they start.
These capabilities, built into the Harness Platform, reflect our belief that AI is the foundation for how modern teams deliver software at scale.
“When we founded Harness, we believed AI would be a core pillar of modern software delivery,” said Jyoti Bansal, CEO and co-founder of Harness. “These new capabilities bring that vision to life, helping engineering teams move faster, with more intelligence, and less manual work. This is AI built for the real world of software delivery, governed, contextual, and ready to scale.”
Let’s take a closer look.
Imagine a scenario where an engineer joins your organization and can create production-ready CI/CD pipelines that align with organizational standards on day one! That’s just one of the many use cases Harness AI makes possible. The AI doesn’t just generate generic pipelines; it pulls from your existing templates, tool configurations, environments, and governance policies to ensure every pipeline matches your internal standards. It’s like having a DevOps engineer on call 24/7 who already knows how your system works.
Easy to get started with your organization-specific pipelines
Teams today face a triple threat: faster code generation (thanks to AI coding assistants), increasingly fragmented toolchains, and mounting compliance requirements. Most pipelines can’t keep up with the increased volume of generated code.
Harness AI is purpose-built to meet these challenges. By applying large language models, a proprietary knowledge graph, and deep platform context, it helps your teams:
| Capability | What It Does |
|---|---|
| Pipeline Creation via Natural Language | Describe your app in plain English. Get a complete, production-ready CI/CD pipeline without YAML editing (see the sketch below the table). |
| Automated Troubleshooting & Remediation | AI analyzes logs, pinpoints root causes, and recommends (or applies) fixes, cutting mean time to resolution. |
| Policy-as-Code via AI | Write and enforce OPA policies using natural language. Harness AI turns intent into governance, instantly. |
| Context-Aware Config Generation | AI understands your environments, Harness-specific constructs, secrets, and standards, and builds everything accordingly. |
| Multi-Product Coverage | Supports CI, CD, Infrastructure as Code Management, Security Testing Orchestration, and more, delivering consistent automation across your stack. |
| LLM Optimization | Harness dynamically selects the best LLM for each task from a pool of models and falls back to another model if one becomes unavailable. |
| Enterprise-Grade Guardrails | Every AI action is RBAC-controlled, fully auditable, and embedded directly in the Harness UI; no extra setup needed. |
Watch the demo
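To make the pipeline-creation row above concrete, here is a rough sketch of the kind of skeleton such a prompt could produce. The prompt, identifiers, and commands are illustrative, infrastructure and service settings are omitted, and real output will reflect your own templates, connectors, and policies:

```yaml
# Prompt (illustrative): "Build my Node.js service, run its tests, and deploy it
# to the staging Kubernetes environment."
pipeline:
  name: Node Service Build and Deploy
  identifier: node_service_build_and_deploy
  projectIdentifier: my_project          # placeholder identifiers
  orgIdentifier: default
  stages:
    - stage:
        name: Build and Test
        identifier: build_and_test
        type: CI
        spec:
          cloneCodebase: true
          # platform / runtime settings omitted in this sketch
          execution:
            steps:
              - step:
                  type: Run
                  name: Install and Test
                  identifier: install_and_test
                  spec:
                    shell: Sh
                    command: |
                      npm ci
                      npm test
    - stage:
        name: Deploy to Staging
        identifier: deploy_to_staging
        type: Deployment
        spec:
          deploymentType: Kubernetes
          # service, environment, and execution strategy omitted in this sketch
```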
Organizations using Harness AI are already seeing dramatic improvements across their DevOps pipelines:

Harness AI isn’t an add-on or a side tool. It’s woven directly into the Harness Platform, designed to support every stage of software delivery, from build to deploy to optimize.
The result: smarter workflows, fewer manual steps, and a faster path from idea to impact.
AI shouldn’t add complexity. It should eliminate it.
These new capabilities are available now. Whether you’re onboarding new teams, enforcing security policies, or resolving pipeline issues faster, Harness AI is here to reduce toil and accelerate your path to production.
Harness AI is available for all Harness customers. Read the documentation here. Get started today!