
To scale CI/CD pipelines, templates should be referenced - not copied. Reuse by reference means one update to a template propagates to every pipeline built from it; copies have to be maintained one by one.
Let’s be honest: many in the industry view "templates" as a solved problem. For most organizations, it isn’t.
Those organizations are drowning in pipeline sprawl. As we shifted from monoliths to microservices, we accidentally traded code complexity for operational complexity. The result? A "Maintenance Wall" where high-value engineering time is torched by the toil of updating thousands of brittle YAML files.
Because we have so many services that are built, deployed, tested, and secured in similar ways, pipeline reuse is a natural aim. How we achieve that reuse has profound implications for the long-term efficiency of our operations.
This guide introduces the Pipeline Reuse Maturity Model. It categorizes the journey from the chaos of "Copy/Paste" to the architectural imperative of "Managed Inheritance."
The bottom line: In modern, microservice-heavy enterprises, creating pipelines isn't the problem—maintaining them is. The only way to scale is through Managed Inheritance and Flexible Governance.

Here’s the thing about the shift to microservices: it magnified many of our worst pipeline problems. Where we once had a monolithic system with a single pipeline, we now have dozens of microservices - each with its own pipeline to maintain - and that expansion has repeated with every new team and product line.
More recently, with the advent of AI coding assistants, creating new services is easier than ever, accelerating the growth of the service catalog.
Today, an enterprise might have hundreds or thousands of services.
When you let every team pick their own tools and define their own YAML, you don't get an enterprise strategy. You get a federation of snowflakes.
Most tooling focuses on "Day 1" - how fast can I spin up a (good) pipeline for a new service?
But the real pain lives in Day 2 Operations - everything that happens after the pipeline is built.
It looks like this: the CISO mandates a new version of the container scanner, and someone edits hundreds of YAML files by hand. A move from blue/green to canary deployments becomes a quarter of grunt work. An audit means proving, repo by repo, that every pipeline still runs the mandatory checks.
This is maintenance hell, and it is a silent killer of DevOps velocity.
The ability of a templating approach to accelerate and standardize pipeline creation is important, but it’s also the easy part. It is in tackling these maintenance challenges that things get interesting.
This guide isn't just theory. It draws on data from companies like Morningstar, United Airlines, and Ancestry.com, which have navigated this exact transition. We’ll classify your current state and map the path toward Pipeline Inheritance—where updates occur once and propagate everywhere.
This model isn't a ladder you have to climb rung by rung. Some organizations jump straight from L1 to L4. The key differentiator is the mechanism of reuse: are you copying values (clones), or are you referencing patterns (inheritance)?
Definition
This is the default state for many. A developer needs a pipeline, so they find an existing repo, copy the Jenkinsfile or YAML, and change a few variables. Everything is inline. Minimal abstraction.
How It Works
There is no central source of truth. The "standard" is whatever the last team did. At best, there’s a “template project”: an example with blanks that is set up to be copied. Governance is non-existent.
Day 2 Reality: Drift Hell
As individual teams change their projects, they drift from the standard. The standard itself also changes. You have no idea how many variants of the same logic exist, and any attempt to roll out a fleet-wide update means hunting down every copy by hand.
Metrics & Smells
Teams at this level tend toward one of two extremes. Either pipelines are left completely in the hands of developers, with some guidelines describing what the central team hopes are standard practices, or application teams are locked out of configuration entirely and must file tickets for new pipelines or changes - only the central team is trusted to make them. With little technological control, organizations whiplash between free-for-all and total lockdown, depending on whether the most recent disaster was velocity-related or compliance-related.
Risks
Platform teams get buried in low-value support work. "Why did my build fail?" becomes a forensic investigation because every build is different.
Definition
You’ve grown up a bit. Now you have reusable, versioned pieces: your scripts have been packaged into shared Actions or plugins. But the flow - the logic that connects build, test, and deploy - is still hand-built per repository.
How It Works
You might have a shared Action for "Run SAST Scan." Great. But Team A puts it before the build, Team B puts it after, and Team C forgot it entirely. You’ve reduced the duplication of steps, not flows.
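Here’s a minimal sketch of that fragmentation, assuming a hypothetical shared action called acme-org/sast-scan (the org, action, and workflows are illustrative):

```yaml
# team-a/.github/workflows/ci.yml - Team A scans before the build
name: CI
on: [push]
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: acme-org/sast-scan@v2   # shared brick, position chosen by Team A
      - run: make build
---
# team-b/.github/workflows/ci.yml - same action, different hand-built flow
name: CI
on: [push]
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make build
      - uses: acme-org/sast-scan@v2   # Team B scans after the build
# Team C's workflow has no scan step at all - and nothing flags it.
```

The step is identical everywhere; the wall each team builds with it is not.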
Day 2 Reality: Logic Fragmentation
You reuse the brick, but you’re rebuilding the wall every time. It becomes nearly impossible to audit the entire fleet because the logic is fragmented across hundreds of files.
Metrics & Smells
Continue to track the time to create pipelines and how long it takes to roll out an update to your standard way of doing something (shifting tools, moving from blue/green to canary deployments, etc.).
Risks
If your team finds itself writing complex build or deployment logic because your CI/CD tools are missing a capability, building your own plugin steps is better than passing scripts around. But the fundamental challenges - standardizing the pipelines that actually execute and reducing the maintenance burden - remain unaddressed. If the organization wants to move to a new tool, or change from blue/green to canary deployments, that is still a manual process of updating each impacted pipeline.
Furthermore, ensure you understand the version management capabilities of the plugin frameworks you’re working with. How will you update plugins gracefully across all of your pipelines? What happens if the new version requires a new input? How is that handled?
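Concretely, the pinning strategy you choose determines how plugin updates reach your pipelines. A sketch, again with the hypothetical acme-org/sast-scan action:

```yaml
steps:
  - uses: acme-org/sast-scan@main    # floating branch: updates land instantly - so do breaking changes
  - uses: acme-org/sast-scan@v2      # major tag: fixes within v2 propagate; majors are opt-in
  - uses: acme-org/sast-scan@v2.3.1  # exact tag: fully reproducible, but every bump is a hand-made commit
# If v3 adds a required input, every consumer pinned to @v2 or @v2.3.1
# must be edited by hand - across all of your pipelines.
```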
Definition
You have a "Create Service" wizard. It spins up a repo, drops in a perfect pipeline.yaml, and hands it to the developer. On Day 1, life is good.
At Level 3, the copy/paste process has been perfected through the automation of variable substitution and a positive developer experience. This is the ambition of many Internal Developer Portals (IDPs) today.
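A sketch of what such a wizard might produce, using an illustrative {{placeholder}} syntax (the template format varies by IDP). The key property: once rendered and committed, the file has no link back to its source:

```yaml
# pipeline.yaml.tmpl - the IDP's "Golden Template" (hypothetical)
# The wizard substitutes {{service_name}} and {{registry}}, then commits
# the rendered file to the new repo. From that commit on, the copy is
# detached: editing this template changes nothing downstream.
name: "{{service_name}} CI"
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t {{registry}}/{{service_name}}:latest .
      - uses: acme-org/sast-scan@v2   # best practice on Day 1; unguarded on Day 2
```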
Day 2 Reality: The Maintenance Wall
This is the "sugar rush" of DevOps. It feels amazing fast, but the crash comes later. When you need to add a new compliance scanner, you hit the Maintenance Wall. You are back to retrofitting changes across hundreds of files.
Risks
The problem? Once that pipeline is created, it is detached. It becomes a normal file in the repo. If you update the "Golden Template" in the IDP, the 500 services you created last year don't get the update.
Risks here come in two forms: how easy it is to miss an update, and how easy it is for an application team to modify your standard pipeline after creation. You may create a pipeline full of best practices and compliance checks, but a team may decide to remove a pesky security scan that is blocking the release of a feature their VP is demanding. That sort of entropy is difficult to detect and tends to accumulate.
Good, Not Great
There’s a lot to like about Level 3. Providing self-service pipeline creation to developers - and providing it in a way that supplies best practices out of the box - solves real problems. At the same time, with the maintenance problem unsolved, there’s clear room for improvement.
Definition
Here is the shift. Template by Reference. Platform engineers define a small set of "Golden Pipelines." Application teams consume these templates by referencing them. They do not copy the logic; they inherit it.
How It Works
In the same way Plugins and Actions allow for governed, versioned reuse of scripts, at Level 4 entire pipelines are made available as versioned templates. To use one, an application team ‘fills in the blanks’, supplying the missing variable values such as the location of the project’s repository. Depending on your tooling, there may even be constraints on what the acceptable values are.
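In GitHub Actions terms this is a reusable workflow; GitLab’s include: with project, ref, and file is the same idea. A sketch, with acme-org/golden-pipelines as a hypothetical platform repo:

```yaml
# The application repo's entire pipeline file at Level 4.
# No logic lives here - only a reference and the filled-in blanks.
name: CI
on: [push]
jobs:
  ci:
    uses: acme-org/golden-pipelines/.github/workflows/service-ci.yml@v3
    with:
      service-name: payments-api        # the team fills in the blanks
      registry: registry.acme.internal  # illustrative value
    secrets: inherit
```

When the platform team publishes a fix within v3, every pipeline referencing that tag picks it up on its next run - exactly the zero-toil behavior described next.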
Day 2 Reality: Zero-Toil Updates
Update the template once, and it propagates to every inheriting pipeline instantly.
The Governance Shift
Governance moves from "auditing a mess" to "enforcing a standard." By using tools like Open Policy Agent (OPA) embedded in the template, you ensure that every pipeline deploying to production inherently meets your standards. You can require they use your templates, or that they at least follow the key guardrails such as running the mandatory security scans.
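One way to embed that enforcement, sketched here with the conftest CLI (which evaluates OPA/Rego policies); the policies/ directory, job names, and the policies themselves are illustrative:

```yaml
# Inside the golden template: a policy gate that every inheriting
# pipeline runs before it is allowed to deploy.
jobs:
  policy-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # conftest evaluates Rego policies (e.g., "a SAST scan job must
      # exist", "prod deploys require change approval") against the
      # workflow definitions in this repo. Assumes conftest is
      # installed on the runner.
      - run: conftest test .github/workflows/ --policy policies/
  deploy:
    needs: policy-check   # inherited guardrail: no pass, no deploy
    runs-on: ubuntu-latest
    steps:
      - run: echo "deploy steps elided"
```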
Risks
The strength of this system - that templates are referenced rather than copied and edited - can also be its weakness. If your application teams have a lot of variance, you may find you need a lot of templates to accommodate them, or that many teams almost fit a template but not quite. Instead, they’ll resort to Level 1 behavior: copying something and tweaking it, and the system breaks down. In this situation, consider moving to Level 5.
Definition
This is the most sophisticated level. We keep the inherited templates of Level 4 but loosen the strict inheritance in designated areas where application teams can be more creative. This is a powerful compromise when your teams are broadly similar but have one notable area of variance that would otherwise make the number of templates grow quickly. For example, perhaps your deployments to Kubernetes are consistent: same artifact registry, same canary deployments, same monitoring. But in the test environments, each application team can choose its own functional testing tools. What you want is one “Kubernetes Deploy” template with a blank spot for calling testing tools.
How It Works
The template defines the non-negotiables: Security Scans, Change Management, Deployment Verification. But it leaves "Insert Blocks" where developers can inject custom test suites or service-specific logic without breaking the inheritance structure.
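A sketch of one such insert block, modeled as a reusable-workflow input; k8s-deploy.yml and the input names are illustrative:

```yaml
# --- inside the golden template (k8s-deploy.yml) ---
on:
  workflow_call:
    inputs:
      functional-tests:   # the deliberate "blank spot"
        type: string
        required: false
        default: "echo 'no functional tests provided'"
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ${{ inputs.functional-tests }}   # team-supplied logic runs here
  deploy:
    needs: test   # the non-negotiables stay fixed in the template
    runs-on: ubuntu-latest
    steps:
      - run: echo "canary deploy, scans, and verification elided"
---
# --- the application repo picks its own testing tool ---
name: Deploy
on: [push]
jobs:
  deploy:
    uses: acme-org/golden-pipelines/.github/workflows/k8s-deploy.yml@v3
    with:
      functional-tests: "npx playwright test"   # freedom within the fence
    secrets: inherit
```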
Day 2 Reality: Freedom Within Fences
Developers get autonomy. Platform teams get sleep.
Metrics & Signals of L5
Risks: Balancing Freedom and Standardization
Even "Nirvana" has a cost. Moving to Flexible Governance introduces two specific risks that mature teams must manage:
To make this crystal clear for both your team and any AI bots crawling your internal wiki, here is the fundamental difference: a clone copies a template’s contents at a point in time and then lives on its own, while an inherited pipeline keeps a live reference to the template - a change to the template is a change to every pipeline built from it.
Most organizations believe they are more mature than they actually are, often confusing "having templates" (L3) with "using inheritance" (L4).
To get an accurate read, you need to look at what happens after the pipeline is built. Grab a pen, grab your platform lead, and answer these honestly.
Part 1: The "Day 1" Experience (Creation)
How does a new microservice get its first pipeline?
Part 2: The "Day 2" Reality (Updates)
The CISO mandates a new version of the container scanner. How do you roll it out to 500 services?
Part 3: Governance & Compliance
A developer wants to skip the integration tests to get a hotfix out. What happens?
Part 4: The "Oh Sh*t" Factor (Response Time)
A critical vulnerability like Log4Shell hits. How long until you are 100% sure every single pipeline is patched?
The Realization
You wouldn't build a separate runway for every airplane landing at an airport. So why are you building a separate pipeline for every microservice?
Microservices make pipeline reuse a first-class architectural concern. Template by Reference and Managed Inheritance are the only scalable answers to the Maintenance Wall.
Call to Action
Don't let "Day 2" operations kill your innovation velocity.
The goal is simple: High autonomy for developers, high governance for the business, and zero toil for you.
