Adopting GitOps best practices means managing your software development process and automating infrastructure using Git as a single source of truth. When you deploy an application, your continuous delivery (CD) tool automatically deploys, rolls back, or rolls forward the clusters needed to support it. The more advanced your GitOps practice, the more of this happens automatically. GitOps combines the best of infrastructure management and automation with DevOps best practices to achieve continuous deployment.
In its final form, GitOps means no more manual infrastructure work, which means faster release velocity. It means blue/green and canary deployments that use machine learning to detect failures, and deployments that roll back entirely on their own when necessary, clusters included.
But what’s needed to support it, and where does one begin? In this article, we discuss the underlying architecture that makes GitOps possible, along with tools you’ll need and advice for getting started.
So, how does GitOps work? GitOps principles are integral to modern continuous deployment. GitOps is an approach for companies looking to simplify deployments of cloud-native applications and gives developers more autonomy over how they deliver their applications. Whether it’s adding a firewall rule, defining a VPC, or fixing a UI bug, all of it should come from the central plane of source control.
GitOps focuses on the what instead of the how: all application components are written as declarative descriptions of the desired result, not step-by-step directions for building them out. In the declarative paradigm, a developer describes their desired state in code, and the system they are interacting with determines when, how, and where to place applications to meet those requirements, automatically reconciling this desired state with the actual state of the cluster.
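As a minimal sketch of the declarative paradigm, a Kubernetes Deployment manifest (the service name and image below are hypothetical) states the desired outcome, three replicas of a service, and leaves the how to the cluster:

```yaml
# Hypothetical example: declare *what* you want, not *how* to get it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api            # hypothetical service name
spec:
  replicas: 3                  # desired state: three running copies
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: example-api
          image: registry.example.com/example-api:1.4.2  # hypothetical image
          ports:
            - containerPort: 8080
```

Kubernetes continuously works to make the cluster match this description; if a pod dies, a replacement is scheduled without any imperative instruction from the developer.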
The key benefit of declarative infrastructure and declarative configuration is that they allow software development teams to focus on their application first, not the logistics of deployment and runtimes. In other words, instead of manual processes, GitOps enables developers to use infrastructure automation, or infrastructure as code, with less friction and fewer approval bottlenecks throughout the deployment process.
Check out our guidelines for GitOps best practices for cloud infrastructure.
To implement a GitOps workflow in your infrastructure management, you need five things:
Let’s explore each in detail.
When you define your infrastructure as code, you'll store it in your Git repository. That becomes your source of truth, which your CD agent will monitor and compare to the actual cluster configuration. Storing these configuration files as code ensures the same infrastructure environment is deployed each time.
Running pull requests through your CI tool lets you incorporate testing into your declaratively defined GitOps infrastructure deployments. This is a crucial piece of your infrastructure management: automating tests so bugs are caught before changes are merged is vital to your GitOps workflow.
For GitOps to work, you’ll need to be using containerization for your microservices to package up your code, dependencies, configuration, process, and more during software development.
You’ll need a tool like Kubernetes to orchestrate your container deployments. When you install an agent for your CD tool, you’ll install it on your Kubernetes cluster.
A controller like Argo CD is what makes GitOps function. You'll install its agent on your Kubernetes cluster, where it monitors differences between your Git-defined infrastructure in the source control system and the actual production environment, and keeps them in sync.
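To make this concrete, here is a sketch of an Argo CD Application manifest (the repository URL, path, and namespaces are hypothetical placeholders) that tells the controller which Git directory to treat as the source of truth and where to sync it:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app
  namespace: argocd            # Argo CD's own namespace
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/example-infra.git  # hypothetical repo
    targetRevision: main
    path: environments/production                               # hypothetical path
  destination:
    server: https://kubernetes.default.svc   # the cluster Argo CD runs in
    namespace: example-app
  syncPolicy:
    automated: {}              # sync whenever Git changes
```

Once this Application exists, the agent tracks the named Git path and reports every deviation between it and the live cluster.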
Together, these tools provide most of what you need to define your infrastructure in Git and configure things to automatically implement it when people commit code to an application. Let’s explore each of those steps in detail.
It may go without saying, but the first step is using a container tool like Docker to containerize your applications. From there, teams commonly use Kubernetes (and perhaps also Terraform) to manage infrastructure declaratively, so it exists as code, and to orchestrate changes.
You’ll need to define your infrastructure in Git, which means you’ll need a way to turn infrastructure into code. For many, that means using Kubernetes, whose declarative configuration is easy to place under version control. It might also mean using Helm charts to manage packages, or Kustomize to manage your configuration and YAML templates more easily.
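As one sketch of keeping that YAML manageable, a Kustomize `kustomization.yaml` (the directory layout and file names here are hypothetical) layers environment-specific overrides on top of a shared base:

```yaml
# environments/production/kustomization.yaml (hypothetical layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base               # shared manifests: Deployment, Service, etc.
patches:
  - path: replica-count.yaml # production-only override
images:
  - name: example-api
    newTag: "1.4.2"          # pin the image tag promoted to production
```

Rendering it with `kubectl kustomize environments/production` (or letting your CD agent do so) produces the full manifests without duplicating the base YAML for every environment.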
Your infrastructure defined and stored in the Git repository becomes your singular source of infrastructure truth. All other versions or descriptions, including those of clusters currently in production, are considered either in or out of sync with that truth.
[Pull quote: Your infrastructure defined and stored in a Git repo becomes your singular source of infrastructure truth.]
Within Git, there are a few ways you might think about storing that infrastructure:
Option 1: Maintain one Git repository
Within that repository, you’ll create at least two branches: one for the application and one for the environment (infrastructure). This is the simpler way to go, as it’s one less repository to manage, and your application branch can include all your microservices. The downside is that everyone with access to the repository also has access to your infrastructure definitions, which you might not always want to allow.
Option 2: Maintain two or more Git repositories
Create, at minimum, one repository for your application and one for your environment. This gives you greater control over who has access to your production environment, and allows developers and operations teams to work in separate repositories, if they prefer that. But, it’s more work to manage.
Next, you’ll want to configure your tooling so it automatically resolves differences between your desired cluster state in Git and the actual state in your production environment. Broadly, there are two approaches:
1. Push configuration changes
In a more traditional setup, you might build a trigger into your pipeline (usually with Jenkins or another CI/CD tool) so it executes a command to push your desired infrastructure state into production. While this updates your cluster to the newly defined state, the approach is one-way and passive: if the two states later diverge, there’s no automatic enforcement. Your cluster could be stuck in an error loop and Jenkins would never detect or resolve it, because that’s not what it’s built to do. You’d have to detect the error yourself and push an update, which is tedious because you may also have to update the application.
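A minimal sketch of this push model, using a GitHub Actions workflow as the CI tool (the file paths and secret name are hypothetical), simply applies the manifests on every push to `main`:

```yaml
# .github/workflows/deploy.yml (hypothetical push-based pipeline)
name: push-deploy
on:
  push:
    branches: [main]
jobs:
  apply:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Push desired state to the cluster
        env:
          KUBECONFIG_DATA: ${{ secrets.KUBECONFIG_DATA }}  # hypothetical secret
        run: |
          echo "$KUBECONFIG_DATA" | base64 -d > kubeconfig
          KUBECONFIG=kubeconfig kubectl apply -f environments/production/
```

Notice that nothing runs after the push completes, which is exactly the weakness described above: drift between Git and the cluster goes unnoticed until the next commit.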
2. Pull configuration changes
The newer, more resilient way to update your cluster is to install an agent like Argo CD to actively monitor and resolve differences between the desired state in Git and the actual cluster state. The advantage here is that the agent is constantly checking to see that the two states are in sync. If they fall out of sync, it takes action. And, it works both ways. If the production environment falls out of sync, it can catch and resolve that.
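In Argo CD, for instance, this pull-based reconciliation is switched on with an automated sync policy in the Application spec (a fragment, shown here as an opt-in sketch):

```yaml
# Fragment of an Argo CD Application spec enabling pull-based reconciliation
syncPolicy:
  automated:
    prune: true     # delete cluster resources that were removed from Git
    selfHeal: true  # revert manual changes made directly to the cluster
```

With `selfHeal` enabled, the agent reverses out-of-band changes to the live cluster as well as applying new commits, giving you the two-way behavior described above.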
It often takes a bit of a culture change for infrastructure teams to start using pull requests and merges in Git, and for developers to provision their own infrastructure. But on the whole, it’s the path to eliminating a great deal of manual infrastructure deployment work.
Once your architecture is set up, here’s how things work in practice: a developer commits a change to the Git repository, the CI tool runs tests against the pull request, the change is merged into the source-of-truth branch, and the CD agent detects the new desired state and syncs the cluster to match it.
When developing cloud-native applications, GitOps can save you vast amounts of time and effort that would otherwise be spent on manual tasks. Rather than provisioning infrastructure by hand, you can deploy it from YAML files as part of your deployment. And with the right architecture and tools to monitor your Git-defined and actual infrastructure states and reconcile them, you eliminate a big blocker to fast releases.
Get started for free with Harness CD & GitOps as a Service!
Enjoyed reading this blog post or have questions or feedback?
Share your thoughts by creating a new topic in the Harness community forum.