A pipeline is an automated process that takes your code through all the packaging, testing, deployment, and release procedures required to get it into the hands of your customers. When well-designed and implemented, a pipeline takes all the manual toil involved in releasing software out of the hands of developers, and allows them to focus on writing more features with higher quality. Let's find out what challenges you may face when it comes to your software delivery pipeline, and how you can improve it drastically.
We talked about what, but as with most things, the why is much more interesting. The need for a software delivery pipeline is driven both by good engineering practices as well as the needs of the business. Let’s walk through a few common situations that most people working with software are familiar with.
Your company is on the verge of signing the biggest deal in company history, and to get the contract signed, they’ve committed to delivering several new features on a very aggressive timeline. You run the math, and you know your development team can build it fast enough, but it’s going to be extremely tight. If your developers are wasting time on toil, or can’t get meaningful feedback fast enough, you’ve set yourself up for failure.
The often-cited quote usually misses the part where one bad apple spoils the bunch. In industries such as finance, healthcare, and many others, a small bug can have devastating consequences. Preventing issues starts with CI (Continuous Integration) pipelines, where you run rigorous automated testing on your codebase long before it touches your production environment.
Over the years, I’ve had access to systems where one misstep on my part would have resulted in a slew of articles from all the usual suspects talking about the latest major outage at a Fortune 500 company. This kind of power is scary, but it’s far scarier if you don’t know who has it, why they have it, and whether they should even have it to begin with. Deploying to production is a big deal everywhere. Without a CD (Continuous Delivery) pipeline, keeping tabs on who is allowed to do what becomes a game of managing countless servers and systems.
Waste doesn’t scale. Whether it’s money or time (time is money), small problems expand exponentially when you double your size year over year, add new services, or hire more staff. Spending an hour building and deploying a service by hand can work when you have three services and deploy once a week. Move up to daily deployments for 10 services… well, you can do the math on that.
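To make that math concrete, here's a quick back-of-the-envelope calculation. The numbers are illustrative assumptions, not benchmarks:

```python
# Back-of-the-envelope cost of manual deployment toil (illustrative numbers).
HOURS_PER_DEPLOY = 1  # one hand-built, hand-deployed service

# Starting point: 3 services, each deployed once a week
weekly_toil_small = 3 * 1 * HOURS_PER_DEPLOY   # 3 hours/week

# Scaled up: 10 services, each deployed daily (5 working days)
weekly_toil_large = 10 * 5 * HOURS_PER_DEPLOY  # 50 hours/week

print(weekly_toil_small)  # 3
print(weekly_toil_large)  # 50
```

Fifty hours a week is more than a full-time engineer doing nothing but deployments, and that's before anything goes wrong.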
Did that last section hit home at all? It’s okay. We’ve all been there. The first step is admitting you have a problem. Let’s talk about solutions, and ways you can scale up your processes.
DevOps isn’t just tools, it’s a culture, and it starts with Software Development - long before you stand up a Jenkins box or build out infrastructure. Version control is ubiquitous in software development, but usage should expand beyond just writing source code for your application. Build automation will be limited in value without automated testing. Regression tests, integration tests, and performance tests are part of writing good software, not just responsibilities dropped on QA after the fact.
Beyond testing, having a good process around pull requests, code changes, branching, and everything that happens ahead of your build process will allow you to fully realize the benefits of your software delivery pipeline.
Continuous Integration, Continuous Delivery, and Continuous Deployment are concepts that are often melded into a monolithic term of CI/CD, and in this house we are against monoliths. Why should you care? For starters, CI is a mostly closed process. It’s the factory that assembles and tests the end product. CD is out in the real world, where you have to deal with fallible humans, benevolent humans, and end-users. The challenges are distinct, and require specific solutions for each process, rather than using a CI tool to do CD or vice-versa.
A good CI/CD process starts with CI that runs your tests, runs security scans/analyses, packages, and ships your artifact. While there’s certainly blurring of the lines, this is your hand-off point from CI to CD. You have your final product ready to ship, and the CD pipeline takes that deliverable and runs with it.
The final note here is that delivery is not the same as deployment. Continuous Delivery gets code onto servers but keeps barriers, such as manual approvals, in front of release; Continuous Deployment removes those barriers, rapidly getting new features and bug fixes into the hands of your end-users.
The purposes of each process are distinct, so don’t make the mistake of mixing and mashing. Instead, design your pipelines as components with modularity in mind, including concerns such as reuse and interoperability.
Engineers love to build things, so focus that energy on building things to make the lives of your customers better. There are 20+ quality CI/CD tools in the ecosystem, ranging from open source tools to white-glove SaaS platforms that escort your code gracefully from your git repository into the hands of your users. The choices are better than they have ever been, and in most cases, you can achieve fully automated pipelines with little to no scripting to maintain.
A software delivery pipeline should be focused on what and why. “How” is the same problem that everyone deals with. Boring! We all have to connect to GitHub, build our Java code, build our Docker image, connect to our artifact repository, so on and so forth. These are solved problems. For the same reason you don’t go to a restaurant and give the chef your recipe and ingredients, you don’t focus on the how of pipelines.
Declarative pipelines describe what you want, and the tools of the trade give you those results. Some of these tasks are very complex to implement for a fairly simple result, and everyone has already done it before. Declarative pipelines allow you to focus on making your process flexible, modular, and repeatable.
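As a sketch of the idea, a declarative pipeline is just data describing the desired outcome, which an engine then turns into actions. The schema and engine below are illustrative, not any vendor's actual format:

```python
# A declarative pipeline describes *what* you want; an engine figures out
# *how*. This schema is a made-up illustration, not a real tool's format.
pipeline = {
    "name": "payments-service",
    "stages": [
        {"type": "build",  "tool": "docker"},
        {"type": "test",   "suites": ["unit", "integration"]},
        {"type": "deploy", "env": "staging", "strategy": "canary"},
    ],
}

def run(pipeline):
    """Stub engine: walks the declaration instead of hand-written scripts."""
    return [f"{s['type']}:{s.get('env', '-')}" for s in pipeline["stages"]]

print(run(pipeline))  # ['build:-', 'test:-', 'deploy:staging']
```

The point is that changing your process means editing the declaration, not rewriting imperative scripts.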
When it comes down to Continuous Delivery, pipelines are orchestration above all else. Writing a script to put something onto a server and hit the start button is easy. Delivering software to your end-users in an efficient, repeatable, fault-tolerant, zero-downtime way is incredibly challenging. The right approach to your CD strategy is to pull together all the best-in-class tools and tie them together under one process.
As an artifact moves from your development environment, to your staging environment, to production, there’s a lot of automated testing and validation happening, and this has to be orchestrated and documented. To do this, you’re going to need to orchestrate security scanning tools, testing tools, and change management solutions such as Jira to document these efforts. Finally, once an artifact is ready to go to production, you may be required to have approval gates, and this is often a manual process. This is where being able to tie into solutions like ServiceNow comes in handy.
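The promotion flow above can be sketched as a simple orchestration loop. The function names and gate mechanism here are hypothetical stand-ins for whatever your tooling provides:

```python
# Sketch of CD orchestration: promote an artifact through environments,
# with an approval gate before production. Hook names are hypothetical;
# a real pipeline would call out to scanners, test suites, and systems
# like Jira or ServiceNow at each step.
def promote(artifact, envs, approve=lambda env: True):
    history = []
    for env in envs:
        if env == "production" and not approve(env):
            history.append(f"{env}: blocked awaiting approval")
            break
        history.append(f"{env}: deployed {artifact}")
    return history

print(promote("app:1.4.2", ["dev", "staging", "production"],
              approve=lambda env: False))
# ['dev: deployed app:1.4.2', 'staging: deployed app:1.4.2',
#  'production: blocked awaiting approval']
```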
Your artifact is now in production, and it started successfully serving traffic.
No. Not even close.
If your new application version starts and immediately starts spitting errors at users, or the CPU spikes, you better know about it - and fast. You also better have a backout plan. The last mile of delivering to your customers is to assure quality by tying in your APM and logging tools to your deployment process. Your pipeline should not be considered successful without passing these final quality checks, and it should be able to roll back to a stable version as needed.
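A minimal sketch of that last check, assuming your APM and logging tools feed the pipeline an error rate and CPU reading (metric names and thresholds are illustrative):

```python
# After a deploy, check health metrics and roll back if the new version
# misbehaves. In practice these numbers come from your APM/logging tools;
# the thresholds here are illustrative assumptions.
def verify_or_rollback(new_version, stable_version, error_rate, cpu_pct,
                       max_error_rate=0.01, max_cpu_pct=90):
    if error_rate > max_error_rate or cpu_pct > max_cpu_pct:
        return ("rollback", stable_version)
    return ("keep", new_version)

print(verify_or_rollback("v2", "v1", error_rate=0.05, cpu_pct=40))
# -> ('rollback', 'v1')
print(verify_or_rollback("v2", "v1", error_rate=0.001, cpu_pct=40))
# -> ('keep', 'v2')
```

Only when this verification passes should the pipeline report success.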
When you have many teams working in parallel on the same codebase, branching can quickly become a nightmare, especially when operations’ and product management’s priorities are weighed in. Being able to ship code behind a feature flag simplifies managing competing priorities.
Once shipped, feature flags offer a powerful toolset to gather feedback, curate the experience of individual customers, and prevent outages with kill switches.
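In miniature, a feature flag is a runtime check against a remotely controlled configuration. This toy in-memory store is an illustration of the concept; real flag systems evaluate per-user rules server-side:

```python
# Toy feature-flag store with a kill switch. Flag and user names are
# made up for illustration; real systems fetch this state remotely.
flags = {"new-checkout": {"enabled": True, "allow_users": {"beta-tester-1"}}}

def is_enabled(flag, user):
    cfg = flags.get(flag, {"enabled": False})
    return cfg["enabled"] and (not cfg.get("allow_users")
                               or user in cfg["allow_users"])

assert is_enabled("new-checkout", "beta-tester-1")      # curated rollout
assert not is_enabled("new-checkout", "random-user")    # everyone else waits

# Kill switch: flip one bit to disable the feature for everyone, instantly,
# without a redeploy.
flags["new-checkout"]["enabled"] = False
assert not is_enabled("new-checkout", "beta-tester-1")
```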
Org to org, the software delivery process varies wildly. You have your “move fast and break things,” you have companies that ship once a year and practice perfectionism, and we all have that one Windows application that’s been running on a server since 2009 that everyone is terrified to touch because by some miracle it still works (don’t deny it). With few exceptions, all these organizations have regulations and audits to deal with, and that’s where governance comes in. When all the information exists in a nebulous “somewhere,” compiling it can be a massive headache.
Standardizing your tooling and your process is not just about the day-to-day, it’s about being able to know who deployed, how they deployed, and more, all without having to comb through so many logs that your screens look like you’re in the Matrix.
As with everything, start with your foundations. If your source control process is broken, everything downstream will also be broken. Design your pipelines with the same approach you take to everything in software engineering. Apply DRY (don’t repeat yourself) principles. Modular pipelines, not spaghetti pipelines. Don’t reinvent wheels. Remember that everything you build, someone has to maintain.
At Harness, we practice what we preach. Our processes are meticulously documented on this blog, and we are very transparent about how we build and deliver our software. When writing new code, everything starts with a pull request to the master branch, where we have a workflow to build out environments for collaboration, ahead of merging and kicking off our deployment processes.
When we deploy our software, we are utilizing zero downtime deployment strategies such as canary and blue-green, so that our end-users have the best experience possible while receiving regular feature updates and new products. Finally, we use our feature flag solution to roll out new features how we want to, without expecting development or operations to be responsible for timing and logistics.
At Harness, we provide the toolset and know-how to solve all the challenges we discussed today. We have helped our customers deploy more frequently, ship better code, control their processes, and save time.
Enjoyed reading this blog post or have questions or feedback?
Share your thoughts by creating a new topic in the Harness community forum.