As Continuous Integration continues to evolve, certain practices lend themselves to a more mature approach. A mature Continuous Integration practice provides speed, agility, and simplicity, and disseminates results in an automated fashion.
This article contains an excerpt from our eBook, Modernizing Continuous Integration. If you like the content you see, stick around to the end where we’ll link the full eBook for you. It’s free - and best of all, ungated!
Since builds occur throughout the day, a speedy and automated build is core to engineering efficiency. Externalizing the build, rather than tying up an engineer’s machine or local environment, lets the engineer continue making strides and adjustments while the build runs. Simply put, the quicker the build, the quicker feedback can be acted on - or a release candidate created and handed to a Continuous Delivery solution for deployment.
For a software engineer, a commit - or merge, for that matter - to a shared repository signals a step forward in the software development life cycle. With a commit, you are committing to trying out what you developed. Core to Continuous Integration is treating each commit as a potential release candidate and building the artifact immediately, which reduces lead time once a decision is made to deploy.
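The commit-as-release-candidate idea can be sketched in a few lines. This is a minimal illustration, not any particular CI platform’s API; the `on_commit` and `artifact_name` helpers and the `"payments"` service name are hypothetical.

```python
import hashlib


def artifact_name(service: str, commit_sha: str) -> str:
    """Tag every build output with the commit that produced it, so any
    commit's artifact can later be promoted without rebuilding."""
    return f"{service}:{commit_sha[:12]}"


def on_commit(service: str, commit_sha: str) -> dict:
    """Hypothetical CI trigger: each commit kicks off a build and records
    the resulting artifact as a potential release candidate."""
    return {
        "commit": commit_sha,
        "artifact": artifact_name(service, commit_sha),
        "release_candidate": True,  # every green build is promotable
    }


sha = hashlib.sha1(b"example change").hexdigest()
print(on_commit("payments", sha)["artifact"])
```

Because the artifact is keyed by commit SHA rather than by a manually chosen version, promoting a build to production later is a lookup, not a rebuild.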
In microservices, as in Continuous Integration, smaller pieces help reduce complexity. With smaller, functionally independent pieces such as build, test, package, and publish, identifying problems and bottlenecks becomes much easier. Changes to any one functional area can be made and tweaked in isolation, and only the corresponding steps inside the Continuous Integration platform need updating. And if certain pieces later need to run on other systems, smaller pieces make it easier to find the line in the sand for lifting or migrating functionality.
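A pipeline of small, independent stages can be sketched as below. This is an assumption-laden toy, not a real CI engine: the stage names and the `run_pipeline` helper are invented for illustration, and each stage is just a callable returning pass/fail.

```python
from typing import Callable, List, Tuple

Stage = Tuple[str, Callable[[], bool]]


def run_pipeline(stages: List[Stage]) -> str:
    """Run independent stages in order; the first failing stage names
    the bottleneck - the payoff of keeping the pieces small."""
    for name, stage in stages:
        if not stage():
            return f"failed at: {name}"
    return "success"


# Hypothetical stages - each could later be lifted onto another system.
stages: List[Stage] = [
    ("build",   lambda: True),
    ("test",    lambda: True),
    ("package", lambda: False),  # simulate a packaging problem
    ("publish", lambda: True),
]
print(run_pipeline(stages))  # failed at: package
```

Because each stage is self-contained, replacing `package` with a containerized implementation on another system changes one entry, not the whole pipeline.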
Feedback is crucial in the software development life cycle, and the Continuous Integration process is most likely the first time changes leave an engineer’s local environment. Disseminating build and test results across teams in a clear, concise, and timely manner helps engineering teams adjust and march toward a successful release candidate. Initial builds are expected to run more than once as iteration occurs, and depending on the Continuous Integration platform, implementations can vary, especially around sharing results.
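One lightweight form of dissemination is a one-line, chat-friendly build summary. The `build_summary` helper, the repository name, and the commit SHA below are all hypothetical; a real setup would push a message like this into a team channel or pull-request status.

```python
def build_summary(repo: str, commit: str, passed: int, failed: int) -> str:
    """Format a clear, concise, timely build result for a team channel."""
    total = passed + failed
    status = "passed" if failed == 0 else "FAILED"
    return f"[{repo}] {commit[:8]}: {status} -- {passed}/{total} tests green"


print(build_summary("billing-service", "9f2c7a1d4e", 118, 2))
```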
Delivering software can be seen as continuous decision-making. Getting your ideas to production in a safe manner requires confidence-building exercises in the form of tests and approvals, and safe mechanisms to deploy, such as a canary deployment. Continuous Delivery is the ability to deliver changes to your users in an automated fashion. Continuous Delivery is interdisciplinary, bringing in automation practices around monitoring, verification, change management, and notifications/ChatOps. Without an artifact to deploy, there would be no deployment; Continuous Integration provides the artifact to deploy. However, Continuous Integration is not without its challenges.
Because builds and release candidates closely track advancements in development technology - new languages, new packaging formats, and new paradigms for testing the artifact - expanding the capabilities of a Continuous Integration implementation can be challenging. With the introduction of containerization technology, for example, both the firepower and the velocity required to build increased.
As the velocity of builds increases to match the mantra that “every commit should trigger a build,” development teams can generate several builds per team member per day, if not more. And the firepower required to produce a modern containerized build has grown over the years compared to traditional application packaging.
The infrastructure required to run a distributed Continuous Integration platform can be as complex as the applications it builds, because of the heavy compute requirements. Consider how much of your local machine’s resources are tied up during a local build-and-test cycle, then multiply that by the number of people on a team or in an organization. Distributed build runners are one notably complex area; deciding when new build nodes are spun up and spun down can fall to the platform or to the end user.
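The spin-up/spin-down decision for build nodes often reduces to a sizing rule like the sketch below. This is a naive assumption-based model - the `desired_runners` function and its bounds are invented for illustration; real platforms layer cool-down timers and warm pools on top of a calculation like this.

```python
import math


def desired_runners(queued_builds: int, jobs_per_runner: int,
                    min_runners: int = 1, max_runners: int = 20) -> int:
    """Naive scaling rule: enough runners to drain the queue, assuming
    each runner handles `jobs_per_runner` concurrent jobs, clamped to
    a floor (keep one warm) and a ceiling (cap compute spend)."""
    want = math.ceil(queued_builds / jobs_per_runner) if queued_builds else min_runners
    return max(min_runners, min(max_runners, want))


print(desired_runners(37, 4))  # 10
```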
The adage “the only constant in technology is change” is true. New languages, platforms, and paradigms are to be expected as technology pushes forward. Incorporating new technologies into a heterogeneous build, or adopting new testing paradigms, can be difficult for more rigid or legacy Continuous Integration platforms that were designed around a small subset of technologies.
Homegrown and legacy Continuous Integration platforms are prone to rigidity: they were designed around whatever was in the enterprise at the point in time the platform was built. New technologies and paradigms require new build dependencies or new testing methodologies. Adding a dependency should be as easy as it is on a developer’s local machine - simply declare what is needed and let convention-based, declarative tooling resolve it. With legacy or rigid platforms, the dependency management required to maintain technical velocity becomes a significant burden.
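The declarative contract described above can be sketched as follows. The dependency names and the `INDEX` registry are entirely hypothetical; a real resolver reads a package registry, but the shape is the same - declare only direct dependencies and let tooling expand the transitive closure.

```python
# Hypothetical transitive-dependency index standing in for a registry.
INDEX = {
    "web-framework": ["http-core", "templating"],
    "http-core": [],
    "templating": [],
    "grpc-client": ["http2"],
    "http2": [],
}


def resolve(declared: list) -> list:
    """Expand the declared (direct) dependencies into the full
    transitive set, the way declarative build tooling does."""
    resolved, stack = set(), list(declared)
    while stack:
        dep = stack.pop()
        if dep not in resolved:
            resolved.add(dep)
            stack.extend(INDEX.get(dep, []))
    return sorted(resolved)


print(resolve(["web-framework"]))
```

The developer declares one dependency; the tooling pulls in the rest. A rigid platform inverts this, forcing engineers to hand-manage the expanded list on every build node.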
Because Continuous Integration systems were among the first to automate parts of the development pipeline, there is a natural tendency to keep extending that automation all the way to production. But organizations quickly realize that failing a build over failing unit tests is very different from orchestrating multiple deployments and release strategies; a failed deployment can leave a system in a non-running state. This is why there should be a line in the sand between Continuous Integration (build) and Continuous Delivery (safe production deployments).
The rigor needed to create and test the infrastructure and the application together, all while following a safe release strategy such as a canary release, requires codifying tribal knowledge about each application to determine pass/fail scenarios. The burden of adding additional applications can be substantial and can work against Continuous Integration best practices, such as keeping the build fast.
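That codified tribal knowledge often boils down to per-application thresholds like the sketch below. The `canary_verdict` function and its default tolerance are assumptions for illustration; real canary analysis compares many metrics, but each one needs exactly this kind of written-down judgment call.

```python
def canary_verdict(baseline_error_rate: float,
                   canary_error_rate: float,
                   tolerance: float = 0.01) -> str:
    """Codified pass/fail rule: fail the canary if its error rate exceeds
    the baseline by more than `tolerance`. The tolerance value is the
    tribal knowledge that must be captured per application."""
    if canary_error_rate <= baseline_error_rate + tolerance:
        return "promote"
    return "roll back"


print(canary_verdict(0.002, 0.004))  # promote
print(canary_verdict(0.002, 0.050))  # roll back
```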
We hope you enjoyed this excerpt of our Modernizing Continuous Integration eBook. In the next excerpt, we’ll go over how to modernize Continuous Integration, and what modern infrastructure looks like.
If you don’t want to wait for the next blog post, go ahead and download the eBook today - it’s free and doesn’t require an email address: Modernizing Continuous Integration.
Enjoyed reading this blog post or have questions or feedback?
Share your thoughts by creating a new topic in the Harness community forum.