Modernizing Continuous Integration practices and platforms can take a four-pillar approach. Making strides in any of the pillars will put you on the path to modernizing your Continuous Integration platforms and practices.
This article contains an excerpt from our eBook, Modernizing Continuous Integration. If you like the content you see, stick around to the end where we’ll link the full eBook for you. It’s free - and best of all, ungated.
Because of the speed and concurrency at which Continuous Integration solutions must operate, the platforms that run the external builds should mirror the applications they support. With elastic infrastructure much more attainable today (for example, with autoscaling resources in the public cloud and/or by leveraging modern distributed platforms like Kubernetes), having a distributed, cloud-native infrastructure powering the CI platform is a necessity.
Build nodes/runners, where the actual builds and packaging take place, do most of the heavy lifting, and their workload is fairly elastic in nature. Builds (e.g. compiling a Java JAR) and packaging (e.g. building a Docker image) are compute-heavy tasks. Once the build and packaging are complete, the runners sit idle. This shows the importance of having ephemeral build nodes: a build node spins up for the duration of the task, then spins down/is destroyed after the build task is done, so as not to drain resources.
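As a minimal sketch of the ephemeral pattern, a build can run as a Kubernetes Job: a pod is scheduled only for the duration of the build and is cleaned up afterwards. The names and image tag below are illustrative, not prescriptive:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-maven-build        # illustrative name
spec:
  ttlSecondsAfterFinished: 120     # the finished Job (and its pod) is garbage-collected
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: build
          image: maven:3-eclipse-temurin-17   # build toolchain lives in the image
          command: ["mvn", "-B", "package"]
```

Once `mvn package` exits, the pod terminates and, after the TTL elapses, no resources remain allocated to the build.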
Even if the applications being built are not themselves headed to a Kubernetes endpoint, running your Continuous Integration solution on Kubernetes can teach the organization how a distributed application runs on Kubernetes. Modern Continuous Integration platforms are designed to deploy ephemeral nodes/runners onto elastic/Kubernetes-based infrastructure. Running on modern infrastructure also benefits engineering efficiency.
A core tenet of engineering efficiency is meeting your internal customers where they are. For software engineers, this means being as close to their tools and projects as possible. Like many modern pieces of application infrastructure, shifting left to the developer means being included in the project structure in source code management (SCM).
For local builds, checking language-specific build files, such as Maven, Gradle, or NPM configurations, into source control has been the convention for some time. With additional packaging, confidence, and build steps (for example, more than one language or artifact in the distribution), Continuous Integration platform steps are now also being included in SCM-managed projects. Modern Continuous Integration platforms support declarative instructions: goals are defined, and the CI solution works to create the desired declared state (e.g. the output artifacts).
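As an illustration, a Drone-style pipeline file checked into the repository root (conventionally `.drone.yml`) declares the desired build steps alongside the code it builds:

```yaml
kind: pipeline
type: docker
name: default

steps:
  - name: build
    image: maven:3-eclipse-temurin-17   # illustrative image tag
    commands:
      - mvn -B package
```

Because the pipeline definition lives in SCM, it is versioned, reviewed, and branched exactly like the application code it describes.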
A common disconnect in Continuous Integration platforms is dependency management. Over the past decade, software engineers have had this problem solved for them by dependency/package/build tools such as Maven, Gradle, and NPM: define, implicitly or explicitly, what you need, and the dependencies will be resolved. Continuous Integration tools suffer from a disconnect because they orchestrate several tools that potentially share no common syntax. Dependency management across build nodes and runners is a common pain point; for example, some nodes have certain dependencies installed while others do not. Modern solutions take a container-based approach (e.g. Docker-based), with dependency management handled in the ephemeral build node/runner container: declare what you need and, much like a Docker build/compose, it will be executed, giving the node container everything it needs.
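In a container-based model, each step declares its dependencies as a container image, so no build node needs every toolchain pre-installed. A hedged sketch in Drone-style step syntax (image tags are illustrative):

```yaml
steps:
  - name: backend
    image: maven:3-eclipse-temurin-17   # Java toolchain exists only for this step
    commands:
      - mvn -B package
  - name: frontend
    image: node:20                      # Node toolchain exists only for this step
    commands:
      - npm ci
      - npm run build
```

The Java and Node toolchains never have to coexist on a long-lived node; each ephemeral step container carries exactly what it needs.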
Typically, an organization’s first forays into running automated tests in a repeatable and consistent fashion end up in its Continuous Integration pipelines. Usually, this is an easy lift: the same code/test coverage that a developer is subject to in a local build makes its way into the build pipeline, since those steps should have been executed before the commit.
Though as the initial confidence of getting tests into the CI pipeline grows, more tests, and sometimes inappropriate test coverage, are introduced because of the ease of integration with the pipeline. An even harder problem to identify and rectify is flaky tests. A flaky test is a test that both passes and fails periodically without any code changes. Together, rising execution time and the loss of confidence that flakiness brings create a twofold problem that demands optimization. A modern Continuous Integration solution should be able to visualize test order, timings, and overall execution to help identify and eventually rectify excessive coverage and flakiness.
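To make the definition concrete, flakiness can be detected by re-running a test against unchanged code and checking whether the outcome varies. The sketch below (hypothetical helper names, with one deliberately non-deterministic test for illustration) flags a test as flaky when identical runs produce both passes and failures:

```python
import random

def run_suite(test_fns, runs=5):
    """Run each test several times against unchanged code; a test that
    both passes and fails across identical runs is flagged as flaky."""
    report = {}
    for name, fn in test_fns.items():
        outcomes = set()
        for _ in range(runs):
            try:
                fn()
                outcomes.add("pass")
            except AssertionError:
                outcomes.add("fail")
        if outcomes == {"pass"}:
            report[name] = "stable-pass"
        elif outcomes == {"fail"}:
            report[name] = "stable-fail"
        else:
            report[name] = "flaky"       # both pass and fail observed
    return report

# A deterministic test and a deliberately flaky one (for illustration).
def test_addition():
    assert 1 + 1 == 2

def test_timing_sensitive():
    assert random.random() > 0.5         # passes roughly half the time

print(run_suite({"test_addition": test_addition,
                 "test_timing_sensitive": test_timing_sensitive}, runs=20))
```

Real CI platforms apply the same idea at scale by tracking pass/fail history per test across builds rather than re-running in a loop.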
Software is an exercise in iteration. The lower the barrier to entry for iteration, the greater the gains in engineering efficiency and agility. Local builds happen dozens of times before reaching a committable stage and moving forward to a dev-integration environment, so having a local environment is key. Oddly, Continuous Integration pipelines are designed to run externally from a local machine; that is the entire point.
In a chicken-or-the-egg problem, a CI pipeline needs to be developed before it can be run and accepted, yet CI pipelines are usually developed remotely because the CI systems are remote to a user’s machine. The ability to develop CI pipelines locally allows for the same iteration velocity that software engineers have been achieving for a while. Locally run CI pipelines also let the internal customers (the software engineers) run and debug pipelines before making a commit that would trigger a build, therefore building confidence before the build.
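For example, the open-source Drone CLI can execute a pipeline file on the local machine before anything is committed (this sketch assumes Docker and the `drone` CLI are installed locally):

```shell
# Run the repository's pipeline locally, using the same
# .drone.yml that the remote CI server would use.
drone exec .drone.yml
```

A pipeline that passes locally can then be committed with reasonable confidence that the remote build will behave the same way.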
Modern Continuous Integration architecture supports iteration and scale while optimizing and building confidence, and it keeps the proper feedback loops in place for action and automation.
With the above model/architecture, Harness Continuous Integration can easily be deployed, and it supports modern Continuous Integration approaches.
Harness Continuous Integration, in both its Enterprise and Open-Source (Drone-based) editions, has a modern user interface and is built to meet the scaling requirements of cloud-native workloads.
Continuous Integration might seem like a solved problem for many organizations, but as with any technology, there is always room for improvement and modernization. With modern development processes allowing for more rapid development, the platforms that support the agility and iteration that organizations require are evolving. Legacy approaches are seen as brittle and rigid, and incorporating modern practices and approaches into your Continuous Integration platform will allow for future growth and agility in a lasting solution.
We hope you enjoyed this final excerpt of our Modernizing Continuous Integration eBook. Please feel free to download the full eBook today - it’s free and doesn’t require an email address: Modernizing Continuous Integration.
Enjoyed reading this blog post or have questions or feedback?
Share your thoughts by creating a new topic in the Harness community forum.