The term "cowboy coding" refers to making risky changes directly in a production environment. Code skips the lower deployment environments entirely and ships straight into the hands of end users. This is, of course, risky: automated tests and manual testing are thrown out the window.
To that point, many companies maintain a series of pre-production environments that their development and operations teams leverage to validate changes before "full sending" them to production. In this article, we'll discuss how you might leverage multiple environments to ensure a degree of code quality before potentially interrupting customers with bugs, outages, or other chaotic scenarios.
A deployment environment is where engineers, product owners, QA teams, and automated tooling get a picture of how well a new code revision behaves and performs, and how it looks and feels from a UI perspective. The number of software deployment environments (SDEs) per application can differ depending on the mission-criticality of the application's production environment. Additionally, for complex applications with several attached dependencies, such as AWS RDS instances, Redis clusters, or other infrastructure, it's even more common to have a set of standard environments so that changes can be tested in isolation when needed.
Some common environments we see at companies are:

- Development
- Staging (pre-production)
- Load testing
- Production
Depending on the size of the engineering teams, the development methodology in use (Agile, Waterfall, etc.), and whether PR or ad-hoc feature environment automation is in place, you may see a plethora of other SDEs, such as:

- Per-pull-request (PR) preview environments
- Ad-hoc feature environments
- QA or UAT environments
The ultimate goal of these environments is to enable teams to build and test infrastructure and application code in isolation, and ultimately to deliver high-quality software that performs in production.
Now that we've highlighted some of the environments you might use to methodically test code before moving it to production, let's dive a bit deeper into how and why these environments are used.
In most organizations, the latest version of code is found in the development environment.
So, you've made some modifications to the code in your local development environment, tested it locally (right? right?), and submitted a PR against your upstream branch. This might be your development branch, where potentially unstable code gets merged. CI kicks off tests and hopefully gives the green light for merging.
The goal of the development environment is to allow software developers to batch code changes together and deploy them via CD to the remote dev environment. If you're an AWS shop, this might be in Amazon ECS, EKS, or on simple EC2 instances or virtual machines elsewhere. Perhaps you're leveraging Azure AKS or Azure virtual machines.
At this point, developers can see if their code changes work as expected in a production-like environment. I say this loosely, as the scale of the environment may differ, but the underlying technology should be the same. This avoids the classic "it works on my machine" trap, where code fails in the actual deployment environments that colleagues and customers interact with. This is one of the key areas that DevOps teams focus on, as it happens more often than we'd like, given the wealth of environment variables and config file entries that might be environment-specific.
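One common source of "works on my machine" failures is configuration that silently falls back to the wrong environment's values. Here's a minimal sketch of explicit, fail-loud config resolution; the environment names, keys, and URLs are illustrative, not from any specific application:

```python
import os

# Hypothetical per-environment settings; names and values are illustrative.
CONFIG = {
    "dev":     {"api_base": "https://api.dev.example.com", "debug": True},
    "staging": {"api_base": "https://api.stg.example.com", "debug": False},
    "prod":    {"api_base": "https://api.example.com",     "debug": False},
}

def load_config(env_name=None):
    """Resolve settings for the current deployment environment.

    Falls back to the APP_ENV variable, then to 'dev', and fails loudly
    on an unknown environment instead of silently picking a default.
    """
    env = env_name or os.getenv("APP_ENV", "dev")
    if env not in CONFIG:
        raise ValueError(f"Unknown environment: {env!r}")
    return CONFIG[env]
```

Failing on an unknown environment name is the key design choice here: a typo in `APP_ENV` surfaces immediately rather than quietly loading another environment's values.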
Once the final developer QA testing in this environment completes, the code may be merged into another upstream branch for deployment to the next environment, typically staging. If your team subscribes to cutting semantically versioned releases, the code would likely be merged into the main or master branch in Git next. At that point, CI may kick off another build pipeline to rebuild assets for the staging environment, or promote the previously built VM or Docker image to a release candidate. The approach here may vary depending on the application and the languages used.
For instance, if static assets need environment-specific URLs baked into the JS/HTML via Webpack, Gulp, or the like, then additional build steps may be needed. You might also need to install different npm packages, or an optional module used only for debugging, in lower deployment environments but not in production.
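When no environment-specific rebuild is needed, the "build once, promote many" idea above can be sketched as a small tagging helper. The tag scheme here is a hypothetical convention, not a standard:

```python
def promote_image_tag(image, commit_sha, version):
    """Derive the tags used to promote one built artifact through
    environments: the same image is retagged, never rebuilt.
    """
    build_tag = f"{image}:{commit_sha[:7]}"  # immutable tag from the CI build
    rc_tag    = f"{image}:{version}-rc"      # release candidate for staging
    release   = f"{image}:{version}"         # final tag cut for production
    return build_tag, rc_tag, release
```

Because all three tags point at the same bits, the artifact validated in staging is byte-for-byte the one that reaches production.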
Now that we have our build artifact for staging, engineers deploy the code to staging via their deployment pipeline. A common pattern is to have a single pipeline that carries code from development, to staging, into a load test environment, and then to production. Having a unified pipeline can be helpful, as it allows one to easily visualize the end-to-end process of committing, deploying, testing, and promoting new code into production.
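The unified pipeline described above is essentially an ordered list of environments with a single promotion direction. A minimal sketch, using the stage names from this article:

```python
# The promotion order for a single unified pipeline.
PIPELINE = ["development", "staging", "load-test", "production"]

def next_stage(current):
    """Return the environment code is promoted to next, or None at the end."""
    i = PIPELINE.index(current)  # raises ValueError for an unknown stage
    return PIPELINE[i + 1] if i + 1 < len(PIPELINE) else None
```

Encoding the order in one place means every deploy job agrees on where a build goes next, instead of each job hard-coding its own target.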
After the new code is live in staging, one might run a suite of tests against the environment. This could include end-to-end tests via Cypress, integration tests, OWASP security scans, and even load tests if there isn't a dedicated environment for load testing.
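Whatever mix of suites you run against staging, the promotion decision usually reduces to "did every suite pass?". A hedged sketch of such a gate; the suite names are illustrative:

```python
def can_promote(results):
    """Gate promotion on the staging test suites: every suite must pass.

    `results` maps a suite name (e.g. 'e2e', 'integration', 'security')
    to a boolean outcome. Returns (ok, list_of_failed_suites).
    """
    failed = [name for name, ok in results.items() if not ok]
    return (len(failed) == 0, failed)
```

Returning the failing suite names alongside the verdict makes the pipeline's "blocked" message immediately actionable.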
This is the last chance to catch bugs, performance regressions, and security issues before our code problems become our customers' problems. As a best practice, it's always wise to manually check features in staging, even if automated tests have passed. Test confidence takes time to build, and can we ever truly be 100% certain that our automated tests cover all cases? The saying "trust, but verify" applies here.
Under ideal circumstances, the staging environment should be as close as possible to the production environment.
All database connection string variables should maintain parity with production: replica/reader URLs should point to an actual replica in staging. I've seen plenty of cases where, locally, the replica database URL points at a single Postgres or MySQL instance, so writes going to the replica URL fly under the radar. Such code can also slip past checks in CI, as most teams aren't running multi-node setups (with a primary and a replica node) there. The business impact of this reaching production varies by company and application, but in the right scenario it can cause major production disruptions.
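One way to catch the misrouted-write problem above is a guard that checks the statement type against the logical role of the target, rather than comparing URLs, so it still fires on single-node local and CI setups where primary and replica URLs are identical. This is a simplified sketch: real statement classification needs a SQL parser, not a prefix check.

```python
class ReplicaWriteError(Exception):
    """Raised when a write statement is about to hit a read replica."""

READ_PREFIXES = ("SELECT", "SHOW", "EXPLAIN")

def check_routing(sql, role):
    """Assert a statement is allowed on the target's logical role.

    `role` is 'primary' or 'replica'. Crude classification: anything
    not starting with a known read keyword is treated as a write.
    """
    is_write = not sql.lstrip().upper().startswith(READ_PREFIXES)
    if is_write and role == "replica":
        raise ReplicaWriteError(f"write statement routed to replica: {sql[:40]!r}")
```

Because the check keys off the intended role instead of the URL, it surfaces bad routing even in environments where "primary" and "replica" are secretly the same database.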
Avoid environment-specific deviations between staging and production as much as feasible; they can make or break the efficacy of the environment.
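One way to keep staging honest is to diff its configuration against production's, allowing only an explicit list of keys that are expected to vary. A sketch with illustrative key names:

```python
def config_drift(staging, prod, expected_env_keys=("api_base", "db_host")):
    """Report keys whose values differ between staging and production,
    excluding keys that are *expected* to vary per environment.
    """
    drift = {}
    for key in staging.keys() | prod.keys():
        if key in expected_env_keys:
            continue  # allowed to differ (endpoints, hostnames, etc.)
        if staging.get(key) != prod.get(key):
            drift[key] = (staging.get(key), prod.get(key))
    return drift
```

Run as a periodic check or a pipeline step, a non-empty result flags drift (say, a log level or feature flag that quietly diverged) before it invalidates your staging tests.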
Leverage multiple deployment environments to thoroughly validate changes before pushing them to production. Make sure your pre-prod or staging environment is as similar to production as possible, from both an infrastructure and a configuration standpoint, to reduce the chances of missing issues.
Enjoyed reading this blog post or have questions or feedback?
Share your thoughts by creating a new topic in the Harness community forum.