Pull Request-Driven Development

Like many companies, Harness started out with a minimal set of deployment environments. When we launched our first production application, we had three: a Continuous Integration (CI) environment, quality assurance (QA), and production itself.

Every change made by a developer was immediately deployed to our CI environment, which was great for quickly seeing the latest code running as a real application. However, by the time a change got there it had already been merged into the master branch, so the environment was no help for testing prior to merging.

To address this we started producing deployable builds from developers’ feature branches. These could be deployed to our QA environment and tested there before being merged. This worked for a time, but some changes alter the data schema, and when those changes were rolled back from QA the data also had to be repaired, since our QA data was populated specifically for testing prior to production deployment.

Another constraint was that, since QA was part of the path to production, it wasn’t available all the time. So we introduced another dev environment specifically for deploying feature branch builds. Now our QA data was only altered by changes that had been merged to master and were on track to go to production. Our dev environment could be deployed with any feature build, and if the data got corrupted we could reset it.

Problem solved! Except…

Some changes need a long period of hands-on testing. Frontend and backend engineers may need to collaborate in one environment, each working from their respective branches, and new functionality may need to be tested by other teams, possibly in other timezones. As part of our evolving development process, we wanted code changes to be reviewed, tested and possibly signed off by multiple teams including Quality Engineering, Security Operations, and UX Design.

Our dev environment quickly became a bottleneck. The challenge became even bigger as our team continued to grow.

Pull Requests Need More Love

Features and code changes are centered around git branches and pull requests, so there’s nothing better than using these as focal points to drive all of the required collaboration. We needed to be able to create and tear down environments as cheaply and easily as you would a git branch. We needed maximum flexibility and we needed to solve this in a cost-effective and scalable way. We needed Harness-Environment-as-a-Service.

The Harness application had finally reached a level of maturity where we had all the features required to implement this. Making use of the flexibility of Kubernetes ingress and deploying with Harness, we can now deploy to an unlimited number of time-limited environments that can be created and configured on the fly. This frees our developers from having to reserve time on shared resources and allows for any number of experimental environments to exist simultaneously. The time limits ensure that resources are freed after use, preventing wasted cost.

Finally, we were able to have dedicated environments for collaborating with quality engineers, designers, and DevOps teams. We now design our processes to allow developers to get sign-off before merging code.

While we introduced these ephemeral environments for testing, collaborating, and sharing, we found that there are several other important use cases that also became possible.

The application’s resource configuration can be exposed as deployment-time variables. We can then deploy any combination of resource limits and run the same load against each, stress testing various configurations.
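
As a rough sketch, the container section of a deployment manifest can template those limits the same way the namespace spec later in this post templates its annotations (the value names here are purely illustrative, not our actual configuration):

containers:
- name: manager
  image: {{.Values.image}}
  resources:
    requests:
      cpu: {{.Values.cpuRequest | quote}}
      memory: {{.Values.memoryRequest | quote}}
    limits:
      cpu: {{.Values.cpuLimit | quote}}
      memory: {{.Values.memoryLimit | quote}}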

We can deploy an older version of our application, then deploy a newer version to the same environment to test the upgrade migration path between any specific versions.
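
Under the hood this is nothing more than two successive deployments of different artifact versions into the same namespace. Stripped of the Harness workflow, the idea looks roughly like this (the image tags and registry are made up for illustration):

# Deploy the currently released version and wait for it to settle...
kubectl -n my-awesome-feature set image deployment/manager manager=example.io/manager:1.8.0
kubectl -n my-awesome-feature rollout status deployment/manager
# ...then deploy the release candidate on top of it to exercise the migration path.
kubectl -n my-awesome-feature set image deployment/manager manager=example.io/manager:1.9.0-rc1
kubectl -n my-awesome-feature rollout status deployment/manager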

We can take backup snapshots of the data in these environments to preserve a particular state, and the backups can be restored into any other environment. That means we can keep different setups on hand for testing different sorts of functionality, and restore any relevant ones to the environment containing a particular feature build.
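
The exact commands depend on your database; as a purely hypothetical example with a MongoDB pod named mongo-0, a snapshot could be captured from one ephemeral namespace and restored into another like this:

# Capture a snapshot of the data in one ephemeral environment...
kubectl exec -n my-awesome-feature mongo-0 -- mongodump --archive=/tmp/snapshot.gz --gzip
kubectl cp my-awesome-feature/mongo-0:/tmp/snapshot.gz ./snapshots/login-testdata.gz

# ...and restore it into a different environment later.
kubectl cp ./snapshots/login-testdata.gz another-feature/mongo-0:/tmp/snapshot.gz
kubectl exec -n another-feature mongo-0 -- mongorestore --archive=/tmp/snapshot.gz --gzip --drop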

The following goes into the details of how we set up these ephemeral environments in Kubernetes and deploy them with Harness, making all of your dreams come true.

Strategy

For us, a working environment consists of a number of microservices deployed into a Kubernetes cluster with routing handled by an ingress controller. The approach described here should work for any similar setup.

We start with a single autoscaling Kubernetes cluster dedicated to hosting these ad hoc environments. An ingress controller and any shared services that are not part of the on-demand environments are installed into dedicated namespaces.
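
For example, an NGINX ingress controller can be installed into its own namespace with Helm (the chart and namespace names below are one common choice, not a requirement):

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace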

Pull Request: Namespaces


Each ephemeral environment is deployed to a unique namespace in the cluster, with ingress rules created to route traffic to services in that namespace based on a path prefix matching the namespace name.

https://adhoc.mydomain.com/namespace/service-path

Annotations are added to the namespace with metadata including who deployed it, when it was last deployed, and the time to live (TTL), after which it can be torn down.

apiVersion: v1
kind: Namespace
metadata:
  name: my-awesome-feature
  labels:
    harness-managed: "true"
  annotations:
    ttl-hours: "12"
    last-deployed-by: Nessa Hiro
    last-deployed-at: "1415926535"

Since each set of microservices, along with their ingresses and the database, lives in a unique namespace selected by the developer, each deployment is completely isolated. Having all of the deployed services and dependencies (and nothing else) in their own namespace makes it easy to tear everything down after the namespace expires.
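
When an environment expires, a single command removes the namespace and everything deployed into it:

kubectl delete namespace my-awesome-feature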

Implementation

Let's take a look at how this is configured in Harness.

Setup

First, we want to collect user inputs at deployment time, including the name of the namespace and its desired TTL. We also collect any other configuration that we want the developer to enter. Some are optional while others are required. Some are prepopulated with reasonable defaults while others are left blank. The namespace is taken as input and is referenced in the service infrastructure, using an expression. This ensures all workloads and services are deployed to the selected namespace.

Namespace: ${workflow.variables.namespace}

We are creating a namespace with a parameterized name and annotations, so the namespace spec is templatized.

apiVersion: v1
kind: Namespace
metadata:
  name: {{.Values.namespace.name}}
  annotations:
    ttl-hours: {{.Values.ttlHours | quote}}
    last-deployed-by: {{.Values.lastDeployedBy}}

The values are populated from what the user entered, or other available expressions such as the name of the person doing the deployment.

ttlHours: ${workflow.variables.namespace_ttl_hours}
lastDeployedBy: ${deploymentTriggeredBy}

We also need to set last-deployed-at to the current timestamp in seconds so that we can tear down the namespace after it expires. For that we use a kubectl command from a shell script.

kubectl annotate namespace ${infra.kubernetes.namespace} last-deployed-at=$(date +%s) --overwrite=true

For the ingress rules, we specify a path prefix as the selected namespace and rewrite the target without the prefix. Now all requests starting with the namespace are routed to the services in that namespace.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: manager-api
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /api
spec:
  rules:
  - http:
      paths:
      - path: {{.Values.pathPrefix}}/api
        backend:
          serviceName: manager
          servicePort: manager-port

The value specifies the path prefix as the namespace name.

pathPrefix: /${infra.kubernetes.namespace}

Note that the same manifest may be used for your deployments into traditional environments, in which case pathPrefix can be left blank for those environments.

The microservices make requests to each other within the namespace using Kubernetes DNS, referring to each service by name only.

SERVER_URL: https://verification-svc:7070

For requests coming from outside the cluster, such as from the UI, environment variables are parameterized with the namespace and used when deploying the service using Harness.

MANAGER_URL: https://adhoc.mydomain.com/${infra.kubernetes.namespace}/api/

We add a resource constraint to the workflows that deploy each service into the namespace. This ensures that the same workflow deploying to the same namespace can’t run at the same time and cause a conflict. To do this, define a resource constraint with a capacity of one, then use that constraint in the workflow. Specify the Unit as a combination of the service infrastructure ID and the namespace. The capacity and the usage will then be segmented by unique values of the given expression.

Resource Constraint


Deployment

Once a developer enters all of the values to be collected and starts a deployment, the first thing we do is create and annotate the namespace. Next, we deploy our database into the namespace and populate it with test data.

Then, we deploy all of our microservices.

Pull Request: Workflow


Finally, we verify service endpoints with HTTP verifications.

Service Endpoints


The new environment is now ready for use.

Tear Down

To tear down the expired namespaces, we execute a workflow every few hours on a time trigger. This workflow runs a script that checks the TTL annotations on each namespace and deletes any namespace that has expired. The script could also be run as a Kubernetes CronJob.

namespaces=$(kubectl get namespaces \
  -o jsonpath="{.items[*].metadata.name}")
for ns in $namespaces; do
  ttl_hours=$(kubectl get namespace $ns \
    -o jsonpath='{.metadata.annotations.ttl-hours}')
  ts_last=$(kubectl get namespace $ns \
    -o jsonpath='{.metadata.annotations.last-deployed-at}')
  # Leave namespaces that were not deployed this way (no TTL annotations) alone.
  [[ -z "$ttl_hours" || -z "$ts_last" ]] && continue
  # Delete the namespace if its last deployment is older than its TTL.
  if [[ $ts_last -le $(($(date +%s) - ttl_hours * 3600)) ]]; then
    kubectl delete namespace $ns
  fi
done
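
If you prefer the CronJob route, a minimal sketch might look like the following, assuming the script above is mounted from a ConfigMap and the service account is bound to a role that can list and delete namespaces (the names, image, and schedule here are all placeholders):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: namespace-cleanup
spec:
  schedule: "0 */4 * * *"                        # every four hours
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: namespace-cleanup  # needs list/delete on namespaces
          restartPolicy: OnFailure
          containers:
          - name: cleanup
            image: bitnami/kubectl:latest        # any image with kubectl and bash
            command: ["bash", "/scripts/cleanup-namespaces.sh"]
            volumeMounts:
            - name: scripts
              mountPath: /scripts
          volumes:
          - name: scripts
            configMap:
              name: namespace-cleanup-script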

Summary

Making unlimited temporary environments available keeps your developers from competing for resources and enables widespread collaboration and sharing.

Sharing functionality before merging code provides the opportunity for valuable feedback and opens up new ways for teams to work together. You can also sign up for a free trial of Harness.

Brett Zane
