October 25, 2023

Harness Developer Hub - Ease of Authoring with Git Triggers


It’s been about a year since we launched Harness Developer Hub [HDH] in Beta. Today, HDH is GA and serves tens of thousands of unique visitors and hundreds of thousands of pageviews every month, all across the globe, while supporting hundreds of contributors with varying skill levels. The traffic and the number of contributors in the public repository continue to grow as we expand the capabilities of HDH.

Looking at how HDH is architected, HDH is a Docusaurus implementation. Our site embraces documentation-as-code as a paradigm and is no different from any other modern TypeScript [JavaScript] based application. We have an application that multiple contributors need to contribute to, and that needs to be built and deployed throughout the day.

Over the past year we have made two shifts in how we build and deploy. We now treat every commit as a potential release: we build multiple times a day with every Git commit, and we deploy multiple times a day with every merge to our main branch. Let’s look at our current solution and then take a jog down memory lane to see how we evolved.

Current HDH Pipeline Strategy

We leverage several Harness capabilities to deliver HDH to the world.

Git Triggered Pipeline
HDH Pipeline Triggered by Git Repository

Starting at the Repository

Our source code management solution is the source of truth for HDH and the genesis of changes being published. We have webhooks that fire on several SCM events for Harness to process:

  • Branch or tag creation/deletion
  • Pull request events: created/merged/synchronized/updated/closed
  • Git pushes

These events are then processed by Harness.
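Conceptually, the trigger layer routes each incoming event category to a different evaluation path. The sketch below is purely illustrative (the type and function names are hypothetical, not Harness internals); it only shows the routing idea:

```typescript
// Hypothetical sketch of how SCM webhook events map to trigger handling.
// These names are illustrative assumptions, not Harness internals.
type ScmEvent =
  | { kind: "branch_or_tag"; action: "created" | "deleted" }
  | { kind: "pull_request"; action: "created" | "merged" | "synchronized" | "updated" | "closed" }
  | { kind: "push"; branch: string };

function routeEvent(event: ScmEvent): string {
  switch (event.kind) {
    case "pull_request":
      // PR activity drives preview builds of the proposed changes.
      return "evaluate-pr-trigger";
    case "push":
      // Pushes (e.g. a merge landing on main) drive production deploys.
      return "evaluate-push-trigger";
    case "branch_or_tag":
      // Branch/tag lifecycle events are received but may be no-ops.
      return "ignore-or-cleanup";
  }
}
```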

Harness Build and Deploy - Conditionally from Git Hooks in the Cloud

Our goal is to provide preview/ephemeral builds for the changes represented in a Pull Request. To do this, we need to remotely build the Docusaurus instance, which leverages Yarn and NPM to facilitate the build. We build on every net new commit to the PR.
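The build itself is a standard Docusaurus production build. Inside a Harness CI stage, that could be expressed as a Run step roughly like the following (the step names and exact commands are illustrative assumptions, not our exact configuration):

```yaml
# Illustrative Run step for building a Docusaurus site with Yarn.
- step:
    type: Run
    name: Build Docusaurus Site
    identifier: build_docusaurus_site
    spec:
      shell: Sh
      command: |-
        yarn install
        yarn build   # emits the static site into ./build
```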

We build via a Harness Cloud [hosted] build node so we do not have to manage build infrastructure or dependencies on the build node. For performance, we also leverage Cache Intelligence, which by a conservative estimate sped up our builds by more than 30%. Since we implemented the current setup, we have run over 9,000 builds.
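On a Harness Cloud stage, Cache Intelligence is a small addition to the stage spec. A sketch of such a CI stage fragment (names are illustrative, not our exact pipeline):

```yaml
# Illustrative CI stage on Harness Cloud with Cache Intelligence enabled.
stage:
  name: Build
  identifier: build
  type: CI
  spec:
    caching:
      enabled: true      # Cache Intelligence caches dependencies between runs
    platform:
      os: Linux
      arch: Amd64
    runtime:
      type: Cloud        # Harness-hosted build infrastructure
      spec: {}
```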

From a deployment standpoint, we deploy to our static host, which is Netlify. The flexibility and extensibility of Harness allow us to bring a plugin that interacts with Netlify’s APIs. We make a decision in JEXL about whether a build needs to head to a preview environment or be published to production.

Preview Logic [if branch is not main]:

<+trigger.event>.equals("PR") && <+trigger.branch>!~"/^main$/"

Production Logic [if branch is main]:

<+trigger.event>.equals("PUSH") && <+trigger.targetBranch>.equals("main")
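Translated out of JEXL, the two expressions above encode a three-way decision. A TypeScript rendering of the same logic (the function name is hypothetical, used only for illustration):

```typescript
// The preview-vs-production decision the JEXL expressions encode,
// rendered in TypeScript for illustration.
function decideDeployTarget(
  event: "PR" | "PUSH",
  branch: string,
): "preview" | "production" | "skip" {
  // Preview: a pull request event whose branch is not main.
  if (event === "PR" && !/^main$/.test(branch)) {
    return "preview";
  }
  // Production: a push event targeting main (i.e. a merge landed).
  if (event === "PUSH" && branch === "main") {
    return "production";
  }
  return "skip";
}
```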

Configuring this Harness trigger, here is our YAML configuration, watching for a few pull request events:

type: Webhook
spec:
  type: Github
  spec:
    type: PullRequest
    spec:
      connectorRef: hdh_gh_connector
      autoAbortPreviousExecutions: false
      payloadConditions: []
      headerConditions: []
      actions:
        - Open
        - Synchronize
        - Reopen

Based on the condition, we fire a slightly different request to the Netlify API. Once we get the results of the Netlify API call, we comment back on the GitHub PR. This allows the contributor to preview their work on a live site when the preview flow executes. In totality, the Pipeline looks as follows in the Harness Editor:
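A sketch of how the two request shapes could differ: the endpoint below is Netlify’s real “create site deploy” route and GitHub’s real issue-comments route, but the payload shape, `siteId`, and function names are illustrative assumptions, not the actual plugin implementation:

```typescript
// Hypothetical sketch of the two Netlify requests; not the real plugin code.
interface NetlifyDeployRequest {
  url: string;
  body: { draft: boolean; title: string };
}

function buildNetlifyDeploy(
  siteId: string,
  target: "preview" | "production",
  prNumber?: number,
): NetlifyDeployRequest {
  return {
    // POST /api/v1/sites/{site_id}/deploys creates a deploy on Netlify.
    url: `https://api.netlify.com/api/v1/sites/${siteId}/deploys`,
    body: {
      // Draft deploys get a unique preview URL and do not replace the live site.
      draft: target === "preview",
      title: target === "preview" ? `PR #${prNumber} preview` : "production",
    },
  };
}

// After the deploy, a comment with the resulting URL goes back to the PR.
function buildPrCommentUrl(owner: string, repo: string, prNumber: number): string {
  // GitHub's issue-comments endpoint also covers pull requests.
  return `https://api.github.com/repos/${owner}/${repo}/issues/${prNumber}/comments`;
}
```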

Editable Pipeline
Harness HDH Editable Pipeline

For example, the Cache Intelligence step is easy to weave in during the Build Stage. An execution will look as follows in the Harness UI:

Git Trigger Pipeline Executed
Executed Git Trigger Pipeline for HDH

Pipelines are designed to evolve. We went through two other renditions of the Pipeline, which we optimized over the year to produce what we are leveraging today.

Pipelines Should Evolve

We have embraced two principles as we evolved our pipelines: the KISS Principle, to take a simpler approach, and the DRY Principle, to cut out duplicate steps/tests. Our second rendition was Kubernetes-heavy for the static site before we optimized by calling the Netlify APIs directly for a preview build; we used to maintain our own preview environment even though Netlify provided this out of the box. Once we learned of this feature, we were able to easily modify our HDH Pipeline to leverage it.

If you’d like to continuously improve your software delivery capabilities, I would encourage you to sign up and use the Harness Platform to help you reach your goals.


