July 31, 2025

Improving Liquibase Developer Experience with Harness Database DevOps Automated Change Generation

The blog highlights how Harness Database DevOps improves the Liquibase developer experience by automating changelog generation directly from UI tools, streamlining the entire database change management process. It details a step-by-step pipeline for snapshotting, diffing, and securely committing changes to Git, emphasizing built-in governance and policy enforcement (like index naming conventions) for enhanced control and compliance. This approach enables faster, more reliable database deployments by treating schema changes as version-controlled code within a robust CI/CD framework.

This blog discusses how Harness Database DevOps can automate, govern, and deliver database schema changes safely, without requiring developers to manually author those changes themselves.

Introduction

Adopting DevOps best practices for databases is critical for many organizations. With Harness Database DevOps, you can define, test, and govern database schema changes just like application code, automating the entire workflow from authoring to production while ensuring traceability and compliance. But we can do more: we can help your developers write their database changes in the first place.

In this blog, we'll walk through a concrete example, using YAML configuration and a real pipeline, to show how Harness empowers teams to:

  • Automatically generate database changes using snapshots and diffs
  • Enforce governance before changes move beyond the authoring environment
  • Ensure consistency across environments via CI/CD using a GitOps Workflow

📽️ Companion Demo Video

To see this workflow in action, watch the companion demo video.

In the demo, the pipeline captures changes, specifically adding a new column and index, and propagates those changes via Git before triggering further automated CI/CD, including rollback validation and governance.

Development teams adopting Liquibase for database continuous delivery can accelerate and standardize changeset authoring by leveraging Harness Database DevOps. Because developers generate their changes by diffing database environments, they no longer need to write SQL or YAML by hand. This combines the benefits of state-based migrations with those of script-based migrations: developers define the desired state of the database in an authoring environment, and the pipeline automatically generates the changes needed to get there.
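For illustration, a changeset generated this way for the demo's new column might look like the following; the table, column, and generated id are hypothetical:

databaseChangeLog:
  - changeSet:
      id: 1722400000000-1
      author: dev@example.com
      labels: JIRA-123
      changes:
        - addColumn:
            tableName: person
            columns:
              - column:
                  name: job_title
                  type: varchar(255)

The settings used in the diff step later in this post additionally stamp each changeset with a created value and a more descriptive id.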

Automated Authoring: Step-by-Step with Harness Pipelines

1. Sync Development and Authoring Environments

The workflow starts by ensuring that your development and authoring environments are in sync with Git, guaranteeing a clean baseline for change capture. The pipeline ensures the development environment reflects the current Git state before capturing a new authoring snapshot. This allows developers to use their environment of choice, such as database UI tools, for schema design. To do this, we just run the apply step for the environment.
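As a minimal sketch, such an apply step might look like the following; the DBSchemaApply step type and field names here are assumptions modeled on the steps shown later, so treat them as illustrative rather than exact:

- step:
    type: DBSchemaApply
    name: Ensure Dev Matches Git
    identifier: apply_development
    spec:
      connectorRef: account.harnessImage
      resources:
        limits:
          memory: 2Gi
          cpu: "1"
      # Apply the changelog from Git to the development schema
      dbSchema: pipeline_authored
      dbInstance: development
      timeout: 10m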

2. Take an Authoring Snapshot

Harness uses the LiquibaseCommand step to snapshot the current schema in the authoring environment:

- step:
    type: LiquibaseCommand
    name: Authoring Schema Snapshot
    identifier: snapshot_authoring
    spec:
      connectorRef: account.harnessImage
      command: snapshot
      resources:
        limits:
          memory: 2Gi
          cpu: "1"
      settings:
        output-file: mySnapshot.json
        snapshot-format: json
        dbSchema: pipeline_authored
        dbInstance: authoring
        excludeChangeLogFile: true
      timeout: 10m
      contextType: Pipeline

3. Generate a Diff Changelog

Next, the pipeline diffs the snapshot against the development database, generating a Liquibase YAML changelog (diff.yaml) that describes all the changes made in the authoring environment. Again, this uses the LiquibaseCommand step:

- step:
    type: LiquibaseCommand
    name: Diff as Changelog
    identifier: diff_dev_as_changelog
    spec:
      connectorRef: account.harnessImage
      command: diff-changelog
      resources:
        limits:
          memory: 2Gi
          cpu: "1"
      settings:
        reference-url: offline:mssql?snapshot=mySnapshot.json
        author: <+pipeline.variables.email>
        label-filter: <+pipeline.variables.ticket_id>
        generate-changeset-created-values: "true"
        generated-changeset-ids-contains-description: "true"
        changelog-file: diff.yaml
        dbSchema: pipeline_authored
        dbInstance: development
        excludeChangeLogFile: true
      timeout: 10m
      when:
        stageStatus: Success

4. Merge Diff Changelog with the Central Changelog Using yq

Your Git changelog should include all changes ever deployed, so the pipeline merges the auto-generated diff.yaml into the master changelog with a Run step that uses yq for structured YAML manipulation. The script also echoes the merged changelog to the step log for the user to review.

- step:
    type: Run
    name: Output and Merge
    identifier: Output_and_merge
    spec:
      connectorRef: Dockerhub
      image: mikefarah/yq:4.45.4
      shell: Sh
      command: |
        # Optionally annotate changesets
        yq '.databaseChangeLog.[].changeSet.comment = "<+pipeline.variables.comment>" | .databaseChangeLog.[] |= .changeSet.id = "<+pipeline.variables.ticket_id>-"+(path | .[-1])' diff.yaml > diff-comments.yaml
        # Merge new changesets into the main changelog
        yq -i 'load("diff-comments.yaml") as $d2 | .databaseChangeLog += $d2.databaseChangeLog' dbops/ensure_dev_matches_git/changelogs/pipeline-authored/changelog.yml
        # Output the merged changelog (for transparency/logging)
        cat dbops/ensure_dev_matches_git/changelogs/pipeline-authored/changelog.yml
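For example, after the first yq expression runs, each changeset in diff-comments.yaml carries a ticket-prefixed id and the comment; the values below are illustrative:

databaseChangeLog:
  - changeSet:
      # Id rewritten to "<ticket_id>-<array index>" by the first yq expression
      id: JIRA-123-0
      author: dev@example.com
      comment: Add job_title column for the new HR report
      changes:
        - addColumn:
            tableName: person
            columns:
              - column:
                  name: job_title
                  type: varchar(255)

The second yq invocation then appends these entries to the databaseChangeLog array of the central changelog, so the main file accumulates every changeset in deployment order.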

5. Commit to Git

Once merged, the pipeline commits the updated changelog to your Git repository. 

- step:
    type: Run
    name: Commit to Git
    identifier: Commit_to_git
    spec:
      connectorRef: Dockerhub
      image: alpine/git
      shell: Sh
      command: |
        cd dbops/ensure_dev_matches_git
        git config --global user.email "<+pipeline.variables.email>"
        git config --global user.name "Harness Pipeline"
        git add changelogs/pipeline-authored/changelog.yml
        git commit -m "Automated changelog update for ticket <+pipeline.variables.ticket_id>"
        git push

This push kicks off further CI/CD workflows for deployment, rollback testing, and integration validation in the development environment. The Git repo is structured with a separate branch for each environment, so promoting a change through staging to production is accomplished by merging pull requests.

Enforcing Database Change Policies

To ensure regulatory and organizational compliance, teams can automatically enforce their policies at deployment time. The demo features an example of a policy being violated and the change being fixed to satisfy it. The policy from the demo, shown below, enforces a particular naming convention for indexes.

Example: SQL Index Naming Convention Policy

package db_sql

deny[msg] {
  some l
  sql := lower(input.sqlStatements[l])
  regex.match(`(?i)create\s+(NonClustered\s+)?index\s+.*`, sql)
  matches := regex.find_all_string_submatch_n(`(?i)create\s+(NonClustered\s+)?index\s+([^\s]+)\s+ON\s+([^\s(]+)\s?\(\[?([^])]+)+\]?\);`, sql, -1)[0]
  idx_name := matches[2]
  table_name := matches[3]
  column_names := strings.replace_n({" ": "_", ",": "__"}, matches[4])
  expected_index_name := concat("_", ["idx", table_name, column_names])
  idx_name != expected_index_name
  msg := sprintf("Index creation does not follow naming convention.\n SQL: '%s'\n expected index name: '%s'", [sql, expected_index_name])
}

This policy automatically detects and blocks non-compliant index names in developer-authored SQL.

In the demo, a policy violation appears if a developer’s generated index (e.g., person_job_title_idx) doesn’t match the convention.
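To make the convention concrete, here is a hypothetical changeset that would trip the policy; the SQL it generates and the name the policy expects are noted in the comments:

databaseChangeLog:
  - changeSet:
      id: JIRA-123-1
      author: dev@example.com
      changes:
        - createIndex:
            # Generates SQL along the lines of:
            #   CREATE INDEX person_job_title_idx ON person (job_title);
            # The policy derives idx_person_job_title as the expected name,
            # so this changeset is blocked until the index is renamed.
            indexName: person_job_title_idx
            tableName: person
            columns:
              - column:
                  name: job_title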

Conclusion

Harness Database DevOps amplifies Liquibase's value by automating changelog authoring, merging, Git commits, deployment, and pipeline orchestration. This reduces human error, boosts speed, and ensures every change is audit-ready and policy-compliant. Developers can focus on schema improvements, while automation and policy steps enable safe, scalable delivery.

Ready to modernize your database CI/CD?

Learn more about how you can improve your developer experience for database changes, or contact us to discuss your particular use case.
