Database DevOps Blogs

Featured Blogs

October 23, 2025

For the last decade, DevOps has been obsessed with speed – automating CI/CD, testing, infrastructure, and even feature rollout. But one critical layer has been left almost entirely manual: the database. Teams still write SQL scripts by hand, deploy them at midnight, and pray that rollback plans work. At Harness, we think it’s time to fix that.

Today, Harness is launching AI-Powered Database Migration Authoring, a new capability within Harness Database DevOps that brings “vibe coding for databases” to life – where developers can create safe, compliant database migrations simply by describing them in plain language. 

This is the next step in Harness’s vision to bring automation to every stage of the software delivery lifecycle, from code to cloud to database.

Database DevOps: One of the Fastest-Growing Modules at Harness

Harness’s Database DevOps offering removes one of the last blockers in modern software delivery: slow, manual database schema changes. 

Ask any engineering leader what slows down their release cycles, and the answer will sound familiar: “We can deploy apps fast, but database changes always hold us back.” While CI/CD transformed how applications are released, most teams still manage schema updates through SQL scripts, spreadsheet tracking, and late-stage approvals.

Harness closes this gap by treating database changes like application code. Updates are versioned in Git, validated with policy-as-code, deployed through governed pipelines, and rolled back automatically if needed. A unified dashboard provides full visibility into what is deployed where, enabling teams to compare environments and maintain a comprehensive audit trail.

Harness is the only DevOps platform with a fully integrated, enterprise-grade Database DevOps solution, not a plug-in or point tool. And customers need it: as one of the fastest-growing modules at Harness, Database DevOps delivers value to organizations like Athenahealth (see the video interview).

“Harness gave us a truly out-of-the-box solution with features we couldn’t get from Liquibase Pro or a homegrown approach. We saved months of engineering effort and got more for less – with better governance, orchestration, and visibility.”
— Daniel Gabriel, Senior Software Engineer, Athenahealth

The Market Shift: From Continuous Delivery to “Vibe Coding”

AI has transformed how code is written, but software delivery remains stuck in the past. “Vibe coding” is speeding up creation, yet the systems that move code into production – including testing, security, and database delivery – haven’t kept pace.

In a recent Harness study, 63% of organizations report shipping code faster since adopting AI, yet 72% have suffered at least one production incident caused by AI-generated code. The result is the AI Velocity Paradox: faster coding, slower delivery.

But there’s a solution. 83% of leaders agree that AI must extend across the entire SDLC to unlock its full potential. Database DevOps helps to close that gap by extending AI-powered automation and governance to the last mile of DevOps: the database. 

Introducing AI-Powered Database Migration Authoring

With AI-Powered Database Migration Authoring, any developer can describe the database change they need in natural language, like – 

“Create a table named animals with columns for genus_species and common_name. Then add a related table named birds that tracks unladen airspeed and proper name. Add rows for Captain Canary, African swallow, and European swallow.”

– and Harness will generate a compliant, production-ready migration complete with rollback scripts, validation, and Git integration. Capabilities include:

  • Analyzing the current schema and policies
  • Generating the correct, backward-compatible migration
  • Validating the change for safety and compliance
  • Committing it to Git for testing through CI/CD
  • Creating rollback migrations to ensure complete reversibility

Every migration is versioned, tested, governed, and fully auditable, just like your application code.
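For a flavor of what such a migration looks like on disk, here is a hand-written Liquibase changeset sketch for the animals table from the prompt above (illustrative only; the actual AI-generated output, ids, and column types will differ):

```yaml
databaseChangeLog:
  - changeSet:
      id: TICKET-1-create-animals     # hypothetical id; Harness generates these
      author: dev@example.com         # hypothetical author
      changes:
        - createTable:
            tableName: animals
            columns:
              - column:
                  name: genus_species
                  type: varchar(255)
              - column:
                  name: common_name
                  type: varchar(255)
      rollback:
        - dropTable:
            tableName: animals
```

The explicit rollback block is what makes the change fully reversible, which is exactly what the authoring capability generates for you.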

Trained on Best Practices, Guided by Your Governance

Harness AI isn’t a generic code assistant. It’s trained on proven database management best practices and guided by your organization’s existing governance rules.

It understands keys, constraints, triggers, backward compatibility, and compliance standards. DBAs retain oversight through policy-as-code and automated approvals, ensuring governance never becomes a bottleneck.

Why It Matters

This is more than an incremental feature – it’s a step toward AI-native DevOps, where systems understand intent, enforce policy, and automate delivery from code to cloud to database.

  • For developers, AI removes one of the most frustrating dependencies in the release process.
  • For DBAs, policy-as-code and automated rollback keep every change safe and auditable.
  • For leaders, this offering turns the database from a bottleneck into an accelerator for innovation.

Harness Database DevOps now combines generative AI, policy-as-code, and CI/CD orchestration into one governed workflow. The result: faster releases, stronger governance, and fewer 2 a.m. rollbacks.

See It in Action

Harness’s AI-Powered Database Migration Authoring, like most of Harness AI, is powered by the Software Delivery Knowledge Graph and Harness’s MCP Server. The MCP Server knows about your databases and your pipelines, and comes with baked-in best practices to help you rapidly transform your company’s DevOps using AI.

Below is a preview of how Harness AI takes a simple English-language prompt and generates a compliant database migration complete with validation, rollback, and GitOps integration.

[Demo video: Harness Database DevOps AI-Powered Database Migration Authoring]

The Database Doesn’t Have to Be the Bottleneck

The last mile of DevOps has just caught up.

With Harness Database DevOps and AI-Powered Database Migration Authoring, database delivery becomes automated, governed, and safe – finally a first-class citizen in the CI/CD pipeline.

Learn more about Harness Database DevOps and book a demo to see AI-Powered Database Migration Authoring in action.

July 31, 2025

This blog discusses how Harness Database DevOps can automate, govern, and deliver database schema changes safely, without requiring developers to manually author each database change.

Introduction

Adopting DevOps best practices for databases is critical for many organizations. With Harness Database DevOps, you can define, test, and govern database schema changes just like application code—automating the entire workflow from authoring to production while ensuring traceability and compliance. But we can do more: we can help your developers write their database changes in the first place.

In this blog, we'll walk through a concrete example, using YAML configuration and a real pipeline, to show how Harness empowers teams to:

  • Automatically generate database changes using snapshots and diffs
  • Enforce governance before changes move beyond the authoring environment
  • Ensure consistency across environments via CI/CD using a GitOps Workflow

📽️ Companion Demo Video

To see this workflow in action, watch the companion demo video.

In the demo, the pipeline captures changes, specifically adding a new column and index, and propagates those changes via Git before triggering further automated CI/CD, including rollback validation and governance.

Development teams adopting Liquibase for database continuous delivery can accelerate and standardize changeset authoring with Harness Database DevOps. Because developers generate changes by diffing database environments, they no longer need to write SQL or YAML by hand. This combines the benefits of state-based migrations with those of script-based migrations: developers define the desired state of the database in an authoring environment, and the pipeline automatically generates the changes needed to get there.

Automated Authoring: Step-by-Step with Harness Pipelines

1. Sync Development and Authoring Environments

The workflow starts by ensuring that the development and authoring environments are in sync with Git, guaranteeing a clean baseline for change capture: the pipeline applies the current Git state to the development environment before capturing a new authoring snapshot. This lets developers use their environment of choice—such as database UI tools—for schema design. To do this, we simply run the apply step for the environment.
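A minimal sketch of what such an apply step might look like, assuming a DBSchemaApply-style step type (the exact type and field names depend on your Harness version, so check your step catalog; the schema and instance names mirror the snapshot step later in this post):

```yaml
- step:
    type: DBSchemaApply             # assumed step type; verify in your Harness version
    name: Sync Development With Git
    identifier: apply_development
    spec:
      connectorRef: account.harnessImage
      dbSchema: pipeline_authored   # same logical schema as the snapshot step
      dbInstance: development
    timeout: 10m
```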

2. Take an Authoring Snapshot

Harness uses the LiquibaseCommand step to snapshot the current schema in the authoring environment:

- step:
    type: LiquibaseCommand
    name: Authoring Schema Snapshot
    identifier: snapshot_authoring
    spec:
      connectorRef: account.harnessImage
      command: snapshot
      resources:
        limits:
          memory: 2Gi
          cpu: "1"
      settings:
        output-file: mySnapshot.json
        snapshot-format: json
        dbSchema: pipeline_authored
        dbInstance: authoring
        excludeChangeLogFile: true
      timeout: 10m
      contextType: Pipeline

3. Generate a Diff Changelog

Next, the pipeline diffs the snapshot against the development database, generating a Liquibase YAML changelog (diff.yaml) that describes all the changes made in the authoring environment. Again, this uses the LiquibaseCommand step:

- step:
    type: LiquibaseCommand
    name: Diff as Changelog
    identifier: diff_dev_as_changelog
    spec:
      connectorRef: account.harnessImage
      command: diff-changelog
      resources:
        limits:
          memory: 2Gi
          cpu: "1"
      settings:
        reference-url: offline:mssql?snapshot=mySnapshot.json
        author: <+pipeline.variables.email>
        label-filter: <+pipeline.variables.ticket_id>
        generate-changeset-created-values: "true"
        generated-changeset-ids-contains-description: "true"
        changelog-file: diff.yaml
        dbSchema: pipeline_authored
        dbInstance: development
        excludeChangeLogFile: true
      timeout: 10m
      when:
        stageStatus: Success

4. Merge Diff Changelog with the Central Changelog Using yq

Your Git changelog should include every change ever deployed, so the pipeline merges the auto-generated diff.yaml into the master changelog using a Run step that invokes yq for structured YAML manipulation. The shell script also echoes only the new changesets to the log for the user to review.

- step:
    type: Run
    name: Output and Merge
    identifier: Output_and_merge
    spec:
      connectorRef: Dockerhub
      image: mikefarah/yq:4.45.4
      shell: Sh
      command: |
        # Optionally annotate changesets
        yq '.databaseChangeLog.[].changeSet.comment = "<+pipeline.variables.comment>" | .databaseChangeLog.[] |= .changeSet.id = "<+pipeline.variables.ticket_id>-"+(path | .[-1])' diff.yaml > diff-comments.yaml
        # Merge new changesets into the main changelog
        yq -i 'load("diff-comments.yaml") as $d2 | .databaseChangeLog += $d2.databaseChangeLog' dbops/ensure_dev_matches_git/changelogs/pipeline-authored/changelog.yml
        # Output the merged changelog (for transparency/logging)
        cat dbops/ensure_dev_matches_git/changelogs/pipeline-authored/changelog.yml
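The yq merge boils down to appending one list of changesets to another. A minimal Python sketch of the same semantics, using plain dicts in place of parsed YAML (a hypothetical helper for illustration, not part of the product):

```python
def merge_changelogs(master: dict, diff: dict) -> dict:
    """Append the changesets from `diff` onto `master`, mirroring
    yq's: .databaseChangeLog += $d2.databaseChangeLog"""
    merged = {"databaseChangeLog": list(master.get("databaseChangeLog", []))}
    merged["databaseChangeLog"] += diff.get("databaseChangeLog", [])
    return merged

# Toy changelogs standing in for changelog.yml and diff-comments.yaml
master = {"databaseChangeLog": [{"changeSet": {"id": "TKT-1-1"}}]}
diff = {"databaseChangeLog": [{"changeSet": {"id": "TKT-2-1"}}]}

result = merge_changelogs(master, diff)
print([c["changeSet"]["id"] for c in result["databaseChangeLog"]])
# ['TKT-1-1', 'TKT-2-1']
```

Because changesets are identified by id, author, and file, appending (rather than rewriting) keeps every previously deployed changeset stable in the history.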

5. Commit to Git

Once merged, the pipeline commits the updated changelog to your Git repository. 

- step:
    type: Run
    name: Commit to Git
    identifier: Commit_to_git
    spec:
      connectorRef: Dockerhub
      image: alpine/git
      shell: Sh
      command: |
        cd dbops/ensure_dev_matches_git
        git config --global user.email "<+pipeline.variables.email>"
        git config --global user.name "Harness Pipeline"
        git add changelogs/pipeline-authored/changelog.yml
        git commit -m "Automated changelog update for ticket <+pipeline.variables.ticket_id>"
        git push

This push kicks off further CI/CD workflows for deployment, rollback testing, and integration validation in the development environment. The Git repo uses a separate branch for each environment, so promoting a change through staging to production is accomplished by merging pull requests.

Enforcing Database Change Policies

To ensure regulatory and organizational compliance, teams can automatically enforce their policies at deployment time. The demo shows a policy being violated and the change then being fixed to comply. The policy used in the demo, shown below, enforces a particular naming convention for indexes.

Example: SQL Index Naming Convention Policy

package db_sql

deny[msg] {
  some l
  sql := lower(input.sqlStatements[l])
  regex.match(`(?i)create\s+(NonClustered\s+)?index\s+.*`, sql)
  matches := regex.find_all_string_submatch_n(`(?i)create\s+(NonClustered\s+)?index\s+([^\s]+)\s+ON\s+([^\s(]+)\s?\(\[?([^])]+)+\]?\);`, sql, -1)[0]
  idx_name := matches[2]
  table_name := matches[3]
  column_names := strings.replace_n({" ": "_", ",": "__"}, matches[4])
  expected_index_name := concat("_", ["idx", table_name, column_names])
  idx_name != expected_index_name
  msg := sprintf("Index creation does not follow naming convention.\n SQL: '%s'\n expected index name: '%s'", [sql, expected_index_name])
}

This policy automatically detects and blocks non-compliant index names in developer-authored SQL.

In the demo, a policy violation appears if a developer’s generated index (e.g., person_job_title_idx) doesn’t match the convention.
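The policy's naming logic is easy to mirror outside of Rego for quick local checks before pushing a change. Here is a hypothetical Python sketch of the same convention (spaces in the column list become "_", commas become "__", and the expected name is idx_<table>_<columns>):

```python
def expected_index_name(table: str, columns: str) -> str:
    """Mirror the Rego policy's convention: idx_<table>_<columns>."""
    cols = columns.replace(",", "__").replace(" ", "_")
    return "_".join(["idx", table, cols])

def violates_convention(idx_name: str, table: str, columns: str) -> bool:
    """True when an index name would be denied by the policy."""
    return idx_name != expected_index_name(table, columns)

print(expected_index_name("person", "job_title"))
# idx_person_job_title
print(violates_convention("person_job_title_idx", "person", "job_title"))
# True
```

The demo's person_job_title_idx fails the check because the convention expects idx_person_job_title.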

Conclusion

Harness Database DevOps amplifies Liquibase’s value by automating changelog authoring, merging, Git commits, deployment, and pipeline orchestration—reducing human error, boosting speed, and ensuring every change is audit-ready and policy-compliant. Developers can focus on schema improvements, while automation and policy steps enable safe, scalable delivery.

Ready to modernize your database CI/CD?

Learn more about how you can improve your developer experience for database changes, or contact us to discuss your particular use case.
