
Engineering teams are generating more shippable code than ever before — and today, Harness is shipping five new capabilities designed to help teams release confidently. AI coding assistants lowered the barrier to writing software, and the volume of changes moving through delivery pipelines has grown accordingly. But the release process itself hasn't kept pace.
The evidence shows up in the data. In our 2026 State of DevOps Modernization Report, we surveyed 700 engineering teams about what AI-assisted development is actually doing to their delivery. One finding stands out: while 35% of the most active AI coding users are already releasing daily or more, those same teams have the highest rate of deployments needing remediation (22%) and the longest MTTR at 7.6 hours.
This is the velocity paradox: the faster teams can write code, the more pressure accumulates at the release, where the process hasn't changed nearly as much as the tooling that feeds it.
The AI Delivery Gap
What changed is well understood. For years, the bottleneck in software delivery was writing code. Developers couldn't produce changes fast enough to stress the release process. AI coding assistants changed that. Teams are now generating more change across more services, more frequently than before — but the tools for releasing that change are largely the same.
In the past, DevSecOps vendors built entire separate products to coordinate multi-team, multi-service releases. That made sense when CD pipelines were simpler. It doesn't make sense now. At AI speed, a separate tool means another context switch, another approval flow, and another human-in-the-loop at exactly the moment you need the system to move on its own.
The tools that help developers write code faster have created a delivery gap that only widens as adoption grows.
Today Harness is releasing five capabilities, all natively integrated into Continuous Delivery. Together, they cover the full arc of a modern release: coordinating changes across teams and services, verifying health in real time, managing schema changes alongside code, and progressively controlling feature exposure.
Release Orchestration replaces Slack threads, spreadsheets, and war-room calls that still coordinate most multi-team releases. Services and the teams supporting them move through shared orchestration logic with the same controls, gates, and sequence, so a release behaves like a system rather than a series of handoffs. And everything is seamlessly integrated with Harness Continuous Delivery, rather than in a separate tool.
AI-Powered Verification and Rollback connects to your existing observability stack, automatically identifies which signals matter for each release, and determines in real time whether a rollout should proceed, pause, or roll back. Most teams have rollback capability in theory. In practice it's an emergency procedure, not a routine one. Ancestry.com made it routine and saw a 50% reduction in overall production outages, with deployment-related incidents dropping significantly.
Database DevOps, now with Snowflake support, brings schema changes into the same pipeline as application code, so the two move together through the same controls with the same auditability. If a rollback is needed, the application and database schema can roll back together seamlessly. This matters especially for teams building AI applications on warehouse data, where schema changes are increasingly frequent and consequential.
Improved pipeline and policy support for feature flags and experimentation enables teams to deploy safely and release progressively to the right users, even as AI-generated code drives up the number of releases. Teams can quickly measure impact on technical and business metrics, and stop or roll back when results are off track, all within the familiar Harness interface they already use for CI/CD.
Warehouse-Native Feature Management and Experimentation lets teams test features and measure business impact directly with data warehouses like Snowflake and Redshift, without ETL pipelines or shadow infrastructure. This way they can keep PII and behavioral data inside governed environments for compliance and security.
These aren't five separate features. They're one answer to one question: can we safely keep going at AI speed?
Traditional CD pipelines treat deployment as the finish line. The model Harness is building around treats it as one step in a longer sequence: application and database changes move through orchestrated pipelines together, verification checks real-time signals before a rollout continues, features are exposed progressively, and experiments measure actual business outcomes against governed data.
A release isn't complete when the pipeline finishes. It's complete when the system has confirmed the change is healthy, the exposure is intentional, and the outcome is understood.
That shift from deployment to verified outcome is what Harness customers say they need most. "AI has made it much easier to generate change, but that doesn't mean organizations are automatically better at releasing it," said Marc Pearce, Head of DevOps at Intelliflo. "Capabilities like these are exactly what teams need right now. The more you can standardize and automate that release motion, the more confidently you can scale."
The real shift here is operational. The work of coordinating a release today depends heavily on human judgment, informal communication, and organizational heroics. That worked when the volume of change was lower. As AI development accelerates, it's becoming the bottleneck.
The release process needs to become more standardized, more repeatable, and less dependent on any individual's ability to hold it together at the moment of deployment. Automation doesn't just make releases faster. It makes them more consistent, and consistency is what makes scaling safe.
For Ancestry.com, implementing Harness helped them achieve 99.9% uptime by cutting outages in half while accelerating deployment velocity threefold.
At Speedway Motors, progressive delivery and 20-second rollbacks enabled a move from biweekly releases to multiple deployments per day, with enough confidence to run five to ten feature experiments per sprint.
AI made writing code cheap. Releasing that code safely, at scale, is still the hard part.
Harness Release Orchestration, AI-Powered Verification and Rollback, Database DevOps, Warehouse-Native Feature Management and Experimentation, and Improved Pipeline and Policy Support for FME are available now. Learn more and book a demo.

For the last decade, DevOps has been obsessed with speed – automating CI/CD, testing, infrastructure, and even feature rollout. But one critical layer has been left almost entirely manual: the database. Teams still write SQL scripts by hand, deploy them at midnight, and pray that rollback plans work. At Harness, we think it’s time to fix that.
Today, Harness is launching AI-Powered Database Migration Authoring, a new capability within Harness Database DevOps that brings “vibe coding for databases” to life – where developers can create safe, compliant database migrations simply by describing them in plain language.
This is the next step in Harness’s vision to bring automation to every stage of the software delivery lifecycle, from code to cloud to database.
Harness’s Database DevOps offering removes one of the last blockers in modern software delivery: slow, manual database schema changes.
Ask any engineering leader what slows down their release cycles, and the answer will sound familiar: “We can deploy apps fast, but database changes always hold us back.” While CI/CD transformed how applications are released, most teams still manage schema updates through SQL scripts, spreadsheet tracking, and late-stage approvals.
Harness closes this gap by treating database changes like application code. Updates are versioned in Git, validated with policy-as-code, deployed through governed pipelines, and rolled back automatically if needed. A unified dashboard provides full visibility into what is deployed where, enabling teams to compare environments and maintain a comprehensive audit trail.
Harness is the only DevOps platform with a fully integrated, enterprise-grade Database DevOps solution, not a plug-in or point tool. And customers need it! As one of the fastest-growing modules at Harness, Database DevOps delivers value to organizations like Athenahealth.
“Harness gave us a truly out-of-the-box solution with features we couldn’t get from Liquibase Pro or a homegrown approach. We saved months of engineering effort and got more for less – with better governance, orchestration, and visibility.”
— Daniel Gabriel, Senior Software Engineer, Athenahealth
AI has transformed how code is written, but software delivery remains stuck in the past. “Vibe coding” is speeding up creation, yet the systems that move code into production – including testing, security, and database delivery – haven’t kept pace.
In a recent Harness study, 63% of organizations report shipping code faster since adopting AI, but 72% have suffered at least one production incident caused by AI-generated code. The result is the AI Velocity Paradox: faster coding, slower delivery.
But there’s a solution. 83% of leaders agree that AI must extend across the entire SDLC to unlock its full potential. Database DevOps helps to close that gap by extending AI-powered automation and governance to the last mile of DevOps: the database.
With AI-Powered Database Migration Authoring, any developer can describe the database change they need in natural language, like –
“Create a table named animals with columns for genus_species and common_name. Then add a related table named birds that tracks unladen airspeed and proper name. Add rows for Captain Canary, African swallow, and European swallow.”
– and Harness will generate a compliant, production-ready migration complete with rollback scripts, validation, and Git integration.
Every migration is versioned, tested, governed, and fully auditable just like your application code.
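For the example prompt above, the generated changeset might look roughly like the following sketch. This is illustrative only: the id scheme, column types, and exact structure are assumptions, and actual output from Harness AI will vary.

```yaml
# Illustrative sketch of a generated Liquibase changeset for the
# "animals" table from the prompt; not actual Harness AI output.
databaseChangeLog:
  - changeSet:
      id: TICKET-123-create-animals   # hypothetical id convention
      author: harness-ai
      changes:
        - createTable:
            tableName: animals
            columns:
              - column:
                  name: id
                  type: int
                  autoIncrement: true
                  constraints:
                    primaryKey: true
              - column:
                  name: genus_species
                  type: varchar(100)
              - column:
                  name: common_name
                  type: varchar(100)
      rollback:
        - dropTable:
            tableName: animals
```

Note that the rollback script is generated alongside the change itself, so reversal is defined before the migration ever runs.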
Harness AI isn’t a generic code assistant. It’s trained on proven database management best practices and guided by your organization’s existing governance rules.
It understands keys, constraints, triggers, backward compatibility, and compliance standards. DBAs retain oversight through policy-as-code and automated approvals, ensuring governance never becomes a bottleneck.
This is more than an incremental feature – it’s a step toward AI-native DevOps, where systems understand intent, enforce policy, and automate delivery from code to cloud to database.
Harness Database DevOps now combines generative AI, policy-as-code, and CI/CD orchestration into one governed workflow. The result: faster releases, stronger governance, and fewer 2 a.m. rollbacks.
Harness’s AI Database Migration Authoring, like most of Harness AI, is powered by the Software Delivery Knowledge Graph and Harness’s MCP Server. This server knows about your database and your pipelines, and comes with baked-in best practices to help you rapidly transform your company’s DevOps using AI.
Below is a preview of how Harness AI takes a simple English-language prompt and generates a compliant database migration complete with validation, rollback, and GitOps integration.

The last mile of DevOps has just caught up.
With Harness Database DevOps and AI-Powered Database Migration Authoring, database delivery becomes automated, governed, and safe – finally a first-class citizen in the CI/CD pipeline.
Learn more about Harness Database DevOps and book a demo to see AI-Powered Database Migration Authoring in action.

This blog discusses how Harness Database DevOps can automate, govern, and deliver database schema changes safely, without having to manually author your database changes.
Adopting DevOps best practices for databases is critical for many organizations. With Harness Database DevOps, you can define, test, and govern database schema changes just like application code—automating the entire workflow from authoring to production while ensuring traceability and compliance. But we can do more: we can help your developers write their database changes in the first place.
In this blog, we'll walk through a concrete example, using YAML configuration and a real pipeline, to show how Harness empowers teams to capture schema changes automatically, merge them into a Git-tracked changelog, and enforce governance policies at deployment time.
To see this workflow in action, watch the companion demo:
In the demo, the pipeline captures changes, specifically adding a new column and index, and propagates those changes via Git before triggering further automated CI/CD, including rollback validation and governance.
Development teams adopting Liquibase for database continuous delivery can accelerate and standardize changeset authoring by leveraging Harness Database DevOps. By enabling developers to generate their changes by diffing DB environments, they no longer need to write SQL or YAML. This provides the benefits of state-based migrations and the benefits of script-based migrations at the same time: developers define the desired state of the database in an authoring environment, and the pipeline automatically generates the changes to get there.
The workflow starts by ensuring that your development and authoring environments are in sync with git, guaranteeing a clean baseline for change capture. The pipeline ensures the development environment reflects the current git state before capturing a new authoring snapshot. This allows developers to use their environment of choice—such as database UI tools—for schema design. To do this, we just run the apply step for the environment.
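A rough sketch of that apply step is below. The step type and field names here are assumptions modeled on the LiquibaseCommand steps shown later in this post; consult the Harness Database DevOps documentation for the exact schema.

```yaml
# Illustrative sketch: apply the Git-tracked changelog to the
# development environment before capturing a new snapshot.
# Step type and spec fields are assumptions, not verified syntax.
- step:
    type: DBSchemaApply
    name: Ensure Dev Matches Git
    identifier: apply_dev
    spec:
      dbSchema: pipeline_authored
      dbInstance: development
    timeout: 10m
```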
Harness uses the LiquibaseCommand step to snapshot the current schema in the authoring environment:
- step:
    type: LiquibaseCommand
    name: Authoring Schema Snapshot
    identifier: snapshot_authoring
    spec:
      connectorRef: account.harnessImage
      command: snapshot
      resources:
        limits:
          memory: 2Gi
          cpu: "1"
      settings:
        output-file: mySnapshot.json
        snapshot-format: json
      dbSchema: pipeline_authored
      dbInstance: authoring
      excludeChangeLogFile: true
    timeout: 10m
    contextType: Pipeline
Next, the pipeline diffs the snapshot against the development database, generating a Liquibase YAML changelog (diff.yaml) that describes all the changes made in the authoring environment. Again, this uses the LiquibaseCommand step:
- step:
    type: LiquibaseCommand
    name: Diff as Changelog
    identifier: diff_dev_as_changelog
    spec:
      connectorRef: account.harnessImage
      command: diff-changelog
      resources:
        limits:
          memory: 2Gi
          cpu: "1"
      settings:
        reference-url: offline:mssql?snapshot=mySnapshot.json
        author: <+pipeline.variables.email>
        label-filter: <+pipeline.variables.ticket_id>
        generate-changeset-created-values: "true"
        generated-changeset-ids-contains-description: "true"
        changelog-file: diff.yaml
      dbSchema: pipeline_authored
      dbInstance: development
      excludeChangeLogFile: true
    timeout: 10m
    when:
      stageStatus: Success
Your git changelog should include all changes ever deployed, so the pipeline merges the auto-generated diff.yaml into the master changelog with a Run step that uses yq for structured YAML manipulation. This shell script also echoes only the new changesets to the log for the user to view.
- step:
    type: Run
    name: Output and Merge
    identifier: Output_and_merge
    spec:
      connectorRef: Dockerhub
      image: mikefarah/yq:4.45.4
      shell: Sh
      command: |
        # Optionally annotate changesets
        yq '.databaseChangeLog.[].changeSet.comment = "<+pipeline.variables.comment>" | .databaseChangeLog.[] |= .changeSet.id = "<+pipeline.variables.ticket_id>-"+(path | .[-1])' diff.yaml > diff-comments.yaml
        # Merge new changesets into the main changelog
        yq -i 'load("diff-comments.yaml") as $d2 | .databaseChangeLog += $d2.databaseChangeLog' dbops/ensure_dev_matches_git/changelogs/pipeline-authored/changelog.yml
        # Output the merged changelog (for transparency/logging)
        cat dbops/ensure_dev_matches_git/changelogs/pipeline-authored/changelog.yml
Once merged, the pipeline commits the updated changelog to your Git repository.
- step:
    type: Run
    name: Commit to Git
    identifier: Commit_to_git
    spec:
      connectorRef: Dockerhub
      image: alpine/git
      shell: Sh
      command: |
        cd dbops/ensure_dev_matches_git
        git config --global user.email "<+pipeline.variables.email>"
        git config --global user.name "Harness Pipeline"
        git add changelogs/pipeline-authored/changelog.yml
        git commit -m "Automated changelog update for ticket <+pipeline.variables.ticket_id>"
        git push
This push kicks off further CI/CD workflows for deployment, rollback testing, and integration validation in the development environment. The git repo is structured using a different branch for each environment, so promoting through staging to prod is accomplished by merging PRs.
To ensure regulatory and organizational compliance, teams can automatically enforce their policies at deployment time. The demo shows a policy being violated and the change being fixed to satisfy it. The policy from the demo, shown below, enforces a particular naming convention for indexes.
Example: SQL Index Naming Convention Policy
package db_sql

deny[msg] {
    some l
    sql := lower(input.sqlStatements[l])
    regex.match(`(?i)create\s+(NonClustered\s+)?index\s+.*`, sql)
    matches := regex.find_all_string_submatch_n(`(?i)create\s+(NonClustered\s+)?index\s+([^\s]+)\s+ON\s+([^\s(]+)\s?\(\[?([^])]+)+\]?\);`, sql, -1)[0]
    idx_name := matches[2]
    table_name := matches[3]
    column_names := strings.replace_n({" ": "_", ",": "__"}, matches[4])
    expected_index_name := concat("_", ["idx", table_name, column_names])
    idx_name != expected_index_name
    msg := sprintf("Index creation does not follow naming convention.\n SQL: '%s'\n expected index name: '%s'", [sql, expected_index_name])
}
This policy automatically detects and blocks non-compliant index names in developer-authored SQL.
In the demo, a policy violation appears if a developer’s generated index (e.g., person_job_title_idx) doesn’t match the convention.
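To make the convention concrete, here is an illustrative pair of statements (table and column names hypothetical):

```sql
-- Blocked: index name does not follow the idx_<table>_<columns> convention
CREATE INDEX person_job_title_idx ON person ([job_title]);

-- Allowed: matches the expected name the policy computes
CREATE INDEX idx_person_job_title ON person ([job_title]);
```

Because the expected name is derived from the table and column names, the policy needs no per-index configuration.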
Harness Database DevOps amplifies Liquibase’s value by automating changelog authoring, merging, Git commits, deployment, and pipeline orchestration—reducing human error, boosting speed, and ensuring every change is audit-ready and policy-compliant. Developers can focus on schema improvements, while automation and policy steps enable safe, scalable delivery.
Ready to modernize your database CI/CD?
Learn more about how you can improve your developer experience for database changes or contact us to discuss your particular use case.
Check out State of the Developer Experience 2024


Testing database changes against production-like data removes risk from your delivery process, but to be effective, it must be orchestrated, governed, and automated. Manual scripts and ad-hoc checks lack the repeatability and auditability required for modern delivery practices.
Harness Database DevOps provides a framework to embed production data testing into your CI/CD pipelines, enabling you to manage database schema changes with the same rigor as application code. Harness DB DevOps is designed to bridge development, operations, and database teams by bringing visibility, governance, and standardized execution to database changes.
Instead of treating testing with production data as an afterthought, you can define it as a pipeline stage that executes reliably across environments.
To incorporate production data testing into your delivery process, you define a Harness Database DevOps pipeline with structured, repeatable steps. The result is a governed testing model that captures evidence of correctness before any change ever reaches production.
In Harness Database DevOps, you begin by configuring the necessary database instances and schemas.
For production data testing, you provision two isolated instances seeded with a snapshot of production data (secured and masked as needed). These instances are not customer-facing; they serve as ephemeral test targets.
This structure sets up identical baselines for controlled experimentation.
Harness Database DevOps lets you define a deployment pipeline that incorporates database and application changes in the same workflow:
Using Liquibase or Flyway via Harness, the pipeline applies schema changes to Instance A while Instance B remains the baseline.
This step executes the migration in a real, production-scale context, capturing performance, constraint behaviors, and other runtime characteristics.
A powerful capability of Harness Database DevOps is automated rollback testing within the pipeline.
Testing rollback paths removes the assumption that reversal will work in production, a key risk often untested in traditional workflows.
After rollback, you compare Instance A (post-rollback) with Instance B (untouched).
If disparities are detected, the pipeline can fail early, prompting review and remediation before production deployment.
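The comparison itself can be as simple as diffing two schema snapshots. The sketch below shows the idea in Python, assuming each snapshot has already been extracted into a mapping of table names to column definitions (how you produce the snapshots, e.g. via a Liquibase snapshot command, is out of scope here):

```python
# Illustrative drift check between two schema snapshots, where each
# snapshot is a dict of table name -> {column name: column type}.
def schema_drift(baseline, candidate):
    """Return a list of human-readable differences; empty means no drift."""
    drift = []
    for table in sorted(set(baseline) | set(candidate)):
        if table not in candidate:
            drift.append(f"missing table: {table}")
        elif table not in baseline:
            drift.append(f"unexpected table: {table}")
        else:
            b_cols, c_cols = baseline[table], candidate[table]
            for col in sorted(set(b_cols) | set(c_cols)):
                if b_cols.get(col) != c_cols.get(col):
                    drift.append(
                        f"{table}.{col}: {b_cols.get(col)} != {c_cols.get(col)}"
                    )
    return drift

# Hypothetical snapshots: Instance B (baseline) vs Instance A after rollback.
baseline = {"person": {"id": "int", "name": "varchar(100)"}}
after_rollback = {"person": {"id": "int", "name": "varchar(100)",
                             "job_title": "varchar(50)"}}

print(schema_drift(baseline, after_rollback))
```

A non-empty result here would fail the pipeline stage early, before the change is ever promoted toward production.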
This approach builds evidence rather than assumptions about the quality and safety of database changes.
The updated workflow aligns with the documented capabilities of Harness Database DevOps.
Importantly, the workflow does not assume native data cloning features within Harness itself. Instead, it positions data-centric operations (cloning and validation) as composable steps in a broader automation pipeline.
Embedding production data testing inside Harness Database DevOps pipelines delivers measurable outcomes: earlier detection of risky changes, validated rollback paths, and audit-ready evidence for every migration.
This integrated, pipeline-oriented approach elevates database change management into a disciplined engineering practice rather than a set of isolated tasks.
Database changes do not fail because teams lack skill or intent. They fail because uncertainty is tolerated too late in the delivery cycle when production data, scale, and history finally collide with untested assumptions.
Testing with production data, when executed responsibly, shifts database delivery from hope-based validation to evidence-based confidence. It allows teams to validate not just that a migration applies, but that it performs, rolls back cleanly, and leaves no hidden drift behind. That distinction is the difference between routine releases and high-severity incidents.
By operationalizing this workflow through Harness Database DevOps, organizations gain a governed, repeatable way to validate that changes apply and perform as expected, roll back cleanly, and leave no hidden drift behind.
This is not about adding more processes. It is about removing uncertainty from the most irreversible layer of your system.
Explore Harness Database DevOps to see how production-grade database testing, rollback validation, and governed pipelines can fit seamlessly into your existing workflows. The fastest teams don’t just deploy quickly; they deploy with confidence.






Data platforms are evolving rapidly, and Snowflake has become a cornerstone of modern data architectures. Teams rely on Snowflake to power analytics, machine learning, and business intelligence, but managing data warehouse changes in a safe, repeatable way can still be a challenge.
Today, we’re excited to announce Snowflake support for Harness DB DevOps, enabling teams to bring the same automation, governance, and reliability they expect from application DevOps to their Snowflake data warehouse changes. By combining application and Snowflake deployments into a consistent pipeline, teams can release confidently at speed.
With this release, organizations can now manage Snowflake schema changes using pipeline-driven database DevOps workflows directly within Harness.
Snowflake empowers teams to move fast with data, but data warehouse change management often remains a manual or fragmented process.
Common challenges include hand-written SQL scripts, spreadsheet tracking, late-stage approvals, and limited visibility into what is deployed where.
Without a standardized process, teams struggle to balance speed, control, and reliability.
Harness DB DevOps now supports Snowflake as a first-class database platform, allowing teams to manage schema changes through automated, pipeline-driven workflows.
This means Snowflake schema changes can now be treated just like application code—versioned, tested, and promoted through environments using Harness pipelines.
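As a sketch of what "treated like application code" means in practice, a Snowflake change might live in a Liquibase-style changelog like the one below (table, column, and id names are hypothetical):

```yaml
# Illustrative changeset for a Snowflake schema change, versioned in Git
# and promoted through environments by the pipeline.
databaseChangeLog:
  - changeSet:
      id: add-customer-lifetime-value   # hypothetical id
      author: data-team
      changes:
        - addColumn:
            tableName: CUSTOMER_METRICS
            columns:
              - column:
                  name: LIFETIME_VALUE
                  type: NUMBER(12,2)
      rollback:
        - dropColumn:
            tableName: CUSTOMER_METRICS
            columnName: LIFETIME_VALUE
```

Because the rollback is declared with the change, the warehouse can be reverted through the same governed pipeline that deployed it.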
With Snowflake support, teams can version schema changes in Git, deploy them through automated pipelines with approval gates and policy enforcement, and audit every change with full pipeline visibility.
The result is a modern Database DevOps workflow for Snowflake that helps teams release faster without sacrificing reliability.
Harness DB DevOps can now connect directly to Snowflake environments, allowing teams to deploy and manage schema changes seamlessly.
Use Harness pipelines to automate Snowflake data warehouse deployments with repeatable workflows across environments.
Leverage Harness approval gates, role-based access controls, and policy enforcement to ensure safe production changes.
Track every Snowflake change deployment with full pipeline visibility and audit logs.
As organizations increasingly rely on Snowflake to power data-driven applications, database changes need the same rigor and automation as application deployments.
By bringing Snowflake into Harness DB DevOps, teams can apply the same rigor and automation to data warehouse changes as they do to application deployments.
Snowflake support for Harness DB DevOps is now available.
To get started with Snowflake and Harness DB DevOps, check out our documentation or schedule a demo.


Database systems store some of the most sensitive data of an organization such as PII, financial records, and intellectual property, making strong database governance non-negotiable. As regulations tighten and audit expectations increase, teams need governance that scales without slowing delivery.
Harness Database DevOps addresses this by applying policy-driven governance using Open Policy Agent (OPA). With OPA policies embedded directly into database pipelines, teams can automatically enforce rules, capture audit trails, and stay aligned with compliance requirements. This blog outlines how to use OPA in Harness to turn database compliance from a manual checkpoint into a built-in, scalable part of your DevOps workflow.
Organizations face multiple challenges when navigating database compliance:
These challenges highlight the necessity of embedding governance directly into database development and deployment pipelines, rather than treating compliance as a reactive checklist.
Harness Database DevOps is designed to offer a comprehensive solution to database governance - one that aligns automation with compliance needs. It enables teams to adopt policy-driven controls on database change workflows by integrating the Open Policy Agent (OPA) engine into the core of database DevOps practices.
What is OPA and Policy as Code?
Open Policy Agent (OPA) is an open-source, general-purpose policy engine that decouples policy decisions from enforcement logic, enabling centralized governance across infrastructures and workflows. Policies in OPA are written in the Rego declarative language, allowing precise expression of rules governing actions, access, and configurations.
Harness implements Policy as Code through OPA, enabling teams to store, test, and enforce governance rules directly within the database DevOps lifecycle. This model ensures that compliance controls are consistent, auditable, and automatically evaluated before changes reach production.
Here’s a structured approach to implementing database governance with OPA in Harness:
Start by cataloging your regulatory obligations and internal governance policies. Examples include:
Translate these requirements into concrete, testable rules that can be expressed in Rego.
Within the Harness Policy Editor, define OPA policies that codify governance rules. For example, a policy might block any migrations containing operations that remove columns in production environments without explicit DBA approval.
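In Harness, such a rule would be written in Rego, but the decision logic itself is straightforward. Here is a hedged Python sketch of the drop-column rule described above; the migration and context shapes are hypothetical illustrations, not Harness's actual policy API:

```python
# Sketch of the governance rule described above: block migrations that
# drop columns in production unless a DBA has explicitly approved them.
# The dict shapes here are illustrative, not a real Harness interface.

def violations(migration: dict, context: dict) -> list[str]:
    problems = []
    if context.get("environment") == "production" and not context.get("dba_approved"):
        for stmt in migration.get("statements", []):
            if "DROP COLUMN" in stmt.upper():
                problems.append(f"drop-column blocked in production: {stmt}")
    return problems

migration = {"statements": ["ALTER TABLE users DROP COLUMN ssn"]}

# The same migration is blocked without approval and allowed with it.
blocked = violations(migration, {"environment": "production", "dba_approved": False})
allowed = violations(migration, {"environment": "production", "dba_approved": True})
```

The Rego version expresses the same condition declaratively, which is what makes the rule portable across pipelines and teams.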
Harness policies are modular and reusable; you can import and extend them as part of broader governance packages. This allows cross-team reuse and centralized management of rules. Key aspects include:
By expressing governance as code, you ensure consistency and remove ambiguity in policy enforcement.
Policies can be linked to specific triggers within your database deployment workflow: for instance, evaluating rules before a migration is applied or before a pipeline advances to production. This integration ensures that non-compliant changes are automatically blocked while compliant changes proceed seamlessly, maintaining the balance between speed and control.
Harness evaluates OPA policies at defined decision points in your pipeline, such as pre-deployment checks. This prevents risky actions, enforces access controls, and aligns every deployment with governance objectives without manual intervention.
Audit Trails and Traceability
Every policy evaluation is logged, creating an auditable trail of who changed what, when, and why. These logs serve as critical evidence during compliance audits or internal reviews, reducing the overhead and risk associated with traditional documentation practices.
By enforcing the principle of least privilege, policies ensure that users and applications possess only the necessary permissions for their specific roles. This restriction on access is crucial for minimizing the potential attack surface and maintaining compliance with regulatory requirements for data access governance.
Database governance is an essential pillar of enterprise compliance strategies. By embedding OPA-based policy enforcement within Harness Database DevOps, organizations can automate compliance controls, minimize risk, and maintain developer productivity. Policy as Code provides a scalable, auditable, and consistent framework that aligns with both regulatory obligations and the need for agile delivery.
Transforming database governance from a manual compliance burden into an automated, integrated practice empowers teams to innovate securely, confidently, and at scale - ensuring that every change respects the policies that protect your data, your customers, and your brand.


There was a time when database design was an event. It happened once, early in a project, often before the first line of application code was written. Architects would gather with domain experts, sketch entities and relationships, debate normalization levels, and arrive, after weeks of discussion, at what was believed to be the schema. Once approved, that schema was treated as immutable.
This mindset assumed that the future was predictable. But it rarely is. Modern database design is no longer about defining a perfect schema upfront, but about enabling safe, continuous evolution as systems and requirements change.
At the beginning, requirements are usually clear and limited. The schema reflects the system’s first understanding of the domain.
CREATE TABLE users (
id SERIAL PRIMARY KEY,
email VARCHAR(255) NOT NULL UNIQUE,
created_at TIMESTAMP NOT NULL DEFAULT NOW()
);
This design is clean, minimal, and correct, for now. It models what the system knows today: users exist, they have an email, and they were created at a specific time.
At this stage, the schema feels complete, although it never is.
As the product matures, new questions emerge. The business wants to personalize communication. Support wants to address users by name. Marketing wants segmentation. The schema evolves, not because it was poorly designed, but because the system learned something new.
ALTER TABLE users
ADD COLUMN first_name VARCHAR(100),
ADD COLUMN last_name VARCHAR(100);
This change is small, additive, and safe. No existing behavior breaks. No data is lost. The schema now captures richer context without invalidating earlier assumptions.
This is evolutionary design in its simplest form: adapting without disruption.
As usage grows, teams discover new workflows. Users can now deactivate their accounts. Regulatory requirements demand traceability.
Instead of redefining the table, the schema evolves to support new behavior.
ALTER TABLE users
ADD COLUMN status VARCHAR(20) NOT NULL DEFAULT 'ACTIVE',
ADD COLUMN deactivated_at TIMESTAMP;
Importantly, this change preserves backward compatibility. Existing queries continue to work. New logic can gradually adopt the new fields. This approach reflects database schema evolution best practices, where changes are incremental, backward-compatible, and safely deployable through CI/CD pipelines. Evolutionary design favors extension over replacement.
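The backward-compatibility claim is easy to verify: queries written before the change keep working after it. A small illustration, using in-memory SQLite as a stand-in for a production database (the types are simplified accordingly):

```python
import sqlite3

# Simulate additive schema evolution: a query written against the v1
# schema keeps working after the v2 migration is applied.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL UNIQUE)")
db.execute("INSERT INTO users (email) VALUES ('a@example.com')")

# The evolutionary change: additive columns with a safe default.
db.execute("ALTER TABLE users ADD COLUMN status VARCHAR(20) NOT NULL DEFAULT 'ACTIVE'")
db.execute("ALTER TABLE users ADD COLUMN deactivated_at TIMESTAMP")

# Old code path: the unchanged query still returns the same rows.
old_result = db.execute("SELECT id, email FROM users").fetchall()

# New code path: existing rows picked up the default automatically.
new_result = db.execute("SELECT email, status FROM users").fetchall()
```

Both queries succeed against the evolved schema, which is exactly what makes additive changes safe to deploy continuously.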
With scale comes performance pressure. Queries that once ran instantly now struggle. Reporting workloads introduce new access patterns.
Rather than redesigning everything, the schema evolves structurally to meet new demands.
CREATE INDEX idx_users_status ON users (status);
This change does not alter the data model conceptually, but it reflects a deeper understanding of how the system is used. Design evolves not just for correctness, but for operational reality.
Database design is no longer theoretical; it is informed by production behavior.
Eventually, teams outgrow early assumptions. The single users table can no longer represent multiple user roles, tenants, or identity providers. The model needs refinement. Evolutionary design handles this carefully, through parallel structures and gradual migration.
CREATE TABLE user_profiles (
user_id INT PRIMARY KEY REFERENCES users(id),
display_name VARCHAR(150),
preferences JSONB,
updated_at TIMESTAMP NOT NULL DEFAULT NOW()
);
Instead of overloading the original table, the design evolves by extracting responsibility. Existing functionality remains stable while new capabilities move forward. At no point was a “big rewrite” required.
As changes accumulate, complexity shifts from design to operations. Teams struggle to answer basic questions:
This is where evolutionary design demands discipline. Small changes only remain safe when they are visible, validated, and governed.
Modern database design extends beyond tables and columns. It includes how changes are reviewed, tested, approved, and promoted. As applications adopt CI/CD and ship continuously, databases often remain the slowest and riskiest part of the release. Manual migrations, limited visibility, and fear of rollbacks turn schema changes into operational bottlenecks.
Database DevOps addresses this gap by applying software delivery discipline to database changes:
By embedding database schema evolution into CI/CD pipelines, teams reduce deployment risk while increasing delivery velocity. Platforms like Harness Database DevOps enable this by combining state awareness, controlled execution, and auditability, making database changes predictable, repeatable, and safe.
Each SQL change tells a story:
A database is not a monument to early decisions. It is a living artifact that reflects the system’s understanding of its domain at every point in time.
Database design evolution is not a failure of planning; it is evidence of adaptation.
The most resilient systems are not those with perfect initial schemas, but those designed to evolve safely and continuously. By embracing incremental change, versioned history, and automated governance, teams align database design with the realities of modern software delivery.
In a world where applications never stop shipping, database design cannot remain static. It must evolve, with confidence, control, and clarity, supported by Database DevOps practices and platforms such as Harness Database DevOps.
Because in the end, the schema is not the design. The ability to evolve it safely is.


Harness Database DevOps is introducing an open source native MongoDB executor for Liquibase Community Edition. The goal is simple: make MongoDB workflows easier, fully open, and accessible for teams already relying on Liquibase without forcing them into paid add-ons.
This launch focuses on removing friction for open source users, improving MongoDB success rates, and contributing meaningful functionality back to the community.
Teams using MongoDB often already maintain scripts, migrations, and operational workflows. However, running them reliably through Liquibase Community Edition has historically required workarounds, limited integrations, or commercial extensions.
This native executor changes that. It allows teams to:
This is important because MongoDB adoption continues to grow across developer platforms, fintech, eCommerce, and internal tooling. Teams want consistency: application code, infrastructure, and databases should all move through the same automation pipelines. The executor helps bring MongoDB into that standardized DevOps model.
It also reflects a broader philosophy: core database capabilities should not sit behind paywalls when the community depends on them. By open-sourcing the executor, Harness is enabling developers to move faster while keeping the ecosystem transparent and collaborative.
With the native MongoDB executor:
This improves the success rate for MongoDB users adopting Liquibase because the workflow becomes familiar rather than forced. Instead of adapting MongoDB to fit the tool, the tool now works with MongoDB.
1. Getting started is straightforward. The Liquibase MongoDB extension is hosted on the HAR registry and can be downloaded with the following command:
curl -L \
"https://us-maven.pkg.dev/gar-prod-setup/harness-maven-public/io/harness/liquibase-mongodb-dbops-extension/1.1.0-4.24.0/liquibase-mongodb-dbops-extension-1.1.0-4.24.0.jar" \
-o liquibase-mongodb-dbops-extension-1.1.0-4.24.0.jar
2. Add the extension to Liquibase: place the downloaded JAR file into the Liquibase library directory (for example, LIQUIBASE_HOME/lib/).
3. Configure Liquibase: Update the Liquibase configuration to point to the MongoDB connection and changelog files.
4. Run migrations: Use the "liquibase update" command and Liquibase Community will now execute MongoDB scripts using the native executor.
Migration adoption often stalls when teams lack a clean way to generate changelogs from an existing database. To address this, Harness is also sharing a Python utility that mirrors the behavior of "generate-changelog" for MongoDB environments.
The script:
This reduces onboarding friction significantly. Instead of starting from scratch, teams can bootstrap changelogs directly from production-like environments. It bridges the gap between legacy MongoDB setups and modern database DevOps practices.
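The utility itself ships separately, but the core idea — deriving a changelog from what already exists — can be sketched. The helper below is a hypothetical illustration (function name, input shape, and changeset fields are all assumptions, not the real script): it takes a snapshot of collection metadata and emits Liquibase-style changesets, where the real utility would inspect a live MongoDB instance.

```python
# Hypothetical sketch of changelog bootstrapping: turn a snapshot of
# existing MongoDB collections into Liquibase-style changesets. The
# real utility inspects a live database; here the snapshot is input.

def bootstrap_changelog(collections: dict, author: str = "generated") -> list[dict]:
    changesets = []
    for i, (name, spec) in enumerate(sorted(collections.items()), start=1):
        changeset = {"id": f"baseline-{i}", "author": author,
                     "changes": [{"createCollection": {"collectionName": name}}]}
        # Preserve existing indexes so the baseline is faithful.
        for index in spec.get("indexes", []):
            changeset["changes"].append(
                {"createIndex": {"collectionName": name, "keys": index}})
        changesets.append(changeset)
    return changesets

snapshot = {"users": {"indexes": [{"email": 1}]}, "orders": {"indexes": []}}
changelog = bootstrap_changelog(snapshot)
```

Serialized to YAML or JSON, changesets like these become the baseline that future migrations build on.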
The intent is not just to release a tool. The intent is to strengthen the open ecosystem.
Harness believes:
By contributing a native MongoDB executor:
This effort also reinforces Harness as an active open source contributor focused on solving real developer problems rather than monetizing basic functionality.
The native executor, combined with changelog generation support, provides:
Together, these create one of the most functional open source MongoDB integrations available for Liquibase Community users. The objective is clear: make it the default path developers discover when searching for Liquibase MongoDB workflows.
Discover the open-source MongoDB native executor. Teams can adopt it in their workflows, extend its capabilities, and contribute enhancements back to the project. Progress in database DevOps accelerates when the community collaborates and builds in the open.


As modern organizations continue their shift toward microservices, distributed systems, and high-velocity software delivery, NoSQL databases have become strategic building blocks. Their schema flexibility, scalability, and high throughput empower developers to move rapidly - but they also introduce operational, governance, and compliance risks. Without structured database change control, even a small update to a NoSQL document, key-value pair, or column family can cascade into production instability, data inconsistency, or compliance violations.
To sustain innovation at scale, enterprises need disciplined database change control for NoSQL - not as a bottleneck, but as an enabler of secure and reliable application delivery.
Unlike relational systems, NoSQL databases place schema flexibility in the hands of developers. And the enterprises that rely on NoSQL databases at scale are discovering the following truths:
With structured change control:
NoSQL’s agility remains intact, while reliability, safety, and traceability are added.
To eliminate risk and release bottlenecks, NoSQL change control needs to operate inside CI/CD pipelines - not outside them. This ensures that:
A database change ceases to be a manual, tribal-knowledge activity and becomes a first-class software artifact - designed, tested, versioned, deployed, and rolled back automatically.
Harness Database DevOps extends CI/CD best practices to NoSQL by providing automated delivery, versioning, governance, and observability across the entire change lifecycle, including MongoDB. Instead of treating database changes as a separate operational track, Harness unifies database evolution with modern engineering practices:
This unification allows enterprises to move fast and maintain control, without rewriting how teams work.
High-growth teams that adopt change control for NoSQL environments report:
In short, the combination of NoSQL flexibility and automated governance allows enterprises to scale without trading speed for stability.
NoSQL databases have become fundamental to modern application architectures, but flexibility without control introduces operational risk. Implementing structured database change control - supported by CI/CD automation, runtime policy enforcement, and data governance - ensures that NoSQL deployments remain safe, compliant, and resilient even at scale.
Harness Database DevOps provides a unified platform for automating change delivery, enforcing compliance dynamically, and securing the complete database lifecycle - without slowing down development teams.


As organizations double down on cloud modernization, Google Cloud’s AlloyDB for PostgreSQL is quickly becoming the preferred engine for mission-critical applications. Its high-performance, PostgreSQL-compatible architecture offers unparalleled scalability, yet managing schema changes, rollouts, and governance can still be challenging at enterprise scale.
With Harness Database DevOps now supporting AlloyDB, engineering teams can unify their end-to-end database delivery lifecycle under one automated, secure, and audit-ready platform. This deep integration enables you to operationalize AlloyDB migrations using the same GitOps, CI/CD, and governance workflows already powering your application deployments.
AlloyDB offers a distributed PostgreSQL-compatible engine built for scale, analytical performance, and minimal maintenance overhead. It introduces capabilities such as:
Harness simplifies how teams establish connectivity with AlloyDB, manage authentication, and run PostgreSQL-compatible operations through Liquibase or Flyway. For the full setup instructions, refer to the AlloyDB Configuration Guide; it provides end-to-end guidance, including connection requirements, JDBC formats, network prerequisites, and best-practice deployment patterns, ensuring teams can onboard AlloyDB with confidence and operational rigor.
Once the connection is established, AlloyDB benefits from the same enterprise-grade automation that Harness provides across all supported engines. This includes:
Harness abstracts operational complexity, ensuring that every AlloyDB schema change is predictable, auditable, and aligned with platform engineering standards.
Organizations adopting this integration typically see:
AlloyDB’s performance and elasticity give teams a powerful foundation for modern application workloads. Harness DB DevOps amplifies this by providing consistency, guardrails, and automation across environments.
Together, they unlock a future-ready workflow where:
As cloud-native architectures continue to evolve, Harness and AlloyDB create a strategic synergy, making database delivery more scalable, more secure, and more aligned with modern DevOps principles.
Harness leverages a secure JDBC connection using standard PostgreSQL drivers. All credentials are stored in encrypted secrets managers, and communication occurs through the Harness Delegate running inside your VPC, ensuring zero-trust alignment and no data egress exposure.
Yes. Since AlloyDB is fully PostgreSQL-compatible, your existing Liquibase or Flyway changesets, versioning strategies, and rollback workflows operate seamlessly. Harness simply orchestrates them with CI/CD, GitOps, and governance layers.
Harness provides enterprise-grade audit logs, approval gates, policy-as-code (OPA), and environment-specific guardrails. Every migration, manual or automated, is fully traceable, ensuring regulatory compliance across environments.
Absolutely. Harness enables consistent dev → test → staging → production promotions with parameterized pipelines, drift detection, and automated validation steps. Each promotion is version-controlled and follows your organization’s release governance.


In every engineering organization, there’s an invisible tug-of-war that plays out almost daily: developers racing to ship features, and DBAs guarding the gates of data stability. Both teams work toward the same outcome: reliable, high-performing software. But their paths often diverge at the intersection of change.
This is where friction brews due to process, pressure, and perspective, and every failed release, every schema rollback, and every late-night incident only deepens the misunderstanding. But it’s also where the opportunity for transformation lies.
Imagine this:
A developer finishes a new feature, ready to deploy it before the sprint demo. Their code depends on a schema change: a new column, a renamed table, maybe a simple constraint. It’s small, they think. It should be safe.
Across the room, the DBA frowns. They know that a “small change” can cascade into a large issue. One missing index or wrong default value can slow down queries, lock tables, or even cause an outage. Their instinct is to review, test, and double-check before touching production.
Neither is wrong. The developer is driven by speed. The DBA is anchored in safety.
What’s missing is a bridge, a shared workflow that lets both move with confidence.
Many teams have lived through this story: a migration script runs perfectly in staging but fails in production. The rollback doesn’t trigger as planned. Dashboards turn red. The chat channels fill with messages.
Developers scramble to patch the issue. DBAs comb through logs, searching for a root cause. It’s chaos, not because someone made a mistake, but because the process didn’t protect them.
These moments are never about blame. They’re about visibility and coordination. A system that doesn’t give both teams a common view of database change is like flying without radar: fast, but blind.
DevOps revolutionized how we ship applications. Continuous Integration and Continuous Delivery (CI/CD) pipelines made deployments faster, safer, and traceable. Yet while code pipelines evolved, the database layer stayed behind: guarded, manual, and slow.
This is where modern Database DevOps steps in, not to replace DBAs, but to empower them.
Platforms like Harness Database DevOps are built around collaboration. They bring schema changes into the same pipeline as code, with visibility, approvals, and guardrails for both sides.
Instead of asking developers to “just trust the DBAs” or DBAs to “just approve faster,” it lets the process speak for itself through automation, audit trails, and predictable rollbacks.
Database DevOps gives both developers and DBAs a common ground. Developers can submit migration scripts confidently, knowing every step will be verified in a controlled environment. DBAs can set policies, enforce standards, and still sleep well at night knowing rollbacks and validations are built in.
The process turns from a tug-of-war into a handshake. Instead of “your script broke production,” the conversation shifts to “our workflow caught this early.” That’s the magic of shared context - empathy through automation.
In essence, “DBA vs Dev” was never a rivalry, it was a misalignment of priorities. When process, empathy, and automation align, the divide disappears. What remains is a shared mission: shipping change that both teams can stand behind, confidently and together. Because the future of database delivery isn’t about faster migrations or stricter reviews, it’s about harmony between precision and speed, between caution and innovation, between humans and the systems they build.
1. What causes tension between developers and DBAs?
Different priorities, speed versus safety, often create friction. Developers aim for agility, while DBAs focus on data integrity and performance.
2. How can Database DevOps reduce conflicts?
It introduces shared workflows, automated validation, and clear governance, allowing both teams to collaborate instead of working in isolation.
3. Is automation replacing DBAs?
Not at all. Automation frees DBAs from repetitive tasks so they can focus on strategy, optimization, and data reliability.
4. How does Harness Database DevOps support collaboration?
It unifies developers and DBAs under one automated pipeline, with versioning, approvals, rollbacks, and clear visibility for every schema change.
5. What’s the best way to start adopting Database DevOps?
Begin small, automate a single schema change pipeline, establish review checkpoints, and expand as confidence grows. Gradual adoption builds lasting trust.


Every developer knows this story.
You’ve automated everything: your builds, tests, deployments. Application changes flow through CI/CD pipelines like clockwork. And then comes the dreaded pull request with a database change.
Suddenly, the rhythm breaks. A DBA review gets stuck. Someone asks, “Did we test that ALTER TABLE script?” Another person hesitates: “What if it breaks production?” And just like that, the release grinds to a halt. That’s the silent bottleneck many teams face: the database world was never built for the kind of speed DevOps demands.
That’s why Database DevOps and Database Migration Systems matter. They look similar at first glance, but they solve different parts of the same story. Together, they turn database delivery from a blocker into a seamless extension of your CI/CD flow.
DevOps, at its core, is about collaboration and automation. It’s not just pipelines, it’s people, process, and culture. Now imagine applying that same philosophy to databases. That’s Database DevOps.
It’s the belief that your database deserves the same level of care as your application code: versioning, testing, governance, and continuous delivery. Database DevOps focuses on how teams work:
It’s not about replacing DBAs or loosening standards. It’s about balance: blending speed with safety, agility with accountability. When teams adopt Database DevOps, they stop treating databases as static infrastructure and start treating them as living, evolving parts of the product.
If Database DevOps defines the culture, then Database Migration Systems define the execution. Tools like Liquibase OSS and Flyway track every schema change as a versioned script. They make database evolution predictable and repeatable.
Each migration acts like a checkpoint, a recorded story of how your schema grew over time. Migration systems bring order to the chaos:
But here’s the limitation: migrations only manage scripts, not teams.
They track changes, but they don’t tell you who approved them, how they fit into your CI/CD process, or whether they comply with policies.
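What a migration system does under the hood is mostly careful bookkeeping: a ledger of which versioned scripts have been applied, with checksums so that edits to already-applied history are detected. A minimal sketch of that mechanism (the function and data shapes are illustrative, not Liquibase's or Flyway's actual internals):

```python
import hashlib

# Minimal sketch of migration-system bookkeeping: an ordered ledger of
# applied scripts, with checksums so edited history is detected.

def checksum(sql: str) -> str:
    return hashlib.sha256(sql.encode()).hexdigest()[:12]

def pending(migrations: dict, ledger: dict) -> list[str]:
    """Return versions not yet applied; fail if an applied script changed."""
    for version, digest in ledger.items():
        if checksum(migrations[version]) != digest:
            raise RuntimeError(f"checksum mismatch for applied migration {version}")
    return [v for v in sorted(migrations) if v not in ledger]

migrations = {
    "V1": "CREATE TABLE users (id INT PRIMARY KEY, email TEXT)",
    "V2": "ALTER TABLE users ADD COLUMN status TEXT",
}
ledger = {"V1": checksum(migrations["V1"])}  # V1 already applied
todo = pending(migrations, ledger)
```

Notice what this ledger cannot answer: who approved V2, whether it passed policy checks, or which environments it has reached. That orchestration layer is what Database DevOps adds on top.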
That’s where Database DevOps steps in, it connects technical precision with operational discipline.
Think of Database DevOps and migration systems as two halves of a complete workflow.

A migration tool alone is like an engine without a driver. Database DevOps alone is like a map without wheels. Together, they help you go further faster, and with fewer risks.
Modern platforms like Harness Database DevOps combine both worlds: the cultural framework of DevOps and the structure of migrations. Harness doesn’t just execute scripts; it orchestrates the entire journey of a database change.
This is where the philosophy of Database DevOps becomes tangible. You gain clarity, and move smarter. With Harness, database delivery finally feels like an extension of your CI/CD pipeline, not an exception to it.
Database DevOps and Database Migration Systems are not competing paradigms; they are complementary pillars of a modern delivery strategy. Migration systems give you precision and consistency. Database DevOps gives you velocity, visibility, and control. Together, they create a culture of trust and empowerment, where every database change becomes just another automated, governed part of your CI/CD process.
Harness Database DevOps exemplifies this harmony by combining the power of migration systems with enterprise-grade orchestration, GitOps, and observability. By unifying schema evolution with delivery automation, it enables teams to deliver confidently, collaborate effectively, and scale safely.
The future isn’t about choosing one over the other but about embracing both to build a truly modern database delivery pipeline.
1. What’s the main difference between Database DevOps and migration systems?
Database Migration Systems manage schema versioning and execution, while Database DevOps platforms manage the orchestration, governance, and automation of those migrations across environments.
2. Can I use Database DevOps without a migration system?
Technically yes, but it’s not ideal. Migration systems provide version control and rollback capabilities, core components that Database DevOps leverages for automation and compliance.
3. How does Harness Database DevOps integrate with tools like Liquibase OSS or Flyway?
Harness natively supports both. It runs your migration scripts through secure, policy-driven pipelines with visibility, approvals, and audit trails.
4. Is Database DevOps suitable for small teams?
Absolutely. Even small teams benefit from consistent workflows and rollback safety. As they scale, those early practices prevent chaos later.
5. What’s the long-term benefit of using both together?
You get the best of both worlds: structured schema evolution with governance, automation, and collaboration baked in. It’s a foundation for continuous database delivery.


The PASS Data Community Summit returns to Seattle this November, bringing together database professionals, developers, architects, and data leaders from around the world. This year, Harness is proud to join as a Bronze Sponsor, showcasing how teams can finally bring the same speed, safety, and governance of CI/CD to their databases, now supporting Native Flyway.
From November 19–21, 2025, you’ll find us in the Seattle Convention Center at Booth #220, where we’ll be diving deep into the future of Database DevOps with AI-assisted database schema changes.
Modern applications move fast, but databases haven’t kept up. Manual reviews, brittle scripts, inconsistent change validation, and a lack of visibility into the environment continue to slow delivery and introduce risk.
PASS Summit is the annual gathering where the data community openly discusses these challenges and explores what’s next.
This year, AI-assisted database development, automated governance, Kubernetes-native databases, and data reliability are top-of-mind topics. Harness will be right in the middle of those conversations.
Harness will present two talks this year, each focused on accelerating database delivery while protecting data integrity and ensuring compliance.
🗓 November 19, 2025
⏰ 11:30 AM–12:30 PM
📍 Room 337–339
🔗 Session Link: Database DevOps: CD for Stateful Applications
Running stateful applications on Kubernetes can be just as safe, predictable, and repeatable as stateless workloads when the right approach is used.
In this session, Stephen Atwell (Harness) and Chris Crow (Pure Storage) will explore how to:
This talk includes real-world schema migration examples, performance analysis, and a live demo showing how CD tooling automates data migrations inside Kubernetes.
If you’re working with Kubernetes and databases, this session is a must-see.
🗓 November 21, 2025
⏰ 12:15 PM–12:45 PM
📍 Room 442
🔗 Session Link: Faster DB Schema Migrations with AI-Enabled CI/CD & Automated Governance
AI is changing how organizations approach database development, and in this session, we explore what’s now possible.
Join Stephen Atwell, Principal Product Manager for Harness Database DevOps, to learn how AI + CI/CD can accelerate data delivery while maintaining safety and compliance.
You’ll walk away understanding:
This session is ideal for DBAs, data engineers, DevOps teams, and anyone looking to modernize their database change process.
At Booth #220, we’ll be presenting the latest evolution of Harness Database DevOps, designed to solve one core challenge:
See live demos of:
✔ AI-Authored Database Migrations
✔ Migration Tracking Across Environments
✔ Automated Rollback Intelligence
✔ Policy-Driven Governance
✔ Unified Dev + DBA Workflows
Our product managers, database experts, and DevOps engineers will be available for hands-on demos and in-depth discussions on your current challenges.
Stop by to:
We’re thrilled to sponsor the PASS Data Community Summit 2025 and look forward to connecting with the community.
See you at Booth #220 in Seattle!


Harness Database DevOps was built to make database delivery as automated, safe, and repeatable as application delivery. Historically, Liquibase was our primary migration engine. Today we’ve added Flyway support - an SQL-first, simple migration engine - to give teams more choice and better alignment with their existing workflows.
Teams differ: some prefer Liquibase’s structured changelogs (XML/YAML/JSON); others prefer Flyway’s versioned SQL scripts. Flyway’s minimalism and convention-over-configuration approach make it attractive for developers who want direct control over SQL. Adding Flyway is about enabling choice and reducing friction for teams that already rely on SQL-based migrations.
Harness integrates both engines into one platform so teams can use their preferred tool while retaining centralized governance, approvals, drift detection, automated rollbacks, and environment visibility. You can run Liquibase and Flyway side by side within the same pipeline, with consistent policy and audit controls.
From a technical point of view, Flyway's power lies in its simplicity, but that also means its conventions matter. Below is a brief guide to the essentials: naming conventions, baselines, pending migrations, and success validation.
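To make the naming conventions concrete: Flyway discovers migrations by filename, with versioned scripts following a `V<version>__<description>.sql` pattern and repeatable scripts using an `R__` prefix. A minimal sketch (table and column names are illustrative):

```sql
-- V1__create_users.sql  (versioned migration: runs once, in version order)
CREATE TABLE users (
    id         BIGINT PRIMARY KEY,
    email      VARCHAR(255) NOT NULL UNIQUE,
    created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- V2__add_users_status.sql  (a pending migration until it is applied)
ALTER TABLE users ADD COLUMN status VARCHAR(32) NOT NULL DEFAULT 'active';

-- R__user_summary_view.sql  (repeatable: re-applied whenever its checksum changes)
CREATE OR REPLACE VIEW user_summary AS
SELECT status, COUNT(*) AS user_count FROM users GROUP BY status;
```

On an existing database, `flyway baseline` marks the current schema version so only newer scripts run; `flyway info` lists applied and pending migrations, and `flyway validate` checks applied checksums against the scripts on disk.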
Best Practices:

Flexibility: Teams pick the engine that matches their workflow.
Scalability: Enterprises with multiple database tools can onboard easily.
Governance: Centralized policies, approvals, and audits apply consistently.
Productivity: Developers focus on writing migrations; Harness handles execution, safety, and rollback automation.


Supporting Flyway alongside Liquibase marks a significant step toward tool-neutral Database DevOps Orchestration. Harness continues to focus on developer freedom and operational confidence.
Whether you prefer structured changelogs or raw SQL scripts, Harness provides a unified pipeline experience, bringing automation, policy control, and observability under one roof. Try Flyway support in your next pipeline and experience how Harness brings agility, safety, and choice to modern database delivery.


When I look back at how Harness Database DevOps came to life, it feels less like building a product and more like solving a collective industry puzzle, one piece at a time. Every engineer, DBA, and DevOps practitioner I met had their own version of the same story: application delivery had evolved rapidly, but databases were still lagging behind. Schema changes were risky, rollbacks were manual, and developers hesitated to touch the database layer for fear of breaking something critical.
That was where our journey began, not with an idea, but with a question: “What if database delivery could be as effortless, safe, and auditable as application delivery?”
At Harness, we’ve always been focused on making software delivery faster, safer, and more developer-friendly. But as we worked with enterprises across industries, one recurring gap became clear: while teams were automating CI/CD pipelines for applications, database changes were still handled in silos.
The process was often manual: SQL scripts shared over email, version-control inconsistencies, and late-night hotfixes that no one wanted to own. Even with existing tools, there was a noticeable disconnect between database engineers, developers, and platform teams. The result was predictable: slow delivery cycles, high change failure rates, and limited visibility.
We didn’t want to simply build another migration tool. We wanted to redefine how databases fit into the modern CI/CD narrative, how they could become first-class citizens in the software delivery pipeline.
Before writing a single line of code, we started by listening to DBAs, developers, and release engineers who lived through these challenges every day.
Our conversations revealed a few consistent pain points:
We also studied existing open-source practices. Many of us were active contributors or long-time users of Liquibase, which had already set strong foundations for schema versioning. Our goal was not to replace those efforts, but to learn from them, build upon them, and align them with the Harness delivery ecosystem.
That’s when the real learning began: understanding how different organizations implement Liquibase, how they handle rollbacks, and how schema evolution differs between teams using PostgreSQL, MySQL, or Oracle.
This phase of research and contribution taught us something valuable. The tooling existed, but the real challenge was operational: integrating database changes into CI/CD pipelines without friction or risk.
Armed with insights, we began sketching the first blueprints of what would eventually become Harness Database DevOps. Our design philosophy was simple:
Early prototypes focused on automating schema migration, enforcing policy compliance, and building audit trails for database changes. But we soon realized that wasn’t enough.
Database delivery isn’t just about applying migrations; it’s about governance, visibility, and confidence. Developers needed fast feedback loops; DBAs needed assurance that governance was intact; and platform teams needed to integrate it into their broader CI/CD fabric. That realization reshaped our vision entirely.
We started with the fundamentals: source control and pipelines. Every database change, whether a script or a declarative state definition, needed to be versioned, automatically tested, and traceable.
To make this work at scale, we leveraged script-based migrations, which let teams track the actual change scripts applied to reach a given state, ensuring alignment and transparency. The next challenge was automation. We wanted pipelines that could handle complex database lifecycles: provisioning instances, running validations, managing approvals, and executing rollbacks, all within a CI/CD workflow familiar to developers.
This was where the engineering creativity of our team truly shined. We integrated database delivery into Harness Pipelines, enabling one-click deployments and policy-driven rollbacks with complete auditability.
Our internal mantra became: “If it’s repeatable, it’s automatable.”
Our first internal release was both exciting and humbling. We quickly learned that every organization manages database delivery differently. Some teams followed strict change control. Others moved fast and valued agility over structure.
To bridge that gap, we focused on flexibility, which allowed teams to define their own workflows, environments, and policies while keeping governance seamlessly built in.
We also realized the importance of observability. Teams didn’t just want confirmation that a migration succeeded; they wanted to understand “why something failed”, “how long it took”, and “what exactly changed” behind the scenes.
Each round of feedback, from customers and our internal teams, helped us to refine the product further. Every iteration made it stronger, smarter, and more aligned with real-world engineering needs. And the journey wasn’t just about code; it was about collaboration and teamwork. Here’s how Harness Database DevOps connects every role in the database delivery lifecycle.
Behind every release stood a passionate team of engineers, product managers, customer success engineers, and developer advocates, with a shared mission: to make database delivery seamless, safe, and scalable.
We spent long nights debating rollback semantics, early mornings testing changelog edge cases, and countless hours perfecting pipeline behavior under real workloads. It wasn’t easy, but it mattered.
This wasn’t just about building software; it was about building trust between developers and DBAs, between automation and human oversight. When we finally launched Harness Database DevOps, it didn’t feel like a product release. It felt like the beginning of something bigger, a new way to bring automation and accountability to database delivery.
What makes us proud isn’t just the technology. It’s “how we built it”, with empathy, teamwork, and a deep partnership with our customers from day one. Together with our design partners, we shaped every iteration to ensure what we were building truly reflected their needs and that database delivery could evolve with the same innovation and collaboration that define the rest of DevOps.
After months of iteration, user testing, and refinement, Harness Database DevOps entered private beta in early 2024. The excitement was immediate. Teams finally saw their database workflows appear alongside application deployments, approvals, and governance checks, all within a single pipeline.
During the beta, more than thirty customers participated, offering feedback that directly shaped the product. Some asked for folder-based trunk deployments. Others wanted deeper rollback intelligence. Some wanted Harness to help their developers design and author changes in the first place. Many just wanted to see what was happening inside their database environments.
By the time general availability rolled around, Database DevOps had evolved into a mature platform, not just a feature. It offered migration state tracking, rollback mechanisms, environment isolation, policy enforcement, and native integration with the Harness ecosystem.
But more importantly, it delivered something intangible: trust. Teams could finally move faster without sacrificing control.
Database DevOps is still an evolving space. Every new integration, every pipeline enhancement, every database engine we support takes us closer to a world where managing schema changes is as seamless as deploying code.
Our mission remains the same: to help teams move fast without breaking things, to give developers confidence without compromising governance, and to make database delivery as modern as the rest of DevOps.
And as we continue this journey, one thing is certain: the story of Harness Database DevOps isn’t just about a product. It’s about reimagining what’s possible when empathy meets engineering.
From its earliest whiteboard sketch to production pipelines across enterprises, Harness Database DevOps is the product of curiosity, collaboration, and relentless iteration. It was never about reinventing databases. It was about rethinking how teams deliver change, safely, visibly, and confidently.
And that journey, from concept to reality, continues every day with every release, every migration, and every team that chooses to make their database a part of DevOps.


CockroachDB is known for its distributed SQL power and fault tolerance. It scales horizontally and handles multi-region workloads well, keeping applications online even when nodes fail. But while CockroachDB simplifies scaling, managing schema changes across multiple environments can still be a pain. Developers often face issues like schema drift, inconsistent rollouts, or manual SQL execution during releases.
This is where Harness Database DevOps changes the game. It brings CI/CD discipline to database management. Now that Harness supports CockroachDB, you can automate database updates with the same precision and visibility you expect from application deployments.
Traditional database updates rely on manual scripts and DBA interventions. Harness replaces that with a Git-driven workflow:
This ensures:
Think of it as DevOps for your database: reliable, traceable, and continuous.
Harness uses PostgreSQL-compatible JDBC connectors to communicate with CockroachDB.
During a pipeline run, Harness leverages Liquibase to interpret the changelog and execute migrations on your cluster.
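As a rough sketch of what that looks like, a pipeline step could point Liquibase at the cluster with a PostgreSQL-style JDBC URL (CockroachDB speaks the PostgreSQL wire protocol; its default SQL port is 26257) and a changelog written in Liquibase's formatted-SQL style. The host, database, and object names below are placeholders:

```sql
-- changelog.sql, referenced by the pipeline's Liquibase step.
-- Example JDBC URL (placeholder host and database):
--   jdbc:postgresql://crdb.example.internal:26257/appdb?sslmode=verify-full

--liquibase formatted sql

--changeset dbteam:1
CREATE TABLE IF NOT EXISTS accounts (
    id      UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    owner   STRING NOT NULL,
    balance DECIMAL NOT NULL DEFAULT 0
);

--changeset dbteam:2
ALTER TABLE accounts ADD COLUMN IF NOT EXISTS created_at TIMESTAMPTZ NOT NULL DEFAULT now();
```

Because each changeset is tracked after it runs, re-running the pipeline applies only the changesets that have not yet been executed against that cluster.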
Here’s the big picture:
This integration transforms CockroachDB schema updates from a manual task into a controlled CI/CD process.
Let’s look at the setup at a high level:

CockroachDB’s distributed design fits perfectly with Harness’s declarative, version-controlled workflows, helping teams ship database changes faster and safer.
Consider a SaaS company operating CockroachDB clusters across multiple regions to serve a global user base. Each week, developers push schema changes — adding new tables, refining indexes, or evolving data models to support new features.
Previously, every change required manual intervention. DBAs ran SQL scripts by hand, coordinated release windows across time zones, and updated migration logs in spreadsheets. This process was not only time-consuming but also prone to drift and human error. After adopting Harness Database DevOps, that manual process becomes a seamless, automated pipeline:
The team moves faster, reduces human error, and gains complete traceability.
With every schema change tracked, validated, and version-controlled, teams can ship updates faster while maintaining compliance and operational safety. Rollbacks, audit trails, and environment consistency become part of the process rather than an afterthought.
This integration empowers both developers and DBAs to collaborate seamlessly through a single pipeline that brings transparency, repeatability, and confidence to database operations. Whether you’re running CockroachDB across multiple regions or scaling up new environments, Harness ensures your database evolves at the same pace as your application. If you’re looking to modernize your database delivery pipeline, start with the Harness Database DevOps documentation and try integrating CockroachDB today.
Yes. Harness uses Liquibase under the hood, so most standard operations — like createTable, addColumn, createIndex, and alterTable — work seamlessly with CockroachDB. A few engine-specific operations may differ due to CockroachDB’s SQL dialect, but Liquibase provides compatible fallbacks.
Absolutely. Harness tracks every executed changeSet and supports Liquibase rollback commands. You can trigger rollbacks manually or define rollback scripts in your changelog. This ensures that failed schema updates don’t impact production stability.
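As a sketch of the changelog-defined option, Liquibase's formatted-SQL syntax lets each changeset carry its own rollback statement (object names here are illustrative):

```sql
--liquibase formatted sql

--changeset dbteam:3
ALTER TABLE accounts ADD COLUMN last_login TIMESTAMPTZ;
--rollback ALTER TABLE accounts DROP COLUMN last_login;
```

With this in place, a standard Liquibase command such as `rollback-count 1` undoes the most recent changeset by executing its paired rollback statement.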
Harness detects version conflicts based on Liquibase changeSet IDs and Git history. If two changelogs modify the same table differently, the pipeline flags it before applying. You can resolve the conflict in Git, re-commit, and re-run the pipeline safely.
Harness pipelines use containerized execution to isolate database operations. Since CockroachDB distributes workloads, most migrations scale horizontally. You can also configure throttling, pre-deployment validations, or dry runs to measure performance impact before actual execution.
Yes. Harness encrypts all secrets (JDBC credentials, certificates, and keys) using its Secret Manager. TLS is strongly recommended for CockroachDB, and Harness supports SSL verification flags like sslmode=verify-full. Credentials are never exposed in pipeline logs.