Have you ever watched a “temporary” Infrastructure as Code script quietly become mission-critical, undocumented, and owned by someone who left the company two years ago? We can all relate to a similar scenario, if not an infrastructure-specific one, and this is usually the moment teams realise the build vs buy IaC decision was made by accident, not design.
As your teams grow from managing a handful of environments to orchestrating hundreds of workspaces across multiple clouds, the limits of homegrown IaC pipeline management show up fast. What starts as a few shell scripts wrapping OpenTofu or Terraform commands often evolves into a fragile web of CI jobs, custom glue code, and tribal knowledge that no one feels confident changing.
The real question is not whether you can build your own IaC solution. Most teams can. The question is what it costs you in velocity, governance, and reliability once the platform becomes business-critical.
Building a custom IaC solution feels empowering at first. You control every detail. You understand exactly how plan and apply flows work. You can tailor pipelines to your team’s preferences without waiting on vendors or abstractions.
For small teams with simple requirements, this works. A basic OpenTofu or Terraform pipeline in GitHub Actions or GitLab CI can handle plan-on-pull-request and apply-on-merge patterns just fine. Add a manual approval step and a notification, and you are operational.
The problem is that infrastructure rarely stays simple.
As usage grows, the cracks start to appear:
At this point, the build vs buy IaC question stops being technical and becomes strategic.
An infrastructure as code management platform cannot simply be labelled “CI for Terraform.” It exists to standardise how infrastructure changes are proposed, reviewed, approved, and applied across teams.
Instead of every team reinventing the same patterns, an IaCM platform provides shared primitives that scale.
Workspaces are treated as first-class entities. Plans, approvals, applies, and execution history are visible in one place. When something fails, you do not have to reconstruct context from CI logs and commit messages.
IaC governance stops being a best-practice document and becomes part of the workflow. Policy checks run automatically. Risky changes are surfaced early. Approval gates are applied consistently based on impact, not convention.
This matters regardless of whether teams are using OpenTofu as their open-source baseline or maintaining existing Terraform pipelines.
Managing environment-specific configuration across large numbers of workspaces is one of the fastest ways to introduce mistakes. IaCM platforms provide variable sets and secure secret handling so values are managed once and applied consistently.
Infrastructure drift is inevitable. Manual console changes, provider behaviour, and external automation all contribute. An IaCM platform detects drift continuously and surfaces it clearly, without relying on scheduled scripts parsing CLI output.
Reusable modules are essential for scaling IaC, but unmanaged reuse creates risk. A built-in module and provider registry ensures teams use approved, versioned components and reduces duplication across the organisation.
Most platform teams underestimate how much work lives beyond the initial pipeline.
You will eventually need:
None of these are hard in isolation. Together, they represent a long-term maintenance commitment. Unless building IaC tooling is your product, this effort rarely delivers competitive advantage.
Harness Infrastructure as Code Management (IaCM) is designed for teams that want control without rebuilding the same platform components over and over again.
It supports both OpenTofu and Terraform, allowing teams to standardise workflows even as tooling evolves. OpenTofu fits naturally as an open-source execution baseline for new workloads, while Terraform remains supported where existing investment makes sense.
Harness IaCM provides:
Instead of writing and maintaining custom orchestration logic, teams focus on infrastructure design and delivery.
Drift detection, approvals, and audit trails are handled consistently across every workspace, without bespoke scripts or CI hacks.
The build vs buy IaC decision should be intentional, not accidental.
If your organisation has a genuine need to own every layer of its tooling and the capacity to maintain it long-term, building can be justified. For most teams, however, the operational overhead outweighs the benefits.
An IaCM platform provides faster time-to-value, stronger governance, and fewer failure modes as infrastructure scales.
Harness Infrastructure as Code Management enables teams to operationalise best practices for OpenTofu and Terraform without locking themselves into brittle, homegrown solutions.
The real question is not whether you can build this yourself. It is whether you want to be maintaining it when the platform becomes critical.
Explore Harness IaCM and move beyond fragile IaC pipelines.


Have you ever asked yourself, what is the fastest way to turn a harmless Infrastructure as Code change into a production incident and an awkward postmortem? We did, and found that it usually comes from letting a change through without any guardrails.
Infrastructure guardrails in Infrastructure as Code (IaC) were once a nice-to-have. Today, they’re essential. Without clear boundaries and safety mechanisms, even well-designed IaC workflows can turn small mistakes into fast-moving, high-impact problems.
Infrastructure guardrails are preventive controls that help teams standardize and secure infrastructure deployments. They act as a safety net, ensuring changes consistently align with organizational policies, security best practices, and compliance requirements.
Think of infrastructure guardrails as the difference between letting developers drive on an open road with no lanes versus providing clear lane markings, speed limits, and crash barriers. Guardrails do not restrict innovation. They make it safe to move fast without losing control.
As organizations adopt cloud-native practices and infrastructure as code becomes the standard for deployment, the complexity and scale of infrastructure management increases exponentially. Here's why infrastructure guardrails have become non-negotiable:
Without proper infrastructure guardrails, simple human errors can result in significant outages or security incidents. Consider these common scenarios:
Each of these scenarios can lead to substantial financial impact, from unexpected cloud bills to costly security breaches and downtime. Infrastructure guardrails help prevent these issues before they manifest in your environment.
Infrastructure guardrails ensure teams follow infrastructure as code best practices consistently. These include:
When these practices are enforced through guardrails rather than through documentation alone, teams naturally develop better habits while reducing technical debt.
Policy-based guardrails enforce rules across your entire infrastructure. Tools like Open Policy Agent (OPA) integrate with OpenTofu and Terraform to validate infrastructure changes against organizational policies before deployment.
These policies can be as simple or complex as needed:
Policy-based infrastructure guardrails provide the flexibility to codify any organizational requirement while ensuring consistent enforcement.
Both OpenTofu and Terraform benefit from specific guardrails that enhance their native capabilities:
OpenTofu compliance controls can be particularly effective when integrated into CI/CD pipelines, creating automated checkpoints that validate changes before they reach production environments.
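As a hedged illustration of what tool-native guardrails can look like (the variable names, approved instance sizes, and resource below are examples, not a prescribed standard), variable validation and lifecycle rules catch unsafe changes during plan, before any external policy engine is involved:

```hcl
variable "ami_id" {
  type        = string
  description = "AMI used for the workload host"
}

variable "instance_type" {
  type        = string
  description = "Compute size for the workload"

  validation {
    condition     = contains(["t3.small", "t3.medium", "m6i.large"], var.instance_type)
    error_message = "instance_type must be one of the approved sizes."
  }
}

resource "aws_instance" "workload_host" {
  ami           = var.ami_id
  instance_type = var.instance_type

  # Native guardrail: any plan that would destroy this resource fails outright
  lifecycle {
    prevent_destroy = true
  }
}
```

Checks like these surface mistakes at plan time, well before anything reaches production.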
One of the most insidious challenges in infrastructure management is configuration drift. Without proper infrastructure guardrails, manual changes can occur outside the IaC workflow, creating inconsistencies between your code and the actual deployed resources.
Effective drift prevention guardrails include:
Infrastructure guardrails should incorporate robust IaC security controls to protect against both accidental and malicious security issues:
These security-focused infrastructure guardrails help organizations maintain a strong security posture even as infrastructure scales and evolves.
For organizations operating at scale, infrastructure guardrails form the foundation of cloud infrastructure governance. This governance framework provides:
Harness Infrastructure as Code Management (IaCM) provides a comprehensive platform for implementing and maintaining effective infrastructure guardrails. Supporting both OpenTofu and Terraform, Harness IaCM addresses the challenges we've discussed through several key capabilities:
Harness IaCM integrates policy-as-code directly into your infrastructure workflows. Teams can define, test, and enforce policies that validate infrastructure changes against security, compliance, and operational requirements. These policies run automatically during the plan phase, preventing non-compliant changes from being applied.
Harness IaCM includes a built-in registry for OpenTofu and Terraform modules and providers. This enables teams to:
This standardization dramatically reduces the risk of configuration errors while improving developer productivity.
With Harness IaCM, infrastructure deployments follow consistent, auditable workflows:
These workflows provide the perfect balance between developer autonomy and operational control.
Harness IaCM continuously monitors your infrastructure for drift, automatically detecting when resources deviate from their expected state. When drift occurs, teams can:
This ensures your infrastructure guardrails remain effective even after deployment.
Implementing effective infrastructure guardrails doesn't have to be an all-or-nothing proposition. Start with these steps:
Effective infrastructure guardrails don't limit innovation, they enable it by providing a safe environment for experimentation and rapid deployment. By preventing costly errors, enforcing best practices, and ensuring compliance, guardrails give teams the confidence to move quickly without sacrificing reliability or security.
Harness Infrastructure as Code Management provides the ideal platform for implementing these guardrails, with native support for both OpenTofu and Terraform, built-in policy enforcement, and comprehensive drift management capabilities.
Ready to implement effective infrastructure guardrails in your environment? Explore how Harness IaCM can help your team deploy more confidently and securely while maintaining the flexibility developers need to innovate.


Cloud migration has shifted from a tactical relocation exercise to a strategic modernization program. Enterprise teams no longer view migration as just the movement of compute and storage from one cloud to another. Instead, they see it as an opportunity to redesign infrastructure, streamline delivery practices, strengthen governance, and improve cost control, all while reducing manual effort and operational risk. This is especially true in regulated industries like banking and insurance, where compliance and reliability are essential.
This first installment in our cloud migration series introduces the high-level concepts and the automation framework that enables enterprise-scale transitions, without disrupting ongoing delivery work. Later entries will explore the technical architecture behind Infrastructure as Code Management (IaCM), deployment patterns for target clouds, Continuous Integration (CI) and Continuous Delivery (CD) modernization, and the financial operations required to keep migrations predictable.

Many organizations begin their migration journey with the assumption that only applications need to move. In reality, cloud migration affects five interconnected areas: infrastructure provisioning, application deployment workflows, CI and CD systems, governance and security policies, and cost management. All five layers must evolve together, or the migration unintentionally introduces new risks instead of reducing them.
Infrastructure and networking must be rebuilt in the target cloud with consistent, automated controls. Deployment workflows often require updates to support new environments or adopt GitOps practices. Legacy CI and CD tools vary widely across teams, which complicates standardization. Governance controls differ by cloud provider, so security models and policies must be reintroduced. Finally, cost structures shift when two clouds run in parallel, which can cause unpredictability without proper visibility.
Cloud migration is often motivated by a combination of compliance requirements, access to more suitable managed services, performance improvements, or cost efficiency goals. Some organizations move to support a multi-cloud strategy while others want to reduce dependence on a single provider. In many cases, migration becomes an opportunity to correct architectural debt accumulated over years.
Azure to AWS is one example of this pattern, but it is not the only one. Organizations regularly move between all major cloud providers as their business and regulatory conditions evolve. What remains consistent is the need for predictable, auditable, and secure migration processes that minimize engineering toil.
The complexity of enterprise systems is the primary factor that makes cloud migration difficult. Infrastructure, platform, security, and application teams must coordinate changes across multiple domains. Old and new cloud environments often run side by side for months, and workloads need to operate reliably in both until cutover is complete.
Another challenge comes from the variety of CI and CD tools in use. Large organizations rarely rely on a single system. Azure DevOps, Jenkins, GitHub Actions, Bitbucket, and custom pipelines often coexist. Standardizing these workflows is part of the migration itself, and often a prerequisite for reliability at scale.
Security and policy enforcement also require attention. When two clouds differ in their identity models, network boundaries, or default configurations, misconfigurations can easily be introduced. Finally, cost becomes a concern when teams pay for two clouds at once. Without visibility, migration costs rise faster than expected.
Harness addresses these challenges by providing an automation layer that unifies infrastructure provisioning, application deployment, governance, and cost analysis. This creates a consistent operating model across both the current and target clouds.
Harness Internal Developer Portal (IDP) provides a centralized view of service inventory, ownership, and readiness, helping teams track standards and best-practice adoption throughout the migration lifecycle. Harness Infrastructure as Code Management (IaCM) defines and provisions target environments and enforces policies through OPA, ensuring every environment is created consistently and securely. It helps teams standardize IaC, detect drift, and manage approvals. Harness Continuous Delivery (CD) introduces consistent, repeatable deployment practices across clouds and supports progressive delivery techniques that reduce cutover risk. GitOps workflows create clear audit trails. Harness Cloud Cost Management (CCM) allows teams to compare cloud costs, detect anomalies, and govern spend during the transition before costs escalate.
A successful, low-risk cloud migration usually follows a predictable pattern. Teams begin by modeling both clouds using IaC so the target environment can be provisioned safely. Harness IaCM then creates the new cloud infrastructure while the existing cloud remains active. Once environments are ready, teams modernize their pipelines. This process is platform agnostic and applies whether the legacy pipelines were built in Azure DevOps, Jenkins, GitHub Actions, Bitbucket, or other systems. The new pipelines can run in parallel to ensure reliability before switching over.
Workloads typically migrate in waves. Stateless services move first, followed by stateful systems and other dependent components. Parallel runs between the source and target clouds provide confidence in performance, governance adherence, and deployment stability without slowing down release cycles. Throughout this process, Harness CCM monitors cloud costs to prevent unexpected increases. After the migration is complete, teams can strengthen stability using feature flags, chaos experiments, or security testing.

When migration is guided by automation and governance, enterprises experience fewer failures, smoother transitions, and faster time-to-value. Timelines become more predictable because infrastructure and pipelines follow consistent patterns. Security and compliance improve as policy enforcement becomes automated. Cost visibility allows leaders to justify business cases and track savings. Most importantly, engineering teams end up with a more modern, efficient, and unified operating model in the target cloud.
The next blog in this series will examine how to design target environments using Harness IaCM, including patterns for enforcing consistent, compliant baseline configurations. Later entries will explore pipeline modernization, cloud deployment patterns, cost governance, and reliability practices for post-migration operations.


Infrastructure as Code (IaC) has made provisioning infrastructure faster than ever, but scaling it across hundreds of workspaces and teams introduces new challenges. Secrets get duplicated. Variables drift. Custom providers become hard to share securely.
That’s why we’re excited to announce two major enhancements to Harness Infrastructure as Code Management (IaCM):
Variable Sets and Provider Registry, built to help platform teams standardize and secure infrastructure workflows without slowing developers down.
Variables in Infrastructure as Code store configuration values like credentials and environment settings so teams can reuse and customize deployments without hardcoding. However, once teams operate dozens or hundreds of workspaces, variables quickly become fragmented and hard to govern. Variable Sets provide a single control plane for configuration parameters, secrets, and variable files used across multiple workspaces. In large organizations, hundreds of Terraform or OpenTofu workspaces share overlapping credentials and configuration keys, often grouped into Terraform variable sets or OpenTofu variable sets. Traditionally, these are duplicated across workspaces, making credential rotation, auditing, and drift prevention painful.
Harness IaCM implements Variable Sets as first-class resources within its workspace model that are attachable at the account, organization, or project level. The engine dynamically resolves variable inheritance based on a priority ordering system, ensuring the highest-priority set overrides conflicting keys at runtime.
For enterprises running hundreds of Terraform workspaces across multiple regions, Variable Sets give platform engineers a single, authoritative home for Vault credentials. When keys are rotated, every connected workspace automatically inherits the update, eliminating manual edits, reducing risk, and ensuring compliance across the organization. It’s a fundamental capability for Terraform variable management at scale.
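The workspace code itself stays unchanged: it simply declares the variables it needs, and the attached Variable Set supplies values at runtime. A minimal sketch (the variable names are illustrative):

```hcl
# Declared once in the workspace configuration; values are injected by the
# highest-priority Variable Set attached at the account, org, or project level.
variable "vault_addr" {
  type        = string
  description = "Shared Vault endpoint, managed centrally in a Variable Set"
}

variable "environment" {
  type        = string
  description = "Overridden per project or workspace when a more specific set is attached"
}
```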
Provider Registry introduces a trusted distribution mechanism for custom Terraform and OpenTofu providers. While the official Terraform Registry and OpenTofu Registry cater to public providers, enterprise teams often build internal providers to integrate IaC with proprietary APIs or on-prem systems. Managing these binaries securely is non-trivial.
Harness IaCM solves this with a GPG-signed, multi-platform binary repository that sits alongside the Module Registry under IaCM > Registry. Each provider is published with platform-specific artifacts (macOS, Linux, Windows), SHA256 checksums, and signature files.
Consider an enterprise team that builds a custom provider to integrate OpenTofu with an internal API. Using the Harness Provider Registry, they sign and publish binaries for multiple platforms. Developers simply declare the provider source in code, and Harness handles signature verification, delivery, and updates automatically. Together with the Module Registry and Testing for Modules, Provider Registry completes the picture for trusted, reusable infrastructure components, helping organizations scale IaC with confidence.
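On the consumer side, referencing such a provider looks like any other required_providers entry. In this hedged sketch, the registry hostname, namespace, and provider name are hypothetical stand-ins for whatever the private registry actually serves:

```hcl
terraform {
  required_providers {
    internalapi = {
      # Hypothetical address served by the private provider registry
      source  = "registry.example.internal/platform/internalapi"
      version = "~> 1.4"
    }
  }
}

variable "internal_api_endpoint" {
  type = string
}

provider "internalapi" {
  endpoint = var.internal_api_endpoint
}
```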
Harness IaCM already provides governed-by-default workflows with centralized pipelines, policy-as-code enforcement, and workspace templates that reduce drift. Now, with Variable Sets and Provider Registry, IaCM extends that governance deeper into how teams manage configuration and custom integrations. These updates make Harness IaCM not just a Terraform or OpenTofu orchestrator, but a secure, AI infrastructure management platform that unifies visibility, control, and collaboration across all environments.
Harness’s broader IaCM ecosystem includes:
Unlike standalone tools today, Harness IaCM brings a unified, end-to-end approach to infrastructure delivery, combining:
This all-in-one approach means fewer tools to manage, tighter compliance, and faster onboarding for developers while maintaining the flexibility of open IaC standards. Harness is the only platform that brings policy-as-code, cost insight, and self-service provisioning together into a single developer experience.
Explore how Variable Sets and Provider Registry can streamline your infrastructure delivery all within the Harness Platform. Request a Demo to see how your team can standardize configurations, improve security, and scale infrastructure delivery without slowing down innovation.


Are you still using Terraform without realizing the party has already moved on?
For years, Terraform was the default language of Infrastructure as Code (IaC). It offered predictability, community, and portability across cloud providers. But then, the music stopped. In 2023, HashiCorp changed Terraform’s license from Mozilla Public License (MPL) to the Business Source License (BSL), a move that put guardrails around what users and competitors could do with the code.
That shift opened a door for something new and truly open.
That “something” is OpenTofu.
And if you’re not already using or contributing to it, you’re missing your chance to help shape the future of infrastructure automation.
OpenTofu didn’t just appear out of thin air. It was born from community demand, a collective realization that Terraform’s BSL license could limit the open innovation that made IaC thrive in the first place.
So OpenTofu forked from Terraform’s last open source MPL version and joined the Linux Foundation, ensuring that it would remain fully open, community-governed, and vendor-neutral. A true Terraform alternative.
Unlike Terraform’s now-centralized governance, OpenTofu’s roadmap is decided by contributors, people building real infrastructure at real companies, not by a single commercial entity.
That means if you depend on IaC tools to build and scale your environments, your voice actually matters here.
OpenTofu is not a “different tool.” It’s a continuation, the same HCL syntax, same workflows, and same mental model, but under open governance and a faster, community-driven release cadence.
Let’s break down the Terraform vs OpenTofu comparison:

It’s still Terraform-compatible. You can take your existing configurations and run them with OpenTofu today. But beyond compatibility, OpenTofu is already moving faster and more freely, prioritizing developer-requested features that a commercial model might not. Some key examples of its true power and longevity include:
Packaging and sharing modules or providers privately has always been clunky. You either ran your own registry or relied on Terraform Cloud.
OpenTofu solves this with OCI registries, using the same open container standard that Docker images use.
It’s clean, familiar, and scalable.
Your modules live in any OCI-compatible registry (Harbor, Artifactory, ECR, GCR, etc.), complete with built-in versioning, integrity checks, and discoverability. No proprietary backend required.
For organizations managing hundreds of modules or providers, this is a big deal. It means your IaC supply chain can be secured and audited with the same standards you already use for container images.
Secrets in your Terraform state have always been a headache.
Even with remote backends, you’re still left with the risk of plaintext credentials or keys living inside the state file.
OpenTofu is the only IaC framework with built-in encryption at rest.
You can define an encryption block directly in configuration:
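Here is a minimal sketch using a passphrase-derived key; the key and method labels are arbitrary, and the placeholder passphrase would come from a secret store in practice:

```hcl
terraform {
  encryption {
    key_provider "pbkdf2" "state_key" {
      # Placeholder only; supply the real passphrase from a secret store.
      passphrase = "replace-with-a-long-secret-passphrase"
    }

    method "aes_gcm" "encrypted" {
      keys = key_provider.pbkdf2.state_key
    }

    # Encrypt both the state file and saved plan files
    state {
      method = method.aes_gcm.encrypted
    }

    plan {
      method = method.aes_gcm.encrypted
    }
  }
}
```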
This encrypts the state transparently, with no custom wrapper scripts or external encryption logic.
It also supports multiple key providers (AWS KMS, GCP KMS, Azure Key Vault, and more).
Coming soon in OpenTofu 1.11 (beta): ephemeral resources.
This feature lets providers mark sensitive data as transient so it never touches your state file in the first place. That’s a security level no other mainstream IaC tool currently offers.
OpenTofu’s most powerful feature isn’t in its code, it’s in its process.
Every proposal goes through a public RFC. Every contributor has a say. Every decision is archived and transparent.
If you want a feature, you can write a proposal, gather community feedback, and influence the outcome.
Contrast that with traditional vendor-driven roadmaps, where features are often prioritized by product-market fit rather than user need.
That’s what “being late to the party” really means: you miss your seat at the table where the next decade of IaC innovation is being decided.
Being early in an open-source ecosystem isn’t about bragging rights, it’s about influence.
OpenTofu is already gaining serious traction:
If you join later, you’ll still get the code. But you won’t get the same opportunity to shape it.
The longer you wait, the more you’ll be reacting to other people’s decisions instead of helping make them.
Migrating is a one-liner!
The OpenTofu migration guide shows that most users can simply install the tofu CLI, point it at their existing Terraform files, and run tofu init and tofu plan exactly as before.
It’s the same commands, same workflow, but under an open license. You can even use your existing Terraform state files directly; no conversion step required.
For teams already managing infrastructure at scale, the move to OpenTofu doesn’t just preserve your workflow, it future-proofs it.
When you’re ready to bring OpenTofu into a managed, collaborative environment, Harness Infrastructure as Code Management (IaCM) has you covered.
Harness IaCM natively supports both Terraform and OpenTofu. You can create a workspace, select your preferred binary, and run init, plan, and apply pipelines without changing your configurations.
That means you can:
Harness essentially gives you the sandbox to explore OpenTofu’s potential, whether you’re testing ephemeral resource behavior or building private OCI registries for module distribution.
So while the OpenTofu community defines the standards, Harness ensures you can implement them securely and at scale.
The real magic of OpenTofu lies in participation.
If you’ve ever complained about Terraform limitations, this is your moment to shape the alternative.
You can:
Everything lives in the open on the OpenTofu Repository.
Even reading a few discussions there shows how open, constructive, and fast-moving the community is.
The IaC landscape is changing, and this time, the direction isn’t being set by a vendor, but by the community.
OpenTofu brings us back to the roots of open-source infrastructure: collaboration, transparency, and freedom to innovate.
It’s more than a fork, it’s a course correction.
If you’re still watching from the sidelines, remember: the earlier you join, the more your voice matters.
The OpenTofu party is already in full swing.
Grab your seat at the table, bring your ideas, and help build the future of IaC, before someone else decides it for you.


Ever felt like managing your infrastructure is less like engineering and more like trying to herd cats through a perpetually changing obstacle course?
You’re not alone. In the glorious, chaotic world of modern IT, where microservices evolve constantly and scale pushes the limits of complexity, traditional approaches to managing infrastructure simply don’t keep pace. This is where Infrastructure as Code (IaC), and more importantly, Infrastructure as Code Management (IaCM) come in, enabling organizations to bring consistency, automation, and governance to even the most complex environments.
At its heart, IaC is the practice of defining and provisioning infrastructure resources (servers, databases, networks, and all their configurations) through code. Instead of clicking endlessly through cloud provider consoles or manually configuring settings, you write declarative configuration files. These files become the single source of truth for your infrastructure. Just like your application code, these infrastructure definitions can be versioned, reviewed, tested, and deployed, bringing software development best practices to infrastructure operations.
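As a small, hedged illustration (the provider, bucket name, and tags here are just examples), a declarative definition of a single piece of infrastructure might look like this:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

# Desired state: one tagged artifact bucket, reviewed and versioned like app code
resource "aws_s3_bucket" "app_artifacts" {
  bucket = "example-app-artifacts"

  tags = {
    ManagedBy = "OpenTofu"
    Team      = "platform"
  }
}
```

Running a plan shows exactly what would change, and an apply converges the real environment to this definition.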
The advantages are transformative:
While IaC is a monumental leap forward, simply writing code for your infrastructure isn't enough when you're operating at an enterprise scale. Imagine hundreds of teams, thousands of infrastructure resources, multiple cloud providers, and strict regulatory requirements. This is where Infrastructure as Code Management (IaCM) becomes not just beneficial, but absolutely vital.
IaCM is the overarching strategy and set of tools designed to effectively manage your IaC across the entire organization. It addresses the inherent complexities and challenges that arise when scaling IaC practices:
Without a robust IaCM strategy, large organizations risk turning the promise of IaC into a new form of operational headache – one where inconsistencies, security gaps, and manual oversight creep back in, negating the very benefits IaC aims to deliver. IaCM elevates IaC from a technical practice to a strategic operational model, essential for controlling and managing infrastructure at enterprise scale with speed, security, and precision.
Implementing IaCM across an enterprise might seem daunting, but by breaking it down into a structured approach, organizations can successfully adopt and leverage its full potential. Here’s a 5-step guide to help you get started:
Before you start, understand where you are.
Selecting the right tools is crucial, but remember, IaCM is about managing them all.
Bringing order to chaos is a core benefit of IaCM.
Security and compliance are non-negotiable at the enterprise level.
Embrace automation and treat your infrastructure like application code.
By following these steps, enterprises can systematically transition to a fully managed, automated, and compliant infrastructure environment, unlocking the true potential of Infrastructure as Code.
To truly operationalize IaCM at scale, enterprises need a platform built for governance, automation, and collaboration. Harness IaCM brings these capabilities together, enabling teams to manage infrastructure securely and efficiently across the organization.
Harness IaCM empowers teams to leverage reusable, enterprise-grade tooling designed to maximize consistency and speed, including:
With Harness IaCM, your organization can move beyond simply writing infrastructure code to managing it as a governed, automated, and scalable system—empowering teams to innovate faster and operate with confidence.


If there’s one thing we all care deeply about, it’s not fame, fortune or perfect HCL formatting; it’s reusability.
Whether you're a seasoned practitioner or new to Infrastructure as Code (IaC), reusable modules are fast becoming the backbone of modern platform engineering. That's why modern platforms introduced Module Registries: central systems for publishing and consuming OpenTofu/Terraform modules across your organization.
They promote the DRY principle ("Don't Repeat Yourself") by codifying best practices, reducing duplication, and helping teams ship faster by focusing on what’s unique to their workload.
But as teams scale, so does the risk: a misconfigured or buggy module can break dozens of environments in seconds.
Enter testing for infrastructure modules.
A few years ago, a platform engineering team learned a painful lesson: one bad Terraform command can destroy everything.
This real incident describes how a single misconfigured module and an unguarded Terraform destroy wiped out an entire staging environment: dozens of services gone in minutes. Recovery took days.
Now imagine your team building a reusable VPC module. Without testing, a single overlooked bug, say, a missing region variable or a misconfigured ACL that leaves an S3 bucket public, could silently make it into your registry. Every environment using that module would be exposed.
Here’s how to prevent it:
Before publishing, the platform team runs an integration pipeline that provisions a real test workspace with actual cloud credentials. On the first run, the missing region is caught. On the second, the public S3 bucket is flagged. Both are fixed before the module ever touches the registry.
The single step of testing modules in isolation before release turns potential outages into harmless build failures, protecting every downstream environment.
When you publish a shared module to your registry, you're trusting that it works now, and will continue to work later. Without dedicated testing, it's easy to miss:
Testing modules addresses these risks by validating them in isolation before they’re promoted to the registry.
A dedicated Integration pipeline is added to your module’s development branch. This pipeline:
✅ Tip: You define test inputs just like consumers would, using actual variables, connectors, and real infrastructure.
Only after the module passes this pipeline should it be promoted to the main branch and published.
Testing modules complements traditional IaC testing techniques such as:
- Linting and validation: tflint, tofu validate / terraform validate
- Integration test frameworks: Terratest
- Static security scanning: Checkov, tfsec
The integration testing stage spins up real infrastructure to validate that your modules work as expected before they reach production consumers.
Test your modules across different cloud regions or account configurations to ensure portability:
Test complex module hierarchies where modules depend on outputs from other modules:
Integrate security scanning directly into your integration tests:
Validate that module updates can be safely rolled back by testing both upgrade and downgrade paths.
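To make the idea concrete, here is a hedged sketch using the native OpenTofu/Terraform test framework rather than a Harness pipeline; the output names and input variables are hypothetical and would match whatever the module under test actually exposes:

```hcl
# tests/network.tftest.hcl  (run with `tofu test`)
variables {
  region     = "us-east-1"
  cidr_block = "10.0.0.0/16"
}

run "plan_creates_private_subnets" {
  command = plan

  assert {
    condition     = length(output.private_subnet_ids) > 0
    error_message = "Module must create at least one private subnet."
  }
}

run "log_bucket_is_not_public" {
  command = apply

  assert {
    condition     = output.log_bucket_public == false
    error_message = "Log bucket must not be publicly accessible."
  }
}
```

The plan-mode run gives fast feedback on configuration logic, while the apply-mode run provisions real resources to validate behaviour end to end.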
Setting up tests for your modules requires some initial overhead, but the investment pays dividends as your module ecosystem grows:
Resource Costs: Integration tests provision real infrastructure, so factor in cloud costs for test environments. Use short-lived resources and automated cleanup to minimize expenses.
Test Environment Management: Establish dedicated sandbox accounts or subscriptions for integration testing to avoid conflicts with production resources.
Pipeline Execution Time: Real infrastructure provisioning takes longer than unit tests, so optimize your pipeline for parallel execution where possible.
Testing modules is becoming a core best practice in the OpenTofu ecosystem. But finding a platform that natively integrates registry management and test pipelines can be challenging.
If you’re looking for a platform that natively integrates module registries with testing pipelines, Harness Infrastructure as Code Management (IaCM) has you covered:
Check out how to create and register your IaC modules and configure module tests to get started with pipeline setup and test inputs.
If you value stability, reusability, and rapid iteration, then testing your modules is more than a nice-to-have; it’s your safeguard against chaos.
By combining traditional CI/CD validation with real infrastructure testing, you get the best of both worlds: fast feedback and real-world assurance.
Start small. Iterate. And as your registry grows, let testing give you the confidence to scale.


When we launched Harness Infrastructure as Code Management (IaCM), our goal was clear: help enterprises scale infrastructure automation without compromising on governance, consistency, or developer velocity. One year later, we’re proud of the progress we’ve made in delivering this solution with unmatched capabilities for templatization and enterprise scalability.
Today we’re announcing a major expansion of Harness IaCM with two new features: Module Registry and Workspace Templates. Both are designed to drive repeatability, security, and control with a common foundation: reusability.
In software development we talk quite a bit about the DRY principle, aka “Don’t Repeat Yourself.” These new capabilities bring that mindset to infrastructure, giving teams the tools to define once and reuse everywhere with built-in governance.
During customer meetings one theme came up over and over again – the need to define infrastructure once and reuse it across the platform in a secure and consistent manner, at scale. Our latest expansion of Harness IaCM was built to solve exactly that.
The DRY principle has long been a foundational best practice in software engineering. Now, with the launch of Module Registry and Workspace Templates, we’re bringing the same mindset to infrastructure – enabling platform teams to adopt a more standardized approach while reducing risk.
From a security and compliance perspective, these features allow teams to define infrastructure patterns once, test them thoroughly, and then reuse them with confidence across teams and environments. This massively improves consistency across teams and reduces the risk of human error — without slowing down delivery.
Here’s how each feature works.
Module Registry empowers users to create, share, and manage centrally stored “golden templates” for infrastructure components. By registering modules centrally, teams can:

By making infrastructure components standardized, discoverable, and governed from a single location, Module Registry dramatically simplifies complexity and empowers teams to focus on building value, not reinventing the wheel.
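Consuming a registered module then becomes a one-line source change for application teams. In this hedged sketch, the registry host, namespace, and module name are hypothetical:

```hcl
module "network" {
  # Hypothetical address in the organization's private module registry
  source  = "registry.example.internal/platform/network/aws"
  version = "2.1.0"

  vpc_cidr = "10.20.0.0/16"
}
```

Pinning the version keeps every consumer on a tested release while the platform team iterates on the module itself.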
The potential is already generating excitement among early adopters:
"The new Module Registry is exactly what we need to scale our infrastructure standards across teams,” said John Maynard, Director of Platform Engineering at PlayQ. “Harness IaCM has already helped us cut provisioning times dramatically – what used to take hours in Terraform Cloud now takes minutes – and with Module Registry, we can drive even more consistency and efficiency."
With Module Registry, we’re not just improving scalability, we’re simplifying the way teams manage their infrastructure.
Workspace Templates allow teams to predefine essential variables, configuration settings, and policies as reusable templates. When new workspaces are created, this approach:

By embedding best practices into every new project, Workspace Templates help teams move faster while maintaining alignment, control, and repeatability across the organization.
Traditional Infrastructure as Code (IaC) solutions laid the foundation for how teams manage their cloud resources. But as organizations scale, many run into bottlenecks caused by complexity, drift, and fragmented tooling. Without built-in automation, repeatability, and visibility, teams struggle to maintain reliable infrastructure across environments.
Harness IaCM was built to solve these challenges. As a proud sponsor and contributor to the OpenTofu community, Harness also supports a more open, community-driven future for infrastructure as code. IaCM builds on that foundation with enterprise-grade capabilities like:
Together, these capabilities help teams to:
Since its GA launch last year, Harness IaCM has gained strong traction with several dozen enterprise customers already on board – including multiple seven-figure deals. In financial services, one customer is managing dozens of workspaces using just a handful of templates, with beta users averaging more than 10 workspaces per template. In healthcare, another team now releases 100% of their modules with pre-configured tests, dramatically improving reliability. And a major banking customer has scaled to over 4,000 workspaces in just six months, enabled by standardization and governance patterns that drive consistency and confidence at scale.
With a focus on automation, reusability and visibility, Harness IaCM is helping enterprise teams rethink how they manage and deliver infrastructure at scale.
Harness’ Infrastructure as Code Management (IaCM) was built to address a massive untapped opportunity: to merge automation with deep capabilities in compliance, governance, and operational efficiency and create a solution that redefines how infrastructure code is managed throughout its lifecycle. Since launch, we’ve continued to invest in that vision – adding powerful features to drive consistency, governance, and speed. And we’re just getting started.
As we look ahead, we’re expanding IaCM in three key areas:
I invite you to sign up for a demo today and see firsthand how Harness IaCM is helping organizations scale infrastructure with greater speed, consistency, and control.


For Fidelity Investments, HashiCorp’s move to BSL licensing of Terraform and the community’s immediate response of creating an open-source fork, OpenTofu, under the Linux Foundation raised immediate questions. As an organization deeply committed to open source principles, moving from Terraform to OpenTofu aligned perfectly with their strategic values. They weren't just avoiding license restrictions; they were embracing a community-driven future for infrastructure automation.
What makes their story remarkable isn't just the scale (though managing 50,000+ state files is impressive), but how straightforward the migration proved to be. Because OpenTofu is a true drop-in replacement for Terraform, Fidelity's challenge was organizational, not technical. Their systematic approach offers lessons for any enterprise considering the move to OpenTofu—or tackling any major infrastructure change.
Let me walk you through what they did, because there are insights here that extend far beyond tool migration.
First, let's appreciate what Fidelity was dealing with:
This isn't a side project. This is production infrastructure that keeps a financial services giant running. Any misstep ripples through the entire organization.
Phase 1: Rigorous POC
They didn't start with faith; they started with evidence. The key question wasn't "Does OpenTofu work?" but "Does it work with our existing CI/CD pipelines and artifact management?"
The answer was yes, confirming what many of us suspected: OpenTofu really is a drop-in replacement for Terraform.
Phase 2: Lighthouse Project
Here's where theory meets reality. Fidelity took an internal IaC platform application, converted it to OpenTofu, and deployed it to production. Not staging. Production.
This lighthouse approach is brilliant because it surfaces the unknown unknowns before they become organization-wide problems.
Phase 3: Building Consensus
You can't mandate your way through a migration of this scale. Fidelity invested heavily in socializing the change, presenting pros and cons honestly, engaging with key stakeholders, and targeting their biggest Terraform users for early buy-in.
Phase 4: Enablement Infrastructure
Migration success isn't just about the technology—it's about the people using it. Fidelity built comprehensive support structures, including tooling, documentation, and training, to ensure developers had everything they needed to succeed.
Phase 5: Transparent Progress Tracking
They made migration progress visible across the organization. Data-driven approaches build confidence. When people can see momentum, they're more likely to participate.
Phase 6: Default Switch
Once confidence was high, they made OpenTofu the default CLI, consolidated versions, and deprecated older Terraform installations.
Bonus: They branded their internal IaC services as "Bento"—creating a unified identity for standardized pipelines and reusable modules. Sometimes organizational psychology matters as much as the technology.
OpenTofu delivers on its compatibility promise. The migration effort focused on infrastructure pipeline adaptation, not massive code rewrites. This validates what the OpenTofu community has been saying—it really is a drop-in replacement that makes migration far simpler than switching between fundamentally different tools.
Shared pipelines are a force multiplier. Central pipeline changes benefited multiple teams simultaneously. This is why standardization matters—it creates leverage and makes organization-wide changes manageable.
CLI version consistency is crucial. Consolidating Terraform versions before migration eliminated a major source of friction. This organizational discipline paid dividends during the actual transition.
Open source alignment was deeply strategic. This wasn't just about licensing costs—Fidelity wanted to contribute to the OpenTofu community and actively shape IaC's future. They're now part of building the tools they depend on, rather than just consuming them.
Fidelity's success illustrates how straightforward OpenTofu migration can be when approached systematically. The real work wasn't rewriting infrastructure code—it was organizational: building consensus, creating enablement, measuring progress.
This validates a key point about OpenTofu: because it maintains compatibility with Terraform, the traditional migration pain points (syntax changes, feature gaps, learning curves) simply don't exist. Organizations can focus on process and adoption rather than technical rewrites.
The shift to OpenTofu represents more than just avoiding HashiCorp's licensing restrictions. It's about participating in a community-driven future for infrastructure automation—something that clearly resonated with Fidelity's open source values.
If you're managing infrastructure at scale, Fidelity's playbook offers a proven path for OpenTofu migration. The key insight? Because OpenTofu is compatible with Terraform, your migration complexity is organizational, not technical. Focus on consensus-building, phased adoption, and comprehensive enablement rather than worrying about code rewrites.
For organizations committed to open source principles, the choice becomes even clearer. OpenTofu offers the same functionality with the added benefit of community control and transparent development. You're not just getting a tool—you're joining an ecosystem where you can influence the future of infrastructure automation.
The infrastructure automation landscape is evolving toward community-driven solutions. Organizations like Fidelity aren't just adapting to this change; they're leading it. Their migration proves that moving to OpenTofu isn't just possible at enterprise scale; with the right approach, it's surprisingly straightforward.
Worth studying, worth emulating and worth making the move.
At Harness, we offer our Infrastructure-as-Code Management customers guidance and services to streamline their migration from Terraform to OpenTofu if that's part of their plans. To learn more about that, please contact us.


Infrastructure management has undergone a radical transformation in the past decade. Gone are the days of manual server configuration and endless clicking through cloud provider consoles. Today, we're witnessing a renaissance of infrastructure management, driven by Infrastructure as Code (IaC) tools like OpenTofu.
Imagine a world where deploying infrastructure was like assembling furniture without instructions. Each engineer would interpret the blueprint differently, leading to inconsistent, fragile systems. This was the reality before IaC. OpenTofu emerged as a community-driven solution to standardize and simplify infrastructure deployment, offering a declarative approach that treats infrastructure like software.
The first stage of infrastructure automation is about bringing structure and repeatability to deployments. Here, teams transition from manual configurations to storing infrastructure definitions in version-controlled repositories.
Picture a development team where infrastructure changes are no longer mysterious, one-off events. Instead, every network configuration and every server setup becomes a traceable, reviewable piece of code. Pull requests become the new change management meetings, with automated checks validating proposed infrastructure modifications before they touch production.
Version control integration transformed infrastructure management. Suddenly, infrastructure changes became collaborative, transparent processes where team members could review, comment, and validate complex system modifications before deployment. By treating infrastructure code like application code, organizations created more reliable, predictable deployment mechanisms.
This approach allows teams to:
As organizations mature, they move beyond basic automation to create sophisticated, environment-specific deployment strategies. This isn't just about deploying infrastructure—it's about creating intelligent, context-aware deployment mechanisms.
Custom workflows emerge, allowing teams to:
Here's where things get interesting. Advanced teams start thinking about infrastructure not as monolithic blocks, but as dynamic, interconnected micro-services. Infrastructure becomes adaptable, scalable, and increasingly intelligent.
Imagine infrastructure that can:
The final stage represents the holy grail of infrastructure management: a fully self-service model with robust governance and compliance mechanisms.
Open Policy Agent (OPA) policies transform compliance from a bureaucratic nightmare into an automated, programmable process. Instead of lengthy approval meetings, organizations can now encode compliance requirements directly into their infrastructure deployment pipelines.
Advanced platforms now offer:
While Terraform pioneered this space, OpenTofu represents the next evolution. As a community-driven, open-source alternative, it offers:
Infrastructure automation is no longer a luxury—it's a strategic imperative. By embracing tools like OpenTofu, organizations can transform infrastructure from a cost centre to a competitive advantage.
Explore Harness Infrastructure as Code Management and discover the benefits of IaCM tooling that integrates seamlessly with CI/CD pipelines, Cloud Cost Management, vulnerability scanning via Security Testing Orchestration, and much more, all under one roof.


Managing and predicting cloud costs can be challenging in today's dynamic cloud environments, especially when infrastructure changes occur frequently. Many organizations struggle to maintain visibility into their cloud spending, which can lead to budget overruns and financial inefficiencies. This issue is exacerbated when infrastructure is provisioned and modified frequently, making it hard to predict and control costs.
Integrating Infrastructure as Code (IaC) practices with robust cost management tools can provide a solution to these challenges. By enabling cost estimates and enforcing budgetary policies at the planning stage of infrastructure changes, teams can gain greater visibility and control over their cloud expenses. This approach not only helps in avoiding surprise costs but also ensures that resources are used efficiently and aligned with business goals.
Infrastructure as Code Management (IaCM): IaCM allows teams to define, provision, and manage cloud resources using code, making infrastructure changes repeatable and consistent. This method of managing infrastructure comes with the added benefit of predictability. By incorporating cost estimation directly into the IaC workflow, teams can preview the financial impact of proposed changes before they are applied. This capability is crucial for planning and budgeting, enabling organizations to avoid costly surprises and make data-driven decisions about infrastructure investments.
Cloud Cost Management (CCM): While IaC provides a foundation for controlled and predictable infrastructure changes, Cloud Cost Management tools take this a step further by offering continuous visibility into cloud spending. CCM tools allow teams to monitor and analyze costs in real time, set spending thresholds, and receive alerts when costs approach or exceed these limits. This ongoing oversight is essential for maintaining financial discipline, especially in dynamic environments where infrastructure usage and costs can fluctuate rapidly.
A development team is tasked with launching a new feature that requires additional cloud infrastructure. Before deploying, they use their IaC tool to define the necessary resources and run a cost estimation. The estimation reveals that the proposed changes will significantly increase the monthly cloud spend, prompting the team to reassess their approach.
They decide to implement an automated policy that checks whether the total monthly cost of any proposed infrastructure exceeds a predefined threshold. If this threshold is crossed, the policy triggers an alert or blocks the deployment, ensuring costs stay within expected limits. While some companies might not be price-sensitive, they aim to allocate resources effectively, prioritizing value and strategic impact over cost alone. To further optimize spending, they schedule certain environments to be scaled down or temporarily decommissioned during weekends when they are not needed.
Such proactive measures can be instrumental in ensuring that cloud costs remain within budget, while still allowing for the flexibility to scale infrastructure as needed.
When you combine the power of IaCM with Cloud Cost Management, you create a robust system that enables continuous optimization of cloud infrastructure with cost control in mind. This combination, IaCM for Cost Management, has the potential to automate, optimize, and provide cost transparency across the entire cloud environment. While IaCM handles provisioning and scaling, Cloud Cost Management (CCM) tools are essential for monitoring and tracking cloud expenses after resources have been provisioned. Combining the two gives organizations continuous cost visibility and real-time feedback on resource usage.
With IaC, you can define your cloud infrastructure in code and apply cost-saving policies directly within your infrastructure definitions. For example, if you're using OpenTofu or Terraform, you can bake in practices such as right-sized instance defaults, mandatory cost-allocation tags, and scheduled shutdown of non-production environments.
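A hedged sketch of what that can look like (the instance size, tag names, and scheduling convention are illustrative, and the Schedule tag assumes some external scheduler or CCM rule acts on it):

```hcl
variable "ami_id" {
  type        = string
  description = "AMI for the development worker"
}

resource "aws_instance" "dev_worker" {
  ami           = var.ami_id
  instance_type = "t3.small"   # modest default; larger sizes need explicit justification

  tags = {
    Environment = "dev"
    CostCenter  = "platform-eng"
    Schedule    = "weekdays-08-18"   # off-hours shutdown handled by an external rule
  }
}
```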
By incorporating these cost-saving measures into your IaC pipeline, cost optimization becomes a native part of your infrastructure provisioning process, reducing the likelihood of unnecessary waste in the long run.
IaCM isn't just about provisioning infrastructure — it also includes ongoing cost tracking and monitoring. With automated reporting and cost analysis tools, organizations can continuously track how their cloud spending evolves over time. This makes it easier to pinpoint areas of overspending or inefficiency that need attention.
By integrating CCM tools, such as Harness CCM, into your IaCM workflow, teams can receive real-time feedback on resource usage and costs as infrastructure is deployed and scaled. This integration helps track the following:
Cloud cost governance is an essential aspect of any cost management strategy, ensuring that teams do not overspend and stay within their allocated budgets. With IaCM, you can automate governance policies to ensure cloud resources are provisioned in accordance with business rules and financial guidelines.
For instance, you can enforce policies such as:
Harness IaCM allows you to enable cost estimation at the workspace level, ensuring that you know the approximate cost of your infrastructure changes ahead of time before applying those changes. For example, the team can implement an automated policy that checks whether the total monthly cost of any proposed infrastructure exceeds a predefined threshold. If this threshold is crossed, the policy triggers an alert or blocks the deployment altogether, preventing unexpected financial strain.
Such a policy might automatically deny any changes if the total monthly cost of the infrastructure exceeds $100, helping to maintain budgetary control and avoid unexpected expenses. Additionally, the team can set policies to ensure that the cost of changes does not increase significantly compared to the previous plan, providing an extra layer of cost governance.
When integrating Infrastructure as Code and Cloud Cost Management into your workflows, consider the following strategies:
Bringing together the capabilities of Infrastructure as Code and Cloud Cost Management can significantly enhance your organization’s ability to manage cloud costs effectively. By integrating these practices, teams can gain better visibility into their spending, enforce budgetary controls, and optimize resource usage—all critical components for running efficient, cost-effective cloud operations.
For more information on implementing these strategies, check out Harness Infrastructure as Code Management and Harness Cloud Cost Management.
Also, check out our recent webinar on how to whip your cloud costs into shape.


Infrastructure as Code (IaC) has revolutionized IT infrastructure management, with HashiCorp’s Terraform leading the way for many years. However, when HashiCorp introduced licensing restrictions on Terraform, it left many organizations questioning the future of their open-source infrastructure tooling.
Enter OpenTofu: a community-driven fork of Terraform that's rapidly gaining traction among developers and operations teams alike.
Born out of a desire to preserve the open-source ethos, OpenTofu is more than just a Terraform clone. It represents a philosophical shift in how we approach infrastructure management tools. In this post, we'll dive deep into OpenTofu, unpack its origins, explore its key features, and see how it stacks up against its well-established predecessor.
Whether you're a seasoned Terraform user or new to the world of IaC, understanding OpenTofu is crucial as we navigate the ever-evolving terrain of cloud infrastructure management. So, let's roll up our sleeves and get to know this promising new player in the DevOps toolkit.
OpenTofu is an open-source infrastructure as code tool that allows users to define and provision data center infrastructure using a declarative configuration language. As a fork of HashiCorp's Terraform, it was created to ensure the continued availability of a fully open-source option for infrastructure management.
At its core, OpenTofu enables developers and operations teams to manage complex infrastructure setups through code, bringing software development practices to infrastructure management. This approach, known as Infrastructure as Code (IaC), facilitates version control, code review, and automated testing for infrastructure changes.
OpenTofu supports a wide array of service providers and can manage both cloud and on-premises resources. From spinning up virtual machines to configuring networking rules, it provides a unified workflow for provisioning and managing infrastructure across different platforms.
OpenTofu was born out of the open-source community’s response to HashiCorp’s decision to change Terraform’s license from the Mozilla Public License v2.0 (MPL v2.0) to the more restrictive Business Source License (BSL). This shift raised concerns about the long-term accessibility and open-source future of Terraform, prompting the need for an alternative that would uphold open-source principles and empower the community to drive the tool’s direction.
Here’s how OpenTofu differs from Terraform and why it represents a significant shift in the Infrastructure as Code (IaC) landscape:
Licensing:
OpenTofu remains fully open-source under the MPL v2.0 license, ensuring unrestricted access to its code and the freedom for users to modify and distribute it. In contrast, Terraform now operates under the BSL, a source-available license that imposes limitations on its use and restricts open development.

Governance:
OpenTofu is governed by the community through the Linux Foundation, emphasizing a decentralized and collaborative approach. This means development decisions, feature requests, and bug fixes are driven by the broader community. Terraform, on the other hand, remains under the control of HashiCorp, with development decisions made centrally by the company.
Feature Parity and Divergence:
While OpenTofu began as a fork of Terraform and maintains feature parity for now, the two tools are expected to diverge over time. OpenTofu will evolve based on community priorities and needs, while Terraform’s feature set will likely be influenced by HashiCorp’s commercial goals. This divergence may lead to OpenTofu gaining new features or adopting changes faster in areas prioritized by its users.
Provider Ecosystem:
Both OpenTofu and Terraform can utilize the existing ecosystem of providers to manage infrastructure across multiple platforms. However, differences may arise in how quickly new providers are supported. OpenTofu, driven by community contributions, may focus on rapid provider development for cloud and on-premises systems based on user demand, while Terraform’s provider updates will follow HashiCorp’s priorities.
Development Pace:
As a community-driven project, OpenTofu is likely to see a faster pace of development in areas that matter most to its users. This could include bug fixes, new features, and provider support, with the community able to directly influence the tool’s roadmap. Terraform’s development, meanwhile, will be steered by HashiCorp’s internal timelines and enterprise focus.
Enterprise Features:
OpenTofu aims to keep all features fully open-source, making it a cost-effective solution for teams of any size. In contrast, Terraform separates certain advanced features—such as governance, policy enforcement, and collaboration—into its enterprise offering, which is only available under commercial licensing.
OpenTofu stands out as more than just a fork of Terraform; it represents a philosophical shift toward community empowerment, open governance, and unrestricted access. By combining these differences with its commitment to open-source values, OpenTofu provides a stable, innovative, and flexible alternative for organizations looking to maintain control over their infrastructure as code tools without compromising on features or flexibility.
The history of OpenTofu is closely intertwined with that of Terraform:

Since its inception, OpenTofu has gained support from major tech companies and cloud providers, indicating strong interest in maintaining an open-source IaC solution.
Why Use OpenTofu and What Does It Offer?
OpenTofu is more than just an open-source alternative to Terraform—it combines the power of a community-driven project with a robust feature set that makes it a compelling choice for managing infrastructure as code. Here’s why OpenTofu stands out and the key features it offers:
Open-Source Commitment & Community Governance:
As a fully open-source tool under the MPL v2.0 license, OpenTofu ensures unrestricted access to its codebase. Managed by the Linux Foundation, it benefits from diverse contributions, making it a tool shaped by its community’s needs. This open governance model also drives faster innovation and development, responding directly to user demands.
Compatibility & Flexibility:
OpenTofu maintains full compatibility with existing Terraform configurations and providers, making it an easy transition for users already familiar with Terraform. Its flexibility allows for extensive customization and integration with other tools, offering broad support for cloud and on-premises infrastructure.
Declarative Infrastructure as Code:
Like Terraform, OpenTofu uses the HashiCorp Configuration Language (HCL) for declarative infrastructure management. This allows users to define their infrastructure in a way that’s easy to read and understand, while also being compatible with version control systems for change tracking.
Comprehensive State Management:
OpenTofu keeps track of infrastructure changes with a state file, ensuring you always have an accurate representation of your deployed resources. Features like state locking prevent concurrent modifications, maintaining consistency and preventing accidental overwrites.
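For example, a remote backend with state locking can be declared directly in the configuration. The snippet below uses an S3 bucket with a DynamoDB lock table; the bucket, table, and region values are placeholders, and OpenTofu accepts the same terraform block and backend syntax as Terraform:

```hcl
terraform {
  backend "s3" {
    bucket         = "example-tofu-state"           # placeholder bucket name
    key            = "networking/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "example-tofu-locks"           # table used for state locking
  }
}
```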
Plan and Apply Workflow:
OpenTofu allows users to generate a detailed execution plan (opentofu plan) before applying any changes. This helps prevent unexpected infrastructure modifications by giving teams full visibility into what changes will be made before they are applied.
Resource Graph:
OpenTofu builds a dependency graph of your resources, allowing it to determine the correct order for creating, updating, or deleting them. This automatic ordering ensures efficient infrastructure provisioning.
Modular Infrastructure & Reusability:
OpenTofu supports reusable modules, enabling developers to encapsulate and share standardized infrastructure components. This encourages the DRY (Don't Repeat Yourself) principle, reducing duplicated infrastructure code across projects.
Using OpenTofu is straightforward, with a set of core commands that follow a clear workflow for managing infrastructure. Whether you’re running OpenTofu directly or through a CI/CD pipeline like Harness IaCM, these commands form the backbone of your IaC operations:
opentofu init: The init command initializes your working directory. This sets up your configuration files and prepares your environment for running other OpenTofu commands.
opentofu plan: Before applying changes, plan generates a detailed execution plan, showing what changes will be made to your infrastructure. This step ensures that you review changes before implementing them.
opentofu apply: The apply command carries out the changes defined in your configuration. Once you’ve reviewed the plan, apply makes the modifications to your infrastructure, whether it’s creating resources, updating them, or deleting them.
opentofu destroy: The destroy command is used to clean up resources. When you no longer need infrastructure, destroy will tear it down, ensuring you avoid unnecessary costs or complexity.
opentofu validate: Before running a plan or apply, it’s good practice to use validate to check your configuration for syntax errors or inconsistencies. This helps catch issues early in the workflow.
opentofu state: The state command manages the state file, which tracks the current state of your infrastructure. You can use state to query, modify, or import resources into OpenTofu’s state file.
Here’s how these commands work in sequence:
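A typical local run looks something like the sequence below. Note that the OpenTofu CLI binary is invoked as tofu, and the plan file name is arbitrary:

```sh
tofu init                     # set up the working directory, backend, and providers
tofu validate                 # catch syntax errors and inconsistencies early
tofu plan -out=tfplan         # preview the changes and save the plan to a file
tofu apply tfplan             # apply exactly the plan that was reviewed
tofu state list               # inspect the resources now tracked in state
tofu destroy                  # tear everything down when it is no longer needed
```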
For those using Harness IaCM, OpenTofu commands like init, plan, and apply are automatically executed within your pipelines. Harness ensures these commands run with the correct context and credentials, streamlining your IaC processes.
Check out how these commands can be executed in sequence with Harness IaCM.
1. Is OpenTofu compatible with existing Terraform configurations?
Yes, OpenTofu is designed to be compatible with existing Terraform configurations and state files.
2. Can I use Terraform providers with OpenTofu?
Yes, OpenTofu can use existing Terraform providers.
3. How does OpenTofu handle state management?
OpenTofu uses the same state management system as Terraform, including support for remote state storage.
4. Is OpenTofu suitable for enterprise use?
Yes, OpenTofu is designed to be enterprise-ready, with features supporting large-scale infrastructure management.
5. How does the performance of OpenTofu compare to Terraform?
As OpenTofu is a direct fork of Terraform, its performance is generally similar.
By leveraging Harness IaCM alongside OpenTofu, teams can move beyond manual IaC management to a fully automated, governed, and scalable solution. This combination provides the tools needed for consistent, secure, and efficient infrastructure provisioning, all while maintaining the open-source flexibility that OpenTofu offers.
For more information on getting started with OpenTofu in Harness, check out Harness IaCM or join our on-demand webinar to learn how GitOps and OpenTofu are shaping the future of IaC.
OpenTofu, combined with tools like Harness IaCM, is the future of Infrastructure as Code. Whether you’re building infrastructure at scale or just getting started with IaC, the flexibility and community-driven innovation offered by OpenTofu make it a must-have tool. Start exploring how OpenTofu and Harness IaCM can transform your infrastructure management today.


As a software professional, whether you're a developer, a DevOps engineer, or someone managing infrastructure, the DRY (Don't Repeat Yourself) principle is a foundational concept you're likely familiar with. In the world of code, this principle encourages us to minimize repetition by reusing components, libraries, and frameworks wherever possible. It's all about making your work more efficient, consistent, and maintainable.
The same principle holds true when working with infrastructure as code (IaC). With tools like OpenTofu and Terraform, we can avoid repetition by using reusable modules — pre-packaged pieces of infrastructure code that can be imported and deployed across various projects. These modules act like software libraries but are tailored for infrastructure management, allowing us to streamline the process of provisioning and managing infrastructure components such as networks, databases, and servers.
If you've ever worked with code dependencies or libraries, you'll find the concept of a module registry quite familiar. A module registry is a centralized location where reusable components—modules—are stored, managed, and shared. These modules represent pre-built pieces of infrastructure code that can be easily imported into your projects to set up common infrastructure components, such as virtual machines, databases, and networks.
For example, instead of writing new code each time you need to spin up a database server, you can import a pre-built module from the registry. This approach helps developers avoid "reinventing the wheel," speeding up deployment and making infrastructure code easier to maintain.

Let's explore how module registries are transforming the landscape of infrastructure management and why they're becoming an essential tool in the modern DevOps toolkit.
Traditionally, setting up and managing infrastructure involved a lot of manual work. DevOps engineers would often find themselves copying and pasting code snippets across different environments, leading to inconsistencies and a higher likelihood of errors. This approach was not only time-consuming but also prone to configuration drift – a situation where environments that should be identical slowly become different over time due to manual changes and inconsistent updates.
Let's look at an example of how infrastructure might have been set up in the past:
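A representative hand-rolled setup often looks something like this; the resource names and CIDR ranges are purely illustrative:

```hcl
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = { Name = "app-vpc" }
}

resource "aws_subnet" "private" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"

  tags = { Name = "app-private-subnet" }
}

resource "aws_security_group" "web" {
  name   = "web-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```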
In this example, every detail of the VPC, subnet, and security group configuration is defined manually. Now imagine having to replicate this across multiple projects or environments. The potential for errors and inconsistencies becomes apparent, not to mention the time investment required to set up and maintain such configurations.
A module registry is a centralized repository where reusable infrastructure components – modules – are stored, managed, and shared. These modules are pre-packaged pieces of infrastructure code that can be easily imported and deployed across various projects. Think of them as the infrastructure equivalent of software libraries.
By leveraging a module registry, DevOps teams can:
1. Enhance Consistency: Use standardized, pre-approved modules across projects and environments.
2. Improve Efficiency: Reduce the time spent on repetitive tasks by reusing existing modules.
3. Minimize Errors: Decrease the likelihood of misconfigurations by using tested and verified modules.
4. Facilitate Collaboration: Share best practices and expertise across teams through well-documented modules.
5. Enable Version Control: Manage different versions of infrastructure components, allowing for controlled updates and rollbacks.
Let's revisit our previous example, this time using a module from a registry:
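With a registry module, the same intent becomes far more concise. The example below uses the publicly available terraform-aws-modules/vpc module for illustration; in practice you might point at a module in your organization's own registry, and the values shown are placeholders:

```hcl
module "network" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0" # pin a version range for predictable upgrades

  name = "app-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-1a", "us-east-1b"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24"]

  enable_nat_gateway = true

  tags = { Name = "app-vpc" }
}
```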
In this updated version, we're using a pre-built network module that encapsulates the complexity of setting up a VPC, subnets, and associated resources. Not only is this code more concise, but it also ensures that best practices are followed consistently across all instances where this module is used.
While the concept of module registries isn't new, platforms like the Harness IaCM Module Registry are taking it to the next level by integrating seamlessly with existing DevOps workflows and providing additional features that enhance security, governance, and ease of use.
Key features of the Harness IaCM Module Registry include:
1. Centralized Storage: All modules are stored in one secure location, making it easy to share and update them across projects.
2. Version Management: Modules are versioned, allowing teams to specify exact versions in their projects, ensuring stability and predictability.
3. Security and Access Control: Granular controls over who can access modules and enforce policies for updates.
4. Integration with CI/CD Pipelines: Seamless integration with existing CI/CD workflows, automating deployment and enhancing consistency across environments.
5. Automated Syncing: The ability to automatically sync modules with their source repositories, ensuring that the registry always contains the latest versions.
For detailed instructions on how to implement and use the Harness IaCM Module Registry in your workflow, refer to the official Harness developer documentation.
The adoption of module registries is having a profound impact on DevOps practices:
1. Accelerated Development: By eliminating the need to reinvent the wheel for every project, teams can focus on building and deploying applications faster.
2. Improved Collaboration: Shared modules foster knowledge sharing and collaboration between teams, breaking down silos between development and operations.
3. Enhanced Security: Centralized management of infrastructure modules allows for better control over security practices and easier implementation of security patches across all projects.
4. Scalability: As organizations grow, module registries provide a scalable way to manage increasingly complex infrastructure needs without a proportional increase in management overhead.
5. Cost Optimization: Reusable modules can be optimized for cost-effectiveness, ensuring that best practices for resource utilization are consistently applied across all projects.
While the benefits of module registries are clear, their implementation does come with some challenges:
1. Learning Curve: Teams need to adapt to thinking in terms of modular infrastructure, which may require some initial training and adjustment.
2. Module Maintenance: As with any shared resource, modules need to be maintained, updated, and deprecated when necessary. This requires ongoing effort and clear ownership.
3. Balancing Flexibility and Standardization: There's a delicate balance between providing standardized modules and allowing for the flexibility needed in diverse projects.
4. Version Management: As modules evolve, managing different versions and ensuring compatibility can become complex.
As we look to the future, module registries are poised to play an increasingly central role in infrastructure management. We can expect to see:
1. AI-Assisted Module Creation: Machine learning algorithms helping to generate and optimize infrastructure modules based on best practices and usage patterns.
2. Cross-Platform Compatibility: Enhanced interoperability between different cloud providers and on-premises infrastructure through standardized module interfaces.
3. Automated Compliance Checking: Built-in tools for automatically verifying that modules meet industry standards and compliance requirements.
4. Dynamic Module Composition: The ability to dynamically compose complex infrastructure setups from smaller, more granular modules based on application requirements.
The adoption of module registries represents a significant leap forward in the world of infrastructure management. By embracing the DRY principle and leveraging tools like the Harness IaCM Module Registry, DevOps teams can dramatically improve their efficiency, consistency, and ability to scale.
As we continue to push the boundaries of what's possible in software development and deployment, module registries will undoubtedly play a crucial role in shaping the future of DevOps practices. Whether you're managing a small startup's infrastructure or orchestrating complex systems for a large enterprise, embracing module registries is a step towards more robust, maintainable, and efficient infrastructure management.
The journey towards fully modular infrastructure may seem daunting, but the benefits far outweigh the initial investment. As you embark on this path, remember that each module you create or reuse is a step towards a more streamlined, secure, and scalable infrastructure.
For those currently using Terraform and considering a switch, OpenTofu provides a smooth migration path. You can find detailed instructions on how to migrate from Terraform to OpenTofu in the official OpenTofu documentation.
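At a high level, the migration usually amounts to backing up state, swapping the binary, and re-initializing; the outline below is simplified, so follow the official guide for version-specific steps:

```sh
# 1. Back up your state file (skip if you use a remote backend with versioning).
cp terraform.tfstate terraform.tfstate.backup

# 2. Install OpenTofu, then re-initialize the working directory with the tofu binary.
tofu init

# 3. Verify nothing changes: a clean migration should produce an empty plan.
tofu plan
```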
The future of DevOps is modular – are you ready to embrace it?
Learn more: How to Implement Infrastructure as Code, IaC Workflow Automation
Check out comparisons: Harness IaCM vs. HashiCorp Terraform
As DevOps has taken hold in software development, infrastructure management has become a critical part of delivering software. We need cloud infrastructure to be agile and dependable. To meet these needs, two powerful concepts have emerged: GitOps and Infrastructure as Code (IaC). When combined, these approaches create a robust framework for managing infrastructure as code. This article will explore how to implement GitOps with Terraform or OpenTofu, providing you with a streamlined approach to infrastructure management.
Terraform is an open-source infrastructure as code (IaC) tool created by HashiCorp. It allows developers and operations teams to define and provision infrastructure using a declarative language. With Terraform, you can describe your desired infrastructure state in configuration files, and the tool will handle the complexities of creating, modifying, and deleting resources across various cloud providers and services.
Terraform's power lies in its ability to manage complex infrastructure setups with consistency and repeatability. It supports a wide range of providers, from major cloud platforms to more specialized services. This versatility makes Terraform a go-to choice for organizations looking to standardize their infrastructure management across multiple environments.
OpenTofu is an open-source infrastructure as code tool that emerged as a community-driven fork of Terraform. It was created in response to HashiCorp's decision to change Terraform's license from open-source to a more restrictive one. OpenTofu aims to maintain the functionality and compatibility of Terraform while remaining fully open-source.
Key points about OpenTofu:
When implementing GitOps for infrastructure as code, OpenTofu can be used as a drop-in replacement for Terraform in most scenarios. The choice between OpenTofu and Terraform often comes down to licensing preferences and organizational requirements. Both tools can be effectively used within a GitOps workflow, leveraging the same principles of version control, declarative configurations, and automated deployments.
For organizations concerned about potential future licensing changes or those preferring a community-driven open-source solution, OpenTofu provides a viable alternative that integrates seamlessly into existing GitOps practices. Harness is a sponsor of the OpenTofu project and believes you should use OpenTofu over Terraform.
GitOps is an operational framework that takes DevOps best practices used for application development and applies them to infrastructure automation. At its core, GitOps uses Git repositories as the single source of truth for declarative infrastructure and applications. This approach leverages Git's version control capabilities to manage infrastructure changes, providing a clear audit trail and facilitating collaboration among team members.
In a GitOps workflow, any change to the infrastructure is made through a Git repository. Automated processes then sync these changes with the actual infrastructure, ensuring that the deployed state always matches the desired state defined in the repository. This method enhances transparency, improves security, and streamlines the change management process.
Combining Terraform with GitOps creates a powerful synergy for infrastructure management. Here's why this pairing is particularly effective:
By leveraging Terraform within a GitOps framework, organizations can achieve a high degree of automation, consistency, and traceability in their infrastructure management processes. This combination is particularly powerful when used with platforms like Harness Software Delivery Platform, which provides robust GitOps capabilities for both applications and infrastructure as code management.
Implementing GitOps for your infrastructure involves several key steps.
Let's break down the process into four manageable parts.
The first step in GitOps-ing your IaC is to set up a Git repository that will serve as the single source of truth for your infrastructure code. This repository will contain all your Terraform configuration files, modules, and associated documentation.
When setting up your repository, consider best practices such as using a clear directory structure, including comprehensive documentation, implementing branch protection rules, and using .gitignore files to prevent sensitive information from being committed.
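As one possible starting point (adapt it to your own conventions), a directory layout and a minimal .gitignore might look like this:

```text
infrastructure/
├── modules/        # reusable building blocks (network, database, ...)
│   └── network/
├── environments/
│   ├── dev/
│   └── prod/
├── docs/           # architecture notes and module usage
└── .gitignore
```

```text
# .gitignore
.terraform/
*.tfstate
*.tfstate.*
crash.log
*.tfvars        # omit this line if your tfvars contain no secrets and should be versioned
```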
By centralizing your Terraform code in a Git repository, you're laying the foundation for a GitOps workflow. This step also facilitates collaboration and provides a clear history of infrastructure changes.
With your repository set up, the next step is to define your infrastructure using Terraform's HashiCorp Configuration Language (HCL). This involves creating .tf files that describe your desired infrastructure state.
When writing your Terraform configurations, focus on using variables and locals for flexibility, leveraging modules for organization, following naming conventions, and generating documentation for your modules.
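For instance, a small variables-and-locals pattern can keep naming and tagging consistent across environments; the module path and input names below are illustrative and assume a local network module with matching variables:

```hcl
variable "environment" {
  type        = string
  description = "Deployment environment, e.g. dev, staging, prod"
}

variable "region" {
  type    = string
  default = "us-east-1"
}

locals {
  # One naming convention applied everywhere keeps resources easy to identify.
  name_prefix = "payments-${var.environment}"

  common_tags = {
    environment = var.environment
    managed_by  = "opentofu"
  }
}

module "network" {
  source = "./modules/network" # local module; could also point at a registry source

  name   = local.name_prefix
  region = var.region
  tags   = local.common_tags
}
```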
Platforms like Harness Infrastructure as Code Management (IaCM) can enhance this process. Harness IaCM works seamlessly with Terraform and OpenTofu, providing additional capabilities such as cost impact analysis and security scanning, which can be invaluable when making infrastructure changes.
The third step in implementing GitOps for your Terraform setup is to establish a pull request (PR) workflow for managing infrastructure changes. This process ensures that all changes are reviewed and approved before being applied to your infrastructure.
A typical workflow involves creating branches for changes, opening pull requests, running automated checks, conducting team reviews, and merging approved changes. This approach enforces code review practices, allows for collaboration, and provides a clear audit trail of infrastructure changes.
Platforms like Harness IaCM can enhance this process by automatically running cost impact analyses and security scans on your Terraform or OpenTofu changes, updating the pull request with this valuable information. This additional context can help reviewers make more informed decisions about proposed infrastructure changes.
The final step involves setting up a CI/CD pipeline that will automatically apply your Terraform changes to your infrastructure. This pipeline is the core of your GitOps workflow, ensuring that any changes pushed to your Git repository are reflected in your actual infrastructure.
Your pipeline should include steps to checkout the latest code, initialize Terraform, generate a plan, and apply changes. Pipelines can seem counter-intuitive for a GitOps flow. It's common to ask, "But shouldn't we just do a pull request and apply the approved change?" However, pipeline automation can help inform that pull request. By running the Terraform Plan as well as security and cost checks before the PR is approved, the PR can be decorated with additional information making it easier for the reviewer to make the right decision.
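In plain CLI terms, those core stages reduce to something like the sequence below (shown with the tofu binary; terraform is interchangeable here), with the plan output published back to the pull request by your CI/CD or IaCM platform:

```sh
git clone https://example.com/org/infrastructure.git && cd infrastructure
tofu init -input=false
tofu plan -input=false -out=tfplan     # on pull requests: plan only, publish the summary
tofu show -json tfplan > plan.json     # machine-readable plan for cost and security checks
tofu apply -input=false tfplan         # after approval and merge: apply the reviewed plan
```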
Many IaCM tools have some support for automation tied to changes in Git, and many CI/CD tools can script some IaCM behaviors. Tools like Harness are appealing because they combine the pipeline maturity of a leading CI/CD platform with dedicated IaCM steps and the state management typically found only in tools that specialize in infrastructure.
In conclusion, implementing GitOps for your Terraform-managed infrastructure is a powerful approach that can significantly improve your infrastructure management practices. By following these four steps - setting up a Git repository, configuring your infrastructure as code, creating an automated pipeline, and implementing a pull request workflow - you can achieve a more consistent, transparent, and efficient infrastructure management process.
The combination of GitOps and Terraform, especially when enhanced by platforms like Harness, provides a robust framework for managing infrastructure at scale. It enables teams to apply software development best practices to infrastructure management, resulting in more reliable, secure, and agile infrastructure deployments.
As you embark on your GitOps journey with Terraform, remember that the key to success lies in embracing the principles of automation, version control, and continuous improvement. With these practices in place, you'll be well-equipped to handle the challenges of modern infrastructure management in an increasingly complex technological landscape.


The objectives for this initiative were:
Ours is a cloud-native stack with dozens of microservices deployed in a Kubernetes cluster. We needed to create new production clusters for scale and to provide our SaaS service in more geographies (outside the US). We opted for OpenTofu and Terragrunt as our IaC tools and standardized on Helm charts as service artifacts. Our CI process produces Docker images and corresponding Helm charts (as versioned and immutable artifacts; we bake image tags into the chart).
We divided our stack into four tiers from the bottom (Tier-1) to the top (Tier-4), as illustrated in the diagram below:

The separation of concerns from the security and operations point of view determined these tiers. The following are the functions of each tier:
Tier-1 needs the highest privileged access (it needs to be IAM admin). Our Security Operations team operates this tier from their workstations. The Tier-1 setup also deploys a Harness Delegate with an IAM role with the required permissions (scoped to the Project) to manage the other tiers. We operate Tiers 2-4 through Harness pipelines. Harness' RBAC system provides granular controls for managing access at the environment level. For production environments, we restrict access to Tier-2 and Tier-3 to Cloud Engineers (who manage our production infrastructure), while Tier-4 is available to individual application teams for their independent service deployments.
We use External Secret Manager to pass secrets from the lower to the upper tier. Cloud and Application engineering teams never see the secrets. We use keyless workload identities wherever applicable in our application and infrastructure tiers.
This tiered approach to the infrastructure stack ensures best-in-class security controls for our cloud infrastructure and provides flexibility and agility for application teams' development flows.
We have more than a dozen independent development teams. Devspaces are on-demand production-like environments where the teams can do their feature testing. We use the same infrastructure stack to build Devspaces. Each devspace is implemented as an isolated namespace (Tier-3 and -4) in a shared cluster (Tier-1 and -2). Developers can deploy feature builds to their devspaces while the rest of the stack runs a production-like configuration managed by the central team.

Devspaces have proven to be very versatile in our development process. They have effectively removed the bottlenecks of the integration environment, enabling each development team to do end-to-end feature testing in their own environments. These environments are instrumental in various use cases, such as feature testing, performance testing, demo environments for early feedback, and documentation. In a typical week, we see over a thousand feature build deployments across a hundred-odd devspaces, a testament to their versatility and efficiency. We have built features like TTL and team-wise cost visibility for devspaces to improve cost efficiency.
We have all aspects of our infrastructure version controlled in Git. It includes infrastructure and pipeline definition and environment-specific configurations. Git provides us with an audit trail through commit history. We use Pull Request flows to govern changes. Git-based versioning provides us with complete repeatability of environment setup.

Standardized IaC driven by Harness pipelines has provided a very flexible mechanism for creating environments for various use cases at Harness. We call this approach Environment-as-a-Service. The diagram below depicts the different use cases in which we employ this.

This initiative has had a significant impact at Harness. In the last few months, we have created three new production clusters, one integration and two QA environments, and over a hundred devspaces. The time to create a new production cluster has been reduced to a few hours from many weeks, a testament to the power and efficiency of this approach.
We are working with some of our large enterprise customers who are interested in adopting our approach. In the future, we plan to improve documentation and system usability, and to open-source our infrastructure repository so that others can benefit from this work.