Feature Management & Experimentation Blogs

Featured Blogs

December 1, 2025

Product and experimentation teams need confidence in their data when making high-impact product decisions. Today, experiment results often require copying behavioral data into external systems, which creates delays, security risks, and black-box calculations that are difficult to trust or validate.

Warehouse Native Experimentation keeps experiment data directly in your data warehouse, enabling you to analyze results with full transparency and governance control.

With Warehouse Native Experimentation, you can:

  • Run experiments without exporting data
  • Use transparent SQL logic that you control
  • Maintain alignment with internal data models
  • Accelerate experimentation without depending on streaming data pipelines

Why Warehouse Native Experimentation matters today

Product velocity has become a competitive differentiator, but experimentation often lags behind. AI-accelerated development means teams ship code faster than ever, while maintaining confidence in data-driven decisions grows harder.

Modern teams face increasing pressure to:

  • Move faster while reducing operational costs
  • Reduce risk when launching high-impact features
  • Maintain strict data compliance and governance
  • Align product decisions with reliable, shared business metrics

Executives are recognizing that sustainable velocity requires trustworthy insights. According to the 2025 State of AI in Software Engineering report, 81% of engineering leaders surveyed agreed that:

“Purpose-built platforms that automate the end-to-end SDLC will be far more valuable than solutions that target just one specific task in the future.”

At the same time, investments in data warehouses such as Snowflake and Amazon Redshift have increased. These platforms have become the trusted source of truth for customer behavior, financial reporting, and operational metrics.

This shift creates a new expectation: experiments must run where data already lives, results must be fully transparent to data stakeholders, and insights must be trustworthy from the get-go.

Warehouse Native Experimentation enables teams to scale experimentation without streaming data pipelines, vendor lock-in, or black-box calculations, at a time when trust and speed are critical to business success.

Experiment where your data lives

Warehouse Native Experimentation integrates with Snowflake and Amazon Redshift, allowing you to analyze assignments and events within your data warehouse.

Screenshot of running a Warehouse Native experiment in Snowflake

Because all queries run inside your warehouse, you benefit from full visibility into data schemas and transformation logic, higher trust in experiment outcomes, and the ability to validate, troubleshoot, and customize queries.

Screenshot of viewing Warehouse Native experiment results in Harness FME
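
To make that transparency concrete, here is a minimal sketch of the kind of query you could run against Snowflake yourself. Every table and column name below (`assignments`, `purchase_events`, and so on) is hypothetical, and the SQL that Harness FME actually generates for your experiments will differ:

```python
# A minimal sketch of validating experiment results directly in Snowflake.
# All table and column names are hypothetical; the SQL Harness FME
# generates will differ.
import snowflake.connector

conn = snowflake.connector.connect(
    account="your-account",
    user="your-user",
    password="your-password",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="EXPERIMENTS",
)

# Conversion rate per variant: join exposure records to purchase events,
# counting only events that happened after the user was assigned.
sql = """
SELECT
    a.treatment,
    COUNT(DISTINCT a.user_id) AS exposed_users,
    COUNT(DISTINCT e.user_id) AS converted_users,
    COUNT(DISTINCT e.user_id) / COUNT(DISTINCT a.user_id) AS conversion_rate
FROM assignments a
LEFT JOIN purchase_events e
  ON e.user_id = a.user_id
 AND e.event_ts >= a.assigned_ts
WHERE a.experiment_id = 'new_checkout'
GROUP BY a.treatment
"""

for treatment, exposed, converted, rate in conn.cursor().execute(sql):
    print(f"{treatment}: {converted}/{exposed} converted ({rate:.2%})")
```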

When Warehouse Native experiment results are generated from your organization's single source of truth, decision-making becomes faster and more confident.

Create metrics that reflect your business

Metrics define success, and Warehouse Native Experimentation enables teams to define them using data that already adheres to internal governance rules. You can build metrics using existing warehouse tables, reuse them across multiple experiments, and include guardrail metrics (such as latency, revenue, or stability) to ensure consistency and accuracy. As experimentation needs evolve, metrics evolve with them, without duplicating data definitions.

Screenshot of adding a metric definition in Harness FME

Experiments generate value when success metrics represent business reality. By codifying business logic into metrics, you can monitor the performance of what matters to your business, such as checkout conversion based on purchase events, average page load time as a performance guardrail, and revenue per user associated with e-commerce goals.
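
As an illustration, each of those metrics typically reduces to a small aggregation over an existing event table. The definitions below are hypothetical sketches (invented table and column names) showing how that business logic might be codified once and reused:

```python
# Hypothetical metric definitions expressed as plain SQL over existing
# warehouse tables. Table and column names are illustrative only.
METRICS = {
    # Key metric: did the user purchase after exposure?
    "checkout_conversion": """
        SELECT user_id, COUNT(*) > 0 AS converted
        FROM purchase_events
        GROUP BY user_id
    """,
    # Performance guardrail: average page load time per user.
    "avg_page_load_ms": """
        SELECT user_id, AVG(load_time_ms) AS value
        FROM page_load_events
        GROUP BY user_id
    """,
    # Revenue guardrail: revenue per user.
    "revenue_per_user": """
        SELECT user_id, SUM(order_total) AS value
        FROM purchase_events
        GROUP BY user_id
    """,
}
```

In Harness FME, you would define logic like this once as metric definitions and attach them to experiments as key or guardrail metrics.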

Understand experiment impact with transparent results

Once you've defined your metrics, Warehouse Native Experimentation computes results automatically, on a daily schedule or on manual refresh, and provides clear statistical significance indicators.

Because every result is generated with SQL that you can view in your data warehouse, teams can validate transformations, debug anomalies, and collaborate with data stakeholders. When everyone, from product to data science, can inspect the results, everyone trusts the decision.
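
Harness FME runs the statistics for you, but because the inputs are ordinary warehouse tables, you can sanity-check the headline numbers independently. As a sketch (not FME's exact methodology), here is the classic two-proportion z-test for a conversion metric, with made-up counts:

```python
# Sketch: two-proportion z-test for a conversion metric.
# Counts are invented; in practice they come from your warehouse query.
from math import sqrt
from scipy.stats import norm

control_conv, control_n = 480, 10_000      # control: 4.80% conversion
treatment_conv, treatment_n = 545, 10_000  # treatment: 5.45% conversion

p1 = control_conv / control_n
p2 = treatment_conv / treatment_n
pooled = (control_conv + treatment_conv) / (control_n + treatment_n)
se = sqrt(pooled * (1 - pooled) * (1 / control_n + 1 / treatment_n))

z = (p2 - p1) / se
p_value = 2 * norm.sf(abs(z))              # two-sided test

print(f"lift: {p2 - p1:+.2%}, z = {z:.2f}, p = {p_value:.4f}")
```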

Set up Warehouse Native Experimentation

Warehouse Native Experimentation requires connecting your data warehouse and ensuring your experiment and event data are ready for analysis. Warehouse Native Experimentation does not require streaming or ingestion; Harness FME reads directly from assignment and metric source tables.

To get started:

  1. Connect your data warehouse to Harness FME. Warehouse Native Experimentation requires the ability to read behavioral event and assignment tables, write results into a dedicated Harness schema, and run scheduled query jobs.
  2. Prepare your data model. In your data warehouse, assignment source tables track who was exposed to which variant, ensuring that users are correctly mapped to treatments and environments. Metric source tables contain the event-level data used in metric definitions, grounding every analysis in a consistent, verifiable source. A sketch of both table shapes follows this list.
  3. Configure sources in Harness FME. Assignment sources define the exposure table structure and mappings, while metric sources define the event structure and metadata context. This ensures experiment analysis aligns with your warehouse schemas.
  4. Define metrics and create experiments. Once your data warehouse is connected, you can add key metrics and guardrail metrics, run experiments, and view the latest results in Harness FME.
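
As a rough sketch of step 2, the two source tables might be shaped like this. Every name below is hypothetical; when you configure sources in Harness FME, you map whatever schema your warehouse already uses:

```python
# Hypothetical shapes for the two source tables. Column names are
# illustrative; you map your real schema when configuring sources.
ASSIGNMENT_SOURCE_DDL = """
CREATE TABLE IF NOT EXISTS experiment_assignments (
    user_id       VARCHAR,    -- who was exposed
    experiment_id VARCHAR,    -- which experiment
    treatment     VARCHAR,    -- which variant they received
    environment   VARCHAR,    -- e.g., staging or production
    assigned_ts   TIMESTAMP   -- when exposure happened
)
"""

METRIC_SOURCE_DDL = """
CREATE TABLE IF NOT EXISTS behavioral_events (
    user_id    VARCHAR,       -- joins to assignments
    event_type VARCHAR,       -- e.g., purchase, page_load
    value      DOUBLE,        -- numeric payload (revenue, latency, ...)
    event_ts   TIMESTAMP      -- when the event occurred
)
"""
```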

From setting up Warehouse Native Experimentation to accessing your first Warehouse Native experiment result, organizations can efficiently move from raw data to validated insights, without building data pipelines.

Start running Warehouse Native experiments today

Warehouse Native Experimentation is ideal for organizations that already capture behavioral data in their warehouse, want experimentation without data exporting, and value transparency, governance, and flexibility in metrics.

Whether you're optimizing checkout or testing a new onboarding experience, Warehouse Native Experimentation enables you to make informed decisions, powered by the data sources your business already trusts.

Looking ahead, Harness FME will extend these workflows toward a shift-left approach, bringing experimentation closer to the release process with data checks in CI/CD pipelines, Harness RBAC permissioning, and policy-as-code governance. This alignment ensures product, experimentation, and engineering teams can release faster while maintaining confidence and compliance in every change.

To start running experiments in a supported data warehouse, see the Warehouse Native Experimentation documentation. If you're brand new to Harness FME, sign up for a free trial today.

October 31, 2025

Managing feature flags can be complex, especially across multiple projects and environments. Teams often need to navigate dashboards, APIs, and documentation to understand which flags exist, their configurations, and where they are deployed. What if you could handle these tasks using simple natural language prompts directly within your AI-powered IDE?

Screenshot of the Claude Code interface displaying the output from a prompt to identify fully rolled-out feature flags that are safe to remove from code.

Harness Model Context Protocol (MCP) tools make this possible. By integrating with Claude Code, Windsurf, Cursor, or VS Code, developers and product managers can discover projects, list feature flags, and inspect flag definitions, all without leaving their development environment.

Using any of these AI-powered IDE agents, you can query your feature management data in natural language. The MCP tools analyze your projects and flags and return structured outputs that the agent interprets to answer questions accurately and make recommendations for release planning.

With these agents, non-technical stakeholders can query and understand feature flags without deeper technical expertise. This approach reduces context switching, lowers the learning curve, and enables teams to make faster, data-driven decisions about feature management and rollout.

According to Harness and LeadDev’s 2024 survey of 500 engineering leaders, 82% of teams that are successful with feature management actively monitor system performance and user behavior at the feature level, and 78% prioritize risk mitigation and optimization when releasing new features.

Harness MCP tools help teams address these priorities by enabling developers and release engineers to audit, compare, and inspect feature flags across projects and environments in real time, aligning with industry best practices for governance, risk mitigation, and operational visibility.

Simplifying Feature Management Workflows

Traditional feature flag management practices can present several challenges:

  • Complexity: Understanding flag configurations and environment setups can be time-consuming.
  • Context Switching: Teams frequently shift between dashboards, APIs, and documentation.
  • Governance and Consistency: Ensuring flags are correctly configured across environments requires manual auditing.

Harness MCP tools address these pain points by providing a conversational interface for interacting with your FME data, democratizing access to feature management insights across teams.

How MCP Tools Work for Harness FME

The FME MCP integration supports several capabilities:

| Tool | Purpose | Example Use |
| --- | --- | --- |
| `list_fme_workspaces` | Discover all projects (also known as workspaces). | "Show me all FME projects in my account" |
| `list_fme_environments` | Explore environments within a project. | "List the environments under `checkout-service`" |
| `list_fme_feature_flags` | List all flags in a project. | "What feature flags are active in staging?" |
| `get_fme_feature_flag_definition` | Inspect a specific flag. | "Describe the `enable_discount_banner` flag in staging" |

You can also generate quick summaries of flag configurations or compare flag settings across environments directly in Claude Code using natural language prompts.

Some example prompts to get you started include the following:

"List all feature flags in the `checkout-service` project."
"Describe the rollout strategy and targeting rules for `enable_new_checkout`."
"Compare the `enable_checkout_flow` flag between staging and production."
"Show me all active flags in the `payment-service` project."  
“Show me all environments defined for the `checkout-service` project.”
“Identify all flags that are fully rolled out and safe to remove from code.”

These prompts produce actionable insights in Claude Code (or your IDE of choice).
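
Beyond the IDE, the same toolset can be exercised programmatically. The sketch below uses the MCP Python SDK to start the server over stdio and call one of the tools from the table above; the binary path and credentials mirror the configuration shown later in this post, and the tool's argument names are assumptions for illustration:

```python
# Sketch: calling the Harness FME toolset through the MCP Python SDK.
# Binary path, credentials, and tool arguments are illustrative.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(
    command="/path/to/harness-mcp-server",
    args=["stdio", "--toolsets=fme"],
    env={"HARNESS_API_KEY": "your-api-key-here"},
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()    # discover available tools
            print([t.name for t in tools.tools])
            result = await session.call_tool(
                "list_fme_feature_flags",          # tool from the table above
                {"project": "checkout-service"},   # argument names are assumed
            )
            print(result.content)

asyncio.run(main())
```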

Getting Started

To start using Harness MCP tools for FME, ensure you have access to Claude Code and the Harness platform with FME enabled. Then, interact with the tools via natural language prompts to discover projects, explore flags, and inspect flag configurations.

Installation & Configuration

Harness MCP tools transform feature management into a conversational, AI-assisted workflow, making it easier to audit and manage your feature flags consistently across environments.

Prerequisites

Before you begin, you need access to Claude Code (or another MCP-compatible IDE) and a Harness account with FME enabled, plus a Harness API key the MCP server can use to authenticate.

Build the MCP Server Binary

  1. Clone the Harness MCP Server GitHub repository.
  2. Build the binary from source.
  3. Copy the binary to a directory accessible by Claude Code.

Configure Claude Code

  1. Open your Claude configuration file at `~/.claude.json`. If it doesn’t already exist, you can create it.
  2. Add the Harness FME MCP server configuration:
```json
{
  ...
  "mcpServers": {
    "harness": {
      "command": "/path/to/harness-mcp-server",
      "args": [
        "stdio",
        "--toolsets=fme"
      ],
      "env": {
        "HARNESS_API_KEY": "your-api-key-here",
        "HARNESS_DEFAULT_ORG_ID": "your-org-id",
        "HARNESS_DEFAULT_PROJECT_ID": "your-project-id",
        "HARNESS_BASE_URL": "https://your-harness-instance.harness.io"
      }
    }
  }
}
```
  3. Save the file and restart Claude Code for the changes to take effect.

To configure additional MCP-compatible AI tools like Windsurf, Cursor, or VS Code, see the Harness MCP Server documentation, which includes detailed setup instructions for all supported platforms.

Verify Installation

  1. Open Claude Code (or the AI tool that you configured).
  2. Navigate to the Tools/MCP section.

Screenshot of the Claude Code interface showing the Harness FME MCP server's status as connected, including the command path, arguments, configuration location, capabilities, and available tools.

  3. Verify Harness tools are available.

Screenshot of the Claude Code interface displaying the Harness FME MCP toolset, listing all available options.

What’s Next

Feature management at scale is a common operational challenge. With Harness MCP tools and AI-powered IDEs, teams can already discover, inspect, and summarize flag configurations conversationally, reducing context switching and speeding up audits.

Looking ahead, this workflow will extend toward a DevOps-focused approach, where developers and release engineers can prompt tools like Claude Code to identify inconsistencies or misconfigurations in feature flags across environments and take action to address them.

By embedding these capabilities directly into the development workflow, feature management becomes more operational and code-aware, enabling teams to maintain governance and reliability in real time.

For more information about the Harness MCP Server, see the Harness MCP Server documentation and the GitHub repository. If you’re brand new to Harness FME, sign up for a free trial today.
