February 10, 2026

Argo CD Install: Helm-Based Setup for Enterprise DevOps Teams | Harness Blog

Install Argo CD with Helm and pinned versions to ensure repeatable setups and predictable upgrades. Secure it with a stable hostname, SSO, least-privilege RBAC, and AppProjects so multiple teams can safely adopt GitOps. Running Argo CD as platform infrastructure with HA, monitoring, backups, and staged upgrades keeps deployments reliable at scale.

What You’re Installing (and Why Enterprises Standardize on Argo CD)

Argo CD is a Kubernetes-native continuous delivery controller that follows GitOps principles: Git is the source of truth, and Argo CD continuously reconciles what’s running in your cluster with what’s declared in Git.

That pull-based reconciliation loop is the real shift. Instead of pipelines pushing manifests into clusters, Argo CD runs inside the cluster and pulls the desired state from Git (or Helm registries) and syncs it to the cluster. The result is an auditable deployment model where drift is visible and rollbacks are often as simple as reverting a Git commit.

For enterprise teams, Argo CD becomes shared platform infrastructure. And that changes what “install” means. Once Argo CD is a shared control plane, availability, access control, and upgrade safety matter as much as basic deployment correctness because failures impact every team relying on GitOps.

What It Means In An Enterprise

A basic install is “pods are running.” An enterprise install is:

  • Secure access (SSO + least-privilege RBAC)

  • Safe multi-team usage (AppProjects guardrails, predictable onboarding)

  • Stable operations (monitoring, backups, upgrades)

  • Repeatability (version pinning, Git-driven configuration)

Argo CD can be installed in two ways: as a “core” (headless) install for cluster admins who don’t need the UI/API server, or as a multi-tenant install, which is common for platform teams. Multi-tenant is the default for most enterprise DevOps organizations rolling out GitOps across many teams.

Setup Prerequisites

Before you start your Argo CD install, make sure the basics are in place. You can brute-force a proof of concept with broad permissions and port-forwarding. But if you’re building a shared service, doing a bit of prep up front saves weeks of rework.

Cluster Prerequisites

  • A Kubernetes cluster you can administer (or at least create namespaces and cluster-scoped resources).

  • Network path to the API server from your workstation/CI environment.

  • A plan for ingress and TLS (internal-only is fine, just decide early).

Workstation Tools

  • kubectl configured for the target cluster

  • helm (recommended approach)

  • Optional: argocd CLI (useful for scripting and verification)

Platform Prerequisites to Confirm

  • Ingress controller availability (NGINX, AWS Load Balancer Controller, Traefik, etc.), or a willingness to use a cloud LoadBalancer.

  • DNS for a stable Argo CD hostname (even if internal).

  • Certificate strategy for TLS (cert-manager, corporate PKI, or managed certs).

If your team is in a regulated environment, align on these early:

  • Where Argo CD secrets will live (Kubernetes Secrets vs external secrets tooling)

  • Audit requirements (SSO provider logs, Kubernetes audit logs, etc.)

  • Network restrictions (private clusters, egress policies)

Decide Your Argo CD Installation Approach

Argo CD install choices aren’t about “works vs doesn’t work.” They’re about how you want to operate Argo CD a year from now.

Helm vs. Upstream Manifests

Helm (recommended for enterprise):

  • Repeatable installs across environments (dev/stage/prod)

  • Easy upgrades via version pinning

  • Values-driven configuration you can store in Git

Upstream manifests:

  • Fast and close to upstream defaults

  • Great for evaluation or a quick validation environment

  • Less structured change management unless you wrap it in GitOps

If your Argo CD instance is shared across teams, Helm usually wins because version pinning, values-driven configuration, and repeatable upgrades are easier to audit, roll back, and operate safely over time.

Single Instance vs. Multiple Instances

Enterprises often land in one of these models:

  • One shared Argo CD instance per cluster (common in platform teams)

  • One shared instance managing multiple clusters (central GitOps control plane)

  • Multiple instances (per business unit or compliance boundary)

As a rule: start with one shared instance and use guardrails (RBAC + AppProjects) to keep teams apart. Add instances only when you really need to (for example, because of regulatory separation, disconnected environments, or blast-radius requirements).

When Argo CD is a shared dependency, high availability (HA) matters. If every team depends on Argo CD to deploy, a single-replica Argo CD server becomes both a bottleneck and a single point of failure that pages your platform team.

How You’ll Expose Argo CD

There are three common access patterns:

  • Port-forward (setup only): safest for a first login, not an enterprise default.

  • Ingress (most common): use your standard ingress + TLS termination.

  • LoadBalancer service: simple in cloud environments, but can increase cost and widen exposure.

For most enterprise teams, the sweet spot is Ingress + TLS + SSO, with internal-only access unless your operating model demands external access.

Install Argo CD Using Helm (Step-by-Step)

If you’re building Argo CD as a shared service, Helm gives you the cleanest path to versioned, repeatable installs.

Step 1: Add the Helm Repo and Pin a Version

helm repo add argo https://argoproj.github.io/argo-helm
helm repo update

# Optional: list available versions so you can pin one
helm search repo argo/argo-cd --versions | head -n 10

In enterprise environments, “latest” isn’t a strategy. Pin a chart version so you can reproduce your install and upgrade intentionally.

Step 2: Create the argocd Namespace

kubectl create namespace argocd

Keeping Argo CD isolated in its own namespace simplifies RBAC, backup scope, and day-2 operations.

Step 3: Export Default Values and Make Minimal Enterprise Edits

Start by pulling the chart’s defaults:

helm show values argo/argo-cd > values.yaml

Then make the minimum changes needed to match your access model. Many tutorials demonstrate NodePort because it’s easy, but most enterprises should standardize on Ingress + TLS.

Here’s a practical starting point (adjust hostnames, ingress class, and TLS secret to match your environment):

# values.yaml (example starter)
global:
  domain: argocd.example.internal

configs:
  params:
    # Common when TLS is terminated at an ingress or load balancer.
    server.insecure: "true"

server:
  ingress:
    enabled: true
    ingressClassName: nginx
    hosts:
      - argocd.example.internal
    tls:
      - secretName: argocd-tls
        hosts:
          - argocd.example.internal

# Baseline resource requests to reduce noisy-neighbor issues.
controller:
  resources:
    requests:
      cpu: 200m
      memory: 512Mi

repoServer:
  resources:
    requests:
      cpu: 200m
      memory: 512Mi

This example focuses on access configuration and baseline resource isolation. In most enterprise environments, teams also explicitly manage RBAC policies, NetworkPolicies, and Redis high-availability decisions as part of the Argo CD platform configuration.

If your clusters can’t pull from public registries, you’ll need to mirror Argo CD and dependency images (Argo CD, Dex, Redis) into an internal registry and override chart values accordingly.

Step 4: Install (Or Upgrade) Argo CD

Use helm upgrade --install with an explicit chart version so your install and upgrade command is one consistent, pinned operation.

helm upgrade --install argocd argo/argo-cd \
  --namespace argocd \
  --version <pinned-chart-version> \
  --values values.yaml

Validate that core components are healthy:

kubectl get pods -n argocd
kubectl get svc -n argocd
kubectl get ingress -n argocd

If something is stuck, look at events:

kubectl get events -n argocd --sort-by=.lastTimestamp | tail -n 30

Step 5: Confirm the Installation Shape (What’s Running)

Most installs include these core components:

  • argocd-server (UI/API)

  • argocd-repo-server (fetches repos and renders manifests)

  • argocd-application-controller (reconciliation)

  • ApplicationSet Controller (optional but common at scale)

  • Dex (if enabled for SSO integration)

  • Redis (caching and coordination)

Knowing what each component does helps you troubleshoot quickly when teams start scaling usage.

Access the Argo CD UI and First Login

Your goal is to get a clean first login and then move toward enterprise access (Ingress + TLS + SSO).

Option 1: Port-Forward (Best for Initial Setup)

kubectl port-forward -n argocd svc/argocd-server 8080:443

Then open https://localhost:8080.

It’s common to see an SSL warning because Argo CD ships with a self-signed cert by default. For a quick validation, proceed. For enterprise usage, use real TLS via your ingress/load balancer.

Option 2: Ingress (Enterprise Default)

Once DNS and TLS are wired:

  • Browse to https://argocd.example.internal
  • Confirm you’re hitting the ingress you expect (and that TLS is correct)

If your ingress terminates TLS at the edge, running the Argo CD API server with TLS disabled behind it (for example, server.insecure: "true") is a common pattern.

Get the Initial Admin Password

Default username is typically admin. Retrieve the password from the initial secret:

kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 --decode; echo
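The secret stores the password base64-encoded, so the only thing the jsonpath pipeline does beyond fetching the field is decode it. The decode step in isolation looks like this (dummy value shown, not a real credential):

```shell
# Base64-decode a value the same way the command above does.
# "cGFzc3dvcmQxMjM=" is a made-up example, not a real secret.
echo "cGFzc3dvcmQxMjM=" | base64 --decode
# → password123
```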

After you’ve logged in and set a real admin strategy using SSO and RBAC, the initial admin account should be treated as a break-glass mechanism only. Disable or tightly control its use, rotate credentials, and document when and how it is allowed.

Install Argo CD Using Upstream Manifests (Fast Path for Evaluation)

If you want a quick Argo CD install for learning or validation, upstream manifests get you there fast.

Important context: the standard install.yaml manifest is designed for same-cluster deployments and includes cluster-level privileges. It’s also the non-HA flavor, typically used for evaluation rather than production. If you need a more locked-down or resilient footprint, Argo CD also publishes namespace-scoped and HA variants of the upstream manifests.

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

Validate:

kubectl get pods -n argocd
kubectl get svc -n argocd

Then port-forward to access the UI:

kubectl port-forward -n argocd svc/argocd-server 8080:443

Use admin plus the password from argocd-initial-admin-secret as shown in the prior section.

For enterprise rollouts, treat manifest installs as a starting point. If you’re standardizing Argo CD across environments, Helm is easier to control and upgrade.

Deploy Your First Application With Argo CD

A real install isn’t “pods are running.” A real install is “we can deploy from Git safely.” This quick validation proves:

  • repo access works

  • sync works

  • drift shows up

  • rollbacks are Git-driven

Step 1: Pick a Simple Repo Layout

Keep it boring and repeatable. For example:

apps/
  guestbook/
    base/
    overlays/
      dev/
      prod/

Or, if you deploy with Helm:

apps/
  my-service/
    chart/
    values/
      dev.yaml
      prod.yaml

Step 2: Create an AppProject (The Enterprise Guardrail)

Even for a test app, start with the guardrail. AppProjects define what a team is allowed to deploy, and where.

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-sandbox
  namespace: argocd
spec:
  description: "Sandbox boundary for initial validation"
  sourceRepos:
    - "https://github.com/argoproj/argocd-example-apps.git"
  destinations:
    - namespace: sandbox
      server: https://kubernetes.default.svc
  namespaceResourceWhitelist:
    - group: "apps"
      kind: Deployment
    - group: ""
      kind: Service
    - group: "networking.k8s.io"
      kind: Ingress

Apply it:

kubectl apply -f appproject-sandbox.yaml

Step 3: Create an Application and Sync

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: team-sandbox
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    targetRevision: HEAD
    path: guestbook
  destination:
    server: https://kubernetes.default.svc
    namespace: sandbox
  syncPolicy:
    automated:
      selfHeal: true
      prune: false
    syncOptions:
      - CreateNamespace=true

Note: In many enterprise environments, namespace creation is restricted to platform workflows or Infrastructure as Code pipelines. If that applies to your organization, remove CreateNamespace=true and require namespaces to be provisioned separately.

Apply it:

kubectl apply -f application-guestbook.yaml

Now confirm:

  • The app shows up in the UI

  • It syncs successfully

  • If you change something manually in-cluster, the app becomes OutOfSync

  • If you revert Git, syncing takes you back to the previous state
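If you installed the optional argocd CLI, the same validation can be scripted. This is a sketch against the guestbook example above, and it assumes you have already authenticated with argocd login against your instance:

```shell
# Inspect sync status and health for the example app
argocd app get guestbook

# Trigger a sync manually (automated sync also covers this)
argocd app sync guestbook

# View the deployment history that backs Git-driven rollbacks
argocd app history guestbook
```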

Optional: Add Git Webhooks for Faster Sync

By default, Argo CD polls repos periodically. Many teams configure webhooks (GitHub/GitLab) so Argo CD can refresh and sync quickly when changes land. It’s not required for day one, but it improves feedback loops in active repos.
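As a sketch of the wiring for GitHub: point the repository’s webhook at https://<your-argocd-host>/api/webhook and store the shared secret in argocd-secret so Argo CD can verify payload signatures. The secret value below is a placeholder; the key name follows Argo CD’s webhook documentation:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: argocd-secret
  namespace: argocd
stringData:
  # Must match the secret configured on the GitHub webhook itself.
  webhook.github.secret: replace-with-a-random-string
```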

Secure Access and Multi-Team Guardrails

This is where most enterprise rollouts either earn trust or lose it. If teams don’t trust the platform, they won’t onboard their workloads.

Focus on these enterprise minimums:

  • SSO first: your identity provider should be the source of truth.

  • Least privilege: app teams deploy only to approved namespaces/clusters.

  • Guardrails as code: AppProjects prevent accidental cross-team deploys.

Practical rollout order:

  1. Establish a stable hostname (so SSO callbacks are consistent).

  2. Configure SSO (OIDC or SAML) and group mapping.

  3. Apply RBAC aligned to roles (platform admin, app owner, read-only).

  4. Define AppProjects for team boundaries.
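As a sketch, steps 2 through 4 map to Helm chart values along these lines. The issuer URL, client ID, and group names are placeholders for your identity provider; the clientSecret reference points at a key you would store in argocd-secret:

```yaml
configs:
  cm:
    # OIDC against your identity provider (placeholder issuer/client).
    oidc.config: |
      name: SSO
      issuer: https://sso.example.internal
      clientID: argocd
      clientSecret: $oidc.clientSecret
  rbac:
    # Everyone authenticated gets read-only unless a policy grants more.
    policy.default: role:readonly
    policy.csv: |
      g, platform-admins, role:admin
      p, role:team-a-deployer, applications, sync, team-a/*, allow
      g, team-a, role:team-a-deployer
```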

Break-glass access should exist, but it should be documented, auditable, and rare.

Production Hardening and Day-2 Operations

Enterprise teams don’t struggle because they can’t install Argo CD. They struggle because Argo CD becomes a shared dependency—and shared dependencies need operational maturity.

High Availability and Scaling

At scale, pressure points are predictable:

  • argocd-server: UI/API and auth flows

  • repo-server: Git/Helm fetches, rendering, and caching

  • application-controller: reconciliation across many apps/clusters

Plan a path to HA before you onboard many teams. If HA Redis is part of your design, validate node capacity so workloads can spread across failure domains.

Monitoring and Alerting

Keep monitoring simple and useful:

  • Argo CD API availability

  • sync failure rate

  • number of degraded apps

  • reconciliation lag (are apps taking too long to converge?)
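If you scrape Argo CD’s Prometheus metrics, alert rules along these lines cover several of those signals. The metric names and labels come from Argo CD’s metrics endpoints; the durations and thresholds are illustrative starting points:

```yaml
groups:
  - name: argocd
    rules:
      # Apps stuck out of sync for a sustained period
      - alert: ArgoCDAppOutOfSync
        expr: argocd_app_info{sync_status="OutOfSync"} == 1
        for: 15m
      # Any failed sync operations in the last 10 minutes
      - alert: ArgoCDSyncFailed
        expr: sum(increase(argocd_app_sync_total{phase="Failed"}[10m])) > 0
      # Applications reporting Degraded health
      - alert: ArgoCDAppDegraded
        expr: argocd_app_info{health_status="Degraded"} == 1
        for: 15m
```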

Also, decide alert ownership and escalation paths early: the platform team owns Argo CD availability and control-plane health, while app teams own application-level sync and runtime issues within their defined boundaries.

Backups, Restore Tests, and DR

Git is the source of truth for desired state, but you still need to recover platform configuration quickly.

Backup:

  • argocd namespace ConfigMaps and Secrets

  • Argo CD custom resources (Applications, AppProjects, ApplicationSets)

Then run restore tests on a schedule. The goal isn’t perfection—it’s proving you can regain GitOps control safely.
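One common approach is Argo CD’s built-in export/import. This is a sketch, assuming a logged-in argocd CLI with access to the argocd namespace; the filename is a placeholder, and in practice you would run the export from a scheduled job and ship the output to durable storage:

```shell
# Export Argo CD state (Applications, AppProjects, settings) to a file
argocd admin export -n argocd > argocd-backup.yaml

# Restore into a recovered cluster from that file (reads from stdin)
argocd admin import -n argocd - < argocd-backup.yaml
```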

Upgrade Strategy

A safe enterprise approach:

  1. Pin chart/app versions and document what’s running.

  2. Stage upgrades in a non-production environment.

  3. Validate core workflows after upgrade (login, repo access, sync).

  4. Promote the same change through environments.

Avoid “random upgrades.” Treat Argo CD as platform infrastructure with controlled change management.

Argo CD Install on EKS: Enterprise Notes

Argo CD works well on EKS, but enterprise teams often have extra constraints: private clusters, restricted egress, and standard AWS ingress patterns.

Common installation approaches on EKS:

  • Manual install (Helm/manifests): direct control; easiest if you already have platform standards.

  • Terraform: repeatable infrastructure and bootstrap.

  • EKS Blueprints: a structured AWS-aligned framework for adding platform components.

For access, most EKS enterprise teams standardize on an ingress backed by AWS Load Balancer Controller (ALB) or NGINX, with TLS termination at the edge.
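As a sketch, an ALB-backed, internal-only exposure via the Helm chart might look like this. The annotation keys come from the AWS Load Balancer Controller; the hostname and certificate ARN are placeholders for your environment:

```yaml
server:
  ingress:
    enabled: true
    ingressClassName: alb
    annotations:
      alb.ingress.kubernetes.io/scheme: internal
      alb.ingress.kubernetes.io/target-type: ip
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
      alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:<region>:<account-id>:certificate/<cert-id>
    hosts:
      - argocd.example.internal

configs:
  params:
    # TLS terminates at the ALB, so the API server runs without TLS behind it.
    server.insecure: "true"
```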

Making Argo CD Production-Ready

An enterprise-grade Argo CD install is less about getting a UI running and more about putting the right foundations in place: a repeatable deployment method (typically Helm), a stable endpoint for access and SSO, and clear boundaries so teams can move fast without stepping on each other. If you take away one thing, make it this: treat Argo CD like shared platform infrastructure, not a one-off tool.

Start with a pinned, values-driven Helm install. Then lock in the enterprise minimums (SSO, RBAC, and AppProjects) before you onboard your second team. Finally, operationalize it with monitoring, backups, and a staged upgrade process so Argo CD stays reliable as your cluster and application footprint grows.

When you need orchestration, approvals, and progressive delivery across complex releases, pair GitOps with Harness CD. Request a demo.

Argo CD Installation: Frequently Asked Questions (FAQs)

These are quick answers to the most common questions enterprise teams have when they install Argo CD.

What’s the best way to install Argo CD for production?

Most enterprise teams should use Helm to install Argo CD because it lets you pin versions, keep configuration in Git, and upgrade in a predictable way. Upstream manifests are a fast way to evaluate Argo CD before you commit to a platform rollout.

How can we safely expose Argo CD?

Use an internal hostname, terminate TLS at your ingress/load balancer, and require SSO for interactive access. Don’t expose Argo CD publicly unless your operating model truly requires it.

What is the safest way to upgrade Argo CD?

Pin your chart/app versions, test upgrades in a non-production environment, and then move the same change to other environments. After the upgrade, check that you can log in, access the repo, and sync with a real app.

What’s the right model for multi-team access?

Use RBAC and AppProjects to enforce boundaries on a single shared instance. App teams should deploy only from approved repos to approved namespaces and clusters.

How do we back up and restore Argo CD?

Back up the argocd namespace (ConfigMaps, Secrets, and CRs) and keep app definitions in Git. Run restore tests on a schedule so recovery steps are proven, not theoretical.

Dewan Ahmed

Dewan Ahmed is a Principal Developer Advocate at Harness, a company that aims to enable every software engineering team in the world to deliver code reliably, efficiently, and quickly to their users. Before joining Harness, he worked at IBM, Red Hat, and Aiven as a developer, QA lead, consultant, and developer advocate. For the last fifteen years, Dewan has worked to solve DevOps and infrastructure problems for small startups, large enterprises, and governments. Starting his public speaking at a Toastmasters club in 2016, he has been speaking at tech conferences and meetups for the last ten years. His work is fueled by a passion for open-source and a deep respect for the tech community. Dewan writes about app/data infrastructure, developer advocacy, and his thoughts around a career in tech on his personal blog. Outside of work, he’s an advocate for underrepresented groups in tech and offers pro bono career coaching as his way of giving back.
