If you're running ArgoCD and storing secrets in your Git repository, even encrypted ones, you're one compromised repo away from a bad day. This article walks through a production-grade approach to secret management using HashiCorp Vault and the External Secrets Operator (ESO), using Harbor (a container registry) with an external Redis HA cluster as a real-world example.
GitOps is built on a simple premise: Git is the source of truth. Everything declarative, everything auditable, everything reproducible. That works beautifully, until secrets enter the picture.
The naive approach is to store secrets directly in Git. This is obviously wrong. The slightly less naive approach is to encrypt them with something like Sealed Secrets or SOPS. Better, but you've now introduced a new problem: the encryption key itself needs to be managed, rotated, and protected. You've moved the problem, not solved it.
The correct approach is to never store secret values in Git at all. Store only the references to secrets. The actual values live in a dedicated secret management system, in our case, HashiCorp Vault.
ArgoCD has a plugin ecosystem, and the most popular option for Vault integration is argocd-vault-plugin (AVP). It works by replacing <placeholder> patterns in manifests at sync time with values pulled from Vault.
ArgoCD's own documentation explicitly cautions against this approach for three reasons:
Security: ArgoCD needs direct access to the secrets to inject them. More critically, ArgoCD stores generated manifests in plaintext in its Redis cache. That means your secrets are sitting in Redis, unencrypted.
Operational coupling: Secret updates are tied to ArgoCD sync operations. Rotating a secret means triggering a sync, which may also apply unrelated infrastructure changes at the same time.
Pattern incompatibility: AVP is incompatible with the Rendered Manifests pattern, which is increasingly becoming the standard for GitOps at scale.
The better architecture keeps ArgoCD completely out of the secrets business. ArgoCD syncs manifests. A separate operator (ESO) handles secret synchronization. They don't interfere with each other.
Git Repository
└── ExternalSecret manifests (references only, no values)
└── ArgoCD Application manifests
ArgoCD
└── Syncs ExternalSecret/ClusterExternalSecret CRDs to cluster
External Secrets Operator (ESO)
└── Watches ExternalSecret resources
└── Authenticates to Vault via Kubernetes auth
└── Pulls secret values from Vault KV
└── Creates/updates Kubernetes Secrets
HashiCorp Vault
└── infra/ KV v2 mount
├── harbor/config
└── redis/config
Applications (Harbor, etc.)
└── Reference Kubernetes Secrets normally
└── Never know Vault exists
The key insight: your application manifests reference ordinary Kubernetes Secrets. ESO is responsible for keeping those Secrets populated with current values from Vault. ArgoCD is responsible for keeping the ESO resources (ExternalSecret, ClusterExternalSecret) in sync with Git. Everyone does their job.
You'll need kubectl access to the cluster. The official ESO Helm chart is published at https://charts.external-secrets.io; on ArtifactHub, the package to use is external-secrets/external-secrets-operator.
Deploy it as an ArgoCD Application:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: external-secrets
  namespace: argocd
spec:
  destination:
    namespace: external-secrets
    server: https://kubernetes.default.svc
  project: deviqon
  source:
    chart: external-secrets
    repoURL: https://charts.external-secrets.io
    targetRevision: '*'
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```
This installs ESO into the external-secrets namespace and creates the necessary CRDs, including ExternalSecret, ClusterExternalSecret, SecretStore, and ClusterSecretStore.
Vault organizes secrets into mounts. Rather than dumping infrastructure secrets into an existing shared secret/ KV mount used by other teams, create a dedicated mount for infrastructure secrets.
This matters for three reasons: policy scoping is cleaner (you can write path "infra/*" instead of path "secret/data/infra/*"), mount-level settings (TTLs, audit) are independent, and you can nuke the entire mount cleanly if needed.
Via the Vault API:
curl --header "X-Vault-Token: <your-token>" \
--request POST \
--data '{"type": "kv", "options": {"version": "2"}}' \
http://vault:8200/v1/sys/mounts/infra
Or via the Vault UI: Secrets → Enable new engine → KV → path: infra → Version 2.
Mirror your ArgoCD application names exactly. This removes all ambiguity when mapping an ArgoCD app to its Vault path:
```
infra/
├── harbor/
│   └── config        # harborAdminPassword, s3 keys, etc.
├── redis/
│   └── config        # password, sentinel credentials
├── cert-manager/
│   └── config
└── ingress-nginx/
    └── config
```
Use camelCase key names that match the values.yaml keys of the Helm chart consuming them. This enables dataFrom: extract to pull all keys from a path without any explicit mapping, reducing boilerplate. For example, Harbor's Helm chart expects harborAdminPassword, so store it with that exact key in Vault.
ESO authenticates to Vault using Kubernetes service account tokens. Vault verifies these tokens against the Kubernetes TokenReview API. This eliminates the need for any static credentials.
```shell
# Enable Kubernetes auth (if not already enabled)
vault auth enable kubernetes

# Configure it; since Vault runs inside the cluster,
# it can auto-discover the CA cert and service account token
vault write auth/kubernetes/config \
  kubernetes_host="https://kubernetes.default.svc"
```
The policy defines what ESO is allowed to read. Scope it tightly to the infra/ mount only:
```hcl
# eso-infra-policy.hcl
path "infra/data/*" {
  capabilities = ["read"]
}

path "infra/metadata/*" {
  capabilities = ["read", "list"]
}
```
Note the infra/data/* path: KV v2 always stores values under a /data/ prefix internally, even though the CLI and UI hide this from you. ESO talks to the raw API, so the policy must include it.
```shell
vault policy write eso-infra-policy eso-infra-policy.hcl
```
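Because the /data/ prefix trips up so many first-time policies, it helps to spell out the mapping in plain shell. This is just an illustration using the paths from this article, not a command you run against Vault:

```shell
# KV v2 inserts /data/ between the mount name and the logical key.
mount="infra"
key="harbor/config"                # the `key` an ExternalSecret will reference
api_path="${mount}/data/${key}"    # the path ESO actually requests over the API,
                                   # and therefore the path the policy must grant
echo "$api_path"                   # prints: infra/data/harbor/config
```

The `vault kv get infra/harbor/config` CLI command hides this translation; the policy engine does not.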
The role binds a Kubernetes service account to a Vault policy:
```shell
vault write auth/kubernetes/role/eso-infra-role \
  bound_service_account_names="external-secrets" \
  bound_service_account_namespaces="external-secrets" \
  policies="eso-infra-policy" \
  ttl="1h"
```
This says: if a request comes from the external-secrets service account in the external-secrets namespace, issue a Vault token with eso-infra-policy attached.
```
ESO Pod (SA: external-secrets, NS: external-secrets)
        │
        │ presents Kubernetes JWT token + role name
        ▼
Vault Kubernetes Auth Method
        │ verifies JWT against Kubernetes TokenReview API
        │ matches SA + namespace to eso-infra-role
        │ attaches eso-infra-policy
        ▼
Vault token issued to ESO
        │
        ▼
ESO reads infra/* paths
        │
        ▼
Kubernetes Secrets created in target namespaces
```
The ClusterSecretStore is a cluster-scoped resource that tells ESO how to connect to and authenticate with Vault. It's the bridge between ESO and Vault, referenced by all ExternalSecret and ClusterExternalSecret resources.
```yaml
apiVersion: external-secrets.io/v1
kind: ClusterSecretStore
metadata:
  name: vault-infra
spec:
  provider:
    vault:
      server: "http://vault.vault.svc.cluster.local:8200"
      path: "infra"
      version: "v2"
      auth:
        kubernetes:
          mountPath: kubernetes
          role: eso-infra-role
          serviceAccountRef:
            name: external-secrets
            namespace: external-secrets
          audiences:
            - vault
```
A few things worth noting here:
- **server**: Use the full cluster-internal DNS name (`vault.vault.svc.cluster.local`). Using just `vault:8200` only works if ESO and Vault are in the same namespace, and they're not.
- **path**: This is your KV mount name (`infra`), not a full secret path.
- **namespace** on `serviceAccountRef`: Required when using ClusterSecretStore, because it's cluster-scoped and has no implicit namespace. Without it, ESO won't know where to find the service account.
- **audiences**: If set, ESO requests a Kubernetes token with this audience claim, and the Vault role must be configured with the matching audience. If you're not enforcing audience validation, omit this field entirely.
- **status.capabilities: ReadWrite**: You'll see this in the resource status. It's informational: it means ESO could write to Vault if asked (via PushSecret). Your actual permissions are enforced by the Vault policy, not this field.
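For contrast with the namespace point above: a namespaced SecretStore (not used in this setup, shown here only as a hypothetical sketch) can omit the explicit namespace on `serviceAccountRef` because the store itself lives in a namespace:

```yaml
apiVersion: external-secrets.io/v1
kind: SecretStore
metadata:
  name: vault-infra
  namespace: harbor        # the store is namespaced, so references have a default scope
spec:
  provider:
    vault:
      server: "http://vault.vault.svc.cluster.local:8200"
      path: "infra"
      version: "v2"
      auth:
        kubernetes:
          mountPath: kubernetes
          role: eso-infra-role
          serviceAccountRef:
            name: external-secrets   # no namespace field: defaults to this store's namespace
```

The cluster-scoped variant used in this article trades that implicit scoping for a single store shared by all namespaces.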
Before creating ExternalSecrets, populate Vault with the actual values.

For Harbor:

```
Path: infra/harbor/config
Keys:
  harborAdminPassword = <your-admin-password>
```

For Redis:

```
Path: infra/redis/config
Keys:
  password = <your-redis-password>
```
Harbor's admin password belongs to Harbor. Nothing else needs it. Use a standard namespaced ExternalSecret:
```yaml
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: harbor-secrets
  namespace: harbor
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-infra
    kind: ClusterSecretStore
  target:
    name: harbor-secrets
  dataFrom:
    - extract:
        key: harbor/config
```
dataFrom: extract pulls all key/value pairs from infra/harbor/config and creates them as keys in the resulting Kubernetes Secret. No explicit field mapping needed when key names match what the Helm chart expects.
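What ESO writes is an ordinary Opaque Secret: each Vault key/value pair is base64-encoded into the Secret's data map, exactly as if you had created the Secret by hand. A quick sketch of that encoding step, with a made-up value:

```shell
# 's3cret!' is a placeholder, not a real credential.
value='s3cret!'
encoded=$(printf '%s' "$value" | base64)
echo "harborAdminPassword: ${encoded}"   # prints: harborAdminPassword: czNjcmV0IQ==
```

Once ESO has synced, `kubectl get secret harbor-secrets -n harbor -o yaml` should show the same shape under `data:`.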
Redis credentials are used by multiple applications: Harbor today, potentially others tomorrow. The right resource here is ClusterExternalSecret, which creates the same secret in multiple namespaces from a single definition:
```yaml
apiVersion: external-secrets.io/v1
kind: ClusterExternalSecret
metadata:
  name: redis-secrets
spec:
  namespaces:
    - harbor
    - redis
  refreshTime: 1m
  externalSecretSpec:
    refreshInterval: 1h
    secretStoreRef:
      name: vault-infra
      kind: ClusterSecretStore
    target:
      name: redis-secrets
    dataFrom:
      - extract:
          key: redis/config
```
- **refreshTime** (on the ClusterExternalSecret): How often ESO checks that the ExternalSecret objects exist in the listed namespaces. Keep this short (1m).
- **refreshInterval** (inside externalSecretSpec): How often the secret value is actually fetched from Vault. Set this based on your rotation frequency; 1h is a reasonable default.
When you add a new application that needs Redis, add its namespace to the list. One Vault path, one ClusterExternalSecret, N namespaces.
With ESO creating the secrets, update the Harbor ArgoCD Application to reference them by name instead of embedding values:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: harbor
  namespace: argocd
spec:
  destination:
    namespace: harbor
    server: https://kubernetes.default.svc
  project: default
  source:
    chart: harbor
    helm:
      values: |-
        existingSecretAdminPassword: "harbor-secrets"
        existingSecretAdminPasswordKey: harborAdminPassword
        persistence:
          enabled: true
          persistentVolumeClaim:
            registry:
              existingClaim: "harbor-registry-pvc"
            jobservice:
              jobLog:
                existingClaim: "harbor-jobservice-pvc"
            database:
              existingClaim: "harbor-database-pvc"
            trivy:
              existingClaim: "harbor-trivy-pvc"
        database:
          internal:
            livenessProbe:
              timeoutSeconds: 10
            readinessProbe:
              timeoutSeconds: 10
        expose:
          type: ingress
          tls:
            enabled: true
            certSource: secret
            secret:
              secretName: "harbor.example.com-tls"
          ingress:
            hosts:
              core: harbor.example.com
            className: "internal-nginx"
            annotations:
              forecastle.stakater.com/expose: "true"
              forecastle.stakater.com/group: "myorg"
              cert-manager.io/cluster-issuer: example-com-issuer
              kubernetes.io/tls-acme: "true"
              nginx.ingress.kubernetes.io/ssl-passthrough: "false"
              nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
            labels: {}
        externalURL: https://harbor.example.com
        redis:
          type: external
          external:
            addr: "redis-ha-server-0.redis-ha.redis.svc.cluster.local:26379,redis-ha-server-1.redis-ha.redis.svc.cluster.local:26379,redis-ha-server-2.redis-ha.redis.svc.cluster.local:26379"
            sentinelMasterSet: "mymaster"
            coreDatabaseIndex: "10"
            jobserviceDatabaseIndex: "11"
            registryDatabaseIndex: "12"
            trivyAdapterIndex: "13"
            harborDatabaseIndex: "14"
            cacheLayerDatabaseIndex: "15"
            existingSecret: "redis-secrets"
    repoURL: https://helm.goharbor.io
    targetRevision: '*'
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```
Harbor reads existingSecretAdminPassword: "harbor-secrets" and looks for a Kubernetes Secret named harbor-secrets in its namespace. ESO created that secret from Vault. Harbor never knows where the value came from.
These are real errors encountered during this setup, in the order they appeared.
```
error: cannot get Kubernetes service account "external-secrets":
ServiceAccount "external-secrets" not found
```
Cause: ClusterSecretStore is cluster-scoped and has no implicit namespace. The serviceAccountRef field needs an explicit namespace.
Fix: Add namespace: external-secrets to serviceAccountRef:
```yaml
serviceAccountRef:
  name: external-secrets
  namespace: external-secrets   # required for ClusterSecretStore
```
```
Status: InvalidProviderConfig
Ready:  False
```
Cause: ESO validated the ClusterSecretStore config before Kubernetes auth was fully configured in Vault, or the serviceAccountRef namespace was missing (see above).
Fix: Ensure the Vault Kubernetes auth method is configured and the role exists before applying the ClusterSecretStore. After fixing config, delete and reapply the resource to force revalidation.
```
unable to log in with Kubernetes auth: Code: 500
* could not load backend configuration
```
Cause: The Kubernetes auth method was enabled in Vault but never configured. `vault auth enable kubernetes` alone is not enough; you must also run `vault write auth/kubernetes/config`.
Fix:
```shell
kubectl exec -it vault-0 -n vault -- vault write auth/kubernetes/config \
  kubernetes_host="https://kubernetes.default.svc"
```
Since Vault runs inside the cluster, it auto-discovers the CA cert and service account token. You only need to provide kubernetes_host.
Using `http://vault:8200` in the ClusterSecretStore server field only works if ESO and Vault are in the same namespace. They aren't: ESO runs in `external-secrets`, Vault in `vault`.
Fix: Use the full cluster DNS name:
```yaml
server: "http://vault.vault.svc.cluster.local:8200"
```
Format: http://<service-name>.<namespace>.svc.cluster.local:<port>
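The naming format can be sanity-checked by assembling it from its parts; this sketch uses the service and namespace names from this article:

```shell
# Build the cluster-internal DNS URL from service name, namespace, and port.
service="vault"
namespace="vault"
port="8200"
server="http://${service}.${namespace}.svc.cluster.local:${port}"
echo "$server"   # prints: http://vault.vault.svc.cluster.local:8200
```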
```
no matches for kind "ClusterSecretStore" in version "external-secrets.io/v1"
```
Cause: ESO pods started before the CRDs were fully registered. This is a race condition that happens on fresh installs.
Fix: This is transient. ESO retries automatically every 10 seconds until the CRDs are available; no action is needed, and it resolves itself within a minute.
If you configure audiences: ["vault"] in the ClusterSecretStore but the Vault role doesn't have a matching audience set, authentication will silently fail during store validation.
Fix: Either remove audiences entirely (simplest), or set the matching audience on the Vault role:
```shell
vault write auth/kubernetes/role/eso-infra-role \
  bound_service_account_names="external-secrets" \
  bound_service_account_namespaces="external-secrets" \
  policies="eso-infra-policy" \
  audience="vault" \
  ttl="1h"
```
Vault KV v2 stores data under a hidden /data/ path prefix. If your policy uses path "infra/*" instead of path "infra/data/*", reads will be denied even though the path looks correct.
Fix: Always include /data/ and /metadata/ in KV v2 policies:
```hcl
path "infra/data/*" {
  capabilities = ["read"]
}

path "infra/metadata/*" {
  capabilities = ["read", "list"]
}
```
A pattern worth establishing early: secrets are owned by the service that defines them, not the service that consumes them.
- `infra/redis/config` is owned by Redis. Harbor consumes it via ClusterExternalSecret.
- `infra/harbor/config` is owned by Harbor. Nothing else touches it.

This maps cleanly to the ESO resource types:
| Secret type | ESO resource | Example |
|---|---|---|
| App-specific | ExternalSecret (namespaced) | Harbor admin password |
| Shared across apps | ClusterExternalSecret | Redis password |
When you need to add another app that uses Redis, you add its namespace to the ClusterExternalSecret. You do not create a new secret path in Vault. One source of truth, one rotation point.
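Concretely, onboarding a hypothetical new consumer (here called `myapp`) is a one-line change to the ClusterExternalSecret shown earlier:

```yaml
spec:
  namespaces:
    - harbor
    - redis
    - myapp   # new consumer: one added line, nothing else changes
```

ESO notices the new namespace on the next refreshTime tick and materializes `redis-secrets` there.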
```
Git (safe to commit, no secret values):
├── ArgoCD Application manifests
├── ClusterSecretStore
├── ExternalSecret manifests
└── ClusterExternalSecret manifests

Vault (never in Git):
├── infra/harbor/config → harborAdminPassword = <value>
└── infra/redis/config  → password = <value>
```
If your Git repository went fully public tomorrow, nothing sensitive would be exposed.
The setup involves more moving parts than hardcoding secrets into Helm values, but the operational benefits are significant: no secrets in Git history, centralized rotation, fine-grained access policies, and a clean separation between ArgoCD (delivery) and ESO (secret lifecycle).
The ArgoCD vault plugin is the easier path initially, but the Redis cache exposure alone makes it a non-starter for production. ESO with Vault KV gives you the same developer experience (reference a secret by name in your Helm values) without the security tradeoffs.
Once the ClusterSecretStore is wired up and working, adding secrets for a new application is three steps: write the value to Vault, create an ExternalSecret manifest, commit it to Git. ArgoCD picks it up, ESO creates the Kubernetes Secret, your app gets its credentials. No plaintext anywhere.
By implementing these Vault configurations and adhering to the External Secrets Operator lifecycle, you ensure your GitOps infrastructure operates exactly as securely as designed, with zero plaintext exposure. If your project faces a unique architectural challenge, our senior specialists are ready to help.