Background
Recently, I decided to set up a Kubernetes environment on my old MacBook Air to host my personal web service. As someone new to Kubernetes, I quickly realized that managing secrets properly is one of the most critical yet challenging aspects of running a production-ready cluster.
During my research, I discovered that while Kubernetes does provide a built-in Secret mechanism, it's not sufficient for production use cases. This led me down the path of exploring external secret management solutions, ultimately landing on HashiCorp Vault combined with External Secrets Operator (ESO).
This post documents my journey and serves as a practical guide for anyone looking to implement a secure, production-grade secret management solution in Kubernetes.
Why Use Vault?
For anyone new to Kubernetes, you may ask: "Why do we need to install Vault? Doesn't Kubernetes already support secrets?" The short answer is yes, Kubernetes does provide a basic secret storage solution, but it's too basic and unsafe for production environments. Let me show you an example to explain why we need Vault or another external secret engine.
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  username: YWRtaW4= # admin
  password: c3VwZXJzZWNyZXQ= # supersecret

The problem? Kubernetes stores these secrets in etcd using only base64 encoding. Base64 is not encryption - it's just encoding. Anyone with access to etcd or the secret manifest can easily decode it:
echo "YWRtaW4=" | base64 -d
# Output: admin

This is extremely dangerous! That's why you need to enable "Encryption at Rest" if you're using Kubernetes Secrets. After enabling it, the storage path changes:
from: Secret → base64 → store to etcd
to: Secret → AES encrypt → store to etcd

So if Encryption at Rest solves the storage problem, why do we still need Vault? Fair question! For small teams or personal projects, Kubernetes Secrets with Encryption at Rest might be sufficient. However, Vault provides enterprise-grade features that go far beyond just encrypting data at rest:
- Dynamic Secret Generation: Vault can generate database credentials, API keys, and certificates on-demand with automatic expiration. This means secrets are short-lived and automatically rotated, reducing the risk window if credentials are compromised.
- Advanced Access Control: While Kubernetes uses RBAC (Role-Based Access Control), Vault's policy-based access control is more granular and flexible. You can define policies like "App A can only read secrets from path X between 9 AM - 5 PM" or "Service B can generate MySQL credentials but only for 1 hour."
- Centralized Secret Management: If you're running multiple Kubernetes clusters or have services outside of Kubernetes (VMs, cloud functions, etc.), Vault provides a single source of truth for all your secrets across your entire infrastructure.
- Audit Logging: Vault maintains detailed audit logs of who accessed which secrets and when, which is crucial for compliance requirements.
- Secret Versioning and Rollback: Unlike Kubernetes Secrets, Vault keeps a full version history and allows you to rollback to previous versions if needed.
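For reference, the Encryption at Rest mentioned earlier is configured by pointing the kube-apiserver at an EncryptionConfiguration file via its --encryption-provider-config flag. A minimal sketch (the key value here is a placeholder, not a real key):

```yaml
# Minimal EncryptionConfiguration sketch for kube-apiserver
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # aescbc encrypts any Secret written after this config is applied
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>   # placeholder
      # identity (plaintext) lets the API server still read Secrets
      # that were stored before encryption was enabled
      - identity: {}
```

Existing Secrets are only re-encrypted when they are rewritten, so a common follow-up is replacing all Secrets once after enabling this.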
From an operational perspective, these capabilities are why most companies choose Vault over the native Kubernetes secret solution for production workloads.
Solutions
After deciding to use Vault, we need to understand how to integrate it with Kubernetes. There are two main approaches:
- Vault Sidecar Injection (Vault Agent Injector)
- External Secrets Operator (ESO)
What is a Kubernetes Operator?
A Kubernetes Operator is a software extension that uses Custom Resources to manage applications and their components. Think of it as an automated
administrator that continuously monitors and manages specific resources in your cluster.
Comparison: Sidecar Injection vs External Secrets Operator
| Category | Sidecar Injection | External Secrets Operator |
|---|---|---|
| Secret Location | Only in Vault + Pod memory | Stored in K8s Secret (etcd) |
| Security Level | Higher | Medium |
| etcd Exposure | Not stored in etcd | Stored in etcd |
| GitOps Friendly | Medium | Very |
| Resource Overhead | Extra container per Pod | One controller only |
| App Code Change | None | None |
| Secret Rotation | Automatic live update | Depends on refresh interval |
| Startup Complexity | Higher | Lower |
| Debugging | Harder | Easier |
| Vault Coupling | Tight | Loose |
| Multi-cluster Scaling | Harder | Easier |
| Performance Impact | More overhead | Minimal |
How Each Solution Works
Sidecar Injection:
- Vault Agent runs as a sidecar container alongside your application pod
- Secrets are fetched from Vault and written to a shared volume at /vault/secrets/...
- Your application reads secrets directly from these files
- Secrets NEVER touch etcd
- Best for: Highly regulated environments (financial systems, healthcare, government)
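Although we won't use the sidecar approach in this post, it is driven entirely by pod annotations. A sketch of what an injected pod template might look like (the role name and secret path are illustrative):

```yaml
# Pod template annotations for the Vault Agent Injector (illustrative values)
spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"      # enable sidecar injection
        vault.hashicorp.com/role: "myapp-role"        # Vault role to authenticate as
        # render secret/data/myapp/config to /vault/secrets/config in the pod
        vault.hashicorp.com/agent-inject-secret-config: "secret/data/myapp/config"
```

The injector's mutating webhook sees these annotations and adds the Vault Agent container to the pod automatically.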
External Secrets Operator:
- ESO controller watches ExternalSecret custom resources
- Fetches secrets from Vault and syncs them into native Kubernetes Secrets
- Secrets are stored in etcd (encrypted if Encryption at Rest is enabled)
- Applications consume secrets as standard Kubernetes Secrets (env vars or mounted volumes)
- Best for: Developer-friendly environments, SaaS systems, teams using GitOps
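To make the "standard Kubernetes Secrets" consumption model concrete, here is a sketch of a container pulling a synced Secret in as environment variables (the names myapp and myapp-secret are illustrative):

```yaml
# Container spec consuming an ESO-synced Secret as env vars (sketch)
containers:
  - name: myapp
    image: myapp:latest
    envFrom:
      - secretRef:
          name: myapp-secret   # the Kubernetes Secret that ESO keeps in sync
```

The application never talks to Vault; it just reads environment variables, which is why ESO is so developer-friendly.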
When to Choose External Secrets Operator
Choose ESO if you:
- Use ArgoCD or other GitOps tools
- Want simplicity and faster onboarding for developers
- Manage many microservices
- Prefer Kubernetes-native secret UX (developers don't need to know about Vault)
In this post we will use ESO together with Vault to build the complete external secret solution. The data flow looks like the following:
Vault (Encrypted storage)
↓
ESO read the secret and sync
↓
Kubernetes Secret
↓
etcd
↓
Pod

Installation
Step 1: Install Vault
We'll use the official HashiCorp Helm chart to install Vault in standalone mode.
$ helm repo add hashicorp https://helm.releases.hashicorp.com
$ helm repo update
$ kubectl create namespace vault
$ helm install vault hashicorp/vault \
--namespace vault \
--set "server.dev.enabled=false" What this does:
- server.dev.enabled=false ensures Vault runs in standalone mode (not dev mode)
- Standalone mode uses persistent storage and requires manual initialization/unsealing
- Dev mode is insecure (auto-unseals, stores data in memory) and should only be used for local testing.
Verify Installation
kubectl get pods -n vault
# Expected output:
# NAME READY STATUS RESTARTS AGE
# vault-0   0/1     Running   0          30s

Note: The pod shows 0/1 READY because Vault starts in a sealed state. This is a security feature - Vault needs to be initialized and unsealed before it can serve requests.
Step 2: Initialization + Unseal
When Vault first starts, it's in a sealed state. This is a critical security feature:
Sealed = Locked
- Vault's encryption keys are themselves encrypted
- Cannot decrypt any data
- All API operations blocked (except unseal and status)
Unsealed = Unlocked
- Encryption keys are available in memory
- Can read / write secrets
- Fully operational
Why seal? Even if an attacker gains access to Vault's storage (disk/etcd), they cannot read any secrets without the unseal keys.
Running operator init prints the unseal keys and the root token. Store them somewhere safe outside the cluster: if you lose them, you must delete Vault's storage and reinitialize it, and all stored secrets will be lost.
# login to the vault pod
$ kubectl exec -it -n vault vault-0 -- sh
$ vault operator init

Vault generates:
- 5 Unseal Keys - Using Shamir's Secret Sharing algorithm
- 1 Root Token - The initial admin token with full permissions
Unsealing requires providing 3 different unseal keys. Run this command 3 times:
$ vault operator unseal
# After the third key, Vault is unsealed and ready to use!
$ vault login <root-token>
$ vault status
Key Value
--- -----
Seal Type shamir
Initialized true
Sealed false
Total Shares 5
Threshold 3
Version 1.21.2
Build Date 2026-01-06T08:33:05Z
Storage Type file
Cluster Name vault-cluster-ac3113bf
Cluster ID 629983de-2ee2-1ad6-35d2-43773d93efca
HA Enabled               false

You should double-check that the status is correct (Initialized: true, Sealed: false).
Step 3: Install External Secrets Operator
Now that Vault is running, we need to install the External Secrets Operator (ESO) which will sync secrets from Vault into Kubernetes Secrets.
We'll use the official ESO Helm chart:
$ helm repo add external-secrets https://charts.external-secrets.io
$ helm repo update
$ kubectl create namespace external-secrets
$ helm install external-secrets external-secrets/external-secrets \
  -n external-secrets

Verify Installation
$ kubectl get pods -n external-secrets
# Expected output (wait until all pods are Running):
# NAME READY STATUS RESTARTS AGE
# external-secrets-xxxxxxxxxx-xxxxx 1/1 Running 0 60s
# external-secrets-cert-controller-xxxxxxxxxx-xxxxx 1/1 Running 0 60s
# external-secrets-webhook-xxxxxxxxxx-xxxxx           1/1     Running   0          60s

ESO deploys three components:
- external-secrets: The main controller that watches ExternalSecret resources and syncs secrets from Vault into Kubernetes Secrets.
- external-secrets-cert-controller: Manages the TLS certificates used by the webhook server.
- external-secrets-webhook: A validating/mutating webhook that validates ESO custom resources before they are accepted by the Kubernetes API server.
Step 4: Enable the Secret Engine in Vault
Before enabling the secret engine, you may ask: what does KV mean, and what's the difference between v1 and v2?
KV stands for Key-Value, and it's the most fundamental secret engine in Vault — essentially a secure key-value store. There are two versions:
- KV v1: Simple key-value storage. No version history, no rollback. Overwriting a secret destroys the previous value permanently.
- KV v2: Adds versioning and soft delete. Every write creates a new version, and deleted secrets can be recovered. This is the recommended choice for production.
Log in to the Vault pod and enable the KV v2 engine at the path secret/:
$ kubectl exec -it -n vault vault-0 -- sh
$ vault login <root-token>
$ vault secrets enable -path=secret kv-v2
# Success! Enabled the kv-v2 secrets engine at: secret/

The -path=secret flag defines the mount path — this is the prefix you'll use when reading and writing secrets (e.g., `secret/myapp/config`). You can name it anything, but secret is the conventional default.
Now write a test secret to verify the engine is working:
$ vault kv put secret/myapp/config username="admin" password="supersecret"
# == Secret Path ==
# secret/data/myapp/config
#
# ======= Metadata =======
# Key Value
# --- -----
# created_time 2026-04-25T00:00:00.000000000Z
# custom_metadata <nil>
# deletion_time n/a
# destroyed false
# version            1

Read it back to confirm:
$ vault kv get secret/myapp/config
# == Secret Path ==
# secret/data/myapp/config
#
# ======= Data =======
# Key Value
# --- -----
# password supersecret
# username    admin

Notice that Vault internally stores the secret under secret/data/myapp/config (it inserts /data/ automatically for KV v2). This matters later when you configure ESO to reference the path.
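To make that path rewrite concrete, here is a tiny illustrative shell helper (not part of Vault itself) that maps a KV v2 logical path to the API path Vault actually uses:

```shell
# Illustrative only: show how KV v2 inserts /data/ after the mount path
kv2_api_path() {
  mount="${1%%/*}"    # first path segment is the mount (e.g. "secret")
  rest="${1#*/}"      # the remainder is the secret's key path
  printf '%s/data/%s\n' "$mount" "$rest"
}

kv2_api_path secret/myapp/config
# secret/data/myapp/config
```

This is why a policy for KV v2 secrets must reference secret/data/... rather than secret/..., as we'll see in Step 6.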
Step 5: Enable Kubernetes Auth
Before moving forward, it's worth understanding why this step exists at all.
When ESO needs to fetch a secret from Vault, it must first prove its identity - Vault doesn't just hand out secrets to anyone. The Kubernetes
auth method lets Vault trust Kubernetes ServiceAccount tokens as a form of identity. Here's the flow:
ESO Pod
→ presents its ServiceAccount JWT to Vault
→ Vault calls Kubernetes TokenReview API to validate the JWT
→ Kubernetes confirms: "yes, this is the external-secrets ServiceAccount"
→ Vault grants access based on the bound role

For Vault to call the Kubernetes TokenReview API, it needs three things:
- kubernetes_host – the address of the Kubernetes API server.
- token_reviewer_jwt – a ServiceAccount token that Vault itself uses to call the TokenReview API.
- kubernetes_ca_cert – the CA certificate to verify the Kubernetes API server's TLS certificate.
Fortunately, every pod in Kubernetes (including the Vault pod) automatically has all three mounted at /var/run/secrets/kubernetes.io/serviceaccount/, so we can read them directly from inside the Vault pod.
First, enable the Kubernetes auth method:
$ kubectl exec -it -n vault vault-0 -- sh
$ vault login <root-token>
$ vault auth enable kubernetes
# Success! Enabled kubernetes auth method at: kubernetes/

Then configure it:
$ vault write auth/kubernetes/config \
token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
kubernetes_host="https://kubernetes.default.svc:443" \
kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
# Success! Data written to: auth/kubernetes/config

Breaking down each parameter:
- token_reviewer_jwt: The ServiceAccount JWT of the Vault pod itself. Vault uses this token to authenticate against the Kubernetes API when it calls TokenReview to validate incoming requests.
- kubernetes_host: The cluster-internal address of the Kubernetes API server. kubernetes.default.svc is the stable DNS name that resolves to the API server from within the cluster.
- kubernetes_ca_cert: The CA certificate used to verify the TLS certificate presented by the Kubernetes API server. The @ prefix tells Vault to read the value from a file path.
Verify the configuration was applied:
$ vault read auth/kubernetes/config
# Key Value
# --- -----
# disable_iss_validation true
# disable_local_ca_jwt false
# issuer n/a
# kubernetes_host https://kubernetes.default.svc:443
# pem_keys                  []

Step 6: Create the Vault Policy
A Vault policy defines what a token is allowed to do – which paths it can access and what operations it can perform (read, write, list, delete, etc.). Without a policy, even an authenticated token has zero permissions.
ESO only needs to read secrets from Vault, so we'll create a minimal read-only policy scoped to the secret/ path we enabled in Step 4.
Still inside the Vault pod, create a policy file and apply it:
$ vault policy write eso-policy - <<EOF
path "secret/data/*" {
capabilities = ["read"]
}
path "secret/metadata/*" {
capabilities = ["read", "list"]
}
EOF
# Success! Uploaded policy: eso-policy

Breaking down the two path rules:
- secret/data/*: This is where the actual secret values live in KV v2. The read capability lets ESO fetch any secret under this mount.
- secret/metadata/*: This is where KV v2 stores version metadata. list lets ESO enumerate secrets; read lets it inspect version history. ESO needs this to detect when a secret has been updated.
Verify the policy was created:
$ vault policy read eso-policy
# path "secret/data/*" {
# capabilities = ["read"]
# }
#
# path "secret/metadata/*" {
# capabilities = ["read", "list"]
# }

Step 7: Create the Vault Role (Binding to the ServiceAccount)
We have a policy (eso-policy) that defines what ESO is allowed to do, but Vault still doesn't know who is allowed to use it. That's what a Vault role does – it binds a Kubernetes ServiceAccount to a policy, completing the trust chain:
Kubernetes ServiceAccount → Vault Role → Vault Policy → Secret Access

First, let's confirm the ServiceAccount that ESO created during installation:
$ kubectl get serviceaccount -n external-secrets
# NAME SECRETS AGE
# external-secrets 0 10m
# external-secrets-cert-controller 0 10m
# external-secrets-webhook            0         10m

The main controller uses the external-secrets ServiceAccount in the external-secrets namespace. That's the identity we need to bind.
Now create the Vault role inside the Vault Pod:
$ vault write auth/kubernetes/role/eso-role \
bound_service_account_names=external-secrets \
bound_service_account_namespaces=external-secrets \
policies=eso-policy \
ttl=1h
# Success! Data written to: auth/kubernetes/role/eso-role

Breaking down each parameter:
- bound_service_account_names: Only tokens belonging to the external-secrets ServiceAccount are allowed to authenticate with this role.
- bound_service_account_namespaces: Further restricts the scope to the external-secrets namespace. A ServiceAccount with the same name in a different namespace is still denied.
- policies: The policy (or policies, comma-separated) to attach. Here we bind eso-policy from Step 6.
- ttl: How long the issued Vault token is valid. After 1 hour, ESO must re-authenticate to get a new token.
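As an aside, the identity Vault matches these bindings against comes from claims inside the ServiceAccount JWT. A JWT payload is just base64-encoded JSON, so we can illustrate with a fabricated payload (real tokens carry more claims and use base64url encoding):

```shell
# Fabricated JWT payload for illustration only — not a real token
PAYLOAD='{"kubernetes.io":{"namespace":"external-secrets","serviceaccount":{"name":"external-secrets"}}}'

# Encode it the way a JWT segment roughly looks, then decode it back
ENCODED=$(printf '%s' "$PAYLOAD" | base64 | tr -d '\n')
printf '%s' "$ENCODED" | base64 -d
```

The namespace and serviceaccount.name fields are exactly what bound_service_account_namespaces and bound_service_account_names are checked against.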
Verify the role was created correctly:
$ vault read auth/kubernetes/role/eso-role
# Key Value
# --- -----
# bound_service_account_names [external-secrets]
# bound_service_account_namespaces [external-secrets]
# policies [eso-policy]
# ttl                                 1h

Step 8: Create the ClusterSecretStore
A ClusterSecretStore is a cluster-wide ESO resource that tells ESO how to connect to Vault – which address to reach, how to authenticate, and which secret engine to use. Think of it as the bridge configuration between ESO and Vault.
The difference between SecretStore and ClusterSecretStore:
- SecretStore: Namespace-scoped. Only ExternalSecret resources in the same namespace can use it.
- ClusterSecretStore: Cluster-scoped. Any ExternalSecret in any namespace can reference it — ideal when multiple teams or apps share the same Vault backend.
Create the manifest and apply it:
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: vault-backend
spec:
  provider:
    vault:
      server: "http://vault.vault.svc:8200"
      path: "secret"
      version: "v2"
      auth:
        kubernetes:
          mountPath: "kubernetes"
          role: "eso-role"
          serviceAccountRef:
            name: "external-secrets"
            namespace: "external-secrets"

Breaking down the key fields:
- server: The in-cluster address of Vault. vault.vault.svc resolves to the Vault service in the vault namespace.
- path: The KV mount path we enabled in Step 4.
- version: "v2": Tells ESO this is a KV v2 engine, so it will automatically insert /data/ into the path when fetching secrets.
- auth.kubernetes.mountPath: The auth method mount path we enabled in Step 5 (kubernetes).
- auth.kubernetes.role: The Vault role we created in Step 7 (eso-role).
- serviceAccountRef: The ServiceAccount ESO will use to authenticate with Vault. This must match what we bound in the Vault role.
Verify the ClusterSecretStore is ready:
$ kubectl get clustersecretstore vault-backend
# NAME AGE STATUS CAPABILITIES READY
# vault-backend   30s   Valid    ReadWrite      True

The READY: True and STATUS: Valid confirm that ESO successfully connected to Vault and authenticated using the Kubernetes auth method. If you see Invalid here, double-check the Vault address, role name, and ServiceAccount reference.
The full pipeline is now working end-to-end:
Vault secret (secret/myapp/config)
↓ ESO reads via ClusterSecretStore
Kubernetes Secret (myapp-secret)
↓
Pod consumes via env vars or mounted volume
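The ClusterSecretStore tells ESO how to reach Vault; the final link is an ExternalSecret that requests an actual sync. A sketch using the secret/myapp/config secret from Step 4 (the target Secret name, namespace, and refresh interval are illustrative choices):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: myapp-secret
  namespace: default
spec:
  refreshInterval: 1h            # how often ESO re-reads the secret from Vault
  secretStoreRef:
    name: vault-backend          # the ClusterSecretStore created in Step 8
    kind: ClusterSecretStore
  target:
    name: myapp-secret           # the Kubernetes Secret ESO creates and keeps in sync
  dataFrom:
    - extract:
        key: myapp/config        # relative to the "secret" mount; ESO inserts /data/
```

Once applied, ESO materializes a standard Kubernetes Secret named myapp-secret containing the username and password keys from Vault, which pods can consume as env vars or mounted volumes.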
Summary
In this post, we built a complete, production-oriented secret management pipeline on Kubernetes from scratch. Here is a recap of what we covered:
- Why Kubernetes Secrets alone aren't enough — base64 encoding is not encryption, and native Secrets lack dynamic generation, granular access control, audit logging, and versioning.
- Why HashiCorp Vault — it addresses all of those gaps with enterprise-grade features, while remaining compatible with Kubernetes through its auth methods.
- ESO over Sidecar Injection — we chose the External Secrets Operator for its developer-friendly UX, GitOps compatibility, and Kubernetes-native secret consumption model.
The 8 steps we walked through, and how they connect:
| Step | What we did | Why it matters |
|---|---|---|
| 1 | Install Vault via Helm | Standalone mode with persistent storage |
| 2 | Initialize and unseal Vault | Required before Vault can serve any requests |
| 3 | Install External Secrets Operator | The controller that syncs Vault secrets into Kubernetes |
| 4 | Enable KV v2 secret engine | Where secrets are stored in Vault, with versioning support |
| 5 | Enable Kubernetes auth | Lets Vault validate Kubernetes ServiceAccount tokens |
| 6 | Create a Vault policy | Defines what ESO is allowed to read |
| 7 | Create a Vault role | Binds the ESO ServiceAccount to the policy |
| 8 | Create the ClusterSecretStore | Wires ESO to Vault so ExternalSecrets can sync into Kubernetes |
This setup covers a personal cluster or small team environment well. For larger production deployments, the natural next steps would be enabling Vault High Availability, automating unseal with Auto Unseal (e.g., AWS KMS or GCP Cloud KMS), and tightening policies to specific app paths rather than the broad secret/data/* wildcard used here.