Learn how to deploy Infisical on Kubernetes using the official Helm chart. This method is ideal for production environments that require scalability, high availability, and integration with existing Kubernetes infrastructure.

Prerequisites

  • A running Kubernetes cluster (version 1.23+)
  • Helm package manager (version 3.11.3+)
  • kubectl installed and configured to access your cluster
  • Basic understanding of Kubernetes concepts (pods, services, secrets, ingress)
This guide assumes familiarity with Kubernetes. If you’re new to Kubernetes, consider starting with the Docker Compose guide for simpler deployments.

System Requirements

The following are minimum requirements for running Infisical on Kubernetes:
Component        Minimum    Recommended (Production)
Nodes            1 node     3+ nodes (for HA)
CPU per node     2 cores    4 cores
RAM per node     4 GB       8 GB
Disk per node    20 GB      50 GB+ (SSD recommended)
Per-pod resource defaults (configurable in values.yaml):
Pod           CPU Request    Memory Limit
Infisical     350m           1000Mi
PostgreSQL    250m           512Mi
Redis         100m           256Mi
For production deployments with many users or secrets, increase these values accordingly.
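As a hedged illustration, the per-pod defaults above can be raised through the chart's resources block; the exact figures below are examples, not a recommendation:

```yaml
# values.yaml (illustrative values only -- size to your workload)
infisical:
  resources:
    requests:
      cpu: 700m        # doubled from the 350m default
    limits:
      memory: 2000Mi   # doubled from the 1000Mi default
```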

Deployment Steps

1. Create a namespace

Create a dedicated namespace for Infisical to isolate resources:
kubectl create namespace infisical
All subsequent commands will use this namespace. You can also add -n infisical to each kubectl command if you prefer not to set a default context.
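If you do want the namespace applied automatically, one way is to make it the default for your current kubectl context:

```shell
# Make infisical the default namespace for the current context
kubectl config set-context --current --namespace=infisical
```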

2. Add the Helm repository

Add the Infisical Helm charts repository and update your local cache:
helm repo add infisical-helm-charts 'https://dl.cloudsmith.io/public/infisical/helm-charts/helm/charts/'
helm repo update

3. Create the secrets

Infisical requires a Kubernetes secret named infisical-secrets containing essential configuration. Create this secret in the same namespace where you’ll deploy the chart.
For testing or proof-of-concept deployments, the Helm chart automatically provisions in-cluster PostgreSQL and Redis instances. You only need to provide the core secrets:
kubectl create secret generic infisical-secrets \
  --namespace infisical \
  --from-literal=AUTH_SECRET="$(openssl rand -base64 32)" \
  --from-literal=ENCRYPTION_KEY="$(openssl rand -hex 16)" \
  --from-literal=SITE_URL="http://localhost"
The in-cluster PostgreSQL and Redis are not configured for high availability. Use this only for testing purposes.
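The shape of these values matters: ENCRYPTION_KEY must be a 16-byte hex string, which is exactly what openssl rand -hex 16 produces. A quick local sanity check (assuming openssl is on your PATH):

```shell
# Generate the two secret values locally to inspect their shape
AUTH_SECRET="$(openssl rand -base64 32)"   # 32 random bytes, base64-encoded (44 characters)
ENCRYPTION_KEY="$(openssl rand -hex 16)"   # 16 random bytes as 32 hex characters

echo "${#AUTH_SECRET} ${#ENCRYPTION_KEY}"  # prints: 44 32
```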

4. Create values.yaml

Create a values.yaml file to configure your deployment. Start with a minimal configuration:
values.yaml
infisical:
  image:
    repository: infisical/infisical
    tag: "v0.151.0"  # Check https://hub.docker.com/r/infisical/infisical/tags for latest
    pullPolicy: IfNotPresent
  replicaCount: 2

ingress:
  enabled: true
  hostName: "infisical.example.com"  # Replace with your domain
  ingressClassName: nginx
  nginx:
    enabled: true
Do not use the latest tag in production. Always pin to a specific version to avoid unexpected changes during upgrades.
For all available configuration options, see the full values.yaml reference.

5. Install the Helm chart

Deploy Infisical using Helm:
helm upgrade --install infisical infisical-helm-charts/infisical-standalone \
  --namespace infisical \
  --values values.yaml
This command installs Infisical if it doesn’t exist, or upgrades it if it does.

6. Verify the deployment

Check that all pods are running:
kubectl get pods -n infisical
You should see output similar to:
NAME                         READY   STATUS    RESTARTS   AGE
infisical-5d4f8b7c9-abc12    1/1     Running   0          2m
infisical-5d4f8b7c9-def34    1/1     Running   0          2m
postgresql-0                 1/1     Running   0          2m
redis-master-0               1/1     Running   0          2m
Verify the ingress is configured:
kubectl get ingress -n infisical
Test the health endpoint (port-forward if ingress isn’t ready):
kubectl port-forward -n infisical svc/infisical 8080:8080 &
curl http://localhost:8080/api/status
The first user to sign up becomes the instance administrator. Complete this step before exposing Infisical to others.
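Rather than polling kubectl get pods, you can also block until the pods report Ready (assuming the pods carry the component=infisical label used elsewhere in this guide):

```shell
# Wait up to 2 minutes for all Infisical pods to become Ready
kubectl wait --for=condition=Ready pod \
  -l component=infisical -n infisical --timeout=120s
```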

Managing Your Deployment

Viewing Pod Logs

To view logs from Infisical pods:
# View logs from all Infisical pods
kubectl logs -n infisical -l component=infisical -f

# View logs from a specific pod
kubectl logs -n infisical <pod-name> -f

# View last 100 lines
kubectl logs -n infisical <pod-name> --tail=100

# View logs from the previous container instance (useful after crashes)
kubectl logs -n infisical <pod-name> --previous
To view logs from PostgreSQL or Redis:
kubectl logs -n infisical -l app.kubernetes.io/name=postgresql -f
kubectl logs -n infisical -l app.kubernetes.io/name=redis -f

Scaling the Deployment

Infisical’s application layer is stateless, so you can scale horizontally:
# Scale to 4 replicas
kubectl scale deployment -n infisical infisical --replicas=4
Or update your values.yaml and re-apply:
values.yaml
infisical:
  replicaCount: 4
helm upgrade infisical infisical-helm-charts/infisical-standalone \
  --namespace infisical \
  --values values.yaml
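If your cluster runs the metrics server, a HorizontalPodAutoscaler can replace manual scaling entirely. A minimal sketch, with illustrative thresholds:

```yaml
# hpa.yaml -- example only; tune replica bounds and utilization target
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: infisical
  namespace: infisical
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: infisical
  minReplicas: 2
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```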

Upgrading Infisical

To upgrade to a new version:
  1. Back up your database before upgrading:
    kubectl exec -n infisical postgresql-0 -- pg_dump -U infisical infisicalDB > backup_$(date +%Y%m%d).sql
    
  2. Update the image tag in your values.yaml:
    infisical:
      image:
        tag: "v0.152.0"  # New version
    
  3. Apply the upgrade:
    helm upgrade infisical infisical-helm-charts/infisical-standalone \
      --namespace infisical \
      --values values.yaml
    
  4. Monitor the rollout:
    kubectl rollout status deployment/infisical -n infisical
    

Uninstalling Infisical

To completely remove Infisical from your cluster:
# Uninstall the Helm release
helm uninstall infisical -n infisical

# Delete the namespace (removes all resources including secrets and PVCs)
kubectl delete namespace infisical
Deleting the namespace removes all data including persistent volume claims. Back up your database before uninstalling if you need to preserve data.
To uninstall but preserve data:
# Uninstall only the Helm release (keeps PVCs and secrets)
helm uninstall infisical -n infisical

# Verify PVCs are retained
kubectl get pvc -n infisical

Persistent Volume Claims

The Helm chart creates Persistent Volume Claims (PVCs) for PostgreSQL and Redis data storage when using in-cluster databases.

Default PVCs

PVC Name                     Purpose            Default Size
data-postgresql-0            PostgreSQL data    8Gi
redis-data-redis-master-0    Redis data         8Gi

Viewing PVCs

kubectl get pvc -n infisical

Customizing Storage

To customize storage in your values.yaml:
values.yaml
postgresql:
  primary:
    persistence:
      size: 20Gi
      storageClass: "your-storage-class"

redis:
  master:
    persistence:
      size: 10Gi
      storageClass: "your-storage-class"
The PostgreSQL PVC contains all your encrypted secrets. Never delete this PVC unless you intend to lose all data. Always back up before any maintenance operations.

Additional Configuration

Infisical uses email for user invitations, password resets, and notifications. Add SMTP configuration to your secrets:
kubectl create secret generic infisical-secrets \
  --namespace infisical \
  --from-literal=AUTH_SECRET="your-auth-secret" \
  --from-literal=ENCRYPTION_KEY="your-encryption-key" \
  --from-literal=SITE_URL="https://infisical.example.com" \
  --from-literal=SMTP_HOST="smtp.example.com" \
  --from-literal=SMTP_PORT="587" \
  --from-literal=SMTP_USERNAME="your-smtp-username" \
  --from-literal=SMTP_PASSWORD="your-smtp-password" \
  --from-literal=SMTP_FROM_ADDRESS="infisical@example.com" \
  --from-literal=SMTP_FROM_NAME="Infisical" \
  --dry-run=client -o yaml | kubectl apply -f -
Common SMTP providers:
Provider    Host                                 Port
AWS SES     email-smtp.<region>.amazonaws.com    587
SendGrid    smtp.sendgrid.net                    587
Gmail       smtp.gmail.com                       587
After updating secrets, restart the Infisical pods:
kubectl rollout restart deployment/infisical -n infisical
To configure a custom domain with HTTPS, you have two options.

1. Using cert-manager (recommended)

First, install cert-manager if not already installed:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.0/cert-manager.yaml
Create a ClusterIssuer for Let’s Encrypt:
cluster-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: your-email@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx
kubectl apply -f cluster-issuer.yaml
Update your values.yaml:
values.yaml
ingress:
  enabled: true
  hostName: "infisical.example.com"
  ingressClassName: nginx
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
  tls:
    - secretName: infisical-tls
      hosts:
        - infisical.example.com
2. Using an existing TLS certificate

Create a TLS secret with your certificate:
kubectl create secret tls infisical-tls \
  --namespace infisical \
  --cert=path/to/tls.crt \
  --key=path/to/tls.key
Update your values.yaml:
values.yaml
ingress:
  enabled: true
  hostName: "infisical.example.com"
  ingressClassName: nginx
  tls:
    - secretName: infisical-tls
      hosts:
        - infisical.example.com
Apply the changes:
helm upgrade infisical infisical-helm-charts/infisical-standalone \
  --namespace infisical \
  --values values.yaml
For enhanced security, implement network policies to restrict traffic between pods:
network-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: infisical-network-policy
  namespace: infisical
spec:
  podSelector:
    matchLabels:
      component: infisical
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: postgresql
      ports:
        - protocol: TCP
          port: 5432
    - to:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: redis
      ports:
        - protocol: TCP
          port: 6379
    - to:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
      ports:
        - protocol: TCP
          port: 443
        - protocol: TCP
          port: 587
kubectl apply -f network-policy.yaml
Network policies require a CNI plugin that supports them (e.g., Calico, Cilium, Weave Net). Verify your cluster supports network policies before applying.
For production, use external managed services instead of in-cluster databases.

Disable the in-cluster databases in values.yaml:
values.yaml
postgresql:
  enabled: false

redis:
  enabled: false
Add connection strings to your secrets:
kubectl create secret generic infisical-secrets \
  --namespace infisical \
  --from-literal=AUTH_SECRET="your-auth-secret" \
  --from-literal=ENCRYPTION_KEY="your-encryption-key" \
  --from-literal=DB_CONNECTION_URI="postgresql://user:password@your-rds-endpoint:5432/infisical?sslmode=require" \
  --from-literal=REDIS_URL="rediss://:password@your-elasticache-endpoint:6379" \
  --from-literal=SITE_URL="https://infisical.example.com" \
  --dry-run=client -o yaml | kubectl apply -f -
Recommended managed services:
Cloud    PostgreSQL                       Redis
AWS      RDS for PostgreSQL               ElastiCache
GCP      Cloud SQL                        Memorystore
Azure    Azure Database for PostgreSQL    Azure Cache for Redis
Infisical exposes Prometheus metrics when enabled.

1. Add telemetry configuration to your secrets

Include these in your infisical-secrets:
--from-literal=OTEL_TELEMETRY_COLLECTION_ENABLED="true" \
--from-literal=OTEL_EXPORT_TYPE="prometheus"
2. Create a ServiceMonitor (if using Prometheus Operator):
servicemonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: infisical
  namespace: infisical
spec:
  selector:
    matchLabels:
      component: infisical
  endpoints:
    - port: metrics
      interval: 30s
kubectl apply -f servicemonitor.yaml
See the Monitoring Guide for full setup instructions.
For production high availability:

1. Multiple Infisical replicas:
values.yaml
infisical:
  replicaCount: 3
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: ScheduleAnyway
      labelSelector:
        matchLabels:
          component: infisical
2. Pod Disruption Budget:
pdb.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: infisical-pdb
  namespace: infisical
spec:
  minAvailable: 1
  selector:
    matchLabels:
      component: infisical
kubectl apply -f pdb.yaml
3. External HA database: use managed PostgreSQL with multi-AZ deployment (e.g., AWS RDS Multi-AZ, GCP Cloud SQL HA).

4. External HA Redis: use managed Redis with replication (e.g., AWS ElastiCache with cluster mode, GCP Memorystore).
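Beyond topology spread constraints, pod anti-affinity can discourage the scheduler from placing replicas on the same node. A hedged sketch using the chart's affinity field:

```yaml
# values.yaml -- example anti-affinity; "preferred" avoids blocking scheduling on small clusters
infisical:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            topologyKey: kubernetes.io/hostname
            labelSelector:
              matchLabels:
                component: infisical
```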

Troubleshooting

Pods Not Starting

Check pod events:
kubectl describe pod -n infisical <pod-name>
Common causes:
  • Insufficient cluster resources: Check node capacity with kubectl describe nodes
  • PVC not bound: Check PVC status with kubectl get pvc -n infisical
  • Image pull errors: Verify image name and check for ImagePullBackOff errors
Solutions:
  • Scale up your cluster or reduce resource requests
  • Ensure a StorageClass is available for dynamic provisioning
  • Check image registry credentials if using a private registry

Pods Crashing or Restarting

View pod logs:
kubectl logs -n infisical <pod-name> --previous
Common causes:
  • Missing or invalid secrets: Verify infisical-secrets exists and contains required keys
  • Database connection failed: Check DB_CONNECTION_URI is correct and accessible
  • Invalid configuration: Check for typos in environment variables
Verify secrets:
kubectl get secret infisical-secrets -n infisical -o yaml

Ingress Not Working

Check ingress status:
kubectl get ingress -n infisical
kubectl describe ingress -n infisical infisical
Check ingress controller logs:
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx
Verify service is accessible:
kubectl port-forward -n infisical svc/infisical 8080:8080
curl http://localhost:8080/api/status
Common causes:
  • Ingress controller not installed
  • DNS not pointing to ingress IP
  • TLS certificate issues

Database Connection Issues

Check PostgreSQL pod:
kubectl get pods -n infisical -l app.kubernetes.io/name=postgresql
kubectl logs -n infisical postgresql-0
Test database connectivity:
kubectl exec -it -n infisical postgresql-0 -- psql -U infisical -d infisicalDB -c "SELECT 1"
For external databases:
  • Verify the connection string in infisical-secrets
  • Check network policies and security groups allow traffic
  • Ensure SSL certificates are configured if required

Redis Connection Issues

Check Redis pod:
kubectl get pods -n infisical -l app.kubernetes.io/name=redis
kubectl logs -n infisical redis-master-0
Test Redis connectivity:
kubectl exec -it -n infisical redis-master-0 -- redis-cli ping
For external Redis:
  • Verify the REDIS_URL in infisical-secrets
  • Check if TLS is required (use rediss:// instead of redis://)

Helm Release Issues

Check Helm release status:
helm status infisical -n infisical
helm history infisical -n infisical
Rollback to previous version:
helm rollback infisical -n infisical
Force upgrade (use with caution):
helm upgrade infisical infisical-helm-charts/infisical-standalone \
  --namespace infisical \
  --values values.yaml \
  --force

Performance Issues

Check resource usage:
kubectl top pods -n infisical
kubectl top nodes
Check for resource throttling:
kubectl describe pod -n infisical <pod-name> | grep -A5 "Limits\|Requests"
Solutions:
  • Increase resource limits in values.yaml
  • Scale horizontally by increasing replicaCount
  • Use external managed databases for better performance
  • Enable connection pooling for PostgreSQL

Full Values Reference

values.yaml
# -- Overrides the default release name
nameOverride: ""

# -- Overrides the full name of the release, affecting resource names
fullnameOverride: ""

infisical:
  # -- Enable Infisical chart deployment
  enabled: true
  # -- Sets the name of the deployment within this chart
  name: infisical

  autoBootstrap:
    # -- Enable auto-bootstrap of the Infisical instance
    enabled: false

    image:
      # -- Infisical CLI image tag version
      tag: "0.41.86"

    # -- Template for the data/stringData section of the Kubernetes secret. Available functions: encodeBase64
    secretTemplate: '{"data":{"token":"{{.Identity.Credentials.Token}}"}}'

    secretDestination:
      # -- Name of the bootstrap secret to create in the Kubernetes cluster which will store the formatted root identity credentials
      name: "infisical-bootstrap-secret"

      # -- Namespace to create the bootstrap secret in. If not provided, the secret will be created in the same namespace as the release.
      namespace: "default"

    # -- Infisical organization to create in the Infisical instance during auto-bootstrap
    organization: "default-org"

    credentialSecret:
      # -- Name of the Kubernetes secret containing the credentials for the auto-bootstrap workflow
      name: "infisical-bootstrap-credentials"

  databaseSchemaMigrationJob:
    image:
      # -- Image repository for migration wait job
      repository: ghcr.io/groundnuty/k8s-wait-for
      # -- Image tag version
      tag: no-root-v2.0
      # -- Pulls image only if not present on the node
      pullPolicy: IfNotPresent

  serviceAccount:
    # -- Creates a new service account if true, with necessary permissions for this chart. If false and `serviceAccount.name` is not defined, the chart will attempt to use the Default service account
    create: true
    # -- Custom annotations for the auto-created service account
    annotations: {}
    # -- Optional custom service account name, if existing service account is used
    name: null

  # -- Override for the full name of Infisical resources in this deployment
  fullnameOverride: ""
  # -- Custom annotations for Infisical pods
  podAnnotations: {}
  # -- Custom annotations for Infisical deployment
  deploymentAnnotations: {}
  # -- Number of pod replicas for high availability
  replicaCount: 2

  image:
    # -- Image repository for the Infisical service
    repository: infisical/infisical
    # -- Specific version tag of the Infisical image. View the latest version here https://hub.docker.com/r/infisical/infisical
    tag: "v0.151.0"
    # -- Pulls image only if not already present on the node
    pullPolicy: IfNotPresent
    # -- Secret references for pulling the image, if needed
    imagePullSecrets: []

  # -- Node affinity settings for pod placement
  affinity: {}
  # -- Tolerations definitions
  tolerations: []
  # -- Node selector for pod placement
  nodeSelector: {}
  # -- Topology spread constraints for multi-zone deployments
  # -- Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
  topologySpreadConstraints: []

  # -- Kubernetes Secret reference containing Infisical root credentials
  kubeSecretRef: "infisical-secrets"

  service:
    # -- Custom annotations for Infisical service
    annotations: {}
    # -- Service type, can be changed based on exposure needs (e.g., LoadBalancer)
    type: ClusterIP
    # -- Optional node port for service when using NodePort type
    nodePort: ""

  resources:
    limits:
      # -- Memory limit for Infisical container
      memory: 1000Mi
    requests:
      # -- CPU request for Infisical container
      cpu: 350m

ingress:
  # -- Enable or disable ingress configuration
  enabled: true
  # -- Hostname for ingress access, e.g., app.example.com
  hostName: ""
  # -- Specifies the ingress class, useful for multi-ingress setups
  ingressClassName: nginx

  nginx:
    # -- Enable NGINX-specific settings, if using NGINX ingress controller
    enabled: true

  # -- Custom annotations for ingress resource
  annotations: {}
  # -- TLS settings for HTTPS access
  tls: []
    # -- TLS secret name for HTTPS
    # - secretName: letsencrypt-prod
    # -- Domain name to associate with the TLS certificate
    #   hosts:
    #     - some.domain.com

postgresql:
  # -- Enables an in-cluster PostgreSQL deployment. To achieve HA for Postgres, we recommend deploying https://github.com/zalando/postgres-operator instead.
  enabled: true
  # -- PostgreSQL resource name
  name: "postgresql"
  # -- Full name override for PostgreSQL resources
  fullnameOverride: "postgresql"

  image:
    # -- Image registry for PostgreSQL
    registry: mirror.gcr.io
    # -- Image repository for PostgreSQL
    repository: bitnamilegacy/postgresql

  auth:
    # -- Database username for PostgreSQL
    username: infisical
    # -- Password for PostgreSQL database access
    password: root
    # -- Database name for Infisical
    database: infisicalDB

  useExistingPostgresSecret:
    # -- Set to true if using an existing Kubernetes secret that contains PostgreSQL connection string
    enabled: false
    existingConnectionStringSecret:
      # -- Kubernetes secret name containing the PostgreSQL connection string
      name: ""
      # -- Key name in the Kubernetes secret that holds the connection string
      key: ""

redis:
  # -- Enables an in-cluster Redis deployment
  enabled: true
  # -- Redis resource name
  name: "redis"
  # -- Full name override for Redis resources
  fullnameOverride: "redis"

  image:
    # -- Image registry for Redis
    registry: mirror.gcr.io
    # -- Image repository for Redis
    repository: bitnamilegacy/redis

  cluster:
    # -- Clustered Redis deployment
    enabled: false

  # -- Requires a password for Redis authentication
  usePassword: true

  auth:
    # -- Redis password
    password: "mysecretpassword"

  # -- Redis deployment type (e.g., standalone or cluster)
  architecture: standalone


Your Infisical instance should now be running on Kubernetes. Access it via the ingress hostname you configured, or use kubectl port-forward for local testing.