Learn how to deploy Infisical on Google Cloud Platform using Google Kubernetes Engine (GKE) for container orchestration. This guide covers setting up Infisical in a production-ready GCP environment using Cloud SQL (PostgreSQL) for the database, Memorystore (Redis) for caching, and Google Cloud Load Balancing for routing traffic.

Prerequisites

  • A Google Cloud Platform account with permissions to create VPCs, GKE clusters, Cloud SQL instances, Memorystore instances, and Load Balancers
  • Basic knowledge of GCP networking (VPC, subnets, firewall rules) and Kubernetes concepts
  • gcloud CLI installed and configured
  • kubectl installed for interacting with your GKE cluster
  • Helm installed (version 3.x) for deploying the Infisical Helm chart
  • An Infisical Docker image tag from Docker Hub
Do not use the latest tag in production. Always pin to a specific version to avoid unexpected changes during upgrades.

System Requirements

The following are minimum requirements for running Infisical on GCP GKE:
Component             | Minimum     | Recommended (Production)
----------------------|-------------|----------------------------
GKE Node Machine Type | e2-small    | n2-standard-2 or larger
GKE Nodes per Zone    | 1           | 2+
Cloud SQL Instance    | db-f1-micro | db-n1-standard-2 or larger
Memorystore Capacity  | 1 GB        | 2 GB or larger
Infisical Pod Memory  | 512 MB      | 1 GB
Infisical Pod CPU     | 500m        | 1000m
For production deployments with many users or secrets, increase these values accordingly.

Deployment Steps

Step 1: Set up network infrastructure (VPC, subnets, firewall rules)

Create a VPC network to host Infisical.

VPC & Subnets:
  • Create a VPC-native network that will host your GKE cluster, Cloud SQL instance, and Memorystore instance
  • Create a subnet for your GKE cluster with an appropriate IP range
  • Define secondary IP ranges for Kubernetes pods and services:
    • Primary range: 10.0.0.0/20 (for nodes)
    • Secondary range for pods: 10.4.0.0/14
    • Secondary range for services: 10.8.0.0/20
# Create VPC
gcloud compute networks create infisical-vpc --subnet-mode=custom

# Create subnet with secondary ranges
gcloud compute networks subnets create infisical-subnet \
  --network=infisical-vpc \
  --region=us-central1 \
  --range=10.0.0.0/20 \
  --secondary-range=pods=10.4.0.0/14,services=10.8.0.0/20
Cloud Router & Cloud NAT:
  • Deploy a Cloud Router and NAT gateway for outbound internet access from private GKE nodes
# Create Cloud Router
gcloud compute routers create infisical-router \
  --network=infisical-vpc \
  --region=us-central1

# Create Cloud NAT
gcloud compute routers nats create infisical-nat \
  --router=infisical-router \
  --region=us-central1 \
  --nat-all-subnet-ip-ranges \
  --auto-allocate-nat-external-ips
Firewall Rules:
Rule                     | Source                         | Destination | Ports | Purpose
-------------------------|--------------------------------|-------------|-------|----------------------------
Allow internal           | VPC CIDR                       | VPC CIDR    | All   | Internal communication
Allow health checks      | 130.211.0.0/22, 35.191.0.0/16  | GKE nodes   | 8080  | Load balancer health checks
Allow GKE to Cloud SQL   | GKE pods                       | Cloud SQL   | 5432  | Database access
Allow GKE to Memorystore | GKE pods                       | Memorystore | 6379  | Redis access
# Allow health check traffic
gcloud compute firewall-rules create allow-health-checks \
  --network=infisical-vpc \
  --allow=tcp:8080 \
  --source-ranges=130.211.0.0/22,35.191.0.0/16 \
  --target-tags=gke-infisical
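The remaining rules from the table can be created the same way. For example, a sketch of the internal-traffic rule, with the node, pod, and service ranges assumed from the subnet layout above:
# Allow internal traffic between nodes, pods, and services
gcloud compute firewall-rules create allow-internal \
  --network=infisical-vpc \
  --allow=tcp,udp,icmp \
  --source-ranges=10.0.0.0/20,10.4.0.0/14,10.8.0.0/20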
Enable Private Google Access:
gcloud compute networks subnets update infisical-subnet \
  --region=us-central1 \
  --enable-private-ip-google-access
Verify: Confirm your network infrastructure is created:
# Verify VPC and subnet
gcloud compute networks describe infisical-vpc
gcloud compute networks subnets describe infisical-subnet --region=us-central1

# Verify NAT gateway
gcloud compute routers nats describe infisical-nat --router=infisical-router --region=us-central1
Step 2: Provision a Google Kubernetes Engine (GKE) cluster

Create a GKE cluster to host your Infisical deployment:
gcloud container clusters create infisical-cluster \
  --region us-central1 \
  --machine-type n2-standard-2 \
  --num-nodes 1 \
  --enable-ip-alias \
  --network infisical-vpc \
  --subnetwork infisical-subnet \
  --cluster-secondary-range-name pods \
  --services-secondary-range-name services \
  --enable-private-nodes \
  --master-ipv4-cidr 172.16.0.0/28 \
  --no-enable-basic-auth \
  --no-issue-client-certificate \
  --logging=SYSTEM,WORKLOAD \
  --monitoring=SYSTEM \
  --enable-autoscaling \
  --min-nodes 1 \
  --max-nodes 5 \
  --enable-autorepair \
  --enable-autoupgrade \
  --workload-pool=<YOUR_PROJECT_ID>.svc.id.goog
Connect to the cluster:
gcloud container clusters get-credentials infisical-cluster --region us-central1
Verify: Confirm cluster is ready:
# Check nodes are ready
kubectl get nodes

# Verify cluster info
kubectl cluster-info
You should see your nodes listed and in a Ready state.
For private GKE clusters, you’ll need to access the cluster from within the VPC (via a bastion host or Cloud Shell) or configure authorized networks to allow your IP to access the control plane endpoint.
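For example, to allowlist your workstation's public IP as an authorized network (a sketch; substitute your own IP):
# Allow a specific public IP to reach the private cluster's control plane
gcloud container clusters update infisical-cluster \
  --region us-central1 \
  --enable-master-authorized-networks \
  --master-authorized-networks <YOUR_PUBLIC_IP>/32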
Step 3: Provision Cloud SQL for PostgreSQL

Set up the PostgreSQL database for Infisical:
# Enable required APIs
gcloud services enable sqladmin.googleapis.com
gcloud services enable servicenetworking.googleapis.com

# Allocate IP range for private services
gcloud compute addresses create google-managed-services-infisical-vpc \
  --global \
  --purpose=VPC_PEERING \
  --prefix-length=16 \
  --network=infisical-vpc

# Create private connection
gcloud services vpc-peerings connect \
  --service=servicenetworking.googleapis.com \
  --ranges=google-managed-services-infisical-vpc \
  --network=infisical-vpc

# Create Cloud SQL instance
gcloud sql instances create infisical-db \
  --database-version=POSTGRES_15 \
  --tier=db-n1-standard-2 \
  --region=us-central1 \
  --network=infisical-vpc \
  --no-assign-ip \
  --availability-type=REGIONAL \
  --storage-type=SSD \
  --storage-size=20GB \
  --storage-auto-increase \
  --backup-start-time=03:00 \
  --enable-point-in-time-recovery \
  --retained-backups-count=7
Create database and user:
# Set root password
gcloud sql users set-password postgres \
  --instance=infisical-db \
  --password=<your-secure-password>

# Create database
gcloud sql databases create infisical --instance=infisical-db

# Create user
gcloud sql users create infisical_user \
  --instance=infisical-db \
  --password=<your-secure-password>
Verify: Confirm Cloud SQL is ready:
# Check instance status
gcloud sql instances describe infisical-db --format="value(state)"

# Get private IP address
gcloud sql instances describe infisical-db --format="value(ipAddresses[0].ipAddress)"
Note the private IP address for your connection string:
postgresql://infisical_user:<password>@<private-ip>:5432/infisical
Step 4: Provision Memorystore for Redis

Set up Redis caching for Infisical:
# Enable Memorystore API
gcloud services enable redis.googleapis.com

# Create Memorystore instance
gcloud redis instances create infisical-redis \
  --size=1 \
  --region=us-central1 \
  --network=infisical-vpc \
  --tier=STANDARD_HA \
  --redis-version=redis_7_0
Verify: Confirm Memorystore is ready:
# Check instance status
gcloud redis instances describe infisical-redis --region=us-central1 --format="value(state)"

# Get host IP
gcloud redis instances describe infisical-redis --region=us-central1 --format="value(host)"
Note the host IP for your connection string:
redis://<memorystore-ip>:6379
Memorystore for Redis does not enable AUTH by default, so security relies primarily on VPC isolation and firewall rules. Ensure only your GKE cluster's pods can reach the Memorystore IP.
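If you also want AUTH for defense in depth, it can be enabled at instance creation and the generated password retrieved afterwards; a sketch (the rest of this guide assumes the unauthenticated connection string above):
# Variant of the create command above with AUTH enabled
gcloud redis instances create infisical-redis \
  --size=1 \
  --region=us-central1 \
  --network=infisical-vpc \
  --tier=STANDARD_HA \
  --redis-version=redis_7_0 \
  --enable-auth

# Retrieve the generated AUTH string
gcloud redis instances get-auth-string infisical-redis --region=us-central1
# Connection string form: redis://default:<auth-string>@<memorystore-ip>:6379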
Step 5: Securely store Infisical secrets and configuration

Generate and store the required secrets.

Generate secrets:
# Generate ENCRYPTION_KEY (16 random bytes as a 32-character hex string)
ENCRYPTION_KEY=$(openssl rand -hex 16)
echo "ENCRYPTION_KEY: $ENCRYPTION_KEY"

# Generate AUTH_SECRET (32-byte base64 string)
AUTH_SECRET=$(openssl rand -base64 32)
echo "AUTH_SECRET: $AUTH_SECRET"
Store your ENCRYPTION_KEY securely outside of GCP as well. Without this key, you cannot decrypt your secrets even if you restore the database.
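If you keep these values in Secret Manager, create the secrets before binding IAM. A sketch; the secret names below are assumptions chosen to match the IAM bindings in the next snippet:
# Enable Secret Manager and create the secrets referenced below
gcloud services enable secretmanager.googleapis.com

printf '%s' "$ENCRYPTION_KEY" | gcloud secrets create infisical-encryption-key --data-file=-
printf '%s' "$AUTH_SECRET" | gcloud secrets create infisical-auth-secret --data-file=-
printf '%s' 'postgresql://infisical_user:<password>@<private-ip>:5432/infisical' | \
  gcloud secrets create infisical-db-uri --data-file=-
printf '%s' 'redis://<memorystore-ip>:6379' | gcloud secrets create infisical-redis-url --data-file=-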
Configure IAM for Secret Access (if using Secret Manager with Workload Identity):
# Create Google Cloud IAM service account
gcloud iam service-accounts create infisical-gsa \
  --display-name="Infisical GKE Service Account"

# Grant access to secrets
for secret in infisical-encryption-key infisical-auth-secret infisical-db-uri infisical-redis-url; do
  gcloud secrets add-iam-policy-binding $secret \
    --member="serviceAccount:infisical-gsa@<YOUR_PROJECT_ID>.iam.gserviceaccount.com" \
    --role="roles/secretmanager.secretAccessor"
done

# Bind to Kubernetes service account
gcloud iam service-accounts add-iam-policy-binding \
  infisical-gsa@<YOUR_PROJECT_ID>.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:<YOUR_PROJECT_ID>.svc.id.goog[infisical/infisical]"
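Separately, the Helm values in the next step pull environment variables from a Kubernetes secret named infisical-secrets, so create it now. A sketch using Infisical's standard environment variable names (DB_CONNECTION_URI, REDIS_URL):
# Create the namespace and the secret consumed via envFrom in the Helm values
kubectl create namespace infisical

kubectl create secret generic infisical-secrets \
  --namespace infisical \
  --from-literal=ENCRYPTION_KEY="$ENCRYPTION_KEY" \
  --from-literal=AUTH_SECRET="$AUTH_SECRET" \
  --from-literal=DB_CONNECTION_URI="postgresql://infisical_user:<password>@<private-ip>:5432/infisical" \
  --from-literal=REDIS_URL="redis://<memorystore-ip>:6379"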
Step 6: Deploy Infisical using Helm

Deploy Infisical to your GKE cluster using the official Helm chart.

Add the Infisical Helm repository:
helm repo add infisical-helm-charts https://dl.cloudsmith.io/public/infisical/helm-charts/helm/charts/
helm repo update
Create a Helm values file named infisical-values.yaml:
# infisical-values.yaml

replicaCount: 2

image:
  repository: infisical/infisical
  tag: "v0.46.2-postgres"  # Use a specific version
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 8080

ingress:
  enabled: true
  className: "gce"
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: "infisical-ip"
    networking.gke.io/managed-certificates: "infisical-cert"
  hosts:
    - host: infisical.example.com
      paths:
        - path: /
          pathType: Prefix

env:
  - name: SITE_URL
    value: "https://infisical.example.com"
  - name: HOST
    value: "0.0.0.0"
  - name: PORT
    value: "8080"

envFrom:
  - secretRef:
      name: infisical-secrets

resources:
  requests:
    memory: "512Mi"
    cpu: "500m"
  limits:
    memory: "1Gi"
    cpu: "1000m"

livenessProbe:
  httpGet:
    path: /api/status
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10

readinessProbe:
  httpGet:
    path: /api/status
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5

podDisruptionBudget:
  enabled: true
  minAvailable: 1

autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70

serviceAccount:
  create: true
  name: infisical
  annotations:
    iam.gke.io/gcp-service-account: infisical-gsa@<YOUR_PROJECT_ID>.iam.gserviceaccount.com

podSecurityContext:
  fsGroup: 1000
  runAsNonRoot: true
  runAsUser: 1000

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: app.kubernetes.io/name
                operator: In
                values:
                  - infisical
          topologyKey: topology.kubernetes.io/zone
Reserve a Static IP Address:
gcloud compute addresses create infisical-ip --global
gcloud compute addresses describe infisical-ip --global --format="value(address)"
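Point your domain's A record at this address. If the zone is hosted in Cloud DNS, a sketch (the managed zone name is an assumption):
# Create an A record pointing at the reserved static IP (assumes a Cloud DNS zone)
gcloud dns record-sets create infisical.example.com. \
  --zone=<your-managed-zone> \
  --type=A \
  --ttl=300 \
  --rrdatas=$(gcloud compute addresses describe infisical-ip --global --format="value(address)")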
Deploy Infisical:
helm install infisical infisical-helm-charts/infisical \
  --namespace infisical \
  --create-namespace \
  --values infisical-values.yaml
Verify: Confirm deployment is successful:
# Check pods are running
kubectl get pods -n infisical

# Check service
kubectl get svc -n infisical

# Check ingress
kubectl get ingress -n infisical

# Check pod logs
kubectl logs -l app.kubernetes.io/name=infisical -n infisical --tail=50
Wait for all pods to be in Running state.
Step 7: Configure HTTPS access with SSL/TLS

Secure your Infisical deployment with HTTPS.

Create a Managed Certificate: The networking.gke.io/managed-certificates annotation in the Helm values references a certificate named infisical-cert, which must exist for the load balancer to serve TLS.
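A minimal manifest, assuming the infisical.example.com domain used throughout this guide:
# managed-certificate.yaml
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: infisical-cert
  namespace: infisical
spec:
  domains:
    - infisical.example.com
kubectl apply -f managed-certificate.yaml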
Force HTTPS Redirect: Create a FrontendConfig to redirect HTTP to HTTPS:
# frontend-config.yaml
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: ssl-redirect
  namespace: infisical
spec:
  redirectToHttps:
    enabled: true
kubectl apply -f frontend-config.yaml
Add the annotation to your ingress:
annotations:
  networking.gke.io/v1beta1.FrontendConfig: "ssl-redirect"
Verify: Test HTTPS access:
curl -I https://infisical.example.com/api/status
After completing the above steps, your Infisical instance should be up and running on GCP. You can now proceed with creating an admin account and configuring additional features.

Additional Configuration

Configure email sending for Infisical notifications and invitations.

Using SendGrid:
kubectl create secret generic infisical-smtp \
  --from-literal=SMTP_HOST="smtp.sendgrid.net" \
  --from-literal=SMTP_PORT="587" \
  --from-literal=SMTP_USERNAME="apikey" \
  --from-literal=SMTP_PASSWORD="<your-sendgrid-api-key>" \
  --from-literal=SMTP_FROM_ADDRESS="noreply@example.com" \
  --from-literal=SMTP_FROM_NAME="Infisical" \
  -n infisical
Using Gmail:
kubectl create secret generic infisical-smtp \
  --from-literal=SMTP_HOST="smtp.gmail.com" \
  --from-literal=SMTP_PORT="587" \
  --from-literal=SMTP_USERNAME="your-email@gmail.com" \
  --from-literal=SMTP_PASSWORD="<app-password>" \
  --from-literal=SMTP_FROM_ADDRESS="your-email@gmail.com" \
  --from-literal=SMTP_FROM_NAME="Infisical" \
  -n infisical
Update your Helm values to include the SMTP secret:
envFrom:
  - secretRef:
      name: infisical-secrets
  - secretRef:
      name: infisical-smtp
Upgrade the deployment:
helm upgrade infisical infisical-helm-charts/infisical \
  --namespace infisical \
  --values infisical-values.yaml
Verify: Check logs for SMTP configuration:
kubectl logs -l app.kubernetes.io/name=infisical -n infisical | grep -i smtp
Access running containers for debugging.

Exec into an Infisical pod:
# Get pod name
kubectl get pods -n infisical

# Exec into the pod
kubectl exec -it <pod-name> -n infisical -- /bin/sh
Common debugging commands:
# Check environment variables
kubectl exec -it <pod-name> -n infisical -- env | grep -E "(DB_|REDIS_|SITE_)"

# Test database connectivity
kubectl exec -it <pod-name> -n infisical -- nc -zv <cloud-sql-ip> 5432

# Test Redis connectivity
kubectl exec -it <pod-name> -n infisical -- nc -zv <memorystore-ip> 6379

# View application logs
kubectl logs <pod-name> -n infisical --tail=100 -f
Run a debug pod:
kubectl run debug-pod --rm -it --image=busybox -n infisical -- /bin/sh
Infisical automatically runs database migrations on startup. To manage migrations manually:

Check migration status:
kubectl logs -l app.kubernetes.io/name=infisical -n infisical | grep -i migration
Run migrations manually:
# Exec into a pod and run migrations
kubectl exec -it <pod-name> -n infisical -- npm run migration:latest
Roll back migrations (if needed):
kubectl exec -it <pod-name> -n infisical -- npm run migration:rollback
Always backup your database before running manual migrations. Use Cloud SQL automated backups or create a manual snapshot first.
Database Backups: Cloud SQL automated backups are enabled by default. To create a manual backup:
gcloud sql backups create --instance=infisical-db
To restore from a backup:
gcloud sql backups restore <backup-id> \
  --restore-instance=infisical-db \
  --backup-instance=infisical-db
Encryption Key Backup: The ENCRYPTION_KEY is critical. Store it securely:
  • In Google Secret Manager with restricted IAM permissions
  • In an offline encrypted backup in a secure physical location
  • Never store it in version control
Export secrets for backup:
gcloud secrets versions access latest --secret=infisical-encryption-key > encryption-key-backup.txt
# Encrypt and store this file securely offline
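One way to do that encryption locally, with standard GnuPG (a sketch, not GCP-specific):
# Symmetrically encrypt with a strong passphrase, then remove the plaintext
gpg --symmetric --cipher-algo AES256 encryption-key-backup.txt
shred -u encryption-key-backup.txt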
Pre-upgrade checklist:
  1. Review Infisical release notes for breaking changes
  2. Backup your database
  3. Verify your ENCRYPTION_KEY is backed up
Upgrade process:
# Update Helm repo
helm repo update

# Update image tag in values file
# Edit infisical-values.yaml: image.tag: "v0.X.X"

# Upgrade deployment
helm upgrade infisical infisical-helm-charts/infisical \
  --namespace infisical \
  --values infisical-values.yaml

# Monitor rollout
kubectl rollout status deployment/infisical -n infisical
Rollback if needed:
helm rollback infisical -n infisical
Google Cloud Logging: View Infisical logs in Cloud Logging:
  • Navigate to Logging > Logs Explorer in the GCP Console
  • Filter: resource.type="k8s_container" resource.labels.namespace_name="infisical"
Set up Cloud Monitoring alerts:
# Create alert for high CPU usage
gcloud alpha monitoring policies create \
  --notification-channels=<channel-id> \
  --display-name="Infisical High CPU" \
  --condition-display-name="Pod CPU > 80%" \
  --condition-threshold-value=0.8 \
  --condition-threshold-duration=300s \
  --condition-filter='resource.type="k8s_pod" AND resource.labels.namespace_name="infisical"'
Uptime checks:
  • Navigate to Monitoring > Uptime checks in the GCP Console
  • Create a check for https://infisical.example.com/api/status
  • Set check frequency (e.g., every 1 minute)
Enable OpenTelemetry:
env:
  - name: OTEL_TELEMETRY_COLLECTION_ENABLED
    value: "true"
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://otel-collector.monitoring.svc.cluster.local:4317"
Horizontal Pod Autoscaler: The HPA is configured in the Helm values. To verify:
kubectl get hpa -n infisical
kubectl describe hpa infisical -n infisical
GKE Cluster Autoscaler: Node autoscaling was enabled during cluster creation. To verify:
gcloud container clusters describe infisical-cluster \
  --region=us-central1 \
  --format="value(autoscaling)"
Adjust scaling parameters:
autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 20
  targetCPUUtilizationPercentage: 70
  targetMemoryUtilizationPercentage: 80
To completely remove Infisical and associated resources:

Delete the Helm release:
helm uninstall infisical -n infisical
kubectl delete namespace infisical
Delete GCP resources:
# Delete Memorystore
gcloud redis instances delete infisical-redis --region=us-central1

# Delete Cloud SQL (WARNING: This deletes all data!)
gcloud sql instances delete infisical-db

# Delete GKE cluster
gcloud container clusters delete infisical-cluster --region=us-central1

# Delete static IP
gcloud compute addresses delete infisical-ip --global

# Delete NAT and router
gcloud compute routers nats delete infisical-nat --router=infisical-router --region=us-central1
gcloud compute routers delete infisical-router --region=us-central1

# Delete firewall rules
gcloud compute firewall-rules delete allow-health-checks

# Delete VPC (after all resources are removed)
gcloud compute networks subnets delete infisical-subnet --region=us-central1
gcloud compute networks delete infisical-vpc

# Delete secrets
for secret in infisical-encryption-key infisical-auth-secret infisical-db-uri infisical-redis-url; do
  gcloud secrets delete $secret
done

# Delete service account
gcloud iam service-accounts delete infisical-gsa@<YOUR_PROJECT_ID>.iam.gserviceaccount.com
Deleting Cloud SQL will permanently delete all data. Ensure you have backups before proceeding.

Infrastructure as Code

A Terraform configuration for deploying Infisical infrastructure on GCP:
# main.tf
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 5.0"
    }
  }
}

provider "google" {
  project = var.project_id
  region  = var.region
}

variable "project_id" {
  description = "GCP Project ID"
  type        = string
}

variable "region" {
  description = "GCP Region"
  type        = string
  default     = "us-central1"
}

# VPC Network
resource "google_compute_network" "infisical_vpc" {
  name                    = "infisical-vpc"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "infisical_subnet" {
  name          = "infisical-subnet"
  ip_cidr_range = "10.0.0.0/20"
  region        = var.region
  network       = google_compute_network.infisical_vpc.id

  secondary_ip_range {
    range_name    = "pods"
    ip_cidr_range = "10.4.0.0/14"
  }

  secondary_ip_range {
    range_name    = "services"
    ip_cidr_range = "10.8.0.0/20"
  }

  private_ip_google_access = true
}

# Cloud Router and NAT
resource "google_compute_router" "infisical_router" {
  name    = "infisical-router"
  region  = var.region
  network = google_compute_network.infisical_vpc.id
}

resource "google_compute_router_nat" "infisical_nat" {
  name                               = "infisical-nat"
  router                             = google_compute_router.infisical_router.name
  region                             = var.region
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
}

# GKE Cluster
resource "google_container_cluster" "infisical_cluster" {
  name     = "infisical-cluster"
  location = var.region

  network    = google_compute_network.infisical_vpc.name
  subnetwork = google_compute_subnetwork.infisical_subnet.name

  remove_default_node_pool = true
  initial_node_count       = 1

  ip_allocation_policy {
    cluster_secondary_range_name  = "pods"
    services_secondary_range_name = "services"
  }

  private_cluster_config {
    enable_private_nodes    = true
    enable_private_endpoint = false
    master_ipv4_cidr_block  = "172.16.0.0/28"
  }

  workload_identity_config {
    workload_pool = "${var.project_id}.svc.id.goog"
  }

  logging_service    = "logging.googleapis.com/kubernetes"
  monitoring_service = "monitoring.googleapis.com/kubernetes"
}

resource "google_container_node_pool" "infisical_nodes" {
  name       = "infisical-node-pool"
  location   = var.region
  cluster    = google_container_cluster.infisical_cluster.name
  node_count = 1

  autoscaling {
    min_node_count = 1
    max_node_count = 5
  }

  node_config {
    machine_type = "n2-standard-2"
    disk_size_gb = 50

    oauth_scopes = [
      "https://www.googleapis.com/auth/cloud-platform"
    ]

    workload_metadata_config {
      mode = "GKE_METADATA"
    }
  }

  management {
    auto_repair  = true
    auto_upgrade = true
  }
}

# Cloud SQL
resource "google_sql_database_instance" "infisical_db" {
  name             = "infisical-db"
  database_version = "POSTGRES_15"
  region           = var.region

  settings {
    tier              = "db-n1-standard-2"
    availability_type = "REGIONAL"
    disk_type         = "PD_SSD"
    disk_size         = 20
    disk_autoresize   = true

    ip_configuration {
      ipv4_enabled    = false
      private_network = google_compute_network.infisical_vpc.id
    }

    backup_configuration {
      enabled                        = true
      start_time                     = "03:00"
      point_in_time_recovery_enabled = true
      transaction_log_retention_days = 7
    }
  }

  deletion_protection = true

  depends_on = [google_service_networking_connection.private_vpc_connection]
}

# Private VPC Connection for Cloud SQL
resource "google_compute_global_address" "private_ip_address" {
  name          = "google-managed-services-infisical-vpc"
  purpose       = "VPC_PEERING"
  address_type  = "INTERNAL"
  prefix_length = 16
  network       = google_compute_network.infisical_vpc.id
}

resource "google_service_networking_connection" "private_vpc_connection" {
  network                 = google_compute_network.infisical_vpc.id
  service                 = "servicenetworking.googleapis.com"
  reserved_peering_ranges = [google_compute_global_address.private_ip_address.name]
}

# Memorystore Redis
resource "google_redis_instance" "infisical_redis" {
  name           = "infisical-redis"
  tier           = "STANDARD_HA"
  memory_size_gb = 1
  region         = var.region

  authorized_network = google_compute_network.infisical_vpc.id
  redis_version      = "REDIS_7_0"
}

# Outputs
output "gke_cluster_name" {
  value = google_container_cluster.infisical_cluster.name
}

output "cloud_sql_private_ip" {
  value = google_sql_database_instance.infisical_db.private_ip_address
}

output "redis_host" {
  value = google_redis_instance.infisical_redis.host
}
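Apply it with the standard Terraform workflow:
terraform init
terraform plan -var="project_id=<YOUR_PROJECT_ID>"
terraform apply -var="project_id=<YOUR_PROJECT_ID>"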
This is a simplified example to get you started. For a complete deployment, you’ll need to add Secret Manager resources, IAM bindings, and Kubernetes resources. Adapt this example to your infrastructure standards.

Troubleshooting

Symptoms: Pods stuck in Pending, CrashLoopBackOff, or Error state.

Check pod status:
kubectl describe pod <pod-name> -n infisical
kubectl logs <pod-name> -n infisical --previous
Common causes:
  • Insufficient resources: Check node capacity and resource requests
  • Image pull errors: Verify image tag and registry access
  • Secret not found: Ensure infisical-secrets exists in the namespace
  • Database connection failed: Verify Cloud SQL private IP and credentials
Symptoms: Database connection errors in logs.

Verify connectivity:
# Check Cloud SQL instance status
gcloud sql instances describe infisical-db

# Test from a debug pod
kubectl run debug --rm -it --image=postgres:15 -n infisical -- \
  psql "postgresql://infisical_user:<password>@<cloud-sql-ip>:5432/infisical"
Common causes:
  • VPC peering not established: Check private service connection
  • Firewall rules blocking traffic: Verify firewall allows port 5432
  • Wrong credentials: Verify username and password
  • Cloud SQL not in same VPC: Ensure private IP is configured
Symptoms: Redis connection errors in logs.

Verify connectivity:
# Check Memorystore status
gcloud redis instances describe infisical-redis --region=us-central1

# Test from a debug pod
kubectl run debug --rm -it --image=redis:7 -n infisical -- \
  redis-cli -h <memorystore-ip> ping
Common causes:
  • Memorystore not in same VPC: Verify network configuration
  • Firewall rules blocking traffic: Verify firewall allows port 6379
  • Wrong IP address: Verify the Memorystore host IP
Symptoms: Cannot access Infisical via the external URL.

Check ingress status:
kubectl describe ingress -n infisical
kubectl get events -n infisical --sort-by='.lastTimestamp'
Common causes:
  • DNS not configured: Verify A record points to static IP
  • Certificate not ready: Check ManagedCertificate status
  • Backend unhealthy: Verify pods are passing health checks
  • Static IP not reserved: Ensure infisical-ip exists
Symptoms: ManagedCertificate stuck in Provisioning state.

Check certificate status:
kubectl describe managedcertificate infisical-cert -n infisical
Common causes:
  • DNS not propagated: Wait for DNS propagation (can take up to 48 hours)
  • Domain verification failed: Ensure A record is correct
  • Rate limiting: Google-managed certificate issuance is subject to rate limits
  • Ingress not ready: Ensure ingress has an external IP assigned
Symptoms: Pods being OOMKilled or throttled.

Check resource usage:
kubectl top pods -n infisical
kubectl describe pod <pod-name> -n infisical | grep -A5 "Limits\|Requests"
Solutions:
  • Increase resource limits in Helm values
  • Enable HPA for automatic scaling
  • Check for memory leaks in application logs
  • Review Cloud Monitoring dashboards for trends
Symptoms: Errors about decryption or an invalid encryption key.

Common causes:
  • Wrong ENCRYPTION_KEY: Verify the key matches what was used to encrypt data
  • Key not set: Ensure the secret contains ENCRYPTION_KEY
  • Key changed: The encryption key cannot be changed after initial setup
Verify key is set:
kubectl get secret infisical-secrets -n infisical -o jsonpath='{.data.ENCRYPTION_KEY}' | base64 -d
If you’ve lost your encryption key, encrypted data cannot be recovered. Always maintain secure backups of your encryption key.