Deploy SFTP Gateway with Helm Chart
TL;DR: Quick Summary
What: Deploy SFTP Gateway on GKE using a Helm chart with either bundled PostgreSQL or Cloud SQL
Steps: Download the chart, configure the prerequisites, run helm install.
Quick start:
helm install sftpgw ./sftp-gateway-gcp \
  --namespace sftpgw --create-namespace \
  --set security.clientId=$(openssl rand -hex 16) \
  --set security.clientSecret=$(openssl rand -hex 32) \
  --set security.jwtSecret=$(uuidgen) \
  --set postgresql.auth.password=$(openssl rand -hex 16) \
  --set gcp.bucket=YOUR_BUCKET_NAME \
  --set serviceAccount.annotations."iam\.gke\.io/gcp-service-account"=sftpgw@YOUR_PROJECT_ID.iam.gserviceaccount.com
Overview
The SFTP Gateway Helm chart simplifies deploying SFTP Gateway on GKE. It handles creating all the Kubernetes resources (Deployments, Services, ConfigMaps, Secrets, PVCs, ServiceAccount) and supports two database modes:
- Bundled PostgreSQL (default) — runs a PostgreSQL container inside the cluster. Good for testing and simple deployments.
- Cloud SQL with Auth Proxy — connects to a managed Cloud SQL PostgreSQL instance via a sidecar proxy. Recommended for production.
Docker Hub Images
The SFTP Gateway container images are available on Docker Hub:
- Backend: thorntech/sftpgateway-backend:3.8.1
- Admin UI: thorntech/sftpgateway-admin-ui:3.8.1
Both images support amd64 and arm64 architectures.
Prerequisites
- A GKE cluster (Standard or Autopilot) with Workload Identity enabled
- Helm 3 installed
- kubectl configured to access your cluster
- A GCS bucket for SFTP file storage
- A GCP service account with the Storage Admin role
- Workload Identity binding between the GCP service account and the Kubernetes service account
:::tip
If you already completed the prerequisites from the Deploy on GKE Autopilot guide, you can skip ahead to Install the Helm Chart.
:::
Step 1: Create a GKE cluster
Option A: GKE Autopilot (recommended for hands-off management)
gcloud container clusters create-auto sftpgw-cluster \
--region=us-central1 \
--project=YOUR_PROJECT_ID
Option B: GKE Standard (recommended when you need full node control)
gcloud container clusters create sftpgw-cluster \
--zone=us-central1-a \
--num-nodes=2 \
--machine-type=e2-standard-4 \
--workload-pool=YOUR_PROJECT_ID.svc.id.goog \
--project=YOUR_PROJECT_ID
:::tip
For GKE Standard, use at least e2-standard-4 nodes (4 vCPU, 16 GB RAM) to ensure sufficient resources for all SFTP Gateway components. Smaller node types like e2-standard-2 may not have enough allocatable CPU after system pod reservations.
:::
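As a rough sanity check on node sizing (the reservation figure below is an approximation for illustration, not an exact GKE number):

```shell
# An e2-standard-4 node exposes 4000m of CPU; assume roughly 1000m goes to
# kubelet/system reservations, leaving about 3000m allocatable for workloads.
# An e2-standard-2 (2000m) would leave far less headroom after the same cut.
NODE_CPU_M=4000
RESERVED_M=1000   # approximate system reservation; varies by GKE version
ALLOCATABLE_M=$((NODE_CPU_M - RESERVED_M))
echo "${ALLOCATABLE_M}m allocatable"
```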
Step 2: Create a GCS bucket
gcloud storage buckets create gs://YOUR_BUCKET_NAME \
--project=YOUR_PROJECT_ID \
--location=us-central1
Step 3: Create a GCP service account with Storage Admin
# Create the service account
gcloud iam service-accounts create sftpgw \
--display-name="SFTP Gateway" \
--project=YOUR_PROJECT_ID
# Grant Storage Admin role (for GCS access)
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
--member="serviceAccount:sftpgw@YOUR_PROJECT_ID.iam.gserviceaccount.com" \
--role="roles/storage.admin"
Step 4: Configure Workload Identity
Workload Identity allows GKE pods to authenticate as a GCP service account without storing credentials.
# Get cluster credentials
# For Autopilot (regional cluster):
gcloud container clusters get-credentials sftpgw-cluster \
--region=us-central1 \
--project=YOUR_PROJECT_ID
# For Standard (zonal cluster):
# gcloud container clusters get-credentials sftpgw-cluster \
# --zone=us-central1-a \
# --project=YOUR_PROJECT_ID
# Create namespace
kubectl create namespace sftpgw
# Bind the Kubernetes SA to the GCP SA
# The Helm chart creates a SA named <release>-sftp-gateway-gcp by default
gcloud iam service-accounts add-iam-policy-binding \
sftpgw@YOUR_PROJECT_ID.iam.gserviceaccount.com \
--role="roles/iam.workloadIdentityUser" \
--member="serviceAccount:YOUR_PROJECT_ID.svc.id.goog[sftpgw/sftpgw-sftp-gateway-gcp]" \
--project=YOUR_PROJECT_ID
:::note
The Kubernetes service account name follows the pattern <release-name>-sftp-gateway-gcp. If your release name is sftpgw, the SA will be sftpgw-sftp-gateway-gcp.
:::
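Both the Kubernetes SA name and the Workload Identity member string are derived mechanically, so you can build them in the shell before running the binding command (variable names here are illustrative):

```shell
# Derive the Kubernetes SA name (<release-name>-sftp-gateway-gcp) and the
# Workload Identity member string from the release name, namespace, and project.
RELEASE=sftpgw
NAMESPACE=sftpgw
PROJECT=YOUR_PROJECT_ID
KSA_NAME="${RELEASE}-sftp-gateway-gcp"
MEMBER="serviceAccount:${PROJECT}.svc.id.goog[${NAMESPACE}/${KSA_NAME}]"
echo "$KSA_NAME"
echo "$MEMBER"
```

Passing `--member="$MEMBER"` to `gcloud iam service-accounts add-iam-policy-binding` avoids typos in the bracketed namespace/SA pair.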
Step 5: Download the Helm chart
curl -LO https://thorntech-public-documents.s3.amazonaws.com/sftpgateway/helm-charts/gcp/sftp-gateway-gcp-0.1.0.tgz
tar xzf sftp-gateway-gcp-0.1.0.tgz
cd sftp-gateway-gcp
The chart includes the bundled PostgreSQL dependency in the charts/ directory, so no additional downloads are needed.
Option A: Bundled PostgreSQL (quickstart)
This is the simplest option — a PostgreSQL container runs alongside the backend inside your cluster.
Install
helm install sftpgw ./sftp-gateway-gcp \
--namespace sftpgw --create-namespace \
--set security.clientId=$(openssl rand -hex 16) \
--set security.clientSecret=$(openssl rand -hex 32) \
--set security.jwtSecret=$(uuidgen) \
--set postgresql.auth.password=$(openssl rand -hex 16) \
--set gcp.bucket=YOUR_BUCKET_NAME \
--set serviceAccount.annotations."iam\.gke\.io/gcp-service-account"=sftpgw@YOUR_PROJECT_ID.iam.gserviceaccount.com
:::note GKE Autopilot
On GKE Autopilot, resource requests must equal limits. Add these flags to the install command above for optimal resource allocation:
--set backend.resources.requests.cpu=4000m \
--set backend.resources.requests.memory=4Gi \
--set backend.resources.limits.cpu=4000m \
--set backend.resources.limits.memory=4Gi \
--set config.javaOpts="-Xms2g -Xmx6g -XX:+UseG1GC -XX:MaxGCPauseMillis=200"
:::
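The inline $(...) substitutions in the install command generate secrets you cannot easily recover afterwards. A variant that captures them in variables first, so you can record them and reuse them on upgrade, might look like:

```shell
# Generate and keep the secrets before installing; the uuidgen call falls back
# to the kernel's UUID source on Linux systems without util-linux installed.
CLIENT_ID=$(openssl rand -hex 16)
CLIENT_SECRET=$(openssl rand -hex 32)
JWT_SECRET=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)
DB_PASSWORD=$(openssl rand -hex 16)

# Then pass the captured values to helm install, e.g.:
#   --set security.clientId=$CLIENT_ID --set security.clientSecret=$CLIENT_SECRET ...
echo "${#CLIENT_ID} ${#CLIENT_SECRET} ${#JWT_SECRET} ${#DB_PASSWORD}"
```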
That's it. After a few minutes, all pods should be running:
kubectl get pods -n sftpgw
Expected output:
NAME READY STATUS RESTARTS AGE
sftpgw-postgresql-0 1/1 Running 0 2m
sftpgw-sftp-gateway-gcp-backend-xxxxx 1/1 Running 0 2m
sftpgw-sftp-gateway-gcp-ui-xxxxx 1/1 Running 0 2m
sftpgw-sftp-gateway-gcp-ui-yyyyy 1/1 Running 0 2m
Option B: Cloud SQL with Auth Proxy (production)
For production deployments, use a managed Cloud SQL PostgreSQL instance. The Helm chart includes a Cloud SQL Auth Proxy sidecar that provides secure, IAM-authenticated database connections.
Create the Cloud SQL instance
# Create the instance
gcloud sql instances create sftpgw-db \
--database-version=POSTGRES_16 \
--edition=ENTERPRISE \
--tier=db-custom-2-8192 \
--region=us-central1 \
--availability-type=ZONAL \
--storage-type=SSD \
--storage-size=10GB \
--project=YOUR_PROJECT_ID
# Create database and user
gcloud sql databases create sftpgw \
--instance=sftpgw-db \
--project=YOUR_PROJECT_ID
gcloud sql users create sftpgw \
--instance=sftpgw-db \
--password=YOUR_DATABASE_PASSWORD \
--project=YOUR_PROJECT_ID
# Get the connection name (you'll need this)
gcloud sql instances describe sftpgw-db \
--format="value(connectionName)" \
--project=YOUR_PROJECT_ID
# Output: YOUR_PROJECT_ID:us-central1:sftpgw-db
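As the sample output shows, the connection name always follows the PROJECT:REGION:INSTANCE pattern, so you can also assemble it directly (placeholder values shown):

```shell
# Build the Cloud SQL connection name from its three components.
PROJECT=YOUR_PROJECT_ID
REGION=us-central1
INSTANCE=sftpgw-db
CONNECTION_NAME="${PROJECT}:${REGION}:${INSTANCE}"
echo "$CONNECTION_NAME"
```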
Grant Cloud SQL Client role
The GCP service account needs the Cloud SQL Client role in addition to Storage Admin:
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
--member="serviceAccount:sftpgw@YOUR_PROJECT_ID.iam.gserviceaccount.com" \
--role="roles/cloudsql.client"
Install with Cloud SQL
helm install sftpgw ./sftp-gateway-gcp \
--namespace sftpgw --create-namespace \
--set security.clientId=$(openssl rand -hex 16) \
--set security.clientSecret=$(openssl rand -hex 32) \
--set security.jwtSecret=$(uuidgen) \
--set gcp.bucket=YOUR_BUCKET_NAME \
--set serviceAccount.annotations."iam\.gke\.io/gcp-service-account"=sftpgw@YOUR_PROJECT_ID.iam.gserviceaccount.com \
--set postgresql.enabled=false \
--set cloudSqlProxy.enabled=true \
--set cloudSqlProxy.instanceConnectionName="YOUR_PROJECT_ID:us-central1:sftpgw-db" \
--set externalDatabase.host=127.0.0.1 \
--set externalDatabase.password=YOUR_DATABASE_PASSWORD
:::note GKE Autopilot
On GKE Autopilot, resource requests must equal limits. Add these flags to the install command above:
--set backend.resources.requests.cpu=4000m \
--set backend.resources.requests.memory=4Gi \
--set backend.resources.limits.cpu=4000m \
--set backend.resources.limits.memory=4Gi \
--set config.javaOpts="-Xms2g -Xmx6g -XX:+UseG1GC -XX:MaxGCPauseMillis=200"
:::
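The Cloud SQL flags can also live in a values file (a sketch; the --set paths map onto nested keys per standard Helm semantics):

```yaml
# values-cloudsql.yaml (sketch)
postgresql:
  enabled: false
cloudSqlProxy:
  enabled: true
  instanceConnectionName: "YOUR_PROJECT_ID:us-central1:sftpgw-db"
externalDatabase:
  host: 127.0.0.1
  password: YOUR_DATABASE_PASSWORD
```

Install with -f values-cloudsql.yaml alongside the remaining --set flags.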
The backend pod will show 2/2 containers — the backend and the Cloud SQL Auth Proxy sidecar:
NAME READY STATUS RESTARTS AGE
sftpgw-sftp-gateway-gcp-backend-xxxxx 2/2 Running 0 3m
sftpgw-sftp-gateway-gcp-ui-xxxxx 1/1 Running 0 3m
sftpgw-sftp-gateway-gcp-ui-yyyyy 1/1 Running 0 3m
Access the deployment
Get the Admin UI URL
# Wait for the LoadBalancer IP to be assigned
kubectl get svc -n sftpgw -w
# Get the UI IP
export UI_IP=$(kubectl get svc sftpgw-sftp-gateway-gcp-ui -n sftpgw \
-o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "Admin UI: https://$UI_IP"
Open the URL in your browser. You'll see a certificate warning because the chart generates a self-signed TLS certificate by default — this is expected. Accept the warning to continue.
Get the SFTP endpoint
export SFTP_HOST=$(kubectl get svc sftpgw-sftp-gateway-gcp-sftp -n sftpgw \
-o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "SFTP Host: $SFTP_HOST"
echo "SFTP Port: 22"
Initial setup
- Open the Admin UI in your browser
- Create your initial Web Admin account
- Create SFTP users
- Connect with your SFTP client (FileZilla, WinSCP, etc.)
:::warning License required for SFTP
SFTP connections require a valid license key. Without one, the backend runs normally and the Admin UI is fully accessible, but SFTP clients will be immediately disconnected with "The SFTP Server's license has expired." To add your license during install:
--set license.key=YOUR_LICENSE_KEY
Or add it later via helm upgrade --set license.key=YOUR_LICENSE_KEY --reuse-values.
:::
Configuration reference
Key values
| Parameter | Description | Default |
|---|---|---|
| gcp.bucket | GCS bucket name | "" |
| backend.replicaCount | Number of backend replicas | 1 |
| backend.resources.requests.cpu | Backend CPU request | 500m |
| backend.resources.requests.memory | Backend memory request | 2Gi |
| backend.resources.limits.cpu | Backend CPU limit | 1500m |
| backend.resources.limits.memory | Backend memory limit | 4Gi |
| backend.useEmptyDir | Use emptyDir instead of PVC (for multi-replica) | false |
| backend.service.externalTrafficPolicy | Local preserves client IP, Cluster for cross-node balancing | Local |
| ui.replicaCount | Number of UI replicas | 2 |
| ui.tls.certificate | Custom TLS certificate (PEM) | "" (self-signed) |
| ui.tls.privateKey | Custom TLS private key (PEM) | "" |
| ui.service.loadBalancerSourceRanges | Restrict Admin UI to specific IPs | [] |
| config.javaOpts | JVM options for backend | "" |
| postgresql.enabled | Use bundled PostgreSQL | true |
| postgresql.auth.password | PostgreSQL password (required) | "" |
| cloudSqlProxy.enabled | Enable Cloud SQL Auth Proxy sidecar | false |
| cloudSqlProxy.instanceConnectionName | Cloud SQL connection name | "" |
| externalDatabase.host | External database host (use 127.0.0.1 with Cloud SQL Proxy) | 127.0.0.1 |
| externalDatabase.password | External database password | "" |
| serviceAccount.annotations | Annotations for the Kubernetes service account (Workload Identity) | {} |
| license.key | SFTP Gateway license key | "" |
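Instead of long --set chains, these values can be collected in a values file (a sketch covering a subset of the keys above; the GENERATED_* placeholders stand in for secrets you create yourself):

```yaml
# values.yaml (sketch)
gcp:
  bucket: YOUR_BUCKET_NAME
serviceAccount:
  annotations:
    iam.gke.io/gcp-service-account: sftpgw@YOUR_PROJECT_ID.iam.gserviceaccount.com
security:
  clientId: GENERATED_CLIENT_ID
  clientSecret: GENERATED_CLIENT_SECRET
  jwtSecret: GENERATED_JWT_SECRET
postgresql:
  enabled: true
  auth:
    password: GENERATED_DB_PASSWORD
```

Install with helm install sftpgw ./sftp-gateway-gcp -f values.yaml --namespace sftpgw --create-namespace. Per standard Helm semantics, each --set path corresponds one-to-one with a nested key here.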
Custom TLS certificate
To use your own TLS certificate instead of the auto-generated self-signed one:
helm install sftpgw ./sftp-gateway-gcp \
--namespace sftpgw --create-namespace \
--set-file ui.tls.certificate=path/to/tls.crt \
--set-file ui.tls.privateKey=path/to/tls.key \
# ... other values
Restrict Admin UI access
Lock down the Admin UI to specific IP addresses:
# values-production.yaml
ui:
service:
loadBalancerSourceRanges:
- "203.0.113.50/32" # Office IP
- "198.51.100.0/24" # VPN range
helm install sftpgw ./sftp-gateway-gcp -f values-production.yaml --namespace sftpgw --create-namespace
Multi-replica SFTP backend
For high-throughput deployments, scale the backend with emptyDir volumes (files are streamed directly to GCS, so local persistence is not needed):
# values-high-throughput.yaml
backend:
replicaCount: 3
useEmptyDir: true
service:
externalTrafficPolicy: Cluster
annotations:
cloud.google.com/l4-rbs: "enabled"
PROXY Protocol (TCP Proxy Load Balancer)
If you need a TCP Proxy Load Balancer with Network Endpoint Groups (NEG) for global load balancing:
proxyProtocol:
enabled: true
unrestrictedMode: true
backend:
service:
type: ClusterIP
annotations:
cloud.google.com/neg: '{"exposed_ports":{"22":{"name":"sftpgw-sftp-neg"}}}'
Upgrading
helm upgrade sftpgw ./sftp-gateway-gcp \
--namespace sftpgw \
--reuse-values
:::warning
If you used --set flags during install, you must pass the same values during upgrade (or use --reuse-values). Helm does not persist --set values between releases.
:::
Uninstalling
helm uninstall sftpgw --namespace sftpgw
# PVCs are not deleted automatically — remove if no longer needed:
kubectl delete pvc --all -n sftpgw
Troubleshooting
Backend pod stuck in CrashLoopBackOff
Check the logs:
kubectl logs -n sftpgw -l app.kubernetes.io/component=backend --tail=50
Common causes:
- Database connection failed: Verify the Cloud SQL instance is running and the connection name is correct
- Workload Identity not configured: Check that the GCP SA has the correct IAM bindings. For GKE Standard, verify the cluster was created with
--workload-pool=YOUR_PROJECT_ID.svc.id.goog
UI pod in CrashLoopBackOff
The UI container requires TLS certificates. The Helm chart auto-generates a self-signed certificate, but if you see nginx certificate errors, verify the secret exists:
kubectl get secret -n sftpgw -l app.kubernetes.io/instance=sftpgw
Backend pod stuck in Pending (Insufficient cpu)
On GKE Standard, the backend pod may fail to schedule if nodes don't have enough allocatable CPU. Check with:
kubectl describe pod -n sftpgw -l app.kubernetes.io/component=backend
If you see Insufficient cpu, either use larger node types (e2-standard-4 or above) or reduce the resource requests during install:
--set backend.resources.requests.cpu=250m \
--set backend.resources.requests.memory=1Gi
Backend pod not becoming Ready
The backend takes approximately 30-90 seconds to start. On GKE Autopilot, initial deployments may take longer due to node provisioning. Check readiness probe status:
kubectl describe pod -n sftpgw -l app.kubernetes.io/component=backend
Cloud SQL Auth Proxy connection errors
# Check proxy logs
kubectl logs -n sftpgw -l app.kubernetes.io/component=backend -c cloud-sql-proxy
# Verify Workload Identity binding
gcloud iam service-accounts get-iam-policy \
sftpgw@YOUR_PROJECT_ID.iam.gserviceaccount.com
PVC Multi-Attach errors during upgrades
If the backend pod is stuck in Init with a Multi-Attach error, the old pod is still holding the PVC:
# Delete the old pod
kubectl delete pod <old-pod-name> -n sftpgw
# Or switch to emptyDir to avoid PVC contention entirely
helm upgrade sftpgw ./sftp-gateway-gcp --set backend.useEmptyDir=true --reuse-values -n sftpgw
Workload Identity not working on GKE Standard
If the backend can connect to the database but fails to access GCS, Workload Identity may not be enabled on the cluster. Verify with:
gcloud container clusters describe sftpgw-cluster \
--zone=us-central1-a \
--format="value(workloadIdentityConfig.workloadPool)"
If this returns empty, the cluster was created without --workload-pool. You can enable it on an existing cluster:
gcloud container clusters update sftpgw-cluster \
--zone=us-central1-a \
--workload-pool=YOUR_PROJECT_ID.svc.id.goog
After enabling, you must also enable it on the node pool:
gcloud container node-pools update default-pool \
--cluster=sftpgw-cluster \
--zone=us-central1-a \
--workload-metadata=GKE_METADATA