Deploy SFTP Gateway with Helm Chart
TLDR - Quick Summary
What: Deploy SFTP Gateway on EKS using a Helm chart with either bundled PostgreSQL or Amazon RDS
Steps: Download the chart, configure the prerequisites, and run helm install.
Quick start (with an eksctl-managed service account):
helm install sftpgw ./sftp-gateway-aws \
  --namespace sftpgw --create-namespace \
  --set security.clientId=$(openssl rand -hex 16) \
  --set security.clientSecret=$(openssl rand -hex 32) \
  --set security.jwtSecret=$(uuidgen) \
  --set postgresql.auth.password=$(openssl rand -hex 16) \
  --set aws.bucket=YOUR_BUCKET_NAME \
  --set aws.region=us-east-1 \
  --set serviceAccount.create=false \
  --set serviceAccount.name=sftpgw-sftp-gateway-aws
Overview
The SFTP Gateway Helm chart simplifies deploying SFTP Gateway on EKS. It handles creating all the Kubernetes resources (Deployments, Services, ConfigMaps, Secrets, PVCs, ServiceAccount) and supports two database modes:
- Bundled PostgreSQL (default) — runs a PostgreSQL container inside the cluster. Good for testing and simple deployments.
- External database (RDS) — connects to a managed Amazon RDS PostgreSQL instance. Recommended for production.
Docker Hub Images
The SFTP Gateway container images are available on Docker Hub:
- Backend: thorntech/sftpgateway-backend:3.8.1
- Admin UI: thorntech/sftpgateway-admin-ui:3.8.1
Both images support amd64 and arm64 architectures.
Prerequisites
- An EKS cluster with OIDC provider enabled
- Helm 3 installed
- kubectl configured to access your cluster
- An S3 bucket for SFTP file storage
- An IAM role with S3 permissions (for IRSA)
- The EBS CSI driver addon installed on the cluster (required for PVC provisioning)
:::tip If you already completed the prerequisites from the Deploy on EKS guide, you can skip ahead to Install the Helm Chart. :::
Step 1: Create an EKS cluster
Create a cluster config file called eks-cluster.yaml:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: sftpgw-cluster
  region: us-east-1
  version: "1.32"
iam:
  withOIDC: true
managedNodeGroups:
  - name: sftpgw-nodes
    instanceType: t3.large
    desiredCapacity: 2
    minSize: 1
    maxSize: 4
    volumeSize: 50
    iam:
      withAddonPolicies:
        ebs: true
addons:
  - name: vpc-cni
  - name: coredns
  - name: kube-proxy
  - name: aws-ebs-csi-driver
Create the cluster (this takes ~15-20 minutes):
eksctl create cluster -f eks-cluster.yaml
:::warning
The aws-ebs-csi-driver addon is required for PVC provisioning on EKS. Without it, PersistentVolumeClaims will stay in Pending state even though the gp2 StorageClass exists by default. If you're using an existing cluster, verify the addon is installed:
aws eks describe-addon --cluster-name sftpgw-cluster --addon-name aws-ebs-csi-driver
# Install if missing:
aws eks create-addon --cluster-name sftpgw-cluster --addon-name aws-ebs-csi-driver
:::
Step 2: Create an S3 bucket
aws s3 mb s3://YOUR_BUCKET_NAME --region us-east-1
Step 3: Create an IAM role for S3 access (IRSA)
IRSA (IAM Roles for Service Accounts) allows EKS pods to authenticate as an IAM role without storing credentials.
# Create the namespace first (eksctl needs it to exist)
kubectl create namespace sftpgw
# Create the IAM service account with S3 access
# The Helm chart creates a SA named <release>-sftp-gateway-aws by default
eksctl create iamserviceaccount \
--cluster=sftpgw-cluster \
--namespace=sftpgw \
--name=sftpgw-sftp-gateway-aws \
--attach-policy-arn=arn:aws:iam::aws:policy/AmazonS3FullAccess \
--approve \
--region=us-east-1
:::note
The Kubernetes service account name follows the pattern <release-name>-sftp-gateway-aws. If your release name is sftpgw, the SA will be sftpgw-sftp-gateway-aws.
:::
:::warning
When eksctl creates a service account, it adds its own labels that conflict with Helm's SA creation. You must use serviceAccount.create=false during Helm install so Helm uses the existing SA instead of trying to create a new one. See Install below.
:::
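Before moving on, you can confirm that eksctl attached the IAM role annotation to the service account (the name below assumes the sftpgw release name used throughout this guide):

```shell
# Print the IAM role ARN annotation that IRSA relies on.
# Empty output means the service account is not wired to an IAM role.
kubectl get sa sftpgw-sftp-gateway-aws -n sftpgw \
  -o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}'
```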
Step 4: Download the Helm chart
curl -LO https://thorntech-public-documents.s3.amazonaws.com/sftpgateway/helm-charts/aws/sftp-gateway-aws-0.1.0.tgz
tar xzf sftp-gateway-aws-0.1.0.tgz
cd sftp-gateway-aws
The chart includes the bundled PostgreSQL dependency in the charts/ directory, so no additional downloads are needed.
Option A: Bundled PostgreSQL (quickstart)
This is the simplest option — a PostgreSQL container runs alongside the backend inside your cluster.
Install
Because eksctl already created the service account (with IRSA annotations), tell Helm not to create it:
helm install sftpgw ./sftp-gateway-aws \
--namespace sftpgw \
--set security.clientId=$(openssl rand -hex 16) \
--set security.clientSecret=$(openssl rand -hex 32) \
--set security.jwtSecret=$(uuidgen) \
--set postgresql.auth.password=$(openssl rand -hex 16) \
--set aws.bucket=YOUR_BUCKET_NAME \
--set aws.region=us-east-1 \
--set serviceAccount.create=false \
--set serviceAccount.name=sftpgw-sftp-gateway-aws
:::note
The namespace was already created in Step 3 (required by eksctl), so --create-namespace is not needed here. If you skipped Step 3 and are managing IRSA without eksctl, add --create-namespace to the install command.
:::
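As an alternative to passing secrets via --set (which leaves them in your shell history), you can generate them once into a local values file and pass it with -f. This is a sketch: the keys below mirror the --set paths used in this guide, and secrets.yaml is a local filename of your choosing.

```shell
# Generate credentials once and keep them in a local values file.
# The YAML keys mirror the --set paths from the install command above.
cat > secrets.yaml <<EOF
security:
  clientId: $(openssl rand -hex 16)
  clientSecret: $(openssl rand -hex 32)
  jwtSecret: $(uuidgen)
postgresql:
  auth:
    password: $(openssl rand -hex 16)
EOF
# Restrict read access, since this file now holds secrets.
chmod 600 secrets.yaml
```

Then install with `helm install sftpgw ./sftp-gateway-aws --namespace sftpgw -f secrets.yaml` plus the remaining non-secret --set flags. This also makes later upgrades reproducible, since the same file can be passed again.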
After a few minutes, all pods should be running:
kubectl get pods -n sftpgw
Expected output:
NAME READY STATUS RESTARTS AGE
sftpgw-postgresql-0 1/1 Running 0 2m
sftpgw-sftp-gateway-aws-backend-xxxxx 1/1 Running 0 2m
sftpgw-sftp-gateway-aws-ui-xxxxx 1/1 Running 0 2m
sftpgw-sftp-gateway-aws-ui-yyyyy 1/1 Running 0 2m
Option B: Amazon RDS (production)
For production deployments, use a managed RDS PostgreSQL instance.
Create the RDS instance
# Get the VPC ID from your EKS cluster
VPC_ID=$(aws eks describe-cluster --name sftpgw-cluster \
--query "cluster.resourcesVpcConfig.vpcId" --output text)
# Get the private subnet IDs (RDS should only use private subnets)
PRIVATE_SUBNET_IDS=$(aws ec2 describe-subnets \
--filters "Name=vpc-id,Values=$VPC_ID" "Name=tag:Name,Values=*Private*" \
--query "Subnets[].SubnetId" --output text)
# If the above returns empty (subnets aren't tagged), use all subnets:
# PRIVATE_SUBNET_IDS=$(aws ec2 describe-subnets \
# --filters "Name=vpc-id,Values=$VPC_ID" \
# --query "Subnets[].SubnetId" --output text)
# Create DB subnet group
aws rds create-db-subnet-group \
--db-subnet-group-name sftpgw-db-subnet \
--db-subnet-group-description "SFTP Gateway DB subnets" \
--subnet-ids $PRIVATE_SUBNET_IDS
# Get the cluster security group
SG_ID=$(aws eks describe-cluster --name sftpgw-cluster \
--query "cluster.resourcesVpcConfig.clusterSecurityGroupId" --output text)
# Create the RDS instance
aws rds create-db-instance \
--db-instance-identifier sftpgw-db \
--db-instance-class db.t3.medium \
--engine postgres \
--engine-version 16 \
--master-username sftpgw \
--master-user-password YOUR_DATABASE_PASSWORD \
--allocated-storage 20 \
--db-name sftpgw \
--vpc-security-group-ids $SG_ID \
--db-subnet-group-name sftpgw-db-subnet \
--no-publicly-accessible
# Wait for the instance to become available
aws rds wait db-instance-available --db-instance-identifier sftpgw-db
# Get the endpoint
aws rds describe-db-instances --db-instance-identifier sftpgw-db \
--query "DBInstances[0].Endpoint.Address" --output text
# Output: sftpgw-db.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com
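Before installing, it is worth confirming that pods in the cluster can actually reach the new instance. One way, sketched here, is a throwaway psql pod; replace the endpoint and password with your own values:

```shell
# One-off pod that attempts a psql connection to the RDS endpoint,
# prints the server version on success, then deletes itself.
kubectl run pg-check --rm -i --restart=Never -n sftpgw \
  --image=postgres:16 \
  --env=PGPASSWORD=YOUR_DATABASE_PASSWORD -- \
  psql -h sftpgw-db.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com \
       -U sftpgw -d sftpgw -c 'SELECT version();'
```

If this hangs or times out, check that the RDS security group allows inbound port 5432 from the cluster security group.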
Install with RDS
helm install sftpgw ./sftp-gateway-aws \
--namespace sftpgw \
--set security.clientId=$(openssl rand -hex 16) \
--set security.clientSecret=$(openssl rand -hex 32) \
--set security.jwtSecret=$(uuidgen) \
--set aws.bucket=YOUR_BUCKET_NAME \
--set aws.region=us-east-1 \
--set serviceAccount.create=false \
--set serviceAccount.name=sftpgw-sftp-gateway-aws \
--set postgresql.enabled=false \
--set externalDatabase.host=sftpgw-db.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com \
--set externalDatabase.password=YOUR_DATABASE_PASSWORD
Expected output:
NAME READY STATUS RESTARTS AGE
sftpgw-sftp-gateway-aws-backend-xxxxx 1/1 Running 0 3m
sftpgw-sftp-gateway-aws-ui-xxxxx 1/1 Running 0 3m
sftpgw-sftp-gateway-aws-ui-yyyyy 1/1 Running 0 3m
Access the deployment
Get the Admin UI URL
# Wait for the NLB to be provisioned
kubectl get svc -n sftpgw -w
# Get the UI hostname
export UI_HOST=$(kubectl get svc sftpgw-sftp-gateway-aws-ui -n sftpgw \
-o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "Admin UI: https://$UI_HOST"
Open the URL in your browser. You'll see a certificate warning because the chart generates a self-signed TLS certificate by default — this is expected. Accept the warning to continue.
:::note AWS NLBs use DNS hostnames (not IP addresses) for their endpoints. It may take a few minutes for the DNS to propagate after the NLB is created. :::
Get the SFTP endpoint
export SFTP_HOST=$(kubectl get svc sftpgw-sftp-gateway-aws-sftp -n sftpgw \
-o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "SFTP Host: $SFTP_HOST"
echo "SFTP Port: 22"
Initial setup
- Open the Admin UI in your browser
- Create your initial Web Admin account
- Create SFTP users
- Connect with your SFTP client (FileZilla, WinSCP, etc.)
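Once an SFTP user exists, a quick end-to-end check from the command line looks like the following. Here, testuser is a placeholder for the user you created, and the object's path within the bucket depends on that user's folder configuration:

```shell
# Upload a test file over SFTP, then confirm it landed in S3.
echo "hello" > test.txt
sftp -P 22 testuser@$SFTP_HOST <<'EOF'
put test.txt
EOF
aws s3 ls s3://YOUR_BUCKET_NAME --recursive | grep test.txt
```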
Configuration reference
Key values
| Parameter | Description | Default |
|---|---|---|
| aws.region | AWS region | us-east-1 |
| aws.bucket | S3 bucket name | "" |
| backend.replicaCount | Number of backend replicas | 1 |
| backend.resources.requests.cpu | Backend CPU request | 500m |
| backend.resources.requests.memory | Backend memory request | 2Gi |
| backend.resources.limits.cpu | Backend CPU limit | 1500m |
| backend.resources.limits.memory | Backend memory limit | 4Gi |
| backend.service.externalTrafficPolicy | Local preserves client IP; Cluster balances across nodes | Local |
| ui.replicaCount | Number of UI replicas | 2 |
| ui.tls.certificate | Custom TLS certificate (PEM) | "" (self-signed) |
| ui.tls.privateKey | Custom TLS private key (PEM) | "" |
| ui.service.loadBalancerSourceRanges | Restrict Admin UI to specific IPs | [] |
| config.javaOpts | JVM options for backend | "" |
| postgresql.enabled | Use bundled PostgreSQL | true |
| postgresql.auth.password | PostgreSQL password (required) | "" |
| externalDatabase.host | RDS endpoint | "" |
| externalDatabase.password | RDS password | "" |
| serviceAccount.create | Create a new service account | true |
| serviceAccount.name | Name of existing service account (when create=false) | "" |
| serviceAccount.annotations | SA annotations (for IRSA role ARN) | {} |
| license.key | SFTP Gateway license key | "" |
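For illustration, here are several of these values combined into a single values file. All key names come from the table above; the specific values shown are placeholders and examples, not recommendations:

```yaml
# Example values file for an RDS-backed production install.
backend:
  replicaCount: 1
ui:
  replicaCount: 2
postgresql:
  enabled: false
externalDatabase:
  host: sftpgw-db.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com
  password: YOUR_DATABASE_PASSWORD
serviceAccount:
  create: false
  name: sftpgw-sftp-gateway-aws
license:
  key: ""
```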
Custom TLS certificate
To use your own TLS certificate instead of the auto-generated self-signed one:
helm install sftpgw ./sftp-gateway-aws \
--namespace sftpgw \
--set-file ui.tls.certificate=path/to/tls.crt \
--set-file ui.tls.privateKey=path/to/tls.key \
# ... other values
Restrict Admin UI access
Lock down the Admin UI to specific IP addresses:
# values-production.yaml
ui:
service:
loadBalancerSourceRanges:
- "203.0.113.50/32" # Office IP
- "198.51.100.0/24" # VPN range
helm install sftpgw ./sftp-gateway-aws -f values-production.yaml --namespace sftpgw
IRSA without eksctl
If you prefer to manage the service account entirely through Helm (without using eksctl create iamserviceaccount), you can set the IRSA annotation directly:
helm install sftpgw ./sftp-gateway-aws \
--namespace sftpgw --create-namespace \
--set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=arn:aws:iam::ACCOUNT_ID:role/sftpgw-s3-role \
# ... other values
This requires that you have already created the IAM role with the correct trust policy for your EKS cluster's OIDC provider.
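The IAM role's trust policy must scope sts:AssumeRoleWithWebIdentity to the chart's service account. A sketch of the standard IRSA trust policy follows, with ACCOUNT_ID and the OIDC provider ID (OIDC_ID) as placeholders you must fill in from your cluster:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::ACCOUNT_ID:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/OIDC_ID"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-1.amazonaws.com/id/OIDC_ID:sub": "system:serviceaccount:sftpgw:sftpgw-sftp-gateway-aws",
          "oidc.eks.us-east-1.amazonaws.com/id/OIDC_ID:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
```

The `sub` condition pins the role to the sftpgw namespace and the service account name the chart generates, so a compromised pod in another namespace cannot assume it.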
Upgrading
helm upgrade sftpgw ./sftp-gateway-aws \
--namespace sftpgw \
--reuse-values
:::warning
If you used --set flags during install, you must pass the same values during upgrade (or use --reuse-values). Helm does not persist --set values between releases.
:::
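To see exactly which values Helm has stored for the release before upgrading:

```shell
# Show the values supplied at install time (not chart defaults).
# Add --all to include computed defaults as well.
helm get values sftpgw -n sftpgw
```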
Uninstalling
helm uninstall sftpgw --namespace sftpgw
# PVCs are not deleted automatically — remove if no longer needed:
kubectl delete pvc --all -n sftpgw
Troubleshooting
Backend pod stuck in CrashLoopBackOff
Check the logs:
kubectl logs -n sftpgw -l app.kubernetes.io/component=backend --tail=50
Common causes:
- Database connection failed: Verify the RDS instance is running and the security group allows traffic from the EKS cluster
- IRSA not configured: Check that the service account has the correct IAM role annotation:
kubectl get sa sftpgw-sftp-gateway-aws -n sftpgw -o yaml
PVCs stuck in Pending
EKS requires the EBS CSI driver addon for dynamic PVC provisioning:
# Check if the addon is installed
aws eks describe-addon --cluster-name sftpgw-cluster --addon-name aws-ebs-csi-driver
# Install if missing
aws eks create-addon --cluster-name sftpgw-cluster --addon-name aws-ebs-csi-driver
# Verify the gp2 storage class is default
kubectl get storageclass
If gp2 is not marked as (default):
kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
UI pod in CrashLoopBackOff
The UI container requires TLS certificates. The Helm chart auto-generates a self-signed certificate, but if you see nginx certificate errors, verify the secret exists:
kubectl get secret -n sftpgw -l app.kubernetes.io/instance=sftpgw
Backend pod not becoming Ready
The backend takes approximately 30-90 seconds to start. Check readiness probe status:
kubectl describe pod -n sftpgw -l app.kubernetes.io/component=backend
PVC Multi-Attach errors during upgrades
If the backend pod is stuck in Init with a Multi-Attach error, the old pod is still holding the PVC:
# Delete the old pod
kubectl delete pod <old-pod-name> -n sftpgw
# Or use a rolling update strategy with maxUnavailable=1
Service account conflicts with eksctl
If you see an error like ServiceAccount exists and cannot be imported, it means eksctl created the service account with its own labels. Use these flags to tell Helm to use the existing SA:
helm install sftpgw ./sftp-gateway-aws \
--set serviceAccount.create=false \
--set serviceAccount.name=sftpgw-sftp-gateway-aws \
# ... other values
NLB not provisioning
If the LoadBalancer external hostname stays empty, check the service events:
kubectl describe svc sftpgw-sftp-gateway-aws-sftp -n sftpgw
kubectl describe svc sftpgw-sftp-gateway-aws-ui -n sftpgw
The chart uses service.beta.kubernetes.io/aws-load-balancer-type: "nlb" annotations. The in-tree Kubernetes cloud provider handles NLB creation on EKS. If you have the AWS Load Balancer Controller installed, it may intercept these annotations — check for conflicting configurations.