Deploy SFTP Gateway on Azure Kubernetes Service (AKS)
TL;DR - Quick Summary
- What: Deploy SFTP Gateway 3.8.0 as a multi-container application on AKS
- Method: Kubernetes manifests managed with Kustomize
- Components: Backend (SFTP + API), Admin UI, PostgreSQL database
- Images: Public Docker Hub — `thorntech/sftpgateway-backend` and `thorntech/sftpgateway-admin-ui`
- Storage: Azure Managed Disks via Persistent Volume Claims
- Command:
kubectl apply -k .
Overview
Reference Implementation: This guide provides a reference implementation for deploying SFTP Gateway on AKS. The manifests and configurations shown here are starting points — you should review and adjust them according to your organization's security policies, networking requirements, and operational standards. All placeholder values (IP addresses, passwords, certificates) must be replaced with your actual values before deployment.
In this guide, we'll walk through deploying SFTP Gateway as a Kubernetes application on Azure Kubernetes Service (AKS). This deployment uses a set of Kubernetes manifests to stand up a fully functional SFTP Gateway environment, including:
- Backend service — handles SFTP connections and the management API
- Admin UI — web-based administration console (HTTPS)
- PostgreSQL database — stores user accounts and configuration data
- Persistent storage — Azure Managed Disks for SFTP file storage and database persistence
Container images are pulled from Docker Hub (public), so no private registry credentials are required. The deployment is orchestrated using Kustomize, which allows you to deploy all resources with a single command. Two Azure Load Balancers are provisioned automatically — one for the Admin UI (ports 80/443) and one for SFTP access (port 22).
Architecture
┌──────────────────────────────────────────┐
│ Azure Kubernetes Service (AKS) │
│ Namespace: sftpgw │
│ │
HTTPS (443) │ ┌────────────┐ ┌────────────────┐ │
──────────────────────┼──►│ Admin UI │───►│ Backend │ │
│ │ (nginx) │ │ (Spring Boot) │ │
│ └────────────┘ │ │ │
SFTP (22) │ │ Port 8080 API │ │
──────────────────────┼────────────────────►│ Port 2222 SFTP│ │
│ └───────┬────────┘ │
│ │ │
│ ┌───────▼────────┐ │
│ │ PostgreSQL 16 │ │
│ │ Port 5432 │ │
│ └────────────────┘ │
│ │
│ Storage: │
│ ├── 10Gi Home directory (PVC) │
│ ├── 50Gi SFTP mount directory (PVC) │
│ └── 10Gi PostgreSQL data (PVC) │
└──────────────────────────────────────────┘
Prerequisites
Before you begin, make sure you have the following:
- Azure Kubernetes Service (AKS) cluster — a running AKS cluster with at least one node pool. The cluster should have a minimum of 4 GB of allocatable memory available for the SFTP Gateway backend.
- kubectl — installed and configured to communicate with your AKS cluster. Verify with:
kubectl get nodes
- Kustomize — included with kubectl v1.14+. No separate installation required.
- SFTP Gateway license key — a valid license is required. Contact Thorn Technologies if you do not have one.
Note: The SFTP Gateway container images are hosted on public Docker Hub repositories. No Docker Hub account or registry credentials are needed to pull them.
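To sanity-check the cluster before deploying, you can list each node's allocatable memory. A small helper function (a sketch; kubectl must already point at your AKS cluster):

```shell
# Helper: print each node's name and allocatable memory (raw kubelet units, e.g. Ki).
# The backend needs ~4 GB allocatable on at least one node.
node_allocatable_memory() {
  kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.allocatable.memory}{"\n"}{end}'
}

# Run it once kubectl is configured for your cluster:
# node_allocatable_memory
```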
Deployment files
The deployment consists of eight Kubernetes manifest files, managed by a Kustomize configuration. Clone or download the deployment files to your local machine.
| File | Description |
|---|---|
| namespace.yaml | Creates the sftpgw namespace to isolate all resources |
| secrets.yaml | Stores database credentials, OAuth secrets, JWT key, license, and TLS certificate (as environment variables) |
| configmap.yaml | Non-sensitive configuration for PostgreSQL, the backend, and the UI |
| storage.yaml | Persistent Volume Claims for the SFTP home directory (10Gi) and mount directory (50Gi) |
| postgres.yaml | PostgreSQL 16 StatefulSet and headless Service |
| backend.yaml | SFTP Gateway backend Deployment and internal Service |
| ui.yaml | Admin UI Deployment and LoadBalancer Service (with IP restrictions) |
| sftp-service.yaml | LoadBalancer Service exposing SFTP on port 22 (open to the internet) |
| kustomization.yaml | Kustomize orchestration — applies the namespace, labels, and deployment order |
The kustomization.yaml file should use the labels transformer (not the deprecated commonLabels):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: sftpgw
labels:
- pairs:
app.kubernetes.io/name: sftpgateway
app.kubernetes.io/version: "3.8.0"
includeSelectors: true
resources:
- namespace.yaml
- secrets.yaml
- configmap.yaml
- storage.yaml
- postgres.yaml
- backend.yaml
- ui.yaml
- sftp-service.yaml
Container images
The deployment pulls the following images:
| Component | Image | Tag |
|---|---|---|
| Backend | thorntech/sftpgateway-backend | 3.8.0 |
| Admin UI | thorntech/sftpgateway-admin-ui | 3.8.0 |
| PostgreSQL | postgres | 16-alpine |
Step 1: Customize the configuration
Before deploying, review and modify the configuration files to match your environment.
1a. Update secrets
Open secrets.yaml and update the following values:
Database credentials (sftpgw-db-secret and sftpgw-backend-secret):
# sftpgw-db-secret
stringData:
POSTGRES_USER: sftpgw
POSTGRES_PASSWORD: <your-secure-password> # Change this
POSTGRES_DB: sftpgw
# sftpgw-backend-secret
stringData:
SPRING_DATASOURCE_USERNAME: sftpgw
SPRING_DATASOURCE_PASSWORD: <your-secure-password> # Must match above
OAuth credentials (sftpgw-backend-secret and sftpgw-ui-secret):
# sftpgw-backend-secret
stringData:
SECURITY_CLIENT_ID: "<your-client-id>" # Change this
SECURITY_CLIENT_SECRET: "<your-client-secret>" # Change this
# sftpgw-ui-secret — must match the backend values
stringData:
SECURITY_CLIENT_ID: "<your-client-id>" # Must match backend
SECURITY_CLIENT_SECRET: "<your-client-secret>" # Must match backend
Note: The `SECURITY_CLIENT_ID` and `SECURITY_CLIENT_SECRET` values must be identical in both the backend and UI secrets. They are used for internal OAuth communication between the two services.
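A quick way to produce suitably random values for these fields is openssl's random generator (a sketch; the lengths shown are illustrative, not requirements):

```shell
# Generate random credentials for the secrets. Any sufficiently random
# strings work; these lengths are just reasonable defaults.
CLIENT_ID=$(openssl rand -hex 16)        # 32 hex characters
CLIENT_SECRET=$(openssl rand -base64 32) # 44 base64 characters
DB_PASSWORD=$(openssl rand -base64 24)   # 32 base64 characters

echo "SECURITY_CLIENT_ID:     $CLIENT_ID"
echo "SECURITY_CLIENT_SECRET: $CLIENT_SECRET"
echo "POSTGRES_PASSWORD:      $DB_PASSWORD"
```

Paste each value into both places that must match (backend and UI secrets for the OAuth pair, database and backend secrets for the password).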
TLS certificate (sftpgw-ui-secret):
The Admin UI container reads TLS certificate and key from environment variables. Add your certificate to the sftpgw-ui-secret:
# sftpgw-ui-secret
stringData:
SECURITY_CLIENT_ID: "<your-client-id>"
SECURITY_CLIENT_SECRET: "<your-client-secret>"
WEBSITE_BUNDLE_CRT: |
-----BEGIN CERTIFICATE-----
<your-certificate>
-----END CERTIFICATE-----
WEBSITE_KEY: |
-----BEGIN PRIVATE KEY-----
<your-private-key>
-----END PRIVATE KEY-----
To generate a self-signed certificate for testing:
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout website.key -out website.bundle.crt \
-subj "/CN=sftpgateway/O=YourOrganization"
Then copy the contents into the secret.
Important: The certificate and private key must match. If you see an error like `SSL_CTX_use_PrivateKey ... key values mismatch` in the UI pod logs, the private key does not correspond to the certificate. Regenerate both files together using the command above.
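Before placing the pair in the secret, you can confirm they match by comparing the RSA modulus of each (a sketch that assumes RSA keys, as generated by the command above):

```shell
# Returns success if the RSA private key matches the certificate.
# Usage: check_cert_key <certificate.pem> <private-key.pem>
check_cert_key() {
  local cert_mod key_mod
  cert_mod=$(openssl x509 -noout -modulus -in "$1" | openssl md5)
  key_mod=$(openssl rsa -noout -modulus -in "$2" | openssl md5)
  # Both moduli must be non-empty and identical
  [ -n "$cert_mod" ] && [ "$cert_mod" = "$key_mod" ]
}

# Example:
# check_cert_key website.bundle.crt website.key && echo "match" || echo "MISMATCH"
```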
License key (sftpgw-backend-secret):
The LICENSE field in sftpgw-backend-secret should contain your SFTP Gateway license. The deployment files include a license key — verify it matches the license provided to you by Thorn Technologies.
1b. Adjust storage sizes (optional)
Open storage.yaml to adjust the Persistent Volume Claim sizes if needed:
# Home directory — stores SSH host keys and user home directories
resources:
requests:
storage: 10Gi # Adjust as needed
# Mount directory — primary SFTP file storage
resources:
requests:
storage: 50Gi # Adjust based on expected data volume
Both PVCs use the managed-csi storage class, which provisions Azure Managed Disks.
1c. Review backend configuration (optional)
Open configmap.yaml to review the backend configuration. Key settings include:
# sftpgw-backend-config
data:
SPRING_PROFILES_ACTIVE: local # Spring profile
LOGGING_LEVEL_ROOT: INFO # Log level
SPRING_DATASOURCE_URL: jdbc:postgresql://db:5432/sftpgw # DB connection
SERVER_PORT: "8080" # API port
FEATURES_FIRST_CONNECTION_CLOUD_PROVIDER: lfs # Local File System
FEATURES_FIRST_CONNECTION_NAME: "Local File System" # Display name
FEATURES_FIRST_CONNECTION_BASE_PREFIX: "/mnt/sftpgw_1" # Mount path
SFTP_PORT: "2222" # SFTP port
Note: `FEATURES_FIRST_CONNECTION_CLOUD_PROVIDER` is set to `lfs` (Local File System) by default, which stores uploaded files on the Azure Managed Disk. You can change this after deployment through the Admin UI to connect to Azure Blob Storage or another cloud storage provider.
1d. Review UI configuration (optional)
The UI configmap contains nginx proxy settings:
# sftpgw-ui-config
data:
BACKEND_URL: "http://backend:8080/" # Backend API endpoint (required)
NGINX_PORT: "80" # HTTP port
SSL_PORT: "443" # HTTPS port
Important: `BACKEND_URL` must include a trailing slash and point to the backend service; nginx uses it to proxy API requests.
1e. Configure Admin UI access restrictions
The Admin UI should be restricted to authorized administrator IP addresses. Open ui.yaml and configure the Service with loadBalancerSourceRanges:
# ui.yaml (Service section)
apiVersion: v1
kind: Service
metadata:
name: ui
spec:
type: LoadBalancer
loadBalancerSourceRanges:
- "203.0.113.50/32" # Replace with your sysadmin's IP address
- "198.51.100.0/24" # Replace with your office network range (optional)
selector:
app: ui
ports:
- name: https
port: 443
targetPort: 443
Key security settings:
| Setting | Description |
|---|---|
| loadBalancerSourceRanges | Restricts which IP addresses can reach the Admin UI. Use CIDR notation: /32 for a single IP, /24 for a 256-address range. |
| HTTPS only (port 443) | The example above only exposes HTTPS. Add port 80 only if you need HTTP access. |
Note: The SFTP service (`sftp-service.yaml`) does not include `loadBalancerSourceRanges` because SFTP users typically connect from various locations. Only the Admin UI is restricted.
Azure will automatically configure Network Security Group rules on the Load Balancer to enforce these IP restrictions.
Step 2: Deploy to AKS
Now that the configuration is in place, deploy all resources using Kustomize:
kubectl apply -k .
This single command deploys all eight manifest files in the correct order and applies consistent labels (app.kubernetes.io/name: sftpgateway, app.kubernetes.io/version: "3.8.0") across all resources.
You should see output similar to:
namespace/sftpgw created
secret/sftpgw-db-secret created
secret/sftpgw-backend-secret created
secret/sftpgw-ui-secret created
configmap/sftpgw-db-config created
configmap/sftpgw-backend-config created
configmap/sftpgw-ui-config created
persistentvolumeclaim/sftpgw-home-pvc created
persistentvolumeclaim/sftpgw-mount-pvc created
service/db created
statefulset.apps/postgres created
service/backend created
deployment.apps/backend created
service/ui created
deployment.apps/ui created
service/sftp-lb created
Step 3: Monitor the deployment
Watch the pods as they come up:
kubectl get pods -n sftpgw -w
The pods will start in this order due to dependencies:
- postgres-0 — the database starts first
- backend-xxx — waits for the database to be ready (via init container), then starts
- ui-xxx — starts independently but requires the backend to serve requests
A healthy deployment looks like this:
NAME READY STATUS RESTARTS AGE
postgres-0 1/1 Running 0 2m
backend-7d797c89b-xxxxx 1/1 Running 0 90s
ui-f97458895-xxxxx 1/1 Running 0 90s
Note: The backend pod may take 60–90 seconds to become ready after starting. This is normal — the Spring Boot application needs time to initialize. The readiness probe begins checking after 30 seconds, and the liveness probe after 60 seconds.
If a pod is stuck in a non-ready state, check its logs:
kubectl logs -n sftpgw <pod-name>
To see init container logs (useful if the backend is stuck waiting for the database):
kubectl logs -n sftpgw <backend-pod-name> -c wait-for-db
Step 4: Get the external IP addresses
Once all pods are running, retrieve the external IP addresses assigned by Azure:
kubectl get svc -n sftpgw
You should see output like:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
db ClusterIP None <none> 5432/TCP 3m
backend ClusterIP 10.0.xxx.xxx <none> 8080/TCP 3m
ui LoadBalancer 10.0.xxx.xxx 20.xxx.xxx.xxx 443:xxxxx/TCP 3m
sftp-lb LoadBalancer 10.0.xxx.xxx 20.xxx.xxx.xxx 22:xxxxx/TCP 3m
Take note of two external IPs:
| Service | External IP | Ports | Purpose |
|---|---|---|---|
| ui | 20.xxx.xxx.xxx | 443 | Admin UI (HTTPS only, restricted to admin IPs) |
| sftp-lb | 20.xxx.xxx.xxx | 22 | SFTP connections (open to the internet) |
Note: It may take 1–2 minutes for Azure to assign the external IPs. If you see `<pending>` in the EXTERNAL-IP column, wait a moment and run the command again.
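If you'd rather poll than re-run the command by hand, a small helper can wait for the IP (a sketch using the service names and namespace from this guide):

```shell
# Poll a Service until Azure assigns its external IP, with bounded retries.
# Usage: wait_for_ip <service-name> [attempts] [delay-seconds]
wait_for_ip() {
  local svc="$1" attempts="${2:-30}" delay="${3:-10}" ip
  for _ in $(seq "$attempts"); do
    ip=$(kubectl get svc "$svc" -n sftpgw \
          -o jsonpath='{.status.loadBalancer.ingress[0].ip}' 2>/dev/null)
    if [ -n "$ip" ]; then
      echo "$ip"
      return 0
    fi
    sleep "$delay"
  done
  echo "timed out waiting for external IP on $svc" >&2
  return 1
}

# Example:
# wait_for_ip ui
# wait_for_ip sftp-lb
```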
Step 5: Access the Admin UI
Open your web browser and navigate to the Admin UI using the external IP from the ui service:
https://<UI-EXTERNAL-IP>
Note: You must access the Admin UI from an IP address listed in `loadBalancerSourceRanges` (configured in Step 1e); connections from other IPs will be refused. If you're using a self-signed certificate, your browser will show a security warning — you can proceed past it for testing, but use a CA-signed certificate for production.
You will be presented with the SFTP Gateway setup wizard. Follow the on-screen prompts to:
- Create an admin account — set your administrator username and password
- Configure storage — the default Local File System connection is pre-configured and points to `/mnt/sftpgw_1` (backed by the 50Gi Azure Managed Disk)
- Create SFTP users — add user accounts that will connect via SFTP
Step 6: Connect via SFTP
Once you have created an SFTP user in the Admin UI, you can connect using any SFTP client (e.g., FileZilla, WinSCP, or the command line).
Using the command line:
sftp -P 22 <username>@<SFTP-EXTERNAL-IP>
Using FileZilla:
- Open FileZilla and go to File > Site Manager
- Click New Site and enter the following:
- Protocol: SFTP - SSH File Transfer Protocol
- Host: `<SFTP-EXTERNAL-IP>` (the external IP from the `sftp-lb` service)
- Port: `22`
- Logon Type: Normal
- User: the SFTP username you created in the Admin UI
- Password: the password you set for that user
- Click Connect
On first connection, you will be prompted to accept the server's host key. Click OK to proceed.
You should now be connected to SFTP Gateway and can upload/download files. Files are stored on the 50Gi Azure Managed Disk mounted at /mnt/sftpgw_1.
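For a scripted smoke test, sftp's batch mode can upload and list a file non-interactively. A minimal sketch (replace the placeholders in the commented command with your SFTP user and the sftp-lb external IP):

```shell
# Create a small test file and a batch of SFTP commands
echo "sftp gateway smoke test" > testfile.txt

cat > sftp-batch.txt <<'EOF'
put testfile.txt
ls -l
EOF

# Run the batch against the gateway (fill in the placeholders first):
# sftp -P 22 -b sftp-batch.txt <username>@<SFTP-EXTERNAL-IP>
```

Batch mode exits non-zero if any command fails, which makes it easy to wire into a post-deployment check.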
Resource summary
The following table summarizes the compute and storage resources used by this deployment:
| Component | Image | Memory (request/limit) | CPU (request/limit) | Storage |
|---|---|---|---|---|
| PostgreSQL | postgres:16-alpine | 256Mi / 512Mi | 250m / 500m | 10Gi |
| Backend | thorntech/sftpgateway-backend:3.8.0 | 2Gi / 3Gi | 500m / 1500m | 10Gi + 50Gi |
| Admin UI | thorntech/sftpgateway-admin-ui:3.8.0 | 128Mi / 256Mi | 100m / 500m | — |
| Total | — | ~2.4Gi / ~3.75Gi | 850m / 2500m | 70Gi |
Troubleshooting
Pods stuck in ImagePullBackOff
This typically means Docker Hub is rate-limiting your pulls, or there is a network connectivity issue from your AKS cluster. Check the pod events for details:
kubectl describe pod <pod-name> -n sftpgw
Look at the Events section at the bottom. If you see rate-limit errors, you can create a Docker Hub pull secret with your Docker Hub credentials:
kubectl create secret docker-registry dockerhub-secret \
--namespace=sftpgw \
--docker-server=https://index.docker.io/v1/ \
--docker-username=<your-dockerhub-username> \
--docker-password=<your-dockerhub-password>
Then add an imagePullSecrets reference to the backend and UI deployments.
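In each Deployment, the reference goes in the pod template spec. A minimal sketch of the relevant fragment (assuming the secret name created above):

```yaml
# backend.yaml and ui.yaml — add to spec.template.spec
spec:
  template:
    spec:
      imagePullSecrets:
        - name: dockerhub-secret   # the secret created above
      containers:
        - name: backend            # or "ui" in ui.yaml
          image: thorntech/sftpgateway-backend:3.8.0
```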
Backend pod stuck in Init state
The backend pod uses two init containers that run before the main application starts:
- init-permissions — sets file ownership on the mount directory
- wait-for-db — waits for PostgreSQL to accept connections on port 5432
If the pod is stuck, check which init container is running and view its logs:
kubectl describe pod <backend-pod-name> -n sftpgw
kubectl logs -n sftpgw <backend-pod-name> -c wait-for-db
If stuck on wait-for-db, verify PostgreSQL is running:
kubectl get pods -n sftpgw -l app=postgres
kubectl logs -n sftpgw postgres-0

Backend pod CrashLoopBackOff
Check the backend logs for errors:
kubectl logs -n sftpgw -l app=backend
Common causes:
- Invalid license key — verify the `LICENSE` value in `secrets.yaml`
- Database connection failure — verify that the database credentials match between `sftpgw-db-secret` and `sftpgw-backend-secret`
- Insufficient memory — the backend requires at least 2Gi of memory; check whether the node has enough allocatable resources
SFTP uploads fail with permission denied
If SFTP users can connect but cannot upload files, or the "Test Connection" in the Admin UI shows a write permission error, this is a file ownership issue on the persistent volumes.
The backend container runs as user sftpgw with UID 100. The init container must set ownership to this UID:
# Verify the ownership
kubectl exec -n sftpgw deployment/backend -- ls -la /mnt/
kubectl exec -n sftpgw deployment/backend -- ls -la /home/
Both /mnt/sftpgw_1 and /home/sftpgw should be owned by sftpgw:sftpgw (UID 100:GID 100).
If ownership is incorrect, check the init-permissions container in backend.yaml uses the correct UID:
command:
- sh
- -c
- |
chown -R 100:100 /home/sftpgw
chown -R 100:100 /mnt/sftpgw_1
Then restart the backend deployment to re-run the init container:
kubectl rollout restart deployment/backend -n sftpgw
External IP stuck on <pending>
If the LoadBalancer external IPs remain in <pending> state, check the AKS cluster's ability to provision Azure Load Balancers:
kubectl describe svc ui -n sftpgw
kubectl describe svc sftp-lb -n sftpgw
Look at the Events section for error messages. Common causes include insufficient permissions on the AKS managed identity or Azure subscription quota limits.
UI pod CrashLoopBackOff
If the UI pod is crashing, check the logs:
kubectl logs -n sftpgw -l app=ui
Common causes:
- Missing or invalid TLS certificate — the UI container requires the `WEBSITE_BUNDLE_CRT` and `WEBSITE_KEY` environment variables in `sftpgw-ui-secret`. If these are missing or contain invalid PEM data, nginx fails to start with an error like `cannot load certificate "/etc/nginx/ssl/website.bundle.crt": PEM_read_bio_X509_AUX() failed`. Ensure the certificate and key are valid PEM and properly indented in the YAML.
- Certificate and key mismatch — if the private key does not correspond to the certificate, nginx fails with `SSL_CTX_use_PrivateKey("/etc/nginx/ssl/website.key") failed (SSL: error:... key values mismatch)`. This often happens when copying certificate/key content into YAML. Regenerate both files together using `openssl req -x509 ...` and copy them carefully, preserving the exact content.
- Missing BACKEND_URL — the UI's nginx configuration requires the `BACKEND_URL` environment variable. If it is missing, you'll see `unknown "backend_url" variable`. Verify that the `sftpgw-ui-config` configmap contains `BACKEND_URL: "http://backend:8080/"`.
UI loads but shows connection errors
If the Admin UI loads but cannot communicate with the backend, verify the backend service is running and the OAuth credentials match:
kubectl get endpoints backend -n sftpgw
The output should show an endpoint IP. If the ENDPOINTS column is empty, the backend pod is not ready.
Also verify that the SECURITY_CLIENT_ID and SECURITY_CLIENT_SECRET values in sftpgw-ui-secret match the values in sftpgw-backend-secret. A mismatch will cause authentication failures between the UI and backend.
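A quick way to compare the values without opening the manifests is to decode them straight from the cluster (a sketch; it assumes the secret and key names used in this guide):

```shell
# Decode one key from a Secret in the sftpgw namespace.
# Usage: secret_value <secret-name> <key>
secret_value() {
  kubectl get secret "$1" -n sftpgw -o jsonpath="{.data.$2}" | base64 -d
}

# Compare the OAuth values across the backend and UI secrets:
# [ "$(secret_value sftpgw-backend-secret SECURITY_CLIENT_ID)" = \
#   "$(secret_value sftpgw-ui-secret SECURITY_CLIENT_ID)" ] && echo MATCH || echo MISMATCH
```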
Additional considerations
The deployment manifests in this guide include production security settings (IP restrictions on the Admin UI, HTTPS-only access). Depending on your environment, you may also want to consider the following enhancements.
Checklist
- [ ] Replace the self-signed TLS certificate with a CA-signed certificate, or use an Azure Application Gateway Ingress Controller with Azure-managed certificates.
- [ ] Use Azure Database for PostgreSQL instead of the in-cluster StatefulSet for automated backups, high availability, and managed patching. Update `SPRING_DATASOURCE_URL` in `configmap.yaml` to point to the managed database.
- [ ] Generate strong credentials — the database password, OAuth client ID/secret, and JWT secret should be randomly generated (e.g., `openssl rand -base64 32`).
- [ ] Use Azure Key Vault with the Secrets Store CSI Driver to manage secrets outside of Kubernetes manifests.
- [ ] Configure backup strategies for the Persistent Volume Claims using Azure Backup for AKS.
- [ ] Add network policies for defense-in-depth (see below).
- [ ] Enable autoscaling with the Kubernetes Horizontal Pod Autoscaler (HPA) if you expect variable SFTP traffic.
- [ ] Use static IP addresses for the LoadBalancer services (see below).
Adding a NetworkPolicy for defense-in-depth
The loadBalancerSourceRanges setting in ui.yaml restricts access at the Azure Load Balancer level. For an additional layer of security, add a Kubernetes NetworkPolicy that restricts traffic at the pod level:
# network-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: restrict-ui-ingress
namespace: sftpgw
spec:
podSelector:
matchLabels:
app: ui
policyTypes:
- Ingress
ingress:
- from:
- ipBlock:
cidr: 203.0.113.50/32 # Replace with your sysadmin's IP
- ipBlock:
cidr: 198.51.100.0/24 # Replace with your office network
ports:
- protocol: TCP
port: 443
kubectl apply -f network-policy.yaml
Note: NetworkPolicies require a CNI plugin that supports them (e.g., Azure CNI with Network Policy enabled, Calico). Verify your AKS cluster has network policy support enabled.
Using static IP addresses
By default, Azure assigns dynamic public IPs to LoadBalancer services. These IPs remain stable during normal operation but will change if you delete and recreate the services. For production, use static IPs to ensure your SFTP endpoint and Admin UI URLs never change.
Step 1: Create static public IPs
Find your AKS node resource group (typically MC_<resource-group>_<cluster-name>_<region>):
az aks show --resource-group <your-resource-group> --name <your-cluster-name> \
--query nodeResourceGroup -o tsv
Create static IPs in that resource group:
# Static IP for SFTP
az network public-ip create \
--resource-group MC_<resource-group>_<cluster-name>_<region> \
--name sftpgw-sftp-ip \
--sku Standard \
--allocation-method Static
# Static IP for Admin UI
az network public-ip create \
--resource-group MC_<resource-group>_<cluster-name>_<region> \
--name sftpgw-ui-ip \
--sku Standard \
--allocation-method Static
Step 2: Get the assigned IP addresses
az network public-ip show --resource-group MC_<...> --name sftpgw-sftp-ip --query ipAddress -o tsv
az network public-ip show --resource-group MC_<...> --name sftpgw-ui-ip --query ipAddress -o tsv
Step 3: Update the service manifests
Add loadBalancerIP to each service in your manifest files. (The `spec.loadBalancerIP` field is deprecated as of Kubernetes 1.24 but is still honored by the Azure cloud provider; if your cluster version rejects it, consult the AKS documentation for the annotation-based alternative.)
# sftp-service.yaml
spec:
type: LoadBalancer
loadBalancerIP: "<your-sftp-static-ip>"
selector:
app: backend
ports:
- name: sftp
port: 22
targetPort: 2222
# ui.yaml (Service section) - includes static IP and access restrictions
spec:
type: LoadBalancer
loadBalancerIP: "<your-ui-static-ip>"
loadBalancerSourceRanges:
- "203.0.113.50/32" # Replace with your sysadmin's IP
selector:
app: ui
ports:
- name: https
port: 443
targetPort: 443
Step 4: Apply the changes
kubectl apply -f sftp-service.yaml -n sftpgw
kubectl apply -f ui.yaml -n sftpgw
The services will now use your static IPs, which persist even if the services are recreated.
Uninstall
To remove the entire SFTP Gateway deployment:
kubectl delete -k .
This removes all resources in the sftpgw namespace, including deployments, services, secrets, configmaps, and persistent volume claims.
Note: Deleting the PersistentVolumeClaims will permanently delete the underlying Azure Managed Disks and all data stored on them. Make sure to back up any important data before uninstalling.
If you created static public IPs, delete them separately:
az network public-ip delete --resource-group MC_<...> --name sftpgw-sftp-ip
az network public-ip delete --resource-group MC_<...> --name sftpgw-ui-ip