HA Infrastructure Manager Template
TLDR - Quick Summary
What: Deploy SFTP Gateway Professional HA using Google Cloud Infrastructure Manager (the replacement for Google Deployment Manager)
Resources Created: Regional Managed Instance Group, Cloud SQL PostgreSQL 16, Cloud Storage Bucket, Firewall Rules, Static IP, Secret Manager Secret
Deploy:
gcloud infra-manager deployments apply ...
Prerequisite: Subscribe to SFTP Gateway Professional in the Google Marketplace first
Overview
Google Deployment Manager (GDM) is deprecated. This article covers deploying SFTP Gateway Professional in an HA configuration using Google Cloud Infrastructure Manager, which is Google's managed service for running Terraform configurations.
The template deploys SFTP Gateway version 3.8.2 on GCP. It creates a dedicated VPC (or deploys into an existing one), a Regional Managed Instance Group with a load balancer across multiple zones, and a zonal Cloud SQL PostgreSQL instance.
Note: Make sure you are subscribed to SFTP Gateway Professional in the Google Marketplace before deploying.
Prerequisites
- Google Cloud SDK (gcloud) installed and authenticated
- The following APIs enabled in your project: compute, sqladmin, servicenetworking, secretmanager, cloudresourcemanager, iam, config
- Your workstation's public IP address (used to restrict admin access)
Enable all required APIs with a single command:
gcloud services enable \
compute.googleapis.com \
sqladmin.googleapis.com \
servicenetworking.googleapis.com \
secretmanager.googleapis.com \
cloudresourcemanager.googleapis.com \
iam.googleapis.com \
config.googleapis.com \
--project=YOUR_PROJECT_ID
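To double-check afterwards, a small loop like the following (a sketch; YOUR_PROJECT_ID is a placeholder) reports which of the required APIs are enabled:

```shell
#!/bin/sh
# Required APIs for this deployment (short names, minus the .googleapis.com suffix)
REQUIRED_APIS="compute sqladmin servicenetworking secretmanager cloudresourcemanager iam config"
API_COUNT=0
for API in $REQUIRED_APIS; do
  API_COUNT=$((API_COUNT + 1))
  # The query returns the service name only when that API is enabled
  if gcloud services list --enabled --project="YOUR_PROJECT_ID" \
       --filter="config.name=${API}.googleapis.com" \
       --format="value(config.name)" 2>/dev/null | grep -q .; then
    echo "OK:      ${API}.googleapis.com"
  else
    echo "MISSING: ${API}.googleapis.com"
  fi
done
echo "Checked ${API_COUNT} APIs"
```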
Running the template
Step 1: Create the Terraform files
Create a working directory and add the two files from the File contents section at the bottom of this article:
- sftpgw-ha.tf
- terraform.tfvars
Fill in your values in terraform.tfvars. Refer to the Configuration variables section for details on each setting.
Step 2: Create an Infrastructure Manager service account
Infrastructure Manager requires a dedicated service account to create resources on your behalf.
PROJECT_ID="your-gcp-project-id"
# Create the service account
gcloud iam service-accounts create infra-manager-service-account \
--display-name="Infrastructure Manager Service Account" \
--project="${PROJECT_ID}"
# Grant the required IAM roles
for ROLE in \
roles/compute.admin \
roles/iam.serviceAccountAdmin \
roles/iam.serviceAccountUser \
roles/cloudsql.admin \
roles/storage.admin \
roles/secretmanager.admin \
roles/servicenetworking.networksAdmin \
roles/resourcemanager.projectIamAdmin \
roles/config.agent; do
gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
--member="serviceAccount:infra-manager-service-account@${PROJECT_ID}.iam.gserviceaccount.com" \
--role="${ROLE}"
done
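To verify the bindings took effect, you can list every role the project grants to this service account. A sketch using the same PROJECT_ID placeholder; it prints a fallback message rather than failing when gcloud is unavailable or unauthenticated:

```shell
#!/bin/sh
PROJECT_ID="your-gcp-project-id"
SA="infra-manager-service-account@${PROJECT_ID}.iam.gserviceaccount.com"
# Flatten the policy so each (role, member) pair becomes one row, then
# keep only rows whose member is our service account
ROLES=$(gcloud projects get-iam-policy "${PROJECT_ID}" \
  --flatten="bindings[].members" \
  --filter="bindings.members:serviceAccount:${SA}" \
  --format="value(bindings.role)" 2>/dev/null || true)
echo "Roles granted to ${SA}:"
echo "${ROLES:-<none found - check gcloud auth and project>}"
```

You should see all nine roles from the loop above, including roles/config.agent.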
Then initialize the Infrastructure Manager service identity and allow it to act as the service account:
PROJECT_NUMBER=$(gcloud projects describe "${PROJECT_ID}" --format='value(projectNumber)')
# Create the IM service identity
gcloud --quiet beta services identity create \
--service=config.googleapis.com \
--project="${PROJECT_ID}"
# Allow the IM service agent to impersonate the deployment service account
gcloud iam service-accounts add-iam-policy-binding \
"infra-manager-service-account@${PROJECT_ID}.iam.gserviceaccount.com" \
--member="serviceAccount:service-${PROJECT_NUMBER}@gcp-sa-config.iam.gserviceaccount.com" \
--role="roles/iam.serviceAccountTokenCreator" \
--project="${PROJECT_ID}"
Step 3: Deploy
Run the following command from the directory containing your two files. Set these variables to match what you put in terraform.tfvars — the gcloud CLI cannot read the tfvars file directly, so they must be provided here to construct the deployment resource path:
PROJECT_ID="your-gcp-project-id" # must match project in terraform.tfvars
REGION="us-east1" # must match region in terraform.tfvars
STACK_NAME="my-sftpgw" # must match stack_name in terraform.tfvars
gcloud infra-manager deployments apply \
"projects/${PROJECT_ID}/locations/${REGION}/deployments/${STACK_NAME}" \
--project="${PROJECT_ID}" \
--local-source="." \
--inputs-file="terraform.tfvars" \
--service-account="projects/${PROJECT_ID}/serviceAccounts/infra-manager-service-account@${PROJECT_ID}.iam.gserviceaccount.com"
Infrastructure Manager will upload your files, run terraform plan, and apply the configuration. Deployment typically takes 15–20 minutes, with Cloud SQL taking the longest.
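The first positional argument to deployments apply is the deployment's full resource path. As a sanity check, with the example values above it expands like this:

```shell
#!/bin/sh
PROJECT_ID="your-gcp-project-id"
REGION="us-east1"
STACK_NAME="my-sftpgw"
# Same string the apply command receives as its positional argument
DEPLOYMENT="projects/${PROJECT_ID}/locations/${REGION}/deployments/${STACK_NAME}"
echo "${DEPLOYMENT}"
# prints: projects/your-gcp-project-id/locations/us-east1/deployments/my-sftpgw
```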
To check the deployment status:
gcloud infra-manager deployments describe \
"projects/${PROJECT_ID}/locations/${REGION}/deployments/${STACK_NAME}" \
--project="${PROJECT_ID}"
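If you prefer to wait unattended, the describe command can be wrapped in a polling loop. A sketch: it extracts only the state field and stops once the deployment reaches ACTIVE or FAILED, or reports UNKNOWN when gcloud cannot be reached:

```shell
#!/bin/sh
PROJECT_ID="your-gcp-project-id"
REGION="us-east1"
STACK_NAME="my-sftpgw"
DEPLOYMENT="projects/${PROJECT_ID}/locations/${REGION}/deployments/${STACK_NAME}"
while :; do
  # value(state) reduces the describe output to the bare state string
  STATE=$(gcloud infra-manager deployments describe "${DEPLOYMENT}" \
    --project="${PROJECT_ID}" --format="value(state)" 2>/dev/null || true)
  STATE="${STATE:-UNKNOWN}"
  echo "state=${STATE}"
  case "$STATE" in
    ACTIVE|FAILED|UNKNOWN) break ;;
  esac
  sleep 30
done
```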
Step 4: Get the load balancer IP
Once the deployment state shows ACTIVE, retrieve the static IP address:
gcloud compute addresses list \
--project="${PROJECT_ID}" \
--filter="name:${STACK_NAME}" \
--format="value(address)"
Note: STACK_NAME here must match the stack_name value in your terraform.tfvars, not just the IM deployment name; all GCP resources are prefixed with that value.
Use this IP to access the web admin interface: https://<load_balancer_ip>
Note: Even after the deployment is ACTIVE, the instances need an additional 5–10 minutes to complete their startup sequence before the load balancer will accept traffic. Connection timeouts immediately after deployment are normal; wait a few minutes and try again.
How it works
The template (sftpgw-ha.tf) provisions the following resources:
- VPC Network and Subnet: An isolated network for the SFTP Gateway instances and Cloud SQL, with Private Google Access enabled. Skipped when existing_network is set.
- Firewall Rules: TCP port 22 (SFTP) open from anywhere; admin ports 80, 443, and 2222 restricted to web_admin_ip_address; port 443 open to GCP health check ranges
- Static External IP: A reserved public IP address for the load balancer
- Target Pool and Forwarding Rules: Load balancer distributing traffic on ports 22, 80, 443, and 2222 across all instances
- Regional Managed Instance Group: Maintains the desired number of SFTP Gateway VMs across multiple zones, with auto-healing
- Service Account: Grants instances access to Cloud Storage, Cloud SQL, Cloud Logging, and Secret Manager
- Cloud SQL PostgreSQL 16: A managed single-zone database instance; private IP only, no public access
- Secret Manager Secret: Stores the auto-generated database password
- Cloud Storage Bucket: Receives SFTP file uploads
Configuration variables
Configure these values in terraform.tfvars:
- project: Your GCP project ID
- stack_name: Prefix for all resource names. Use lowercase letters, numbers, and hyphens only.
- region: GCP region to deploy into (e.g. us-east1)
- zone: GCP zone for zonal resources (e.g. us-east1-c)
- web_admin_username: Username for the web admin interface
- web_admin_password: Password for the web admin interface. Must be at least 12 characters.
- web_admin_ip_address: Your workstation's public IP with /32 suffix (e.g. 1.2.3.4/32). Get your IP from checkip.dyndns.org.
- google_storage_bucket: A globally unique name for the GCS bucket that will receive SFTP uploads
- image_path: The SFTP Gateway marketplace image. Use projects/sftp-gateway/global/images/sftpgw-pro-3-8-2-20260414175401
- machine_type: Optional. VM size for SFTP Gateway instances. Defaults to e2-medium.
- instance_count: Optional. Number of instances in the managed instance group. Defaults to 2.
- db_tier: Optional. Cloud SQL instance tier. Defaults to db-g1-small for testing. Use db-custom-2-7680 for production.
- existing_network: Optional. Name of an existing VPC network to deploy into. Leave empty to create a new VPC. The VPC must already have Cloud SQL private service access configured.
- existing_subnet: Optional. Name of an existing subnet. Required when existing_network is set. Must have Private Google Access enabled.
Using an existing VPC (optional)
By default the template creates a new VPC network and subnet. If you want to deploy into an existing VPC — for example, to use your organization's shared network — set existing_network and existing_subnet in terraform.tfvars. When these are set, the template skips creating the VPC, subnet, and Cloud SQL private service access resources.
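For example, with hypothetical network names, the relevant terraform.tfvars lines are:

```hcl
// terraform.tfvars: the names below are placeholders for your own VPC and subnet
existing_network = "my-existing-vpc"
existing_subnet  = "my-existing-subnet"
```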
Your VPC must meet two requirements before deploying:
1. Cloud SQL private service access must be configured
# Allocate a private IP range for Cloud SQL
gcloud compute addresses create my-private-ip-range \
--global \
--purpose=VPC_PEERING \
--prefix-length=16 \
--network=my-existing-vpc \
--project=YOUR_PROJECT_ID
# Create the service networking peering connection
gcloud services vpc-peerings connect \
--service=servicenetworking.googleapis.com \
--network=my-existing-vpc \
--ranges=my-private-ip-range \
--project=YOUR_PROJECT_ID
Skip this step if your VPC already has Cloud SQL private service access configured. You can verify with:
gcloud services vpc-peerings list \
--network=my-existing-vpc \
--project=YOUR_PROJECT_ID
2. Private Google Access must be enabled on the subnet
gcloud compute networks subnets update my-existing-subnet \
--region=YOUR_REGION \
--enable-private-ip-google-access \
--project=YOUR_PROJECT_ID
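You can confirm the subnet setting afterwards. A sketch using the same placeholder names; it prints True when Private Google Access is already enabled, and "unknown" when gcloud is unavailable:

```shell
#!/bin/sh
# Placeholders: substitute your subnet, region, and project
SUBNET="my-existing-subnet"
REGION="us-east1"
PROJECT_ID="YOUR_PROJECT_ID"
PGA=$(gcloud compute networks subnets describe "${SUBNET}" \
  --region="${REGION}" --project="${PROJECT_ID}" \
  --format="value(privateIpGoogleAccess)" 2>/dev/null || true)
RESULT="${PGA:-unknown}"
echo "privateIpGoogleAccess=${RESULT}"
```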
Deleting the deployment
Deleting the deployment requires preparation — running deployments delete without it will fail due to Cloud SQL object ownership and VPC peering locks. The full procedure is covered in the [HA Infrastructure Manager Teardown Guide][teardown-guide].
At a minimum, before running deployments delete:
- Remove Cloud SQL deletion protection: run gcloud sql instances patch ${STACK_NAME}-db --no-deletion-protection --project=${PROJECT_ID} --quiet
- Empty the GCS bucket: delete all objects from the bucket
- Delete Cloud SQL directly: run gcloud sql instances delete ${STACK_NAME}-db --project=${PROJECT_ID} --quiet and wait for it to complete
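Those three steps can be scripted. A sketch using the placeholder names from this article (BUCKET is your google_storage_bucket value); each step tolerates failure so re-runs are safe:

```shell
#!/bin/sh
PROJECT_ID="your-gcp-project-id"
STACK_NAME="my-sftpgw"
BUCKET="my-sftpgw-files"
# 1. Allow the Cloud SQL instance to be deleted
gcloud sql instances patch "${STACK_NAME}-db" --no-deletion-protection \
  --project="${PROJECT_ID}" --quiet || true
# 2. Empty the bucket (irreversible: deletes every uploaded file)
gcloud storage rm "gs://${BUCKET}/**" || true
# 3. Delete the Cloud SQL instance; this command waits for completion
gcloud sql instances delete "${STACK_NAME}-db" \
  --project="${PROJECT_ID}" --quiet || true
echo "Pre-delete steps attempted for stack ${STACK_NAME}"
```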
Then delete the deployment:
gcloud infra-manager deployments delete \
"projects/${PROJECT_ID}/locations/${REGION}/deployments/${STACK_NAME}" \
--project="${PROJECT_ID}"
Note: If you deployed using an existing VPC (existing_network), the VPC and its Cloud SQL private service access peering are not managed by this template and will not be deleted.
File contents
sftpgw-ha.tf
terraform {
required_providers {
google = {
source = "hashicorp/google"
version = "~> 5.0"
}
random = {
source = "hashicorp/random"
version = "~> 3.0"
}
}
}
provider "google" {
project = var.project
region = var.region
zone = var.zone
}
variable "stack_name" {
description = "Prefix applied to all resource names (lowercase letters, numbers, and hyphens only)"
type = string
}
variable "project" {
description = "GCP project ID"
type = string
}
variable "region" {
description = "GCP region to deploy into (e.g. us-east1)"
type = string
default = "us-east1"
}
variable "zone" {
description = "GCP zone for zonal resources (e.g. us-east1-c)"
type = string
default = "us-east1-c"
}
variable "machine_type" {
description = "GCP machine type for SFTP Gateway instances. e2-medium is recommended for production."
type = string
default = "e2-medium"
}
variable "instance_count" {
description = "Number of SFTP Gateway instances in the managed instance group (minimum 2 for HA)"
type = number
default = 2
}
variable "image_path" {
description = "SFTP Gateway image path (e.g. projects/sftp-gateway/global/images/sftpgw-pro-3-8-2-20260414175401)"
type = string
}
variable "web_admin_username" {
description = "Initial web admin username"
type = string
}
variable "web_admin_password" {
description = "Initial web admin password (minimum 12 characters)"
type = string
sensitive = true
}
variable "web_admin_ip_address" {
description = "Your workstation's IP in CIDR notation (e.g. 1.2.3.4/32) — restricts access to admin ports 80, 443, and 2222"
type = string
}
variable "google_storage_bucket" {
description = "Name for the GCS bucket SFTP Gateway will use for file storage (must be globally unique)"
type = string
}
variable "db_tier" {
description = "Cloud SQL instance tier. db-g1-small is suitable for testing; use db-custom-2-7680 or higher for production."
type = string
default = "db-g1-small"
}
variable "existing_network" {
description = "Name of an existing VPC network to use. If empty, a new VPC and subnet are created. When set, existing_subnet must also be provided, and the VPC must already have Cloud SQL private service access (VPC peering with servicenetworking.googleapis.com) configured."
type = string
default = ""
}
variable "existing_subnet" {
description = "Name of an existing subnet to use. Required when existing_network is set. The subnet must have Private Google Access enabled."
type = string
default = ""
}
locals {
# Cloud SQL IAM database user = service account email without .gserviceaccount.com suffix
iam_db_user = trimsuffix(google_service_account.sftpgw.email, ".gserviceaccount.com")
use_existing_network = var.existing_network != ""
# Resolve network/subnet from either the created or existing resources
network_name = local.use_existing_network ? var.existing_network : google_compute_network.main[0].name
network_id = local.use_existing_network ? data.google_compute_network.existing[0].id : google_compute_network.main[0].id
subnet_id = local.use_existing_network ? data.google_compute_subnetwork.existing[0].id : google_compute_subnetwork.main[0].id
}
# Service Account
resource "google_service_account" "sftpgw" {
account_id = "${var.stack_name}-sa"
display_name = "SFTP Gateway Service Account"
project = var.project
}
resource "google_project_iam_member" "logging" {
project = var.project
role = "roles/logging.logWriter"
member = "serviceAccount:${google_service_account.sftpgw.email}"
}
resource "google_project_iam_member" "monitoring" {
project = var.project
role = "roles/monitoring.metricWriter"
member = "serviceAccount:${google_service_account.sftpgw.email}"
}
resource "google_project_iam_member" "cloudsql_client" {
project = var.project
role = "roles/cloudsql.client"
member = "serviceAccount:${google_service_account.sftpgw.email}"
}
resource "google_project_iam_member" "cloudsql_instance_user" {
project = var.project
role = "roles/cloudsql.instanceUser"
member = "serviceAccount:${google_service_account.sftpgw.email}"
}
resource "google_project_iam_member" "secretmanager_accessor" {
project = var.project
role = "roles/secretmanager.secretAccessor"
member = "serviceAccount:${google_service_account.sftpgw.email}"
}
# VPC Network
data "google_compute_network" "existing" {
count = local.use_existing_network ? 1 : 0
name = var.existing_network
project = var.project
}
data "google_compute_subnetwork" "existing" {
count = local.use_existing_network ? 1 : 0
name = var.existing_subnet
region = var.region
project = var.project
}
resource "google_compute_network" "main" {
count = local.use_existing_network ? 0 : 1
name = "${var.stack_name}-network"
auto_create_subnetworks = false
project = var.project
}
resource "google_compute_subnetwork" "main" {
count = local.use_existing_network ? 0 : 1
name = "${var.stack_name}-subnet"
ip_cidr_range = "10.0.1.0/24"
region = var.region
network = google_compute_network.main[0].id
project = var.project
private_ip_google_access = true
}
# Firewall Rules
resource "google_compute_firewall" "sftp" {
name = "${var.stack_name}-allow-sftp"
network = local.network_name
project = var.project
description = "Allow SFTP access on port 22 from anywhere"
allow {
protocol = "tcp"
ports = ["22"]
}
source_ranges = ["0.0.0.0/0"]
target_tags = ["${var.stack_name}-sftpgw"]
}
resource "google_compute_firewall" "admin" {
name = "${var.stack_name}-allow-admin"
network = local.network_name
project = var.project
description = "Allow admin access on ports 80, 443, 2222 from the configured IP"
allow {
protocol = "tcp"
ports = ["80", "443", "2222"]
}
source_ranges = [var.web_admin_ip_address]
target_tags = ["${var.stack_name}-sftpgw"]
}
resource "google_compute_firewall" "health_check" {
name = "${var.stack_name}-allow-health-check"
network = local.network_name
project = var.project
description = "Allow GCP load balancer health checks"
allow {
protocol = "tcp"
ports = ["443"]
}
source_ranges = ["130.211.0.0/22", "35.191.0.0/16"]
target_tags = ["${var.stack_name}-sftpgw"]
}
# Private Service Access for Cloud SQL
resource "google_compute_global_address" "private_ip_range" {
count = local.use_existing_network ? 0 : 1
name = "${var.stack_name}-private-ip-range"
purpose = "VPC_PEERING"
address_type = "INTERNAL"
prefix_length = 16
network = google_compute_network.main[0].id
project = var.project
}
resource "google_service_networking_connection" "main" {
count = local.use_existing_network ? 0 : 1
network = google_compute_network.main[0].id
service = "servicenetworking.googleapis.com"
reserved_peering_ranges = [google_compute_global_address.private_ip_range[0].name]
}
# Secret Manager
resource "random_password" "db" {
length = 30
special = false
}
resource "google_secret_manager_secret" "db_password" {
secret_id = "${var.stack_name}-db-password"
project = var.project
replication {
auto {}
}
}
resource "google_secret_manager_secret_version" "db_password" {
secret = google_secret_manager_secret.db_password.id
secret_data = random_password.db.result
}
resource "random_password" "pg_admin" {
length = 30
special = false
}
resource "google_secret_manager_secret" "pg_admin_password" {
secret_id = "${var.stack_name}-pg-admin-password"
project = var.project
replication {
auto {}
}
}
resource "google_secret_manager_secret_version" "pg_admin_password" {
secret = google_secret_manager_secret.pg_admin_password.id
secret_data = random_password.pg_admin.result
}
# Cloud SQL PostgreSQL 16 (Zonal)
resource "google_sql_database_instance" "main" {
name = "${var.stack_name}-db"
database_version = "POSTGRES_16"
region = var.region
project = var.project
deletion_protection = true
settings {
tier = var.db_tier
edition = "ENTERPRISE"
availability_type = "ZONAL"
backup_configuration {
enabled = true
start_time = "00:00"
}
ip_configuration {
ipv4_enabled = false
private_network = local.network_id
}
database_flags {
name = "cloudsql.iam_authentication"
value = "on"
}
disk_type = "PD_SSD"
disk_autoresize = true
}
depends_on = [google_service_networking_connection.main]
}
resource "google_sql_database" "main" {
name = "sftpgw"
instance = google_sql_database_instance.main.name
project = var.project
}
resource "google_sql_user" "main" {
name = "sftpgw"
instance = google_sql_database_instance.main.name
password = random_password.db.result
project = var.project
}
resource "google_sql_user" "postgres" {
name = "postgres"
instance = google_sql_database_instance.main.name
password = random_password.pg_admin.result
project = var.project
}
resource "google_sql_user" "iam_sa" {
name = trimsuffix(google_service_account.sftpgw.email, ".gserviceaccount.com")
instance = google_sql_database_instance.main.name
type = "CLOUD_IAM_SERVICE_ACCOUNT"
project = var.project
}
# Cloud Storage Bucket
resource "google_storage_bucket" "main" {
name = var.google_storage_bucket
location = var.region
project = var.project
force_destroy = false
uniform_bucket_level_access = true
}
resource "google_storage_bucket_iam_member" "sftpgw" {
bucket = google_storage_bucket.main.name
role = "roles/storage.admin"
member = "serviceAccount:${google_service_account.sftpgw.email}"
}
# Instance Template
resource "google_compute_instance_template" "main" {
name_prefix = "${var.stack_name}-"
machine_type = var.machine_type
project = var.project
tags = ["${var.stack_name}-sftpgw"]
disk {
source_image = var.image_path
auto_delete = true
boot = true
disk_size_gb = 32
disk_type = "pd-ssd"
}
network_interface {
subnetwork = local.subnet_id
access_config {}
}
service_account {
email = google_service_account.sftpgw.email
scopes = ["cloud-platform"]
}
metadata = {
load_balancer_ips = google_compute_address.main.address
user-data = <<-EOT
#cloud-config
write_files:
- content: |
CLOUD_PROVIDER=gcp
ARCHITECTURE=HA
DB_HOST=${google_sql_database_instance.main.connection_name}
DB_USER=sftpgw
SECRET_ID=${google_secret_manager_secret.db_password.secret_id}
GCS_BUCKET=${var.google_storage_bucket}
LOAD_BALANCER_IPS=${google_compute_address.main.address}
path: /opt/sftpgw/launch_config.env
permissions: '0600'
- content: |
#!/bin/bash
systemctl daemon-reload
exit 0
path: /var/lib/cloud/scripts/per-instance/00-sftpgw-grants.sh
permissions: '0755'
- content: |
#!/bin/bash
_token() {
curl -sf \
-H "Metadata-Flavor: Google" \
"http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" \
| python3 -c "import sys,json;print(json.load(sys.stdin)['access_token'])"
}
PG_PASS=$(curl -sf \
-H "Authorization: Bearer $(_token)" \
"https://secretmanager.googleapis.com/v1/projects/${var.project}/secrets/${google_secret_manager_secret.pg_admin_password.secret_id}/versions/latest:access" \
| python3 -c "import sys,json,base64;d=json.load(sys.stdin);print(base64.b64decode(d['payload']['data']).decode())")
PGPASSWORD="$PG_PASS" psql -h 127.0.0.1 -p 5432 -U postgres -d postgres \
-c "GRANT cloudsqlsuperuser TO \"${local.iam_db_user}\";" \
-c "GRANT ALL ON SCHEMA public TO \"${local.iam_db_user}\";" \
-c "CREATE EXTENSION IF NOT EXISTS ltree;" 2>&1 || true
exit 0
path: /usr/local/bin/sftpgw-apply-grants.sh
permissions: '0755'
- content: |
[Service]
ExecStartPre=/usr/local/bin/sftpgw-apply-grants.sh
path: /etc/systemd/system/sftpgw-admin-api.service.d/99-grants.conf
permissions: '0644'
runcmd:
- |
systemctl stop sftpgw-admin-api.service 2>/dev/null || true
_sm_secret() {
local secret_id="$1"
local token
token=$(curl -sf \
-H "Metadata-Flavor: Google" \
"http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" \
| python3 -c "import sys,json;print(json.load(sys.stdin)['access_token'])")
curl -sf \
-H "Authorization: Bearer $token" \
"https://secretmanager.googleapis.com/v1/projects/${var.project}/secrets/$secret_id/versions/latest:access" \
| python3 -c "import sys,json,base64;d=json.load(sys.stdin);print(base64.b64decode(d['payload']['data']).decode())"
}
PG_ADMIN_PASS=$(_sm_secret "${google_secret_manager_secret.pg_admin_password.secret_id}" 2>/dev/null)
PGPASSWORD="$PG_ADMIN_PASS" psql -h 127.0.0.1 -p 5432 -U postgres -d postgres \
-c "GRANT cloudsqlsuperuser TO \"${local.iam_db_user}\";" \
-c "GRANT ALL ON SCHEMA public TO \"${local.iam_db_user}\";" \
-c "CREATE EXTENSION IF NOT EXISTS ltree;" 2>&1
/opt/cloud-sql-proxy/cloud-sql-proxy \
--port=5433 \
"${var.project}:${var.region}:${google_sql_database_instance.main.name}" \
--private-ip &
ADMIN_PROXY_PID=$!
sleep 10
PGPASSWORD="$PG_ADMIN_PASS" psql -h 127.0.0.1 -p 5433 -U postgres -d postgres \
-c "GRANT cloudsqlsuperuser TO \"${local.iam_db_user}\";" \
-c "GRANT ALL ON SCHEMA public TO \"${local.iam_db_user}\";" \
-c "CREATE EXTENSION IF NOT EXISTS ltree;" 2>&1
kill $ADMIN_PROXY_PID 2>/dev/null || true
systemctl reset-failed sftpgw-admin-api.service
systemctl start sftpgw-admin-api.service
for i in $(seq 1 18); do
if systemctl is-active --quiet sftpgw-admin-api.service; then
echo "[boot] Service active on attempt $i"
break
fi
sleep 10
done
echo "[boot] Waiting 120s for Spring Boot to finish initialising..."
sleep 120
LB_IP=$(grep "^LOAD_BALANCER_IPS=" /opt/sftpgw/launch_config.env 2>/dev/null | cut -d= -f2)
if [ -n "$LB_IP" ]; then
mkdir -p /etc/nginx/conf.d
printf 'set_real_ip_from %s;\nreal_ip_header X-Forwarded-For;\nreal_ip_recursive on;\n' \
"$LB_IP" > /etc/nginx/conf.d/realip.conf
echo "[boot] nginx realip configured for $LB_IP"
fi
systemctl reload nginx 2>/dev/null && echo "[boot] nginx reloaded" \
|| { systemctl start nginx && echo "[boot] nginx started"; }
for i in $(seq 1 18); do
HTTP_CODE=$(curl -s -o /dev/null -w "%%{http_code}" \
-X "POST" "http://localhost:8080/3.0.0/admin/config" \
-H "Content-Type: application/json" \
-d "{\"password\": \"${var.web_admin_password}\",\"username\": \"${var.web_admin_username}\"}" \
2>/dev/null)
if [ "$HTTP_CODE" = "200" ] || [ "$HTTP_CODE" = "201" ] || \
[ "$HTTP_CODE" = "401" ] || [ "$HTTP_CODE" = "404" ] || \
[ "$HTTP_CODE" = "409" ]; then
echo "[boot] Admin config done (HTTP $HTTP_CODE) on attempt $i"
break
fi
sleep 10
done
EOT
}
lifecycle {
create_before_destroy = true
}
}
# Health Check
resource "google_compute_health_check" "main" {
name = "${var.stack_name}-health-check"
project = var.project
check_interval_sec = 10
timeout_sec = 5
https_health_check {
port = 443
request_path = "/index.html"
}
}
# Static External IP
resource "google_compute_address" "main" {
name = "${var.stack_name}-ip"
region = var.region
project = var.project
}
# Target Pool and Regional Managed Instance Group
resource "google_compute_target_pool" "main" {
name = "${var.stack_name}-pool"
region = var.region
project = var.project
}
resource "google_compute_region_instance_group_manager" "main" {
name = "${var.stack_name}-igm"
base_instance_name = var.stack_name
region = var.region
project = var.project
target_size = var.instance_count
target_pools = [google_compute_target_pool.main.id]
version {
instance_template = google_compute_instance_template.main.id
}
auto_healing_policies {
health_check = google_compute_health_check.main.id
initial_delay_sec = 300
}
}
# Forwarding Rules
resource "google_compute_forwarding_rule" "sftp" {
name = "${var.stack_name}-forward-sftp"
region = var.region
project = var.project
ip_address = google_compute_address.main.address
ip_protocol = "TCP"
port_range = "22"
target = google_compute_target_pool.main.id
}
resource "google_compute_forwarding_rule" "http" {
name = "${var.stack_name}-forward-http"
region = var.region
project = var.project
ip_address = google_compute_address.main.address
ip_protocol = "TCP"
port_range = "80"
target = google_compute_target_pool.main.id
}
resource "google_compute_forwarding_rule" "https" {
name = "${var.stack_name}-forward-https"
region = var.region
project = var.project
ip_address = google_compute_address.main.address
ip_protocol = "TCP"
port_range = "443"
target = google_compute_target_pool.main.id
}
resource "google_compute_forwarding_rule" "admin" {
name = "${var.stack_name}-forward-admin"
region = var.region
project = var.project
ip_address = google_compute_address.main.address
ip_protocol = "TCP"
port_range = "2222"
target = google_compute_target_pool.main.id
}
# Outputs
output "load_balancer_ip" {
value = google_compute_address.main.address
description = "Static public IP address — use this as your SFTP Gateway hostname"
}
output "storage_bucket" {
value = google_storage_bucket.main.name
description = "GCS bucket used for SFTP file storage"
}
terraform.tfvars
Make sure you replace these values with your own.
project = "your-gcp-project-id"
region = "us-east1"
zone = "us-east1-c"
stack_name = "my-sftpgw"
machine_type = "e2-medium"
instance_count = 2
image_path = "projects/sftp-gateway/global/images/sftpgw-pro-3-8-2-20260414175401"
web_admin_ip_address = "1.2.3.4/32" // replace with your IP address followed by /32
web_admin_username = "admin"
web_admin_password = "replace-with-a-strong-password" // minimum 12 characters
google_storage_bucket = "my-sftpgw-files" // must be globally unique
db_tier = "db-g1-small" // use db-custom-2-7680 for production
// Optional: deploy into an existing VPC instead of creating a new one.
// The VPC must already have Cloud SQL private service access configured.
// The subnet must have Private Google Access enabled.
// existing_network = "my-existing-vpc"
// existing_subnet = "my-existing-subnet"