HA Infrastructure Manager Template — Shared VPC
TLDR - Quick Summary
What: Deploy SFTP Gateway Professional HA using a VPC that lives in a separate GCP host project (Shared VPC / XPN)
When: Your organization centralizes network resources in a dedicated shared-network project and deploys workloads into separate service projects
Difference from standard HA template: Adds a `host_project` variable; an existing VPC/subnet is required (no new-VPC creation path); firewall rules deploy to the host project; one additional IM service account role grant is needed in the host project
Overview
In GCP Shared VPC, one project is the host project — it owns the VPC, subnets, and firewall rules. Other projects (called service projects) are attached to the host and launch resources into its subnets. This article covers setting up the shared network infrastructure, then deploying SFTP Gateway into the service project using it.
| Resource | Where it lives |
|---|---|
| VPC, subnet, firewall rules | Host project |
| Cloud SQL, GCS bucket, secrets, load balancer, MIG | Service project |
| SFTP Gateway service account | Service project |
Prerequisites
- GCP SDK (`gcloud`) installed and authenticated
- Owner or Editor on both the host project and the service project
- `roles/compute.xpnAdmin` at the organization level — required only for Steps 1 and 2 (enabling Shared VPC and attaching the service project). This is an org-level role; if you don't have it, ask your GCP org admin to run those two commands or grant you the role.
- Both projects must have billing enabled
To check who can grant org-level permissions, contact your GCP organization administrator.
Step 1: Enable Shared VPC on the Host Project
Requires `roles/compute.xpnAdmin` at the org level.
HOST_PROJECT="your-host-project-id"
gcloud compute shared-vpc enable "${HOST_PROJECT}"
Validate:
gcloud compute shared-vpc organizations list-host-projects \
$(gcloud projects describe "${HOST_PROJECT}" --format='value(parent.id)') \
--filter="name:${HOST_PROJECT}" \
--format="table(name,xpnProjectStatus)"
Expected output — xpnProjectStatus should be HOST:
NAME XPN_PROJECT_STATUS
your-host-project-id HOST
Step 2: Attach the Service Project
Requires `roles/compute.xpnAdmin` at the org level.
SERVICE_PROJECT="your-service-project-id"
gcloud compute shared-vpc associated-projects add "${SERVICE_PROJECT}" \
--host-project="${HOST_PROJECT}"
Validate:
gcloud compute shared-vpc associated-projects list "${HOST_PROJECT}"
Expected output — your service project should appear:
RESOURCE_ID RESOURCE_TYPE
your-service-project-id PROJECT
Step 3: Enable Required APIs
Host project (Compute Engine is the only one needed there):
gcloud services enable compute.googleapis.com \
--project="${HOST_PROJECT}"
Service project (all required for the HA template):
gcloud services enable \
compute.googleapis.com \
sqladmin.googleapis.com \
servicenetworking.googleapis.com \
secretmanager.googleapis.com \
cloudresourcemanager.googleapis.com \
iam.googleapis.com \
config.googleapis.com \
--project="${SERVICE_PROJECT}"
Validate:
gcloud services list --project="${SERVICE_PROJECT}" --enabled \
--filter="name:(compute.googleapis.com OR sqladmin.googleapis.com OR servicenetworking.googleapis.com OR secretmanager.googleapis.com OR cloudresourcemanager.googleapis.com OR iam.googleapis.com OR config.googleapis.com)" \
--format="table(name)"
All seven services should appear in the output.
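To avoid eyeballing seven lines, a small local helper can diff the required set against whatever the validation command prints (`missing_apis` is our own sketch, not a gcloud command):

```shell
#!/usr/bin/env bash
# Required APIs for the HA template (same list as the enable command above)
REQUIRED="compute.googleapis.com sqladmin.googleapis.com \
servicenetworking.googleapis.com secretmanager.googleapis.com \
cloudresourcemanager.googleapis.com iam.googleapis.com config.googleapis.com"

# Reads enabled API names on stdin, prints any required API that is absent.
# Substring match (-F, no -x) so full resource names also match.
missing_apis() {
  local enabled api
  enabled=$(cat)
  for api in $REQUIRED; do
    echo "$enabled" | grep -qF "$api" || echo "MISSING: $api"
  done
}
```

Pipe the live list in with `gcloud services list --project="${SERVICE_PROJECT}" --enabled --format="value(name)" | missing_apis`. No output means all seven are enabled.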
Step 4: Create the Shared VPC and Subnet
Create a custom VPC and subnet in the host project. A custom-mode VPC (not auto-mode) is strongly recommended: auto-mode creates a subnet in every region, which can lead to unintended IP range conflicts.
REGION="us-east1"
VPC_NAME="sftpgw-shared-vpc"
SUBNET_NAME="sftpgw-subnet"
SUBNET_CIDR="10.10.0.0/24"
# Create the VPC
gcloud compute networks create "${VPC_NAME}" \
--project="${HOST_PROJECT}" \
--subnet-mode=custom \
--bgp-routing-mode=regional
# Create the subnet with Private Google Access enabled
# Private Google Access is required for instances to reach Secret Manager,
# Cloud SQL, and other Google APIs without an external IP address
gcloud compute networks subnets create "${SUBNET_NAME}" \
--project="${HOST_PROJECT}" \
--network="${VPC_NAME}" \
--region="${REGION}" \
--range="${SUBNET_CIDR}" \
--enable-private-ip-google-access
Validate:
gcloud compute networks describe "${VPC_NAME}" \
--project="${HOST_PROJECT}" \
--format="table(name,subnetMode,routingConfig.routingMode)"
gcloud compute networks subnets describe "${SUBNET_NAME}" \
--project="${HOST_PROJECT}" \
--region="${REGION}" \
--format="table(name,ipCidrRange,privateIpGoogleAccess,region)"
Expected — privateIpGoogleAccess must be True:
NAME IP_CIDR_RANGE PRIVATE_IP_GOOGLE_ACCESS REGION
sftpgw-subnet 10.10.0.0/24 True us-east1
Step 5: Configure Cloud SQL Private Service Access
Cloud SQL uses a private IP that requires VPC peering with Google's service networking. This is configured once per VPC.
If you already have a Cloud SQL instance using a private IP on this VPC, skip this step — the peering already exists.
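The command below omits the `--addresses` flag, so GCP auto-selects a free /16. If your network team instead pins the range explicitly (by adding `--addresses`), first confirm it doesn't overlap the subnet CIDR from Step 4. A quick local check, using a made-up peering range for illustration:

```shell
SUBNET_CIDR="10.10.0.0/24"    # from Step 4
PEERING_CIDR="10.20.0.0/16"   # hypothetical pinned range

# Uses Python's stdlib ipaddress module for the CIDR math
python3 -c "
import ipaddress, sys
a = ipaddress.ip_network('${SUBNET_CIDR}')
b = ipaddress.ip_network('${PEERING_CIDR}')
sys.exit(1 if a.overlaps(b) else 0)
" && echo "no overlap" || echo "OVERLAP: pick a different range"
```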
RANGE_NAME="sftpgw-cloudsql-ip-range"
# Allocate an internal IP range for Cloud SQL
gcloud compute addresses create "${RANGE_NAME}" \
--project="${HOST_PROJECT}" \
--global \
--purpose=VPC_PEERING \
--prefix-length=16 \
--network="${VPC_NAME}"
# Create the private service access peering connection
gcloud services vpc-peerings connect \
--project="${HOST_PROJECT}" \
--service=servicenetworking.googleapis.com \
--network="${VPC_NAME}" \
--ranges="${RANGE_NAME}"
Validate:
# Confirm the IP range is reserved
gcloud compute addresses describe "${RANGE_NAME}" \
--project="${HOST_PROJECT}" \
--global \
--format="table(name,purpose,prefixLength,status)"
Expected — status must be RESERVED:
NAME PURPOSE PREFIX_LENGTH STATUS
sftpgw-cloudsql-ip-range VPC_PEERING 16 RESERVED
# Confirm the peering connection is active
gcloud services vpc-peerings list \
--project="${HOST_PROJECT}" \
--network="${VPC_NAME}"
Expected — servicenetworking-googleapis-com peering should appear with your range:
network: projects/HOST_PROJECT_NUMBER/global/networks/sftpgw-shared-vpc
peering: servicenetworking-googleapis-com
reservedPeeringRanges:
- sftpgw-cloudsql-ip-range
service: services/servicenetworking.googleapis.com
Step 6: Create the Infrastructure Manager Service Account
Create the service account in the service project and grant it roles in both projects.
# Create the service account
gcloud iam service-accounts create infra-manager-sa \
--display-name="Infrastructure Manager Service Account" \
--project="${SERVICE_PROJECT}"
IM_SA="infra-manager-sa@${SERVICE_PROJECT}.iam.gserviceaccount.com"
# Grant service project roles
for ROLE in \
roles/compute.admin \
roles/iam.serviceAccountAdmin \
roles/iam.serviceAccountUser \
roles/cloudsql.admin \
roles/storage.admin \
roles/secretmanager.admin \
roles/servicenetworking.networksAdmin \
roles/resourcemanager.projectIamAdmin \
roles/config.agent; do
gcloud projects add-iam-policy-binding "${SERVICE_PROJECT}" \
--member="serviceAccount:${IM_SA}" \
--role="${ROLE}"
done
# Grant host project roles
# compute.networkUser — allows Terraform to reference the shared subnet
# compute.securityAdmin — allows Terraform to create firewall rules in the host project
for ROLE in \
roles/compute.networkUser \
roles/compute.securityAdmin; do
gcloud projects add-iam-policy-binding "${HOST_PROJECT}" \
--member="serviceAccount:${IM_SA}" \
--role="${ROLE}"
done
Then initialize the Infrastructure Manager service identity and allow it to impersonate the service account:
PROJECT_NUMBER=$(gcloud projects describe "${SERVICE_PROJECT}" --format='value(projectNumber)')
# Create the IM service identity
gcloud --quiet beta services identity create \
--service=config.googleapis.com \
--project="${SERVICE_PROJECT}"
# Grant it the ability to act as the deployment service account
gcloud iam service-accounts add-iam-policy-binding "${IM_SA}" \
--member="serviceAccount:service-${PROJECT_NUMBER}@gcp-sa-config.iam.gserviceaccount.com" \
--role="roles/iam.serviceAccountTokenCreator" \
--project="${SERVICE_PROJECT}"
Validate:
# Verify service project roles (should list all 9 roles)
gcloud projects get-iam-policy "${SERVICE_PROJECT}" \
--flatten="bindings[].members" \
--filter="bindings.members:${IM_SA}" \
--format="table(bindings.role)"
Expected:
ROLE
roles/cloudsql.admin
roles/compute.admin
roles/config.agent
roles/iam.serviceAccountAdmin
roles/iam.serviceAccountUser
roles/resourcemanager.projectIamAdmin
roles/secretmanager.admin
roles/servicenetworking.networksAdmin
roles/storage.admin
# Verify host project roles (should list both)
gcloud projects get-iam-policy "${HOST_PROJECT}" \
--flatten="bindings[].members" \
--filter="bindings.members:${IM_SA}" \
--format="table(bindings.role)"
Expected:
ROLE
roles/compute.networkUser
roles/compute.securityAdmin
# Verify IM service identity impersonation
gcloud iam service-accounts get-iam-policy "${IM_SA}" \
--project="${SERVICE_PROJECT}" \
--flatten="bindings[].members" \
--filter="bindings.members:service-${PROJECT_NUMBER}@gcp-sa-config.iam.gserviceaccount.com" \
--format="table(bindings.role)"
Expected:
ROLE
roles/iam.serviceAccountTokenCreator
Step 7: Create the Terraform Files
Create a working directory and add the following two files.
sftpgw-ha.tf
terraform {
required_providers {
google = {
source = "hashicorp/google"
version = "~> 5.0"
}
random = {
source = "hashicorp/random"
version = "~> 3.0"
}
}
}
provider "google" {
project = var.project
region = var.region
zone = var.zone
}
# ---------------------------------------------------------------------------
# Variables
# ---------------------------------------------------------------------------
variable "stack_name" {
description = "Prefix applied to all resource names (lowercase letters, numbers, and hyphens only)"
type = string
}
variable "project" {
description = "GCP project ID of the service project where SFTP Gateway resources are deployed"
type = string
}
variable "host_project" {
description = "GCP project ID of the Shared VPC host project (where the VPC and subnet live)"
type = string
}
variable "region" {
description = "GCP region to deploy into (e.g. us-east1)"
type = string
default = "us-east1"
}
variable "zone" {
description = "GCP zone for zonal resources (e.g. us-east1-c)"
type = string
default = "us-east1-c"
}
variable "machine_type" {
description = "GCP machine type for SFTP Gateway instances. e2-medium is recommended for production."
type = string
default = "e2-medium"
}
variable "instance_count" {
description = "Number of SFTP Gateway instances in the managed instance group (minimum 2 for HA)"
type = number
default = 2
}
variable "image_path" {
description = "SFTP Gateway image path (e.g. projects/sftp-gateway/global/images/sftpgw-pro-3-8-2-20260414175401)"
type = string
}
variable "web_admin_username" {
description = "Initial web admin username"
type = string
}
variable "web_admin_password" {
description = "Initial web admin password (minimum 12 characters)"
type = string
sensitive = true
}
variable "web_admin_ip_address" {
description = "Your workstation's IP in CIDR notation (e.g. 1.2.3.4/32) — restricts access to admin ports 80, 443, and 2222"
type = string
}
variable "google_storage_bucket" {
description = "Name for the GCS bucket SFTP Gateway will use for file storage (must be globally unique)"
type = string
}
variable "db_tier" {
description = "Cloud SQL instance tier. db-g1-small is suitable for testing; use db-custom-2-7680 or higher for production."
type = string
default = "db-g1-small"
}
variable "existing_network" {
description = "Name of the shared VPC network in host_project. The VPC must already have Cloud SQL private service access configured."
type = string
}
variable "existing_subnet" {
description = "Name of the shared subnet in host_project. The subnet must have Private Google Access enabled."
type = string
}
# ---------------------------------------------------------------------------
# Locals
# ---------------------------------------------------------------------------
locals {
iam_db_user = trimsuffix(google_service_account.sftpgw.email, ".gserviceaccount.com")
}
# ---------------------------------------------------------------------------
# Service Account (in service project)
# ---------------------------------------------------------------------------
resource "google_service_account" "sftpgw" {
account_id = "${var.stack_name}-sa"
display_name = "SFTP Gateway Service Account"
project = var.project
}
resource "google_project_iam_member" "logging" {
project = var.project
role = "roles/logging.logWriter"
member = "serviceAccount:${google_service_account.sftpgw.email}"
}
resource "google_project_iam_member" "monitoring" {
project = var.project
role = "roles/monitoring.metricWriter"
member = "serviceAccount:${google_service_account.sftpgw.email}"
}
resource "google_project_iam_member" "cloudsql_client" {
project = var.project
role = "roles/cloudsql.client"
member = "serviceAccount:${google_service_account.sftpgw.email}"
}
resource "google_project_iam_member" "cloudsql_instance_user" {
project = var.project
role = "roles/cloudsql.instanceUser"
member = "serviceAccount:${google_service_account.sftpgw.email}"
}
resource "google_project_iam_member" "secretmanager_accessor" {
project = var.project
role = "roles/secretmanager.secretAccessor"
member = "serviceAccount:${google_service_account.sftpgw.email}"
}
# ---------------------------------------------------------------------------
# Shared VPC — look up network and subnet from host project
# ---------------------------------------------------------------------------
data "google_compute_network" "existing" {
name = var.existing_network
project = var.host_project
}
data "google_compute_subnetwork" "existing" {
name = var.existing_subnet
region = var.region
project = var.host_project
}
# ---------------------------------------------------------------------------
# Firewall Rules (must live in the host project alongside the shared VPC)
# ---------------------------------------------------------------------------
resource "google_compute_firewall" "sftp" {
name = "${var.stack_name}-allow-sftp"
network = data.google_compute_network.existing.self_link
project = var.host_project
description = "Allow SFTP access on port 22 from anywhere"
allow {
protocol = "tcp"
ports = ["22"]
}
source_ranges = ["0.0.0.0/0"]
target_tags = ["${var.stack_name}-sftpgw"]
}
resource "google_compute_firewall" "admin" {
name = "${var.stack_name}-allow-admin"
network = data.google_compute_network.existing.self_link
project = var.host_project
description = "Allow admin access on ports 80, 443, 2222 from the configured IP"
allow {
protocol = "tcp"
ports = ["80", "443", "2222"]
}
source_ranges = [var.web_admin_ip_address]
target_tags = ["${var.stack_name}-sftpgw"]
}
resource "google_compute_firewall" "health_check" {
name = "${var.stack_name}-allow-health-check"
network = data.google_compute_network.existing.self_link
project = var.host_project
description = "Allow GCP load balancer health checks"
allow {
protocol = "tcp"
ports = ["443"]
}
source_ranges = ["130.211.0.0/22", "35.191.0.0/16"]
target_tags = ["${var.stack_name}-sftpgw"]
}
# ---------------------------------------------------------------------------
# Secret Manager (Cloud SQL passwords)
# ---------------------------------------------------------------------------
resource "random_password" "db" {
length = 30
special = false
}
resource "google_secret_manager_secret" "db_password" {
secret_id = "${var.stack_name}-db-password"
project = var.project
replication {
auto {}
}
}
resource "google_secret_manager_secret_version" "db_password" {
secret = google_secret_manager_secret.db_password.id
secret_data = random_password.db.result
}
resource "random_password" "pg_admin" {
length = 30
special = false
}
resource "google_secret_manager_secret" "pg_admin_password" {
secret_id = "${var.stack_name}-pg-admin-password"
project = var.project
replication {
auto {}
}
}
resource "google_secret_manager_secret_version" "pg_admin_password" {
secret = google_secret_manager_secret.pg_admin_password.id
secret_data = random_password.pg_admin.result
}
# ---------------------------------------------------------------------------
# Cloud SQL PostgreSQL 16 (private IP via shared VPC peering)
# ---------------------------------------------------------------------------
resource "google_sql_database_instance" "main" {
name = "${var.stack_name}-db"
database_version = "POSTGRES_16"
region = var.region
project = var.project
deletion_protection = true
settings {
tier = var.db_tier
edition = "ENTERPRISE"
availability_type = "ZONAL"
backup_configuration {
enabled = true
start_time = "00:00"
}
ip_configuration {
ipv4_enabled = false
private_network = data.google_compute_network.existing.id
}
database_flags {
name = "cloudsql.iam_authentication"
value = "on"
}
disk_type = "PD_SSD"
disk_autoresize = true
}
}
resource "google_sql_database" "main" {
name = "sftpgw"
instance = google_sql_database_instance.main.name
project = var.project
}
resource "google_sql_user" "main" {
name = "sftpgw"
instance = google_sql_database_instance.main.name
password = random_password.db.result
project = var.project
}
resource "google_sql_user" "postgres" {
name = "postgres"
instance = google_sql_database_instance.main.name
password = random_password.pg_admin.result
project = var.project
}
resource "google_sql_user" "iam_sa" {
name = trimsuffix(google_service_account.sftpgw.email, ".gserviceaccount.com")
instance = google_sql_database_instance.main.name
type = "CLOUD_IAM_SERVICE_ACCOUNT"
project = var.project
}
# ---------------------------------------------------------------------------
# Cloud Storage Bucket
# ---------------------------------------------------------------------------
resource "google_storage_bucket" "main" {
name = var.google_storage_bucket
location = var.region
project = var.project
force_destroy = false
uniform_bucket_level_access = true
}
resource "google_storage_bucket_iam_member" "sftpgw" {
bucket = google_storage_bucket.main.name
role = "roles/storage.admin"
member = "serviceAccount:${google_service_account.sftpgw.email}"
}
# ---------------------------------------------------------------------------
# Instance Template
# ---------------------------------------------------------------------------
resource "google_compute_instance_template" "main" {
name_prefix = "${var.stack_name}-"
machine_type = var.machine_type
project = var.project
tags = ["${var.stack_name}-sftpgw"]
disk {
source_image = var.image_path
auto_delete = true
boot = true
disk_size_gb = 32
disk_type = "pd-ssd"
}
network_interface {
subnetwork = data.google_compute_subnetwork.existing.self_link
access_config {}
}
service_account {
email = google_service_account.sftpgw.email
scopes = ["cloud-platform"]
}
metadata = {
load_balancer_ips = google_compute_address.main.address
user-data = <<-EOT
#cloud-config
write_files:
- content: |
CLOUD_PROVIDER=gcp
ARCHITECTURE=HA
DB_HOST=${google_sql_database_instance.main.connection_name}
DB_USER=sftpgw
SECRET_ID=${google_secret_manager_secret.db_password.secret_id}
GCS_BUCKET=${var.google_storage_bucket}
LOAD_BALANCER_IPS=${google_compute_address.main.address}
path: /opt/sftpgw/launch_config.env
permissions: '0600'
- content: |
#!/bin/bash
systemctl daemon-reload
exit 0
path: /var/lib/cloud/scripts/per-instance/00-sftpgw-grants.sh
permissions: '0755'
- content: |
#!/bin/bash
_token() {
curl -sf \
-H "Metadata-Flavor: Google" \
"http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" \
| python3 -c "import sys,json;print(json.load(sys.stdin)['access_token'])"
}
PG_PASS=$(curl -sf \
-H "Authorization: Bearer $(_token)" \
"https://secretmanager.googleapis.com/v1/projects/${var.project}/secrets/${google_secret_manager_secret.pg_admin_password.secret_id}/versions/latest:access" \
| python3 -c "import sys,json,base64;d=json.load(sys.stdin);print(base64.b64decode(d['payload']['data']).decode())")
PGPASSWORD="$PG_PASS" psql -h 127.0.0.1 -p 5432 -U postgres -d postgres \
-c "GRANT cloudsqlsuperuser TO \"${local.iam_db_user}\";" \
-c "GRANT ALL ON SCHEMA public TO \"${local.iam_db_user}\";" \
-c "CREATE EXTENSION IF NOT EXISTS ltree;" 2>&1 || true
exit 0
path: /usr/local/bin/sftpgw-apply-grants.sh
permissions: '0755'
- content: |
[Service]
ExecStartPre=/usr/local/bin/sftpgw-apply-grants.sh
path: /etc/systemd/system/sftpgw-admin-api.service.d/99-grants.conf
permissions: '0644'
runcmd:
- |
systemctl stop sftpgw-admin-api.service 2>/dev/null || true
_sm_secret() {
local secret_id="$1"
local token
token=$(curl -sf \
-H "Metadata-Flavor: Google" \
"http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" \
| python3 -c "import sys,json;print(json.load(sys.stdin)['access_token'])")
curl -sf \
-H "Authorization: Bearer $token" \
"https://secretmanager.googleapis.com/v1/projects/${var.project}/secrets/$secret_id/versions/latest:access" \
| python3 -c "import sys,json,base64;d=json.load(sys.stdin);print(base64.b64decode(d['payload']['data']).decode())"
}
PG_ADMIN_PASS=$(_sm_secret "${google_secret_manager_secret.pg_admin_password.secret_id}" 2>/dev/null)
PGPASSWORD="$PG_ADMIN_PASS" psql -h 127.0.0.1 -p 5432 -U postgres -d postgres \
-c "GRANT cloudsqlsuperuser TO \"${local.iam_db_user}\";" \
-c "GRANT ALL ON SCHEMA public TO \"${local.iam_db_user}\";" \
-c "CREATE EXTENSION IF NOT EXISTS ltree;" 2>&1
/opt/cloud-sql-proxy/cloud-sql-proxy \
--port=5433 \
"${var.project}:${var.region}:${google_sql_database_instance.main.name}" \
--private-ip &
ADMIN_PROXY_PID=$!
sleep 10
PGPASSWORD="$PG_ADMIN_PASS" psql -h 127.0.0.1 -p 5433 -U postgres -d postgres \
-c "GRANT cloudsqlsuperuser TO \"${local.iam_db_user}\";" \
-c "GRANT ALL ON SCHEMA public TO \"${local.iam_db_user}\";" \
-c "CREATE EXTENSION IF NOT EXISTS ltree;" 2>&1
kill $ADMIN_PROXY_PID 2>/dev/null || true
systemctl reset-failed sftpgw-admin-api.service
systemctl start sftpgw-admin-api.service
for i in $(seq 1 18); do
if systemctl is-active --quiet sftpgw-admin-api.service; then
echo "[boot] Service active on attempt $i"
break
fi
echo "[boot] Service not active yet, attempt $i/18..."
sleep 10
done
echo "[boot] Waiting 120s for Spring Boot to finish initialising..."
sleep 120
LB_IP=$(grep "^LOAD_BALANCER_IPS=" /opt/sftpgw/launch_config.env 2>/dev/null | cut -d= -f2)
if [ -n "$LB_IP" ]; then
mkdir -p /etc/nginx/conf.d
printf 'set_real_ip_from %s;\nreal_ip_header X-Forwarded-For;\nreal_ip_recursive on;\n' \
"$LB_IP" > /etc/nginx/conf.d/realip.conf
echo "[boot] nginx realip configured for $LB_IP"
fi
systemctl reload nginx 2>/dev/null && echo "[boot] nginx reloaded" \
|| { systemctl start nginx && echo "[boot] nginx started"; }
for i in $(seq 1 18); do
HTTP_CODE=$(curl -s -o /dev/null -w "%%{http_code}" \
-X "POST" "http://localhost:8080/3.0.0/admin/config" \
-H "Content-Type: application/json" \
-d "{\"password\": \"${var.web_admin_password}\",\"username\": \"${var.web_admin_username}\"}" \
2>/dev/null)
if [ "$HTTP_CODE" = "200" ] || [ "$HTTP_CODE" = "201" ] || \
[ "$HTTP_CODE" = "401" ] || [ "$HTTP_CODE" = "404" ] || \
[ "$HTTP_CODE" = "409" ]; then
echo "[boot] Admin config done (HTTP $HTTP_CODE) on attempt $i"
break
fi
echo "[boot] Admin config attempt $i/18 not ready (HTTP $HTTP_CODE)..."
sleep 10
done
EOT
}
lifecycle {
create_before_destroy = true
}
}
# ---------------------------------------------------------------------------
# Health Check (MIG auto-healing)
# ---------------------------------------------------------------------------
resource "google_compute_health_check" "main" {
name = "${var.stack_name}-health-check"
project = var.project
check_interval_sec = 10
timeout_sec = 5
https_health_check {
port = 443
request_path = "/index.html"
}
}
# ---------------------------------------------------------------------------
# Static External IP
# ---------------------------------------------------------------------------
resource "google_compute_address" "main" {
name = "${var.stack_name}-ip"
region = var.region
project = var.project
}
# ---------------------------------------------------------------------------
# Target Pool + Regional Managed Instance Group
# ---------------------------------------------------------------------------
resource "google_compute_target_pool" "main" {
name = "${var.stack_name}-pool"
region = var.region
project = var.project
}
resource "google_compute_region_instance_group_manager" "main" {
name = "${var.stack_name}-igm"
base_instance_name = var.stack_name
region = var.region
project = var.project
target_size = var.instance_count
target_pools = [google_compute_target_pool.main.id]
version {
instance_template = google_compute_instance_template.main.id
}
auto_healing_policies {
health_check = google_compute_health_check.main.id
initial_delay_sec = 300
}
}
# ---------------------------------------------------------------------------
# Forwarding Rules (one per port)
# ---------------------------------------------------------------------------
resource "google_compute_forwarding_rule" "sftp" {
name = "${var.stack_name}-forward-sftp"
region = var.region
project = var.project
ip_address = google_compute_address.main.address
ip_protocol = "TCP"
port_range = "22"
target = google_compute_target_pool.main.id
}
resource "google_compute_forwarding_rule" "http" {
name = "${var.stack_name}-forward-http"
region = var.region
project = var.project
ip_address = google_compute_address.main.address
ip_protocol = "TCP"
port_range = "80"
target = google_compute_target_pool.main.id
}
resource "google_compute_forwarding_rule" "https" {
name = "${var.stack_name}-forward-https"
region = var.region
project = var.project
ip_address = google_compute_address.main.address
ip_protocol = "TCP"
port_range = "443"
target = google_compute_target_pool.main.id
}
resource "google_compute_forwarding_rule" "admin" {
name = "${var.stack_name}-forward-admin"
region = var.region
project = var.project
ip_address = google_compute_address.main.address
ip_protocol = "TCP"
port_range = "2222"
target = google_compute_target_pool.main.id
}
# ---------------------------------------------------------------------------
# Outputs
# ---------------------------------------------------------------------------
output "load_balancer_ip" {
value = google_compute_address.main.address
description = "Static public IP address — use this as your SFTP Gateway hostname"
}
output "storage_bucket" {
value = google_storage_bucket.main.name
description = "GCS bucket used for SFTP file storage"
}
terraform.tfvars
# GCP project ID of the service project where SFTP Gateway resources are deployed
project = "your-service-project-id"
# GCP project ID of the Shared VPC host project (where the VPC and subnet live)
host_project = "your-shared-network-project-id"
# GCP region to deploy into
region = "us-east1"
# GCP zone for zonal resources
zone = "us-east1-c"
# Prefix for all resource names (lowercase letters, numbers, and hyphens only)
stack_name = "my-sftpgw"
# GCP machine type for SFTP Gateway instances
machine_type = "e2-medium"
# Number of SFTP Gateway instances (minimum 2 for HA)
instance_count = 2
# SFTP Gateway Marketplace image path
# Find the latest image path in the GCP Marketplace:
# Console → Marketplace → SFTP Gateway Professional → View Launch Options → Image
image_path = "https://www.googleapis.com/compute/v1/projects/mpi-thorn-technologies-public/global/images/sftpgw-byol-3-8-1-20260326223539"
# Your workstation's public IP address with /32 suffix
# Restricts access to admin ports 80, 443, and 2222
web_admin_ip_address = "1.2.3.4/32"
# Initial web admin credentials
web_admin_username = "admin"
web_admin_password = "YourSecurePassword123!"
# GCS bucket name for SFTP file storage (must be globally unique across all of GCS)
google_storage_bucket = "my-sftpgw-files"
# Cloud SQL instance tier
# db-g1-small is suitable for testing; use db-custom-2-7680 for production
db_tier = "db-g1-small"
# Name of the shared VPC network in host_project
existing_network = "sftpgw-shared-vpc"
# Name of the shared subnet in host_project
existing_subnet = "sftpgw-subnet"
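One detail worth calling out from sftpgw-ha.tf: the CLOUD_IAM_SERVICE_ACCOUNT Cloud SQL user is named after the service account email with the .gserviceaccount.com suffix removed (the trimsuffix(...) in the locals block). If you ever need to reproduce that name in a shell script, it is plain suffix stripping (the email below is a placeholder):

```shell
# Placeholder email; the real one is <stack_name>-sa@<service_project>.iam.gserviceaccount.com
SFTPGW_SA="my-sftpgw-sa@your-service-project-id.iam.gserviceaccount.com"

# Strip the suffix to get the Cloud SQL IAM user name
IAM_DB_USER="${SFTPGW_SA%.gserviceaccount.com}"
echo "$IAM_DB_USER"
# my-sftpgw-sa@your-service-project-id.iam
```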
Step 8: Deploy with Infrastructure Manager
Run the following command from the directory containing sftpgw-ha.tf and terraform.tfvars:
STACK_NAME="my-sftpgw"
gcloud infra-manager deployments apply \
"projects/${SERVICE_PROJECT}/locations/${REGION}/deployments/${STACK_NAME}" \
--project="${SERVICE_PROJECT}" \
--local-source="." \
--inputs-file="terraform.tfvars" \
--service-account="projects/${SERVICE_PROJECT}/serviceAccounts/${IM_SA}"
Monitor status:
gcloud infra-manager deployments describe \
"projects/${SERVICE_PROJECT}/locations/${REGION}/deployments/${STACK_NAME}" \
--project="${SERVICE_PROJECT}"
Step 9: Retrieve the Static IP
gcloud compute addresses list \
--project="${SERVICE_PROJECT}" \
--filter="name~${STACK_NAME}" \
--format="value(address)"
Allow 8–12 minutes after the deployment reaches ACTIVE for instances to finish booting (Liquibase schema migration runs on first boot).
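Rather than watching the clock, you can poll the SFTP port until it accepts connections. A rough bash sketch (`wait_for_tcp` is our own helper, not a gcloud command; it relies on bash's /dev/tcp):

```shell
# wait_for_tcp HOST PORT [ATTEMPTS]: retry a TCP connect every 10s
wait_for_tcp() {
  local host="$1" port="$2" max="${3:-60}" i
  for i in $(seq 1 "$max"); do
    if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
      echo "port ${port} open (attempt ${i})"
      return 0
    fi
    if [ "$i" -lt "$max" ]; then sleep 10; fi
  done
  echo "gave up waiting for port ${port}" >&2
  return 1
}

# Example (placeholder IP; use the address from the command above):
# wait_for_tcp 203.0.113.10 22 && echo "ready for sftp"
```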
Cleanup
The shared VPC infrastructure in the host project (VPC, subnet, private service access) is not managed by the HA template and will not be deleted when the deployment is torn down. Only the resources created by Terraform in the service project are removed.
To tear down the SFTP Gateway deployment:
- Remove Cloud SQL deletion protection:
  gcloud sql instances patch "${STACK_NAME}-db" \
    --no-deletion-protection \
    --project="${SERVICE_PROJECT}"
- Empty the GCS bucket:
  gcloud storage rm -r "gs://YOUR_BUCKET_NAME/**"
- Delete the Cloud SQL instance:
  gcloud sql instances delete "${STACK_NAME}-db" --project="${SERVICE_PROJECT}"
- Delete the Infrastructure Manager deployment (also removes the firewall rules from the host project):
  gcloud infra-manager deployments delete \
    "projects/${SERVICE_PROJECT}/locations/${REGION}/deployments/${STACK_NAME}" \
    --project="${SERVICE_PROJECT}"
No VPC peering teardown steps are required — the private service access peering lives in the host project and was not created by this deployment.
Troubleshooting
Terraform fails at the instance template with a permission error
The service project has not been attached to the Shared VPC host. Confirm Step 2 was completed:
gcloud compute shared-vpc associated-projects list "${HOST_PROJECT}"
Terraform fails creating firewall rules
The IM service account is missing roles/compute.securityAdmin on the host project. Confirm Step 6 host project roles are set.
Cloud SQL fails to get a private IP
The private service access peering in the host project is not active, or the VPC name passed to existing_network doesn't match. Re-run the validation in Step 5.
Instances boot but can't reach Secret Manager or Cloud SQL APIs
Private Google Access is not enabled on the subnet. Confirm privateIpGoogleAccess: True in the Step 4 subnet validation output.