HA Terraform Template
Overview
You can deploy SFTP Gateway Professional in an HA configuration using Terraform.
This article covers deploying SFTP Gateway version 3.7.0 in an HA configuration
on GCP. The Terraform template is provided as an example, so feel free to customize it further for your business case.
Note: Make sure you are subscribed to SFTP Gateway Professional in the Google Cloud Marketplace before deploying the Terraform template; otherwise, you will run into errors.
Running the template
We recommend using Cloud Shell within the Google Cloud console, since it includes the Terraform CLI and inherits your Google Cloud permissions from the web console.
This article contains two files:
sftpgw-ha-terraform.tf
terraform.tfvars
Create these two files using the file contents at the bottom of this page, and adjust the values in the terraform.tfvars
file. Then, run the following commands:
terraform init
terraform plan
When you are ready to deploy the template, run:
terraform apply
Once the Terraform resources have been deployed, you will see an Outputs section that displays an IP address. Connect to this IP to access the web admin portal. (You can reprint it at any time with terraform output ip_address.)
How does it work
This article contains a main Terraform template named:
sftpgw-ha-terraform.tf
This template provisions the following resources:
- Instance Group Manager: Keeps multiple identical VMs running; the load balancer directs traffic across them
- Public IP: A static IP associated with the IGM
- Firewall: Allows TCP 22 from anywhere, but locks down the admin ports 80, 443, and 2222 to a single IP
- SQL Database: A Postgres database that stores the users and settings of SFTP Gateway
- Google Storage Bucket: A Cloud Storage bucket to receive SFTP files
There's also another file that contains variables:
terraform.tfvars
Since this file is named terraform.tfvars
, Terraform loads it automatically, so you do not need to pass it explicitly with:
terraform apply -var-file=terraform.tfvars
You can configure the following variables:
- stack_name: The name of the Terraform stack. Use all lowercase and hyphens, since the names of resources build off of this name.
- project: Specify the project in which to deploy the VM
- region: Specify your current region
- zone: Specify your current zone
- web_admin_username: Define the username of the web admin (e.g. "admin")
- web_admin_password: Set the password of the web admin user. Make sure it is complex and contains at least 12 characters.
- web_admin_ip_address: Get your workstation's public IP from checkip.dyndns.org, and append /32 to specify a single IP range
- google_storage_bucket: Specify a bucket for your SFTP files
- credentials: Optional. Specify a JSON key credentials file used to deploy this Terraform template.
- machine_type: Optional. Specify the size of your VM. Defaults to e2-medium.
- image_path: Use the latest image from the Google Marketplace, which is https://www.googleapis.com/compute/v1/projects/thorn-technologies-public/global/images/sftpgw-pro-3-7-0-20241218231511
Refer to the example terraform.tfvars
file at the bottom of this article.
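The /32 suffix narrows the admin firewall rule to a single host. Here is a minimal sketch of building the web_admin_ip_address value in a shell, using a placeholder IP from the documentation range; substitute your own public IP, or fetch it from a lookup service:

```shell
# Placeholder IP for illustration; replace it with your workstation's public IP,
# e.g. MY_IP=$(curl -s https://checkip.amazonaws.com)
MY_IP="203.0.113.7"

# Append /32 so the firewall rule matches exactly one address.
echo "web_admin_ip_address = \"${MY_IP}/32\""
```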
Deleting the stack
To delete your Terraform stack, run the command:
terraform destroy
Although the compute resources are deleted immediately, the destroy operation will not run to completion, for two reasons.
First, the SQL database instance has deletion protection enabled, so you will need to delete it manually from within the Google Cloud console.
Second, a Cloud Storage bucket cannot be deleted while it still contains objects, so you will need to empty the bucket first.
Once these two items have been taken care of, you can destroy the Terraform stack cleanly.
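The manual cleanup steps can also be performed with the gcloud CLI instead of the console. This is a sketch with assumed names (the "your-terraform-stack" stack name and example bucket from this article); adjust them to your deployment:

```shell
# 1. Empty the Cloud Storage bucket; a bucket containing objects cannot be deleted.
gcloud storage rm "gs://your-bucket-terraform/**"

# 2. Delete the SQL instance (named <stack_name>-db-instance) out-of-band, since
#    deletion protection stops terraform destroy from removing it.
gcloud sql instances delete "your-terraform-stack-db-instance"

# 3. Re-run the destroy to remove the remaining resources cleanly.
terraform destroy
```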
Terraform file contents
sftpgw-ha-terraform.tf
terraform {
required_providers {
google = {
source = "hashicorp/google"
version = "5.2.0"
}
}
}
provider "google" {
credentials = var.credentials == null ? null : file(var.credentials)
project = var.project
region = var.region
zone = var.zone
}
variable "credentials" {
type = string
description = "Name of your Service Key"
default = null
}
variable "project" {
type = string
description = "Name of your Google Cloud project"
}
variable "stack_name" {
type = string
description = "Name of this Terraform stack, which acts as a prefix for resources"
}
variable "region" {
type = string
description = "Name of your Google Cloud region"
default = "us-central1"
}
variable "zone" {
type = string
description = "Name of your region zone"
default = "us-central1-c"
}
variable "google_storage_bucket" {
type = string
description = "Name of your Google Storage bucket"
}
variable "image_path" {
type = string
description = "Path to the VM image"
}
variable "web_admin_username" {
type = string
description = "This is the web admin username"
}
variable "web_admin_password" {
type = string
description = "This is the web admin password"
sensitive = true
}
variable "web_admin_ip_address" {
type = string
description = "IP address range of your workstation. Provides sysadmin access for SSH and web administration."
}
variable "machine_type" {
type = string
description = "Machine type of your VM"
default = "e2-medium"
}
locals {
truncated_email = trimsuffix(google_service_account.service_account_solution.email, ".gserviceaccount.com")
bucket_name = var.google_storage_bucket == "" ? "sftpgw-terraform-${random_id.new.hex}" : var.google_storage_bucket
}
resource "random_id" "new" {
byte_length = 8
}
resource "random_password" "db_password" {
length = 20
special = true
override_special = "*+?&^!@"
}
# Bucket
resource "google_storage_bucket" "sftpgw_default_storage" {
name = local.bucket_name
location = var.region
}
# IAM
resource "google_service_account" "service_account_solution" {
account_id = "${var.stack_name}-iam"
display_name = "SFTPGW Service Account Terraform"
project = var.project
}
resource "google_project_iam_member" "role_service_account_sql_instance" {
role = "roles/cloudsql.instanceUser"
member = "serviceAccount:${google_service_account.service_account_solution.email}"
project = var.project
}
resource "google_project_iam_member" "role_service_account_sql_client" {
role = "roles/cloudsql.client"
member = "serviceAccount:${google_service_account.service_account_solution.email}"
project = var.project
}
resource "google_project_iam_member" "role_service_account_logging" {
role = "roles/logging.serviceAgent"
member = "serviceAccount:${google_service_account.service_account_solution.email}"
project = var.project
}
resource "google_project_iam_member" "role_service_account_storage" {
role = "roles/storage.admin"
member = "serviceAccount:${google_service_account.service_account_solution.email}"
project = var.project
}
resource "google_project_iam_member" "role_service_account_metric_write" {
role = "roles/monitoring.metricWriter"
member = "serviceAccount:${google_service_account.service_account_solution.email}"
project = var.project
}
resource "google_project_iam_member" "role_service_account_log_write" {
role = "roles/logging.logWriter"
member = "serviceAccount:${google_service_account.service_account_solution.email}"
project = var.project
}
resource "google_project_iam_member" "role_service_account_log_view_accessor" {
role = "roles/logging.viewAccessor"
member = "serviceAccount:${google_service_account.service_account_solution.email}"
project = var.project
}
# DB
resource "google_sql_database" "database" {
name = "${var.stack_name}-sftpgw"
instance = google_sql_database_instance.instance.name
charset = "utf8"
depends_on = [
google_sql_user.db-service-user
]
}
resource "google_sql_database_instance" "instance" {
name = "${var.stack_name}-db-instance"
database_version = "POSTGRES_13"
root_password = random_password.db_password.result
settings {
tier = "db-custom-4-16384"
disk_size = 20
disk_type = "PD_SSD"
availability_type = "REGIONAL"
disk_autoresize = true
activation_policy = "ALWAYS"
maintenance_window {
hour = 0
day = 7
}
backup_configuration {
enabled = true
start_time = "00:00"
}
ip_configuration {
ipv4_enabled = false
private_network = "https://www.googleapis.com/compute/v1/projects/${var.project}/global/networks/default"
}
database_flags {
name = "cloudsql.iam_authentication"
value = "on"
}
}
}
resource "google_sql_user" "db-service-user" {
name = local.truncated_email
instance = google_sql_database_instance.instance.name
type = "CLOUD_IAM_SERVICE_ACCOUNT"
}
# Compute
resource "google_compute_instance_template" "it" {
name = "${var.stack_name}-instance-template"
machine_type = var.machine_type
network_interface {
network = "default"
access_config {}
}
service_account {
email = google_service_account.service_account_solution.email
scopes = [
"https://www.googleapis.com/auth/cloud.useraccounts.readonly",
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring.write",
"https://www.googleapis.com/auth/cloud-platform"
]
}
disk {
auto_delete = true
type = "PERSISTENT"
device_name = "boot"
boot = true
source_image = var.image_path
disk_size_gb = 32
disk_type = "pd-ssd"
}
can_ip_forward = false
tags = [
"${var.stack_name}-firewall-health-check",
"${var.stack_name}-public-network",
"${var.stack_name}-admin-network"
]
metadata = {
google-logging-enable = 1
google-monitoring-enable = 1
user-data = <<-EOT
  #cloud-config
  repo_update: true
  repo_upgrade: all
  write_files:
    - content: |
        #!/bin/bash
        export CLOUD_PROVIDER=gcp
        export ARCHITECTURE=HA
        export SECRET_ID="${random_password.db_password.result}"
        export DB_HOST=${google_sql_database_instance.instance.connection_name}
      path: /opt/sftpgw/launch_config.env
    - path: /opt/sftpgw/application.properties
      append: true
      content: |
        features.first-connection.cloud-provider=gcp
        features.first-connection.base-prefix=${local.bucket_name}
        features.first-connection.use-instance-credentials=true
  runcmd:
    - 'curl -X "POST" "http://localhost:8080/3.0.0/admin/config" -H "accept: */*" -H "Content-Type: application/json" -d "{\"password\": \"${var.web_admin_password}\",\"username\": \"${var.web_admin_username}\"}"'
EOT
}
}
resource "google_compute_region_instance_group_manager" "igm" {
name = "${var.stack_name}-igm"
region = var.region
base_instance_name = var.stack_name
version {
instance_template = google_compute_instance_template.it.id
}
target_size = 2
named_port {
name = "http"
port = 80
}
named_port {
name = "https"
port = 443
}
named_port {
name = "ssh"
port = 2222
}
named_port {
name = "sftp"
port = 22
}
depends_on = [
google_compute_instance_template.it
]
}
resource "google_compute_region_health_check" "rhc" {
name = "${var.stack_name}-rhc"
tcp_health_check {
port = 80
}
}
resource "google_compute_address" "ip" {
name = "${var.stack_name}-ip"
}
resource "google_compute_forwarding_rule" "fr_80" {
name = "${var.stack_name}-fr-80"
port_range = 80
backend_service = google_compute_region_backend_service.backend_http.id
ip_address = google_compute_address.ip.address
depends_on = [
google_compute_address.ip,
google_compute_region_backend_service.backend_http
]
}
resource "google_compute_forwarding_rule" "fr_443" {
name = "${var.stack_name}-fr-443"
port_range = 443
backend_service = google_compute_region_backend_service.backend_https.id
ip_address = google_compute_address.ip.address
}
resource "google_compute_forwarding_rule" "fr_22" {
name = "${var.stack_name}-fr-22"
port_range = 22
backend_service = google_compute_region_backend_service.backend_sftp.id
ip_address = google_compute_address.ip.address
}
resource "google_compute_forwarding_rule" "fr_2222" {
name = "${var.stack_name}-fr-2222"
port_range = 2222
backend_service = google_compute_region_backend_service.backend_ssh.id
ip_address = google_compute_address.ip.address
}
resource "google_compute_region_backend_service" "backend_http" {
name = "${var.stack_name}-backend-http"
health_checks = [
google_compute_region_health_check.rhc.id
]
backend {
group = google_compute_region_instance_group_manager.igm.instance_group
}
port_name = "http"
protocol = "TCP"
load_balancing_scheme = "EXTERNAL"
}
resource "google_compute_region_backend_service" "backend_https" {
name = "${var.stack_name}-backend-https"
health_checks = [
google_compute_region_health_check.rhc.id
]
backend {
group = google_compute_region_instance_group_manager.igm.instance_group
}
port_name = "https"
protocol = "TCP"
load_balancing_scheme = "EXTERNAL"
}
resource "google_compute_region_backend_service" "backend_sftp" {
name = "${var.stack_name}-backend-sftp"
health_checks = [
google_compute_region_health_check.rhc.id
]
backend {
group = google_compute_region_instance_group_manager.igm.instance_group
}
port_name = "sftp"
protocol = "TCP"
load_balancing_scheme = "EXTERNAL"
}
resource "google_compute_region_backend_service" "backend_ssh" {
name = "${var.stack_name}-backend-ssh"
health_checks = [
google_compute_region_health_check.rhc.id
]
backend {
group = google_compute_region_instance_group_manager.igm.instance_group
}
port_name = "ssh"
protocol = "TCP"
load_balancing_scheme = "EXTERNAL"
}
resource "google_compute_firewall" "firewall_health_check" {
name = "${var.stack_name}-firewall-health-check"
network = "default"
target_tags = ["${var.stack_name}-firewall-health-check"]
source_ranges = [
"130.211.0.0/22",
"35.191.0.0/16",
"209.85.152.0/22",
"209.85.204.0/22",
"169.254.169.254/32"
]
allow {
protocol = "tcp"
ports = ["80"]
}
}
resource "google_compute_firewall" "public-network" {
name = "${var.stack_name}-public-network"
network = "default"
target_tags = ["${var.stack_name}-public-network"]
source_ranges = [
"0.0.0.0/0"
]
allow {
protocol = "tcp"
ports = ["22"]
}
}
resource "google_compute_firewall" "admin-network" {
name = "${var.stack_name}-admin-network"
network = "default"
target_tags = ["${var.stack_name}-admin-network"]
source_ranges = [
"${var.web_admin_ip_address}",
"35.235.240.0/20" // Google Cloud Portal SSH tool
]
allow {
protocol = "tcp"
ports = ["2222", "80", "443"]
}
}
output "ip_address" {
value = google_compute_address.ip.address
}
terraform.tfvars
Example: make sure you replace these values with your own.
stack_name = "your-terraform-stack"
web_admin_username = "admin"
web_admin_password = "replace this with a strong password" // use mixed case, numbers, and symbols
google_storage_bucket = "your-bucket-terraform"
web_admin_ip_address = "1.2.3.4/32" // replace this with your IP address, followed by /32
project = "your-google-cloud-project"
region = "us-central1"
zone = "us-central1-c"
image_path = "https://www.googleapis.com/compute/v1/projects/thorn-technologies-public/global/images/sftpgw-pro-3-7-0-20241218231511" // this is the SFTP Gateway marketplace image
machine_type = "e2-medium"