Terraform Template (HA)
Overview
You can deploy SFTP Gateway version 3.x using Terraform, as an alternative to CloudFormation.
This article covers deploying an HA stack of SFTP Gateway version 3.4.6
on AWS. Resources are deployed into a new VPC.
The Terraform template is provided as an example, so feel free to further customize it for your business case.
Note: Make sure you are subscribed to SFTP Gateway in the AWS Marketplace. Without a subscription, your AWS account will not be authorized to deploy the product AMIs.
Running the template
This article contains three files:
sftpgw-ha.tf
userdata.yaml
terraform.tfvars
Create these three files on your workstation, using the file contents at the bottom of this page. Make adjustments to the terraform.tfvars file. Then, run the following commands:
terraform init
terraform plan
When you are ready to deploy the template, run:
terraform apply
How it works
This article contains a main Terraform template named:
sftpgw-ha.tf
This template provisions the following resources:
VPC: A new network, including subnets and route tables
NLB: A network load balancer, including listeners and target groups
ASG: An Auto Scaling Group, along with a Launch Configuration
IAM role: Grants EC2 instances access to S3
RDS: A PostgreSQL database service
Secrets Manager: Stores the PostgreSQL database password
CloudWatch Log Group: Stores log data
EC2 Security Group: Allows TCP 22 from anywhere, but locks down the admin ports 80, 443, and 2222 to a single incoming IP range
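The template carves its two public and two private subnets out of the VPC range with Terraform's cidrsubnet() function. As a sanity check, the same arithmetic can be reproduced with Python's standard ipaddress module (a sketch, assuming the default 192.168.1.0/24 range):

```python
import ipaddress

def cidrsubnet(prefix: str, newbits: int, netnum: int) -> str:
    """Replicate Terraform's cidrsubnet() with the stdlib ipaddress module."""
    network = ipaddress.ip_network(prefix)
    # Adding newbits to the prefix length yields 2**newbits child subnets;
    # netnum selects one of them, in ascending order.
    return str(list(network.subnets(prefixlen_diff=newbits))[netnum])

for n, name in enumerate(["public-a", "public-b", "private-a", "private-b"]):
    print(f"{name} {cidrsubnet('192.168.1.0/24', 4, n)}")
# public-a 192.168.1.0/28
# public-b 192.168.1.16/28
# private-a 192.168.1.32/28
# private-b 192.168.1.48/28
```

So each subnet is a /28 with 16 addresses, which is plenty for this stack; a larger vpc_ip_range yields proportionally larger subnets.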
There's another file that you need, which configures the EC2 instances in HA mode:
userdata.yaml
Finally, there's a third file that contains variables:
terraform.tfvars
Since this file is named terraform.tfvars, Terraform loads it automatically, so there is no need to pass it explicitly with:
terraform apply -var-file=terraform.tfvars
You can configure the following variables:
stack_name: Specify the name of your Terraform stack
key_name: Specify the name of your EC2 key pair
region: Specify your current region
admin_ip: Get your workstation's public IP from checkip.dyndns.org. Append /32 to specify a single IP range
open_s3_permissions: Set this to true for full S3 permissions. Use false to restrict permissions to buckets with the naming convention sftpgw-i-*
aws_profile: Optional. Specify an AWS CLI profile if not using the default profile.
ec2_instance_size: Optional. Defaults to t3.medium. Use this to override the EC2 instance size.
disk_volume_size: Optional. Defaults to 32. Use this to override the EC2 disk size, in GB.
desired_capacity: Optional. Defaults to 2. The number of EC2 instances to deploy in your Auto Scaling Group.
dbclass: Optional. Defaults to db.t3.micro. The size of your RDS instance.
vpc_ip_range: Optional. Defaults to 192.168.1.0/24. Set this to a Class C private IP range.
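The admin_ip value must be a CIDR range (for example 12.34.56.78/32), not a bare IP, and the template's validation rejects 0.0.0.0/0. You can sanity-check a candidate value locally before running terraform plan; here is a sketch using Python's standard ipaddress module (the function name is illustrative):

```python
import ipaddress

def valid_admin_ip(cidr: str) -> bool:
    """Return True if cidr is a usable admin_ip value: a well-formed
    IPv4 CIDR range that is not the open-to-the-world 0.0.0.0/0."""
    try:
        network = ipaddress.IPv4Network(cidr, strict=False)
    except ValueError:
        return False
    # Require an explicit prefix: "1.2.3.4" alone would parse as a /32,
    # but the template expects the /32 to be appended.
    if "/" not in cidr:
        return False
    return str(network) != "0.0.0.0/0"

print(valid_admin_ip("12.34.56.78/32"))  # True
print(valid_admin_ip("0.0.0.0/0"))       # False
```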
Using the AWS CloudShell
The AWS CloudShell does not come with Terraform installed by default, so you will need to download and install it manually:
wget https://releases.hashicorp.com/terraform/1.7.3/terraform_1.7.3_linux_amd64.zip
unzip terraform_1.7.3_linux_amd64.zip
mkdir -p ~/bin
mv terraform ~/bin
rm -f terraform_1.7.3_linux_amd64.zip
(Special thanks to this article for these instructions: https://blog.clairvoyantsoft.com/aws-cloudshell-and-terraform-18eb8b41041f)
The AWS CloudShell also does not have a default AWS command-line profile, so you will need to set the following variable in the terraform.tfvars file:
aws_profile = null
Terraform file contents
userdata.yaml
#cloud-config
repo_update: true
repo_upgrade: all
write_files:
  - content: |
      #!/bin/bash
      export CLOUD_PROVIDER=aws
      export ARCHITECTURE=HA
      export LOG_GROUP_NAME=${LogGroup}
      export SECRET_ID=${DBSecretStore}
      export DB_HOST=${DBEndpoint}
      export LOAD_BALANCER_ADDRESSES=${NLBDNSName}
    path: /opt/sftpgw/launch_config.env
runcmd:
  - /opt/aws/bin/cfn-init --stack ${StackName} --resource LaunchConfiguration --region ${Region}
  - /opt/aws/bin/cfn-signal -e 0 --stack ${StackName} --resource AutoScalingGroup --region ${Region}
sftpgw-ha.tf
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.29"
}
}
required_version = ">= 0.14.9"
}
provider "aws" {
region = var.region
profile = var.aws_profile != "" ? var.aws_profile : null
}
data "aws_partition" "current" {}
variable "stack_name" {
type = string
description = "This is the stack name"
}
variable "admin_ip" {
type = string
description = "Public IP address range for SSH and web access. Use a CIDR range to restrict access. To get your local machine's IP, see http://checkip.dyndns.org/. (Remember to append /32 for a single IP e.g. 12.34.56.78/32) For security reasons, do not use 0.0.0.0/0."
validation {
condition = can(regex("([1-9]\\d{0,2})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/([1-9]\\d{0,1})",var.admin_ip))
error_message = "Must be a valid IP CIDR range in the form of x.x.x.x/x. Do not use 0.0.0.0/0."
}
}
variable "desired_capacity" {
type = number
default = 2
}
variable "region" {
type = string
description = "This is the AWS region"
}
variable "aws_profile" {
type = string
default = "default"
description = "Optional: specify an AWS profile"
}
variable "dbclass" {
type = string
default = "db.t3.micro"
description = "DB instance type, such as: db.t3.micro, db.t3.small, db.t3.medium, or db.m5.large"
}
variable "ami_map" {
type = map(string)
default = {
us-east-1 = "ami-05423b9bf8ec631fc"
us-east-2 = "ami-0e869fdba7337203f"
us-west-1 = "ami-01a3669f67c2d0d8a"
us-gov-east-1 = "ami-06bdd58dd4c48adcf"
us-gov-west-1 = "ami-05bece0ffbb9c049a"
us-west-2 = "ami-0109be966a0118f94"
eu-central-1 = "ami-0d69b5eb9feca7bf9"
eu-west-1 = "ami-0e9cd42841ae758bd"
eu-west-2 = "ami-09ed82f57c303abcf"
eu-west-3 = "ami-0551b7ff645633185"
ca-central-1 = "ami-0e8eb0d0ef32e5e96"
eu-central-2 = "ami-0e4dcf7ea1b3c150c"
ap-southeast-1 = "ami-000cf8c1d8aaba2b8"
ap-southeast-2 = "ami-0872ada1f0fae54f7"
ap-south-1 = "ami-0c66c0e69776d60f4"
ap-northeast-1 = "ami-064ece5429a7c17cd"
ap-northeast-2 = "ami-006a63e591f76cc8c"
eu-north-1 = "ami-03451562f32341591"
ap-east-1 = "ami-004129c6348c1b3f0"
me-south-1 = "ami-011ec244dbbeafcb3"
me-central-1 = "ami-0253c954e71ce06fc"
eu-south-1 = "ami-0d2389b193d2164c1"
}
}
variable "ec2_instance_size" {
type = string
description = "EC2 instance size. Recommended: t3.medium for testing, m5.large for production."
default = "t3.medium"
}
variable "disk_volume_size" {
type = number
description = "Disk volume size in GB. Must be at least 8."
default = 32
}
variable "key_name" {
type = string
description = "Make sure you have access to this EC2 key pair. Otherwise, create a new key pair before proceeding."
}
variable "open_s3_permissions" {
type = bool
description = "Set this to true to allow full S3 access to support multiple buckets. Otherwise, S3 permissions are limited to our default bucket naming convention: sftpgw-<instance-id>."
default = false
}
variable "vpc_ip_range" {
type = string
description = "Choose a private class C range"
default = "192.168.1.0/24"
}
locals {
prefix_cidr = var.vpc_ip_range
truncated_db_endpoint = trimsuffix(aws_db_instance.sftpgw-db.endpoint, ":5432")
db_password = data.aws_secretsmanager_random_password.db-pass.random_password
secret_json = "{\"username\":\"sftpgw\",\"password\":\"${local.db_password}\"}"
}
resource "aws_vpc" "main" {
cidr_block = local.prefix_cidr
enable_dns_hostnames = true
}
resource "aws_internet_gateway" "gw" {
vpc_id = aws_vpc.main.id
}
data "aws_availability_zones" "available" {
state = "available"
}
resource "aws_subnet" "public-subnet-a" {
vpc_id = aws_vpc.main.id
cidr_block = cidrsubnet(local.prefix_cidr, 4, 0)
map_public_ip_on_launch = true
availability_zone = data.aws_availability_zones.available.names[0]
}
resource "aws_subnet" "public-subnet-b" {
vpc_id = aws_vpc.main.id
cidr_block = cidrsubnet(local.prefix_cidr, 4, 1)
map_public_ip_on_launch = true
availability_zone = data.aws_availability_zones.available.names[1]
}
resource "aws_subnet" "private-subnet-a" {
vpc_id = aws_vpc.main.id
cidr_block = cidrsubnet(local.prefix_cidr, 4, 2)
map_public_ip_on_launch = false
availability_zone = data.aws_availability_zones.available.names[0]
}
resource "aws_subnet" "private-subnet-b" {
vpc_id = aws_vpc.main.id
cidr_block = cidrsubnet(local.prefix_cidr, 4, 3)
map_public_ip_on_launch = false
availability_zone = data.aws_availability_zones.available.names[1]
}
resource "aws_default_route_table" "public-route-table" {
default_route_table_id = aws_vpc.main.main_route_table_id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.gw.id
}
}
resource "aws_route_table" "private-route-table" {
vpc_id = aws_vpc.main.id
}
resource "aws_route_table_association" "map-public-a" {
subnet_id = aws_subnet.public-subnet-a.id
route_table_id = aws_default_route_table.public-route-table.id
}
resource "aws_route_table_association" "map-public-b" {
subnet_id = aws_subnet.public-subnet-b.id
route_table_id = aws_default_route_table.public-route-table.id
}
resource "aws_route_table_association" "map-private-a" {
subnet_id = aws_subnet.private-subnet-a.id
route_table_id = aws_route_table.private-route-table.id
}
resource "aws_route_table_association" "map-private-b" {
subnet_id = aws_subnet.private-subnet-b.id
route_table_id = aws_route_table.private-route-table.id
}
data "aws_secretsmanager_random_password" "db-pass" {
password_length = 30
exclude_punctuation = true
include_space = false
}
resource "aws_secretsmanager_secret" "db-secret-store" {
name = "${var.stack_name}-sftp-gateway-db-secret"
}
resource "aws_secretsmanager_secret_version" "db-secret-version" {
secret_id = aws_secretsmanager_secret.db-secret-store.id
secret_string = local.secret_json
lifecycle {
ignore_changes = [secret_string, ]
}
}
resource "aws_db_instance" "sftpgw-db" {
identifier = "${var.stack_name}-sftpgw-db"
allocated_storage = 50
db_name = "sftpgw"
engine = "postgres"
engine_version = "13.10"
instance_class = var.dbclass
username = "sftpgw"
password = local.db_password
multi_az = true
auto_minor_version_upgrade = true
db_subnet_group_name = aws_db_subnet_group.db-subnet-group.name
storage_encrypted = true
vpc_security_group_ids = [aws_security_group.db-security-group.id]
final_snapshot_identifier = "${var.stack_name}-final-db-snapshot"
}
resource "aws_db_subnet_group" "db-subnet-group" {
name = "${var.stack_name}-dbsubnetgroup"
subnet_ids = [aws_subnet.private-subnet-a.id, aws_subnet.private-subnet-b.id]
}
resource "aws_security_group" "db-security-group" {
description = "Security group for RDS DB Instance"
vpc_id = aws_vpc.main.id
}
resource "aws_vpc_security_group_ingress_rule" "db-security-group-ingress" {
referenced_security_group_id = aws_security_group.sg.id
security_group_id = aws_security_group.db-security-group.id
from_port = 5432
to_port = 5432
ip_protocol = "tcp"
}
resource "aws_eip" "eip" {
domain = "vpc"
}
resource "aws_lb" "network-load-balancer" {
name = "network-load-balancer"
load_balancer_type = "network"
enable_cross_zone_load_balancing = true
subnet_mapping {
allocation_id = aws_eip.eip.allocation_id
subnet_id = aws_subnet.public-subnet-a.id
}
subnet_mapping {
subnet_id = aws_subnet.public-subnet-b.id
}
}
resource "aws_lb_listener" "port-22" {
load_balancer_arn = aws_lb.network-load-balancer.arn
port = "22"
protocol = "TCP"
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.port-22-target-group.arn
}
}
resource "aws_lb_target_group" "port-22-target-group" {
name = "port-22-target-group"
port = 22
protocol = "TCP"
vpc_id = aws_vpc.main.id
health_check {
enabled = true
port = 443
protocol = "HTTPS"
}
preserve_client_ip = true
}
resource "aws_lb_listener" "port-2222" {
load_balancer_arn = aws_lb.network-load-balancer.arn
port = "2222"
protocol = "TCP"
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.port-2222-target-group.arn
}
}
resource "aws_lb_target_group" "port-2222-target-group" {
name = "port-2222-target-group"
port = 2222
protocol = "TCP"
vpc_id = aws_vpc.main.id
health_check {
enabled = true
port = 443
protocol = "HTTPS"
}
}
resource "aws_lb_listener" "port-80" {
load_balancer_arn = aws_lb.network-load-balancer.arn
port = "80"
protocol = "TCP"
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.port-80-target-group.arn
}
}
resource "aws_lb_target_group" "port-80-target-group" {
name = "port-80-target-group"
port = 80
protocol = "TCP"
vpc_id = aws_vpc.main.id
}
resource "aws_lb_target_group" "port-443-target-group" {
name = "port-443-target-group"
port = 443
protocol = "TCP"
vpc_id = aws_vpc.main.id
}
resource "aws_lb_listener" "port-443" {
load_balancer_arn = aws_lb.network-load-balancer.arn
port = "443"
protocol = "TCP"
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.port-443-target-group.arn
}
}
resource "aws_autoscaling_group" "autoscaling-group" {
name = "autoscaling-group"
max_size = 10
min_size = 1
desired_capacity = var.desired_capacity
launch_configuration = aws_launch_configuration.launch-configuration.name
vpc_zone_identifier = [
aws_subnet.public-subnet-a.id,
aws_subnet.public-subnet-b.id
]
target_group_arns = [
aws_lb_target_group.port-22-target-group.id,
aws_lb_target_group.port-80-target-group.id,
aws_lb_target_group.port-443-target-group.id,
aws_lb_target_group.port-2222-target-group.id
]
tag {
key = "Name"
value = "SFTPGateway"
propagate_at_launch = true
}
}
resource "aws_security_group" "sg" {
name = "${var.stack_name}-EC2-Security-group"
description = "EC2 Security Group"
vpc_id = aws_vpc.main.id
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 2222
to_port = 2222
protocol = "tcp"
cidr_blocks = [var.admin_ip]
}
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = [var.admin_ip]
}
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = [var.admin_ip]
}
egress {
from_port = 0
protocol = "tcp"
to_port = 65535
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
protocol = "udp"
to_port = 65535
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_iam_role" "open_role" {
name = "open_role"
managed_policy_arns = [
"arn:${data.aws_partition.current.partition}:iam::aws:policy/AmazonS3FullAccess",
"arn:${data.aws_partition.current.partition}:iam::aws:policy/service-role/AmazonEC2RoleforSSM"
]
assume_role_policy = jsonencode({
Version: "2012-10-17",
Statement: [
{
Action: "sts:AssumeRole",
Principal: {
Service: "ec2.amazonaws.com"
},
Effect: "Allow"
}
]
})
}
resource "aws_iam_role" "restricted_role" {
name = "restricted_role"
managed_policy_arns = ["arn:${data.aws_partition.current.partition}:iam::aws:policy/service-role/AmazonEC2RoleforSSM"]
assume_role_policy = jsonencode({
Version: "2012-10-17",
Statement: [
{
Action: "sts:AssumeRole",
Principal: {
Service: "ec2.amazonaws.com"
},
Effect: "Allow"
}
]
})
}
resource "aws_iam_instance_profile" "ec2_profile" {
name = "${var.stack_name}-ec2-profile"
role = var.open_s3_permissions ? aws_iam_role.open_role.name : aws_iam_role.restricted_role.name
}
resource "aws_iam_role_policy" "ec2_policy" {
name = "ec2_policy"
role = var.open_s3_permissions ? aws_iam_role.open_role.name : aws_iam_role.restricted_role.name
policy = jsonencode({
"Version": "2012-10-17",
"Statement": [{
Action: [
"s3:GetBucketLocation",
"s3:ListBucket",
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject",
"s3:CreateBucket"
],
Effect: "Allow",
Resource: "arn:${data.aws_partition.current.partition}:s3:::sftpgw-i-*"
}, {
Action: [
"logs:CreateLogStream",
"logs:PutLogEvents",
"logs:DescribeLogStreams",
"logs:CreateLogGroup",
"logs:GetLogEvents"
],
Effect: "Allow",
Resource: "*"
}, {
Action: [
"ec2:DescribeAvailabilityZones",
"ec2:DescribeInstances",
"ec2:DescribeTags"
],
Effect: "Allow",
Resource: "*"
}, {
Action: [
"cloudformation:DescribeStacks",
"cloudformation:ListStackResources"
],
Effect: "Allow",
Resource: "*"
}, {
Action: [
"secretsmanager:GetResourcePolicy",
"secretsmanager:GetSecretValue",
"secretsmanager:DescribeSecret",
"secretsmanager:ListSecretVersionIds"
],
Effect: "Allow",
Resource: "${aws_secretsmanager_secret.db-secret-store.id}"
}, {
Action: "secretsmanager:ListSecrets",
Effect: "Allow",
Resource: "*"
}]
})
}
data "template_file" "userdata" {
template = file("${path.module}/userdata.yaml")
vars = {
LogGroup = aws_cloudwatch_log_group.sftpgw-log-group.id
DBSecretStore = aws_secretsmanager_secret.db-secret-store.id
DBEndpoint = local.truncated_db_endpoint
StackName = var.stack_name
Region = var.region
NLBDNSName = aws_lb.network-load-balancer.dns_name
}
}
resource "aws_launch_configuration" "launch-configuration" {
name = "${var.stack_name}-launch-configuration"
image_id = lookup(var.ami_map, var.region)
instance_type = var.ec2_instance_size
key_name = var.key_name
associate_public_ip_address = true
root_block_device {
volume_size = var.disk_volume_size
volume_type = "gp2"
encrypted = true
}
iam_instance_profile = aws_iam_instance_profile.ec2_profile.name
security_groups = [aws_security_group.sg.id]
user_data = base64encode(data.template_file.userdata.rendered)
}
resource "aws_cloudwatch_log_group" "sftpgw-log-group" {
name = var.stack_name
}
output "hostname" {
value = aws_lb.network-load-balancer.dns_name
}
output "cloudwatch_logs" {
value = aws_cloudwatch_log_group.sftpgw-log-group.id
description = "CloudWatch logs"
}
terraform.tfvars
region = "us-east-1"
key_name = "rob"
open_s3_permissions = true
stack_name = "rob-tf-stack"
admin_ip = "3.222.237.17/32"
desired_capacity = 1
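For reference, the secret_string that the template stores in Secrets Manager (local.secret_json in sftpgw-ha.tf) is a two-field JSON document that the EC2 instances read to obtain the database credentials. The equivalent construction in Python, with a placeholder password, looks like this:

```python
import json

def secret_json(password: str) -> str:
    """Mirror the template's local.secret_json: a JSON document with the
    fixed username "sftpgw" and the generated database password."""
    return json.dumps({"username": "sftpgw", "password": password})

print(secret_json("example-placeholder-password"))
# {"username": "sftpgw", "password": "example-placeholder-password"}
```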