Version: 1.1.5

HA Terraform Template for Google Cloud

Overview

You can deploy StorageLink in an HA configuration using Terraform.

This article covers deploying StorageLink version 1.1.5 in an HA configuration on Google Cloud Platform (GCP). The Terraform template is provided as an example, so feel free to customize it further for your business case.

Note: Make sure you are subscribed to StorageLink in the Google Cloud Marketplace before deploying the Terraform template; otherwise, the deployment will fail with errors.

Running the template

We recommend using Cloud Shell within the Google Cloud console. Cloud Shell includes the Terraform CLI and inherits your Google Cloud permissions from the web console.
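
As a quick check before you start (a minimal sketch, assuming you have opened Cloud Shell in the target project), confirm that the Terraform CLI is available and that you are working in the correct project:

terraform version
gcloud config get-value project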

This article contains two files:

  • storagelink-ha-terraform.tf
  • terraform.tfvars

Create these two files using the file contents at the bottom of this page, and adjust the values in terraform.tfvars for your environment. Then run the following commands:

terraform init
terraform plan

When you are ready to deploy the template, run:

terraform apply

Once the Terraform resources have been deployed, you will see an Outputs section that displays an IP address. Connect to this IP to access the web admin portal.
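
If you need the address again later, you can read it from the ip_address output defined in this template and do a quick reachability check, for example:

terraform output ip_address
curl -I http://$(terraform output -raw ip_address)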

How it works

This article contains a main Terraform template named:

storagelink-ha-terraform.tf

This template provisions the following resources:

  • Instance Group Manager: Manages a regional group of identical StorageLink VMs; regional backend services and forwarding rules load-balance traffic across them
  • Public IP: Static IP attached to the load balancer's forwarding rules
  • Firewall: Allows TCP 80, 443 from anywhere for user web access, but locks down admin SSH port 22 to a single IP
  • SQL Database: A Postgres database stores the users and settings of StorageLink
  • Existing Google Storage Bucket: Uses an existing Cloud Storage Bucket to receive StorageLink files (bucket must be created separately)
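
After the apply completes, you can spot-check these resources from the command line. This is a minimal sketch, assuming the gcloud CLI is pointed at the same project:

gcloud compute instance-groups managed list
gcloud compute addresses list
gcloud compute firewall-rules list
gcloud sql instances list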

There's also another file that contains variables:

terraform.tfvars

Because the file is named terraform.tfvars, Terraform loads it automatically, so you do not need to pass it explicitly with a flag such as:

terraform apply -var-file=terraform.tfvars

You can configure the following variables:

  • stack_name: This is the name of the Terraform stack. Use only lowercase letters and hyphens, since resource names are derived from this value.
  • project: Specify the project in which to deploy the resources
  • region: Specify the region to deploy into
  • zone: Specify the zone to deploy into
  • web_admin_username: Define the username of the web admin (e.g. "admin")
  • web_admin_password: Set the password of the web admin user. Make sure it is complex and contains at least 12 characters.
  • ssh_admin_ip_address: Get your workstation's public IP from checkip.dyndns.org. Append /32 to specify a single IP range
  • google_storage_bucket: Specify the name of an existing Google Storage bucket for your StorageLink files
  • credentials: Optional. Specify a JSON key credentials file to use when deploying this Terraform template
  • machine_type: Optional. Specify the size of your VM. Defaults to e2-medium
  • image_path: Use the latest image from the Google Marketplace, which is https://www.googleapis.com/compute/v1/projects/mpi-thorn-technologies-public/global/images/storagelink-1-1-5-1756246182

Refer to the example terraform.tfvars file at the bottom of this article.

Deleting the stack

To delete your Terraform stack, run the command:

terraform destroy

Although the compute resources are deleted immediately, the destroy operation will not run to completion on its own.

First, the SQL database instance has deletion protection enabled, so you will need to delete it manually from the Google Cloud console.

Second, the Cloud Storage bucket cannot be deleted while it still contains objects, so you will need to empty it first.

Once these two items have been taken care of, you can destroy the Terraform stack cleanly.
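
As a rough sketch of that cleanup from Cloud Shell, using the names from the example terraform.tfvars below (replace them with your own values):

# Empty the Cloud Storage bucket used by StorageLink
gsutil -m rm -r "gs://your-bucket-terraform/**"

# Delete the Cloud SQL instance that deletion protection kept in place
gcloud sql instances delete your-terraform-stack-db-instance

# Finish removing the remaining Terraform resources
terraform destroy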

Terraform file contents

terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "5.2.0"
    }
  }
}

provider "google" {
  credentials = var.credentials == null ? null : file(var.credentials)
  project     = var.project
  region      = var.region
  zone        = var.zone
}

variable "credentials" {
  type        = string
  description = "Name of your Service Key"
  default     = null
}

variable "project" {
  type        = string
  description = "Name of your Google Cloud project"
}

variable "stack_name" {
  type        = string
  description = "Name of this Terraform stack, which acts as a prefix for resources"
}

variable "region" {
  type        = string
  description = "Name of your Google Cloud region"
  default     = "us-central1"
}

variable "zone" {
  type        = string
  description = "Name of your region zone"
  default     = "us-central1-c"
}

variable "google_storage_bucket" {
  type        = string
  description = "Name of your existing Google Storage bucket"
}

variable "image_path" {
  type        = string
  description = "Path to the VM image"
}

variable "web_admin_username" {
  type        = string
  description = "This is the web admin username"
}

variable "web_admin_password" {
  type        = string
  description = "This is the web admin password"
  sensitive   = true
}

variable "ssh_admin_ip_address" {
  type        = string
  description = "IP address range of your workstation. Provides sysadmin access for SSH only."
}

variable "machine_type" {
  type        = string
  description = "Machine type of your VM"
  default     = "e2-medium"
}

locals {
  truncated_email = trimsuffix(google_service_account.service_account_solution.email, ".gserviceaccount.com")
  bucket_name     = var.google_storage_bucket
}

resource "random_id" "new" {
  byte_length = "8"
}

resource "random_password" "db_password" {
  length           = 20
  special          = true
  override_special = "*+?&^!@"
}

# Bucket

# Note: This template uses an existing Google Storage bucket
# The bucket must be created separately before running this template

# IAM

resource "google_service_account" "service_account_solution" {
account_id = "${var.stack_name}-iam"
display_name = "StorageLink Service Account Terraform"
project = var.project
}

resource "google_project_iam_member" "role_service_account_sql_instance" {
role = "roles/cloudsql.instanceUser"
member = "serviceAccount:${google_service_account.service_account_solution.email}"
project = var.project
}

resource "google_project_iam_member" "role_service_account_sql_client" {
role = "roles/cloudsql.client"
member = "serviceAccount:${google_service_account.service_account_solution.email}"
project = var.project
}

resource "google_project_iam_member" "role_service_account_logging" {
role = "roles/logging.serviceAgent"
member = "serviceAccount:${google_service_account.service_account_solution.email}"
project = var.project
}

resource "google_project_iam_member" "role_service_account_storage" {
role = "roles/storage.admin"
member = "serviceAccount:${google_service_account.service_account_solution.email}"
project = var.project
}

resource "google_project_iam_member" "role_service_account_metric_write" {
role = "roles/monitoring.metricWriter"
member = "serviceAccount:${google_service_account.service_account_solution.email}"
project = var.project
}

resource "google_project_iam_member" "role_service_account_log_write" {
role = "roles/logging.logWriter"
member = "serviceAccount:${google_service_account.service_account_solution.email}"
project = var.project
}

resource "google_project_iam_member" "role_service_account_log_view_accessor" {
role = "roles/logging.viewAccessor"
member = "serviceAccount:${google_service_account.service_account_solution.email}"
project = var.project
}

# DB

resource "google_sql_database" "database" {
name = "${var.stack_name}-storagelink"
instance = google_sql_database_instance.instance.name
charset = "utf8"
depends_on = [
google_sql_user.db-service-user
]
}

resource "google_sql_database_instance" "instance" {
name = "${var.stack_name}-db-instance"
database_version = "POSTGRES_16"
root_password = "${random_password.db_password.result}"
settings {
tier = "db-custom-2-12288"
edition = "ENTERPRISE"
disk_size = 20
disk_type = "PD_SSD"
availability_type = "REGIONAL"
disk_autoresize = true
activation_policy = "ALWAYS"
maintenance_window {
hour = 0
day = 7
}
backup_configuration {
enabled = true
start_time = "00:00"
}
ip_configuration {
ipv4_enabled = false
private_network = "https://www.googleapis.com/compute/v1/projects/${var.project}/global/networks/default"
}
database_flags {
name = "cloudsql.iam_authentication"
value = "on"
}
}
}

resource "google_sql_user" "db-service-user" {
name = "${local.truncated_email}"
instance = google_sql_database_instance.instance.name
type = "CLOUD_IAM_SERVICE_ACCOUNT"
}

# Compute

resource "google_compute_instance_template" "it" {
name = "${var.stack_name}-instance-template"
machine_type = var.machine_type
network_interface {
network = "default"
access_config {}
}
service_account {
email = "${google_service_account.service_account_solution.email}"
scopes = [
"https://www.googleapis.com/auth/cloud.useraccounts.readonly",
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring.write",
"https://www.googleapis.com/auth/cloud-platform"
]
}
disk {
auto_delete = true
type = "PERSISTENT"
device_name = "boot"
boot = true
source_image = "${var.image_path}"
disk_size_gb = 32
disk_type = "pd-ssd"
}
can_ip_forward = false
tags = [
"${var.stack_name}-firewall-health-check",
"${var.stack_name}-public-network",
"${var.stack_name}-admin-network"
]
metadata = {
google-logging-enable = 1
google-monitoring-enable = 1
user-data = <<-EOT
#cloud-config
repo_update: true
repo_upgrade: all

write_files:
- content : |
#!/bin/bash
export CLOUD_PROVIDER=gcp
export ARCHITECTURE=HA
export SECRET_ID="${random_password.db_password.result}"
export DB_HOST=${google_sql_database_instance.instance.connection_name}
path: /opt/swiftgw/launch_config.env
- path: /opt/swiftgw/application.properties
append: true
content: |
features.first-connection.cloud-provider=gcp
features.first-connection.base-prefix=${local.bucket_name}
features.first-connection.use-instance-credentials=true
runcmd:
- 'curl -X "POST" "http://localhost:8080/1.0.0/admin/config" -H "accept: */*" -H "Content-Type: application/json" -d "{\"password\": \"${var.web_admin_password}\",\"username\": \"${var.web_admin_username}\"}"'

EOT
}
}

resource "google_compute_region_instance_group_manager" "igm" {
name = "${var.stack_name}-igm"
region = var.region
base_instance_name = "${var.stack_name}"
version {
instance_template = google_compute_instance_template.it.id
}
target_size = 2
named_port {
name = "http"
port = 80
}
named_port {
name = "https"
port = 443
}
named_port {
name = "ssh"
port = 22
}
depends_on = [
google_compute_instance_template.it
]
}

resource "google_compute_region_health_check" "rhc" {
name = "${var.stack_name}-rhc"
tcp_health_check {
port = 80
}
}

resource "google_compute_address" "ip" {
name = "${var.stack_name}-ip"
}

resource "google_compute_forwarding_rule" "fr_80" {
name = "${var.stack_name}-fr-80"
port_range = 80
backend_service = google_compute_region_backend_service.backend_http.id
ip_address = google_compute_address.ip.address
depends_on = [
google_compute_address.ip,
google_compute_region_backend_service.backend_http
]
}

resource "google_compute_forwarding_rule" "fr_443" {
name = "${var.stack_name}-fr-443"
port_range = 443
backend_service = google_compute_region_backend_service.backend_https.id
ip_address = google_compute_address.ip.address
}

resource "google_compute_forwarding_rule" "fr_22" {
name = "${var.stack_name}-fr-22"
port_range = 22
backend_service = google_compute_region_backend_service.backend_ssh.id
ip_address = google_compute_address.ip.address
}

resource "google_compute_region_backend_service" "backend_http" {
name = "${var.stack_name}-backend-http"
health_checks = [
google_compute_region_health_check.rhc.id
]
backend {
group = google_compute_region_instance_group_manager.igm.instance_group
}
port_name = "http"
protocol = "TCP"
load_balancing_scheme = "EXTERNAL"
}

resource "google_compute_region_backend_service" "backend_https" {
name = "${var.stack_name}-backend-https"
health_checks = [
google_compute_region_health_check.rhc.id
]
backend {
group = google_compute_region_instance_group_manager.igm.instance_group
}
port_name = "https"
protocol = "TCP"
load_balancing_scheme = "EXTERNAL"
}

resource "google_compute_region_backend_service" "backend_ssh" {
name = "${var.stack_name}-backend-ssh"
health_checks = [
google_compute_region_health_check.rhc.id
]
backend {
group = google_compute_region_instance_group_manager.igm.instance_group
}
port_name = "ssh"
protocol = "TCP"
load_balancing_scheme = "EXTERNAL"
}

resource "google_compute_firewall" "firewall_health_check" {
name = "${var.stack_name}-firewall-health-check"
network = "default"
target_tags = ["${var.stack_name}-tag-health-check"]
source_ranges = [
"130.211.0.0/22",
"35.191.0.0/16",
"209.85.152.0/22",
"209.85.204.0/22",
"169.254.169.254/32"
]
allow {
protocol = "tcp"
ports = ["80"]
}
}

resource "google_compute_firewall" "public-network" {
name = "${var.stack_name}-public-network"
network = "default"
target_tags = ["${var.stack_name}-public-network"]
source_ranges = [
"0.0.0.0/0"
]
allow {
protocol = "tcp"
ports = ["80", "443"]
}
}

resource "google_compute_firewall" "admin-network" {
name = "${var.stack_name}-admin-network"
network = "default"
target_tags = ["${var.stack_name}-admin-network"]
source_ranges = [
"${var.ssh_admin_ip_address}",
"35.235.240.0/20" // Google Cloud Portal SSH tool
]
allow {
protocol = "tcp"
ports = ["22"]
}
}

output "ip_address" {
value = "${google_compute_address.ip.address}"
}

Example terraform.tfvars

Make sure you replace these values with your own.

stack_name = "your-terraform-stack"
web_admin_username = "admin"
web_admin_password = "replace this with a strong password" // use mixed case, numbers, and symbols
google_storage_bucket = "your-bucket-terraform"
ssh_admin_ip_address = "1.2.3.4/32" // replace this with your IP address, followed by /32
project = "your-google-cloud-project"
region = "us-central1"
zone = "us-central1-c"
machine_type = "e2-medium"

image_path = "https://www.googleapis.com/compute/v1/projects/mpi-thorn-technologies-public/global/images/storagelink-1-1-5-1756246182" // this is the latest StorageLink marketplace image