Migrating from EC2 to Container Deployment
This guide walks you through migrating your SFTP Gateway deployment from an EC2 instance to a containerized deployment on AWS. Whether you're modernizing your infrastructure, preparing for auto-scaling requirements, or standardizing on container-based deployments across your organization, this guide will help you complete the migration while preserving your existing users, folders, and configuration.
Prerequisite: This guide assumes you have already deployed SFTP Gateway as a container. If you haven't set up your container environment yet, see Deploying SFTP Gateway on ECS first, then return to this guide to migrate your configuration.
Overview
Many organizations are moving from traditional VM-based deployments to containers as part of their cloud modernization strategy. Container deployments offer several advantages over traditional EC2 instances:
Easier scaling: Containers can be spun up or down in seconds, allowing you to respond quickly to changes in demand.
Improved portability: A containerized SFTP Gateway runs the same way regardless of the underlying environment. This consistency makes it easier to replicate your setup across development, staging, and production environments.
Reduced operational overhead: One of the top drivers for container adoption is eliminating the burden of VM patching. Containers have a significantly smaller software footprint than full VMs—you're not responsible for kernel updates, OS patches, or system-level security fixes.
Cost efficiency: If you're already running Kubernetes or ECS, adding another container to your existing infrastructure has minimal marginal cost. A dedicated VM, by contrast, requires its own compute resources that often sit underutilized.
The migration is straightforward: export your configuration from the EC2 instance and import it into the new container deployment. The process is designed to minimize downtime—most of the work happens in parallel with your existing deployment, and the actual cutover takes only a few minutes.
Prerequisites
Before beginning the migration, ensure you have the following:
A running SFTP Gateway container deployment — Follow the ECS deployment guide to set up your container environment first.
Administrative access to your existing SFTP Gateway EC2 instance — You'll need access to the Web Admin interface to export the configuration.
AWS CLI configured with appropriate permissions — The CLI should be configured with credentials that have access to EC2 and ELB services for the cutover steps.
Understanding of your current network configuration — Document your VPC, subnets, security groups, and any Elastic IPs or DNS hostnames associated with your current deployment.
Migration Process
The migration process consists of two phases: preparation (Steps 1-3) and cutover (Steps 4-5). The preparation phase can be done without any impact to your existing deployment. Only the cutover phase requires a brief maintenance window.
Phase 1: Preparation
The first phase focuses on capturing everything you need from your existing EC2 deployment. This includes your SFTP Gateway configuration and network settings. Taking the time to thoroughly document your current setup will make troubleshooting easier if any issues arise during or after the migration.
Step 1: Export Configuration from EC2 Instance
Your SFTP Gateway configuration includes users, folders, permissions, and system settings. SFTP Gateway provides a built-in backup mechanism that exports all of this into a portable YAML file. This file is the key to migrating your setup—it contains everything the new container deployment needs to recreate your current configuration.
The easiest way to export your configuration is through the Web Admin interface:
- Log into your SFTP Gateway Web Admin interface
- Navigate to Settings > Backup & Recovery
- Click Export Backup File
- Save the downloaded .yml file securely
Store this backup file securely—you'll need it in Step 3, and it also serves as a disaster recovery backup for your current deployment.
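Before relying on the export in Step 3, it's worth a quick sanity check that the download actually produced a YAML file and not, say, a truncated file or an HTML error page. A minimal sketch (the `check_backup` helper and any filename you pass it are illustrative, not part of SFTP Gateway):

```shell
# Sanity-check an exported SFTP Gateway backup file before relying on it.
check_backup() {
  file="$1"
  # File must exist and be non-empty
  [ -s "$file" ] || { echo "missing or empty: $file"; return 1; }
  # A YAML export should be plain text, not an accidental HTML error page
  if head -n 1 "$file" | grep -qi '<html'; then
    echo "looks like an HTML page, not a YAML export: $file"
    return 1
  fi
  echo "ok: $file"
}

# Example: check_backup sftpgw-backup.yml  (use whatever filename you saved)
```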
Step 2: Document Current Network Configuration
Before making any changes, document your current network configuration. This serves two purposes: it provides a reference for configuring the new deployment, and it gives you the information you need to quickly roll back if something goes wrong.
The following commands capture the key network details:
# Note your current Elastic IP (if applicable)
aws ec2 describe-addresses --query 'Addresses[*].[PublicIp,InstanceId,AllocationId]' --output table
# Note security group configuration
aws ec2 describe-security-groups --group-ids sg-xxxxxxxx --output yaml > security-group-backup.yaml
# Note your VPC and subnet IDs
aws ec2 describe-instances --instance-ids i-xxxxxxxx \
--query 'Reservations[*].Instances[*].[VpcId,SubnetId,SecurityGroups]' --output table
Save this information somewhere accessible. You'll reference it during the cutover phase, and it provides the details needed for a quick rollback if necessary.
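The three commands above can be wrapped into a single snapshot script so everything lands in one timestamped directory you can reference during cutover or rollback. This is a sketch: `snapshot_network`, `snapshot_dir`, and the security group and instance IDs are placeholders for your own values.

```shell
# Pure helper: build a predictable directory name like net-snapshot-20240101
snapshot_dir() {
  echo "net-snapshot-$(date +%Y%m%d)"
}

# Capture the Step 2 details into one timestamped directory.
# Usage: snapshot_network <security-group-id> <instance-id>
snapshot_network() {
  dir=$(snapshot_dir)
  mkdir -p "$dir"
  aws ec2 describe-addresses \
    --query 'Addresses[*].[PublicIp,InstanceId,AllocationId]' \
    --output table > "$dir/elastic-ips.txt"
  aws ec2 describe-security-groups --group-ids "$1" \
    --output yaml > "$dir/security-group.yaml"
  aws ec2 describe-instances --instance-ids "$2" \
    --query 'Reservations[*].Instances[*].[VpcId,SubnetId,SecurityGroups]' \
    --output table > "$dir/instance-network.txt"
  echo "saved snapshot to $dir"
}

# Example: snapshot_network sg-xxxxxxxx i-xxxxxxxx
```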
Step 3: Import Configuration to Container
Now it's time to bring your SFTP Gateway configuration into the container deployment. The import process is straightforward and uses the same backup file you created in Step 1.
- Access the Web Admin interface of your container deployment (via the NLB DNS name or however your container environment is exposed)
- Navigate to Settings > Backup & Recovery
- Click Import Backup File
- Upload the .yml backup file from Step 1
- Review the import summary to verify all users and folders were imported successfully
After the import completes, the container deployment will have the same users, folders, permissions, and SSH host keys as your EC2 instance.
Phase 2: Cutover
With your container environment running and your configuration imported, you're ready for the cutover. This phase involves switching traffic from the old EC2 instance to the new container and verifying everything works. Plan for a brief maintenance window during the actual cutover (typically 5-15 minutes).
Step 4: Perform Cutover
This is the moment of truth. You'll redirect traffic from your EC2 instance to the container deployment. The method depends on how your users currently connect.
4a. Update DNS or Elastic IP
Option A: DNS Cutover (Recommended)
If users connect via a DNS hostname (e.g., sftp.yourcompany.com), update the DNS record to point to the NLB. This approach provides the most flexibility and allows for easy rollback.
Important: About 24 hours before your planned cutover, reduce the TTL on your DNS record to the lowest value your provider allows (e.g., 60 seconds). This ensures the cutover propagates quickly across the internet, and makes rollbacks faster if needed. After the migration is stable, you can increase the TTL back to a normal value.
# Get NLB DNS name
aws elbv2 describe-load-balancers --names sftp-gateway-nlb \
--query 'LoadBalancers[*].DNSName' --output text
Update your DNS provider (Route 53, Cloudflare, etc.) to point your SFTP hostname to the NLB DNS name. If using Route 53, consider using an alias record for better performance and no additional cost.
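If your zone is in Route 53, the alias-record cutover can be scripted with a change batch. Everything below is a placeholder sketch: substitute your hosted zone ID, your SFTP hostname, and the NLB's DNS name and canonical hosted zone ID (both returned by `aws elbv2 describe-load-balancers`).

```shell
# Write a Route 53 change batch that UPSERTs an alias A record for the
# SFTP hostname, pointing at the NLB. All IDs/names are placeholders.
cat > change-batch.json <<'EOF'
{
  "Comment": "Cut over sftp hostname to the NLB",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "sftp.yourcompany.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "ZXXXXXXXXXXXXX",
          "DNSName": "sftp-gateway-nlb-0123456789.elb.us-east-1.amazonaws.com",
          "EvaluateTargetHealth": true
        }
      }
    }
  ]
}
EOF

# Apply once you've confirmed the values above (requires AWS credentials):
# aws route53 change-resource-record-sets \
#   --hosted-zone-id ZYYYYYYYYYYYYY --change-batch file://change-batch.json
```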
Option B: Elastic IP Association
If users connect directly to an Elastic IP address, you can reassociate that IP with the NLB. Note that NLBs require one Elastic IP per Availability Zone.
# Disassociate from EC2 instance
aws ec2 disassociate-address --association-id eipassoc-xxxxxxxx
# Associate with NLB (requires allocating EIP to each NLB subnet)
# This is done through the NLB configuration, not ec2 associate-address
This approach provides instant cutover with no DNS propagation delay, but it's more complex for multi-AZ deployments.
4b. Verify Connectivity
Before declaring the migration complete, verify that everything is working:
# Test SFTP connectivity
sftp testuser@your-new-endpoint
# Test Web Admin access
curl -k https://your-new-endpoint/admin
Have a few users test file uploads and downloads to verify end-to-end functionality. Check CloudWatch logs for any errors:
aws logs tail /ecs/sftp-gateway --follow
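The connectivity checks above can be turned into a small round-trip smoke test using sftp batch mode. This is a sketch: the endpoint and `testuser` account are placeholders, and the file names are arbitrary; run it only against a test account.

```shell
# Build a batch file that uploads a small file, lists it, and downloads it
# back under a different name so the two copies can be compared.
echo "migration smoke test $(date)" > smoke-test.txt

cat > smoke-batch.txt <<'EOF'
put smoke-test.txt
ls -l smoke-test.txt
get smoke-test.txt smoke-test.roundtrip.txt
EOF

# Run once the cutover is in place (placeholder endpoint and user):
# sftp -b smoke-batch.txt testuser@your-new-endpoint
# cmp smoke-test.txt smoke-test.roundtrip.txt && echo "round trip OK"
```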
Step 5: Decommission EC2 Instance
Once you've verified the container deployment is working correctly, you can begin decommissioning the EC2 instance. However, don't rush this step—keeping the old instance available for a few days provides a safety net in case issues emerge.
Recommended decommissioning timeline:
Immediately after cutover: Keep the EC2 instance running but no longer receiving traffic. This allows instant rollback if critical issues emerge.
After 24-48 hours: If no issues have been reported, stop (but don't terminate) the EC2 instance. Stopping the instance eliminates most costs while preserving the ability to restart if needed.
After 1-2 weeks: If the container deployment has been stable, terminate the EC2 instance and delete any associated resources (EBS volumes, snapshots, etc.) that are no longer needed.
Important: If your EC2 instance was deployed via CloudFormation, add DeletionPolicy: Retain to the Elastic IP resource before deleting the stack. This prevents the IP from being released, which is important if you need to roll back or want to keep the IP for future use.
IPAddress:
  Type: AWS::EC2::EIP
  DeletionPolicy: Retain
  Properties:
    Domain: vpc
    InstanceId: !Ref 'SFTPGatewayInstance'
Rollback Procedure
Despite careful planning, sometimes things don't go as expected. If you encounter critical issues with the container deployment that can't be quickly resolved, here's how to roll back to the EC2 instance.
The key is that your EC2 instance should still be available (either running or stopped) during the initial post-migration period. Rolling back is essentially the reverse of the cutover process:
If using DNS: Update your DNS record to point back to the EC2 instance's IP address or Elastic IP.
If using Elastic IP: Re-associate the Elastic IP with the EC2 instance:
aws ec2 associate-address --instance-id i-xxxxxxxx --allocation-id eipalloc-xxxxxxxx
If the EC2 instance is stopped: Start it first:
aws ec2 start-instances --instance-ids i-xxxxxxxx
Verify the rollback: Test SFTP connectivity and Web Admin access to confirm the EC2 instance is serving traffic correctly.
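Under pressure it's easy to run rollback commands against the wrong IDs. One option is a dry-run helper that only prints the commands so you can review them before executing. This is a sketch; `rollback_plan` is a hypothetical helper and the IDs are placeholders.

```shell
# Print the exact Elastic IP rollback commands for review.
# Usage: rollback_plan <instance-id> <allocation-id>
rollback_plan() {
  instance_id="$1"
  allocation_id="$2"
  echo "aws ec2 start-instances --instance-ids $instance_id"
  echo "aws ec2 wait instance-running --instance-ids $instance_id"
  echo "aws ec2 associate-address --instance-id $instance_id --allocation-id $allocation_id"
}

# Review first, then pipe to sh to actually execute:
# rollback_plan i-xxxxxxxx eipalloc-xxxxxxxx
# rollback_plan i-xxxxxxxx eipalloc-xxxxxxxx | sh
```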
Once rolled back, investigate the issues with the container deployment before attempting the migration again. Common causes include network configuration problems, IAM permission issues, or EFS mount failures.
Post-Migration Checklist
After completing the migration, work through this checklist to ensure everything is properly configured for production use:
All users can authenticate successfully — Test with a sample of user accounts, including both password and key-based authentication if applicable.
File uploads and downloads work correctly — Verify files appear in the expected S3 location and can be retrieved.
S3 bucket integration is functioning — Check that new files uploaded via SFTP appear in S3, and that files added to S3 are accessible via SFTP.
Web Admin is accessible — Verify you can log in and perform administrative tasks.
SSL/TLS certificates are properly configured — Check that HTTPS connections use valid certificates.
Monitoring and alerting are configured — Set up CloudWatch alarms for container health, CPU/memory usage, and error rates.
Backup automation is set up for container deployment — Configure regular exports of the SFTP Gateway configuration.
Documentation is updated with new architecture — Update runbooks, architecture diagrams, and any relevant documentation to reflect the container deployment.
Troubleshooting
Even with careful preparation, you may encounter issues during or after the migration. This section covers the most common migration-related problems. For infrastructure issues (container won't start, EFS mount failures, etc.), refer to the ECS deployment guide.
SSH Host Key Warning
Symptom: Users see a message like "WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!" when connecting.
Cause: The container is using different SSH host keys than the original EC2 instance.
Solution:
- Verify the backup was imported correctly in Step 3 (the backup includes SSH host keys)
- Check that the key files exist: ls -la /opt/sftpgw/ssh_host_*
- Verify ownership is correct for the container user (typically UID 1000)
- Restart the container to reload the keys
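The existence and ownership checks above can be combined into one helper you run inside the container. This is a sketch: `check_host_keys` is a hypothetical helper, and `/opt/sftpgw` is the key directory referenced in this guide; adjust the path if your image stores keys elsewhere.

```shell
# List ssh_host_* files in a directory and report each file's owner UID
# so it can be compared against the container user (often UID 1000).
check_host_keys() {
  dir="$1"
  found=0
  for f in "$dir"/ssh_host_*; do
    [ -e "$f" ] || continue
    found=$((found + 1))
    # stat -c is GNU coreutils; fall back to BSD stat -f
    echo "$f owner-uid=$(stat -c %u "$f" 2>/dev/null || stat -f %u "$f")"
  done
  echo "host key files found: $found"
}

# Example (inside the container): check_host_keys /opt/sftpgw
```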
Users Cannot Authenticate
Symptom: Users who worked on the EC2 instance cannot log in after migration.
Cause: The configuration import may not have completed successfully, or there may be a mismatch in authentication settings.
Solution:
- Verify the import completed successfully in Settings > Backup & Recovery
- Check that the user exists in Users in the Web Admin
- For key-based authentication, verify the user's public key was imported correctly
- Check CloudWatch logs for authentication errors
S3 Access Issues After Migration
Symptom: Users can authenticate but cannot access files, or files uploaded via SFTP don't appear in S3.
Cause: The container's IAM role may not have access to the same S3 bucket, or bucket permissions may differ.
Solution:
- Verify the container's task role has permissions to the S3 bucket configured in your SFTP Gateway
- Ensure the S3 bucket name in the container environment matches your original configuration
- Check that any bucket policies allow access from the container's IAM role
Related Articles
For more information on specific topics covered in this guide, see these related articles:
- Deploying SFTP Gateway on ECS — Set up the container infrastructure before migrating
- Automated Backup for SFTP Gateway — Set up automated configuration backups for your container deployment
- Elastic IP Cutover — Detailed guide for Elastic IP management during migrations
Support
If you encounter issues during migration that aren't covered in this guide, contact support at support@thorntech.com. To help us assist you quickly, please include:
- A detailed description of the problem, including when it started and any error messages
- The steps you've already tried to resolve the issue