Migrating from Azure VM to Container Deployment
This guide walks you through migrating your SFTP Gateway deployment from an Azure VM to a containerized deployment. Whether you're modernizing your infrastructure, preparing for auto-scaling requirements, or standardizing on container-based deployments across your organization, this guide will help you complete the migration while preserving your existing users, folders, and configuration.
Prerequisite: This guide assumes you have already deployed SFTP Gateway as a container. If you haven't set up your container environment yet, see Deploying SFTP Gateway on AKS first, then return to this guide to migrate your configuration.
Overview
Many organizations are moving from traditional VM-based deployments to containers as part of their cloud modernization strategy. Container deployments offer several advantages over traditional Azure VMs:
Easier scaling: Containers can be spun up or down in seconds, allowing you to respond quickly to changes in demand.
Improved portability: A containerized SFTP Gateway runs the same way regardless of the underlying environment. This consistency makes it easier to replicate your setup across development, staging, and production environments.
Reduced operational overhead: One of the top drivers for container adoption is eliminating the burden of VM patching. Containers have a significantly smaller software footprint than full VMs—you're not responsible for kernel updates, OS patches, or system-level security fixes.
Cost efficiency: If you're already running Kubernetes or Azure Container Apps, adding another container to your existing infrastructure has minimal marginal cost. A dedicated VM, by contrast, requires its own compute resources that often sit underutilized.
The migration is straightforward: export your configuration from the Azure VM and import it into the new container deployment. The process is designed to minimize downtime—most of the work happens in parallel with your existing deployment, and the actual cutover takes only a few minutes.
Prerequisites
Before beginning the migration, ensure you have the following:
A running SFTP Gateway container deployment — Follow the AKS deployment guide to set up your container environment first.
Administrative access to your existing SFTP Gateway Azure VM — You'll need access to the Web Admin interface to export the configuration.
Azure CLI configured with appropriate permissions — The CLI should be configured with credentials that have access to your Azure subscription for the cutover steps.
Understanding of your current network configuration — Document your VNet, subnets, Network Security Groups, and any public IPs or DNS hostnames associated with your current deployment.
Migration Process
The migration process consists of two phases: preparation (Steps 1-3) and cutover (Steps 4-5). The preparation phase can be done without any impact to your existing deployment. Only the cutover phase requires a brief maintenance window.
Phase 1: Preparation
The first phase focuses on capturing everything you need from your existing Azure VM deployment. This includes your SFTP Gateway configuration and network settings. Taking the time to thoroughly document your current setup will make troubleshooting easier if any issues arise during or after the migration.
Step 1: Export Configuration from Azure VM
Your SFTP Gateway configuration includes users, folders, permissions, and system settings. SFTP Gateway provides a built-in backup mechanism that exports all of this into a portable YAML file. This file is the key to migrating your setup—it contains everything the new container deployment needs to recreate your current configuration.
The easiest way to export your configuration is through the Web Admin interface:
- Log into your SFTP Gateway Web Admin interface
- Navigate to Settings > Backup & Recovery
- Click Export Backup File
- Save the downloaded .yml file securely
Store this backup file securely—you'll need it in Step 3, and it also serves as a disaster recovery backup for your current deployment.
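If you want an extra integrity check, a small shell sketch like the following archives the export with a timestamp and a checksum, so you can confirm the file is intact before importing it in Step 3. The filename sftpgw-backup.yml is a placeholder for whatever name you saved the export under:

```shell
# Archive the exported backup with a timestamp and checksum.
# "sftpgw-backup.yml" is a placeholder; substitute the file you downloaded.
BACKUP=sftpgw-backup.yml
STAMP=$(date +%Y%m%d-%H%M%S)
ARCHIVE="${BACKUP%.yml}-${STAMP}.yml"

cp "$BACKUP" "$ARCHIVE"
sha256sum "$ARCHIVE" > "$ARCHIVE.sha256"

# Later, just before the import, confirm the archived copy is intact:
sha256sum -c "$ARCHIVE.sha256"
```

Keeping the checksum alongside the backup also makes it easy to verify the file after copying it between machines.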
Step 2: Document Current Network Configuration
Before making any changes, document your current network configuration. This serves two purposes: it provides a reference for configuring the new deployment, and it gives you the information you need to quickly roll back if something goes wrong.
The following commands capture the key network details:
# Note your current public IP (if applicable)
az network public-ip list --query "[].{Name:name, IP:ipAddress, ResourceGroup:resourceGroup}" --output table
# Note Network Security Group configuration
az network nsg show --name your-nsg-name --resource-group your-resource-group --output yaml > nsg-backup.yaml
# Note your VNet and subnet configuration. The VM query only returns the
# NIC resource ID, so look up the subnet from the NIC itself:
NIC_ID=$(az vm show --name your-vm-name --resource-group your-resource-group \
  --query "networkProfile.networkInterfaces[0].id" --output tsv)
az network nic show --ids "$NIC_ID" \
  --query "ipConfigurations[0].subnet.id" --output tsv
Save this information somewhere accessible. You'll reference it during the cutover phase, and it provides the details needed for a quick rollback if necessary.
Step 3: Import Configuration to Container
Now it's time to bring your SFTP Gateway configuration into the container deployment. The import process is straightforward and uses the same backup file you created in Step 1.
- Access the Web Admin interface of your container deployment (via the Load Balancer IP or however your container environment is exposed)
- Navigate to Settings > Backup & Recovery
- Click Import Backup File
- Upload the .yml backup file from Step 1
- Review the import summary to verify all users and folders were imported successfully
After the import completes, the container deployment will have the same users, folders, permissions, and SSH host keys as your Azure VM.
Phase 2: Cutover
With your container environment running and your configuration imported, you're ready for the cutover. This phase involves switching traffic from the old Azure VM to the new container and verifying everything works. Plan for a brief maintenance window during the actual cutover (typically 5-15 minutes).
Step 4: Perform Cutover
This is the moment of truth. You'll redirect traffic from your Azure VM to the container deployment. The method depends on how your users currently connect.
4a. Update DNS or Public IP
Option A: DNS Cutover (Recommended)
If users connect via a DNS hostname (e.g., sftp.yourcompany.com), update the DNS record to point to the container service's load balancer. This approach provides the most flexibility and allows for easy rollback.
Important: About 24 hours before your planned cutover, reduce the TTL on your DNS record to the lowest value your provider allows (e.g., 60 seconds). This ensures the cutover propagates quickly across the internet, and makes rollbacks faster if needed. After the migration is stable, you can increase the TTL back to a normal value.
# Get the Load Balancer public IP for your AKS service
kubectl get svc sftp-gateway-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
Update your DNS provider (Azure DNS, Cloudflare, etc.) to point your SFTP hostname to the load balancer IP.
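If the zone is hosted in Azure DNS, both the TTL reduction and the record update can be scripted with the Azure CLI. The resource group, zone name, and record name below are placeholders; other providers expose equivalent settings in their own tooling:

```shell
# Lower the TTL ahead of the cutover (hypothetical zone and record names)
az network dns record-set a update \
  --resource-group your-dns-resource-group \
  --zone-name yourcompany.com \
  --name sftp \
  --set ttl=60

# At cutover: add the load balancer IP, then remove the old VM IP
az network dns record-set a add-record \
  --resource-group your-dns-resource-group \
  --zone-name yourcompany.com \
  --record-set-name sftp \
  --ipv4-address <load-balancer-ip>

az network dns record-set a remove-record \
  --resource-group your-dns-resource-group \
  --zone-name yourcompany.com \
  --record-set-name sftp \
  --ipv4-address <old-vm-ip>
```

Adding the new record before removing the old one keeps the hostname resolvable throughout the switch.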
Option B: Public IP Reassociation
If users connect directly to a static public IP address, you can reassociate that IP with the container service's load balancer.
# Dissociate from VM network interface
az network nic ip-config update \
--resource-group your-resource-group \
--nic-name your-nic-name \
--name ipconfig1 \
--remove publicIpAddress
# The public IP can then be associated with the AKS load balancer
# This requires configuring the AKS service with the specific public IP
This approach provides instant cutover with no DNS propagation delay, but requires more complex AKS configuration.
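As a sketch of what that AKS configuration can look like, assuming the Service is named sftp-gateway-service and the public IP lives in a resource group other than the cluster's node resource group, you pin the IP on the Service and name the IP's resource group in an annotation. Note that spec.loadBalancerIP is deprecated in recent Kubernetes releases but is still commonly used with AKS:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: sftp-gateway-service
  annotations:
    # Resource group that holds the existing public IP (assumed name)
    service.beta.kubernetes.io/azure-load-balancer-resource-group: your-resource-group
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10   # replace with your existing static public IP
  selector:
    app: sftp-gateway            # assumed pod label
  ports:
    - name: sftp
      port: 22
      targetPort: 22
EOF
```

The cluster's identity also needs Network Contributor access on the resource group that holds the IP, or the load balancer provisioning will fail.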
4b. Verify Connectivity
Before declaring the migration complete, verify that everything is working:
# Test SFTP connectivity
sftp testuser@your-new-endpoint
# Test Web Admin access
curl -k https://your-new-endpoint/admin
Have a few users test file uploads and downloads to verify end-to-end functionality. Check container logs for any errors:
kubectl logs -f deployment/sftp-gateway
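The upload/download spot check can also be scripted as a quick round trip, assuming a test account (testuser here) with key-based authentication and write access to its home directory:

```shell
# Create a probe file, upload it, download it back, and compare
echo "migration-test $(date)" > /tmp/sftp-probe.txt

sftp -b - testuser@your-new-endpoint <<'EOF'
put /tmp/sftp-probe.txt sftp-probe.txt
get sftp-probe.txt /tmp/sftp-probe-roundtrip.txt
rm sftp-probe.txt
EOF

diff /tmp/sftp-probe.txt /tmp/sftp-probe-roundtrip.txt && echo "Round trip OK"
```

Running this after the cutover exercises authentication, upload, download, and delete in one pass.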
Step 5: Decommission Azure VM
Once you've verified the container deployment is working correctly, you can begin decommissioning the Azure VM. However, don't rush this step—keeping the old VM available for a few days provides a safety net in case issues emerge.
Recommended decommissioning timeline:
Immediately after cutover: Keep the Azure VM running but no longer receiving traffic. This allows instant rollback if critical issues emerge.
After 24-48 hours: If no issues have been reported, stop (deallocate) the Azure VM. Deallocating eliminates compute costs while preserving the ability to restart if needed.
After 1-2 weeks: If the container deployment has been stable, delete the Azure VM and any associated resources (managed disks, NICs, etc.) that are no longer needed.
# Deallocate VM (stops billing for compute, preserves disk)
az vm deallocate --name your-vm-name --resource-group your-resource-group
# Later, when ready to fully decommission
az vm delete --name your-vm-name --resource-group your-resource-group
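Before the final delete, it can help to list the resources attached to the VM so nothing is orphaned afterwards (resource names below are placeholders):

```shell
# List the OS disk, data disks, and NICs associated with the VM
az vm show --name your-vm-name --resource-group your-resource-group \
  --query "{OSDisk:storageProfile.osDisk.name, DataDisks:storageProfile.dataDisks[].name, NICs:networkProfile.networkInterfaces[].id}" \
  --output yaml
```

Public IPs and NSGs attached to those NICs are separate resources; check for them before deleting the resource group or you may keep paying for unused IPs.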
Rollback Procedure
Despite careful planning, sometimes things don't go as expected. If you encounter critical issues with the container deployment that can't be quickly resolved, here's how to roll back to the Azure VM.
The key is that your Azure VM should still be available (either running or deallocated) during the initial post-migration period. Rolling back is essentially the reverse of the cutover process:
If using DNS: Update your DNS record to point back to the Azure VM's IP address.
If using Public IP: Re-associate the public IP with the Azure VM's network interface:
az network nic ip-config update \
  --resource-group your-resource-group \
  --nic-name your-nic-name \
  --name ipconfig1 \
  --public-ip-address your-public-ip-name

If the Azure VM is deallocated: Start it first:

az vm start --name your-vm-name --resource-group your-resource-group

Verify the rollback: Test SFTP connectivity and Web Admin access to confirm the Azure VM is serving traffic correctly.
Once rolled back, investigate the issues with the container deployment before attempting the migration again. Common causes include network configuration problems, managed identity permission issues, or persistent storage mount failures.
Post-Migration Checklist
After completing the migration, work through this checklist to ensure everything is properly configured for production use:
All users can authenticate successfully — Test with a sample of user accounts, including both password and key-based authentication if applicable.
File uploads and downloads work correctly — Verify files appear in the expected Blob Storage location and can be retrieved.
Blob Storage integration is functioning — Check that new files uploaded via SFTP appear in Blob Storage, and that files added to Blob Storage are accessible via SFTP.
Web Admin is accessible — Verify you can log in and perform administrative tasks.
SSL/TLS certificates are properly configured — Check that HTTPS connections use valid certificates.
Monitoring and alerting is configured — Set up Azure Monitor alerts for container health, CPU/memory usage, and error rates.
Backup automation is set up for container deployment — Configure regular exports of the SFTP Gateway configuration.
Documentation is updated with new architecture — Update runbooks, architecture diagrams, and any relevant documentation to reflect the container deployment.
Troubleshooting
Even with careful preparation, you may encounter issues during or after the migration. This section covers the most common migration-related problems. For infrastructure issues (container won't start, persistent volume failures, etc.), refer to the AKS deployment guide.
SSH Host Key Warning
Symptom: Users see a message like "WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!" when connecting.
Cause: The container is using different SSH host keys than the original Azure VM.
Solution:
- Verify the backup was imported correctly in Step 3 (the backup includes SSH host keys)
- Check that the key files exist: ls -la /opt/sftpgw/ssh_host_*
- Verify ownership is correct for the container user (typically UID 1000)
- Restart the container to reload the keys
Users Cannot Authenticate
Symptom: Users who worked on the Azure VM cannot log in after migration.
Cause: The configuration import may not have completed successfully, or there may be a mismatch in authentication settings.
Solution:
- Verify the import completed successfully in Settings > Backup & Recovery
- Check that the user exists in Users in the Web Admin
- For key-based authentication, verify the user's public key was imported correctly
- Check container logs for authentication errors
Blob Storage Access Issues After Migration
Symptom: Users can authenticate but cannot access files, or files uploaded via SFTP don't appear in Blob Storage.
Cause: The container's managed identity may not have access to the same storage account, or storage permissions may differ.
Solution:
- Verify the container's managed identity has the "Storage Blob Data Contributor" role on the storage account
- Ensure the storage account name in the container environment matches your original configuration
- Check that any storage account firewall rules allow access from the container's VNet
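The role assignment can be checked, and granted if missing, with the Azure CLI. The managed identity's principal ID and the storage account name below are placeholders for your own values:

```shell
# Resolve the storage account's resource ID
STORAGE_ID=$(az storage account show --name yourstorageaccount \
  --resource-group your-resource-group --query id --output tsv)

# List current role assignments for the container's managed identity
az role assignment list --assignee <managed-identity-principal-id> \
  --scope "$STORAGE_ID" --output table

# Grant the role if it is missing
az role assignment create --assignee <managed-identity-principal-id> \
  --role "Storage Blob Data Contributor" --scope "$STORAGE_ID"
```

Role assignments can take a few minutes to propagate, so retest Blob access after a short wait rather than immediately.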
Related Articles
For more information on specific topics covered in this guide, see these related articles:
- Deploying SFTP Gateway on AKS — Set up the container infrastructure before migrating
- Automated Backup for SFTP Gateway — Set up automated configuration backups for your container deployment
- Public IP Migration — Detailed guide for public IP management during migrations
Support
If you encounter issues during migration that aren't covered in this guide, contact support at support@thorntech.com. To help us assist you quickly, please include:
- A detailed description of the problem, including when it started and any error messages
- The steps you've already tried to resolve the issue