Performance tuning (Azure)
Overview
SFTP Gateway 2.001.x is designed to support many SFTP users and a high volume of file uploads.
Some customers have experienced server slowness, or found that it takes several minutes and multiple attempts to connect via SSH or SFTP.
If this is happening to your server, it can be a sign of more serious underlying issues, so it's important to address them while the server is still operational.
Disk space
If the server is out of disk space, it will become unstable. So, the first thing to check is the remaining disk space:
sudo su
df -h
You should see output similar to the following:
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        32G   14G   19G  43% /
In this case, the root partition is only using 43% of the disk.
If the root partition (/) is full, follow the instructions on this page to free up some space.
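To see which directories are consuming the most space, a du one-liner like the following can help (a sketch; the /home path assumes your SFTP user directories live there):
du -sh /home/* 2>/dev/null | sort -rh | head -10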
Memory and CPU
The next step is to check the server memory. Run this command:
top
You should see output similar to the following:
top - 21:37:48 up 237 days, 23:19, 1 user, load average: 0.04, 0.04, 0.01
Tasks: 127 total, 1 running, 97 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 8168864k total, 4576804k used, 3592060k free, 213368k buffers
Swap: 2097148k total, 0k used, 2097148k free, 1913304k cached
Note the following examples from the output above:
- CPU usage: 0.0%us. A high sustained number is a sign of trouble.
- Memory used: 4576804k used. Divide this by the total, and watch for a high sustained percentage.
- Swap used: 0k used. Anything other than 0k (and rising) is a bad sign.
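If you'd rather pull these numbers non-interactively, here is a small sketch using the stock free and awk commands:
# Used memory as a percentage of total
free | awk '/^Mem:/ {printf "memory used: %.0f%%\n", $3/$2*100}'
# Swap used, in kilobytes
free | awk '/^Swap:/ {print "swap used: " $3 "k"}'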
If the server is using a lot of memory, chances are there are too many processes running.
Too many running processes
If the server is using too much memory, one common cause is that there are too many processes.
Run the following command:
ps -ef | wc -l
This will count the total processes. Anything over a few thousand could be a sign of trouble, especially if that number keeps rising.
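Many of these are typically task spooler workers. To count just those, assuming the task spooler binary is named ts (as in the killall command below):
pgrep -c -x ts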
If a server crash is imminent, you can stop all the task spooler processes:
sudo yum install psmisc -y
sudo killall ts
Warning: Any file events in the task spooler will be lost, so this could result in stuck files.
Check the number of processes again:
ps -ef | wc -l
This time, the number should be much smaller.
Temporarily stop specific services
Forcefully terminating task spooler processes is just a temporary measure.
There are two services that emit events:
- cron: Kicks off a folder scan for every user on a schedule.
- incron: The folder scan kicks off a movetos3 (i.e. move to Blob) process for every stuck file.
It's possible that these services are burying your server in task spooler processes. If so, you may want to temporarily stop these services:
service crond stop
service incrond stop
(Just remember to start these services later, so that SFTP Gateway can function normally.)
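To start them again later:
service crond start
service incrond start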
Performance tuning: reduce the folder scan frequency
By default, SFTP Gateway triggers a folder scan every 5 minutes for every SFTP user. This helps ensure that file uploads are promptly moved to Blob.
As you add more users on a production system, these folder scans add up quickly -- especially if you have not upgraded the VM size along the way.
The biggest performance win is to make the folder scans less frequent.
Edit the file:
/usr/local/bin/globals.sh
Look for the following line:
DEFAULT_USER_FOLDER_SCAN_FREQUENCY_IN_MINUTES=5
Change this to the following:
DEFAULT_USER_FOLDER_SCAN_FREQUENCY_IN_MINUTES=30
(Valid values are 1, 2, 3, 4, 5, 12, 15, 20, and 30 -- any number that divides evenly into 60. Cron's */N syntax matches minutes 0, N, 2N, and so on, so a value that does not divide 60 would produce uneven gaps at the top of each hour.)
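If you prefer to make the change from the command line, a sed one-liner like this should work (a sketch -- back up globals.sh first):
cp /usr/local/bin/globals.sh /usr/local/bin/globals.sh.bak
sed -i 's/^DEFAULT_USER_FOLDER_SCAN_FREQUENCY_IN_MINUTES=.*/DEFAULT_USER_FOLDER_SCAN_FREQUENCY_IN_MINUTES=30/' /usr/local/bin/globals.sh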
This setting will apply to new SFTP users you create from this point on.
To apply your changes to existing users, you will need to manually update the userfolderscan.cron file for each SFTP user:
vi /etc/cron.d/robtest.userfolderscan.cron
Behind the scenes, the cron service runs a scheduled folder scan for every SFTP user. There is a cron file for each SFTP user:
/etc/cron.d/robtest.userfolderscan.cron
In this file, the following line runs the folder scan every 5 minutes:
*/5 * * * * root ${BIN_DIR}/ts-folderscan ...
Change this to the following:
*/30 * * * * root ${BIN_DIR}/ts-folderscan ...
Now, it runs once every 30 minutes.
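Editing each file by hand gets tedious once you have many users. As a sketch, assuming every per-user file follows the *.userfolderscan.cron naming shown above and currently uses */5:
for f in /etc/cron.d/*.userfolderscan.cron; do
  sed -i 's|^\*/5 |*/30 |' "$f"
done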
VM size
When testing SFTP Gateway, you want to use a cheap burstable instance, like a B1ms.
This should give you enough resources to run SFTP Gateway's Java backend and LDAP database.
When using SFTP Gateway in production, it's easy to forget to change the VM size to something more robust like a D series, because it's really not necessary to do so at first (at least not for those first few pilot users).
But as you add more SFTP users, and as these users start uploading files, an undersized VM size will start to suffer.
The B series VMs are burstable, meaning that you get decent performance as long as you only draw on that performance occasionally.
Once you start adding more SFTP users, you get more folder scans; these run incessantly and will quickly chew through your burst credits.
The solution is to move to a proper production grade VM size.
Taking advantage of a larger VM size
If your SFTP users are uploading a lot of files (hundreds of thousands), you can upgrade your VM to a much larger size (e.g. a D series).
But you may notice that most of the CPU and memory is idle, and the number of files moving to Blob does not clear out any faster than before.
The solution is to edit the file:
/opt/sftpgw/sftpgateway.properties
And change the line:
concurrent_cloudtransfers=10
To something higher:
concurrent_cloudtransfers=15
To apply your change, run the command:
/usr/local/bin/apply-sftpgw-props
By default, SFTP Gateway runs 10 concurrent threads that move files to Blob.
If you want to move more files to Blob at a time, you can increase this number.
You need to be careful, though: set this number too high, and the server will run out of memory. So some performance tuning is necessary. Gradually increase the number beyond 10 (while the server is under load) while monitoring your baseline memory.
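A simple way to keep an eye on memory while you tune (a sketch using the stock watch and free commands):
# Refresh memory usage every 5 seconds while the server is under load
watch -n 5 free -m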