
A Quick and Dirty Performance Analysis of sshfs


While looking for a lightweight method to share file systems between multiple VMs within a host running Qubes OS, I was told that sshfs does not have any noteworthy overhead.

Well, this could be tested in a quick and dirty way.

Setup

I installed sshfs via sudo apt install sshfs on my laptop, a Lenovo X260 (Intel i5) with an SSD. Although my main target platform uses a large hard disk instead of an SSD, I wanted to test the sshfs overhead without waiting for my hard disk. This way, any overhead of sshfs should be easier to see.

I created a share of my user's home with:

 root@lX260 ~ # sshfs user@localhost:/home/user /media/user/test	  
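
For completeness: such a FUSE mount can be removed again with fusermount (a plain umount also works when running as root), using the mount point from above:

 fusermount -u /media/user/test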

Indexing Files

My first use case is indexing file names. I simulated it with a run of du:

 du > output.txt	  
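
Run-times like the ones below can be captured with the shell's time builtin, once inside the sshfs mount and once inside the directly accessed home directory, roughly like this:

 cd /media/user/test && time du > output.txt    # via sshfs
 cd /home/user && time du > output.txt          # direct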

These are the measured run-times in the order I ran them:

  1. sshfs: 14.490s
  2. direct (no sshfs): 0.583s
  3. sshfs: 8.949s
  4. direct: 0.410s

The indexing gets faster with each succeeding run since the data gets loaded into the caches. But still: comparing the last runs, sshfs was about 21 times slower than accessing the data directly.
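
(If you want cold-cache numbers instead, Linux lets you drop the page cache, dentries and inodes between runs; this was not done for the runs above, which is why they sped up.)

 sync && echo 3 > /proc/sys/vm/drop_caches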

Performance Tool

There are many performance benchmark tools out there. I used bonnie++ for the next test run. It takes care not to measure cached data when benchmarking storage performance.

The first run was done on the sshfs mount using the following command:

 bonnie++ -d /media/user/test/tmp/ -u root:root	  

It fully loaded one CPU core of my Intel i5 and therefore had a clearly visible impact on the system load. The run took eight minutes and 49 seconds to finish.

 Version  1.97       ------Sequential Output------ --Sequential Input- --Random-
 Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
 Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
 lX260        14840M    84  39 131797  17 57811   9  4560  99 156188   4  8241  44
 Latency               175ms     926ms    1431ms    4053µs     158ms   12216µs
 Version  1.97       ------Sequential Create------ --------Random Create--------
 lX260               -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                  16  4271   7 +++++ +++ 12001  10  4155   7 +++++ +++ 11678   9
 Latency              2082µs     143µs    1621µs    4488µs     261µs    2411µs	  

The second run accessed the file system directly:

 bonnie++ -d /home/user/tmp/ -u root:root	  

It caused hardly any noticeable CPU load and took three minutes and five seconds:

 Version  1.97       ------Sequential Output------ --Sequential Input- --Random-
 Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
 Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
 lX260        14840M  1134  99 230670  27 175377  12  4221  98 613173  15 +++++ +++
 Latency             10129µs    2181ms     498ms    3106µs    4580µs    2736µs
 Version  1.97       ------Sequential Create------ --------Random Create--------
 lX260               -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                  16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
 Latency               648µs    2149µs     526µs     741µs      22µs      68µs	  

As you can see in the result data, sshfs delivers considerably less throughput here as well.

Summary

From my quick and dirty tests I have to report that accessing a local file system via sshfs does come with a noticeable overhead, both in CPU usage and in raw performance.

That is not too bad, I would say, when you consider everything that goes on in the background: you get a very easy to use remote mount with encrypted data transfer in between. It is clearly a nice piece of software that has its use cases.

However, I could falsify the assumption that I would not notice the difference between accessing my ext4 file system directly and using it via sshfs.

