Poor Block Storage performance

Hello,
we are facing slow pipeline runs with a Block Storage Volume mounted at /var/lib/docker on a GitLab runner VM.

We ran a few tests to collect some data.
These are the commands scheduled in crontab:

50 21 29 1 * /root/w_volume.sh
50 21 29 1 * /root/w_iostat.sh
50 22 29 1 * /root/r_volume.sh
50 22 29 1 * /root/r_iostat.sh

And these are the related scripts:

  • /root/w_volume.sh
#!/bin/sh
cd /var/lib/docker/test
dd if=/dev/zero of=blockstorage.test bs=512b count=32000000
  • /root/r_volume.sh
#!/bin/sh
cd /var/lib/docker/test
dd if=blockstorage.test of=/dev/null bs=512b
  • /root/w_iostat.sh
#!/bin/sh
cd /var/lib/docker/test
iostat 1 10 > output_w_iostat.txt

and this is the output

Linux 4.15.0-50-generic (localhost)     01/29/2020  _x86_64_    (4 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.43    0.00    0.53    1.80    0.15   96.09

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
loop0             0.00         0.00         0.00          1          0
sda              22.36        68.64       188.34  262544393  720379728
sdc              34.48       228.36       766.25  873441577 2930745648
sdb               0.08         0.19         0.31     708648    1194520

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.25    0.00   35.59    4.26    0.50   59.40

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
loop0             0.00         0.00         0.00          0          0
sda               2.00        12.00         0.00         12          0
sdc              85.00         4.00     75780.00          4      75780
sdb               0.00         0.00         0.00          0          0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.26    0.00   12.34   33.07    0.26   54.07

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
loop0             0.00         0.00         0.00          0          0
sda               0.00         0.00         0.00          0          0
sdc             528.00        12.00    419076.00         12     419076
sdb               0.00         0.00         0.00          0          0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00   16.84   35.00    0.26   47.89

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
loop0             0.00         0.00         0.00          0          0
sda               0.00         0.00         0.00          0          0
sdc             501.00         8.00    320256.00          8     320256
sdb               2.00         0.00        16.00          0         16

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00   13.98   39.84    0.00   46.17

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
loop0             0.00         0.00         0.00          0          0
sda               0.00         0.00         0.00          0          0
sdc             501.00        16.00    307180.00         16     307180
sdb              11.00         0.00        76.00          0         76

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.26    0.00    9.50   42.48    0.26   47.49

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
loop0             0.00         0.00         0.00          0          0
sda               0.00         0.00         0.00          0          0
sdc             612.00         8.00    314932.00          8     314932
sdb               4.00         0.00        24.00          0         24

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.26    0.00   16.62   53.25    0.00   29.87

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
loop0             0.00         0.00         0.00          0          0
sda              89.00         0.00       464.00          0        464
sdc             508.00        12.00    363092.00         12     363092
sdb               7.00         0.00        60.00          0         60

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00   17.15   36.94    0.53   45.38

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
loop0             0.00         0.00         0.00          0          0
sda              14.00        76.00         0.00         76          0
sdc             602.00        12.00    403012.00         12     403012
sdb               5.00         0.00        48.00          0         48

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.52    0.00   15.62   38.54    0.26   45.05

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
loop0             0.00         0.00         0.00          0          0
sda               0.00         0.00         0.00          0          0
sdc             501.00         8.00    332052.00          8     332052
sdb               6.00         0.00        44.00          0         44

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00   19.00   38.52    0.53   41.95

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
loop0             0.00         0.00         0.00          0          0
sda               3.00        12.00         0.00         12          0
sdc             607.00        12.00    343288.00         12     343288
sdb               4.00         0.00        16.00          0         16
  • /root/r_iostat.sh
#!/bin/sh
cd /var/lib/docker/test
iostat 1 10 > output_r_iostat.txt

and this is the output

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.43    0.00    0.53    1.81    0.15   96.08

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
loop0             0.00         0.00         0.00          1          0
sda              22.34        68.63       188.19  262762005  720465908
sdc              34.55       228.15       819.21  873450313 3136267036
sdb               0.08         0.19         0.32     709772    1219296

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.50    0.00   25.81    0.00    0.25   73.43

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
loop0             0.00         0.00         0.00          0          0
sda               1.00         8.00         0.00          8          0
sdc               0.00         0.00         0.00          0          0
sdb               0.00         0.00         0.00          0          0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00   25.00    0.00    0.00   75.00

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
loop0             0.00         0.00         0.00          0          0
sda               0.00         0.00         0.00          0          0
sdc               0.00         0.00         0.00          0          0
sdb               0.00         0.00         0.00          0          0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.50    0.00    6.47   18.41    0.25   74.38

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
loop0             0.00         0.00         0.00          0          0
sda               0.00         0.00         0.00          0          0
sdc              60.00       240.00         0.00        240          0
sdb               0.00         0.00         0.00          0          0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.00   24.87    0.00   75.13

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
loop0             0.00         0.00         0.00          0          0
sda              70.00         0.00       380.00          0        380
sdc              35.00       140.00         0.00        140          0
sdb               0.00         0.00         0.00          0          0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.00   24.81    0.00   75.19

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
loop0             0.00         0.00         0.00          0          0
sda               0.00         0.00         0.00          0          0
sdc              81.00       324.00         0.00        324          0
sdb               0.00         0.00         0.00          0          0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.75   24.31    0.00   74.94

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
loop0             0.00         0.00         0.00          0          0
sda               0.00         0.00         0.00          0          0
sdc             101.00       364.00      4184.00        364       4184
sdb               0.00         0.00         0.00          0          0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.00   24.81    0.00   75.19

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
loop0             0.00         0.00         0.00          0          0
sda               0.00         0.00         0.00          0          0
sdc              44.00       176.00         0.00        176          0
sdb               0.00         0.00         0.00          0          0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.00   24.81    0.00   75.19

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
loop0             0.00         0.00         0.00          0          0
sda               0.00         0.00         0.00          0          0
sdc             319.00       112.00      4044.00        112       4044
sdb               0.00         0.00         0.00          0          0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.25   24.81    0.00   74.94

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
loop0             0.00         0.00         0.00          0          0
sda               0.00         0.00         0.00          0          0
sdc              51.00       204.00         0.00        204          0
sdb               0.00         0.00         0.00          0          0

Are these tests enough to help troubleshoot this issue?
Are there any other tests we can run to give a clearer picture?
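
One variant we are considering (but have not run yet) is repeating the tests with direct I/O, so the page cache does not inflate the numbers. Note that dd interprets the b suffix as 512-byte blocks, so bs=512b above actually means 256 KiB; the sketch below uses an explicit block size instead, and the script names are only examples:

  • /root/w_volume_direct.sh (example name)
#!/bin/sh
# sketch: write 4 GiB in 1 MiB blocks, bypassing the page cache
cd /var/lib/docker/test
dd if=/dev/zero of=blockstorage.test bs=1M count=4096 oflag=direct
  • /root/r_volume_direct.sh (example name)
#!/bin/sh
# sketch: read the same file back into /dev/null, bypassing the page cache
cd /var/lib/docker/test
dd if=blockstorage.test of=/dev/null bs=1M iflag=direct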

Thank you

7 Replies

Hey there,

I took a look at the data you provided. Thanks for reaching out with this.

My question is, what are your speed requirements? If speed is your main concern, your local Linode disk is going to be faster than a Block Storage Volume.

What was the output of the dd test you ran?

You can also run a fio test. You may need to install fio first; an example of a command you can run with it follows:

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=random_read_write.fio --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75

fio is a great tool for benchmarking disk I/O; it aims to simulate realistic workloads and is highly customizable. We're happy to take a look at the results you get from that.
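
If your pipeline workload is more sequential than random (large image layers, archives, backups), a couple of large-block runs would also show the throughput ceiling. This is only a sketch along the same lines, assuming fio is installed and run from a directory on the Volume; seq_test.fio is just an example file name:

fio --name=seqread --ioengine=libaio --direct=1 --bs=1M --iodepth=16 --size=4G --rw=read --filename=seq_test.fio

fio --name=seqwrite --ioengine=libaio --direct=1 --bs=1M --iodepth=16 --size=4G --rw=write --filename=seq_test.fio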

419 MB/s (which I saw in your iostat output) is actually really good. Would you be able to tell us more about what speeds you're looking for?

Ultimately, we want to help you determine if a Block Storage Volume is the right fit for you in this situation, or if your local disk would suit you better.

Same here. It's about 9x-45x slower in my tests.
I am considering switching back to the local Linode disk.

The Block Storage is extremely slow for my Linode too - read (1.7 MB/s) and write (0.6 MB/s) as tested with fio, as requested by the support team. See the results below.

It is more like the speed of an old internet connection than a storage device.

I've got a ticket in and hoping for a fix.

James

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=random_read_write.fio --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.1
Starting 1 process
test: Laying out IO file (1 file / 4096MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=1517KiB/s,w=476KiB/s][r=379,w=119 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=10156: Tue Oct 20 19:43:34 2020
read: IOPS=409, BW=1638KiB/s (1678kB/s)(3070MiB/1918671msec)
bw ( KiB/s): min= 1272, max= 5016, per=100.00%, avg=1638.20, stdev=637.83, samples=3837
iops : min= 318, max= 1254, avg=409.53, stdev=159.46, samples=3837
write: IOPS=136, BW=548KiB/s (561kB/s)(1026MiB/1918671msec)
bw ( KiB/s): min= 304, max= 2040, per=100.00%, avg=547.50, stdev=219.35, samples=3837
iops : min= 76, max= 510, avg=136.83, stdev=54.84, samples=3837
cpu : usr=0.81%, sys=2.86%, ctx=987336, majf=0, minf=8
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwt: total=785920,262656,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
READ: bw=1638KiB/s (1678kB/s), 1638KiB/s-1638KiB/s (1678kB/s-1678kB/s), io=3070MiB (3219MB), run=1918671-1918671msec
WRITE: bw=548KiB/s (561kB/s), 548KiB/s-548KiB/s (561kB/s-561kB/s), io=1026MiB (1076MB), run=1918671-1918671msec

Disk stats (read/write):
sdc: ios=785844/263407, merge=0/379, ticks=121642037/1758367, in_queue=124075869, util=100.00%

Same here. Super slow. It's useless.

This is what I get for rsync -a --info=progress2 from block volume to local volume:

286,505,913,367 99% 19.43MB/s 3:54:20
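
For context, those numbers are self-consistent: 286,505,913,367 bytes over 3:54:20 (14,060 seconds) is about 20.4 MB/s, so the 19.43 figure rsync prints is MiB/s.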

Yeah, it's really slow. I just started using it, but it has some serious performance penalties. It's advertised as "high-speed volumes", yet it easily drops down to 30 MB/s at times. It does, IIRC, sometimes burst to somewhere over 250 MB/s.

I know it's all relative, but I wouldn't really call anything below 100 MB/s any kind of high speed. High Speed HDMI is 10.2 Gbps, High Speed USB (from 2001, 20 years ago) is 480 Mbit/s, and these "high-speed volumes" seem to drop below that as well.

I just migrated a GitLab installation onto Linode and put GitLab's data (repositories and uploads, not the database or Redis) onto a Linode Block Storage device.

I noticed the not-so-great performance when testing backup creation.

It's not a very large installation: below 50 GB of data, resulting in a 16 GB tar file for a full backup.

Creating the backup (45 minutes) and pushing it to Linode Object Storage (20 minutes) are visualized in the screenshot linked below.

This is far from a well-defined test (some of the steps are partly CPU-bound as well), but I captured a screenshot of some of the disk I/O metrics while doing the backup: https://ibb.co/314x3KW . The largest performance hit, unsurprisingly, is when reading a lot of files, which may be why rsync is so slow (I've gotten similar results to other people on two different Linodes recently).

From my past tests, Block Storage will be slower because it is configured with 3x replicated data. If you want faster storage, local disk is the only way.
