How to check the speed of storage for on demand delivery – Linux

There is always some concern when Wowza is found not to be delivering content at the required capacity. Many factors can contribute to this, but for on demand delivery the main area of focus should be the I/O speed of the storage.

This how-to concentrates on the use of the hdparm tool, although other tools are available.

A basic Linux file system layout, shown using the command df -h:

Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             938M  347M  545M  39% /
tmpfs                 7.8G     0  7.8G   0% /lib/init/rw
udev                  7.3G  212K  7.3G   1% /dev
tmpfs                 7.8G     0  7.8G   0% /dev/shm
/dev/sda5              19G  5.2G   13G  30% /home
/dev/sda8              19G  173M   18G   1% /tmp
/dev/sda6              19G  7.0G   11G  40% /usr
/dev/sda7              19G  528M   17G   3% /var

The hdparm command has several switches, but the simplest ones to use and understand are -t and -T, which perform timed buffered disk reads and timed cached reads respectively.

An example output from the above disk is

hdparm -tT /dev/sda7

/dev/sda7:
 Timing cached reads:   24876 MB in  1.99 seconds = 12519.87 MB/sec
 Timing buffered disk reads: 558 MB in  3.01 seconds = 185.69 MB/sec

This suggests that if reading from cache I would receive roughly (12519 * 8) 100 Gb/s, which is interesting, but note that hdparm's cached read figure measures the Linux buffer cache in RAM rather than the disk's own 64 MB on-board cache, and either way it only applies to data already in cache. The rate we are actually interested in is the buffered disk read, where the disk has to go and find and read the data: in this case roughly (185 * 8) 1.5 Gb/s, which on the face of it seems quite good. This is, however, only a single test while the disk is NOT doing anything else.
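
Because a single hdparm run can be noisy, it is worth repeating the test a few times on an otherwise idle system and comparing the results, for example:

for i in 1 2 3; do hdparm -tT /dev/sda7; done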

As a very quick test I added some load to the disk using the iozone tool (an example command is shown after these results) and then got

hdparm -tT /dev/sda6

/dev/sda6:
 Timing cached reads:   21394 MB in  1.99 seconds = 10758.25 MB/sec
 Timing buffered disk reads: 506 MB in  3.01 seconds = 168.19 MB/sec

You can clearly see that buffered disk reads have dropped by roughly 10% from just one additional workload, so more investigation is required.
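
For reference, the background load was generated with iozone. A minimal sketch of such an invocation, where the file size, record size and test file path are illustrative and should be adjusted for your system:

iozone -i 0 -i 1 -r 4k -s 2g -f /tmp/iozone.tmp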

We can use hdparm to tell us about the device itself with the -I flag:

hdparm -I /dev/sda

/dev/sda:

ATA device, with non-removable media
        Model Number:       ST1000DM003-1CH162
        Serial Number:      ABCDEFGHIJKL
        Firmware Revision:  CC44
        Transport:          Serial, SATA Rev 3.0

If you look up the specifications for this device you will see the average read time is 8.5 ms, so it can support roughly 117 reads per second when doing nothing else. If you are going to use this device for on demand delivery, the absolute best it will be able to support is around 117 connections before the device comes under stress and starts to deliver poor performance to your connected clients. You also need to be aware that if this is also used as a system disk, i.e. it holds the OS, it will not sustain even that figure, because logs from various system processes and from Wowza will also be writing to the disk, and writes take longer, averaging 9.5 ms.
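
That 117 figure is simply the number of average-length reads that fit into one second. As a quick sanity check (shell arithmetic, assuming bc is installed):

echo "scale=1; 1000 / 8.5" | bc
# => 117.6 reads per second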

As a comparison to a hard disk I have also provided output from an SSD, an OCZ OCTANE 256. This is a slightly older device, but the specifications show a 0.06 ms read time and a 0.08 ms write time. These should not vary with load (apparently), so we can also see how that fares.

hdparm  -I /dev/sdb

/dev/sdb:

ATA device, with non-removable media
        Model Number:       OCZ-OCTANE
        Serial Number:      OCZ-ABCDEFGHIJKLMNOP
        Firmware Revision:  1.14

An example output from the above device is

hdparm  -tT /dev/sdb1

/dev/sdb1:
 Timing cached reads:   23982 MB in  1.99 seconds = 12067.85 MB/sec
 Timing buffered disk reads: 794 MB in  3.00 seconds = 264.51 MB/sec

As a very quick test I again added some load to the disk using the iozone tool and then got

hdparm  -tT /dev/sdb1

/dev/sdb1:
 Timing cached reads:   21498 MB in  1.99 seconds = 10810.94 MB/sec
 Timing buffered disk reads: 796 MB in  3.01 seconds = 264.85 MB/sec

As you can see, the buffered disk read rate is essentially the same (the difference is within measurement tolerance rather than an actual change). Compared with the hard disk, a 0.06 ms read time would suggest that your server could support around 16k connections. You should be aware, though, that the device throughput is only about (264 * 8) 2.1 Gb/s, so this becomes the limiting factor.
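
To see why throughput becomes the limit, assume (purely for illustration) that each client stream is encoded at 2 Mb/s:

echo "scale=0; (264 * 8) / 2" | bc
# => 1056 concurrent streams, far below the ~16k that the read time alone suggests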

You can of course RAID multiple devices, hard disks or SSDs, together to get a better rate. You should be aware that for SSDs specifically you do not get a significant performance boost past two or three devices. A good write-up can be found on Tom's Hardware here: http://www.tomshardware.co.uk/ssd-raid-benchmark,review-32689.html
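
As a sketch, a simple two-device RAID 0 array could be created with mdadm as follows. The device names /dev/sdb and /dev/sdc and the mount point /mnt/content are placeholders, and mdadm --create will destroy any existing data on the member devices:

mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/content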

