SLOB on Violin 3000 Series with Infiniband

Last week I invited Martin Bach to the Violin Memory EMEA headquarters to do some testing on both our 3000 and 6000 series arrays. Martin was very interested in seeing how the Violin flash memory arrays performed, having already had some experience with a PCIe-based flash card vendor.

There are a few problems with PCIe flash cards, but perhaps the two most critical are that a) the storage cannot be shared, meaning it isn't highly available; and b) replacing a PCIe card requires the server to be taken offline.

Violin's approach is fundamentally different because the flash memory is contained in a separate unit which can then be presented over a number of connection types: PCIe direct-attached, Fibre Channel, iSCSI… and now Infiniband. All of those, with the exception of PCIe, allow the storage to be shared and highly available. So why do we still provide PCIe?

There are two answers. The first and simplest is flexibility: the design of the arrays makes it easy to provide multiple connectivity options, so why not? The second, and more important in terms of performance, is latency. The overhead of adding Fibre Channel in front of a flash memory array is only on the order of one or two hundred microseconds, but when you consider that the 6216 SLC array has read and write latencies of 90 and 25 microseconds respectively, that is quite an additional overhead.
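
To put those numbers in perspective, here is a rough back-of-the-envelope calculation in Python (a sketch only; the 150 microsecond figure is simply an assumed mid-point of the one-to-two-hundred microsecond overhead range mentioned above):

    # Effect of transport overhead on the 6216 SLC array's native latencies
    NATIVE_READ_US = 90    # 6216 SLC read latency (microseconds)
    NATIVE_WRITE_US = 25   # 6216 SLC write latency (microseconds)
    FC_OVERHEAD_US = 150   # assumed mid-point of the 100-200 us Fibre Channel overhead

    for op, native in (("read", NATIVE_READ_US), ("write", NATIVE_WRITE_US)):
        with_fc = native + FC_OVERHEAD_US
        print(f"{op}: {native} us native -> {with_fc} us over FC ({with_fc / native:.1f}x)")

On those assumed figures the read latency more than doubles and the write latency grows roughly sevenfold once the transport overhead is added, which is why shaving transport latency matters so much for flash.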

The new and exciting addition to these options is therefore Infiniband, which allows for extremely low latencies while avoiding the pitfalls of PCIe around sharing and high availability.

To demonstrate the latency figures achievable through a 3205 SLC array connected via Infiniband, Martin and I ran a series of SLOB physical I/O tests and monitored the latency. The tests consisted of gradually ramping up the number of readers to see how the latency fared as the number of IOPS increased; the number of writers was always kept at zero. As usual, the database block size was 8k. Here are the results:

Filename       Event                           Waits  Time (s)  Latency (µs)        IOPS
-------------  -----------------------  ------------  --------  ------------  ----------
awr_0_1.txt    db file sequential read         9,999         1           100     2,063.8
awr_0_4.txt    db file sequential read        29,992         5           166     5,998.8
awr_0_8.txt    db file sequential read        39,965         6           150     8,285.5
awr_0_16.txt   db file sequential read        79,958        15           187    13,897.8
awr_0_32.txt   db file sequential read       159,914        43           269    18,133.9
awr_0_64.txt   db file sequential read    21,595,919     6,035           280   115,461.1
awr_0_128.txt  db file sequential read    99,762,808    69,007           691   124,907.4
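
For clarity, the Latency and IOPS columns are derived directly from the AWR wait-event figures. A minimal Python sketch of that calculation follows; the 187-second elapsed time in the example is back-calculated from the table rather than taken from the actual AWR report, so treat it as illustrative:

    def slob_read_stats(waits, wait_time_s, elapsed_s):
        """Average 'db file sequential read' latency (us) and IOPS
        from an AWR snapshot's wait count, total wait time and elapsed time."""
        latency_us = wait_time_s / waits * 1_000_000
        iops = waits / elapsed_s
        return latency_us, iops

    # Example: the 64-reader run (awr_0_64.txt)
    latency_us, iops = slob_read_stats(21_595_919, 6_035, 187)
    print(f"{latency_us:.0f} us average latency, {iops:,.0f} IOPS")

That prints roughly 279 µs and 115,000 IOPS, in line with the table above (the small differences come from rounding the elapsed time).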

The interesting thing to note is how the latency scales as the IOPS increase. The tests were performed on a two-socket, 8-core, 16-thread (2s8c16t) Supermicro server with 2x QDR Infiniband connections via a switch to the array. The Supermicro starts having trouble driving the I/O once we get beyond 32 readers, and by the time we get to 128 the load average on the machine is so high that even logging on is hard work. I guess it's time to ask for a bigger server in the lab…
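
One way to see that the server rather than the array is the limiting factor is Little's Law: the average number of reads outstanding against the array is simply IOPS multiplied by latency. A quick sketch against the figures above (values copied straight from the table):

    # Little's Law: average outstanding I/Os = IOPS * latency (in seconds)
    runs = [
        (1, 2_063.8, 100), (4, 5_998.8, 166), (8, 8_285.5, 150),
        (16, 13_897.8, 187), (32, 18_133.9, 269),
        (64, 115_461.1, 280), (128, 124_907.4, 691),
    ]
    for readers, iops, latency_us in runs:
        in_flight = iops * latency_us / 1_000_000
        print(f"{readers:>3} readers: ~{in_flight:5.1f} reads in flight")

At 128 readers only around 86 reads are in flight on average, well short of the 128 sessions, which is consistent with the readers queuing for CPU on the host rather than waiting on the array.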

2 Responses to SLOB on Violin 3000 Series with Infiniband

  1. Alex says:

    Hello,
Very interesting, I was waiting for this test 🙂 Curious how this test compares to the same test done with PCIe connectivity against the same array? I think it will be interesting to know what the latency is with PCIe vs Infiniband. Thanks

    Regards,
    Alex
