SLOB Sustained Throughput Test: Interpreting SLOB Results
This page is part of a process for running a sustained throughput test using Oracle SLOB. If you stumbled across this by accident, you can start at the beginning here.
This is the final stage in the process of running a sustained throughput test. Having run the test successfully to completion, you now have a load of raw data in the SLOB directory… but what can you do with it?
The choices are endless, but here’s what I do. Let’s take a look at the *.out files containing output from operating system monitoring commands:
[oracle@server4 SLOB]$ ls -l *.out
-rw-r--r-- 1 oracle oinstall 784634 Jul 17 04:30 iostat.out
-rw-r--r-- 1 oracle oinstall  78775 Jul 17 04:30 mpstat.out
-rw-r--r-- 1 oracle oinstall      6 Jul 17 04:30 tm.out
-rw-r--r-- 1 oracle oinstall   2766 Jul 17 04:30 vmstat.out
The iostat output is especially useful as it shows me Read Throughput (MB/sec), Write Throughput (MB/sec) and information about queue lengths. For example:
[oracle@server4 SLOB]$ head -10 iostat.out
Linux 3.8.13-26.2.3.el6uek.x86_64 (server4)     16/07/14     _x86_64_     (32 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.58    0.00    0.32    2.66    0.00   95.45

Device:  rrqm/s  wrqm/s     r/s     w/s   rMB/s   wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sda        0.00    0.00    1.45    2.18    0.04    0.01    30.90     0.00    1.13   0.49   0.18
sdf        0.00    0.00    3.54    2.27    0.46    0.47   327.37     0.05    7.89   2.57   1.49
sdc        0.00    0.00    3.53    2.29    0.47    0.47   331.77     0.05    8.24   2.63   1.53
sde        0.00    0.00    3.70    2.29    0.46    0.47   317.78     0.05    8.02   2.56   1.54
I have truncated the output after four rows of device information (sda, sdf, sdc and sde) because on my system there are 272 devices for every single iostat output. SLOB calls iostat with the options -xm 3 which means a full listing of those 272 devices will be written every three seconds. That’s 9,600 sets of information for each device – a lot of text!
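If you want to sanity-check those numbers against your own output, a couple of quick greps will do it. A minimal sketch (the device name sda is simply taken from the sample output above):

# Each iostat report begins with an "avg-cpu:" header, and each device
# appears exactly once per report, so these two counts should roughly agree.
grep -c 'avg-cpu' iostat.out    # number of 3-second samples written
grep -c '^sda '   iostat.out    # number of reports that include device sda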
My All Flash storage array is connected via multiple paths using Linux multipathing, so each path shows up via UDEV as a separate SCSI device with its own name (sde, sdf and so on). I don't want this per-path information; I just want the information for the multipath devices (with names like /dev/dm-*):
[oracle@server4 SLOB]$ head -278 iostat.out | grep dm-
dm-0      0.00    0.78    1.39    1.97     0.04     0.01    33.18     0.00    1.22   0.53   0.18
dm-1      0.00    0.00    0.06    0.00     0.00     0.00     4.36     0.00    0.49   0.29   0.00
dm-2      0.00    0.00    0.05    0.00     0.00     0.00     5.99     0.00    0.50   0.40   0.00
dm-3      0.00    0.00    1.32    2.75     0.04     0.01    27.24     0.00    1.10   0.44   0.18
dm-4      0.00    0.00    0.46    0.55     0.01     0.00    31.46     0.00    1.91   0.61   0.06
dm-5      0.00    0.00    0.02    0.00     0.00     0.00     7.98     0.00    4.08   2.44   0.00
dm-6      0.00    0.00  112.29   73.65    14.72    15.01   327.45     1.53    8.25   0.44   8.14
dm-7      0.00    0.00  112.61   73.44    15.04    15.18   332.66     1.59    8.53   0.44   8.12
dm-8      0.00    0.00  108.59   77.09    14.97    15.19   332.70     1.56    8.41   0.44   8.18
dm-9      0.00    0.00  118.30   73.13    14.77    15.01   318.53     1.57    8.20   0.42   8.13
dm-10     0.00    0.00  112.75   72.68    14.73    15.01   328.45     1.53    8.27   0.44   8.15
dm-11     0.00    0.00  110.99   77.08    15.01    15.19   328.87     1.58    8.38   0.44   8.21
dm-12     0.00    0.00  109.98   76.22    14.70    15.04   327.11     1.54    8.25   0.44   8.20
dm-13     0.00    0.00  109.85   77.00    15.00    15.20   330.98     1.58    8.45   0.44   8.21
dm-14     0.00    0.00    0.85    2.00     0.03     0.01    27.87     0.00    0.87   0.42   0.12
This is a bit more like it, but of course not every one of those devices is my database storage. To know which devices I need, I can check in /dev/mapper (because when I created them, I named them with nice, friendly names by editing the multipath.conf file):
[oracle@server4 SLOB]$ ls -l /dev/mapper
total 0
crw-rw---- 1 root root 10, 236 Jul 16 14:28 control
lrwxrwxrwx 1 root root 7 Jul 16 14:28 mpatha -> ../dm-0
lrwxrwxrwx 1 root root 7 Jul 16 14:28 mpathap1 -> ../dm-1
lrwxrwxrwx 1 root root 7 Jul 16 14:28 mpathap2 -> ../dm-2
lrwxrwxrwx 1 root root 7 Jul 16 14:28 mpathap3 -> ../dm-3
lrwxrwxrwx 1 root root 7 Jul 16 14:32 slob1 -> ../dm-6
lrwxrwxrwx 1 root root 7 Jul 16 14:32 slob2 -> ../dm-7
lrwxrwxrwx 1 root root 7 Jul 16 14:32 slob3 -> ../dm-8
lrwxrwxrwx 1 root root 7 Jul 16 14:32 slob4 -> ../dm-9
lrwxrwxrwx 1 root root 8 Jul 16 14:29 slob5 -> ../dm-10
lrwxrwxrwx 1 root root 8 Jul 16 14:29 slob6 -> ../dm-11
lrwxrwxrwx 1 root root 8 Jul 16 14:29 slob7 -> ../dm-12
lrwxrwxrwx 1 root root 8 Jul 16 14:29 slob8 -> ../dm-13
lrwxrwxrwx 1 root root 8 Jul 16 14:28 vg_halfserver4-lv_home -> ../dm-14
lrwxrwxrwx 1 root root 7 Jul 16 14:28 vg_halfserver4-lv_root -> ../dm-4
lrwxrwxrwx 1 root root 7 Jul 16 14:28 vg_halfserver4-lv_swap -> ../dm-5
So the devices I’m interested in are dm-6 through to dm-13. The other dm-* devices correspond to things like the root filesystem, swap space and so on, so they are of no use to me. I will have to grep my specific devices out of the iostat.out file, so I’m going to create a small text file with just the names I want (on separate lines). I’ve called the file dm-grep:
[oracle@server4 SLOB]$ cat dm-grep
dm-6
dm-7
dm-8
dm-9
dm-10
dm-11
dm-12
dm-13
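If you don't fancy typing the device names out by hand, you could build this file directly from the /dev/mapper symlinks instead. A minimal sketch, assuming your database volumes were given friendly names starting with "slob" as above:

# Resolve each friendly multipath name back to its underlying dm-N device
# and write the list to dm-grep (assumes the slob* naming shown above).
for dev in /dev/mapper/slob*; do
    basename "$(readlink -f "$dev")"
done > dm-grep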
Finally, I’m going to grep all of the matching lines out of the iostat.out file and print columns 1 (the device name), 6 (Read Throughput) and 7 (Write Throughput). I’m going to comma-delimit them so I can read them into a spreadsheet tool such as Excel:
[oracle@server4 SLOB]$ echo "Device,ReadThroughput(MB/sec),WriteThroughput(MB/sec)" > test.csv [oracle@server4 SLOB]$ egrep -f dm-grep iostat.out | awk '{print $1","$6","$7}' >> test.csv [oracle@server4 SLOB]$ head -5 test.csv Device,ReadThroughput(MB/sec),WriteThroughput(MB/sec) dm-6,14.72,15.01 dm-7,15.04,15.18 dm-8,14.97,15.19 dm-9,14.77,15.01
That’s all I need. You can obviously choose which columns you want, but for my requirement I only need the read and write throughput values.
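For example, if you also wanted to keep an eye on latency and device utilisation, the same awk pipeline can pull out the await and %util columns as well. A sketch, assuming the standard iostat -xm column layout shown earlier (test_extended.csv is just an example filename):

# Columns in iostat -xm output: $1=Device, $6=rMB/s, $7=wMB/s, $10=await, $12=%util
echo "Device,ReadMB/s,WriteMB/s,await(ms),util(%)" > test_extended.csv
egrep -f dm-grep iostat.out | awk '{print $1","$6","$7","$10","$12}' >> test_extended.csv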
Graphing The Results
I’m not going to blog the steps required to create a graph in Excel. However, since you are going to have a column containing multiple different device names, I’d suggest using a pivot table to sum the total read and write throughput for each sample. [Tip: I’ve added an additional column in front of my data in Excel, called “Sample Number”, which uses the formula =INT((ROW()-2)/8) to generate friendly labels for the X axis of my graphs.]
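If you’d rather do the summing before the data ever reaches Excel, the same pivot-table totals can be produced with awk. A sketch, assuming the three-column test.csv built above and eight monitored devices per sample (totals.csv is just an example output name):

# Group every 8 device lines (one iostat sample) and total the read and write MB/s.
# NR>1 skips the CSV header; int((NR-2)/8) mirrors the =INT((ROW()-2)/8) Excel formula.
awk -F, 'NR > 1 { s = int((NR-2)/8); r[s] += $2; w[s] += $3 }
         END    { print "Sample,TotalRead(MB/sec),TotalWrite(MB/sec)"
                  for (i = 0; i <= s; i++) print i "," r[i] "," w[i] }' test.csv > totals.csv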
Ultimately, what you want to achieve is a graph which looks something like this:
What you are looking for is sustained, predictable performance and no sudden drop-off in throughput as the storage system hits a wall. Of course you are also looking for reasonable throughput values, but that is highly dependent on your server infrastructure and storage networking as well as the storage itself…
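If you want to put a number on “sustained and predictable”, comparing the best, worst and average sample totals is a quick way to do it. A sketch, building on the hypothetical totals.csv from the previous snippet:

# Report min / avg / max total read throughput across all samples.
awk -F, 'NR > 1 { t = $2; sum += t; n++
                  if (n == 1 || t < min) min = t
                  if (t > max) max = t }
         END    { printf "read MB/s  min=%.1f  avg=%.1f  max=%.1f\n", min, sum/n, max }' totals.csv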
There are many other things you can do with the results from this test. Not only that, but there are many other variations of this test you can perform with SLOB. Why not, for example, graph the CPU utilisation from the mpstat.out file using a similar method to that above:
[oracle@server4 SLOB]$ echo "Time,User,Sys,IOWait,Idle" > mpstat.csv [oracle@server4 SLOB]$ grep "all" mpstat.out | awk '{print $1","$3","$5","$6","$11}' >> mpstat.csv [oracle@server4 SLOB]$ head -5 mpstat.csv Time,User,Sys,IOWait,Idle 20:20:05,1.14,1.35,0.01,97.50 20:20:08,1.10,1.23,0.00,97.67 20:20:11,0.10,0.11,0.00,99.78 20:20:14,18.86,8.10,22.25,48.55
That’s the great thing about SLOB: it’s a complete testing toolkit for Oracle, and the only limit is your curiosity…