Oracle SLOB: Sustained Throughput Test

One of the useful tests that can be performed with the Oracle SLOB toolkit is the sustained throughput test. In this test, a large transactional workload is driven through the database over a prolonged period to measure the ability of the storage system to deliver a sustained and predictable response. It is a particularly relevant test for flash systems because of their susceptibility to hitting the “write cliff” when subjected to long periods of write activity.

On this page I will describe a typical test that I use. All of the criteria can be changed should you so desire, but remember to test like-for-like when comparing different systems. Remember also – and this is critical – that you are testing your complete database infrastructure, not just the storage. If you want to compare different storage products, you must do so using identical servers with identical CPU, memory and software configurations.

Too many times I have seen people test modern storage using an old database server from their dev/test estate and then wonder why they cannot reach the claimed datasheet performance. Oracle is a piece of software that is highly CPU-intensive, even when performing I/O. The reason for this is that Oracle has a great deal of code associated with the sharing of resources and the methods of mutual exclusion required to allow such sharing (spinlocks, mutexes, etc.). Although SLOB aims to minimise this overhead by avoiding contention on shared resources, if you use an RDBMS to manage data, the behaviour will always differ from simply issuing basic read/write I/O calls and then discarding the data.

Test Criteria

Clearly the Sustained Throughput Test needs to run for long enough to justify the adjective “sustained”, so I use a run time of 8 hours. If you can cope with longer, why not leave it running for 24 hours? Maybe even a whole weekend? You will probably find that the limiting factor is the size of the output file containing the iostat information – and your ability to process it.
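
To capture that iostat information for the whole run, I start the collection in the background before kicking off the workload. A minimal sketch, assuming Linux with the sysstat version of iostat and a 5-second sample interval (the interval and log file name are my own choices, not SLOB requirements):

```shell
#!/bin/sh
# Assumptions: Linux iostat (sysstat package), 5-second sampling.
RUN_SECS=28800                      # 8 hours, matching RUN_TIME in slob.conf
INTERVAL=5                          # seconds between samples
SAMPLES=$(( RUN_SECS / INTERVAL ))  # number of samples covering the run
echo "collecting $SAMPLES samples at ${INTERVAL}s intervals"
# Start collection in the background, then launch the SLOB run:
#   iostat -xm "$INTERVAL" "$SAMPLES" > iostat_sustained.log &
```

At 5-second granularity that is 5,760 samples of extended device statistics, which is where the output-file size warning above comes from.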

Here are the other test criteria I typically use:

  • 8 hour test
  • 96 sessions
  • 25% updates
  • Plenty of capacity (I’m using 2TB)
  • 1100GB tablespace for SLOB data
  • 1GB database buffer cache

For SLOB itself I’m using the following parameters in the slob.conf file:

UPDATE_PCT=25
RUN_TIME=28800
WORK_LOOP=0
SCALE=1398101
WORK_UNIT=3
REDO_STRESS=LITE
LOAD_PARALLEL_DEGREE=16
SHARED_DATA_MODULUS=0
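
As a sanity check, a quick back-of-envelope calculation shows how the SCALE value relates to the 1100GB tablespace. This sketch assumes SLOB's usual one-row-per-8KB-block data layout and one schema per session:

```shell
#!/bin/sh
# Rough sizing check - assumptions: one SLOB schema per session and
# roughly one row per 8KB block, so each schema occupies ~SCALE blocks.
SCALE=1398101        # from slob.conf
BLOCK_SIZE=8192      # bytes, matching db_block_size
SESSIONS=96
PER_SCHEMA_MB=$(( SCALE * BLOCK_SIZE / 1024 / 1024 ))  # ~10922 MB per schema
TOTAL_GB=$(( PER_SCHEMA_MB * SESSIONS / 1024 ))        # ~1023 GB in total
echo "per schema: ${PER_SCHEMA_MB}MB, total: ~${TOTAL_GB}GB"
```

So 96 schemas at roughly 10.7GB each come to a little over 1TB, which is why the tablespace is sized at 1100GB: the data plus a small amount of headroom.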

And for the database parameter file I will use the following:

*._db_block_prefetch_limit=0
*._db_block_prefetch_quota=0
*._db_file_noncontig_mblock_read_count=0
*._disk_sector_size_override=TRUE
*.audit_trail='none'
*.compatible='11.2.0.4.0'
*.control_files='+DISKGROUP/path/to/your/controlfile' # Change this!
*.db_block_size=8192
*.db_cache_size=1G
*.db_create_file_dest='+DATA'
*.db_name='orcl'
*.diagnostic_dest='/u01/app/oracle'
*.log_buffer=134217728
*.pga_aggregate_target=10G
*.processes=1024
*.remote_login_passwordfile='EXCLUSIVE'
*.resource_manager_plan=''
*.shared_pool_size=4G
*.undo_tablespace='UNDOTBS1'
*.use_large_pages='ONLY'
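
Once the run completes, the iostat log needs summarising to see whether throughput really was sustained, or whether it fell off a write cliff partway through. A minimal sketch, assuming you have already extracted the write-MB/s column for your data device into a file – the file name and sample values below are purely illustrative:

```shell
#!/bin/sh
# Illustrative only: fake per-sample write-MB/s figures standing in for a
# column extracted from the real iostat log.
printf '%s\n' 510 505 498 512 300 > wMBps.txt
# Summarise min/avg/max; a min far below the avg suggests a write cliff.
awk 'NR==1{min=max=$1} {sum+=$1; if($1<min)min=$1; if($1>max)max=$1}
     END{printf "min=%d avg=%d max=%d\n", min, sum/NR, max}' wMBps.txt
```

For the illustrative values above this prints `min=300 avg=465 max=512` – that one 300MB/s sample dragging the minimum well below the average is exactly the kind of dip you are looking for across the full 8-hour log.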
