Oracle Exadata X4 (Part 2): The All Flash Database Machine?

This article looks at the new Oracle Exadata X4-2 Database Machine from Big Red. In part one I looked at the changes made from the X3 model (more stuff) as well as the implications (more license bills). I also covered some of the confusing and bewildering descriptions Oracle has used to describe the flash capacity of the X4. To recap, here are some of the quotes made in various Oracle literature:

Source                                Quote
------------------------------------  ---------------------------------------------------------
Oracle Exadata X4-2 datasheet         "44.8 TB of raw physical flash memory"
Oracle Exadata X4 Press Release       "logical flash cache capacity to 88 TB"
Oracle X3 to X4 Changes slide deck    "flash cache compression expands capacity to 88TB (raw)"
Oracle Exadata X4-2 datasheet         "effective flash capacity of 440 TB"

The source of this confusion appears to be the claim that a new feature called Exadata Smart Flash Cache Compression will allow more data to fit into flash. Noticeably absent from the press release and datasheet is the information that this new feature apparently requires the Advanced Compression license, potentially adding over $1m to the list price of a full rack (see slide 22 of this Oracle presentation).

This second part of the article will look at the implications of these changes, but to make things more interesting there’s one specific change I haven’t mentioned until now. And it’s the change that I think gives the biggest insight into Oracle’s thinking.

The Hybrid Database Machine

Picture courtesy of Dennis van Zuijlekom

Right now, in the storage industry, there is a paradigm shift taking place as primary data moves from rusty old spinning disks to semiconductor-based NAND flash storage. Most storage vendors now offer all-flash arrays as part of their product lineup, although one or two still insist on the hybrid approach where data is located on disk but flash is used as a tiering or caching layer to improve performance.

Oracle, despite being one of the early adopters of flash with its Sun Oracle Database Machine (i.e. the Exadata v2), still uses the hybrid approach in Exadata. Each full rack contains 14 storage cells, with each cell containing 12 rotating magnetic disks as well as four PCIe flash cards (made by LSI and then rebranded as Sun). The disks can be bought in two options: high performance or high capacity (known as HP and HC respectively). It’s fair to say that the majority of customers buy the high performance version (* see comments below) – after all, Exadata is a very expensive solution aimed at solving performance problems, so performance is generally high up on a customer’s list of requirements.
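For reference, here is how those per-cell numbers scale up to the full-rack figures quoted in this article. (The 800GB-per-card value is simply inferred by dividing the quoted raw flash capacity by the number of cards; it is not a figure I am quoting from Oracle here.)

\[
14 \times 4 = 56 \ \text{flash cards}, \quad \frac{44.8\ \text{TB}}{56} = 800\ \text{GB per card}; \qquad 14 \times 12 = 168 \ \text{disks}, \quad 168 \times 1.2\ \text{TB} \approx 200\ \text{TB raw (X4)}
\]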

Upgrading to Slower Performance?

See if you can spot the most important change to be made since the introduction of flash back in the Sun Oracle v2 (second generation) machine:

Product                             Raw Flash   High Performance Disks   HP Disk Capacity
----------------------------------  ----------  -----------------------  ----------------
Sun Oracle Database Machine (v2)    5.3 TB      600GB 15,000 RPM         100 TB
Exadata Database Machine X2-2       5.3 TB      600GB 15,000 RPM         100 TB
Exadata Database Machine X3-2       22.4 TB     600GB 15,000 RPM         100 TB
Exadata Database Machine X4-2       44.8 TB     1.2TB 10,000 RPM         200 TB

Did you notice? In the X4 model storage cells, the HP disks have now doubled in capacity. That's not the important bit, though; it's the sacrifice Oracle had to make to get there: 10,000 RPM disk drives instead of 15,000 RPM. In Exadata X4, the high performance disks are slower than in Exadata X3.

How much slower are we talking? Well, a 15k RPM drive takes 4ms per revolution, giving an average rotational latency of 2ms. A 10k RPM drive takes 6ms per revolution, giving an average rotational latency of 3ms. That's 50% more average rotational latency. Why on Earth would Oracle make that change? If customers wanted more capacity, couldn't they just buy the storage expansion racks?
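For anyone who wants the arithmetic spelled out: average rotational latency is the time taken for half a revolution, so:

\[
t_{\text{avg}} = \frac{1}{2} \cdot \frac{60{,}000\ \text{ms/min}}{\text{RPM}} \quad\Rightarrow\quad t_{15\text{k}} = \frac{30{,}000}{15{,}000} = 2\ \text{ms}, \qquad t_{10\text{k}} = \frac{30{,}000}{10{,}000} = 3\ \text{ms}
\]

And 3ms is 50% more than 2ms, which is where that figure comes from.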

Design Dilemmas

The answer lies in two of Oracle’s fundamental design choices for the Exadata architecture:

  • the reliance on ASM software mirroring (meaning all data is stored either twice or three times), and
  • the use of flash as cache only (meaning all data in flash is eventually destaged to disk) rather than a tier of storage.

Remember that Oracle claims the Exadata Smart Flash Cache can now contain 88TB of data? But if all data on disk must be mirrored, then with ASM “normal redundancy” (i.e. double mirroring) the usable disk capacity with HP disks is just 90TB, according to the datasheet. If you want to perform zero-downtime upgrades then you need “high redundancy” (i.e. triple mirroring) which means even less capacity. What is the point of having less disk capacity than you have flash cache capacity? Clue: there is no point.
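As a rough back-of-envelope check (the exact usable figure depends on how much free space ASM reserves for re-mirroring after a failure, which is presumably why the datasheet lands on 90TB rather than the naive number):

\[
168 \times 1.2\ \text{TB} \approx 201.6\ \text{TB raw} \quad\Rightarrow\quad \frac{201.6}{2} \approx 100\ \text{TB (normal redundancy)}, \qquad \frac{201.6}{3} \approx 67\ \text{TB (high redundancy)}
\]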

Which is where I finally get to my point. Oracle has taken the decision, almost by stealth, to make the Exadata X4 into an all-flash database machine. Except you still have to pay for the disks…

The All Flash Database Machine

Before we go any further, here’s a quote from Oracle’s Vice President of Product Management, Tim Shetler, discussing the increased flash capacity in Exadata X4:

[Image: Tim Shetler quote, "disk is the new tape"]

Yes, that’s right: on Exadata X4, your entire database is now likely to be in flash. Yet in Exadata flash is only ever used as a cache, so the database in question is also going to be located on disk. And because ASM mirroring is required, it will actually be on disk twice – or, if you need zero-downtime upgrades, three times. Three copies on disk and one on flash? That doesn’t seem like the most efficient way to utilise what is, after all, extremely expensive storage.

What about the "inactive, colder data" that remains solely on disk? Well ok… let's think about that for a minute. The flash cache, according to the sources in the first table above, holds between 88TB and 440TB of data – but, since it's a cache, that data must be read from a persistent source somewhere. That source is the disks. If your disks contain "inactive, colder data" which never enters the cache, exactly how is that cache going to be efficiently populated? Keeping inactive data on Exadata's disks is not only financially ruinous, it also undermines the benefit of having such an enlarged flash cache in the first place.

Money Talks

What if Oracle ditched the disks and went for an all-flash architecture, as many storage vendors are now doing? Would that be a win for Oracle and its customers alike?

Whether it would be a win for customers is something that can be debated. What is undeniable though is the commercial problem Oracle would face if it made a technical decision to ditch the disks. Customers buying Oracle Exadata have to pay for Oracle Exadata Storage Software licenses… and guess what the licensing unit is? You license by the disk. Each storage cell has 12 disks and each full rack has 14 cells, meaning a full rack requires 168 storage licenses. These are currently listing at $10,000 per disk, bringing the total list price to $1.68m per rack.

Hmm. Admitting that the disks are no longer necessary could be an expensive problem, couldn’t it?

Strange ASM Behaviour with 4k Devices

This is only a short post to document something I’ve seen and reproduced but still don’t understand. Storage devices generally have a physical sector size of 512 bytes or, more recently, 4k. This is a subject which causes much confusion (partly because some vendors seek to portray whichever sector size they use as “better”). You can read more about the subject here.

On my Oracle Linux 6.3 server running the Unbreakable Enterprise Kernel I have two block devices presented from Violin. These devices have physical block sizes of 4k but logical block sizes of 512 bytes:

[root@server1 ~]# uname -r
2.6.39-200.24.1.el6uek.x86_64

[root@server1 ~]# fdisk -l /dev/mapper/violin_data1 | grep "Sector size"
Sector size (logical/physical): 512 bytes / 4096 bytes

[root@server1 ~]# fdisk -l /dev/mapper/violin_data2 | grep "Sector size"
Sector size (logical/physical): 512 bytes / 4096 bytes
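As a cross-check, the same information can be read directly with blockdev from util-linux; a minimal sketch using the same multipath devices (--getss reports the logical sector size, --getpbsz the physical block size):

# logical sector size first, then physical block size
[root@server1 ~]# blockdev --getss --getpbsz /dev/mapper/violin_data1
512
4096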

Using Oracle Grid Infrastructure 11.2.0.3, I’m now going to create an ASM diskgroup on just one of the devices, using the new clause SECTOR_SIZE=4096:

SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
  2  DISK '/dev/mapper/violin_data1'
  3  ATTRIBUTE
  4       'sector_size'='4096',
  5       'compatible.asm' = '11.2',
  6       'compatible.rdbms' = '11.2';

Diskgroup created.

SQL> select GROUP_NUMBER, DISK_NUMBER, STATE, NAME, PATH, SECTOR_SIZE from v$asm_disk;

GROUP_NUMBER DISK_NUMBER STATE    NAME            PATH                      SECTOR_SIZE
------------ ----------- -------- --------------- ------------------------- -----------
           0           1 NORMAL                   /dev/mapper/violin_data2          512
           1           0 NORMAL   DATA_0000       /dev/mapper/violin_data1          512

As you can see, the disk which joined the diskgroup shows a sector size of 512 (which is correct, because ASM is reading the logical block size of the device). The other disk is also showing a 512 byte sector size. So now let’s add it to the diskgroup:

SQL> alter diskgroup data add disk '/dev/mapper/violin_data2';

Diskgroup altered.

SQL> select GROUP_NUMBER, DISK_NUMBER, STATE, NAME, PATH, SECTOR_SIZE from v$asm_disk;

GROUP_NUMBER DISK_NUMBER STATE    NAME            PATH                      SECTOR_SIZE
------------ ----------- -------- --------------- ------------------------- -----------
           1           1 NORMAL   DATA_0001       /dev/mapper/violin_data2         4096
           1           0 NORMAL   DATA_0000       /dev/mapper/violin_data1          512

Huh? The newly added disk has suddenly become a 4k sector device. Why? If I add both devices in the initial CREATE DISKGROUP statement this does not happen; it only seems to happen when I ADD a disk to an existing diskgroup.
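For reference, this is the kind of statement I mean – a sketch of the two-disk create (after dropping the diskgroup first) rather than a verbatim cut-and-paste of my test – after which both disks show a SECTOR_SIZE of 512 in v$asm_disk:

SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
  2  DISK '/dev/mapper/violin_data1',
  3       '/dev/mapper/violin_data2'
  4  ATTRIBUTE
  5       'sector_size'='4096',
  6       'compatible.asm' = '11.2',
  7       'compatible.rdbms' = '11.2';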

Why?

ASM Metadata Utilities

One of the things I meant to write about when I started this blog was the undocumented stuff in Oracle that is publicly available. Since I used to spend a lot of time working with ASM I had an idea that I would write an article about kfed, the kernel file editor used to query (and in desperate circumstances actually change) the mysterious dark matter known as ASM Metadata.

I say mysterious; it isn't actually that unfathomable. But I have heard a lot of people confuse the ASM Metadata that resides at the start of each ASM disk (and contains structures such as the Partner Status Table) with the ASM "metadata" that can be backed up and restored using the asmcmd commands md_backup and md_restore (essentially just information about the directory structure, aliases and so on within a diskgroup). As usual, Oracle's naming convention does not make things completely clear.
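To illustrate the distinction, here is a sketch only (run as the Grid Infrastructure owner; the device path is the one from my earlier post and the backup filename is purely illustrative):

# kfed reads the real on-disk ASM Metadata, e.g. the disk header (kfdhdb) fields
[oracle@server1 ~]$ kfed read /dev/mapper/violin_data1 | grep kfdhdb

# md_backup merely exports the diskgroup's directory, alias and template information to a text file
[oracle@server1 ~]$ asmcmd md_backup /tmp/data_dg.mdb -G DATA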

Anyway after a quick bit of Google-fu I’ve realised that I will have to scrap the whole idea anyway, because my ex-Oracle colleague Bane Radulović has written a great article all about kfed and then added insult to injury by eloquently explaining all about ASM Metadata.

Race you to write an article about AMDU then Bane…

Oh too late.