Deprecation of Non-CDB Architecture in Oracle 12c

Back in July 2013, Oracle released the latest version of its flagship database product, Oracle 12c. Among the usual fanfare was information about a number of new options – including one known as Multitenant. With the Multitenant option, databases use a new architecture which features a container database (or CDB) which in turn contains one or more pluggable databases (or PDBs). Use of Multitenant requires a licence – which at the time of writing retails at $17,500 per processor (perpetual) plus 22% per annum for support.

This post is not intended to discuss the way Multitenant works – if you want to read more about it, Tim has a great set of articles about Multitenant here. But keep in mind that you can choose to install the Multitenant feature or not. If you do install it, you can create a single PDB within your CDB without requiring the license. As soon as you use more than one PDB the license is required.

What I want to talk about is Oracle’s attitude to its customers and what seems to me to be breathtaking arrogance. Personally I can think of three very good reasons why I might not want to use the single PDB within a CDB configuration which does not require a Multitenant license:

  • Multitenant requires additional configuration and the use of new administrative commands, which means re-writing admin procedures and re-training operations staff
  • Multitenant is an entirely new feature, with new code paths – which means it carries a risk of bugs (the list of bug fixes for the 12.1.0.2 patchset contains a section on Pluggable/Container Databases which lists no fewer than 105 items)
  • With the Multitenant option installed it is possible to trigger the requirement for an expensive set of licenses due to human error… without the option installed this is not possible

So it seems to me that, while Multitenant might be an interesting and useful new feature to evaluate, it is not something that I would want to be forced into using on production environments just yet. As always, people who manage production environments are conservative in their attitude to risk.

And that’s why I’m surprised to see this deprecation notice in the 12.1 documentation:

[Screenshot: deprecation notice for the non-CDB architecture in the Oracle 12.1 documentation]

The non-CDB architecture (i.e. the old way of building a database without CDBs and PDBs) has been deprecated in 12c and “may be desupported or unavailable” in later releases of Oracle Database. In other words, you need to change to using the CDB and PDB configuration now, even if you do not plan to purchase the Multitenant option.

It would be nice to have the choice, wouldn’t it?

Deprecated versus Desupported

OK so first let’s just remember that the term deprecated does not mean the same as the term desupported. We can dip back into the documentation to define these two important terms:

“By deprecate, we mean that the feature is no longer being enhanced but is still supported for the full life of the 12.1 release. By desupported, we mean that Oracle will no longer fix bugs related to that feature and may remove the code altogether.”

- Oracle Database 12c Upgrade Guide

It seems that Oracle will still support the use of non-CDB databases and will continue to do so for the lifetime of the 12.1 release. But, if you were designing a new system right now, it would take some confidence to choose a configuration which is deprecated and already living on death row.

And there’s more. The deprecation notice says there are some features that still do not work with the CDB architecture – and that if you want to use these you should use the deprecated non-CDB architecture. The list of features which are restricted or not available includes Automatic Data Optimization, Heat Maps, DBVERIFY and Flashback Pluggable Database (you can see the complete lists here for 12.1.0.1 and 12.1.0.2).

So we can add a fourth reason to our list of three drawn up earlier:

  • The Multitenant architecture causes a number of other options or features to be unusable

Now, given that I think all four reasons stated here are good enough to stand up on their own, what does this say about Oracle’s decision to deprecate non-CDB architectures?

You can draw your own conclusion, but I can’t help but see it as arrogance on Oracle’s part as they force customers to use a specific new configuration with little regard for how it affects their operations. At worst, I don’t like being forced into changes by the vendor (to whom customers pay large amounts of money), while at best I would at least expect them to get all the features working before forcing the issue…

Understanding Flash: Floating Gates and Wear

One of the important characteristics of flash memory is wear. We know from previous articles in this series that flash packages consist of dies, which contain planes, which contain blocks, which in turn contain pages. We also know that these pages contain individual cells which store the bits of data… but to understand what wear is we need to look a little bit closer at those cells, where we will find something called a floating gate transistor.

Now don’t run away. Electronics might not be your thing, but I won’t be getting too deep. [After all, I once attempted degree-level education in Electronic Engineering but had to move to another course because I kept burning my fingers on the soldering irons during practical laboratory sessions…] If you really don’t want to think about transistors you can just skip to the section called Flash Wear, or ignore this blog post entirely and spend a few minutes looking at pictures of cats instead.

Field Effect Transistors

If you are reading this blog on a computer or a phone (and how else could you be reading it?) you owe a debt of gratitude to a humble device called the metal oxide semiconductor field effect transistor, or MOSFET. These tiny devices revolutionised the world and are considered by some to be the most important invention of the 20th century.

Students being stimulated to go to a bar and drink alcohol

They therefore deserve to be explained in a serious and respectful manner, but sadly I don’t have time for that so I’m going to resort to one of my silly analogies. Again.

Imagine you have a house full of students, which we’ll call the source. Two doors down the road you have a nightclub full of free alcohol, which we’ll call the drain. However, it’s freezing cold outside and the students won’t venture out of the door, meaning there is no flow of students from source to drain. Now let’s set up a banging sound system behind the wall of the property in between. If we play music loud enough through the wall then we can excite the students into action, causing them to run down the road and enter the nightclub, which creates the flow. The loud music (which we’re going to call the gate) is the stimulus which causes this flow, essentially acting as a switch, while the volume of the music which first drives the students into action is called the threshold. The beauty of this design is that we can control the flow of students from behind the wall, therefore avoiding having to come into direct contact with them. Phew.

I really cannot tell you how bad that analogy is, but I’m afraid it’s only going to get worse later on. If you don’t know how a transistor works I implore you to watch this short video from the excellent folk at Veritasium, which describes it far better than I ever intend to.

Metal Oxide Semiconductor Field Effect Transistor (MOSFET)

In a MOSFET, electrons (students!) can be allowed to flow from the source terminal to the drain terminal by the application of charge to the gate terminal. This charge creates an electric field which alters the behaviour of the silicon layers (the pinkish parts of the diagram) and thus controls the flow. The important bit here is the yellow rectangle which represents an insulating layer (commonly known as the oxide layer). This means the gate is never physically connected to either the source or drain terminals – and you’ll see why this is important as we turn our attention now to something called the floating gate transistor, or FGMOS.

There are many things I haven’t explained in that clumsy analogy, but this is not an electronics blog. Of specific interest is the way that the two types of silicon (n and p) are doped in order to create free electrons and electron holes. Seriously, watch the video. If you haven’t ever learnt how a semiconductor works and my student party analogy is the nearest you get, it would be a crime. Although that’s not going to stop me from using it again in the next section…

Floating Gate Transistors

Floating Gate MOSFET (FGMOS)

The diagram on the right (labelled “FGMOS”) is of a Floating Gate MOSFET, which is essentially what you will find in a flash memory cell. If you play spot the difference with the previous diagram above (the one labelled “MOSFET”) you’ll see that there are now two gates, one above the yellow oxide layer as before but the other entirely surrounded by it. This second gate is known as the floating gate because it is completely electrically isolated.

Notice also that the oxide layer beneath the floating gate is deliberately thinner than that above it in the diagram. Now, back to my analogy…

The students are still there, as is the nightclub. The property in between is still there, but this time we have replaced the brick wall at the front with one of those sets of PVC strip curtains that you sometimes find covering the doors of factories or butchers’ shops to keep insects out (or even lining the edge of the cold aisle in a data centre). Inside, we’ll put some comfy chairs and maybe a beer fridge. This is now our floating gate party room and the PVC strips are our thinner section of oxide layer.

Excited students going to a nightclub are stimulated so that some fall into the “Floating Gate” room

As we now play the music loud enough to exceed the threshold, the excited students will run up the road towards the bar as before, but this time – with enough excitement – some of them will pass through the divider into our floating gate room and remain there, thus trapping electrons (sorry, I mean students) on the floating gate. Meanwhile, as the floating gate room fills up with people, the sound of the music becomes more muffled which slows down the flow of students in the road outside. To put it another way, the volume threshold at which stimulation will occur rises if there are students (sorry, I mean electrons) on the floating gate.

In a FGMOS, if a high charge is applied to the control gate in the same manner as with a MOSFET, electrons flowing from source to drain can get excited and “jump” through the oxide layer into the floating gate, increasing its retained charge. This is the program operation we have talked about so many times before: the floating gate is the “bucket of electrons” from my previous posts (a classic case of mixed metaphors). To erase the charge stored on the floating gate, a high voltage is applied across the source and drain while a negative voltage is applied to the control gate, causing the retained electrons to “jump” back off the floating gate (through the oxide layer). I’m putting the word “jump” in inverted commas there because it’s slightly more complicated and usually involves a process called Fowler–Nordheim tunnelling. I’ll explain everything I understand about that in the next paragraph.

[This paragraph intentionally left blank]

Yeah, it’s complicated, it involves quantum mechanics and it’s way over my head. I’m just taking it for granted that it works.

Read Operations

Now that we have methods for programming and erasing we just need a way of testing the value stored: a read operation. When we were talking about the MOSFET in the previous section we could control the flow of charge (which I had better start calling current) between source and drain by varying the voltage applied to the gate.

FGMOS Read Thresholds: The voltage threshold at which current begins to flow from drain to source is different depending on the charge stored on the floating gate. By testing at an intermediate reference voltage (VtREF) called the “read point” we can determine whether the floating gate contains charge (which we call ZERO) or not (which we call ONE)

In the FGMOS this can be turned around so that by measuring the current we can determine the voltage on the floating gate, because electrons trapped on the floating gate cause the threshold we’ve previously mentioned to move. By applying a certain voltage (VtREF in the diagram on the right) across the source and drain and then testing the current we can determine if the voltage on the gate is above or below a specific point, called the read point.

So, if we play music at a certain threshold volume when the floating gate party room is empty, the sound travels far enough to stimulate a flow of students to the nightclub. But if the room is full (the floating gate contains charge) this specific volume will not stimulate a flow. Instead, it will require a louder threshold volume to get the students out of bed. And I think it’s time to abandon the student analogy now… let us never speak of it again.

Read Points for SLC and MLC (image courtesy AnandTech)

If you remember, flash comes in different forms: SLC, MLC and TLC. So why are MLC reads slower than SLC, with TLC even slower still? Well, because SLC contains only one bit of data (two possible states: a zero or one), we only need to test one threshold voltage, i.e. SLC only has one read point. But for MLC, where there are two bits (and therefore four possible states), there are three read points… while for TLC, with three bits and eight possible states, there are seven.

Remember the bucket full of electrons analogy? When you test the SLC bucket to see if it’s above or below 50% full, the answer will tell you whether the stored value is a zero or a one. But for the MLC bucket, the answer to that test isn’t enough: based on the first answer you then need to perform a second test to see if the bucket is above or below 25% / 75% [delete as appropriate] full. All this additional testing takes time, which is why SLC reads are faster than MLC reads, which in turn are faster than TLC reads.
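
If you’d like to see that read-point arithmetic written down, here’s a minimal Python sketch of the idea – it models abstract charge levels only, and ignores how real controllers map levels to bit patterns:

# Minimal sketch: an n-bit cell has 2**n charge states and therefore 2**n - 1 read points.
# Reading is modelled as a binary search over those read points.

def read_cell(stored_level: int, bits_per_cell: int) -> int:
    """Return the level stored in a cell by successively comparing its
    charge against read points (each loop iteration is one threshold test)."""
    states = 2 ** bits_per_cell          # SLC=2, MLC=4, TLC=8
    lo, hi = 0, states - 1
    tests = 0
    while lo < hi:
        mid = (lo + hi) // 2
        tests += 1
        if stored_level > mid:           # current only flows above the read point
            lo = mid + 1
        else:
            hi = mid
    print(f"{bits_per_cell}-bit cell: {states - 1} read point(s), {tests} test(s) needed")
    return lo

read_cell(stored_level=0, bits_per_cell=1)   # SLC: 1 test
read_cell(stored_level=2, bits_per_cell=2)   # MLC: 2 tests
read_cell(stored_level=5, bits_per_cell=3)   # TLC: 3 tests

Each extra test is extra time, which is the whole reason MLC and TLC reads lag behind SLC.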

But what about the wear?

Flash Wear

Finally I’m getting to the point. Remember the PVC strip curtains, like the ones in the main picture at the top of this post? What do you think happens to them as all those excited students (sorry) hurtle through them? They get damaged. In a FGMOS the oxide layer which isolates the floating gate from the silicon substrate is designed to be thin enough to allow quantum tunnelling of electrons when a high enough charge is applied, but this process gradually damages the layer. Reads are not a problem because only lower voltages are used and no electron tunnelling takes place. But program and erase operations are a different story, which is why wear is measured by the number of program/erase cycles. As the layer gets more damaged, the isolation of the floating gate is increasingly affected and the probability of stored charge leaking out will increase.

For SLC this is less of an issue, because during a read we only need to measure at one read point – so there is a lot of room for error either side of the threshold. But for MLC, with three read points, we need to be much more exact. Thus the wear caused by using SLC, MLC and TLC isn’t really very different, it’s simply that the tolerances for error are much finer with the increased bit counts of MLC and TLC.

At some point, the oxide layer for a FGMOS will become sufficiently degraded so that it can no longer store charge properly on the floating gate. We won’t know about this until it happens, but at some point a read from the cell will no longer be trustworthy. And clearly that’s a problem for data storage, which is why (just like disk) flash stores error correction codes (ECC) alongside user data to ensure that incorrect information is spotted and dealt with while any underlying pages are marked as unusable – all without impacting users.
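
As a vastly simplified illustration of that idea, imagine storing a check value alongside each page of user data – bearing in mind that real NAND controllers use proper error-correcting codes (such as BCH or LDPC) which can also repair a limited number of flipped bits, not just detect them:

# Toy illustration only: a checksum can detect corruption but not correct it,
# whereas real NAND ECC corrects a limited number of bit errors as well.
import zlib

def write_page(data: bytes):
    return data, zlib.crc32(data)            # user data + check value stored together

def read_page(data: bytes, stored_check: int) -> bytes:
    if zlib.crc32(data) != stored_check:     # leaked charge has flipped a bit somewhere
        raise IOError("bit error detected - retire the underlying page")
    return data

page, check = write_page(b"important data")
assert read_page(page, check) == b"important data"   # clean read
try:
    read_page(b"importent data", check)               # a flipped bit is caught
except IOError as e:
    print("read failed:", e)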

Comparison of SLC, MLC and EMLC (courtesy of EE Times)

A final point to consider is that there is a (more expensive) form of MLC known as eMLC (the e stands for enterprise, with “normal” MLC then sometimes referred to as consumer or cMLC). The only difference between eMLC and the standard cMLC we have discussed in the past is that program and erase operations are “slowed down” for eMLC in order to cause less damage to the oxide layer. This gives slightly reduced performance but also significantly reduces wear, allowing up to 10x more P/E cycles. Opinion is divided on whether this is actually a worthwhile investment (not for me though, I think it’s a waste of money in the majority of cases).

Hot News

That’s the end of this post – and as usual I’ve committed two of my three regular blogging sins: writing too much, using silly analogies and finishing on a terrible pun. Time to complete the trio:

Some flash companies have been experimenting with methods of refreshing the lifetime of flash, with one avenue of exploration focussed on providing a short burst of heat to repair the damage to the oxide layer. It has been claimed that flash treated this way can sustain over 100 million P/E cycles with no noticeable degradation. If that is really the case – and this technology can be put into production – we might finally find that the Talking Heads were correct: in the world of flash memory we are on the road to no-wear…

Oracle AWR Reports: When IOStats Lie

If you’ve been unfortunate enough to follow my dithering on Twitter recently you’ll know that I’ve been lurching between thinking that there is and isn’t a problem with Oracle’s tracking of I/O statistics in its AWR reports.

I’m now convinced there is a problem, but I can’t work out what causes it… so step 1 is to describe the problem here, after which step 2 will probably be to sit back and hope someone far more intelligent than me will solve it.

But first some background:

AWR I/O Statistics

I’ve written about the I/O statistics contained in Oracle AWR Reports before, so I won’t repeat myself too much other than to highlight two critical areas which we’ll focus on today: Instance Statistics and IOStat Summaries. By the way, the format of AWR reports changed in 11.2.0.4 and 12c to include a new IO Profile section, but today we’re covering reports from 11.2.0.3.

First of all, here are some sensible statistics. I’m going to show you the IOStat by Function Summary section of a report from a real life database:

IOStat by Function summary
-> 'Data' columns suffixed with M,G,T,P are in multiples of 1024
    other columns suffixed with K,M,G,T,P are in multiples of 1000
-> ordered by (Data Read + Write) desc

                Reads:   Reqs   Data    Writes:  Reqs   Data    Waits:    Avg  
Function Name   Data    per sec per sec Data    per sec per sec Count    Tm(ms)
--------------- ------- ------- ------- ------- ------- ------- ------- -------
Buffer Cache Re   14.9G   504.1 4.22491      0M     0.0      0M 1690.3K     0.0
Direct Reads      12.2G     3.5 3.45612      0M     0.0      0M       0     N/A
DBWR                 0M     0.0      0M    7.9G   186.6 2.25336       0     N/A
Others             4.2G    12.1 1.18110    3.6G     2.5 1.03172   41.9K     0.1
LGWR                 1M     0.0 .000277    5.8G    11.9 1.65642   17.4K     1.0
Direct Writes        0M     0.0      0M      4M     0.0 .001110       0     N/A
TOTAL:            31.2G   519.7 8.86241   17.4G   201.0 4.94263 1749.6K     0.0
          -------------------------------------------------------------

This section of the report is breaking down all I/O operations into the functions that caused them (e.g. buffer cache reads, direct path reads or writes, redo log writes by LGWR, the writing of dirty buffers by DBWR and so on). Let’s ignore that level of detail now and just focus on the TOTAL row at the bottom.

To try and make this simple to describe, I’ve gone a bit crazy with the colours. In green I’ve highlighted the labels for reads and writes – and now let’s walk through the columns:

  1. The first column is the function name, but we’re just focussing on TOTAL
  2. The second column is the total amount of reads that happened in this snapshot: 31.2GB
  3. The third column is the average number of read requests per second, i.e. 519.7 read IOPS
  4. The fourth column is the average volume of data read per second, i.e. 8.86 MB/sec read throughput
  5. The fifth column is the total amount of writes that happened in this snapshot: 17.4GB
  6. The sixth column is the average number of write requests per second, i.e. 201 write IOPS
  7. The seventh column is the average volume of data written per second, i.e. 4.94 MB/sec write throughput
  8. The eighth and ninth columns are not of interest to us here

The whole section is based on the DBA_HIST_IOSTAT_FUNCTION view. What we care about today are the IOPS figures (shown in red) and the Throughput figures (shown in blue). Pay attention to the comments in the section header, which show that Data columns (including throughput) are multiples of 1024 while other columns (including IOPS) are multiples of 1000. It’s interesting that the two throughput values are obviously measured in MB/sec and yet are missing the “M” suffix – I assume this “falls off the end” of the column because of the number of decimal places displayed.

Now that we have these figures explained, let’s compare them to what we see in the Instance Activity Stats section of the same AWR report:

Instance Activity Stats
-> Ordered by statistic name

Statistic                                     Total     per Second     per Trans
-------------------------------- ------------------ -------------- -------------
physical read IO requests                 1,848,576          513.3         121.5
physical read bytes                  29,160,300,544    8,096,201.8   1,917,053.5
physical read total IO requests           1,892,366          525.4         124.4
physical read total bytes            33,620,594,176    9,334,578.5   2,210,281.7
physical read total multi block              17,096            4.8           1.1
physical reads                            3,559,607          988.3         234.0

physical write IO requests                  671,728          186.5          44.2
physical write bytes                  8,513,257,472    2,363,660.5     559,677.7
physical write total IO requests            723,348          200.8          47.6
physical write total bytes           18,657,198,080    5,180,071.5   1,226,559.6
physical write total multi block             26,192            7.3           1.7
physical writes                           1,039,216          288.5          68.3

For this section we care about the per Second column because both IOPS and Throughput are measured using units per second. For both reads and writes there are two sets of statistics: those with the word total in them and those without. You can find the full description of 11gR2 statistics in the documentation, but the difference between the two is best summed up by this snippet which describes physical read total bytes:

Total size in bytes of disk reads by all database instance activity including application reads, backup and recovery, and other utilities. The difference between this value and “physical read bytes” gives the total read size in bytes by non-application workload.

I’ve underlined non-application workload here because this is critical. If you merely look at the Load Profile section at the top of an AWR report you will only see values for “application workload” I/O but this does not include stuff like RMAN backups, archive logging and so on… important stuff if you care about the actual I/O workload. For this reason, we only care about the following four statistics:

  1. physical read total IO requests (per second) = the average number of read IOPS
  2. physical read total bytes (per second) = the average read throughput in bytes per second
  3. physical write total IO requests (per second) = the average number of write IOPS
  4. physical write total bytes (per second) = the average write throughput in bytes per second

Again I’ve coloured the IOPS figures in red and the throughput figures in blue.

Tying It Together

Now that we have our two sets of values, let’s just compare them to make sure they align. The IOPS figures do not require any conversion but the throughput figures do: the values in the Instance Activity Stats section are in bytes/second and we want them to be in MB/second so we need to divide by 1048576 (i.e. 1024 * 1024).

Metric              IOStat by Function   Instance Activity Stats                   Error %
------------------  -------------------  ----------------------------------------  -------
Read IOPS           519.7                525.4                                     1.09%
Write IOPS          201.0                200.8                                     0.10%
Read Throughput     8.86241 MB/sec       9,334,578.5 bytes/sec = 8.90215 MB/sec    0.45%
Write Throughput    4.94263 MB/sec       5,180,071.5 bytes/sec = 4.94010 MB/sec    0.05%

I’ve calculated the error percentages here to see how far the figures vary. It is my assumption that the Instance Activity Stats are accurate and that any margin of error in the IOStat figures comes as a result of sampling issues. The highest error percentage we see here is just over 1%, which is hardly a problem in my opinion.
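
If you want to reproduce the table yourself, the arithmetic is trivial – here’s a quick Python sketch using the figures copied from the two AWR sections above, with the Instance Activity Stats treated as the baseline:

BYTES_PER_MB = 1024 * 1024          # 'Data' columns in AWR are multiples of 1024

def mb_per_sec(bytes_per_sec):
    return bytes_per_sec / BYTES_PER_MB

def error_pct(iostat_value, instance_stat_value):
    """Relative difference, with the Instance Activity Stats as the baseline."""
    return abs(iostat_value - instance_stat_value) / instance_stat_value * 100

print(round(mb_per_sec(9_334_578.5), 5))                      # 8.90215 MB/sec read throughput
print(round(error_pct(519.7, 525.4), 2))                      # ~1.1% read IOPS error
print(round(error_pct(4.94263, mb_per_sec(5_180_071.5)), 2))  # ~0.05% write throughput error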

Don’t Believe The Stats

So far I have no complaints about the matching of statistics in the AWR report. But now let me introduce you to the AWR report that has been puzzling me recently:

IOStat by Function summary
-> 'Data' columns suffixed with M,G,T,P are in multiples of 1024
    other columns suffixed with K,M,G,T,P are in multiples of 1000
-> ordered by (Data Read + Write) desc

                Reads:   Reqs   Data    Writes:  Reqs   Data    Waits:    Avg
Function Name   Data    per sec per sec Data    per sec per sec Count    Tm(ms)
--------------- ------- ------- ------- ------- ------- ------- ------- -------
Direct Reads       3.8T 2.7E+04    3.2G    405M     3.6 .335473       0     N/A
Direct Writes        0M     0.0      0M  164.5G  1159.0 139.519       0     N/A
Buffer Cache Re   63.2G  3177.6 53.6086      0M     0.0      0M 3292.5K    23.4
DBWR                 0M     0.0      0M      2G    45.3 1.71464       0     N/A
LGWR                 0M     0.0      0M    485M    19.2 .401739   19.7K     0.9
Others             220M    12.3 .182232     26M     0.7 .021536   15.6K    34.5
Streams AQ           0M     0.0      0M      0M     0.0      0M       1     9.0
TOTAL:             3.9T 3.0E+04    3.3G  167.4G  1227.8 141.992 3327.8K    23.3
          -------------------------------------------------------------

Instance Activity Stats
-> Ordered by statistic name

Statistic                                     Total     per Second     per Trans
-------------------------------- ------------------ -------------- -------------
physical read total IO requests          35,889,568       29,728.4       3,239.4
physical read total bytes         4,261,590,852,608 3.52999864E+09 3.8465483E+08

physical write total IO requests          1,683,381        1,394.4         151.9
physical write total bytes          205,090,714,624  169,882,555.1  18,511,663.0

Again I’ve coloured the IOPS measurements in red and the throughput measurements in blue. And as before we need to convert the bytes per second values shown in the Instance Activity Stats section to MB/sec as shown in the IOStat by Function Summary section.

Metric              IOStat by Function            Instance Activity Stats                       Error %
------------------  ----------------------------  --------------------------------------------  -------
Read IOPS           30,000                        29,728.4                                      0.91%
Write IOPS          1,227.8                       1,394.4                                       11.95%
Read Throughput     3.3 GB/sec (3,379.2 MB/sec)   3,529,998,640 bytes/sec = 3,366.469 MB/sec    0.38%
Write Throughput    141.992 MB/sec                169,882,555.1 bytes/sec = 162.01263 MB/sec    12.36%

Do you see what’s bugging me here? The write values for both IOPS and throughput are out by over 10% when I compare the values in IOStat by Function against the Instance Activity Stats. Ten percent is a massive margin of error at this level – we’re talking 20MB/sec. To translate that into something easier to understand, if 20MB/sec were sustained over a 24 hour period it would amount to over 1.6TB of data. I’ve seen smaller data warehouses.
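
Here’s the back-of-envelope arithmetic behind that claim, using the figures from the report above:

iostat_write_mb  = 141.992                              # IOStat by Function summary
sysstat_write_mb = 169_882_555.1 / (1024 * 1024)        # Instance Activity Stats, ~162.0 MB/sec
delta_mb         = sysstat_write_mb - iostat_write_mb   # ~20 MB/sec unaccounted for
tb_per_day       = delta_mb * 86_400 / (1024 * 1024)    # sustained for 24 hours
print(f"{delta_mb:.1f} MB/sec difference = {tb_per_day:.2f} TB per day")   # ~1.65 TB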

So why is this happening? Unfortunately I don’t have access to the system where this AWR report was created, so I cannot tell if, for example, there was an instance restart between the start and end snapshots (although the elapsed time of the report was just 20 minutes so it seems unlikely).

The truth is I don’t know. Which is why I’m writing about it here… if you think you have the answer, or just as importantly if you see the same behaviour, let me know!

New Cookbook: Oracle Linux 6 Update 5 within an Oracle VM Template

I’ve posted a new installation cookbook for using Oracle within a virtual machine running on Oracle VM. Surprisingly, I was unable to come up with a satisfactory method of accessing external storage that did not involve the use of Oracle ASMLib.

Oracle Linux 6 Update 5 within an Oracle VM Template

Understanding Flash: Unpredictable Write Performance

I’ve spent a lot of time in this blog series talking about the challenges involved in using flash, such as the way that pages have to be erased before they are written and the restriction that erase operations take place on a whole block. I also described the problem of erase operations being slow in comparison to reads and writes – and the resulting processes we have to put in place to manage that problem (i.e. garbage collection). And most recently I covered the way that garbage collection can result in unpredictable performance.

But so far we’ve always worked under the assumption that reads and writes to NAND flash have the same predictably low latency. This post is all about bursting that particular bubble…

Programming NAND Flash: A Quick Recap

You might remember from my post on the subject of SLC, MLC and TLC that I used the analogy of electrons in a bucket to explain the programming of NAND flash cells:

[Diagram: SLC, MLC and TLC as buckets of electrons]

I’d now like to change that analogy slightly, so I’m asking you to consider that you have an empty bucket and a powerful hose pipe. You can turn the hose on and off whenever you want to fill the bucket up, but you cannot remove any water from the bucket without emptying it completely. Ok, now we’re ready.

For SLC we simply say that an empty bucket denotes a binary value of 1 and a full bucket denotes binary 0. Thus when you want to program an SLC bucket you simply let rip with your hose pipe until it’s full. No need to measure whether the water line is above or below the halfway point (the threshold), just go crazy. Blam! That was quick, wasn’t it?

For MLC however, we have three thresholds – and again we start with the bucket empty (denoting binary 11). Now, if I want to program the binary values of 01 or 10 in the above diagram I need to be careful, because if I overfill I cannot go backwards. I therefore have to fill a little, test, fill some more, test and so on. It’s actually kind of tricky – and it’s one of the reasons that MLC is both slower than SLC and has a lower wear limit. But here’s the thing… if I want to program my MLC to have a value of binary 00 in the above diagram, I have no such problems because (as with SLC) I can just open the hose up on full power and hit it.

What we’ve demonstrated here is that programming a full charge value to an MLC cell is faster than programming any of the other available values. With a little more thought you can probably see that TLC has this problem to an even worse degree – imagine how accurate you need to be with that hose when you have seven thresholds to consider!
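
If it helps, here’s a little Python sketch of that fill-test-fill-again loop – the step size and threshold values are invented purely for illustration:

def program_cell(target_pct, step_pct=5):
    """Charge a cell up to target_pct of full charge (100 = full)."""
    level, pulses = 0, 0
    if target_pct >= 100:                  # full charge: just open the hose, no careful checking
        return 100, 1
    while level + step_pct <= target_pct:  # program a little, verify, repeat
        level += step_pct
        pulses += 1
    return level, pulses

print(program_cell(100))   # (100, 1) - the fast, SLC-like case
print(program_cell(35))    # (35, 7)  - an intermediate MLC threshold needs many small pulses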

One final thought. We read and write (program) to NAND flash at the page level, which means we are accessing a large collection of cells as if they are one single unit. What are the chances that when we write a page we will want every cell to be programmed to full charge? I’d say extremely low. So even if some cells are programmed “the fast way”, just one “slow” program operation to a non-full-charge threshold will slow the whole program operation down. In other words, I can hardly ever take advantage of the faster latency experienced by full charge operations.

Fast Pages and Slow Pages

The majority of flash seen in the data centre today is MLC, which contains two bits per cell. Is there a way to program MLC in order that, at least sometimes, I can program at the faster speeds of a full-charge operation?

Let’s take my MLC bucket diagram from above and remap the binary values like the diagram on the left. What have I changed? Well most importantly I’ve reordered the binary values that correspond to each voltage level; empty charge still represents 11 but now full charge represents 10. Why did I do that?

The clue is the dotted line separating the most significant bit (MSB) and the least significant bit (LSB) of each value. Let’s consider two NAND flash pages, each comprising many cells. Now, instead of having both bits from each MLC cell used for a single page, I will put all of the MSB values into one page and call that the slow page. Then I’ll take all of the LSB values and put that into the other page and call that the fast page.

Why did I do this? Well, consider what happens when I want to program my fast page: in the diagram you can see that it’s possible to turn the LSB value from one to zero by programming it to either of the two higher thresholds… including the full charge threshold. In fact, if you forget about the MSB side for a second, the LSB side is very similar to an SLC cell – and therefore performs like one.

The slow page, meanwhile, has to be programmed just like we discussed previously and therefore sees no benefit from this configuration. What’s more, if I want to program the fast page in this way I can’t store data in the corresponding slow page (the one with the matching MSBs) because every time I program a full charge to this cell the MSB ends up with a value of one. Also, when I want to program the slow page I have to erase the whole block first and then program both pages together (slowly!).

It’s kind of complicated… but potentially we now have the option to program certain MLC pages using a faster operation, with the trade-off that other pages will be affected as a result.
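
To make the bit-splitting idea a little more concrete, here’s a minimal sketch of an assumed layout (illustration only, not any vendor’s actual firmware):

def split_wordline(cell_values):
    """Split each 2-bit MLC cell across two logical pages:
    MSBs form the 'slow' page, LSBs form the 'fast' page."""
    slow_page = [(v >> 1) & 1 for v in cell_values]   # most significant bits
    fast_page = [v & 1 for v in cell_values]          # least significant bits
    return slow_page, fast_page

slow, fast = split_wordline([0b11, 0b10, 0b01, 0b00])
print("slow (MSB) page:", slow)   # [1, 1, 0, 0]
print("fast (LSB) page:", fast)   # [1, 0, 1, 0]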

Getting To The Point

I should point out here that this is pretty low-level stuff which requires direct access to NAND flash (rather than via an SSD for example). It may also require a working relationship with the flash manufacturer. So why am I mentioning it here?

Well first of all I want to show you that NAND flash is actually a difficult and unpredictable medium on which to store data – unless you truly understand how it works and make allowances for its behaviour. This is one of the reasons why so many flash products exist on the market with completely differing performance characteristics.

When you look at the datasheet for an MLC flash product and you see write / program times shown as, for example, 1.4 milliseconds it’s important to realise that this is the average of its bi-modal behaviour. Fast (LSB) pages may well have program times of 300 microseconds, while slow (MSB) pages might take up to 2.5 milliseconds.
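
To see where a quoted average like that comes from, here’s the back-of-envelope version using the example figures above:

fast_page_us = 300     # LSB ("fast") page program time from the example above
slow_page_us = 2500    # MSB ("slow") page program time
# An even mix of fast and slow pages gives the sort of figure a datasheet quotes:
average_us = (fast_page_us + slow_page_us) / 2
print(f"average program time = {average_us / 1000} ms")   # 1.4 ms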

Secondly, I want to point out that direct access to the flash (instead of via an SSD) brings certain benefits. What if, in my all flash array, I send all inbound user writes to fast pages but then, later on during garbage collection, I move data to be stored in slow pages? If I could do that, I’d effectively be hiding much of the slower performance of MLC writes from my users. And that would be a wonderful thing…

…which is why, at Violin, we’ve been doing it for years :-)

 

Oracle AWR Reports: When Averages Lie

I was recently contacted by a customer who runs a busy ERP system on their Oracle database. Their users have been screaming for some time about unpredictable performance, where application screens which would usually open in under a second suddenly take an age. ERP systems are legendary for this, of course, because as each screen loads the application tends to fire off a glut of queries to populate all the various fields, graphs and so on. When performance problems strike, the elapsed time for each query rises, which usually results in “stuff going slowly”. And unfortunately, because of something known to psychologists as the availability heuristic, users tend not to remember the times when performance was acceptable… just the occasions when they were left frustrated.

Initial Investigation

The first thing I ask for in situations like this is a set of AWR reports. Customers are generally happy to part with such information, whereas the running of bespoke SQL scripts to harvest data can often cause delays as the scripts have to be investigated and tested. AWR reports, I’m sure you know, contain information about the behaviour and performance of the database between two points in time – and much of the data is averaged. For this reason, I always always ask for a set of multiple, consecutive reports covering the problematic period where each report covers no more than one hour. I’d prefer less than one hour per report, but the default configuration of Oracle results in hourly snapshots so that’s usually the limit… as it was for this customer.

As a first pass, I like to run the AWR reports through my awr-parser.sh script. This parses them all and produces a CSV file as output. This can then be easily used to graph information such as the amount of time spent working on CPU versus time spent waiting on I/O requests to complete. Often this is enough to get a basic idea of what’s happening…

[Graph: DB CPU vs User I/O wait time per hourly AWR report]

In the above graph, each column shows data from a single (hourly) AWR report; blue represents DB CPU while red represents User I/O wait time. If we are looking to improve the user experience with low-latency storage then we would hope to find two things: a) lots of User I/O wait time, and b) average latency which is higher than we would expect a flash array to deliver.

Unfortunately, the first requirement isn’t really there in the above graph. There just isn’t enough red, except for peaks towards the end of the day where there appears to be some batch job or backup taking place. A peek at the AWR reports shows that this is in fact a scheduled job which performs an online rebuild of an index. With more capable storage we could make this go a lot faster, so would that perhaps help the customer?

Sadly, the answer was no, it wouldn’t. The customer specifically stated that it’s the online day where they struggle and not the index rebuild. Hmmm.

More Graphs

Let’s have a look at some of the other information that came out of the AWR reports to see if anything interesting shows up:

[Graph: CPU utilization per AWR report]

Plenty of spare CPU capacity here. In fact, if you think about how much Oracle licenses cost per CPU – and how much CPU is not being used… it looks like this database server represents a significant over-investment.

[Graph: I/O throughput per AWR report]

Not much in the way of I/O… there are spikes here and there, including those for the index rebuilds, but overall the I/O rates are very low and for many samples we barely scratch 10MiB/sec.

So what have we got? A system which hardly performs any I/O and doesn’t seem to spend any time waiting on I/O during the period the users are complaining about. Time to go back to the customer and ask for some more detail…

Second Attempt

After a plea for more details, I received another batch of AWR reports covering a single 24 hour period, as well as the crucial information that complaints were received between 5am and 6am. Perfect. Let’s have a look at that throughput graph again:

[Graph: I/O throughput, second set of AWR reports]

Again, the throughput is very low – including around the critical 5am-6am period where complaints were received – which might fool us into thinking any problems are not related to the storage. But what about if we plot a graph of I/O Operations Per Second (IOPS)?

[Graph: IOPS, second set of AWR reports]

Bingo! There’s a big spike of read IOPS at exactly the time where problems were reported, which we can put down to a large amount of random physical reads. We know from the customer that they have a multi-controller monolithic disk array – the top of the range product from the world’s biggest disk array vendor. But it’s still a disk array, which means random I/O frequently results in high latency as somewhere a disk head has to move across the platter and then the platter has to rotate to the correct sector.

So we should be able to just check the average wait time for random physical reads (the wait event db file sequential read) and it will reveal itself as the problem. Right? We’ll expect to see waits in the region of 4-8ms or perhaps even worse if it’s really struggling.

Here’s a graph of the average user I/O wait time seen across each AWR report – don’t forget we’re especially interested in the troublesome 5am-6am period:

[Graph: average User I/O wait time per AWR report]

Not what we expected. The 5am-6am slot is showing the best latency figures for the entire day! And this is when users are complaining?

Averages Lie

A disk array like this usually has a reasonably large DRAM cache and quite possibly a set of SSDs being used for further caching or tiering of data. If a physical read can be satisfied from the cache, it will likely be serviced in under a millisecond. But if we experience a cache miss, we’re forced to pay the penalty of mechanical latency and wait for many milliseconds. Thus there are two ranges of response times: sub-millisecond and many milliseconds.

Average wait times won’t show this because they are simply the average of all waits over the period of the AWR report (an entire hour in this case). What if many of the reads are cache hits but some still miss? Maybe it’s possible that there are enough misses to cause users to complain, but enough hits to make the average look acceptable?
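
It’s certainly arithmetically possible – here’s a hypothetical mix (invented numbers, purely to show the effect):

hits, hit_ms    = 90_000, 0.4    # reads served from the array's DRAM/SSD cache
misses, miss_ms = 10_000, 10.0   # reads paying the price of mechanical latency
average_ms = (hits * hit_ms + misses * miss_ms) / (hits + misses)
print(f"average read wait = {average_ms:.2f} ms")   # 1.36 ms - looks fine on paper

Ten thousand users waiting ten milliseconds apiece, and yet the average looks perfectly respectable.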

Luckily, there is another section in the AWR report which can help here: the Wait Event Histogram. Here are the histograms covering db file sequential read waits (random physical reads) for our 24 hour period:

$ # print the saved column headings, then pull the "db file sequential read"
$ # histogram line from each AWR report in the current directory
$ cat header.txt; for file in `ls -1`; do
grep -A 40 "Wait Event Histogram" $file | grep "db file sequential read" | head -1
done
                                                    % of Waits                  
                                 -----------------------------------------------
                           Total                                                
Event                      Waits  <1ms  <2ms  <4ms  <8ms <16ms <32ms  <=1s   >1s
-------------------------- ----- ----- ----- ----- ----- ----- ----- ----- -----  
db file sequential read     3329  99.2    .8    .0    .0    .0    .0    .0    .0
db file sequential read     3487  99.4    .6    .0    .0
db file sequential read    35.4K  21.0   1.2   2.5  23.2  39.0  11.9   1.1
db file sequential read    82.9K  34.6   1.8   3.4  15.9  31.9  11.2   1.2
db file sequential read    428.6  67.4   2.1   2.5  10.1  15.2   2.4    .3
db file sequential read    4440.  92.6   1.3   1.2   1.9   2.6    .4    .0
db file sequential read    2013K  91.7   1.4   1.2   2.1   3.0    .5    .0
db file sequential read    131.7  45.7   1.1   2.8  15.6  29.1   5.4    .3
db file sequential read    76.3K  20.7   1.7   2.8  21.7  44.0   8.7    .4
db file sequential read    83.7K  31.1   1.1   2.4  16.6  38.1  10.0    .7
db file sequential read    430.4  79.6   1.5   2.0   4.4   9.3   2.9    .3
db file sequential read    106.6  32.3   1.7   3.0  15.4  36.1  10.3   1.2
db file sequential read      94K  23.8   1.8   2.7  16.6  41.7  12.1   1.2
db file sequential read    248.6  36.0   1.8   2.9  14.8  34.5   8.9   1.1
db file sequential read    202.5  31.9   1.4   2.0   9.0  38.0  16.1   1.6
db file sequential read    237.7  20.2   1.9   2.9  17.7  43.5  12.5   1.2
db file sequential read    49.5K   9.4   1.0   2.7  24.1  48.0  13.6   1.1
db file sequential read    75.6K  42.4   1.5   3.0  16.9  29.9   5.9    .5
db file sequential read    88.7K  61.4   1.3   2.5  12.4  18.2   3.8    .4
db file sequential read    644.3  84.3   1.0   1.5   4.3   7.3   1.5    .2
db file sequential read    61.3K  34.9   6.9   4.4  11.0  31.2  10.5   1.1
db file sequential read      330  96.7   3.1    .2    .0    .0
db file sequential read      12K  88.7   9.5   1.2    .4    .2    .1    .0    .0

Now the problem becomes clear. Although many reads are hitting the cache (<1ms bucket), there are an awful lot that are missing. The <2ms and <4ms buckets are virtually empty and then around <8ms we start to see big numbers again. Those are cache misses which incur the penalty of seek time on a hard disk drive somewhere in the array.

What’s more, it’s clear that the array is under some pressure because in many of the AWR reports there are substantial waits in the <16ms and above buckets. I’ve highlighted in red the occasions where over one third of read requests took over 8ms. That’s exactly the sort of behaviour that is making people replace disks with flash.
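
As a quick sanity check, here are the buckets above 8ms summed for one of those highlighted rows (the 35.4K-wait report above):

# Bucket percentages copied from the third data line of the histogram output above
buckets = {"<1ms": 21.0, "<2ms": 1.2, "<4ms": 2.5, "<8ms": 23.2,
           "<16ms": 39.0, "<32ms": 11.9, "<=1s": 1.1}
over_8ms = buckets["<16ms"] + buckets["<32ms"] + buckets["<=1s"]
print(f"{over_8ms:.1f}% of reads took longer than 8ms")   # 52.0%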

Let’s visualise the above data. I’ve gone back to the older set of AWR reports and graphed the histogram buckets to see if we can get a feel for the distribution of waits:

[Graph: db file sequential read wait histogram buckets, heat map-style colouring]

As you can see, I’ve tried to go for a heat map-style colour scheme, with green indicating cache hits on the storage array and the colours moving from yellow through orange to red indicating cache misses which result in disk drive seek time. There are also some real outliers in black where requests took over one thousand milliseconds to be serviced (ouch).

Another way to look at it is to make the picture clearer by reducing the detail. In the graph below I’ve simply assumed that everything sub-millisecond is a storage array cache hit and everything else is a miss:

[Graph: storage array cache hits (sub-millisecond) vs misses per AWR report]

Clearly there are a lot of physical reads paying the price of mechanical latency here.

Conclusion

What this customer does next is up for debate – there are more than just technical issues to consider. But the point of this post is to illustrate that while average wait times are great for getting a rough feel of how a system is behaving, they also offer huge scope to lose details. And sometimes those details are important. The devil, as they say, is truly in the detail…

Update: January 2015 – I have now added the ability to read Wait Event Histograms to my awr-parser.sh script which you can find on my Useful Scripts page…

Understanding Flash: The Write Cliff

For the last couple of posts in this series I’ve been banging on about the importance of garbage collection (GC) in a flash system. I attempted to show you what happens if you don’t perform any GC at all (clue: you turn your flash device into a slow ROM), but clearly in the real world every flash array or SSD vendor has GC technology built into their flash translation layer. So why am I going to devote yet another post to it?

Predictable Performance

When you consider the performance of a system, what’s the number one requirement on your list? Is it “fast”? I would argue not. In my opinion, the first and most important requirement when considering performance is predictability. If you know how a system will perform at any time then (even if you would prefer things to happen faster) you can plan accordingly. If the same repeatable action behaves completely differently over random samples, how can you ever consider it reliable?

Cast your mind back to the post about flash blocks and pages. Remember that reads happen at the page level, as do writes (known as program operations when working with NAND flash) – but only empty pages can be programmed. To make a page empty you must perform an erase operation – and, crucially, these happen at the block level thereby affecting an entire set of pages.
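
For anyone who prefers code to prose, here’s a toy model of that constraint – not real flash translation layer code, just the rule itself:

PAGES_PER_BLOCK = 256    # illustrative figure only

class Block:
    """Pages can be programmed individually, but erases are block-level only."""
    def __init__(self):
        self.pages = [None] * PAGES_PER_BLOCK       # None = empty (erased)

    def program(self, page_no, data):
        if self.pages[page_no] is not None:
            raise ValueError("page already written - the whole block must be erased first")
        self.pages[page_no] = data

    def erase(self):                                # there is no such thing as a per-page erase
        self.pages = [None] * PAGES_PER_BLOCK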

As you fill up your flash device, pages containing data must be relocated in order to free up blocks so that they can be erased. At this point there are two pieces of bad news to consider:

First of all, in general each flash die can only perform one operation at a time (sometimes this is one operation per plane but that really doesn’t detract from the point). That means if you are performing an erase operation on block A, a read operation from a page in block Z on the same die has to be queued. It’s a completely different block – one of thousands on the same die – but the operation is queued nonetheless.

The second bit of news is that erase operations are slow… really slow. For MLC flash we’re talking maybe 3 milliseconds, which is an age when you compare it to the ~50 microseconds it takes to perform a read. Program operations are also slower than reads (but faster than erases) and they also have to be queued.

So based on all this information, a user who simply reads data at a predictable rate may suddenly see their latency spike up to around 60x higher than the 50us they are expecting if their read gets queued behind an erase. That doesn’t sound like fun does it?
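
The arithmetic behind that figure, using the typical timings quoted above:

erase_us = 3000   # MLC block erase: roughly 3 milliseconds
read_us  = 50     # page read: roughly 50 microseconds
print(f"a read queued behind an erase can take ~{erase_us // read_us}x longer than expected")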

Background vs Active / Foreground Garbage Collection

We know that garbage collection has to try and take care of erases in order to stop you running out of space as you fill up your flash. But let’s now consider this in terms of the performance problems caused by “user operations” (i.e. active reads and writes from the host) queuing up behind “background operations” (i.e. activity caused by the flash translation layer doing its job). Clearly the latter will affect the former if we are not careful. It therefore makes sense that we should try and perform all of our background operations at times when they will not cause problems to the users. As you know if you’ve read my blog before, I love an analogy… so let’s consider garbage collection a bit like the process of washing dishes at a busy restaurant.

A restaurant only has a finite number of plates, glasses, cutlery etc, so once stuff is used it has to return to the kitchen and be washed ready for reuse. In a well-functioning restaurant this process takes place without disrupting the flow of cooked meals leaving the kitchen and being served to customers. In the same way, if our flash garbage collection is taking place in a manner which does not affect the active I/O operations of our users, this is known as background garbage collection (BGC). You can consider BGC “a good thing”, since it “hides” the impact of erase operations and results in more stable, predictable I/O times from the host. In other words, the kitchen runs smoothly and our customers are happy. Bravo.

On the other hand, if our dishes are not being washed fast enough in the kitchen of our restaurant, at some point there will be a shortage of clean plates etc and the customers will have to wait. Likewise, if more data is being changed than BGC can keep up with, the flash device is now running out of free space in which to program incoming writes. This means we have to switch into a different mode called active garbage collection (AGC), sometimes also described as foreground garbage collection. In AGC, user I/O inevitably ends up queuing behind background operations – and in severe cases we have to throttle user I/O requests because they cannot be serviced in time. Yes, we actually have to tell the waiters not to take any more orders until we can get our act together in the kitchen.

You might remember from my previous post that all flash vendors overprovision their flash to allow an additional working area where new writes can land while stale pages are being erased. In the same way, most restaurants probably have more plates and cutlery than they have table settings out front. It helps – and the more you overprovision, the more breathing space you have – but at some point if you don’t take care of your dirty dishes you will still run out of clean ones.

The Infamous Write Cliff

There has been a lot of talk about the write cliff by various commentators, flash vendors and bloggers over the years. I’ve read articles that say it’s no longer a problem (“in most SSD arrays”), articles that show it causing significant problems and white papers on how to avoid it.

My advice is to keep it simple: your flash device has an over-provisioned “buffer zone” which you may or may not be able to see (on Violin you can actually configure it). If you change more data on your flash device than the background garbage collection algorithm can keep up with, you will eat into this buffer zone until you hit active garbage collection. Keep pushing your device at this point and you will see the latency rise as the number of IOPS falls. It’s as simple as that.
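
If it helps to see that in numbers, here’s a toy model of the buffer-zone arithmetic – every figure below is invented for illustration:

buffer_blocks  = 10_000   # overprovisioned free blocks available to land new writes
writes_per_sec = 1_200    # blocks consumed per second by incoming host writes
gc_per_sec     = 1_000    # blocks reclaimed per second by background garbage collection
seconds = 0
while buffer_blocks > 0:                      # net drain of 200 blocks every second
    buffer_blocks += gc_per_sec - writes_per_sec
    seconds += 1
print(f"active GC (and the write cliff) arrives after ~{seconds} seconds")   # ~50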

My good buddy Maxim from Violin’s amazing engineering team demonstrated this to me in real life by deliberately limiting the ability of background garbage collection on a test system and then hitting it with lots of writes. Here’s the result:

[Graph: test results showing the transition from background to active garbage collection]

This same pattern can be seen in numerous other places on the internet; for example, in the Preconditioning Curve graphs of reviews of SSDs. In fact, there are only two possible scenarios where a flash device won’t hit the write cliff (assuming you push it hard enough):

  1. Flash devices (mainly SLC) which can perform garbage collection really quickly and have lots of overprovisioned space (e.g. the Violin Memory 6616 array frequently used for setting benchmark records)
  2. Flash devices where the limited ingest capability means they can never accept enough writes to exhaust their overprovisioned space

That last one might seem contentious, but think about it: it’s a simple fact of NAND flash that erases are slower than writes (programs). This means if you are able to perform enough writes then eventually you will always exhaust any finite overprovisioned buffer space. At that point, writes must slow down to the speed of erases. As the theory of constraints says, “a chain is no stronger than its weakest link“…
