The Real Cost of Enterprise Database Software

Storage for DBAs: The strange thing about enterprise databases is that the people who design, manage and support them are often disassociated from the people who pay the bills. In fact, that’s not unusual in enterprise IT, particularly in larger organisations where purchasing departments are often at opposite ends of the org chart to operations and engineering staff.

I know this doesn’t apply to everyone but I spent many years working in development, operations and consultancy roles without ever having to think about the cost of an Oracle license. It just wasn’t part of my remit. I knew software was expensive, so I occasionally felt guilt when I absolutely insisted that we needed the Enterprise Edition licenses instead of Standard Edition (did we really, or was I just thinking of my CV?) but ultimately my job was to justify the purchase rather than explain the cost.

On the off chance that there are people like me out there who are still a little bit in the dark about pricing, I’m going to use this post to describe the basic price breakdown of a database environment. I also have a semi-hidden agenda for this, which is to demonstrate the surprisingly small proportion of the total cost that the storage system accounts for. If you happen to be designing a database environment and you (or your management) think the cost of high-end storage is prohibitive, just keep in mind how little it affects the overall three-year cost in comparison to the benefits it brings.

Pricing a Mid-Range Oracle Database

Let’s take a simple mid-range database environment as our starting point. None of your expensive Oracle RAC licenses, just Enterprise Edition and one or two options running on a two-socket server.

At the moment, on the Oracle Store, a perpetual license for Enterprise Edition is retailing at $47,500 per processor. We’ll deal with the whole per processor thing in a minute. Keep in mind that this is the list price as well. Discounts are never guaranteed, but since this is a purely hypothetical system I’m going to apply a hypothetical 60% discount to the end product later on.

[Figure: mid-range price assumptions]

I said one or two options, so I’m going to pick the Partitioning option for this example – but you could easily choose Advanced Compression, Active Data Guard, Spatial or Real Application Testing as they are all currently priced at $11,500 per processor (with the license term being perpetual – if you don’t know the difference between this and named user then I recommend reading this). For the second option I’ll pick one of the cheaper packs… none of us can function without the wait interface anymore, so let’s buy the Tuning Pack for $5,000 per processor.

The Processor Core Factor

I guess we’d better discuss this whole processor thing now. Oracle uses per-core licensing, which means each CPU core needs a license, as opposed to per-socket licensing, which requires one license per physical chip in the server. This is normal practice these days since not all sockets are equal – different chips can have anything from one to ten or more cores in them, making socket-based licensing a challenge for software vendors. Sybase is licensed by the core, as is Microsoft SQL Server from SQL Server 2012 onwards. However, not all cores are equal either… meaning that different processor architectures have to be priced according to their capabilities.

The solution, in Oracle’s case, is the Oracle Processor Core Factor, which determines a multiplier to be applied to each processor type in order to calculate the number of licenses required. (At the time of writing the latest table is here but always check for an updated version.) So if you have a server with two sockets containing Intel Xeon E5-2690 processors (each of which has eight cores, giving a total of sixteen) you would multiply this by Oracle’s core factor of 0.5 meaning you need a total of 16 x 0.5 = 8 licenses. That’s eight licenses for Enterprise Edition, eight licenses for Partitioning and eight licenses for the Tuning Pack.
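To make that arithmetic explicit, here is a minimal sketch in Python using the list prices quoted above (examples from the time of writing, not a quote):

```python
# Rough sketch of Oracle's per-core licensing arithmetic, using the
# example list prices quoted above.

CORE_FACTOR = 0.5        # Oracle core factor for this Intel Xeon family
SOCKETS = 2
CORES_PER_SOCKET = 8     # e.g. Intel Xeon E5-2690

licenses = SOCKETS * CORES_PER_SOCKET * CORE_FACTOR   # 16 x 0.5 = 8

list_prices = {
    "Enterprise Edition": 47_500,
    "Partitioning":       11_500,
    "Tuning Pack":         5_000,
}

for product, price in list_prices.items():
    print(f"{product:<18}: {licenses:.0f} licenses x ${price:,} = ${licenses * price:,.0f}")
```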

[Figure: mid-range price breakdown]

What else do we need? Well there’s the server cost, obviously. A mid-range Xeon-based system isn’t going to be much more than $16,000. Let’s also add the Oracle Linux operating system (one throat to choke!) for which Premier Support is currently listing at $6,897 for three years per system. We’ll need Oracle’s support and maintenance of all these products too – traditionally Oracle sells support at 22% of the net license cost (i.e. what you paid rather than the list price), per year. As with everything in this post, the price / percentage isn’t guaranteed (speak to Oracle if you want a quote) but it’s good enough for this rough sketch.

Finally, we need some storage. Since I’m actually describing from memory an existing environment I’ve worked on in the past, I’m going to use a legacy mid-range disk array priced at $7 per GB – and I want 10TB of usable storage. It’s got some SSD in it and some DRAM cache but obviously it’s still leagues apart from an enterprise flash array.

Price Breakdown

That’s everything. I’m not going to bother with a proper TCO analysis, so these are just the costs of hardware, software and support. If you’ve read this far your peripheral vision will already have taken in the graph below, so I can’t ask you to take a guess… but think about your preconceptions. Of the total price, how much did you think the storage was going to be? And how much of the total did you think would go to the database vendor?

[Figure: mid-range database price breakdown]

The storage is just 17% of the total, while the database vendor gets a whopping 80%. That’s four-fifths… and they don’t even have to deal with the logistics of shipping and installing a hardware product!
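If you want to check my working, here is a minimal sketch of the three-year model behind that chart. It uses the hypothetical assumptions above (list prices, a 60% discount, 22% annual support, $7/GB storage) and lands within a percentage point or so of the figures in the graph, depending on rounding:

```python
# Three-year cost sketch for the hypothetical mid-range system described above.

licenses     = 8
license_list = licenses * (47_500 + 11_500 + 5_000)    # EE + Partitioning + Tuning Pack
license_net  = license_list * (1 - 0.60)               # hypothetical 60% discount
support      = license_net * 0.22 * 3                  # 22% of net per year, for 3 years

oracle_linux = 6_897                                   # OL Premier Support, 3 years
server       = 16_000
storage      = 7 * 10 * 1024                           # $7/GB x 10TB usable

total        = license_net + support + oracle_linux + server + storage
db_vendor    = license_net + support + oracle_linux

print(f"Total three-year cost : ${total:,.0f}")
print(f"Database vendor share : {db_vendor / total:.0%}")
print(f"Storage share         : {storage / total:.0%}")
```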

Still, the total price is “only” $430k, so it’s not in the millions of dollars, plus you might be able to negotiate a better discount. But ask yourself this: what would happen if you added Oracle Real Application Clusters (currently listing at $23,000 per processor) to the mix? You’d need to add a whole set of additional nodes too. The price just went through the roof. What about if you used a big 80-core NUMA server… thereby increasing the license cost by a factor of five (16 cores to 80)? Kerching!

Performance and Cost are Interdependent

There are two points I want to make here. One is that the cost of storage is often relatively small in terms of the total cost. If a large amount of money is being spent on licensing the environment it makes sense to ensure that the storage enables better performance, i.e. results in a better return on investment.

The second point is more subtle – but even more important. Look at the price calculations above and think about how important the number of CPU cores is. It makes a massive difference to the overall cost, right? So if that’s the case, how important do you think it is that you use the best CPUs? If CPU type A gives significantly better performance than CPU type B, it’s imperative that you use the former because the (license-related) cost of adding more CPU is prohibitive.
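Here is an illustrative sketch of that trade-off. The relative per-core performance numbers are invented purely to show the shape of the argument – substitute real benchmark figures for your own candidate CPUs:

```python
# License cost per unit of database throughput for different CPU choices.
# The per-core performance figures are hypothetical, for illustration only.

EE_LIST_PRICE = 47_500    # Enterprise Edition list price per license
CORE_FACTOR   = 0.5       # x86 core factor

cpus = {
    # name: (total cores, relative per-core performance)
    "CPU type A (faster cores)":  (16, 1.3),
    "CPU type B (slower cores)":  (16, 1.0),
    "80-core NUMA server":        (80, 1.0),
}

for name, (cores, perf_per_core) in cpus.items():
    ee_cost = cores * CORE_FACTOR * EE_LIST_PRICE
    cost_per_perf = ee_cost / (cores * perf_per_core)
    print(f"{name:<28}: ${ee_cost:>9,.0f} EE list, ${cost_per_perf:,.0f} per unit of work")
```

The faster cores cost nothing extra in licenses; the extra cores cost a fortune.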

Yet many environments are held back by CPUs that are stuck waiting on I/O. This is bad news for end users and applications, bad news for batch jobs and backups. But most of all, this is terrible news for data centre economics, because those CPUs are much, much more expensive than the price you pay to put them in the server.

There is more to come on this subject…

Storage Myths: Put Oracle Redo on SSD


Storage for DBAs: “My database is slow”… “Well then why not put your redo logs on SSDs?” Gaaaah. I still hear people having this discussion and it drives me mad. “Nobody got fired for putting Oracle redo on …<flash vendor>”. Yeah right, but does that mean it was worth the investment?

I’m bored of this line of illogical “reasoning”, so here are three reasons why you shouldn’t put your redo logs on SSD.

1. Solve The Right Problem

If a database is slow, find out why. Investigate, troubleshoot, resolve. Don’t throw hardware at it without understanding what the problem is. Redo is written by the Oracle Log Writer process – and the wait event log file parallel write covers the writing of redo records from the log buffer into the online redo log files. If you are seeing high average wait times for log file parallel write (or occasional high wait times in the Wait Event Histogram) maybe it’s time to investigate the speed of redo I/Os. Otherwise … leave it alone, or you are fixing the wrong issue.
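As a rough illustration of that decision (and not a substitute for proper analysis), this is the kind of sanity check I mean – the histogram values below are placeholders, so plug in the log file parallel write buckets from your own AWR report:

```python
# Quick sanity check: is redo write latency actually a problem?
# The histogram below is a placeholder -- substitute the 'log file parallel
# write' buckets from your own AWR Wait Event Histogram.

histogram_ms = {1: 95_000, 2: 3_000, 4: 1_500, 8: 400, 16: 80, 32: 20}  # <= ms bucket -> waits

total_waits = sum(histogram_ms.values())
slow_waits  = sum(n for bucket, n in histogram_ms.items() if bucket > 8)
# crude upper-bound average, since each wait is counted at its bucket ceiling
avg_ms      = sum(bucket * n for bucket, n in histogram_ms.items()) / total_waits

print(f"approx average wait : {avg_ms:.2f} ms")
print(f"waits above 8ms     : {slow_waits / total_waits:.2%}")

# Thresholds are rules of thumb, not gospel
if avg_ms > 2 or slow_waits / total_waits > 0.01:
    print("Redo write latency looks worth investigating")
else:
    print("Redo I/O probably isn't the problem -- look elsewhere first")
```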

Also, let’s not confuse the wait event log file sync with log file parallel write. Log file sync is experienced by foreground processes waiting on the log writer to complete a flush of the log buffer to storage. It’s tempting to assume high log file sync times are therefore a consequence of slow log writes, but as Kevin Closson points out in this must-read article, most log file sync waits are actually processing issues where the log writer is not getting enough CPU time.

2. SSD Write Performance Sucks

Huh? You thought I was pro-SSD right? Ok so I’m being a bit crafty, because the terms SSD and Flash are not really synonymous. SSD stands for Solid State Disk (or Device depending on who you ask), which generally means a set of flash chips crafted into the shape of a hard disk drive and plugged into a HDD-shaped hole somewhere via the use of a Flash Memory Controller. This interface takes page-based flash memory and makes it look like block-based storage – and each SSD in an array has its own controller.

There is a fundamental difference between an all-flash array and a set of SSDs masquerading as disks: an all-flash array can manage the flash holistically while the SSD-populated array cannot. This matters because flash is awkward to work with – for example, flash pages must be erased before they are written to – a process which is both slow and cumbersome, since other pages are locked (even from reads) during an erase.

The all-flash array is able to avoid the consequences of these restrictions by managing the flash globally, so that erases do not block reads and writes. In contrast, SSDs shoved into a disk array cannot communicate with each other to indicate when they are busy performing this garbage collection process, resulting in unpredictable performance and horrible spikes in latency as I/Os queue up behind the erase process.

3. Disk Is Good Enough


You didn’t expect me to say that, did you? Don’t get me wrong, disk is terrible at random I/O. Really, truly awful. But here’s the thing: the Oracle log writer performs large, sequential writes. And disk is ok with sequential I/O, particularly if you are using faster spindles like the 15k RPM drives.

Flushing the log buffer to storage involves writing some multiple of the redo log block size (512 byte default but configurable to 1024 or 4096 bytes from Oracle version 11.2). If your system is busy enough that you believe you have redo performance issues, it seems likely that those writes will be larger as more redo is created per log flush. The larger the write, the more efficient it will be on disk as the impact of the initial seek time is averaged out.
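To put some rough numbers behind that, here is a back-of-the-envelope model. The 5ms access time and 150MB/sec sequential transfer rate are assumptions for a 15k RPM drive, not measurements:

```python
# How seek + rotational latency gets amortised as redo writes grow larger.
# Access time and transfer rate are assumed values for a 15k RPM drive.

ACCESS_TIME_S = 0.005      # ~1ms seek + ~4ms rotational latency
TRANSFER_MB_S = 150.0      # sustained sequential transfer rate (assumed)

for write_kb in (4, 64, 512, 4096):
    write_mb  = write_kb / 1024
    total_s   = ACCESS_TIME_S + write_mb / TRANSFER_MB_S
    effective = write_mb / total_s
    print(f"{write_kb:>5} KB write: effective throughput {effective:6.1f} MB/sec")
```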

But hey, don’t take my word for it. Trust the evidence – and it turns out there is a wealth of data out there for anyone to analyse… right here: http://www.tpc.org/

The thing about TPC-C benchmarks is that they generate redo logs like you wouldn’t believe. So if anyone needs the ultimate redo performance it’s a system like this one, which set a world record back in September 2012 (which Oracle crowed about in its usual classy way by using it to bash IBM). The great thing about TPC results is that they come with a complete full disclosure report so you can see just how the vendors did it. And in the full disclosure report for this submission, where was the redo located? On a RAID set consisting of 600GB 15K RPM disk drives (see page 21). If disk is fast enough for a world record, it’s fast enough for you.

Incidentally, the datafiles in that benchmark were located on 2x Violin Memory 6616 arrays – which also tells you something important: if you are migrating from disk to flash, the first thing you need to move is the primary data, not the redo.

The Counter-Argument: Flash is Not SSD

Now I don’t want to wrap this article up giving you the impression that you shouldn’t move your redo logs to flash memory, so I’ll leave you with some counter arguments to the above. When I build a database, I always put the redo logs on flash (not on SSD mind, but on a flash memory array). Here’s why:

1. Violin Isn’t Limited By Writes

I know, I know … that sounds like a sales pitch. I usually try to talk about flash in general, which is why I originally wrote “All-flash arrays aren’t limited by writes”, but the truth is I don’t know other all-flash arrays to the extent that I know Violin… so forgive me for sticking with what I know.

I’ll explain Violin’s methods for guaranteeing sustained ultra-low write latency some other day. For now, let’s just see the evidence:

Load Profile              Per Second    Per Transaction
~~~~~~~~~~~~         ---------------    ---------------
      DB Time(s):              197.6                2.8
       DB CPU(s):               18.8                0.3
       Redo size:    1,477,126,876.3       20,568,059.6
   Logical reads:          896,951.0           12,489.5
   Block changes:          672,039.3            9,357.7
  Physical reads:           15,529.0              216.2
 Physical writes:          166,099.8            2,312.8

That’s over 1.4GB/sec of sustained redo generation from a 5 minute snapshot (see this post for details) using just a single Violin Memory 6616 array connected over 8Gb fibre channel. The AWR snapshot was 5 minutes long but the workload had been running for an extended period prior to the capture. Don’t leave here with the illusion that redo on flash memory isn’t blindingly fast.

2. Your First Design Goal Should Be Simplicity

There is a quote often attributed to Albert Einstein which says, “Everything should be made as simple as possible, but no simpler”. This applies perfectly to system design – and is one reason why I always recommend an all-flash database design over a flash and disk hybrid. Yes it’s possible to put some datafiles here and others over there, redo logs on disk and primary data on flash, etc. But the simplest design is to put everything on high performance, low latency flash. Is it the cheapest solution? Maybe not always on list price, but it probably will be based on TCO.

Conclusion

Look, if you want to put your redo logs on flash, I’m not going to argue. I’m not saying that it’s a bad thing.

What is a bad thing though is the practice of taking a disk-based database and sticking some SSDs in to house the redo logs. That’s just silly. The first part of the database you should move to flash is the primary data. If it makes sense to relocate the whole database (which it almost always does, because that disk array doesn’t belong in your data centre anymore – it belongs in a museum) then go for it. Just don’t compromise on having only the redo logs on flash or SSD, because then you have essentially built yourself an anti-TPC-C benchmarking system! And what’s the opposite of a system that goes really fast…?

Storage Myths: IOPS Matter


Storage for DBAs: Having now spent over a year in the storage industry, I’ve decided it’s time to call out an industry-wide obsession that I previously wasn’t aware of: everyone in storage is obsessed with IOPS (the performance metric I/O Operations Per Second). Take a minute to perform a web search for “flash iops” and you’ll see countless headlines from vendors that have broken new IOPS records – and yes, these days my own employer is often one of them. You’d be forgiven for thinking that, in storage, IOPS was the most important thing ever.

I’m here to tell you that it isn’t. At least, not if databases are your game.

Fundamental Characteristics of Storage

In a previous article I described the three fundamental characteristics of storage: latency, IOPS and bandwidth (or throughput). I even drew a simple, boxy diagram which, despite being one of the least-inspiring pieces of artwork ever created, serves me well enough to warrant its inclusion again here. These three properties are related – when one changes, the others change. With that in mind, here’s lesson #1:

High numbers of IOPS are useless unless they are delivered at low latency.

It’s all very well saying you can supply 1 million, 2 million, 4 million IOPS but if the latency sucks it’s not going to be of much value in the real world. Flash is great for delivering higher numbers of IOPS than disk, particularly for random I/Os (as I’ve written about previously), but ultimately the delay introduced by high latency is going to make real-world workloads unusable.

And there’s another, oft-hidden, problem that many flash vendors face: unpredictable latency. This is particularly the case during write-heavy workloads where garbage collection cannot always keep up with load, resulting in the infamous “write cliff” (more technically described as bandwidth degradation – see figure 7 of this paper). Maybe we should revise that previous line to be lesson #1.5:

High numbers of IOPS are useless unless they are delivered at predictable low latency.

But what about when we deal with volumes of data? If your requirement is to process vast amounts of information, do IOPS then become more important? Not really, because this is a bandwidth challenge – you need to design and build a system to suit your bandwidth requirements. How many GB/sec do you need? What can the storage subsystem deliver and how fast can you process it? Unlike bandwidth, an IOPS measurement does not contain the critical component of block size, so information is missing. And if you have the bandwidth figures, there is little additional value in knowing the IOPS, is there? Cue lesson #2:

Bandwidth figures are more useful for describing data volumes than IOPS
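A trivial sketch of why: the same IOPS figure can describe wildly different volumes of data depending on the block size.

```python
# The same IOPS number describes very different data volumes,
# which is why an IOPS figure on its own tells you so little.

iops = 100_000
for block_size_kb in (4, 8, 64, 1024):
    throughput_mb = iops * block_size_kb / 1024
    print(f"{iops:,} IOPS at {block_size_kb:>4}KB = {throughput_mb:>9,.0f} MB/sec")
```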

So what good are IOPS figures? And why does the storage industry talk about them all the time? Personally I think it’s a hang-up from the days of disk, when IOPS were such a limiting factor… and partly a marketing thing, because multi-million results sound impressive. Who knows? I’m more interested in what we should be asking about than what we shouldn’t.

So what does matter?

Latency Is King

Forget everything else. Latency is the critical factor because this is what injects delay into your system. Latency means lost time; time that could have been spent busily producing results, but is instead spent waiting for I/O resources.

Forget IOPS. The whole point of a flash array is that IOPS effectively become an unlimited resource. Sure, there is always a real limit – but it’s so high that it’s no longer necessary to worry about it.

Bandwidth still matters, particularly when you are doing something which requires volume, such as analytics or data warehousing. But bandwidth is a question of plumbing – designing a solution with enough capability to deliver throughout the stack: storage, fabric, HBAs, network, processor… build it and the data will come.

Latency is the “application stealth tax”, extending the elapsed times of individual tasks and processes until everything takes slightly longer. Add up all those delays and you have a significant problem. This is why, when you consider buying flash, you need to test the latency – and not just at the storage level, but end-to-end via the application (I’ll talk about this more in a following post).

“But I Don’t Need That Many IOPS…”

This is a classic misunderstanding, often the result of confusion brought about by FUD from storage vendors who cannot deliver at the higher end of the market. To repeat my previous statement, with a good flash system IOPS will effectively become an unlimited resource. This does not mean that it’s overkill for your needs. There is no point in spending more money on a solution than is necessary, but IOPS is not the indicator you should use to determine this – decisions like that should rely entirely on business requirements. I have yet to see a business requirement that related to IOPS (emphasis on business rather than technical).

Business requirements tend to be along the lines of needing to supply trading reports faster, or reduce the time spent by call centre operatives waiting for their CRM screens to refresh. These almost always translate back into latency requirements. After all, the key to solving any performance issue is always to follow the time and find out where it is being spent. Have you noticed that latency is the only one of our three fundamental characteristics which is expressed solely in units of time?

Don’t get distracted by IOPS… it’s all about latency.

SLOB2: Testing The Effect Of Oracle Blocksize

I recently posted a test harness for generating physical I/O using the new version of SLOB (the Silly Little Oracle Benchmark) known as SLOBv2. This test harness can be used for driving varying workloads and then processing the results for use in … well, wherever really. Some friends of mine are getting very adept with R recently, but I have yet to board that train, so I’m still plugging my data into Excel. Here’s an example.

We know that Oracle allows varying database block sizes with the parameter DB_BLOCK_SIZE, which typically has values of 4k, 8k (the default), 16k or 32k. Do you ever change this value? In my experience the vast majority of customers use 8k and a small number of data warehouse users choose 32k. I almost never see 16k and absolutely never see 4k or lower. Yet the choice of value can have a big effect on performance…

Simple Test with 8k Block Size

In the storage world we like to talk about IOPS and throughput as well as latency (see this article for a description of these terms). IOPS and throughput are related: multiply the number of IOPS by the block size and you get the throughput as a result.

So let’s see what happens if we run SLOB PIO tests using the test harness for the default 8k block size, testing workloads with 0% update DML, 10%, 20% and 30%:

[Figure: SLOB2 initial testing – latency versus IOPS]

You can see four loops or “petals”, with each loop starting at the bottom left and working out towards the upper right, moving anti-clockwise before coming back down. This is expected behaviour, because each line tracks an increasing number of SLOB processes from 1 to 64. As the number of processes increases, so does the number of IOPS – and as a result there is a small increase in the latency. At some point near the high end of 64 the server CPU gets saturated, causing the number of IOPS to drop and therefore resulting in the latency coming down again – this is because the compute resource is exhausted while the storage has plenty left to spare (the server has 2 sockets each with an E5-2470 8-core processor, but crucially I am pinning* Oracle to one core with CPU_COUNT=1). To put that even more simply, there is not enough CPU available to drive the storage any harder (and the storage can go a LOT faster – after all, it’s the same storage used during this). [* This is really bad wording from me – see the comments section below]

From this graph we can deduce the optimum number of SLOB processes needed to drive the maximum possible I/O through Oracle for each workload: the point where the line bends back on itself marks this spot. We can also see, in graphical form, evidence of what we already know: read-only workloads can drive much higher amounts of IOPS at a lower latency than mixed workloads.

Multiple Tests with Varying Block Sizes

Now let’s take that relatively simple graph and cover it in cluttered “petal-shaped” lines like the ones above, but with a set for each of the following block sizes: 4k, 8k, 16k and 32k.

[Figure: SLOB blocksize tests – latency versus IOPS]

Ok so it’s not easy on the eyes – but look closely because there’s a story in there. In this graph, each block size has the same four-petal pattern as above, with the colour used to denote the block size. The 32k block size, for example, is in purple – and quite clearly exhibits the highest latencies at its peaks. The blue 4k blocksize line, on the other hand, has very low latencies and extends the furthest to the right – indicating that 4k would be the better choice if you were aiming to drive as many IOPS as possible.

So 4k has lower latency and more IOPS… must be the way to go, right?

Two Sides To Every Story

What happens if we stop thinking about IOPS and start thinking about throughput? By multiplying the IOPS by the block size we can draw up the same graph but with throughput on the horizontal axis instead:

[Figure: SLOB blocksize tests – latency versus throughput]

Well now. The blue 4k line may indeed have the lowest latency figures but if throughput is important it’s nowhere on this scale. The purple 32k line, on the other hand, is able to drive over 3,500MB/sec of throughput at its peak (and still stay around the 300 microsecond latency mark). Maybe 32k is the way to go then?
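For anyone wanting to reproduce the conversion behind this second graph, it is just the IOPS-times-block-size relationship again. For example, here is roughly how many IOPS each block size needs in order to reach that ~3,500MB/sec peak:

```python
# Converting between IOPS and throughput for the SLOB results is just
# throughput = IOPS x block size.

def throughput_mb_per_sec(iops: float, block_size_bytes: int) -> float:
    return iops * block_size_bytes / (1024 * 1024)

target_mb_per_sec = 3500   # the approximate 32k peak in the graph above
for block_size in (4096, 8192, 16384, 32768):
    iops_needed = target_mb_per_sec / throughput_mb_per_sec(1, block_size)
    print(f"{block_size // 1024:>2}k blocks: ~{iops_needed:>9,.0f} IOPS for {target_mb_per_sec:,} MB/sec")
```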

Conclusion

As always, the truth lies somewhere in between. In the case of SLOB the workload is extremely random, meaning that each update is probably only affecting one row per block. It therefore makes no sense to have large 32k blocks as this is just an overhead – the throughput may be high, but the majority of the data being read is waste. Your real life workload, on the other hand, is likely to be more diverse and unpredictable. SLOB is a brilliant tool for using Oracle itself to generate load, but not intended as a substitute for proper testing. What it is great for though is learning, so use the test harness (or write your own) and get testing.

Also, don’t overlook the impact of DB_BLOCK_SIZE when building your databases – as you can see above it has a potentially dramatic effect on I/O.

The Most Important Thing You Need To Know About Flash


Storage for DBAs: There are many things you need to know about flash: its performance, its behaviour, its durability and so on. But there’s one single piece of information which tells you more than anything else, because it gives you an insight into the future of not just flash memory, but the primary data storage industry. Let me explain, but first let me contrast it against something we are all more familiar with: disk drives.

Disk Drive Market History

Almost all disk drives are made by three manufacturers: Western Digital, Seagate and Toshiba. There used to be a lot more than that, but all the others either went out of business or got acquired. These are tough times for disk drive manufacturers, with sales expected to take a double-digit dive in 2013. It was not always this way though; for decades the thirst for HDDs was unquenchable, with large volumes of them being required in desktop PCs (remember them?) as well as enterprise disk arrays. Fast forward to the present day and desktop PCs have been ambushed by tablets and SSDs, while flash is now similarly disrupting the data centre.

For a minute though, let’s remember the golden era of disk. If we rewind around eight years ago, the disk industry was thriving. To quote a TrendFocus report published in Businesswire (emphasis added by me):

The industry’s 25% unit growth in 2005 was based on solid fundamentals in core markets like PCs and servers… Booming notebook PC sales caused a surge in 2.5″ HDD shipments to 77 million, a 45% growth. Enterprise HDD shipments grew 11% to over 26 million units. HDD industry revenue was $28 billion, an increase of 18% from 2004.

An 18% annual increase… that’s impressive! As the quote states, this growth was built on the “solid fundamentals” of PCs and Servers – nobody foresaw the end of the PC disk market, because a new and exciting range of “Notebook PCs” seemed ready to drive demand even higher. And on top of that, two new market segments were growing rapidly: Consumer Electronics and Mobile. The graph on the left was created in 2005 and showed HDD market predictions going forward until 2010. The yellow and light blue colours indicate CE and Mobile respectively – you can see that there was a lot of optimism for the future – while the reddish colour indicates the huge returns from desktop PCs (while the Enterprise segment is merely a small purple brick at the bottom of each column). But somewhere at the bottom of a 2005 Q4 report published later that year were the first indications of a changing tide:

[Figure: HDD market by application]

Shipments accounted for slightly less than the 20 percent forecast, and the drop-off is attributed to Apple’s decision to move away from 1-inch-based hard drives for its iPod mini business.

What did Apple move to? Take a guess. Very slowly, but very surely, the potential CE and Mobile markets evaporated. At the same time, the Desktop PC business died a lingering death, leaving enterprise storage as the mainstay for hard drives.

There’s one thing that remained constant through this period though: the enterprise HDD market. Enterprise Storage vendors like EMC and NetApp bought up massive volumes of disk drives to put in enterprise class arrays for their customers – and these massive volumes meant two critical outcomes:

  1. The larger Enterprise Storage vendors bought at enormous discounts
  2. HDD vendors selling to Enterprise Storage focussed their R&D efforts on improving the characteristics that these customers desired

The characteristics required for enterprise storage were: density and performance. It’s a simple case of supply and demand. As with all business, the demand shapes the supply.

NAND Flash Market Forces

One thing that flash has in common with disk is the relatively small number of manufacturers. Note that I’m not talking about companies like Violin here, I’m talking about the flash chip manufacturers who own the fabs. In 2012 the NAND flash market consisted of Samsung (38%), Toshiba (28%) – the inventor of flash, Micron (14%) and Hynix (12%). But who was the largest buyer worldwide? Was it EMC? Netapp? IBM, HP or Dell?

[Figure: 2013 NAND market by application]

In 2011, Apple became the largest worldwide consumer of NAND flash. And as I’m sure you can guess, the reason for this was the iPhone. Today, according to IC Insights, the majority of NAND flash (59%) is used in smartphones, tablets and portable devices, with another 17% used in USB keys and cameras. If you look at the pie chart on the right, that little red portion marked SSD (just 13%) comprises all the flash used in both enterprise storage and consumer solid state drives (e.g. the ones you might get in an ultrabook).

And that trend is only going one way. By the end of 2013 it is forecast that there will be nearly 1.5 billion smartphones in the world – one smartphone for every five people. Meanwhile, tablets are not only one of the fastest-growing market segments but also one of the fastest-growing consumer devices of all time.

What does this mean? It means that NAND flash development is driven by the consumer market, by smartphones and portable devices. In enterprise storage, when we talk about flash we always talk about performance and endurance – but the consumer market isn’t interested in either of these. The consumer market is interested in density, i.e. how much data you can fit on a chip, as well as power consumption and cost. If a NAND flash manufacturer could produce larger flash chips at the cost of 20% slower performance, for example, this would be considered a great result. There’s a fundamental difference in requirements between the consumer and enterprise markets: only the enterprise cares about performance.

The Balance Of Power

In the heyday of disk, the enterprise storage industry had serious influence on what came out of the factories. But with NAND flash, the power of the enterprise storage industry to influence the direction of development is clearly far weaker. Sure there are relationships between flash storage vendors and the manufacturers – in fact, one of the strongest is between Violin and Toshiba – but market forces dictate that NAND flash development will be mostly influenced by the consumer market: the phone in your pocket and the tablet on your desk. (Don’t be confused by claims of “enterprise-class” NAND flash either – the key is to follow where the billions of dollars of R&D money are going, i.e. the consumer market. Enterprise-class flash is merely the least-consumer-like consumer flash…)

What does this mean for enterprise storage? It’s simple – it means that each enterprise vendor will have to take “consumer” flash and come up with innovative ways to make it perform like an enterprise product. Each flash vendor needs to do this to deliver the performance you need. Anybody can take a bunch of flash cards or SSDs and put them in a box, but that’s not innovation. The flash vendors who survive the great flash market consolidation will be the ones with intellectual property and patents around making consumer NAND flash perform for the enterprise.

Understand this and you will know the most important thing to ask a potential flash vendor about their product is not “How fast is it?”, or “How long does it last?”, but “Where’s your innovation?”. After all, if your vendor isn’t adding anything to the equation, you might as well be doing it yourself…

Does My Database Need Flash?

[Image: flash – and the disk is gone]

Storage for DBAs: Here’s a question I get asked a lot: “Does my database need flash?”. In fact it’s the most common question customers have, followed by the alternative version, “Does my database need SSD?”. In fact, often customers already have some SSDs in their disk arrays but still see poor performance, so really I ought to wind it back a level and call this article, “Does my database need low latency storage?”. This would in fact be a much better headline from a technical perspective, but until I change the name of this site to LowLatencyDBA I’m sticking with the current title.

Flash is no longer a cutting edge new technology, it’s a mainstream product sold by almost every storage vendor. This means that you or your organisation will probably already have some flash sales person beating down your door to flog you some sort of flash product, whether it’s an all-flash array, a hybrid flash/disk system or a set of PCIe flash cards. While these products are diverse in nature, they all share two main characteristics: low latency and large numbers of IOPS. But how do you know whether you really need them?

In a later post I’ll be running through the questions which I think need to be asked in order to whittle down the massive list of flash vendors to the select few capable of servicing your needs. This, of course, will be difficult to achieve without being biased towards my own employer – but that’s a problem for another day. For now, here’s the first (and potentially most important) step: working out whether you actually need low latency flash storage in the first place.

Who Needs Flash?

For the world of databases, there are three main reasons why you might want to switch to low latency flash:

Acceleration – perhaps the most obvious reason is to go faster. There are many reasons why people desire better performance, but they generally boil down to one of two scenarios: Not Good Enough Now and Not Good Enough For The Future. In the former, bad performance is holding back an application, denying potential revenue or incurring penalties in some way (either SLA-based financial penalties or simply the loss of customers due to poor service levels). In the latter, existing infrastructure is incapable of allowing increased agility, i.e. the ability to do more (offering new services for example, or adding more concurrent users).

Consolidation – always on the mind of CIOs and CTOs is the benefit of consolidating database and server estates. Consolidation brings agility and risk benefits as well as the new and important benefit of cost savings. By consolidating (and standardising) multiple databases onto a smaller pool of servers, organisations save money on hardware, on maintenance and administration, and on the holy grail of all cost savings: software license fees. If you think that sounds like an exaggeration, take a look at this article on Wikibon which demonstrates that Oracle license costs account for 82% of the total cost of a traditional database deployment. Consolidation allows for reduced CPU cores, which means a reduction in the number of licenses, but it also increases I/O as workloads are “stacked” on the same infrastructure. The Wikibon article argues that by moving to flash storage and consolidating, the total cost drops significantly – by around 26% in fact.

Virtualisation – an increasingly prevalent option in the database world. The use of server virtualisation technologies is allowing organisations to move to cloud architectures, where environments are automatically provisioned, managed and migrated across hardware. Virtualisation brings massive agility benefits but also carries a risk because, just like with consolidation, I/O workloads accumulate on the same infrastructure. Unlike consolidation though, virtualisation adds an extra layer of latency, making the I/O even more of a potential bottleneck. Flash systems now make this option practical, as hypervisor vendors begin to realise the potential of flash memory.

There is actually a fourth reason, which is Infrastructure Optimisation. If you have data centres stuffed with disk arrays there is every chance that they can be replaced by a small number of flash arrays, thus reducing power, cooling and real estate requirements and saving large amounts of money. But as this article is primarily targeted at databases I thought I’d leave that one out for now. Consider it the icing on the cake… but don’t forget it, because sometimes it turns out that there’s a lot of icing.

So now we know the reasons why, let’s have a look at which sorts of systems are suitable for flash and which aren’t, starting with the Performance requirement…

Databases Love Flash If…

  • They create lots of I/O! I know, it sounds obvious, but more than once I’ve seen customers with CPU-bound applications that generate hardly any I/O. Flash is a fantastic technology, but it’s not magic.
  • There is lots of random I/O. Now don’t take that the wrong way – sequential I/O is good too. But if you currently have a random I/O workload running on a disk system you will see the most dramatic benefit after switching that to flash. Here’s why.
  • High amounts of parallelism. The simple fact is that a single process cannot drive anywhere near the amount of I/O that a good flash system can support. If you think of flash as being like a highway, not only is it fast, it’s also wide. Use all the lanes.
  • Large IOWAIT times. If you are using an operating system that has a concept of IOWAIT (Linux and most versions of UNIX do, Windows doesn’t) then this can be a great indicator that processes are stuck waiting on I/O. It’s not perfect though, because IOWAIT is actually an idle wait (within the operating system, this is nothing to do with Oracle wait events) so if the system is really busy it may not be present.

Those are all great indicators, but the next two should be considered the golden rules:

  • I/O wait times are high. Essentially we are looking for high latency from the existing storage system. Flash memory systems should deliver I/O with sub-millisecond latency, so if you see an average latency of 8ms on random reads (db file sequential read), for example, you know there is potential for reducing latency to an eighth of its previous average value.
  • I/O forms a significant percentage of Database Time. If I/O is only responsible for 5% of database time, no amount of lightning-fast flash is going to give you a big performance boost… your problems are elsewhere. On the other hand, if I/O is comprising a large portion of database time, you have lots of room for improvement – there’s a rough sketch of this trade-off just below. (I plan to post a guide to reading AWR Reports pretty soon)
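Here is a rough sketch of that second golden rule – a crude Amdahl-style model in which only the I/O portion of database time speeds up. The percentages and latencies are examples, not measurements:

```python
# Crude estimate of the headroom flash gives you: only the I/O portion of
# database time gets faster.  All figures below are examples.

def potential_speedup(io_fraction_of_db_time: float,
                      current_latency_ms: float,
                      flash_latency_ms: float) -> float:
    """Non-I/O time is unchanged; I/O time shrinks with the latency ratio."""
    io_scaling = flash_latency_ms / current_latency_ms
    new_time = (1 - io_fraction_of_db_time) + io_fraction_of_db_time * io_scaling
    return 1 / new_time

# I/O is only 5% of DB time: flash barely moves the needle
print(f"{potential_speedup(0.05, 8.0, 1.0):.2f}x")   # ~1.05x
# I/O is 70% of DB time and 8ms reads become ~1ms: a big win
print(f"{potential_speedup(0.70, 8.0, 1.0):.2f}x")   # ~2.6x
```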

If any of this is ticking boxes for you, it’s time to consider what flash could do for the performance of your database. On the other hand…

Performance Won’t Improve If…

  • There isn’t any I/O. Any flash vendor in the industry would be happy to sell you their products in this situation – and let’s face it you’ll get great latency! – but be realistic. If you don’t generate I/O, what’s the point? Unless of course you aren’t after performance. If consolidation, virtualisation or infrastructure optimisation is your aim, there could be a benefit. Also, consider the size of your memory components – if your database produces no physical I/O, could you consider reducing the size of the buffer cache? One of the big benefits of flash to consolidation is the ability to reduce SGA sizes and thus fit more databases onto the same DRAM-restricted server.
  • Single threaded workloads. Sure your application will run slightly faster, but will that speed-up be enough to justify the change of infrastructure? I’m not ruling this out – I have customers with single-threaded ETL jobs that bought flash because it was easier (and cheaper) than rewriting legacy code, but the impact of low-latency storage may well be reduced.
  • Application serialisation points. A session waiting on a lock will not wait any faster! Basically, if your application regularly ties itself in a knot with locks and contention issues, putting it on flash may well just increase the speed at which you hit those problems. Sometimes people use flash to overcome bad programming, but it’s by no means guaranteed to work.
  • CPU-bound systems. CPU starvation is a CPU problem, not an I/O problem. If anything, moving to low-latency storage will reduce the amount of time CPUs spend waiting on I/O and thus increase the amount of time they spend working, i.e. in a busy state. If your CPU is close to the limit and you remove the ballast that is a disk system, you might find that you hit the limit very quickly.

If you are unfortunate enough to be struggling with a badly-performing application that fits into one of these areas, flash probably isn’t the magic bullet you’re looking for.

Consolidation and Virtualisation

This is a different area where it’s no longer valid to only look at individual databases and their workloads. The key factor for both of these areas is density, i.e. the number of databases or virtual machines that can fit on a single physical server. The main challenges here are memory usage and I/O generation: database SGAs tend to be large, but flash allows for the possibility of reducing the buffer cache; while I/O generation is a problem in the disk world because consolidated workloads tend to create more random I/O. Of course, with flash that’s not really a problem. I’ve written a number of articles on consolidation and virtualisation in the past – I’m sure I’ll be writing more about them in the future too.

Summary

I work for a flash vendor – we want you to buy our products. We have competitors who want you to buy their products instead. If everyone in the industry is telling you to buy flash, how do you know if it’s relevant to you? Here’s my advice: make them speak your language and then check their claims against what you can see yourself.

Take some time to understand your workload. Look at the amount of I/O generated and the latency experienced; look at how random the workload is and the ratio of reads to writes (I’ll post a guide for this soon). Ask your (potential) flash vendor how much benefit you will see over your existing storage and then get them to explain why. If you’re a database person, make them speak in your language – don’t accept someone talking in the language of storage. Likewise if you’re an application person make them explain the benefits from an application perspective. You’re the customer, after all.

If your flash vendor can’t communicate with you in your language to explain the benefit you will see, there’s only one course of action: Get rid of them in a flash.

Footnote

Incidentally, if you live outside the UK and you’re wondering about the picture at the top of this article, check out this. If you live inside the UK you will know it’s a Cillit Bang reference… unless you live in a cave and shun the outside world – in which case, how are you reading this?

Understanding I/O: Random vs Sequential

Storage for DBAs: Ever been to one of those sushi restaurants where the food comes round in dishes on a conveyor belt? As each dish travels around the loop you eye it up and, as long as you can make your mind up in time, grab it. However, if you are as indecisive as me, there’s a chance it will be out of range before you come to your senses – in which case you have to wait for it to complete a further full revolution before getting another chance. And that’s assuming someone else doesn’t get to it first.

Let’s assume that it takes a dish exactly 4 minutes to complete a whole lap of the conveyor belt. And just for simplicity’s sake let’s also assume that no two dishes on the belt are identical. As a hungry diner you look in the little menu and see a particular dish which you decide you want. It’s somewhere on the belt, so how long will it take to arrive?

Probability dictates that it could be anywhere on the belt. It could be passing by right now, requiring no wait time – or it could have just passed out of reach, thus requiring 4 minutes of wait time to go all the way round again. As you follow this random method (choose from the menu then look at the belt) it makes sense that the average wait time will tend towards half way between the min and max wait times, i.e. 2 minutes in this case. So every time you pick a dish you wait an average of 2 minutes: if you have eight dishes the odds say that you will spend (8 x 2) = 16 minutes waiting for your food. Welcome to the disk data diet, I hope you weren’t too hungry?

Now let’s consider an alternative option, where you order eight dishes from the chef and he or she places all of them sequentially (i.e. next to each other) somewhere on the conveyor belt. That location is random, so again you might have to wait anywhere between 0 and 4 minutes (an average of 2 minutes) for the first dish to pass… but the next seven will follow one after the other with no wait time. So now, in this scenario, you only had to wait 2 minutes for all eight dishes. Much better.

I’m sure you will have seen through my analogy right from the start. The conveyor belt is a hard disk and the sushi dishes are blocks which are being eaten / read. I haven’t yet worked out how to factor a bottle of Asahi Super Dry into this story, but I’ll have one all the same thanks.

Random versus Sequential I/O

I have another article planned for later in this series which describes the inescapable mechanics of disk. For now though, I’ll outline the basics: every time you need to access a block on a disk drive, the disk actuator arm has to move the head to the correct track (the seek time), then the disk platter has to rotate to locate the correct sector (the rotational latency). This mechanical action takes time, just like the sushi travelling around the conveyor belt.

Obviously the amount of time depends on where the head was previously located and how fortunate you are with the location of the sector on the platter: if it’s directly under the head you do not need to wait, but if it just passed the head you have to wait for a complete revolution. Even on the fastest 15k RPM disk that takes 4 milliseconds (15,000 rotations per minute = 250 rotations per second, which means one rotation is 1/250th of a second or 4ms). Admittedly that’s faster than the sushi in my earlier analogy, but the chances are you will need to read or write a far larger number of blocks than I can eat sushi dishes (and trust me, on a good day I can pack a fair few away).
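The 4ms figure comes straight from the spindle speed, and the calculation is worth keeping to hand:

```python
# One full platter revolution, straight from the spindle speed:
# 60,000 milliseconds per minute divided by rotations per minute.

for rpm in (7_200, 10_000, 15_000):
    full_rotation_ms = 60_000 / rpm
    print(f"{rpm:>6} RPM: one full rotation = {full_rotation_ms:.2f} ms")
```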

What about the next block? Well, if that next block is somewhere else on the disk, you will need to incur the same penalties of seek time and rotational latency. We call this type of operation a random I/O. But if the next block happened to be located directly after the previous one on the same track, the disk head would encounter it immediately afterwards, incurring no wait time (i.e. no latency). This, of course, is a sequential I/O.

Size Matters

In my last post I described the Fundamental Characteristics of Storage: Latency, IOPS and Bandwidth (or Throughput). As a reminder, IOPS stands for I/Os Per Second and indicates the number of distinct Input/Output operations (i.e. reads or writes) that can take place within one second. You might use an IOPS figure to describe the amount of I/O created by a database, or you might use it when defining the maximum performance of a storage system. One is a real-world value and the other a theoretical maximum, but they both use the term IOPS.

When describing volumes of data, things are slightly different. Bandwidth is usually used to describe the maximum theoretical limit of data transfer, while throughput is used to describe a real-world measurement. You might say that the bandwidth is the maximum possible throughput. Bandwidth and throughput figures are usually given in units of size over units of time, e.g. Mb/sec or GB/sec. It pays to look carefully at whether the unit is using bits (b) or bytes (B), otherwise you are likely to end up looking a bit silly (sadly, I speak from experience).

In the previous post we stated that IOPS and throughput were related by the following relationship:

Throughput   =   IOPS   x   I/O size

It’s time to start thinking about that I/O size now. If we read or write a single random block in one second then the number of IOPS is 1 and the I/O size is also 1 (I’m using a unit of “blocks” to keep things simple). The Throughput can therefore be calculated as (1 x 1) = 1 block / second.

Alternatively, if we wanted to read or write eight contiguous blocks from disk as a sequential operation then this again would only result in the number of IOPS being 1, but this time the I/O size is 8. The throughput is therefore calculated as (1 x 8) = 8 blocks / second.

Hopefully you can see from this example the great benefit of sequential I/O on disk systems: it allows increased throughput. Every time you increase the I/O size you get a corresponding increase in throughput, while the IOPS figure remains resolutely fixed. But what happens if you increase the number of IOPS?

Latency Kills Disk Performance

In the example above I described a single-threaded process reading or writing a single random block on a disk. That I/O results in a certain amount of latency, as described earlier on (the seek time and rotational latency). We know that the average rotational latency of a 15k RPM disk is 4ms, so let’s add another millisecond for the disk head seek time and call the average I/O latency 5ms. How many (single-threaded) random IOPS can we perform if each operation incurs an average of 5ms wait? The answer is 1 second / 5 ms = 200 IOPS. Our process is hitting a physical limit of 200 IOPS on this disk.

What do you do if you need more IOPS? With a disk system you only really have one choice: add more disks. If each spindle can drive 200 IOPS and you require 80,000 IOPS then you need (80,000 / 200) = 400 spindles. Better clear some space in that data centre, eh?
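Here is the arithmetic from the last two paragraphs in one place:

```python
# Single-threaded random IOPS ceiling for one disk, and the number of
# spindles needed to hit a target -- the same sums as in the text.

avg_io_latency_s = 0.005                        # ~4ms rotational + ~1ms seek
iops_per_spindle = 1 / avg_io_latency_s         # 200 IOPS

target_iops      = 80_000
spindles_needed  = target_iops / iops_per_spindle   # 400 spindles

print(f"IOPS per spindle            : {iops_per_spindle:.0f}")
print(f"Spindles for {target_iops:,} IOPS : {spindles_needed:.0f}")
```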

On the other hand, if you can perform the I/O sequentially you may be able to reduce the IOPS requirement and increase the throughput, allowing the disk system to deliver more data. I know of Oracle customers who spend large amounts of time and resources carving up and re-ordering their data in order to allow queries to perform sequential I/O. They figure that the penalty incurred from all of this preparation is worth it in the long run, as subsequent queries perform better. That’s no surprise when the alternative was to add an extra wing to the data centre to house another bunch of disk arrays, plus more power and cooling to run them. This sort of “no pain, no gain” mentality used to be commonplace because there really weren’t any other options. Until now.

Flash Offers Another Way

The idea of sequential I/O doesn’t exist with flash memory, because there is no physical concept of blocks being adjacent or contiguous. Logically, two blocks may have consecutive block addresses, but this has no bearing on where the actual information is electronically stored. You might therefore say that all flash I/O is random, but in truth the principles of random I/O versus sequential I/O are disk concepts so don’t really apply. And since the latency of flash is sub-millisecond, it should be possible to see that, even for a single-threaded process, a much larger number of IOPS is possible. When we start considering concurrent operations things get even more interesting… but that topic is for another day.

Back to the sushi analogy, there is no longer a conveyor belt – the chefs are standing right in front of you. When you order a dish, it is placed in front of you immediately. Order a number of dishes and you might want to enlist the help of a few friends to eat in parallel, because the food will start arriving faster than you can eat it on your own. This is the world of flash memory, where hunger for data can be satisfied and appetites can be fulfilled. Time to break that disk diet, eh?

Looking back at the disk model, all that sitting around waiting for the sushi conveyor belt just takes too long. Sure you can add more conveyor belts or try to get all of your sushi dishes arranged in a line, but at the end of the day the underlying problem remains: it’s disk. And now that there’s an alternative, disk just seems a bit too fishy to me…

Oracle VM 3.1.1 with Violin Memory storage

As part of the Installation Cookbook series I have now posted a new entry on how to install Oracle VM with Violin Memory flash storage:

Oracle VM 3.1.1 with Violin Memory storage

Database Workload Theory


In the scientific world, theoretical physicists postulate theories and ideas, for example the Higgs Boson. After this, experimental physicists design and implement experiments, such as the Large Hadron Collider, to prove or disprove these theories. In this post I’m going to try and do the same thing with databases, except on a smaller budget, with less glamour and zero chance of winning a Nobel prize. On the plus side though, my power bills will be a lot lower.

That last paragraph was really just a grandiose way of saying that I have an idea, but haven’t yet thought of a way to prove it. I’m open to suggestions, feedback and data which prove or disprove it… but for now let’s just look at the theory.

Visualising Database Server I/O Workload

If you look at a database server running a real life workload, you will generally see a pattern in the behaviour of the I/O. If you plot a graph of the two extremes of purely sequential I/O and purely random I/O most workloads will fit somewhere along this sliding scale:

[Figure: the random-to-sequential I/O scale]

Now of course workloads change all the time, so this is an approximation or average, but it makes sense. After all, we do this in the world of storage, because if the workload is highly random the storage requirements will be very different from those of a highly sequential workload.

What I am going to do now is plot a graph with this as the horizontal axis. The vertical axis will be an exponential representation of the storage footprint used by the database server, i.e. the amount of space used. I can then plot different database server workloads on the graph to see where they fall.

But first, two clarifications. I am at pains to say “database server” instead of “database” because in many environments there are multiple database instances generating I/O on the same server. What we are interested in here is how the storage system is being driven, not how each individual database is behaving. Remember this point and I’ll come back to it soon. The other clarification is regarding workload – because many systems have different windows where I/O patterns change. The classic (and very common) example is the OLTP database where users log off at the end of the day and then batch jobs are run. Let’s plot the OLTP and batch workloads as separate points on our graph.

Here’s what I expect to see:

[Figure: database I/O workload]

There are data points in various places but a correlation is visible which I’ve highlighted with the blue line. Unfortunately this line is nothing new or exciting, it’s just a graphical representation of the fact that large databases tend to perform lots of sequential I/O whereas small databases tend to perform lots of random I/O.

Why is that? Well, because in most cases large databases tend to be data warehouses, decision support systems, business intelligence or analytics systems… places where data is bulk loaded through ETL jobs and then scanned to create summary information or spot trends and patterns. Full table scans are the order of the day, hence sequential I/O. On the other hand, smaller databases with lots of random I/O tend to be OLTP-based, highly transactional systems running CRM, ERP or e-Commerce platforms, for example.

Still, it’s a start – and we can visualise this by dividing the graph up into quadrants and calling them zones, like this:

[Figure: the workload graph divided into DW and OLTP zones]

This is only an approximation, but it does help with visualising the type of I/O workload generated by database servers. However, there are two more quadrants looking conspicuously unlabelled, so let’s now turn our attention to them.

Database Consolidation I/O Workload

The bottom left quadrant is not very exciting, because small database systems which generate highly-sequential workloads are rare. I have worked on one or two, but none that I ever felt should actually have been designed to work that way. (One was an indexing system which got scrapped and replaced with Lucene, the other I am still not sure actually existed or if it was just a bad dream that I once had…)

The top right quadrant is much more interesting, because this is the world of database consolidation. I said I would come back to the idea that we are interested not in the workload of the database but of the database server.  The reason for this is that as more databases are run on the same server and storage infrastructure, the I/O will usually become increasingly random. If you think about multiple sets of disparate users working on completely different applications and databases, you realise that it quickly becomes impossible to predict any pattern in the behaviour of the I/O. We already know this from the world of VDI, where increasing the number of seats results in an increasingly random I/O requirement.
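A quick way to convince yourself of this is to simulate it. In the sketch below (plain Python, with made-up parameters), each “database” issues perfectly sequential block addresses within its own region of storage; the streams are then interleaved, roughly as a shared server and array would see them, and we measure how often consecutive requests are still contiguous.

```python
# Simulation: N databases each issue perfectly sequential reads in their
# own region, but the shared storage sees the interleaved stream. The more
# databases, the smaller the fraction of back-to-back contiguous requests.
import random

random.seed(0)

def sequentiality(num_databases, requests_per_db=10_000, region_size=1_000_000):
    # Each database reads blocks base, base+1, base+2, ... in order.
    streams = [list(range(i * region_size, i * region_size + requests_per_db))
               for i in range(num_databases)]
    merged, cursors = [], [0] * num_databases
    active = list(range(num_databases))
    while active:
        d = random.choice(active)              # whichever database gets scheduled
        merged.append(streams[d][cursors[d]])
        cursors[d] += 1
        if cursors[d] == requests_per_db:
            active.remove(d)
    contiguous = sum(1 for a, b in zip(merged, merged[1:]) if b == a + 1)
    return contiguous / (len(merged) - 1)

for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} database(s): {sequentiality(n):6.1%} of requests are contiguous")
```

With one database, everything the storage sees is contiguous; with sixteen, the combined stream is, to all intents and purposes, random.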

Workloads in the top right quadrant generate lots of random I/O and yet occupy large amounts of capacity. Let’s label it the consolidation zone on our graph:

[Figure: the consolidation zone added to the workload graph]

We now have a graphical representation of three broad areas of I/O workload. If we believe in the trend of database consolidation, as described by the likes of Gartner and IDC, then over time the dots in the DW and OLTP zones will migrate to the consolidation zone. I have already blogged my thoughts on the benefits of database consolidation, bringing with it increased agility and massive savings in operational costs (especially Oracle licenses) – and many of the customers I have been speaking to both at Violin and in my previous role are already on this journey, even if some are still in the planning stages. I therefore expect to see this quadrant become increasingly populated with workloads, particularly as flash storage technologies take away the barriers to entry.

I/O Workload Zone Requirements

The final step in this process is to look at the generic requirements of each of our three workload zones.

[Figure: requirements for each of the three workload zones]

The data warehouse zone is relatively straightforward, because what these systems need more than anything is bandwidth. Also known as throughput, this is the ability of the storage to pump large volumes of data in and out. There is competition here, because whilst flash memory systems can offer excellent throughput, so can disk systems. So can Exadata of course; it’s what it was designed for. Mind you, flash should enable a lower operational cost, but this isn’t a sales pitch so let’s move on to the next zone.
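Before we do, though, a trivial bit of arithmetic shows why bandwidth is the number that matters in this zone: scan time is simply data volume divided by sustained throughput. The data size and scan rates in this Python snippet are purely illustrative, not benchmarks of any product.

```python
# Time to scan a data set at various sustained throughput rates.
# All figures are illustrative.
scan_tb = 10                            # assumed size of the data being scanned
for gb_per_sec in (0.5, 2, 8):          # hypothetical sustained scan rates
    minutes = scan_tb * 1024 / gb_per_sec / 60
    print(f"{gb_per_sec:>4} GB/s  ->  {minutes:6.1f} minutes to scan {scan_tb} TB")
```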

The OLTP zone is all about latency. To run a highly-transactional system and get good performance and end-user experience, you need consistently low latency. This is where flash memory excels – and disk sucks. We all (hopefully) know why – disk simply cannot overcome the seek time and rotational latency inherent in its design.

The consolidation zone, however, is particularly interesting, because it has a subtly different set of requirements. For consolidation you need two things: the ability to offer sustained high levels of IOPS, plus predictable latency. Obviously when I say that I mean predictably low, because predictably high latency isn’t going to cut it (after all, that’s what disk systems deliver). If you are running multiple, disparate applications and databases on the same infrastructure (as is the case with consolidation) it is crucial that each does not affect the performance of the others. One system cannot be allowed to impact the rest if it misbehaves.
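To make “predictable latency” concrete: it is a property of the tail of the latency distribution, not the average. The little Python illustration below uses invented numbers for two hypothetical devices with similar mean latency, one of which spikes occasionally – exactly the behaviour a consolidated environment cannot tolerate.

```python
# A minimal illustration of why "predictable latency" is about the tail of
# the distribution, not the average. All numbers are invented.
import random
import statistics

random.seed(42)
N = 10_000

steady = [random.gauss(0.8, 0.05) for _ in range(N)]              # ~0.8 ms, consistent
spiky  = [random.uniform(10, 30) if random.random() < 0.02        # 2% of requests spike
          else random.gauss(0.4, 0.05) for _ in range(N)]         # the rest are fast

def summarise(name, samples):
    ordered = sorted(samples)
    p99 = ordered[int(len(ordered) * 0.99)]
    print(f"{name:7s} mean={statistics.mean(samples):5.2f} ms   "
          f"p99={p99:5.2f} ms   max={max(samples):5.2f} ms")

summarise("steady", steady)
summarise("spiky", spiky)
```

The averages look almost identical; the 99th percentile tells the real story.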

Now obviously disk isn’t in with a hope here – highly random I/O driving massive and sustained levels of IOPS is the worst nightmare for a disk system. For flash it’s a different story – but it’s not plain sailing. Not every flash vendor can truly sustain their performance levels or keep their latency spike-free. Additionally, not every flash vendor has the full set of enterprise features which allow their products to become a complete tier of storage in a consolidation environment.

As database consolidation increases – and in fact accelerates with the continued onset of virtualisation – these are going to be the requirements which truly differentiate the winners from the contenders in the flash market.

It’s going to be fun…

Disclaimer

These are my thoughts and ideas – I’m not claiming them as facts. The data here is not real – it is my attempt at visualising my opinions based on experience and interaction with customers. I’m quite happy to argue my points and concede them in the face of contrary evidence. Of course I’d prefer to substantiate them with proof, but until I (or someone else) can devise a way of doing that, this is all I have. Feel free to add your voice one way or the other… and yes, I am aware that I suck at graphics.

Flash Enables Human-Time Analytics

Analytics. Apparently it’s “the discovery and communication of meaningful patterns in data”. Allegedly it’s the “Next Holy Grail”. By definition it’s “the science of logical analysis”. But what is it really?

We know that it is considered a type of Business Intelligence. We know that when applied to massive volumes of information it is often described as a Big Data problem. And we know that companies and organisations are using it for tasks like credit risk analysis, fraud detection and complex event processing.

Above all else we know that Oracle, IBM, SAP etc are all banging on about analytics like it’s the most important thing in the world. Which must mean it is. Maybe. But what did you say it was again?

Here’s my view: analytics is the section of a feedback loop in which data is analysed and condensed. If you are using analytics the chances are you have some sort of data stream which you want to process. Analytics is a way of processing that (often large) data in order to produce (smaller) summary information which can then be used to make business decisions. The consequence of these business decisions is likely to affect the source data stream, at which point the whole loop begins again.

[Figure: the analytics feedback loop]

Something else that is often attributed to analytics is data visualisation, i.e. the representation of data in some visual form that offers a previously unattainable level of clarity and understanding. But don’t be confused – taking your ugly data and presenting it in a pretty picture or as some sort of dashboard isn’t analytics (no matter how real-time it is). You have to be using that output for some purpose – the feedback loop. The output from analytics allows you to change your behaviour in order to do … something. Maximise your revenue, increase your exposure to customers, identify opportunities or risks, anything…

Two Types of Analytics… and Now a Third

Until recently you could sort of consider two realms of analytics, based on the available infrastructure on which they could run:

Real-Time Analytics

The processing of data in real-time requires immense speed, particularly if the data volume is large. A related but important factor is the use of filtering. If you are attempting to glean new and useful information from massive amounts of raw data (let’s say for example the data produced by the Large Hadron Collider) you need to filter at every level in order to have a chance of being able to handle the dataset – there simply isn’t enough storage available to keep a historical account of what has happened.

And that’s the key thing to understand about real-time analytics: there is no history. You cannot afford to keep historical data going back months, days or even minutes. Real-time means processing what is happening now – and if you cannot get the answer instantly you are too late, because the opportunity to benefit from a change in your behaviour has gone.

So what storage media would you use for storing the data involved in real-time analytics? There is only one answer: DRAM. This is why products such as Oracle Exalytics and SAP HANA make use of large amounts of DRAM – but while this offers excellent speed, it suffers from other issues such as limited scalability and a lack of high availability options. Nevertheless, DRAM is the only way to go if you want to process your data in real time.

Batch Analytics

This is the other end of the field. In batch analytics we take (usually vast) quantities of data and load them into an analytics engine as a batch process. Once the data is in place we can run analytical processes to look for trends, anomalies and patterns. Since the data is at rest there are ample opportunities to change or tweak the analytical jobs to look for different things – after all, in a true analytical process the chances are you do not know what you are looking for until you’ve found it.

Clearly there is a historical element to this analysis now. Data may span timescales of days, months or years – and consequently the data volume is large. However, the speed of results is usually less important, with jobs taking anything from ten minutes to days.

What storage media would you use here then? Let’s be honest, the chances are you will use disk. Slow, archaic, spinning magnetic disks of rusty metal. Ugh. But I don’t blame you, SATA disks will inevitably be the most cost efficient means of storing the data if you don’t need your results quickly.

Human-Time Analytics

So with flash memory taking the data centre by storm, what does this new storage technology allow us to do in the world of analytics that was previously impossible? The answer, in a phrase I’m using courtesy of Jonathan Goldick, Violin Memory’s CTO of Software, is human-time analytics. Let me explain by giving one of Jonathan’s examples:

Imagine that you are walking into a shopping mall in the United States. Your NFC-enabled phone is emitting information about you which is picked up by sensors in the entrance to the mall. Further in there are a set of screens performing targeted advertising – and the task of the advertiser or mall-owner is to display a targeted ad on that screen within ten seconds of finding out that you are inbound.

The analytical software has no possible way of knowing that you are about to enter that mall. As such it cannot use any sort of pre-fetching to get your details – which means those ten seconds are going to have to suffice. How can your details be fetched, parsed and a decision made within just ten seconds?

DRAM – From a technical perspective, one solution is to have your details located in DRAM somewhere. But with over 300 million people living in the US that is going to require an enormous and financially impractical amount of DRAM (see the rough sizing sketch after this list). It just isn’t feasible.

DISK – A much less expensive option is clearly going to be the use of disk storage. However, even with the best (and most expensive) high-performance disk array, the target of finding and acting upon that data within ten seconds is just not going to be met.

FLASH – Here we have the perfect answer. Extremely fast, with sub-millisecond response times, flash memory allows data to be retrieved orders of magnitude faster than disk and yet at a cost far lower than DRAM (in fact the cost is now approaching that of disk).
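As a sanity check on the DRAM point above, here is the back-of-the-envelope calculation in Python. The population figure comes from the example; the per-person profile size is purely an assumption for illustration.

```python
# Rough sizing for the shopping-mall example: how much data would need to be
# instantly accessible if every potential visitor had a profile? The
# 1 MB-per-person figure is an assumption, not a fact.
population = 300_000_000           # roughly the US population in the example
profile_mb = 1                     # assumed profile/history size per person

total_tb = population * profile_mb / 1_000_000
print(f"Total working set: ~{total_tb:,.0f} TB")
# ~300 TB: far beyond a sensible amount of DRAM for one system,
# but a realistic capacity for a flash-based storage tier.
```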

Flash is a new way of thinking – and it allows for new opportunities which were previously unattainable. It’s always tempting to think about how much we could enhance existing solutions with flash, but the real magic lies in thinking about the new heights we can scale – heights we couldn’t reach before…