Exadata X3 – Sound The Trumpets

It’s crazy time in the world of Oracle, because Oracle OpenWorld 2012 is only a week away. Which means that between now and then the world of Oracle blogging and tweeting will gradually reach fever pitch speculating on the various announcements that will be issued, products that will be launched and outrageous claims that will be made. The hype machine that is the Oracle Marketing department will be in overdrive, whilst partners and competitors clamour to get a piece of the action too. Such is life.

There was supposed to be one disappointment this year, i.e. that the much-longed-for new version of Oracle Database (12c) would not be released… we knew this because Larry told us back in June that it wouldn’t be out until December or January. Mind you, he also told us that it wouldn’t be ported to Itanium, yet it appears that promise cannot be kept. And now it seems another of those claims back in June was incorrect, because yesterday we learnt (from Larry) that Oracle Database 12c would be released at OOW after all. How are we supposed to keep up with what’s accurate?

Also for OOW 2012 we have the prospect of new versions of the Oracle Exadata Database Machine, the Exadata X3, to replace the existing X2 models which have now been in service for two years. The new models (the X3-2 and the X3-8) don’t represent a huge change, more of an evolutionary step to keep up with current technology: Oracle partners have been told that the Westmere-based Xeon processors have been swapped for Sandy Bridge versions (see comments below), that the amount of RAM has increased, and that the flash cards are switching from the ancient F20 models to the F40 models, which have better performance characteristics as well as higher capacity (and my, don’t they look just like the LSI Nytro WarpDrive WLP4-200?).

One thing that doesn’t appear to be changing though is the disks in the storage servers, which remain the 12x 600GB high performance or 3TB high capacity spindles used in the X2-2 and X2-8. I’ve heard a lot of people suggest that Oracle might switch to using only SSDs in the storage servers, but I generally discount this idea because I am not sure it makes sense in the Exadata design. The Exadata Smart Flash Cache (i.e. the F20 / F40 cards) is there to try and handle the random I/O requests, as is the database buffer cache of course. The disks in an Exadata storage server are there to handle sequential I/O – and since all 12 of them can saturate the I/O controller there is no need to go increasing the available bandwidth with SSD… particularly if Oracle hasn’t got the technology to do SSD right (maybe they have, maybe they haven’t – I wouldn’t know… but working for a flash vendor I am aware that flash is a complicated technology and you need plenty of IP to manage it properly. My, those F40 cards really do look familiar…)

Exadata on Violin? No.

Of course what could have been really interesting is the idea of using the Violin Memory Flash Memory Array as a storage server. Very much like an Exadata storage cell, the 6000 series array has intelligence in the form of its Memory Gateways, which are effectively a type of blade server built into the array. There are two in each 6000 series array and they have x86 processors, DRAM and network connectivity, as you would expect. On a standard Violin Memory system you would find them running our own operating system with our vShare software, as well as the option to run Symantec Storage Foundation, but we have also used them to run other, extremely cool stuff:

Violin Memory Windows Cluster In A Box

Violin Memory OEMs VMware Virtualization Technology

Violin Memory DOES NOT run Exadata Storage Server

Ok that last one was a trap… Exadata storage software is a closed technology that can only be run on Oracle’s Exadata Database Machine. But ’twas not always thus…

Open and Closed

The original plan for Exadata storage software was that it would have an open hardware stack, rather than the proprietary Oracle-only approach that we see today. We know this from various sources including none other than the CEO of Oracle himself. It would have been possible to build Exadata systems on multiple platforms and architectures – there was a port of iDB for HP-UX under development, for example (evidence of this can be seen on page 101 of HP’s HP-UX Release Notes). Given that Oracle’s success as a database company was founded on that openness and willingness to port onto multiple platforms – freedom of choice, to put it another way – it came as a shock to many when the Sun acquisition put an end to this approach.

Now it seems that Oracle is going the other way. The Database Smart Flash Cache feature is only available on Solaris or Oracle Linux platforms. Hybrid Columnar Compression, an apparently generic feature, was only supported on Oracle Exadata systems when it was first released. Since then the list of supported storage for HCC has grown to encompass Oracle ZFS Storage Appliances and Oracle Pillar Axiom Storage Systems. Notice something these systems all have in common? The clue is in the name.

So what can we learn from this? Is Oracle using its advantage as the largest database vendor to make its less-successful hardware products more attractive? Will customers continue to see more goodies withheld unless they purchase the complete Oracle stack? Have a look at this and see what you think:

Oracle Storage – The Smarter Choice

This is a marketing feature in which Oracle explains the “Top Five Reasons Oracle Storage is a Smarter Choice than EMC”. But hold on, what’s reason number five?

So Oracle storage is “smarter” than EMC because Oracle doesn’t let you use an apparently-generic software feature on EMC? That’s an interesting view. Maybe there’s more. What about reason number four?

Oracle storage is “smarter” than EMC because Exadata software – you remember, that software which was originally going to be available on multiple systems and architectures – only runs on Oracle storage. Well duh.

Life Goes on

So here we are in the modern world. Exadata is a closed platform solution. It’s still well-designed and very good at doing the thing it was designed for (data warehousing). It’s still sold by Oracle as the strategic platform for all workloads. Oracle still claims that Exadata is a solution for OLTP and Consolidation workloads, yet we don’t see TPC-C benchmarks for it (and that criticism has become boring now anyway). Next week we will hear all about the Exadata write-back cache and how it means that Exadata X3 is now the best machine for OLTP, even though that claim was already being made about the V2 back in 2009.

I am sure the announcements at OOW will come thick and fast, with many a 200x improvement seen here or a 4000% reduction claimed there. But amid all the hype and hyperbole, why not take a minute to think about how different it all could have been?

Exadata Roadmap – More Speculation

Oracle Sun Flash Accelerator F40 card

It’s silly season. In the run up to Oracle Open World there are always rumours and whispers about what products will be announced – and this year is no different. I know this because I’m one of the people partaking in the spread of baseless and unfounded speculation.

Clearly the thing that most people are talking about is the almost certain release of a new Exadata generation called the X3. There appear to be both X3-2 and X3-8 models coming, as well as an interesting “Exadata X3-2 Eighth Rack” (that’s eighth as in 1/8, not as in 8th, I presume). You don’t need me to tell you any of this, because Andy Colvin from those excellent guys at Enkitec has written a great article all about it right here.

And as if that wasn’t enough, Kevin Closson, the ex-Performance Architect of Exadata, has added his own speculative article in which he walks the fine line of legal requirements placed upon anyone who used to work in the Oracle Development organisation (Oracle’s expensively assembled legal team often finds time to stretch its muscles about these things). To quote Kevin: “I’m only commenting about the rumors I’ve read and I will neither confirm nor deny even whether I *can* confirm or deny.” But did you notice how he didn’t confirm whether he could confirm that he could confirm it?

Anyway, with Andy and Kevin on the case, there is little point in me trying to add anything there. So let’s look at some of the other rumours.

Sun Flash Accelerator F40 Cards

It appears that the X3 will finally ditch the unloved Sun F20 flash cards that have been present since the introduction of flash when the V2 model came out in 2009. Flash technology has advanced rapidly over recent years – and the F20 cards were hardly at the forefront of the technology even in 2009.

The F20 cards contained four flash modules known as DOMs, each with 24GB of SLC flash and 64MB of DRAM. In order to ensure that writes made it to flash in the event of power loss, they also had dirty great big super-capacitors strapped to the back. I’m no fan of supercaps in general; they tend to have reliability issues and go bang in the night. I’m not saying that Oracle’s cards had this issue though (because I also have to consider that expensively-assembled legal team). However, it’s interesting to note this quote in the F20 user guide: “Because high temperatures can have negative impact on life expectancy, it is best to locate the Sun Flash Accelerator F20 PCIe Card in PCIe slots that offer maximum airflow”.

The new F40 cards have now switched to using MLC flash and again contain four DOMs. This time they are 100GB in size, giving a total of 400GB usable (512GB raw). There is no mention of DRAM, but of course it must be there. The manual also offers no insight into whether there are any supercaps (unlike the F20 manual, which had a lovely section on “Super Capacitors versus Batteries”), but I can see some fat little nodules on the picture up above which tell me that capacitance is still essential. The result of these changes (probably mainly the switch to MLC) is that the published mean time between failures has dropped by 50% from 2m hours to 1m hours. That’s taken 114 years off the lifetime of the cards!
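
As a quick sanity check on that 114-year figure, here is the arithmetic in a throwaway Python snippet, using the published MTBF values quoted above (and remembering that MTBF is a statistical fleet measure rather than a literal per-card lifespan):

# Rough working for the 114-year figure quoted above.
HOURS_PER_YEAR = 24 * 365.25        # ~8,766 hours

f20_mtbf_hours = 2000000            # published F20 MTBF: 2 million hours
f40_mtbf_hours = 1000000            # published F40 MTBF: 1 million hours

print((f20_mtbf_hours - f40_mtbf_hours) / HOURS_PER_YEAR)   # ~114 years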

The power draw appears to have risen, because the F20 used around 16.5W during normal operation, whereas the F40 is described as using 25W max and 11.5W even when idle. On the other hand maybe they just picked a value in the middle and called that “normal”.

What will be interesting is to see how Oracle handles the flash write cliff. Flash media is very fast for reads; in the case of the F40 the latency is 251 microseconds (not impressive against the 90 microseconds on a Violin system, but still better than disk). Flash is even faster for writes, with the F40 having a 95 microsecond latency (25 microseconds on Violin 🙂 ). The area to watch out for though is erasing. On flash you can only write to an empty block, so once the block is used it has to be erased again before you issue another write to it. Violin has all sorts of patented technology to ensure this doesn’t affect performance (but as I’ve already plugged Violin twice I’ll shut up about it). Oracle doesn’t – at least, nothing that any of the flash vendors would be worried about.

[Disclosure: In the comments section below, Alex asked a question about the block size which made me realise that the F40 datasheet numbers are showing latency figures for 8k, whereas I am quoting Violin latency figures for 4k blocks. Even so, it’s still obvious that there are some big differences there.]

That’s never really been a problem for Oracle before, because the Exadata flash was used as a write-through cache, where the write performance of the flash cards was not an issue. This time, with the new "flash for all writes" capabilities of the flash cache, write performance is going to matter – particularly for sustained writes, such as ETL jobs, batch loads, data imports etc. Unless Oracle has some way to avoid it, once the capacity of the cards is used and all of the flash cells have been written to, there will be a big drop-off in performance whilst garbage collection takes place in the background, erasing stale cells to free them up again. It will be interesting to see how the X3 behaves during this type of load.
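
To make that write cliff a little more concrete, here is a deliberately naive sketch – nothing to do with how Oracle, Violin or anyone else actually manages flash, and the 2ms erase figure is purely an assumption for illustration – of what happens to write latency once a device runs out of pre-erased blocks and every new write has to wait for an erase first:

# Naive illustration of the flash "write cliff" described above. Not a model of
# any real controller: the erase latency is an assumed figure for illustration.
WRITE_US = 95        # F40 datasheet write latency (microseconds, 8k)
ERASE_US = 2000      # assumed block-erase latency (microseconds)

free_blocks = 1000   # pretend device with 1,000 pre-erased blocks

def write_block():
    """Return the latency of a single block write, in microseconds."""
    global free_blocks
    if free_blocks > 0:
        free_blocks -= 1
        return WRITE_US              # pre-erased block available: just program it
    return ERASE_US + WRITE_US       # nothing free: erase in the foreground first

latencies = [write_block() for _ in range(2000)]
print("first 1000 writes :", sum(latencies[:1000]) / 1000.0, "us average")   # 95.0
print("second 1000 writes:", sum(latencies[1000:]) / 1000.0, "us average")   # 2095.0

Real controllers hide this by garbage-collecting in the background and keeping a reserve of pre-erased blocks, which is exactly why sustained write workloads are the interesting case to test.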

Database Virtualisation

This is the other hot topic for me, since I am an avid believer that we are seeing a major trend in the industry towards the virtualisation of production Oracle databases. Oracle, it has to be said, has not had a massive amount of success with its Oracle VM product. I actually quite like it, but I appear to be in a minority. It’s not got anything like the market penetration of Hyper-V, whilst VMware is in a different league altogether.

History tells us that when Oracle has a product with which it wants to drive (or rather, enforce) more adoption, it uses “interesting” strategies. The addition of OVM to Exadata is, for me, almost certain. In this way, Oracle gets to push its own virtualisation product as something that a) is “engineered to work with Exadata”, b) is a “one throat to choke” support solution, and c) is the *only* choice you can have.

Expect to see lots of announcements around this, with particular hype over the features such as online migration and integration with OEM, as well as lots of talk about how the Infiniband network makes it all a million times faster than some unspecified alternative.

Update 10 September 2012

It’s come to my attention that the Sun F40 cards look incredibly similar to the LSI Nytro WarpDrive WLP4-200 flash cards. Just take a look at the pictures. I don’t know this for a fact, but the similarity is plain to see. Surely Oracle must be OEMing these?

A note for Oracle’s legal team: please note that this is all wild speculation and that I am in no way using any knowledge gained whilst an employee of Oracle. In fact the main thing I learned whilst an employee was that people on the outside who aren’t supposed to know get to have a lot more fun speculating than the people on the inside who are supposed to know but don’t.

Exadata Roadmap Preview

Last week, Andrew Mendelsohn gave a talk at the Enkitec Extreme Exadata Expo (“E4”) run in Texas by those excellent guys at Enkitec. Andrew is the SVP of Oracle’s Database Server Technologies group, so it’s fair to say he has his finger on the pulse of the Oracle roadmap for Exadata.

Big thanks to Frits Hoogland for tweeting a picture of the roadmap slide. As you can see there are some interesting things on there… I’m told that Andrew described these features as “coming within the next 12 months”. Of course, that could mean they arrive at the next Oracle Open World in a month’s time, or they could be 365 days away. I suspect some are coming sooner than others, but as usual it is all wild speculation. Never mind though – if there’s one thing I’m quite good at, it’s wild(ly inaccurate) speculation.

The first one to consider is the in-memory optimized compression. Why is this important? Well, for Exadata, one reason is that no compression functionality can be offloaded to the storage cells, with their 168 cores (in a full rack). Instead it has to take place on the far less processor-heavy compute nodes (only 96 cores on a full rack X2-2). Of course, it may be that the cells are busy and the compute nodes are idle, in which case this is a happy coincidence and there would be plenty of resource available for compression (although actually if the cells are really busy they may be performing “passthrough”, where work is offloaded back to the compute nodes!). But the fact remains that since the Exadata design is asymmetrical, you are still limited to only using the CPUs in the compute nodes. If you want to know what that means, you really need to be watching these videos by Kevin Closson. It seems like everyone wants to do everything in memory these days, but then I guess that’s not surprising when the alternative is doing it on disk.
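
For reference, here is where those two core counts come from, assuming the standard full-rack X2-2 configuration of fourteen storage cells and eight compute nodes, each with two six-core Xeons:

# Full-rack X2-2 core counts referenced above (standard configuration assumed).
storage_cells, cores_per_cell = 14, 2 * 6    # 14 cells, 2 x six-core CPUs each
compute_nodes, cores_per_node = 8, 2 * 6     # 8 database servers, 2 x six-core CPUs each

print("storage cell cores:", storage_cells * cores_per_cell)    # 168
print("compute node cores:", compute_nodes * cores_per_node)    # 96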

The second important feature is the “flash for all writes” write-back flash cache, enabling the database writer to use some of the 5.3TB of flash available in a full rack. Of course, this is effectively a cache, albeit a persistent one. The writes still have to be de-staged back to disk at some point. Andrew is claiming a 10x improvement here on the slide, but it will be interesting to see how that plays out – particularly if those writes are sustained and the area allocated on the flash cards starts to run out. Kevin posted some views about this on his site, although being Kevin he likes to stick to the facts rather than throw about the armfuls of wildly inaccurate speculation that you’ll find here.

Finally, the feature that caught my eye the most was “Virtualization of database servers”. Regular readers will know my absolute faith in the meeting of databases with virtualization technology, so for me this appears to be yet another clear sign (if you look for them hard enough you can always find them 🙂 ). I wonder if this means the introduction of Oracle VM onto the compute nodes. The x86 hardware is there, the Infiniband network is there, so this could pave the way for OVM on Exadata with all of the resultant Live Migration technology… it’s a thought.

Let’s face it, Oracle is getting spanked in the virtualisation arena by VMware, so they need to do something big to get people to notice OVM. With the release of VMware’s vFabric Data Director 2.0 it’s now time to fight or give up. And we all know Oracle likes a fight.

For my money OVM is actually a great product, but then so is VMware. And for all Larry’s words on virtualization being the best security model, it’s a technology that has been noticeably lacking on what is, after all, Oracle’s strategic platform for all database workloads.

Comments welcome… and feel free to call me out on what is clearly an obvious lack of insider knowledge.

The Strategic Platform for ALL Database Workloads

I was invited to Microsoft HQ in the UK yesterday to be a speaker at one of their launch events for SQL Server 2012. It’s the second of these events that I’ve appeared at, and it finally made me realise I need to change something about this blog.

Until now I have resisted making any critical remarks about the Oracle Exadata product here, other than to quote the facts as part of my History of Exadata series. I’m going to change that now by offering my own opinions on the product and Oracle’s strategy around selling it.

Before I do that I should establish my credentials and declare any bias I may have. For a number of years, until very recently in fact, I was an employee of Oracle Corporation in the UK where I worked in Advanced Customer Services. I began working with Exadata upon the release of the “v2” Sun Oracle Database Machine and at the time of the “X2” I was the UK Team Lead for Exadata. I personally installed and supported Exadata machines in the UK and also trained a number of the current Exadata engineers in ACS (although that wasn’t exactly difficult as all of the ACS engineers I know are excellent). I also used to train the sales and delivery management communities on Exadata using my trademarked “coloured balls” presentation (you had to be there).

I now work for Violin Memory, a company that (to a degree) competes with Oracle Exadata. Exadata is a database appliance, whilst Violin Memory makes flash memory arrays… so that doesn’t immediately sound like a true competition. But I’ll let you into a little secret: Exadata isn’t a database appliance at all – it’s an application acceleration product. That’s what it does: it takes applications which businesses rely on and makes them run faster. And in fact that’s also exactly what Violin is – an application acceleration product that just happens to look like a storage array.

So now we have everything out in the open I’m going to talk about my issue with Exadata – and you can read this keeping in mind that everything I say is tainted by the fact that I have an interest in making Violin products look better than Oracle’s. I can’t help that, I’m not going to quit my exciting new job just to gain some journalistic integrity…

There are a number of critiques of Exadata out there on the web, ranging from technical discussions (the best of which are Kevin Closson’s Critical Analysis videos) to stories about the endless #PatchMadness from Exadata DBAs on Twitter. My main issue is much more fundamental:

Oracle now say that Exadata is the strategic database platform for ALL database workloads. This was not always the case. If you read my History of Exadata piece you will see that when the original v1 HP Oracle Database Machine was released, “Exadata” was the name of the storage servers. And those storage servers were, in Oracle’s own words, “Designed for Oracle Data Warehouses”.

Upon the release of the v2 Sun Oracle Database Machine there came an epiphany at Oracle: the realisation that flash technology was essential for performance (don’t forget I’m biased). This was great news for Violin, as back in those days (this was 2009) flash was still an emerging technology. However, the Sun F20 Accelerator cards that were added to the v2 were (in my biased opinion) pretty old tech, and Oracle was only able to use them as a read cache. That didn’t stop Oracle’s marketing department (never one to hold back on a bold claim) from making the statement that the v2 was “The First Database Machine For OLTP”. We are now on the X2 model (really only a minor upgrade in CPU and RAM from the v2) and Oracle has now added Database Consolidation to the list of things that Exadata does. And of course the new bold claim has now appeared, as in the image above: “Exadata is Oracle’s strategic database platform for ALL database workloads”. Sure, the X2 came in two models, the X2-2 and the X2-8, but they weren’t actually different in terms of the features that you get above a normal Oracle database… you still get the same Exadata storage, Hybrid Columnar Compression and Exadata Flash Cache features regardless of the model.

So what’s my problem with this? Well first of all let’s just think about what a workload is. Essentially you can define the workload of a database by the behaviour of its users. There are two main types of workload in the database world, OnLine Transactional Processing (OLTP) and Data Warehousing (DW). OLTP systems tend to have highly transactional workloads, with many users concurrently querying and changing small amounts of data. Conversely, DW systems tend to have a smaller number of power users who query vast amounts of data performing sorts and aggregation. OLTP systems experience huge amounts of change throughout their working period (e.g. 9am-5pm for a national system, 24×7 for a global system). DW systems on the other hand tend to remain relatively static except during ETL windows when massive amounts of data are loaded or changed.

In fact, you can pretty much picture any workload as fitting somewhere on a scale between these two extremes:

Of course, this is a sweeping generalisation. In practice no system is purely OLTP or purely DW. Some systems have windows during which different types of workload occur. Consolidation systems make things even more complicated because you can have multiple concurrent workloads taking place.

There’s a point to all this though. Take a random selection of real life databases and look at their workloads. If you agree with my OLTP <> DW scale above then you will see that they all fit in different places. Maybe you don’t agree with it though and you think there are actually many more dimensions to consider… no matter. What we should all be able to agree on is this:

In the real world, different databases have different workloads.

And if we can agree on that then perhaps we can also agree on this:

Different workloads will have different requirements.

That’s simple logic. And to extend that simple logic just one more step:

One design cannot possibly be optimal for many different requirements.

And that’s my problem with Oracle’s strategy around selling Exadata. We all know that it was originally designed as a data warehouse solution. Although I defer to Kevin’s knowledge about the drawbacks of an asymmetric shared-nothing MPP design, I always thought that Exadata was an excellent DW product and something that (at the time) seemed like an evolutionary step forward (although I now believe that flash memory arrays are a revolutionary step forward that make that evolution obsolete – keep remembering that I’m biased though). But it simply cannot be the best solution for everything because that doesn’t make sense. You don’t need to be technical to get that, you don’t even need to be in IT.

Let’s say I wanted to drive from town A to town B as fast as I can. I’d choose a Ferrari right? That’s my OLTP requirement. Now let’s say I wanted to tow a caravan from A to B, I’d need a 4×4 or something with serious towing ability – definitely not a Ferrari. There’s my DW requirement. Now I need to transport 100 people from A to B. I guess I’d need a coach. That’s my Database Consolidation requirement. There is no single solution which is optimal for all requirements. Only a set of solutions which are better at some and worse at others.

A final note on this subject. The Microsoft event at which I spoke was about Redmond’s new set of database appliances: the Database Consolidation Appliance, the Parallel Data Warehouse, and the Business Decision Appliance. Microsoft have been lagging behind Oracle in the world of appliances but I believe that they have made a wise choice here in offering multiple solutions based on customer workload. And they are not the only ones to think this. Look at this document from Bloor comparing IBM and Exadata:

“Oracle’s view of these two sets of requirements is that a single solution, Oracle Exadata, is ideal to cover both of them; even though, in our view (and we don’t think Oracle would disagree), the demands of the two environments are very different. IBM’s attitude, by way of contrast, is that you need a different focus for each of these areas and thus it offers the IBM pureScale Application System for OLTP environments and IBM Smart Analytics Systems for data warehousing.”

Now… no matter how biased you think I am… Maybe it’s time to consider if this strategy of Oracle’s really makes sense?

Exadata Re-Racking Service

I’ve heard from a few sources now that Oracle is offering a new Exadata Re-racking service for quarter and half racks. The idea, as I understand it, is that if you have your own rack equipment in your data centre and don’t want to use the rack that Exadata comes preinstalled in, you can pay an extra fee for Oracle’s Advanced Customer Services engineers (a fine bunch of people I must say!) to re-rack it. It appears that the machine is delivered to your data centre and then ACS will disassemble it at your site and reassemble it in your rack.

There appear to be some caveats, such as a pre-installation survey to check that your rack kit is suitable and a ban on putting anything else in the same rack. Also, since you cannot have this with the full rack I presume that this would preclude you from upgrading to a full machine in the future – at least not without having to relocate the kit, which I guess means downtime. I must stress that I don’t have the exact details, so talk to your friendly local Exadata sales rep if you want to know.

What I will say is that in all my time at Oracle the idea that customers could not re-rack the Exadata component servers was one of the few rules which was set in stone. Many customers asked, but all were told no. So what’s changed?

If you ask Oracle I am sure they would say that they are “listening to customer demand” and being “flexible”. On the other hand surely there must be some who will see this as a simple case of abandoning a principle in order to increase the attraction of Exadata and get more sales.

I’d love to know what happens to the empty Exadata rack once the kit has been moved. I’ll start checking to see if they appear on eBay…

SLOB testing on Violin and Exadata

I love SLOB, the Silly Little Oracle Benchmark introduced to me by Kevin Closson in his blog.

I love it because it’s so simple to setup and use. Benchmarking tools such as Hammerora have their place of course, but let’s say you’ve just got your hands on an Exadata X2-8 machine and want to see what sort of level of physical IO it can drive… what’s the quickest way to do that?

Host Name        Platform                         CPUs Cores Sockets Memory(GB)
---------------- -------------------------------- ---- ----- ------- ----------
exadataX2-8.vmem Linux x86 64-bit                  128    64       8    1009.40

Anyone who knows their Exadata configuration details will spot that this is one of the older X2-8s, as it “only” has eight-core Beckton processors instead of the ten-core Westmeres buzzing away in today’s boxes. But for the purposes of creating physical I/O this shouldn’t be a major problem.

Running with a small buffer cache recycle pool and calling SLOB with 256 readers (and zero writers) gives:

Load Profile              Per Second
~~~~~~~~~~~~         ---------------
  Physical reads:          138,010.5

So that’s 138k read IOPS at an 8k database block size. Not bad eh? I tried numerous values for readers and 256 gave me the best result.

Now let’s try it on the Violin 3000 series flash memory array I have here in the lab. I don’t have anything like the monster Sun Fire X4800 servers in the X2-8 with their 1TB of RAM and proliferation of 14 IB-connected storage cells. All I have is a Supermicro server with two quad-core E5530 Gainestown processors and under 100GB RAM:

Host Name        Platform                         CPUs Cores Sockets Memory(GB)
---------------- -------------------------------- ---- ----- ------- ----------
oel57            Linux x86 64-bit                   16     8       2      11.74

You can probably guess from the hostname that I’ve installed Oracle Linux 5 Update 7. I’m also running the Oracle Unbreakable Enterprise Kernel (v1) and using Oracle 11.2.0.3 database and Grid Infrastructure in order to take advantage of the raw performance of Violin LUNs on ASM. For each of the 8x100GB LUNs I have set the IO scheduler to use noop, as described in the installation cookbook.
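
For completeness, here is a minimal sketch of that scheduler change – the device names are made up for illustration, so follow the installation cookbook rather than this snippet for the real procedure:

# Illustrative only: set the noop I/O scheduler for each Violin LUN via sysfs.
# Device names are hypothetical; check your own LUN / multipath naming first.
luns = ["sdb", "sdc", "sdd", "sde", "sdf", "sdg", "sdh", "sdi"]

for dev in luns:
    with open("/sys/block/%s/queue/scheduler" % dev, "w") as f:
        f.write("noop")    # skip the elevator; the array handles request ordering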

So let’s see what happens when we run SLOB with the same small buffer cache recycle pool and 16 readers (zero writers):

Load Profile              Per Second
~~~~~~~~~~~~         ---------------
  Physical reads:          159,183.9

That’s 159k read IOPS at an 8k database block size. I’m getting almost exactly 20k IOPS per core, which funnily enough is what Kevin told me to expect as a rough limit.

The thing is, my Supermicro has four dual-port 8Gb fibre-channel cards in it, but only two of them have connections to the Violin array I’m testing here. The other two are connected to an identical 3000 series array, so maybe I should present another 8 LUNs from that and add them to my ASM diskgroup… Let’s see what happens when I rerun SLOB with the same 16 readers / 0 writers:

Load Profile              Per Second
~~~~~~~~~~~~         ---------------
  Physical reads:          236,486.7

Again this is an 8k blocksize so I’ve achieved 236k read IOPS. That’s nearly 30k IOPS per core!
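
Just to show the working behind those per-core figures (numbers taken from the AWR "Physical reads" lines above):

# Per-core read IOPS from the AWR snapshots above (8k database blocks).
violin_one_array  = 159183.9 / 8    # single 3000 series array, 8 cores -> ~19,898
violin_two_arrays = 236486.7 / 8    # two arrays striped in ASM         -> ~29,561

print(round(violin_one_array), round(violin_two_arrays))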

I haven’t run this set of tests as a marketing exercise or even an attempt to make Violin look good. I was genuinely interested in seeing how the two configurations compared – and I’m blown away by the performance of the Violin Memory arrays. I should probably spend some more time investigating these 3000 arrays to see whether I can better that value, but like a kid with a new toy I have one eye on the single 6000 series array which has just arrived in the lab here. I wonder what I can get that to deliver with SLOB?

The History of Exadata

I’ve been working on a timeline for the history of Exadata, starting with the HP Oracle Database Machine and working through to the X2 series.

It’s interesting to see how Oracle’s presentation of the product has changed over time, particularly the marketing messages.

Also, if you didn’t know better you would probably think that Engineered Systems were something Oracle had been planning for years. But the original plans for the Oracle Database Machine were to allow multiple vendors and ports of the storage software, basically an open architecture.

Things have changed a lot since then…