Database Consolidation Part 3 – It’s All About Capacity

In parts one and two of this article I blogged, extensively and laboriously, about database consolidation. I talked (at length) about the business drivers for this industry trend, then went on to discuss (for some considerable time) the technical challenges. I even droned on about the different design choices faced by enterprises who are about to embark upon a database consolidation exercise.

From this we can conclude a number of things, not the least of which is that I write too much. This is particularly true because none of what I’ve actually written so far contains any of the things that made me want to start blogging about database consolidation… I’ve saved all of those until now. We also concluded that consolidation is about cost reduction combined with increased manageability. I expanded on the manageability piece to encompass standardisation, agility and reduced complexity. But what about that cost reduction bit? If you cannot achieve cost reductions, where is the justification for all of this change?

Capacity: Enough – But Not Too Much

Part three is all about capacity. Not (just) as in disk space, but as in the amount of finite resources available. With any consolidation exercise the idea is always the same: take a large estate of disparate systems and condense them into a smaller, better-managed offering with clearly-defined service levels. That word “condense” is the key, because you aren’t just trying to join up the dots. If you go from having 100 databases all on their own dedicated servers, to having 1 uber-server running all your databases, that’s not necessarily a good thing. Not if that uber-server costs 100 times more than the original servers and takes up 100 times more floor space, uses 100 times more power etc. The true saving comes when you compare what you have to what you need… and discover a difference.

Of course, things are never that simple, because what you need is hardly ever constant. For any specific resource you can probably define an average requirement, e.g. on a typical day I need this much processing power, or this many I/Os serviced per second from my storage. But what about peaks? Each system has a time when it is at the peak utilisation of its resources, so during these peak times how big is the gap between the average and the maximum? These peaks can come in all sorts of forms, from daily schedules (e.g. logon storms caused at the start of new shifts on a call centre CRM system) to seasonal events (university enrolment or tax submission systems where users have an annual deadline).

Let’s consider ten databases which need to be consolidated. For simplicity we’ll assume that they are all identical in their behaviour and requirements. These ten databases run on ten identical servers, each with the capacity to deliver 10 of something. Let’s not worry about what that something is, whether it’s a specific property such as CPU or memory; let’s just keep this generic and say that whatever it is can be measured. So our total capacity is 100. Now each database has an average requirement for 6, so if we were to consolidate then on average we need to be able to supply 60. That means I could potentially reduce my server requirements for the consolidation platform from 100 down to nearer 60. However, at peak times each database utilises up to 9. So if all of these databases were to require their peak capacity at the same time (which they would, because I said they were identical) then I would need at least 90 or I would be unable to service the requirement.

The trick with consolidation is to recognise that, in the real world, not all of your systems are likely to need their peak requirements at the same time. Thus with enough information I am able to make a well-educated and considered judgement (or alternatively a reckless gamble) that the combined load of all of my consolidated systems will never reach the theoretical maximum, thereby building a system that is smaller than the sum of its parts. This practice is known as overcommitting.

Overcommitting Resources

Now that we have loosely defined the concept of overcommitting, let’s think about which resources can be overcommitted. It’s hardly a new idea: in the world of virtualisation, overcommitting is a long-standing concept, and even in the world of Oracle we have already bumped into it in areas such as Instance Caging (although Oracle calls it “over-provisioning”, a term which means something different in the storage industry).
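By way of illustration, here is a minimal sketch of what CPU overcommit via Instance Caging might look like (assuming Oracle 11g Release 2 or later; the core counts are invented for the example):

-- Run on each consolidated instance: caging requires an active
-- Resource Manager plan plus a manually-set cpu_count
ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN' SCOPE=BOTH;
ALTER SYSTEM SET cpu_count = 4 SCOPE=BOTH;

-- Ten instances caged at 4 CPUs each on a 32-core server is an
-- overcommit: 10 x 4 = 40 potential CPUs against 32 physical cores.
-- The bet is that the instances never all hit their cages at once.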

At a high-level we have the following resources to consider:

  • CPU
  • Memory
  • I/O
  • Network

Of course I/O can mean more than one thing. Obviously there is the total storage capacity required, but you also need to consider the I/O demand in terms of how much data can be delivered and at what latency. More on that later.

Now if you are new to database consolidation you may be tempted to think that CPU and I/O are going to be the main areas for concern, but actually by far the most common problem is memory. The reason for this is that CPU consumption and I/O rates tend to vary over time for each database, to the point that (if you choose your applications wisely) you are unlikely to see every database demand its peak requirement at the same time. This means you can overcommit, bringing you considerable infrastructure savings. But if you think about the memory components of a database, namely the SGA and PGA, how can consolidation help you achieve a saving there? One possible solution would be to use Oracle’s Automatic Memory Management feature to limit the size of each instance’s memory footprint, but then overcommit the maximum sizes so that instances can grow when they need to. Good luck with that. The risk of every instance ballooning up to its full size is palpable (and my years working for Oracle have resulted in a deep mistrust of AMM and its predecessor ASMM…)
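To illustrate the gamble rather than recommend it, here is roughly what that overcommitted AMM configuration would look like – the sizes are entirely hypothetical:

-- Per instance: target 4GB but allow ballooning to 8GB
-- (a restart is needed for memory_max_target to take effect)
ALTER SYSTEM SET memory_max_target = 8G SCOPE=SPFILE;
ALTER SYSTEM SET memory_target = 4G SCOPE=SPFILE;

-- Ten such instances on a 64GB server: 10 x 4GB = 40GB on average is
-- fine, but the theoretical maximum is 10 x 8GB = 80GB. Unlike CPU,
-- memory which doesn't exist cannot be time-sliced... the server swaps.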

The answer to this issue is flash memory. Not just for the memory issues but for CPU and (perhaps obviously) I/O as well. Flash memory allows you to achieve a greater density of database consolidation. In part four I’ll explain why…

[But first, a small apology. There is a fourth bullet point up there which says “Network”. I don’t have much to say on networking when it comes to consolidation… in fact I often don’t have much to say on networking at all. My experience of working with network operations teams has generally led me to conclude that they don’t actually exist. Instead, they seem to have been replaced by automatic email response systems which wait a predefined amount of time and then reply with the message, “There are networking issues…”. And yes, I am aware of how lame this excuse is, but I’m sticking with it.]

Database Consolidation Part 2 – Shared Infrastructure Design Choices

Part one was all about the business drivers and technical challenges faced when building a database consolidation platform. Database consolidation is all about sharing infrastructure, so part two is about the design choices that are available…

An important architectural decision when consolidating databases is where the shared infrastructure should diverge. If we assume that your customers are applications which require a database service, at what point should each application be segregated from the others? Obviously you want to use the same underlying hardware, but what about the OS? What about the storage – do you want to segregate the data into different volumes on different LUNs? Or maybe you want to share right at the top and just have different application schemas in one big container database?

Let’s have a look at the three main choices available:

  • Multi-Tenancy databases
  • Shared Platform databases
  • Virtualisation

A multi-tenancy database is a database which contains many different applications, each with their own schema. In many ways this model makes a lot of sense, since it allows for the highest level of resource sharing and an almost-zero deployment time for new schemas. And after all, Oracle is designed to have multiple users and schemas; the Database Resource Manager allows for a level of QoS (quality of service) to be maintained, whilst features such as Virtual Private Database can be used to enhance the security levels. Oracle also allows for services to be defined (sketched below) which can then be controlled and relocated on a clustered database. So why not opt for this method? In fact, some customers do – although the vast majority don’t. The reasons for avoiding this method are back in part one, under the heading “Technical Challenges”. A single big database is a big single point of failure. You don’t want to hit an ORA-600 and see the whole thing come crashing down if it’s a container for your entire application estate! If someone accidentally truncates a table and wants the whole database rolled back so they can retrieve their data, how do you resolve that? Maintenance becomes a nightmare – can you really have all of your applications on the exact same release and patchset of Oracle? What about testing… if one of your applications requires a patch for the optimizer, how do you go about testing every other application to ensure they are not affected? And security… it only takes one mistaken privilege grant and everything is exposed… do you really trust this model?
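Before moving on, and for anyone unfamiliar with the services I mentioned above, here is a rough sketch of the per-application segregation this model relies on (the service name is invented for the example):

-- One service per application schema, so each tenant can be
-- monitored, controlled and relocated independently:
BEGIN
  DBMS_SERVICE.CREATE_SERVICE(service_name => 'crm_svc',
                              network_name => 'crm_svc');
  DBMS_SERVICE.START_SERVICE(service_name => 'crm_svc');
END;
/

On a clustered database the equivalent srvctl commands also let you define preferred and available instances for each service, which is how the relocation piece works.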

A shared platform database model provides segregation at the database level, so that a cluster of hardware (for example a six-node cluster running Oracle Grid Infrastructure) runs multiple different databases. This allows for a wide variety of database versions and patchsets to be run on the same platform, which is far more practical and makes the security issues far easier to cope with. Of course, it’s not without its challenges either. Firstly, there are still components that cannot be upgraded without affecting large groups (or all) of the customers: the operating systems, the Grid Infrastructure software, firmware for various components etc. Then there are the additional resource requirements for running multiple databases: extra RAM to cope with all of the SGAs and PGAs, extra CPU capacity to cope with all the additional processes from each instance, extra storage for all of those temporary and undo tablespaces, the online and archive redo logs, the SYSTEM and SYSAUX tablespaces. Maintenance requirements also increase, because although you can upgrade or patch each database independently you now have many more databases to upgrade or patch. This means administrative time increases dramatically – although you can combat this with the use of enterprise management tools such as Oracle Enterprise Manager.

An environment which uses virtualisation is perhaps the strongest design model. Virtualisation products have matured significantly in recent years, to the point that they are now being used not just in non-database production environments but for databases as well. Traditionally this has been a difficult subject for DBAs due to Oracle’s support policy for databases running on VMware. This has softened considerably in recent years but Oracle still reserves the right to withdraw support for an issue unless it “can be demonstrated to not be as a result of running on VMware”. Of course, Oracle has its own virtualisation product, Oracle VM (which I have to say I actually really like), where support is not an issue, but I suspect that it has a far smaller share of the market than VMware (although you wouldn’t know it from the aggressive marketing…). The great thing about virtualisation is that you have inherent security based on the segregation of each virtual machine. Maintenance becomes a lot easier because even OS upgrades can take place without affecting other users, whilst VMs can be migrated from one physical stack to another in order to perform non-disruptive hardware maintenance. Deployment and provisioning become easier as virtualisation products like VMware and OVM are designed with these requirements in mind; the use of templates and the cloning of existing images are both great options. Similarly, expansion both at the VM level and across the whole platform is a lot easier. On the other hand, licensing (particularly of Oracle products) isn’t always clear (but then when is it?). The main challenge though is capacity, because now you not only have to consider all of those database SGAs and PGAs but also the operating systems and their various requirements, from root filesystems to swap files. Again, I will talk more about this in part three.

Finally… there is a fourth model, which I haven’t mentioned here because it almost certainly won’t apply to the majority of people reading this. The fourth model is schema-level multi-tenancy, as used by the likes of Software-as-a-Service companies, whereby a single application is shared by multiple customers, each of which sees only its own slice of the data. This is really an application-based consolidation solution, where each user or set of users only has visibility of their data despite it being stored in the same tables as that of other users. The application uses unique keys and referential integrity to look up only the correct data for each user, leading to the security ramification that your data is only as secure as the developer code written to extract it for you. I once worked on one of these systems and discovered a SQL injection issue that allowed me to view not only my data but that of anyone whose userID I could guess. Of course there are products such as Oracle’s Virtual Private Database that can be used to provide additional levels of protection.
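For the curious, a VPD policy is essentially a function returning a predicate which the database silently appends to every query against the protected table – enforced server-side, so a developer mistake like my SQL injection example can’t bypass it. A rough sketch, with invented schema, table and context names (and assuming an application context is populated at logon):

-- Policy function: return the predicate to append to queries
CREATE OR REPLACE FUNCTION customer_policy (
  p_schema IN VARCHAR2, p_object IN VARCHAR2) RETURN VARCHAR2 IS
BEGIN
  RETURN 'customer_id = SYS_CONTEXT(''saas_ctx'', ''customer_id'')';
END;
/

-- Attach the policy to the shared table
BEGIN
  DBMS_RLS.ADD_POLICY(object_schema   => 'SAAS',
                      object_name     => 'ORDERS',
                      policy_name     => 'orders_by_customer',
                      function_schema => 'SAAS',
                      policy_function => 'CUSTOMER_POLICY');
END;
/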

The reason I mention this fourth model is that Larry Ellison attacked Salesforce.com for using a variant of this model and said that multi-tenancy “was the state-of-the-art 15 years ago”, whilst talking up the Oracle Public Cloud for using virtualisation as a security model. According to Larry, multi-tenancy “puts your data at risk by commingling it with others”. Now, I don’t know Salesforce’s database design so I don’t know how well it fits into my description above (I have some friends who work for Salesforce though so I do know that they employ great developers!)… but what I do know is Exadata. And Exadata, along with the Super Cluster, is the platform for Oracle’s “Private Cloud” offering (details of which you can read about here). Exadata, however, has no virtualisation option. You cannot run OVM on Exadata, so if you read Oracle’s Exadata Database Consolidation white paper, it’s all about building the shared platform model I talked about above. To me, that doesn’t really fit in with Larry’s words on the subject.

Scale works in both directions…

One final thought for this section. If you build a DaaS environment and get all of your automated provisioning right, you will make it very easy for your users to build new applications and services. That’s a good thing, right? But don’t forget to spend some time thinking about how you are going to ensure that this thing doesn’t grow out of control. Ideally you need some sort of cross-charging process in place (I could probably write a whole article on this at some point – it’s such a big topic) but most of all you need a process for decommissioning and tearing down applications and databases that have exceeded their shelf life. If you don’t have that, you will find that all of your infrastructure cost savings are very short-lived…!

That’s it for part two. In part three I will be discussing the capacity requirements of a consolidation platform. And you won’t be surprised to hear that flash is going to make an appearance soon, because flash memory is the perfect fit for a consolidation environment. Don’t believe me? Wait and see…

Database Consolidation Part 1 – Business Drivers and Technical Challenges

Database consolidation has been a big trend in the industry for a while now. You can see this if you read the IT press, or if you listen to the relentless procession of people queueing up to talk about the “cloud”. I saw it in my time at Oracle, where we had an increasing number of customers come and talk to us about the pressures of running thousands of independent databases, all on their own servers, all taking up vast amounts of data centre real estate and acting like a dead weight around the neck of their IT organisations.

Of course, just rounding up all of your databases and sticking them on some big iron isn’t really going to bring you many benefits. The real benefits of a consolidation exercise come when you use it as a way of repositioning your databases as service offerings. These come under various guises: the “As A Service” models: Database-as-a-Service (DaaS), Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS); the On-Demand models (e.g. Amazon’s Relational Database Service); and the ubiquitous cloud offerings (e.g. the Oracle Cloud). (My first job working with On Demand services was in 2003, so I still can’t use the term “cloud” without it coming out sounding as if I’m being sarcastic… I don’t mean to, but why does everyone always talk as if it’s a brand new idea?!)

Anyway, I’m going to make a bold claim and say that, at least to a DBA, it’s all the same thing. To me, database consolidation is about reducing vast estates of physical database servers into smaller, more tightly-managed groups of databases that can provide predefined services. (Oh and labelling them as a “cloud”, because if that word isn’t used at least a hundred times a day in our industry the world would end…) It’s also about turning your databases into a well-defined service – and therefore turning your users into your customers, even if they are actually part of your own organisation.

No matter what you call it, you can always spot a database consolidation exercise by the business drivers and the technical challenges.

Business Drivers

  • Cost reduction
  • Increased agility
  • Reduced complexity
  • Higher service levels

The cost reduction piece seems obvious – a smaller number of servers costs less to buy and run than a larger number, right? But it’s often underestimated just how much of a cost saving can be made. Don’t just think about the servers; think about the savings in data centre footprint, in power and cooling. Think about the reduced administration costs, particularly if you design your service properly (i.e. the agility angle). Now think about the potential reduction in license costs. And an often-overlooked area of saving is the reduction in failures and outages caused by having a tightly designed and standardised operating model (i.e. the reduced complexity angle).

Increased agility is as important as cost reduction, something which may come as a surprise to those who are used to concentrating on technical rather than business challenges. To a CIO, the ability to react quicker and to take advantage of new opportunities as soon as they become apparent is just as important as controlling the bottom line. In a well-implemented Database-as-a-Service offering, deployments of new databases and services are fully automated. Automatic provisioning has to be a default requirement in the design. Likewise, the ability to automatically scale (up or down) in order to meet changing demand is a must. That scaling needs to be possible on two levels: at the individual database level, to meet the developing requirements of each “customer”, and at the macro level, to expand (or contract) the capability of your DaaS offering depending on overall demand.

It may not always seem like it at first, but the consolidation of your databases onto a DaaS platform should result in reduced complexity. Why? Well, because at the heart of any consolidation exercise must be standardisation. Every large IT organisation has a plethora of different databases running different versions on different operating systems. No matter how stringent your deployment procedures are, it’s guaranteed that if your databases are built manually then each one will have a subtle difference based on a) when it was built, b) who built it, and c) what sort of day they were having at the time. Human beings are complex creatures; they behave in unexpected ways – the only way to have true consistency is to have your database deployment automated. And then there are the systems that you may have inherited, perhaps as the result of an acquisition or departmental reorganisation. You know the ones – they are usually sat in the corner, untouched and unloved, because nobody dares go near them in case they break. In a DaaS environment every database is, at least outwardly, identical. This means that as a DBA you don’t have to worry about the way you treat them – what you can do with one database you can do with any of them. It’s all about manageability.

And the outcome of that reduced complexity must therefore be higher service levels. You can pretty much guarantee that any organisation with 1000 databases all running on similar patch levels on the same OS, using the same file layouts, with automated management scripts to deploy them or tear them down (and perhaps even to patch them), will deliver higher uptime than an organisation with a multitude of different databases on different operating systems, each one of which has its own subtleties and nuances.

Technical Challenges

So now that we’ve covered why it’s worth doing, what are the challenges associated with actually doing it? I’ve had a lot of exposure to DaaS and consolidation environments, both at Oracle (hands on in a support role) and in my new role at Violin (in a technical presales capacity). One particular experience which serves me well is the four years I spent working on British Telecom’s DaaS environment for Surren Partabh, who is BT’s CTO of Core Technologies. When it came to DaaS, BT were light years ahead of the game – their multiple DaaS environments have been in place for years already and support many hundreds of databases. There is an interesting case study about BT DaaS here – if you are considering a consolidation exercise (and you can ignore the author’s overuse of the word “cloud”) then it’s well worth a read. As Surren says, “Our Oracle Database 11g consolidation has enabled us to reduce our server sprawl, deploy databases faster, and operate with 20% fewer DBA’s”.

So what are the challenges?

  • Availability
  • Capacity
  • Security
  • Maintenance

Availability is a challenge, not really in a technical sense (at least not any more than normal) but because of the increase in risk. When you consolidate your databases you put all of your eggs in one basket. If you have a large part of your business dependent on your DaaS platform and it takes a plunge, the pressure is truly going to be on. Having said that, my experience is that availability increases during database consolidation. HA and DR are easier to plan for and incorporate into a DaaS design than on the ad-hoc basis of a siloed database environment. Extensive backup and DR solutions cost money, which means that inevitably you end up with databases in your environment whose HA characteristics you are not always comfortable with. When you have all your eggs in the aforementioned basket it becomes much harder to argue that good backup solutions, HA and DR are not worth the investment. Consequently you can achieve economies of scale by implementing a single solution across your whole environment – with the happy consequence that systems which might not have qualified for this level of service independently end up getting a free ride. One thing to remember about consolidation though: test your backups, test your HA and test your DR… then test them again and again. I know what it’s like to lose >50 production databases in one single calamity – and I can promise you it’s not a nice place to be.
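On the subject of testing backups: a full restore to a different host is the only real proof, but as a bare minimum RMAN can verify that your backups are readable and restorable without actually restoring anything. These are standard RMAN commands – the point is to run them regularly, not once:

RMAN> BACKUP VALIDATE DATABASE;    # scan the datafiles for corruption
RMAN> RESTORE DATABASE VALIDATE;   # read the backup pieces a restore
                                   # would need, without restoring them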

Capacity for me is the biggest challenge of all. In fact it’s so critical to the idea of database consolidation that it is the reason I started writing this blog entry. Don’t forget that capacity isn’t just about disk space, it’s about CPU resources, it’s about memory, networking, IO requirements… essentially everything that is a finite resource. Capacity is something you have to plan for when you design and build a DaaS environment; get it wrong in one direction (too much) and you won’t achieve those cost savings that were one of the driving forces behind the whole exercise… get it wrong in the other direction (too little) and that cherished availability will be compromised, possibly affecting your entire solution. In fact, capacity planning for database consolidation is such an important topic that having started this blog entry with it in mind, I am going to give it its own entry entirely…!

Security is a challenge which has similar characteristics to those I described for availability. By putting all of your databases on one platform you increase the risk – security therefore needs to be strictly controlled. At a very simplistic level, unauthorised acquisition of administrator privileges on a consolidated environment could lay open your entire data estate. Compliance is another potential issue: things are complicated by environments where different databases have different regulatory or legal requirements. For example, if one of the databases on a DaaS system needs to meet Payment Card Industry standards then all of the underlying architecture will be affected, potentially resulting in all of the databases needing to meet PCI standards. Of course, as with availability, this can work in your favour because if you design the system with security and compliance in mind, you may find that databases which were previously somewhat lacking in the security department are dragged kicking and screaming into a compliant state (often under the threat of being cast out of the environment if they fail to comply). The other major consideration for security is the use of virtualisation. By placing each database in its own virtual environment, an additional layer of security can be wrapped around it, effectively segregating it from its neighbours whilst still retaining the benefits of a shared infrastructure. This is a massive trend in the industry now and is something that, I believe, is inevitable for most enterprise database environments.

And finally we come to maintenance. I cannot emphasise enough how important it is to define the maintenance strategy of a DaaS / consolidation environment before you implement it. Most vendors now, whether they be software (e.g. Oracle), operating system or hardware (server, storage, network etc), are focussed on providing zero-downtime products capable of non-disruptive maintenance. But no matter how much you spend, there will inevitably be times when you need to take a planned outage. And of course, with all your internal customers now using the same shared infrastructure, that downtime is going to have quite an effect. Here is what is going to happen if you don’t plan for it up front: your DaaS environment has 26 databases on it, labelled A to Z. The application owner of A is hitting a problem which, unfortunately, requires maintenance on the underlying infrastructure. This patch, firmware upgrade, whatever it may be, requires downtime which will take the service offline. You were promised by all your vendors that their products would never require downtime… but hey, guess what? So you go to application owner B and say, “I need to take the system down this weekend” – and he says, “No way, not this weekend – we have a critical application upgrade planned. Can you wait until the weekend after?”. So you tell this to application owner C and she says, “We have our upgrade the week after – we already had to delay it because of B so we cannot wait any longer”. Trust me, you will never get as far as Z! You could pull rank of course, so you go to the CTO and say, “These guys are driving me mad, can you help me out?” but the CTO says, “Are you crazy? It’s the quarter end this month – we can’t do any of this stuff!”. Here is my advice to anyone implementing a DaaS or consolidation environment: define maintenance windows in the service agreement, then make your internal customers sign up to these terms before they are allowed on to your platform. If they don’t agree to these maintenance cycles then they need to go and build their own system! This is also another argument for virtualisation, because – although it doesn’t completely solve the problem – adding an extra layer of abstraction down at the hypervisor level allows everything above it to be treated independently.

Those are the technical challenges, but what about the design choices? There are three (or four, depending on your view) architectural methods of achieving a consolidation or DaaS platform. In part two of this series I will examine them and have a look at the benefits and pitfalls associated with each. If you made it this far you will be delighted to hear that I am only just getting started…


The Strategic Platform for ALL Database Workloads

I was invited to Microsoft HQ in the UK yesterday to be a speaker at one of their launch events for SQL Server 2012. It’s the second of these events that I’ve appeared at, and it finally made me realise I need to change something about this blog.

Until now I have resisted making any critical remarks about the Oracle Exadata product here, other than quoting the facts as part of my History of Exadata series. I’m going to change that now by offering my own opinions on the product and Oracle’s strategy around selling it.

Before I do that I should establish my credentials and declare any bias I may have. For a number of years, until very recently in fact, I was an employee of Oracle Corporation in the UK where I worked in Advanced Customer Services. I began working with Exadata upon the release of the “v2” Sun Oracle Database Machine and at the time of the “X2” I was the UK Team Lead for Exadata. I personally installed and supported Exadata machines in the UK and also trained a number of the current Exadata engineers in ACS (although that wasn’t exactly difficult as all of the ACS engineers I know are excellent). I also used to train the sales and delivery management communities on Exadata using my trademarked “coloured balls” presentation (you had to be there).

I now work for Violin Memory, a company that (to a degree) competes with Oracle Exadata. Exadata is a database appliance, whilst Violin Memory make flash memory arrays… so that doesn’t immediately sound like a true competition. But I’ll let you into a little secret: Exadata isn’t a database appliance at all – it’s an application acceleration product. That’s what it does: it takes applications which businesses rely on and makes them run faster. And in fact that’s also exactly what Violin is – an application acceleration product that just happens to look like a storage array.

So now we have everything out in the open I’m going to talk about my issue with Exadata – and you can read this keeping in mind that everything I say is tainted by the fact that I have an interest in making Violin products look better than Oracle’s. I can’t help that, I’m not going to quit my exciting new job just to gain some journalistic integrity…

There are a number of critiques of Exadata out there on the web, ranging from technical discussions (the best of which are Kevin Closson’s Critical Analysis videos) to stories about the endless #PatchMadness from Exadata DBAs on Twitter. My main issue is much more fundamental:

Oracle now say that Exadata is the strategic database platform for ALL database workloads. This was not always the case. If you read my History of Exadata piece you will see that when the original v1 HP Oracle Database Machine was released, “Exadata” was the name of the storage servers. And those storage servers were, in Oracle’s own words, “Designed for Oracle Data Warehouses”.

Upon the release of the v2 Sun Oracle Database Machine there came an epiphany at Oracle: the realisation that flash technology was essential for performance (don’t forget I’m biased). This was great news for Violin as back in those days (this was 2009) flash was still an emerging technology. However, the Sun F20 Accelerator cards that were added to the v2 were (in my biased opinion) pretty old tech, and Oracle was only able to use them as a read cache. That didn’t stop Oracle’s marketing department (never one to hold back on a bold claim) from declaring the v2 “The First Database Machine For OLTP”. We are now on the X2 model (really only a minor upgrade in CPU and RAM from the v2) and Oracle has added Database Consolidation to the list of things that Exadata does. And of course a new bold claim has appeared: “Exadata is Oracle’s strategic database platform for ALL database workloads”. Sure, the X2 comes in two models, the X2-2 and the X2-8, but they don’t actually differ in the features you get above a normal Oracle database… you still get the same Exadata storage, Hybrid Columnar Compression and Exadata Flash Cache features regardless of the model.

So what’s my problem with this? Well first of all let’s just think about what a workload is. Essentially you can define the workload of a database by the behaviour of its users. There are two main types of workload in the database world, OnLine Transactional Processing (OLTP) and Data Warehousing (DW). OLTP systems tend to have highly transactional workloads, with many users concurrently querying and changing small amounts of data. Conversely, DW systems tend to have a smaller number of power users who query vast amounts of data performing sorts and aggregation. OLTP systems experience huge amounts of change throughout their working period (e.g. 9am-5pm for a national system, 24×7 for a global system). DW systems on the other hand tend to remain relatively static except during ETL windows when massive amounts of data are loaded or changed.

In fact, you can pretty much picture any workload as fitting somewhere on a scale between these two extremes, with OLTP at one end and DW at the other.

Of course, this is a sweeping generalisation. In practice no system is purely OLTP or purely DW. Some systems have windows during which different types of workload occur. Consolidation systems make things even more complicated because you can have multiple concurrent workloads taking place.

There’s a point to all this though. Take a random selection of real life databases and look at their workloads. If you agree with my OLTP <> DW scale above then you will see that they all fit in different places. Maybe you don’t agree with it though and you think there are actually many more dimensions to consider… no matter. What we should all be able to agree on is this:

In the real world, different databases have different workloads.

And if we can agree on that then perhaps we can also agree on this:

Different workloads will have different requirements.

That’s simple logic. And to extend that simple logic just one more step:

One design cannot possibly be optimal for many different requirements.

And that’s my problem with Oracle’s strategy around selling Exadata. We all know that it was originally designed as a data warehouse solution. Although I defer to Kevin’s knowledge about the drawbacks of an asymmetric shared-nothing MPP design, I always thought that Exadata was an excellent DW product and something that (at the time) seemed like an evolutionary step forward (although I now believe that flash memory arrays are a revolutionary step forward that make that evolution obsolete – keep remembering that I’m biased though). But it simply cannot be the best solution for everything because that doesn’t make sense. You don’t need to be technical to get that, you don’t even need to be in IT.

Let’s say I wanted to drive from town A to town B as fast as I can. I’d choose a Ferrari right? That’s my OLTP requirement. Now let’s say I wanted to tow a caravan from A to B, I’d need a 4×4 or something with serious towing ability – definitely not a Ferrari. There’s my DW requirement. Now I need to transport 100 people from A to B. I guess I’d need a coach. That’s my Database Consolidation requirement. There is no single solution which is optimal for all requirements. Only a set of solutions which are better at some and worse at others.

A final note on this subject. The Microsoft event at which I spoke was about Redmond’s new set of database appliances: the Database Consolidation Appliance, the Parallel Data Warehouse, and the Business Decision Appliance. Microsoft have been lagging behind Oracle in the world of appliances but I believe that they have made a wise choice here in offering multiple solutions based on customer workload. And they are not the only ones to think this. Look at this document from Bloor comparing IBM and Exadata:

“Oracle’s view of these two sets of requirements is that a single solution, Oracle Exadata, is ideal to cover both of them; even though, in our view (and we don’t think Oracle would disagree), the demands of the two environments are very different. IBM’s attitude, by way of contrast, is that you need a different focus for each of these areas and thus it offers the IBM pureScale Application System for OLTP environments and IBM Smart Analytics Systems for data warehousing.”

Now… no matter how biased you think I am… Maybe it’s time to consider if this strategy of Oracle’s really makes sense?

SLOB on Violin 3000 Series with PCIe Direct Attach

A reader, Alex, asked if I could post a comparative set of tests from my previous 3000 series Infiniband testing, but using the PCIe direct-attached method. I was actually very keen to test this myself, as I wanted to see how close the Infiniband connectivity method could get to the PCIe latencies. Why? Well, PCIe offers the lowest overhead but also causes some HA problems.

When SSDs first came out they were just that: solid state disks – or at least they looked like them. They had the same form factor and plugged into existing disk controllers, but had no spinning magnetic parts. This offered performance benefits, but those benefits were constrained by the very disk controllers they plugged into, which were never designed for this sort of technology. We call this the first generation of flash.

To overcome this architectural limitation, flash vendors came out with a new solution: placing flash on PCIe cards which can then be attached directly to the system board, reducing latency and providing extreme performance. This is what we call the second generation of flash. It is what vendors such as Fusion IO provide – and looking at FIO’s share price you would have to congratulate them on getting to market and making a success of this.

However, there are other architectural limitations to this PCIe approach. One is that you cannot physically share the storage provided by PCIe cards – sure, you can run some sort of sharing software to make it available outside of the server it is plugged into, but that increases latency and defeats the object of having super-fast flash storage plugged right into the system board. Even worse, if the system goes down then that flash (and everything on it) is unavailable. This makes PCIe flash cards a non-starter for HA solutions. If you want HA then the best you can do with them is use them for caching data which is still available on shared storage elsewhere (the Oracle Database Smart Flash Cache being one possible solution).

At Violin we don’t like that though. We don’t believe in spending time and CPU resources (or even worse, human resources) managing a cache of data trying to improve the probability and predictability of cache hits. Not when flash is now available as a tier 1 storage medium, giving faster results whilst using less space, power and cooling.

Another problem with PCIe is that the number of slots on a system board will always be limited – for reasons of heat, power, space etc there will always be a limit beyond which you cannot expand.

And there’s an even bigger problem with PCIe flash cards, one which no PCIe flash vendor can overcome: you cannot replace a PCIe card without taking the server down. That’s hardly the sort of enterprise HA solution that most customers are looking for.

This is where we get to the third generation of flash storage, which is to place the flash memory into arrays which connect via storage fabrics such as fibre-channel or Infiniband. This allows for the flash storage to be shared, to be extended, to offer resilience (e.g. RAID) and to have high-availability features such as online patching and maintenance, hot-swappable components etc.

This is the approach that Violin Memory took when designing their flash memory arrays from the ground up. And it’s an approach which has resulted in both families of array having a host of connectivity features: PCIe (for those who don’t want HA), iSCSI, Fibre-Channel and now Infiniband.

But what does the addition of a fibre-channel gateway do to the latency? Well, it adds a few hundred microseconds. In the scheme of things that’s nothing when legacy disk arrays deliver latencies of >5ms, but when we are talking about flash memory with latencies of <1ms it suddenly becomes a big deal. And that’s why the Infiniband connectivity is so important – because it ostensibly offers the latency of PCIe but with the HA and management features of FC.

So let’s have a look at the latencies of the 3000 series using PCIe direct attach to see how the latency measures up against the Infiniband testing in my previous post:

Filename      Event                          Waits  Time(s)  Lat(us)       IOPS
------------- ------------------------ ------------ -------- ------- ----------
awr_0_1.txt   db file sequential read      308,185       33     107     7,139.2
awr_0_4.txt   db file sequential read    4,166,252      510     122    24,883.1
awr_0_8.txt   db file sequential read    9,146,095    1,245     136    41,569.2
awr_0_16.txt  db file sequential read   19,496,201    3,112     160    70,121.9
awr_0_32.txt  db file sequential read   40,159,185   11,079     275    92,185.0
awr_0_64.txt  db file sequential read   81,342,725   49,049     602    99,060.1

We can see that again the latency is pretty much scaling at a linear rate (each latency figure is simply the total wait time divided by the number of waits, e.g. 33 seconds over 308,185 waits ≈ 107us). And up to 16 readers (which is double the number of CPU cores I have available) the latency remains under 200us. This is very similar to the Infiniband results, where up to (and including) 16 readers I also had <200us latency.

A couple of points to note:

  • Again the lack of CPU capability in my Supermicro servers is prohibiting me from really pushing the arrays – causing the tests above 16 readers to get skewed. I have requested a new set of lab servers with ten-core Westmere-EX CPUs so I just need to sit back and wait for Father Christmas to visit
  • The database block size is 8k
  • To make matters even more complicated, this was actually a RAC system (although I ran the SLOB tests from a single instance)

That last point is worth expanding. I said that PCIe does not allow for HA. That’s not strictly true for Violin, however. In this system I have a pair of Supermicro servers, each connected via PCIe to my single 3205 SLC array, which presents a single LUN that I have partitioned and presented to ASM as a series of ASM disks.

Because ASM does not require SCSI-3 persistent reservations or any other such nastiness, I am able to use this as shared storage and run an 11.2.0.3 RAC and Grid Infrastructure system on it. I’ve run all the usual cable-pulling tests and not managed to break it yet, although I’m not convinced it’s a design I would choose over Infiniband… mainly because the PCIe method does not incorporate the Violin Memory HA Gateway, which gives me the management GUI and an additional layer of protection from partial / unaligned IO.
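For reference, the ASM piece of that setup is nothing exotic. The device names below are invented, but the principle is simply an external-redundancy diskgroup built over the partitions of the Violin LUN (the array itself provides the protection):

-- On the ASM instance: point the discovery string at the partitions
ALTER SYSTEM SET asm_diskstring = '/dev/violin/part*' SCOPE=BOTH;

-- Build a diskgroup across them
CREATE DISKGROUP data EXTERNAL REDUNDANCY
  DISK '/dev/violin/part1', '/dev/violin/part2',
       '/dev/violin/part3', '/dev/violin/part4';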

I now need to go and beg for that bigger server so I can get some serious testing done on the 6000 series array, which is currently laughing at me every time I tickle it with SLOB.