The Real Cost of Enterprise Database Software
September 10, 2013
Storage for DBAs: The strange thing about enterprise databases is that the people who design, manage and support them are often disassociated from the people who pay the bills. In fact, that’s not unusual in enterprise IT, particularly in larger organisations where purchasing departments often sit at the opposite end of the org chart from operations and engineering staff.
I know this doesn’t apply to everyone but I spent many years working in development, operations and consultancy roles without ever having to think about the cost of an Oracle license. It just wasn’t part of my remit. I knew software was expensive, so I occasionally felt guilt when I absolutely insisted that we needed the Enterprise Edition licenses instead of Standard Edition (did we really, or was I just thinking of my CV?) but ultimately my job was to justify the purchase rather than explain the cost.
On the off chance that there are people like me out there who are still a little bit in the dark about pricing, I’m going to use this post to describe the basic price breakdown of a database environment. I also have a semi-hidden agenda for this, which is to demonstrate the surprisingly small proportion of the total cost that comprises the storage system. If you happen to be designing a database environment and you (or your management) think the cost of high-end storage is prohibitive, just keep in mind how little it affects the overall three-year cost in comparison to the benefits it brings.
Pricing a Mid-Range Oracle Database
Let’s take a simple mid-range database environment as our starting point. None of your expensive Oracle RAC licenses, just Enterprise Edition and one or two options running on a two-socket server.
At the moment, on the Oracle Store, a perpetual license for Enterprise Edition is retailing at $47,500 per processor. We’ll deal with the whole per processor thing in a minute. Keep in mind that this is the list price as well. Discounts are never guaranteed, but since this is a purely hypothetical system I’m going to apply a hypothetical 60% discount to the end product later on.
I said one or two options, so I’m going to pick the Partitioning option for this example – but you could easily choose Advanced Compression, Active Data Guard, Spatial or Real Application Testing as they are all currently priced at $11,500 per processor (with the license term being perpetual – if you don’t know the difference between this and named user then I recommend reading this). For the second option I’ll pick one of the cheaper packs… none of us can function without the wait interface anymore, so let’s buy the Tuning Pack for $5,000 per processor.
The Processor Core Factor
I guess we’d better discuss this whole processor thing now. Oracle uses per-core licensing, which means each CPU core needs a license, as opposed to per-socket licensing, which requires one license per physical chip in the server. This is normal practice these days since not all sockets are equal – different chips can have anything from one to ten or more cores in them, making socket-based licensing a challenge for software vendors. Sybase is licensed by the core, as is Microsoft SQL Server from the 2012 release onwards. However, not all cores are equal either… meaning that different processor architectures have to be priced according to their performance.
The solution, in Oracle’s case, is the Oracle Processor Core Factor, which determines a multiplier to be applied to each processor type in order to calculate the number of licenses required. (At the time of writing the latest table is here but always check for an updated version.) So if you have a server with two sockets containing Intel Xeon E5-2690 processors (each of which has eight cores, giving a total of sixteen) you would multiply this by Oracle’s core factor of 0.5 meaning you need a total of 16 x 0.5 = 8 licenses. That’s eight licenses for Enterprise Edition, eight licenses for Partitioning and eight licenses for the Tuning Pack.
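The core-factor arithmetic above can be sketched in a few lines of Python. This is just an illustration of the calculation described in this post – the function name is mine, the 0.5 factor applies only to the Intel chips in this example, and you should always check the current core factor table before quoting real numbers.

```python
import math

def licenses_required(sockets, cores_per_socket, core_factor):
    """Processor licenses needed = total cores x Oracle core factor,
    rounded up to a whole license."""
    return math.ceil(sockets * cores_per_socket * core_factor)

# Two-socket server, eight-core Xeon E5-2690 chips, core factor 0.5:
print(licenses_required(2, 8, 0.5))  # 16 cores x 0.5 = 8 licenses
```

Remember that this count applies to every licensed product on the server: eight licenses each for Enterprise Edition, Partitioning and the Tuning Pack.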
What else do we need? Well there’s the server cost, obviously. A mid-range Xeon-based system isn’t going to be much more than $16,000. Let’s also add the Oracle Linux operating system (one throat to choke!) for which Premier Support is currently listing at $6,897 for three years per system. We’ll need Oracle’s support and maintenance of all these products too – traditionally Oracle sells support at 22% of the net license cost (i.e. what you paid rather than the list price), per year. As with everything in this post, the price / percentage isn’t guaranteed (speak to Oracle if you want a quote) but it’s good enough for this rough sketch.
Finally, we need some storage. Since I’m actually describing from memory an existing environment I’ve worked on in the past, I’m going to use a legacy mid-range disk array priced at $7 per GB – and I want 10TB of usable storage. It’s got some SSD in it and some DRAM cache but obviously it’s still leagues apart from an enterprise flash array.
Price Breakdown
That’s everything. I’m not going to bother with a proper TCO analysis, so these are just the costs of hardware, software and support. If you’ve read this far your peripheral vision will already have taken in the graph below, so I can’t ask you to take a guess… but think about your preconceptions. Of the total price, how much did you think the storage was going to be? And how much of the total did you think would go to the database vendor?
The storage is just 17% of the total, while the database vendor gets a whopping 80%. That’s four-fifths… and they don’t even have to deal with the logistics of shipping and installing a hardware product!
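The whole three-year model can be reproduced as a back-of-the-envelope script using the list prices quoted earlier. All figures are the hypothetical ones from this post (60% discount, 22% annual support on the net license cost, 10TB at $7/GB); the variable names are mine and real quotes will differ.

```python
LICENSES = 8                        # from the core factor calculation
PER_PROC = 47_500 + 11_500 + 5_000  # EE + Partitioning + Tuning Pack
DISCOUNT = 0.60                     # hypothetical discount off list
SUPPORT_RATE = 0.22                 # per year, on the net license cost
YEARS = 3

net_license = LICENSES * PER_PROC * (1 - DISCOUNT)  # $204,800
support = int(net_license * SUPPORT_RATE * YEARS)   # $135,168
server = 16_000                                     # mid-range Xeon box
ol_support = 6_897                                  # Oracle Linux, 3 years
storage = 10_000 * 7                                # 10TB at $7/GB

total = int(net_license) + support + server + ol_support + storage
vendor_share = (int(net_license) + support + ol_support) / total
storage_share = storage / total

print(f"total:           ${total:,}")     # roughly $430k
print(f"database vendor: {vendor_share:.0%}")   # ~80%
print(f"storage:         {storage_share:.0%}")  # ~16-17%
```

The exact percentages shift slightly depending on rounding, but the shape of the result is the point: the database vendor takes around four-fifths of the money.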
Still, the total price is “only” $430k, so it’s not in the millions of dollars, plus you might be able to negotiate a better discount. But ask yourself this: what would happen if you added Oracle Real Application Clusters (currently listing at $23,000 per processor) to the mix? You’d need to add a whole set of additional nodes too. The price just went through the roof. What if you used a big 80-core NUMA server… thereby increasing the license cost by a factor of five (16 cores to 80)? Kerching!
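To see how sharply the bill scales, here is a small sketch using the same hypothetical list prices (the function name is mine; RAC’s $23,000 per processor is the list price mentioned above, and the extra hardware for additional RAC nodes isn’t even counted):

```python
def license_list_price(cores, core_factor=0.5, with_rac=False):
    """Total license list price for one server, before any discount."""
    per_proc = 47_500 + 11_500 + 5_000  # EE + Partitioning + Tuning Pack
    if with_rac:
        per_proc += 23_000              # Real Application Clusters
    return cores * core_factor * per_proc

base = license_list_price(16)  # two 8-core sockets: $512,000 list
big = license_list_price(80)   # 80-core NUMA server
print(big / base)              # 5.0 -- five times the license cost
```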
Performance and Cost are Interdependent
There are two points I want to make here. One is that the cost of storage is often relatively small in terms of the total cost. If a large amount of money is being spent on licensing the environment it makes sense to ensure that the storage enables better performance, i.e. results in a better return on investment.
The second point is more subtle – but even more important. Look at the price calculations above and think about how important the number of CPU cores is. It makes a massive difference to the overall cost, right? So if that’s the case, how important do you think it is that you use the best CPUs? If CPU type A gives significantly better performance than CPU type B, it’s imperative that you use the former because the (license-related) cost of adding more CPU is prohibitive.
Yet many environments are held back by CPUs that are stuck waiting on I/O. This is bad news for end users and applications, bad news for batch jobs and backups. But most of all, this is terrible news for data centre economics, because those CPUs are much, much more expensive than the price you pay to put them in the server.
There is more to come on this subject…
Thanks for the analysis. We went through a similar process in our environment to evaluate the cost associated with our systems, and to get to ‘one throat to choke’, which has made support issues a breeze compared to a multi-vendor database environment.
From whom did you buy the hardware? Was everything from the one vendor, or just the software?
We bought the Oracle Database Appliances (ODAs), and have been migrating our legacy heterogeneous environments (AIX, Windows, Linux) to the ODAs, also utilizing the opportunity to consolidate where applicable.
I have an issue with the ODA. Computer systems are like any system, in that they are held back by the weakest link in the chain: the bottleneck. In the case of the ODA, top-bin Xeon E5-2690 processors are weighed down by spinning magnetic hard drives and midrange SSDs. A bit like putting a massive engine in a cart with wooden wheels.
I’m not criticising your choices by the way – if it works for you then that’s a success. But in my view, the ODA is fundamentally flawed.
I understand your criticisms, and agree, there are definitely issues with the system, such as limited storage capacity and memory sitting alongside such powerful chips.
From our perspective and what our current situation/needs are, it is a great low cost consolidation platform, providing us with a very easy system to setup and maintain providing redundant storage, and with what we consider the systems usable lifetime, provides us enough of a window to plan for the next system, whether it be something like Exadata or cloud/hybrid cloud systems.
Our goal was to get off of unsupported and difficult-to-support hardware, storage, and operating systems that cost five to ten times as much, required more support staff and custom setups, and performed worse than the ODAs did out of the box.
We could debate how the current systems we support came to be, and that would be valid, but the reality for us was we at least needed to get into this century with our architecture, which allows us to plan and prepare for the next 3-5 years for the next system.
So far, the system has been a great success, not without a few headaches, but definitely orders of magnitude fewer than our current environment. The ODAs are easier to troubleshoot (‘one throat to choke’)… one of the biggest benefits, for us, has been not having to go to the system admin, network engineer and storage admin to resolve problems, and then have each vendor pointing the finger at the other.
Sorry for the ramble. Thanks for your articles and blog – I am reading the Database Consolidation articles, and appreciate you sharing your expertise and the links to the BT Case Study. You have provided us with great information and food for thought.
Interesting. It goes without saying that you are not alone – the IT press is full of discussions about shiny new architectures and success stories in which company X saved millions moving to a cloud-based, in-memory, social-media-enabled, big data, real-time analytics solution that transformed their business. Often these stories come from the vendors who want to sell you products and services, with use cases ghost-written for IT managers who want to get a promotion.
The reality, which I see all the time when I visit real customers and talk about real problems, is that most people are struggling to support multiple and disparate legacy architectures, none of which conform to any shared standard, resulting in an operations nightmare where fire fighting is the norm and innovation is a pipe dream.
Sorry to sound cynical – I’m actually very excited about the future of our industry – but anyone who fails to acknowledge the problems customers have with legacy systems needs to spend a few years back on the front line before they get my attention.
I completely agree and don’t think you are being cynical at all, the white papers, buzzwords, sales pitches, next greatest thing, etc just muddy the waters on the reality of what is and what needs to be done. We are hit daily with new projects, new ideas, things that have to be done asap, service issues, fires, etc, and this robs us of the time and resources to innovate.
I also think that the legacy architecture I support was not built and tuned for the systems we had running on it. With separate server/storage/network admins, along with the application developers and COTS vendors, identifying and fixing issues becomes a much tougher job. The shiny box, for us, eliminated a large amount of the admin overhead, finger pointing, and control issues that existed on the legacy systems, and by and large is allowing us to really look at the system bottlenecks from a standard and known configuration, which in some respects is sad because it points to an organizational failing with the legacy systems.
The future is bright though, gonna need to wear shades!
Excellent! I am a big fan of your articles but this one is really, really good. I have looked several times at the Oracle licensing document but this post definitely helped me understand it better. Hope it does the same for others.
Thanks. I have more to say on this subject but as usual… so much to do, so little time…
And by the way Emmanuel good luck with your OCM. I think it’s a great way for someone in the DBA world to give themselves the ultimate test.