Overprovisioning: The Curse Of The Cloud

I want you to imagine that you check in to a nice hotel. You’ve had a good day and you feel like treating yourself, so you decide to order breakfast in your room for the following morning. Why not? You fill out the menu checkboxes… Let’s see now: granola, toast, coffee, some fruit. Maybe a juice. That will do nicely.

You hang the menu on the door outside, but later a knock at the door brings bad news: You can only order a maximum of three items for breakfast. What? That’s crazy… but no amount of arguing will change their rules. Yet you really don’t want to choose just three of your five items. So what do you do? The answer is simple: you pay for a second hotel room so you can order a second breakfast.

Welcome to the world of overprovisioning.

Overprovisioning = Inefficiency

Overprovisioning is the act of deploying – and paying for – resources you don’t need, usually as a compromise to get enough of some other resource. It’s a technical challenge which results in a commercial or financial penalty. More simply, it’s just inefficiency.

The history of Information Technology is full of examples of this as well as technologies to overcome it: virtualization is a solution designed to overcome the inefficiency of deploying multiple physical servers; containerisation overcomes the inefficiency of virtualising a complete operating system many times… it’s all about being more efficient so you don’t have to pay for resources you don’t really need.

In the cloud, the biggest source of overprovisioning is the way that cloud resources like compute, memory, network bandwidth, storage capacity and performance are packaged together. If you need one of these in abundance, the chances are you will need to pay for more of the others regardless of whether they are required or not.

Overprovisioning = Compromise

As an example, at the time of writing, Google Cloud Platform’s pd-balanced block storage options provide 6 read IOPS and 6 write IOPS per GB of capacity:

* Persistent disk IOPS and throughput performance depends on disk size, instance vCPU count, and I/O block size, among other factors.

Consider a 1TB database with a reasonable requirement of 30,000 read IOPS during peak load. To build a solution capable of this, 5000GB (i.e. 5TB) of capacity would need to be provisioned… meaning 80% of the capacity is wasted!
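
To make that arithmetic concrete, here is a minimal Python sketch of the calculation. The 6 IOPS-per-GB ratio is the pd-balanced figure quoted above; the database size and IOPS target are the example numbers from this post.

```python
# Sketch: how much pd-balanced capacity must be provisioned to hit an IOPS target?
IOPS_PER_GB = 6                # pd-balanced read IOPS per GB of provisioned capacity
db_size_gb = 1_000             # the database only needs 1TB of space
required_read_iops = 30_000    # peak read IOPS requirement

# Capacity is dictated by the IOPS requirement, not by the data volume
provisioned_gb = max(db_size_gb, required_read_iops / IOPS_PER_GB)
wasted_gb = provisioned_gb - db_size_gb

print(f"Provisioned capacity: {provisioned_gb:,.0f} GB")
print(f"Capacity wasted     : {wasted_gb:,.0f} GB ({wasted_gb / provisioned_gb:.0%})")
# -> 5,000 GB provisioned, 4,000 GB (80%) wasted
```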

Worse still, the “Read IOPS per instance” limits in GCP’s performance documentation tell us that some of the available instance types may not be able to hit our 30,000 requirement, meaning we may have to (over)provision a larger virtual machine type and pay for cores and RAM that aren’t necessary (by the way, I’m not picking on GCP here; this is common to all public clouds).

But the real sucker punch is that, if this database is licensed by CPU cores (e.g. Oracle, SQL Server) and we are having to overprovision CPU cores to get the required IOPS numbers, we now have to pay for additional, unwanted – and very expensive – database licenses.
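
To give a feel for the scale of that licensing penalty, here is a rough sketch; the core counts and per-core licence price are entirely hypothetical, since real database licensing depends on edition, core factors and your own contract.

```python
# Sketch: the licensing knock-on of overprovisioning CPU cores (all numbers hypothetical)
cores_needed_for_workload = 8     # what the database actually needs
cores_needed_for_iops = 16        # smallest VM shape that delivers the required IOPS
licence_price_per_core = 20_000   # hypothetical per-core licence cost

extra_cores = cores_needed_for_iops - cores_needed_for_workload
print(f"Paying for {extra_cores} unwanted cores "
      f"= {extra_cores * licence_price_per_core:,} in extra licences")
```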

Overprovisioning = Overpaying

My (old) front door

Let’s not imagine that this is a new phenomenon. If you’ve ever over-specced a server in your data centre (me), if you’ve ever convinced your boss that you need the Enterprise Edition of something because you thought it would be better for your career prospects (also me), or if you’ve ever spent £350 on a thermal imaging camera just so you can win an argument about whether you need a new front door (I neither admit nor deny this) then you have been overprovisioning.

It’s just that the whole nature of cloud computing, with its self-service, on-demand, limitlessly-scalable characteristics, makes it so easy to overprovision things all the time. So while the amounts may seem small on the cloud provider’s price-per-hour list, when you multiply them by the number of VMs, the number of regions and the number of hours in a year, they start to look massive on your bill.
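
As a back-of-the-envelope illustration of how that multiplication plays out (the hourly rate, VM count and region count below are placeholders, not real list prices):

```python
# Sketch: a harmless-looking hourly price multiplied out to an annual bill
price_per_hour = 0.40        # looks tiny on the pricing page (illustrative)
vm_count = 50                # VMs of this type
regions = 3                  # deployed across three regions
hours_per_year = 24 * 365

annual_cost = price_per_hour * vm_count * regions * hours_per_year
print(f"Annual cost: ${annual_cost:,.0f}")   # -> $525,600
```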

And when you consider the knock-on effects on database licensing, things really get painful. But let’s save that for the next blog post…

Choosing The Right Path To The Cloud

What happens when customers with on-prem databases decide they want to embrace the public cloud? If you’ve been following the story so far, it is my assertion that most of “the easy stuff” has already moved to the cloud: backups, websites, test/dev suites, videos of cats etc. We are now in the next phase of enterprise cloud adoption, where all the difficult, complex, gnarly stuff is being considered – and that usually means business-critical databases.

The guys at IBM Cloud have a name for this: they call it “Chapter Two”. In fact, here’s a quote from page one of a recent IBM annual report (emphasis added by me):

“… the most challenging and complex work of these digital transformations still lies ahead. We call this work ‘Chapter 2,’ in which our clients modernize and move their mission-critical workloads to the cloud, and infuse AI deep into the decision-making workflows of their business.”

It seems that IBM knows it missed out on Chapter 1, but is determined to have a different result for this second, complex wave of cloud transformation. It has a long way to go to catch up, though, because Microsoft and AWS are dominating this market right now – while Oracle and Google are both racing hard to build their own shares of this massive opportunity.

Regardless of which public cloud is being considered, the question for most customers isn’t so much “Who?” as the more thorny issue of “How?”

So with that in mind, let’s have a look at the three main approaches to moving complex database applications into the public cloud.

Three Journeys To The Public Cloud

When it comes to existing, business-critical, database-backed applications, there are three high-level methods to consider when moving to the cloud. Yes, that’s right, migrating to the public cloud is as easy as 1, 2, 3…

1. The ‘Lift and Shift’ Approach [IaaS]

The chances are that your on-prem database is going to be one of the usual suspects: Oracle, SQL Server, Postgres, MySQL, DB/2 etc. And as probability would have it, that database is more than likely running on Linux or Windows (if you are still running on big iron UNIX or – heaven help you – some kind of mainframe, you can leave now please) and there’s a fair chance you have a virtualization layer in there too. So just pick it up and wazz it into an Infrastructure-as-a-Service (IaaS) offering, will you? Quit fooling around reading this, you could have done it by now.

Welcome to lift and shift. You have now immediately realized the benefits of the cloud: all that on-prem hardware has been turned off and junked, the Capex bills have been replaced by monthly Opex costs to the cloud vendor and the DBAs have been rebadged as Site Reliability Engineers (SREs) and given new, cooler t-shirts to wear. Somebody give the CIO a bonus!

Of course, life is never this simple and there are inevitable pros and cons. IaaS still has to be managed by your own operations teams (which is expensive), database licenses, where applicable, still have to be managed (and are expensive) and those cloud infrastructure bills are looking awfully large. Did somebody say the cloud was going to save money?

Performance is a problem too. The Dream Of The Cloud™️ is to have infinite scale of resources on demand, but your architecture was designed for on-prem and simply cannot take advantage of cloud scalability. When the DBAs (sorry, SREs) deployed this in the cloud, they had to choose between architecting for the average workload – which means performance sucks at peak times – or architecting for the max, which is way more expensive. Inevitably, the compromise fell on the side of cost, so the result is that application latency is high, user experience is poor and nobody dares to run any analytical workloads for fear of taking the whole platform down.

Maybe there’s an alternative?

2. Managed Database Services [PaaS]

Every cloud vendor has managed database offerings – in fact, most have a plethora of different offerings. Microsoft, for example, has Azure SQL Database as well as Cosmos DB. Oracle has the managed version of its eponymous database, Google has Cloud SQL and AWS has so many database services that there aren’t enough electrons on the internet to list them all. So why choose managed databases?

The Dream Of The Cloud™️ is to rid your business of all the low-level drudgery that comes with running IT infrastructure, so that your operations staff can rebadge themselves as DevOps and spend their time on more valuable activities, like drinking artisan coffees or breaking CI/CD pipelines. IaaS doesn’t really deliver on that dream, but PaaS gets a lot closer. Now, the cloud vendor takes care of much more drudgery and also – for licensable database products – manages the licenses so that you only pay for what you consume.

Of course, life is never this simple and there are inevitable pros and cons. Managed database services can be notoriously expensive if you need all the enterprise features you took for granted on-prem, while performance can often be problematic. Remember that managed SQL databases are designed for the average workload and not the peak, so if your system is an edge case in any way – if you’re not running with the pack – it won’t be a perfect fit. Maybe far from perfect.

Another potential issue is that many business-critical database applications are full of business logic. Think of Oracle database with PL/SQL packages written by developers long since retired. Can that be easily migrated into Managed Postgres on Azure, or Cloud SQL on GCP? Maybe the code calls UTL_FILE to write files which are then sent elsewhere using UTL_TCP. Try feeding that code into an automatic migration service.

Managed Databases are a great solution for the hundreds of boring databases you may have on-prem. Imagine never having to patch stuff again! But for anything even remotely unusual, or anything that regularly causes you pain, the chances are slim that PaaS will be the right fit.

Of course, there is another option…

3. Refactor to Cloud Native

Ahh, the path of the truly enlightened! Rip up your existing applications and rewrite everything to be cloud native. Fill out a bingo card with words like microservices, containers, serverless, Kubernetes and immutable infrastructure; tick them off one by one as your DevOps team write the whole application in Rust. Move to open source quasi-database platforms like Postgres, Cassandra and Elasticsearch.

Boom! You have now achieved The Dream Of The Cloud™️ which means that your application is truly distributed, scalable and has virtually no performance limits. I sound like I’m being sarcastic, but I’m really not – I have recently been working with customers who have built environments exactly like this and I could not be more impressed with the results (although these were “born-in-the-cloud” companies who were building new application stacks, rather than toiling with the technical debt of “legacy” on-prem apps). It’s the future.

But guess what? Life is never this simple and there are inevitable pros and cons. If you are starting with the baggage of an on-prem deployment which needs to be migrated, this is quite clearly the most complex and time-consuming option. It’s a proper migration project – and everybody in IT has a story about a long-term migration project which ran over time, over budget and ultimately didn’t deliver on its starting goals. Also, it may require specialist skills which your organisation doesn’t have. Do you really want to engage a team of consultants and pay them on a time and materials basis?

No matter which way you look at it, this option is the most expensive and carries the most risk.

So Which Option Is Best?

To state the obvious, there is no best option and everything has to be evaluated on a case by case basis. It makes sense to look at the anticipated lifetime of the application in question, because if it’s only going to be around for another couple of years, why expend the effort of rewriting anything? Just lift and shift, or use PaaS if possible. But most important of all, keep in mind that the options above don’t have to be mutually exclusive. It’s possible to lift and shift multiple applications to achieve an immediate goal of reducing your on-prem data centre footprint, then consider a smaller selection of those for further adaptation to PaaS. It’s also possible to move into the cloud using IaaS or PaaS while, at the same time, starting a longer-term project to refactor to cloud native.

In summary, there is no perfect journey to the cloud. The bigger and more complex the application/database, the more you’ll have to compromise on the expected result. But, after all, when was that not the case in Enterprise IT?

The Battle For Your Databases

There’s a battle going on right now between all of the public cloud vendors – a war in the clouds. And you might be surprised to hear what they are fighting over… They are fighting over you. Or, more specifically, your business-critical databases.

Everybody has something in the cloud these days. On a personal level, we are all keeping our photos, our music and our emails in the cloud. Corporations have followed suit: email, document collaboration and workflow, backups, websites… Almost everything is in the cloud. Almost.

The Big Scary Stuff That Nobody Wants To Move

Pretty much every company with an on-prem presence will have one or more relational databases underpinning their critical applications. Oracle Database, Microsoft SQL Server, PostgreSQL, DB/2 (the forgotten database of yesteryear: it’s still out there, but nobody likes to talk about it), MySQL… these products support mission critical applications like CRM, ERP, e-commerce, all those SAP modules that I can never remember the names of… And in each industry vertical, there are critical systems: healthcare has Electronic Patient Records, retail has its warehouse management platforms, finance has all manner of systems labelled Do Not Touch.

These workloads are the last bastion of on-prem, the final stand of the privately-managed data centre. And just like mainframes, on-prem may never completely die, but we should expect to see it fade away this decade. The challenge, though, is the inertia caused by such massive amounts of complexity and the associated risk of disturbing it. I have witnessed DBA teams who draw lots over which unfortunate will have to log on to “that database”, the one in the corner that nobody understands or wants to touch when it’s working ok. So how are they going to migrate that entire thing into AWS or Azure? Everybody knows a story about an eighteen-month migration project that overran budget by 1000% and then failed, right?

The View From The Clouds

So you may ask, if all this complex, gnarly stuff is full of risk, why do the hyperscalers want it? The answer is, because this is the biggest game left on the hunting ground. These vast technology stacks are the crown jewels of on-prem data estates. If you are Cloud Vendor A, there are some important reasons why you really want to capture this workload into your cloud:

  1. Big applications and databases require a large recurring spend on premium cloud infrastructure
  2. Customers are used to spending large amounts of money to run these services
  3. The surrounding application ecosystem offers potential for the upsell of further cloud services (analytics, AI, business intelligence etc)
  4. Once that workload comes into your cloud, it’s probably never leaving. In other words, it’s a long-term guaranteed revenue stream.

The last point is especially important: vendors use the term sticky to describe workloads like this. The effort of migrating all that sensitive, critical data and all that impenetrable business logic (written ten years ago by developers who have long since moved on) means you are never going to want to do this more than once. Once it’s in, it’s in.

A Massive Anchor

Working with one of the hyperscalers, I have heard these databases described as anchor workloads (credit: Kellyn Pot’vin Gorman) because they are what holds back the migration of large, juicy and complex environments into the public cloud. Like the biggest beast on the savannah, they are the hardest to take down… but a successful capture means everybody gets to eat until they are full.

So if this is you – if you are in fact a massive anchor – it’s probably worth keeping this in mind. Migrating your complex, challenging workload to the public cloud might seem like a mammoth task from your perspective, but to the hyperscalers you are the goose that lays the golden egg. And they can’t wait to get cracking.

Side note: I originally planned to call this post “Cloud Wars”, but I discovered that my former Oracle colleague, the inestimable Bob Evans, had beaten me to it…

How To Look Stupid (Part #612)

Now is the winter of our discontent. But rather than dwell on what a terrible year 2020 has been, I thought I’d make my final post of the year something more positive… so I am going to look back on one of the (many) times I made a fool of myself, in the hope that 2021 will give me the chance to do so again.

When Computers Go Bad

In the late 1990s, I was fresh out of university and working in my first job, for a small company (5 people!) at London’s Heathrow Airport, as a developer and database admin. We provided cargo handling software for all of the big airlines and freight companies. And on this particular day, “Dave”* at Air Canada had a problem with his system.

My company’s software managed the customs clearance of all inbound air freight for most of the airport. In order for inbound freight to leave the secure warehouses on a truck, this software (which, for Air Canada, ran on their main HPUX server) would send a message to the central HM Customs computer and then, upon receiving clearance, print out an official “air waybill” document. The waybill was legal proof that goods had clearance to leave the warehouse: no waybill = no clearance = no freight.

An hour ago, Dave had called in with a major problem: goods were being cleared by customs, but no waybills were being printed. Air Canada now had a queue of lorries backed up at the warehouse and a crew that couldn’t do any work. There was nothing wrong with the printers, it was our software. Fix it, Dave begged us. Fix it now!

When DBAs Go Rogue

A senior colleague of mine, Denis**, was working on the problem and trying to test a fix on our lab system. He was also dialled in to Air Canada’s production system, on which our software ran – a crucial fact which turned out to be very important.

So when he called through to me from the server room to say, “Hey could you reboot the lab box?” I wandered over to his desktop and typed the magic reboot command on the first root window I found. Hey, one terminal session looks like another, right?

“Are you going to reboot it?” called Denis.

“I already have,” I yelled back, mildly irritated.

Denis stuck his head out of the door and stared at me, puzzled. I was then able to watch a whole range of emotions pass over his face: confusion changed to comprehension which in turn became outright horror.

I had just hard rebooted Air Canada’s entire UNIX platform with no warning to them at all.

Knowing When To Own Up

It took a little while for Air Canada to realise what (or who) had happened to them. Remember, this was the 1990s, so big iron UNIX systems took about 15-30 mins to restart – and everybody was connected via dumb terminals which would have just suddenly gone blank.

Fred was a DBA until he accidentally truncated the wrong table

I mainly spent this time in purgatory, thinking about alternative careers, planning my new life in a Tibetan monastery or hoping for a natural disaster to divert attention.

But eventually, my desk phone rang and our receptionist said, “Dave from Air Canada wants to speak to you”.

I can vividly remember the dry mouth, my sweaty palms holding the phone, my voice about three octaves too high.

“Yes?” I stammered.

“I don’t know what you’ve done,” said Dave, “but all the waybills are coming out again now. Thanks very much!”

It’s important, I think, to be honest in these situations. But not that honest. So I let Dave get back to his busy job and made a mental note to confess to what had really happened some time within the next 25 years. And then I filed that next to the other mental note – the one about never, ever typing reboot without triple checking which system you are connected to.

Aspirations for 2021

When I look back at this story – and the many other times in my career when I made myself look stupid – I am grateful for the fact that things turned out ok. The whole of 2020 has felt like an elongated version of the purgatory I experienced above. But, as anybody who has ever rebooted a 1990s-era big iron UNIX server will attest, the login window only appears about ten seconds after you’ve finally admitted to yourself that it’s never coming back.

So let’s hope that 2021, like Dave and his waybill printouts, gets us back on track fast.

* The names of innocent parties have been changed to protect their identities

** Denis really was called Denis though

The Public Cloud: The Hotel For Your Applications

Unless you are Larry Ellison (hi Larry!), the chances are you probably live in a normal house or an apartment, maybe with your family. You have a limited number of bedrooms, so if you want to have friends or relatives come to stay with you, there will come a point where you cannot fit anybody else in without it being uncomfortable. Of course, for a large investment of time and money, you could extend your existing accommodation or maybe buy somewhere bigger, but that feels a bit extreme if you only want to invite a few people On to your Premises for the weekend.

Another option would be to sell up and move into a hotel. Pick the right hotel and you have what is effectively a limitless ability to scale up your accommodation – now everybody can come and stay in comfort. And as an added bonus, hotels take care of many dull or monotonous daily tasks: cooking, cleaning, laundry, valet parking… Freeing up your time so you can concentrate on more important, high-level tasks – like watching Netflix. And the commercial model is different too: you only pay for rooms on the days when you use them. There is no massive up-front capital investment in property, no need to plan for major construction works at the end of your five year property refresh cycle. It’s true pay-as-you-go!

It’s The Cloud, Stupid

The public cloud really is the hotel for your applications and databases. Moving from an investment model to a consumption-based expense model? Tick. Effectively limitless scale on demand? Tick. Being relieved of all the low-level operational tasks that come with running your own infrastructure? Tick. Watching more Netflix? Definite Tick.

But, of course, the public cloud isn’t better (or worse) than On Prem, it’s just different. It has potential benefits, like those above, but it also has potential disadvantages which stem from the fact that it’s a pre-packaged service, a common offering. Everyone has different, unique requirements, but the major cloud providers cannot tailor everything they do to your individual needs – that level of customisation would dilute their profit margins. So you have to adapt your needs to their offering.

To illustrate this, we need to talk about car parking:

Welcome To The Hotel California

So… you decide to uproot your family and move into one of Silicon Valley’s finest hotels (maybe we could call it Hotel California?) so you can take advantage of all those cloud benefits discussed above. But here’s the problem: your $250/day suite only comes with one allocated parking bay in the hotel garage, yet your family has two cars. You can “burst” up by parking in the visitor spaces, but that costs $50/day and there is no guarantee of availability, so the only solution which guarantees you a second allocated bay is to rent a second room from the hotel!

This is an example of how the hotel product doesn’t quite fit with your requirements, so you have to bend your requirement to their offering – at the sacrifice of cost efficiency. (Incurring the cost of a second room that you don’t always need is called overprovisioning.) It happens all the time in every industry: any time a customer has to fit a specific requirement to a vendor’s generic offering, something somewhere won’t quite fit – and the only way to fix it is to pay more.

The public cloud is full of situations like this. The hyperscalers have extensive offerings but their size means they are less flexible to individual needs. Smaller cloud companies can be more attentive to an individual customer’s requirements, but lack the economies of scale of companies like Amazon Web Services, Microsoft and Google, meaning their products are less complete and their prices potentially higher. The only real way to get exactly what you want 100% of the time is… of course… to host your data on your own kit, managed by you, on your premises.

Such A Lovely Place

I should state here for the record that I am not anti-public cloud. Far from it. I just think it’s important to understand the implications of moving to the public cloud. There are a lot of articles written about this journey – and many of them talk about “giving up control of your data”. I’m not sure I entirely buy that argument, other than in a literal data-sovereignty sense, but one thing I believe to be absolutely beyond doubt is that a move to the public cloud will require an inevitable amount of compromise.

That should be the end of this post, but I’m afraid that I cannot now pass up the opportunity to mention one other compromise of the public cloud, purely because it fits into the Hotel California theme. I know, I’m a sucker for a punchline.

You and your family have enjoyed your break at the hotel, but you feel that it’s not completely working – those car parking charges, the way you aren’t allowed to decorate the walls of your room, the way the hotel suddenly discontinued Netflix and replaced it with Crackle. What the …? So you decide to move out, maybe to another hotel or maybe back to your own premises. But that’s when you remember about the egress charges; for every family member checking out of the hotel, you have to pay $50,000. Yikes!

I guess it turns out that, just like with the cloud, you can check out anytime you like… but you can never leave.

Cloud DBA: The Next Generation of Database Administrator?

Don’t drop the ball…

In the previous post, I ranted about – sorry, discussed – the evolution of the DBA role, looking at how many additional functions the database administrator has inherited over the years: code fixer, virtualisation tamer, Linux / Windows juggler, reluctant storage administrator, application server hater, firewall botherer and all-round fixer of any product badged as Oracle.

But the real change I am interested in comes as a result of databases moving into the cloud. Because this exposes the DBA to ownership of a new problem: cost. Specifically, ongoing operational costs – or Opex. It is my belief that this is in fact A New Thing – and New Things are not to be trusted. Sure, in the on prem world, DBAs were involved in decisions concerning capital expenditure (Capex) like the scoping of database servers, the calculation of how many database licenses were needed, the justification of additional license options (e.g. Enterprise Edition instead of Standard Edition). But in most cases, those decisions were made by a collective and then signed off by the business.

My Public Cloud Bill Just Arrived…

Cloud is different. Everything you do in the public cloud costs money. You want to spin up an instance? Kerching. You want to use some SSD storage? Kerching! You want to download copies of your data to an on prem location? Egress charges ahoy… KERCHING!

Bills, Bills, Bills…

Decisions taken by DBAs in the normal course of their day jobs can now have a significant effect on the next invoice from the cloud vendor. Do you remember in the early days of cell phones, if you used your phone a lot you were never entirely sure what the bill would look like at the end of the month? Could be a little more than usual, could be so massive you need a loan from the World Bank. Sometimes, the cloud has a similar feel.

Most cloud vendors have remarkably complex pricing structures (some say this complexity is deliberate!) and this has in fact spawned a whole industry of experts (“cloud economists”) who can help customers understand and reduce their cloud costs, often using the two-step principle of 1) turn stuff off, and 2) negotiate harder for discounts.

Into this new minefield steps that brave warrior, the DBA. Often charged with the apparently simple task of “move that database into the cloud”, not only must a new technical language be learned (e.g. “it’s not a VM in the cloud, it’s an instance”) and a new set of TLAs be absorbed (“In my AWS VPC, I use EC2, EBS, S3 and ZXP”)… but also a new understanding must be gained of what each checkbox and pulldown option does to the operating cost.

Another Plate To Spin

It’s a whole new area of expertise to take on – and it’s complex. What’s more, it’s subtly different between cloud vendors – and even if you only use one cloud, it’s subject to change over time. Usually in the direction of becoming more expensive.

Here’s a simple example: provisioning an instance. You are a DBA (congrats!) and you need to migrate your on prem database into, say, Amazon Web Services. You first of all need to configure a Linux instance and some disks. There are many different ways of doing this – including templates, infrastructure-as-code and so on – but let’s do it in the GUI for fun. First, you’ll need some compute power, so let’s provision some from the Elastic Compute Cloud (EC2). Which type shall we choose?

If you are new to this, there are a lot of options. I mean, really a lot. Let me see now: there are categories of General Purpose, Compute Optimized, Memory Optimized, Accelerated Computing and Storage Optimized. These are just the categories… each one of which contains many types, which in turn contain many options! But “General Purpose” sounds kinda normal, so let’s choose that. Now you need to choose the instance type:

Amazon Web Services – Elastic Compute Cloud choices for General Purpose instance types

Amazon Web Services – EC2 M5 Large instance types

If we go for the M5 instance type, we are told that “This family provides a balance of compute, memory, and network resources, and is a good choice for many applications”. Cool, so now you have to pick the instance size:

This screenshot only shows a fraction of the total choices, with each config of vCPUs and Memory replicated again in the m5d.* range (adds NVMe SSD storage), plus some further options around bare metal. It is a labyrinthine set of options to consider.
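
If you would rather not scroll through the console, a short boto3 sketch like the one below can at least list the vCPU and memory shapes programmatically. It assumes boto3 is installed and AWS credentials plus a default region are already configured; the handful of M5 sizes named are just a sample.

```python
# Sketch: listing a few M5 shapes instead of eyeballing the console
import boto3

ec2 = boto3.client("ec2")
response = ec2.describe_instance_types(
    InstanceTypes=["m5.large", "m5.xlarge", "m5.2xlarge", "m5.4xlarge"]
)

for it in sorted(response["InstanceTypes"],
                 key=lambda i: i["VCpuInfo"]["DefaultVCpus"]):
    vcpus = it["VCpuInfo"]["DefaultVCpus"]
    mem_gib = it["MemoryInfo"]["SizeInMiB"] / 1024
    print(f'{it["InstanceType"]:<12} {vcpus:>3} vCPU  {mem_gib:>6.0f} GiB')
```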

If you haven’t undertaken the myriad training courses for this cloud vendor, how do you know which instance size to choose? Well, maybe the same way that you specced up the config of your on prem database servers before… right? Except most DBAs didn’t do that, they were allocated servers without really playing a part in their procurement. But my real point here is that the choice you make determines the ongoing monthly cost. And there are more choices to make! After all, you are going to need some storage from Elastic Block Store on which to place your database:

Amazon Web Services – Elastic Block Store volume types

Amazon recommends one of two different options for “I/O-intensive NoSQL and relational databases” plus a third for data warehouses. I’ll tell you right now, if your database is even mildly transactional, you will want to use io1 or io2. Whatever you choose, it will have an effect on the monthly cost – you can see this by checking it out on the AWS Calculator.
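
As a rough illustration of how that choice plays out in dollars, here is a sketch comparing gp2 with io1 for the 1TB / 30,000 IOPS example used earlier. The per-GB and per-IOPS prices are illustrative approximations only; always check the AWS Calculator for real numbers.

```python
# Sketch: rough monthly cost of 1TB of EBS as gp2 vs io1 (illustrative prices only)
size_gb = 1_000
provisioned_iops = 30_000

gp2_per_gb_month = 0.10       # illustrative $/GB-month
io1_per_gb_month = 0.125      # illustrative $/GB-month
io1_per_iops_month = 0.065    # illustrative $/provisioned-IOPS-month

gp2_cost = size_gb * gp2_per_gb_month   # gp2 IOPS scale with size and cap at 16,000
io1_cost = size_gb * io1_per_gb_month + provisioned_iops * io1_per_iops_month

print(f"gp2: ${gp2_cost:,.2f}/month (cannot reach 30,000 IOPS at this size anyway)")
print(f"io1: ${io1_cost:,.2f}/month")
```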

And you know what we didn’t even cover at the start? The region – the geographical location in which this instance runs – also changes the cost, sometimes significantly. Pricing for European regions is often surprisingly higher than for regions in the US.

Why This Matters (TL;DR)

What I am trying to show here is that, in the course of provisioning databases in the cloud, DBAs are having to make complicated choices which not only affect the performance of their databases but also the ongoing cost. In fact, it’s a balancing act: performance and cost are two sides of the same coin. Amazon Web Services, in the example above, offers a huge and dazzling array of options, each with different trade-offs between these two dimensions. That’s not a bad thing by the way – I am not criticising AWS for giving us a choice – but it’s bewildering to the uninitiated.

What’s more, if you put a database in Microsoft Azure, or Google Cloud Platform, or Oracle Cloud Infrastructure, or Alibaba Cloud or … I can’t think of any other clouds … then be prepared for the fact that everything changes again.

It’s time for DBAs to learn to juggle with yet another ball.


Evolution of the DBA

In the previous post, I looked at Gartner’s recent assertion that 75% of databases will be deployed to the cloud by 2022 – and that the cloud is now the default platform for managing data.

The massive shift to the public cloud has a lot of implications, many of which have been written about at length over the last few years. But one question I don’t think has been asked enough is: what does this mean for the poor, beleaguered database administrator? Let’s start with a look at the journey DBAs have been on since “the old days”.

DBA 1.0: The (Good) Old Days

Data centres used to contain four distinct tribes of beings living in semi-peaceful co-existence: SysAdmins, DBAs, Network Admins and Storage Admins: Four groups of specialists, each with a distinct skillset and a fairly delineated boundary of responsibility. I say four, it was really three – as everybody who remembers this era will attest, Network Admins were actually mythical creatures who never inhabited their desks; historical evidence now suggests that they were actually just a simple script which automatically closed any ticket with the phrase “No problems were found with the network”.

The database administrator occupied a unique position in this family, because they lived further up in the application stack and so dealt with developers and application owners, business users and sometimes – whisper it – those wondrous beings, the “end users”. Conveniently, this made the DBA the perfect person to blame for almost any problem at any layer in the stack. Application slow? Must be a database problem. Query taking too long? MUST be a database problem. Never mind that the database server doesn’t have enough memory, the developers have no concept of how to code in SQL and the storage system is a RAID5 bag of spanners running on spinning rust… it’s always a database problem. And we know it’s not a networking problem because it says here that “No problems were found with the network”.

One outcome of this “unique” position was that many DBAs had to learn skills outside of their core profession (networking, Linux or Windows admin skills, SQL tuning, PL/SQL decoding, hostage negotiation etc). I’d love to say this thirst for knowledge was due to professional pride, but the best DBAs I ever met simply learned these skills so they could prove they weren’t in the wrong and thus get an easier life. “Oh you think your SQL runs slow because of my database huh? Well if you rewrote it like this, it runs in 10% of the time and doesn’t make all the lights go dim in the data centre, you imbecile…”

DBA 2.0: The IT Generalist

As the data centre evolved and new technologies such as Virtualization, NoSQL, Hadoop and the Cloud became prevalent, the clearly defined roles of yesteryear started to become blurred. In the last decade, we saw the rise of a new creature in the data centre: The IT Generalist. Of course, this is mainly just another way of saying DBA with Extra Responsibilities (but no extra pay). It is now commonplace for DBAs to be managing a multitude of different technologies outside of the traditional RDBMS: many DBAs are managing, at least at some level, VMware clusters or other virtualization platforms; I know DBAs who have had tangles with firewalls and software-defined networking… I have even met a large number of DBAs who admin their All-Flash storage arrays (simpler than the old fashioned disk array, after all).

As a side note, anyone with the job title of “Oracle DBA” also found themselves lumbered with managing any technology which was Oracle-badged – and that’s a lot of stuff. Fusion Middleware, Oracle Linux, Weblogic, Oracle ZFS Appliance, anything running under Automatic Storage Management, even Java! The list goes on… how long before somebody gets a ticket because Tik Tok isn’t working properly?

Larry Ellison might have famously said he wants to get rid of the DBA, but the reality is that the DBA role has just become even more wide-ranging.

DBA 3.0: The Cloud DevOps DBA

Fast forward to 2020 and the DBA is now managing applications running on databases which run in containers on virtual machines in the cloud, probably deployed via some sort of infrastructure-as-code implementation. Hey, the dream of the modern IT organisation is to achieve some utopian level of automation – and it’s the DBA who has the most practice of automating cross-function tasks; they’ve been trying to do it for years just for an easier life. (Note how the dream of “an easier life” motivates so much of DBA behaviour!)

Of course, everything is now DevOps too… right? If you aren’t DevOps, you aren’t in the gang. Remember when everything had to be agile? But, when you scratched the surface, “agile” was just a way of saying “we haven’t documented any of this”. Well, DevOps has taken over from agile as the buzz word of choice. And the literal translation of “DevOps” is “we still didn’t document anything but also we aren’t going to follow any kind of change control procedures or put any of these code releases through anything more than the most primitive of testing routines, so good luck”.

But in this long evolutionary journey, there is one thing that DBAs have never been exposed to … until today. Cost. As a DBA, you may have had to argue for more powerful servers, faster CPUs, more database processor licenses, cost options (“I need the Tuning Pack, damnit!”), but the cloud is a different ball game. A DBA building a database in the public cloud is making decisions which have a direct effect on the (quite possibly massive) monthly bill from AWS / Azure / GCP / Oracle Cloud / other vendor of choice. This is what I wanted to look at in this post before I got massively carried away.

DBAs of the World, Unite!

I’ll be honest, I didn’t intend this post to become some sort of DBA Manifesto, but once I started typing I couldn’t stop. Blogging is like that sometimes. In the next post, we’ll delve a bit deeper into the future of the DBA role, with an angle on cloud costs. In the meantime, let’s summarise:

Everybody knows that the DBA is the humble, hard-working hero of Enterprise IT: dedicated and underpaid, overburdened and undertrained, blamed for everything and thanked for nothing… the DBA really is the Morlock of the data centre, working long nights and hard weekends to keep all those wonderful, spoilt Eloi end users happy*. If you are a DBA, give yourself a pat on the back for surviving this evolutionary journey. If you’re a SysAdmin, be honest: you guys need to buy your DBAs a drink now and then. And if you are a Network Admin: stick to the script.

* If the Morlock and Eloi references aren’t working for you, read this.

Databases Now Live In The Cloud


I recently stumbled across a tech news post which surprised me so much I nearly dropped my mojito. The headline of this article screamed:

Gartner Says the Future of the Database Market Is the Cloud

Now I know what you are thinking… the first two words probably put your cynicism antenna into overdrive. And as for the rest, well duh! You could make a case for any headline which reads “The Future of ____________ is the Cloud”. Databases, Artificial Intelligence, Retail, I.T., video streaming, the global economy… But stick with me, because it gets more interesting:

On-Premises DBMS Revenue Continues to Decrease as DBMS Market Shifts to the Cloud

Yeah, not yet. That’s just a predictable sub-heading, I admit. But now we get to the meat of the article – and it’s the very first sentence which turns everything upside down:

By 2022, 75% of all databases will be deployed or migrated to a cloud platform, with only 5% ever considered for repatriation to on-premises, according to Gartner, Inc.

Boom! By the year 2022, 75% of all databases will be in the cloud! Even with the cloud so ubiquitous these days, that number caused me some surprise.

Also, I have so many questions about this:

  1. Does “a cloud platform” mean the public cloud? One would assume so but the word “public” doesn’t appear anywhere in the article.
  2. Does “all databases” include RDBMS, NoSQL, key-value stores, what? Does it include Microsoft Access?
  3. Is the “75%” measured by the number of individual databases, by capacity, by cost, by the number of instances or by the number of down-trodden DBAs who are trying to survive yet another monumental shift in their roles?
  4. How do databases perform in the public cloud?

Now, I’m writing this in mid-2020, in the middle of the global COVID19 pandemic. The article, which is a year old and so pre-COVID19, makes the prediction that this will come true within the next two years. It doesn’t allow for the possibility of a total meltdown of society or the likelihood that the human race will be replaced by Amazon robots within that timeframe. But, on the assumption that we aren’t all eating out of trash cans by then, I think the four questions above need to be addressed.

Questions 1, 2 and 3 appear to be the domain of the authors of this Gartner report. But question 4 opens up a whole new area for investigation – and that will be the topic of the next set of blog posts. But let’s finish reading the Gartner notes first, because there’s more:

“Cloud is now the default platform for managing data”

One of the report’s authors, long-serving and influential Gartner analyst Merv Adrian, wrote an accompanying blog post in which he makes the assertion that “cloud is now the default platform for managing data”.

And just to make sure nobody misunderstands the strength of this claim, he follows it up with the following, even stronger, remark:

On-premises is the past, and only legacy compatibility or special requirements should keep you there.

Now, there will be people who read this who immediately dismiss it as either obvious (“we’re already in the cloud”) or gross exaggeration (“we aren’t leaving our data centre anytime soon”) – such is the fate of the analyst. But I think this is pretty big. Perhaps the biggest shift of the last few decades, in fact.

Why This Is A Big Deal

The move from mainframes to client/server put more power in the hands of the end users; the move to mobile devices freed us from the constraints of physical locations; the move to virtualization released us from the costs and constraints of big iron; but the move to the cloud is something which carries far greater consequences.

After all, the cloud offers many well-known benefits: almost infinite scalability and flexibility, immunity to geographical constraints, costs which are based on usage (instead of up-front capital expenditure), and a massive ecosystem of prebuilt platforms and services.

And all you have to give up in return is complete control of your data.

Oh and maybe also the predictability of your I.T. costs – remember in the old days of cell phones, when you never exactly knew what your bill would look like at the end of the month? Yeah, like that, but with more zeroes on the end.

Over to Merv to provide the final summary (emphasis is mine):

The message in our research is simple – on-premises is the new legacy.  Cloud is the future. All organizations, big and small, will be using the cloud in increasing amounts. While it is still possible and probable that larger organizations will maintain on-premises systems, increasingly these will be hybrid in nature, supporting both cloud and on-premises.

The two questions I’m going to be asking next are:

  1. What does this shift to the cloud mean for the unrecognised but true hero of the data center, the DBA?
  2. If we are going to be building or migrating all of our databases to the cloud, how do we address the ever-critical question of database performance?

Link to Source Article from Gartner

Link to Merv Adrian blog post

Don’t Call It A Comeback

I’ve Been Here For Years…

Ok, look. I know what I said before: I retired the jersey. But like all of the best superheroes, I’ve been forced to come out of retirement and face a fresh challenge… maybe my biggest challenge yet.

Back in 2012, I started this blog at the dawn of a new technology in the data centre: flash memory, also known as solid state storage. My aim was to fight ignorance and misinformation by shining the light of truth upon the facts of storage. Yes, I just used the phrase “the light of truth”, get over it, this is serious. Over five years and more than 200 blog posts, I oversaw the emergence of flash as the dominant storage technology for tier one workloads (basically, databases plus other less interesting stuff). I’m not claiming 100% of the credit here, other people clearly contributed, but it’s fair to say* that without me you would all still be using hard disk drives and putting up with >10ms latencies. Thus I retired to my beach house, secure in the knowledge that my legend was cemented into history.

But then, one day, everything changed…

Everybody knows that Information Technology moves in phases, waves and cycles. Mainframes, client/server, three-tier architectures, virtualization, NoSQL, cloud… every technology seems to get its moment in the sun… much like me recently, relaxing by the pool with a well-earned mojito. And it just so happened that on this particular day, while waiting for a refill, I stumbled across a tech news article which planted the seed of a new idea… a new vision of the future… a new mission for the old avenger.

It’s time to pull on the costume and give the world the superhero it needs, not the superhero it wants…

Guess who’s back?

* It’s actually not fair to say that at all, but it’s been a while since I last blogged so I have a lot of hyperbole to get off my chest.

The Final Post: Hardware Is Dead

Hanging up the jersey

Well, my friends, this is it. The time has come to retire the flashdba jersey after more than seven years of fun and frolics. In part one of this post, I looked back at my time in the All-Flash storage industry and marvelled at the crazy, Game of Thrones-style chaos that saw so many companies arrive, fight, merge, split up and burn out. Throughout that time, I wrote articles on this blog site which attempted to explain the technical aspects of All-Flash as the industry went from niche to mainstream. Like many technical bloggers, I found this writing process enjoyable and fulfilling, because it helped me put some order to my own thoughts on the subject. But back in 2017, something changed and my blogs became less and less frequent… eventually leading here. I’ll explain why in a minute, but first we need to talk about the title of this post.

Hardware Is Meh

Back in my Oracle days, I worked with a product called Exadata – a converged database appliance which Oracle marketed as “hardware and software engineered to work together”. For a time, Oracle’s “Engineered Systems” were the future of the company and, therefore, the epicentre of their marketing campaigns. Today? It’s all about the Oracle Cloud. And this is actually a perfect representation of the I.T. industry as a whole… because, here in 2019, nobody wants to talk about hardware anymore. Whether it’s hyper-converged systems, All-Flash storage, “Engineered” database appliances or basic server and networking infrastructure, hardware is just not cool anymore.

For a long time, companies have purchased hardware systems as a capital expense, the cost then being written off over a number of years, at which point the dreaded hardware refresh is required. Choosing the correct specifications for hardware (capacity, performance, number of ports etc) has always been extremely challenging because business is unpredictable: buy too small and you will need to upgrade at some point down the line, which could be expensive; buy too big and you are overpaying for resources you may never use. And also, if you are a small company or a startup, those capital expenses can be very hard to fund while you wait for revenue to build.

Today, nobody needs to do this anymore. The cloud – and in particular the public cloud – allows companies to consume exactly what they need, just when they need it – and fund it as an operating expense, with complete flexibility. One of the great joys of the public cloud is that hardware has been commoditised and abstracted to such a degree that you just don’t need to care about it anymore. Serverless, you might say… (IT has always been fond of a ridiculous buzzword.)

The Vendor View: AWS Is The New Enemy

For infrastructure vendors, the industry has reached a new tipping point. A few years ago, if you worked in sales for a storage startup (like me), you found business by targeting EMC customers who were unhappy with the prices they were paying / service they were getting / quality of steaks being bought for them by their EMC rep. Ditto, to a lesser extent, with HP and IBM, but EMC was the big gorilla of the marketplace. Today, everybody in storage has a new number one enemy: Amazon Web Services, with Microsoft Azure and Google Cloud Platform making up the top three. But make no mistake, AWS is eating everybody’s lunch – and the biggest challenge for the rest is that in many customers’ eyes, the public cloud is Amazon Web Services. (EMC, meanwhile, doesn’t even exist anymore but is instead a part of Dell… that would have been impossible to imagine five years ago.)

Cloud ≠ Public Cloud

However, nobody (sane) is predicting that 100% of workloads will end up in the public cloud (and let’s be honest now, when we say “public cloud” we basically mean AWS, Azure and GCP). For some companies – where I.T. is not their core business – it makes perfect sense to do everything in the cloud. But for others, various reasons relating to control, risk, performance, security and regulation will mean that at least some data remains on premises, in private or hybrid clouds. You can argue among yourselves about how much.

So, for those people who still require their own infrastructure, what now? Once you’ve seen how easy it is to use the public cloud, sampled all the rich functionality of AWS and fallen into the trap of having staff paying for AWS instances on their credit cards (so-called “Shadow IT”), how do you go back to the old days of five-year up-front capital investments into large boxes of tin which sit in the corner of your data centre and remain stubbornly inflexible?

Consumption-Based Infrastructure


OK, let’s get to the conclusion. A couple of years ago, Kaminario (my employer) decided to exit the hardware business and become a software company. Like most (almost all) All-Flash storage vendors, Kaminario uses commodity whitebox components (basically, Intel x86 servers and enterprise-class SSDs) for the hardware chassis and then runs their own software on top to turn them into high-performance, highly-resilient and feature-rich storage platforms. Everybody does it: DellEMC’s XtremIO, Pure Storage, Kaminario, HP Nimble, NetApp… all of the differentiation in the AFA business is in software. So why purchase hardware components, manufacture and integrate them, keep them in inventory and then pass on all that extra cost to customers when your core business is actually software?

Kaminario decided to take a new route by disaggregating the hardware from the software and then handing over the hardware part to someone who already sells millions of hardware units all around the globe. Now, when you buy a Kaminario storage array, you get exactly the same physical appliance, but you (or your reseller) actually buy the hardware from Tech Data at commodity component cost. You then buy a consumption-based license to use the software from Kaminario based on the number of terabytes of data stored. This can be on a monthly Pay As You Go model or via a pre-paid subscription for a number of years. In a real sense, it is the cloud consumption model for people who require on-prem infrastructure.
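
A crude sketch of why the per-terabyte consumption model appeals, using entirely made-up prices and utilisation figures:

```python
# Sketch: licensing 100% of an array's capacity vs only the terabytes actually stored
array_capacity_tb = 100
tb_actually_stored = 60        # arrays are rarely filled to the brim
price_per_tb_month = 50        # hypothetical software licence rate

pay_for_everything = array_capacity_tb * price_per_tb_month
pay_for_what_you_use = tb_actually_stored * price_per_tb_month

print(f"Licensing the full array : ${pay_for_everything:,}/month")
print(f"Consumption-based licence: ${pay_for_what_you_use:,}/month "
      f"({1 - tb_actually_stored / array_capacity_tb:.0%} saved)")
```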

There are all sorts of benefits to this (most customers never fill their storage arrays above 80% capacity, so why always pay for 100%?), but I’m not going to delve into them here because this is not a sales pitch, it’s an explanation for what I did next.

What I Did Next

Seeing as Kaminario decided to make a momentous shift, I thought it was a good time to make one of my own. So, two years ago, I took the decision to leave the world of technical presales and become a software sales executive. As in, a quota-carrying, non-technical, commercial sales guy with targets to hit and commission to earn. Presales people also earn commission, but are far more protected from the “lumpy” highs and lows that come with complex and lengthy high-value sales cycles (what sales people call “big ticket sales”). In commercial sales, the highs are higher and the lows are lower – and the risks are definitely riskier. Since my new role coincided with the company going through an entire change of business model, the risk was pretty hard to quantify, but I’m pleased to say that 2018 was the company’s best ever year, not just globally but also in the territory that I now manage (the United Kingdom).

More importantly for me, I’m now two years into this new journey and I have zero regrets about the decision to leave my technical past behind. I’ve learnt more than ever before (often the hard way) and I’ve experienced all the highs and lows one might expect, but I still get the same excitement from this role that I used to get in the early days of my technical career.

So, the time is right to hang up the technical jersey and bid flashdba farewell. It’s been fun and I want to say thank you to everybody who read, commented, agreed or disagreed with my content. There are almost 200 posts and pages on this site which I will leave here in the hope that they remain useful to others – and as a sort of virtual monument to my former career.

In the meantime, I’ve got to go now, because there are meetings to be had, customers to be entertained, dinners to be expensed and (hopefully) deals to be closed. Farewell, my friends, stay in touch… and remember, if you need to buy something… call me, yeah?

— flashdba —

[September 2020 Spoiler Alert: I couldn’t stay away]