All Flash Arrays: Scale Up vs Scale Out (Part 2)

In the first post on the subject of Scale Up versus Scale Out, we looked at the reasons why scalability is a key requirement for storage platforms, as well as discussing the limits of Scale Up only architectures, i.e. systems where more capacity is added to the same fixed number of controllers. In this article, we look at the alternative architecture known as Scale Out.

Scale Out – Adding Performance

In a scale out architecture, the possibility exists to add more storage controllers, thereby adding more performance capability. You may remember that the performance of a storage array is approximately proportional to the number of (and power of) its storage controllers, while the capacity is determined by the amount of flash or SSD media addressed by those controllers.

In this scenario, the base model typically consists of a pair of controllers (after all, at least two are required to provide resiliency). Most scale-out-capable storage arrays work by adding more pairs of these controllers, which are considered indivisible units and have names like “K-Block” (Kaminario K2) or “X-Brick” (DellEMC XtremIO). For now, let’s just call them controller pairs.

There are a number of technical challenges to overcome when building a system which can scale out. In the first post, we covered Active/Passive solutions, where only one controller processes I/O from the underlying media. In this scenario, the performance limitations are determined by the characteristics of the single active node – with the remaining (passive) node simply waiting to spring into action in the event of a failover. Clearly this type of architecture makes less sense as the number of nodes increases above two, since the additional nodes will also be passive and therefore adding little benefit.

Scale out architectures, then, typically employ an active/active solution whereby each controller contributes more performance capacity as it joins the system. And that means building a high-availability cluster, with all of the associated cluster management technologies that entails (failover, virtual IPs, protection against split brain scenarios, etc). No wonder some vendors stick to Scale Up only.

Of course, the biggest issue with a Scale Out only architecture is the question of what happens when additional capacity needs to be added. The answer is that another controller or set of controllers must be added too, complete with their attached storage media – but the controllers are an expensive and unnecessary addition if the only requirement is simply more capacity.

Scale Up and Scale Out – The Perfect Solution?

So what we’ve seen here is that a Scale Up architecture allows for more capacity to be added to existing controllers, while a Scale Out architecture allows for more performance to be added to existing capacity. It would therefore seem logical that the ultimate goal is to build a system which can (independently) scale both up and out. Scaling up allows more capacity to be added without the cost of more controllers. Scaling out allows more performance (controllers) to be added when required. And thus the characteristics of the storage platform can be extended in either of these two dimensions as needed. Perfect?

Arrays which can support both scale up and scale out have been surprisingly rare in the All Flash market so far, but they do exist. The concept is simple: customers that purchase storage typically do so over a three to five year period. Most people simply cannot guess how their requirements will change in that period of time… more users, more customers, more data? Yes, probably – but how much and over what time period? Choosing an architecture which allows independent (non-disruptive) scale of both capacity and performance insures against the risk associated with capacity planning, while also allowing customers to start off by purchasing only what they need today and then expanding at their own pace. Sounds a bit like cloud computing when you put it like that, doesn’t it?

Which conveniently leads us to…

The Future: Dynamically Composable / Disaggregated Storage?

None of us know how the future will look, especially in the technology industry. But one vision of the future comes from my employer, Kaminario, who is one of a number of companies exploring the concept of composable infrastructure. I think this is a very interesting new direction, which is why I’m writing about it here – but since I’m an employee of this vendor I must first give you a mandatory sales warning and you should treat the next paragraphs with a healthy dose of “well he would say that, wouldn’t he?”

In a dynamically composable storage environment, the two elements we have been discussing above (storage controllers and storage media) become completely disaggregated so that any shelf of media can be addressed by any set of controllers. These sets of systems can then be dynamically composed, so that – out of a set of multiple shelves of media and storage controllers – subsets of the two can be brought together to form virtual private arrays designed to serve specific applications.
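To make the idea of composing virtual private arrays a little more concrete, here is a toy Python sketch of the concept. It is purely illustrative: the class names and numbers are my own inventions, not Kaminario's implementation or API.

# Conceptual sketch only: a toy model of composing "virtual private arrays"
# from disaggregated pools of controllers and media shelves. All names and
# figures are illustrative, not any vendor's actual design.

from dataclasses import dataclass, field

@dataclass
class Controller:
    name: str
    perf_units: int          # abstract performance contribution

@dataclass
class MediaShelf:
    name: str
    capacity_tb: int

@dataclass
class VirtualArray:
    name: str
    controllers: list = field(default_factory=list)
    shelves: list = field(default_factory=list)

    @property
    def performance(self):
        return sum(c.perf_units for c in self.controllers)

    @property
    def capacity_tb(self):
        return sum(s.capacity_tb for s in self.shelves)

# A shared pool of resources...
pool_controllers = [Controller(f"ctrl-{i}", perf_units=1) for i in range(8)]
pool_shelves = [MediaShelf(f"shelf-{i}", capacity_tb=50) for i in range(12)]

# ...from which subsets are composed into arrays serving specific applications.
oltp = VirtualArray("oltp", controllers=pool_controllers[:4], shelves=pool_shelves[:2])
archive = VirtualArray("archive", controllers=pool_controllers[4:6], shelves=pool_shelves[2:10])

print(oltp.name, oltp.performance, "perf units,", oltp.capacity_tb, "TB")
print(archive.name, archive.performance, "perf units,", archive.capacity_tb, "TB")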


If you think about the potential of this method of presentation, it opens up many possibilities. For example, the dynamic and non-disruptive reallocation of storage resources offers customers the ability to constantly adapt to unpredictable workloads. Furthermore, concepts from AI can be used to automate this reallocation and even predict changes to requirements in advance.

This is useful for any customer with a complex estate of mixed workloads, but it’s incredibly useful to Cloud Service Providers and MSPs. After all, these organisations have no knowledge of what their customers are doing on their systems or what they will do in the future, so the ability to dynamically adapt performance and capacity requirements could provide a competitive edge.

Conclusion

We all know that I.T. is full of buzzwords, like agility or transformation. Is scalability another one? Maybe it’s in danger of becoming one. But if you think about it, one of the fundamental characteristics of any platform is its ability to scale. It essentially defines the limitations of the platform that you may meet as you grow – and growth is pretty much the point of any business. So take the time to understand what scale actually means in a storage context and you might avoid learning about those limitations the hard way…


All Flash Arrays: Scale Up vs Scale Out (Part 1)

Imagine you want to buy some more storage for your laptop – let’s say an external USB drive for backups. What are the fundamental questions you need to ask before you get down to the thorny issue of price? Typically, there is only one key question:

  • How much capacity do I need?

Of course there will be lesser questions, such as connectivity, brand, colour, weight, what it actually looks like and so on. But those are qualifying questions – ways to filter the drop-down list on Amazon so you have fewer decisions to make.

However, different rules apply when buying enterprise storage: we might care less about colour and more about physical density, power requirements and the support capabilities of the vendor. We might care less about what the product looks like and more about how simple it is to administer. But most of all, for enterprise storage, there are now two fundamental questions instead of one:

  • How much capacity do I need?
  • How much performance do I need?

Of course, there is the further issue of what exactly we mean by “performance”, given that it can be measured in a number of different ways. The answer is dependent on the platform being used: for disk-based storage systems it was typically the number of I/O Operations Per Second (IOPS), while for modern storage systems it is more likely to be the bandwidth (the volume of data read and written per second). And just to add a little spice, when considering IOPS or bandwidth on All Flash array platforms the read/write ratio is also important.
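To make the relationship between these measures concrete, here is a quick back-of-envelope Python sketch. The block sizes and the 90/10 read/write split are assumptions chosen purely for illustration, not figures from any particular array.

# Back-of-envelope relationship between IOPS, block size and bandwidth.
def bandwidth_mb_s(iops, block_size_kb):
    return iops * block_size_kb / 1024

total_iops = 100_000
read_ratio = 0.9            # assumed 90% reads

for block_kb in (8, 32):
    reads = total_iops * read_ratio
    writes = total_iops * (1 - read_ratio)
    print(f"{block_kb}K blocks: "
          f"{bandwidth_mb_s(reads, block_kb):,.0f} MB/s read, "
          f"{bandwidth_mb_s(writes, block_kb):,.0f} MB/s write")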

So the actual requirement for, say, a three-node data warehouse cluster with 200 users might turn out to be:

“I need 50TB of usable capacity and the ability to deliver 1GB/second at 90% reads. What will this cost?”

Are we ready to spec a solution yet? Not quite. First we have to consider Rule #1.

Rule #1: Requirements Change

Most enterprise storage customers purchase their hardware for use over a period of time – with the most typical period being five years. So it stands to reason that whatever your requirements are at the time of purchase, they will change before the platform is retired. In fact, they will change many times. Data volumes will grow, because data volumes only ever get bigger… right? But also, those 200 users might grow to 500 users. The three node cluster might be extended to six nodes. The chances are that, in some way, you will need more performance and/or more capacity.

The truth is that, while customers buy their hardware based on a five year period, now more than ever they cannot even predict what will happen over the next 12 months. Forgive the cliché: the only thing predictable is unpredictability.

So as a customer in need of enterprise storage, what do you do? Clearly you won’t want to purchase all of the capacity and performance you might possibly maybe need in the future. That would be a big up-front investment which may never achieve a return. So your best bet is to purchase a storage platform which can scale. This way you can start with what you need and scale as your requirements grow.

This is where architecture becomes key.

Scalability is an Architectural Decision

You may remember from a previous post that the basic building blocks of any storage array are controllers and media, with various networking devices used to string them together. As a gross simplification, the performance of a storage array is a product of the number of controllers and the power that those controllers have (assuming they aren’t held back by the media). The capacity of a storage array is clearly a product of the amount of media.

In a simple world you would just add more media when you wanted more capacity, you would add more controllers (or increase their power) when you needed more performance, and you would do both when you needed both. But this is not a simple world.

I always think that the best way to visualise the two requirements of capacity and performance is to use two different dimensions on a graph, with performance as X and capacity as Y. So let’s use that here – we start with a single flash array which has one pair of controllers and one set of flash media.
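For anyone who would rather draw that graph than imagine it, here is a minimal matplotlib sketch. It is purely illustrative and the axis units are arbitrary.

# A small sketch of the two-dimensional view described above, with
# performance on the X axis and capacity on the Y axis. Requires matplotlib.
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.scatter([1], [1], s=100, label="starting point: 1 controller pair, 1 shelf")
ax.annotate("scale up (add media)", xy=(1, 1), xytext=(1, 3),
            arrowprops=dict(arrowstyle="<-"))
ax.annotate("scale out (add controllers)", xy=(1, 1), xytext=(3, 1),
            arrowprops=dict(arrowstyle="<-"))
ax.set_xlabel("performance (controller pairs)")
ax.set_ylabel("capacity (media shelves)")
ax.set_xlim(0, 4)
ax.set_ylim(0, 4)
ax.legend()
plt.show()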

Scale Up – Adding Capacity

The simple way to achieve scale up is to just add more media to an existing array. Media is typically arranged in some sort of indivisible unit, such as a shelf of SSDs arranged in a RAID configuration (so that it has inbuilt redundancy). In principle, adding another shelf of SSDs sounds easy, but complications arise when you consider the thorny issue of metadata.

To illustrate, consider that most enterprise All Flash arrays available today have inbuilt data reduction features including deduplication. At a high level, dedupe works by computing a hash value for each block written to the array and then comparing it to a table containing all the hash values of previously written blocks. If the hash is discovered in the table, the block already exists on the array and does not need to be written again, reducing the amount of physical media used.
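As a rough illustration of the mechanism described above, here is a minimal Python sketch of inline dedupe. Real arrays also deal with hash collisions, reference counting and garbage collection, all of which are omitted here; the 4K block size is an assumption for illustration.

# Minimal sketch of inline deduplication: hash each incoming block and only
# store blocks whose hash has not been seen before.
import hashlib

BLOCK_SIZE = 4096          # assumed block size
hash_table = {}            # hash -> location of the stored block
physical_blocks = []       # stand-in for the flash media

def write_block(block: bytes) -> int:
    digest = hashlib.sha256(block).hexdigest()
    if digest in hash_table:
        return hash_table[digest]          # duplicate: store a pointer only
    physical_blocks.append(block)          # new data: write it to media
    hash_table[digest] = len(physical_blocks) - 1
    return hash_table[digest]

data = [b"A" * BLOCK_SIZE, b"B" * BLOCK_SIZE, b"A" * BLOCK_SIZE]
locations = [write_block(b) for b in data]
print("logical blocks:", len(data), "physical blocks:", len(physical_blocks))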

This hash table is an example of the metadata that storage arrays need to store and maintain in order to function; other examples of features which utilise metadata are thin provisioning, snapshots and replication. This makes metadata a critical factor in the performance of a storage array.

To ensure the highest speed of access, most metadata is pinned in DRAM on the storage controllers. This has a knock-on effect in that the amount of addressable storage in an array can become directly linked to the amount of DRAM in the controllers. DRAM is a costly resource and affects the manufacturing cost of the array, so there is a balancing act required in order to have enough DRAM to store the necessary metadata without inflating the cost any more than is absolutely necessary.

Hopefully you see where this is going. Adding more shelves of media increases the storage capacity of an array… thereby increasing the metadata footprint… and so increasing the need for DRAM. At some point the issue becomes whether it is even possible (technically or commercially) to support the metadata overhead of adding more shelves of physical media.
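A quick back-of-envelope calculation shows the shape of the problem. The block size and bytes-per-entry figures below are assumptions picked purely for illustration (real arrays vary widely), but the trend is the point.

# Rough illustration of why capacity growth drives DRAM requirements.
def metadata_gb(capacity_tb, block_kb=8, bytes_per_entry=16):
    # number of blocks multiplied by an assumed metadata entry size
    blocks = capacity_tb * 1024 * 1024 * 1024 / block_kb
    return blocks * bytes_per_entry / 1024 ** 3

for shelves in (1, 2, 4, 8):
    capacity = shelves * 50          # assume 50TB usable per shelf
    print(f"{shelves} shelf/shelves ({capacity} TB): "
          f"~{metadata_gb(capacity):,.0f} GB of metadata to keep in DRAM")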

Scale-Up Only Architectures

There are many storage arrays on the market which have a scale-up only architecture, with Pure Storage being an obvious example. There are various arguments presented as to why this is the case, but my view is that these architectures were used as a compromise in order to get the products to market faster (especially if they also adopt an active/passive architecture). Having said that, it’s obvious that I am biased by the fact that I work for a vendor which does not have this restriction – and which believes in offering both scale up and scale out. So please don’t take my word for it – go read what the other vendors say and then form your own opinion.

One counter claim by proponents of scale-up only architecture is that performance can be added by upgrading array controllers in-place, non-disruptively. In other words, the controllers can be replaced with high-specification models with more CPU cores and more DRAM, bringing more performance capability to the array. The issue here is that this is a case of diminishing returns. Moving up through the available CPU models brings step changes in cost but only incremental increases in performance.

To try and illustrate this, let’s look at some figures for the Pure Storage //m series of All Flash arrays. There are three models increasing in price and performance: //m20, //m50 and //m70. We can get performance figures for maximum IOPS (measured with a 32k block size, which is Pure’s preferred methodology) from this datasheet, and details of the CPU and DRAM specifications from this published validation report by the NSA. Let’s use Wikipedia’s List of Intel Xeon microprocessors page to find the list price of those CPUs and compare the increments in price to those of the maximum stated performance:

Here we can see that the list price for the CPUs alone rises 238% and then 484% moving from //m20 to //m50 to //m70, yet the maximum performance measured in IOPS rises just 147% and 200%. You can argue that the CPU price is not a perfect indicator of the selling price of each //m series array, but it’s certainly a factor. As anyone familiar with purchasing servers will attest, buying higher-spec models takes more out of your pocket than it gives you back in performance.
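Reading the quoted percentages as multiples of the //m20 baseline, a few lines of Python show how the cost per unit of performance deteriorates as you move up the range. Only the percentages from the text are used here; no absolute prices.

# Price versus performance multiples relative to the //m20 baseline.
steps = [
    ("//m20 -> //m50", 2.38, 1.47),   # price multiple, IOPS multiple
    ("//m20 -> //m70", 4.84, 2.00),
]

for label, price_mult, perf_mult in steps:
    ratio = price_mult / perf_mult
    print(f"{label}: CPU price x{price_mult:.2f}, performance x{perf_mult:.2f}, "
          f"cost per IOPS x{ratio:.2f}")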

My point here – and this is a general observation rather than one about Pure in particular – is that this is not a cost effective scaling strategy in comparison to the alternative, which is the ability to scale out by adding more controllers.

Coming Next: Scale Out

In the next post we’ll look at Scale Out architectures and what they mean for customers with independently varying requirements for capacity and performance.

Scale out allows the cost-effective addition of more controllers and therefore more performance capability, along with other benefits such as the addition of more ports. But there are potential downsides too…

 

The Flash Insider: To POC or Not To POC?


Guest Post

I’m excited to announce another guest blog written by my good friend and funny-talking American cousin Nathan Fuzi. Like me, Nate comes from a database background but joined the all-flash storage revolution back in its infancy. Which means, like me, Nate now has a little tombstone on his résumé marked Violin Memory. But even though he has since moved up to working in THE CLOUD, Nate’s experience working for an AFA vendor is invaluable. Over six years, he worked with hundreds of database customers who were deciding whether to purchase all-flash storage and – more importantly – wondering how to test their databases on those storage platforms. Now, for your benefit, he writes about one of the most crucial stages of the process: the proof of concept (POC).

Indulge me, if you will:  take yourself back to a time long, long ago–perhaps nearly forgotten.  Waaaay back when storage arrays were built of spinning hard drives front-ended with DRAM for caching purposes, and conventional wisdom had not yet agreed whether flash memory could serve as persistent storage media.  I know:  it seems like forever ago.  Even the ghost of Christmas Past is like, Really?  But I assure you that time happened.  I lived through it, and so did my buddy flashdba and a number of others.  Those were heady days, full of wonder and spectacle and … many, many proofs of concept.

And who could blame folks back then for wanting to see more than that these mysterious and spectacular “all flash” storage arrays could ingest synthetic data and spit it back at previously unseen IOPS rates, incredibly low latency numbers, and firehose-like bandwidth volumes?  Because let’s face it:  marketing numbers and theoretical performance are just that.  Theoretical.  You know, as in “your mileage may vary”.  What makes a difference to people is what kind of performance the product delivers to their specific application.  Folks like flashdba and myself got pretty good at guessing the latency numbers our products would deliver at the IOPS rates we observed in applications.  We could then do some simple math to substitute in our anticipated latency for the current value and accurately predict our improvement on execution time for a given SQL statement.  But in the early days, proving our claims to a skeptical customer often meant asking them to deploy their application on our array, as the IO profile was complex and varied.

Oh… the Pain

The PoC is still quite common and often necessary–and not just for storage products, although especially for storage products, with their increasingly wild performance claims.  But it’s painful.  You have to have an entire non-production setup in place or build one just for the PoC, and then you have to have enough additional ports on your Ethernet or FC switches (or whatever new-fangled connectivity the latest flashy product is sporting) that you can leave everything intact and hook up the new array, expose storage to the host, perform some tests, and then ideally migrate the application data over to run some “real world” tests.

But what could we achieve without doing a full-blown PoC?  There are lots of synthetic load generation utilities out there these days, some easier to use than others and some more flexible and fuller-featured.  A short list of popular tools here:

  • Iometer: http://www.iometer.org/
  • DiskSpd: https://gallery.technet.microsoft.com/DiskSpd-a-robust-storage-6cd2f223
  • VDBench: http://www.oracle.com/technetwork/server-storage/vdbench-downloads-1901681.html
  • Fio: http://freecode.com/projects/fio

What are you really testing for?

One common aspect of what are, frankly, flawed testing paradigms is that admins often attempt to spin the host/array combination up to the maximum IOPS it can drive for a particular workload setting and then hold that rate for some period. This methodology demonstrates a couple of array attributes: maximum sustained performance and, if run long enough, the point at which caching and garbage collection mechanisms are overrun and a worst-case sustained performance profile presents itself. What it definitely does not demonstrate is the latency you can expect for your workload, which for most database environments is likely less than 10% of the maximum IOPS capacity of a modern all-flash array. And what about the fact that complex animals like the Oracle Database perform both random single-block IOs and sequential multi-block IOs simultaneously and at a nanosecond’s notice, depending on the whim of the optimizer? Simplistic performance evaluation unfortunately brings the average storage or database admin no closer to understanding how the array will perform for their actual workload – and isn’t that the whole point of doing such an evaluation?

What’s a DBA to do?

A while back, our friends over at Pure Storage wrote a blog in which they shared some metrics they had pulled from call-home data from their customer environments.  They said, for example, that Oracle environment IO activity broke down like this on average, in terms of block sizes and reads versus writes, and they helpfully provided a VDBench configuration file to drive that IO pattern:

http://blog.purestorage.com/modeling-io-size-mixes-with-vdbench/

That was really cool of them, but, on closer examination, it occurred to me that this profile really only described a blender of some number of disparate Oracle environments. The chances of it approximating any one Oracle environment were nominal, and the chances of it approximating YOUR Oracle environment were up there with monkeys at typewriters producing Shakespeare. So this driver doesn’t actually issue the IOs that your Oracle database is going to issue. To me, that seriously limits its value. Another problem I have with it is that, with its single read workload definition, it is going to show me the average latency for all read IOs as a single number. But Oracle helpfully shows me my random read time separate from my random write time – and my multi-block read time separate from those, and my sequential write time for redo separate from those, etc. This granularity is what makes Oracle’s instrumentation so valuable in performance analysis. I refuse to give it up.

Taking Charge

So what can you do?  Well, Oracle is capturing all of your IO metrics for you automatically, so just pull them from your AWR report (those of you on Standard Edition can get them from Statspack reports) and build your own IO driver for VDBench.  As an example, one customer – let’s call them a large international bank – was curious to see if our products could deliver comparable or better latency than their existing storage.  They shared their AWR reports with me, and I found their IO profile section for the period they really cared about.  Here’s a snippet:

Statistic                                     Total     per Second     per Trans
-------------------------------- ------------------ -------------- -------------
<SNIP>
physical read IO requests                55,301,220        7,682.5       1,180.1
physical read bytes              1.936982535373E+13 2.69087766E+09 4.1334639E+08
physical read partial requests               26,445            3.7           0.6
physical read requests optimized         49,680,085        6,901.6       1,060.2
physical read total IO requests          55,479,809        7,707.3       1,183.9
physical read total bytes        1.938706428365E+13 2.69327251E+09 4.1371427E+08
physical read total bytes optimi 1.773273192858E+13 2.46345082E+09 3.7841130E+08
physical read total multi block          19,552,557        2,716.3         417.3
physical reads cache                     14,137,864        1,964.1         301.7
physical reads cache prefetch            11,716,783        1,627.7         250.0
physical reads direct                 1,168,102,453      162,274.1      24,927.0
physical reads direct (lob)                      22            0.0           0.0
physical reads direct temporary         307,926,728       42,777.5       6,571.1
physical reads prefetch warmup                    0            0.0           0.0
physical write IO requests               37,072,831        5,150.2         791.1
physical write bytes              4,873,114,484,736  676,978,477.6 1.0399083E+08
physical write requests optimize         31,566,182        4,385.2         673.6
physical write total IO requests         37,460,357        5,204.0         799.4
physical write total bytes        4,908,503,636,480  681,894,777.9 1.0474603E+08
physical write total bytes optim  3,540,697,530,368  491,877,634.2  75,557,447.1
physical write total multi block          5,511,767          765.7         117.6
<SNIP>
redo writes                                 341,363           47.4           7.3

Of course, not every multi-block read is 1M because that would be too easy.  And good luck trying to get all the numbers to line up exactly.  The fact that Oracle pulls these metrics from different places means some rough math is still required.  But, with a little patience and fiddling, we can get a great approximation of the number of single block random reads, large block sequential reads, random and multi-block writes, and redo writes that match up closely to these values, both in IOPS and bandwidth.  When in doubt, use the higher of [IOPS listed, Bandwidth listed].  Thus I could set up my VDBench workload definitions:

# single-block, 100% random reads
wd=wd_oracle_rand_read,rdpct=100,xfersize=16k,seekpct=100,iorate=3250,sd=sd*,priority=1

# multi-block, 100% sequential reads
wd=wd_oracle_seq_read,rdpct=100,xfersize=1024k,seekpct=0,iorate=2500,sd=sd*,priority=2

# single-block, 100% random writes
wd=wd_oracle_rand_write,rdpct=0,xfersize=16k,seekpct=100,iorate=5800,sd=sd*,priority=3

# multi-block, 100% sequential writes
wd=wd_oracle_seq_write,rdpct=0,xfersize=768k,seekpct=0,iorate=750,sd=sd*,priority=4

# redo write sizes vary per the LGWR mechanism, so we’ll go with redo size (bytes) per second / redo writes per second
wd=wd_oracle_redo_write,rdpct=0,xfersize=64k,seekpct=0,iorate=50,sd=sd*,priority=5
rd=rd_oracle_ramp,wd=wd_oracle*,iorate=12350,interval=1,elapsed=120,forthreads=8,warmup=5

As a quick check, with the customer’s 16KB block size, this config drives just over 50 MB/s random reads + 2500 MB/s sequential reads, which gets really close to the 2566 MB/s total reads stated in the snippet above.  It also drives about 91 MB/s random writes + 563 MB/s sequential writes + 3 MB/s redo for a total of 657 MB/s writes, which is really close to the reported 650 MB/s write bandwidth in the snippet.  I could take this even further and break out additional workload definitions if I needed to characterize performance for other IO types or block sizes.  VDBench helpfully puts out a separate HTML file for each workload definition, allowing you to see the latency metrics for each IO type and size and compare them against the values in your AWR or Statspack report.  Note that you should set your forthreads value just high enough that you can drive the desired IOPS total; any higher and you’ll push latency up without achieving anything useful. And clearly the total IOPS target for the run definition should match the sum of the individual workload drivers.
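If you want to sanity-check that arithmetic yourself, the bandwidth implied by each workload definition is simply iorate multiplied by xfersize. A few lines of Python reproduce the figures above:

# Cross-check: bandwidth implied by each VDBench workload definition.
workloads = {
    "random reads":      (3250, 16),     # iorate, xfersize in KB
    "sequential reads":  (2500, 1024),
    "random writes":     (5800, 16),
    "sequential writes": (750, 768),
    "redo writes":       (50, 64),
}

for name, (iorate, xfer_kb) in workloads.items():
    print(f"{name:18s}: {iorate * xfer_kb / 1024:7.1f} MB/s")

total_iops = sum(iorate for iorate, _ in workloads.values())
print("total IOPS:", total_iops)   # should match the run definition's iorate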

PoC Avoided?  Maybe.

All of what I have described here helps to answer the question of “What would each latency number look like for my IO workload as it exists today?”  From this, you can use a little math to answer with great accuracy the execution time for any particular SQL with the lower latency.  The next logical question is “What is going to happen to overall application performance when each query runs so much faster and completes sooner, allowing the next query to start earlier, etc?”  That part is much more difficult to predict and may require a full-blown PoC to answer definitively, but at least you know the product you’re about to invest time in can deliver the latency you expect with your current IO workload profile.  If you’re hoping for a 10X performance improvement for your application, you’d better see that IO wait currently accounts for a large percentage of database time and that the latency of your new array beats the current latency by enough to make that dream a reality.
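For what it’s worth, here is a minimal sketch of that “little math”: subtract the current IO wait from the elapsed time and add back the wait you would expect at the new latency. The input figures are invented for illustration; take yours from an AWR or Statspack report.

# Estimate a SQL statement's elapsed time after substituting a new latency.
def estimate_elapsed(current_elapsed_s, io_wait_s, current_latency_ms, new_latency_ms):
    io_requests = io_wait_s * 1000 / current_latency_ms     # implied IO count
    new_io_wait_s = io_requests * new_latency_ms / 1000     # wait at new latency
    return current_elapsed_s - io_wait_s + new_io_wait_s

# Example: a 600s query spending 450s on IO at 8ms average latency,
# re-estimated at 0.5ms latency.
print(f"{estimate_elapsed(600, 450, 8.0, 0.5):.0f} seconds")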

All Flash Arrays: Active/Active versus Active/Passive


I want you to imagine that you are about to run a race. You have your trainers on, your pre-race warm up is complete and you are at the start line. You look to your right… and see the guy next to you, the one with the bright orange trainers, is hopping up and down on one leg. He does have two legs – the other one is held up in the air – he’s just choosing to hop the whole race on one foot. Why?

You can’t think of a valid reason so you call across, “Hey buddy… why are you running on one leg?”

His reply blows your mind: “Because I want to be sure that, if one of my legs falls off, I can still run at the same speed”.

Welcome, my friends, to the insane world of storage marketing.

High Availability Clusters

The principles of high availability are fairly standard, whether you are discussing enterprise storage, databases or any other form of HA. The basic premise is that, to maintain service in the event of unexpected component failures, you need to have at least two of everything. In the case of storage array HA, we are usually talking about the storage controllers which are the interfaces between the outside world and the persistent media on which data resides.

Ok so let’s start at the beginning: if you only have one controller then you are running at risk, because a controller failure equals a service outage. No enterprise-class storage array would be built in this manner. So clearly you are going to want a minimum of two controllers… which happens to be the most common configuration you’ll find.

So now we have two controllers, let’s call them A and B. Each controller has CPU, memory and so on which allow it to deliver a certain level of performance – so let’s give that an arbitrary value: one controller can deliver 1P of performance. And finally, let’s remember that those controllers cost money – so let’s say that a controller capable of giving 1P of performance costs five groats.

Active/Passive Design

In a basic active/passive design, one controller (A) handles all traffic while the other (B) simply sits there waiting for its moment of glory. That moment comes when A suffers some kind of failure – and B then leaps into action, immediately replacing A by providing the same service. There might be a minor delay as the system performs a failover, but with multipathing software in place it will usually be quick enough to go unnoticed.


So what are the downsides of active/passive? There are a few, but the most obvious one is that you are architecturally limited to seeing only 50% of your total available performance. You bought two controllers (costing you ten groats!) which means you have 2P of performance in your pocket, but you will forever be limited to a maximum of 1P of performance under this design.

Active/Active Design

In an active/active architecture, both controllers (A and B) are available to handle traffic. This means that under normal operation you now have 2P of performance – and all for the same price of ten groats. Both the overall performance and the price/performance have doubled.


What about in a failure situation? Well, if controller A fails you still have controller B functioning, which means you are now down to 1P of performance. It’s now half the performance you are used to in this architecture, but remember that 1P is still the same performance as the active/passive model. Yes, that’s right… the performance under failure is identical for both designs.

What About The Cost?

Smart people look at technical criteria and choose the one which best fits their requirements. But really smart people (like my buddy Shai Maskit) remember that commercial criteria matter too. So with that in mind, let’s go back and consider those prices a little more. For ten groats, the active/active solution delivered performance of 2P under normal operation. The active/passive solution only delivered 1P. What happens if we attempt to build an active/passive system with 2P of performance?


To build an active/passive solution which delivers 2P of performance we now need to use bigger, more powerful controllers. Architecturally that’s not much of a challenge – after all, most modern storage controllers are just x86 servers and there are almost always larger models available. The problem comes with the cost. To paraphrase Shai’s blog on this same subject:

Cost of storage controller capable of 1P performance  <  Cost of storage controller capable of 2P performance

In other words, building an active/passive system requires more expensive hardware than building a comparable active/active system. It might not be double, as in my picture, but it will sure as hell be more expensive – and that cost is going to be passed on to the end user.
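To put the groats argument into numbers, here is a trivial Python comparison. The 1P/2P figures and the ten-groat price come from this post; the cost premium for a 2P-capable controller is an assumption, since all we really know is that it will be more expensive.

# Comparing the designs discussed above in groats per unit of performance.
bigger_controller_premium = 2.0   # assumed cost multiplier for a 2P controller

designs = {
    # name: (performance under normal operation, performance under failure, cost in groats)
    "active/passive, 2 x 1P controllers": (1, 1, 10),
    "active/active,  2 x 1P controllers": (2, 1, 10),
    "active/passive, 2 x 2P controllers": (2, 2, 10 * bigger_controller_premium),
}

for name, (normal, failed, cost) in designs.items():
    print(f"{name}: {normal}P normal / {failed}P on failure, "
          f"{cost:.0f} groats, {cost / normal:.1f} groats per P")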

Does It Scale?

Another question that really smart people ask is, “How does it scale?”. So let’s think about what happens when you want to add more performance.

In an active/active design you have the option of adding more performance by adding more controllers. As long as your architecture supports the ability for all controllers to be active concurrently, adding performance is as simple as adding nodes into a cluster.

But what happens when you add a node to an active/passive solution? Nothing. You are architecturally limited to the performance of one controller. Adding more controllers just makes the price/performance even worse. This means that the only solution for adding performance to an active/passive system is to replace the controllers with more powerful versions…

The Pure Storage Architecture

Pure Storage is an All Flash Array vendor who knows how to play the marketing game better than most, so let’s have a look at their architecture. The PS All Flash Array is a dual-controller design where both controllers send and receive I/Os to the hosts. But… only one controller processes I/Os to and from the underlying persistent media (the SSDs). So what should we call this design, active/active or active/passive?

According to an IDC white paper published on PS’s website, PS controllers are sized so that each controller can deliver 100% of the published performance of the array. The paper goes on to explain that under normal operation each controller is loaded to a maximum of 50% on the host side. This way, PS promises that performance under failure will be equal to the performance under normal operations.

In other words, as an architectural decision, the sum of the performance of both controllers can never be delivered.

So which of the above designs does that sound like to you? It sounds like active/passive to me, but of course that’s not going to help PS sell its flash arrays. Unsurprisingly, on the PS website the product is described as “active/active” at every opportunity.

Yet even PS’s chief talking head, Vaughn Stewart, has to ask the question, “Is the FlashArray an Active/Active or Active/Passive Architecture?” and eventually comes to the conclusion that, “Active/Active or Active/Passive may be debatable”.

There’s no debate in my view.

Conclusion

You will obviously draw your own conclusions on everything I’ve discussed above. I don’t usually pick on other AFA vendors during these posts because I’m aiming for an educational tone rather than trying to fling FUD. But I’ll be honest, it pisses me off when vendors appear to misuse technical jargon in a way which conveniently masks their less-glamorous architectural decisions.

My advice is simple. Always take your time to really look into each claim and then frame it in your own language. It’s only then that you’ll really start to understand whether something you read about is an innovative piece of design from someone like PS… or more likely just another load of marketing BS.

* Many thanks to my colleague Rob Li for the excellent running-on-one-leg metaphor

All Flash Arrays: Controllers Are The New Bottleneck


Today’s storage array market contains a wild variation of products: block storage, file storage or object storage; direct attached, SANs or NAS systems; fibre-channel, iSCSI or Infiniband… Even the SAN section of the market is full of diversity: from legacy hard disk drive-based arrays through the transitory step of tiered disk+flash hybrid systems and on to modern All-Flash Arrays (AFAs).

If you were partial to the odd terrible pun, you might even say that it was a bewildering array of choices. [*Array* of choices? Oh come on. If you’re expecting a higher class of humour than that, I’m afraid this is not the blog for you]

Anyway, one thing that pretty much all storage arrays have in common is the basic building blocks of which they are composed:

  • Array controllers
  • Internal networking
  • Persistent Media (flash or disk)

Over the course of this blog series I’ve talked a lot about both flash and disk media, but now it’s time to concentrate a little more on the other stuff – specifically, the controllers. A typical Storage Area Network delivers a lot more functionality than would be expected from just connecting a bunch of disks or flash – and it’s the controllers that are responsible for most of that added functionality.

Storage Array Controllers

Think of your storage system as a private network on which is located a load of dumb disk or flash drives. I say dumb because they can do little else other than accept I/O requests: reads and writes. The controllers are therefore required to provide the intelligence needed to present those drives to the outside world and add all of the functionality associated with enterprise-class storage:

  • Resilience, automatic fault tolerance and high availability, RAID
  • Mirroring and/or replication
  • Data reduction technologies (compression, deduplication, thin provisioning)
  • Data management features (snapshots, clones, etc)
  • Management and monitoring interfaces
  • Vendor support integration such as call-home and predictive analytics

Controllers are able to add this “intelligence” because they are actually computers in their own right, acting as intermediaries between the back-end storage devices and the front-end storage fabric which connects the array to its clients. And as computers, they rely on the three classic computing resources (which I’m going to list using fancy colours so I can sneak them up on you again later in this post):

  1. Memory (DRAM)
  2. Processing (CPU)
  3. Networking

It’s the software running on the array controllers – and utilising these resources – that describes their behaviour. But with the rise of flash storage, this behaviour has had to change… drastically.


Disk Array Controllers vs Flash Array Controllers

In the days when storage arrays were crammed full of disk drives, the controllers in those arrays spent lots of time waiting on mechanical latency. This meant that the CPUs within the controllers had plenty of idle cycles where they simply had to wait for data to be stored or retrieved by the disks they were addressing. To put it another way, CPU power wasn’t such an important priority when specifying the controller hardware.

This is clearly not the case with the controllers in an all flash array, since mechanical latency is a thing of the past. The result is that controller CPUs now have no time to spare – data is constantly being handled, addressed, moved and manipulated. Suddenly, the choice of CPU has a direct effect on the array’s ability to process I/O requests.

Dedupe Kills It

But there’s more. One of the biggest shifts in behaviour seen with flash arrays is the introduction of data reduction technology – specifically deduplication. This functionality, known colloquially as dedupe, intervenes in the write process to see if an exact copy of any written block already exists somewhere else on the array. If the block does exist, the duplicate copy does not need to be written – and instead a pointer to the existing version can be stored, saving considerable space. This pointer is an example of what we call metadata – information about data.

I will cover deduplication at greater length in another post, but for now there are three things to consider about the effect dedupe has on storage array controllers:

  1. Dedupe requires the creation of a fairly complex set of metadata structures – and for performance reasons much of this will need to reside in DRAM on the controllers. And as more data is stored on the array, the amount of metadata created increases – hence a growing dependency on the availability of (expensive) memory in those array controllers.
  2. The process of checking each incoming block (which involves calculating a hash value) and comparing against a table of metadata stored in DRAM is very CPU intensive. Thus array controllers which support dedupe have increasing requirements for (expensive) processing power.
  3. For storage arrays which run in an active/active configuration (i.e. with multiple redundant controllers, each of which actively send and receive data from the persistent storage layer), much of this information will need to be passed between controllers over the array’s internal networking.

Did you spot the similarities between this list and the colourful one from earlier? If you didn’t, you must be colour blind. Flash array controllers are much more dependent on their resources – particularly CPU and DRAM – than disk array controllers.
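If you want to get a feel for the CPU cost of inline dedupe, a trivial Python experiment is to measure how many blocks a single core can hash per second. The result depends entirely on your CPU and choice of hash; this is not a claim about any particular array.

# Measure single-core hashing throughput for 4K blocks (illustrative only).
import hashlib
import os
import time

BLOCK = os.urandom(4096)
N = 200_000

start = time.perf_counter()
for _ in range(N):
    hashlib.sha256(BLOCK).digest()
elapsed = time.perf_counter() - start

blocks_per_s = N / elapsed
print(f"~{blocks_per_s:,.0f} 4K blocks hashed per second per core "
      f"(~{blocks_per_s * 4 / 1024:,.0f} MB/s)")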

Summary

Flash array controllers have to do almost everything that their ancestors, the venerable disk array controllers, used to do. But they have to do it much faster and in greater volume. Not only that, but they have to do so much more… especially for the process known as data reduction. And as we’ve seen, the overhead of all these tasks causes a much greater strain on memory, processing and networking than was previously seen in the world of disk arrays – which is one of the reasons you cannot simply retrofit SSDs into a disk array architecture.

With the introduction of flash into storage, the bottleneck has moved away from the persistence layer and is now with the controllers. Over the next few articles we’ll look at what that means and consider the implications of various AFA architecture strategies on that bottleneck. After all, as is so often the case when it comes to matters of performance, you can’t always remove the bottleneck… but you can choose the one which works best for you 🙂

New Installation Cookbook: Oracle Linux 6.7 with Oracle 11.2.0.4 RAC

I’ve updated my install cookbooks page to include a new cookbook for installation of Oracle 11.2.0.4 Real Application Clusters on Oracle Linux 6.7.

This is also the first one I’ve published since I left the employment of Violin Memory to work for Kaminario, so this install uses a Kaminario K2 All Flash Array. However, it applies very well to any Oracle RAC installation which uses relatively capable storage.

Enjoy:

https://flashdba.com/install-cookbooks/oracle-linux-6-7-with-oracle-11-2-0-4-rac/

How the Next Generation of Flash Storage is Changing the Economics Of SaaS Businesses (Recorded Webinar)


This week I had the opportunity to record a webinar on a subject very close to my heart, the Software-as-a-Service industry. From 2003 to 2007 I managed the production infrastructure for a global SaaS company through the transition from startup to acquisition (partly by Salesforce.com). At the time, SaaS was a relatively new phenomenon, predating any concept of “Cloud”, but the challenges we faced then are still very relevant today.

The company was run by charismatic American entrepreneur Mark Suster, now a well known venture capitalist and blogger. Looking back, it was an incredible learning experience – but I do also remember that I spent a lot of time trying to coax more performance out of our multi-tenancy database platform, which was constantly being held back by… yes, you guessed it… disk performance.

The webinar was hosted by Kaminario (my employer) and co-presented by myself and Jeff Kaplan of ThinkStrategies. Here’s the synopsis and the link (registration required). Enjoy!

http://info.kaminario.com/how-flash-storage-is-changing-saas-businesses

Advances in all flash storage are reshaping infrastructure strategies for the modern data center. SaaS businesses are on the leading edge of adopting all flash storage as they build application delivery infrastructure that supports the performance, scalability, and agility required to deliver high quality business apps to users around the world.

Join Jeff Kaplan, Managing Director of ThinkStrategies and Chris Buckel, author of the FlashDBA Blog and Technology Evangelist from Kaminario for this webinar discussion of infrastructure strategies for modern SaaS Businesses.