Understanding Flash: The Fall and Rise of Flash Memory


This month marks the fourth anniversary of some interesting events. Commonwealth countries around the world celebrated the Diamond Jubilee of Queen Elizabeth II. Whitney Houston was tragically found dead in a Beverly Hills hotel. The Caribbean was hit hard by a sargassum seaweed invasion. And I made the decision to leave the comfort of Oracle databases and join the exciting new All-Flash Array industry.

OK, I might have been stretching the use of the word “interesting” there. But for those with an interest in flash memory, February 2012 was still a very important month due to the publication of a research paper co-authored by the University of California, San Diego’s Department of Computer Science and Engineering and Microsoft Research.

The paper was entitled The Bleak Future of NAND Flash Memory – and it wasn’t pleasant reading for somebody who had just abandoned a career in databases to bet everything on flash.

The Death of Flash Memory

I have never spoken to the authors of this paper so I don’t know where the “Bleak Future” title came from, but it seems reasonable to say that it was somewhat more inflammatory than the content. In the body of the paper, the authors examined the behaviour of NAND flash memory chips as the lithography shrank – and also as the number of bits per cell increased from SLC through MLC to TLC. At the time of publication the authors were examining 25nm technology but it was already obvious that this form of NAND (known as 2D planar NAND) was going to hit physical limitations beyond which it could no longer shrink. This is known in the semiconductor world as the scaling limit.

The paper concluded:

“SSDs will continue to improve by some metrics (notably density and cost per bit), but everything else about them is poised to get worse. This makes the future of SSDs cloudy”

This sentiment, along with the “bleak future” thing, caused a bit of a stir in the tech world. The Register, for example, ran a typically tongue-in-cheek headline: “Flash DOOMED to drive itself off a cliff – boffins”. Various industry bloggers discussed the potential of technologies like ReRAM to take over for the next decade, while HP made its annual claim that Memristor technology (a form of ReRAM) would soon be here to save the day. I started wondering if I should register the domain name ReRamDBA.com.

The Resurrection – Now Showing in 3D

Four years later, ReRAM is still just around the corner but now in the form of Intel and Micron’s 3D XPoint technology, while HP has significantly backtracked on its Memristor programme. Flash memory, meanwhile, is still going strong thanks to the introduction of vertical or 3D NAND.

“Reports of my death have been greatly exaggerated” – Mark Twain

Of course, hindsight is a wonderful thing. It’s easy to look back now at the publication of the Bleak Future… paper and consider it flawed. To see the flaws at the time of publication would have required a bit more thought.

So that’s why this month’s hero is Allyn Malventano, Storage Editor for PC Perspective, who published an article on 21st February 2012 (the same month!) called NAND Flash Memory – A Future Not So Bleak After All in which he described the original publication as “bad science”. Allyn’s conclusion was so prescient that I’m going to quote it right here (although you should read the whole article to get the full context):

“The point I want all of you to take home here is that just as with the CPU, RAM, or any other industry involving wafers and dies, the manufacturers will adapt and overcome to the hurdles they meet. There is always another way, and when the need arises, manufacturers will figure it out.”

Bravo. Samsung is now manufacturing its third generation V-NAND chips, while the Toshiba/SanDisk and Intel/Micron partnerships are both going 3D. Samsung’s V-NAND has already moved from 24 through 32 to 48 layers, while it has been theorised that there is no natural limit on the number of layers possible.

Of course, there’s always the spectre of a new technology sweeping everything before it – and the big story right now is Intel/Micron’s 3D XPoint technology. Will it take over from flash in the future? Who knows.

One thing I do know is that new technologies find their rightful place when they are both technically capable and economically viable. If 3D XPoint or any other non-volatile memory product can win the day, it will leave us all better off – and hopefully without the need for alarmist research papers.

Now, if you’ll excuse me, I’m off to check on the availability of the 3D-XPointDBA.com domain name…

(You can read more about 3D NAND here)

Understanding Flash: What is 3D NAND?


About 18 months ago I wrote a post describing the different types of NAND flash known as SLC, MLC and TLC. However, 18 months is a lifetime in the world of technology, so now I need to update that description to reflect the widespread adoption of a new type of NAND flash. Let me explain…

Recap: 2D Planar NAND

Until recently, most of the flash memory used for data storage was of a form known as 2D Planar NAND and could be found in three types called Single-Level Cell (SLC), Multi-Level Cell (MLC) and Triple-Level Cell (TLC). I always used to use my bucket of electrons analogy to describe the difference between them:

[Image: the bucket analogy for SLC, MLC and TLC]

Each cell within planar NAND flash memory stores charge in a way similar to how a bucket stores water. By considering an imaginary line half-way up the bucket we can assign a binary one or zero based on whether the bucket contains more or less water than the line. Thus a full bucket, or a fully-charged NAND cell, denotes a zero while an empty bucket / cell denotes a one… assuming we are considering SLC, where each bucket stores one bit.

Moving to MLC (two bits) or TLC (three bits) is therefore a case of adding more lines, allowing us to differentiate between more states within the same bucket. The benefit is double (MLC) or triple (TLC) density, but the drawback is a lower margin for error when measuring the amount of water/charge stored. As a consequence, the actions of reading, writing and erasing take longer while the endurance of the cell also drops drastically (leaky buckets are more of a problem as you try to be more precise about the measurements). The original article covers this all in more detail.
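If it helps to put numbers on the bucket analogy, here is a tiny Python sketch of my own (not from the original article) showing why each extra bit per cell doubles the number of charge states that must be distinguished – which is exactly where that shrinking margin for error comes from:

```python
# Illustrative sketch: each extra bit per cell doubles the number of charge
# states, so the relative "gap" between adjacent states keeps shrinking.
for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3)]:
    states = 2 ** bits                 # distinguishable charge levels needed
    margin = 1.0 / states              # relative spacing between levels
    print(f"{name}: {bits} bit(s) per cell -> {states} states, "
          f"relative margin ~{margin:.2f}")
```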

Shrinking Lithographies

If you remember, I also talked about the way that flash memory manufacturers are constantly shrinking the size of NAND flash cells in order to make increasingly dense packages, thus reducing the cost – but that the technology was now approaching its physical limits. In the bucket example, just imagine that the buckets are getting smaller and smaller. This is initially a good thing because smaller buckets (actually floating gate transistors) mean more buckets can fit in the same overall space, but in time the buckets become so small that they are no longer manageable – and then the technology hits a brick wall.

So Why Is It 2D?

In NAND flash memory, sets of cells are connected together in a string to form a NAND gate:

[Image: a string of NAND flash cells, courtesy of Warren Miller at Avnet]

If you consider one of the pieces of silicon substrate contained inside a flash chip as a rectangle with dimensions X and Y, each one of these strings of cells will take up some space stretching out in one of these two dimensions. Shrinking the lithography, i.e. manufacturing everything on a smaller scale, will give us the opportunity to fit more strings on the same amount of substrate. But as we previously discussed, there comes a point when things are simply too small and too close together, resulting in interference and leakage.

3D NAND: Going Vertical

[Image courtesy of Kristian Vättö at AnandTech]

The cost of a semiconductor is proportional to the die size. It is therefore a good thing for the cost if more electronics can be crammed into the same tiny piece of silicon. The fundamental difference in 3D NAND, which gives rise to its name, is that the strings previously described are now arranged vertically – in other words, in the Z dimension. For this reason, Samsung calls the technology V-NAND.

Imagine the string of cells shown earlier, but this time stood on its end and then folded in two to make a U shape. We now have a vertical string which takes up only a fraction of the original space in the X and Y dimensions. What’s more, we can continue to build in the Z dimension as manufacturing processes allow. Samsung’s first generation of V-NAND had strings of 24 layers, while the second generation had 32. The latest 3rd generation now has 48. And as Jim Handy explains, there are few theoretical limits on the number of layers possible. (Just to be clear, these layers are all within the same “wafer” of silicon, otherwise there would be no cost benefit…)
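As a rough back-of-the-envelope illustration (my simplification, not Samsung’s maths), if each vertical string packs one cell per layer into the same X/Y footprint, then capacity per unit of die area scales roughly with the layer count:

```python
# Very rough arithmetic: treat capacity per unit of die area as proportional
# to the layer count, all else being equal (a big simplification).
generations = {"V-NAND Gen 1": 24, "V-NAND Gen 2": 32, "V-NAND Gen 3": 48}
baseline = generations["V-NAND Gen 1"]
for gen, layers in generations.items():
    print(f"{gen}: {layers} layers -> ~{layers / baseline:.1f}x the density "
          f"of Gen 1 in the same footprint")
```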

Crucially, since the move to a Z dimension relieves the pressure on the X and Y dimensions, 3D flash is actually free to return to a slightly larger lithography, thus avoiding all of the nasty problems that 2D planar NAND was starting to hit as it approached the 10nm range.

Charge Trap Flash and 3D TLC

Aside from the vertical stacking, there is another fundamental change with 3D NAND – it no longer uses floating gate transistors (yes, that’s right, the buckets from earlier). Instead, it uses a technology called Charge Trap Flash. I’m not going to attempt an explanation of CTF here, but it was memorably described by Samsung as like using cheese instead of water. So, instead of the buckets from earlier, picture cheese.

This cheese has a number of benefits over floating gate transistors in terms of endurance and power consumption, but it still works in a similar way in terms of the number of bits that can be stored – in other words SLC, MLC and TLC. However, because of its better endurance rates, with 3D NAND it is now a realistic proposition to use TLC to replace 2D planar MLC (something my employer Kaminario has already embraced).

This is big news. The cost per gigabyte of 3D TLC NAND flash is revolutionary, with plenty of room for further developments as the flash fabricators add more layers. Three years ago it looked like NAND flash was a technology in terminal decline, but with 3D techniques the future is bright. We might even get to a point soon where we see the introduction of…

Quad-Level Cell (QLC) Flash

If the endurance of CTF-based 3D NAND is acceptable, it’s not hard to envisage one of the flash fabricators releasing a quadruple-level cell version of the medium. The potential benefit is a further one-third increase in density for roughly the same cost.
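To put a hedged number on that (my own arithmetic, not a vendor figure): going from three to four bits per cell adds a third more capacity, but doubles the number of charge states that have to be distinguished – which is why endurance is the open question:

```python
# Back-of-envelope comparison of TLC (3 bits/cell) and QLC (4 bits/cell).
tlc_bits, qlc_bits = 3, 4
density_gain = qlc_bits / tlc_bits - 1          # ~0.33, i.e. one third more
states_tlc, states_qlc = 2 ** tlc_bits, 2 ** qlc_bits
print(f"Density gain TLC -> QLC: +{density_gain:.0%}")
print(f"Charge states to distinguish: {states_tlc} -> {states_qlc}")
```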

After all, everybody wants more cheese… right?

All Flash Arrays: Hybrid Means Compromise


Sometimes the transition between two technologies is long and complicated. It may be that the original technology is so well established that it’s entrenched in people’s minds as simply “the way things are” – inertia, you might say. It could be that there is more than one form of the new technology to choose from, with smart customers holding back to wait and see which emerges as a stronger contender for their investment. Or it could just be that the newer form of technology doesn’t yet deliver all of the benefits of the legacy version.

The automotive industry seems like a good example here. After over a century of using internal combustion engines, we are now at the point where electric vehicles are a serious investment for manufacturers. However, fully-electric vehicles still have issues to overcome, while there is continued debate over which approach is better: batteries or hydrogen fuel cells. Needless to say, the majority of vehicles on the road today still use what you could call the legacy method of propulsion.

However, one type of vehicle which has been successful in gaining market share is the hybrid electric vehicle. This solution attempts to offer customers the best of both worlds: the lower fuel consumption and claimed environmental benefits of an electric vehicle, but with the range, performance and cost of a fuel-powered vehicle. Not everybody believes it makes sense, but enough do to make it a worthwhile venture for the manufacturers.

Now here’s the interesting thing about hybrid vehicles… the thing that motivated me to write two paragraphs about cars instead of flash arrays… Nobody believes that hybrid electric vehicles are the permanent solution. Everybody knows that hybrids are a transient solution on the way to somewhere else. Nobody at all thinks that hybrid is the end-game. But the people who buy hybrid cars also believe that this state of affairs will not change during the period in which they own the car.

Hybrid Flash Arrays (HFAs)

There are two types of flash storage architecture which could be labelled a hybrid – those where a disk array has been repopulated with flash and those which are designed specifically for the purpose of mixing flash and disk. I’ve talked about naming conventions before; it’s a tricky subject. But for the purposes of this article I am only discussing the latter: systems where the architecture has been designed so that disk and flash co-exist as different tiers of storage. Think along the lines of Nimble Storage, Tegile and Tintri.

Why do this? Well, as with hybrid electric vehicles the idea is to bridge the gap between two technologies (disk and flash) by giving customers the best of both worlds. That means the performance of flash plus its low power, cooling and physical space requirements – combined with the density of disk and its corresponding impact on price. In other words, if disk is cheap but slow while flash is fast but expensive, HFAs are aimed at filling the gap.

Hybrid (adjective)
of mixed character; composed of different elements.
bred as a hybrid from different species or varieties.

As you can see there are clear parallels between this trend and that of the electric vehicle. Also, most storage systems are purchased with a five-year refresh cycle in mind, which is not dissimilar to the average length of ownership of a car. But there’s a massive difference: the rate of change in the development of flash memory technology.

In recent years the density of NAND flash has increased by orders of magnitude, especially with the introduction of 3D NAND technology and the subsequent use of Triple-Level Cell (TLC). And when the density goes up, the price comes down – closing the gap between disk and flash. In fact we’re at the point now where Wikibon predicts that “flash … will become a lower cost media than disk … for almost all storage in 2016”:

[Image courtesy of Wikibon, “Evolution of All-Flash Array Architectures” by David Floyer (2015)]

That’s great news for customers – but definitely not for HFA vendors.

Conclusion

And so we reach the root of the problem with HFAs. It’s not just that they are slower than All Flash Arrays. It’s not even that they rely on the guesswork of automatic tiering algorithms to move data between their tiers of disk and flash. It’s simply that their entire existence is predicated on the idea of being a transitory solution designed to bridge a gap which is already closing faster than they can fill it.

If you want proof of this, just look at the three HFA vendors I name-checked earlier – all of which are rushing to bring out All Flash versions of their arrays. Nimble Storage is the only one of the three to be publicly listed – and its recent results indicate a strategic rethink may be required.

When it comes to hybrid electric vehicles, it’s true that the concept of mass-owned fully-electric cars still belongs in the future. But when it comes to hybrid flash arrays, the adoption of All-Flash is already happening today. The advice to customers looking to invest in a five-to-seven year storage project is therefore pretty simple: Mind the gap.

Postcards from Storageland: Pure Storage Cancels Christmas

christmas-angel

Ho ho ho folks. It’s nearly Christmas Day, 2015 and it most definitely is the season to be jolly. It’s also an exciting time in the All Flash Array industry with NetApp announcing its intended acquisition of Solidfire for a reported $870 million. Merry Christmas to everyone at Solidfire!

Unfortunately those sentiments weren’t shared by Pure Storage’s VP of Product, Matt Kixmoeller, who published a blog on the subject of the acquisition which seemed a little on the bitter side. A number of commentators on Twitter agreed:

[Image: Twitter comments on the Pure Storage blog post]

This was an unusual slip of the mask for Pure Storage, who normally set the bar very high when it comes to marketing and PR. I don’t know for sure, but I imagine that at some point after publication red lights and klaxons started flashing and blaring in the marketing division of Pure’s Mountain View head office. Within a short amount of time the blog post had been rewritten and republished with a suitably humble note from Mr Kixmoeller:

[Image: Matt Kixmoeller’s note on the revised post]

I have to admit I struggle with the use of the word “feedback” here when “criticism” would clearly be more accurate… but I don’t want to turn this post into my own vendor-bashing exercise. I do however think it’s a very interesting opportunity to examine the workings of a marketing department. So here – purely for your edification – are the two versions of the article published so far (courtesy of the Internet Wayback Machine):

NetApp Acquires SolidFire – Flash Strategy Take 5 [initial version published 21 Dec 2015]

NetApp Acquires SolidFire – Flash Strategy Take 5 [revised version published 22 Dec 2015]

 

And here, to make it easy to consume, is a marked up version of the original document with changes in red. There are quite a lot of them! (Click on the image to see a larger version)

Merry Christmas everybody!

[Image: the original article marked up with changes in red]

[Thanks to the Copyscape Compare tool for making this exercise easier…]

Why Kaminario?


This summer I made the decision to leave my previous employer and join another vendor in the All Flash Array space – a company called Kaminario. A lot of people have been in touch to ask me about this, so I thought I’d answer the question here… Why Kaminario?

To answer the question, we first need to look at where the All Flash industry finds itself today…

The Path To Flash Adoption

We all know that disk-based storage has been struggling to deliver to the enterprise for many years now. And most of us are aware that flash memory is the technology most suitably placed to take over the mantle as the storage medium of choice. However, even keeping in mind the typical five-to-seven year refresh cycle for enterprise storage platforms, the journey to adopt flash in the enterprise data centre has been slower than some might have expected. Why?

There are three reasons, in my view. The first two are pretty obvious: cost and functionality. I’ll cover the third in another post – but cost and functionality have changed drastically over the four phases of flash:

Phase One: Extreme Performance

The early days of enterprise flash storage were pioneered by the likes of FusionIO with their PCIe flash cards. These things sold for a $/GB price that would seem obscene in today’s AFA marketplace – and (at least initially) they had almost no functionality in terms of thin provisioning, replication, snapshots, data reduction technologies and so on. They weren’t even shared storage! They were just fast blobs of flash that you could stick right inside your server to get performance which, at the time, seemed insanely fast – think <250 microseconds of latency.

This meant they were only really suitable for extreme performance requirements, where the cost and complexity was justified by the resultant improvement to the application.

Phase Two: Niche Performance Applications

The next step on the path to flash adoption was the introduction of flash as shared storage (i.e. SAN). These were the first All Flash Arrays, a marketplace pioneered by Violin Memory (my former employer) and Texas Memory Systems (subsequently acquired by IBM). The fact that they were shared allowed a larger number of applications to be migrated to flash, but they were still very much used as a niche performance play due to a lack of features such as data reduction, replication etc.

Phase Three: Virtualization for Servers and Desktops

The third phase was driven by the introduction of a very important feature: data reduction. By implementing deduplication and/or compression – therefore massively reducing the effective price in $/GB – a couple of new entrants to the AFA space were able to redefine the marketplace and leave the pioneering AFA vendors floundering. These new players were Pure Storage and EMC with its XtremIO system – and they were able to create and attack an entirely new market: virtualization. Initially they went after Virtual Desktop Infrastructure projects, which have lots of duplicate data and create lots of IOPS, but in time the market for Virtual Server Infrastructure (i.e. VMware, Hyper-V, Xen etc) became a target too.

Phase Four: General Purpose Storage

This is where we are now – or at least, it’s where we’ve just arrived. The price of flash storage has consistently dropped as the technology has advanced, while almost all of the features and functionality originally found on enterprise-class disk arrays are now available on AFAs. We’re finally at the point now where, with some caveats, customers are either moving to or planning the wholesale replacement of their general purpose disk arrays with All Flash. Indeed, with the constant evolution of NAND flash technology it’s no longer fanciful to believe that Backup and Archive workloads could also move to flash…

We are now at the inflection point where, thanks to the combination of data reduction features and constantly-evolving NAND flash development, the cost of All Flash storage has fallen as low as enterprise disk storage while delivering all the functionality required to replace disk entirely. We call this concept the All-Flash Data Centre.

So to answer the question at the start of this post, I have joined Kaminario because I believe they are ideally placed, architecturally and commercially, to lead the adoption of this new phase of flash storage – a technology that I fundamentally believe in.

Making The All-Flash Data Centre A Reality

I mentioned earlier that NAND flash is changing and evolving all the time. It reminds me a little of smartphones – you buy the latest and greatest model only for it to become yesterday’s news almost before you’ve worked out how to use it. But the typical refresh cycle of a smartphone is one-to-two years, while for enterprise storage it’s five-to-seven. That’s a long time to risk an investment in evolving technology.

Kaminario’s K2 All Flash Array is based on its SPEAR architecture. Essentially, what Kaminario has created is a high-performance, scalable framework for taking memory and presenting it as enterprise-class storage – with all the resilience, functionality and performance you would expect. When the company was first founded this memory was just that: DRAM. But since NAND flash became economically viable, Kaminario has been using flash – and the architecture is agile enough to adopt whichever technology makes the most sense in the future.

As an example of this agility, Kaminario was the first AFA vendor to adopt 3D NAND technology and the first to adopt 3D TLC. This obviously allows a major competitive advantage when it comes to providing the most cost-efficient All Flash Array. But what really drew me to Kaminario was their decision to allow customers to integrate future hardware (such as new types of flash) into their existing arrays rather than making them migrate to a new product as is typical in the industry. By protecting customers’ investments, Kaminario is taking some of the risk out of moving to an AFA solution. It calls this programme the Perpetual Array.

In addition to this, Kaminario has a unique ability to offer both scale-out and scale-up architectures (scalability is something I will discuss further in my Storage for DBAs series soon) and to deliver workload-agnostic performance… all technical features that deliver real business value. But those are for discussion another day.

For today the message is simple: Kaminario is making the All Flash Data Centre a reality… and I want to be here to help customers make that happen.

All Flash Arrays: SSD-based versus Ground-Up Design


In recent articles in this series I’ve been looking at the architectural choices for building All Flash Arrays (AFAs). I surmised that there are three main approaches:

  • Hybrid Flash Arrays
  • SSD-based All Flash Arrays
  • Ground-Up All Flash Arrays (which from here on I’ll refer to as Custom Flash Module arrays or CFM arrays)

I’ve already blown metaphorical raspberries at the hybrid approach, so now it’s time to cover the other two.

SSD or CFM: The Big Question

I think the most interesting question in the AFA industry right now is the one of whether the SSD or CFM design will win. Of course, it’s easy to say “win” like that as if it’s a simple race, but this is I.T. – there’s never a simple answer. However, the reality is that each method offers benefits and drawbacks, so I’m going to use this blog post to simply describe them as I see them.

Before I do that, let me just remind you of what the vendor landscape looks like at this time:

SSD-based architecture: Right now you can buy SSD-based arrays from EMC (XtremIO), Pure Storage, Kaminario, Solidfire, HP 3PAR and Huawei to name a few. It’s fair to say that the SSD-based design has been the most common in the AFA space so far.

CFM-based architecture: On the other hand, you can now buy ground-up CFM-based arrays from Violin Memory, IBM (FlashSystem), HDS (VSP), Pure Storage (FlashArray//m) and EMC (DSSD). The latter has caused some excitement because of DSSD’s current air of mystery in the marketplace – in other words, the product isn’t yet generally available.

So which approach is “the best”?

The SSD-based Approach

If you were going to start an All Flash Array company and needed to bring a product to market as soon as possible, it’s quite likely you would go down the SSD route. Apart from anything else, flash management is hard work – and needs constant attention as new types of flash come to market. A flash hardware engineer friend of mine used to say that each new flash chip is like a snowflake – they all behave slightly differently. So by buying flash in the ready-made form of an SSD you bypass the requirement to put in all this work. The flash controller from the SSD vendor does it for you, leaving you to concentrate on the other stuff that’s needed in enterprise storage: resilience, availability, data services, etc.

On the other hand, it seems clear that an SSD is a package of flash pretending to behave like a disk. That often means I/Os are taking place via protocols that were designed for disk, such as Serial Attached SCSI. Also, in a unit the size of an all flash array there are likely to be many SSDs… but because each one is an isolated package of flash, they cannot work together and manage the flash holistically. In other words, if one SSD is experiencing issues due to garbage collection (for example), the others cannot take the strain.

The Ground-Up Approach

For a number of years I worked for Violin Memory, which adopts the ground-up approach at its very core. Violin’s position is that only the CFM approach can unlock the full potential benefits from NAND flash. By tightly integrating the NAND flash into its array – and by using its own controllers to manage that flash – Violin believes it has been able to deliver the best performance in the AFA market. On the other hand, many SSD vendors build products for the consumer market where the highest levels of performance simply aren’t necessary. All that’s required is something faster than disk – it doesn’t always have to be the fastest possible solution.

It could also be argued that any CFM vendor who has a good relationship with a flash fabricator (for example, Violin is partly-owned by Toshiba) could gain a competitive advantage by working on the very latest NAND flash technologies before they are available in SSD form. What’s more, SSDs represent an additional step in the process of taking NAND flash from chip to All Flash Array, which potentially means there’s an extra party needing to make their margin. Could it be that the CFM approach is more cost effective?

SSD Economics

The argument about economics is an interesting one. Many technical people have a tendency to focus on what they know and love: technology. I’m as guilty of this as anyone – given two solutions to a problem I tend to gravitate toward the one that has the most elegant technical design, even if it isn’t necessarily the most commercially-favourable. Taking raw flash and integrating it into a custom flash module sounds great, but what is the cost of manufacturing those CFMs?

Manufacturing is all about economies of scale. If you design something and then build thousands of them, it will obviously cost you more per unit than if you build millions of them. How many ground-up all flash vendors are building their custom flash modules by the millions? In May 2015, IBM issued this press release in which they claimed that they were the “number one all-flash storage array vendor in 2014”. How many units did they ship? 2,100.

In just the second quarter of 2015, almost 24 million SSDs were shipped to customers, with Samsung responsible for 43.8% of that total (according to US analyst firm Trendfocus, Inc). Who do you think was able to achieve the best economy of scale?

Design Agility

The other important question is the one about New Stuff ™. We are always being told about fantastic new storage technologies that are going to change our lives, so who is best placed to adopt them first?

Again there’s an argument to be made on both sides. If the CFM flash vendor is working hand-in-glove with a fabricator, they may have access to the latest technology coming down the line. That means they can be prepared ahead of the pack – a clear competitive advantage, right?

But how agile is the CFM design? Changing the NVM media requires designing an entirely-new flash module, with all the associated hardware engineering costs such as prototyping, testing, QA and limited initial manufacturing runs.

For an SSD all flash array vendor, however, that work is performed by the SSD vendor… again somebody like Samsung, Intel or Micron who have vast infrastructures in place to perform that sort of work all the time. After all, a finished SSD must behave exactly like a disk, regardless of what NVM technology it uses under the covers.

Conclusion

There are obviously two sides to this argument. The SSD was designed to replace a fundamental bottleneck in storage systems: the hard disk drive. Ironically, it may be the fate of the SSD to become exactly what it replaced. For flash to become mainstream it was necessary to create a “flash-behaving-as-disk” package, but the flip side of this is the way that SSDs stifle the true potential of the underlying flash. (Although perhaps NVMe technologies will offer us some salvation…)

However, unless you are a company the size of Samsung, Intel or Micron it seems unlikely that you would be able to retain the manufacturing agility and economies of scale required to produce custom flash modules at the price point of SSDs. Nor would you be likely to have the agility to adopt new NVM technologies at the moment that they become economically preferable to whatever medium you were using previously.

Whatever happens, you can be sure that each side will claim victory. With the entire primary data market to play for, this is a high stakes game. Every vendor has to invest a large amount of money to enter the field, so nobody wants to end up being consigned to the history books as the Betamax of flash…

For younger readers, Betamax was the loser in a battle with VHS over who would dominate the video tape market. You can read about it here. What do you mean, “What is a video tape?” Those things your parents used to watch movies on before the days of DVDs. What do you mean, “What is a DVD?” Jeez, I feel old.

All Flash Arrays: Where’s My Capacity? Effective, Usable and Raw Explained


What’s the most important attribute to consider when you want to buy a new storage system? More critical than performance, more interesting than power and cooling requirements, maybe even more important than price? Whether it’s an enterprise-class All Flash Array, a new drive for your laptop or just a USB flash key, the first question on anybody’s mind is usually: how big is it?

Yet surprisingly, at least when it comes to All Flash Arrays, it is becoming increasingly difficult to get an accurate answer to this question. So let’s try and bring some clarity to that in this post.

Before we start, let’s quickly address the issue of binary versus decimal capacity measurements. For many years the computer industry has lived with two different definitions for capacity: memory is typically measured in binary values (powers of two) e.g. one kibibyte = 2 ^ 10 bytes = 1024 bytes. On the other hand, hard disk drive manufacturers have always used decimal values (powers of ten) e.g. one kilobyte = 10 ^ 3 bytes = 1000 bytes. Since flash memory is commonly used for the same purpose as disk drives, it is usually sold with capacities measured in decimal values – so make sure you factor this in when sizing your environments.
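As a quick illustration of how much that decimal/binary difference matters (a sketch of my own, not from any vendor’s sizing guide):

```python
# A drive sold as 1 TB (decimal) shows up as roughly 931 GiB when reported
# in binary units - before any formatting or system overhead is considered.
advertised_tb = 1                        # decimal terabytes, as marketed
bytes_total = advertised_tb * 10 ** 12
print(f"{advertised_tb} TB = {bytes_total / 2 ** 30:,.0f} GiB "
      f"= {bytes_total / 2 ** 40:.2f} TiB")
```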

Definitions

Now that’s covered, let’s look at the three ways in which capacity is most commonly described: raw capacity, usable capacity and effective capacity. To ensure we don’t stray from the truth, I’m going to use definitions from SNIA, the Storage Networking Industry Association.

Raw Capacity: The sum total amount of addressable capacity of the storage devices in a storage system.

The raw capacity of a flash storage product is the sum of the available capacity of each and every flash chip on which data can be stored. Imagine an SSD containing 18 Intel MLC NAND die packages, each of which has 32GB of addressable flash. This therefore contains 576GB of raw capacity. The word addressable is important because the packages actually contain additional unaddressable flash which is used for purposes such as error correction – but since this cannot be addressed by either you or the firmware of the SSD, it doesn’t count towards the raw value.
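In code form, using the same hypothetical SSD:

```python
# Raw capacity is simply the sum of the addressable flash across all packages.
packages = 18            # Intel MLC NAND die packages in the example SSD
gb_per_package = 32      # addressable GB per package
raw_gb = packages * gb_per_package
print(f"Raw capacity: {raw_gb} GB")      # 576 GB
```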

Usable Capacity: (synonymous with Formatted Capacity in SNIA terminology) The total amount of bytes available to be written after a system or device has been formatted for use… [it] is less than or equal to raw capacity.

Possibly one of the most abused terms in storage, usable capacity is what you have left after taking raw capacity and removing the space set aside for system use, RAID parity, over-provisioning (i.e. headroom for garbage collection) and so on. It is guaranteed capacity, meaning you can be certain that you can store this amount of data regardless of what the data looks like.

That last statement is important once data reduction technologies come into play, i.e. compression, deduplication and (arguably) thin provisioning. Take 10TB of usable space and write 5TB of data into it – you now have 5TB of usable capacity remaining. Sounds simple? But take 10TB of usable space and write 5TB of data which dedupes and compresses at a 5:1 ratio – now you only need 1TB of usable space to store it, meaning you have 9TB of usable capacity left available.
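Here’s that worked example as a small illustrative Python function (my sketch, not anybody’s sizing tool):

```python
def usable_remaining(usable_tb, data_tb, reduction_ratio=1.0):
    """Usable capacity left after storing data that reduces at the given ratio."""
    return usable_tb - (data_tb / reduction_ratio)

print(usable_remaining(10, 5))        # 5.0 TB left with no data reduction
print(usable_remaining(10, 5, 5.0))   # 9.0 TB left at a 5:1 reduction ratio
```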

Effective Capacity: The amount of data stored on a storage system … There is no way to precisely predict the effective capacity of an unloaded system. This measure is normally used on systems employing space optimization technologies.

The effective capacity of a storage system is the amount of data you could theoretically store on it in certain conditions. These conditions are assumptions, such as “my data will reduce by a factor of x:1”. There is much danger here. The assumptions are almost always related to the ability of a dataset to reduce in some way (i.e. compress, dedupe etc) – and that cannot be known until the data is actually loaded. What’s more, data changes… as does its ability to reduce.

For this reason, effective capacity is a terrible measurement on which to make any firm plans unless you have some sort of guarantee from the vendor. This takes the form of something like, “We guarantee you x terabytes of effective capacity – and if you fail to realise this we will provide you with additional storage free of charge to meet that guarantee”. This would typically then be called the guaranteed effective capacity.

The most commonly used assumptions in the storage industry are that databases reduce by around 2:1 to 4:1, VSI systems around 5:1 to 6:1 and VDI systems anything from 8:1 right up to 18:1 or even further. This means an average data reduction of around 6:1, which is the typical ratio you will see on most vendor’s data sheets. If you take 10TB of usable capacity and assume an average of 6:1 data reduction, you therefore end up with an effective capacity of 60TB. Some vendors use a lower ratio, such as 3:1 – and this is good for you the customer, because it gives you more protection from the risk of your data not reducing.
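Expressed as a sketch (the ratios below are simply midpoints of the rule-of-thumb ranges quoted above, not measurements):

```python
# Effective capacity = usable capacity x an ASSUMED data reduction ratio.
usable_tb = 10
assumed_ratios = {"databases": 3, "VSI": 5.5, "VDI": 10, "datasheet average": 6}
for workload, ratio in assumed_ratios.items():
    print(f"{workload:>17}: {usable_tb} TB usable x {ratio}:1 "
          f"= {usable_tb * ratio:g} TB effective")
```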

But it’s all meaningless in the real world. You simply cannot know what the effective capacity of a storage system is until you put your data on it. And you cannot guarantee that it will remain that way if your data is going to change. Never, ever buy a storage system based purely on the effective capacity offered by the vendor unless it comes with a guarantee – and always consider whether the assumed data reduction ratio is relevant to you.

Think of it this way: if you are being sold effective capacity you are taking on the financial risk associated with that data reduction. However, if you are being sold guaranteed effective capacity, the vendor is taking on that financial risk instead. Which scenario would you prefer?

Use and Abuse of Capacities

Three different ways to measure capacity? Sounds complicated. And in complexity comes opportunities for certain flash array vendors to use smoke and mirrors in order to make their products seem more appealing. I’m going to highlight what I think are the three most common tactics here.

1. Confusing Usable Capacity with Effective Capacity

Many flash array vendors have Always-On data reduction services. This is claimed to be for the customer’s benefit but is often more about reducing the amount of writes taking place to the flash media (to alleviate performance and endurance issues). For some vendors, not having the ability to disable data reduction can be spun around to their advantage: they simply make out that the terms usable and effective are synonymous, or splice them together into the unforgivable phrase effective usable capacity to make their products look larger. How convenient.

Let me tell you this now: every flash array has a usable capacity… it is the maximum amount of unique, incompressible data that can be stored, i.e. the effective capacity if the data reduction ratio were 1:1. I would argue that this is a much more important figure to know when you buy a flash system, because if you buy on effective capacity you are just buying into dreams. Make your vendor tell you the usable capacity at 1:1 data reduction and then calculate the price per GB based on that value.
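For example, here’s the comparison I’m suggesting you make, with entirely hypothetical prices and capacities:

```python
# Hypothetical figures, purely to show why $/GB should be calculated on
# usable capacity (at 1:1 reduction) rather than on claimed effective capacity.
price_usd = 250_000      # hypothetical system price
usable_tb = 50           # usable capacity at 1:1 data reduction
effective_tb = 300       # vendor's claimed effective capacity (assumes 6:1)

print(f"$/GB on usable capacity:    {price_usd / (usable_tb * 1000):.2f}")
print(f"$/GB on effective capacity: {price_usd / (effective_tb * 1000):.2f}")
```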

2. My Data Reduction Is Better Than Yours

Every flash vendor thinks their data reduction technologies are the best. They can’t all be right. Yet sometimes you will hear claims so utterly ridiculous that you’ll think it’s some kind of joke. I guess we all believe what we want to hear.

Here’s the truth. Compression and deduplication are mature technologies – they have been around for decades. Nobody in the world of flash storage is going to suddenly invent something that is remarkably better than the competition. Sometimes one vendor’s tech might deliver better results than another, but on other days (and, crucially, with other datasets) that will reverse. For this reason, as well as for your own sanity, you should assume they will all be roughly the same… at least until you can test them with your data. When you evaluate competitive flash products, make them commit to a guaranteed effective capacity and then hold them to it. If they won’t commit, beware.

3. Including Savings From Thin Provisioning

Thin Provisioning is a feature whereby physical storage is allocated only when it is used, rather than being preallocated at the moment of volume creation. Thus if a 10TB volume is created and then 1TB of data written to it, only 1TB of physical storage is used. The host is fooled into thinking that it has all 10TB available, but in reality there may be far less physical capacity free on the storage system.

Some vendors show the benefits from thin provisioned volumes as a separate saving from compression and deduplication, but some roll all three up into a single number. This conveniently makes their data reduction ratios look amazing – but in my opinion this then becomes a joke. Thin provisioning isn’t data reduction, because no data is being reduced. It’s a cheap trick – and it says a lot about the vendor’s faith in their compression and deduplication technologies.
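To see just how much this flatters the numbers, take the 10TB thin-provisioned volume from above with 1TB written, and assume (my illustrative figure) that the 1TB genuinely reduces at 2:1:

```python
# How counting thin-provisioning "savings" inflates a data reduction ratio.
provisioned_tb = 10
written_tb = 1
genuine_ratio = 2                               # real dedupe/compression
physical_tb = written_tb / genuine_ratio        # 0.5 TB actually stored

honest = written_tb / physical_tb               # 2:1
inflated = provisioned_tb / physical_tb         # 20:1 if provisioned space counts
print(f"Honest ratio: {honest:.0f}:1  With thin provisioning folded in: {inflated:.0f}:1")
```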

Conclusion

Don’t let your vendors set the agenda when it comes to sizing. If you are planning on buying a certain capacity of flash, make sure you know the raw and usable capacities, plus the effective capacity and the assumed data reduction ratio used to calculate it. Remember that usable should be lower than raw, while effective (which is only relevant when data reduction technologies are present) will commonly be higher.

Keep in mind that Effective Capacity = Usable Capacity × Data Reduction Factor.

Be aware that when a product with an “Always-On” data reduction architecture tells you how much capacity you have left, it’s basically a guess. In reality, it’s entirely dependent on the data you intend to write. I’ve always thought that “Always-On” was another bit of marketing spin; you could easily rename it as an “Unavoidable” or “No Choice” architecture.

In my opinion, the best data reduction technology will be selectable and granular. That means you can choose, at a LUN level, whether you want to take advantage of compression and/or deduplication or not – you aren’t tied in by the architecture. As with all features, the architecture should allow you to have a choice rather than enforce a compromise.

Finally, remember the rule about effective capacity. If it’s guaranteed, the risk is on the vendor. If they won’t guarantee, the risk is on you.

So there we have it: clarity and choice. Because in my opinion – and no matter which way you measure it – one size simply doesn’t fit all.
