All Flash Arrays: Controllers Are The New Bottleneck


Today’s storage array market contains a wide variety of products: block storage, file storage or object storage; direct-attached, SAN or NAS systems; Fibre Channel, iSCSI or InfiniBand… Even the SAN section of the market is full of diversity: from legacy hard disk drive-based arrays through the transitory step of tiered disk+flash hybrid systems and on to modern All-Flash Arrays (AFAs).

If you were partial to the odd terrible pun, you might even say that it was a bewildering array of choices. [*Array* of choices? Oh come on. If you’re expecting a higher class of humour than that, I’m afraid this is not the blog for you]

Anyway, one thing that pretty much all storage arrays have in common is the basic building blocks of which they are composed:

  • Array controllers
  • Internal networking
  • Persistent Media (flash or disk)

Over the course of this blog series I’ve talked a lot about both flash and disk media, but now it’s time to concentrate a little more on the other stuff – specifically, the controllers. A typical Storage Area Network delivers a lot more functionality than would be expected from just connecting a bunch of disks or flash – and it’s the controllers that are responsible for most of that added functionality.

Storage Array Controllers

Think of your storage system as a private network on which is located a load of dumb disk or flash drives. I say dumb because they can do little other than accept I/O requests: reads and writes. The controllers are therefore required to provide the intelligence needed to present those drives to the outside world and add all of the functionality associated with enterprise-class storage:

  • Resilience: RAID, automatic fault tolerance and high availability
  • Mirroring and/or replication
  • Data reduction technologies (compression, deduplication, thin provisioning)
  • Data management features (snapshots, clones, etc)
  • Management and monitoring interfaces
  • Vendor support integration such as call-home and predictive analytics

Controllers are able to add this “intelligence” because they are actually computers in their own right, acting as intermediaries between the back-end storage devices and the front-end storage fabric which connects the array to its clients. And as computers, they rely on the three classic computing resources (which I’m going to list here so I can sneak them up on you again later in this post):

  1. Memory (DRAM)
  2. Processing (CPU)
  3. Networking

It’s the software running on the array controllers – and utilising these resources – that defines their behaviour. But with the rise of flash storage, this behaviour has had to change… drastically.


Disk Array Controllers vs Flash Array Controllers

In the days when storage arrays were crammed full of disk drives, the controllers in those arrays spent lots of time waiting on mechanical latency. This meant that the CPUs within the controllers had plenty of idle cycles where they simply had to wait for data to be stored or retrieved by the disks they were addressing. To put it another way, CPU power wasn’t such an important priority when specifying the controller hardware.

This is clearly not the case with the controllers in an all flash array, since mechanical latency is a thing of the past. The result is that controller CPUs now have no time to spare – data is constantly being handled, addressed, moved and manipulated. Suddenly, the choice of CPU has a direct effect on the array’s ability to process I/O requests.
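To put some rough numbers on that, here’s a quick sketch of how the per-device workload changes when mechanical latency disappears. The latency figures below are ballpark assumptions for illustration, not measurements from any particular drive or array:

```python
# Ballpark illustration: how much more work per device a controller faces
# when mechanical latency disappears. The latency figures are assumptions
# chosen for illustration, not measurements from any particular product.

DISK_SERVICE_TIME_S = 0.005    # ~5 ms for a random I/O on a 7,200 RPM disk
FLASH_SERVICE_TIME_S = 0.0001  # ~100 microseconds for a NAND flash read

def iops_per_device(service_time_s: float) -> float:
    """Rough upper bound on I/Os per second from a single device (queue depth 1)."""
    return 1.0 / service_time_s

disk_iops = iops_per_device(DISK_SERVICE_TIME_S)     # ~200 IOPS
flash_iops = iops_per_device(FLASH_SERVICE_TIME_S)   # ~10,000 IOPS

print(f"Disk:  ~{disk_iops:,.0f} IOPS per device (controller CPU mostly waiting)")
print(f"Flash: ~{flash_iops:,.0f} IOPS per device "
      f"(~{flash_iops / disk_iops:.0f}x the work, far fewer idle cycles)")
```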

Dedupe Kills It

But there’s more. One of the biggest shifts in behaviour seen with flash arrays is the introduction of data reduction technology – specifically deduplication. This functionality, known colloquially as dedupe, intervenes in the write process to see if an exact copy of any written block already exists somewhere else on the array. If the block does exist, the duplicate copy does not need to be written – and instead a pointer to the existing version can be stored, saving considerable space. This pointer is an example of what we call metadata – information about data.
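As a rough illustration of that write path, here’s a minimal sketch of inline deduplication. It’s a Python toy under my own simplifying assumptions – not any vendor’s actual implementation – and it ignores real-world concerns such as hash collisions, garbage collection and metadata persistence:

```python
import hashlib

# Minimal inline dedupe sketch (illustrative only).
BLOCK_SIZE = 4096      # assume 4 KiB dedupe granularity
fingerprints = {}      # hash -> physical location: the metadata, held in DRAM
physical_blocks = []   # stands in for the flash back end

def write_block(data: bytes) -> int:
    """Write one block, returning its physical location.
    If an identical block already exists, store only a pointer to it."""
    digest = hashlib.sha256(data).digest()       # fingerprint the incoming block
    if digest in fingerprints:
        return fingerprints[digest]              # duplicate: no physical write needed
    physical_blocks.append(data)                 # unique: write it to flash
    location = len(physical_blocks) - 1
    fingerprints[digest] = location              # remember it for future writes
    return location

first = write_block(b"\x00" * BLOCK_SIZE)
second = write_block(b"\x00" * BLOCK_SIZE)       # identical content...
assert first == second                           # ...so only one physical copy exists
```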

I will cover deduplication at greater length in another post, but for now there are three things to consider about the effect dedupe has on storage array controllers:

  1. Dedupe requires the creation of a fairly complex set of metadata structures – and for performance reasons much of this will need to reside in DRAM on the controllers. As more data is stored on the array, the amount of metadata grows too – hence an increasing dependency on the availability of (expensive) memory in those array controllers (see the back-of-envelope sketch after this list).
  2. The process of checking each incoming block (which involves calculating a hash value) and comparing it against a table of metadata stored in DRAM is very CPU intensive. Thus array controllers which support dedupe have increasing requirements for (expensive) processing power.
  3. For storage arrays which run in an active/active configuration (i.e. with multiple redundant controllers, each of which actively sends and receives data to and from the persistent storage layer), much of this metadata will need to be passed between controllers over the array’s internal networking.
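To get a feel for point one, here’s a back-of-envelope calculation. The capacity, block size and entry size are round-number assumptions chosen for illustration, not a description of any particular product:

```python
# Back-of-envelope dedupe metadata sizing. All figures are illustrative
# assumptions; real implementations vary and typically keep only part of
# this table in controller DRAM.

usable_capacity_bytes = 100 * 2**40   # 100 TiB of stored data
block_size_bytes = 4096               # 4 KiB dedupe granularity
entry_size_bytes = 48                 # fingerprint + pointer + overhead per block

entries = usable_capacity_bytes // block_size_bytes
metadata_bytes = entries * entry_size_bytes

print(f"{entries:,} metadata entries")
print(f"~{metadata_bytes / 2**40:.1f} TiB of metadata if every entry were held in memory")
```

Even allowing for generous rounding, that is far more DRAM than most controllers carry – which is exactly why metadata management is such a headache for flash array designers.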

Did you spot the similarities between this list and the one from earlier? Flash array controllers are much more dependent on their resources – particularly CPU and DRAM – than disk array controllers.

Summary

Flash array controllers have to do almost everything that their ancestors, the venerable disk array controllers, used to do. But they have to do it much faster and in greater volume. Not only that, but they have to do so much more… especially for the process known as data reduction. And as we’ve seen, the overhead of all these tasks causes a much greater strain on memory, processing and networking than was previously seen in the world of disk arrays – which is one of the reasons you cannot simply retrofit SSDs into a disk array architecture.

With the introduction of flash into storage, the bottleneck has moved away from the persistence layer and is now with the controllers. Over the next few articles we’ll look at what that means and consider the implications of various AFA architecture strategies on that bottleneck. After all, as is so often the case when it comes to matters of performance, you can’t always remove the bottleneck… but you can choose the one which works best for you🙂

New Installation Cookbook: Oracle Linux 6.7 with Oracle 11.2.0.4 RAC

I’ve updated my install cookbooks page to include a new cookbook for installation of Oracle 11.2.0.4 Real Application Clusters on Oracle Linux 6.7.

This is also the first one I’ve published since I left the employment of Violin Memory to work for Kaminario, so this install uses a Kaminario K2 All Flash Array. However, it applies very well to any Oracle RAC installation which uses relatively capable storage.

Enjoy:

https://flashdba.com/install-cookbooks/oracle-linux-6-7-with-oracle-11-2-0-4-rac/

How the Next Generation of Flash Storage is Changing the Economics Of SaaS Businesses (Recorded Webinar)


This week I had the opportunity to record a webinar on a subject very close to my heart, the Software-as-a-Service industry. From 2003 to 2007 I managed the production infrastructure for a global SaaS company through the transition from startup to acquisition (partly by Salesforce.com). At the time, SaaS was a relatively new phenomenon, predating any concept of “Cloud”, but the challenges we faced then are still very relevant today.

The company was run by charismatic American entrepreneur Mark Suster, now a well known venture capitalist and blogger. Looking back, it was an incredible learning experience – but I do also remember that I spent a lot of time trying to coax more performance out of our multi-tenant database platform, which was constantly being held back by … yes, you guessed it… disk performance.

The webinar was hosted by Kaminario (my employer) and co-presented by myself and Jeff Kaplan of ThinkStrategies. Here’s the synopsis and the link (registration required). Enjoy!

http://info.kaminario.com/how-flash-storage-is-changing-saas-businesses

Advances in all flash storage are reshaping infrastructure strategies for the modern data center. SaaS businesses are on the leading edge of adopting all flash storage as they build application delivery infrastructure that supports the performance, scalability, and agility required to deliver high quality business apps to users around the world.

Join Jeff Kaplan, Managing Director of ThinkStrategies and Chris Buckel, author of the FlashDBA Blog and Technology Evangelist from Kaminario for this webinar discussion of infrastructure strategies for modern SaaS Businesses.

Understanding Flash: The Fall and Rise of Flash Memory


This month sees the four-year anniversary of some interesting events. Commonwealth countries around the world celebrated the Diamond Jubilee of Queen Elizabeth II. Whitney Houston was tragically found dead in a Beverly Hills hotel. The Caribbean was hit hard by a sargassum seaweed invasion. And I made the decision to leave the comfort of Oracle databases and join the exciting new All-Flash Array industry.

Ok, I might have been stretching the use of the word “interesting” there. But for those with an interest in flash memory, February 2012 was still a very important month due to the publication of a research paper co-authored by the University of California, San Diego’s Department of Computer Science and Engineering and Microsoft Research.

The paper was entitled The Bleak Future of NAND Flash Memory – and it wasn’t pleasant reading for somebody who had just abandoned a career in databases to bet everything on flash.

The Death of Flash Memory

I have never spoken to the authors of this paper so I don’t know where the “Bleak Future” title came from, but it seems reasonable to say that it was somewhat more inflammatory than the content. In the body of the paper, the authors examined the behaviour of NAND flash memory chips as the lithography shrank – and also as the number of bits per cell increased from SLC through MLC to TLC. At the time of publication the authors were examining 25nm technology but it was already obvious that this form of NAND (known as 2D planar NAND) was going to hit physical limitations beyond which it could no longer shrink. This is known in the semiconductor world as the scaling limit.

The paper concluded:

“SSDs will continue to improve by some metrics (notably density and cost per bit), but everything else about them is poised to get worse. This makes the future of SSDs cloudy”

This sentiment, along with the “bleak future” thing, caused a bit of a stir in the tech world. The Register, for example, ran a typically tongue-in-cheek headline: “Flash DOOMED to drive itself off a cliff – boffins”. Various industry bloggers discussed the potential of technologies like ReRAM to take over for the next decade, while HP made its annual claim that Memristor technology (a form of ReRAM) would soon be here to save the day. I started wondering if I should register the domain name ReRamDBA.com.

The Resurrection – Now Showing in 3D

Four years later, ReRAM is still just around the corner but now in the form of Intel and Micron’s 3D XPoint technology, while HP has significantly backtracked on its Memristor programme. Flash memory, meanwhile, is still going strong thanks to the introduction of vertical or 3D NAND.

“Reports of my death have been greatly exaggerated” – Mark Twain

Of course, hindsight is a wonderful thing. It’s easy to look back now at the publication of the Bleak Future… paper and consider it flawed. To see the flaws at the time of publication would have required a bit more thought.

So that’s why this month’s hero is Allyn Malventano, Storage Editor for PC Perspective, who published an article on 21st February 2012 (the same month!) called NAND Flash Memory – A Future Not So Bleak After All in which he described the original publication as “bad science”. Allyn’s conclusion was so prescient that I’m going to quote it right here (although you should read the whole article to get the full context):

“The point I want all of you to take home here is that just as with the CPU, RAM, or any other industry involving wafers and dies, the manufacturers will adapt and overcome to the hurdles they meet. There is always another way, and when the need arises, manufacturers will figure it out.”

Bravo. Samsung is now manufacturing its third generation V-NAND chips, while the Toshiba/SanDisk and Intel/Micron partnerships are both going 3D. Samsung’s V-NAND has already moved from 24 through 32 to 48 layers, while it has been theorised that there is no natural limit on the number of layers possible.

Of course, there’s always the spectre of a new technology sweeping everything before it – and the big story right now is Intel/Micron’s 3D XPoint technology. Will it take over from flash in the future? Who knows.

One thing I do know is that new technologies find their rightful place when they are both technically capable and economically viable. If 3D XPoint or any other non-volatile memory product can win the day, it will leave us all better off – and hopefully without the need for alarmist research papers.

Now, if you’ll excuse me, I’m off to check on the availability of the 3D-XPointDBA.com domain name…

(You can read more about 3D NAND here)

Understanding Flash: What is 3D NAND?


About 18 months ago I wrote a post describing the different types of NAND flash known as SLC, MLC and TLC. However, 18 months is a lifetime in the world of technology, so now I need to update that picture to reflect the widespread adoption of a new type of NAND flash. Let me explain…

Recap: 2D Planar NAND

Until recently, most of the flash memory used for data storage was of a form known as 2D Planar NAND and could be found in three types called Single-Level Cell (SLC), Multi-Level Cell (MLC) and Triple-Level Cell (TLC). I always used to use my bucket of electrons analogy to describe the difference between them:

[Image: the bucket analogy for SLC, MLC and TLC]

Each cell within planar NAND flash memory stores charge in a way similar to how a bucket stores water. By considering an imaginary line half-way up the bucket we can assign a binary one or zero based on whether the bucket contains more or less water than the line. Thus a full bucket, or a fully-charged NAND cell, denotes a zero while an empty bucket / cell denotes a one… assuming we are considering SLC, where each bucket stores one bit.

Moving to MLC (two bits) or TLC (three bits) is therefore a case of adding more lines, allowing us to differentiate between more states within the same bucket. The benefit is double (MLC) or triple (TLC) density, but the drawback is a lower margin for error when measuring the amount of water/charge stored. As a consequence, the actions of reading, writing and erasing take longer, while the endurance of the cell also drops drastically (leaky buckets are more of a problem as you try to be more precise about the measurements). The original article covers this all in more detail.
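If it helps, here is the bucket analogy expressed as a toy calculation. This is purely illustrative – real NAND decodes voltage thresholds in hardware, with error correction layered on top:

```python
# The "lines on the bucket" idea as code: more bits per cell means more
# reference lines and a smaller margin for error per state. Illustrative
# only; charge is normalised to the range 0.0 (empty) to 1.0 (full).

def decode_slc(charge: float) -> int:
    """SLC: one line halfway up the bucket - a full(ish) cell is 0, an empty(ish) cell is 1."""
    return 0 if charge > 0.5 else 1

def states_and_margin(bits_per_cell: int) -> tuple[int, float]:
    """How many charge states must be distinguished, and the charge window per state."""
    states = 2 ** bits_per_cell
    return states, 1.0 / states

for name, bits in (("SLC", 1), ("MLC", 2), ("TLC", 3)):
    states, margin = states_and_margin(bits)
    print(f"{name}: {bits} bit(s) per cell, {states} states, margin per state = {margin:.3f}")

print(decode_slc(0.9))   # a nearly full bucket reads back as 0
```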

Shrinking Lithographies

If you remember, I also talked about the way that flash memory manufacturers are constantly shrinking the size of NAND flash cells in order to make increasingly dense packages, thus reducing the cost – but that the technology was now approaching its physical limits. In the bucket example, just imagine that the buckets are getting smaller and smaller. This is initially a good thing because smaller buckets (actually floating gate transistors) mean more buckets can fit in the same overall space, but in time the buckets become so small that they are no longer manageable – and then the technology hits a brick wall.

So Why Is It 2D?

In NAND flash memory, sets of cells are connected together in a string to form a NAND gate:

[Image: a string of NAND flash cells]

Image courtesy of Warren Miller at Avnet

If you consider one of the pieces of silicon substrate contained inside a flash chip as a rectangle with dimensions X and Y, each one of these strings of cells will take up some space stretching out in one of these two dimensions. Shrinking the lithography, i.e. manufacturing everything on a smaller scale, will give us the opportunity to fit more strings on the same amount of substrate. But as we previously discussed, there comes a point when things are simply too small and too close together, resulting in interference and leakage.

3D NAND: Going Vertical

Image courtesy of Kristian Vättö at AnandTech

The cost of a semiconductor is proportional to the die size. It is therefore a good thing for the cost if more electronics can be crammed into the same tiny piece of silicon. The fundamental difference in 3D NAND, which gives rise to its name, is that the strings previously described are now arranged vertically – in other words, in the Z dimension. For this reason, Samsung calls the technology V-NAND.

Imagine the string of cells shown earlier, but this time stood on its end and then folded in two to make a U shape. We now have a vertical string which takes up only a fraction of the original space in the X and Y dimensions. What’s more, we can continue to build in the Z dimension as manufacturing processes allow. Samsung’s first generation of V-NAND had strings of 24 layers, while the second generation had 32. The latest 3rd generation now has 48. And as Jim Handy explains, there are few theoretical limits on the number of layers possible. (Just to be clear, these layers are all within the same “wafer” of silicon, otherwise there would be no cost benefit…)

Crucially, since the move to a Z dimension relieves the pressure on the X and Y dimensions, 3D flash is actually free to return to a slightly larger lithography, thus avoiding all of the nasty problems that 2D planar NAND was starting to hit as it approached the 10nm range.
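As a simple (and deliberately optimistic) way of visualising the gain, consider how many cells fit in the same X-Y footprint as the layer count grows. This ignores the larger cell geometry and peripheral circuitry of real 3D NAND, so treat it as an upper bound rather than a genuine density figure:

```python
# Illustrative only: stacking strings vertically multiplies the number of
# cells in the same X-Y footprint. Layer counts are the Samsung V-NAND
# generations mentioned above; the comparison assumes (unrealistically)
# that a 3D cell occupies the same X-Y area as a planar cell.

PLANAR_LAYERS = 1

for generation, layers in (("V-NAND gen 1", 24), ("V-NAND gen 2", 32), ("V-NAND gen 3", 48)):
    multiplier = layers / PLANAR_LAYERS
    print(f"{generation}: {layers} layers -> up to {multiplier:.0f}x the cells in the same footprint")
```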

Charge Trap Flash and 3D TLC

Aside from the vertical stacking, there is another fundamental change with 3D NAND – it no longer uses floating gate transistors (yes, that’s right, the buckets from earlier). Instead, it uses a technology called Charge Trap Flash. I’m not going to attempt an explanation of CTF here, but it was memorably described by Samsung as like using cheese instead of water. So, instead of the buckets from earlier, picture cheese.

This cheese has a number of benefits over floating gate transistors in terms of endurance and power consumption, but it still works in a similar way when it comes to the number of bits that can be stored – in other words SLC, MLC and TLC. However, because of its better endurance, with 3D NAND it is now a realistic proposition to use TLC to replace 2D planar MLC (something my employer Kaminario has already embraced).

This is big news. The economics of 3D TLC NAND flash are revolutionary, with plenty of room for further improvement as the flash fabricators add more layers. Three years ago it looked like NAND flash was a technology in terminal decline, but with 3D techniques the future is bright. We might even get to a point soon where we see the introduction of…

Quad-Level Cell (QLC) Flash

If the endurance of CTF-based 3D NAND is acceptable, it’s not hard to envisage one of the flash fabricators releasing a quadruple-level cell version of the medium. The potential benefit is a further increase in density – four bits per cell instead of three – for roughly the same cost.
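It’s worth being clear about what that fourth bit buys and what it costs – here’s a quick sanity check:

```python
# Adding a fourth bit per cell: density grows by a third, but the number of
# charge states to distinguish doubles, so the margin per state halves.

for name, bits in (("TLC", 3), ("QLC", 4)):
    states = 2 ** bits
    print(f"{name}: {bits} bits/cell, {states} states, margin per state = {1 / states:.4f}")

print(f"Density gain from TLC to QLC: {(4 - 3) / 3:.0%}")   # ~33%
```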

After all, everybody wants more cheese… right?

All Flash Arrays: Hybrid Means Compromise


Sometimes the transition between two technologies is long and complicated. It may be that the original technology is so well established that it’s entrenched in people’s minds as simply “the way things are” – inertia, you might say. It could be that there is more than one form of the new technology to choose from, with smart customers holding back to wait and see which emerges as a stronger contender for their investment. Or it could just be that the newer form of technology doesn’t yet deliver all of the benefits of the legacy version.

The automotive industry seems like a good example here. After over a century of using internal combustion engines, we are now at the point where electric vehicles are a serious investment for manufacturers. However, fully-electric vehicles still have issues to overcome, while there is continued debate over which approach is better: batteries or hydrogen fuel cells. Needless to say, the majority of vehicles on the road today still use what you could call the legacy method of propulsion.

However, one type of vehicle which has been successful in gaining market share is the hybrid electric vehicle. This solution attempts to offer customers the best of both worlds: the lower fuel consumption and claimed environmental benefits of an electric vehicle, but with the range, performance and cost of a fuel-powered vehicle. Not everybody believes it makes sense, but enough do to make it a worthwhile venture for the manufacturers.

Now here’s the interesting thing about hybrid vehicles… the thing that motivated me to write two paragraphs about cars instead of flash arrays… Nobody believes that hybrid electric vehicles are the permanent solution. Everybody knows that hybrids are a transient solution on the way to somewhere else. Nobody at all thinks that hybrid is the end-game. But the people who buy hybrid cars also believe that this state of affairs will not change during the period in which they own the car.

Hybrid Flash Arrays (HFAs)

There are two types of flash storage architecture which could be labelled a hybrid – those where a disk array has been repopulated with flash and those which are designed specifically for the purpose of mixing flash and disk. I’ve talked about naming conventions before – it’s a tricky subject. But for the purposes of this article I am only discussing the latter: systems where the architecture has been designed so that disk and flash co-exist as different tiers of storage. Think along the lines of Nimble Storage, Tegile and Tintri.

Why do this? Well, as with hybrid electric vehicles the idea is to bridge the gap between two technologies (disk and flash) by giving customers the best of both worlds. That means the performance of flash plus its low power, cooling and physical space requirements – combined with the density of disk and its corresponding impact on price. In other words, if disk is cheap but slow while flash is fast but expensive, HFAs are aimed at filling the gap.
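Under the hood, that “best of both worlds” promise depends on the array deciding which data deserves to live on flash. Here’s a deliberately naive sketch of the kind of heuristic involved – a hypothetical promotion policy for illustration, not any vendor’s actual tiering engine:

```python
from collections import Counter

# A deliberately naive tiering heuristic: count accesses per block over an
# interval, promote the hot blocks to flash and demote the rest to disk.
# Hypothetical illustration only - real tiering engines are far more
# sophisticated, but they still have to guess about future access patterns.

PROMOTE_THRESHOLD = 10            # accesses per interval before promotion (assumed)

access_counts: Counter = Counter()
flash_tier: set = set()

def record_read(block_id: str) -> str:
    """Serve a read and note which tier it came from."""
    access_counts[block_id] += 1
    return "flash" if block_id in flash_tier else "disk"

def rebalance() -> None:
    """At the end of each interval, promote hot blocks and demote the rest."""
    for block_id, count in access_counts.items():
        if count >= PROMOTE_THRESHOLD:
            flash_tier.add(block_id)
        else:
            flash_tier.discard(block_id)
    access_counts.clear()         # start observing the next interval
```

The weakness is visible even in a toy like this: yesterday’s hot blocks are not necessarily tomorrow’s, which is the guesswork referred to in the conclusion below.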

Hybrid (adjective)
  1. of mixed character; composed of different elements.
  2. bred as a hybrid from different species or varieties.

As you can see there are a lot of synergies between this trend and that of the electric vehicle. Also, most storage systems are purchased with a five-year refresh cycle in mind, which is not dissimilar to the average length of ownership of a car. But there’s a massive difference: the rate of change in the development of flash memory technology.

In recent years the density of NAND flash has increased by orders of magnitude, especially with the introduction of 3D NAND technology and the subsequent use of Triple-Level Cell (TLC). And when the density goes up, the price comes down – closing the gap between disk and flash. In fact we’re at the point now where Wikibon predicts that “flash … will become a lower cost media than disk … for almost all storage in 2016”:

Image courtesy of Wikibon “Evolution of All-Flash Array Architectures” by David Floyer (2015)

That’s great news for customers – but definitely not for HFA vendors.

Conclusion

And so we reach the root of the problem with HFAs. It’s not just that they are slower than All Flash Arrays. It’s not even that they rely on the guesswork of automatic tiering algorithms to move data between their tiers of disk and flash. It’s simply that their entire existence is predicated on the idea of being a transitory solution designed to bridge a gap which is already closing faster than they can fill it.

If you want proof of this, just look at the three HFA vendors I name checked earlier – all of which are rushing to bring out All Flash versions of their arrays. Nimble Storage is the only one of the three to be publicly listed – and its recent results indicate a strategic rethink may be required.

When it comes to hybrid electric vehicles, it’s true that the concept of mass-owned fully-electric cars still belongs in the future. But when it comes to hybrid flash arrays, the adoption of All-Flash is already happening today. The advice to customers looking to invest in a five-to-seven year storage project is therefore pretty simple: Mind the gap.

Postcards from Storageland: Pure Storage Cancels Christmas


Ho ho ho folks. It’s nearly Christmas Day 2015 and it most definitely is the season to be jolly. It’s also an exciting time in the All Flash Array industry, with NetApp announcing its intended acquisition of SolidFire for a reported $870 million. Merry Christmas to everyone at SolidFire!

Unfortunately those sentiments weren’t shared by Pure Storage’s VP of Product, Matt Kixmoeller, who published a blog post on the subject of the acquisition which seemed a little on the bitter side. A number of commentators on Twitter agreed:

[Screenshot: Twitter comments on the Pure Storage blog post]

This was an unusual slip of the mask for Pure Storage, who normally set the bar very high when it comes to marketing and PR. I don’t know for sure but I imagine that at some point after publication red lights and klaxons started flashing and blaring in the marketing division of Pure’s Mountain View head office. Within a short amount of time the blog post had been rewritten and republished with a suitably humbling note from Mr Kixmoeller:

[Screenshot: Matt Kixmoeller’s note on the revised blog post]

I have to admit I struggle with the use of the word “feedback” here when “criticism” would clearly be more accurate… but I don’t want to turn this post into my own vendor-bashing exercise. I do however think it’s a very interesting opportunity to examine the workings of a marketing department. So here – purely for your edification – are the two versions of the article published so far (courtesy of the Internet Wayback Machine):

NetApp Acquires SolidFire – Flash Strategy Take 5 [initial version published 21 Dec 2015]

NetApp Acquires SolidFire – Flash Strategy Take 5 [revised version published 22 Dec 2015]

 

And here, to make it easy to consume, is a marked up version of the original document with changes in red. There are quite a lot of them! (Click on the image to see a larger version)

Merry Christmas everybody!

[Image: the original Pure Storage article marked up with changes in red]

[Thanks to the Copyscape Compare tool for making this exercise easier…]