Why In-Memory Computing Needs Flash

You might be tempted to think that In-Memory technologies and flash are concepts which have no common ground. After all, if you can run everything in memory, why worry about the performance of your storage? However, the truth is very different: In-Memory needs flash to reach its true potential. Here I will discuss why, and look at how flash memory systems can both enable In-Memory technologies and alleviate some of the need for them.

Note: This is an article I wrote for a different publication recently. The brief was to discuss at a high level the concepts of In-Memory Computing. It doesn’t delve into the level of technical detail I would usually use here – and the article is more Violin marketing-orientated than those I would usually publish on my personal blog, so consider yourself warned… but In-Memory is an interesting subject so I believe the concepts are worth posting about.

In-Memory Computing (IMC) is a high-level term used to describe a number of techniques where data is processed in computer memory in order to achieve better performance. Examples of IMC include In-Memory Databases (which I’ve written about previously here and here), In-Memory Analytics and In-Memory Application Servers, all of which have been named by Gartner as technologies which are being increasingly adopted throughout the enterprise.

To understand why these trends are so significant, consider the volume of data being consumed by enterprises today: in addition to traditional application data, companies have an increasing exposure to – and demand for – data from Gartner’s “Nexus of Forces”: mobile, social, cloud and big data. As more and more data becomes available, competitive advantages can be won or lost through the ability to serve customers, process metrics, analyze trends and compute results. The time taken to convert source data to business-valuable output is the single most important differentiator, with the ultimate (and in my view unattainable – but that’s the subject for another blog post) goal being output that is delivered in real-time.

But with data volumes increasing exponentially, that performance must be delivered by a solution which is also highly scalable. The control of costs is equally important – a competitive advantage can only be gained if the solution adds more value than it subtracts through its total cost of ownership.

How does In-Memory Computing Deliver Faster Performance?

The basic premise of In-Memory Computing is that processing data in memory is faster than processing data held on storage. To understand what this means, first consider the basic elements in any computer system: CPU (Central Processing Unit), Memory, Storage and Networking. The CPU is responsible for carrying out instructions, whilst memory and storage are locations where data can be stored and retrieved. Along similar lines, networking devices allow for data to be sent to or received from remote destinations.

Memory is used as a volatile location for storing data, meaning that the data only remains in this location while power is supplied to the memory module. Storage, in contrast, is used as a persistent location for storing data, i.e. once written, data will remain even if power is interrupted. The question of why these two differing locations are used together in a computer system is the single most important factor to understand about In-Memory Computing: memory is used to drive up processor utilization.

Modern CPUs can perform many billions of instructions per second. However, if data must be written to or retrieved from traditional (i.e. disk) storage, this results in a delay known as a “wait”. A modern disk storage system performs an input/output (I/O) operation in a time measured in milliseconds. While this may not initially seem long, when considered from the perspective of the CPU clock cycle, where operations are measured in nanoseconds or less, it is clear that time spent waiting on storage will have a significant negative impact on the total time required to complete a task. In effect, the CPU is unable to continue working on the task at hand until the storage system completes the I/O, potentially resulting in periods of inactivity for the CPU. If the CPU is forced to spend time waiting rather than working, its efficiency is effectively reduced.

Unlike disk storage, which is based on mechanical rotating magnetic disks, memory consists of semiconductor electronics with no moving parts – and for this reason access times are orders of magnitude faster. Modern computer systems use Dynamic Random Access Memory (DRAM) to store volatile copies of data in a location where they can be accessed with wait times of approximately 100 nanoseconds. The simple conclusion is therefore that memory allows CPUs to spend less time waiting and more time working, which can be considered as an increase in CPU efficiency.
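To put some rough numbers on this, here is a quick back-of-the-envelope sketch in Python. The latency and work figures are illustrative assumptions of mine rather than measurements from any particular system:

    # How much of the elapsed time does the CPU spend doing useful work when
    # each unit of work has to wait for one data access? All figures assumed.
    DISK_IO_WAIT = 5e-3    # ~5 milliseconds for a random disk I/O (assumed)
    DRAM_WAIT = 100e-9     # ~100 nanoseconds for a DRAM access
    CPU_WORK = 1e-6        # 1 microsecond of useful CPU work per access (assumed)

    def cpu_efficiency(wait_time, work_time=CPU_WORK):
        """Fraction of elapsed time the CPU spends working rather than waiting."""
        return work_time / (work_time + wait_time)

    print(f"Waiting on disk: CPU busy {cpu_efficiency(DISK_IO_WAIT):.3%} of the time")
    print(f"Waiting on DRAM: CPU busy {cpu_efficiency(DRAM_WAIT):.1%} of the time")

Even with generous assumptions the picture is stark: a CPU gated by disk I/O is busy for a fraction of a percent of the elapsed time, while the same workload against DRAM keeps it busy for the vast majority of it.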

In-Memory Computing techniques seek to extract the maximum advantage out of this conclusion by increasing the efficiency of the CPU to its limit. By removing waits for storage where possible, the CPU can execute instructions and complete tasks with the minimum of time spent waiting on I/O.

While IMC technologies can offer significant performance gains through this efficient use of CPU, the obvious drawback is that data is entirely contained in volatile memory, leading to the potential for data loss in the event of an interruption to power. Two solutions exist to this problem: accepting that all data can be lost, or adding a “persistence layer” in which all data changes are recorded so that the data can be reconstructed in the event of an outage. Since only the latter option guarantees business continuity, the reality of most IMC systems is that data must still be written to storage, limiting the potential gains and introducing additional complexity as high availability and disaster recovery solutions are added.
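To illustrate what a persistence layer means in practice, here is a deliberately simplified toy sketch in Python – my own illustration, not any vendor’s actual implementation – of an in-memory store that records every change to an append-only log before applying it:

    import json
    import os

    class TinyInMemoryStore:
        """Toy in-memory key-value store with a write-ahead persistence log."""

        def __init__(self, log_path="store.log"):
            self.log_path = log_path
            self.data = {}          # the in-memory copy of all data
            self._replay_log()      # rebuild state from the persistence layer

        def put(self, key, value):
            record = json.dumps({"key": key, "value": value})
            with open(self.log_path, "a") as log:
                log.write(record + "\n")
                log.flush()
                os.fsync(log.fileno())   # don't acknowledge until it is on storage
            self.data[key] = value       # only now update the in-memory copy

        def get(self, key):
            return self.data.get(key)    # reads are served purely from memory

        def _replay_log(self):
            if not os.path.exists(self.log_path):
                return
            with open(self.log_path) as log:
                for line in log:
                    record = json.loads(line)
                    self.data[record["key"]] = record["value"]

The fsync() call on every change is exactly where the persistence layer limits the gains described above: however fast the in-memory structures are, every acknowledged write is still gated by the latency of the underlying storage.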

What are the Barriers to Success with In-Memory Computing?

The main barriers to success in IMC are the maturity of IMC technologies, the cost of adoption and the performance impact associated with adding a persistence layer on storage. Gartner reports that IMC-enabling application infrastructure is still relatively expensive, while additional factors such as the complexity of design and implementation, as well as the new challenges associated with high availability and disaster recovery, are limiting adoption. Another significant challenge is the misperception among users that data stored using an In-Memory technology is not safe due to the volatility of DRAM. It must also be considered that, because many IMC products are new to the market, popular BI and data-manipulation tools are yet to add support for them.

However, as IMC products mature and the demand for performance and scalability increases, Gartner expects the continuing success of the NAND flash industry to be a significant factor in the adoption of IMC as a mainstream solution, with flash memory allowing customers to build IMC systems that are more affordable and have a greater impact. 

NAND Flash Allows for New Possibilities

The introduction of NAND flash memory as a storage medium has caused a revolution in the storage industry and is now opening up new opportunities in areas such as databases and analytics. NAND flash is a persistent form of semiconductor memory which combines the speed of memory with the persistence capabilities of traditional storage. By offering speeds which are orders of magnitude faster than traditional disk systems, Violin Memory flash memory arrays allow for new possibilities. Here are just two examples:

First of all, In-Memory Computing technologies such as In-Memory Databases no longer need to be held back by the performance of the persistence layer. By providing sustained ultra-low latency storage, Violin Memory enables customers to achieve previously unattainable levels of CPU efficiency when using In-Memory Computing.

Secondly, for customers who are reluctant to adopt In-Memory Computing technologies for their business-critical applications, the opportunity now exists to remove the storage bottleneck which initiated the original drive to adopt In-Memory techniques. If IMC is the concept of storing entire sets of data in memory to achieve higher processor utilization, it can be considered equally beneficial to retain the data on the storage layer if that storage can now perform at the speed of flash memory. Violin Memory flash memory arrays are able to harness the full potential of NAND flash memory and allow users of existing non-IMC technologies to experience the same performance benefits without the cost, risk and disruption of adopting an entirely new approach.

More on Exadata X3 “Database In-Memory” (but not by me)

Not a real post – but a recommendation… Kevin Closson, former Performance Architect within Oracle’s Exadata development organisation, has (finally!) written a blog post about the new Exadata X3 model with its claimed “Database In-Memory” marketing title.

For the history of Exadata click here. But more importantly, for the insider view, click here:

Oracle Exadata X3 Database In-Memory Machine: Timely Thoughtful Thoughts For The Thinking Technologist

Recommended reading…

Technology Hype Cycles

I had a great idea this week. It started because I wanted to write about Business Intelligence and the benefits of flash memory for Decision Support Systems, but realised that it’s hard to mention those subjects these days without referencing Unstructured Data. That got me thinking about the hype surrounding Big Data and the way in which trends such as Cloud and In Memory Databases work.

Let’s consider the Y2K bug as an example. Back in the 1990s it became apparent that systems which had not been coded to account for the Y2K issue might fail, so steps were taken to investigate and fix potential issues. This was clearly a Good Thing. However, as the hype surrounding Y2K exploded, every man and his dog felt the need to join the party, no matter how tenuous their connection, until it reached the point where (as I’ve mentioned before) the school that I attended when I was young received a letter from a “Y2K conformance specialist” offering to check the football pitch for “Y2K compliance”. I thought about these situations and then sketched a graph, very much like the one above, showing the hype rising inexorably, then falling away as everyone got bored of the idea (or saw through the charlatans), then finally rising slightly to reach a plateau.

I was very pleased with myself at this point and decided to write a blog showing the world how clever I was. However, a five second Google search was enough to pop that particular bubble of hubris, because it turns out that Gartner not only thought of this years ago but even publish their own Technology Hype Cycle every year. So instead of being clever it turns out I am just ignorant. No wonder Gartner’s net worth is significantly higher than mine. (And I bet you knew about this too didn’t you? So why didn’t you tell me? It could have saved us both a lot of time…)

Another thing I have to credit Gartner for is the fantastic names used to describe the phases of the cycle:

  1. Technology Trigger
  2. Peak of Inflated Expectations
  3. Trough of Disillusionment
  4. Slope of Enlightenment
  5. Plateau of Productivity

I love these names. The Peak of Inflated Expectations? I’m sure I’ve climbed that peak at some point. The Trough of Disillusionment? Wallowed in it. The Slope of Enlightenment? I’m hoping to climb it one day.

Anyway, now that we’ve established I am no substitute for Gartner, let’s have a look at the current Gartner 2012 Emerging Technologies Hype Cycle graph (courtesy of Forbes):

Gartner’s 2012 Emerging Technologies Hype Cycle

There are a couple of interesting things to note from this graph. Firstly, Big Data appears to be entering the phase known as the peak of inflated expectations. That sounds about right to me. Note that this is not the same thing as suggesting Big Data is a waste of time and should be ignored. Far from it. This phase is identified as the period when “a frenzy of publicity typically generates over-enthusiasm and unrealistic expectations”. My reading of this is that some people are far too keen to throw away the benefits of relational data and transactional consistency in order to embrace a new trend which they feel their organisation should be following.

The other thing that really interests me is the position of In-Memory Database Management Systems, just beginning to accelerate downwards on the big ski-slope towards the trough of disillusionment where it will join In-Memory Analytics. This, lest we forget, is identified as the point where technologies “fail to meet expectations and quickly become unfashionable”. Gartner also indicates that they believe it will be 2-5 years before these In-Memory technologies reach the plateau of productivity, where their benefits “become widely demonstrated and accepted”. Again, it’s important to emphasise that a technology located in the trough of disillusionment is not a bad technology nor one that should be avoided; it is merely one where any potential suitor should be extremely careful to ignore the marketing hype and concentrate on the facts.

Interpretation

Now, I am fully aware that we all see what we want to see, so you may disagree with my perception here. But for the In-Memory technologies I see a correlation between the over-hyping of the term In-Memory Database and the redefinition of Oracle Exadata X3 as a “Database In-Memory Machine”. I also see a correlation between the gradual maturity of SAP HANA and the suggestion that In-Memory technologies will achieve the plateau of productivity within 2-5 years. I am not convinced that In-Memory technologies will become “unfashionable” but I do believe that there is a danger users (and potential users) will become sceptical about the claimed benefits of IMDBs. As more vendors attempt to portray their products as In-Memory I feel this is inevitable.

Maybe you agree, or maybe you see it differently; in either case I’d love to hear your views. In the meantime it’s back to the drawing board for me, to see if my latest idea will make me a million. I’ve decided to place vendors on a sort of square graph and divide it into quarters, which I am going to call the Magic Quadrangle. I just need to check if it’s been done before…

[Legal Notice: I am (unfortunately) not the inventor of either the Gartner Technology Hype Cycle or the Gartner Magic Quadrant. All copyrights and intellectual property around the Technology Hype Curve and the Magic Quadrant are therefore owned by Gartner, Inc. If you find the content discussed here interesting then I urge you to go to www.gartner.com and purchase a subscription so that you can avoid being as spectacularly uninformed as I was prior to researching this article.]

In Memory Databases: HANA, Exadata X3 and Flash Memory (Part 2)

In the first part of this blog series on In Memory Databases (IMDBs) I talked about the definition of “memory” and found it surprisingly hard to pin down. There was no doubt that Dynamic Random Access Memory (DRAM), such as that found in most modern computers, fell into the category of memory whilst disk clearly did not. The medium which caused the problem was NAND flash memory, such as that found in consumer devices like smart phones, tablets and USB sticks or enterprise storage like the flash memory arrays made by my employer Violin Memory.

There is no doubt in my mind that flash memory is a type of memory – otherwise we would have to have a good think about the way it was named. My doubts are along different lines: if a database is running on flash memory, can it be described as an IMDB? After all, if the answer is yes then any database running on Violin Memory is an In Memory Database, right?

What Is An In Memory Database?

As always let’s start with the stupid questions. What does an IMDB do that a non-IMDB database does not do? If I install a regular Oracle database (for example) it will have a System Global Area (SGA) and a Program Global Area (PGA), both of which are areas set aside in volatile DRAM in order to contain cached copies of data blocks, SQL cursors and sorting or hashing areas. Surely that’s “in-memory” in anyone’s definition? So what is the difference between that and, for example, Oracle TimesTen or SAP HANA?

Let’s see if the Oracle TimesTen documentation can help us:

“Oracle TimesTen In-Memory Database operates on databases that fit entirely in physical memory”

That’s a good start. So with an IMDB, the whole dataset fits entirely in physical memory. I’m going to take that sentence and call it the first of our fundamental statements about IMDBs:

IMDB Fundamental Requirement #1:
In Memory Databases fit entirely in physical memory.

But if I go back to my Oracle database and ensure that all of the data fits into the buffer cache, surely that is now an In Memory Database?

Maybe an IMDB is one which has no physical files? Of course that cannot be true, because memory is (or can be) volatile, so some sort of persistence layer is required if the data is to be retained in the event of a power loss. Just like a “normal” database, IMDBs still have to have datafiles and transaction logs located on persistent storage somewhere (both TimesTen and SAP HANA have checkpoint and transaction logs located on filesystems).
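As a rough illustration of how such a persistence layer lets an in-memory dataset be rebuilt after a power loss, here is a toy sketch in Python – my own simplification, not how TimesTen or HANA actually implement it – of checkpointing plus transaction log replay:

    import json
    import os

    def take_checkpoint(data, checkpoint_path, log_path):
        """Persist a full snapshot of the in-memory data, then truncate the log."""
        tmp = checkpoint_path + ".tmp"
        with open(tmp, "w") as f:
            json.dump(data, f)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, checkpoint_path)   # atomically switch to the new checkpoint
        open(log_path, "w").close()        # older log entries are no longer needed

    def recover(checkpoint_path, log_path):
        """Rebuild the dataset in memory: load the last checkpoint, replay the log."""
        data = {}
        if os.path.exists(checkpoint_path):
            with open(checkpoint_path) as f:
                data = json.load(f)
        if os.path.exists(log_path):
            with open(log_path) as f:
                for line in f:
                    record = json.loads(line)
                    data[record["key"]] = record["value"]
        return data

The point of the checkpoint is simply to bound how much of the log has to be replayed at startup; everything still ends up entirely in memory once recovery completes.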

So hold on, I’m getting dangerously close to the conclusion that an IMDB is simply a normal DB which cannot grow beyond the size of the chunk of memory it has been allocated. What’s the big deal – why would I want that over, say, a standard RDBMS?

Why is an In-Memory Database Fast?

Actually that question is not complete, but long questions do not make good section headers. The question really is: why is an In Memory Database faster than a standard database whose dataset is entirely located in memory?

Back to our new friend the Oracle TimesTen documentation, with the perfectly-entitled section “Why is Oracle TimesTen In-Memory Database fast?”:

“Even when a disk-based RDBMS has been configured to hold all of its data in main memory, its performance is hobbled by assumptions of disk-based data residency. These assumptions cannot be easily reversed because they are hard-coded in processing logic, indexing schemes, and data access mechanisms.

TimesTen is designed with the knowledge that data resides in main memory and can take more direct routes to data, reducing the length of the code path and simplifying algorithms and structure.”

This is more like it. So an IMDB is faster than a non-IMDB because there is less code necessary to manipulate data. I can buy into that idea. Let’s call that the second fundamental statement about IMDBs:

IMDB Fundamental Requirement #2:
In Memory Databases are fast because they do not have complex code paths for dealing with data located on storage.
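To make that second requirement a little more concrete, here is a deliberately over-simplified – and entirely hypothetical – contrast in Python between the read path of a disk-based engine, which must always ask “is this block cached?”, and that of an engine which assumes all data is resident in memory:

    # Disk-based engine: every access must consider whether the block is cached.
    def read_row_disk_based(buffer_cache, storage, table, row_id):
        block_id = (table, row_id // 100)         # which block holds this row?
        block = buffer_cache.get(block_id)
        if block is None:                         # cache miss...
            block = storage.read_block(block_id)  # ...issue an I/O (slow path)
            buffer_cache[block_id] = block        # ...and manage the cache
        return block[row_id % 100]

    # In-memory engine: data is assumed to be resident, so the path collapses.
    def read_row_in_memory(tables, table, row_id):
        return tables[table][row_id]              # a direct in-memory lookup

The point is not the number of lines as such, but that the in-memory path carries no buffer management, no cache-miss handling and no I/O logic at all.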

I think this is probably a sufficient definition for an IMDB now. So next let’s have a look at the different implementations of “IMDBs” available today and the claims made by the vendors.

Is My Database An In Memory Database?

Any vendor can claim to have a database which runs in memory, but how many can claim that theirs is an In Memory Database? Let’s have a look at some candidates and subject them to analysis against our IMDB fundamentals.

1. Database Running in DRAM – e.g. SAP HANA

I have no experience of Oracle TimesTen but I have been working with SAP HANA recently so I’m picking that as the example. In my opinion, HANA (or NewDB as it was previously known) is a very exciting database product – not especially because of the In Memory claims, but because it was written from the ground up in an effort to ignore previous assumptions of how an RDBMS should work. In contrast, alternative RDBMS such as Oracle, SQL Server and DB2 have been around for decades and were designed with assumptions which may no longer be true – the obvious one being that storage runs at the speed of disk.

The HANA database runs entirely in DRAM on Intel x86 processors running SUSE Linux. It has a persistent layer on storage (using a filesystem) for checkpoint and transaction logs, but all data is stored in DRAM along with an additional allocation of memory for hashing, sorting and other work area stuff. There are no code paths intended to decide if a data block is in memory or on disk because all data is in memory. Does HANA meet our definition of an IMDB? Absolutely.

What are the challenges for databases running in DRAM? One of the main ones is scalability. If you impose a restriction that all data must be located in DRAM then the amount of DRAM available is clearly going to be important. Adding more DRAM to a server is far more intrusive than adding more storage, plus servers only have a limited number of locations on the system bus where additional memory can be attached. Price is important, because DRAM is far more expensive than storage media such as disk or flash. High Availability is also a key consideration, because data stored in memory will be lost when the power goes off. Since DRAM cannot be shared amongst servers in the same way as networked storage, any multiple-node high availability solution has to have some sort of cache coherence software in place, which increases the complexity and moves the IMDB away from the goal of IMDB Fundamental #2.

Going back to HANA, SAP have implemented the ability to scale up (adding more DRAM – despite Larry’s claims to the contrary, you can already buy a 100TB HANA database system from IBM) as well as to scale out by adding multiple nodes to form a cluster. It is going to be fascinating to see how the Oracle vs SAP HANA battle unfolds. At the moment 70% of SAP customers are running on Oracle – I would expect this number to fall significantly over the next few years.

2. Database Running on Flash Memory – e.g. on Violin Memory

Now this could be any database, from Oracle through SQL Server to PostgreSQL. It doesn’t have to be Violin Memory flash either, but this is my blog so I get to choose. The point is that we are talking about a database product which keeps data on storage as well as in memory, therefore requiring more complex code paths to locate and manage that data.

The use of flash memory means that storage access times are many orders of magnitude faster than disk, resulting in exceptional performance. Take a look at recent server benchmark results and you will see that Cisco, Oracle, IBM, HP and VMware have all been using Violin Memory flash memory arrays to set new records. This is fast stuff. But does a (normal) database running on flash memory meet our fundamental requirements to make it an IMDB?

First there is the idea of whether it is “memory”. As we saw before this is not such a simple question to answer. Some of us (I’m looking at you Kevin) would argue that if you cannot use memory functions to access and manipulate it then it is not memory. Others might argue that flash is a type of memory accessed using storage protocols in order to gain the advantages that come with shared storage, such as redundancy, resilience and high availability.

Luckily the whole question is irrelevant because of our second fundamental requirement, which is that the database software does not have complex code paths for dealing with blocks located on storage. Bingo. So running an Oracle database on flash memory does not make it an In Memory Database, it just makes it a database which runs at the speed of flash memory. That’s no bad thing – the main idea behind the creation of IMDBs was to remove the bottlenecks created by disk, so running at the speed of flash is a massive enhancement (hence those benchmarks). But using our definitions above, Oracle on flash does not equal IMDB.

On the other hand, running HANA or some other IMDB on flash memory clearly has some extra benefits because the checkpoint and transactional logs will be less of a bottleneck if they write data to flash than if they were writing to disk. So in summary, the use of flash is not the key issue, it’s the way the database software is written that makes the difference.

3. Database Accessing Remote DRAM and Flash Memory: Oracle Exadata X3

Why am I talking about Oracle Exadata now? Because at the recent Oracle OpenWorld a new version of Exadata was announced, with a new name: the Oracle Exadata X3 Database In-Memory Machine. Regular readers of my blog will know that I like to keep track of Oracle’s rebranding schemes to monitor how the Exadata product is being marketed, and this is yet another significant renaming of the product.

According to the press release, “[Exadata] can store up to hundreds of Terabytes of compressed user data in Flash and RAM memory, virtually eliminating the performance overhead of reads and writes to slow disk drives”. Now that’s a brave claim, although to be fair Oracle is at least acknowledging that this is “Flash and RAM memory”. On the other hand, what’s this about “hundreds of Terabytes of compressed user data”? Here’s the slide from the announcement, with the important bit helpfully highlighted in red (by Oracle not me):

Also note the “26 Terabytes of DRAM and Flash in one Rack” line. Where is that DRAM and Flash? After all, each database server in an Exadata X3-2 has only 128GB DRAM (upgradeable to 256GB) and zero flash. The answer is that it’s on the storage grid, with each storage cell (there are 14 in a full rack) containing 1.6TB flash and 64GB DRAM. But the database servers cannot directly address this as physical memory or block storage. It is remote memory, accessed over Infiniband with all the overhead of IPC, iDB, RDS and Infiniband ZDP. Does this make Exadata X3 an In Memory Database?

I don’t see how it can. The first of our fundamental requirements was that the database should fit entirely in memory. Exadata X3 does not meet this requirement, because data is still stored on disk. The DRAM and Flash in the storage cells are only levels of cache – at no point will you have your entire dataset contained only in the DRAM and Flash*, otherwise it would be pretty pointless paying for the 168 disks in a full rack – even more so because Oracle Exadata Storage Licenses are required on a per disk basis, so if you weren’t using those disks you’d feel pretty hard done by.

[*see comments section below for corrections to this statement]
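As a rough sanity check on where that “26 Terabytes of DRAM and Flash in one Rack” figure comes from, here is a quick calculation using the per-component numbers above, together with my assumption of 8 database servers (each with the 256GB upgrade) in a full X3-2 rack:

    # Back-of-the-envelope check of the "26TB of DRAM and Flash" claim (assumed config).
    cell_flash = 14 * 1.6      # TB of flash across 14 storage cells
    cell_dram = 14 * 0.064     # TB of DRAM across 14 storage cells
    db_dram = 8 * 0.256        # TB of DRAM across 8 database servers (assumed, maxed out)

    total = cell_flash + cell_dram + db_dram
    print(f"{cell_flash:.1f} TB cell flash + {cell_dram:.1f} TB cell DRAM "
          f"+ {db_dram:.1f} TB server DRAM = {total:.1f} TB")

In other words, the overwhelming majority of that figure is flash and DRAM sitting out in the storage grid rather than memory the database servers can address directly – and it is in any case far smaller than the “hundreds of Terabytes of compressed user data” the press release describes.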

But let’s forget about that for a minute and turn our attention to the second fundamental requirement, which is that the database is fast because it does not have complex code paths designed to manage data located both in memory or on disk. The press release for Exadata X3 says:

“The Oracle Exadata X3 Database In-Memory Machine implements a mass memory hierarchy that automatically moves all active data into Flash and RAM memory, while keeping less active data on low-cost disks”

This is more complexity… more code paths to handle data, not less. Exadata is managing data based on its usage rate, moving it around in multiple different levels of memory and storage (local DRAM, remote DRAM, remote flash and remote disks). Most of this memory and storage is remote to the database processes representing the end users and thus it incurs network and communication overheads. What’s more, to compound that story, the slide up above is talking about compressed data, so that now has to be uncompressed before being made available to the end user, navigating additional code paths and incurring further overhead. If you then add the even more complicated code associated with Oracle RAC (my feelings on which can be found here) the result is a multi-layered nest of software complexity which stores data in many different places.

Draw your own conclusions, but in my opinion Exadata X3 does not meet either of our requirements to be defined as an In Memory Database.

Conclusion

“In Memory” is a buzzword which can be used to describe a multitude of technologies, some of which fit the description better than others. Flash memory is a type of memory, but it is also still storage – whereas DRAM is memory accessed directly by the CPU. I’m perfectly happy calling flash memory a type of “memory”, even referring to it performing “at the speed of memory” as opposed to the speed of disk, but I cannot stretch to describing databases running on flash as “In Memory Databases”, because I believe that the only In Memory Databases are the ones which have been designed and written to be IMDBs from the ground up.

Anything else is just marketing…

Thoughts on In Memory Databases (Part 1)

Everyone is talking about In Memory at the moment. On blogs, in tweets, in the press, in the Oracle marketing department, in books by SAP employees, even my Violin colleagues… it’s everywhere. What can I possibly add that will be of any value?

Well, how about owning up to something: I find myself in a bit of a quandary on this subject. On the one hand it’s a new buzzword, which means that a) it’s got everyone’s attention, and b) many people with their own agenda will seek to use it to their advantage… but on the other hand, given the nature of my employment (I work for Violin Memory, purveyors of flash memory systems), it seems like something we ought to be talking about.

As anyone who works in the IT industry knows (and perhaps it’s the same in other industries), we love a buzzword. Cloud, Analytics, Big Data, In Memory, Transformation… all of these phrases have been used at one time or another to try and wring cash out of customers who may or may not need the services and products they imagine the phrase represents. Even back at the end of the last millennium consultants worldwide were making huge amounts of money out of exploiting the phrase “Y2K”, some with more honourable intentions than others. I remember my old school received a letter from a “Y2K conformance specialist” informing them that this person could visit and inspect their football pitches to ensure they were “Y2K compliant”… (true story!)

So if buzzwords are prone to misuse, maybe the first thing we need to do is explore what “In Memory” really means? In fact, rewind a step – what do we mean when we say “Memory”?

What Is Memory?

It’s a basic question, but a good definition is surprisingly hard to pin down. Clearly this is an IT blog so (despite the deceiving picture above) I am only interested in talking about computer memory rather than the stuff in my head which stops working after I drink tequila. The definition of this term in the Free Online Dictionary of Computing is:

memory: These days, usually used synonymously with Random Access Memory or Read-Only Memory, but in the general sense it can be any device that can hold data in machine-readable format.

So that’s any device that can hold data in machine-readable format. So far so ambiguous. And of course that is the perfect situation for any would-be freeloader to exploit, since the less well-defined a definition is, the more room there is to manoeuvre any product into position as a candidate for that description.

Here’s what most people think of when they talk about computer memory… DRAM:

Dynamic Random Access Memory (DRAM)

This is Dynamic Random Access Memory – and it’s most likely what’s in your laptop, your desktop and your servers. You know all about this stuff – it’s fast, it’s volatile (i.e. the data stored on it is lost when the power goes off) and it’s expensive compared to, say… disk, of which many orders of magnitude more capacity is available at the same price point.

But now there is a new type of “memory” on the market, NAND flash memory. Actually it’s been around for over 25 years (read this great article for more details) but it is only now that we are seeing it being adopted en masse in data-centres, as well as being prevalent in consumer devices – the chances are your phone contains NAND flash, as does your tablet (if you have one) and maybe your computer if you are fortunate enough to have an SSD drive in it.

Toshiba NAND Flash

Flash memory, unlike DRAM, is persistent. That means when the power goes, the data remains. Flash access speeds are measured in microseconds – let’s say around 100 microseconds for a single random access. That’s significantly faster than disk, which is measured in (multiple) milliseconds – but still slower than DRAM, for which you would expect an access in around 100 nanoseconds. Flash is available in many forms, from USB devices and SSDs which fit into normal hard drive bays, through PCIe cards which connect directly to the system bus, and on to enterprise-class storage arrays such as those made by my employer, like the Violin Memory 6000 series array.
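Putting those three figures side by side – the flash and DRAM numbers are the ones quoted above, while the disk figure is an assumption of mine for a random I/O on a rotating drive:

    # Illustrative access latencies in nanoseconds (assumed round numbers).
    dram_ns = 100             # ~100 nanoseconds for a DRAM access
    flash_ns = 100_000        # ~100 microseconds for a random flash access
    disk_ns = 10_000_000      # ~10 milliseconds for a random disk access (assumed)

    print(f"Flash is ~{flash_ns // dram_ns}x slower than DRAM")     # ~1,000x
    print(f"Disk is ~{disk_ns // flash_ns}x slower than flash")     # ~100x
    print(f"Disk is ~{disk_ns // dram_ns:,}x slower than DRAM")     # ~100,000x

So flash sits roughly three orders of magnitude away from DRAM and roughly two away from disk, which is why it gets described as performing “at the speed of memory” relative to disk even though DRAM remains much faster.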

Is flash a type of memory? It certainly fits the dictionary description above. But if you run something on flash, can you describe that something as now running “in memory”? You could argue the point either way I suppose.

Since we don’t seem to be doing well with defining what memory is, let’s change tack and talk about what it definitely isn’t. And that’s simple, because it definitely isn’t disk.

Disk

Whether it’s part of the formal definition or not, almost anyone would assume that memory is fast and non-mechanical, i.e. it has no moving parts. It is all semiconductors and silicon, not motors and magnets. A hard disk drive, with its rotating platters and moving actuator arm, is about the most un-memory-like way you can find to store your data, short of putting it on a big reel of tape. And, consistent with our experience of memory versus non-memory devices, it’s slow. In fact, every disk array vendor in the industry stuffs their enterprise disk arrays full of DRAM caches to make up for the slow performance of disk. So memory is something they use to mask the speed of their non-memory-based storage. Hang on then, if you have a small enough dataset so that the majority of your disk reads are coming from your disk array cache, does that mean you are running “in memory” too? No of course not, but the ambiguity is there to be exploited.

Primary Storage versus Secondary Storage

Since we are struggling with a formal definition of memory, perhaps another way to look at it is in terms of primary storage and secondary storage. The main difference here is that primary storage is directly addressable by the CPU, whereas secondary storage is addressed through input/output channels. Is that a good way of distinguishing memory from non-memory? It certainly works with DRAM, which ends up in the primary storage category, as well as disk, which ends up in the secondary storage category. But with flash it is a less successful differentiator.

The first problem is that as previously mentioned flash is available in multiple different forms. PCIe flash cards are directly addressable by the CPU whilst SSDs slot into hard drive bays and are accessed using storage protocols. In fact, just looking at the Violin Memory 6000 series array around which my day job revolves, connectivity options include PCIe direct attached, fibre-channel and Infiniband, meaning it could easily fit into either of the above categories.

What’s more, if you think of primary storage as somehow being faster than secondary storage, the Infiniband connectivity option of the Violin array is only about 50-100 microseconds slower than the PCIe version, yet brings a wealth of additional benefits such as high availability. It’s hard to think of a reason why you would choose the direct-attached version over the one with Infiniband.

Volatile versus Persistent

Maybe this is a better method of differentiating? Perhaps we can say that memory is that which is volatile, i.e. data stored on it will be lost when power is no longer available. The alternative is persistent storage, where data exists regardless of the power state. Does that make sense?

Not really. Think about your traditional computer, whether it’s a desktop or server. You have four high-level resources: CPU to do the work, network to communicate with the outside world, disk to store your data (the persistence layer)… and memory. Why do you have memory in the form of DRAM? Why commit extra effort to managing a volatile store of data, much of which is probably duplicated on the persistence layer?

DRAM exists to drive up CPU utilisation. Processor speeds have famously doubled every couple of years or so. Network speeds have also increased drastically since the days of the 56k modem I used to struggle with in the 1990s. Disk hasn’t – nowhere near in fact. Sure, capacity has increased – and speeds have slowly struggled upwards until they reached the limit of the 15k RPM drive, but in comparison to CPU improvements disk has been absolutely stagnant. So your computer is stuffed full of DRAM because, if it weren’t, the processors would spend all their time waiting for I/O instead of doing any work. By keeping as much data in volatile DRAM as possible, the speed of access is increased by around five orders of magnitude, resulting in CPUs which can spend more time working and less time waiting.

In the world of flash memory things are slightly different. DRAM is still necessary to maintain CPU utilisation, because flash is around two-and-a-half to three orders of magnitude slower than DRAM. But does it make sense to assume that “memory” is therefore only applicable to volatile data storage? What if a hypothetical persistent flash medium arrived with DRAM access speeds? Would we refuse to say that something running on this magic new media was running “In Memory”?

I don’t have an answer, only an opinion. My opinion is that memory is solid-state semiconductor-based storage and can be volatile or persistent. DRAM is a type of memory, but not the only type. Flash is a type of memory, while disk clearly is not.

So with that in mind, in the next part of this blog series I’m going to look at In Memory Database technologies and describe what I see as the three different architectures of IMDB that are currently available. As a taster, one of them is SAP HANA, one of them involves Violin Memory and the third one is the new Oracle Exadata X3 “Database In-Memory Machine”. And as a conclusion I will have to make a decision about the quandary I mentioned at the start of this article: should we at Violin claim a piece of the “In Memory” pie?

<Part Two of this blog series is located here>