Database Consolidation

This page acts as an entry point for a blog series I wrote on database consolidation. The series consists of four articles, listed here in order:

Part 1 – Business Drivers and Technical Challenges

Part 2 – Shared Infrastructure Design Choices

Part 3 – It’s All About Capacity

Part 4 – Flash Memory Makes The Difference

It’s a big subject, so they are long articles. If you want to skip to the punchline, I’ll reprint it here:

Summary

Database consolidation on flash memory – whether through a shared platform or through virtualisation technologies – allows for more efficient utilisation of resources. Specifically, it:

  • Provides the necessary storage capacity without having to overprovision expensive disk arrays, therefore reducing operational expenditures such as power, cooling and data centre footprint
  • Delivers more I/O operations per second, allowing more databases to be consolidated per platform (see the sizing sketch after this list)
  • Provides not only better latency but also protection from unpredictable latency when experiencing peak loads
  • Allows for a reduction in memory requirements, meaning that more instances can fit in the same amount of physical memory
  • Increases the utilisation of a system’s CPUs by reducing the amount of time spent waiting on I/O
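
To make the second point concrete, here is a minimal back-of-envelope sketch of consolidation density when storage I/O is the limiting factor. The IOPS figures and the per-database demand below are illustrative assumptions chosen purely to show the arithmetic; they are not measurements from the articles.

```python
# Back-of-envelope consolidation density: how many database instances can a
# single platform host before the storage layer becomes the bottleneck?
# All figures are illustrative assumptions, not measured values.

def max_databases(platform_iops: int, iops_per_database: int) -> int:
    """Number of databases the storage layer can support at peak load."""
    return platform_iops // iops_per_database

# Hypothetical peak I/O demand for one consolidated database
IOPS_PER_DATABASE = 5_000

# Illustrative capability figures: a disk array sized for capacity rather
# than performance versus an all-flash array of similar usable capacity
DISK_ARRAY_IOPS = 50_000
FLASH_ARRAY_IOPS = 500_000

print("Disk array :", max_databases(DISK_ARRAY_IOPS, IOPS_PER_DATABASE), "databases")
print("Flash array:", max_databases(FLASH_ARRAY_IOPS, IOPS_PER_DATABASE), "databases")
# Example output:
# Disk array : 10 databases
# Flash array: 100 databases
```

The same arithmetic works in reverse: given a target number of databases, it shows how heavily a disk array would need to be overprovisioned to deliver the required I/O, which is the capacity argument made in Part 3.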

The conclusion therefore is that consolidating on flash memory increases agility by allowing for a greater density of databases to be achieved on the underlying infrastructure; it reduces risk by offering better protection against peak capacity issues; and it reduces cost in comparison to disk by requiring less power, less cooling and less of that valuable space in the data centre.

More agility, less risk, lower cost. Now who wouldn’t want that?
