Exadata Hardware

Disclaimer: This post was written in 2012 and covers the Exadata X2 model. I have no plans to update it to cover the later models, so please be aware that the information below may no longer be up to date.

The Exadata X2 model comes in two versions, one of which is available in multiple sizes:

  • Exadata X2-2   – available as quarter, half or full rack
  • Exadata X2-8   – available as full rack only
Additionally, each system is available with either high performance (HP) or high capacity (HC) disks. The HP disks are 600GB 15,000 RPM SAS, whilst the HC disks are 3TB 7,200 RPM SAS.

On the X2-2, upgrade paths exist from quarter to half and half to full. Machines of the same type can also be cabled together via Infiniband to create a “multi-rack” machine. Additionally, Storage Expansion racks can be purchased which contain more storage cells. All systems are delivered in a standard 42U rack.

The X2-2 is a continuation of the v1 and v2 models originally produced by Oracle. The X2-8 is a departure from this design; it has so far been positioned more towards the consolidation market and has been described by Oracle as “a complete grid, or private cloud”.

Exadata Components

The Exadata machines are built using standard components which cannot be modified by customers. There are many implications to this but the most important one is that no additional peripheral component cards can be added to the database or storage servers, which in turn means that no external storage can be attached using – for example – fibre channel or PCIe.

Database Servers

The second number in the X2-n naming convention refers to the number of sockets on the database servers:

On the X2-2 each database server has two sockets containing six-core Intel Xeon X5675 processors (3.06 GHz) giving a total of 96 cores for a full rack. Each X2-2 database server also contains:

  • 96GB RAM (which can be upgraded to 144GB at an additional cost)
  • four 300GB 10,000 RPM SAS Disks
  • four 1Gb Ethernet ports (one of which is reserved) and two 10Gb Ethernet ports

Exadata X2-2 database servers are Sun Fire X4170 M2 servers (see data sheet).

On the X2-8 each database server has eight sockets containing ten-core Intel Xeon E7-8870 processors (2.40 GHz) giving a total of 160 cores. Each X2-8 database server also contains:

  • 2TB RAM
  • eight 300GB 10,000 RPM SAS Disks
  • eight 1Gb Ethernet ports (one of which is reserved) and eight 10Gb Ethernet ports

Exadata X2-8 database servers are Sun Fire X4800 M2 servers (see data sheet).
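
To make the core arithmetic explicit, here is a minimal Python sketch that multiplies out the figures above. The X2-2 full rack figure of 8 database servers follows from the 96-core total; the X2-2 quarter and half rack database server counts (2 and 4) are not listed above and are included here as assumptions, as is the implied count of 2 X2-8 database servers.

```python
# Rough core arithmetic for the Exadata X2 database tier (sketch only).
# Socket and core counts come from the figures quoted above; the X2-2
# quarter/half database server counts are assumptions not stated in this post.

X2_2_CORES_PER_SERVER = 2 * 6    # two sockets x six-core Xeon X5675
X2_8_CORES_PER_SERVER = 8 * 10   # eight sockets x ten-core Xeon E7-8870

db_servers = {
    "X2-2 quarter": 2,   # assumption
    "X2-2 half": 4,      # assumption
    "X2-2 full": 8,      # implied by the 96-core full rack figure
    "X2-8": 2,           # implied by the 160-core figure (2 x 80 cores)
}

for config, servers in db_servers.items():
    per_server = X2_8_CORES_PER_SERVER if config == "X2-8" else X2_2_CORES_PER_SERVER
    print(f"{config}: {servers} database servers, {servers * per_server} cores")

# Output includes "X2-2 full: 8 database servers, 96 cores" and
# "X2-8: 2 database servers, 160 cores", matching the text above.
```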

Storage Servers

The Exadata Storage Server is a standard component based on the Sun Fire X4270 M2 server. The same model is used in both the X2-2 and X2-8 as well as the Storage Expansion racks:

Exadata X2-2 (quarter)                         3 storage servers        ( 36 cores)
Exadata X2-2 (half)                            7 storage servers        ( 84 cores)
Exadata X2-2 (full)                           14 storage servers        (168 cores)
Exadata X2-8                                  14 storage servers        (168 cores)
Storage Expansion (quarter)                    4 storage servers        ( 48 cores)
Storage Expansion (half)                       9 storage servers        (108 cores)
Storage Expansion (full)                      18 storage servers        (216 cores)

Each storage server contains two six-core Intel Xeon L5640 (2.26 GHz) processors as well as:

  • 24GB RAM
  • 12 SAS disks (600GB 15,000 RPM High Performance or 3TB 7,200 RPM High Capacity)*
  • Disk Controller HBA with 512MB Battery Backed Write Cache
  • four 96GB Sun Flash Accelerator F20 PCIe Cards

* Note that when Oracle states raw disk capacity, 1GB = 1 billion bytes
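
As a rough illustration of the capacity note above, the following sketch multiplies out the disk counts for each configuration using the storage server counts listed earlier, and converts Oracle’s decimal figures into binary units. This is raw capacity only, before ASM redundancy or any other overhead.

```python
# Raw disk capacity sketch for Exadata X2 storage, using the cell counts
# listed above. Oracle quotes capacity in decimal units (1 GB = 10**9 bytes),
# so the binary (TiB) equivalents come out lower. Figures exclude ASM
# redundancy and any other overhead.

DISKS_PER_CELL = 12
HP_DISK_TB = 0.6   # 600 GB high performance disk
HC_DISK_TB = 3.0   # 3 TB high capacity disk

cells = {
    "X2-2 quarter": 3,
    "X2-2 half": 7,
    "X2-2 full / X2-8": 14,
    "Storage Expansion full": 18,
}

def tb_to_tib(tb):
    """Convert decimal terabytes (10**12 bytes) to tebibytes (2**40 bytes)."""
    return tb * 10**12 / 2**40

for config, n in cells.items():
    hp = n * DISKS_PER_CELL * HP_DISK_TB
    hc = n * DISKS_PER_CELL * HC_DISK_TB
    print(f"{config}: HP {hp:.1f} TB ({tb_to_tib(hp):.1f} TiB), "
          f"HC {hc:.1f} TB ({tb_to_tib(hc):.1f} TiB)")

# e.g. a full rack with high capacity disks: 14 cells x 12 x 3 TB = 504 TB raw,
# which is roughly 458 TiB in binary units.
```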

Infiniband

In addition to the database and storage servers, each Exadata system contains three quad data rate Sun Datacenter Infiniband switches (one spine switch and two leaf switches), with the exception of the quarter racks which only contain two (both leaf switches). Each database and storage server has two Infiniband ports connected to the internal Infiniband network, one to each leaf switch. These two connections are bonded in an active/passive configuration so that each leaf switch will then have 50% of the active connections and 50% of the passive. The leaf switches also have seven interconnects to each other as well as one connection each to the spine switch (where present). In the process of multi-racking (connecting multiple racks together) these interconnecting links are redistributed in order that each leaf switch is connected to each spine switch.
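
As a back-of-the-envelope illustration of this topology, the sketch below tallies the ports in use on each leaf switch in a single X2-2 full rack. The database server count of 8 is an assumption (it is not stated above), and the actual Oracle multi-rack cabling scheme is not reproduced here.

```python
# Simplified leaf-switch port tally for a single X2-2 full rack, based on
# the topology described above: each database and storage server connects
# one port to each leaf switch, the two leaf switches share seven
# interconnects, and each leaf has one link to the spine switch.
# The database server count (8) is an assumption, not stated in this post.

db_servers = 8          # assumption (X2-2 full rack)
storage_servers = 14    # from the table above
inter_leaf_links = 7
spine_links_per_leaf = 1

ports_per_leaf = (db_servers + storage_servers) + inter_leaf_links + spine_links_per_leaf
print(f"Ports in use per leaf switch: {ports_per_leaf}")   # 30 in this sketch

# With active/passive bonding on the servers, roughly half of the 22
# server-facing links on each leaf carry the active side of a bond.
```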

Management Switch

Every 42U Exadata system rack (regardless of the size and including the storage expansion racks) contains a standard Ethernet management switch. All components in the rack (with the exception of the KVM and PDUs) are connected to this switch via the pre-cabled management network. Since the release of the X2 model this switch has been the Cisco Catalyst 4948 Ethernet Switch. This switch is the only component in the rack that customers are allowed to remove and replace with their own equipment.

Whilst Oracle provides both hardware and software support of the other components in an Exadata machine (and Storage Expansion rack), the situation with the Cisco switch is different. For this component, Oracle only provides hardware support – software support is not required but can be purchased direct from Cisco.

Miscellaneous Equipment

In addition to the database and storage servers, Infiniband and management switches, each rack comes with two Power Distribution Units (PDUs) which can optionally be connected to the management network in order to enable remote monitoring (although there are no available ports on the internal management switch). Each component within the rack has dual redundant power supplies connected to these PDUs, with the exception of the X2-8 database servers, which have four power supplies.

Also contained within the X2-2 and Storage Expansion racks is a KVM (Keyboard Video Mouse) manufactured by Avocent – however the X2-8 rack does not have room for this component. The KVM can also be connected to the management network but again there is no free port on the internal management switch.

Flash Cards

Each Exadata storage server contains four 96GB Sun Flash Accelerator F20 cards, giving a total of 384GB of flash storage. These cards are connected via PCIe and allow each storage server to deliver up to 125,000 flash IOPS end-to-end (i.e. measured through the database and storage servers, based on 8k IO requests). Oracle’s published comparison of flash and disk speeds in Exadata shows:

                                      X2-8 and X2-2      X2-2               X2-2
                                      Full Rack          Half Rack          Quarter Rack

Disk Data Bandwidth (up to)
  • High Performance SAS              25 GB/sec          12.5 GB/sec        5.4 GB/sec
  • High Capacity SAS                 14 GB/sec          7.0 GB/sec         3.0 GB/sec

Flash Data Bandwidth (up to)
  • High Performance SAS              75 GB/sec          37.5 GB/sec        16 GB/sec
  • High Capacity SAS                 64 GB/sec          32 GB/sec          13.5 GB/sec

Database Flash Cache IOPS (up to)     1,500,000          750,000            375,000

Database Disk IOPS (up to)
  • High Performance SAS              50,000             25,000             10,800
  • High Capacity SAS                 25,000             12,500             5,400
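
A quick way to read the table above is that the figures scale roughly with the number of storage servers in each configuration. The sketch below divides the rack-level values by the cell counts listed earlier to give approximate per-cell figures; this is an illustrative back-calculation, not an Oracle-published per-cell specification.

```python
# Rough per-cell figures derived from the table above by dividing the
# rack-level values by the number of storage servers in each configuration.
# This is an illustrative back-calculation, not an Oracle per-cell spec.

configs = {
    # name: (cells, HP disk GB/s, HP flash GB/s, flash cache IOPS)
    "Full rack": (14, 25.0, 75.0, 1_500_000),
    "Half rack": (7, 12.5, 37.5, 750_000),
    "Quarter rack": (3, 5.4, 16.0, 375_000),
}

for name, (cells, disk_bw, flash_bw, flash_iops) in configs.items():
    print(f"{name}: {disk_bw / cells:.2f} GB/s disk, "
          f"{flash_bw / cells:.2f} GB/s flash, "
          f"{flash_iops / cells:,.0f} flash IOPS per cell")

# The quarter rack works out at 375,000 / 3 = 125,000 flash IOPS per cell,
# matching the per-server figure quoted earlier in this post.
```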

Oracle claims that each F20 flash card is capable of:

  • 100,110 random read IOPS (4k block size)
  • 83,996 random write IOPS (4k block size)
  • 400µs random read/write service times
  • 1,092 MB/s sequential read
  • 501 MB/s sequential write
  • 16.5 Watts average power rating
  • Over 2 million hours MTBF
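
Naively summing these per-card figures across the four cards in a storage server gives a sense of the headroom over the end-to-end per-cell figure quoted earlier. The sketch below does exactly that; real-world throughput is limited by the rest of the I/O path, which is why the end-to-end number is lower.

```python
# Naive aggregation of the F20 card figures quoted above across the four
# cards in a storage server. This ignores controller, CPU and software
# limits, which is why Oracle's end-to-end figure (125,000 flash IOPS per
# cell, 8k requests) is lower than the raw card totals (4k requests).

CARDS_PER_CELL = 4

card = {
    "flash_gb": 96,
    "read_iops_4k": 100_110,
    "write_iops_4k": 83_996,
    "seq_read_mb_s": 1_092,
    "seq_write_mb_s": 501,
}

per_cell = {metric: value * CARDS_PER_CELL for metric, value in card.items()}
for metric, value in per_cell.items():
    print(f"{metric}: {value:,}")

# e.g. flash_gb: 384 and read_iops_4k: 400,440 per storage server,
# versus the 125,000 end-to-end flash IOPS figure quoted earlier.
```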

Connectivity

A default Exadata installation will result in each database using the following two networks.

SQL*Net traffic from the application or end users routes via the “public access” network, which is located on two ports of either the 1Gb or 10Gb network adapters. The ports are bonded in an active / passive configuration by default, although customers are able to aggregate them if they desire.

The private interconnect used by RAC and Clusterware, as well as for ASM to communicate with the storage network, runs over the Infiniband network. Again this is configured with two ports using an active / passive bonding configuration.

All Exadata systems also contain a management network running on 1Gb Ethernet. All servers (database and storage) have a connection to this network, along with each server’s Integrated Lights Out Management (ILOM) card. The Infiniband switches also have connections to the management network, all of which is then routed through the Cisco Catalyst 4948 switch. The KVM (X2-2 only) and PDUs also have optional connectivity to the management network but are not connected to the switch due to the limited number of ports available.

Connections to Exalogic

When connecting Exadata to the Exalogic Elastic Cloud (Oracle’s engineered system for middleware), SQL*Net traffic can be configured to flow over the Infiniband network using Sockets Direct Protocol (SDP), which Oracle claims gives higher throughput and lower latency (see page 7 of this overview).
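
For illustration, Oracle Net can use SDP as the protocol in a connect descriptor, so a connection of this kind is described by a descriptor of roughly the following shape. The host, port and service name below are hypothetical placeholders, and the listener configuration required on the Exadata side is outside the scope of this post.

```python
# Illustrative only: the shape of an Oracle Net connect descriptor using
# SDP instead of TCP for SQL*Net over Infiniband. The host, port and
# service name are hypothetical placeholders, not values from this post.

ib_host = "exadata-ib-vip.example.com"   # hypothetical Infiniband address
port = 1522                              # hypothetical SDP listener port
service = "EXADB"                        # hypothetical service name

sdp_descriptor = (
    "(DESCRIPTION="
    f"(ADDRESS=(PROTOCOL=SDP)(HOST={ib_host})(PORT={port}))"
    f"(CONNECT_DATA=(SERVICE_NAME={service})))"
)
print(sdp_descriptor)
```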

Connections to External Storage

Since the Exadata hardware cannot be modified, it is not supported to add HBA cards to any of the Exadata servers. It is supported to present storage via the network ports on the database servers via NFS or iSCSI, although Fibre Channel over Ethernet (FCoE) is not supported.

Oracle’s recommended solution for external backup infrastructure to Exadata is the Sun ZFS Storage Appliance, which presents storage to Exadata via NFS. The ZFSSA has native 10Gb Ethernet and Infiniband connectivity and so is able to connect to Exadata over either network.
