The Future of All-flash Storage Arrays & HCI

Let's take a quick look at where all-flash storage arrays and HCI are going in 2017. There will be big changes in the devices themselves, in the interconnects used to access them in external storage arrays, and in the APIs that applications use to perform their storage operations.

THE DEVICES

The latency of SATA HDDs is on the order of 10,000uS and the latency of SATA SSDs is on the order of 1,000uS, so it's no wonder all-flash arrays have captured market segments that need that 10X speed increase. The next wave of devices is NVMe SSDs, with latencies on the order of 100uS, making them 10X faster than SATA SSDs. But wait, there's more - Storage Class Memory (SCM) like Intel/Micron 3D XPoint NVDIMMs is here, and because it lives on the memory bus instead of the PCIe bus, it has latencies on the order of 1uS - that's 100X the performance of NVMe SSDs and 1,000X that of SATA SSDs. Just as SSDs are used today to cache data residing on HDDs, next-gen storage arrays will use 3D XPoint NVDIMMs to cache data residing on NVMe SSDs.
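The latency ladder above is easy to sanity-check. The snippet below uses only the order-of-magnitude figures cited in this article (not measured values) to show each device tier's speedup relative to a SATA SSD:

```python
# Order-of-magnitude device latencies cited above, in microseconds.
latencies_us = {
    "SATA HDD": 10_000,
    "SATA SSD": 1_000,
    "NVMe SSD": 100,
    "3D XPoint NVDIMM": 1,
}

# Speedup of each device relative to a SATA SSD (ratio of latencies).
baseline = latencies_us["SATA SSD"]
for device, latency in latencies_us.items():
    print(f"{device}: {latency} uS -> {baseline / latency:g}X vs SATA SSD")
```

Running it makes the caching tiers obvious: each step down the ladder is roughly another 10X, and SCM on the memory bus ends up 1,000X faster than a SATA SSD.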

THE NETWORKS

The SATA 3.0 bus delivers roughly 0.6GB/s of throughput, while the PCIe x4 links commonly used for NVMe SSDs deliver just under 4GB/s - an increase of roughly 6.5X. To implement shared NVMe SSD storage in an external array, EMC DSSD uses a proprietary PCIe x4 mesh architecture, and a wave of innovative storage arrays is arriving that uses the new NVMe over Fabrics (NVMe-oF) industry-standard protocol. NVMe-oF can run over several fabrics, including common 10Gb to 100Gb Ethernet links through prevalent DCB-compliant Ethernet switches and popular RNICs such as those from Mellanox and Chelsio. Without the RDMA capability of RNICs, Ethernet communications would introduce too much latency; fortunately, RNICs are already widely deployed in datacenters. NVMe-oF will bring to market standards-based equipment and a choice of vendors for datacenter operators.

THE PROTOCOLS

What's really going to turn storage on its head is the protocol changes for applications performing storage operations. In the ubiquitous SCSI protocol stack, user-space-to-kernel-space transitions, data copies and an asynchronous I/O mechanism that relies on I/O interrupts combine to create a huge software latency for each I/O request - on the order of 100uS, according to SNIA. Yes, you read that right. Add in the latencies of the operating system's file system and the time required to do in-line compression, encryption, HA, durability and snapshots, and the impressive speed increases of NVMe SSDs and SCM are completely wasted.

THE NEW APIs

When applications need maximum speed, they will move to a programming model where the entire I/O operation takes place in user space, over RNICs, to NVMe-oF storage arrays. The fastest client-side programming interfaces include the NVM Express APIs, Intel's SPDK APIs, and the Windows Server 2016 SCM APIs that use memory-mapped files as the programming paradigm, with IB-style request and completion queues that reside in user space to eliminate kernel-space transitions, unnecessary copies and interrupt latencies. Two application programming models will emerge for access to NVMe SSD storage: one will be compatible with existing applications and will inherit legacy latencies, while the new paradigm stays in user space and will be 2-10X faster - for new applications with the most pressing need for speed.
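The memory-mapped-file paradigm mentioned above can be sketched with Python's standard mmap module. This is only an illustration of the programming model - once the region is mapped, I/O becomes plain loads and stores in user space, with no read()/write() system call per operation - and not the actual NVM Express, SPDK or Windows SCM APIs:

```python
import mmap
import os
import tempfile

# A small ordinary file stands in for a persistent-memory region.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 4096)

# Map the file into the process address space. After this, access is
# ordinary memory access in user space - no per-I/O system call.
with mmap.mmap(fd, 4096) as region:
    region[0:11] = b"hello world"   # a "store" into the mapped region
    data = bytes(region[0:11])      # a "load" from the mapped region
    region.flush()                  # ask the OS to persist dirty pages

os.close(fd)
os.remove(path)
print(data)  # b'hello world'
```

On real SCM the mapping would be backed by NVDIMMs rather than a disk file, so the flush-to-persistence step is what stands between a store instruction and durable data.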

THE FUTURE

In 2017, opportunities exist for hardware manufacturers, firmware creators and software engineers to leverage cutting-edge NVMe SSDs for speed, and to add enterprise-class features - compression, de-duplication, encryption, HA, durability and snapshots - to NVMe-oF storage arrays and HCI appliances without introducing latencies. Please don't forget my favorite storage feature, the zero-copy clone for my VMs, and please give me support for my persistent Docker containers. To deliver all that, it will be a busy and exciting 2017 indeed!
