Pondering NVMe-oF

December 7, 2017

by:  Woody Hutsell, appICU.com

I published a blog post on IBM developerWorks about a recent technology preview of NVMe-oF with POWER9 and FlashSystem.  I hope you take a minute to read it.

I have many mixed feelings on this topic that I thought I would share.

  1.  I think NVMe-oF will make a positive impact on application performance.
  2.  I have lived through the early days of other protocols and this one is no different:  immature standards, proprietary solutions, and slow customer adoption.
  3. I believe vendors will offer a bunch of NVMe-oF-enabled solutions.  Most of them won’t make any sense.  NVMe-oF is about shaving latency.  If the solution being paired with NVMe-oF is loaded with latency from a poorly implemented architecture and slow storage services, adding NVMe-oF will hardly make a difference.
  4. The wide range of NVMe-oF options is an impediment to its success:  InfiniBand, RoCE, iWARP, FC-NVMe, and more on the way.  The fact that different vendors are throwing their weight into different protocols is also not helping.
  5. The focus on lower latency for the customer is positive, and I am delighted to see the storage industry refocused on latency, even if these are the same people I once heard mutter that latency under 500 microseconds doesn’t matter.
  6. Don’t be one of those people who say NVMe when they mean NVMe-oF.  I have seen industry experts get lost in the terminology.

Woody

 


Re-prioritizing Analytics

June 26, 2017

Woody Hutsell, http://www.appICU.com

See this link to read my first ever blog post on an IBM.com website.

 

 


Stop waiting on NVMe all flash arrays

December 6, 2016

by Woody Hutsell, AppICU

NVMe has taken the flash array market by storm, if you consider the number of storage vendors getting in line to deploy NVMe SSDs inside their all-flash arrays.  NVMe inside the server (which is the basis for most all-flash arrays) is an improvement over SAS or SATA because of the lighter protocol, and an improvement over PCIe flash cards because it is hot swappable and in a drive form factor.

However, just as with the adoption of SAS SSDs inside all-flash arrays, these early all-flash arrays that include NVMe SSDs will deliver only a fraction of what is possible with the technology.  Why? The first flash arrays using NVMe SSDs have the same fundamentally software-heavy architectures that are already wasting the speed of the internal SAS SSDs.  The move to NVMe SSDs in these bloated solutions will result in some latency/IOPS improvements but ignores the real problem: the storage platform is the bottleneck.  Why is it that most all-flash arrays, even those with little or no storage services, are in the 500 microsecond range for latency?  One of the main reasons is that the data path is littered with obstacles to low latency.  It is the server architecture, the bulky operating system, the software RAID, and the clumsy storage services that are behind the terrible latency, not the flash media or even the SCSI protocol.

If you find yourself waiting on a low-latency, NVMe-driven all-flash array, you can stop waiting (just as your application can stop waiting), because a solution is here and available now.  The IBM FlashSystem 900, which has no software in the data path, is shipping with the low latency characteristics your applications demand.  What’s more, it doesn’t require proprietary host drivers like some competing solutions (EMC DSSD and E8).  It uses industry-standard Fibre Channel and InfiniBand to attach to your existing storage network.  You might protest that the FlashSystem 900 does not use NVMe inside the storage array, and you would be right.  There is absolutely no NVMe inside the FlashSystem 900.  There is no storage protocol inside the FlashSystem 900.  Once the data hits the interface controller it ceases to be SCSI or PCI or NVMe.  The only thing better than an improved protocol like NVMe is no protocol.  The FlashSystem 900, like many prior generations of FlashSystem solutions, treats the flash inside the system like memory.  The result is unmatched latency characteristics.

So what do you do with the FlashSystem 900 and its low latency?  Make your applications faster.  For many database-driven applications, storage services are already provided at the application or relational database layer.  The FlashSystem 900 is the perfect accelerator for these environments.  For customers who have embraced software-defined storage, the FlashSystem 900 is a software-defined storage accelerator; just ask the customers who have accelerated IBM SAN Volume Controller and Spectrum Virtualize with FlashSystem.  For customers who need the full storage services feature set in an integrated storage solution, the FlashSystem V9000 and FlashSystem A9000/A9000R include the FlashSystem 900 as the storage enclosure.

NVMe is full of promise for servers and for storage vendors willing to start fresh or further optimize their solutions to actually benefit from the technology.  There are noteworthy examples of new solutions on the market designed for NVMe with encouraging performance gains.  Oddly, the most noteworthy of these solutions are hard to deploy due to custom interface technologies and proprietary drivers (I think of these devices as standards-based inside and proprietary outside).  The FlashSystem 900 delivers all of the benefits of NVMe today, but without requiring you to change your storage network.  I think of it as proprietary inside but standards-based outside.  I think the choice between these options is easy.  The fastest path to improved application performance is with the FlashSystem 900.


Cloud Grid Architecture

June 30, 2016

by Woody Hutsell, AppICU

Prevent cloud failures with grid architecture

Public and private cloud architectures fail with alarming frequency. David Linthicum, with Cloud Technology Partners, wrote in an article – Bracing for the Failure of Your Private Cloud Architecture – for TechTarget’s SearchCloudComputing that a major problem with private cloud deployments results from reusing the same hardware they used for their traditional IT. Specifically, he comments that “hardware requirements for most private cloud operating systems are demanding” and later that “If the hardware doesn’t have enough horsepower, the system will begin thrashing, which causes poor performance and likely a system crash.”

Andrew Froehlich, writing 9 Spectacular Cloud Computing Fails for InformationWeek, extends this thought to the public cloud when he says that one of the three key reasons cloud service providers fail is “beginner mistakes on the part of service providers…when the provider starts out or grows at a faster rate than can be properly managed by its data center staff.”

Serving up applications in the cloud is different from traditional IT. Cloud deployments thrive when ease of application deployment is matched by ease of management combined with consistent performance under all workloads. Successful cloud deployments support many demanding applications and customers. With the increasing diversity of hosted applications come some infrastructure headaches. We often custom-tailor our traditional IT environments to meet the needs of a specific application or class of applications.  We know it has certain peaks for online transaction processing or batch processes. We know when we can perform maintenance. With the cloud, success means we have many applications with overlapping (or not) peak performance periods. With the cloud, we may be more likely to see constant use, resulting in fewer opportunities to perform maintenance and restructure our storage to balance for intense workloads.

Successful cloud deployments can challenge and break traditional storage from a performance point of view. Traditional storage scales poorly. Whether the traditional storage array uses HDD or hybrid architectures, it will experience the same problem: as the number of I/Os to the system increases, system performance degrades rapidly. With an all-HDD system, latency starts high and quickly gets worse; with a hybrid configuration (SSD + HDD), latency starts lower and stays low longer, but then degrades rapidly.  When latency degrades, applications and users suffer.

Successful cloud deployments can also challenge and break traditional storage from a management point of view. Traditional storage arrays are difficult to configure and deploy. It is not unheard of for initial deployments of scalable traditional storage to take days or sometimes weeks of tuning before applications are properly mapped to the right RAID groups. Do you need a RAID group with SSDs? Do you need a tiered deployment with SSDs, SAS, and SATA? How many drives are needed in each RAID group?  Should you implement RAID 0, 1, 5, or 6?  Once sized, configured, and deployed, further tweaking of these systems can be administrator intensive. When workloads change, as is the expectation in a cloud deployment, how quickly can you create new volumes, and what happens when the performance needed for an application exceeds what the system is capable of delivering? The hard answer is that traditional storage was not designed for the cloud.

Fortunately, IBM has a solution – the IBM FlashSystem A9000, a modular configuration that is also available as the IBM FlashSystem A9000R, a multi-unit rack model. The new IBM FlashSystem family members tackle the performance and management issues caused by successful cloud deployments. Where the cloud needs consistent low latency even as I/O increases, FlashSystem A9000 applies low-latency all-flash storage. Where the cloud needs simplified management, the systems apply a grid storage architecture.

It all starts with the configuration. FlashSystem A9000 customers do not have to configure RAID groups: the system automatically implements Variable Stripe RAID within each MicroLatency flash module and a RAID-5 stripe across all of the modules in an enclosure. An administrator configuring the system creates volumes and assigns those volumes to hosts for application use. Every volume’s data is distributed evenly across the grid controllers (where the storage services software runs) and the flash enclosures (where the data is stored). This grid distribution prevents hot spots and never requires tuning in order to maintain performance. No tuning means substantially less ongoing system management. When the rack-based FlashSystem A9000R is expanded, it automatically redistributes workloads across the new grid controllers and flash enclosures.
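To make the even-distribution idea concrete, here is a minimal sketch of hash-based placement of volume partitions across grid controllers and flash enclosures. It is an illustration only: the partition size, hash choice, and component counts are assumptions made for the example, not IBM’s actual algorithm.

```python
# Illustrative only: spread each volume's partitions across controllers and
# enclosures by hashing, so no single component becomes a hot spot. Partition
# size, hash, and component counts are assumptions, not IBM's implementation.
import hashlib
from collections import Counter

GRID_CONTROLLERS = ["gc0", "gc1", "gc2"]   # assumed controller count
FLASH_ENCLOSURES = ["fe0", "fe1"]          # assumed enclosure count
PARTITION_SIZE = 16 * 1024 * 1024          # assumed 16 MiB partitions

def placement(volume_id, offset):
    """Map a (volume, byte offset) pair to a (controller, enclosure) pair."""
    partition = offset // PARTITION_SIZE
    digest = hashlib.sha1(f"{volume_id}:{partition}".encode()).digest()
    key = int.from_bytes(digest[:8], "big")
    controller = GRID_CONTROLLERS[key % len(GRID_CONTROLLERS)]
    enclosure = FLASH_ENCLOSURES[(key // len(GRID_CONTROLLERS)) % len(FLASH_ENCLOSURES)]
    return controller, enclosure

# Hashing keeps placement deterministic and roughly even, so a new volume
# needs no tuning to avoid overloading any one controller or enclosure.
load = Counter(placement("vol1", off)[0] for off in range(0, 2 * 10**9, PARTITION_SIZE))
print(load)   # roughly equal partition counts per controller
```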

When an I/O comes into these new FlashSystem arrays, it is written to three separate grid controllers simultaneously. These I/Os are cached in controller RAM, and the write is considered committed from the application’s point of view. In this way, the application is not slowed down by data reduction. Next, the three controllers distribute the pattern reduction, inline data deduplication, and data compression tasks across all the grid controllers, providing the best possible data reduction performance before writing the data to the flash enclosure(s). Data can be written across any of the flash enclosures in the system, preserving the grid architecture and the distribution of workload. When data is written to flash inside the flash enclosure, it is distributed evenly across the flash in a way that ensures consistent low-latency performance. All of this is aided by IBM FlashCore™ technology, which provides a hardware-only data path inside the flash enclosure while data is written persistently to flash. The flash storage is housed in IBM MicroLatency® modules, whose massively parallel array of flash chips provides high storage density, extremely fast I/O, and consistent low latency.
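A rough sketch of the write flow described above: the write is mirrored into three controllers’ RAM, acknowledged to the host, and only then reduced and destaged to a flash enclosure. The class names and the synchronous destage below are simplifications assumed for the example, not IBM code.

```python
# Simplified model of the described write path. A real system performs the
# reduction and destage asynchronously and with hardware assistance; this
# sketch only shows the ordering: cache in three controllers' RAM,
# acknowledge, then deduplicate, compress, and destage. Names are assumptions.
import hashlib
import zlib

class GridController:
    def __init__(self, name):
        self.name = name
        self.write_cache = []        # stands in for controller RAM

    def cache_write(self, data):
        self.write_cache.append(data)

class FlashEnclosure:
    def __init__(self):
        self.store = {}              # fingerprint -> compressed block

    def destage(self, data):
        fingerprint = hashlib.sha256(data).hexdigest()
        if fingerprint not in self.store:                  # inline deduplication
            self.store[fingerprint] = zlib.compress(data)  # compression
        return fingerprint

def host_write(data, controllers, enclosure):
    for gc in controllers[:3]:       # 1. mirror into three controllers' RAM
        gc.cache_write(data)
    ack = "committed"                # 2. acknowledged before any data reduction
    enclosure.destage(data)          # 3. reduce and destage afterwards
    return ack

controllers = [GridController(f"gc{i}") for i in range(3)]
enclosure = FlashEnclosure()
print(host_write(b"example block" * 512, controllers, enclosure))
```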

Together these technologies are a real blessing for the cloud service provider (CSP). When new customers arrive, CSPs know they can easily allocate new storage to new customers and not worry about special tuning to ensure the best performance possible. When existing customers’ performance demands skyrocket, CSPs know that their FlashSystem A9000-based systems offer enough performance to match the growing requirements of their customers without negatively impacting other customers. And when launching or expanding their businesses, CSPs know that FlashSystem A9000 can eliminate one of the leading causes of cloud offering failures, the inability of storage architectures to scale.

For more information, read Ray Luchessi’s (Silverton Consulting) article on Grid Storage Technology and Benefits.


The new storage UI from IBM: simply sophisticated

May 26, 2016

What has twenty patents, eight tentacles, and is cooler than a six-pack on a scorching day? Hint: it “lives” in the recently announced IBM FlashSystem A9000. Give up? It’s the IBM…

Source: The new storage UI from IBM: simply sophisticated


Pure Myth

August 25, 2015

a blog by Woody Hutsell, http://www.appICU.com

Once upon a time, in a land not so far away, the advisors to the empire crowned a prince and proclaimed that he would soon be king. These advisors were known throughout the realm and therefore trusted. They regaled the adoring masses with the conquests of this pure prince – dragons slayed, damsels rescued, and crises averted. While the public was cynical at first, the stories about the pure prince were very convincing. The accolades from one advisor were soon amplified by another and then another until there were almost no dissenters. Who could possibly want to be the only advisor that was not a supporter of the future king?

As time passed, the other princes in the land grew suspicious. A very few observers noticed that the pure prince was being credited for dragons slayed and damsels rescued that never happened. But the prince did not deny what the adoring advisors were saying about his conquests, because he understood that the true battle was a matter of perception. It was about the myths that preceded the prince into combat.  And because many believed it so, their gold and their support were rapidly flowing toward the pure prince, leaving some princes without the resources they needed to compete.

Eventually the pure prince made his play for the throne. His accountants and prophets detailed his conquests and ledgers. And while the wins in battle were impressive, they were far fewer than all had been led to believe. And the amount of gold spent to win those battles was breathtaking. A king running the empire or a company doing business with this strategy would surely go bankrupt. And thus the pure prince’s own accountants exposed, in a way that discredited the adoring advisors, how he was not what the advisors had made him out to be. With the eyes of the once adoring advisors now opened and the pure prince needing to curtail his spending, the competing princes rallied.

The perception of pure success had been exposed as illusion. The pure prince was nothing more, and maybe something less, than any other prince. The competing princes were suddenly more visible on the battlefield of the marketplace where gold in hand, constant struggle, endless innovation, a capricious bit of Luck, but not the obscuring mist of hype would ultimately crown the victor.

And so the princes all rode forth into battle. But the many prophets and advisors did not. As they had for generations, they hid behind the smoke of industry and the sound and fury of competition…watching and waiting to leap forth at the first opportunity to once again crown a new prince and prepare him for the throne…


Reflections on Flash Memory Summit 2015

August 24, 2015

I just got back from my nth Flash Memory Summit. Special thanks to Tom Coughlin and the crew for putting on a good show and providing an excuse to get together with my friends in the industry.

I have some observations from the show this year:

1. There is nothing quite like a multi-billion dollar industry threatened by extinction to generate new breakthroughs, and 3D NAND technology looks like just the technology to extend the life of NAND flash for many more years. This means we can continue to project out density and cost improvements with NAND flash even as 3D NAND makes dealing with wear levelling a little bit easier for a generation or two.

2. We have been saying for years that “in five years” we would have a technology that could displace NAND flash. It looks like we will continue to be wrong, but the new announcement from Micron about 3D XPoint is nonetheless exciting because it may be our first viable storage class memory. There are a host of things people would like to do with NAND flash but can’t, because NAND is too slow to act as memory, and things they would like to do with RAM but can’t, because RAM is volatile and low density. 3D XPoint appears to be a product that will enable some clever engineers to reach some markets poorly served by NAND flash.

3. Coincidental to the show, Pure Storage filed for a public offering. The financials that accompanied their filing made me pause and reflect that building a company to launch today is very different from building a company capable of long-term survival. At Texas Memory Systems (TMS), we did business the old-fashioned way – we were profitable. The CEO never took on venture capital or long-term debt. His business could have continued indefinitely. The obvious downside of our approach at TMS was that we could not buy market attention and market share. To be relevant in our marketplace we had to produce the best technology. To be interesting to IBM, TMS had to have the best engineering. Now, it seems that the other start-ups in the industry that attempt to develop sustainable business models are mocked rather than celebrated. I think that after a few more market disappointments, and with any luck, we will learn to value businesses that build for the future. I believe the companies that are built to survive are better acquisition candidates.

4. The all-flash array market continues to be vibrant and fast growing. Even dropping $100 million from the 2014 market size estimates still shows a market in the early stages of spectacular growth. Just as interesting, we are starting to see companies aim for the edges of the market and position for promising new niches. The long awaited takeover of the data center by flash is well underway.

My final word of wisdom from this journey is that you should never take a cab from SFO to Santa Clara.


Flash as Memory

May 4, 2015

by Woody Hutsell, http://www.appICU.com

A new analyst report on the use of flash as memory, posted by David Floyer at Wikibon, generated some attention in the market when it was covered by The Register. Floyer labels the architecture FaME (Flash as Memory Extension). This blog discusses the merits of flash-as-memory architectures and how IBM approaches the flash-as-memory market.

The latency problems of disk have driven the IT industry toward a flock of solutions. One of course is flash storage, and the flash adoption rates show that this solution is rapidly gaining popularity. Another potential solution is to move more and more data into server memory (such as in-memory databases). But DRAM is volatile and relatively expensive. Flash is not volatile and much less expensive, but it’s of course slower than DRAM. In the middle of these two options lies the concept of “flash as memory.”

When we talk about using flash as memory we are wandering into interesting semantic territory. Flash is memory, so why do we need to have a new discussion about how to use flash as memory? In traditional use cases flash is used as storage, tucked away behind a block storage protocol, to allow flash memory to be easily integrated into traditional applications.

Using flash memory as memory, rather than as block storage, is gaining some traction in the marketplace. But using flash as memory requires some interesting trade-offs. Most importantly, we trade off latency (because RAM has much lower latency than flash) for much higher capacity/density and much lower cost per capacity. This latency trade-off runs counter to our typical reason for using flash, which is to decrease the latency of data access relative to hard disk drives. With flash as memory, we are increasing latency, but for economic reasons.

For application developers, the choices have meaningful impacts. Relying on flash instead of disk means access to an effectively unlimited capacity of storage with low latencies. But those latencies add up across components: processor-to-backplane latency + OS latency + file system latency + protocol latency + network latency + storage system latency. Various mitigating technologies are in play across each of these components, with different options affecting the total latency/cost/efficiency for the application. Relying on RAM instead of disk means that you get the lowest possible latency but with dramatic constraints on maximum capacity, acceptance of volatility, and the highest cost per capacity. Nonetheless, any “memory architecture” offers performance improvements over any model where hard disk drives are used as storage.
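To make the component view concrete, here is a toy latency budget that simply sums the terms listed above. Every value is an assumed, illustrative number in microseconds, not a measurement of any particular system.

```python
# Toy latency budget: the end-to-end latency an application sees is roughly the
# sum of the components named above. All numbers are illustrative assumptions.
latency_us = {
    "processor_to_backplane": 1,
    "operating_system":      10,
    "file_system":           15,
    "protocol":              10,
    "network":               20,
    "storage_system":       100,
}

total = sum(latency_us.values())
print(f"end-to-end latency ~ {total} us")
for component, value in latency_us.items():
    print(f"  {component:<24}{value:>4} us  ({value / total:6.1%} of total)")
```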

When application developers decide to adopt a flash-as-memory architecture, they can’t just plug in a flash system and expect the application to be plug and play. For application developers, using flash as memory means coding their applications to appropriate application program interfaces (APIs), which are likely to use memory access semantics instead of block access semantics. The development effort required to adopt a new API remains a significant limiting factor for broad marketplace adoption of these approaches in traditional applications, although efforts at standardization are emerging to lower that barrier. In most cases, application development with memory access semantics instead of block access semantics actually results in substantially simpler code. Once the application is coded to the API, the experience for customers using the application versus an application previously running out of RAM is identical (recognizing some potential performance implications).
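As a rough illustration of the two styles, the sketch below contrasts block access semantics (explicit reads at a byte offset through the I/O stack) with memory access semantics (mapping data into the address space and indexing it directly). It uses generic POSIX-style calls against an ordinary file; it is not the Data Engine for NoSQL API, and the path is a placeholder.

```python
# Generic illustration of block access vs memory access semantics; not IBM's
# API. The file path below is a placeholder created just for the example.
import mmap
import os

PATH = "/tmp/example.dat"     # placeholder file standing in for a flash device
BLOCK_SIZE = 4096

# Create a small file so the example is self-contained.
with open(PATH, "wb") as f:
    f.write(os.urandom(BLOCK_SIZE * 4))

# Block access semantics: ask the I/O subsystem for block #2 explicitly.
fd = os.open(PATH, os.O_RDONLY)
block = os.pread(fd, BLOCK_SIZE, 2 * BLOCK_SIZE)
os.close(fd)

# Memory access semantics: map the data and index into it like an array.
with open(PATH, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mem:
    byte = mem[2 * BLOCK_SIZE + 100]

print(len(block), byte)
```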

The question becomes: What is the best way to implement a flash-as-memory architecture? Is the best approach to use flash inside a server? Is the best approach to use PCI-attached flash? Is the best approach to use a flash appliance? Within each of these categories, solutions can be dramatically different regarding performance, reliability, and cost. Flash inside the server is fine for smaller capacity uses. If you decide to leave the boundaries of the server, the question becomes: What is the best way to connect an external flash appliance? There are a limited number of choices on the market today. Since 2014, IBM has offered the Data Engine for NoSQL – Power Systems Edition. There are many innovations in this particular flash-as-memory solution that have likely escaped the attention of the market. First, with the introduction of POWER8 technology, our Power Systems offerings now provide a new interface called CAPI (coherent accelerator processor interface) that cuts through many of the layers required in traditional x86 I/O designs. CAPI is an improvement on PCI Express used in traditional servers:

  • CAPI allows the IBM FlashSystem 900 to interact with processors and system memory like another processor would
  • This enables applications to access flash directly, bypassing the device driver, kernel, pinned pages, memory copies, etc.
  • By removing the I/O subsystem overhead, the flash can be viewed as long-latency memory instead of as I/O-attached storage
  • This eliminates >95% of the CPU cycles associated with moving data to and from flash, freeing up CPUs to do useful work (thus avoiding one of the pitfalls associated with other flash as memory solutions)
  • The removal of code path length from the flash access reduces application-visible latency by more than a factor of 2 relative to accessing flash via the legacy I/O subsystem architecture
  • The presence of a CAPI controller in the path to the flash enables future innovations which embed hardware-accelerated compute functionality in the flash read/write data path, leveraging the CPU efficiency and ease of programming that IBM’s CAPI architecture provides.

The second advantage of the Data Engine for NoSQL introduced above is that it uses IBM FlashSystem 900 as its flash memory repository. At this point, you are thinking – Aren’t all flash appliances created equal? What you should realize, of course, is that there are massive technology differences between flash appliances. IBM FlashSystem 900 is a product whose legacy was storing data in RAM. In RAM? Yes. For over 30 years Texas Memory Systems, the company IBM acquired to enter the flash memory business, sold systems based on RAM. Why does this matter? First, as I highlighted in my previous blog, our engineers are hard core when it comes to low latency. Our FlashSystem 900 is not polluted by latency-inducing storage services or bogged down by architectures originally designed for disk drives or even, for that matter, most flash drives – I don’t care whether that flash drive is attached with SAS, SATA, or NVMe. What IBM engineers do is inherently better because we started with an architecture that always treats flash as memory (remember we started with RAM) and then we just translate at the interface layer from whatever protocol we are attached to into direct memory accesses (DMA). A close look at the architecture reveals a flash appliance that does not use a PCI or NVMe backplane but an essentially native protocol-less backplane because we don’t want any software or artificial limits in the data path.

This architecture gives FlashSystem engineers endless flexibility, as demonstrated in our current array of solutions. This flexibility means that FlashSystem 900 can be used with the Data Engine for NoSQL in flash-as-memory use cases. It can also be used in traditional application acceleration environments where low latency storage is required, such as with Oracle RAC databases. It can be used in Tier 1 disk replacement architectures with IBM Spectrum Virtualize (SVC) or as part of our FlashSystem V9000. It can be used in scale-out object, block, and file use cases with our IBM Spectrum Scale solution. One elegantly defined system with multiple use cases.

The marketplace has not yet spoken on the importance of flash-as-memory. IBM with its Data Engine for NoSQL is a major early participant in this new storage direction, enabled by revolutionary foundational technologies such as CAPI and IBM FlashSystem – an end-to-end architecture only possible from a company whose reach spans from the server to the storage array.


Post Launch Reflections

April 7, 2015

by Woody Hutsell, http://www.appICU.com

It feels like I just participated in my billionth solid state storage product launch. In fact, I have participated in quite a few since the year 2000. In almost chronological order, here are the solid state storage systems I can recall launching into the market. If you are a solid state storage historian, you should pay close attention to the following table:

[Table: solid state storage product launches, 2000–2015 (image not reproduced)]

Some additional observations on trends over this period:

  • Prices have come down from $5,000/GB in 2000 to below $10/GB in 2015. That means solid state storage now costs roughly 0.2% of what it cost 15 years ago.
  • Capacity per rack unit in 2000 was 10 GB/RU. Capacity per rack unit in 2015 is 28,500 GB/RU. In the past 15 years solid state storage density has increased by 2,850x. Mind-blowing, really. (Both ratios are checked in the short sketch after this list.)
  • Latency exhibited by the RAM-based products was very consistent and low over the 10-year period. Considerable cost, density, and persistence advantages forced a move to NAND flash memory in 2007, but latency for the flash-based products has remained consistently below 250 microseconds over the last 8 years.
  • There has been a continual trend toward lower cost-per-bit NAND technologies (SLC to eMLC to MLC).
  • There has been a consistent investment in enhanced data protection, starting with protecting the RAM using Chipkill and continuing with Variable Stripe RAID, a patented TMS technology.
  • TMS, and now IBM, has improved RAS features as the systems evolved, from requiring two units for data protection to requiring only one highly available system.
  • We’ve recently seen more demand for advanced storage services as the market for all flash arrays broadened from application acceleration to Tier 1 disk replacement.
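A quick sketch checking the price and density ratios cited above, taking the 2015 price at its $10/GB upper bound:

```python
# Verify the cited ratios: ~0.2% of the 2000 price, and ~2,850x the 2000 density.
price_2000_per_gb = 5000      # $/GB in 2000
price_2015_per_gb = 10        # $/GB in 2015 (upper bound of "below $10/GB")
density_2000_gb_per_ru = 10
density_2015_gb_per_ru = 28500

print(f"price ratio:   {price_2015_per_gb / price_2000_per_gb:.2%}")               # 0.20%
print(f"density ratio: {density_2015_gb_per_ru / density_2000_gb_per_ru:,.0f}x")   # 2,850x
```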

Looking to make predictions about the future of solid state storage? A quick glance at the rate of product innovation and the reasons for change over the last 15 years can tell you much about how the industry will continue to evolve.


Ready or Not, Here Comes Flash

November 17, 2014

Ready or Not, Here Comes Flash.

Blog posted originally on:  http://datascopes.wordpress.com/ by Elan Freedberg