Start waiting on 3DXP arrays

June 1, 2017

by Woody Hutsell, AppICU

Let’s get one thing out of the way.  Most storage systems will eventually offer 3DXP.  Why?  Because adding 3DXP SSDs to a storage array will be easy.

A second thing: I think the early usage of 3DXP will flow largely to server vendors (and their suppliers).  This is a major point and central to my thoughts on storage and 3DXP.  In the server, 3DXP reduces cost and increases density versus RAM.

3DXP in external storage will lag expectations until there are major advances in density and price.

I have worked in the part of the market that 3DXP external storage solutions will target for the last 17 years.  For most of those 17 years, I think we could comfortably call this space Tier 0.  These are customers whose end-customer satisfaction, missions or revenue are directly tied to the performance of their storage arrays.  When I say performance, I really mean latency sensitivity.  They are so latency sensitive that they will not tolerate storage services getting in the way of application performance.  There are customers in the financial, telecom, defense, government, retail, e-commerce and logistics businesses whose interest in this solution I could predict with a high degree of accuracy.

These customers are willing to pay for low latency.  Customers in this category bought all RAM solid state storage.  They were early adopters of all flash arrays. They still buy based on latency curves (who delivers predictable low latency at the IOPS level they require).

These are not the customers buying Tier 1 arrays with a full suite of storage services.  They will not tolerate data reduction or storage services if it impacts latency.  These are not the customers buying primarily on cost/capacity though they still have budgets and need a solution that fits that budget.

I love this Tier 0 market, because these customers are solving world class problems and must stay on the bleeding edge of technology to grow their business.  These customers will buy 3DXP arrays that deliver on the low latency potential of 3DXP.  The phrasing of that sentence is no accident: if the array offers 3DXP but only delivers modest latency improvements, it will be largely ignored.

The first enterprise market to hit it big with flash was inside the server, particularly PCI flash (think Fusion-io).  The second enterprise market to hit it big with flash, a few years later, was the Tier 0 external storage market (think Texas Memory Systems (subsequently IBM) and Violin Memory).  These splashes were nothing compared to the tsunami of business when all flash arrays entered the Tier 1 market with compelling economics driven by adoption of flash in consumer devices and supported by inline data reduction technologies to further reduce the cost per capacity.  These were majority buyers who were confident that the technology wrinkles were ironed out and who by and large wanted better performance than they could get from their disk-based solutions but were very focused on storage services, cost and cost/capacity.  They are not Tier 0 buyers, though they won’t go back to disk having tasted the sweet nectar of low latency storage.

Tier 1 customers are unlikely to buy into all 3DXP storage arrays until the cost approaches the cost of flash, because for these customers the difference between 120 microseconds of latency and 20 microseconds of latency is not as motivating as the difference between 5-20 milliseconds of latency and ½ a millisecond of latency.  And can you really get 20 microsecond latency on a Tier 1 device loaded with storage services?
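The arithmetic behind that comparison is worth spelling out.  Here is a rough sketch (the I/O count and latencies are illustrative assumptions on my part, not figures from any vendor) of how much storage wait time a transaction actually recovers at each step:

```python
# Rough, illustrative latency math: why Tier 1 buyers shrug at 3DXP.
# Assume a transaction that issues 50 dependent (serial) storage I/Os.
ios_per_txn = 50

def txn_time_ms(storage_latency_us):
    """Total storage wait per transaction, in milliseconds."""
    return ios_per_txn * storage_latency_us / 1000

disk   = txn_time_ms(10_000)  # ~10 ms per I/O on disk
flash  = txn_time_ms(120)     # ~120 us on an all flash array
xpoint = txn_time_ms(20)      # ~20 us on a hypothetical 3DXP array

print(f"disk:  {disk:6.1f} ms per transaction")   # 500.0 ms
print(f"flash: {flash:6.1f} ms per transaction")  # 6.0 ms
print(f"3DXP:  {xpoint:6.1f} ms per transaction") # 1.0 ms
```

Going from disk to flash removes roughly half a second of wait per transaction; going from flash to 3DXP saves only a few more milliseconds, which is why the second jump motivates only the most latency-sensitive buyers.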

What does this mean for the industry?  The market for 3DXP in external storage arrays will appear vibrant due to product introductions but the revenue that can be directly attributed to 3DXP in external storage will be low until the cost and density make meaningful improvements.  Storage architects are already designing ways to use 3DXP as a RAM replacement/supplement in the storage array.  There is some interesting potential here given the memory requirements for flash metadata and caching and the use of 3DXP as a tier of storage.  These steps are reminiscent of the way flash was gradually introduced into Tier 1 before it became Tier 1, for example in RAID cache backups.  As with the all flash arrays, the all 3DXP arrays custom built for the best latency curve at the right price will start out in the Tier 0 space waiting for the cost and density improvements that bring it to the big time.  This time around, that transition could take much longer than it did with flash based arrays.  Flash arrays benefited massively from the density and cost reductions needed in the consumer space.  3DXP does not appear to have the same tailwinds yet.


Our Growing FlashSystem family

April 27, 2016

IBM storage is proud to introduce our new twins, FlashSystem A9000 and FlashSystem A9000R, affectionately known as pod and rack respectively. The twins come from the loving family created by the marriage of FlashSystem (hometown Houston, Texas, USA) and XIV (hometown Tel Aviv, Israel). The twins share the same DNA but have taken on completely different appearances and capabilities.

new family announcement

I have to say, as a member of one of the proud parent teams, the last two years have been a real eye opening experience. Any major system release is an exercise in coordination and collaboration, and this one crossed many time zones and cultures, to say nothing of merging technologies. The marriage of these two groups involved integration of offering management, product marketing, marketing, technical sales, sales, support, development, testing, and sales enablement.

As a quick refresher, IBM acquired XIV in 2008 and Texas Memory Systems (now referred to as FlashSystem) in 2012. XIV’s claim to fame was taking the world’s least reliable disk technology (SATA HDDs) and packaging them into a highly reliable, scalable, and high performance enterprise storage solution. The Texas Memory Systems and FlashSystem claim to fame involved extracting the lowest possible latency from solid state storage media in a shared storage solution. It is obvious why these two solutions would be merged together, isn’t it?

OK, so maybe it isn’t that obvious, so I will explain. Over two years ago, we were looking to the future and envisioning a world where solutions for the cloud service provider market took on increasing importance. The vision really crystallized in 2013 when IBM acquired SoftLayer. As with any initiative of this sort, IBM went through a buy-vs-build analysis. With the acquisition of Texas Memory Systems having only recently closed, buying another all-flash array provider was unlikely. A quick look across the storage stacks available from within IBM revealed some great options: the software behind IBM SAN Volume Controller, the software behind XIV, and what we now refer to as Spectrum Scale. We were looking for some key features: scalability, because we knew cloud service providers need to be able to grow with their customer base; quality-of-service so that our customers can prevent noisy neighbor problems in multi-tenant environments; multi-tenant management so that those tenants could manage their own logical component of the system; and critically, a team (the people) with the experience and resources to implement full-time data reduction so that we could help cloud service providers lower the cost of their all-flash deployments. When you put it all together, it was obvious that our best match was with the XIV software (and team). XIV, for years, had led IBM’s focus on cloud integration points, including many of the key features mentioned above plus strong links to cloud orchestration solutions from Microsoft, VMware, and OpenStack.

Leaving out the details, suffice it to say there have been many IBMers crossing the Atlantic and Mediterranean in order to bring these new members of our product family to market.

But the strength of the family is not just the individuals…it’s in the family itself and here is where our marriage makes even more sense. As much as we can see the future through our all-flash lenses, it is abundantly clear that customers will take a variety of paths and differing amounts of time to get there. Our combined family includes a true software defined storage capability in Spectrum Accelerate, a capacity-optimized solution with XIV, and a performance solution with the FlashSystem A9000 twins. In addition to sharing a software lineage, these products actually can share licensing. A customer testing the waters with this family could start with a trial deployment of Spectrum Accelerate, then actually buy software licenses on a per capacity basis for Spectrum Accelerate. Those software licenses are then transferable to XIV for low cost capacity and to FlashSystem A9000 for dynamic performance with full time data reduction. In the near future, customers will be able to asynchronously replicate from a FlashSystem A9000 to an XIV, enabling additional cost cutting for disaster recovery deployments.

It’s been quite a journey, literally, getting these two new products to market. But now that they’ve arrived, please join us in welcoming the twins to our growing FlashSystem family!


Waves of Opportunity

May 28, 2011

by Woody Hutsell at www.appicu.com

The next big opportunity/threat for SSD manufacturers is playing itself out right now. SSD vendors are scrambling to be a part of this next big wave. The winners are your next acquisition targets or companies poised to go public. The losers will hope that this new wave expands the overall market just like the first wave.

The first big wave in the enterprise SSD market was the rapid adoption of hard disk form factor SSDs for use in enterprise storage arrays. The SSD companies most seriously contending to ride this wave were BitMicro and STEC. STEC, by virtue of their GnuTek acquisition, had the right product at the right time and were able to win early business with EMC. Suddenly, venture money was pouring into the market and any company that had ever put a Flash chip on a board was selling Flash disk drives. The clear winners in this category have been STEC, who continues to have great revenue growth, and Pliant’s investors who have successfully sold their company to SanDisk after getting some traction with the OEM community. The story in this market is not finished as companies like Western Digital, Seagate, LSI and Intel look to chip away at this part of the business. At the same time though, a few companies were swept out to sea and others saw their golden opportunity for enterprise riches turn into dreams of big volumes (but low margins) in consumer markets. As I have argued before, the use of Flash hard drives in enterprise arrays is really about accelerating infrastructures more than about accelerating a specific application. This first big wave actually increased opportunities for all SSD companies by increasing the market size and validating the technology for mainstream use.

The newest wave to entice and yet concern SSD manufacturers is hitting closer to home for those manufacturers focused on the application acceleration market. For many years, the data warehousing sector has led to some great success stories for companies like Netezza who tightly bundled database functionality with hardware. Netezza’s success led Oracle and HP to try Exadata, which was anything but a rousing success in the market. But somewhere along the way, Oracle was watching what Sun was doing with solid state storage and noticed a way to take the relatively less exciting Exadata and turn it into something much more captivating and yet similarly named Exadata 2. Someday we will learn whether the prospects of Exadata 2 were a big motivator for the Sun acquisition or just a quick way to demonstrate that Oracle was serious about the hardware market. Either way, Oracle’s claims of big margins and big potential revenue streams for Exadata 2 have ignited a flurry of activity in the market. Already vendors are clamoring to get into this space and there is a series of speed dating exercises going on as database vendors, server vendors and SSD vendors start trying to find some magical combination which helps them beat Oracle in this new market. Will the rich SSD vendors get richer still in this category or will the remaining SSD manufacturers find new partners, buyers and OEMs? Can any combination beat Oracle?

Whoever the winners, this second wave will show more clearly the ability of a tightly integrated solid state storage solution to increase application performance.


What an Interface Says About an SSD

February 1, 2011

When an SSD manufacturer brings a product to market you don’t need to look any further than the interface between the SSD and the server to understand its target market. Solid state storage systems are available with a wide array of sizes, shapes, densities, media, performance, cost and interfaces. The interface used gives the best hints as to how the manufacturer predicted the product would be used and more specifically which market they are targeting.

Fibre Channel SSDs are aimed at the enterprise data center. For most of the last decade, Fibre Channel has been the interface of choice for Tier 1 disk drives and the main interface for attaching external storage arrays in most data centers. Interestingly, the Tier 1 disk drives are now migrating to SAS, but the predominant interface from the enterprise storage array to the server is still Fibre Channel. Companies developing Fibre Channel SSDs want to appeal to enterprise data centers that have made major investments in Fibre Channel based storage area networks. There are plenty of predictions about the demise of Fibre Channel in the data center, but if you were making a choice about an interface for the enterprise today, you would offer Fibre Channel first. If I were deciding the next interface for an SSD or a storage array, I might go with FCoE, but I would probably wait to see that market develop further first. The rapid introduction of converged network adapters (CNAs) could translate into changes at the storage controller, but I would also wait to see what happens in that arena.

InfiniBand SSDs are aimed at the high performance computing (HPC) market. InfiniBand is touted for its high bandwidth per link and its low latency. For SSDs with large backplanes, an InfiniBand (IB) controller is a good way to tout your bandwidth capability. Yes, I know there are other companies using IB outside of the HPC market, but the bulk of big opportunities for IB SSD are in that space today. I broadly define HPC to also include oil & gas and entertainment industries. I do believe that IB attached SSDs are an interesting option for data warehousing applications where bandwidth is more important than IOPS.

NAS SSDs are aimed at the middle of the enterprise. This segment is one of the more intriguing to watch. A couple of companies have made credible attempts to develop NAS caching solutions which sit in front of existing NAS and provide a read or read/write caching layer. In a future blog, I might examine the challenges these companies face. As with mainstream Fibre Channel attached storage, the NAS vendors have incorporated SSD as a storage tier. Only one vendor comes to mind that is doing a pure SSD NAS solution, but others are likely to follow. NAS solutions are so much about software that it is harder for a new company to enter this space and compete with the incumbent suppliers.

iSCSI SSDs are aimed at the low to middle of the enterprise. This has not been a terribly active segment for pure SSD solutions but interesting options are on the horizon. Clearly, existing iSCSI storage arrays have options for including hard disk based SSDs. My automatic expectation when I look at an iSCSI solution is that it will be less expensive than a Fibre Channel SSD. The main reason I would offer iSCSI is to target the cost-sensitive part of the market. Given the increased availability of 10Gbit Ethernet and advanced TCP off-load engines, it is quite reasonable for an iSCSI SSD to offer good performance.

Internal PCI SSD. There has to be an exception to every rule and PCI SSD may be the exception to my rule about an interface telling you about the application for an SSD. PCI SSDs cover a wide variety of price ranges, capacities, media, performance and reliability. On the high end, there are a bunch of applications, particularly scale-out applications, which are server-centric and not storage network centric. PCI SSDs have had tremendous success in this category. Similarly, for companies with smaller data sets and budgets, PCI SSDs can be alluring. It is not a stretch to pitch PCI SSDs for prosumer or high end gaming customers.

External SAS SSD. There are very few externally attached SAS SSDs on the market today. I think the people who offer them were probably temporarily delusional about the future role of SAS in the market and its ability to get rid of Fibre Channel for storage networking. This is not to say that SAS is a bad interconnect, in fact it is being effectively used to replace Fibre Channel as the backplane for many modern storage arrays (i.e. the connections between a disk controller and its enclosures are increasingly SAS).

Hard Disk Drive (HDD) Form Factor SAS SSD.  With the help of solid state storage, SAS HDDs have killed the Fibre Channel disk drive. Hard drive form factor SSDs with SAS interfaces are more likely than not intended to be sold to storage or server OEMs. For the storage OEMs, they replace their Fibre Channel SSDs (if they were ever offered). For the server OEMs, a SAS SSD may be used as a boot drive.

SATA SSDs are aimed at the consumer, prosumer, gaming and small business markets. I cannot currently see an enterprise market for SATA SSD. In enterprise storage arrays, SATA HDDs are only used to offer the 3.5” high density (slower) drives.

External PCI. There are a few varieties of external PCI offerings, including devices designed from the ground up as external PCI SSDs and others that are I/O expansion chassis that can be loaded up with PCI SSDs. My personal opinion is that the genesis of the external PCI SSD was to serve as extended memory for servers at a time when server memory capacities were limited and, at high densities, extremely expensive. In my experience, the only way to make one of these devices useful for traditional data centers is to put the external PCI chassis behind some other storage gateway. The storage gateway attaches to the storage network with Fibre Channel. This is all good, but the gateway is now the main dictator of your performance characteristics.

The story on SSD interfaces is certainly not complete. Innovative companies will capitalize on new markets and new interfaces in ways that we cannot yet predict. For the innovators in these segments lie new markets and new opportunities.


START for SSD Marketing

December 20, 2010

As the United States looks to approve the START treaty, I thought it was time to propose that SSD manufacturers enter their own strategic arms reduction treaty to control the rampant and destructive proliferation of million IOPS marketing.

The road to an IOPS arms race began innocently enough, SSD manufacturers had a novel story to tell.  The IOPS from hard disk drives have been atrocious since the dawn of computer time.  Even today, the lowly hard drive can only squeak out 300 random IOPS.  From the earliest days of the SSD, IOPS marketing was a big part of the story.  The beauty of a solid state storage device was that it could move more data (IOPS) to a processor faster (less latency) than traditional disk.  This simple story has been at the core of the SSD value proposition for 30 years.
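That low-hundreds figure falls straight out of disk mechanics.  Here is a back-of-the-envelope sketch (the seek and rotational numbers are illustrative assumptions for a 15,000 RPM enterprise drive, not measurements):

```python
# Back-of-the-envelope ceiling on random IOPS for a spinning disk.
# Illustrative numbers for a 15,000 RPM enterprise drive.
avg_seek_ms = 3.5                      # average seek time
rotational_ms = 0.5 * 60_000 / 15_000  # half a rotation on average = 2.0 ms
transfer_ms = 0.1                      # moving a ~4 KB block is nearly free

service_time_ms = avg_seek_ms + rotational_ms + transfer_ms
iops = 1000 / service_time_ms
print(f"~{iops:.0f} random IOPS per drive")  # ~179 random IOPS per drive
```

Command queueing and short-stroking can push a drive toward the 300-IOPS mark, but that mechanical ceiling is why disks never escaped the low hundreds while even early SSDs delivered tens of thousands.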

Admittedly, I fired the first shots (and probably the second, third, fourth….) in the escalating IOPS arms race in 2001 when Texas Memory Systems announced the RamSan-520, a 5U monster of a system with all of 128GB of RAM capacity.  This system with fifteen 1Gbit Fibre Channel ports was said to deliver 750,000 random IOPS.  Do you have any idea what kind of reaction that generated at storage conferences in 2001?  Wow!  Impossible, most would say.  This process led to TMS proudly declaring itself the “World’s Fastest Storage®”.  After firing this weapon for nearly ten years, I have to admit the time has come to stop the IOPS marketing arms race.  The challenge in 2001, as it is today, is finding the customer that can drive 1,000,000 IOPS.  Actually, in 2001 finding a server to drive that many IOPS was impossible.  The processors, operating systems, host bus adapters, etc. were all too slow.  Fortunately, the imposition of Moore’s law on electronics led to breakthrough after breakthrough enabling SSD manufacturers to demonstrate high IOPS with single server configurations.
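For what it’s worth, the conference-floor math was at least internally consistent.  A quick sanity check, assuming the small 512-byte transfers typical of IOPS benchmarking (the block size and usable link bandwidth are my assumptions, not figures from the original spec sheet):

```python
# Sanity-checking a 750,000-IOPS claim over fifteen 1 Gbit Fibre Channel ports,
# assuming 512-byte transfers (typical for IOPS benchmarking).
total_iops = 750_000
ports = 15
block_bytes = 512
port_MBps = 100  # rough usable bandwidth of a 1 Gbit FC link

iops_per_port = total_iops / ports                       # 50,000 IOPS/port
MBps_per_port = iops_per_port * block_bytes / 1_000_000  # 25.6 MB/s/port
print(f"{iops_per_port:,.0f} IOPS/port -> {MBps_per_port:.1f} MB/s "
      f"of ~{port_MBps} MB/s available per port")
```

Spread across fifteen ports, the claim needed only about a quarter of each link’s bandwidth, which is why the bottleneck in 2001 was the servers driving the I/O, not the array.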

I like to think, but hesitate to admit, that in addition to pioneering million IOPS marketing, TMS also drove the widespread use of IOMeter (a tool used to test storage devices by generating IO) at trade shows.  As a storage marketer, my grandest dream was to find a customer that needed to run IOMeter as their business application.  What a perfect customer this would be.  I searched the world over.  Strangely, the financial exchanges didn’t need IOMeter to complete trades.  Telecom companies didn’t need it to bill cellular customers.  Who could possibly need IOMeter for their business?  Imagine my glee when host bus adapter manufacturers and switch manufacturers started caring about IOPS as a marketing tool.  Finally, my dream customer had arrived.  My apologies to storage industry exhibit hall wanderers; I too have tired of seeing IOMeter.

This brings us back, admittedly after a brief tangent, to million IOPS claims.  I hesitate to examine the number of vendors that are persistently and proudly proclaiming profound performance.  One million IOPS.  Yawn!  Is that all you’ve got!  In fact, I would argue if the extent of your marketing message is your IOPS you don’t have enough… marketing talent.

Using a solid state storage device is about a lot of things: application acceleration, lower power consumption, enabling business growth and solving mission critical problems.  Tell us the customer stories.  How many 1,000,000 IOPS customer stories have you read?  Hmmm.  Fair enough, 1 million IOPS sounds like so many that people will stop worrying about whether the storage device can meet their production performance requirements.  But can we stop at 1 million? Perhaps we should shoot for 2 million.  No.  It is time for the SSD IOPS marketing proliferation to be stopped while the customers still care.   Start designing systems that satisfy the range of customer buying requirements:  low latency (the number one reason most customers benefit from SSD), good-enough IOPS, bandwidth suitable to the application’s goal, five 9’s reliability, low mean time to repair, low power consumption, interoperability and low total cost of ownership.