What a long strange year it’s been

December 18, 2013

Woody Hutsell, AppICU

Flash back one year ago. I was working at Fusion-io on a software-defined storage solution with some of the brightest minds in the industry. Fusion-io was flying high, reaching over $100 million in quarterly revenue. David Flynn, Rick White, and Jim Dawson were leading one of the most talented teams I have been around. There are still really talented people at Fusion-io, but take away Rick White (heart), David Flynn (mind), and Jim Dawson (soul) and you have just another company. A company still bringing in some real revenue, by the way, and continuing to dominate in PCI Flash. Their relationship with my employer, IBM, is still strong. If I were buying PCI Flash for Intel servers, they would still be my first choice.

I left Fusion-io at the end of March to go back home, literally and figuratively. I loved working at Fusion-io, but traveling from my home in Houston to Salt Lake City/San Jose twice per month was not great fun. More importantly, IBM had closed its acquisition of Texas Memory Systems, and my friends, co-workers, and family were encouraging me to come back. The idea of being with a company of IBM's capability, picking up where I left off with my first solid state storage baby (the RamSan), and working with friends and family less than two miles from home was too much to pass up. I could feel the excitement from the TMSers who were now IBMers and saw that IBM was out to win in the all flash array category. Did someone say a billion-dollar investment in flash? It makes the $150 million for Pure Storage look like pocket change.

My initial conversations with the IBM team, before joining, validated this feeling. IBM had brought in the best and was basing many of them in Houston. Just as important to me was seeing that many of the other talented people who had left TMS in the years prior to the acquisition were returning, including friends who had held great roles at Oracle and HP.

If history has taught us anything about the solid state storage industry, it is that the fate of companies rises and falls on the strength of their relationships with the big companies in the industry. STEC made the first big splash, locking up OEM deals for Zeus-IOPS. Fusion-io made the next big splash in the PCI Flash space, locking up OEM deals for ioDrives and ioScale. Violin had its first big peak on the back of a short-lived relationship with HP. All of these companies' fortunes have surged, and at times collapsed, on these relationships. It only made sense to me, then, that the one thing better than being OEM'd by the big company was being the big company; and so far I am right.

So here we are at the end of 2013. I think 2013 will be seen as the year that the all flash array market finally took off, generating the real revenues that have been anticipated for years.

2014 will witness the bifurcation of the all flash array market that Jeff Janukowicz at IDC first called out in a research report a couple of years ago, creating real separation between products focused on "absolute performance" and those focused on the "enterprise." In some ways this is a bit like talking about the market that is and the market that could be. Today, the majority of all flash array purchases in the enterprise are used for database acceleration (bare-metal or virtual). These workloads, more so than many others, especially benefit from absolute-performance systems and notably do not benefit from inline data deduplication. Curiously, the venture-backed companies in the market are almost exclusively focused on the enterprise feature-rich category. Even Violin, which once had a credible offering in this category, has chosen an architectural path that moves it away from the absolute-performance segment of the market. The company with the most compelling solution in this category (in my clearly biased opinion) is IBM with its FlashSystem product. For at least a decade I have heard the industry characterize the RamSan, and now the FlashSystem, as the Ferrari of flash arrays. What our competitors have discovered along the way is that performance is the first cut at most customer sites, and beyond that, FlashSystem brings a much better economic solution because of its low latency, high density, and low power consumption.

Does this mean IBM doesn't have a play in the all-flash enterprise category? Stay tuned. It's not 2014 yet. In fact, mark your calendars for the year's first big announcement webcast: bit.ly/SCJanWebcast

And really, did you even think that thought? IBM has the broadest flash portfolio in the industry. IBM has clearly said that the market is approaching a tipping point, a point where the economic benefits of flash outweigh its higher cost. This tipping point will lead to the all-flash data center. And nobody understands the data center better than IBM.

I am looking forward to an eventful 2014.  Happy Holidays and Happy New Year.

Woody


Power and PCI flash, the performance balancing act!

October 9, 2013

Woody Hutsell, http://www.appICU.com

Amidst a slew of other IBM announcements yesterday was IBM's launch of the Flash Adapter 90 for AIX and Linux based Power Systems environments. Flash Adapter 90 is a PCIe flash solution that accelerates performance and eliminates the bottleneck for latency-sensitive, IO-intensive applications. Enterprise application environments such as transaction processing (OLTP) and analytics (OLAP) benefit from having high-performance Power processors balanced against equally high-performance PCIe flash. This balance increases server productivity, application productivity, and user productivity, driving a more efficient business.

The Flash Adapter 90 is a full-height, half-length PCIe Gen 2 adapter providing 900 GB of usable eMLC flash capacity. Flash Adapter 90 is a native flash solution without the bottlenecks common to other PCI flash solutions; it uses on-adapter processing and metadata to lessen the impact on server RAM and processors. Up to four of the adapters can be used inside supported Power servers.

In a recent article, "Flash fettlers Fusion-io scoop IBM as reseller partner", Chris Mellor observed that IBM's recent decision to launch the Fusion ioScale-based IBM Flash Adapters Enterprise Value for System x® solutions was evidence that IBM had abandoned the PCI flash technology it received when it acquired Texas Memory Systems. The Flash Adapter 90 launch demonstrates that IBM has not discarded this technology; it merely waited for the perfect time and perfect platform to bring it to market. IBM has consistently demonstrated a desire to meet client needs, whether that involves engaging IBM R&D to develop solutions, such as the Flash Adapter 90, or bringing in industry-standard components.

Flash Adapter 90 brings IBM's patented Variable Stripe RAID technology and enterprise performance to the Power Systems client base, which has anxiously awaited a solution with a driver tuned to take advantage of the AIX and Linux operating systems. Power Systems are acknowledged as the world's fastest servers and now have a bit of the world's fastest storage, an unbeatable combination of processor and storage for accelerating business-critical applications. Along the way, IBM tested the combined solution with IBM's Identity Insight for DB2, demonstrating IBM's ability to combine multiple products, from application to server to storage, for a consistent, predictable client experience. This combination of products showed performance superior to other tested configurations at a much lower cost per solution.

With this announcement, IBM offers its Power Systems clients more choice in deciding what flash storage they will use to accelerate their applications. Power Systems clients can consume flash from IBM in whatever manner best suits their data center or application architecture. Clients may choose from IBM FlashSystem, IBM Flash Adapter 90, and EXP 30 Ultra SSD Drawers (a direct-attach storage solution), in addition to a host of other IBM System Storage products. For applications or client architectures that are server-centric, i.e., those that use server scale-out/clustering for reliability, the Flash Adapter 90 is a low-cost way to deliver outstanding application performance. Applications based on DB2 and Oracle databases are excellent candidates for acceleration.

Long live the Flash Adapter 90.

More information on IBM Power Systems flash options can be found at:  http://www-03.ibm.com/systems/power/hardware/peripherals/ssd/index.html


Video Entry: The Solid State Disk Market

August 18, 2011

Woody Hutsell, AppICU

I had a chance to sit down with my friends from MarketingSage (www.MarketingSage.com) at the Flash Memory Summit last week. They asked me some tough questions about the solid state disk market; here are the videos they prepared from our discussion:

Where do solid state disk cache solutions fit in the market?

How is virtualization changing demand for SSD and Flash?

What’s driving SSD sales success with end-users?

How would you describe the vendor landscape for solid state disks?

I hope you enjoy these brief videos and thanks again to MarketingSage for making it happen.

Woody


Long Live RAM SSD

April 22, 2011

by Woody Hutsell at www.appICU.com

In late 2006, Robin Harris at www.StorageMojo.com wrote "RAM-based SSDs are Toast – Yippie ki-yay". As the leader of the largest RAM-based solid state storage vendor at the time, I can assure you that his message was not lost on me. In fact, we posted a response to Robin in "A Big SSD Vendor Begs to Differ", to which Robin famously responded, "If I were TMS, I'd ask a couple of my better engineers to work part time on creative flash-based SSD architectures." I cannot honestly remember the timing, but it is fair to say that the comment at the very least reinforced our internal project to develop a system that relied heavily on SLC NAND Flash for most of its storage capacity. Within a few years, TMS had transitioned from a RAM-based SSD company to a company whose growth was driven primarily by Flash-based SSDs. Nearly five years after the predicted death of the RAM-based SSD, I thought it would be interesting to evaluate the role of RAM SSDs in the application acceleration market.

First off, it is important to note that RAM-based SSDs are not toast. In fact, a number of companies continue to promote RAM-based SSDs, including my employer, ViON, which is still marketing, selling, and supporting them. What may be more surprising is that the intervening years have actually seen a few new companies join the RAM-based SSD market. What all of these companies have identified is that there are still use cases for the performance density available with RAM-based SSDs. In particular, RAM-based SSDs continue to be ideal for database transaction logs, temporary segments, and small to medium databases where the ability to scale transactions without sacrificing latency is critical. Customers in the e-commerce, financial, and telecom markets will still use RAM SSDs. When a customer tells me they need to be able to say they have done "everything possible" to make a database fast, I still point them to RAM SSDs if the economics are reasonable. I think the RAM SSD business has promise for these specific use cases, and I will watch with curiosity the companies that try to expand the use cases to much higher capacities.

The second thing to note is that without RAM, Flash SSDs would not be all that appealing. You will probably all recall the reaction to the initial Flash SSDs, which had write performance slower than hard disk drives. How did the vendors solve this problem? For one thing, they over-provisioned the Flash so that writes spend less time waiting on erases. In enterprise solutions, however, the real solution is RAM. Because the NAND Flash media just needs a little bit of help, a small amount of RAM caching goes a long way toward decreasing write latencies and dramatically improving peak and sustainable write IOPS. This increases the cost and complexity of the Flash SSD but makes it infinitely more attractive to the application acceleration market.
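To make that concrete, here is a minimal sketch, in Python, of a write-back RAM buffer sitting in front of slow NAND page programs. This is my illustration, not any vendor's design, and the latency numbers are assumptions chosen only to show the shape of the effect: the host sees RAM latency until the buffer fills, and only then waits on Flash.

# Minimal sketch of why a small RAM write buffer helps a Flash SSD.
# All latency numbers are illustrative assumptions, not measurements.

RAM_WRITE_US = 1        # acknowledge a write once it lands in RAM
FLASH_PROGRAM_US = 200  # program a NAND page (erase pressure included)

class WriteBackBuffer:
    """Acknowledge writes at RAM speed; drain to flash later."""
    def __init__(self, capacity_pages):
        self.capacity = capacity_pages
        self.dirty = []  # pages waiting to be programmed to flash

    def flush_one(self):
        # Stand-in for a real NAND program cycle draining the oldest page.
        self.dirty.pop(0)

    def write(self, page):
        """Return the latency (in microseconds) the host observes."""
        if len(self.dirty) < self.capacity:
            self.dirty.append(page)
            return RAM_WRITE_US  # host sees only the RAM latency
        # Buffer full: the host now waits on a real flash program.
        self.flush_one()
        self.dirty.append(page)
        return FLASH_PROGRAM_US + RAM_WRITE_US

buf = WriteBackBuffer(capacity_pages=1024)
latencies = [buf.write(b"x" * 4096) for _ in range(2048)]
print("writes absorbed at RAM speed:", latencies.count(RAM_WRITE_US))
print("writes that waited on flash:",
      latencies.count(FLASH_PROGRAM_US + RAM_WRITE_US))

A real controller drains the buffer in the background between host writes, so a bursty workload may never see the Flash latency at all; that is the peak-versus-sustained write IOPS distinction mentioned above.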

Third, the companies with the most compelling Flash SSD performance characteristics have come out of the RAM SSD market.  These companies had developed low latency, high bandwidth controllers and backplanes that were tuned for RAM.  Contrast this with the difficulties the integrated storage manufacturers have had since their controllers and backplanes were tuned for hard disk drives.

Casual industry observers might ask a couple of other questions about this market:

  • With the rapid decrease in RAM prices, is RAM likely to replace Flash as the storage media of choice for enterprise SSD?  No.
  • Are the large integrated storage companies likely to add a non-volatile RAM SSD tier in front of their new Flash SSD tier?  I tend to doubt it, but would not rule it out completely.
  • Aren’t customers that start with Flash going to look to RAM SSD to go even faster?  I think some of these customers will want more speed, but for most users Flash will be “good enough”.
  • Aren’t customers that start with RAM likely to move to Flash SSD on technology refreshes?  Probably not.  RAM SSD is addictive.  Once you start with RAM SSD, it is hard to contemplate going slower.

To put this all in perspective, Flash SSDs did not kill the RAM SSD market.  In some ways, Flash SSD and the big companies who have embraced it have added legitimacy to the RAM SSD market that it lacked for decades.  I think RAM SSDs will continue to be an important niche in the overall application acceleration market and anticipate innovative companies introducing new use cases and products over the next five years.

To give credit where credit is due: while Flash SSDs did not kill the RAM SSD market, Flash has come to dominate the enterprise storage landscape like no other technology since the advent of disk storage. Robin Harris may not have accurately predicted the end of the RAM SSD, but he was at the forefront of the analysts and bloggers, including Zsolt at www.StorageSearch.com, who predicted Flash SSD's widespread success.


Escape Velocity

March 25, 2011

What does it take to build an SSD company into a sustainable enterprise? What does it take to make it profitable? What does it take to go public? Given that many of the SSD companies focused on the enterprise are private, it is awfully hard to get good data on the costs of entry.

As a participant in the SSD industry for the last decade, I have had the benefit of watching the ascent of Fusion-IO from a small start-up company to a marketing machine, and I can now observe, along with the rest of you, their attempt to go public.

Say what you will about Fusion-IO and their products, and believe me, at various times I have said all of those things, good and bad, they are a marketing machine. But to leave it at that would be a terrible injustice. David Flynn and his team launched a product that the rest of us did not see the need for. Don't get me wrong, Cenatek and MicroMemory both had server-based PCI SSDs well before Fusion-IO, but Fusion-IO did two things that really set their product apart: 1) they were the first PCI SSD company to really take advantage of the first generation of Flash suitable for enterprise customers; 2) they were unashamed about beating their way into the enterprise market, even if they took what I consider a fire:ready:aim approach to marketing. I remember the early days of Fusion-IO marketing, which was clearly aimed at companies using storage area networks. The ad went something like "the power of a SAN in your server". Interesting concept, but the people who used storage area networks were pretty sure that a PCI card was not about to replace their SAN. I know, some people have done this, but by and large I would suggest that what Fusion discovered, and now markets directly to, is the large set of customers that need server-based application acceleration. These scale-out applications include those run at customers like Facebook, who improve application performance by adding servers. Historically, those servers would be laden with expensive high-density RAM. Fusion-IO brought them a product that was not cheap, but was less expensive than RAM. The other sweet spot Fusion discovered was that this market was very read-intensive and therefore well suited to MLC Flash, enabling customers to get better density and better pricing.

I remember the first time I met David Flynn. I was participating in a SNIA Summer Symposium in early 2008. Until this point, my only real exposure to Fusion-IO had been from their marketing. When the topic in the room turned to Flash technology, David was quick to join the discussion and showed a grasp of Flash that clearly exceeded that of most of the people in the room. From that point, I knew that Fusion was much more than marketing.

At one point during my time at TMS I tried to assess what it was that made Fusion-IO go from yet another random company involved in SSD to a company that was always in the limelight (disproportionately so, relative to their revenues – another hallmark of good marketing). I used Google trends data to see if there was an inflection point for Fusion-IO, and I found it. The inflection point was their hiring of Steve Wozniak. What a brilliant publicity move, and one that I think continues to pay off for Fusion-IO. From the day his involvement with Fusion-IO was announced, the company took off in terms of web hits. I can't tell you how much time I spent trying to figure out how to create a similar event at TMS, because it would be embarrassing, but it was a lot. I thought if we could hire "Elvis" we would have a chance.

The next brilliant tactical move by Fusion-IO was tying up the server OEMs. You see, one of the biggest challenges with selling products that go into another manufacturer's servers is getting that server vendor to bless and support the solution. Fusion realized this problem early on and announced relationships with HP, IBM, and Dell. Not to mention that Michael Dell was an investor. With those announcements, the big problem was solved: the big server vendors had blessed Fusion-IO with credibility typically reserved for companies like Seagate and Intel. It is worth noting that these server vendors hate single-sourced products, leaving plenty of room for competitors to get the same blessings.

Plenty of things, good and bad, have happened along the way for Fusion-IO. They encountered many of the problems that fast-growing venture-backed companies have. There were delayed product releases, there were quality problems, Fusion's approach to business created a lot of enemies, they went through an impressive number of CEOs (though David Flynn remained the key guy all along) and sales teams, and there were missteps in channel marketing strategy, but through it all they have shown impressive perseverance.

As Fusion's revenues have grown in the last two years to match their marketing, they have added some really impressive depth to their team. Marius Tudor, who largely led the sales and marketing efforts for industry pioneer BitMicro, is involved. Another marketing genius, Gary Orenstein, who led marketing at Gear 6 among other places, has joined the team. This is not to imply that I don't have deep respect for their Chief Marketing Officer Rick White, another gaming enthusiast like myself, but really, does he need this much help? Leaving behind some marketing talent for the rest of the industry would have been gracious.

For the SSD vendor community, whatever you think of Fusion-IO, their effort to go public is a major milestone for the industry. Have you ever tried to get private company valuations without comparables? Valuations become guesswork. Do you have a great SSD idea and need VCs to get excited about it? Fusion-IO successfully going public would help the rest of the private companies eyeing their own exit strategies (going public, staying private, being acquired). What does it take for an SSD company to go public? What revenue, what profitability, what gross margins? We may soon find out. In this pursuit, I for one am rooting for Fusion-IO and, in turn, the industry.


Consistency Groups: The Trouble with Stand-alone SSDs

February 28, 2011

SSDs (Solid State Disks) are fast; everyone knows this. So, if they are all so very fast, why are we still using spinning disks at all? The thing about SSDs (OK, well, one of the things) is that while they are unarguably fast, they need to be implemented with reliability and availability in mind, just like any other storage media. Deploying them in an enterprise environment can be sort of like putting all of your eggs in one basket. For them to meet the RAS needs of enterprise customers, they must be "backed up" in some meaningful way. It is not good enough to make backup copies occasionally; we must protect their data in real time, all of the time. Enterprise storage systems do this in many different ways, and over time we will touch upon all of them. Today, we want to talk about one of those ways: replication.

One of the key concepts in data center replication is the consistency group. A consistency group is the set of volumes or files that must be backed up, replicated, and restored together in order for the application to be properly restored. Consistency groups are the cause of the most difficult discussions between end users and SSD manufacturers. At the end of this article, I will suggest some solutions to this problem.
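To see why the members of a consistency group must be captured at the same instant, here is a toy example in Python. The write ordering (log record first, then data page) is the standard database convention; everything else is my own illustration, not any vendor's replication tool.

# A database writes its log record before the matching data page. A
# snapshot that catches the two volumes at different moments describes
# a state the application never actually passed through.

log_volume, data_volume = [], []

def db_write(txn_id):
    log_volume.append(f"commit {txn_id}")      # step 1: log the commit
    data_volume.append(f"page for {txn_id}")   # step 2: write the data page

db_write(1)

# Consistent: both volumes captured at the same instant, as a group.
group_snapshot = (list(log_volume), list(data_volume))

# Inconsistent: volumes captured independently while the app keeps running.
snap_log = list(log_volume)    # taken now, on one array...
db_write(2)
snap_data = list(data_volume)  # ...taken later, on a different device

print("group snapshot consistent:", len(group_snapshot[0]) == len(group_snapshot[1]))
print("independent snapshots consistent:", len(snap_log) == len(snap_data))

Restoring from the second pair would hand the database a data page for a transaction its log has never heard of; this is the failure mode that makes the discussions with SSD vendors so difficult.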

The largest storage manufacturers have a corner on the enterprise data center marketplace because they have array-based replication tools that have been proven in many locations over many years. For replicated data to be restorable, the entire consistency group must be replicated with the same tool set. This is where external SSDs encounter a problem. External SSDs are not typically (though this is changing) used to store all application data; furthermore, they do not usually offer replication. In a typical environment, the most frequently accessed components of an application are stored on the SSD and the remaining, less frequently accessed data is stored on slower, less expensive disk. If a site has array-based replication, that array no longer holds the entire consistency group to replicate.

External SSD write-caching solutions encounter a more significant version of this same problem. Instead of storing specific files that are accessible to the array-based replication tool, the cache holds writes that may or may not have been flushed through to the replicating array. The replicating array has no way of knowing this, so it will snapshot or replicate without a full set of consistent data, because some of that data is still sitting in the external cache. I am aware that some of these third-party write-caching solutions do have a mechanism to flush the cache and allow the external array to snapshot or replicate, but generally speaking, these caching SSDs have historically been used to cache only reads, since write caching creates too many headaches. Unless the external caching solution is explicitly certified and blessed by the manufacturer of the storage being cached, using these products for anything more than read caching can be a pretty risky decision.
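For illustration, here is a minimal sketch of the flush-then-snapshot handshake just described. The classes are hypothetical stand-ins of my own, not the API of any shipping cache or array; the point is simply that a snapshot taken before the cache drains is missing data, and one taken after is not.

class Array:
    """The replicating SAN array behind the external write cache."""
    def __init__(self):
        self.blocks = {}
    def write(self, block, data):
        self.blocks[block] = data
    def snapshot(self):
        return dict(self.blocks)  # point-in-time copy used for replication

class WriteCache:
    """External write-caching SSD sitting in front of the array."""
    def __init__(self, backing):
        self.backing = backing
        self.dirty = {}
    def write(self, block, data):
        self.dirty[block] = data  # absorbed here, not yet on the array
    def flush(self):
        for block, data in self.dirty.items():
            self.backing.write(block, data)
        self.dirty.clear()

array = Array()
cache = WriteCache(array)
cache.write(7, b"latest committed data")

bad = array.snapshot()   # no flush first: block 7 is missing
cache.flush()            # handshake: quiesce and drain the cache
good = array.snapshot()  # now the replica is consistent

print("block 7 in unflushed snapshot:", 7 in bad)   # False
print("block 7 in flushed snapshot:", 7 in good)    # True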

Automatic integration with array-based replication tools is a main reason some customers select disk-form-factor SSDs rather than third-party SSDs, in spite of the huge performance benefits of the third-party SSD. If you are committed to attaining the absolute highest performance, and are willing to invest just a little bit of effort to maximize it, the following discussion details some options for getting around this problem.

Solution 1: Implement a preferred-read mirror. For sites committed to array-based replication, a preferred-read mirror is often the best way to get benefit from an external SSD and yet keep using array-based replication. A preferred-read mirror writes to both the external SSD and the replicating SAN array. In this way, the replicating array has all of the data needed to maintain the consistency group, and yet all reads come from the faster external SSD. One side benefit of this model is that it allows a site to avoid mirroring two expensive external SSDs for reliability, saving money, because the existing array provides this role. If your host operating system or application does not offer preferred-read mirroring, a common solution is to use a third-party storage application such as Symantec's Veritas Storage Foundation to provide this feature. Bear in mind that a preferred-read mirror does not accelerate writes; a rough sketch of the read/write split follows.
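This is a minimal Python sketch of my own, under stated assumptions (two dict-backed mirror legs), not the implementation in Veritas or any host volume manager: writes land on both legs, so the array keeps the full consistency group, while reads are served only from the SSD leg.

class PreferredReadMirror:
    """Write to both mirror legs; always read from the fast (SSD) leg."""
    def __init__(self, ssd, array):
        self.ssd = ssd      # fast external SSD leg
        self.array = array  # replicating SAN array: keeps the consistency group

    def write(self, block, data):
        # Mirrored writes run at the speed of the slower leg, which is
        # why this model accelerates reads but not writes.
        self.ssd[block] = data
        self.array[block] = data

    def read(self, block):
        # Reads prefer the SSD leg for low latency.
        return self.ssd[block]

mirror = PreferredReadMirror(ssd={}, array={})
mirror.write(0, b"hot index block")
assert mirror.read(0) == b"hot index block"
assert 0 in mirror.array  # the array still sees every write it must replicate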

Solution 2:  Implement server-based replication.  There are an increasing number of good server-based replication solutions.  These tools allow you to maintain consistency groups from the server rather than from the controller inside the storage array, allowing one tool to replicate multiple heterogeneous storage solutions.

Solution 3: Replicate using transaction log shipping. For enterprise database environments, this is a common approach. Transaction log shipping ensures that all writes to a database are replicated to a remote site, where the database can be rebuilt if needed. This approach takes database replication away from the array, moving it closer to the database application.
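A rough sketch of the idea follows, with hypothetical paths and names of my own invention; real databases ship WAL or redo segments with their own tooling, but the shape is the same: completed log segments are copied to the recovery site, which replays them to rebuild the database.

import shutil
from pathlib import Path

LOCAL_LOG_DIR = Path("/db/logs")            # written by the primary database
REMOTE_LOG_DIR = Path("/mnt/dr_site/logs")  # mounted from the recovery site

def ship_closed_segments():
    """Copy every completed log segment the remote site does not yet have."""
    for segment in sorted(LOCAL_LOG_DIR.glob("*.log")):
        target = REMOTE_LOG_DIR / segment.name
        if not target.exists():
            shutil.copy2(segment, target)  # the DR site replays these in order

# Run on a schedule (cron or similar). Because the log stream itself carries
# every committed write, neither the SSD nor the array underneath it has to
# participate in replication at all.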

Solution 4: Implement a virtualizing controller with replication capabilities. A few external SSD manufacturers have partnered with vendors that offer controller-based replication and support heterogeneous external storage behind that controller. This moves the SSD behind a controller capable of performing replication. The performance characteristics of the virtualizing controller now become a gating factor in determining the effectiveness, and indeed the value added, of the external SSD. In other words, if the virtualizing controller adds latency (it must) or has bandwidth limitations (generally they do), those now apply to the external SSD. This can slow SSDs down by a factor of three to ten. It is also the case that this approach solves the consistency group problem only if the entire consistency group is stored behind the virtualizing controller.
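Some back-of-the-envelope arithmetic makes the gating effect concrete. The numbers below are assumptions picked for illustration, not measurements of any product, but they show how quickly an in-band controller's added latency comes to dominate the SSD's own.

# Illustrative latency arithmetic for an SSD behind a virtualizing controller.
ssd_latency_us = 100       # assumed native external SSD response time
controller_added_us = 300  # assumed latency added by the in-band controller
disk_latency_us = 5000     # assumed spinning-disk response time, for scale

virtualized_us = ssd_latency_us + controller_added_us
print(f"native SSD:        {ssd_latency_us} us")
print(f"behind controller: {virtualized_us} us "
      f"({virtualized_us / ssd_latency_us:.0f}x slower than native)")
print(f"still {disk_latency_us / virtualized_us:.0f}x faster than spinning disk")

Even in this mild case the SSD is four times slower than native, squarely inside the three-to-ten-times range above, though still far faster than disk; whether that trade is worth the replication capability is exactly the decision sites face.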

Most companies implementing external SSDs have had to grapple with the impact of consistency groups on application performance, replication, and recovery speed. Even so, the great speed of external SSDs often leads them to deploy one of the solutions we have discussed.

What has been your experience?