Coming Home

October 5, 2014

by Woody Hutsell, appICU

Wherever you are in your career, there is probably some place where you thought you would stay forever, a place where you developed the deepest roots or made the most impact. Things happen and you decide to move on, but you continue to feel a connection. For me, that place was Texas Memory Systems.

As those of you who have followed this blog know, I spent ten years of my life building the RAM and then flash systems business at Texas Memory Systems (TMS). Leaving in 2010 was a hard but necessary decision. When I left, I was not certain I would stay in the enterprise flash storage industry. In fact, a trip to the Flash Memory Summit in 2010 left me fairly certain the industry was not maturing in a way that was compelling to me or to many enterprise storage buyers. Nonetheless, I still felt linked to and defined by my experience in the solid state storage business and started blogging through this site. I joined ViON and had a terrific experience selling enterprise storage, including solid state storage. I presented at the Flash Memory Summit in 2011 representing ViON and realized that I missed the industry more than I expected. More importantly, the industry was changing in some really interesting ways that made returning enticing. I was presented with a great opportunity to work with Fusion-io on a software defined storage solution that we ended up calling the ION Data Accelerator.

In the background, the industry continued to mature, with the big companies finally moving into flash in a serious way. What started as OEM relationships and partnerships between companies was evolving into acquisitions. The most significant of these acquisitions was IBM’s decision to acquire Texas Memory Systems. Because my association with TMS reached back over 20 years, and because most of my friends in Houston were still employed at TMS, my interest in returning began to grow. As I listened to these friends and former co-workers, I realized that IBM was serious about this move into the flash market. I could see the culture and processes at TMS improving in ways I had only dreamed they could when I was there years before. I gave the process a few months, just to make sure the post-acquisition honeymoon was not distorting opinions. Simultaneously, something really important started to happen: some of the really talented people who had left TMS were coming back, and I could see that, combined with the people IBM was bringing in, this new entity might really impact the market. The more people returned to TMS, the more interested I became. The thing that finally clinched it was my realization that IBM was creating a flash division within the Systems and Technology Group less than two miles from my home. My decision was made, and I am home again.

Here I am, nearly 15 months into the experience. I have to admit some initial fear about joining a big company like IBM. Will the TMS culture be crushed? Will the product suffer? Will IBM be too big and too slow to move fast in the flash market? Will the internal bureaucracy make the day-to-day work unpleasant? I definitely felt all of those concerns. Now, while there is no way to escape the size of IBM, I can say with conviction that most of my fears have not come true. But this was not without some struggle. The IBM team leading the integration has done everything in its power to preserve the culture and pace of TMS. IBM has done small things, like preventing people from scheduling meetings over lunch hours so that the fairly notorious big lunch groups could continue. IBM has done big things, like taking frequent surveys of employee engagement, with real follow-through on the issues identified. Yes, there is the backdrop of a complex organization with some surging and some declining lines of business, but more than anything I see a depth of talent and resources that I consider to be unmatched.

The accolades accompanying IBM’s success in flash are rewarding, but it is really about the people and the journey. My compliments to the IBMers who moved to the FlashSystem team, and to the former TMSers who have worked harder than ever to deliver. I think the great people sometimes get lost in big organizations, so I want to recognize some spectacular contributions: Terri Mitchell, the IBM integration executive, for fighting for this team; Jan Janick and Mike Nealon for preserving the engineering culture; Mike Kuhn for learning the flash market faster than I could have possibly imagined; Andy Walls for his architectural brilliance; and the hundreds of people on the FlashSystem team who are pushing every boundary and changing minds within and outside of IBM. This effort to become the unqualified leader in the all-flash array category is just beginning, but it is nice to get a great start with a strong team. Happy two-year anniversary to the IBM Flash team!

This week I will be at the IBM Enterprise 2014 conference in Las Vegas, presenting in three different sessions about FlashSystem. Also, be sure to visit http://www-03.ibm.com/systems/storage/flash/ for the latest product features coming October 6th.

Flash Industry Chaos

September 9, 2014

by Woody Hutsell, http://www.appICU.com

I’ve been watching the flash storage industry evolve essentially since its beginnings. A lot has changed in not so many years. Lately, though, I’ve seen developments suggesting that parts of the flash storage industry are in a period of rapid change, even chaos. And at the same time, I’ve seen some clarity settle on the industry space closest to home.

In the last month, I attended two of the biggest shows for the enterprise flash market: Flash Memory Summit and VMworld. One of the things I like about going to these shows is keeping up with old friends, former co-workers, partners, analysts, and customers. I am used to seeing my colleagues in the industry move between companies (as I have done), but this year has really redefined the industry. The segment with the highest level of change is PCI flash. Last year set the stage for changes in the server-based flash world with a few smaller deals, like Virident selling to WD/HGST and Toshiba buying OCZ’s assets. This year, the two biggest players have changed hands, with both the LSI-to-Avago-to-Seagate and the Fusion-io-to-SanDisk deals closing.

Of these deals, the SanDisk acquisition of Fusion-io is the likeliest to disrupt the balance of power in the industry, to make the biggest waves, or lead to the most chaos. Why do I think the SanDisk acquisition may lead to chaos? First, if SanDisk manages to retain the people who made Fusion-io special, SanDisk will gain a powerful new asset, but that isn’t easy. Buying a company for its strong engineering and sales talent means you must keep those people around, and I have already seen a number of the excellent people at Fusion-io move on to other companies. Second, SanDisk now has the products and people necessary to sell directly to enterprise end users, even if the deals close through OEMs. They acquired a sales team that many competitors (think STEC, OCZ, Violin) tried to build but could not, and were nearly bankrupted trying. But this moves SanDisk perilously close to competing with its OEMs, a fine line to walk, or fall from.

Another industry segment where chaos may be brewing is VMware storage. At VMworld I saw a number of interesting new software defined storage solutions from VMware, plus a plethora of hyperconverged storage solutions like Nutanix. This part of the market has soaked up a lot of venture capital, and it is apparent to me that the majority of the companies in this space will not make it. This year’s VMworld reminded me of past Oracle OpenWorld shows where small companies whose names you barely recognized bought huge booths to try to capture some sliver of market attention. Almost inevitably, these companies crater rather than prosper. I think a really large booth from a small company with little revenue is the first evidence that it is on the path to irrelevance.

Finally, instead of rapid change and even chaos, I see one area within the flash storage industry that is gaining clarity – the all-flash array space. This industry category has reached its seventh birthday, if you date it from the all-flash products Texas Memory Systems introduced in 2007. The recent round of Gartner Market Share and Magic Quadrant studies has confirmed what those in the industry already realize: currently this is a three-horse race, with IBM, EMC, and Pure Storage leading the industry in revenue and all three in the leaders quadrant. But it is clear to me that the other storage OEMs are gaining steam. Expect revenue from HDS, HP, and NetApp to increase on pace with industry growth. There continue to be a variety of small companies and start-ups that have missed the first wave of industry consolidation and are growing much more slowly than the industry. For these companies, there is still a future if they can be acquired or grow into a profitable niche. It now takes much more for a startup (or established company) to enter and succeed in the AFA market than it took a year ago. The gap between the leaders and the followers in the AFA space continues to grow, and as it grows it becomes more important for clients to evaluate the long-term prospects of their flash array providers.

For more information on IBM FlashSystem, I encourage you to visit: http://www-03.ibm.com/systems/storage/flash/


IBM V840: The way “Software Defined Storage” should be done

April 17, 2014

by Woody Hutsell, http://www.appICU.com

The phrase “software defined storage” burst into the storage marketing lexicon in seemingly less time than the data access latency of a good SSD. Unless you were born yesterday, you saw it happen. Solid state storage vendors piled onto the bandwagon, most of them leaping by the most convenient route. But IBM has taken a more reasoned, and seasoned, approach, resulting in a software defined storage solution that captures the benefits originally imagined in the phrase without resorting to the quick-time-to-market shortcuts others have relied on.

One of the more fascinating stories to me in the last two years has been the rapid adoption of the phrase: “software defined storage.” Here for your viewing pleasure is a Google Trends view:

[Google Trends chart: search interest in “software defined storage” over time]

The mainstream use of the term software defined storage started in August 2012 with the launch of the Fusion ION Data Accelerator. Within a few months, every major and minor storage vendor was labeling its solution software defined storage, including companies with offerings as different as Nexenta, NetApp, and IBM.

While researching this post, I came across a nice blog entry by a fellow IBMer that casts additional light on the idea of software defined storage. I love that IDC created a software defined storage taxonomy in April 2013. Can you believe it? From the phrase’s creation to requiring a taxonomy in less than eight months. If you are reading this, you can count yourself as having been along for the ride as the phrase infiltrated storage marketing.

As I explore the meaning of software defined storage, I will use a really basic definition that I think allows everyone to jump on the bandwagon:

Software-defined storage involves running storage services (such as replication, snapshots, tiering, virtualization and data reduction) on a server platform.
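
To make that definition concrete, here is a minimal, purely illustrative Python sketch (all names are hypothetical, not any vendor’s implementation) of one such storage service, a copy-on-write snapshot, running entirely in software above an ordinary block device:

```python
# Hypothetical illustration (not any vendor's code): a storage service --
# copy-on-write snapshots -- implemented purely in software above a
# generic block device.

class Volume:
    """A toy block volume: a dict mapping block number -> bytes."""
    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}           # live data
        self.snapshots = []        # frozen block maps

    def snapshot(self):
        # Copy-on-write: the snapshot shares block contents with the live
        # volume; nothing is duplicated until a shared block is overwritten.
        self.snapshots.append(dict(self.blocks))
        return len(self.snapshots) - 1

    def write(self, block_no, data):
        # Snapshots keep references to the old bytes objects, so simply
        # replacing the live mapping preserves every snapshot's view.
        self.blocks[block_no] = data

    def read(self, block_no, snapshot_id=None):
        source = self.blocks if snapshot_id is None else self.snapshots[snapshot_id]
        return source.get(block_no, b"\x00" * self.block_size)

vol = Volume()
vol.write(0, b"v1".ljust(4096, b"\x00"))
snap = vol.snapshot()
vol.write(0, b"v2".ljust(4096, b"\x00"))
assert vol.read(0, snapshot_id=snap).startswith(b"v1")   # snapshot unchanged
assert vol.read(0).startswith(b"v2")                     # live volume updated
```

Nothing in that sketch depends on special hardware, which is exactly why the definition casts such a wide net.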

No wonder everyone can claim to be in the software defined storage business. Count IBM, whose SAN Volume Controller (SVC) has over 10 years in the industry, as a pioneer in this category. Certainly NetApp, Nexenta, and others belong as well. For years the storage industry has been migrating the delivery of storage services from custom-architected hardware to commodity server hardware. In doing so, vendors gain lower-cost hardware, faster time to market, and the advantage of using industry standard and open source software components. This isn’t to say the solutions aren’t differentiated; they are differentiated by their feature sets, but not significantly by their hardware.

The introduction of all-Flash appliances into the product mix provided a real test of the capability of software defined storage. I remember IBM talking about project Quicksilver in 2008. Quicksilver used IBM SVC. The results were impressive and showed that software defined solutions could scale to the IOPS levels required by the enterprise. Since that time, nearly every Flash product brought to market could be labelled software defined storage: an Intel server platform, a Linux OS, a software storage stack like SCST or LIO, off-the-shelf HBAs/NICs, third party SSDs, and software for storage services. Storage has become integration and tuning rather than engineering. This approach to system design leaves a lot to be desired. Are the OSes, storage stacks, RAID, enclosures, and HBAs all really designed for Flash? No, actually. The integration happens only in the minds of the marketers, unless you count the SAS link that connects the server to the storage enclosure or subsystem.

Instead, IBM has taken a novel approach to the Flash market, recognizing that producing extreme performance requires custom hardware, while also acknowledging that offering rich storage services is best accomplished with software defined storage. This recognition led IBM to offer a brand new solution called the FlashSystem V840 Enterprise Performance Solution. The software side of the equation is driven by IBM’s extensive experience building actual, integrated software defined storage solutions. The hardware side of the equation, rather than being a potpourri of third party stuff, is a custom-engineered Flash storage system (the IBM FlashSystem 840). On the software side, the software defined storage control modules have been purposely developed with data paths that substantially reduce the latency impact of most storage services. In fact, the FlashSystem V840 achieves latency for data accesses from Flash as low as 200 microseconds.

For a minute, let’s contrast the FlashSystem V840 with the attributes of nearly every competing Flash appliance offering:

Typical storage enclosure

  • Third party MLC/eMLC SSDs
  • No SSD-level data protection
  • Inexpensive processors as Flash controllers
  • SAS-connected
  • Limited density and scalability due to form factor
  • Off-the-shelf HBAs as interface controllers
  • Software RAID and over-provisioning provided by the control enclosures

FlashSystem 840

  • IBM designed FlashSystem Flash Modules
  • IBM patented Variable Stripe RAID™, which protects performance and availability even through cell, layer, or chip failures (see the conceptual sketch after this list)
  • IBM engineered PowerPC processors combined with FPGAs as Flash controllers
  • High speed proprietary interconnect
  • High density and highly scalable
  • IBM engineered interface controllers, optimized for low latency and high IOPS
  • IBM engineered hardware RAID controllers, optimized for low latency and high IOPS with FPGAs as RAID controllers
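
Public descriptions of Variable Stripe RAID describe stripes that narrow around failed flash regions so a failure does not take the whole module offline. The Python sketch below is conceptual only, an illustration of that idea under my own assumptions, not IBM’s patented implementation:

```python
# Conceptual sketch only: stripes that narrow around failed chips instead
# of failing the whole module. An illustration of the idea, not IBM's
# patented implementation.

class VariableStripeRaid:
    def __init__(self, chips=10):
        self.healthy = list(range(chips))   # ids of usable flash chips

    def mark_failed(self, chip_id):
        # Drop the failed chip: future stripes are written one column
        # narrower rather than taking the entire module offline.
        self.healthy.remove(chip_id)

    def place_stripe(self, stripe_no):
        # Data and parity are laid out across whatever chips remain healthy.
        return [(chip, stripe_no) for chip in self.healthy]

vsr = VariableStripeRaid(chips=10)
print(len(vsr.place_stripe(0)))   # 10-wide stripe
vsr.mark_failed(3)                # a chip (or layer/cell region) fails
print(len(vsr.place_stripe(1)))   # 9-wide stripe; the module stays in service
```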

All of this discussion about proprietary hardware may have users worried about vendor lock-in and silos of data. However, the FlashSystem V840, with its storage virtualization feature, enables data center managers to break vendor lock-in by virtualizing heterogeneous third party arrays behind the FlashSystem V840 and giving them access to its rich set of storage services.

The choice of third party SSDs combined with software defined RAID architectures pushes storage processing work from the storage enclosure to the control enclosures. The problem is that these storage processing tasks are processor intensive, consuming threads and cores on what are already limited processors. The net result is that the control enclosures, before running any desirable storage services, are already burdened because they are performing functions that are best off-loaded to the storage enclosure. Combine this with the proven inefficiency of software RAID and the result is the poor performance metrics we see from IBM’s Flash appliance competitors. Look closely at write IOPS and you will clearly see the deleterious effect of software RAID on performance. Try adding storage services to these control enclosures and you understand why the other Flash appliances on the market are not feature rich: short of adding more processors, they cannot add features without cratering their already terrible performance.
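
A back-of-the-envelope calculation shows why write IOPS suffer. The numbers below are illustrative assumptions, not measurements of any product; the four-I/O cost of a RAID-5 small write, however, is the classic read-modify-write penalty:

```python
# Illustrative arithmetic only: the classic RAID-5 small-write penalty.
# Each small random write costs four back-end I/Os:
#   read old data + read old parity + write new data + write new parity.
raw_backend_write_iops = 400_000   # hypothetical flash enclosure capability
raid5_write_penalty = 4

host_visible_write_iops = raw_backend_write_iops / raid5_write_penalty
print(host_visible_write_iops)     # 100000.0 -- before any storage services run,
                                   # and the parity math itself burns controller CPU
```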

In the case of the IBM FlashSystem V840, the storage enclosure functions as a high performance processing offload engine, freeing the control enclosures to do what they do best – implement storage services. The result is a much more scalable solution that delivers industry leading latency, IOPS, and bandwidth.

Software defined storage may have its place, but only if done well. Abandoning effective hardware/software integration just for the chance to save on engineering seems like a terrible choice for all-Flash appliances. IBM has taken a different tack, purposely engineering and integrating a software defined storage solution that offers all the benefits, without resorting to the short-cuts that most storage vendors have used to get there.

To learn more about IBM and Software Defined Storage, be sure to attend Edge2014.


Reaching the Summit

January 17, 2014

Woody Hutsell, AppICU

If you’ve worked for a small company, you know progress sometimes happens in baby steps. You deal with constrained resources. You deal with hasty or delayed decisions. You just deal with reality. We went through seven or so generations of RAM based SSD systems at TMS before we got to a solution I considered the pinnacle of achievement, the RamSan-440. It combined performance and reliability features that were the class of the industry. Even a year after TMS released flash solutions, I still recommended the RamSan-440 despite its higher cost per capacity.

You almost have to have been intimately involved in the all flash array business since its inception in 2007 to fully appreciate this next comment, but if you were, or still are, you will understand. Since the RamSan-500 and the earliest competitor systems, a product engineering battle has raged between competitors to develop the ultimate single-box, highly available all flash solution. Single-box HA solutions are desirable for their improved density, performance, power, and cost to deploy and support. This was a battle that could only be engaged by companies with real hardware engineering talent (a talent missing from most all-flash players today). For years, Tier 0 all flash arrays had to be deployed with 2x the capacity for 1x the usable capacity because of a range of issues including: lack of full redundancy in components, lack of hot swap components, lack of easy access to hot swap components, and the inability of systems to take firmware upgrades without downtime. These deficits resulted in a creative mix of deployment architectures to work around the issue, some more elegant than others. Each product iteration has gotten us measurably closer to summiting this peak without reaching it, and similar issues surround competitor products, with some farther behind than others. The achievement of the FlashSystem 840 is that it has reached the summit. I could not be happier with the product team who defined this product or the development team that brought it to market.

For more information on the FlashSystem 840, which IBM just announced yesterday, I encourage you to visit: http://www-03.ibm.com/systems/data/flash/storage/infographic/flash-data-center-optimized.html


What a long strange year it’s been

December 18, 2013

Woody Hutsell, AppICU

Flash back one year ago. I was working at Fusion-io on a software defined storage solution with some of the brightest minds in the industry. Fusion-io was flying high, reaching over $100 million in quarterly revenue. David Flynn, Rick White, and Jim Dawson were leading one of the most talented teams I have been around. There are still really talented people at Fusion-io, but take away Rick White (heart), David Flynn (mind), and Jim Dawson (soul) and you have just another company. A company, by the way, still bringing in real revenue and continuing to dominate in PCI Flash. Their relationship with my employer, IBM, is still strong. If I were buying PCI Flash for Intel servers, they would still be my first choice.

I left Fusion-io at the end of March to go back home, literally and figuratively.  I loved working at Fusion-io, but traveling from my home in Houston to Salt Lake City/San Jose twice per month was not great fun.  More importantly, IBM had closed its acquisition of Texas Memory Systems and my friends, co-workers and family were encouraging me to come back.  The idea of being with a company of IBM’s capability, picking up where I left off with my first solid state storage baby (the RamSan), and working with friends and family less than two miles from home was too much to pass up.  I could feel the excitement from the TMSers who were now IBMers and saw that IBM was out to win in the all flash array category.  Did someone say a billion dollar investment in flash?  Makes the $150 million for Pure Storage look like pocket change.

My initial conversations with the IBM team, before joining, validated this feeling. IBM had brought in the best and was basing many of them in Houston. As important to me was seeing that many of the other talented people who had left TMS in the years prior to the acquisition were returning, including friends who had great roles at Oracle and HP.

If history has taught us anything about the solid state storage industry, it is that the fate of companies rises and falls on the strength of their relationships with the big companies in the industry. STEC made the first big splash, locking up OEM deals for Zeus-IOPS. Fusion-io made the next big splash in the PCI Flash space, locking up OEM deals for ioDrives and ioScale. Violin had their first big peak on the back of a short-lived relationship with HP. All of these companies’ fortunes have surged, and at times collapsed, on these relationships. It only made sense to me, then, that the one thing better than being OEM’d by the big company was being the big company; and so far I am right.

So here we are at the end of 2013.  I think 2013 will be seen as the year that the all flash array market finally took off generating the real revenues that have been anticipated for years.

2014 will witness the bifurcation of the all flash array market that Jeff Janukowicz at IDC first called out in a research report a couple of years ago, creating real separation between products focused on “absolute performance” and those focused on the “enterprise.” In some ways this is a bit like talking about the market that is and the market that could be. Today, the majority of all flash array purchases in the enterprise are used for database acceleration (bare-metal or virtual). These workloads, more than many others, benefit from absolute performance systems and notably do not benefit from inline data deduplication. Curiously, the venture backed companies in the market are almost exclusively focused on the feature-rich enterprise category. Even Violin, who once had a credible offering in this category, has chosen an architectural path that moves them away from the absolute performance segment of the market. The company with the most compelling solution in this category (in my clearly biased opinion) is IBM with its FlashSystem product. For at least a decade I have heard the industry characterize the RamSan, and now the FlashSystem, as the Ferrari of flash arrays. What our competitors have discovered along the way is that performance is the first cut at most customer sites, and beyond that FlashSystem brings a much better economic solution because of its low latency, high density and low power consumption.

Does this mean IBM doesn’t have a play in the all-flash enterprise category? Stay tuned. It’s not 2014 yet. In fact, mark your calendars for the year’s first big announcement webcast: bit.ly/SCJanWebcast

And really, did you even think that thought? IBM has the broadest flash portfolio in the industry. IBM has clearly said that the market is approaching a tipping point, a point where the economic benefits of flash outweigh its higher cost. This tipping point will lead to the all-flash data center. And nobody understands the data center better than IBM.

I am looking forward to an eventful 2014.  Happy Holidays and Happy New Year.

Woody


Power and PCI flash, the performance balancing act!

October 9, 2013

Woody Hutsell, http://www.appICU.com

Amidst a slew of other IBM announcements yesterday was IBM’s launch of the Flash Adapter 90 for AIX and Linux based Power Systems environments. Flash Adapter 90 is a PCIe flash solution that accelerates performance and eliminates the bottleneck for latency sensitive, IO intensive applications. Enterprise application environments such as transaction processing (OLTP) and analytics (OLAP) benefit from having high performance Power processors balanced against equally high performance PCIe flash. This balance increases server, application, and user productivity to drive a more efficient business.

The Flash Adapter 90 is a full-height, half-length PCIe Gen 2 adapter providing 900GB of usable eMLC flash capacity. Flash Adapter 90 is a native flash solution without the bottlenecks common to other PCI flash solutions, and it uses on-adapter processing and metadata to lessen the impact on server RAM and processors. Up to four of the adapters can be used inside supported Power servers.

In a recent article, “Flash fettlers Fusion-io scoop IBM as reseller partner,” Chris Mellor observed that IBM’s recent decision to launch the Fusion ioScale based IBM Flash Adapters Enterprise Value for System x® solutions was evidence that IBM had abandoned the PCI flash technology IBM received when it acquired Texas Memory Systems. The Flash Adapter 90 launch demonstrates that IBM has not discarded this technology, merely waited for the perfect time and perfect platform to bring it to market. IBM has consistently demonstrated a desire to meet client needs, whether that involves engaging IBM R&D to develop solutions, such as the Flash Adapter 90, or bringing in industry standard components.

Flash Adapter 90 brings IBM patented Variable Stripe RAID technology and enterprise performance to the Power Systems client base, who have anxiously awaited a solution with a driver tuned to take advantage of the AIX and Linux operating systems. Power Systems are acknowledged as the world’s fastest servers and now have a bit of the world’s fastest storage, an unbeatable combination of processor and storage for accelerating business critical applications. Along the way, IBM tested the combined solution with IBM’s Identity Insight for DB2, demonstrating IBM’s ability to combine multiple products, from application to server to storage, for a consistent, predictable client experience. This combination of products showed performance superior to other tested configurations at a much lower cost per solution.

With this announcement, IBM offers its Power Systems clients more choice in deciding what flash storage they will use to accelerate their applications. Power Systems clients can consume flash from IBM in whatever manner best suits their data center or application environment. Clients may choose from IBM FlashSystem, IBM Flash Adapter 90, and EXP 30 Ultra SSD Drawers (a direct-attach storage solution), in addition to a host of other IBM System Storage products. For applications or client architectures that are server-centric, i.e. those that use server scale-out/clustering for reliability, the Flash Adapter 90 is a low cost method for delivering outstanding application performance. Applications based on DB2 and Oracle databases are excellent candidates for acceleration.

Long live the Flash Adapter 90.

More information on IBM Power Systems flash options can be found at:  http://www-03.ibm.com/systems/power/hardware/peripherals/ssd/index.html


Video Entry: The Solid State Disk Market

August 18, 2011

Woody Hutsell, AppICU

I had a chance to sit down with my friends from MarketingSage (www.MarketingSage.com) at the Flash Memory Summit last week. They asked me some tough questions about the solid state disk market; here are the videos they prepared from our discussion:

Where do solid state disk cache solutions fit in the market?

How is virtualization changing demand for SSD and Flash?

What’s driving SSD sales success with end-users?

How would you describe the vendor landscape for solid state disks?

I hope you enjoy these brief videos and thanks again to MarketingSage for making it happen.

Woody


Long Live RAM SSD

April 22, 2011

by Woody Hutsell at www.appICU.com

In late 2006, Robin Harris at www.StorageMojo.com wrote “RAM-based SSDs are Toast – Yippie ki-yay”. As a leader of the largest RAM-based solid state storage vendor at the time, I can assure you that his message was not lost on me. In fact, we posted a response to Robin in “A Big SSD Vendor Begs to Differ”, to which Robin famously responded, “If I were TMS, I’d ask a couple of my better engineers to work part time on creative flash-based SSD architectures.” I cannot honestly remember the timing, but it is fair to say that the comment at the very least reinforced our internal project to develop a system that relied heavily on SLC NAND Flash for most of its storage capacity. Within a few years, TMS had transitioned from a RAM-based SSD company to a company whose growth was driven primarily by Flash-based SSD. Nearly five years after the predicted death of RAM-based SSD, I thought it would be interesting to evaluate the role of RAM SSD in the application acceleration market.

First off, it is important to note that RAM-based SSDs are not toast. In fact, a number of companies continue to promote RAM-based SSDs, including my employer, ViON, which is still marketing, selling and supporting them. What may be more surprising is that the intervening years have actually seen a few new companies join the RAM-based SSD market. What all of these companies have identified is that there are still use cases for the high performance per density available with RAM-based SSD. In particular, RAM-based SSDs continue to be ideal for database transaction logs, temporary segments, or small to medium databases where the ability to scale transactions without sacrificing latency is critical. Customers in the e-commerce, financial and telecom markets will still use RAM SSD. When a customer tells me they need to be able to say they have done “everything possible” to make a database fast, I still point them to RAM SSD if the economics are reasonable. I think the RAM SSD business has promise for these specific use cases, and I will watch with curiosity the companies that try to expand the use cases to much higher capacities.

The second thing to note is that without RAM, Flash SSDs would not be all that appealing. You will probably all recall the reaction to the initial Flash SSDs that had write performance slower than hard disk drives. How did the vendors solve this problem? For one thing, they over-provisioned Flash so that writes don’t wait so much on erases. In enterprise solutions, however, the real solution is RAM. Because the NAND Flash media just needs a little bit of help, a small amount of RAM caching goes a long way toward decreasing write latencies and dramatically improving peak and sustainable write IOPS. This increases the cost and complexity of the Flash SSD but makes it infinitely more attractive to the application acceleration market.
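
The mechanism is easy to sketch. Below is a minimal, purely hypothetical Python model (none of these names come from any real product) of a small RAM write-back cache in front of flash: writes are acknowledged at RAM latency and destaged to flash in the background:

```python
# Hypothetical model of a small RAM write-back cache in front of flash.
from collections import OrderedDict

class WriteBackCache:
    def __init__(self, flash, capacity_blocks=1024):
        self.flash = flash                # assumed to offer .program(addr, data)
        self.capacity = capacity_blocks   # ... and .read(addr)
        self.dirty = OrderedDict()        # addr -> data awaiting destage

    def write(self, addr, data):
        self.dirty[addr] = data           # acknowledged at RAM latency
        self.dirty.move_to_end(addr)
        if len(self.dirty) > self.capacity:
            self.destage_one()            # make room; only now pay flash latency

    def destage_one(self):
        addr, data = self.dirty.popitem(last=False)   # oldest write first
        self.flash.program(addr, data)    # the slow program/erase happens here

    def read(self, addr):
        if addr in self.dirty:            # newest data may still be in RAM
            return self.dirty[addr]
        return self.flash.read(addr)
```

As long as the cache is not full, the application never waits on a flash program/erase cycle, which is precisely the "little bit of help" described above.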

Third, the companies with the most compelling Flash SSD performance characteristics have come out of the RAM SSD market.  These companies had developed low latency, high bandwidth controllers and backplanes that were tuned for RAM.  Contrast this with the difficulties the integrated storage manufacturers have had since their controllers and backplanes were tuned for hard disk drives.

Casual industry observers might ask a couple of other questions about this market:

  • With the rapid decrease in RAM prices, is RAM likely to replace Flash as the storage media of choice for enterprise SSD?  No.
  • Are the large integrated storage companies likely to add a non-volatile RAM SSD tier in front of their new Flash SSD tier?  I tend to doubt it, but would not rule it out completely.
  • Aren’t customers that start with Flash going to look to RAM SSD to go even faster?  I think some of these customers will want more speed but for most users Flash will be “good-enough”.
  • Aren’t customers that start with RAM likely to move to Flash SSD on technology refreshes?  Probably not.  RAM SSD is addictive.  Once you start with RAM SSD, it is hard to contemplate going slower.

To put this all in perspective, Flash SSDs did not kill the RAM SSD market.  In some ways, Flash SSD and the big companies who have embraced it have added legitimacy to the RAM SSD market that it lacked for decades.  I think RAM SSDs will continue to be an important niche in the overall application acceleration market and anticipate innovative companies introducing new use cases and products over the next five years.

To give credit where credit is due: while Flash SSDs did not kill the RAM SSD market, Flash has come to dominate the enterprise storage landscape like no other technology since the advent of disk storage. Robin Harris may not have accurately predicted the end of RAM SSD, but he was at the forefront of the analysts and bloggers, including Zsolt at www.StorageSearch.com, who predicted Flash SSD’s widespread success.


Escape Velocity

March 25, 2011

What does it take to build an SSD company into a sustainable enterprise? What does it take to make it profitable? What does it take to go public? Given that many of the SSD companies focused on the enterprise are private, it is awfully hard to get good data on the costs of entry.

As a participant in the SSD industry for the last decade, I have had the benefit of watching the ascent of Fusion-IO from a small start-up company to a marketing machine, and can now observe, along with the rest of you, their attempt to go public.

Say what you will about Fusion-IO and their products, and believe me, at various times I have said all of those things, good and bad: they are a marketing machine. But to leave it at that would be a terrible injustice. David Flynn and his team launched a product that the rest of us did not see the need for. Don’t get me wrong, Cenatek and MicroMemory both had server-based PCI SSDs well before Fusion-IO, but Fusion-IO did two things that really set their product apart: 1) they were the first PCI SSD company to really take advantage of the first generation of Flash suitable for enterprise customers; 2) they were unashamed about beating their way into the enterprise market, even if they took what I consider a fire:ready:aim approach to marketing. I remember the early days of Fusion-IO marketing, which was clearly aimed at companies using storage area networks. The ad went something like “the power of a SAN in your server”. Interesting concept, but the people who used storage area networks were pretty sure that a PCI card was not about to replace their SAN. I know, some people have done this, but by and large I would suggest that what Fusion discovered, and now markets to directly, is the large set of customers that need server-based application acceleration. These scale-out applications include those run at customers like Facebook, who improve application performance by adding servers. Historically, those servers would be laden with expensive high-density RAM. Fusion-IO brought them a product that was not cheap, but less expensive than RAM. The other sweet spot Fusion discovered was that this market is very read-intensive and therefore well suited to MLC Flash, enabling customers to get better density and better pricing.

I remember the first time I met David Flynn. I was participating in a SNIA Summer Symposium in early 2008. Until that point, my only real exposure to Fusion-IO had been their marketing. When the topic in the room turned to Flash technology, David was quick to join the discussion and showed a grasp of Flash that clearly exceeded that of most of the people in the room. From that point, I knew that Fusion was much more than marketing.

At one point during my time at TMS, I tried to assess what made Fusion-IO go from yet another random company involved in SSD to a company that was always in the limelight (disproportionately to their revenues – another hallmark of good marketing). I used Google Trends data to see if there was an inflection point for Fusion-IO, and I found it. The inflection point was their hiring of Steve Wozniak. What a brilliant publicity move, and one that I think continues to pay off for Fusion-IO. From the day his involvement with Fusion-IO was announced, the company took off in terms of web hits. I can’t tell you how much time I spent, because it would be embarrassing, but I spent a lot of time trying to figure out how to create a similar event at TMS. I thought if we could hire “Elvis” we would have a chance.

The next brilliant tactical move by Fusion-IO was tying up the server OEMs. You see, one of the biggest challenges with selling products that go into another manufacturer’s servers is getting that server vendor to bless and support the solution. Fusion realized this problem early on and announced relationships with HP, IBM and Dell. Not to mention that Michael Dell was an investor. With those announcements, the big problem was solved: the big server vendors had blessed Fusion-IO with credibility typically reserved for companies like Seagate and Intel. It is worth noting that these server vendors hate single-sourced products, leaving plenty of room for competitors to get the same blessings.

Plenty of things, good and bad, have happened along the way for Fusion-IO. They encountered many of the problems that fast-growing venture backed companies have. There were delayed product releases; there were quality problems; Fusion’s approach to business created a lot of enemies; they went through an impressive number of CEOs (though David Flynn remained the key guy all along) and sales teams; and there were missteps in channel marketing strategy. But through it all they have shown impressive perseverance.

As Fusion’s revenues have grown in the last two years to match their marketing, they have added some really impressive depth to their team. Marius Tudor, who largely led the sales and marketing efforts for industry pioneer BitMicro, is involved. Another marketing genius, Gary Orenstein, who led marketing at Gear 6 among other places, has joined the team. This is not to imply that I don’t have deep respect for their Chief Marketing Officer Rick White, another gaming enthusiast like myself, but really, does he need this much help? Leaving behind some marketing talent for the rest of the industry would have been gracious.

For the SSD vendor community, whatever you think of Fusion-IO, their effort to go public is a major milestone for the industry. Have you ever tried to get private company valuations without comparables? Valuation becomes guesswork. Do you have a great SSD idea and need VCs to get excited about it? Fusion-IO successfully going public would help the rest of the private companies eyeing their own exit strategies (going public, staying private, being acquired). What does it take for an SSD company to go public? What revenue, what profitability, what gross margins? We may soon find out. In this pursuit, I for one am rooting for Fusion-IO and, in turn, the industry.


Consistency Groups: The Trouble with Stand-alone SSDs

February 28, 2011

SSDs (Solid State Disks) are fast; everyone knows this. So, if they are all so very fast, why are we still using spinning disks at all? The thing about SSDs (OK, well, one of the things) is that while they are unarguably fast, they need to be implemented with reliability and availability in mind just like any other storage media. Deploying them in an enterprise environment can be like “putting all of your eggs in one basket”. In order for them to meet the RAS needs of enterprise customers, they must be “backed up” in some meaningful way. It is not good enough to make occasional back-up copies; we must protect their data in real time, all of the time. Enterprise storage systems do this in many different ways, and over time we will touch upon all of these ways. Today, we want to talk about one of them – replication.

One of the key concepts in data center replication is the consistency group: a set of files that must be backed up, replicated, and restored together in order for the application to be properly restored. A database, for example, cannot be recovered unless its data files and transaction logs reflect the same point in time. Consistency groups are the cause of the most difficult discussions between end-users and SSD manufacturers. At the end of this article, I will suggest some solutions to this problem.
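
The requirement is easy to state in code. Here is a purely illustrative Python sketch (the volume methods are hypothetical, not any array’s API) of why a consistency group must be captured as a unit:

```python
# Hypothetical sketch: taking a crash-consistent snapshot of a consistency
# group. Every member volume is quiesced before any is snapshotted;
# snapshotting volumes one at a time while writes continue would capture
# data files and logs at different points in time.

def snapshot_consistency_group(volumes):
    for v in volumes:
        v.quiesce()        # hold new writes and drain in-flight ones
    try:
        # All snapshots now reflect the same point in time.
        return [v.snapshot() for v in volumes]
    finally:
        for v in volumes:
            v.resume()     # release held writes
```

The trouble described below arises when some member volumes live on storage that the replication tool cannot see.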

The largest storage manufacturers have a corner on the enterprise data center marketplace because they have array-based replication tools that have been proven in many locations over many years. For replicated data to be restorable, the entire consistency group must be replicated using the same tool set. This is where external SSDs encounter a problem. External SSDs are not typically (though this is changing) used to store all application data; furthermore, they do not usually offer replication. In a typical environment, the most frequently accessed components of an application are stored on the SSD and the remaining, less frequently accessed data are stored on slower, less expensive disk. If a site uses array-based replication, the array no longer holds the entire consistency group to replicate.

External SSD write caching solutions encounter a more significant version of this same problem. Instead of storing specific files that are accessible to the array-based replication tool, they have cached writes that may, or may not, have been flushed through to the replicating array. The replicating array has no way of knowing this and will snapshot or replicate without a full set of consistent data, because some of that data is still cached in the external caching solution. I am aware that some of these third party write caching solutions do have a mechanism to flush cache and allow the external array to snapshot or replicate, but generally speaking, these caching SSDs have historically been used to cache only reads, since write caching creates too many headaches. Unless the external caching solution is explicitly certified and blessed by the manufacturer of the storage being cached, using these products for anything more than read caching can be a pretty risky decision.

Automatic integration with array-based replication tools is a main reason some customers select disk form factor SSDs rather than third party SSDs, in spite of the huge performance benefits of the third party SSD. If you are committed to attaining the absolute highest performance, and are willing to invest a little effort to maximize it, the following discussion details some options for getting around this problem.

Solution 1: Implement a preferred-read mirror. For sites committed to array-based replication, a preferred-read mirror is often the best way to benefit from an external SSD while keeping array-based replication. A preferred-read mirror writes to both the external SSD and the replicating SAN array. In this way, the replicating array has all of the data needed to maintain the consistency group, and yet all reads come from the faster external SSD. One side benefit of this model is that it allows a site to avoid mirroring two expensive external SSDs for reliability, saving money, because the existing array provides this role. If your host operating system or individual software application does not offer preferred-read mirroring, a common solution is to use a third-party storage application such as Symantec’s Veritas Storage Foundation to provide this feature. Bear in mind that a preferred-read mirror does not accelerate writes.
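
As a rough sketch of the data path (hypothetical Python, not any vendor’s actual API): writes go synchronously to both sides of the mirror, while reads are served from the faster SSD side:

```python
# Hypothetical sketch of a preferred-read mirror, not any vendor's API:
# writes hit both sides, reads come only from the fast side.

class PreferredReadMirror:
    def __init__(self, fast_ssd, replicating_array):
        self.fast = fast_ssd
        self.slow = replicating_array   # keeps the consistency group whole

    def write(self, addr, data):
        # Both copies must complete, so writes run at the slower side's
        # speed -- the reason this layout does not accelerate writes.
        self.fast.write(addr, data)
        self.slow.write(addr, data)

    def read(self, addr):
        return self.fast.read(addr)     # every read served by the SSD
```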

Solution 2:  Implement server-based replication.  There are an increasing number of good server-based replication solutions.  These tools allow you to maintain consistency groups from the server rather than from the controller inside the storage array, allowing one tool to replicate multiple heterogeneous storage solutions.

Solution 3: Replicate using transaction log shipping. In enterprise database environments, it is common for a site to replicate using transaction log shipping. Transaction log shipping makes sure all writes to a database are replicated to a remote site where the database can be rebuilt if needed. This approach takes database replication away from the array, moving it closer to the database application.
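
A minimal sketch of the idea (hypothetical Python, not any database’s actual mechanism): every committed log record is made durable locally and forwarded to a standby site, which replays the records in order:

```python
# Hypothetical sketch of transaction log shipping: each committed log
# record is made durable locally, then forwarded to a standby site that
# replays records in order to rebuild the database.

class LogShipper:
    def __init__(self, local_log, standby):
        self.local_log = local_log   # assumed durable store with .append(record)
        self.standby = standby       # assumed remote endpoint with .receive(record)

    def commit(self, record):
        self.local_log.append(record)   # durable at the primary site first
        self.standby.receive(record)    # then shipped to the remote site

class Standby:
    def __init__(self, database):
        self.db = database           # assumed to offer .replay(record)

    def receive(self, record):
        self.db.replay(record)       # rebuild state from the log, in order
```

Because the log captures every write in commit order, the standby stays consistent no matter what mix of SSD and disk holds the primary's files.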

Solution 4: Implement a virtualizing controller with replication capabilities. A few external SSD manufacturers have partnered with vendors that offer controller-based replication and support heterogeneous external storage behind that controller. This moves the SSD behind a controller capable of performing replication. The performance characteristics of the virtualizing controller now become a gating factor in the effectiveness, and indeed the value added by, the external SSD. In other words, if the virtualizing controller adds latency (it must) or has bandwidth limitations (generally they do), those now apply to the external SSD. This can slow SSDs down by a factor of three to ten. Note also that this approach solves the consistency group problem only if the entire consistency group is stored behind the virtualizing controller.

Most companies implementing external SSDs have had to make hard decisions, grappling with the impact of consistency groups on application performance, replication and recovery speed. Even so, the great speed of external SSDs often leads them to deploy one anyway, using one of the solutions we have discussed.

What has been your experience?