What a long strange year it’s been

December 18, 2013

Woody Hutsell, AppICU

Flash back one year ago. I was working at Fusion-io on a software-defined storage solution with some of the brightest minds in the industry. Fusion-io was flying high, reaching over $100 million in quarterly revenue. David Flynn, Rick White, and Jim Dawson were leading one of the most talented teams I have been around. There are still really talented people at Fusion-io, but take away Rick White (heart), David Flynn (mind) and Jim Dawson (soul) and you have just another company. A company still bringing in some real revenue, by the way, and continuing to dominate in PCI Flash. Their relationship with my employer, IBM, is still strong. If I were buying PCI Flash for Intel servers, they would still be my first choice.

I left Fusion-io at the end of March to go back home, literally and figuratively.  I loved working at Fusion-io, but traveling from my home in Houston to Salt Lake City/San Jose twice per month was not great fun.  More importantly, IBM had closed its acquisition of Texas Memory Systems and my friends, co-workers and family were encouraging me to come back.  The idea of being with a company of IBM’s capability, picking up where I left off with my first solid state storage baby (the RamSan), and working with friends and family less than two miles from home was too much to pass up.  I could feel the excitement from the TMSers who were now IBMers and saw that IBM was out to win in the all flash array category.  Did someone say a billion dollar investment in flash?  Makes the $150 million for Pure Storage look like pocket change.

My initial conversations with the IBM team, before joining, validated this feeling I was getting. IBM had brought in the best and was basing many of them in Houston. Just as important to me was seeing that many of the other talented people who had left TMS in the years prior to the acquisition were returning, including friends who had held great roles at Oracle and HP.

If history has taught us anything about the solid state storage industry, it is that the fate of companies rises and falls on the strength of their relationships with the big companies in the industry. STEC made the first big splash, locking up OEM deals for Zeus-IOPS. Fusion-io made the next big splash in the PCI Flash space, locking up OEM deals for ioDrives and ioScale. Violin had their first big peak on the back of a short-lived relationship with HP. All of these companies' fortunes have surged, and at times collapsed, on these relationships. It only made sense to me, then, that the one thing better than being OEM'd by the big company was being the big company; and so far I am right.

So here we are at the end of 2013. I think 2013 will be seen as the year that the all flash array market finally took off, generating the real revenues that have been anticipated for years.

2014 will witness the bifurcation of the all flash array market that Jeff Janukowicz at IDC first called out in a research report a couple of years ago, creating real separation between products focused on “absolute performance” and those focused on the “enterprise.” In some ways this is a bit like talking about the market that is and the market that could be. Today, the majority of all flash array purchases in the enterprise are used for database acceleration (bare-metal or virtual). These workloads, more so than many others, especially benefit from absolute-performance systems and notably do not benefit from inline data deduplication. Curiously, the venture-backed companies in the market are almost exclusively focused on the enterprise feature-rich category. Even Violin, who once had a credible offering in the absolute-performance category, has chosen an architectural path that moves them away from that segment of the market. The company with the most compelling solution in this category (in my clearly biased opinion) is IBM with its FlashSystem product. For at least a decade I have heard the industry characterize the RamSan, and now the FlashSystem, as the Ferrari of flash arrays. What our competitors have discovered along the way is that performance is the first cut at most customer sites, and beyond that FlashSystem brings a much better economic solution because of its low latency, high density and low power consumption.

Does this mean IBM doesn’t have a play in the all-flash enterprise category? Stay tuned. It’s not 2014 yet. In fact, mark your calendars for the year’s first big announcement webcast: bit.ly/SCJanWebcast

And really, did you even think that thought? IBM has the broadest flash portfolio in the industry. IBM has clearly said that the market is approaching a tipping point, a point where the economic benefits of flash outweigh its higher cost. This tipping point will lead to the all-flash data center. And nobody understands the data center better than IBM.

I am looking forward to an eventful 2014.  Happy Holidays and Happy New Year.

Woody


Power and PCI flash, the performance balancing act!

October 9, 2013

Woody Hutsell, http://www.appICU.com

Amidst a slew of other IBM announcements yesterday was IBM’s launch of the Flash Adapter 90 for AIX and Linux based Power Systems environments. Flash Adapter 90 is a PCIe flash solution that accelerates performance and eliminates the bottleneck for latency-sensitive, IO-intensive applications. Enterprise application environments such as transaction processing (OLTP) and analytics (OLAP) benefit from having high performance Power processors balanced against equally high performance PCIe flash. This balance increases server productivity, application productivity, and user productivity, driving a more efficient business.

The Flash Adapter 90 is a full-height, half-length PCIe Gen 2 adapter providing 900GB of usable eMLC flash capacity. Flash Adapter 90 is a native flash solution without the bottlenecks common to other PCI flash solutions; it uses on-adapter processing and metadata to lessen the impact on server RAM and processors. Up to four of the adapters can be used inside supported Power servers.

In a recent article, “Flash fettlers Fusion-io scoop IBM as reseller partner”, Chris Mellor observed that IBM’s recent decision to launch the Fusion ioScale-based IBM Flash Adapters Enterprise Value for System x® solutions was evidence that IBM had abandoned the PCI flash technology it received when it acquired Texas Memory Systems. The Flash Adapter 90 product launch demonstrates that IBM has not discarded this technology; it merely waited for the perfect time and perfect platform to bring it to market. IBM has consistently demonstrated a desire to meet client needs, whether that involves engaging IBM R&D to develop solutions, such as the Flash Adapter 90, or bringing in industry standard components.

Flash Adapter 90 brings IBM’s patented Variable Stripe RAID technology and enterprise performance to the Power Systems client base, which has anxiously awaited a solution with a driver tuned to take advantage of the AIX and Linux operating systems. Power Systems are acknowledged as the world’s fastest servers and now have a bit of the world’s fastest storage, creating an unbeatable combination of processor and storage for accelerating business-critical applications. Along the way, IBM tested the combined solution with IBM’s Identity Insight for DB2, demonstrating IBM’s ability to combine multiple products, from application to server to storage, for a consistent, predictable client experience. This combination of products showed performance superior to other tested configurations at a much lower cost per solution.

With this announcement, IBM offers its Power Systems clients more choice in deciding what flash storage they will use to accelerate their applications. Power Systems clients can consume flash from IBM in whatever manner best suits their data center or application environment. Clients may choose from IBM FlashSystem, IBM Flash Adapter 90, and EXP 30 Ultra SSD Drawers (a direct-attach storage solution), in addition to a host of other IBM System Storage products. For applications or client architectures that are server-centric, i.e. those that use server scale-out/clustering for reliability, the Flash Adapter 90 is a low cost way to deliver outstanding application performance. Applications based on DB2 and Oracle databases are excellent candidates for acceleration.

Long live the Flash Adapter 90.

More information on IBM Power Systems flash options can be found at:  http://www-03.ibm.com/systems/power/hardware/peripherals/ssd/index.html


Server-Side Caching

January 20, 2012

Woody Hutsell, http://www.appICU.com

Fusion-io recently posted this blog that I wrote:   http://www.fusionio.com/blog/why-server-side-caching-rocks/

I feel strongly that 2011 will be remembered, at least in the SSD industry, for establishing the role of server-side caching using Flash.  I recall soaking in all of the activity at last year’s Flash Memory Summit and being excited about the new ways Flash was being applied to solve customer problems.  It is a great time to be in the market.  I look forward to sharing more of the market’s evolution with you.

 

 


Flash Memory Summit Presentation

September 6, 2011

Woody Hutsell, www.appICU.com

For those of you who are interested, here is a link to a presentation that I delivered at the 2011 Flash Memory Summit on “Mission Critical Computing with SSD”.

http://www.flashmemorysummit.com/English/Collaterals/Proceedings/2011/20110810_T1B_Hutsell.pdf

 


Video Entry: The Solid State Disk Market

August 18, 2011

Woody Hutsell, AppICU

I had a chance to sit down with my friends from MarketingSage (www.MarketingSage.com) at the Flash Memory Summit last week. They asked me some tough questions about the solid state disk market; here are the videos they prepared from our discussion:

Where do solid state disk cache solutions fit in the market?

How is virtualization changing demand for SSD and Flash?

What’s driving SSD sales success with end-users?

How would you describe the vendor landscape for solid state disks?

I hope you enjoy these brief videos and thanks again to MarketingSage for making it happen.

Woody


Third Party Caching

August 1, 2011

By Woody Hutsell, appICU

I have a point of view about third party caching (particularly as it applies to external systems, as opposed to caching at the server with PCIe) that is different from that of many in the industry. Some will see this as bashing of some particular product, but it is not intended to be that. As far as I know, I am not competing with a third party caching solution at any customer site. My goal here is to start a discussion on third party caching; I will lead with my opinions and hope that others weigh in. I am open to changing my mind on this topic, as I have numerous friends in the industry who stand behind this category.

First, some background.  Many years ago, 2003 to be exact, I helped bring a product to market to provide third party caching with RAM SSD.  I believed in the product and was able to get many others to believe in the product.  What I was not able to do was to get many people to buy the product.  As I look at solutions on the market, I can see that companies trying to sell third party caching solutions are encountering the same obstacles and are fixing or working around the problems.  Here are some problems I have experienced with third party caching solutions:

1.  Writes.  The really delicious problem to solve several years ago with a RAM caching appliance was related to write performance.  Many storage systems had relatively small write caching capabilities that caused major pain for write intensive applications.  A large RAM SSD (at the time I think we were using 128GB RAM) as a write cache was a major problem solver for these environments.  Several things have happened to make selling write caching as a solution more difficult:

•  RAID systems increasingly offered reasonable cache levels, narrowing the field of customers that needed write caching. At the time we offered this RAM write cache, we thought Xiotech customers were the perfect target because they did not believe in write caching at the time. The fact is, the combined solution worked out pretty well, but it was only useful until Xiotech realized that offering their own write cache could solve most customer problems.

•  Third party write caching introduces a point of failure into the solution. If you write-cache, you have to be at least as reliable as the solution you are caching; otherwise you have produced a net loss in reliability for the customer.

•  Write caching is nearly impossible if the backend storage array has replication or snapshot capabilities. Arrays with snapshots have to be cache-aware when they snapshot, or else they risk taking a snapshot without the full data set. I have seen companies try to get around this, but most of the solutions look messy to me.

•  Putting a third party device from a small company in front of a big expensive product from a big company is a good way for a customer to lose support.  We realized early on that the only way for this product to really succeed was to get storage OEMs to certify it and approve it for their environments (we did not do very well at this).

2.  Reads.  Given the challenges with write caching, it seems to me that most companies today are focused on read caching. Read caching solutions have a long history. Gear 6 was one of the first to take the space seriously and had some limited success in environments such as oil & gas HPC and rendering. Some of the companies that have followed Gear 6 seem to be following in their footsteps, albeit with markedly different hardware and cost structures. Here are some issues I see with read caching:

•  A third party read-only cache adds a write bottleneck, since writes to the cache have to be subsequently written to the backing storage: latency injection, in other words (see the sketch after this list). I assume there are architectures that get around this today.

•  A third party read-only cache really only makes sense if your controller is 1) poorly cached, 2) lacking fast backend storage, 3) processor limited, or 4) inherently high latency. This may be the real long term problem for this market. Whether you talk about SAN solutions or NAS solutions, all storage vendors today are offering Flash SSD as disk storage. In SAN environments, many vendors can dynamically tier between disk levels (thus implementing their own internal kind of caching). NetApp has Flash PAM cards. Both BlueArc and NetApp can implement read caching. The only hope is that the customer has legacy equipment or scoped their solution so poorly that they need a third party caching product.

•  Third party caching creates a support problem. Imagine you are NetApp and the customer calls in and says, “I am having problems with my NetApp storage, can you fix it?” Support says, “Describe the environment.” Customer says, “blah…blah…third party cache…NetApp.” NetApp says, “That is not a supported environment.” I always saw this as a major limiting factor for third party caching solutions. How do you get the blessing of the array/NAS vendor so that your customer maintains support after placing your box between the servers and the storage?

•  Third party read caching solutions cannot become a single point of failure for the architecture.
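
To make the latency injection point concrete, here is a minimal sketch, not modeled on any particular product, of how a write-through, third-party read cache changes average read and write latency. All of the latency figures and hit rates are invented for illustration.

# Illustrative sketch only: a write-through, third-party read cache sitting
# between servers and a storage array. All latency numbers are hypothetical.

CACHE_READ_MS = 0.2    # read served from the caching appliance
CACHE_HOP_MS = 0.3     # extra hop the appliance adds to any request it forwards
ARRAY_READ_MS = 6.0    # read served by the backend disk array
ARRAY_WRITE_MS = 2.0   # write acknowledged by the backend array

def read_latency_ms(hit: bool) -> float:
    """A cache hit avoids the array; a miss pays for the extra hop plus the array."""
    return CACHE_READ_MS if hit else CACHE_HOP_MS + ARRAY_READ_MS

def write_latency_ms() -> float:
    """A read-only cache cannot absorb writes: every write still traverses the
    appliance and must be committed to the backend array before it is acknowledged."""
    return CACHE_HOP_MS + ARRAY_WRITE_MS

if __name__ == "__main__":
    for hit_rate in (0.5, 0.8, 0.95):
        avg_read = hit_rate * read_latency_ms(True) + (1 - hit_rate) * read_latency_ms(False)
        print(f"hit rate {hit_rate:.0%}: avg read {avg_read:.2f} ms, "
              f"write {write_latency_ms():.2f} ms (vs {ARRAY_WRITE_MS:.2f} ms direct)")

Reads get faster as the hit rate climbs, but every write now carries the appliance's extra hop, which is the trade-off the bullet above describes.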

So, there it is. I am looking forward to some insightful comments and feedback from the industry. As you can see, many of my opinions are based on scars from prior efforts in this segment and are not meant to be a reflection on existing products and approaches.

 

 


Tales from the Field

June 22, 2011

by Woody Hutsell, www.appICU.com

Instead of marketing from afar, I have been selling from the trenches, and let me tell you, the world looks very different from this viewpoint.

I have a variety of observations from my first 9 months of working closely with IT end-users:

  1. At least 50% of the IT people I talk to are generally unfamiliar with solid state storage.  These 50% are so busy worrying about backups, replication, storage capacity and virtualization that it would take a whole screaming train full of end users before they would care about performance.  What they are likely to think they know about SSDs is that they are unreliable and don’t have great write performance.  I always ask these end users about performance or interest in SSD and usually get fairly blank looks back.  Don’t get me wrong, their lack of interest in performance or SSD is no reflection on them, just a reflection on their situation.  Maybe they don’t need any more performance than they already get from their storage.  Maybe performance is so far down their list of concerns as to not matter.  Maybe they just can’t budget a big investment in SSD.
  2. Some high percentage of IT buying is done without any real research.  So much for technical marketing.  You could write any number of case studies, brochures and white papers and these guys wouldn’t learn about it unless the sales person sitting across from them drops in at just the right time immediately after the aforementioned train full of end-users has started complaining about performance (and the IT guy happens to have budget to spend on something other than backup, storage capacity, replication or virtualization).
  3. These groups are deploying server virtualization en masse.
  4. These groups are standardizing on low cost storage solutions.  The rush to standardize is driven by the number one reality affecting many IT shops: they are understaffed and their budgets are constrained.  The lack of staffing means that it is hard to get staff trained on multiple products, and life is easier if they can manage multiple components from a single interface.  The lack of budget means that IT buyers have to make compromises when it comes to storage solutions.  Because of item #2 (above), they are reasonably likely to buy storage from their server vendor and often find their way to the bottom of the storage line-up to save money.

You might think these observations would be disheartening, but really I think the story is that SSD is just starting to make its way through to the more mature buyers in the market.  Eventually, I believe that all IT storage buyers will be as familiar with and concerned with protecting application performance as they are with capacity and reliability.

A case in point: I have run into at least two customers where the drive to standardize on VMware and low cost storage is crushing application performance for mission critical applications.  The good news for these IT shops is that they have low storage costs and an easy to manage environment (because they have one storage vendor and one server virtualization solution).  The bad news is that their core business is suffering.

From my limited point of view, standardization is something that the IT guys like and the application owners don’t like.  You might assume that I think the IT guys are short-sighted, but no, increasingly I am seeing that they just don’t have a choice; they have to standardize or die under a staggering workload and shrinking budget.  Something, though, has to give.

A core business of one of these operations was risk analysis.  This company deployed low-cost storage and had virtualized the entire IT environment with VMware (including the SQL Server database).  The entire IT infrastructure ran great for this customer, but a mission critical sub-terabyte database was a victim of standardization.  The risk managers, whose decisions drove business profitability, were punished by slow application response times every time they ran complex analyses.

The second business is really a conglomerate of some 50+ departments.  These departments were not created equal; there were some really profitable big departments and some paper-pushing small departments.  To the benefit of some end users and the tremendous detriment of others, this business standardized on a middle tier storage solution with generous capacity scalability but not so generous performance scalability.  Their premier revenue generating department was suffering with, you won’t believe this, 60 millisecond latencies from storage for their transaction processing system.  Yikes.  For the non-storage geeks reading this blog, a really fast solid state storage system will return data to the host in well under 1 millisecond.  A well-tuned hard disk based RAID array will return data in 5 to 7 milliseconds.  A 60 millisecond response time is indicative of a major storage bottleneck.  A 60 millisecond response time on a single request is no big deal, but during a batch process or spread across many concurrent users, applications become very slow: end users wait for seconds, or batch processes take too long to complete, resulting in blown batch processing windows.
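
To put those latencies in perspective, here is a back-of-the-envelope sketch. The request count is a made-up example; the per-request latencies are the ones cited above (sub-millisecond flash, 5 to 7 milliseconds for a tuned disk RAID, and the 60 milliseconds this customer was seeing).

# Rough arithmetic: total storage wait for a serialized batch job at different
# per-request latencies. REQUESTS is a hypothetical figure for illustration.

REQUESTS = 1_000_000  # serialized I/Os issued by a nightly batch job

for label, latency_ms in (("flash SSD", 0.5), ("tuned disk RAID", 6.0), ("overloaded array", 60.0)):
    total_hours = REQUESTS * latency_ms / 1000.0 / 3600.0
    print(f"{label:>17}: {latency_ms:5.1f} ms per I/O -> {total_hours:5.1f} hours waiting on storage")

At 60 milliseconds that hypothetical workload spends roughly seventeen hours waiting on storage, versus under ten minutes on fast flash, which is exactly how batch windows get blown.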

For now, the story for these two environments is not finished.  Once companies head down the standardization trail they are pretty confident and committed.  Eventually, the wheels fall off and people begin to realize that it is as bad to standardize on all low cost storage as it is to standardize on all high end storage.  Eventually, people realize that IT needs to align to business and not the other way around.

As companies amass larger data stores and the price and options for deploying SSD evolve, SSD solutions will become more common in the data center and a part of each IT manager’s bag of tricks.  Zsolt Kerekes, at StorageSearch.com, put it best in his 2010 article “This Way to Petabyte SSD” (http://www.storagesearch.com/ssd-petabyte.html) when he said “The ability to leverage the data harvest will create new added value opportunities in the biggest data use markets – which means that backup will no longer be seen as an overhead cost. Instead archived data will be seen as a potential money making resource or profit center. Following the Google experience – that analyzing more data makes the product derived from that data even better. So more data is good rather than bad. (Even if it’s expensive.)”


Waves of Opportunity

May 28, 2011

by Woody Hutsell at www.appicu.com

The next big opportunity/threat for SSD manufacturers is playing itself out right now. SSD vendors are scrambling to be a part of this next big wave. The winners are your next acquisition targets or companies poised to go public. The losers will hope that this new wave expands the overall market just like the first wave.

The first big wave in the enterprise SSD market was the rapid adoption of hard disk form factor SSDs for use in enterprise storage arrays. The SSD companies most seriously contending to ride this wave were BitMicro and STEC. STEC, by virtue of their GnuTek acquisition, had the right product at the right time and were able to win early business with EMC. Suddenly, venture money was pouring into the market and any company that had ever put a Flash chip on a board was selling Flash disk drives. The clear winners in this category have been STEC, who continues to have great revenue growth, and Pliant’s investors who have successfully sold their company to SanDisk after getting some traction with the OEM community. The story in this market is not finished as companies like Western Digital, Seagate, LSI and Intel look to chip away at this part of the business. At the same time though, a few companies were swept out to sea and others saw their golden opportunity for enterprise riches turn into dreams of big volumes (but low margins) in consumer markets. As I have argued before, the use of Flash hard drives in enterprise arrays is really about accelerating infrastructures more than about accelerating a specific application. This first big wave actually increased opportunities for all SSD companies by increasing the market size and validating the technology for mainstream use.

The newest wave to entice and yet concern SSD manufacturers is hitting closer to home for those manufacturers focused on the application acceleration market. For many years, the data warehousing sector has produced some great success stories for companies like Netezza, which tightly bundled database functionality with hardware. Netezza’s success led Oracle and HP to try Exadata, which was anything but a rousing success in the market. But somewhere along the way, Oracle was watching what Sun was doing with solid state storage and noticed a way to take the relatively less exciting Exadata and turn it into something much more captivating and similarly named: Exadata 2. Some day we will learn whether the prospects of Exadata 2 were a big motivator for the Sun acquisition or just a quick way to demonstrate that Oracle was serious about the hardware market. Either way, Oracle’s claims of big margins and big potential revenue streams for Exadata 2 have ignited a flurry of activity in the market. Vendors are already clamoring to get into this space, and a series of speed dating exercises is going on as database vendors, server vendors and SSD vendors try to find some magical combination that helps them beat Oracle in this new market. Will the rich SSD vendors get richer still in this category, or will the remaining SSD manufacturers find new partners, buyers and OEMs? Can any combination beat Oracle?

Whoever the winners, this second wave will show more clearly the ability of a tightly integrated solid state storage solution to increase application performance.


Long Live RAM SSD

April 22, 2011

by Woody Hutsell at www.appICU.com

In late 2006, Robin Harris at www.StorageMojo.com wrote “RAM-based SSDs are Toast – Yippie ki-yay”.  As a leader of the largest RAM-based solid state storage vendor at the time, I can assure you that his message was not lost on me.  In fact, we posted a response to Robin in “A Big SSD Vendor Begs to Differ”, to which Robin famously responded, “If I were TMS, I’d ask a couple of my better engineers to work part time on creative flash-based SSD architectures.”  I cannot honestly remember the timing, but it is fair to say that the comment at the very least reinforced our internal project to develop a system that relied heavily on SLC NAND Flash for most of its storage capacity.  Within a few years, TMS had transitioned from a RAM-based SSD company to a company whose growth was driven primarily by Flash-based SSD.  Nearly five years after the predicted death of RAM-based SSD, I thought it would be interesting to evaluate the role of RAM SSD in the application acceleration market.

First off, it is important to note that RAM-based SSDs are not toast.  In fact, a number of companies continue to promote RAM-based SSDs, including my employer, ViON, which is still marketing, selling and supporting them.  What may be more surprising is that the intervening years have actually seen a few new companies join the RAM-based SSD market.  What all of these companies have identified is that there are still use cases for the performance density available with RAM-based SSD.  In particular, RAM-based SSDs continue to be ideal for database transaction logs, temporary segments, or small to medium databases where the ability to scale transactions without sacrificing latency is critical.  Customers in the e-commerce, financial and telecom markets will still use RAM SSD.  When a customer tells me they need to be able to say they have done “everything possible” to make a database fast, I still point them to RAM SSD if the economics are reasonable.  I think the RAM SSD business has promise for these specific use cases, and I will watch with curiosity the companies that try to expand the use cases to much higher capacities.

The second thing to note is that without RAM, Flash SSDs would not be all that appealing.  You will probably all recall the reaction to initial Flash SSDs that had write performance slower than hard disk drives.  How did the vendors solve this problem?  Well for one thing they over-provisioned Flash so that writes don’t wait so much on erases.  In enterprise solutions, however, the real solution is RAM.  Because the NAND Flash media just needs a little bit of help, a small amount of RAM caching goes a long way toward decreasing write latencies and dramatically improving peak and sustainable write IOPS.  This increases the cost and complexity of the Flash SSD but makes it infinitely more attractive to the application acceleration market.
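
As a rough illustration of that idea, and not a description of any vendor’s actual controller, here is a toy model of a small RAM buffer sitting in front of NAND flash: writes are acknowledged as soon as they land in RAM and are destaged to flash in the background, while reads check the buffer first so they always see the latest data. The buffer size and data structures are invented for illustration.

# Toy model of RAM write buffering in front of NAND flash (illustration only).
from collections import deque
from typing import Optional

class RamBufferedFlash:
    """Sketch: a small RAM buffer absorbs writes before they reach flash."""

    def __init__(self, buffer_slots: int = 1024):
        self.buffer = deque()        # fast, small RAM write buffer of (lba, data) pairs
        self.buffer_slots = buffer_slots
        self.flash = {}              # stand-in for NAND flash pages, keyed by LBA

    def write(self, lba: int, data: bytes) -> None:
        # Acknowledge the write as soon as it lands in RAM; only when the
        # buffer is full does the caller wait on flash-speed destaging.
        if len(self.buffer) >= self.buffer_slots:
            self.destage()
        self.buffer.append((lba, data))

    def destage(self) -> None:
        # Flush buffered writes to flash; a real controller would coalesce
        # them into full pages or stripes to minimize program/erase overhead.
        while self.buffer:
            lba, data = self.buffer.popleft()
            self.flash[lba] = data

    def read(self, lba: int) -> Optional[bytes]:
        # Serve reads from the newest buffered copy first, then from flash.
        for buffered_lba, data in reversed(self.buffer):
            if buffered_lba == lba:
                return data
        return self.flash.get(lba)

In this toy model a burst of small writes returns at RAM speed until the buffer fills, which is the effect described above: a little RAM makes the flash behind it feel much faster to the application.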

Third, the companies with the most compelling Flash SSD performance characteristics have come out of the RAM SSD market.  These companies had developed low latency, high bandwidth controllers and backplanes that were tuned for RAM.  Contrast this with the difficulties the integrated storage manufacturers have had since their controllers and backplanes were tuned for hard disk drives.

Casual industry observers might ask a couple of other questions about this market:

  • With the rapid decrease in RAM prices, is RAM likely to replace Flash as the storage media of choice for enterprise SSD?  No.
  • Are the large integrated storage companies likely to add a non-volatile RAM SSD tier in front of their new Flash SSD tier?  I tend to doubt it, but would not rule it out completely.
  • Aren’t customers that start with Flash going to look to RAM SSD to go even faster?  I think some of these customers will want more speed but for most users Flash will be “good-enough”.
  • Aren’t customers that start with RAM likely to move to Flash SSD on technology refreshes?  Probably not.  RAM SSD is addictive.  Once you start with RAM SSD, it is hard to contemplate going slower.

To put this all in perspective, Flash SSDs did not kill the RAM SSD market.  In some ways, Flash SSD and the big companies who have embraced it have added legitimacy to the RAM SSD market that it lacked for decades.  I think RAM SSDs will continue to be an important niche in the overall application acceleration market and anticipate innovative companies introducing new use cases and products over the next five years.

To give credit where credit is due: while Flash SSDs did not kill the RAM SSD market, Flash has come to dominate the enterprise storage landscape like no other technology since the advent of disk storage.  Robin Harris may not have accurately predicted the end of RAM SSD, but he was at the forefront of the analysts and bloggers, including Zsolt at www.StorageSearch.com, who predicted Flash SSD’s widespread success.


Escape Velocity

March 25, 2011

What does it take to build an SSD company into a sustainable enterprise? What does it take to make it profitable? What does it take to go public? Given that many of the SSD companies focused on the enterprise are private, it is awfully hard to get good data on the costs of entry.

As a participant in the SSD industry for the last decade, I have had the benefit of watching the ascent of Fusion-IO from a small start-up company to a marketing machine and can now observe along with the rest of you their attempt to go public.

Say what you will about Fusion-IO and their products, and believe me at various times I have said all of those things, good and bad, they are a marketing machine. But to leave it at that would be a terrible injustice. David Flynn and his team launched a product that the rest of us did not see the need for. Don’t get me wrong, Cenatek and MicroMemory both had server-based PCI SSDs well before Fusion-IO, but Fusion-IO did two things that really set their product apart: 1) they were the first PCI SSD company to really take advantage of the first generation of Flash suitable for enterprise customers; 2) they were unashamed about beating their way into the enterprise market even if they took what I consider a fire:ready:aim approach to marketing. I remember the early days of Fusion-IO marketing, which was clearly aimed at companies using storage area networks. The ad went something like “the power of a SAN in your server”. Interesting concept, but the people who used storage area networks were pretty sure that a PCI card was not about to replace their SAN. I know some people have done this, but by and large I would suggest that what Fusion discovered, and now markets to directly, is the large set of customers that need server-based application acceleration. These scale-out applications include those run at customers like Facebook who improve application performance by adding servers. Historically, those servers would be laden with expensive high-density RAM. Fusion-IO brought them a product that was not cheap, but less expensive than RAM. The other sweet spot Fusion discovered was that this market is very read-intensive and therefore a good fit for MLC Flash, enabling customers to get better density and better pricing.

I remember the first time I met David Flynn. I was participating in a SNIA Summer Symposium in early 2008. Until this point, my only real exposure to Fusion-IO had been from their marketing. When the topic in the room turned to Flash technology, David was quick to join the discussion and showed a grasp of Flash that clearly exceeded that of most of the people in the room. From that point, I knew that Fusion was much more than marketing.

At one point during my time at TMS I tried to assess what it was that made Fusion-IO go from yet another random company involved in SSD to a company that was always in the limelight (disproportionately to their revenues – another hallmark of good marketing). I used Google trends data to see if there was an inflection point for Fusion-IO and I found it. The inflection point was their hiring of Steve Wozniak. What a brilliant publicity move and one that I think continues to pay off for Fusion-IO. From the day his involvement with Fusion-IO was announced, the company took off in terms of web hits. I can’t tell you how much time, because it would be embarrassing, but I spent a lot of time trying to figure out how to create a similar event at TMS. I thought if we could hire “Elvis” we would have a chance.

The next brilliant tactical move by Fusion-IO was tying up the server OEMs. You see, one of the biggest challenges with selling products that go into another manufacturer’s servers is getting that server vendor to bless and support the solution. Fusion realized this problem early on and announced relationships with HP, IBM and Dell. Not to mention that Michael Dell was an investor. With those announcements the big problem was solved: the big server vendors had blessed Fusion-IO with credibility typically reserved for companies like Seagate and Intel. It is worth noting that these server vendors hate single-sourced products, leaving plenty of room for competitors to get the same blessings.

Plenty of things, good and bad, have happened along the way for Fusion-IO. They encountered many of the problems that fast-growing, venture backed companies have. There were delayed product releases, there were quality problems, Fusion’s approach to business created a lot of enemies, they went through an impressive number of CEOs (though David Flynn remained the key guy all along) and sales teams, and there were missteps in channel marketing strategy, but through it all they have shown impressive perseverance.

As Fusion’s revenues have grown in the last two years to match their marketing, they have added some really impressive depth to their team. Marius Tudor, who largely led the sales & marketing efforts for industry pioneer BitMicro, is involved. Another marketing genius, Gary Orenstein, who led marketing at Gear 6 among other places, has joined the team. This is not to imply that I don’t have deep respect for their Chief Marketing Officer Rick White, another gaming enthusiast like myself, but really, does he need this much help? Leaving behind some marketing talent for the rest of the industry would have been gracious.

For the SSD vendor community, whatever you think of Fusion-IO, their effort to go public is a major milestone for the industry. Have you ever tried to get private company valuations without comparables? Valuations become guesswork. Do you have a great SSD idea and need VCs to get excited about it? Fusion-IO successfully going public would help the rest of the private companies eyeing their own exit strategies (going public, staying private, being acquired). What does it take for an SSD company to go public? What revenue, what profitability, what gross margins? We may soon find out. In this pursuit, I for one am rooting for Fusion-IO and, in turn, the industry.