Tales from the Field

June 22, 2011

by Woody Hutsell, www.appICU.com

Instead of marketing from afar, I have been selling from the trenches, and let me tell you, the world looks very different from this viewpoint.

I have a variety of observations from my first 9 months of working closely with IT end-users:

  1. At least 50% of the IT people I talk to are generally unfamiliar with solid state storage.  These 50% are so busy worrying about backups, replication, storage capacity and virtualization that it would take a whole screaming train full of end users before they would care about performance.  What they think they know about SSDs is that they are unreliable and don’t have great write performance.  I always ask these end users about performance or interest in SSD and usually get fairly blank looks back.  Don’t get me wrong; their lack of interest in performance or SSD is no reflection on them, just a reflection on their situation.  Maybe they don’t need any more performance than they already get from their storage.  Maybe performance is so far down their list of concerns as to not matter.  Maybe they just can’t budget a big investment in SSD.
  2. Some high percentage of IT buying is done without any real research.  So much for technical marketing.  You could write any number of case studies, brochures and white papers, and these buyers would never see them unless the salesperson sitting across from them drops in at just the right time, immediately after the aforementioned train full of end-users has started complaining about performance (and the IT guy happens to have budget to spend on something other than backup, storage capacity, replication or virtualization).
  3. These groups are deploying server virtualization en masse.
  4. These groups are standardizing on low cost storage solutions.  The rush to standardize is driven by the number one reality affecting many IT shops:  they are understaffed and their budgets are constrained.  The lack of staffing means that it is hard to get staff trained on multiple products, and life is easier if they can manage multiple components from a single interface.  The lack of budget means that IT buyers have to make compromises when it comes to storage solutions.  Because of item #2 (above), they are reasonably likely to buy storage from their server vendor and often find their way to the bottom of the storage line-up to save money.

You might think these observations would be disheartening, but really I think the story is that SSD is just starting to make its way through to the more mature buyers in the market.  Eventually, I believe that all IT storage buyers will be as familiar with and concerned with protecting application performance as they are with capacity and reliability.

As a case in point, I have run into at least two customers where the drive to standardize on VMware and low cost storage is crushing application performance for mission critical applications.  The good news for these IT shops is that they have low storage costs and an easy to manage environment (because they have one storage vendor and one server virtualization solution).  The bad news is that their core business is suffering.

From my limited point of view, standardization is something that the IT guys like and the application owners don’t like.  You might assume that I think the IT guys are short-sighted, but no, increasingly I am seeing that they just don’t have a choice; they have to standardize or die under a staggering workload and shrinking budget.  Something, though, has to give.

A core business of one of these operations was risk analysis.  This company deployed low cost storage and had virtualized the entire IT environment with VMware (including the SQL Server database).  The entire IT infrastructure ran great for this customer, but a mission critical sub-terabyte database was a victim of standardization.  The risk managers, whose decisions drove business profitability, were punished with slow application response times every time they ran complex analyses.

The second business is really a conglomerate of some 50+ departments.  These departments were not created equal: there were some really profitable big departments and some paper-pushing small departments.  To the benefit of some end users and the tremendous detriment of others, this business standardized on a middle tier storage solution with generous capacity scalability but not so generous performance scalability.  Their premier revenue generating department was suffering with, you won’t believe this, 60 millisecond latencies from storage for their transaction processing system.  Yikes.

For the non-storage geeks reading this blog, a really fast solid state storage system will return data to the host in well under 1 millisecond.  A well-tuned hard disk based RAID array will return data in 5 to 7 milliseconds.  A 60 millisecond response time is indicative of a major storage bottleneck.  Experiencing a 60 millisecond response time on a single request is no big deal, but when that latency applies to a batch process or is spread across many concurrent users, applications get very slow, end-users wait for seconds, and batch processes take too long to complete, resulting in blown batch processing windows.
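
To make the math concrete, here is a rough, illustrative sketch (the request count and latencies are hypothetical, not measurements from either customer) of how per-I/O latency compounds for a serial batch job:

```python
# Back-of-the-envelope sketch: how per-I/O latency compounds when a batch job
# issues many dependent storage requests one after another.

def batch_runtime_hours(io_count, latency_ms):
    """Wall-clock hours to complete io_count serial requests at latency_ms each."""
    return (io_count * latency_ms / 1000.0) / 3600.0

IO_COUNT = 1_000_000  # hypothetical nightly batch issuing one million reads

for label, latency_ms in [("solid state storage (<1 ms)", 0.5),
                          ("well-tuned RAID array (5-7 ms)", 6.0),
                          ("bottlenecked array (60 ms)", 60.0)]:
    print(f"{label:32s} -> {batch_runtime_hours(IO_COUNT, latency_ms):5.1f} hours")

# Approximate output:
#   solid state storage (<1 ms)      ->   0.1 hours
#   well-tuned RAID array (5-7 ms)   ->   1.7 hours
#   bottlenecked array (60 ms)       ->  16.7 hours
```

Same workload, same code; the only variable is storage response time.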

For now, the story for these two environments is not finished.  Once companies head down the standardization trail they are pretty confident and committed.  Eventually, the wheels fall off and people begin to realize that it is as bad to standardize on all low cost storage as it is to standardize on all high end storage.  Eventually, people realize that IT needs to align to business and not the other way around.

As companies amass larger data stores and the price and options for deploying SSD evolve, SSD solutions will become more common in the data center and a part of each IT manager’s bag of tricks.  Zsolt Kerekes, at StorageSearch.com, put it best in his 2010 article “This Way to Petabyte SSD” (http://www.storagesearch.com/ssd-petabyte.html) when he said “The ability to leverage the data harvest will create new added value opportunities in the biggest data use markets – which means that backup will no longer be seen as an overhead cost. Instead archived data will be seen as a potential money making resource or profit center. Following the Google experience – that analyzing more data makes the product derived from that data even better. So more data is good rather than bad. (Even if it’s expensive.)”


Waves of Opportunity

May 28, 2011

by Woody Hutsell at www.appicu.com

The next big opportunity/threat for SSD manufacturers is playing itself out right now. SSD vendors are scrambling to be a part of this next big wave. The winners are your next acquisition targets or companies poised to go public. The losers will hope that this new wave expands the overall market just like the first wave.

The first big wave in the enterprise SSD market was the rapid adoption of hard disk form factor SSDs for use in enterprise storage arrays. The SSD companies most seriously contending to ride this wave were BitMicro and STEC. STEC, by virtue of their GnuTek acquisition, had the right product at the right time and were able to win early business with EMC. Suddenly, venture money was pouring into the market and any company that had ever put a Flash chip on a board was selling Flash disk drives. The clear winners in this category have been STEC, who continues to have great revenue growth, and Pliant’s investors who have successfully sold their company to SanDisk after getting some traction with the OEM community. The story in this market is not finished as companies like Western Digital, Seagate, LSI and Intel look to chip away at this part of the business. At the same time though, a few companies were swept out to sea and others saw their golden opportunity for enterprise riches turn into dreams of big volumes (but low margins) in consumer markets. As I have argued before, the use of Flash hard drives in enterprise arrays is really about accelerating infrastructures more than about accelerating a specific application. This first big wave actually increased opportunities for all SSD companies by increasing the market size and validating the technology for mainstream use.

The newest wave to entice and yet concern SSD manufacturers is hitting closer to home for those manufacturers focused on the application acceleration market. For many years, the data warehousing sector has produced some great success stories for companies like Netezza that tightly bundled database functionality with hardware. Netezza’s success led Oracle and HP to try Exadata, which was anything but a rousing success in the market. But somewhere along the way, Oracle was watching what Sun was doing with solid state storage and noticed a way to take the relatively unexciting Exadata and turn it into something much more captivating, the similarly named Exadata 2. Someday we will learn whether the prospects of Exadata 2 were a big motivator for the Sun acquisition or just a quick way to demonstrate that Oracle was serious about the hardware market. Either way, Oracle’s claims of big margins and big potential revenue streams for Exadata 2 have ignited a flurry of activity in the market. Already vendors are clamoring to get into this space, and there is a series of speed-dating exercises going on as database vendors, server vendors and SSD vendors try to find some magical combination that helps them beat Oracle in this new market. Will the rich SSD vendors get richer still in this category, or will the remaining SSD manufacturers find new partners, buyers and OEMs? Can any combination beat Oracle?

Whoever the winners, this second wave will show more clearly the ability of a tightly integrated solid state storage solution to increase application performance.


Consistency Groups: The Trouble with Stand-alone SSDs

February 28, 2011

SSDs (Solid State Disks) are fast; everyone knows this.  So, if they are all so very fast, why are we still using spinning disks at all?  The thing about SSDs (OK, well, one of the things) is that while they are unarguably fast, they need to be implemented with reliability and availability in mind just like any other storage media.  Deploying them in an enterprise environment can be sort of like “putting all of your eggs in one basket”.  In order for them to meet the RAS needs of enterprise customers, they must be “backed up” in some meaningful way.  It is not good enough to make back-up copies occasionally; we must protect their data in real time, all of the time.  Enterprise storage systems do this in many different ways, and over time, we will touch upon all of these ways.  Today, we want to talk about one of the ways – replication.

One of the key concepts in data center replication is the concept of consistency groups.  A consistency group is a set of files that must be backed up/replicated/restored together with the primary data in order for the application to be properly restored.  Consistency groups are the cause of the most difficult discussions between end-users and SSD manufacturers.  At the end of this article, I will suggest some solutions to this problem.

The largest storage manufacturers have a corner on the enterprise data center marketplace because they have array-based replication tools that have been proven in many locations over many years.  For replicated data to be restored, the entire consistency group must be replicated using the same tool set.  This is where external SSDs encounter a problem.  External SSDs are not typically (though this is changing) used to store all application data; furthermore, they do not usually offer replication.  In a typical environment, the most frequently accessed components of an application are stored on SSD and the remaining, less frequently accessed data are stored on slower, less expensive disk.  If a site has array-based replication, that array no longer has the entire consistency group to replicate.
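
As a toy illustration of the problem (the volume names are hypothetical, not from any real deployment), consider a consistency group split between an array and a stand-alone SSD:

```python
# Toy model of a consistency group split across an array and an external SSD.
# Volume names are hypothetical; this only illustrates the concept.

CONSISTENCY_GROUP = {"data_vol", "index_vol", "redo_log_vol"}

# The hot redo log has been moved to an external SSD for performance,
# so the replicating array only owns the remaining volumes.
array_volumes = {"data_vol", "index_vol"}

def array_replica(owned_volumes):
    """Array-based replication can only copy the volumes the array owns."""
    return owned_volumes & CONSISTENCY_GROUP

missing = CONSISTENCY_GROUP - array_replica(array_volumes)
if missing:
    print(f"Replica is not restorable as an application: missing {missing}, "
          "which lives on the external SSD outside the array's control.")
```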

External SSD write caching solutions encounter a more significant version of this same problem.  Instead of storing specific files that are accessible to the array-based replication tool, the write cache holds writes that may or may not have been flushed through to the replicating array.  The replicating array has no way of knowing this and will snapshot or replicate without a full set of consistent data, because some of that data is still sitting in the external caching solution.  I am aware that some of these third party write caching solutions do have a mechanism to flush cache and allow the external array to snapshot or replicate, but generally speaking, these caching SSDs have historically been used to cache only reads, since write-caching creates too many headaches.  Unless the external caching solution is explicitly certified and blessed by the manufacturer of the storage being cached, using these products for anything more than read caching is a pretty risky decision.

Automatic integration with array-based replication tools is a main reason that some customers will select disk form factor SSDs rather than third party SSDs, in spite of the huge performance benefits of the third party SSD.  If you are committed to attaining the absolute highest performance, and are willing to invest just a little bit of effort to get it, the following discussion details some options for getting around this problem.

Solution 1:  Implement a preferred-read mirror.  For sites committed to array-based replication, a preferred-read mirror is often the best way to get benefit from an external SSD and yet keep using array-based replication.  A preferred-read mirror writes to both the external SSD and to the replicating SAN array.  In this way, the replicating array has all of the data needed to maintain the consistency group, and yet all reads come from the faster external SSD.  One side benefit of this model is that it allows a site to avoid mirroring two expensive external SSDs for reliability, saving money, because the existing array already provides the redundant copy.  If your host operating system or individual software application does not offer preferred-read mirroring, then a common solution is to use a third-party storage application such as Symantec’s Veritas Storage Foundation to provide this feature.  You must bear in mind that a preferred-read mirror does not accelerate writes.
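
Here is a minimal sketch of the preferred-read idea from Solution 1; it is not any vendor’s implementation, and the dict-backed devices are purely illustrative:

```python
# Minimal sketch of a preferred-read mirror: writes land on both sides so the
# replicating array keeps a complete copy, while reads come from the SSD side.

class PreferredReadMirror:
    def __init__(self, ssd_side, array_side):
        self.ssd = ssd_side      # fast external SSD
        self.array = array_side  # replicating SAN array, keeps the full copy

    def write(self, block, data):
        # Both sides receive every write, so the array still holds the entire
        # consistency group and its replication tools keep working.
        self.ssd[block] = data
        self.array[block] = data

    def read(self, block):
        # Reads are always served from the faster SSD side.
        return self.ssd[block]

mirror = PreferredReadMirror(ssd_side={}, array_side={})
mirror.write(42, b"row data")
assert mirror.read(42) == b"row data"
assert mirror.array[42] == b"row data"  # the array copy stays replication-ready
```

Note that write() completes only as fast as the slower side of the mirror, which is exactly why this approach accelerates reads but not writes.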

Solution 2:  Implement server-based replication.  There are an increasing number of good server-based replication solutions.  These tools allow you to maintain consistency groups from the server rather than from the controller inside the storage array, allowing one tool to replicate multiple heterogeneous storage solutions.

Solution 3:  For enterprise database environments, it is common for a site to replicate using transaction log shipping.  Transaction log shipping makes sure all writes to a database are replicated to a remote site where a database can be rebuilt if needed.  This approach takes database replication away from the array – moving things closer to the database application. 

Solution 4:  Implement a virtualizing controller with replication capabilities.  A few external SSD manufacturers have partnered with vendors that offer controller-based replication and that support heterogeneous external storage behind that controller.  This moves the SSD behind a controller capable of performing replication.  The performance characteristics of the virtualizing controller now become a gating factor in the effectiveness, and indeed the value, added by the external SSD.  In other words, if the virtualizing controller adds latency (it must) or has bandwidth limitations (generally they do), those now apply to the external SSD.  This can slow SSDs down by a factor of three to ten.  It is also the case that this approach solves the consistency group problem only if the entire consistency group is stored behind the virtualizing controller.

Most companies implementing external SSD have had to grapple with the impact of consistency groups on application performance, replication and recovery speed.  Even so, the great speed of external SSDs often leads them to deploy external SSD using one of the solutions discussed above.

What has been your experience?


What an Interface Says About an SSD

February 1, 2011

When an SSD manufacturer brings a product to market, you don’t need to look any further than the interface between the SSD and the server to understand its target market. Solid state storage systems are available in a wide array of sizes, shapes, densities, media, performance levels, costs and interfaces. The interface gives the best hint as to how the manufacturer expected the product to be used and, more specifically, which market it is targeting.

Fibre Channel SSDs are aimed at the enterprise data center. For most of the last decade, Fibre Channel has been the interface of choice for Tier 1 disk drives and the main interface for attaching external storage arrays in most data centers. Interestingly, the Tier 1 disk drives are now migrating to SAS, but the predominant interface from the enterprise storage array to the server is still Fibre Channel. Companies developing Fibre Channel SSDs want to appeal to enterprise data centers that have made major investments in Fibre Channel based storage area networks. There are plenty of predictions about the demise of Fibre Channel in the data center, but if you were making a choice about an interface for the enterprise today, you would offer Fibre Channel first. If I were deciding the next interface for an SSD or a storage array, I might go with FCoE, but I would probably wait to see that market develop further first. The rapid introduction of converged network adapters (CNAs) could translate into changes at the storage controller, but I would also wait to see what happens in that arena.

InfiniBand SSDs are aimed at the high performance computing (HPC) market. InfiniBand is touted for its high bandwidth per link and its low latency. For SSDs with large backplanes, an InfiniBand (IB) controller is a good way to tout your bandwidth capability. Yes, I know there are other companies using IB outside of the HPC market, but the bulk of big opportunities for IB SSD are in that space today. I broadly define HPC to also include oil & gas and entertainment industries. I do believe that IB attached SSDs are an interesting option for data warehousing applications where bandwidth is more important than IOPS.

NAS SSDs are aimed at the middle of the enterprise. This segment is one of the more intriguing to watch. A couple of companies have made credible attempts to develop NAS caching solutions that sit in front of existing NAS and provide a read or read/write caching layer. In a future blog, I might examine the challenges these companies face. As with mainstream Fibre Channel attached storage, the NAS vendors have incorporated SSD as a storage tier. Only one vendor comes to mind that is doing a pure SSD NAS solution, but others are likely to follow. NAS solutions are so much about software that it is harder for a new company to enter this space and compete with the incumbent suppliers.

iSCSI SSDs are aimed at the low to middle of the enterprise. This has not been a terribly active segment for pure SSD solutions, but interesting options are on the horizon. Clearly, existing iSCSI storage arrays have options for including hard disk form factor SSDs. My automatic expectation when I look at an iSCSI solution is that it will be less expensive than a Fibre Channel SSD. The main reason I would offer iSCSI is to target the cost-sensitive part of the market. Given the increased availability of 10Gbit Ethernet and advanced TCP offload engines, it is quite reasonable for an iSCSI SSD to offer good performance.

Internal PCI SSD. There has to be an exception to every rule, and PCI SSDs may be the exception to my rule about an interface telling you about the application for an SSD. PCI SSDs cover a wide variety of price ranges, capacities, media, performance and reliability. On the high end, there are a bunch of applications, particularly scale-out applications, which are server-centric and not storage network centric. PCI SSDs have had tremendous success in this category. Similarly, for companies with smaller data sets and budgets, PCI SSDs can be alluring. It is not a stretch to pitch PCI SSDs for prosumer or high-end gaming customers.

External SAS SSD. There are very few externally attached SAS SSDs on the market today. I think the people who offer them were probably temporarily delusional about the future role of SAS in the market and its ability to get rid of Fibre Channel for storage networking. This is not to say that SAS is a bad interconnect, in fact it is being effectively used to replace Fibre Channel as the backplane for many modern storage arrays (i.e. the connections between a disk controller and its enclosures are increasingly SAS).

Hard Disk Drive (HDD) Form Factor SAS SSD.  With the help of solid state storage, the SAS HDD has killed the Fibre Channel disk drive. Hard drive form factor SSDs with SAS interfaces are more likely than not intended to be sold to storage or server OEMs. For the storage OEMs, they replace Fibre Channel SSDs (if those were ever offered). For the server OEMs, SAS SSDs may be used as boot drives.

SATA SSDs are aimed at the consumer, prosumer, gaming and small business markets. I cannot currently see an enterprise market for SATA SSD. In enterprise storage arrays, SATA HDDs are only used to offer the 3.5” high density (slower) drives.

External PCI. There are a few varieties of external PCI offerings, including devices designed from the ground up to offer external PCI SSD and I/O expansion chassis that can be loaded up with PCI SSDs. My personal opinion is that the genesis of the external PCI SSD was to serve as extended memory for servers at a time when server memory capacities were limited and, at high densities, extremely expensive. In my experience, the only way to make one of these devices useful for traditional data centers is to put the external PCI chassis behind some other storage gateway. The storage gateway attaches to the storage network with Fibre Channel. This is all good, but the gateway now becomes the main dictator of your performance characteristics.

The story on SSD interfaces is certainly not complete. Innovative companies will capitalize on new markets and new interfaces in ways that we cannot yet predict. For the innovators in these segments lie new markets and new opportunities.


Application Owners vs. Data Center Operators

January 12, 2011

This blog seeks to articulate a perceived divide between the application owners and the data center operators and then explain how this rift has impacted the solid state storage market’s past, present and future.  I will close by predicting the real winner in this market. 

The divide between these groups starts in their backgrounds, their education and their experience.  The application side has generally come up through the ranks as either business analysts or developers.  If they completed a college degree, they are more likely to have come from business or management information systems types of programs.  It is often the business analyst whose job it is to understand the business and map the business requirements to a custom software project or to a software selection.  Some of these folks have risen through these roles to become project managers or product managers.  The people with the strongest business skills document requirements, make great testers and are often skilled implementers.  The people with stronger technical aptitudes usually become developers, programmers, database administrators and application architects.  Together, these business analysts and systems analysts represent the application side.  Most IT consultants one meets are on the application side of things.

The data center sort of person has, more often than not, come up through the ranks as a system, network or storage administrator.  If they have completed a college degree, they are more likely to have an engineering background, though I have seen a wide range of backgrounds in this field.  It is the system administrators who understand, better than anyone else, the practical impacts of the hardware choices dictated by application choices.  The system administrators frequently move on to take titles like “infrastructure manager” and “data center manager”.

People from either side can move into CIO roles, but the biases of their previous experiences can be difficult to separate from their decision making strategies.

In addition to different career paths, the two sides tend to have different objectives.  At the most basic level, the application owner is often driven to generate profits by maximizing features (more powerful queries, preference for real-time, faster applications, and features that enable process re-engineering).  The data center manager, on the other hand, can be driven to generate profits by minimizing costs and reducing risks (simplified management, high reliability, and standardization).

In the context of this complicated background, where do solid state storage devices fit, and how do they become a part of the story?  As you might suspect, the answer is related to performance.  The pain of performance bottlenecks is first felt by the application owners.  The application owners receive complaints from internal and external customers any time performance is thought to be slow.  In order to improve performance, the application owners invest a great deal of time in their code – they hire DBAs who tune the code, improve the SQL statements, and change priorities, restricting application features that are less important.  If these actions don’t work, the application owners pressure the data center operators to move the applications to bigger servers, in the process adding more processors, more RAM, more disk drives, or more storage cache memory.  Data center operators, naturally trying to protect their positions, sometimes blame poor performance on poorly written code.  Application owners, on the other hand, tend to blame performance on the hardware.  So what happens when the hardware is optimized, the code is as tuned as it can be and performance is still poor?  Generally speaking, performance stagnates unless features are dropped or hardware is refreshed.

In the thirty years since Solid State Disks entered the market, most SSD manufacturers quickly learned that their customers were the application owners.  As a result, the manufacturers geared their pre-sales, sales, marketing and product features around serving application owners.  By necessity, pre-sales teams became expert in conducting performance analysis for operating systems, file systems and even databases.  Sales teams learned to develop “champions” on the application side of the business.  Advertising and marketing programs were aimed at database and application audiences more successfully than storage audiences.

Slowly, the data center operators started to become interested in solid state storage.  Some cared because these SSD products were adding complexity to their data centers.  Others cared because they were concerned about losing control of hardware decisions.  Some cared because they could see the benefits of SSD for their infrastructure.  A mix of strategies began to unfold in the data center as the more adept data center operators maintained tight controls on technology standards by serving as the testing ground for SSD options.  As the data center operators became more engaged in SSD analysis, the big storage manufacturers started paying attention.

The big storage manufacturers finally entered the SSD market in 2008, targeting their primary customer, the data center operator.  The data center operator is not usually focused on accelerating one application; they want to accelerate their infrastructure.  They want their centralized, reliable and easily managed storage environment to get faster.  The big storage manufacturers focused on making the introduction of SSD simple for the data center operator.  It has become a sort of mantra: “just add Flash SSD hard drives to the hard disk enclosures and provide some tools to make it easy” for the data center operator to move data between tiers of storage.  Over time, the manufacturers have made it so that these systems can even dynamically migrate data between storage tiers.  Why focus on accelerating one application when we can make everything faster?
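
For readers who have not watched automated tiering in action, here is a simplified sketch of the idea; it is not any vendor’s algorithm, and the extent names and flash capacity are made up for the example:

```python
# Simplified sketch of automated storage tiering: periodically promote the most
# frequently accessed extents to the flash tier and leave the rest on disk.
# Extent names and the flash capacity are hypothetical.

from collections import Counter

FLASH_EXTENTS = 2          # pretend the flash tier holds only two extents
access_counts = Counter()  # per-extent I/O counters gathered between rebalances

def record_io(extent_id):
    access_counts[extent_id] += 1

def rebalance():
    """Background task: the hottest extents go to flash, the rest stay on disk."""
    ranked = [extent for extent, _ in access_counts.most_common()]
    return set(ranked[:FLASH_EXTENTS]), set(ranked[FLASH_EXTENTS:])

for extent in ["db_index", "db_index", "db_log", "db_log", "db_log", "archive"]:
    record_io(extent)

flash_tier, disk_tier = rebalance()
print("flash tier:", flash_tier)  # db_log and db_index are promoted to flash
print("disk tier:", disk_tier)    # archive stays on disk
```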

Both SSD manufacturers and big storage manufacturers remain true to their customers, but neither has done much to sway the other’s customers.  The application owners, who, if the truth be told, would rather not have their mission critical business application on virtualized servers and centralized storage, are much happier with a dedicated SSD for their application.  In head-to-head testing, they can observe that the pure SSD manufacturers, who have always focused on decreasing latency and increasing throughput, have the edge when it comes to the number one thing that they care about – making their application faster.  The data center operators, who are the people called on to actually install and support SSD solutions, can see that the integrated solutions offer what they care about – lower risk and easier management.  They can also see that the big integrated solutions offer “good-enough” performance for many users.

Today and, I would predict, for the next several years, the SSD market will be split.  Pure SSD manufacturers will continue to grow by solving application performance problems better than integrated storage manufacturers.  The customers buying these solutions will continue to be led by application owners.  Integrated storage manufacturers will rapidly grow market share by offering solutions which accelerate entire infrastructures (think centralized storage environments with dozens of applications and virtualized server environments).  The customers buying these solutions will be led by data center operators.  Thus, the great divide between the application owners and the data center operators will continue for the foreseeable future.

The next few years of R&D could reduce the divide somewhat.  Will the pure SSD manufacturers add storage services and reliability features equivalent to those of the big storage manufacturers, or will they maintain their place in the world by widening the performance gap?  Will the big storage manufacturers decrease controller, cache and backplane latency, or increase the divide by offering more storage services?  My bet is on …  Well, as you may have guessed, I am hedging my bets a bit by working in an environment where I can evaluate a customer’s requirements, determine the fit for either pure SSD or big storage with SSD, and recommend the right solution.  In the end, the end-user wins, because the introduction of any sort of SSD into their enterprise will make their applications faster.


Welcome to AppICU

December 21, 2010

AppICU was conceptualized years ago.  It started as an idea for a TMS consulting practice.  It spent some time (in my head) as the name of a new business.  I suppose it is fitting that the name finally makes its first public appearance as the title for my blog.  If you are a marketer (marketing was my primary responsibility for the last ten years), it is almost impossible to find the time or energy to blog about products that you are already marketing.  If it is such a great blog idea, why isn’t it a whitepaper or submitted for publication?  Do you blog to bash competitors’ products?  Do you blog to brag about your cool new product/service?  Do you blog to become famous?  My goal is to help provide guidance to the buyers of application acceleration solutions or solid state storage by providing clarity where there is chaos.  Along the way, I hope to influence the companies that develop and market to these buyers.

The market for application acceleration and solid state storage is well served by a variety of other writers/bloggers and analysts.  You won’t find anyone promoting the solid state storage industry better than Zsolt Kerekes at StorageSearch.com.  Zsolt publishes more original content about SSD than anyone I know.  Without his independent view of the market, SSD buyers would be lost.  If you are looking for analysts who know their stuff when it comes to SSD, I encourage you to follow:  Jeff Janukowicz at IDC, Joseph Unsworth at Gartner, Greg Schulz at StorageIO, Robin Harris at Storage Mojo, Ray Lucchesi at Silverton Consulting, Jeff Boles at Taneja Group, and George Crump at Storage Switzerland.  These guys all suffer through endless hours of vendor fluff to distill nuggets of useful information to pass on to their customers/readers.

Welcome to my blog.  I hope it adds to the discourse and proves to be a good use of your time.


START for SSD Marketing

December 20, 2010

As the United States looks to approve the START treaty, I thought it was time to propose that SSD manufacturers enter their own strategic arms reduction treaty to control the rampant and destructive proliferation of million IOPS marketing.

The road to an IOPS arms race began innocently enough: SSD manufacturers had a novel story to tell.  The IOPS from hard disk drives have been atrocious since the dawn of computer time.  Even today, the lowly hard drive can only squeak out 300 random IOPS.  From the earliest days of the SSD, IOPS marketing was a big part of the story.  The beauty of a solid state storage device was that it could move more data (IOPS) to a processor faster (less latency) than traditional disk.  This simple story has been at the core of the SSD value proposition for 30 years.

Admittedly, I fired the first shots (and probably the second, third, fourth….) in the escalating IOPS arms race in 2001, when Texas Memory Systems announced the RamSan-520, a 5U monster of a system with all of 128GB of RAM capacity.  This system, with fifteen 1Gbit Fibre Channel ports, was said to deliver 750,000 random IOPS.  Do you have any idea what kind of reaction that generated at storage conferences in 2001?  Wow!  Impossible, most would say.  This process led to TMS proudly declaring itself the “World’s Fastest Storage®”.   After firing this weapon for nearly ten years, I have to admit the time has come to stop the IOPS marketing arms race.  The challenge in 2001, as it is today, is finding the customer that can drive 1,000,000 IOPS.  Actually, in 2001, finding a server to drive that many IOPS was impossible.  The processors, operating systems, host bus adapters, etc. were all too slow.  Fortunately, the imposition of Moore’s law on electronics led to breakthrough after breakthrough, enabling SSD manufacturers to demonstrate high IOPS with single-server configurations.
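
To see why driving that many IOPS from a single server was (and often still is) the hard part, here is a back-of-the-envelope application of Little’s Law, in-flight I/Os = IOPS x latency, with purely illustrative numbers:

```python
# Little's Law for storage: sustained IOPS require (IOPS x latency) requests
# in flight at all times.  The latencies below are illustrative, not benchmarks.

def outstanding_ios(target_iops, latency_seconds):
    return target_iops * latency_seconds

scenarios = [
    ("750,000 IOPS at 0.2 ms device latency", 750_000, 0.0002),
    ("1,000,000 IOPS at 0.2 ms device latency", 1_000_000, 0.0002),
    ("1,000,000 IOPS at 1 ms end-to-end latency", 1_000_000, 0.001),
]

for label, iops, latency in scenarios:
    print(f"{label}: ~{outstanding_ios(iops, latency):.0f} I/Os in flight")

# Roughly 150, 200 and 1,000 outstanding requests respectively -- queue depths
# that a 2001-era server, operating system and HBA stack could not sustain.
```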

I like to think, but hesitate to admit, that in addition to pioneering million IOPS marketing, TMS also drove the widespread use of IOMeter (a tool used to test storage devices by generating IO) at trade shows.   As a storage marketer, my grandest dream was to find a customer that needed to run IOMeter as their business application.  What a perfect customer this would be.  I searched the world over.  Strangely, the financial exchanges didn’t need IOMeter to complete trades.  Telecom companies didn’t need it to bill cellular customers.  Who could possibly need IOMeter for their business?  Imagine my glee when host bus adapter manufacturers and switch manufacturers started caring about IOPS as a marketing tool.  Finally, my dream customer had arrived.  My apologies to storage industry exhibit hall wanderers; I too have tired of seeing IOMeter.

This brings us back, admittedly after a brief tangent, to million IOPS claims.  I hesitate to count the number of vendors that are persistently and proudly proclaiming profound performance.  One million IOPS.  Yawn!  Is that all you’ve got?  In fact, I would argue that if the extent of your marketing message is your IOPS number, you don’t have enough… marketing talent.

Using a solid state storage device is about a lot of things: application acceleration, lower power consumption, enabling business growth and solving mission critical problems.  Tell us the customer stories.  How many 1,000,000 IOPS customer stories have you read?  Hmmm.  Fair enough, 1 million IOPS sounds like so many that people will stop worrying about whether the storage device can meet their production performance requirements.  But can we stop at 1 million? Perhaps we should shoot for 2 million.  No.  It is time for the SSD IOPS marketing proliferation to be stopped while the customers still care.   Start designing systems that satisfy the range of customer buying requirements:  low latency (the number one reason most customers benefit from SSD), good-enough IOPS, bandwidth suitable to the application’s goal, five 9’s reliability, low mean time to repair, low power consumption, interoperability and low total cost of ownership.