This post articulates a perceived divide between application owners and data center operators, then explains how this rift has shaped the solid state storage market’s past, present and future. I will close by predicting the real winner in this market.
The divide between these groups starts with their backgrounds, their education and their experience. The application side has generally come up through the ranks as business analysts or developers. If they completed a college degree, they are more likely to have come from business or management information systems programs. It is often the business analyst’s job to understand the business and map its requirements onto a custom software project or a software selection. Some of these folks have risen through these roles to become project managers or product managers. Those with the strongest business skills document requirements, make great testers and are often skilled implementers. Those with stronger technical aptitudes usually become developers, programmers, database administrators and application architects. Together, these business analysts and systems analysts represent the application side. Most IT consultants one meets are on the application side of things.
The data center sort of person has, more often than not, come up through the ranks as a system, network or storage administrator. If they have completed a college degree, they are more likely to have an engineering background, though I have seen a wide range of backgrounds in this field. It is the system administrators who understand, better than anyone else, the practical impacts of the hardware choices dictated by application choices. System administrators frequently move on to titles like “infrastructure manager” and “data center manager”.
People from either side can move into CIO roles, but the biases of their previous experience can be difficult to separate from their decision-making strategies.
In addition to different career paths, the two sides tend to have different objectives. At the most basic level, the application owner is often driven to generate profits by maximizing features (more powerful queries, real-time results, faster applications, and features that enable process re-engineering). The data center manager, on the other hand, is more often driven to generate profits by minimizing costs and reducing risks (simplified management, high reliability, and standardization).
Against this complicated background, where do solid state storage devices fit, and how do they become part of the story? As you might suspect, the answer is performance. The pain of performance bottlenecks is felt first by the application owners, who receive complaints from internal and external customers any time performance is thought to be slow. To improve performance, the application owners invest a great deal of time in their code: they hire DBAs who tune the database and improve the SQL statements, and they change priorities, restricting application features that matter less. If these actions don’t work, the application owners pressure the data center operators to move the applications to bigger servers, adding more processors, more RAM, more disk drives, or more storage caching memory. Data center operators, naturally trying to protect their positions, sometimes blame poor performance on poorly written code. Application owners, on the other hand, tend to blame the hardware. So what happens when the hardware is optimized, the code is as tuned as it can be, and performance is still poor? Generally speaking, performance stagnates unless features are dropped or hardware is refreshed.
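To make the “tuning the SQL” step a bit more concrete, here is a minimal sketch using Python’s built-in sqlite3 module; the table, columns and query are hypothetical stand-ins for whatever the DBA is actually optimizing. The point is simply that the same query can go from a full table scan to an index lookup.

```python
# A minimal, hypothetical sketch of one kind of tuning a DBA might do:
# run the same query before and after adding an index on the filtered column.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 500, i * 1.5) for i in range(10_000)])

query = "SELECT SUM(total) FROM orders WHERE customer_id = 42"

# Before tuning: the planner has to scan the whole table.
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())

# "Tuning": add an index on the column the query filters on.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

# After tuning: the planner can use the index instead of a full scan.
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())
```

Once this kind of tuning is exhausted, the only remaining lever is the hardware, which is where the pressure on the data center operators comes from.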
In the thirty years since solid state disks entered the market, most SSD manufacturers learned quickly that their customers were the application owners. As a result, the manufacturers geared their pre-sales, sales, marketing and product features around serving application owners. By necessity, pre-sales teams became expert at performance analysis for operating systems, file systems and even databases. Sales teams learned to develop “champions” on the application side of the business. Advertising and marketing programs targeted database and application audiences more successfully than storage audiences.
Slowly, the data center operators started to become interested in solid state storage. Some cared because these SSD products were adding complexity to their data centers. Others cared because they were concerned about losing control of hardware decisions. Some cared because they could see the benefits of SSD for their infrastructure. A mix of strategies began to unfold in the data center as the more adept data center operators maintained tight controls on technology standards by serving as the testing ground for SSD options. As the data center operators became more engaged in SSD analysis, the big storage manufacturers started paying attention.
The big storage manufacturers finally entered the SSD market in 2008, targeting their primary customer, the data center operator. The data center operator is not usually focused on accelerating one application; they want to accelerate their infrastructure. They want their centralized, reliable and easily managed storage environment to get faster. The big storage manufacturers focused on making the introduction of SSD simple for the data center operator. It has become a sort of mantra: just add flash SSDs to the existing hard disk enclosures and provide some tools that make it easy for the data center operator to move data between storage tiers. Over time, the manufacturers have made it possible for these systems to migrate data between tiers dynamically. Why focus on accelerating one application when we can make everything faster?
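The following is a rough sketch of the general idea behind dynamic tiering as described above; the tier names, thresholds and promotion policy are my own illustrative assumptions, not any manufacturer’s implementation.

```python
# A minimal sketch of a dynamic tiering policy: count reads per block,
# promote hot blocks to the SSD tier, demote whatever falls out.
# Capacities and thresholds below are illustrative assumptions.
from collections import Counter

SSD_CAPACITY_BLOCKS = 4          # how many blocks fit in the fast tier
PROMOTE_THRESHOLD = 3            # reads before a block counts as "hot"

access_counts = Counter()        # block id -> recent read count
ssd_tier = set()                 # block ids currently on SSD
disk_tier = set(range(16))       # everything starts on spinning disk

def read_block(block_id: int) -> str:
    """Record a read and report which tier serves the block afterwards."""
    access_counts[block_id] += 1
    rebalance()
    return "ssd" if block_id in ssd_tier else "disk"

def rebalance() -> None:
    """Promote the hottest blocks to SSD; demote everything else."""
    hot = [b for b, n in access_counts.most_common(SSD_CAPACITY_BLOCKS)
           if n >= PROMOTE_THRESHOLD]
    for b in hot:
        if b not in ssd_tier:
            ssd_tier.add(b)
            disk_tier.discard(b)
    for b in list(ssd_tier):
        if b not in hot:
            ssd_tier.discard(b)
            disk_tier.add(b)

# Repeated reads of block 7 eventually migrate it to the SSD tier.
for _ in range(5):
    tier = read_block(7)
print(tier)   # "ssd" once block 7 has crossed the promotion threshold
```

The appeal for the data center operator is exactly this: nobody has to decide up front which application lives on the fast storage; the system figures it out from the access pattern.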
Both SSD manufacturers and big storage manufacturers remain true to their customers, but neither has done much to sway the other’s customers. The application owners, who, if the truth be told, would rather not have their mission-critical business application on virtualized servers and centralized storage, are much happier with a dedicated SSD for their application. In head-to-head testing, they can observe that the pure SSD manufacturers, who have always focused on decreasing latency and increasing throughput, have the edge when it comes to the number one thing they care about: making their application faster. The data center operators, who are the people called on to actually install and support SSD solutions, can see that the integrated solutions offer what they care about: lower risk and easier management. They can also see that the big integrated solutions offer “good enough” performance for many users.
Today and, I would predict, for the next several years, the SSD market will be split. Pure SSD manufacturers will continue to grow by solving application performance problems better than integrated storage manufacturers. The customers buying these solutions will continue to be led by application owners. Integrated storage manufacturers will rapidly grow market share by offering solutions which accelerate entire infrastructures (think centralized storage environments with dozens of applications and virtualized server environments). The customers buying these solutions will be led by data center operators. Thus, the great divide between the application owners and the data center operators will continue for the foreseeable future.
The next few years of R&D could narrow the divide somewhat. Will the pure SSD manufacturers add storage services and reliability features equivalent to the big storage manufacturers’, or will they maintain their place in the world by widening the performance gap? Will the big storage manufacturers decrease controller, cache and backplane latency, or will they increase the divide by offering more storage services? My bet is on … Well, as you may have guessed, I am hedging my bets a bit by working in an environment where I can evaluate a customer’s requirements, determine the fit for either pure SSD or big storage with SSD, and recommend the right solution. In the end, the end user wins, because the introduction of any sort of SSD into their enterprise will make their applications faster.
I, too, have been writing about SSDs and their place in applications, from the point of view of databases. So far, I seem to be a lone wolf in my view, which is that fully normalized RDBMS schemas on pure SSDs are the major value-add for SSDs. Not so much the “tier 0” meme.
This post gets closer to the argument than any I’ve seen.
I’m interested in your thoughts on the proposition.
Robert,
I think your premise is solid. The major value advantage for SSD is database/metadata acceleration (especially as it relates to its use in the enterprise data center). Whether you use SSD as a cache, as a storage target or as a tier 0 in an integrated storage infrastructure, your main reason for deploying it is, generically, application acceleration, and typically that means database acceleration. “Tier 0” is just a marketing message that helps data center operators understand how SSD fits into their information lifecycle and tiered storage model (keep in mind that tiered storage models are older than the use of SSD). “Dynamic tiering”, then, is a marketing message that builds on the “tier 0” message and is targeted at the data center operator who doesn’t want to move data around to different storage tiers manually. Keep in mind that dynamic tiering is not a new message either: hierarchical storage management (HSM) software has existed for decades, though it historically focused on moving old data to tape. The main feature of dynamic tiering is that it moves active data to faster devices as efficiently as it moves less used data to slower devices. In conclusion, you deploy SSD to make an application (or a set of applications) faster. Your deployment method and marketing message depend on who is buying the SSD. “Tier 0” and “dynamic tiering” are not reasons to buy SSD; they just make it easier for data center operators to buy it.
Woody
I applaud Woody Hutsell for shining a light on this issue of who really owns the high-performance storage purchasing decision: application owners or data center operators. It is a dilemma for big and small storage vendors alike who are trying to target their marketing dollars at the right audience.
But more importantly, once you do get to the right people, make sure the application owners and data center operators both understand the source and cause of their application performance issues. In most cases, neither the source (i.e., application, system, storage, or network) nor the cause (e.g., application tuning, system resources, I/O wait, network delays) is obvious. And in many cases, there is more than one source and cause.
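As one very simplified illustration of separating storage waits from everything else, here is a rough, Linux-only Python sketch that samples /proc/stat and reports how much CPU time was spent waiting on I/O; a persistently high figure points toward the storage subsystem rather than the application code. It is a starting point for such an analysis, not a substitute for one.

```python
# Rough sketch (Linux only): sample /proc/stat twice and report the share
# of CPU time spent in iowait over the interval.
import time

def cpu_times():
    """Return the aggregate 'cpu' line of /proc/stat as a list of integers."""
    with open("/proc/stat") as f:
        fields = f.readline().split()[1:]   # user nice system idle iowait irq ...
    return [int(x) for x in fields]

def iowait_percent(interval: float = 1.0) -> float:
    """Percentage of total CPU time spent in iowait over the interval."""
    before = cpu_times()
    time.sleep(interval)
    after = cpu_times()
    deltas = [a - b for a, b in zip(after, before)]
    total = sum(deltas)
    return 100.0 * deltas[4] / total if total else 0.0   # field 4 is iowait

if __name__ == "__main__":
    print(f"iowait: {iowait_percent():.1f}%")
```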
Prior to assessing a high performance SSD storage solution, it is always wise to do a complete analysis of your application environment and storage infrastructure to get some needed answers. And yes, this does mean that application owners and data center operators are going to need to play nicely together in the sandbox.
If you are wondering what an assessment of application and storage performance looks like, these and related issues are discussed in The I/O Storm blog. It is worth checking out before you make your next SSD purchase.