Death of a Trade-off

October 24, 2017

The death of a trade-off, by Woody Hutsell, appICU.com

Everyone hates trade-offs, but we almost always have to make them.  One of the most famous trade-offs in the IT world is that you can have it fast, cheap, or good: pick two. One version of this trade-off in the flash industry is that you can have it fast or low cost: pick one.  This is because our primary way to lower the cost of all-flash arrays has been to implement data reduction.  Data reduction tools lower the effective cost, but they add latency, slowing the all-flash array down.  A quick look at the latency specifications of devices that reduce data and those that don’t will confirm this, even where the marketing seeks to obfuscate it.

With its latest refresh of the FlashSystem 900, IBM allows the customer to get it fast and get it inexpensively.

There are two key technology advancements in the FlashSystem 900.  First, it has IBM-enhanced 3D TLC NAND flash.  As with prior generations of FlashSystem, IBM has acquired Micron chips directly from the fab and enhanced them with our advanced flash management.  The economic benefits of moving to 3D TLC are well documented and apply to the new FlashSystem 900.  With the new chips, we achieve up to a 3x increase in maximum capacity.

The second key technology advance is line-speed hardware compression.  IBM is the second major vendor to implement hardware compression but the first to deliver it for 3D TLC.  IBM compresses data in our field programmable gate arrays (FPGAs) within every flash module.  If you work with our sales and business partner teams, we will put in writing a 2:1 compression guarantee (and yes, your data must be compressible).  We have used a variety of terms to describe the performance of this new compression solution, such as “zero performance impact” and “worry-free compression.”  But I want to take it one step further: in most cases, our hardware compression will deliver better performance than the prior-generation FlashSystem 900.
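
To make the economics concrete, here is a minimal sketch, using hypothetical prices and capacities rather than IBM list figures, of how a compression ratio changes effective cost per gigabyte:

```python
# Illustrative only: how a written 2:1 compression guarantee changes effective cost.
# The price and capacity below are made-up placeholders, not IBM figures.

def effective_cost_per_gb(list_price_usd, raw_capacity_gb, compression_ratio):
    """Effective $/GB = price / (raw capacity x compression ratio)."""
    return list_price_usd / (raw_capacity_gb * compression_ratio)

# Hypothetical array: $100,000 list price, 50 TB (50,000 GB) of raw flash.
uncompressed = effective_cost_per_gb(100_000, 50_000, 1.0)
guaranteed = effective_cost_per_gb(100_000, 50_000, 2.0)
print(f"no compression: ${uncompressed:.2f}/GB   with 2:1 guarantee: ${guaranteed:.2f}/GB")
```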

Implementing compression has always been a trade-off.  You implement compression to improve economics but trade-off performance.  Now, this trade-off is history thanks to the new FlashSystem 900.


Cloud Grid Architecture

June 30, 2016

by Woody Hutsell, AppICU

Prevent cloud failures with grid architecture

Public and private cloud architectures fail with alarming frequency. David Linthicum of Cloud Technology Partners, writing in an article for TechTarget’s SearchCloudComputing – Bracing for the Failure of Your Private Cloud Architecture – argues that a major problem with private cloud deployments results from reusing the same hardware used for traditional IT. Specifically, he comments that “hardware requirements for most private cloud operating systems are demanding” and later that “If the hardware doesn’t have enough horsepower, the system will begin thrashing, which causes poor performance and likely a system crash.”

Andrew Froehlich, writing 9 Spectacular Cloud Computing Fails for InformationWeek, extends this thought to the public cloud when he says that one of the three key reasons cloud service providers fail is “beginner mistakes on the part of service providers…when the provider starts out or grows at a faster rate than can be properly managed by its data center staff.”

Serving up applications in the cloud is different from traditional IT. Cloud deployments thrive when ease of application deployment is matched by ease of management and consistent performance under all workloads. Successful cloud deployments support many demanding applications and customers. With the increasing diversity of hosted applications come some infrastructure headaches. We often custom-tailor traditional IT environments to meet the needs of a specific application or class of applications.  We know when that application peaks for online transaction processing or batch processing. We know when we can perform maintenance. With the cloud, success means we have many applications with overlapping (or not) peak performance periods. With the cloud, we may be more likely to see constant use, resulting in fewer opportunities to perform maintenance and restructure our storage to balance for intense workloads.

Successful cloud deployments can challenge and break traditional storage from a performance point of view. Traditional storage scales poorly. Whether the traditional storage array uses HDD or hybrid architectures, it will experience the same problem: as the number of I/Os to the system increases, performance degrades rapidly. With an all-HDD system, latency starts high and quickly gets worse; with a hybrid configuration (SSD + HDD), latency starts lower and stays low longer, but then climbs just as rapidly.  When latency climbs, applications and users suffer.

Successful cloud deployments can also challenge and break traditional storage from a management point of view. Traditional storage arrays are difficult to configure and deploy. It is not unheard of for initial deployments of scalable traditional storage to take days or even weeks before the system is tuned so that applications are properly mapped to the right RAID groups. Do you need a RAID group with SSDs, or a tiered deployment with SSDs, SAS, and SATA? How many drives are needed in each RAID group?  Should you implement RAID 0, 1, 5, or 6?  Once sized, configured, and deployed, further tweaking of these systems can be administrator intensive. When workloads change, as is the expectation in a cloud deployment, how quickly can you create new volumes, and what happens when the performance needed for an application exceeds what the system can deliver? The hard answer is that traditional storage was not designed for the cloud.

Fortunately, IBM has a solution – the IBM FlashSystem A9000, a modular configuration that is also available as the IBM FlashSystem A9000R, a multi-unit rack model. The new IBM FlashSystem family members tackle the performance and management issues caused by successful cloud deployments. Where the cloud needs consistent low latency even as I/O increases, FlashSystem A9000 applies low latency all-flash storage. Where the cloud needs simplified management, the systems apply a grid storage architecture.

It all starts with the configuration. FlashSystem A9000 customers do not have to configure RAID groups: the system automatically implements Variable Stripe RAID within each MicroLatency flash module and a RAID-5 stripe across all of the modules in an enclosure. An administrator configuring the system creates volumes and assigns those volumes to hosts for application use. Every volume’s data is distributed evenly across the grid controllers (where the storage services software runs) and the flash enclosures (where the data is stored). This grid distribution prevents hot spots and never requires tuning in order to maintain performance. No tuning means substantially less ongoing system management. When the rack-based FlashSystem A9000R is expanded, it automatically redistributes workloads across the new grid controllers and flash enclosures.
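
As a rough illustration of why even distribution removes the need for tuning, here is a minimal sketch; the component counts and hash-based placement are assumptions for illustration, not the actual A9000 placement algorithm:

```python
# Illustrative sketch of the grid idea: a volume's blocks are spread evenly across
# all controllers and enclosures, so no single component becomes a hot spot.
import hashlib
from collections import Counter

GRID_CONTROLLERS = ["gc1", "gc2", "gc3"]
FLASH_ENCLOSURES = ["fe1", "fe2"]

def placement(volume_id: str, block_number: int):
    """Map a block to a (controller, enclosure) pair by hashing its address."""
    digest = hashlib.sha256(f"{volume_id}:{block_number}".encode()).digest()
    controller = GRID_CONTROLLERS[digest[0] % len(GRID_CONTROLLERS)]
    enclosure = FLASH_ENCLOSURES[digest[1] % len(FLASH_ENCLOSURES)]
    return controller, enclosure

# Every volume lands roughly evenly on every component -- no tuning required.
counts = Counter(placement("vol7", block) for block in range(30_000))
print(counts.most_common())
```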

When an I/O comes into these new FlashSystem arrays, it is written to three separate grid controllers simultaneously. These I/Os are cached in controller RAM, and at that point the write is considered committed from the application’s point of view. In this way, the application is not slowed down by data reduction. Next, the three controllers distribute the pattern reduction, inline data deduplication, and data compression tasks across all the grid controllers, providing the best possible data reduction performance before writing the data to the flash enclosure(s). Data can be written across any of the flash enclosures in the system, preserving the grid architecture and distribution of workload. When data is written to flash inside the flash enclosure, it is distributed evenly across the flash in a way that ensures consistently low latency. All of this is aided by IBM FlashCore™ technology, which provides a hardware-only data path inside the flash enclosure while data is written persistently to flash. The flash storage is housed in IBM MicroLatency® modules, whose massively parallel arrays of flash chips provide high storage density, extremely fast I/O, and consistent low latency.
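
Here is a minimal sketch of that write path as described above; the class and function names, the toy zlib “reduction,” and the enclosure selection are illustrative assumptions, not the actual FlashSystem code path:

```python
# Illustrative write path: acknowledge the write once it sits in RAM on three grid
# controllers, then reduce and destage to a flash enclosure afterwards.
import zlib

class Controller:
    def __init__(self, name):
        self.name = name
        self.ram_cache = []

class Enclosure:
    def __init__(self, name):
        self.name = name
        self.flash = {}

def handle_write(volume, offset, data, controllers, enclosures):
    # 1. Mirror the incoming write into RAM on three grid controllers.
    for controller in controllers[:3]:
        controller.ram_cache.append((volume, offset, data))

    # 2. Committed from the application's point of view here, before any reduction,
    #    so compression and deduplication add no latency to the host write.
    print(f"ack write {volume}@{offset}")

    # 3. Reduce (toy compression stands in for pattern removal, dedup, compression)
    #    and destage to an enclosure chosen to keep the workload spread evenly.
    reduced = zlib.compress(data)
    enclosure = enclosures[hash((volume, offset)) % len(enclosures)]
    enclosure.flash[(volume, offset)] = reduced

controllers = [Controller(f"gc{i}") for i in range(1, 4)]
enclosures = [Enclosure(f"fe{i}") for i in range(1, 3)]
handle_write("vol1", 4096, b"some block of data" * 16, controllers, enclosures)
```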

Together these technologies are a real blessing for the cloud service provider (CSP). When new customers arrive, CSPs know they can easily allocate new storage to new customers and not worry about special tuning to ensure the best performance possible. When existing customers’ performance demands skyrocket, CSPs know that their FlashSystem A9000-based systems offer enough performance to match the growing requirements of their customers without negatively impacting other customers. And when launching or expanding their businesses, CSPs know that FlashSystem A9000 can eliminate one of the leading causes of cloud offering failures, the inability of storage architectures to scale.

For more information, read Ray Lucchesi of Silverton Consulting’s article on Grid Storage Technology and Benefits.


Flash Riddle

January 7, 2015

by Woody Hutsell, http://www.appICU.com

This isn’t Batman; this is your data center!

Riddle #1: What is a flash array that is fast like a Ferrari, has reliability and service like a Lexus, but is priced like a Chevrolet?

Answer: IBM FlashSystem

Riddle #2: What do you call the fastest, most feature rich but least expensive offering in a market?

Answer: The market leader in capacity shipments (see this link)

For as long as I have been associated with the solid state storage market, the products formerly known as Texas Memory Systems’ RamSan were labelled as the Ferrari of the market, but mostly in this context: “Who wouldn’t want to go that fast, but who can afford it?” For the most part, we embraced the label because we were the fastest. A quick look at Storage Performance Council results over the last decade can easily substantiate that position. But we did have a problem: The market didn’t perceive RamSan as the affordable choice, so we were left out of competitions before even being given a chance to compete. Who starts out their car buying process by verifying that the Ferrari is cost competitive? It was understood we were that fast and that expensive.

Since then, an interesting change has happened. IBM, with its buying power and economies of scale, has taken the Ferrari engine, surrounded it with Lexus-like reliability characteristics, and is now delivering it to the market with the lowest all-flash array price per capacity, according to some simple extrapolations from the latest IDC report on the state of the flash market.

Why is IBM throwing away its margins to take ownership of this market? It’s not. The economics are actually simple. IBM engineers the entirety of FlashSystem. As any accountant can tell you, this means that our R&D and engineering costs are going to be higher than the industry norm. But this is, in accounting terms, a fixed cost. If we pay this cost and don’t sell many products, we run at a loss. But if we pay this cost and sell a lot, our cost per unit keeps dropping.

IBM buys NAND flash chips for FlashSystem; we don’t buy SSDs. Why does this matter? SSDs, in spite of their commodity nature and poor performance, are margin-rich products for the companies that sell them. When our competitors buy SSDs to put in their all-flash arrays, they are paying someone else the margin that company needs to make its investors happy while covering its own engineering investments. Using SSDs thus makes the flash array product you buy more expensive. In accounting terms, SSDs represent a variable cost, and a vendor pays that same variable cost on every product it sells. Any business person will tell you it pays to decrease your variable costs, because doing so lets you bring your product to market at a lower cost than your competitors. This is especially important when you’re selling at the kinds of volumes IBM sells in the all-flash array market – more than the next two competitors combined in the first half of this year, according to the same IDC report noted above. This explains why we are indeed a leader in this market space.
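
A toy worked example of the fixed-versus-variable-cost argument, with entirely made-up dollar figures and volumes, just to show how volume amortizes the fixed engineering cost:

```python
# Illustrative only: amortized cost per unit under two hypothetical cost structures.
# "Vendor A" engineers its own flash modules (high fixed, low variable cost);
# "Vendor B" buys SSDs and pays the SSD maker's margin on every unit it ships.

def cost_per_unit(fixed_engineering_cost, variable_cost_per_unit, units_sold):
    """Total cost per unit = amortized fixed cost + variable cost."""
    return fixed_engineering_cost / units_sold + variable_cost_per_unit

for units in (500, 5_000, 50_000):
    a = cost_per_unit(50_000_000, 20_000, units)   # made-up figures
    b = cost_per_unit(10_000_000, 45_000, units)   # made-up figures
    print(f"{units:>6} units  vendor A: ${a:,.0f}/unit  vendor B: ${b:,.0f}/unit")
```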

Maybe not what you’d expect from a company with an enterprise-grade reputation like IBM.

So, what does this mean to our clients and potential clients? FlashSystem can save you money. But the advantages don’t stop there.

Did you know FlashSystem offers inline compression? What’s more, testing of our inline compression at customer sites shows that it can be both more effective and faster than our competitors’. As a potential customer, there is a simple way for you to find out whether this is true for your workload – include FlashSystem in your next storage procurement evaluation.

You could pay more and get less, but why should you? That’s a riddle worth answering.


Real Flash Storage Systems multi-task!

August 1, 2014

by Woody Hutsell, http://www.appICU.com

In the old days, real men didn’t eat broccoli and the storage solutions we implemented coped effectively with only a few related types of workload profiles. Those days are dead. Now, as data centers move toward virtualization and then extend virtualization into arenas such as the desktop while continuing to address traditional database workloads, storage must handle multiple requirements equally well. Disk can’t do it anymore. Flash can.

First, we must move beyond the concept that we implement storage solutions to solve individual application requirements. Instead, at every opportunity data center managers should be architecting and then implementing storage solutions capable of addressing multiple storage requirements. And even more, such a comprehensive storage solution must be cost effective when we buy it, yet possess additional capabilities that will enable both future growth and new business initiatives.

Certain flash products are a very good choice as do-more storage solutions. Others, not so much. Virtual Desktop Infrastructure (VDI) and inline deduplication offer insights into why IBM FlashSystem makes a very good choice to fill the multi-tasking role in your storage architecture.

Consider VDI. VDI seeks to replace hundreds to thousands of desktops that are difficult to upgrade, manage, and secure with a consolidated set of centralized servers and storage that are, in turn, easier to upgrade, manage, and secure. But here’s the key ingredient of a smarter data center: the infrastructure used to support VDI must be able to do more than implement VDI. The VDI workload has very high I/O density. While the I/O of a single physical desktop is easily handled by a fast HDD or small SSD, consolidating all of those desktops into a VDI creates extremely high I/O demands that are difficult to meet with typical hybrid SAN storage arrays. Principal causes of failed VDI installations include the costs and complexities of implementing storage to support it. A simplistic way to solve the problem is to buy an HDD or SSD for every virtualized desktop, but this is expensive and inefficient, resulting in almost no practical cost savings versus the storage already in the desktop.

It turns out that VDI workloads benefit from inline deduplication. Whether the VDI is persistent or stateless, inline deduplication often results in a nearly 10x reduction in storage capacity needed. Inline deduplication works so well in VDI environments because the images needed for each virtual desktop are largely the same across desktops. Additionally, inline deduplication is effective at decreasing the capacity needed to store the unstructured files generated most often in a typical desktop environment.

Inline deduplication is essential to reducing the cost of large-scale VDI. Inline deduplication, however, has a dark side: it dramatically increases the I/O density of the VDI workload, making traditional storage arrays an incredibly poor choice for it. Before inline deduplication, the I/O density of VDI was not substantially different from the I/O density of the actual desktops.
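
Some rough arithmetic behind that point, using illustrative desktop counts and IOPS figures rather than measured data:

```python
# Deduplication shrinks the capacity VDI needs by roughly 10x, so the IOPS landing on
# each remaining terabyte rise by roughly 10x. All figures here are illustrative.

desktops = 2_000
iops_per_desktop = 25        # steady state, ignoring boot and login storms
gb_per_desktop = 40
dedup_ratio = 10             # "nearly 10x reduction" for largely identical images

total_iops = desktops * iops_per_desktop
raw_tb = desktops * gb_per_desktop / 1_000
deduped_tb = raw_tb / dedup_ratio

print(f"aggregate IOPS:            {total_iops:,}")
print(f"I/O density without dedup: {total_iops / raw_tb:,.0f} IOPS/TB over {raw_tb:.0f} TB")
print(f"I/O density with dedup:    {total_iops / deduped_tb:,.0f} IOPS/TB over {deduped_tb:.0f} TB")
```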

Flash appliances are the best solution for handling the I/O density created by inline deduplication with VDI. Flash appliances are optimized for high I/O density workloads and bring an added benefit in that they tend to decrease the latency for data access, meaning the end user experience with flash as the storage media is likely to be even better than if users were getting data from a disk drive inside their desktop.

Data center managers have a choice to make: choose a storage architecture that creates an application silo or choose a storage architecture that can support multiple performance sensitive use cases. In fact, VDI is not the only application that benefits from flash appliances. The number one application for flash appliances is database acceleration. It is beneficial for the data center manager to pick a flash appliance that can truly multi-task, handling VDI workloads and database workloads with equal effectiveness. But, the capability to handle high I/O density is the number one requirement for VDI workloads, whereas extremely low latency is the number one requirement for database workloads.

At this point, the field of potential do-everything solutions narrows quickly. It just so happens that flash appliances with built-in deduplication are the worst choices for database acceleration. The inline deduplication that provides significant benefits for VDI provides almost no data reduction benefit for databases; instead, the very process of deduplicating data adds latency, degrading database performance. For this reason, IBM does not implement always-on inline deduplication that cannot be turned off in its FlashSystem appliance. Doing so would be contrary to the trajectory of the data center toward virtualization, decreased silos, and ultimately storage solutions that do everything well.

In this way, IBM covers all the bases. FlashSystem offers the low latency, extreme performance, high availability, and fat bandwidth to serve very well as the foundational multi-tasking storage. IBM then offers a variety of ways that solutions for specific application requirements can be layered over the FlashSystem foundation. For example, IBM partners with Atlantis Computing to provide a best-of-breed solution for VDI. Atlantis Computing ILIO software executes within a virtual machine (VM), so it does not require a server silo, and it provides compression and deduplication capabilities explicitly designed for VDI. A single FlashSystem appliance can serve up over one thousand volumes from its 40TB of protected capacity. The appropriate capacity is allocated for use with VDI and provides the I/O density and low latency that reduce the cost per desktop of VDI while improving the end user experience. Because even very large VDI implementations do not use 40TB of capacity, the remaining capacity of the IBM FlashSystem can be allocated to accelerating databases.

As the data center footprint of flash expands, FlashSystem is uniquely capable of supporting every workload with equal efficiency. With the economics of flash already past the tipping point, data center managers should be looking at long term strategies for replacing performance HDD with flash appliances. Creating silos that only handle a single storage challenge such as VDI will waste multiple opportunities to increase overall data center storage performance and efficiency while at the same time lowering storage costs. Implementing smarter, highly capable FlashSystem storage enables data center managers to address multiple storage challenges today, while empowering growth and innovation in the future.

Learn more about using flash to handle multiple workloads at the upcoming Flash Memory Summit in Santa Clara and VMworld in San Francisco! I will be at both events and hope to see you there. To learn more about the work IBM is doing with Atlantis Computing, please visit the IBM FlashSystem EcoSystem website.


Server-Side Caching

January 20, 2012

Woody Hutsell, http://www.appICU.com

Fusion-io recently posted this blog post that I wrote:   http://www.fusionio.com/blog/why-server-side-caching-rocks/

I feel strongly that 2011 will be remembered, at least in the SSD industry, for establishing the role of server-side caching using Flash.  I recall soaking in all of the activity at last year’s Flash Memory Summit and being excited about the new ways Flash was being applied to solve customer problems.  It is a great time to be in the market.  I look forward to sharing more of the market’s evolution with you.



Flash Memory Summit Presentation

September 6, 2011

Woody Hutsell, www.appICU.com

For those of you who are interested, here is a link to a presentation that I delivered at the 2011 Flash Memory Summit on “Mission Critical Computing with SSD”.

http://www.flashmemorysummit.com/English/Collaterals/Proceedings/2011/20110810_T1B_Hutsell.pdf



Third Party Caching

August 1, 2011

By Woody Hutsell, appICU

I have a point of view about third party caching (particularly as it applies to external systems, as opposed to caching at the server with PCI-E) that is different from that of many in the industry.  Some will see this as bashing of some particular product, but that is not my intent.  As far as I know, I am not competing with a third party caching solution at any customer site.  My goal here is to start a discussion on third party caching; I will lead with my opinions and hope that others weigh in.  I am open to changing my mind on this topic, as I have numerous friends in the industry who stand behind this category.

First, some background.  Many years ago, 2003 to be exact, I helped bring a product to market to provide third party caching with RAM SSD.  I believed in the product and was able to get many others to believe in the product.  What I was not able to do was to get many people to buy the product.  As I look at solutions on the market, I can see that companies trying to sell third party caching solutions are encountering the same obstacles and are fixing or working around the problems.  Here are some problems I have experienced with third party caching solutions:

1.  Writes.  The really delicious problem to solve several years ago with a RAM caching appliance was related to write performance.  Many storage systems had relatively small write caching capabilities that caused major pain for write intensive applications.  A large RAM SSD (at the time I think we were using 128GB RAM) as a write cache was a major problem solver for these environments.  Several things have happened to make selling write caching as a solution more difficult:

•  RAID systems increasingly offered reasonable cache levels, narrowing the field of customers that need write caching.  When we offered this RAM write cache, we thought that Xiotech customers were the perfect target because Xiotech did not believe in write caching at the time. Fact is, the combined solution worked out pretty well, but it was only useful until Xiotech realized that offering their own write cache could solve most customer problems.

•  Third party write caching introduces a point of failure into the solution.  If you write-cache, you have to be at least as reliable as the solution you are caching; otherwise you have produced a net loss in customer reliability.

•  Write caching is nearly impossible if the backend storage array has replication or snapshot capabilities.   Arrays with snapshots have to be cache-aware when they snapshot, or else they risk snapshotting without the full data set (a toy sketch of this problem appears after this list).  I have seen companies try to get around this, but most of the solutions look messy to me.

•  Putting a third party device from a small company in front of a big expensive product from a big company is a good way for a customer to lose support.  We realized early on that the only way for this product to really succeed was to get storage OEMs to certify it and approve it for their environments (we did not do very well at this).
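
Here is the toy sketch of the snapshot issue mentioned above; everything in it is simplified and hypothetical:

```python
# If the array snapshots while some writes still sit only in a third-party cache in
# front of it, the snapshot is missing part of the consistency group.

array = {}          # blocks the back-end array has actually received
write_cache = {}    # writes absorbed by a third-party caching appliance

def cached_write(block, data):
    write_cache[block] = data      # acknowledged to the host, not yet on the array

def flush_cache():
    array.update(write_cache)
    write_cache.clear()

cached_write("db_datafile_blk_12", b"new row")
cached_write("db_logfile_blk_3", b"commit record")
acknowledged = set(array) | set(write_cache)

snapshot = dict(array)             # array-based snapshot taken without a cache flush
print("snapshot complete?", set(snapshot) >= acknowledged)   # False -- data is missing

flush_cache()
snapshot = dict(array)             # snapshot taken after coordinating a cache flush
print("snapshot complete?", set(snapshot) >= acknowledged)   # True
```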

2.  Reads.  Given the challenges with write caching, it seems to me that most companies today are focused on read caching.  Read caching solutions have a long history.  Gear 6 was one of the first to take the space seriously and had some limited success in environments such as oil & gas HPC and rendering.  Some of the companies that have followed Gear 6 seem to be following in their footsteps, with markedly different types of hardware and cost.  Here are some issues I see with read caching:

•  A third party read-only cache adds a write bottleneck, since writes that pass through the cache still have to be written to the back-end storage – in other words, latency injection.  I assume there are architectures that get around this today (see the sketch after this list).

•  A third party read-only cache really only makes sense if your controller is 1) poorly cached, 2) does not have fast backend storage, 3) is processor limited, or 4) has inherently poor latency.  This may be the real long-term problem for this market.  Whether you talk about SAN solutions or NAS solutions, all storage vendors today are offering Flash SSD as disk storage.  In SAN environments, many vendors can dynamically tier between disk levels (thus implementing their own internal kind of caching).  NetApp has Flash PAM cards. Both BlueArc and NetApp can implement read caching.  The only hope is that the customer has legacy equipment or has scoped their solution so poorly that they need a third party caching product.

•  Third party caching creates a support problem.  Imagine you are NetApp and the customer calls in and says, “I am having problems with my NetApp storage, can you fix it?”  Support says, “Describe the environment.”  The customer says, “blah…blah…third party cache…NetApp.”  NetApp says, “That is not a supported environment.”  I always saw this as a major limiting factor for third party caching solutions: how do you get the blessing of the array/NAS vendor so that your customer maintains support after placing your box between the servers and the storage?

•  Third party read caching solutions cannot become a single point of failure for the architecture.
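
To make the read-side points concrete, here is a minimal read-through, write-through cache sketch; the latencies and LRU policy are illustrative assumptions, not any vendor’s implementation:

```python
# Reads served from the cache skip the slow back end; every write still has to reach
# the back-end array -- the "latency injection" described above.
from collections import OrderedDict

BACKEND_READ_MS, BACKEND_WRITE_MS, CACHE_MS = 6.0, 6.0, 0.2   # illustrative latencies

class ReadCache:
    def __init__(self, capacity, backend):
        self.capacity, self.backend, self.lru = capacity, backend, OrderedDict()

    def read(self, block):
        if block in self.lru:                      # cache hit: fast path
            self.lru.move_to_end(block)
            return self.lru[block], CACHE_MS
        data = self.backend[block]                 # miss: pay the back-end read
        self.lru[block] = data
        if len(self.lru) > self.capacity:
            self.lru.popitem(last=False)           # evict least recently used
        return data, BACKEND_READ_MS + CACHE_MS

    def write(self, block, data):
        self.lru[block] = data                     # keep the cache coherent...
        self.backend[block] = data                 # ...but the write-through still
        return CACHE_MS + BACKEND_WRITE_MS         # pays the back-end latency

backend = {i: f"block-{i}" for i in range(100)}
cache = ReadCache(capacity=10, backend=backend)
print(cache.read(5))    # miss
print(cache.read(5))    # hit
print(cache.write(5, "updated"))
```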

So, there it is. I am looking forward to some insightful comments and feedback from the industry.  As you can see, many of my opinions are based on scars from prior efforts in this segment and are not meant to be a reflection on existing products and approaches.



Tales from the Field

June 22, 2011


by Woody Hutsell, www.appICU.com

Instead of marketing from afar, I have been selling from the trenches, and let me tell you, the world looks very different from this vantage point.

I have a variety of observations from my first 9 months of working closely with IT end-users:

  1. At least 50% of the IT people I talk to are generally unfamiliar with solid state storage.  These 50% are so busy worrying about backups, replication, storage capacity and virtualization that it would take a whole screaming train full of end users before they would care about performance.  What they are likely to think they know about SSDs is that they are unreliable and don’t have great write performance.  I always ask these end users about performance or interest in SSD and usually get fairly blank looks back.  Don’t get me wrong, their lack of interest in performance or SSD is no reflection on them, just a reflection on their situation.  Maybe they don’t need any more performance than they already get from their storage.  Maybe performance is so far down their list of concerns as to not matter.  Maybe they just can’t budget a big investment in SSD.
  2. Some high percentage of IT buying is done without any real research.  So much for technical marketing.  You could write any number of case studies, brochures and white papers and these guys wouldn’t learn about it unless the sales person sitting across from them drops in at just the right time immediately after the aforementioned train full of end-users has started complaining about performance (and the IT guy happens to have budget to spend on something other than backup, storage capacity, replication or virtualization).
  3. These groups are deploying server virtualization en masse.
  4. These groups are standardizing on low-cost storage solutions.  The rush to standardize is driven by the number one reality affecting many IT shops:  they are understaffed and their budgets are constrained.  The lack of staffing means that it is hard to get staff trained on multiple products, and life is easier if they can manage multiple components from a single interface.  The lack of budget means that IT buyers have to make compromises when it comes to storage solutions.  Because of item #2 (above), they are reasonably likely to buy storage from their server vendor and often find their way to the bottom of the storage line-up to save money.

You might think these observations would be disheartening, but really I think the story is that SSD is just starting to make its way through to the more mature buyers in the market.  Eventually, I believe that all IT storage buyers will be as familiar with and concerned with protecting application performance as they are with capacity and reliability.

A case in point: I have run into at least two customers where the drive to standardize on VMware and low-cost storage is crushing application performance for mission-critical applications.  The good news for these IT shops is that they have low storage costs and an easy-to-manage environment (because they have one storage vendor and one server virtualization solution).  The bad news is that their core business is suffering.

From my limited point of view, standardization is something that the IT guys like and the application owners don’t like.  You might assume that I think the IT guys are short-sighted, but no, increasingly I am seeing that they just don’t have a choice; they have to standardize or die under a staggering workload and shrinking budget.  Something, though, has to give.

A core business of one of these operations was risk analysis.  This company deployed low-cost storage and had virtualized the entire IT environment with VMware (including the SQL Server database).  The entire IT infrastructure ran great for this customer, but a mission-critical sub-terabyte database was a victim of standardization.  The risk managers, whose decisions drove business profitability, were punished with slow application response times every time they ran complex analyses.

The second business is really a conglomerate of some 50+ departments.  These departments were not created equal; there were some really profitable big departments and some paper-pushing small departments.  To the benefit of some end users and the tremendous detriment of others, this business standardized on a middle-tier storage solution with generous capacity scalability but not-so-generous performance scalability.  Their premier revenue-generating department was suffering with, you won’t believe this, 60 millisecond latencies from storage for their transaction processing system.  Yikes.  For the non-storage geeks reading this blog, a really fast solid state storage system will return data to the host in well under 1 millisecond, and a well-tuned hard disk based RAID array will return data in 5 to 7 milliseconds; a 60 millisecond response time is indicative of a major storage bottleneck.  Experiencing a 60 millisecond response time on a single request is no big deal, but when it happens during a batch process or is spread across many concurrent users, applications get very slow, end users wait for seconds, and batch processes take too long to complete, resulting in blown batch processing windows.
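
The arithmetic behind that 60 millisecond story, with an illustrative I/O count rather than the customer’s actual workload:

```python
# For a batch job that issues I/Os one after another, per-I/O latency translates
# directly into elapsed time. The I/O count below is a made-up example.

serial_ios = 1_000_000

for label, latency_ms in (("flash (<1 ms)", 0.5), ("tuned HDD RAID", 6.0), ("observed", 60.0)):
    hours = serial_ios * latency_ms / 1000 / 3600
    print(f"{label:>15}: {hours:5.1f} hours for {serial_ios:,} serial I/Os")
```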

For now, the story for these two environments is not finished.  Once companies head down the standardization trail, they are pretty confident and committed.  Eventually, the wheels fall off and people begin to realize that it is as bad to standardize on all low-cost storage as it is to standardize on all high-end storage.  Eventually, people realize that IT needs to align to the business and not the other way around.

As companies amass larger data stores and the price and options for deploying SSD evolve, SSD solutions will become more common in the data center and a part of each IT manager’s bag of tricks.  Zsolt Kerekes, at StorageSearch.com, put it best in his 2010 article “This Way to Petabyte SSD” (http://www.storagesearch.com/ssd-petabyte.html) when he said “The ability to leverage the data harvest will create new added value opportunities in the biggest data use markets – which means that backup will no longer be seen as an overhead cost. Instead archived data will be seen as a potential money making resource or profit center. Following the Google experience – that analyzing more data makes the product derived from that data even better. So more data is good rather than bad. (Even if it’s expensive.)”


Consistency Groups: The Trouble with Stand-alone SSDs

February 28, 2011

SSDs (Solid State Disks) are fast; everyone knows this.  So, if they are all so very fast, why are we still using spinning disks at all?  The thing about SSDs (OK, well, one of the things) is that while they are unarguably fast, they need to be implemented with reliability and availability in mind, just like any other storage media.  Deploying them in an enterprise environment can be sort of like “putting all of your eggs in one basket”.  In order for them to meet the RAS needs of enterprise customers, they must be “backed up” in some meaningful way.  It is not good enough to make backup copies occasionally; we must protect their data in real time, all of the time.  Enterprise storage systems do this in many different ways, and over time, we will touch upon all of them.  Today, we want to talk about one of those ways – replication.

One of the key concepts in data center replication is the consistency group.  A consistency group is the set of files or volumes that must be backed up, replicated, and restored together in order for the application to be properly restored.  Consistency groups are the cause of the most difficult discussions between end users and SSD manufacturers.  At the end of this article, I will suggest some solutions to this problem.

The largest storage manufacturers have a corner on the enterprise data center marketplace because they have array-based replication tools that have been proven, in many locations over many years.  For replicated data to be restored, an entire consistency group must be replicated using the same tool set.  This is where external SSDs encounter a problem.  External SSDs are not typically (though this is changing) used to store all application data; furthermore, they do not usually offer replication.  In a typical environment, the most frequently accessed components of an application are stored on SSD and the remaining, less frequently accessed data, are stored on slower, less expensive disk.  If a site has array-based replication, that array no longer has the entire consistency group to replicate.

External SSD write caching solutions encounter a more significant version of this same problem.  Instead of storing specific files that are accessible to the array-based replication tool, the cache holds some writes that may, or may not, have been flushed through to the replicating array.  The replicating array has no way of knowing this and will snapshot or replicate without a full set of consistent data, because some of that data is still held in the external caching solution.  I am aware that some of these third party write caching solutions do have a mechanism to flush cache and allow the external array to snapshot or replicate, but generally speaking, these caching SSDs have historically been used to cache only reads, since write caching creates too many headaches.  Unless the external caching solution is explicitly certified and blessed by the manufacturer of the storage being cached, using these products for anything more than read caching can be a pretty risky decision.

Automatic integration with array-based replication tools is a main reason that some customers will select disk form factor SSD rather than third party SSDs, in spite of huge performance benefits from the third party SSD.  If you are committed to attaining the absolute highest performance, and are willing to invest just a little bit of effort to maximize performance, the following discussion details some options for getting around this problem.

Solution 1:  Implement a preferred-read mirror.  For sites committed to array-based replication, a preferred-read mirror is often the best way to get the benefit of an external SSD and yet keep using array-based replication.  A preferred-read mirror writes to both the external SSD and the replicating SAN array.  In this way, the replicating array has all of the data needed to maintain the consistency group, and yet all reads come from the faster external SSD.  One side benefit of this model is that it allows a site to avoid mirroring two expensive external SSDs for reliability, saving money, because the existing array already fills that role.  If your host operating system or individual software application does not offer preferred-read mirroring, a common solution is to use a third-party storage application such as Symantec’s Veritas Storage Foundation to provide this feature.  Bear in mind that a preferred-read mirror does not accelerate writes.
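
A minimal sketch of the preferred-read mirror idea in Solution 1; the names and latencies are illustrative assumptions, not any particular product’s behavior:

```python
# Writes go to both the external SSD and the replicating array (so the array keeps the
# full consistency group); reads are served only from the faster SSD side.

class Mirror:
    def __init__(self, name, read_ms, write_ms):
        self.name, self.read_ms, self.write_ms, self.blocks = name, read_ms, write_ms, {}

ssd = Mirror("external SSD", read_ms=0.3, write_ms=0.3)
array = Mirror("replicating array", read_ms=6.0, write_ms=4.0)

def mirrored_write(block, data):
    for side in (ssd, array):                      # both copies stay current; the array
        side.blocks[block] = data                  # can still replicate the whole group
    return max(ssd.write_ms, array.write_ms)       # writes are not accelerated

def preferred_read(block):
    return ssd.blocks[block], ssd.read_ms          # reads always come from the fast side

print("write latency:", mirrored_write("lun0_blk42", b"payload"), "ms")
print("read latency: ", preferred_read("lun0_blk42")[1], "ms")
```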

Solution 2:  Implement server-based replication.  There are an increasing number of good server-based replication solutions.  These tools allow you to maintain consistency groups from the server rather than from the controller inside the storage array, allowing one tool to replicate multiple heterogeneous storage solutions.

Solution 3:  For enterprise database environments, it is common for a site to replicate using transaction log shipping.  Transaction log shipping makes sure all writes to a database are replicated to a remote site where a database can be rebuilt if needed.  This approach takes database replication away from the array – moving things closer to the database application. 

Solution 4:  Implement a virtualizing controller with replication capabilities.  A few external SSD manufacturers have partnered with vendors that offer controller-based replication and support heterogeneous external storage behind that controller.  This moves the SSD behind a controller capable of performing replication.  The performance characteristics of the virtualizing controller then become a gating factor in determining the effectiveness, and indeed the value added, of the external SSD.  In other words, if the virtualizing controller adds latency (it must) or has bandwidth limitations (generally they do), those will now apply to the external SSD.  This can slow SSDs down by a factor of three to ten.  It is also the case that this approach solves the consistency group problem only if the entire consistency group is stored behind the virtualizing controller.

Most companies implementing external SSD have had to grapple with the impact of consistency groups on application performance, replication, and recovery speed as they make their decisions.  Even so, the great speed of external SSDs often leads them to implement external SSD using one of the solutions we have discussed.

What has been your experience?