AppICU was conceptualized years ago. It started as an idea for a TMS consulting practice and spent some time (in my head) as the name of a new business. I suppose it is fitting that the name finally makes its first public appearance as the title of my blog. If you are a marketer, which was my primary responsibility for the last ten years, it is almost impossible to find the time or energy to blog about products you are already marketing. If it is such a great blog idea, why isn’t it a whitepaper or submitted for publication? Do you blog to bash competitors’ products? To brag about your cool new product/service? To become famous? My goal is to provide guidance to buyers of application acceleration solutions and solid state storage by bringing clarity where there is chaos. Along the way, I hope to influence the companies that develop and market to these buyers.
The market for application acceleration and solid state storage is well served by a variety of other writers/bloggers and analysts. You won’t find anyone promoting the solid state storage industry better than Zsolt Kerekes at StorageSearch.com. Zsolt publishes more original content about SSD than anyone I know. Without his independent view of the market, SSD buyers would be lost. If you are looking for analysts who know their stuff when it comes to SSD, I encourage you to follow: Jeff Janukowicz at IDC, Joseph Unsworth at Gartner, Greg Schulz at StorageIO, Robin Harris at Storage Mojo, Ray Lucchesi at Silverton Consulting, Jeff Boles at Taneja Group, and George Crump at Storage Switzerland. These guys all suffer through endless hours of vendor fluff to distill nuggets of useful information to pass on to their customers/readers.
Welcome to my blog. I hope it adds to the discourse and proves to be a good use of your time.
I’m looking forward to your future blogs. Please tell us more about AppICU, what it means/does, and what some next steps are if people want to engage further with AppICU.
As the name of my blog implies, I came to the SSD party (back when TMS was about the only game in town; before flash was used) from the point of view that RDBMS in fully normalized form offers the greatest bang for the buck for SSD implementation.
I suppose IBM felt something similar with the advent of disk drives: a change in paradigm from sequential operations on tape to random operations on disk. As events turned out, COBOL cowboys treated disk as just faster tape; it appears that SSD is going the same route. Which is too bad, for clients, anyway. Fully normalized schemas will lead to much smaller data footprints, perhaps by an order of magnitude or two; not so many bytes, maybe not so many parts shipped. For those applications which treated the data as file images, a re-write is pretty much required. (There are legends that there remain 360s out in the wild running check printing programs.) For those which were smart enough to sequester the data behind views and stored procs, swapping out a bad schema for a good one running on super-fast SSD is a cakewalk.
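The footprint argument can be made concrete with some back-of-the-envelope arithmetic. All of the figures below (customer count, row sizes, orders per customer) are invented for illustration, not taken from any real system:

```python
# Hypothetical illustration of how normalization shrinks a data footprint:
# an orders table that repeats full customer details in every row, versus
# a normalized design that stores each customer once and references it.

CUSTOMERS = 100_000          # distinct customers (assumed)
ORDERS_PER_CUSTOMER = 50     # average orders per customer (assumed)
CUSTOMER_BYTES = 500         # name, address, contact info (assumed)
ORDER_BYTES = 20             # order-specific fields: date, amount, status
KEY_BYTES = 8                # surrogate key referencing the customer row

orders = CUSTOMERS * ORDERS_PER_CUSTOMER

# Denormalized: every order row carries a full copy of the customer data.
denormalized = orders * (CUSTOMER_BYTES + ORDER_BYTES)

# Normalized: customer data stored once; orders carry only a foreign key.
normalized = CUSTOMERS * CUSTOMER_BYTES + orders * (ORDER_BYTES + KEY_BYTES)

print(f"denormalized: {denormalized / 1e9:.2f} GB")   # 2.60 GB
print(f"normalized:   {normalized / 1e9:.2f} GB")     # 0.19 GB
print(f"ratio:        {denormalized / normalized:.1f}x")
```

With these assumed numbers the normalized design is more than an order of magnitude smaller; with wider repeated columns or more rows per parent, the gap only grows, which is exactly the "not so many parts shipped" point when the storage is priced per gigabyte of flash.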
During your sojourn through blogging, I’d be interested to read your thoughts on SSD vendors getting involved in application design to the level of pairing their product with datastore specification. Have you seen this done? Or do the SSD vendors just say: “it’s a faster Raptor”?
A few thoughts. Smaller data is better for SSD because it decreases the cost of the implementation (a major objection to SSD implementation). SSDs are also special in that they offer extremely high performance per unit of capacity. In other words, concentrating I/O on a smaller amount of storage capacity is an awful thing for hard disks but a great thing for most SSDs.
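A quick sketch of that access-density point. The device figures here are invented round numbers for a hypothetical 15K RPM disk and a hypothetical flash device, chosen only to show the shape of the comparison:

```python
# Access density (IOPS per GB) decides which medium struggles when a
# shrinking dataset concentrates I/O: a disk array must add spindles
# purely for IOPS, while flash is typically sized by capacity instead.

hdd = {"iops": 200, "capacity_gb": 600}      # one 15K RPM drive (assumed)
ssd = {"iops": 50_000, "capacity_gb": 200}   # one flash device (assumed)

hdd_density = hdd["iops"] / hdd["capacity_gb"]   # ~0.33 IOPS/GB
ssd_density = ssd["iops"] / ssd["capacity_gb"]   # 250 IOPS/GB

workload_iops = 20_000   # hot OLTP working set (assumed)
dataset_gb = 400

# Ceiling division: drives needed is driven by IOPS on HDD,
# but by capacity on SSD.
hdd_drives_for_iops = -(-workload_iops // hdd["iops"])          # 100 drives
ssd_devices_for_capacity = -(-dataset_gb // ssd["capacity_gb"])  # 2 devices

print(f"HDD access density: {hdd_density:.2f} IOPS/GB")
print(f"SSD access density: {ssd_density:.0f} IOPS/GB")
print(f"HDD drives to serve {workload_iops} IOPS: {hdd_drives_for_iops}")
print(f"SSD devices to hold {dataset_gb} GB: {ssd_devices_for_capacity}")
```

Under these assumptions the disk array needs a hundred spindles (and 60 TB of mostly stranded capacity) to serve a workload that two flash devices cover with IOPS to spare, which is why shrinking the dataset hurts the HDD design and helps the SSD one.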
Most SSD vendors just push hardware and would probably leave it at “it’s a faster Raptor”. The early days of SSD bred more companies capable of doing application level performance analysis than I suspect you are seeing today. Why? Because in order to make the sale, you had to have a good proof-of-concept. Awareness of SSD was so low that you had to prove it before the customer would buy. This resulted in SSD companies developing application analysis skills for two reasons: 1) to help make the proof-of-concept a success and 2) to avoid sending a demonstration unit to customers who would not benefit from SSD. I think this is one area TMS really differentiated itself with things like http://www.statspackanalyzer.com which reviews Oracle AWR and Statspack results to identify best practices for application performance improvement. TMS was also unique in that it hired an Oracle performance expert to help advise customers. When I work with customers, I encourage them to think about application features and designs that are restricted because of disk storage.
I do see operating system, file system and database vendors developing solutions that expect SSD. While this type of tight integration is not needed for an SSD to be beneficial to an application, it won’t hurt.
After years of hoping for an SSD market to appear and after years of following BitMicro, TMS, FusionIO, StorageSearch etc. I just stumbled across your blog today… and I feel like I have discovered a mental twin. Only a lot brighter and with far more depth and insight in that market. And able to play with the big boys.
On one hand I hate it when none of my thoughts turn out to be original, on the other hand a quick reference to your blog may now save me all that breath.
Since TMS was too committed to RAM at the time, we chose FusionIO for caching, but for much of our business TMS and FusionIO are too expensive, not flexible enough and faster than we require. We don’t need more speed than consumer level SSDs and we need their flexibility, but we need TMS or FusionIO reliability. And nobody fills that gap for us.
Probably because we are too small: with our electronic payment business we are no Google, and even after heavy consolidation (while avoiding the hypervisor latency trap by using OpenVZ instead), our servers grow faster than our business, whose growth is steady and predictable.
For OLTP or payment authorization we need robustness and availability, and we would like to switch to SSD mainly because the I/O gap killed DAS. We see SAN more as a back-office solution, and it is very expensive and complicated to make it as resilient as we need for OLTP.
After almost a decade of searching we are still a bit lost in that gap between FusionIO and HDD-form-factor SSDs. A bootable (or rather a standard HDD device-driver compatible) PCIe controller, which can be stocked/expanded with standard HDD-form-factor enterprise-grade SSDs, but which offers the integration and “certification” quality of a PCIe plug-in SSD, would be perfect for us.
Most of the time 50,000 Oracle IOPS and 6 Gb/s SAS bandwidth are quite sufficient; the only issue is flexible deployment in standard servers running a mix of application servers and database servers.
Again for size reasons we can’t really certify our own hardware stack or support endless variants. We’d like to deploy one type of SSD for OS and local storage as well as OLTP DB.
And while technically none of that is rocket science, I guess after reading your blog I better understand that we are right in the middle of the two golden domains, which together resemble a human behind:
On the left you have the Googles and Amazons of this world, who stand to gain from a supplier adapting as closely as possible to their needs, and on the right you have the mass market, where pickings are slim but volumes may make it worthwhile anyway.
In the middle, there are neither volumes nor huge amounts of money.
But I’d still pay twice the price of a Vertex 3 for no extra performance, but TMS or FusionIO like reliability.
Thanks for your kind words. I think you have identified a gap in the market. I especially like the analogy. I strongly believe the market is coming to you. Here are some ways this is happening.
1. The companies in the low end are attracted to the margins of enterprise business. These products are being improved on many fronts, but I think they are over-focused on performance. I agree with you that for these companies to really make their mark they have to improve reliability. I think eMLC devices from this tier will be interesting.
2. The companies in the high end are attracted to the volumes of the consumer business. The companies you describe in the high end of the market are moving downstream. As an example, TMS recently introduced an eMLC solution which cuts the cost/GB in half versus SLC.
3. I think the recent focus on server caching solutions is a positive step for the middle of the market.
I think the middle of the market is up for grabs and that reliable, affordable solutions with “good-enough” performance will win in this space. I cannot over-emphasize the importance of reliability for this category. The customers in the mature middle part of the market have certain expectations of storage/server products and do not handle issues as well as innovators and early adopters do.