Solid-state drives offer extreme performance gains over hard-disk drives.
But how fast is fast enough?
Solid-state drives (SSDs) are fast becoming a central part of enterprise storage. With no moving parts, SSDs are approximately 100 times faster than a rotating disk in most common access profiles. This makes them ideal for the most demanding applications. Although SSDs are being adopted more broadly each year, they are likely to remain complementary to hard-disk drives (HDDs) – not replace them – for the foreseeable future.
In this Q&A we talk with LSI's Luca Bert, director of DAS RAID Strategic Planning & Architecture, about the value of SSDs in the enterprise today – and where they’re going tomorrow.
Q: Hello, Luca. Why don’t we begin with a quick overview of SSDs?
A: Sure. SSDs are high-IOPS – that’s I/O operations per second – storage devices aimed at business applications that need fast-access storage. In addition to better IOPS performance, SSDs offer a number of benefits over electromechanical HDDs, including better reliability and lower power consumption.
Q: But hasn’t HDD technology improved dramatically over the years?
A: Yes, but because of its mechanical moving parts, there are inherent physical limitations to the response times that an HDD can achieve.
Q: So by eliminating the rotational delay of a spinning platter and the wait for the head to locate and read/write the data, SSDs make data available nearly immediately. Correct?
A: Yes. And by doing so, SSDs can reduce I/O bottlenecks. An SSD works at speeds much closer to those of memory. With SSDs, we can build RAID storage controllers that get close to one million IOPS inside each server. Just 3-4 years ago, we were looking at around 30,000 IOPS. Now we can deliver 30x that – a speed-up larger than all the improvements we have seen in the last two decades.
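To put those figures in context, the gap between HDD and SSD IOPS can be sketched with a quick back-of-the-envelope calculation. The service times below are typical published figures for the hardware of that era, not numbers from the interview:

```python
# Illustrative IOPS math (hypothetical but typical numbers, not from the interview).

def hdd_iops(avg_seek_ms, rpm):
    """Estimate random IOPS for a disk: each I/O pays an average seek
    plus, on average, half a rotation of the platter."""
    half_rotation_ms = 0.5 * 60_000 / rpm   # half a revolution, in ms
    service_time_ms = avg_seek_ms + half_rotation_ms
    return 1000 / service_time_ms

# A 15K-RPM enterprise drive: ~3.5 ms seek + ~2 ms half-rotation.
print(round(hdd_iops(avg_seek_ms=3.5, rpm=15_000)))   # 182 IOPS

# An SSD has no mechanical delay; at a ~25 µs service time,
# a single device reaches tens of thousands of IOPS.
print(round(1000 / 0.025))                             # 40000 IOPS
```

The mechanical terms in the first calculation are exactly the "inherent physical limitations" mentioned above: no amount of HDD refinement removes the seek and rotation costs.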
Q: Would you call SSDs a revolutionary technology?
A: The technology is truly revolutionary and disruptive in its usage model, and it is probably the single biggest industry game-changer since the introduction of the Winchester drive in 1980. We’ve grown accustomed to hard-drive performance evolving linearly. Now we’re dealing with SSDs that are evolving exponentially – 100 times faster from one generation to the next. And they won’t be limited by mechanical restrictions but will instead follow the more familiar improvement curve of Moore’s law.
Q: This begs the question: Now that we have all this speed, what do we do with it?
A: We don’t actually know what innovations will result from the order-of-magnitude performance gain that SSDs provide. We have to be nimble and watch for new innovations unleashed by SSDs. There are, however, some signs of what’s ahead: new storage paradigms will introduce new storage architectures, and we will see the first of these in caching and tiering, which we sell under the CacheCade product name. These new usage models, tied to data-protection needs, will create completely new requirements in the storage subsystem that can carry potentially high overhead. Part of the new speed will be delivered to the system and part will be used to absorb that overhead, but both require us to stay ahead in the speed race. Technology history has shown again and again that there is nothing “too fast to matter”: you build it, and a solution using it will come.
Q: How fast is fast enough when it comes to IOPS?
A: Well, we are in uncharted territory here. As in every performance model, you can be no faster than your slowest element. For 20, 30, 40 years the slowest element has always been the hard drive. Now SSD storage runs at speeds close to those of memory, CPUs, and graphics processors. That’s why we knew we first had to improve the speed of our controllers. Our MegaRAID controllers will be able to handle the greatly increased speeds realized with SSDs. The transition we see is this: with HDDs, the drives were the bottleneck; with SSDs, that bottleneck moves to the RAID controller. Now, with the new generation of MegaRAID products, we’re adding capabilities that move the bottleneck further upstream, into the server. We need to make sure people know that no matter how high-performing a system they build, they will always have the option of using MegaRAID and SSD configurations to meet performance challenges.
Q: In other words, your main target on performance is not to be the bottleneck, right?
A: Yes. We started looking at how to invest for performance. The result was a performance roadmap that brought us from 35,000 IOPS in 2007 to what we expect will be 800,000+ IOPS in late 2011/early 2012.
Q: Where does the relationship between hardware and firmware fit into all of this?
A: Portions of the firmware code have migrated into the actual hardware to keep up with 100x performance gains. To make more efficient use of SSDs, we have the expertise to identify key portions of the firmware stack, optimize them, and commit them to hardware. This is a first in this market, by the way, and we believe we are well ahead of any competition in this area.
Q: Let’s talk about using SSDs for caching purposes and the LSI product CacheCade.
A: The idea with CacheCade is that an SSD can be used as mid-tier storage to cache data bound for an HDD, delivering a one- to two-order-of-magnitude increase in performance.
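As a rough illustration of that caching idea – the class, method names, and eviction policy below are hypothetical, since the interview does not describe CacheCade’s internals – a minimal SSD-as-cache tier in front of an HDD might look like this:

```python
from collections import OrderedDict

# Hypothetical sketch of an SSD cache tier in front of an HDD.
# Hot blocks are served from the fast tier; misses are promoted
# into it, evicting the least-recently-used block when full.

class TieredStore:
    def __init__(self, ssd_capacity, hdd):
        self.ssd = OrderedDict()      # fast tier: block -> data, in LRU order
        self.ssd_capacity = ssd_capacity
        self.hdd = hdd                # slow backing tier (a dict, for the sketch)
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.ssd:                 # hot block: SSD-speed access
            self.ssd.move_to_end(block)
            self.hits += 1
            return self.ssd[block]
        self.misses += 1
        data = self.hdd[block]                # cold block: pay the HDD penalty
        self.ssd[block] = data                # promote into the fast tier
        if len(self.ssd) > self.ssd_capacity:
            self.ssd.popitem(last=False)      # evict least-recently used
        return data

store = TieredStore(ssd_capacity=2, hdd={n: f"data{n}" for n in range(10)})
for b in [1, 2, 1, 1, 1, 3, 1]:               # skewed, "hot-spot" access pattern
    store.read(b)
print(store.hits, store.misses)               # 4 3
```

The payoff comes from skew: when a small working set receives most of the accesses, even a modest SSD tier absorbs the bulk of the I/O, which is why this suits the read-intensive, hot-spot workloads mentioned later in the interview.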
Q: So new technologies don’t mean business as usual?
A: That's right. We need to start thinking about different ways to use these new ultra-fast storage devices. SSDs will enable completely different ways to design storage systems and servers. And our challenge at LSI is to out-think the market needs.
Q: But all these SSD technologies seem very expensive.
A: It depends on what you mean by “expensive.” Suppose that you have an SSD-based solution that costs 2x the traditional one. If it can handle the workload that was traditionally done by three servers, the extra cost is more than balanced by the savings from not having to buy the extra servers and software licenses. Also, remember that servers now ship with gigabytes of DRAM, because today’s CPUs are so fast and HDDs so slow that adding DRAM in between is the only way to make servers efficient. But DRAM is very expensive, consumes a lot of power, and won’t scale. If you put a $1,000 SSD-based solution between the DRAM and HDD instead, you may end up saving $10,000 in DRAM costs. Cost is always a matter of proportion and of the problem being solved – and we have plenty of those problems at hand here.
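The arithmetic behind both cost arguments is simple; here is a hedged sketch using the round numbers from the answer above (the server price is an assumed placeholder, and license savings are left out):

```python
# Break-even sketch for the two cost arguments above.
# All prices are illustrative; only the 2x, 3-server, $1,000 and
# $10,000 figures come from the interview.

# Scenario 1: consolidation. An SSD-equipped server costs 2x a
# traditional one but carries the workload of three.
traditional_server = 10_000                      # assumed price
ssd_server = 2 * traditional_server
savings = 3 * traditional_server - ssd_server    # licenses would add more
print(savings)   # 10000 saved versus buying three servers

# Scenario 2: SSD as a middle tier between DRAM and HDD.
dram_avoided = 10_000    # DRAM no longer needed for caching
ssd_cost = 1_000         # SSD tier inserted in between
print(dram_avoided - ssd_cost)   # 9000 net saving
```

In both cases the SSD hardware is the more expensive component in isolation, but the system-level cost falls, which is the proportion argument being made.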
Q: Tell us about LSI’s point of view around SSDs.
A: The LSI solid-state approach recognizes different niches in the solid-state market. We cover them all and have solutions for each. For example, our CacheCade is a MegaRAID-branded software (or firmware, if you prefer) solution for "hot-spot" acceleration, enabling SSDs to be configured as a secondary tier of cache to maximize transactional I/O performance for read-intensive applications, like web servers with lots of small-block data reads. There’s our MegaRAID FastPath software, which provides high-performance I/O acceleration for SSDs. Then there’s WarpDrive, our board-level plug-and-play solid-state product that enables application acceleration.
Q: Sounds like you have this market covered.
A: Yes. The breadth of our offerings speaks to the broad knowledge and expertise we have in storage systems and sub-systems. We understand every layer of the storage stack and have solutions optimized for each.