
EMC rolls out FAST

We may be seeing the beginning of big changes in how storage is architected in enterprise data centers.

EMC hasn't exactly kept its fully automated storage tiering (FAST) a secret. The company has talked about the technology at analyst events and its global marketing CTO, Chuck Hollis, has blogged on the topic.

But now version 1 has officially launched, despite earlier reports that it wouldn't arrive until 2010. I'll get to why there have probably been some mixed signals about availability in a bit, but first let's look at what FAST is.

Different types of storage associated with computers perform relatively better or worse. Faster is usually better, of course. But faster also tends to mean more expensive per unit of capacity. This relationship holds pretty generally. After all, if a technology were simultaneously slower and more expensive, probably no one would use it. There are some other relevant characteristics, such as permanence and removability, but price and performance are two of the big ones.

FAST automates the placement of data based on the way it is accessed. For example, a database index that is frequently read and written to will migrate to high-performance storage while older data that hasn't been touched for a while will move to slower, cheaper storage. The fundamental idea is that a relatively small amount of fast/expensive storage can let an application run almost as quickly as if all the storage were fast and expensive.
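FAST's policy engine is proprietary, but the underlying idea--promote frequently accessed data to fast storage, demote cold data to cheap storage--can be sketched in a few lines. The tier names, thresholds, and extent identifiers below are illustrative assumptions, not EMC's actual implementation:

```python
from collections import defaultdict

# Hypothetical tier names and thresholds -- not EMC's real policy values.
FAST_TIER, SLOW_TIER = "ssd", "sata"
PROMOTE_THRESHOLD = 100   # accesses per window to earn the fast tier
DEMOTE_THRESHOLD = 10     # below this, data falls back to cheap disk

class TieringPolicy:
    def __init__(self):
        self.tier = defaultdict(lambda: SLOW_TIER)  # extent -> current tier
        self.hits = defaultdict(int)                # extent -> accesses this window

    def record_access(self, extent):
        self.hits[extent] += 1

    def rebalance(self):
        """At the end of each measurement window, migrate extents between tiers."""
        moves = []
        for extent, count in self.hits.items():
            if count >= PROMOTE_THRESHOLD and self.tier[extent] != FAST_TIER:
                self.tier[extent] = FAST_TIER
                moves.append((extent, FAST_TIER))
            elif count < DEMOTE_THRESHOLD and self.tier[extent] == FAST_TIER:
                self.tier[extent] = SLOW_TIER
                moves.append((extent, SLOW_TIER))
        self.hits.clear()  # start a fresh window
        return moves

policy = TieringPolicy()
for _ in range(150):
    policy.record_access("db_index")   # hot: frequently read and written
policy.record_access("old_archive")    # cold: touched once
policy.rebalance()
print(policy.tier["db_index"])     # promoted to the fast tier
print(policy.tier["old_archive"])  # stays on cheap capacity disk
```

A real array also has to weigh migration cost, tier capacity, and write wear, but the promote/demote loop above is the core of any access-driven tiering scheme.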

The concept is similar in some respects to Sun's storage pools in its ZFS file system, a component of Solaris.

Unsurprisingly, given that EMC tends to view storage as sitting at the center of the data center, FAST lives in the array. Three of its product lines are supported: Symmetrix V-Max (high-end storage area network arrays), CLARiiON CX4 (mid-range storage area network arrays), and Celerra NS (file-based network-attached storage). The basic FAST concept is the same across these products, but details differ in how they are managed and in some low-level specifics.

This is because Symmetrix and CLARiiON come from largely separate technology roots--and because Celerra operates at the file rather than the block level. However, EMC told me in a recent briefing that its goal in FAST version 2, slated for mid-2010, is to largely mask platform differences from users through its management and other administrative interfaces.

And this is where I think some of the mixed signals about release dates come from. FAST v1 certainly brings interesting and useful capabilities to the market. However, in v1, Symmetrix and CLARiiON apply migration policies only at the logical unit number (LUN) level--a concept analogous to a drive letter on a Windows PC, and one likely to correspond to many gigabytes of storage. FAST v2 will enable the relocation of blocks of under 1 megabyte.
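A rough back-of-the-envelope calculation shows why the finer granularity matters. Assume, hypothetically, a 500GB LUN in which only a 2GB database index is actually hot, and a sub-LUN relocation unit of 768KB (these numbers are illustrative, not EMC sizing guidance):

```python
import math

MB = 1
GB = 1024 * MB

lun_size = 500 * GB          # the whole LUN
hot_data = 2 * GB            # the frequently accessed index
extent_size = 0.75 * MB      # assumed sub-megabyte relocation unit (768KB)

# LUN-level policy (v1-style): the whole LUN moves, hot or not.
flash_v1 = lun_size

# Sub-LUN policy (v2-style): only the extents holding hot data move.
flash_v2 = math.ceil(hot_data / extent_size) * extent_size

print(round(flash_v1 / flash_v2))  # -> 250: flash consumed for the same hot set
```

In this scenario, LUN-level migration would park roughly 250 times more data on expensive flash than sub-LUN migration needs to, which is why the v2 granularity is the bigger story.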

Version 2 will also add other important capabilities, such as chargeback accounting in Ionix ControlCenter for organizations that want to more precisely allocate costs to different business units.

In short, without suggesting that v1 isn't fully baked, it's clear that v2, following in just six months or so, will be a significantly more complete and integrated technology suite.

EMC is making a huge deal of FAST, as well they should. If you look at where different storage technologies sit today, change is a-brewing. Let me explain.

The idea of storage tiers isn't new. Tiers historically featured tape as a major piece, but solid-state (flash) drives have been around for a long time as well. Disks and disk arrays have also long used memory caches, sometimes backed up with batteries, to improve performance.

But the caches, for their part, were limited. In the case of disk arrays, they mostly served to minimize the performance degradation associated with certain RAID (redundant array of inexpensive disks) configurations, which store parity information that allows recovery in the event of a disk failure.

And the other parts of the hierarchy were rather manual. This was sometimes OK in the case of tape used to archive data according to some preset policy. But solid state long remained a niche. You just needed too much of it to gain its performance benefits. In addition, for a long time, bottlenecks in storage controllers and in the connection between server and storage limited the performance benefit of solid state anyway.

But some dynamics are changing today.

The first is that we've pretty much reached the performance limits of "spinning rust" (as storage folks like to jokingly call disks). Drives continue to get bigger certainly. But 15,000 rpm Fibre Channel disks aren't going to get a whole lot faster. Sure, we can always add more of them--and that helps some--but then you have wasted capacity, more power and heat, and higher costs.

Another is that tape is going away for many purposes. Yes, it will be a long slow decline, but that's the trend.

And solid state is getting cheaper. It remains significantly more expensive than disk drives for a given capacity, but it's now affordable in quantities that are interesting for mainstream commercial computing.

Add those together, and techniques that allow enterprise Fibre Channel drives (and tape) to be largely replaced over time by a combination of solid state and capacity- and power-optimized SATA or SAS disk drives start to look very interesting.

EMC's description of this new hierarchy is FAST, Thin, Small, Green, Gone. In other words: solid state for performance, a reduced number of active high-performance disk drives, de-duplicated data, low-activity drives that are spun down when not in use, and data that is finally purged when no longer needed.

This is certainly a long-term vision. Change does not happen quickly in enterprise storage. But it's starting to happen.