
Latency (still) matters

Latency doesn't get talked about much when it comes to cloud computing. But it's a big deal.

Gordon Haff
Gordon Haff is Red Hat's cloud evangelist, although the opinions expressed here are strictly his own. He's focused on enterprise IT, especially cloud computing. However, Gordon writes about a wide range of topics, whether they relate to the way too many hours he spends traveling or his longtime interest in photography.

Over five years ago, I wrote a research note titled "Latency Matters!" The impetus was the following observation:

What's the best way to estimate travel time? Would you rely on an estimate based solely on the number of lanes in the road and the sound of the engine? Nope. You need to know, at minimum, how far you have to travel, the condition of the road, and how fast you'll likely be able to go. Obvious, right?

You'd think so. But system and networking specs rate computer performance according to bandwidth and clock speed, the IT equivalents of just measuring the width of the road and the engine's revolutions per minute...

Latency is the time that elapses between a request for data and its delivery. It is the sum of the delays each component adds in processing a request. Since it applies to every byte or packet that travels through a system, latency is at least as important as bandwidth, a much-quoted spec whose importance is overrated. High bandwidth just means having a wide, smooth road instead of a bumpy country lane. Latency is the difference between driving it in an old pickup and driving it in a Formula One racer.
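To put rough numbers on the road analogy, here's a back-of-the-envelope sketch in Python. The latency and bandwidth figures are illustrative assumptions, not measurements; the point is simply how the two terms of the delivery-time equation trade off.

```python
# Back-of-the-envelope model (illustrative numbers, not measurements):
# total delivery time = fixed per-request latency + payload size / bandwidth.

def transfer_time(payload_bytes: int, latency_s: float, bandwidth_bps: float) -> float:
    """Seconds to deliver one request: fixed delay plus serialization time."""
    return latency_s + payload_bytes / bandwidth_bps

# The "wide road": 1 Gbit/s of bandwidth, but a 50 ms round trip.
wide_far = dict(latency_s=0.050, bandwidth_bps=125_000_000)
# The "country lane": a tenth the bandwidth, but a 1 ms round trip.
narrow_near = dict(latency_s=0.001, bandwidth_bps=12_500_000)

for size in (4_000, 10_000_000):  # a small record vs. a bulk file
    print(f"{size:>12,} bytes:",
          f"wide/far {transfer_time(size, **wide_far) * 1000:8.2f} ms,",
          f"narrow/near {transfer_time(size, **narrow_near) * 1000:8.2f} ms")
```

For the 4 KB request, the low-latency link wins by a factor of almost 40 despite having a tenth of the bandwidth; only on the bulk transfer does the wide pipe pay off. Bandwidth tells you how fast big things move once they're moving; latency is the tax on every single request.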

Some of the particulars discussed in that research note are less central to everyday IT concerns than they were at the time. For example, although as much engineering attention goes into designing high-end systems as ever, the details of memory architectures, internal processor interconnects, and the like have increasingly receded into the background as far as most IT generalists are concerned.

But, as Todd Hoff notes in "Latency is Everywhere and it Costs You Sales - How to Crush it," latency concerns are still very much with us. In fact, the nine sources of latency that he lists suggest that latency is actually a much thornier problem in a world where applications are broken into pieces and often distributed around the world.

This strikes me as a particularly important observation amid the cloud computing hoopla, which tends to treat the relationships between software components and their associated data as taking place within some idealized network abstraction. That's not to say that such an abstraction isn't a useful concept. But it does tend to de-emphasize how parts interact in the real, physical world.

Consider, for example, the case of storage. Even staunch cloud computing advocates, the ones who liken drawing processor cycles from some network grid to a computing version of the electric utility, generally concede that storage is a trickier problem. Whereas computing is something you just consume, data has state. And if you lose that state, it's gone. That's a fundamentally more serious problem than losing access to a compute utility for a few minutes or even an hour. For this reason and others (regulatory compliance, etc.), my Illuminata colleague John Webster has written that "Internal storage clouds will become way more popular than external storage clouds."

OK, you say, so cloud computing (in the sense of external clouds out in the network somewhere) will be more popular for processing things than storing them. So what?

The "so what" (or one of them, anyway) is latency. We tend to run applications close to the data they operate on for a reason: application performance is often largely a function of how quickly the application can read and write the data it's working on. And data stored on a local hard disk can almost always be accessed faster than the same data sitting at the other end of a network pipe hundreds or even thousands of miles away.
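A toy calculation makes the point. Suppose an application issues thousands of small reads, each of which must complete before the next can begin. The access times below are assumptions chosen only to show the order-of-magnitude gap, not benchmarks:

```python
# Illustrative sketch of a "chatty" workload: many small, dependent reads,
# each paying the full access delay. The figures are assumed for illustration.

LOCAL_ACCESS_S = 0.0001  # assumed local disk/cache access, ~0.1 ms
REMOTE_RTT_S = 0.040     # assumed long-haul network round trip, ~40 ms

def stall_time(num_reads: int, per_read_s: float) -> float:
    """Total waiting time when reads are serialized (each depends on the last)."""
    return num_reads * per_read_s

reads = 10_000  # e.g., index lookups in a data-mining pass
print(f"local:  {stall_time(reads, LOCAL_ACCESS_S):7.1f} s")  # about 1 second
print(f"remote: {stall_time(reads, REMOTE_RTT_S):7.1f} s")    # about 400 seconds
```

No amount of extra bandwidth closes that gap; it's the per-request delay, multiplied by every round trip, that does the damage.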

Thus, if storage stays inside organizations, a lot of the processing of that data will stay inside as well. And the general trend toward more data-intensive modeling and mining only strengthens this relationship, because latency matters more than ever in a world where the pipes are distributed networks.