Can we stop lying about numbers now?

You're being lied to every day, and it's only getting worse. If there's anything we've learned, it's that there are lies, damn lies and product specs.

Craig Simms Special to CNET News
Craig was sucked into the endless vortex of tech at an early age, only to be spat back out babbling things like "phase-locked-loop crystal oscillators!". Mostly this receives a pat on the head from the listener, followed closely by a question about what laptop they should buy.

But there are those out there who truly believe, say, that USB 3.0 can go up to 5Gbps. This isn't a random target of wrath — we've had a lot of USB 3.0 equipment in recently, enough for us to get a handle on what the technology is capable of.

For reference, 5Gbps equates to 625MBps — and that's the raw line rate, before encoding overhead takes its cut. In marketing speak that'd be "about one music CD a second", for those who still use the ageing format. This speed is, naturally, just shy of SATA 6Gbps, and comes with the unspoken promise that external storage might finally be as fast as internal, and bus-powered to boot. No more compromises trying to get power to an eSATA-enabled device.
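If you want to sanity-check the marketing maths yourself, the conversion is trivial — a quick sketch using the raw line rate only, ignoring any encoding overhead:

```python
# Convert USB 3.0's quoted 5Gbps line rate into decimal megabytes per second.
line_rate_bps = 5_000_000_000          # 5Gbps, as printed on the box
bytes_per_second = line_rate_bps / 8   # 8 bits per byte
mb_per_second = bytes_per_second / 1_000_000
print(mb_per_second)                   # 625.0 — "about one music CD a second"
```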

Ascertaining the performance ceiling for USB 3.0 has been a waiting game — first, for SSDs fast enough to potentially saturate the bus (hello, SandForce 2281 controller), then for someone to build a SATA 6Gbps to USB 3.0 adapter.

Many adapter makers claim "up to 5Gbps speeds" courtesy of the USB 3.0 port, but shy away from mentioning that they're still using a SATA 3Gbps connection on the other end, introducing a bottleneck. Today, though, we have the equipment we need, in the form of a Kingston HyperX SSD and Vantec NexStar CB-SATAU3-6, thanks to Kingston and the incredibly helpful and no doubt stunningly attractive people at PC Case Gear.

The only 6Gbps-certified USB 3.0 adapter we're aware of. Thing is, it's likely not needed. (Credit: Vantec)

The results aren't good. After testing a few different chipsets, the highest speed we could attain was 254MBps. Now, to be fair, that's more than 200MBps faster than what USB 2.0 typically affords us — but it still falls a damn sight short of 625MBps. We could almost hear the SandForce drive laughing at us. There's a reason there aren't many SATA 6Gbps to USB 3.0 converters on the market — USB 3.0 barely approaches SATA 3Gbps speeds. Perhaps things will change once Intel gets its controller onto the market, but we're not holding our breath.
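Putting that 254MBps in perspective — note the USB 2.0 figure below is a rough real-world assumption for comparison, not something measured for this article:

```python
# Compare the best measured speed against the theoretical ceiling.
theoretical_mbps = 5_000_000_000 / 8 / 1_000_000   # 625 MBps raw line rate
measured_mbps = 254                                 # best result across the chipsets we tested
usb2_typical_mbps = 30                              # rough real-world USB 2.0 figure (assumption)

print(round(measured_mbps / theoretical_mbps * 100, 1))  # 40.6 — percent of the quoted speed
print(measured_mbps - usb2_typical_mbps)                 # 224 — MBps gained over USB 2.0
```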

Monitor myths

This over-optimism doesn't just plague ports. Displays, whether monitors or TVs, are filled with so many irrelevant numbers it's easy to feel bamboozled.

Take contrast ratios — the ratio between the black level and white level of a TV, essentially telling you how much shading detail can survive in a scene. A great contrast ratio can mean not only more vibrant colours, but also an easier time telling the difference between similar shades. If you see someone claiming a contrast ratio higher than 2000:1 (and anything above 1200:1 should start ringing alarm bells), unless it's OLED, it's complete rubbish.

The spec being quoted here is dynamic contrast ratio (DCR), not static. It's a technology that adjusts the backlight of your TV depending on what's being shown — an attempt to get richer blacks out of darker scenes, where a high black level is most noticeable. Initial attempts were quite bad, unable to react quickly enough to scene-brightness changes, resulting in obvious and disruptive light shifts. The technology has been refined over time, and LED-backlit TVs can now change the lighting in zones rather than across the whole panel, but it's still nowhere near the per-pixel nirvana it needs to be, and can create some interesting artefacts.

This monitor claims a 50,000,000:1 contrast ratio, despite being a TN-based panel. Its typical contrast ratio will most likely be around 1000:1 — unlike OLED, which can switch pixels off entirely and legitimately claim enormous ratios. (Credit: AOC)

Since it is by nature dynamic, the [monitor] won't be able to achieve its quoted 50,000,000:1 ratio all the time. The way the measurements are taken is also questionable, and not standardised across the industry. For the black level, quite often the backlight is simply turned off. That's like saying, "this car is capable of 1200 kilometres per hour ... in a hurricane". You just don't see that sort of real-world performance.
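The backlight trick is easy to demonstrate with the arithmetic itself — contrast ratio is just white luminance divided by black luminance, so drive the black reading towards zero and the ratio explodes. The luminance figures here are illustrative, not measurements:

```python
def contrast_ratio(white_nits: float, black_nits: float) -> float:
    """Contrast ratio is simply peak white luminance over black luminance."""
    return white_nits / black_nits

# A plausible static measurement for a TN panel (illustrative numbers):
print(round(contrast_ratio(300.0, 0.3)))       # 1000 — the honest figure
# The dynamic trick: kill the backlight for the black reading.
print(round(contrast_ratio(300.0, 0.000006)))  # 50000000 — the box figure
```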

I usually just turn DCR off. Along with whatever "insert ridiculous number of Hz" frame interpolation is running. Don't get me wrong, I have no issue with high frame rates — PC gamer right here. But when a film is recorded at 24 frames per second, and suddenly you're jamming in an extra 176 frames every second that never existed in the first place, movement starts looking unnatural.

One of the big ones in gaming monitors is response time — another spec that manufacturers don't measure to a common standard, instead opting for whichever method gives the lowest number. There are generally two values: typical response time, which measures how long a pixel takes to go from black to white and back to black again (sometimes known as rise and fall), and grey-to-grey response time (G2G), which records the time taken to transition between two grey values. The latter is a more likely real-world scenario than a full rise and fall, but it seems to be up to the manufacturer which grey values are used, and it's not always indicative of performance anyway. Maximum PC has a wonderful article debunking the entire display marketing game, courtesy of DisplayMate.

Did I mention that viewing angles can be measured in two different ways as well?

Networking non-truths

You can never, ever trust numbers on network packaging. In the case of wireless routers, vendors are quoting the total backplane capacity of the device. No single connected device will ever see that throughput — the figure describes the bandwidth available to serve multiple clients at once.

The most we've ever seen out of a wireless device to a single client? 166.67Mbps. And that's using iperf, a synthetic benchmark, under incredibly optimal conditions. Real-world results will be lower.
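To see what the gap means in practice, here's a back-of-envelope look at moving a 1GB file. The 450Mbps "claimed" figure is a hypothetical single-band headline number used for illustration, not a quote from any particular box:

```python
# Time to move a 1GB file at a claimed rate versus our best measured rate.
file_bits = 1_000_000_000 * 8   # 1GB file (decimal gigabyte), in bits
claimed_mbps = 450              # hypothetical single-band headline figure (assumption)
measured_mbps = 166.67          # our best single-client iperf result

print(round(file_bits / (claimed_mbps * 1_000_000), 1))   # 17.8 — seconds, on paper
print(round(file_bits / (measured_mbps * 1_000_000), 1))  # 48.0 — seconds, in practice
```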

The WNDR4500, which claims 900Mbps. No single client will ever see it. Not to mention that number is a combination of performance across two separate bands, making it utter fiction. (Credit: Netgear)

Internet indecencies

Internet speeds are hard to predict: there are simply too many factors for ISPs to give users an accurate reading of the speed they'll get. Still, there's been some effort, in the UK at least, to explain to the average citizen why their "up to 24Mbps" ends up considerably less. No such luck yet in Australia, where Whirlpool is your best bet to pose the question, leaving people to analyse your distance from the exchange and SNR like tea leaves and entrails, attempting to predict the future.

Our main beef, though, is with the term "unlimited". Almost every internet service provider in Australia that's advertised truly "unlimited" plans has either gone under, or very quickly adjusted its offering. Anyone left advertising "unlimited" is most likely lying, as a read of the terms and conditions often reveals — usually in the form of a "soft" limit enforced by speed throttling. The exception to the rule? TPG's unlimited plan, which has lasted for some time, although the company still managed to be misleading somewhere along the way.

Hard-drive hoaxes

We're quite sure at some point someone has bought a hard drive, loaded their file manager and then discovered they have significantly less space available than what it says on the box. Nope, it's not faulty, it's just marketing. The advertised size is just the "unformatted capacity".

How is this number remotely useful if it can never be achieved? Sure, it's nice to have a round figure, but I'd much prefer to be sold what I'm getting. I have some sympathy for drive manufacturers — formatted capacity depends entirely on what file system you're using, and in what configuration. But I've yet to be made aware of a file system that claims tens of gigabytes more space than another.

This is compounded by a measurement schism. Once upon a time kilobytes were 1024 bytes, megabytes were 1024 kilobytes and gigabytes were 1024 megabytes. At some point the industry realised using metric terms could be misleading, and promptly rounded those 1024s down to an even 1000. Great from a metric sense, not so wonderful from a binary point of view.

Marketing loves round numbers. Unfortunately, this means you don't get what's written on the box when you buy a hard drive. (Credit: Seagate)

Given the 1024 values were still important, they were given new names — kibibytes, mebibytes and gibibytes. Putting aside how ridiculous they sound, I've yet to hear anyone say them in public — techy types simply default to the 1024 meaning of gigabyte, non-techy types to the 1000.

Let's look at a test case — the very 500GB drive being used while this article is being written. Exact space varies by manufacturer — our test sample holds 500,000,878,592 bytes. In old 1024-speak, that's only 465.7GB, not 500 — and sure enough, Windows reports a formatted size of 465GB (Windows 7 also claims 100MB in this case for its recovery environment and BitLocker tools). No matter which way you slice it, around 34GB of the figure on the box vanishes into the unit schism before you've stored a single file. There have been court cases and even two separate settlements, but drives continue to be advertised in this manner.
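The shortfall on our test drive is pure arithmetic — the same byte count, expressed in the two competing units:

```python
ADVERTISED_BYTES = 500_000_878_592        # what our 500GB test drive actually holds

decimal_gb = ADVERTISED_BYTES / 1000**3   # the marketing unit
binary_gb = ADVERTISED_BYTES / 1024**3    # the unit Windows reports in

print(round(decimal_gb, 2))   # 500.0  — matches the box
print(round(binary_gb, 2))    # 465.66 — matches Explorer
```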

Size versus size on disk: not a lie

It's not all misrepresented, however. Take this old gem:

Yes, the size of the file is different to the size of the file on the disk. Techies understand, but mostly heads just explode. (Screenshot by CBS Interactive)

Size versus size on disk. This is a case of truth in numbers, even if it may cause some confusion. The first number is the actual file size; the size on disk depends on the cluster size of the volume the file lives on.

This is down to something called file system slack. Disks are divided into fixed-size allocation units called clusters — 4KB by default on NTFS, unless you're particularly techy and chose otherwise at format time. A file smaller than 4KB still occupies a full 4KB cluster, and the remaining space in that cluster can't be used by anything else. Now imagine a 4097-byte file: it fills the first cluster, but a whole second cluster must be allocated for that one remaining byte, wasting another 4095 bytes that can never be filled. Multiply that across the hundreds of thousands of files on a typical drive, and the amount of slack space burgeons rather quickly.
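The rounding is simple to sketch — assuming NTFS's default 4KB cluster size, size on disk is just the file size rounded up to whole clusters:

```python
import math

def size_on_disk(file_bytes: int, cluster_bytes: int = 4096) -> int:
    """Round a file's size up to whole clusters (4KB is NTFS's usual default)."""
    return math.ceil(file_bytes / cluster_bytes) * cluster_bytes

print(size_on_disk(1))            # 4096 — a 1-byte file still eats a full cluster
print(size_on_disk(4097))         # 8192 — one byte over, and a whole extra cluster is allocated
print(size_on_disk(4097) - 4097)  # 4095 — bytes of slack that can never be filled
```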

Defragging, despite the folklore, doesn't claw this space back — slack is the price of cluster rounding, not of file layout. What defragging does is reorder file fragments contiguously so they're faster to read; the only real cure for slack is formatting with a smaller cluster size, at some cost to performance.

Outlook: gloomy

Marketing isn't going anywhere soon, and the average consumer's tendency to believe that higher numbers are better seems just as permanent. There are genuinely people who believe a higher megapixel count will mean a better picture every time, and those who think more GHz means better performance regardless of other factors.

As always, the only way around advertising is education — and for those about to shop, I hope I've managed to provide some here.