

Rambus to reassure at Comdex

Rambus faithful will be able to get an eyeful at Comdex next week as the company shows off its memory chips.

People drawn to Rambus computer memory, the heir-apparent memory technology standard, will be able to get an eyeful at Comdex next week.

Rambus, which designs a high-speed memory system that other manufacturers license as the basis of their own products, will show memory chips from several manufacturers and modules from memory giant Kingston, said Subodh Toprani, vice president of Rambus' logic division.

"We're trying to reassure PC [manufacturers] that we're getting to the volumes they want," Toproni said.

Rambus' Direct RDRAM technology enables peak speeds of 1.6GB/sec. That's twice the 800MB/sec peak speed of today's fastest memory technology, synchronous DRAM (SDRAM), Toprani said. In addition, because of the way Rambus memory chips (RDRAMs) are designed, they come closer to delivering that peak performance in practice than SDRAM comes to its own.
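To put those peak figures in perspective, here is a minimal back-of-the-envelope sketch in Python. The 1.6GB/sec and 800MB/sec rates are the ones quoted above; the 64MB payload is a hypothetical example chosen only for illustration, and real-world throughput falls short of either peak.

```python
# Back-of-the-envelope comparison of the peak rates cited above. The 1.6GB/sec
# and 800MB/sec figures come from the article; the 64MB payload is hypothetical.

PEAK_RDRAM_BYTES_PER_SEC = 1.6e9   # Direct RDRAM peak
PEAK_SDRAM_BYTES_PER_SEC = 800e6   # fastest SDRAM peak

payload_bytes = 64e6  # hypothetical 64MB transfer, for illustration only

for name, rate in (("Direct RDRAM", PEAK_RDRAM_BYTES_PER_SEC),
                   ("SDRAM", PEAK_SDRAM_BYTES_PER_SEC)):
    print(f"{name}: {payload_bytes / rate * 1000:.0f} ms to move 64MB at peak rate")
```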

The company has licensed its designs to the top 14 or 15 memory manufacturers, and memory chips from Toshiba, Samsung, LG Semiconductor, Fujitsu, and NEC will be on display. Kingston, which assembles those RDRAM chips into modules called RIMMs, also will have products on display, said Kingston's Richard Kanadjian.

Also at Rambus' Comdex area will be Hewlett-Packard, which manufactures equipment to test RDRAM, and Molex, which manufactures the computer slots that the memory modules plug into.

Computers using RDRAM are expected in the first half of 1999, Toprani said.

RDRAM, available only in limited quantities for engineers, currently costs twice as much as SDRAM. Once it begins shipping in quantity, Rambus hopes it will cost only 10 percent more. Eventually, when SDRAM starts disappearing from circulation in 2000 or 2001, SDRAM's price will rise to meet RDRAM's, Toprani predicted.

Intel invested $500 million in DRAM manufacturer and Rambus licensee Micron Technology to support Micron's RDRAM manufacturing efforts.

Even though RDRAM offers more bandwidth than SDRAM, it still suffers from a key problem common to all DRAM designs: latency.

Latency is the amount of time a processor spends waiting after requesting that information be retrieved from memory. Today's processors, running at about 500 MHz, can execute an instruction every 2 billionths of a second, but the DRAM itself forces a wait of at least 40 billionths of a second, Toprani said. That means the processor is forced to do the electronic equivalent of twiddling its thumbs during a wait in which it could have processed 20 or more instructions.
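As a quick check on that figure, here is a minimal sketch of the arithmetic in Python, assuming (as the article implies) one instruction per clock cycle at 500 MHz:

```python
# Rough arithmetic behind the "20 or more instructions" figure above, assuming
# one instruction per clock cycle at 500 MHz.

clock_hz = 500e6              # ~500 MHz processor
cycle_time_s = 1 / clock_hz   # 2 billionths of a second per instruction
dram_latency_s = 40e-9        # at least 40 billionths of a second per DRAM access

wasted_instructions = dram_latency_s / cycle_time_s
print(f"Instructions lost per memory stall: about {wasted_instructions:.0f}")
# prints: about 20
```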

On top of that is the wait imposed by the memory system, such as Direct RDRAM, that transfers the information. Early Direct RDRAM technology had high latency, but the modern design is about 10 billionths of a second faster than any other technology, shipping or planned, Toprani said.

The current solution to latency is to use a special high-speed memory called a cache that can respond faster to the processor's demands. Cache sizes have been getting larger and cache speeds have been getting faster to keep ever-swifter processors from being dragged down too far by sluggish memory.
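A rough way to see the benefit of caching is the standard average-access-time calculation sketched below. The 40-nanosecond DRAM wait is the figure Toprani cites; the 5-nanosecond cache latency and the hit rates are hypothetical assumptions for illustration.

```python
# Minimal sketch of why a cache helps: the average access time falls as more
# requests are served from the faster cache. The 40ns DRAM wait is the figure
# cited above; the 5ns cache latency and the hit rates are hypothetical.

cache_latency_ns = 5    # assumed high-speed cache response time
dram_latency_ns = 40    # DRAM wait cited by Toprani

for hit_rate in (0.80, 0.90, 0.95):
    avg_ns = hit_rate * cache_latency_ns + (1 - hit_rate) * dram_latency_ns
    print(f"hit rate {hit_rate:.0%}: average access time {avg_ns:.1f} ns")
```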