Intel's next-gen memory closer to reality

Intel and Numonyx announced practical advancements they believe will make phase-change memory meet its performance and capacity promise.

Researchers are two steps closer to creating a mass-market version of technology called phase-change memory that could change how computers of the future are put together.

Intel and Numonyx, the chipmaker's joint venture with STMicroelectronics that's focused on flash memory, announced Wednesday they've built a new type of phase-change memory chip they hope will help fulfill the technology's promise of small size and large capacity.

Its 64-megabit capacity isn't momentous on its own--Numonyx announced a 128Mb device in 2006, and Samsung said in September it's producing a 512Mb chip. What is significant, though, is a pair of major advances in making the decades-old idea practical.

First, the researchers built a grid of wires into the chip so a computer can easily control the writing of a 1 or 0 in each of the 64 million memory cells. Second, they announced their manufacturing process lets them stack several layers atop each other so memory can be packed more densely in a given volume.
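The row-and-column addressing idea can be sketched in a few lines of code. This is purely an illustration of cross-point addressing, not Intel's actual circuit design; the square 8,192-by-8,192 layout is an assumption chosen so the grid holds 64 million cells.

```python
# Illustrative sketch (not Intel's actual design): selecting one cell in a
# 64-megabit array via row and column wires, as in a cross-point grid.
ROWS = 8192  # 2^13 word lines (assumed square layout)
COLS = 8192  # 2^13 bit lines; 8192 * 8192 = 64 Mi cells

def select_lines(address):
    """Map a flat cell address to the (row, column) wires to activate."""
    assert 0 <= address < ROWS * COLS
    return address // COLS, address % COLS

row, col = select_lines(12_345_678)
# Activating exactly one row wire and one column wire singles out the one
# cell at their intersection for reading or writing a 1 or 0.
```

The point of the grid is that 16,384 wires suffice to reach 64 million individual cells.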

This image shows phase-change memory built atop a conventional CMOS microchip. Memory cells can be controlled using rows and columns of wires that lead through the chip. Intel

Storing numbers in a computer is hardly new, so why could phase-change memory, which records ones and zeros by changing the molecular state of a particular type of glass, be a big deal?

In short, it could combine conventional computer memory's high speed with flash memory's low cost, low power demands, and high capacity. Having lots of fast memory on hand could simplify computer hardware and software that today must reckon with a hierarchy of storage technologies that trade off performance for capacity.

Operating systems today must constantly work to keep important information in memory while relegating the rest to "virtual memory" stored on hard drives--or, these days, to an intermediate layer in the hierarchy, solid-state disks made of flash memory. Deciding what goes where is complicated, and priorities change from one moment to the next.

"At Intel, we see this as an important milestone in enabling a future class of memory where you can combine attributes of memory semantics and storage semantics, potentially collapsing the technologies into one memory type," said Al Fazio, Intel's director of memory technology development, discussing the technology Wednesday. "The research is very promising in delivering that."

Phase-change memory also could get around the difficulties of shrinking current memory technologies to ever-smaller sizes. And it could lower power consumption, reducing waste heat and extending battery life.

A long history
But be sure to temper that promise with a long history.

Phase-change memory is a decades-old idea. Intel co-founder Gordon Moore, of Moore's Law fame, wrote a paper on the idea in 1970. It's made some headway since then: phase-change technology is used to store data on rewritable DVDs and CDs.

Intel and Numonyx aren't alone in trying to commercialize the technology. Start-up Ovonyx is also working on it, as are IBM, Samsung, and Philips Electronics. But as the years of labor show, it's been difficult bringing phase-change memory to market.

Fazio and Greg Atwood, senior technology fellow at Numonyx, took pains to say their companies' work on the technology began in earnest at the beginning of the decade.

"Significant new memory technologies are really quite rare," Atwood said. "There are many hurdles in introduction of new memory. Ten years is not an unreasonable time frame."

Arguably, Atwood said, only three forms of memory have been developed since the 1960s: dynamic random access memory (DRAM), the mainstay of computer memory; the more expensive static random access memory (SRAM) that's often integrated on processors; and electrically erasable programmable read-only memory (EEPROM), of which flash is one variety.

Adding phase-change memory, sometimes called PCM, PRAM, or ovonics, therefore would be quite a departure in the history of computing.

How's it work?
Phase-change memory stores 1s and 0s in a tiny patch of glass material that can be switched from one state to another--specifically, so its molecules are arranged either in a crystalline pattern or an amorphous jumble. It's conceptually similar to water being either liquid or ice.

In addition, Intel announced "multi-level" phase-change memory in 2008 that adds two intermediate states, a move that means a single cell can hold two bits of data instead of one--the binary values 00, 01, 10, or 11. That effectively doubled the 128Mbit capacity of that prototype chip to 256Mbit, Intel said.
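The multi-level idea boils down to simple arithmetic: a cell with four distinguishable states holds two bits instead of one. The sketch below illustrates that relationship; the state labels are invented for illustration and aren't real device parameters.

```python
import math

# Sketch of the multi-level-cell idea: four distinguishable resistance
# states let one cell encode two bits. State labels are illustrative only.
STATES = {
    0b00: "amorphous (highest resistance)",
    0b01: "mostly amorphous",
    0b10: "mostly crystalline",
    0b11: "crystalline (lowest resistance)",
}

def bits_per_cell(num_states):
    """A cell with n distinguishable states stores log2(n) bits."""
    return int(math.log2(num_states))

# Two states hold 1 bit per cell; four states hold 2 bits per cell,
# which is how the same 128 million cells yield 256 megabits of data.
```

The cost of the trick is tighter engineering margins: the read circuitry must reliably tell four resistance levels apart rather than two.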

The stacking technique could increase memory density further, though there are limits, Atwood said.

"In principle, we can stack as high as we choose. In practice, every layer of memory has an additional cost," he said, requiring more processing and increasing the risks that defects will lower the yield of useful chips produced from a production batch. "There's no reason why we couldn't stack four layers for example, or potentially more."
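The trade-off Atwood describes can be sketched as compounding yield: if each added layer independently survives fabrication with some probability, the fraction of fully working chips falls geometrically with layer count. The 90 percent per-layer figure below is an assumed illustration, not a real process number.

```python
# Sketch of why stacking raises cost: if each memory layer survives
# fabrication with probability `layer_yield` (0.90 here is assumed, not a
# real process figure), defect-free chips grow rarer with every layer.
def chip_yield(layer_yield, layers):
    return layer_yield ** layers

for n in (1, 2, 4):
    print(f"{n} layer(s): {chip_yield(0.90, n):.1%} of chips defect-free")
# 1 layer: 90.0%, 2 layers: 81.0%, 4 layers: 65.6%
```

That geometric falloff is why "we can stack as high as we choose" in principle but not in practice.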

Though the researchers are excited about stacking, the 64Mbit prototype uses only a single layer of memory cells. "The first layer is the hardest layer," Fazio said. And today's flash memory is only one layer thick.

Like flash but unlike conventional computer memory, phase-change memory is nonvolatile: once data is written, it stays put even if the power is switched off. That doesn't just preserve data when a device is off; it also means that, unlike DRAM, no power is needed to continually refresh the data while the device is on.

Ever smaller
Intel was cagey about just how closely packed its latest memory cells are. But the company expects to achieve the same density of memory cells as flash memory--then go beyond it eventually.

Flash memory today requires relatively high voltages--about 20 volts--to store its data, Fazio said. But high voltage and small distances are hard to put together, a fact that imposes limits on flash memory.

Today's flash memory features measure about 30 nanometers, or billionths of a meter. Because of the voltage issue and the fact that the difference between a 1 and a 0 is just "a handful of electrons," it's getting harder to shrink flash memory technology.

Phase-change memory, though, can get much smaller. "Research in the industry has shown that to be stable down to 5 nanometers and lower," Fazio said.

The new wiring grid helps keep up with the shrinking trend by providing a way to get at the data even as cells get smaller.

All this is important for the computer industry, which has struggled with the challenges of data storage.

Once upon a time, memory and processors worked at closer speeds, but they've diverged over the years, which means processors often must idle while the memory system fetches data the CPU has requested. System architects have responded by building a hierarchy of storage systems--different levels of SRAM cache memory on the chip or right next to it, DRAM that's one level removed, and hard drives a step beyond that.
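The hierarchy described above is a ladder of speed-for-capacity trades. The sketch below uses rough, order-of-magnitude access times (assumed illustrative values, not measurements) to show the pattern phase-change memory aims to flatten.

```python
# Rough order-of-magnitude access times (assumed illustrative values, not
# measurements) for the storage hierarchy the article describes. Each step
# down trades much slower access for much more capacity.
hierarchy = [
    ("SRAM cache", 1e-9, "megabytes"),
    ("DRAM",       1e-7, "gigabytes"),
    ("Flash SSD",  1e-4, "hundreds of gigabytes"),
    ("Hard drive", 1e-2, "terabytes"),
]
for name, seconds, capacity in hierarchy:
    print(f"{name:<10} ~{seconds:.0e} s per access, {capacity}")
```

A single memory type combining DRAM-like speed with flash-like capacity would remove rungs from that ladder.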

Today's flash memory, which is faster than a hard drive and cheaper than conventional memory, is changing that arrangement. It's already revolutionized the portable device market with enough capacity for lots of songs, videos, and photos. Now it's begun arriving in high-end laptops with solid-state drives that offer longer battery life, higher performance, and greater ruggedness. And servers are on the cusp of major changes with the incorporation of flash memory.

But flash memory is sluggish compared to conventional memory. If phase-change memory meets its high-performance promise in coming years, expect more profound changes for computing systems.

About the author

Stephen Shankland has been a reporter at CNET since 1998 and covers browsers, Web development, digital photography and new technology. In the past he has been CNET's beat reporter for Google, Yahoo, Linux, open-source software, servers and supercomputers. He has a soft spot in his heart for standards groups and I/O interfaces.

 
