A new era of molecular circuit chips

Computer scientist Stan Williams of HP Labs is spearheading a project to find new ways to push silicon-based memory and processor technology far beyond its current limits.

When it comes to predicting the future of technology, Stan Williams shows the traditional caution of a veteran computer scientist. But when it comes to the role silicon plays in the composition of computer circuitry, he is convinced that a major change is in the offing.

That change is probably not around the corner. But within the next decade--and definitely within the next 15 years--it's all but certain that new substances will supplant silicon as the material of choice in computer chips, says Williams, who directs quantum science research at HP Labs.

Williams is one of the architects of the molecular grid circuit. This new approach to building devices such as memory chips uses readily available materials to construct circuits that can be printed right on top of a silicon base.

CNET recently spoke with Williams about how this breakthrough in molecular grid technology will affect processor performance and about the challenges involved in turning his vision into practice.

Q: How will your invention surpass silicon?
A: We believe that we will have a solution that will be at least near ready at the time when silicon needs help. Our view is that the first devices will not be only molecular electronics--they will combine both (molecular and silicon) components together in the same circuit. The silicon-integrated circuit would be a substrate, and we'd print our circuits on top of the silicon.

What would the function of the silicon be?
Silicon would essentially provide the electrical power and the input-output contacts for the molecular memory. So silicon becomes the equivalent of today's printed circuit boards. As time goes on, molecular electronics would take up more of the duties.

Whatever anybody comes up with has to be able to reinforce and build on top of that structure. It was Newton who said he had seen farther because he was standing on the shoulders of giants. If we're going to go farther, it will be because we're standing on silicon.

How much will these molecular-circuit chips cost?
It's very difficult to judge costs. At this stage of the research, we've never tried to come up with dollar estimates. But we believe that at the very worst, we ought to be able to keep costs (the same) or get onto a new curve, where you've got the silicon base and the added (cost) is significantly less.

Taking this all into account, what will my PC look like in 10 years?
This technology will be really ubiquitous about 15 years from now. What I anticipate is that our digital assistants are going to be smart enough that you do away with keyboards almost completely and everything is dealt with by voice. I'm not talking about current speech technology--we have to increase the capabilities of our present-day machines by a factor of at least 10,000 to get into this range, where all of our appliances become easier to work with and not everyone has to be a computer geek to work with his microwave oven.

That will take 10 to 20 years. By that time, I believe we will have conversation-active machines, where you'll just tell them what you want and they'll do it or engage you in a conversation to learn what you really want. And then go out and do it.

It sounds like science fiction.
(It) is within the realm of possibility--and within the realm of possibility for a handheld device. The only way we're going to get there, I believe, is through low-power, high-density stuff. That is made possible by molecular (circuit) machines.

How will this compete with prevailing microprocessors of the time?
The architecture we have uses a relatively low clock speed, but a very large amount of parallelism. We might have a kilohertz clock--but if we're doing a billion things at a time, we're actually doing a teraoperation (one trillion operations) per second. It's an entirely different way of doing computing. Instead of having a single processor screaming away, our approach is to have many operations going on in parallel to save on battery power.
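
That claim is easy to check with back-of-the-envelope arithmetic. The sketch below simply multiplies out the example figures Williams gives--a 1-kilohertz clock driving a billion operations in parallel on each tick:

```python
# Throughput of a slow-clock, massively parallel design, using the example
# figures from the interview (illustrative arithmetic only).

clock_hz = 1_000                        # a 1 kHz clock
parallel_ops_per_tick = 1_000_000_000   # a billion operations on every tick

ops_per_second = clock_hz * parallel_ops_per_tick
print(f"{ops_per_second:.1e} ops/sec")  # 1.0e+12 -- one teraoperation per second
```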

How exactly do you make a molecular grid circuit?
We put a little plug of material between two sets of wires; we put voltage across those wires, and the molecules change shape. When they change shape, their resistance changes. We got lucky and found some molecules where the resistance changes a lot--by a factor of 10,000.
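
As a rough illustration of the switching behavior he describes--not HP's actual device physics--a single crossbar junction can be modeled as a two-state resistor whose value changes by the quoted factor of 10,000 when a pulse flips the molecules. The specific "on" resistance below is an assumed placeholder:

```python
# Toy model of one molecular crossbar junction: a voltage pulse toggles the
# molecules between two shapes whose resistances differ by a factor of 10,000.
# The 1 kilohm "on" value is an assumed placeholder, not a measured figure.

R_ON_OHMS = 1_000          # assumed low-resistance ("on") state
RESISTANCE_RATIO = 10_000  # factor quoted in the interview

class MolecularJunction:
    def __init__(self):
        self.on = False    # start in the high-resistance ("off") state

    def apply_pulse(self):
        """A voltage pulse flips the molecular shape, toggling the state."""
        self.on = not self.on

    @property
    def resistance_ohms(self):
        return R_ON_OHMS if self.on else R_ON_OHMS * RESISTANCE_RATIO

junction = MolecularJunction()
print(junction.resistance_ohms)   # 10,000,000 ohms (off)
junction.apply_pulse()
print(junction.resistance_ohms)   # 1,000 ohms (on)
```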

How far can you scale that?
We're hoping to scale down and use a small number of molecules as a bit in the memory. The demo that we built has about 1,000 molecules between each set of wires. We're hoping we can get down to one molecule. That gives us hope that we could scale our existing memory down by a factor of 1,000--a density 1,000 times what we've already demonstrated.
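
In numbers, that scaling argument looks like the sketch below; the 6.4 gigabits per square centimeter is the demonstrated density Williams quotes later in the interview, used here only to show what a 1,000-fold improvement would imply:

```python
# Illustrative arithmetic for the hoped-for scaling: density grows as the
# number of molecules per bit shrinks. The demo density figure is the one
# quoted later in the interview.

molecules_per_bit_demo = 1_000   # today's demonstration
molecules_per_bit_goal = 1       # the hoped-for limit
scaling_factor = molecules_per_bit_demo // molecules_per_bit_goal   # 1,000

demo_density_gbit_per_cm2 = 6.4
print(demo_density_gbit_per_cm2 * scaling_factor)   # 6,400 Gbit/cm^2, i.e. ~6.4 Tbit/cm^2
```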

Did you set out to remake chip-manufacturing from the ground up?
We thought that was going to be necessary for several reasons, not the least of which is the economics. Manufacturing facilities are getting extremely expensive. We did a study, and by looking at the cost of tools in fabs (factories where chips are made), we found that most of the cost was related to mechanical precision--to the machines that are required to align manufacturing processes closely.

The way a chip is made nowadays requires many layers of photolithography that are placed on top of each other. The precision is in getting the layers to be correctly aligned. That's the underlying issue that's driving the expense of the fabs.

How do you cap expenses?
By taking advantage of self-assembly and self-alignment, we remove a lot of the requirements for mechanical precision. The looser the tolerance you have, the less expensive your manufacturing process should be.

Fabs are $2 billion to $3 billion right now. They are looking to cost in the $6 billion to $12 billion range in just a couple of years. It looks like we're reaching an inflection point--if we haven't already--where the fab cost has reached the point where it negates the economy of scale.

So you're looking at creating chips that are simpler to manufacture?
Ideally, that's what we'd like to do. They're just getting too complicated. We haven't gotten that far yet. But we're doing designs that don't have a lot of mechanical precision required.

In the past, many pretenders to the throne of silicon have arisen, and silicon has just flattened them all, because of this incredible Moore's Law rule of costs coming down.

How is your approach better?
I think there's a better possibility for a new technology to be able to come in, because of the fact that silicon will slow down. Once that slows down, it gives the competition a chance. You can view the research we're doing as an insurance policy. If silicon does start to slip and reaches a limit (physical or economic), then we have something we believe can scale for many more decades.

What will these circuits be made of?
We've been looking at a wide range of materials. The one we've discussed openly has been a molecule called a "rotaxane." That's a fairly exotic molecule made by the research group of Prof. J. Fraser Stoddart in UCLA's (University of California at Los Angeles) chemistry department. We have been playing with quite a few rotaxane molecules. We're still very early in the learning curve on these. They contain carbon, oxygen, hydrogen, nitrogen and sulfur. They're made of the same stuff we are--the same elements that make up the proteins in your body--but they are arranged differently.

We use normal metals like titanium and platinum as the wires. We chose those metals because they're also compatible with silicon. You'll find those metals in almost every IC (integrated circuit) that's made today. It sounds exotic, but it isn't really. It's standard stuff.

How will this work?
This is not a transistor--it's a switch. It's what we'd call a two-terminal device. There are two wires that cross each other at right angles with molecules in-between. We can change the resistance of the molecule connecting the two wires, to essentially create an on-and-off switch. We can flip the switch with a pulse of voltage.

How do you create a processor?
We can create a pattern of on-and-off switches. That pattern becomes what's known as a "programmable logic array." By creating a particular type of pattern of on-and-off switches, we can take input voltage and--by routing it through that pattern--we can perform all of the functions a computer needs to work.
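
The sketch below illustrates the general idea of a programmable logic array: two grids of on/off switch settings route inputs through product terms to outputs. It is a minimal generic model, not HP's circuit, and the half-adder programming is an assumed example:

```python
# Minimal sketch of a programmable logic array (PLA): the AND plane marks
# which input lines feed each product term, and the OR plane marks which
# product terms feed each output. Generic model, not HP's actual design.

def pla(inputs, and_plane, or_plane):
    # A product term is true when every input line its row selects is true.
    terms = [all(x for x, sel in zip(inputs, row) if sel) for row in and_plane]
    # An output is true when any product term its row selects is true.
    return [any(t for t, sel in zip(terms, row) if sel) for row in or_plane]

# Program the array as a half adder. The input lines carry a, b and their
# complements, a common PLA convention.
a, b = 1, 0
inputs = [a, b, 1 - a, 1 - b]          # input lines: a, b, NOT a, NOT b
and_plane = [[1, 0, 0, 1],             # term 0: a AND (NOT b)
             [0, 1, 1, 0],             # term 1: (NOT a) AND b
             [1, 1, 0, 0]]             # term 2: a AND b
or_plane = [[1, 1, 0],                 # sum   = term 0 OR term 1  (a XOR b)
            [0, 0, 1]]                 # carry = term 2            (a AND b)

print(pla(inputs, and_plane, or_plane))  # [True, False] -> sum = 1, carry = 0
```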

It's a different way of reaching the same end goal.
The same structure could be (used for) memory, and then we could turn around and use that same thing as a piece of logic. It has a very nice property that people call reconfigurability. You can trade off between memory and processing.

The memory you have shown already is fairly potent, isn't it?
The equivalent density of what we built is 6.4 gigabits per square centimeter, which is about 10 times the current density of DRAM. (DRAM, or dynamic RAM, is the standard memory used inside personal computers.)
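
Working backward from those two figures (illustrative arithmetic only), the comparison implies a DRAM density of roughly 0.64 gigabits per square centimeter at the time:

```python
# Implied DRAM density, working backward from the quoted comparison
# (illustrative arithmetic only).

molecular_demo_gbit_per_cm2 = 6.4
dram_gbit_per_cm2 = molecular_demo_gbit_per_cm2 / 10   # "about 10 times ... DRAM"
print(dram_gbit_per_cm2)                                # ~0.64 Gbit/cm^2
```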

How long until you bring this to market?
We think we can get there in a few years. If everything goes absolutely perfectly, we could have something ready in five years. But that's the absolute fastest time anything could come out. I think the more likely time frame is seven years, and my gut-level feeling is it's almost a dead certainty in 10 years.

Our HP Teramac was a home-built supercomputer...that had a clock speed of only 1MHz, yet it could outperform 100 workstations of 1995 vintage. We're looking to reproduce that type of system with these molecular components.