Intel: Software needs to heed Moore's Law

If software makers want better performance, they'll have to make some big changes in how they write applications.

Ina Fried, Staff Writer, CNET News
SAN FRANCISCO--After years of delivering faster and faster chips that can easily boost the performance of most desktop software, Intel says the free ride is over.

Already, chipmakers like Intel and Advanced Micro Devices are delivering processors that have multiple brains, or cores, rather than single brains that run ever faster. The challenge is that most of today's software isn't built to handle that kind of advance.
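
To see what that means in practice, consider a minimal sketch (a hypothetical illustration in Python, not Intel's code). The serial version below runs on a single core no matter how many the chip provides; only by explicitly restructuring the work into independent pieces can a program put the extra cores to use.

    # Hypothetical sketch: the same CPU-bound workload written serially
    # (it uses one core regardless of how many exist) and in parallel
    # (the work must be explicitly divided to use the extra cores).
    from multiprocessing import Pool

    def heavy_task(n):
        # Stand-in for real work; any CPU-bound function would do.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        inputs = [2_000_000] * 8

        # Serial: faster clocks once sped this up for free; more cores do not.
        serial_results = [heavy_task(n) for n in inputs]

        # Parallel: each independent piece can land on a different core.
        with Pool() as pool:
            parallel_results = pool.map(heavy_task, inputs)

        assert serial_results == parallel_results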

"The software has to also start following Moore's law," Intel fellow Shekhar Borkar said, referring to the notion that chips offer roughly double the performance every 18 months to two years. "Software has to double the amount of parallelism that it can support every two years."

But it's a big challenge for the industry. Things are better on the server side, where machines already juggle many simultaneous workloads. Desktop applications can borrow some of the techniques that supercomputers and servers use, but another principle, Amdahl's Law, holds that a program's overall speedup is capped by whatever portion of its work is inherently serial, no matter how many cores are thrown at the rest.
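
Amdahl's Law itself fits in a few lines. In this sketch (an illustration, not from the article), even a program whose work is 90 percent parallelizable tops out at a tenfold speedup, however many cores are added:

    def amdahl_speedup(parallel_fraction, cores):
        # Amdahl's Law: speedup = 1 / ((1 - p) + p / n), where p is the
        # fraction of the work that can run in parallel and n is the core count.
        serial_fraction = 1.0 - parallel_fraction
        return 1.0 / (serial_fraction + parallel_fraction / cores)

    # A program that is 90% parallel never exceeds a 10x speedup:
    for cores in (2, 4, 16, 1024):
        print(cores, round(amdahl_speedup(0.9, cores), 2))
    # Prints: 2 1.82, 4 3.08, 16 6.4, 1024 9.91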

Speaking to a small group of reporters on Friday, Borkar said that there are other options. Applications can handle multiple distinct tasks, and systems can run multiple applications. Programs and systems can also speculate on what a user might want next and spend spare processor cycles computing those results in advance. But what won't work is for the industry to just keep going with business as usual.
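
The first of those options, running distinct tasks side by side, is straightforward to sketch (a hypothetical Python example with invented task names, not code from Intel or Microsoft). No single task is parallelized, yet a multicore chip can make progress on all of them at once:

    # Invented stand-ins for independent application tasks.
    from concurrent.futures import ProcessPoolExecutor

    def index_mail():
        return "mail indexed"

    def scan_files():
        return "files scanned"

    def render_thumbnails():
        return "thumbnails rendered"

    if __name__ == "__main__":
        # Each distinct task runs in its own process, so spare cores get
        # used even though no individual task was rewritten for parallelism.
        with ProcessPoolExecutor() as pool:
            futures = [pool.submit(task) for task in
                       (index_mail, scan_files, render_thumbnails)]
            for future in futures:
                print(future.result())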

Microsoft has recently been sounding a similar warning. At last week's Windows Hardware Engineering Conference in Los Angeles, Chief Research and Strategy Officer Craig Mundie tried to spur the industry to start addressing the issue.

"We do now face the challenge of figuring out how to move, I'll say, the whole programming ecosystem of personal computing up to a new level where they can reliably construct large-scale applications that are distributed, highly concurrent, and able to utilize all this computing power," Mundie said in an interview there. "That is probably the single most disruptive thing that we will have done in the last 20 or 30 years."

Earlier this week, Microsoft's Ty Carlson said that the next version of Windows will have to be "fundamentally different" to handle the number of processing cores that will become standard on PCs. Vista, he said, is designed to handle multiple threads, but not the 16 or more that chips will soon offer. And the applications world is even further behind.

"In 10 to 15 years' time we're going to have incredible computing power," Carlson said. "The challenge will be bringing that ecosystem up that knows how to write programs."

But Intel's Borkar said that Microsoft and other large software makers have known this shift is coming and have not moved fast enough.

"They talk; they talk a lot, but they are not doing much about it," he said in an interview following his discussion. "It's a big company (Microsoft) and so there is inertia."

He said that companies need to adjust quickly to the fact that they are not going to get the same kind of performance improvements they are used to unless they retool the way they do things.

"This is a physical limit," he said, referring to the fact that core chip speed is not increasing.

Despite the concern, Borkar said he is confident that the industry can rise to the challenge. Competition, for one, will spur innovation.

"For every software (company) that doesn't buy this, there is another that will look at it as an opportunity," Borkar said.

He pointed to some areas where software has seen progress, such as gaming. He also identified other areas that might be fruitful. In particular, specific kinds of tasks could get their own optimized languages; networking tasks, for example, could be written in a language tuned for networking code.

Intel has also been releasing more of its own software tools aimed at harnessing multicore performance. Another of Intel's efforts is to work with universities to change the way programming is taught to focus more on parallelism, so that the next generation of developers will have such techniques at the forefront of their minds.

"You start with the universities," Borkar said. "Us old dogs, you cannot teach us new tricks."