
IBM's big thinker

Executive Irving Wladawsky-Berger helped steer Big Blue to the Internet, Linux and open-source computing. His newest mission: grid computing.

Charles Cooper Former Executive Editor / News
Charles Cooper was an executive editor at CNET News. He has covered technology and business for more than 25 years, working at CBSNews.com, the Associated Press, Computer & Software News, Computer Shopper, PC Week, and ZDNet.
If he could realize his fantasy, Irving Wladawsky-Berger would be playing professional baseball, taking the field for his beloved New York Mets.

But even if the Cuban-born computer scientist won't be living out his Walter Mitty dreams on the baseball diamond anytime soon, he has nonetheless found fortune and fame during the course of an increasingly high-profile 31-year career at IBM.

Since the mid-1990s, Chairman Lou Gerstner has called upon Wladawsky-Berger to spearhead several projects that ultimately proved critical to Big Blue's revival by the end of the decade. Wladawsky-Berger helped formulate IBM's Internet and network computing plans, as well as its Linux and open-source software strategy. He was in charge of supercomputing and parallel computing and also ran IBM's Unix-based RS/6000 division.

More recently, Wladawsky-Berger--now vice president of technology and strategy in the IBM Server Group--has been playing the point for IBM's grid computing initiative, an ambitious program to commercialize a distributed-computing concept that, until recently, has remained largely theoretical and confined to academia.

On the eve of the just-concluded IBM technical developers conference, Wladawsky-Berger sat for an evening of table talk about grid computing, the open-source phenomenon and IBM's bear hug of seemingly everything Linux.

Q: For the longest while, IBM seemed to be on the ropes. You were late to market with cutting-edge products, and the company really seemed adrift. As you look at the company today, what's the biggest difference between the way IBM goes about its business now and what prevailed back then?
A: Until the Gerstner era, people on the business side were very conservative. So even though there was all this knowledge of what had to be done, there was all this conservatism, and things were not getting out quickly. People were being especially careful not to have things impact the existing businesses.

So was it cultural as much as anything else?
Yes, and I think one of the main things that's changed at IBM is the culture that now says, "Listen, the marketplace is the marketplace. If the marketplace is going to change, then the marketplace is going to change. And we have to listen to it and then we have to be out there."

IBM has a really big technical research culture and an amazingly strong technical set of people. The research labs, for example, are incredible. So as we were getting in trouble with the mainframes and everything else, we also had the solutions, like the CMOS microprocessors or parallel supercomputers we were working on in the laboratories. At the very least, we had the base of work ready; that is, we were always trying to understand what was going to happen and be way ahead of it. I think most companies don't have that.

How did you sell Lou Gerstner and the board on this whole idea of open source and Linux? In retrospect, wasn't that pretty radical stuff for a company like IBM?
Not really. At some level, open source is part of the research culture here. Remember that in the research culture, you have a pretty good idea that there are parts of your work that should be published in the community because you don't just work for your company--you are working for peer review. If you want to attract the best and brightest, then you let them publish papers. And it is just part of the culture that you always try to be a very good citizen for the good of the community unless it's absolutely necessary to protect the business.

To some extent, open source is just part of that world where you are a professional in the community, helping with Apache, helping with Linux.

But still, you're walking into a meeting and saying, "Folks, we want to embrace this operating system, which nobody owns." That's not the way IBM has traditionally done business. So what was the larger context in which management decided it made strategic sense to do that?
The first reason, as I said, was the research culture. By the way, Linus Torvalds wrote this wonderful article talking about how open source represented an evolution of the research culture. It's publishing papers and publishing results, and not just thinking of yourself in a profit-centric way, but as part of the community in the larger scientific field.

When IBM embraced the Internet in 1995, one of the major things we absolutely became convinced of was that if you embrace the Internet, then you had to embrace the culture of standards. Without standards, you couldn't have an Internet, you couldn't integrate all the pieces, and you couldn't make things work. And that has permeated through the organization so that all of us now think of the Internet as a driving force in the industry, and right along with it is the culture of standards.

Linux is part of that. By embracing Linux and open source, you're advancing standards. There is a whole layer in the system that, by having it based on standards and open source, facilitates integration.

But truth be told, it was also easier for IBM to do that since unlike Sun or Microsoft, you're not wedded to a proprietary OS.
Well, we've wedded ourselves to the integration of the solution, the notion being that the Internet and e-business solutions are more important than any particular component. And as a result, we've changed all our business models so that the integration of the pieces has become more important than any one piece.

How did that factor into your embrace of e-business?
Again, I think it was because we really became convinced the integration of the pieces was crucially important. If you look at the way our whole strategy played out, you'll see that the key was integrating all the pieces--and the fact that our middleware runs on everybody's platform. We really made the jump, and I honestly don't think Microsoft or Sun has made that jump. Compaq and Hewlett-Packard sort of went that way, but because they don't have a software services business, it's been hard for them to execute.

Actually, services was a bright spot in Compaq's second quarter. But leaving that aside, what about the adoption rate of Linux? Anything different from six to 12 months ago when there were lots of questions about scalability?
We look upon this as business as usual. We knew all along that scaling Linux would take time, just because scaling any operating system takes time. It was no different for Solaris, AIX, Windows or Linux, but more and more businesses are adopting Linux. It's happening; clusters are happening.

Still a relatively small fraction of the overall pie?
Oh yeah, it's small, but it's very prestigious--and growing. In fact, I think IDC puts it as the fastest-growing operating system.

Let me ask about Web services, an area that's received increasing attention the last half-year. Sometime soon, .Net will become more than a PowerPoint display, and Microsoft will get it out into the market. What's IBM's Web services strategy, and how will Websphere figure in?
The centerpiece is definitely Websphere, but DB2 is very important also. Web services as a whole are embedded in everything we're doing in middleware. But at the same time, you know, we work closely with Microsoft in defining the Web services standards. All of our Web services middleware will interoperate with Microsoft's.

And incorporate SOAP and XML as standards?
Yes, it's an area where the whole industry is in agreement.
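That industry-wide agreement is easy to see in practice: a SOAP call is just a plain-text XML envelope that any platform can produce or parse, which is what makes it vendor-neutral. Here is a minimal sketch in Python; the service namespace and method name are made up for illustration.

```python
# A minimal sketch of a SOAP 1.1 request envelope. Because it is plain XML,
# any platform -- Websphere, .Net or anything else -- can produce or consume it.
# The service namespace and method name below are hypothetical examples.

import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def soap_request(method, params, ns="urn:example-service"):
    """Build a SOAP 1.1 request envelope as an XML string."""
    envelope = ET.Element(ET.QName(SOAP_NS, "Envelope"))
    body = ET.SubElement(envelope, ET.QName(SOAP_NS, "Body"))
    call = ET.SubElement(body, ET.QName(ns, method))
    for name, value in params.items():
        # Each parameter becomes a child element of the method call.
        ET.SubElement(call, name).text = str(value)
    return ET.tostring(envelope, encoding="unicode")
```

Any SOAP stack on any operating system can parse the resulting envelope, which is the "one layer across all systems" idea in miniature.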

So how is IBM's vision for Web services going to differ from Microsoft's?
The difference is that we build Web services as a layer of software that works across all systems, independent of architecture or operating system. So it's going to work not just on all of our systems, but also across everybody's systems. Microsoft has a Windows-centric view of the world, and we think they're wrong. We think that the view of Web services as one layer across the operating systems is the right view. In fact, to underscore that point--I don't know if you saw all the coverage of grid computing...

Yes.
That is the same thought, that grid computing is a set of resource management services that sit on top of the OS to link different systems together. We will work with the Globus community to build this layer of software to help share resources. All of our systems will be enabled to work with the grid, and all of our middleware will integrate with that software.

Government and academia were key to helping build up the Internet. Do you expect these same communities to similarly become involved in turning these different grids into a commercial reality?
I agree totally. In fact, part of the appeal of working with this community is their batting average. This is the community that built the Internet and Linux.

When all's said and done, have you figured out a business model for grid computing?
I think there are a few key things. It helps to sell our servers, and the fact that they are grid-enabled is a very big deal. Second, it has huge potential for services. Remember, the promise of the grid is that we will help interconnect your computing resources so you can share them more effectively.

How will vendors package the technology? What will the market for this stuff look like?
It will look like software we package in all our servers and services. How the software goes out, whether it's Websphere Grid Edition or part of the OS, is too early to tell. But we'll definitely build it around Globus. The services will help our customers connect their old systems into the grid and share them more effectively--which is what we're now doing for the National Science Foundation and the U.K. government. Once you buy the grid model, which says we don't care where the computers are, we don't care where the storage is, the next step is, Why don't you rent that as a service from us?

So we should expect different vendors to offer different grid services?
I think it will be built around a standard because it's an evolution of the Internet, and yes, I think different vendors will offer different grid services. It will be very much like the Internet.

Is it an oversimplification to suggest that the objective of the grid approach is to turn computing power into a utility like electricity?
In the following sense: A lot of companies may say, "Irving, help me build my own grids in-house. I don't want your damn computer." However, maybe they'll connect to our computers in case they need more power for surges, and once we do that--whether it's 5 percent or 50 percent or 75 percent--customers will decide, based on economics. So it gives us the flexibility of helping them do it themselves, or doing it for them, or anything in between.

And would you charge on a per-usage basis?
Probably some per-usage fee, though we don't know the answer yet. The best models today are Web hosting and storage services; we're building a whole set of resources and a strategy to start the evolution.

You have the reputation inside IBM of being a big-picture guy. So what's the big picture five years from now? How big?
Oh, I think it's huge.

Bigger than the Net?
No, it's part of the Internet. It's not different from the Net, it's an evolution of the Internet.

Based on what we discussed earlier, as far as your views on standards, I suppose the key to the future success of grid computing is developing the right standards?
I hope you detect a consistency here. Because in the end, a lot of the grid computing is this notion of a plug in the wall: You cannot put a plug in the wall if there are no standards. The Globus community, which is very close to the Internet community in general, is working to develop this concept that when you plug into the wall, you plug into a virtual computer that you're obviously authorized to use.

Which gives you what?
That virtual computer will have all the resources at your disposal. When you turn on a computer today, it's the resources in your PC. The (grid) metaphor is that your computer is the aggregation of resources on the Internet that you're allowed to see and work on. So let's say you are doing critical pharmaceutical research; then you may be part of the virtual organization that has all this genomic data, all the historical data, and somehow, from your client machine, you get to see not just what's on your machine, but all the stuff out there. And you can run applications, you can access data, and when you want to do things, it just happens out there. And it happens out there because of the software that helps you get your job done.

What's the next big hurdle to turn this into reality?
One is to get agreement on all the protocols and resources, and that's what Globus is trying to do. Also, in terms of resource management and developing the right algorithms and software...the resource management layer is very sophisticated because it has to make decisions about where you have free cycles, whether you are better off going to this computer and bringing the data to it, or whether you are better off going to that computer where the data is. We have a lot of experience in that work because this is not that different from a lot of the management of resources in a mainframe.
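The compute-versus-data decision he describes can be sketched in a few lines. This toy heuristic (an illustration of the trade-off, not IBM's or Globus's actual scheduler) picks a placement by comparing total time to completion: running where the data already lives versus shipping the data to a faster machine with free cycles.

```python
# Toy sketch of the grid scheduler's data-locality trade-off: is it faster to
# run the job on the machine that holds the data, or to move the data to a
# remote machine with free cycles? All parameter names are illustrative.

def best_placement(data_gb, link_gbps, compute_hours_local, compute_hours_remote):
    """Return 'local' or 'remote' by comparing total time to completion.

    data_gb: size of the input data set, in gigabytes
    link_gbps: network bandwidth to the remote machine, in gigabits per second
    compute_hours_local: run time on the machine that already holds the data
    compute_hours_remote: run time on the faster remote machine
    """
    # Time to ship the data: GB -> gigabits -> seconds -> hours.
    transfer_hours = (data_gb * 8) / link_gbps / 3600
    remote_total = transfer_hours + compute_hours_remote
    return "local" if compute_hours_local <= remote_total else "remote"
```

With a terabyte of data over a 1 Gbps link, the transfer cost is small enough that the faster remote machine wins; at ten terabytes, moving the data costs more than it saves, so the job stays with the data.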

Grid computing is happening in the scientific world today. But when do you expect a breakout to the commercial side?
In the commercial world, by 2003 or 2004, though it might happen sooner. There is a part of the commercial world doing technical work, like petroleum or pharmaceutical research or engineering and design, and there it will happen faster, probably in the next year or two.

What could derail that timetable?
Well, this is very complicated stuff. On the other hand, given the players involved, this will happen for sure because the scientific community is totally geared for it.