IBM today announced the formation of the Deep Computing Institute, a $29-million research initiative that will bring together experts from academia and industry to address some of the world's most challenging business and scientific problems.
"Deep computing" refers to supercomputer-scale processing initiatives that combine massive computation and sophisticated software algorithms to attack problems previously beyond the reach of information technology. Deep computing techniques include optimization, simulation, and visualization, as well as advanced pattern matching and discovery, the company said.
Along with pervasive computing, IBM considers deep computing to be one of the two primary technology trends driving the next stage of e-business.
Based at IBM's T.J. Watson Research Center in Westchester County, New York, the Deep Computing Institute will be guided by an advisory board of leaders from universities, government laboratories, and corporations. IBM is committing more than 120 scientists and technologists at its research labs in New York, San Jose, Austin, Tokyo, Zurich, Haifa, Beijing, and New Delhi to collaborative efforts addressing an initial slate of deep computing projects, in areas ranging from scheduling personnel in complex environments such as airlines to modeling precise weather patterns.
"Deep computing combines the best of business and scientific computing techniques to find the value buried in all this data and to apply that information to solve real-world problems," William R. Pulleyblank, director of Mathematical Sciences at IBM Research and director of the new institute, said in a statement. "Thanks to the tremendous advances in computing power and mathematical algorithms, it's now possible to tackle problems of unbelievable complexity--things we couldn't dream of doing even a few years ago."
In addition to specific research projects, the Institute will launch a series of efforts designed to stimulate discussion, experimentation, and development in the field of deep computing. As part of that initiative, the Institute will release as open source the IBM Visualization Data Explorer, a software package that can be used to analyze and create 3D representations of data.
Data Explorer, whose open-source release was announced today, brings together computational and rendering tools in a programmable framework that enables users to create visualizations of highly complex data from disparate sources. Visualization is often a critical component of deep computing, working alongside powerful computers and advanced algorithms to solve complex business and scientific problems.
By releasing the Data Explorer source code, IBM said it hopes developers will collaborate with the company on improvements to the software that benefit the entire user community.
"With Data Explorer, a picture can be worth a million words," said Pulleyblank. "This software can transform incredibly complex information into 3D images that make it easier to analyze data--to uncover patterns, identify trends, and model 'what-if' scenarios."
Data Explorer can be used to add visualization capabilities to existing applications. It is currently used by companies and institutions in a wide variety of fields, including computational fluid dynamics, medical imaging, computational chemistry, and engineering analysis.
Data Explorer runs on systems ranging from Unix-based supercomputers and workstations to PCs and servers running Microsoft Windows or Linux. The Data Explorer source code is set to be available from the Deep Computing Web site on May 26.