Called DSpace, the new system is essentially a centralized, electronic repository for the massive amounts of intellectual property created by research institutions, said Mackenzie Smith, associate director of MIT Libraries and the DSpace project director.
Preserving data in an accessible manner is increasingly becoming a problem for a number of universities and government agencies. MIT itself produces an estimated 10,000 pieces of digital content a year, a figure that includes conference papers and technical reports.
Some of the data is also quite large and difficult to access. One faculty member has generated ocean floor maps that take up 30 terabytes (30 trillion bytes) of data.
"We began this to get some kind of territorial control over all of this research," Smith said. "If you're lucky, you can get some of it on Google, but most of the stuff we are talking about is not indexed in any way you can get it."
Potentially, DSpace will lead to the creation of a virtual library that meshes the collections of several research universities. MIT is already discussing using the system to link to the libraries of Cambridge and Cornell, she said. Corporations and government agencies have also been in contact with MIT.
The heart of the DSpace system is an open-source storage and retrieval system. Each academic department has been assigned a customized portal for submitting materials, Smith said. Professors and researchers can then deposit information directly into the system through a portal, or after a peer review, depending on the departmental regulations.
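The routing Smith describes — direct deposit into the archive, or deposit after a peer review, depending on each department's rules — can be sketched roughly as follows. This is a hypothetical illustration, not DSpace's actual code or API; every class and method name here is invented.

```python
# Hypothetical sketch of a per-department deposit workflow.
# Names (Item, Repository, deposit, approve) are invented for illustration.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    author: str
    needs_review: bool = False   # set by departmental policy
    approved: bool = False

class Repository:
    def __init__(self):
        self.archive = []        # accepted items, available for retrieval
        self.review_queue = []   # items awaiting peer review

    def deposit(self, item):
        """Route a submission straight to the archive, or hold it
        for review, depending on the department's rules."""
        if item.needs_review and not item.approved:
            self.review_queue.append(item)
        else:
            self.archive.append(item)

    def approve(self, item):
        """Release a reviewed item from the queue into the archive."""
        item.approved = True
        self.review_queue.remove(item)
        self.archive.append(item)

repo = Repository()
repo.deposit(Item("Ocean floor maps", "J. Doe"))            # direct deposit
paper = Item("Conference paper", "A. Smith", needs_review=True)
repo.deposit(paper)                                          # held for review
repo.approve(paper)                                          # released to archive
```

The point of the two-branch `deposit` is that the policy lives with the department, not the submitter: the same portal call either archives immediately or queues for review.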
To retrieve documents, researchers can consult an index. Author and text searches will come in later versions, she said.
"Part of the reason for doing this is that the faculty says, 'My stuff is too hard to find,'" she said.
Using open-source software also cut costs, Smith added. The MIT system, which currently can hold two terabytes of data, can be replicated for $100,000 to $500,000, with most of the expense going to hardware. The software will be licensed freely under the BSD (Berkeley Software Distribution) license.
The system can also be expanded. Eventually, MIT's system will hold more than a petabyte (a quadrillion bytes) of data.
Over time, academics and librarians will have to go through the arduous process of determining what to keep and what to eliminate. The system will ideally let universities cut the costs of housing documents and research findings, but electronic storage still isn't free, so culling is inevitable, she said.
The project started about 18 months ago and was jointly developed by MIT and HP. The company and the university have collaborated on a number of projects. Recently, the two took all of the output of MIT Press, including out-of-print textbooks, and put it into a searchable database.