News Release

'Space-Capsule' Computing Concept May Unlock Petaflops Power, UD Researchers Report

University of Delaware

August 25, 1997--A new computing concept--patterned after successful space missions--may soon help University of Delaware researchers complete the architectural blueprint for a supercomputer 1 million times more powerful than the most advanced personal computer now on the market.

Capable of processing 1 million billion commands or "floating point operations" per second, the world's first "petaflops" machine may feature superconducting microprocessors, three-dimensional holographic data storage, advanced semiconductor memory and optical interconnections. But first, researchers must figure out how to compensate for the fact that the machine's processing chips will work much faster than its memory.

The space-capsule computing concept should help bridge this technological gap, says Guang R. Gao, director of UD's growing Computer Architecture and Parallel Systems Laboratory (CAPSL) and a leading expert on the "multi-threaded program model," a processing strategy gaining increasing attention from high-performance chip and system designers. Gao introduced his research team's latest findings during the national Hybrid Technology Multi-threaded (HTMT) Architecture workshop, held at UD July 20-21.

How does the concept work? The key, Gao says, is to prepare "parallel computational threads"--essentially, many independent instruction pathways--within the machine's lower-level memory hierarchy. The brain of a multi-threaded petaflops computer--a series of processors powered by superconducting materials that lose all resistance to electricity when deeply chilled--would execute many different tasks in turn, Gao explains. These superconducting processors, however, would be held up whenever they had to gather information from many different sites within the computer's deep-memory hierarchy, such as the optical memory unit or the dynamic random access memory (DRAM) region. The different types of data must therefore be gathered into a single "capsule," or parcel, of information before processing begins, Gao says.

In other words: "You stock your capsule with all the information needed by the processors before launching it into the superconducting region," Gao says. "If you launch the information without preparing it first, the execution of tasks will almost certainly be interrupted while the processor fetches what it needs from different sites." After all, "if the Mars rover had been sent into space without all the proper equipment," Gao notes, "that mission would have been a disaster!"
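In programming terms, the scheme resembles a gather-then-compute pattern. The sketch below (plain C, written for this article to illustrate the idea; the names parcel, gather_parcel and run_thread are illustrative and are not part of the HTMT design) shows the two phases: a slow preparation phase that copies scattered operands from deep memory into one contiguous parcel, and a fast execution phase that touches only the parcel and so never has to wait on the memory hierarchy.

    /* Conceptual sketch only: operands scattered across a slow memory
     * hierarchy are gathered into one contiguous "capsule" before the
     * fast processor runs, so execution is never interrupted by fetches. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        double *operands;   /* data copied out of slow memory              */
        size_t  count;      /* number of operands packed into the capsule  */
    } parcel;

    /* Slow phase: walk the (simulated) deep memory and copy everything
     * the thread will need into one contiguous block. */
    static parcel gather_parcel(const double *deep_memory,
                                const size_t *indices, size_t n)
    {
        parcel p = { malloc(n * sizeof(double)), n };
        for (size_t i = 0; i < n; i++)
            p.operands[i] = deep_memory[indices[i]];  /* expensive fetches */
        return p;
    }

    /* Fast phase: the "superconducting" processor touches only the parcel,
     * so it never stalls on the memory hierarchy. */
    static double run_thread(const parcel *p)
    {
        double sum = 0.0;
        for (size_t i = 0; i < p->count; i++)
            sum += p->operands[i];
        return sum;
    }

    int main(void)
    {
        double deep_memory[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
        size_t needed[3]      = { 0, 3, 7 };            /* scattered sites */

        parcel p = gather_parcel(deep_memory, needed, 3); /* stock capsule */
        printf("thread result: %g\n", run_thread(&p));    /* "launch" it   */

        free(p.operands);
        return 0;
    }

The design choice mirrors Gao's analogy: all the expensive, unpredictable memory traffic happens before the fast processor is engaged, so the computation itself runs without interruption.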

For handling large, irregular problems--ranging from real-time weather forecasting and biochemical modeling to simulations of complex systems such as aircraft--a petaflops computer may prove essential, says Kevin B. Theobald, one of a half-dozen graduate students and postdoctoral associates in Gao's lab. Gao's work "is a critical path element in the success of the HTMT project," says researcher Thomas Sterling of the Jet Propulsion Laboratory (JPL) in Pasadena, Calif., principal investigator for the HTMT project and one of three visionaries who proposed a petaflops machine in 1995.

Resulting from a study funded by the National Science Foundation and the National Aeronautics and Space Administration (NASA), the HTMT project is now sponsored by the Defense Advanced Research Projects Agency (DARPA), the National Security Agency (NSA) and NASA. Gao's lab will receive $800,000 over the next several years to develop the architectural blueprint for a petaflops computer. Along with UD, the HTMT project includes the California Institute of Technology and JPL, the State University of New York at Stony Brook, the University of Notre Dame, Princeton University, and government and industry labs.

The UD team members are system-design veterans who previously developed a high-performance, multi-threaded, multi-processor system known as EARTH (Efficient Architecture for Running Threads)--a project directed by Gao at McGill University in Montreal, where he taught before joining the UD faculty in 1996. The EARTH platform is built atop a 20-node, 40-processor parallel machine called MANNA (Massively parallel Architecture for Numerical and Non-numerical Applications), contributed by GMD-FIRST of Berlin, Germany. Doctoral student Andres Marquez, who helped design the memory system for MANNA, is now part of the UD team and one of the lead designers for the HTMT project, Gao notes.

The EARTH system can also run on the IBM SP-2 parallel computer, thanks to support from C.J. Tan and others at IBM's T.J. Watson Research Center. Tan, senior manager in charge of IBM's Deep Blue chess project, will be speaking at UD on Oct. 21.

