News Release

Carnegie Mellon combines hundreds of videos to reconstruct 3D motion without markers

With so many video feeds, choosing which to use is a technical challenge

Peer-Reviewed Publication

Carnegie Mellon University

Reconstructing the Swing of a Bat

image: Thousands of video trajectories of a man swinging a baseball bat, captured in Carnegie Mellon University's Panoptic Studio, are reconstructed to create this image.

Credit: Carnegie Mellon University

PITTSBURGH—Carnegie Mellon University researchers have developed techniques for combining the views of 480 video cameras mounted in a two-story geodesic dome to perform large-scale 3D motion reconstruction, including volleyball games, the swirl of air currents and even a cascade of confetti.

Though the research was performed in a specialized, heavily instrumented video laboratory, Yaser Sheikh, an assistant research professor of robotics who led the research team, said the techniques might eventually be applied to large-scale reconstructions of sporting events or performances captured by hundreds of cameras wielded by spectators.

The video lab, called the Panoptic Studio, also can be used to capture the fine details of people interacting, whether it be college students casually conversing or a child being evaluated by a psychologist for signs of autism.

In contrast to most previous work, which typically has involved just 10 to 20 video feeds, the Carnegie Mellon researchers didn't have to worry about filling in gaps in data; their camera system can track 100,000 points at a time. Rather, they have to figure out which of the hundreds of cameras can see each of those points and select only those camera views for the reconstruction.

"At some point, extra camera views just become 'noise,'" said Hanbyul Joo, a Ph.D. student in the Robotics Institute. "To fully leverage hundreds of cameras, we need to figure out which cameras can see each target point at any given time."

The research team developed a technique for estimating visibility that uses motion as a cue. In contrast to motion capture systems that use balls or other markers, the researchers used established techniques for automatically identifying and tracking points based on appearance features — in this case, distinctive patterns. For each point, the system then seeks to determine which cameras see motion that is consistent with that point.

For instance, if a point on a person's chest is being tracked and most cameras show that point is moving to the right, a camera that picks up motion in the opposite direction is probably seeing a person or object that is in between the target and the camera. Or it may indicate the person has turned and the chest is no longer visible to the camera. In either case, the system knows that camera cannot see the target point and that its video feed is not useful for 3D reconstruction involving that point.
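The consistency check described above can be sketched in a few lines of code. This is an illustrative simplification, not the authors' actual formulation: the function names, the cosine-similarity test, and the 0.5 threshold are all assumptions made for the example.

```python
import math

def cosine(u, v):
    # Cosine similarity between two 2D motion vectors; 0.0 if either is zero.
    dot = u[0] * v[0] + u[1] * v[1]
    nu, nv = math.hypot(*u), math.hypot(*v)
    return dot / (nu * nv) if nu and nv else 0.0

def estimate_visibility(expected_motions, observed_motions, threshold=0.5):
    """Decide which cameras can see one tracked point, using motion as a cue.

    expected_motions: camera id -> predicted 2D image motion of the point
        in that camera (projected from the consensus 3D motion).
    observed_motions: camera id -> 2D motion actually measured in that
        camera's feed at the point's image location.
    Returns the set of cameras whose observed motion agrees with the
    prediction; disagreeing cameras are treated as occluded.
    """
    visible = set()
    for cam, expected in expected_motions.items():
        observed = observed_motions.get(cam)
        if observed is None:
            continue  # no measurement in this camera
        if cosine(expected, observed) >= threshold:
            visible.add(cam)  # motion agrees: camera likely sees the point
    return visible
```

In the chest-point scenario from the text, a camera that measures motion opposite to the prediction (an occluder, or a turned subject) falls below the threshold and is dropped from the reconstruction of that point.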

Other researchers have been able to use images from a large number of cameras, such as smartphones, to create 3D reconstructions of static scenes, Joo noted. But without methods such as the visibility estimation technique, 3D motion reconstruction at such a large scale has not been possible.

In the Panoptic Studio, the researchers have 480 video cameras, plus an additional 30 high-definition video cameras, arrayed all around and halfway up the walls of a geodesic dome that can easily accommodate 10 people.

Such a dense array of cameras enables the researchers to perform 3D motion reconstructions not previously possible. These include 3D reconstructions of a person tossing confetti into the air, with each piece of paper tracked until it reaches the floor. In another case, confetti is fed into a fan, enabling a motion capture of the air flow. "You couldn't put markers on the paper without changing the flow," Joo explained.

Likewise, such techniques might be used for reconstruction of the motion of animals, which typically can't be instrumented.

###

A video of the 3D reconstructions and links to the team's research paper are available on the project website, http://www.cs.cmu.edu/~hanbyulj/14/visibility.html.

The findings were presented at the Computer Vision and Pattern Recognition conference, June 24-27, in Columbus, Ohio. In addition to Sheikh and Joo, the research team included Hyun Soo Park, who this year completed his Ph.D. in mechanical engineering at CMU and is now a post-doctoral researcher at the University of Pennsylvania.

This research was supported by the National Science Foundation and a Samsung Scholarship.

The Robotics Institute is part of Carnegie Mellon's top-ranked School of Computer Science, which is celebrating its 25th year. Follow the school on Twitter @SCSatCMU.

About Carnegie Mellon University:

Carnegie Mellon is a private, internationally ranked research university with programs in areas ranging from science, technology and business, to public policy, the humanities and the arts. More than 12,000 students in the university's seven schools and colleges benefit from a small student-to-faculty ratio and an education characterized by its focus on creating and implementing solutions for real problems, interdisciplinary collaboration and innovation. A global university, Carnegie Mellon has campuses in Pittsburgh, Pa., California's Silicon Valley and Qatar, and programs in Africa, Asia, Australia, Europe and Mexico.

