Virtual reality (VR) and augmented reality (AR) developers promise that the technology is only limited by imagination, but wearing VR goggles for even a short period of time can be challenging. Eye strain, motion sickness, and fatigue are frequent physical complaints that limit the time that can be spent in a VR environment.
However, a new breakthrough at the Intelligent Optics Lab (iOptics Lab) at the University of Illinois at Urbana-Champaign is poised to change that. Liang Gao, assistant professor of electrical and computer engineering, and graduate student Wei Cui, both affiliated with the Beckman Institute, have created a new optical mapping 3D display that makes VR viewing more comfortable.
According to the pair's report published in Optics Letters, most current 3D VR/AR displays present two slightly different images that the viewer's brain combines into an impression of a 3D scene. This stereoscopic display method can cause eye fatigue and discomfort because of an eye-focusing problem called the vergence-accommodation conflict.
When you look at a real object, your eyes rotate to point at it while your lenses adjust to bring it into focus. Depending on the distance between you and the object, your eyes converge or diverge (vergence), and your lenses accommodate to match. Vergence and accommodation automatically work in lockstep, but when you're presented with a rendered 3D scene, the two responses are pulled apart and the conflict arises.
The two images that make up a stereoscopic 3D scene are displayed on a single surface that sits at a fixed distance from your eyes, but they are slightly offset to create the 3D effect. Your eyes therefore have to work differently than usual: they converge to a point that appears farther away, while your lenses stay focused on a display that is centimeters from your face. (Learn more about the vergence-accommodation conflict in the Journal of Vision, March 2008, Vol. 8, No. 3, Article 33. doi:10.1167/8.3.33.)
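To put rough numbers on that mismatch, here is a minimal Python sketch that computes the vergence angle and the size of the conflict in diopters; the 63 mm interpupillary distance and the 1.5 m fixed focal distance are illustrative assumptions, not values taken from the research.

```python
import math

IPD_M = 0.063    # assumed average interpupillary distance (~63 mm)
FOCAL_M = 1.5    # hypothetical fixed focal distance of the headset optics

def vergence_angle_deg(distance_m):
    """Angle between the two eyes' lines of sight when fixating a point."""
    return math.degrees(2 * math.atan(IPD_M / (2 * distance_m)))

def conflict_diopters(rendered_depth_m):
    """Mismatch between where the eyes converge (the rendered depth) and
    where the lenses must focus (the display's fixed focal plane)."""
    return abs(1 / rendered_depth_m - 1 / FOCAL_M)

for depth in (0.3, 0.5, 1.5, 5.0):
    print(f"{depth:>4} m: vergence {vergence_angle_deg(depth):5.2f} deg, "
          f"conflict {conflict_diopters(depth):.2f} D")
```

At the assumed 1.5 m focal distance the conflict vanishes; for nearby virtual objects it grows quickly, which is where the discomfort sets in.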
To overcome these stereoscopic limitations, Cui and Gao created an optical mapping near-eye (OMNI) three-dimensional display method. Their method divides the digital display into subpanels. A spatial multiplexing unit (SMU) shifts these subpanel images to different depths, with correct focus cues for depth perception. Unlike the offset images of the stereoscopic method, the SMU also aligns the centers of the images with the optical axis, and an algorithm blends them into a single seamless image.
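The article doesn't detail the blending algorithm, so the sketch below shows depth-weighted linear blending, a common technique in the multifocal-display literature for fusing per-plane images; the four plane depths are hypothetical, not the OMNI prototype's actual parameters.

```python
import numpy as np

# Hypothetical focal planes (in diopters) for four subpanel images.
PLANES_D = np.array([0.0, 0.6, 1.2, 1.8])

def blend_weights(depth_d):
    """Split each pixel's intensity between its two nearest focal planes
    (linear depth-weighted blending) so the fused image looks continuous.
    depth_d: per-pixel scene depth in diopters; returns (planes, H, W)."""
    w = np.zeros((len(PLANES_D),) + depth_d.shape)
    for i in range(len(PLANES_D) - 1):
        lo, hi = PLANES_D[i], PLANES_D[i + 1]
        in_span = (depth_d >= lo) & (depth_d < hi)
        if i == len(PLANES_D) - 2:      # include the nearest plane's edge
            in_span |= depth_d == hi
        t = np.where(in_span, (depth_d - lo) / (hi - lo), 0.0)
        w[i] += np.where(in_span, 1.0 - t, 0.0)
        w[i + 1] += t
    return w

# Toy depth map sweeping from far (0 D) to near (1.8 D):
depth_map = np.linspace(0.0, 1.8, 7).reshape(1, 7)
print(blend_weights(depth_map).round(2))  # per-pixel weights sum to 1
```

Pixels whose depth falls exactly on a plane draw entirely from that plane; everything in between is split between its two neighbors, which is what lets a handful of physical planes approximate a continuous range of focus cues.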
"People have tried methods similar to ours to create multiple plane depths, but instead of creating multiple depth images simultaneously, they changed the images very quickly," Gao said in an OSA news release. "However, this approach comes with a trade-off in dynamic range, or level of contrast, because the duration each image is shown is very short."
The researchers are continuing work on the display, increasing power efficiency and reducing weight and size. "In the future, we want to replace the spatial light modulators with another optical component such as a volume holography grating," said Gao. "In addition to being smaller, these gratings don't actively consume power, which would make our device even more compact and increase its suitability for VR headsets or AR glasses."
###