Scientists from the Skoltech ADASE (Advanced Data Analytics in Science and Engineering) lab have found a way to enhance depth map resolution, which should make virtual reality and computer graphics more realistic. They presented their results at the prestigious International Conference on Computer Vision 2019 in Korea.
When taking a photo, we capture visual information about the objects around us: each pixel in the image stores the color of the corresponding part of the scene. Depth maps are images that capture spatial information instead: each pixel stores the distance from the camera to the corresponding point in space. Applications such as computer graphics and augmented or virtual reality use this spatial information to reconstruct a 3D object's shape and, for instance, display it on a computer screen.
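The idea of turning a depth map into a 3D shape can be sketched with a simple back-projection through a pinhole camera model. This is a minimal illustration, not the authors' pipeline; the depth values and camera intrinsics below are made up for the example.

```python
import numpy as np

# A hypothetical 4x4 depth map: each pixel stores the distance (in meters,
# measured along the camera axis) from the camera to the scene point.
depth = np.full((4, 4), 2.0)

# Assumed pinhole-camera intrinsics (focal lengths and principal point);
# a real system would read these from camera calibration.
fx = fy = 2.0
cx = cy = 1.5

# Back-project every pixel (u, v) with depth z to a 3D point:
# x = (u - cx) * z / fx,  y = (v - cy) * z / fy.
v, u = np.indices(depth.shape)
x = (u - cx) * depth / fx
y = (v - cy) * depth / fy
points = np.stack([x, y, depth], axis=-1)  # one 3D point per pixel

print(points.shape)  # (4, 4, 3)
```

Each pixel thus becomes a point in space; the denser the depth map, the more faithfully the resulting point cloud captures the object's surface.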
One of the issues with depth cameras is that their resolution, that is, the spatial frequency of distance measurements, is insufficient for restoring a high-quality object shape, making virtual reconstructions look unrealistic.
Researchers therefore face the challenge of obtaining high-resolution depth maps from low-resolution ones. Scientists from the Skoltech ADASE lab proposed assessing reconstruction quality with a novel measure closely tied to human perception. Training an artificial neural network with this quality assessment technique produces a depth map super-resolution method that largely outperforms existing methods in the visual quality of the result.
"When dealing with super-resolved depth maps, one should assess the quality of the result, first, to compare the performance of different methods, and second, to use it as feedback for further improvements. The easiest way is to compare the result to some reference. The overwhelming majority of works on depth map super-resolution use the mean difference between super-resolved and reference depth values for this purpose. By no means does this measure reflect the visual quality of the 3D reconstruction obtained from the super-resolved depth map," explains the first author of the study, Oleg Voynov.
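The conventional baseline the quote criticizes can be sketched as a mean per-pixel difference between two depth maps. This is a minimal illustration with made-up values, not the authors' perceptual measure.

```python
import numpy as np

# Hypothetical reference and super-resolved depth maps (meters).
reference = np.array([[1.0, 1.0],
                      [2.0, 2.0]])
super_resolved = np.array([[1.1, 0.9],
                           [2.2, 1.8]])

# Mean absolute difference between depth values: the conventional
# quality measure. Two super-resolved maps can score identically here
# yet yield 3D surfaces of very different visual quality, which is the
# shortcoming the researchers' perception-based measure addresses.
mad = np.mean(np.abs(super_resolved - reference))
print(mad)  # 0.15
```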
"We propose an altogether different method, which leverages human perception of the difference between visualizations of the 3D reconstructions obtained from super-resolved and reference depth maps. The graphics you obtain with this method look highly realistic. We hope that our method will find extensive use," says one of the developers, Alexey Artemov.