Gaze-Aware Inverse Light Field Mapping for Autostereoscopic Displays

Abstract

To overcome the limitations of traditional light field mapping algorithms, namely pixel holes at depth discontinuities and computational complexity that grows polynomially with scene resolution, we propose a gaze-aware inverse light field mapping algorithm and develop a corresponding autostereoscopic display system. In this method, real-time gaze tracking and inverse light field mapping are decoupled and executed as parallel processing threads. The gaze-tracking thread processes video streams from a depth camera, using a deep neural network to localize the viewer's eyes in 3D coordinates in real time. Based on these real-time 3D eye coordinates, the inverse-mapping thread establishes an inverse mapping model from the elemental image to the scene. Experimental results demonstrate that the proposed algorithm effectively eliminates the pixel holes at depth discontinuities common in traditional algorithms, thereby improving light field reproduction quality. When the scene resolution doubles, the speedup factor of the proposed method over the traditional one reaches 1.15–1.20, confirming the method's efficiency and practicality in high-resolution scenarios.
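The decoupled two-thread structure described in the abstract — a tracking thread that continuously publishes the latest 3D eye position, and a mapping thread that reads it once per rendered frame — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the names (`GazeState`, `run_gaze_thread`, `render_frames`), the replayed coordinate samples, and the timing values are all assumptions; the real system would feed depth-camera frames through a neural network and perform the elemental-image inverse mapping per pixel.

```python
import threading
import time

class GazeState:
    """Thread-safe holder for the most recent tracked 3D eye position."""
    def __init__(self, initial=(0.0, 0.0, 500.0)):
        self._lock = threading.Lock()
        self._xyz = initial

    def update(self, xyz):
        with self._lock:
            self._xyz = xyz

    def latest(self):
        with self._lock:
            return self._xyz

def run_gaze_thread(state, samples):
    # Stand-in for the depth-camera/neural-network eye localizer:
    # replay precomputed 3D eye coordinates at a fixed rate.
    for xyz in samples:
        state.update(xyz)
        time.sleep(0.001)

def render_frames(state, n_frames):
    # Inverse-mapping thread: each frame reads the freshest eye
    # position and would map elemental-image pixels back to the
    # scene; here we only record which position each frame used.
    used = []
    for _ in range(n_frames):
        used.append(state.latest())
        time.sleep(0.002)
    return used

# Hypothetical gaze samples: the eye drifting horizontally (mm).
state = GazeState()
samples = [(float(i), 0.0, 500.0) for i in range(10)]
tracker = threading.Thread(target=run_gaze_thread, args=(state, samples))
tracker.start()
frames = render_frames(state, 5)
tracker.join()
print(len(frames))  # prints 5
```

Because the mapping thread only ever reads the most recent gaze sample rather than queuing every tracker update, rendering never blocks on tracking — the decoupling that lets the two run at independent rates.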
