Abstract
The light field faithfully records the spatial and angular configurations of a scene, which facilitates a wide range of imaging possibilities. In this work, we propose a Light Field (LF) rendering algorithm that renders high-quality novel LF views far outside the range of angular baselines of the given references. A stratified rendering strategy is adopted that parses the scene content into stratified disparity layers at varying levels of spatial granularity. This stratified methodology helps preserve scene structure over large perspective shifts and provides informative cues for inferring the textures of occluded regions. A Generative Adversarial Network (GAN) model is adopted for parallax correction and occlusion completion, conditioned on the stratified rendering features. Experiments show that the proposed model provides more reliable novel view rendering quality at large baseline expansion ratios, achieving over 3 dB quality improvement against state-of-the-art LF view rendering algorithms.
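To make the core idea of stratified disparity-layer rendering concrete, here is a minimal sketch of one plausible building block: decomposing a reference view into disparity strata and forward-shifting each stratum to a novel viewpoint, with nearer layers occluding farther ones. All names (`stratified_warp`, `strata_edges`, `shift`) are hypothetical illustrations, not the paper's actual implementation, which further involves multi-granularity features and GAN-based correction.

```python
import numpy as np

def stratified_warp(ref_view, disparity, strata_edges, shift):
    """Shift each disparity stratum of a reference view to synthesize a
    novel view (illustrative sketch, not the paper's method).

    ref_view:     (H, W, 3) reference image
    disparity:    (H, W) per-pixel disparity of the reference view
    strata_edges: monotone disparity bin edges, e.g. np.linspace(d_min, d_max, L + 1)
    shift:        baseline offset of the target view, in units of the
                  reference-view spacing; disparity * shift gives pixel motion
    """
    H, W, _ = ref_view.shape
    novel = np.zeros((H, W, 3), dtype=np.float64)
    filled = np.zeros((H, W), dtype=bool)

    # Process strata from near (large |disparity|) to far, so that closer
    # layers occlude farther ones at the novel viewpoint.
    layers = list(zip(strata_edges[:-1], strata_edges[1:]))
    for lo, hi in sorted(layers, key=lambda e: -max(abs(e[0]), abs(e[1]))):
        mask = (disparity >= lo) & (disparity < hi)
        d_layer = 0.5 * (lo + hi)             # representative disparity of the stratum
        dx = int(round(d_layer * shift))      # constant horizontal shift for this layer

        # Inverse mapping: novel column x draws from reference column x - dx.
        src_cols = np.arange(W) - dx
        valid = (src_cols >= 0) & (src_cols < W)
        layer_img = np.where(mask[..., None], ref_view, 0.0)
        shifted = np.zeros_like(layer_img)
        shifted[:, valid] = layer_img[:, src_cols[valid]]
        shifted_mask = np.zeros((H, W), dtype=bool)
        shifted_mask[:, valid] = mask[:, src_cols[valid]]

        write = shifted_mask & ~filled        # keep pixels already claimed by nearer layers
        novel[write] = shifted[write]
        filled |= write

    return novel, filled                      # ~filled marks disoccluded holes
```

The unfilled regions returned here correspond to the occluded content that, in the paper's pipeline, a generative model would be responsible for completing.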
URL
http://arxiv.org/abs/1903.02688