Abstract
We present a novel approach to reconstructing a 3D object from its 2D projections. Our GAN-inspired system employs a $C^\infty$-smooth differentiable renderer. Unlike state-of-the-art renderers, ours exhibits no discontinuities at occlusions and dis-occlusions, which enables training without 3D supervision and with only minimal 2D supervision. Through domain adaptation and a novel training scheme, our network, the Reconstructive Adversarial Network (RAN), can train on different types of images, whereas previous work can only train on images similar in appearance to those produced by a differentiable renderer. We validate our reconstruction method on three shape classes from ShapeNet and demonstrate that it is robust to perturbations of the view direction, to different lighting conditions, and to varying levels of texture detail.
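To illustrate why smoothness at occlusions matters, here is a minimal sketch of the general idea of soft depth aggregation; it is not the RAN renderer described in the paper, and the function names and the temperature parameter `tau` are assumptions made for illustration only. A hard z-buffer is a step function of the fragment depths (the output jumps when the front-most surface changes), while a softmax-weighted blend over depth varies smoothly, so gradients can flow through occlusion events.

```python
# Illustrative sketch only -- NOT the paper's renderer. It contrasts a hard
# z-buffer (discontinuous at occlusion changes) with a generic softmax-over-
# depth blend that is C-infinity in the fragment depths and colors.
# The temperature `tau` is a hypothetical parameter for this sketch.
import numpy as np

def hard_zbuffer(colors, depths):
    """Return the color of the nearest fragment: a step function of the depths."""
    return colors[np.argmin(depths)]

def soft_blend(colors, depths, tau=0.05):
    """Blend all fragments with weights that decay smoothly with depth."""
    logits = -depths / tau                 # nearer fragments get larger logits
    w = np.exp(logits - logits.max())
    w /= w.sum()                           # softmax weights, all strictly positive
    return w @ colors                      # smooth in both colors and depths

# Two fragments covering the same pixel: red in front, blue behind.
colors = np.array([[1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0]])
for d_front in (0.49, 0.50, 0.51):         # sweep the red fragment past the blue one
    depths = np.array([d_front, 0.50])
    print(d_front, hard_zbuffer(colors, depths), soft_blend(colors, depths))
# The hard z-buffer output jumps from red to blue as the depth ordering flips;
# the soft blend changes continuously across the occlusion boundary.
```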
URL
http://arxiv.org/abs/1903.11149