Novel view synthesis with light-weight view-dependent texture mapping for a stereoscopic HMD

Abstract

The proliferation of off-the-shelf head-mounted displays (HMDs) lets end-users enjoy virtual reality applications, some of which render a real-world scene using a novel view synthesis (NVS) technique. View-dependent texture mapping (VDTM) has been studied for NVS because of its photo-realistic quality. The VDTM technique renders a novel view by adaptively selecting textures from the most appropriate captured images. However, this process is computationally expensive because VDTM scans every captured image. For stereoscopic HMDs, the situation is even worse because novel views must be rendered once for each eye, almost doubling the cost. This paper proposes a light-weight VDTM method tailored for HMDs. To reduce the computational cost of VDTM, our method leverages the overlapping fields of view between a stereoscopic pair of HMD images and prunes the images to be scanned. We show through a user study that the proposed method drastically accelerates the VDTM process without degrading image quality.
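The abstract does not detail the exact pruning strategy, but the core idea can be illustrated with a minimal Python sketch, assuming each captured image is summarized by a viewing-direction vector: naive VDTM scores every captured image for each eye, whereas the stereo-aware variant scans all images once for the left eye and restricts the right-eye scan to the left eye's top candidates. All names here (view_similarity, select_textures_stereo, candidate_factor) are hypothetical and not from the paper.

```python
import numpy as np

def view_similarity(novel_dir, capture_dir):
    # Cosine similarity between the novel viewing direction and a
    # captured image's viewing direction (larger = better texture match).
    return float(np.dot(novel_dir, capture_dir) /
                 (np.linalg.norm(novel_dir) * np.linalg.norm(capture_dir)))

def select_textures(novel_dir, capture_dirs, k=4):
    # Naive VDTM selection: scan every captured image and keep the k
    # most similar views. Cost grows linearly with the image count,
    # and a stereo HMD pays it twice (once per eye).
    scores = [view_similarity(novel_dir, d) for d in capture_dirs]
    return np.argsort(scores)[-k:][::-1]

def select_textures_stereo(left_dir, right_dir, capture_dirs,
                           k=4, candidate_factor=4):
    # Pruned selection for a stereo pair (illustrative only): scan all
    # images for the left eye, then score only the left eye's top
    # candidates for the right eye, exploiting the largely overlapping
    # fields of view of the two eyes.
    scores = np.array([view_similarity(left_dir, d) for d in capture_dirs])
    candidates = np.argsort(scores)[-k * candidate_factor:]
    left = candidates[np.argsort(scores[candidates])[-k:]][::-1]
    right_scores = [view_similarity(right_dir, capture_dirs[i])
                    for i in candidates]
    right = candidates[np.argsort(right_scores)[-k:]][::-1]
    return left, right
```

In this sketch the right-eye scan touches only k * candidate_factor images instead of the full set, which is where the speed-up would come from under the overlapping-field-of-view assumption.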

Publication
Proceedings - IEEE International Conference on Multimedia and Expo
Yuta Nakashima
Professor

Yuta Nakashima is a professor at the Institute for Datability Science, Osaka University. His research interests include computer vision, pattern recognition, natural language processing, and their applications.