By Dury, Sarah; Bonatto, Daniele; Lafruit, Gauthier; Teratani, Mehrdad
Reference: IEEE Transactions on Multimedia
Publication: Published, 2025-10-01

Peer-reviewed article
Abstract: We present a first-of-its-kind view synthesis method for plenoptic images that manipulates images directly in the micro-image array format, bypassing intermediate transformation steps. Current plenoptic imaging approaches typically rely on an initial conversion to dense multiview images, also known as subaperture image extraction. However, the use of subaperture images presents two main limitations that ultimately impact further processing. First, existing subaperture view extraction methods offer limited control over the camera parameters, resolutions, and poses of the subaperture views, which are also constrained to a small area around the main lens, thus restricting free navigation. Second, subaperture images are susceptible to artifacts that can propagate to subsequent processes such as calibration, depth estimation, and view synthesis. In this paper, we propose a camera model that enables depth image-based rendering with plenoptic cameras, allowing the direct synthesis of any target viewpoint. In our evaluation, we show that our method expands view synthesis extrapolation to a range two to three times greater than that of pipelines requiring a conversion to subaperture images, including generally accepted tools such as depth image-based rendering and learning-based rendering approaches.
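
For readers unfamiliar with the baseline pipeline the abstract refers to, the conventional conversion step that this method bypasses regroups the raw micro-image array by intra-lens pixel offset to form subaperture views. The sketch below (Python/NumPy) is a minimal illustration of that conventional step only, under the simplifying assumption of an ideal, unrotated square micro-lens grid; the function name and array layout are hypothetical and are not taken from the paper, and real sensors additionally require calibration (hexagonal grids, rotation, vignetting).

import numpy as np

def extract_subaperture_views(mia, ml_size):
    """Slice a plenoptic micro-image array (MIA) into subaperture views.

    Assumption (illustrative only): micro-images lie on a uniform,
    axis-aligned square grid of ml_size x ml_size pixels.

    mia     : (H, W, C) raw micro-image array
    ml_size : pixels per micro-lens in each dimension
    returns : (ml_size, ml_size, H//ml_size, W//ml_size, C) array,
              indexed by the intra-lens pixel offset (row, col)
    """
    h, w, c = mia.shape
    ny, nx = h // ml_size, w // ml_size
    # Group pixels by micro-lens, then reorder so that each fixed
    # intra-lens offset becomes one low-resolution subaperture view.
    blocks = mia[: ny * ml_size, : nx * ml_size].reshape(
        ny, ml_size, nx, ml_size, c
    )
    return blocks.transpose(1, 3, 0, 2, 4)

if __name__ == "__main__":
    # Synthetic example: 8x8-pixel micro-lenses on a 400x600 sensor.
    raw = np.random.rand(400, 600, 3).astype(np.float32)
    views = extract_subaperture_views(raw, ml_size=8)
    print(views.shape)  # (8, 8, 50, 75, 3) -> 64 views of 50x75 pixels

Each resulting view corresponds to one pixel offset behind every micro-lens, which is why the extracted views are confined to a small synthetic baseline around the main lens, one of the limitations the abstract highlights.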