By Razavi Khosroshahi, Hamed; Sancho Aragon, Jaime; Bang, Gun; Lafruit, Gauthier; Juarez, Eduardo; Teratani, Mehrdad
Reference: 9th DSP Workshop for In-Vehicle Systems and Safety - The Challenge to Society of Moving Intelligence (22-23 Aug. 2024, Brussels, Belgium), Intelligent Vehicles and Transportation
Publication: Published, 2024-08-22
Conference abstract
Abstract: Neural Radiance Fields (NeRF) demonstrate impressive capabilities in rendering novel views of specific scenes by learning an implicit volumetric representation from posed RGB images without any depth information. One significant challenge in this domain is the need for a large number of images in the training datasets of neural network-based view synthesis frameworks, which is often impractical in real-world scenarios. Our work addresses this data augmentation challenge for view synthesis applications. NeRF models require comprehensive scene coverage across multiple views to accurately estimate radiance and density at any point; insufficient coverage can limit the model's ability to interpolate or extrapolate unseen parts of a scene effectively. We introduce a novel pipeline that tackles this data augmentation problem by using depth data to add novel, non-existent views to the training set of the NeRF framework. Our experimental results show that the proposed approach significantly enhances the quality of the images rendered by the NeRF model, with an average increase of 6.4 dB in Peak Signal-to-Noise Ratio (PSNR) and a maximum increase of 11 dB. This work can be extended by integrating LiDAR cameras and their depth maps to enhance the quality of the view synthesis process, improving the perception and decision-making capabilities of intelligent vehicles.
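The core idea of depth-based view augmentation can be illustrated with a forward warp: each pixel is back-projected to 3D using its depth value and the camera intrinsics, transformed by a relative camera pose, and re-projected into the novel viewpoint. The sketch below is a minimal, hypothetical illustration of this principle (pinhole model, nearest-pixel splatting, no occlusion handling or hole filling); it is not the authors' pipeline, whose details are not given in the abstract.

```python
import numpy as np

def warp_to_novel_view(image, depth, K, T_src_to_tgt):
    """Forward-warp a posed RGB image into a novel viewpoint using its
    depth map. Pixels are back-projected with the depth map and the
    intrinsic matrix K, moved by the 4x4 relative pose T_src_to_tgt,
    and re-projected; a nearest-pixel splat fills the target image
    (unfilled pixels stay black)."""
    h, w = depth.shape
    # Homogeneous pixel grid, shape 3 x N
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    # Back-project to 3D points in the source camera frame
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    # Transform into the target camera frame
    pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])
    pts_tgt = (T_src_to_tgt @ pts_h)[:3]
    # Re-project into the target image plane and splat colors
    proj = K @ pts_tgt
    z = proj[2]
    valid = z > 1e-6  # keep only points in front of the camera
    uu = np.round(proj[0, valid] / z[valid]).astype(int)
    vv = np.round(proj[1, valid] / z[valid]).astype(int)
    inside = (uu >= 0) & (uu < w) & (vv >= 0) & (vv < h)
    out = np.zeros_like(image)
    src_colors = image.reshape(-1, image.shape[-1])[valid][inside]
    out[vv[inside], uu[inside]] = src_colors
    return out
```

Images warped this way from existing posed views can be paired with their synthesized camera poses and appended to the NeRF training set, which is the kind of augmentation the abstract describes.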