Ajou University repository

OmniStitch: Depth-Aware Stitching Framework for Omnidirectional Vision with Multiple Cameras
SCOPUS Citations
0

Publication Year
2024-10-28
Journal
MM 2024 - Proceedings of the 32nd ACM International Conference on Multimedia
Publisher
Association for Computing Machinery, Inc
Citation
MM 2024 - Proceedings of the 32nd ACM International Conference on Multimedia, pp.10210-10219
Keyword
image stitching; omnidirectional view dataset; omnidirectional vision
Mesh Keyword
Advanced driver assistance; Environmental awareness; Ground truth; Image stitching; Multiple cameras; Omni-directional view; Omni-directional vision; Omnidirectional view dataset; Omnidirectional vision system; Panoramic views
All Science Classification Codes (ASJC)
Artificial Intelligence; Computer Graphics and Computer-Aided Design; Human-Computer Interaction; Software
Abstract
Omnidirectional vision systems provide a 360-degree panoramic view, enabling full environmental awareness in various fields, such as Advanced Driver Assistance Systems (ADAS) and Virtual Reality (VR). Existing omnidirectional stitching methods rely on a single specialized 360-degree camera. However, due to hardware limitations such as high mounting heights and blind spots, adapting these methods to vehicles of varying sizes and geometries is challenging. These challenges include limited generalizability due to the reliance on predefined stitching regions for fixed camera arrays, performance degradation from distance parallax leading to large depth differences, and the absence of suitable datasets with ground truth for multi-camera omnidirectional systems. To overcome these challenges, we propose a novel omnidirectional stitching framework and a publicly available dataset tailored for varying distance scenarios with multiple cameras. The framework, referred to as OmniStitch, consists of a Stitching Region Maximization (SRM) module for automatic adaptation to different vehicles with multiple cameras and a Depth-Aware Stitching (DAS) module to handle depth differences caused by distance parallax between cameras. In addition, we create and release an omnidirectional stitching dataset, called GV360, which provides ground truth images that maintain the perspective of the 360-degree FOV, designed explicitly for vehicle-agnostic systems. Extensive evaluations on this dataset demonstrate that our framework outperforms state-of-the-art stitching models, especially in handling varying distance parallax. The proposed dataset and code are publicly available at https://github.com/tngh5004/Omnistitch.
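The abstract describes a two-stage design: an SRM module that selects usable stitching regions for an arbitrary multi-camera rig, and a DAS module that blends overlapping views with depth information to counter distance parallax. The Python sketch below only illustrates that general idea with a toy depth-weighted blend over a fixed overlap band; the function names, the fixed-band region selection, and the inverse-depth weighting heuristic are illustrative assumptions, not the authors' implementation (the released code is at the GitHub link above).

# Minimal, illustrative sketch of a depth-aware two-camera blend.
# NOT the OmniStitch implementation; all names here are hypothetical.
import numpy as np

def maximize_stitch_region(left: np.ndarray, overlap: int) -> slice:
    """Stand-in for an SRM-style step: pick the column band of the left
    image that overlaps the right image. A learned module would adapt this
    to the rig; here it is simply a fixed-width band at the right edge."""
    width = left.shape[1]
    return slice(width - overlap, width)

def depth_aware_blend(left, right, depth_left, depth_right, overlap):
    """Stand-in for a DAS-style step: in the overlap band, weight each pixel
    by inverse depth so nearer content, which suffers more from parallax,
    dominates the blend. This is only a toy heuristic."""
    band = maximize_stitch_region(left, overlap)
    l_band = left[:, band].astype(float)
    r_band = right[:, :overlap].astype(float)
    w_l = 1.0 / (depth_left[:, band] + 1e-6)
    w_r = 1.0 / (depth_right[:, :overlap] + 1e-6)
    blended = (w_l[..., None] * l_band + w_r[..., None] * r_band) / (w_l + w_r)[..., None]
    return np.concatenate(
        [left[:, : band.start], blended.astype(left.dtype), right[:, overlap:]],
        axis=1,
    )

if __name__ == "__main__":
    # Two synthetic 128x256 RGB views with a 32-pixel overlap and random depth maps.
    rng = np.random.default_rng(0)
    left = rng.integers(0, 255, (128, 256, 3), dtype=np.uint8)
    right = rng.integers(0, 255, (128, 256, 3), dtype=np.uint8)
    d_left = rng.uniform(1.0, 20.0, (128, 256))
    d_right = rng.uniform(1.0, 20.0, (128, 256))
    pano = depth_aware_blend(left, right, d_left, d_right, overlap=32)
    print(pano.shape)  # (128, 480, 3)

In the paper's setting, the region selection and the depth-guided fusion are learned modules evaluated on the GV360 dataset; the fixed band and inverse-depth weights above are only placeholders to show where each module would act in the pipeline.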
Language
eng
URI
https://aurora.ajou.ac.kr/handle/2018.oak/37154
https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85209788020&origin=inward
DOI
https://doi.org/10.1145/3664647.3681208
Type
Conference

Related Researcher

Cho, Hyunsouk (조현석)
Department of Software and Computer Engineering

File Download

  • There are no files associated with this item.