Citation Export
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Sooho | - |
dc.contributor.author | Hong, Soyeon | - |
dc.contributor.author | Park, Kyungsoo | - |
dc.contributor.author | Cho, Hyunsouk | - |
dc.contributor.author | Sohn, Kyung Ah | - |
dc.date.issued | 2024-10-28 | - |
dc.identifier.uri | https://aurora.ajou.ac.kr/handle/2018.oak/37154 | - |
dc.identifier.uri | https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85209788020&origin=inward | - |
dc.description.abstract | Omnidirectional vision systems provide a 360-degree panoramic view, enabling full environmental awareness in various fields, such as Advanced Driver Assistance Systems (ADAS) and Virtual Reality (VR). Existing omnidirectional stitching methods rely on a single specialized 360-degree camera. However, due to hardware limitations such as high mounting heights and blind spots, adapting these methods to vehicles of varying sizes and geometries is challenging. These challenges include limited generalizability due to the reliance on predefined stitching regions for fixed camera arrays, performance degradation from distance parallax leading to large depth differences, and the absence of suitable datasets with ground truth for multi-camera omnidirectional systems. To overcome these challenges, we propose a novel omnidirectional stitching framework and a publicly available dataset tailored for varying distance scenarios with multiple cameras. The framework, referred to as OmniStitch, consists of a Stitching Region Maximization (SRM) module for automatic adaptation to different vehicles with multiple cameras and a Depth-Aware Stitching (DAS) module to handle depth differences caused by distance parallax between cameras. In addition, we create and release an omnidirectional stitching dataset, called GV360, which provides ground truth images that maintain the perspective of the 360-degree FOV, designed explicitly for vehicle-agnostic systems. Extensive evaluations on this dataset demonstrate that our framework outperforms state-of-the-art stitching models, especially in handling varying distance parallax. The proposed dataset and code are publicly available at https://github.com/tngh5004/Omnistitch. | - |
dc.language.iso | eng | - |
dc.publisher | Association for Computing Machinery, Inc | - |
dc.subject.mesh | Advanced driver assistances | - |
dc.subject.mesh | Environmental awareness | - |
dc.subject.mesh | Ground truth | - |
dc.subject.mesh | Image stitching | - |
dc.subject.mesh | Multiple cameras | - |
dc.subject.mesh | Omni-directional view | - |
dc.subject.mesh | Omni-directional vision | - |
dc.subject.mesh | Omnidirectional view dataset | - |
dc.subject.mesh | Omnidirectional vision system | - |
dc.subject.mesh | Panoramic views | - |
dc.title | OmniStitch: Depth-Aware Stitching Framework for Omnidirectional Vision with Multiple Cameras | - |
dc.type | Conference | - |
dc.citation.conferenceDate | 2024.10.28. ~ 2024.11.1. | - |
dc.citation.conferenceName | 32nd ACM International Conference on Multimedia, MM 2024 | - |
dc.citation.edition | MM 2024 - Proceedings of the 32nd ACM International Conference on Multimedia | - |
dc.citation.endPage | 10219 | - |
dc.citation.startPage | 10210 | - |
dc.citation.title | MM 2024 - Proceedings of the 32nd ACM International Conference on Multimedia | - |
dc.identifier.bibliographicCitation | MM 2024 - Proceedings of the 32nd ACM International Conference on Multimedia, pp.10210-10219 | - |
dc.identifier.doi | 10.1145/3664647.3681208 | - |
dc.identifier.scopusid | 2-s2.0-85209788020 | - |
dc.subject.keyword | image stitching | - |
dc.subject.keyword | omnidirectional view dataset | - |
dc.subject.keyword | omnidirectional vision | - |
dc.type.other | Conference Paper | - |
dc.description.isoa | false | - |
dc.subject.subarea | Artificial Intelligence | - |
dc.subject.subarea | Computer Graphics and Computer-Aided Design | - |
dc.subject.subarea | Human-Computer Interaction | - |
dc.subject.subarea | Software | - |
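For readers skimming the record, the minimal Python sketch below illustrates how a two-stage pipeline of the kind named in the abstract (a Stitching Region Maximization stage followed by a Depth-Aware Stitching stage) could be organized. Every function name, signature, and internal step is an assumption made purely for illustration; the authors' actual implementation is at the GitHub link given in the abstract.

```python
import numpy as np

# Illustrative skeleton only: SRM selects overlapping regions between adjacent
# cameras, DAS fuses them. Names and internals are hypothetical and do not
# reproduce the OmniStitch implementation.

def stitching_region_maximization(images):
    """Hypothetical SRM step: pair each camera frame with its ring neighbour,
    standing in for automatic adaptation to different camera layouts."""
    return [(images[i], images[(i + 1) % len(images)]) for i in range(len(images))]

def depth_aware_stitching(image_pairs):
    """Hypothetical DAS step: a real module would warp by estimated depth to
    compensate for distance parallax; this placeholder just averages each pair."""
    return [((a.astype(np.float32) + b.astype(np.float32)) / 2).astype(np.uint8)
            for a, b in image_pairs]

def omnidirectional_stitch(images):
    """End-to-end sketch: SRM to select regions, then DAS to fuse them,
    followed by a naive horizontal concatenation into a panorama."""
    pairs = stitching_region_maximization(images)
    panels = depth_aware_stitching(pairs)
    return np.concatenate(panels, axis=1)

if __name__ == "__main__":
    # Toy input: six random frames standing in for a multi-camera vehicle rig.
    cams = [np.random.randint(0, 255, (64, 96, 3), dtype=np.uint8) for _ in range(6)]
    print(omnidirectional_stitch(cams).shape)  # (64, 576, 3)
```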