Ajou University repository

3D Vehicle Trajectory Extraction Using DCNN in an Overlapping Multi-Camera Crossroad Scene
Citations (SCOPUS): 2

Publication Year
2021-12-01
Publisher
MDPI
Citation
Sensors, Vol.21
Keyword
3D bounding box estimation; 3D trajectory extraction; Camera calibration; Multi-object tracking; Overlapping multi-camera crossroad scene
Mesh Keyword
3-D trajectory; 3D bounding box estimation; 3D trajectory extraction; Bounding-box; Camera calibration; Multi-cameras; Multi-object tracking; Overlapping multi-camera crossroad scene; Trajectory extraction; Vehicle trajectories
All Science Classification Codes (ASJC)
Analytical Chemistry; Information Systems; Atomic and Molecular Physics, and Optics; Biochemistry; Instrumentation; Electrical and Electronic Engineering
Abstract
The 3D vehicle trajectory in complex traffic conditions such as crossroads and heavy traffic is practically very useful in autonomous driving. In order to accurately extract the 3D vehicle trajectory from a perspective camera at a crossroad, where vehicle headings span the full 360-degree range, several problems must be solved: the narrow visual angle of a single-camera scene, vehicle occlusion under a low camera perspective, and the lack of physical vehicle information. In this paper, we propose a method for estimating the 3D bounding boxes of vehicles and extracting their trajectories using a deep convolutional neural network (DCNN) in an overlapping multi-camera crossroad scene. First, traffic data were collected using overlapping multi-cameras to obtain a wide range of trajectories around the crossroad. Then, 3D bounding boxes of vehicles were estimated and tracked in each single-camera scene through DCNN models (YOLOv4, multi-branch CNN) combined with camera calibration. Using this information, the 3D vehicle trajectory could be extracted on the ground plane of the crossroad by combining the results from the overlapping cameras with a homography matrix. Finally, in experiments, the errors of extracted trajectories were corrected through simple linear interpolation and regression, and the accuracy of the proposed method was verified by calculating the difference from ground-truth data. Compared with other previously reported methods, our approach is shown to be more accurate and more practical.
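The ground-plane mapping step described in the abstract can be illustrated with a small sketch: each camera's image point (here, the bottom-center of a vehicle's 3D bounding box) is projected onto the crossroad ground plane via a 3x3 homography, and estimates of the same vehicle from overlapping cameras are fused. The homography values, point coordinates, and the averaging fusion below are illustrative assumptions, not the paper's actual calibration or correction method (the paper corrects residual errors with linear interpolation and regression).

```python
import numpy as np

def to_ground_plane(H, pt):
    # Apply a 3x3 homography H (image -> ground plane) to an image
    # point (u, v) in homogeneous coordinates, then de-homogenize.
    u, v = pt
    x, y, w = H @ np.array([u, v, 1.0])
    return np.array([x / w, y / w])

def fuse_overlap(points):
    # Naive fusion of one vehicle's ground-plane estimates from
    # overlapping cameras: average them (illustrative only).
    return np.mean(points, axis=0)

# Hypothetical calibration results; real homographies would come from
# the camera-calibration step described in the abstract.
H_cam1 = np.array([[0.05, 0.00, -10.0],
                   [0.00, 0.05,  -5.0],
                   [0.00, 0.00,   1.0]])
H_cam2 = np.array([[0.00, -0.05, 10.0],
                   [0.05,  0.00, -5.0],
                   [0.00,  0.00,  1.0]])

# Bottom-center of a vehicle's 3D bounding box as seen by each camera.
p1 = to_ground_plane(H_cam1, (220, 100))   # -> [1.0, 0.0]
p2 = to_ground_plane(H_cam2, (100, 180))   # -> [1.0, 0.0]
ground_pt = fuse_overlap([p1, p2])
print(ground_pt)                            # -> [1. 0.]
```

Both cameras observe the same vehicle, so their projections should land near the same ground-plane coordinate; large disagreement between overlapping views signals a calibration or tracking error.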
ISSN
1424-8220
Language
eng
URI
https://dspace.ajou.ac.kr/dev/handle/2018.oak/32393
DOI
https://doi.org/10.3390/s21237879
Type
Article
Funding
This research was supported by the Unmanned Vehicles Core Technology Research and Development Program through the National Research Foundation of Korea (NRF) and Unmanned Vehicle Advanced Research Center (UVARC) funded by the Ministry of Science and ICT, the Republic of Korea (Grant Number: 2020M3C1C1A01084900).
Related Researcher

Kwon, Yong Jin (권용진)
Department of Industrial Engineering

File Download

  • There are no files associated with this item.