Ajou University repository

Sensor Fusion for Aircraft Detection at Airport Ramps Using Conditional Random Fields
Citations (SCOPUS)
8


Publication Year
2022-10-01
Publisher
Institute of Electrical and Electronics Engineers Inc.
Citation
IEEE Transactions on Intelligent Transportation Systems, Vol.23, pp.18100-18112
Keyword
Aircraft detection; point cloud segmentation; self-driving at airport ramps; sensor fusion
Mesh Keyword
Camera sensor; LIDAR sensors; Point cloud compression; Point cloud segmentation; Point-clouds; Random fields; Self drivings; Self-driving at airport ramp; Sensor fusion
All Science Classification Codes (ASJC)
Automotive Engineering; Mechanical Engineering; Computer Science Applications
Abstract
Self-driving baggage tractors on airport ramps, or aprons, enable better airport operation procedures and support the expansion of the aviation market. Airport ramps have unique mobility requirements in terms of layout, population, demand, and traffic patterns. Avoiding moving aircraft on an airport apron is a top priority because of critical security and safety issues. Existing aircraft detection approaches use remote-sensing images or surveillance cameras; however, these are not compatible with the sensors mounted on low-height equipment at airport ramps. Similarly, studies of self-driving on public roads have not considered detecting movable objects of such massive size and with concave contours. Camera sensors cannot accurately measure the distance to concave contours, whereas a lidar sensor cannot easily cluster or classify an object within point cloud data. In this paper, we present the fusion of camera and lidar sensors for aircraft and object detection at airport ramps. We run detection on the lidar and camera sensors in parallel and then integrate both detection results so that each compensates for the other's weaknesses. The proposed energy optimization model, which adapts a conditional random field, handles the over- and under-segmentation of point cloud objects caused by the sparse point cloud that an aircraft generates. Our algorithm achieves a 31.1% improvement in tracking and a 5.5% improvement in classification over other fusion algorithms on a dataset acquired at the Cincinnati/Northern Kentucky International Airport.
Language
eng
URI
https://dspace.ajou.ac.kr/dev/handle/2018.oak/32613
DOI
https://doi.org/10.1109/tits.2022.3157809
Fulltext

Type
Article

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Lee, Soo Mok (이수목)
Department of Mobility Engineering

File Download

  • There are no files associated with this item.