Despite various 3D point cloud place recognition studies leveraging PointNet, sparse convolution, and graph-based methods to enhance 3D point cloud analysis, limitations persist in fully capturing expressive descriptors and in ensuring domain-invariant performance across diverse environments. In this paper, we introduce a cross-source robust architecture for place recognition that incorporates a 3D structural block and a double descriptor network (SBDD-Net). The 3D structural block significantly enhances the comprehension of spatial structural features. By integrating structural convolution and reverse density point pooling, we achieve superior feature extraction. Structural features provide cross-domain robustness because object structure does not change across platforms or sensors. We aggregate these features in two distinct ways to create a key feature descriptor and a global feature descriptor, each representing the same submap differently. These descriptors, trained with our proposed degree and Euclidean losses, enhance place recognition across changing environments and dataset domains. Evaluation on four cross-source datasets demonstrates the domain-invariant performance of the proposed method in place recognition.
This work was supported by the Korea National Police Agency (KNPA) under the project "Development of autonomous driving patrol service for active prevention and response to traffic accidents" (RS-2024-00403630), by the Institute of Information & Communications Technology Planning & Evaluation (IITP) under the Artificial Intelligence Convergence Innovation Human Resources Development grant (IITP-2023-No.RS-2023-00255968) funded by the Korea government (MSIT), and by the BK21 FOUR program of the National Research Foundation of Korea funded by the Ministry of Education (NRF5199991014091).