
Artifacts Mapping: Multi-Modal Semantic Mapping for Object Detection and 3D Localization

EasyChair Preprint no. 10724

8 pages · Date: August 15, 2023

Abstract

Geometric navigation is nowadays a well-established field of robotics, and the research focus is shifting towards higher-level scene understanding, such as Semantic Mapping. When a robot needs to interact with its environment, it must be able to comprehend the contextual information of its surroundings. This work focuses on classifying and localizing objects within a map that is either under construction (SLAM) or already built. To further explore this direction, we propose a framework that can autonomously map predefined objects in a known environment using a multi-modal sensor fusion approach (combining RGB and depth data from an RGB-D camera with a lidar). The framework consists of three key elements: understanding the environment through RGB data, estimating depth through multi-modal sensor fusion, and managing artifacts (i.e., filtering and stabilizing measurements). The experiments show that the proposed framework can accurately detect 98% of the objects in the real sample environment without post-processing, whereas 85% and 80% of the objects were mapped using the single-camera and single-lidar setups, respectively. The comparison with single-sensor (camera or lidar) experiments shows that sensor fusion allows the robot to accurately detect both near and far obstacles, which would have been noisy or imprecise in a purely visual or laser-based approach.
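
As an illustrative sketch (not taken from the preprint), the snippet below shows one way the 3D localization step could be realized: the center of a 2D detection from the RGB stream is back-projected through a registered depth image, which could itself come from the fused RGB-D and lidar data, and expressed in the map frame. The camera intrinsics, the localize_detection helper, and the median-depth filtering are hypothetical stand-ins, not the authors' actual pipeline.

import numpy as np

# Hypothetical pinhole intrinsics; the preprint does not report these values.
FX, FY, CX, CY = 615.0, 615.0, 320.0, 240.0

def localize_detection(bbox, depth_image, T_map_camera):
    """Back-project the center of a 2D detection to a 3D point in the map frame.

    bbox: (u_min, v_min, u_max, v_max) pixel bounding box from the RGB detector.
    depth_image: registered depth image in meters (e.g., from RGB-D/lidar fusion).
    T_map_camera: 4x4 homogeneous transform from the camera frame to the map frame.
    """
    u = (bbox[0] + bbox[2]) // 2
    v = (bbox[1] + bbox[3]) // 2

    # Median depth inside the box, a crude stand-in for the artifact
    # filtering/stabilization described in the abstract.
    patch = depth_image[bbox[1]:bbox[3], bbox[0]:bbox[2]]
    z = float(np.median(patch[patch > 0.0]))

    # Pinhole back-projection into the camera frame.
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    p_camera = np.array([x, y, z, 1.0])

    # Express the point in the map frame for insertion into the semantic map.
    return (T_map_camera @ p_camera)[:3]

In the full system, such map-frame points would then be filtered and stabilized over successive observations before being added to the semantic map, as outlined in the abstract.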

Keyphrases: multi-modal perception, semantic mapping, sensor fusion

BibTeX entry
BibTeX does not have a dedicated entry type for preprints; the following @Booklet entry is a workaround for producing the correct reference:
@Booklet{EasyChair:10724,
  author = {Federico Rollo and Gennaro Raiola and Andrea Zunino and Nikolaos Tsagarakis and Arash Ajoudani},
  title = {Artifacts Mapping: Multi-Modal Semantic Mapping for Object Detection and 3D Localization},
  howpublished = {EasyChair Preprint no. 10724},
  year = {2023}
}