
Layered Integration of Visual Foundation Models for Enhanced Robot Manipulation and Motion Planning

EasyChair Preprint no. 13187

10 pages · Date: May 6, 2024

Abstract

Robotics research has advanced significantly in recent years, particularly in visual perception, manipulation, and motion planning. This paper proposes Layered Integration of Visual Foundation Models (LIVFM), a novel approach for enhancing robot manipulation and motion planning. LIVFM integrates multiple visual perception models in a layered fashion, leveraging the strengths of each model to compensate for the limitations of the others. By combining the models' outputs, robots gain a richer understanding of their environment, which leads to improved manipulation capabilities and more robust motion planning. The paper presents the theoretical framework of LIVFM, discusses its implementation, and reports experimental results demonstrating its effectiveness across a range of robotic scenarios.
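The abstract describes the architecture only at a high level. As a rough illustration of what a layered integration of perception models could look like in code, the Python sketch below chains hypothetical stages (detection, segmentation, grasp proposal) over a shared scene representation; every name here (Scene, Layer, detect, segment, plan_grasp) is an illustrative placeholder, not the authors' implementation.

    from dataclasses import dataclass, field
    from typing import Callable

    # Hypothetical scene representation accumulated across layers.
    @dataclass
    class Scene:
        image: list                              # raw sensor input (placeholder)
        annotations: dict = field(default_factory=dict)

    # A "layer" wraps one visual foundation model: it reads the scene,
    # including annotations left by earlier layers, and adds its own output.
    @dataclass
    class Layer:
        name: str
        model: Callable[[Scene], dict]

        def apply(self, scene: Scene) -> Scene:
            scene.annotations[self.name] = self.model(scene)
            return scene

    def run_pipeline(layers: list[Layer], scene: Scene) -> Scene:
        """Apply each layer in order so later models can build on earlier outputs."""
        for layer in layers:
            scene = layer.apply(scene)
        return scene

    # Stand-in models: a real system would call, e.g., an open-vocabulary
    # detector, a segmentation model, and a grasp-pose estimator here.
    def detect(scene):
        return {"objects": [{"label": "mug", "box": (10, 20, 50, 60)}]}

    def segment(scene):
        objects = scene.annotations["detector"]["objects"]
        return {"masks": {o["label"]: "mask" for o in objects}}

    def plan_grasp(scene):
        return {"grasp": {"target": "mug", "pose": (0.3, 0.1, 0.05)}}

    if __name__ == "__main__":
        pipeline = [Layer("detector", detect),
                    Layer("segmenter", segment),
                    Layer("grasp", plan_grasp)]
        result = run_pipeline(pipeline, Scene(image=[]))
        print(result.annotations["grasp"])

In this sketch each layer consumes the annotations left by earlier layers, which is one plausible reading of "leveraging the strengths of each model"; the paper itself may fuse model outputs differently.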

Keyphrases: enhanced performance, integration, layered models, manipulation, motion planning, robotics, visual perception

BibTeX entry
BibTeX does not have a suitable entry type for preprints, so the following is a workaround that produces the correct reference:
@Booklet{EasyChair:13187,
  author = {Anthony Lambert and Wahaj Ahmed},
  title = {Layered Integration of Visual Foundation Models for Enhanced Robot Manipulation and Motion Planning},
  howpublished = {EasyChair Preprint no. 13187},
  year = {EasyChair, 2024}}