
[arXiv '24] Real-Time 3D Semantic Scene Perception for Egocentric Robots with Binocular Vision


mkhangg/semantic_scene_perception


Real-Time 3D Semantic Scene Perception for Egocentric Robots with Binocular Vision

Table of Contents
  1. Authors
  2. Abstract
  3. Prerequisites
  4. Pipeline Overview
  5. Egocentric Segmentation, Feature Matching, and Point Cloud Alignment
  6. Real-Time Deployment on Baxter Robot
  7. Citing

Authors

  1. Khang Nguyen
  2. Tuan Dang
  3. Manfred Huber

All authors are with Learning and Adaptive Robotics Laboratory, Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, TX 76013, USA.

Abstract

Perceiving a three-dimensional (3D) scene with multiple objects while moving indoors is essential for vision-based mobile cobots, especially for enhancing their manipulation tasks. In this work, we present an end-to-end pipeline with instance segmentation, feature matching, and point-set registration for egocentric robots with binocular vision, and demonstrate the robot's grasping capability through the proposed pipeline. First, we design an RGB image-based segmentation approach for single-view 3D semantic scene segmentation, leveraging common object classes in 2D datasets to encapsulate 3D points into point clouds of object instances through corresponding depth maps. Next, 3D correspondences of two consecutive segmented point clouds are extracted based on matched keypoints between objects of interest in RGB images from the prior step. In addition, to be aware of spatial changes in 3D feature distribution, we also weigh each 3D point pair based on the estimated distribution using kernel density estimation (KDE), which subsequently gives robustness with less central correspondences while solving for rigid transformations between point clouds. Finally, we test our proposed pipeline on the 7-DOF dual-arm Baxter robot with a mounted Intel RealSense D435i RGB-D camera. The result shows that our robot can segment objects of interest, register multiple views while moving, and grasp the target object. The demo is available at YouTube.


The GIF is played at 2x speed to keep the file size small.

Prerequisites

To install all requirements, please run the command below:

pip install -r requirements.txt

Please check the config/config.yaml file before running. The device variable in the file lets you select the processing unit (GPU or CPU) for segmentation model inference.
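For illustration, the relevant entry might look like the sketch below. Only the device key is documented in this README; the value names and any other keys in the actual file are assumptions:

```yaml
# config/config.yaml (sketch -- only 'device' is documented here)
device: cuda   # processing unit for segmentation inference: "cuda" or "cpu"
```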

Pipeline Overview

When the robot takes two (or more) views of a scene, the 3D semantic scene perception pipeline (1) performs egocentric segmentation to create point clouds of objects of interest, (2) extracts and matches corresponding features on masked RGB images to infer 3D correspondences via depth maps, (3) finds optimal transformations based on weighted 3D correspondences and reconstructs the 3D scene, and (4) returns the aligned point cloud of the multiple views with segmented objects.


Overview of the Pipeline when the Robot Moves in an Indoor Environment.

Egocentric Segmentation, Feature Matching, and Point Cloud Alignment

Egocentric Segmentation

The egocentric object segmentation process includes (a) segmenting instance masks on the RGB image using the YOLOv8n segmentation model, (b) obtaining and aggregating binary masks of the objects of interest, (c) aligning the corresponding depth image, (d) rectifying non-masked depth pixels on the aligned depth image with the obtained masks, and (e) creating point clouds of those objects.


Egocentric Object Segmentation Procedure.
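Steps (c)-(e) reduce to masking the aligned depth map and back-projecting the surviving pixels through the camera's pinhole intrinsics. A minimal NumPy sketch (the function name and intrinsics handling are illustrative assumptions, not the repository's code):

```python
import numpy as np

def masked_point_cloud(depth, mask, fx, fy, cx, cy):
    """Back-project masked depth pixels into a 3D point cloud.

    depth : (H, W) aligned depth map in meters
    mask  : (H, W) boolean instance mask from the segmentation model
    fx, fy, cx, cy : pinhole intrinsics of the RGB-D camera
    """
    # Keep only masked pixels that carry a valid (non-zero) depth reading.
    v, u = np.nonzero(mask & (depth > 0))
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)  # (N, 3) object point cloud
```

In the actual pipeline the masks come from YOLOv8n and the intrinsics from the RealSense D435i, with one point cloud produced per object instance.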

Feature Matching

The 3D correspondences matching process includes (a) extracting and matching keypoints between masked RGB images, (b) finding corresponding depth pixels on rectified depth images, and (c) mapping 3D correspondences between point clouds of object instances.


3D Correspondences Matching Procedure.
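Given keypoints matched between the two masked RGB images, steps (b)-(c) look up each keypoint's depth on the rectified depth maps and lift both endpoints of the match to 3D. A hedged sketch (function name and pinhole model are assumptions, not the repository's code):

```python
import numpy as np

def lift_matches_to_3d(kps1, kps2, depth1, depth2, K):
    """Map matched 2D keypoints to 3D correspondences via depth maps.

    kps1, kps2     : (N, 2) matched pixel coordinates (u, v) per image
    depth1, depth2 : rectified depth maps aligned to the RGB images
    K              : 3x3 camera intrinsic matrix
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]

    def backproject(kps, depth):
        u, v = kps[:, 0], kps[:, 1]
        z = depth[v.astype(int), u.astype(int)]
        return np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=1)

    p1, p2 = backproject(kps1, depth1), backproject(kps2, depth2)
    valid = (p1[:, 2] > 0) & (p2[:, 2] > 0)  # drop pairs with missing depth
    return p1[valid], p2[valid]
```

The resulting 3D point pairs are the correspondences that the alignment step weights and solves over.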

Point Cloud Alignment

The point cloud alignment process includes (a) estimating densities of the 3D correspondences dimension-wise along the x-axis (blue), y-axis (orange), and z-axis (yellow), (b) computing weights for the 3D correspondences, (c) solving for the optimal rigid transformation based on the 3D correspondences and their weights, and (d) aligning the point clouds (top) with re-colorization based on each object instance (bottom).


The Point Cloud Alignment Procedure.
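A sketch of steps (a)-(c), assuming dimension-wise Gaussian KDE (via scipy.stats.gaussian_kde) and a weighted Kabsch/SVD solver for the rigid transform. The exact weighting function is specified in the paper; the product-of-per-axis-densities weight below is an illustrative stand-in:

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_weights(points):
    """Weight each 3D correspondence from dimension-wise density estimates.

    Densities are estimated independently along x, y, and z, then multiplied
    and normalized. (The paper's exact weighting scheme may differ -- this
    is an illustrative choice.)
    """
    dens = np.ones(len(points))
    for axis in range(3):
        kde = gaussian_kde(points[:, axis])
        dens *= kde(points[:, axis])
    return dens / dens.sum()

def weighted_rigid_transform(P, Q, w):
    """Solve min over R, t of sum_i w_i ||R p_i + t - q_i||^2 (weighted Kabsch)."""
    w = w / w.sum()
    mu_p, mu_q = w @ P, w @ Q               # weighted centroids
    Pc, Qc = P - mu_p, Q - mu_q
    H = (Pc * w[:, None]).T @ Qc            # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_q - R @ mu_p
    return R, t
```

Applying the recovered R and t to the first segmented point cloud aligns the two views; repeating the pairwise alignment stitches together multiple views.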

Real-Time Deployment on Baxter Robot

We mount the Intel RealSense D435i RGB-D camera on the display of the Baxter robot. The Baxter robot first stands in one position and captures the scene from that view, then moves to a second position displaced by 20 degrees, captures the scene from the second view, and grasps one of the plastic cups. All source code for the deployment procedure is in the deployment/main_pc and deployment/base_pc folders.


Action Sequence of Baxter Robot and Result at Each Step.

Experiment setup with (top) the Baxter robot (a) observing the scene from its first view, (b) moving to and capturing the second view at a 20-degree displacement, (c) approaching the target objects, and (d) grasping one of them; and (bottom) the results at each step of our pipeline.

NOTE: For training SuperPoint with positional embedding using PyTorch, please refer to the superpoint_with_pos_embed/ folder. This is an incremental development based on Shao-Feng Zeng's GitHub repository. Give him a star if you use his code in your work!

Citing

@article{nguyen2024real,
  title={Real-time 3D Semantic Scene Perception for Egocentric Robots with Binocular Vision},
  author={Nguyen, Khang and Dang, Tuan and Huber, Manfred},
  journal={arXiv preprint arXiv:2402.11872},
  year={2024}
}
