Using vision and touch sensing to assess robotic grasp stability

Paper Authors: Zhuangzhuang Zhang, Zhenning Zhou, Haili Wang, Zhinan Zhang, Huang Huang, Qixin Cao

Key Details

Proposes an attention-guided cross-modality fusion model to assess grasp stability

Collects a large-scale visual-tactile dataset in simulation for training

Achieves over 10% higher accuracy than baseline methods

Enables minimum-force grasping in real robotic tests

Bridges the sim-to-real gap via domain randomization and adaptation techniques (see the randomization sketch after this list)
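
As a rough illustration of the domain-randomization step, the sketch below samples a fresh set of simulation parameters before each simulated grasp. The parameter names and ranges (object mass, friction, tactile gain, camera jitter, lighting) are assumptions for illustration only and are not taken from the paper.

```python
# Minimal sketch of per-episode domain randomization.
# All parameter names and ranges are illustrative assumptions, not the paper's values.
import random
from dataclasses import dataclass


@dataclass
class SimEpisodeConfig:
    object_mass_kg: float    # randomized object dynamics
    friction_coeff: float    # contact friction between gripper and object
    tactile_gain: float      # scales the simulated tactile readings
    camera_jitter_m: float   # translation noise applied to the camera pose
    light_intensity: float   # scene lighting for the rendered RGB image


def sample_episode_config(rng: random.Random) -> SimEpisodeConfig:
    """Draw one randomized configuration before each simulated grasp."""
    return SimEpisodeConfig(
        object_mass_kg=rng.uniform(0.05, 0.8),
        friction_coeff=rng.uniform(0.3, 1.2),
        tactile_gain=rng.uniform(0.8, 1.2),
        camera_jitter_m=rng.uniform(0.0, 0.01),
        light_intensity=rng.uniform(0.5, 1.5),
    )


if __name__ == "__main__":
    rng = random.Random(0)
    for episode in range(3):
        print(f"episode {episode}: {sample_episode_config(rng)}")
```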

AI-Generated Summary

This paper proposes a deep learning approach that integrates visual and tactile data to evaluate grasp stability for robotic manipulation tasks. The model utilizes attention mechanisms to enhance unimodal features and capture interactions between vision and touch. Extensive experiments demonstrate the model's superior performance over baselines and its ability to enable delicate grasping on real hardware after training in simulation.
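
To make the fusion idea concrete, here is a minimal PyTorch sketch of attention-guided cross-modality fusion: each modality is encoded separately, cross-attention lets each modality attend to the other, and the fused features are classified as stable or unstable. The encoders, dimensions, and layer layout are assumptions for illustration, not the paper's actual architecture.

```python
# Illustrative cross-modal attention fusion for grasp-stability classification.
# Encoders, dimensions, and fusion layout are assumptions, not the paper's design.
import torch
import torch.nn as nn


class CrossModalFusion(nn.Module):
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        # Toy unimodal encoders: a small CNN for the RGB image, an MLP for tactile maps.
        self.vision_enc = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim),
        )
        self.tactile_enc = nn.Sequential(
            nn.Flatten(), nn.Linear(2 * 16 * 16, dim), nn.ReLU(), nn.Linear(dim, dim),
        )
        # Cross-attention in both directions between the two modalities.
        self.vision_queries_touch = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.touch_queries_vision = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 2),  # unstable / stable
        )

    def forward(self, rgb: torch.Tensor, tactile: torch.Tensor) -> torch.Tensor:
        v = self.vision_enc(rgb).unsqueeze(1)       # (B, 1, dim)
        t = self.tactile_enc(tactile).unsqueeze(1)  # (B, 1, dim)
        v_att, _ = self.vision_queries_touch(v, t, t)  # vision attends to touch
        t_att, _ = self.touch_queries_vision(t, v, v)  # touch attends to vision
        fused = torch.cat([v_att.squeeze(1), t_att.squeeze(1)], dim=-1)
        return self.classifier(fused)               # logits over {unstable, stable}


if __name__ == "__main__":
    model = CrossModalFusion()
    rgb = torch.randn(4, 3, 64, 64)       # batch of RGB crops
    tactile = torch.randn(4, 2, 16, 16)   # two tactile sensor maps per grasp
    print(model(rgb, tactile).shape)      # torch.Size([4, 2])
```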
