Optimal Motion Prediction for Vision-Free Human-to-Robot Handovers

Submitted to the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2025

  • 1 LAAS-CNRS, Université de Toulouse, CNRS, Toulouse
  • 2 School of Computing, National University of Singapore, Singapore
  • 3 Machines in Motion Laboratory, New York University, USA
  • 4 Image and Pervasive Access Laboratory (IPAL), CNRS UMI 2955, Singapore
  • 5 Artificial and Natural Intelligence Toulouse Institute (ANITI), Toulouse
  • 6 Smart Systems Institute, National University of Singapore, Singapore

Abstract

Seamless human-robot handovers require precision, timing, and safety. When the human has no visual feedback, the robot must rely on accurately estimating and predicting the human's motion. In this work, we propose a real-time human motion estimation and prediction framework for vision-free human-to-robot handovers, built on a planar biomechanical model and cost functions drawn from the motor control literature. Using inverse reinforcement learning, the optimal weighting of these cost functions is determined iteratively by solving a direct optimal control problem for reaching tasks. An affordable, markerless human pose estimation pipeline estimates and predicts the human arm motion in real time. These predictions are integrated into a model predictive controller for a seven-degree-of-freedom robot manipulator, which successfully intercepted participants' hands in 88.6% of trials, 0.63 s before they reached their intended final hand pose. Experimental validation with blindfolded participants yielded a predicted joint-angle error of 8.7 deg during handover trials. The proposed framework offers a promising solution for safe and effective human-to-robot handovers, particularly for applications involving visually impaired users.
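The abstract's core idea, recovering cost-function weights by repeatedly solving a direct optimal control problem until the predicted reach matches observed motion, can be illustrated with a deliberately simplified sketch. This is not the paper's biomechanical model or IRL algorithm: it uses a hypothetical 1-D double-integrator "arm", a finite-horizon LQR solver as the direct optimal control step, and an exhaustive search over a single weight ratio (tracking cost vs. effort cost) in place of an iterative IRL update.

```python
import numpy as np

DT, T = 0.02, 60
A = np.array([[1.0, DT], [0.0, 1.0]])   # 1-D double-integrator "arm" (toy stand-in)
B = np.array([[0.0], [DT]])

def solve_reach(w_track, w_effort):
    """Direct optimal control step: finite-horizon LQR reach toward the target (origin)."""
    Q = np.diag([w_track, 0.0])           # running cost on distance to target
    R = np.array([[w_effort]])            # running cost on effort
    P = np.diag([100.0 * w_track, 1.0])   # terminal cost: end near the target, nearly at rest
    gains = []
    for _ in range(T):                    # backward Riccati recursion
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    gains.reverse()                       # time-ordered feedback gains
    z = np.array([[-0.3], [0.0]])         # start 0.3 m from the target
    traj = [z.copy()]
    for K in gains:                       # forward rollout under the optimal policy
        z = (A - B @ K) @ z
        traj.append(z.copy())
    return np.hstack(traj)

# "Demonstrated" reach, generated with a true (here hidden) weight ratio of 8.
demo = solve_reach(w_track=8.0, w_effort=1.0)

# Inverse step: pick the weight ratio whose optimal rollout best matches the demo.
candidates = np.logspace(-1, 2, 40)
errors = [np.mean((solve_reach(r, 1.0) - demo) ** 2) for r in candidates]
best = candidates[int(np.argmin(errors))]
print(f"recovered w_track/w_effort ratio: {best:.2f}")
```

In the paper's setting, the inner solve would be a direct optimal control problem over the planar biomechanical arm with several motor-control cost terms, and the outer loop an IRL weight update rather than a grid search, but the bi-level structure is the same.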