Seamless human-robot handovers require precision, timing, and safety. When the human receives no visual feedback, the robot must accurately estimate and predict the human's motion. This work proposes a real-time framework for human motion estimation and prediction in vision-free human-to-robot handovers, built on a planar biomechanical model and cost functions drawn from the motor control literature. Inverse reinforcement learning iteratively determines the optimal weighting of these cost functions by repeatedly solving a direct optimal control problem for reaching tasks. An affordable, markerless human pose estimation pipeline estimates and predicts the human arm motion in real time. These predictions were integrated into a model predictive controller for a seven-degree-of-freedom robot manipulator, which successfully intercepted participants' hands in 88.6% of trials, 0.63 s before they reached their intended final hand pose. Experimental validation with blindfolded participants yielded a predicted joint-angle error of 8.7° during handover trials. The proposed framework offers a promising solution for safe and effective human-to-robot handovers, particularly for applications involving visually impaired users.
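The weight-tuning idea described above has a bilevel structure: an outer loop searches over cost-function weights, and each candidate weighting is evaluated by solving an inner direct optimal control problem and comparing the resulting trajectory against the observed human reach. A minimal sketch of that structure on a toy 1-D reaching task is shown below; the double-integrator dynamics, the effort and smoothness cost terms, and the grid search are illustrative assumptions, not the paper's actual biomechanical model or learning procedure.

```python
import numpy as np

def solve_reach_ocp(w_effort, w_smooth, n=30, dt=0.05, target=0.3,
                    x0=0.0, v0=0.0, p_term=1e4):
    """Inner direct OCP on a toy 1-D double integrator (illustrative model).

    The cost is quadratic in the control sequence u, so the discretized
    problem reduces to a linear least-squares solve.
    """
    rows, rhs = [], []
    # effort term: sqrt(w_effort) * u_k
    for k in range(n):
        r = np.zeros(n); r[k] = np.sqrt(w_effort)
        rows.append(r); rhs.append(0.0)
    # smoothness term: sqrt(w_smooth) * (u_{k+1} - u_k)
    for k in range(n - 1):
        r = np.zeros(n); r[k] = -np.sqrt(w_smooth); r[k + 1] = np.sqrt(w_smooth)
        rows.append(r); rhs.append(0.0)
    # soft terminal position: x_n = x0 + n*dt*v0 + dt^2 * sum_j (n-1-j)*u_j ~ target
    rows.append(np.sqrt(p_term) * dt**2 * (n - 1 - np.arange(n)))
    rhs.append(np.sqrt(p_term) * (target - x0 - n * dt * v0))
    # soft terminal velocity: v_n = v0 + dt * sum_j u_j ~ 0
    rows.append(np.sqrt(p_term) * dt * np.ones(n))
    rhs.append(-np.sqrt(p_term) * v0)
    u, *_ = np.linalg.lstsq(np.vstack(rows), np.array(rhs), rcond=None)
    # forward-simulate the optimal controls to get the position trajectory
    x, v, xs = x0, v0, []
    for k in range(n):
        x += dt * v
        v += dt * u[k]
        xs.append(x)
    return np.array(xs)

def fit_weights(observed, grid):
    """Outer IRL loop: pick the weighting whose OCP solution best
    reproduces the observed reaching trajectory."""
    errs = [np.sum((solve_reach_ocp(1.0, w) - observed) ** 2) for w in grid]
    return grid[int(np.argmin(errs))]

# "Observed" reach generated with a hidden smoothness weight of 0.5,
# then recovered by the grid search.
observed = solve_reach_ocp(1.0, 0.5)
grid = [0.05, 0.1, 0.5, 1.0, 2.0]
print(fit_weights(observed, grid))  # recovers 0.5
```

In practice the outer search would use a gradient-free or bilevel optimizer rather than a coarse grid, and the inner problem would involve the planar arm dynamics rather than a point mass, but the nesting of trajectory optimization inside weight fitting is the same.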