Brain-machine interfaces (BMIs) are increasingly used in rehabilitation research to improve the quality of life of clinical populations. Current BMI technology allows the position of robotic hands in space to be controlled with high accuracy. We have previously shown that the dexterous movements of fingers during grasping can be decoded from noninvasively recorded electroencephalographic (EEG) activity. In clinical subjects with impaired hand function, however, the absence of overt movement makes it impossible to construct decoder models directly by simultaneously recording brain activity and kinematics. The mirror neuron system is activated in a similar fashion both during overt movement and during the observation of movements performed by other agents. Here, we investigate action observation as a strategy for calibrating grasping decoders in human subjects. Subjects observed a robotic hand performing grasping movements, and decoder models were calibrated from the subjects' EEG activity and the kinematics of the robotic hand. Decoding accuracy was tested on unseen data in an 8-fold cross-validation scheme, quantified as the correlation coefficient between the predicted and actual trajectories. High decoding accuracies were obtained (r = 0.70 ± 0.07), demonstrating the feasibility of action observation as a calibration technique for decoding grasping movements.
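The evaluation procedure described above (fit a decoder mapping EEG features to kinematics, then score held-out folds by the correlation between predicted and actual trajectories) can be sketched as follows. This is a minimal illustration, not the authors' pipeline: it uses synthetic data in place of recorded EEG and robotic-hand kinematics, a plain least-squares linear decoder, and illustrative names (`fit_linear_decoder`, `kfold_correlation`) that are assumptions, not part of the original work.

```python
# Hypothetical sketch of decoder calibration and 8-fold cross-validated
# evaluation. Synthetic data stands in for EEG features and kinematics.
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_channels = 800, 16
w_true = rng.normal(size=n_channels)                 # surrogate "true" mapping
eeg = rng.normal(size=(n_samples, n_channels))       # surrogate EEG features
kinematics = eeg @ w_true + 0.5 * rng.normal(size=n_samples)  # noisy trajectory

def fit_linear_decoder(X, y):
    """Least-squares decoder weights (with intercept term)."""
    Xb = np.column_stack([X, np.ones(len(X))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict(w, X):
    """Apply the linear decoder to new feature data."""
    return np.column_stack([X, np.ones(len(X))]) @ w

def kfold_correlation(X, y, k=8):
    """Mean Pearson r between predicted and actual trajectories over k folds."""
    folds = np.array_split(np.arange(len(X)), k)
    rs = []
    for test_idx in folds:
        train_idx = np.setdiff1d(np.arange(len(X)), test_idx)
        w = fit_linear_decoder(X[train_idx], y[train_idx])
        y_hat = predict(w, X[test_idx])
        rs.append(np.corrcoef(y_hat, y[test_idx])[0, 1])
    return float(np.mean(rs))

r = kfold_correlation(eeg, kinematics, k=8)
print(f"mean cross-validated r = {r:.2f}")
```

With this synthetic signal-to-noise ratio the decoder recovers the trajectory well; on real EEG, feature extraction and regularization would matter far more than in this toy setting.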