Hand orientation prediction using Random Forests: preliminary results
Daniele Pianu;Antonio Chimienti
2016
Abstract
The recent availability of so-called depth cameras on the consumer electronics market has spawned a great deal of research in the field of human motion tracking using computer vision techniques. Some of this research has also found its way into commercial products: notable examples are the Microsoft Kinect body tracking system and the Intel RealSense hand tracker. As far as hand motion tracking is concerned, most scientific works provide a solution for full-DOF pose reconstruction, that is, all kinematic information about the hand motion is recovered from the depth sensor image stream. This includes the finger rotations as well as the global 3D position and orientation. Despite the complete tracking of hand motion, the validation of accuracy is generally limited to the distance between the real and the predicted positions of specific salient points of the hand (usually the fingertips). As a result, no exact information is available about the accuracy in estimating the hand orientation. For some applications, a precise measurement of hand rotation is needed. For instance, in the context of telemonitoring and telerehabilitation of Parkinson's disease, an exercise of the Unified Parkinson's Disease Rating Scale (UPDRS) consists exactly in the execution of hand rotations in front of the doctor. Precisely estimating how the patient rotates the hand throughout the exercise provides valuable information to the doctor or therapist about the state of the disease. In this case, no information is needed about the rest of the hand (e.g. finger abduction/adduction or flexion/extension). Furthermore, an accurate tracker for hand rotation only might be leveraged to simplify the full-DOF tracking process, for example to initialize more complex and orientation-specific hand trackers in a hierarchical fashion. In this report the author presents his preliminary work on the real-time estimation of hand rotation from depth maps.
The accuracy results achieved by resorting to both state-of-the-art approaches and a new formulation of the problem are presented. The work focuses on single-frame estimation, i.e. no temporal information is used.
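To make the problem setting concrete, the single-frame formulation can be sketched as a regression from per-frame depth features to a rotation angle with a random forest. The sketch below is purely illustrative and is not the report's actual pipeline: the depth-difference features, the synthetic planar depth maps, and all parameters are assumptions chosen only to show the shape of such a regressor.

```python
# Hypothetical sketch (not the report's method): regress a hand-rotation
# angle from a single depth frame with a random forest. Features are
# Shotton-style depth-difference pairs; the data is fully synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def depth_features(depth, offsets):
    """Depth-difference features sampled around the image centre."""
    h, w = depth.shape
    cy, cx = h // 2, w // 2
    feats = []
    for (dy1, dx1), (dy2, dx2) in offsets:
        p1 = depth[np.clip(cy + dy1, 0, h - 1), np.clip(cx + dx1, 0, w - 1)]
        p2 = depth[np.clip(cy + dy2, 0, h - 1), np.clip(cx + dx2, 0, w - 1)]
        feats.append(p1 - p2)
    return np.array(feats)

# Random pairs of pixel offsets, fixed once and reused for every frame.
offsets = [(tuple(rng.integers(-8, 9, 2)), tuple(rng.integers(-8, 9, 2)))
           for _ in range(32)]

def synthetic_frame(angle):
    """Toy depth map whose gradient direction encodes the rotation angle."""
    yy, xx = np.mgrid[0:32, 0:32].astype(float)
    return (np.cos(angle) * xx + np.sin(angle) * yy
            + 0.1 * rng.standard_normal((32, 32)))

# Train on 300 synthetic frames with known ground-truth angles (radians).
angles = rng.uniform(0.0, np.pi / 2, 300)
X = np.stack([depth_features(synthetic_frame(a), offsets) for a in angles])
forest = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, angles)

# Single-frame prediction: no temporal information is used.
test_angle = 0.7
pred = forest.predict(depth_features(synthetic_frame(test_angle), offsets)[None, :])[0]
print(abs(pred - test_angle))  # error should be small on this synthetic task
```

On real depth streams the features, hand segmentation, and target parameterization of rotation (e.g. Euler angles vs. quaternions) would all need careful design; this sketch only illustrates the single-frame regression structure.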