Authors: Jamie Shotton, Toby Sharp, Alex Kipman, Andrew Fitzgibbon, Mark Finocchio, Andrew Blake, Mat Cook, Richard Moore

CiteWeb id: 20130000038

CiteWeb score: 2725

DOI: 10.1145/2398356.2398381

We propose a new method to quickly and accurately predict human pose---the 3D positions of body joints---from a single depth image, without depending on information from preceding frames. Our approach is strongly rooted in current object recognition strategies. By designing an intermediate representation in terms of body parts, the difficult pose estimation problem is transformed into a simpler per-pixel classification problem, for which efficient machine learning techniques exist. By using computer graphics to synthesize a very large dataset of training image pairs, one can train a classifier that estimates body part labels from test images invariant to pose, body shape, clothing, and other irrelevances. Finally, we generate confidence-scored 3D proposals of several body joints by reprojecting the classification result and finding local modes. The system runs in under 5ms on the Xbox 360. Our evaluation shows high accuracy on both synthetic and real test sets, and investigates the effect of several training parameters. We achieve state-of-the-art accuracy in our comparison with related work and demonstrate improved generalization over exact whole-skeleton nearest neighbor matching.
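To make the final stage of the pipeline described in the abstract concrete, the sketch below illustrates how per-pixel body-part probabilities might be reprojected into 3D and reduced to confidence-scored joint proposals by finding local modes. This is not the authors' implementation: the weighted mean-shift mode finder, the pinhole camera intrinsics (FX, FY, CX, CY), and the bandwidth and threshold values are illustrative assumptions.

```python
# Minimal sketch: from a per-pixel body-part probability map and a depth image
# to a single confidence-scored 3D joint proposal for one body part.
import numpy as np

FX, FY = 580.0, 580.0   # assumed focal lengths in pixels
CX, CY = 320.0, 240.0   # assumed principal point for a 640x480 depth image


def reproject(depth_m, mask):
    """Back-project the masked pixels of a metric depth map to camera-space 3D points."""
    v, u = np.nonzero(mask)
    z = depth_m[v, u]
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.stack([x, y, z], axis=1)            # shape (N, 3)


def joint_proposal(depth_m, part_prob, threshold=0.14, bandwidth=0.065, iters=10):
    """Return (3D mode, confidence) for one body part.

    depth_m   : (H, W) depth image in metres
    part_prob : (H, W) classifier probability of this body part at each pixel
    """
    mask = part_prob > threshold
    if not mask.any():
        return None, 0.0
    points = reproject(depth_m, mask)
    weights = part_prob[mask]

    # Weighted mean shift with a Gaussian kernel: start at the weighted centroid
    # and iterate towards the densest cluster of classified surface points.
    mode = np.average(points, axis=0, weights=weights)
    for _ in range(iters):
        d2 = np.sum((points - mode) ** 2, axis=1)
        k = weights * np.exp(-d2 / (2.0 * bandwidth ** 2))
        if k.sum() < 1e-9:
            break
        mode = (k[:, None] * points).sum(axis=0) / k.sum()

    # Confidence: total weighted density supporting the final mode.
    d2 = np.sum((points - mode) ** 2, axis=1)
    confidence = float(np.sum(weights * np.exp(-d2 / (2.0 * bandwidth ** 2))))
    return mode, confidence
```

Running joint_proposal once per body part yields the set of confidence-scored 3D joint proposals referred to in the abstract; the per-pixel probabilities themselves would come from the body-part classifier trained on the synthetic data.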

The publication "Real-time human pose recognition in parts from single depth images" ranks in the Top 10,000 publications in CiteWeb. Within the Computer Science category it is included in the Top 1,000, and it also ranks in the Top 100 among scientific works published in 2013.
Links to full text of the publication: