Metadata
Real-time 3D motion capture by monocular vision and virtual rendering
Language
en
Journal article
This document was published in
Machine Vision and Applications. 2017-11, vol. 28, n° 8, p. 839 - 858
Springer Verlag
Abstract (in English)
Networked 3D virtual environments allow multiple users to interact over the Internet by means of avatars and to get some feeling of virtual telepresence. However, avatar control may be tedious. Motion capture systems based on 3D sensors have reached the consumer market, but webcams remain more widespread and cheaper. This work aims at animating a user's avatar by real-time motion capture using a personal computer and a plain webcam. In a classical model-based approach, we register a 3D articulated upper-body model onto video sequences and propose a number of heuristics to accelerate particle filtering while robustly tracking user motion. Describing the body pose by the 3D positions of the wrists rather than by joint angles allows efficient handling of depth ambiguities for probabilistic tracking. We demonstrate experimentally the robustness of our 3D body tracking by real-time monocular vision, even in the case of partial occlusions and motion in the depth direction.
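For illustration only, the sketch below shows the generic structure of a particle filter tracking a 3D wrist position from 2D image observations, reflecting the state parameterization mentioned in the abstract (wrist positions rather than joint angles). The pinhole projection, random-walk motion model, Gaussian likelihood, and all parameter values (`focal`, `motion_std`, `obs_std`) are assumptions made for this sketch, not the paper's heuristics or image likelihood.

```python
import numpy as np

def project(points_3d, focal=500.0, center=(320.0, 240.0)):
    """Hypothetical pinhole projection of 3D points (N, 3) to 2D pixel coordinates (N, 2)."""
    z = np.clip(points_3d[:, 2], 1e-3, None)
    u = focal * points_3d[:, 0] / z + center[0]
    v = focal * points_3d[:, 1] / z + center[1]
    return np.stack([u, v], axis=1)

def particle_filter_step(particles, observed_2d, motion_std=0.02, obs_std=5.0):
    """One predict/update/resample cycle over 3D wrist-position particles (N, 3)."""
    n = len(particles)
    # Predict: diffuse particles with a simple Gaussian random-walk motion model.
    particles = particles + np.random.normal(0.0, motion_std, particles.shape)
    # Update: weight each particle by how well its projection matches the 2D observation.
    errors = np.linalg.norm(project(particles) - observed_2d, axis=1)
    weights = np.exp(-0.5 * (errors / obs_std) ** 2) + 1e-300
    weights /= weights.sum()
    # Resample: draw a new particle set in proportion to the weights.
    return particles[np.random.choice(n, size=n, p=weights)]

# Usage: track a single wrist over two synthetic frames.
particles = np.random.normal([0.0, 0.0, 2.0], 0.1, size=(500, 3))
for observed_2d in (np.array([330.0, 250.0]), np.array([335.0, 255.0])):
    particles = particle_filter_step(particles, observed_2d)
    print("estimated wrist position:", particles.mean(axis=0))
```

Parameterizing the state directly by 3D wrist positions, as the abstract describes, lets the filter spread particles along the viewing ray and thus represent the depth ambiguity inherent to a single camera.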
Keywords (in English)
Monocular vision
3D/2D registration
3D motion capture
Real-time computer vision
Particle filtering
Research units