Abstract
This paper presents an effective method for recognizing human gestures from sequences of depth images. Specifically, we propose a three-dimensional motion trail model (3D-MTM) to explicitly represent both the dynamics and the statics of gestures in 3D space. In 2D space, the motion trail model (2D-MTM) encodes both motion information and static posture information over the gesture sequence on the xoy-plane. Since gestures are performed in 3D space, the depth images are also projected onto two other planes to encode additional gesture information. The 2D-MTM is then combined with the complementary motion information from these two additional planes to generate the 3D-MTM. Furthermore, a Histogram of Oriented Gradients (HOG) feature vector is extracted from the proposed 3D-MTM as the representation of a gesture sequence. Experimental results show that the proposed method achieves better performance on two publicly available datasets, namely the MSR Action3D dataset and the ChaLearn gesture dataset.
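As a rough illustration of the pipeline the abstract describes, the sketch below accumulates inter-frame depth differences into a crude motion-trail image and extracts a minimal HOG descriptor from it. This is an assumption-laden simplification, not the paper's method: the actual 2D-MTM/3D-MTM construction, the projections onto the two additional planes, and the HOG parameters all differ; everything here (`motion_trail`, `hog_descriptor`, the toy sequence) is hypothetical.

```python
import numpy as np

def motion_trail(frames, thresh=5):
    """Crude stand-in for a 2D motion trail: accumulate pixels where
    consecutive depth frames differ by more than `thresh` (the paper's
    2D-MTM also encodes static posture, which is omitted here)."""
    trail = np.zeros_like(frames[0], dtype=np.float64)
    for prev, cur in zip(frames, frames[1:]):
        diff = np.abs(cur.astype(np.float64) - prev.astype(np.float64))
        trail += diff > thresh
    return trail

def hog_descriptor(img, cell=8, bins=9):
    """Minimal HOG: per-cell histograms of gradient orientations,
    magnitude-weighted, concatenated and L2-normalised (no block
    normalisation, unlike standard HOG implementations)."""
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0.0, 180.0), weights=m)
            feats.append(hist)
    v = np.concatenate(feats)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

# Toy depth sequence: an 8-pixel-wide square moving right across 5 frames.
frames = []
for t in range(5):
    f = np.zeros((32, 32), dtype=np.uint8)
    f[12:20, 4 + 4 * t: 12 + 4 * t] = 100
    frames.append(f)

trail = motion_trail(frames)      # 32x32 accumulated-motion image
desc = hog_descriptor(trail)      # 4x4 cells x 9 bins = 144-d descriptor
print(trail.max(), desc.shape)
```

In the full method, descriptors from the three projected views would be concatenated before classification; here only the front-view trail is featurized.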
Original language | English |
---|---|
Title of host publication | ICCVW 2013 |
Place of Publication | United States |
Publisher | Institute of Electrical and Electronics Engineers |
Pages | 684-691 |
Number of pages | 8 |
ISBN (Electronic) | 9781479930227 |
DOIs | |
Publication status | Published - 2013 |
Event | IEEE International Conference on Computer Vision - Sydney, Australia Duration: 02 Dec 2013 → 08 Dec 2013 |
Conference
Conference | IEEE International Conference on Computer Vision |
---|---|
Country/Territory | Australia |
Period | 02/12/13 → 08/12/13 |