In this paper, an effective method is presented to recognize human gestures from sequences of depth images. Specifically, we propose a three-dimensional motion trail model (3D-MTM) to explicitly represent the dynamics and statics of gestures in 3D space. In 2D space, the motion trail model (2D-MTM) consists of both motion information and static posture information over the gesture sequence along the xoy-plane. Since gestures are performed in 3D space, depth images are projected onto two other planes to encode additional gesture information. The 2D-MTM is then combined with the complementary motion information from the two additional planes to generate the 3D-MTM. Furthermore, a Histogram of Oriented Gradients (HOG) feature vector is extracted from the proposed 3D-MTM as the representation of a gesture sequence. The experimental results show that the proposed method achieves better results on two publicly available datasets, namely the MSR Action3D dataset and the ChaLearn gesture dataset.
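The pipeline in the abstract can be sketched in two steps: project each depth frame onto orthogonal planes, then extract a HOG descriptor from the resulting trail images. The sketch below is illustrative only, not the authors' implementation: the projection binning and the single-cell HOG are simplifying assumptions (the paper uses motion trail images accumulated over the whole sequence and a block-wise HOG).

```python
import numpy as np

def project_depth(depth):
    """Project one depth image (H x W, values = depth bin z) onto three
    orthogonal planes: front (xoy), side (yoz), top (xoz).
    Binning scheme here is an illustrative assumption."""
    h, w = depth.shape
    max_z = int(depth.max()) + 1
    # Front view: binary silhouette of pixels with valid depth.
    front = (depth > 0).astype(np.float32)
    # Side view: for each row y, mark which depth bins are occupied.
    side = np.zeros((h, max_z), np.float32)
    # Top view: for each column x, mark which depth bins are occupied.
    top = np.zeros((max_z, w), np.float32)
    ys, xs = np.nonzero(depth > 0)
    zs = depth[ys, xs].astype(int)
    side[ys, zs] = 1.0
    top[zs, xs] = 1.0
    return front, side, top

def hog_one_cell(img, bins=9):
    """Tiny HOG: one unsigned-orientation histogram over the whole image,
    a simplification of the block-wise HOG descriptor."""
    gy, gx = np.gradient(img.astype(np.float32))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-6)

# Toy depth frame: a rectangular "person" blob at depth bin 3.
depth = np.zeros((8, 8), int)
depth[2:6, 3:5] = 3
front, side, top = project_depth(depth)
# Concatenate one 9-bin histogram per view into a 27-dim feature.
feature = np.concatenate([hog_one_cell(v) for v in (front, side, top)])
print(feature.shape)  # (27,)
```

In the actual method, the per-frame projections would be accumulated over the gesture sequence into motion trail images before HOG extraction; the per-frame version above only shows the geometry of the three-plane projection.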
Original language: English
Title of host publication: ICCVW 2013
Place of Publication: United States
Publisher: Institute of Electrical and Electronics Engineers
Number of pages: 8
ISBN (Electronic): 9781479930227
Publication status: Published - 2013
Event: IEEE International Conference on Computer Vision - Sydney, Australia
Duration: 02 Dec 2013 - 08 Dec 2013


Conference: IEEE International Conference on Computer Vision


Title: Three Dimensional Motion Trail Model for Gesture Recognition
