Abstract
This paper proposes a novel approach to multi-modal gesture recognition using skeletal joints and a motion trail model. The approach comprises two modules: spotting and recognition. In the spotting module, a continuous gesture sequence is segmented into individual gesture intervals based on hand joint positions within a sliding window. In the recognition module, three models are combined to classify each gesture interval into one gesture category. For the skeletal model, Hidden Markov Models (HMM) and Support Vector Machines (SVM) are adopted to classify skeleton features. For depth maps and user masks, we employ the 2D Motion Trail Model (2DMTM) for gesture representation, capturing motion region information. SVM is then used to classify Pyramid Histogram of Oriented Gradients (PHOG) features extracted from the 2DMTM. The three models complement one another. Finally, a fusion scheme incorporates the probability weights of each classifier for gesture recognition. The proposed approach is evaluated on the 2014 ChaLearn Multi-modal Gesture Recognition Challenge dataset. Experimental results demonstrate that the proposed approach using the combined models outperforms single-modal approaches, and that the recognition module performs effectively on user-independent gesture recognition.
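The abstract's fusion scheme combines the probability outputs of the three classifiers (HMM on skeleton features, SVM on skeleton features, SVM on PHOG features from the 2DMTM). A minimal sketch of such weighted late fusion is below; the function name, the per-classifier weights, and the example probability vectors are all hypothetical illustrations, not values from the paper.

```python
import numpy as np

def fuse_probabilities(model_probs, weights):
    """Late fusion: weighted sum of per-classifier class-probability
    vectors, renormalized to sum to 1.

    model_probs: list of 1-D arrays, one per classifier.
    weights: per-classifier weights (hypothetical values here; the
             paper determines its own weighting scheme).
    """
    fused = np.zeros_like(np.asarray(model_probs[0], dtype=float))
    for p, w in zip(model_probs, weights):
        fused += w * np.asarray(p, dtype=float)
    return fused / fused.sum()

# Hypothetical outputs of the three classifiers over three gesture classes:
p_hmm  = np.array([0.6, 0.3, 0.1])   # HMM on skeleton features
p_svm  = np.array([0.5, 0.4, 0.1])   # SVM on skeleton features
p_phog = np.array([0.2, 0.6, 0.2])   # SVM on PHOG features from 2DMTM

fused = fuse_probabilities([p_hmm, p_svm, p_phog], weights=[0.4, 0.3, 0.3])
predicted_class = int(np.argmax(fused))
```

Soft fusion of this kind lets a modality that is confident for a given gesture outweigh modalities that are ambiguous for it, which is why the combined models can outperform any single-modal classifier.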
Original language | English |
---|---|
Title of host publication | Computer Vision - ECCV 2014 Workshops |
Subtitle of host publication | Zurich, Switzerland, September 6-7 and 12, 2014, Proceedings, Part I |
Place of Publication | Switzerland |
Publisher | Springer |
Pages | 623-638 |
Number of pages | 16 |
DOIs | |
Publication status | Published - 2014 |
Event | European Conference on Computer Vision, Zurich, Switzerland. Duration: 06 Sept 2014 → 12 Sept 2014. https://www.springer.com/gp/book/9783319105925 |
Conference
Conference | European Conference on Computer Vision |
---|---|
Abbreviated title | ECCV |
Country/Territory | Switzerland |
City | Zurich |
Period | 06/09/14 → 12/09/14 |
Internet address | https://www.springer.com/gp/book/9783319105925 |