Modeling motion of body parts for action recognition

Khai N. Tran, Ioannis A. Kakadiaris, Shishir K. Shah

Research output: Contribution to conference › Paper › peer-review

25 Scopus citations


This paper presents a simple and computationally efficient framework for human action recognition based on modeling the motion of human body parts. Intuitively, a collective understanding of how the body parts move can lead to a better understanding and representation of any human action. We propose a generative representation of the motion of the human body parts to learn and classify human actions. The proposed representation combines the advantages of both local and global representations, encoding the relevant motion information while remaining robust to local appearance changes. Our work is motivated by the pictorial structures model and the framework of sparse representations for recognition. Part movements are represented efficiently by quantizing them in polar space, and the key discriminative information within each action is encoded by sparse representation to perform classification. The proposed method is evaluated on both the KTH and the UCF action datasets, and the results are compared against other state-of-the-art methods.
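As a rough illustration of the polar-space quantization the abstract describes, the sketch below bins 2D displacement vectors of tracked body parts into a joint angle-radius histogram. The bin counts (8 angles × 3 radii), the `max_radius` cutoff, and all function names are illustrative assumptions, not parameters taken from the paper.

```python
import math

def quantize_polar(dx, dy, num_angle_bins=8, num_radius_bins=3, max_radius=30.0):
    """Map a 2D part displacement (dx, dy) to a single polar bin index.

    Hypothetical sketch: bin counts and max_radius are assumed values,
    not the paper's actual settings.
    """
    r = math.hypot(dx, dy)
    theta = math.atan2(dy, dx) % (2 * math.pi)          # angle in [0, 2*pi)
    a = min(int(theta / (2 * math.pi) * num_angle_bins), num_angle_bins - 1)
    rad = min(int(r / max_radius * num_radius_bins), num_radius_bins - 1)
    return rad * num_angle_bins + a                      # joint (radius, angle) bin

def motion_histogram(displacements, num_angle_bins=8, num_radius_bins=3):
    """Accumulate a normalized histogram over the polar bins for one part track."""
    hist = [0.0] * (num_angle_bins * num_radius_bins)
    for dx, dy in displacements:
        hist[quantize_polar(dx, dy, num_angle_bins, num_radius_bins)] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]                     # normalize to sum to 1
```

Per-part histograms like this could then be concatenated into one descriptor per action sequence and classified via sparse representation, as the abstract outlines; the classification stage is not sketched here.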

Original language: English (US)
State: Published - 2011
Event: 2011 22nd British Machine Vision Conference, BMVC 2011 - Dundee, United Kingdom
Duration: Aug 29 2011 - Sep 2 2011


Conference: 2011 22nd British Machine Vision Conference, BMVC 2011
Country/Territory: United Kingdom

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition


