Article
which determines facial velocity information. Then, these two features are integrated and converted into visual words using a “bag-of-words” model, so that a facial expression is represented by a set of visual words. Secondly, a Latent Dirichlet Allocation (LDA) model is utilized to classify facial expressions such as “anger”, “disgust”, “fear”, “happiness”, “neutral”, “sadness”, and “surprise”. The experimental results show that our proposed method not only performs stably and robustly
Shaoping Zhu
International Journal on Smart Sensing and Intelligent Systems, ISSUE 3, 1464–1483
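The “bag-of-words” step described in the abstract above can be illustrated with a minimal, self-contained sketch: local feature vectors are assigned to their nearest codebook centroid (a “visual word”), and an expression is then represented by the histogram of word counts. The codebook and feature values here are toy assumptions, not the article's learned vocabulary.

```python
def nearest_word(feature, codebook):
    """Index of the codebook centroid closest to `feature` (squared L2 distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: dist2(feature, codebook[i]))

def bag_of_words(features, codebook):
    """Histogram of visual-word counts for a set of local features."""
    hist = [0] * len(codebook)
    for f in features:
        hist[nearest_word(f, codebook)] += 1
    return hist

codebook = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]        # 3 toy visual words
features = [(0.1, 0.1), (0.9, 1.1), (0.2, 0.9), (1.0, 0.8)]
print(bag_of_words(features, codebook))                # → [1, 2, 1]
```

In a full pipeline such a histogram, rather than the raw features, would be the per-face representation passed to the downstream classifier.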
Article
facial expression. Then, the local texture feature and facial velocity information are fused into hybrid characteristics using the Bag of Words model. Finally, a Multi-Instance Boosting model is used to recognize facial expressions in video sequences. To speed up learning and complete the detection, class label information is used in training the Multi-Instance Boosting model. Experiments were performed on a facial expression dataset we built ourselves and on the JAFFE database to evaluate
Shaoping Zhu,
Yongliang Xiao
International Journal on Smart Sensing and Intelligent Systems, ISSUE 1, 581–601
Article
In the Bag of Words image representation model, visual words are generated by unsupervised clustering, which discards the spatial relations between words and leads to shortcomings such as limited semantic description and weak discriminative power. To address this problem, we propose in this article to replace visual words with visual phrases. Visual phrases, built from the spatial relations between words, are semantically discriminative, and they can improve the accuracy of the Bag of Words model
Tao Wang,
Wenqing Chen,
Bailing Wang
International Journal on Smart Sensing and Intelligent Systems, ISSUE 4, 1470–1492
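The visual-phrase idea from the abstract above can be sketched simply: pairs of visual words whose keypoints lie within a spatial radius are grouped into a “phrase”, restoring spatial relations that a plain bag-of-words histogram discards. The radius, keypoint coordinates, and word ids below are toy assumptions, not the article's actual construction.

```python
from math import hypot

def visual_phrases(points, radius):
    """points: list of (x, y, word_id) keypoints.
    Returns a sorted word-id pair for every pair of keypoints
    whose Euclidean distance is within `radius`."""
    phrases = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            xi, yi, wi = points[i]
            xj, yj, wj = points[j]
            if hypot(xi - xj, yi - yj) <= radius:
                phrases.append(tuple(sorted((wi, wj))))
    return phrases

pts = [(0, 0, 5), (1, 0, 7), (10, 10, 5)]              # (x, y, word_id)
print(visual_phrases(pts, radius=2.0))                 # → [(5, 7)]
```

A histogram over these phrase pairs, rather than over single words, would then serve as the more discriminative image representation.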
Article
Ning Zhang,
Jinfu Zhu
International Journal on Smart Sensing and Intelligent Systems, ISSUE 1, 45–64