Detection and classification of the behavior of people in an intelligent building by camera


International Journal on Smart Sensing and Intelligent Systems

Professor Subhas Chandra Mukhopadhyay

Exeley Inc. (New York)

Subject: Computational Science & Engineering, Engineering, Electrical & Electronic


eISSN: 1178-5608


Volume 6, Issue 4 (September 2013)


Henni Sid Ahmed* / Belbachir Mohamed Faouzi / Jean Caelen

Keywords: video analysis, people detection, intelligent building, classification

Citation Information: International Journal on Smart Sensing and Intelligent Systems, Volume 6, Issue 4, Pages 1,317-1,342, DOI: https://doi.org/10.21307/ijssis-2017-592

License : (CC BY-NC-ND 4.0)

Received: 10-April-2013 / Accepted: 30-July-2013 / Published Online: 03-September-2013


ABSTRACT

An intelligent building is an environment equipped with sensors and cameras whose role is to report the actions performed by its occupants and their status, so that this information can be processed by a behavior detection and classification system. The system uses this information as input to provide maximum comfort to the people in the building with optimal energy consumption: for example, if a person is exercising in a room, the system lowers the heating. Our goal is to develop a robust and reliable system built around two fixed cameras in every room of the intelligent building, connected to a computer that acquires the video sequences. A program takes these video sequences as input, represents the images with RGB color histograms and LBP texture descriptors, and uses SVM Light (a support vector machine implementation) to detect and classify the behavior of the people in the building, again with the aim of maximum comfort at optimized energy consumption. Classification follows a one-of-k scheme, with k = 11 in our case: in the learning phase we build 11 models with different kernels and keep the models that give the highest classification rate; in the classification phase, a behavior to be recognized is compared against the 11 behaviors, that is, we run 11 classifications and retain the behavior with the highest classification rate. This work was carried out at Joseph Fourier University in Grenoble, within the MULTICOM team of the LIG (Grenoble Informatics Laboratory), and at the University of Science and Technology of Oran (USTO), Algeria. Our contribution is the design and implementation of a robust and accurate system that detects and classifies 11 behaviors from cameras in an intelligent building under varying illumination; that is, whatever the lighting, the system must remain capable of detecting and classifying behaviors.
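
The abstract describes the pipeline only in prose. The sketch below is a rough, hypothetical illustration of how the described steps could fit together, not the authors' implementation: per-frame RGB color histograms concatenated with a uniform LBP texture histogram, one binary SVM per behavior (k = 11) trained with a choice of kernels, and classification by taking the class whose model returns the highest score. The libraries (scikit-image and scikit-learn standing in for SVM Light), bin counts, kernel set, and helper names such as frame_descriptor are assumptions made for illustration only.

```python
# Hedged sketch, not the paper's code: RGB-histogram + uniform-LBP features,
# one-vs-rest SVMs (k = 11) with a crude per-class kernel selection.
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

N_BEHAVIORS = 11            # k = 11 behavior classes, as in the paper
LBP_POINTS, LBP_RADIUS = 8, 1

def frame_descriptor(frame_rgb):
    """Concatenate per-channel RGB histograms with a uniform-LBP histogram."""
    rgb_hist = np.concatenate([
        np.histogram(frame_rgb[..., c], bins=16, range=(0, 255), density=True)[0]
        for c in range(3)
    ])
    gray = (rgb2gray(frame_rgb) * 255).astype("uint8")
    lbp = local_binary_pattern(gray, LBP_POINTS, LBP_RADIUS, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=LBP_POINTS + 2,
                               range=(0, LBP_POINTS + 2), density=True)
    return np.concatenate([rgb_hist, lbp_hist])

def train_models(frames, labels, kernels=("linear", "rbf")):
    """Train one binary SVM per behavior and keep the kernel scoring best
    (a stand-in for the paper's selection of the best model per class)."""
    X = np.stack([frame_descriptor(f) for f in frames])
    labels = np.asarray(labels)
    models = []
    for k in range(N_BEHAVIORS):
        y = (labels == k).astype(int)          # one-vs-rest target for class k
        best = max((SVC(kernel=kern).fit(X, y) for kern in kernels),
                   key=lambda m: m.score(X, y))
        models.append(best)
    return models

def classify(models, frame_rgb):
    """Run the 11 classifiers and return the behavior with the highest score."""
    x = frame_descriptor(frame_rgb).reshape(1, -1)
    scores = [m.decision_function(x)[0] for m in models]
    return int(np.argmax(scores))
```

In a real setting the descriptors would be computed for every frame of the two-camera sequences, and the kernel choice in train_models would be made on a held-out validation set rather than on the training score.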

