A Novel Based 3d Facial Expression Detection Using Recurrent Neural Network

Authors

  • Jaswanth K S  B.E Scholar, Department of Computer Science and Engineering, IFET College of Engineering, Villupuram, Tamil Nadu, India
  • Dr. D. Stalin David  Assistant Professor, Department of Computer Science and Engineering, IFET College of Engineering, Villupuram, Tamil Nadu, India

DOI:

https://doi.org/10.32628/CSEIT20622

Keywords:

FER, RNN, AdaBoost, 3D DCT

Abstract

People routinely display diverse facial expressions as their moods change, and human facial expression recognition therefore plays a vital role in social interaction. Emotion recognition has been an active research topic for a long time. This work targets the real-time detection of facial expressions such as disgust, happy, sad, angry, fearful, and surprised; the proposed framework can recognize these six different facial expressions. A facial expression recognition system must first perform face detection and conversion to a 3D image, followed by facial feature extraction and facial expression classification. In the proposed method, a Recurrent Neural Network (RNN) is used for classification. The RNN model is trained on the JAFFE and Yale face databases. The system is able to monitor people's emotions, discriminate between emotions, and label them appropriately.
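The pipeline the abstract describes (frame-level feature extraction via DCT, then an RNN that classifies the sequence into one of six emotions) can be sketched as a toy in plain NumPy. This is an illustrative assumption-laden sketch, not the authors' implementation: the DCT block size, the number of retained coefficients, the hidden dimension, and the random weights are all hypothetical placeholders, and a real system would train the RNN rather than use random parameters.

```python
import numpy as np

# Hypothetical label set for the six expressions named in the abstract.
EMOTIONS = ["angry", "disgust", "fearful", "happy", "sad", "surprised"]

def dct2(block):
    """Orthonormal 2D DCT-II of a square grayscale block (feature stage)."""
    n = block.shape[0]
    k = np.arange(n)
    # DCT-II basis matrix: rows = frequencies, columns = samples.
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C @ block @ C.T

def dct_features(frame, keep=8):
    """Keep the low-frequency keep x keep corner of the DCT as a feature vector."""
    return dct2(frame)[:keep, :keep].ravel()

class SimpleRNN:
    """Minimal Elman-style RNN classifier over a sequence of frame features."""
    def __init__(self, in_dim, hidden, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = rng.normal(0.0, 0.1, (hidden, in_dim))    # input-to-hidden
        self.Wh = rng.normal(0.0, 0.1, (hidden, hidden))    # hidden-to-hidden
        self.Wy = rng.normal(0.0, 0.1, (n_classes, hidden)) # hidden-to-output

    def predict_proba(self, seq):
        h = np.zeros(self.Wh.shape[0])
        for x in seq:                        # unroll over the frame sequence
            h = np.tanh(self.Wx @ x + self.Wh @ h)
        logits = self.Wy @ h
        e = np.exp(logits - logits.max())    # numerically stable softmax
        return e / e.sum()

# Toy usage: a "video" of five random 32x32 grayscale face crops.
frames = [np.random.rand(32, 32) for _ in range(5)]
seq = [dct_features(f) for f in frames]
probs = SimpleRNN(in_dim=64, hidden=16, n_classes=6).predict_proba(seq)
print(EMOTIONS[int(np.argmax(probs))])
```

With random weights the prediction is of course arbitrary; the sketch only shows the data flow from per-frame DCT coefficients to a six-way probability distribution.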

References

  1. W.-L. Chao, J.-J. Ding, and J.-Z. Liu, “Facial expression recognition based on improved local binary pattern and class-regularized locality preserving projection,” Signal Processing, vol. 117, pp. 1-10, 2015. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0165168415001425
  2. T. Danisman, I. M. Bilasco, J. Martinet, and C. Djeraba, “Intelligent pixels of interest selection with application to facial expression recognition using multilayer perceptron,” Signal Processing, vol. 93, no. 6, pp. 1547-1556, 2013, special issue on Machine Learning in Intelligent Image Processing. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0165168412002745
  3. D. Ververidis and C. Kotropoulos, “Fast and accurate sequential floating forward feature selection with the Bayes classifier applied to speech emotion recognition,” Signal Processing, vol. 88, no. 12, pp. 2956-2970, 2008. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0165168408002120
  4. X. Li, Q. Ruan, Y. Jin, G. An, and R. Zhao, “Fully automatic 3D facial expression recognition using polytypic multi-block local binary patterns,” Signal Processing, vol. 108, pp. 297-308, 2015. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0165168414004563
  5. S. Gupta, A. Mehra et al., “Speech emotion recognition using SVM with thresholding fusion,” in 2015 2nd International Conference on Signal Processing and Integrated Networks (SPIN). IEEE, 2015, pp. 570-574.
  6. Y. D. Zhang, Z. J. Yang, H. M. Lu, X. X. Zhou, P. Phillips, Q. M. Liu, and S. H. Wang, “Facial emotion recognition based on biorthogonal wavelet entropy, fuzzy support vector machine, and stratified cross-validation,” IEEE Access, vol. 4, pp. 8375-8385, 2016.
  7. S. Zhao, F. Rudzicz, L. G. Carvalho, C. Marquez-Chin, and S. Livingstone, “Automatic detection of expressed emotion in Parkinson's disease,” in 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 2014, pp. 4813-4817.
  8. F. Y. Shih, C.-F. Chuang, and P. S. Wang, “Performance comparisons of facial expression recognition in JAFFE database,” International Journal of Pattern Recognition and Artificial Intelligence, vol. 22, no. 03, pp. 445-459, 2008.
  9. S. P., K. D., and S. Tripathi, “Pose invariant method for emotion recognition from 3D images,” in 2015 Annual IEEE India Conference (INDICON), Dec 2015, pp. 1-5.
  10. A. C. Cruz, B. Bhanu, and N. S. Thakoor, “Vision and attention theory based sampling for continuous facial emotion recognition,” IEEE Transactions on Affective Computing, vol. 5, no. 4, pp. 418-431, Oct 2014.
  11. M. H. A. Latif, H. M. Yusof, S. N. Sidek, and N. Rusli, “Thermal imaging based affective state recognition,” in 2015 IEEE International Symposium on Robotics and Intelligent Sensors (IRIS), Oct 2015, pp. 214-219.
  12. V. Sudha, G. Viswanath, A. Balasubramanian, P. Chiranjeevi, K. Basant, and M. Pratibha, “A fast and robust emotion recognition system for real-world mobile phone data,” in 2015 IEEE International Conference on Multimedia Expo Workshops (ICMEW), June 2015, pp. 1-6.
  13. Y. Sun and Y. An, “Research on the embedded system of facial expression recognition based on HMM,” in 2010 2nd IEEE International Conference on Information Management and Engineering, April 2010, pp. 727-731.
  14. P. Chiranjeevi, V. Gopalakrishnan, and P. Moogi, “Neutral face classification using personalized appearance models for fast and robust emotion detection,” IEEE Transactions on Image Processing, vol. 24, no. 9, pp. 2701-2711, Sept 2015.
  15. D. Ghimire and J. Lee, “Geometric feature-based facial expression recognition in image sequences using multi-class AdaBoost and support vector machines,” Sensors, vol. 13, no. 6, pp. 7714-7734, 2013.
  16. R. Brunelli and T. Poggio, “Face recognition: features versus templates,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 10, pp. 1042-1052, Oct 1993.
  17. L. Zhang, D. Tjondronegoro, and V. Chandran, “Toward a more robust facial expression recognition in occluded images using randomly sampled Gabor based templates,” in 2011 IEEE International Conference on Multimedia and Expo, July 2011, pp. 1-6.
  18. X. Wang, X. Liu, L. Lu, and Z. Shen, “A new facial expression recognition method based on geometric alignment and LBP features,” in 2014 IEEE 17th International Conference on Computational Science and Engineering, Dec 2014, pp. 1734-1737.
  19. H. Tang, B. Yin, Y. Sun, and Y. Hu, “3D face recognition using local binary patterns,” Signal Processing, vol. 93, no. 8, pp. 2190-2198, 2013, indexing of Large-Scale Multimedia Signals. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0165168412001120
  20. X. Zhao, J. Zou, H. Li, E. Dellandra, I. A. Kakadiaris, and L. Chen, “Automatic 2.5-D facial landmarking and emotion annotation for social interaction assistance,” IEEE Transactions on Cybernetics, vol. 46, no. 9, pp. 2042-2055, Sept 2016.
  21. S. K. A. Kamarol, M. H. Jaward, J. Parkkinen, and R. Parthiban, “Spatiotemporal feature extraction for facial expression recognition,” IET Image Processing, vol. 10, no. 7, pp. 534-541, 2016.
  22. M. J. Lyons, S. Akamatsu, M. Kamachi, J. Gyoba, and J. Budynek, “The Japanese female facial expression (JAFFE) database,” in Proceedings of Third International Conference on Automatic Face and Gesture Recognition, 1998, pp. 14-16.
  23. A. Poursaberi, H. A. Noubari, M. Gavrilova, and S. N. Yanushkevich, “Gauss-Laguerre wavelet textural feature fusion with geometrical information for facial expression identification,” EURASIP Journal on Image and Video Processing, vol. 2012, no. 1, pp. 1-13, 2012.
  24. F. Cheng, J. Yu, and H. Xiong, “Facial expression recognition in JAFFE dataset based on Gaussian process classification,” IEEE Transactions on Neural Networks, vol. 21, no. 10, pp. 1685-1690, Oct 2010.
  25. S. Kamal, F. Sayeed, and M. Rafeeq, “Facial emotion recognition for human-computer interactions using hybrid feature extraction technique,” in International Conference on Data Mining and Advanced Computing (SAPIENCE). IEEE, 2016, pp. 180-184.

Published

2020-04-30

Section

Research Articles

How to Cite

[1]
Jaswanth K S, Dr. D. Stalin David, "A Novel Based 3d Facial Expression Detection Using Recurrent Neural Network," International Journal of Scientific Research in Computer Science, Engineering and Information Technology (IJSRCSEIT), ISSN: 2456-3307, Volume 6, Issue 2, pp. 48-53, March-April 2020. Available at doi: https://doi.org/10.32628/CSEIT20622