Gesture Based Real-time Indian Sign Language Interpreter

Authors

  • Akshay Divkar, Department of Computer Engineering, Datta Meghe College of Engineering, Navi Mumbai, India
  • Rushikesh Bailkar, Department of Computer Engineering, Datta Meghe College of Engineering, Navi Mumbai, India
  • Dr. Chhaya S. Pawar, Department of Computer Engineering, Datta Meghe College of Engineering, Navi Mumbai, India

DOI:

https://doi.org/10.32628/CSEIT217374

Keywords:

Language Interpreter, Convolutional Neural Network, Recurrent Neural Network

Abstract

Hand gestures are a core element of sign language, a form of non-verbal communication used chiefly by hearing and speech impaired people to communicate among themselves or with others. Sign language applications matter because they allow hearing and speech impaired people to communicate easily even with those who do not understand sign language. This project takes a first step toward bridging the communication gap between hearing people and hearing and speech impaired people who use sign language. Its main focus is a vision-based system that identifies sign language gestures from video sequences; a vision-based approach was chosen because it offers a simpler and more intuitive way for a human to communicate with a computer. Video sequences contain both spatial and temporal features, so two different models are trained. A deep Convolutional Neural Network (CNN) is trained on frames extracted from the training videos to learn the spatial features. The trained CNN then makes a prediction for each individual frame, yielding a sequence of predictions, and this sequence is fed to a Recurrent Neural Network (RNN), which is trained on the temporal features. Together, the trained CNN and RNN produce the text output for the corresponding gesture.
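
As an illustration, the two-stage pipeline described in the abstract can be sketched in TensorFlow/Keras. This is a minimal sketch, not the authors' implementation: the frame size, sequence length, number of gesture classes, and layer sizes are all illustrative assumptions.

    import numpy as np
    from tensorflow.keras import layers, models

    NUM_GESTURES = 10          # hypothetical number of sign classes
    SEQ_LEN = 30               # hypothetical number of frames per video clip
    FRAME_SHAPE = (64, 64, 3)  # hypothetical frame size

    # Stage 1: a CNN classifies individual frames (spatial features).
    cnn = models.Sequential([
        layers.Input(shape=FRAME_SHAPE),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_GESTURES, activation="softmax"),
    ])
    cnn.compile(optimizer="adam", loss="categorical_crossentropy")

    # Stage 2: an RNN reads the sequence of per-frame CNN predictions
    # (temporal features) and outputs one gesture label per clip.
    rnn = models.Sequential([
        layers.Input(shape=(SEQ_LEN, NUM_GESTURES)),
        layers.LSTM(64),
        layers.Dense(NUM_GESTURES, activation="softmax"),
    ])
    rnn.compile(optimizer="adam", loss="categorical_crossentropy")

    def classify_clip(frames: np.ndarray) -> int:
        """Map a (SEQ_LEN, 64, 64, 3) video clip to a predicted gesture index."""
        per_frame = cnn.predict(frames, verbose=0)               # (SEQ_LEN, NUM_GESTURES)
        clip_pred = rnn.predict(per_frame[None, ...], verbose=0)  # (1, NUM_GESTURES)
        return int(np.argmax(clip_pred, axis=-1)[0])

In training, each frame of a training clip would carry the clip's gesture label for the CNN, and the resulting per-frame prediction sequences would then serve as the RNN's training inputs, mirroring the two-pass training the abstract describes.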

References

  1. P. V. V. Kishore, D. A. Kumar, A. S. C. S. Sastry and E. K. Kumar, "Motionlets Matching With Adaptive Kernels for 3-D Indian Sign Language Recognition," in IEEE Sensors Journal, vol. 18, no. 8, pp. 3327-3337, 15 April 2018, doi: 10.1109/JSEN.2018.2810449.
  2. H. Muthu Mariappan and V. Gomathi, "Real-Time Recognition of Indian Sign Language," 2019 International Conference on Computational Intelligence in Data Science (ICCIDS), Chennai, India, 2019, pp. 1-6, doi: 10.1109/ICCIDS.2019.8862125.
  3. T. Oliveira, N. Escudeiro, P. Escudeiro, E. Rocha and F. M. Barbosa, "The VirtualSign Channel for the Communication Between Deaf and Hearing Users," in IEEE Revista Iberoamericana de Tecnologias del Aprendizaje, vol. 14, no. 4, pp. 188-195, Nov. 2019, doi: 10.1109/RITA.2019.2952270.
  4. M. Hedayati, W. M. D. W. Zaki and A. Hussain, "Real-time background subtraction for video surveillance: From research to reality," 2010 6th International Colloquium on Signal Processing & its Applications, 2010, pp. 1-6, doi: 10.1109/CSPA.2010.5545277.
  5. P. Roy, S. Dutta, N. Dey, G. Dey, S. Chakraborty and R. Ray, "Adaptive thresholding: A comparative study," 2014 International Conference on Control, Instrumentation, Communication and Computational Technologies (ICCICCT), 2014, pp. 1182-1186, doi: 10.1109/ICCICCT.2014.6993140.

Published

2021-06-30

Section

Research Articles

How to Cite

[1]
Akshay Divkar, Rushikesh Bailkar, Dr. Chhaya S. Pawar, "Gesture Based Real-time Indian Sign Language Interpreter", International Journal of Scientific Research in Computer Science, Engineering and Information Technology (IJSRCSEIT), ISSN: 2456-3307, Volume 7, Issue 3, pp. 387-394, May-June 2021. Available at doi: https://doi.org/10.32628/CSEIT217374