Sign Language Based Video Calling App

Authors

  • Shadab Shaikh  Department of Computer Engineering, Shree L.R Tiwari College of Engineering, Thane, Maharashtra, India
  • Mohit Hadiyal  Department of Computer Engineering, Shree L.R Tiwari College of Engineering, Thane, Maharashtra, India
  • Narendra Choudhary  Department of Computer Engineering, Shree L.R Tiwari College of Engineering, Thane, Maharashtra, India
  • Nivedita Rajbhar  Department of Computer Engineering, Shree L.R Tiwari College of Engineering, Thane, Maharashtra, India
  • Dipali Bhole  Assistant Professor, Department of Computer Engineering, Shree L.R Tiwari College of Engineering, Thane, Maharashtra, India

DOI:

https://doi.org/10.32628/CSEIT121541

Keywords:

Gesture Recognition, Static Gestures, American Sign Language

Abstract

Deaf and hard of hearing people communicate with others and within their own communities by using sign language. Computer recognition of sign language begins with learning sign gestures and continues through the production of text and speech. Sign gestures are of two types: static and dynamic. Both kinds of gesture recognition are important to the community, even though static gesture recognition is easier than dynamic gesture recognition. We have studied the steps required to convert static American Sign Language (ASL) gestures into readable text and selected the best available methods for doing so. The general steps examined are data collection, pre-processing, transformation, feature extraction, and classification. We also offer some recommendations for further study in this field.
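
To make the pipeline in the abstract concrete, the following is a minimal sketch, not the authors' implementation: it assumes MediaPipe hand landmarks as the feature-extraction step (in the spirit of the skeleton-hand approach of reference [1]) and a scikit-learn random forest classifier (reference [3]); all function and variable names are illustrative.

    # Sketch of a static-gesture recognition pipeline:
    # pre-processing -> landmark transformation -> feature extraction -> classification.
    import cv2
    import numpy as np
    import mediapipe as mp
    from sklearn.ensemble import RandomForestClassifier

    mp_hands = mp.solutions.hands

    def extract_features(bgr_frame):
        """Return a 63-dimensional hand-landmark vector for one frame, or None."""
        rgb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB)               # pre-processing
        with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
            result = hands.process(rgb)                                # transformation
        if not result.multi_hand_landmarks:
            return None
        landmarks = result.multi_hand_landmarks[0].landmark            # feature extraction
        return np.array([[p.x, p.y, p.z] for p in landmarks]).flatten()  # 21 x 3 = 63

    def train_classifier(X, y):
        """Classification step: X holds feature vectors, y the ASL letter labels."""
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X, y)
        return clf

    def predict_letter(clf, frame):
        features = extract_features(frame)
        return clf.predict([features])[0] if features is not None else None

In a video-calling setting (reference [4] surveys WebRTC-based conferencing products), predict_letter could be applied to sampled frames of the local video stream and the predicted letters rendered as captions for the remote participant.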

References

  1. A. Osipov and M. Ostanin, "Real-time static custom gestures recognition based on skeleton hand," 2021 International Conference "Nonlinearity, Information and Robotics" (NIR), Innopolis, Russian Federation, 2021, pp. 1-4. doi: 10.1109/NIR52917.2021.9665809
  2. J. Hwang and S. Kim, "A comparative analysis of sign language recognition technologies: A review of recent advances," Sensors, vol. 20, no. 5, p. 1385, 2020. doi: 10.3390/s20051385
  3. G. Biau and E. Scornet, "A random forest guided tour," TEST, vol. 25, pp. 197-227, 2016. doi: 10.1007/s11749-016-0481-7
  4. S. Öztürk et al., "Functionality, Performance and Usability Tests of WebRTC Based Video Conferencing Products," 2021 15th Turkish National Software Engineering Symposium (UYMS), Izmir, Turkey, 2021, pp. 1-6. doi: 10.1109/UYMS54260.2021.9659594

Published

2023-04-30

Issue

Volume 9, Issue 2 (March-April 2023)

Section

Research Articles

How to Cite

[1]
Shadab Shaikh, Mohit Hadiyal, Narendra Choudhary, Nivedita Rajbhar, Dipali Bhole, "Sign Language Based Video Calling App," International Journal of Scientific Research in Computer Science, Engineering and Information Technology (IJSRCSEIT), ISSN: 2456-3307, Volume 9, Issue 2, pp. 418-423, March-April 2023. Available at doi: https://doi.org/10.32628/CSEIT121541