Sign Language Based Video Calling App
DOI: https://doi.org/10.32628/CSEIT121541

Keywords: Gesture Recognition, Static Gestures, American Sign Language

Abstract
Deaf and hard-of-hearing people communicate with others and within their own communities using sign language. Computer recognition of sign language begins with learning sign gestures and proceeds to the production of text and speech. Sign gestures are of two types: static and dynamic. Although static gesture recognition is simpler than dynamic gesture recognition, both types of recognition system are important to the human community. We have studied the steps required to convert static American Sign Language (ASL) gestures to readable text and selected the best available methods for each step. The general steps examined are data collection, pre-processing, transformation, feature extraction, and classification. We also offer some recommendations for further study in this field.
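The pipeline outlined in the abstract (data collection, pre-processing, feature extraction, classification) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function names, the toy two-dimensional "landmark" features, and the nearest-centroid classifier are all assumptions chosen for brevity; a real system would extract hand-landmark or image features from captured gesture frames and typically use a stronger classifier such as a random forest.

```python
# Minimal sketch of a static-gesture recognition pipeline:
# data collection -> pre-processing -> feature extraction -> classification.
# All names and the nearest-centroid model are illustrative assumptions.

import math
import random

random.seed(0)

def collect_samples(label, center, n=20):
    """Stand-in for data collection: noisy 2-D 'landmark' points per gesture."""
    return [(label, (center[0] + random.gauss(0, 0.1),
                     center[1] + random.gauss(0, 0.1))) for _ in range(n)]

def normalize(point, scale=1.0):
    """Pre-processing: scale features into a common range."""
    return (point[0] / scale, point[1] / scale)

def train_centroids(samples):
    """Classification model: one mean feature vector per gesture class."""
    sums, counts = {}, {}
    for label, (x, y) in samples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {lbl: (sx / counts[lbl], sy / counts[lbl])
            for lbl, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Assign the gesture whose centroid is nearest (Euclidean distance)."""
    return min(centroids, key=lambda lbl: math.dist(centroids[lbl], point))

# Two hypothetical static ASL gestures, 'A' and 'B', as toy feature clusters.
data = collect_samples("A", (0.0, 0.0)) + collect_samples("B", (1.0, 1.0))
data = [(lbl, normalize(pt)) for lbl, pt in data]
model = train_centroids(data)
print(predict(model, (0.05, -0.02)))  # near the 'A' cluster
print(predict(model, (0.95, 1.10)))   # near the 'B' cluster
```

The design point is only that each stage is a separable step: any stage (e.g. the classifier) can be swapped out without changing the rest of the pipeline.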
License
Copyright (c) IJSRCSEIT

This work is licensed under a Creative Commons Attribution 4.0 International License.