An Innovative Artificial Replacement to Facilitate Communication Between Visually and Hearing-Impaired People

Authors

  • Sangeetha R, Research Scholar, VLSI, Vandayar Engineering College, Thanjavur, Tamil Nadu, India
  • Elakkiya S, Assistant Professor of ECE, Vandayar Engineering College, Thanjavur, Tamil Nadu, India

Keywords

Sign language synthesis, ANN, speech synthesis, feature selection, feature extraction

Abstract

Visually impaired and hearing-impaired people do not share a common communication channel. The proposed scheme includes algorithms for speech recognition and synthesis to aid communication between visually and hearing-impaired people, and the communication environment is designed to foster an immersive experience for both groups. The modality replacement framework combines a set of modules: sign language analysis and synthesis, speech analysis and synthesis, and an accelerometer-based gesture recognition algorithm. We propose a new technique, an artificial speaking mouth, for people who cannot speak. Some people can readily interpret their gestures, but others cannot understand the message being conveyed. To overcome this barrier, the artificial mouth is introduced. The system is based on a motion sensor: each message, together with its gesture template, is stored in a database. At run time, the template database is loaded into a microcontroller and the motion sensor is fixed to the user's hand. For every gesture, the motion sensor registers the acceleration and signals the microcontroller, which matches the motion against the stored templates and produces the corresponding speech signal.
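
The matching pipeline described in the abstract (capture acceleration samples, compare them against stored gesture templates, play the associated phrase) can be sketched in embedded C. The sketch below is a minimal illustration, not the authors' implementation: the window size, template count, and the driver functions accel_read() and speech_play() are hypothetical placeholders that a real port would bind to the target microcontroller's accelerometer and voice-playback hardware.

    /*
     * Minimal sketch of the gesture-to-speech loop described in the abstract.
     * accel_read() and speech_play() are hypothetical driver stubs.
     */
    #include <stdint.h>
    #include <stdlib.h>

    #define AXES      3   /* 3-axis accelerometer                      */
    #define WINDOW    32  /* acceleration samples captured per gesture */
    #define TEMPLATES 8   /* gesture templates held in the database    */

    typedef struct {
        int16_t pattern[WINDOW][AXES]; /* recorded acceleration template  */
        uint8_t phrase_id;             /* index of the stored speech clip */
    } gesture_template_t;

    /* Template database; the paper pre-loads this into the microcontroller. */
    static gesture_template_t db[TEMPLATES];

    /* Hypothetical hardware drivers (assumed, not from the paper). */
    extern void accel_read(int16_t sample[AXES]);  /* one accelerometer sample */
    extern void speech_play(uint8_t phrase_id);    /* play a recorded phrase   */

    /* Sum of absolute differences between a captured window and a template. */
    static uint32_t distance(int16_t win[WINDOW][AXES],
                             const gesture_template_t *t)
    {
        uint32_t d = 0;
        for (int i = 0; i < WINDOW; i++)
            for (int a = 0; a < AXES; a++)
                d += (uint32_t)abs(win[i][a] - t->pattern[i][a]);
        return d;
    }

    int main(void)
    {
        int16_t window[WINDOW][AXES];

        for (;;) {
            /* 1. Capture one gesture's worth of acceleration samples. */
            for (int i = 0; i < WINDOW; i++)
                accel_read(window[i]);

            /* 2. Nearest-neighbour match against the template database. */
            uint32_t best_d = UINT32_MAX;
            uint8_t  best = 0;
            for (uint8_t t = 0; t < TEMPLATES; t++) {
                uint32_t d = distance(window, &db[t]);
                if (d < best_d) { best_d = d; best = t; }
            }

            /* 3. Speak the phrase associated with the closest template. */
            speech_play(db[best].phrase_id);
        }
    }

A sum-of-absolute-differences nearest-neighbour match is used here only because it fits comfortably in microcontroller memory; the ANN keyword suggests the authors' own classifier is a neural network, which would take the place of the distance() comparison.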

Published

2018-06-30

Issue

Volume 3, Issue 5 (May-June 2018)

Section

Research Articles

How to Cite

[1]
Sangeetha R and Elakkiya S, "An Innovative Artificial Replacement to Facilitate Communication Between Visually and Hearing-Impaired People," International Journal of Scientific Research in Computer Science, Engineering and Information Technology (IJSRCSEIT), ISSN: 2456-3307, vol. 3, no. 5, pp. 503-511, May-June 2018.