Submit your paper: editorIJETjournal@gmail.com

Paper Title: Sign Language Recognition using Convolutional Neural Networks
ISSN: 2395-1303
Year of Publication: 2022
DOI: 10.5281/zenodo.7186436

MLA Style: Adusumilli Yagna Gayathri, Suraka Maha Lakshmi Reddy. "Sign Language Recognition using Convolutional Neural Networks." International Journal of Engineering and Techniques (IJET), Volume 8, Issue 5, September-October 2022, ISSN: 2395-1303, www.ijetjournal.org.

APA Style: Adusumilli Yagna Gayathri, Suraka Maha Lakshmi Reddy. (2022). Sign Language Recognition using Convolutional Neural Networks. International Journal of Engineering and Techniques (IJET), 8(5), September-October 2022, ISSN: 2395-1303, www.ijetjournal.org.

Abstract

Hand gesture interpretation is an appealing research area with numerous applications, including video games and telesurgery. Another important application of hand gesture recognition is the translation of sign language for non-verbal communication. Sign language is most commonly used by people with hearing or speech impairments to communicate among themselves or with hearing people, and it is an effective means of communication for the verbally and aurally impaired. The primitives of complex expressions in sign language are the configuration of the fingers, the orientation of the hand, and the position of the hand relative to the body. The proliferation of touchless applications and the rapid growth of the hearing-impaired population have increased the need for hand gesture recognition. However, an efficient recognition system must overcome the challenges of hand segmentation, local hand shape representation, global body configuration representation, and gesture sequence modeling.
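The hand shape representation challenge mentioned above is what a CNN's convolution and pooling layers address. The following is a minimal NumPy sketch of that idea (not the authors' implementation): a single convolution with an edge-detecting kernel, a ReLU, and a max-pooling step applied to a toy binary "hand" image.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation), as in a CNN layer."""
    h, w = kernel.shape
    rows = image.shape[0] - h + 1
    cols = image.shape[1] - w + 1
    out = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling to downsample the feature map."""
    rows, cols = fmap.shape[0] // size, fmap.shape[1] // size
    out = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            out[i, j] = fmap[i * size:(i + 1) * size,
                             j * size:(j + 1) * size].max()
    return out

# Toy 6x6 binary image with a vertical "finger" in the middle columns,
# and a Sobel-like kernel that responds to vertical edges.
image = np.zeros((6, 6))
image[:, 2:4] = 1.0
kernel = np.array([[1, 0, -1],
                   [2, 0, -2],
                   [1, 0, -1]], dtype=float)

# conv -> ReLU -> pool: the classic feature-extraction block of a CNN.
features = max_pool(np.maximum(conv2d(image, kernel), 0))
print(features.shape)  # (2, 2)
```

In a real recognizer, many such learned kernels are stacked in several layers, and the pooled features feed a fully connected classifier over the sign vocabulary.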
The proposed platform helps deaf and hard-of-hearing people communicate by conveying the message of one person to the other, using multiple deep learning architectures and CNN-based classification techniques for hand segmentation, local and global feature representation, and sequence feature globalization and recognition. Gestures and voice messages are transcribed to text or voice according to the user's requirement. A vision-based framework allows users to interact with the other person through hand gestures: the hand gestures are identified using image processing frameworks, the corresponding messages are predicted, and the messages are delivered to the recipient as text or audio. In this way, communication between two people is made simpler.

Keywords — Gesture, sign language, segmentation, vision-based, communication, CNN.
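The gesture-to-message pipeline described above (segment the hand, recognize the gesture, deliver text or audio) can be sketched as follows. This is an illustration only: the label set, the HSV skin thresholds, and the `classify_gesture` stub are assumptions standing in for the CNN classifier, not the authors' implementation.

```python
import numpy as np

# Hypothetical label set; the actual sign vocabulary is not specified here.
LABELS = ["HELLO", "YES", "NO", "THANK YOU"]

def segment_hand(frame_hsv, lower=(0, 40, 60), upper=(25, 255, 255)):
    """Crude skin-colour segmentation: a binary mask of pixels whose
    HSV values fall inside the given range (thresholds are assumptions)."""
    lo, hi = np.array(lower), np.array(upper)
    return np.all((frame_hsv >= lo) & (frame_hsv <= hi), axis=-1)

def classify_gesture(mask):
    """Stand-in for the CNN classifier: picks a label from the fraction
    of hand pixels. A real system would feed the segmented crop to the CNN."""
    coverage = mask.mean()
    return LABELS[min(int(coverage * len(LABELS)), len(LABELS) - 1)]

def gesture_to_message(frame_hsv):
    """Segment the hand, recognize the gesture, return the text message."""
    mask = segment_hand(frame_hsv)
    return classify_gesture(mask)

# Synthetic 4x4 HSV "frame": the left half looks like skin.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[:, :2] = (10, 120, 150)
print(gesture_to_message(frame))
```

In the full system, frames would come from a webcam (e.g. via OpenCV), and the returned text could be passed to a text-to-speech library such as gTTS to produce the audio message.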