Research Publications
Permanent URI for this community: https://rda.sliit.lk/handle/123456789/4194
This main community comprises five sub-communities, each representing the academic contributions of SLIIT-affiliated personnel.
Search Results
2 results
Publication (Open Access): A Multifunctional Communication System for Differently Abled People (Sri Lanka Institute of Information Technology, 2023-03-25). Gunaratne, K.M.S.T.; Senanayaka, V.P.; Walakuluarachchi, E.I.; Malasinghe, L.

A person should be able to connect with other people to have a fulfilling life, and challenges such as being blind, deaf, or mute are a significant concern in this regard. World statistics and research show that 0.2% of the world's population lives with severe deafblindness. This project aims to design and develop a communication system to improve interactions between a person without any disability and a deafblind person, or between two deafblind people. Each may communicate differently, so this system provides a textual display for those who can see, speech output for those who can hear, and a tactile braille display for those who can neither see nor hear. The system can benefit educational institutes and care homes that support people with the above disabilities. Its primary purpose is to help differently abled people feel independent and confident by seeing, hearing, and talking to each other without facing translation barriers.

Publication (Embargo): Continuous American Sign Language Recognition Using Computer Vision and Deep Learning Technologies (IEEE, 2022-08-29). Senanayaka, S.A.M.A.S.; Perera, R.A.D.B.S.; Rankothge, W.; Usgalhewa, S.S.; Hettihewa, H.D.

Sign language is a non-verbal communication method used between people who are deaf or hard of hearing and hearing people. Automatic sign language recognition is a complex computer vision problem because of the diversity of modern sign languages and the variation in gesture positions, hand and finger shapes, and body part placements. This research paper aims to conduct a systematic experimental evaluation of computer vision-based approaches to sign language recognition.
The present research focuses on mapping non-segmented video streams to glosses to gain insights into sign language recognition. The proposed machine learning model consists of Recurrent Neural Network (RNN) layers, such as Long Short-Term Memory (LSTM) layers, and is implemented using current deep learning frameworks: Google TensorFlow and the Keras API.
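The LSTM layers mentioned in the abstract process a sequence one step at a time, combining each new input with an internal cell state through learned gates. The following is a minimal illustrative sketch of a single scalar LSTM cell step in pure Python; the weight values, variable names, and the toy "gesture feature" sequence are all assumptions for illustration, not the authors' TensorFlow/Keras implementation.

```python
import math

def sigmoid(x):
    """Logistic function, squashes any real value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One LSTM step for a 1-dimensional toy cell.

    x      : current input value (e.g. one gesture feature per frame)
    h_prev : previous hidden state
    c_prev : previous cell state
    w      : dict of scalar weights/biases for the four gates
    """
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])    # input gate
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])    # forget gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])    # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"])  # candidate state
    c = f * c_prev + i * g     # new cell state: keep some memory, add some new
    h = o * math.tanh(c)       # new hidden state, bounded in (-1, 1)
    return h, c

# Toy weights (all 0.5) and a short made-up per-frame feature sequence.
weights = {k: 0.5 for k in
           ["wi", "ui", "bi", "wf", "uf", "bf",
            "wo", "uo", "bo", "wg", "ug", "bg"]}
h, c = 0.0, 0.0
for x in [0.1, 0.4, -0.2]:
    h, c = lstm_step(x, h, c, weights)
```

In a real Keras model this per-step arithmetic is vectorized across many units and handled internally by an `LSTM` layer; the sketch only shows why the gating lets the network carry information across frames of a continuous signing video.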
