Research Publications
Permanent URI for this community: https://rda.sliit.lk/handle/123456789/4194
This main community comprises five sub-communities, each representing the academic contributions of SLIIT-affiliated personnel.
4 results
Search Results
Item Embargo
Deep Learning Based Sinhala Sign Language Recognition (Institute of Electrical and Electronics Engineers Inc., 2025) Samarakoon, S. C.; Weerasinghe, M.
Deaf individuals in Sri Lanka rely primarily on Sinhala Sign Language (SSL) for communication due to hearing impairments. However, effective communication between the Deaf and hearing populations remains challenging due to the limited knowledge of SSL among hearing individuals. This research aims to address this gap by developing an SSL gesture recognition system using computer vision and deep learning techniques. Specifically, the study compares the performance of 3D Convolutional Neural Networks (3D-CNNs) and a hybrid 2D Convolutional Neural Network with Long Short-Term Memory (2D-CNN+LSTM) for classifying short-duration spatiotemporal SSL gestures. Additionally, the research emphasizes reducing computational complexity to ensure efficient operation of the system on low-end devices. These contributions advance the accessibility and practical usability of gesture recognition systems for Sinhala Sign Language.

Publication Open Access
Real-time Multi-spectral Iris Extraction in Diversified Eye Images Utilizing Convolutional Neural Networks (IEEE, 2024-07-03) Rathnayake, R.; Madhushan, N.; Jeeva, A.; Darshani, D.; Pathirana, I.; Ghosh, S.; Subasinghe, A.; Silva, B. N.; Wijenayake, U.
Iris extraction has gained prominence due to its application versatility across many domains. However, achieving real-time iris extraction poses challenges due to several factors. Learning-based algorithms outperform non-learning-based iris extraction methods, delivering superior accuracy and performance. In response, this article proposes a Convolutional Neural Network (CNN)-based, accurate, direct iris extraction mechanism for a broad spectrum of eye images. The innovation of our approach lies in its proficiency with varied image types, including those where the iris is partially obscured by the eyelid.
We enhance the method’s reliability by introducing a modified Circular Hough Transform (CHT). Extensive testing demonstrates our method’s excellent real-time performance across diverse image types, even under challenging conditions. These findings underscore the proposed method’s potential as a cost-effective and computationally efficient solution for real-time iris extraction in varied application domains.

Publication Embargo
EasyChat: A Chat Application for Deaf/Dumb People to Communicate with the General Community (Springer, Cham, 2022-07-07) Sriyaratna, D.; Samararathne, W. A. H. K.; Gurusinghe, P. M.; Gunathilake, M. D. S. S.; Wijenayake, W. W. G. P. A.
Sign Language is closely associated with the deaf and dumb community as a means of communicating with each other. However, not everyone understands sign language or verbal languages, so these communities need proper ways to communicate online. Therefore, this paper presents EasyChat, a sign language chat application that can translate three main sign languages into Simple English text, as well as Simple English text into sign language, which would benefit the deaf/dumb community in expressing their ideas to the general community by simply capturing their British Sign Language (BSL) or Makaton gestures/symbols or lip movements. These steps are handled by four components. The first component converts BSL into Simple English, and the second component handles lip-reading conversion. The Makaton gesture and symbol conversion component produces Simple English text-formatted output for identified Makaton hand signs. Finally, the Text/Voice to Sign Converter converts entered English text back into sign language-based images. Using these components, EasyChat can detect relevant gesture and lip movement inputs with superior accuracy and translate them.
This can lead to more effective and efficient online communication between the community of deaf/dumb individuals and the general public.

Publication Embargo
Intelligent platform for visually impaired children for learning indoor and outdoor objects (IEEE, 2019-10-17) Jayawardena, C.; Balasuriya, B. K.; Lokuhettiarachchi, N. P.; Ranasinghe, A. R. M. D. N.
Using Artificial Intelligence and Computer Vision to assist visually impaired persons has been a widely discussed topic in recent years. Many researchers are focusing on combining several technologies to help these individuals perform day-to-day tasks. Although many technologies are already being used as platforms to assist them, comparatively little attention has been given to children aged between 6 and 14 years. Therefore, in this research we focus on how to use the latest advancements in Region-Based Convolutional Neural Networks (R-CNN), Recurrent Neural Networks (RNN), and speech models to provide an interactive learning experience to visually impaired children.
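The iris-extraction abstract above mentions a modified Circular Hough Transform (CHT), but its modification is not detailed on this page. As a minimal illustration of the standard CHT that such work builds on, the following NumPy sketch votes for circle centres of a known radius in a binary edge map; the synthetic ring image, the fixed radius, and the 100-step angular discretisation are assumptions chosen for illustration, not details from the paper.

```python
import numpy as np

def circular_hough(edges, radius, n_angles=100):
    """Standard fixed-radius Circular Hough Transform.

    Each edge pixel votes for every candidate centre lying at
    `radius` pixels from it; the accumulator peak marks the
    most likely circle centre.
    """
    h, w = edges.shape
    acc = np.zeros((h, w), dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        # Candidate centres on a circle of the given radius around (x, y).
        a = np.round(x - radius * np.cos(thetas)).astype(int)
        b = np.round(y - radius * np.sin(thetas)).astype(int)
        ok = (a >= 0) & (a < w) & (b >= 0) & (b < h)
        np.add.at(acc, (b[ok], a[ok]), 1)  # accumulate votes
    return acc

# Synthetic edge map: a thin ring of radius 10 centred at (32, 32),
# standing in for a detected iris boundary.
yy, xx = np.mgrid[0:64, 0:64]
edges = np.abs(np.hypot(xx - 32.0, yy - 32.0) - 10.0) < 0.7

acc = circular_hough(edges, radius=10)
cy, cx = np.unravel_index(acc.argmax(), acc.shape)
# The accumulator peak lands at (or within a pixel of) the true centre.
```

In a real pipeline a CNN would first produce the edge or segmentation map, and the CHT (or its modified variant) would refine the circular iris boundary; searching over a range of radii simply adds a third accumulator dimension.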
