Faculty of Computing

Permanent URI for this community: https://rda.sliit.lk/handle/123456789/4202


Search Results

Now showing 1 - 3 of 3
  • Publication (Embargo)
    Review On Hand Gesture Recognition for Bengali Sign Language
    (IEEE, 2022-04-14) Perera, D; Kanchana, B; Peiris, R; Madushan, K; Kasthurirathna, D
    Communication becomes difficult when interaction between the disabled community and the general public is required. Deaf communities in different regions communicate using different sign languages. For persons who are deaf or hard of hearing, sign language is the primary mode of communication. However, because the majority of the wider community does not understand sign language, going out in public is incredibly challenging for them. To make sign language understandable to the general public, computer vision-based methods are now widely used. Hand gesture recognition is one of the computer vision-based technologies for recognizing sign language, and it has attracted a great deal of research attention, having been a popular research area for a long time. In the area of hand gesture recognition in computer vision, some recent research has achieved outstanding improvements by employing deep learning techniques. In this paper we discuss the previous research methods, technologies, datasets and models used for Bengali sign language gestures that are interconnected in achieving a successful result. This review article therefore attempts to reveal the individual techniques used to overcome the challenges in this research area.
  • Publication (Embargo)
    EasyTalk: A Translator for Sri Lankan Sign Language using Machine Learning and Artificial Intelligence
    (IEEE, 2020-12-10) Manoj Kumar, D; Bavanraj, K; Thavananthan, S; Bastiansz, G. M. A. S; Harshanath, S. M. B; Alosious, J
    Sign language is used by the hearing- and speech-impaired community to communicate with each other. However, not all Sri Lankans understand sign language or, conversely, verbal languages, so a translation is required. Sri Lankan Sign Language is tightly bound to the hearing- and speech-impaired community. This paper presents EasyTalk, a sign language translator that can translate Sri Lankan Sign Language into text and audio formats, and translate verbal language into Sri Lankan Sign Language, helping users express their ideas. This is handled in four separate components. The first component, the Hand Gesture Detector, captures hand signs using pre-trained models. The Image Classifier component classifies and translates the detected hand signs. The Text and Voice Generator component produces text- or audio-formatted output for the identified hand signs. Finally, the Text to Sign Converter converts an entered English text back into sign-language-based animated images. Using these techniques, EasyTalk can detect, translate and produce relevant outputs with high accuracy, enabling effective and efficient communication between differently-abled people and the rest of the community.
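    The four-component architecture described in the abstract can be illustrated with a minimal sketch. Every class, method, and table below is a hypothetical stand-in, not EasyTalk's actual API: the real system uses pre-trained detection and classification models, whereas this sketch substitutes simple lookup tables so the data flow between components is visible.

    ```python
    class HandGestureDetector:
        """Stand-in for the component that captures hand signs from video
        frames; a real system would run a pre-trained detection model."""
        def detect(self, frame):
            # Treat the whole frame as a single detected hand region.
            return [frame]

    class ImageClassifier:
        """Maps a detected hand region to a sign label via a lookup table
        (a placeholder for a trained classifier)."""
        def __init__(self, sign_table):
            self.sign_table = sign_table
        def classify(self, region):
            return self.sign_table.get(region, "<unknown>")

    class TextAndVoiceGenerator:
        """Renders recognized sign labels as text (audio output omitted)."""
        def to_text(self, labels):
            return " ".join(labels)

    class TextToSignConverter:
        """Maps English words back to sign-animation identifiers."""
        def __init__(self, animation_table):
            self.animation_table = animation_table
        def convert(self, sentence):
            return [self.animation_table.get(word.lower(), "<no-sign>")
                    for word in sentence.split()]

    def translate_signs_to_text(frames, detector, classifier, generator):
        """Sign-to-text direction: detect regions, classify each, render text."""
        labels = [classifier.classify(region)
                  for frame in frames
                  for region in detector.detect(frame)]
        return generator.to_text(labels)

    # Sign-to-text direction with toy data.
    detector = HandGestureDetector()
    classifier = ImageClassifier({"frame-1": "hello", "frame-2": "world"})
    generator = TextAndVoiceGenerator()
    print(translate_signs_to_text(["frame-1", "frame-2"],
                                  detector, classifier, generator))

    # Text-to-sign direction with toy data.
    converter = TextToSignConverter({"hello": "anim_hello"})
    print(converter.convert("Hello there"))
    ```

    The point of the sketch is the separation of concerns: each component has one responsibility and a narrow interface, so any single stage (e.g. the classifier) could be swapped for a learned model without touching the others.
    
    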