SLIIT Conference and Symposium Proceedings
Permanent URI for this community: https://rda.sliit.lk/handle/123456789/295
All SLIIT faculties annually conduct international conferences and symposia. Publications from these events are included in this collection.
Search Results (2 results)
Publication: Open Access
Computer Vision Controlled Humanoid Robotic Arm (SLIIT City UNI, 2025-07-08)
Firdouse, M S; Benorith, L
This paper presents the design and implementation of a low-cost, vision-based, gesture-controlled humanoid robotic arm that mimics human hand and wrist movements in real time. The system uses a USB webcam and MediaPipe for hand landmark detection, OpenCV for image processing, and a Raspberry Pi 4 to compute landmark vectors and control servo motors via a PCA9685 driver. Calibration modes were introduced for each joint to ensure accurate servo mapping. The solution supports full gesture-based manipulation of a five-fingered robotic hand, including wrist orientation, with minimal latency and no physical contact. The system provides a more intuitive and natural method for robotic arm control than traditional input devices and has potential applications in prosthetics, automation, and human-robot interaction.

Publication: Open Access
Utalk: Sri Lankan Sign Language Converter Mobile App using Image Processing and Machine Learning (2020 2nd International Conference on Advancements in Computing (ICAC), SLIIT, 2020-12-10)
Dissanayake, I.S.M.; Wickramanayake, P.J.; Mudunkotuwa, M.A.S.; Fernando, P.W.N.
Deaf and mute people face various difficulties in daily activities due to the communication barrier caused by the lack of Sign Language knowledge in society. Many studies have attempted to mitigate this barrier using Computer Vision based techniques to interpret signs and express them in natural language, empowering deaf and mute people to communicate with hearing people easily. However, most such research focuses only on interpreting static signs, and understanding dynamic signs is not well explored. Understanding dynamic visual content (videos) and translating it into natural language is a challenging problem.
Further, because of differences between sign languages, a system developed for one sign language cannot be directly used to understand another, e.g., a system developed for American Sign Language cannot be used to interpret Sri Lankan Sign Language. In this study, we develop a system called Utalk to interpret static as well as dynamic signs expressed in Sri Lankan Sign Language. The proposed system utilizes Computer Vision and Machine Learning techniques to interpret signs performed by deaf and mute people. Utalk is a mobile application, hence it is non-intrusive and cost-effective. We demonstrate the effectiveness of our system using a newly collected dataset.
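The robotic-arm abstract above mentions per-joint calibration so that angles derived from hand landmarks map accurately onto servo positions via the PCA9685 driver. A minimal sketch of such a mapping is shown below; the pulse-width limits and function name are assumed typical hobby-servo values for illustration, not details taken from the paper.

```python
# Hypothetical sketch of a joint-calibration mapping: a joint angle in
# degrees is converted to the 12-bit on-counts value a PCA9685 expects.
# The 500-2500 microsecond pulse range is an assumed default; a real arm
# would calibrate these limits per servo, as the paper describes.

PWM_FREQ_HZ = 50                         # standard servo frame rate (20 ms period)
PERIOD_US = 1_000_000 / PWM_FREQ_HZ      # PWM period in microseconds
RESOLUTION = 4096                        # PCA9685 is a 12-bit PWM controller

def angle_to_counts(angle_deg, min_pulse_us=500, max_pulse_us=2500):
    """Map a joint angle in [0, 180] degrees to a PCA9685 counts value."""
    angle_deg = max(0.0, min(180.0, angle_deg))  # clamp out-of-range input
    pulse_us = min_pulse_us + (max_pulse_us - min_pulse_us) * angle_deg / 180.0
    return round(pulse_us / PERIOD_US * RESOLUTION)
```

For example, 90 degrees corresponds to a 1500 microsecond pulse, which at 50 Hz and 12-bit resolution is 307 counts; clamping keeps noisy landmark-derived angles from driving a servo past its mechanical limits.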
