Research Publications
Permanent URI for this community: https://rda.sliit.lk/handle/123456789/4194
This main community comprises five sub-communities, each representing the academic contributions of SLIIT-affiliated personnel.
Search Results (16 results)
AI Interviews with Facial Emotion Recognition for Real-Time Feedback and Career Recommendations (Institute of Electrical and Electronics Engineers Inc., 2025) [Embargo]
Herath, R.P.N.M.; Arachchi, D.S.U.; Gunaratne, M.H.B.P.T.; Hansana, K.T.; Wijayasekara, S.K.; Jayasinghe, D.
The hiring process is complex, requiring evaluation of candidates across multiple dimensions, including technical proficiency, behavioral traits, and credibility. Traditional interviews often suffer from biases and inefficiencies. This research presents an AI-driven interview system integrating Machine Learning (ML), Natural Language Processing (NLP), and Computer Vision to automate and enhance recruitment. The system generates contextual interview questions, evaluates candidate responses using LLM-based scoring models, and provides real-time feedback for engagement. It includes speech-to-text transcription and offensive-word detection to ensure professionalism. The behavioral analysis module leverages facial emotion recognition and computer vision to assess non-verbal cues such as confidence and attentiveness. Additionally, Curriculum Vitae (CV) parsing and LinkedIn data extraction use NLP-based entity recognition to extract educational background, work experience, and key skills, enabling personalized interviews. The technical assessment module administers real-time coding challenges, evaluating solutions for correctness, efficiency, and best practices while providing AI-generated feedback. By automating these key hiring aspects, the system enhances objectivity, efficiency, and decision-making, ensuring a data-driven, unbiased, and scalable selection process while improving candidate experience and employer insights.

Computer Vision Controlled Humanoid Robotic Arm (SLIIT City UNI, 2025-07-08) [Open Access]
Firdouse, M.S.; Benorith, L.
This paper presents the design and implementation of a low-cost, vision-based, gesture-controlled humanoid robotic arm that mimics human hand and wrist movements in real time. The system uses a USB webcam and MediaPipe for hand landmark detection, OpenCV for image processing, and a Raspberry Pi 4 to compute landmark vectors and control servo motors via a PCA9685 driver. Calibration modes were introduced for each joint to ensure accurate servo mapping. The solution supports full gesture-based manipulation of a five-fingered robotic hand, including wrist orientation, with minimal latency and no physical contact. The system provides a more intuitive and natural method for robotic arm control compared with traditional input devices and has potential applications in prosthetics, automation, and human-robot interaction.
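As a rough illustration of the pipeline this abstract describes (webcam frames into MediaPipe hand landmarks, landmark vectors into servo angles on a PCA9685), a minimal Python sketch might look as follows. The servo channel assignment, the finger-curl heuristic, and the use of the Adafruit ServoKit library are assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch of a webcam -> MediaPipe -> PCA9685 pipeline.
# Channel mapping and the gesture-to-angle heuristic are assumptions.
import cv2
import mediapipe as mp
import numpy as np
from adafruit_servokit import ServoKit  # PCA9685 16-channel servo driver

kit = ServoKit(channels=16)                     # PCA9685 over I2C
hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)                       # USB webcam

def finger_curl(lm, tip, base):
    """Fingertip-to-wrist distance normalized by knuckle-to-wrist distance."""
    t = np.array([lm[tip].x, lm[tip].y])
    b = np.array([lm[base].x, lm[base].y])
    w = np.array([lm[0].x, lm[0].y])            # landmark 0 is the wrist
    return np.linalg.norm(t - w) / (np.linalg.norm(b - w) + 1e-6)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    res = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if res.multi_hand_landmarks:
        lm = res.multi_hand_landmarks[0].landmark
        # Map index-finger extension (tip 8, MCP knuckle 5) onto servo 0:
        # a curled finger (~1.0) -> 0 degrees, fully extended (~2.0) -> 180.
        curl = np.clip(finger_curl(lm, tip=8, base=5), 1.0, 2.0)
        kit.servo[0].angle = float(np.interp(curl, [1.0, 2.0], [0, 180]))
```

A real implementation would repeat the mapping for all five fingers and the wrist, with the per-joint calibration the paper describes replacing the fixed [1.0, 2.0] range assumed here.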
Computer Vision Based Navigation Robot (IEEE, 2022-12-26) [Embargo]
Haputhanthri, M.; Himasha, C.; Balasooriya, H.; Herath, M.; Rajapaksha, S.; Harshanath, S.M.B.
In most industrial environments and homes, exploring unfamiliar locations is difficult because of limited knowledge of the building structure and of the obstacles that may be encountered while transporting goods from one spot to another. This paper proposes the Computer Vision-Based Navigation (CVBN) Robot, a solution for indoor navigation that offers optimal accessibility, usability, and security, with real-time monitoring built on up-to-date IoT technology. Because its intended users include both industrial workers and physically challenged people living alone, the CVBN Robot takes object-based inputs from its surroundings. The study also covers methods for localization, sensors for obstacle detection, and an IoT protocol between the server and the robot that enables real-time position and status updates as the robot navigates an unknown indoor environment.

Kaizen: Computer Vision Based Interactive Karate Training Platform (Institute of Electrical and Electronics Engineers Inc., 2022-11-04) [Embargo]
Jayasekara, S.M.; Weerasinghe, S.S.; Abayawardana, D.Y.W.; Welagedara, A.R.; Siriwardana, S.E.R.; Koralalage, M.N.
All types of martial arts consist of several forms of combat used in self-defense and are deeply rooted in many countries. Of all the martial art types, karate is considered the most widely known. Owing to the pandemic situation in Sri Lanka, karate enthusiasts lost the opportunity to train in a well-guided environment, and although virtual training came into play, it repeatedly proved ineffective at evaluating the performance and accuracy of trainees. The main objective of the proposed system is to virtualize the processes of a physical karate dojo. Kaizen, a computer vision-based interactive karate training platform, is a web-based application that functions as a virtual instructor. The system consists of two core components, training and assessment. The karate training component evaluates techniques against a set of predefined joint angles: the BlazePose model is used for keypoint detection, and analytic geometry is used to extract joint angles. It is also integrated with Amazon Polly, a deep learning-based Text-To-Speech (TTS) service, to produce real-time audio feedback. The assessment component can evaluate trainees through a built-in smart evaluator based on a Recurrent Neural Network (RNN). Additionally, assessment-management features support instructors in conducting all assessments virtually, overcoming the barriers of physical training.
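Kaizen's joint-angle measurement (BlazePose keypoints plus analytic geometry) reduces to computing the angle at a middle joint from two limb vectors. Below is a minimal sketch using the BlazePose left shoulder/elbow/wrist landmark indices (11, 13, 15) as a worked example; the target angle and tolerance are illustrative assumptions, since the paper's predefined values are not given here.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c, each an (x, y) pair."""
    a, b, c = map(np.asarray, (a, b, c))
    v1, v2 = a - b, c - b
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# BlazePose indices: 11 = left shoulder, 13 = left elbow, 15 = left wrist.
# Compare the measured elbow angle with a predefined target for a technique;
# the 90-degree target and 15-degree tolerance are assumptions.
def check_technique(landmarks, target_deg=90.0, tol_deg=15.0):
    angle = joint_angle(landmarks[11], landmarks[13], landmarks[15])
    return abs(angle - target_deg) <= tol_deg, angle
```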
Continuous American Sign Language Recognition Using Computer Vision and Deep Learning Technologies (IEEE, 2022-08-29) [Embargo]
Senanayaka, S.A.M.A.S.; Perera, R.A.D.B.S.; Rankothge, W.; Usgalhewa, S.S.; Hettihewa, H.D.
Sign language is a non-verbal communication method used between people who are deaf or hard of hearing and hearing people. Automatic sign language detection is a complex computer vision problem because of the diversity of modern sign languages and the variations in gesture positions, hand and finger shapes, and body part placements. This paper conducts a systematic experimental evaluation of computer vision-based approaches to sign language recognition, focusing on mapping non-segmented video streams to glosses. The proposed machine learning model consists of Recurrent Neural Network (RNN) layers such as Long Short-Term Memory (LSTM) and is implemented using current deep learning frameworks such as Google TensorFlow and the Keras API.
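The abstract names the model family (stacked RNN/LSTM layers built with TensorFlow and the Keras API, mapping video streams to glosses) but not the architecture. A minimal Keras sketch under assumed shapes, 30-frame windows of 126 keypoint features (2 hands x 21 landmarks x 3 coordinates) and a 100-gloss vocabulary, might look like this; none of these sizes come from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Assumed shapes: 30 frames per window, 126 features per frame, 100 glosses.
SEQ_LEN, N_FEATURES, N_GLOSSES = 30, 126, 100

model = tf.keras.Sequential([
    layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    layers.LSTM(128, return_sequences=True),   # stacked LSTM layers
    layers.LSTM(64),
    layers.Dense(64, activation="relu"),
    layers.Dense(N_GLOSSES, activation="softmax"),  # gloss probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```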
Gesture driven smart home solution for bedridden people (Association for Computing Machinery, 2020-09-21) [Open Access]
Jayaweera, N.; Gamage, B.; Samaraweera, M.; Liyanage, S.; Lokuliyana, S.; Kuruppu, T.
Converting ordinary houses into smart homes has been a rising trend in recent years. Smart home development has focused on enhancing the quality of the daily activities of able-bodied people, but many smart homes are not designed to be user friendly for differently-abled people such as those who are immobile or bedridden (disabled people with at least one movable hand). Due to negligence and forgetfulness, there are cases where electrical devices are left switched on regardless of necessity, one of the most common sources of domestic energy wastage. To overcome these challenges, this research presents an improved smart home design, MobiGO, which uses cameras to capture gestures and smart sockets to deliver gesture-driven outputs to home appliances. The camera captures the gestures made by the user, and the system processes those images through advanced gesture recognition and image processing technologies; the commands relevant to each gesture are sent to the specific appliance through an IoT device attached to it. The supporting literature survey covers Deep Learning, Convolutional Neural Networks (CNNs), image processing, gesture recognition, smart homes, and IoT. The authors conclude that MobiGO offers a smart home system that is safer and easier to use for people with disabilities.

Animal Classification System Based on Image Processing & Support Vector Machine (Scientific Research Publishing, 2016-01-15) [Open Access]
Seneviratne, L.; Shalika, A.W.D.U.
This project develops a system that helps animal researchers and wildlife photographers overcome the many challenges of their day-to-day work, in which they must wait patiently for long hours, sometimes several days, in remote locations and under severe weather conditions until they capture the subject they are interested in; rare wildlife photographs are also in high demand. The proposed method automates this task using a microcontroller-controlled camera together with image processing and machine learning techniques. First, with the aid of a microcontroller and four passive IR sensors, the system automatically detects the presence of an animal and rotates the camera toward it. A motion detection algorithm then centers the animal in the frame, and a high-end autofocus webcam captures it. The captured images are sent to a PC and compared against a photograph database to check whether the animal matches the photographer's choice; if it does, the system automatically captures more images. Although several technologies are available, none of them can recognize what they capture, and none detect animal presence from different angles. Most available equipment uses a set of PIR sensors, and whatever disturbs the IR field is automatically captured and stored. Night-time images are black and white, with less detail and clarity, owing to infrared flash quality; if the infrared flash is designed for best image quality, range is sacrificed. The photographer might be interested in a specific animal, but existing equipment has no facility to recognize automatically whether the captured animal is the photographer's choice.
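The recognition step, checking whether a captured frame shows the photographer's chosen animal with an SVM, can be prototyped with off-the-shelf tools. The HOG feature front end, RBF kernel, and probability threshold below are assumptions layered on the paper's stated "image processing plus SVM" approach, not its actual design.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import SVC

def hog_features(image_gray):
    """Resize to a fixed size and extract HOG descriptors."""
    img = resize(image_gray, (128, 128), anti_aliasing=True)
    return hog(img, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2))

def train_classifier(X_train, y_train):
    """X_train: grayscale animal images; y_train: species labels."""
    feats = np.array([hog_features(im) for im in X_train])
    clf = SVC(kernel="rbf", probability=True)  # RBF kernel is an assumption
    return clf.fit(feats, y_train)

def is_target(clf, frame_gray, target_label, threshold=0.8):
    """True if the SVM judges the frame to show the photographer's choice."""
    proba = clf.predict_proba([hog_features(frame_gray)])[0]
    return proba[list(clf.classes_).index(target_label)] >= threshold
```

In the paper's scenario, is_target would gate the "capture more images" behavior, with the 0.8 threshold tuned to trade off missed shots against wasted captures.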
Utalk: Sri Lankan Sign Language Converter Mobile App using Image Processing and Machine Learning (2020 2nd International Conference on Advancements in Computing (ICAC), SLIIT, 2020-12-10) [Open Access]
Dissanayake, I.S.M.; Wickramanayake, P.J.; Mudunkotuwa, M.A.S.; Fernando, P.W.N.
Deaf and mute people face various difficulties in daily activities because of the communication barrier caused by the lack of sign language knowledge in society. Many studies have attempted to mitigate this barrier using computer vision-based techniques that interpret signs and express them in natural language, empowering deaf and mute people to communicate easily with hearing people. However, most such research focuses only on interpreting static signs; understanding dynamic signs is not well explored, and understanding dynamic visual content (videos) and translating it into natural language is a challenging problem. Further, because sign languages differ, a system developed for one sign language cannot be used directly to understand another; for example, a system developed for American Sign Language cannot interpret Sri Lankan Sign Language. This study develops a system called Utalk to interpret both static and dynamic signs expressed in Sri Lankan Sign Language. The proposed system utilizes computer vision and machine learning techniques to interpret signs performed by deaf and mute people. Utalk is a mobile application, hence non-intrusive and cost-effective. The effectiveness of the system is demonstrated on a newly collected dataset.

Smart Plant Disorder Identification using Computer Vision Technology (IEEE, 2020-11-04) [Embargo]
Manoharan, S.; Sariffodeen, B.; Ramasinghe, K.T.; Rajaratne, L.H.; Kasthurirathna, D.; Wijekoon, J.
Soil composition around the world is depleting at a rapid rate because of overexploitation through the unsustainable use of fertilizers. Streamlining the availability of knowledge about nutrient deficiencies and fertilizers among impoverished farming communities would promote environmentally and scientifically sustainable farming practices, thus contributing to several Sustainable Development Goals set out by the United Nations. The most direct solution to inappropriate fertilizer usage is to add only the amounts of fertilizer that plants require to produce a significant yield without nutrition deficiencies. To this end, this paper proposes a Smart Nutrient Disorder Identification system that employs computer vision and machine learning techniques for identification and a decentralized blockchain platform to streamline an unbiased procurement system. The proposed system yielded 88% accuracy in disorder identification while also enabling a secure, transparent flow of verified information.
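The abstract reports 88% identification accuracy but does not specify the model. One common pattern for leaf-image disorder classification is transfer learning from an ImageNet backbone; the sketch below assumes MobileNetV2 and five disorder classes purely for illustration, and should not be read as the authors' architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

N_CLASSES = 5  # assumed number of nutrient-disorder classes

# Pretrained feature extractor; the choice of MobileNetV2 is an assumption.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze ImageNet features; fine-tune later if needed

model = tf.keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```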
JESSY: An Intelligence Travel Assistant (2021 3rd International Conference on Advancements in Computing (ICAC), SLIIT, 2021-12-09) [Embargo]
Dilshan, K.K.D.N.; Parussella, U.M.D.M.; Herath, H.M.C.J.; Chandranath, C.A.J.P.; Thelijjagoda, S.; Jayalath, T.
Sri Lanka has a long history as a tourist destination, with numerous ups and downs driven by political and social forces. Tourism is regarded as the third most important economic activity in Sri Lanka. Like other countries around the world, Sri Lanka is also facing the Covid-19 pandemic, and the research team identified several issues that need to be addressed immediately. When tourists visit Sri Lanka, some utilize package-based services provided by travel companies, whereas backpackers are independent budget travelers who book hotels, modes of transportation, meal plans, and destinations on their own. Thus 'JESSY', an intelligent travel assistant, is proposed with the intention of helping independent travelers move around the country safely. Accordingly, 'JESSY' brings together everything a traveler should carry at all times: the team's main objective was to develop a mobile application that includes a leisure-time planner, trustworthy travel guide recommendations for booking, a virtual guide experience for travelers, a chatbot that offers automatic replies by analyzing and assessing data and information, utilities, resources, and safety features. 'JESSY' is available for both the Android and iOS operating systems.