Research Publications Authored by SLIIT Staff

Permanent URI for this community: https://rda.sliit.lk/handle/123456789/4195

This collection includes all SLIIT staff publications presented at external conferences and published in external journals. The materials are organized by faculty to facilitate easy retrieval.

Search Results

Now showing 1 - 10 of 13
  • Publication (Embargoed)
    Computer Vision Based Navigation Robot
    (IEEE, 2022-12-26) Haputhanthri, M; Himasha, C; Balasooriya, H; Herath, M; Rajapaksha, S; Harshanath, S.M.B.
    Most industrial environments and households need assistance when exploring unknown locations, owing to a lack of knowledge about the building structure and the various impediments that may be faced while transporting goods from one spot to another. This paper proposes the "Computer Vision-Based Navigation Robot" (CVBN Robot) as a strategy for indoor navigation with optimal accessibility, usability, and security, reducing the issues a user may encounter when traveling through indoor and outdoor areas, with real-time monitoring using the most up-to-date IoT technology. Since the intended users include both people who work in industry and physically challenged people who live alone, the CVBN Robot takes object-based inputs from its surroundings. The study also covers a variety of localization methods, sensors for obstacle detection, and an Internet of Things protocol connecting the server and the robot, which enables real-time position and status updates as the robot navigates an unfamiliar interior environment.
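The abstract does not specify the IoT message format used for the robot's real-time position and status updates, but such an update could be sketched as a small JSON payload; every field name below is an assumption for illustration, not the paper's actual protocol:

```python
import json

def make_status_message(robot_id, x, y, heading_deg, obstacle):
    """Encode one robot status update as JSON (all field names illustrative)."""
    return json.dumps({
        "robot": robot_id,
        "position": {"x": x, "y": y},   # indoor map coordinates (metres)
        "heading_deg": heading_deg,     # current orientation
        "obstacle": obstacle,           # True if an obstacle is detected
    })

# The server would receive and decode a message like this on each update.
msg = make_status_message("cvbn-01", 3.2, 1.5, 90.0, False)
```

In a real deployment this payload would typically travel over a lightweight publish/subscribe transport such as MQTT.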
  • Publication (Embargoed)
    Kaizen: Computer Vision Based Interactive Karate Training Platform
    (Institute of Electrical and Electronics Engineers Inc., 2022-11-04) Jayasekara, S. M; Weerasinghe, S. S.; Abayawardana, D.Y.W.; Welagedara, A. R.; Siriwardana, S.E.R.; Koralalage, M. N
    All types of martial arts consist of several forms of combat used in self-defense and are deeply rooted in many countries. Of all the martial art types, karate is considered the most well-known. Due to the pandemic situation in Sri Lanka, karate enthusiasts have lost the opportunity to train in a well-guided environment. As a result, even though virtual training came into play, it has continuously proved ineffective in evaluating the performance and accuracy of trainees. The main objective of the proposed system is to virtualize the processes of a physical karate dojo. Kaizen, a computer vision-based interactive karate training platform, is a web-based application that functions as a virtual instructor. The proposed system consists of two core components: training and assessment. The karate training component evaluates techniques against a set of predefined joint angles. The BlazePose model is used for keypoint detection, and analytic geometry is used to extract joint angles. It is also integrated with Amazon Polly, a deep learning-based Text-To-Speech (TTS) service, to produce real-time audio feedback. The assessment component can evaluate trainees through a built-in Smart Evaluator based on a Recurrent Neural Network (RNN). Additionally, the ability to manage assessments supports instructors in conducting all assessments virtually, overcoming the barriers of physical training.
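Extracting a joint angle from detected keypoints with analytic geometry, as the training component does, reduces to the angle between two vectors meeting at the joint. A minimal sketch (BlazePose itself omitted; the keypoint coordinates are hypothetical):

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by 2-D keypoints a-b-c."""
    v1 = (a[0] - b[0], a[1] - b[1])   # vector from joint to first keypoint
    v2 = (c[0] - b[0], c[1] - b[1])   # vector from joint to second keypoint
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos_angle = dot / (math.hypot(*v1) * math.hypot(*v2))
    # Clamp against floating-point drift before taking the arccosine.
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))

# A right-angle elbow: shoulder (0, 1), elbow (0, 0), wrist (1, 0).
elbow = joint_angle((0, 1), (0, 0), (1, 0))
```

The training component would compare such an angle against its predefined target angle for the technique, within some tolerance.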
  • Publication (Embargoed)
    Continuous American Sign Language Recognition Using Computer Vision And Deep Learning Technologies
    (IEEE, 2022-08-29) Senanayaka, S.A.M.A.S; Perera, R.A.D.B.S; Rankothge, W.; Usgalhewa, S.S.; Hettihewa, H.D
    Sign language is a non-verbal communication method used between deaf or hard-of-hearing people and hearing people. Automatic sign language detection is a complex computer vision problem due to the diversity of modern sign languages and variations in gesture positions, hand and finger forms, and body part placements. This research paper conducts a systematic experimental evaluation of computer vision-based approaches for sign language recognition, focusing on mapping non-segmented video streams to glosses to gain insights into sign language recognition. The proposed machine learning model consists of Recurrent Neural Network (RNN) layers such as Long Short-Term Memory (LSTM), implemented using current deep learning frameworks such as Google TensorFlow and the Keras API.
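The gating mechanism inside the LSTM layers the model relies on can be illustrated with a scalar toy version of a single LSTM step; in practice the authors use TensorFlow/Keras layers over high-dimensional frame features, and the weights below are arbitrary illustrative values:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One scalar LSTM step: gated update of cell state c and hidden state h."""
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])    # input gate
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])    # forget gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])    # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"])  # candidate state
    c = f * c_prev + i * g
    h = o * math.tanh(c)
    return h, c

# Arbitrary weights; a real model learns these from gloss-labelled video.
w = {k: 0.5 for k in ["wi", "ui", "bi", "wf", "uf", "bf",
                      "wo", "uo", "bo", "wg", "ug", "bg"]}
h, c = 0.0, 0.0
for x in [0.1, 0.4, 0.9]:   # toy per-frame feature sequence
    h, c = lstm_step(x, h, c, w)
```

The recurrence over frames is what lets the model map a non-segmented video stream to a gloss sequence.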
  • Publication (Open Access)
    Gesture driven smart home solution for bedridden people
    (Association for Computing Machinery, 2020-09-21) Jayaweera, N; Gamage, B; Samaraweera, M; Liyanage, S; Lokuliyana, S; Kuruppu, T
    Conversion of ordinary houses into smart homes has been a rising trend in recent years. Smart home development has focused on enhancing the quality of the daily activities of able-bodied people, but many smart homes have not been designed to be user-friendly for differently-abled people, such as immobile or bedridden people (those with at least one hand movable). Due to negligence and forgetfulness, electrical devices are sometimes left switched on regardless of necessity, which is one of the most common causes of domestic energy wastage. To overcome these challenges, this research presents an improved smart home design, MobiGO, which uses cameras to capture gestures and smart sockets to deliver gesture-driven outputs to home appliances. The camera captures the gestures made by the user, and the system processes those images using advanced gesture recognition and image processing technologies. The commands relevant to each gesture are sent to the specific appliance through an IoT device attached to it. The literature survey covers deep learning, Convolutional Neural Networks (CNN), image processing, gesture recognition, smart homes, and IoT. Finally, the authors conclude that MobiGO offers a smart home system that is safer and easier to use for people with disabilities.
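Once the CNN has classified a gesture, the remaining step is routing it to an appliance command. The abstract does not publish MobiGO's gesture vocabulary, so the labels and device names below are invented for illustration; only the dispatch pattern is the point:

```python
# Hypothetical mapping from recognized gesture label to (device, command).
GESTURE_COMMANDS = {
    "open_palm":   ("ceiling_fan", "ON"),
    "closed_fist": ("ceiling_fan", "OFF"),
    "thumbs_up":   ("bedside_lamp", "ON"),
    "thumbs_down": ("bedside_lamp", "OFF"),
}

def dispatch(gesture):
    """Translate a classifier's gesture label into a (device, command) pair."""
    if gesture not in GESTURE_COMMANDS:
        return None  # unrecognized gesture: do nothing, which fails safe
    return GESTURE_COMMANDS[gesture]
```

The returned pair would then be sent to the smart socket attached to that appliance.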
  • Publication (Open Access)
    Animal Classification System Based on Image Processing & Support Vector Machine
    (Scientific Research Publishing, 2016-01-15) Seneviratne, L; Shalika, A. W. D. U
    This project develops a system to help animal researchers and wildlife photographers overcome the many challenges they face in their day-to-day work. In the field, they may need to wait patiently for long hours, perhaps several days, in a given location and under severe weather conditions until they capture what they are interested in. There is also a big demand for rare wildlife photographs. The proposed method automates this task using a microcontroller-controlled camera, image processing, and machine learning techniques. First, with the aid of a microcontroller and four passive IR sensors, the system automatically detects the presence of an animal and rotates the camera toward it. A motion detection algorithm then centres the animal in the frame, and a high-end autofocus webcam captures it. The captured images are sent to a PC and compared with a photograph database to check whether the animal matches the photographer's choice; if it does, the system automatically captures more images. Although several technologies are available, none of them can recognize what they capture, and none detect the presence of an animal from different angles. Most available equipment uses a set of PIR sensors, and whatever disturbs the IR field is automatically captured and stored. Night-time images are black and white, with less detail and clarity due to the quality of the infrared flash; if the infrared flash is designed for best image quality, range is sacrificed. The photographer might be interested in a specific animal, but existing equipment has no facility to automatically recognize whether the captured animal is the photographer's choice.
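The Support Vector Machine named in the title ultimately applies a decision rule of the form sign(w·x + b) to an image's feature vector. The paper's actual features and kernel are not given here, so the 2-D feature vectors and trained weights below are made up purely to show the rule:

```python
def svm_predict(w, b, x):
    """Linear SVM decision rule: sign(w . x + b) -> +1 (target class) or -1."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else -1

# Toy 2-D image descriptors and weights; all values are illustrative only.
w, b = (1.0, -1.0), 0.0
match = svm_predict(w, b, (2.0, 1.0))   # +1: matches the photographer's choice
other = svm_predict(w, b, (0.0, 3.0))   # -1: some other animal
```

In the described system, a +1 prediction would trigger the camera to keep capturing more images of that animal.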
  • Publication (Embargoed)
    Smart Plant Disorder Identification using Computer Vision Technology
    (IEEE, 2020-11-04) Manoharan, S; Sariffodeen, B; Ramasinghe, K. T; Rajaratne, L. H; Kasthurirathna, D; Wijekoon, J
    Soil composition around the world is depleting at a rapid rate due to overexploitation through the unsustainable use of fertilizers. Streamlining the availability of nutrient-deficiency and fertilizer-related knowledge among impoverished farming communities would promote environmentally and scientifically sustainable farming practices, thus contributing to several Sustainable Development Goals set out by the United Nations. The most direct solution to inappropriate fertilizer usage is to add only the amounts of fertilizer required by plants to produce a significant yield without nutrition deficiencies. To this end, this paper proposes a Smart Nutrient Disorder Identification system that employs computer vision and machine learning techniques for identification, together with a decentralized blockchain platform to streamline a bias-free procurement system. The proposed system yielded 88% accuracy in disorder identification, while also enabling a secure, transparent flow of verified information.
  • Publication (Embargoed)
    JESSY: An Intelligence Travel Assistant
    (2021 3rd International Conference on Advancements in Computing (ICAC), SLIIT, 2021-12-09) Dilshan, K.K.D.N.; Parussella, U.M.D.M.; Herath, H.M.C.J.; Chandranath, C.A.J.P.; Thelijjagoda, S.; Jayalath, T.
    Sri Lanka has a lengthy history as a tourist destination, but due to political and social forces there have been numerous ups and downs. In Sri Lanka, tourism is regarded as the third most important economic activity. Like other countries around the world, Sri Lanka is also facing the Covid-19 pandemic, and the research team has identified several issues that need to be addressed immediately. When tourists visit Sri Lanka, some of them utilize package-based services provided by travel companies, whereas backpackers are independent budget travelers who book hotels, modes of transportation, meal plans, and destinations on their own. Thus, 'JESSY', an intelligent travel assistant, has been proposed with the intention of helping independent travelers travel around the country safely. Accordingly, 'JESSY' brings together everything a traveler should carry with them at all times. The main objective of the research team was to develop a mobile application that includes a leisure-time planner, trustworthy travel guide recommendations for booking, a virtual guide experience for travelers, a chatbot that offers automatic replies by analyzing and assessing data and information, utilities, resources, and safety features. 'JESSY' is available for both Android and iOS operating systems.
  • Publication (Embargoed)
    Computer Vision and NLP based Multimodal Ensemble Attentiveness Detection API for E-Learning
    (IEEE, 2021-04-21) Wijeratne, M. D; Lakmal, R. H. G. A; Geethadhari, W. K. S; Athalage, M. A; Gamage, A; Kasthurirathna, D
    Attention is a fundamental element of effective learning, memory, and interaction. With the evolution of technology in the modern digital age, however, learning has moved beyond traditional systems to more convenient online or e-learning systems. Nevertheless, unlike in traditional learning systems, detecting a student's attention in an e-learning environment remains one of the barely explored areas in Human-Computer Interaction. This study proposes a multimodal ensemble solution that uses computer vision, natural language processing, and deep learning to detect the level of attentiveness of a student in an e-learning environment, overcoming the barriers to identifying user attention in e-learning. The proposed system captures, processes, and predicts the attentiveness levels of individual students, which are subsequently aggregated through an ensemble model to derive an overall outcome more accurate than the individual model outcomes. The final outcome of the ensemble model is a range of percentages within which the student's attentiveness level lies during a single online lesson. This range is delivered to users through an Application Programming Interface.
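The abstract does not publish the exact ensemble rule that turns per-modality predictions into a percentage range, so the weighted-mean aggregation and the fixed margin below are illustrative assumptions that only sketch the shape of such an aggregation:

```python
def ensemble_range(scores, weights=None, margin=5.0):
    """Combine per-modality attentiveness percentages into a (low, high) range.

    Weighted-mean aggregation and the +/- margin are illustrative choices,
    not the paper's published method.
    """
    if weights is None:
        weights = [1.0] * len(scores)
    mean = sum(s * w for s, w in zip(scores, weights)) / sum(weights)
    # Clamp the range to valid percentages.
    return (max(0.0, mean - margin), min(100.0, mean + margin))

# Hypothetical vision, NLP, and deep-learning model scores for one lesson.
low, high = ensemble_range([72.0, 65.0, 80.0])
```

An API endpoint would then return this (low, high) pair as the student's attentiveness range for the lesson.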
  • Publication (Embargoed)
    Computer Vision Enabled Drowning Detection System
    (IEEE, 2021-12-09) Handalage, U; Nikapotha, N; Subasinghe, C; Prasanga, T; Thilakarthna, T; Kasthurirathna, D
    Safety is paramount in all swimming pools. Current systems intended to ensure safety at swimming pools have significant shortcomings, both technical, such as the need for underwater cameras, and methodological, such as the need for human intervention in the rescue mission. An automated vision-based monitoring system can help reduce drownings and assure pool safety effectively. This study introduces a novel technology that identifies drowning victims in a minimal amount of time and dispatches an automated drone to save them. Using convolutional neural network (CNN) models, it can detect a drowning person in three stages. Whenever such a situation is detected, a self-driven drone carrying an inflatable tube goes on a rescue mission, sounding an alarm to inform nearby lifeguards. The system also watches for potentially dangerous actions that could result in drowning. Performance evaluations of prototype experiments demonstrated the system's ability to reach a drowning victim in under a minute.
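Staged detection of the kind described naturally fits a small state machine that escalates as the CNN reports worse states and triggers the drone only once drowning is confirmed. The stage names below are assumptions for illustration; the abstract does not name the three stages:

```python
# Illustrative stage labels, ordered from safe to worst.
STAGES = ["normal", "distress", "active_drowning", "submerged"]

def update_stage(current, detection):
    """Escalate when the detector reports a worse state; reset on 'normal'."""
    if detection == "normal":
        return "normal"
    if STAGES.index(detection) > STAGES.index(current):
        return detection
    return current

def should_dispatch_drone(stage):
    """Launch the tube-carrying drone once drowning is confirmed."""
    return stage in ("active_drowning", "submerged")
```

Hysteresis like this keeps one noisy frame from either triggering or cancelling a rescue.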
  • Publication (Embargoed)
    Computer Vision for Autonomous Driving
    (IEEE, 2021-12-09) Kanchana, B; Peiris, R; Perera, D; Jayasinghe, D; Kasthurirathna, D
    Computer vision in self-driving vehicles can drive the research and development of futuristic vehicles that mitigate road accidents and support a safer driving environment. Using self-driving technology, riders can travel to their destinations without human interaction. However, self-driving vehicle technology is still at an early stage. Especially in busy areas such as cities, deploying such autonomous systems is challenging, because even a small error can cause a critical accident. To improve autonomous driving, computer vision and deep learning-based approaches tend to be used. Detecting obstacles on the road and analyzing the current traffic flow are the main focus areas for computer vision-based approaches, and many researchers also use deep learning-based approaches, such as convolutional neural networks, to enhance autonomous driving. This research paper focuses on the evaluation of computer vision as used in self-driving vehicles.