Faculty of Computing

Permanent URI for this community: https://rda.sliit.lk/handle/123456789/4202

Search Results

Now showing 1 - 10 of 11
  • Publication (Embargo)
    Computer Vision Based Navigation Robot
    (IEEE, 2022-12-26) Haputhanthri, M; Himasha, C; Balasooriya, H; Herath, M; Rajapaksha, S; Harshanath, S.M.B.
    Workers in industrial environments and households often need help when exploring unknown locations, owing to a lack of knowledge about the building structure and the various obstacles that may be encountered while transporting goods from one spot to another. To address this problem, this paper proposes the "Computer Vision-Based Navigation Robot" (CVBN Robot), a solution for indoor navigation with optimal accessibility, usability, and security, reducing the issues a user may face when travelling through indoor and outdoor areas, with real-time monitoring via up-to-date IoT technology. Since its intended users include industrial workers as well as physically challenged people who live alone, the CVBN Robot takes object-based inputs from its surroundings. This study also covers a variety of localization methods, sensors for obstacle detection, and an Internet of Things protocol connecting the server and the robot, which enables real-time position and status updates as the robot navigates an unfamiliar indoor environment.
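As a sketch of the real-time status updates described in this abstract, the snippet below builds a JSON message a robot like this might publish to its server. The field names and schema are assumptions for illustration; the paper does not specify its IoT message format.

```python
import json
import time

def make_status_update(robot_id, x, y, heading, obstacle_detected):
    """Build a JSON status message the robot could publish to the server.

    The field names here are illustrative assumptions, not the paper's schema.
    """
    return json.dumps({
        "robot_id": robot_id,
        "position": {"x": x, "y": y},
        "heading_deg": heading,
        "obstacle": obstacle_detected,
        "timestamp": time.time(),
    })

msg = make_status_update("cvbn-01", 3.2, 1.5, 90.0, False)
decoded = json.loads(msg)
print(decoded["position"])  # {'x': 3.2, 'y': 1.5}
```

In practice such a payload would be sent over a lightweight IoT protocol (e.g. MQTT) so the server can track position and status in real time.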
  • Publication (Embargo)
    Kaizen: Computer Vision Based Interactive Karate Training Platform
    (Institute of Electrical and Electronics Engineers Inc., 2022-11-04) Jayasekara, S. M; Weerasinghe, S. S.; Abayawardana, D.Y.W.; Welagedara, A. R.; Siriwardana, S.E.R.; Koralalage, M. N
    All types of martial arts consist of several forms of combat used in self-defense and are deeply rooted in many countries; of these, karate is considered the most well-known. Due to the pandemic situation in Sri Lanka, karate enthusiasts lost the opportunity to train in a well-guided environment. Although virtual training came into play as a result, it has continuously proved ineffective in evaluating the performance and accuracy of trainees. The main objective of the proposed system is to virtualize the processes of a physical karate dojo. Kaizen, a computer vision-based interactive karate training platform, is a web-based application that functions as a virtual instructor. The proposed system consists of two core components: training and assessments. The karate training component evaluates techniques against a set of predefined joint angles; the BlazePose model is used for keypoint detection, and analytic geometry is used to extract joint angles. It is also integrated with Amazon Polly, a deep learning-based text-to-speech (TTS) service, to produce real-time audio feedback. The assessment component can evaluate trainees through a built-in smart evaluator based on a Recurrent Neural Network (RNN). Additionally, assessment-management features support instructors in conducting all assessments virtually, overcoming the barriers of physical training.
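The joint-angle extraction via analytic geometry described above can be sketched as follows: given three 2D keypoints (e.g. shoulder, elbow, wrist from a pose model such as BlazePose), the angle at the middle joint is the angle between the two limb vectors. This is a minimal illustration, not Kaizen's actual code.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by keypoints a-b-c, each (x, y)."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp for float safety
    return math.degrees(math.acos(cos_t))

# Elbow at the origin, shoulder above it, wrist to the right: a right angle.
angle = joint_angle((0, 1), (0, 0), (1, 0))
print(round(angle, 1))  # 90.0
```

A trainee's technique could then be scored by comparing such angles against the predefined target angles for each pose.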
  • Publication (Embargo)
    Continuous American Sign Language Recognition Using Computer Vision And Deep Learning Technologies
    (IEEE, 2022-08-29) Senanayaka, S.A.M.A.S; Perera, R.A.D.B.S; Rankothge, W.; Usgalhewa, S.S.; Hettihewa, H.D
    Sign language is a non-verbal communication method used between deaf or hard-of-hearing people and hearing people. Automatic sign language detection is a complex computer vision problem due to the diversity of modern sign languages and the variations in gesture positions, hand and finger form, and body part placement. This research paper conducts a systematic experimental evaluation of computer vision-based approaches for sign language recognition. The present research focuses on mapping non-segmented video streams to glosses to gain insights into sign language recognition. The proposed machine learning model consists of Recurrent Neural Network (RNN) layers such as Long Short-Term Memory (LSTM). The model is implemented using current deep learning frameworks such as Google TensorFlow and the Keras API.
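The paper's model is built from LSTM layers in TensorFlow/Keras. As a toy illustration of the recurrence such a layer computes over per-frame features, here is a single-unit scalar LSTM step in plain Python, with made-up weights; a real model would use vector states and learned parameters.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One step of a single-unit LSTM cell (scalar input and state).

    w maps each gate name to (input weight, recurrent weight, bias).
    """
    f = sigmoid(w["f"][0] * x + w["f"][1] * h_prev + w["f"][2])  # forget gate
    i = sigmoid(w["i"][0] * x + w["i"][1] * h_prev + w["i"][2])  # input gate
    o = sigmoid(w["o"][0] * x + w["o"][1] * h_prev + w["o"][2])  # output gate
    g = math.tanh(w["g"][0] * x + w["g"][1] * h_prev + w["g"][2])  # candidate
    c = f * c_prev + i * g
    h = o * math.tanh(c)
    return h, c

weights = {k: (0.5, 0.5, 0.0) for k in ("f", "i", "o", "g")}  # toy values
h, c = 0.0, 0.0
for frame_feature in [0.2, 0.8, 0.5]:  # e.g. per-frame hand features
    h, c = lstm_step(frame_feature, h, c, weights)
print(h)
```

The final hidden state (or the sequence of states) would feed a classifier that maps the video stream to glosses.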
  • Publication (Open Access)
    Gesture driven smart home solution for bedridden people
    (Association for Computing Machinery, 2020-09-21) Jayaweera, N; Gamage, B; Samaraweera, M; Liyanage, S; Lokuliyana, S; Kuruppu, T
    Conversion of ordinary houses into smart homes has been a rising trend in recent years. Smart home development has focused on enhancing the daily activities of able-bodied people, but many smart homes are not designed to be user-friendly for differently-abled people, such as immobile or bedridden people (disabled people with at least one movable hand). Due to negligence and forgetfulness, there are cases where electrical devices are left switched on regardless of necessity; this is one of the most common examples of domestic energy wastage. To overcome these challenges, this research presents an improved smart home design, MobiGO, that uses cameras to capture gestures and smart sockets to deliver gesture-driven outputs to home appliances. The camera captures the gestures made by the user, and the system processes those images through advanced gesture recognition and image processing technologies. The commands relevant to the gesture are sent to the specific appliance through a dedicated IoT device attached to it. The literature survey covers Deep Learning, Convolutional Neural Networks (CNN), image processing, gesture recognition, smart homes, and IoT. Finally, the authors conclude that the MobiGO solution offers a smart home system that is safer and easier to use for people with disabilities.
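The gesture-to-appliance mapping described above can be sketched as a simple dispatch table: a recognized gesture label is translated into an (appliance, command) pair to be forwarded to the matching smart socket. The gesture labels and command set here are hypothetical; the paper does not publish its exact mapping.

```python
# Hypothetical gesture labels and appliance commands for illustration.
GESTURE_COMMANDS = {
    "open_palm":   ("light", "ON"),
    "closed_fist": ("light", "OFF"),
    "thumbs_up":   ("fan",   "ON"),
    "thumbs_down": ("fan",   "OFF"),
}

def dispatch(gesture_label):
    """Translate a recognized gesture into an (appliance, command) pair
    that would be forwarded to the matching IoT smart socket."""
    if gesture_label not in GESTURE_COMMANDS:
        return None  # unrecognized gesture: do nothing rather than guess
    return GESTURE_COMMANDS[gesture_label]

print(dispatch("open_palm"))  # ('light', 'ON')
print(dispatch("wave"))       # None
```

Keeping the mapping in data rather than code makes it easy to retrain the recognizer for a user's own comfortable gestures.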
  • Publication (Embargo)
    Smart Plant Disorder Identification using Computer Vision Technology
    (IEEE, 2020-11-04) Manoharan, S; Sariffodeen, B; Ramasinghe, K. T; Rajaratne, L. H; Kasthurirathna, D; Wijekoon, J
    The soil composition around the world is depleting at a rapid rate due to overexploitation through the unsustainable use of fertilizers. Streamlining the availability of nutrient-deficiency and fertilizer-related knowledge among impoverished farming communities would promote environmentally and scientifically sustainable farming practices, thus contributing to several Sustainable Development Goals set out by the United Nations. The most direct solution to inappropriate fertilizer usage is to add only the amounts of fertilizer required by plants to produce a significant yield without nutrition deficiencies. To this end, this paper proposes a Smart Nutrient Disorder Identification system employing computer vision and machine learning techniques for identification, and a decentralized blockchain platform to streamline an unbiased procurement system. The proposed system yielded 88% accuracy in disorder identification, while also enabling a secure, transparent flow of verified information.
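The "secure, transparent flow of verified information" rests on the core blockchain property that each block commits to its predecessor's hash, so tampering is detectable. The toy hash chain below illustrates that property only; the paper's actual decentralized platform is not described in this abstract.

```python
import hashlib
import json

def add_block(chain, payload):
    """Append a record (e.g. a verified disorder diagnosis) to a toy
    hash chain; each block commits to the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)
    return chain

def verify(chain):
    """Recompute every hash; any tampered payload or broken link fails."""
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        body = {"payload": block["payload"], "prev": block["prev"]}
        expect = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != expect:
            return False
    return True

chain = []
add_block(chain, {"plant": "tomato", "disorder": "nitrogen deficiency"})
add_block(chain, {"plant": "maize", "disorder": "potassium deficiency"})
print(verify(chain))  # True
```

Altering any recorded diagnosis after the fact changes its hash and breaks verification, which is what makes the information flow auditable.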
  • Publication (Embargo)
    Computer Vision and NLP based Multimodal Ensemble Attentiveness Detection API for E-Learning
    (IEEE, 2021-04-21) Wijeratne, M. D; Lakmal, R. H. G. A; Geethadhari, W. K. S; Athalage, M. A; Gamage, A; Kasthurirathna, D
    Attention is a fundamental element of effective learning, memory, and interaction. With the evolution of technology in the modern digital age, however, learning has moved beyond traditional systems to more convenient online or e-learning systems. Nevertheless, unlike in traditional learning systems, attention detection of a student in an e-learning environment remains one of the barely explored areas in Human-Computer Interaction. This study proposes a multimodal ensemble solution to detect the level of attentiveness of a student in an e-learning environment, using computer vision, natural language processing, and deep learning to overcome the barriers to identifying user attention in e-learning. The proposed multimodal system captures, processes, and predicts the attentiveness levels of individual students, which are subsequently aggregated through an ensemble model to derive an overall outcome of better accuracy than the individual model outcomes. The final outcome of the ensemble model is a range of percentages within which the attentiveness level of the student lies during a single online lesson. This range is delivered to users through an Application Programming Interface.
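The aggregation step can be sketched as follows: per-modality attentiveness scores are combined into a single percentage range. The mean-plus-or-minus-spread rule below is an assumption for illustration; the paper does not disclose its actual ensemble aggregation.

```python
def attentiveness_range(model_scores, spread=10.0):
    """Combine per-modality attentiveness scores (0-100) into a percentage
    range. Averaging with a fixed +/- spread is an illustrative assumption,
    not the paper's published aggregation rule."""
    mean = sum(model_scores) / len(model_scores)
    low = max(0.0, mean - spread)
    high = min(100.0, mean + spread)
    return low, high

# e.g. scores from the vision, NLP, and deep-learning models
low, high = attentiveness_range([72.0, 64.0, 80.0])
print((low, high))  # (62.0, 82.0)
```

An API endpoint would then return this (low, high) pair per student per lesson.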
  • Publication (Embargo)
    Computer Vision Enabled Drowning Detection System
    (IEEE, 2021-12-09) Handalage, U; Nikapotha, N; Subasinghe, C; Prasanga, T; Thilakarthna, T; Kasthurirathna, D
    Safety is paramount in all swimming pools. The current systems expected to ensure safety at swimming pools have significant problems due to their technical aspects, such as underwater cameras, and methodological aspects, such as the need for human intervention in the rescue mission. An automated visual monitoring system can help reduce drownings and assure pool safety effectively. This study introduces a revolutionary technology that identifies drowning victims in a minimum amount of time and dispatches an automated drone to save them. Using convolutional neural network (CNN) models, it can detect a drowning person in three stages. Whenever such a situation is detected, the inflatable tube-mounted self-driven drone goes on a rescue mission while an alarm sounds to inform nearby lifeguards. The system also keeps an eye out for potentially dangerous actions that could result in drowning. Performance evaluations of prototype experiments have demonstrated this system's ability to save a drowning victim in under a minute.
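The three-stage detection can be sketched as an escalation state machine driven by per-frame CNN labels. The stage names and escalation rule below are illustrative assumptions; the abstract does not name its stages.

```python
# Illustrative stage labels; the paper's actual three stages are not given.
STAGES = ["normal", "distress", "drowning"]

def update_stage(stage, frame_label):
    """Escalate or reset the monitored stage from a per-frame CNN label.

    'danger' moves one stage toward drowning; 'safe' resets to normal.
    """
    idx = STAGES.index(stage)
    if frame_label == "danger" and idx < len(STAGES) - 1:
        return STAGES[idx + 1]
    if frame_label == "safe":
        return STAGES[0]
    return stage

stage = "normal"
for label in ["danger", "danger"]:  # two consecutive danger detections
    stage = update_stage(stage, label)
print(stage)  # drowning
if stage == "drowning":
    print("dispatch drone + sound alarm")
```

Requiring consecutive detections before dispatching the drone trades a little latency for fewer false alarms.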
  • Publication (Embargo)
    Computer Vision for Autonomous Driving
    (IEEE, 2021-12-09) Kanchana, B; Peiris, R; Perera, D; Jayasinghe, D; Kasthurirathna, D
    Computer vision in self-driving vehicles can drive the research and development of futuristic vehicles that mitigate road accidents and support a safer driving environment. Using self-driving technology, riders can travel to their destinations without human interaction. However, self-driving vehicle technology is still at an early stage. In busy areas such as cities it is especially challenging to deploy such autonomous systems, because even a small error can cause a critical accident. To improve autonomous driving, computer vision and deep learning-based approaches are commonly used: detecting obstacles on the road and analyzing the current traffic flow are the main areas addressed with computer vision-based approaches, and many researchers use deep learning-based approaches such as convolutional neural networks to enhance autonomous driving. This research paper focuses on an evaluation of computer vision as used in self-driving vehicles.
  • Publication (Open Access)
    Computer vision based indoor navigation for shopping complexes
    (acm.org, 2020-12-09) Perera, G. S. T; Madhubhashini, K. W. R; Lunugalage, D; Piyathilaka, D. V. S; Lakshani, W. H. U; Kasthurirathna, D
    Smartphone-based indoor navigation systems are urgently needed in indoor settings, where the Global Positioning System (GPS) is not feasible because it gives very poor results for indoor localization. In this research paper, we present a computer vision-based indoor navigation system for shopping complexes. Computer vision is used to find the user's exact current location. The system includes an Android mobile application for positioning, navigating, and displaying the current location on a 2D map. It detects the user's position, generates a GIS map, displays the shortest path using the A* search algorithm, and provides step-by-step directions to the destination via audio instructions, together with an Augmented Reality (AR) map and navigation based on mobile phone sensors such as the accelerometer, gyroscope, and magnetometer. The audio instructions include active guidance for upcoming turns along the traveling path and the distance of each section between turns. The system also uses a suggestion-based chatbot backed by a trained model to improve the user's experience. Thus, this research aims to build a cost-effective, efficient, and timely response system that gives users a smart shopping experience.
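The A* search named above can be illustrated on a small occupancy grid of a floor plan (0 = walkable, 1 = wall), with Manhattan distance as the heuristic. This is a generic textbook A*, not the paper's implementation.

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 4-connected grid; 0 = walkable, 1 = wall."""
    def h(p):  # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), 0, start, [start])]  # (f, g, node, path)
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0):
                heapq.heappush(
                    open_set,
                    (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None  # no route exists

floor = [
    [0, 0, 0],
    [1, 1, 0],  # a wall forces a detour around the right side
    [0, 0, 0],
]
path = astar(floor, (0, 0), (2, 0))
print(path)
```

Each grid cell would correspond to a walkable patch of the mall's GIS map, and the returned path is what the audio guidance would narrate turn by turn.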
  • Publication (Embargo)
    Computer-Vision Enabled Waste Management System for Green Environment
    (2021 3rd International Conference on Advancements in Computing (ICAC), SLIIT, 2021-12-09) Hewagamage, P.; Mihiranga, A.; Perera, D.; Fernando, R.; Thilakarathna, T.; Kasthurirathna, D.
    Waste management has become a critical requirement for maintaining a green environment in Sri Lanka as well as in other countries. Town councils have to collect different types of waste regularly to keep cities and towns clean, so managing a city's waste is a challenging task. However, most urban councils currently use a manual approach to managing waste, which creates many difficulties for the public and for the cleaning staff involved in the process, who must follow strict guidelines. Waste contamination, the lack of proper information management for waste collection, and unpunctual removal of waste from garbage bins are some of the significant issues arising from the manual process, and these drawbacks can easily lead to social, environmental, and health problems. This paper proposes a better solution that replaces the manual system with an automated one. The main objective of this research is therefore to introduce an ICT-based innovative design that can be used to develop an effective waste management system for town councils. The proposed model introduces a computer vision-based smart waste bin system with real-time monitoring that incorporates technologies such as computer vision, sensor-based IoT devices, and geographical information system (GIS) technologies. Our solution consists of a waste bin system capable of automated waste segregation. The design lets admin users expand the waste bin kit by adding more waste categories in a user-friendly manner, making the product adaptable to any environment. At the same time, waste bins can report their real-time waste status, and the system uses those status details to generate the optimum collection route and display it in a mobile app. We also demonstrate a low-cost prototype.
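The route generation from real-time bin status can be sketched with a greedy nearest-neighbour heuristic: visit the closest full bin next until all are collected. Both the fill threshold and the heuristic are assumptions for illustration; the paper's actual optimization method is not described in this abstract.

```python
import math

def collection_route(depot, bins):
    """Greedy nearest-neighbour route over bins reporting a high fill level.

    A simple stand-in for the optimum-path generation; the 0.8 fill
    threshold is an assumption.
    """
    full = [b for b in bins if b["fill"] >= 0.8]
    route, pos = [], depot
    while full:
        nxt = min(full, key=lambda b: math.dist(pos, b["loc"]))
        route.append(nxt["id"])
        pos = nxt["loc"]
        full.remove(nxt)
    return route

bins = [
    {"id": "B1", "loc": (1, 0), "fill": 0.9},
    {"id": "B2", "loc": (5, 5), "fill": 0.3},   # not full: skipped
    {"id": "B3", "loc": (2, 1), "fill": 0.85},
]
print(collection_route((0, 0), bins))  # ['B1', 'B3']
```

The mobile app would render this ordered list of bin locations on the GIS map for the collection crew.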