Research Papers - Dept of Software Engineering

Permanent URI for this collection: https://rda.sliit.lk/handle/123456789/1022

Search Results

Now showing 1 - 3 of 3
  • DS-HPE: Deep Set for Head Pose Estimation
    (IEEE, 2023-04-18) Menan, V.; Gawesha, A.; Samarasinghe, P.; Kasthurirathna, D.
    Head pose estimation is a critical task that is fundamental to a variety of real-world applications, such as virtual and augmented reality, as well as human behavior analysis. In the past, facial landmark-based methods were the dominant approach to head pose estimation. However, recent research has demonstrated the effectiveness of landmark-free methods, which have achieved state-of-the-art (SOTA) results. In this study, we utilize the Deep Set architecture for the first time in the domain of head pose estimation. Deep Set is a specialized architecture that operates on a "set" of data by means of a permutation-invariant pooling operator. As a result, the model is a simple yet powerful and edge-computation-friendly method for estimating head pose. We evaluate our proposed method on two benchmark data sets, and we compare our method against SOTA methods on a challenging video-based data set. Our results indicate that our proposed method not only achieves comparable accuracy to these SOTA methods but also requires less computational time. Furthermore, the simplicity of our proposed method allows for its deployment in resource-constrained environments without the need for expensive hardware such as Graphics Processing Units (GPUs). This work underscores the importance of accurate and resource-efficient head pose estimation in the fields of computer vision and human-computer interaction, and the Deep Set architecture presents a promising approach to achieving this goal.
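The permutation invariance the abstract refers to can be illustrated with a minimal sketch of the standard Deep Sets formulation, f(X) = ρ(Σᵢ φ(xᵢ)). This is not the authors' model; the weights, dimensions, and random inputs below are all hypothetical, chosen only to show that sum pooling makes the output independent of element order:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights: phi encodes each element, rho consumes the pooled set.
W_phi = rng.normal(size=(3, 8))   # maps each 3-D element to an 8-D embedding
W_rho = rng.normal(size=(8, 2))   # maps the pooled embedding to 2 outputs

def phi(x):
    """Per-element encoder, applied independently to every set member."""
    return np.tanh(x @ W_phi)

def rho(z):
    """Set-level head, consuming the order-free pooled representation."""
    return z @ W_rho

def deep_set(X):
    """f(X) = rho(sum_i phi(x_i)); summation discards element order."""
    return rho(phi(X).sum(axis=0))

X = rng.normal(size=(5, 3))        # a "set" of 5 three-dimensional elements
perm = rng.permutation(5)
out_a = deep_set(X)
out_b = deep_set(X[perm])          # same set, shuffled order
assert np.allclose(out_a, out_b)   # identical output regardless of ordering
```

Because any symmetric pooling (sum, mean, max) would do, the same structure stays cheap at inference time, which is consistent with the abstract's edge-deployment claim.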
  • Computer Vision for Autonomous Driving
    (2021 3rd International Conference on Advancements in Computing (ICAC), SLIIT, 2021-12-09) Kanchana, B.; Peiris, R.; Perera, D.; Jayasinghe, D.; Kasthurirathna, D.
    Computer vision in self-driving vehicles can drive the research and development of futuristic vehicles that mitigate road accidents and support a safer driving environment. With self-driving technology, riders can travel to their destinations without human intervention. However, self-driving vehicle technology is still at an early stage. In congested areas such as cities, deploying autonomous systems is particularly challenging, because even a small amount of erroneous data can cause a critical accident. To improve autonomous driving capability, computer vision and deep learning-based approaches are widely used. Detecting obstacles on the road and analyzing the current traffic flow are the main focus areas of computer vision-based approaches, and many researchers apply deep learning methods such as convolutional neural networks to enhance autonomous driving. This research paper focuses on an evaluation of the computer vision techniques used in self-driving vehicles.
  • Computer Vision Enabled Drowning Detection System
    (2021 3rd International Conference on Advancements in Computing (ICAC), SLIIT, 2021-12-09) Handalage, U.; Nikapotha, N.; Subasinghe, C.; Prasanga, T.; Thilakarthna, T.; Kasthurirathna, D.
    Safety is paramount in all swimming pools. Existing systems intended to ensure safety at swimming pools have significant shortcomings, both technical, such as reliance on underwater cameras, and methodological, such as the need for human intervention in the rescue mission. An automated visual-based monitoring system can help reduce drownings and ensure pool safety effectively. This study introduces a revolutionary technology that identifies drowning victims in a minimum amount of time and dispatches an automated drone to save them. Using convolutional neural network (CNN) models, it can detect a drowning person in three stages. Whenever such a situation is detected, a self-driven drone carrying an inflatable tube goes on a rescue mission, sounding an alarm to alert the nearby lifeguards. The system also monitors potentially dangerous actions that could result in drowning. The system's ability to reach a drowning victim in under a minute has been demonstrated in performance evaluations of the prototype.
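The staged detection the abstract describes can be sketched as a simple cascade, where each stage must flag a frame before the next is consulted and a confirmed detection triggers the rescue response. This is only an illustration of the control flow, not the authors' system: the stage names and stub predictors below are hypothetical stand-ins for their trained CNN models.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Stage:
    """One stage of the cascade: a predicate over a video frame."""
    name: str
    predict: Callable[[dict], bool]  # True if this stage flags the frame

def run_cascade(frame: dict, stages: List[Stage]) -> bool:
    """Return True only if every stage, in order, flags the frame."""
    for stage in stages:
        if not stage.predict(frame):
            return False  # early exit: cheaper stages filter most frames
    return True

def dispatch_rescue() -> str:
    # Placeholder for the paper's responses: launch the tube-carrying
    # drone and sound the lifeguard alarm.
    return "drone dispatched; alarm sounded"

# Stub predictors standing in for trained CNN stages (hypothetical names).
stages = [
    Stage("person-in-water", lambda f: f["person"]),
    Stage("abnormal-motion", lambda f: f["struggling"]),
    Stage("drowning-confirmed", lambda f: f["submerged"]),
]

frame = {"person": True, "struggling": True, "submerged": True}
if run_cascade(frame, stages):
    print(dispatch_rescue())
```

Cascading the stages this way keeps the expensive final check off the common path, which matters for the sub-minute response time the abstract claims.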