Scopus Index Publications

Permanent URI for this community: https://rda.sliit.lk/handle/123456789/2162

This collection consists of all Scopus-indexed publications produced by SLIIT researchers. Scopus is recognized worldwide as a leading and reputable academic indexing database.

Search Results

Now showing 1 - 6 of 6
  • Publication (Embargo)
    Comparative Study of Parameter Selection for Enhanced Edge Inference for a Multi-Output Regression Model for Head Pose Estimation
    (Institute of Electrical and Electronics Engineers Inc., 2022-11-04) Lindamulage, A; Kodagoda, N; Reyal, S; Samarasinghe, P; Yogarajah, P
    Magnitude-based pruning is a technique used to optimise deep learning models for edge inference. We achieved over 75% model size reduction with higher accuracy than the original multi-output regression model for head-pose estimation.
  • Publication (Embargo)
    Children's Behavior Analysis Through Smart Toys
    (Institute of Electrical and Electronics Engineers Inc., 2022-11) Ramesha, M. D. D; Kavindi, M. V; Somawansa, R.P; Yadav, A; Samarasinghe, P; Wedasinghe, N; Jayasinghearachchi, V
    Analyzing children's behavior is a major part of pediatric psychological studies. We use the hand movements of the child to understand behavioral patterns with the help of IoT-based toys. © 2022 IEEE.
  • Publication (Embargo)
    Child Head Gesture Classification through Transformers
    (Institute of Electrical and Electronics Engineers Inc., 2022-11-04) Wedasingha, N; Samarasinghe, P; Singarathnam, D; Papandrea, M; Puiatti, A; Seneviratne, L
    This paper proposes a transformer network for head pose classification (HPC) that outperforms the existing state of the art (SoA) for HPC. The robust model is then extended to overcome the limited child-data challenge by applying transfer learning, resulting in an accuracy of 95.34% for child HPC in the wild.
  • Publication (Open Access)
    2D Pose Estimation based Child Action Recognition
    (Institute of Electrical and Electronics Engineers Inc., 2022-11) Mohottala, S; Abeygunawardana, S; Samarasinghe, P; Kasthurirathna, D; Abhayaratne, C
    We present, for the first time, a graph convolutional network with 2D pose estimation for the child action recognition task, achieving results on par with LRCN on a benchmark dataset of videos from unconstrained environments.
  • Publication (Embargo)
    The Automated Temporal Analysis of Gaze Following in a Visual Tracking Task
    (Springer, Cham, 2022-05-15) Dhanawansa, V; Samarasinghe, P; Gardiner, B; Yogarajah, P; Karunasena, A
    The attention assessment of an individual in following the motion of a target object provides valuable insights into understanding one's behavioural patterns in cognitive disorders including Autism Spectrum Disorder (ASD). Existing frameworks often require dedicated devices for gaze capture, focus on stationary target objects, or fail to conduct a temporal analysis of the participant's response. Thus, in order to address the persisting research gap in the analysis of video capture of a visual tracking task, this paper proposes a novel framework to analyse the temporal relationship between the 3D head pose angles and object displacement, and demonstrates its validity via application on the EYEDIAP video dataset. The conducted multivariate time-series analysis is two-fold: the statistical correlation computes the similarity between the time series as an overall measure of attention, and the Dynamic Time Warping (DTW) algorithm aligns the two sequences and computes relevant temporal metrics. The temporal features of latency and maximum time of focus retention enabled an intragroup comparison between the performance of the participants. Further analysis disclosed valuable insights into the behavioural response of participants, including the superior response to horizontal motion of the target and the improvement in retention of focus on vertical motion over time, implying that following a vertical target initially proved a challenging task.
  • Publication (Embargo)
    EyeDriver: Intelligent Driver Assistance System
    (IEEE, 2019-12-18) Gayadeeptha, P; Baddewithana, T. P; Pannegama, K. V; Samarakkody, C. S; Samarasinghe, P; Siriwardana, S
    “EyeDriver” is a driver assistance system that analyzes and provides real-time driver-assistance data from four separate components: drowsiness detection and head pose estimation, over-speed detection, lane departure, and front collision avoidance. It is a compact product that includes a Raspberry Pi board, a USB camera module, a Pi camera, and a TFT LCD. As the first affordable aftermarket solution in Sri Lanka, “EyeDriver” can be mounted and configured in any vehicle with little effort and without professional knowledge. The drowsiness detection and head pose estimation component monitors the driver's eyes and keeps track of whether the driver's head position is inconsistent or has deviated from the optimal position. The over-speed detection component analyzes the vehicle's actual speed against the road's recommended speed and notifies the driver when the permitted speed is exceeded. The lane departure component assists in keeping the vehicle stable in the desired lane; when the driver makes an intentional lane change, the system provides a notification. The front collision avoidance component detects obstacles ahead on the road and provides pre-collision/proximity warning notifications, issued according to the vehicle's speed and the distance between the object and the vehicle. The whole system is based on the Raspberry Pi 3 Model B+ board and is implemented using OpenCV and Python.
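
The gaze-following abstract above describes aligning a head-pose time series with a target-displacement time series via Dynamic Time Warping. As a minimal illustrative sketch of that alignment idea (the variable names and toy data below are assumptions for illustration, not the paper's actual implementation or dataset):

```python
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW with absolute-difference cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    # dp[i][j] = minimal cumulative cost of aligning a[:i] with b[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # a[i-1] repeats
                                  dp[i][j - 1],      # b[j-1] repeats
                                  dp[i - 1][j - 1])  # one-to-one match
    return dp[n][m]

# Toy example: the "gaze" signal is the "target" signal delayed by one
# step, as a participant's response would be. DTW absorbs that latency
# in the warping path, so the cumulative cost stays small even though a
# pointwise (Euclidean-style) comparison would penalise every sample.
target = [0, 1, 2, 3, 2, 1, 0]
gaze   = [0, 0, 1, 2, 3, 2, 1]   # same motion, lagged by one step
print(dtw_distance(target, gaze))  # → 1.0
```

This is why the warping path lends itself to the temporal metrics the paper mentions, such as latency: the offset between matched indices along the optimal path reflects how far the participant's response trails the target.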