
Browsing by Author "Dhanawansa, V"

Now showing 1 - 5 of 5
  • Publication (Embargo)
    Automated Child Social Attention Evaluation
    (IEEE, 2022-12-09) Sandunika Wasala, K; Dhanawansa, V; Velayuthan, M; Samarasinghe, P
    Providing proper care for children with attention difficulty disorders is crucial, and one way to ensure this is the early identification of these disorders. In Sri Lanka, a developing country, it is difficult to find the resources essential for diagnosis, such as clinics and clinical expertise. The absence of such facilities puts both the mental well-being of the child and access to help at risk. Hence a need arises to develop an automated social attention evaluation system, which can serve as a first line of diagnosis and help parents or guardians secure the required help for the child from an early age. To the best of the authors’ knowledge, no solution of this nature is readily available for the Sri Lankan community so far. Keeping the low-income bracket of the country in mind, we propose a solution that can be easily deployed even on a low-cost mobile or tablet device. It is difficult to perform these evaluations for children in the same settings as adults, as children are easily distracted; therefore, care must be taken to hold the child’s attention throughout the evaluation process. In this research, we developed applications for children at different levels, where each level assesses the child’s attention to social versus non-social objects through a child-friendly game, as games provide sufficient visual stimuli to hold the child’s attention. In this study we investigated the screen time spent by the child, the child’s attention on different categories of images (High Autism Interested or Low Autism Interested images), and the patterns of attention switching between these images. Only typical children were evaluated in this research, due to the pandemic situation as well as other internal problems in the country; this system will be extended to evaluate atypical children in future work.
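As an illustrative sketch only (not code from the paper), the three measures the abstract names — screen time, attention share per image category, and switching patterns — can be summarised from a per-frame gaze-label sequence. The labels, the 'away' marker, and the frame rate are assumptions for the example.

```python
# Hypothetical sketch: summarise a child's per-frame gaze labels into
# the metrics described (screen time, attention per category, switches).

def attention_metrics(gaze_labels, fps=30):
    """gaze_labels: one label per frame, e.g. 'HAI', 'LAI', or 'away'."""
    total = len(gaze_labels)
    on_screen = [g for g in gaze_labels if g != 'away']
    screen_time_s = len(on_screen) / fps
    # Fraction of all frames spent on each image category.
    share = {c: gaze_labels.count(c) / total for c in ('HAI', 'LAI')}
    # A "switch" is a change between the two categories, ignoring
    # frames where the child looked away from the screen.
    switches = sum(1 for a, b in zip(on_screen, on_screen[1:]) if a != b)
    return screen_time_s, share, switches
```

With six frames such as `['HAI', 'HAI', 'away', 'LAI', 'LAI', 'HAI']`, the child spends half the frames on HAI images and switches category twice.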
  • Publication (Embargo)
    The Automated Temporal Analysis of Gaze Following in a Visual Tracking Task
    (Springer, Cham, 2022-05-15) Dhanawansa, V; Samarasinghe, P; Gardiner, B; Yogarajah, P; Karunasena, A
    The attention assessment of an individual in following the motion of a target object provides valuable insights into understanding one’s behavioural patterns in cognitive disorders including Autism Spectrum Disorder (ASD). Existing frameworks often require dedicated devices for gaze capture, focus on stationary target objects, or fail to conduct a temporal analysis of the participant’s response. Thus, in order to address the persisting research gap in the analysis of video capture of a visual tracking task, this paper proposes a novel framework to analyse the temporal relationship between the 3D head pose angles and object displacement, and demonstrates its validity via application on the EYEDIAP video dataset. The conducted multivariate time-series analysis is two-fold: the statistical correlation computes the similarity between the time series as an overall measure of attention, while the Dynamic Time Warping (DTW) algorithm aligns the two sequences and computes relevant temporal metrics. The temporal features of latency and maximum time of focus retention enabled an intragroup comparison between the performance of the participants. Further analysis disclosed valuable insights into the behavioural response of participants, including the superior response to horizontal motion of the target and the improvement in retention of focus on vertical motion over time, implying that following a vertical target initially proved a challenging task.
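The two-fold analysis outlined above can be sketched in a few lines: a Pearson correlation between the head-pose angle series and the target's displacement as an overall similarity measure, plus a plain O(n·m) DTW cost between the two sequences. The yaw and displacement values below are invented illustrations, not EYEDIAP data, and the paper's actual DTW variant and distance may differ.

```python
# Minimal sketch of the abstract's two measures: statistical correlation
# and a basic Dynamic Time Warping alignment cost between two series.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def dtw_cost(x, y):
    # Classic dynamic-programming DTW with absolute-difference distance.
    inf = float('inf')
    n, m = len(x), len(y)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])
            D[i][j] = d + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

yaw = [0.0, 0.1, 0.4, 0.8, 1.0]      # head yaw over time (illustrative)
x_disp = [0.0, 0.0, 0.1, 0.4, 0.8]   # target x displacement, lagged by one step
```

On these lagged series the correlation is high while the DTW alignment absorbs the one-step latency, which is the kind of temporal metric (latency, focus retention) the framework extracts.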
  • Publication (Embargo)
    Mobile-Based Analysis of Visual Attention in Young Children
    (IEEE, 2022-12-09) Jayakody, K; Dhanawansa, V; Velayuthan, M; Samarasinghe, P
    There is a crucial need to screen young children for attention impairments, given that a child’s ability to deal with the demands of everyday life depends on the development of the child’s attention. Intervention at a young age facilitates the training and enhancement of attention, as young brains are the most responsive to treatment. Sri Lanka, a low-income country, lacks accessible, home-based screening tools that can be used to assess the attention of young children. Moreover, most Sri Lankan parents are not aware of attention impairments. To bridge this gap, this paper proposes an easily accessible, home-based attention assessment tool in the form of a mobile application. The application provides a series of engaging tasks for assessing and training the aspects of visual attention (focused attention, selective attention, divided attention, sustained attention and shifting attention). The assessments were carefully designed to suit the age and the attention span of the child. The performance analysis performed on the collected data showed the varied responses of children of different ages on different assessments. Clustering was performed to identify the varying performance levels of typical children, and this project will be extended to evaluate atypical child performance.
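To illustrate the clustering step in the abstract, a minimal one-dimensional k-means with two centres can separate per-child assessment scores into performance levels. This is a sketch under assumptions: the study's actual features, number of clusters, and algorithm are not specified here, and the scores below are invented.

```python
# Illustrative 1-D k-means (k=2) for grouping assessment scores into
# two performance levels; not the paper's actual clustering pipeline.

def kmeans_1d(scores, iters=20):
    c = [min(scores), max(scores)]        # initial centres at the extremes
    groups = [list(scores), []]
    for _ in range(iters):
        groups = [[], []]
        for s in scores:
            # Assign each score to its nearest centre.
            groups[0 if abs(s - c[0]) <= abs(s - c[1]) else 1].append(s)
        # Recompute each centre as the mean of its group.
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    return c, groups
```

For example, scores `[0.2, 0.25, 0.3, 0.8, 0.85, 0.9]` split cleanly into a lower-performing and a higher-performing group with centres near 0.25 and 0.85.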
  • Publication (Embargo)
    Qualitative Analysis of Automated Visual Tracking of Objects Through Head Pose Estimation
    (IEEE, 2022-12-09) Abeysinghe, A; Arachchige, I. D; Samarasinghe, P; Dhanawansa, V; Velayuthan, M
    An automated approach for object tracking and gaze estimation via head pose estimation is crucial to facilitating a range of applications in the domain of human-computer interfacing; these include the analysis of head movement with respect to a stimulus when assessing one’s level of attention. While varied approaches for gaze estimation and object tracking exist, their suitability within such applications has not been justified. In order to address this gap, this paper conducts a quantitative comparison of existing models for gaze estimation, including Mediapipe, standalone OpenFace models, and a custom head pose estimator with MTCNN face detection; and for object detection, including the CSRT object tracker, the YOLO object detector, and a custom object detector. The accuracy of the aforementioned models was compared against the annotations of the EYEDIAP dataset, to evaluate their accuracy both relative and non-relative to each other. The analysis revealed that the custom object detector and the OpenFace models are relatively more accurate than the others when comparing the number of annotations, the absolute mean error, and the relationships between x displacement and yaw and between y displacement and pitch, and can thereby be used in combination for gaze tracking tasks.
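The core of the comparison described above is measuring each model's estimates against dataset annotations. A hedged sketch of one such metric — mean absolute error over the frames that actually carry an annotation — is shown below; the values and the `None`-for-missing convention are assumptions for illustration, not EYEDIAP data.

```python
# Sketch: mean absolute error of model estimates vs. annotations,
# counting only frames that have an annotation (None = unannotated).

def mean_abs_error(preds, annots):
    pairs = [(p, a) for p, a in zip(preds, annots) if a is not None]
    if not pairs:
        raise ValueError("no annotated frames to compare against")
    mae = sum(abs(p - a) for p, a in pairs) / len(pairs)
    return mae, len(pairs)   # error plus the number of usable annotations
```

Reporting the pair count alongside the error matters because, as in the paper's comparison, models can be judged both on accuracy and on how many annotated frames they produced a usable estimate for.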
  • Publication (Embargo)
    Sinhala Sign Language Interpreter Optimized for Real-Time Implementation on a Mobile Device
    (2021-08-11) Dhanawansa, V; Rajakaruna, T
    This paper proposes a framework for a vision-based Sinhala Sign Language interpreter targeted for implementation on a portable device and optimized for real-time use. The translator is aimed at enabling conversation between a hearing-impaired and a non-signing individual. The scope covers both static and dynamic signs, portrayed using the right hand. Skin segmentation and contour extraction, followed by a combination of hand detection and tracking algorithms, isolate the signing hand against varied background conditions. A Convolutional Neural Network model was developed to extract and classify the features of the chosen static signs. A standard, expandable dataset of Sinhala static signs was prepared for this task. Dynamic signs were modeled as a tree data structure using a sequence of static signs. The model was optimized using motion-based temporal segmentation between consecutive signs, to minimize the processing overhead. The interpreter recorded average accuracies of 99.5% and 81.2% on the static sign dataset and on the combined dataset of static and dynamic signs, respectively. A response time of 333 ms was measured between the occurrence and the prediction of a sign, demonstrating the effectiveness of the framework for real-time use.
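The tree modelling of dynamic signs described above can be sketched as a trie whose edges are static-sign labels, so a recognised sequence of static signs is walked down to a dynamic-sign label. The sign names below are invented placeholders, not entries from the paper's dataset, and the terminal-marker convention is an assumption of this sketch.

```python
# Hedged sketch: dynamic signs as root-to-node paths of static-sign
# labels in a trie, matching the abstract's tree data structure idea.

def build_trie(dynamic_signs):
    """dynamic_signs: {dynamic sign name: list of static-sign labels}."""
    root = {}
    for word, path in dynamic_signs.items():
        node = root
        for static in path:
            node = node.setdefault(static, {})
        node['#'] = word       # '#' marks the end of a dynamic sign
    return root

def lookup(trie, sequence):
    """Return the dynamic sign for a full static-sign sequence, else None."""
    node = trie
    for static in sequence:
        if static not in node:
            return None
        node = node[static]
    return node.get('#')

trie = build_trie({'hello': ['H1', 'H2'], 'thanks': ['H1', 'H3']})
```

Sequences sharing a prefix (here `H1`) share a branch, which keeps per-frame matching cheap — consistent with the paper's goal of minimising processing overhead on a mobile device.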

Copyright 2025 © SLIIT. All Rights Reserved.
