Research Publications Authored by SLIIT Staff

Permanent URI for this community: https://rda.sliit.lk/handle/123456789/4195

This collection includes all SLIIT staff publications presented at external conferences and published in external journals. The materials are organized by faculty to facilitate easy retrieval.

Browse

Search Results

Now showing 1 - 2 of 2
  • DS-HPE: Deep Set for Head Pose Estimation
    (IEEE, 2023-04-18) Menan, V; Gawesha, A; Samarasinghe, P; Kasthurirathna, D
    Head pose estimation is a critical task that is fundamental to a variety of real-world applications, such as virtual and augmented reality, as well as human behavior analysis. In the past, facial landmark-based methods were the dominant approach to head pose estimation. However, recent research has demonstrated the effectiveness of landmark-free methods, which have achieved state-of-the-art (SOTA) results. In this study, we utilize the Deep Set architecture for the first time in the domain of head pose estimation. Deep Set is a specialized architecture that works on a “set” of data as a result of the “permutation-invariance” operator being utilized in the model. As a result, the model is a simple yet powerful and edge-computation-friendly method for estimating head pose. We evaluate our proposed method on two benchmark data sets, and we compare our method against SOTA methods on a challenging video-based data set. Our results indicate that our proposed method not only achieves comparable accuracy to these SOTA methods but also requires less computational time. Furthermore, the simplicity of our proposed method allows for its deployment in resource-constrained environments without the need for expensive hardware such as Graphics Processing Units (GPUs). This work underscores the importance of accurate and resource-efficient head pose estimation in the fields of computer vision and human-computer interaction, and the Deep Set architecture presents a promising approach to achieving this goal.
  • Qualitative Analysis of Automated Visual Tracking of Objects Through Head Pose Estimation
    (IEEE, 2022-12-09) Abeysinghe, A; Arachchige, I. D; Samarasinghe, P; Dhanawansa, V; Velayuthan, M
    An automated approach to object tracking and gaze estimation via head pose estimation is crucial to a range of applications in the domain of human-computer interfacing, including the analysis of head movement with respect to a stimulus when assessing one's level of attention. While varied approaches to gaze estimation and object tracking exist, their suitability for such applications has not been justified. To address this gap, this paper conducts a quantitative comparison of existing models for gaze estimation, including MediaPipe, standalone OpenFace models, and a custom head pose estimator with MTCNN face detection; and for object detection, including the CSRT object tracker, the YOLO object detector, and a custom object detector. The accuracy of these models was compared against the annotations of the EYEDIAP dataset, to evaluate their accuracy both relative and non-relative to each other. The analysis revealed that the custom object detector and the OpenFace models are relatively more accurate than the others when comparing the number of annotations, absolute mean error, and the relationships between x displacement and yaw, and between y displacement and pitch, and can therefore be used in combination for gaze tracking tasks.
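The permutation-invariance property at the core of the DS-HPE abstract can be sketched in a few lines: a toy Deep Sets-style regressor of the form rho(sum_i phi(x_i)), whose prediction is unchanged under any reordering of its input set of points. The weights, layer sizes, and function names below are purely illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights for a tiny Deep Sets regressor (illustrative only).
# phi: per-element encoder; rho: decoder applied after invariant pooling.
W_phi = rng.standard_normal((3, 8))   # encodes each 3-D keypoint into 8 features
W_rho = rng.standard_normal((8, 3))   # maps pooled features to (yaw, pitch, roll)

def deep_sets_head_pose(points):
    """Toy permutation-invariant regressor: rho(sum_i phi(x_i))."""
    encoded = np.tanh(points @ W_phi)   # phi applied to every set element
    pooled = encoded.sum(axis=0)        # sum pooling -> permutation invariance
    return pooled @ W_rho               # rho: predict yaw, pitch, roll

points = rng.standard_normal((5, 3))    # a "set" of 5 facial keypoints
shuffled = points[rng.permutation(5)]

# The prediction is identical for any ordering of the input set.
assert np.allclose(deep_sets_head_pose(points), deep_sets_head_pose(shuffled))
```

Because the sum over set elements discards ordering, the model treats its input as a set rather than a sequence, which is what the abstract's "permutation-invariance operator" refers to.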
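The absolute-mean-error comparison described in the second abstract can be illustrated with a minimal sketch: predicted yaw/pitch angles from two hypothetical models are scored against ground-truth annotations, and the lower per-axis error marks the more accurate model. The angle values and model names below are invented for demonstration and are not EYEDIAP data.

```python
import numpy as np

# Hypothetical (yaw, pitch) predictions in degrees from two models,
# compared against ground-truth annotations (EYEDIAP-style layout assumed).
annotations = np.array([[10.0, -5.0], [0.0, 3.0], [-7.0, 8.0]])
model_a     = np.array([[11.0, -4.5], [1.0, 2.0], [-6.0, 9.0]])
model_b     = np.array([[14.0, -1.0], [3.5, 6.0], [-2.0, 12.0]])

def mean_abs_error(pred, truth):
    """Absolute mean error per angle axis, averaged over all frames."""
    return np.abs(pred - truth).mean(axis=0)  # -> [yaw MAE, pitch MAE]

mae_a = mean_abs_error(model_a, annotations)
mae_b = mean_abs_error(model_b, annotations)
# Lower MAE on both axes identifies the more accurate model in this toy case.
```

Comparing models per axis, rather than with a single pooled number, matches the abstract's separate treatment of the x displacement-yaw and y displacement-pitch relationships.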