Please use this identifier to cite or link to this item: https://rda.sliit.lk/handle/123456789/3292
Full metadata record
DC Field | Value | Language
dc.contributor.author | Abeysinghe, A | -
dc.contributor.author | Arachchige, I. D | -
dc.contributor.author | Samarasinghe, P | -
dc.contributor.author | Dhanawansa, V | -
dc.contributor.author | Velayuthan, M | -
dc.date.accessioned | 2023-03-03T09:46:32Z | -
dc.date.available | 2023-03-03T09:46:32Z | -
dc.date.issued | 2022-12-09 | -
dc.identifier.citation | A. Abeysinghe, I. D. Arachchige, P. Samarasinghe, V. Dhanawansa and M. Velayuthan, "Qualitative Analysis of Automated Visual Tracking of Objects Through Head Pose Estimation," 2022 4th International Conference on Advancements in Computing (ICAC), Colombo, Sri Lanka, 2022, pp. 369-374, doi: 10.1109/ICAC57685.2022.10025053. | en_US
dc.identifier.issn | 979-8-3503-9809-0 | -
dc.identifier.uri | https://rda.sliit.lk/handle/123456789/3292 | -
dc.description.abstract | An automated approach to object tracking and gaze estimation via head pose estimation is crucial to a range of applications in human-computer interfacing, including the analysis of head movement with respect to a stimulus when assessing a person's level of attention. While varied approaches to gaze estimation and object tracking exist, their suitability for such applications has not been justified. To address this gap, this paper conducts a quantitative comparison of existing models for gaze estimation (MediaPipe, standalone OpenFace models, and a custom head pose estimator with MTCNN face detection) and for object detection (the CSRT object tracker, the YOLO object detector, and a custom object detector). The accuracy of these models was compared against the annotations of the EYEDIAP dataset, to evaluate them both relative to one another and in absolute terms. The analysis revealed that the custom object detector and the OpenFace models are more accurate than the others in terms of the number of annotations, absolute mean error, and the relationships between x displacement and yaw and between y displacement and pitch, and can therefore be used in combination for gaze tracking tasks. | en_US
dc.language.iso | en | en_US
dc.publisher | IEEE | en_US
dc.relation.ispartofseries | 2022 4th International Conference on Advancements in Computing (ICAC); | -
dc.subject | Qualitative Analysis | en_US
dc.subject | Automated Visual Tracking | en_US
dc.subject | Objects | en_US
dc.subject | Head Pose Estimation | en_US
dc.title | Qualitative Analysis of Automated Visual Tracking of Objects Through Head Pose Estimation | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.1109/ICAC57685.2022.10025053 | en_US
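
As an illustration of the evaluation described in the abstract, the following is a minimal sketch (not the authors' code) of how a head-pose model's output could be scored against EYEDIAP-style annotations: an absolute mean error against reference angles, and the x displacement-yaw correlation used in the paper's comparison. The array names, file layout, and synthetic data are assumptions for illustration only.

    # Minimal sketch of the comparison described in the abstract: score
    # predicted head-pose angles against reference annotations (absolute
    # mean error) and measure the x-displacement/yaw relationship.
    import numpy as np

    def absolute_mean_error(pred: np.ndarray, ref: np.ndarray) -> float:
        """Mean absolute difference between predicted and reference angles (degrees)."""
        return float(np.mean(np.abs(pred - ref)))

    def displacement_angle_correlation(disp: np.ndarray, angle: np.ndarray) -> float:
        """Pearson correlation between target displacement and a head-pose angle."""
        return float(np.corrcoef(disp, angle)[0, 1])

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Stand-ins for real data: yaw (degrees) from a head-pose model,
        # target x displacement (pixels) from dataset annotations, and a
        # synthetic reference annotation with small noise.
        yaw = rng.normal(0.0, 15.0, 500)
        x_disp = 4.0 * yaw + rng.normal(0.0, 5.0, 500)
        ref_yaw = yaw + rng.normal(0.0, 2.0, 500)

        print("abs. mean yaw error:", absolute_mean_error(yaw, ref_yaw))
        print("x-displacement vs yaw corr.:", displacement_angle_correlation(x_disp, yaw))

The same two measures would apply per model (MediaPipe, OpenFace, the custom estimator) and per axis (y displacement against pitch), which is how a relative ranking of the candidate models can be obtained.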
Appears in Collections:4th International Conference on Advancements in Computing (ICAC) | 2022
Department of Information Technology
Research Papers - IEEE
Research Papers - SLIIT Staff Publications
Research Publications -Dept of Information Technology

Files in This Item:
File | Description | Size | Format
Qualitative_Analysis_of_Automated_Visual_Tracking_of_Objects_Through_Head_Pose_Estimation.pdf | Embargoed until 2050-12-31 | 868.57 kB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.