Please use this identifier to cite or link to this item:
https://rda.sliit.lk/handle/123456789/3292
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Abeysinghe, A | - |
dc.contributor.author | Arachchige, I. D | - |
dc.contributor.author | Samarasinghe, P | - |
dc.contributor.author | Dhanawansa, V | - |
dc.contributor.author | Velayuthan, M | - |
dc.date.accessioned | 2023-03-03T09:46:32Z | - |
dc.date.available | 2023-03-03T09:46:32Z | - |
dc.date.issued | 2022-12-09 | - |
dc.identifier.citation | A. Abeysinghe, I. D. Arachchige, P. Samarasinghe, V. Dhanawansa and M. Velayuthan, "Qualitative Analysis of Automated Visual Tracking of Objects Through Head Pose Estimation," 2022 4th International Conference on Advancements in Computing (ICAC), Colombo, Sri Lanka, 2022, pp. 369-374, doi: 10.1109/ICAC57685.2022.10025053. | en_US |
dc.identifier.isbn | 979-8-3503-9809-0 | - |
dc.identifier.uri | https://rda.sliit.lk/handle/123456789/3292 | - |
dc.description.abstract | An automated approach to object tracking and gaze estimation via head pose estimation is crucial to a range of applications in human-computer interfacing, including the analysis of head movement with respect to a stimulus when assessing a subject's level of attention. While varied approaches to gaze estimation and object tracking exist, their suitability for such applications has not been justified. To address this gap, this paper conducts a quantitative comparison of existing models for gaze estimation (Mediapipe, standalone OpenFace, and a custom head pose estimator with MTCNN face detection) and for object detection (the CSRT object tracker, the YOLO object detector, and a custom object detector). The accuracy of these models was compared against the annotations of the EYEDIAP dataset, to evaluate them both relative to one another and in absolute terms. The analysis revealed that the custom object detector and the OpenFace model are more accurate than the alternatives in terms of the number of annotations matched, absolute mean error, and the relationships between x-displacement and yaw and between y-displacement and pitch, and that they can therefore be used in combination for gaze tracking tasks. | en_US |
dc.language.iso | en | en_US |
dc.publisher | IEEE | en_US |
dc.relation.ispartofseries | 2022 4th International Conference on Advancements in Computing (ICAC); | - |
dc.subject | Qualitative Analysis | en_US |
dc.subject | Automated Visual Tracking | en_US |
dc.subject | Objects | en_US |
dc.subject | Head Pose Estimation | en_US |
dc.title | Qualitative Analysis of Automated Visual Tracking of Objects Through Head Pose Estimation | en_US |
dc.type | Article | en_US |
dc.identifier.doi | 10.1109/ICAC57685.2022.10025053 | en_US |
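For readers who want a concrete picture of the pipeline the abstract describes, the sketch below is a minimal illustration, not the authors' implementation: it estimates head yaw/pitch from MediaPipe FaceMesh landmarks via OpenCV's solvePnP and computes the absolute mean error against ground-truth angles such as EYEDIAP's annotations. The landmark-index mapping, the generic 3D face model, and the pinhole camera approximation are assumptions borrowed from common head-pose tutorials.

```python
# Illustrative head-pose sketch (not the paper's method): MediaPipe face
# landmarks + OpenCV solvePnP to recover pitch/yaw, plus the absolute mean
# error metric the abstract uses for comparison against annotations.
import cv2
import numpy as np
import mediapipe as mp

# Generic 3D face model (millimetres, nose tip at origin) -- an assumption
# taken from common solvePnP head-pose tutorials, not from the paper.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),          # nose tip
    (0.0, -330.0, -65.0),     # chin
    (-225.0, 170.0, -135.0),  # left eye outer corner
    (225.0, 170.0, -135.0),   # right eye outer corner
    (-150.0, -150.0, -125.0), # left mouth corner
    (150.0, -150.0, -125.0),  # right mouth corner
])
# Corresponding MediaPipe FaceMesh landmark indices (assumed mapping).
LANDMARK_IDS = [1, 152, 263, 33, 291, 61]

def head_pose_deg(frame_bgr, face_mesh):
    """Return (pitch, yaw) in degrees for the first detected face, else None."""
    h, w = frame_bgr.shape[:2]
    res = face_mesh.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not res.multi_face_landmarks:
        return None
    lms = res.multi_face_landmarks[0].landmark
    image_points = np.array(
        [(lms[i].x * w, lms[i].y * h) for i in LANDMARK_IDS], dtype=np.float64
    )
    # Approximate pinhole camera: focal length ~ image width, principal
    # point at the image centre, no lens distortion.
    cam = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)
    ok, rvec, _ = cv2.solvePnP(MODEL_POINTS, image_points, cam, np.zeros((4, 1)))
    if not ok:
        return None
    rmat, _ = cv2.Rodrigues(rvec)
    angles, *_ = cv2.RQDecomp3x3(rmat)  # Euler angles in degrees: (pitch, yaw, roll)
    return angles[0], angles[1]

def mean_abs_error(pred, gt):
    """Absolute mean error between estimated and annotated angle series."""
    pred, gt = np.asarray(pred), np.asarray(gt)
    return float(np.mean(np.abs(pred - gt)))

if __name__ == "__main__":
    mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1)
    frame = cv2.imread("frame.png")  # hypothetical input frame
    print(head_pose_deg(frame, mesh))
```

In the same spirit, per-frame pitch/yaw estimates could be correlated with the annotated y- and x-displacements of a tracked target to reproduce the displacement-angle relationships the abstract reports; loading EYEDIAP itself is out of scope for this sketch.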
Appears in Collections:
4th International Conference on Advancements in Computing (ICAC) 2022
Department of Information Technology
Research Papers - IEEE
Research Papers - SLIIT Staff Publications
Research Publications - Dept of Information Technology
Files in This Item:
File | Description | Size | Format
---|---|---|---
Qualitative_Analysis_of_Automated_Visual_Tracking_of_Objects_Through_Head_Pose_Estimation.pdf | Restricted (request a copy) until 2050-12-31 | 868.57 kB | Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.