Publication: 2D Pose Estimation based Child Action Recognition
| dc.contributor.author | Mohottala, S | |
| dc.contributor.author | Abeygunawardana, S | |
| dc.contributor.author | Samarasinghe, P | |
| dc.contributor.author | Kasthurirathna, D | |
| dc.contributor.author | Abhayaratne, C | |
| dc.date.accessioned | 2023-01-23T10:47:28Z | |
| dc.date.available | 2023-01-23T10:47:28Z | |
| dc.date.issued | 2022-11 | |
| dc.description.abstract | We present, for the first time, a graph convolutional network with 2D pose estimation for the child action recognition task, achieving results on par with LRCN on a benchmark dataset of videos captured in unconstrained environments. | en_US |
| dc.identifier.citation | S. Mohottala, S. Abeygunawardana, P. Samarasinghe, D. Kasthurirathna and C. Abhayaratne, "2D Pose Estimation based Child Action Recognition," TENCON 2022 - 2022 IEEE Region 10 Conference (TENCON), Hong Kong, Hong Kong, 2022, pp. 1-7, doi: 10.1109/TENCON55691.2022.9977799. | en_US |
| dc.identifier.doi | 10.1109/TENCON55691.2022.9977799 | en_US |
| dc.identifier.issn | 2159-3442 | |
| dc.identifier.uri | https://rda.sliit.lk/handle/123456789/3141 | |
| dc.language.iso | en | en_US |
| dc.publisher | Institute of Electrical and Electronics Engineers Inc. | en_US |
| dc.relation.ispartofseries | IEEE Region 10 Annual International Conference, Proceedings/TENCON; | |
| dc.subject | child action recognition | en_US |
| dc.subject | graph convolutional networks | en_US |
| dc.subject | Long-term recurrent convolutional network | en_US |
| dc.subject | transfer learning | en_US |
| dc.title | 2D Pose Estimation based Child Action Recognition | en_US |
| dc.type | Article | en_US |
| dspace.entity.type | Publication |
Files
Original bundle
1 - 1 of 1
- Name:
- 2D_Pose_Estimation_based_Child_Action_Recognition.pdf
- Size:
- 380.76 KB
- Format:
- Adobe Portable Document Format
- Description:
License bundle
1 - 1 of 1
- Name:
- license.txt
- Size:
- 1.71 KB
- Format:
- Item-specific license agreed to upon submission
- Description:
