Please use this identifier to cite or link to this item:
https://rda.sliit.lk/handle/123456789/2956
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Hewagama, K. G | - |
dc.contributor.author | Suwandarachchi, T. D | - |
dc.contributor.author | Hettiarachchi, C. R | - |
dc.contributor.author | Alwis, P. L. D. N | - |
dc.contributor.author | Karunasena, A | - |
dc.contributor.author | Weerasinghe, K. M. L. P | - |
dc.date.accessioned | 2022-09-05T09:23:46Z | - |
dc.date.available | 2022-09-05T09:23:46Z | - |
dc.date.issued | 2022-07-18 | - |
dc.identifier.citation | K. G. Hewagama, T. D. Suwandarachchi, C. R. Hettiarachchi, P. L. D. N. Alwis, A. Karunasena and K. M. L. P. Weerasinghe, "eVision - A technological solution to assist vision impaired in self-navigation," 2022 IEEE 7th International conference for Convergence in Technology (I2CT), 2022, pp. 1-7, doi: 10.1109/I2CT54291.2022.9825308. | en_US |
dc.identifier.isbn | 978-1-6654-2168-3 | - |
dc.identifier.uri | http://rda.sliit.lk/handle/123456789/2956 | - |
dc.description.abstract | Visually impaired people face many difficulties in navigation, such as crossing the road, identifying signs and text in indoor and outdoor environments, and avoiding obstacles. Even though much research has been done to assist visually impaired people, most methods are unpopular, and almost all visually impaired people still rely only on the white cane. This paper proposes eVision, which consists of a mobile app as well as a wearable tool that enables visually impaired people to detect obstacles and objects such as moving vehicles and staircases, identify signs, and receive assistance with road crossing and natural scene text recognition, using Convolutional Neural Networks and image processing techniques. The CNN architecture used for object detection was SSD MobileNet V2, since it provided around 95% accuracy for most objects with good performance on mobile. A MobileNet V2 transfer learning model was used for classification of objects, which provided around 94% accuracy. For text detection, the EAST algorithm was used, and the method resulted in an accuracy of around 98%. From the data generated by the models, eVision provides audio feedback to the user using a text-to-speech (TTS) system. | en_US |
dc.language.iso | en | en_US |
dc.publisher | IEEE | en_US |
dc.relation.ispartofseries | 2022 IEEE 7th International conference for Convergence in Technology (I2CT); | - |
dc.subject | eVision | en_US |
dc.subject | technological solution | en_US |
dc.subject | vision impaired | en_US |
dc.subject | assist vision | en_US |
dc.subject | self-navigation | en_US |
dc.title | eVision - A technological solution to assist vision impaired in self-navigation | en_US |
dc.type | Article | en_US |
dc.identifier.doi | 10.1109/I2CT54291.2022.9825308 | en_US |
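The abstract above describes a pipeline in which model detections are turned into audio feedback through a text-to-speech system. As a purely illustrative sketch (the paper does not publish its code; the function name, the detection tuple format, and the 0.95 confidence threshold below are assumptions, loosely echoing the ~95% accuracy reported for SSD MobileNet V2), the detection-to-phrase step might look like:

```python
# Illustrative sketch only: convert object-detection results into a
# feedback sentence suitable for a TTS engine. The detection format
# (label, confidence, direction) and the threshold are hypothetical
# and are not taken from the eVision implementation.

def detections_to_phrase(detections, min_confidence=0.95):
    """Build a spoken-feedback phrase from (label, confidence, direction) tuples."""
    confident = [(label, direction)
                 for label, confidence, direction in detections
                 if confidence >= min_confidence]
    if not confident:
        return "Path clear"
    parts = [f"{label} {direction}" for label, direction in confident]
    return "Caution: " + ", ".join(parts)

# Example: two confident detections and one filtered out by the threshold.
feedback = detections_to_phrase([
    ("vehicle", 0.97, "ahead"),
    ("staircase", 0.96, "to the left"),
    ("sign", 0.40, "to the right"),
])
print(feedback)  # Caution: vehicle ahead, staircase to the left
```

In the actual system, the resulting phrase would then be passed to a text-to-speech engine for audio output, as the abstract describes.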
Appears in Collections:
- Department of Information Technology
- Research Papers - IEEE
- Research Papers - SLIIT Staff Publications
- Research Publications - Dept of Information Technology
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
eVision_-_A_technological_solution_to_assist_vision_impaired_in_self-navigation.pdf | Embargoed until 2050-12-31 | 1.75 MB | Adobe PDF |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.