Please use this identifier to cite or link to this item: https://rda.sliit.lk/handle/123456789/2956
Full metadata record
DC Field | Value | Language
dc.contributor.author | Hewagama, K. G. | -
dc.contributor.author | Suwandarachchi, T. D. | -
dc.contributor.author | Hettiarachchi, C. R. | -
dc.contributor.author | Alwis, P. L. D. N. | -
dc.contributor.author | Karunasena, A. | -
dc.contributor.author | Weerasinghe, K. M. L. P. | -
dc.date.accessioned | 2022-09-05T09:23:46Z | -
dc.date.available | 2022-09-05T09:23:46Z | -
dc.date.issued | 2022-07-18 | -
dc.identifier.citation | K. G. Hewagama, T. D. Suwandarachchi, C. R. Hettiarachchi, P. L. D. N. Alwis, A. Karunasena and K. M. L. P. Weerasinghe, "eVision - A technological solution to assist vision impaired in self-navigation," 2022 IEEE 7th International conference for Convergence in Technology (I2CT), 2022, pp. 1-7, doi: 10.1109/I2CT54291.2022.9825308. | en_US
dc.identifier.isbn | 978-1-6654-2168-3 | -
dc.identifier.uri | http://rda.sliit.lk/handle/123456789/2956 | -
dc.description.abstract | Visually impaired people face many difficulties in navigation, such as crossing the road, identifying signs and text in indoor and outdoor environments, and avoiding obstacles. Although much research has been done to assist visually impaired people, most methods are unpopular, and almost all visually impaired people still rely only on the white cane. This paper proposes eVision, which consists of a mobile app and a wearable tool that enable visually impaired people to detect obstacles and objects such as moving vehicles and staircases, identify signs, and receive assistance with road crossing and natural scene text recognition, using Convolutional Neural Networks (CNNs) and image processing techniques. The CNN architecture used for object detection was SSD MobileNet V2, since it provided around 95% accuracy for most objects with good performance on mobile devices. A MobileNet V2 transfer learning model was used for object classification, which provided around 94% accuracy. For text detection, the EAST algorithm was used, resulting in an accuracy of around 98%. From the data generated by the models, eVision provides audio feedback to the user through a text-to-speech (TTS) system. | en_US
dc.language.iso | en | en_US
dc.publisher | IEEE | en_US
dc.relation.ispartofseries | 2022 IEEE 7th International conference for Convergence in Technology (I2CT) | -
dc.subject | eVision | en_US
dc.subject | technological solution | en_US
dc.subject | vision impaired | en_US
dc.subject | assist vision | en_US
dc.subject | self-navigation | en_US
dc.title | eVision - A technological solution to assist vision impaired in self-navigation | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.1109/I2CT54291.2022.9825308 | en_US
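The abstract describes a pipeline in which model outputs (object detections, sign identifications, recognized text) are converted into audio feedback via a TTS system. The following is a minimal illustrative sketch of that final step only, showing how per-frame detection outputs might be assembled into a sentence for a TTS engine; all function and variable names here are hypothetical, as the paper's implementation is not included in this record.

```python
# Illustrative sketch only: turning per-frame model outputs
# (label, confidence-score pairs, as a detector such as SSD MobileNet V2
# would produce) into a single sentence ready for a text-to-speech engine.
# Names are hypothetical and do not come from the eVision source code.

def feedback_sentence(detections, threshold=0.5):
    """Build an audio-feedback sentence from (label, score) pairs,
    keeping only detections at or above the confidence threshold."""
    kept = [label for label, score in detections if score >= threshold]
    if not kept:
        return "Path is clear."
    return "Detected " + ", ".join(kept) + " ahead."

if __name__ == "__main__":
    # One hypothetical frame: the low-confidence vehicle is filtered out.
    frame = [("staircase", 0.94), ("vehicle", 0.31), ("sign", 0.88)]
    print(feedback_sentence(frame))  # → Detected staircase, sign ahead.
```

In the system the abstract describes, the resulting string would then be passed to a TTS component to be spoken aloud to the user.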
Appears in Collections:
Department of Information Technology
Research Papers - IEEE
Research Papers - SLIIT Staff Publications
Research Publications - Dept of Information Technology

Files in This Item:
File | Description | Size | Format
eVision_-_A_technological_solution_to_assist_vision_impaired_in_self-navigation.pdf | Until 2050-12-31 | 1.75 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.