SLIIT Conference and Symposium Proceedings

Permanent URI for this community: https://rda.sliit.lk/handle/123456789/295

All SLIIT faculties annually conduct international conferences and symposia. Publications from these events are included in this collection.

Search Results

Now showing 1 - 3 of 3
  • Publication (Open Access)
    Object Recognition and Assistance System for Visually Impaired Shoppers
    (Sri Lanka Institute of Information Technology, 2023-03-25) Tennekoon, S; Abhayasinghe, N; Wedasingha, N
    Shopping is effortless for many individuals, but it can be a struggle and a chaotic experience for the visually impaired. Visual impairment carries societal stigma and causes considerable inconvenience. Although shopping may sound easy, it is a crucial social activity for many visually impaired (VI) individuals. VI shoppers typically require assistance to identify products, which can cause delays and inconvenience when shop assistants lack information or product familiarity. Enabling VI shoppers to shop independently, regardless of the size and layout of the shopping mall, is therefore essential: it encourages them to participate in social activities and perform their daily chores independently. Although many products have been developed to assist VI shoppers at shopping malls, drawbacks in some of these have led to failures in delivering accurate object-identification information, causing further inconvenience. This project proposes a feasible solution that allows VI shoppers to shop easily and independently. Object recognition is used to identify garment items while shopping without the assistance of another individual. A Convolutional Neural Network (CNN) achieves good accuracy and precision, with a validation accuracy of 90%. Ensemble modelling is also applied to reduce generalization error in the predictions and achieve higher accuracy while overcoming the drawbacks of existing products on the market. The proposed product aims to serve the widest possible population of VI shoppers with satisfaction, reliability, and low cost.
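The abstract above mentions ensemble modelling to reduce generalization error in the CNN's predictions. The paper does not publish its code, but a common form of this technique is soft voting: averaging the class-probability outputs of several models before taking the argmax. A minimal sketch, with hypothetical probabilities standing in for three trained garment classifiers:

```python
import numpy as np

def ensemble_predict(prob_lists):
    """Soft-voting ensemble: average the per-class probabilities
    of several models, then take the argmax for each sample."""
    avg = np.mean(prob_lists, axis=0)   # shape: (n_samples, n_classes)
    return avg, avg.argmax(axis=1)

# Hypothetical outputs of three models for 2 samples over 3 garment classes.
model_a = np.array([[0.7, 0.2, 0.1], [0.1, 0.5, 0.4]])
model_b = np.array([[0.6, 0.3, 0.1], [0.2, 0.2, 0.6]])
model_c = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])

avg_probs, labels = ensemble_predict([model_a, model_b, model_c])
print(labels)  # → [0 2]
```

Averaging smooths out the idiosyncratic errors of individual models (here, model_a disagrees on the second sample but is outvoted), which is the variance-reduction effect the abstract refers to.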
  • Publication (Open Access)
    Enhancement of Images Under Low Light Conditions Using Artificial Intelligence
    (Sri Lanka Institute of Information Technology, 2023-03-25) Marzook, M; Herath, M; Liyanage, M; Thilakanayake, T
    Images taken in low-light conditions lack much of the information that well-lit images contain: object colours, fine details, and overall quality are lost. Recovering these features is important for any downstream application. This study proposes a model that enhances images taken under low-light conditions, improving image quality through Artificial Intelligence. The proposed method improves the clarity of the image, bringing it closer to its well-lit equivalent. Both image processing and deep learning techniques are explored, including Convolutional Neural Network (CNN) based generative models: Autoencoders (AE) and Generative Adversarial Networks (GANs). The study combines several datasets containing pairs of well-lit and low-light images, and compares the two CNN-based generative models. The Structural Similarity Index, supported by the Peak Signal-to-Noise Ratio, quantitatively shows that the proposed CNN-based Autoencoder outperforms the proposed CNN-based GAN; qualitative observation of the image results supports this finding. Both models, however, greatly enhance low-light images, revealing features that were not visible beforehand, and produce results with good colour accuracy. This study thus addresses methods for enhancing low-light images and provides a comparison between two suitable models, Autoencoders and GANs. The proposed solution addresses many of the limitations in the extant literature.
  • Publication (Embargo)
    Deep Transfer Learning Approach for Facial and Verbal Disease Detection
    (2021 3rd International Conference on Advancements in Computing (ICAC), SLIIT, 2021-12-09) Manage, D.M.; Alahakoon, A.M.I.S.; Weerathunga, K.; Weeratunga, T.; Lunugalage, D.; De Silva, H.
    Millions of people suffer from various diseases, including eye diseases, facial skin diseases, tongue diseases, and voice abnormalities. Most eye diseases cause full or partial blindness; skin and tongue complications can be signs of cancers; and voice abnormalities can be cured if treated at an early stage. Experienced medical practitioners can diagnose these diseases, but due to pandemic conditions and high consultation costs, people tend not to consult doctors. This research focuses on developing an application for the automatic detection of eye, skin, tongue, and verbal diseases using a transfer learning (TL) based deep learning (DL) approach. Deep learning is a branch of machine learning (ML) used in most computer vision applications. Transfer learning is used to adapt existing convolutional neural network (CNN) models for disease detection: DenseNet121, MobileNetV2, and ResNet152V2 detect eye, skin, and tongue diseases respectively, and a new model detects voice abnormalities. CNN models automatically extract features from the given images and voice data. All trained models achieved accuracies between 80% and 95%.
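The abstract above describes transfer learning: reusing a pretrained CNN (such as DenseNet121) as a fixed feature extractor and training only a new classification head for the target disease. As a dependency-free illustration of that idea (not the paper's code), the sketch below uses a fixed random ReLU projection as a stand-in for the frozen backbone and trains only a new logistic-regression head on a toy binary task:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a frozen, pretrained backbone (e.g. DenseNet121 with its
# classification head removed): a fixed ReLU projection, never updated.
W_frozen = rng.normal(size=(64, 16))

def extract_features(x):
    """Frozen feature extractor with standardized outputs."""
    f = np.maximum(x @ W_frozen, 0.0)
    return (f - f.mean(axis=0)) / (f.std(axis=0) + 1e-8)

# Toy binary dataset (stand-in for "diseased" vs "healthy" inputs).
x = rng.normal(size=(200, 64))
feats = extract_features(x)
scores = feats @ rng.normal(size=(16,))
y = (scores > np.median(scores)).astype(np.float64)

# Transfer-learning step: train ONLY the new classification head.
w_head, b_head = np.zeros(16), 0.0
for _ in range(1000):                      # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(feats @ w_head + b_head)))
    grad = p - y
    w_head -= 0.5 * (feats.T @ grad) / len(y)
    b_head -= 0.5 * grad.mean()

acc = np.mean(((feats @ w_head + b_head) > 0) == (y == 1.0))
print(f"training accuracy of the new head: {acc:.2f}")
```

Because only the small head is trained while the backbone's weights stay fixed, far less labelled data is needed than training a full CNN from scratch, which is why the approach suits medical imaging tasks with limited datasets.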