Research Papers - Dept of Information Technology

Permanent URI for this collection: https://rda.sliit.lk/handle/123456789/593

Search Results

Now showing 1 - 3 of 3
  • Publication (Open Access)
    Gesture driven smart home solution for bedridden people
    (Association for Computing Machinery, 2020-09-21) Jayaweera, N; Gamage, B; Samaraweera, M; Liyanage, S; Lokuliyana, S; Kuruppu, T
    Conversion of ordinary houses into smart homes has been a rising trend in recent years. Smart home development has focused on enhancing the quality of the daily activities of able-bodied people, but many smart homes are not designed to be user friendly for differently-abled people, such as those who are immobile or bedridden (disabled people with at least one hand movable). Due to negligence and forgetfulness, there are cases where electrical devices are left switched on regardless of necessity; this is one of the most common examples of domestic energy wastage. To overcome these challenges, this research presents an improved smart home design, MobiGO, which uses cameras to capture gestures and smart sockets to deliver gesture-driven outputs to home appliances. The camera captures the gestures made by the user, and the system processes those images through gesture recognition and image processing techniques. The commands relevant to each gesture are sent to the corresponding appliance through an IoT device attached to it. Keywords: deep learning, Convolutional Neural Network (CNN), image processing, gesture recognition, smart homes, IoT. Finally, the authors conclude that MobiGO offers a smart home system that is safer and easier to use for people with disabilities.
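    The gesture-to-command dispatch described in this abstract could be sketched as follows; the gesture names, device identifiers, and mapping are illustrative assumptions, not the authors' implementation:

    ```python
    # Hypothetical sketch of the MobiGO-style pipeline step in which a
    # recognized gesture label is translated into a command for a specific
    # smart socket. All gesture and device names here are assumptions.

    # Mapping from recognized gestures to (device, command) pairs.
    GESTURE_COMMANDS = {
        "thumbs_up":   ("bedside_lamp", "ON"),
        "thumbs_down": ("bedside_lamp", "OFF"),
        "open_palm":   ("fan", "ON"),
        "closed_fist": ("fan", "OFF"),
    }

    def dispatch(gesture: str) -> str:
        """Translate a recognized gesture into an appliance command message."""
        if gesture not in GESTURE_COMMANDS:
            return "IGNORED: unrecognized gesture"
        device, command = GESTURE_COMMANDS[gesture]
        # In the real system this message would be sent to the IoT smart socket.
        return f"{device} <- {command}"

    print(dispatch("thumbs_up"))   # bedside_lamp <- ON
    print(dispatch("wave"))        # IGNORED: unrecognized gesture
    ```

    The gesture recognition itself (camera capture and CNN classification) would sit upstream of this dispatch step.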
  • Publication (Embargo)
    Computer Vision and NLP based Multimodal Ensemble Attentiveness Detection API for E-Learning
    (IEEE, 2021-04-21) Wijeratne, M. D; Lakmal, R. H. G. A; Geethadhari, W. K. S; Athalage, M. A; Gamage, A; Kasthurirathna, D
    Attention is a fundamental element of effective learning, memory, and interaction. With the evolution of technologies in the modern digital age, however, learning has moved beyond traditional systems to more convenient online or e-learning systems. Nevertheless, unlike in traditional learning systems, attention detection of a student in an e-learning environment remains one of the barely explored areas in Human-Computer Interaction. This study proposes a multimodal ensemble solution to detect the level of attentiveness of a student in an e-learning environment, using computer vision, natural language processing, and deep learning to overcome the barriers to identifying user attention in e-learning. The proposed multimodal solution captures, processes, and predicts the attentiveness levels of individual students, which are subsequently aggregated through an ensemble model to derive an overall outcome of better accuracy than the individual model outcomes. The final output of the ensemble model is a range of percentages within which the attentiveness level of the student lies during a single online lesson. This range is delivered to users through an Application Programming Interface (API).
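    The ensemble step described in this abstract could be sketched as below; the per-modality scores, the weighting, and the fixed margin used to produce a percentage range are illustrative assumptions rather than the paper's actual model:

    ```python
    # Hypothetical sketch of aggregating per-modality attentiveness scores
    # (e.g. from a computer-vision model and an NLP model) into an overall
    # percentage range for a lesson. Weights and margin are assumptions.

    def ensemble_attentiveness(scores, weights=None, margin=5.0):
        """Combine per-modality scores (0-100) into a (low, high) percentage range."""
        if weights is None:
            weights = [1.0] * len(scores)
        total_w = sum(weights)
        mean = sum(s * w for s, w in zip(scores, weights)) / total_w
        low = max(0.0, mean - margin)
        high = min(100.0, mean + margin)
        return low, high

    # e.g. vision model reports 72%, NLP model reports 64%
    print(ensemble_attentiveness([72.0, 64.0]))  # (63.0, 73.0)
    ```

    An API serving this result would return the (low, high) pair for the lesson rather than a single point estimate.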
  • Publication (Embargo)
    Intelligent platform for visually impaired children for learning indoor and outdoor objects
    (IEEE, 2019-10-17) Jayawardena, C; Balasuriya, B. K; Lokuhettiarachchi, N. P; Ranasinghe, A. R. M. D. N
    Using Artificial Intelligence and Computer Vision to assist visually impaired persons has been a widely discussed topic in recent years. Many researchers are focusing on combining several technologies to assist such individuals in performing day-to-day tasks. Although many technologies are already used as platforms to help these individuals, considerably less focus has been placed on children aged between 6 and 14 years. Therefore, in this research we focus on how to use the latest advancements in Region-Based Convolutional Neural Networks (R-CNN), Recurrent Neural Networks (RNN), and speech models to provide an interactive learning experience for visually impaired children.
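    The interaction hinted at in this abstract, in which detected object labels are voiced back to a child, could be sketched as follows; the phrasing and labels are illustrative assumptions, and a real system would feed the sentence to a speech model rather than print it:

    ```python
    # Hypothetical sketch of turning object labels from a detector (e.g. an
    # R-CNN) into a simple spoken sentence for a child. Labels and wording
    # here are assumptions, not the authors' implementation.

    def describe_objects(labels):
        """Build a child-friendly sentence from detected object labels."""
        if not labels:
            return "I do not see anything right now."
        if len(labels) == 1:
            return f"I can see a {labels[0]}."
        items = ", a ".join(labels[:-1])
        return f"I can see a {items} and a {labels[-1]}."

    print(describe_objects(["chair", "table", "dog"]))
    # I can see a chair, a table and a dog.
    ```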