Research Publications Authored by SLIIT Staff
Permanent URI for this community: https://rda.sliit.lk/handle/123456789/4195
This collection includes all SLIIT staff publications presented at external conferences and published in external journals. The materials are organized by faculty to facilitate easy retrieval.
6 results
Search Results
Publication (Open Access)
Advancing Object Detection: A Narrative Review of Evolving Techniques and Their Navigation Applications (Institute of Electrical and Electronics Engineers Inc., 2025-03-17)
Tennekoon, S; Wedasingha, N; Welhenge, A; Abhayasinghe, N; Murray Am, I

Object detection plays a pivotal role in advancing computer vision systems by enabling machines to perceive and interact intelligently with their environments. Despite significant advancements, a comprehensive exploration of its evolution and applications in navigation remains underrepresented. This review examines the evolution of object detection technologies, from early methodologies to contemporary advancements, and their critical role in navigation tasks. Emphasis is placed on the significance of contextual learning in enhancing object detection performance by leveraging spatial and temporal information. The limitations of conventional approaches that rely heavily on hand-engineered features are also examined. It is then demonstrated that contextual learning facilitates automated feature extraction, yielding accuracy improvements exceeding 50% and greater adaptability across diverse applications. The review concludes by outlining future trends and opportunities for further advancements in object detection, underscoring its transformative impact on autonomous navigation and beyond. In summary, this review contributes to a comprehensive understanding of object detection technologies by offering insights into their evolution, highlighting their applications in navigation, and providing guidance for future research on context-aware systems.

Publication (Open Access)
Early Diagnosis and Severity Assessment of Weligama Coconut Leaf Wilt Disease and Coconut Caterpillar Infestation Using Deep Learning-Based Image Processing Techniques (Institute of Electrical and Electronics Engineers Inc., 2025-02-03)
Vidhanaarachchi, S; Wijekoon, J. L; Abeysiriwardhana, W. A.
S. P; Wijesundara, M

Global coconut (Cocos nucifera (L.)) cultivation faces significant challenges, including yield loss due to pest and disease outbreaks. In particular, Weligama Coconut Leaf Wilt Disease (WCWLD) and Coconut Caterpillar Infestation (CCI) damage coconut trees, causing severe production losses in Sri Lanka and nearby coconut-producing countries. Currently, both WCWLD and CCI are detected through on-field human observation, a process that is not only time-consuming but also limits early detection of infections. This paper presents a study conducted in Sri Lanka demonstrating the effectiveness of employing a transfer learning-based Convolutional Neural Network (CNN) and Mask Region-based CNN (Mask R-CNN) to identify WCWLD and CCI at their early stages and to assess disease progression. Further, the paper presents the use of the You Only Look Once (YOLO) object detection model to count the caterpillars distributed on leaves with CCI. The introduced methods were tested and validated using datasets collected from Matara, Puttalam, and Makandura, Sri Lanka. The results show that the proposed methods identify WCWLD and CCI with accuracies of 90% and 95%, respectively. In addition, the proposed WCWLD severity identification method classifies the severity with an accuracy of 97%.
Furthermore, the accuracies of the object detection models for counting the caterpillars on the leaflets were: YOLOv5, 96.87%; YOLOv8, 96.1%; and YOLO11, 95.9%.

Publication (Open Access)
A Deep Learning-Based Dual-Model Framework for Real-Time Malware and Network Anomaly Detection with MITRE ATT&CK Integration (Science and Information Organization, 2025)
Migara, H. M. S; Sandakelum, M. D. B; Maduranga, D. B. W. N; Kumara, D. D. K. C; Fernando, H; Abeywardena, K

The contemporary world of high digital connectivity has presented cybersecurity with more advanced threats, such as sophisticated malware and network attacks, which in most cases are not detected by traditional tools. Traditional static cybersecurity tools, including signature-based antivirus systems and rule-based intrusion detection, often fail against dynamic and previously unseen attacks. To address this issue, we propose a two-part, AI-powered cybersecurity solution that enables real-time threat detection at both the endpoint and network levels. The first component uses a Feedforward Neural Network (FNN) to classify Windows Portable Executable (PE) files as benign or malicious using structured static features. The second component improves network anomaly detection with a deep learning model augmented by Generative Adversarial Networks (GANs), effectively addressing data imbalance and sensitivity to rare cyber-attacks. To enhance performance further, the system is integrated with the MITRE ATT&CK framework, which correlates real-time detection results with adversarial tactics and techniques, offering actionable context to incident response teams. Tests on open-source datasets yielded accuracies of 98.0% for malware detection and 96.2% for network anomaly detection.
GAN-based data augmentation was very effective in improving the detection of less common attacks, including SQL injection and internal reconnaissance. Moreover, the system is horizontally scalable and responsive in real time owing to Docker-based deployment. The suggested framework is an effective, explainable and scalable cybersecurity defense system, well suited to Managed Security Service Providers (MSSPs) and Security Operations Centers (SOCs), greatly increasing the precision and contextual insight of threat detection.

Publication (Embargo)
Recognition and translation of Ancient Brahmi Letters using deep learning and NLP (IEEE, 2019-12)
Wijerathna, K. A. S. A. N; Sepalitha, R; Thuiyadura, I; Athauda, H; Suranjini, P. D; Silva, J. A. D. C; Jayakodi, A

Inscriptions are major resources for studying the ancient history and culture of any country's civilization. Analyzing, recognizing and translating the ancient (Brahmi) letters in inscriptions is very difficult work for the present generation. There is no automatic system for translating Brahmi letters into the Sinhala language; instead, a manual method is used for translating inscriptions. This manual method used in epigraphy takes a long time to decipher, analyze and translate the inscribed text. This research mainly focuses on the recognition of ancient Brahmi characters written between the 3rd century B.C. and the 1st century A.D. First, we remove noise, segment the letters from the inscription image and convert it into a binary image using image processing techniques. Secondly, we recognize the correct Brahmi letters and broken letters, and then identify the time period of the inscription using Convolutional Neural Networks in deep learning. Finally, the Brahmi letters are translated into modern Sinhala letters and the meaning of the inscription is provided using Natural Language Processing.
The proposed system provides a solution to overcome the existing problems in epigraphy.

Publication (Open Access)
A User-oriented Ensemble Method for Multi-Modal Emotion Recognition (SLAAI - International Conference on Artificial Intelligence, 2019-12-12)
Iddamalgoda, N; Thrimavithana, P; Fernando, H; Ratnayake, T; Priyadarshana, Y. H. P. P; Aththidiye, R; Kasthurirathna, D

Emotions play a vital role in the mental and physical activities of human lives. One of the biggest challenges in Human-Computer Interaction is emotion recognition. With the resurgence of Artificial Intelligence and Machine Learning, a considerable number of studies have been carried out to address this challenge. The individual heterogeneity in how people express emotions is a key problem that must be addressed to accurately detect a person's emotional state. The purpose of this work is to propose a novel ensemble method to predict emotions using a multimodal approach. The presented approach uses the modalities of facial expressions, voice variations, and speech and social media content to identify seven emotional states: anger, fear, disgust, happiness, sadness, surprise and neutral. In this study, Deep Neural Network models are used for facial expression-based and voice variation-based emotion recognition, and a Multinomial Naive Bayes algorithm is used for emotion recognition from speech and social media content. The three modalities were integrated using a novel ensemble method that captures the heterogeneity of individuals in how they express their emotions. The proposed ensemble method was evaluated against the real emotional states of a sample user group, and the experimental results suggest that it may be more accurate in recognizing emotions.
Accurate recognition of emotions may have myriad applications in domains such as healthcare, advertising and human resource management.

Publication (Embargo)
AI Base E-Learning Solution to Motivate and Assist Primary School Students (2021 3rd International Conference on Advancements in Computing (ICAC), SLIIT, 2021-12-09)
Silva, P.H.D.D.; Sudasinghe, S.A.V.D.; Hansika, P.D.U.; Gamage, M.P.; Gamage, M.P.A.W.

E-learning is a form of providing education using electronic devices. A lack of proper mechanisms for encouraging and assisting students is a key issue faced by many students in an e-learning environment. ‘Vidu Mithuru’ is a question-based e-learning application developed as a solution to these problems. The mobile application auto-generates and categorizes questions, evaluates answers and tracks performance, while providing motivational quotes by detecting the student's emotions. It is based on Neural Network, Natural Language Processing and Machine Learning concepts. To develop the application in compliance with standards, information provided by primary education professionals was used. The core objective of the proposed solution is to track performance and assist students to improve in their studies while keeping them motivated. The trained Machine Learning models achieved accuracies of 75%, 78%, 99% and 86% for the question categorization model, the speech emotion detection model, the facial emotion detection model and the answer evaluation model, respectively. We received favorable responses after testing the developed ‘Vidu Mithuru’ mobile application.
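The caterpillar-counting step in the coconut pest entry above ultimately reduces to counting the boxes a YOLO-style detector keeps after confidence filtering and non-maximum suppression (NMS). The following is a minimal sketch of that post-processing only, assuming detections arrive as (x1, y1, x2, y2, confidence) tuples; the threshold values are illustrative, not those used in the paper.

```python
from typing import List, Tuple

# One detection: x1, y1, x2, y2, confidence
Box = Tuple[float, float, float, float, float]

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def count_objects(dets: List[Box], conf_thresh: float = 0.5,
                  iou_thresh: float = 0.5) -> int:
    """Count distinct objects: drop low-confidence boxes, then greedy NMS."""
    kept: List[Box] = []
    # Visit surviving boxes in descending confidence order.
    for d in sorted((d for d in dets if d[4] >= conf_thresh),
                    key=lambda d: -d[4]):
        # Keep a box only if it does not heavily overlap a kept one.
        if all(iou(d, k) < iou_thresh for k in kept):
            kept.append(d)
    return len(kept)
```

In practice the raw detections would come from the trained YOLO model; here they are supplied directly so the counting logic can be seen in isolation.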
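The MITRE ATT&CK integration in the dual-model framework above correlates detection results with adversarial tactics and techniques for incident responders. A minimal enrichment step might look like the sketch below; the detection labels and the specific technique assignments are illustrative assumptions, not the authors' actual mapping.

```python
# Illustrative mapping from hypothetical detector labels to ATT&CK techniques.
# The technique IDs are real ATT&CK entries, but their pairing with these
# labels is an example, not the paper's mapping.
ATTACK_MAP = {
    "sql_injection": ("T1190", "Exploit Public-Facing Application"),
    "internal_recon": ("T1046", "Network Service Discovery"),
    "brute_force": ("T1110", "Brute Force"),
}

def enrich(detection: dict) -> dict:
    """Attach ATT&CK context to a raw detection record for the SOC queue."""
    tid, name = ATTACK_MAP.get(detection["label"], ("unknown", "unmapped"))
    return {**detection, "attack_technique_id": tid, "attack_technique": name}
```

Correlating each alert with a technique ID in this way is what gives analysts the "actionable context" the abstract describes: the enriched record can be triaged against the ATT&CK knowledge base rather than as a bare label.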
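The user-oriented ensemble entry above fuses facial, voice and text modalities while accounting for individual differences in emotional expression. One common way to realize such a fusion is a per-user weighted average of each modality's class-probability vector; the sketch below assumes that scheme (the abstract does not specify the exact ensemble rule), with the weights standing in for a learned per-user profile.

```python
from typing import Dict, List

# The seven emotional states named in the abstract.
EMOTIONS = ["anger", "fear", "disgust", "happiness", "sadness", "surprise", "neutral"]

def fuse(per_modality: Dict[str, List[float]],
         weights: Dict[str, float]) -> str:
    """Late fusion: weighted average of per-modality probability vectors.

    `weights` models a user's heterogeneity, e.g. someone whose face is
    expressive but whose text is flat gets a higher facial weight.
    """
    total = sum(weights[m] for m in per_modality)
    fused = [
        sum(weights[m] * per_modality[m][i] for m in per_modality) / total
        for i in range(len(EMOTIONS))
    ]
    # Return the emotion with the highest fused score.
    return EMOTIONS[max(range(len(EMOTIONS)), key=fused.__getitem__)]
```

With the same raw modality outputs, different per-user weight profiles can yield different predictions, which is precisely the heterogeneity the ensemble is meant to capture.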
