MSc in Enterprise Application Development
Permanent URI for this collection: https://rda.sliit.lk/handle/123456789/2480
Students in the MSc in Enterprise Application Development programme are required to submit a thesis as a compulsory component of their degree requirements. This collection features merit-based theses submitted by postgraduate students specialising in Enterprise Application Development. Abstracts are available for public viewing, while the full texts can be accessed on-site within the library.
Theses and Dissertations of the Sri Lanka Institute of Information Technology (SLIIT) are licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Browse
Publication Open Access
A Hybrid Machine Learning and Rule-Based Approach for a Sinhala Natural Language Interface to Database (Sri Lanka Institute of Information Technology, 2025-12)
Mahdi, M. R. M.
The objective of Natural Language Interfaces to Databases (NLIDBs) is to provide users with an intuitive way to access their data: users can ask questions of their relational databases in natural language instead of a formal query language such as SQL. While there have been significant advancements in developing NLIDBs for high-resource languages such as English, support for low-resource and morphologically rich languages such as Sinhala remains limited. Most existing Sinhala NLIDBs have employed rule-based approaches, which have limitations in adapting to new conditions and in scaling. In this paper we propose a hybrid approach to developing Sinhala NLIDBs that combines rule-based logic with statistical methods to address these limitations. Our focus is on a single student table and support for the basic SQL operations. We employ core linguistic preprocessing techniques (tokenization, stemming, POS tagging) together with a grammar-driven query parser designed to accommodate the unique structure of Sinhala. We use a manually annotated dataset of 800 Sinhala-SQL query pairs to improve the model's ability to identify semantic elements through Named Entity Recognition (NER). Furthermore, we employ an intent classifier to guide the SQL generation process, enabling a variety of natural language queries to be understood correctly. Our hybrid architecture seeks to balance the precision of rule-based systems with the flexibility of statistical systems, providing both interpretability and generalizability.
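The division of labour this abstract describes, an intent classifier steering rule-based SQL templates, can be illustrated in miniature. The cue words, template strings, and the single `student` table below are illustrative assumptions only (a real system would use a trained classifier over Sinhala tokens, not English keyword matching):

```python
# Minimal sketch of a hybrid NLIDB step: a classifier-like intent score
# selects a rule-based SQL template. All cues/templates are hypothetical.

SQL_TEMPLATES = {
    "select_all": "SELECT * FROM student",
    "count": "SELECT COUNT(*) FROM student",
    "filter": "SELECT * FROM student WHERE {column} = '{value}'",
}

# Keyword cues standing in for a trained intent classifier.
INTENT_CUES = {
    "count": ("how many", "count"),
    "filter": ("named", "whose"),
}

def classify_intent(query):
    """Pick the first intent whose cue appears in the query; default to select_all."""
    text = query.lower()
    for intent, cues in INTENT_CUES.items():
        if any(cue in text for cue in cues):
            return intent
    return "select_all"

def generate_sql(query, column=None, value=None):
    """Fill the rule-based template chosen by the (toy) intent classifier."""
    intent = classify_intent(query)
    template = SQL_TEMPLATES[intent]
    if intent == "filter":
        return template.format(column=column, value=value)
    return template
```

The rule-based templates keep the output interpretable, while the classifier (here reduced to keyword cues) supplies the flexibility to route varied phrasings to the right template.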
In addition to improving the accessibility of databases for users who speak Sinhala, this research provides a foundation for developing future multilingual and multi-domain NLIDBs for low-resource languages.

Publication Open Access
A Multi-Modal Deep Learning and Explainable AI Framework for Transparent Job Matching and Career Development (Sri Lanka Institute of Information Technology, 2025-12)
Warnasooriya, D. M. D. W. R.
In the digital age, career and professional growth are shaped by advanced systems for finding employment, reviewing applicants, and advancing skills. Explainable Artificial Intelligence (XAI) is essential to enable contextual job matching and to reduce discrimination in AI-based hiring processes. However, most existing systems remain non-transparent and restrictive, perpetuating prejudice and weakening credibility. This study presents a new career development and recruitment platform using XAI that goes beyond traditional methods. The proposed system uses a hybrid two-stage architecture combining deep learning with Graph Neural Networks to encode candidate-job relevance as well as the structural dynamics of career progression, skill dependencies, and mentorship networks. New feature engineering algorithms model the temporal dynamics of profile development and skill acquisition, enabling dynamic and context-sensitive candidate representations. To guarantee interpretability, a recruitment-specific explainability engine offers stakeholder-specific explanations such as comparative candidate-job explanations, trajectory-correspondence insights, and visualizations of fairness trade-offs. A real-world evaluation, combining statistical fairness measures and accuracy with user-centric interpretability measures, demonstrates the effectiveness of the system.
The results highlight the potential of hybrid AI architectures and domain-specific explainability to create ethical, equitable, and adaptive solutions for the future of work.

Publication Open Access
Advanced Collaborative BI Chatbot for Enhanced Enterprise Decision-Making (Sri Lanka Institute of Information Technology, 2025-12)
Shanika, W. D.
This research outlines the creation of an innovative collaborative Business Intelligence (BI) chatbot aimed at improving enterprise decision-making through context awareness, multimodal data integration, predictive analytics, and real-time collaboration. The system merges structured data from SQL databases and CSV files with unstructured resources such as text files, PDFs, and images. A multimodal data integration component uses FAISS-based vector embeddings to enable semantic retrieval from unstructured materials while guaranteeing smooth access to structured data repositories. The predictive analytics feature goes beyond simple regression by integrating statistical model selection to determine the most appropriate forecasting technique. It also produces interactive dashboards using Dash and generates static PDF reports to cater to various decision-making scenarios. The context-awareness module incorporates tokenization, categorization, and embedding-based retrieval, as well as the capability to create user-specific reports, providing responses tailored to both inquiries and analytical requirements. Real-time collaboration among teams is facilitated through integrations with Slack and Telegram, alongside a custom chatbot interface, allowing several users to query, share, and annotate insights together. To enable enterprise deployment, the system includes API generation with secure management of API keys, credit allocation, and token-based pricing, ensuring controlled access and scalability.
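The semantic-retrieval step this entry describes (FAISS over dense vector embeddings) can be sketched with a toy substitute so the example runs standalone: a bag-of-words "embedding" and cosine similarity stand in for the real encoder and vector index, and the documents are invented:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'. The actual system would encode text
    with a neural model and index the dense vectors with FAISS."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]
```

The shape of the operation is the same as in the described system: embed the query, score it against every indexed document, and return the closest matches for the chatbot to ground its answer in.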
Together, these advancements transform the chatbot into a flexible decision-support platform that consolidates various data sources, generates predictive insights, produces contextual reports, and promotes collaborative analytics in real time.

Publication Open Access
AI-Driven Adaptive UI Generation: Personalizing E-Learning Interfaces Based on Cognitive Abilities of Undergraduates (Sri Lanka Institute of Information Technology, 2025-12)
Weerakoon, S. D.
E-learning platforms often adopt uniform interface designs and neglect learners' cognitive differences, leading to cognitive overload and disengagement. While content personalization is common, dynamic interface-level adaptation remains underexplored. To address this gap, this study introduces an AI-driven adaptive user interface framework that personalizes the interface dynamically based on individual cognitive attributes, namely attention span, memory capacity, and cognitive load, considering layout modification, navigation structure, and information density. Three validated methods are used for cognitive profiling: attention via WebGazer.js, cognitive load through the N-Back test, and memory capacity via the Digit Span Test. A within-subjects experiment was conducted with 30 undergraduates in Sri Lanka. All participants interacted with both static and AI-driven adaptive interfaces, followed by a post-interaction evaluation based on validated instruments combining the NASA-TLX, SUS, and UEQ scales. Results indicated a 96.7% adaptation success rate, along with positive post-interaction evaluations (M > 4.2/5) across cognitive load, navigation efficiency, personalization, usability, and engagement. Correlation patterns indicated that cognitive profiles influenced perceived outcomes. The impact of the AI-driven adaptive user interface was evaluated through quantitative and statistical analysis using statistical software.
The proposed system is designed as a web-based platform, using AI-driven personalization to enhance user engagement and learning effectiveness. Further, the research findings contribute to the fields of human-computer interaction and education by validating AI-driven adaptive UI generation.

Publication Open Access
An Enterprise-Grade EdTech Solution for Real-Time Handwriting Assistance: A Usability and Accessibility Approach (Sri Lanka Institute of Information Technology, 2025-12)
Herath, H. M. P. P. B.
This research presents Fun Letter Tracing, an educational technology system that provides real-time handwriting assistance for primary school students. The system uses React for its browser-based front end and Flask for a lightweight scoring service, creating a practice loop that analyzes user input after tracing, followed by tip generation and retry functionality. Handwriting quality is evaluated through a 50-dimensional feature vector covering smoothness, consistency, spatial spread, completeness, and temporal cues, using Random Forest and small FFNN models together with letter-specific geometric checks based on keypoints and pixel corridors. A limited API set, comprising the /analyze, /letter-info/, /progress-summary, and /health endpoints, makes school integration easier while maintaining privacy protection by default. The design follows WCAG 2.2 accessibility standards, with large touch areas, visible focus indicators, non-color-based alerts, TTS functionality, and simple language suitable for children. The evaluation strategy combines three assessment methods: expert walkthroughs, controlled testing, and classroom-based trials involving 30-50 students and 10-15 teachers.
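Two of the simpler cues named in this entry can be approximated directly from raw trace points. The sketch below is purely illustrative (the function name, the specific formulas, and the choice of cues are assumptions, not the system's actual 50-dimensional extractor):

```python
import math

def stroke_features(points):
    """Approximate two stroke cues from (x, y) trace points:
    average segment length (a crude smoothness proxy) and
    spatial spread (the bounding-box diagonal). Hypothetical sketch."""
    # Distances between consecutive sampled points.
    segs = [math.dist(points[i], points[i + 1]) for i in range(len(points) - 1)]
    avg_seg = sum(segs) / len(segs)
    # Diagonal of the axis-aligned bounding box of the trace.
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    spread = math.hypot(max(xs) - min(xs), max(ys) - min(ys))
    return avg_seg, spread
```

Features like these, computed per trace, are the kind of numeric inputs a Random Forest or small FFNN scorer would consume.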
The evaluation uses quantitative metrics, including SUS scores, completion rates, time-to-first-success, and latency performance at 150 ms or below, together with expert rubric assessments and qualitative feedback from teachers and students. The thesis presents detailed information about system architecture, security measures, accessibility standards, and testing procedures for school implementation.

Publication Open Access
Automating Voice-based Conversations into Formal User Stories using NLP and Speech Recognition (Sri Lanka Institute of Information Technology, 2025-12)
Jayasingha, H. M. C. P.
In agile software development, systems and software functions are often discussed and informally transcribed through conversations in Agile meetings, which leads to gaps and errors in documentation. Equally, traditional approaches that use voice recordings depend heavily on automated voice recognition systems to document conversations, making them riddled with errors and inconsistencies. This paper offers an automated pipeline for the transcription and analysis of Agile voice conversations in which requirements are gathered. The voice conversations are transcribed using OpenAI's Whisper model, while formalized user stories are extracted using large language models (LLMs). We trained and evaluated various LLMs, ranging from T5 and BART to DeepSeek, on real and synthetic datasets for user story generation. Evaluation metrics included BLEU, ROUGE, and F1 for the generated narrative output, and WER for transcription accuracy. Results demonstrate that the fine-tuned DeepSeek model outperformed the others in contextual accuracy, requirement completeness, and consistency.
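Of the transcription metrics this entry names, WER (word error rate) has a compact standard formulation: the word-level edit distance between reference and hypothesis, divided by the reference length. A minimal implementation of that standard definition (not the thesis's own code):

```python
def wer(reference, hypothesis):
    """Word Error Rate: word-level Levenshtein distance between the
    reference and hypothesis transcripts, over reference length."""
    r, h = reference.split(), hypothesis.split()
    # Dynamic-programming edit-distance table.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i  # cost of deleting i reference words
    for j in range(len(h) + 1):
        d[0][j] = j  # cost of inserting j hypothesis words
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(r)][len(h)] / len(r)
```

For example, a single substituted word in a five-word reference gives a WER of 0.2.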
This research automates these processes, enhancing Agile documentation and minimizing manual effort.

Publication Embargo
Cloud Oriented Micro Services Resource Optimization by Content Delivery Networks (2021)
Jayasundara, P. P. A. S.
In modern information technology, cloud-based application development is among the most intriguing topics. After several decades of rapid development and research on cloud technologies, almost all cloud service providers now offer a massive range of services with high reliability, but a few business domains have additional technical requirements unique to them. Capital markets and finance form one such domain, which must address additional technical and compliance requirements. The main technical barrier in this domain is providing business functionality to users across the globe with microsecond-level latency. Therefore, when developing and maintaining such a system, we are highly concerned with system throughput and hardware resource allocation. Cloud-based system architecture is an ideal infrastructure for this kind of application development because hardware resources can be upgraded within a couple of minutes. However, significant issues remain: message queues grow unexpectedly until resources are upgraded; there is a lack of accurate cloud services to identify duplicate API requests; user connectivity and API access are limited due to service back-off; and peak time is limited to a short period while resources are billed by the hour.
System recovery after machine termination is also a very costly mechanism. In view of these technical concerns, we conduct this research to propose a better solution for handling such technical barriers without upgrading hardware resources unnecessarily; the proposed solution is not limited to the capital market and can be used by any application service to utilize its hardware resources under high network traffic.

Publication Embargo
Cognitive Code Analyzer (2021)
Thirunayan, D. J.
Source code is the building block of any form of software, and maintaining the efficiency and readability of source code is crucial for the long-term maintainability and usability of any software product. It is the responsibility of software engineering teams to maintain consistent standards for their source code. The most common approach used by software teams to maintain readability and identify bugs is source code review: a process in which, when an engineer finishes a project component, functionality, or module, the source code changes are reviewed before release by another software engineer, typically a more experienced one. Although code review has proven to be an effective method for maintaining code consistency, one of its biggest problems is the amount of time engineers spend reviewing code. Maintaining consistent efficiency of source code is an even tougher task because there is no single metric to measure it, and even metrics like time complexity have no algorithmically straightforward method of evaluation from source code.
In this work we propose a "Hydranet"-inspired deep-learning-based model architecture which can effectively learn the underlying patterns in the structure of source code through its syntactic and semantic representations, and use the learned representations to perform two primary downstream tasks: generating source code reviews and predicting time complexity.

Publication Open Access
Complexity Analysis and Visualization Tool (SLIIT, 2024-12)
Sampath, B. M. W. G. K. R.
This paper presents an original software metrics tool for measuring complexity that goes beyond the limitations of current tools and measurements. Standard metrics, such as those developed by Chidamber and Kemerer, focus only on the technical aspects of software development, ignoring cognitive perspectives of complexity. This research introduces advanced metrics, including the Cyclomatic Complexity measure, Cognitive Functional Size, and Improved CB, that incorporate cognitive complexity into the evaluation of software quality. Moreover, the research describes a new tool incorporating traditional, object-oriented, and these advanced measures to offer a thorough evaluation methodology. The tool provides user-friendly interfaces with visualizations, addressing the lack of standardization in current practice. In developing the tool, we gathered observations from industry experts, including project managers and architects, to understand their needs and expectations when visualizing calculated metrics. This technique is geared towards improving software quality measurement through a more holistic appraisal of complexity, supporting better decisions in maintenance and development processes. The tool also aims to explain existing metrics and their limitations in relation to software complexity from the perspective of cognitive inclusion.
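Of the traditional metrics such a tool builds on, cyclomatic complexity is the simplest to illustrate: McCabe's formulation counts decision points plus one. A simplified sketch for Python source (the tool itself targets other languages and richer rules; this counts only a few common branch-node types):

```python
import ast

# Branch-introducing node types counted in this simplified version.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.BoolOp, ast.ExceptHandler)

def cyclomatic_complexity(source):
    """McCabe-style count over a Python snippet: 1 + number of
    decision points found by walking the abstract syntax tree."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))
```

A function containing one `if` and one `for` therefore scores 3; straight-line code scores 1. Cognitive metrics such as Cognitive Functional Size then weight such structures by how hard they are for a reader to follow, rather than merely counting them.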
In addition, this paper outlines the iterative strategy used in the design and construction of the tool, highlighting the use of end-user feedback to refine it for software developers and project managers. This combination of inputs helps ensure that the tool is not just better at estimating complexity but is also better suited to the practical needs of the software development industry.

Publication Open Access
Design and Implementation of an AI-Assisted Code Review Tool for Low-Code Platforms to Improve Quality and Security (Sri Lanka Institute of Information Technology, 2025-12)
Pathirana, P. P. P. S. P.
Low-code platforms like Mendix fast-track application development but, due to limited review mechanisms, face challenges in sustaining code quality and security. Existing code review approaches are not optimized for visual, model-driven workflows, increasing the possibility of logical, security, and performance issues introduced by citizen developers. This research introduces an AI-assisted code review tool that combines GPT-4 and Claude Opus 4 for workflow analysis and defect detection in low-code environments. The approach evolved from few-shot prompting to workflow-oriented fine-tuning, resulting in improved analytical precision and reliability. The tool was further enhanced to perform business gap assessments and deliver user-friendly, structured feedback via a pluggable React-based widget integrated into the Mendix environment. The evaluation demonstrated an average precision of 84.5%, an average recall of 84.8%, and an F1 score between 0.82 and 0.87, with workflow-based fine-tuning outperforming few-shot learning. A preliminary usability study with 25 developers showed a 90% satisfaction rate and approximately a 50% reduction in issue resolution time. Proxy validation using generative AI models was performed due to the limited availability of Mendix domain experts.
These findings highlight the capability of AI-assisted code review to enhance workflow quality, strengthen application security, and improve developer productivity in low-code environments.

Publication Embargo
Detecting rosacea skin disease severity level from selfie images with help of Transferee Learning Regression (2021-05)
Wickramarathna, R. M. Dilan
Detecting and preventing disease is a significant concern of the healthcare industry, given the behavioural protection needed against diseases and pandemics. Presently, due to the prevailing pandemic situation, the healthcare industry is overwhelmed and faces a large, unmanageable workload in anomaly detection for patients. When healthcare workers, researchers, and advocates do not possess in-depth knowledge of a disease and its anomalous symptoms, identifying affected individuals is a challenging task. In most situations, an average individual cannot determine the symptoms of a disease by simply glancing at facial features, and it is difficult to identify dermatological changes that cannot be recognized in a general clinical observation. This motivates the need for accurate, effective, and efficient automation for detecting anomalies and symptoms from the surface of the face, to evaluate diseases such as rosacea, acne, shingles, and Covid-19 rashes that present similarly on the face. Rosacea is a skin disease that can affect the facial skin of an individual in the long term. Its symptoms are pimples, redness, swelling, and superficial dilated blood vessels found around the face, nose, and neck. Rosacea is identified as one of the most severe yet common skin conditions across the globe.
Due to its severity, most of the time tests and assessments are conducted by trained, specialized dermatologists in a controlled environment. The disease is seen mostly around the European region, since the skin of Europeans is quite sensitive. The real cause of the disease is unknown, and symptoms can appear at unexpected times, so the need to detect the symptoms of rosacea can be frequent. There is therefore a dire need for a medium that detects the symptoms of the condition with ease and accuracy. The ultimate goal of this thesis is to develop a mobile application that can observe a selfie image at any given time and provide feedback on the skin condition, its severity, and preventative measures or remedies, similar to how a trained, professional dermatologist would.

Publication Open Access
Developing an Enhanced Soft Sensor for Wastewater Treatment Plants: A Comparative Study of Multiple Machine Learning Approaches (Sri Lanka Institute of Information Technology, 2025)
Kaluarachchi, C. D.
Wastewater treatment plants (WWTPs) require continuous monitoring of critical water quality parameters to ensure operational efficiency and regulatory compliance. Traditional physical sensors are accurate but expensive and maintenance-intensive, creating a need for cost-effective alternatives. This research investigates the development of enhanced soft sensors using advanced machine learning techniques to estimate key wastewater parameters, including Chemical Oxygen Demand (COD) and Total Phosphorus (TP) concentrations, at both influent and effluent points.
The study addresses fundamental limitations of existing soft sensor implementations, particularly their inability to capture complex non-linear relationships and their susceptibility to sensor drift and degradation caused by seasonal variations and equipment aging. Through a comprehensive evaluation of multiple machine learning approaches, including Neural Networks and Decision Tree-based methods, the aim is to develop robust, adaptable soft sensor models that maintain accuracy over extended periods with reduced recalibration requirements. The methodology involves systematic data collection from a Norwegian WWTP, comprehensive preprocessing to handle data quality issues, feature engineering, and rigorous comparative evaluation based on prediction accuracy, computational efficiency, and adaptability. Expected outcomes include deployable soft sensor models offering reliable real-time monitoring capabilities, significant cost savings, and improved operational efficiency for WWTPs. The research contributes both theoretical insights into soft sensor design and practical solutions for the wastewater treatment industry.

Publication Open Access
Development of an Integrated IoT System for Remote Monitoring and Enhanced Safety Assurance in Outdoor Environments (SLIIT, 2024-12)
Dawlagala, D. S. D. M. D.
Making outdoor environments safer has risen in priority, especially in areas with little or no communication infrastructure where people go camping and hiking. In this study, a distributed Internet of Things (IoT) system incorporating geofencing, environmental monitoring, and real-time positioning to improve outdoor navigation and safety has been developed and tested. The system includes one main hub and a number of sub-devices built around ESP32 microcontrollers, using LoRa 433 MHz for communication and GPS (Neo-7M) modules for positioning.
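The geofencing behaviour this entry describes, alerting when a sub-device leaves a defined area, reduces to a distance check on GPS fixes. The firmware would implement this in C/C++ on the ESP32; the following is a Python sketch of the check itself, assuming a circular geofence (the radius and coordinates are hypothetical, and the abstract does not state the fence shape):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes
    (standard haversine formula, mean Earth radius)."""
    r = 6371000  # metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def outside_geofence(position, centre, radius_m):
    """True when a sub-device's (lat, lon) fix lies outside the
    circular fence, i.e. when an alert should be raised."""
    return haversine_m(*position, *centre) > radius_m
```

The hub would run such a check on each position report received over LoRa and push any resulting alert to the web dashboard.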
Linear and angular displacement measurements, along with their data processing and recording, were performed using sub-devices that incorporate GPS and other modules for receiving and transmitting data over the LoRa network. Both the sub-devices and the main hub display data on 0.96-inch OLED screens, enabling users to receive up-to-date information. The system also provides geofencing: an alert is sent when a sub-device departs from a defined area. The geofencing alerts can be managed through a web dashboard that utilizes Node.js, Next.js, and WebSockets for dashboard and main hub interaction. The main hub was also able to send and receive updates from the internet and then transmit this information to the sub-devices over the LoRa network, bridging the gap between local and remote operation. Initial results show that the proposed system provides secure communication, precise location coordinates, and prompt geofencing alerts even in outdoor environments with poor network support. Subsequent steps will seek to reduce the energy consumption of the system, improve communication coverage, and conduct more tests in real-life scenarios to assess the system as a whole.

Publication Open Access
Enhancing OTP Security with Private Blockchain, Geolocation And AI: A Decentralized and Privacy-Preserving Mobile Identity Authentication Framework (Sri Lanka Institute of Information Technology, 2025-12)
Sulochana, G. G. D.
One-Time Password (OTP) authentication is an important tool in protecting online banking, financial services, and online platforms. Nevertheless, classical OTP systems, which are often based on centralized delivery via SMS or email, are becoming susceptible to advanced cyberattacks, including SIM swap fraud, phishing, session hijacking, and device spoofing.
This study presents an in-depth mobile identity authentication framework that increases OTP security by combining a private blockchain, artificial intelligence (AI), and contextual verification through geolocation. The framework uses Hyperledger Fabric to decentralize identity verification, and user privacy is ensured by a hybrid on-chain/off-chain data model backed by smart contracts. AI-based anomaly detection models, trained on behavioural patterns of SIM usage and previously known fraud cases, achieve an accuracy of 85% in detecting real-time SIM swapping attacks. Geolocation authentication, based on a geo-hashing method, further develops contextual trust by authenticating OTP requests only within defined, trusted geographic areas, with an accuracy of 90 percent. The system also supports decentralized Know-Your-Customer (KYC) verification, guaranteeing privacy-preserving mobile identity management. A full-fledged prototype was developed and tested, demonstrating latency below 500 milliseconds, high transaction throughput, and effective fraud detection. Microservice-based APIs make the system flexible and interoperable with mobile network operators (MNOs) and service providers. By combining these technologies, the framework can considerably augment the reliability and security of OTP-based authentication. This work describes the severe shortcomings of existing centralized OTP systems and offers a scalable, privacy-sensitive approach for future mobile and digital identity ecosystems.

Publication Open Access
Enhancing Sinhala Hate Speech Detection in Online Platforms (SLIIT, 2024-12)
Silva, W. M. R. D.
The rise of deep learning methodologies has revolutionized text analysis, enabling a more sophisticated and nuanced understanding of language dynamics.
With the proliferation of social media platforms, these advancements have been particularly crucial in navigating the vast amounts of data generated by online interactions. However, amidst the benefits of this digital age, the prevalence of hate speech has emerged as a pressing concern, transcending linguistic and cultural boundaries. In the context of Sinhala, a language rich in nuances and deeply intertwined with cultural complexities, the challenges in detecting and mitigating hate speech are further compounded. Language is not merely a tool for communication but also a reflection of societal norms, values, and power structures. In the Sinhala-speaking context, historical legacies, religious beliefs, and political tensions intertwine to shape discourse in multifaceted ways. Consequently, any hate speech detection mechanism must navigate these intricate layers of meaning, accounting for cultural sensitivities and contextual nuances to ensure accurate identification of harmful content. The integration of deep learning techniques and advanced semantic analysis holds promise in enhancing hate speech detection in Sinhala. By leveraging the power of neural networks to discern patterns and contexts within textual data, such mechanisms can offer a more nuanced understanding of language dynamics. Moreover, the evaluation of these tools on real-world social media data not only validates their effectiveness but also provides insights into the evolving nature of online discourse. 
Ultimately, addressing hate speech in Sinhala and similar low-resource languages requires a multifaceted approach that combines technological innovation with cultural sensitivity and community engagement to foster safer and more inclusive online spaces.

Publication Open Access
Exploring how Natural Language Processing Techniques can be used for Personalized Learning (SLIIT, 2024-12)
Samarasinghe, K.
Artificial intelligence (AI) has had a profound impact on many industries, and education is no exception. Large Language Models (LLMs), including GPT-3 and GPT-4, stand out among AI-driven technologies as ground-breaking instruments with the power to revolutionize conventional learning settings. These models allow for a realistic, conversational interface between students and instructional content because they were trained on massive amounts of text data. They produce logical, contextually appropriate answers to a range of queries by examining linguistic patterns, giving students the opportunity to participate in individualized learning experiences. Though LLMs offer opportunities for dynamic learning, their effectiveness is constrained by the static nature of their knowledge, which depends on the training data. This constraint makes tasks requiring current or specialized knowledge more difficult. To close this gap, this thesis investigates the use of RAG and LLM models for personalized learning in educational environments that prioritize tailored instruction and flexible learning pathways. The limitations of conventional educational systems can be overcome by incorporating these AI-driven technologies, giving students access to a more flexible, personalized, and interactive learning environment. Essentially, personalized learning means adjusting the pace, approach, and substance of instruction to meet the individual requirements and preferences of each student.
When LLMs and RAG are used together, the system can comprehend the demands of the learner and adjust in real time to provide feedback, fresh knowledge, and questions for critical thought. This makes for an interactive learning process. Personalized learning is based on the necessity of flexibility. Because different learners have different comprehension levels, learning styles, and rates of advancement, it is critical to have a flexible system that can change in real time to meet the demands of each unique user. Because of their extensive language comprehension skills, LLMs are excellent at offering this flexibility. LLMs can help learners by having adaptive discussions in which they provide factual answers to basic queries, in-depth explanations of difficult subjects, or even the introduction of new ideas. When studying Sri Lanka's history in the 1980s, for example, a student may begin by asking general questions about significant occurrences and then go further into topics like the country's political climate or economic shifts at that time. A highly participatory and interesting learning environment is made possible by the LLM's capacity to understand and answer such questions in a conversational manner. But personalized learning uses LLMs for more than just conversation. The capacity of LLMs to give learners rapid feedback is one of their main advantages. This is particularly crucial in educational environments, as pupils frequently need their misconceptions cleared up or corrected. To make sure the subject is understood, LLMs can spot areas where a learner might be having difficulty and provide thorough or alternative explanations. Additionally, by emulating the Socratic technique of guided questioning, LLMs can motivate students to consider other viewpoints and reflect critically on their own responses.
To encourage deeper cognitive engagement with the material, the LLM could, for instance, ask the student to consider the causes and effects of particular events rather than just providing a historical answer. Because LLMs are trained on static data, they are intrinsically constrained even though they offer a strong framework for individualized learning. When it comes to current events or specialized issues, their expertise is limited to what they learned during training, which may result in inaccurate or obsolete information. This restriction may make it impossible for the LLM to give precise information about certain historical events, people, or policies. Retrieval-Augmented Generation (RAG) models are useful in this situation. RAG models provide a link between real-time information retrieval and LLMs. Through the integration of a retrieval mechanism that combs through databases or outside sources, RAG models enhance the generative process by adding highly relevant and current data. Because of this dual structure, the learning platform can deliver timely, factually accurate knowledge in addition to responses that are coherent and appropriate for the given situation. For instance, if a student asks about a particular political incident in Sri Lanka's history during the 1980s, the LLM may provide only a generic response based on its prior knowledge. In contrast, the RAG component enriches the learning process by retrieving recent documents, articles, or academic papers to offer a more accurate and fact-based response. Enhancing the range and depth of information that an educational chatbot or system can offer requires the integration of RAG models. RAG models make sure the system can serve both broad learners and individuals looking for more specialized or up-to-date information by drawing from a dynamic knowledge base.
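The dual retrieve-then-generate structure of RAG can be illustrated with a minimal sketch. Everything here is an assumption for exposition, not the thesis's implementation: the corpus is invented, the term-overlap score stands in for a real retriever, and `rag_answer` only assembles the augmented prompt that a real system would pass to an LLM.

```python
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

def score(query, doc):
    # Simple term-overlap relevance score standing in for a real retriever
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    return sum((q & d).values()) / math.sqrt(len(tokenize(doc)) or 1)

def retrieve(query, corpus, k=2):
    # Retrieval step: rank documents by relevance and keep the top k
    ranked = sorted(corpus, key=lambda doc: score(query, doc), reverse=True)
    return ranked[:k]

def rag_answer(query, corpus):
    # Generation step (stubbed): the retrieved context is prepended to the
    # prompt; a real RAG pipeline would send this prompt to a language model
    context = retrieve(query, corpus)
    return "Context:\n" + "\n".join(context) + f"\nQuestion: {query}\nAnswer:"

corpus = [
    "The 1983 riots marked a turning point in Sri Lanka's political history.",
    "Open economic policies were introduced in Sri Lanka in 1977.",
    "Photosynthesis converts light energy into chemical energy.",
]
print(rag_answer("What happened in Sri Lanka's politics in the 1980s?", corpus))
```

Even this toy retriever surfaces the two Sri Lanka documents and discards the irrelevant one, which is the behaviour that keeps the generated answer grounded in current, topical sources.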
This is especially helpful in subjects like history, where access to original sources, research papers, and other academic materials may greatly improve the quality of instruction. Furthermore, by directing students toward more resources, the retrieval element of RAG models can reinforce learning, allowing students to delve further into a subject and continue learning after their first engagement with the system. The Socratic method is one of the most effective ways to apply LLMs and RAG models in individualized learning. The Socratic method of inquiry is a well-known instructional approach that entails posing open-ended questions that promote introspection and more in-depth thought. By encouraging students to consider their responses carefully, the approach can support active learning, in which students are more involved and take charge of their education. For example, instead of giving a direct response to a student's query, the LLM may ask follow-up questions that encourage the student to delve deeper into the subject matter. This method promotes the growth of critical-thinking abilities in addition to reinforcing the students' comprehension of the material. Implementing LLMs and RAG models for individualized learning goes beyond technological considerations. The user experience must be prioritized to guarantee the efficacy of these solutions. With these technologies, an interactive chatbot or virtual tutor may be created that allows students to easily ask questions, get helpful answers, and participate in insightful conversations. Furthermore, learning routes should be dynamically adjusted by the system based on the student's performance and progress, allowing it to modify the level of difficulty, recommend new subjects, or provide remediation as needed. An effective learning experience customized for each individual depends on this adaptability.
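The dynamic adjustment of learning routes described above can be expressed as a simple mastery-based rule. The function name, score range, and thresholds below are illustrative assumptions, not values from the thesis; a sketch of the idea rather than a definitive design:

```python
def next_action(recent_scores, difficulty, max_difficulty=5):
    """Pick the next step from a learner's recent quiz scores (0.0-1.0).

    Returns an (action, new_difficulty) pair; thresholds are illustrative.
    """
    avg = sum(recent_scores) / len(recent_scores)
    if avg >= 0.85 and difficulty < max_difficulty:
        return ("advance", difficulty + 1)      # mastery: raise difficulty
    if avg < 0.5 and difficulty > 1:
        return ("remediate", difficulty - 1)    # struggling: step back
    return ("practice", difficulty)             # otherwise: more practice

print(next_action([0.9, 0.95, 0.88], difficulty=2))  # → ('advance', 3)
print(next_action([0.3, 0.45], difficulty=3))        # → ('remediate', 2)
```

A production system would replace the averaged quiz scores with a richer learner model, but the branch structure, advance, remediate, or keep practicing, is the adaptability the passage describes.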
Ultimately, even though the application of RAG models and LLMs has great potential for individualized learning, certain issues need to be resolved. For example, it is crucial to make sure the retrieved data is precise, dependable, and suited to the student's needs. This necessitates the meticulous selection of outside information sources and the creation of algorithms capable of determining the reliability of retrieved content. Furthermore, both RAG and LLM models have large computational requirements, so preserving high-quality outputs while maximizing efficiency is essential. In conclusion, the integration of LLMs and RAG models into personalized learning platforms represents a significant step forward in the evolution of education. By combining the language understanding and generation capabilities of LLMs with the real-time retrieval capabilities of RAG models, educators can create adaptive, engaging, and highly effective learning environments. The addition of the Socratic method further enhances these platforms by encouraging critical thinking and active engagement. As AI continues to evolve, the potential for creating even more personalized, responsive, and impactful learning experiences will grow, ultimately transforming how we learn and interact with educational content.

Publication Embargo Extended User Experience For Data Entry Process In The ERP Systems (2021-05) Perera, G.D.M.

Publication Embargo Indoor Crowd Interaction Surveillance Using Image Processing in Post-COVID-19 Situation (2021) Piumal, M. K. I.

Human interaction is limited in today's society because of COVID-19 health restrictions, which are in place to prevent the virus from spreading. According to the rules, individuals must be at least one meter apart, and the number of individuals in an indoor environment is limited to a certain number.
However, most people do not follow the instructions, increasing the risk of the disease's spread; the severity is substantially higher in indoor environments. If a single infected person is detected in an area, health officials must trace the person's close contacts. To address this problem, the research project provides a solution for contact tracing. The research implements a convolutional neural network to obtain risk footage from CCTV footage and determine health-guideline violations. Using the violation information, digital contact tracing is performed through a face-search framework.

Publication Open Access Intelligent Code Comprehensibility Index: A Cognitive-Based Metric for Enhancing Code Review and Documentation (Sri Lanka Institute of Information Technology, 2025-12) Godamune G.A.P.J.

As software systems become increasingly complex, developers face ever more challenging tasks in understanding, maintaining, and evolving code. Traditional software metrics like Lines of Code, Cyclomatic Complexity, and the Halstead metrics provide structural insights but often fail to capture the cognitive aspects of code comprehension. This paper introduces the Intelligent Code Comprehensibility Index, a new multi-dimensional metric framework based on Cognitive Load Theory. The Intelligent Code Comprehensibility Index assesses code comprehensibility along three key dimensions: Structural Complexity, Documentation Quality, and Naming Quality. Each dimension targets a specific cognitive load (Intrinsic, Extraneous, or Germane), incorporating syntactic metrics for semantic alignment and drawing on empirical research from software engineering and neuroscience.
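The three dimensions could, for illustration, be combined into a single weighted score. The weights, normalization, and the choice to invert structural complexity below are assumptions made for this sketch; the paper's actual aggregation may differ.

```python
def comprehensibility_index(structural, documentation, naming,
                            weights=(0.4, 0.3, 0.3)):
    """Combine three dimension scores (each normalized to 0.0-1.0) into
    a single index; weights are illustrative, not taken from the paper."""
    w_s, w_d, w_n = weights
    # Structural complexity is a cost, so it is inverted: lower complexity
    # contributes more to overall comprehensibility
    return w_s * (1.0 - structural) + w_d * documentation + w_n * naming

# Example: moderately complex code with good documentation and naming
score = comprehensibility_index(structural=0.5, documentation=0.8, naming=0.9)
print(round(score, 2))  # → 0.71
```

Keeping the dimensions separate until a final weighted combination means reviewers can still see which cognitive load (Intrinsic, Extraneous, or Germane) is driving a low score.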
The proposed framework aims to offer a more comprehensive and cognitively aligned method for evaluating and improving source code understandability, thereby boosting developer productivity and code quality.

Publication Embargo IoT For Sustainable Farming Without Soil: Reinforcement Learning For Device Interaction (2021) Liyanage, D. L. K. S.
