MSc in Enterprise Application Development

Permanent URI for this collection: https://rda.sliit.lk/handle/123456789/2480

Students in the MSc in Enterprise Application Development programme are required to submit a thesis as a compulsory component of their degree requirements. This collection features merit-based theses submitted by postgraduate students specialising in Enterprise Application Development. Abstracts are available for public viewing, while the full texts can be accessed on-site within the library.

Search Results

Now showing 1 - 10 of 31
  • Publication (Open Access)
    Advanced Collaborative BI Chatbot for Enhanced Enterprise Decision-Making
    (Sri Lanka Institute of Information Technology, 2025-12) Shanika, W. D.
    This research outlines the creation of an innovative collaborative Business Intelligence (BI) chatbot aimed at improving enterprise decision-making by utilizing context awareness, multimodal data integration, predictive analytics, and real-time collaboration. The system merges structured data from SQL databases and CSV files with unstructured resources like text files, PDFs, and images. A multimodal data integration component utilizes FAISS-based vector embeddings to enable semantic retrieval from unstructured materials while ensuring seamless access to structured data repositories. The predictive analytics feature goes beyond simple regression by integrating statistical model selection to determine the most appropriate forecasting technique. It also produces interactive dashboards using Dash and generates static PDF reports to cater to various decision-making scenarios. The context-awareness module incorporates tokenization, categorization, and embedding-based retrieval, as well as the capability to create user-specific reports, providing responses that are tailored to both inquiries and analytical requirements. Real-time collaboration among teams is facilitated through connections with Slack and Telegram, in conjunction with a custom chatbot interface, allowing several users to query, share, and annotate insights together. To enable enterprise deployment, the system encompasses API generation with secure management of API keys, credit allocation, and token-based pricing, ensuring controlled access and scalability. Together, these advancements transform the chatbot into a flexible decision-support platform that consolidates various data sources, generates predictive insights, produces contextual reports, and promotes collaborative analytics in real time.
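The FAISS-style semantic retrieval step described above can be sketched, under simplifying assumptions, as brute-force cosine ranking over toy embeddings; the document names, vectors, and the `semantic_search` helper below are illustrative, not from the thesis.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def semantic_search(query_vec, doc_vecs, k=2):
    # Rank document embeddings by similarity to the query embedding,
    # as a vector index would (here: brute force over a tiny corpus).
    scored = sorted(doc_vecs.items(),
                    key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Toy 3-dimensional "embeddings" for three unstructured documents.
docs = {
    "sales_report.pdf":  [0.9, 0.1, 0.0],
    "hr_policy.txt":     [0.0, 0.8, 0.2],
    "forecast_notes.md": [0.7, 0.2, 0.1],
}
print(semantic_search([1.0, 0.0, 0.0], docs, k=2))
```

A production system would replace the brute-force loop with an approximate index (e.g. FAISS) and real sentence embeddings, but the ranking semantics are the same.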
  • Publication (Open Access)
    Story Point Estimation with Explainable AI
    (Sri Lanka Institute of Information Technology, 2025-09) Dassanayake, D M S S
    Accurate story point estimation remains a challenge in Agile software development because it depends on human intuition, experience, and subjectivity. Traditional estimation methods such as planning poker and expert judgment often lead to inconsistencies and biases, which can affect project resource allocation and predictability. This study addresses those limitations by integrating transformer-based natural language processing (NLP) models (BERT variants) with Explainable AI (XAI) techniques to interpret story point estimates. Four transformer-based models, BERT-base-uncased, DistilBERT, RoBERTa-base, and DistilRoBERTa-base, were trained on the TAWOS dataset with baseline and customized preprocessing pipelines. Advanced preprocessing techniques such as adaptive Fibonacci mapping, semantic T5-based data augmentation, and context injection improved model accuracy. The DistilBERT model achieved the highest performance, with an accuracy of 0.520 at an early-stopping patience of 10, an accuracy of 0.507 at a patience of 4, and the lowest mean absolute error (MAE = 0.72). To improve the transparency and interpretability of the trained models, XAI methods such as SHAP and LIME were applied. A survey of 8 agile practitioners showed strong alignment between XAI explanations and human reasoning, with SHAP explanations achieving 72% and LIME explanations 65% overlap with the keywords identified by the practitioners. The findings show that transformer models with XAI achieved accuracy comparable to human estimation (80% agreement) while producing interpretable predictions. This study contributes a transparent, data-driven framework for agile story point estimation, bridging the gap between human expertise and artificial intelligence decisions.
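The "adaptive Fibonacci mapping" preprocessing step is not specified in detail in the abstract; one plausible minimal reading, snapping a continuous model output onto the Fibonacci story point scale, can be sketched as follows (the tie-breaking rule and point range are assumptions):

```python
FIB_POINTS = [1, 2, 3, 5, 8, 13, 21]

def to_fibonacci(raw):
    # Snap a raw model estimate to the nearest valid Fibonacci story point;
    # ties round down to the smaller point (a conservative assumption).
    return min(FIB_POINTS, key=lambda p: (abs(p - raw), p))

print(to_fibonacci(4.2), to_fibonacci(6.5))  # -> 5 5
```

Mapping regression outputs back to the discrete scale keeps predictions comparable with human planning-poker estimates.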
  • Publication (Open Access)
    Smart-Split: AI-Driven Context-Aware System Decomposition for Small and Medium-Sized Businesses
    (Sri Lanka Institute of Information Technology, 2025-11) Subasinghe, L. R. S.
    The transition from monolithic to microservices architecture has become essential for software modernization, yet small and medium-sized enterprises (SMEs) face significant barriers, including prohibitively expensive commercial tools, resource-intensive processes, and context-unaware decomposition approaches. Existing solutions like IBM Mono2Micro and AWS Microservice Extractor rely primarily on static analysis, overlooking critical runtime behavior patterns and domain knowledge, resulting in suboptimal service boundaries misaligned with business capabilities. This research proposes SMART-Split, a resource-efficient multi-agent Retrieval-Augmented Generation (RAG) framework for automated monolith decomposition, specifically designed for Go applications under 50,000 lines of code. The framework employs specialized agents (Static Analyzer, Runtime Profiler, Domain Knowledge Agent, and Decomposer Agent) coordinated through a supervisor pattern to integrate multiple analysis perspectives. By combining Abstract Syntax Tree analysis, runtime execution traces, and domain knowledge extraction through RAG, SMART-Split addresses critical gaps in existing decomposition tools. The framework introduces three key innovations: (1) a multi-agent collaborative architecture that synthesizes static, dynamic, and domain context; (2) a lightweight RAG implementation optimized for resource-constrained environments; and (3) a hybrid decomposition algorithm that produces business-aligned service boundaries. Validation across three open-source Go monoliths demonstrates improved decomposition quality through metrics including Modularity Quality (MQ > 0.7), Service Independence Score (SIS > 0.8), and Business Alignment Index (BAI > 0.9). Results indicate SMART-Split achieves comparable decomposition quality to commercial tools while requiring significantly fewer computational resources, making microservices modernization accessible and affordable for SMEs.
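The Modularity Quality metric used in the validation can be illustrated with a TurboMQ-style cluster factor, 2·intra / (2·intra + inter) summed over clusters; this is a common formulation in the decomposition literature, assumed here for illustration rather than taken from the thesis, and the call graph and service names are a toy example.

```python
def modularity_quality(edges, assignment):
    # TurboMQ-style Modularity Quality: each cluster's factor is
    # 2*intra / (2*intra + inter); MQ is the sum over clusters.
    intra, inter = {}, {}
    for u, v in edges:
        cu, cv = assignment[u], assignment[v]
        if cu == cv:
            intra[cu] = intra.get(cu, 0) + 1
        else:
            inter[cu] = inter.get(cu, 0) + 1
            inter[cv] = inter.get(cv, 0) + 1
    mq = 0.0
    for c in set(assignment.values()):
        i, e = intra.get(c, 0), inter.get(c, 0)
        if i or e:
            mq += 2 * i / (2 * i + e)
    return mq

# Call graph of four modules, split into two candidate services.
edges = [("orders", "billing"), ("orders", "billing"),
         ("users", "auth"), ("billing", "auth")]
split = {"orders": "svc1", "billing": "svc1", "users": "svc2", "auth": "svc2"}
print(round(modularity_quality(edges, split), 3))  # -> 1.467
```

Higher values indicate more calls stay inside a candidate service boundary than cross it.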
  • Publication (Open Access)
    Intelligent Code Comprehensibility Index: A Cognitive-Based Metric for Enhancing Code Review and Documentation
    (Sri Lanka Institute of Information Technology, 2025-12) Godamune G.A.P.J.
    As software systems become increasingly complex, developers face more challenging tasks in understanding, maintaining, and evolving code. Traditional software metrics like Lines of Code, Cyclomatic Complexity, and Halstead metrics provide structural insights but often fail to capture the cognitive aspects of code comprehension. This paper introduces the Intelligent Code Comprehensibility Index, a new multi-dimensional metric framework based on Cognitive Load Theory. The Intelligent Code Comprehensibility Index assesses code comprehensibility by examining three key dimensions: Structural Complexity, Documentation Quality, and Naming Quality. Each dimension targets a specific type of cognitive load (Intrinsic, Extraneous, or Germane), pairing syntactic metrics with semantic alignment and drawing on empirical research from software engineering and neuroscience. The proposed framework aims to offer a more comprehensive and cognitively aligned method for evaluating and improving source code understandability, thereby boosting developer productivity and code quality.
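As a toy illustration of how such a multi-dimensional index might aggregate its three dimensions, here is a weighted-mean sketch; the weights, the [0, 1] scoring convention, and the function name are assumptions for illustration, not the thesis's actual formulation.

```python
def comprehensibility_index(structural, documentation, naming,
                            weights=(0.4, 0.3, 0.3)):
    # Hypothetical aggregation: each dimension scored in [0, 1]
    # (higher = easier to comprehend), combined as a weighted mean.
    scores = (structural, documentation, naming)
    assert all(0.0 <= s <= 1.0 for s in scores)
    return sum(w * s for w, s in zip(weights, scores))

print(round(comprehensibility_index(0.5, 0.8, 0.9), 2))  # -> 0.71
```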
  • Publication (Open Access)
    Enhancing OTP Security with Private Blockchain, Geolocation And AI: A Decentralized and Privacy-Preserving Mobile Identity Authentication Framework
    (Sri Lanka Institute of Information Technology, 2025-12) SULOCHANA, G. G. D.
    One-Time Password (OTP) authentication is an important tool for protecting online banking, financial services, and online platforms. Nevertheless, classical OTP systems, which are often based on centralized SMS or email delivery, are increasingly susceptible to advanced cyberattacks, including SIM swap fraud, phishing, session hijacking, and device spoofing. This study presents an in-depth mobile identity authentication system that increases OTP security by combining a private blockchain, artificial intelligence (AI), and contextual verification through geolocation. The framework uses Hyperledger Fabric to decentralize identity verification, and user privacy is ensured by a hybrid on-chain/off-chain data model backed by smart contracts. AI-based anomaly detection models, trained on behavioral patterns of SIM usage and previously known fraud cases, detect SIM-swapping attacks in real time with 85% accuracy. Geolocation authentication, based on a geo-hashing method, further strengthens contextual trust by approving OTP requests only within defined, trusted geographic areas, with an accuracy of 90 percent. The system also incorporates decentralized Know-Your-Customer (KYC) verification, guaranteeing privacy-preserving mobile identity management. A full-fledged prototype was developed and tested, demonstrating latency below 500 milliseconds, high transaction throughput, and reliable fraud detection. Microservices-based APIs make the system flexible and interoperable with mobile network operators (MNOs) and service providers. By combining these technologies, the framework considerably augments the reliability and security of OTP-based authentication. This work describes the severe shortcomings of existing centralized OTP systems and offers a scalable, privacy-sensitive foundation for future mobile and digital identity ecosystems.
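The geo-hashing check can be illustrated with the standard public geohash encoding: an OTP request is approved only when the request location's geohash shares a prefix with a trusted zone. The encoder below follows the standard algorithm; the trusted prefixes and the `otp_allowed` helper are illustrative, not from the thesis.

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"  # standard geohash alphabet

def geohash(lat, lon, precision=7):
    # Standard geohash: interleave longitude/latitude bisection bits,
    # emitting one base-32 character per 5 bits.
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    code, bits, ch, even = [], 0, 0, True
    while len(code) < precision:
        if even:  # longitude bit
            mid = (lon_lo + lon_hi) / 2
            ch = ch * 2 + (lon >= mid)
            lon_lo, lon_hi = (mid, lon_hi) if lon >= mid else (lon_lo, mid)
        else:     # latitude bit
            mid = (lat_lo + lat_hi) / 2
            ch = ch * 2 + (lat >= mid)
            lat_lo, lat_hi = (mid, lat_hi) if lat >= mid else (lat_lo, mid)
        even, bits = not even, bits + 1
        if bits == 5:
            code.append(BASE32[ch])
            bits, ch = 0, 0
    return "".join(code)

def otp_allowed(lat, lon, trusted_prefixes, precision=6):
    # Contextual trust check: the request location must fall inside
    # one of the trusted geohash cells (prefix match).
    h = geohash(lat, lon, precision)
    return any(h.startswith(p) for p in trusted_prefixes)

print(geohash(57.64911, 10.40744, 11))  # classic reference value: u4pruydqqvj
```

A longer trusted prefix means a smaller (stricter) geographic cell.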
  • Publication (Open Access)
    Developing an Enhanced Soft Sensor for Wastewater Treatment Plants: A Comparative Study of Multiple Machine Learning Approaches
    (Sri Lanka Institute of Information Technology, 2025) Kaluarachchi, C.D
    Wastewater treatment plants (WWTPs) require continuous monitoring of critical water quality parameters to ensure operational efficiency and regulatory compliance. Traditional physical sensors are accurate but expensive and maintenance-intensive, creating a need for cost-effective alternatives. This research investigates the development of enhanced soft sensors using advanced machine learning techniques to estimate key wastewater parameters, including Chemical Oxygen Demand (COD) and Total Phosphorus (TP) concentrations, at both influent and effluent points. The study addresses fundamental limitations of existing soft sensor implementations, particularly their inability to capture complex non-linear relationships and their susceptibility to sensor drift and degradation caused by seasonal variations and equipment aging. Multiple machine learning approaches, including Neural Networks and Decision Tree-based methods, are evaluated comprehensively with the aim of developing robust, adaptable soft sensor models that maintain accuracy over extended periods with reduced recalibration requirements. The methodology involves systematic data collection from a Norwegian WWTP, comprehensive preprocessing to handle data quality issues, feature engineering, and rigorous comparative evaluation based on prediction accuracy, computational efficiency, and adaptability. Expected outcomes include deployable soft sensor models offering reliable real-time monitoring capabilities, significant cost savings, and improved operational efficiency for WWTPs. The research contributes both theoretical insights into soft sensor design and practical solutions for the wastewater treatment industry.
  • Publication (Open Access)
    Design and Implementation of an AI-Assisted Code Review Tool for Low-Code Platforms to Improve Quality and Security
    (Sri Lanka Institute of Information Technology, 2025-12) PATHIRANA P.P.P.S.P
    Low-code platforms like Mendix fast-track application development but, due to limited review mechanisms, face challenges in sustaining code quality and security. Existing code review approaches are not optimized for visual cues and model-driven workflows, increasing the possibility of logical, security, and performance issues introduced by citizen developers. This research introduces an AI-assisted code review tool that combines GPT-4 and Claude Opus 4 for workflow analysis and defect detection in low-code environments. The approach evolved from few-shot prompting to workflow-oriented fine-tuning, resulting in improved analytical precision and reliability. The tool was further enhanced to perform business gap assessments and deliver user-friendly, structured feedback via a pluggable React-based widget integrated into the Mendix environment. Evaluation of the tool demonstrated an average precision of 84.5%, an average recall of 84.8%, and an F1 score between 0.82 and 0.87, with workflow-based fine-tuning outperforming few-shot learning. A preliminary usability study with 25 developers demonstrated a 90% satisfaction rate and an approximately 50% reduction in issue resolution time. Proxy validation using generative AI models was performed due to the limited availability of Mendix domain experts. These findings highlight the capability of AI-assisted code review to enhance workflow quality, strengthen application security, and improve developer productivity in low-code environments.
  • Publication (Open Access)
    Automating Voice-based Conversations into Formal User Stories using NLP and Speech Recognition
    (Sri Lanka Institute of Information Technology, 2025-12) Jayasingha H M C P
    In agile software development, systems and software functions are often discussed and informally transcribed during Agile meetings, which leads to gaps and errors in documentation. Equally, traditional approaches that use voice recordings depend heavily on automated speech recognition systems to document conversations, leaving them riddled with errors and inconsistencies. This paper offers an automated pipeline for the transcription and analysis of Agile voice conversations in which requirements are gathered. The voice conversations are transcribed using OpenAI’s Whisper model, while formalized user stories are extracted through large language models (LLMs). Various LLMs, ranging from T5 and BART to DeepSeek, were trained and evaluated on real and synthetic datasets for user story generation. Evaluation metrics included BLEU, ROUGE, and F1 for the generated user stories, and WER for transcription. Results demonstrate that the fine-tuned DeepSeek model outperformed the others in contextual accuracy, requirement completeness, and consistency. This research automates the documentation process, enhancing Agile documentation and minimizing manual effort.
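The WER metric used for the transcription step can be sketched as word-level Levenshtein distance divided by reference length; the example sentences are invented for illustration, not from the datasets.

```python
def wer(reference, hypothesis):
    # Word error rate: (substitutions + insertions + deletions) / reference
    # length, computed with word-level Levenshtein distance.
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution/match
    return d[-1][-1] / len(ref)

# Two substitutions over six reference words -> 2/6.
print(round(wer("add a login page for users",
                "add the login page for user"), 3))  # -> 0.333
```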
  • Publication (Open Access)
    An Enterprise-Grade EdTech Solution for Real-Time Handwriting Assistance: A Usability and Accessibility Approach
    (Sri Lanka Institute of Information Technology, 2025-12) Herath, H.M.P.P.B
    The research presents Fun Letter Tracing, an educational technology system that provides real-time handwriting assistance for primary school students. The system uses React for its browser-based front end and Flask for its lightweight scoring service to create a practice loop that analyzes user input after tracing, followed by tip generation and retry functionality. Handwriting quality is evaluated through a 50-dimensional feature vector covering smoothness, consistency, spatial spread, completeness, and temporal cues, using Random Forest and small FFNN models alongside letter-specific geometric checks based on keypoints and pixel corridors. A limited API set (/analyze, /letter-info/, /progress-summary, and /health endpoints) eases school integration while maintaining privacy protection by default. The design follows WCAG 2.2 standards through large touch areas, visible focus indicators, non-color-based alerts, TTS functionality, and simple language for children. The evaluation strategy combines three assessment methods: expert walkthroughs, controlled testing, and classroom-based trials involving 30-50 students and 10-15 teachers. It uses quantitative metrics (SUS scores, completion rates, time-to-first-success, and latency at or below 150 ms), expert rubric assessments, and qualitative feedback from teachers and students. The thesis presents detailed information about system architecture, security measures, accessibility standards, and testing procedures for school implementation.
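One of the feature families mentioned, smoothness, could be computed along these lines; this mean-turning-angle definition and the sample strokes are assumptions for illustration, not the thesis's exact feature.

```python
import math

def smoothness(points):
    # Candidate smoothness feature: mean absolute turning angle (radians)
    # between consecutive stroke segments; straighter tracing scores lower.
    angles = []
    for (x0, y0), (x1, y1), (x2, y2) in zip(points, points[1:], points[2:]):
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        turn = abs(a2 - a1)
        angles.append(min(turn, 2 * math.pi - turn))  # wrap to [0, pi]
    return sum(angles) / len(angles) if angles else 0.0

straight = [(0, 0), (1, 0), (2, 0), (3, 0)]   # perfectly smooth trace
wobbly = [(0, 0), (1, 1), (2, 0), (3, 1)]     # zigzag trace
print(smoothness(straight), round(smoothness(wobbly), 3))
```

In a full pipeline this value would be one entry in the 50-dimensional vector fed to the classifier.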
  • Publication (Open Access)
    AI-Driven Adaptive UI Generation: Personalizing E-Learning Interfaces Based on Cognitive Abilities of Undergraduates
    (Sri Lanka Institute of Information Technology, 2025-12) Weerakoon S. D.
    E-learning platforms often adopt uniform interface designs and neglect learners’ cognitive differences, leading to cognitive overload and disengagement. While content personalization is common, dynamic interface-level adaptation remains underexplored. To address this gap, this study introduces an AI-driven adaptive user interface framework that personalizes the interface dynamically based on individual cognitive attributes, namely attention span, memory capacity, and cognitive load, through layout modification, navigation structure, and information density. Three validated methods are used for cognitive profiling: attention via WebGazer.js, cognitive load through the N-Back test, and memory capacity via the Digit Span Test. A within-subjects experiment was conducted with 30 undergraduates in Sri Lanka. All participants interacted with both static and AI-driven adaptive interfaces, followed by a post-interaction evaluation based on validated instruments combining the NASA-TLX, SUS, and UEQ scales. Results indicated a 96.7% adaptation success rate, along with positive post-interaction evaluations (M > 4.2/5) across cognitive load, navigation efficiency, personalization, usability, and engagement. Correlation patterns indicated that cognitive profiles influenced perceived outcomes. The impact of the AI-driven adaptive user interface is evaluated through quantitative and statistical analysis. The proposed system is designed as a web-based platform, using AI-driven personalization to enhance user engagement and learning effectiveness. Further, the research findings contribute to the fields of Human-Computer Interaction and education by validating AI-driven adaptive UI generation.
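The N-Back test used for cognitive-load profiling can be scored as follows: a trial is a target when the stimulus matches the one presented n steps earlier, and accuracy is the fraction of trials answered correctly. The stimuli and responses below are illustrative.

```python
def n_back_score(stimuli, responses, n=2):
    # Score an N-Back run: a target occurs when the stimulus matches the
    # one presented n steps earlier; accuracy = correct / trials scored.
    correct, trials = 0, 0
    for i in range(n, len(stimuli)):
        is_target = stimuli[i] == stimuli[i - n]
        trials += 1
        if responses[i] == is_target:
            correct += 1
    return correct / trials if trials else 0.0

stimuli = ["A", "B", "A", "C", "A", "C"]
# For n=2, targets occur at indices 2, 4, and 5.
responses = [False, False, True, False, True, False]
print(n_back_score(stimuli, responses))  # -> 0.75 (missed the last target)
```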