MSc in Enterprise Application Development

Permanent URI for this collection: https://rda.sliit.lk/handle/123456789/2480

Students in the MSc in Enterprise Application Development programme must submit a thesis as a compulsory component of the degree. This collection features merit-based theses submitted by postgraduate students specialising in Enterprise Application Development. Abstracts are available for public viewing, while full texts can be accessed on-site within the library.

  • Publication (Open Access)
    Story Point Estimation with Explainable AI
    (Sri Lanka Institute of Information Technology, 2025-09) Dassanayake, D M S S
    Accurate story point estimation remains a challenge in Agile software development because it depends on human intuition, experience, and subjectivity. Traditional methods such as planning poker and expert judgment often lead to inconsistencies and biases, which can affect project resource allocation and predictability. This study addresses those limitations by integrating transformer-based natural language processing (NLP) models (BERT variants) with Explainable AI (XAI) techniques to interpret story point estimates. Four transformer-based models, BERT-base-uncased, DistilBERT, RoBERTa-base, and DistilRoBERTa-base, were trained on the TAWOS dataset with baseline and customised preprocessing pipelines. Advanced preprocessing techniques, such as adaptive Fibonacci mapping, semantic T5-based data augmentation, and context injection, improved model accuracy. The DistilBERT model achieved the highest performance, with accuracies of 0.520 (early stopping patience of 10) and 0.507 (early stopping patience of 4), and the lowest mean absolute error (MAE = 0.72). To improve the transparency and interpretability of the trained models, XAI methods such as SHAP and LIME were applied. A survey of eight Agile practitioners showed strong alignment between XAI and human explanations, with SHAP achieving 72% and LIME 65% overlap with the keywords practitioners identified. The findings show that transformer models with XAI achieve accuracy comparable to human estimation (80% agreement) while producing interpretable predictions. This study contributes a transparent, data-driven framework for Agile story point estimation, bridging the gap between human expertise and artificial intelligence decisions.
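The abstract mentions "adaptive Fibonacci mapping" as a preprocessing step. As a minimal illustration only (not the thesis implementation, whose adaptive scheme is not detailed here), the core idea of snapping a model's raw numeric estimate onto the Fibonacci scale used in Agile planning can be sketched as:

```python
# Illustrative sketch, assuming a plain nearest-value mapping onto the
# Fibonacci story-point scale; the thesis's "adaptive" variant is not
# specified in the abstract and may differ.
FIBONACCI_POINTS = [1, 2, 3, 5, 8, 13, 21]

def to_fibonacci(raw_estimate: float) -> int:
    """Return the Fibonacci story point closest to a model's raw output."""
    return min(FIBONACCI_POINTS, key=lambda p: abs(p - raw_estimate))

print(to_fibonacci(4.2))  # 5 (nearest value on the scale)
print(to_fibonacci(9.9))  # 8
```

Mapping continuous outputs onto the discrete scale also makes accuracy and MAE directly comparable with practitioner estimates, which use the same scale.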
  • Publication (Open Access)
    A Multi-Modal Deep Learning and Explainable AI Framework for Transparent Job Matching and Career Development
    (Sri Lanka Institute of Information Technology, 2025-12) Warnasooriya, D. M. D. W. R
    In the digital age, career and professional growth are increasingly shaped by advanced systems for finding employment, reviewing applicants, and advancing skills. Explainable Artificial Intelligence (XAI) is essential to enable contextual job matching and reduce discrimination in AI-based hiring processes. However, most existing systems remain opaque and restrictive, perpetuating prejudice and weakening credibility. This study presents a new career development and recruitment platform that uses XAI to surpass traditional methods. The proposed system uses a hybrid two-stage architecture that combines deep learning with Graph Neural Networks to encode candidate-job relevance as well as the structural dynamics of career progression, skill dependencies, and mentorship networks. Novel feature engineering algorithms model the temporal dynamics of profile development and skill acquisition, enabling dynamic and context-sensitive candidate representations. To guarantee interpretability, a recruitment-specific explainability engine offers stakeholder-specific explanations, including comparative explanations between a candidate and a job, trajectory correspondence insights, and visualizations of fairness trade-offs. A real-world evaluation combining statistical fairness measures, accuracy, and user-centred interpretability measures demonstrates the system's effectiveness. The results highlight the potential of hybrid AI architectures and domain-specific explainability to create ethical, equitable, and adaptive solutions for the future of work.