MSc in Information Technology
Permanent URI for this collection: https://rda.sliit.lk/handle/123456789/2484
Students enrolled in the MSc in Information Technology programme are required to submit a thesis as a compulsory component of their degree requirements. This collection features merit-based theses submitted by postgraduate students specialising in Information Technology. Abstracts are available for public viewing, while the full texts can be accessed on-site within the library.
Theses and Dissertations of the Sri Lanka Institute of Information Technology (SLIIT) are licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Browse
Publication (Embargo): 10-Year Cardiovascular Disease (CVD) Risk Prediction of Sri Lankans: A Longitudinal Cohort Study (2021). Solangaarachchige, M.B.

Cardiovascular diseases are among the leading causes of mortality in the world. A cornerstone of preventive cardiology is identifying individuals at risk of cardiovascular disease (CVD) as early as possible. Clinical guidelines primarily recommend risk prediction models that are based on a limited number of predictors and perform poorly across some patient groups. Predicting cardiovascular risk is crucial for making treatment decisions, especially in the primary prevention of CVD using a total-risk approach. Although several cardiovascular risk prediction models exist, only a handful are specifically designed for Asians, and none are derived from South Asians, including Sri Lankans. Machine learning (ML) and neural networks appear increasingly promising for supporting decision-making and forecasting from the huge amounts of data generated by the healthcare industry. This led us to develop a CVD model using machine learning to predict the 10-year risk of developing CVD in Sri Lankans. We investigated whether ML could be adopted to develop such a model, whether including non-traditional variables improves the accuracy of CVD risk estimates, and how to validate the ML model against the existing WHO risk charts. Using data on 2596 participants without CVD at baseline, collected in the Ragama Medical Officer of Health (MOH) area in Sri Lanka, we developed an ML-based model for predicting CVD risk from 75 available variables. However, the ratio of participants developing CVD versus no CVD within 10 years was 7:93, which is extremely unbalanced. Therefore, we first derived a balanced dataset from the main dataset and built an ML model, which recorded an accuracy of 80.56%.
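The abstract does not specify how the balanced dataset was derived; a common approach for a 7:93 class ratio, shown here as a minimal sketch with hypothetical field names, is random undersampling of the majority class before model training.

```python
import random

def balance_by_undersampling(rows, label_key="cvd", seed=42):
    """Derive a balanced subset from a heavily imbalanced dataset
    by randomly undersampling the majority class down to the size
    of the minority class (labels: 1 = event, 0 = no event)."""
    positives = [r for r in rows if r[label_key] == 1]
    negatives = [r for r in rows if r[label_key] == 0]
    minority, majority = sorted([positives, negatives], key=len)
    rng = random.Random(seed)
    sampled_majority = rng.sample(majority, len(minority))
    balanced = minority + sampled_majority
    rng.shuffle(balanced)
    return balanced

# Illustrative data with roughly the 7:93 event ratio from the study.
data = [{"cvd": 1} for _ in range(7)] + [{"cvd": 0} for _ in range(93)]
subset = balance_by_undersampling(data)
print(len(subset))  # 14 rows: 7 events + 7 sampled non-events
```

A classifier such as Random Forest would then be trained on `subset` rather than on the raw, imbalanced data.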
Secondly, to alleviate the dataset's imbalance, we adopted two techniques, 10-fold cross-validation and stratified 10-fold cross-validation (SKF), and trained six ML classification algorithms: Random Forest (RF), Decision Tree, AdaBoost, Gradient Boosting, K-Nearest Neighbour, and a 2D neural network. Of these six algorithms, the RF model with SKF showed the highest accuracy in predicting a CVD event, at 93.11%. Our ML model included predictors that are not usually considered in existing risk prediction models. Systolic blood pressure was the most important variable in this model. Six non-traditional variables appeared among the ten most important variables, and three of them were non-laboratory variables. To validate the model against the existing WHO risk charts, we explored an experimental approach by developing a simple logistic regression function using the same techniques as the best selected model, with the seven traditional risk factors used in the WHO risk charts; our Random Forest model showed the highest accuracy compared with the WHO model, with a difference of 26.20%. Our ML model improves the accuracy of CVD risk prediction in the Sri Lankan population. This approach demonstrates that CVD prediction models can also be derived using ML for each subregion individually. Additionally, our research discovered novel CVD risk factors that may now be investigated in prospective studies.

Publication (Open Access): A Machine Learning Approach to Identify the Key Factors Affecting Correct Stream Selection and To Predict Suitable Subject Streams for Advanced Level Students in Sri Lanka (Sri Lanka Institute of Information Technology, 2025-12). Abeywardhana, K.G.H.

Education plays a vital role in shaping the economic growth and sustainable development of a nation. It is not only a measure of a country's intellectual wealth but also a determining factor in its future progress.
In Sri Lanka, education is provided free of charge by the government from primary school through university, ensuring equal access for all students. Within this framework, the General Certificate of Education (Ordinary Level) – G.C.E. (O/L) and the General Certificate of Education (Advanced Level) – G.C.E. (A/L) examinations represent two critical milestones in the academic journey. The G.C.E. (A/L) examination, in particular, serves as the gateway to higher education and university admission, marking a pivotal stage in shaping students' academic and professional futures. At the end of the O/L stage, students are required to select a subject stream such as Science, Arts, Commerce, or Technology to pursue during their A/L studies. This choice has a lasting impact, as it directly determines the student's educational direction and career opportunities. However, many students make this crucial decision based on external influences, such as parental pressure, peer comparison, or limited guidance, rather than through a clear understanding of their academic strengths, personal interests, or long-term career aspirations. Consequently, this often leads to dissatisfaction, stream switching, or even discontinuation of studies. To address this issue, it is essential to adopt a data-driven approach that considers multiple factors, including students' O/L examination performance, inborn talents, extracurricular activities, and preferred professional fields. This research introduces a machine learning-based model, the Subject Stream Prediction System, designed to recommend the most suitable A/L subject stream for students. The proposed system not only predicts the optimal subject stream but also provides additional guidance by suggesting potential career paths, relevant educational qualifications, and technical skills aligned with the student's profile.
Four supervised machine learning algorithms, K-Nearest Neighbors (KNN), Decision Tree, Random Forest, and Support Vector Machine (SVM), were trained and evaluated to develop the predictive model, ensuring the highest possible accuracy and reliability.

Publication (Open Access): Address IoT Security and Privacy Challenges (2021-01). Wijesinghe, K.I.

Information technology has seen rapid cross-platform and cross-functional developments, for example in sensors, nanotechnology, and bio-industries. In hospitals, an e-healthcare system is generally used to obtain a patient's data. Notably, the existing e-healthcare approach has relied on wired communication among established components, such as the network protocol and the database, within the hospital environment. Healthcare systems have made increasing use of mobility features and wireless communication, and advances in technology have enabled smart tools and devices with modest computing power to exploit wireless sensor nodes. In the new age of technology and wireless communication, the enormous rise of electronic devices, led by smartphones and tablets, has made them the most popular and essential tools of everyday life. Advances in the Internet of Things (IoT) are widely used to interconnect devices such as sensors, appliances, vehicles, and other objects. These devices may be furnished with radio-frequency identification (RFID) tags, actuators, sensors, mobile phones, and many other components. Using IoT, this large number of devices is connected to establish communication among themselves and to access data efficiently. The principal benefit of IoT is extending the reach of the Internet with remote-control capability, data sharing, always-on connectivity, and more.
The healthcare servers keep electronic medical records of registered users and offer various services to patients, medical advisors, and informal caregivers. A patient's doctor can retrieve the data from the facility through the internet and review the patient's history, current symptoms, and response to a given treatment. Once the WBAN is configured, the healthcare server manages the network, handling channel sharing. A Wireless Body Area Network (WBAN) comprises small, intelligent devices attached to the patient's body that are continuously monitored by a mobile health application over a wireless link such as Bluetooth, Zigbee, or RFID. The WBAN delivers continuous data, management, and real-time charts and responses to the organisation, the patient, or the healthcare specialists assigned to that case. Readings taken later are used for forecasting; weighted measurements are used to estimate which kind of disease is likely to occur, and the data is recorded over an extended period. Kevin Ashton first introduced the Internet of Things (IoT) in 1999. He connected numerous sensors to physical objects and relayed the collected data to the internet. IoT technology is presently used in fields such as digital oilfields, home and building automation, smart grids, improved clinical care, and intelligent transport. RFID allows radio-frequency tags to identify physical objects; an RFID sensor also transmits data to the user and allows items to be identified, tracked, and grouped. IoT can yield colossal amounts of information about people, time, things, and space. Indeed, combining current web technology with IoT enables large-scale deployments of low-cost sensors and wireless communication. Internet Protocol v6 (IPv6) and the cloud support the continued blending of the web and IoT.
This enriches the possibilities for information collection, data processing, organisation, and various novel services. IPv6 is used to identify an object that interfaces with the IoT through a unique addressing scheme. In rural areas, the majority of people do not have suitable access to health monitoring and clinics, so it is important to design an effective health monitoring system. A tiny wireless device bound to the IoT can offer a practical way to supervise patients remotely rather than requiring visits to an actual clinic. Small transducers placed on the human body gather the details through which the system securely obtains human wellness data for analysis and treatment. The gathered data is then sent to remote stations through various communication technologies (such as a 3G/4G-enabled base station or a Wi-Fi network with Internet access). From the data received over the internet, medical professionals can draw conclusions and thus deliver services remotely. The main advantage of this electronic healthcare is that it raises the quality of service and offers great convenience to patients and healthcare providers. However, the patient's privacy is not considered in this computerised healthcare system, even though it is crucial in the patient's case, and this is its worst flaw. RFID technology is employed to overcome this problem: with its simplicity and adaptability, it handles patient records, and its main advantage is that it defends against a variety of threats, which reduces the amount of noise in signal transmission [1][5]. A large part of the design concerns the various security mechanisms with privacy protocols at minimal expense, for better applicability. Along these lines, it is important to design practical ultra-lightweight cryptographic protocols for a low-cost RFID system, and the IoT has lately become the best answer for this purpose.
Hence, in this paper, an effective healthcare monitoring system is designed using IoT and RFID tags. The experimental results in this paper show robust performance against various attacks. In this system, to obtain accurate assessments, to supervise and examine the patient's state of health, and to strengthen the IoT, a microcontroller is combined with sensors. Various sensors are used to measure the different parameters [6]: an ECG sensor, pulse sensor, temperature sensor, movement sensor, EEG sensor, and blood glucose sensor. To obtain efficient results, the combination of smart sensors with microcontroller components is considered, because it offers many benefits such as low power consumption, integrated precision analogue capabilities, and a friendly user interface. Worldwide, most hospital users carry smartphones, and recent health services make use of smartphone sensors to supervise patients' conditions. Accordingly, this paper takes advantage of existing smartphone sensor devices to manage e-health. The proposed paper presents a platform for body sensors, which connect directly with the patient's smartphone to obtain readings at run time. This data is processed and stored in cloud storage. The stored data may also be accessed later by practitioners and medical staff to observe and display patients' wellbeing. This paper is organised as follows: Section II reviews the literature related to the proposed system. Section III presents an introduction to IoT and RFID. Section IV describes the development of the system and the various proposed techniques used in this paper. Section V presents the experimental results.
Finally, Section VI concludes the paper.

Publication (Open Access): AgriSense: An IoT-Integrated Crop Recommendation and Price Forecasting System Using Machine Learning (Sri Lanka Institute of Information Technology, 2025-12). Godfrri Croos, N.

Sri Lankan smallholders make planting and selling decisions in the presence of shifting monsoon patterns and volatile local markets. AgriSense is an end-to-end system that integrates a low-cost IoT field device with cloud-based machine learning to support both crop selection and price planning. The device collects plot-level measurements of soil chemistry and condition, together with ambient data and location, and streams these to a backend designed to tolerate intermittent rural connectivity. In the cloud, a supervised crop-recommendation model trained on a soil feature set produces a ranked shortlist of suitable crops with calibrated probabilities.

Publication (Embargo): An AI Bot Who Is Suggesting Words to Create Trending Social Media Posts (2021-06). Rajapaksha, S.J.S.U.

Today, social media is the mainstay of many advertising campaigns. For example, social media platforms such as Facebook and YouTube are widely used by TV and radio channels to advertise their programmes. Not only that, but many higher education institutions and even business organisations use social media extensively to reach a wider audience. At the same time, more of these advertising agencies are emerging than ever before. Despite so much money being spent on social media, it can be seen that only selected advertisements reach the masses. The main reason for this is that although many people advertise on social media, they do not have a good understanding of how to do it correctly using the correct keywords.
As a solution, if before placing such an advertisement or any post on social media there is a prior understanding of how the product or service should be advertised these days and what words and pictures should be used for the post, then advertising can be done most effectively. Therefore, the purpose of this project is to create a website, using artificial intelligence technology, for those who want to study trending information on social media and carry out new publicity. In other words, the author has created a system that suggests keywords for social media posts, according to the relevant category, indicating what should be included in new posts for them to become trending. To achieve this, the author collected data and information related to trending social media posts in various categories and then made predictions by considering the number of reaches, likes, and comments. In this project the author used two models based on linear regression and multiple linear regression AI techniques, which are clearly described in the methodology chapter. After the newly created system was used, it was identified that it produced an increase in user reactions compared with traditional posting methods. Following testing of the new system, user responses to posts created using keywords derived from the new system were found to be higher than responses to posts created in the normal way; more details are contained in the testing and evaluation chapter.

Publication (Open Access): AI for Legal Domain Identification and Guidance in Sri Lankan Civil Law: A Comparative Study of Open-Source vs Proprietary AI Models (Sri Lanka Institute of Information Technology, 2025-12). Athulathmudali, A.M.

This study investigated the application of retrieval-augmented generation (RAG) architectures powered by large language models (LLMs) to improve access to civil-law information in Sri Lanka.
It addresses a key challenge in the country's justice system: the limited accessibility of affordable and reliable legal guidance. A RAG-based legal information assistant was designed, implemented, and evaluated using two back-end models, OpenAI's GPT-3.5-Turbo and the open-source Mistral-7B-v0.1. Both systems were integrated with a curated Sri Lankan civil-law corpus and compared, using a set of test queries, across three metrics: accuracy, latency, and cost. GPT-3.5-Turbo achieved higher accuracy (92.5%) and lower average latency (4.17 s) at a lower cost (USD 0.000487 per query) than Mistral-7B-v0.1 (82.5% accuracy, 15.64 s average latency, USD 0.000742 average cost). Statistical tests confirmed significant differences in latency and cost. GPT-3.5-Turbo therefore exhibited superior responsiveness and efficiency for real-time, citizen-facing legal assistance, whereas Mistral-7B offers a competitive, viable, privacy-preserving alternative for institutional or offline use. The research contributes a reproducible evaluation framework for legal-domain LLMs and a localised civil-law corpus designed for retrieval-augmented systems. More broadly, it demonstrates that responsibly designed AI can enhance access to justice in low-resource contexts. The findings establish a foundation for future multilingual, ethically aligned, and jurisdiction-aware legal-AI systems in Sri Lanka.

Publication (Open Access): AI-Driven Code Comment Quality Assessment and Its Impact on Software Complexity (Sri Lanka Institute of Information Technology, 2025-12). Nagodavithana, J.C.N.

Code comments play a critical role in software readability and maintainability. However, poorly written, redundant, or misleading comments can increase software complexity and hinder developer productivity. This research proposes a novel AI-driven framework for assessing code comment quality using a Comment Quality Index (CQI), which combines both structural and semantic features.
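For illustration only, a CQI that blends a structural and a semantic score can be sketched as a weighted sum with banding; the weights and band boundaries below are assumed, not the thesis's actual values.

```python
def comment_quality_index(structural, semantic, w_structural=0.4, w_semantic=0.6):
    """Combine a heuristic structural score and a model-based semantic
    score (both on [0, 1]) into a single Comment Quality Index.
    The weights here are illustrative placeholders."""
    return round(w_structural * structural + w_semantic * semantic, 3)

def band(cqi, poor_max=0.4, good_min=0.7):
    """Map a CQI value onto the poor/average/good bands described
    in the abstract (band boundaries assumed for illustration)."""
    if cqi <= poor_max:
        return "poor"
    if cqi >= good_min:
        return "good"
    return "average"

score = comment_quality_index(structural=0.8, semantic=0.9)
print(score, band(score))  # 0.86 good
```

In the study itself, the semantic component comes from a fine-tuned BERT model rather than a fixed number.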
Structural scoring was implemented through heuristic methods, while semantic scoring leveraged transformer-based models, specifically BERT, to capture the meaning and relevance of comments. To validate the approach, heuristic and AI-based scores were compared against developer-rated comments collected via a survey. Additionally, the study introduces CQI value bands to classify comments as poor, average, or good, providing actionable insights for developers. Statistical analyses, including ANOVA and correlation tests, confirm the effectiveness and reliability of the proposed scoring framework. For the AI component, the BERT model was fine-tuned on 90 developer-rated comments, showing consistent training-loss reduction across epochs. The fine-tuned model was saved and applied for inference on unseen comments, demonstrating its ability to generalize and provide real-time quality assessment. The results indicate a strong alignment between AI-predicted scores and human evaluations, highlighting the potential of AI-assisted comment analysis to enhance software quality. Finally, the research explores the integration of this framework into an IDE plugin, enabling developers to receive immediate feedback on comment quality during code development. Overall, the study provides a comprehensive methodology for automated comment quality evaluation, combining empirical validation, AI-based semantic analysis, and practical implementation in modern software engineering environments.

Publication (Open Access): AI-Driven Nutrient Management in Hydroponics for Urban Agriculture: Enhancing Food Security through Technology (SLIIT, 2024-12). De Silva, G.P.S.N.

This research investigates the integration of artificial intelligence (AI) into hydroponic farming systems to tackle challenges in urban agriculture, particularly food security and resource optimization.
Urban expansion and shrinking arable land necessitate innovative agricultural solutions, and hydroponics, a soilless cultivation method, is increasingly recognized for its efficiency and scalability in urban environments. By leveraging AI and Internet of Things (IoT) technologies, this study develops an automated nutrient management system that optimizes critical parameters such as pH, electrical conductivity (EC), and nutrient concentrations (NPK: Nitrogen, Phosphorus, Potassium) to enhance plant growth and resource efficiency. The experimental design includes two hydroponic systems: an AI-driven system and a manual control setup, both operating under identical conditions. The AI-driven system utilizes real-time sensor data, processed by machine learning models, to automate nutrient adjustments. Data collected from sensors, including pH, EC, and temperature, is transmitted via AWS IoT Core and stored in DynamoDB for real-time monitoring and historical analysis. The system's performance is visualized through an Angular-based dashboard, enabling continuous monitoring and decision-making. Results demonstrate that the AI-driven system significantly outperforms manual nutrient management in terms of plant growth, resource efficiency, and environmental stability. Plants grown in the automated system exhibited a 48% increase in weight and improved root development compared to those grown in the manual system. The automated system also maintained optimal pH (6.3–6.7) and EC (1.8–2.4) levels with minimal deviations, reducing nutrient waste and ensuring precise dosing. This research contributes to the field of smart agriculture by showcasing the transformative potential of AI and IoT technologies in hydroponic farming. The findings emphasize the viability of AI-driven systems to enhance the sustainability, scalability, and efficiency of hydroponics for urban agriculture.
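The corrective logic implied by the reported optimal bands (pH 6.3–6.7, EC 1.8–2.4) can be illustrated with a minimal rule-based sketch. The action names are hypothetical; the actual system drives dosing through machine learning models rather than fixed rules.

```python
def dosing_actions(ph, ec, ph_band=(6.3, 6.7), ec_band=(1.8, 2.4)):
    """Decide corrective actions from one sensor reading, using the
    optimal pH and EC bands reported in the study. Returns a list of
    illustrative action names, or ['hold'] when both are in band."""
    actions = []
    if ph < ph_band[0]:
        actions.append("dose pH-up")
    elif ph > ph_band[1]:
        actions.append("dose pH-down")
    if ec < ec_band[0]:
        actions.append("add nutrient concentrate")  # raise NPK concentration
    elif ec > ec_band[1]:
        actions.append("dilute with fresh water")
    return actions or ["hold"]

print(dosing_actions(ph=6.1, ec=2.6))  # ['dose pH-up', 'dilute with fresh water']
print(dosing_actions(ph=6.5, ec=2.0))  # ['hold']
```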
The integration of advanced …

Publication (Open Access): AI-Powered Sinhala Character Recognition and Digital Transformation (SLIIT, 2024-12). Vidyalankara, R.A. Sumudu

This research focuses on developing an effective system for recognizing and converting handwritten and printed Sinhala text into digital format. As the primary language of Sri Lanka, Sinhala presents unique challenges for handwriting recognition due to its intricate strokes and complex character structures. Existing methods often fall short in accurately interpreting Sinhala characters, highlighting the need for a tailored solution. The proposed system employs Convolutional Neural Networks (CNNs) to classify and recognize Sinhala characters with high precision. A key innovation is error-guided preprocessing, applied iteratively to images misclassified during the initial training phase. Failed images are processed using methods such as blurriness detection, dynamic contrast adjustment, noise removal with bilateral filtering, and morphological operations for stroke enhancement. This approach ensures improved image quality and meaningful feature extraction for subsequent retraining. Additional techniques like contour analysis and gradient-based feature extraction further enhance the system's recognition capabilities. To optimize performance, strategies such as data augmentation, hyperparameter tuning, and model ensembles are explored, improving the system's adaptability and robustness. The system is evaluated on a diverse dataset of handwritten and printed Sinhala text, demonstrating significant improvements in recognition accuracy and efficiency. Its applications include optical character recognition, document digitization, and automated form processing.
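As an illustration of the blurriness-detection step in the error-guided preprocessing described above, a variance-of-Laplacian measure (a common blur heuristic, not necessarily the exact method used in the thesis) can be sketched in pure Python on a greyscale image given as a 2D list:

```python
def laplacian_variance(image):
    """Blurriness heuristic: variance of a 4-neighbour Laplacian over
    the interior of a greyscale image. Low variance suggests a blurred
    image that should be routed to extra preprocessing before retraining."""
    h, w = len(image), len(image[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (image[y - 1][x] + image[y + 1][x] + image[y][x - 1]
                   + image[y][x + 1] - 4 * image[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

sharp = [[0] * 5 for _ in range(5)]
sharp[2][2] = 255          # one bright stroke pixel: strong local contrast
flat = [[10] * 5 for _ in range(5)]  # uniform patch: no contrast at all
print(laplacian_variance(sharp) > laplacian_variance(flat))  # True
```

In a real pipeline the measure would be compared against a tuned threshold to flag images for contrast adjustment, filtering, and morphological cleanup.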
This thesis contributes a comprehensive, CNN-based methodology tailored to the complexities of Sinhala script, offering a promising solution for advancing Sinhala language technologies.

Publication (Embargo): An Analysis and Early Warning of Toxic Gas Outbursts in Small-Scale Gem Mining: Model Evaluation and GIS-Based Risk Mapping in Sri Lanka (Sri Lanka Institute of Information Technology, 2025-12). Wanasundara, W.M.U.S.

Artisanal and small-scale gem mining in Sri Lanka, particularly within the Pelmadulla region, is highly susceptible to toxic gas accumulation due to inadequate ventilation and the absence of systematic early-warning mechanisms. This research aimed to develop a predictive, GIS-integrated framework for the detection and spatial mapping of toxic gas hazards in small-scale mining environments. Utilizing globally available gas sensor datasets (UCI, IEEE, Mendeley) and localized geological and spatial data, multiple predictive algorithms, namely Random Forest (RF), XGBoost, LSTM, GRU, TCN, and IBWO-TCN, were trained and evaluated using precision, recall, F1-score, AUROC, and lead-time metrics. The Random Forest model exhibited the highest predictive performance (F1 = 0.93, AUROC = 0.97) and was subsequently integrated with GIS-based hazard mapping for the Pelmadulla study area. The spatial analysis indicated that approximately one-third of mining sites fall within high or very high-risk zones.
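The precision, recall, and F1 metrics used to compare the gas-hazard models above can be computed directly from confusion counts; the counts in this sketch are illustrative, not the study's actual figures.

```python
def classification_metrics(tp, fp, fn):
    """Precision, recall, and F1 from confusion-matrix counts:
    tp = true positives, fp = false positives, fn = false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for one model on a held-out test set.
p, r, f1 = classification_metrics(tp=90, fp=8, fn=6)
print(round(f1, 3))  # 0.928
```

AUROC, by contrast, requires the ranked prediction scores rather than hard counts, which is why it is reported separately in the study.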
The findings highlight the potential of integrating predictive analytics and geospatial modeling to establish a low-cost, data-driven early-warning system, thereby enhancing occupational safety and supporting sustainable mining practices in Sri Lanka.

Publication (Embargo): Analysis of 'Toll Free Agricultural Advisory Service' Data as a Decision Support Tool for the Department of Agriculture (2021). Rajapaksha, N.C.

The Toll-Free Agricultural Advisory Service of the Department of Agriculture, named "Govi Sahana Sarana", was established in 2006 with the 1920 short code and connected to all of Sri Lanka's fixed-line and mobile telephone service providers. Farmers and other stakeholders were enabled to contact technical officers (Agricultural Instructors) directly using this short code. All information was entered into the 1920 call centre database manually. Monthly statistics generated in the 1920 database were then summarised into a tabular format using Microsoft Excel and distributed to the top management of the Department of Agriculture, who were assumed to make decisions based on the content of these reports. Farmers all over the island bring their agricultural problems, of many different types, to the 1920 Agricultural Advisory Service, and these problems can be grouped into several major categories. However, at present the problems are not analysed further; solutions are given to farmers only at that moment. If analysed, this big data could bring benefits on a vast scale at the national level in the future. This study was carried out to explore the possibility of introducing decision support for the 1920 reporting system, to generate enhanced analytics and to make it easier for the top management of the DOA to make informed decisions more efficiently and effectively than with the previous reporting method. First, a basic preliminary analysis was performed.
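A preliminary frequency analysis of this kind, counting advisory calls by district and by problem category, might be sketched as follows; the records, district names, and categories below are hypothetical examples, not data from the 1920 database.

```python
from collections import Counter

# Hypothetical call records in the shape of rows from the 1920 database.
calls = [
    {"district": "Kurunegala", "category": "Pest and disease"},
    {"district": "Kurunegala", "category": "Fertilizer"},
    {"district": "Anuradhapura", "category": "Pest and disease"},
    {"district": "Kurunegala", "category": "Pest and disease"},
]

# Frequency of problems per district and per category.
by_district = Counter(c["district"] for c in calls)
by_category = Counter(c["category"] for c in calls)

print(by_district.most_common(1))  # [('Kurunegala', 3)]
print(by_category.most_common(1))  # [('Pest and disease', 3)]
```

The same tallies, cross-tabulated, answer the study's third question: from which district a given category of problem is reported most often.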
To prepare the dataset for further analysis, it was edited to describe the main features of the data and to summarise the results. The results of the frequency analysis were obtained; accordingly, the districts with the highest number of problems were found. It was also possible to find out which category received the most problems, and from which district the problems related to that category were received the most. Quantitative and qualitative approaches were used to achieve the objectives of this research. The topics covered include measurement scales, data types, and analysis methods. Then, a regression model was built using SPSS statistical software, which made it possible to generate predictions related to farmers' problems. It provides probability plots and other basic descriptive statistics of the data, such as the mean and standard deviation. Validation methods, namely the normal probability plot and R-squared, were used to select the best model. The results of this analysis can be used as a decision support tool for the Department of Agriculture at the national level: decisions can be based on how the independent variables relate to the dependent variable, which is very helpful for the decisions of the Department of Agriculture.

Publication (Open Access): Analysis of Human Interpretability in Document Classification (2018). Kumari, P.K. Suriyaa

With the widespread use of computers, the volume of textual data generated, exchanged, stored, and accessed has increased massively, becoming one of the richest sources of data for organisations. As a result, people tend to use natural language processing applications, built on machine learning models, to categorise this large volume of data in an efficient and accurate manner.
In Natural Language Processing (NLP) applications, most of which follow supervised learning techniques, automatic document classification models are developed to perform content-based assignment, where materials are assigned to predefined categories. This makes it easier to find relevant information at the right time and to filter and route documents directly to the correct users. Mostly, these learning models operate in a black-box manner, where there is no way to interpret how the model decided which class an instance should be assigned to. Understanding the reasons behind these predictions is very important for trusting such learning models in real applications. This thesis presents experimental work carried out with a set of text classifiers to interpret their predictions, so that any classifier can be evaluated on how well it supports the classification purpose.

Publication (Embargo): Analysis on the Risk and the Categorization on Test Automation in Sri Lankan Software Industry (2021-08). Sundaralingam, Sakthi

Delivering quality software to the customer is the key objective of the software industry. One of the essential fragments of the software life cycle is software testing, in which test automation plays a major role. Test automation is an art that needs to be managed and executed properly in order to deliver quality software. If test automation is not practised in the proper way, the quality of the delivered software is directly affected, leading to loss of customers, which is a business failure. Test automation contains several phases, from scoping to maintenance. Each phase has several steps that require correct analysis, decision-making, and operation. Test automation has several problems that need to be addressed at each stage, and it causes several issues when executed in a company.
All of these issues need to be handled by different people, so they must first be identified and classified, and then solved properly. This research aims to categorize the problems automatically and to find solutions for problems in the test automation process, so that test automation can be practiced in a healthier way and better software quality achieved. Test automation issues are analyzed, solutions are proposed, and the research recommends at which stage test automation causes problems and how to solve them. Issues are categorized under the relevant category so that they can be resolved speedily. Automation generates a huge number of issues: test automation runs overnight in most companies and produces many issues that cannot be analyzed one by one and allocated to the relevant people manually. The implementation produced by this research categorizes the issues under the relevant category, which leads to speedy allocation to the relevant person and solution. The system categorizes test automation problems automatically using a text classification approach: issues are passed in as sentences, cleaned and preprocessed, feature selection is conducted using filter methods, and each issue is predicted into the appropriate category. An LSTM-based algorithm combined with filter-method feature selection was implemented to categorize the issues, to find solutions for problems in the test automation process, and to show how to practice test automation in a healthier and more proper way in order to achieve better software quality. In this research, an implementation to categorize test automation problems was formed.
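The preprocessing and filter-method feature-selection stage described in this abstract can be sketched roughly as follows. The issue sentences, categories, and stopword list below are hypothetical illustrations, not the thesis's data, and the LSTM classifier that would consume the selected features is omitted:

```python
# Minimal sketch of a preprocessing + filter-method feature-selection stage
# for categorizing test automation issues. Sentences and categories are
# hypothetical; the downstream LSTM classifier is omitted.
import re
from collections import Counter

STOPWORDS = {"the", "is", "a", "an", "on", "in", "to", "of", "and", "for"}

def preprocess(sentence):
    """Lowercase, strip punctuation, and drop stopwords."""
    tokens = re.findall(r"[a-z]+", sentence.lower())
    return [t for t in tokens if t not in STOPWORDS]

def select_features(labelled_issues, k=3):
    """Filter method: keep the k most frequent tokens per category."""
    per_category = {}
    for sentence, category in labelled_issues:
        per_category.setdefault(category, Counter()).update(preprocess(sentence))
    return {cat: [t for t, _ in counts.most_common(k)]
            for cat, counts in per_category.items()}

# Hypothetical labelled issue sentences.
issues = [
    ("The login test timed out on the build server", "environment"),
    ("Build server ran out of disk space overnight", "environment"),
    ("Locator for the submit button is outdated", "script"),
    ("Outdated locator broke the checkout script", "script"),
]
features = select_features(issues)
print(features["script"])
```

A real filter method would use a statistical score such as chi-squared rather than raw frequency, but the shape of the pipeline (clean, tokenize, score, select) is the same.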
Recommendations and solutions are proposed for test automation that would aid in practicing it better, which would lead to better software quality delivery.

Publication Open Access Analyzing the Influence of Automated Water Distribution Systems on Precision Irrigation for Orchids: A Case Study Using the Dendrobium Phalaenopsis Orchid Group (SLIIT, 2024-12) Maleesha, R. P. G. S.
This research seeks to establish the efficiency of automated watering of Dendrobium Phalaenopsis orchids using remote monitoring and control through a dashboard in Arduino Cloud. Soil moisture, temperature, and humidity levels in the terrain are the environmental factors monitored, and the application controls water discharge in response to the readings. Water is only added once the soil moisture level drops to a low level of 30 percent, so as to avoid using water unnecessarily. The system's water use efficiency was 60 to 95 percent; thus the system was good at maintaining the moisture level without wasting much water. Temperatures ranging from 22 to 28 °C and humidity ranging from 40 to 95 percent affected water demand, but the system based its decisions on the soil moisture values. It thus operated under the principles of precision irrigation: water was provided where and when it was needed. Parameters that might be added to the algorithm in the future include temperature and humidity, as well as predictions of possible environmental and climatic changes, for even greater water savings.
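The 30-percent soil-moisture threshold rule described in this abstract can be sketched as follows. The readings and the decision functions are simulated stand-ins for the real Arduino Cloud dashboard and hardware; only the threshold value mirrors the abstract:

```python
# Minimal sketch of the 30-percent soil-moisture threshold rule described
# above. Readings are simulated stand-ins for the real sensors; only the
# threshold value comes from the abstract.
MOISTURE_THRESHOLD = 30.0  # percent: irrigate only at or below this level

def should_irrigate(soil_moisture_pct):
    """Open the valve only when soil moisture is at or below the threshold."""
    return soil_moisture_pct <= MOISTURE_THRESHOLD

def run_cycle(readings):
    """Return the readings at which water would actually be discharged."""
    return [r for r in readings if should_irrigate(r)]

# Simulated hourly soil-moisture readings (percent).
readings = [55.0, 41.2, 33.7, 29.8, 36.5, 28.0]
print(run_cycle(readings))
```

The point of the threshold is exactly what the abstract states: water is discharged only on the low readings, never on the wetter ones, which is where the reported water savings come from.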
The results point to the potential of automated supply systems to reshape orchid cultivation practices, with special concern for the rational use of resources and sustainability in the agricultural activity.

Publication Open Access AnyDbMobileSync: Database Agnostic Synchronization Framework for Mobile Applications (2014-12) Abeysingha Gunawardhana, Suminda
This work implements a database-independent data synchronization middleware framework (AnyDBMobileSync) supporting SQL Server, Oracle, MySQL, etc. on the server side and any data store (HTML5 local storage, WebSQL, SQLite, etc.) on the client side. If an offline mobile application uses this middleware sync framework, its backend database can be changed without extra development on the mobile application. The framework handles the different database types (SQL Server, Oracle, MySQL) and communicates the mobile clients' offline data. The framework and the mobile client communicate using a RESTful service, so the mobile client side does not need to install an extra API. The communicated data objects are JSON, and any application language can be used to extract data from the JSON objects.

Publication Open Access An Approach towards Password Protection Based On Typing Style (2014-12) Arnarasena, Nelum Chathuranga
The most common user authentication mechanism is password verification: in other words, verifying the characters typed as text into a password field. The aim of this research is to find out whether the rhythm and/or style of typing, that is, how one types instead of what one types (keystroke dynamics), is sufficiently reliable as a security enhancement. This is a biometric approach. Biometric solutions are usually costly, requiring at least one additional sensor.
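The timing features that keystroke-dynamics systems typically extract (dwell time, how long a key is held; flight time, the gap between releasing one key and pressing the next) can be sketched as follows. The timestamps and the naive template check are hypothetical illustrations, not this thesis's actual pattern-generation method:

```python
# Sketch of dwell-time and flight-time features commonly used in keystroke
# dynamics. Timestamps are hypothetical; a real system would capture them
# from key press/release events, and would use a statistical model rather
# than this naive tolerance check.
def timing_features(events):
    """events: list of (key, press_time, release_time) in seconds.
    Returns dwell times (key held down) and flight times (gap between keys)."""
    dwells = [release - press for _, press, release in events]
    flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwells, flights

def matches_profile(sample, profile, tolerance=0.05):
    """Naive template check: every timing within `tolerance` seconds of the
    stored profile values."""
    return all(abs(s - p) <= tolerance for s, p in zip(sample, profile))

# Hypothetical typing of "abc": (key, press, release) timestamps in seconds.
events = [("a", 0.00, 0.09), ("b", 0.21, 0.30), ("c", 0.44, 0.52)]
dwells, flights = timing_features(events)
print(dwells, flights)
```

Authentication then requires both the password characters and a timing vector close enough to the enrolled profile, which is the two-stage check the abstract describes.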
This study, however, focuses on an economical biometric solution that does not necessitate any additional sensor other than the keyboard. Keystroke dynamics is an interesting biometric because it is invisible to users unless they are physically present, and it does not depend on a dedicated device or hardware infrastructure. When a person is typing at a keyboard, the detailed timing information describing exactly when each key was pressed and released, along with the variation of speed when moving between two keys, is continuously monitored in order to recognize a unique pattern. A pattern recognition component then operates in stealth mode alongside the password verification: after the password is verified successfully, the pattern recognition step must also complete in order to authenticate the user. The significance of this research is the way the pattern is generated and stored. As explained in the literature study, keystroke dynamics is not a very reliable biometric, so the challenge, achieved successfully in this study, was to build a strong pattern that is hard to reveal from a less reliable building block.

Publication Open Access Assessing the Impact of Atmospheric CO2 Concentrations on Rainfall Patterns (SLIIT, 2024-12) Wijesinghe, R.A.
This research aims to assess the impact of atmospheric CO₂ concentrations on rainfall patterns, focusing on the relationship between key environmental parameters such as temperature, humidity, wind speed, wind direction, atmospheric pressure, and rainfall. Data were collected over 17 months, including CO₂ data sourced from the National Building Research Organization (NBRO) in Colombo and additional CO₂ measurements captured via an MG811 CO₂ sensor. Environmental data such as temperature, humidity, wind speed, wind direction, and pressure were obtained from the Sri Lanka Meteorological Department, ensuring a comprehensive dataset for analysis.
Machine learning algorithms, including Random Forest, XGBoost, and LSTM, were employed to develop predictive models for rainfall based on the collected data. The performance of these models was evaluated using metrics such as Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and R-squared values. Results indicate that incorporating CO₂ data improves model performance, particularly with the Random Forest model, which demonstrated the lowest error rates and highest predictive reliability when CO₂ was included as a feature. The findings underscore the importance of considering atmospheric CO₂ in climate modeling, revealing that CO₂ levels may have a more complex and region-specific influence on rainfall than previously recognized. This enhanced forecasting approach has significant implications for various sectors in Sri Lanka, including agriculture, aviation, and fisheries, where accurate rainfall predictions are critical for planning and resource management. The study's outcomes are especially relevant for policymakers and environmental stakeholders, as they highlight the potential for data-driven strategies to mitigate climate impacts and promote sustainable development practices. Additionally, the research contributes to the academic discourse on climate dynamics, offering valuable insights for future studies and serving as a foundation for educational initiatives in environmental science and meteorology.

Publication Open Access Assistive Tool for the Evaluation of Online Exam Papers in Tertiary Education (2022-12) Perera, J.T.N
Digital or online education is a necessity of the age for all students and educators, and within it online exams occupy a leading position, because both parties ultimately face that experience. In the current difficult world situation (especially COVID-19), online exams help educators and students to continue their education.
In that situation, students should face a successful online exam paper, and teachers must prepare a standard online exam paper. I considered what the best methods are and what the drawbacks of current online examination systems at the tertiary level are, and I also researched how exam papers are evaluated on those platforms. I discussed this with exam platform developers and collected information; after collecting these data, I analyzed them and prepared a report for the final thesis. The project's main aim is to automate the evaluation process for online exam papers: it reveals the Bloom's Revised Taxonomy level of each question, generates a report, and determines what percentage of the related course unit the exam paper covers. The platform allows educators to log in, prepare the question paper, and evaluate it; the system provides a report that the educator can analyze. I used the Bloom's Revised Taxonomy model for the evaluation process and applied Natural Language Processing technology to reveal the percentage of the course unit covered by the exam paper. Python, Django, HTML, CSS, and JavaScript were used as the other technologies.

Publication Open Access Automated Analysis of Commenting Styles and Documentation Practices: A Data-Driven Approach to Software Quality and Maintainability (Sri Lanka Institute of Information Technology, 2025-12) Sathyangani, K.A.H.P.
Software maintainability is strongly influenced by the quality of code comments, which guide developers in understanding system functionality and behaviour. Poorly written, missing, or ambiguous comments reduce productivity and increase the cost of maintenance. The current study introduces an automated, data-driven approach to evaluating comment quality in Java projects.
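One such quality signal, the overlap between a comment's words and the identifiers of the code it documents, can be measured with Jaccard similarity, as in this sketch. The comment, identifiers, and tokenization rules below are hypothetical examples, not the Comment Quality Analyser's own implementation:

```python
# Sketch of a Jaccard-similarity check between a comment's words and the
# identifiers in the code it documents: the kind of semantic-alignment
# measure this study describes. The inputs below are hypothetical examples.
import re

def tokens(text):
    """Split camelCase identifiers and prose into lowercase word sets."""
    words = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])", text)
    return {w.lower() for w in words}

def jaccard(comment, identifiers):
    """|intersection| / |union| of comment words and identifier words."""
    a = tokens(comment)
    b = set().union(*(tokens(i) for i in identifiers))
    return len(a & b) / len(a | b) if a | b else 0.0

comment = "Reads the file into a byte buffer"
identifiers = ["readFileToByteArray", "buffer"]
score = jaccard(comment, identifiers)
print(round(score, 2))
```

A higher score suggests the comment talks about the same concepts the code names; a score near zero flags a comment that may have drifted away from the code it annotates.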
The proposed solution, implemented as a Java-based tool named Comment Quality Analyser, automatically scans source files, extracts comments, and evaluates them using four quality dimensions: grammatical correctness, readability, understandability, and meaningfulness. The tool integrates LanguageTool for grammatical analysis, the Flesch Reading Ease metric for readability, heuristic rules for understandability, and a Jaccard-similarity-based algorithm for measuring semantic alignment between comments and code identifiers. The results are presented through JSON reports and an interactive HTML dashboard that visualises the quality distribution across files. Real-world validation was conducted using the Apache Commons IO open-source repository, containing over 100 comments. Experimental results indicate that the system provides consistent scoring with an average accuracy of 86% when compared with manual reviews. The proposed framework contributes to improving software documentation practices and offers a foundation for further research integrating Natural Language Processing (NLP) and Machine Learning (ML) to enhance software maintainability analysis.

Publication Open Access Automated Research Paper Summarization with Multiple Model and Accessibility Enhancements (Sri Lanka Institute of Information Technology, 2025-12) Wijesooriya, A.I.E
The number of research papers published each year is growing at an overwhelming pace, making it difficult for students, researchers, and professionals to keep up with new knowledge. Existing summarization tools can help, but most of them rely on large models like GPT, Pegasus, or BERT, which need powerful hardware and constant internet access. This limits their use, especially in low-resource or offline environments. This work introduces a novel framework for Automated Research Paper Summarization that employs a multi-model hybrid pipeline, integrating both extractive and abstractive strategies.
Unlike resource-intensive models, this approach emphasizes lightweight architectures, enabling efficient performance even in low-resource settings while preserving summary quality. To further enhance usability, the system includes keyword extraction modules that highlight central concepts, and accessibility features such as text-to-speech that support users with visual or cognitive challenges. A distinctive feature of this framework is its section-wise summarization output, which mirrors the logical flow of research papers, allowing users to quickly access context, methodology, findings, or conclusions as needed. System performance is assessed through standard metrics like ROUGE and BLEU, complemented by qualitative evaluations of readability, informativeness, and coherence. By avoiding full dependence on large, pre-built models such as GPT or Pegasus, this work prioritizes component-level innovation, offline functionality, and greater privacy, making it adaptable across diverse use cases. The study advances the field of scientific summarization by offering a practical, modular, and accessible tool that supports knowledge discovery and management in research-intensive domains.
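As a toy illustration of the extractive half of such a lightweight pipeline (not the thesis's actual models), sentences can be scored by the frequency of their content words and the top-scoring ones kept in their original order. The text, stopword list, and parameters below are hypothetical:

```python
# Toy sketch of the extractive half of a lightweight summarization pipeline:
# score each sentence by the corpus frequency of its content words and keep
# the top k sentences. Illustrates the general technique only, not the
# thesis's actual models; the sample text is made up.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "is", "are", "to", "in", "for"}

def extractive_summary(text, k=1):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)
    def score(sentence):
        return sum(freq[w] for w in re.findall(r"[a-z]+", sentence.lower())
                   if w not in STOPWORDS)
    # Keep the k highest-scoring sentences, preserving original order.
    top = sorted(sentences, key=score, reverse=True)[:k]
    return [s for s in sentences if s in top]

text = ("Summarization condenses papers. Extractive summarization selects "
        "existing sentences. Abstractive summarization rewrites sentences.")
print(extractive_summary(text, k=1))
```

An abstractive stage would then rewrite the extracted sentences rather than copy them; combining the two is what the abstract calls a multi-model hybrid pipeline.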
