MSc in Information Technology
Permanent URI for this collection: https://rda.sliit.lk/handle/123456789/2484
Students enrolled in the MSc in Information Technology programme are required to submit a thesis as a compulsory component of their degree requirements. This collection features merit-based theses submitted by postgraduate students specialising in Information Technology. Abstracts are available for public viewing, while the full texts can be accessed on-site within the library.
Theses and Dissertations of the Sri Lanka Institute of Information Technology (SLIIT) are licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
3 results
Search Results
Publication (Open Access)
A Machine Learning Approach to Identify the Key Factors Affecting Correct Stream Selection and to Predict Suitable Subject Streams for Advanced Level Students in Sri Lanka
(Sri Lanka Institute of Information Technology, 2025-12) Abeywardhana, K.G.H.
Education plays a vital role in shaping the economic growth and sustainable development of a nation. It is not only a measure of a country's intellectual wealth but also a determining factor in its future progress. In Sri Lanka, education is provided free of charge by the government from primary school through university, ensuring equal access for all students. Within this framework, the General Certificate of Education (Ordinary Level), G.C.E. (O/L), and the General Certificate of Education (Advanced Level), G.C.E. (A/L), examinations represent two critical milestones in the academic journey. The G.C.E. (A/L) examination, in particular, serves as the gateway to higher education and university admission, marking a pivotal stage in shaping students' academic and professional futures. At the end of the O/L stage, students are required to select a subject stream such as Science, Arts, Commerce, or Technology to pursue during their A/L studies. This choice has a lasting impact, as it directly determines the student's educational direction and career opportunities. However, many students make this crucial decision based on external influences, such as parental pressure, peer comparison, or limited guidance, rather than through a clear understanding of their academic strengths, personal interests, or long-term career aspirations. Consequently, this often leads to dissatisfaction, stream switching, or even discontinuation of studies. To address this issue, it is essential to adopt a data-driven approach that considers multiple factors, including students' O/L examination performance, inborn talents, extracurricular activities, and preferred professional fields.
This research introduces a machine learning-based model, the Subject Stream Prediction System, designed to recommend the most suitable A/L subject stream for students. The proposed system not only predicts the optimal subject stream but also provides additional guidance by suggesting potential career paths, relevant educational qualifications, and technical skills aligned with the student's profile. Four supervised machine learning algorithms, K-Nearest Neighbors (KNN), Decision Tree, Random Forest, and Support Vector Machine (SVM), were trained and evaluated to develop the predictive model, ensuring the highest possible accuracy and reliability.

Publication (Open Access)
Publisher-Centric Machine Learning-Based Solution for Click Fraud
(SLIIT, 2024-12) Pathirage, G.S.
Invalid traffic and click fraud present significant challenges in online advertising, impacting advertising metrics and causing substantial financial losses across the digital advertising ecosystem. While advertisers have access to various protective solutions and receive protection from advertising networks, publishers face limited options for detecting and preventing fraudulent activities on their websites. This gap in publisher-side protection creates a critical area for investigation and development of practical solutions. This research presents an effective publisher-side solution: the Ad Click Fraud Protector (ACFP), an open-source WordPress plugin that detects and prevents click fraud and invalid traffic. The research methodology involved studying browser fingerprinting approaches by collecting browser fingerprints from legitimate users and bots, distinguished through firewall rules and honeypots. Experimental analysis identified six key browser fingerprinting attributes that effectively distinguish between legitimate and fraudulent traffic. These findings informed the development of the ACFP plugin, which incorporates additional security measures for enhanced protection.
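The abstract above does not list the six fingerprinting attributes it identifies, but the general idea of publisher-side fingerprint filtering can be sketched as follows. The attribute names, rules, and thresholds below are illustrative assumptions chosen for the example, not the thesis's actual feature set or the ACFP plugin's logic.

```python
# Illustrative sketch of fingerprint-based traffic filtering.
# Attribute names and rules are hypothetical examples, not the six
# attributes identified in the thesis.

def looks_like_bot(fp: dict) -> bool:
    """Flag a visitor fingerprint as likely automated traffic."""
    # Automation frameworks (Selenium, headless Chrome) often expose
    # a webdriver flag in the browser environment.
    if fp.get("webdriver", False):
        return True
    # Real browsers normally report at least one preferred language.
    if not fp.get("languages"):
        return True
    # A zero-area screen is a common headless-browser giveaway.
    if fp.get("screen_width", 0) * fp.get("screen_height", 0) == 0:
        return True
    # A user-agent string that contradicts the reported platform
    # suggests a spoofed client.
    ua, platform = fp.get("user_agent", ""), fp.get("platform", "")
    if "Windows" in ua and platform.startswith("Linux"):
        return True
    return False

human = {"webdriver": False, "languages": ["en-US"],
         "screen_width": 1920, "screen_height": 1080,
         "user_agent": "Mozilla/5.0 (Windows NT 10.0)",
         "platform": "Win32"}
bot = {"webdriver": True, "languages": [],
       "screen_width": 0, "screen_height": 0,
       "user_agent": "Mozilla/5.0 (Windows NT 10.0)",
       "platform": "Linux x86_64"}

print(looks_like_bot(human), looks_like_bot(bot))
```

In a real deployment such checks would run client-side (JavaScript) before ad clicks are registered; this Python version only mirrors the decision logic.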
Testing of the plugin on two AdSense publisher accounts demonstrated its effectiveness in reducing invalid clicks, minimizing invalid traffic, and decreasing revenue deductions due to invalid clicks. The results show that publishers can effectively protect their ad accounts from penalties and deductions through browser fingerprint-based traffic filtering. This research provides publishers with an accessible, open-source solution for combating click fraud while contributing to the theoretical understanding of browser fingerprinting effectiveness in fraud detection. Additionally, it establishes a framework for future development in publisher-side protection systems.

Publication (Embargo)
Leveraging Word Embedding for Automated Candidate Ranking in Talent Acquisition Processes
(SLIIT, 2024-12) Rasanayagam, J.
Ranking the applicants who have applied for a given position in a company is mostly done manually. To ease this process, the proposed system ranks applicants by assigning each a score based on a word embedding model trained on past datasets. Job advertisements related to the information technology field, or to particular positions, are collected and used to train a model through the word embedding process. The system compares each applicant's resume with this model, allocates a specific score to each applicant, and orders them in ascending order. Data crawling and scraping, text preprocessing, and model training are the main components of this research. The goal of this research is to collect data on job openings in the information technology industry, gather job seekers' information through web scraping and crawling, and train a model to rank the applicants. The crawled data is used to prepare the corpus; the Python Scrapy framework is used to write the crawler script. The crawled data then undergoes preprocessing, and finally the preprocessed corpus undergoes word embedding.
Word2Vec, as implemented in the Gensim library, is used here to train the model. Each applicant's resume is compared against this model to obtain a value, a total score is output for each resume, and the system finally ranks the applicants based on their scores in ascending order.
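As a rough illustration of the ranking idea in the abstract above, the sketch below scores résumés against a job advertisement by cosine similarity of averaged word vectors. The tiny hand-written embedding table stands in for a Word2Vec model trained (e.g. with Gensim) on a crawled job-advertisement corpus; the vocabulary, vector values, and best-first ordering are assumptions for the example, not the thesis's actual pipeline.

```python
# Sketch: rank applicants by embedding similarity to a job ad.
from math import sqrt

# Toy "trained" embeddings: word -> vector (illustrative values only;
# a real system would load these from a trained Word2Vec model).
embeddings = {
    "python":  [0.9, 0.1, 0.0],
    "java":    [0.8, 0.2, 0.1],
    "testing": [0.2, 0.9, 0.1],
    "sales":   [0.0, 0.1, 0.9],
}

def doc_vector(text):
    """Average the vectors of known words in a document."""
    vecs = [embeddings[w] for w in text.lower().split() if w in embeddings]
    if not vecs:
        return [0.0, 0.0, 0.0]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(3)]

def cosine(a, b):
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

job_ad = "python java testing"
resumes = {"alice": "python java", "bob": "sales"}

# Score each resume against the job-advertisement vector and rank
# candidates from most to least similar.
scores = {name: cosine(doc_vector(job_ad), doc_vector(cv))
          for name, cv in resumes.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # alice's resume is closer to the job ad than bob's
```

With Gensim, `doc_vector` would instead average vectors looked up from the trained model's keyed vectors; the scoring and sorting steps stay the same.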
