SLIIT Conference and Symposium Proceedings
Permanent URI for this community: https://rda.sliit.lk/handle/123456789/295
All SLIIT faculties conduct international conferences and symposia annually. Publications from these events are included in this collection.
15 results
Search Results
Publication (Open Access)
Artificial Intelligence and the Future of Mental Health: Innovations, Challenges, and Ethical Imperatives (School of Psychology, Faculty of Humanities and Sciences, SLIIT, 2025-10-10) Jayalath, J.G.

Artificial Intelligence (AI) is increasingly viewed as a promising tool for improving the access to and scalability of mental health services, particularly through applications such as chatbots, predictive modeling, and emotion recognition technologies. However, its integration raises significant ethical and psychological concerns, including algorithmic bias, privacy violations, and the potential erosion of human empathy. This qualitative integrative review aimed to critically examine the dual role of AI in mental health, synthesizing evidence on its efficacy and ethical challenges. The study systematically searched the Scopus, Google Scholar, PubMed, and PsycINFO databases, employing a structured search strategy. From an initial pool of 70 papers, 10 high-impact studies were selected based on rigorous inclusion criteria (peer-reviewed, focus on AI applications, ethical/psychological implications).

Publication (Open Access)
Impact of Artificial Intelligence on Academic Integrity in Higher Education, Sri Lanka (School of Education, Faculty of Humanities and Sciences, SLIIT, 2025-10-10) Wijayasiri, K.D.S.N.

The blistering pace of artificial intelligence (AI) adoption in the college and university sector has revolutionized academia, offering high potential while simultaneously raising numerous obstacles to academic integrity. This study examines the various ways in which artificial intelligence has impacted academic integrity in higher education institutions in Sri Lanka.
Investigating the ever-developing field of AI through the prism of professional literature, this research addresses how AI tools, mainly large language models such as ChatGPT, are reinventing familiar patterns of assessment, establishing new patterns of academic dishonesty, and prompting new solutions to the problem of preserving academic integrity. The results indicate that although current AI-based technologies provide significant value for personalized learning and educational improvement, they also present significant risks to academic integrity, which must be addressed promptly by educators, policymakers, and institutional officials. The paper proposes an approach to addressing these issues by redesigning policies, course and examination evaluation, and incorporating ethical AI strategies tailored to the specific context of Sri Lankan higher education.

Publication (Open Access)
Anthropocentricity and Copyright Protection: A Theoretical Emphasis on the Human Authorship Requirement in Copyright after Thaler v. Perlmutter (School of Law, Faculty of Humanities and Sciences, 2025-10-10) Widanapathirana, S.H.

The rapid development of Artificial Intelligence (AI) has created new avenues for creativity across diverse fields. While expanding the horizons of creativity, AI has also introduced a plethora of challenges to the intellectual property law regime. Copyright protection, as a cornerstone of intellectual property law, safeguards the moral and economic rights of authors who create original artistic, literary, or scientific work utilising independent skill, thought, and effort. For creations generated by prompting AI models, conflicting notions surrounding the ownership and authorship of the rights attached to such creations have been the subject of extensive debate. The recent decision in Thaler v.
Perlmutter by the Court of Appeals for the District of Columbia Circuit has decisively addressed the debate over the ownership of copyright in AI-generated works. The judgement cemented the view that copyright for works generated by AI platforms cannot be attributed to the AI system itself, depicting the quality of anthropocentricity. Employing doctrinal and explorative approaches within a qualitative research methodology, this study methodically evaluates the judgement in Thaler v. Perlmutter with the goal of developing a conceptual foundation for authorship and property rights in AI-generated works. The study drew upon the natural rights theory and personhood theory of intellectual property law for the purposes of the evaluation. The findings depict that Thaler v. Perlmutter reinforces the necessity of human authorship for copyright protection, in conformity with the theoretical underpinnings of intellectual property law. At the same time, the study recognizes emerging calls for legal reform to accommodate the evolving nature of creativity driven by AI-related technologies.

Publication (Open Access)
Rewriting Delictual Liability in the Age of AI: Assessing Negligence for Physical Harm Caused by AI Driven Robots in Sri Lanka (School of Law, Faculty of Humanities and Sciences, 2025-10-10) Dharmawardhane, D.; Jayamaha, S.

Where an Artificial Intelligence-driven robot causes physical injury to human beings, the Aquilian action under the law of delict applies by default, as there are no statutes or case law governing such incidents. Nonetheless, the application of traditional delictual doctrines in this context is difficult, or rather impractical, owing to the inherent characteristics of AI systems such as lack of explainability, unpredictability, autonomy, and multi-party involvement.
The objective of this paper is to analyse the aforesaid issue and provide recommendations to resolve the matter. While the paper indicates how the conventional Aquilian action fails in this context, it offers recommendations that incorporate recent developments in the field of AI and judicial/legislative requirements.

Publication (Open Access)
FocusBoost – A Study Aid with Adaptive Learning Techniques (SLIIT City UNI, 2025-07-08) Prabaharan, N.; Dampalessa, D.R.C.G.K.

FocusBoost is an AI-powered adaptive learning platform designed to support children with Attention Deficit Hyperactivity Disorder (ADHD) through personalized learning experiences. By integrating video-based learning with voice input analysis, the system uses speech processing techniques to assess a child's engagement and comprehension in real time. Based on this analysis, the platform dynamically adjusts content difficulty and pace to the needs of the individual learner. In practical testing, the system demonstrated high accuracy in classifying learner engagement and comprehension, with many ADHD learners reporting improved focus and content retention. Additionally, parents noticed positive changes in their children's study habits and attention spans through its use. The platform also includes a performance-tracking page that shows each child's level of comprehension. This research highlights the effectiveness of AI-enhanced learning for students with neurological conditions and its potential to improve inclusive, sustainable education practices.
The system is designed with scalability in mind, allowing for multilingual support, culturally adaptive content, and future integration with medical professionals, expanding its impact across a variety of educational and therapeutic settings.

Publication (Open Access)
Beyond the Wrist: Holographic Pathway for Universal Depression Management (SLIIT City UNI, 2025-07-08) Weerasuriya, B.M.; Egodage, M.D.; Ranasinghe, R.K.N.N.; Vithurshika, J.; Vihansa, N.K.V.; Niranga, G.D.H.

This concept paper introduces a novel smartwatch-based system that leverages artificial intelligence and holographic technology to address the growing need for accessible mental health support, particularly for individuals experiencing depression. Recognizing the communication barriers and lack of resources for the deaf community, the proposed system is designed to be inclusive of both deaf and non-deaf users. The system blends artificial intelligence, holographic technology, mood tracking, and an inventive smartwatch that can detect individual emotions. A smartphone application will be used to oversee and control each of these components. By integrating wearable technology with emotional wellbeing support, the proposed model will provide continuous, accessible, and user-friendly assistance. If implemented, this tool could enhance user engagement and emotional awareness in therapeutic contexts. Further research and development are needed to validate its feasibility and effectiveness.

Publication (Open Access)
Trusty Record - Decentralized Medical Record Management System using Blockchain and Artificial Intelligence (SLIIT City UNI, 2025-07-08) Vijayaraj, A.; Worthington, A.E.

Increasing demand for secure, accessible, and patient-controlled healthcare data systems has exposed the limitations of traditional centralized electronic health record (EHR) platforms.
These systems often suffer from data breaches, limited interoperability, and a lack of transparency, leaving patients with minimal control over their personal medical information. This paper presents TrustyRecord, a decentralized medical record management system that leverages blockchain technology and artificial intelligence (AI) to overcome these challenges. Ethereum-based smart contracts manage access control, ensuring transparency and immutability, while the InterPlanetary File System (IPFS) enables tamper-proof, distributed storage of sensitive medical data. Additionally, a machine learning model trained on real-world clinical data performs predictive analysis, providing patients with early warnings of heart risk based on extracted health indicators. The system integrates Optical Character Recognition (OCR) technology to process unstructured medical files and convert them into structured data for analysis. TrustyRecord offers a secure, scalable, and intelligent approach to health data management, enhancing both patient empowerment and proactive healthcare delivery.

Publication (Open Access)
‘Rhetoric’ and ‘Reality’ of Artificial Intelligence in Apparel Sector in Sri Lanka: Comparative Case Study (ICSDB 2024 and SLIIT Business School, 2024-12-10) Sandatharaka, S.; Neranjani, K.; Gayashan, N.; Himahansika, C.; Liyanage, T.; Jayasuriya, N.; Ehalapitiya, S.

Artificial Intelligence (AI) has emerged as a transformational force in today's rapidly changing business environment, and the apparel sector in Sri Lanka is increasingly embracing AI technologies. Drawing on evidence from companies in Sri Lanka's apparel sector, this study examines the gap between AI's rhetorical promises and its practical (reality) application.
It focuses on workplace perceptions of AI, the bridging of theoretical AI concepts and their implementation, the dynamics of integrating AI into organizational processes, future directions, and the reasons behind the adoption of AI technologies by the case study organizations. Drawing from qualitative data, the study delves into the perceptions of AI among industry professionals, the integration of AI into organizational processes, and the strategic motivations behind adopting AI technologies. The findings highlight a significant disparity between the high expectations promoted by AI rhetoric and the reality and effectiveness of AI in practice. While AI is often heralded as a tool to enhance efficiency and reduce manual labor, the reality within the case study organizations reveals a slower, more complex adoption process. The paper further describes the rhetoric-and-reality insights of AI in the case study organizations while extending rhetorical institutionalism theory: how organizations develop specific rhetorical strategies when defining organizational goals, and how they strategically use symbols (words and signs) to make those goals appear practicable.

Publication (Open Access)
The Ethical Consequences of Artificial Intelligence in Countering Cyber Speech: Combining Effectiveness with Maintaining Human Rights (Faculty of Humanities and Sciences, SLIIT, 2024-12-04) Godigamuwa, A.H.

Artificial Intelligence (AI) offers tremendous potential, and difficult moral dilemmas, in the fight against cyber speech, including hate speech, disinformation, and cyberbullying. This study examines the dual requirement of protecting civil rights while successfully combating harmful online speech. By showcasing developments in deep learning algorithms, natural language processing, and automated moderation tools, it explores the potential of AI systems to identify, regulate, and lessen harmful online behavior.
The ethical implications of AI in moderating online debate are rigorously examined in this paper, with particular attention paid to issues of bias, privacy, and freedom of speech. AI raises concerns about data exploitation and surveillance. It may also over-censor or misinterpret context, which puts permissible expression at risk of being unfairly suppressed. Additionally, AI systems have the power to amplify and perpetuate preconceptions, resulting in biased judgments that affect marginalized communities. Through an analysis of case studies and statutes, the study seeks to strike a balance between the need to preserve fundamental rights and AI's ability to make online spaces safer. It promotes a plan that upholds justice and human dignity by fusing technical advancements with strict moral standards and open governance.

Publication (Embargo)
Evaluating the Threshold of Authenticity in Deepfake Audio and Its Implications Within Criminal Justice (SLIIT, Faculty of Engineering, 2024-10) Rodgers, J.; Jones, K.O.; Robinson, C.; Chandler-Crnigoj, S.; Burrell, H.; McColl, S.

Deepfake technology has come a long way in recent years, and the world has already seen cases where it has been used maliciously. After a deepfake of UK independent financial advisor and poverty champion Martin Lewis was released on social media, a theory was proposed that a deepfake's apparent authenticity can be increased by accompanying the target with additional media, for instance, ambient noise or processing to match how the deepfake would sound if it had been recorded on a specific device such as a cellular/mobile phone. Focussing on deepfake audio, a critical listening experiment was conducted in which participants were asked to identify the deepfake audio file from a set of three, across a number of such sets.
The audio files were created in three ways: real voices with additional sounds added; volunteers recording their voices, which were then put through a deepfake generation system; and voices taken from publicly available podcasts, which were also applied to the deepfake software. The latter set mimics the use of web-accessible voice recordings of prominent or famous people, such as the Prime Minister of the UK. The results show participants successfully detected one third of the deepfake audio files presented; however, they also incorrectly marked another one third of the real files as deepfakes, whilst the remaining third were missed. Results also showed no definitive confirmation that audio and/or forensic professionals had any greater ability to detect deepfake audio files than others. The false-positive result may also reinforce the scepticism and lack of trust created by what is known as the “Liar's Dividend”. The paper details how the files were created, the testing methodology, and the experimental results. Furthermore, it presents a discussion of future research directions and the effects that deepfakes may have on the criminal justice system.
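
A note on interpreting the one-third detection rate reported in the last abstract: in a pick-the-deepfake-out-of-three task, random guessing alone is correct one third of the time, so any detection rate must be weighed against that chance baseline. The sketch below illustrates the point with an exact binomial tail probability; the trial counts are hypothetical, since the abstract does not report raw numbers.

```python
from math import comb

def binom_sf(k, n, p):
    """Exact upper-tail probability P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * (p ** i) * ((1 - p) ** (n - i))
               for i in range(k, n + 1))

# In a forced choice among three files, guessing is correct 1/3 of the time.
chance = 1 / 3

# Hypothetical counts: 60 trials with 20 correct identifications,
# i.e. exactly the one-third detection rate reported above.
n_trials, n_correct = 60, 20

p_value = binom_sf(n_correct, n_trials, chance)
print(f"observed detection rate: {n_correct / n_trials:.2f}")
print(f"P(scoring at least this well by guessing alone): {p_value:.2f}")
```

With these illustrative numbers, a one-third hit rate is what guessing produces on average, which is why the one-third false-positive rate on genuine files and the absence of a professional advantage are arguably the more telling findings.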
