Browsing by Author "Nadeeshani, M"

Now showing 1 - 6 of 6
  • AI-based Behavioural Analyser for Interviews/Viva (Embargo)
    (IEEE, 2022-01-03) Dissanayake, D. Y; Amalya, V; Dissanayaka, R; Lakshan, L; Samarasinghe, P; Nadeeshani, M
    Globalization and technology have made virtual interviews the recruitment method of choice. Even though online interviews/viva have eliminated time, budgetary, and geographical barriers, the lack of insight into the interviewee’s behavioural aspects has yet to be overcome. Therefore, a machine-based approach is proposed in this research for detecting and assessing changes in interviewees’ behaviour and personality traits based on nonverbal cues. Additionally, a group analysis of other applicants, as well as a comparison of the interview environment with the non-interview environment, is also provided. To achieve this, we focus on the candidate’s emotion, eye movement, smile, and head movements. The system was built using deep learning and machine learning models, which achieved accuracies over 85% across smile, eye-gaze, emotion, and head-pose analysis. Furthermore, several machine learning models were developed based on the analysed behavioural outcomes of the interviewee to identify the Big Five personality traits, with the Random Forest model yielding the highest accuracy rate of over 75%. Our findings indicate that nonverbal behavioural cues can be utilized to determine personality traits.
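The personality-trait step this abstract describes can be sketched with a Random Forest classifier. The feature names (smile ratio, gaze stability, and so on), the synthetic data, and the toy label rule below are illustrative assumptions, not the authors' dataset or trained models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical per-interview nonverbal-cue features:
# smile ratio, gaze stability, dominant-emotion score, head-movement rate.
X = rng.random((200, 4))
# Toy stand-in label: "high extraversion" when smile + gaze stability are high.
y = ((X[:, 0] + X[:, 1]) > 1.0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
pred = clf.predict([[0.9, 0.8, 0.5, 0.2]])  # classify one candidate's cues
```

In the paper's setting each of the Big Five traits would get its own labels derived from the behavioural analysis, rather than the synthetic rule used here.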
  • Anonymo: Automatic Response and Analysis of Anonymous Caller Complaints (Embargo)
    (IEEE Computer Society, 2022-08-17) Azhar, A; Maweekumbura, S; Gunathilake, R; Maddumaarachchi, T; Karunasena, A; Nadeeshani, M
    Customers are considered the most valued asset in any business organization. Therefore, attending especially to negative feedback provided by customers in the form of complaints is important for an organization to identify areas to improve and to retain customers. To respond quickly to customer complaints, many business organizations have made hotlines available. Such caller hotlines are dedicated to receiving complaints or allowing whistleblowers to reveal information. Due to the fear of being identified, the public is hesitant to use these hotlines. From the organization's perspective, when a customer complaint is received, the validity of the call must be evaluated. Furthermore, complaints must be handled efficiently by transferring them to the relevant departments and prioritizing them. This research proposes 'Anonymo', a system to handle customer complaints in a secure and efficient manner. To do so, the system analyses the complaints made by a caller and provides the end users with the appropriate responses and output, which include the following: i. a conversational AI agent to respond to callers, ii. wanted and unwanted call classification, iii. department-based complaint classification, and iv. caller emotion detection and caller complaint analysis, all while preserving the caller's anonymity. An accuracy of 88.26% was obtained for identification of wanted complaints using the SVM algorithm, an accuracy of 85% was obtained for department-based classification using the SVM algorithm, and 67% accuracy was obtained for emotion analysis by the LSTM algorithm.
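The department-based classification step can be sketched as a TF-IDF + linear SVM pipeline, matching the SVM approach the abstract reports. The example complaints and department labels are invented for illustration, not Anonymo's data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical labelled complaints for three departments.
complaints = [
    "my bill was charged twice this month",
    "the invoice amount is wrong",
    "internet connection keeps dropping",
    "router has no signal since yesterday",
    "the agent on the hotline was rude",
    "staff did not respond to my request",
]
departments = ["billing", "billing", "technical", "technical", "service", "service"]

# Vectorize text with TF-IDF, then fit a linear SVM classifier.
model = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(complaints, departments)
routed = model.predict(["the invoice total is wrong"])  # route a new complaint
```

A production system would train on far more complaints per department and likely add the wanted/unwanted filter as a separate upstream classifier, as the abstract outlines.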
  • Diagnosing autism in low-income countries: Clinical record-based analysis in Sri Lanka (Open Access)
    (Wiley, 2022-06-16) Samarasinghe, P; Wickramarachchi, C; Peiris, H; Vance, P; Dahanayake, D. M. A.; Kulasekara, V; Nadeeshani, M
    Use of autism diagnosing standards in low-income countries (LICs) is restricted due to their high price and the unavailability of trained health professionals. Furthermore, these standards are heavily skewed towards developed countries, and LICs are underrepresented. Due to such constraints, many LICs use their own ways of assessing autism. This is the first retrospective study to analyze such local practices in Sri Lanka. The study was conducted at Ward 19B of Lady Ridgeway Hospital (LRH) using the clinical forms filled in for diagnosing ASD. In this study, 356 records were analyzed, of which 79.5% were boys, and the median age was 33 months. For each child, the clinical form together with the Childhood Autism Rating Scale (CARS) value was recorded. In this study, a Clinically Derived Autism Score (CDAS) is obtained from the clinical forms. A scatter plot and the Pearson product-moment correlation coefficient were used to benchmark CDAS against CARS, and CDAS was found to be moderately and positively correlated with CARS. To identify the significant variables, a logistic regression model was built on the clinically observed data; it identified “Eye Contact,” “Interaction with Others,” “Pointing,” “Flapping of Hands,” “Request for Needs,” “Rotate Wheels,” and “Line up Things” as the most significant variables in diagnosing autism. Based on these significant predictors, a classification tree was built. The pruned tree depicts a set of rules which could be used in similar clinical environments to screen for autism.
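The benchmarking step the study describes, correlating the clinically derived score against CARS with the Pearson product-moment coefficient, can be sketched as follows. The score pairs are made-up values, not data from the Lady Ridgeway Hospital records.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

cdas = [4, 7, 5, 9, 6, 8]        # hypothetical Clinically Derived Autism Scores
cars = [30, 38, 33, 41, 34, 39]  # hypothetical matching CARS values
r = pearson_r(cdas, cars)        # r close to +1 means strong positive agreement
```

In the study the resulting r indicated a moderate positive correlation; a value near +1, as in this toy sample, would indicate near-perfect agreement between the two scales.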
  • Facial emotion prediction through action units and deep learning (Embargo)
    (IEEE, 2020-12-10) Nadeeshani, M; Jayaweera, A; Samarasinghe, P
    With the recent advancements in deep learning techniques, attention has been given to training and testing facial emotions through highly complex deep learning systems. In this paper, we apply machine learning techniques that require fewer resources to produce comparable results for emotion prediction. As the underlying technique for emotion prediction in this research is based on the clinically recognized Facial Action Coding System (FACS), a further analysis is given of the contribution of each Action Unit (AU) to the predicted emotion. This analysis would complement, strengthen, and be a main resource for addressing many different health issues related to facial muscle movements.
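The FACS-based idea, mapping combinations of Action Units to emotions, can be illustrated with a toy rule-based predictor. The AU-to-emotion rules below follow commonly cited FACS combinations (e.g., AU6 + AU12 for happiness) and are a simplification, not the models trained in the paper.

```python
# Simplified AU combinations per emotion, loosely based on standard FACS mappings.
EMOTION_RULES = {
    "happiness": {6, 12},       # cheek raiser + lip corner puller
    "surprise": {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
    "sadness": {1, 4, 15},      # inner brow raiser + brow lowerer + lip corner depressor
}

def predict_emotion(active_aus):
    """Return the emotion whose AU rule set overlaps most with the active AUs."""
    best = max(EMOTION_RULES, key=lambda e: len(EMOTION_RULES[e] & active_aus))
    return best if EMOTION_RULES[best] & active_aus else "neutral"

print(predict_emotion({6, 12}))  # → happiness
```

The paper's contribution-analysis angle corresponds to inspecting, per prediction, which AUs in the winning rule set were actually active.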
  • Pubudu: Deep learning based screening and intervention of dyslexia, dysgraphia and dyscalculia (Embargo)
    (IEEE, 2019-12-18) Kariyawasam, R; Nadeeshani, M; Hamid, T; Subasinghe, I; Samarasinghe, P; Ratnayake, P
    Dyslexia, dysgraphia and dyscalculia are significant learning disabilities that affect around 10% of children in the world. Despite the advancement of technology literacy in the community, limited attention has been given to screening and intervention for these disabilities using mobile applications in Sri Lanka. In this research, one of the first deep learning and machine learning based mobile applications, named “Pubudu”, was developed for screening and intervention of dyslexia, dysgraphia and dyscalculia with support for local languages. In “Pubudu” we followed the clinical screening and diagnostic procedures recommended by health professionals. The screening of dyslexia, letter dysgraphia and numeric dysgraphia was carried out using deep neural networks, and the screening for dyscalculia was carried out using machine learning techniques. Intervention techniques are implemented using gamified environments. System testing was carried out with 50 differently abled children and 50 typical children. With the initial dataset, screening accuracies of 88%, 58% and 99% were achieved by the neural networks for letter dysgraphia, dyslexia and numeric dysgraphia respectively, whereas 90% accuracy was achieved for dyscalculia. Handwritten letters and numbers were fed as inputs to a CNN model for letter dysgraphia and numeric dysgraphia, while embedded audio clips of letter pronunciation were fed into a voice-recognition CNN model for dyslexia. “Pubudu” shows significant potential for screening and intervention of dyslexia, dysgraphia and dyscalculia in local languages, motivating children interactively, and could be an enabling app for many underprivileged children in Sri Lanka.
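The CNN front end that processes a handwritten letter can be sketched at the level of its basic operations, convolution, ReLU, and max-pooling, before any classification layer. The 6x6 "image" and the hand-written vertical-edge filter below are illustrative stand-ins for the app's real inputs and learned weights.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with one kernel."""
    h, w = kernel.shape
    out = np.zeros((image.shape[0] - h + 1, image.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max-pooling, truncating rows/cols that don't fit."""
    h, w = fmap.shape
    return fmap[:h - h % size, :w - w % size] \
        .reshape(h // size, size, w // size, size).max(axis=(1, 3))

image = np.zeros((6, 6))
image[:, 2] = 1.0                                      # a vertical pen stroke
edge_filter = np.array([[1, 0, -1]] * 3, dtype=float)  # vertical-edge detector
features = max_pool(np.maximum(conv2d(image, edge_filter), 0))  # conv -> ReLU -> pool
```

A trained model stacks several such layers with learned kernels and ends in a classifier that flags atypical letter formation; this sketch only shows why the stroke's edge survives into the pooled feature map.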
  • Voice Enabled Intelligent Programming Assistant (Embargo)
    (IEEE, 2022-12-09) Wataketiya, R; Chandrasiri, N; Kithsiri, R; Malwatta, H; Nadeeshani, M; Siriwardana, S
    In the modern era, where software development is of vital importance, software developers are challenged by conditions like Repetitive Strain Injury (RSI) which hinder their ability to work effectively. Furthermore, people with difficulties using their hands also find it challenging to program in the traditional manner. As a solution, coding with one’s voice has been experimented with, but current solutions lack interactivity and are harder to use and set up, leaving much room for improvement in this domain. In this research work, by using input classifier models with accuracies over 90%, intent classifiers with accuracies over 70%, code parsing, and various human-computer interaction techniques, we developed a conversationally interactive, programming-language-agnostic, easy-to-set-up and easy-to-use Voice Coding Assistant. This will potentially help a global audience of programmers to achieve their goals, improve productivity, and lead healthier lives. We have named the system thus developed “Venic”.
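The intent-classification stage such an assistant runs on a transcribed voice command can be sketched with a simple keyword-overlap scorer. The intent names and keyword sets below are invented examples, not Venic's actual grammar or trained classifiers.

```python
# Hypothetical intents and the keywords that signal them.
INTENTS = {
    "create_function": ("create", "function"),
    "declare_variable": ("declare", "variable"),
    "write_loop": ("loop",),
}

def classify_intent(transcript):
    """Pick the intent whose keywords best overlap the transcribed command."""
    words = set(transcript.lower().split())
    scores = {name: sum(k in words for k in keys) for name, keys in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(classify_intent("create a function called add"))  # → create_function
```

The paper's trained intent classifiers would replace this overlap heuristic with learned models, then hand the recognized intent to the code-parsing stage to emit actual source code.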

Copyright 2025 © SLIIT. All Rights Reserved.
