Browsing by Author "Samarasinghe, P"
Now showing 1 - 20 of 42
Publication Open Access
2D Pose Estimation based Child Action Recognition (Institute of Electrical and Electronics Engineers Inc., 2022-11)
Mohottala, S; Abeygunawardana, S; Samarasinghe, P; Kasthurirathna, D; Abhayaratne, C
We present a graph convolutional network with 2D pose estimation, applied for the first time to the child action recognition task, achieving on-par results with LRCN on a benchmark dataset of videos captured in unconstrained environments.

Publication Embargo
AI-based Behavioural Analyser for Interviews/Viva (IEEE, 2022-01-03)
Dissanayake, D. Y; Amalya, V; Dissanayaka, R; Lakshan, L; Samarasinghe, P; Nadeeshani, M; Samarasinghe, P
Globalization and technology have made virtual interviews the preferred mode of recruitment. Even though online interviews/viva have eliminated time, budgetary, and geographical barriers, the lack of insight into the interviewee's behavioural aspects is yet to be overcome. Therefore, this research proposes a machine-based approach for detecting and assessing changes in interviewees' behaviour and personality traits based on nonverbal cues. Additionally, a group analysis of other applicants, as well as a comparison between the interview and non-interview environments, is also obtained. To achieve this, we focus on the candidate's emotion, eye movement, smile, and head movements. The system was built using deep learning and machine learning models, which achieved accuracies over 85% for smile, eye gaze, emotion, and head pose analysis. Furthermore, several machine learning models were developed based on the analysed behavioural outcomes of the interviewee to identify the big five personality traits, with the Random Forest model yielding the highest accuracy of over 75%.
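The paper does not reproduce its features or training data here; purely as an illustration of the Random Forest step named above, a minimal sketch over invented behavioural cue scores (feature semantics and the labelling rule are hypothetical) might look like:

```python
# Hypothetical sketch: predicting a Big Five trait label from behavioural
# cue scores (smile, eye gaze, emotion, head pose). The data and the rule
# below are synthetic; the paper's actual features are not shown here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# 200 synthetic interviewees x 4 cue scores in [0, 1]
X = rng.random((200, 4))
# Invented rule linking cues to a binary trait label, for demo only
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

train_acc = model.score(X, y)
print(f"training accuracy: {train_acc:.2f}")
```

In practice the cue scores would come from the smile, gaze, emotion, and head pose analysers, and accuracy would be measured on held-out interviews rather than the training set.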
Our findings indicate that nonverbal behavioural cues can be utilized to determine personality traits.

Publication Embargo
Analysis and performance of CMA blind deconvolution for image restoration (Wiley Online Library, 2015-09)
Samarasinghe, P; Kennedy, R. A
In this paper we study the applicability of classical blind deconvolution methods such as the constant modulus algorithm (CMA) to blind adaptive image restoration. The requirements that the source be white, uniformly distributed, and zero mean, which yield satisfactory convergence in the data communication context, are revisited in the image restoration context, where a linear deblur kernel must be blindly adapted to compensate for an unknown image blur kernel with the objective of recovering a ground truth source image. Through analysis and performance studies, we show that the performance of CMA is adversely affected by the intrinsic spatial correlation of natural images and by any deviation of their distribution from being platykurtic. We also show that decorrelation techniques designed to overcome spatial correlation cannot be effectively applied to rectify CMA performance for blind adaptive image restoration.

Publication Embargo
Analyzing Payment Behaviors And Introducing An Optimal Credit Limit (IEEE, 2019-12-05)
Bandara, H. M. M. T; Samarasinghe, D. P; Manchanayake, S. M. A. M; Perera, L. P. J; Kumaradasa, K. C; Pemadasa, N; Samarasinghe, P
Identifying an optimal credit limit plays a vital role in the telecommunication industry, as the credit limit given to customers influences the market, revenue stabilization, and customer retention. Most of the time, service providers offer a fixed credit limit to all customers, which may cause customer dissatisfaction and loss of potential revenue. Therefore, it is essential to determine an optimal credit limit that maintains customer satisfaction while stabilizing the company revenue.
Clustering algorithms were used to group customers with similar payment and usage behaviors, so that the optimal credit limit derived for each cluster applies to all customers within that cluster. To identify the most suitable clustering algorithm, cluster validation statistics, namely the Silhouette and Dunn indexes, were used in this research. Based on the scores generated from these statistics, the KMeans algorithm was chosen. Furthermore, the quality of the KMeans clustering was evaluated using the Silhouette score and the Elbow method, and the optimal number of clusters was identified by those validation statistics. The significance of this approach is that the optimal credit limits generated by these clustering models suit the dynamic behaviors of customers, which in turn increases customer satisfaction while reducing customer churn and potential loss of revenue.

Publication Embargo
Auto Encoder Based Image Inpainting Model Using Multi Layer Latent Representations (IEEE, 2021-12-13)
Walgampaya, M. M. P. N; Kodikara, N. D; Samarasinghe, P
Image inpainting is used in computer vision to reconstruct images after the removal of unwanted objects and to repair damaged images in a visually acceptable manner, mainly in image editing. Although many algorithms have been developed over the years, these mainly address the reconstruction of small regions or objects with low structural complexity. With the advancement of machine learning techniques, innovative ideas have emerged, leading to mechanisms for reconstructing more complex structural variations in large regions of images. In this research, a considerably large region of a damaged image is inpainted using a convolutional auto encoder with an encoder-decoder combination, equipped with a novel approach to modify the latent space of the input image.
These latent representations are created from multiple layers of the encoder to form a Multi Layer Latent Representation (MLLR). The MLLR is fed to the decoder, which generates the image by applying the transpose convolution operation. The quality of the inpainted images generated by our model is compared with images generated by a model having a single latent representation without the MLLR. Peak Signal to Noise Ratio (PSNR) and the Structural Similarity Index Metric (SSIM) are used in the evaluation. Empirical analysis indicates that the model provides SSIM values over 0.9 for reconstructed images whose damaged areas cover 12% of the image surface.

Publication Embargo
Autoencoder based data clustering for identifying anomalous repetitive hand movements, and behavioral transition patterns in children (Springer Science and Business Media Deutschland GmbH, 2025-01-21)
Wedasingha, N; Samarasinghe, P; Senevirathna, L; Papandrea, M; Puiatti, A
The analysis of repetitive hand movements and behavioral transition patterns holds particular significance in detecting atypical behaviors in early child development. Early recognition of these behaviors holds immense promise for timely interventions, which can profoundly impact a child's well-being and future prospects. However, the scarcity of specialized medical professionals and limited facilities has made detecting these behaviors and unique patterns challenging using traditional manual methods. This highlights the necessity of automated tools to identify anomalous repetitive hand movements and behavioral transition patterns in children. Our study aimed to develop an automated model for the early identification of anomalous repetitive hand movements and the detection of unique behavioral patterns. Utilizing autoencoders, self-similarity matrices, and unsupervised clustering algorithms, we analyzed skeleton- and image-based features, repetition count, and frequency of repetitive child hand movements.
This approach aimed to distinguish between typical and atypical repetitive hand movements of varying speeds, addressing data limitations through dimension reduction. Additionally, we aimed to categorize behaviors into clusters beyond binary classification. Through experimentation on three datasets (Hand Movements in Wild, Updated Self-Stimulatory Behaviours, Autism Spectrum Disorder), our model effectively differentiated between typical and atypical hand movements, providing insights into behavioral transition patterns. This aids the medical community in understanding evolving behaviors in children. In conclusion, our research addresses the need for early detection of atypical behaviors through an automated model capable of discerning repetitive hand movement patterns. This innovation contributes to early intervention strategies for neurological conditions.

Publication Embargo
Automated Analysis of Children Emotion Expression Levels (IEEE, 2022-08-25)
Nadeeshani, N; Kalaichelvan, K; Karunasena, A; Samarasinghe, P
Despite the advancement of facial emotion expression analysis, less attention has been given to facial emotion expression and emotion level analysis in children. This paper presents three novel findings in the area of child emotion expression: identifying and validating the AU stimulation of children, automating the prediction of child emotion and emotion level, and age-wise analysis of child emotion expression. Emotion predictions obtained through deep learning methods such as 3DCNN were compared with machine learning approaches using EFA. AU stimulation results generated through EFA are consistent with the FACS. Through AU analysis, the paper shows that the emotion expressed in a child video or image, and its level, can be predicted with 91.04% accuracy through a KNN classifier. While the 3DCNN approach resulted in 82.64% accuracy, age-wise emotion prediction through CNN resulted in accuracies in the range of 60% to 86.6%.
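The KNN classification step named above is not detailed here; as a generic illustration only, a minimal nearest-neighbour sketch over invented Action Unit intensity vectors (the real EFA-based features are not reproduced) could look like:

```python
# Minimal k-nearest-neighbour sketch over hypothetical AU intensity vectors.
# The real feature extraction (EFA-based AU analysis) is not shown; the
# vectors and labels below are invented for illustration.
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Predict a label for x by majority vote among its k nearest neighbours."""
    dists = np.linalg.norm(X_train - x, axis=1)     # Euclidean distances
    nearest = np.argsort(dists)[:k]                 # indices of k closest
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy training set: AU intensity vectors labelled with an emotion class
X_train = np.array([[0.9, 0.8, 0.1], [0.8, 0.9, 0.2],   # class 0
                    [0.2, 0.1, 0.9], [0.1, 0.2, 0.8]])  # class 1
y_train = np.array([0, 0, 1, 1])

print(knn_predict(X_train, y_train, np.array([0.85, 0.85, 0.15])))  # → 0
```

The same scheme extends to emotion level prediction by labelling the training vectors with (emotion, level) classes instead of emotions alone.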
Though all approaches evidenced comparable results in emotion prediction, emotion level prediction through EFA and AU outperformed the 3DCNN and CNN approaches in all cases. Happy emotion prediction in age-wise emotion analysis resulted in higher accuracy than sad and disgust emotions. As emotion level prediction in age-wise analysis displays mixed results, further research on age-wise AU stimulation is encouraged.

Publication Embargo
Automated Child Social Attention Evaluation (IEEE, 2022-12-09)
Sandunika Wasala, K; Dhanawansa, V; Velayuthan, M; Samarasinghe, P
Providing proper care for children with attention difficulty disorders is crucial, and one way to ensure this is the early identification of these disorders. In Sri Lanka, a developing country, it is difficult to find clinics, clinical expertise, and other resources essential for diagnosis. The absence of these facilities risks the mental well-being of the child as well as access to help. Hence a need arises to develop an automated social attention evaluation system, which would serve as a first line of diagnosis and help parents or guardians secure the help the child requires from an early age. To the best of the authors' knowledge, no solution of this nature has been readily available to the Sri Lankan community so far. Keeping the low-income bracket of the country in mind, we propose a solution that can be easily deployed even on an inexpensive mobile or tablet device. It is difficult to perform these evaluations for children in the same settings as adults, as children are easily distracted; therefore, care must be taken to hold the child's attention throughout the evaluation process. In this research, we developed applications for children at different levels, each of which assesses the child's attention to social and non-social objects through a child-friendly game, as such games provide sufficient visual stimuli to hold the child's attention.
In this study we investigated the screen time spent by the child, the child's attention to different categories of images (High Autism Interest or Low Autism Interest images), and the switching patterns of attention between these images. Only typical children were evaluated in this research due to the pandemic situation and other internal problems in the country; in future work, the system will be tested and evaluated with atypical children.

Publication Embargo
The Automated Temporal Analysis of Gaze Following in a Visual Tracking Task (Springer, Cham, 2022-05-15)
Dhanawansa, V; Samarasinghe, P; Gardiner, B; Yogarajah, P; Karunasena, A
The attention assessment of an individual in following the motion of a target object provides valuable insights into understanding one's behavioural patterns in cognitive disorders, including Autism Spectrum Disorder (ASD). Existing frameworks often require dedicated devices for gaze capture, focus on stationary target objects, or fail to conduct a temporal analysis of the participant's response. To address this persisting research gap in the analysis of video capture of a visual tracking task, this paper proposes a novel framework to analyse the temporal relationship between the 3D head pose angles and object displacement, and demonstrates its validity via application on the EYEDIAP video dataset. The conducted multivariate time-series analysis is two-fold: the statistical correlation computes the similarity between the time series as an overall measure of attention, and the Dynamic Time Warping (DTW) algorithm aligns the two sequences and computes relevant temporal metrics. The temporal features of latency and maximum time of focus retention enabled an intragroup comparison of participant performance.
Further analysis disclosed valuable insights into the behavioural response of participants, including the superior response to horizontal motion of the target and the improvement in retention of focus on vertical motion over time, implying that following a vertical target initially proved a challenging task.

Publication Embargo
Automatic anemia identification through morphological image processing (IEEE, 2014-12-22)
Chandrasiri, S; Samarasinghe, P
Though blood cell manipulation has been an interesting research area for many years, most techniques presented in the literature produce poor segmentation results for images with highly overlapped blood cells. In this paper, we introduce a fully automatic, low cost, and accurate system to identify four common types of anemia and report on blood cell count. The results of our system agree well with manually processed results, achieving 99.678% accuracy for Red Blood Cell count. The diagnosis of Elliptocyte, Microcyte, Macrocyte, and Spherocyte anemia achieves accuracies in the range of 91% to 97%.

Publication Embargo
Child Head Gesture Classification through Transformers (Institute of Electrical and Electronics Engineers Inc., 2022-11-04)
Wedasingha, N; Samarasinghe, P; Singarathnam, D; Papandrea, M; Puiatti, A; Seneviratne, L
This paper proposes a transformer network for head pose classification (HPC) which outperforms the existing SoA for HPC. This robust model is then extended to overcome the limited child data challenge by applying transfer learning, resulting in an accuracy of 95.34% for child HPC in the wild.

Publication Embargo
Children's Behavior Analysis Through Smart Toys (Institute of Electrical and Electronics Engineers Inc., 2022-11)
Ramesha, M. D. D; Kavindi, M. V; Somawansa, R.P; Yadav, A; Samarasinghe, P; Wedasinghe, N; Jayasinghearachchi, V
Analyzing children's behavior is a major part of pediatric psychological studies.
We use the hand movements of the child to understand behavioral patterns with the help of IoT-based toys. © 2022 IEEE.

Publication Embargo
CIS: an automated criminal identification system (IEEE, 2018-12-21)
Rasanayagam, K; Kumarasiri, S. D. D. C; Tharuka, W. A. D. D; Samaranayake, N. T; Samarasinghe, P; Siriwardana, S. E. R
The identification of criminals and terrorists is a primary task for police, military, and security forces. Terrorist activities and crime rates have increased abnormally, and combating them is a challenging task for all security departments. Although these departments presently use the latest technologies, these are not as efficient and accurate as expected. This research study is based on the analysis of faces, emotions, ages, and genders to identify suspects. Face recognition and emotion, age, and gender identification are implemented using deep learning based CNN approaches. Suit identification is based on the LeNet architecture. In the implementation phase, the Keras deep learning library, which runs on top of TensorFlow, is used for classification. IMDb is the dataset used for the whole training purpose. Training is performed in the AWS cloud, which is a more powerful and capable approach than using local machines. Real-time video and images are used for the experiments. Results of the training and predictions are discussed in brief.

Publication Embargo
Comparative Study of Parameter Selection for Enhanced Edge Inference for a Multi-Output Regression model for Head Pose Estimation (Institute of Electrical and Electronics Engineers Inc., 2022-11-04)
Lindamulage, A; Kodagoda, N; Reyal, S; Samarasinghe, P; Yogarajah, P
Magnitude-based pruning is a technique used to optimise deep learning models for edge inference.
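Magnitude-based pruning, as named above, zeroes the weights with the smallest absolute values. A generic numpy sketch (the array shape and 75% sparsity target are illustrative, not the paper's actual model):

```python
# Generic magnitude-based pruning sketch: zero out the fraction `sparsity`
# of weights with the smallest absolute values. Shapes and the 75% target
# are illustrative only.
import numpy as np

def magnitude_prune(weights, sparsity):
    """Return a copy of `weights` with the smallest-|w| fraction set to 0."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)           # number of weights to remove
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest |w|
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 32))
w_pruned = magnitude_prune(w, 0.75)
print(f"zeroed: {np.mean(w_pruned == 0):.2%}")   # roughly 75%
```

In a deployment pipeline the pruned model would typically be fine-tuned afterwards to recover accuracy, and stored in a sparse format to realise the size reduction.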
We have achieved over 75% model size reduction with higher accuracy than the original multi-output regression model for head pose estimation.

Publication Embargo
Deep Learning Based Dog Behavioural Monitoring System (IEEE, 2020-12-03)
Boteju, W. J. M; Herath, H. M. K. S; Peiris, M. D. P; Wathsala, A. K. P. E; Samarasinghe, P; Weerasinghe, L
Dogs are among the most popular pets in the world, and pet owners are always concerned about the health and wellbeing of their pets. The activity levels of dogs vary based on breed and age. The main objective of this research is to track behavioral changes using image processing and machine learning concepts and to notify pet owners via a mobile application. Breed recognition is performed by applying deep learning to a user-uploaded video or photograph of the dog. This research mainly focuses on the walking, running, resting, and barking activity patterns of the dog. A surveillance camera and sensors were the main equipment for data collection; the audio feature of the surveillance camera is used to identify barking behavior. Dogs of different ages belonging to the Pomeranian and German Shepherd breeds were selected for this experiment. Transfer learning with ResNet50, Inception V3, and support vector machines was used to recognize and classify the activities of the dogs. The study achieved the following accuracy levels: breed recognition over 89%, walking pattern recognition 99.5%, resting pattern recognition 97%, and barking pattern recognition 60%. With these accuracy levels, the research was able to identify unusual behaviour in dogs.

Publication Embargo
Deep learning based flood prediction and relief optimization (IEEE, 2019-12-05)
Pathirana, D; Chandrasiri, L; Jayasekara, D; Dilmi, V; Samarasinghe, P; Pemadasa, N
Flood is a major natural disaster that occurs recurrently in Sri Lanka.
It is important to stay alert and make early preparations to avoid unnecessary risks that cause damage to both life and property. This project developed a flood assistance application, "DHARA", to support early flood preparation and the flood recovery process. The DHARA mobile application provides river water level prediction, safest evacuation route suggestions, and relevant warnings and alert notifications, while the web application provides affected area detection and victim and relief estimation to assist flood recovery management. The system is developed as a mobile application and a web application. A recurrent neural network architecture named Long Short Term Memory (LSTM), a Convolutional Neural Network (CNN), the A star (A*) path finding algorithm, and a clustering technique named Fuzzy Clustering are used in the development of the system. The system was verified with sample data for the "Wellampitiya" and "Kaduwela" areas on the "Kelani" river. The river water level prediction model successfully predicted the water level 4 hours in advance, and the verification results showed satisfactory agreement between predicted and real records with 85.4% accuracy.

Publication Embargo
Development of Low Resource Machine Learning Models for Child Cognitive Ability Assessments (IEEE, 2022-12-09)
Kahawanugoda, A; Gnanarathna, K; Meegoda, N; Monarawila, R; Samarasinghe, P; Lindamulage, A.G
Automated cognitive assessment tools are the state of the art in assessing cognitive development. Due to the low availability of resources, building automated cognitive ability evaluation tools is challenging. This study focuses on developing machine learning models using a limited amount of data to assess the Reasoning IQ, Knowledge IQ, Mental Chronometry, and Attention levels of Sinhala-speaking children between 7 and 9 years of age.
Our solution includes Sinhala speech recognition systems, image classification models, gaze estimation, blink count detection, and facial expression recognition models to evaluate the above four cognitive ability measuring factors. Open-domain speech recognition was used to evaluate complex Sinhala child verbal responses, and limited-vocabulary responses were assessed using an end-to-end speech recognition system, achieving 40.1% WER and 97.14% accuracy respectively. Additionally, the image classification model for handwritten Sinhala letter recognition and the two shape recognition models achieved 97%, 89%, and 99% accuracy. The linear regression model for attention level evaluation, which utilizes inputs from a combination of eye-gaze estimation, facial expression recognition, and blink rate detection models, achieved 85% accuracy.

Publication Open Access
Diagnosing autism in low-income countries: Clinical record-based analysis in Sri Lanka (Wiley, 2022-06-16)
Samarasinghe, P; Wickramarachchi, C; Peiris, H; Vance, P; Dahanayake, D. M. A.; Kulasekara, V; Nadeeshani, M
The use of autism diagnosing standards in low-income countries (LICs) is restricted due to their high price and the unavailability of trained health professionals. Furthermore, these standards are heavily skewed towards developed countries, and LICs are underrepresented. Due to such constraints, many LICs use their own ways of assessing autism. This is the first retrospective study to analyze such local practices in Sri Lanka. The study was conducted at Ward 19B of Lady Ridgeway Hospital (LRH) using the clinical forms filled in for diagnosing ASD. In this study, 356 records were analyzed, of which 79.5% were boys, and the median age was 33 months. For each child, the clinical form together with the Childhood Autism Rating Scale (CARS) value was recorded. In this study, a Clinically Derived Autism Score (CDAS) is obtained from the clinical forms.
A scatter plot and the Pearson product-moment correlation coefficient were used to benchmark CDAS against CARS, and CDAS was found to be positively and moderately correlated with CARS. To identify the significant variables, a logistic regression model was built on the clinically observed data; it identified "Eye Contact," "Interaction with Others," "Pointing," "Flapping of Hands," "Request for Needs," "Rotate Wheels," and "Line up Things" as the most significant variables in diagnosing autism. Based on these significant predictors, a classification tree was built. The pruned tree yields a set of rules which could be used in similar clinical environments to screen for autism.

Publication Open Access
Evaluation of Generative Adversarial Network Generated Super Resolution Images for Micro Expression Recognition (SciTePress, 2022-02-05)
Sharma, P; Coleman, S; Yogarajah, P; Taggart, L; Samarasinghe, P
Advancements in micro expression recognition techniques have accelerated at an exceptional rate in recent years. Envisaging a real environment, the recordings captured in everyday life are prime sources for many studies, but these data often suffer from poor quality. Consequently, this has opened up a new research direction involving low resolution micro expression images. Identifying a particular class of micro expression among several classes is extremely challenging due to less distinct inter-class discriminative features, and the low resolution of such images further diminishes the discriminative power of micro facial features, increasing the recognition challenge twofold. To address the issue of low resolution in facial micro expression analysis, this work proposes a novel approach that employs a super resolution technique using a Generative Adversarial Network and its variant. Additionally, Local Binary Patterns and Local Phase Quantization on three orthogonal planes are used for extracting facial micro features.
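The Local Binary Pattern descriptor named above can be sketched in its basic 3x3 form. This is the generic operator, not the paper's exact three-orthogonal-planes variant, which applies the same encoding on XY, XT, and YT slices of a video volume:

```python
# Basic 3x3 Local Binary Pattern sketch: each interior pixel is encoded as
# an 8-bit code by thresholding its 8 neighbours against the centre value.
# The toy patch below is invented for illustration.
import numpy as np

def lbp_3x3(image):
    """Compute the 8-bit LBP code for every interior pixel of `image`."""
    img = image.astype(np.int32)
    center = img[1:-1, 1:-1]
    # Neighbour offsets in clockwise order starting from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy : img.shape[0] - 1 + dy,
                        1 + dx : img.shape[1] - 1 + dx]
        codes |= (neighbour >= center).astype(np.int32) << bit
    return codes

patch = np.array([[10, 20, 30],
                  [40, 50, 60],
                  [70, 80, 90]])
print(lbp_3x3(patch))  # one interior pixel → [[120]]
```

A histogram of these codes over an image (or over each plane of a video volume, for the TOP variant) then serves as the texture feature vector.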
The overall performance is evaluated based on recognition accuracy obtained using a support vector machine, and image quality metrics are used for evaluating reconstruction performance. Low resolution images simulated from the SMIC-HS dataset are used for testing the proposed approach, and experimental results demonstrate its usefulness.

Publication Embargo
EyeDriver: Intelligent Driver Assistance System (IEEE, 2019-12-18)
Gayadeeptha, P; Baddewithana, T. P; Pannegama, K. V; Samarakkody, C. S; Samarasinghe, P; Siriwardana, S
"EyeDriver" is a driver assistance system that analyzes and provides real-time driver assistance data from four separate components: drowsiness detection and head pose estimation, over-speed detection, lane departure, and front collision avoidance. It is a compact product comprising a Raspberry Pi board, a USB camera module, a Pi camera, and a TFT LCD. Since "EyeDriver" is the first affordable aftermarket solution in Sri Lanka, it can be mounted and configured in any vehicle with little effort and without professional knowledge. The drowsiness detection and head pose estimation component monitors the driver's eyes and keeps track of whether the position of the driver's head is inconsistent or has deviated from the optimal position. The over-speed detection component analyzes the vehicle's actual speed against the road's recommended speed and issues a notification if the permitted speed is exceeded. The lane departure component assists in keeping the vehicle stable in the desired lane and notifies the driver when an intended lane change is made. The front collision avoidance component detects frontal obstacles on the road and provides pre-collision/proximity warning notifications, issued according to the vehicle speed and the distance between the obstacle and the vehicle.
The whole system is based on the Raspberry Pi 3 Model B+ board, and the implementation has been done using OpenCV and Python.
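Camera-based drowsiness detectors of this kind commonly use the eye aspect ratio (EAR) over six eye landmarks; the abstract does not state EyeDriver's exact formulation, so the sketch below is a generic illustration with invented landmark coordinates:

```python
# Generic eye-aspect-ratio (EAR) sketch, commonly used for blink/drowsiness
# detection from six eye landmarks p1..p6. The landmark coordinates below
# are invented for illustration; EyeDriver's exact method is not specified.
import numpy as np

def eye_aspect_ratio(landmarks):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|) for landmarks shaped (6, 2)."""
    p1, p2, p3, p4, p5, p6 = landmarks
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

open_eye = np.array([[0, 0], [1, 1], [2, 1], [3, 0], [2, -1], [1, -1]], float)
closed_eye = np.array([[0, 0], [1, 0.1], [2, 0.1], [3, 0],
                       [2, -0.1], [1, -0.1]], float)

# An EAR staying below a tuned threshold (e.g. ~0.2) for several consecutive
# frames is a common trigger for a drowsiness alert.
print(eye_aspect_ratio(open_eye), eye_aspect_ratio(closed_eye))
```

On a Raspberry Pi, the landmarks would come from a face landmark detector running on the camera feed, with the threshold and frame count tuned empirically.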
