Browsing by Author "Rupasinghe, L."
Now showing 1 - 9 of 9
Publication Embargo
AuthDNA: An Adaptive Authentication Service for any Identity Server (2019 1st International Conference on Advancements in Computing (ICAC), SLIIT, 2019-12-05)
De Silva, H.L.S.R.P.; Claude Wittebron, D.; Lahiru, A.M.R.; Madumadhavi, K.L.; Rupasinghe, L.; Abeywardena, K.Y.
Adaptive authentication configures two-factor or multi-factor authentication based on the user's risk profile. The security of credentials is one of the most pressing modern concerns, and multi-factor authentication was introduced as a solution; however, it has an adverse effect on user experience. This paper proposes a novel adaptive authentication mechanism that aims to eliminate the negative user experience of traditional multi-factor authentication systems. Adaptive authentication gathers information about each user and prevents fraudulent attempts by validating them against the created profiles. This approach increases usability and user-friendliness by invoking multi-factor authentication only when it is necessary, using a risk-based adaptive approach. Furthermore, the solution ensures security by authenticating the legitimate user through collective analysis of user properties, behavior, device, and network-related information. When building the user profile, the adaptive authentication system gathers and analyzes the user's typing behavior using a recurrent neural network (LSTM) with 95.55% accuracy, and mouse behavior using SVMs with 95.48% accuracy. In device-based authentication, a fingerprint is generated for the browser and the mobile device and is used in analyzing the accuracy of the authentication. Blacklisting and whitelisting of networks and the geo-velocity of the authentication request are captured under geolocation- and network-based authentication.
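The keystroke-timing features described above can be illustrated with a small sketch. The LSTM and SVM models themselves are not reproduced here; this pure-Python sketch only shows the kind of dwell/flight-time features such models consume and the risk-score step-up logic. Function names, the tolerance, and the threshold are illustrative assumptions, not taken from AuthDNA.

```python
# Sketch: keystroke-dynamics features of the kind fed to a typing-
# behavior model, plus a risk-based step-up decision. All names and
# thresholds are illustrative.

def timing_features(events):
    """Extract dwell and flight times (ms) from (key, down_t, up_t) events."""
    dwells = [up - down for _, down, up in events]
    flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwells, flights

def risk_score(sample, profile, tolerance=0.25):
    """Fraction of features deviating from the stored profile by more
    than `tolerance` relative error; higher means riskier."""
    deviations = [abs(s - p) / p > tolerance for s, p in zip(sample, profile)]
    return sum(deviations) / len(deviations)

def needs_mfa(score, threshold=0.5):
    """Step up to multi-factor authentication only when risk is high."""
    return score >= threshold
```

This mirrors the paper's idea that MFA is triggered adaptively rather than on every login.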
All the accuracy rates are fed to the risk-based authentication engine, which decides whether to re-authenticate the user or grant access to the system by analyzing the risk score generated for the authentication request.

Publication Embargo
An Automated Tool for Memory Forensics (2019 1st International Conference on Advancements in Computing (ICAC), SLIIT, 2019-12-05)
Murthaja, M.; Sahayanathan, B.; Munasinghe, A.N.T.S.; Uthayakumar, D.; Rupasinghe, L.; Senarathne, A.
Memory forensics has recently captured the world's attention. Currently, the Volatility framework is used to extract artifacts from a memory dump, and the extracted artifacts are then used to investigate and identify malicious processes in the dump. The investigation must be conducted manually, since the Volatility framework provides only the artifacts that exist in the memory dump. In this paper, we investigate the four predominant artifact domains in memory forensics (registry, DLLs, API calls, and network connections) to implement 'Malfore,' a system that helps automate the entire memory-forensics process. We use the Cuckoo sandbox to analyze malware samples and obtain memory dumps, and the Volatility framework to extract artifacts from each dump. The finalized dataset was evaluated using several machine learning algorithms, including RNNs.
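The DLL-artifact domain mentioned above can be sketched as a simple feature-extraction step: turning a Volatility-style DLL listing into a vector a classifier could consume. The DLL vocabulary and listing format below are illustrative assumptions, not taken from Malfore.

```python
# Sketch: a binary feature vector from a Volatility-style dlllist
# output. The "suspicious" vocabulary here is a tiny illustrative
# sample, not the paper's feature set.

SUSPICIOUS_DLLS = ["ws2_32.dll", "wininet.dll", "crypt32.dll", "ntdll.dll"]

def dll_feature_vector(dlllist_output):
    """1 if the process loaded the DLL, else 0, per vocabulary entry."""
    loaded = {line.strip().split("\\")[-1].lower()
              for line in dlllist_output.splitlines() if line.strip()}
    return [1 if dll in loaded else 0 for dll in SUSPICIOUS_DLLS]
```

A real pipeline would feed such vectors, over many processes, to the evaluated classifiers.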
The highest accuracy achieved was 98%, reached by a recurrent neural network model fitted to the data extracted from the DLL artifacts; a recurrent neural network model fitted to the network-connection artifacts reached 92% accuracy.

Publication Embargo
Autonomous Cyber AI for Anomaly Detection (2021 3rd International Conference on Advancements in Computing (ICAC), SLIIT, 2021-12-09)
Madhuvantha, K.A.N.; Hussain, M.H.; De Silva, H.W.D.T.; Liyanage, U.I.D.; Rupasinghe, L.; Liyanapathirana, C.
Available signature-based Intrusion Detection Systems (IDS) lack the performance to identify modern cyber threats and cannot defend against novel attacks; they are unable to detect zero-day or advanced malicious activities. To address this limitation of signature-based IDS, a possible solution is to adopt anomaly-based detection to identify the latest cyber threats, including zero-days. We initially focused on network intrusions. This paper discusses detecting network anomalies using AI-based technologies such as machine learning (ML) and natural language processing (NLP). In the proposed solution, network traffic logs and HTTP traffic data are taken as inputs using a mechanism called Beats. Once the relevant data has been extracted from the captured traffic, it is passed to the AI engine for further analysis. Algorithms such as Word2vec, Convolutional Neural Networks (CNN), Artificial Neural Networks (ANN), and autoencoders are used to conduct the threat analysis. HTTP DATASET CSIC 2010, NSL-KDD, and CICIDS are the benchmark datasets used alongside these algorithms to achieve high detection accuracy.
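The autoencoder-based detection mentioned above rests on a reconstruction-error idea: traffic that a model of normal behavior cannot reconstruct well is flagged as anomalous. In place of a trained autoencoder, this dependency-free sketch uses a mean profile of normal traffic as the stand-in reconstruction, so only the thresholding logic is shown; the profile, records, and threshold are all illustrative assumptions.

```python
# Sketch: reconstruction-error anomaly scoring. A real autoencoder
# learns a compressed representation; a mean profile of normal
# traffic stands in for it here.

def fit_profile(normal_records):
    """Per-feature mean over records of normal traffic."""
    n = len(normal_records)
    return [sum(r[i] for r in normal_records) / n
            for i in range(len(normal_records[0]))]

def reconstruction_error(record, profile):
    """Mean squared error between a record and its 'reconstruction'."""
    return sum((x - m) ** 2 for x, m in zip(record, profile)) / len(record)

def is_anomalous(record, profile, threshold):
    return reconstruction_error(record, profile) > threshold
```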
The output data is integrated and visualized using a Kibana dashboard, and a blockchain model is implemented to maintain and handle all the data.

Publication Embargo
Code Vulnerability Identification and Code Improvement using Advanced Machine Learning (2019 1st International Conference on Advancements in Computing (ICAC), SLIIT, 2019-12-05)
Ruggahakotuwa, L.; Rupasinghe, L.; Abeygunawardhana, P.
Cyber-attacks are now commonplace. Misconfigurations in source code can result in security vulnerabilities that encourage attackers to exploit them and compromise the system. This paper aims to explore various mechanisms for automating the detection and correction of vulnerabilities in source code. Static and dynamic analysis, together with machine learning, deep learning, and neural network techniques, enhance the automation of the detection and correction processes. The paper systematically presents the various methods and research efforts for detecting vulnerabilities in source code, starting with what a software vulnerability is and how it is exploited, then covering existing vulnerability detection methods, correction methods, and the leading research efforts worldwide relevant to this area. A plugin will be developed that is capable of intelligently and efficiently detecting vulnerable source code segments and accurately correcting the source code at the development stage.

Publication Embargo
Comparative analysis of the application of Deep Learning techniques for Forex Rate prediction (2019 1st International Conference on Advancements in Computing (ICAC), SLIIT, 2019-12-05)
Aryal, S.; Nadarajah, D.; Kasthurirathna, D.; Rupasinghe, L.; Jayawardena, C.
Forecasting financial time series is an extensive field of study.
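The vulnerability-detection plugin described in the code-vulnerability entry above typically starts from static pattern checks before any ML-based ranking. This sketch shows such a starting point; the rule set is a tiny illustrative sample, not the paper's model.

```python
# Sketch: a minimal pattern-based static check of the kind a
# vulnerability-detection plugin might run first. Rules are
# illustrative only.

RULES = {
    "eval(": "use of eval() on dynamic input",
    "pickle.loads(": "unsafe deserialization",
    "subprocess.call(": "possible command injection",
}

def scan_source(source):
    """Return (line_number, finding) pairs for lines matching a rule."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES.items():
            if pattern in line:
                findings.append((lineno, message))
    return findings
```

An ML layer would then score or filter these raw findings, as the paper proposes.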
Although econometric models, traditional machine learning models, artificial neural networks, and deep learning models have all been used to predict financial time series, deep learning models have only recently been employed for this task. In this paper, three deep learning models, Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), and Temporal Convolutional Network (TCN), are used to predict the United States Dollar (USD) to Sri Lankan Rupee (LKR) exchange rate, and the accuracies of the models are compared. The results indicate the superiority of the CNN model over the others. We conclude that CNN-based models perform best in financial time series prediction.

Publication Embargo
Deepfake Audio Detection: A Deep Learning Based Solution for Group Conversations (2020 2nd International Conference on Advancements in Computing (ICAC), SLIIT, 2020-12-10)
Wijethunga, R.L.M.A.P.C.; Matheesha, D.M.K.; Al Noman, A.; De Silva, K.H.V.T.A.; Tissera, M.; Rupasinghe, L.
Recent advancements in deep learning and related technologies have led to improvements in areas such as computer vision, bio-informatics, and speech recognition. This research focuses on the problems of synthetic speech and speaker diarization. Developments in audio have produced deep learning models capable of replicating natural-sounding voices, known as text-to-speech (TTS) systems. This technology can be manipulated for malicious purposes such as deepfakes, impersonation, or spoofing attacks. We propose a system capable of distinguishing between real and synthetic speech in group conversations. We built deep neural network models and integrated them into a single solution using several datasets, including but not limited to UrbanSound8K (5.6 GB), Conversational (12.2 GB), AMI-Corpus (5 GB), and FakeOrReal (4 GB). Our proposed approach consists of four main components.
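The three forecasting models compared in the Forex entry above (LSTM, CNN, TCN) all consume the exchange-rate series as fixed-length supervised windows. A minimal sketch of that shared framing, with an illustrative window length:

```python
# Sketch: sliding-window framing of a rate series for sequence models.
# The lookback length is an illustrative choice, not the paper's.

def make_windows(series, lookback=3):
    """Each sample: `lookback` past rates as input, the next rate as target."""
    X = [series[i:i + lookback] for i in range(len(series) - lookback)]
    y = [series[i + lookback] for i in range(len(series) - lookback)]
    return X, y
```

Any of the three architectures would then be fitted to (X, y) pairs built this way.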
The speech-denoising component cleans and preprocesses the audio using Multilayer Perceptron and Convolutional Neural Network architectures, with 93% and 94% accuracy respectively. Speaker diarization was implemented using two approaches: natural language processing for text conversion with 93% accuracy, and a Recurrent Neural Network model for speaker labeling with 80% accuracy and a 0.52 Diarization Error Rate. The final component distinguishes between real and fake audio using a CNN architecture with 94% accuracy. With these findings, this research contributes substantially to the domain of speech analysis.

Publication Embargo
Human and Organizational Threat Profiling Using Machine Learning (2021 3rd International Conference on Advancements in Computing (ICAC), SLIIT, 2021-12-09)
Kumara, P.M.I.N.; Dananjaya, K.G.S.; Amarasena, N.P.N.H.; Pinto, H.M.S.; Yapa, K.; Rupasinghe, L.
The use of online social networking sites is increasing rapidly, but so is the growth of various ongoing social media threats such as fake profiles, cyberbullying, and fake news. Many important observations can be made to extend existing knowledge about social media threats by studying the information exchanged by the public and by organizations. One direction is to study human behavior and personality traits using public user-profile data, together with organizational threat classification. This research aims to build a system that predicts personality-related behaviors on social media profiles based on the OCEAN model, combined with company-level threat profiling. The data collected about everyone in the consumer's friend list is analyzed to identify threatening behaviors, which are classified according to the OCEAN model to generate a threat report. On the organizational side, log data is gathered from the network to protect against malware; logs received from the endpoints are collected by collectors.
Those logs are forwarded to our filter, which is built around a custom Machine Learning Algorithm (MLA) designed specifically for this purpose. The MLA classifies and categorizes threats according to their severity, providing a filtered-log protection system against malware and other threats.

Publication Embargo
An Integrated Framework for Predicting Health Based on Sensor Data Using Machine Learning (2020 2nd International Conference on Advancements in Computing (ICAC), SLIIT, 2020-12-10)
Jayaweera, K.N.; Kallora, K.M.C.; Subasinghe, N.A.C.K.; Rupasinghe, L.; Liyanapathirana, C.
According to recent studies, the majority of the world's population shows a lack of concern for their health, and as a consequence the rate of non-communicable disease has increased dramatically. Among these diseases, heart disease has caused the most catastrophic situations. Apart from a busy lifestyle, studies also show that stress is another contributing factor. The focus of our research is therefore to provide a user-friendly health monitoring system that causes minimal disturbance to its users. While many studies have focused on predicting health, very few have focused on usability. The objective of our research is to predict the possibility of cardiac arrest and the presence of stress in real time using a wearable device prototype. The system uses biometric signals obtained from the photoplethysmogram sensor embedded in the wearable device to perform real-time predictions. We trained three models using random forest, k-nearest neighbor, and logistic regression classification algorithms to predict sudden cardiac arrest with accuracies of 99.93%, 99.10%, and 94.47%, respectively. Further, we trained three additional models to predict stress using the same algorithms, with accuracies of 99.87%, 96.83%, and 65.00%, respectively.
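A heart-rate feature of the kind derived from the wearable's photoplethysmogram (PPG) signal, before any classifier sees it, can be sketched as peak counting over a sampling window. The peak-detection rule here is a deliberately simple illustrative stand-in, not the framework's actual preprocessing.

```python
# Sketch: estimating beats per minute from a PPG window. The
# three-point peak rule is an illustrative simplification.

def count_peaks(ppg):
    """A sample is a peak if it strictly exceeds both neighbours."""
    return sum(1 for i in range(1, len(ppg) - 1)
               if ppg[i] > ppg[i - 1] and ppg[i] > ppg[i + 1])

def heart_rate_bpm(ppg, sample_rate_hz):
    """Beats per minute from the peak count over the window duration."""
    seconds = len(ppg) / sample_rate_hz
    return count_peaks(ppg) * 60.0 / seconds
```

Features like this would feed the random forest, k-NN, and logistic regression models the abstract evaluates.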
Thus, the results of this study show that an integrated framework capable of predicting different health-related conditions from data collected by wearable sensors is feasible.

Publication Embargo
Price Optimisation and Management (2021 3rd International Conference on Advancements in Computing (ICAC), SLIIT, 2021-12-09)
Shafkhan, M.T.M.; Jayasundara, P.R.S.S.; Kariyapperuma, K.A.D.R.L.; Lakruwan, H.P.S.; Rupasinghe, L.
One of the most crucial decisions a company makes is its pricing strategy. When setting prices, a company must consider present, future, and past pricing, which enables it to make sound judgments. In marketing products, price is the only factor that creates income; everything else is a cost. Guessing at product pricing is a little like throwing darts blindfolded: some will hit something, but probably not the dartboard. Large-scale enterprises throughout the world still depend on Excel sheets maintained by numerous staff, or on expensive pricing solutions. Expensive pricing systems are difficult to implement for medium and large enterprises in countries like Sri Lanka. Our goal in this research is to propose an affordable, efficient, easy-to-use, and secure solution that can be implemented in medium and large enterprises in Sri Lanka. Manufacturing cost, shipping cost, competitor analysis, and customer behaviour are taken as the root factors in deciding the price. The proposed solution includes machine learning components fed with historical data on these four factors to predict the manufacturing cost, shipping cost, competitor price, and customer behavioural factors on a given date, as well as an optimisation component that creates opportunities to minimise cost and maximise profit. The four machine learning components are implemented using LSTM, ARIMA, Facebook Prophet, and a clustering model.
The optimisation model is implemented using linear programming to optimise across these four components. A user-friendly web application, built on the MEAN stack with a microservice architecture, provides access to the system.
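The profit-maximisation step described above takes the four predicted quantities (manufacturing cost, shipping cost, competitor price, and demand behaviour) and chooses a price. The paper solves this with linear programming; the dependency-free sketch below enumerates candidate prices instead, and the linear demand model and all numbers are illustrative assumptions.

```python
# Sketch: choosing a price that maximises (price - unit cost) * demand,
# capped at the predicted competitor price. Enumeration stands in for
# the paper's linear-programming solver.

def optimise_price(mfg_cost, ship_cost, competitor_price,
                   base_demand, demand_slope, step=1.0):
    """Grid-search the price between unit cost and competitor price,
    assuming a linear demand curve: demand = base - slope * price."""
    unit_cost = mfg_cost + ship_cost
    best_price, best_profit = None, float("-inf")
    price = unit_cost
    while price <= competitor_price:
        demand = max(0.0, base_demand - demand_slope * price)
        profit = (price - unit_cost) * demand
        if profit > best_profit:
            best_price, best_profit = price, profit
        price += step
    return best_price, best_profit
```

A production system would replace the loop with an LP solver and feed it the four model outputs per date.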
