PhD
Permanent URI for this community: https://rda.sliit.lk/handle/123456789/4219
Theses and Dissertations of the Sri Lanka Institute of Information Technology (SLIIT) are licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Publication Open Access “AccessBIM” - A Model of Environmental Characteristics for Vision Impaired Indoor Navigation and Way Finding (Curtin University, 2018-06) Jayakody, A

The navigation of indoor and outdoor environments plays a pivotal role in the daily routine of humans. Navigation systems that provide path planning and exploration services for outdoor environments are readily available, while navigation within a building is still a challenge due to limited information availability and the poor quality of GPS signals, which makes it difficult to capture characteristics of the indoor environment. Consequently, the use of GPS tracking devices for real-time map generation is not feasible. Indoor navigation is particularly difficult for people with vision impairment. According to the factsheet of the World Health Organization (WHO) as of October 2017, over 253 million people are estimated to be vision impaired: 36 million are blind, and 217 million have low vision. Currently, most blind and vision-impaired individuals use the white cane as an assistive tool and are often accompanied by caretakers or voluntary helpers. Most modern indoor environments consist of complex architectural structures with varying arrangements of physical objects. Since retrieving indoor location information has been challenging for the vision impaired, it would be helpful if spatial information on doors, walls, and staircases were made available. To address this problem, this thesis presents an improved schema design, an Accessible Building Information Model (AccessBIM), which can be used to generate an indoor map that instructs vision-impaired individuals in navigation through the classification of real-world objects and their locations.
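The abstract does not reproduce the schema itself; as a rough illustration of the idea of classifying indoor objects and their locations relationally, a minimal sqlite3 sketch (table and column names are hypothetical, not the actual AccessBIM schema):

```python
import sqlite3

# Hypothetical, simplified schema: classified indoor objects with floor-plan coordinates.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE indoor_object (
        id INTEGER PRIMARY KEY,
        building TEXT NOT NULL,
        floor INTEGER NOT NULL,
        kind TEXT NOT NULL,      -- e.g. 'door', 'wall', 'staircase'
        x REAL NOT NULL,         -- position on the floor plan, metres
        y REAL NOT NULL
    )
""")
conn.executemany(
    "INSERT INTO indoor_object (building, floor, kind, x, y) VALUES (?, ?, ?, ?, ?)",
    [("Library", 1, "door", 2.5, 0.0),
     ("Library", 1, "staircase", 10.0, 4.2)],
)

# A navigation-style query: all doors on a given floor.
doors = conn.execute(
    "SELECT x, y FROM indoor_object WHERE building = ? AND floor = ? AND kind = 'door'",
    ("Library", 1),
).fetchall()
print(doors)  # [(2.5, 0.0)]
```

A real indoor-navigation schema would add semantic and topological information (connectivity between spaces, accessibility attributes); this sketch shows only the object/location core.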
AccessBIM is a real-time relational database, which acts as the main component of the central system implemented to manipulate crowdsourced data such as floor plans and architectural data, along with semantic information about the built environment. The AccessBIM database stores information on the indoor arrangement of objects within buildings to facilitate the exchange and interoperability of real-time information. The database is equipped with an optimization algorithm that reduces query execution time through indexing, query rewriting, schema redesign, and a memory optimization technique introduced as “BIMcache”. To create a real-time map, AccessBIM manipulates crowdsourced data from “smart devices” or AccessBIM users. The collection and storage of crowdsourced data, database optimization, API functions, and the map construction algorithms were tested using a simulated test engine. The AccessBIM framework has the potential to play an integral role in assistive technologies related to localization and mapping, thus significantly improving the independence and quality of life of people with vision impairment while also decreasing the cost to the community of support workers.

Publication Open Access An Autonomous Multiple Robot Registration and Control System: Design, Implementation and Performance Evaluation (2022-11) Rajapaksha, U. U. S. K

ROS is the most prominent middleware used in robotic application development, and this research depends mainly on ROS technologies because most researchers currently use ROS as middleware in their projects. Controlling robots through a Web interface is essential because, in some instances, users may not be able to communicate with a robot directly owing to adverse conditions in the environment where the robots are placed. Therefore, we have developed a Web interface to control all robots through the Internet.
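The abstract does not specify how a Web command is mapped onto heterogeneous robots; one common pattern is a per-robot registry that translates a uniform high-level command into each robot's own topic. A minimal pure-Python sketch of that idea (all robot and topic names are invented; no ROS dependency):

```python
# Hypothetical registry: each registered robot advertises its own topic names.
ROBOT_REGISTRY = {
    "turtlebot_1": {"move": "/tb1/cmd_vel", "stop": "/tb1/cmd_vel"},
    "arm_robot_2": {"move": "/arm2/base_controller/command", "stop": "/arm2/halt"},
}

def dispatch(command: str) -> dict:
    """Translate one high-level Web command into per-robot (topic, command) pairs."""
    plan = {}
    for robot, topics in ROBOT_REGISTRY.items():
        if command not in topics:
            raise ValueError(f"{robot} cannot handle {command!r}")
        plan[robot] = {"topic": topics[command], "command": command}
    return plan

plan = dispatch("move")
print(plan["arm_robot_2"]["topic"])  # /arm2/base_controller/command
```

In the thesis the registry is populated autonomously by the Robot Registration Engine; here it is hard-coded purely for illustration.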
However, the ROS topics, nodes, and message formats used to subscribe and publish can differ from one robot to another when multiple robots operate in the same environment. Therefore, when a user expresses high-level instructions through the Web interface, all robots must understand the instructions uniformly and take the necessary actions accordingly, regardless of each robot's internal software and hardware implementation. The first contribution of this research is an algorithm to register all robots autonomously through the Web interface based on the main components of ROS technology; a Robot Registration Engine was developed with algorithms to complete this task. The second contribution is the identification of the relevant ROS topics and nodes for each action when a user command is given through the Web interface; a ROS topic identification algorithm was developed successfully. The third contribution is an evaluation of system performance under different conditions, deriving and validating equations for the delay in response time through the Web interface. We conducted several experiments to evaluate the system's delays in response time, and a worst-case analysis was completed for all algorithms using Big O notation. Users and researchers can utilize the Robot Registration Algorithm and the ROS Topic Identification Algorithm to work with multiple robots through the Web interface. All algorithms were implemented successfully in a simulated environment in Gazebo.

Publication Open Access Bio-oligomers as antibacterial agents and strategies for bacterial detection (The University of Edinburgh, 2014-11-28) Kasturiarachchi, J. C

In this thesis I examined the potential of bio-oligomers such as peptoids, peptides and aptamers as therapeutic and diagnostic entities. Therapeutic bio-oligomers: a series of peptoid analogues was designed and synthesised using solid-phase synthesis.
These peptoids were subjected to biological evaluation to determine the structure-activity relationships that define their antimicrobial activity. In total, 13 peptoids were synthesised. Of these, only one, Tosyl-Octyl-Peptoid (TOP), demonstrated significant broad-spectrum bactericidal activity. TOP kills bacteria under both dividing and non-dividing conditions. The Minimum Inhibitory Concentration (MIC) values of TOP for S. epidermidis, E. coli and Klebsiella were 20 µM, whereas those for methicillin-resistant Staphylococcus aureus (MRSA) and methicillin-sensitive Staphylococcus aureus (MSSA) were 40 µM. The highest MIC value was observed for Pseudomonas aeruginosa (PAO1) at 80 µM. The selectivity ratio (SR), or therapeutic index (TI), was calculated by dividing the 10% haemolysis activity (5 mM) by the median MIC (50 µM), yielding a TI for TOP of 100, well above previously reported peptidomimetic TIs of around 20. TOP demonstrates selective bacterial killing in co-culture systems and intracellular bactericidal activity. Diagnostic bio-oligomers: in the second part of the thesis, I investigated aptamer- and peptide-based molecular probes to detect MRSA. As well as screening aptamer and peptide probes against whole MRSA, I over-expressed and purified the PBP2A protein, which was then used as a target for aptamer and peptide probes to detect MRSA. Two different aptamer libraries were initially screened for utility, and in-vitro conditions for SELEX were optimised. Biopanning with phage-derived peptides was also performed. Target sequences from both methods were identified and chemically synthesised. Evaluation of fluorescently labelled sequences with flow cytometry and confocal imaging showed no specificity for MRSA detection with either method.
The bio-oligomers and the in-vitro selection methodology require further refinement to improve their diagnostic utility.

Publication Embargo Characterization of winged bean (Psophocarpus tetragonolobus (L.) DC.) accessions using isoenzyme profiles and morphological characteristics (Electrophoresis, starch gel …) (The Pennsylvania State University, 1986) Peiris, C. N

The winged bean, a tropical legume, is rich in protein, minerals, vitamins, and carbohydrates. It is unique among leguminous plants in that every part of the plant except the roots can be eaten. There are over a thousand accessions and cultivars of winged bean; therefore, the use of classical methods of describing and identifying genotypes based on morphological characteristics has become increasingly difficult.

Publication Open Access Climate Policy Assessment for Climate Change Mitigation and Carbon Neutrality: A Case Study of Sri Lanka (Department of Mechanical Engineering, Sri Lanka Institute of Information Technology, 2023-12) Fernando, G. L

Climate change is one of the most significant challenges faced by mankind in the 21st century. Human activities, particularly in the energy supply and demand sectors, are the primary cause of the increase in greenhouse gas (GHG) emissions. The Paris Agreement's climate goal aims to limit global warming to well below 2°C above pre-industrial levels, with a specific target of limiting the temperature rise to 1.5°C by the end of this century. There has therefore been an emphasis on achieving large-scale reductions in GHG emissions from the energy sector. After the initial stocktake in 2023, it is apparent that global emission pathways are not making the expected progress toward the Paris Agreement targets, and swift actions are necessary to readjust these pathways. Consequently, the reduction of greenhouse gas emissions in developing economies will be pivotal in reaching the desired global temperature targets.
This study examines the case of Sri Lanka, a developing economy with low carbon intensity, to explore the role of similar economies in achieving the Paris targets. Sri Lanka had a population of 22 million and a GDP of 84.5 billion USD in 2021. Predicted economic growth could result in a rapid increase in the country's energy demand, and consequently in fossil fuel use and carbon emissions. Through its Nationally Determined Contributions, Sri Lanka has pledged to mitigate 14.5% of its GHG emissions, conditionally and unconditionally, by 2030 compared to its 2021 levels. It also aspires to ambitious targets such as carbon neutrality by 2050, and aims to increase the share of renewable energy in electricity generation from 45% in 2021 to 70% in 2030. However, it needs a pragmatic plan to facilitate a smooth transition towards reducing these emissions, and a systematic analysis of different policy options and scenarios is required to determine a suitable policy for reducing GHG emissions. Energy-Economic-Environmental models can provide the basis for such analysis, but the development of such models for Sri Lanka, and the scientific studies based on them, are still at an early stage. This thesis covers the analysis of different scenarios for climate change mitigation using an energy-economic-environmental model in the case of a developing economy with low carbon intensity. The scientific questions to be answered in this study are: 1) How is the energy-environmental system of a developing economy modeled, considering both energy consumption and supply sectors? 2) What is the impact of carbon taxes on reducing carbon emissions? 3) How can energy, economic, and environmental models be used to analyse climate futures? 4) What scenarios will lead the country to carbon neutrality?
5) How do efficient technologies, renewable energy sources, cleaner fuels, nuclear energy, carbon capture and storage technologies, and green hydrogen for power generation reduce emissions? 6) What are the marginal abatement costs of CO2 reduction for the proposed emission mitigation actions? 7) What impacts do low-carbon scenarios have on energy security? 8) What are the other co-benefits of CO2 mitigation? The first objective of this study is to develop a bottom-up energy system model for a developing economy with low energy intensity. Sri Lanka was chosen as a case study, considering economic and demographic factors, to assess energy use and its environmental implications over a given period. The model comprehensively assesses the integrated reference energy system, encompassing the energy supply and demand sectors over a planning horizon. It uses a recursive dynamic cost optimization approach, minimizing the energy system's total cost each year during the planning period from 2015 to 2050. The AIM/Enduse model, part of the Asia-Pacific Integrated Modeling family, was used to develop the energy system model for the Sri Lankan energy sector. It considers a Business-As-Usual (BAU) scenario and other scenarios for achieving large-scale reductions in CO2 emissions. The BAU scenario assumes that existing economic, demographic, and social trends, together with current policy measures across all five energy sectors, continue throughout the modeling period. According to the model results, the total primary energy supply in the BAU scenario is expected to increase almost threefold, from 11 Mtoe in 2015 to 34 Mtoe in 2050. The CO2 emissions associated with energy use will increase from 19 Mt in 2015 to 66 Mt in 2050. The increase in CO2 emissions is attributed to the use of fossil fuels, whose share is expected to increase from 53% in 2015 to 66% in 2050.
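The "almost threefold" growth in primary energy supply quoted above corresponds to a modest compound annual rate, which is easy to verify (a simple arithmetic check, not part of the AIM/Enduse model):

```python
# BAU primary energy supply: 11 Mtoe (2015) -> 34 Mtoe (2050).
start, end, years = 11.0, 34.0, 2050 - 2015
ratio = end / start                           # ~3.09, i.e. "almost threefold"
cagr = ratio ** (1 / years) - 1               # compound annual growth rate
print(round(ratio, 2), round(cagr * 100, 2))  # 3.09 3.28
```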
The results indicate that without policy intervention, the share of fossil fuels will continue to increase, resulting in a significant increase in CO2 emissions. The second objective of this study is to examine the impact of carbon taxes on achieving large-scale emission reductions in the energy sector. The study employs five carbon tax trajectories proposed by the MESSAGE-GLOBIOM Integrated Assessment Consortium, corresponding to five levels of global mean temperature change. These targets would be achieved by imposing carbon tax trajectories ranging between 2.3 US$/tCO2 and 436 US$/tCO2 in 2050. The reference scenario for Sri Lanka was assumed to follow the middle-of-the-road pathway defined in the Shared Socioeconomic Pathways. According to the model results, CO2 emissions at these carbon tax levels could be reduced by 25% to 60% by 2050, with additional benefits such as a reduced primary energy supply and lower final energy consumption by 2050. Nevertheless, the findings imply that aggressive carbon mitigation measures and taxes are required to achieve significant emission reductions in developing economies. Another main objective of this study was to develop scenarios for achieving carbon neutrality by 2050. Four countermeasure scenarios were defined, namely plausible, ambitious, challenging, and stringent, according to the level of intervention on the energy demand and supply sides. These scenarios considered different technology options and policy measures, such as the diffusion of efficient technologies, the availability of renewable energy sources, the use of cleaner fuels, nuclear energy, carbon capture and storage technologies, and green hydrogen for power generation.
The results revealed that a stringent scenario, which includes aggressive policy measures in both the energy supply and demand sectors, the use of renewable energy for power generation, the diffusion of efficient end-use devices, fuel switching, and an increasing share of electric cars and public transport, achieves a near carbon-neutral outcome at a carbon tax trajectory of 32 US$/tCO2 in 2020 rising to 562 US$/tCO2 in 2050. Under this near carbon-neutral scenario, net energy import dependency decreases to 13% in 2050, compared to 65% in the BAU scenario, a positive outcome from the energy security perspective. The fourth objective of the study was to develop future emission pathways and estimate the energy and environmental implications of each using the model. The fifth IPCC assessment report analysed the energy system and related emissions under five Shared Socioeconomic Pathways representing possible climate futures: SSP1 (Sustainability), SSP2 (Middle of the Road), SSP3 (Regional Rivalry), SSP4 (Inequality), and SSP5 (Fossil-fueled Development). The findings reveal that SSP5, which reflects rapid economic growth, higher utilisation, inefficient and traditional end-use technologies, firm reliance on abundant fossil fuel resources, and lower awareness of sustainability and the environment, yields the highest primary energy supply of 44.6 Mtoe in 2050. The lowest primary energy supply, 26.5 Mtoe in 2050, is recorded under SSP4. CO2 emissions in 2050 are highest under SSP5, at 107 Mt, and lowest under SSP1, at 24 Mt. Of all the scenarios, SSP5 has the highest energy intensity, 6 MJ/US$, and a carbon intensity of 0.25 kg/US$ in 2050. SSP1, which characterises a sustainable pathway, results in a primary energy consumption of 27 Mtoe and CO2 emissions of 17 Mt in 2050.
The study thus developed different climate futures that provide valuable insights into how energy use and emissions may change. The final objective is to analyse the co-benefits of carbon reduction and to estimate the marginal abatement cost of CO2 reduction. The co-benefits analysed include reductions in primary energy supply, net energy import dependency, and the levels of local air pollutants (NOx and SO2), together with improved energy security. Six indices collectively define the country's energy security: the diversity of primary energy demand, non-carbon fuel share, renewable fuel share, oil share, primary energy intensity, and carbon intensity. Mitigating 90% of CO2 emissions relative to BAU results in a net energy import dependency of 21% and a Shannon index of 1.8 for the diversity of primary energy demand, indicating a high diversity of energy types. Meeting this reduction target would result in a carbon intensity of 0.01 kg/US$ and an energy intensity of 2.4 MJ/US$ in 2050, representing reductions of approximately 90% and 80%, respectively, compared to 2015 levels. The study also analysed the economic costs of reducing CO2 emissions and developed sector-level marginal abatement cost curves, which play a critical role in choosing policy options for reducing CO2 emissions. Five countermeasure scenarios, with CO2 emission reduction targets between 10% and 90%, were used to develop the marginal abatement cost curves. According to the sectoral marginal abatement cost curves, the most economical CO2 mitigation options are the introduction of efficient and hybrid road vehicles, the use of efficient residential technologies such as refrigerators and air conditioners, and biomass for residential cooking.
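The Shannon index cited above as a diversity measure for primary energy demand is simply the entropy of the fuel-share mix; a short sketch with an invented, purely illustrative fuel mix:

```python
import math

def shannon_index(shares):
    """Shannon diversity index H = -sum(p_i * ln p_i) over fuel shares p_i."""
    assert abs(sum(shares) - 1.0) < 1e-9
    return -sum(p * math.log(p) for p in shares if p > 0)

# Illustrative mix only: 6 fuel types with fairly even shares give H close
# to the maximum ln(6) ~ 1.79, i.e. a highly diverse supply.
mix = [0.20, 0.20, 0.18, 0.16, 0.14, 0.12]
print(round(shannon_index(mix), 2))  # 1.78
```

A mix dominated by a single fuel would push H toward 0, which is why the reported value of 1.8 signals high diversity.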
The highest mitigation potential lies in introducing electric buses for public transport and large-scale wind and solar generation. The findings indicate that aggressive policies introducing clean energy and efficient technologies are required to achieve large-scale CO2 emission reductions. Renewables (solar and wind) and nuclear energy for power generation will reduce emissions significantly; given the limitations on land availability, biomass is expected to play a limited role. In addition, reaching the targets would require efficient end-use devices, switching to alternative fuels such as liquefied natural gas (LNG), the use of electric cars, and the expansion of public transport. These measures would bring additional advantages such as improved energy security, reduced energy imports, and lower levels of local air pollutants. Reducing emissions will require a marginal abatement cost of carbon for Sri Lanka varying from 197 USD/tCO2 for a 10% reduction to 1792 USD/tCO2 for a 90% reduction by 2050. The results indicate that the marginal abatement cost of CO2 reduction is higher than the global average for developing economies.

Publication Open Access Development and testing of a high speed hydraulic manipulator with single time scale visual servoing (Memorial University of Newfoundland, 2014) Liyanage, M. H

The automation of production processes has enabled industry to meet the dramatic demand for manufactured products that has grown out of the increase in world population. In existing industries, greater production requirements and improvements in product quality call for faster industrial robots. This study details the design and development of a high-speed visual servoing system for industrial applications. The proposed system consists of a high-speed robotic manipulator, a high-speed camera system, and an embedded controller. The manipulator has the configuration of a Selective Compliance Assembly Robot Arm (SCARA).
It uses two custom-designed double-vane rotary hydraulic actuators to drive the links of the robot. The SCARA system was mathematically modeled and simulated, and based on the simulation results the hydraulic actuators were sized for optimal performance. A prototype actuator was subsequently designed, manufactured, and experimentally evaluated. The test results show that the proposed actuator is capable of reaching torques of up to 460 Nm in 30 ms with a payload of 12 kg, which is not possible with electric motors of similar size. The proposed SCARA was then designed and fabricated using these actuators. The end effector of the manipulator was capable of reaching velocities of up to 2.7 ms⁻¹ with a payload of 5.3 kg; comparable performance is not feasible with contemporary SCARA-type robots. The robot was designed to handle payloads of up to 15 kg at speeds of up to 2 ms⁻¹. This often results in flexing of the links and twisting of the support column, adding external disturbances to the system. A high-speed camera system was designed and built to obtain the position of the end effector as feedback for the controller. It uses a two-dimensional Position Sensitive Detector as the image sensor. An electronic circuit was designed and built for signal conditioning and data acquisition from the Position Sensitive Detector, and was calibrated to account for non-linearities of the image sensor. The camera was constructed from this Position Sensitive Detector circuit, a lens, and an infrared filter, and was then calibrated to estimate its extrinsic and intrinsic parameters. The camera is capable of carrying out measurements at frequencies of up to 1350 Hz, with an average absolute accuracy of 0.31 mm and 0.37 mm in the x and y directions, respectively. A Field Programmable Gate Array (FPGA) was used in this study as the platform for developing an embedded controller for the robot.
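Among the controllers evaluated on this platform are proportional-derivative (PD) schemes. As a language-agnostic illustration of the control law only (not the FPGA implementation, and with invented gains), a discrete PD update driving a toy first-order plant toward a setpoint:

```python
def pd_step(error, prev_error, dt, kp, kd):
    """One discrete PD update: u = Kp*e + Kd*de/dt."""
    return kp * error + kd * (error - prev_error) / dt

# Toy first-order plant x' = u; gains are illustrative only.
x, setpoint, dt = 0.0, 1.0, 0.01
prev_e = setpoint - x
for _ in range(1000):
    e = setpoint - x
    u = pd_step(e, prev_e, dt, kp=5.0, kd=0.1)
    x += u * dt          # integrate the plant one step forward
    prev_e = e
print(abs(setpoint - x) < 1e-3)  # True: the loop settles near the setpoint
```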
Using contemporary FPGA technology, a powerful virtual processor can be synthesized and integrated with custom hardware to create a dedicated controller that outperforms some conventional microcontroller- and microprocessor-based designs. The FPGA-based controller takes advantage of both hardware features and virtual processor technology: the input and output interfaces were implemented in hardware, while complex functions that are difficult to implement in hardware were implemented on a virtual soft processor. Four different types of controllers were implemented and tested: hardware proportional-derivative, software proportional-derivative, single time scale visual servoing, and set point modification controllers. The proposed implementation carried out single time scale visual servoing at frequencies of up to 330 Hz.

Publication Open Access Development of Real-Time, Self-Learning Artificial Intelligence-Based Algorithms for Non-Intrusive Energy Disaggregation in a Multi-Appliance Environment (Faculty of Engineering, Sri Lanka Institute of Information Technology, 2023-12) Herath, M

Electricity serves as a cornerstone of modern economies, with demand in the residential and commercial sectors increasing rapidly in recent years. Enabling real-time monitoring of individual appliance-wise energy consumption and delivering user feedback is essential for future energy conservation initiatives, and energy disaggregation is imperative for furnishing consumption statistics for individual appliances. The acquisition of appliance-specific energy consumption in a non-intrusive manner, without the need for sensors on each device but instead using readings from the main household energy meter, highlights Non-Intrusive Load Monitoring (NILM) as a promising solution.
NILM, leveraging the capabilities of smart meters and advances in computational power, has gained popularity for its effectiveness in disaggregating and analyzing energy consumption patterns. This study introduces an Artificial Intelligence (AI)-based NILM solution capable of disaggregating the energy consumption of multiple appliances while adapting to new appliances and their evolving behaviors. Among the various NILM approaches, Neural Network (NN)-based models demonstrate promising disaggregation capabilities, but the selection of the most suitable NN type or architecture poses a challenge owing to the multitude of approaches in the literature. To address this issue, the study standardizes and compares different NNs, with the results showing that the Convolutional Neural Network (CNN) exhibits superior prediction accuracy and speed. The study also investigates the impact of different appliances and their consumption profiles on disaggregation performance, rigorously testing parameters such as NN architecture, input-output mapping topologies, data preprocessing, and hyperparameters, leading to guidelines for future NILM studies. Additionally, the study introduces a hierarchical plug-and-play modular model for appliance anomaly detection, extending the application of NILM and overcoming limitations in the anomaly detection literature. The study further investigates two-dimensional (2D) input-based NILM solutions for predicting appliance energy consumption profiles and classifying appliances. Unlike conventional NN-based models that use 1D signals, representing the aggregate energy signal as a 2D image improves performance by leveraging the feature extraction capabilities of NNs while preserving vital temporal information and signal amplitude relationships.
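One family of such 2D encodings is the Gramian Angular Field: the series is rescaled, each value is mapped to a polar angle, and pairwise cosine sums form a square image. A minimal pure-Python sketch of the summation variant (GASF) on a toy signal:

```python
import math

def gasf(series):
    """Gramian Angular Summation Field: G[i][j] = cos(phi_i + phi_j)."""
    lo, hi = min(series), max(series)
    x = [2 * (v - lo) / (hi - lo) - 1 for v in series]    # rescale to [-1, 1]
    phi = [math.acos(max(-1.0, min(1.0, v))) for v in x]  # polar-angle encoding
    return [[math.cos(a + b) for b in phi] for a in phi]

img = gasf([0.0, 1.0, 2.0, 3.0])   # toy aggregate-power window
print(len(img), len(img[0]))        # 4 4 -- one square image per window
```

The resulting symmetric image preserves temporal order along both axes, which is what lets a CNN exploit its feature extraction on what was originally a 1D signal.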
Various time-series-to-2D-image conversion methods for NILM were tested, including the Gramian Angular Summation Field (GASF), Gramian Angular Difference Field (GADF), Recurrence Plot (RP), and Markov Transition Field (MTF), with GADF outperforming the other methods. In addition, the study introduces a simple yet powerful 2D input mechanism for time series data, specifically energy consumption data, integrated into a CNN-based energy disaggregation model for the first time in the NILM domain with the aim of improving overall performance. While the proposed method excels over 1D input-based models in training, the novel 2D input method requires augmentation of the training data volume, data mixing, NN depth, and hyperparameter tuning to achieve superior generalization. Furthermore, aggregate energy signal-based voltage-current (V-I) trajectory plots were investigated for fully non-intrusive appliance classification, demonstrating high accuracy. The study proposes a single NN architecture named "One-Shot", capable of simultaneously disaggregating multiple appliances and offering a more efficient alternative to the intricate and computationally demanding existing NN-based NILM models that require a separate NN for each appliance. The efficacy of this approach is evaluated across multiple input-output mapping configurations, with the multi-point multi-bin model proving superior. To address the challenges of manual model re-training for new appliances and of adapting to evolving consumption patterns, a self-learning module is incorporated, enhancing the performance of the One-Shot model. To overcome issues related to excessive hyperparameter tuning and insufficient training data, the study presents an unsupervised model based on Blind Source Separation (BSS), utilizing Independent Component Analysis (ICA) to separate appliance energy signals from the aggregate signal.
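The BSS model underlying the ICA approach treats observed signals as linear mixtures of independent sources; recovering appliance signals amounts to estimating an unmixing matrix. A toy demonstration of the model with a known 2x2 mixture (illustrative only; it applies the exact inverse rather than estimating it, which is what ICA does blindly):

```python
# Two "appliance" signals observed only as linear mixtures x = A @ s.
s1 = [0.0, 1.0, 0.0, 1.0]          # e.g. a switching appliance
s2 = [0.5, 0.5, 0.5, 0.5]          # e.g. a constant base load
A = [[1.0, 1.0],                   # mixing matrix: rows = observed channels
     [0.5, 2.0]]
x1 = [A[0][0]*a + A[0][1]*b for a, b in zip(s1, s2)]
x2 = [A[1][0]*a + A[1][1]*b for a, b in zip(s1, s2)]

det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
W = [[ A[1][1]/det, -A[0][1]/det],  # W = inverse of A, the unmixing matrix
     [-A[1][0]/det,  A[0][0]/det]]
r1 = [W[0][0]*a + W[0][1]*b for a, b in zip(x1, x2)]
print([round(v, 6) for v in r1])    # recovers s1: [0.0, 1.0, 0.0, 1.0]
```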
Developing more reliable disaggregation models for local environments requires a local energy dataset. For this purpose, the study creates a local energy dataset from households using a custom-designed data logger, capturing both low- and high-frequency energy data at the appliance, circuit, and main energy meter levels. This dataset is verified using the One-Shot model developed in the study. In summary, this study advances the field of NILM by introducing AI-based solutions, innovative approaches, and comprehensive guidelines. Ultimately, these contributions aim to foster energy conservation and enhance efficiency in residential and commercial settings globally.

Publication Open Access Dynamic line parameter estimation using Synchrophasor Measurements (Wichita State University, 2021-05) Hettiarachchige-Don, A. C. S

The worldwide push towards a more intelligent, connected, and reliable electric power delivery system has led to the propagation of a wide range of new technologies and ideas within the power grid infrastructure. The power grid is thus becoming more adaptable to change and more reliable under distress, but these benefits are only possible with vastly improved observability of the system. The traditional methods and technologies for grid monitoring were simply too slow, and newer, faster, and more accurate monitoring technologies became essential around the turn of the century. With the rapid advancement of microprocessing and communication technologies, this became possible in the form of smart monitoring devices, including Intelligent Electronic Devices (IEDs), smart meters for homes and, at the transmission level, Phasor Measurement Units (PMUs). Over the past decade, transmission utilities were quick to adopt PMU networks, which are now common among most major utilities.
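PMUs report time-synchronized voltage and current phasors at each line end. As a back-of-the-envelope illustration of how a series line impedance estimate falls out of two such phasors (complex arithmetic with invented values, ignoring the shunt effects and measurement errors that this thesis specifically addresses):

```python
# Synchronized phasors at sending (S) and receiving (R) ends; values invented.
V_s = complex(132.0, 0.0)      # kV
V_r = complex(128.0, -6.0)     # kV
I   = complex(0.35, -0.12)     # kA, series current

Z = (V_s - V_r) / I            # naive series impedance estimate, ohms
R, X = Z.real, Z.imag          # resistance / reactance split
print(round(R, 2), round(X, 2))
```

Because R is the small real part of a difference of nearly equal phasors, small phasor errors are amplified, which is why raw PMU-based resistance estimates need the correction algorithm developed in the thesis.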
Compared to traditional monitoring systems, PMUs provide information at a much higher resolution and have the advantage of being time-synchronized. The benefits of these networks are numerous, but they are not without drawbacks. PMU devices report only some basic system parameters from the field. While these are useful on their own, it is possible to use this data, in combination with other information, to extrapolate additional parameters of the grid. In this process, however, the inherent errors in PMU-estimated data become an issue and render the extrapolated information unusable. This work focuses in particular on one of these additional parameters: transmission line resistance. The fundamental cause of the error is investigated, and this knowledge is applied to create a correction algorithm that outputs corrected transmission line resistance estimates, which are more useful to utilities for a range of auxiliary applications such as dynamic line rating, determination of line sag, and conductor temperature estimation. This advancement would allow utilities to compound the economic benefits of their investment in PMU networks.

Publication Open Access Enhanced Content Navigation Using Edge Routers in Content Delivery Network (Keio University, Japan, 2016-08) Wijekoon, J

The Internet can be defined as a network composed of geographically dispersed servers and clients. In principle, clients request content from servers, and the servers respond by sending the requested content to the clients. The content must be navigated among networks, and rules and methods have been developed to achieve optimized navigation. Navigation is definable as the process of finding a destination and reaching it via a preferable route.
Hence, the main challenges in achieving content navigation on the Internet can be summarized in two directions: 1) determining and selecting service points, and 2) routing users to the selected service points. The need for optimized content delivery has accelerated the development of the Internet through the proposal of content delivery networks (CDNs). CDNs use content cache servers within Internet Service Provider (ISP) networks …

Publication Open Access: Human gait modelling with step estimation and phase classification utilising a single thigh mounted IMU for vision impaired indoor navigation (Curtin University, 2016), Abhayasinghe, N.

This research is focused on human gait modelling for infrastructure-free inertial navigation for the vision impaired. A pedometer based on a single thigh-mounted gyroscope, an efficient algorithm to estimate thigh flexion and extension, gait models for level walking, a model to estimate step length and a technique to detect gait phases based on a single thigh-mounted Inertial Measurement Unit (IMU) were developed, and their high accuracy was confirmed.

Publication Open Access: Interactions between river flow and seepage flow (M.Sc. Thesis, Hokkaido University, Japan, 2009-09), Rathnayake, U. S.

Many previous studies have been carried out on the interactions between river flow and seepage flow from environmental and biological points of view. Even though the interaction between river flow and seepage flow is recognized as an important process in rivers, previous literature hardly touches on the stability of, or the limitations on, these interactions. Since these interactions occur frequently, at least in mountainous regions, the river flow cannot be well treated as a lined channel flow. Understanding the stability of the interactions between river flow and seepage flow would be advantageous for several research areas, including river environmental engineering and ecological and biological studies.
The subsurface layer below the river is known as the “hyporheic zone”, and it can be defined as a saturated band of sediment that surrounds the river flow and forms a linkage between the river and the aquifer. The zone facilitates bidirectional interactions: up-welling and down-welling. These interactions originate from the pressure and velocity differences between the two layers. The large velocity difference between the river flow layer and the seepage flow layer causes the instability of the flows. Due to this flow instability, a reciprocating flow motion is generated between the hyporheic layer and the layer above. In addition, flow obstructions create an upstream high-pressure zone and a downstream low-pressure zone, resulting in hyporheic circulation under the object. The stability of these hyporheic interactions is analyzed using the linear stability analysis technique, which many researchers have used to study the stability of natural phenomena. The Navier-Stokes equations and the Brinkman-Forchheimer equations are used to formulate the river flow and the seepage flow, respectively. The open channel flow in the river is analyzed using a mixing-length turbulence model, and a spectral collocation method based on Chebyshev polynomials is used to solve the perturbed equations numerically. Stability diagrams are discussed for several slopes of the layers against the dimensionless particle diameter and wave number. The range over which the instability region occurs increases with the slope of the combined river and seepage layers. However, it is important to recognize another instability region, which occurs even at small dimensionless particle diameters with relatively high wave numbers. Several experiments are carried out in order to understand the hyporheic interactions.
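For readers unfamiliar with the numerical machinery, the spectral collocation step in the abstract above rests on the Chebyshev differentiation matrix. The following is a generic sketch of that building block only (the standard Chebyshev-Gauss-Lobatto construction), not the thesis's perturbation solver, which couples the Navier-Stokes and Brinkman-Forchheimer equations:

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix D and collocation grid x on [-1, 1]."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)            # Chebyshev-Gauss-Lobatto points
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))     # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                         # diagonal via negative row sums
    return D, x

# Differentiate f(x) = exp(x) * sin(5x) and compare with the exact derivative;
# spectral accuracy means the error decays extremely fast with n.
D, x = cheb(20)
f = np.exp(x) * np.sin(5 * x)
df_exact = np.exp(x) * (np.sin(5 * x) + 5 * np.cos(5 * x))
err = np.max(np.abs(D @ f - df_exact))
```

In a stability analysis, the same matrix (and its square) discretizes the derivatives in the perturbed equations, turning them into a matrix eigenvalue problem for the growth rates.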
The seepage layer is modeled using a Hele-Shaw cell, a longitudinal parallel-plate model. Methylene blue is used as the tracer to visualize the hyporheic interactions, and the experiment is conducted for two slopes, 0.1% and 0.2%. It can be concluded that the dominant dimensionless wave numbers depend on the combined channel slope and the Froude number of the river flow. In addition, it can be concluded that the residence time of the hyporheic interactions increases with the height of the river layer. A rough comparison between the theoretical analysis and the experimental observations is carried out; the comparison figures show the same tendency in both.

Publication Open Access: Investigating the enabling role of Web 2.0 technology for interactive e-learning in Australian and Sri Lankan higher education (RMIT University, 2015), Karunasena, A.

Interactions are at the heart of e-learning, as they enable learners to actively develop knowledge, acquire skills and develop a sense of belonging and satisfaction. Much attention is paid to developing interactive e-learning systems that facilitate active interactions between learners and learning resources, instructors and peer learners. To date, numerous technologies, such as simulation technology and Web 2.0 technology, have been used to facilitate interactive e-learning. These technologies support learners in interacting with learning resources, instructors and peer learners to different extents. To facilitate interactive e-learning, it is important for educators and e-learning developers to understand how well such technologies support interactions in e-learning. Web 2.0 technology has recently become popular around the world due to its ease of use, portability and high availability. Much research has been done on how Web 2.0 technology could be used for interactive e-learning. Existing research, however, has several limitations.
For example, a majority of research has investigated how a specific Web 2.0 tool supports a specific kind of interaction in e-learning, such as learner-learner interaction. Furthermore, much of the existing research on Web 2.0-based interactive e-learning has been conducted in developed countries. Whether Web 2.0 technology supports interactive e-learning in developing countries in a similar manner to developed countries, or whether developing countries could learn lessons from developed countries on using Web 2.0 technology for interactive e-learning, is therefore not clear. This research aims to investigate the enabling role of Web 2.0 technology for interactive e-learning in higher education in Australia, a developed country, and Sri Lanka, a developing country. To meet this aim, a quantitative research approach is adopted. Following this approach, a conceptual framework for Web 2.0-based interactive e-learning, developed from a comprehensive review of the relevant literature, is validated using survey data collected from learners in universities in Australia and Sri Lanka. The validation of the conceptual framework reveals that Web 2.0 technology supports the three major types of interactions in learning, namely learner-learning resources, learner-instructor and learner-learner interactions, in both Australia and Sri Lanka to a great extent. Furthermore, no significant differences are found in how Web 2.0 technology supports interactive e-learning in the two countries. The implication of these findings is that Web 2.0 tools could be used to improve the interactivity of e-learning. Another implication of this research is that new and more interactive e-learning systems can be developed using Web 2.0 technology, in particular for the purposes of managing learning resources, managing personal knowledge, delivering instructional support and collaborating, in order to improve the effectiveness of e-learning.
From a practical perspective, this study presents an in-depth investigation of how Web 2.0 technology can be used to improve the interactivity of e-learning in Australia and Sri Lanka. It also provides specific guidelines for developing interactive e-learning environments using Web 2.0 technology. From a theoretical perspective, this research finds that Web 2.0 technology could be used in both developing and developed countries to improve the three major interactions in e-learning.

Publication Open Access: IT management sophistication in small business: its definition, measurement and relationship with IT impact (University of Canterbury, Accountancy, 2004), Suraweera, T.

This research deals with information technology (IT) management in small businesses. Although IT management in large businesses has been extensively researched, only a handful of studies have focused on the small business sector. There are three specific objectives of this research: (a) to characterise IT management sophistication in small business, (b) to develop and validate a comprehensive instrument to measure this construct, and (c) to develop a model that explains the relationship between IT management sophistication and IT impact in the context of small business. The characterisation of the construct is based on the work of Raymond and Pare (1992), who explored the concept of IT sophistication within the context of small businesses. This study adopted a multi-method investigative approach, combining case study research and survey methods. The study population was New Zealand's small chartered accountancy firms. Initially, a pool of indicators representing IT management sophistication in small business was derived on the basis of case study analysis. These indicators formed the basis for drafting the measurement instrument, which was tested on a wider population in the quantitative phase of the investigation.
Partial Least Squares (PLS) modelling, a second-generation multivariate analytical technique, was used in the study's survey data analysis phase. This research characterised IT management sophistication in small business under three sub-dimensions: IT planning, IT controlling and IT leading. These factors were represented by nineteen indicators. The validity and reliability of the measurement instrument were examined in the PLS data analysis. A PLS model was derived that explains the relationship between IT management sophistication, technological sophistication and informational sophistication on one hand, and IT impact on the other, in the context of small business. The characterisation of IT management sophistication in small business will help researchers understand this complex construct more clearly. The measurement instrument can be used to examine further the different aspects of IT management in small businesses. The model relating the study constructs will aid understanding of the links between them. Practitioners will be able to use these results to improve their IT management practices and derive a greater impact from IT, which can, in turn, result in higher organisational performance.

Publication Open Access: Lightning warning system based on slow fields and fast transient variations, suitable for oceanic tropics (http://dl.lib.mrt.ac.lk/handle/123/1958, 2007), Abhayasinghe, N.

Lightning causes extensive property damage and human casualties across Sri Lanka, so developing a low-cost lightning warning system has been a major requirement. The environmental vertical static electric field changes from 0.1 kV/m under fair weather conditions to extreme values such as 10 kV/m under thunderstorm conditions. Also, lightning discharges generate electromagnetic radiation from ultra-low frequency (ULF) through ultra-high frequency (UHF), with peak energy emission at 10 kHz.
The work discussed in this thesis uses both the static field variation and the electromagnetic radiation emitted by lightning discharges to predict a thunderstorm. A portable transient detector using an envelope detector tuned to 1600 kHz detects the electromagnetic radiation emitted by lightning discharges. An operational amplifier circuit with a slow response, connected to a horizontal plate antenna, detects the static field variation. The final decision is made by a third circuit, and three levels of alarm are issued accordingly. Using the transient detector alone, a warning can be issued 25 minutes before a nearby thunderstorm with a 95% level of confidence. With the entire system, the confidence of the warning increases further. The cost of the transient detector is about 2,500 Sri Lankan rupees with a rechargeable battery bank; the entire system with battery backup costs about 5,000 Sri Lankan rupees. According to the observations made with the transient detector, the delay between cloud flashes and ground flashes follows a distribution of the form of a fractional function, with a maximum at 27.52 minutes. The newly designed lightning warning system delivers an acceptable grade of performance at low cost.

Publication Open Access: Multiple-input multiple-output wireless system designs with imperfect channel knowledge (Queen's University, 2008-07), Ding, M.

Employing multiple transmit and receive antennas for wireless transmission opens up the opportunity to meet the demand for the high-quality, high-rate services envisioned for future wireless systems with the minimum possible resources, e.g., spectrum, power and hardware. Empowered by linear precoding and decoding, a spatially multiplexed multiple-input multiple-output (MIMO) system becomes a convenient framework for offering high data rates, diversity and interference management.
While most current precoding/decoding designs have assumed perfect channel state information (CSI) at the receiver, and sometimes even at the transmitter, in this thesis we design the precoder and decoder with imperfect CSI at both the transmit and receive sides, and investigate the joint impact of channel estimation errors and channel correlation on system structure and performance. Mean-square error (MSE) related performance metrics are used as the design criteria. We begin with the minimum total MSE precoding/decoding design for a single-user MIMO system assuming imperfect CSI at both ends of the link. Here the CSI includes the channel estimate and channel correlation information. The closed-form optimum precoder and decoder are determined for the special case with no receive correlation. For the general case with correlation at both ends, the structures of the precoder and decoder are also determined. It is found that, compared to the perfect CSI case, linear filters are added to the transceiver structure to balance the channel noise against the additional noise caused by imperfect channel estimation, which improves system robustness against imperfect CSI. Furthermore, the effects of channel estimation error and channel correlation are coupled together, and are quantified by simulations. With imperfect CSI at both ends, the exact capacity expression for a single-user MIMO channel is difficult to obtain. Instead, upper and lower bounds on capacity have been derived, and the lower bound has been used for system design. The closed-form transmit covariance matrix for the lower bound has not been found in the literature; this is referred to as the maximum mutual information design problem with imperfect CSI. Here we transform the transmitter design into a joint precoding/decoding design problem.
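To make the MSE-based design criterion concrete, the following is a generic sketch of a linear MMSE receive filter for the model y = Hx + n with perfect CSI; the thesis's joint precoder-decoder optimization under imperfect CSI is considerably more involved, and the antenna counts and noise level here are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(0)
nt, nr, sigma2 = 4, 4, 0.1   # transmit/receive antennas and noise variance (assumed)

# Rayleigh-fading channel with i.i.d. CN(0, 1) entries.
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)

# LMMSE receive filter for x ~ CN(0, I), n ~ CN(0, sigma2 I):
#   W = H^H (H H^H + sigma2 I)^{-1}
W = H.conj().T @ np.linalg.inv(H @ H.conj().T + sigma2 * np.eye(nr))

# Theoretical total MSE: trace of the MSE matrix (I + H^H H / sigma2)^{-1}.
mse_theory = np.trace(np.linalg.inv(np.eye(nt) + H.conj().T @ H / sigma2)).real

# Monte-Carlo check of the same quantity.
n_trials = 2000
err = 0.0
for _ in range(n_trials):
    x = (rng.standard_normal(nt) + 1j * rng.standard_normal(nt)) / np.sqrt(2)
    noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(nr) + 1j * rng.standard_normal(nr))
    xhat = W @ (H @ x + noise)
    err += np.sum(np.abs(xhat - x) ** 2)
mse_mc = err / n_trials
```

Imperfect-CSI designs replace H above with a channel estimate and add terms for the estimation-error statistics, which is where the extra balancing filters described in the abstract arise.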
The closed-form optimum transmit covariance matrix is then derived for the special case with no receive correlation, whereas for the general case with non-trivial correlation at both ends, the optimum structure of the transmit covariance matrix is determined. A close relationship between the maximum mutual information design and the minimum total MSE design is discovered under imperfect CSI. The tightness and accuracy of the capacity lower bound are evaluated by simulation. The impact of imperfect CSI on single-user MIMO ergodic channel capacity is also assessed. For robust multiuser MIMO communications, minimum average sum MSE transceiver (precoder-decoder pair) design problems are formulated for both the uplink and the downlink, assuming imperfect channel estimation and channel correlation at the base station (BS). We propose improved iterative algorithms based on the associated Karush-Kuhn-Tucker (KKT) conditions. Under the assumption of imperfect CSI, an uplink-downlink duality in average sum MSE is proved, which can be used to simplify the more involved downlink design. As an alternative for solving the uplink problem, a sequential semidefinite programming (SDP) method is proposed. Simulations are provided to corroborate the analysis and assess the impact of channel estimation errors and channel correlation at the base station on both uplink and downlink system performance.

Publication Open Access: Optimal management and operational control of urban sewer systems (University of Strathclyde, 2013), Rathnayake, U. S.

The control of combined sewer networks, like many other real-world problems, involves competing and conflicting objectives. Decision makers need to select the best possible control strategy for minimizing combined sewer overflows (CSOs) when controlling sewer networks. However, this control strategy should also be cost effective to yield a feasible control approach in the real world.
Cost effectiveness has become especially important in the present economic recession. Over the past decades, control strategies have been based on the minimization of CSOs. The aim now, however, is not only to minimize CSOs but also to minimize their impact on natural waters. Therefore, this research explores the development of a holistic framework for the multi-objective optimization of urban wastewater systems, considering flows and water quality in combined sewers and the cost of wastewater treatment. Pollution levels of several water quality parameters in dry weather flows and stormwater runoff are considered. Pollutographs for several water quality parameters are generated for the stormwater runoff. Temporal and spatial variations of the stormwater runoff are incorporated using these pollutographs for different land uses. Furthermore, pollutographs are developed for different storm conditions, including single, two consecutive and migrating storms. Evolutionary algorithms are used extensively in solving the developed multi-objective optimization approach. Formulations for two different optimization approaches are developed, one for snapshot optimization and the other for dynamic optimization. Simulation results from a full hydraulic model, including water quality routing, are used in the optimization. The performance of the multi-objective optimization models is tested on a simple interceptor sewer system for several storm conditions. The proposed snapshot optimization approach gives the optimal CSO control settings where a single set of static control settings is used throughout the considered time period. The proposed dynamic optimization approach, however, is capable of producing control strategies over the full duration of the storm period.
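The evolutionary multi-objective optimization described above rests on Pareto dominance: one control setting dominates another if it is no worse in every objective and strictly better in at least one. A minimal sketch of extracting the non-dominated set, using purely hypothetical (spill volume, treatment cost) pairs rather than the thesis's hydraulic-model outputs:

```python
def pareto_front(points):
    """Return the non-dominated subset of (objective_1, objective_2) pairs,
    assuming both objectives are to be minimized."""
    front = []
    for p in points:
        dominated = any(
            q != p and q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical (CSO spill volume, treatment cost) outcomes for five control settings:
solutions = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
front = pareto_front(solutions)  # the trade-off curve a decision maker chooses from
```

Algorithms such as NSGA-II apply this ranking repeatedly (with crowding-distance tie-breaking) inside a genetic search, rather than by the brute-force scan shown here.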
Furthermore, results for a number of alternative constraint-handling formulations for the developed multi-objective optimization approach are compared, producing interesting findings. Overall, the constraint-handling formulations developed outside the genetic algorithm (NSGA-II) provide better control of combined sewer networks. In addition, the results of the multi-objective optimization demonstrate the benefits of the optimization approach and its potential to establish the key properties of a range of control strategies through an analysis of the various trade-offs involved. Solutions from the dynamic optimization approach highlight the value of real-time control in combined sewer systems. Given that the technology exists to measure water quality and flow rates, collect data and send feedback to the sewer system through a central processing unit, together with the availability of high-performance computers, the developed optimization model is capable of addressing present society's concerns about combined sewer systems. The model is capable of controlling existing sewer networks according to receiving water regulations and the funding availability of the wastewater treatment plants. However, further research is required to apply the developed multi-objective optimization approach to the real-time control of urban sewer systems.

Publication Open Access: Realtime line parameter estimation using synchrophasor measurements and impact of sampling rates (Wichita State University, 2016), Hettiarachchige-Don, A. C. S.

The installation of synchrophasor measurement units within the electrical grid has provided utilities with the ability to monitor their transmission system in real time. These real-time observations allow for better situational awareness and rapid responses to adverse system conditions.
However, the real-time impedance of the power line is not one of the parameters transmitted to the control center and therefore has to be calculated using the data received from multiple devices. This thesis proposes a simplified methodology for this analysis that requires less computational power than most other proposed estimation techniques. Hence, this methodology is able to produce accurate results faster while using a smaller quantity of stored data. For these reasons, it can be implemented to provide near real-time estimation and reporting of impedance values. For the purposes of this research, only the reactance is calculated, but a similar approach can be used to obtain the resistance as well. The methodology consists of an algorithm to calculate and estimate the reactance of a line using the reported PMU data. It includes an outlier detection and elimination algorithm as well as a denoising technique that makes use of regularized least-squares estimation to accurately estimate the reactance over the analysis period. The proposed methodology is tested using real synchrophasor measurement data from a utility provider, and it can easily be adapted and applied to the estimation and calculation of other parameters using PMU data.

Publication Open Access: Reinforcement learning based trust framework for MANET environment (Curtin University, 2018), Rupasinghe, L.

Mobile Ad-hoc Networks (MANETs) are designed and implemented without the need for any infrastructure support. The properties of MANETs inherently pose greater challenges in areas like security and reliability. This thesis presents three security protocols developed to address MANET security needs. A novel trust calculation methodology and an intelligent secure route prediction mechanism were designed for an existing MANET routing protocol.
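Returning to the reactance-estimation abstract above: on a simple short-line model, each synchrophasor sample yields a reactance estimate X = Im((V_s - V_r)/I), which is then cleaned of outliers. A toy sketch on synthetic data (the line impedance and noise levels are invented for illustration; the thesis applies its own outlier elimination and regularized least-squares denoising to real utility measurements):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical short-line model: V_s = V_r + Z * I, with Z = R + jX.
Z_true = 2.0 + 10.0j   # line impedance in ohms (invented for illustration)
n = 500                # number of synchrophasor samples

# Synthetic current and voltage phasors with measurement noise on V_s.
I  = (1.0 + 0.1 * rng.standard_normal(n)) * np.exp(1j * 0.2 * rng.standard_normal(n))
Vr = 100.0 * np.exp(1j * 0.05 * rng.standard_normal(n))
Vs = Vr + Z_true * I + 0.5 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Per-sample reactance estimates from the two-ended phasor measurements.
X_raw = np.imag((Vs - Vr) / I)

# Crude outlier rejection: drop samples more than 3 MADs from the median.
med = np.median(X_raw)
mad = np.median(np.abs(X_raw - med))
X_kept = X_raw[np.abs(X_raw - med) < 3 * mad]
X_est = X_kept.mean()
```

Averaging the surviving samples already recovers the nominal reactance closely; a regularized least-squares fit, as in the thesis, additionally tracks slow variation of the parameter over the analysis window.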
These protocols help to implement a trustworthy MANET, providing a dynamic and secure environment.

Publication Embargo: Representation of evidence from bodies with access to partial knowledge (University of Miami, 2001), Kulasekere, E. C.

Problem solving and decision making are often carried out in environments where no single decision agent has access to the complete scope of information and the available information is either partial or approximate. An appropriate framework for modeling partial knowledge is crucial for understanding the various types of uncertainty that are generated and for making decisions in such environments. When the complete scope of information is unavailable, the logical approach is to focus on the information that is common to all decision agents. For this purpose, an appropriate notion of conditional knowledge must be developed. In this work, we propose a suitable conditional framework that is capable of extracting relevant information from a given body of evidence. A new combination function that allows the combination of evidence generated from two or more sources possessing non-identical scopes of information is also proposed in the context of this conditional framework. The proposed theory circumvents many of the difficulties and conflicting issues related to the traditional Dempster-Shafer theory of evidence and the counter-intuitive results drawn from it. New measures for the information embedded in the uncertainties generated from the randomness and non-specificity of bodies of evidence are also proposed. These measures are shown to converge to the traditional Bayesian uncertainty measure in a probabilistic environment. The results of this research are used to arrive at a unified strategy for intelligent resource management and congestion control in distributed sensor networks.
Viable alternatives for analyzing common data mining tasks using subjective knowledge, rather than the more traditional query processing methods, are also proposed.

Publication Open Access: The Role of Social Capital and ICTs in Inter-Organizational Collaboration in a Developing Economy: An Empirical Study of the Finance Industry in Sri Lanka (Curtin University, 2017-09), Nawinna, Dasuni Priyanwada

In the contemporary world of business, organizations cannot rely solely on their internal strengths to survive. Forming inter-organizational partnerships is becoming one of the most popular strategies available to an organization for sharing risks, resources and other capabilities with partners. Collaborative business strategies are especially beneficial in emerging economies, where organizations are constrained by a lack of resources, technology, skills and infrastructure. Accordingly, explaining why and how some organizations do better in inter-organizational relationships (IORs) than others is a dominant challenge in the study of IORs. Social capital (SC) is an influential concept for understanding this question. It is recognized as an important factor in developing relationships of trust, forming the foundation for greater collaboration and successful collective action. Social capital is a multi-dimensional, relational concept that becomes a powerful tool when combined with network analysis approaches and tools to study inter-organizational relationships such as alliances, joint ventures or collaborations of any form. While social capital has been found to support various firm-level value creations, such as the creation of intellectual capital, resource exchange, innovation, knowledge sharing and performance, it is also significant as the basis for the development of stakeholder relationships, which are essential to Corporate Social Responsibility (CSR).
CSR is touted as a key enabler of both organizational performance and sustainable development, which are also essential for developing economies. Information Systems (IS) researchers have become increasingly interested in exploring social capital in relation to Information and Communications Technologies (ICTs). It is evident that social capital and ICT are mutually complementary at the inter-organizational level. While the role of social capital in the development or acceptance of ICTs, and the role of ICTs in the formation of social capital, have been widely explored, the combined effect of SC and ICT on IORs in developing contexts remains unexplored. Very little is known about the effect of ICT-enabled social capital in the inter-bank context. The aim of this empirical research is to develop a model of how ICT-enabled social capital affects inter-bank strategic collaboration in a developing context, Sri Lanka. The purpose of this study is to investigate how the multiple dimensions of social capital influence strategic collaboration in the Sri Lankan banking context, and the enabling role of ICTs. To accomplish this objective, the researcher uses quantitative techniques: a structural modelling approach combined with network measurements. Data is gathered through a survey of the high-level management of banks and from public sources such as annual reports and websites. Network analysis tools (e.g., ORA) and statistical analysis methods (PLS-SEM) and tools (e.g., SmartPLS) are used to derive the results. The results of this study suggest that the structural and relational dimensions of social capital have a positive influence on the degree of strategic collaboration of banks. It is also evident that higher ICT capabilities at the firm level strengthen the effect of cognitive social capital on collaboration.
The results of the other moderation tests indicate that firm size, age, gender ratio of directors, ownership, geographic spread, culture, organization structure and previous experience strengthen the effect of social capital on strategic collaboration. Further analysis indicates that structural social capital is influential for the corporate social responsibility of banking organizations. Both inter-organizational collaboration and corporate social responsibility yield higher financial performance at the firm level. The study also provides evidence that using network measurements as indicators of social capital provides better predictability than regular indicators. These findings provide a valuable contribution to the theory of social capital, the literature on ICT for development and network theory, contributing to a more holistic perspective that incorporates social, technical and organizational aspects and providing insights useful for building effective strategies in similar developing contexts.
