Theses
Permanent URI for this community: https://rda.sliit.lk/handle/123456789/2429
Postgraduate students are required to submit a thesis as part of fulfilling the requirements of their respective postgraduate degree programmes. This community features merit-based graduate theses submitted by SLIIT postgraduate students. Abstracts are available for public viewing, while the full texts can be accessed on-site within the library.
Theses and Dissertations of the Sri Lanka Institute of Information Technology (SLIIT) are licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Browse
Publication Embargo
Characterization of winged bean (Psophocarpus tetragonolobus (L.) DC.) Accessions using Isoenzyme profiles and morphological characteristics (Electrophoresis, starch Gel … (The Pennsylvania State University, 1986) Peiris, C. N.
The winged bean, a tropical legume, is rich in protein, minerals, vitamins and carbohydrates. It is unique among leguminous plants in that every part of the plant except the roots can be eaten. There are over a thousand accessions and cultivars of winged bean, so the classical approach of describing and identifying genotypes from morphological characteristics alone has become increasingly difficult.

Publication Embargo
Representation of evidence from bodies with access to partial knowledge (University of Miami, 2001) Kulasekere, E. C.
Problem solving and decision making are often carried out in environments where no single decision agent has access to the complete scope of information, and the available information is either partial or approximate. An appropriate framework for modeling partial knowledge is crucial for understanding the various types of uncertainty that arise and for making decisions in such environments. When the complete scope of information is unavailable, the logical approach is to focus on the information common to all decision agents; for this purpose, an appropriate notion of conditional knowledge must be developed. In this work, we propose a suitable conditional framework capable of extracting relevant information from a given body of evidence. A new combination function that allows the combination of evidence generated from two or more sources possessing non-identical scopes of information is also proposed in the context of this conditional framework. The proposed theory circumvents many of the difficulties and conflicting issues related to the traditional Dempster-Shafer theory of evidence and the counter-intuitive results drawn from it. New measures for the information embedded in the uncertainties generated by randomness and non-specificity of bodies of evidence are also proposed; these measures are shown to converge to the traditional Bayesian uncertainty measure in a probabilistic environment. The results of this research are used to arrive at a unified strategy for intelligent resource management and congestion control in distributed sensor networks. Viable alternatives for analyzing common data mining tasks using subjective knowledge, rather than the more traditional query processing methods, are also proposed.
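For context, the baseline that the above framework departs from is Dempster's classical rule of combination. Stated generically (a standard textbook formula, not text from the thesis), two independent basic probability assignments m_1 and m_2 over a frame of discernment combine as

$$
m_{12}(A) = \frac{1}{1-K} \sum_{B \cap C = A} m_1(B)\, m_2(C), \qquad
K = \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C),
$$

for every non-empty set A, with m_{12}(\emptyset) = 0. The normalization by 1 - K discards all conflicting mass, the step commonly blamed for counter-intuitive results under high conflict, and it is this behaviour that motivates alternative combination functions such as the one proposed above.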
Publication Open Access
IT management sophistication in small business: its definition, measurement and relationship with IT impact (University of Canterbury. Accountancy, 2004) Suraweera, T.
This research deals with information technology (IT) management in small businesses. Although IT management in large businesses has been extensively researched, only a handful of studies have focused on the small business sector. The research has three specific objectives: (a) to characterise IT management sophistication in small business, (b) to develop and validate a comprehensive instrument to measure this construct, and (c) to develop a model that explains the relationship between IT management sophistication and IT impact in the context of small business. The characterisation of the construct is based on the work of Raymond and Pare (1992), who explored the concept of IT sophistication within the context of small businesses.
This study adopted a multi-method investigative approach combining case study research and survey methods. The study population was New Zealand's small chartered accountancy firms. Initially, a pool of indicators representing IT management sophistication in small business was derived from the case study analysis. These indicators formed the basis for drafting the measurement instrument, which was tested within a wider population in the quantitative phase of the investigation. The second-generation multivariate analytical technique of Partial Least Squares (PLS) modelling was used in the survey data analysis phase. The research characterised IT management sophistication in small business under three sub-dimensions, IT planning, IT controlling and IT leading, represented by nineteen indicators. The validity and reliability of the measurement instrument were examined in the PLS data analysis, and a PLS model was derived relating IT management sophistication, technological sophistication and informational sophistication to IT impact in the context of small business. The characterisation of IT management sophistication in small business will help researchers understand this complex construct more clearly, the measurement instrument can be used to examine further aspects of IT management in small businesses, and the model relating the study constructs will aid understanding of the links between them. Practitioners will be able to use these results to improve their IT management practices and derive a greater impact from IT, which can, in turn, result in higher organisational performance.

Publication Open Access
Trajectory Planners for Cooperative Control of Two Industrial Robots and Belt Drives (School of Science and Engineering, Saga University, 2005-03) Jayawardena, T. S. S.
This thesis focuses on trajectory planning strategies for high-speed, vibration-restrained position control of belt drives and cooperative contour control of two robots, with a view to increasing the speed of cooperative tasks. The proposed solutions have been devised, implemented and verified for effective functionality. Trajectory planning in this context is carried out under the kinematic constraints met in actual practice: the maximum joint velocity constraints and the maximum joint acceleration constraints. The proposed planners are based on the principles of kinematics, and the trajectory planning scenarios and issues are critically reviewed. For the belt-driven machine, a fourth-order kinematic model incorporating belt reaction torque is systematically derived, which explains the spike phenomenon observed in the motor velocity profile when a change in acceleration is experienced. Further, a feed-forward dynamic compensator is proposed to restrain vibration and improve the dynamic characteristics of the belt drives. The proposed feed-forward compensator is a combination of the inverse dynamics of the system and a desirable dynamic filter, which reshapes the dynamic characteristics of the existing system. The planned trajectories were extensively tested at low and high speeds for accurate performance on an actual belt-driven machine, and the results are illustrated. The proposed trajectory planners for two-robot cooperation are basically of two types:
1) A given objective cooperative trajectory that exceeds the dynamic bounds of a single robot is decomposed into two concurrent, complementary trajectories for two robots manoeuvred simultaneously.
2) For a specified objective locus, the minimum-time complementary trajectories for cooperation are planned.
The objective locus used to exemplify the trajectory planners in both cases is an S-shaped locus, and realization of the trajectories is carried out under maximum joint acceleration constraints. In the former cooperative trajectory planner, a fair task distribution is accomplished by minimizing the difference in the maximum joint velocities of the two robots. The complexities of planning the trajectories are handled by a two-stage trajectory-planning paradigm backed by a short-listing criterion. A fourth-order spline technique for position, minimizing the joint acceleration, is also derived theoretically. The latter, minimum-time cooperative trajectory planner is of bang-bang type in its acceleration profile, and the fairness of each robot's contribution is achieved through an additional contribution constraint for each robot on the cooperative task. The applicability of the trajectory-planning concept has been verified with cooperative trajectories having sharp corners. Since the proposed trajectory planners are off-line, they can be conveniently applied to existing servo systems irrespective of the computational power of the in-use controller; from an instrumentation point of view, neither a dramatic change to the existing hardware setup nor a considerable reconfiguration of the system is demanded. This requirement of only minimal changes enhances the pragmatic significance of the proposed schemes.
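As a point of reference for the kinematic-constraint planning described above, the sketch below computes the minimum traversal time for a single axis under symmetric velocity and acceleration limits. It is the textbook bang-bang/trapezoidal profile offered as an illustration, with hypothetical limits; it is not the thesis's cooperative planner.

    import math

    def min_time_profile(distance, v_max, a_max):
        """Minimum time to move `distance` from rest to rest under
        |v| <= v_max and |a| <= a_max (classic bang-bang/trapezoid)."""
        # Distance consumed accelerating to v_max and braking back to zero.
        d_ramp = v_max ** 2 / a_max
        if distance >= d_ramp:
            # Trapezoidal profile: accelerate, cruise at v_max, decelerate.
            return distance / v_max + v_max / a_max
        # Triangular profile: peak velocity is never reached.
        return 2.0 * math.sqrt(distance / a_max)

    print(min_time_profile(0.5, v_max=2.0, a_max=10.0))  # 0.45 s for these limits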
Publication Open Access
Lightning warning system based on slow fields and fast transient variations, suitable for oceanic tropics (http://dl.lib.mrt.ac.lk/handle/123/1958, 2007) Abhayasinghe, N.
Lightning causes extensive property damage and human casualties across Sri Lanka, so developing a low-cost lightning warning system has been a major requirement. The environmental vertical static electric field changes from 0.1 kV m⁻¹ under fair-weather conditions to extreme values around 10 kV m⁻¹ under thunderstorm conditions. Lightning discharges also generate electromagnetic radiation from ultra low frequency (ULF) through ultra high frequency (UHF), with peak energy emission at 10 kHz. The work discussed in this thesis uses both the static field variation and the electromagnetic radiation emitted by lightning discharges to predict a thunderstorm. A portable transient detector, built around an envelope detector tuned to 1600 kHz, detects the electromagnetic radiation emitted by lightning discharges, while an operational amplifier circuit with a slow response and a horizontal plate antenna detects the static field variation. The final decision is made by a third circuit, and three levels of alarm are released accordingly. Using the transient detector alone, a warning can be released 25 minutes before a nearby thunderstorm with a 95% level of confidence; with the entire system, the confidence of the warning increases further. The cost of the transient detector is about 2,500 Sri Lankan rupees with a rechargeable battery bank, and the entire system with battery backup costs about 5,000 Sri Lankan rupees.
According to the observations made by the transient detector, the delay between cloud flashes and ground flashes follows a distribution of the form of a fractional function with a maximum at 27.52 minutes. The newly designed lightning warning system shows an acceptable grade of performance at low cost.
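To make the two-sensor decision structure concrete, here is a minimal alarm-fusion sketch. Every threshold below is hypothetical, chosen only to illustrate a three-level decision; the abstract gives only the 0.1 kV/m fair-weather and roughly 10 kV/m thunderstorm field levels, not the actual decision circuit's logic.

    def alarm_level(static_field_kv_m, transients_per_min):
        """Illustrative three-level alarm fusion of a static-field reading and a
        transient-detector count; thresholds are invented for the example."""
        if static_field_kv_m > 5.0 and transients_per_min > 30:
            return 3  # storm overhead or imminent
        if static_field_kv_m > 1.0 or transients_per_min > 10:
            return 2  # storm approaching
        if transients_per_min > 2:
            return 1  # distant activity detected
        return 0      # fair weather

    print(alarm_level(0.1, 0))   # 0: fair-weather field, no transients
    print(alarm_level(6.0, 40))  # 3: high field plus frequent transients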
Publication Open Access
Shear capacity of composite deck slabs with concrete filled steel tubes (University of Moratuwa, 2008) Perera, S. V. T.
Steel and concrete composite systems are generally used as major structural components in multi-storey buildings. Composite construction in buildings is popular with profiled steel sheeting (steel decking), since the sheeting serves as a working platform to support construction loads and as permanent formwork for the concrete. To achieve the large column-free spans (in the range of 8 m to 12 m) often demanded for multi-storey office buildings, steel and concrete composite floor trusses can form economical solutions, since they provide the facility to accommodate various service ducts within the structural zone. The concept of introducing a concrete filled steel tube (CFST), instead of the conventional open-flanged steel section, as the top chord of these floor trusses has been discussed; however, the viability of this new concept must be established by experimental evidence on the longitudinal shear transfer capacity at the composite stage. This study discusses the experimental results of a series of push-off tests conducted on CFST-embedded composite slab panels, investigating the effects of different concrete top covers and different concrete strengths. With headed shear studs (two studs per sample, Configuration 3), increases in shear carrying capacity of 23%-29% and 20%-53% were achieved by increasing the concrete top cover from 20 mm to 30 mm and the concrete cube strength from grade 20 to grade 45, respectively. The shear capacities of composite slabs with CFSTs were 131% (steel tube only, Configuration 1) to 385% (steel tube with two welded steel strips, Configuration 2) higher than those of composite slabs with headed shear studs (two studs per sample). The results for composite slabs with headed shear studs were then compared with Eurocode 4, which proved at least 22% conservative. Keywords: composite slab, steel, concrete, concrete filled steel tubes, steel decking

Publication Open Access
Multiple-input multiple-output wireless system designs with imperfect channel knowledge (Queen's University, 2008-07) Ding, M.
Employing multiple transmit and receive antennas for wireless transmission opens up the opportunity to meet the demand for the high-quality, high-rate services envisioned for future wireless systems with the minimum possible resources, e.g., spectrum, power and hardware. Empowered by linear precoding and decoding, a spatially multiplexed multiple-input multiple-output (MIMO) system becomes a convenient framework for offering high data rate, diversity and interference management. While most current precoding/decoding designs have assumed perfect channel state information (CSI) at the receiver, and sometimes even at the transmitter, in this thesis we design the precoder and decoder with imperfect CSI at both the transmit and receive sides, and investigate the joint impact of channel estimation errors and channel correlation on system structure and performance. Mean-square error (MSE) related performance metrics are used as the design criteria.
We begin with the minimum total MSE precoding/decoding design for a single-user MIMO system assuming imperfect CSI at both ends of the link, where the CSI comprises the channel estimate and channel correlation information. The closed-form optimum precoder and decoder are determined for the special case with no receive correlation; for the general case with correlation at both ends, the structures of the precoder and decoder are determined. It is found that, compared to the perfect CSI case, linear filters are added to the transceiver structure to balance the channel noise against the additional noise caused by imperfect channel estimation, which improves system robustness against imperfect CSI. Furthermore, the effects of channel estimation error and channel correlation are coupled together, and are quantified by simulations. With imperfect CSI at both ends, the exact capacity expression for a single-user MIMO channel is difficult to obtain; instead, upper and lower bounds on capacity have been derived, and the lower bound has been used for system design. The closed-form transmit covariance matrix for the lower bound has not been found in the literature; this is referred to as the maximum mutual information design problem with imperfect CSI. Here we transform the transmitter design into a joint precoding/decoding design problem. The closed-form optimum transmit covariance matrix is then derived for the special case with no receive correlation, whereas for the general case with non-trivial correlation at both ends, the optimum structure of the transmit covariance matrix is determined. A close relationship between the maximum mutual information design and the minimum total MSE design under imperfect CSI is discovered. The tightness and accuracy of the capacity lower bound are evaluated by simulation, and the impact of imperfect CSI on single-user MIMO ergodic channel capacity is assessed. For robust multiuser MIMO communications, minimum average sum-MSE transceiver (precoder-decoder pair) design problems are formulated for both the uplink and the downlink, assuming imperfect channel estimation and channel correlation at the base station (BS). We propose improved iterative algorithms based on the associated Karush-Kuhn-Tucker (KKT) conditions. Under the assumption of imperfect CSI, an uplink-downlink duality in average sum MSE is proved, which can be used to simplify the more involved downlink design. As an alternative for solving the uplink problem, a sequential semidefinite programming (SDP) method is proposed. Simulations corroborate the analysis and assess the impact of channel estimation errors and channel correlation at the base station on both uplink and downlink system performance.
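As background for the design criterion named above, a standard statement of the linear-transceiver minimum total MSE problem (a generic formulation consistent with the abstract, not reproduced from the thesis) is

$$
\min_{\mathbf{F},\,\mathbf{G}} \;
\mathbb{E}\!\left\{ \left\| \mathbf{G}\!\left( \mathbf{H}\mathbf{F}\mathbf{s} + \mathbf{n} \right) - \mathbf{s} \right\|^{2} \right\}
\quad \text{s.t.} \quad \operatorname{tr}\!\left(\mathbf{F}\mathbf{F}^{H}\right) \le P_T,
$$

where F is the precoder, G the decoder, H the channel matrix, s the transmitted symbol vector, n the noise, and P_T the transmit power budget. Under imperfect CSI, H is replaced by its estimate and the expectation is additionally taken over the channel estimation error, which is what introduces the extra balancing filters mentioned in the abstract.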
Publication Open Access
Interactions between river flow and seepage flow (M.Sc. Thesis, Hokkaido University, Japan, 2009-09) Rathnayake, U. S.
Many previous studies have been carried out on the interactions between river flow and seepage flow from an environmental and biological point of view. Even though the interaction between river flow and seepage flow is recognized as an important process in rivers, the previous literature hardly touches on the stability of, or the limitations on, these interactions. Since the interactions occur frequently, at least in mountainous regions, the river flow cannot be treated simply as flow in a lined channel.
Understanding the stability of the interactions between river flow and seepage flow would be advantageous for several research areas, including river environmental engineering and ecological and biological studies. The subsurface layer below the river is known as the "hyporheic zone"; it can be defined as a saturated band of sediment that surrounds the river flow and forms a linkage between the river and the aquifer. The zone facilitates bidirectional interactions, both up-welling and down-welling, which originate from the pressure and velocity differences between the two layers. The large velocity difference between the river flow layer and the seepage flow layer causes flow instability, and due to this instability a reciprocating flow motion is generated between the hyporheic layer and the layer above. In addition, flow obstructions create an upstream high-pressure zone and a downstream low-pressure zone, resulting in hyporheic circulation under the object. The stability of these hyporheic interactions is analyzed using the linear stability analysis technique, which many researchers have used to study the stability of natural phenomena. The Navier-Stokes equations and the Brinkman-Forchheimer equations are used to formulate the river flow and the seepage flow, respectively. The open channel flow in the river is analyzed using a mixing-length turbulence model, and a spectral collocation method based on Chebyshev polynomials is used to obtain the numerical solution of the perturbed equations. Stability diagrams are discussed for several slopes of the layers against the dimensionless particle diameter and wave number. It is found that the extent of the instability region increases with the slope of the combined river and seepage layers; however, it is important to recognize another instability region, which occurs even at small dimensionless particle diameters with relatively high wave numbers. Several experiments were carried out to understand the hyporheic interactions. The seepage layer was modelled using a Hele-Shaw cell, a longitudinal parallel-plate model, with methylene blue as the tracer, and the experiment was conducted for two slopes, 0.1% and 0.2%. It can be concluded that the dimensionless dominant wave numbers are influenced by the combined channel slope and the Froude number of the river flow, and that the residence time of the hyporheic interactions increases with the height of the river layer. A rough comparison between the theoretical analysis and the experimental observations was carried out; the comparison figures show the same tendency in the theory and the experiments.
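The spectral collocation machinery mentioned above rests on the Chebyshev differentiation matrix. The sketch below builds it on Gauss-Lobatto points using the standard construction (following Trefethen's well-known recipe), offered as background rather than as the thesis's actual solver.

    import numpy as np

    def cheb(N):
        """Chebyshev differentiation matrix D and Gauss-Lobatto points x,
        so that D @ f(x) approximates f'(x) with spectral accuracy."""
        if N == 0:
            return np.zeros((1, 1)), np.array([1.0])
        x = np.cos(np.pi * np.arange(N + 1) / N)            # collocation points
        c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
        X = np.tile(x, (N + 1, 1)).T
        dX = X - X.T
        D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))     # off-diagonal entries
        D -= np.diag(D.sum(axis=1))                         # diagonal: negative row sums
        return D, x

    # Sanity check: differentiate sin(x) on [-1, 1].
    D, x = cheb(16)
    print(np.max(np.abs(D @ np.sin(x) - np.cos(x))))        # ~1e-13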
Publication Open Access
Optimal management and operational control of urban sewer systems (University of Strathclyde, 2013) Rathnayake, U. S.
Control of combined sewer networks, like many other real-world problems, is usually characterized by competing and conflicting objectives. Decision makers have a great need to select the best possible control strategy for minimizing combined sewer overflows (CSOs) when controlling the sewer networks. However, the control strategy should also be cost effective to yield a feasible control approach in the real world, and cost effectiveness has become especially important in the present economic recession.
Over the past decades, control strategies have been based on the minimization of CSOs. The aim now, however, is not only to minimize CSOs but also to minimize the impact of these CSOs on natural waters. This research therefore develops a holistic framework for the multi-objective optimization of urban wastewater systems, considering flows and water quality in combined sewers and the cost of wastewater treatment. Pollution levels of several water quality parameters in dry weather flows and stormwater runoff are considered. Pollutographs for several water quality parameters are generated for the stormwater runoff, and the temporal and spatial variations of the runoff are incorporated using these pollutographs for different land uses. Furthermore, pollutographs are developed for different storm conditions, including single, two consecutive and migrating storms. Evolutionary algorithms are used extensively in solving the developed multi-objective optimization problem. Formulations for two different optimization approaches are developed, one for snapshot optimization and the other for dynamic optimization. Simulation results from a full hydraulic model, including water quality routing, are used in the optimization, and the performance of the multi-objective optimization models is tested on a simple interceptor sewer system for several storm conditions. The proposed snapshot optimization approach gives the optimal CSO control settings when a single set of static control settings is used throughout the considered time period, whereas the dynamic optimization approach is capable of producing control strategies over the full duration of the storm period. Furthermore, results for a number of alternative constraint-handling formulations for the developed multi-objective optimization approach are compared, with interesting findings: overall, the constraint-handling formulations developed outside the genetic algorithm (NSGA-II) provide better control of combined sewer networks. In addition, the results of the multi-objective optimization demonstrate the benefits of the optimization approach and its potential to establish the key properties of a range of control strategies through an analysis of the various trade-offs involved. Solutions from the dynamic optimization approach highlight the value of real-time control in combined sewer systems. Given that the technology exists to measure water quality and flow rates, collect data and send feedback to the sewer system through a central processing unit, together with high-performance computers, the developed optimization model is capable of handling present-day concerns about combined sewer systems; it can control existing sewer networks according to the receiving-water regulations and the funds available to the wastewater treatment plants. However, further research is required to apply the developed multi-objective optimization approach to the real-time control of urban sewer systems.
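At the core of the NSGA-II machinery cited above is Pareto dominance between candidate control settings. A minimal illustration follows; the objective vectors, the (CSO volume, cost) pairing and the minimization convention are assumptions for the example, not details from the thesis.

    def dominates(a, b):
        """True if objective vector `a` Pareto-dominates `b` (all objectives
        minimized): no worse in every objective and strictly better in one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(points):
        """Non-dominated subset of a list of objective vectors, e.g.
        (total CSO volume, treatment cost) pairs for candidate settings."""
        return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

    # Hypothetical (CSO volume, cost) pairs for four control strategies.
    print(pareto_front([(5.0, 2.0), (3.0, 4.0), (6.0, 5.0), (3.0, 4.5)]))
    # -> [(5.0, 2.0), (3.0, 4.0)]: the trade-off curve a decision maker chooses from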
Publication Open Access
Opinion Mining and Sentiment Analysis, to Improve Legal Information Efficiency Analysis in Sri Lankan Legal cases (2014) Jayaweera, D. A. N.
The information technology field in Sri Lanka is now growing fast, and other industries are trying to combine their work with information technology to gain more profit and publicity. The legal field is one of the oldest and best-known fields in Sri Lanka.
Most industries can combine with information technology to do new things, for example storing data and analysing data; they need to analyse data to reach correct decisions and to support the decision-making process. Lawyers in Sri Lanka currently use manual methods to store client details and organize their work, which requires a great deal of document writing and reading. When a lawyer takes on a new case, they need to read all the historical cases, analyse them and reach a final decision; searching documents, creating new case files and organizing documents all need to be done efficiently. This thesis proposes a system for analysing the opinions in legal cases and for mining legal cases. Opinions are so important that whenever one needs to make a decision, one wants to hear others' opinions; this is true for both individuals and organizations, so the technology of opinion mining has great scope for practical applications. The concept of opinion is treated in the context of sentiment analysis, covering the main tasks of sentiment analysis and the framework of opinion summarization. Along with these, two relevant and important concepts, subjectivity and emotion, are also introduced; they are highly related to, but not equivalent to, opinion, and existing studies have mostly focused on their intersections with opinion. A system is created to analyse legal information in a proper manner: the research designs the application in a real-world context and introduces an interactive, customizable application for analysing legal information. As the outcome of the research, statistics and survey results on the use of the introduced application for legal information analysis in Sri Lanka are presented.
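To make the core sentiment-analysis idea concrete, here is a minimal lexicon-based scorer. The word lists and scoring rule are hypothetical stand-ins for whatever resources the thesis actually used; a real system would rely on a curated legal-domain lexicon and far richer features.

    # Hypothetical seed lexicons for a legal-domain sentiment scorer.
    POSITIVE = {"granted", "upheld", "favourable", "acquitted"}
    NEGATIVE = {"dismissed", "rejected", "liable", "convicted"}

    def sentiment_score(text):
        """Net sentiment of a passage: +1 per positive cue, -1 per negative cue,
        averaged over the cues found; 0.0 means neutral or no cues present."""
        words = text.lower().split()
        hits = [1 for w in words if w in POSITIVE] + [-1 for w in words if w in NEGATIVE]
        return sum(hits) / len(hits) if hits else 0.0

    print(sentiment_score("The appeal was dismissed and the defendant held liable"))  # -1.0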
Publication Open Access
Development of a Framework for Real-Time Business Intelligence for Managing Risk in Banking Industry (2014) Mahasivam, Pavithra
There is no doubt that data management is a critical component of any financial services firm's risk management strategy in the current market climate. As financial strategies become more complex, new financial instruments are added and businesses continue their expansion across the globe, the need for a coherent and streamlined approach to data management, potentially including the use of real-time data, has never been greater. As decisions in the business world have become more real-time, the systems that support those decisions need to keep up. It is only natural that business intelligence (BI) systems quickly begin to incorporate real-time data in order to increase risk transparency, make better and faster business decisions, evaluate and predict a broader spectrum of risk scenarios, and confidently answer queries from regulatory bodies and internal stakeholders. This research focuses mainly on risk management in the banking industry. It delivers advanced analytics and reporting capabilities to help strategic decision makers navigate data to identify new opportunities, manage and mitigate risks, and make fact-based decisions in a timely manner. Every bank measures and monitors its performance against characteristics known as key performance indicators (KPIs), which help an organization define its progress towards organizational goals.
Important bank KPIs, used by financial regulators to keep track of how well protected a bank is against risk, are identified in the literature. A real-time business intelligence framework is emulated for the banking industry in order to mitigate risk. The framework consists of a changed data capture (CDC) routine and a rule-based engine: the CDC routine captures changed data from the source database and loads it into the data mart online in real time, while the rule-based engine identifies pattern changes in the data, based on the defined parameters, and provides advanced analytics and reporting capabilities.
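To illustrate the CDC-plus-rules pattern described above, here is a minimal polling sketch. The table name, columns, and KPI threshold are hypothetical, and real deployments typically use log-based CDC rather than timestamp polling; this only shows the shape of the loop.

    import sqlite3

    KPI_EXPOSURE_LIMIT = 1_000_000  # hypothetical rule-engine threshold

    def poll_changes(source, mart, last_seen):
        """Copy rows changed since `last_seen` into the data mart and apply one
        rule. Assumes a hypothetical table exposures(id, counterparty, amount,
        modified_at) existing in both databases."""
        rows = source.execute(
            "SELECT id, counterparty, amount, modified_at FROM exposures "
            "WHERE modified_at > ? ORDER BY modified_at", (last_seen,)).fetchall()
        for rid, cpty, amount, ts in rows:
            mart.execute("INSERT OR REPLACE INTO exposures VALUES (?, ?, ?, ?)",
                         (rid, cpty, amount, ts))
            if amount > KPI_EXPOSURE_LIMIT:   # rule-based engine (single rule here)
                print(f"ALERT: exposure {rid} to {cpty} breaches limit: {amount}")
            last_seen = ts                    # advance the CDC checkpoint
        mart.commit()
        return last_seen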
Publication Open Access
Development and testing of a high speed hydraulic manipulator with single time scale visual servoing (Memorial University of Newfoundland, 2014) Liyanage, M. H.
Automation of production processes has made it possible to meet the dramatic demand for manufactured products that has grown with the increase in world population. In existing industries, greater production requirements and improvements in product quality call for faster industrial robots. This study details the design and development of a high-speed visual servoing system for industrial applications, consisting of a high-speed robotic manipulator, a high-speed camera system and an embedded controller. The proposed manipulator has the configuration of a Selective Compliant Assembly Robotic Arm (SCARA) and uses two custom-designed double-vane rotary hydraulic actuators to drive the links of the robot. The SCARA system was mathematically modeled and simulated, and based on the simulation results the hydraulic actuators were sized for optimal performance. A prototype actuator was subsequently designed, manufactured and experimentally evaluated; the test results show that the proposed actuator can reach torques of up to 460 Nm in 30 ms with a payload of 12 kg, which is not possible with electric motors of similar size. The proposed SCARA was then designed and fabricated using these actuators. The end effector of this manipulator could reach velocities of up to 2.7 ms⁻¹ with a payload of 5.3 kg; comparable performance is not feasible with contemporary SCARA-type robots. The robot was designed to handle payloads of up to 15 kg at speeds of up to 2 ms⁻¹, which often results in flexing of the links and twisting of the support column, adding external disturbances to the system. A high-speed camera system was designed and built to obtain the position of the end effector as feedback for the controller. It uses a two-dimensional Position Sensitive Detector as the image sensor, with an electronic circuit designed and built for signal conditioning and data acquisition, calibrated to account for non-linearities in the image sensor. The camera was constructed from this Position Sensitive Detector circuit, a lens and an infra-red filter, and was then calibrated to estimate its extrinsic and intrinsic parameters. The camera can carry out measurements at frequencies of up to 1350 Hz, with an average absolute accuracy of 0.31 mm and 0.37 mm in the x and y directions, respectively. A Field Programmable Gate Array (FPGA) was used in this study as the platform for developing an embedded controller for the robot.
Using contemporary Field Programmable Gate Array technology, a powerful virtual processor can be synthesized and integrated with custom hardware to create a dedicated controller that outperforms some conventional microcontroller- and microprocessor-based designs. The Field Programmable Gate Array based controller takes advantage of both hardware features and virtual processor technology: the input and output interfaces were implemented in hardware, while complex functions that are difficult to implement in hardware were implemented on a virtual soft processor. Four different types of controller were implemented and tested: hardware proportional-derivative, software proportional-derivative, single time scale visual servoing and set-point modification controllers. The proposed implementation carried out single time scale visual servoing at frequencies of up to 330 Hz.
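For reference alongside the controller variants listed above, here is a discrete-time proportional-derivative law of the kind such designs implement. The gains and signal names are hypothetical; only the 330 Hz loop rate is taken from the abstract.

    def make_pd(kp, kd, dt):
        """Discrete PD controller u = kp*e + kd*de/dt with a backward-difference
        derivative; returns a stateful step function."""
        prev_error = 0.0
        def step(setpoint, measurement):
            nonlocal prev_error
            error = setpoint - measurement
            u = kp * error + kd * (error - prev_error) / dt
            prev_error = error
            return u
        return step

    pd = make_pd(kp=8.0, kd=0.15, dt=1.0 / 330.0)  # 330 Hz loop, per the abstract
    print(pd(setpoint=1.0, measurement=0.0))       # control effort for a unit error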
Publication Open Access
Uncertainty factors of the projects in Sri Lankan software industry and a controlling model (2014) De Silva, Induneth
Uncertainty is an inevitable factor in most software projects, not only in Sri Lanka but across the global software industry. Most project managers make decisions and set milestones to ensure that every stakeholder in the project team is working towards the desired deliverables, yet projects still end up with an overrun schedule, an overflowing budget and compromised specifications, or simply die. Software organizations should identify the different kinds of uncertainty a project can tolerate at different stages. For this study, data were collected through questionnaires from several software companies in Sri Lanka to find out what uncertainties they face during the various phases of a project; the management approaches taken when such instances occur were also identified from data collected from various project stakeholders. The uncertainty factors were explored through surveys based on interviews and questionnaires: the interview-based surveys identified the uncertainty factors affecting projects in the Sri Lankan software industry, and a questionnaire-based mass survey was conducted to measure the significance of the response actions at each impact level. It is observed that, for some uncertainty factors, there are response actions that can be taken independently of the impact level of the uncertainty. Furthermore, when selecting a suitable response action for an uncertainty at a particular impact level, the order of preference can be taken into consideration. This study is significant because the data were collected from a large sample of professionals in the Sri Lankan software industry, inferential analysis techniques and hypothesis tests were performed at the 95% confidence level, and the tests satisfied the validity requirements of the data. The mass questionnaire-based survey also verifies the applicability of the response actions for the identified uncertainties at different impact levels.

Publication Open Access
Institutionalization of Knowledge Management in Sri Lankan Business Entities (2014) Kekulawala, K. R. P. L.
The knowledge-based theory of the firm suggests that knowledge is the organizational asset that enables sustainable competitive advantage in competitive environments.
The emphasis on knowledge in today's organizations, especially in the West, is based on the assumption that barriers to the transfer and replication of the knowledge people carry in their heads endow it with strategic importance. Many organizations are developing information systems designed specifically to facilitate the sharing and integration of knowledge; such systems are referred to as Knowledge Management Systems (KMS). The benefits of KM have been empirically proven in the global business realm. Because the concepts of KMS and the formal institutionalization of knowledge are just beginning to appear in Sri Lankan business organizations, little research and field data exist to guide the development and implementation of such systems. Knowledge, to be leveraged as a competitive advantage, has to be instilled or institutionalized into the culture and work ethic of organizations, yet there is no hard evidence that formal, institutionalized knowledge management practices or systems exist in Sri Lankan business organizations. However, some business entities in the country have proven themselves proactive and strategic in their approach and decision making, which suggests the existence of practices that make the best use of what the organizations "know". This study provides an analysis of the current understanding of knowledge management amongst Sri Lankan businesses, the prevalent practices of knowledge use and the extent of institutionalization of KM, through the study of ten high-performing organizations.

Publication Open Access
Geospatial Intelligence on a graph (2014) Kanaka, Tharika
Geographical Information Systems (GIS) have been used over the past few decades across various organizations, and by the public in day-to-day scenarios such as map representation, road navigation and Global Positioning System (GPS) tracking. A great deal of commercial and free GIS software is available for enterprises, and applications for personal use are becoming very popular with the increased use of computers and mobile devices. At the same time, with the rapid growth in computer systems and internet usage, data is piling up exponentially; as a result, concepts such as big data and NoSQL have come into the picture to manipulate those data effectively. Graph databases are one NoSQL data model, delivering significant advantages such as agility, flexibility and performance. Since geographic data is naturally structured like a graph, representing GIS data in a graph structure can support spatial indexing, storage and topology much more effectively than other database types. There is a gap in representing organizational data on maps, with limitations when it comes to detail: for instance, when a restaurant chain represents its branch outlets on a map, it has to stick to the basic set of attributes allowed by the mapping service provider and cannot represent anything beyond that, such as the types of food on sale and their current availability branch by branch. This research develops an implementation combining these two technologies, GIS and graph databases, to achieve central geospatial intelligence on a graph, which delivers benefits in integrating multiple data sources and in querying and analysing them to generate new knowledge.
In the implementation architecture, a graph database stands at the back end, and all transactions are exposed and carried out through a managed Representational State Transfer (REST) application programming interface (API). This API makes the implementation an interoperable platform that can be integrated with many other applications. The front end is implemented in JavaScript with mapping libraries that connect to the REST API back end and load mapping information from a map engine. From the front end, map routing and graph searching can be carried out; for graph results, an optimal routing algorithm is applied for effective results. The outcome of this research is a geospatial intelligence platform implemented on a graph database.
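As a concrete picture of optimal routing over a graph, here is a minimal Dijkstra shortest-path sketch on an adjacency list. The node names and edge weights are hypothetical, and a production system would run such queries inside the graph database or its API layer rather than in application code.

    import heapq

    def dijkstra(graph, start, goal):
        """Shortest path by total edge weight; graph maps node -> [(neighbour, weight)]."""
        queue, seen = [(0.0, start, [start])], set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == goal:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for nxt, w in graph.get(node, []):
                if nxt not in seen:
                    heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
        return float("inf"), []

    # Hypothetical branch-outlet road graph with travel times in minutes.
    roads = {"A": [("B", 4), ("C", 2)], "C": [("B", 1), ("D", 7)], "B": [("D", 3)]}
    print(dijkstra(roads, "A", "D"))  # (6.0, ['A', 'C', 'B', 'D'])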
Publication Open Access
Bio-oligomers as antibacterial agents and strategies for bacterial detection (The University of Edinburgh, 2014-11-28) Kasturiarachchi, J. C.
In this thesis I examined the potential of bio-oligomers, such as peptoids, peptides and aptamers, as therapeutic and diagnostic entities. Therapeutic bio-oligomers: a series of peptoid analogues was designed and synthesised using solid-phase synthesis, and these peptoids were subjected to biological evaluation to determine the structure-activity relationships that define their antimicrobial activity. In total, 13 peptoids were synthesised; of these, only one, called Tosyl-Octyl-Peptoid (TOP), demonstrated significant broad-spectrum bactericidal activity. TOP kills bacteria under both non-dividing and dividing conditions. The minimum inhibitory concentration (MIC) values of TOP for S. epidermidis, E. coli and Klebsiella were 20 µM, whereas those for methicillin-resistant Staphylococcus aureus (MRSA) and methicillin-sensitive Staphylococcus aureus (MSSA) were 40 µM; the highest MIC, 80 µM, was observed for Pseudomonas aeruginosa (PAO1). The selectivity ratio (SR), or therapeutic index (TI), was calculated by dividing the 10% haemolysis activity (5 mM) by the median MIC (50 µM), yielding a TI for TOP of 100, well above the TI of around 20 previously reported for peptidomimetics. TOP demonstrates selective bacterial killing in co-culture systems and intracellular bacterial killing activity. Diagnostic bio-oligomers: in the second part of my thesis, I investigated aptamer- and peptide-based molecular probes to detect MRSA. As well as screening aptamers and peptide probes against whole MRSA, I over-expressed and purified PBP2A protein, which was used as a target for aptamer and peptide probes to detect MRSA. Two different aptamer libraries were initially screened for utility, in-vitro conditions for SELEX were optimised, and biopanning with phage-derived peptides was also performed. Target sequences for both methods were identified and chemically synthesised. Evaluation of the fluorescently labelled sequences with flow cytometry and confocal imaging showed no specificity for MRSA detection with either method; the bio-oligomers and the in-vitro selection methodology require further refinement to improve their diagnostic utility.

Publication Open Access
AnyDbMobileSync: Database Agnostic Synchronization Framework for Mobile Application (2014-12) Abeysingha Gunawardhana, Suminda
This work implements a database-independent data synchronization middleware framework (AnyDBMobileSync) for server-side databases (SQL Server, Oracle, MySQL, etc.) and any client-side data store (HTML5 local storage, WebSQL, SQLite, etc.).
If an offline mobile application uses this middleware sync framework, its back-end database can be changed without extra development on the mobile application. The framework handles the different database types (SQL Server, Oracle, MySQL) that hold the mobile clients' offline data. The framework and the mobile client communicate using a RESTful service, so no extra API needs to be installed on the mobile client side. Data objects are communicated as JSON, and any application language can extract data from a JSON object.

Publication Open Access
An Approach towards Password Protection Based On Typing Style (2014-12) Amarasena, Nelum Chathuranga
The most common user authentication mechanism is password verification: in other words, the characters typed as text in a password field. The aim of this research, however, is to find out whether the rhythm and/or style of typing, that is, how one types instead of what one types (keystroke dynamics), is sufficiently reliable as a security enhancement. This is a biometric approach. Biometric solutions are usually costly, requiring at least one additional sensor, but this study focuses on an economical biometric solution that needs no sensor other than the keyboard. Keystroke dynamics is an interesting biometric because it is invisible to users unless they are physically present, and it does not depend on a dedicated device or hardware infrastructure. When a person types at a keyboard, the detailed timing information describing exactly when each key was pressed and released, together with the variation of speed when moving between two keys, is continuously monitored in order to recognize a unique pattern. A pattern recognition component then operates in stealth mode alongside password verification: after the password is verified successfully, the pattern recognition step must also be completed in order to authenticate the user. The significance of this research is the way the pattern is generated and stored. As the literature study explains, keystroke dynamics is not a very reliable biometric, so the challenge, achieved successfully in this study, was to build a strong pattern that is hard to reveal from a less reliable building block.
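To make the timing features concrete, the sketch below derives the two classic keystroke-dynamics features, dwell time (hold per key) and flight time (release to next press), and compares a sample against a stored template with a simple mean absolute deviation. The event format, threshold, and distance metric are illustrative assumptions, not the pattern-generation scheme of the thesis.

    def features(events):
        """events: list of (press_time, release_time) per key, in seconds.
        Returns dwell times followed by flight times."""
        dwell = [r - p for p, r in events]
        flight = [events[i + 1][0] - events[i][1] for i in range(len(events) - 1)]
        return dwell + flight

    def matches(template, sample, threshold=0.05):
        """Accept if the mean absolute deviation between feature vectors is small."""
        diffs = [abs(a - b) for a, b in zip(template, sample)]
        return sum(diffs) / len(diffs) < threshold

    enrolled = features([(0.00, 0.09), (0.21, 0.30), (0.45, 0.52)])
    attempt  = features([(0.00, 0.10), (0.22, 0.31), (0.47, 0.53)])
    print(matches(enrolled, attempt))  # True: timings stay close to the template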
Publication Open Access
Investigating the enabling role of Web 2.0 technology for interactive e-learning in Australian and Sri Lankan higher education (RMIT University, 2015) Karunasena, A.
Interactions are at the heart of e-learning, as they enable learners to actively develop knowledge, acquire skills and develop a sense of belonging and satisfaction. Much attention is paid to developing interactive e-learning systems that facilitate active interactions between learners and learning resources, instructors and peer learners. Numerous technologies, such as simulation technology and Web 2.0 technology, have been used to facilitate interactive e-learning to date; these technologies support learners in interacting with learning resources, instructors and peer learners to different extents. To facilitate interactive e-learning, it is important for educators and e-learning developers to understand how well such technologies support interactions in e-learning. Web 2.0 technology has recently become popular around the world due to its ease of use, portability and high availability, and much research has been done on how it could be used for interactive e-learning. Existing research, however, has several limitations.
For example, the majority of studies have investigated how a specific Web 2.0 tool supports a specific kind of interaction in e-learning, such as learner-learner interaction. Furthermore, much of the existing research on Web 2.0 based interactive e-learning has been conducted in developed countries; whether Web 2.0 technology supports interactive e-learning in developing countries in a similar manner, or whether developing countries could learn lessons from developed countries on using Web 2.0 technology for interactive e-learning, is therefore not clear. This research aims to investigate the enabling role of Web 2.0 technology for interactive e-learning in higher education in Australia, a developed country, and Sri Lanka, a developing country. To meet this aim, a quantitative research approach is adopted: a conceptual framework for Web 2.0 based interactive e-learning, developed from a comprehensive review of the relevant literature, is validated using survey data collected from learners in universities in Australia and Sri Lanka. The validation of the conceptual framework reveals that Web 2.0 technology supports the three major types of interaction in learning, namely learner-learning resources, learner-instructor and learner-learner interactions, to a great extent in both Australia and Sri Lanka; furthermore, no significant differences are found in how Web 2.0 technology supports interactive e-learning in the two countries. The implication of these findings is that Web 2.0 tools could be used to improve the interactivity of e-learning. Another implication is that new and more interactive e-learning systems can be developed using Web 2.0 technology, in particular for managing learning resources, managing personal knowledge, delivering instructional support and collaborating, in order to improve the effectiveness of e-learning. From a practical perspective, this study presents an in-depth investigation of how Web 2.0 technology can be used to improve the interactivity of e-learning in Australia and Sri Lanka, and provides specific guidelines for developing interactive e-learning environments using Web 2.0 technology. From a theoretical perspective, the research finds that Web 2.0 technology could be used in both developing and developed countries to improve the three major interactions in e-learning.

Publication Open Access
Realtime line parameter estimation using synchrophasor measurements and impact of sampling rates (Wichita State University, 2016) Hettiarachchige-Don, A. C. S.
The installation of synchrophasor measurement units within the electrical grid has provided utilities with the ability to monitor their transmission systems in real time. These real-time observations allow for better situational awareness and rapid responses to adverse system conditions. However, the real-time impedance of the power line is not among the parameters transmitted to the control center, and therefore has to be calculated using the data received from multiple devices. This thesis proposes a simplified methodology for this analysis that requires less computation power than most other proposed estimation techniques; hence it can produce accurate results faster and from a smaller quantity of stored data. For these reasons, the methodology can be implemented to provide near real-time estimation and reporting of impedance values.
For the purposes of this research, only the reactance information is calculated, but a similar approach can be used to obtain resistance information as well. The methodology consists of an algorithm to calculate and estimate the reactance of a line using the reported PMU data. It includes an outlier detection and elimination algorithm, as well as a denoising technique that uses regularized least-squares estimation to accurately estimate the reactance over the analysis period. The proposed methodology is tested using real synchrophasor measurement data from a utility provider, and it can easily be adapted and applied to the estimation and calculation of other parameters using PMU data.
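As an illustration of the regularized least-squares step described above, the sketch below estimates a line's series reactance X from synchronized phasors under the simplified lossless model Vs - Vr = jX I. The ridge weight, the synthetic data, and the neglect of resistance are assumptions made for the example, not details taken from the thesis.

    import numpy as np

    def ridge_reactance(vs, vr, i, lam=1e-3):
        """Estimate scalar reactance X from complex phasor samples vs, vr, i
        using the lossless model vs - vr = jX * i and ridge regression.
        Split into real equations: Re(dv) = -X*Im(i), Im(dv) = X*Re(i)."""
        dv = vs - vr
        a = np.concatenate([-i.imag, i.real])   # regressor column
        b = np.concatenate([dv.real, dv.imag])  # observations
        return (a @ b) / (a @ a + lam)          # closed-form one-dimensional ridge

    # Synthetic check: true X = 0.35 ohm with small measurement noise.
    rng = np.random.default_rng(0)
    i = rng.normal(size=200) + 1j * rng.normal(size=200)
    vs = 0.35j * i + 1.0
    vr = np.ones(200) + 0.01 * (rng.normal(size=200) + 1j * rng.normal(size=200))
    print(ridge_reactance(vs, vr, i))           # approximately 0.35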
