Browsing by Author "Jayawardena, S"
Now showing 1 - 3 of 3
Publication Embargo
Eigenface based automatic facial feature tagging (IEEE, 2008-12-12)
Wijeratne, S; Jayawardena, S; Jayasooriya, S; Lokupathirage, D; Patternot, M; Kodagoda, N

There are several approaches to searching databases of faces. However, such methods still require significant human involvement, for example to interpret an eyewitness account. In many cases these searches are done with visual tools, such as building a graphical face model. A system that general users can interact with easily should be able to search for a person directly from a verbal or textual description, which would reduce the cost of the search process. Identifying facial feature characteristics would be a stepping stone towards cataloguing large face databases automatically, making a text-based, description-driven face search possible. This paper examines the possibility of using the eigenface approach to recognize different characteristics of a facial feature and to assign descriptive words such as "Large" or "Small" to each feature. After training, the system attempts to match the input image to the pattern in the training set that best describes it, and outputs the tag associated with that pattern. This effectively allows an image of a person's face to be tagged by his or her feature characteristics. While the standard steps of the eigenface algorithm are followed, slight modifications are made to the step that matches input images against the training set. The chosen training set has a very large impact on the final outcome, and because the training is subjective, future research will address this aspect. The investigation showed that the method works well for well-defined features such as eyes, but fails for features such as foreheads, because such features lack significant distinguishing characteristics.
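The matching step the abstract describes, projecting a probe image onto an eigenfeature basis and returning the tag of the nearest training pattern, can be sketched roughly as follows. This is a minimal illustration of the general eigenface technique with synthetic data, not the authors' implementation; all function names and parameters are hypothetical:

```python
import numpy as np

def train_eigenspace(faces, n_components=4):
    """Build an eigenface basis from training images (one flattened image per row)."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data yields the principal components ("eigenfaces")
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]
    weights = centered @ basis.T  # each training face as coordinates in eigenspace
    return mean, basis, weights

def tag_feature(image, mean, basis, weights, tags):
    """Project an input feature image and return the tag of the closest training pattern."""
    w = (image - mean) @ basis.T
    distances = np.linalg.norm(weights - w, axis=1)
    return tags[int(np.argmin(distances))]
```

With a training set whose images are labelled "Large" or "Small", a probe is tagged by the label of its nearest neighbour in the reduced eigenspace, matching the tagging behaviour the abstract outlines.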
Hence, while the eigenface approach can be used to categorize well-defined features, it cannot by itself cover all the features of a face.

Publication Embargo
Mixed Reality Supermarket: A Modern Approach into Day-to-Day Grocery Shopping (IEEE, 2020-11-04)
Weerasinghe, N; Jayawardena, S; Mahawatta, D; Navaratne, H; Sriyaratna, D; Gamage, I

In a modern world with massive trends in the development and adoption of new technologies, the combination of Virtual Reality and Augmented Reality has key potential. The main concept behind Virtual Reality is immersing users in a virtual environment from the comfort of their own place. This is done by creating a computer-generated 3D environment with hand-gesture navigation, combined with voice recognition, image processing and machine learning to support rich human interaction. In the 21st century, where technological transformation is blurring the line between fiction and reality, more and more people need to fulfil their daily requirements easily, without wasting valuable time. Buying day-to-day necessities from a supermarket is one of the main activities each of us struggles through during the day. Targeting this simple daily activity, this research applies VR technology to provide a new technological experience for purchasing items from a supermarket. It can help consumers minimize wasted time while still giving them the real experience of shopping, including exposure to marketing.

Publication Embargo
ROS Supported Heterogeneous Multiple Robots Registration and Communication with User Instructions (IEEE, 2022-02-23)
Rajapaksha, S; Jayawardena, S; MacDonald, B
Different types of heterogeneous service robots work in the same environment, such as a smart house, to help humans in many ways. These service robots have different capabilities based on their different control and communication systems. Middleware such as the Robot Operating System (ROS) has reduced the complexity of programming robots. However, controlling and communicating with heterogeneous service robots through very high-level instructions is still tricky because their ROS topics differ. If a user issues a high-level instruction to all the heterogeneous robots in the same environment, every robot must complete the given task regardless of the software differences between them. This research developed a web-based interface through which users can issue high-level instructions to all heterogeneous robots running in the same environment. Three levels of instruction are used. Level 01 moves a robot forward at a given speed for a given distance, without obstacles. Level 02 moves a robot to a specific location without obstacles. Level 03 navigates a robot to a goal with obstacles in the environment. Initially, all robots register their software specifications and their hardware specifications in the Unified Robot Description Format (URDF) with the robot registration engine, supported by an ontology. All service robots then act according to the instructions given by the interpreter. The proposed system was evaluated in a simulated Gazebo environment using the "Turtlebot" and "Tiago" robots. The time complexity of all algorithms was analysed using Big O notation.
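The register-then-dispatch idea described above can be sketched in plain Python. This is a hypothetical illustration of the concept, not the authors' system; the class and method names are invented, and the two velocity-topic strings are assumptions about how each robot might differ:

```python
class RobotRegistry:
    """Hypothetical sketch: robots register their software specifics (here, just
    the name of their velocity topic), and the interpreter dispatches one
    high-level instruction to every robot regardless of those differences."""

    def __init__(self):
        self._robots = {}

    def register(self, name, cmd_vel_topic):
        # Each robot declares its own command topic at registration time
        self._robots[name] = cmd_vel_topic

    def dispatch_level1(self, speed, distance):
        """Level 01: move every registered robot forward at `speed` (m/s)
        for `distance` (m), assuming no obstacles."""
        commands = []
        for name, topic in self._robots.items():
            duration = distance / speed  # drive time needed to cover the distance
            commands.append((name, topic, {"linear_x": speed, "seconds": duration}))
        return commands
```

For example, a "turtlebot" publishing on `/cmd_vel` and a "tiago" publishing on `/mobile_base_controller/cmd_vel` could both receive the same Level 01 instruction, with the registry hiding the topic difference from the user.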
