Research Publications Authored by SLIIT Staff
Permanent URI for this community: https://rda.sliit.lk/handle/123456789/4195
This collection includes all SLIIT staff publications presented at external conferences and published in external journals. The materials are organized by faculty to facilitate easy retrieval.
2 results
Search Results
Publication Embargo
Approximate decision making by natural language commands for robots (IEEE, 2006-11-06) Watanabe, K; Jayawardena, C; Izumi, K
Inferring the correct meaning of natural language commands, as judged by the person who issues them, is essential for natural-language-commanded robotic systems. There has been some successful research on this, but one important related aspect has not been addressed: the possibility of learning from natural language commands. Since natural language commands are generated by human users, they contain valuable information. Nevertheless, learning from such commands, as well as interpreting them, faces many challenges due to the inherent subjectiveness of natural language. In this paper, we propose a decision-making process for natural-language-commanded robots that is influenced by certain characteristics of the human decision-making process. The proposed concept is demonstrated in an experiment conducted with a robotic manipulator. First, the robot is controlled with natural language commands to perform some pick-and-place operations, during which it builds a knowledge base. After learning, the robot can perform approximately similar tasks by making approximate decisions with the gained knowledge. A probabilistic neural network is used for the decision making.

Publication Embargo
Intelligent interface using natural voice and vision for supporting the acquisition of robot behaviors (IEEE, 2006-10-22) Watanabe, K; Jayawardena, C; Izumi, K
Natural language usage for robot control is essential for developing successful human-friendly robotic systems. Although realizing robots with high cognitive capabilities that understand natural instructions as humans do is quite difficult, there is high potential for introducing voice interfaces into most existing robotic systems.
Although there has been some interesting work in this domain, the scope and efficiency of natural-language-controlled robots are usually limited by constraints such as the number of built-in commands, the amount of information contained in a command, and the excessive reuse of commands. We present a multimodal interface for a robotic manipulator that can learn from both human voice instructions and vision input to overcome some of these drawbacks. Results of three experiments, i.e., learning situations, learning actions, and learning objects, are presented.
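The first entry above states that a probabilistic neural network (PNN) is used for the approximate decision making. A minimal sketch of the general technique follows; the feature encoding, the action labels, and the kernel width sigma are illustrative assumptions, not details taken from the paper:

```python
import math

def pnn_predict(train, x, sigma=0.5):
    """Classify x with a minimal probabilistic neural network
    (Parzen-window classifier). `train` is a list of
    (feature_vector, class_label) pairs forming the pattern layer."""
    totals = {}  # summed Gaussian activations per class
    counts = {}  # pattern units per class
    for feats, label in train:
        d2 = sum((a - b) ** 2 for a, b in zip(feats, x))
        activation = math.exp(-d2 / (2 * sigma ** 2))
        totals[label] = totals.get(label, 0.0) + activation
        counts[label] = counts.get(label, 0) + 1
    # Summation layer: mean activation per class; output layer: argmax.
    return max(totals, key=lambda c: totals[c] / counts[c])

# Hypothetical knowledge base: command features (e.g. fuzzy "distance"
# and "direction" values extracted from commands) mapped to actions.
train = [([0.1, 0.9], "pick"), ([0.2, 0.8], "pick"),
         ([0.9, 0.1], "place"), ([0.8, 0.2], "place")]
print(pnn_predict(train, [0.15, 0.85]))  # → pick
```

A PNN fits this setting because it needs no iterative training: each stored command becomes a pattern unit, so the knowledge base built during the pick-and-place phase can be used for classification directly.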
