Research Publications Authored by SLIIT Staff
Permanent URI for this community: https://rda.sliit.lk/handle/123456789/4195
This collection includes all SLIIT staff publications presented at external conferences and published in external journals. The materials are organized by faculty to facilitate easy retrieval.
Search Results (2 results)
Publication Embargo
Teaching a tele-robot using natural language commands (IEEE, 2005-11-07) Jayawardena, C; Watanabe, K; Izumi, K
For Internet-based teleoperation systems, user-friendly natural interfaces are advantageous because these systems are intended to be used by non-experts. Natural language communication is essential for developing such user-friendly interfaces. This work presents a system in which a subset of natural language is used to command a tele-robot manipulator performing an object-sorting task. The paper discusses referring to objects with natural language commands such as "pick the small red cube". This is achieved by learning individual lexical symbols that refer to colors, shapes, and sizes independently, and then inferring the meaning of their combinations.

Publication Embargo
Intelligent interface using natural voice and vision for supporting the acquisition of robot behaviors (IEEE, 2006-10-22) Watanabe, K; Jayawardena, C; Izumi, K
Natural language usage for robot control is essential for developing successful human-friendly robotic systems. Although realizing robots with cognitive capabilities high enough to understand natural instructions as humans do is quite difficult, there is high potential for introducing voice interfaces to most existing robotic systems. While there has been some interesting work in this domain, the scope and efficiency of natural-language-controlled robots are usually limited by constraints such as the number of built-in commands, the amount of information contained in a command, and the reuse of excessive commands. We present a multimodal interface for a robotic manipulator that can learn from both human voice instructions and vision input to overcome some of these drawbacks. Results of three experiments are presented: learning situations, learning actions, and learning objects.
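The first abstract describes grounding commands like "pick the small red cube" by learning lexical symbols for colors, shapes, and sizes independently and then composing them. A minimal sketch of that compositional idea is shown below; the lexicon, object representation, and function names are illustrative assumptions, not the papers' actual implementation (which learns the lexicon from interaction rather than hard-coding it):

```python
# Illustrative sketch (not from the cited papers): a hypothetical lexicon
# maps each learned attribute word to an (attribute, value) pair, and a
# command is grounded by composing the constraints of its known words.
LEXICON = {
    "red":   ("color", "red"),
    "blue":  ("color", "blue"),
    "small": ("size",  "small"),
    "large": ("size",  "large"),
    "cube":  ("shape", "cube"),
    "ball":  ("shape", "ball"),
}

def ground_command(command, objects):
    """Return the objects satisfying every attribute word in the command."""
    # Words outside the lexicon ("pick", "the") contribute no constraint.
    constraints = [LEXICON[w] for w in command.lower().split() if w in LEXICON]
    return [
        obj for obj in objects
        if all(obj.get(attr) == value for attr, value in constraints)
    ]

# Example scene for an object-sorting task.
objects = [
    {"id": 1, "color": "red",  "size": "small", "shape": "cube"},
    {"id": 2, "color": "red",  "size": "large", "shape": "cube"},
    {"id": 3, "color": "blue", "size": "small", "shape": "ball"},
]

print(ground_command("pick the small red cube", objects))
# → [{'id': 1, 'color': 'red', 'size': 'small', 'shape': 'cube'}]
```

Because each attribute word is grounded independently, novel combinations (e.g. "large blue ball") resolve without being trained as whole phrases, which is the benefit the abstract attributes to learning the symbols separately.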
