
Browsing by Author "Kiguchi, K"

Now showing 1 - 7 of 7
  • Publication (Embargo)
    Control of redundant manipulators by fuzzy linguistic commands
    (IEEE, 2003-08-04) Pulasinghe, K; Watanabe, K; Izumi, K; Kiguchi, K
    This paper presents a method of controlling a redundant manipulator by spoken language commands consisting of fuzzy linguistic information. The present system introduces the fuzzy-neuro control paradigm to contemporary speech-controlled robotic systems, which are otherwise based on an on-off control paradigm. The system is sensitive to the action activation, action modification, and action repetition commands of human-robot conversation carried out through practical dialogues. The credibility of the proposed system is experimentally demonstrated by controlling a seven-degrees-of-freedom manipulator with spoken language commands enriched with fuzzy linguistic information to perform an assembly task.
  • Publication (Open Access)
    A Fuzzy-Neural Network Based Human-Machine Interface for Voice Controlled Robots Trained by a Particle Swarm Optimization
    (Korean Institute of Intelligent Systems, 2003-09-25) Watanabe, K; Chatterjee, A; Pulasinghe, K; Izumi, K; Kiguchi, K
    Particle swarm optimization (PSO) is employed to train a fuzzy-neural network (FNN), which can serve as an important building block in real-life robot systems controlled by voice-based commands. The FNN is also trained to capture the user's spoken directive in the context of the robot system's present performance. The system has been successfully employed in a real-life situation for navigation of a mobile robot.
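The PSO training described in this abstract can be illustrated with a minimal sketch. The network architecture, toy data, and PSO hyperparameters below are illustrative assumptions, not taken from the paper: a global-best PSO searches the weight space of a tiny feed-forward network directly, with no gradient computation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: map a command strength in [0, 1] to a motor
# signal (an arbitrary smooth target function, for illustration only).
X = np.linspace(0.0, 1.0, 20)
y = np.sin(np.pi * X) * 0.8

def network(w, x):
    """Tiny 1-3-1 tanh network; w holds all 10 weights/biases flattened."""
    w1, b1 = w[:3], w[3:6]
    w2, b2 = w[6:9], w[9]
    hidden = np.tanh(np.outer(x, w1) + b1)   # shape (n, 3)
    return hidden @ w2 + b2                  # shape (n,)

def mse(w):
    return float(np.mean((network(w, X) - y) ** 2))

# Standard global-best PSO over the 10-dimensional weight vector.
n_particles, dim, iters = 30, 10, 200
pos = rng.uniform(-1, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_err = np.array([mse(p) for p in pos])
gbest = pbest[pbest_err.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    # Inertia + cognitive + social terms (w=0.7, c1=c2=1.5).
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    err = np.array([mse(p) for p in pos])
    improved = err < pbest_err
    pbest[improved], pbest_err[improved] = pos[improved], err[improved]
    gbest = pbest[pbest_err.argmin()].copy()

print(f"final MSE: {mse(gbest):.4f}")
```

By construction the global best only improves, so this derivative-free search is a drop-in alternative to backpropagation when the network output (or its error) is not differentiable.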
  • Publication (Embargo)
    Modular fuzzy-neuro controller driven by spoken language commands
    (IEEE, 2004-01-30) Pulasinghe, K; Watanabe, K; Izumi, K; Kiguchi, K
    We present a methodology for controlling machines using spoken language commands. The two major problems relating to speech interfaces for machines, namely, the interpretation of words with fuzzy implications and the out-of-vocabulary (OOV) words in natural conversation, are investigated. The system proposed in this paper is designed to overcome these two problems in controlling machines using spoken language commands. The present system consists of a hidden Markov model (HMM) based automatic speech recognizer (ASR), with a keyword spotting system to capture the machine-sensitive words from running utterances, and a fuzzy-neural network (FNN) based controller to represent the words with fuzzy implications in spoken language commands. The significance of the words, i.e., their contextual meaning according to the machine's current state, is introduced to the system to obtain output that more realistically matches the user's desire. Modularity of the system is also considered, to generalize the methodology to systems with heterogeneous functions without diminishing performance. The proposed system is experimentally tested by navigating a mobile robot in real time using spoken language commands.
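The pipeline sketched in this abstract, spotting machine-sensitive keywords in a running utterance and then interpreting fuzzy modifiers in context, can be shown in miniature. The vocabulary, modifier gains, and interpretation rule below are invented for illustration; the paper's actual HMM-based spotter and FNN controller are far richer.

```python
# Hypothetical vocabulary; the paper's actual word lists and fuzzy
# sets are not given in the abstract.
ACTIONS = {"go": 1.0, "stop": 0.0, "reverse": -1.0}
MODIFIERS = {"slowly": 0.3, "little": 0.5, "very": 1.5, "fast": 1.2}

def spot_keywords(utterance):
    """Keyword-spotting stand-in: keep only machine-sensitive words,
    silently dropping out-of-vocabulary (OOV) filler words."""
    return [w for w in utterance.lower().split()
            if w in ACTIONS or w in MODIFIERS]

def interpret(keywords, current_speed):
    """Produce a crisp speed from fuzzy-flavored modifiers. Context
    (the machine's current state) is the starting point, echoing the
    'significance of the words' idea in the abstract."""
    speed = current_speed
    for w in keywords:
        if w in ACTIONS:
            speed = ACTIONS[w]      # activation command: set a base action
        elif w in MODIFIERS:
            speed *= MODIFIERS[w]   # modification command: scale it
    return max(-1.0, min(1.0, speed))

words = spot_keywords("please go a little bit slowly now")
print(words)                        # ['go', 'little', 'slowly']
print(interpret(words, 0.0))
```

The point of the sketch is the division of labor: the spotter makes the system robust to OOV words, while the interpreter turns graded linguistic modifiers into a single crisp control value.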
  • Publication (Embargo)
    A novel modular neuro-fuzzy controller driven by natural language commands
    (IEEE, 2001-07-27) Watanabe, K; Pulasinghe, K; Kiguchi, K; Izumi, K
    A method of interpreting imprecise natural language commands in a machine-understandable manner is presented in this paper. The proposed method eases the process of man-machine interaction by combining artificial neural networks and fuzzy logic, both widely used to mimic human behavior across areas of artificial intelligence. The proposed system tries to understand the natural language command rather than merely recognize it. The distinctive abilities of artificial neural networks in pattern recognition and classification, and of fuzzy systems in manipulating imprecise data, are merged to recognize the machine-sensitive words in the natural language command and then interpret them to the machine in a machine-identifiable manner. The modular design breaks the complete task into manageable parts, each of which is vital to bridging the so-called man-machine gap.
  • Publication (Open Access)
    Training of Fuzzy-Neural Network for Voice-Controlled Robot Systems by a Particle Swarm Optimization
    (Institute of Control, Robotics and Systems, 2003-10-23) Watanabe, K; Chatterjee, A; Pulasinghe, K; Jin, S. O; Izumi, K; Kiguchi, K
    The present paper shows the possible development of particle swarm optimization (PSO) based fuzzy-neural networks (FNN), which can be employed as an important building block in real-life robot systems controlled by voice-based commands. The PSO is employed to train the FNNs, which can accurately output the crisp control signals for the robot systems based on fuzzy linguistic spoken language commands issued by a user. The FNN is also trained to capture the user's spoken directive in the context of the robot system's present performance. Hidden Markov model (HMM) based automatic speech recognizers are developed as part of the entire system, so that the system can identify important user directives from running utterances. The system is successfully employed in a real-life situation for motion control of a redundant manipulator.
  • Publication (Open Access)
    Voice Communication in Performing a Cooperative Task with a Robot
    (Springer, Tokyo, 2002) Pulasinghe, K; Watanabe, K; Kiguchi, K; Izumi, K
    This paper investigates the credibility of voice (especially natural language commands) as a communication medium for sharing the advanced sensory capacity and knowledge of a human with a robot to perform a cooperative task. Identification of the machine-sensitive words in the unconstrained speech signal and interpretation of the imprecise natural language commands for the machine are considered. The system constituents include a hidden Markov model (HMM) based continuous automatic speech recognizer (ASR) to identify the lexical content of the user's speech signal, a fuzzy neural network (FNN) to comprehend the natural language (NL) contained in the identified lexical content, an artificial neural network (ANN) to activate the desired functional ability, and control modules to generate output signals to the actuators of the machine. The characteristic features have been tested experimentally by using them to navigate a Khepera® robot in real time with the user's visual information transferred by speech signals.
  • Publication (Open Access)
    Voice-controlled modular fuzzy neural controller with enhanced user autonomy
    (Springer-Verlag, 2003-03) Pulasinghe, K; Watanabe, K; Kiguchi, K; Izumi, K
    In this article, a fuzzy neural network (FNN) based approach is presented to interpret imprecise natural language (NL) commands for controlling a machine. This system (1) interprets fuzzy linguistic information in NL commands for machines, (2) introduces a methodology to implement the contextual meaning of NL commands, and (3) recognizes machine-sensitive words from running utterances consisting of both in-vocabulary and out-of-vocabulary words. The system achieves these capabilities through an FNN, which is used to interpret fuzzy linguistic information; a hidden Markov model based keyword spotting system, which is used to identify machine-sensitive words among unrestricted user utterances; and a possible framework to insert the contextual meaning of words into the knowledge base employed in the fuzzy reasoning process. The system is a complete integration that converts imprecise NL command inputs into their corresponding output signals in order to control a machine. The performance of the system is examined by navigating a mobile robot in real time with unconditional speech utterances.
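The final stage these abstracts describe, turning fuzzy linguistic information into a crisp output signal, can be sketched with standard Mamdani-style max-min inference and centroid defuzzification. The membership functions and rule firing strengths below are illustrative assumptions, not the articles' actual design.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Illustrative output fuzzy sets for a motor speed on [0, 1].
SPEED_SETS = {
    "slow":   lambda v: tri(v, -0.01, 0.0, 0.5),
    "medium": lambda v: tri(v, 0.0, 0.5, 1.0),
    "fast":   lambda v: tri(v, 0.5, 1.0, 1.01),
}

def defuzzify(firing, steps=100):
    """Centroid of the clipped (min) and combined (max) output sets,
    computed by simple numerical integration over [0, 1]."""
    num = den = 0.0
    for i in range(steps + 1):
        v = i / steps
        mu = max(min(firing.get(name, 0.0), mf(v))
                 for name, mf in SPEED_SETS.items())
        num += v * mu
        den += mu
    return num / den if den else 0.0

# A command like "move fast" might fire the 'fast' rule strongly
# and 'medium' weakly; the centroid yields one crisp speed value.
crisp = defuzzify({"fast": 0.9, "medium": 0.2})
print(f"crisp speed: {crisp:.2f}")
```

Defuzzification is what lets graded linguistic commands drive an actuator that ultimately needs a single number, which is the conversion step all seven abstracts revolve around.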

Copyright 2025 © SLIIT. All Rights Reserved.
