Please use this identifier to cite or link to this item: https://rda.sliit.lk/handle/123456789/820
Full metadata record

DC Field                      Value                                     Language
dc.contributor.author         Arjuna, P.H.D.                            -
dc.contributor.author         Srimal, S.                                -
dc.contributor.author         Buddhika, A.G.                            -
dc.contributor.author         Jayasekara, P.                            -
dc.date.accessioned           2022-01-28T09:17:19Z                      -
dc.date.available             2022-01-28T09:17:19Z                      -
dc.date.issued                2017-01-26                                -
dc.identifier.issn            1800-3591                                 -
dc.identifier.uri             http://localhost:80/handle/123456789/820  -
dc.description.abstract       Voice commands have been used as the basic method of interaction between humans and robots over the years. Voice interaction is natural and requires no additional technical knowledge. However, when using voice commands, humans frequently convey uncertain information. In the case of object manipulation on a table, frequently used uncertain terms include "Left", "Right", "Middle", "Front", etc. These terms fail to depict an exact location on the table, and their interpretation is governed by the robot's point of view. Depending solely on vocal cues is not ideal, as it requires users to explain the exact location with more words and phrases, making the interaction process cumbersome and less human-like. However, using hand gestures to pinpoint a location is as natural as using voice commands and is frequently done when manipulating items on a surface. Compared to voice commands, the use of hand gestures is a more direct and less cumbersome approach. But when used alone, hand gestures can result in errors while extracting the pointed location, leaving the user dissatisfied. This paper proposes a multi-modal interaction method that uses hand gestures combined with voice commands to interpret uncertain information when placing an object on a table. Two fuzzy inference systems are used to interpret the uncertain terms related to the two axes of the table. The proposed system has been implemented on an assistive robot platform, and experiments have been conducted to analyze the behaviour of the system.  en_US
dc.language.isoenen_US
dc.publisherFaculty of Graduate Studies and Researchen_US
dc.relation.ispartofseriesVol.6;-
dc.subjectmulti-modal human robot interactionen_US
dc.subjectobject manipulationen_US
dc.subjectinterpreting uncertain informationen_US
dc.titleA Multi-modal Approach for Enhancing Object Placementen_US
dc.typeArticleen_US
dc.identifier.doi10.1109/NCTM.2017.7872821en_US
Appears in Collections: Proceedings of the 6th National Conference on Technology & Management - NCTM 2017
Research Papers - IEEE
Research Papers - Open Access Research

Files in This Item:

File          Description    Size       Format
07872821.pdf                 330.62 kB  Adobe PDF
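The abstract describes interpreting uncertain spatial terms ("Left", "Middle", "Right") with a fuzzy inference system per table axis. As a rough illustration only, and not the authors' implementation, the sketch below maps one such term to a point on a normalized table axis using assumed triangular membership functions and centroid defuzzification; the term anchors and the [0, 1] axis normalization are both assumptions.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c.

    Shoulder cases (a == b or b == c) are handled by the early returns,
    so the slopes below never divide by zero.
    """
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical term anchors on a normalized table axis [0, 1];
# the paper does not publish its membership functions.
TERMS = {
    "left":   (0.0, 0.0, 0.5),
    "middle": (0.25, 0.5, 0.75),
    "right":  (0.5, 1.0, 1.0),
}

def interpret(term, steps=100):
    """Centroid (centre-of-gravity) defuzzification of a single term."""
    a, b, c = TERMS[term]
    xs = [i / steps for i in range(steps + 1)]
    mu = [tri(x, a, b, c) for x in xs]
    return sum(x * m for x, m in zip(xs, mu)) / sum(mu)

print(f"middle -> {interpret('middle'):.3f}")  # near 0.5, the table centre
```

In the paper's setting a second, independent inference system of the same shape would cover the other table axis, and gesture input would further constrain the region before defuzzification.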

