
Advances in Human-Robot Interaction (PDF)

352 pages · 2009 · 49.184 MB · English


Advances in Human-Robot Interaction
Edited by Vladimir A. Kulyukin

Published by In-Teh
In-Teh, Olajnica 19/2, 32000 Vukovar, Croatia

Abstracting and non-profit use of the material is permitted with credit to the source. Statements and opinions expressed in the chapters are those of the individual contributors and not necessarily those of the editors or publisher. No responsibility is accepted for the accuracy of information contained in the published articles. The publisher assumes no responsibility or liability for any damage or injury to persons or property arising out of the use of any materials, instructions, methods, or ideas contained inside. After this work has been published by In-Teh, authors have the right to republish it, in whole or in part, in any publication of which they are an author or editor, and to make other personal use of the work.

© 2009 In-Teh
www.in-teh.org
Additional copies can be obtained from: [email protected]
First published December 2009
Printed in India
Technical Editor: Teodora Smiljanic

Advances in Human-Robot Interaction, Edited by Vladimir A. Kulyukin
p. cm.
ISBN 978-953-307-020-9

Preface

Rapid advances in the field of robotics have made it possible to use robots not just in industrial automation but also in entertainment, rehabilitation, and home service. Since robots will likely affect many aspects of human existence, fundamental questions of human-robot interaction must be formulated and, if at all possible, resolved. Some of these questions are addressed in this collection of papers by leading HRI researchers.

Readers may take several paths through the book. Those who are interested in personal robots may wish to read Chapters 1, 4, and 7. Multi-modal interfaces are discussed in Chapters 1 and 14. Readers who wish to learn more about knowledge engineering and sensors may want to take a look at Chapters 2 and 3. Emotional modeling is covered in Chapters 4, 8, 9, 16, and 18.
Various approaches to socially interactive robots and service robots are offered and evaluated in Chapters 7, 9, 13, 14, 16, 18, and 20. Chapter 5 is devoted to smart environments and ubiquitous computing. Chapter 6 focuses on multi-robot systems. Android robots are the topic of Chapters 8 and 12. Chapters 6, 10, 11, and 15 discuss performance measurements. Chapters 10 and 12 may be beneficial to readers interested in human motion modeling. Haptic and natural language interfaces are the topics of Chapters 11 and 14, respectively. Military robotics is discussed in Chapter 15. Chapter 17 is on cognitive modeling. Chapter 19 focuses on robot navigation. Chapters 13 and 20 cover several HRI issues in assistive technology and rehabilitation. For convenience of reference, each chapter is briefly summarized below.

In Chapter 1, Mamiko Sakata, Mieko Marumo, and Kozaburo Hachimura contribute to the investigation of non-verbal communication with personal robots. The objective of their research is to study the mechanisms of expressing personality through body motions and to classify the motion types that personal robots should be given in order to express specific personality or emotional impressions. The researchers employ motion-capturing techniques to obtain human body movements from the motions of Nihon-buyo, a traditional Japanese dance. They argue that dance, as a motion form, allows for more artistic body motions than everyday human body motions and makes it easier to discriminate the emotional factors that personal robots should be capable of displaying in the future.

In Chapter 2, Atilla Elçi and Behnam Rahnama address the problem of giving autonomous robots a sense of self, immediate ambience, and mission.
Specific techniques are discussed to endow robots with self-localization, detection and correction of course deviation errors, faster and more reliable identification of friend or foe, and simultaneous localization and mapping in unfamiliar environments. The researchers argue that advanced robots should be able to reason about the environments in which they operate. They introduce the concept of Semantic Intelligence (SI) and attempt to distinguish it from traditional AI.

In Chapter 3, Xianming Ye, Byungjune Choi, Hyouk Ryeol Choi, and Sungchul Kang propose a compact handheld pen-type texture sensor for the measurement of fine texture. The proposed sensor is designed with a metal contact probe and can measure the roughness and frictional properties of a surface. The sensor reduces the size of the contact area and separates normal stimuli from tangential ones, which facilitates the interpretation of the relation between dynamic responses and surface texture. 3D contact forces can be used to estimate the surface profile along the path of exploration.

In Chapter 4, Sébastien Saint-Aimé, Brigitte Le-Pévédic, and Dominique Duhaut investigate the question of how to create robots capable of behavior enhancement through interaction with humans. They propose the minimal number of degrees of freedom necessary for a companion robot to express six primary emotions. They propose iGrace, a computational model of emotional reasoning, and describe experiments to validate several hypotheses about the length and speed of robotic expressions, methods of information processing, response consistency, and emotion recognition.

In Chapter 5, Takeshi Sasaki, Yoshihisa Toshima, Mihoko Niitsuma, and Hideki Hashimoto investigate how human users can interact with smart environments or, as they call them, iSpaces (intelligent spaces). They propose two human-iSpace interfaces: a spatial memory and a whistle interface. The spatial memory uses three-dimensional positions.
When a user specifies digital information that indicates a position in the space, the system associates the 3D position with that information. The whistle interface uses the frequency of a human whistle as a trigger to call a service. This interface is claimed to work well in noisy environments because whistles are easily detectable. The authors also describe an information display system using a pan-tilt projector; the system consists of a projector on a pan-tilt enabled stand and can project an image toward any position. They present experimental results with the developed system.

In Chapter 6, Jijun Wang and Michael Lewis present an extension of Crandall's Neglect Tolerance model. Neglect tolerance estimates the period of time after human intervention ends but before a performance measure drops below an acceptable threshold. In this period, the operator can perform other tasks. If the operator works with other robots over this time period, neglect tolerance can be extended to estimate the overall number of robots under the operator's control. The researchers' main objective is to develop a computational model that accommodates both coordination demands and heterogeneity in robotic teams. They present an extension of the Neglect Tolerance model and a multi-robot system simulator that they used in validation experiments. The experiments attempt to measure coordination demand under strong and weak cooperation conditions.

In Chapter 7, Kazuki Kobayashi and Seiji Yamada consider the situation in which a human cooperates with a service robot, such as a sweeping robot or a pet robot. Service robots often need their users' assistance when they encounter difficulties that they cannot overcome independently. One example given in this chapter is a sweeping robot unable to navigate around a table or a chair and needing the user's assistance to move the obstacle out of its way. The problem is how to enable a robot to inform its user that it needs help.
They propose a novel method for making a robot express its internal state (referred to as the robot's mind) to request the user's help. Robots can express their minds both verbally and non-verbally. The proposed non-verbal expression centers on movement based on motion overlap (MO), which enables the robot to move in a way that lets the user narrow down possible responses and act appropriately. The researchers describe an implementation on a real mobile robot and discuss experiments with participants to evaluate the implementation's effectiveness.

In Chapter 8, Takashi Minato and Hiroshi Ishiguro present a study of human-like robotic motion during interaction with other people. They experiment with an android endowed with motion variety. They hypothesize that if a person attributes the cause of motion variety in an android to the android's mental states, physical states, and social situations, the person forms a more humanlike impression of the android. Their chapter focuses on intentional motion caused by the social relationship between two agents. They consider the specific case in which one agent reaches out and touches another person. They present a psychological experiment in which participants watch an android touch a human or an object and report their impressions.

In Chapter 9, Kazuhiro Taniguchi, Atsushi Nishikawa, Tomohiro Sugino, Sayaka Aoyagi, Mitsugu Sekimoto, Shuji Takiguchi, Kazuyuki Okada, Morito Monden, and Fumio Miyazaki propose a method for objectively evaluating psychological stress in humans who interact with robots. The researchers argue that there is a large disparity between the image of robots in popular fiction and their actual appearance in real life. Therefore, to facilitate human-robot interaction, we need not only to improve robots' physical and intellectual abilities but also to find effective ways of evaluating the psychological stress experienced by humans when they interact with robots.
The authors evaluate human stress with acceleration pulse waveforms and the saliva constituents of a surgeon using a surgical assistant robot.

In Chapter 10, Woong Choi, Tadao Isaka, Hiroyuki Sekiguchi, and Kozaburo Hachimura give a quantitative analysis of leg movements. They use simultaneous measurements of body motion and electromyograms to assess biophysical information. The investigators used two expert Japanese traditional dancers as the subjects of their experiments. The experiments show that the more experienced dancer has effective co-contraction of the antagonistic muscles of the knee and ankle and less center-of-gravity transfer than the less experienced dancer. An observation is made that the more experienced dancer can efficiently perform dance leg movements with less electromyographic activity than the less experienced counterpart.

In Chapter 11, Tsuneo Yoshikawa, Masanao Koeda, and Munetaka Sugihashi identify handedness as an important factor in designing tools and devices that are to be handled by people using their hands. The researchers propose a quantitative method for evaluating the handedness and dexterity of a person on the basis of the person's performance in test tasks (accurate positioning, accurate force control, and skillful manipulation) in a virtual world, using haptic virtual reality technology. Factor scores are obtained for the right and left hands of each subject, and the subject's degree of handedness is defined as the difference of these factor scores. The investigators evaluated the proposed method with ten subjects and found that it was consistent with measurements obtained from the traditional laterality quotient method.

In Chapter 12, Tomoo Takeguchi, Minako Ohashi, and Jaeho Kim argue that service robots may have to walk along with humans for special care. In this situation, a robot must be able to walk like a human and to sense how the human walks. The researchers analyze 3D walking with rolling motion.
3D modeling and simulation analysis were performed to find better walking conditions and structural parameters. The investigators describe a 3D passive dynamic walker that was manufactured to analyze passive dynamic walking experimentally.

In Chapter 13, Yasuhisa Hirata, Takuya Iwano, Masaya Tajika, and Kazuhiro Kosuge propose a wearable walking support system, called the Wearable Walking Helper, which is capable of supporting walking activity without using biological signals. The system computes the support moment of the user's joints using an approximated human model: a four-link open chain mechanism on the sagittal plane. The system consists of a knee orthosis, a prismatic actuator, and various sensors. The knee joint of the orthosis has one degree of freedom and rotates around the center of the user's knee joint on the sagittal plane; it is a geared dual hinge joint. The prismatic actuator includes a DC motor and a ball screw. The device generates a support moment around the user's knee joint.

In Chapter 14, Tetsushi Oka introduces the concept of a multimodal command language to direct home-use robots. The author introduces RUNA (Robot Users' Natural Command Language), a multimodal command language designed to allow the user to direct home-use robots by using hand gestures or pressing remote control buttons. The language consists of grammar rules and words for spoken commands based on the Japanese language. It also includes non-verbal events, such as touch actions, button press actions, and single-hand and double-hand gestures. The proposed command language is sufficiently flexible in that the user can specify action types (walk, turn, switchon, push, and moveto) and action parameters (speed, direction, device, and goal) by using both spoken words and nonverbal messages.
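To illustrate the kind of command structure such a language supports, the sketch below shows one hypothetical way a spoken command and a nonverbal event could be merged into a single action with a type and parameters. All names and the data layout are invented for illustration; this is not RUNA's actual grammar or implementation.

```python
from dataclasses import dataclass, field

# Action types and parameter names mirror the examples in the chapter
# summary (walk, turn, switchon, push, moveto; speed, direction, device,
# goal). Everything else here is an assumed, illustrative design.
ACTION_TYPES = {"walk", "turn", "switchon", "push", "moveto"}

@dataclass
class Command:
    action: str                                  # one of ACTION_TYPES
    params: dict = field(default_factory=dict)   # e.g. speed, direction

def merge_modalities(spoken: dict, nonverbal: dict) -> Command:
    """Combine spoken-command slots with nonverbal events; the nonverbal
    channel fills parameter slots that the speech left unspecified."""
    action = spoken.get("action") or nonverbal.get("action")
    if action not in ACTION_TYPES:
        raise ValueError(f"unknown action: {action!r}")
    # Spoken parameters take precedence over nonverbal ones.
    params = {**nonverbal.get("params", {}), **spoken.get("params", {})}
    return Command(action, params)

# A user says "turn slowly" while a single-hand gesture indicates "left":
cmd = merge_modalities(
    spoken={"action": "turn", "params": {"speed": "slow"}},
    nonverbal={"params": {"direction": "left"}},
)
print(cmd)  # Command(action='turn', params={'direction': 'left', 'speed': 'slow'})
```

The point of the sketch is the flexibility claimed for the language: either channel can supply an action parameter, so a command remains interpretable whether the user speaks it fully or completes it with a gesture or button press.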
In Chapter 15, Jessie Chen examines if and how aided target recognition (AiTR) cueing capabilities facilitate multitasking (including operating a robot) by gunners in a military tank crew station environment. The author investigates whether gunners can perform their primary task of maintaining local security while performing two secondary tasks: managing a robot and communicating with fellow crew members. Two simulation experiments are presented. The findings suggest that reliable automation, such as AiTR, for one task benefits not only the automated task but also the concurrent tasks.

In Chapter 16, Eun-Sook Jee, Yong-Jeon Cheong, Chong Hui Kim, Dong-Soo Kwon, and Hisato Kobayashi investigate the process of emotional sound production in order to enable robots to express emotion effectively and to facilitate the interaction between humans and robots. They use the explicit or implicit link between emotional characteristics and musical parameters to compose six emotional sounds: happiness, sadness, fear, joy, shyness, and irritation. The sounds are analyzed to identify a method for improving a robot's emotional expressiveness. To synchronize emotional sounds with robotic movements and gestures, the emotional sounds are divided into several segments in accordance with musical structure. The researchers argue that the existence of repeatable sound segments enables robots to better synchronize their behaviors with sounds.

In Chapter 17, Eiji Hayashi discusses a Consciousness-based Architecture (CBA) synthesized from a mechanistic expression model of animal consciousness and behavior advocated by the Vietnamese philosopher Tran Duc Thao. The CBA has an evaluation function for behavior selection and controls the agent's behavior. The author argues that it is difficult for a robot to behave autonomously if it relies exclusively on the CBA.
To achieve such autonomous behavior, it is necessary to continuously produce behavior in the robot and to change the robot's consciousness level. The author proposes a motivation model to induce conscious, autonomous changes in behavior. The model is combined with the CBA and serves as an input to it. The modified CBA was implemented in a Conscious Behavior Robot (Conbe-I), a robotic arm with a three-fingered hand in which a small monocular CCD camera is installed. A study of the robot's behavior is presented.

In Chapter 18, Anja Austermann and Seiji Yamada argue that learning robots can use feedback from their users as a basis for learning and adapting to their users' preferences. The researchers investigate how to enable a robot to learn to understand natural, multimodal approving or disapproving feedback given in response to the robot's moves. They present and evaluate a method for learning a user's feedback in human-robot interaction. Feedback from the user comes in the form of speech, prosody, and touch. These types of feedback are found to be sufficiently reliable for teaching a robot by reinforcement learning.

In Chapter 19, Kohji Kamejima introduces a fractal representation of maneuvering affordance based on the randomness ineluctably distributed in naturally complex scenes. The author describes a method to extract the scale shift of random patterns from a scene image and to match it to the a priori direction of a roadway. Based on scale space analysis, the probability of capturing not-yet-identified fractal attractors is generated within the roadway pattern to be detected. Such an in-situ design process yields anticipative models for the road-following process. The randomness-based approach yields a design framework for machine perception that shares man-readable information, i.e., the natural complexity of textures and chromatic distributions.
In Chapter 20, Vladimir Kulyukin and Chaitanya Gharpure describe their work on robot-assisted shopping for the blind and visually impaired. In their previous research, the researchers developed RoboCart, a robotic shopping cart for the visually impaired. Here they focus on how blind shoppers can select a product from a repository of thousands of products, thereby communicating the target destination to RoboCart. This task becomes time critical in opportunistic grocery shopping, when the shopper does not have a prepared list of products. Three intent communication modalities (typing, speech, and browsing) are evaluated in experiments with 5 blind and 5 sighted, blindfolded participants on a public online database of 11,147 household products. The mean selection time differed significantly among the three modalities, but the modality differences did not vary significantly between the blind and the sighted, blindfolded groups, nor among individual participants.

Editor
Vladimir A. Kulyukin
Department of Computer Science, Utah State University, USA

Contents

Preface
1. Motion Feature Quantification of Different Roles in Nihon-Buyo Dance
   Mamiko Sakata, Mieko Marumo, and Kozaburo Hachimura
2. Towards Semantically Intelligent Robots
   Atilla Elçi and Behnam Rahnama
3. Pen-type Sensor for Surface Texture Perception
   Xianming Ye, Byungjune Choi, Hyouk Ryeol Choi, and Sungchul Kang
4. iGrace – Emotional Computational Model for EmI Companion Robot
   Sébastien Saint-Aimé, Brigitte Le-Pévédic, and Dominique Duhaut
5. Human System Interaction through Distributed Devices in Intelligent Space
   Takeshi Sasaki, Yoshihisa Toshima, Mihoko Niitsuma, and Hideki Hashimoto
6. Coordination Demand in Human Control of Heterogeneous Robot
   Jijun Wang and Michael Lewis
7. Making a Mobile Robot to Express its Mind by Motion Overlap
   Kazuki Kobayashi and Seiji Yamada
8. Generating Natural Interactive Motion in Android Based on Situation-Dependent Motion Variety
   Takashi Minato and Hiroshi Ishiguro
9. Method for Objectively Evaluating Psychological Stress Resulting when Humans Interact with Robots
   Kazuhiro Taniguchi, Atsushi Nishikawa, Tomohiro Sugino, Sayaka Aoyagi, Mitsugu Sekimoto, Shuji Takiguchi, Kazuyuki Okada, Morito Monden, and Fumio Miyazaki
10. Quantitative Analysis of Leg Movement and EMG Signal in Expert Japanese Traditional Dancer
    Woong Choi, Tadao Isaka, Hiroyuki Sekiguchi, and Kozaburo Hachimura
11. A Quantitative Evaluation Method of Handedness Using Haptic Virtual Reality Technology
    Tsuneo Yoshikawa, Masanao Koeda, and Munetaka Sugihashi
12. Toward Human Like Walking – Walking Mechanism of 3D Passive Dynamic Motion with Lateral Rolling
    Tomoo Takeguchi, Minako Ohashi, and Jaeho Kim
13. Motion Control of Wearable Walking Support System with Accelerometer Based on Human Model
    Yasuhisa Hirata, Takuya Iwano, Masaya Tajika, and Kazuhiro Kosuge
14. Multimodal Command Language to Direct Home-use Robots
    Tetsushi Oka
15. Effectiveness of Concurrent Performance of Military and Robotics Tasks and Effects of Cueing and Individual Differences in a Simulated Reconnaissance Environment
    Jessie Y.C. Chen
16. Sound Production for the Emotional Expression of Socially Interactive Robots
    Eun-Sook Jee, Yong-Jeon Cheong, Chong Hui Kim, Dong-Soo Kwon, and Hisato Kobayashi
17. Emotional System with Consciousness and Behavior using Dopamine
    Eiji Hayashi
18. Learning to Understand Expressions of Approval and Disapproval through Game-Based Training Tasks
    Anja Austermann and Seiji Yamada
19. Anticipative Generation and In-Situ Adaptation of Maneuvering Affordance in a Naturally Complex Scene
    Kohji Kamejima
20. User Intent Communication in Robot-Assisted Shopping for the Blind
    Vladimir A. Kulyukin and Chaitanya Gharpure
