USF Libraries
USF Digital Collections

Intelligent telerobotic assistance for enhancing manipulation capabilities of persons with disabilities


Material Information

Title:
Intelligent telerobotic assistance for enhancing manipulation capabilities of persons with disabilities
Physical Description:
Book
Language:
English
Creator:
Yu, Wentao, 1972-
Publisher:
University of South Florida
Place of Publication:
Tampa, Fla.
Publication Date:

Subjects

Subjects / Keywords:
rehabilitation
Hidden Markov Model
Motion Intention Recognition
virtual fixture
skill learning
therapy
Dissertations, Academic -- Mechanical Engineering -- Doctoral -- USF
Genre:
government publication (state, provincial, territorial, dependent)   ( marcgt )
bibliography   ( marcgt )
theses   ( marcgt )
non-fiction   ( marcgt )

Notes

Summary:
ABSTRACT: This dissertation addresses the development of a telemanipulation system that uses intelligent mapping from a haptic user interface to a remote manipulator to help maximize the manipulation capabilities of persons with disabilities. This mapping, referred to as an assistance function, is determined on the basis of an environmental model or real-time sensory data to guide the motion of a telerobotic manipulator while performing a given task. Human input is enhanced rather than superseded by the computer. This is particularly useful when the user has a restricted range of movement due to a disability such as muscular dystrophy, a stroke, or any form of pathological tremor. In a telemanipulation system, assistance in the form of variable position/velocity mapping or a virtual fixture can improve manipulation capability and dexterity. Conventionally, such assistance is based on environment information alone, without knowledge of the user's motion intention. In this dissertation, the user's motion intention is combined with real-time environment information to apply the appropriate assistance. If the current task is following a path, a virtual fixture orthogonal to the path is applied. Similarly, if the task is to align the end-effector with a target, an attractive force field is generated. In order to recognize the user's motion intention, a Hidden Markov Model (HMM) is developed. This dissertation also describes HMM-based skill learning and its application in a motion therapy system in which motion along a labyrinth is controlled using a haptic interface. Two persons with upper-limb disabilities were trained using this virtual therapist. The performance measures before and after the therapy training, including the smoothness of the trajectory, distance ratio, time taken, tremor and impact forces, are presented. The results demonstrate that the forms of assistance provided reduced execution times and increased performance on the chosen tasks for the disabled individuals. In addition, these results suggest that the introduction of haptic rendering capabilities, including force feedback, offers special benefit to motion-impaired users by augmenting their performance on job-related tasks.
Thesis:
Thesis (Ph.D.)--University of South Florida, 2004.
Bibliography:
Includes bibliographical references.
System Details:
System requirements: World Wide Web browser and PDF reader.
System Details:
Mode of access: World Wide Web.
Statement of Responsibility:
by Wentao Yu.
General Note:
Includes vita.
General Note:
Title from PDF of title page.
General Note:
Document formatted into pages; contains 151 pages.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 001498102
oclc - 57708966
notis - AJU6697
usfldc doi - E14-SFE0000479
usfldc handle - e14.479
System ID:
SFS0025170:00001




Full Text

PAGE 1

Intelligent Telerobotic Assistance For Enhancing Manipulation Capabilities Of Persons With Disabilities

by

Wentao Yu

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy
Department of Mechanical Engineering
College of Engineering
University of South Florida

Major Professor: Rajiv V. Dubey, Ph.D.
Glen Besterfield, Ph.D.
Daniel Hess, Ph.D.
Shuh Jing Ying, Ph.D.
Wilfrido A. Moreno, Ph.D.
A.N.V. Rao, Ph.D.

Date of Approval: August 11, 2004

Keywords: Rehabilitation, Hidden Markov Model, Motion Intention Recognition, Virtual Fixture, Skill Learning, Therapy

Copyright 2004, Wentao Yu

PAGE 2

Acknowledgements

First, I would like to thank Dr. Rajiv Dubey for being a great professor and giving me the opportunity to pursue research work in an area that I enjoy; I will never forget the opportunity that you gave me. I would also like to thank Dr. Glen Besterfield, Dr. Daniel Hess, Dr. Shuh Jing Ying, Dr. Moreno and Dr. Rao for serving on my advisory committee. I would like to thank Aaron Gage for setting up the Ghost SDK for the PHANToM and writing some software for our system. I would like to thank Norali Pernalete for the interesting prior research contributed to the lab. Michael Jurczyk provided invaluable assistance in setting up the hardware. He got involved with the vision system, getting the camera to work; his work led to the Halcon software configuration. I would like to thank Dwayne Polzer for attaching the sensors on the end-effector. I would like to thank Redwan Alqasemi for being a good friend and making careful corrections on my dissertation. Thank you for your time and hard work. I would also like to acknowledge the financial support for the rehabilitation robotics research in which I have been involved over the past few years. Stephen Sundarrao and his Rehabilitation Engineering and Technology Program provided the funding for the research work I did in this lab. Finally, my thanks go to the lab members that I have met: Ben Fritz, Ed McCaffrey, Sashi Konda, Ashwin Upadhyay, Kevin Edwards, etc.

PAGE 3

Table of Contents

List of Tables  iv
List of Figures  v
Abstract  ix
Chapter 1: Introduction  1
  1.1. Motivation  1
  1.2. Dissertation Objectives  2
  1.3. Dissertation Outline  3
Chapter 2: Background  5
  2.1. Rehabilitation Robotics  5
  2.2. Telerobotics  10
  2.3. Teleoperation Assistance Background  14
    2.3.1. Regulation of Positions  15
    2.3.2. Regulation of Velocities  16
Chapter 3: Teleoperation with Assistance Functions  20
  3.1. Introduction  20
  3.2. Assistance Functions Concept  21
  3.3. Box and Blocks Task  22
  3.4. Sensor Assist Function  24
    3.4.1. Description  25
    3.4.2. Stage One  26
    3.4.3. Stage Two  27
    3.4.4. Stage Three  28
    3.4.5. Stage Four  29
    3.4.6. Stage Five  29
    3.4.7. Stage Six  30
    3.4.8. Stage Seven  30
  3.5. Experimental Results  31
    3.5.1. Telemanipulation System Structure  31
    3.5.2. Software Implementation  32
    3.5.3. Results  33
      3.5.3.1. Simulation Mode  33
      3.5.3.2. Real Test Mode  34

PAGE 4

  3.6. Summary  36
Chapter 4: Telemanipulation Assistance Based on Motion Intention Recognition  37
  4.1. Telemanipulation Assistance  37
  4.2. Classes of Motion in Telemanipulation  39
  4.3. Hidden Markov Model Based Motion Recognition  41
    4.3.1. Data Preprocessing  41
    4.3.2. Vector Quantization  44
    4.3.3. HMM Training  48
    4.3.4. Motion Recognition  54
  4.4. Design of Fixture Assistance  57
    4.4.1. Fixture Assistance  58
    4.4.2. Force Field Design for Targets and Obstacles  59
  4.5. Experiments  60
    4.5.1. Experimental Test Bed  60
    4.5.2. Experimental Results Without Assistance  62
    4.5.3. Motion Recognition  64
    4.5.4. Experiment Results with Assistance Based on Motion Intention Recognition  64
  4.6. Summary  66
Chapter 5: Robotic Therapy for Persons with Disabilities Using Skill Learning  68
  5.1. Motion Therapy  68
  5.2. Hidden Markov Model Based Skill Learning  69
    5.2.1. Raw Data Conversion  70
    5.2.2. Hidden Markov Model Computation  73
  5.3. Experiments in Virtual Environment  76
    5.3.1. Tasks and Experimental Test Bed  76
    5.3.2. Skill Learning and Transferring  78
  5.4. Motion Therapy Experiments  79
    5.4.1. Motion Performance before Therapy Training  81
    5.4.2. Motion Performance after Therapy Training  85
  5.5. Summary  90
Chapter 6: Conclusions and Recommendations  91
  6.1. Dissertation Overview  91
  6.2. Virtual Fixture Assistance Based on Motion Intention  91
  6.3. Robot Therapy and its Effectiveness  92
  6.4. General Discussion  93
  6.5. Recommendations  94
References  95
Appendices  104

PAGE 5

Appendix A: System Test Bed and Experiment Design  105
  A.1. Introduction  105
  A.2. Hardware  105
    A.2.1. Robotics Research Corporation Manipulator  105
    A.2.2. PHANTOM Premium 1.5  108
  A.3. Software  110
    A.3.1. R2 Controller Program  110
    A.3.2. HALCON Computer Vision Software  110
    A.3.3. Telerobot Control Interface  112
    A.3.4. Teleoperation System Architecture  113
  A.4. RRC GUI  113
    A.4.1. Safe Operating Instructions  115
      A.4.1.1. Simulation Mode  115
      A.4.1.2. Robot Mode  117
    A.4.2. Jog Control  118
    A.4.3. Position Feedback  120
    A.4.4. Teach Pendant  120
    A.4.5. Program Control  121
    A.4.6. Client Server Interface  124
Appendix B: Visual Servoing for Grasping  126
  B.1. Configuration of Vision System  126
  B.2. 3D Pose Determination of Target with Respect to End-effector  127
  B.3. Visual Servo Controller Design  136
  B.4. Tele-autonomy Design  138
About the Author  End Page

PAGE 6

List of Tables

Table 3.1  Comparison of Averages for Box and Blocks Test Using Workspace Constraint  36
Table 4.1  Performance Summary without Assistance  63
Table 4.2  Motion Recognition Rate  64
Table 4.3  Performance Summaries with Assistance  66
Table 5.1  Movement Performance Summary  89
Table A.1  Joint Limits for the RRC Manipulator  107
Table A.2  Phantom Premium 1.5 Specifications  109

PAGE 7

List of Figures

Figure 2.1  RAID Workstation  6
Figure 2.2  Manus Manipulator  6
Figure 2.3  Raptor Manipulator  8
Figure 2.4  (a) MIT MANUS [85], (b) MIME [16]  9
Figure 2.5  Tele-autonomy is the Combination of Teleoperation and Autonomy  13
Figure 2.6  Tele-collaboration with Information Feedback  13
Figure 2.7  Human-Machine Cooperative Teleoperation Concept [29]  15
Figure 2.8  Representation of Slave Constraint in the Constraint Plane [67]  15
Figure 2.9  Scaling Factor Function [53]  16
Figure 2.10  Scaling Factor Varying for Approach [29]  17
Figure 2.11  Coordinate Frames for Cross Alignment Task [29]  17
Figure 2.12  Two Types of Reference Direction Fixtures [55]  19
Figure 2.13  Virtual Fixtures to Aid Extract/Insert Motion [69]  19
Figure 3.1  Box and Blocks Test Window Interface  23
Figure 3.2  Box and Blocks Test, Master and Slave  23
Figure 3.3  Teleoperation Test Bed  24
Figure 3.4  Sensors Mounted on End Effector  24
Figure 3.5  The Seven Stages of the Scaling Scheme  25

PAGE 8

Figure 3.6  Image Frame Showing Vector Determination  26
Figure 3.7  ScaleFactor According to LRF Data (DME)  28
Figure 3.8  The Telemanipulation System  31
Figure 3.9  Region Growing Image  32
Figure 3.10  Sobel Edge Detection Image  32
Figure 3.11  Trajectory Comparison of PhanToM and Slave Manipulator  33
Figure 3.12  Box and Block Time Execution  34
Figure 3.13  Trajectory of Box and Blocks Task  35
Figure 4.1  Path Following Motion and its Velocities Profile  40
Figure 4.2  Aligning with Target Motion and its Velocities Profile  40
Figure 4.3  Avoiding Obstacle Motion and its Velocities Profile  40
Figure 4.4  Operation Stopping and its Velocities Profile  41
Figure 4.5  Conversion of Continuous Velocity Data to Discrete Symbols  43
Figure 4.6  LBG Codebook Training  46
Figure 4.7  LBG Vector Quantization for Some Random 2D Data, as L Equals 2, 4, 8, 16, 32  47
Figure 4.8  5-state Left-right Hidden Markov Model, with 32 Observable Symbols in Each State  48
Figure 4.9  Forward Computation Illustration  55
Figure 4.10  Virtual Fixture Definition  57
Figure 4.11  Stiffness Coefficients of Different Fixtures  59
Figure 4.12  Force Fields Illustration (a: Attractive Force, b: Repulsive Force)  60
Figure 4.13  Simulation of the Task Execution  61

PAGE 9

Figure 4.14  Velocity Components without Assistance  62
Figure 4.15  Trajectories without Assistance  63
Figure 4.16  Velocity Components with Assistance  65
Figure 4.17  Trajectories with Assistance  66
Figure 5.1  Raw Data Vectors  71
Figure 5.2  PSD Vectors  71
Figure 5.3  Vector Quantization When Codebook Length is 4  72
Figure 5.4  Two-state Left-right Hidden Markov Model  73
Figure 5.5  Hidden Markov Model with the Adjusted Parameters  74
Figure 5.6  Virtual Environment for Simulation Testbed  77
Figure 5.7  Forward Scores for all 12 Times of Task Execution  79
Figure 5.8  Actual Moving Distance is 716.8 mm, Skill Moving Distance is 495.2 mm, and Distance Ratio is 1.44  82
Figure 5.9  Tremor Measurements  83
Figure 5.10  Collisions: 15 Collisions Occurred  84
Figure 5.11  Trajectories After Therapy Training  85
Figure 5.12  Translation Tremors After Therapy  86
Figure 5.13  Collisions After Therapy  87
Figure A.1  RRC Manipulator Joints and Limits  105
Figure A.2  RRC Manipulator  107
Figure A.3  RRC Manipulator with Sensors and End Effector  108
Figure A.4  PHANTOM Premium 1.5  108
Figure A.5  Integrated Development Environment of Halcon  111

PAGE 10

Figure A.6  Telemanipulation Interface  112
Figure A.7  Teleoperation System Architecture  113
Figure A.8  RRC Graphical User Interface  114
Figure A.9  RRC GUI Main Window  114
Figure A.10  Controller Buttons  116
Figure A.11  Desktop Icons on Robot Controller Computer  117
Figure A.12  Jog Control Window and Position Feedback Window  119
Figure A.13  Teach Pendant for RRC Manipulator  121
Figure A.14  Main Window for Move Data / Record  123
Figure A.15  File Management  123
Figure A.16  Execution and Status Windows  124
Figure A.17  Client Management Window on Robot Computer  125
Figure B.1  Configuration of Vision System  126
Figure B.2  Coordinate System for Perspective Projection  128
Figure B.3  Coordinates System Assignment for Vision System  129
Figure B.4  Perspective Projection of a Line Segment in Image Plane  132
Figure B.5  Tele-autonomy Illustration  138

PAGE 11

Intelligent Telerobotic Assistance for Enhancing Manipulation Capabilities of Persons with Disabilities

Wentao Yu

ABSTRACT

This dissertation addresses the development of a telemanipulation system that uses intelligent mapping from a haptic user interface to a remote manipulator to help maximize the manipulation capabilities of persons with disabilities. This mapping, referred to as an assistance function, is determined on the basis of an environmental model or real-time sensory data to guide the motion of a telerobotic manipulator while performing a given task. Human input is enhanced rather than superseded by the computer. This is particularly useful when the user has a restricted range of movement due to a disability such as muscular dystrophy, a stroke, or any form of pathological tremor. In a telemanipulation system, assistance in the form of variable position/velocity mapping or a virtual fixture can improve manipulation capability and dexterity. Conventionally, such assistance is based on environmental information alone, without knowledge of the user's motion intention. In this dissertation, the user's motion intention is combined with real-time environmental information to apply the appropriate assistance. If the current task is following a path, a virtual fixture orthogonal to the path is applied. Similarly, if the task is to align the end-effector with a target, an attractive force field is generated. In order to recognize the user's motion intention, a Hidden Markov Model (HMM) is developed.

PAGE 12

This dissertation also describes HMM-based skill learning and its application in a motion therapy system in which motion along a labyrinth is controlled using a haptic interface. Two persons with upper-limb disabilities were trained using this virtual therapist. The performance measures before and after the therapy training, including the smoothness of the trajectory, distance ratio, time taken, tremor and impact forces, are presented. The results demonstrate that the various forms of assistance provided reduced the execution times and increased the performance of the chosen tasks for the disabled individuals. In addition, these results suggest that the introduction of haptic rendering capabilities, including force feedback, offers special benefit to motion-impaired users by augmenting their performance on job-related tasks.

PAGE 13

Chapter 1: Introduction

1.1. Motivation

Physical disabilities make it difficult or sometimes impossible for individuals to perform several simple job-related tasks such as pressing a button to operate a machine, moving light objects, etc. When considering employment, the true potential of individuals with disabilities can be enhanced by technology that augments human performance. New developments in telerobotic systems can allow a greater number of individuals with disabilities to compensate for their lost manipulation skills. In the past two decades, researchers in rehabilitation robotics have designed and developed a variety of passive/active devices to help persons with limited upper-limb function perform essential daily manipulation tasks. Since the user is inside the control loop, most of these research or commercial products have adopted a telemanipulation approach, in which the user issues robot motion commands through an interface [3]. However, practical results are limited, mainly because, although telemanipulation may relieve the user of the physical burden of manipulative tasks, it introduces the mental burden of controlling the input device [4]. With typical telemanipulation, the user is in the control loop, sensing environment information such as the location and the distance of the target and providing the appropriate control signal to the input device. In the literature [84], after all operators (normal subjects) were trained for a certain time, only 60% of them were skilled enough to complete teleoperation tasks. A general method for introducing computer

PAGE 14

assistance in task execution without overriding an operator's command to the manipulator is used. The appropriate movement for the task is kept or even enhanced, while the undesirable movements are reduced. This is done using assist functions, which scale the input velocity according to the task. This methodology has been previously employed by the author in the execution of manual dexterity assessment tasks with fully able individuals [53]. Besides this functional approach in rehabilitation, robotics applications can also assist clinically in therapy. Much evidence suggests that intensive therapy improves movement recovery. But such therapy is expensive, because it requires therapists on a person-to-person basis. Recently there has been increased interest in restoring function through robot-aided therapy. This approach designs a therapy platform to substitute for some of the therapist's work.

1.2. Dissertation Objectives

The goal of this dissertation is to design an intelligent telerobotic system that can maximize the manipulation capabilities and reduce the mental burden of persons with upper-limb disabilities:

1. Develop sensor-based assistance functions to increase the limited motion range and enhance manipulation accuracy.

2. Implement these assist functions to perform a common vocational rehabilitation test referred to as Box and Blocks. During task operation, adjust the scaling according to the available sensory data.

PAGE 15

3. Develop an algorithm to recognize the operator's motion intention by using a Hidden Markov Model (HMM). Apply appropriate fixture assistance based on the operator's motion. If the recognized motion is following a path, a virtual fixture orthogonal to the path is applied. If the task is to align the end-effector with a target, an attractive force field is generated. Similarly, if the task is to avoid obstacles, a repulsive force field is produced.

4. Develop a robotic therapy system based on skill learning through a Hidden Markov Model. Since an HMM is well suited to modeling a stochastic process, such as speech or a certain assembly skill, it can be used to characterize the skill of moving along a labyrinth path. The skill of moving along a labyrinth is learned and treated as a virtual therapist, which replaces the role of a physical therapist for motion therapy. Perform motion experiments with two subjects with disabilities.

The contribution of this dissertation is that a telerobotic system with intelligent operation can enhance manipulation capabilities and reduce the mental burden, and that the learned skill of a specific task can be used as a robotic therapist for motion therapy.

1.3. Dissertation Outline

The history and background of the rehabilitation robotics and telemanipulation system areas related to this work are discussed in Chapter 2. The concepts of rehabilitation robotics, haptic interfaces and teleoperation assistance are traced through history to the present state of knowledge in these areas. Chapter 3 describes a telemanipulation system to assist persons with disabilities in performing dexterous manipulation tasks. In this chapter,

PAGE 16

assistance functions are used for mapping such that human input is enhanced, and the Box and Blocks test is chosen to evaluate the effectiveness of this sensor-based assistance function. The Hidden Markov Model (HMM) based human motion intention recognition is developed in Chapter 4, and the implementation of appropriate virtual fixture assistance is then applied to teleoperation. Chapter 5 describes the Hidden Markov Model based skill learning and its application in a motion therapy system using a haptic interface. Chapter 6 concludes with a discussion of the experimental results and suggested future work.

PAGE 17

Chapter 2: Background

2.1. Rehabilitation Robotics

Physical and cognitive disabilities make it difficult or impossible for individuals to perform several simple work and household tasks such as pressing a button to operate a machine, opening a door, moving light objects, etc. A study by J. Schuyler et al. concluded that a slight increase in manipulation ability, mobility and strength results in a substantial increase in the number of jobs for which an individual might be eligible [31]. In many instances, such enhancements may mean the ability to do a task that the person is otherwise unable to perform. Assistive devices have attempted to fully or partially restore the lost functions and enable people with disabilities to perform many Activities of Daily Life (ADL) affecting their employment and quality of life [1, 7, 3, 4, 17]. The earliest research in this area (prosthetics and robotic arms) began in the late 1960s [2]. The Rancho Golden arm, developed at Rancho Los Amigos Hospital in Downey, California in 1969, was the first successful rehabilitation robot manipulator [32]. It used seven tongue switches in a sequential mode to successfully maneuver the arm in space. The Johns Hopkins arm [1, 5], evolved from prosthetics, could execute tasks in pre-programmed and direct modes through a chin manipulandum and other body-powered switches. The Heidelberg Manipulator was the earliest example of the workstation-based approach to the implementation of robotic systems [6, 7]. The Spartacus project proposed that mounting a manipulator arm on a wheelchair would increase the effectiveness of

PAGE 18

manipulation rehabilitation [8, 9]. Though all these assistive devices saw limited use by consumers, they established the foundation for further research.

Figure 2.1 RAID Workstation

Figure 2.2 Manus Manipulator

PAGE 19

Since the 1980s, considerable progress has been made in the field of rehabilitation robotics technology. One example is the workstation robotic device. The goal of a workstation robotic device is to enable the user to perform tasks typically encountered in an office or at home. These tasks include moving books from a shelf to a reading board, opening the book and flipping through its pages, and inserting CD-ROMs and floppy diskettes into a computer. The most commonly used robotic workstation available to users with disabilities is the RAID (Robot for Assisting the Integration of the Disabled, Figure 2.1) workstation [12]. DEVAR (desktop assistant robot for vocational support in office settings) [16] can be used to handle paper and floppy disks, pick up and use the telephone, and retrieve medication. The RAA (Robotic Assistive Appliance) offers a human-size manipulator at a workstation with 6 degrees of freedom under either programmed or direct control [17] and is currently undergoing testing to assess its advantages over an attendant [18]. The other kind of device is the wheelchair-mounted robot. A power wheelchair is used as a mobile base to which a mechanical manipulator can be attached. Several wheelchair-mounted manipulators are available to the consumer, but two in particular, MANUS and the Raptor, are more successful. MANUS is the most well known of those successors (Figure 2.2). The Raptor manipulator is the first robotic assistive manipulator to have gained FDA approval for use in the US [35] (Figure 2.3). Because of its increased size, though, the range of the Raptor is 120 cm compared to the 80 cm of the Manus. It can also lift up to 2.5 kg. Another project that has enjoyed relative success is the Handy 1 [7, 11], which was primarily used as a feeding device for children with cerebral palsy. More recently, besides improving eating skills, the aid has been considered for other activities including the application of cosmetics and leisure activities [26].

PAGE 20

In addition, the FRIEND robot arm system [15] included a multimedia user interface to enlarge the functionality of existing technical aids. ISAC incorporated Artificial Intelligence (AI) into its controller to reduce the mental load on the user during the performance of manipulative tasks [20]. KARES uses a SPACEBALL 2003 as an input device to teleoperate the robotic arm [21]. KARES II, an advanced version of KARES, has a visual servo, which allows the robotic arm to operate autonomously through the visual feedback of a binocular camera head [28]. The robot arm workstations and wheelchair-mounted manipulators above compensated for the activity deficiencies of people with disabilities. But because of the high cost, the poor interface between a complex electromechanical system and a person

Figure 2.3 Raptor Manipulator

PAGE 21

with limited capabilities, and the social stigma attached to a robot, these assistive devices have had limited success as commercial products [1, 3, 4, 7]. Besides assistive robots, another type of rehabilitation robotic system is the therapy robot. MIT MANUS (Figure 2.4 (a)) is the most successful robot-aided therapy platform to undergo intensive clinical testing [85, 86]. This device is a planar, two-revolute-joint, backdrivable robotic device that attaches to the patient's hand and forearm through a brace. The patient can move the robot, or the robot can move the patient, in the horizontal plane. The patient receives feedback of the hand trajectory on the computer screen. The results of clinical trials suggested that exercise therapy improved motor recovery [87-89].

Figure 2.4 (a) MIT MANUS [85], (b) MIME [16]

PAGE 22

MIME (Figure 2.4 (b)) is powerful enough to move a patient's arm throughout the three-dimensional workspace against gravity [79]. When the patient moves her/his unimpaired arm, a mechanical digitizing stylus senses the movement. The PUMA 560 robot arm then moves the patient's impaired arm along a mirror-symmetric trajectory. The results of clinical tests with MIME showed that integration of robot-aided therapy into clinical exercise programs would allow repetitive, time-intensive exercises to be performed without one-to-one attention from a therapist [16]. The ARM (Assisted Rehabilitation and Measurement) Guide was designed to guide reaching movements across the workspace, and to measure multi-axis force generation and range of motion of the arm [79]. Like MIT MANUS and MIME, the ARM device can assist or resist movements and can also measure hand movements. The ARM Guide has been used to quantify and understand abnormal coordination, spastic reflexes, and workspace deficits after stroke [90]. The testing results suggested that the constraint force and range-of-motion measurements during mechanically guided movement may prove useful for precise monitoring of arm impairment and of the effects of treatment techniques targeted at abnormal synergies and workspace deficits [91, 92].

2.2. Telerobotics

Due to the unstructured environment of ADL, the variety of the tasks and the presence of the user, many rehabilitation robots adopt telerobotic systems so that users can issue commands through a human-machine interface [8, 11, 15, 28]. Regarding teleoperation studies, several types of systems and concepts have been defined in the area of remote manipulation technology [39]. The concept developed by Ray Goertz in the

PAGE 23

1950s, in which a person's sensing and manipulation capability is extended to a remote location, is referred to as teleoperation. His mechanisms were mechanical pantograph devices which allowed radioactive materials to be handled at a safe distance. Later, electrical servos replaced mechanical linkages and cameras replaced direct viewing, so that the operator could be arbitrarily far away. Human operators look at video displays and operate a remotely located slave robot via a hand controller. Usually the term teleoperation refers to systems in which the human operator directly and continuously controls the remote manipulator. In these systems, the kinematic chain which is manipulated by the operator, and which may provide force feedback, is referred to as the master, while the remote manipulator is referred to as the slave. From the point of view of autonomy, telerobots are classified into tele-autonomy and tele-collaboration [57]. The former term refers to the combination of teleoperation and autonomous robotic control. In some cases, a unilateral controller is used; in this case, there is no information feedback from slave to master or from master to human. The latter means all operations are controlled by human-machine collaboration, usually in the form of force reflection. Teleoperation itself can be classified into unilateral and bilateral telerobotics according to the data flow. In the former case, the slave robot is operated in free teleoperation, just like an open-loop system. The only feedback is the task execution video of the slave, or even no video if the master and slave are in the same room. This case is illustrated in Figure 2.5 (upper part). The latter has force feedback provided to the teleoperator, thus forming a kinesthetic or tele-presence system [33, 34, 37, 73]. Figure 2.6 shows the architecture of a typical bilateral teleoperation. In this case, strategies in which human decisions are merged with computer-based assistance

PAGE 24

have been made possible by more complex forms of automatic control and sensor data fusion. The control system adds computer-generated velocity/force inputs to those from the master in the impedance-controlled formulation to assist in controlling the motion of the manipulator, such as moving along a surface without impact and obstacle avoidance. Bilateral impedance control in telerobotic systems provides good teleoperation since force reflection is provided to the operator during operation [33, 36, 39]. Dubey et al. proposed variable impedance parameters to adapt to variable circumstances, thus overcoming the conflict problem of choosing desired dynamics parameters [34]. This controller is primarily used in tasks requiring contact, such as needle insertion into tissue and object surface exploration. Teleoperation system design usually takes operation accuracy into account, not the convenience and simplification of operation. Even with improvements in controller architecture and attempts at assistance, the task performance of telerobotic systems in rehabilitation engineering is still not satisfactory [40, 41, 44]. For a simple "go get a cup and put it on a pad" task, it takes the operator 50 seconds, mostly due to indexing the master once the master reaches its workspace limit and tuning the gripper to grasp the target [53]. Furthermore, the performance largely depends on the operator's familiarity with the system. In most cases, using a robot as a teleoperated device to complete a task is much harder than using the human arm and hand. It can soon become very exhausting, especially if it has to perform repeated tasks such as feeding, even with some assistance. Many researchers have tried to improve the operation accuracy, reduce execution time and relieve the operator's mental labor by adding artificial intelligence. Kawamura et al. [51] looked at how far rehabilitation robots had come in possessing abilities that relieve

PAGE 25

the user from the mental burden of controlling the robot. They had developed modules for a fuzzy command interface, object recognition and task planning. In intelligent telerobot systems, vision-based assistance has improved the operation of aligning the end-effector with the target [45, 50].

Figure 2.5 Tele-autonomy is the Combination of Teleoperation and Autonomy

Figure 2.6 Tele-collaboration with Information Feedback (block diagrams: operator, master, slave and environment; input is the video of the slave with the environment, output is velocity/position and/or force)

PAGE 26

The telerobot emphasized in this dissertation is open-loop telemanipulation with assistance. The challenge is to make it more functional and more intelligent. This dissertation is an attempt to address the issue of combining human flexibility and machine intelligence into an efficient rehabilitation robotic system.

2.3. Teleoperation Assistance Background

In teleoperation, it is essential to provide as much assistance as possible for the operator. Basically, the assistance algorithm maps the master commands to the slave in a way that scales them up or down depending on the task and environment information. The scaling factors vary according to the task and environment. The idea behind the assistance function concept is the generalization of position and velocity mappings between the master and slave manipulators of a teleoperation system. This concept was conceived as a general method for introducing computer assistance in task execution without overriding the operator's commands to the manipulator (Figure 2.7). The assistance functions can be classified as regulation of position, velocity and contact forces. All of these assistance strategies are accomplished by modification of system parameters. A simple form of position assistance is scaling, in which the slave workspace is enlarged or reduced as compared to the master workspace. The velocity assistance is commonly used in approaching a target and in avoidance of obstacles. In both cases, the velocity scaling varies according to whether motion in that particular direction is serving to further the desired effect of the motion.

PAGE 27

Figure 2.7 Human-Machine Cooperative Teleoperation Concept [29]

2.3.1. Regulation of Positions

In these functions, the motion of the manipulator is constrained to lie along a given line or in a plane. This helps persons with disabilities operate more stably and smoothly. The details of these functions were presented in a different work by the authors [67] (see Figure 2.8).

Figure 2.8 Representation of Slave Constraint Frame in the Constraint Plane [67]

PAGE 28

Figure 2.9 Scaling Factor Function [53]

2.3.2. Regulation of Velocities

In this case the mapping between the master and slave is done based on velocities. The velocity scaling varies according to whether the motion in a particular direction is serving to further the desired effect of the motion. In the approach assistance, the velocity is scaled up if the motion reduces the distance between the current and goal positions of the manipulator. Otherwise, the velocity is scaled down. For velocity regulation, the variation of the scaling factor is depicted in Figure 2.9. The scaling factor depends on the subtask being executed and the direction of travel. The relationship between the master and slave velocities is:

\[ V_{slave} = ScaleFactor \cdot V_{master} \]

Figure 2.10 shows a velocity scaling factor varying based on the distance reading when the end-effector is approaching a wall. Using a vision system, Everett designed a vision-based mapping to align the end-effector of the slave manipulator with a cross-shaped object [29, 45]. The velocities that reduce the alignment error are scaled up and the ones that increase the alignment error are scaled down (Figure 2.11).
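To make the velocity-regulation idea concrete, the following sketch is a minimal Python illustration (not taken from the dissertation; the gain values and thresholds are arbitrary assumptions): a commanded master velocity is scaled up when it points toward the goal and scaled down otherwise, in the spirit of the approach assistance described above.

```python
import numpy as np

def approach_assistance(v_master, ee_pos, goal_pos,
                        gain_toward=2.0, gain_away=0.1):
    """Scale the master velocity depending on whether it moves the
    end-effector toward or away from the goal (illustrative gains)."""
    to_goal = goal_pos - ee_pos
    dist = np.linalg.norm(to_goal)
    if dist < 1e-6:
        return v_master  # already at the goal; pass the command through
    # A positive projection means the commanded motion reduces the distance.
    moving_toward = np.dot(v_master, to_goal / dist) > 0.0
    scale_factor = gain_toward if moving_toward else gain_away
    return scale_factor * v_master  # V_slave = ScaleFactor * V_master

# Example: a command pointing roughly at the goal gets amplified.
v_slave = approach_assistance(np.array([0.02, 0.0, 0.0]),
                              ee_pos=np.array([0.0, 0.0, 0.0]),
                              goal_pos=np.array([0.5, 0.1, 0.0]))
```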

PAGE 29

Figure 2.10 Scaling Factor Varying for Approach [29]

Figure 2.11 Coordinate Frames for Cross Alignment Task [29]

In tele-collaboration, another type of assistance is the virtual fixture. This assistance is a function of spatial parameters, instead of time. But what is a virtual fixture? Virtual fixtures are defined, according to [68], as abstract precepts overlaid on top of the

PAGE 30

reflected sensory feedback from a remote environment such that a natural and predictable relation exists between an operator's kinesthetic activities (efference) and the subsequent changes in the sensations presented (afference). Intuitively, this is very easy to understand. As a matter of fact, everyone has experience of using a real fixture, for example, drawing a straight line using a ruler. By pressing your pencil against this "fixture", you are able to quickly draw a very straight line. Now imagine there was no ruler there, but there was a virtual wall you could press against instead of a ruler. Similarly, what if there were invisible forces pulling on your pencil, forcing it to follow a straight path? These are virtual fixtures. Virtual fixtures play the same role in robot motion as they do in our line-drawing motion. In fact, a virtual fixture is a computer-generated constraint that displays position or force limitations to a robot manipulator or operator. It can be used to constrain the manually controlled manipulator's motion to a desired surface or to pull it into alignment with a task [37, 38, 61, 64]. Usually, two stiffness coefficients are defined: the stiffness along the desired path and the stiffness orthogonal to the path. The ratio between these two stiffness coefficients indicates the softness or hardness. If the ratio is close to zero, it is the hardest fixture, which means that the end-effector can only move along the path, not deviating at all. If the ratio is close to 1, it is the softest fixture, where the end-effector can move freely. So this kind of fixture is usually used for path following (Figure 2.12). A virtual fixture can also take the form of a potential force field [68, 69]. Potential fields were used to produce velocity commands which, when added to those generated by the input device, maneuver the manipulator toward the target or away from obstacles [69]. The force field is usually in a magnetic form. The role of this type of fixture is the

PAGE 31

same, guiding the end-effector into a goal or away from an obstacle. Figure 2.13 shows that extract and insert fixtures restrict the motion of the end-effector when it is close to the tool grasping position. This behavior is implemented in order to avoid a collision of the manipulator with the tool, while allowing the operator to quickly extract/insert the grasping position [69].

Figure 2.12 Two Types of Reference Direction Fixtures [55]

Figure 2.13 Virtual Fixtures to Aid Extract/Insert Motion [69]
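As an illustration of the two-stiffness-coefficient fixture just described, the sketch below is a minimal example (not from the dissertation; the stiffness values, vector conventions and function names are assumptions). It computes a guidance force that resists deviation orthogonal to a reference path much more strongly than displacement along it, so the ratio of the two stiffnesses sets how "hard" the fixture feels.

```python
import numpy as np

def fixture_force(ee_pos, path_point, path_dir, k_along=0.0, k_ortho=500.0):
    """Two-stiffness virtual fixture: deviation orthogonal to the reference
    path is penalized with k_ortho, displacement along the path with k_along.
    A ratio k_along/k_ortho near 0 gives a hard guide, near 1 a soft one."""
    path_dir = path_dir / np.linalg.norm(path_dir)
    error = ee_pos - path_point
    err_along = np.dot(error, path_dir) * path_dir   # progress along the path
    err_ortho = error - err_along                    # deviation from the path
    return -(k_along * err_along + k_ortho * err_ortho)

# Example: a point displaced 3 cm off a path along the x axis is pulled back.
force = fixture_force(np.array([0.10, 0.03, 0.0]),
                      path_point=np.array([0.0, 0.0, 0.0]),
                      path_dir=np.array([1.0, 0.0, 0.0]))
```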

PAGE 32

Chapter 3: Teleoperation with Assistance Functions

3.1. Introduction

This chapter describes a telemanipulation system to assist persons with disabilities in performing dexterous manipulation tasks. This work is expected to enhance teleoperation performance through the use of scaled mapping from the master to the slave manipulator based upon sensory data. The concept is that appropriate movement for the task is kept or even enhanced, while undesirable movements are reduced. This is done using assist functions that scale the input velocity according to the task. This assistance approach uses assist functions and available sensory data to perform variable velocity mapping between the master and slave, referred to as the Sensor Assist Function (SAF). A common vocational rehabilitation test referred to as Box and Blocks was chosen to test the effectiveness of this sensor-assisted function. A variable scaling scheme was developed using the available sensory data. In the simulation mode, a visual environment was created for the Box and Blocks test. This was used to predict whether a person with disabilities would be able to perform a task comfortably. The real test was performed using a master and slave manipulator system with a camera and a laser range finder. A motion constraint was added to the master to simulate a user with disabilities. The results demonstrated that the sensor assistance not only reduced the required input motion, idle time, and execution time, but also increased manipulation accuracy during the Box and Blocks test. This work

PAGE 33

prompted the need for building a test bed that uses available sensory information to adjust parameters during task execution.

3.2. Assistance Functions Concept

Assistance functions were developed to assist the operator by scaling the input velocity according to the task. The assistance includes linear assistance, planar assistance, and velocity assistance. The linear assist function constrains the input velocity along a line. The input velocity is transformed to a task frame and multiplied by a scaling matrix, and then transformed back to the base frame. A goal line is determined between two points and defined as the X axis of the linear task frame. The Z axis is defined as the perpendicular vector, and the Y axis is defined by the cross product Z × X. A transformation matrix is calculated according to the task frame, and is multiplied by the input velocity:

\[
\begin{bmatrix} V_{slaveX} \\ V_{slaveY} \\ V_{slaveZ} \end{bmatrix}
=
\begin{bmatrix} a_{11} & b_{11} & c_{11} \\ a_{21} & b_{21} & c_{21} \\ a_{31} & b_{31} & c_{31} \end{bmatrix}
\begin{bmatrix} V_{masterX} \\ V_{masterY} \\ V_{masterZ} \end{bmatrix}
\qquad (3.1)
\]

where V_slave is the input velocity in the task frame. Then a scaling matrix is applied to scale down the velocity in the undesired directions along the task frame Y and Z axes:

\[
\begin{bmatrix} V_{scaledX} \\ V_{scaledY} \\ V_{scaledZ} \end{bmatrix}
=
\begin{bmatrix} k_x & 0 & 0 \\ 0 & k_y & 0 \\ 0 & 0 & k_z \end{bmatrix}
\begin{bmatrix} V_{slaveX} \\ V_{slaveY} \\ V_{slaveZ} \end{bmatrix}
\qquad (3.2)
\]

where the values of k_x, k_y, k_z depend on the specific task. In the linear assistance case, the values of k_y and k_z are very small. Then, V_scaled is transformed back to the base frame

PAGE 34

using the transformation matrix, and that becomes the modified velocity that is sent to the robot controller. The planar assist function constrains the input velocity along a plane. To construct this task frame, three points are used to define a plane. The X axis is defined as the line between points 1 and 2. The Z axis is defined as the normal to the plane, and the Y axis is defined as the cross product of Z and X. A transformation matrix is determined, and the input velocity is converted to the task frame according to equation (3.1), the same as in the linear case. For the planar assistance, however, the value of the scale matrix is different. Since the desired motion lies in the X-Y plane, only motion along the Z axis will be scaled, so k_z is very small. After the task frame velocity V_slave is multiplied by the scale matrix, it is converted back to the base frame and sent to the robot controller, according to equation (3.2). The velocity assist function increases and decreases the velocity according to the distance to the goal object or an obstacle. As the distance to the goal is known, a velocity scale factor can be applied in order to increase or decrease the input velocity. These assistance strategies are integrated together to provide a form of assistance for users with disabilities to perform the Box and Blocks task in this research.

3.3. Box and Blocks Task

The Box and Blocks test measures gross manual dexterity and is frequently used in research on rehabilitation. This test, represented in Figure 3.1 (simulation mode) and Figure 3.2 (real testing), consists of moving one-inch blocks from one side to another in a two-sided box. A wall divides the two sides. This tests the use of large motions in all

PAGE 35

directions. The goal is to pick up a block from one side and place it on the other side. In simulation mode (Figure 3.1), force feedback was added to make the user feel resistive forces and collisions. In the real test (Figure 3.2), a sphere constraint was applied to simulate the workspace of persons with disabilities. Since the possible input motion has been decreased, the able-bodied user will better represent a person with disabilities. The assistance function algorithm is based on sensory data.

Figure 3.1 Box and Blocks Test Window Interface

Figure 3.2 Box and Blocks Test, Master and Slave (master: PHANToM input device; slave: RRC manipulator with end-effector, Hitachi camera and DME laser range finder; box, block and wall in the workspace)

PAGE 36

3.4. Sensor Assist Function

In this research, a combination of the linear, planar and velocity assistance, referred to as the Sensor Assist Function (SAF), was developed for the vocational rehabilitation test called Box and Blocks. The SAF essentially uses sensory data to perform variable velocity mapping from master to slave (Figure 3.3).

Figure 3.3 Teleoperation Test Bed

Figure 3.4 Sensors Mounted on End Effector

PAGE 37

The sensors include a DME 2000 Laser Range Finder (LRF) and a vision system using a Hitachi KP-D50. These sensors are mounted on the end-effector as shown in Figure 3.4. The vision system is used to locate the goal object and obstacles. The image processing software, Halcon [77], obtains the center position of the goal object in the image plane. Once the end-effector grasps the object, the software obtains the edge of the wall, which is used to avoid obstacles. The LRF is used in the velocity assistance in the Z direction depending on the depth of the obstacles and the object.

3.4.1. Description

There are seven stages of assistance, shown in Figure 3.5. At the start of the task, the robot is in the home position and there is no scaling until the object is seen by the vision system.

Figure 3.5 The Seven Stages of the Scaling Scheme

The first stage involves minimizing the distance between the end-effector and the object in the X-Y plane. The second stage adds z-direction scaling as the manipulator moves down. The third stage assists the manipulator when the vision system can no

PAGE 38

longer see the goal object. Once the object is obtained, the fourth stage assists the operator in avoiding the wall obstacle. The fifth stage is activated when the range data indicate that the end-effector is too close to an object. The sixth stage involves the vision system, and enhances the movement in the horizontal plane to clear the wall horizontally. The seventh stage simply frees the user to place the object down on the correct side of the box. Since the center of the camera is not the end-effector position, the camera needs to be calibrated with the end-effector. As shown in Figure 3.6, the end-effector position is projected onto the image frame, and its pixel position is determined relative to the center position of the goal object.

Figure 3.6 Image Frame Showing Vector Determination (end-effector image projection at EndX, EndY; goal object at VisionX, VisionY; the vector between them)

3.4.2. Stage One

For stage one, the scaling is based upon the position of the object and the projected end-effector position. A vector is created between these two points, in the X-Y plane, and the task frame is calculated using this vector and a Z axis. The x direction of the image frame is opposite to the x direction of the slave frame, so the vector calculation is as follows:

PAGE 39

\[
\overrightarrow{Vector} = (VisionX - EndX)\,\hat{x} + (EndY - VisionY)\,\hat{y}
\qquad (3.3)
\]

A transformation matrix is determined from the PHANToM frame to the task frame according to the task frame calculations in Section 3.2, and the input velocity is scaled according to the following equations:

\[ V_{SLAVE} = Transform^{O}_{C}\, V_{INPUT} \qquad (3.4) \]

\[
Scale =
\begin{bmatrix} VisionScale & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0.1 \end{bmatrix}
\qquad (3.5)
\]

\[ V_{SCALED} = Scale \; V_{SLAVE} \qquad (3.6) \]

\[ V_{MODIFIED} = \left(Transform^{O}_{C}\right)^{T} V_{SCALED} \qquad (3.7) \]

where, for stage one, VisionScale ranges from 1.5 to a maximum of 3. If the dot product of V_SLAVE and Vector is negative, then VisionScale is 0.1; this means that the input velocity is in the opposite direction of the goal object. The modified velocity, V_MODIFIED, is sent to the low-level controller.

3.4.3. Stage Two

Stage two starts when the magnitude of the Vector is less than 75 pixels. This means that the end-effector is close to the correct x, y position over the goal object, and the operator can start moving down towards the object. Stage one exists to help reduce the sensor error by keeping the end-effector in the X-Y plane for large movements while the operator is approaching the goal. Stage two uses the same task frame as stage one, but the scale matrix reflects increased velocity in the z direction.

PAGE 40

\[
Scale =
\begin{bmatrix} VisionScale & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & ScaleFactor \end{bmatrix}
\qquad (3.8)
\]

where VisionScale ranges between 1 and 1.5, and if the dot product of V_SLAVE and Vector is negative, then VisionScale is 0.1. ScaleFactor depends on the value of the LRF, as shown in Figure 3.7.

Figure 3.7 ScaleFactor According to LRF Data (DME) (velocity scale factor plotted against the DME range value in mm)

So this scale matrix helps to guide the end-effector down towards the goal object. It increases the scale in the Z direction, and allows motion in the Vector direction to pull the end-effector to the goal object.

3.4.4. Stage Three

The third stage starts when the vision system can no longer see the object. As the end-effector gets closer to the object, it will eventually move out of the image frame because of the location of the camera on the end-effector. In this stage the task frame is not calculated, since there is no data from the vision system, so the following scale matrix is applied directly to the input velocity. Since the end-effector is near the object, there will be little motion required in the X and Y directions.

PAGE 41

\[
Scale =
\begin{bmatrix} k & 0 & 0 \\ 0 & k & 0 \\ 0 & 0 & ScaleFactor \end{bmatrix}
\qquad (3.9)
\]

\[ V_{MODIFIED} = Scale \; V_{INPUT} \qquad (3.10) \]

Using a scale of k = 0.25 in the X and Y directions allows for some error correction, but it scales down large movements by the operator away from the goal object.

3.4.5. Stage Four

The fourth stage begins when the end-effector grabs the object. This stage scales the velocity in order to avoid the center wall obstacle. At first, the velocity is scaled to move the end-effector in the positive z direction according to AvoidScale. AvoidScale depends on the LRF value, and ranges from 3 to 1. If the input velocity is in the downward z direction, then AvoidScale is 0.1. The y direction is scaled down because the desired motion for the task is in the x direction. The vision system performs edge detection and returns the greatest x value of that edge in the image frame. The initial scaling equation is:

\[
V_{MODIFIED} =
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 0.1 & 0 \\ 0 & 0 & AvoidScale \end{bmatrix}
\begin{bmatrix} V_X \\ V_Y \\ V_Z \end{bmatrix}
\qquad (3.11)
\]

3.4.6. Stage Five

As the end-effector moves to the left to place the object on the other side of the box, the LRF is monitored for obstacles. If the LRF sees an obstacle, then all velocity inputs are scaled down, and the upward z direction is increased by AvoidScale according to the following equation:

PAGE 42

\[
V_{MODIFIED} =
\begin{bmatrix} 0.1 & 0 & 0 \\ 0 & 0.1 & 0 \\ 0 & 0 & AvoidScale \end{bmatrix}
\begin{bmatrix} V_X \\ V_Y \\ V_Z \end{bmatrix}
\qquad (3.12)
\]

As the end-effector moves to the left, the LRF leads, according to Figures 3.1, 3.2 and 3.3. Figure 3.3 shows how the LRF can measure the wall without a collision. Therefore, the LRF checks the z direction to make sure the whole end-effector can clear an obstacle.

3.4.7. Stage Six

Now that the end-effector has enough height to clear the wall vertically, it must clear the wall horizontally. So, once the wall comes into the image frame, the scaling is given by the following equation:

\[
V_{MODIFIED} =
\begin{bmatrix} Avoidwall & 0 & 0 \\ 0 & 0.1 & 0 \\ 0 & 0 & 0.1 \end{bmatrix}
\begin{bmatrix} V_X \\ V_Y \\ V_Z \end{bmatrix}
\qquad (3.13)
\]

where Avoidwall increases motion in the negative x direction (see Figures 3.1 and 3.3) to assist in avoiding the seen obstacle. Once the wall obstacle is seen, the z direction will be scaled down.

3.4.8. Stage Seven

Once the camera can no longer see the wall, the end-effector has avoided the wall obstacle. The scaling returns to regular z-direction velocity assistance according to the following equation:

\[
V_{MODIFIED} =
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & ScaleFactor \end{bmatrix}
\begin{bmatrix} V_X \\ V_Y \\ V_Z \end{bmatrix}
\qquad (3.14)
\]

PAGE 43

Once the object is near the table on the correct side of the box, the operator is ready to release the object. Now the task is completed, and the completion time is recorded. By returning the end-effector to the home position, the operator is ready to perform another Box and Blocks test.

3.5. Experimental Results

3.5.1. Telemanipulation System Structure

In this system (Figure 3.8), the master robot is a PhanToM with 6 degrees of freedom from Sensable Technologies. It can provide tactile feedback for the user. A 7-DOF industrial robot, the RRC K-2107a, is used as the slave manipulator in this application. A Windows 2000 PC is used to control the PhanToM and compute the mapping from master to slave. The slave manipulator controller runs on another PC. A third PC handles the sensory data. All PCs are linked together through Ethernet; sensory data is sent to the PhanToM PC and the velocity commands are sent to the manipulator PC.

Figure 3.8 The Telemanipulation System (Phantom hand controller and PhanToM PC, PC with frame grabber and HALCON, RRC control PC, single-board computer, RRC manipulator, DME 2000, Hitachi KP-D50)
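The seven scaling stages above all share the same computation: rotate the master velocity into a task frame, apply a diagonal scale matrix chosen for the current stage, and rotate back to the base frame (equations 3.1, 3.2 and 3.4 to 3.7). The sketch below is a minimal Python illustration of that pipeline, not the dissertation's implementation; the rotation matrix, the stage table and the gain values are placeholder assumptions.

```python
import numpy as np

# Illustrative diagonal scale factors (x, y, z in the task frame) per stage.
STAGE_SCALES = {
    1: (2.0, 1.0, 0.1),   # amplify in-plane motion toward the object
    2: (1.2, 1.0, 3.0),   # allow and boost downward z motion
    3: (0.25, 0.25, 3.0), # object out of view: damp x, y corrections
}

def saf_modified_velocity(v_input, rot_task_from_base, stage):
    """Transform the input velocity to the task frame, scale it according to
    the current stage, and transform it back to the base frame."""
    v_task = rot_task_from_base @ v_input                 # cf. eq. (3.4)
    v_scaled = np.diag(STAGE_SCALES[stage]) @ v_task      # cf. eq. (3.2)/(3.6)
    return rot_task_from_base.T @ v_scaled                # cf. eq. (3.7)

# Example: with an identity task frame, stage 1 amplifies the x component only.
v_mod = saf_modified_velocity(np.array([0.01, 0.0, 0.02]), np.eye(3), stage=1)
```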

PAGE 44

3.5.2. Software Implementation

Two major programs were developed in this chapter. One is the image processing, which performs Sobel edge detection and region growing to obtain the coordinates of the object in the image plane (Figures 3.9 and 3.10). This program uses API functions provided by HALCON.

Figure 3.9 Region Growing Image (RGB image and image after region growing; regions that are too small or too large are rejected, keeping the object of interest)

Figure 3.10 Sobel Edge Detection Image (RGB image and Sobel image, showing the major horizontal lines and the largest row value)
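The dissertation uses HALCON's API for these steps; as a rough, library-agnostic illustration of the same idea, the following sketch uses OpenCV and NumPy (an assumption, not the author's code) to locate a colored block's centroid by thresholding and connected-component filtering, standing in for region growing, and to extract edges with a Sobel filter.

```python
import cv2
import numpy as np

def locate_block(bgr_image, lower_hsv, upper_hsv, min_area=200, max_area=5000):
    """Return the pixel centroid of a colored block, or None if not found.
    Thresholding + connected components plays the role of region growing."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    for i in range(1, n):  # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if min_area <= area <= max_area:   # reject regions too small / too large
            return centroids[i]            # (VisionX, VisionY) in pixels
    return None

def edge_max_x(gray_image):
    """Sobel edge detection; return the largest x (column) of strong edges."""
    edges = np.abs(cv2.Sobel(gray_image, cv2.CV_64F, 1, 0, ksize=3))
    cols = np.where(edges > 0.5 * edges.max())[1]
    return int(cols.max()) if cols.size else None
```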

PAGE 45

The other is the control program that runs on the master PC. It was developed using the GHOST SDK from Sensable Technology [48]. This software obtains the accurate position and orientation of the PhanToM. Force reflection is also available with the software. The sample time of the master PC for getting position or velocity data from the master device is 0.2 s. Once the master velocity is obtained, it is modified according to the SAF. This adjusted velocity command is sent to the slave PC at the same rate as its sample rate.

3.5.3. Results

3.5.3.1. Simulation Mode

Figure 3.11 Trajectory Comparison of PhanToM and Slave Manipulator (master and slave paths for the Box and Blocks test using the assistance function; all values are in generic units)

PAGE 46

Figure 3.12 Box and Block Time Execution (time in seconds per block number, with and without assistance)

An able-bodied person performed the Box and Blocks simulation with the assistance function to determine the effect of the assistance. When the user's movement deviates from the desired trajectory, force reflection is felt by the user, which makes the user move back to the desired trajectory. Figure 3.11 is the trajectory comparison of the PhanToM and the slave manipulator when performing the Box and Blocks test with the assistance function. Though the master has some random movements, the slave manipulator clearly moves along the desired trajectory very well. A sample of the execution times of seven tests is shown in Figure 3.12. Due to the assistance function, the average time was reduced considerably (from 10.33 to 5.66 seconds), and the standard deviation (from 0.81 to 0.50) was smaller as well.

3.5.3.2. Real Test Mode

An able-bodied person performed the Box and Blocks real test with and without the SAF to determine the effect of the assistance, with a sphere constraint in his workspace, which simulated the motion of persons with disabilities. The height of the wall in the tests is 10 inches. Figure 3.13 shows the trajectory of the slave manipulator with

PAGE 47

no assistance versus the slave with assistance when performing the real Box and Blocks test. According to this figure, the trajectory with assistance is a smooth curve approaching the object, and then avoiding the wall obstacle. The curve shows how the user was guided toward the object. The trajectory with no assistance shows that the user has a random approach to the object, with many uncertain and unnecessary movements. It also shows the effect of each stage of scaling.

Figure 3.13 Trajectory of Box and Blocks Task (slave with assistance vs. slave with no assistance; the assisted path is annotated with the start, stages 1 through 7, the wall obstacle, the object and the finish)

For data analysis, the person performed the test 30 times with assistance and 30 times without assistance. Table 3.1 shows the results of the tests. It includes the decrease in necessary input motion, idle time, and execution time when using the developed computer assistance. Whether in simulation or real test mode, the assistance functions not


only decreased the execution time but also reduced its standard deviation, from 4.512 s to 2.086 s.

Table 3.1 Comparison of Averages for the Box and Blocks Test Using a Workspace Constraint

Average Test Data (All Positions)    No Assistance   SAF Assistance   % Decrease
Total Distance                       11.87           9.89             16.7%
Number of Times Repositioned         43.80           23.80            45.7%
Time Spent Repositioning             22.56           9.66             57.2%
Total Completion Time                76.63           50.24            34.4%

3.6 Summary

This work provides a virtual simulation and sensor-assistance approach for a complex teleoperation task to be executed by persons with disabilities. It can be used as a vocational training platform and as an evaluation tool after therapy in rehabilitation engineering. The assistance increases the safety and dexterity of users who would not otherwise be able to perform the task. In this dissertation, the Box and Blocks test was explained, along with a suitable combination of assistance that variably scales the input velocity. Able-bodied persons initially performed the test to show the effect of the assistance concept. A constraint was then added to the input to simulate a person with disabilities by restricting the possible movements of the able-bodied user, and more tests were performed. The results show how the desired motion was kept or sometimes augmented, and how the unwanted motion was reduced. Therefore, when this assistance is applied, the performance of a person with disabilities will be drastically enhanced.


Chapter 4: Telemanipulation Assistance Based on Motion Intention Recognition

In telemanipulation systems, assistance through variable position/velocity mapping or virtual fixtures can improve manipulation capability and dexterity [37, 45, 53, 61, 64]. Conventionally, such assistance is based on sensory data of the environment, without knowledge of the user's motion intention. In this dissertation, the user's motion intention is combined with real-time environment information to apply appropriate assistance. If the current task is following a path, a virtual fixture is applied. If the task is aligning the end-effector with a target, an attractive force field is produced. Similarly, if the task is avoiding obstacles that block the path, a repulsive force field is generated. In order to successfully recognize the user's motion intention, a Hidden Markov Model (HMM) based algorithm is developed to classify human actions such as following a path, aligning with a target, and avoiding obstacles. The algorithm is tested on a simulation platform. This chapter presents the development of the teleoperation assistance algorithm based on recognition of the operator's motion intention through the Hidden Markov Model (HMM), together with the basic theory and application of the HMM.

4.1 Telemanipulation Assistance

The fundamental purpose of a telerobotic system is to extend the operator's sensory-motor facilities and manipulation capabilities to a remote environment [70]. This approach is guided by the philosophy that the human operator should remain in direct control of the


slave at all times, with human-independent control parameters altered according to sensor information. However, manipulation tasks such as assembly are still difficult for a telerobotic system. In many cases, the physical labor of completing a task manually is replaced by the mental burden of controlling the remote input device. In the field of rehabilitation robotics, this is the main hindrance to the wide application of telerobotic assistive devices [71], so assistance for teleoperation has become essential in order to reduce operation fatigue. The first kind of assistance is variable position and velocity mapping based on sensory information and force feedback [53]. The other is the virtual fixture, which has been used as a means of providing direct, physical assistance [37, 61, 64]. Drawing a straight line without a ruler is very difficult; a virtual fixture plays the same role as the ruler in helping a human draw a straight line. Both of these forms of assistance can enhance a human's accuracy in executing complex tasks and reduce the time consumed, but their limitation is that they are tied to specific tasks. Our recent work in telemanipulation systems for rehabilitation engineering motivated us to enhance manipulation accuracy and reduce operator fatigue [29, 50, 53]. In order to provide general assistance, specific tasks need to be divided into several simple and general subtasks. Our work combines the environment information with the user's motion intention before applying appropriate assistance. Human motion intention is classified from movement velocities through a Hidden Markov Model into following a path, aligning with a target, avoiding an obstacle, and stopping. For each motion, appropriate assistance is provided. For example, if the motion is following a path, a virtual fixture orthogonal to the path is applied, just like a ruler. If the motion is aligning with a target, an attractive force field is applied.


4.2 Classes of Motion in Telemanipulation

With typical telemanipulation, the user closes the control loop, sensing environment information such as the location of and distance to the target and providing the appropriate control signal by moving the input device. For a common task, such as grasping a cup and putting it on a cup pad, the motion process can be divided into four classes:

1. Following the desired trajectory;
2. Aligning with the target;
3. Avoiding an obstacle; and
4. Stopping.

The "following the desired trajectory" motion happens when a desired trajectory is planned. For the go-grasp task, the desired trajectory is a straight line if there is no obstacle blocking the path. We can decompose the velocity vector $v_c$ into two parts: $v_p$, the velocity component along the tangent direction of the desired path, and $v_o$, the velocity component orthogonal to the path tangent (Figure 4.1). While the user is following a path, $v_p \gg v_o$ (Figure 4.1); while aligning the end-effector with the target, both $v_p$ and $v_o$ are relatively small and close to each other (Figure 4.2); while avoiding an obstacle, $v_p \ll v_o$ (Figure 4.3); and when stopping, both $v_p$ and $v_o$ are close to zero (Figure 4.4). These features, however, do not hold for every individual sample, so the four motions cannot be classified for each sample value using a simple threshold. Therefore the Hidden Markov Model, a stochastic-process technique, is used. Since these two velocity components are orthogonal, they are independent, and a two-dimensional HMM is used to model them.
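A minimal sketch of this decomposition, assuming the desired path tangent is available as a unit vector d; the variable names and the sample values are illustrative only, and the dissertation classifies the resulting components with the HMM of Section 4.3 rather than with fixed thresholds.

import numpy as np

def decompose_velocity(v_c, d):
    # Split the commanded velocity into a component along the path tangent d
    # (assumed to be a unit vector) and a component orthogonal to it.
    v_p = np.dot(v_c, d) * d
    v_o = v_c - v_p
    return v_p, v_o

# Illustrative use: compare magnitudes the way Figures 4.1-4.4 describe.
d = np.array([1.0, 0.0, 0.0])
v_c = np.array([40.0, 3.0, -2.0])        # hypothetical velocity sample (mm/s)
v_p, v_o = decompose_velocity(v_c, d)
print(np.linalg.norm(v_p), np.linalg.norm(v_o))   # large v_p, small v_o -> path following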


Figure 4.1 Path Following Motion and its Velocity Profile

Figure 4.2 Aligning with Target Motion and its Velocity Profile

Figure 4.3 Avoiding Obstacle Motion and its Velocity Profile


Figure 4.4 Stop Motion and its Velocity Profile

4.3 Hidden Markov Model Based Motion Recognition

4.3.1 Data Preprocessing

The velocity of the input device is sampled at a 1000 Hz rate. The data are denoted as $V = [V_p \; V_o]$, where $V_p$ and $V_o$ are the sets of sampled velocity values $v_p$ and $v_o$:

$V_p = [v_{p,1}, v_{p,2}, \ldots, v_{p,n}], \quad V_o = [v_{o,1}, v_{o,2}, \ldots, v_{o,n}]$   (4.1)

where $n$ is the number of samples. Since $V_p$ and $V_o$ play the same role, we demonstrate the data processing for only one of them, $V_p$. Because we use a discrete HMM, this velocity data must be converted into a sequence of discrete symbols. The conversion follows two steps: (1) data preprocessing and (2) vector quantization, as illustrated in Figure 4.5. The primary purpose of data preprocessing is to extract meaningful feature vectors for the vector quantization. In our case, the preprocessing proceeds in two steps: (1) spectral conversion and (2) power spectral density (PSD) estimation. First, a 16-point window with 50% overlap is used to select data:


$v_p = [v_{p,1}, v_{p,2}, \ldots, v_{p,16}]$   (4.2)

Prior to spectral conversion, a Hamming window is used to filter each frame, thus minimizing spectral leakage. The Hamming transformation $T_H(v)$ maps a $k$-length ($k = 16$ in this case) real vector to a new $k$-length real vector:

$h = T_H(v_p) = [H_1 v_{p,1} \;\; H_2 v_{p,2} \;\; \ldots \;\; H_k v_{p,k}] \quad (k = 16)$   (4.3)

$H_i = 0.54 - 0.46 \cos\!\left[\frac{2\pi (i-1)}{k-1}\right], \quad i \in \{1, 2, \ldots, k\}$   (4.4)

Next, FFT (Fast Fourier Transform) analysis is applied to every Hamming-windowed frame. The FFT transform $T_F(h)$ maps a $k$-length vector $h = [h_1, h_2, \ldots, h_k]$ to a $k$-length complex vector $z = [z_1, z_2, \ldots, z_k]$:

$z = T_F(h) = [F_0(h) \;\; F_1(h) \;\; \ldots \;\; F_{k-1}(h)]$, where $F_p(h) = \sum_{q=0}^{k-1} h_{q+1} \, e^{-i 2\pi p q / k}, \quad p = 0, 1, \ldots, k-1$   (4.5)

Now, let us define the power spectral density (PSD) estimates for the Hamming-Fourier output $z$. The PSD estimate is given by

$P(z) = [P_0(z) \;\; P_1(z) \;\; \ldots \;\; P_{k/2}(z)]$   (4.6)

where

$P_0(z) = \frac{1}{H_{ss}} |z_0|^2, \quad P_i(z) = \frac{1}{H_{ss}} \left( |z_i|^2 + |z_{k-i}|^2 \right), \; i \in \{1, 2, \ldots, k/2 - 1\}, \quad P_{k/2}(z) = \frac{1}{H_{ss}} |z_{k/2}|^2, \quad H_{ss} = \sum_{q=1}^{k} H_q^2$
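A compact NumPy sketch of this windowing, Hamming weighting, FFT, and PSD step, under the same assumptions (16-sample frames with 50% overlap, normalization by the sum of squared window coefficients as in Eq. (4.6)); it returns k/2-point vectors, matching the 8-point PSD vectors described in the text, and the synthetic signal at the end is purely illustrative.

import numpy as np

def psd_features(v, k=16):
    # Slice the velocity stream into k-sample frames with 50% overlap,
    # apply a Hamming window, take the FFT, and return k/2-point PSD vectors.
    w = 0.54 - 0.46 * np.cos(2 * np.pi * np.arange(k) / (k - 1))   # Eq. (4.4)
    h_ss = np.sum(w ** 2)
    feats = []
    for start in range(0, len(v) - k + 1, k // 2):
        frame = np.asarray(v[start:start + k], dtype=float) * w     # Eq. (4.3)
        z = np.fft.fft(frame)                                       # Eq. (4.5)
        p = np.empty(k // 2)
        p[0] = np.abs(z[0]) ** 2 / h_ss                             # Eq. (4.6)
        for i in range(1, k // 2):
            p[i] = (np.abs(z[i]) ** 2 + np.abs(z[k - i]) ** 2) / h_ss
        feats.append(p)
    return np.array(feats)   # one 8-point PSD vector per 16-sample frame

# Example: a synthetic velocity trace sampled at 1000 Hz.
v_p = np.sin(2 * np.pi * 5 * np.arange(0, 1, 0.001)) * 30.0
print(psd_features(v_p).shape)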


Figure 4.5 Conversion of Continuous Velocity Data into Discrete Symbols


Due to the symmetry of the FFT, the length of the PSD estimate output is k/2 = 8. As illustrated above, a 16-point velocity sampling window is thus mapped to an 8-point PSD vector. Let us represent the combination of Hamming windowing, Fourier transform, and power spectral density estimation by $T_{(H,F,P)}(v)$. If there are $m$ sampling windows, the PSD estimation vectors form a matrix as shown below:

$V_P^m = \begin{bmatrix} P(z_1) \\ P(z_2) \\ \vdots \\ P(z_m) \end{bmatrix} = \begin{bmatrix} T_{(H,F,P)}(v_p^1) \\ T_{(H,F,P)}(v_p^2) \\ \vdots \\ T_{(H,F,P)}(v_p^m) \end{bmatrix}$   (4.7)

In the same way, the second-dimension data $V_o$ can be converted into a PSD matrix $V_O^m$.

4.3.2 Vector Quantization

In the previous section, we converted the raw velocity data into the feature matrices $V_P^m$ and $V_O^m$. Let $V = \{v_t \mid t = 1, 2, \ldots, m\}$ denote the set of all feature vectors. In order to apply discrete-output HMMs, we now need to convert the feature vectors in $V$ to $N$ discrete symbols, where $N$ is the number of output observables in our HMMs. In other words, we want to replace the many $v_t$ with $N$ prototype vectors $Q_N = \{x_n \mid n = 1, 2, \ldots, N\}$, known as the codebook, such that we minimize the total distortion $D(V, Q_N)$,

$D(V, Q_N) = \sum_t \min_n d(v_t, x_n), \quad \text{where } d(v_t, x_n) = (v_t - x_n)(v_t - x_n)^T$   (4.8)

over all feature vectors. We choose the well-known LBG vector quantization (VQ) algorithm [72] to perform this quantization. The illustration of the LBG algorithm for


different values of N is shown in Figure 4.7. For our case, N is determined to be 256. For our data, we set the split offset $\epsilon = 0.001$ and the convergence criterion $d_{VQ} = 10.0 \times 10^{-15}$. With these parameter settings, the centroids usually converge within only a few iterations. Thus, the velocity signal is trained and classified into 256 vectors, denoted by the VQ codebook $Q_N$. Now, given a sequence of feature (velocity, in our case) vectors $V_f$, we can convert them into a symbol vector $S_f = \{s_1, s_2, \ldots, s_f\}$ of length $f$. Using $T_{VQ}(\cdot)$ to represent the conversion from a feature vector into a symbol,

$S_f = T_{VQ}(V_f \mid Q_N) = \{ T_{VQ}(v_1 \mid Q_N), \, T_{VQ}(v_2 \mid Q_N), \, \ldots, \, T_{VQ}(v_f \mid Q_N) \}$   (4.9)

$s_i = T_{VQ}(v_i \mid Q_N) = \mathrm{index}\!\left[ \min_{n} d(v_i, x_n) \right], \quad n \in \{1, 2, \ldots, N\}$   (4.10)

We train the VQ codebook with these vectors, and the codebook is produced by the LBG algorithm (see Figure 4.6). The LBG VQ (vector quantization) technique maps these 8-dimensional vectors into a finite set of vectors $Y = \{ y_i : i = 1, 2, \ldots, L \}$, where $L$ is the length of the codebook (determined to be 256 in our case). Each vector $y_i$ is called a code vector or a codeword, and the set of all the codewords is referred to as a codebook. Associated with each codeword $y_i$ is the nearest-neighbor region, called the Voronoi region, defined by [72]:

$V_i = \{ x \in R^k : \| x - y_i \| \le \| x - y_j \| \text{ for all } j \ne i \}$   (4.11)

The 256 8-dimensional vectors in the codebook are the 256 symbols in the output probability distribution functions of the discrete HMM. Similarly, a codebook for the velocity component $v_o$ and its 256 symbols are obtained in the same way. The computation procedures of the data preprocessing part are illustrated in Figure 4.5. This method is similar to the continuous symbol conversion in [62].
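A hedged sketch of LBG codebook training and the nearest-codeword symbol assignment of Eqs. (4.9)-(4.10); the splitting offset and stopping rule follow the values quoted above, while the function names, structure, and the small random data set are illustrative assumptions rather than the dissertation's implementation.

import numpy as np

def lbg_codebook(feats, n_codes=256, eps=0.001, tol=1e-15):
    # Grow the codebook by splitting (1 -> 2 -> 4 -> ...) and refine each
    # size with Lloyd iterations until the distortion change falls below tol.
    codebook = feats.mean(axis=0, keepdims=True)
    while len(codebook) < n_codes:
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        prev = np.inf
        while True:
            d = ((feats[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            nearest = d.argmin(axis=1)
            dist = d[np.arange(len(feats)), nearest].mean()
            for i in range(len(codebook)):          # move codewords to centroids
                members = feats[nearest == i]
                if len(members):
                    codebook[i] = members.mean(axis=0)
            if prev - dist <= tol:
                break
            prev = dist
    return codebook

def to_symbols(feats, codebook):
    # Eq. (4.10): each feature vector becomes the index of its nearest codeword.
    d = ((feats[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

# Illustrative use on random 8-dimensional PSD-like features.
feats = np.random.rand(2000, 8)
cb = lbg_codebook(feats, n_codes=16)     # small codebook for the example
symbols = to_symbols(feats, cb)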


Figure 4.6 LBG Codebook Training


Figure 4.7 LBG Vector Quantization for Random 2D Data, for L Equal to 2, 4, 8, 16, 32


4.3.3 HMM Training

HMMs are used in both continuous and discrete forms. The discrete HMM is the easier of the two to compute, and it is the form adopted in this dissertation. A discrete HMM can be defined as follows [63]:

1. A set of N states S = {S_1, S_2, ..., S_N};
2. A set of M possible observations V = {v_1, v_2, ..., v_M};
3. A state transition probability distribution A = {a_ij}, where a_ij = P[q_{t+1} = S_j | q_t = S_i], 1 <= i, j <= N;
4. An observation probability distribution in each state j, B = {b_j(k)}, where b_j(k) = P[v_k at t | q_t = S_j], 1 <= j <= N, 1 <= k <= M;
5. An initial state distribution pi = {pi_i}, where pi_i = P[q_1 = S_i], 1 <= i <= N;
6. Let lambda = (A, B, pi) be the complete parameter set.

Figure 4.8 represents a 5-state HMM in which each state emits one of 256 discrete symbols in two dimensions.

Figure 4.8 5-State Left-Right Hidden Markov Model, with 32 Observable Symbols in Each State


In order to train an HMM and use it for recognition, the following three basic problems for HMMs need to be solved [63]:

1. Given the observation sequence O = o_1 o_2 ... o_T and a model lambda = (A, B, pi), how do we determine P(O|lambda), the probability of the observation sequence given the model? This can be viewed as scoring a model in terms of how well it matches the observation.

2. Given the observation sequence O = o_1 o_2 ... o_T and a model lambda = (A, B, pi), what is the corresponding state sequence Q = q_1 q_2 ... q_T that best explains the observation (i.e., the most probable sequence)?

3. How do we set or adjust the parameters of a model lambda = (A, B, pi) to maximize P(O|lambda)? This is the training or learning problem of adjusting the model's parameters to best fit a set of training data.

In order to classify the four different motions, we need to design a separate HMM for each motion. The observations are sequences of coded spectral vectors, where each spectral vector is mapped to the code word that is its closest match; these sequences of codes represent a motion executed repeatedly by one or more operators. The solution to problem 3 sets the parameters of the model for each motion. The solution to problem 2 segments each of the motion training sequences into states and thereby gives information about how to adjust the number of states or the codebook. Once the four models are built, we can use the solution to problem 1 to score each motion model's match to a given observation sequence and select the best model. The computation of the three problems is explained in this section.


Problem 1 is to determine P(O|lambda). Consider a state sequence of length T, Q = q_1 q_2 ... q_T, and ask how likely this state sequence is and how likely it is to generate the observation sequence. First, assuming that individual observations are independent, the probability of observing O given Q is [63]:

$P(O \mid Q, \lambda) = \prod_{t=1}^{T} P(o_t \mid q_t, \lambda) = b_{q_1}(o_1) \, b_{q_2}(o_2) \cdots b_{q_T}(o_T)$   (4.12)

The joint probability of the observation sequence and a given state sequence is simply

$P(O, Q \mid \lambda) = P(O \mid Q, \lambda) \, P(Q \mid \lambda)$   (4.13)

so the probability of the observation given the model is obtained by summing over all state sequences:

$P(O \mid \lambda) = \sum_{\text{all } Q} P(O \mid Q, \lambda) \, P(Q \mid \lambda)$   (4.14)

The direct computation of Eq. (4.14) requires summing over $N^T$ possible sequences. Instead, a forward-backward procedure is used; the detailed algorithm is described by L. Rabiner [63].

Problem 2 is to find the state sequence Q that is most probable given a sequence of observations, i.e., to maximize P(Q|O, lambda), or equivalently P(Q, O|lambda). The Viterbi algorithm [63] finds this state sequence by defining

$\delta_t(i) = \max_{q_1, q_2, \ldots, q_{t-1}} P(q_1 q_2 \cdots q_t = i, \, o_1 o_2 \cdots o_t \mid \lambda)$   (4.15)

i.e., the probability of the best subsequence that accounts for the first t observations and ends in state S_i. The induction

$\delta_{t+1}(j) = \left[ \max_i \delta_t(i) \, a_{ij} \right] b_j(o_{t+1})$   (4.16)

is used for the computation. It is also necessary to store the state argument i that maximizes this function for each t and j; this is kept in the vector $\psi_t(j)$.
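A brief sketch of the forward recursion used for Problem 1 (and reused for scoring in Section 4.3.4). The array layout and the tiny model at the end are illustrative assumptions, not the dissertation's code, and probabilities are multiplied directly here, so for long sequences the scaling mentioned later in this section would be needed.

import numpy as np

def forward_probability(pi, A, B, obs):
    # alpha[i] = P(o_1..o_t, q_t = S_i | lambda), updated in place; returns P(O | lambda).
    # pi: (N,), A: (N, N), B: (N, M) with B[i, k] = b_i(v_k), obs: list of symbol indices.
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

# Tiny illustrative model with N = 2 states and M = 3 symbols.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],
              [0.1, 0.3, 0.6]])
print(forward_probability(pi, A, B, obs=[0, 2, 1]))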


Problem 3 is about training. There is no known way to analytically calculate the parameters of a model that maximize the probability of an observation. However, the parameters can be locally maximized using an iterative hill-climbing method called Baum-Welch, or EM (expectation-maximization) [63]. Let us explain the Baum-Welch method. Define $\xi_t(i, j)$ as the probability of being in state S_i at time t and state S_j at time t+1:

$\xi_t(i, j) = P(q_t = S_i, \, q_{t+1} = S_j \mid O, \lambda)$   (4.17)

This can be calculated as [63]

$\xi_t(i, j) = \frac{P(q_t = S_i, \, q_{t+1} = S_j, \, O \mid \lambda)}{P(O \mid \lambda)} = \frac{\alpha_t(i) \, a_{ij} \, b_j(o_{t+1}) \, \beta_{t+1}(j)}{P(O \mid \lambda)}$   (4.18)

Let $\gamma_t(i)$ be the probability of being in state S_i at time t, given the observation sequence and the model:

$\gamma_t(i) = \sum_{j=1}^{N} \xi_t(i, j)$   (4.19)

The expected number of transitions from S_i is then $\sum_{t=1}^{T-1} \gamma_t(i)$, and the expected number of transitions from S_i to S_j is $\sum_{t=1}^{T-1} \xi_t(i, j)$. We can now estimate new values of the parameters given the observation as [63]:

$\bar{\pi}_i$ = expected probability of being in state i at t = 1 = $\gamma_1(i)$

$\bar{a}_{ij} = \frac{\text{expected number of transitions from state } i \text{ to state } j}{\text{expected number of transitions from state } i} = \frac{\sum_{t=1}^{T-1} \xi_t(i, j)}{\sum_{t=1}^{T-1} \gamma_t(i)}$


$\bar{b}_j(k) = \frac{\text{expected number of times in state } j \text{ observing symbol } v_k}{\text{expected number of times in state } j} = \frac{\sum_{t=1, \, o_t = v_k}^{T} \gamma_t(j)}{\sum_{t=1}^{T} \gamma_t(j)}$

It can be proven that the updated model $\bar{\lambda} = (\bar{A}, \bar{B}, \bar{\pi})$ is either identical to $\lambda$ (we are at a local maximum, which is also the stopping criterion for training) or better than $\lambda$ with respect to the given observation, i.e., $P(O \mid \bar{\lambda}) > P(O \mid \lambda)$. Overall, the training step obtains a maximum-likelihood estimate of an HMM for an observation. The flow of this algorithm can be described as follows [63]:

Initialize $\lambda = (A, B, \pi)$ to random estimates that satisfy the probabilistic constraints (see below).
Repeat:
    Set $\lambda' := \lambda$.
    Calculate $\bar{A}$, $\bar{B}$, $\bar{\pi}$ based on O and $\lambda$, and set $\lambda := (\bar{A}, \bar{B}, \bar{\pi})$.
Until $\lambda = \lambda'$.

The probabilistic constraints are always maintained:

$\sum_{j=1}^{N} a_{ij} = 1 \;\; (1 \le i \le N), \quad \sum_{k=1}^{M} b_j(k) = 1 \;\; (1 \le j \le N), \quad \sum_{i=1}^{N} \pi_i = 1$
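A compact, hedged sketch of one Baum-Welch re-estimation pass following Eqs. (4.17)-(4.19); it is written for a single observation sequence and omits the probability scaling discussed next, so it is illustrative only and the function name is an assumption.

import numpy as np

def baum_welch_step(pi, A, B, obs):
    # One EM update of (pi, A, B) for a single symbol sequence `obs`.
    N, T = len(pi), len(obs)
    alpha = np.zeros((T, N)); beta = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):                       # forward pass
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):              # backward pass
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    p_obs = alpha[-1].sum()
    # Eqs. (4.18)-(4.19): transition and state posteriors.
    xi = np.array([np.outer(alpha[t], B[:, obs[t + 1]] * beta[t + 1]) * A / p_obs
                   for t in range(T - 1)])
    gamma = alpha * beta / p_obs
    # Re-estimation formulas.
    new_pi = gamma[0]
    new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    new_B = np.zeros_like(B)
    for k in range(B.shape[1]):
        new_B[:, k] = gamma[np.array(obs) == k].sum(axis=0) / gamma.sum(axis=0)
    return new_pi, new_A, new_B, p_obs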


In practice it is nearly impossible for $\bar{\lambda}$ to equal $\lambda$ exactly, but they can become very close. Let $\lambda^{(k-1)}$ denote the HMM after k-1 iterations of the Baum-Welch algorithm, and let $\lambda^{(k)}$ denote the current iteration. The training computation stops if

$\frac{P(O \mid \lambda^{(k)}) - P(O \mid \lambda^{(k-1)})}{\left[ P(O \mid \lambda^{(k)}) + P(O \mid \lambda^{(k-1)}) \right] / 2} < \epsilon_{HMM}$   (4.20)

where $\epsilon_{HMM} = 0.00001$. In addition, in order to avoid computational underflow caused by the multiplication of very small probability numbers, scaling of very small probability values is applied when necessary. This scaling does not affect the training of the HMM, since the only useful information is the ratio of different probabilities and not their actual values. As explained in the previous section, one model corresponds to one motion, so four separate HMMs need to be trained. Obviously, problem 3 (training) is the most difficult of the three HMM problems. Suppose the HMM for path following is initialized as $\lambda = (A, B, \pi)$ with

$\pi = [0.2 \;\; 0.2 \;\; 0.2 \;\; 0.2 \;\; 0.2]$

$A = \begin{bmatrix} 0.8 & 0.05 & 0.05 & 0.05 & 0.05 \\ 0.1 & 0.6 & 0.1 & 0.1 & 0.1 \\ 0.05 & 0.1 & 0.7 & 0.1 & 0.05 \\ 0.025 & 0.025 & 0.025 & 0.9 & 0.025 \\ 0.1 & 0.15 & 0.05 & 0.1 & 0.6 \end{bmatrix}$

and B initialized uniformly, with every one of its 5 x 256 entries equal to 1.0/256.


From these initial values, we can see the probability constraints: the sum of the transition probabilities from the current state to all states is 1, and at each state the sum of the probability distribution over all possible observations is also 1. Using the observation sequences of path following, the HMM is trained; that is, the probability parameters are adjusted. Once convergence occurs, the trained HMM is expressed by the updated values:

$\pi = [0.08 \;\; 0.15 \;\; 0.28 \;\; 0.19 \;\; 0.29]$

$A = \begin{bmatrix} 0.87 & 0.06 & 0 & 0.06 & 0.01 \\ 0 & 0.70 & 0.05 & 0.21 & 0.04 \\ 0 & 0 & 0.26 & 0.74 & 0 \\ 0 & 0 & 0 & 0.87 & 0.13 \\ 0 & 0 & 0 & 0 & 1.0 \end{bmatrix}$

$B = \begin{bmatrix} 0.0023 & 0 & 0.016 & 0 & \cdots \\ 0.013 & 0.0156 & 0 & 0.006 & \cdots \\ 0 & 0 & 0.003 & 0.0025 & \cdots \\ 0 & 0.0046 & 0.001 & 0 & \cdots \\ 0.009 & 0 & 0.001 & 0.018 & \cdots \end{bmatrix}$ (only the first few of the 256 symbol columns are shown)

4.3.4 Motion Recognition

Once the four HMMs are trained on their corresponding training sets, they can classify motions. The classification criterion is the forward score of a sequence of observations for a given model. This forward calculation is the same as the forward part of the forward-backward procedure used in solving problem 1.


$\alpha_1(i) = \pi_i \, b_i(o_1), \quad 1 \le i \le N$   (4.21)

$\alpha_{t+1}(j) = \left[ \sum_{i=1}^{N} \alpha_t(i) \, a_{ij} \right] b_j(o_{t+1}), \quad 1 \le t \le T-1, \; 1 \le j \le N$   (4.22)

$P(o_1 o_2 \ldots o_T \mid \lambda) = \sum_{j=1}^{N} \alpha_T(j)$   (4.23)

Figure 4.9 Forward Computation Illustration

Let us illustrate this computation with two one-dimensional, two-state, left-right HMMs as an example. Figure 4.9 shows two HMMs representing two classes. The observation alphabet has length 4; therefore, at each time t, one of the four symbols A, B, C, or D will be observed in each state. From the structure of the first HMM (Figure 4.9 (a)), its parameters are:

$\pi = [0.3 \;\; 0.7], \quad A = \begin{bmatrix} 0.5 & 0.5 \\ 0 & 1 \end{bmatrix}, \quad B = \begin{bmatrix} 0.2 & 0.8 \\ 0.4 & 0.05 \\ 0.3 & 0.05 \\ 0.1 & 0.1 \end{bmatrix}$

(in B, the rows correspond to the symbols A-D and the columns to the two states).


For the given observation sequence ABA, its forward score is computed as follows:

$\alpha_1(1) = \pi_1 b_1(A) = 0.3 \times 0.2 = 0.06$
$\alpha_1(2) = \pi_2 b_2(A) = 0.7 \times 0.8 = 0.56$
$\alpha_2(1) = [\alpha_1(1) a_{11} + \alpha_1(2) a_{21}] \, b_1(B) = [0.06 \times 0.5 + 0.56 \times 0] \times 0.4 = 0.012$
$\alpha_2(2) = [\alpha_1(1) a_{12} + \alpha_1(2) a_{22}] \, b_2(B) = [0.06 \times 0.5 + 0.56 \times 1] \times 0.05 = 0.0295$
$\alpha_3(1) = [\alpha_2(1) a_{11} + \alpha_2(2) a_{21}] \, b_1(A) = [0.012 \times 0.5 + 0.0295 \times 0] \times 0.2 = 0.0012$
$\alpha_3(2) = [\alpha_2(1) a_{12} + \alpha_2(2) a_{22}] \, b_2(A) = [0.012 \times 0.5 + 0.0295 \times 1] \times 0.8 = 0.0284$
$P(O = ABA \mid \lambda_1) = \alpha_3(1) + \alpha_3(2) = 0.0296$

This is the probability of the first HMM generating the given observation sequence ABA. For the second HMM (Figure 4.9 (b)), the parameters are:

$\pi = [0.5 \;\; 0.5], \quad A = \begin{bmatrix} 0.3 & 0.7 \\ 0 & 1 \end{bmatrix}, \quad B = \begin{bmatrix} 0.1 & 0.4 \\ 0.05 & 0.5 \\ 0.6 & 0.1 \\ 0.25 & 0 \end{bmatrix}$

The forward score for the observation sequence ABA is computed in exactly the same way:

$\alpha_1(1) = 0.5 \times 0.1 = 0.05$
$\alpha_1(2) = 0.5 \times 0.4 = 0.2$
$\alpha_2(1) = [0.05 \times 0.3 + 0.2 \times 0] \times 0.05 = 7.5\mathrm{e}{-4}$
$\alpha_2(2) = [0.05 \times 0.7 + 0.2 \times 1] \times 0.5 = 0.1175$
$\alpha_3(1) = [7.5\mathrm{e}{-4} \times 0.3 + 0.1175 \times 0] \times 0.1 = 2.25\mathrm{e}{-5}$
$\alpha_3(2) = [7.5\mathrm{e}{-4} \times 0.7 + 0.1175 \times 1] \times 0.4 = 0.0472$
$P(O = ABA \mid \lambda_2) = \alpha_3(1) + \alpha_3(2) = 0.0472$

Since $P(O = ABA \mid \lambda_2) = 0.0472 > P(O = ABA \mid \lambda_1) = 0.0296$, it can be concluded that $\lambda_2$ is more likely to generate the observation sequence ABA. In other words, if we observe the sequence ABA, the underlying process represented by HMM 2 has been recognized.
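The comparison above can also be checked numerically; the short script below is an illustrative verification of the two scores using the forward recursion of Eqs. (4.21)-(4.23), not code from the dissertation.

import numpy as np

def forward_score(pi, A, B, obs):
    # B[k, i] = probability of symbol k in state i, matching the matrix layout above.
    alpha = pi * B[obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[o]
    return alpha.sum()

A1 = np.array([[0.5, 0.5], [0.0, 1.0]])
B1 = np.array([[0.2, 0.8], [0.4, 0.05], [0.3, 0.05], [0.1, 0.1]])
A2 = np.array([[0.3, 0.7], [0.0, 1.0]])
B2 = np.array([[0.1, 0.4], [0.05, 0.5], [0.6, 0.1], [0.25, 0.0]])
obs = [0, 1, 0]                                            # A, B, A
print(forward_score(np.array([0.3, 0.7]), A1, B1, obs))    # about 0.0296
print(forward_score(np.array([0.5, 0.5]), A2, B2, obs))    # about 0.0472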


In our case, the HMMs have two dimensions and the observation alphabet of each dimension has length 256. Four successive symbols obtained by the data preprocessing are used as the partial observation sequence; it could be, for example, {20, 255, 120, 19}. This vector is used to compute the forward likelihood of each of the four HMMs, as shown in the illustration above. Then, for the given observation vector, we choose the model with the largest likelihood as the recognized model at time t.

4.4 Design of Fixture Assistance

Once the user's motion intentions are recognized, appropriate assistance can be designed for each motion. We define the path curve as p(s) and denote the target position by t. When the goal during task execution is to move to a target, we assume that the desired trajectory is a straight line connecting the current Cartesian position of the end-effector and the target. A preferred reference direction d can be defined for each end-effector position $x_c$ as:

Figure 4.10 Virtual Fixture Definition (path p(s), reference direction d(x_c), and velocity components v_c, v_p, v_o)

$d(x_c) = \frac{x_t - x_c}{\| x_t - x_c \|}$   (4.24)


where $x_t$ and $x_c$ are the target position and the current position of the end-effector, respectively. We decompose $v_c$, the current velocity, into two orthogonal components:

$v_p = (v_c \cdot d) \, d$   (4.25)

$v_o = v_c - (v_c \cdot d) \, d$   (4.26)

where $v_p$ is the velocity component along the path-curve tangent and $v_o$ is the velocity component orthogonal to the curve tangent. Desirable path following is such that the velocity tangent to the curve is large and the velocity components in the orthogonal direction are relatively small. If the desired trajectory of a subtask is a straight line, a virtual fixture can provide the same assistance that a ruler does when drawing a line.

4.4.1 Fixture Assistance

Fixture assistance is always applied for path following, except when the user is trying to align with an object or avoid an obstacle. The stiffness coefficient $k_d$ along the curve tangent is therefore set to zero. The stiffness orthogonal to the curve tangent is defined as

$k_o = \begin{cases} k_c (1 - d/r), & d \le r \\ k_c, & d > r \end{cases}$   (4.27)

where $k_c$ is the fixture coefficient (determined to be 0.5 N/mm for this experiment), d is the distance between the end-effector and the center position of the force field, and r is the force field radius. This means that once the end-effector goes inside a force field, the path-following fixture is removed (see Figure 4.11 for the fixture coefficients).
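As a hedged illustration of how this orthogonal fixture could be applied in the control loop, the sketch below treats the fixture as a spring acting on the deviation of the end-effector from the straight reference line. The dissertation specifies the stiffness coefficients but not this exact force law, so the function, its arguments, and the sample values are assumptions.

import numpy as np

def fixture_force(x_c, x_start, x_t, d_field, k_c=0.5, r=20.0):
    # Orthogonal virtual-fixture force for path following (Eqs. 4.24-4.27).
    # The desired path is the straight line from x_start to x_t; the fixture
    # pushes the end-effector back toward that line with stiffness k_o, which
    # changes inside a force field of radius r (d_field = distance to its center).
    d = (x_t - x_start) / np.linalg.norm(x_t - x_start)     # path tangent, Eq. (4.24)
    e = x_c - x_start
    e_o = e - np.dot(e, d) * d                 # deviation orthogonal to the path
    k_o = k_c * (1 - d_field / r) if d_field <= r else k_c  # Eq. (4.27)
    return -k_o * e_o                          # restoring force; k_d along the path is 0

f = fixture_force(x_c=np.array([10.0, 9.0, 0.0]),
                  x_start=np.zeros(3),
                  x_t=np.array([80.0, 50.0, 0.0]),
                  d_field=60.0)
print(f)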


Figure 4.11 Stiffness Coefficients of the Different Fixtures (fixture stiffness, attractive-field coefficient, and repulsive-field coefficient, in N/mm, plotted against the distance to the force field)

4.4.2 Force Field Design for Targets and Obstacles

In general, aligning the end-effector with a target and avoiding obstacles are not easy to execute, especially for persons with disabilities of the upper limb. Potential fields generated from the center position of the target or the obstacle can provide some assistance. Based on this concept, force fields are designed around targets and obstacles, with the radius of the force field defined as r. In this dissertation, the force field is defined using a spring force. For approaching a target, the force is defined as

$f = \begin{cases} k_f (r - d), & d \le r \\ 0, & d > r \end{cases}$   (4.28)

where $k_f$ is 0.1 N/mm.
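A small sketch of this spring-like field follows. The attractive case pulls the end-effector toward the target center once it is inside the field radius; the repulsive case of Eq. (4.29), defined next, is obtained here simply by reversing the direction of the returned force. The vector form (force directed toward or away from the center) is our assumption, since the equations specify only the magnitude.

import numpy as np

def field_force(x_c, center, r=20.0, k_f=0.1, attractive=True):
    # Spring-style potential field around a target (attractive) or an
    # obstacle (repulsive); zero outside the field radius r.
    offset = center - x_c
    d = np.linalg.norm(offset)
    if d > r or d == 0.0:
        return np.zeros_like(x_c)
    magnitude = k_f * (r - d)                # Eqs. (4.28)/(4.29)
    direction = offset / d                   # unit vector toward the center
    return magnitude * direction if attractive else -magnitude * direction

pull = field_force(np.array([75.0, 48.0, 0.0]), np.array([80.0, 50.0, 0.0]))
push = field_force(np.array([2.0, 44.0, 0.0]), np.array([0.0, 45.0, 0.0]), attractive=False)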


For obstacle avoidance, the force is defined as

$f = \begin{cases} k_f (r - d), & d \le r \\ 0, & d > r \end{cases}$   (4.29)

where $k_f$ is 0.1 N/mm. Once the end-effector goes within the radius r while aligning with the target, the attractive force originating from the object's center position provides assistance. The force vectors generated by the position and approach fixtures are shown in Figure 4.12. Payandeh et al. used such virtual fixtures as a task-dependent telemanipulation aid [5, 14]. However, the origin of the force fields needs to be determined from the sensory data. In addition, r should be larger than the size of the target or the obstacle.

Figure 4.12 Force Fields Illustration (a: attractive force; b: repulsive force)

4.5 Experiments

We have implemented the algorithm described above and conducted experiments to determine the system's performance with and without the assistance.

4.5.1 Experimental Test Bed

Our telemanipulation simulation system is composed of a visualization scene and a haptic device. The visualization component, the simulation scene, is realized through the PhanToM and GHOST [48]. In this experiment, the task is to move the end-effector from


the origin (0, 0, 0) to (-80, 50, 0), referred to as the target grasp (meaning the end-effector must reside in the object sphere for a short time), then avoid the obstacle at (0, 45, 0), then put the target at the target destination (80, 50, 0), and finally go back to the origin. The target grasp and the target destination are simulated as spheres of 8 mm radius; the obstacle and the end-effector are simulated as spheres of 15 mm and 5 mm radius, respectively. The user is asked to move the end-effector as fast and as smoothly as possible (Figure 4.13). In order to avoid confusion, the operator is only allowed to move on a planar surface, and a planar constraint is added to the haptic device. In this experiment, we are concerned with the straight-line path, since it is relatively easy to obtain from the environment information. The algorithm can be extended to a complex trajectory if the trajectory can be defined using visual information in an unstructured environment.

Figure 4.13 Simulation of the Task Execution (end-effector, origin, target grasp, obstacle, target destination)


4.5.2 Experimental Results Without Assistance

First, an expert user completed the task several times without assistance. The typical performance of the system during these first tests is shown in Figures 4.14 and 4.15. As expected, in free motion the user has much difficulty aligning with the target and following the path; the velocity components orthogonal to the path are not small compared to the useful velocity components tangential to the path. Table 4.1 summarizes the results, including the path-following error (mm) and the execution time (s).

Figure 4.14 Velocity Components Without Assistance ($v_p$ and $v_o$ in mm/s versus time in ms)


Figure 4.15 Trajectories Without Assistance (task trajectories of two tests in the X-Y plane, in mm)

Table 4.1 Performance Summary Without Assistance

           Path Error (mm)       Execution Time (s)
Subject    Mean      Stdev       Mean      Stdev
1          10.1      2.4         21.5      1.9
2          8.9       1.5         20.2      3.3
3          11.8      2.6         22.1      3.4
4          10.3      2.5         20.4      2.8


4.5.3 Motion Recognition

For the task used in this dissertation, four users in the lab completed the task 10 times each. We collected 250 samples of data for each motion, using the first 200 for training and the remaining samples for testing. For a total of 50 testing samples across the four motions, the system successfully recognized 43 samples, an accuracy of 86%. The size of the training set clearly influences the recognition accuracy: after we included 500 samples in the training set, the system recognized 92 of 100 testing samples. The motion recognition performance is shown in Table 4.2.

Table 4.2 Motion Recognition Rate

Motion                    Correct rate    Incorrect rate
                                          to 1     to 2     to 3     to 4
1: Path following         90.5%           ---      4.0%     2.3%     3.2%
2: Target aligning        89.1%           6.4%     ---      2.3%     2.2%
3: Obstacle avoidance     88.3%           7.7%     2.0%     ---      2.0%
4: Stopping               98.7%           0.0%     1.3%     0.0%     ---

4.5.4 Experimental Results with Assistance Based on Motion Intention Recognition

As mentioned before, the appropriate assistance is applied to each motion of a task. If the motion at a certain stage is path following, a hard fixture is applied so that the end-effector moves along the path. Once the motion changes to aligning with a target, the hard fixture is replaced by an attractive force field. For avoiding an obstacle, a repulsive force field is applied. If the motion is classified as stopping, no assistance is applied. In general, the shape of an obstacle is difficult to determine from the sensory information of the environment, so creating a desired path for obstacle


avoidance is not feasible; the repulsive force field instead provides assistance for the operator to go around the obstacle. With these forms of assistance, four users executed the same tasks multiple times. Every time, the system performance was consistent and had very little variation. Two random trajectories from different subjects are shown below. The fixture helped significantly for path following, primarily because the constraints applied to the PhanToM tool tip force it back once there is some deviation from the path. Most of the time, the orthogonal velocity component was much smaller than the velocity component tangential to the path; large orthogonal velocities occur when the user is aligning with a target or avoiding an obstacle.

Figure 4.16 Velocity Components with Assistance ($v_p$ and $v_o$ versus time in ms)


Table 4.3 Performance Summary with Assistance

           Path Error (mm)       Execution Time (s)
Subject    Mean      Stdev       Mean      Stdev
1          5.1       0.5         12.6      0.7
2          4.6       0.8         11.8      0.6
3          5.3       0.9         12.9      1.2
4          4.8       1.1         13.4      1.2

Figure 4.17 Trajectories with Assistance (task trajectories of two tests in the X-Y plane, in mm)

4.6 Summary

The Hidden Markov Model is effective for the classification of random processes such as a human's motion intention in a teleoperation task. As long as the training set is sufficiently large, the motion recognition accuracy approaches 100%. The assistance selected on the basis of the recognized motion is appropriate for each type of motion. The


experimental results without assistance have shown that the operator always has random errors that result in difficulty in following a path and aligning with a target. The experimental results with assistance showed that these undesired random errors were removed or reduced. The HMM-based assistance is useful for improving performance accuracy and decreasing execution time, and these results indicate that selecting the appropriate assistance approach based on motion intention is possible. Based on the operator's motion intention, it is possible to determine whether an object is a target or an obstacle. In order to improve the recognition accuracy, the number of dimensions of the Hidden Markov Model can be expanded; as long as they are all independent, the added dimensions only linearly increase the computational requirements.


Chapter 5: Robotic Therapy for Persons with Disabilities Using Skill Learning

This chapter describes Hidden Markov Model (HMM) based skill learning and its application in a motion therapy system using a haptic interface. A relatively complex task, moving along a labyrinth, is used. A normal subject executes this task a number of times, and the labyrinth skill is learned by a Hidden Markov Model. The learned skill is treated as a virtual therapist who can train persons with disabilities to complete the task. Two persons with disabilities of the upper limb (cerebral palsy) were trained by the virtual therapist. Their performance before and after therapy training, including the smoothness of the trajectory, distance ratio, time taken, tremor, and impact forces, is presented in this chapter. This labyrinth can be used as a therapy platform for upper-limb coordination, tremor reduction, and the improvement of motion control.

5.1 Motion Therapy

Much evidence suggests that intensive therapy improves movement recovery [78, 79]. But such therapy is expensive, because it requires therapists to work on a person-to-person basis. Recently, there has been increased interest in restoring function through robot-aided therapy, in which therapy platforms such as force fields and moving constraints are designed to substitute for the therapist's work. In this chapter, the role of the therapist is replaced by the learned skill. When humans execute a task, their actions reflect the skill associated with that task. When one does a particular task many times, each time the


performance is different even though it represents the same skill. For example, when one draws 50 circles of the same radius by hand, each circle will differ from the others even though they may look similar; yet any one of the 50 circles is the result of the operator's circle-drawing skill. The different appearance of these circles is due to the random control commands from the brain and the random movements of the hand. Since a Hidden Markov Model is well suited to modeling a stochastic process, such as a speech signal, it is possible to characterize the skill of upper-limb motion for a specific task. In this dissertation, we have modeled the human movement along a labyrinth so that its underlying nature is revealed and can be used to transfer the skill to people with disabilities. It is desired that persons with disabilities can be trained in manipulation capabilities, which are incrementally improved through learning practice. Learning from observation is a paradigm in which one observes another person's performance and learns from it. This is also like physical therapy for a specific disability.

5.2 Hidden Markov Model Based Skill Learning

In this dissertation, we model the skill of the moving-along-a-labyrinth task using an HMM. In order for the user to visualize the virtual therapist more effectively, the trajectory of the movement is chosen as the skill to be learned. Since we only consider movement in the X-Y plane, the position coordinates Px and Py are used to represent the movement. Chapter 4 explained how to convert continuous velocity data into discrete symbols; similar procedures are used in this chapter to convert continuous position data into discrete symbols.


5.2.1 Raw Data Conversion

The raw data used by the HMM for motion intention recognition in Chapter 4 were the user's velocities. In this chapter, the raw data are the translation trajectories Px and Py. In order to use a discrete HMM, we still need to convert the raw data into symbols; the procedure is explained in this section. First of all, the translation trajectory is sampled at a 1000 Hz rate. Since Px and Py are independent vectors and are processed in the same way, we only demonstrate the preprocessing procedure for Px. For simplicity, we use an example with less data. Let us assume that the position samples for a specific task result in the following 3 vectors:

V1 = [45.8066 36.9727 19.1504 16.2247 19.1068 29.9084 40.7183 17.3202 46.9558 31.8121 20.7432 39.3534 30.6080 24.9133 38.8958 34.7934];
V2 = [63.5857 76.5475 41.8072 70.4114 13.8365 78.3798 21.7158 20.1863 70.0594 58.9845 10.9215 0.9405 71.5118 15.9310 23.8978 52.9154];
V3 = [13.6516 22.5228 3.1095 47.4401 27.9740 20.3278 24.7446 16.0297 20.7795 10.8456 27.8307 36.4975 25.4315 30.7453 10.0353 18.2313].

The vector length is 16 points; in other words, we cut every 16 points and form a vector. These vectors are the so-called raw data. Their waveforms are shown in Figure 5.1. By themselves they do not carry much useful information, just like a voice-signal waveform in the time domain, so a transformation is needed. As illustrated in Chapter 4, each raw-data vector is multiplied by a Hamming window and then transformed by a 16-point FFT. Since the result of the FFT is a symmetric vector, only half of the FFT result is used in the PSD computation in order to reduce computational complexity. The 3 vectors shown previously are transformed into the following 3 vectors of 8-point length:


P1 = 10^4 x [1.0271 0.2550 0.0111 0.0010 0.0049 0.0241 0.0320 0.0154];
P2 = 10^4 x [1.8429 0.3981 0.0195 0.0518 0.1750 0.1967 0.0299 0.0931];
P3 = 10^3 x [5.8727 0.9417 0.1443 0.0256 0.0841 0.0292 0.0769].

Figure 5.1 Raw-Data Vectors

Figure 5.2 PSD Vectors


If this task is executed 10 times, we will have 30 PSD vectors. For a simple task, we can use all 30 vectors in the computations, but for general applications in real life this number could be very large, and it would be impossible to carry out the computations using all of these vectors. This is why vector quantization is needed. Vector quantization is an algorithm that groups vectors into different clusters according to a vector-distance criterion. The number of clusters is determined by the application and the required accuracy; for simple applications, 32 or 64 clusters are usually enough. The set of clusters and the number of clusters are called the codebook and the codebook length, and the clusters are called the codewords of the codebook. The larger the codebook length, the more accurate the grouping. For this simple example, the length of the codebook for vector quantization is determined to be 4. Figure 5.3 illustrates vector quantization with a codebook length of 4; in other words, the whole set of vectors is divided into 4 clusters according to how close the vectors are to each other. There are many vector quantization algorithms in the literature; the well-known one is LBG [72].

Figure 5.3 Vector Quantization When the Codebook Length is 4


Once the codebook is obtained, we can use it as a template to convert any vector into discrete symbols. As a matter of fact, the 30 vectors used in the vector quantization can also be represented by symbols. If we represent each cluster by a symbol (for example, A represents cluster 1, B represents cluster 2, and so on), we may express the 30 PSD vectors as ABACDCDBACDCDABACDCAABDACDABDB. This is the result of the data preprocessing. When new vectors come in, they are compared with the codewords and assigned to the clusters whose codewords they are closest to, thus converting raw position data into discrete symbols. This is done so that the discrete Hidden Markov Model can be used for all computations.

5.2.2 Hidden Markov Model Computation

Let us assume that we executed this task 3 times for skill learning, and that we need to determine which one of the three task executions represents our skill. For each task execution, the raw position data are preprocessed and converted into 3 discrete symbols. Assume that the symbols from the first task execution are ABA, from the second CBD, and from the third BDB. All symbols from these three task executions are used as the training set, so the training set for the HMM is ABACBDBDB. In order to explain the computation clearly, we use a two-state left-right HMM as shown below.

Figure 5.4 Two-State Left-Right Hidden Markov Model


Before training, all parameters of the HMM are initialized with randomly generated probability values, for example:

$\pi = [0.35 \;\; 0.65], \quad A = \begin{bmatrix} 0.25 & 0.75 \\ 0.56 & 0.44 \end{bmatrix}, \quad B = \begin{bmatrix} 0.21 & 0.78 \\ 0.44 & 0.05 \\ 0.13 & 0.15 \\ 0.22 & 0.02 \end{bmatrix}$

Using the training set ABACBDBDB, these HMM parameters are updated using the same algorithm explained in Chapter 4. After training, the HMM parameters are:

$\pi = [0.5 \;\; 0.5], \quad A = \begin{bmatrix} 0.3 & 0.7 \\ 0 & 1 \end{bmatrix}, \quad B = \begin{bmatrix} 0.1 & 0.4 \\ 0.05 & 0.5 \\ 0.6 & 0.1 \\ 0.25 & 0 \end{bmatrix}$

(in B, the rows correspond to the symbols A-D and the columns to the two states). The HMM with the adjusted parameters is shown in Figure 5.5.

Figure 5.5 Hidden Markov Model with the Adjusted Parameters

Once the HMM has been trained on the training set, it can be used to evaluate any given observation sequence. The evaluation criterion is the forward score of a sequence


of observations given the model. The forward score of a given observation sequence is computed as follows:

$\alpha_1(i) = \pi_i \, b_i(o_1), \quad 1 \le i \le N$   (5.1)

$\alpha_{t+1}(j) = \left[ \sum_{i=1}^{N} \alpha_t(i) \, a_{ij} \right] b_j(o_{t+1}), \quad 1 \le t \le T-1, \; 1 \le j \le N$   (5.2)

$P(o_1 o_2 \ldots o_T \mid \lambda) = \sum_{j=1}^{N} \alpha_T(j)$   (5.3)

where N is the number of states (2 in this case) and T is the time index corresponding to a symbol. With the HMM trained on the combination of the data from the three executions, we can evaluate the forward score of each individual task execution. For the first task execution, the observation sequence is ABA, and its forward score is computed as follows:

$\alpha_1(1) = \pi_1 b_1(A) = 0.5 \times 0.1 = 0.05$
$\alpha_1(2) = \pi_2 b_2(A) = 0.5 \times 0.4 = 0.2$
$\alpha_2(1) = [0.05 \times 0.3 + 0.2 \times 0] \times 0.05 = 7.5\mathrm{e}{-4}$
$\alpha_2(2) = [0.05 \times 0.7 + 0.2 \times 1] \times 0.5 = 0.1175$
$\alpha_3(1) = [7.5\mathrm{e}{-4} \times 0.3 + 0.1175 \times 0] \times 0.1 = 2.25\mathrm{e}{-5}$
$\alpha_3(2) = [7.5\mathrm{e}{-4} \times 0.7 + 0.1175 \times 1] \times 0.4 = 0.0472$
$P(O = ABA \mid \lambda) = \alpha_3(1) + \alpha_3(2) = 0.0472$

So the forward score of the observation sequence ABA is 0.0472.


For the second task execution, the observation sequence is CBD. Its forward score is computed in the same way:

$\alpha_1(1) = 0.5 \times 0.6 = 0.3$
$\alpha_1(2) = 0.5 \times 0.1 = 0.05$
$\alpha_2(1) = [0.3 \times 0.3 + 0.05 \times 0] \times 0.05 = 4.5\mathrm{e}{-3}$
$\alpha_2(2) = [0.3 \times 0.7 + 0.05 \times 1] \times 0.5 = 0.13$
$\alpha_3(1) = [4.5\mathrm{e}{-3} \times 0.3 + 0.13 \times 0] \times 0.25 = 3.375\mathrm{e}{-4}$
$\alpha_3(2) = [4.5\mathrm{e}{-3} \times 0.7 + 0.13 \times 1] \times 0 = 0$
$P(O = CBD \mid \lambda) = \alpha_3(1) + \alpha_3(2) = 3.375\mathrm{e}{-4}$

For the third task execution, the observation sequence is BDB. Computed the same way, its forward score is

$P(O = BDB \mid \lambda) = \alpha_3(1) + \alpha_3(2) = 6.844\mathrm{e}{-4}$

Since $P(O = ABA \mid \lambda) = 0.0472 > P(O = BDB \mid \lambda) = 6.844\mathrm{e}{-4} > P(O = CBD \mid \lambda) = 3.375\mathrm{e}{-4}$, it can be concluded that the task execution with the ABA observation represents the task skill more closely than the other two observation sequences. In other words, the task execution whose observation sequence has the highest forward score represents the task skill.

5.3 Experiments in Virtual Environment

5.3.1 Tasks and Experimental Test Bed

To evaluate the validity and effectiveness of the HMM for skill learning and its application to therapy, we designed a haptic interactive simulation test bed (Figure 5.6).


It is composed of a visualization scene and a PhanToM Premium 1.5 [48]. The PhanToM is an impedance-type haptic device that can provide force reflection to the operator when a collision happens. The simulation scene is realized through the API functions of GHOST [48]. The end-effector is simulated as a sphere of 5 mm radius, and the width of the labyrinth is 18 mm. In this experiment, the task is defined as moving the end-effector from the origin (0, 0, 0) out of the labyrinth as quickly and smoothly as possible and with as few collisions as possible. In order to avoid the depth-perception problem, operators are only allowed to move in the X-Y plane, enforced by adding a planar constraint to the haptic device. Bardorfer et al. used such a haptic interface to analyze the upper-limb motion of patients with neurological diseases (ND), but they did not attempt to improve manipulation performance [80].

Figure 5.6 Virtual Environment for the Simulation Test Bed


5.3.2 Skill Learning and Transferring

An HMM is used to model the translation skill of moving along the labyrinth; the learned skill is later used as a virtual therapist in motion training. This task was executed twelve times by a normal subject to produce the training set for the HMM. The translation data of the end-effector are recorded and converted into discrete symbols using the preprocessing approach illustrated in Section 5.2.1, and the discrete symbols of these task executions are used to train the HMM. Once the HMM has been trained, it can be used to evaluate each task execution. The set of symbols that produces the largest forward likelihood P(O|lambda) corresponds to the motion that the normal subject is most likely to execute; in other words, it represents the skill needed for that specific task. We use a 5-state, left-right, two-dimensional HMM for skill learning. The prior matrix pi is thus a 1x5 matrix, and the transition matrix A is a 5x5 matrix with each row representing the transition probabilities from a certain state to the other states. Note that there are two observation matrices B, each of which is 256x5. The pi, A, and B matrices are initialized with uniformly distributed random numbers as usual. Starting with these initial parameters, the HMM is trained on the training set, and the forward algorithm is then used to score each trajectory (Figure 5.7). It can be seen that trajectory No. 7 has the highest probability value and No. 6 the lowest. It is important to note that the best (highest) and worst (lowest) scores do not refer to the quality of the performance, but to how accurately a trajectory represents the skill of doing the task. For example, if we are asked to draw many line segments with the same direction and length, we would likely draw a couple of nearly perfect ones and a couple of very bad ones, but these extreme cases do not represent our line-drawing skill; the lines that we are most likely to draw represent our line-drawing skill.
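A short sketch of this selection step, assuming symbol sequences already produced by the preprocessing of Section 5.2.1 and a forward-scoring routine like the one sketched in Chapter 4; the function and variable names here are illustrative assumptions.

import numpy as np

def select_skill(executions, score):
    # `executions`: list of symbol sequences, one per recorded task execution;
    # `score`: function returning the forward likelihood P(O | lambda) of a sequence
    # under the trained skill HMM. The highest-scoring execution is kept as the
    # virtual-therapist trajectory.
    likelihoods = [score(obs) for obs in executions]
    best = int(np.argmax(likelihoods))
    return best, likelihoods

# Illustrative use with a hypothetical trained model `skill_hmm` and recorded
# symbol sequences `symbol_sequences`:
# best_index, scores = select_skill(symbol_sequences,
#                                   lambda obs: forward_probability(*skill_hmm, obs))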


The trajectory with the highest score represents the translation skill that the subject is most likely to exhibit when doing this task.

Figure 5.7 Forward Scores for all 12 Task Executions (probability versus data index)

5.4 Motion Therapy Experiments

Since the skill of this task has been learned, the trajectory of the learned skill is displayed on the screen, acting as a therapist; during the therapy training session, operators try to follow it as accurately as possible (Figure 5.6). Two subjects, one female with cerebral palsy with right hemiparesis and spasticity and persistent low back pain, the other a 19-year-old male with cerebral palsy and partial paralysis of his upper and lower extremities, executed this task seven times each before and after training. Before data collection, they practiced the movement several times until they felt comfortable with it. Their data, including translation, velocity, and reaction forces, were sampled at 1000 Hz. The evaluation indexes include:


Distance ratio $R_d$: its value reflects the trajectory optimization capability; the smaller, the better, with an ideal value a little greater than 1:

$R_d = \frac{d_{actual}}{d_{skill}}$   (5.4)

where $d_{actual}$ is the actual distance traveled and $d_{skill}$ is the distance traveled by the learned skill.

Time taken to complete the task, $T$.

Number of collisions with the walls, $N_c$:

$C(n) = \begin{cases} 1, & \text{colliding with a wall} \\ 0, & \text{elsewhere} \end{cases}$   (5.5)

$N_c = \sum_j \left( C(j+1) - C(j) = 1 \right)$   (5.6)

Time duration of each collision, $T_i$, which reflects reaction capability:

$T_i = t_{j \,:\, C(j+1) - C(j) = -1} \; - \; t_{j \,:\, C(j+1) - C(j) = 1}$   (5.7)

Impact force of the collisions with the walls, $F_i$:

$F_i = \sqrt{F_{i,x}^2 + F_{i,y}^2}$   (5.8)

where $F_{i,x}$ and $F_{i,y}$ are the impact forces when the end-effector collides with the X-direction wall and the Y-direction wall, respectively.

Tremor magnitude $M_t$ and frequency $F_t$.

For the motion analysis, the operator's collisions with the X-direction wall and the Y-direction wall do not make much difference, so only the magnitude of the impact force is analyzed; the direction of the impact force is not meaningful. Tremor information is extracted by


applying a high-pass filter with a cut-off frequency fc = fmax/10 (fmax being the maximum tremor frequency). The tremor magnitude is available in the time domain, and the tremor frequency can be obtained through the discrete Fourier transform (DFT). The collision forces along the X and Y axes are combined, and the magnitude of the combined force is analyzed. C(n) indicates when collisions occur, Nc is the number of collisions occurring during task execution, and Ti is the time duration of each collision; Nc and Ti are obtained by checking the transitions of C(n) between 0 and 1.

5.4.1 Motion Performance before Therapy Training

The two persons with disabilities performed the task before and after therapy training. Figures 5.8, 5.9, and 5.10 present the performance of subject 1 before training. Figure 5.8 shows an actual trajectory and the skilled trajectory. Figure 5.9 shows the translation tremor along the X and Y axes, including tremor magnitude and frequency. Figure 5.10 presents the collision information, including the impact force and the time duration of each collision.
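A hedged sketch of how these performance measures could be computed from the logged signals (positions, collision indicator, and wall-reaction forces sampled at 1000 Hz). The exact filter design is not given in the dissertation, so the Butterworth high-pass below, the function names, and the pairing of collision edges are assumptions.

import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0                     # sampling rate (Hz)

def distance_ratio(traj, skill_traj):
    # Eq. (5.4): path length of the actual trajectory over that of the learned skill.
    length = lambda p: np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1))
    return length(traj) / length(skill_traj)

def collision_stats(c, f_x, f_y):
    # Eqs. (5.5)-(5.8): collision count, durations, and impact-force magnitude.
    edges = np.diff(c.astype(int))
    starts, ends = np.where(edges == 1)[0], np.where(edges == -1)[0]
    m = min(len(starts), len(ends))
    durations = (ends[:m] - starts[:m]) / FS
    impact = np.sqrt(f_x ** 2 + f_y ** 2)
    return len(starts), durations, impact.max()

def tremor(p, f_max=5.0):
    # Tremor magnitude/frequency: high-pass the position at f_max/10, then take the DFT.
    b, a = butter(2, (f_max / 10.0) / (FS / 2.0), btype="highpass")
    t = filtfilt(b, a, p)
    spectrum = np.abs(np.fft.rfft(t))
    freqs = np.fft.rfftfreq(len(t), d=1.0 / FS)
    return np.abs(t).mean(), freqs[spectrum.argmax()]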


Figure 5.8 Actual Moving Distance is 716.8 mm, Skill Moving Distance is 495.2 mm, and Distance Ratio is 1.44 (actual movement vs. skilled trajectory, translation with tremor, X-Y plane in mm)


Figure 5.9 Tremor Measurements: the X-tremor magnitude mean is 8.4 mm with STD 6.9 mm; the Y-tremor magnitude mean is 9.3 mm with STD 8.8 mm (X and Y translation tremor and their FFTs)


Figure 5.10 Collisions: 15 collisions occurred; the maximum time duration is 5.87 s and the minimum is 0.14 s; the maximum impact force is 1.01 N and the minimum is 0.15 N (collision indicator, time intervals, and reaction forces)


5.4.2 Motion Performance after Therapy Training

After therapy training, the data for each subject were collected. The analysis for subject 1 is presented in Figures 5.11, 5.12, and 5.13.

Figure 5.11 Trajectories after Therapy Training: the actual moving distance is 619.3 mm and the distance ratio is 1.25 (actual movement vs. moving skill, X-Y plane in mm)


Figure 5.12 Translation Tremors after Therapy: the X-axis tremor magnitude mean is 5.27 mm with STD 3.93; the Y-axis tremor magnitude mean is 6.71 with STD 4.41 (X and Y translation tremor and their FFTs)


Figure 5.13 Collisions after Therapy: 6 collisions occurred; the maximum impact force is 0.39 N and the minimum is 0.18 N (collision indicator, time intervals, and reaction forces)


The X-Y plane trajectory presents the movement quality; its smoothness reflects the capability of controlling the end-effector during movement. The tremor plots present the tremor magnitude without considering its direction, since the magnitude is more meaningful than the direction. The tremor frequency was always low, 2-3 Hz, for the two subjects. The impact force occurs when there is a collision, and it is generally related to the smoothness of the trajectory: the smoother the trajectory, the smaller the tremor magnitude. The time duration of each collision indicates the reaction to the collision. From Figures 5.8 and 5.11, it can be seen that the trajectory was improved significantly. Figures 5.10 and 5.13 show the collision information before and after the therapy training, respectively; as can be seen, the number of collisions, the collision durations, and the impact forces all decreased. Although the tremor magnitude was reduced considerably, the tremor frequency remained about the same, due to the fact that the tremor frequency is not observable to the user. Before and after therapy training, seven trials of execution data were collected for each subject; the performance summary is presented in Table 5.1.


Table 5.1 Movement Performance Summary (Mean / Std)

                           Subject 1 (female, cerebral palsy with    Subject 2 (male, 19, cerebral palsy,
                           right hemiparesis and spasticity)         partial paralysis)
                           Before Training    After Training         Before Training    After Training
Length Ratio R             1.68/0.35          1.16/0.27              1.46/0.24           1.12/0.17
Time Taken (s)             25.35/3.78         16.99/2.08             18.03/2.80          12.04/1.58
Collision Numbers          17.57/4.70         10.43/3.87             13.42/3.05          8.77/2.49
X-Tremor Mag. (mm)         10.47/4.86         4.26/2.33              7.77/3.61           5.13/2.03
Y-Tremor Mag. (mm)         10.21/6.72         6.43/2.15              8.42/4.34           5.43/1.93
Tremor Freq. (max)         3.5 Hz             3.4 Hz                 2.8 Hz              2.5 Hz
Max Time Duration (s)      4.87/1.59          2.15/0.86              3.04/1.33           1.96/0.65
Impact Force (max, N)      1.02/0.53          0.71/0.33              0.89/0.35           0.65/0.20


5.5 Summary

In this chapter, an HMM-based approach for learning the labyrinth-moving skill and transferring the learned skill to persons with disabilities was presented. A two-dimensional model is built for the X-Y plane translation. The learned skill is neither the best nor the worst of the numerous task executions, but the one that the operator is most likely to produce, that is, the most natural one. The learned skill was used as a virtual therapist for persons with disabilities, who were asked to follow the virtual therapist as closely as possible. The difference between the subject and the virtual therapist provides visual feedback, which helps the eye-hand coordination control capability. After several sessions of therapy training, operators could control the end-effector better, thereby reducing collisions and making the trajectory smoother.


Chapter 6: Conclusions and Recommendations

6.1. Dissertation Overview

An intelligent teleoperation system using assistance functions was developed to improve task execution efficiency and to decrease the execution time. The approach was guided by the philosophy that the human operator should remain in the control loop of the slave manipulator, thus using human intelligence for the telerobotic system control. A common rehabilitation evaluation task, Box and Blocks, was tested using teleoperation assistance functions. The results showed how the desired motion was kept or sometimes augmented and how the unwanted motion was reduced. Complex telemanipulation tasks were decomposed into general and relatively simple subtasks: following a path, aligning with a target, avoiding an obstacle and stopping. A Hidden Markov Model was used to classify human motion intention into one of the four classes. For the different subtasks, appropriate assistance was applied to enhance the input from the master device. Another rehabilitation robotics application is motion therapy. Using a HMM, a labyrinth movement skill was learned by the robot. The learned skill then acted as a virtual therapist, and two persons with upper-limb disabilities were trained using this approach. The skill-learning based robot therapy and its effectiveness were discussed.

6.2. Virtual Fixture Assistance Based on Motion Intention

In telemanipulation systems, assistance through variable position and velocity mapping or virtual fixtures can improve manipulation capability and dexterity.


This assistance is useful not only for path following, but also for aligning with a target and avoiding obstacles. Conventionally, such assistance is based on the environmental information alone, without knowing the user's motion intention. In this dissertation, the user's motion intention is combined with real-time environmental information to apply the appropriate assistance. If the current task requires following a path, a hard virtual fixture orthogonal to the path is applied. Similarly, if the task is to position on a target, an attractive force field is produced to provide a guide for approaching it. The Hidden Markov Model is effective for motion classification: as long as the training set is sufficiently large, the motion recognition accuracy is close to 100%. The assistance is then selected based on the recognized motion. The experimental results without assistance showed that the operator always had random errors that resulted in difficulty in following a path and positioning on a target. The experimental results with assistance showed that those undesired random errors were removed or reduced. The HMM based assistance is therefore useful for improving performance accuracy and decreasing execution time. In order to improve the recognition accuracy, the Hidden Markov Model can be expanded; as long as the added dimensions are independent, they increase the computational complexity only linearly.
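As a concrete illustration of the recognition step described above, the sketch below scores a quantized observation sequence against one discrete HMM per subtask and selects the most likely intention. The model parameters and the intention labels are placeholders for illustration; they are not the models trained in this work.

```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm for a discrete HMM.
    obs: sequence of quantized observation symbols (ints)
    pi : initial state probabilities (N,)
    A  : state transition matrix (N, N)
    B  : emission matrix (N, M)
    """
    alpha = pi * B[:, obs[0]]
    log_prob = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        log_prob += np.log(s)
        alpha = alpha / s
    return log_prob

def recognize_intention(obs, models):
    """models maps an intention label ('follow_path', 'align_target',
    'avoid_obstacle', 'stop') to its trained (pi, A, B) triple."""
    scores = {name: log_likelihood(obs, *m) for name, m in models.items()}
    return max(scores, key=scores.get)

# The recognized intention then selects the assistance, e.g. a virtual
# fixture orthogonal to the path or an attractive force field.
```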


6.3. Robot Therapy and its Effectiveness

A HMM based approach for labyrinth moving skill learning and transferring the learned skill to persons with upper-limb disabilities was presented. The multidimensional model is built for learning the X-Y plane translation skill. The learned skill was used as a therapist for persons with disabilities, who need to follow the virtual therapist as closely as possible. The difference between the subject and the virtual therapist provides visual feedback that helps the eye-hand coordination control capability. During the training process, the trajectory smoothness did not improve significantly even though the user had fewer collisions and shorter execution times. This could be due to the fact that operators tend to quickly withdraw the ball after a collision in order to follow the continuously updated trajectory. After many repetitions of therapy, operators were able to control the end-effector to avoid collisions and make the trajectory smooth. They displayed some movements to avoid unnecessary body arrangements and postured themselves accordingly. The purpose of therapy is to restore some of the lost functions of persons with disabilities. This robot-aided therapy emphasizes movement control through eye-hand coordination training learned from the performance of normal subjects. This compensation allows persons with disabilities to improve upper-limb coordination, tremor reduction and motion control capabilities.

6.4. General Discussion

Overall, when applying teleoperation assistance, the performance of subjects with disabilities can be enhanced. The results of the various experiments were promising, and indicated that the proposed assistance techniques have real potential in speeding up the execution of a variety of tasks, improving operation accuracy and reducing operator fatigue. The Hidden Markov Model based skill learning offers a new approach to motion therapy. While physical therapy directed by a therapist restores the lost motion through physical exercise, robot therapy supervised by a virtual therapist improves eye-hand coordination by learning from a demonstrator.


6.5. Recommendations

The assistance algorithms were tested using simulation platforms. It is recommended to use a robot manipulator to test a variety of real rehabilitation tasks. These tests could be implemented on the workstation-based teleoperation system, which consists of a PHANToM Premium 1.5 and a PUMA or RRC manipulator, both of which will be available in our laboratory. Although teleoperation assistance provides very valuable help for complex task execution, autonomous execution is recommended for some repetitive tasks requiring accurate fine-tuning movements. For the robot system in our lab, computer vision can be configured to implement visual servoing for target grasping.


References

1. William S. Harwin, Tariq Rahman, and Richard A. Foulds, "A Review of Design Issues in Rehabilitation Robotics With Reference to North American Research," IEEE Transactions on Rehabilitation Engineering, Vol. 3, No. 1, March 1995.
2. R. Allen, A. Karchak, Jr., and E. L. Bontrager, "Design and fabrication of a pair of Rancho anthropomorphic arms," Attending Staff Assoc. Rancho Los Amigos Hospital, Inc., Tech. Rep., 1972.
3. Gelderblom, G. J., Cremers, G., and Soede, M., "Review of Rehabilitation Robotics Application," International Journal of Human-Friendly Welfare Robotic Systems, Vol. 4, No. 1, 2003.
4. Axel Gräser, Christian Martens, "Rehabilitation Robots: Transfer of Development and Research Results to Disabled Users," International Journal of Human-Friendly Welfare Robotic Systems, Vol. 3, No. 1, 2003.
5. W. Seamone and G. Schmeisser, "Early clinical evaluation of a robot arm/worktable system for spinal-cord-injured persons," J. Rehab. Res. Dev., pp. 38-57, Jan. 1985.
6. H. Roesler, H. J. Kuppers, and E. Schmalenbach, "The medical manipulator and its adapted environment: A system for the rehabilitation of severely handicapped," in IRIA Proc. Int. Conf. Telemanipulators for the Physically Handicapped, pp. 73-77, 1978.
7. John L. Dallaway, Robin D. Jackson, and Paul H. A. Timmers, "Rehabilitation Robotics in Europe," IEEE Transactions on Rehabilitation Engineering, Vol. 3, No. 1, March 1995.
8. J. Guittet, H. H. Kwee, N. Quetin, and J. Yclon, "The Spartacus telethesis: Manipulator control and experimentation," in IRIA Proc. Int. Conf. Telemanipulators for the Physically Handicapped, pp. 79-100, 1978.
9. H. H. Kwee, "Spartacus and Manus: Telethesis developments in France and the Netherlands," in Interactive Robotic Aids - One Option for Independent Living, World Rehabilitation Fund, 1986, pp. 7-17.


10. H. H. Kwee, J. J. Duimel, J. J. Smits, A. A. Tuinhof de Moed, and J. A. van Woerden, "The Manus wheelchair-borne manipulator: System review and first results," in IARP Proc. 2nd Wkshp. Medical and Healthcare Robotics, pp. 385-395, 1989.
11. M. Topping, "Handy 1, a robotic aid to independence for severely disabled people," in Proc. 3rd Cambridge Workshop Rehabilitation Robotics, pp. 13-16, 1994.
12. J. L. Dallaway and R. D. Jackson, "RAID - a Vocational Robotic Workstation," in ICORR 92 conference proceedings, 1992.
13. Michael Hillman, Karen Hagan, Sean Hagan, Jill Jepson and Roger Orpwood, "A Wheelchair Mounted Assistive Robot," in International Conference on Rehabilitation Robotics, Stanford, CA, U.S.A.
14. Philippe Hoppenot and Etienne Colle, "Location and Control of a Rehabilitation Mobile Robot by Close Human-Machine Cooperation," IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol. 9, No. 2, June 2001.
15. Christian Martens, Oleg Ivlev and Axel Gräser, "Interactive Controlled Robotic System FRIEND to Assist Disabled People," in 7th International Conference on Rehabilitation Robotics, France, May 2001.
16. H. F. Machiel Van Der Loos, "VA/Stanford Rehabilitation Robotics Research and Development Program: Lessons Learned in the Application of Robotics Technology to the Field of Rehabilitation," IEEE Transactions on Rehabilitation Engineering, Vol. 3, No. 1, March 1995.
17. Vijay Kumar, Tariq Rahman and Venkat Krovi, "Assistive Devices For People With Motor Disabilities," Wiley Encyclopaedia of Electronics Engineering, 1997.
18. G. E. Birch, M. Fengler, R. G. Gosine, K. Schroeder, M. Schroeder and D. L. Johnson, "An assessment methodology and its application to a robotic vocational assistive device," Technology and Disability, 5(2):151-166, 1996.
19. S. J. Sheredos, B. Taylor, C. B. Cobb and E. E. Dann, "Preliminary evaluation of the helping hand electro-mechanical arm," Technology and Disability, 5(2):229-232, 1996.
20. Kazuhiko Kawamura, Sugato Bagchi, Moenes Iskarous, and Magured Bishay, "Intelligent Robotic Systems in Service of the Disabled," IEEE Transactions on Rehabilitation Engineering, Vol. 3, No. 1, March 1995.
21. Jin-Woo Jung, Won-Kyung Song, Heyoung Lee and Jong-Sung Kim, "A Study on the Enhancement of Manipulation Performance of Wheelchair-Mounted Rehabilitation Service Robot," in International Conference on Rehabilitation Robotics, Stanford, CA.


22. Kelly McClenathan and Tariq Rahman, "Power Augmentation in Rehabilitation Robots," in International Conference on Rehabilitation Robotics, Stanford, CA, U.S.A.
23. Christine Wright-Ott, "The GOBOT: A Transitional Powered Mobility Aid For Young Children With Physical Disabilities," in International Conference on Rehabilitation Robotics, Stanford, CA, U.S.A.
24. N. Didi, M. Mokhtari, A. Roby-Brami, "Preprogrammed Gestures for Robotic Manipulators: An Alternative to Speed up Task Execution Using MANUS," in International Conference on Rehabilitation Robotics, Stanford, CA, U.S.A.
25. Richard M. Mahoney, "The Raptor Wheelchair Robot System," in 7th International Conference on Rehabilitation Robotics, France, May 2001.
26. Mike Topping, "Handy 1, A Robotic Aid to Independence for Severely Disabled People," in 7th International Conference on Rehabilitation Robotics, France, May 2001.
27. O. Ait Aider, P. Hoppenot, E. Colle, "Localization by Camera of a Rehabilitation Robot," in 7th International Conference on Rehabilitation Robotics, France, May 2001.
28. Zeungnam Bien, Won-Kyung Song, Dae-Jin Kim, Jeong-Su Han, "Vision-based Control with Emergency Stop through EMG of the Wheelchair-based Rehabilitation Robotic Arm, KARES II," in 7th International Conference on Rehabilitation Robotics, France, May 2001.
29. Steven Edward Everett, "Human-Machine Cooperative Telerobotics Using Uncertain Sensor and Model Data," Ph.D. Dissertation, The University of Tennessee, Knoxville, 1998.
30. P. G. Backes, "Multi-sensor based impedance control for task execution," in Proceedings of the 1992 IEEE International Conference on Robotics and Automation, pages 1245-1250, Nice, France, May 1992.
31. Schuyler, R. Mahoney, "Job Identification and Analysis for Vocational Robotics Applications," Proceedings RESNA 1995.
32. L. Leifer, "Rehabilitation Robots," Robotics Age, pp. 4-15, May/June 1981.


33. T. F. Chan and R. V. Dubey, "Generalized Bilateral Controller for a Teleoperator System with a Six DoF Master and a Seven DoF Slave," Proceedings of the IEEE International Conference on Robotics and Automation, San Diego, California, May 8-13, 1994, pp. 2612-2619.
34. R. V. Dubey, T. F. Chan and S. E. Everett, "Variable Damping Impedance Control of a Bilateral Telerobotic System," IEEE Control Systems Magazine, February 1997.
35. http://www.appliedresource.com/RTD/Products/Raptor/index.htm
36. K. Sato, M. Kimura, and A. Abe, "Intelligent Manipulator System with Nonsymmetric and Redundant Master-Slave," Journal of Robotic Systems, 9(2):281-290, 1992.
37. Luc D. Joly and Claude Andriot, "Imposing motion constraints to a force reflecting telerobot through real-time simulation of a virtual mechanism," in Proceedings of the 1994 IEEE International Conference on Robotics and Automation, San Diego, CA, May 1994, pp. 357-362.
38. Kazuhiro Kosuge, Koji Takeo, and Toshio Fukuda, "Unified approach for teleoperation of virtual and real environment manipulation based on reference dynamics," in Proc. of the 1995 IEEE International Conference on Robotics and Automation, Nagoya, Japan, May 1995.
39. Thomas B. Sheridan, Telerobotics, Automation, and Human Supervisory Control, The MIT Press, London, England, 1992.
40. Gunnar Bolmsjo, Hakan Neveryd, and Hakan Eftring, "Robotics in Rehabilitation," IEEE Transactions on Rehabilitation Engineering, Vol. 3, No. 1, March 1995.
41. Norali Pernalete, "Development Of A Robotic Haptic Interface To Perform Vocational Tasks By People With Disabilities," Ph.D. Dissertation, Department of Electrical Engineering, University of South Florida, December 2001.
42. Koivo, A. J., Houshangi, N., "Real-time vision feedback for servoing robotic manipulator with self-tuning controller," IEEE Transactions on Systems, Man and Cybernetics, Volume 21, Issue 1, Jan.-Feb. 1991, pages 134-142.
43. Jia Li, A. Najmi, R. M. Gray, "Image classification by a two-dimensional hidden Markov model," IEEE Transactions on Signal Processing, Volume 48, Issue 2, Feb. 2000, pages 517-533.


44. C. Stanger, C. Angling, W. Harwin, D. Romilly, "Devices for Assisting Manipulation: A Summary of User Task Priorities," IEEE Transactions on Rehabilitation Engineering, Vol. 2, No. 4, December 1994.
45. Steven E. Everett, Rajiv V. Dubey, Y. Isoda, and C. Dumont, "Vision-Based End-Effector Alignment Assistance for Teleoperation," Proceedings of the IEEE International Conference on Robotics and Automation, Detroit, MI, May 1999, pp. 543-549.
46. Jie Yang, Yangshen Xu, Chiou S. Chen, "Hidden Markov Model Approach to Skill Learning and Its Application to Telerobotics," IEEE Transactions on Robotics and Automation, Volume 10, No. 5, Oct. 1994.
47. B. Hannaford and P. Lee, "Hidden Markov Model Analysis of Force/Torque Information in Telemanipulation," The International Journal of Robotics Research, Oct. 1991, Vol. 10, No. 5, pp. 528-539.
48. SensAble Technologies, http://www.sensable.com/products/phantom.htm
49. Robotics Research Corporation, P.O. Box 206, Amelia, OH 45102, K-Series Robot Arms User's Manual, 1986, Publication UM-100-87.
50. Wentao Yu, Benjamin Fritz, Norali Pernalete, Michael Jurczyk, Rajiv V. Dubey, "Sensors Assisted Telemanipulation for Maximizing Manipulation Capabilities of Persons With Disabilities," Haptics Symposium 2003, Los Angeles, CA, 2003.
51. Kazuhiko Kawamura, Sugato Bagchi, Moenes Iskarous, and Magured Bishay, "Intelligent Robotic Systems in Service of the Disabled," IEEE Transactions on Rehabilitation Engineering, Vol. 3, No. 1, March 1995.
52. S. Hayati and S. T. Venkatarman, "Design and Implementation of a Robot Control System with Traded and Shared Control Capabilities," in Proceedings of the 1989 IEEE International Conference on Robotics & Automation, pages 1310-1315, Scottsdale, AZ, May 1989.
53. N. Pernalete, Wentao Yu, R. V. Dubey, W. A. Moreno, "Development of an Intelligent Mapping Based Telerobotic Manipulation System To Assist Persons With Disabilities," in Proceedings of the 2002 IEEE International Conference on Robotics & Automation, Washington, DC, U.S.A., May 2002.
54. J. Yang, Y. Xu and C. S. Chen, "Hidden Markov Model Approach to Skill Learning and its Application to Telerobotics," IEEE Trans. on Robotics and Automation, vol. 10, no. 5, pp. 621-631, 1994.


55. A. Bettini, S. Lang, A. Okamura and G. Hager, "Vision Assisted Control for Manipulation Using Virtual Fixtures," IEEE/RSJ International Conference on Intelligent Robots and Systems, 2001, pp. 1171-1176.
56. Gregory D. Hager, "A Modular System for Robust Positioning Using Feedback from Stereo Vision," IEEE Transactions on Robotics and Automation, vol. 13, No. 4, August 1997.
57. Young S. Park, Hyosig Kang, Tomas F. Ewing, Eric L. Faulring, J. Edward Colgate, Michael A. Peshkin, "Enhanced Teleoperation for D & D," in the IEEE International Conference on Robotics and Automation, New Orleans, 2004.
58. Jiang Wang, William J. Wilson, "3D Relative Position And Orientation Estimation Using Kalman Filter For Robot Control," in Proceedings of the 1992 IEEE International Conference on Robotics and Automation, Nice, France, May 1992.
59. Seth Hutchinson, Gregory D. Hager and Peter I. Corke, "A Tutorial on Visual Servo Control," IEEE Transactions on Robotics and Automation, vol. 12, No. 5, October 1996.
60. Bernard Espiau, Francois Chaumette, and Patrick Rives, "A New Approach to Visual Servoing in Robotics," IEEE Transactions on Robotics and Automation, vol. 8, No. 3, June 1992.
61. P. Marayong, A. Bettini and A. Okamura, "Effect of Virtual Fixture Compliance on Human-Machine Cooperative Manipulation," in IEEE/RSJ proceedings, Lausanne, Switzerland, Oct. 2002.
62. Jie Yang, Yangshen Xu, Chiou S. Chen, "Human Action Learning via Hidden Markov Model," IEEE Transactions on Systems, Man, And Cybernetics - Part A: Systems and Humans, Volume 27, No. 1, January 1997.
63. L. R. Rabiner, "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition," Proceedings of the IEEE, Volume 77, Issue 2, Feb. 1989.
64. Z. Stanisic, S. Payandeh, and E. Jackson, "Virtual fixture as an aid for teleoperation," in 9th Canadian Aeronautics and Space Inst. Conference, 1996.
65. Christophe Collewet, Francois Chaumette, and Philippe Loisel, "Image-based visual servoing on planar objects of unknown shape," in Proceedings of IEEE International Conference on Robotics and Automation, Seoul, Korea, May 2001.


66. W. Yu, N. Pernalete, R. V. Dubey, "Telemanipulation Enhancement through User's Motion Intention Recognition and Virtual Fixture," submitted to IEEE International Conference on Intelligent Robots and Systems, 2004.
67. N. Pernalete, W. Yu, R. Dubey, "Augmentation of Manipulation Capabilities of Persons with Disabilities Using Scaled Telemanipulation," IEEE/RSJ International Conference on Intelligent Robots and Systems, Lausanne, Switzerland, October 2002.
68. Rosenberg, L. B., "Virtual fixtures: Perceptual tools for telerobotic manipulation," Virtual Reality Annual International Symposium, IEEE, 18-22 Sep. 1993, pages 76-82.
69. Payandeh, S., Stanisic, Z., "On application of virtual fixtures as an aid for telemanipulation and training," Haptic Interfaces for Virtual Environment and Teleoperator Systems, 2002, 10th Symposium.
70. Pepper, R. L. and Kaomea, P. K., "Research Issues in Teleoperator Systems," 28th Annual Human Factors Society Meeting, San Antonio, TX, 1984.
71. Bolmsjo, G., Neveryd, H., Eftring, H., "Robotics in rehabilitation," IEEE Transactions on Rehabilitation Engineering, Volume 3, Issue 1, Mar. 1995, pages 77-83.
72. R. M. Gray, "Vector quantization," IEEE ASSP Mag., vol. 1, No. 2, pp. 4-29, 1984.
73. N. Turro, O. Khatib, E. Coste-Maniere, "Haptically Augmented Teleoperation," International Conference on Robotics and Automation, Seoul, Korea, May 2001.
74. Roger Y. Tsai, Reimar K. Lenz, "A New Technique for Fully Autonomous and Efficient 3D Robotics Hand/Eye Calibration," IEEE Transactions on Robotics and Automation, vol. 5, No. 3, June 1989.
75. Y. Shirai, H. Inoue, "Guiding a robot by visual feedback in assembly tasks," Pattern Recognition, vol. 5, pp. 99-108, 1973.
76. G. Puskorius and I. Feldkamp, "Calibration of robot vision," in International Conference on Robotics and Automation, Raleigh, NC, 1987.
77. http://www.mvtec.com/halcon/ - The Software Solution for Machine Vision Applications.
78. C. Butefisch, H. Hummelsheim, P. Denzler, and K. Mauritz, "Repetitive training of isolated movements improves the outcome of motor rehabilitation of the centrally paretic hand," Journal of the Neurological Sciences, vol. 130, pp. 59-68, 1995.


79. Myer Kutz, Biomedical Engineers' Handbook, McGraw-Hill, 2002.
80. Ale Bardorfer, Marko Munih, Anton Zupan, Alenka Primoic, "Upper Limb Motion Analysis Using Haptic Interface," IEEE/ASME Transactions on Mechatronics, Vol. 6, No. 3, September 2001.
81. Allen, P. K., Timcenko, A., Yoshimi, B., Michelman, P., "Trajectory filtering and prediction for automated tracking and grasping of a moving object," IEEE International Conference on Robotics and Automation, May 12-14, 1992.
82. Hager, G. D., Grunwald, G., Hirzinger, G., "Feature-based visual servoing and its application to telerobotics," IEEE/RSJ International Conference, Sept. 1994.
83. Robotics Research R2 Controller Manual V1.4, http://www.robotics-research.com
84. Brian P. DeJong, J. Edward Colgate, Michael A. Peshkin, "Improving Teleoperation: Reducing Mental Rotations and Translations," IEEE International Conference on Robotics and Automation, April 26 - May 1, 2004.
85. B. Volpe, H. Krebs, N. Hogan, L. Edelstein OTR, C. Diels, and M. Aisen, "A novel approach to stroke rehabilitation: robot-aided sensory motor stimulation," Neurology, vol. 54, pp. 1938-44, 2000.
86. M. L. Aisen, H. I. Krebs, N. Hogan, F. McDowell, and B. Volpe, "The effect of robot-assisted therapy and rehabilitative training on motor recovery following stroke," Arch. Neurol., vol. 54, pp. 443-446, 1997.
87. H. I. Krebs, N. Hogan, B. T. Volpe, M. L. Aisen, L. Edelstein, and C. Diels, "Robot-Aided Neuro-Rehabilitation in Stroke: Three-Year Follow-Up," in International Conference on Rehabilitation Robotics, Stanford, CA, U.S.A.
88. H. I. Krebs, B. T. Volpe, B. Rohrer, M. Ferraro, S. Fasoli, L. Edelstein, and N. Hogan, "Robot-Aided Neuro-Rehabilitation in Stroke: Interim Results on the Follow-Up of 76 Patients and on Movement Performance Indices," in 7th International Conference on Rehabilitation Robotics, France, May 2001.
88. H. I. Krebs, N. Hogan, M. L. Aisen, and B. T. Volpe, "Robot-aided neurorehabilitation," IEEE Trans. Rehab. Eng., vol. 6, pp. 75-87, 1998.
89. D. J. Reinkensmeyer, L. E. Kahn, M. Averbuch, A. N. McKenna-Cole, B. D. Schmit, and W. Z. Rymer, "Understanding and treating arm movement impairment after chronic brain injury: Progress with the ARM Guide," Journal of Rehabilitation Research and Development, vol. 37, pp. 653-662, 2000.


90. Leonard E. Kahn, Michele Averbuch, W. Zev Rymer, David J. Reinkensmeyer, "Comparison of Robot-Assisted Reaching in Promoting Recovery From Chronic Stroke," in 7th International Conference on Rehabilitation Robotics, France, May 2001.
91. D. J. Reinkensmeyer, J. P. A. Dewald, and W. Z. Rymer, "Guidance-based quantification of arm impairment following brain injury: A pilot study," IEEE Transactions on Rehabilitation Engineering, vol. 7, pp. 1-11, 1999.


Appendices


Appendix A: System Test Bed and Experiment Design

This appendix presents the system test bed at the Rehabilitation Robotics laboratory that was used by this project. The hardware and software used in the project are introduced.

A.1. Introduction

The previously outlined concept was implemented on the hardware and software in this laboratory. This appendix describes the hardware used to test the new assistance strategy and the software used in the test bed.

A.2. Hardware

During the course of this project, it was necessary to reconfigure the previously constructed telerobotic system used by students at the University of Tennessee at Knoxville [29]. The Kraft Master Hand Controller has been replaced by a PHANTOM Premium 1.5. The currently used hardware and the corresponding schematic are described in this section.

A.2.1 Robotics Research Corporation Manipulator

The Rehabilitation Robotics and Telemanipulation Laboratory in the Mechanical Engineering Department at the University of South Florida uses a seven-degree-of-freedom robot manipulator from Robotics Research Corporation (RRC), model K-2107, as the remote manipulator. The manipulator has seven revolute joints, boasting a redundant joint for obstacle avoidance. Joints 1, 3, 5, and 7 are roll-type joints, while 2, 4, and 6 are wrist-type joints. The total length of the arm when all the joints are positioned forward, such as in Figure A.1,


reaches 2.1 meters, about seven feet. Figure A.1 also shows the schematic of the robot manipulator's seven joints, including the location of each joint and their respective travel limits. The travel limits are displayed in Table A.1. The motions of the seven revolute joints and the end-effector are displayed in Figure A.2. Figure A.3 shows a picture of the complete telerobotic system, including the actual mounting of the robot manipulator on the horizontal plane, which is not reflected in the previous figure.

Figure A.1 RRC Manipulator Joints and Limits


Table A.1 Joint Limits for the RRC Manipulator

Joint Number    Lower Limit    Upper Limit
1               -180°          +180°
2               -45°           +135°
3               -180°          +180°
4               -180°          0°
5               -360°          +360°
6               -180°          0°
7               -1080°         +1080°

Figure A.2 RRC Manipulator

The manipulator uses a PC-based controller. The controller uses inputs from the computer's graphical user interface (GUI) or the teach pendant as the reference position for each of the seven joints. From these positions, the inverse kinematics is calculated, and seven joint commands are determined and sent to the low-level controller. The robot controller is capable of position, velocity, and torque control of the motors for each of the seven joints to maintain the appropriate joint angles of the manipulator.
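Because every joint command sent to the low-level controller must respect the travel limits of Table A.1, a supervisory layer can saturate commanded angles before transmission. The following sketch is only an illustrative safety check (the function name is hypothetical and the limits are those listed above); it is not part of the RRC controller software.

```python
# Travel limits from Table A.1, in degrees (lower, upper) per joint.
JOINT_LIMITS = [(-180, 180), (-45, 135), (-180, 180), (-180, 0),
                (-360, 360), (-180, 0), (-1080, 1080)]

def clamp_to_limits(joint_cmd_deg):
    """Clamp a 7-element joint command to the manipulator travel limits."""
    clamped = []
    for angle, (lo, hi) in zip(joint_cmd_deg, JOINT_LIMITS):
        clamped.append(min(max(angle, lo), hi))
    return clamped

# Example: a command that exceeds joint 2's upper limit is saturated.
print(clamp_to_limits([10, 150, 0, -20, 0, -90, 30]))
# -> [10, 135, 0, -20, 0, -90, 30]
```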


Figure A.3 RRC Manipulator with Sensors and End-Effector

A.2.2 PHANTOM Premium 1.5

Figure A.4 PHANTOM Premium 1.5


Developed by SensAble Technologies [48], the PHANTOM device represents a revolution in human-computer interface technology. Prior to its invention, computer users only had the capability to interact through the sense of sight and, more recently, sound. The sense of touch, the most important sense in many tasks, had been conspicuously absent. The PHANTOM device changes all of this. Just as the monitor enables users to see computer-generated images, and audio speakers allow them to hear synthesized sounds, the PHANTOM device makes it possible for users to touch and manipulate virtual objects. The PHANTOM haptic interface is distinguished from other touch interfaces by what it is not: it is not a bulky exoskeleton device, a buzzing tactile stimulator nor a vibrating joystick. PHANTOM application areas include medical and surgical simulation, geophysics and nanomanipulation. The device used in this project is a Premium 1.5, whose specifications are as follows:

Table A.2 PHANTOM Premium 1.5 Specifications

Workspace                     7.5 x 10.5 x 15 inches / 19.5 x 27 x 37.5 cm
Range of motion               Lower arm movement pivoting at elbow
Nominal position resolution   860 dpi / 0.03 mm
Back-drive friction           0.15 oz / 0.04 N
Maximum exertable force       1.9 lbf / 8.5 N
Continuous exertable force    0.3 lbf / 1.4 N
Stiffness                     20 lbs/in / 3.5 N/mm
Inertia                       < 0.17 lbm / < 75 g
Footprint                     10 x 13 inches / 25 x 33 cm
Force feedback                x, y, z (3 DOF)
Position sensing              x, y, z translation and rotation (6 DOF optional)
Interface                     Via parallel port
Supported platforms           Intel-based PCs


A.3. Software

Several independently running programs on various computers make up the software which acts to simulate telemanipulation and control this telerobotics system. The code includes that supplied by the RRC manipulator manufacturer, purchased general-purpose software, and code written in the lab.

A.3.1 R2 Controller Program

The R2 controller is developed on the basis of a real-time motion controller, supporting virtually any robotic mechanism with minimum software changes. It is completely configurable, through the use of text configuration files, with respect to the manipulator and control hardware [83]. The R2 controller provides a server-client TCP/IP protocol interface, which indirectly utilizes the Dynamic Host Configuration Protocol (DHCP) service and the Windows Internet Name Service (WINS) for dynamic mapping of network names and addresses. A third-party application can interface to the R2 server and the R2 real-time controller via the R2 Server API server-client protocol. All the motion controller commands are supported in the R2 Server, so the manipulator can be directed from a client remotely via an Ethernet communication or an inter-process communication protocol. This API decouples the higher-level control development from the lower-level motion controller.

A.3.2 HALCON Computer Vision Software

HALCON is commercial software for machine vision applications, which has a flexible architecture for rapid development of image analysis and machine vision


applications. HALCON provides a library of more than 1100 image processing operators with outstanding performance for blob analysis, morphology, pattern matching, metrology, 3D calibration, and binocular stereo, to name just a few [77]. For example, if we need to extract image edges, we can choose the Sobel or Canny edge detector to do that. HALCON also supports most of the currently used frame grabbers; we can simply call the open_framegrabber and grab_image functions to get a real-time image. Components in HALCON are independent objects in the C++ object and VB modules which can be used for application development. The image acquisition and processing program can be developed in the integrated development environment (shown in Figure A.5). Usually, however, in order to implement some complex computation, the program edited in HALCON operators is converted into C++ or VB, in which the user's algorithm can be implemented easily. In this project, the image processing program and the data communication are developed using VC++.

Figure A.5 Integrated Development Environment of HALCON
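HALCON code itself is not reproduced here; purely to illustrate the kind of edge-extraction step mentioned above, the sketch below uses the widely available OpenCV library, which is not the toolchain used in this project, and a placeholder image file name.

```python
import cv2

# Grab one frame (here, a placeholder image file stands in for a frame
# from a frame grabber) and extract edges with the Canny detector.
image = cv2.imread("workspace_view.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(image, 50, 150)

# The resulting edge map could then feed a shape-matching step that
# reports the target's image position (u, v), orientation and scale.
cv2.imwrite("workspace_edges.png", edges)
```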


A.3.3 Telerobot Control Interface

This is the main control program implementing the telemanipulation system. It is a client of the R2 controller's TCP/IP server-client architecture via Ethernet communication. It is developed in VC++ to get the Cartesian 3D position and velocity of the master input device, the PHANTOM Premium 1.5. Two different operation modes are available: one is position mapping; the other is velocity mapping, working like a 3D joystick. Also, for the visual servo controller, this program gets the 3D pose of the target and sends the corresponding visual servoing velocity commands to the R2 controller.

Figure A.6 Telemanipulation Interface
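The control interface exchanges data with the R2 server over an ordinary TCP connection. The sketch below shows only the generic socket-client pattern; the host name, port and message format are invented placeholders and do not reflect the actual R2 Server API protocol documented in [83].

```python
import socket

def send_velocity_command(vx, vy, vz, host="r2-controller", port=5000):
    """Send one Cartesian velocity command to a motion-control server.
    The textual message format used here is purely illustrative."""
    message = f"VEL {vx:.4f} {vy:.4f} {vz:.4f}\n".encode("ascii")
    with socket.create_connection((host, port), timeout=1.0) as conn:
        conn.sendall(message)
        reply = conn.recv(256)  # e.g. an acknowledgement from the server
    return reply
```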


A.3.4 Teleoperation System Architecture

A.4 RRC GUI

The graphical user interface (GUI) is provided by RRC. The RRC GUI includes jog control, program control, position feedback, client management and file management. This section describes the features of the RRC GUI. Figure A.8 illustrates the different windows, in a custom arrangement. The main window is shown in Figure A.9.

Figure A.7 Teleoperation System Architecture


Figure A.8 RRC Graphical User Interface

Figure A.9 RRC GUI Main Window


A.4.1. Safe Operating Instructions

As with any machine, a list of guidelines and instructions describes how to safely operate the robot and avoid causing injuries to humans, the robot, or the environment. Upon integrating the many components of the robot controller interface, a list of instructions was developed for the operation of the RRC manipulator. Not only do these instructions provide details for future users, they also point out the many features of the RRC GUI. There are three different modes in which to operate the robot: simulation mode, robot mode, and PHANToM client mode, explained in the following sections.

A.4.1.1. Simulation Mode

Instructions were developed for safe operation of the simulation of the telerobotic system. This mode has all the capabilities of the system without sending any commands to the RT Servo Controller. The following is a list of step-by-step instructions to safely operate the robot in simulation.

1. Flip the power switch on the back of the controller box to the "On" position.
2. Press the green controller-on button to turn on the controller. (Press controller-off to turn off.) See Figure A.10.
3. To operate the robot in simulation, make sure the main.cfg file has the simulation turned on; do this by the following steps.
4. Open the file to edit: \config\main.cfg (right-click on the icon).
5. On the second line, the simulation statement must read: Simulation = (On).
6. When the simulation is turned on, double-click on the R2server.exe icon on the desktop. See Figure A.11.


7. Once the message says Servo Initialized for Type 2 upgrade, double-click on the R2 GUI.exe icon on the desktop. See Figure A.11.
8. Click the position feedback on the R2controller window to see the position of the seven joint angles and the global Cartesian coordinates of the robot.
9. To see the visual simulation, double-click on the SolidWorks file on the desktop of the PHANToM computer called: 1207iFA.SLDASM

Figure A.10 Controller Buttons

10. Click on RRC Simulation / Feedback Simulation, and then click connect. The robot should follow the same configuration as the robot position feedback window on the controller computer.
11. There are three different coordinate systems in which to jog (move) the robot: joint space, hand space, and linear space. Choose linear for most applications.
12. The teach pendant allows for jogging as well. It works in conjunction with the jog control buttons on the screen.
13. To quit, first close all windows on the controller computer, and then terminate the R2.RTA process by clicking on the RT Process Manager (see Figure A.11) and clicking local. Find the line with R2.RTA, and click: Kill Process.


Figure A.11 Desktop Icons on Robot Controller Computer

A.4.1.2. Robot Mode

Instructions were developed for safe operation of the telerobotic system where all commands are sent to the RT Servo Controller. Some instructions are similar, so those steps are not repeated. The necessary instructions are as follows.

1. To operate the robot, turn the simulation off by changing the main.cfg configuration file. The icon is on the desktop, Figure A.11.
2. Open the file to edit: D:\config\main.cfg.
3. On the second line, the simulation statement must read: "Simulation = (Off)."
4. Follow the same instructions as for when the simulation is turned on.
5. Once the GUI is activated, the Enable Arm window will appear. Click the "Enable Arm" button, and then the computer will count for 20 seconds.
6. Upon being aware of the robot and its location, press the green machine start button, see Figure A.10. If this is not done before the computer counts to 20 seconds, the Machine Start button will not activate the robot, and step 5 will need to be repeated. This is incorporated as a safety mechanism. The red e-stop button must be attended whenever the robot is enabled.


7. Now the robot is enabled, and the homing process can begin.
8. The teach pendant will show the seven joints. Move each joint separately to match the joint angles for the home position in Table A.1. Once a joint has reached its home position, the computer will beep.
9. Once all seven joints are in the home position, press and hold the red CNL button on the teach pendant until the homing window disappears. The robot will move a little to settle into the appropriate home position. Then start using the GUI functionality.

A.4.2. Jog Control

Jog control allows the user to manipulate the robot incrementally. Since the simulation acts as a client to the server, the jog control feature controls the simulation as well. Jog control, shown in Figure A.12, offers three different types of coordinate frames in which to move the robot: linear space, joint space, and hand space. In linear movement, the user can activate the jog buttons and give commands to move along any axis of the Cartesian coordinate system, X, Y, and Z, and also adjust the orientation: roll, pitch and yaw. The GUI takes the commanded position and orientation in Cartesian coordinates and calculates the inverse kinematics to determine the low-level commands that control the joint angles. Since there are six commands corresponding to the six degrees of freedom defining position and orientation, the seventh command is called orbit. The orbit command changes the joint angles of the manipulator while leaving the position and orientation of the end-effector unchanged. The speed of the jogging


of the robot in linear space can be adjusted to run fast or slow, while the recommendation remains to operate the robot at a safe velocity.

Figure A.12 Jog Control Window and Position Feedback Window

Another coordinate system is called joint space. Each of the seven jog buttons corresponds to its same-numbered joint. For example, when the operator presses the +1 button, joint number one will change its angle in the positive direction, according to the velocity set by the user. During the homing operation, joint space is used to adjust the joints individually to achieve the home position of the robot. This feature is advantageous, especially when the configuration of the robot needs to be adjusted slightly.


The last coordinate system is called hand space. This coordinate system changes with the orientation of the end-effector. The hand X, Y, and Z axes are fixed to each of the three orientation axes: roll, pitch, and yaw. This is the coordinate system used in teleoperation.

A.4.3. Position Feedback

Position feedback is offered as another window in the GUI environment, shown in Figure A.12. This window simply displays the current position of the robot. The values of each of the seven joints are displayed, as well as the corresponding position and orientation in the base coordinate frame. The current Cartesian coordinates are calculated from the manipulator's kinematics, according to its joint angles. These joint angles are received from the feedback of the manipulator: resolver boards receive the seven joint angles and send the exact feedback position to be displayed in the feedback window. This information is helpful to the user, especially when operating the robot under simulation.

A.4.4. Teach Pendant

The teach pendant, Figure A.12, is a hand-held control device for operating the robot manipulator. The teach pendant is hooked up to the computer and provides real-time control of the robot under the jog control mode. Once Enable jog buttons is activated in the RRC GUI jog control window, Figure A.12, the teach pendant buttons are activated and coincide with the commands from the GUI on the computer screen. The


teach pendant allows the user to adjust the speed of the robot and change the coordinate system, as well as move the robot. Since the teach pendant operates in conjunction with the jog control buttons, fourteen buttons for the direct operation of the robot, depending on the coordinate system, are present on the hand-held teach pendant. The advantage of using the teach pendant over the RRC GUI's jog control is that the operator can be away from the computer, observing the robot's movements without being obstructed by the computer monitor.

Figure A.13 Teach Pendant for RRC Manipulator

A.4.5. Program Control

Most robot manipulator control programs have the ability to program the robot, through a graphical user interface or a teach pendant, to perform a series of movements to predetermined points. This is automating the robot's motions. The GUI for the RRC manipulator has this function, called Move Data/Record. The robot can be programmed,


once moved to a location, to record a point in space. It saves the joint angle configuration corresponding to the appropriate x, y, z, and the rotation in x, y, and z. A series of these recorded points can be programmed and executed to perform a certain automated task. For example, in the case of teleoperation, the teleoperator would like to change the tool on the end of the robot. This would require the teleoperator to position and align the robot to exchange tools. It is advantageous to have this process automated before the teleoperator begins the tasks, so that in the event of a necessary tool change, the operator needs only to select which tool is desired for the next task, and the robot can switch tools under the supervision of the teleoperator, instead of changing tools in teleoperation. The operator must define the points that determine the automated path. The objective is to use the teach pendant or the jog control of the RRC GUI to move the manipulator to the desired points and record the points by clicking "Record" on the move window, see Figure A.14. From the RRC GUI main window of commands, Figure A.9, check the box for move / data record, and the window shown in Figure A.14 will appear. Click create path, and the program requests a path name. Recorded points can now be added to the path. Click on the execution and the program status check box to reveal the path name and the recorded points, and to monitor the progress of the path execution, see Figure A.16.


Figure A.14 Main Window for Move Data / Record

Figure A.15 File Management

Paths or a group of paths can be saved using the file management window, Figure A.16. For example, in Figure A.16 the path name is called "mountain." This path can be saved in a file and opened again at another time. Through program control, repetitive paths can be automated with a high degree of precision.


Figure A.16 Execution and Status Windows

A.4.6. Client-Server Interface

In the RRC GUI, the client management window displays the list of connected clients. Clients can be either active or passive. Every client is passive until made active by clicking the activate button in the client management window, Figure A.17. Only one client can be active at a time. Once a client is activated, that client can send commands to the RT servo controller and receive position feedback data. The R2 server ignores the commands from a passive client. However, a passive client can request feedback data


from the server, and will receive the most recent position feedback data. The active or master client has control over the robot, whether it is in simulation or robot mode.

Figure A.17 Client Management Window on Robot Computer


Appendix B: Visual Servoing for Object Grasping

This appendix presents the strategy of enhancing teleoperation through tele-autonomy. The basic theory and the application of robot vision are also presented.

B.1. Configuration of Vision System

Figure B.1 Configuration of Vision System

In the previous research of this lab, the camera was mounted parallel with the end-effector coordinate system. In that case, only the translation along the Z axis was taken into account for object pose determination. It was easy to get the relative translation between the two coordinate systems by coarse measurement, without doing eye-hand calibration. But the disadvantage of that configuration is that the camera could not see the object when the end-effector was approaching it, thus limiting the usefulness of the


vision system. In this project, in order to improve the flexibility of task execution and keep the object in the camera view at all times, the camera is mounted to the end-effector with some translation and rotation (see Figure B.1). In order for the manipulator to use a camera to estimate the 3D pose of an object relative to the end-effector, calibration of the vision system, including camera calibration and eye-hand calibration, is essential.

B.2. 3D Pose Determination of Target with Respect to End-Effector

Generally, in order to control a robot using information provided by a computer vision system, it is necessary to understand the geometric aspects of the imaging process. Each camera contains a lens that forms a 2D projection of the scene on the image plane where the camera is located. This projection causes direct depth information to be lost, so that each point on the image plane corresponds to a ray in 3D space. Therefore, some additional information is needed to determine the 3D coordinates corresponding to an image point. This information may come from multiple cameras, multiple views with a single camera, or the knowledge of the geometric relationship between several feature points on the target. In this project, the results of the shape-based matching, namely the position coordinates $(u, v)$, the orientation $\theta$ and the scale factor $s$, enable us to determine the 3D pose with 4 unknowns. According to perspective projection, a point $^{c}P = [x, y, z]^{T}$, whose coordinates are expressed with respect to the camera coordinate system $C$, is projected onto the image plane with coordinates $p = [u, v]^{T}$ given by

$$\begin{bmatrix} u \\ v \end{bmatrix} = \frac{f}{z}\begin{bmatrix} x \\ y \end{bmatrix} \qquad (B.1)$$
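A direct numerical reading of equation (B.1) is given below; the focal length and the sample point are arbitrary illustration values.

```python
import numpy as np

def project(point_cam, f):
    """Perspective projection (B.1): map a 3-D point expressed in the
    camera frame to image coordinates (u, v)."""
    x, y, z = point_cam
    return np.array([f * x / z, f * y / z])

# Example: a point 0.5 m in front of the camera, slightly off-axis.
print(project(np.array([0.02, -0.01, 0.5]), f=0.008))
```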


Figure B.2 Coordinate System for Perspective Projection

We assign the camera coordinate system with the x and y axes forming a basis for the image plane, the z axis perpendicular to the image plane (along the optical axis), and the origin located at the distance $\lambda$ (or $f$) behind the image plane, where $f$ is the focal length of the camera lens. This is illustrated in Figure B.2. We assign the tool coordinate system at the origin of the ROI (Region of Interest) of the object, so the coordinates of the origin are

$$O_{o} = (0,\ 0,\ 0,\ 0,\ 0,\ 0)$$

Let us assume there is a line segment located between $O$ and $P(m, 0, 0)$ in the tool coordinates:

$$\vec{OP}_{o} = (m,\ 0,\ 0)^{T} \qquad (B.2)$$

When creating the shape model, it was assumed that the tool coordinate system is aligned with the end-effector except for the translation along the Z axis (see Figure B.2). So the coordinate of the tool origin of the ROI is $(0, 0, Z_{0})$.


Figure B.3 Coordinate System Assignment for Vision System

When capturing dynamic images, the predefined line segment $OP$ is moved to the following coordinates with respect to the end-effector system:

$$\vec{OP}_{e} = \begin{bmatrix} X + m\cos\alpha \\ Y + m\sin\alpha \\ Z \end{bmatrix} \qquad (B.3)$$


As we have obtained the eye-hand transformation $^{c}H_{e}$, the coordinates of the line segment $OP$ can be transformed into the camera coordinate system as follows:

$$\vec{OP}_{c} = {}^{c}R_{e}\,\vec{OP}_{e} + {}^{c}T_{e}
= \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix}
\begin{bmatrix} X + m\cos\alpha \\ Y + m\sin\alpha \\ Z \end{bmatrix}
+ \begin{bmatrix} t_{x} \\ t_{y} \\ t_{z} \end{bmatrix}
= \begin{bmatrix}
r_{11}X + r_{12}Y + r_{13}Z + t_{x} + (r_{11}\cos\alpha + r_{12}\sin\alpha)m \\
r_{21}X + r_{22}Y + r_{23}Z + t_{y} + (r_{21}\cos\alpha + r_{22}\sin\alpha)m \\
r_{31}X + r_{32}Y + r_{33}Z + t_{z} + (r_{31}\cos\alpha + r_{32}\sin\alpha)m
\end{bmatrix} \qquad (B.4)$$

where $[{}^{c}R_{e},\ {}^{c}T_{e}]$ is the eye-hand transformation matrix. In order to clearly express the transformation relationship, we might as well use symbols for the transformation matrix elements instead of numbers. In equation (B.4), if we let $m = 0$, $\alpha = 0$, we get the coordinates of the tool system origin with respect to the camera system:

$$O_{c} = \begin{bmatrix} r_{11}X + r_{12}Y + r_{13}Z + t_{x} \\ r_{21}X + r_{22}Y + r_{23}Z + t_{y} \\ r_{31}X + r_{32}Y + r_{33}Z + t_{z} \end{bmatrix} \qquad (B.5)$$

The perspective projections of the points $O_{c}$ and $P_{c}$ are as follows:

$$u_{o}^{1} = f\,\frac{r_{11}X + r_{12}Y + r_{13}Z + t_{x}}{r_{31}X + r_{32}Y + r_{33}Z + t_{z}}, \qquad
v_{o}^{1} = f\,\frac{r_{21}X + r_{22}Y + r_{23}Z + t_{y}}{r_{31}X + r_{32}Y + r_{33}Z + t_{z}} \qquad (B.6)$$


$$u_{p}^{1} = f\,\frac{r_{11}X + r_{12}Y + r_{13}Z + t_{x} + (r_{11}\cos\alpha + r_{12}\sin\alpha)m}{r_{31}X + r_{32}Y + r_{33}Z + t_{z} + (r_{31}\cos\alpha + r_{32}\sin\alpha)m}, \qquad
v_{p}^{1} = f\,\frac{r_{21}X + r_{22}Y + r_{23}Z + t_{y} + (r_{21}\cos\alpha + r_{22}\sin\alpha)m}{r_{31}X + r_{32}Y + r_{33}Z + t_{z} + (r_{31}\cos\alpha + r_{32}\sin\alpha)m} \qquad (B.7)$$

The perspective projection of the line segment $OP$ in the image plane is also a line segment $op$. In equations (B.6) and (B.7), if we let $X = Y = 0$, $Z = Z_{0}$ and $\alpha = 0$, we obtain the perspective projection of the line segment $OP$ at the shape-model creation stage (Figure B.4):

$$u_{o}^{0} = f\,\frac{r_{13}Z_{0} + t_{x}}{r_{33}Z_{0} + t_{z}}, \qquad
v_{o}^{0} = f\,\frac{r_{23}Z_{0} + t_{y}}{r_{33}Z_{0} + t_{z}} \qquad (B.8)$$

$$u_{p}^{0} = f\,\frac{r_{13}Z_{0} + t_{x} + r_{11}m}{r_{33}Z_{0} + t_{z} + r_{31}m}, \qquad
v_{p}^{0} = f\,\frac{r_{23}Z_{0} + t_{y} + r_{21}m}{r_{33}Z_{0} + t_{z} + r_{31}m} \qquad (B.9)$$
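Equations (B.4)-(B.6) chain the eye-hand transformation with the projection of (B.1). A compact numerical sketch of that chain is shown below; the rotation, translation and focal-length values are placeholders rather than the calibrated parameters of the actual system.

```python
import numpy as np

def project_roi_origin(X, Y, Z, R_ce, t_ce, f):
    """Implement (B.5)-(B.6): transform the tool-frame origin (X, Y, Z)
    into the camera frame with the eye-hand calibration (R_ce, t_ce),
    then project it to image coordinates (u_o, v_o)."""
    p_cam = R_ce @ np.array([X, Y, Z]) + t_ce
    u_o = f * p_cam[0] / p_cam[2]
    v_o = f * p_cam[1] / p_cam[2]
    return u_o, v_o

# Placeholder calibration: small rotation about the camera z axis plus an offset.
R_ce = np.array([[0.999, -0.035, 0.0],
                 [0.035,  0.999, 0.0],
                 [0.0,    0.0,   1.0]])
t_ce = np.array([0.03, -0.02, 0.10])
print(project_roi_origin(0.05, 0.02, 0.40, R_ce, t_ce, f=0.008))
```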


Figure B.4 Perspective Projection of a Line Segment in the Image Plane

The projections of the line segment $OP$ at the initial shape-model creation stage and in the dynamic vision are shown above as $o_{0}p_{0}$ and $o_{1}p_{1}$:

$$\vec{o_{0}p_{0}} = (u_{p}^{0} - u_{o}^{0},\; v_{p}^{0} - v_{o}^{0}), \qquad
\vec{o_{1}p_{1}} = (u_{p}^{1} - u_{o}^{1},\; v_{p}^{1} - v_{o}^{1}) \qquad (B.10)$$

Obviously, the orientation of a line segment between the ROI origin and a point on the boundary of the ROI represents the orientation of the model ROI. So the orientation parameter $\theta$ from the shape-model matching equals the angle between $o_{0}p_{0}$ and $o_{1}p_{1}$:


$$\cos\theta = \frac{\vec{o_{0}p_{0}} \cdot \vec{o_{1}p_{1}}}{\lVert\vec{o_{0}p_{0}}\rVert\,\lVert\vec{o_{1}p_{1}}\rVert} \qquad (B.11)$$

In order to simplify the computation, we replace some long factors by single symbols:

$$\begin{aligned}
a_{1} &= r_{11}X + r_{12}Y + r_{13}Z + t_{x}, & b_{1} &= (r_{11}\cos\alpha + r_{12}\sin\alpha)m,\\
a_{2} &= r_{21}X + r_{22}Y + r_{23}Z + t_{y}, & b_{2} &= (r_{21}\cos\alpha + r_{22}\sin\alpha)m,\\
a_{3} &= r_{31}X + r_{32}Y + r_{33}Z + t_{z}, & b_{3} &= (r_{31}\cos\alpha + r_{32}\sin\alpha)m
\end{aligned} \qquad (B.12)$$

The symbol replacement results in

$$\cos\theta = \frac{k_{1}(a_{1}b_{3} - a_{3}b_{1}) + k_{2}(a_{2}b_{3} - a_{3}b_{2})}
{\sqrt{k_{1}^{2} + k_{2}^{2}}\;\sqrt{(a_{1}b_{3} - a_{3}b_{1})^{2} + (a_{2}b_{3} - a_{3}b_{2})^{2}}} \qquad (B.13)$$

where

$$k_{1} = (r_{11}r_{33} - r_{13}r_{31})Z_{0} + (r_{11}t_{z} - r_{31}t_{x}), \qquad
k_{2} = (r_{21}r_{33} - r_{23}r_{31})Z_{0} + (r_{21}t_{z} - r_{31}t_{y}) \qquad (B.14)$$

After substituting $b_{1}$, $b_{2}$, $b_{3}$ into equation (B.13), we can see that the factor $m$ cancels out (equation (B.15)), thus proving that the orientation is not related to the length of the selected line segment, which makes sense:

$$\cos\theta = \frac{k_{1}(k_{3} + k_{4}\tan\alpha) + k_{2}(k_{5} + k_{6}\tan\alpha)}
{\sqrt{k_{1}^{2} + k_{2}^{2}}\;\sqrt{(k_{3} + k_{4}\tan\alpha)^{2} + (k_{5} + k_{6}\tan\alpha)^{2}}} \qquad (B.15)$$

where


$$\begin{aligned}
k_{3} &= r_{11}a_{3} - r_{31}a_{1}, & k_{4} &= r_{12}a_{3} - r_{32}a_{1},\\
k_{5} &= r_{21}a_{3} - r_{31}a_{2}, & k_{6} &= r_{22}a_{3} - r_{32}a_{2}
\end{aligned} \qquad (B.16)$$

In equation (B.16), there is only one unknown, namely $\alpha$. It can be solved straightforwardly after some algebraic manipulation:

$$\tan\alpha = \frac{-e_{2} + \sqrt{e_{2}^{2} - 4e_{1}e_{3}}}{2e_{1}} \qquad (B.17)$$

where

$$\begin{aligned}
e_{1} &= (k_{1}k_{4} + k_{2}k_{6})^{2} - \cos^{2}\theta\,(k_{1}^{2} + k_{2}^{2})(k_{4}^{2} + k_{6}^{2}),\\
e_{2} &= 2\left[(k_{1}k_{3} + k_{2}k_{5})(k_{1}k_{4} + k_{2}k_{6}) - \cos^{2}\theta\,(k_{1}^{2} + k_{2}^{2})(k_{3}k_{4} + k_{5}k_{6})\right],\\
e_{3} &= (k_{1}k_{3} + k_{2}k_{5})^{2} - \cos^{2}\theta\,(k_{1}^{2} + k_{2}^{2})(k_{3}^{2} + k_{5}^{2})
\end{aligned} \qquad (B.18)$$

It is necessary to note that there are two solutions for $\alpha$ from equation (B.15). Based on the simulation results, the solution shown in equation (B.17) is the true one; the other is a false solution and is discarded. For each frame of the input image, the orientation $\theta$ from the shape-model matching function is known, so the orientation $\alpha$ of the model around the Z axis of the end-effector coordinate system is a function of $\theta$. The scale factor $s$ from the shape-model matching algorithm represents the area ratio between the extracted model ROI in the input image and the pre-created model. It is assumed that the area of the model ROI is $A$. While creating the shape model, the translation of the tool coordinate system along the Z axis of the end-effector coordinate


system is $T_{z,0}$. Projecting this object onto the plane whose normal is parallel with the optical axis of the camera yields the projected model shape, whose area is

$$A_{o} = A\cos\gamma \qquad (B.19)$$

where $\gamma$ is the angle between the Z axis of the end-effector coordinate system and the Z axis of the camera coordinate system. According to the perspective projection rule, the projection of a polygon onto the image plane is also a polygon. The area of the model ROI in the image plane at the shape-model creation stage is

$$A_{i}^{0} = A\cos\gamma\,\frac{f^{2}}{Z_{c0}^{2}} \qquad (B.20)$$

where $Z_{c0}$ is the Z-axis coordinate of the model ROI in the camera coordinate system at the shape-model creation stage. For the dynamic vision, the Z coordinate of the model object is updated, and the area of the model ROI in the image plane becomes

$$A_{i}^{1} = A\cos\gamma\,\frac{f^{2}}{Z_{c1}^{2}} \qquad (B.21)$$

From the shape-model matching,

$$s = \frac{A_{i}^{1}}{A_{i}^{0}} \qquad (B.22)$$

It can be obtained that

$$Z_{c1} = \frac{Z_{c0}}{\sqrt{s}} \qquad (B.23)$$

From the relationship between the area and $T_{z,i}$, it can be proven that

$$Z = \frac{Z_{0}}{\sqrt{s}} \qquad (B.24)$$


where $Z$ is the Z-axis translation of the tool coordinate system origin with respect to the end-effector coordinate system. Once we know the $Z$ coordinate, we can use the position parameters $(u, v)$ to solve for the $X$ and $Y$ parameters by substituting equation (B.24) into equation (B.6):

$$X = \frac{\left[(u_{o}r_{33} - fr_{13})Z + u_{o}t_{z} - ft_{x}\right](fr_{22} - v_{o}r_{32}) - \left[(v_{o}r_{33} - fr_{23})Z + v_{o}t_{z} - ft_{y}\right](fr_{12} - u_{o}r_{32})}{(fr_{11} - u_{o}r_{31})(fr_{22} - v_{o}r_{32}) - (fr_{21} - v_{o}r_{31})(fr_{12} - u_{o}r_{32})}$$

$$Y = \frac{\left[(v_{o}r_{33} - fr_{23})Z + v_{o}t_{z} - ft_{y}\right](fr_{11} - u_{o}r_{31}) - \left[(u_{o}r_{33} - fr_{13})Z + u_{o}t_{z} - ft_{x}\right](fr_{21} - v_{o}r_{31})}{(fr_{11} - u_{o}r_{31})(fr_{22} - v_{o}r_{32}) - (fr_{21} - v_{o}r_{31})(fr_{12} - u_{o}r_{32})} \qquad (B.25)$$

So far, the four parameters $X$, $Y$, $Z$ and $\alpha$ are available for the 3D pose. The pose of the object with respect to the end-effector system is

$$^{e}P_{o} = \begin{bmatrix} \cos\alpha & -\sin\alpha & 0 & X \\ \sin\alpha & \cos\alpha & 0 & Y \\ 0 & 0 & 1 & Z \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (B.26)$$

So now we can implement pose-based visual servoing for the system.

B.3. Visual Servo Controller Design

Given an object pose with respect to the end-effector coordinate system, it is straightforward to directly implement target tracking. Let $^{e}p_{o}$ be a desired pose, which is


constant: it is only translated from the origin of the end-effector coordinate system along its Z axis, without any rotation. This means that the end-effector is aligned with the object and ready for grasping. In this pose, the only nonzero value is the z-axis translation $c$, which is defined as 3 inches, so

$$^{e}p_{o} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & c \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (B.27)$$

From the pose determination in Chapter 4, the actual pose of the object with respect to the end-effector coordinate system is

$$^{e}P_{o} = \begin{bmatrix} \cos\gamma & -\sin\gamma & 0 & T_{x} \\ \sin\gamma & \cos\gamma & 0 & T_{y} \\ 0 & 0 & 1 & T_{z} \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (B.28)$$

The pose error is defined as

$$P_{e} = {}^{e}P_{o} - {}^{e}p_{o} \qquad (B.29)$$

Since the orientation is only around the Z axis, we might as well represent the rotation in terms of the unit vector $z$ and the rotation angle $\theta$, and define the commanded angular and translational velocities as

$$\Omega = k_{1}\,\theta\,z \qquad (B.30)$$

$$T = k_{2}\,t_{e} \qquad (B.31)$$

where $\theta = \gamma$, $t_{e} = [T_{x}\;\; T_{y}\;\; T_{z} - c]^{T}$, and $k_{1}$ and $k_{2}$ are proportional constants.
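Putting (B.24), (B.25), (B.30) and (B.31) together, the sketch below goes from a measured image point and recovered depth to a velocity command. The calibration matrix, gains and measurements are placeholder values, and the code is only an illustrative reading of the equations, not the controller implemented on the test bed.

```python
import numpy as np

def depth_from_scale(Z0, s):
    """(B.24): recover the Z translation of the ROI origin from the
    scale factor s reported by shape-based matching."""
    return Z0 / np.sqrt(s)

def xy_from_image(u_o, v_o, Z, R, t, f):
    """(B.25): solve the two projection equations (B.6) for X and Y,
    given the measured image point (u_o, v_o) and the depth Z."""
    A = np.array([[f * R[0, 0] - u_o * R[2, 0], f * R[0, 1] - u_o * R[2, 1]],
                  [f * R[1, 0] - v_o * R[2, 0], f * R[1, 1] - v_o * R[2, 1]]])
    b = np.array([(u_o * R[2, 2] - f * R[0, 2]) * Z + u_o * t[2] - f * t[0],
                  (v_o * R[2, 2] - f * R[1, 2]) * Z + v_o * t[2] - f * t[1]])
    return np.linalg.solve(A, b)

def servo_command(Tx, Ty, Tz, gamma, c, k1=0.5, k2=1.0):
    """(B.30)-(B.31): angular velocity about z and translational velocity
    that drive the end-effector toward the grasp pose."""
    omega = k1 * gamma * np.array([0.0, 0.0, 1.0])
    vel = k2 * np.array([Tx, Ty, Tz - c])
    return omega, vel
```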


The purpose of the visual servo is to produce velocity commands that drive the robot to a desired pose automatically. As shown in Figure B.4, there are two different control modes to drive the manipulator. When the object is not in the scene of the end-effector-mounted camera, the telemanipulation operator can transmit control commands through the input device. Once the target is seen by the camera and the relative pose between the camera and the object is available, the visual servo takes effect to generate control commands. These two control modes can be switched easily.

B.4. Tele-autonomy Design

Our telerobot operation experience revealed that a typical ADL task is composed of a few motor behaviors (sub-tasks), namely look_for_goal, move_to_goal, and align_with_goal, as shown in Figure B.5.

Figure B.5 Tele-autonomy Illustration (state diagram with transitions such as where_is_it/found, locate_it, close_enough/not_close_enough, aligned/not_aligned, and end, switching between teleoperation and autonomy)
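The behavior switching sketched in Figure B.5 can be expressed as a small state machine. The state names follow the figure, while the transition predicates, threshold and function name are hypothetical placeholders.

```python
def tele_autonomy_step(state, target_visible, distance, aligned, close_enough=0.05):
    """One decision step of the tele-autonomy cycle in Figure B.5.
    Returns the next sub-task and whether the robot is under autonomy."""
    if state == "look_for_goal":
        # Operator teleoperates until the camera finds the target.
        return ("move_to_goal", True) if target_visible else ("look_for_goal", False)
    if state == "move_to_goal":
        # Visual servoing drives toward the target until close enough.
        return ("align_with_goal", True) if distance < close_enough else ("move_to_goal", True)
    if state == "align_with_goal":
        # Align the end-effector; when aligned, the task segment ends.
        return ("end", True) if aligned else ("align_with_goal", True)
    return ("end", False)
```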


About the Author

Wentao Yu was born on February 5, 1972 in Wuhan, China to Dachu Yu and Hanmei Mei. He attended the Southern Institute of Metallurgy in Jiangxi, China and graduated with his Bachelor of Science in 1994. He was then employed by Shougang Group in Beijing, China, working as a mechanical and control engineer for three years. In 1997, he returned to school for graduate study at the University of Science & Technology Beijing, where he attained his Master of Science in Mechatronics Engineering with a concentration in embedded control systems in early 2000. After graduating with his Master's, he worked in a software company as an embedded software engineer. Some investigation into Ph.D. programs around the world led him to come to the University of South Florida, doing rehabilitation robotics research under the direction of Professor Rajiv Dubey. He completed this work in the field of intelligent telerobotics with assistance during 2003-2004. In the summer of 2004, he received a senior control engineer job offer from TRW Automotive Inc. and started his new job in late August 2004.

