USF Libraries
USF Digital Collections

Design and implementation of a hard real-time telerobotic control system using sensor-based assist functions


Material Information

Title:
Design and implementation of a hard real-time telerobotic control system using sensor-based assist functions
Physical Description:
Book
Language:
English
Creator:
Veras-Jorge, Eduardo J
Publisher:
University of South Florida
Place of Publication:
Tampa, Fla
Publication Date:
2008
Subjects

Subjects / Keywords:
Computer vision
Haptics
Robotic control
Scaled teleoperation
Virtual fixtures
Dissertations, Academic -- Mechanical Engineering -- Doctoral -- USF (lcsh)
Genre:
non-fiction (marcgt)

Notes

Summary:
ABSTRACT: This dissertation presents a novel concept of a hard real-time telerobotic control system using sensory-based assistive functions combining autonomous control mode, force and motion-based virtual fixtures, and scaled teleoperation. The system has been implemented as a PC-based multithreaded, real-time controller with a haptic user interface and a 6-DoF slave manipulator. A telerobotic system is a system that allows a human to control a manipulator remotely and the human control is combined with computer control. A telerobotic control system with sensor-based assistance capabilities enables the user to make high-level decisions, such as target object selection, and it enables the system to generate trajectories and virtual constraints to be used for autonomous motion or scaled teleoperation. The design and realization of a telerobotic system with the capabilities of sensing and manipulating objects with haptic feedback, either real or virtual, require utilization of sensor-based assist functions through an efficient real-time control scheme. This dissertation addresses the problem of integrating sensory information and the calculation of sensor-based assist functions (SAF's) in hard real-time using PC-based resources. The SAF's calculations are based on information from a laser range finder, with additional visual feedback from a camera, and haptic measurements for motion assistance and scaling during the approach to a target and while following a desired path. This research compares the performance of the autonomous control mode, force and motion-based virtual fixtures, and scaled teleoperation. The results show that a versatile PC-based real-time telerobotic platform adaptable to a wide range of users and tasks is achievable. A key aspect is the real-time operation and performance with multithreaded software architecture. This platform can be used for several applications in areas such as rehabilitation engineering and clinical research, surgery, defense, and assistive technology solutions.
Thesis:
Dissertation (Ph.D.)--University of South Florida, 2008.
Bibliography:
Includes bibliographical references.
System Details:
Mode of access: World Wide Web.
System Details:
System requirements: World Wide Web browser and PDF reader.
Statement of Responsibility:
by Eduardo J. Veras-Jorge.
General Note:
Title from PDF of title page.
General Note:
Document formatted into pages; contains 207 pages.
General Note:
Includes vita.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 002046383
oclc - 496013663
usfldc doi - E14-SFE0002673
usfldc handle - e14.2673
System ID:
SFS0026990:00001


Full Text



Design and Implementation of a Hard Real-Time Telerobotic Control System Using Sensor-Based Assist Functions

by

Eduardo J. Veras-Jorge

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Department of Mechanical Engineering, College of Engineering, University of South Florida

Major Professor: Rajiv Dubey, Ph.D.
Kathryn J. De Laurentis, Ph.D.
Susana K. Lai-Yuen, Ph.D.
Craig Lusk, Ph.D.
Wilfrido Moreno, Ph.D.
Kandethody Ramachandran, Ph.D.

Date of Approval: November 21, 2008

Keywords: computer vision, haptics, robotic control, scaled teleoperation, virtual fixtures

Copyright 2008, Eduardo J. Veras

Acknowledgments

The author gratefully acknowledges the invaluable assistance and support provided by Dr. Rajiv Dubey, for trusting me in the first place and for helping me to identify a research problem in rehabilitation robotics. His guidance and his sharing of his knowledge were instrumental during the course of my research. I would also like to thank the rest of the members of my committee, Prof. Craig Lusk, Prof. Kandethody Ramachandran, Dr. Kathryn De Laurentis, Prof. Susana Lai-Yuen, and Prof. Wilfrido Moreno. They have all contributed by providing their feedback and by sharing their experience in the form of invaluable comments for improving this document.

Thanks to lab mates and good friends Chris Colbert, Kathryn De Laurentis, Karan Khokar, Peter Schrock, Ramya Swaminathan, Redwan Alqasemi, Stephanie Carey, and Stephanie Stiber. I have had the opportunity to work with them and share memorable moments during this project. Special thanks to Karan Khokar and Redwan Alqasemi for assisting with experiments and testing of software. I also would like to express my gratitude to the staff of the Center for Rehabilitation Engineering and Technology: Bill Calvin, Linda Colon, Rosannah Parma, Stephen Sundarrao, and Vilma Fitzhenry, and the ME department staff: Shirley Tervort, Susan Britten, and Wes Frusher. Thanks to Johanna Cedeno for her help with the statistical analysis.

On a more personal level, I would like to express my deepest gratitude to my family. Thanks to my mother, Lilliam, and brothers, Hugo Jose Alfonso, Jose Eduardo, and Jacque, who always have been so supportive of my efforts. Last, and foremost, thanks to God, for considering me one of his sons and for showing me a way to become a better person every day.

Dedication

To Auristela, my lovely wife, for her unconditional love, and for just finding the appropriate boost to give when I really needed it. Also, I would like to dedicate this work to my children, Adriana and Alfonso; I can only hope they will do much better than I had ever dreamed in life.

Table of Contents

List of Tables iv
List of Figures v
Abstract xi

Chapter 1. Introduction 1
1.1 Motivation 1
1.2 Visual and Haptic Feedback 2
1.3 Rehabilitation Robotics Applications 3
1.4 Dissertation Objectives 4
1.5 Dissertation Outline 5

Chapter 2. Background 7
2.1 Introduction 7
2.2 Teleoperation Robotics 8
2.3 Teleoperation Assistance 12
2.3.1 Position-Based Assistance Functions 12
2.3.2 Velocity Scaling Assistance Functions 13
2.3.3 Virtual Fixture Assistance Functions 15
2.4 Teleoperation in Real Time 18

Chapter 3. Hard Real-Time Telerobotic Controller 25
3.1 Introduction 25
3.2 The Need for Real-Time Haptically Controlled Robotics 26
3.3 Telerobotic Computational Tasks 29
3.4 Overview of the Robot Arm Controller and Forward Kinematics Equations 30
3.5 General Nonlinear Robotic Model 37
3.6 Generic Architecture for a Real-Time Robotic Controller 43
3.7 Cartesian Trajectory Generation Thread 50
3.8 Resolved Rate Thread 52
3.9 Sensory Information Thread 53
3.10 Summary 55

Chapter 4. Sensor-Based Assistance, Autonomous and Teleoperation Control 57
4.1 Introduction 57
4.2 Sensor-Based Telerobotic Control Theory 58
4.2.1 Autonomous Control Mode 58
4.2.2 Position-Based Teleoperation Control Mode 61
4.2.3 Velocity-Based Teleoperation Control Mode 62
4.2.4 Scaled Teleoperation 64
4.2.5 Virtual Fixture-Based Teleoperation 65
4.3 The Phantom Omni Haptic Interface 65
4.4 Joint and Cartesian Control through the Haptic Device 67
4.5 Telerobotic Control System 68
4.6 Indexing with the Haptic Device 70
4.7 Assistance Function (SAF) Concept 70
4.8 Summary 75

Chapter 5. Visual and Haptic Data for Motion Scaling and Virtual Constraint Definition 76
5.1 Introduction 76
5.2 Spatial Domain Pre-Processing 77
5.3 Numerical Optimization Approach of the Camera Parameters 83
5.4 Inverse Perspective Mapping (IPM) 86
5.5 Edge Detection and Feature Extraction 88
5.6 Mapping to the Robot Arm Reference Frame 89
5.7 Summary 94

Chapter 6. Sensor-Based Assistance Function Calculations 96
6.1 Introduction 96
6.2 Generic Scheme for Motion-Dependent Force Feedback Calculation 96
6.3 Sensor-Based Assistance 99
6.4 Comments 106
6.5 Summary 106

Chapter 7. Experimental Methodology and Testbed for Interactive Simulation Results 108
7.1 Introduction 108
7.2 Methodology for Experiments 109
7.3 Visual and Haptic Testbed to Control a 6-DoF Robot Arm 112
7.4 Haptic Interface and Cartesian Motion 116
7.5 Performance Measures 117
7.5.1 The Absolute Position Error (APE) 118
7.5.2 The Absolute Orientation Error (AOE) 119
7.6 Summary 124

Chapter 8. Virtual Reality Simulation Testing 125
8.1 Introduction 125
8.2 Virtual Reality Simulation of the Puma 560 Manipulator 126
8.3 Control of the VR Model of the Puma 560 Manipulator 127
8.4 VR Linear Trajectory Simulation 130
8.5 Haptic Feedback and Assistive Functions in Simulation 133
8.6 Comments on the Haptic and VR Model Simulation 135
8.7 Communication Protocol 135
8.8 Comments on the Communication Protocol in the Simulation Program 136
8.9 Summary 137

Chapter 9. Results and Discussion 138
9.1 Introduction 138
9.2 Interactive Simulation Results 138
9.2.1 Position-Based Control Interactive Simulation Results 143
9.2.2 Velocity-Based Control Interactive Simulation Results 154
9.3 Virtual Reality Simulation Results 169
9.4 Summary 174

Chapter 10. Conclusions and Recommendations 175
10.1 Overview 175
10.2 General Discussion 178
10.3 Recommendations 179

Chapter 11. Future Work 181
11.1 Introduction 181
11.2 Combined Mobility and Manipulation with Time-Dependent Sensory Calibration Functions in Real Time 181
11.3 Autonomous Navigation 182
11.4 Remote Assistance 183

References 184
Bibliography 191
Appendices 194
Appendix A: Puma 560 Homogeneous Transformations 195
Appendix B: Equivalent Single Angle-Axis Representation 197
Appendix C: MatLab Script for the Symbolic Jacobian Determination 201
Appendix D: Singularity-Robust (SR) Inverse 202
Appendix E: Angular Velocity Components of the End-Effector 203
Appendix F: Specifications for the PHANTOM Omni Haptic Device 206
Appendix G: Custom-Made Sick DT60 Data Acquisition Module 207
About the Author End Page

List of Tables

Table 3.1 DH Parameters of the Puma 560 Robot Arm [51] 33
Table 3.2 Link Mass and Center of Gravity Locations [51] 42
Table 5.1 Extrinsic Camera Parameters and End-Effector Rotation and Translation Matrices 93
Table 6.1 Constrained Directions in a Motion Task 104
Table 9.1 Completion Time (in seconds) for the "Pick up a cup" Task 140
Table 9.2 Completion Time Descriptive Statistics 141
Table F.1 Specifications for the Omni Haptic Device 206

List of Figures

Figure 2.1 (a) End-Effector Constrained to Motion on a Linear Path; (b) End-Effector Constrained to Motion on a Plane 13
Figure 2.2 Scaling Factor Functions [26] 14
Figure 2.3 Scaling Factor Based on Laser Range Finder Reading [31] 15
Figure 2.4 Cross-Alignment Task in [31] 15
Figure 2.5 Force Clues Generated by Position and Approach Fixtures (Left); Fixtures Restricting Degrees of Freedom (Right) [36] 17
Figure 2.6 Lennard-Jones Potential Functions 18
Figure 3.1 Coordinate Frame Assignments to Links of the Puma 560 [51] 32
Figure 3.2 DH-Based Intermediate Transformations [51] 34
Figure 3.3 Simplified Resolved Rate Algorithm Block Diagram 38
Figure 3.4 Multithreaded Robot Arm Controller Architecture 45
Figure 3.5 Ready/Blocked States, Adapted from [50] 47
Figure 3.6 Block Diagram of the System Architecture 49
Figure 3.7 Cartesian to Joint Space Conversion in the Robotic Workspace 53
Figure 4.1 Conceptual Representation of Autonomous Control Mode 59
Figure 4.2 Phantom Omni Haptic Device 67
Figure 4.3 Phantom Omni Reference Configurations 68
Figure 4.4 Telerobotic System Block Diagram 69
Figure 4.5 Representation of the Sensor-Based Assistance Function 72
Figure 4.6 A Set of Line-of-Sight Vectors (in Red) Placed Close to the Centroid of the Region of Interest (ROI) 73
Figure 4.7 Line of Sight Using Single-Axis Rotation [60] 74
Figure 5.1 Camera Model Geometry 79
Figure 5.2 Graphical User Interface with Chessboard Calibration Pattern 81
Figure 5.3 Chessboard Calibration Pattern at a Different Pose of the Robot Arm 81
Figure 5.4 Calibration Pattern in the Camera-Mounted Field of View 82
Figure 5.5 Distorted and Undistorted Sensor and Image Coordinates 82
Figure 5.6 ... 85
Figure 5.7 Camera-Centered Ca... 85
Figure 5.8 Illustration of the Error between Predicted and Observed Image Points 86
Figure 5.9 Camera and Image Planes Geometrical Relationships 87
Figure 5.10 Relationships between the Different Coordinate Frames [63] 90
Figure 6.1 Translational Distance, d_ij, Used for the Feedback Force Control Law 97
Figure 6.2 Desired Path and "Noisy" Trajectory Input 105
Figure 7.1 Setup 110
Figure 7.2 Virtual Environment for Teleoperation of the PUMA Manipulator 113
Figure 7.3 Sensor Suite Devices 114
Figure 7.4 Camera and the Sick DT60 Laser Range Finder Mounted at the ... 114
Figure 7.5 Results of the Segmentation and Feature Extraction Process 115
Figure 7.6 Virtual Environment and 3-D Constraint Plane for Haptic Control 116
Figure 7.7 Absolute Position Error 119
Figure 7.8 Absolute Orientation Error 121
Figure 8.1 Virtual Reality Model of the Puma 560 127
Figure 8.2 Control Panel for Joint and Cartesian Space VR Simulations 128
Figure 8.3 Haptic VR Puma 560 Graphical User Interface 129
Figure 8.4 Required Joint Angles for the Predefined Linear Trajectory Path 132
Figure 8.5 End-Effector Displacements from Initial to Goal Position 132
Figure 8.6 Bezier Curve Trajectory and Haptically Rendered Cube 133
Figure 8.7 Experimental Data of Forces Resulting from a Typical Interaction 134
Figure 9.1 Boxplot of Autonomous (C1), Position-Based Regular Teleoperation (C2), Position-Based Scaled Teleoperation (C3), Position-Based Virtual Fixture (C4), Velocity-Based Regular Teleoperation (C5), Velocity-Based Scaled Teleoperation (C6), and Velocity-Based Virtual Fixture (C7) 142
Figure 9.2 Position-Based Regular Teleoperation vs. Scaled Teleoperation 144
Figure 9.3 Position-Based Regular Teleoperation vs. Autonomous Control 145
Figure 9.4 Position-Based Regular Teleoperation vs. Virtual Fixture Teleoperation 145
Figure 9.5 Position-Based Virtual Fixture Teleoperation vs. Autonomous Control 146
Figure 9.6 Position-Based Scaled Teleoperation vs. Autonomous Control 146
Figure 9.7 Position-Based Scaled Teleoperation vs. Virtual Fixture Teleoperation 147
Figure 9.8 Absolute Position Error in Position-Based Regular vs. Scaled Teleoperation 147
Figure 9.9 Absolute Position Error in Position-Based Regular Teleoperation vs. Autonomous Control 148
Figure 9.10 Absolute Position Error in Position-Based Regular vs. Virtual Fixture Teleoperation 148
Figure 9.11 Absolute Position Error in Position-Based Virtual Fixture Teleoperation vs. Autonomous Control 149
Figure 9.12 Absolute Position Error in Position-Based Scaled Teleoperation vs. Autonomous Control 149
Figure 9.13 Absolute Position Error in Position-Based Scaled vs. Virtual Fixture Teleoperation 150
Figure 9.14 Absolute Orientation Error in Position-Based Regular vs. Scaled Teleoperation 150
Figure 9.15 Absolute Orientation Error in Position-Based Scaled Teleoperation vs. Autonomous Control 151
Figure 9.16 Absolute Orientation Error in Position-Based Regular Teleoperation vs. Autonomous Control 151
Figure 9.17 Absolute Orientation Error in Position-Based Regular vs. Virtual Fixture Teleoperation 152
Figure 9.18 Absolute Orientation Error in Position-Based Virtual Fixture Teleoperation vs. Autonomous Control 152
Figure 9.19 Absolute Orientation Error in Position-Based Scaled Teleoperation vs. Autonomous Control 153
Figure 9.20 Absolute Orientation Error in Position-Based Scaled vs. Virtual Fixture Teleoperation 153
Figure 9.21 Velocity-Based Regular Teleoperation vs. Scaled Teleoperation 155
Figure 9.22 Velocity-Based Regular Teleoperation vs. Autonomous Control 156
Figure 9.23 Velocity-Based Regular Teleoperation vs. Virtual Fixture Teleoperation 156
Figure 9.24 Velocity-Based Virtual Fixture Teleoperation vs. Autonomous Control 157
Figure 9.25 Velocity-Based Scaled Teleoperation vs. Autonomous Control 157
Figure 9.26 Velocity-Based Scaled Teleoperation vs. Virtual Fixture Teleoperation 158
Figure 9.27 Absolute Position Error in Velocity-Based Regular vs. Scaled Teleoperation 158
Figure 9.28 Absolute Position Error in Velocity-Based Regular Teleoperation vs. Autonomous Control 159
Figure 9.29 Absolute Position Error in Velocity-Based Regular vs. Virtual Fixture Teleoperation 159
Figure 9.30 Absolute Position Error in Velocity-Based Virtual Fixture Teleoperation vs. Autonomous Control 160
Figure 9.31 Absolute Position Error in Velocity-Based Scaled Teleoperation vs. Autonomous Control 160
Figure 9.32 Absolute Position Error in Velocity-Based Scaled vs. Virtual Fixture Teleoperation 161
Figure 9.33 Absolute Orientation Error in Velocity-Based Regular vs. Scaled Teleoperation 161
Figure 9.34 Absolute Orientation Error in Velocity-Based Scaled Teleoperation vs. Autonomous Control 162
Figure 9.35 Absolute Orientation Error in Velocity-Based Regular Teleoperation vs. Autonomous Control 162
Figure 9.36 Absolute Orientation Error in Velocity-Based Regular vs. Virtual Fixture Teleoperation 163
Figure 9.37 Absolute Orientation Error in Velocity-Based Virtual Fixture Teleoperation vs. Autonomous Control 163
Figure 9.38 Absolute Orientation Error in Velocity-Based Scaled Teleoperation vs. Autonomous Control 164
Figure 9.39 Absolute Orientation Error in Velocity-Based Scaled vs. Virtual Fixture Teleoperation 164
Figure 9.40 APE for Force, Position-Based Regular and Scaled Teleoperation 166
Figure 9.41 AOE for Force, Position-Based Regular and Scaled Teleoperation 166
Figure 9.42 APE for Teleoperation without Assistance, Motion-Based Scaling, Motion-Based Virtual Fixture and Force-Based Virtual Fixture 167
Figure 9.43 AOE for Teleoperation without Assistance, Motion-Based Scaling, Motion-Based Virtual Fixture and Force-Based Virtual Fixture 167
Figure 9.44 APE for Autonomous, Velocity-Based Scaling, Velocity-Based Virtual Fixture and Force-Based Virtual Fixture 168
Figure 9.45 AOE for Autonomous, Velocity-Based Scaling, Velocity-Based Virtual Fixture and Force-Based Virtual Fixture 168
Figure 9.46 Position Results of Circular Path in Cartesian Space 170
Figure 9.47 Robot Position Tracking of the Circular Path in the X-Y Plane 170
Figure 9.48 Haptic Position Tracking of the Circular Path in the X-Y Plane 171
Figure 9.49 Typical Assistive Feedback Force Experienced by the User 172
Figure 9.50 Typical Results of the Moving Average Filter Implementation 173
Figure E.1 Definition of the Euler Angles 203
Figure G.1 Custom-Made ADC Module for the DT60 Sick Laser Sensor 207

Design and Implementation of a Hard Real-Time Telerobotic Control System Using Sensor-Based Assist Functions

Eduardo J. Veras

Abstract

This dissertation presents a novel concept of a hard real-time telerobotic control system using sensory-based assistive functions combining autonomous control mode, force and motion-based virtual fixtures, and scaled teleoperation. The system has been implemented as a PC-based multithreaded, real-time controller with a haptic user interface and a 6-DoF slave manipulator. A telerobotic system is a system that allows a human to control a manipulator remotely, and the human control is combined with computer control. A telerobotic control system with sensor-based assistance capabilities enables the user to make high-level decisions, such as target object selection, and it enables the system to generate trajectories and virtual constraints to be used for autonomous motion or scaled teleoperation.

The design and realization of a telerobotic system with the capabilities of sensing and manipulating objects with haptic feedback, either real or virtual, require utilization of sensor-based assist functions through an efficient real-time control scheme. This dissertation addresses the problem of integrating sensory information and the calculation of sensor-based assist functions (SAF's) in hard real time using PC-based resources. The calculations are based on information from a laser range finder, with additional visual feedback from a camera, and haptic measurements for motion assistance and scaling during the approach to a target and while following a desired path. This research compares the performance of the autonomous control mode, force and motion-based virtual fixtures, and scaled teleoperation. The results show that a versatile PC-based real-time telerobotic platform adaptable to a wide range of users and tasks is achievable. A key aspect is the real-time operation and performance with multithreaded software architecture. This platform can be used for several applications in areas such as rehabilitation engineering and clinical research, surgery, defense, and assistive technology solutions.

Chapter 1. Introduction

1.1 Motivation

The practicalities of creating a telerobotic control system that provides assistance to a wide community of users impose computational constraints on the realization of such a system. On one hand, the external assistance (scaling, virtual fixtures, or haptic force feedback) is integrated with optical sensory information for computing the kind of assistance to be provided. On the other hand, the use of supervisory control, i.e., human-in-the-loop physical control of the robot arm, presents the possibility of introducing instability during task execution if the proper control action is delayed or the update rates are not consistent. It is desired to integrate supervisory control (human in the loop), in which the human is in control and, at times, might switch to the autonomous control mode or to the scaled or virtual fixture teleoperation modes, in an accurate and deterministic fashion, enabling stable control of the teleoperation while allowing sensor-based motion guidance. The development of a hard real-time telerobotic controller with haptic and sensory integration requires that the generated assist functions be fully integrated in the control system. The implementation of hard real-time control algorithms is a fundamental step for the development of sensor-based assistive technology in such areas as rehabilitation and related training, surgery, defense, and assistive technology applications. During the user's interaction with real and virtual objects, the haptic response needs to be in real time, allowing operation in a complex environment and providing user motion assistance during task execution. In this context, hard real time means that all the timing constraints of the system are met every time. Besides the autonomous operation mode, other operations are implemented in position and velocity control modes through regular, scaled, and virtual fixture teleoperation. In any of those control modes, the stability and predictability of the telerobotic system response depend on strict timing requirements. In order to satisfy the response time constraints of a telerobotic system with sensor-based assistance, a flexible real-time and multithreaded approach is needed. The PC-based multithreaded architecture allows designing and implementing telerobotic tasks with additional capabilities for assistance and haptic manipulation of target objects.

1.2 Visual and Haptic Feedback

The integration of visual and haptic information is particularly difficult because of the different nature of the sensory signals. On one hand, the human brain can easily interpret continuous motion from visual signals updated at 24-30 frames per second. On the other hand, the human sense of touch is much more demanding in terms of consistent timing and update rates. It is known that, in order to generate a realistic sensation of touch, the update rate must be at least 1000 Hz, maintained consistently, to render rigid contact. A haptic interface such as the Phantom Omni requires a servo loop running between 1000 and 2000 Hz to transmit the sensation of a hard surface. An additional constraint, therefore, is the definition of the limits of the achievable stiffness for stable control of the haptic interface [3]. The restrictions discussed above are very significant in telerobotic applications, which require continuous control of the robot arm configurations (position and orientation) in autonomous or teleoperation modes. The design and implementation of a PC-based platform for a sensor-assisted telerobotic system would provide for the realization of hard real-time teleoperation with a haptic interface by combining the desirable properties of autonomous and teleoperation control systems. Since PCs are ubiquitous, this platform can be widely available and not exclusive to researchers or those who have access to major computing power.
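To make the 1000-2000 Hz servo-loop requirement concrete, the following minimal C++ sketch shows one common way to run a fixed-rate 1 kHz loop using the POSIX timing services available on a POSIX-compliant RTOS such as QNX. It is an illustrative skeleton only: computeForce() and commandActuators() are hypothetical placeholders, not functions from the controller developed in this dissertation.

#include <ctime>

// Hypothetical hooks into the haptic device; stand-ins for illustration only.
void computeForce();
void commandActuators();

int main() {
    const long period_ns = 1000000L;           // 1 ms period -> 1000 Hz rate
    timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);     // absolute time of first deadline
    for (;;) {
        computeForce();                        // one servo cycle of the force law
        commandActuators();
        next.tv_nsec += period_ns;             // advance the absolute deadline
        if (next.tv_nsec >= 1000000000L) {     // normalize the timespec
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        // Sleep until the absolute deadline, so late wake-ups do not drift.
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, nullptr);
    }
}

Sleeping to absolute deadlines rather than for relative intervals is what keeps the update rate consistent in the sense required above: a late wake-up shortens the next cycle instead of shifting every subsequent deadline.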

1.3 Rehabilitation Robotics Applications

This platform can be used for the implementation and execution of different teleoperation tasks. The research environment in which it is realized is primarily concerned with the development of new technology or modifications to existing technology. This implementation would assist persons with disabilities to enhance their mobility and manipulation using robotic systems. This field is known as rehabilitation robotics. Rehabilitation robotics is a term associated with the use of robotic technology to assist persons with disabilities to perform tasks they are unable to accomplish, or have great difficulty accomplishing, without external assist methods to guide the user's interactions. Within this context, the experiments conducted to validate the system are related to the completion of Activities of Daily Living (ADL), such as "pick up a cup". Other tasks, such as opening a door, flipping a switch, and opening a faucet, can be performed using the system. The testing of the system is conducted on healthy people performing the "pick up a cup" task, which is a common activity of daily living (ADL) task. Three people are trained to use the Phantom Omni interface and to teleoperate the PUMA manipulator. The actual hardware used for performing the experiments includes a 6-DoF PUMA 560 manipulator, a Phantom Omni haptic interface, and a sensory suite consisting of a CCD camera, a Sick DT60 laser range finder, and the PUMA encoders. The performance indicators are defined in terms of the "Absolute Position Error" (APE), the "Absolute Orientation Error" (AOE), and the task completion time, which are calculated using the recorded data sets for each experiment.

1.4 Dissertation Objectives

The major objectives of this dissertation are:

1. To begin the development of a PC-based hard real-time controller for a sensor-assisted telerobotic system with a haptic user interface and a 6-DoF slave manipulator.
2. To design a framework that can be useful for rehabilitation engineering, surgery, defense, and assistive technology applications.
3. To integrate visual and haptic feedback for autonomous and teleoperated manipulation of target objects.
4. To implement real-time sensor-based assist functions to guide the user's motion.
5. To provide visual feedback combined with scaled teleoperation and virtual fixture or constraint definitions to guide the user's interactions while manipulating virtual and real objects.
6. To implement data structures and communication protocols that allow handling interactive simulations, haptic interactions, optical sensors, and robotic manipulations in real time using a PC-based platform.
7. To develop a virtual reality model to simulate the telerobotic system in a purely robotic mode and in a haptic-integrated mode for conceptual testing of the control algorithms.
8. To develop a control strategy based on a "closed form" solution for Puma-like manipulators and a "Jacobian based" control strategy that is expandable to control redundant robot arms for which exact solutions are not available.

1.5 Dissertation Outline

This dissertation comprises eleven (11) chapters; each one deals with a major topic related to the development of the PC-based hard real-time telerobotic control system using sensory-based assist functions and the combination of autonomous control mode, force-based and motion-based virtual fixtures, and scaled teleoperation. Chapter 1 discusses the motivation for the development of the system as well as the need for hard real-time telerobotic control combining autonomous and teleoperation control. Chapter 2 gives background on previous work in the field of robotic teleoperation and assistance. The concept of real-time control and the multithreading architecture of the teleoperation tasks are outlined in Chapter 3. Chapter 4 contains the basis of the sensor-based telerobotic control implementation using position-based and velocity-based control modes. Chapter 5 describes the mapping between the sensors' reference frames and the robot arm reference frame required for driving the robot arm in teleoperation with the human in the loop and in autonomous mode. Chapter 6 describes the sensor-based assistance functions for motion-dependent feedback. Chapter 7 explains the experimental methodology for performing the experiments and defines the performance measures utilized. Chapter 8 describes the virtual reality simulations developed for testing and debugging of some of the algorithms implemented for the telerobotic and haptic system interfacing. Chapter 9 outlines the experiments conducted to show the control of the physical system and discusses the results. Chapter 10 concludes the dissertation work with recommendations, and suggestions for future work are outlined in Chapter 11.

Chapter 2. Background

2.1 Introduction

Teleoperation tasks executed with the assistance of a haptic interface controller require controlling the position and orientation of a manipulator with multiple degrees of freedom. Multiple joints of the manipulator are moved in a continuous way in order to obtain a particular configuration of its end-effector. The required tasks for the haptic interface, in general, are to follow a prescribed path, to provide force reflection through the device actuators, impedance simulation using simple mathematical models such as spring-type forces, and obstacle avoidance [4][5]. These tasks are implemented with a human-machine interface which requires the user to be always in the loop (supervisory control). In this work, a combination of supervisory control and autonomous control modes is implemented, which requires the integration of haptic interfacing techniques with sensor-based assist functions (SAF's) and stable transitioning between control modes. The purpose is to reduce the burden on the user by eliminating the requirement of the user being "always in the loop" and to provide assistance to guide the user using scaling and virtual fixtures. The concept of human-machine interaction combined with haptic feedback has been the subject of considerable research [6][7][8][9]. The assistance provided by the generation of scaling and virtual constraints demands a consistent and stable timing response. The need for predictable performance is a key factor in the ability of a hard real-time system to meet the application's response time requirements for such applications. This chapter describes previous work done in the teleoperation and assistance areas. A summary containing the differentiating features of the system described in this dissertation is presented at the end of the chapter.

2.2 Teleoperation Robotics

Teleoperation extends the human's sensing and manipulation capability to a remote location [10]. It was first described by Ray Goertz, who designed mechanisms such as mechanical pantograph devices to allow radioactive materials to be handled from a safe distance. Even though it was not a robotic application, it introduced a way for expanding research work in this direction. As teleoperation technology developed, the mechanical linkages were replaced by electrical servos, and cameras replaced direct viewing, allowing the operator to be located arbitrarily far away. A more detailed description of several types of teleoperation systems and concepts defined in the area of remote manipulation technology is given in [10]. The basics of computer-aided teleoperation technology were established around 1965-70, when robotics applications were implemented with the aim of increasing dexterity and manipulation [11]. In the early stages of the development of teleoperation technology, the primary applications appeared in the areas of nuclear waste handling and decommissioning and the handling of toxic chemicals and radioactive materials. The human operators were provided with visual aid through video displays and operated a remotely located slave robot via a hand controller, but no assistance was provided to them to effectively complete the task. The idea of supervisory control (which combines human and computer control) became apparent when researchers started to question how to teleoperate vehicles on the moon through the unavoidable time delay of three-tenths of a second for the radio signal round trip to the Moon [10, 12, 13]. Early applications of teleoperation in space basically implemented time delays in the control system where a human was remotely controlling a vehicle without force feedback or motion assistance. Time delays still continue to be a problem in space teleoperation for exploration. In 1985, another area of research was developed to find ways to remotely operate underwater vehicles (RUV's). At that time, an RUV named Jason was used for exploring the sunken Titanic. The control system of the Jason was designed by Yoerger [14], and it was teleoperated from the ARGO towed imaging platform at the surface. This system integrated a vision system to assist the researchers at the surface during the underwater exploratory task. Nowadays, this underwater exploration system is commonly known as the ARGO/JASON system [15]. The term teleoperation typically refers to systems in which the human operator directly and continuously controls the remote manipulator or telerobot. In these systems, the kinematic chain manipulated by the operator is referred to as the "master", while the remotely controlled manipulator is the "slave". However, the term is also used to define two further types of operation [16]:

1. Tele-autonomy: refers to the combination of teleoperation and autonomous robotic control. In some cases, a unilateral controller is used where there is no feedback information from slave to master or from master to human.

2. Tele-collaboration: means that all operations are controlled through the human-machine interface, usually in the form of force reflection.

A teleoperation control system can be unilateral or bilateral depending on the data flow. In the case of a unilateral controller, the robot arm is operated as an open-loop system. If the master and the slave are physically separated, there may be video feedback of the slave executing a task, or even no video if the master and slave are at the same site [17, 18, 19]. In this case, human decisions are merged with the computer-generated assistance to allow for complex forms of automatic control. The control system adds velocity/force inputs to those from the master in the impedance-controlled formulation to assist the motion of the manipulator. Bilateral impedance control allows force reflection to be provided to the operator during task execution [10, 20, 21]. In [18], Dubey et al. proposed the variable impedance method, where the impedance parameters are adapted to variable circumstances, thus overcoming the conflict problem of choosing desired dynamics parameters. This type of controller is primarily used in tasks requiring contact, such as needle insertion into tissue or surface exploration.
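For readers unfamiliar with the impedance-controlled formulation, the following C++ fragment sketches the generic spring-damper force law that underlies such schemes: the reflected force is computed from the position and velocity errors between the desired and actual states. This is a textbook illustration, not the specific variable impedance controller of [18]; the Vec3 type and the gains K and B are assumptions made for the example. In a variable impedance scheme, K and B would be adapted online rather than held constant.

struct Vec3 { double x, y, z; };

// Generic impedance law: F = K*(x_d - x) + B*(v_d - v).
// K is a stiffness gain [N/m], B a damping gain [N*s/m]; fixed here,
// adapted online in a variable impedance controller.
Vec3 impedanceForce(const Vec3& x_d, const Vec3& x,
                    const Vec3& v_d, const Vec3& v,
                    double K, double B) {
    return { K * (x_d.x - x.x) + B * (v_d.x - v.x),
             K * (x_d.y - x.y) + B * (v_d.y - v.y),
             K * (x_d.z - x.z) + B * (v_d.z - v.z) };
}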

Teleoperation system design usually takes operation accuracy into account, but not the convenience and simplification of the operation. Despite improvements in controller architecture and attempts at assistance [22], the task performance of telerobotic systems in rehabilitation engineering is still not satisfactory [23, 24, 25]. As explained in [26], completing such a task takes the operator an average of 50 seconds, mostly due to indexing the master once it reaches its workspace limit and tuning the gripper to grasp the target. Furthermore, the performance largely depends on the operator's familiarity with the system. In most cases, using a robot as a teleoperated device to complete a task is much harder than using the human arm and hand. It can soon become very exhausting, especially if it has to perform repeated tasks, such as feeding, even with some assistance. Many researchers have tried to improve operation accuracy, reduce execution time, and relieve the operator's mental labor by adding artificial intelligence (AI). Kawamura et al. [27] looked at how far rehabilitation robots had come in possessing abilities that relieve the user from the mental burden of controlling the robot. This AI-based system contains modules for a voice-activated user interface which is capable of interpreting fuzzy commands such as "move closer", "go slower", or "move a little bit faster". These "fuzzy terms" can be recorded through a macro action builder (similar to a script), which enables the user to specify a set of commands to perform a task. The macros can be replayed later as a high-level action commanded by the user. As described in [27], the system has the capability to plan the actions to take in order to achieve a goal by learning the preconditions and effects of those actions obtained through the macro builder interface. The utilization of sensors in intelligent telerobotic systems, such as vision-based assistance, has improved the operation of aligning the end-effector with the target [28, 29], where the visual information is used as part of the user interface in the form of visual cues for guiding the tool in order to reach a goal. This dissertation extends the utilization of sensors to the calculation of the assist functions that guide the user while following a trajectory, as well as to align the tool (a Barrett hand) with the target.

2.3 Teleoperation Assistance

In a telerobotic system, a human operator controls the movements by sending commands or signals to the robot. In the last decade, developments in computer and communication technology have enabled the integration of teleoperation robotics (telerobotics), sensory information, and haptic interfaces in such areas as rehabilitation, training, surgery, research, device testing, and assistive technology development. These developments have allowed further development of the assistance algorithms to map the master commands to the slave in a way that scales up or down depending on the task and environment information (the scaling factors vary accordingly). The assistance function concept consists of the generalization of position and velocity mappings between the master and slave manipulators of a teleoperation system. It can be classified as regulation of position, velocity, and contact forces. All of these assistance strategies are accomplished by modifying the control law parameters of simple mathematical models of spring-type and damping-type forces. A simple form of position assistance is scaling, in which the slave workspace is enlarged or reduced as compared to the master workspace. Velocity assistance is commonly used in target approach and obstacle avoidance. In both cases, the velocity scaling varies according to whether motion in a particular direction serves to further accomplish the desired effect of the motion.

2.3.1 Position-Based Assistance Functions

In these functions, the motion of the manipulator is constrained to lie along a given line or in a 2D plane. Figures 2.1(a) and 2.1(b) illustrate the linear and planar constraint definitions, respectively. A detailed explanation of the position-based assist functions can be found in [30]. In these particular functions, the force feedback is transferred to the user through the haptic device itself. This way, the haptic device is used as the actuation device to generate the force reflection, as well as a positional sensor to measure the relative position between a trajectory point and the "tip" of the haptic device. This information is then compared with the external sensory information to correct for possible deviations from the intended trajectory.

Figure 2.1 (a) End-Effector Constrained to Motion on a Linear Path; (b) End-Effector Constrained to Motion on a Plane
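To illustrate the linear constraint of Figure 2.1(a), the following self-contained C++ sketch projects a commanded end-effector position onto the constraint line through a point p0 with unit direction u, discarding the motion components orthogonal to the line. This is an illustrative reconstruction under assumed types, not the implementation documented in [30].

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Project a commanded position onto the constraint line x = p0 + t*u,
// where u is a unit vector along the allowed direction of motion.
Vec3 constrainToLine(const Vec3& cmd, const Vec3& p0, const Vec3& u) {
    // t = (cmd - p0) . u : signed distance of the command along the line.
    double t = dot({cmd.x - p0.x, cmd.y - p0.y, cmd.z - p0.z}, u);
    // Closest point on the line; orthogonal motion is simply removed.
    return { p0.x + t * u.x, p0.y + t * u.y, p0.z + t * u.z };
}

The planar constraint of Figure 2.1(b) follows the same idea: subtract from the commanded displacement its component along the plane normal instead of keeping only the component along a line.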

2.3.2 Velocity Scaling Assistance Functions

In these functions, the level of assistance is based on velocity scaling according to whether the motion improves in the intended direction. In the approaching assistance mode, the velocity is scaled up (in free space) if the motion reduces the distance between the current and goal positions of the robot arm; otherwise, the velocity is scaled down. Figure 2.2 shows scaling factors used for velocity scaling from previous work done in the Rehabilitation Robotics Lab [30].

Figure 2.2 Scaling Factor Functions [26]

From this figure it can be observed that the change of the scaling factor depends on the proximity to the goal and the direction of motion. This same approach was used by Everett, who designed a vision-based mapping to align the end-effector of the slave manipulator with a cross-shaped object [28, 31]. This is similar to what occurs using laser range finder readings and a vision system. Figure 2.3 shows how a velocity scaling factor varies based on the distance reading when the end-effector is approaching a wall. Using a vision system, the velocities that reduce the alignment error are scaled up, and the ones that increase the alignment error are scaled down (Figure 2.4).

Figure 2.3 Scaling Factor Based on Laser Range Finder Reading [31]

Figure 2.4 Cross-Alignment Task, Adapted from [31]
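The following C++ fragment sketches one plausible form of such a scaling law: the commanded velocity component toward the goal is amplified, the component away from the goal is attenuated, and the gain is ramped down as the reported distance to the target shrinks. The piecewise-linear ramp and the specific gain values are assumptions for illustration; they are not the exact curves published in [26] or [31].

#include <algorithm>

// Distance-dependent scale factor: full assistance far from the target,
// linearly reduced inside d_near so the approach slows down smoothly.
double distanceScale(double distance, double d_near = 0.10 /* m */,
                     double k_max = 2.0) {
    double ramp = std::clamp(distance / d_near, 0.0, 1.0);
    return 1.0 + (k_max - 1.0) * ramp;  // 1.0 at contact, k_max when far
}

// Scale a 1-D velocity command: amplify motion toward the goal,
// attenuate motion away from it.
double assistVelocity(double v_toward_goal, double distance) {
    if (v_toward_goal > 0.0)                 // moving toward the goal
        return v_toward_goal * distanceScale(distance);
    return 0.5 * v_toward_goal;              // damp motion away from it
}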

2.3.3 Virtual Fixture Assistance Functions

Another form of assistance used in teleoperation is the virtual fixture, a constraint defined in terms of spatial parameters. A canonical definition of virtual fixtures can be found in [32]: abstract sensory information overlaid on top of the reflected sensory feedback from a remote environment ... kinesthetic activities (efference) and the subsequent changes in the sensations presented. As an example, a virtual 3D wall can be defined during linear trajectory following by creating a stop constraint to prevent a collision with a desktop. In teleoperation, a virtual fixture can be defined as a computer-generated spatial constraint that imposes position or force limitations on the robot arm or the operator's movements. In practice, virtual fixtures are used to constrain a haptically controlled task [19, 33, 34, 35]. Usually, the stiffness coefficient along the desired path and the stiffness orthogonal to the path are different. The stiffness ratio indicates the softness or hardness of the fixture. If the stiffness ratio is close to zero, it is the hardest fixture, which means that the end-effector can only move along the path without deviation. If the ratio is close to 1, it is the softest fixture, where the end-effector can move freely; this setting is usually used for trajectory following. Virtual fixtures can also take the form of potential force fields [32, 36]. Potential fields are used to produce velocity commands which, when added to those generated by the input device, maneuver the manipulator toward the target or away from obstacles [36]. Figure 2.5 shows that extract and insert fixtures restrict the motion of the end-effector when it is close to the tool-grasping position. This behavior is implemented in order to avoid a collision of the manipulator with the tool, while allowing the operator to quickly reach the grasping position [36].

Figure 2.5 Force Clues Generated by Position and Approach Fixtures (Left); Fixtures Restricting Degrees of Freedom (Right) [36]

The guiding force in this field is calculated using a potential function. This force, which can be attractive or repulsive, acts between the computer-controlled path following and the deviation from this path caused by the user input. To further explain this, the Lennard-Jones potential function is used here as an example. The Lennard-Jones potential function is used in physics to simulate the attraction or repulsion of atoms in solid mechanics. The acting regions of the force field are shown in Figure 2.6. The Lennard-Jones equation represents the interatomic potential energy, U, and is given by:

U(r) = -A/r^n + B/r^m    (2.1)

In Eq. (2.1), r is the distance between atoms, and n, m, A, and B are constants. The first term in Eq. (2.1) represents the attraction force component, while the second term represents the repulsive force component. In order to compute the interatomic force between two atoms, the derivative of the potential energy is required, as follows:

F(r) = -dU/dr = -nA/r^(n+1) + mB/r^(m+1)    (2.2)

As can be observed from Eq. (2.2), the Lennard-Jones potential function can be used to avoid obstacles if the A parameter is made equal to zero (i.e., zeroing the attraction component), keeping the repulsion component only. On the other hand, if the B parameter is made equal to zero, only the attraction component remains. In practice, boundaries defined around the desired path are created to act like virtual walls for guidance, as explained above.

Figure 2.6 Lennard-Jones Potential Functions
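The following C++ fragment simply evaluates Eq. (2.2) and its repulsion-only special case (A = 0) as a distance-to-force mapping of the kind a virtual wall could use. The exponent and constant values are arbitrary illustrative choices, not tuned parameters from this dissertation.

#include <cmath>

// Force derived from the Lennard-Jones potential, Eq. (2.2):
// F(r) = -n*A/r^(n+1) + m*B/r^(m+1).
// Negative values attract, positive values repel.
double lennardJonesForce(double r, double A, double B,
                         double n = 6.0, double m = 12.0) {
    return -n * A / std::pow(r, n + 1.0) + m * B / std::pow(r, m + 1.0);
}

// Repulsion-only variant (A = 0), usable as a "virtual wall" that pushes
// the end-effector away as it approaches a boundary at distance r.
double virtualWallForce(double r, double B = 1e-6, double m = 12.0) {
    return lennardJonesForce(r, 0.0, B, 6.0, m);
}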

2.4 Teleoperation in Real Time

There are several PC-based robotic control systems. Among these are the QMotor 3.0 and QMotor RTK software packages developed by Costescu et al. [37]. These packages use object-oriented (OO) methods, such as inheritance and polymorphism, and a client/server approach for asynchronous communication between different classes of services at the hardware and software control levels. The Operational Software Components for Advanced Robotics (OSCAR) framework is another program that uses an OO framework for the development of control programs for robotic manipulators [38]. This particular software was developed as a set of GNU C++ classes for the Sun Solaris OS for graphical simulation and for the VxWorks real-time OS for graphical and physical robot controllers. These two frameworks are useful for the control of robotic manipulators as traditionally performed, either through a GUI or manual input from the user using a keyboard. The QMotor RTK, for example, works exclusively at the joint level of the robotic arm and does not support a haptic application interface or sensor-based control. The Open Robot Control Software (OROCOS) project is an open-source framework which runs on a Linux OS named Linux RTAI (Real-Time Application Interface for Linux). This platform is a multi-purpose and modular framework for robot and machine control [39]. Being designed to work under the Linux OS, the framework is not fully POSIX compliant, limiting software portability and interoperability. At the time of this writing, the OROCOS platform does not support haptically controlled teleoperation. A more recent system, Microsoft Robotics Studio (MSRS) [40, 41] by Microsoft, is based on a services-oriented runtime architecture designed to run on Microsoft operating systems. MSRS allows asynchronous applications to communicate through Web-based or Windows-based interfaces developed in C#. A limitation of the services-based approach is that it does not allow for robotic framework integration and human-machine interaction (HMI) through the sense of touch (haptic response) in hard real time. In addition, the integration of sensor-based feedback, when it is embedded in the control software, would be difficult to achieve even in soft real time.

A different platform using haptic control is described by Turro et al. [42]. Their system was implemented as a client-server system for haptically augmented teleoperation using a master/slave scheme. The haptic feedback was achieved by using a slave controller consisting of a multithreaded process on one CPU. Some existing PC-based haptic systems are used for rehabilitation, but they do not integrate sensors, and the assistance provided to the user is pre-recorded and, therefore, not calculated in real time. In [43], Hogan et al. described the MIT-Manus, a robot-assisted therapy implementation aimed at the recovery of arm movement after stroke. The system uses a performance-based impedance control algorithm for controlling the execution of tasks in a 2-D plane. The patient receives assistance triggered by speed, time, or EMG thresholds. Charles et al. [44] developed the Robot-Assisted Microsurgery (RAMS) telerobotic workstation in collaboration with JPL/NASA to augment micro-surgical dexterity. The system includes a 6-DoF robotic manipulator (slave) that holds surgical instruments. Motions of the instruments are commanded by moving the handle of a master device along the desired trajectories. The system was designed to assist skilled and able-bodied surgeons and is not suitable for assisting people with disabilities in executing activities of daily living (ADL). A bilateral teleoperation approach was implemented by Everett et al. [45], where a slave manipulator (a 7-DoF K-2107 Robotics Research Corporation (RRC) robot manipulator) is controlled by tracking the motion of a master manipulator (a Phantom device). When the master touches an object, the slave reflects the forces back to the master device held by the operator [46]. It was developed using an SGI workstation and the ControlShell graphical programming module running in the VxWorks OS. A Hidden Markov Model (HMM)-based skill learning approach was developed by W. Yu et al. [47] to provide motion therapy using a haptic interface. This system can be used as physical therapy for upper limb coordination, tremor reduction, and motion control capabilities for persons with disabilities of the upper limb in a virtual environment. It was tested in simulation using a virtual reality representation of the RRC robotic arm. Chan et al. [17] describe a telerobotic system which includes variable stiffness and damping control schemes to control the master and the redundant slave dynamics to suit a given task. The functionality of the control scheme depends on sensed and commanded values of force and velocity, with no previous knowledge of the environment required. This prior research was not PC-based and not versatile enough for a wide range of applications. In 1999, researchers at the Budapest University of Technology and Economics in Hungary started the REHAROB project, using standard, full-scale industrial robots for human therapy. This project is credited as the first in the world to target the use of a standard, commercially available industrial robot (an ABB manipulator) for the physiotherapy of spastic hemiparetic stroke patients [48].

In contrast to these systems, the design described in this dissertation allowed us to create a simplified PC-based framework, which can be implemented widely. A key problem addressed is the integration of human-machine interactions combining the sense of touch and visual feedback as integral components of the robotic controller, incorporating the advantages of a real-time architecture in a PC-based framework.

PAGE 37

22 platform provides for the benefits of a res earch laboratory setup to the user's desktop without demanding high end computer resources. The autonomous and teleoperation control with capabilities for scaling and virtual constraint definitions are implemented with the intention of s motion by removing the restriction of the user of always being in the control loop, but keeping the high level decision making capabilities. This would result in fatigue reduction for task execution over long periods of time. The combined work of Chan et al [17] and Everett et al [ 28 ] provided an approach for using uncertain sensor data based on the confidence of the measurements defined in terms of the mean and the standard deviation. The application of the assistance strategy concentrated on tasks r elated to radioactive waste tank cleanup. The nature of the associated tasks did not allow for autonomous command execution. In their work, t he variable damping algorithm was implemented on a 7 DOF K 2107 Robotics Research Corporation, RRC, robot arm wit h position input from a 6 DOF Kraft master hand controller. The RS232 communication protocol was used to transfer the master controller signals to a SGI host workstation A conversion from RS422 to RS232 was required tocol is RS422. The system control software was implemented on a Silicon Graphics GTX 340 Workstation with 2 CPUs. One CPU is used for the master controller (6 DoF Kraft hand) and for the graphical user interface. The second CPU was used for the slave c ontroller (RRC K 2107) and a low level SGI host computer was connected to the RRC servo controller through a Bit3 VME Multi bus adapter.


In the present research work, the implementation of autonomous control and teleoperation control aims to facilitate the use of the assistive platform by any user making high-level decisions, such as target object selection. The system is capable of generating trajectories and virtual constraints to be used for autonomous motion or scaled teleoperation. This development involves the fusion of the optical sensor data sets and the handling of transition states between the supervisory control system (human in the loop) and the autonomous, sensory-driven control, and vice versa, in real time. A summary of the requirements is listed below:
1. The platform for development is a PC-based software controller which responds in real time in robotic and haptic modes. The implementation runs under the QNX Real-time Operating System (RTOS). QNX is a fully POSIX-compliant OS. This is a key feature because, by following the POSIX (Portable Operating System Interface) standard, the application is portable to any conformant POSIX OS. The following POSIX services were used in the current development:
   i. Priority scheduling
   ii. Real-time signals
   iii. Real-time timers
   iv. Message passing
   v. Thread creation and control
   vi. Scheduling and synchronization of multiple threads
2. The telerobotic system uses two forms of robotic control: a closed-form solution of the inverse kinematics of the 6-DoF robot arm and a resolved rate based algorithm. Both control strategies include gravity compensation.


3. The integration of the sensory data from the camera and laser is handled through an optimization solution that minimizes the error using the Levenberg-Marquardt method. The error function is defined by the distance between a given point in the world coordinate system and the same point given by the inverse perspective projection.
4. Sensor-based assist functions (SAF's) guide the robot arm with position input from a Phantom Omni haptic device (which provides 3 force-feedback DoF). The SAF helps the user follow a trajectory path described in terms of the sensory input, using motion scaling and virtual fixtures.
5. A low-level network protocol based on UDP (User Datagram Protocol) packets provides the necessary flexibility, reduced latency, and resources for integrating data from diverse sensors. A single packet contains the vision information as well as the laser range finder information.
6. Rather than using conversion methods between different communication protocols, the UDP communication protocol is also used to transfer the master controller signals to the PC-based host computer. Support for TCP/IP streams is also provided.
7. The communication platform implements features to ensure the order of arrival of the data and mechanisms to handle data losses, if necessary.
8. The design takes into account that sensory datasets will be sent to multiple machines at once (for physical and virtual reality simulations) by using the multicast and broadcast transmission properties of the UDP protocol.


Chapter 3
Hard Real-Time Robotic Controller

3.1 Introduction
In the particular domain of telerobotics, the human is always in the control loop (supervisory control) while the robot arm is used to manipulate objects in a virtual or real environment. However, the users of telerobotic systems tend to fatigue over time, and their performance is greatly reduced [49]. In these situations, it is useful to provide assistance to reduce fatigue when the system is used over long periods of time. In this dissertation, assistance is provided to the users through the definition of sensor-based assisting or resisting forces as the users deviate from a trajectory, as well as through motion-based scaling and virtual fixture teleoperation. The calculated forces are delivered to the users through the haptic device (Phantom Omni), which provides the sensation of touch to the user's hands. The integration of haptic feedback and the generation of the assisting or resisting forces based on sensory information is a challenge due to the uncertainty in the sensory information datasets and the deterministic timing and high-frequency update rates required for a realistic sensation of touch. In addition, the visual information extraction and data fusion require computationally intensive pre-processing to obtain the digital features from the images. This type of scenario imposes additional constraints in terms of the timing response of the system. This chapter discusses the approach followed in this dissertation to deal with the timing constraints and high update rates imposed by the application, using synchronization mechanisms for inter-process communication to achieve real-time performance.


3.2 The Need for Real-Time Haptically Controlled Robotics
Real-time (RT) systems are defined as those systems in which the correctness of the system depends not only on the logical result of computations, but also on the time at which the results are produced [7]. Following this canonical definition, a real-time operating system (RTOS) is a specially designed operating system that supports real-time applications. A distinctive characteristic of an RT application is that it must satisfy real-world timing boundaries without delays. In general, the main characteristics of an RTOS are:
1. Respond predictably to unpredictable outside events
2. Meet timing deadlines
3. Ability to process multiple threads concurrently
In actual applications, RTOS specifications do not necessarily mean the response must be "fast". However, the timing required to complete the specified tasks must be consistently accurate and predictable. If a computer process is designed and expected to update its data structure at a specified frequency of 1000 Hz, for example, the RTOS must not delay this process by allowing a low-priority process to run first. In the literature, this property of an RTOS is called determinism. When an RT application is running multiple threads or tasks concurrently, a running thread will be in control of certain resources of the CPU. The running thread must yield to another thread with higher priority, allowing the higher-priority thread to run.
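As a concrete illustration of determinism at a fixed rate, the sketch below shows the standard POSIX absolute-deadline idiom for a 1000 Hz loop. It is a minimal sketch, not code from this dissertation; the loop body is a placeholder.

    #include <time.h>

    int main() {
        const long kPeriodNs = 1000000L;           // 1 ms period -> 1000 Hz
        timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);     // first deadline is "now"
        for (;;) {
            // ... read encoders, compute the control action, command torques ...
            next.tv_nsec += kPeriodNs;             // advance the absolute deadline
            if (next.tv_nsec >= 1000000000L) {     // normalize the timespec
                next.tv_nsec -= 1000000000L;
                ++next.tv_sec;
            }
            // Sleeping to an absolute deadline keeps the period from drifting
            // by the (variable) execution time of the loop body.
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, nullptr);
        }
    }

Under a deterministic scheduler, a loop written this way wakes at every millisecond boundary regardless of how long the body took, as long as the body fits within the period.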


The RTOS provides different mechanisms to handle this type of situation in real time. Depending on the degree of failure when the system does not meet a specified deadline, an RTOS can be classified as a "soft" or a "hard" real-time operating system. In hard real-time systems, if the timing requirements are not met or the application response is delayed for any reason (e.g., in elevator or aircraft control systems), a catastrophic failure might occur. In control systems, for example, most applications must strictly meet real-world timing requirements in order to avoid catastrophic results. On the other hand, "soft" real-time systems will accept some level of lateness (e.g., a graphical user interface response for online authentication). Failure is not classified as catastrophic or incorrect in this case, but as an inconvenient response with a possibly increased cost over time. In the telerobotic application described in this work, where sensor-based assist functions and haptic feedback are used to guide the user's motion, if the response time requirements are not met, the robot controller will not be able to provide a stable control action, or it might be impossible to reach the prescribed destination with assistance. In this case, if the response time constraint is violated, the result is an unrealistic effect or the loss of the "sense of touch" in the user's hands. As shown by Salisbury et al. [1], the haptic force feedback must be updated at a frequency of at least 1000 Hz, consistently and without delays, in order to produce a realistic sensation of touch. Even though the results in the haptic case might not be catastrophic, the system is described as a failure because the end results are not correct. Obstacle avoidance might also become an issue, with delays while negotiating obstacles resulting in a collision. The need for predictable performance is, therefore, a key factor in the ability of a real-time system to meet an application's response time requirements.


The PC-based framework provided by this work allows implementing telerobotic applications with deterministic response times. The platform developed for the real-time telerobotic, haptic feedback, and sensory data fusion systems is implemented as a multithreaded application. The robotic system runs on the QNX RTOS, which provides hard real-time timing, priority scheduling, and multithreading synchronization [50]. The haptic and sensory systems run on the Windows XP OS, which is an event-driven and not a real-time operating system. The problem of predictability is alleviated by using a modified scheduler class developed to handle the high-frequency update rates of the haptic thread under Windows. The platform's sensory subsystem consists of a graphical user interface (GUI) which allows for image acquisition and post-processing. The laser range finder datasets are also displayed. In this application, when the post-processing phase is completed, a different thread is assigned the task of acting as a broadcasting server. This way, the user interface continues to be responsive and the display is immediately updated based on the most recently available data. If the data fusion were not programmed as a multithreaded application, the sensory subsystem would stop responding properly due to the event-driven nature of the Windows OS. The haptic and the simulation threads run concurrently, but they have different update rates; without this design the user would experience a delayed response or an event mismatch between the visual and the haptic feedback. In practice, the graphical simulation and display requires about 24 to 30 Hz to create a continuous motion sensation.


3.3 Telerobotic Computational Tasks
In general, the computational tasks in telerobotic applications include the solution of the forward and inverse kinematic problems, trajectory generation, and the calculation of the associated torques for commanding the motors to reach their destinations. The forward kinematics deals with the computation of the position and orientation of the tool frame relative to the base frame [51]. On the other hand, the inverse kinematics deals with the problem of finding all possible sets of joint angles required to attain a given position and orientation of the end effector of the robot arm [51]. Trajectory generation is related to the way a robot arm is moved from one location to another in a controlled manner. Generally, a trajectory planning module is implemented to create controlled movements in joint or Cartesian space. Finally, the torque calculations require the use of the kinematics and dynamics of the robot arm to achieve the desired joint angles. However, in practice, a form of linearized controller (Proportional Integral Derivative) is used as an approximation in order to reduce the computationally intensive calculations required if the full kinematics and dynamics are used. These computational tasks lead to simultaneous motion in 3D space. In telerobotics, this is achieved by controlling the position and orientation of the tool frame as necessary to follow a desired trajectory or to reach a specified point in space [51]. When the motion of the end effector of the robot arm is controlled by a haptic interface (a Phantom Omni, for example), the position and orientation of the end effector of the haptic device are mapped to those of the robot arm. The global position of the end effector can be determined from the encoder feedback information located at each joint of the robot arm.


In the case of joint space control, the direct measurements from the haptic device encoders can be used to determine its joint angles, which are then mapped to the corresponding joint angles of the manipulator. Given the numerical values of the haptic device's joint angles, this mapping is straightforward. However, a more convenient way to map the different kinematics of the haptic device and the robot arm is to use a Cartesian space solution, especially when the 3D motion of the robot arm is intended to be used for the execution of structured tasks.

3.4 Overview of the Robot Arm Controller and Forward Kinematics Equations
For modeling and controlling the robot arm, the kinematic equations of the links of the manipulator are necessary. These equations are obtained by systematically assigning coordinate frames to each link following the Denavit-Hartenberg (DH) convention [51]. The procedure described in [51] starts by assigning reference coordinate frames to each link, starting at the base, which is considered a fixed link, and ending with a frame attached to the robot end effector; for the Puma 560, n = 6 DoF. The following set of rules (0-13) and definitions is used to assign coordinate frames to the links and thereby determine the DH parameters:
0. Number the joints from 1 to n, starting with the base and ending with the tool yaw, pitch, and roll, in that order.
1. Assign a right-handed orthonormal coordinate frame $\{0\}$ to the robot base, making sure that $z_0$ aligns with the rotational axis of joint 1. Set $i = 1$.


2. Align $z_i$ with the rotational axis of joint $i+1$.
3. Locate the origin of $\{i\}$ at the intersection of the $z_i$ and $z_{i-1}$ axes. If they do not intersect, use the intersection of $z_i$ with a common normal between $z_i$ and $z_{i-1}$.
4. Select $x_i$ to be orthogonal to both $z_i$ and $z_{i-1}$. If $z_i$ and $z_{i-1}$ are parallel, point $x_i$ away from $z_{i-1}$.
5. Select $y_i$ to form a right-handed orthonormal coordinate frame $\{i\}$.
6. Set $i = i + 1$. If $i < n$, go to step 2; else continue.
7. Set the origin of $\{n\}$ at the tool tip. Align $z_n$ with the approach vector, $y_n$ with the sliding vector, and $x_n$ with the normal vector of the tool. Set $i = 1$.
8. Locate point $b_i$ at the intersection of the $x_i$ and $z_{i-1}$ axes. If they do not intersect, use the intersection of $x_i$ with a common normal between $x_i$ and $z_{i-1}$.
9. Compute $\theta_i$ as the angle of rotation from $x_{i-1}$ to $x_i$ measured about $z_{i-1}$.
10. Compute $d_i$ as the distance from the origin of frame $\{i-1\}$ to point $b_i$ measured along $z_{i-1}$.
11. Compute $a_i$ as the distance from point $b_i$ to the origin of frame $\{i\}$ measured along $x_i$.
12. Compute $\alpha_i$ as the angle of rotation from $z_{i-1}$ to $z_i$ measured about $x_i$.
13. Set $i = i + 1$. If $i \le n$, go to step 8; else stop.
Figure 3.1 shows the frame assignments and the zero-pose configuration of the Puma 560 manipulator following the previous rules and definitions.


Once the coordinate frames are assigned to every link in the chain, the transformations between adjacent coordinate frames can be represented by the standard (4 x 4) homogeneous coordinate transformation matrix T. The transformation matrix T is therefore a mathematical description of the robot manipulator in terms of the DH parameters. Generally, the DH parameters are presented as a table containing one row of four parameters for each joint-link set with an attached coordinate frame. The DH parameters allow one reference frame to be located exactly with respect to the preceding link frame. The geometrical variables described by the modified DH parameter convention are presented in Table 3.1.

Figure 3.1 Coordinate Frame Assignments to Links of Puma 560 [51] (figure omitted; it shows the axes $x_i$, $y_i$, $z_i$ of frames 1 through 6 in the zero-pose configuration)


Table 3.1 DH Parameters of the Puma 560 Robot Arm [51]

Joint i | $\alpha_{i-1}$ (rad) | $a_{i-1}$ (m) | $d_i$ (m) | $\theta_i$ (rad)
1 | 0 | 0.0 | 0.0 | 0.0
2 | $-\pi/2$ | 0.0 | 0.2435 | 0.0
3 | 0 | 0.4318 | 0.0934 | 0.0
4 | $-\pi/2$ | 0.0203 | 0.4331 | 0.0
5 | $\pi/2$ | 0.0 | 0.0 | 0.0
6 | $-\pi/2$ | 0.0 | 0.0 | 0.0

Figure 3.2 illustrates two adjacent link coordinate frames, $\{i-1\}$ and $\{i\}$, on a robot manipulator. The frame $\{i\}$ is uniquely determined from frame $\{i-1\}$ by the definition of the DH parameters $\alpha_{i-1}$, $a_{i-1}$, $d_i$, and $\theta_i$. The transformation matrix describing the position and orientation of frame $\{i\}$ with respect to frame $\{i-1\}$ is determined (starting from frame $\{i-1\}$) as follows:
1. Translate a distance $a_{i-1}$ from the origin of frame $\{i-1\}$ in the direction of the $x_{i-1}$ axis.
2. Determine the direction of $z_i$ by rotating the vector $z_{i-1}$ by an angle $\alpha_{i-1}$ around $x_{i-1}$.
3. Translate a distance $d_i$ along the vector $z_i$. The position reached defines the origin of coordinate frame $\{i\}$, and the vector $z_i$ is also determined.
4. Rotate the vector $x_{i-1}$ about $z_i$ by an angle $\theta_i$ to determine the axis vector $x_i$.


Figure 3.2 DH-Based Intermediate Transformations [51] (figure omitted)

Symbolically, these four steps can be expressed as [51]:

$^{i-1}_{i}T = R_{x}(\alpha_{i-1})\,D_{x}(a_{i-1})\,R_{z}(\theta_{i})\,D_{z}(d_{i})$ (3.1)

In this equation, the rotation matrix $R_x(\alpha_{i-1})$ defines a rotation about the $x_{i-1}$ axis through an angle $\alpha_{i-1}$, and it is obtained as:

$R_x(\alpha_{i-1}) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\alpha_{i-1} & -\sin\alpha_{i-1} & 0 \\ 0 & \sin\alpha_{i-1} & \cos\alpha_{i-1} & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$ (3.2)

The translation transformation matrix along the $x_{i-1}$ axis for a distance $a_{i-1}$ is:

$D_x(a_{i-1}) = \begin{bmatrix} 1 & 0 & 0 & a_{i-1} \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$ (3.3)


The rotation matrix $R_z(\theta_i)$ defines a rotation around $z_i$ by an angle $\theta_i$ and is given by:

$R_z(\theta_i) = \begin{bmatrix} \cos\theta_i & -\sin\theta_i & 0 & 0 \\ \sin\theta_i & \cos\theta_i & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$ (3.4)

The translation transformation matrix along the $z_i$ axis for a distance $d_i$ is:

$D_z(d_i) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix}$ (3.5)

Substituting Equations (3.2) through (3.5) into Eq. (3.1) and performing the symbolic multiplications yields the homogeneous transformation matrix based on the modified DH parameters:

$^{i-1}_{i}T = \begin{bmatrix} \cos\theta_i & -\sin\theta_i & 0 & a_{i-1} \\ \sin\theta_i\cos\alpha_{i-1} & \cos\theta_i\cos\alpha_{i-1} & -\sin\alpha_{i-1} & -d_i\sin\alpha_{i-1} \\ \sin\theta_i\sin\alpha_{i-1} & \cos\theta_i\sin\alpha_{i-1} & \cos\alpha_{i-1} & d_i\cos\alpha_{i-1} \\ 0 & 0 & 0 & 1 \end{bmatrix}$ (3.6)

Table 3.1 shows the DH parameters at the home position. The objective now is to obtain the corresponding transformation matrices that relate the spatial position and orientation of the links connecting all the joints of the Puma 560 manipulator (see Appendix A). The transformation of the end effector of the robot arm is found as:

$^{0}_{6}T = {}^{0}_{1}T\,{}^{1}_{2}T\,{}^{2}_{3}T\,{}^{3}_{4}T\,{}^{4}_{5}T\,{}^{5}_{6}T$ (3.7a)

The final transformation obtained after the symbolic evaluation of Eq. (3.7a) can be written as:


$^{0}_{6}T = \begin{bmatrix} r_{11} & r_{12} & r_{13} & p_x \\ r_{21} & r_{22} & r_{23} & p_y \\ r_{31} & r_{32} & r_{33} & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix}$ (3.7b)

where the elements $r_{jk}$ and $p_x$, $p_y$, $p_z$ are trigonometric functions of the joint angles $\theta_1$ through $\theta_6$ and the DH parameters of Table 3.1 (3.7c).

Eq. (3.7c) represents the forward kinematic equations of the Puma 560 manipulator. This is the set of equations used to determine the end-effector position in Cartesian space. A similar procedure is followed to assign coordinate frames to the sensors (laser and camera) as well as to the object of interest and the workstation. A detailed discussion of the techniques used is presented later.
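Eq. (3.6) maps directly into code. The sketch below is an illustrative implementation (not this dissertation's source) of the modified-DH link transform; chaining six such matrices with the Table 3.1 parameters realizes Eq. (3.7a).

    #include <cmath>

    // Homogeneous link transform of Eq. (3.6), modified DH convention:
    // T = Rx(alpha_{i-1}) * Dx(a_{i-1}) * Rz(theta_i) * Dz(d_i).
    void dhTransform(double alpha, double a, double theta, double d,
                     double T[4][4]) {
        const double ct = std::cos(theta), st = std::sin(theta);
        const double ca = std::cos(alpha), sa = std::sin(alpha);
        T[0][0] = ct;      T[0][1] = -st;     T[0][2] = 0.0; T[0][3] = a;
        T[1][0] = st * ca; T[1][1] = ct * ca; T[1][2] = -sa; T[1][3] = -d * sa;
        T[2][0] = st * sa; T[2][1] = ct * sa; T[2][2] = ca;  T[2][3] = d * ca;
        T[3][0] = 0.0;     T[3][1] = 0.0;     T[3][2] = 0.0; T[3][3] = 1.0;
    }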


3.5 General Nonlinear Robotic Model
In most practical applications of 6-DoF robot arms, the joint velocities required to achieve a predefined configuration (position and orientation) of the end effector at a desired speed are obtained by linearization of the governing dynamic equation [52]; solving the explicit dynamic model of the manipulator for control purposes is avoided. However, as shown by Armstrong et al. [52], an abbreviated explicit model of the Puma 560 is less computationally expensive, which allows for a simplified realization. The equation of motion for the robot arm can be written in terms of the 6-dimensional vector of joint positions $q$ as follows:

$\Gamma = M(q)\,\ddot{q} + B\,\dot{q} + C(q,\dot{q}) + G(q)$ (3.8)

where
$\Gamma$: vector of generalized input forces,
$M(q)$: inertia matrix,
$B$: viscous friction diagonal matrix,
$C(q,\dot{q})$: vector of Coriolis and centrifugal terms,
$G(q)$: vector of gravitational terms.
For tracking desired trajectories in joint space, where the joint position is specified, the required generalized input torques to control the robot arm are calculated so that all joints are able to reach the prescribed position and orientation at the desired velocities and accelerations (if specified).


Several solution schemes have been suggested to reduce the complexity of the solution to Eq. (3.8). The most commonly used technique for the linearization of (3.8) was devised by Whitney [53, 54]. This technique resolves the desired end-effector motion into the necessary joint motions, reducing the complexity of the solution. The method is known as the Resolved Rate Method, and it provides a numerical solution in the end-effector space. In Whitney's approach, only the differential kinematics (the Jacobian) of the manipulator are required to solve the inverse kinematics problem. The position and the linear velocity (or force) components of the robot end effector are specified. The linear velocity components of the end effector must be transformed into joint velocities, and then into joint positions by simple numerical integration. Figure 3.3 shows a simplified diagram of the algorithm, where the input to the block diagram corresponds to the linear velocity components of the robot end effector [51].

Figure 3.3 Simplified Resolved Rate Algorithm Block Diagram (figure omitted; an inverse Jacobian block maps Cartesian velocities into joint rates for the Puma 560)

As shown in Figure 3.3, only the position vector is known at this point. The 6 DoF of the Puma are controlled by six (6) brushed DC servo motors, each coupled with an encoder and a potentiometer. The current angular position of each joint can be obtained from the feedback signals of the encoder and potentiometer located at every joint.


The required actuator torques are computed in a linearized feedback form of Eq. (3.8) based on the desired positions and the desired joint rates; i.e., the joint accelerations are not considered ($\ddot{q} = 0$). The computed components of Eq. (3.8) are defined as follows [55, 56]:
$\hat{\Gamma}$: computed vector of generalized input forces,
$\hat{M}(q)$: computed inertia matrix,
$\hat{B}$: computed viscous friction diagonal matrix,
$\hat{C}(q,\dot{q})$: computed vector of Coriolis and centrifugal terms,
$\hat{G}(q)$: computed vector of gravitational terms.
Considering the computed values, the desired driving torque is computed as:

$\hat{\Gamma} = \hat{M}(q)\left[K_p\,(q_d - q) + K_v\,(\dot{q}_d - \dot{q})\right] + \hat{B}\,\dot{q} + \hat{C}(q,\dot{q}) + \hat{G}(q)$ (3.9)

where $K_p$ and $K_v$ are the position and velocity gains, respectively. Eq. (3.9) gives an appropriate control action if the computed terms match the actual dynamics. In a practical implementation there will be an error value defined as $e = q_d - q$. However, assuming that convergence is reached, the elements of Eq. (3.9) would be equal to the actual elements in Eq. (3.8). This assumption results in the following set of equality constraints:

$\hat{M}(q) = M(q)$ (3.10)


$\hat{B} = B$ (3.11)

$\hat{C}(q,\dot{q}) = C(q,\dot{q})$ (3.12)

$\hat{G}(q) = G(q)$ (3.13)

If the constraints expressed by Eqs. (3.10) to (3.13) are satisfied, then Eq. (3.9) yields:

$\Gamma = M(q)\left[K_p\,e + K_v\,\dot{e}\right] + B\,\dot{q} + C(q,\dot{q}) + G(q)$ (3.14)

Equating (3.8) and (3.14), with $\ddot{q} = 0$, yields the closed-loop system dynamics equation:

$K_p\,e + K_v\,\dot{e} = 0$ (3.15)

As can be observed in Eq. (3.15), this simplification does not include the joint accelerations, so it represents a set of independent first-order differential equations for each joint of the manipulator. The response characteristics of this system of differential equations can be adjusted by the proper selection of the gains $K_p$ and $K_v$. Eq. (3.15) can now be expressed as a function of the error and the error rate as:

$\dot{e} = -K_v^{-1}\,K_p\,e$ (3.16)

Eq. (3.16) represents a linearized feedback form, and it is valid as long as the joint positions converge to the desired joint positions. In this research work, the computed model terms are reduced to the gravitational term, and the closed-loop system with a Proportional Derivative (PD) feedback control law becomes:


$\Gamma = K_p\,e + K_v\,\dot{e} + \hat{G}(q)$ (3.17)

The PD controller with gravity compensation produces a globally asymptotically stable closed-loop system through appropriate selection of the proportional and derivative gains [57], as long as the configuration of the robot arm is not singular. The calculation of the gravitational compensation terms requires the inertia values as well as the locations of the centers of gravity of every link of the manipulator. Those parameters were experimentally determined by Armstrong et al. [52] for the Puma 560 and are presented in Table 3.2. The calculation of the torques required to compensate for the gravitational action is a function of the joint space configuration (pose) of the manipulator and the gravitational constant, g. The kinetic and potential energies of each link can be expressed in terms of the joint variables and the link mass located at the respective center of gravity of the link. The gravitational components then appear naturally in the final manipulator dynamics equation in the standard form given by Eq. (3.14). A detailed explanation of the procedure can be found in [52].
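The per-cycle torque update of Eq. (3.17) reduces to a few operations per joint. The sketch below is illustrative only: the function names are placeholders, and gravityTerms() is a stub standing in for the Eq. (3.19) computation built from the Table 3.2 parameters.

    #include <array>

    constexpr int kDof = 6;
    using Vec6 = std::array<double, kDof>;

    // Stand-in for Eq. (3.19): G(q) from link masses and COG locations.
    Vec6 gravityTerms(const Vec6& q) {
        Vec6 g{};   // zeroed stub; the real controller evaluates Eq. (3.19)
        (void)q;
        return g;
    }

    // Eq. (3.17): tau_i = Kp_i * e_i + Kv_i * edot_i + G_i(q)
    Vec6 pdGravityTorque(const Vec6& qd, const Vec6& q,
                         const Vec6& qdotd, const Vec6& qdot,
                         const Vec6& Kp, const Vec6& Kv) {
        const Vec6 g = gravityTerms(q);
        Vec6 tau{};
        for (int i = 0; i < kDof; ++i) {
            const double e    = qd[i] - q[i];        // position error
            const double edot = qdotd[i] - qdot[i];  // velocity error
            tau[i] = Kp[i] * e + Kv[i] * edot + g[i];
        }
        return tau;
    }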


Table 3.2 Link Mass and Center of Gravity Locations [52]

Link i | mass (kg) | x (mm) | y (mm) | z (mm)
1 | - | - | - | -
2 | 17.40 | 68 | 6 | 16
3 | 4.80 | 0 | 70 | 14
4 | 0.82 | 0 | 143 | 14
5 | 0.34 | 0 | 0 | 0
6 | 0.09 | 0 | 0 | 32
Detached wrist | 2.24 | 0 | 0 | 64

In this research work, gravitational compensation is applied to every joint of the manipulator. Using the DH parameters from Table 3.1 and the link masses and center of gravity locations from Table 3.2, the gravitational constant components corresponding to each joint are computed (Eq. 3.18), and from them the gravitational terms as a function of the position vector, $G(q)$, are obtained (Eq. 3.19).


Substituting all the terms in Eq. (3.19) into Eq. (3.17) gives the mathematical expression for calculating the driving torques of the manipulator in terms of the joint angle values at each time interval.

3.6 Generic Architecture for a Real-Time Robotic Controller
The components of a robotic system (robot arm controller, sensors, user interface/input, signal conditioners, and amplifiers) must perform different activities and interchange information among the different modules of the system to accomplish the desired tasks. This section describes the multithreaded, PC-based implementation of a real-time controller for a haptically interfaced 6-DoF robot arm. To accomplish this, the feedback signals from the haptic device as well as the sensory information must be transferred to the arm controller by the host computer in real time, in a deterministic fashion. The nature of this application demands a real-time response in order to be usable for enhancing the manipulation capabilities of users in cases where the haptic interface provides force feedback and is an integral part of the robot arm controller. For this to be possible, it is not acceptable to have delays in the haptic response. For example, it is not acceptable for the haptic device tip to penetrate the rigid body rendered in the graphical scene during a haptic cycle [58]. On the other hand, the integration of sensory-assisted execution of a particular task requires the sensor datasets to also be available in a deterministic fashion, even though the sensor update rates are lower than those of the robotic control signals. In the case of humans, it has been determined that the transmission of a realistic sensation of touch occurs at frequencies over 1.0 kHz [1, 3].


This corresponds to what was previously stated: the update rate of the feedback signals from the haptic device must be at least 1000 Hz (1.0 kHz) in order to render a realistic sensation of touch [2]. An additional constraint for this type of application is the definition of the limits of the achievable stiffness in the environment for stable control of the haptic interface [3]. The platform implemented must ensure that the transmitted signals and the computed output torques are not delayed by a variable amount of time depending on the CPU load. To satisfy the aforementioned requirements of any haptic control system for telerobotic applications, the following threads were defined (see the thread-creation sketch below):
1. The determination of the target position (in joint or Cartesian space) from the haptic device interface,
2. The computation of the joint angles to reach the desired position,
3. A trajectory generation thread which computes position set-point commands, and
4. The computation of the torques (a PD software controller with gravity compensation) required to drive the motors (the manipulator control program) based on the positional error signals.
The error-based control signals of the robot arm (used for joint torque actuation control) are computed at the same update rate as the haptic signals. It must be taken into account that, since multiple threads run at the same time, there is a chance of conflict when accessing shared memory or data structures, for example when one thread is writing data to memory while a second thread is reading from that same memory. In order to avoid data corruption, a synchronization method is required to ensure exclusive access to shared resources.
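The sketch below shows how one of these threads (the torque loop) can be spawned through QNX's POSIX interface with an explicit FIFO priority; the priority value, names, and loop body are illustrative placeholders.

    #include <pthread.h>
    #include <sched.h>

    void* torqueLoop(void*) {
        for (;;) {
            // compute error-based joint torques at the haptic update rate
        }
        return nullptr;
    }

    int spawnTorqueThread(pthread_t* tid) {
        pthread_attr_t attr;
        pthread_attr_init(&attr);
        // Use the attributes set below instead of inheriting the creator's.
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
        pthread_attr_setschedpolicy(&attr, SCHED_FIFO);  // run until blocked
        sched_param sp;
        sp.sched_priority = 50;   // high priority (QNX levels run 0..63)
        pthread_attr_setschedparam(&attr, &sp);
        return pthread_create(tid, &attr, torqueLoop, nullptr);
    }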


QNX RTOS was chosen for this platform because it is a fully POSIX-compliant (Portable Operating System Interface) operating system and it provides multiple synchronization primitives, such as mutexes, real-time semaphores, condition variables, joining, and barriers [50]. The POSIX standard is maintained by the IEEE and is recognized by ISO and ANSI. All of these primitives implement mutual exclusion but have varying performance benefits and usage models [59]. The synchronization mechanism implemented here is based on real-time semaphore signals and message passing [50, 59]. Figure 3.4 shows the multithreaded architecture of the telerobotic control system; only the robotic controller side of the design is illustrated in the figure.

Figure 3.4 Multithreaded Robot Arm Controller Architecture (figure omitted; it shows the main application thread together with the trajectory generation, torque generation, sensor data, and send/receive communication threads coupled through the synchronization mechanism)

The telerobotic control system implemented in this work requires the interaction of three fundamental components or subsystems: the sensory, control, and actuation subsystems. The sensory subsystem handles the measurement of physical quantities and the state of the environment. At this level, the camera and laser input, the joint encoder readings, and the haptic interface information are gathered and processed.


The control subsystem uses the sensor inputs to compute an action command to drive the actuators. The actuation subsystem (motors and transmission mechanisms) is responsible for physically changing the manipulator position and orientation. In order to control the robotic system and achieve a desired configuration, the sensing and the corresponding commanded actuation must meet strict timing constraints. In other words, the scheduled activities of the different subsystems must not be delayed beyond a relatively short deadline if stable control of the robot arm is to be maintained. Consistency and predictability are therefore fundamental requirements for the sensor-based telerobotic control system to be controllable. The generic architecture described in the present work is a multithreaded implementation, where the shared resources (critical sections or regions) are accessed by multiple threads concurrently. The QNX thread programming model allows multiple threads to access the CPU under priority-based scheduling. This means that the kernel will block and dispatch threads based on the priorities and scheduling policies defined for every thread created [50]. The priority levels are defined by QNX from 0 as the lowest priority to 63 as the highest. These priority levels are strictly enforced by the operating system: the thread with the highest priority that is ready to run will keep running until it is blocked. At each priority, the threads in QNX are scheduled according to one of the available policies (First In First Out, FIFO, and Round Robin, RR). These policies are only activated when more than one thread is ready to run at the same priority. Figure 3.5 shows a diagram of the data flow. As illustrated, threads T1, T3, and T4 are at the highest priority, which means that they will share the CPU based on the scheduling policy assigned to each particular thread [50].


The scheduler selects the next thread to run by looking at the priority assigned to each thread in the READY state. The thread with the highest priority that is at the head of its priority queue is selected to run, as shown in Figure 3.5. As stated before, the scheduling policy is applied only when threads with the same priority are ready to run and a decision is required.

Figure 3.5 Ready/Blocked States, Adapted from [50] (figure omitted; it shows threads T1 through T6 queued at priority levels between 0 and 63)

As multiple threads are running at the same time, there is a possibility of data corruption. In this research work, semaphore signals (a semaphore is a variable that indicates the status of a shared resource) and message passing [50] are used as the synchronization mechanism to prevent data corruption. The semaphore signaling mechanism used for synchronization is set up before starting any of the implemented threads shown in Figure 3.4. If any previously defined thread is currently blocked waiting for the semaphore, the next thread to be unblocked is determined in accordance with the scheduling policy defined for the blocked thread.


If the situation arises where multiple threads are blocked waiting for the semaphore, then the highest-priority thread that has been waiting the longest is unblocked; i.e., access is granted based on priority and scheduling policy. In general, when the supervisory control scheme (human in the loop) is used, the sensory information can be used for adjusting the trajectory of the end effector of the robot arm commanded through the haptic interface. In order to combine the camera, laser, encoder readings, and haptic sensory inputs to assist the user during task execution, the telerobotic system must meet tightly defined response constraints to avoid instability caused by time delays, such as oscillations, collisions, and the loss of rigid-body sensations while touching objects. The correctness of the system response depends not only on the logical result of computations, but also on the time at which the results are produced [7]. At the control level of the telerobotic system, the different computational processes required to execute a particular motion in 3D space, such as trajectory following and the required torque computations, need to interchange information. In this work, multiple threads were designed to handle the signals of the robot controller as well as the visual and haptic information. The following is a summary of the key aspects of the generic architecture for the real-time telerobotic controller proposed in this work. The real-time application design enables communication between different running threads. This allows the different subsystems to interact with each other and share the same data structure. Even though this inter-process communication is a highly desirable design feature of the telerobotic system, there might be a chance of data corruption when a running thread attempts to change data while another thread is using the same data.
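A minimal sketch of the semaphore-based mutual exclusion described here is shown below: a counting semaphore initialized to one guards a set-point structure shared by a writer (trajectory generation) and a reader (torque generation) thread. The structure layout and names are illustrative.

    #include <semaphore.h>

    struct SharedSetpoint {
        double q[6];   // desired joint angles
        sem_t  lock;   // binary semaphore guarding the structure
    };

    void initShared(SharedSetpoint& s) {
        sem_init(&s.lock, 0, 1);       // initial count 1 -> acts as a mutex
    }

    void writeSetpoint(SharedSetpoint& s, const double qNew[6]) {
        sem_wait(&s.lock);             // block until exclusive access is granted
        for (int i = 0; i < 6; ++i) s.q[i] = qNew[i];
        sem_post(&s.lock);             // release; waiters unblock by priority
    }

    void readSetpoint(SharedSetpoint& s, double qOut[6]) {
        sem_wait(&s.lock);
        for (int i = 0; i < 6; ++i) qOut[i] = s.q[i];
        sem_post(&s.lock);
    }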


For instance, the trajectory generation thread may be accessing its data structure for writing while the torque generation thread is accessing the same data structure for reading. In such a case, mutual exclusion over the data can be accomplished in an RTOS by the use of real-time semaphores (a variable that indicates the status of a shared resource) without affecting the responsiveness of the operating system [50]. Another important aspect is the preemptive scheduling of threads based on the predefined priority level of each thread. Figure 3.6 illustrates the integration of the different subsystems encompassing the system architecture. As shown, the system conforms to a modular design, which facilitates scalability and the application of the multithreaded programming paradigm to other telerobotic applications in rehabilitation, training, surgery, defense, research, device testing, and assistive technology solutions.

Figure 3.6 Block Diagram of the System Architecture (figure omitted; it links the Phantom Omni and its controller, the virtual environment, the Puma 560 controller with its amplifier and motion controllers, and the camera/laser sensors with the video stream)


3.7 Cartesian Trajectory Generation Thread
The trajectory generation thread solves the inverse kinematic equations of the robot arm in closed form for non-redundant robot arms and uses an inverse Jacobian approach for redundant robot arms, as discussed later in this section. For the case of the Puma 560, both implementations are available in the proposed system. The inverse kinematics solution gives the joint values corresponding to the positions and orientations of the end effector. For the non-redundant case, the trajectory generation thread is composed of the following steps:
1. At every time step, define the current sampling point along the trajectory.
2. Obtain the position and orientation of the end effector corresponding to the desired trajectory function (a straight line, for example), as explained below.
3. Solve the inverse kinematic problem to obtain the joint values corresponding to the position and orientation obtained in (2).
4. Compute the driving torque based on the controller scheme being used; in this particular implementation, the Proportional Derivative plus gravitational compensation controller of Eq. (3.17).
5. Send the computed torques to the robotic controller.
6. Repeat the loop until the final destination is reached.
Straight-line motion in the trajectory generation thread is accomplished by computing the total transformation required to move the robot arm from point i (the initial point) to point j (the destination). Once the total transformation is calculated, it must be divided into smaller segments to obtain the intermediate points for a smooth transition.


The total transformation, T, defined between the initial position and orientation, $T_A$, and the final position and orientation, $T_B$, is derived as follows:

$T_A\,T = T_B$ (3.20)

Pre-multiplying by the inverse of $T_A$ yields:

$T_A^{-1}\,T_A\,T = T_A^{-1}\,T_B$ (3.21)

So the required total transformation between points A and B is given as:

$T = T_A^{-1}\,T_B$ (3.22)

In order to compute the intermediate points, the total transformation can be decomposed into a translation, which moves the origin of the initial end-effector frame to the destination frame, and a rotation about a single axis, which aligns the end-effector frame with the desired goal frame. In the literature, this method is known as the single-axis rotation method [60]. In this method, the translation component can easily be divided into smaller linear segments. However, the rotational components are nonlinear, and a procedure to ensure orthogonality of the axes is required, as well as provisions to avoid representational singularities (see Appendix B).
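As a worked sketch of this interpolation (a reconstruction consistent with the description above, using the notation introduced here rather than the source's own): if the relative transformation $T = T_A^{-1} T_B$ has translation part $P$ and rotation part $R$ with equivalent angle-axis $(\hat{k}, \theta)$, the $n$ intermediate poses expressed in frame $\{A\}$ can be written as

$P_j = \frac{j}{n}\,P, \qquad R_j = R_{\hat{k}}\!\left(\frac{j}{n}\,\theta\right), \qquad j = 0, 1, \dots, n$

so that the translation advances in equal linear segments while the orientation advances by equal increments of rotation about the single axis $\hat{k}$.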


3.8 Resolved Rate Thread
This thread implements the resolved rate algorithm described in [53, 54, 56]. The joint velocities are determined from the Cartesian velocities as follows:

$\dot{q} = J^{+}\,V$ (3.23)

where
$\dot{q}$: desired vector of joint velocities,
$V$: commanded vector of Cartesian velocities (from the haptic device interface),
$J^{+}$: pseudo-inverse of the Jacobian of the robot arm, given by $J^{+} = J^{T}(J\,J^{T})^{-1}$.
However, rather than directly performing a pseudo-inverse calculation, the following relationship is defined:

$(J\,J^{T})\,y = V$ (3.24)

The vector of independent coefficients $y$ can be solved for with an LU decomposition method, avoiding the computationally expensive inversion of the matrix defined as $J\,J^{T}$. Once the vector $y$ is known, the required joint angle rates are obtained from:

$\dot{q} = J^{T}\,y$ (3.25)

The resulting $\dot{q}$ is the least-norm joint velocity vector (or joint rate) which produces the required end-effector Cartesian velocity vector [56]. The numerical techniques associated with the resolved rate algorithm are all implemented in C++ to run under QNX. Figure 3.7 illustrates the process.
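In code, Eqs. (3.24) and (3.25) amount to one LU solve and one matrix-vector product per cycle. The sketch below uses the Eigen library for the linear algebra; that choice is an assumption, since the dissertation states only that the routines are written in C++.

    #include <Eigen/Dense>

    // Resolved rate step: solve (J J^T) y = V by LU (Eq. 3.24), then
    // qdot = J^T y (Eq. 3.25). Assumes the arm is away from singularities.
    Eigen::VectorXd resolvedRate(const Eigen::MatrixXd& J,    // 6 x n Jacobian
                                 const Eigen::VectorXd& V) {  // 6 x 1 twist
        const Eigen::MatrixXd JJt = J * J.transpose();        // 6 x 6
        const Eigen::VectorXd y = JJt.partialPivLu().solve(V);
        return J.transpose() * y;   // least-norm joint rates
    }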


Figure 3.7 Cartesian to Joint Space Conversion in the Robotic Workspace (figure omitted; block diagram with the LU solve, transpose multiplication, forward kinematics, and derivative blocks around the robot)

3.9 Sensory Information Threads
Sensors give the robot the ability to interact with an unknown or unstructured environment [61]. In practice, the robot will not be able to reach every location in its environment; if the workspace is defined as a matrix of a determined size, the robot arm will reach only a set of local matrix cells around the robot. Sensors return information about their environment by physically interacting with the real world. The nature of this interaction may be passive or active. Passive sensors simply record emissions already present in the environment. Active sensors emit a signal and measure how the environment modifies the signal. In this research work, a CCD camera (a passive sensor) and a laser range finder are used for the location of objects of interest. The sensory information threads are in charge of data acquisition and post-processing of the sensory datasets. They consist of six (6) concurrent threads with different update rates of their respective data structures:


1. The image collection and processing thread: responsible for capturing the images and for image processing (binarization, edge detection, and feature extraction).
2. The laser range sensor thread: reads the analog signals coming from the laser sensor. The output from the laser range finder is a voltage value which is proportional to the range or distance measured. To give a PC access to this analog signal, it needs to be calibrated and converted to a digital signal using an analog-to-digital converter, as described in Appendix G.
3. The haptic servo loop thread: implements the haptic effects (spring force model, spring-damper model) in simulation. This thread requires an update rate over 1000 Hz for a realistic sensation of the particular effect through the actuators of the Phantom Omni. The differential transformation matrices (position and orientation) corresponding to the haptic tip are updated at this rate.
4. The collision detection thread (user and virtual object interaction).
5. The graphics thread: displays the 3D virtual reality model on the screen and communicates with the haptic servo loop to update the display accordingly.
6. The communication thread: implements a low-level User Datagram Protocol (UDP) packet protocol with provisions for data losses and for the order of arrival of the sensory datasets.
These six (6) threads run concurrently, but they update their respective data structures at different rates.


The fused sensory datasets, as well as the velocity and differential transformations of the haptic end effector, are then transferred to the manipulator controller. The QNX software design uses a scheduled thread for communication. This communication thread consists of a low-level network protocol based on UDP packets. The UDP protocol is flexible in its data structure, can be extended to prevent data losses and ensure the order of arrival of the transmitted data, and has reduced latency. These properties are desirable for the transmission of data from diverse sensors. In this particular implementation, a single packet contains the fused data from the visual and laser range finder information. The design takes into account that datasets could be sent to multiple machines at once (for physical and virtual reality simulations, for example) by using the multicasting and broadcasting properties of the UDP transmission protocol. Due to the connectionless nature of the UDP protocol and its disregard for network congestion, the derived protocol implements programmatic features to ensure the order of arrival of the data and mechanisms to handle data losses, if any.
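A minimal sketch of the broadcast side of such a protocol is shown below. The packet layout, port, and address are illustrative assumptions; only the ideas come from the text: one datagram carries the fused vision and laser data, and a sequence number lets the receiver detect loss and reordering.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdint>

    struct SensorPacket {
        uint32_t seq;          // sequence number for ordering / loss detection
        float    centroid[3];  // fused object centroid (vision + laser)
        float    range;        // raw laser range (m)
    };

    int broadcastPacket(const SensorPacket& pkt) {
        const int s = socket(AF_INET, SOCK_DGRAM, 0);
        if (s < 0) return -1;
        int on = 1;
        setsockopt(s, SOL_SOCKET, SO_BROADCAST, &on, sizeof(on));
        sockaddr_in dst{};
        dst.sin_family = AF_INET;
        dst.sin_port = htons(9000);                        // illustrative port
        dst.sin_addr.s_addr = inet_addr("192.168.1.255");  // subnet broadcast
        const ssize_t n = sendto(s, &pkt, sizeof(pkt), 0,
                                 reinterpret_cast<const sockaddr*>(&dst),
                                 sizeof(dst));
        close(s);
        return n == static_cast<ssize_t>(sizeof(pkt)) ? 0 : -1;
    }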


3.10 Summary
In this chapter, the distinctive features of real-time operating systems and real-time applications were presented in relation to the multithreaded tasks of the telerobotic system. The forward kinematics of the 6-DoF manipulator was formulated in terms of homogeneous transformations and the Denavit-Hartenberg (DH) parameters. The inverse kinematics is solved both in closed form and through a resolved rate scheme, which makes the solution extensible to redundant robot arms. A linearized mathematical model of the control system was described in terms of the error signals between the actual positions and the desired positions, with gravitational compensation. The implemented multithreading approach was explained, and the threads defined for executing a particular motion (trajectory following and sensory data fusion, as well as the torques required to drive the arm) were discussed. The multiple threads designed to handle the signals of the robot controller, as well as the visual and haptic data fusion, with provisions for inter-process communication, priority-based execution, and data corruption avoidance, were explained.


Chapter 4
Sensor-Based Assistance, Autonomous and Teleoperation Control

4.1 Introduction
In general, a telerobotic system consists of a master user input device operated by a human and a slave robot placed at a remote location and controlled using a supervisory control scheme. This form of teleoperation requires the human to be in the control loop at all times. Autonomous and teleoperation control modes enable the system to combine human high-level decisions with computer-based intelligent control. The idea of incorporating sensor-based assistance into the system is to facilitate task execution and to reduce the skill required to operate the system. This work focuses on enhancing the capabilities of users through intelligent autonomous and teleoperation (telerobotic) control, combining human high-level decisions with computer intelligence in a hard real-time master-slave system that helps users execute different tasks more easily and quickly. The human decision-making component comes from locating the target objects in the environment using simple sensors and selecting a combination of different modes of operation, such as the autonomous control mode and the scaled, virtual fixture based, position-based, or velocity-based teleoperation control modes. In this chapter, the concept of an assist function is defined in relation to the basic haptic parameters, and the control law equations required to determine the intended path from the end-effector position and the sensory input are outlined. The different operation modes derived from the implementation of the autonomous control mode and the teleoperation control scheme are also described.


The concept of the centroid of the object, used in the derivation of the scaled and virtual fixture constraints, is assumed to be known here; the details of its determination are presented in Chapter 5.

4.2 Sensor-Based Telerobotic Control Theory
The sensor-based assistance and telerobotic control implementations depend on either position or velocity control variables. For position-based assistance, a simple form is scaling, in which the motion of the end effector is scaled up in the desired direction and scaled down in any other direction. Similarly, in the case of velocity assistance, the velocity is scaled according to whether the motion in a particular direction serves to further accomplish the desired effect of the motion, for example, when moving towards a target object. The 3D Cartesian-based mapping from master to slave makes it very easy and quick for the users to point to objects in the environment with the laser range finder. Once the object is located by pointing the laser, it is locked in by the system at the press of a key, and the slave can then proceed towards the object in automatic mode or by teleoperation.

4.2.1 Autonomous Control Mode
Before the activation of the autonomous control mode, the user points the laser at an object in the environment by teleoperating the slave robot arm. Then the user selects the automatic mode option to move the gripper towards the object along the linear trajectory (line of sight) generated by the laser, as shown in Figure 4.1. After reaching a certain threshold distance, the arm moves along a secondary trajectory to account for the laser offset distance from the gripper, also shown in Figure 4.1.


Figure 4.1 Conceptual Representation of Autonomous Control Mode (figure omitted; it shows the end-effector frame, the target object frame, the camera and laser, the measured distance D, and the desired trajectory)

As explained in Chapter 3, the resolved rate approach for Cartesian motion is used to compute the required joint velocities from the Cartesian velocities of the end effector. A set of differential transformation matrices at each of the sampling points is computed between the current end-effector position and the target object position in hand coordinates. The resulting transformations are then transformed to base coordinates before their use in the resolved rate algorithm.


If the transformation of the current end-effector position with respect to the base, obtained from the solution of the forward kinematics of the manipulator, is denoted by $^{0}T_{6}$, then the transformation of the target object with respect to the base can be computed by the following operation:

$^{0}T_{obj} = {}^{0}T_{6}\;{}^{6}T_{obj}$ (4.1)

where $^{6}T_{obj}$ is given by Eq. (4.2), a pure translation of magnitude D along the laser line of sight, with D the distance measured by the laser:

$^{6}T_{obj} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & D \\ 0 & 0 & 0 & 1 \end{bmatrix}$ (4.2)

The equivalent angle-axis method [22] is used for obtaining the rotational part, and linear interpolation is used to obtain the linear part of the transformations at the sampling points. A Cartesian velocity vector, V, is computed from two consecutive sampling transforms taken from the set above at 200 Hz, which is the refresh rate of the trajectory generation thread, as explained before. If $T_i$ and $T_{i+1}$ are two consecutive transformations, with rotation parts $R_i$, $R_{i+1}$ and translation parts $P_i$, $P_{i+1}$, then the velocity vector follows as:

$V = \begin{bmatrix} v \\ \omega \end{bmatrix}$ (4.3)

where

$v = \dfrac{P_{i+1} - P_i}{\Delta t}$ (4.4)

and


$\omega = \dfrac{\theta}{\Delta t}\,\hat{k}$ (4.5)

with $(\hat{k}, \theta)$ the equivalent angle-axis representation of $R_i^{T}R_{i+1}$. The required joint angle rates are then computed using the inverse of the Jacobian of the manipulator as follows:

$\dot{q} = J^{-1}\,V$ (4.6)

After integration of the joint rates, the current joint angles of the arm are obtained.

4.2.2 Position-Based Teleoperation Control Mode
Position-based teleoperation is the default control mode of the telerobotic system. In this mode, as the Phantom Omni is moved in its workspace by the user, its transformation matrices are computed by solving the forward kinematics problem and are mapped to the PUMA base frame. The differential rotations, dR, and the differential translations, dP, of the Phantom Omni are computed between every two consecutive sampling points by (4.7) and (4.8), respectively:

$dR = R_i^{T}\,R_{i+1}$ (4.7)

$dP = P_{i+1} - P_i$ (4.8)

Knowing the current PUMA pose, $T_{P1}$, the new end-effector pose of the PUMA is computed as:

$T_{P2} = T_{P1}\,dT$ (4.9)

where dT is the differential transformation assembled from dR and dP.
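The position-mode update can be sketched as below; the use of Eigen is an assumption, and the frame mapping between the Omni and PUMA bases is omitted for brevity.

    #include <Eigen/Geometry>

    // Apply the Omni's differential motion between two servo cycles
    // (Eqs. 4.7-4.8) to the current PUMA pose (Eq. 4.9).
    Eigen::Isometry3d teleopUpdate(const Eigen::Isometry3d& omniPrev,
                                   const Eigen::Isometry3d& omniCurr,
                                   const Eigen::Isometry3d& pumaCurr) {
        const Eigen::Vector3d dP =
            omniCurr.translation() - omniPrev.translation();        // Eq. (4.8)
        const Eigen::Matrix3d dR =
            omniPrev.rotation().transpose() * omniCurr.rotation();  // Eq. (4.7)
        Eigen::Isometry3d pumaNew = pumaCurr;
        pumaNew.translation() += dP;
        pumaNew.linear() = pumaCurr.rotation() * dR;                // Eq. (4.9)
        return pumaNew;
    }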


For teleoperation, a closed-form solution of the inverse kinematics problem is used to yield the joint angles, which are then sent to the torque generator for computing the joint torques.

4.2.3 Velocity-Based Teleoperation Control Mode
In this mode of teleoperation, the Phantom Omni position determines the PUMA end-effector speed and direction. In other words, when velocity control is used, the PUMA end-effector speed changes proportionally to the changing position of the Phantom Omni. When the specified velocity is reached, it is maintained until the command from the Omni changes. The user moves the Omni end effector once to select a direction and speed for the PUMA end effector, then holds the Omni end effector steady until the gripper mounted on the PUMA is close to the target, and finally moves the Omni end effector back to its initial position in order to stop close to the target. The implementation of velocity-based teleoperation is similar to the position-based teleoperation mode, except that the differential rotations dR and differential translations dP of the Omni are computed between the initial Omni stylus position, recorded when its button is pushed, and its current position. This way, the Omni pen behaves like a joystick: the further the joystick moves away from the center, the faster the PUMA end effector moves. This is also suitable for wheelchair-bound users, who are accustomed to this style of joystick control for mobility. In this mode, the Phantom Omni end-effector transformation is recorded when the user clicks the stylus button. The recorded transformation is referred to as $T_s$ in (4.10):


$T_s = \begin{bmatrix} R_s & P_s \\ 0 & 1 \end{bmatrix}$ (4.10)

As the stylus is moved in its workspace by the user, the current transformations are sent to the PUMA controller and are mapped to the PUMA base frame. The differential translation is computed as:

$dP = k_v\,(P_c - P_s)\,\Delta t$ (4.11)

where $k_v$ is a constant velocity factor, $P_c$ is the current stylus position, and $\Delta t$ is the real-time clock refresh rate. This means that the farther the Omni pen is from the start position, the faster the PUMA moves, since $k_v$ is constant and only $P_c$ is updated at the cycle refresh rate. The differential rotation dR is computed as:

$dR = R_s^{T}\,R_c$ (4.12)

where $R_s^{T}$ corresponds to the transpose of the recorded rotation $R_s$, $R_c$ is the current stylus rotation, and the equivalent angle of dR is multiplied by a scaling rotation factor. Then, small increments of dR are computed from the equivalent angle-axis method and are used to transform the pose at the cycle refresh rate, yielding the new rotational components of the PUMA end-effector transformation. These new transformations are processed in the same way as in position-based teleoperation, and the inverse kinematics yields joint angles at the cycle refresh rate, as explained in Chapter 3.
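The joystick-like translation law of Eq. (4.11) is essentially a one-liner; the sketch below (names are illustrative) shows how a constant stylus deflection from the recorded start position yields a constant per-cycle displacement, and therefore a constant end-effector speed.

    #include <Eigen/Dense>

    // Eq. (4.11): dP = kV * (Pc - Ps) * dt. A fixed stylus deflection from
    // the recorded start position produces a constant end-effector speed.
    Eigen::Vector3d velocityModeDP(const Eigen::Vector3d& pStart,  // Ps, at click
                                   const Eigen::Vector3d& pCurr,   // Pc, this cycle
                                   double kV,                      // velocity factor
                                   double dt) {                    // cycle period (s)
        return kV * (pCurr - pStart) * dt;
    }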


4.2.4 Scaled Teleoperation
Scaled teleoperation is used to scale the user's motion up or down and to create virtual constraints using the sensory data. After the user selects the target object from the environment by pointing the laser, the reference trajectory vector $\hat{u}$ is calculated. As the user moves the Phantom Omni in its workspace, the translation vectors are computed and sent to the PUMA controller at every cycle step. If $P_i$ and $P_{i+1}$ are the translation vectors of the homogeneous transformations of two tip points, then the translation vector can be projected onto the reference vector to obtain a new vector as follows:

$dP_{\parallel} = \left[(P_{i+1} - P_i)\cdot\hat{u}\right]\hat{u}$ (4.13)

The projected vector resulting from (4.13) is then scaled up by multiplying it by a scaling matrix of the form:

$S = \mathrm{diag}(s_x,\,s_y,\,s_z)$ (4.14)

Similarly, the projections of the current translation vectors onto the other two axes, perpendicular to the reference vector, are determined; however, the components of these vectors are scaled down. As the computations continue, the sum of the scaled components becomes the new differential translation vector computed every cycle. The inverse kinematics on the new transformation yields the new joint angles, which are sent to the torque generator as before.
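The decomposition of Eqs. (4.13) and (4.14) can be sketched as below (Eigen assumed; the scale factors are illustrative). Setting the perpendicular factor to zero reproduces the hard virtual fixture of the next subsection.

    #include <Eigen/Dense>

    // Split the commanded translation into its component along the locked
    // reference direction (scaled up) and the residual perpendicular
    // component (scaled down), per Eqs. (4.13)-(4.14).
    Eigen::Vector3d scaledDP(const Eigen::Vector3d& dP,
                             const Eigen::Vector3d& refDir,  // unit vector to target
                             double sAlong = 2.0,            // amplify on-axis motion
                             double sPerp = 0.25) {          // attenuate off-axis motion
        const Eigen::Vector3d along = dP.dot(refDir) * refDir;  // Eq. (4.13)
        const Eigen::Vector3d perp = dP - along;
        return sAlong * along + sPerp * perp;
    }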


4.2.5 Virtual Fixture Teleoperation
The virtual fixture constraints are created by completely constraining the PUMA motion along the reference trajectory vector locked in by the laser. This is done by scaling up the components of the current projected vector on the reference vector and scaling down to zero the components of the current projected vector on the axes perpendicular to it. At the same time, the orientation of the PUMA end-effector frame is kept completely constrained in Cartesian space, except along the axis parallel to the desired trajectory. The differential translation vectors to be sent to the PUMA are computed in a manner similar to the scaled teleoperation discussed in Section 4.2.4, keeping the rotation fixed, and the new transformations yield joint angles at the cycle refresh rate to drive the PUMA robot arm.

4.3 The Phantom Omni Haptic Interface
A haptic interface such as the Phantom Omni has sensors to measure the (6 x 1) vector corresponding to the position and orientation of its end effector (3 rotations and 3 translations), as well as built-in 3-DoF force feedback capabilities. The haptic device used in this work is manufactured by SensAble Technologies and is shown in Figure 4.2. The positional feedback is obtained from the encoders placed at the motors, and the feedback forces are delivered through the actuators of the Phantom Omni interface. This information can be manipulated to express the assistive forces not just as a function of the end-effector position of the Phantom Omni (also known as the stylus or thimble), but also as a combination of the latter and external visual information provided by sensors such as a camera and a laser range finder.


Assuming that there is an object of interest in the field of view of the user, when the user points to the object with the laser, the line of sight (LoS), which passes through the centroid feature of the object or region of interest and through the laser source on the robot end effector, provides a visual indication of the object's location with respect to a fixed 3D world reference frame. On the other hand, if the object of interest is partially or completely out of the user's direct view, the optical sensors (camera and laser range finder) can provide the location of the centroid; this estimate depends on the robot and camera frames, the distance, and the direction of sight. In practice, there will be measurement errors between the intended path and the actual path commanded by the user interacting with the system. These error signals can be used to compute force constraints for correcting the deviations from the intended path and for guiding the user towards the goal. As previously stated, the Phantom Omni shown in Figure 4.2 provides six (6) positional degree-of-freedom inputs and three (3) force degree-of-freedom outputs (see Appendix F). The Omni gives the user the sensation of touching objects by means of the forces transmitted to the user through the actuators mounted on the device. It allows for the control of the x, y, and z linear components of the feedback force, but does not allow for torsional feedback when users rotate the stylus; the stylus gimbal rotations are sensed but not actuated, for example.


Figure 4.2 Phantom Omni Haptic Device (figure omitted; it labels the x, y, z axes and link lengths L1 and L2)

The Phantom Omni software uses the OpenHaptics software development kit (SDK), which runs on the Windows XP OS. The OpenHaptics SDK consists of a set of two libraries known as the HDAPI and the HLAPI. The HLAPI is a high-level library for haptic scene rendering. It is best suited for adding haptic interactions to existing OpenGL graphics applications. On the other hand, the HDAPI provides access to low-level haptic functions to handle direct force rendering to the actuators of the haptic interface. The type of feedback force rendered by the haptic device can be time dependent, motion dependent, or a combination of both. In this work, motion-dependent feedback combined with the concept of the sensor-based assist functions is used to control the six (6) DoF Puma 560 robot arm in both joint and Cartesian space.

4.4 Joint and Cartesian Control through the Haptic Interface
The Puma 560 robot arm can be controlled in joint and Cartesian space. Joint space haptic control means that the six (6) joints of the Phantom Omni are mapped to the corresponding joint angles of the robot arm. The forward kinematic equations of the haptic device and the robot arm are used at this point to obtain a set of joint angles. After the mapping, the controller drives the robot arm to the appropriate configuration.


Figure 4.3(c) shows the zero-configuration position of the Phantom Omni. When the device is placed as shown in (c), the first three joint angles are zero. The gimbal angles of the device are not shown in this configuration. On the other hand, Cartesian space haptic control deals with the determination of the joint angle values that place the manipulator at a desired position and orientation at a specified velocity. The input velocity components are provided by the haptic device, as shown in Figure 4.4.

Figure 4.3 Phantom Omni Reference Configurations (figure omitted; panels (a)-(c) show the device, its measured joint angles, and the zero-configuration angles)

4.5 Telerobotic Control System
The control strategy is a form of generalized bilateral control, which maps position and velocity components between the haptic workspace and the Puma 560 workspace [17]. Figure 4.4 shows a block diagram of the control strategy, where the linear velocity components of the Omni are mapped to the linear velocity of the robot arm through the Jacobian. As shown, the inverse of the Jacobian is not calculated directly (through the inverse or pseudo-inverse methods); instead, the calculation is performed following the LU-based procedure described in Section 3.8.


This approach improves the computational efficiency of the control strategy algorithm. When joint space control is used, the direct measurements from the optical encoders mounted on the haptic device are used to determine the joint angles. The corresponding transformation matrices are then used to represent the haptic's reference frame relative to the manipulator's reference frame. Given the numerical values of the haptic joint angles, this representation is relatively easy to assemble.

Figure 4.4 Telerobotics System Block Diagram
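The velocity mapping in the block diagram can be illustrated with a short resolved-rate step. The sketch below, written with the Eigen linear algebra library, solves J * qdot = v in a least-squares sense via an SVD instead of forming the Jacobian inverse explicitly; this is a stand-in for the Section 3.5 procedure (which is not reproduced here), and the Jacobian itself is a random placeholder.

#include <Eigen/Dense>
#include <iostream>

// Resolved-rate step: given the 6x6 manipulator Jacobian J (evaluated at the
// current joint configuration) and a desired end-effector twist v, find the
// joint rates qdot such that J * qdot = v, without forming J^{-1} explicitly.
Eigen::VectorXd resolvedRateStep(const Eigen::MatrixXd& J,
                                 const Eigen::VectorXd& v)
{
    return J.jacobiSvd(Eigen::ComputeThinU | Eigen::ComputeThinV).solve(v);
}

int main()
{
    Eigen::MatrixXd J = Eigen::MatrixXd::Random(6, 6);  // placeholder Jacobian
    Eigen::VectorXd v(6);
    v << 0.01, 0.0, 0.0, 0.0, 0.0, 0.0;   // 1 cm/s along x, no rotation

    Eigen::VectorXd qdot = resolvedRateStep(J, v);
    std::cout << "joint rates:\n" << qdot << std::endl;
    return 0;
}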


4.6 Indexing with the Haptic Device

The kinematics of the Phantom Omni is very different from the robot arm kinematics that it is controlling, and the workspace of the slave manipulator is much larger than the workspace of the haptic interface. The most appropriate way to implement indexing is through the stylus buttons, which are used for the user interaction as follows. With the white button, the user can drag the screen, just like a standard mouse, to place the virtual object away from the limits of the workspace or to reposition the stylus to a more comfortable orientation. The blue button, on the other hand, is used to re-engage the motion of the manipulator.

Driving the manipulator through the Phantom Omni in real time is a challenge because, if it is not done predictably, and/or the commanded control signals from the haptic device are delayed, the telerobotic system can go out of control or automatically shut down. This safety feature is built into the software: the controller is designed to expect a specified difference between the current and the next commanded configuration of the manipulator. If this difference is outside the specified range, the system is shut down.
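The shutdown logic described above can be sketched as a simple per-cycle check on the commanded joint step. The joint-step threshold in this sketch is hypothetical; the actual limits would be tuned per joint on the real controller.

#include <array>
#include <cmath>
#include <cstdio>
#include <cstdlib>

// Sketch of the configuration-jump safety check: the controller expects only
// a bounded difference between the current and the next commanded joint
// configuration.  The threshold below is an assumed value, not the tuned one.
constexpr double kMaxJointStepRad = 0.05;   // per control cycle (assumed)

bool commandIsSafe(const std::array<double, 6>& current,
                   const std::array<double, 6>& commanded)
{
    for (std::size_t i = 0; i < current.size(); ++i) {
        if (std::fabs(commanded[i] - current[i]) > kMaxJointStepRad)
            return false;                   // jump too large: reject command
    }
    return true;
}

void applyCommand(const std::array<double, 6>& current,
                  const std::array<double, 6>& commanded)
{
    if (!commandIsSafe(current, commanded)) {
        std::fprintf(stderr, "Commanded jump out of range: shutting down.\n");
        std::exit(EXIT_FAILURE);            // placeholder for the shutdown path
    }
    // ... otherwise, send `commanded` to the joint servos ...
}

int main()
{
    std::array<double, 6> q  = {0, 0, 0, 0, 0, 0};
    std::array<double, 6> qc = {0.01, 0.0, 0.02, 0.0, 0.0, 0.0};
    applyCommand(q, qc);    // passes: every joint step is within range
    return 0;
}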


4.7 Sensor-Based Assistance Function (SAF) Concept

As previously mentioned, the haptic interface allows the user to have the "sensation of touch" of virtual objects through time-dependent, motion-dependent, or combined feedback forces. The idea of combining those types of forces with a sensor-generated trajectory serves to augment the user's dexterity by scaling or by imposing virtual constraints. Also, attractive or repulsive potential fields can be defined as virtual constraints that are implemented in the haptic control software to modify the control action provided by the actuators of the haptic interface [24].

As shown in Figure 4.5, the SAF constrains the motion of the robot arm to a desired linear path by constraining the robot end-effector motion along a line defined between the initial position of the manipulator and the position of the goal point, both defined in Cartesian space. This way, the calculation of the SAF is based on the projected line from the end-effector of the manipulator to the intended destination, the user-defined object of interest or target. In this discussion, it is assumed that the location of the centroid that the user is pointing to is known for the development of the assist function equations. The required computations to identify the position and orientation of an object in 3D space are the topic of the next chapter, where the centroid location in Cartesian coordinates is the result of the data fusion of the optical sensors (camera and laser).

A common application of the assist function concept results from the situation where the object is not clearly visible to the user but is still visible from the sensors' point of view (camera and laser range finder combined model). In this situation, the sensors can provide the location of the centroid from the images of the object captured by the vision system, the image processing techniques (binarization, edge detection, and feature extraction), and the inverse mapping solution. Another application results from the possibility that the user is shaking, due to a tremor illness for example, and is unable to point the laser range finder precisely at the object of interest.


In this case, the camera information can be used to determine the erroneous user input. During the execution of a task, the user is provided with position- and velocity-based control schemes as well as autonomous control, with the possibility of switching between them. For instance, the user may choose to approach the target object in autonomous mode and then switch from autonomous to regular teleoperation for fine-tuning the orientation of the end-effector before grasping. Any combination of the regular, scaled, and virtual fixture modes can be selected by the user to complete the task.

Figure 4.5 Representation of the Sensor-Based Assistance Function

Figure 4.5 illustrates the line of sight defined between the end-effector and the region of interest (ROI). At this point, there are two types of assistive forces.


One type will be attractive or repulsive to assist the user while moving towards the trajectory path, and the second type will assist the user's motion in following the prescribed linear path. The latest updates of the position vector obtained in the haptic thread are used to compute the new positions of the virtual object and to display the effect of attraction or repulsion. The linear trajectory is defined by the line of sight vector. Once the user's motion is along the prescribed path, an assist function is generated to guide the user to follow the trajectory with ease.

Figure 4.6 A Set of Line of Sight Vectors (in Red) Placed Close to the Centroid of the Region of Interest (ROI)

The goal or destination of the robot arm is defined as the centroid of the object of interest. The coordinates of the centroid feature are computed in pixels relative to the image plane. As will be discussed later, sequences of transformations are required to represent the centroid coordinates relative to the world coordinate system. Also, the transformation from image space to joint space of the robot arm requires knowledge of the kinematic equations of the robot arm. In the case of a robot-mounted camera-laser suite, the visual information is produced as an input signal defined in the image space.


Therefore, a conversion is necessary for the transformation. The inverse projection transformation, obtained from data provided by the sensory suite (camera and laser range finder), is used to generate a linear trajectory in joint space using the single axis rotation method described in [24]. Since the human is in the control loop, rather than attempting to drive the arm along this path autonomously, the difference between this trajectory and the user's commanded motion is used to compute the assistive forces. Figure 4.7 illustrates the method implemented to generate the linear trajectory in joint space.

Figure 4.7 Line of Sight Using Single Axis Rotation [60]

In cases where the user wants to switch to autonomous control mode to reach the object of interest, a linear trajectory path is automatically generated using the location of the centroid of the object, calculated from the sensor datasets, and the current position of the end-effector of the manipulator.


4.8 Summary

In this chapter, the concept of the assist function was defined. The control law equations required to calculate the haptic feedback based on the haptic position were developed. The connecting line between the end-effector of the robot arm and the centroid feature of the image of an object, extracted from the optical sensor data fusion, was developed as well. Two types of functions to assist the user were described: one while approaching the path, and a second for following the prescribed path. The latter is defined along the line between the end-effector of the manipulator and the centroid of the object of interest. In order to reduce the burden of task execution over long periods of time, an automatic mode is developed through the generation of a linear trajectory path using the location of the centroid of the object and the current position of the end-effector of the manipulator. In the development of the control law, the location of the centroid was assumed to be known. The procedure to extract this information from images of the object is the topic of the next chapter, as well as the sensor-based assist function calculations.


Chapter 5 Visual and Haptic Data for Motion Scaling and Virtual Constraint Definition

5.1 Introduction

In the previous chapter, the concept of the centroid of the object was used to define the connecting line between the end-effector position of the robot arm and the object of interest without detailing the procedure followed for its computation. The centroid calculation is based on information extracted from images of the object of interest, which involves computer vision processes such as edge detection and feature extraction techniques. In computer vision, CCD cameras are used as passive sensors to extract data from the captured images; the intensity of the light is used to process the image information. In practice, a complication arises from the extraction of 3-dimensional coordinates of an object given 2-dimensional information from the camera's image plane. Data fusion from two different sensors (camera and laser range finder) provides a unique solution to the problem of reconstructing the 3D object position and orientation with respect to a fixed coordinate system based on 2-dimensional datasets. In this combined system, the laser range sensor is used to determine the distance to the observed target object. This chapter describes the methodology necessary to calculate the location of the centroid and its relation to motion scaling and virtual constraints. The detailed procedures for handling the images, camera calibration, spatial domain processing, and mapping of the camera frame with respect to the base reference frame of the robot arm are also presented.


5.2 Spatial Domain Pre-Processing

In order to accurately predict the position and orientation of an object or region of interest, the pixel coordinates of a point in 3D and the corresponding points in world coordinates need to be matched. To accomplish this, the computation of the internal ("intrinsic") and external ("extrinsic") parameters of the camera is required. Tsai's camera model, as described in [62], is used to obtain those parameters. The model includes 3D-2D perspective projection with radial lens distortion compensation. This camera model defines a total of eleven (11) parameters: five (5) intrinsic or internal parameters and six (6) extrinsic or external parameters. The internal parameters describe how the camera forms an image, while the external parameters describe the camera position and orientation with respect to the world coordinate frame. The internal parameters include the focal length, the center of projection, and the CCD sensor array dimensions, and they are specified by the manufacturer's design. The intrinsic parameters might vary from device to device even if they belong to the same manufacturing batch. The specifications might also be affected by environmental conditions such as the distance between the camera and the scene and the level of illumination available. The intrinsic parameters are defined as follows [62, 63, 64]:

1. Principal point $(C_x, C_y)$: the intersection coordinates of the optical axis with the image plane, as shown in Figure 5.1.


2. Scale factors $(d_x, d_y)$: scaling factors for the x and y pixel dimensions; i.e., the horizontal and vertical size of a single pixel in engineering units (millimeters, inches, meters, etc.).

3. Aspect distortion factor $(s_x)$: a scale factor to account for the model distortion in the aspect ratio of the camera.

4. Focal length $(f)$: defines the distance from the optical center (or projection center) to the image plane as defined in a pinhole camera model (this is different from the focal length printed on the lens of the camera by the manufacturer).

5. Lens distortion factor $(\kappa_1)$: the first-order radial lens distortion coefficient.

The extrinsic or external parameters of the camera define the transformation of the pose of the camera with respect to the world coordinate system. The six (6) extrinsic camera parameters are:

1. $(R_x, R_y, R_z)$: the rotation angles necessary to obtain the rotational transformation between the world and camera coordinate frames.

2. $(T_x, T_y, T_z)$: the translational components between the world and camera coordinate systems.

Figure 5.1 shows the assigned frames of Tsai's camera model. Calibration data for Tsai's camera model consist of the 3D world coordinates of a feature point in engineering units (in mm, for example) and the corresponding 2D coordinates, in pixels, of the same feature point in the image.


Figure 5.1 Camera Model Geometry

As shown in Figure 5.1, a sequence of transformations is required to define the relationship between the position of a point P in world coordinates and the same point as projected in the camera reference frame. The first transformation is a rigid body transformation from the world coordinate system $(x_w, y_w, z_w)$ to the camera-centered coordinate system $(x_c, y_c, z_c)$. This transformation is expressed as follows:

$$\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} = R \begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} + T, \qquad R = \begin{bmatrix} r_1 & r_2 & r_3 \\ r_4 & r_5 & r_6 \\ r_7 & r_8 & r_9 \end{bmatrix}, \quad T = \begin{bmatrix} T_x \\ T_y \\ T_z \end{bmatrix} \tag{5.1}$$


where $r_1, \dots, r_9$ are the elements of the rotation (orientation) matrix of the camera and $T$ corresponds to the translation vector in the world coordinate system. Once this transformation is known, a second transformation relates the camera-centered coordinates to the ideal (undistorted) pinhole camera model. This is accomplished by using the projective transformation formulas; in other words, the 3D camera point is projected into a 2D plane as $(x_u, y_u)$, where the subscript u means "undistorted" because, at this point, there is no correction for lens distortion of the projected point. The projective transformation is given by Eqs. (5.2) and (5.3) as follows:

$$x_u = f\,\frac{x_c}{z_c} \tag{5.2}$$

$$y_u = f\,\frac{y_c}{z_c} \tag{5.3}$$

Expanding (5.1) and substituting into Eqs. (5.2) and (5.3) yields:

$$x_u = f\,\frac{r_1 x_w + r_2 y_w + r_3 z_w + T_x}{r_7 x_w + r_8 y_w + r_9 z_w + T_z} \tag{5.4}$$

$$y_u = f\,\frac{r_4 x_w + r_5 y_w + r_6 z_w + T_y}{r_7 x_w + r_8 y_w + r_9 z_w + T_z} \tag{5.5}$$

Equations (5.4) and (5.5) represent the undistorted coordinates of the point P. Next, the first-order radial distortion model is applied to transform the undistorted points to the "true" position of the point's image, $(x_d, y_d)$. The coordinates corrected for distortion are:

$$x_u = x_d\,(1 + \kappa_1 r^2) \tag{5.6}$$

$$y_u = y_d\,(1 + \kappa_1 r^2), \qquad r^2 = x_d^2 + y_d^2 \tag{5.7}$$
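Read in sequence, Eqs. (5.1)-(5.7) define a forward camera model that can be evaluated directly. The sketch below does so for a single world point, using a fixed-point iteration to apply the radial distortion of Eqs. (5.6)-(5.7); all parameter values are placeholders, not calibrated results.

#include <cmath>
#include <cstdio>

// Forward evaluation of the camera model: world point -> camera frame ->
// undistorted pinhole projection -> first-order radial distortion.
struct CameraModel {
    double R[3][3];   // rotation, Eq. (5.1)
    double T[3];      // translation, Eq. (5.1)
    double f;         // focal length
    double kappa1;    // radial distortion coefficient
};

void projectPoint(const CameraModel& cam, const double pw[3],
                  double& xd, double& yd)
{
    // Rigid body transformation, Eq. (5.1)
    double pc[3];
    for (int i = 0; i < 3; ++i)
        pc[i] = cam.R[i][0]*pw[0] + cam.R[i][1]*pw[1]
              + cam.R[i][2]*pw[2] + cam.T[i];

    // Undistorted pinhole projection, Eqs. (5.2)-(5.3)
    double xu = cam.f * pc[0] / pc[2];
    double yu = cam.f * pc[1] / pc[2];

    // First-order radial distortion, Eqs. (5.6)-(5.7): solve
    // xu = xd (1 + kappa1 r^2) by fixed-point iteration on r^2.
    xd = xu; yd = yu;
    for (int it = 0; it < 10; ++it) {
        double r2 = xd*xd + yd*yd;
        xd = xu / (1.0 + cam.kappa1 * r2);
        yd = yu / (1.0 + cam.kappa1 * r2);
    }
}

int main()
{
    CameraModel cam = {{{1,0,0},{0,1,0},{0,0,1}}, {0,0,500.0}, 8.0, 1e-6};
    double pw[3] = {100.0, 50.0, 0.0}, xd, yd;
    projectPoint(cam, pw, xd, yd);
    std::printf("distorted sensor coordinates: (%.4f, %.4f)\n", xd, yd);
    return 0;
}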


Figures 5.2 and 5.3 show some of the results presented to the user through a graphical user interface. Figure 5.4 shows the chessboard pattern used for calibration and a typical Puma 560 configuration during calibration.

Figure 5.2 Graphical User Interface with Chessboard Calibration Pattern

Figure 5.3 Chessboard Calibration Pattern at a Different Pose of the Robot Arm


Figure 5.4 Calibration Pattern in the Camera-Mounted Field of View

As shown in Figure 5.5, a sequence of conversions is necessary to obtain the representation of the position of a point in the image frame.

Figure 5.5 Distorted and Undistorted Sensor and Image Coordinates (undistorted sensor plane, undistorted/distorted sensor plane, and distorted image plane)


These conversions are obtained by the evaluation of Eqs. (5.8) and (5.9), as follows [65]:

$$x_f = \frac{s_x\,x_d}{d_x} + C_x \tag{5.8}$$

$$y_f = \frac{y_d}{d_y} + C_y \tag{5.9}$$

Now, given a set of points of the object of interest in the world coordinate system and the corresponding measured positions in the image after the distortion factor has been applied, an error-based objective function can be defined in terms of the difference between the points' image coordinates and the coordinates predicted by the camera model, as expressed in Eq. (5.10):

$$E = \sum_{i=1}^{N}\left[\left(x_{f,i} - \hat{x}_{f,i}\right)^2 + \left(y_{f,i} - \hat{y}_{f,i}\right)^2\right] \tag{5.10}$$

where $(x_{f,i}, y_{f,i})$ are the observed image positions and $(\hat{x}_{f,i}, \hat{y}_{f,i})$ are the predicted positions based on the known 3D world coordinates after correction of the radial distortion. The solution is found through the use of a nonlinear optimization technique known as the Levenberg-Marquardt (LM) method [62, 63, 64], as discussed next.

5.3 Numerical Optimization Approach for Estimation of the Camera Parameters

The nonlinear optimization for the determination of the camera intrinsic and extrinsic parameters is based on a modified Levenberg-Marquardt (LM) algorithm with a Jacobian calculated by a forward difference approximation [62]. The LM method increases the computational efficiency by combining gradient descent and Gauss-Newton optimization methods.
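The combination of the two methods can be summarized by the damped normal equations (J^T J + lambda*I) delta = -J^T r. The sketch below implements one such LM step with a forward-difference Jacobian, as in the cited implementation; the residual function is a toy stand-in for the reprojection error of Eq. (5.10).

#include <Eigen/Dense>
#include <iostream>

// Toy residual standing in for the reprojection error; the real residual
// stacks the terms of Eq. (5.10) over all calibration points.
Eigen::VectorXd residual(const Eigen::VectorXd& p)
{
    Eigen::VectorXd r(2);
    r << p(0)*p(0) + p(1) - 3.0,
         p(0) - p(1) + 1.0;
    return r;
}

// One Levenberg-Marquardt step: solve (J^T J + lambda I) delta = -J^T r
// with J formed by forward differences, then update p <- p + delta.
Eigen::VectorXd lmStep(const Eigen::VectorXd& p, double lambda)
{
    const double h = 1e-6;                       // forward-difference step
    Eigen::VectorXd r = residual(p);
    Eigen::MatrixXd J(r.size(), p.size());
    for (int j = 0; j < p.size(); ++j) {
        Eigen::VectorXd pj = p;
        pj(j) += h;
        J.col(j) = (residual(pj) - r) / h;       // forward difference column
    }
    Eigen::MatrixXd A = J.transpose()*J
                      + lambda*Eigen::MatrixXd::Identity(p.size(), p.size());
    return p + A.ldlt().solve(-J.transpose()*r);
}

int main()
{
    Eigen::VectorXd p(2);
    p << 0.5, 0.5;                               // initial guess
    for (int i = 0; i < 20; ++i) p = lmStep(p, 1e-3);
    std::cout << "estimate:\n" << p << std::endl; // converges near (1, 2)
    return 0;
}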


Initially, the implementation uses a closed-form least-squares estimation of three parameters: the focal length $f$, the z-axis translational component $T_z$, and the distortion coefficient $\kappa_1$. Using the obtained values as the starting point, an iterative nonlinear optimization of all parameters simultaneously is executed using the LM algorithm one more time.

The intrinsic camera parameters remain constant when the camera is moved with respect to the world reference frame. However, the extrinsic parameters, defined by the position and orientation of the camera with respect to the world coordinate system, will change and, therefore, Eq. (5.1) must be recomputed. This situation will arise every time the user points to an object and/or rotates the haptic stylus, for example. In this case, the knowledge of the extrinsic camera parameters is fundamental to determine the transformations required to map the position and orientation of an object with respect to the manipulator's base frame. The calibration procedure involves supplying parameters like the window size and the number of squares along each axis (X, Y) of the calibration pattern (a chessboard pattern in this work) and identifying the corners of the calibration grid in each of the images. Then, the Inverse Perspective Mapping (IPM) problem can be addressed. Figures 5.6 and 5.7 show simulated world-centered and camera-centered reference frames, respectively, after the optimization.


Figure 5.6 World-Centered Camera Calibration [63]

Figure 5.7 Camera-Centered Calibration [63]


5.4 Inverse Perspective Mapping (IPM)

The inverse perspective mapping (IPM) is the key to using the visual information for driving the manipulator under supervisory control, through the determination of the line of sight defined between the end-effector of the robot arm and the centroid of the object of interest as measured by the sensors. It can also be used for planning the straight-line motion of the end-effector in autonomous mode. The IPM is the opposite problem to the perspective projection used during calibration. Figure 5.8 illustrates possible errors between the calibrated camera model predictions and the actual positions of the observed image points.

Figure 5.8 Illustration of the Error between Predicted and Observed Image Points

During calibration, a set of N image points (N > 5) is matched to the corresponding points in the world coordinate system, and the intrinsic and extrinsic parameters required for this matching are calculated. The inverse perspective problem, on the other hand, uses the calibration data to determine the position and orientation of points on the image relative to the world coordinate system.


Similarly to the calibration problem, the methodology implemented to solve the inverse perspective problem is once again based on Tsai's model [62], and the Levenberg-Marquardt (LM) numerical technique is also used to solve the optimization problem in a least-squares sense. For the application to this particular problem, the input to Tsai's algorithm is the predicted position and orientation of the end-effector using the camera and the object position relative to the base, together with data from the forward kinematics solution of the robot arm. Figure 5.9 shows some of the coordinate frames assigned in order to obtain the required transformations of the points in the image plane with respect to the camera plane.

Figure 5.9 Camera and Image Planes Geometrical Relationships


5.5 Edge Detection and Feature Extraction

In order to recognize an object from an image, it is assumed that the object can be segmented out of the image background after binarizing the captured image. A histogram equalization post-processing step is performed to produce an even distribution of the grayscale pixel values. For edge detection, the Sobel method is used to compute the edges [64], as is the Canny method [66]. The Canny method is the preferred method in this work because it is more efficient in reducing noise from the captured image. Both methods are standard image processing techniques; the details of their implementations are described in [64] and [66].

For each segmented object, the feature extraction computes geometric features such as the centroid, perimeter, or area. For the computation of the centroid, the following two equations are used:

$$\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i \qquad \text{and} \qquad \bar{y} = \frac{1}{N}\sum_{i=1}^{N} y_i$$

where $x_i$ and $y_i$ represent the individual pixel coordinates and $N$ defines the total number of pixels in the 2D region of interest (ROI) [64]. As a result of the image projection and transformation, only 2D datasets, corresponding to the x-y plane, are available. However, in order to drive the robotic system to reach a particular object of interest, the triple (x, y, z) of Cartesian coordinates is required. The additional information, which corresponds to the z dimension or depth, is therefore provided by the laser range finder measurements.
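The centroid equations above translate directly into a scan over the binarized image. The following self-contained sketch computes the pixel-space centroid of a foreground region; the toy 5x5 image is for illustration only.

#include <cstdio>
#include <vector>

// Centroid of a binarized region of interest, computed exactly as in the two
// equations above: the mean of the foreground pixel coordinates.
struct Centroid { double x, y; };

Centroid regionCentroid(const std::vector<std::vector<unsigned char>>& bin)
{
    double sx = 0.0, sy = 0.0;
    long   n  = 0;
    for (std::size_t row = 0; row < bin.size(); ++row)
        for (std::size_t col = 0; col < bin[row].size(); ++col)
            if (bin[row][col] != 0) {        // foreground pixel
                sx += static_cast<double>(col);
                sy += static_cast<double>(row);
                ++n;
            }
    return { sx / n, sy / n };               // assumes a non-empty region
}

int main()
{
    // 5x5 toy binary image with a 2x2 blob in the lower right corner.
    std::vector<std::vector<unsigned char>> img(5, std::vector<unsigned char>(5, 0));
    img[3][3] = img[3][4] = img[4][3] = img[4][4] = 1;
    Centroid c = regionCentroid(img);
    std::printf("centroid (pixels): (%.2f, %.2f)\n", c.x, c.y);  // (3.50, 3.50)
    return 0;
}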


The acquisition and digitalization processes of the images produce distortions of the original region of interest (ROI), especially when viewing objects from a large distance. These distortions increase the uncertainty of the datasets and the complexity of the image recognition process, as well as the computational expense. For applications involving the location of objects of interest at large distances, the procedure implemented provides for the removal of the distortions introduced by the lens and the aspect ratio of the camera, respectively. As stated before, the methodology for the perspective projection camera model was devised by R. Tsai [62] and implemented by Bouguet [63] as a MatLab toolbox. This toolbox was used for validating the results of the multithreaded implementation of this algorithm, which is included as a module of the vision system. An optimized algorithm for the camera calibration is also described in [67].

5.6 Mapping to the Robot Arm Reference Frame

In order to use the robot-mounted camera (hand-eye) information and the laser range finder sensor for the robot pose estimation, both the intrinsic and extrinsic parameters of the camera need to be obtained first. Then, the transformations for mapping the grid's local coordinate system and the sensing array with respect to the manipulator's base frame are required. It is important to note that, in practice, an intermediate step known as the pixel-to-camera transformation will also be required, because points on the object or region of interest are known at the pixel level. This means that image pixel pairs (pixel_row, pixel_col), representing row and column numbers, respectively, are available with respect to a fixed pixel coordinate frame attached to the sensing array. From Figure 5.2, the geometrical relationships between the coordinate points in the camera and image planes can be described. Note that the origin of the image plane is defined at the upper left corner of the image window, while the origin of the camera plane is considered to be at the center of the camera plane (the principal point), which corresponds to one of the intrinsic or internal parameters of the particular camera in use.


For a robot-mounted camera, the offset between the end-effector of the manipulator and the camera is constant (it does not change between views), but it is unknown. The assembled homogeneous transformation is then represented relative to the end-effector of the robotic arm given their relative position, as illustrated in Figure 5.10. A detailed procedure for the mapping of the different reference frames can be found in [63].

Figure 5.10 Relationships between the Different Coordinate Frames [63]: gripper frames $G_i$, $G_j$ and camera frames $C_i$, $C_j$ at positions i and j, the robot base {B}, the calibration world {CW}, and the transformations $H_{cg}$, $H_{gij}$, and $H_{cij}$

In order to be able to drive the robot arm using the sensor information from the laser and camera combination, the pose transformation of the robot arm with respect to the manipulator's base frame is required.


From Figure 5.10, the following relationship for the homogeneous transformations can be extracted:

$$H_{gij}\,H_{cg} = H_{cg}\,H_{cij} \tag{5.11}$$

where

$H_{gij}$: the (4x4) homogeneous transformation of the gripper or end-effector between views;

$H_{cg}$: the (4x4) homogeneous transformation of the gripper or end-effector with respect to the camera;

$H_{cij}$: the (4x4) homogeneous transformation of the camera between views.

Tsai's method is once again used to solve (5.11) and to determine the position of the camera with respect to the robot hand coordinate frame; for a full description of the method, refer to [62]. The result of the method is the transformation matrix $H_{cg}$. The homogeneous transformations $H_{gij}$ and $H_{cij}$ are known from the robot forward kinematic equations and from the extrinsic parameters of the camera calibration procedure discussed earlier, respectively. The transformation which defines the calibration grid frame with respect to the camera frame can be found from the inverse of the extrinsic parameters of the camera $(R_c, T_c)$, as follows:

$$H = \begin{bmatrix} R_c^{\,T} & -R_c^{\,T}\,T_c \\ 0 \;\; 0 \;\; 0 & 1 \end{bmatrix} \tag{5.12}$$


where $r_1, \dots, r_9$ are the elements of the rotation matrix $R_c$ and $T_x$, $T_y$, $T_z$ are the components of the translation vector $T_c$. At a particular position and orientation of the robot manipulator, the end-effector transformation is stored and the corresponding extrinsic parameters of the camera are retrieved given the image of the region of interest (ROI). The camera transformation in the manipulator base reference frame {B} is:

$$H_c^{B} = H_g^{B}\,H_c^{\,g} \tag{5.13}$$

The calibration grid transformation can also be obtained with respect to the robot base frame as:

$$H_{CW}^{B} = H_c^{B}\,H_{CW}^{\,c} \tag{5.14}$$

The fixed transformation between the end-effector and the robot-mounted camera can be verified using the following expression:

$$H_c^{\,g} = \left(H_g^{B}\right)^{-1} H_c^{B} \tag{5.15}$$

As an additional check to verify the solution, the result of (5.15) must reflect the fact that the homogeneous transformation of the camera with respect to the gripper or end-effector frame is constant for all calibration points, given that the camera is attached to the end-effector of the robot arm. Table 5.1 shows the rotation and translation components of the camera and the predicted rotation and translation of the manipulator's end-effector obtained from Eq. (5.11). This table was generated using simulation software in MatLab and compared to the recorded transformation matrices of the end-effector of the Puma robot arm from the forward kinematics.


Table 5.1 Extrinsic Camera Parameters $(R_c, T_c)$ and End-Effector Rotation and Translation Matrices. Each pose lists the three rows of the image rotation matrix with the image translation components (mm), followed by the three rows of the end-effector rotation matrix with the end-effector translation components (mm).

Pose 1:
0.1179 0.9928 0.0232 | 126.5395 | 0.6862 0.6945 0.2163 |  92.8000
0.9902 0.1158 0.0779 |  66.5448 | 0.7274 0.6530 0.2110 | 635.6000
0.0747 0.0322 0.9967 | 235.2563 | 0.0053 0.3021 0.9533 | 326.6000

Pose 2:
0.0124 0.9996 0.0263 | 143.9736 | 0.6221 0.7609 0.1843 | 115.9000
0.9937 0.0094 0.1119 |  76.1713 | 0.7794 0.5796 0.2378 | 625.8000
0.1116 0.0275 0.9934 | 224.2457 | 0.0741 0.2916 0.9537 | 338.0000

Pose 3:
0.0849 0.9957 0.0378 | 135.1690 | 0.5437 0.8188 0.1844 | 115.8000
0.9900 0.0886 0.1100 |  83.2790 | 0.8327 0.4990 0.2399 | 626.7000
0.1129 0.0281 0.9932 | 225.3835 | 0.1045 0.2840 0.9531 | 336.4000

Pose 4:
0.1185 0.9921 0.0422 | 131.9680 | 0.5163 0.8363 0.1843 | 115.8000
0.9864 0.1225 0.1093 |  85.4977 | 0.8487 0.4709 0.2407 | 627.0000
0.1136 0.0287 0.9931 | 225.6934 | 0.1145 0.2807 0.9529 | 335.9000

Pose 5:
0.0856 0.9959 0.0293 | 138.7425 | 0.5461 0.8153 0.1925 | 115.8000
0.9896 0.0884 0.1138 |  85.7883 | 0.8307 0.4976 0.2496 | 627.6000
0.1159 0.0193 0.9931 | 225.9435 | 0.1076 0.2962 0.9490 | 334.5000

Pose 6:
0.1478 0.9874 0.0567 | 124.5867 | 0.4882 0.8550 0.1749 | 115.7000
0.9824 0.1532 0.1067 |  88.7926 | 0.8637 0.4447 0.2372 | 628.1000
0.1140 0.0400 0.9927 | 227.9899 | 0.1250 0.2669 0.9556 | 333.7000

Pose 7:
0.1117 0.9921 0.0570 | 102.6291 | 0.5192 0.8382 0.1666 |  93.5000
0.9870 0.1174 0.1099 |  88.6900 | 0.8454 0.4752 0.2439 | 632.0000
0.1157 0.0440 0.9923 | 227.1811 | 0.1253 0.2675 0.9554 | 333.3000

Pose 8:
0.1417 0.9874 0.0699 |  94.6773 | 0.4910 0.8568 0.1574 |  92.6000
0.9830 0.1487 0.1076 |  88.9109 | 0.8609 0.4497 0.2378 | 632.2000
0.1167 0.0535 0.9917 | 227.5109 | 0.1329 0.2523 0.9585 | 333.1000

Pose 9:
0.1053 0.9923 0.0655 |  98.0316 | 0.5214 0.8387 0.1574 |  92.6000
0.9874 0.1121 0.1119 |  90.0564 | 0.8440 0.4796 0.2400 | 633.0000
0.1183 0.0529 0.9916 | 228.2876 | 0.1258 0.2580 0.9579 | 331.7000

Pose 10:
0.1066 0.9920 0.0676 |  96.7395 | 0.5256 0.8398 0.1359 |  92.6000
0.9832 0.1153 0.1414 | 102.9621 | 0.8354 0.4793 0.2691 | 633.2000
0.1480 0.0514 0.9876 | 225.5877 | 0.1609 0.2550 0.9535 | 331.3000

Once the end-effector transformation is determined based on the sensor data, the connecting line between the end-effector of the robot arm and the position and orientation of the centroid feature defines the desired straight-line trajectory.
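The constancy check stated after Eq. (5.15) can be exercised numerically: the camera-to-gripper transform recovered at every pose must agree. The sketch below, using Eigen, does this for two simulated poses; the transforms are arbitrary placeholders, not the Table 5.1 data.

#include <Eigen/Dense>
#include <iostream>

// For every calibration pose, the camera pose in the gripper frame recovered
// as (H_g^B)^-1 * H_c^B, per Eq. (5.15), must be the same 4x4 matrix, since
// the camera is rigidly attached to the end-effector.
using Mat4 = Eigen::Matrix4d;

Mat4 cameraInGripper(const Mat4& H_base_gripper, const Mat4& H_base_camera)
{
    return H_base_gripper.inverse() * H_base_camera;   // Eq. (5.15)
}

int main()
{
    Mat4 Hgc_true = Mat4::Identity();
    Hgc_true(2, 3) = 0.10;                 // camera 10 cm ahead of the wrist

    Mat4 Hbg1 = Mat4::Identity();  Hbg1(0, 3) = 0.5;   // gripper pose 1
    Mat4 Hbg2 = Mat4::Identity();  Hbg2(1, 3) = -0.3;  // gripper pose 2

    // Simulated camera poses consistent with the fixed mounting offset.
    Mat4 Hbc1 = Hbg1 * Hgc_true;
    Mat4 Hbc2 = Hbg2 * Hgc_true;

    double diff = (cameraInGripper(Hbg1, Hbc1)
                 - cameraInGripper(Hbg2, Hbc2)).norm();
    std::cout << "pose-to-pose variation of H_c^g: " << diff << std::endl;
    return 0;   // ~0 for a rigid camera mount and exact data
}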


As explained in Chapter 4, the z axis of the constraint frame is aligned with this line, and the remaining axis completes the orthonormal constraint via the cross product:

$$\hat{\mathbf{y}} = \hat{\mathbf{z}} \times \hat{\mathbf{x}} \tag{5.16}$$

Eq. (5.16) needs to be transformed to coincide with the origin of the end-effector reference frame for grasping. The necessary transformation corresponds to a translation to specify the line of sight relative to the end-effector frame (the z axis of the camera is parallel to the z axis of the end-effector). The method to calculate the assist function is discussed in detail in Chapter 6.

5.7 Summary

This chapter describes the procedure for using the camera and laser information to compute the centroid location as well as the position and orientation of an object of interest in a 3D space. The principal utility of the sensory information (camera and laser range finder) at this level is to provide an automated system for measuring and digitally processing the content of the images of an object of interest. This information is then used for calculating the line of sight (LoS) defined between the end-effector position and the object. The LoS then defines a linear trajectory for guiding the user's motion towards the object of interest. The Levenberg-Marquardt (LM) nonlinear optimization method is described for the camera and laser range finder calibration.


The LM method is also used for solving the inverse perspective mapping (IPM) to transform measured points in the image plane to the base reference frame of the manipulator.


Chapter 6 Sensor-Based Assistance Function Calculations

6.1 Introduction

The architecture proposed in this work incorporates assistance to the user's motion using simple sensors (a camera and a laser range finder). The visual information is combined with the human inputs, and the deviations are corrected through the calculation of assistive forces; the line of sight between the end-effector and the object of interest is used as a constraining line. Once the object is in the view of the eye-in-hand camera, the vision system is activated and all the required transformations are determined, as explained in Chapter 5. In the image pre-processing part, the case in which all objects are on top of a table is considered. In this situation, the control input is the position and orientation commands calculated from the visual input as well as the commands of the haptic input device. This chapter describes the determination of the forces required to provide the appropriate feedback to guide the user's motion, which are identified here as the sensor-based assistance functions.

6.2 Generic Scheme for Motion-Dependent Force Feedback Calculation

The feedback force, F, is computed to maintain the haptic tip constrained to the user's intended path (see Figure 6.1). This force feedback is generated according to the following control law:


$$\mathbf{F} = K_p\,\mathbf{d}_{ij} + K_v\,\dot{\mathbf{d}}_{ij} \tag{6.1}$$

where

$\mathbf{F}$ = the force feedback through the haptic interface;

$K_p$ = the proportional gain;

$K_v$ = the derivative gain;

$\mathbf{d}_{ij}$ = the difference between the haptic tip position and the reference point on the intended path;

$\dot{\mathbf{d}}_{ij}$ = the rate of change of $\mathbf{d}_{ij}$.

Figure 6.1 Translational Distance, $d_{ij}$, Used for the Feedback Force Control Law

From equation (6.1), the translational spring-damper virtual model is used for the force computation, where $\mathbf{d}_{ij}$ represents a displacement vector connecting points $P_i$ and $P_j$. $P_i$ corresponds to the tip of the haptic stylus, and $P_j$ corresponds to a contact node on a path or a contact point on an object of interest. The centroid as well as the line of sight are used as geometric features to give the user a visual indication of the intended path.
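A minimal implementation of this control law, assuming the reference point is taken as the projection of the haptic tip onto the constraining line, is sketched below. The gains are illustrative, and for simplicity the damping term here acts on the full tip velocity rather than strictly on the rate of change of d_ij.

#include <cmath>
#include <cstdio>

// Spring-damper assistive force of Eq. (6.1): pull the haptic tip back to
// its projection on the constraining line segment p1 -> p2.
struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

Vec3 assistForce(Vec3 tip, Vec3 tipVel, Vec3 p1, Vec3 p2,
                 double kp, double kv)
{
    Vec3 b = sub(p2, p1);
    double t = dot(sub(tip, p1), b) / dot(b, b);   // projection parameter
    Vec3 proj = {p1.x + t*b.x, p1.y + t*b.y, p1.z + t*b.z};
    Vec3 d = sub(proj, tip);                       // spring displacement
    // Damping applied against the tip velocity (a simplification of the
    // rate-of-change term in Eq. (6.1)).
    return {kp*d.x - kv*tipVel.x, kp*d.y - kv*tipVel.y, kp*d.z - kv*tipVel.z};
}

int main()
{
    Vec3 tip = {0.0, 0.05, 0.0}, vel = {0.0, 0.01, 0.0};
    Vec3 p1 = {0, 0, 0}, p2 = {1, 0, 0};           // path along the x axis
    Vec3 F = assistForce(tip, vel, p1, p2, 50.0, 2.0);  // illustrative gains
    std::printf("F = (%.3f, %.3f, %.3f) N\n", F.x, F.y, F.z);
    return 0;
}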


The displacement vector from $P_i$ to $P_j$ is obtained as:

$$\mathbf{d}_{ij} = \mathbf{p}_j - \mathbf{p}_i \tag{6.2}$$

where $\mathbf{p}_i$ and $\mathbf{p}_j$ are the position vectors extracted from homogeneous transformation matrices expressed with respect to the world coordinate system, {W}. The corresponding length of the spring-damper, $L$, is now defined as:

$$L = \sqrt{\mathbf{d}_{ij}^{\,T}\,\mathbf{d}_{ij}} \tag{6.3}$$

The damping force component is a function of the displacement rate, which is obtained by differentiating Eq. (6.3) with respect to time:

$$\dot{L} = \frac{d}{dt}\sqrt{\mathbf{d}_{ij}^{\,T}\,\mathbf{d}_{ij}} \tag{6.4}$$

After substitution and simplification, Eq. (6.4) yields:

$$\dot{L} = \frac{\mathbf{d}_{ij}^{\,T}\,\dot{\mathbf{d}}_{ij}}{L} \tag{6.5}$$

It can be shown that the time derivatives of the transformation matrices can be expressed in terms of the angular velocities $\omega_i$ and $\omega_j$ (see Appendix E for details) as:

$$\dot{R}_k = S(\omega_k)\,R_k, \qquad k = i, j \tag{6.6}$$

where $S(\cdot)$ denotes the skew-symmetric matrix operator. Finally, the magnitude of the force applied to the user's hand through the haptic device is found to be:

$$F = K_s\,L + C\,\dot{L} \tag{6.7}$$


Comparing Eq. (6.1) and Eq. (6.7), it is observed that $K_p = K_s$ and $K_v = C$; i.e., the shortest distance between the haptic tip position and any point on the connecting line, as shown in Figure 6.1, is taken to be equivalent to the change in length of a virtual spring. Similarly, the rate of change is equivalent to the rate of change of the virtual spring length. As is evident from this derivation, the torsional components were not taken into consideration in the calculation. The Phantom Omni device used in this research does not have built-in actuators such that it can exert torsional forces through the stylus. In the case of a device with such capabilities, the generalized forces can be calculated using the principle of virtual work, where the virtual displacements can be obtained from the differential equation expressed in Eq. (6.7) and the virtual rotation components can be obtained in terms of the Euler angle orientation coordinates [68, 69]. The next section describes how the assistive forces are computed from the sensor information; these forces are sent to the haptic device in real time.

6.3 Sensor-Based Assistance

The sensor (camera and laser range finder) information needs to be mapped to the Cartesian space of the manipulator in order to generate, in real time, an attractive or repulsive force to guide the user until the object of interest is between the gripper fingers. As stated before, the line of sight (LoS) is considered to be the intended or desired path. The constraint frame for the end-effector of the manipulator is defined along the LoS of the camera, with the z axis pointing in the direction of the camera axis and the x axis along the line defined between the initial position of the haptic tip and its projection on the LoS, as shown in Figure 4.3.


There will be measurement errors between the line of sight and the user's input, possibly due to reduced physical performance caused by fatigue of the person interacting with the system, or due to a tremor illness. These error signals are used to compute force constraints that guide the user towards the destination. As mentioned, the force constraints are defined by two different models: a) an attractive or repulsive force to guide the user towards the trajectory, and b) an assistive force to guide the user along the trajectory path. In the case of approaching the surface of a table, the contact force can be computed as a function of the remaining distance to the surface.

The Cartesian motion between the initial position of the manipulator and the goal position is described in terms of robot arm transformations with respect to the base frame of the manipulator. One way to accomplish this is to define a translation along a straight line and a rotation about a fixed axis by an equivalent angle [51, 60] (see Appendix B). As shown in Figure 4.3, the two constraint points are defined by the coordinates $(x_1, y_1, z_1)$ and $(x_2, y_2, z_2)$, respectively. The equation of the 3D line is given by:

$$\frac{x - x_1}{x_2 - x_1} = \frac{y - y_1}{y_2 - y_1} = \frac{z - z_1}{z_2 - z_1} = t \tag{6.8}$$

The projection of the initial position of the end-effector is:

$$\mathbf{p}_{proj} = \mathbf{p}_1 + t\,(\mathbf{p}_2 - \mathbf{p}_1) \tag{6.9}$$


The distance between the projected point and the initial point is given by:

$$d = \left\|\mathbf{p}_{proj} - \mathbf{p}_0\right\| \tag{6.10}$$

Substituting (6.9) into (6.10) and minimizing the distance yields the projection parameter:

$$t = \frac{(\mathbf{p}_0 - \mathbf{p}_1)\cdot(\mathbf{p}_2 - \mathbf{p}_1)}{\left\|\mathbf{p}_2 - \mathbf{p}_1\right\|^2} \tag{6.11}$$

If $D$ is defined as the distance measured using the laser range finder, it can be expressed in terms of the initial and goal Cartesian coordinates; the following computation is performed:

$$D = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2 + (z_2 - z_1)^2} \tag{6.12}$$

The projection of the haptic tip initial position can be obtained by substituting (6.11) into (6.9). The constraint frame for the end-effector of the manipulator can now be obtained by defining the axes as shown in Figure 4.3, where the z axis points in the direction of the constraint line and the x axis lies along the line defined between the initial position of the haptic tip and its projection on the constraint line. The direction of the y axis can be found using the right-hand rule and the orthogonality condition. After normalization, the transformation matrix R in terms of the directional cosines can be found as:


$$R = \begin{bmatrix} \hat{\mathbf{x}} & \hat{\mathbf{y}} & \hat{\mathbf{z}} \end{bmatrix} \tag{6.13}$$

where each column contains the direction cosines of the corresponding axis of the constraint frame. As previously stated, the equivalent single-axis angle method is used to represent a rotation about a single axis to align the end-effector frame to the desired goal configuration. This is also the basis for planning the linear motion for autonomous execution: the straight-line path is divided into N smaller segments, where N depends on the distance of travel, the nominal linear velocity of the end-effector, and the update rate of the trajectory generation thread. To accomplish this task, the inverse kinematic equations of the manipulator are solved at each intermediate position. Two different approaches to solve the inverse kinematic equations are implemented in this work. One approach considers the closed-form solution to obtain the required joint variables to drive the robot arm to the next segment along the linear trajectory; this solution is appropriate when the robot arm is kinematically non-redundant. The second approach is to obtain the joint rates using the inverse Jacobian, resolved-rate algorithm, which allows added flexibility for dealing with kinematically redundant robots.
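The segmentation of the straight-line path can be sketched as follows: N is derived from the travel distance, the nominal end-effector speed, and the update rate of the trajectory-generation thread, and the intermediate Cartesian waypoints are then produced by linear interpolation (each would subsequently be passed to the inverse kinematics). The speed and rate values are illustrative.

#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { double x, y, z; };

// Divide the start-to-goal straight line into N segments, where N follows
// from the distance, the nominal linear speed, and the thread update rate.
std::vector<Vec3> linearTrajectory(Vec3 start, Vec3 goal,
                                   double speed /*m/s*/, double rate /*Hz*/)
{
    double dx = goal.x - start.x, dy = goal.y - start.y, dz = goal.z - start.z;
    double dist = std::sqrt(dx*dx + dy*dy + dz*dz);
    int N = (dist / speed) * rate < 1.0 ? 1
          : static_cast<int>(std::ceil(dist / speed * rate));

    std::vector<Vec3> pts;
    pts.reserve(N + 1);
    for (int i = 0; i <= N; ++i) {
        double s = static_cast<double>(i) / N;     // path parameter in [0,1]
        pts.push_back({start.x + s*dx, start.y + s*dy, start.z + s*dz});
    }
    return pts;   // each waypoint is then solved by the inverse kinematics
}

int main()
{
    // 15 cm travel at 5 cm/s with a 100 Hz update rate -> 300 segments.
    auto path = linearTrajectory({0, 0, 0}, {0.15, 0, 0}, 0.05, 100.0);
    std::printf("%zu waypoints; first step x = %.4f m\n",
                path.size(), path[1].x);
    return 0;
}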


As stated before, the benefit of switching control between the human user and the automatic control is to reduce the burden of executing repeated tasks and to provide an appropriate level of assistance to the user by scaling the motion. As an example of constrained motion, the haptic end-effector linear velocity can be assigned to the robot end-effector velocity. This velocity can be scaled using a scaling factor $k$ in the constrained directions as follows:

$$\mathbf{v}' = \begin{bmatrix} k\,v_x \\ k\,v_y \\ v_z \end{bmatrix} \tag{6.14}$$

Notice that the Z-axis component is not affected by the scale factor because the constraint frame is defined along the desired path; the X and Y directions, however, are scaled by the scaling factor $k$. The resulting velocity components are then used as the input to the resolved-rate algorithm, shown in simplified form in the control diagram; an expanded version as implemented in the real-time telerobotic controller is also provided. First, the current position of the haptic device in its base frame is obtained, and the vector defined from the starting point to the haptic device position is calculated as:

$$\mathbf{a} = \mathbf{p}_h - \mathbf{p}_s \tag{6.15}$$

Similarly, the vector between the starting and goal (destination) points is obtained as:

$$\mathbf{b} = \mathbf{p}_g - \mathbf{p}_s \tag{6.16}$$

Finally, the projection of the haptic position on the desired path is obtained through the use of the dot product as:


$$\mathbf{p}_{proj} = \mathbf{p}_s + \frac{\mathbf{a}\cdot\mathbf{b}}{\mathbf{b}\cdot\mathbf{b}}\,\mathbf{b} \tag{6.17}$$

In Eq. (6.17), the vector $\mathbf{b}$ is equivalent to the line of sight vector defined by:

$$\mathbf{b} = \mathbf{p}_{obj} - \mathbf{p}_s \tag{6.18}$$

where the Cartesian coordinates of the object are represented in the world space following the procedure explained in Chapter 5. The trajectory path or control surface is surrounded by an attractive potential field, the amplitude of which increases with the distance between the end-effector and the projected point. The assistance force vector is calculated as:

$$\mathbf{F} = K\,(\mathbf{p}_{proj} - \mathbf{p}_h) \tag{6.19}$$

For a motion task along the X axis, a general scheme is to constrain the Y and Z axis directions. If the assisted motion is along the Y axis, then the X and Z directions are constrained. Table 6.1 shows the different cases for constrained directions in a motion task.

Table 6.1 Constrained Directions in a Motion Task

Free direction | Constrained directions
X              | Y, Z
Y              | X, Z
Z              | X, Y


In each case, the constraint forces are expressed in Cartesian space, and the corrected coordinates give the new position after the constraint force is applied. Equation (6.19) includes only the spring-type force feedback. Considering the force feedback control law represented by Eq. (6.7), it can be observed that this control law not only compensates for the difference (error signals) between the computer-generated desired path and the deviation from this path caused by the user input, but it also includes a dampening effect. This effect is directly proportional to the velocity component in the opposite direction of the motion. The combined spring-type and damping-type feedback forces help the user to stay on the straight trajectory.

Once the user is moving along the path, additional assistance is provided in the direction along the linear trajectory, as illustrated in Figure 6.2. The linear velocity components are scaled up or down depending upon the user's motion along the trajectory. The illustration shows the scaled velocity vector, the current velocity, and the direction of the desired resultant velocity.

Figure 6.2 Desired Path and "Noisy" Trajectory Input
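Equations (6.14)-(6.19) combine into a short per-cycle computation: project the haptic position onto the start-goal line, generate the spring-type assistance toward the projected point, and scale the transverse velocity components. The sketch below illustrates this; the gain and scale factor are assumed values, not the experimental settings.

#include <cstdio>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

int main()
{
    Vec3 ps = {0, 0, 0};            // start point
    Vec3 pg = {0, 0, 0.3};          // goal point (z is the path direction)
    Vec3 ph = {0.02, 0.01, 0.12};   // current haptic position

    // Eqs. (6.15)-(6.17): projection of the haptic position on the path.
    Vec3 a = sub(ph, ps), b = sub(pg, ps);
    double t = dot(a, b) / dot(b, b);
    Vec3 proj = {ps.x + t*b.x, ps.y + t*b.y, ps.z + t*b.z};

    // Eq. (6.19): spring-type assistance force toward the projected point.
    const double K = 40.0;                           // N/m (assumed gain)
    Vec3 F = {K*(proj.x - ph.x), K*(proj.y - ph.y), K*(proj.z - ph.z)};

    // Eq. (6.14): scale the transverse velocity in the constraint frame;
    // the along-path (z) component passes through unscaled.
    const double k = 0.2;                            // transverse scale factor
    Vec3 v = {0.01, 0.005, 0.04};                    // haptic velocity input
    Vec3 vScaled = {k*v.x, k*v.y, v.z};

    std::printf("F = (%.3f, %.3f, %.3f) N, v' = (%.4f, %.4f, %.4f) m/s\n",
                F.x, F.y, F.z, vScaled.x, vScaled.y, vScaled.z);
    return 0;
}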


The Phantom Omni has built-in force feedback capabilities, and an attractive or repulsive force can be rendered through the haptic device interface to constrain motion using the control law defined by Eq. (6.7). The level of assistance can be modified, as the user's skill in executing a particular task increases, by modifying the scaling factor K (gain) in the haptic control strategy.

6.4 Comments

The Cartesian trajectory generated by positioning and orienting the end-effector toward the object (destination point) is monitored by a separate computational thread. By separating the data acquisition, processing, and communication processes, a highly responsive interaction was attained. Even though the manipulation of objects can be driven through the sense of touch and the optical sensory information while the human is in the loop, the multithreaded implementation at the sensory suite level allows for the possibility of switching supervisory control of the robotic arm to an autonomous mode at the user's command with ease. This transition between a supervisory control mode and an autonomous control mode reduces the burden on the user and reduces the possibility of fatigue during long interactions with the system.

6.5 Summary

In this chapter, the concept of sensor-based assistance is defined. The assistance function calculations are described, as well as the force feedback required to provide the appropriate sensor-assisted function to guide the user's motion. The line of sight concept is considered as a visual indication of the intended linear trajectory of the user.


The differences between the LoS, determined through the use of the sensor data fusion, and the user's input are used to compute the corrective forces that guide the user's motion.


Chapter 7 Experimental Methodology and Testbed for Interactive Simulation

7.1 Introduction

The implementation of a PC-based multithreaded architecture made possible the design and realization of a real-time robotic system with the capabilities to provide sensor-based assistance and haptic manipulation of real and virtual objects. In this chapter, the experiments conducted to validate the control strategies with the actual hardware are described. The testing of the system was conducted on healthy people performing an activity of daily living (ADL) task. Three people were trained to use the Phantom Omni interface and to teleoperate the PUMA manipulator in all control modes in order to familiarize themselves with the system.

This chapter presents the methodology used for the experiments with the actual hardware: a 6-DoF Puma 560 manipulator, a Phantom Omni haptic interface, and the sensory suite consisting of a CCD camera, a Sick DT60 laser range finder, and the PUMA encoders. The performance measures are defined by the "Absolute Position Error" (APE) and "Absolute Orientation Error" (AOE) indicators and the task completion time, which are calculated using the recorded data sets for each experiment. The following list shows the different comparisons made using the APE and AOE indicators for position- and velocity-based control modes:


1. Autonomous Control Mode
2. Position-Based Regular Teleoperation
3. Position-Based Virtual Fixture Teleoperation
4. Position-Based Scaled Teleoperation
5. Velocity-Based Regular Teleoperation
6. Velocity-Based Virtual Fixture Teleoperation
7. Velocity-Based Scaled Teleoperation
8. Force-Based Virtual Fixture Teleoperation

Chapter 9 discusses and analyzes the experimental data gathered for validating the trajectory tracking and assistive capabilities of the system for guiding the user's motion during the execution and successful completion of the task.

7.2 Methodology for Experiments

As previously stated, the testing of the system was conducted on three healthy people performing a "pick up a cup" task. After training the subjects to use the Phantom Omni interface, they moved the PUMA manipulator in all control modes. The test setup included a platform in front of the arm, with two markers indicating the pick-up position and the drop-off (destination) position. These two positions were offset from each other in all three Cartesian directions, as shown in Figure 7.1. A coffee cup was used as the intended target to be grasped and moved from the start to the end positions. The start position was kept constant for all the experiments.


For each test, the position- and velocity-based teleoperation modes were compared to the regular, scaled, and virtual fixture-based teleoperation modes in the following way:

1. Position-Based Regular teleoperation vs. Scaled teleoperation
2. Position-Based Regular teleoperation vs. Virtual Fixture
3. Position-Based Regular teleoperation vs. Autonomous
4. Velocity-Based Regular teleoperation vs. Scaled teleoperation
5. Velocity-Based Regular teleoperation vs. Virtual Fixture
6. Velocity-Based Regular teleoperation vs. Autonomous
7. Position-Based Regular teleoperation vs. Force-Based

Figure 7.1 Experimental Setup for the "Pick Up a Cup" Task


The user starts the operation under the supervision and observation of an attendant, who brings the robot arm to the start position. The user always starts with the position-based teleoperation mode and then switches to the test mode. While performing an ADL task the user can switch to any mode; however, for the purposes of testing, the user toggles between the position-based teleoperation and the tested mode. The user has to toggle to position-based teleoperation every time to orient the hand so that it is able to point to target objects, grasp the cup, and drop the cup at the destination point, as these steps require re-orientation of the end-effector. For the automatic, scaled, and virtual fixture-based teleoperation modes, once the object is located by teleoperation, the user pushes the Phantom Omni stylus button to lock the target and generate the desired trajectory. Once the user reaches the target vicinity, the user teleoperates the arm to adjust the gripper and grasp the object. The user then points to the destination marker and pushes the Omni stylus button again to lock the destination coordinates, moves in the same fashion to the drop-off point, and releases the object.

In the Scaled Teleoperation mode, the user input was scaled 3X when it was along the trajectory generated by the laser, and 0.2X when it was perpendicular to the trajectory. In the case of virtual fixtures, all positions and orientations coming from the user input were locked (scaled down to 0X) except the position parallel to the trajectory, which was scaled to 3X. Each control mode was tested five times, and the elapsed time to complete the task was recorded. The trajectory generator thread generates a log file recording the transformation matrices of the tip, the elapsed time, and the gripper status at every loop. Data from this file were conditioned and used for the data analysis.


7.3 Visual and Haptic Testbed to Control a 6-DoF Robot Arm

In the experiments, the Phantom Omni haptic interface from SensAble Technologies is used as the master. It runs on a Pentium computer with a 1 GHz single processing unit. The Phantom Omni device uses the OpenHaptics software, which runs on the Windows XP OS. A Microsoft Visual Studio C++ program was developed to run the Phantom Omni controller and render the virtual environment using OpenHaptics [70] and OpenGL library functions as well as APIs. The commands for creating and interfacing the PUMA software controller and the Phantom Omni controller were also embedded in the same program. The protocol for sending and receiving information between the Omni and the PUMA controller is based on User Datagram Protocol (UDP) sockets. The UDP socket programming class implemented is a derived class from the Microsoft socket programming library.

The program running on the Omni controller is multithreaded. The threads include the main application thread, the graphics thread, the haptics thread, the collision detection thread (this thread runs in the background and is responsible for collisions among objects in the virtual environment, not real objects), and the communications thread for receiving data from the PUMA controller. The main application thread starts the other threads, initializes the Phantom Omni, creates the sockets for communication, and integrates the whole application. The graphics thread renders the graphics scene at approximately a 30 Hz refresh rate; this graphics scene is a virtual environment that helps the user to engage and disengage the PUMA in teleoperation (Figure 7.2). The haptics thread provides the haptic feedback to the user at a refresh rate of 1000 Hz, and the collision detection thread performs the computations for haptic force rendering.
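A minimal Winsock UDP sender of the kind used between the two controllers is sketched below. The port number, the destination address, and the packet layout (a bare 6-element joint command) are hypothetical, since the dissertation does not specify the wire format.

#include <winsock2.h>
#include <cstdio>
#pragma comment(lib, "ws2_32.lib")

int main()
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;

    // Connectionless UDP socket, as used between the Omni and PUMA hosts.
    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

    sockaddr_in dest = {};
    dest.sin_family = AF_INET;
    dest.sin_port = htons(5000);                      // assumed port
    dest.sin_addr.s_addr = inet_addr("192.168.0.10"); // assumed PUMA host

    // Hypothetical payload: a 6-element joint command in radians.
    double jointCmd[6] = {0.1, -0.2, 0.3, 0.0, 0.0, 0.0};
    sendto(s, reinterpret_cast<const char*>(jointCmd), sizeof jointCmd, 0,
           reinterpret_cast<const sockaddr*>(&dest), sizeof dest);

    std::printf("joint command sent\n");
    closesocket(s);
    WSACleanup();
    return 0;
}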


Figure 7.2 Virtual Environment for Teleoperation of the PUMA Manipulator

The teleoperated robot consists of a 6-DoF Puma 560 manipulator. As explained in Chapter 3, the Puma software controller is a form of PD-plus-gravity-compensation controller. The robot arm is equipped with a modified QuickCam MP Orbit camera (an off-the-shelf USB camera) and a Sick DT60 laser range finder (see Appendix G), as shown in Figure 7.3. In its original format, the camera was not suitable to be mounted at the wrist of the robot arm, so a new case was built to accommodate the integrated circuit, the lens, and the cables. Also, the face detection and auto-zoom features of the MP Orbit model were turned off in order to implement the calibration procedure described in Chapter 5. This software runs on a dual-core computer with the Windows XP OS. The sensors (the camera and the Sick DT60 laser range finder) and a 4-DoF Barrett Hand (Figure 7.3) were attached to the wrist of the Puma 560 manipulator.


Figure 7.3 Sensory Suite Devices: Logitech MP Orbit™ CCD Camera, Sick DT60 Laser Range Finder, Phantom Omni Haptic Device, and 3-Finger Barrett Hand

Figure 7.4 shows the camera and the DT60 laser as they are mounted on the wrist of the Puma 560 robot arm in the experimental setup. The Barrett Hand is also shown.

Figure 7.4 Camera and Sick DT60 Laser Range Finder Mounted at the Puma's End-Effector


As shown in Figure 7.5, when the user operates the robot arm and locates an object of interest, a stream of images of the object in the field of view is processed for geometrical information computations.

Figure 7.5 Results of the Segmentation and Feature Extraction Processes

The segmentation and feature extraction processes that take place are also shown in Figure 7.5. The first window to the left presents the object as seen from the camera. The crosshair lines, overlaid on the centered image, are used to emphasize the centroid of the object of interest with respect to the screen coordinate system located at the top left corner of the viewport. The black-and-white image to the right is the image that results after applying the edge detection algorithm. As mentioned, the system includes two algorithms for edge detection for added flexibility: Sobel and Canny. However, only one of these edge detection algorithms must be active when the experiments are performed.


The Canny edge detector is used in the presented computations because of its capabilities to smooth the image and to filter noise in the original image.

7.4 Haptic Interface and Cartesian Motion

During teleoperation of the robot arm through the haptic interface, the real-time controller receives the latest position and velocity updates from a virtual environment, as shown in Figure 7.6.

Figure 7.6 Virtual Environment and 3D Constraint Plane for Haptic Control, showing the constraint plane, the linear trajectory, the workspace, and the virtual solid cube

As explained before, the user engages the Puma using the toggle buttons available on the stylus. The Phantom Omni control software uses the input from the two buttons to drag and drop the virtual cube, as shown in Figure 7.6.


This way, the user can move the cube to the center of the screen when needed and re-engage the manipulator with more screen space available in the virtual environment.

7.5 Performance Measures

The performance measures defined in this work are associated with the trajectory tracking when the position-based or velocity-based control modes are active. In this case, two performance indices were used to measure the error associated with the position and orientation in regular, scaled, and virtual fixture teleoperation. The performance measures were defined by the "Absolute Position Error" (APE) and the "Absolute Orientation Error" (AOE) indicators. The following list shows the different comparisons made between the APE and AOE indicators for position- and velocity-based control modes:

1. Autonomous, Force-based, and Motion-based Virtual Fixture Teleoperation
2. Force-based Virtual Fixture, Regular, Scaled, and Virtual Fixture Teleoperation
3. Autonomous, Velocity-Based Scaling, Velocity-Based Virtual Fixture, and Force-based Virtual Fixture
4. Position-Based Regular teleoperation vs. Scaled teleoperation
5. Position-Based Regular teleoperation vs. Virtual Fixture
6. Position-Based Regular teleoperation vs. Autonomous


7. Velocity-Based Regular teleoperation vs. Scaled teleoperation
8. Velocity-Based Regular teleoperation vs. Virtual Fixture
9. Velocity-Based Regular teleoperation vs. Autonomous

Each task was repeated five times for each mode of operation, and the calculations for the associated indicators of the Absolute Position Error as well as the Absolute Orientation Error were based on the following definitions.

7.5.1 The Absolute Position Error (APE)

This performance measure defines the error between the commanded linear position components $(x_d, y_d, z_d)$ and the actual position achieved by the software controller $(x_a, y_a, z_a)$. In other words, the APE is the Cartesian distance between the desired and the actual end-effector position [70]. This measure is obtained by the evaluation of Eq. (7.1) as follows:

$$APE = \sqrt{(x_d - x_a)^2 + (y_d - y_a)^2 + (z_d - z_a)^2} \tag{7.1}$$

where $(x_d, y_d, z_d)$ are the desired 3D coordinates of the end-effector in the base frame of the manipulator and $(x_a, y_a, z_a)$ are the achieved 3D coordinates of the drop-off point (destination), also with respect to the base frame. Figure 7.7 shows the absolute position error when the robot arm is commanded in simulation to follow a straight-line trajectory between the goal position and a target situated 15.0 cm away from the initial position of the end-effector.


Figure 7.7 Absolute Position Error (APE)

7.5.2 The Absolute Orientation Error (AOE)

This performance measure defines the error related to the rotation matrix elements, as described in Chapter 3. It specifies an equivalent single-axis rotation angle about a vector defined between the desired and the current rotation of the end-effector of the robot arm [70]. Equation (7.2) defines the rotation error:

$$\theta_e = \cos^{-1}\!\left(\frac{\operatorname{trace}\!\left(R_a\,R_c^{\,T}\right) - 1}{2}\right) \tag{7.2}$$

where


$R_a$ = the (3x3) achieved rotation matrix at the destination (defined as the DROP-OFF POINT), and

$R_c$ = the (3x3) current rotation matrix evaluated at each time interval.

The trace function in Eq. (7.2) corresponds to the sum of the diagonal elements of the product of the achieved and current rotation matrices, which is also the sum of the eigenvalues of the product. The angle $\theta_e$ specifies an equivalent single-angle rotation about a vector defined between the final and the current orientation of the end-effector of the manipulator. Figure 7.8 shows the results of the evaluation of Eq. (7.2) in an offline program in MatLab. As before, the absolute orientation error is calculated for the straight-line trajectory defined between the goal position and a target situated 15.0 cm away from the initial position of the end-effector. As can be observed, the maximum orientation error obtained is about 0.000001 radians. Given that the initial orientation error was zero, the orientation error should be expected to remain zero; however, accumulated errors in the computation prevent this from happening in the simulation.
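Both indicators reduce to a few lines of code. The sketch below evaluates Eq. (7.1) and Eq. (7.2) with Eigen, clamping the cosine argument to guard the arccosine against round-off of exactly the kind noted above.

#include <Eigen/Dense>
#include <algorithm>
#include <cmath>
#include <iostream>

// Eq. (7.1): Cartesian distance between desired and achieved positions.
double ape(const Eigen::Vector3d& desired, const Eigen::Vector3d& achieved)
{
    return (desired - achieved).norm();
}

// Eq. (7.2): equivalent single-axis angle from the trace of Ra * Rc^T.
double aoe(const Eigen::Matrix3d& Ra, const Eigen::Matrix3d& Rc)
{
    double c = ((Ra * Rc.transpose()).trace() - 1.0) / 2.0;
    c = std::max(-1.0, std::min(1.0, c));  // guard acos against round-off
    return std::acos(c);
}

int main()
{
    Eigen::Vector3d pd(0.10, 0.20, 0.30), pa(0.11, 0.19, 0.30);
    Eigen::Matrix3d Ra = Eigen::Matrix3d::Identity();
    Eigen::Matrix3d Rc = Eigen::AngleAxisd(0.05, Eigen::Vector3d::UnitZ())
                             .toRotationMatrix();

    std::cout << "APE = " << ape(pd, pa) << " m, "
              << "AOE = " << aoe(Ra, Rc) << " rad" << std::endl;
    return 0;   // AOE recovers the 0.05 rad rotation about the z axis
}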


Figure 7.8 Absolute Orientation Error (AOE)

The following steps describe the process after recording every user interaction in the autonomous and teleoperation control modes:

1. During regular teleoperation, the system does not use the external sensory input for assisting the user's motion. For the automatic, scaled, and virtual fixture teleoperation modes, once the object is located using teleoperation mode, the user pushes the Omni stylus button to lock the target and generate the desired trajectory based on the sensory input. The user then teleoperates the robot arm using autonomous, scaled, or virtual fixture mode until the gripper reaches the target vicinity. Once the gripper reaches the target vicinity, the user teleoperates the arm to adjust the gripper and grasp the object.


Then, the user uses regular teleoperation to point to the destination marker, pushes the Omni stylus button again to lock the destination coordinates, moves in the same fashion to the drop-off point, and releases the object. In the case of the force-based virtual fixture, the constraint forces are derived from the laser input.

2. The position (X, Y, Z) and the orientation angles of the end effector of the Puma manipulator, as well as the real-time timing, are recorded in text files by the real-time application for all the experiments: autonomous, position-based, and velocity-based (regular, scaled, and virtual fixture) teleoperation. The initial point (START POINT), the pick-up point (PICKUP POINT), and the drop-off point (DROP POINT) are also recorded in the text file.

3. The recorded data are then transferred to the visualization application in MatLab for plotting and further analysis. Transferring the angles is more efficient than transferring the assembled (3x3) rotation matrix as registered by the real-time software.

4. For every recorded configuration, a 3D plot showing the 3D Cartesian position (X, Y, Z) is obtained. It is important to mention that, even if the autonomous mode is being tested, there is a small part of the trajectory for which the user needs to switch back to regular teleoperation in order to re-orient the gripper and avoid an obstacle intentionally placed between the pick-up and drop points. Once the obstacle is avoided, the user can switch back to autonomous mode, or any of the tested modes. For instance, Figure 9.3 presents the case where the user switched back to autonomous mode for the last portion of the path to the drop-off point.


5. The (X, Y, Z) coordinates of the end effector from the START POINT to the DROP POINT are used to calculate the "Absolute Position Error," APE, as given by Eq. (7.1). The result from Eq. (7.1) then corresponds to the traveled distance from start to destination. This value can be used as an indicator to measure which teleoperation mode can reach the destination by traveling the least distance as a function of time. For instance, this measure is used to compare the regular teleoperation mode, which provides no assistance, to the autonomous, scaled, force-based, and motion-based virtual fixture teleoperation modes.

6. The calculation of the "Absolute Orientation Error" (AOE) is more involved. First, the Euler angles are used in the offline program to compute the rotation matrix (the details are shown in Appendix E). Eq. (7.2) is then evaluated at every sampled point recorded in the text file.

7. The APE and AOE measures of the tested control modes described in section 7.2 are plotted versus time, and comparisons are made to determine their effectiveness. For both performance indicators, the area under the curve represents a measurement of the distance traveled (START POINT to DROP POINT) and the time to complete the pick-up-a-cup task. By comparing the areas covered by the autonomous control mode, force- and motion-based virtual fixtures, and scaled teleoperation experiments, it is possible to determine the effectiveness of each form of control for completing the pick-up-a-cup task and other ADL tasks. This area can be determined by numerical integration of the APE curve, using a fixed increment of time as registered by the real-time system, as sketched below.
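A minimal sketch of this numerical integration, assuming uniformly spaced samples of the APE curve and using the trapezoidal rule; the sample values below are made up:

    #include <vector>
    #include <cstdio>

    // Area under a sampled APE curve via the trapezoidal rule.
    // 'ape' holds APE samples (m); 'dt' is the fixed time increment (s).
    double apeArea(const std::vector<double>& ape, double dt) {
        double area = 0.0;
        for (size_t i = 1; i < ape.size(); ++i)
            area += 0.5 * (ape[i-1] + ape[i]) * dt;
        return area;   // smaller area -> more effective control mode
    }

    int main() {
        std::vector<double> samples{0.15, 0.12, 0.08, 0.05, 0.02, 0.0};
        std::printf("area = %f m*s\n", apeArea(samples, 0.01));
        return 0;
    }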


The smaller the area, the better the effectiveness of the method for accomplishing the pick-up-a-cup task.

7.6 Summary

In this chapter, the methodology followed to conduct the experiments, as well as the experimental testbed, was described. The performance measures were also defined. A pick-up-a-cup task, a common activity of daily living (ADL), is used as the testing task. Eight testing scenarios were defined for position-based and velocity-based control modes for later analysis. The performance corresponding to the autonomous control, regular, scaled, force-based, and motion-based virtual fixture teleoperation modes is defined in terms of the "Absolute Position Error" (APE) and the "Absolute Orientation Error" (AOE). The area under the APE curve can be used as a qualitative indicator for comparing each of the operation modes. Results including these comparisons are presented later in Chapter 9.


Chapter 8 Virtual Reality Simulation Testing

8.1 Introduction

In robotics, once the governing equations of robot arm motion are defined in terms of the virtual object variables, a computer-generated version of the real robot arm can be used for testing the control strategies without the danger of damaging the hardware. Virtual Reality (VR) provides a widely accepted computer interface that enables realistic simulations of physical systems. In the case of a robot arm, both the forward and inverse kinematics solutions can be defined in terms of the joint angles of the virtual reality standard transformations defined by the scripting language known as the Virtual Reality Modeling Language (VRML). In practice, the appropriate mapping of the Cartesian axes between the reference frames defined for the robot arm and the haptic device can be easily visualized in the virtual environment by moving the haptic stylus or through a graphical user interface. This way, the inherent complexity of designing and testing a real-time controller with a haptic interface directly on the physical system can be reduced by performing proofs of concept of many of the programming tasks with realistic and believable visualizations and simulations. In this chapter, the haptic control of the Puma 560 model using VR techniques is presented, as well as the communication protocol developed in order to resolve the high timing demands of the haptic loop and the integration of the different programming workspaces.


8.2 Virtual Reality Simulation of the Puma 560 Manipulator

Virtual Reality simulation of the robot arm enables the design and testing of sophisticated control strategies in a "proof of concept" sense without the danger of damaging the real robot arm. As discussed in Chapter 4, the teleoperation tasks are executed through the use of the Phantom Omni for force feedback and the Puma 560 robot arm interface, which has very different kinematics compared to the Omni. The resulting transformations from the evaluation of their respective kinematics equations need to be mapped (in joint space or Cartesian space). For simulation of the VR robot arm motion, both the forward and inverse kinematics solutions can be defined in terms of the joint angles of the virtual reality transformations (known as "Transform" objects in the VRML script language). The appropriate mapping of the Cartesian axes between the reference frames defined for the robot arm and the haptic device can be easily visualized in the virtual environment. In this work, the visualizations of the motion of the Puma 560 (with and without haptic control) were realized using the VR toolbox, as shown in Figure 8.1. The VR toolbox is an add-on library used for the creation and visualization of virtual models within the MatLab/Simulink workspace. This toolbox allows complete control of the scripting files associated with the different parts of the robot construction (links, joints, base stand, and end effector). The VR toolbox follows the VRML97 standard, which means that 3D CAD modeling software such as SolidWorks can be used to create the solid models. The CAD model (parts and assembly) can then be ported to the VRML97 format following a straightforward procedure.


Figure 8.1 Virtual Reality Model of the Puma 560 Robot Arm

8.3 Control of the VR Model of the Puma 560 Manipulator

The VR model of the Puma 560 can be driven in two different ways. One way is using a simple graphical user interface (GUI), as shown in Figure 8.2. This option enables the user to perform the virtual simulations of the robot arm in purely robotic mode (without the haptic interface). The GUI was developed as a control panel with toggle buttons and scroll bars for this form of operation. As shown, the GUI presents toggle buttons for the selection of the type of control, either joint or Cartesian space. This GUI provides an intuitive interface to the user.


The toggle button action prevents the user from trying to activate the two available control modes simultaneously.

Figure 8.2 Control Panel for Joint and Cartesian Space VR Simulations

If the "Joint Control" toggle button is activated on the control panel, the scroll bars can be used to change each individual joint angle value in increments of 1 deg. The minimum value of the scroll bar is zero, and the maximum value corresponds to the joint limit as defined in the real robot arm configuration files. In this case, the homogeneous transformation matrices are evaluated (see Appendix A) and the results are assigned to the corresponding joint transformation matrix in the VRML script file.
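As an illustration of this evaluation step, the following minimal C++ sketch builds the homogeneous transformation of a single joint under the standard Denavit-Hartenberg convention; the parameters a, alpha, d, and theta below are placeholders, not the actual Puma 560 values given in Appendix A:

    #include <array>
    #include <cmath>
    #include <cstdio>

    using Mat4 = std::array<std::array<double, 4>, 4>;

    // Homogeneous transform between consecutive links for one joint,
    // using the standard Denavit-Hartenberg parameters (a, alpha, d, theta).
    Mat4 dhTransform(double a, double alpha, double d, double theta) {
        double ct = std::cos(theta), st = std::sin(theta);
        double ca = std::cos(alpha), sa = std::sin(alpha);
        return {{{ ct, -st*ca,  st*sa, a*ct },
                 { st,  ct*ca, -ct*sa, a*st },
                 {  0,     sa,     ca,    d },
                 {  0,      0,      0,    1 }}};
    }

    int main() {
        Mat4 T = dhTransform(0.3, 0.0, 0.0, 0.5);  // placeholder parameters
        std::printf("T[0][3] = %f\n", T[0][3]);
        return 0;
    }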


On the other hand, if the "Cartesian Control" toggle button is activated, the user is able to move the end effector along the 3D axis directions (X, Y, Z), and the solution of the inverse kinematics problem is required. In this case, two solutions were implemented: the first is the "closed-form" solution available for the Puma 560, and the second is the resolved-rate algorithm based on the inverse Jacobian of the robot arm. The latter is more convenient when a closed-form solution is not available, as is the case for kinematically redundant robot arms. The details of this algorithm can be found in Chapter 3. The second way of driving the VR model is using the haptic device for teleoperation of the virtual model of the robot arm, as shown in Figure 8.3. In this case, the user is provided with a virtual environment where a solid object (red) is displayed, which the user can "touch" with the Omni's stylus. A separate window is then shown with the VR model of the Puma 560 tracking the "haptic tip" of the Phantom Omni device when the cube is "grasped" with the stylus.

Figure 8.3 Haptic VR Puma 560 Graphical User Interface


8.4 VR Linear Trajectory Simulation

A major benefit of the VR toolbox in MatLab, in addition to the visualization capabilities, is the availability of robust built-in numerical functions for linear algebra, inverse and pseudo-inverse algorithms, optimization, and singular value decomposition, among others. Taking advantage of these capabilities, and in preparation for the implementation of the real-time trajectory generation in QNX, a MatLab script program was developed in order to compare the results from the VR simulation and the actual implementation in C++ code. The algorithm for the linear trajectory is based on the Equivalent Single-Axis Rotation Method, with provisions taken to avoid representational singularities (see Appendix B). Once the linear trajectory is generated, the required torques to drive the arm to the final destination need to be computed. As discussed in Chapter 3, the implementation of the resolved-rate control technique involves the computation of the Jacobian and the inverse of the Jacobian of the robotic arm. In QNX, all required numerical solutions must be implemented in C++, and the results need to be validated. The availability of the results from the simulation makes it easier to debug potential errors during the computation of the different numerical algorithms in C++ running under the QNX O/S. In MatLab, the script requires as input arguments a homogeneous transformation matrix defining the initial position and orientation of the end effector and the final transformation matrix defining the desired (goal) destination. Both transformation matrices are described relative to the base reference frame of the manipulator. Also, the script expects the desired linear speed of the end effector as an input argument (0.2 m/s in this simulation).
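To make the resolved-rate idea concrete, the following self-contained C++ sketch integrates joint rates from qdot = J^{-1} v on a planar two-link arm; the full implementation uses the 6x6 Puma Jacobian described in Chapter 3, and the link lengths and commanded velocity here are made-up values:

    #include <cmath>
    #include <cstdio>

    // Illustrative resolved-rate loop on a planar 2-link arm.
    int main() {
        const double l1 = 0.4, l2 = 0.3;        // assumed link lengths (m)
        double q1 = 0.3, q2 = 0.8;              // initial joint angles (rad)
        const double vx = 0.05, vy = 0.0;       // desired tip velocity (m/s)
        const double dt = 0.001;                // integration step (s)

        for (int k = 0; k < 2000; ++k) {
            // Planar Jacobian of the 2-link arm
            double j11 = -l1*std::sin(q1) - l2*std::sin(q1+q2);
            double j12 = -l2*std::sin(q1+q2);
            double j21 =  l1*std::cos(q1) + l2*std::cos(q1+q2);
            double j22 =  l2*std::cos(q1+q2);
            double det = j11*j22 - j12*j21;     // singular when q2 = 0 or pi
            // Resolved rate: qdot = J^{-1} * v
            double q1dot = ( j22*vx - j12*vy) / det;
            double q2dot = (-j21*vx + j11*vy) / det;
            q1 += q1dot*dt;                     // Euler integration
            q2 += q2dot*dt;
        }
        std::printf("final joints: %f %f rad\n", q1, q2);
        return 0;
    }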


The following results were obtained by commanding the VR model of the Puma 560 to travel from its predefined ready (initial) position to the predefined destination. The corresponding homogeneous transformation matrices are given by Eqs. (8.1) and (8.2). The specified initial and goal transformations correspond to a 15.0 cm displacement of the end effector from its initial position along its own z-axis. Figure 8.4 shows the required joint angles of the manipulator, and Figure 8.5 shows the commanded linear trajectory. This is an important validation phase before the Phantom Omni differential transformations are used to command motion actions to the Puma manipulator.


Figure 8.4 Required Joint Angles for the Predefined Linear Trajectory Path

Figure 8.5 End Effector Displacements from Initial to Goal Position


8.5 Haptic Feedback and Assist Functions in Simulation

Figure 8.6 shows a simulation of a haptically rendered cube and a Bezier-type curve trajectory, where features of the OpenGL, HLAPI, and HDAPI libraries are combined for the simulation of a teleoperation task. The solid cube was created using graphic functions available through the OpenGL graphics and HLAPI libraries. On the other hand, the Bezier points were generated using the classical algorithm in C++ and then displayed using OpenGL vertex structures.

Figure 8.6 Bezier Curve Trajectory and Haptically Rendered Cube

During the interaction, the user approaches the Bezier trajectory. The assistance provided at this instant is a "stick" friction effect, running at the haptic servo-loop update rates.


The effect is activated when the user is in close proximity to the trajectory (a distance equivalent to the radius of the sphere representing the haptic tip in the virtual environment), and a spring-damper force is activated once the user is following the path. In other words, the haptic interface provides guidance along the trajectory. The resultant force is transmitted to the user's hands through the Phantom Omni using the method explained in Chapter 4. In this simulation, the haptic device is used for sensing proximity and for actuation in the form of force feedback to the user's hand. Typical "stick" friction forces are shown in Figure 8.7; both the original and filtered data are shown, with force F (N) plotted versus time t (s).

Figure 8.7 Experimental Data of Forces Resulting from a Typical Interaction
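A minimal C++ sketch of how such a proximity-gated spring-damper guidance force can be computed follows; the stiffness, damping, and activation radius are illustrative values, not the tuned ones:

    #include <cmath>
    #include <cstdio>

    struct Vec3 { double x, y, z; };

    // Spring-damper pull toward the closest point on the trajectory,
    // active only inside the activation radius around the path.
    Vec3 assistForce(const Vec3& tip, const Vec3& closest, const Vec3& tipVel,
                     double radius, double k, double b) {
        Vec3 e{ closest.x - tip.x, closest.y - tip.y, closest.z - tip.z };
        double dist = std::sqrt(e.x*e.x + e.y*e.y + e.z*e.z);
        if (dist > radius)                       // outside: no assistance
            return {0.0, 0.0, 0.0};
        return { k*e.x - b*tipVel.x,             // F = k*e - b*v
                 k*e.y - b*tipVel.y,
                 k*e.z - b*tipVel.z };
    }

    int main() {
        Vec3 tip{0.010, 0.002, 0.0}, closest{0.010, 0.0, 0.0};
        Vec3 vel{0.0, 0.01, 0.0};
        Vec3 F = assistForce(tip, closest, vel, 0.005, 200.0, 2.0);
        std::printf("F = (%f, %f, %f) N\n", F.x, F.y, F.z);
        return 0;
    }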


8.6 Comments on the Haptic and VR Model Simulations

The integration of the VR toolbox with the different motion algorithms to drive the VR robot arm model in pure robotic mode occurs within the same MatLab workspace. Therefore, no communication issues are involved. However, when the haptic control is integrated with the virtual reality environment (the solid cube created with OpenGL) and the VR toolbox in MatLab, a different approach is required in order to make the virtual simulations responsive and stable at both ends. As discussed in Chapter 4, the Phantom Omni model uses the OpenHaptics libraries for the Windows OS. To have access to those libraries, the C++ programming language is used. The VR simulation running in the MatLab environment needs to be interfaced with the HDAPI/HLAPI libraries for haptically rendering the OpenGL virtual objects in C++. A multithreaded application interface was developed to make the separate workspaces communicate back and forth for data interchange. This component of the application is based on UDP sockets running as a separate thread, and the technique is further explained next.

8.7 Communication Protocol

As previously stated, the VR simulation and the haptic control software run in two different workspaces. A network protocol based on the User Datagram Protocol (UDP) was developed in order to interface the MatLab workspace used for the VR simulations and the C++ programming language used for the haptic control. A single packet contains the joint angles and a time stamp transferred to the MatLab workspace.
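A minimal C++ sketch of what such a datagram and a receiver-side freshness check might look like is shown below; the field names and layout are illustrative, not the original packet format:

    #include <cstdint>
    #include <cstdio>

    struct JointPacket {
        double   joints[6];   // Puma joint angles (rad)
        uint64_t stamp;       // sender time stamp / sequence counter
    };

    // Drop stale or out-of-order datagrams, keeping only newer state.
    bool acceptPacket(const JointPacket& p, uint64_t& lastStamp) {
        if (p.stamp <= lastStamp)
            return false;     // late or duplicated packet: discard
        lastStamp = p.stamp;
        return true;
    }

    int main() {
        uint64_t last = 0;
        JointPacket a{{0,0,0,0,0,0}, 2}, b{{0,0,0,0,0,0}, 1};
        std::printf("%d %d\n", acceptPacket(a, last), acceptPacket(b, last)); // 1 0
        return 0;
    }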


As stated in Chapter 3, the protocol design includes features to deal with the possibility of data losses or out-of-order sequences. For this particular implementation, a time stamp variable was used to prevent these problems. The interfacing of the haptic control and the VR simulation software implements four (4) main threads in C++ running simultaneously with different update rates. The different threads are:

1. The graphics thread.
2. The haptic loop thread.
3. The collision detection thread.
4. The communication thread.

Of these four threads, only the communication thread implementation is different from the physical simulation (as discussed in Chapter 3). This is due to the fact that MatLab does not provide functionalities for handling real-time clocks or synchronization mechanisms. The solution was to use regular timers and standard UDP-based socket programming techniques in the MatLab programming environment.

8.8 Comments on the Communication Protocol in the Simulation Program

The communication thread provided a stable and acceptable response time for interfacing the VR simulations with the Phantom Omni controller when used for short periods of time. However, when the interface is used for an extended time, the communication between the C++ application and the MatLab simulation is inconsistent and unreliable.


The dynamic data exchange API responsible for transferring the UDP packets between the MatLab workspace and the sockets program in C++ fails to meet the high timing constraints of the Phantom Omni while, at the same time, updating the virtual environment during the simulation. Nevertheless, the interfacing between the VR simulation in MatLab and the OpenHaptics libraries in C++ creates a realistic look and appearance of the robot arm, as well as a friendlier graphical user interface (GUI) for testing and debugging.

8.9 Summary

The use of the VR simulation provides a flexible visualization tool for testing the purely robotic control mode as well as the haptically driven manipulator. The virtual simulations allow validating the actual algorithms for teleoperation developed in C++ and the QNX RTOS. The capability of matching the homogeneous transformations resulting from the kinematics analysis and the transformations programmed in VRML scripts makes it possible to experiment with and develop more efficient interfaces and communication techniques. The implementation, as well as the debugging, of the different control algorithms and the required numerical approximation methods, both closed-form and resolved-rate, is greatly facilitated by the built-in linear algebra scripts available in MatLab and the visualization facilities available in the Virtual Reality Toolbox.


Chapter 9 Results and Discussion

9.1 Introduction

The proposed model was tested in eight different modes of operation. These modes consisted of regular, scaled, and virtual fixture teleoperation using position-based and velocity-based control, autonomous mode, and force-based virtual fixture (for a total of 8), as described in Chapter 7. Each of these modes of operation comprised five repetitions of each experiment, for a total of forty (40) experiments. Three users executed these experiments, for a total of 120 experimental data sets. This chapter presents the results of these experiments. Results and discussion of the virtual reality simulation are also presented in this chapter.

9.2 Interactive Simulations Results

The experiments were conducted based on the methodology presented in section 7.2. In all these experiments, when position-based control is activated, the user teleoperates the Phantom Omni interface to move the PUMA to the desired position and orientation. For instance, in order to select a target object using the laser pointer, the user will move the Omni tip to a configuration such that the PUMA end effector points to the target object. On the other hand, when velocity-based control is activated, the Phantom Omni interface position determines the Puma end effector speed and direction.
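A minimal C++ sketch of this kind of position-to-velocity mapping is shown below; the neutral center position and the gain are illustrative assumptions, not the implemented values:

    #include <cstdio>

    struct Vec3 { double x, y, z; };

    // Velocity mode: the commanded slave velocity is proportional to the
    // Omni tip's offset from a neutral center; holding the stylus off
    // center sustains a constant velocity, re-centering stops the arm.
    Vec3 velocityCommand(const Vec3& omniTip, const Vec3& center, double gain) {
        return { gain*(omniTip.x - center.x),
                 gain*(omniTip.y - center.y),
                 gain*(omniTip.z - center.z) };
    }

    int main() {
        Vec3 center{0, 0, 0}, tip{0.02, 0.0, -0.01};   // meters
        Vec3 v = velocityCommand(tip, center, 2.0);    // assumed gain (1/s)
        std::printf("v = (%f, %f, %f) m/s\n", v.x, v.y, v.z);
        return 0;
    }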


In other words, when velocity control is used, the Puma end effector speed changes proportionally to the changing position of the Phantom Omni interface. When the specified velocity is reached, it is maintained until the command from the Omni is changed. Under this mode, the user moves the Omni end effector once to select a direction and speed for the Puma end effector, holds the Omni end effector steady until the gripper mounted on the Puma is close to the target object, and then returns the Omni end effector to the center in order to stop close to the target. The definitions of these experiments are described as follows:

a) Regular Teleoperation Mode: the user does not receive any assistance from the sensor-based assist system.

b) Scaled Teleoperation Mode: the user input is scaled 3X when it is along the trajectory generated by the laser, and 0.2X when it is perpendicular to the trajectory (see the sketch after this list).

c) Virtual Fixture Teleoperation Mode: all positions and orientations coming from the user input are locked except the position parallel to the trajectory, which is scaled to 3X.

d) Autonomous Mode: the user points the laser in the direction of the target object and commands the Puma manipulator to follow the trajectory.

e) Force-based Virtual Fixture Mode: the user's motion is constrained by forces derived from the trajectory sensed using the laser range finder.
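The following self-contained C++ sketch illustrates the scaling of modes (b) and (c): the master displacement is decomposed into components parallel and perpendicular to the laser-generated trajectory and scaled by 3.0 and 0.2, respectively; the vectors below are made-up values. Setting the perpendicular gain to zero gives the motion-based virtual fixture of mode (c):

    #include <cstdio>

    struct Vec3 { double x, y, z; };

    // Anisotropic scaling of a master displacement d relative to the
    // unit trajectory tangent tHat: 3x along the path, 0.2x across it.
    Vec3 scaledCommand(const Vec3& d, const Vec3& tHat) {
        double along = d.x*tHat.x + d.y*tHat.y + d.z*tHat.z;
        Vec3 par  { along*tHat.x, along*tHat.y, along*tHat.z };
        Vec3 perp { d.x - par.x,  d.y - par.y,  d.z - par.z };
        const double kPar = 3.0, kPerp = 0.2;   // gains from the text
        return { kPar*par.x + kPerp*perp.x,
                 kPar*par.y + kPerp*perp.y,
                 kPar*par.z + kPerp*perp.z };
    }

    int main() {
        Vec3 t{1, 0, 0};               // trajectory tangent (unit vector)
        Vec3 d{0.01, 0.005, 0.0};      // raw master displacement (m)
        Vec3 c = scaledCommand(d, t);
        std::printf("%f %f %f\n", c.x, c.y, c.z);  // 0.03 0.001 0
        return 0;
    }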


Table 9.1 shows the collected data of the time to complete the pick-up-a-cup task for ten repetitions using the autonomous; the regular, scaled, and virtual fixture (using position-based and velocity-based control); and the force-based virtual fixture teleoperation modes. The variables are defined as follows:

1. C1 = autonomous control mode
2. C2 = position-based regular teleoperation mode
3. C3 = position-based scaled teleoperation mode
4. C4 = position-based virtual fixture constraint
5. C5 = velocity-based regular teleoperation
6. C6 = velocity-based scaled teleoperation
7. C7 = velocity-based virtual fixture constraint
8. C8 = force-based virtual fixture constraint

Table 9.1 Completion Time (in seconds) for the Pick-up-a-cup Task

Experiment No.      C1        C2        C3        C4        C5        C6        C7        C8
 1                86.549    82.058    69.243    74.322    71.230    82.288    78.382    80.949
 2                86.214    88.105   102.300    92.718    80.681    79.143    79.990    66.764
 3                98.342    87.114    95.975    79.582    70.778    81.129    80.849    68.850
 4                85.255    92.069    69.630    86.085    74.315    88.941    76.833    79.776
 5                94.995    77.443    71.129    53.457    63.775    71.469    64.575    68.552
 6                68.592    86.214   109.892    78.522    76.064    84.615    84.835    78.213
 7                73.647    88.105    90.282    96.207    93.846    77.063    74.046    84.389
 8                65.670    94.862    91.182    98.683    76.953    83.948    82.158    77.473
 9                67.654   109.590    89.762   101.060    60.270    78.322    64.525    94.596
10                65.097    88.848    84.878    80.340    62.398    67.932    71.958    79.910


Table 9.2 Completion Time Descriptive Statistics

Variable   N  N*   Mean  SE Mean  Std. Dev.  Minimum     Q1  Median     Q3  Maximum
C1        10   0  79.20     3.96      12.54    65.10  67.16   79.45  88.66    98.34
C2        10   0  89.44     2.71       8.57    77.44  85.18   88.10  92.77   109.59
C3        10   0  87.43     4.40      13.93    69.24  70.75   90.02  97.56   109.89
C4        10   0  84.10     4.50      14.24    53.46  77.47   83.21  96.83   101.06
C5        10   0  73.03     3.14       9.93    60.27  63.43   72.77  77.89    93.85
C6        10   0  79.49     1.98       6.25    67.93  75.66   80.14  84.12    88.94
C7        10   0  75.82     2.22       7.02    64.53  70.11   77.61  81.18    84.84
C8        10   0  77.95     2.65       8.38    66.76  68.78   78.99  81.81    94.60

Data from Table 9.2 were used to verify whether the average time to complete the pick-up-a-cup task can be used as a predictive parameter. For this purpose, a boxplot chart was used. The boxplot is a standard graphical tool used in descriptive statistics to show the variability of a set of input variables without assuming any probability distribution of the underlying data [71]. The boxplot in Figure 9.1 shows that time would be a poor parameter if used as the only prediction parameter to identify which of the methods of control used to execute the task would perform better. Also shown in Figure 9.1 is that the variability in the completion time of the pick-up-a-cup task is too large when comparing the different modes described as C1 to C8. Therefore, a different method of evaluation of results must be used to better explain the performance of the sensor-based assistive system.


Figure 9.1 Boxplot of Autonomous (C1), Position-based Regular Teleoperation (C2), Position-based Scaled Teleoperation (C3), Position-based Virtual Fixture (C4), Velocity-based Regular Teleoperation (C5), Velocity-based Scaled Teleoperation (C6), Velocity-based Virtual Fixture (C7), and Force-based Virtual Fixture (C8). An asterisk (*) denotes an outlier point.

In section 7.5, a definition of the performance indicators was presented. By using these indicators, eight combinations of the operation modes can be defined. Each mode of operation was compared, and the associated Absolute Position Error (APE) and Absolute Orientation Error (AOE) were plotted for one repetition of the experiment of the pick-up-a-cup task. A qualitative assessment of the results obtained with the performance indicators is shown in Figures 9.2 to 9.20 for position-based control and Figures 9.21 to 9.39 for velocity-based control. The figures show the comparison between each of the four modes and the corresponding Absolute Position and Orientation Errors.


From this qualitative comparison of the absolute errors in position and orientation, it is recognized that 1) the scaled and virtual fixture teleoperation modes perform better than regular teleoperation, and 2) the autonomous mode performs better than the regular, scaled, and virtual fixture modes in either the position-based or velocity-based control forms. These are the expected results for an assistive system implementation.

9.2.1 Position-based Control Interactive Simulations Results

Position-based teleoperation is the default control mode of the telerobotic system. In this case, the Phantom Omni is moved in its workspace by the user, and transformation matrices are computed by solving the forward kinematics problem. The resulting transformations are then mapped to the PUMA base frame following the procedure discussed in section 4.2.2. Although the same task was performed using different modes of operation, when the Regular teleoperation mode was used, the trajectory was not as smooth and fast as it was in the case of the Autonomous, Scaled, and Virtual Fixture modes (Figures 9.2 to 9.4). Also, the trajectory is longer in Regular mode. Nevertheless, the trajectory in the Autonomous mode compared to the Virtual Fixture mode, and also in the Scaled mode compared to the Virtual Fixture mode, is similar (Figures 9.5 and 9.7). When comparing Autonomous to Scaled, the trajectory is shorter and smoother for the Autonomous mode (Figure 9.6). The latter is mostly due to the fact that in Autonomous mode the input from the user is partly removed and only used for re-orienting the end effector of the manipulator. The same results were obtained when comparing the Absolute Position Error (Figures 9.8 to 9.13).


As for the Absolute Orientation Error, the errors in the Regular mode for the complete task are mostly higher than in the Autonomous, Scaled, and Virtual Fixture modes (Figures 9.14 to 9.16). In the Autonomous and Virtual Fixture modes, some portions of the errors are constant (Figures 9.16 to 9.20). The explanation for this behavior is that those portions represent the sections of the trajectory where the orientation of the end effector of the Puma manipulator remains unchanged.

Figure 9.2 Position-based Regular Teleoperation vs. Scaled Teleoperation


Figure 9.3 Position-based Regular Teleoperation vs. Autonomous Control

Figure 9.4 Position-based Regular Teleoperation vs. Virtual Fixture Teleoperation


Figure 9.5 Position-based Virtual Fixture Teleoperation vs. Autonomous Control

Figure 9.6 Position-based Scaled Teleoperation vs. Autonomous Control


Figure 9.7 Position-based Scaled Teleoperation vs. Virtual Fixture Teleoperation

Figure 9.8 Absolute Position Error in Position-based Regular vs. Scaled Teleoperation


Figure 9.9 Absolute Position Error in Position-based Regular Teleoperation vs. Autonomous Control

Figure 9.10 Absolute Position Error in Position-based Regular vs. Virtual Fixture Teleoperation


Figure 9.11 Absolute Position Error in Position-based Virtual Fixture Teleoperation vs. Autonomous Control

Figure 9.12 Absolute Position Error in Position-based Scaled Teleoperation vs. Autonomous Control


Figure 9.13 Absolute Position Error in Position-based Scaled vs. Virtual Fixture Teleoperation

Figure 9.14 Absolute Orientation Error in Position-based Regular vs. Scaled Teleoperation


Figure 9.15 Absolute Orientation Error in Position-based Scaled Teleoperation vs. Autonomous Control

Figure 9.16 Absolute Orientation Error in Position-based Regular Teleoperation vs. Autonomous Control


Figure 9.17 Absolute Orientation Error in Position-based Regular vs. Virtual Fixture Teleoperation

Figure 9.18 Absolute Orientation Error in Position-based Virtual Fixture Teleoperation vs. Autonomous Control


Figure 9.19 Absolute Orientation Error in Position-based Scaled Teleoperation vs. Autonomous Control

Figure 9.20 Absolute Orientation Error in Position-based Scaled vs. Virtual Fixture Teleoperation


9.2.2 Velocity-based Control Interactive Simulations Results

In this mode of teleoperation, the PUMA end effector speed changes proportionally to the Omni position. The user moves the Omni end effector once to select a direction and speed for the PUMA end effector, then holds the Omni end effector steady to fix the speed until the gripper is in the vicinity of the target object. Then, the user moves the Omni end effector back to its initial position for stopping close to the target. The testing results for the velocity-based control simulations are very similar to those obtained for the position-based control simulations. Figures 9.21 to 9.23 show that the trajectory in the Regular teleoperation mode is not as smooth, fast, and short as it is in the Autonomous, Scaled, and Virtual Fixture control modes. The trajectories in the Autonomous, Virtual Fixture, and Scaled modes are similar (Figures 9.24 and 9.26), and comparing Autonomous to Scaled, the trajectory is shorter and smoother for the Autonomous mode (Figure 9.25). This is also the case for the Absolute Position Error (Figures 9.27 to 9.32). As for the Absolute Orientation Error, for velocity-based control, the errors in the Regular mode for the complete task are mostly smaller than for the Autonomous, Scaled, and Virtual Fixture modes (Figures 9.33 to 9.35). Similarly, the orientation errors for the Virtual Fixture and Scaled modes are smaller than for the Autonomous mode (Figures 9.36 to 9.39). This can be explained by the condition imposed in the velocity control mode, for which a particular Omni end effector position does not have to remain mapped to a specific configuration of the slave, but only to the magnitude and direction of the slave end-effector velocity.


This means that there is no need to precisely reorient the gripper for grasping when the velocity control mode is active.

Figure 9.21 Velocity-based Regular Teleoperation vs. Scaled Teleoperation


Figure 9.22 Velocity-based Regular Teleoperation vs. Autonomous Control

Figure 9.23 Velocity-based Regular Teleoperation vs. Virtual Fixture Teleoperation


Figure 9.24 Velocity-based Virtual Fixture Teleoperation vs. Autonomous Control

Figure 9.25 Velocity-based Scaled Teleoperation vs. Autonomous Control


Figure 9.26 Velocity-based Scaled Teleoperation vs. Virtual Fixture Teleoperation

Figure 9.27 Absolute Position Error in Velocity-based Regular vs. Scaled Teleoperation


Figure 9.28 Absolute Position Error in Velocity-based Regular Teleoperation vs. Autonomous Control

Figure 9.29 Absolute Position Error in Velocity-based Regular vs. Virtual Fixture Teleoperation


Figure 9.30 Absolute Position Error in Velocity-based Virtual Fixture Teleoperation vs. Autonomous Control

Figure 9.31 Absolute Position Error in Velocity-based Scaled Teleoperation vs. Autonomous Control


Figure 9.32 Absolute Position Error in Velocity-based Scaled vs. Virtual Fixture Teleoperation

Figure 9.33 Absolute Orientation Error in Velocity-based Regular vs. Scaled Teleoperation


Figure 9.34 Absolute Orientation Error in Velocity-based Scaled Teleoperation vs. Autonomous Control

Figure 9.35 Absolute Orientation Error in Velocity-based Regular Teleoperation vs. Autonomous Control


Figure 9.36 Absolute Orientation Error in Velocity-based Regular vs. Virtual Fixture Teleoperation

Figure 9.37 Absolute Orientation Error in Velocity-based Virtual Fixture Teleoperation vs. Autonomous Control


Figure 9.38 Absolute Orientation Error in Velocity-based Scaled Teleoperation vs. Autonomous Control

Figure 9.39 Absolute Orientation Error in Velocity-based Scaled vs. Virtual Fixture Teleoperation


The effectiveness of the assistive system during the execution of the pick-up-a-cup task presented in Figures 9.2 to 9.39 is summarized in Figures 9.40 to 9.45. The testing of the force-based virtual fixture is included in these figures as an additional parameter of comparison between the different modes of teleoperation. A comparison of the APE and AOE indicators for the force-based, position-based regular (teleoperation without assistance), and scaled teleoperation modes is shown in Figures 9.40 and 9.41. The APE and AOE comparisons corresponding to Regular (Teleoperation without Assistance), Position-based Scaled Teleoperation (Motion-based Scaling), Position-based Virtual Fixture (Motion-based Virtual Fixture), and Force-based Virtual Fixture are depicted in Figures 9.42 and 9.43. Figures 9.44 and 9.45 show the APE and AOE indicators for Autonomous, Velocity-based Scaling, Velocity-based Virtual Fixture, and Force-based Virtual Fixture. As can be observed, the Autonomous mode performs better than any other method, as shown in the previous figures. The assistance provided in the form of scaled and virtual fixture teleoperation is shown to be better than regular teleoperation (without assistance), as expected. The force-based virtual fixture follows the straight-line trajectory more closely when compared to the motion-based virtual fixture.


Figure 9.40 APE for Force-based, Position-based Regular and Scaled Teleoperation

Figure 9.41 AOE for Force-based, Position-based Regular and Scaled Teleoperation


Figure 9.42 APE for Teleoperation without Assistance, Motion-based Scaling, Motion-based Virtual Fixture and Force-based Virtual Fixture

Figure 9.43 AOE for Teleoperation without Assistance, Motion-based Scaling, Motion-based Virtual Fixture and Force-based Virtual Fixture


Figure 9.44 APE for Autonomous, Velocity-based Scaling, Velocity-based Virtual Fixture and Force-based Virtual Fixture

Figure 9.45 AOE for Autonomous, Velocity-based Scaling, Velocity-based Virtual Fixture and Force-based Virtual Fixture


9.3 Virtual Reality Simulation Results

The implemented multithreaded approach was also tested using the Virtual Reality (VR) model of the PUMA manipulator. The software-based controller of the robot arm was interfaced to the real Phantom Omni hardware controller using the socket programming technique explained in section 8.7. Figure 9.46 shows the Cartesian position tracking of the slave end effector. As shown, the implemented multithreaded design allowed the execution of the telerobotic tasks without event mismatch. However, the communication between the Phantom Omni hardware controller and the software-based controller was unstable, and it stopped responding abruptly. The problem is the unpredictability and unreliability of the third-party MatLab socket API used to integrate the C++ implementation with the MatLab workspace. For the case shown in Figure 9.46, the slave end effector follows the commanded circular path; the position errors in Cartesian space are negligible, and, for plotting purposes, an offset of 10.0 mm in each direction was introduced so that the traces are distinguishable from one another. This shows that the multithreaded implementation allows the associated tasks for controlling the telerobotic system to be executed concurrently without delays, increasing the overall performance.


Figure 9.46 Position Results of Circular Path in Cartesian Space

Figures 9.47 and 9.48 illustrate the planar (X, Y) components of the trajectory using datasets from the circular path corresponding to the robot arm and the haptic device, plotted individually versus time in Figure 9.46.

Figure 9.47 Robot Position Tracking of the Circular Path in the X-Y Plane


Figure 9.48 Haptic Position Tracking of the Circular Path in the X-Y Plane

For testing the sensor-based assist forces (SAF's), the "haptic tip" was made to follow a linear trajectory generated between the Puma end effector and a target. As mentioned previously, this trajectory is generated from the information gathered by the camera and the laser. The virtual environment, which consisted of a simulated target and an end effector along with a linear trajectory, was available for the user to view on the PC that runs the Phantom Omni. A graph of the forces that the user experiences while deviating from the trajectory, plotted versus time, is shown in Figure 9.49.


Figure 9.49 Typical Assistive Force Feedback Experienced by the User

It can be observed from this graph that the user begins to deviate from the target at the 12.0-second mark. As this happens, the feedback forces increase, trying to put the user back on the trajectory. At around the 12.7-second mark, the user experiences the maximum force, as the user has deviated the most from the trajectory. In this way, the user is given force assistance to move along the trajectory. It should also be noted that the user experiences the forces only if the user is within a certain radius of the trajectory. The user experiences the maximum forces at the outer periphery of the circle defined by this radius and does not experience any forces once the user leaves the periphery.


The response of the system is real-time, i.e., the user experiences the forces as soon as the user tries to move away from the trajectory. This real-time response has been possible because of the multithreading strategies described previously. Using traditional signal processing techniques, it was determined that the noise components of the force signal shown in Figure 9.50 correspond to frequencies between 5.0 and 10.0 Hz. A simple moving average filter was implemented for this purpose, with acceptable results which are not included in this document.

Figure 9.50 Typical Results of the Moving Average Filter Implementation
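A minimal C++ sketch of such a moving-average filter is shown below; the window length and the sample values are assumptions for illustration, not the implemented configuration:

    #include <vector>
    #include <cstdio>

    // Causal moving average over the last 'window' samples.
    std::vector<double> movingAverage(const std::vector<double>& x, int window) {
        std::vector<double> y(x.size(), 0.0);
        double sum = 0.0;
        for (size_t i = 0; i < x.size(); ++i) {
            sum += x[i];
            if (i >= static_cast<size_t>(window)) sum -= x[i - window];
            size_t n = (i + 1 < static_cast<size_t>(window)) ? i + 1 : window;
            y[i] = sum / n;   // average over the samples seen so far
        }
        return y;
    }

    int main() {
        std::vector<double> f{0.1, 0.4, 0.2, 0.8, 0.5, 0.3};  // force samples (N)
        for (double v : movingAverage(f, 3)) std::printf("%f\n", v);
        return 0;
    }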


9.4 Summary

The results of the interactive (physical) simulations for the pick-up-a-cup task were presented, and the performances of the autonomous control mode, force- and motion-based virtual fixtures, and scaled teleoperation modes of assistance were compared. The performance measures, as shown in Figures 9.2 to 9.20, clearly indicate that the autonomous, scaled, and virtual fixture teleoperation modes enable appropriate assistance in the execution of the pick-up-a-cup task. The experiments conducted to validate the control strategies with the actual hardware show that the errors in both position and orientation are acceptable. The results of the experiments with the Puma 560, the Phantom Omni, and the sensory suite (camera and laser range finder) for trajectory tracking, as well as the force assistance for guiding the user's motion, were satisfactory. It was found that the variability shown by the boxplot indicates that the completion time is not a sufficient parameter for comparison of the autonomous and teleoperation modes. The performance measures also indicate that the real-time performance of the robotic system provides adequate assistance for trajectory tracking, the manipulation of objects, and the completion of the pick-up-a-cup task. It is shown that the system provides sensor-based assistance to guide the user's motion.


Chapter 10 Conclusions and Recommendations

10.1 Overview

A PC-based multithreaded, hard real-time controller for a sensor-assisted telerobotic system was developed. The implemented assistive force feedback system used simple sensors, such as a laser range finder to guide the user's motion and a CCD camera for visual feedback. The user gets visual as well as haptic feedback on the remote PC that has the Phantom Omni as the master. It was shown that the force feedback provided by the telerobotic controller and the sensors is consistent and in real time, even though the computational resources used for the implementation were purposely limited to support a wide range of users. In order to coordinate the parallel execution of the telerobotic tasks to run in real time, a multithreaded architecture was developed. This approach allowed the telerobotic control of the arm, sensory integration, and the computation of the different forms of assistance without incurring the high costs, increased complexity, and scalability problems associated with multiprocessor workstation systems. The control strategy described in this dissertation used sensory signals for regular, scaled, and virtual fixture teleoperation using position-based and velocity-based control, autonomous mode, and force-based virtual fixture teleoperation during user interactions.


The user was enabled to switch between the autonomous control mode, force- and motion-based virtual fixtures, and scaled teleoperation modes. Several experiments were conducted to validate the trajectory-following capabilities of the telerobotic system as well as the sensor-based assistance to guide the user's motion. A virtual environment for object manipulation was provided to the user in the form of a virtual cube, and a sphere was displayed as a visual cue of the position and orientation of the tip of the haptic device. In addition to the virtual environment, three (3) graphical views presented the sensory information to the user for enhanced visual perception of the object's location relative to the end effector of the robot manipulator. A testbed was created for conducting both simulated and physical experiments. The simulation was developed using a virtual reality model of the Puma 560 arm in MatLab and the Virtual Reality Toolbox. The C++ programming software was developed to interface the Phantom Omni software and the virtual reality simulations. For the physical experiments, the Phantom Omni haptic device from SensAble Technologies is used as the master. It runs on a Pentium computer with a 1 GHz single processing unit. The Phantom Omni device uses the OpenHaptics software, which runs on the Windows XP OS. The robot arm was equipped with a CCD camera and a Sick DT60 laser range finder. A Pentium II 666 MHz single-processor computer was used to run the QNX real-time operating system. The Puma 560 software-based control strategy is a form of a PD plus gravitational compensation controller. The testing procedures of the supervisory control scheme included circular, polynomial, Bezier-curve, and linear trajectories, with force feedback along the Cartesian axes (X, Y, Z) as the user deviates from any of those trajectories. During those interactions, the virtual environment described previously, as well as the camera views, were displayed simultaneously on the screen for visualization of the telerobotic environment.
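A hedged C++ sketch of the joint-space PD-plus-gravity-compensation law mentioned above is given below; the gains are scalar placeholders and the gravity vector is assumed to come from the dynamic model, so this is illustrative rather than the dissertation's tuned controller:

    #include <array>
    #include <cstdio>

    using Vec6 = std::array<double, 6>;

    // tau = Kp*(qd - q) + Kd*(qdotDes - qdot) + g(q)
    Vec6 pdGravityTorque(const Vec6& q,  const Vec6& qdot,
                         const Vec6& qd, const Vec6& qdotDes,
                         const Vec6& g,  double kp, double kd) {
        Vec6 tau{};
        for (int i = 0; i < 6; ++i)
            tau[i] = kp*(qd[i] - q[i]) + kd*(qdotDes[i] - qdot[i]) + g[i];
        return tau;
    }

    int main() {
        Vec6 q{}, qdot{}, qdotDes{}, g{};
        Vec6 qd{0.1, 0, 0, 0, 0, 0};                     // desired joints (rad)
        Vec6 tau = pdGravityTorque(q, qdot, qd, qdotDes, g, 50.0, 5.0);
        std::printf("tau[0] = %f N*m\n", tau[0]);        // 5.0
        return 0;
    }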


The control system architecture designed to satisfy the real-time constraint consists of the following main threads:

1. The determination of the target position and orientation with respect to the Puma end effector (in joint or Cartesian space) and the mapping of this position and orientation to the Phantom Omni tip.

2. A trajectory generation thread, which computes intermediate points of the trajectory to reach the target.

3. The computation of the joint angles of the PUMA for trajectory following, using inverse kinematics based on the resolved-rate algorithm.

4. The computation, using a proportional-derivative (PD) controller with gravity compensation, of the torques required to drive the motors in the PUMA.

5. The fusion of the sensor information from the camera and the laser to determine the position and orientation of the end effector; this data was sent to the Phantom Omni for further processing.

6. The communication thread, which handles the position and orientation information of the Omni end effector. This information was used by the PUMA software controller for the position-based and velocity-based teleoperation modes.

Also, the processor that handles the Phantom Omni device has the following threads:

1. The graphics thread: It renders a virtual target, the end effector position, and a trajectory on the user screen that is similar to the PUMA environment, at a refresh rate that conforms to the PUMA and Phantom end effector movement.


178 2 The haptic thread: This thread computes the feedback forces based on the sensory Phantom Omni. As the user deviates from the trajectory the assistive forces required to bring the user back on the trajectory were calculated and delivered to the user using the OpenHaptics software and the actuators of the Phantom Omni interface 3 The communication thread handles the packets containing the Cartesian position software controller. 10 .2 General Discussion The integration of haptic feedback and th e generation assistance based on sensory information is a challenge due to t he strict timing constraints for a realistic sensation of touch and high update rates of the sensory inputs Additionally, the combination of v isual and haptic information depends on computationally intensive pre proces sing to obtain the digital features from the images I n this dissertation a multithreaded architecture was designed and implemented to deal with the timing constraints and high update rates imposed by separating the computational tasks into different runn ing threads with synchronization mechanisms for inter processing communication to achieve real time performance. The following is a list of the major contributions made in this dissertation: 1. A multithreaded PC based control scheme capable of real time hapt ic and visual feedback


2. The implementation of sensor-based assist functions (SAF's) for guiding the user's motion in the form of scaling, motion-based, and force-based virtual fixtures.

3. The development of an automatic control mode to enhance the manipulation capabilities of the users and to reduce the possibility of fatigue over long periods of time.

4. The integration of a laser range finder for the determination of the desired trajectory by pointing the laser to the object of interest.

5. An integrated approach for handling diverse sensor datasets and data acquisition.

10.3 Recommendations

It is recommended to improve the computer vision subsystem to include more sophisticated feature extraction algorithms and object recognition techniques. The experimental tests were performed successfully for a single object in the field of view of the camera and laser range finder, using the computation of the centroid of the object of interest; however, it is recommended to include "blob" detection capabilities in order to detect and label multiple objects in the field of view of the camera, and then use probabilistic techniques for object recognition. Some geometrical features, such as the centroid, area, perimeter, and roundness of the detected objects, can be compared with existing geometrical features enumerated in a database for this purpose. This would add flexibility to the trajectory generation in the presence of multiple objects, as well as to the autonomous mode control of the telerobotic system.


Another recommendation is to enable laser tracking of moving objects by using the current capabilities of the system for image processing and data fusion of the sensory information from the camera, laser range finder, and encoder readings. The multithreaded approach used proved to support high update rates of the sensory data, which are fundamental for the tracking of moving objects. It is also recommended to extend the sensor-based assist force (SAF's) concepts to include torque feedback. This requires force feedback in six degrees of freedom. In the current implementation, the SAF's are 3-DoF outputs and, therefore, the assistance provided corresponds to force components along the Cartesian axes. However, for enhanced manipulation in 3D space, assisting or resisting torques may also be useful for certain tasks. On the hardware side, the Phantom Omni would need to be replaced by a 6-DoF haptic interface capable of reflecting torques. Common ADL tasks requiring torque application could also be enhanced by a 6-DoF force-based virtual fixture teleoperation mode.


Chapter 11 Future Work

11.1 Introduction

As previously discussed, the methods developed in this dissertation allowed the execution of telerobotic manipulation tasks through the combination of visual information from simple sensors and haptic force feedback to calculate assistive functions in real time. In the current version of the telerobotic control system, the calculation of the assistive force for guiding the user's motion and the determination of the position and orientation of an object of interest as "seen" by the sensors (eye-in-hand camera and laser range finder) are based on a fixed reference frame located at the Puma 560 base. Having this system control a robot on a mobile platform with sensor-based assist functions, such as the Wheelchair-Mounted Robotic Arm (WMRA), may increase the flexibility of such a system as an assistive device. This chapter describes potential research problems that the development of a real-time telerobotic control system with sensor-based assist functions for a mobile robot platform would entail.

11.2 Combined Mobility and Manipulation with Time-dependent Sensory Calibration Functions in Real Time

The idea is to design a real-time control scheme which combines the control strategies required for maximizing the combined mobility and manipulation capabilities, as implemented in [72],


and, at the same time, implements the time-dependent sensory calibration functions required to calculate the sensor-based assist functions (SAF's) as described in this dissertation. The integration of a real-time telerobotic control system with sensor-based assist functions and the Wheelchair-Mounted Robotic Arm (WMRA) entails the implementation of optimized numerical approaches to deal with the redundancy of the WMRA system, as well as the online calibration functions to determine the feedback force to guide the user's motion based on the sensor readings. Such a development would benefit users who are vision impaired and also must use a wheelchair.

11.3 Autonomous Navigation

The implementation of navigational technologies with advanced perception, through the use of sensor fusion, autonomy, and learning techniques, might benefit from the development of a Hybrid Deliberative Architecture (HDA). HDA techniques might provide a suitable solution when the environment cannot be altered to accommodate the user; in such an approach, behavior-based robotics and Neuro-Fuzzy techniques for inference and learning might be combined. In this scenario, Neural Networks (NN) might be extended to automatically extract fuzzy rules from sensory information (or numerical data), while Fuzzy Logic (FL) techniques might be used to resolve conflicts and control primitive behaviors. Hybrid Deliberative systems and methods are not commonplace and correspond to efforts of current research. Such an implementation will require highly responsive and stable computer and software architectures.


The multithreading framework developed for this work has the capability to perform in real time and implements a high-level communication protocol to deal with different sensory input formats (RS232, RS485, parallel, USB, IEEE 1394, among others). These capabilities could serve as the foundation of the Hybrid Deliberative approach.

11.4 Remote Assistance

As already implemented, the system provides force assistance based on the visual feedback and laser readings. A similar setup can be implemented with the added capability of monitoring the WMRA from a remote location using communication channels over an Internet-based protocol. The sensory suite can be mounted at the end effector of the wheelchair-mounted robot arm, similar to the current version of the Puma 560 testbed. The present user interface will have to be modified so that the visual information from the optical sensors and the haptic and graphical display interfaces are available online to the remote assistant. This way, the remote human user will be able to observe the environment around the WMRA. Using a haptic device as an input, the remote assistant can specify the desired motion to assist the disabled person remotely. Several of the methods described in this dissertation will be useful for this application.


References

[1] J. K. Salisbury and M. A. Srinivasan, 1997, "Phantom-based Haptic Interaction with Virtual Objects," IEEE Computer Graphics and Applications, Vol. 17, Issue 5, Sept-Oct, 6-10.

[2] N. Diolaiti, G. Niemeyer, F. Barbagli, J. K. Salisbury, and C. Melchiorri, 2005, "The Effect of Quantization and Coulomb Friction on the Stability of Haptic Rendering," in Proc. 1st World Haptics Conference, Pisa, Italy, Mar, 237-246.

[3] T. H. Massie and J. K. Salisbury, "The PHANTOM Haptic Interface: A Device for Probing Virtual Objects," in Proc. ASME Winter Annual Meeting, Vol. 55-1, New Orleans, LA, 295-300.

[4] Z. Y. Yang and Y. H. Chen, 2003, "Haptic Rendering of Milling Encoding," Proceedings of EuroHaptics 2003, Dublin, Ireland, 6-9 July 2003, 206-217.

[5] P. Leskovsky, M. Harders, and G. Szekely, 2006, "Assessing the Fidelity of ...," USA, March 25-26, 19-25.

[6] Y. Guangqi, J. J. Corso, G. D. Hager, and A. M. Okamura, "VisHap: Augmented Reality Combining Haptics and Vision," IEEE International Conference on Systems, Man and Cybernetics, Vol. 4, 5-8 October, 2003, pp. 3425-3431.

[7] T. L. McDaniel and S. Panchanathan, "A Visio-Haptic Wearable System for Assisting Individuals Who Are Blind," SIGACCESS Accessibility and Computing, September, 2006.

[8] C. R. Wagner, S. J. Lederman, and R. D. Howe, "A Tactile Shape Display Using RC Servomotors," Proceedings of the 10th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, pp. 354-355, 2002.

[9] V. Hayward and M. Cruz-Hernandez, "Tactile Display Device Using Distributed Lateral Skin Stretch," Proceedings of the 8th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, ASME IMECE DSC 69-2, pp. 1309-1314, 2000.

[10] T. B. Sheridan, "Telerobotics, Automation, and Human Supervisory Control," MIT Press, ISBN: 0-262-19316-7.


[11] ..., Hermes Publishing, ISBN: 1-85121-002, Vol. 3A, pp. 23-63.

[12] ..., 88.

[13] T. B. Sheridan, "Space teleoperation through time delay: review and prognosis," IEEE Transactions on Robotics and Automation, 1993, pp. 592-606.

[14] D. R. Yoerger and J. E. Slotine, 1991, "Adaptive sliding control of an experimental underwater vehicle," in Proceedings of the 1991 IEEE International Conference on Robotics and Automation, Vol. 3, 9-11 April, pp. 2746-2751.

[15] D. R. Yoerger, J. Newman, and J. E. Slotine, 1986, "Supervisory control system for the JASON ROV," IEEE Journal of Oceanic Engineering, Vol. 11, Issue 3, July 1986, pp. 392-400.

[16] Y. S. Park, H. Kang, T. F. Ewing, E. L. Faulring, and J. E. Colgate, "...," IEEE International Conference on Robotics and Automation, New Orleans, 2004.

[17] T. F. Chan and R. V. Dubey, "... Bilateral Controller for a Teleoperator System with a Six-DoF Master and a Seven-DoF Slave," in Proc. IEEE International Conference on Robotics and Automation, San Diego, CA, USA, May 1994, Volume 3, pp. 2612-2619.

[19] L. D. Joly and C. Andriot, "Imposing Motion Constraints to a Force Reflecting Telerobot through Real-Time Simulation of a Virtual Mechanism," 1994 IEEE International Conference on Robotics and Automation, San Diego, CA, May 1994, pp. 357-362.

[20] "...," International Conference on Robotics and Automation, San Diego, California, May 8-13, 1994, pp. 2612-2619.

[21] "... Nonsymmetric and Redundant Master ...," 281-290, 1992.


[22] E.J. Veras, R. Swaminathan, and R. Dubey, "A Multithreaded Implementation of Assist Functions to Control a Virtual Reality Model of a 6-DoF Robot Arm for Rehabilitation Applications", Florida Conference on Recent Advances in Robotics and Robot Showcase (FCRAR), Tampa, Florida, May 31-June 1, 2007.

[23] G. Bolmsjo, H. Neveryd, and H. Eftring, "Robotics in Rehabilitation", IEEE Transactions on Rehabilitation Engineering, Vol. 3, No. 1, March 1995.

[24] ..., Ph.D. Dissertation, Department of Electrical Engineering, University of South Florida, December 2001.

[25] "...", ... Rehabilitation Engineering, Vol. 2, No. 4, December 1994.

[26] "An Intelligent Mapping-Based Telerobotic Manipulation System to Assist Persons with ...", IEEE International Conference on Robotics & Automation, Washington, DC, USA, May 2002.

[27] "...", Vol. 3, No. 1, March 1995.

[28] "...-Based End-Effector ...", IEEE International Conference on Robotics and Automation, Detroit, MI, May 1999, pp. 543-549.

[29] "... Telemanipulation for Maximizing Manipulation Capabilities of Persons with ...".

[30] "... Augmentation of Manipulation Capabilities of ...", International Conference on Intelligent Robots and Systems, Lausanne, Switzerland, October 2002.

[31] "... Machine Cooperative Telerobotics Using ...", Knoxville, 1998.

[32] L.B. Rosenberg, "Virtual Fixtures: Perceptual Tools for Telerobotic Manipulation", IEEE Virtual Reality Annual International Symposium, 18-22 Sep 1993, pp. 76-82.


[33] "... Teleoperation of Virtual and Real Environment Manipulation Based on Reference ...", IEEE International Conference on Robotics and Automation, Nagoya, Japan, May 1995.

[34] "... Human ...", Switzerland, Oct 2002.

[35] Z. Stanisic and S. Payandeh, "...", ... Canadian Aeronautics and Space Institute Conference, 1996.

[36] "...", ... Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, 2002.

[37] "A Multitasking PC ...", ... Robotics, Vol. 18, No. 1, pp. 13-22.

[38] E. Jung, C. Kapoor, and D. Batory, 2005, "...", ... Intelligent Robots and Systems (IROS) workshop, Edmonton, Canada.

[39] H. Bruyninckx and P. Soetens, "A Generic Real-Time Infrastructure for Signal Acquisition, Generation and Processing", 4th Real-Time Linux Workshop, Boston, MA, December 2002.

[40] Microsoft Robotics Studio, http://msdn2.microsoft.com.

[41] "...", 29.

[42] N. Turro, O. Khatib, and E. Coste-Maniere, "Haptically Augmented Teleoperation", in Proc. IEEE International Conference on Robotics and Automation, Seoul, Korea, May 21-26, 2001, pp. 386-392.

[43] N. Hogan, H.I. Krebs, J. Charnnarong, P. Srikrishna, and A. Sharon, "MIT-MANUS: A Workstation for Manual Therapy and Training", ... International Society for Optical Engineering, 2005.

[44] S. ..., ... Institute, Memphis, TN, and JPL/NASA, California Institute of Technology, Pasadena, CA, 1997.


[45] "...-Assisted Variable Trajectory Mapping for ...", ... Robotics and Automation, 1998.

[46] R.V. Dubey, S.E. Everett, N. Pernalete, and K.A. Manocha, 2001, "Teleoperation Assistance Through Variable Velocity Mapping", IEEE Transactions on Robotics and Automation, Vol. 17, Issue 5, Oct. 2001, pp. 761-766.

[47] W. Yu, R. Dubey, and N. Pernalete, "Robotic Therapy for Persons with Disabilities Using Hidden Markov Model Based Skill Learning", ... Manipulation and Grasping, Genova, Italy, July 1-2, 2004.

[48] L.L. ..., "... Digital Force Control Applied in ...", March 2003, pp. 213-226.

[49] A. Steinfeld, T. Fong, D. Kaber, M. Lewis, J. Scholtz, A. Schultz, and M. Goodrich, "Common Metrics for Human-Robot Interaction", in Proc. 2006 ACM Conference on Human-Robot Interaction, 2006, pp. 33-40.

[50] QNX RTOS, QNX Software Systems, www.qnx.com/index.html.

[51] J. Craig, 2003, "Introduction to Robotics: Mechanics and Control", 3rd Edition, Addison-Wesley Publishing, ISBN 0201543613.

[52] B. Armstrong, O. Khatib, and J. Burdick, "The Explicit Dynamic Model and Inertial Parameters of the PUMA 560 Arm", in Proc. IEEE International Conference on Robotics and Automation, Vol. 1, San Francisco, USA, pp. 510-518, 1986.

[53] D.E. Whitney, 1969, "Resolved Motion Rate Control of Manipulators and Human Prostheses", IEEE Transactions on Man and Machine Systems, Vol. MMS-10, June, pp. 47-53.

[54] D.E. Whitney, 1972, "The Mathematics of Coordinated Control of Prosthetic Arms and Manipulators", ASME Journal of Dynamic Systems, Measurement, and Control, Dec., pp. 303-309.

[55] J.Y.S. Luh, M.W. Walker, and R.P.C. Paul, "Resolved-Acceleration Control of Mechanical Manipulators", IEEE Transactions on Automatic Control, Vol. AC-25, No. 3, June 1980, pp. 468-474.

[56] R. Dubey and J.Y.S. Luh, 1987, "Redundant Robot Control for Higher Flexibility", in Proc. of 1987 IEEE International Conference on Robotics and Automation, Vol. 4, March 1987, pp. 1066-1072.


[57] M. Takegaki and S. Arimoto, 1981, "A New Feedback Method for Dynamic Control of Manipulators", ASME Journal of Dynamic Systems, Measurement, and Control, Vol. 103, pp. 119-125, 1981.

[58] A. Fischer and J.M. Vance, "PHANToM Haptic Device Implemented in a Projection Screen Virtual Environment", 7th International Immersive Projection Technologies Workshop, Eurographics Workshop on Virtual Environments, pp. 225-230.

[59] R. Krten, 2001, "Getting Started with QNX Neutrino 2: A Guide for Realtime Programmers", PARSE Software Devices, ISBN 0-9682501-1-4.

[60] R.P. Paul, "Robot Manipulators: Mathematics, Programming, and Control", MIT Press, Boston, 1981, ISBN: 0-262-16082-X.

[61] R.R. Murphy, "Introduction to AI Robotics", The MIT Press, Cambridge, Massachusetts, 2000, ISBN: 0-262-13383-0.

[62] R.Y. Tsai, "A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses", IEEE Journal of Robotics and Automation, Vol. RA-3, No. 4, August 1987, pp. 323-344.

[63] J.-Y. Bouguet, "Camera Calibration Toolbox for Matlab", www.vision.caltech.edu/bouguetj/calib_doc.

[64] R.M. Haralick and L.G. Shapiro, "Computer and Robot Vision", Addison-Wesley Publishing Co., ISBN: 0-201-10877-1, Vol. I, pp. 60-93.

[65] R. Willson, "Modeling and Calibration of Automated Zoom Lenses", in Proceedings of the SPIE #2350, Videometrics III, October 1994, pp. 170-186.

[66] J. Canny, 1986, "A Computational Approach to Edge Detection", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 8, Issue 6, Nov., pp. 679-698.

[67] Z. Zhang, "Flexible Camera Calibration by Viewing a Plane from Unknown Orientations", Microsoft Research, One Microsoft Way, Redmond, Washington 98052-6399, USA.

[68] M. Ortega, S. Redon, and S. Coquillart, 2006, "A Six Degree-of-Freedom God-Object Method for Haptic Display of Rigid Bodies", Virtual Reality Conference, 25-29 March 2006, pp. 191-198.

[69] A. Paljic, J.M. Burkhardt, and S. Coquillart, 2004, "Evaluation of Pseudo-Haptic Feedback for Simulating Torque: A Comparison Between Isometric and Elastic Input Devices", 12th International Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, 27-28 March 2004, pp. 216-223.


[70] T.J. Tarn, A.K. Bejczy, G.T. Marth, and A.K. Ramadorai, "Performance Comparison of Four Manipulator Servo Schemes", IEEE Control Systems Magazine, February 1993, pp. 22-29.

[71] "...", 3rd Edition, John Wiley & Sons, Inc., 2002, ISBN: 0-471-20454-4.

[72] R.M. Alqasemi, "Maximizing Manipulation Capabilities of Persons with Disabilities Using a Smart 9-Degree-of-Freedom Wheelchair-Mounted Robotic Arm System", Ph.D. Dissertation, University of South Florida, 2007.




Appendices


Appendix A: Puma 560 Homogeneous Transformations

The homogeneous transformations are obtained by substituting the DH parameters in Table 3.1 into the transformation equation given by Eq. [6], which yields the six link transformations, one per joint:

$^{0}_{1}T$ (A.1)
$^{1}_{2}T$ (A.2)
$^{2}_{3}T$ (A.3)
$^{3}_{4}T$ (A.4)
$^{4}_{5}T$ (A.5)
$^{5}_{6}T$ (A.6)


Appendix A (Continued)

Multiplying (A.1) through (A.6), the homogeneous transformation matrix of the end-effector frame, {6}, in terms of the reference frame {0} corresponding to the base of the robot (see Figure 3.1) can now be calculated:

$$^{0}_{6}T = {}^{0}_{1}T \; {}^{1}_{2}T \; {}^{2}_{3}T \; {}^{3}_{4}T \; {}^{4}_{5}T \; {}^{5}_{6}T$$ (A.7)

The symbolic evaluation of Eq. (A.7) can be written in the form of Eq. (A.8), with the shorthand terms defined in Eq. (A.9).
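The product in Eq. (A.7) can also be checked numerically. The following is a minimal MATLAB sketch using the same modified DH link transform as the Appendix C script; the numeric DH values and function names are illustrative placeholders, not the calibrated Table 3.1 parameters or the dissertation's code.

% Minimal numerical sketch of Eq. (A.7). Each DH row is [alpha a theta d]
% in the modified DH convention; values below are illustrative only.
function T06 = puma_fkine_sketch(th)
    DH = [    0       0      th(1)   0;
          -pi/2       0      th(2)   0.2435;
              0   0.4318     th(3)  -0.0934;
           pi/2   0.0203     th(4)   0.4331;
          -pi/2       0      th(5)   0;
           pi/2       0      th(6)   0];
    T06 = eye(4);
    for i = 1:6
        T06 = T06 * linkT(DH(i,1), DH(i,2), DH(i,3), DH(i,4));  % Eq. (A.7)
    end
end

function T = linkT(alpha, a, theta, d)
    % Modified DH link transform: Rx(alpha) * Tx(a) * Rz(theta) * Tz(d)
    ca = cos(alpha); sa = sin(alpha);
    ct = cos(theta); st = sin(theta);
    T = [ ct,     -st,      0,    a;
          st*ca,   ct*ca,  -sa,  -d*sa;
          st*sa,   ct*sa,   ca,   d*ca;
          0,       0,       0,    1];
end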


Appendix B: Equivalent Single Angle-Axis Representation

The homogeneous transformation matrix, $T$, which describes a rotation around an arbitrary axis vector $\hat{k}$ by an angle $\theta$ is given by the following matrix [48]:

$$T = \begin{bmatrix} k_x^2 V_\theta + C_\theta & k_x k_y V_\theta - k_z S_\theta & k_x k_z V_\theta + k_y S_\theta & p_x \\ k_x k_y V_\theta + k_z S_\theta & k_y^2 V_\theta + C_\theta & k_y k_z V_\theta - k_x S_\theta & p_y \\ k_x k_z V_\theta - k_y S_\theta & k_y k_z V_\theta + k_x S_\theta & k_z^2 V_\theta + C_\theta & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix}$$ (B.1)

where $S_\theta = \sin\theta$, $C_\theta = \cos\theta$, $V_\theta = 1 - \cos\theta$, and $k_x$, $k_y$, $k_z$ are the directional components of the rotational axis $\hat{k}$. The $(3\times3)$ rotation matrix is, then, the upper-left $3\times3$ submatrix of $T$:

$$R = \begin{bmatrix} k_x^2 V_\theta + C_\theta & k_x k_y V_\theta - k_z S_\theta & k_x k_z V_\theta + k_y S_\theta \\ k_x k_y V_\theta + k_z S_\theta & k_y^2 V_\theta + C_\theta & k_y k_z V_\theta - k_x S_\theta \\ k_x k_z V_\theta - k_y S_\theta & k_y k_z V_\theta + k_x S_\theta & k_z^2 V_\theta + C_\theta \end{bmatrix}$$ (B.2)

The first three elements of the fourth column of $T$ are the components of the position vector, $P$:

$$P = \begin{bmatrix} p_x & p_y & p_z \end{bmatrix}^T$$ (B.3)

A linear trajectory in Cartesian space can now be generated between two points defined by their corresponding homogeneous transformation matrices, $T_{initial}$ and $T_{goal}$, where:


Appendix B (Continued)

$$T_{initial} = \begin{bmatrix} R_A & P_A \\ 0 & 1 \end{bmatrix}$$ (B.4)

and

$$T_{goal} = \begin{bmatrix} R_B & P_B \\ 0 & 1 \end{bmatrix}$$ (B.5)

If $N$ intermediate points are desired between the initial point defined by the homogeneous transformation $T_{initial}$ and the destination position defined by $T_{goal}$, the linear components can be found as:

$$P_i = P_A + \frac{i}{N+1}\,(P_B - P_A), \qquad i = 1, \ldots, N$$ (B.6)

For the rotational components, the following calculations are required. Notice that the transpose is used instead of the inverse because the rotation matrix is orthogonal:

$$R_{AB} = R_A^{T} R_B$$ (B.7)

Before proceeding, it is convenient to ensure that the elements of the resulting matrix define an orthogonal matrix. This is accomplished by taking the cross product of any two columns to recompute the third:

$$r_3 = r_1 \times r_2$$ (B.8)


Appendix B (Continued)

Now, the equivalent single rotation angle $\theta$ can be found from the elements of the rotation matrix given by Eq. (B.7) and (B.8) as follows:

$$\theta = \operatorname{atan2}\!\left(\sqrt{(r_{32}-r_{23})^2 + (r_{13}-r_{31})^2 + (r_{21}-r_{12})^2},\; r_{11}+r_{22}+r_{33}-1\right)$$ (B.9)

Using the equivalent angle, the directional components of the single axis can now be found using the following set of equations. Notice that these equations include provisions to avoid the representational singularities (i.e., the axis is poorly defined) arising from situations where the angle of rotation is very small (defined by a tolerance, $Toler$) or 180°. The following equations are evaluated:

$$k_x = \frac{r_{32}-r_{23}}{2 S_\theta}, \quad k_y = \frac{r_{13}-r_{31}}{2 S_\theta}, \quad k_z = \frac{r_{21}-r_{12}}{2 S_\theta} \qquad \text{(general case)}$$ (B.10)

$$\hat{k} \text{ arbitrary (e.g., } k_x = k_y = 0,\; k_z = 1\text{)} \qquad |\theta| \le Toler$$ (B.11)

$$k_x = \sqrt{\frac{r_{11}-C_\theta}{V_\theta}}, \quad k_y = \sqrt{\frac{r_{22}-C_\theta}{V_\theta}}, \quad k_z = \sqrt{\frac{r_{33}-C_\theta}{V_\theta}} \qquad \theta \approx 180°$$ (B.12)


Appendix B (Continued)

In Eq. B.12, the following substitutions are needed to ensure the most positive components of $\hat{k}$ are used:

$$k_y = \frac{r_{21}+r_{12}}{2\,k_x V_\theta}, \quad k_z = \frac{r_{13}+r_{31}}{2\,k_x V_\theta} \qquad \text{if } k_x \text{ is the largest component}$$ (B.12a)

$$k_x = \frac{r_{21}+r_{12}}{2\,k_y V_\theta}, \quad k_z = \frac{r_{32}+r_{23}}{2\,k_y V_\theta} \qquad \text{if } k_y \text{ is the largest component}$$ (B.12b)

$$k_x = \frac{r_{13}+r_{31}}{2\,k_z V_\theta}, \quad k_y = \frac{r_{32}+r_{23}}{2\,k_z V_\theta} \qquad \text{if } k_z \text{ is the largest component}$$ (B.12c)

Now, a rotation matrix can be obtained for every intermediate point by dividing the equivalent rotation angle into (N + 1) equally spaced values, substituting the corresponding components of the single-axis rotation, Eq. B.10 to B.12, and evaluating the conditions to avoid representational singularities in B.12a to B.12c. This procedure allows well-defined intermediate transformations between the initial and the goal (destination) transformations.
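The whole interpolation procedure can be summarized in a short MATLAB sketch. This is a minimal version assuming the general-case axis extraction of Eq. (B.10) with a simple fallback when the axis is poorly defined; the function and variable names are illustrative, not taken from the dissertation's controller code.

% Minimal sketch of the Appendix B procedure: N intermediate poses
% between Ti and Tg by linear position interpolation (Eq. B.6) and
% equivalent angle-axis rotation interpolation (Eq. B.7-B.10).
function Ts = angle_axis_interp(Ti, Tg, N)
    Ra = Ti(1:3,1:3);  Pa = Ti(1:3,4);
    Rb = Tg(1:3,1:3);  Pb = Tg(1:3,4);
    Rab = Ra' * Rb;                 % Eq. (B.7): transpose, not inverse
    % Equivalent angle, Eq. (B.9)
    sy = sqrt((Rab(3,2)-Rab(2,3))^2 + (Rab(1,3)-Rab(3,1))^2 + ...
              (Rab(2,1)-Rab(1,2))^2);
    th = atan2(sy, Rab(1,1)+Rab(2,2)+Rab(3,3)-1);
    if abs(sin(th)) > 1e-6          % general case, Eq. (B.10)
        k = [Rab(3,2)-Rab(2,3); Rab(1,3)-Rab(3,1); Rab(2,1)-Rab(1,2)];
        k = k / (2*sin(th));
    else
        k = [0; 0; 1];              % axis poorly defined (th near 0 or pi)
    end
    Ts = cell(1, N);
    for i = 1:N
        s  = i / (N + 1);           % equally spaced fractions
        Ri = Ra * rotk(k, s*th);    % fraction of the equivalent rotation
        Pi = Pa + s * (Pb - Pa);    % Eq. (B.6)
        Ts{i} = [Ri, Pi; 0 0 0 1];
    end
end

function R = rotk(k, th)
    % Rodrigues rotation about unit axis k by angle th, Eq. (B.2)
    c = cos(th); s = sin(th); v = 1 - c;
    kx = k(1); ky = k(2); kz = k(3);
    R = [kx*kx*v+c,    kx*ky*v-kz*s, kx*kz*v+ky*s;
         kx*ky*v+kz*s, ky*ky*v+c,    ky*kz*v-kx*s;
         kx*kz*v-ky*s, ky*kz*v+kx*s, kz*kz*v+c];
end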


Appendix C: MatLab Script for the Symbolic Jacobian Matrix

function Jac = symJacobn()
%symJacobn calculates the symbolic form of the Jacobian of the manipulator
%with respect to the end-effector frame.
puma560akb;                      % load the Puma 560 model (Corke's toolbox)
syms th1 th2 th3 th4 th5 th6 real;
syms th2d th3d th4d th5d th6d real;
syms a3 a4 d2 d3 d4 real;
th = sym('[th1; th2; th3; th4; th5; th6]');
% Symbolic modified DH parameters, one row per joint: [alpha a theta d]
DH = [    0   0  th(1)  0;
      -pi/2   0  th(2)  d2;
          0  a3  th(3)  d3;
       pi/2  a4  th(4)  d4;
      -pi/2   0  th(5)  0;
       pi/2   0  th(6)  0];
U = sym('[1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1]');
for i = 6:-1:1                   % backward recursion from the last joint
    % Column i of the Jacobian in the end-effector frame (Paul's method)
    dx = [-U(1,1)*U(2,4) + U(2,1)*U(1,4);
          -U(1,2)*U(2,4) + U(2,2)*U(1,4);
          -U(1,3)*U(2,4) + U(2,3)*U(1,4)];
    delt = [U(3,1); U(3,2); U(3,3)];
    Jac(1,i) = dx(1);
    Jac(2,i) = dx(2);
    Jac(3,i) = dx(3);
    Jac(4,i) = delt(1);
    Jac(5,i) = delt(2);
    Jac(6,i) = delt(3);
    % Link transform i, then accumulate the transform to the end effector
    TT = rotx(DH(i,1))*transl(DH(i,2),0,0)*rotz(DH(i,3))*transl(0,0,DH(i,4));
    U = TT*U;
end
% The solution using the symbolic approach is:
% ans =
%    0.4995    0.2394    0.3162         0         0         0
%    0.4457    0.3319    0.2813         0         0         0
%    0.0303    0.5160    0.0941         0         0         0
%    0.4504    0.6164    0.6164    0.3309    0.0479         0
%    0.5524    0.7607    0.7607    0.0159    0.9989         0
%    0.7014    0.2034    0.2034    0.9435         0    1.0000
% Solution using Corke's toolbox:
% jacobn(p560m, qready)
% ans =
%    0.4995    0.2394    0.3162         0         0         0
%    0.4457    0.3319    0.2813         0         0         0
%    0.0303    0.5160    0.0941         0         0         0
%    0.4504    0.6164    0.6164    0.3309    0.0479         0
%    0.5524    0.7607    0.7607    0.0159    0.9989         0
%    0.7014    0.2034    0.2034    0.9435    0.0000    1.0000


Appendix D: Singularity-Robust (SR) Inverse

The SR inverse [16] is also known as the damped pseudoinverse [18]. Consider a linear system of equations of the form:

$$A x = b$$ (D.1)

If the matrix of coefficients is not square, the pseudoinverse $A^{+}$ may be used to compute the least-squares solution with the objective function defined as the minimal norm. For an $m \times n$ (where $m < n$) matrix $A$, its pseudoinverse is computed by:

$$A^{+} = A^{T} (A A^{T})^{-1}$$ (D.2)

The resulting matrix may have extremely large elements when $A A^{T}$ is nearly singular. The SR inverse avoids the problem of extremely large amplitudes in the neighborhood of singular points by minimizing the sum of the norms of the error (defined as $e = b - A x$) and of the solution $x$. It uses the following equation instead:

$$A^{*} = A^{T} (A A^{T} + k I)^{-1}$$ (D.3)

where $A^{*}$ is the SR inverse of $A$, $I$ is the identity matrix, and $k$ is the parameter that determines the weighting between the norm of the solution and the error. If a small $k$ is used, the error gets small, but the solution might get large around singular points, and vice versa [19].
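As a minimal MATLAB sketch of Eq. (D.3), assuming an illustrative 6x7 Jacobian and damping factor k (both placeholders, not values from the dissertation):

% SR (damped) inverse applied to resolved-rate control, Eq. (D.3)
J    = rand(6, 7);                % e.g., a 6x7 Jacobian of a redundant arm
xdot = [0.05; 0; 0; 0; 0; 0];     % desired Cartesian velocity
k    = 0.01;                      % damping: small -> accurate, large -> robust

J_sr = J' / (J*J' + k*eye(6));    % A* = A'(AA' + kI)^(-1), Eq. (D.3)
qdot = J_sr * xdot;               % joint velocities, bounded near singularities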


Appendix E: Angular Velocity Components of the End Effector

The orientation of the end effector can be described by three angles as shown in Figure E.1. These angles are called Euler angles.

Figure E.1 Definition of the Euler Angles

For the case of the angular velocity components of the end effector, the equation that describes the total rotation is the product of three elementary rotation matrices. The three elementary rotations are given by Eq. (E.1). Now, the total rotation matrix, R, is found to be the product in Eq. (E.2), where C and S denote the cosine and sine of the corresponding Euler angle.


Appendix E (Continued)

In the end-effector axis, the components of the angular velocity are obtained by writing the total rotation matrix as in Eq. (E.3) and Eq. (E.4), where the first factor is the rotation about the z axis by the first Euler angle and is obtained from the total rotation given by Eq. (E.2). Taking the z component yields Eq. (E.5). Next, the rotation about the second axis by the second Euler angle is obtained from the second column vector, Eq. (E.6). Similarly, the rotation by the third angle is given by the third column vector, Eq. (E.7).


Appendix E (Continued)

The end-effector angular velocity components in matrix form are given by Eq. (E.8).
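As an illustration of the matrix form in Eq. (E.8), the following minimal MATLAB sketch maps Euler-angle rates to an angular velocity vector. A Z-Y-Z convention and the numeric values are assumed here purely for illustration; the dissertation's convention is the one defined in Figure E.1.

% Sketch of the Eq. (E.8) idea, assuming Z-Y-Z Euler angles.
phi = 0.3; theta = 0.8; psi = -0.2;   % illustrative Euler angles [rad]
eul_rates = [0.1; 0.05; 0.02];        % [phi_dot; theta_dot; psi_dot]

% Each rate contributes about its own (rotated) instantaneous axis;
% stacking the three axes as columns gives the matrix form of Eq. (E.8).
E = [0, -sin(phi),  cos(phi)*sin(theta);
     0,  cos(phi),  sin(phi)*sin(theta);
     1,  0,         cos(theta)];
omega = E * eul_rates;                % angular velocity in the base frame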


Appendix F: Specifications for the PHANTOM Omni Haptic Device

The PHANTOM Omni is a haptic device model developed by SensAble Technologies. It offers six (6) positional DoF as input and three (3) force DoF as output. The specifications for this device are shown in Table F.1.

Table F.1 Specifications for the Omni Haptic Device

Model: The PHANTOM Omni Device
Force feedback workspace: ~6.4 W x 4.8 H x 2.8 D in (>160 W x 120 H x 70 D mm)
Footprint (physical area the base of the device occupies on the desk): 6-5/8 W x 8 D in (~168 W x 203 D mm)
Weight (device only): 3 lb 15 oz
Range of motion: hand movement pivoting at wrist
Nominal position resolution: >450 dpi (~0.055 mm)
Backdrive friction: <1 oz (0.26 N)
Maximum exertable force at nominal (orthogonal arms) position: 0.75 lbf (3.3 N)
Continuous exertable force (24 hrs.): >0.2 lbf (0.88 N)
Stiffness: X axis >7.3 lb/in (1.26 N/mm); Y axis >13.4 lb/in (2.31 N/mm); Z axis >5.9 lb/in (1.02 N/mm)
Inertia (apparent mass at tip): ~0.101 lbm (45 g)
Force feedback: x, y, z (3-DoF output)
Position sensing (stylus gimbal): x, y, z (digital encoders); pitch, roll, yaw (±5% linearity potentiometers) (6-DoF input)
Interface: IEEE 1394 FireWire port
Supported platforms: Intel-based PCs
GHOST SDK compatibility: No
OpenHaptics SDK compatibility: Yes
Applications: selected types of haptic research and ...


Appendix G: Custom-Made Sick DT60 Data Acquisition Module

The Sick DT60 is a distance sensor that uses a laser diode to produce red light, which is reflected from the target object to generate an analog signal proportional to the distance from the target. The DT60 sensor has a range of 200 mm to 6 m and is designed to be used with any target material. According to the documentation provided by the manufacturer, the visible red light is an eye-safe light beam; however, it is highly recommended to avoid direct exposure to the laser light. Power and signal connections to the laser are via a standard M12, 5-pin plug. Accuracy is 10 mm, with a typical reproducibility of around 7 mm. The output signal is a current varying from 4.0 mA to 20.0 mA, proportional to the measured distance. Before analog-to-digital conversion using the 232 SDA12, a high-precision resistor must be used to convert the current to a voltage signal with a 0-5 VDC range (see Figure G.1).

Figure G.1 Custom-Made ADC Module for the DT60 Sick Laser Sensor (the DT60 signal line (wht) is dropped across a 249 ohm, 0.5% precision resistor at the 232 SDA12 converter input; common (blu) and regulated +12 VDC power (brn) complete the loop)
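The conversion chain can be summarized in a few lines of MATLAB. This minimal sketch assumes the usual linear scaling of the 4-20 mA output across the 200 mm to 6 m range; variable names and the example reading are illustrative.

% DT60 signal chain: 4-20 mA loop current dropped across the 249-ohm
% resistor gives ~1-5 V at the 232 SDA12 input; the mapping back to
% distance is linear (4 mA -> 200 mm, 20 mA -> 6 m assumed).
R     = 249.0;                               % precision sense resistor [ohm]
v_adc = 2.5;                                 % example ADC voltage reading [V]

i_loop = v_adc / R;                          % loop current [A], 4e-3..20e-3
frac   = (i_loop - 4e-3) / (20e-3 - 4e-3);   % 0 at 4 mA, 1 at 20 mA
d_mm   = 200 + frac * (6000 - 200);          % DT60 range: 200 mm to 6 m
fprintf('distance = %.0f mm\n', d_mm);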


About the Author

Eduardo J. Veras was born on August 9, 1963 in Santiago, Dominican Republic. He received his undergraduate degree from Católica Madre y Maestra University in 1987 and an M.S. in Design and Manufacturing from the University of Puerto Rico at Mayaguez in 1992. He taught at the Polytechnic University of Puerto Rico until he entered the Ph.D. program at the University of South Florida in 2004. Mr. Veras was a teaching assistant in the USF Mechanical Engineering Laboratory II and a research assistant in the Rehabilitation Robotics Center at USF. His responsibilities as an RA included development of a haptic controller to drive a virtual reality model of the Puma 560; interfaces for a Spaceball, Barrett Hand, and Sick LRF for MatLab; and a BCI2000 program to drive a wheelchair-mounted robotic arm. He accepted a position as a faculty member at the Polytechnic University of Puerto Rico, Orlando campus.