USF Libraries
USF Digital Collections


Material Information

Title:
Maximizing manipulation capabilities of persons with disabilities using a smart 9-degree-of-freedom wheelchair-mounted robotic arm system
Physical Description:
Book
Language:
English
Creator:
Alqasemi, Redwan M
Publisher:
University of South Florida
Place of Publication:
Tampa, Fla.
Publication Date:

Subjects

Subjects / Keywords:
DOF
ADL
Control
Robot
Rehabilitation
Mobility
Redundancy
Dissertations, Academic -- Mechanical Engineering -- Doctoral -- USF
Genre:
bibliography   ( marcgt )
theses   ( marcgt )
non-fiction   ( marcgt )

Notes

Summary:
ABSTRACT: Physical and cognitive disabilities make it difficult or impossible to perform simple personal or job-related tasks. The primary objective of this research and development effort is to assist persons with physical disabilities in performing activities of daily living (ADL) using a smart 9-degree-of-freedom (DoF) modular wheelchair-mounted robotic arm system (WMRA). Combining the wheelchair's 2-DoF mobility control and the robotic arm's 7-DoF manipulation control in a single control mechanism allows people with disabilities to do many ADL tasks that are otherwise hard or impossible to accomplish. Different optimization methods for redundancy resolution are explored and modified to fit the new system with combined mobility and manipulation control, and to accomplish singularity avoidance, obstacle avoidance, and other optimization criteria implemented on the new system. The resulting control algorithm is tested in simulation using C++ and Matlab code to resolve any issues that might occur during testing on the physical system. The combined control is implemented on the newly designed robotic arm, mounted on a modified power wheelchair and fitted with a custom-designed gripper. The user interface is designed to be modular to accommodate any user preference, including a haptic device with force-sensing capability, a SpaceBall, a joystick, a keypad, a touch screen, head/foot switches, sip-and-puff devices, and the BCI-2000, which reads electromagnetic pulses from certain areas of the brain and, after conditioning, converts them into control signals. Different sensors (such as a camera, proximity sensors, a laser range finder, and a force/torque sensor) can be mounted on the WMRA system for feedback and intelligent control. The user can control the WMRA system autonomously or through teleoperation.
Wireless Bluetooth technology is used for remote teleoperation in case the user is not on the wheelchair. Pre-set activities of daily living tasks are programmed for easy and semi-autonomous execution.
Thesis:
Dissertation (Ph.D.)--University of South Florida, 2007.
Bibliography:
Includes bibliographical references.
System Details:
System requirements: World Wide Web browser and PDF reader.
System Details:
Mode of access: World Wide Web.
Statement of Responsibility:
by Redwan M. Alqasemi.
General Note:
Title from PDF of title page.
General Note:
Document formatted into pages; contains 421 pages.
General Note:
Includes vita.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 001915694
oclc - 180707627
usfldc doi - E14-SFE0002004
usfldc handle - e14.2004
System ID:
SFS0026322:00001



Full Text

PAGE 1

Maximizing Manipulation Capabilities of Persons with Disabilities Using a Smart 9-Degree-of-Freedom Wheelchair-Mounted Robotic Arm System

by

Redwan M. Alqasemi

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy
Department of Mechanical Engineering
College of Engineering
University of South Florida

Major Professor: Rajiv Dubey, Ph.D.
Shuh-Jing Ying, Ph.D.
Craig Lusk, Ph.D.
Wilfrido Moreno, Ph.D.
Kandethody Ramachandran, Ph.D.

Date of Approval: March 29, 2007

Keywords: dof, adl, control, robot, rehabilitation, mobility, redundancy

Copyright 2007, Redwan M. Alqasemi

PAGE 2

Note to Reader

The original of this document contains color that is necessary for understanding the data. The original dissertation is on file with the USF library in Tampa, Florida.

PAGE 3

Dedication

To my wife, Ola, for her unconditional love and support in every possible way, and for the sacrifices she made of her own needs and comfort for mine. To my children, Hiba, Lama, Rama, Ryan and Dana, who give me that wonderful feeling I see and feel every day when I come back from a long day at the office. Without them, I would have no color, taste or joy in my life to charge me for success. Thank you for enduring my absence for countless days and nights.

To my loving mother and father, who gave me all the support and encouragement I needed to continue my education. I will never forget the sleepless nights you spent for my comfort, and the prayers you made for my success.

To my advisor, Dr. Dubey, who was like an older brother to me, giving me advice when I needed it and helping me in every way he could, above and beyond his duties. Your inspiration helped me achieve this work. You are truly a great professor and role model.

To my brothers and sisters, who never spared any opportunity to help me when I needed it. To my relatives and friends, who gave me comfort and confidence whenever I needed them.

Above all, to God, who showered me with his countless blessings, guided me to the right path, and made me succeed despite all the obstacles I faced. Thank you, God.

PAGE 4

Acknowledgments

I would like to express my gratitude to my advisor, Dr. Rajiv Dubey, for giving me the precious opportunity to work with him and for sharing his knowledge and experience with me in both teaching and research. His guidance and immense patience throughout the course of my research are greatly appreciated. I would also like to thank the members of my committee, Dr. Shuh-Jing Ying, Dr. Craig Lusk, Dr. Wilfrido Moreno and Dr. Kandethody Ramachandran, for their valuable comments on this research.

I would like to gratefully acknowledge the important contribution and support of the Florida Department of Education, the Division of Vocational Rehabilitation, and the Center for Rehabilitation Engineering and Technology at the University of South Florida, especially Mr. Stephen Sundarrao, who provided great help in conducting experiments and tests with people with disabilities. Many thanks go to the members of the Rehabilitation Robotics group, including Eduardo Veras, Edward McCaffrey, Kevin Edwards, Mayur Palankar, Sebastian Mahler and Steven Colbert, who made important contributions to this research.

Special thanks and appreciation go to Dr. Emanuel Donchin and Dr. Yael Arbel of the Department of Psychology at USF for lending their BCI-2000 hardware and support to integrate the BCI system into this research. I would also like to express my gratitude to Vilma Fitzhenry, Susan Britten, Shirley Tervort, Wes Frusher, Robert Smith, Thomas Gage, and James Christopher for their support in paperwork and machining.

PAGE 5

Table of Contents

List of Tables vii
List of Figures viii
Abstract xvi
Chapter 1: Introduction 1
1.1. Motivation 1
1.2. Dissertation Objectives 3
1.3. Dissertation Outline 4
Chapter 2: Background 6
2.1. History of Rehabilitation Robotics 6
2.2. Rehabilitation Robotics Classification 7
2.2.1. Workstation Robotic Arms 7
2.2.2. Wheelchair-Mounted Robotic Arms 12
2.2.3. Mobile / Assistant Robots 15
2.2.4. Robots in Therapy 18
2.2.5. Smart Wheelchairs / Walkers 19
2.3. Commercial Wheelchair-Mounted Robotic Arms 20
2.3.1. The Manus 20
2.3.2. The Raptor 21
2.4. Robot Control 22
2.4.1. Redundant Robot Control 22
2.4.2. Mobile Robot Control 26
Chapter 3: Control Theory of Redundant Manipulators 41
3.1. Introduction 41
3.2. Terminology 42
3.3. Redundant Manipulators Problem Formulation 42
3.3.1. Frames of References 43
3.3.2. Denavit-Hartenberg Parameters 44
3.4. Forward Kinematics Equations 46
3.4.1. Link Transformation Matrices 46
3.4.2. Velocity Propagation and the Jacobian 48

PAGE 6

3.5. Inverse Kinematic Equations 51
3.5.1. Closed Form Solutions 51
3.5.2. Manipulability Ellipsoid 53
3.5.3. Numerical Solutions 54
3.5.4. Redundancy Resolution 55
3.5.5. Optimization Criteria 56
3.6. Summary 57
Chapter 4: Mobility Control Theory 59
4.1. Introduction 59
4.2. Terminology 60
4.3. Mobility Problem Formulation 60
4.3.1. Frame Assignment 61
4.3.2. Wheelchair's Important Dimensions 62
4.4. Homogeneous Transformation Relations 64
4.4.1. Driving Wheels' Motion and the Turning Angle 64
4.4.2. The Radius of Curvature 66
4.4.3. Point-to-Point Transformation of the Wheelchair 69
4.4.4. Transformation to the Robotic Arm's Base 72
4.5. Wheelchair Velocities 72
4.5.1. Wheelchair Velocity Mapping to the Robotic Arm Base 73
4.5.2. Mapping the Driving Wheels' Velocities to the Wheelchair 75
4.6. Wheelchair's General Jacobian 77
4.7. Trajectory Options 78
4.8. Operator's Safety Issues 79
4.9. Summary 80
Chapter 5: Control and Optimization of the Combined Mobility and Manipulation 81
5.1. Introduction 81
5.2. Terminology 82
5.3. WMRA Assembly and Problem Definition 82
5.4. Kinematics of the Combined WMRA System 83
5.5. Jacobian Augmentation and Resolved Rate Equations Generation 84
5.6. Jacobian Changes Based on the Control Frame 88
5.6.1. Ground-Based Control 88
5.6.2. Wheelchair-Based Control 88
5.6.3. End-Effector Based Control (Piloting Option) 89
5.7. Jacobian Inversion Methods and Singularities 89
5.7.1. Inverting Using Pseudo Inverse 90
5.7.2. Inverting Using Singularity-Robust Inverse 90
5.8. Optimization Methods with the Combined Jacobian 91
5.8.1. Criteria Functions and Minimizing Euclidean Norm of Errors 92
5.8.2. Weighted Least Norm Solution 94
5.8.3. Joint Limit Avoidance 95
5.8.4. Obstacle Avoidance 99

PAGE 7

5.8.5. Safety Conditions 99
5.8.6. Unintended Motion Effect Based on the Optimization Criteria 100
5.9. Optional Combinations for the Resolved Rate Solution 101
5.10. State Variable Options in the Control Algorithm 103
5.10.1. Seven Robotic Arm Joints, Left Wheel and Right Wheel Variables 104
5.10.2. Seven Robotic Arm Joints, Forward and Rotational Motion of the Wheelchair 105
5.11. Trajectory Generation 109
5.11.1. Generator of a Linear Trajectory 109
5.11.2. Generator of a Polynomial Trajectory 113
5.11.3. Generator of a Polynomial Trajectory with Parabolic Blending Factor 115
5.12. Control Reference Frames 116
5.12.1. Ground Reference Frame 117
5.12.2. Wheelchair Reference Frame 118
5.12.3. End-Effector Reference Frame 119
5.13. Summary 120
Chapter 6: User Interface Options 121
6.1. Introduction 121
6.2. User Interface Clinical Testing 121
6.2.1. Representative ADL Tasks Used for the Clinical Study 122
6.2.2. The Tested User Interfaces 124
6.2.3. Population of the Chosen Users with Disabilities 125
6.2.4. Clinical Test Results on User Interfaces 126
6.3. The New WMRA User Interfaces Used 128
6.3.1. Six-Axis, Twelve-Way SpaceBall 128
6.3.2. Computer Keyboard and Mouse 129
6.3.3. Touch Screen on a Tablet PC 130
6.4. The Brain-Computer Interface (BCI) Using P300 EEG Brain Signals 131
6.4.1. The P300 EEG Signal 131
6.4.2. The Use of the BCI 132
6.4.3. The BCI-2000 Interface to the New 9-DoF WMRA System 133
6.4.4. Testing of the BCI-2000 with the WMRA Control 134
6.5. Expandability of User Interfaces 135
6.5.1. Omni Phantom Haptic Device 136
6.5.2. Sip n' Puff Device 137
6.5.3. Head and Foot Switches 138
6.6. Summary 138
Chapter 7: Testing in Simulation 139
7.1. Introduction 139
7.2. User Options to Control the WMRA System 139

PAGE 8

7.3. Changing the Physical Dimensions and Constraints of the WMRA System 142
7.4. Programming Language Packages Used 142
7.4.1. Programs in C++ Programming Language 144
7.4.2. Matlab Programming Environment 144
7.4.3. Simulation with Virtual Reality Toolbox 147
7.4.4. Graphical User Interface (GUI) Program 149
7.5. Comments on Interfacing Different Programs Together 150
7.6. Summary 151
Chapter 8: Simulation Results 153
8.1. Introduction 153
8.2. Simulation Cases Tested 153
8.3. Results and Discussion of the First Five Cases 155
8.3.1. WMRA Configurations in the Final Pose of the Simulation 158
8.3.2. Displacements of the Joint Space Variables 161
8.3.3. Velocities of the Joint Space Variables 167
8.3.4. Singularities and the Manipulability Measure 169
8.4. Results and Discussion of the Second Two Cases 172
8.5. More Simulation for Optimization Methods and Criterion Function Effects 178
8.6. Simulation of the Eight Implemented Optimization Control Methods for the Case of an Unreachable Goal 184
8.7. Summary 194
Chapter 9: Experimental Testbed and Field Tests 195
9.1. Introduction 195
9.2. The New 7-DoF Robotic Arm Design and Development 195
9.2.1. Design Goals 196
9.2.1.1. Weight 196
9.2.1.2. Mount Type 196
9.2.1.3. Stiffness 197
9.2.1.4. Payload 197
9.2.1.5. Reconfigurability 198
9.2.1.6. Power Supply and Consumption 198
9.2.1.7. Cost Constraint 198
9.2.1.8. User Interface 199
9.2.1.9. Degrees of Freedom 199
9.2.1.10. Actuation and Transmission Systems 199
9.2.1.11. DC Motors as Actuators 200
9.2.2. Kinematic Arrangements and Component Selection 200
9.2.2.1. Gearhead Selection 202
9.2.2.2. Motor Selection 203
9.2.2.3. Material Selection 204
9.2.2.4. Joint Design 204

PAGE 9

9.2.2.5. Wrist Design 204
9.2.3. Final Design Testing and Specifications 205
9.3. The New 2-Claw Ergonomic Gripper Design and Development 208
9.3.1. Paddle Ergonomic Design 209
9.3.2. Actuation Mechanism 211
9.3.3. Component Selection 212
9.3.4. Final Design and Testing 216
9.4. Modification of a Standard Power Wheelchair 219
9.5. Controller Hardware 220
9.5.1. Controller Boards 222
9.5.2. Communication and Wiring 223
9.5.3. Safety Measures 224
9.6. Experimental Testing 225
9.7. Summary 228
Chapter 10: Conclusions and Recommendations 230
10.1. Overview 230
10.2. General Discussion 231
10.3. Recommendations 234
Chapter 11: Future Work 237
11.1. Introduction 237
11.2. Quick Attach-Detach Mechanism 237
11.3. A Single Compact Controller 238
11.4. A Sensory Suite 239
11.5. Real-Time Control 239
11.6. Bluetooth Wireless Technology for Remote Wireless Teleoperation 240
11.7. Sensor Assist Functions (SAFs) 240
11.8. Pre-Set ADL Tasks 241
References 243
Appendices 249
Appendix A. Hardware Components 250
A.1. Robotic Arm Gear Motors with Encoders 250
A.2. Harmonic Drive Gearheads 252
A.3. Wheelchair Selected Encoders 264
A.4. Wheelchair Selected Friction Wheels 267
A.5. Gripper's Actuation Motor 268
A.6. Gripper's Planetary Gearhead 269
A.7. Gripper's Optical Encoder 270
A.8. Gripper's Spur Gears 272
A.9. Gripper's Slip Clutch 273
A.10. PIC Servo SC Motion Controller Board 280
A.11. SSA-485 Smart Serial Adapter 283

PAGE 10

Appendix B. Matlab Programs and Functions 292
B.1. VRML File of the Virtual Reality Control Code 292
B.2. Matlab Functions Listed Alphabetically 297
B.3. Matlab Main Script and GUI Main File 349
Appendix C. C++ Programs and DLL Library 401
C.1. DLL Library Functions 401
C.2. DLL Library Documentation 403
About the Author End Page

PAGE 11

List of Tables

Table 3.1: The D-H Parameters of the New 7-DoF Robotic Arm Built at USF. 46
Table 9.1: HD Systems Gearhead Selections for Each Joint. 202
Table 9.2: Arm Deflections vs. Applied Load. 206
Table 9.3: Power Usage. 206
Table 9.4: Summary of the Robotic Arm Specifications. 208

PAGE 12

List of Figures

Figure 2.1: Puma 250 Arm. 8
Figure 2.2: Handy-1. 9
Figure 2.3: RAID Workstation. 10
Figure 2.4: Robot Assistive Device. 11
Figure 2.5: ProVAR System. 11
Figure 2.6: Weston Arm. 13
Figure 2.7: Asimov Arm. 14
Figure 2.8: FRIEND Robotic System. 15
Figure 2.9: MoVAR. 16
Figure 2.10: MoVAID. 16
Figure 2.11: TAURO Robotic System. 17
Figure 2.12: MIT Manus System. 18
Figure 2.13: Mouth Opening and Closing Device. 19
Figure 2.14: iBOT (Left) and Segway (Right) Devices. 20
Figure 2.15: Manus Arm. 21
Figure 2.16: Raptor Arm. 22
Figure 2.17: Redundancy Resolution without (Left) and with (Right) Obstacle Avoidance. 24

PAGE 13

Figure 2.18: The Robot Visual Servoing Application Using the QP Controller. 26
Figure 2.19: Reference Frames Used for the Manipulation LIRMM. 27
Figure 2.20: Cooperative Control System Setup. 28
Figure 2.21: Mobile Manipulator Model. 29
Figure 2.22: Wheeled Mobile Manipulator with Two Arms. 30
Figure 2.23: Nomad XR4000 with the Puma560 Mounted on Top. 32
Figure 2.24: Mobile Manipulator. 33
Figure 2.25: Mobile Manipulator H2BIS. 33
Figure 2.26: LAAS Mobile Manipulator. 35
Figure 2.27: Mobile Manipulator. 36
Figure 2.28: Interaction Control of the Mobile Manipulator. 38
Figure 2.29: Trajectory Tracking for a Planar 2-DoF Robot on a Differential Mobile Base. 39
Figure 2.30: Animation of the Motion of the End-Effector and the Platform Front Point. 40
Figure 3.1: Joint-Link Kinematic Parameters. 43
Figure 3.2: SolidWorks Model of the New 7-DoF Robotic Arm Built at USF. 45
Figure 3.3: Frame Assignments and Dimensions of the New 7-DoF Robotic Arm. 45
Figure 3.4: Velocity Vectors of Neighboring Links. 49
Figure 3.5: Manipulability Ellipsoid for a 7-DoF Manipulator in a 6-DoF Euclidean Space. 54
Figure 4.1: Wheelchair Coordinate Frames and Dimensions of Interest. 62
Figure 4.2: Traveled Distance of a Turning Wheel. 64

PAGE 14

Figure 4.3: Traveled Distance with Turning Angle. 65
Figure 4.4: Radius of Curvature in Case 1. 67
Figure 4.5: Radius of Curvature in Case 2. 67
Figure 4.6: Radius of Curvature in Case 3. 68
Figure 4.7: Radius of Curvature in Case 4. 69
Figure 4.8: Point-to-Point Transformation of Frames. 70
Figure 4.9: The Case When "L3" is Zero. 73
Figure 4.10: The Case When "L2" is Zero. 74
Figure 4.11: The Case When the Left Wheel is Stationary. 76
Figure 4.12: The Case When the Right Wheel is Stationary. 76
Figure 4.13: The Three Sub-Motions in Motion Planning of the Wheelchair. 79
Figure 5.1: WMRA Coordinate Frames. 83
Figure 5.2: Four Joint Limit Boundary Conditions. 98
Figure 5.3: Linear Trajectory Generation. 112
Figure 5.4: Polynomial Function of 3rd Order for Variable Ramp with Time. 114
Figure 5.5: Polynomial Function of 3rd Order for Blended Variable Ramp with Time. 116
Figure 5.6: Polynomial Trajectory Generation. 117
Figure 6.1: Four Different ADL Tasks. 123
Figure 6.2: Four-Way Joystick for Manus. 124
Figure 6.3: Eight-Button Keypad for Manus. 124
Figure 6.4: Eight-Way Joystick for Raptor. 125
Figure 6.5: Clinical Testing of the Keypad by a Power Wheelchair User. 126

PAGE 15

Figure 6.6: Clinical Testing of the Joystick by a Power Wheelchair User. 127
Figure 6.7: Twelve-Way SpaceBall. 129
Figure 6.8: A Keyboard and a Mouse. 129
Figure 6.9: A 12-Inch Touch Screen of a Tablet PC. 130
Figure 6.10: GUI Screen Used for the Touch Screen. 130
Figure 6.11: Basic Design and Operation of the BCI System. 131
Figure 6.12: The Non-Invasive BCI Device. 133
Figure 6.13: Basic Design and Operation of the BCI System. 134
Figure 6.14: The Phantom Omni Device from SensAble Technologies. 136
Figure 6.15: The Sip and Puff Input Device. 137
Figure 6.16: Head and Foot Switches. 138
Figure 7.1: Program Flowchart. 143
Figure 7.2: A Sample Command Prompt for Autonomous Operation Mode. 146
Figure 7.3: Simulation Window of the WMRA System in Wire Frame. 147
Figure 7.4: A Sample of the Virtual Reality Simulation Window. 148
Figure 7.5: The Graphical User Interface (GUI) Screen with the Defaults. 150
Figure 8.1: The Initial Pose of the WMRA in Simulation. 156
Figure 8.2: Position of the WMRA During Simulation. 157
Figure 8.3: Orientation of the WMRA During Simulation. 157
Figure 8.4: Destination Pose for Case I, When W = [1, 1, 1, 1, 1, 1, 1, 1, 1]. 159
Figure 8.5: Destination Pose for Case II, When W = [10, 10, 10, 10, 10, 10, 10, 1, 1]. 159
Figure 8.6: Destination Pose for Case III, When W = [1, 1, 1, 1, 1, 1, 1, 100, 100]. 160

PAGE 16

Figure 8.7: Destination Pose for Case IV, When W = [1, 1, 1, 1, 1, 1, 1, 100, 1]. 160
Figure 8.8: Destination Pose for Case V, When W = [1, 1, 1, 1, 1, 1, 1, 1, 100]. 161
Figure 8.9: Arm's Joint Motion for Case I, When W = [1, 1, 1, 1, 1, 1, 1, 1, 1]. 162
Figure 8.10: Arm's Joint Motion for Case II, When W = [10, 10, 10, 10, 10, 10, 10, 1, 1]. 162
Figure 8.11: Arm's Joint Motion for Case III, When W = [1, 1, 1, 1, 1, 1, 1, 100, 100]. 163
Figure 8.12: Arm's Joint Motion for Case IV, When W = [1, 1, 1, 1, 1, 1, 1, 100, 1]. 163
Figure 8.13: Arm's Joint Motion for Case V, When W = [1, 1, 1, 1, 1, 1, 1, 1, 100]. 164
Figure 8.14: Wheels' Motion for Case I, When W = [1, 1, 1, 1, 1, 1, 1, 1, 1]. 165
Figure 8.15: Wheels' Motion for Case II, When W = [10, 10, 10, 10, 10, 10, 10, 1, 1]. 165
Figure 8.16: Wheels' Motion for Case III, When W = [1, 1, 1, 1, 1, 1, 1, 100, 100]. 166
Figure 8.17: Wheels' Motion for Case IV, When W = [1, 1, 1, 1, 1, 1, 1, 100, 1]. 166
Figure 8.18: Wheels' Motion for Case V, When W = [1, 1, 1, 1, 1, 1, 1, 1, 100]. 167
Figure 8.19: Arm's Joint Velocities for Case I, When W = [1, 1, 1, 1, 1, 1, 1, 1, 1]. 168
Figure 8.20: Wheels' Velocities for Case I, When W = [1, 1, 1, 1, 1, 1, 1, 1, 1]. 168
Figure 8.21: Manipulability Index for Case I, When W = [1, 1, 1, 1, 1, 1, 1, 1, 1]. 169
Figure 8.22: Manipulability Index for Case II, When W = [10, 10, 10, 10, 10, 10, 10, 1, 1]. 170
Figure 8.23: Manipulability Index for Case III, When W = [1, 1, 1, 1, 1, 1, 1, 100, 100]. 170
Figure 8.24: Manipulability Index for Case IV, When W = [1, 1, 1, 1, 1, 1, 1, 100, 1]. 171

PAGE 17

Figure 8.25: Manipulability Index for Case V, When W = [1, 1, 1, 1, 1, 1, 1, 1, 100]. 171
Figure 8.26: Arm Base Position When the Weights Were Equal, W = [1, 1, 1, 1, 1, 1, 1, 1, 1]. 173
Figure 8.27: Arm Base Orientation When the Weights Were Equal, W = [1, 1, 1, 1, 1, 1, 1, 1, 1]. 174
Figure 8.28: Arm Base Position for Case A, When W = [1, 1, 1, 1, 1, 1, 1, 50, 50]. 175
Figure 8.29: Arm Base Orientation for Case A, When W = [1, 1, 1, 1, 1, 1, 1, 50, 50]. 175
Figure 8.30: Arm Base Position for Case B, When W = [1, 1, 1, 1, 1, 1, 1, 50, 1]. 177
Figure 8.31: Arm Base Orientation for Case B, When W = [1, 1, 1, 1, 1, 1, 1, 50, 1]. 177
Figure 8.32: Wheels' Motion Distances for Case I. 179
Figure 8.33: Joint Angular Displacements for Case I. 179
Figure 8.34: Wheels' Motion Distances for Case II. 180
Figure 8.35: Joint Angular Displacements for Case II. 181
Figure 8.36: Wheels' Motion Distances for Case III. 181
Figure 8.37: Joint Angular Displacements for Case III. 182
Figure 8.38: Wheels' Motion Distances for Case IV. 183
Figure 8.39: Joint Angular Displacements for Case IV. 183
Figure 8.40: Manipulability Measure for Case I (PI). 186
Figure 8.41: Joint Angular Displacements for Case I (PI). 186
Figure 8.42: Manipulability Measure for Case II (PI-JL). 187
Figure 8.43: Joint Angular Displacements for Case II (PI-JL). 187

PAGE 18

Figure 8.44: Manipulability Measure for Case III (WPI). 188
Figure 8.45: Joint Angular Displacements for Case III (WPI). 188
Figure 8.46: Manipulability Measure for Case IV (WPI-JL). 189
Figure 8.47: Joint Angular Displacements for Case IV (WPI-JL). 189
Figure 8.48: Manipulability Measure for Case V (SRI). 190
Figure 8.49: Joint Angular Displacements for Case V (SRI). 190
Figure 8.50: Manipulability Measure for Case VI (SRI-JL). 191
Figure 8.51: Joint Angular Displacements for Case VI (SRI-JL). 191
Figure 8.52: Manipulability Measure for Case VII (WSRI). 192
Figure 8.53: Joint Angular Displacements for Case VII (WSRI). 192
Figure 8.54: Manipulability Measure for Case VIII (WSRI-JL). 193
Figure 8.55: Joint Angular Displacements for Case VIII (WSRI-JL). 193
Figure 9.1: Complete SolidWorks Model of the WMRA. 201
Figure 9.2: Kinematic Diagram with Link Frame Assignments. 201
Figure 9.3: Harmonic Drive Gearhead. 203
Figure 9.4: Pittman Servo Brush Motors with Gearbox and Encoder. 203
Figure 9.5: Wrist Design: 3-Roll Wrist (Left), Orthogonal Wrist (Right). 205
Figure 9.6: WMRA SolidWorks Models and the Corresponding Positions of the Built Device. 207
Figure 9.7: The New Gripper's Ergonomic Surfaces. 209
Figure 9.8: The Gripper Design in Application Reference. 210
Figure 9.9: Extended Interior Surface Added to the Gripper. 211
Figure 9.10: The Selected Coreless Gearhead Servo Motor. 213

PAGE 19

Figure 9.11: The Selected Slip Clutch. 214
Figure 9.12: The Assembled Actuation Mechanism. 214
Figure 9.13: The New Gripper and the Actuation Mechanism Drawing. 215
Figure 9.14: The New Gripper and the Actuation Mechanism. 215
Figure 9.15: The New Gripper When Holding a Spherical Object. 217
Figure 9.16: The New Gripper When Holding a Tapered Cup. 217
Figure 9.17: The Gripper When Opening a Door with a Lever Handle (Left) and a Knob Handle (Right). 218
Figure 9.18: The New Gripper When Handling Small Objects. 218
Figure 9.19: The New Gripper When Handling Large and Heavy Objects. 219
Figure 9.20: A Circuit Designed to Convert a Digital PWM Duty-Cycle Control Signal to an Analogue Signal. 220
Figure 9.21: The Designed Controller Box Installed on the Modified Wheelchair. 221
Figure 9.22: The Quick-Release Mechanism that Mounts the Robotic Arm on the Wheelchair. 222
Figure 9.23: JRKERR PIC Servo SC Controller Boards. 223
Figure 9.24: Control System Circuitry. 224
Figure 9.25: Serial Port Connection of the Joint Motors (Left) and the Gripper (Right). 225
Figure 9.26: Wheelchair Encoders and Control Communications. 226
Figure 9.27: A Person with Guillain-Barre Syndrome Driving the New WMRA System. 227
Figure 9.28: A Human Subject Testing the BCI-2000 Interface with the WMRA System. 228

PAGE 20

Maximizing Manipulation Capabilities of Persons with Disabilities Using a Smart 9-Degree-of-Freedom Wheelchair-Mounted Robotic Arm System

Redwan M. Alqasemi

ABSTRACT

Physical and cognitive disabilities make it difficult or impossible to perform simple personal or job-related tasks. The primary objective of this research and development effort is to assist persons with physical disabilities in performing activities of daily living (ADL) using a smart 9-degree-of-freedom (DoF) modular wheelchair-mounted robotic arm system (WMRA).

The combination of the wheelchair's 2-DoF mobility control and the robotic arm's 7-DoF manipulation control in a single control mechanism allows people with disabilities to do many ADL tasks that are otherwise hard or impossible to accomplish. Different optimization methods for redundancy resolution are explored and modified to fit the new system with combined mobility and manipulation control, and to accomplish singularity and obstacle avoidance as well as other optimization criteria implemented on the new system. The resulting control algorithm is tested in simulation using C++ and Matlab code to resolve any issues that might occur during testing on the physical system. The combined control is implemented on the newly designed robotic arm, mounted on a modified power wheelchair and fitted with a custom-designed gripper.
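The combined control described in the abstract amounts to a resolved-rate loop: the arm's 6x7 Jacobian is augmented with two columns for the wheelchair's mobility variables, and the resulting 6x9 Jacobian is inverted each control cycle, with a weight vector W (as in the simulation cases of chapter 8) trading arm motion against wheelchair motion, and damping keeping the inverse bounded near singularities. The following is a minimal illustrative sketch of that idea, not the dissertation's actual code: the Jacobian here is a random placeholder and the damping factor `k` is an assumed value.

```python
import numpy as np

def weighted_sr_inverse(J, W, k=0.01):
    """Weighted singularity-robust (damped) inverse of a wide Jacobian.

    J : 6x9 augmented Jacobian (7 arm joints + 2 wheelchair variables).
    W : 9-vector of weights; a large weight discourages motion of the
        corresponding variable (e.g. W = [1]*7 + [100, 100] favors arm
        motion over wheelchair motion, as in the chapter 8 test cases).
    k : damping factor that keeps the inverse bounded near singularities.
    """
    Winv = np.diag(1.0 / np.asarray(W, dtype=float))
    JW = J @ Winv @ J.T                                   # 6x6
    return Winv @ J.T @ np.linalg.inv(JW + k * np.eye(J.shape[0]))

def resolved_rate_step(J, xdot, W, k=0.01):
    """One control cycle: map a 6-DoF Cartesian velocity command to
    9 joint-space rates using the weighted singularity-robust inverse."""
    return weighted_sr_inverse(J, W, k) @ xdot

# Toy example with a random (full-rank) placeholder Jacobian.
rng = np.random.default_rng(0)
J = rng.standard_normal((6, 9))
xdot = np.array([0.05, 0.0, 0.0, 0.0, 0.0, 0.0])  # 5 cm/s along x
qdot = resolved_rate_step(J, xdot, W=[1] * 7 + [100, 100])
print(qdot.shape)  # (9,)
```

With the damping set very small and equal weights, this reduces to the ordinary pseudoinverse solution, so the commanded Cartesian velocity is reproduced almost exactly; raising the last two weights shifts the same task-space motion onto the arm joints instead of the wheels.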

PAGE 21

The user interface is designed to be modular to accommodate any user preference, including a haptic device with force-sensing capability, a SpaceBall, a joystick, a keypad, a touch screen, head/foot switches, sip-and-puff devices, and the BCI-2000, which reads electromagnetic pulses from certain areas of the brain and, after conditioning, converts them into control signals.

Different sensors (such as a camera, proximity sensors, a laser range finder, and a force/torque sensor) can be mounted on the WMRA system for feedback and intelligent control. The user can control the WMRA system autonomously or through teleoperation. Wireless Bluetooth technology is used for remote teleoperation in case the user is not on the wheelchair. Pre-set activities of daily living tasks are programmed for easy and semi-autonomous execution.

PAGE 22

Chapter 1: Introduction

1.1. Motivation

According to the latest data from the US Census Bureau Census Brief of 1997 [1], one of every five Americans had difficulty performing functional activities (about 53 million), and half of them were considered to have severe disabilities (over 26 million). Robotic aids used in these applications vary from advanced limb orthoses to robotic arms [2]. Persons who can benefit from these devices are those with severe physical disabilities (such as cerebral palsy resulting in loss of sensation or loss of the ability to control movement), acquired disabilities (such as spinal cord injury, multiple sclerosis and stroke), and mobility disabilities (such as osteoporosis and arthritis due to chronic disorders) that result in limited or no upper-extremity mobility, which limits their ability to manipulate objects. These devices increase self-sufficiency and reduce dependence on caregivers.

In the case of spinal cord injury or dysfunction, these aids are most appropriate for individuals with spinal deficiencies ranging from cervical spine vertebra 3 through cervical spine vertebra 5. Individuals with neuromuscular deficiencies, such as multiple sclerosis, or other motor dysfunctions due to accidents, disease, aging, or genetic predispositions can benefit from these robotic devices as well.

PAGE 23

A wheelchair-mounted robotic arm (WMRA) can enhance the manipulation capabilities of individuals with disabilities who use power wheelchairs, and reduce dependence on human aides. Individuals who require mobility-assist devices such as a power wheelchair can benefit from various robotic devices because the power wheelchair provides both a platform on which to mount the device and a power supply, using the wheelchair's batteries. There have been several attempts in the past to create commercially viable wheelchair-mounted robotic arms. Currently, only two WMRAs are commercially available: the Manus (Exact Dynamics, Inc., Netherlands) and the Raptor (Applied Resources, Inc., NJ, USA).

Unfortunately, most WMRAs have had limited commercial success due to poor usability and low payload. It is often difficult to accomplish many Activities of Daily Living (ADL) tasks with the WMRAs currently on the market because of their physical and control limitations and their control independence from the wheelchair's control system. This project attempts to surpass available commercial WMRA devices by offering an intelligent system that combines the mobility of the wheelchair and the manipulation of a newly designed arm, in an effort to improve performance, usability and control, and to reduce the mental load on the user while maintaining cost competitiveness.

The two commercially available WMRAs lack integration of the robotic arm controller with the wheelchair controller, and that leads to an increased mental load on the user. Combining the control of both the power wheelchair and the robotic arm would decrease this mental burden on the user and improve the combined system's usability.

It is desirable to fulfill the need for such integrated systems to be used for many ADL tasks, such as opening a spring-loaded door autonomously and going through it,

PAGE 24

interactively exchanging objects with a companion on the move, avoiding obstacles by going around them while maneuvering objects, conveniently handling food and beverages between the fridge, microwave oven, stove, etc., without the need to switch between the wheelchair controller and the robotic arm controller, and avoiding singularities in a small working environment, such as an office, where wheelchair motion can be slightly utilized to maneuver objects while avoiding singularities (similar to a person sitting on an office chair who handles objects around him by moving his/her arm while slightly moving the chair to get closer to an object that is otherwise unreachable).


6. High-level control of the 9-DoF WMRA system is to be designed to combine the arm's 7 DoF and the wheelchair's mobility in the new redundant 9-DoF system.
7. Redundancy is to be used and optimized to improve manipulation capabilities for activities of daily living (ADL) and to avoid singularities.
8. The new system is to be capable of executing complex pre-set tasks autonomously as well as in teleoperation mode.
9. The user interface in the WMRA system's teleoperation mode should be capable of using a force-reflecting haptic interface, a keypad, a Spaceball, a touch screen, a BCI 2000 brain-wave sensor device, or other user interface devices that can serve as modular user interfaces with different capabilities to fit individual user needs.
10. Higher-level control algorithms are to be developed to interface the sensory data and the user input for easy control of the system.
11. The system should be capable of future modification to use Bluetooth wireless technology for remote teleoperation so that the user can also perform some ADL tasks while not seated on the wheelchair.
12. A sensory suite can be included in the control algorithm for feedback purposes.

1.3. Dissertation Outline

This dissertation will give a background in Chapter 2 on previous work done in the field of mobile robots and redundant manipulators as well as assistive devices that can be used by individuals with disabilities. Chapter 3 will focus on redundant manipulator control theories and methods, and Chapter 4 will discuss the control theory of differential drives that produce non-holonomic motion. In Chapter 5, the combination of both the


redundant manipulator control and the differential-drive non-holonomic mobility control theory will be discussed. Mathematical relationships and the augmentation of the Jacobian to combine mobility and manipulation will also be derived in this chapter, along with the optimization methods used for this application. Chapter 6 will show the different user interfaces used for this application and some clinical studies conducted with human subjects. Chapter 7 will show the application of combined manipulation and mobility control in simulation, and the results of the simulation will be shown and discussed in Chapter 8. The testbed design for experimental application of the control theory on the physical WMRA system will be described in Chapter 9 along with the experimental results. Chapter 10 concludes the dissertation with a summary, discussion, and recommendations, and Chapter 11 will discuss future work that can be conducted on the WMRA system.


Chapter 2: Background

2.1. History of Rehabilitation Robotics

The development of robots started in the 1960s with manipulators used for manufacturing purposes [3]. Planetary rovers and vision-embedded systems took the attention of researchers in the early 1970s and were developed alongside the industrial manipulators. Starting in the past decade, researchers have focused on artificial intelligence in robotics in order to widen the use of robots and make them more intelligent in their applications. One such use is in the area of rehabilitation, where people with disabilities can take control of some of their daily needs without the need for human assistance. A key problem in robotic arms that are mounted on a mobile platform is the combination of the manipulation and mobility of these systems while they move in space, especially when redundancy is introduced. There have been various attempts over the years to create robotic assistants for individuals with various levels of disabilities. For over 30 years, research has progressed in this field with only partial commercial success. One of the first attempts at rehabilitation robotics was the Rancho "Golden" arm [4], designed in 1969 at Rancho Los Amigos Hospital in Downey, California. The arm was an electrically driven 6-degree-of-freedom (DOF) robotic arm that mounted to a powered wheelchair and was controlled at the joint level by an array of tongue-operated


switches. Discussions on the topic of the controllability of the arm commented on both successes and failures of the design. The successes of the project can be attributed to the important role that proprioceptive feedback plays in the control of a person's own extremities [5]. These pioneering research projects provided a framework for future development.

2.2. Rehabilitation Robotics Classification

Assistive robotics can be grouped into one of five categories: workstation robots, which operate in stationary and well-structured environments; wheelchair-mounted robotic arms, which operate on power wheelchairs to assist in activities of daily living; mobile assistive robots, which travel about the room and have a manipulator arm; therapeutic robots, which are used for different kinds of therapy; and smart wheelchairs and walkers.

2.2.1. Workstation Robotic Arms

The very first rehabilitation robotics applications focused on using commercially available industrial manipulators and modifying them for rehabilitation applications. An example of these manipulators is the PUMA 250 shown in figure 2.1. A factor that limits the use of industrial robotic arms in rehabilitation is the basic difference in operational requirements. Industrial arms are designed to work at high speed in an environment where there are no humans. This reason alone would limit their use for reasons of operator safety. For applications in a human-intensive workspace,


assistive robotic arms need to be mechanically limited to low velocities and accelerations for the safety of the operator and the human subjects around these devices.

Figure 2.1: Puma 250 Arm.

The Robotic Aid Project [6] was an attempt to create a system for users with quadriplegia. The project was an integration of a PUMA 250 industrial manipulator arm, a microprocessor, a multi-line monochrome display, and speech synthesis and recognition systems. Limitations of the speech-recognition systems and the computational power of the day restricted the success of the program. The processing ability of contemporary computers did not allow for real-time inverse kinematics of the arm, which limited the arm to merely replaying preprogrammed actions. Individual joints of the arm could be manipulated, but coordinated real-time multi-joint maneuvers were impossible. As more application-specific robotic arms and computers with increased computational power became available, arms with controllers could now be mounted onto mobile platforms. At first these systems were simply rolling bases, which then increased in complexity and degrees of freedom to include powered mobile robots.


Handy-1 [7] is a robotic arm mounted on a non-powered wheeled base to assist in very specific activities of daily living (ADL). Handy-1 was developed in 1988 to provide persons with severe disabilities assistance at mealtimes. Since its initial introduction the unit has expanded capabilities and is now capable of providing assistance in a broader number of activities of daily living. Handy-1 is capable of assisting individuals with personal hygiene, eating and drinking, and the application of make-up. During user trials, women specifically asked if the unit would be capable of applying cosmetic products. Shortly after the trial, the design was upgraded with a new tray and gripper accessory. Each ADL task has a specific tray to accomplish its goal. Handy-1 is shown in figure 2.2 and is based on a lightly modified 5-DOF industrial manipulator.

Figure 2.2: Handy-1.

In the feeding mode the operator controls the robot through an interface that uses lights which move across the available food trays, and a button that selects the desired item. Once the button is pressed, the robot scoops up the selected food and brings it to a predetermined place near the operator's mouth. Once the user has consumed the food, the operator presses the button again and the robot returns to the food selection mode. This process is repeated until the operator is finished. An advancement of the technology


of Handy-1 is being explored with the Robotic Aid to Independent Living (RAIL) project [8]. RAIL improves upon the Handy-1 design by incorporating a new controller for better manipulator control, a 3D simulation tool for modeling virtual scenarios, and the attachment of sensors to assist setup and position-error determination.

The RAID workstation [9], shown in figure 2.3, was designed to be a workstation assistive robot system. It is comprised of a 6-DOF robotic arm mounted onto a linear track in a well-controlled environment. In the figure the manipulator can be seen near the top of the shelf in the center of the cabinet.

Figure 2.3: RAID Workstation.

The RAID system provides benefits that are enhanced by the formal structure of a workstation environment. This organization allows the manipulator arm to repeatedly move and acquire items needed by the operator using preprogrammed functions and routines. The RAID system is currently under evaluation in Europe.

The Robotic Assistive Device (RAD) [10], shown in figure 2.4, is a robotic arm developed by the Neil Squire Foundation in Vancouver, Canada. The RAD is a 6-DOF workspace-mountable manipulator that uses a serial-port computer interface. The


manipulator is controlled through a graphical user interface (GUI) utilizing icons to symbolize predefined tasks. The system consists of several modules which, when combined, create an arm with a cylindrical reach of approximately 55" and a height of 110". The arm can be mounted on various surfaces, has good repeatability at 0.12", and has a relatively large payload capacity of 9.5 lbs. Most rehabilitation-specific manipulators have maximum payloads of 5 pounds or less.

Figure 2.4: Robot Assistive Device.

The ProVAR [11], shown in figure 2.5, is a system based on a Puma 260 robotic arm designed to operate in a vocational environment.

Figure 2.5: ProVAR System.


The ProVAR system uses a web-based virtual environment to model the functionality of the manipulator. In this way the operator can examine potential arm movements for a given task, and if the simulation is successful, the action can be initiated. The actions of the arm and its interactions within its workspace can thus be seen before any action is taken. The primary goals for ProVAR are more functionality per dollar, easier operator control, and higher system reliability compared with the previous generation of vocational assistive robots.

2.2.2. Wheelchair-Mounted Robotic Arms

Wheelchair-mounted robotic arms (WMRAs) combine the idea of a workstation and a mobile robot by mounting a manipulator arm onto a power wheelchair. In the past, industrial manipulators have been too large and heavy to be mounted onto a power wheelchair. An industrial manipulator mounted onto the wheelchair would have excessively hindered the operator's ability to maneuver the chair through doors and hallways. Several design considerations must be met before deciding where, on a power wheelchair, to mount a robotic arm. The foremost design consideration is the safety of the operator [12]. The mount must be sturdy and rigid and must not compromise the structural integrity or the functionality of the chair in any way. Next, the robotic arm must be mounted in such a way that it has a minimum footprint outside the footprint of the chair itself. There are several possible mounting locations for a WMRA [13]: the mount may be in the front, side, or rear of the wheelchair.


The Helping Hand system [14] is a 5-DOF robotic arm that can be mounted to the side of a power wheelchair. The Helping Hand operates by joint control and is manipulated by using switches to control individual joints.

The Weston robotic arm [15], shown in figure 2.6, utilizes a vertical actuator mounted to a wheelchair with the main rotary joints (shoulder, elbow, and wrist) constrained to move in the horizontal plane. This is the continuation of the trolley-mounted Wessex robot arm research. A prismatic joint moves in a linear sliding motion along a track.

Figure 2.6: Weston Arm.

Another WMRA is the Asimov [16], a modular manipulator designed with the motors and controls distributed throughout the arm. A computer rendering of the Asimov is shown in figure 2.7. The modularity of the design allows for multiple mounting locations on a wheelchair or in a stationary application with various workspace geometries. The concept of a modular manipulator has several benefits. It provides the opportunity for one manipulator that can be used in either a mobile or


workstation environment. Different link geometries can be explored to create the optimum design for any given application. Asimov models have been shown with all three possible mounting positions: front, side, and rear. Without physical models to test the efficacy of the design, it is unknown how well the design would integrate into real-world applications.

Figure 2.7: Asimov Arm.

The FRIEND [17] robotic system is a Manus arm mounted onto a wheelchair and integrated with stereo vision, dedicated computer control, and specialized software. Besides programming with a keypad or joystick, the FRIEND system, shown in figure 2.8, is capable of being programmed via a haptic interface glove. The haptic glove allows the operator/programmer to feel what the robot feels through feedback to the user. The haptic glove is put on and the action, such as pouring a glass, is completed and stored in the computer for future use. The action can then be replayed at a later time as a predefined user function. The operator may also control the arm through verbal commands using an integrated voice recognition system.


Figure 2.8: FRIEND Robotic System.

2.2.3. Mobile / Assistant Robots

Mobile systems are capable of assisting individuals with disabilities. These systems include a mobile base, various sensors, and a manipulator arm. An early version of one such system is the Mobile Vocational Assistant Robot (MoVAR) [18]. This system, shown in figure 2.9, utilizes an omni-directional mobile platform carrying a PUMA-250 robotic arm as well as several sensors, including a remote viewing camera, force sensors, and gripper proximity sensors.

MoVAID [18] is an advanced version of the MoVAR system, designed specifically for home use. MoVAID improves upon the previous model by applying the lessons learned in laboratory testing to assist in common tasks around the home such as cleaning and food preparation. MoVAID incorporates a variety of sensing devices mounted both to the manipulator and to the base. In figure 2.10, MoVAID can be seen along with the various sensors that are located on the manipulator arm. Sensors mounted to the first


link of the arm include a pair of cameras used for stereo vision and a laser localization system used in task execution.

Figure 2.9: MoVAR.

The MoVAID system uses active beacons positioned within the room that provide reference data to determine its location and orientation. In addition to position detection, the unit also has ultrasonic range detectors and an active bumper that disables the device should an impact occur.

Figure 2.10: MoVAID.


The robotic arm used by MoVAID has 8 DOF and a three-fingered gripper with two degrees of freedom. The gripper was originally designed as a prosthetic device specifically to have excellent dexterity. The increased agility provided by the gripper over more traditional end-effectors allows MoVAID to be very effective in the unstructured home environment.

Another design is the TAURO [19], an integrated robotic system using off-the-shelf components such as a power wheelchair, a Manus manipulator, ultrasonic sensors, a camera, and computers. TAURO is a mobile service robot being developed for inspection, stocktaking, and documentation tasks in indoor environments. The TAURO system integrates the movement of the wheelchair and the operation of the manipulator: if the goal is out of reach of the manipulator, the wheelchair will move on a path toward the goal until the manipulator can reach it. This coordinated control is a significant advance in the use of WMRAs. Although not specifically designed for rehabilitation robotics tasks, TAURO would be readily adaptable to them. The TAURO system can be seen in figure 2.11.

Figure 2.11: TAURO Robotic System.
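The coordinated base-arm behavior described for TAURO can be sketched as a simple control loop: if the goal lies beyond the arm's reach, step the platform toward it until the manipulator can take over. The Python sketch below is illustrative only, not TAURO's actual controller; the reach and step-size values are invented for the example.

```python
import math

# Illustrative sketch of TAURO-style base/arm coordination (not TAURO's actual
# controller): if the goal is beyond the arm's reach, step the platform toward
# it until the manipulator can take over.  Reach and step size are assumptions.
ARM_REACH = 0.8        # assumed manipulator reach [m]
STEP = 0.05            # assumed base travel per control cycle [m]

def coordinate(base, goal):
    """Drive the base toward `goal` until the goal is within arm reach.

    Returns the final base position and the number of base steps taken."""
    bx, by = base
    gx, gy = goal
    steps = 0
    while math.hypot(gx - bx, gy - by) > ARM_REACH:
        d = math.hypot(gx - bx, gy - by)
        bx += STEP * (gx - bx) / d     # unit step along the goal direction
        by += STEP * (gy - by) / d
        steps += 1
    return (bx, by), steps

final_base, n_steps = coordinate(base=(0.0, 0.0), goal=(2.03, 0.0))
```

Once the loop exits, the goal is inside the manipulator's workspace and the arm alone completes the task, which is the hand-off the TAURO description implies.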


2.2.4. Robots in Therapy

Assistive robotics can enhance the capability of people with disabilities by assisting them in different activities of daily living. Therapeutic robots, on the other hand, can exercise the user's muscles to keep them active or to increase their ability to function over time. One of these systems is the MIT Manus [20], shown in figure 2.12, which uses impedance control to move, guide, or perturb the hand motion of the user in training. It records the position, velocity, and applied forces during therapy sessions for analysis.

Figure 2.12: MIT Manus System.

Another therapeutic robot is the mouth-opening and -closing device developed by Takanobu et al. in 2001 [21], which is used in training as shown in figure 2.13. This robotic device uses 6 linear actuators to manipulate the U-shaped end effector. Each joint carries displacement and velocity sensors for feedback.
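The impedance control that MIT Manus applies can be pictured as the robot rendering a virtual spring-damper that pulls the user's hand toward a desired position. The one-dimensional Python sketch below uses illustrative mass, gains, and time step, not MIT Manus parameters, to show the basic law.

```python
# One-dimensional sketch of impedance control of the kind MIT Manus applies:
# the robot renders a virtual spring-damper pulling the hand toward a desired
# position x_d.  Mass, gains, and time step are illustrative values only.
M, K, B = 1.0, 40.0, 12.0     # handle mass [kg], stiffness [N/m], damping [N*s/m]
DT = 0.001                    # integration step [s]

def settle(x, v, x_d, steps=5000):
    """Integrate m*a = K*(x_d - x) - B*v with semi-implicit Euler."""
    for _ in range(steps):
        a = (K * (x_d - x) - B * v) / M
        v += a * DT
        x += v * DT
    return x, v

# Hand starts at rest at the origin and is guided toward 0.3 m.
x_final, v_final = settle(x=0.0, v=0.0, x_d=0.3)
```

With these near-critically-damped gains the hand settles smoothly at the target, which is the guiding (rather than forcing) behavior the therapy application needs.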


Figure 2.13: Mouth Opening and Closing Device.

2.2.5. Smart Wheelchairs / Walkers

The iBOT and Segway mobility systems [22] are unique gyro-balanced mobility devices that have been designed to operate on four wheels or two wheels, stabilizing the user by automatically adjusting and balancing. The iBOT offers five operating functions. The remote function allows the user to detach the joystick and, via a cable connection, drive the empty iBOT into the back of a vehicle for easy transporting. The stair-climbing function allows the user to climb up and down stairs with or without assistance. The wheel function allows it to climb curbs as high as 4 inches and travel over grass, gravel, sand, and other forms of uneven terrain. The standard function is similar to standard power chairs. The balance function allows the user to reach high places independently in a manner similar to the Segway. Figure 2.14 shows both the iBOT and the Segway.


Figure 2.14: iBOT (Left) and Segway (Right) Devices.

2.3. Commercial Wheelchair-Mounted Robotic Arms

Currently there are two production wheelchair-mounted robotic arms (WMRAs): the Manus, manufactured by Exact Dynamics, and the Raptor, manufactured by Applied Resources.

2.3.1. The Manus

The Manus manipulator arm [23] (recently renamed ARM, for Assistive Robotic Manipulator) is a fully deterministic manipulator. A fully deterministic arm can be programmed in a manner comparable to industrial robotic manipulators. The Manus has been under development since the mid-1980s and entered production in the early 1990s. A picture of the Manus mounted onto a Permobil Max90 wheelchair is shown in figure 2.15.


Figure 2.15: Manus Arm.

This arm utilizes a front mounting location to the left of the operator's left knee, which allows for good manipulation of objects that are above the plane of the wheelchair seat, and most importantly the operator's face and lap. The Manus carries 6 revolute joints with encoders, a 1-DoF gripper, and a vertical lift. The Manus manipulator is controlled by a joystick and a keypad, and can perform single-joint control or coordinated control of multiple joints.

2.3.2. The Raptor

Another production WMRA is the Raptor [24], which mounts the robotic arm to the right side of the wheelchair. This manipulator carries 4 revolute joints plus a planar gripper and can be seen mounted to a power wheelchair in figure 2.16. The arm is directly controlled by the user with either a joystick or a 10-button controller. Because the Raptor does not have encoders to provide feedback to the controller, the manipulator cannot be pre-programmed in the fashion of industrial robots and cannot have Cartesian control.


Figure 2.16: Raptor Arm.

The Raptor is a side-mounted arm that is partially hidden underneath the chair. When the arm is not in use, it can be stowed relatively inconspicuously. Robotic arms with joint control require higher levels of concentration and eye-hand coordination from the operator than Cartesian control systems.

2.4. Robot Control

The controller design of a robotic device is as important as the design of the robotic device itself. Many researchers have explored different ways of controlling robotic devices, both in simulation and in physical systems.

2.4.1. Redundant Robot Control

When controlling a robotic device, it is essential to compare the workspace capability of the robot and the task space required in operation. In general, a minimum of


6 degrees of freedom is required in a robot in order to accomplish total manipulation control of objects in the workspace. In the case of a planar workspace, a minimum of 3 joints is required to achieve full manipulation of that workspace. When the number of joints exceeds the number of controlled coordinates in the workspace, redundancy is introduced, and conventional inverse kinematics for a closed-form solution is no longer applicable. Redundancy resolution and optimization have been the subject of many researchers, where the extra joints are employed to execute additional tasks and optimize the motion based on certain performance criteria.

Chang [25] proposed a closed-form solution formula for the inverse kinematics of redundant manipulators using Lagrange multipliers. He proposed an additional set of equations to resolve the redundancy at the inverse kinematic level in such a way that a given criteria function may be minimized or maximized. The additional equations were set in a similar way to the homogeneous solution term of the resolved-rate method, which uses the null space to resolve the redundancy. He used the manipulability index as the criteria function, but any criteria function can be used as long as it can be reduced to an expression in terms of joint variables only.

Khadem et al. [26] used a global optimization scheme to avoid round obstacles using the resolved-rate method. Their simulation of a three-revolute-joint planar robotic arm showed good performance in following a path while the specified robot link avoided a specified obstacle throughout the simulation. Figure 2.17 shows the simulation mechanism with and without the obstacle avoidance optimization criteria.
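The resolved-rate structure with a homogeneous (null-space) term that these methods build on can be sketched numerically: joint rates are computed as q̇ = J⁺ẋ + (I − J⁺J)z, where the projected term produces self-motion that leaves the end-effector velocity untouched, so z can carry a criteria-function gradient such as the manipulability index. The Python sketch below uses an illustrative planar 3R arm with unit links; it is not the dissertation's C++/Matlab implementation.

```python
import math

# Resolved-rate control with a null-space term: qdot = J+ xdot + (I - J+ J) z.
# Illustrative planar 3R arm with unit link lengths (an assumption for this
# sketch), task space = end-effector (x, y) velocity.
L = (1.0, 1.0, 1.0)

def jacobian(q):
    """2x3 position Jacobian of the planar 3R arm."""
    s = (q[0], q[0] + q[1], q[0] + q[1] + q[2])          # absolute link angles
    jx = [-sum(L[k] * math.sin(s[k]) for k in range(i, 3)) for i in range(3)]
    jy = [sum(L[k] * math.cos(s[k]) for k in range(i, 3)) for i in range(3)]
    return [jx, jy]

def pseudoinverse(J):
    """J+ = J^T (J J^T)^-1 for a full-rank 2x3 Jacobian (3x2 result)."""
    a = sum(v * v for v in J[0])
    b = sum(u * v for u, v in zip(J[0], J[1]))
    c = sum(v * v for v in J[1])
    det = a * c - b * b
    inv = [[c / det, -b / det], [-b / det, a / det]]     # (J J^T)^-1
    return [[J[0][i] * inv[0][j] + J[1][i] * inv[1][j] for j in range(2)]
            for i in range(3)]

def resolved_rate(q, xdot, z):
    """qdot = J+ xdot + (I - J+ J) z for a 2-D task velocity xdot."""
    J = jacobian(q)
    Jp = pseudoinverse(J)
    qdot = [Jp[i][0] * xdot[0] + Jp[i][1] * xdot[1] for i in range(3)]
    for i in range(3):                                   # add null-space motion
        for j in range(3):
            n_ij = (1.0 if i == j else 0.0) - sum(
                Jp[i][r] * J[r][j] for r in range(2))
            qdot[i] += n_ij * z[j]
    return qdot

q, xdot, z = (0.3, 0.6, -0.4), (0.1, -0.05), (0.5, -0.2, 0.1)
qdot = resolved_rate(q, xdot, z)
```

Whatever z is supplied, J·q̇ still reproduces the commanded end-effector velocity; only the internal self-motion changes, which is exactly the freedom the optimization criteria exploit.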


Figure 2.17: Redundancy Resolution without (Left) and with (Right) Obstacle Avoidance.

Chan et al. [27] proposed a new method to resolve the redundancy and optimize for joint-limit avoidance. They used a symmetric positive-definite weight matrix, carrying a different weight for each individual joint of the redundant robot, included in the least-norm solution they used to control a 7-DoF robotic arm. The weighted least-norm solution was implemented and showed a good combination of reaching the goal accurately along the specified trajectory and avoiding the joint limits of the robotic arm. McGhee et al. [28] used the weight matrix to avoid joint limits, singularities, and obstacles using probability-based weighting of the performance criteria.

Beiner et al. [29] improved the velocity norms and the kinetic energy of their planar 3-DoF robotic crane with hydraulic actuators by using an improved pseudoinverse solution control scheme. They used the initial manipulator configuration as an optimization parameter, and were able to reduce the actuator velocities obtained by a pseudoinverse solution while simultaneously avoiding the actuator limits.

Zergeroglu et al. [30] designed a model-based nonlinear controller that was able to achieve exponential link-position and subtask tracking. Their control strategy used


the pseudoinverse of the manipulator Jacobian and did not require the computation of the positional inverse kinematics. Their control strategy did not place any restriction on the self-motion of the manipulator; hence, the extra DoF were available for their manipulability-maximization, obstacle-avoidance, and joint-limit subtasks.

Kwon et al. [31] introduced a new method to optimize and resolve redundancy considering joint-limit constraint functions. Their dual QCQP method used quadratic inequality constraints to approximate the linear inequality constraints that represent joint position, velocity, and torque bounds. Using their method, they were able to reduce the size of the problem by reducing the number of constraints and variables. They formulated the quadratic objective function and then converted the problem into two problems by eliminating linear equality constraints and by applying duality theory. This method was used in their simulation of a 4-joint planar robotic arm, and they were able to cut the computation time to about a tenth of that when the problem was not reduced.

Ellekilde et al. [32] introduced a new scheme for controlling robots in visual servoing applications. They employed quadratic optimization techniques to solve the inverse kinematics problem and explicitly handled joint position, velocity, and acceleration limits by incorporating them as constraints in the optimization process. Contrary to other techniques that use the redundant degrees of freedom to avoid joint limits, their method incorporated the dynamic properties of the manipulator directly into the control system to use redundancy to avoid joint velocity and acceleration limits. They used the joint position, velocity, and acceleration limits by converting them into the velocity domain and choosing the best case of these limits (that


satisfies all of them) at every time step to be used in the optimization function. Figure 2.18 shows the application of their method in the RoboCatcher visual servoing example using the QP controller. The robot tries to track a car that moves in a circle in the playing area. The QP control system was robust with respect to singularities, which enables the robot to track the car "as good as possible" even if it is out of reach.

Figure 2.18: The Robot Visual Servoing Application Using the QP Controller.

2.4.2. Mobile Robot Control

In the past decade, mobile manipulators that combine the manipulation of a manipulator and the mobility of a mobile platform have received much attention from researchers. Most of the work on mobile manipulators has reported on the coordination of the mobile platform and the manipulator and on obstacle avoidance in


their environments. Recently, the importance of human-robot coexistent systems, which perform cooperative work with humans and provide convenience in tasks such as house cleaning and washing, has been raised.

Perrier et al. [33] proposed a method to determine a feasible path between two fixed configurations for a mobile manipulator whose vehicle is non-holonomic. For this purpose, they wrote the global displacement of the system in a symbolic way, using two representation tools: homogeneous matrices and dual quaternions. The corresponding joint parameters are computed to make the desired displacement coincide with the computed symbolic displacement. Figure 2.19 shows the frames of reference used in their robotic simulation.

Figure 2.19: Reference Frames Used for the Manipulation LIRMM.

Simulation results on motion generation for a mobile manipulator with a non-holonomic vehicle and a six-degree-of-freedom (DoF) arm using a global method were shown. They represented the non-holonomy of the vehicle as a constrained displacement. The method tries to make the global feasible displacement of the system correspond to the desired one. Two kinds of displacement representations were used: homogeneous matrices and dual quaternions. Trajectories obtained with the two representations were given. Both representations allowed them to compute feasible trajectories, although different ones.
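The vehicle non-holonomy such planners must respect is, for a differential-drive platform, the no-sideways-slip condition ẋ·sin θ − ẏ·cos θ = 0. The Python sketch below (illustrative speeds and step sizes, not taken from the paper) integrates the standard unicycle kinematics and shows that the pose only ever moves along the heading.

```python
import math

# Differential-drive (unicycle) kinematics: pose (x, y, theta) evolves under
# linear speed v and turn rate w as
#   xdot = v*cos(theta), ydot = v*sin(theta), thetadot = w,
# so the non-holonomic constraint xdot*sin(theta) - ydot*cos(theta) = 0 holds
# at every instant.  Speeds and step sizes are illustrative assumptions.

def step(pose, v, w, dt):
    """One Euler step of unicycle kinematics under speed v and turn rate w."""
    x, y, th = pose
    return (x + v * math.cos(th) * dt,
            y + v * math.sin(th) * dt,
            th + w * dt)

def lateral_slip(pose, v):
    """Instantaneous sideways velocity; identically zero for this model."""
    th = pose[2]
    xdot, ydot = v * math.cos(th), v * math.sin(th)
    return xdot * math.sin(th) - ydot * math.cos(th)

pose = (0.0, 0.0, 0.0)
for _ in range(100):               # drive along an arc for one second
    pose = step(pose, v=0.5, w=0.3, dt=0.01)
```

Because lateral motion is forbidden at every instant, a sideways displacement can only be reached by maneuvering, which is why such planners must encode the constraint rather than invert the kinematics directly.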


At Chuo University in Japan [34], researchers proposed a simple method for carrying a large object by cooperation of multiple mobile manipulators with position controllers. Manipulators on mobile platforms are used as free-joint mechanisms by locking some joints and leaving the rest of the joints free. These free joints play the role of mechanical compliance in order to avoid excessive inner forces due to mutual positioning errors. They found that compliance is needed for cooperation among position-controlled robots; three feedback control laws for platforms moving on uneven ground were studied and their control performance examined. As shown in figure 2.20, they proposed the control laws to be used for a prototype cooperative system consisting of three moving tables driven by ball screws.

Figure 2.20: Cooperative Control System Setup.

When multiple robots hold a single object at the same time, geometrical constraints from the closed-loop structures are imposed on each robot. Thus compliance is needed for each robot to avoid excessive inner forces caused by mutual positioning errors.

Huang et al. [35] developed a small-sized platform as shown in figure 2.21. The problem they faced is the fact that, for small-scale platforms, the mobile manipulator may fall down when moving at high speed or executing tasks in the presence


of disturbances. Therefore, it is necessary to consider both stabilization and manipulation simultaneously while coordinating vehicle motion and manipulator motion. They propose a method for coordinating vehicle motion planning considering manipulator task constraints, and manipulator motion planning considering platform stability. Specifically, the optimal problem of vehicle motion is first formulated, considering vehicle dynamics, the manipulator workspace, and system stability.

Figure 2.21: Mobile Manipulator Model.

They derived the manipulator motion considering stability compensation and manipulator configuration. Then, simulation was conducted to demonstrate the effectiveness of their method. These researchers tried to derive coordinated motion so that the mobile manipulator can move stably and follow a given desired end-point trajectory (path, velocity) at an optimal configuration. They derived the redundant manipulator motion, considering stability compensation and manipulator configuration, and provided simulation results. When considering the compatibility of stabilization and manipulation,


it is first necessary to maintain system stability. Then, based on the assurance of system stability, the mobile manipulator should execute tasks with an optimal configuration.

Researchers at the Naval Postgraduate School and Tokai University [36] presented a unified approach to the task-space analysis of a wheeled mobile manipulator interacting with the environment, as shown in figure 2.22. The system considered is two articulated manipulators atop a wheeled mobile platform handling a common object.

Figure 2.22: Wheeled Mobile Manipulator with Two Arms.

They derived the task-space ellipsoid, both kinematic and dynamic, for a wheeled mobile manipulator, which takes into account manipulation and locomotion. The ellipsoid is able to visualize how the manipulator and the platform can contribute to a task execution by integrating the mobility of the platform with the manipulability of the arms as one unified measure. This measure can be useful not only for the task-space analysis of

a single mobile manipulator, but also for the coordination of multiple-arm mobile robots or mobile manipulators.

Separating manipulation from mobility makes the control and planning problems easier, but it is much more effective and efficient if the manipulator can execute a given task while the platform is moving. These researchers have proposed a coordination algorithm for mobile manipulators which utilizes the manipulability measure of the arm. They treated both locomotion and manipulation in the same framework from the viewpoint of task space. The contribution of the manipulator and platform is represented by the task space ellipsoid at the end-effector point or at the center of the object to be handled. The ellipsoid not only displays how a given task is contributed to by the arm and the platform; its shape, kinematic or dynamic, also naturally reflects the constraint equations to which the platform is subjected. They also derived the motion equations of the two-arm mobile manipulator and the object separately. Motion equations of the mobile manipulator, which itself consists of multiple subsystems, are obtained by adding dynamic interaction terms to pre-existing motion equations to get the global equations of motion for the whole system.

Royal Institute of Technology researchers [37] have proposed a platform-independent control approach for mobile manipulation and coordinated trajectory following. Given a path for the gripper to follow, another path is planned for the base in such a way that it is feasible with respect to manipulability. The base and the end-effector then follow their respective reference trajectories according to error-feedback control algorithms, while the base is placed in such a way that the end-effector trajectory is always within reach of the manipulator. The experimental platform that they have used for

this is a Nomadic XR4000 base platform together with a Puma560 manipulator arm, as shown in figure 2.23.

Figure 2.23: Nomad XR4000 with the Puma560 Mounted on Top.

Given a path for the gripper to follow, the idea is to plan another path for the base, online, in such a way that the end-effector trajectory always lies in the dextrous workspace. These two paths are then tracked using a virtual vehicle approach, where the motions of the reference points on the desired base and gripper paths are governed by their own dynamics, containing both position error feedback and coordination terms.

Researchers at Kyushu University [38] have studied the planning of a trajectory of a mobile manipulator such as the one shown in figure 2.24. They derived the dynamics of the mobile manipulator, considering it as the combined system of the manipulator and the mobile platform. The planning problem is formulated as an optimal control problem. To solve the problem, they used the concept of the order of priority. A gradient-based iterative algorithm which synthesizes the gradient function in a hierarchical manner based on the order of priority is used. Simulation results for a 2-link planar non-holonomic mobile manipulator are given to show the effectiveness of their algorithm. A mobile manipulator composed of a manipulator and a mobile platform

has a much larger workspace than a fixed-base manipulator due to the mobility provided by the platform. The trajectory planning problem of the non-holonomic mobile manipulator has been addressed with the system dynamics taken into consideration.

Figure 2.24: Mobile Manipulator.

Researchers at LAAS-CNRS in France [39] have extended the standard definition of manipulability to the case of a nonholonomic mobile manipulator built from an n-joint robotic arm and a nonholonomic mobile platform, as shown in figure 2.25. The effects of mounting the arm on a nonholonomic platform are shown through the analysis of the manipulability.

Figure 2.25: Mobile Manipulator H2BIS.

Applications of criteria inherited from manipulability considerations are given to justify design choices and to generate the controls of their system. Their study was motivated by the generation of the mobile manipulator velocities needed to execute a given operational path. The inversion of the direct instantaneous kinematic model of the mobile manipulator allowed them to solve the problem and to take some additional criteria into account. As a usual criterion, they considered the manipulability measure, which is very useful to characterize the instantaneous kinematics of a given system. These researchers showed how the notion of manipulability can be extended in that case to represent the possible operational motions in a given configuration of the system. Some simulations gave an idea of the effects of the platform on the shape of the manipulability ellipsoids, with an obvious dependence on nonholonomy.

Those same researchers [40] have also proposed a generic scheme to solve the kinematic control problem of wheeled mobile manipulators when the operational motion is imposed. They generalized the Additional Task Method to solve the control problem of these redundant nonholonomic systems. They integrated any number of additional user-defined constraints into the operational task and proposed a generic approach to control a large class of mobile manipulators, as well as other methods to express the additional tasks corresponding to real experimental constraints. Thus the control designer can purposely use redundancy, particularly to avoid obstacles. They illustrated the Additional Task Method with a collision-free simulation. This simulation is done in a 3D environment and uses an efficient collision detector. Figure 2.26 represents the mobile manipulator of LAAS.
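The manipulability ellipsoid discussed above can be obtained from the singular value decomposition of the system Jacobian: the left singular vectors give the ellipsoid's principal directions, and the singular values give the semi-axis lengths. A minimal sketch, using an illustrative planar Jacobian (not the actual H2BIS model):

```python
import numpy as np

# Hypothetical 2x3 Jacobian of a planar arm riding on a 1-DoF base:
# task velocity (xdot, ydot) = J @ (base_rate, joint1_rate, joint2_rate).
J = np.array([[1.0, -0.8, -0.3],
              [0.0,  0.6,  0.4]])

# SVD: the ellipsoid's principal semi-axes are the left singular vectors
# scaled by the singular values.
U, sigma, Vt = np.linalg.svd(J)
semi_axes = [s * u for s, u in zip(sigma, U.T)]

# Yoshikawa's manipulability measure w = sqrt(det(J J^T)) equals the
# product of the singular values, i.e. the ellipsoid "volume" up to a
# constant factor.
w = np.sqrt(np.linalg.det(J @ J.T))
print(w, np.prod(sigma))  # the two values agree
```

A long, thin ellipsoid (one singular value much smaller than the other) signals a near-singular direction in which the combined platform-and-arm system can barely move.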

Figure 2.26: LAAS Mobile Manipulator.

Figure 2.26 (a) represents the collision in motion of the mobile manipulator without taking into account the additional collision avoidance constraint. Figure 2.26 (b) shows the achievement of the motion with the use of the proposed method.

Researchers at Okayama University [41] have conducted research to realize motion planning for an intelligent mobile manipulator, shown in figure 2.27. To plan a mobile manipulator's motion, it is common to regard the base robot motion as extra joints of the manipulator, so that the whole system is considered a redundant manipulator. In this case, the locomotion controller is a part of the manipulator controller. However, it is difficult to implement both controllers as one controller in practice because of the different character of the actuators. In this research they focused on a path planning algorithm for a mobile base that maintains manipulability at the tip of the mounted manipulator. In this case, the locomotion controller is independent from the manipulator controller, and cooperative motion is realized by communication between the two controllers.

Figure 2.27: Mobile Manipulator.

One of their general approaches is to consider the locomotion as extra joints of the manipulator. Manipulability was defined as a valuation of the difficulty of the manipulator's operation. The goal of these researchers is to draw large objects on a wall with a mobile manipulator based on the above approach. Figure 2.27 shows an overview of the mobile manipulator considered for their research task. To realize the above task, one of the biggest problems is that accumulated error in the estimated base robot position affects position accuracy at the end-effector. Therefore, the manipulator should have the capability to adjust its pose when the base robot detects positioning errors. The motion planning approach is reasonable enough to cope with the above conditions because manipulability is considered. Each pose of the manipulator is calculated by inverse kinematics at each layer. To verify the result, they have executed the program in the

motion simulator. The end-effector traces the desired slope segment while the mobile base moves.

Seraji [42] has simulated the motion control of a 2-link planar manipulator mounted on a non-holonomic mobile platform. He combined the Jacobian of the mobile platform with that of the robotic arm. To resolve the redundancy, he added two additional variables to the task space: the platform angle, and the elbow angle between the forearm and the wrist. Accordingly, he augmented the Jacobian to include these two variables' Jacobians in his control equations. This kind of redundancy resolution increases the need for trajectory planning for these extra task variables and makes it computationally expensive.

Chung et al. [43] have approached the control problem of non-holonomic mobile manipulators in special workspaces by decomposing the mobile manipulator into two subsystems, the mobile platform and the manipulator. According to their redundancy resolution scheme, the manipulator was commanded to follow the desired trajectory given in task space, and the platform was responsible for positioning the manipulator at a specified point in the workspace to avoid singular configurations of the manipulator. To coordinate the two separate motions together, they developed an interaction control algorithm, as shown in figure 2.28, in which two nonlinear controllers were designed for the subsystems based on the redundancy resolution scheme. The interaction controller consisted of a robust adaptive controller for the manipulator and an input-output linearizing controller for the mobile platform. Their simulation results demonstrated a good performance of the interaction controller on their trajectory-following task.

Figure 2.28: Interaction Control of the Mobile Manipulator.

Gardner et al. [44] and Bayle et al. [45] presented a systematic unified kinematic analysis for manipulator arms mounted on mobile platforms. They extended the definition of manipulability by scaling joint velocities by their maximum values. To find the best possible mounting location on the mobile platform, they simulated a mobile manipulator with differential drive that carries a variable placement of the arm base on the mobile platform. They were able to illustrate, using the manipulability measure, the effect of the manipulator mounting position on the overall mobility of the system.

Xu et al. [46] have proposed a new strategy to deal with the combined mobility and manipulation problem in a mobile manipulator that has redundant DoF. In their control, they decomposed the position and orientation of the end-effector into two parts. The position and orientation sub-vectors of the manipulator projected on the global z-axis were defined; then the mobile base and the manipulator were moved along

the main direction of the desired path and the sub-vectors on the x-axis and y-axis in the world frame. The simulation results showed that the small working ranges of the joints of the manipulator seriously limited the application.

Luca et al. [47] have tested the extension of conventional redundancy resolution methods to include non-holonomic mobile platforms at the base of the redundant arm using an augmented Jacobian. They have used the singularity analysis and redundancy resolution methods available for standard manipulators to compare the reduced gradient method and the projected gradient method. Their simulation has shown the superior optimization performance of the reduced gradient over the projected gradient method. The desired tasks for the robotic system were executed by the combined motion of all the configuration variables. Figure 2.29 shows the simulation of their implementation using the reduced gradient method on a 4-DoF planar system following a circular path while keeping the end-effector pointing in a certain direction.

Figure 2.29: Trajectory Tracking for a Planar 2-DoF Robot on a Differential Mobile Base.

Papadopoulos et al. [48] have tested their control algorithm on mobile manipulators mounted on differential-drive platforms as well as a car-like platform. Both system platforms were equipped with a two-link manipulator. The differential kinematics for the two mobile systems were written so as to map platform and end-effector velocities to the driven wheel velocities without violation of the non-holonomic constraints. This allowed specification of trajectories for both the platform and the end-effector and computation of actuator commands. Orthogonal complements and the Lagrangian methodology were used to obtain the reduced equations of motion for the differentially-driven system. Figure 2.30 shows the desired path of the end-effector and that of the front point on the platform along with the actual trajectory-following simulation.

Figure 2.30: Animation of the Motion of the End-Effector and the Platform Front Point.

Based on these equations, a model-based controller was designed to eliminate tracking errors. The controller was applied successfully to a simple crack-sealing example and showed accurate simulation results.

Chapter 3: Control Theory of Redundant Manipulators

3.1. Introduction

Control issues in simple robotic systems can be resolved easily when we try to control Cartesian coordinates using a robotic system that has the same number of joints as the coordinates to be controlled. For instance, a robotic arm with two revolute joints can be used to control the x-axis and y-axis coordinates of the end-effector in a planar workspace using simple mathematical relations that relate the joint motion to the end-effector's motion. Trying to control a third variable, such as the angle of approach in a planar workspace, will be difficult and sometimes impossible using the same 2 DoF. This is because the manipulator carries fewer DoF than the workspace. On the contrary, when we try to control the two variables mentioned above using a manipulator with three or more joints, we will face control problems, since the equations of motion carry an infinite number of solutions. This is because the manipulator carries more DoF than the workspace.

In this chapter, we will look at different ways to control a robotic arm that carries more DoF than the workspace, and we will provide different solution choices that can be selected from these infinitely many solutions.
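The exactly determined two-joint case above can be made concrete in code. A minimal sketch (link lengths and angles are illustrative, not part of the WMRA design): with two joints and two task variables there are at most two discrete solutions (elbow-up and elbow-down), not the continuum of solutions that arises in the redundant case.

```python
import math

L1, L2 = 0.5, 0.3  # illustrative link lengths (m)

def fk(t1, t2):
    """Forward kinematics of a 2-link planar arm: joint angles -> tip (x, y)."""
    x = L1 * math.cos(t1) + L2 * math.cos(t1 + t2)
    y = L1 * math.sin(t1) + L2 * math.sin(t1 + t2)
    return x, y

def ik(x, y, elbow=+1):
    """Closed-form inverse kinematics: tip (x, y) -> joint angles.

    elbow = +1 or -1 selects the elbow-up / elbow-down branch; these are
    the only two solutions for a reachable, non-degenerate target."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    t2 = elbow * math.acos(max(-1.0, min(1.0, c2)))
    t1 = math.atan2(y, x) - math.atan2(L2 * math.sin(t2), L1 + L2 * math.cos(t2))
    return t1, t2

# Round trip: inverse kinematics followed by forward kinematics
# recovers the commanded tip position.
x, y = fk(0.4, 0.9)
t1, t2 = ik(x, y)
print(fk(t1, t2))  # ~ (x, y)
```

Asking this same arm to also realize an arbitrary approach angle would add a third equation to the two unknowns, which in general has no solution; that is the "fewer DoF than the workspace" situation described above.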

3.2. Terminology

Regarding the degrees of freedom (DoF), Craig [49] defines them as the number of independent position variables which would have to be specified in order to locate all parts of the mechanism. For example, a 4-bar mechanism has a single DoF even though there are three moving members in the mechanism. In a typical manipulator, the number of DoF is the same as the number of joints, since it is an open kinematic chain. Degrees of redundancy, on the other hand, are referred to when the number of joints is greater than the dimension of the manipulation variable [50].

The end-effector, sometimes referred to as the "gripper", is defined as the free end of the chain of links that make up the manipulator [49]. A workspace is referred to as the space which the manipulator's end-effector can reach, or, as Craig [49] defines it, the existence or non-existence of a kinematic solution of a given manipulator.

3.3. Redundant Manipulators Problem Formulation

Redundant manipulators can be of any size and shape and may use revolute or prismatic joints, among others, but in this chapter we will limit our research to a seven-DoF redundant robotic arm that has a full six-DoF Cartesian workspace. By definition, this robotic arm has one degree of redundancy. The six controlled Cartesian variables in this case are the three positions along the x, y and z coordinates, and the three orientations or angles about the x, y and z axes. To be compatible with the test bed and the simulation that we will discuss in detail in the coming chapters, a complete description of the manipulator's physical characteristics will be discussed.
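The DoF counts above follow from Grübler's mobility criterion (a standard result, though not named in the text): for a planar linkage with n links and j one-DoF joints, DoF = 3(n − 1) − 2j; the spatial version is DoF = 6(n − 1) − 5j. A quick check of both examples:

```python
def planar_dof(n_links, n_pin_joints):
    """Grübler's criterion for planar linkages with one-DoF (pin) joints."""
    return 3 * (n_links - 1) - 2 * n_pin_joints

def spatial_dof(n_links, n_one_dof_joints):
    """Spatial (Kutzbach) version for one-DoF joints."""
    return 6 * (n_links - 1) - 5 * n_one_dof_joints

# A 4-bar mechanism: four links (counting the fixed ground link)
# joined by four pin joints -> a single DoF, as stated above.
print(planar_dof(4, 4))  # 1

# An open serial chain with seven revolute joints: eight links
# (ground through end-effector) and seven joints -> seven DoF,
# i.e. DoF equals the number of joints.
print(spatial_dof(8, 7))  # 7
```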

3.3.1. Frames of References

The first step in studying the kinematics of any robotic manipulator is to assign coordinate frames of reference according to one of the known conventions and to generate the kinematic parameters based on the selected convention. The most widely used convention is the standard convention used by Paul [51], but in our case we will use the modified convention by Craig [49]. The results will still be the same; it is only a matter of preference. Referring to figure 3.1, frames for each link should be attached in the following manner:

1- Assign the Z_i axis pointing along the ith joint axis. If that link has no joints, such as the ground or the end-effector, any direction is permitted.
2- Assign the X_i axis pointing along the Z_i-Z_{i+1} common perpendicular. If the axes intersect, then X_i can be normal to the plane containing the Z_i and Z_{i+1} axes.
3- Assign the Y_i axis according to the right-hand coordinate system.

Figure 3.1: Joint-Link Kinematic Parameters.

3.3.2. Denavit-Hartenberg Parameters

There are four parameters that fully describe the kinematic relations between every pair of neighboring joints and links in a manipulator, as shown in figure 3.1. These four parameters are:

1- Two neighboring joint relations:
   a. The link length (parameter a).
   b. The link twist angle (parameter α).
2- Two neighboring link relations:
   a. The link offset (parameter d).
   b. The joint angle (parameter θ).

These kinematic parameters are called the Denavit-Hartenberg parameters, or D-H parameters for short. Gathering these parameters for all coordinate frames in a table allows a better view of the kinematic characteristics of the robotic arm. Assigning the frames as shown in the above steps allows us to define the neighboring joint-link parameters as follows:

1- The value of "a_i" is defined as the distance from Z_i to Z_{i+1} measured along X_i.
2- The value of "α_i" is defined as the angle from Z_i to Z_{i+1} measured about X_i.
3- The value of "d_i" is defined as the distance from X_{i-1} to X_i measured along Z_i.
4- The value of "θ_i" is defined as the angle from X_{i-1} to X_i measured about Z_i.

The robotic arm at hand consists of seven revolute joints for which the rotation axes of every two immediate joints intersect. Figure 3.2 shows a SolidWorks drawing of the new robotic manipulator that was designed and built at the University of South

Florida from the ground up [52]. For that manipulator, the frame assignment for each link is shown in figure 3.3, and the D-H parameters are pointed out.

Figure 3.2: SolidWorks Model of the New 7-DoF Robotic Arm Built at USF.

Figure 3.3: Frame Assignments and Dimensions of the New 7-DoF Robotic Arm.

Note that the rotational axes of the last three joints intersect at one point; this setup gives a mechanical advantage to the wrist both in calculations and in manipulation. The D-H parameters of the above manipulator are shown in table 3.1. Note that the value assigned to d_6 is zero, since X_5 and X_6 lie on the same axis line and the distance between them is zero. More details on this particular joint design will be discussed in a later chapter.

Table 3.1: The D-H Parameters of the New 7-DoF Robotic Arm Built at USF.

  i    α_{i-1} (degrees)    a_{i-1} (mm)    d_i (mm)    θ_i (degrees)
  1         -90                  0             110           θ_1
  2          90                  0             146           θ_2
  3         -90                  0             549           θ_3
  4          90                  0             130           θ_4
  5         -90                  0             241           θ_5
  6          90                  0               0           θ_6
  7         -90                  0             179           θ_7

3.4. Forward Kinematics Equations

The aim of forward kinematics is to solve the transformation equations for the end-effector's Cartesian position and orientation, or velocities, when the joint angles and velocities are given. Even though this kind of control is not practical for task execution, it is a step that has to be done before attempting inverse kinematic control.

3.4.1. Link Transformation Matrices

Homogeneous transformation matrices that transform the motion from one coordinate frame of reference to the other can easily be obtained from the D-H parameters using the conventional equations [49] that relate every two consecutive frames to each other as follows:
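Table 3.1 can be captured directly in code. A minimal Python/NumPy sketch (the dissertation's own implementation was in C++ and Matlab; this is only an illustration) builds the modified-convention link transform of Craig [49] from the table and chains the seven transforms into the forward kinematics. At the zero configuration, the tool frame should sit at roughly (0, 1079, 276) mm, the sums of the alternating link offsets:

```python
import numpy as np

# Table 3.1: (alpha_{i-1} [deg], a_{i-1} [mm], d_i [mm]) for joints 1..7.
DH = [(-90, 0, 110), (90, 0, 146), (-90, 0, 549), (90, 0, 130),
      (-90, 0, 241), (90, 0, 0), (-90, 0, 179)]

def link_transform(alpha_deg, a, d, theta):
    """Homogeneous transform between consecutive frames (Craig's
    modified D-H convention)."""
    al = np.radians(alpha_deg)
    ca, sa, ct, st = np.cos(al), np.sin(al), np.cos(theta), np.sin(theta)
    return np.array([[ct,      -st,       0,       a],
                     [st * ca,  ct * ca, -sa, -sa * d],
                     [st * sa,  ct * sa,  ca,  ca * d],
                     [0,        0,        0,       1]])

def forward_kinematics(thetas):
    """Compose the base-to-tool transform by chaining the 7 link frames."""
    T = np.eye(4)
    for (alpha, a, d), th in zip(DH, thetas):
        T = T @ link_transform(alpha, a, d, th)
    return T

T = forward_kinematics(np.zeros(7))
print(T[:3, 3])  # end-effector position at zero config, ~ [0, 1079, 276] mm
```

The y component is d_1 + d_3 + d_5 + d_7 = 110 + 549 + 241 + 179 = 1079 mm and the z component is d_2 + d_4 = 146 + 130 = 276 mm, which is a quick sanity check on the table entries.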

\[
{}^{i-1}_{\;i}T=\begin{bmatrix}
c\theta_i & -s\theta_i & 0 & a_{i-1}\\
s\theta_i\,c\alpha_{i-1} & c\theta_i\,c\alpha_{i-1} & -s\alpha_{i-1} & -s\alpha_{i-1}\,d_i\\
s\theta_i\,s\alpha_{i-1} & c\theta_i\,s\alpha_{i-1} & c\alpha_{i-1} & c\alpha_{i-1}\,d_i\\
0 & 0 & 0 & 1
\end{bmatrix} \tag{3.1}
\]

where "s" is the sine and "c" is the cosine of the angle. Applying the above formula to all seven reference coordinate frames gives the homogeneous transformations. Since a_{i-1} = 0 for every link, each transform reduces to one of two forms; for the odd-numbered joints (α_{i-1} = -90°):

\[
{}^{i-1}_{\;i}T=\begin{bmatrix}
c\theta_i & -s\theta_i & 0 & 0\\
0 & 0 & 1 & d_i\\
-s\theta_i & -c\theta_i & 0 & 0\\
0 & 0 & 0 & 1
\end{bmatrix},\qquad i=1,3,5,7
\]

and for the even-numbered joints (α_{i-1} = +90°):

\[
{}^{i-1}_{\;i}T=\begin{bmatrix}
c\theta_i & -s\theta_i & 0 & 0\\
0 & 0 & -1 & -d_i\\
s\theta_i & c\theta_i & 0 & 0\\
0 & 0 & 0 & 1
\end{bmatrix},\qquad i=2,4,6 \tag{3.2}
\]

with the d_i values taken from table 3.1 (and d_6 = 0, so {}^{5}_{6}T is a pure rotation).

These homogeneous transformations describe the kinematic behavior of the robotic system at any instant of time. For instance, to find where frame 4 lies relative to frame 3 when joint 4 is at a certain angle, substituting that angle in the specified transformation matrix gives the position and orientation of frame 4 relative to frame 3. The first 3x3 rows and columns of the homogeneous transform describe frame 4's unit

vector projections on frame 3, and the first three rows of the last column of the homogeneous transform describe the position of frame 4's origin relative to frame 3. Propagating these matrices from one frame to the next gives us the forward kinematics of the robotic arm, which describes the end-effector's frame relative to the base frame as follows:

\[
{}^{0}_{7}T = {}^{0}_{1}T\;{}^{1}_{2}T\;{}^{2}_{3}T\;{}^{3}_{4}T\;{}^{4}_{5}T\;{}^{5}_{6}T\;{}^{6}_{7}T \tag{3.3}
\]

From this point on, we will use these transformation matrices as noted above. The rotation matrices and the frame-center coordinates extracted from these homogeneous transformation matrices will also be used as follows:

\[
{}^{i-1}_{\;i}T=\begin{bmatrix} {}^{i-1}_{\;i}R & {}^{i-1}P_i \\ 0\;\;0\;\;0 & 1 \end{bmatrix} \tag{3.4}
\]

where "R" is the 3x3 rotation matrix representation of the transform, and "P" is the vector containing the X, Y and Z coordinates of the origin of the frame.

3.4.2. Velocity Propagation and the Jacobian

Forces and velocities acting on the joints are crucial for the control of robotic manipulators. When a manipulator is controlled by sending a torque value to its joint motors, precise knowledge of the torques and forces acting on each joint is needed. The same is true when velocities are used to control the manipulator: each joint's velocity needs to be determined so that the task can be executed as desired by the operator. Figure 3.4 shows the linear (v) and angular (ω) velocity vectors acting on neighboring links. These velocities are related together by the physical dimensions of the link that holds these two neighboring joints, and these dimensions are the same ones we obtained in the

previous sub-section in the form of homogeneous transformations. The angular velocity of link "i+1" with respect to frame "i+1" can be defined as:

\[
{}^{i+1}\omega_{i+1} = {}^{i+1}_{\;\;\;i}R\;{}^{i}\omega_i + \dot{\theta}_{i+1}\,{}^{i+1}Z_{i+1} \tag{3.5}
\]

where "θ̇" is the joint angular velocity and "Z" is the projection of the Z-axis on its own frame of reference; this Z is usually [0, 0, 1]^T. Similarly, the linear velocity of the origin of frame "i+1" with respect to frame "i+1" can be defined as:

\[
{}^{i+1}v_{i+1} = {}^{i+1}_{\;\;\;i}R\left({}^{i}v_i + {}^{i}\omega_i \times {}^{i}P_{i+1}\right) \tag{3.6}
\]

Propagating these velocities throughout the joints will result in a full description of all velocities acting on every joint at any moment of time when the joint angles are provided.

Figure 3.4: Velocity Vectors of Neighboring Links.
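The outward propagation of equations (3.5) and (3.6) can be sketched numerically. The following illustration uses a 2-link planar arm rather than the 7-DoF WMRA arm, to keep the sketch short, and checks the propagated tip velocity against a finite difference of the forward kinematics:

```python
import numpy as np

L1, L2 = 0.5, 0.3  # illustrative link lengths

def Rz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def tip_velocity(t, td):
    """Propagate (3.5)/(3.6) outward: each frame i+1 sees
    w = R^T w_prev + thetadot * Z  and  v = R^T (v_prev + w_prev x P)."""
    Z = np.array([0.0, 0.0, 1.0])
    # Frame 1 (joint 1): the base is stationary.
    w1, v1 = td[0] * Z, np.zeros(3)
    # Frame 2 (joint 2), offset P = [L1, 0, 0] in frame 1, rotated by t[1].
    R12 = Rz(t[1])
    w2 = R12.T @ w1 + td[1] * Z
    v2 = R12.T @ (v1 + np.cross(w1, [L1, 0.0, 0.0]))
    # Tip point, offset [L2, 0, 0] in frame 2 (no further joint).
    v_tip = v2 + np.cross(w2, [L2, 0.0, 0.0])
    # Express the result in the base frame.
    return Rz(t[0]) @ R12 @ v_tip

def fk(t):
    return np.array([L1 * np.cos(t[0]) + L2 * np.cos(t[0] + t[1]),
                     L1 * np.sin(t[0]) + L2 * np.sin(t[0] + t[1]), 0.0])

t, td, eps = np.array([0.4, 0.7]), np.array([0.2, -0.1]), 1e-6
fd = (fk(t + eps * td) - fk(t)) / eps   # finite-difference check
print(tip_velocity(t, td), fd)          # the two agree to ~1e-6
```

The same recursion, run over the seven frames of table 3.1, yields the joint-by-joint velocities described above; stacking the per-joint contributions is exactly what the Jacobian of the next paragraphs does in one matrix.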

The same way we found the velocities at each joint, we can derive these relations in general using the Jacobian. The end-effector's Cartesian coordinates are direct functions of the joint angles along the manipulator as follows:

\[
X = F(\theta_1,\theta_2,\theta_3,\theta_4,\theta_5,\theta_6,\theta_7) \tag{3.7}
\]

where X is the 6x1 vector containing the 3 position and 3 orientation dimensions of the end-effector with respect to the base frame. To convert the dimensions into velocities, we can partially differentiate each of the Cartesian variables with respect to each of the joint angles. That gives:

\[
\delta x = \frac{\partial f_1}{\partial \theta_1}\delta\theta_1 + \frac{\partial f_1}{\partial \theta_2}\delta\theta_2 + \cdots + \frac{\partial f_1}{\partial \theta_7}\delta\theta_7
\]
\[
\delta y = \frac{\partial f_2}{\partial \theta_1}\delta\theta_1 + \frac{\partial f_2}{\partial \theta_2}\delta\theta_2 + \cdots + \frac{\partial f_2}{\partial \theta_7}\delta\theta_7
\]
\[
\delta z = \frac{\partial f_3}{\partial \theta_1}\delta\theta_1 + \frac{\partial f_3}{\partial \theta_2}\delta\theta_2 + \cdots + \frac{\partial f_3}{\partial \theta_7}\delta\theta_7
\]
\[
\vdots
\]
\[
\delta x_6 = \frac{\partial f_6}{\partial \theta_1}\delta\theta_1 + \frac{\partial f_6}{\partial \theta_2}\delta\theta_2 + \cdots + \frac{\partial f_6}{\partial \theta_7}\delta\theta_7 \tag{3.8}
\]

Or these can be re-written as:

\[
\delta X = \frac{\partial F}{\partial \theta}\,\delta\theta \tag{3.9}
\]

Note that we only have six equations, which are the Cartesian positions and orientations, and seven unknowns, which are the seven joint angles. This gives us an under-determined system, and we will talk about this in more detail later in the chapter. Dividing both sides of (3.9) by the time increment (δt) gives the velocities as:

\[
\dot{X} = \frac{\delta X}{\delta t} = \frac{\partial F}{\partial \theta}\,\frac{\delta\theta}{\delta t}
\quad\Longrightarrow\quad
V = J(\theta)\,\dot{\theta} \tag{3.10}
\]

where "J(θ)" is the Jacobian matrix that relates the joint angular velocities to the end-effector's Cartesian velocities expressed in the base frame. At any time step, knowing the joint velocities and joint angles allows us to translate directly to the end-effector's Cartesian velocities using the Jacobian. This Jacobian can also be used to relate the forces and moments acting at each joint to the forces and moments acting at the end-effector.

3.5. Inverse Kinematic Equations

The aim of inverse kinematics is to solve the transformation equations for the joint angles or velocities when the end-effector's Cartesian position and orientation, or velocities, are given. Most industrial robotic manipulators implement this kind of control because of its practical use. In order to read the joint angles while the robot is running and supply them to the controller, encoders need to be mounted at each joint for joint feedback. The solution of such equations can vary from a closed-form solution to different numerical solutions, depending on the size of the joint domain as compared to the size of the task domain.

3.5.1. Closed-Form Solutions

A closed-form solution is an exact solution to the set of equations that relates the joint rates to the Cartesian velocities. This is possible when the number of joints in the manipulator is equal to the number of Cartesian variables to be controlled in the task space. In that case, the transformation matrix on the left-hand side of (3.3) is given, and we need to find the joint angles included in the right-hand side of that equation. This can

be done easily for simple robotic systems. For instance, it is easy to find the joint angles of a two-link planar robotic manipulator when the X and Y coordinates of the tip of the arm are given, by solving the two equations for the two unknowns. Doing so for a more complex manipulator can result in very lengthy equations that require extensive calculations and can be time consuming, which might be costly in terms of real-time control.

Another way of finding a closed-form solution is by using the Jacobian given in equation (3.10). Inverting the Jacobian, when the inverse exists, directly gives the joint rates if the Cartesian velocities and the current joint angles are given, as follows:

\[
\dot{\theta} = J(\theta)^{-1}\,V \tag{3.11}
\]

A solution does not exist when the Jacobian is not at full rank, or when redundancy is introduced, since the Jacobian in that case is not a square matrix. When the number of joints exceeds the number of controlled coordinates in the workspace, the conventional inverse kinematics for a closed-form solution is no longer applicable. Redundancy resolution and optimization schemes have been the subject of many researchers' work, where the extra joints are employed to serve additional task executions and to optimize the motion based on a certain criterion.

Some researchers have altered these equations by adding more constraints based on a certain criterion so that the number of equations matches the number of joint variables, as Chang [25] did. He proposed a closed-form solution formula for inverse kinematics of redundant manipulators using a Lagrange multiplier, proposing an additional set of equations to resolve the redundancy at the inverse kinematic level in such a way that a given criterion function may be minimized or maximized. The additional equations were

set in a similar way to the homogeneous solution term of the resolved rate method, which uses the null space to resolve the redundancy. Although he used the manipulability index [53] as the criterion function, any criterion function can be used as long as the function can be reduced to an expression in terms of joint variables only.

3.5.2. Manipulability Ellipsoid

For a Cartesian coordinate solution of the end-effector to exist, it is important to stay away from singular configurations of the robotic arm. A good way to ensure that a singular configuration is not reached is to find the determinant of the Jacobian matrix and make sure it is as far away from zero as possible. The closer the determinant is to zero, the higher the joint velocities required to produce the desired Cartesian motion.

In the case where the Jacobian is not square, and therefore has no ordinary determinant, a different measure is required to ensure a smooth motion of the arm with no singularities along the way. Yoshikawa [53] has proposed a method that measures the manipulability for any manipulator with any size Jacobian. Consider the set of end-effector velocities Ẋ that are accomplishable by the set of joint velocities θ̇ such that the Euclidean norm satisfies:

\[
\left\|\dot{\theta}\right\| = \sqrt{\dot{q}_1^2+\dot{q}_2^2+\dot{q}_3^2+\dot{q}_4^2+\dot{q}_5^2+\dot{q}_6^2+\dot{q}_7^2}\;\le\;1 \tag{3.12}
\]

This set is an ellipsoid in the 6-dimensional Euclidean space shown in figure 3.5. The end-effector can move at high speed along the direction of the major axis of the ellipsoid, and only at low speed along the minor axis of the ellipsoid. Also, the larger the ellipsoid is, the faster the end-effector can move. A representative measure of how the

manipulator is able to move at a certain configuration is the volume of the ellipsoid at that particular configuration, which can be represented by a scalar value as follows:

\[
w = \sqrt{\det\!\left(J(\theta)\,J(\theta)^{T}\right)} \tag{3.13}
\]

where "w" is the manipulability measure at the configuration specified by the set of joint angles "θ". This measure represents how far the manipulator is from singularities: the larger this measure is, the farther away from singularity the manipulator is. When this measure reaches zero, the manipulator is said to be at a singular configuration.

Figure 3.5: Manipulability Ellipsoid for a 7-DoF Manipulator in a 6-DoF Euclidean Space.

3.5.3. Numerical Solutions

Numerical solutions, such as Gauss' elimination method, are less computationally expensive than the inverse solutions of the Jacobian [54]. These solutions can be implemented using the Jacobian to follow the user's directional motion commands or to follow the desired trajectory. In the case of redundancy, the Jacobian is not a square

matrix any more, which makes it non-invertible in the ordinary sense. Since the Jacobian is the key relation between the Cartesian-space and the joint-space variables, it is important to use different methods to invert this kind of non-square matrix. One of the most used methods in redundant manipulator control is the pseudo inverse, which can be used for numerical solutions as follows:

\[
J^{*} = J^{T}\left(J\,J^{T}\right)^{-1} \tag{3.14}
\]

To use the above equation, it is required that the rank of the Jacobian matrix "J" is equal to the number of rows of that matrix. Redundancy can then be resolved using the pseudo inverse of the Jacobian to obtain a numerical solution for the joint angle rates using the following equation:

\[
\dot{\theta} = J^{*}(\theta)\,V \tag{3.15}
\]

This is one of the numerical solutions adopted in the earlier versions of the control system of the arm, which minimizes the Euclidean norm of errors as the optimization criterion. Other numerical solutions and optimization methods will be discussed later in this chapter and in chapter 5.

3.5.4. Redundancy Resolution

One of the first problems that needs to be taken care of in robotic manipulators is the singularity problem, which is the case when a solution does not exist at certain parts of the trajectory due to an odd configuration of the arm. Even with redundant manipulators, singular configurations may be reached in the process of following certain pre-specified trajectories. As mentioned earlier, the manipulability measure is used as a factor to measure how far the current configuration is from singularity. In the case when


we have a redundant manipulator, the same point in the workspace can be reached with infinitely many configurations of the arm. Choosing one of these configurations can be done using different optimization criteria to be satisfied when multiple solutions exist. When the trajectory is satisfied and there exists a null space, that null space can be used to select one of the infinite solutions that satisfy the trajectory while also satisfying a chosen criterion function; the selected configuration then solves both the original requirement and the criterion constraints. One of these criterion functions or restrictions can be the maximization of the manipulability measure, since it is crucial to task execution.

Optimizing the solution can be achieved by adding a term to equation (3.15) that carries a sub-task to be considered when more than one solution to the basic task exists. This secondary-task term is added as follows:

\dot{\theta} = J^*(\theta)\, V + (I_7 - J^*(\theta)\, J(\theta))\, f   (3.16)

where "f" is a 7x1 vector representing the secondary task, "J*" is the 7x6 inverse of the Jacobian matrix, and "I_7" is the identity matrix of size 7. The choice of the criterion function can range from a scalar quantity, such as the manipulability measure, to a set of functions, such as joint-limit avoidance conditions.

3.5.5. Optimization Criteria

As mentioned earlier, optimization can take the form of a scalar or a set of equations to be considered in the optimization process. When an exact solution does not exist, equation (3.16) covers all the least-squares solutions that minimize the Euclidean norm of errors while maintaining minimum joint velocity norms as follows:


Fulfill \min \| J(\theta)\,\dot{\theta} - V \| while maintaining \min \| \dot{\theta} \|   (3.17)

In many instances, the main objective function to follow a certain trajectory does not require the use of all available joints of the manipulator. For instance, rotating the wrist around the end-effector's rotation axis requires only the last joint (joint 7) to move. This leaves all six other joints available for optimization, and at that moment the arm has six degrees of redundancy. The null space, in this case containing the other six joints, can be used to optimize for more than one criterion. Some of these criteria are:

1- Maximizing the manipulability measure.
2- Minimizing the joint velocities.
3- Minimizing the energy.
4- Avoiding obstacles.
5- Avoiding joint limits.
6- Pointing at a certain point while moving along a required trajectory.

Some of the above criteria can be used together in a priority-based scheme that realizes the higher-priority criterion while the main task is being executed, and then goes on to the lower-priority tasks, as long as null space still exists, until all joints are in use or all criteria have been realized.

3.6. Summary

In this chapter, a mathematical model of a 7-DoF redundant robot was described. The arm consists of seven revolute joints with intersecting axes of rotation between every two neighboring joints. The problem was formulated by assigning coordinate frames to each one of the links according to the modified convention of frame assignments. The D-H


parameters of the system were generated to calculate the relations between every two consecutive joints and links. The forward kinematic equations were generated, and the total homogeneous transformation matrix of the robotic arm was created. The velocity relations between the links were propagated to find the end-effector's Cartesian velocity relations to the joint velocities; these relations lead to the Jacobian matrix that relates the workspace and the joint space together. Inverse kinematic equations were generated to find the joint positions or velocities when the required Cartesian coordinates are given. Different methods of performing the inverse kinematics were discussed, and the optimized redundancy resolution scheme used in the control was shown.
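The redundancy-resolution pipeline of equations (3.13)-(3.16) can be sketched numerically on a toy arm. The following Python sketch uses a planar 3R arm with unit link lengths (a small stand-in for the 7-DoF arm, so the matrices stay hand-checkable); all function names are illustrative and this is not the dissertation's C++/Matlab implementation.

```python
import math

# Toy illustration of eqs. (3.13)-(3.16) on a planar 3R arm, unit links.

def jacobian(q):
    """2x3 position Jacobian of the planar 3R arm."""
    s = [q[0], q[0] + q[1], q[0] + q[1] + q[2]]      # cumulative joint angles
    row_x = [-sum(math.sin(s[j]) for j in range(i, 3)) for i in range(3)]
    row_y = [ sum(math.cos(s[j]) for j in range(i, 3)) for i in range(3)]
    return [row_x, row_y]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def manipulability(J):
    """Eq. (3.13): w = sqrt(det(J J^T)); zero at a singular configuration."""
    A = matmul(J, transpose(J))                      # 2x2
    return math.sqrt(max(A[0][0] * A[1][1] - A[0][1] * A[1][0], 0.0))

def pseudoinverse(J):
    """Eq. (3.14): J* = J^T (J J^T)^-1, valid while J has full row rank."""
    A = matmul(J, transpose(J))
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    Ainv = [[ A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det,  A[0][0] / det]]
    return matmul(transpose(J), Ainv)                # 3x2

def resolved_rate(J, v, f):
    """Eq. (3.16): qdot = J* v + (I - J* J) f, with f the secondary task."""
    Js = pseudoinverse(J)
    P = matmul(Js, J)                                # J* J, 3x3
    return [sum(Js[i][j] * v[j] for j in range(2)) +
            sum(((1.0 if i == j else 0.0) - P[i][j]) * f[j] for j in range(3))
            for i in range(3)]
```

With the arm stretched out (all joint angles zero) the measure w vanishes, flagging the singular configuration; away from singularity, `resolved_rate` tracks a commanded Cartesian velocity v exactly while absorbing the secondary task f in the null space.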


Chapter 4: Mobility Control Theory

4.1. Introduction

"Mobile manipulator" is a widespread term that refers to robots that combine capabilities of locomotion and manipulation. When these systems are devoted to indoor tasks, they are often equipped with powered wheels. The arrangement of the wheels and their actuation determines the holonomic or non-holonomic nature of the locomotion system. Some wheeled mobile manipulators built on an omni-directional platform are holonomic, but many of them are non-holonomic. The tasks assigned to these systems are often expressed in terms of end-effector motion. Although this concept is well understood for robotic arms, it is quite different in the case of non-holonomic systems [50, 54]. A distinction between the two types of motion is that a holonomic system can move in any direction of its workspace, whereas a non-holonomic system cannot move in arbitrary directions of its workspace.

In this chapter, we deal with the non-holonomic motion, which is the opposite problem of the redundancy discussed in chapter three. Different control methods will be discussed, and the chosen motion planning strategy will be derived so that the lost degree of freedom can be compensated for by using trajectory planning.
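The non-holonomic restriction can be seen directly in a small differential-drive kinematic sketch: whatever the two wheel rates, the chassis velocity has no component along the wheel axle. The dimensions L1 (axle width) and L5 (wheel radius) anticipate the wheelchair values given in section 4.3.2; the function names are illustrative, not from the dissertation's code.

```python
import math

# Non-holonomic constraint of a differential drive: no sideways velocity.
L1, L5 = 0.560, 0.168   # axle width and wheel radius, metres

def chassis_velocity(theta_l_dot, theta_r_dot, phi):
    """Ground-frame velocity (x_dot, y_dot, phi_dot) of the axle midpoint
    for wheel angular rates theta_l_dot, theta_r_dot and heading phi."""
    v = L5 * (theta_r_dot + theta_l_dot) / 2.0      # forward speed
    w = L5 * (theta_r_dot - theta_l_dot) / L1       # yaw rate
    return v * math.cos(phi), v * math.sin(phi), w

def lateral_speed(x_dot, y_dot, phi):
    """Velocity component along the wheel axle; always zero without slip."""
    return -x_dot * math.sin(phi) + y_dot * math.cos(phi)
```

No choice of wheel rates produces a nonzero `lateral_speed`, which is exactly the "cannot move in arbitrary directions" property above; an omni-directional (holonomic) base has no such constraint.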


4.2. Terminology

Holonomy refers to the relationship between the controllable and the total number of degrees of freedom of a given platform. If the controllable DoF is greater than or equal to the total DoF in the workspace, the platform is said to be holonomic [55]. If the controllable DoF is less than the total DoF in the workspace, the platform is said to be non-holonomic [56]. Examples of non-holonomic platforms are cars, power wheelchairs and other mobile platforms that can, at any given moment, move in only two of the three planar dimensions (i.e. motion along the X-axis, motion along the Y-axis and rotation about the Z-axis). In contrast, holonomic platforms are platforms that can move at any moment of time in all three planar dimensions, such as platforms equipped with three powered omni-directional wheels.

Three different variables are available for the planar motion of a mobile platform moving on the ground: two positional variables along the X-axis and Y-axis, and one rotational variable about the Z-axis. The power wheelchair used in this work is a non-holonomic wheelchair that can move along the X-axis and rotate about the Z-axis.

4.3. Mobility Problem Formulation

The wheelchair used in this work is an "Action Ranger X Storm Series" power wheelchair. This wheelchair accomplishes its non-holonomic motion using a differential drive that carries two independently-driven wheels at the back of the power wheelchair. The front of the wheelchair has two passive castors that are placed to support the wheelchair's motion. This makes the wheelchair a 2-DoF system that moves in the plane


[48]. A full description of the wheelchair and its important dimensions will be discussed in this section.

4.3.1. Frame Assignment

Three important points of interest were assigned on and around this wheelchair, and coordinate frames were assigned at these three points. These three frames are the wheelchair's coordinate frame, assigned at the center of the driving wheels' axle; the ground frame, assigned at an arbitrary location on the ground floor; and another frame, called frame "A", assigned at the point where the 7-DoF robotic arm will be mounted. Figure 4.1 shows two-dimensional top and side views of the SolidWorks™ model of the wheelchair with the key dimensions and the frame assignments. Note that these three frames are independent, and the frame assignment rules discussed in chapter 3 do not apply. For simplicity, the orientations of the wheelchair's frame and the "A" frame were assigned such that the rotation matrix between the two is the identity.

Throughout the development of the equations in this section and in the subsequent sections, these assigned frames will be used to define the relationships between the ground, the wheelchair, the arm base and the end-effector (gripper). The assignment of the ground frame is arbitrary because it does not change any of the kinematics of the WMRA system. The wheels' axle frame is assigned because of its importance in the generation of equations between the ground and the arm base. The arm base coordinate frame is assigned to link the end of the wheelchair kinematics to the robotic arm kinematics, as the arm is mounted on the wheelchair. The end-effector's frame is the frame that will carry out the assigned tasks in the Cartesian space.


Figure 4.1: Wheelchair Coordinate Frames and Dimensions of Interest.

4.3.2. Wheelchair's Important Dimensions

There are five important dimensions that will be used in the derivation of the coordinate relations in the next section. These dimensions are shown in figure 4.1, and they represent the physical dimensions of the wheelchair as well as the coordinate frame distances. These dimensions can be described as follows:


1- The distance between the centers of the two driving wheels along the differential drive axle. This distance is denoted "L1" and its value is 560 mm.
2- The offset distance from the center of the differential drive to the center of frame "A" along the wheelchair's X-axis. This distance is denoted "L2" and its value is 440 mm.
3- The offset distance from the center of the differential drive to the center of frame "A" along the wheelchair's Y-axis. This distance is denoted "L3" and its value is 230 mm.
4- The offset distance from the center of the differential drive to the center of frame "A" along the wheelchair's Z-axis. This distance is denoted "L4" and its value is 182 mm.
5- The offset distance from the center of the differential drive to the center of the ground frame along the wheelchair's Z-axis, which is the same as the radius of the wheelchair's driving wheels. This distance is denoted "L5" and its value is 168 mm.

An important note at this point is that the transformation between the wheelchair's coordinate frame and the "A" coordinate frame is constant, since both frames are attached to the power wheelchair independently of its wheels' motion. On the other hand, the transformation between the ground frame and the wheelchair's frame depends on three variables: the distances along the X-axis and Y-axis, and the orientation of the wheelchair about the Z-axis, denoted by "φ". These variables represent the mobility of the wheelchair on the planar ground surface.
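As a quick sketch of how these five dimensions and the three mobility variables locate the arm mounting frame, the snippet below composes the chair pose (x, y, φ) with the constant (L2, L3, L4) offset. It anticipates the transformations derived formally in section 4.4; the helper name is illustrative, not from the dissertation's code.

```python
import math

# Locating frame "A" in the ground frame from the chair pose (x, y, phi) and
# the constant offsets L2..L4; L5 raises the axle frame off the ground.
L2, L3, L4, L5 = 0.440, 0.230, 0.182, 0.168   # metres, from the list above

def ground_to_arm_base(x, y, phi):
    """4x4 homogeneous transform from the ground frame to frame A."""
    c, s = math.cos(phi), math.sin(phi)
    T_gw = [[c, -s, 0, x],       # ground -> wheelchair: planar pose,
            [s,  c, 0, y],       # axle centre at height L5
            [0,  0, 1, L5],
            [0,  0, 0, 1]]
    T_wa = [[1, 0, 0, L2],       # wheelchair -> arm base: pure translation
            [0, 1, 0, L3],
            [0, 0, 1, L4],
            [0, 0, 0, 1]]
    return [[sum(T_gw[i][k] * T_wa[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]
```

With zero heading the arm base sits simply at (x + L2, y + L3, L4 + L5); a rotated chair swings the planar offset with it.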


4.4. Homogeneous Transformation Relations

The homogeneous transformations between the coordinate frames assigned to the wheelchair, the ground and the robotic arm base depend on the motion of the two differentially-driven wheels. The details of generating these relations are discussed thoroughly in this section.

4.4.1. Driving Wheels' Motion and the Turning Angle

In mobile robot applications, wheel slippage can be considered when the wheels' characteristics are constant, but in our application we will assume that slip is compensated for by the user. The classical way of obtaining the distance traveled from an initial position to the final position of a wheel that is turning with angle "θ", as shown in figure 4.2, can be written as:

d = L_5\, \theta   (4.1)

where "d" is the traveled distance and "L5" is the wheel's radius. This is the case when the wheel moves in a straight line with no turning.

Figure 4.2: Traveled Distance of a Turning Wheel.

In our case, the wheelchair is equipped with two wheels, and the above motion is a special case that commands the wheelchair to move straight forward. In general, each wheel moves independently at its own velocity, and a turning


angle is introduced. To approach the formulation of the turning angle, let us assume that the left wheel is stationary and the right wheel is turning, as shown in figure 4.3.

Figure 4.3: Traveled Distance with a Turning Angle.

The curved blue line in the figure is the actual traveled distance of the right wheel when the left wheel is stationary. The angle of rotation in this case is:

\Delta\phi = \frac{d}{L_1} = \frac{L_5}{L_1}\, \theta_r   (4.2)

where "L1" is the wheelbase width and "θr" is the right wheel's turning angle. In the case when the left wheel is moving while the right wheel is stationary, the turning angle is the same as in (4.2) with a negative sign. When the two wheels turn by different amounts, the turning angle is directly related to the difference between the two angles as follows:


\Delta\phi = \frac{L_5}{L_1}\, (\theta_r - \theta_l)   (4.3)

where "θl" is the rotation angle of the left wheel. The above relation is not enough to describe the consecutive motion steps that will be implemented in the virtual reality simulation and on the physical system. We need to relate the previous step to the current step at any moment of time so that all steps are related together. If we have the turning angle from the previous step, we can treat that angle as the starting angle, so that the coming step carries the translation through the next turning-angle increment. This can be realized as follows:

\phi_1 = \phi_0 + \frac{L_5}{L_1}\, (\theta_r - \theta_l)   (4.4)

where "φ0" is the resultant turning angle from all previous steps added together. This gives us continuous angle tracking throughout the motion of the power wheelchair, even when the turning angle fluctuates.

4.4.2. The Radius of Curvature

The turning angle is not the only factor needed in this non-holonomic motion of the platform. The radius of curvature "r" is also needed [61], and it is critical to the calculation of the transformation matrices. Four cases are to be considered for the radius of curvature. These cases are as follows:

1- When r ≥ 0:

As shown in figure 4.4, this case happens when the left wheel is turning less than the right wheel. In this case, the radius of curvature is defined as:


r = \frac{L_5\, \theta_l}{\Delta\phi}   (4.5)

Figure 4.4: Radius of Curvature in Case 1.

2- When −L_1 < r < 0:

As shown in figure 4.5, this case happens when the left wheel is turning in the opposite direction of the right wheel. In this case, the radius of curvature is negative, since the left wheel is moving in the negative direction. The radius of curvature in this case is defined as:

r = \frac{L_5\, \theta_l}{\Delta\phi}   (4.6)

Figure 4.5: Radius of Curvature in Case 2.


3- When r ≤ −L_1:

As shown in figure 4.6, this case happens when the right wheel is turning less than the left wheel. In this case, the radius of curvature is negative, since the turning angle of the wheelchair is in the negative direction. The radius of curvature in this case is defined as:

r = \frac{L_5\, \theta_r}{\Delta\phi} - L_1   (4.7)

Figure 4.6: Radius of Curvature in Case 3.

4- When r = ∞:

As shown in figure 4.7, this case happens when the left wheel is turning in the same direction and by the same amount as the right wheel. In this case, the radius of curvature is infinite, since there is no turning angle involved in this kind of motion.

The above four cases cover all motion possibilities the wheelchair is capable of. At this point, we have all the necessary information to generate the transformation matrices along all the points of interest that were pointed out earlier in the chapter.
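One consistent numerical reading of the four cases measures r from the instantaneous centre of rotation to the left wheel, giving r = L5·θl/Δφ, or equivalently L1·θl/(θr − θl). The scanned equations do not make the dissertation's exact reference point recoverable, so treat the convention in this sketch as an assumption; the name is illustrative.

```python
import math

# Signed radius of curvature of the differential drive, under the ASSUMED
# convention r = distance from the instantaneous centre of rotation to the
# left wheel: r = L1 * theta_l / (theta_r - theta_l).
L1 = 0.560   # axle width in metres

def radius_of_curvature(theta_l, theta_r):
    if theta_l == theta_r:           # case 4: straight line, r -> infinity
        return math.inf
    return L1 * theta_l / (theta_r - theta_l)
```

With this convention the boundary values line up with the four cases above: a stationary left wheel gives r = 0, spinning in place gives r = −L1/2, and a stationary right wheel gives r = −L1.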


Figure 4.7: Radius of Curvature in Case 4.

4.4.3. Point-to-Point Transformation of the Wheelchair

To transform the wheelchair's coordinate frame during motion, we assume that the initial position and orientation of the frame are known, and we need to find the new position and orientation for the next time step. Let the initial coordinate frame of the wheelchair be "W0" and the next coordinate frame after moving one step be "W1", as shown in figure 4.8. Let us also define point "O" as the point where the extensions of the two Y-axes intersect on the XY plane. Knowing the transformation between "W0" and "W1" gives us the perspective we are looking for when the wheelchair is in motion. To accomplish this transformation, three sub-transformations are performed:

1- Translation along "Y0" by the amount (r + L_1/2) to reach point "O".
2- Rotation about "Z_O" by the amount "Δφ_1" to reach the orientation of "W1".
3- Translation along "Y1" by the amount −(r + L_1/2) to reach point "W1".

These three transformations define coordinate frame "W1" based on coordinate frame "W0", which is:


Figure 4.8: Point-to-Point Transformation of Frames.

^{W_0}T_{W_1} = D_y(r + L_1/2)\; R_z(\Delta\phi_1)\; D_y(-(r + L_1/2))

or

^{W_0}T_{W_1} = \begin{bmatrix} C\Delta\phi_1 & -S\Delta\phi_1 & 0 & (r + L_1/2)\, S\Delta\phi_1 \\ S\Delta\phi_1 & C\Delta\phi_1 & 0 & (r + L_1/2)\, (1 - C\Delta\phi_1) \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}   (4.8)

If we assume that the initial coordinate frame of the wheelchair "W0" was the result of a previous transformation from the ground origin "G", as illustrated in figure 4.1, the


resulting homogeneous transformation from the ground frame "G" to the wheelchair's initial frame "W0" can be expressed as:

^{G}T_{W_0} = \begin{bmatrix} C\phi_0 & -S\phi_0 & 0 & P_{x0} \\ S\phi_0 & C\phi_0 & 0 & P_{y0} \\ 0 & 0 & 1 & L_5 \\ 0 & 0 & 0 & 1 \end{bmatrix}   (4.9)

Multiplying (4.8) and (4.9) together results in the relation between the ground coordinate frame "G" and the final coordinate frame of the wheelchair "W1" as follows:

^{G}T_{W_1} = {}^{G}T_{W_0}\; {}^{W_0}T_{W_1}

or

^{G}T_{W_1} = \begin{bmatrix} C(\phi_0{+}\Delta\phi_1) & -S(\phi_0{+}\Delta\phi_1) & 0 & P_{x0} + 2(r + L_1/2)\, S(\Delta\phi_1/2)\, C(\phi_0{+}\Delta\phi_1/2) \\ S(\phi_0{+}\Delta\phi_1) & C(\phi_0{+}\Delta\phi_1) & 0 & P_{y0} + 2(r + L_1/2)\, S(\Delta\phi_1/2)\, S(\phi_0{+}\Delta\phi_1/2) \\ 0 & 0 & 1 & L_5 \\ 0 & 0 & 0 & 1 \end{bmatrix}   (4.10)

In the case when the wheelchair is moving in a straight line, as described under "case 4" above, the relation in (4.8) simplifies to a pure translation as follows:

^{W_0}T_{W_1} = \begin{bmatrix} 1 & 0 & 0 & L_5\,\theta_r \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}   (4.11)

Following the same procedure as above, a simpler solution can be reached for this special case that relates the ground coordinate frame to the wheelchair's coordinate frame.
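Equations (4.3) through (4.10) amount to a per-step odometry update of the chair pose. The sketch below uses h = r + L1/2 for the centre-to-ICR distance, with the sign conventions assumed above, and illustrative names rather than the dissertation's code.

```python
import math

# One point-to-point odometry step of the wheelchair frame, sketching
# eqs. (4.3)-(4.10).  pose = (x, y, phi) of the axle-centre frame in the
# ground frame; dtheta_l, dtheta_r are the wheel rotation increments.
L1, L5 = 0.560, 0.168   # axle width and wheel radius (metres)

def step(pose, dtheta_l, dtheta_r):
    x, y, phi = pose
    dphi = L5 * (dtheta_r - dtheta_l) / L1           # eq. (4.3)
    if abs(dphi) < 1e-12:                            # case 4: straight line
        d = L5 * dtheta_r
        return x + d * math.cos(phi), y + d * math.sin(phi), phi
    h = L5 * (dtheta_l + dtheta_r) / (2.0 * dphi)    # centre-to-ICR distance
    x += 2.0 * h * math.sin(dphi / 2.0) * math.cos(phi + dphi / 2.0)
    y += 2.0 * h * math.sin(dphi / 2.0) * math.sin(phi + dphi / 2.0)
    return x, y, phi + dphi
```

The translation terms are the 2(r + L1/2)·sin(Δφ/2) entries of the transform in (4.10); spinning in place (equal and opposite increments) leaves the position unchanged, and equal increments reduce to the straight-line case of (4.11).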


4.4.4. Transformation to the Robotic Arm's Base

So far, we have found the homogeneous transformation matrix that relates the ground coordinate frame to the wheelchair's coordinate frame. For the robotic arm to be mounted on the wheelchair, one more transformation is required between the wheelchair's coordinate frame and the robotic arm base coordinate frame, where the arm attaches to the wheelchair. This transformation is constant, since the arm base and the wheelchair are attached together in a rigid mounting bracket at point "A", as shown in figure 4.1. This constant transformation is basically a translation, as follows:

^{W_1}T_{A} = \begin{bmatrix} 1 & 0 & 0 & L_2 \\ 0 & 1 & 0 & L_3 \\ 0 & 0 & 1 & L_4 \\ 0 & 0 & 0 & 1 \end{bmatrix}   (4.12)

Post-multiplying (4.10) by (4.12) gives the transformation needed to describe the robotic arm base at any moment based on the ground frame. This resultant matrix is evaluated at every time step to know the current position and orientation of the wheelchair and arm base referenced to the world coordinate frame.

4.5. Wheelchair Velocities

In wheelchair motion, velocities are mapped from the wheels' motion to the robotic arm base motion so that the results are ready when the robotic arm is put together with the wheelchair. In this subsection, the velocity relations will be generated and the Jacobian matrix will be derived. Depending on the location of the robotic arm base on the wheelchair, three cases will be studied to form the general expression that we are


looking for in the Jacobian. When the wheelchair moves in a straight line, the velocity at any point on the wheelchair is the same; but in the case of rotation, the velocity components at different points on the wheelchair are different [44]. We take the case of pure rotation to derive the velocity relations, and then we add the component that comes from the straight-line motion to the velocities obtained from the rotation.

4.5.1. Wheelchair Velocity Mapping to the Robotic Arm Base

The relation between the wheelchair's coordinate frame and the robotic arm base frame can be found in three cases as follows:

1- Case I: When the Robotic Arm's Offset is in the X-Direction:

In this case, the length "L3" is set to zero, as shown in figure 4.9, and the robotic arm is installed directly in front of the wheelchair along the line that divides the wheelchair into two symmetrical halves. Wheelchair motion in this context produces three relative motions at point "A": two linear motions along the X-axis and Y-axis, as well as a rotational motion about the Z-axis. These three components are:

Figure 4.9: The Case When "L3" is Zero.


\dot{X}_A = -L_2\, S\phi\, \dot{\phi}_W, \quad \dot{Y}_A = L_2\, C\phi\, \dot{\phi}_W, \quad \dot{\phi}_A = \dot{\phi}_W   (4.13)

2- Case II: When the Robotic Arm's Offset is in the Y-Direction:

In this case, the length "L2" is set to zero, as shown in figure 4.10, and the robotic arm is installed directly on the wheelchair's axle, along the line of rotation of the two driving wheels. Wheelchair motion in this context also produces three relative motions at point "A": two linear motions along the X-axis and Y-axis, as well as a rotational motion about the Z-axis. These three components are:

\dot{X}_A = -L_3\, C\phi\, \dot{\phi}_W, \quad \dot{Y}_A = -L_3\, S\phi\, \dot{\phi}_W, \quad \dot{\phi}_A = \dot{\phi}_W   (4.14)

Figure 4.10: The Case When "L2" is Zero.


3- Case III: When the Robotic Arm's Offset is in Both Directions:

This case combines the above two cases into a general form for the actual arm location on the wheelchair. In this case, both offsets are accounted for and algebraically added together with their signs. Wheelchair motion in this context also produces three relative motions at point "A": two linear motions along the X-axis and Y-axis, as well as a rotational motion about the Z-axis. Adding the component of the wheelchair's linear motion produces the general motion formula that relates the wheelchair frame velocities to the arm base frame velocities as follows:

\dot{X}_A = \dot{X}_W - (L_2\, S\phi + L_3\, C\phi)\, \dot{\phi}_W
\dot{Y}_A = \dot{Y}_W + (L_2\, C\phi - L_3\, S\phi)\, \dot{\phi}_W
\dot{\phi}_A = \dot{\phi}_W   (4.15)

4.5.2. Mapping the Driving Wheels' Velocities to the Wheelchair

The relation between the wheelchair's coordinate frame and the two driving wheels can be found by studying two cases: one in which the left wheel is stationary while the right wheel is rotating, as shown in figure 4.11, and the other in which the right wheel is stationary and the left wheel is moving, as shown in figure 4.12. Algebraically adding the two terms together gives the general relationship between the wheels' motion and the wheelchair frame's motion as follows:

\dot{X}_W = \frac{L_5\, C\phi}{2}\, \dot{\theta}_l + \frac{L_5\, C\phi}{2}\, \dot{\theta}_r
\dot{Y}_W = \frac{L_5\, S\phi}{2}\, \dot{\theta}_l + \frac{L_5\, S\phi}{2}\, \dot{\theta}_r
\dot{\phi}_W = -\frac{L_5}{L_1}\, \dot{\theta}_l + \frac{L_5}{L_1}\, \dot{\theta}_r   (4.16)


Figure 4.11: The Case When the Left Wheel is Stationary.

Figure 4.12: The Case When the Right Wheel is Stationary.

Equation (4.16) carries a full mapping between the velocities of the wheels and the wheelchair velocities, while equation (4.15) carries a full mapping between the velocities of the arm base and the wheelchair velocities.
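Equations (4.16) and (4.15) chain together as two small velocity maps. A sketch with illustrative names, using the dimension values of section 4.3.2:

```python
import math

# Velocity mapping from the wheel rates to the arm-base frame A, chaining
# eq. (4.16) (wheels -> wheelchair frame) with eq. (4.15) (wheelchair -> A).
L1, L2, L3, L5 = 0.560, 0.440, 0.230, 0.168   # metres

def wheels_to_chair(phi, tl_dot, tr_dot):
    """Eq. (4.16): Cartesian rates of the wheelchair frame."""
    xd = L5 * math.cos(phi) * (tl_dot + tr_dot) / 2.0
    yd = L5 * math.sin(phi) * (tl_dot + tr_dot) / 2.0
    pd = L5 * (tr_dot - tl_dot) / L1
    return xd, yd, pd

def chair_to_arm_base(phi, xd, yd, pd):
    """Eq. (4.15): Cartesian rates of the arm-base frame A."""
    xa = xd - (L2 * math.sin(phi) + L3 * math.cos(phi)) * pd
    ya = yd + (L2 * math.cos(phi) - L3 * math.sin(phi)) * pd
    return xa, ya, pd
```

Composing the two maps is exactly the general Jacobian of section 4.6: spinning in place, for instance, swings point A sideways even though the chair centre does not translate.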


4.6. Wheelchair's General Jacobian

To simplify the velocity relations and find the Jacobian, let us rewrite equation (4.15) in matrix form as follows:

\begin{bmatrix} \dot{X}_A \\ \dot{Y}_A \\ \dot{\phi}_A \end{bmatrix} = \begin{bmatrix} 1 & 0 & -(L_2 S\phi + L_3 C\phi) \\ 0 & 1 & (L_2 C\phi - L_3 S\phi) \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \dot{X}_W \\ \dot{Y}_W \\ \dot{\phi}_W \end{bmatrix}, \quad \text{or} \quad V_A = J_{WA}\, V_W   (4.17)

where "J_{WA}" is the Jacobian that relates the wheelchair's Cartesian velocities to the arm base's Cartesian velocities. Equation (4.16) can also be rewritten in matrix form as follows:

\begin{bmatrix} \dot{X}_W \\ \dot{Y}_W \\ \dot{\phi}_W \end{bmatrix} = \frac{L_5}{2} \begin{bmatrix} C\phi & C\phi \\ S\phi & S\phi \\ -2/L_1 & 2/L_1 \end{bmatrix} \begin{bmatrix} \dot{\theta}_l \\ \dot{\theta}_r \end{bmatrix}, \quad \text{or} \quad V_W = J_{whW}\, \dot{q}_{wh}   (4.18)

where "J_{whW}" is the Jacobian that relates the wheelchair's motion to the driving wheels' motion. To obtain the general Jacobian that relates the wheels' velocities to the arm base frame velocities, the product of the two Jacobians can be taken as follows:

V_A = J_{WA}\, J_{whW}\, \dot{q}_{wh} = J_{whA}\, \dot{q}_{wh}   (4.19)

\begin{bmatrix} \dot{X}_A \\ \dot{Y}_A \\ \dot{\phi}_A \end{bmatrix} = \frac{L_5}{2} \begin{bmatrix} C\phi + \frac{2}{L_1}(L_2 S\phi + L_3 C\phi) & C\phi - \frac{2}{L_1}(L_2 S\phi + L_3 C\phi) \\ S\phi - \frac{2}{L_1}(L_2 C\phi - L_3 S\phi) & S\phi + \frac{2}{L_1}(L_2 C\phi - L_3 S\phi) \\ -2/L_1 & 2/L_1 \end{bmatrix} \begin{bmatrix} \dot{\theta}_l \\ \dot{\theta}_r \end{bmatrix}   (4.20)


The above equation will be used with the numerical methods to produce the motion commanded by the user in Cartesian coordinates, after calculating the wheels' velocities required to realize the commanded motion.

4.7. Trajectory Options

As discussed earlier in this chapter, non-holonomic constraints on mobile platforms restrict the system's ability to control all DoF in the workspace. The above equations give the user the choice to control two out of three variables in the planar Cartesian coordinates. To make sense of the two chosen variables, it is more useful to select the two position variables of the wheelchair rather than the orientation and one of the two positions. This way, the wheelchair is free to move anywhere in the plane, but without proper orientation.

Adding trajectory planning to the motion can compensate for the lack of control variables by dividing the motion into sub-motions that realize two of the three commanded variables at every sub-motion. Suppose the wheelchair is commanded to move the arm's base reference frame from pose "T0" to pose "T1", where "T" is the homogeneous transformation matrix of that pose; the motion can be planned in three steps to realize the X-direction motion, the Y-direction motion and the Z-direction orientation. The following steps can be programmed to execute these three motions:

1- From the initial pose of the arm base at "T0", find the corresponding wheelchair frame transformation matrix at that pose using equation 4.12.
2- From the destination pose of the arm base at "T1", find the corresponding wheelchair frame transformation matrix at that pose using equation 4.12.


3- Draw a line between the two new wheelchair frame positions, and find the angle of that line using the resultant transformation between the two.
4- Command the wheelchair to rotate to the angle of the new line, with no translation.
5- Command the wheelchair to move in a straight line from the initial position to the final position of the wheelchair, ignoring the orientation.
6- Command the wheelchair to rotate from the angle of the new line to the angle of the final pose.

The above steps, with the three sub-motions produced as shown in figure 4.13, make the user capable of controlling all three DoF in the workspace of the wheelchair's planar motion. Simulation testing of this method was performed, and it successfully moved the wheelchair to any position and orientation on the ground plane.

Figure 4.13: The Three Sub-Motions in Motion Planning of the Wheelchair.

4.8. Operator's Safety Issues

In the simulation of the wheelchair motion, two separate concerns were noticed; they can be summarized as follows:


1- In the case of directional motion with no rotation, the wheelchair might end up at an odd angle that might be useless to the user if he/she is using this kind of control in the autonomous motion of the actual wheelchair.
2- In the case when a trajectory is planned and a motion is divided into three sub-motions, it was noticed that when the wheelchair is in the first or second rotational motion, the front of the wheelchair may run into close-by objects, such as a wall or a human in the close proximity of the wheelchair. It would also be dangerous if a drop in the ground elevation is nearby, such as a stair step down. Since this process is done autonomously, sensory information is important to add for the safety of the operator.

4.9. Summary

In this chapter, the wheelchair's motion was analyzed and the points of interest in the wheelchair relations were pointed out. Coordinate frames were assigned to these points of interest, and transformations between the assigned frames were generated. Velocity propagation from one coordinate frame to the other was conducted to find the corresponding points of interest's velocities based on the differentially-driven wheels' velocities, according to the non-holonomic motion rules. The system's Jacobian was generated to relate the robotic arm base frame's Cartesian velocities to the wheels' velocities. After that, trajectory planning was shown to compensate for the lack of full coordinate control in the workspace of the wheelchair. Safety issues in the wheelchair control methods were addressed, and suggestions were given to address them.
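The six steps of section 4.7 reduce to a rotate-translate-rotate decomposition of the planar move. A sketch with illustrative names (poses are (x, y, φ) of the wheelchair frame):

```python
import math

# The three sub-motions of section 4.7: rotate toward the goal point,
# translate in a straight line, rotate to the final heading.

def plan_submotions(start, goal):
    x0, y0, phi0 = start
    x1, y1, phi1 = goal
    heading = math.atan2(y1 - y0, x1 - x0)   # angle of the connecting line
    return (heading - phi0,                  # sub-motion 1: first rotation
            math.hypot(x1 - x0, y1 - y0),    # sub-motion 2: straight translation
            phi1 - heading)                  # sub-motion 3: final rotation

def execute(start, plan):
    """Apply the three sub-motions and return the resulting pose."""
    rot1, dist, rot2 = plan
    x, y, phi = start
    phi += rot1
    x += dist * math.cos(phi)
    y += dist * math.sin(phi)
    return x, y, phi + rot2
```

Any planar goal pose is reached exactly; the safety concerns above apply to sub-motions 1 and 3, where the rotating chair body sweeps through its surroundings.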


Chapter 5: Control and Optimization of the Combined Mobility and Manipulation

5.1. Introduction

In the previous two chapters, mathematical models of the 7-DoF robotic arm and the 2-DoF power wheelchair were derived. The homogeneous transformation relationships between different key points on both of them were developed, and the relative velocities were calculated. The Jacobians of both systems were developed, and numerical solution schemes to calculate the inverse kinematics of each were derived. It is often helpful to use robotic devices to help people with disabilities do their activities of daily living without the need for an assistant. The idea of mobile robotic manipulators is something that can help the user, especially if the user is confined to a wheelchair. Having two independent controls for the wheelchair and the arm significantly limits the use of the arm in terms of controllability and the executable tasks.

In this chapter, we combine the two systems in an effort to produce a 9-DoF system with three degrees of redundancy and a single control structure that can be used to control the combined wheelchair-mounted robotic arm (WMRA) system. This gives the WMRA system much more flexibility, and it can be controlled autonomously or through teleoperation, with the two sub-systems cooperating in the control scheme.


5.2. Terminology

In the combination of mobility and manipulation, several interpretations can be made of how to accomplish coordinated motion in the Cartesian space. In this work, the term "combined mobility and manipulation" is used to indicate that the combination is done from the lowest control level and mathematical models up to the advanced control algorithms, integrating both units into a single system. In this context, a single Jacobian that combines the characteristics of mobility and manipulation of the two sub-systems will be derived.

5.3. WMRA Assembly and Problem Definition

In previous work, mounting locations of the robotic arm on a power wheelchair were studied to determine the effect of the mounting location on the manipulability measure [57]. In this work, the selected mounting location provided the best location to execute tasks related to activities of daily living (ADL). Figure 5.1 shows the WMRA system with the frame assignments at each of the points of interest. The reference frame of the ground was assumed to be stationary, and all other frames of the WMRA system are related to it.

The aim of this chapter is to combine the seven joints of the robotic arm and the two joints (wheels) of the wheelchair into a single vector, and to generate the corresponding individual transformations to obtain the general homogeneous transformation that relates the ground frame to the end-effector's frame. Velocities will also be propagated through both sub-systems to find the total Jacobian that represents the velocity mapping from the nine joint variables to the six Cartesian space variables.


Figure 5.1: WMRA Coordinate Frames.

5.4. Kinematics of the Combined WMRA System

The total homogeneous transformation matrix of the WMRA system will be used later in the implementation of this theory, and defining it appropriately ensures the accuracy of the results. In our application, we will define it one way now, and then we will redefine it later in this chapter when we have other options and choices of variables to control. For the time being, let us define the joint space of the robotic arm as:

q_A = [\theta_1 \; \theta_2 \; \theta_3 \; \theta_4 \; \theta_5 \; \theta_6 \; \theta_7]^T   (5.1)


Let us also define the joint space of the wheelchair (the wheels' rotation angles) as:

q_C = [\theta_l \; \theta_r]^T   (5.2)

The combination of the two joint spaces can then be defined as:

q = [q_A \; q_C]^T = [\theta_1 \; \theta_2 \; \theta_3 \; \theta_4 \; \theta_5 \; \theta_6 \; \theta_7 \; \theta_l \; \theta_r]^T   (5.3)

where θ1 through θ7 are the robotic joint angles from the arm base to the wrist, respectively, and θl and θr are the rotation angles of the left and right wheels, respectively. Since we defined the transformation between the robotic arm base frame and the end-effector's frame in equation (3.3), the transformation between the ground frame and the wheelchair's frame in equation (4.10), and the constant transformation between the wheelchair's frame and the arm base frame in (4.12), we can use these equations to find the total transformation matrix as follows:

^{G}T_7 = {}^{G}T_W \; {}^{W}T_0 \; {}^{0}T_1 \; {}^{1}T_2 \; {}^{2}T_3 \; {}^{3}T_4 \; {}^{4}T_5 \; {}^{5}T_6 \; {}^{6}T_7   (5.4)

The matrix ^{G}T_7 represents the 4x4 homogeneous transformation between the ground and the end-effector's frame in terms of the WMRA joint space. At any moment of time, for a given joint space vector as in (5.3), substituting the joint values into (5.4) gives a clear description of the position and orientation of the end-effector. This calculation is important when we compare the target position in the workspace to the current location of the end-effector.

5.5. Jacobian Augmentation and Resolved Rate Equations Generation

The velocity relation developed in chapter 4 leads to the Jacobian of the non-holonomic wheelchair that relates the wheels' velocities to the three Cartesian velocities
in the planar motion of the wheelchair, as described in equation (4.20). It is important to modify this equation to include all six Cartesian velocities in space, so that we can combine the motion of the arm with the motion of the wheelchair. To find the new Jacobian of the wheelchair, let us define the task space vector of the wheelchair as:

$$r_C^A = f(q_C) = \begin{bmatrix} X_C & Y_C & Z_C & \alpha_C & \beta_C & \gamma_C \end{bmatrix}^T \qquad (5.5)$$

This vector represents the task space vector at the robotic arm base frame (A), and its rate can be found by modifying (4.20) to include all Cartesian velocities as follows:

$$\dot{r}_C^A = J_C \, J_W \, \dot{q}_C \qquad (5.6)$$

where $J_W$ is the wheelchair's planar Jacobian:

$$J_W = \frac{L_5}{2 L_1} \begin{bmatrix} L_1 C + 2(L_2 S + L_3 C) & L_1 C - 2(L_2 S + L_3 C) \\ L_1 S - 2(L_2 C - L_3 S) & L_1 S + 2(L_2 C - L_3 S) \\ -2 & 2 \end{bmatrix} \qquad (5.7)$$

with $C = \cos\phi$ and $S = \sin\phi$, and $J_C$ is defined to map the Jacobian from the three planar Cartesian coordinates to the six Cartesian coordinates as follows:

$$J_C = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}^T \qquad (5.8)$$

Equation (5.6) relates the wheels' velocity vector to the Cartesian task space at the robotic arm base. Since the task will not be performed by the arm base, we need to map the motion to the end-effector's frame. Let us define the task space vector at the end-effector's frame (E) as:

$$r = f(q_A, q_C) = \begin{bmatrix} X & Y & Z & \alpha & \beta & \gamma \end{bmatrix}^T \qquad (5.9)$$
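As a concrete sketch of eqs. (5.6)-(5.8), the two pieces can be coded directly; this is not the author's implementation, and the geometry values below (wheel separation $L_1$, arm-base offsets $L_2$, $L_3$, wheel radius $L_5$) are illustrative placeholders, not the WMRA's actual dimensions.

```python
import numpy as np

# Sketch of eqs. (5.6)-(5.8): J_W maps the two wheel rates to the planar
# (Xdot, Ydot, phidot) rates at the arm base, and J_C embeds those three
# rates into the six Cartesian rates (z, roll and pitch stay zero).

def wheelchair_jacobian(phi, L1, L2, L3, L5):
    """J_W of eq. (5.7); columns ordered [left wheel, right wheel]."""
    C, S = np.cos(phi), np.sin(phi)
    return (L5 / (2.0 * L1)) * np.array([
        [L1 * C + 2 * (L2 * S + L3 * C), L1 * C - 2 * (L2 * S + L3 * C)],
        [L1 * S - 2 * (L2 * C - L3 * S), L1 * S + 2 * (L2 * C - L3 * S)],
        [-2.0, 2.0]])

# J_C of eq. (5.8): x, y and yaw of the planar motion map to rows 1, 2, 6.
J_C = np.array([[1, 0, 0, 0, 0, 0],
                [0, 1, 0, 0, 0, 0],
                [0, 0, 0, 0, 0, 1]], dtype=float).T

# Sanity check: equal wheel rates at phi = 0 give pure forward motion.
J_W = wheelchair_jacobian(0.0, L1=0.56, L2=0.30, L3=0.25, L5=0.17)
rdot = J_C @ J_W @ np.array([1.0, 1.0])   # both wheels at 1 rad/s
```

With both wheels at 1 rad/s the arm base translates at $L_5$ m/s with no rotation, matching the differential-drive intuition behind eq. (5.7).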
Differentiating (5.9) with respect to time gives:

$$\dot{r} = \frac{\partial f}{\partial q_A}\,\dot{q}_A + \frac{\partial f}{\partial q_C}\,\dot{q}_C \qquad (5.10)$$

Note that the Jacobian in equation (5.6) relates the wheels' velocity vector to the Cartesian task space at the robotic arm base frame, whereas what we need is the equivalent relationship defined at the end-effector's frame. This is done for the wheelchair's motion by introducing a new Jacobian that relates the wheelchair's motion at the arm base frame to the end-effector's frame motion. This Jacobian depends only on the first two and the last Cartesian coordinates of the end-effector based on the arm base frame, as follows:

$$\dot{r}_C^E = J_G \, \dot{r}_C^A, \qquad J_G = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & -(P_{xg} S + P_{yg} C) \\ 0 & 1 & 0 & 0 & 0 & P_{xg} C - P_{yg} S \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix} \qquad (5.11)$$

where $P_{xg}$ and $P_{yg}$ are the x-y coordinates of the end-effector based on the arm base frame, $C = \cos\phi$, $S = \sin\phi$, and $\phi$ is the angle of the arm base frame, which is the same as the angle of the wheelchair based on the ground frame. Substituting (5.6) into (5.11) gives:

$$\dot{r}_C^E = J_G \, J_C \, J_W \, \dot{q}_C \qquad (5.12)$$

Now the Jacobians are augmented separately and ready for combination. Substituting the Jacobian found earlier for the arm and the Jacobians for the wheelchair into equation (5.10) gives:

$$\dot{r} = J_A \, \dot{q}_A + J_G \, J_C \, J_W \, \dot{q}_C \qquad (5.13)$$

Putting (5.13) in matrix form results in the following:

$$\dot{r} = \begin{bmatrix} J_A & J_G J_C J_W \end{bmatrix} \begin{bmatrix} \dot{q}_A \\ \dot{q}_C \end{bmatrix}, \quad \text{or} \quad \dot{r} = J\,\dot{q} \qquad (5.14)$$
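The augmentation in eqs. (5.11)-(5.14) is mostly bookkeeping, and can be sketched as follows. The arm Jacobian $J_A$ and wheelchair Jacobian $J_W$ are random stand-ins here (the real ones come from chapter 3 and eq. (5.7)); only the frame coupling and the 6x9 assembly are being illustrated, and the offsets passed to the coupling are illustrative.

```python
import numpy as np

# Sketch of eqs. (5.11)-(5.14): J_G moves the wheelchair's twist from the
# arm base to the end-effector, and the combined Jacobian is
# J = [J_A | J_G J_C J_W], mapping the 9 joint rates to the 6 task rates.

def ee_coupling_jacobian(P_xg, P_yg, phi):
    """J_G of eq. (5.11): identity plus the yaw-induced x/y coupling."""
    C, S = np.cos(phi), np.sin(phi)
    J_G = np.eye(6)
    J_G[0, 5] = -(P_xg * S + P_yg * C)
    J_G[1, 5] = P_xg * C - P_yg * S
    return J_G

def combined_jacobian(J_A, J_G, J_C, J_W):
    """Eq. (5.14): qdot (9,) = [7 arm joints, 2 wheels] -> rdot (6,)."""
    return np.hstack((J_A, J_G @ J_C @ J_W))

J_C = np.array([[1, 0, 0, 0, 0, 0],
                [0, 1, 0, 0, 0, 0],
                [0, 0, 0, 0, 0, 1]], dtype=float).T
rng = np.random.default_rng(0)
J_A = rng.standard_normal((6, 7))   # stand-in arm Jacobian
J_W = rng.standard_normal((3, 2))   # stand-in wheelchair Jacobian
J_G = ee_coupling_jacobian(0.4, 0.2, 0.0)
J = combined_jacobian(J_A, J_G, J_C, J_W)
```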
Equation (5.14) combines the two Jacobians so as to combine the mobility of the wheelchair with the manipulation of the arm. It is important to mention here that care must be taken in implementing this method in simulation or on the actual WMRA system. The combined Jacobian is expressed in terms of joint angle rates, while the wheelchair effectively performs linear motion resulting from the two driving wheels. Combining the Jacobians in this context requires three processes to be carried out before and after evaluating equation (5.14); these processes must be followed, or the solution will not converge and the system will go out of control. The three processes should be done in the following sequence:

1- Before using the Jacobian, convert the linear velocities of the left and right wheels into angular velocities using the equations:

$$\dot{\theta}_l = \dot{D}_l / L_5 \quad \text{and} \quad \dot{\theta}_r = \dot{D}_r / L_5 \qquad (5.15)$$

2- Use the Jacobian with the new angles of the left and right wheels.

3- After using the Jacobian, convert the angular velocities of the left and right wheels back into linear velocities using the equations:

$$\dot{D}_l = L_5\,\dot{\theta}_l \quad \text{and} \quad \dot{D}_r = L_5\,\dot{\theta}_r \qquad (5.16)$$

The reason we convert between angular and linear velocities is to avoid oscillation of the system when evaluating the sines and cosines of the wheels' angles as they complete a full rotation. Having the angular velocity values converted into linear velocities eliminates the oscillation in the system when the mobility and manipulation are combined, and produces smooth and compatible motion with the task
trajectory. Running these equations in simulation showed satisfactory results; this will be discussed later.

5.6. Jacobian Changes Based on the Control Frame

It is often the case in mobile robotic applications that the user needs to control the end-effector based on its own frame rather than the ground frame or the wheelchair frame. In this application, we considered all three possibilities, to be included for user convenience. These possibilities require a slight modification to the Jacobians of both the wheelchair and the robotic arm. Note that, so far, we have defined the robotic arm's Jacobian based on its base frame, and the wheelchair's Jacobian based on the ground frame. These definitions will be changed based on the controlled frame.

5.6.1. Ground-Based Control

This is best used in autonomous control mode, where the trajectory to the target is always defined based on the ground frame. The Jacobian of the wheelchair in this case stays the same, and the Jacobian of the robotic arm becomes:

$$J_A^{new} = \begin{bmatrix} {}^{G}R_0 & 0 \\ 0 & {}^{G}R_0 \end{bmatrix} J_A^{original} \qquad (5.17)$$

5.6.2. Wheelchair-Based Control

This is best used in teleoperation control mode, when the user is controlling the wheelchair most of the time. The Jacobian of the robotic arm in this case stays the same, and the Jacobian of the wheelchair becomes:
$$\left[J_G J_C J_W\right]^{new} = \begin{bmatrix} ({}^{G}R_W)^T & 0 \\ 0 & ({}^{G}R_W)^T \end{bmatrix} \left[J_G J_C J_W\right]^{original} \qquad (5.18)$$

5.6.3. End-Effector Based Control (Piloting Option)

This is best used in teleoperation control mode, when the user is controlling the end-effector most of the time to perform ADL tasks. This mode is called the pilot mode, since the end-effector is compared to a flying object with its own frame-based control. The Jacobian of the robotic arm in this case is changed as follows:

$$J_A^{new} = \begin{bmatrix} ({}^{0}R_7)^T & 0 \\ 0 & ({}^{0}R_7)^T \end{bmatrix} J_A^{original} \qquad (5.19)$$

and the Jacobian of the wheelchair is changed as follows:

$$\left[J_G J_C J_W\right]^{new} = \begin{bmatrix} ({}^{G}R_7)^T & 0 \\ 0 & ({}^{G}R_7)^T \end{bmatrix} \left[J_G J_C J_W\right]^{original} \qquad (5.20)$$

5.7. Jacobian Inversion Methods and Singularities

At this point, we have produced a Jacobian that combines both the mobility of the power wheelchair and the manipulation of the 7-DoF arm. To make effective use of the arm and execute Cartesian space tasks, an inversion of the Jacobian is necessary. Oftentimes, singular configurations that produce high joint rates and lead to instability occur while trying to execute a given task. Inverting the Jacobian while trying to avoid singularities gives reasonably effective results. Two methods of Jacobian inversion were implemented for the combined system: one uses the Pseudo inverse, and the other uses the Singularity-Robust inverse.
5.7.1. Inverting Using Pseudo Inverse

Numerical solutions were implemented using the Jacobian to follow the user's directional motion commands or to follow the desired trajectory. Redundancy can be resolved using the Pseudo inverse of the Jacobian [54], and singularity is avoided by maximizing the manipulability measure [53] discussed earlier in chapter 3. When the Jacobian matrix is of rank 6, which is the number of rows of the 6x9 combined Jacobian matrix that we have, the Pseudo inverse can be written as:

$$J^{*} = J^{T}\,(J\,J^{T})^{-1} \qquad (5.21)$$

When this inverse was implemented in simulation, it showed good results when the manipulability measure was far from zero. Since this method guarantees a valid solution only away from singular configurations and not around them, the results can carry high joint velocities when singularity is approached.

5.7.2. Inverting Using Singularity-Robust Inverse

Another inversion method was tried with the new combined WMRA system, since singularity needs to be addressed. A method that starts to change the arm configuration as it approaches singularities was tested with the new system; that method is the Singularity-Robust (S-R) inverse of the Jacobian [50]. Using this method allowed the use of redundancy resolution schemes for different subtasks, while singularities are taken care of at the Jacobian inversion level. The S-R inverse of the Jacobian used to carry out the inverse kinematics can be written as:

$$J^{*} = J^{T}\,(J\,J^{T} + k\,I_6)^{-1} \qquad (5.22)$$
where $I_6$ is a 6x6 identity matrix and k is a scale factor. It is known that this method reduces the joint velocities near singularities, but compromises the accuracy of the solution by increasing the velocity error. Choosing the scale factor k is critical: if it is too high, the error will be too high and the system might destabilize; if it is too small, the joint rates will become too high and the system might destabilize. Since the point of using this factor is to give an approximate solution near and at singularities, an adaptive scale factor is updated at every time step to apply the proper factor as needed. This factor can be defined as:

$$k = \begin{cases} k_0\,\left(1 - \dfrac{w}{w_0}\right)^2 & \text{for } w < w_0 \\ 0 & \text{for } w \geq w_0 \end{cases} \qquad (5.23)$$

where $w_0$ is the manipulability measure at the start of the boundary chosen when singularity is approached, and $k_0$ is the scale factor at singularity. The optimum values of $w_0$ and $k_0$ for our system were found by simulation to be 0.034 and 13x10^-9, respectively.

5.8. Optimization Methods with the Combined Jacobian

One of the most beneficial advantages of redundancy in robotic manipulators is the fact that their motion can be optimized in many different ways. A human arm, for instance, is a redundant system: it has 7 degrees of freedom (3 in the shoulder, 1 in the elbow and 3 in the wrist), while there are only 6 physical degrees of freedom in the task of placing the hand in any position and orientation in space (x, y, z, roll, pitch and yaw). If the human arm had only 6 joints, the arm would be stationary whenever the wrist is fixed; but since the human arm carries a 7th joint, we are still able to move the arm while holding the wrist at a fixed point.
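Before using the redundancy, it helps to see the two inversion methods of section 5.7 side by side. The sketch below assumes NumPy, takes w as the manipulability measure $\sqrt{\det(J J^T)}$ discussed in chapter 3, and uses the quoted values of $w_0$ and $k_0$; it is an illustration, not the author's code.

```python
import numpy as np

# Sketch of eqs. (5.21)-(5.23): Pseudo inverse vs. the S-R inverse with the
# adaptive damping factor k(w).

W0, K0 = 0.034, 13e-9   # w0 and k0 as quoted in the text

def manipulability(J):
    return float(np.sqrt(np.linalg.det(J @ J.T)))

def adaptive_k(w, w0=W0, k0=K0):
    """Eq. (5.23): damping grows to k0 at singularity, zero beyond w0."""
    return k0 * (1.0 - w / w0) ** 2 if w < w0 else 0.0

def sr_inverse(J):
    """Eq. (5.22): J^T (J J^T + k I6)^-1; reduces to eq. (5.21) when k = 0."""
    k = adaptive_k(manipulability(J))
    return J.T @ np.linalg.inv(J @ J.T + k * np.eye(J.shape[0]))

# Far from singularity (k = 0) the S-R inverse equals the Pseudo inverse:
rng = np.random.default_rng(1)
J = rng.standard_normal((6, 9))
close = np.allclose(sr_inverse(J), np.linalg.pinv(J))
```

Note the trade-off the text describes: a non-zero k buys bounded joint rates near singularities at the price of some task-space velocity error.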
Now that singularity is taken care of using the S-R inverse of the Jacobian, we can use the joint redundancy to optimize for a secondary task, or to set motion preference weights on the joint domain while following the Cartesian trajectory. Three methods of optimization will be discussed and tried for this system.

5.8.1. Criteria Functions and Minimizing Euclidean Norm of Errors

Redundancy can be resolved using either of the two inverses discussed above to obtain a numerical solution for the joint angle rates. If we have the desired task space variables and need to obtain the desired joint space variables, we can use the simplest form of the resolved rate method with the following equation:

$$\begin{bmatrix} \dot{q}_A \\ \dot{q}_C \end{bmatrix}_d = J^{*}\,\dot{r}_d \qquad (5.24)$$

This solution minimizes the norm of the joint velocities and the Euclidean norm of the end-effector velocity errors, i.e. the difference between the commanded Cartesian space variables and the actual Cartesian space variables achieved. Adding more tasks to the optimization process can be done by adding terms that include another optimization function, as follows:

$$\begin{bmatrix} \dot{q}_A \\ \dot{q}_C \end{bmatrix}_d = J^{*}\,\dot{r}_d + (I_9 - J^{*}J)\,\dot{q}_0 \qquad (5.25)$$

where $\dot{q}_0$ is a 9x1 vector representing the secondary task, $J^{*}$ is the inverse (Pseudo or S-R) of the 6x9 combined Jacobian matrix, and $I_9$ is the identity matrix of size 9. The choice of the criterion function can range from a scalar quantity, such as the manipulability measure w, to a set of functions, such as joint limit avoidance conditions. The secondary task can
either be the desired trajectory, in the case of pre-set task execution, or it can be a criterion function such that:

$$\dot{q}_0 = a\,H_q(q) \qquad (5.26)$$

where H(q) is the optimization criterion y = H(q) and $H_q$ denotes its gradient with respect to q. The existence of the mobile platform means that $\dot{q}_0$ may not exist under a nonholonomic constraint such as that of the wheelchair. To get around this limitation, [47] proposed the following: differentiate the optimization criterion function H with respect to time as follows:

$$\dot{y} = \dot{H}(q) = \frac{\partial H}{\partial q_A}\,\dot{q}_A + \frac{\partial H}{\partial q_C}\,\dot{q}_C \qquad (5.27)$$

or,

$$\dot{y} = H_q^T \begin{bmatrix} I & 0 \\ 0 & J_W \end{bmatrix} \begin{bmatrix} \dot{q}_A \\ \dot{q}_C \end{bmatrix} \qquad (5.28)$$

where the identity block acts on the arm joints and $J_W$ is the wheelchair's planar Jacobian. In this case, the value of $\dot{q}_0$ that improves the objective function can be defined as:

$$\dot{q}_0 = \pm\, a \begin{bmatrix} I & 0 \\ 0 & J_W^T \end{bmatrix} H_q(q) \qquad (5.29)$$

and that velocity vector can be used for optimization. This gives a good representation of the arm joints' velocities and the wheels' velocities of the wheelchair. The null space may contain more variables than are required for the secondary task; in this case, another secondary task can be used to optimize for more than one criterion. The sign in (5.29) is taken as positive when the optimization function needs to be maximized, and negative when it needs to be minimized, depending on the function and its requirement in the control algorithm.
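The null-space projection of eq. (5.25) can be sketched as follows; the combined 6x9 Jacobian is a random full-rank stand-in here (in the WMRA it would come from eq. (5.14)), and the numbers are illustrative.

```python
import numpy as np

# Sketch of the resolved rate solution with a secondary task, eq. (5.25):
# qdot = J* rdot_d + (I9 - J* J) qdot_0.

def resolved_rate_with_nullspace(J, rdot_d, qdot_0):
    J_pinv = np.linalg.pinv(J)                     # J* of eq. (5.21)
    null_proj = np.eye(J.shape[1]) - J_pinv @ J    # null-space projector
    return J_pinv @ rdot_d + null_proj @ qdot_0

rng = np.random.default_rng(2)
J = rng.standard_normal((6, 9))     # stand-in combined Jacobian
rdot_d = rng.standard_normal(6)     # desired task-space rates
qdot_0 = rng.standard_normal(9)     # secondary-task joint rates

qdot = resolved_rate_with_nullspace(J, rdot_d, qdot_0)
achieved = J @ qdot                 # secondary task is invisible here
```

Because the secondary term is projected into the null space of J, the achieved task-space velocity is still exactly the commanded one, which is the property eq. (5.25) relies on.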
5.8.2. Weighted Least Norm Solution

The Weighted Least Norm solution, as proposed by [27], can also be used to resolve the system redundancy. In order to put a motion preference on one joint rather than another (such as between the wheelchair wheels and the arm joints), a weighted norm of the joint velocity vector can be defined as:

$$\|\dot{q}\|_W = \sqrt{\dot{q}^T\,W\,\dot{q}} \qquad (5.30)$$

where W is a 9x9 symmetric and positive definite weighting matrix. For simplicity, the weighting matrix can be a diagonal matrix that represents the motion preference of each joint of the system. For the purpose of analysis, the following transformations are introduced (here the subscript W denotes weighting, not the wheelchair):

$$J_W = J\,W^{-1/2} \quad \text{and} \quad \dot{q}_W = W^{1/2}\,\dot{q} \qquad (5.31)$$

Using (5.31), we can rewrite (5.14) and (5.30) respectively as:

$$\dot{r} = J_W\,\dot{q}_W \qquad (5.32)$$

and

$$\|\dot{q}\|_W = \sqrt{\dot{q}_W^T\,\dot{q}_W} \qquad (5.33)$$

In this case, the Least Norm solution of (5.32) is:

$$\dot{q}_W = J_W^{*}\,\dot{r} \qquad (5.34)$$

Using the second part of (5.31), the joint velocities can be recovered as:

$$\dot{q} = W^{-1/2}\,\dot{q}_W \qquad (5.35)$$

Using the Pseudo inverse, it can be shown that the Weighted Least Norm solution is calculated as follows:

$$\dot{q} = W^{-1}\,J^{T}\,(J\,W^{-1}\,J^{T})^{-1}\,\dot{r} \qquad (5.36)$$
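Eq. (5.36) is compact enough to sketch directly. The Jacobian is again a random 6x9 stand-in, and the weights below (wheels penalized 100x) only illustrate the arm-over-wheelchair preference the text describes; they are not tuned values.

```python
import numpy as np

# Sketch of the Weighted Least Norm solution of eq. (5.36):
# qdot = W^-1 J^T (J W^-1 J^T)^-1 rdot, with a diagonal weighting matrix.

def weighted_least_norm(J, rdot, weights):
    W_inv = np.diag(1.0 / np.asarray(weights, dtype=float))
    return W_inv @ J.T @ np.linalg.inv(J @ W_inv @ J.T) @ rdot

rng = np.random.default_rng(3)
J = rng.standard_normal((6, 9))              # stand-in combined Jacobian
rdot = rng.standard_normal(6)
weights = [1, 1, 1, 1, 1, 1, 1, 100, 100]    # arm joints light, wheels heavy

qdot = weighted_least_norm(J, rdot, weights)
```

With all weights equal to one, the expression collapses to the ordinary Pseudo inverse solution, so the weighting only reshapes how the task is shared among the nine variables, never what the end-effector does.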
The above method has been used in simulation of the 9-DoF WMRA system with the nine state variables $\dot{q}_d$ that represent the seven joint velocities of the arm and the two wheel velocities of the power wheelchair. It was found that the latter two state variables are of limited use, since they tend to unnecessarily rotate the wheelchair back and forth during a long forward motion due to their equal weights. Changing the weights of these two variables will only result in a preference of one's motion over the other's. More on this will be discussed later in the chapter.

5.8.3. Joint Limit Avoidance

The criterion function used for optimization can be defined based on the physical joint limits of the WMRA system, and minimizing that function results in keeping the joint motion within its limits. A mathematical representation of joint limits in robotic manipulators has been a topic for many researchers. One of these representations was proposed in [27] as follows:

$$H(q) = \sum_{i=1}^{7} \frac{1}{4}\,\frac{(q_{i,max} - q_{i,min})^2}{(q_{i,max} - q_{i,current})\,(q_{i,current} - q_{i,min})} \qquad (5.37)$$

Each term of this criterion function becomes "1" when the corresponding joint angle is in the middle of its range, and becomes infinite when the joint reaches either of its limits. When used in (5.29), the function can be minimized by choosing the negative sign, so that the null space is used to choose the minimum value of the function that satisfies the main objective function. Using this optimization function in (5.36) can be accomplished through the weight matrix used for optimization. Rather than choosing arbitrary weight values for
each individual joint based on user preference, an additional value can be added to represent the optimization criterion function, as follows:

$$W = \begin{bmatrix} w_{1,u} + \left|\frac{\partial H(q)}{\partial q_1}\right| & 0 & \cdots & 0 \\ 0 & w_{2,u} + \left|\frac{\partial H(q)}{\partial q_2}\right| & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & w_{9,u} + \left|\frac{\partial H(q)}{\partial q_9}\right| \end{bmatrix} \qquad (5.38)$$

where $w_{i,u}$ is the user-defined weight preference for joint i, and the second term in each element is the gradient projection of the criterion function, defined as:

$$\left|\frac{\partial H(q)}{\partial q_i}\right| = \left|\frac{(q_{i,max} - q_{i,min})^2\,(2\,q_{i,current} - q_{i,max} - q_{i,min})}{4\,(q_{i,max} - q_{i,current})^2\,(q_{i,current} - q_{i,min})^2}\right| \qquad (5.39)$$

When any particular joint is in the middle of its range, (5.39) becomes zero for that joint, and the only weight left is the user-defined weight. On the other hand, when any particular joint is at its limit, (5.39) becomes infinite, which means that the joint carries an infinite weight that makes it impossible to move any further. When the user prefers to move the robotic arm with minimal wheelchair motion, heavy weights can be assigned by the user to the two wheelchair state variables. Then, when any of the robotic arm joints gets close to its limit and its weight approaches infinity, the wheelchair's weight will be much less than that of the joint, and hence the wheelchair will be freer to move than the joint that is close to its limit.

It is important to note two different deficiencies that may lead to unintended operation or joint lock when using this method. The first deficiency is that the joint is penalized with a higher weight whether it is approaching its limit or moving away from it.
This may cause the robotic arm to use the null space inefficiently, by preferring to move a joint with a heavy weight that is going towards its limit rather than a joint with a heavier weight that is moving away from its limit. This problem can be eliminated if two conditions are imposed on the criterion function, as shown in figure 5.2. These conditions are as follows:

1- When the joint limit is being approached from outside the limit and the joint is moving towards the limit (i.e. the weight difference between two consecutive steps is positive and the current joint limit is not exceeded), give the weight as calculated by (5.38).

2- When the joint limit is being approached from outside the limit and the joint is moving away from the limit (i.e. the weight difference between two consecutive steps is negative and the current joint limit is not exceeded), give the weight as $w_{i,u}$.

The second deficiency is that the precise joint limit that takes the weight to infinity may never be reached; instead, the numerical solution, with its relatively coarse step size, may jump from a joint value close to the joint limit before it is reached to a joint value close to the joint limit after it is reached. This results in a heavy weight that slowly gets lower as the joint moves away from the set limit towards its actual limit. If the previous two conditions were applied alone, a dangerous motion could occur, because the weight would be given as the user-chosen weight while the joint is moving away from its set limit from inside that limit. This could either break the joint or lock it when it reaches its actual physical limit at the hard stop. To overcome this deficiency, two more conditions need to be imposed on the system:
3- When the joint limit is being approached from inside the limit and the joint is moving away from the limit (i.e. the weight difference between two consecutive steps is negative and the current joint limit is exceeded), give the weight as infinity, since no further motion inside should be allowed.

4- When the joint limit is being approached from inside the limit and the joint is moving towards the limit (i.e. the weight difference between two consecutive steps is positive and the current joint limit is exceeded), give the weight as $w_{i,u}$, since the joint is actually moving away from its limit.

Imposing the above four conditions on the weight matrix applied to the optimization criterion gave the control mechanism much better results in terms of joint limit avoidance and user-preferred motion of each individual variable in the joint space.

Figure 5.2: Four Joint Limit Boundary Conditions.
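The basic weight construction of eqs. (5.38)-(5.39), before the four boundary conditions are layered on top, can be sketched as follows; the joint limits and user weights below are illustrative, not the WMRA's.

```python
import numpy as np

# Sketch of eqs. (5.38)-(5.39): each diagonal entry of W is the user
# preference w_iu plus |dH/dq_i|, the gradient of the joint-limit criterion.

def joint_limit_gradient(q, q_min, q_max):
    """|dH/dq| of eq. (5.39) for one joint."""
    num = (q_max - q_min) ** 2 * (2 * q - q_max - q_min)
    den = 4 * (q_max - q) ** 2 * (q - q_min) ** 2
    return abs(num / den)

def weight_matrix(q, q_min, q_max, w_user):
    """Diagonal W of eq. (5.38)."""
    diag = [w + joint_limit_gradient(qi, lo, hi)
            for qi, lo, hi, w in zip(q, q_min, q_max, w_user)]
    return np.diag(diag)

q_min = [-np.pi] * 3
q_max = [np.pi] * 3
w_user = [1.0, 1.0, 1.0]

W_mid = weight_matrix([0.0, 0.0, 0.0], q_min, q_max, w_user)
W_near = weight_matrix([0.0, 0.0, 0.99 * np.pi], q_min, q_max, w_user)
```

At mid-range the gradient vanishes and only the user weight remains; as a joint approaches its limit, its diagonal entry grows without bound, which is exactly what stalls that joint in the weighted least norm solution.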
5.8.4. Obstacle Avoidance

The same criterion function can also be used to optimize the control algorithm for obstacle avoidance. A suitable mathematical representation of the obstacles around the WMRA system is necessary to formulate these criteria functions. For the immediate objects around the WMRA system, there are three complex shapes that need to be avoided: the wheelchair, the human user and the robotic arm itself. Modelling these shapes mathematically can be challenging since they are not fixed shapes. In this case, a sensory suite can be used to recognize an obstacle and avoid it as the WMRA moves in its workspace.

5.8.5. Safety Conditions

In order to create a comprehensive representation of the physical environment within the WMRA and its immediate surroundings, several safety conditions should be imposed to respect the joint limits, in both position and velocity, and to keep the arm from hitting the human user, the wheelchair, or itself. These conditions were put in place as follows:

1- Stop the joint if it reaches its maximum or minimum limit:

$$\text{if } q_i \geq q_{i,max} \ \text{ OR } \ q_i \leq q_{i,min}, \text{ then } \dot{q}_{i,commanded} = 0 \qquad (5.40)$$

2- Slow down the joint if it reaches its velocity limit (which is also useful in case the WMRA reaches singularity and goes out of control):

$$\text{if } |\dot{q}_i| \geq \dot{q}_{i,max}, \text{ then } \dot{q}_{i,commanded} = \text{sign}(\dot{q}_{i,commanded})\,\dot{q}_{i,max} \qquad (5.41)$$
3- Slowly reverse all joint velocities in case any robotic arm joint frame approaches collision with the ground, the wheelchair's side, the wheelchair's wheels, the human driver's shoulder, lap or legs, or the wheelchair's battery pack.

4- Slowly reverse all joint velocities in case the robotic arm's upper arm approaches collision with its forearm.

Considering the above four conditions in the control algorithm ensures the safety of the human operator, as well as of the WMRA system itself, against physical damage. Condition "3" above has been expanded into "16" sub-conditions that address the physical relations between the reachable space of each joint frame and the physical presence of obstacles in that particular reachable space.

5.8.6. Unintended Motion Effect Based on the Optimization Criteria

It is important to the user, when operating the WMRA system in teleoperation mode, to have total control with predictable motion. When equation (5.25) is used with $\dot{q}_0 = 0$, the optimization is based on the minimization of the Euclidean norm of errors. Observing that equation shows that the commanded joint velocities will be zero if the user does not command the Cartesian variables of the end-effector to move. Depending on the chosen criterion function for $\dot{q}_0$ and its dependency on different variables in both the Cartesian space and the joint space, it is possible for the WMRA to move even when the user does not command it to move. In the case of joint limit avoidance, the criterion function in (5.37) depends only on the joint variables. That makes the second term in (5.25) non-zero even when the user
commands the WMRA system not to move by setting the first term to zero (i.e. by not touching the controller interface, so that $\dot{r} = 0$). The logical interpretation of that equation is that the user will see the WMRA in motion as soon as the system is powered up if any of the joints is not in the middle of its operational range. This motion will drive the arm to its joint mid-range angles and then stop. The reaction caused by such optimization can be dangerous to the user and to surrounding subjects if it is used as is. On the other hand, looking at equation (5.36) for the Weighted Least Norm solution, when the commanded Cartesian velocity vector is $\dot{r} = 0$, the commanded joint velocity vector will be zero no matter what the weight function is. This ensures that the optimization will not take effect unless the user starts to command the arm to move in its workspace. Even though both solutions were integrated in the control algorithm of the WMRA, care must be taken when using the first solution, since it can result in unpredictable motion.

5.9. Optional Combinations for the Resolved Rate Solution

Each of the optimization schemes mentioned earlier can be used for different purposes, and other schemes might be added to resolve redundancy. To make the control algorithm flexible in terms of optimization, all methods were implemented in the high-level control so that the user can choose his/her preference. Two things are needed for redundancy resolution: the first is finding the inverse of the Jacobian, and the second is implementing the optimization function to find the joint rates. The following eight combinations were programmed as redundancy resolution options:
1- Pseudo inverse solution (PI): This selection evaluates the inverse of the Jacobian using the Pseudo inverse to find the joint rates for the next time step when given the current Cartesian coordinates of the end-effector.

2- S-R inverse solution (SRI): This selection evaluates the inverse of the Jacobian using the Singularity-Robust inverse to find the joint rates for the next time step when given the current Cartesian coordinates of the end-effector.

3- Weighted Pseudo inverse solution (WPI): This selection evaluates the inverse of the Jacobian using the Pseudo inverse, and then applies the weighted least norm solution to find the joint rates for the next time step when given the current Cartesian coordinates of the end-effector.

4- Weighted S-R inverse solution (WSRI): This selection evaluates the inverse of the Jacobian using the Singularity-Robust inverse, and then applies the weighted least norm solution to find the joint rates for the next time step when given the current Cartesian coordinates of the end-effector.

5- Pseudo inverse with gradient projection solution (PI-GP): This selection evaluates the inverse of the Jacobian using the Pseudo inverse and adds the projection of the null space with the optimization criterion function to find the joint rates for the next time step when given the current Cartesian coordinates of the end-effector.

6- S-R inverse with gradient projection solution (SRI-GP): This selection evaluates the inverse of the Jacobian using the Singularity-Robust inverse and adds the projection of the null space with the optimization criterion function to
find the joint rates for the next time step when given the current Cartesian coordinates of the end-effector.

7- Weighted Pseudo inverse with criterion function optimization solution (WPI-CF): This selection evaluates the inverse of the Jacobian using the Pseudo inverse, and then applies the weighted least norm solution, with the criterion optimization function included in the weight matrix, to find the joint rates for the next time step when given the current Cartesian coordinates of the end-effector.

8- Weighted S-R inverse with criterion function optimization solution (WSR-CF): This selection evaluates the inverse of the Jacobian using the Singularity-Robust inverse, and then applies the weighted least norm solution, with the criterion optimization function included in the weight matrix, to find the joint rates for the next time step when given the current Cartesian coordinates of the end-effector.

In the event that other optimization criteria or Jacobian inversion methods are added to the control algorithm, more choices can be added to the program, and the user will be prompted to choose which optimization selection to use.

5.10. State Variable Options in the Control Algorithm

In any control problem, state variables are chosen such that changing any of them during the control process gives a result that makes sense for the general objective of the control algorithm. In our application, the state variables selected are the seven joint angles of the robotic arm and the two wheelchair wheels' angles. This is not necessarily the best
way to control the WMRA system. Other state variables may make more sense for the overall control objectives, and may be used to better effect in terms of optimization.

5.10.1. Seven Robotic Arm Joints, Left Wheel and Right Wheel Variables

This is the default state variable selection, which applies the resolved rate scheme to find the vector "q" containing the nine joint variables selected. These joint variables are the seven joints of the robotic arm, the left driving wheel angle, and the right driving wheel angle of the wheelchair. As far as the robotic arm is concerned, controlling its seven joints is the only available selection; but when it comes to the wheelchair, there is another way to control its wheels' angles. In this sub-section, the selected variables for the wheelchair motion gave undesired motion that was not necessary to execute the trajectory-following command.

When used with the weighted least norm solution optimization scheme, a weight matrix was created containing preference weights for each of the nine state variables, such that the wheelchair wheels' motion carries higher weights than the robotic arm's joints. This is done so that, if the task is within the workspace of the robotic arm and can be executed without the need to move the wheelchair, the control process results in minimal wheelchair motion and maximum arm motion for task execution. In the event that the user is working in an office environment, he/she will not be inconvenienced by the wheelchair constantly moving unnecessarily.

The problem found with this selection of state variables is that when motion of the wheelchair is necessary for task execution, and both wheels are given equal weights, the wheelchair turns unnecessarily. This happens because the robotic arm is
mounted on the side of the wheelchair. When it follows a long trajectory and the arm reaches its workspace boundaries, the left wheel starts to move, since it is the closest one to the arm base, and moving it allows the arm to continue following the trajectory. This results in a wheelchair rotation, since the right wheel does not need to move at that point. When moving the left wheel is no longer sufficient to give the robotic arm the necessary motion to follow the trajectory, the right wheel starts moving and causes the wheelchair to turn again, unnecessarily. Even though the end-effector precisely followed its trajectory and the wheelchair moved as necessary, the wheelchair's motion was unpleasant.

In a test moving the end-effector along a straight-line trajectory extending 10 meters in the forward direction of the wheelchair, the motion that makes sense is for the arm to extend forward and the two wheels of the wheelchair to move at the same speed as necessary; but because of the wheels' proximities to the robotic arm base, different speeds were given to the wheels over the simulation period until the destination point was reached. Giving the two wheels different weights based on their proximity to the arm base does not solve the problem, since the trajectory can be different from a straight line; in the case of left- or right-hand motion of the end-effector, the wheelchair would again not behave in the best way possible.

5.10.2. Seven Robotic Arm Joints, Forward and Rotational Motion of the Wheelchair

To avoid this kind of behavior in the wheelchair motion response, the two wheelchair state variables should be changed. Since the user does not care about the wheels' motion, or which wheel moves faster than the other, two different state variables must be chosen and then related to the wheels' motion. The choice that makes sense in
this context is polar coordinate control of the wheelchair, which can be done by controlling the linear distance and the angle of the wheelchair. Having the linear motion and the rotational motion of the wheelchair as the two state variables, rather than the two wheels' motions, can be used with the weights to give motion that is more convenient to the user. A linear motion of the wheelchair corresponds to equal velocities of both wheels in the same direction, while a rotational motion corresponds to equal velocities of both wheels in opposite directions. A combination of linear and rotational velocities corresponds to algebraically adding the resultant motion of both in each of the wheels. To make this option available, let us re-define equation (5.2) with the two new state variables as follows:

$$q_C = \begin{bmatrix} X\ (\text{Translation}) & \phi\ (\text{Rotation}) \end{bmatrix}^T \qquad (5.42)$$

For pure rotation of the wheelchair, both wheels run to the same angle in opposite directions, and the resulting wheelchair inclination angle is:

$$\phi = \frac{L_5}{L_1}\,(\theta_r - \theta_l) \qquad (5.43)$$

Since $\theta_l$ and $\theta_r$ are equal in value but opposite in direction, let us take the right wheel angle as the positive angle, that is:

$$\theta = \theta_r = -\theta_l \qquad (5.44)$$

This simplifies (5.43) to:

$$\phi = \frac{L_5}{L_1}\,(2\theta) = \frac{2 L_5}{L_1}\,\theta \qquad (5.45)$$

Or,

$\theta = \dfrac{L_1\,\phi}{2\,L_5}$  (5.46)

Using (5.44) and (5.46) gives the relations:

$\theta_l = -\dfrac{L_1\,\phi}{2\,L_5}$ and $\theta_r = \dfrac{L_1\,\phi}{2\,L_5}$  (5.47)

For pure translation of the wheelchair, both wheels turn through the same angle "$\theta$" in the same direction, and the resulting wheelchair linear distance is:

$X = L_5\,\theta$, or $\theta_l = \theta_r = \dfrac{X}{L_5}$  (5.48)

For the general case, when we have both rotation and translation, we can add the wheels' angles in (5.47) and (5.48) together. Now we need to relate the old state variables to the new state variables of the wheelchair as follows: let the new wheelchair state variables be defined as $q_C = [X \quad \phi]^T$; then the two wheels' rotational angles will be:

$\theta_l = \dfrac{X}{L_5} - \dfrac{L_1\,\phi}{2\,L_5}$ and $\theta_r = \dfrac{X}{L_5} + \dfrac{L_1\,\phi}{2\,L_5}$  (5.49)

Differentiating (5.49) with respect to time gives the rates of the angular motion of the wheels as:

$\dot{\theta}_l = \dfrac{\dot{X}}{L_5} - \dfrac{L_1\,\dot{\phi}}{2\,L_5}$ and $\dot{\theta}_r = \dfrac{\dot{X}}{L_5} + \dfrac{L_1\,\dot{\phi}}{2\,L_5}$  (5.50)

Remembering that $X = L_5\,\theta$ for linear motion, using the angle instead of the distance makes the Jacobian compatible in units. Putting (5.50) in matrix form gives a better perspective as follows:

$\begin{bmatrix} \dot{\theta}_l \\ \dot{\theta}_r \end{bmatrix} = \begin{bmatrix} \dfrac{1}{L_5} & -\dfrac{L_1}{2\,L_5} \\ \dfrac{1}{L_5} & \dfrac{L_1}{2\,L_5} \end{bmatrix} \begin{bmatrix} \dot{X} \\ \dot{\phi} \end{bmatrix}$  (5.51)

This gives the Jacobian that relates the two wheels to the inclination angle and the travelled linear distance of the wheelchair. The Jacobian used in (5.7) can then be modified as follows:

$[J_W]_{New} = [J_W]_{Old} \begin{bmatrix} \dfrac{1}{L_5} & -\dfrac{L_1}{2\,L_5} \\ \dfrac{1}{L_5} & \dfrac{L_1}{2\,L_5} \end{bmatrix}$  (5.52)

The rest of the derivation can be carried out in the same way as was done earlier in this chapter to combine the mobility of the wheelchair and the manipulation of the arm. The new state variables become:

$q = [q_A \quad q_C]^T = [\theta_1 \ \theta_2 \ \theta_3 \ \theta_4 \ \theta_5 \ \theta_6 \ \theta_7 \ X \ \phi]^T$  (5.53)

Care must be taken again here when using the angle "$\phi$" in the Jacobian instead of the linear distance "X": a conversion from distance to angle must be done before processing the Jacobian, and then another conversion back to distance after processing the Jacobian. The combination of the above two variables is sufficient to describe any forward and rotational motion of the wheelchair. Having these two state variables in vector "q" instead of the wheels' velocities gives a greater advantage in controlling the preferred rotation or translation of the wheelchair. The wheelchair's Jacobian in (5.6) is changed for the new state variables before augmenting it to the arm's Jacobian, and the

results are much better in terms of desired control. When these new state variables were used in simulation with the weight matrix, we were able to avoid the user-inconvenience problem that occurred in the previous method by assigning a heavier weight to the rotation of the wheelchair than to its translation, so that when a trajectory is followed, rotation occurs only as necessary. We will show examples of this in the simulation results chapter.

5.11. Trajectory Generation

Trajectory generation is an important step in performing autonomous tasks using any robotic device. Since this 9-DoF robotic system can be used in both autonomous motion and teleoperation, it was essential to explore different trajectory generation schemes. Four different trajectory generators were developed for this WMRA system, and the user is given the choice of any one of them for any particular task.

5.11.1. Generator of a Linear Trajectory

It is important to mention here that the transformation from one point to another in space is a non-linear process if it involves rotation. For this reason, the trajectory generator must take that into consideration when dealing with rotations. A typical homogeneous transformation matrix consists of three rotational vectors and one translational vector as follows:

$T = \begin{bmatrix} n_x & o_x & a_x & P_x \\ n_y & o_y & a_y & P_y \\ n_z & o_z & a_z & P_z \\ 0 & 0 & 0 & 1 \end{bmatrix}$  (5.54)

Where "n" is the projection of the unit "X" axis on the reference frame, "o" is the projection of the unit "Y" axis on the reference frame, "a" is the projection of the unit "Z" axis on the reference frame, and "P" is the coordinates of the frame's origin in the reference frame. Since vector "P" is a linear vector that involves only distances, it needs no modification. The modification is needed for the unit-axis projections on the reference frame, since they include non-linear sine and cosine functions of the three angles of rotation. The three Euler angles of rotation in the homogeneous transform can be represented by a single rotation about a new single axis in space [51]. Finding that axis of rotation and the new single rotational angle makes it easier to divide that single angle into angle steps along the trajectory. The single angle of rotation can be found from the homogeneous transform as [51]:

$\theta = \text{Atan2}\left(\sqrt{(o_z - a_y)^2 + (a_x - n_z)^2 + (n_y - o_x)^2},\; n_x + o_y + a_z - 1\right)$  (5.55)

Using the function (Atan2) instead of (Atan) gives a single value of the angle based on its position in the quadrant of rotation. Once this angle is found, we can find the new axis of rotation by defining its unit-vector projection on the reference frame. This can be done through three different conditions:

1- When the rotation angle is zero or very small: in this case there is no rotation, and the axis of rotation can be arbitrary. For simplicity, we can define it as:

$k = [1 \quad 0 \quad 0]^T$  (5.56)

2- When the rotation angle is less than 90°: in this case, the axis of rotation can be defined as:

$k = \left[\dfrac{o_z - a_y}{2\sin\theta} \quad \dfrac{a_x - n_z}{2\sin\theta} \quad \dfrac{n_y - o_x}{2\sin\theta}\right]^T$  (5.57)

3- When the rotation angle is more than 90°: in this case, the axis of rotation can be defined as:

$k = \begin{bmatrix} \text{sign}(o_z - a_y)\sqrt{\dfrac{n_x - \cos\theta}{1 - \cos\theta}} \\ \text{sign}(a_x - n_z)\sqrt{\dfrac{o_y - \cos\theta}{1 - \cos\theta}} \\ \text{sign}(n_y - o_x)\sqrt{\dfrac{a_z - \cos\theta}{1 - \cos\theta}} \end{bmatrix}$  (5.58)

Where "sign" indicates the sign of the difference inside the bracket. These values are not always accurate [51], and adjustments must be made based on which one of the projection components has the largest value, as follows:

a- If "$k_x$" is the largest, then the other two are:

$k_y = \dfrac{n_y + o_x}{2\,k_x\,(1 - \cos\theta)}$ and $k_z = \dfrac{a_x + n_z}{2\,k_x\,(1 - \cos\theta)}$  (5.59)

b- If "$k_y$" is the largest, then the other two are:

$k_x = \dfrac{n_y + o_x}{2\,k_y\,(1 - \cos\theta)}$ and $k_z = \dfrac{o_z + a_y}{2\,k_y\,(1 - \cos\theta)}$  (5.60)

c- If "$k_z$" is the largest, then the other two are:

$k_x = \dfrac{a_x + n_z}{2\,k_z\,(1 - \cos\theta)}$ and $k_y = \dfrac{o_z + a_y}{2\,k_z\,(1 - \cos\theta)}$  (5.61)

It is important to remember that the transformation matrix of the task is usually defined based on the global coordinate frame, and that the current end-effector's frame is also defined based on the global coordinate frame. The rotation from the end-effector's

current position "$R_i$" to the desired task position "$R_d$" should be found; that rotation is the one that should be processed for trajectory generation of the angle between the two, and it can be found as:

$R = R_i^T\,R_d$  (5.62)

Once we have the single angle of rotation and the axis of rotation, we can generate the trajectory along a straight line. The approach used to generate the trajectory utilizes a constant transformation change along the trajectory, which means that the trajectory is divided into "n" transformation matrices, with "$dT$" transformations between every two consecutive points in the trajectory, as shown in figure 5.3.

Figure 5.3: Linear Trajectory Generation. [The figure shows the initial end-effector transformation $T_i$, intermediate trajectory-point transformations $T_1 \ldots T_5$, and the desired transformation $T_d$, with equal transformation changes $dT$ in distance and angle, relative to the WMRA system's world coordinate frame.]

To find the constant transformation change along the trajectory, the four variables ($dx$, $dy$, $dz$, $d\theta$) that represent the constant distance change and the constant angle change must be defined as follows:

$[dx \quad dy \quad dz \quad d\theta] = \dfrac{1}{n}\,[Px_d - Px_i \quad Py_d - Py_i \quad Pz_d - Pz_i \quad \theta]$  (5.63)

From these values, we can now construct the rotation change along the trajectory using the following equation:

$dR = \begin{bmatrix} k_x^2 v + \cos d\theta & k_x k_y v - k_z \sin d\theta & k_x k_z v + k_y \sin d\theta \\ k_x k_y v + k_z \sin d\theta & k_y^2 v + \cos d\theta & k_y k_z v - k_x \sin d\theta \\ k_x k_z v - k_y \sin d\theta & k_y k_z v + k_x \sin d\theta & k_z^2 v + \cos d\theta \end{bmatrix}, \quad v = 1 - \cos d\theta$  (5.64)

Using (5.63) and (5.64), we can now find the transformation change between any two consecutive points along the trajectory as follows:

$dT = \begin{bmatrix} dR & dP \\ 0\ 0\ 0 & 1 \end{bmatrix}, \quad dP = [dx \quad dy \quad dz]^T$  (5.65)

To find the transformation matrix of any point along the trajectory line, the following equation is used:

$T_t = T_i\,(dT)^t, \quad t = 1 \ldots n$  (5.66)

Where "$T_t$" is the trajectory-point transformation and "$T_i$" is the initial transformation matrix of the end-effector. The above scheme was coded into a program in the form of a function for trajectory generation and used for WMRA autonomous control.

5.11.2. Generator of a Polynomial Trajectory

When the robotic arm starts moving from rest, it is impossible to move it from zero to full speed in virtually no time. When the linear trajectory was generated,

simulation revealed that the joint-space variables are commanded to move at infinite accelerations at the beginning of the simulation in order to reach the desired velocity in no time. To take care of this issue, a polynomial trajectory is introduced so that when the arm starts from rest, the trajectory points are very close to each other; the arm then reaches a maximum speed and ramps back down to zero velocity when it reaches the destination, as shown in figure 5.4.

Figure 5.4: Polynomial Function of 3rd Order for Variable Ramp with Time.

The governing equation of such a polynomial [49] can be written for any variable that needs to be ramped as follows:

$X(t) = X_0 + \dfrac{3\,(X_f - X_0)}{t_f^2}\,t^2 - \dfrac{2\,(X_f - X_0)}{t_f^3}\,t^3$  (5.67)

The above equation was implemented in the trajectory generator, and all four variables from (5.63) were divided into non-linear segments. The results were satisfactory, since the transformation change is no longer constant but rather varies with time. This was added to the program as a second choice, and the previous linear option was also kept as the first choice.
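The ramp in (5.67) can be checked numerically: its time derivative vanishes at both ends, which is exactly the property that removes the infinite-acceleration commands. The minimal sketch below is an illustration, not the dissertation's C++/Matlab code; the function names and sample values are ours.

```python
# Sketch of the 3rd-order polynomial ramp in (5.67):
#   X(t) = X0 + 3(Xf - X0)/tf^2 * t^2 - 2(Xf - X0)/tf^3 * t^3
# It starts and ends at rest: dX/dt = 0 at t = 0 and t = tf.

def cubic_ramp(x0, xf, tf, t):
    """Position along the ramp at time t in [0, tf]."""
    d = xf - x0
    return x0 + (3.0 * d / tf**2) * t**2 - (2.0 * d / tf**3) * t**3

def cubic_ramp_rate(x0, xf, tf, t):
    """Time derivative of the ramp (velocity along the variable)."""
    d = xf - x0
    return (6.0 * d / tf**2) * t - (6.0 * d / tf**3) * t**2

# Sampling the ramp gives trajectory steps that are dense near the endpoints
# and widest at mid-trajectory, where the speed peaks at 1.5x the average.
steps = [cubic_ramp(0.0, 10.0, 5.0, 0.5 * i) for i in range(11)]
```

Applying the same ramp to each of the four variables in (5.63) reproduces the variable (non-constant) transformation change described above.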

5.11.3. Generator of a Polynomial Trajectory with Parabolic Blending Factor

When using the polynomial function to generate a non-linear trajectory, the efficiency of the trajectory-following task was not acceptable. The reason is that the desired velocity is reached only at the mid-point of the trajectory, and ramping that velocity up and down takes a very long time. To overcome this problem, a polynomial blending procedure was adopted [49]. The blending factor accelerates the velocity at the beginning of the trajectory, then sets the acceleration to zero throughout the major part of the trajectory-following procedure once the desired velocity is reached, and finally decelerates the velocity back down to zero at the end of the trajectory-following task, as shown in figure 5.4 with a blending factor of "5". Since the middle segment is linear, we use the linear function definition to define that segment. The beginning and ending segments of the trajectory are assumed to have the same duration and the same constant acceleration with opposite signs. We first begin by defining the blending factor "b"; in our case, we chose 5. Then we define the acceleration during blending as:

$\ddot{X}_b = \dfrac{4\,b\,(X_f - X_i)}{t_f^2}$  (5.68)

Where "$t_f$" is the time at which the trajectory-following task is completed. Then we define the time when blending ends as:

$t_b = \dfrac{t_f}{2} - \dfrac{\sqrt{\ddot{X}_b^2\,t_f^2 - 4\,\ddot{X}_b\,(X_f - X_i)}}{2\,\ddot{X}_b}$  (5.69)

The velocity at that point would be:

$\dot{X}_b = \ddot{X}_b\,t_b$  (5.70)

And the variable's value at blending would be:

$X_b = X_i + 0.5\,\ddot{X}_b\,t_b^2$  (5.71)
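Equations (5.68)-(5.71), together with the linear middle segment, define a complete piecewise profile: parabolic blend, constant-velocity segment, parabolic blend. The sketch below is ours (names and sample values are assumptions, and the chapter's blending factor b = 5 is used), not the dissertation's implementation:

```python
import math

# Piecewise blended profile built from (5.68)-(5.71):
#   0..t_b        : X = Xi + 0.5*acc*t^2          (accelerating blend)
#   t_b..tf-t_b   : X = X_b + v_b*(t - t_b)        (linear segment)
#   tf-t_b..tf    : X = Xf - 0.5*acc*(tf - t)^2    (decelerating blend)

def blend_profile(xi, xf, tf, b=5.0):
    """Return (acc, t_b, v_b, x_b) for blending factor b > 1."""
    acc = 4.0 * b * (xf - xi) / tf**2                              # (5.68)
    t_b = tf / 2.0 - math.sqrt(acc**2 * tf**2
                               - 4.0 * acc * (xf - xi)) / (2.0 * acc)  # (5.69)
    v_b = acc * t_b                                                # (5.70)
    x_b = xi + 0.5 * acc * t_b**2                                  # (5.71)
    return acc, t_b, v_b, x_b

def blended_position(xi, xf, tf, t, b=5.0):
    """Evaluate the blended trajectory variable at time t in [0, tf]."""
    acc, t_b, v_b, x_b = blend_profile(xi, xf, tf, b)
    if t <= t_b:
        return xi + 0.5 * acc * t**2
    if t <= tf - t_b:
        return x_b + v_b * (t - t_b)
    return xf - 0.5 * acc * (tf - t)**2
```

The profile is continuous at both blend boundaries and symmetric about the trajectory mid-point, which matches the constant-velocity behavior described above.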

Substituting these values into the 3rd-degree polynomial gives a smooth trajectory as shown in figure 5.5. When the blending factor is "1", the constant-velocity region shrinks to a point, which returns us to the polynomial with no blending, as described in the previous subsection.

Figure 5.5: Polynomial Function of 3rd Order for Blended Variable Ramp with Time.

A slight modification was made to make the blending factor vary based on the length of the trajectory and the commanded velocity, so that the curve is not too steep and, at the same time, the acceleration remains reasonable during the blending regions. The second blending region is done in the opposite way of the first one. Figure 5.6 shows the trajectory generation with the polynomial function.

5.12. Control Reference Frames

At the beginning of solving the manipulation and mobility combination problem, the commanded motion was referenced to the ground frame. In reality, this does not always help the user in the most effective way. For instance, when the robotic arm is used in autonomous mode, it is better to reference the motion to the ground frame, but when

the user is controlling the WMRA system using teleoperation, he/she will be confused about which direction represents forward or a right turn and so forth. For that reason, three different control reference frames were programmed so that the user can choose the most convenient one based on the task at hand. The ground frame is most suited for autonomous operation with preset tasks, the wheelchair's frame is most suited for wheelchair motion for the most part, and the end-effector's frame is most suited for teleoperation using the end-effector. Refer to figure 5.1 for the Cartesian coordinate reference frames that we will be working with to transform the Cartesian task space and the Jacobian from one frame to the other.

Figure 5.6: Polynomial Trajectory Generation. [The figure shows the initial end-effector transformation $T_i$, intermediate trajectory-point transformations $T_1 \ldots T_9$, and the desired transformation $T_d$, with transformation changes of varying distances and angles, relative to the WMRA system's world coordinate frame.]

5.12.1. Ground Reference Frame

This is the default option, and it was used for reference in all the previously generated equations. The commanded Cartesian positions expressed in a homogeneous

transformation matrix were based on the ground coordinate frame, and the Jacobian was also based on the ground frame. If the user is operating in autonomous mode, the trajectory will also be generated based on the ground frame. If the user uses this option, it may cause some inconvenience in the sense that if the wheelchair rotates 180° and the user tries to move forward in his/her mind, the actual motion will be backwards, since the positive X-axis of the ground reference frame is stationary and will be pointing backwards from the wheelchair's perspective.

5.12.2. Wheelchair Reference Frame

In this option, the user would like to use the wheelchair's frame as the reference frame for the motion of the WMRA system. Two transformations have to be made: one that transforms the commanded Cartesian positions from the wheelchair's reference frame to the ground's reference frame, which would be:

${}^{G}_{d}T = {}^{G}_{W}T\ {}^{W}_{d}T$  (5.72)

And the other transformation is to transform the Jacobian from the ground frame to the wheelchair's frame as follows:

${}^{W}J_{W} = \begin{bmatrix} {}^{G}_{W}R^{T} & 0 \\ 0 & {}^{G}_{W}R^{T} \end{bmatrix} {}^{G}J_{W}$ and ${}^{W}J_{A} = \begin{bmatrix} {}^{G}_{W}R^{T} & 0 \\ 0 & {}^{G}_{W}R^{T} \end{bmatrix} {}^{G}J_{A}$  (5.73)

When the user is using the WMRA system to perform pre-set tasks of activities of daily living in autonomous mode, a third transformation is necessary to transform the trajectory generation from the wheelchair's reference frame to the ground's reference frame as follows:

$T_{t,new} = {}^{G}_{W}T_{current}\,\left({}^{G}_{W}T_{initial}\right)^{-1} T_{t,old}$  (5.74)

This gives the user the feeling of controlling the WMRA system based on where the wheelchair moves. A very good use for this option is when the user is controlling the wheelchair alone, without autonomous motion and without using the robotic arm.

5.12.3. End-Effector Reference Frame

In this option, the user uses the end-effector's frame as the reference frame for the motion of the WMRA system. Two transformations have to be made here as well: one that transforms the commanded Cartesian positions from the end-effector's reference frame to the ground's reference frame, which would be:

${}^{G}_{d}T = {}^{G}_{7}T\ {}^{7}_{d}T$  (5.75)

And the other transformation is to transform the Jacobian from the ground frame to the end-effector's frame as follows:

${}^{7}J_{W} = \begin{bmatrix} {}^{G}_{7}R^{T} & 0 \\ 0 & {}^{G}_{7}R^{T} \end{bmatrix} {}^{G}J_{W}$ and ${}^{7}J_{A} = \begin{bmatrix} {}^{G}_{7}R^{T} & 0 \\ 0 & {}^{G}_{7}R^{T} \end{bmatrix} {}^{G}J_{A}$  (5.76)

When the user is using the WMRA system to perform pre-set tasks of activities of daily living in autonomous mode, a third transformation is necessary to transform the trajectory generation from the end-effector's reference frame to the ground's reference frame as follows:

$T_{t,new} = {}^{G}_{7}T_{current}\,\left({}^{G}_{7}T_{initial}\right)^{-1} T_{t,old}$  (5.77)

This gives the user the feeling of controlling the WMRA system based on where the end-effector is pointing. This option is a very good choice when the user controls the WMRA to do activities of daily living using the end-effector.
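Each of the frame choices above reduces to composing and inverting 4x4 homogeneous transforms, as in (5.72) and (5.75). The sketch below illustrates those two operations with plain nested lists; the helper names and sample poses are ours, not taken from the WMRA code:

```python
import math

# Compose and invert homogeneous transforms of the form [R p; 0 0 0 1].

def matmul4(a, b):
    """Compose two 4x4 homogeneous transforms: returns a * b."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def inverse4(t):
    """Invert a homogeneous transform: [R p; 0 1]^-1 = [R^T  -R^T p; 0 1]."""
    r_t = [[t[j][i] for j in range(3)] for i in range(3)]          # R transpose
    p = [-sum(r_t[i][k] * t[k][3] for k in range(3)) for i in range(3)]
    inv = [r_t[i] + [p[i]] for i in range(3)]
    inv.append([0.0, 0.0, 0.0, 1.0])
    return inv

def planar_pose(phi, x=0.0, y=0.0):
    """Example planar pose: rotation phi about Z with translation (x, y)."""
    c, s = math.cos(phi), math.sin(phi)
    return [[c, -s, 0.0, x], [s, c, 0.0, y],
            [0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 0.0, 1.0]]

# As in (5.72): a pose commanded in the wheelchair frame, re-expressed in
# the ground frame by left-multiplying with the ground-to-wheelchair pose.
T_GW = planar_pose(math.pi / 2.0, 1.0, 0.0)   # wheelchair pose in ground frame
T_Wd = planar_pose(0.0, 2.0, 0.0)             # desired pose, wheelchair frame
T_Gd = matmul4(T_GW, T_Wd)                    # desired pose, ground frame
```

The same two helpers cover (5.74) and (5.77), where the inverse of the initial base pose is composed with the current one to re-anchor a previously generated trajectory point.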

5.13. Summary

The main aim of this chapter was to show the theory of combining the mobility of the 2-DoF wheelchair and the manipulation of the 7-DoF robotic arm to form a single control structure for the 9-DoF wheelchair-mounted robotic arm. We derived the combined forward kinematics of the system and showed the relations of the two subsystems with each other and with the ground. The new WMRA system, with 3 degrees of redundancy, was controlled using resolved-rate control after augmenting the Jacobian to include both subsystems. Several methods of inversion were implemented to find the inverse of the Jacobian and solve the inverse kinematic problem. Optimization was then done using several techniques, and the control algorithm was designed to use any optimization method or criterion function.

Two different sets of state variables were implemented in the WMRA system to reduce the unintended motion of the wheelchair while executing a task with a long trajectory. It was found that using the polar-coordinate type variables on the mobility side of the problem gave more efficient results when used with the robotic arm. Trajectory generation was done using different linear and polynomial functions, with or without parabolic blending. The control reference frame was also shown in three different bases of reference, and each one of them was useful in different kinds of setups.
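At the velocity level, the polar-coordinate state change summarized above comes down to the wheel mapping of (5.50). A minimal numeric sketch (variable names and the sample wheel geometry are assumptions, not the WMRA hardware values):

```python
# Map wheelchair polar-state rates (x_dot, phi_dot) to wheel rates per (5.50):
#   theta_l_dot = x_dot/L5 - L1*phi_dot/(2*L5)
#   theta_r_dot = x_dot/L5 + L1*phi_dot/(2*L5)

L1 = 0.56   # assumed effective wheel-to-wheel distance, meters
L5 = 0.17   # assumed drive-wheel radius, meters

def wheel_rates(x_dot, phi_dot):
    """Left/right wheel angular rates for commanded translation and rotation."""
    theta_l_dot = x_dot / L5 - L1 * phi_dot / (2.0 * L5)
    theta_r_dot = x_dot / L5 + L1 * phi_dot / (2.0 * L5)
    return theta_l_dot, theta_r_dot

# Pure translation drives both wheels equally; pure rotation drives them
# equally and oppositely. Weighting (X, phi) instead of the two wheels
# therefore penalizes rotation directly, which is what removes the
# unnecessary turning seen with the wheel-based state variables.
```

This is why a heavier weight on the rotation variable suppresses wheelchair turning without distinguishing between the two wheels.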

Chapter 6: User Interface Options

6.1. Introduction

One of the important aspects of controlling robotic devices is the user interface, especially if the user is a person with disabilities. Even if the design of the robot is good and the control algorithm very sophisticated, an insufficient user interface means the user might not utilize the robot to its full capabilities due to lack of control. In this chapter, we will discuss the clinical tests done on two WMRA user interfaces that are used in commercial WMRA systems. Then we will discuss the user interface devices included in the program software for this new WMRA, as well as some other possibilities that can be added later as the user prefers.

6.2. User Interface Clinical Testing

Two different user interfaces were tested on two different commercial wheelchair-mounted robotic arms, Manus and Raptor. These two user interfaces are the double-axis joystick and a keypad. A test procedure was put in place to do the testing, and human subjects with disabilities were asked to perform these tasks using the chosen user interfaces. Cognitive load was addressed based on the different user interfaces for different users and the type of control they were using (Cartesian or joint control). The

effectiveness of the two commercial WMRAs based on these aspects and the execution times for different ADL tasks were addressed.

6.2.1. Representative ADL Tasks Used for the Clinical Study

In WMRA applications to perform tasks of daily living, the user-specific needs should be taken into consideration to make a proper selection of a WMRA [57]. This criterion is based on the user's age, size, weight, disability and prognosis. Other characteristics should also be addressed, such as cosmetic features, cost and the payload capacity needed. In this clinical study, a representative population of individuals with different functional limitations who use a personal aid to do their ADL tasks were observed and interviewed while performing reach and manipulation tasks associated with their ADL, using Manus and Raptor on separate occasions. Their opinions on the ease of use and comfort-related aspects were obtained. Based on the uses of WMRA devices for ADL tasks, a test bed was designed to evaluate the user interface and control of each WMRA. In order to assess the functional use, different ADL-related tasks, as shown in figure 6.1, were designed as follows:

1- Relocating an object on a level plane: This task consisted of moving the WMRA from the home position (H), picking up an object from quadrant-1 and positioning it in quadrant-2. Each quadrant was 8" deep and 11.5" wide. The quadrants were on a table surface 30" above the floor surface.

2- A pronation/supination function to simulate a pouring function: This task consisted of moving the WMRA from the home position (H), picking up a

water bottle from quadrant-1 and pouring the water into a cup in quadrant-2. Each quadrant was 11.5" deep and 8" wide.

3- Accessing a higher-level cabinet: This task consisted of moving the WMRA from the home position (H), picking up the object from the surface of a shelf (24") above the surface of the table and placing it on the table surface.

4- Picking up an object from the floor: This task consisted of moving the WMRA from the home position (H) to the floor (1), picking up the object and placing it on the table surface (2).

Figure 6.1: Four Different ADL Tasks. [Panels (a)-(d) show the home position (H) and the numbered pick and place locations for each task.]

Each task was performed three times and the time was recorded. The Raptor was tested with the joystick interface and the Manus with the joystick and the keypad interfaces. The input device for Raptor consists of an eight-way joystick. An additional set of switches controlled the opening and closing of the gripper. The four-way joystick of Manus controls the arm and the gripper using four different menus in the Cartesian control mode. The user accesses the menus by a quick tap of the joystick in either direction. The keypad interface consists of 16 buttons that the user can activate for control of the WMRA. Both the keypad and joystick systems offered a visual display to the user indicating the menu and function.

6.2.2. The Tested User Interfaces

Three user interface devices were tested in this clinical study: two different joysticks and one keypad. These devices were supplied by the manufacturers of these robotic arms, and neither arm had a flexible robotic system able to take third-party user interfaces. Manus uses a two-dimensional, four-way joystick, as shown in figure 6.2, that is easy to move in all directions within its circular range of motion.

Figure 6.2: Four-Way Joystick for Manus.

Manus was also equipped with another user interface that uses a 4x4 matrix of buttons on a keypad, as shown in figure 6.3. This device works independently of the joystick, and it has a clear display of button functions.

Figure 6.3: Eight-Button Keypad for Manus.

The third user interface came with Raptor, which is a two-dimensional, eight-way joystick, as shown in figure 6.4. That joystick can travel only along the eight directions to control the arm, and if the commanded direction does not match one of these eight ways, the joystick does not move.

Figure 6.4: Eight-Way Joystick for Raptor.

6.2.3. Population of the Chosen Users with Disabilities

In this clinical study, two individuals with disabilities (C5-6 quadriplegia) of similar size and weight, who have been power wheelchair users for 25 years, were selected to test the evaluation test bed. The home position for each WMRA was chosen as the stowed position in which the user would normally place the arm when travelling. After each user was trained on the use of the WMRA device, they were timed on the performance of each task. Errors were noted if the object was dropped, placed 6" beyond the destination, or the task was incomplete. In the second phase, cognitive load was added to determine its effect on the time. This was done by asking the subjects a series of questions through a telephone headset as they performed each task.

6.2.4. Clinical Test Results on User Interfaces

In the tests conducted in this study, the users were active power wheelchair users, and they favoured a joystick interface, but did not like using a two-dimensional control for three-dimensional output. A space-ball or a glove with voice recognition and macro controls would be far more efficient. The tests showed that Raptor's 2D, 8-way joystick control interface was the easiest to understand and learn. However, the users found it difficult to activate the secondary switch for opening and closing the gripper. The Manus with the joystick was the most difficult to learn, and errors occurred when the user used the joystick to access the menu structure. The keypad offered direct control and was the most efficient. However, the users had difficulty with the size and shape of the buttons on the keypad, as shown in figure 6.5.

Figure 6.5: Clinical Testing of the Keypad by a Power Wheelchair User.

In moving the object in the same plane, the users had difficulty picking up objects from quadrant 1 and took an average of 180 seconds to pick up the object using Raptor, as

shown in figure 6.6. Once the object was picked up, positioning it in quadrant 2 was done in 15-30 seconds, and the return to home occurred in 20 seconds or less. The same task took about half the time when Manus was used with the keypad. Interestingly, cognitive loading did not affect the initial phase of the task (H-1), but the time trebled for the remainder of the task. The users had difficulty with diagonal movement of the arms.

Figure 6.6: Clinical Testing of the Joystick by a Power Wheelchair User.

The pouring task was the most difficult, and the operation of tilting the bottle often caused the water to spill outside the receiving cup when Raptor was used. The model presented was a useful test to evaluate WMRAs. Both the degrees of freedom and the control interface are critical for efficient WMRA use. Performance was best in the case of the Manus with the keypad, where sufficient degrees of freedom existed with the least complicated user interface control. However, the performance can be greatly

enhanced by a more intuitive control with less cognitive load. Other user interfaces may give these two arms better usability when used by people with disabilities.

6.3. The New WMRA User Interfaces Used

One of the most difficult situations that can affect the outcome of the use of WMRA systems is the existence of two individual user interfaces to individually control the power wheelchair and the robotic arm. In the new WMRA system, since both the wheelchair and the arm are controlled by the same control algorithm, a single user interface can be used to perform ADL tasks without the need for user-coordinated motion through two different interfaces. In the program structure, flexibility was one of the objectives in the design of the user interfaces, so that a wider range of these interfaces can be used based on the user's abilities and preference.

6.3.1. Six-Axis, Twelve-Way SpaceBall

This user interface produces a three-dimensional motion that corresponds to the six Cartesian space variables used in the WMRA: three translational directions with their positive and negative values, and three rotational directions with their positive and negative values. Figure 6.7 shows the SpaceBall device that was programmed and used in the control software implementing the control algorithm discussed in this work. In addition to the SpaceBall's main functionality, twelve fully programmable buttons were added for any sub-commands that might be used in the control of the WMRA system. The problem with this device is that it is stiff and might be hard to move for people with disabilities.

Figure 6.7: Twelve-Way SpaceBall.

6.3.2. Computer Keyboard and Mouse

Another user interface is the computer keyboard and mouse, shown in figure 6.8, where the user might prefer the use of these devices. This option was programmed into the control algorithm in case a computer is equipped with special software, such as speech recognition software, that the user might be using for other functions.

Figure 6.8: A Keyboard and a Mouse.

6.3.3. Touch Screen on a Tablet PC

This is one of the main user interfaces that can be used in this application, since a tablet PC is already installed as part of the control hardware. The implementation of the control structure included this option in both simulation and actual WMRA motion. The touch screen used is that of a Fujitsu Lifebook tablet PC equipped with a 12-inch active digitizer, as shown in figure 6.9.

Figure 6.9: A 12-Inch Touch Screen of a Tablet PC.

This user interface can be used with the newly developed graphical user interface (GUI) program shown in figure 6.10, where the user can touch any of the directional buttons to activate the proper Cartesian directional motion of the WMRA system.

Figure 6.10: GUI Screen Used for the Touch Screen.

6.4. The Brain-Computer Interface (BCI) Using P300 EEG Brain Signals

Many people with severe motor disabilities need augmentative communication technology to enable them to control different devices independently. Those who are totally paralyzed, or "locked-in," cannot use conventional augmentative technologies, all of which require some measure of muscle control. Over the past two decades, a variety of studies have evaluated the possibility that brain signals recorded from the scalp or from within the brain could provide new augmentative technology that does not require muscle control; see [58] for a comprehensive review. These BCI systems measure specific features of brain activity and translate them into device control signals, as shown in figure 6.11.

Figure 6.11: Basic Design and Operation of the BCI System.

6.4.1. The P300 EEG Signal

The P300 is a neural evoked-potential component of the electroencephalogram, or EEG [59]. This event-related potential (ERP) appears as a positive deflection of the EEG voltage at approximately 300 ms, and it dominates at parietal electrode sites. The P300 is

understood to follow unexpected sensory stimuli, or stimuli that provide information useful to the subject's task. The P300 peaks in the vicinity of 300 milliseconds only for very simple decisions [59]. More generally, its latency appears to reflect the amount of time necessary to come to a decision about the stimulus. The P300 also has the useful property of being larger for rare stimuli, especially if they are targets. The amplitude of the P300 therefore gives information about how the person is categorizing the stimuli and how rare they are subjectively considered to be. The P300 is only seen when the person is actively keeping track of the stimulus, so it also gives information about what they are paying attention to, which makes it useful for BCI applications. A further parameter is the method of feedback used, and this is shown in studies of P300 signals. Patterns of P300 waves are generated involuntarily (stimulus-feedback) when people see something they recognize, and may allow BCIs to decode categories of thoughts without training patients first.

6.4.2. The Use of the BCI

The features used in studies to date include slow cortical potentials, P300 evoked potentials, sensorimotor rhythms recorded from the scalp, event-related potentials recorded on the cortex, and neuronal action potentials recorded within the cortex. These studies show that non-muscular communication and control is possible and might serve useful purposes for those who cannot use conventional technologies. For people who are locked-in (e.g., by end-stage amyotrophic lateral sclerosis, brainstem stroke, or severe polyneuropathy) or lack any useful muscle control (e.g., due to severe cerebral palsy), a BCI system, as shown in figure 6.12, could give the ability to answer simple questions

PAGE 154

quickly, control the environment, perform slow word processing, or even operate a neuroprosthesis or orthosis [58]. For easier, non-invasive use of this neuro-imaging technology, the user wears a head mask fitted with several electrodes that measure the P300 EEG signals from the activity of the brain as shown in figure 6.12.

Figure 6.12: The Non-Invasive BCI Device.

6.4.3. The BCI-2000 Interface to the New 9-DoF WMRA System

In collaboration with the Department of Psychology at the University of South Florida, we were able to develop a new user interface that uses the portable BCI-2000 device to control the new WMRA system, even for people who are paralyzed from the neck down. The screen shown in figure 6.13 was developed to give the user the proper perspective of what to control and, at the same time, to serve as the user feedback for the selected image. The BCI-2000 scans the rows and columns of the screen choices shown in figure 6.13 at high frequency so that one row or one column is shown at a time. The user is asked to look at the symbol that he/she would like to use to control the WMRA system and count the number of times he/she saw that symbol. Every time the user counts one more view of the symbol, the P300 EEG signal is recorded, and the corresponding row or column that was shown at that moment is also recorded. In about 15 seconds, the BCI-2000 gives the selected row and column of the matrix shown on the screen. Once this value is received by the WMRA control program, it translates it into a Cartesian velocity in the proper direction and executes the algorithm to move the arm.

Figure 6.13: The BCI-2000 Selection Screen for WMRA Control.

6.4.4. Testing of the BCI-2000 with the WMRA Control

Before the BCI-2000 was tested to control the robotic arm, a volunteer human subject was properly trained to use the device. The BCI-2000 was also trained to be optimized for that particular human subject, and it showed high accuracy of the selected choice (ranging from 92% to 100%). These gains were recorded to be used for the actual test. During the testing phase, a successful control with high accuracy of the motion response was apparent. A few potential problems were noticed, as follows:
1. Every full scan of a single user input takes about 15 seconds, and that might delay the response of the WMRA system when the human user wishes to change direction. This 15-second delay may cause problems in case the operator needs to stop the WMRA system in a dangerous situation, such as approaching stairs.

2. After an extended period of using the BCI-2000 system, the user begins to fatigue due to the concentration on the screen required when counting the appearances of the chosen symbol. This tiredness on the user's side might be a potential problem.

3. In case a wrong selection is made by the BCI-2000, the user will be frustrated by having to return to his/her original choice.

4. When the user is constantly looking at the screen and concentrating on the chosen symbol, he/she will not be looking at where the WMRA is going, and that poses some danger to the user.

Despite the above noted problems, a successful interface with good potential for a novel application was developed.

6.5. Expandability of User Interfaces

During the programming and implementation of the designed control algorithm, the modularity and flexibility of the WMRA system were taken into account. For this purpose, all mentioned user interfaces are converted into a corresponding vector that is interfaced to the main program through a single function. Changing the user interface or adding other interfaces is very easy in this context, since the output of any new user interface can be reformatted to the proper vector format in a new function that directly interfaces to the main program and is used as a new interface selection. Other possible interface selections can be added, including the following devices.

6.5.1. Omni Phantom Haptic Device

The force-feedback-enabled Phantom device from SensAble Technologies shown in figure 6.14 can be used as one of the user interface devices. It carries a stylus mounted on a six-joint mechanism with encoders and force transducers. The Cartesian coordinate velocities of the tip of the stylus can be mapped into the commanded Cartesian velocities of the WMRA system.

Figure 6.14: The Phantom Omni Device from SensAble Technologies.

The Phantom allows users to actually feel virtual objects if integrated with a sensory suite. The Phantom contains three motors, which control the x, y, and z forces exerted on the user's fingertip. Mounted on each motor is an optical encoder to determine the x, y, and z position of the user's fingertip. The torque from the motors is transmitted
through a proprietary transmission cable to a stiff, lightweight linkage. Incorporated at the end of this linkage is a passive 3-degree-of-freedom set of gimbals attached to a thimble [60]. The passive gimbals allow the thimble to rotate so that a user's fingertip may assume any comfortable orientation.

6.5.2. Sip n' Puff Device

The Sip n' Puff is a term used to describe a dual-switch system that utilizes pneumatic switches. A single piece of tubing, accessible to the user, controls both switches as shown in figure 6.15. A slight pressure (puff) operates one switch, while a slight vacuum (sip) operates the other, and the proper signal to the controlled device is sent through an RS232 serial port. This device is widely used for control in assistive technology applications. A disadvantage of this device is the fact that it acts as an on/off switch, which means that controlling functions that need many input choices will be very complicated for the user.

Figure 6.15: The Sip and Puff Input Device.
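To illustrate the dual-switch behavior described above, the sketch below classifies pressure samples into sip/puff events and maps them onto the common Cartesian-velocity vector that all WMRA user interfaces are reduced to (section 6.5). The thresholds, pressure units, and the event-to-direction mapping are illustrative assumptions, not the values used on the actual system.

```python
# Illustrative sip/puff classifier. Pressure samples are in arbitrary units:
# positive = puff (slight pressure), negative = sip (slight vacuum).
PUFF_THRESHOLD = 0.5
SIP_THRESHOLD = -0.5

def classify(sample):
    """Classify one pressure sample as 'puff', 'sip', or 'neutral'."""
    if sample > PUFF_THRESHOLD:
        return "puff"
    if sample < SIP_THRESHOLD:
        return "sip"
    return "neutral"

def to_velocity_vector(event, speed=50.0):
    """Map an event to the common 6-element vector [vx, vy, vz, wx, wy, wz].

    Hypothetical mapping: puff drives the gripper forward along x (mm/s),
    sip drives it backward; neutral commands zero velocity.
    """
    if event == "puff":
        return [speed, 0.0, 0.0, 0.0, 0.0, 0.0]
    if event == "sip":
        return [-speed, 0.0, 0.0, 0.0, 0.0, 0.0]
    return [0.0] * 6

# Example: a short stream of pressure samples from the pneumatic switch.
samples = [0.0, 0.8, 0.1, -0.9, 0.0]
events = [classify(s) for s in samples]
# events -> ['neutral', 'puff', 'neutral', 'sip', 'neutral']
```

Because the device is binary, a practical interface would cycle through directions with one switch and activate with the other; the point here is only that its output reduces to the same vector format as every other interface.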
6.5.3. Head and Foot Switches

Head and foot switches such as the ones shown in figure 6.16 can also be used as a user interface to the WMRA system in case the user's foot or head muscles are the strongest controllable muscles in his/her body. Some of the foot switches allow the user to rest the activating body part on top of the switch between activations. The head switch can be activated with a light pressure exerted by the user's head.

Figure 6.16: Head and Foot Switches.

6.6. Summary

In this chapter, several user interface options were presented. A clinical study of the user interface devices used with the commercially available WMRA was presented, and a test procedure was described. The high-level control algorithm of the WMRA can be interfaced with many user interfaces, but the ones tested were the SpaceBall, the keyboard and mouse, the touch screen, and the brain-computer interface (BCI) that reads the P300 EEG signal from the brain to control the WMRA just by paying attention to a visual display. Other devices that can easily be adapted to the WMRA control include the Phantom Omni haptic device, the Sip n' Puff device, and the head and foot switches, among others.
Chapter 7: Testing in Simulation

7.1. Introduction

When new concepts in control are developed, it is important to validate them by means of simulation. In our case, the control methods that combine the manipulation and mobility of the newly developed WMRA were tested in simulation before being applied to the actual WMRA system. This step is very important for debugging and inspecting the methods before applying them to the actual arm so that no harm is done to the physical system in case of unexpected errors. In this chapter, we will show the different ways this theory was implemented and the different programming packages used for this purpose. Figure 7.1 shows a flowchart of the program procedure.

7.2. User Options to Control the WMRA System

In the control software, several options were made available to meet the modularity, re-configurability and flexibility requirements of this WMRA system. These options were programmed to work in any combination that makes sense for the control, as follows:

1. What to control: This option gives the user the choice of controlling both the position and orientation of the end-effector with its six Cartesian variables, or controlling the position only and ignoring the orientation to bring the Cartesian space variables down to three and use the resulting six-degree-of-redundancy system for different sub-task optimizations.

2. What to run: In this option, the user is given three choices: to run the robotic arm only while freezing the wheelchair, to run the wheelchair only while freezing the arm in a certain configuration specified by the user, or to run the combined WMRA system that uses both the robotic arm and the power wheelchair.

3. Control coordinate reference frame: This option gives the user the choice of controlling the end-effector with reference to the ground coordinate frame, the wheelchair coordinate frame, or the gripper's coordinate frame.

4. What kind of simulation to run: This option gives the user the choice of running the Virtual Reality simulation, the Matlab wire-frame simulation, both simulations together, or no simulation at all.

5. Run the actual WMRA: This option gives the user the choice of whether or not to run the actual WMRA system when running the control software.

6. Print diagnostic plots: This option allows the user to print out the various states of the system variables in terms of position, velocity and acceleration of the points of interest in the WMRA system, as well as the manipulability measure of both the arm and the WMRA system.

7. Optimization method: This option gives the user the choice of the eight optimization combination methods discussed in chapter five. It also allows the
user to select the joint limit avoidance and/or the joint limit and obstacle safety stop options in the control system.

8. Close all when completed: This option gives the user the choice of closing or keeping open the simulation windows, the diagnostic plots, and the WMRA control DLL library that connects to the actual WMRA.

9. User interface options: This option allows the user to choose autonomous operation using position control or velocity control of the end-effector. It also allows the user to choose teleoperation control using the SpaceBall, the BCI-2000 system, or the touch screen.

10. Trajectory generator: This option allows the user to choose the trajectory to be linear, polynomial, or polynomial with parabolic blending.

11. Where to start: This option gives the user the choice of starting the WMRA system at the ready position, the current position or a user-specified position.

12. Include pre-set task motion: This option gives the user the choice of initializing the system from its parking position to its ready position, and when the user is finished using the WMRA system, it gives the option to go back to the ready position and the park position, respectively.

The above user choices were adequate to allow the user to choose the most comfortable options based on his/her preference so that the WMRA could be used efficiently with as many user-specific needs as possible. When the user chooses to control the wheelchair only, the wheelchair motion is slow relative to the normal wheelchair velocities. If the user needs normal operation of the wheelchair, the control system can shut down, and the control switch can be switched to the standard wheelchair controller.
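The "polynomial with parabolic blending" choice of option 10 is the classic linear-segment-with-parabolic-blends (LSPB) profile: accelerate through a parabolic blend, cruise at constant velocity, and decelerate through a mirrored blend. The sketch below is illustrative only; the blend time and motion values are assumptions, not the parameters used by the WMRA trajectory generator.

```python
# LSPB trajectory sketch. q0/qf are the start/end values (e.g., a joint angle
# in degrees), tf is the total motion time, and tb is the blend time (tb < tf/2).

def lspb(q0, qf, tf, tb, t):
    """Position at time t along an LSPB profile from q0 to qf."""
    v = (qf - q0) / (tf - tb)   # cruise velocity of the linear segment
    a = v / tb                  # blend acceleration
    if t < tb:                  # initial parabolic blend (accelerate)
        return q0 + 0.5 * a * t * t
    if t <= tf - tb:            # linear, constant-velocity segment
        return q0 + v * (t - 0.5 * tb)
    return qf - 0.5 * a * (tf - t) ** 2   # final parabolic blend (decelerate)

# Example: a 0-to-90 degree motion over 4 s with 1 s blends.
midpoint = lspb(0.0, 90.0, 4.0, 1.0, 2.0)   # -> 45.0 (symmetric midpoint)
```

The profile is continuous in position and velocity at the blend boundaries, which is why it is preferred over a purely linear trajectory when commanding physical joints.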
7.3. Changing the Physical Dimensions and Constraints of the WMRA System

It was noted in previous programming experience that a program can be extremely difficult to modify by anyone other than the developer if any physical changes or modifications to the system occur. Since this is a project that could involve several changes, it is important to store a few files that describe the physical characteristics of the system, and to have each function or script programmed to read from these files so that any physical changes to the system can be easily accommodated in the control software. Three files were dedicated for this purpose, as follows:

1. Wheelchair dimensions: A file named "WMRA_WCD.m" was designed to carry the physical dimensions of the wheelchair that are used in the program, as well as the mounting location of the robotic arm on the wheelchair.

2. Robotic arm parameters: A file named "WMRA_DH.m" was designed to carry the D-H parameter table of the robotic arm.

3. Robotic arm joint limits: A file named "WMRA_Jlimit.m" was designed to carry the maximum and minimum joint limits of the robotic arm.

7.4. Programming Language Packages Used

In order to fulfill the need of implementing the program in simulation and on the physical arm, it was important to choose compatible programs whenever possible. For the physical arm, the communication protocols and functions that send the commands and receive the sensory information from the controller boards use C++ with certain DLL library functions. On the other hand, simulation is best done using Matlab 2006b from MathWorks because of its powerful toolboxes that include good packages for simulation.
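The role of the "WMRA_DH.m" parameter file described in section 7.3 can be illustrated by a forward-kinematics chain that consumes a D-H table row by row. The two-link table below is a hypothetical example, not the actual 7-DoF arm's parameters.

```python
import math

def dh_transform(theta, d, a, alpha):
    """Standard D-H homogeneous transform (4x4 nested lists) for one link."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0,      sa,       ca,      d],
            [0.0,     0.0,      0.0,    1.0]]

def mat_mul(A, B):
    """4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(dh_table):
    """Chain the link transforms of a D-H table, one row per joint."""
    T = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
    for row in dh_table:
        T = mat_mul(T, dh_transform(*row))
    return T

# Hypothetical planar two-link table: (theta, d, a, alpha) per row, lengths in mm.
table = [(math.pi / 2, 0.0, 100.0, 0.0),
         (0.0,         0.0, 100.0, 0.0)]
T = forward_kinematics(table)
# End-effector position is the last column: x = T[0][3], y = T[1][3]
```

Because the kinematics code only reads the table, swapping in a different arm amounts to editing the parameter file, which is exactly the modularity goal stated above.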
Figure 7.1: Program Flowchart. The procedure collects the physical data and user choices; initializes the system and timer; runs the trajectory generator (autonomous mode) or gets the velocities and port information (teleoperation mode); runs the forward kinematics procedures; runs the procedures for the Jacobians and augments them; calculates the Cartesian-space norm of errors; runs the resolved rate solution and optimization procedures; updates the joint space variables; sends the commands to the joints and wheels; runs the forward kinematics procedures again and updates the system variables; synchronizes the run time and the commanded velocities; displays the optional charts; and finally deactivates and resets the system.
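The main loop of the flowchart in figure 7.1 can be sketched as a resolved-rate iteration. A toy planar 2-link arm stands in for the 9-DoF WMRA (whose augmented Jacobian is 6x9); the unit link lengths, gain, and tolerance are illustrative assumptions, and the simple pseudoinverse here replaces the weighted, singularity-robust solver discussed in chapter five.

```python
import numpy as np

def fk_2link(q):
    """Forward kinematics: end-effector (x, y) for unit link lengths."""
    return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                     np.sin(q[0]) + np.sin(q[0] + q[1])])

def jacobian_2link(q):
    """Planar 2-link Jacobian."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-s1 - s12, -s12],
                     [ c1 + c12,  c12]])

def resolved_rate(q, x_desired, gain=0.5, tol=1e-4, max_steps=200):
    """Figure 7.1 loop: FK -> Cartesian error norm -> inverse solution -> update."""
    q = np.array(q, dtype=float)
    for _ in range(max_steps):
        x = fk_2link(q)                 # run the forward kinematics
        err = x_desired - x             # Cartesian-space error
        if np.linalg.norm(err) < tol:   # norm-of-errors check
            break
        J = jacobian_2link(q)           # the Jacobian
        q = q + np.linalg.pinv(J) @ (gain * err)   # resolved-rate update
    return q

q_final = resolved_rate([0.3, 0.5], np.array([1.0, 1.0]))
```

On the real system, each update would additionally be sent to the joint and wheel controllers and synchronized with the loop timer, as the flowchart shows.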
To be compatible with the goal of modularity of the physical system, a separate file that includes the physical characteristics of the robotic arm and the wheelchair was created, and everything in the program refers to that file to read the piece of information needed for its calculations. In this sense, the program can be easily modified for any other WMRA system with different physical characteristics by editing that file and changing the information from the current system to the new system.

7.4.1. Programs in C++ Programming Language

Since the PIC Servo SC controller boards were interfaced to the PC using DLL libraries whose functions are programmed in C++, it was clear that this programming language had to be used for communication with the PC. In terms of simulation, these are not needed, since we did not need to communicate with the actual WMRA to perform the simulation. However, for the SpaceBall to be implemented and integrated with the system, C++ programming was required, as its drivers were compatible with a C++ library. The program is designed to run the driver of the SpaceBall, collect the user inputs from the device, and send them to the Matlab environment as a vector variable that changes constantly as the user moves the SpaceBall.

7.4.2. Matlab Programming Environment

The main programming language used to implement the control system is Matlab, since it includes many libraries and simulation capabilities. The program was coded into Matlab code that includes the main script as well as several functions created for certain purposes. Each one of the functions was created in such a way that it was simple to understand and, at the same time, easy to modify for future changes. The main script runs in the Matlab command prompt and starts by asking the user questions and collecting the answers so that it runs according to the user's preferences. When the program runs in simulation mode, the user can choose between simulation in wire-frame graphics or in Virtual Reality graphics. In the wire-frame graphics, precise Cartesian coordinate lines are drawn and simulated through time for the ground, wheelchair, arm base and end-effector coordinate frames. Another coordinate frame was added for the case when autonomous operation is required, so that the desired destination coordinate frame can be shown. In this case, at the end of the simulation, the end-effector's coordinate frame coincides with the destination coordinate frame. This gives a precise representation of the simulation accuracy.

At the end of the simulation, optional plots can be displayed, including the positions, velocities and accelerations of the joint space variables, the Cartesian coordinates of both the end-effector and the wheelchair, and the manipulability measure throughout the simulation period. These plots are very useful for understanding the behavior and response of the system throughout the simulation when certain characteristics or methods are chosen over others. They also help in diagnosing any problems or potential problems that may appear when the control is implemented. Figure 7.2 shows a sample of the command prompts when the user uses the program from the common command line of Matlab to run the simulation in autonomous mode. Figure 7.3 shows the simulation window of the WMRA wire frame with the Cartesian coordinate frames attached and color-coded. The results of the simulation will be shown and discussed in the next chapter.
Figure 7.2: A Sample of the Command Prompts for Autonomous Operation Mode.

>> WMRA_Main
Choose what to control: For combined Wheelchair and Arm control, press "1", For Arm only control, press "2", For Wheelchair only control, press "3". 1
Choose whose frame to base the control on: For Ground Frame, press "1", For Wheelchair Frame, press "2", For Gripper Frame, press "3". 1
Choose the cartesian coordinates to be controlled: For Position and Orientation, press "1", For Position only, press "2". 1
Please enter the desired optimization method: (1= SR-I & WLN, 2= P-I & WLN, 3= SR-I & ENE, 4= P-I & ENE) 1
Do you want to include Joint Limit Avoidance? (1= Yes, 2= No) 1
Do you want to include Joint Limit/Velocity and Obstacle Safety Stop? (1= Yes, 2= No) 1
Choose the control mode: For position control, press "1", For velocity control, press "2", For SpaceBall control, press "3", For Psychology Mask control, press "4", For Touch Screen control, press "5". 1
Please enter the transformation matrix of the desired position and orientation from the control-based frame (e.g. [0 0 1 1455;-1 0 0 369;0 -1 0 999; 0 0 0 1]) [ 1 0 0 800 ; 0 1 0 -500 ; 0 0 1 350 ; 0 0 0 1 ]
Please enter the desired linear velocity of the gripper in mm/s (e.g. 50) 50
Choose the trajectory generation function: Press "1" for a Polynomial function with Blending, press "2" for a Polynomial function without Blending, or press "3" for a Linear function. 1
Choose animation type or no animation: For Virtual Reality Animation, press "1", For Matlab Graphics Animation, press "2", For BOTH Animations, press "3", For NO Animation, press "4". 2
Would you like to run the actual WMRA? For yes, press "1", For no, press "2". 2
Press "1" if you want to start at the "ready" position, or press "2" if you want to enter the initial joint angles. 1
Press "1" if you want to include "park" to "ready" motion, or press "2" if not. 1
Press "1" if you do NOT want to plot the simulation results, or press "2" if you do. 1
Simulation time is 7.460476 seconds.
Elapsed time is 7.513704 seconds.
Do you want to go back to the "ready" position? Press "1" for Yes, or press "2" for No. 1
Do you want to go back to the "parking" position? Press "1" for Yes, or press "2" for No. 1
Do you want to close all simulation windows and arm controls? Press "1" for Yes, or press "2" for No. 1
>>
Figure 7.3: Simulation Window of the WMRA System in Wire Frame.

7.4.3. Simulation with the Virtual Reality Toolbox

The same simulation discussed in the previous sub-section was also programmed and simulated using Virtual Reality simulation. SolidWorks models of each one of the link segments of the robotic arm were drawn, as well as the wheelchair model and the two driving wheels separately. All drawn models were then converted into VRML files that use the VRML language. A new VRML program was created to call each individual segment of the WMRA system in a hierarchy and relate the segments together using the variable positions of the joint space variables. In that environment, enhancements were made to make the background and the floor look realistic in simulation. The new VRML file created was then called during the Matlab simulation and updated with the new joint space variables so that the view of the WMRA changes as the simulation progresses.

Different view points were created to view the system in Virtual Reality. Unlike workstation robots, this WMRA is not stationary, and it eventually gets out of the simulation window if the wheelchair is driven too far. For this reason, several dynamic views were also developed to follow the wheelchair as it moves so that it stays within the viewing area of the window. These views can be changed during the simulation, and snapshots or videos can be recorded. Figure 7.4 shows a static view of the Virtual Reality program window that shows the WMRA in the ready position.

Figure 7.4: A Sample of the Virtual Reality Simulation Window.
The Virtual Reality model used for simulation was tested using several user interfaces, including the SpaceBall, the keyboard and mouse, the brain-computer interface (BCI-2000) and the touch screen interface. The program performed in a satisfactory way, with precise and fast simulations and no noticeable delays.

7.4.4. Graphical User Interface (GUI) Program

Using the main program in Matlab to control the robotic device was hard for a user with disabilities to accomplish because of the initial questions asked by the program to bring the control up to the user's preference. A new GUI program was created to ease this process and make it practical and user-friendly for persons with disabilities. The main program was integrated with a GUI with default values so that the user can store the default values in the main program and use it directly as the software opens. This feature dramatically reduces the burden on the user of filling out the initial options every time he/she wants to use the WMRA system. Figure 7.5 shows the graphical user interface with its default options. To make it even easier and less confusing to the user, different windows or buttons disappear if they do not apply to the user's selected option or when the next options do not apply to the currently chosen mode. Since the tablet PC is equipped with a touch screen, the user can easily tap the selections. When the touch-screen user interface control is selected, another screen appears with the functions and directions that the user can choose, as shown in figure 6.8. This screen accepts commands by touching the intended button, or by pressing the button with the mouse or the equipped touch pad.
Figure 7.5: The Graphical User Interface (GUI) Screen with the Defaults.

7.5. Comments on Interfacing Different Programs Together

When this program was created, we knew that communication problems would occur between software and hardware or between software and software. The first problem was integrating the SpaceBall and interfacing it with Matlab. DLL libraries written in C++ can be read and their functions used, but the problem comes when these functions use a different data structure than Matlab while compiling. This means that the functions are either unusable or very hard to use. In the case of the SpaceBall, a new C++ program was created to send the data to the Matlab environment and make it ready for use.

Another problem came when we were going to use the program to operate the actual WMRA system. Since it uses functions from complex DLL libraries, we had to recreate functions in C++ and compile them into DLL files in a data structure that is compatible with C++, and then use them in Matlab and call these functions to communicate with the PIC Servo SC controller boards used in controlling the WMRA. This works out well, except that sometimes the virtual link between Matlab and the DLL library fails, which results in an unresponsive WMRA when commanded to do a task. This problem can be taken care of if the program controlling the arm is separated from the program that simulates the arm. This way, the program that controls the arm can be rewritten in C++ so that fewer interfacing problems will appear.

The BCI-2000 user interface also uses a C++ program for processing and sending the data out. In this case, a networked TCP/IP port was dedicated to communicate between the BCI-2000 and the computer that is running the control algorithm, and Matlab interfaces with the TCP/IP port to get the data and use it in the control software.

7.6. Summary

In this chapter, a description of the simulation software was presented and discussed. Different programming languages and packages were used to create different applications and interface them together. The main program was written in Matlab, and two different graphic simulations were used. A wire-frame graphical simulation of the WMRA was created for precise inspection of the simulation and its results, and a Virtual Reality
simulation was created for its realistic look and appearance. Several plots can be shown to describe the system behavior during the simulation period.

The main program can be run in two different ways: one is through the common command line of Matlab, and the other through a graphical user interface (GUI). The GUI was more user-friendly and easier to use for people with disabilities. Several communication and interfacing problems were faced while programming the different parts of the WMRA system together with the control software. The solutions to these problems were presented.
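The dedicated TCP/IP link between the BCI-2000 machine and the control software described in section 7.5 can be sketched as a one-packet loopback. The newline-terminated, comma-separated ASCII packet format used here is an assumption for illustration only; the actual BCI-2000 wire format differs.

```python
import socket
import threading

def parse_velocity_packet(data):
    """Turn one comma-separated packet into the 6-element velocity vector."""
    return [float(v) for v in data.decode().strip().split(",")]

# Stand-in for the BCI-2000 side: accept one connection and send one packet.
server = socket.socket()
server.bind(("127.0.0.1", 0))          # ephemeral port on the loopback interface
server.listen(1)
port = server.getsockname()[1]

def serve_once():
    conn, _ = server.accept()
    conn.sendall(b"50,0,0,0,0,0\n")    # hypothetical "move +x at 50 mm/s" packet
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

# Control-software side: connect, read the packet, convert it to velocities.
client = socket.socket()
client.connect(("127.0.0.1", port))
vel = parse_velocity_packet(client.recv(64))
client.close()
t.join()
server.close()
# vel -> [50.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```

Decoupling the BCI processing from the control loop over a socket is what allows the two programs, written in different languages, to run on separate machines.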
Chapter 8: Simulation Results

8.1. Introduction

Simulation of many different cases to test the theory developed in chapters 3, 4, and 5 is important to validate the control algorithm and the methods used for control, especially if these algorithms are going to be used to control the actual WMRA system built at USF. In this chapter, simulations of these cases will be shown, and the effects of different control schemes and values will be discussed. Many plots of Cartesian space variables and joint space variables will be shown, covering the positions, velocities and accelerations of these variables throughout the simulation period. The effectiveness of the singularity avoidance schemes will be shown by plotting the manipulability measure of the robotic arm and the combined WMRA system. The control system of the 9-DoF WMRA system is implemented in simulation using Matlab 7.0.4 with the Virtual Reality toolbox installed on a PC running Windows XP.

8.2. Simulation Cases Tested

Several cases were tested in this simulation using the Weighted Least Norm solution control with the Singularity-Robust inverse of the Jacobian, since this was the most effective way of controlling the WMRA system. Five different values were tried for the diagonal elements of the weight matrix (W) to implement the control system and to verify its effectiveness. These values were expressed in the following five cases:

1. Case I: The weight matrix of the first case carried in its diagonal elements the same value "1" for all 9 variables. That means that all seven joints of the arm and the two wheelchair position and orientation variables have equal potential of motion.

2. Case II: In the second case, "W" carried "10" for each of the arm's seven joints, and "1" for the wheelchair's position and orientation variables, which means that the wheelchair's two variables are 10 times more likely to move than the arm's joints.

3. Case III: The third case carried weights of "1" for the arm's joints, and "100" for the wheelchair's two variables in "W", which means that the arm is 100 times more likely to move than the wheelchair.

4. Case IV: In the fourth case, "W" carried weights of "1" for the arm's seven joints and the wheelchair's orientation variable, and a weight of "100" for the wheelchair's position. This means that the forward or backward motion of the wheelchair is 100 times less likely than the motion of the rest of the system.

5. Case V: The last case was the opposite of the fourth case, where the orientation of the wheelchair took a weight of "100" and the other eight variables took a weight of "1". This means that the wheelchair's rotational motion is 100 times less likely to occur than the motion in the arm's joints and the wheelchair's translational motion.
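The effect of the weight matrix can be illustrated with a minimal sketch of the weighted least-norm solution combined with a singularity-robust (damped) inverse. A one-task, two-variable toy system stands in for the 6x9 augmented WMRA Jacobian, and the damping factor is an assumed illustrative value.

```python
import numpy as np

def wln_sr_inverse(J, xdot, w_diag, lam=0.01):
    """Weighted least-norm rate solution with a singularity-robust inverse:

        qdot = W^-1 J^T (J W^-1 J^T + lam^2 I)^-1 xdot

    Larger diagonal weights in W penalize motion of the corresponding variable.
    """
    W_inv = np.diag(1.0 / np.asarray(w_diag, dtype=float))
    JWJt = J @ W_inv @ J.T + lam**2 * np.eye(J.shape[0])
    return W_inv @ J.T @ np.linalg.solve(JWJt, xdot)

# Toy redundant system: one Cartesian rate produced by two "joints".
J = np.array([[1.0, 1.0]])
xdot = np.array([1.0])

equal = wln_sr_inverse(J, xdot, [1.0, 1.0])      # both variables share the motion
biased = wln_sr_inverse(J, xdot, [100.0, 1.0])   # variable 1 heavily penalized
# equal  -> approximately [0.5, 0.5]
# biased -> variable 2 does nearly all of the motion
```

This is exactly the behavior exploited in cases I through V: raising a diagonal entry of W makes the corresponding arm joint or wheelchair variable proportionally less likely to move.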
To show the effect of choosing the state variables of the wheelchair's nonholonomic motion in the planar Cartesian coordinates as the linear position and angular orientation, rather than as the two wheelchair wheel angles, two other cases were added for comparison of the WMRA system's behaviour when either method was used, as follows:

1. Case A: The state variables representing the wheelchair's motion were selected as the two angles of the wheelchair's driving wheels.

2. Case B: The state variables representing the wheelchair's motion were selected as the linear forward motion and the angular motion of the wheelchair in the planar Cartesian space.

Each one of these individual cases will be discussed, and the results will be shown to express the difference between these cases and the effectiveness of the methods and variables chosen.

8.3. Results and Discussion of the First Five Cases

The first five cases, dealing with different weight values in the weight matrix "W", will be discussed in this section. The simulation was tested by commanding the WMRA system to move the gripper's frame from its ready position, defined by the following homogeneous transformation matrix based on the ground frame:

T_r = [ 1 0 0 899 ; 0 1 0 131 ; 0 0 1 455 ; 0 0 0 1 ]   (8.1)

to the desired position defined by the following homogeneous transformation matrix based on the ground frame:

T_d = [ 0 0 -1 550 ; 0 1 0 970 ; 1 0 0 455 ; 0 0 0 1 ]   (8.2)

Figure 8.1 shows the initial pose of the WMRA system at the beginning of the simulation, when it was at the ready position. The end-effector's position and orientation in the Cartesian space were the same in all trials, since the trajectory was the same for all five cases tested. Figure 8.2 shows the end-effector's position and figure 8.3 shows the end-effector's orientation during the simulation as it moves from the initial pose to the commanded point in the workspace. The motions of the individual variables in the joint space were completely different for each one of the cases, depending on the selected weight for each variable, so that the desired behaviour of the WMRA system could be obtained.

Figure 8.1: The Initial Pose of the WMRA in Simulation.
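The mapping between the two state-variable choices of cases A and B is the standard differential-drive kinematics of the wheelchair. The sketch below uses assumed illustrative values for the wheel radius and the axle track, not the actual wheelchair's dimensions.

```python
# Differential-drive kinematics relating the wheel-angle rates (case A) to the
# planar linear/angular wheelchair velocity (case B).
r = 0.17   # driving wheel radius (m), assumed
L = 0.56   # distance between the driving wheels (m), assumed

def wheels_to_planar(omega_left, omega_right):
    """Case A -> case B: wheel angular rates (rad/s) to (v, omega)."""
    v = r * (omega_right + omega_left) / 2.0       # linear velocity (m/s)
    omega = r * (omega_right - omega_left) / L     # angular velocity (rad/s)
    return v, omega

def planar_to_wheels(v, omega):
    """Case B -> case A: the inverse mapping."""
    omega_right = (2.0 * v + omega * L) / (2.0 * r)
    omega_left = (2.0 * v - omega * L) / (2.0 * r)
    return omega_left, omega_right

# Equal wheel rates give pure translation; opposite rates give pure rotation.
v, w = wheels_to_planar(2.0, 2.0)   # pure forward motion, w = 0
```

Although the two parameterizations are algebraically equivalent, they enter the weight matrix and the Jacobian differently, which is why the two cases produce different optimization behavior.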
Figure 8.2: Position of the WMRA During Simulation (hand position x, y, z in mm versus time in seconds).

Figure 8.3: Orientation of the WMRA During Simulation (hand roll, pitch, yaw in degrees versus time in seconds).
158 8.3.1. WMRA Configurations in the Final Pose of the Simulation During simulation, each case behaved differently in terms of solved values of the joint space variables. Figures 8.4, 8.5, 8.6, 8.7 and 8.8 show the final poses of the WMRA system after the end-effector reached the desired destination for the five cases studied. Observing the figures, it was apparent from the first case compared to the others that all seven joints of the arm and the tw o wheelchair’s position and orientation variables had equal potential of motion as shown in figure 8.4. In the second case, the wheelchair’s two variables were 10 times more likely to move than the arm’s joints, and that is apparent in the results shown in figure 8.5. In the third case, the arm was 100 times more likely to move than the wheelchair, and that can be clearly seen in figure 8.6, where the wheelchair had a minimal motion and the arm did most of the motion. The beauty of this simulation comes apparent in the last two cases, where in the fourth case, the forward or backward motion of the wheelchair was 100 times less likely than the motion of the rest of the syst em, and figure 8.7 shows how the wheelchair’s forward motion was minimal. Figure 8.8 shows th e last case, which is the opposite of the fourth case, where the wheelchair’s rotational motion was 100 times less likely to occur than the motion in the arm’s joints a nd the wheelchair’s translational motion. These poses clearly show the property of combining the wheelchair’s motion and the robotic arm’s motion under the optimi zation and redundancy resolution schemes discussed in earlier chapters. It was also observed from r unning other tasks that took the WMRA system out of its reach in the vertical direction that this method was stabilized by ignoring some of the trajectory ’s orientation or position erro rs as needed so that the system doesn’t go out of control by produci ng high velocities in the joint domain.


Figure 8.4: Destination Pose for Case I, When W = [1, 1, 1, 1, 1, 1, 1, 1, 1].

Figure 8.5: Destination Pose for Case II, When W = [10, 10, 10, 10, 10, 10, 10, 1, 1].


Figure 8.6: Destination Pose for Case III, When W = [1, 1, 1, 1, 1, 1, 1, 100, 100].

Figure 8.7: Destination Pose for Case IV, When W = [1, 1, 1, 1, 1, 1, 1, 100, 1].


Figure 8.8: Destination Pose for Case V, When W = [1, 1, 1, 1, 1, 1, 1, 1, 100].

8.3.2. Displacements of the Joint Space Variables

The simulation program was designed to give different useful values and plots throughout the simulation process for observation and diagnosis of any potential problems that might occur during task execution, whether the physical arm is running or only the simulation. Among these plots are the joints' angular displacements and velocities. Figures 8.9 through 8.13 show the angular displacement versus time for the arm's seven joints throughout the simulation period for all five cases. The first case, in figure 8.9, shows the normal weights with no preference given to any of the nine variables. In the second case, shown in figure 8.10, when the arm was assigned large weights in the weight matrix, it was clear that the seven arm joints made only the minimal motion necessary for the destination to be reached. That end-effector destination was impossible


to reach by using the two wheelchair variables only. The last three cases, shown in figures 8.11, 8.12 and 8.13, show an easy arm motion compared to that of the wheelchair.

Figure 8.9: Arm's Joint Motion for Case I, When W = [1, 1, 1, 1, 1, 1, 1, 1, 1].

Figure 8.10: Arm's Joint Motion for Case II, When W = [10, 10, 10, 10, 10, 10, 10, 1, 1].


Figure 8.11: Arm's Joint Motion for Case III, When W = [1, 1, 1, 1, 1, 1, 1, 100, 100].

Figure 8.12: Arm's Joint Motion for Case IV, When W = [1, 1, 1, 1, 1, 1, 1, 100, 1].


Figure 8.13: Arm's Joint Motion for Case V, When W = [1, 1, 1, 1, 1, 1, 1, 1, 100].

Another plot given by the simulation program was the track distance driven by each of the two wheels of the wheelchair. These plots were particularly useful for observing the wheelchair's motion. Figures 8.14 through 8.18 show these distances driven through the simulation for all five cases. An important property of this optimization method was apparent during simulation, and can be seen in figure 8.14: the minimization of singularity. As the arm was moving to the destination and the left wheel was moving backwards, the wheel reversed its motion in the middle of the simulation period when the arm approached singularity, as seen in figure 8.21. The maximum wheelchair motion occurred in the second case, shown in figure 8.15, where the higher weight was assigned to the arm and the wheelchair was free to move. Figure 8.16 shows the opposite: the wheelchair moved the least among all cases, since the weight was assigned to the wheelchair's motion and the arm did most of the motion.


Figure 8.14: Wheels' Motion for Case I, When W = [1, 1, 1, 1, 1, 1, 1, 1, 1].

Figure 8.15: Wheels' Motion for Case II, When W = [10, 10, 10, 10, 10, 10, 10, 1, 1].


Figure 8.16: Wheels' Motion for Case III, When W = [1, 1, 1, 1, 1, 1, 1, 100, 100].

Figure 8.17: Wheels' Motion for Case IV, When W = [1, 1, 1, 1, 1, 1, 1, 100, 1].


Figure 8.18: Wheels' Motion for Case V, When W = [1, 1, 1, 1, 1, 1, 1, 1, 100].

Observing figures 8.17 and 8.18 shows how the opposite weights carried by the position and orientation variables of the wheelchair in these two cases led to a rotation, as observed in figure 8.17, where both wheels carried equal but opposite motion, and a translation, as observed in figure 8.18, where both wheels carried the same motion.

8.3.3. Velocities of the Joint Space Variables

The velocity profiles of the five cases were observed, and the benefit of the trajectory generator was apparent. Figures 8.19 and 8.20 show the velocity profiles of the seven arm joints and the two wheelchair wheels, respectively, for case I. When using a 3rd-order polynomial with parabolic blending, velocities ramped up or down at a constant acceleration rather than jumping instantaneously from zero to the desired joint velocities. This option was used in all simulation cases.
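The ramp-up, cruise, ramp-down behavior visible in figures 8.19 and 8.20 can be illustrated with a generic parabolic-blend profile. This is a sketch of the idea only, not the dissertation's trajectory generator; the duration T, blend time tb and displacement values are illustrative.

```python
import numpy as np

def blended_profile(q0, qf, T, tb, n=121):
    """Position from q0 to qf over time T, with parabolic blends of
    duration tb at each end and a constant-velocity middle segment."""
    v = (qf - q0) / (T - tb)          # cruise velocity
    a = v / tb                        # constant blend acceleration
    t = np.linspace(0.0, T, n)
    q = np.empty_like(t)
    for i, ti in enumerate(t):
        if ti < tb:                               # ramp up
            q[i] = q0 + 0.5 * a * ti ** 2
        elif ti <= T - tb:                        # cruise
            q[i] = q0 + v * (ti - 0.5 * tb)
        else:                                     # ramp down
            q[i] = qf - 0.5 * a * (T - ti) ** 2
    return t, q

# A 120-degree joint move over 12 s with 2 s blends, as in the plots' time scale.
t, q = blended_profile(0.0, 120.0, T=12.0, tb=2.0)
qdot = np.gradient(q, t)   # numerical velocity: trapezoidal shape
```

The numerical velocity rises linearly to the cruise value (12 deg/s here) and falls linearly back to zero, which is exactly the constant-acceleration ramping described above.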


Figure 8.19: Arm's Joint Velocities for Case I, When W = [1, 1, 1, 1, 1, 1, 1, 1, 1].

Figure 8.20: Wheels' Velocities for Case I, When W = [1, 1, 1, 1, 1, 1, 1, 1, 1].


8.3.4. Singularities and the Manipulability Measure

Figures 8.21 through 8.25 show the manipulability index of both the arm alone and the combined WMRA system. It is important to note that these values were multiplied by 10^-9 to get the normalized manipulability measure. The manipulability is much higher for the WMRA system than for the arm alone, due to the fact that the WMRA system carries two more degrees of freedom. In all five cases, the manipulability measure was maximized based on the weight matrix. Figure 8.22 shows the manipulability of the arm as nearly constant because of the minimal motion of the arm. Figure 8.23 shows how the wheelchair started moving rapidly later in the simulation (see figure 8.16) as the arm approached singularity, even though the weight on the wheelchair motion was heavy. This helped in improving the WMRA system's manipulability.

Figure 8.21: Manipulability Index for Case I, When W = [1, 1, 1, 1, 1, 1, 1, 1, 1].
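The index plotted in these figures is Yoshikawa's manipulability measure, w = sqrt(det(J J^T)). A minimal sketch of why the combined system's index always dominates the arm-only index (random stand-in Jacobians are used here; the real WMRA Jacobian is defined in the earlier chapters):

```python
import numpy as np

def manipulability(J):
    """Yoshikawa's manipulability index w = sqrt(det(J J^T))."""
    return np.sqrt(np.linalg.det(J @ J.T))

rng = np.random.default_rng(1)
J_arm = rng.standard_normal((6, 7))      # 6 task rows, 7 arm joints
J_chair = rng.standard_normal((6, 2))    # wheelchair's two extra columns
J_wmra = np.hstack([J_arm, J_chair])     # combined 6x9 WMRA Jacobian

w_arm = manipulability(J_arm)
w_wmra = manipulability(J_wmra)
# Appending columns adds a positive semidefinite term to J J^T,
# so det(J J^T) -- and hence w -- can only grow or stay the same.
```

This is the structural reason the WWMRA curve sits above the WARM curve in every plot: the two wheelchair degrees of freedom can only add to, never subtract from, the measure.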


Figure 8.22: Manipulability Index for Case II, When W = [10, 10, 10, 10, 10, 10, 10, 1, 1].

Figure 8.23: Manipulability Index for Case III, When W = [1, 1, 1, 1, 1, 1, 1, 100, 100].


Figure 8.24: Manipulability Index for Case IV, When W = [1, 1, 1, 1, 1, 1, 1, 100, 1].

Figure 8.25: Manipulability Index for Case V, When W = [1, 1, 1, 1, 1, 1, 1, 1, 100].


It is important to mention that changing the weights of the state variables gives motion priority to those variables, but may lead to singularity if heavy weights are placed on variables that are necessary for particular motions. For example, when all seven joints of the arm were given a weight of "1000" and the task required rapid motion of the arm, singularity occurred since the joints were nearly stationary. Changing these weights dynamically in the control loop, depending on the task at hand, leads to better performance.

8.4. Results and Discussion of the Second Two Cases

The two other cases tested in simulation were done to show the effect of choosing the state variables of the wheelchair's non-holonomic motion in planar Cartesian coordinates as the linear position and angular orientation, rather than as the two wheelchair wheel angles. In the first case (A), the state variables representing the wheelchair's motion were selected as the two angles of the wheelchair's driving wheels. In the second case (B), the state variables representing the wheelchair's motion were selected as the linear forward motion and the angular motion of the wheelchair in the planar Cartesian space. In this simulation test, the WMRA system was commanded to move the gripper forward in a straight line along the global "X" direction for one meter (1000 mm), i.e., it was moved from the ready position shown in equation 8.1 to the following desired pose:

    Td = [ 1  0  0  899
           0  1  0  131
           0  0  1  455
           0  0  0    1 ]                                    (8.3)
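Equation 8.3 is simply the ready-position transform shifted 1000 mm along the global X axis with the rotation left as the identity. A sketch of that pose arithmetic follows; the ready-position translation used here is inferred by subtracting the 1000 mm step from equation 8.3, so treat those numbers as illustrative rather than the exact values of equation 8.1.

```python
import numpy as np

# Inferred gripper pose at the ready position (mm); rotation is the
# identity, i.e., the same orientation as the ground frame.
T_ready = np.array([[1.0, 0.0, 0.0, -101.0],
                    [0.0, 1.0, 0.0,  131.0],
                    [0.0, 0.0, 1.0,  455.0],
                    [0.0, 0.0, 0.0,    1.0]])

def translate_global(T, dx, dy, dz):
    """Pre-multiplying by a pure translation moves a pose in the global frame."""
    D = np.eye(4)
    D[:3, 3] = [dx, dy, dz]
    return D @ T

T_d = translate_global(T_ready, 1000.0, 0.0, 0.0)   # the commanded pose
print(T_d[:3, 3])   # translation part: [899. 131. 455.]
```

Only the X component changes, which is why the expected system response is a pure forward motion of the wheelchair.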


The natural response the operator would expect is for the wheelchair to move forward without turning, since the trajectory is a straight line in front of the wheelchair. What actually happened in the first case (A) was different. First, when the weights of all joint variables were the same, the response was the same in both cases, since the choice of state variables makes no difference if the same weights are assigned to all variables. Figure 8.26 shows the position of the robotic arm's base, which is mounted on the power wheelchair. Observe that the arm base had to move about 650 mm forward and 400 mm to the side of the wheelchair. Figure 8.27 shows the orientation of the robotic arm's base. Observe that the base had to turn unnecessarily about 28 degrees clockwise, and then turn again about 9 degrees counterclockwise. This unnecessary motion can be avoided using the weight matrix only if the state variables are selected in a way that makes them appropriate to control.

Figure 8.26: Arm Base Position When the Weights Were Equal, W = [1, 1, 1, 1, 1, 1, 1, 1, 1].


Figure 8.27: Arm Base Orientation When the Weights Were Equal, W = [1, 1, 1, 1, 1, 1, 1, 1, 1].

In case (A), the state variables were the two wheel angles of the wheelchair, and the only way to control these two variables was to assign heavy weights to both of them so that the wheelchair would not move unnecessarily. The weights assigned were "50" to both wheels and "1" to the seven robotic arm joints. Figure 8.28 shows the position of the robotic arm's base. In this case, the base had to move about 625 mm forward and 250 mm to the side of the wheelchair. Even though the side motion was unnecessary, it was less than the motion when the weights were equal. Figure 8.29 shows the orientation of the robotic arm's base. Observe that the base had to turn unnecessarily about 16 degrees clockwise, and then turn again about 2 degrees counterclockwise. Even though this unnecessary rotation happened, it was still less than the motion when the weights were equal.


Figure 8.28: Arm Base Position for Case A, When W = [1, 1, 1, 1, 1, 1, 1, 50, 50].

Figure 8.29: Arm Base Orientation for Case A, When W = [1, 1, 1, 1, 1, 1, 1, 50, 50].


Having control over the forward motion and the orientation of the wheelchair separately allows greater and more meaningful system response. In case (B), the state variables were the wheelchair's linear motion and its rotational orientation, and these two variables can be controlled separately to give preference to the rotation or the forward motion by assigning heavier weights to the variable that should not change unnecessarily. The weights assigned in this case were "50" to the wheelchair's rotational motion and "1" to the wheelchair's forward motion and the seven robotic arm joints. Figure 8.30 shows the position of the robotic arm's base. In this case, the base had to move about 700 mm forward and 100 mm to the side of the wheelchair. Even though the side motion was unnecessary, it was significantly less than that of case (A). The wheelchair moved further forward to compensate for the unwanted side motion. Figure 8.31 shows the orientation of the robotic arm's base. The orientation change was minimal, less than 8 degrees clockwise. This turn was less than half of that in case (A), since the heavier weight was placed on the orientation rather than on all wheelchair motion. Notice that the orientation change was not only minimal, but also did not change direction to go counterclockwise as happened in case (A). This shows that the apparently unnecessary rotation that did occur was in fact necessary to follow the trajectory without getting close to singular configurations. The observations of these cases emphasize the importance of choosing the state variables based on the convenience of the user. The variables adopted for WMRA control on the physical arm were the forward position and the rotational orientation of the WMRA.
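The two parameterizations compared in cases (A) and (B) are related by standard differential-drive kinematics: with wheel radius r and track width L, the forward speed is v = r(θ̇L + θ̇R)/2 and the turning rate is ω = r(θ̇R − θ̇L)/L. A small sketch of this change of state variables (the values of r and L below are illustrative, not the actual wheelchair's):

```python
import numpy as np

r = 0.17      # drive wheel radius (m) -- illustrative value
L = 0.56      # distance between the two drive wheels (m) -- illustrative value

# Linear map from wheel rates (thetadot_L, thetadot_R) to chair rates (v, omega).
M = np.array([[ r / 2.0, r / 2.0],
              [-r / L,   r / L]])

def chair_rates(wheel_rates):
    return M @ wheel_rates

# Equal wheel rates -> pure translation (case B's forward variable);
# opposite wheel rates -> pure rotation (case B's orientation variable).
v_fwd, w_fwd = chair_rates(np.array([1.0, 1.0]))
v_rot, w_rot = chair_rates(np.array([-1.0, 1.0]))
```

Because the map is invertible, weighting (v, ω) directly, as in case (B), penalizes translation and rotation independently, which individual wheel weights, as in case (A), cannot do.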


Figure 8.30: Arm Base Position for Case B, When W = [1, 1, 1, 1, 1, 1, 1, 50, 1].

Figure 8.31: Arm Base Orientation for Case B, When W = [1, 1, 1, 1, 1, 1, 1, 50, 1].


8.5. More Simulation for Optimization Methods and Criterion Function Effects

To show the effects of the optimization method and the criterion function on the simulation results, four different cases of optimization methods with different criterion functions were tested in simulation. These cases are:

1. Case I: Minimization of the Euclidean norm of errors using the pseudo inverse of the Jacobian.
2. Case II: Joint-limit-avoidance-based optimization with the pseudo inverse of the Jacobian and the gradient projection term.
3. Case III: The weighted least norm optimization solution with the S-R inverse of the Jacobian.
4. Case IV: The weighted least norm optimization solution with the S-R inverse and joint limit avoidance.

The four tested cases showed different joint reactions in the WMRA system based on the method used and the optimization criteria selected. The WMRA system was commanded to move in autonomous mode from its initial position to a point (-1500, -400, 100) mm away from the ground frame, with the same orientation as the ground frame. Figures 8.32 and 8.33 show the results of the first case, where the wheels and joints of the WMRA system moved minimally to achieve the destination point. Note that joint six cannot move more than 100 degrees from the center of its joint range, and not including joint limit avoidance made it cross that limit, while the rest of the joints had plenty of room to move to achieve the destination and did not move.
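Case II's gradient projection term can be sketched as follows: the pseudo inverse supplies the particular solution, and the null-space projector (I − J⁺J) adds a self-motion that descends a joint-limit criterion H(q) without disturbing the end-effector. The quadratic distance-from-center criterion used here is a common textbook choice and may differ from the exact function of the dissertation.

```python
import numpy as np

def gp_step(J, xdot, q, q_min, q_max, k=1.0):
    """Pseudo inverse solution plus a gradient projection term that
    pushes joints toward the middle of their ranges in the null space."""
    J_pinv = np.linalg.pinv(J)
    center = 0.5 * (q_min + q_max)
    range_sq = (q_max - q_min) ** 2
    gradH = 2.0 * (q - center) / range_sq      # gradient of the limit criterion
    N = np.eye(J.shape[1]) - J_pinv @ J        # null-space projector of J
    return J_pinv @ xdot - k * (N @ gradH)

# Stand-in 6x9 WMRA Jacobian and a random configuration.
rng = np.random.default_rng(2)
J = rng.standard_normal((6, 9))
q = rng.uniform(-1.0, 1.0, 9)
q_min, q_max = -np.full(9, 2.0), np.full(9, 2.0)
xdot = rng.standard_normal(6)

qdot = gp_step(J, xdot, q, q_min, q_max)
# The projected term lives entirely in the null space, so the commanded
# end-effector velocity is reproduced exactly.
```

This also explains the teleoperation observation reported later for case II: even with ẋ = 0, the null-space term is generally nonzero, so the joints keep drifting toward their range centers.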


Figure 8.32: Wheels' Motion Distances for Case I.

Figure 8.33: Joint Angular Displacements for Case I.


When the same test was conducted with joint limit avoidance as the criterion function, as discussed in case II, joint limits were successfully avoided, as shown in figures 8.34 and 8.35. The wheelchair moved more than in the first case, and joint one came close to its limit of 170 degrees to compensate for the limited motion in joint six. In case III, a different optimization method was used without the joint limit avoidance criterion function embedded in the weight matrix. The weight matrix consisted of the user-defined weights W = diagonal [1, 1, 1, 1, 1, 1, 1, 20, 20]. Figures 8.36 and 8.37 show the motion of the wheels, which occurred later in the simulation, and the joint angles traveled to reach the destination at the end-effector. It can be seen that joint six went over its limit of 100 degrees, since the weight matrix does not reflect joint limit avoidance as the optimization criterion function.

Figure 8.34: Wheels' Motion Distances for Case II.


Figure 8.35: Joint Angular Displacements for Case II.

Figure 8.36: Wheels' Motion Distances for Case III.


Figure 8.37: Joint Angular Displacements for Case III.

In case IV, the optimization criterion function was included in the weight matrix to avoid joint limits with all four conditions discussed in chapter 5. Figures 8.38 and 8.39 show how the wheelchair moved significantly more to compensate for the joints that reached their limits, whose weights went to infinity. Joint three reached its limit of 170 degrees and joint six reached its limit of 100 degrees. This resulted in a smoother simulation with joint limit implementation while keeping the user's preference of minimal wheelchair motion, as expressed in the user-defined portion of the weight matrix discussed in chapter five. These test cases reflect the usability of the system and its reaction to different control algorithms based on the user's preference. It is noted here that when the user chose case II with teleoperation mode, the system started moving before the user touched the controls, since the system was still being optimized to keep the joints close to the middle of their range of motion.
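The joint-limit part of the case IV weight matrix follows the weighted least-norm idea: each joint's weight grows with the gradient of a joint-limit criterion, blowing up as the joint approaches a limit, and is applied only when the joint is moving toward that limit. The sketch below uses the common criterion H = Σ (q_max − q_min)² / [4 (q_max − q)(q − q_min)]; the four exact conditions of chapter 5 are only paraphrased here.

```python
import numpy as np

def jla_weight(q, qdot_sign, q_min, q_max):
    """Per-joint weight 1 + |dH/dq|, applied only when the joint is
    moving toward a limit; weights stay at 1 when moving away."""
    num = (q_max - q_min) ** 2 * (2.0 * q - q_max - q_min)
    den = 4.0 * (q_max - q) ** 2 * (q - q_min) ** 2
    dH = num / den                              # gradient of the criterion
    w = np.ones_like(q)
    toward_limit = np.sign(dH) == np.sign(qdot_sign)
    w[toward_limit] += np.abs(dH[toward_limit])
    return w

# One joint with limits of +/-100 degrees, like joint six.
q_min, q_max = np.array([-100.0]), np.array([100.0])
w_center = jla_weight(np.array([0.0]),  np.array([1.0]),  q_min, q_max)  # mid-range
w_near   = jla_weight(np.array([99.0]), np.array([1.0]),  q_min, q_max)  # approaching
w_away   = jla_weight(np.array([99.0]), np.array([-1.0]), q_min, q_max)  # retreating
```

At mid-range the weight is 1; one degree from the limit it is roughly 51 when still moving toward the limit, but drops back to 1 the moment the joint retreats, which is why the joints in figures 8.38 and 8.39 stop at, rather than stick to, their limits.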


Figure 8.38: Wheels' Motion Distances for Case IV.

Figure 8.39: Joint Angular Displacements for Case IV.


8.6. Simulation of the Eight Implemented Optimization Control Methods for the Case of an Unreachable Goal

To test the difference in system response when the WMRA system is commanded to reach a point that is physically unreachable, eight different cases were simulated, each using a different control method. The end-effector was commanded to move horizontally and then vertically upwards to a height of 1.3 meters from the ground. This height is physically unreachable, and the WMRA system will reach singularity. Whether the system response avoids that singularity depends on the method used. Singularity, joint limits and preferred joint-space weights were the three factors focused on in this simulation. The eight control cases simulated were as follows:

1. Case I: Pseudo inverse solution (PI).
2. Case II: Pseudo inverse solution with the gradient projection term for joint limit avoidance (PI-JL).
3. Case III: Weighted pseudo inverse solution (WPI).
4. Case IV: Weighted pseudo inverse solution with joint limit avoidance (WPI-JL).
5. Case V: S-R inverse solution (SRI).
6. Case VI: S-R inverse solution with the gradient projection term for joint limit avoidance (SRI-JL).
7. Case VII: Weighted S-R inverse solution (WSRI).
8. Case VIII: Weighted S-R inverse solution with joint limit avoidance (WSRI-JL).

From these cases, we observe the following results in terms of singularity expressed by the manipulability measure, joint limit avoidance (joint 6 should not exceed


+/-100 degrees), and the user option of preferred weights of motion (1 is used for the arm and 10 for the wheelchair):

1. Case I (PI): The system was unstable, the joints went out of bounds, and the user had no weight assignment choice (see figures 8.40 and 8.41).
2. Case II (PI-JL): The system was unstable, the joints stayed in bounds, and the user had no weight assignment choice (see figures 8.42 and 8.43).
3. Case III (WPI): The system was unstable, the joints went out of bounds, and the user had weight assignment choices (see figures 8.44 and 8.45).
4. Case IV (WPI-JL): The system was unstable, the joints stayed in bounds, and the user had weight assignment choices (see figures 8.46 and 8.47).
5. Case V (SRI): The system was stable, the joints went out of bounds, and the user had no weight assignment choice (see figures 8.48 and 8.49).
6. Case VI (SRI-JL): The system was unstable, the joints stayed in bounds, and the user had no weight assignment choice (see figures 8.50 and 8.51).
7. Case VII (WSRI): The system was stable, the joints went out of bounds, and the user had weight assignment choices (see figures 8.52 and 8.53).
8. Case VIII (WSRI-JL): The system was stable, the joints stayed in bounds, and the user had weight assignment choices (see figures 8.54 and 8.55).

It is clear that case VIII showed the best performance, since it fulfilled all the important control requirements. This method avoided singularities while keeping the joints within their limits and satisfying the user-specified weights as much as possible.
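The "S-R" (singularity-robust) element that separates the stable cases from the unstable ones is a damped inverse: a damping term is added to J J^T so the solution stays bounded near singularities, at the cost of a small task-space error. The sketch below combines it with a weight matrix, as in case VIII; the particular damping schedule (quadratic in the manipulability deficit below a threshold w0) is one common formulation and may differ in detail from chapter 5.

```python
import numpy as np

def wsri(J, W, w0=0.1, k0=0.01):
    """Weighted singularity-robust (damped) inverse of J.
    Damping turns on only when the manipulability drops below w0."""
    W_inv = np.linalg.inv(W)
    JW = J @ W_inv @ J.T
    w = np.sqrt(max(np.linalg.det(JW), 0.0))          # manipulability measure
    lam = 0.0 if w >= w0 else k0 * (1.0 - w / w0) ** 2
    return W_inv @ J.T @ np.linalg.inv(JW + lam * np.eye(J.shape[0]))

# Near a singular configuration the damped inverse stays bounded,
# trading a small task-space error for bounded joint velocities.
J = np.array([[1.0, 0.0,  0.0],
              [0.0, 1e-6, 1e-6]])     # nearly rank-deficient Jacobian
qdot = wsri(J, np.eye(3)) @ np.array([1.0, 1.0])
bounded = np.linalg.norm(qdot) < 1e4
```

An undamped pseudo inverse of the same Jacobian would demand joint rates on the order of 10^5, which is exactly the instability reported for the PI and WPI cases when the goal is out of reach.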


Figure 8.40: Manipulability Measure for Case I (PI).

Figure 8.41: Joint Angular Displacements for Case I (PI).


Figure 8.42: Manipulability Measure for Case II (PI-JL).

Figure 8.43: Joint Angular Displacements for Case II (PI-JL).


Figure 8.44: Manipulability Measure for Case III (WPI).

Figure 8.45: Joint Angular Displacements for Case III (WPI).


Figure 8.46: Manipulability Measure for Case IV (WPI-JL).

Figure 8.47: Joint Angular Displacements for Case IV (WPI-JL).


Figure 8.48: Manipulability Measure for Case V (SRI).

Figure 8.49: Joint Angular Displacements for Case V (SRI).


Figure 8.50: Manipulability Measure for Case VI (SRI-JL).

Figure 8.51: Joint Angular Displacements for Case VI (SRI-JL).


Figure 8.52: Manipulability Measure for Case VII (WSRI).

Figure 8.53: Joint Angular Displacements for Case VII (WSRI).


Figure 8.54: Manipulability Measure for Case VIII (WSRI-JL).

Figure 8.55: Joint Angular Displacements for Case VIII (WSRI-JL).


8.7. Summary

Simulation results of the implementation of the methods for combining mobility and manipulation with redundancy resolution were shown in this chapter. Several cases were defined for simulation, and observations of the simulation results were presented and discussed to assess the effectiveness of the solutions. In all cases, the trajectory was generated to move the end-effector from the initial to the final position following the specified optimization choices. Final configurations of the WMRA system were shown for all cases, and the joint-space variables were studied. The effect of the 3rd-order polynomial with parabolic blending in generating the trajectory points was shown in the velocities of the joint-space variables. This led to constant accelerations and decelerations of the variables, so that smooth motion occurred. Two other simulations shown in this chapter verified the idea behind the proper choice of state variables, so that control of these variables makes more sense than an arbitrary choice of variables that may produce undesirable system behavior.

Four comparison cases were presented to compare four different control optimization methods when used within the workspace. Another eight cases were presented to show the different behaviors of the system response when the WMRA system was commanded to go to a point that was physically out of its reach. These twelve cases clearly identified the advantage of using the WSRI method with joint limit avoidance over all other optimization methods.


Chapter 9: Experimental Testbed and Field Tests

9.1. Introduction

The combination of mobility and manipulation in robotic assistive devices is better realized in actual products when testing on physical systems follows the development of theories and simulation results. This chapter discusses the testbed, which comprises a new 7-DoF robotic arm design, a modified wheelchair, a new gripper designed specifically for activities of daily living (ADL), and control hardware that drives all of this equipment from a tablet PC running Windows XP. Design aspects and components are shown, and the communication and wiring that tie the system together are discussed.

9.2. The New 7-DoF Robotic Arm Design and Development

A 7-DoF wheelchair-mounted robotic arm (WMRA) was designed and built to be integrated with a power wheelchair to help people with disabilities perform their activities of daily living independently or with minimal help. The mechanical design incorporates DC servo drive motors with actuator hardware at each individual joint, allowing reconfigurable link lengths [52]. It has seven degrees of freedom and uses a side mount on a power wheelchair. The control system allows coordinated Cartesian control, and


offers expandability for future research, such as coordinated motion with the wheelchair itself.

9.2.1. Design Goals

A new WMRA was developed, designed and built. The goal was to produce an arm with better manipulability, greater payload, and easier control than current designs. The arm is also reconfigurable, which increases the number of applications for the design. The following design goals were set for the hardware:

9.2.1.1. Weight

In a mobile application, minimal weight is of primary importance. Power wheelchairs have a rated payload, and a heavy arm reduces the payload available for the user. Based on this criterion, the goal was a total system mass under 14 kg, including the arm, controller, and all peripherals.

9.2.1.2. Mount Type

As found in previous research [57], side mounting is preferable overall because it provides the best balance between manipulability and unobtrusiveness. However, care must be taken to prevent widening of the power chair. The new arm is mounted as far forward and upward as possible while still in a side-mount configuration, and increases chair width by only 7.5 cm. This mounting location allows the arm to be stowed by folding it back and then wrapping the forearm behind the seat. It virtually disappears


when not in use, especially when the arm is painted to match the chair. This helps avoid the stigma that these devices can bring.

9.2.1.3. Stiffness

Stiffness is one of the major differences between this WMRA and a typical industrial manipulator. As anticipated, teleoperation will be the most common control mode for the robot, and therefore great precision is not required. With a human participating at all times, inaccuracy due to a compliant structure is easily and transparently corrected. Recognizing this allowed the structure to be made much lighter than an industrial manipulator with the same payload. However, the low stiffness and large backlash of other WMRAs is an impediment to accurate coordinated control. With this design, we attempted to find an optimal balance: stiffer than other WMRAs, but less stiff than an industrial manipulator.

9.2.1.4. Payload

This manipulator is intended for use in activities of daily living (ADL) and for job tasks of a typical office environment. As such, it is important that the arm be strong enough to move objects that are common in these environments. A gallon (4 liters) of milk is a good upper limit for a typical around-the-house object that must be manipulated. As this is approximately a 4 kg mass, it was set as the baseline payload for the arm at full horizontal reach at rest. An extra margin of 2 kg was then added to allow for a choice of end-effector capable of handling this load. The 4 kg useful payload is much larger than the 1 kg payload of the Raptor.


9.2.1.5. Reconfigurability

Even though a side mount was chosen for this prototype, it is important to note that the basic design can be adapted to a front or rear wheelchair mount, or a fixed workstation mount. The arm can be specialized for these workspaces by adjusting link lengths. Longer links can be specified for a rear mount on a power chair, but this reduces payload and reduces manipulability in front of the chair. Reconfigurable link lengths allow greater leverage on the engineering input, as a single basic design may be adapted to numerous applications. This is only practical with electric drive and actuator placement directly at each joint.

9.2.1.6. Power Supply and Consumption

In the power wheelchair industry, a 24-volt lead-acid battery pack is standard, and it is the natural choice for the power supply of a WMRA. All motor controllers, input devices, sensors, etc. must be able to work with 24 VDC. Energy consumption is important, as users would reject a device that worked well but left them stranded without wheelchair power. Therefore, efficient components were chosen to keep power consumption low.

9.2.1.7. Cost Constraints

Reasonable cost is important to widespread adoption of these devices, but it is not as major a hurdle as poor utility and difficulty of use. The target was to be in the mid-range of commercially available systems in terms of cost. The usability of the WMRA


system will be far more important than the cost when the user decides which device should be used.

9.2.1.8. User Interface

Users want a useful payload and simple, intuitive control. The Raptor lacks encoders, so its control is manual, one joint at a time. Quadrature encoders are a cost-effective way to provide coordinated Cartesian control. The controllers of the new WMRA have PWM voltage regulation and built-in support for acceleration limits. The system easily scales to control grippers or even the base wheelchair, all through one standard control system.

9.2.1.9. Degrees of Freedom

Extra degrees of freedom help improve manipulability. This is evidenced by the considerable increase going from the Raptor's 4 DOF to the 6 DOF of the MANUS. The new design incorporates 7 joints, allowing full pose control even in difficult regions of the workspace, such as reaching around the wheelchair, reaching up to a high shelf, manoeuvring around objects, or opening a door.

9.2.1.10. Actuation and Transmission Systems

Most actuation alternatives were restricted by the requirement for reconfigurability. Changing the length of an arm that is driven through linkages or flexible cables from motors in the base would require many parts to change, which would require a new design. The option of pneumatics was eliminated due to positioning


difficulty and compressor noise. The best option was to drive the joints electrically through harmonic gearheads that provide a large gear ratio, with the entire actuator positioned at each joint.

9.2.1.11. DC Motors as Actuators

The only serious choice was whether to use stepper or servo motors [52]. Due to recent improvements in servo controllers, the cost of this option is not much higher than that of stepper motors. Brush DC servomotors allow closed-loop control, and they are much quieter, lighter and more efficient than steppers. For these reasons, DC servo drive was selected. Quadrature encoders mounted on the motors were selected for their accuracy, simplicity and low cost. Optical limit switches ease initialization at power-up.

9.2.2. Kinematic Arrangements and Component Selection

The arm is a 7-DOF design using 7 revolute joints [52]. It is anthropomorphic, with joints 1, 2 and 3 acting as a shoulder, joint 4 as an elbow, and joints 5, 6 and 7 as a wrist, as shown in figure 9.1. The 3-DOF shoulder allows the elbow to be positioned anywhere along a spherical surface, whereas with the Raptor arm, elbow movement is limited to a circle. Throughout the arm, adjacent joint axes are oriented at 90 degrees, as shown in figure 9.2. This helps to meet two goals: mechanical design simplicity and kinematic simplicity. Machining parts on a conventional milling machine is easier with right angles, and the coordinate transform equations simplify greatly, resulting in low computational cost. All adjacent joint axes intersect, further simplifying the kinematics.
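The simplification from 90-degree twist angles can be seen directly in the standard Denavit-Hartenberg link transform: with cos(alpha) = 0 and sin(alpha) = +/-1, several matrix entries vanish or reduce to plain sines and cosines of the joint angle. The sketch below is a minimal, generic DH transform for illustration; the parameter values in any call are placeholders, not the arm's actual link values.

```cpp
#include <array>
#include <cmath>

using Mat4 = std::array<std::array<double, 4>, 4>;

// Standard Denavit-Hartenberg homogeneous transform from frame i-1 to i.
// With twist angles alpha = +/-90 degrees, ca = 0 and sa = +/-1, so half
// the rotation entries drop out, which is why intersecting, mutually
// perpendicular joint axes keep the forward kinematics cheap.
Mat4 dhTransform(double a, double alpha, double d, double theta) {
    double ct = std::cos(theta), st = std::sin(theta);
    double ca = std::cos(alpha), sa = std::sin(alpha);
    Mat4 T{};
    T[0] = {ct, -st * ca,  st * sa, a * ct};
    T[1] = {st,  ct * ca, -ct * sa, a * st};
    T[2] = {0.0,      sa,       ca,      d};
    T[3] = {0.0,     0.0,      0.0,    1.0};
    return T;
}
```

Chaining seven such transforms, one per revolute joint, yields the end-effector pose; with all twists at right angles each product costs only a handful of multiplications.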


Figure 9.1: Complete SolidWorks Model of the WMRA.

Figure 9.2: Kinematic Diagram with Link Frame Assignments.

Emphasis was placed on using off-the-shelf parts wherever possible. The basic arrangement for each joint is a high-reduction gearhead, a motor with encoder and spur-gear reduction, and a bracket that holds these two parts and attaches to the two neighbouring links. The hardware components were selected to meet the design requirements, as follows:


9.2.2.1. Gearhead Selection

Harmonic drive gearheads were chosen because they can achieve 100:1 reduction in a single stage, with only 64 mm axial length [52]. In addition, they have bearings suitable for supporting overhung loads, enabling the next arm segment to be bolted directly to the output flange of the gearhead. This greatly simplifies the design, reducing weight and cost through lower part count. Gearheads were chosen based on the required overhung loads and torques, with the size of the gearhead decreasing at each consecutive distal joint. Once the basic type of gearhead was selected, information on the available sizes was collected, namely the mass and recommended maximum torque. The maximum recommended torque here was taken to be the lesser of two specifications from the manufacturer: maximum output torque and maximum overhung torque. A simple spreadsheet model of a horizontally outstretched arm was made, which accounted for link lengths and self-weight. The target payload (6 kg) was also applied to the end of the arm, and eventually the gearheads were chosen as shown in Table 9.1. One of the harmonic drive gearheads selected for the first two joints in the shoulder of the robotic arm is shown in figure 9.3.

Table 9.1: HD Systems Gearhead Selections for Each Joint.

  Joint   Model Selected   Torque (N·m)   Outside Diam. (mm)   Mass (kg)
  1       CSF-25           140            107                  1.50
  2       CSF-25           140            107                  1.50
  3       CSF-20           70             93                   0.98
  4       CSF-17           46             79                   0.68
  5       CSF-17           46             79                   0.68
  6       CSF-14           19.5           73                   0.52
  7       CSF-11           6.6            58                   0.15
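The spreadsheet model described above amounts to a static moment balance: each pitch joint of the horizontally outstretched arm must support the weight of every distal link (taken at the link midpoint) plus the payload at the tip. A minimal sketch of that calculation follows; the link lengths and masses passed in are illustrative placeholders, not the actual design values.

```cpp
#include <vector>
#include <cstddef>

struct Link { double length_m; double mass_kg; };

// Static torque demand at each joint of a horizontally outstretched arm:
// sum the moments of all distal link weights (acting at link midpoints)
// plus the payload moment at the tip of the last link.
std::vector<double> jointTorques(const std::vector<Link>& links,
                                 double payload_kg, double g = 9.81) {
    std::vector<double> torque(links.size(), 0.0);
    for (std::size_t j = 0; j < links.size(); ++j) {
        double arm = 0.0;  // distance from joint j to the current link's root
        double t = 0.0;
        for (std::size_t k = j; k < links.size(); ++k) {
            t += links[k].mass_kg * g * (arm + links[k].length_m / 2.0);
            arm += links[k].length_m;
        }
        t += payload_kg * g * arm;  // 6 kg target payload at the tip
        torque[j] = t;
    }
    return torque;
}
```

Comparing each joint's demand against the lesser of the gearhead's rated output and overhung torques reproduces the selection logic behind Table 9.1, including the shrinking gearhead size toward the distal joints.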


Figure 9.3: Harmonic Drive Gearhead.

9.2.2.2. Motor Selection

Brush DC motors were chosen since they are the least expensive way to achieve servo control [52]. While brushless motors are a future possibility, their performance gains are dubious and would significantly increase the cost of the robot. The marginal increase in efficiency is relatively unimportant, and gear train noise is already greater than commutator noise. The main benefits of brushless motors are increased service life before maintenance and possibly better packaging. Brush DC servo drive is the best overall compromise for a WMRA. Brush DC, 24 V Pittman motors were selected that meet all performance criteria and have integrated gearboxes and encoders, as shown in figure 9.4. These relative encoders are initialized with optical limit switches.

Figure 9.4: Pittman Servo Brush Motors with Gearbox and Encoder.


9.2.2.3. Material Selection

6061 aluminum was chosen for the joint brackets because of its machinability, weldability, lower cost, good strength-to-weight ratio, and availability [52]. This material was also chosen for the link tubes, for the same reasons. Steel was rejected due to its high density. In many places, the thickness of a bracket is determined not by strength or stiffness but by simple packaging constraints; steel would unacceptably increase mass in these areas. Carbon fiber/epoxy was considered for the link tubes due to the possible increase in stiffness and reduction in weight. It was rejected for the prototype because of its cost. Carbon fiber becomes especially attractive for a long-reach option, and it may make a rear-mount arm more feasible.

9.2.2.4. Joint Design

Once all components were selected, the design of each joint was rather straightforward [52]. The typical arrangement for a joint is to have a gearhead and motor held together by an angle bracket. This bracket mounts to the previous joint or link, and the output flange of the gearhead attaches to the next link. Billet 6061 aluminum was chosen for its high strength and dimensional accuracy.

9.2.2.5. Wrist Design

There were two basic choices for a 3-DOF wrist: orthogonal and non-orthogonal, as shown in figure 9.5. The conventional orthogonal arrangement was selected due to better packaging [52]. All three axes are mutually orthogonal, and all axes of rotation converge at a single point. This is common in industrial manipulators, such as the Puma


560. It is done to simplify the kinematic equations and to provide an inverse kinematic solution with the least amount of computation. Joint 6 has a design unlike the others in this manipulator. A right-angle gearbox between the motor and gearhead greatly improves packaging, but it increases complexity. A single bracket was designed to hold all three parts in proper alignment and to carry the load to joint 5. Joint 7 is coaxial with the last link, so rotation about this axis is assured irrespective of the pose of the arm. The gearhead mounts to a flange welded to the end of the link tube, and the motor is hidden inside this tube. Wires to the motors and encoders can run inside the tubes or be clipped to the side of the tubes.

Figure 9.5: Wrist Design: 3-Roll Wrist (Left), Orthogonal Wrist (Right).

9.2.3. Final Design Testing and Specifications

Good stiffness reflects quality construction and permits accurate control. Stiffness was tested by extending the arm straight out in front of the wheelchair [52]. A dial indicator was set to measure vertical deflection, and then a known mass was applied to the wrist plate. Deflections were measured at the wrist plate (100.3 cm from axis 1), joint 4 (50.8


cm from joint 1), and directly on the joint 1 gearhead. Table 9.2 shows the arm deflection due to the applied load.

Table 9.2: Arm Deflections vs. Applied Load.

  Load (kg)   Wrist Deflection (mm)   Elbow Deflection (mm)   Joint 1 Deflection (mm)
  2           4.4                     1.8                     0.2
  4           8.7                     3.7                     0.4
  6           13.3                    5.5                     0.7

Each joint was individually tested for the maximum load it could lift. This was done by placing the arm in the pose most adverse for the joint in question: fully outstretched, pointing forward, parallel with the ground. Weights were progressively added, and the joint was given full power to try to raise them. All joints were tested up to the design load. Testing shows that joints 3 and 4 are overpowered, and smaller motors could be substituted.

The maximum unloaded speed of each joint was measured using a known arc (90, 180, or 360 degrees, as geometry permitted). The time to traverse this arc was measured with a stopwatch, and joint velocities in RPM were calculated. Speeds range from 5 RPM in the proximal joints to 16 RPM in joint 7.

With any battery-operated device, energy use is very important. A digital multimeter was connected inline with the power feed from the wheelchair battery, and power consumption was recorded as shown in Table 9.3.

Table 9.3: Power Usage.

  Condition                                 Current (A)
  Idle (all motors off, controller only)    0.36
  Holding self-weight outstretched          0.58
  Holding 6 kg fully outstretched           1.70
  Lifting 6 kg with joint 1                 3.30
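The stopwatch-and-arc speed measurement described above reduces to a one-line unit conversion: revolutions traversed divided by minutes elapsed. A minimal sketch (the 3-second timing in the usage note is a hypothetical illustration, not a recorded measurement):

```cpp
// Unloaded joint speed from a timed traversal of a known arc:
// revolutions = arc / 360 degrees, minutes = seconds / 60,
// RPM = revolutions per minute.
double jointSpeedRpm(double arc_deg, double time_s) {
    return (arc_deg / 360.0) / (time_s / 60.0);
}
```

For example, a 90-degree arc traversed in 3 seconds corresponds to 5 RPM, the figure reported for the proximal joints.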


While more testing will be instructive, a reasonable estimate is that typical household and office tasks will lead to an average current of 2 amperes. Therefore, six continuous hours of arm use would consume 12 Ah. This would leave a 73 Ah battery (group 24 gel cell) with 61 Ah for propulsion, or 84% of capacity. Thus, the driving range would be reduced from about 30 km to 25 km. This should be acceptable for most users, and daytime charging can help restore range.

Figure 9.6 shows the completed robotic arm in different positions, using the SolidWorks model of the arm as well as the actual arm in two different positions. The photos show the arm with a Barrett hand installed at the end-effector. A summary of the specifications of the robotic arm built is shown in Table 9.4.

Figure 9.6: WMRA SolidWorks Models and the Corresponding Positions of the Built Device.
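The battery budget above is simple arithmetic and can be reproduced directly. The sketch below makes the same simplifying assumption as the text, namely that driving range scales linearly with the charge remaining for propulsion:

```cpp
struct BatteryBudget {
    double armDraw_Ah;     // charge consumed by the arm
    double remaining_Ah;   // charge left for propulsion
    double remainingFrac;  // fraction of full capacity remaining
    double range_km;       // driving range, assumed linear in remaining charge
};

// Battery budget for combined arm use and propulsion on one pack.
BatteryBudget budget(double capacity_Ah, double armCurrent_A,
                     double armHours, double fullRange_km) {
    BatteryBudget b;
    b.armDraw_Ah   = armCurrent_A * armHours;        // e.g. 2 A * 6 h = 12 Ah
    b.remaining_Ah = capacity_Ah - b.armDraw_Ah;     // e.g. 73 - 12 = 61 Ah
    b.remainingFrac = b.remaining_Ah / capacity_Ah;  // about 84%
    b.range_km = fullRange_km * b.remainingFrac;
    return b;
}
```

Run with the chapter's numbers (73 Ah pack, 2 A average, 6 h of use, 30 km full range), this reproduces the 12 Ah draw, the 61 Ah remainder, and a range of roughly 25 km.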


Table 9.4: Summary of the Robotic Arm Specifications.

  Arm mass                               12.5 kg
  Max reachable height above floor       1.37 m
  Chair width increase with side mount   7.5 cm
  Average current draw                   2 A
  Design payload (including gripper)     6 kg
  Deflection at design payload           13.3 mm
  Degrees of freedom                     7
  Actuator type                          Brush DC servo
  Transmission                           Harmonic drive
  Controller type                        PIC-Servo SC

9.3. The New 2-Claw Ergonomic Gripper Design and Development

A new robotic gripper was designed and constructed for activities of daily living (ADL) to be used with the newly developed wheelchair-mounted robotic arm. Two aspects of the new gripper make it unique: one is the design of the paddles, and the other is the design of the actuation mechanism that produces parallel motion for effective gripping. The paddles of the gripper were designed to grasp a wide variety of objects with different shapes and sizes that are used in everyday life. The driving mechanism was designed to be simple, light, effective, safe, self-contained, and independent of the robotic arm attached to it.

In designing a gripper, functionality is very important, and it remains one of the main factors considered in most robotics applications. If the design has good functionality, minimal cost, and high durability, and the aesthetic characteristics are met, a good product is likely to be produced. In order to decide on a good design for a gripper, several aspects have to be inspected, such as the tasks required of the mechanism, size and weight limitations, and the environment it will be used in, as well as material selection. Some of the ADL tasks that will be performed using the gripper are opening doors, grasping a


glass to drink from, flipping a light switch, pushing and turning buttons and knobs, holding books and similar objects, handling tiny objects such as a CD or loose sheets of paper, and holding a small ball.

9.3.1. Paddle Ergonomic Design

Specific considerations were taken in the attempt to optimize the functionality of the gripper. It was decided early on that the gripper would utilize parallel motion generated from a dual four-bar mechanism attached to each side of the two fingers, creating 8 links between the gripper surfaces and the driving mechanism itself. As a start, the gripper's fingers (paddles) were first put into consideration. Based on the tasks expected of the overall device, the gripper's surfaces were designed to be varied for the adequate handling and use of the household objects mentioned. A rounded surface was implemented as shown in figure 9.7, which gives the gripper a soft look as well as good function while grasping objects.

Figure 9.7: The New Gripper's Ergonomic Surfaces.


A spherically channeled surface was placed in the center of the paddle surface with the intention of contouring to spherical door knobs. Small protrusions were added to the end of each paddle at the tip of the gripper for grasping smaller objects, allowing added dexterity and the operation of push buttons and toggle switches. The tips were specifically made narrow for precision operations and were rounded off to prevent marring the surfaces that they come in contact with. Optional protrusions extending toward the center of the grip at the tip of one of the paddles were added to allow objects such as door handles and door knobs to be pulled open with more security, rather than relying on friction and the locking of the mechanism's grip alone. The other paddle has a small opening for the protrusions to go through when closing the gripper is required, as seen in figure 9.8.

Figure 9.8: The Gripper Design in Application Reference.

It was later decided that an extra flat surface placed closer to the driver mechanism would be beneficial in grasping larger rectangular objects such as boxes or books. By relying on the fingertips of the gripper alone to grasp larger objects, a greater moment would be generated on the driving mechanism and higher stresses induced in the


links to achieve the same amount of gripping force attainable from a location closer to the driving mechanism itself. Figure 9.9 shows these changes to the paddles.

Figure 9.9: Extended Interior Surface Added to the Gripper.

As a final modification to the paddles, a spring hinge was added to the back of the flat paddle surface, near the hinge location, to allow a small amount of torsional rotation. The thought behind this modification was to add a degree of freedom in the paddles to allow a better grasp on tapered objects such as cups, and to provide self-adjustment. Four main contact surfaces were intended for this gripper: the spherical area at the center of the paddles for spherical objects, the two round surfaces on both sides of the paddle for handling cylindrical and tapered objects, the two flat surfaces at the bottom and top of the paddles for handling rectangular and large objects, and the paddles' tips for handling small objects, switches, knobs and sheets of paper.

9.3.2. Actuation Mechanism

The driving mechanism was the next step in creating the gripper. As noted previously, the design was going to utilize four-bar linkages to allow the paddles to open and close in a parallel motion. The main reasons for this were to increase the contact surfaces between the gripper and the handled object, and to prevent these objects from slipping out of the grasp of the paddles due to the angular change in the contact surfaces


caused by simpler pin-joint gripper designs. By keeping the paddles parallel, a more predictable surface contact angle could be controlled, which allows larger objects to be grasped safely without the risk of being dropped. This modification was implemented and tested.

The first requirement for the gripper was for it to have a minimum gripping force of ten pounds and to be capable of traveling from a full open position of four inches to a closed position within approximately four seconds. The gripper was also required to have an onboard motor for modularity reasons. Utilizing an acme screw and a pull-nut setup would be adequate for power transmission; its compact size, relatively high variability in gear ratios, and its ability to lock the position without the use of a mechanical brake made it a good choice for the purpose of this gripper. For this design, a stainless steel 1/4-20 acme screw with a plastic nut was selected and was thought to be the best design for space conservation and overall weight conservation as well. The selected motor carries a relatively high torque-to-size ratio and, as a result, minimized the overall weight of the gripper dramatically. For the safety of the user, the handled object, and the mechanism, an adjustable slip clutch was attached to the acme screw to build up the gripping force based on how delicate the object is, and to prevent the torque in the motor from rising above the designed limit of the mechanism.

9.3.3. Component Selection

The selected components are as follows:

1. The Motor: A 24-volt DC coreless gearhead servo motor was selected since the wheelchair can supply that voltage from its batteries. The diameter of the


selected motor was 0.67 inches, with a length of 1.77 inches. This motor, made by Faulhaber, puts out a stall torque of 11.5 mNm with a maximum current of 190 mA and a maximum speed of 8000 rpm. The motor uses a 14:1 planetary gearhead and an optical encoder with 512 counts per revolution for feedback control. Figure 9.10 shows the motor assembly with the gearhead and the encoder.

Figure 9.10: The Selected Coreless Gearhead Servo Motor.

2. Acme Screw and Pull Nut: A stainless steel 20 thread-per-inch acme screw with a diameter of 0.25 inches was selected to transmit the motion from the motor to the linkages through a Delrin plastic pull nut. This helps in locking the mechanism when the motor is stopped, and it gives a proper conversion of the motor speed to the required torque for driving the system.

3. Slip Clutch: An adjustable 0 to 50 oz-in slip clutch was selected to build up the grip force and to slip in case the motor is still running when the required torque is reached. Figure 9.11 shows a drawing of the slip clutch.

4. Spur Gears and Flange Ball Bearings: Two spur gears made of anodized aluminum were selected, with a pitch of 0.25 inch, to transmit the motion from the motor shaft to the acme screw. A gear ratio of 2:1 is used, with a 36-tooth,


9.5 mm diameter gear on the motor shaft and a 72-tooth, 18.5 mm diameter gear on the acme screw.

Figure 9.11: The Selected Slip Clutch.

Figure 9.12 shows the actual driving mechanism after assembly. Aluminum was the main material used in building the housing and shield of the mechanism and the links. Other materials, including plastics and Teflon, were used as sleeves in the joints of the driving four-bar linkage mechanism.

Figure 9.12: The Assembled Actuation Mechanism.

When all the side panels are in place and the top cap of the housing seals the compartment of the spur gears, the operator is protected from getting any


external object caught in the driving mechanism. It also ensures proper protection of the motor and the small components driving the gripper from external dust and debris. Extensions with extra holes were added on both sides of the gripper's base for expandability, in case other devices such as a camera or a laser range finder need to be mounted to the gripper's base plate. Figures 9.13 and 9.14 show the final Pro/E drawing and the physical gripper with the involved components after assembly, respectively.

Figure 9.13: The New Gripper and the Actuation Mechanism Drawing.

Figure 9.14: The New Gripper and the Actuation Mechanism.
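Working backward from the travel requirement in the component list above gives the speeds the screw drive must support. A minimal sketch, assuming a single-start thread so that the 20 TPI screw advances the pull nut 1/20 inch per revolution:

```cpp
struct DriveSpeeds {
    double screw_rpm;      // screw speed needed for the required travel rate
    double gearInput_rpm;  // speed at the spur-gear input (2:1 reduction)
};

// Required drive-train speeds: linear rate = travel / time, screw
// revolutions per second = rate * threads-per-inch (single-start thread
// assumed), and the 2:1 spur reduction doubles the speed at its input.
DriveSpeeds requiredSpeeds(double travel_in, double time_s,
                           double threadsPerInch, double spurRatio) {
    double linear_in_per_s = travel_in / time_s;          // e.g. 4 in / 4 s
    double screw_rps = linear_in_per_s * threadsPerInch;  // rev per second
    DriveSpeeds d;
    d.screw_rpm = screw_rps * 60.0;
    d.gearInput_rpm = d.screw_rpm * spurRatio;
    return d;
}
```

With the stated requirement of four inches of travel in four seconds, this calls for 1200 RPM at the screw and 2400 RPM at the spur-gear input, which the motor and its gearhead must be able to supply.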


9.3.4. Final Design and Testing

Force analysis of the mechanism was accomplished by working from the paddles' contact surfaces through the mechanism linkages until reaching the electric motor. The force considered in the design was 10 pounds of gripping force at the contact surfaces of the gripper. The force from the paddle surfaces was then translated through the parallel four-bar linkages to the pull nut using static analysis. Teflon bushings were utilized in the hinges at this joint to reduce friction, which was nevertheless accounted for while calculating the forces. The pull-nut static calculations were used to determine the torque required on the acme screw to generate the force needed at the pull nut. This was accomplished relatively accurately by using the specifications offered by the manufacturer of the acme screw. Input-torque-per-output-force measurements were utilized when calculating the torque required within the acme screw. Ball bearings were used to support the acme screw for maximum efficiency. After calculating the torque needed in the acme screw, forces were determined at the teeth of the spur gears used in the mechanism. The required torque and speed of the motor were calculated by assuming a required minimum opening and closing time of 4 seconds with the given force at the gripper. A safety factor of 2 was used in selecting a motor for the required torque.

Figure 9.15 shows a close-up view of the gripper, attached to the newly designed 9-DoF WMRA system on a power wheelchair, holding a 2.5-inch-diameter ball. Several tests were conducted using the rapid-prototype models and test objects to ensure proper operation before the final design was reached. When the gripper machining was


completed and the gripper was assembled, actual grasping tasks commonly used in ADL were conducted.

Figure 9.15: The New Gripper When Holding a Spherical Object.

Another test showed the adjustability of the paddles to the grasped object, as shown in figure 9.16. A standard cup was the test object, demonstrating the adjustability of the paddles due to the added hinges that give them an extra degree of freedom to conform to the tapered object.

Figure 9.16: The New Gripper When Holding a Tapered Cup.

One of the main objectives intended for this gripper is the ability to handle different door handles. Figure 9.17 shows both kinds of handles, the lever handle and the


knob handle, commonly used on doors. These handles were used in this test to ensure proper operation.

Figure 9.17: The Gripper When Opening a Door with a Lever Handle (Left) and a Knob Handle (Right).

Another test, handling small objects and sheets of paper, was conducted. Figure 9.18 shows the gripper holding a business card using the tips of the paddles, without the need to fully close the other end of the gripper.

Figure 9.18: The New Gripper When Handling Small Objects.

Handling large objects can be challenging, depending on the geometrical complexity of the object. Figure 9.19 shows the gripper holding a box of heavy tools while moving it from one place to another. The two side curved surfaces and the middle spherical surface help in supporting odd objects in case complex shapes are handled.


Figure 9.19: The New Gripper When Handling Large and Heavy Objects.

9.4. Modification of a Standard Power Wheelchair

To install the newly designed components on a power wheelchair for a complete WMRA system, modifications had to be made to a standard power wheelchair, both in hardware and in control. The selected power wheelchair was the "Action Ranger X Storm Series". The wheelchair was modified by adding an incremental encoder on each of the wheels. The controller module of the wheelchair was also modified, using a TTL-compatible signal conditioner and a D/A converter, so that the signal going to the wheels can be controlled using the same PIC-Servo SC controllers used in the arm. The only difference is that the output from the control board used for the wheelchair is the PWM signal rather than the amplified analogue signal.

Since the wheelchair controller was sealed, and the manufacturers treat these controllers as proprietary components, we had to find a way to take over the control of the wheelchair. The best way to do this was by opening the joystick module and interfacing our control system with the joystick signal. The joystick sends two


independent analogue voltages to the wheelchair controller: one controls the forward speed of the wheelchair (i.e., both wheels at the same speed and direction), and the other controls the rotation of the wheelchair (i.e., both wheels at the same speed but opposite directions). The voltage sent is as follows:

1. A voltage of 0.4 volts corresponds to full positive speed.
2. A voltage of 2.6 volts corresponds to a stop and applies the brakes.
3. A voltage of 4.0 volts corresponds to full negative speed.

Any voltage between these values corresponds to slower motion of the wheels. The controllers used in our WMRA system are capable of supplying a pulse-width modulation (PWM) signal at 20 kHz. Changing the duty cycle changes the average of the signal. A circuit that converts a constant-frequency PWM signal to an analogue voltage is shown in figure 9.20. Two independent circuits like this can be connected between the WMRA controller and the joystick of the wheelchair so that the wheelchair can be controlled using the arm controller.

Figure 9.20: A Circuit Designed to Convert a Digital PWM Duty-Cycle Control Signal to an Analogue Signal.
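The voltage map above is worth writing out, because it is not symmetric: the span from stop to full forward is 2.2 V while the span from stop to full reverse is 1.4 V, so the command-to-voltage mapping is piecewise linear. The sketch below computes the target voltage from a normalized speed command and the corresponding PWM duty cycle; the PWM high level (`vSupply`) is an assumed parameter, not a value given in the text.

```cpp
#include <algorithm>

// Map a normalized speed command s in [-1, 1] to the joystick voltage the
// wheelchair controller expects: +1 -> 0.4 V (full forward), 0 -> 2.6 V
// (stop, brakes applied), -1 -> 4.0 V (full reverse). The two spans differ
// (2.2 V vs. 1.4 V), hence the piecewise form.
double commandToVolts(double s) {
    s = std::clamp(s, -1.0, 1.0);
    return (s >= 0.0) ? 2.6 - 2.2 * s
                      : 2.6 - 1.4 * s;  // s < 0 raises the voltage toward 4.0
}

// Duty cycle of the 20 kHz PWM whose RC-filtered average must equal the
// target voltage; vSupply is the PWM high level (an assumption here).
double dutyCycle(double targetVolts, double vSupply) {
    return std::clamp(targetVolts / vSupply, 0.0, 1.0);
}
```

Two such mappings, one for the forward-speed channel and one for the rotation channel, drive the two converter circuits of figure 9.20.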


A new controller box was designed to fit 12 controller boards, two power adapters, one converter, a cooling fan, and the connecting cables. Figure 9.21 shows the box before attaching it to the power wheelchair. The box was built so that it is easy to take off and put on the wheelchair, with quick connectors that can be disconnected from the arm and from the power supply to the battery.

Figure 9.21: The Designed Controller Box Installed on the Modified Wheelchair.

Another item attached to the power wheelchair was the quick-release mechanism, shown in figure 9.22, which is permanently attached to the power wheelchair and can quickly mount or dismount the designed robotic arm onto or off of the wheelchair. This mechanism allows the user to quickly detach the arm if the wheelchair needs to be transported in a small container or a minivan that does not fit the WMRA system. Cable connectors extending from the robotic arm to the controller box are designed to quickly disengage the power and logic to and from the arm and the controller box. This also allows for easier portability, such that an average person can carry the arm.


Figure 9.22: The Quick-Release Mechanism that Mounts the Robotic Arm on the Wheelchair.

9.5. Controller Hardware

The controller hardware was designed to control all joint servo motors simultaneously through an amplified analogue signal, and to control the motors of the differential drive of the wheelchair through a PWM signal. Wiring of the boards to the individual motors was done using quick-release sockets.

9.5.1. Controller Boards

PIC-Servo SC controllers that support the DC servo actuators were chosen, as shown in figure 9.23. At 5 cm x 7.5 cm, each unit has a microprocessor that drives the built-in amplifier with a PWM signal, handles PID position and velocity control, communicates over RS-485, and can be daisy-chained with up to 32 units. It also reads encoders, limit switches, and an 8-bit analogue input, and it supports coordinated motion control. Each joint controller is individually addressable and can be controlled in


position, velocity, or current (torque) mode. In position mode, velocity and acceleration limits may be specified for smooth operation. Data for the entire arm is interfaced to the main computer using a single serial link. The PIC-Servo SC controllers use RS-485, and a hardware converter interfaces this with the RS-232 port or a USB port on the host PC. The current host PC is an IBM laptop running Windows XP. However, the communications protocol is simple and open, and it could be adapted to virtually any hardware/software platform with an RS-232 or a USB port. These controller boards were all connected to the computer using a single cable.

Figure 9.23: JRKERR PIC-Servo SC Controller Boards.

9.5.2. Communication and Wiring

As shown in figure 9.24, PIC-Servo SC controllers (C1 through C7) that support the DC servo actuators (J1 through J7) were integrated in the control box. The logic of the boards runs on 12 V DC power converted from the wheelchair's batteries.
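The single-link, addressable arrangement can be sketched on the host side: every controller on the daisy chain has a unique address, and each command frame on the shared bus names its target. The frame layout below is a hypothetical illustration only, NOT the actual PIC-Servo SC wire protocol; it shows the addressing idea, not the byte format.

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

// Control modes supported per joint controller.
enum class Mode : std::uint8_t { Position, Velocity, Torque };

// Hypothetical command frame for one controller on the shared serial link.
// Every frame carries the target board's address (the chain supports up to
// 32 units), so one bus serves the whole arm.
struct CommandFrame {
    std::uint8_t address;   // 1..32, one per daisy-chained board
    Mode mode;
    std::int32_t setpoint;  // encoder counts, counts/tick, or current units
};

// Build one position frame per joint for a full-arm pose command,
// addressing boards 1..N in chain order.
std::vector<CommandFrame> poseCommand(const std::vector<std::int32_t>& counts) {
    std::vector<CommandFrame> frames;
    for (std::size_t j = 0; j < counts.size(); ++j)
        frames.push_back({static_cast<std::uint8_t>(j + 1),
                          Mode::Position, counts[j]});
    return frames;
}
```

In practice the frames would be serialized per the vendor's protocol and written to the single RS-232/USB-to-RS-485 converter; coordinated motion then only requires stamping all seven frames into one update cycle.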


Figure 9.24: Control System Circuitry.

The seven motors that actuate the robotic arm were connected via one serial port, as shown in figure 9.25 (left), and the single servo motor used for the gripper was connected to the controller boards using a different serial port, as shown in figure 9.25 (right). The circuit that connects the wheelchair encoders to the controller boards is shown in figure 9.26. A toggle switch will be added to the joystick supplied with the wheelchair so that it can still be used if the user wants to drive the wheelchair normally.

9.5.3. Safety Measures

Two safety measures were added to the hardware of the WMRA system. The first is a panic stop button, situated under the right elbow of the user, that is connected to cut the motor power supply from its battery source without shutting off the logic power.
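The panic-stop semantics can be captured in a few lines: pressing the button cuts only the motor power, while the logic power stays on so that state survives for diagnostics and a restart. This is a behavioral sketch with illustrative method names, not the actual firmware.

```cpp
// Behavioral model of the panic stop: motor power is switched off
// independently of logic power, so the controllers keep running and the
// system can be diagnosed or brought back up without a full reboot.
class PowerControl {
public:
    bool motorPowerOn() const { return motor_; }
    bool logicPowerOn() const { return logic_; }
    void panicStop() { motor_ = false; }  // cut motors; logic stays powered
    void resume()    { motor_ = true; }   // bring the system back up
private:
    bool motor_ = true;
    bool logic_ = true;
};
```

Keeping the two supplies separate is what allows the diagnostic-then-resume behavior described in the next paragraph of the text.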


This way, the system can be brought back up, or a diagnostic procedure can detect any problems that may have occurred. The second safety feature is a timer that cuts off the power to the motors and to the logic circuits so that the batteries of the power wheelchair conserve energy in case the user forgets to shut the system off.

Figure 9.25: Serial Port Connection of the Joint Motors (Left) and the Gripper (Right).

9.6. Experimental Testing

The newly designed WMRA was put to the test in its early stages, as soon as the robotic arm was ready for testing. Even though the wheelchair modification was still ongoing, we


were able to run the control algorithm to move the arm alone, since this was one of the user options in the control software. Figure 9.27 shows the WMRA system, with the Barrett hand and a video camera installed, being used by a person affected by Guillain-Barre syndrome. In her case, she was able to use both of the computer interfaces.

Figure 9.26: Wheelchair Encoders and Control Communications.

The robotic arm was also tested with an able-bodied human subject using the Brain-Computer Interface (BCI-2000) with the newly designed gripper, as shown in figure 9.28. The user was able to move the robotic arm without touching any of the controls, by looking at the feedback screen and counting the number of flashes of the particular direction or choice displayed on the screen. This was a successful test of an interface that nevertheless encountered some unanticipated problems. When the user sits on the wheelchair, within the electromagnetic field of all the running wires inside the WMRA system, the BCI sensors pick up a great deal of noise and magnify it along with the brain signal. This reduced the accuracy of the user's choice recognition,


but it was good enough for him to execute the task he was trying to do without the need to step off the wheelchair. This noise might be reduced if the BCI-2000 gains were trained on the user while sitting on the wheelchair.

Figure 9.27: A Person with Guillain-Barre Syndrome Driving the New WMRA System.

Another user interface tested with this WMRA system was the touch screen. It was one of the most convenient control interfaces we tested, provided the user is able to hold the stylus pen and touch the screen icons with it. In autonomous mode, the arm was also tested by commanding the controller to drive it from one point to another along a specific trajectory; it moved the arm along that trajectory and returned to the ready position as needed. Other physical tests conducted include the use of end-effector Cartesian velocity profile inputs to move the arm for a specified period of time. It is noteworthy to mention that the electronics used to control the WMRA system occasionally stop responding, at random and for no apparent reason, without specific triggering conditions. Overall, the system functioned as designed, and it was able to execute ADL tasks with different user interfaces. More field testing will be conducted later, and the results will be published.


Figure 9.28: A Human Subject Testing the BCI-2000 Interface with the WMRA System.

9.7. Summary

In this chapter, the design of a new 7-DoF robotic arm was presented. The component selection was discussed, and the final product testing was described along with the specifications of the device. A new gripper designed specifically for activities of daily living was presented. The special claw design procedure and features were presented; each feature serves a specific use for task execution and grasping. The actuation mechanism of the gripper was designed, and proper components were selected to provide sufficient power to grasp the desired objects.


A standard power wheelchair was modified to hold the robotic arm, the controller box, and the associated hardware. Two optical encoders were added to the wheels of the power wheelchair to give feedback to the controller when moving the wheelchair for closed-loop control. The controller hardware that controls the seven joint motors of the robotic arm, the two wheels of the power wheelchair, and the motor of the gripper was shown. This controller is capable of running up to 32 controller boards that are daisy-chained to form a single interface to the computer. Operator safety was addressed by adding a panic stop button and a timer to turn the system power off.

Testing of the WMRA with human subjects was conducted to ensure proper operation of the system. Several user interface options were tested as well, and the results were satisfactory.
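The closed-loop use of the two wheel encoders amounts to standard differential-drive dead reckoning. The following sketch shows one way to integrate incremental encoder counts into a wheelchair pose estimate; the counts-per-revolution, wheel radius, and track width are assumed placeholder values, not the actual WMRA hardware constants:

```cpp
#include <cmath>

// Pose of the wheelchair in the plane: position (x, y) and heading phi.
struct Pose { double x, y, phi; };

// Update the estimated pose from the incremental encoder counts of the two
// drive wheels. The defaults (counts per revolution, wheel radius in meters,
// track width in meters) are illustrative, not the WMRA's measured values.
Pose odometryUpdate(Pose p, long dLeftCounts, long dRightCounts,
                    double countsPerRev = 2048.0,
                    double wheelRadius  = 0.17,
                    double track        = 0.56) {
    const double metersPerCount = 2.0 * M_PI * wheelRadius / countsPerRev;
    const double dL = dLeftCounts  * metersPerCount;
    const double dR = dRightCounts * metersPerCount;
    const double ds   = 0.5 * (dR + dL);    // distance traveled by the center
    const double dphi = (dR - dL) / track;  // change in heading
    // First-order midpoint integration of the non-holonomic kinematics.
    p.x   += ds * std::cos(p.phi + 0.5 * dphi);
    p.y   += ds * std::sin(p.phi + 0.5 * dphi);
    p.phi += dphi;
    return p;
}
```

The controller compares this estimated pose against the commanded wheelchair trajectory to close the loop.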


Chapter 10: Conclusions and Recommendations

10.1. Overview

Extensive analysis was conducted to combine the WMRA's 7-DoF arm and the wheelchair's mobility in the new redundant 9-DoF system. This redundancy was used and optimized to improve manipulation capabilities for activities of daily living (ADLs) and to avoid obstacles, joint limits, and singularities. The new system was capable of executing pre-set tasks autonomously as well as in teleoperation mode. A real-time controller was developed and implemented to provide high-frequency inverse kinematics update rates and real-time sensory feedback for effective closed-loop control of the WMRA system. The control algorithm was implemented in a Virtual Reality simulation to test its ability to provide a good and comprehensive control structure that can be used by persons with disabilities.

A newly built modular WMRA was used. It was developed based on the manipulation trajectories needed for activities of daily living. This WMRA utilized an optimized controller for both the robotic arm and the power wheelchair. A standard power wheelchair was modified to include PC-based control and sensory feedback.


A keypad, a Spaceball, a touch screen, and a Brain-Computer Interface (BCI) were used as modular user interfaces, with different capabilities for each input device to fit individual user needs and capabilities. Future testing will determine the appropriate interface needed for a specific disability. Higher-level control algorithms were developed to interface the sensory data and the user input for easy control of the system.

Testing procedures were developed for both simulation and experimental testing on the developed testbed. That testbed was created to conduct the necessary testing of the system in realistic environments (home, office, etc.). Several US patents were planned for many parts of this work.

10.2. General Discussion

A 7-DoF robotic arm and a 2-DoF non-holonomic wheelchair were mathematically modeled for kinematic control. A combination of the two mathematical models created a new 9-DoF redundant manipulator that combined mobility and manipulation. The control system was designed for coordinated Cartesian control with singularity robustness and task-optimized combined mobility and manipulation. The Weighted Least Norm solution was implemented, among others, for preference control of each of the joints and the wheelchair's position and orientation.

The control algorithm utilized redundancy to optimize the motion based on different criteria functions or user-defined weights of preference. It was noticed that the use of conventional optimization methods resulted in unintended motion that may turn into an undesirable move, potentially harming the human user or the WMRA hardware. These methods add an optimization term that can still be active even when the user is not


commanding the arm to move. Even though these conventional methods were kept in the control algorithm for the user to choose, it was found that the Weighted Least Norm solution with the new modifications added to it gave the most predictable and robust control algorithm, resulting in smooth motion with joint limit avoidance and user-preferred motion weights.

A wheelchair-mounted robotic arm (WMRA) was designed and built to meet the needs of mobility-impaired persons, and to exceed the capabilities of current devices of this type. The mechanical design incorporates DC servo drives with actuators at each joint, allowing reconfigurable link lengths and thus greater adaptability to a range of workspaces. Nine principal degrees of freedom allow full pose control of both the wheelchair and the arm. The control electronics used are capable of controlling up to 32 devices when daisy-chained together, and of reading different sensory feedback and supplying it to the control software. The reliability of these electronics proved unpredictable, since they showed some failures with no specific cause or pattern for diagnostics.

A new gripper was designed specifically to be used for activities of daily living. The design includes two ergonomic claws with designated surfaces for handling specific shapes and objects. The actuation mechanism was designed to be light, effective, and safe at the same time. Interfacing the gripper with the robotic arm and controlling it using the same controllers used to control the arm were among the features included in the design and component selection process.

Modularity at both the hardware and software levels allows multiple input devices to be used to control the system. User interfaces include the Spaceball, the keyboard and
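The Weighted Least Norm resolution referred to above, combined with singularity-robust (S-R) damping, can be sketched for a single task coordinate, where the damped term J W⁻¹ Jᵀ + λ² reduces to a scalar. The weights and damping factor below are illustrative, not the dissertation's tuned values:

```cpp
#include <vector>
#include <cstddef>

// Weighted least-norm (WLN) joint-velocity resolution with singularity-
// robust damping, specialized to one task coordinate for clarity:
//   qdot = W^-1 J^T (J W^-1 J^T + lambda^2)^-1 xdot
// For a single task row, J W^-1 J^T is the scalar sum of J_i^2 / w_i.
std::vector<double> wlnDamped(const std::vector<double>& J,  // 1 x n Jacobian row
                              const std::vector<double>& w,  // joint weights (diag of W)
                              double xdot,                   // commanded task velocity
                              double lambda = 0.05) {        // S-R damping factor
    double s = lambda * lambda;
    for (std::size_t i = 0; i < J.size(); ++i)
        s += J[i] * J[i] / w[i];               // J W^-1 J^T + lambda^2
    std::vector<double> qdot(J.size());
    for (std::size_t i = 0; i < J.size(); ++i)
        qdot[i] = (J[i] / w[i]) * (xdot / s);  // heavily weighted joints move less
    return qdot;
}
```

Raising a joint's weight (e.g., near its limit) penalizes its motion and shifts the commanded velocity onto the remaining joints, while the damping term keeps the inverse well-behaved near singularities.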


mouse, a touch screen, or the Brain-Computer Interface (BCI) used for people with disabilities. Any other preferred device can be added easily, since the control software is flexible enough to accommodate other user-interface hardware.

Simulation testing of the new control algorithm was conducted using Matlab and the Virtual Reality toolbox, among other C++ programs. Modular functions with proper interfaces were designed to ease the addition of any future developments that might be needed. The results showed a powerful method of controlling this 9-DoF combined WMRA system. Simulation results were also shown to emphasize the effectiveness of the methods.

The use of simulation and hardware testing showed successful integration between mobility and manipulation, both mathematically and in the real application. When tested with the actual robotic arm, there were some unpredictable moments of unreliability between Matlab functions and C++ functions that resulted in the loss of motion on a few occasions. This may be due to the lack of good compatibility between the Matlab program and the DLL library that was compiled using C++. This problem can very likely be resolved if the control software of the actual WMRA is implemented separately in a program written entirely in C++.

The following is a list of the major contributions made in this dissertation:
1. Design and development of a 9-DoF wheelchair-mounted modular robotic arm system.
2. Design and development of an ergonomic gripper to be used for ADL tasks.
3. Development of a complex inverse kinematics algorithm that combines the mobility of non-holonomic motion and the manipulation of redundant


manipulators for a complex 9-DoF control system with all the associated details.
4. Expansion of the WLN method with the S-R inverse for a new control method that is robust and reliable in real applications.
5. Utilization of redundancy for joint limit avoidance of such complex systems through optimization.
6. Development of a powerful and modular simulation tool using Virtual Reality and a friendly graphical user interface to model and simulate the 9-DoF WMRA system with theory implementation.
7. Implementation of the theory on the actual WMRA hardware, and the resolution of all communication and interface challenges.
8. Use of the BCI system to control WMRAs for people with severe disabilities.

10.3. Recommendations

Returning to the control method, it would further enhance the control algorithm if an accurate mathematical model of the human subject, along with the wheelchair and the surrounding obstacles, were available. This way, the WMRA motion would change the configuration to avoid these objects rather than stop the system. The implemented safety measures and conditions can still be kept in case the system comes close to a singularity or goes out of control.

On the hardware side, better control boards may eliminate the occasional unpredictable failure of the WMRA system to respond. While the daisy-chaining


capabilities of these control boards give the system a better connection to the computer through a single wire, performance was noticeably affected. When the WMRA system was commanded to execute a task, the commanded joint positions and velocities were sent to the boards for execution. The problem was that the command propagates from the first board, which moves its joint, down to the last board, and by the time the last joint moves, the first joint has already finished its motion. This introduced uneven periods of motion among the joints. A single, more compact board controlling all ten motors may replace the ten boards currently in use whenever such boards become commercially available. It is also recommended to change the flexible coupling in joint 6 to a rigid coupling to avoid slippage.

On the simulation side of this work, the physical WMRA system control may be separated from the simulation control in order to convert the control software to C++ only, rather than the current combination of Matlab and C++. While Matlab's Virtual Reality simulation is impressive, using it to control the robotic arm in a conventional operating system introduced undesirable delays. Separating the two would allow the user to operate the actual WMRA system more efficiently, and at the same time allow the system to work in a powerful virtual environment for testing and development of new additions to the system.

The user interfaces were also C++ applications under the Windows operating system. When used with Matlab, it was necessary to interface the two programming packages, either by intermediate programs that link the information between the two packages, or by assigning virtual ports and sending the data and information through


these ports. In both cases, delays were introduced in the system, and sometimes it even froze the computer.

The current control system is sufficient for low velocities, which is what persons with disabilities need to perform their ADL tasks. Expanding the mathematical representation of the WMRA system to include a full dynamic model and gravity compensation can help the user perform other tasks that require high velocities, such as sports and recreational activities.


Chapter 11: Future Work

11.1. Introduction

This WMRA system has the potential to be one of the leading assistive device projects in the country. Several patents were planned for many aspects of this work. Commercializing this WMRA system would benefit many people with disabilities who find themselves physically dependent on others who may or may not be willing to provide the best help possible. Several steps can be taken to ensure an effective system that can be widely used by many wheelchair-bound individuals. This chapter gives a glimpse of what can be done in the future to add more capabilities and ease of use to this WMRA system.

11.2. Quick Attach-Detach Mechanism

This is ongoing work aiming to allow the robotic arm and the controller hardware to be detached and attached quickly with minimal effort. The idea is to allow a single individual to attach or detach the robotic arm and the controller box, with all the wires and cables, to and from the wheelchair. It was noticed with both Manus and Raptor that when they are attached to a power wheelchair, transportation of the wheelchair becomes a problem, even when a power lift is used with a custom-


designed wheelchair transportation van. Manus already has a quick-release mechanism to detach the arm from its hosting wheelchair. The ongoing efforts in this project will employ a mechanism that attaches the arm to the wheelchair using the weight of the arm to slide it into place and lock the system solidly. Quick-connect cables are also designed to remove the cables from the controller box and detach the whole control system from the wheelchair. This will allow the user to transport the wheelchair and the robotic arm independently and with minimal effort.

11.3. A Single Compact Controller

In the current design, each motor to be controlled uses a dedicated controller board to send the commanded position and velocity to that motor. This has resulted in ten controller boards so far: seven for the seven robotic joints, two for the wheelchair, and one for the gripper. An additional board was used to carry the signals from all ten daisy-chained controller boards to the computer's serial port or USB port on a single cable.

Currently, we are seeing huge advancements in microelectronics and microprocessors. A single board capable of simultaneously controlling all ten motors without the serial connection delays would give better performance to the whole system. It would also affect the size of the controller box. The current controller-board box is significant in size, housing the ten control boards, a converter board, and a power adapter, and was mounted under the seat cushion of the power wheelchair. A significantly smaller box may eliminate the need to mount it on the wheelchair and allow it to be kept on the robotic arm itself. This would reduce the need for many connectors and


cables between the wheelchair and the robotic arm, and it would certainly make the system lighter and more self-sufficient.

11.4. Sensory Suite

It is essential for an intelligent robotic system to carry many different sensors in its sensory suite. In this WMRA system, ten optical encoders were installed for joint angle measurements. A laser range finder and a digital camera are two other sensors to be added. The laser range finder will be used for object following or for determining object coordinates based on the read distance and the current orientation of the laser range finder. The camera will be used for navigation feedback to the user as well as for object recognition and tracking in the plane.

Other sensors, such as proximity sensors, will be used for on-line obstacle avoidance and for guidance through narrow pathways. Force-torque sensors will be added at the gripper's base to provide force feedback and enhance the manipulation of different objects.

11.5. Real-Time Control

When operating the WMRA system under the Windows XP operating system, time and priority assignments are uncontrollable by the programmer. Performing the control under a real-time operating system such as QNX allows the programmer to control the priorities and set priority rules to operate the WMRA as well as any other software or hardware used by the computer. This will enhance the response time of the system and let the
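The object-coordinate computation from the laser range finder can be sketched as a spherical-to-Cartesian conversion using the read distance and the sensor's orientation. The frame convention (x forward, y left, z up) and the angle names below are assumptions for illustration; the real system would further transform the point through the WMRA's forward kinematics into the world frame:

```cpp
#include <cmath>

struct Point3 { double x, y, z; };

// Convert a laser range-finder reading into object coordinates in the
// sensor base frame, given hypothetical pan and tilt angles (radians).
// Frame convention and angle names are assumed for illustration only.
Point3 laserTarget(double range, double pan, double tilt) {
    const double c = std::cos(tilt);
    return { range * c * std::cos(pan),   // forward
             range * c * std::sin(pan),   // left
             range * std::sin(tilt) };    // up
}
```

With the sensor pose known from the wheelchair encoders and arm joint angles, this point can be expressed in the world frame and handed to the trajectory planner.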


programmer run the system at higher frequencies without compromising the accuracy of the WMRA system or the operating system.

11.6. Bluetooth Wireless Technology for Remote Wireless Teleoperation

Bluetooth wireless is being integrated into the system to add remote wireless teleoperation, so that the user can perform some ADL tasks while not seated on the wheelchair. The current USB and serial connections between the WMRA system and the control software on the tablet PC will be made wireless through special adaptors that convert the signal from its current protocol to the Bluetooth protocol, and back at the computer terminal. This will allow the user to detach the tablet PC from the wheelchair when he/she is not using it.

For instance, if a user with severe disabilities wakes up in the morning in need of a drink or a snack, that person will usually wait until the designated aide or family member comes to the room for assistance. The ability to control the WMRA system remotely allows him/her to drive the WMRA around the house by operating the system through a selected user interface and looking at the camera view through the tablet PC monitor. When the fridge is reached, the operator would be able to use the arm to open the door, get the desired drink, and come back to the room with no need to wait for a human aide.

11.7. Sensor Assist Functions (SAFs)

Sensor Assist Functions (SAFs) will be used to assist or resist the user's motion based on the trajectory generated to execute the intended task and the motion input coming from the user interface. Velocity-scaling teleoperation, force reflection, varying


impedance parameters, and visual servoing for object grasping will be used to enhance the manipulation capabilities of persons with disabilities. When using a haptic device, the user can be trained to perform better in ADL tasks by starting with the assist functions and slowly releasing them over time as the user gets used to proper control.

11.8. Pre-Set ADL Tasks

Programming several tasks into the current WMRA system is quite straightforward. The plan is to program commonly used tasks for each individual using the customized WMRA system so that these tasks can be done autonomously as the user selects them. Several tasks will be included in this WMRA system, as follows:
1. Turning switches on and off.
2. Operating an oven, washer, dryer, microwave, dishwasher, etc.
3. Opening and going through spring-loaded doors when the door dimensions follow common standards.
4. Object-following tasks, in which the camera and/or the laser range finder guide the WMRA system to autonomously follow an object or human.
5. Inserting CDs/diskettes into the computer or the CD player.

These pre-programmed ADL tasks are among many others that can be programmed to execute at the user's request. A good scenario for such tasks is when a user is in a hallway or a room and would like to exit through a spring-loaded door. The user can point the attached laser range finder at the door handle and press the assigned button. The control system will start the autonomous mode while keeping the teleoperation mode running. The autonomous operation will start by


calculating the coordinates of the door knob from the distance given by the laser and the laser orientation given by the optical encoders and the forward kinematics of the WMRA system. Once the door knob location is fully defined, the WMRA control system moves the wheelchair into close proximity to the door at a certain pre-calculated angle, reaches the door knob using the arm, grasps it using the end-effector, opens the door, and backs the wheelchair out of the doorway with a resultant circular motion at the gripper to match the door handle trajectory while opening. The system can then advance the wheelchair while holding the door open with the robotic arm, drive the wheelchair through the door until it is clear, and then release the door. The autonomous mode will then stop until the next pre-set operation is requested.
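The circular gripper motion that matches the door-handle trajectory can be sketched by sampling waypoints on an arc about the door hinge. The hinge location, handle radius (hinge-to-handle distance), and swing angle below are hypothetical inputs, not measurements from the test environment:

```cpp
#include <cmath>
#include <vector>

struct Point2 { double x, y; };

// Sample gripper waypoints along the arc the door handle traces while the
// door swings about its hinge. Hinge position, handle radius, and swing
// angle are illustrative inputs for this sketch, not measured values.
std::vector<Point2> doorHandleArc(Point2 hinge, double radius,
                                  double startAngle, double swing,
                                  int steps) {
    std::vector<Point2> path;
    path.reserve(steps + 1);
    for (int i = 0; i <= steps; ++i) {
        double a = startAngle + swing * i / steps;  // interpolate swing angle
        path.push_back({ hinge.x + radius * std::cos(a),
                         hinge.y + radius * std::sin(a) });
    }
    return path;
}
```

Feeding these waypoints to the combined 9-DoF resolution produces the coordinated wheelchair-plus-arm motion that keeps the gripper on the handle's arc while the chair backs away.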



PAGE 270

249 Appendices

PAGE 271

250 Appendix A. Hardware Components A.1. Robotic Arm Gear Motors with Encoders

PAGE 272

Appendix A. (Continued) 251

PAGE 273

Appendix A. (Continued) 252 A.2. Harmonic Drive Gearheads


A.3. Wheelchair Selected Encoders


A.4. Wheelchair Selected Friction Wheels


A.5. Gripper’s Actuation Motor


A.6. Gripper’s Planetary Gearhead


A.7. Gripper’s Optical Encoder


A.8. Gripper’s Spur Gears


A.9. Gripper’s Slip Clutch


A.10. PIC Servo SC Motion Controller Board


A.11. SSA-485 Smart Serial Adapter


Appendix B. Matlab Programs and Functions

B.1. VRML File of the Virtual Reality Control Code

#VRML V2.0 utf8
#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#%%%%%%%%%%%%%%%%%%%%%%%%% COPYRIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%%
#%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
NavigationInfo { type "EXAMINE" speed 30 avatarSize [ 1, 0, 0 ] headlight TRUE }
DEF WMRAROBOT Group { children [
  Group { children [
    DEF EXT_SETTINGS Group { children [
      WorldInfo { title "Wheelchair Mounted Robotic Arm, By: Redwan Alqasemi, USF 2007" },
      NavigationInfo { type "EXAMINE" avatarSize 180 visibilityLimit 200 speed 1000 },
      Background {
        groundColor [ 0.8 0.7 0.1 0.8 0.7 0.1 ] groundAngle [ 1.57 ]
        skyColor [ 0 0 1 0 0.5 1 0 0.5 1 0.5 0.5 0.5 1 0.5 0 ] skyAngle [ 1 1.15 1.35 1.57 ]
        #topUrl "cloud.jpg"
      },
      DEF DynamicView Transform { rotation 0 1 0 0 translation 0 0 0 children [
        Viewpoint { description "a_start" position 2500 500 1800 orientation 0 1 0 0.8 jump FALSE },
        Viewpoint { description "a_far" position 900 6000 -200 orientation -0.601 -0.547 -0.582 2.172 jump FALSE },
        Viewpoint { description "a_bk-lt-up" position -1300 1600 -1600 orientation -0.1 -1 -0.25 2.4 jump FALSE },
        Viewpoint { description "a_bk-lt-dn" position -1400 400 -1800 orientation 0.025 -1 0.037 2.4 jump FALSE },
        Viewpoint { description "a_ft-lt-up" position 1600 1800 -1400 orientation -0.1 0.9 0.25 2.4 jump FALSE },
        Viewpoint { description "a_ft-lt-dn" position 1700 400 -1600 orientation 0.031 1 -0.052 2.4 jump FALSE },
        Viewpoint { description "a_ft-rt-up" position 1600 1900 1500 orientation -0.4 0.5 0.14 0.85 jump FALSE },
        Viewpoint { description "a_ft-rt-dn" position 1700 300 1900 orientation 0.191 1 -0.075 0.615 jump FALSE },
        Viewpoint { description "a_bk-rt-up" position -1700 1700 1700 orientation -0.25 -0.5 -0.12 1 jump FALSE },
        Viewpoint { description "a_bk-rt-dn" position -1800 500 1900 orientation 0.116 -1 0.021 0.818 jump FALSE },
        Viewpoint { description "a_birdeye" position -1100 4900 -1900 orientation -0.56 -0.72 -0.4 2.2 jump FALSE },
        Viewpoint { description "a_top" position 200 3100 0 orientation -0.577 -0.577 -0.577 2.1 jump FALSE },
      ]}
      Viewpoint { description "top" position 200 3100 0 orientation -0.577 -0.577 -0.577 2.1 jump FALSE },
      Viewpoint { description "birdeye" position -1100 4900 -1900 orientation -0.56 -0.72 -0.4 2.2 jump FALSE },
      Viewpoint { description "bk-rt-dn" position -1800 500 1900 orientation 0.116 -1 0.021 0.818 jump FALSE },
      Viewpoint { description "bk-rt-up" position -1700 1700 1700 orientation -0.25 -0.5 -0.12 1 jump FALSE },
      Viewpoint { description "ft-rt-dn" position 1700 300 1900 orientation 0.191 1 -0.075 0.615 jump FALSE },
      Viewpoint { description "ft-rt-up" position 1600 1900 1500 orientation -0.4 0.5 0.14 0.85 jump FALSE },
      Viewpoint { description "ft-lt-dn" position 1700 400 -1600 orientation 0.031 1 -0.052 2.4 jump FALSE },
      Viewpoint { description "ft-lt-up" position 1600 1800 -1400 orientation -0.1 0.9 0.25 2.4 jump FALSE },
      Viewpoint { description "bk-lt-dn" position -1400 400 -1800 orientation 0.025 -1 0.037 2.4 jump FALSE },
      Viewpoint { description "bk-lt-up" position -1300 1600 -1600 orientation -0.1 -1 -0.25 2.4 jump FALSE },
      Viewpoint { description "far" position 900 6000 -200 orientation -0.601 -0.547 -0.582 2.172 jump FALSE },
      Viewpoint { description "start" position 2500 500 1800 orientation 0 1 0 0.8 jump FALSE },
      DEF GROUND Transform { rotation 1 0 0 0 translation 0 0 0 children [
        Shape {
          geometry Box { size 5000 1 5000 }
          appearance Appearance {
            texture ImageTexture { url "9_Z_Ground.jpg" repeatS TRUE repeatT TRUE }
            textureTransform TextureTransform { rotation 0 center 0 0 translation 0 0 scale 3 3 }
          }
        }
      ]},
    ]}
  ]}
  # Transforming the wheelchair world coordinate system to the VR's world coordinate system:
  DEF World Transform { rotation 1 0 0 -1.5707963 translation 0 0 0 children [
    DEF Chair Transform { rotation 0 0 1 0 translation -440 -230 168 children [
      # DEF WCR SphereSensor {}
      # DEF WCT PlaneSensor { minPosition -400 0 maxPosition 400 0 }
      Group { children [ Inline { url "0_Chair.wrl" }
        DEF LWheel Transform { rotation 0 1 0 0 translation 0 0 0 children [
          # DEF LW CylinderSensor { diskAngle 0 minAngle 1.5707963 maxAngle 1.5707963 }
          Group { children [ Inline { url "0_LWheel.wrl" } ]}
        ]}
        DEF RWheel Transform { rotation 0 1 0 0 translation 0 0 0 children [
          # DEF RW CylinderSensor { diskAngle 0 minAngle 1.5707963 maxAngle 1.5707963 }
          Group { children [ Inline { url "0_RWheel.wrl" } ]}
        ]}
        DEF ARM1 Transform { rotation 1 0 0 1.5707963 translation 440 220 139 children [
          # DEF JOINT1 CylinderSensor { diskAngle 0 minAngle 1.5707963 maxAngle 1.5707963 }
          Group { children [ Inline { url "1.wrl" }
            DEF ARM2 Transform { rotation 0 0 -1 1.5707963 translation 0 42.69 -75.1 children [
              # DEF JOINT2 CylinderSensor { diskAngle 0 minAngle -1.5708 maxAngle 1.5708 }
              Group { children [ Inline { url "2.wrl" }
                DEF ARM3 Transform { rotation 0 1 0 1.5707963 translation -1.73 75.08 -42.7 children [
                  # DEF JOINT3 CylinderSensor { diskAngle 0 minAngle -3.1416 maxAngle 3.1416 }
                  Group { children [ Inline { url "3.wrl" }
                    DEF ARM4 Transform { rotation 0 0 -1 0 translation -2.92 42.64 -75.08 children [
                      # DEF JOINT4 CylinderSensor { diskAngle 0 minAngle -3.1416 maxAngle 3.1416 }
                      Group { children [ Inline { url "4.wrl" }
                        DEF ARM5 Transform { rotation 0 1 0 1.5707963 translation -11.45 74.85 -423.58 children [
                          # DEF JOINT5 CylinderSensor { diskAngle 0 minAngle -3.1416 maxAngle 3.1416 }
                          Group { children [ Inline { url "5.wrl" }
                            DEF ARM6 Transform { rotation 0 0 -1 1.5707963 translation -2.17 45.99 -75.1 children [
                              # DEF JOINT6 CylinderSensor { diskAngle 0 minAngle -3.1416 maxAngle 3.1416 }
                              Group { children [ Inline { url "6.wrl" }
                                DEF ARM7 Transform { rotation 0 1 0 1.5707963 translation -2.92 -61.52 -161.49 children [
                                  # DEF JOINT7 CylinderSensor { diskAngle 0 minAngle -1.5708 maxAngle 1.5708 }
                                  Group { children [ Inline { url "7.wrl" }
                                    DEF ARM8 Transform { rotation 0 0 -1 0 translation -1.78 61.39 -192.29 children [
                                      # DEF JOINT8 CylinderSensor { diskAngle 0 minAngle -3.1416 maxAngle 3.1416 }
                                      Group { children [ Inline { url "8.wrl" } ]}
                                    ]}
                                  ]}
                                ]}
                              ]}
                            ]}
                          ]}
                        ]}
                      ]}
                    ]}
                  ]}
                ]}
              ]}
            ]}
          ]}
        ]}
      ]}
    ]}
    # ROUTE WCT.translation_changed TO Chair.set_translation
    # ROUTE WCR.rotation_changed TO Chair.set_rotation
    # ROUTE LW.rotation_changed TO LWheel.set_rotation
    # ROUTE RW.rotation_changed TO RWheel.set_rotation
    # ROUTE JOINT1.rotation_changed TO ARM1.set_rotation
    # ROUTE JOINT2.rotation_changed TO ARM2.set_rotation
    # ROUTE JOINT3.rotation_changed TO ARM3.set_rotation
    # ROUTE JOINT4.rotation_changed TO ARM4.set_rotation
    # ROUTE JOINT5.rotation_changed TO ARM5.set_rotation
    # ROUTE JOINT6.rotation_changed TO ARM6.set_rotation
    # ROUTE JOINT7.rotation_changed TO ARM7.set_rotation
    # ROUTE JOINT8.rotation_changed TO ARM8.set_rotation
  ]}
]}


B.2. Matlab Functions Listed Alphabetically

% This "new USF WMRA" function SIMULATES the arm going from any position to the ready position with ANIMATION. All angles are in Radians.
% The ready position is assumed to be qd=[pi/2;pi/2;0;pi/2;pi/2;pi/2;0] (Radians).
% ini=1 --> initialize animation figures, ini=2 or any --> just update the figures, ini=3 --> close the figures.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%% COPYRIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Function Declaration:
function WMRA_any2ready(ini, vr, ml, arm, Tiwc, qi)

% Closing the Arm library, Matlab Graphics Animation, Virtual Reality Animation, and Plots windows:
if ini==3
    if arm==1
        try WMRA_ARM_Motion(ini, 0, 0, 0); end
    end
    if vr==1
        try WMRA_VR_Animation(ini, 0, 0); end
    end
    if ml==1
        try WMRA_ML_Animation(ini, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0); end
    end
    return;
end

% Defining the used conditions:
qd=[pi/2;pi/2;0;pi/2;pi/2;pi/2;0]; % Final joint angles (Ready Position).
ts=10;   % (5 or 10 or 20) Simulation time to move the arm from any position to the ready position.
n=100;   % Number of time steps.
dt=ts/n; % The time step to move the arm from any position to the ready position.

% Initializing the physical Arm:
if arm==1
    WMRA_ARM_Motion(ini, 2, [qi;0], dt);
    ddt=0;
end
% Initializing Virtual Reality Animation:
if vr==1
    WMRA_VR_Animation(ini, Tiwc, qi);
end
% Initializing Robot Animation in Matlab Graphics:
if ml==1
    % Inputting the D-H Parameters in a Matrix form:
    DH=WMRA_DH(qi);
    % Calculating the transformation matrices of each link:
    T01=WMRA_rotx(DH(1,1))*WMRA_transl(DH(1,2),0,0)*WMRA_rotz(DH(1,4))*WMRA_transl(0,0,DH(1,3));
    T12=WMRA_rotx(DH(2,1))*WMRA_transl(DH(2,2),0,0)*WMRA_rotz(DH(2,4))*WMRA_transl(0,0,DH(2,3));
    T23=WMRA_rotx(DH(3,1))*WMRA_transl(DH(3,2),0,0)*WMRA_rotz(DH(3,4))*WMRA_transl(0,0,DH(3,3));
    T34=WMRA_rotx(DH(4,1))*WMRA_transl(DH(4,2),0,0)*WMRA_rotz(DH(4,4))*WMRA_transl(0,0,DH(4,3));
    T45=WMRA_rotx(DH(5,1))*WMRA_transl(DH(5,2),0,0)*WMRA_rotz(DH(5,4))*WMRA_transl(0,0,DH(5,3));
    T56=WMRA_rotx(DH(6,1))*WMRA_transl(DH(6,2),0,0)*WMRA_rotz(DH(6,4))*WMRA_transl(0,0,DH(6,3));
    T67=WMRA_rotx(DH(7,1))*WMRA_transl(DH(7,2),0,0)*WMRA_rotz(DH(7,4))*WMRA_transl(0,0,DH(7,3));
    % Calculating the Transformation Matrix of the initial and desired arm positions:
    Ti=Tiwc*T01*T12*T23*T34*T45*T56*T67;
    Td=Tiwc*WMRA_q2T(qd);
    WMRA_ML_Animation(ini, Ti, Td, Tiwc, T01, T12, T23, T34, T45, T56, T67);
end

% Check for the shortest route:
diff=qd-qi(1:7);
for i=1:7
    if diff(i) > pi
        diff(i)=diff(i)-2*pi;
    elseif diff(i) < (-pi)
        diff(i)=diff(i)+2*pi;
    end
end
% Joint angle change at every time step.
dq=[diff/n;0;0];

% Initialization:
qo=qi;
tt=0;
while tt <= (ts-dt)
    % Starting a timer:
    tic;
    % Calculating the new Joint Angles:
    qn=qo+dq;
    % Updating the physical Arm:
    if arm==1
        ddt=ddt+dt;
        if ddt>=0.5 || tt>=(ts-dt)
            WMRA_ARM_Motion(2, 1, [qn;0], ddt);
            ddt=0;
        end
    end
    % Updating Virtual Reality Animation:
    if vr==1
        WMRA_VR_Animation(2, Tiwc, qn);
    end
    % Updating Matlab Animation:
    if ml==1
        % Calculating the new Transformation Matrix:
        T1a=WMRA_rotx(DH(1,1))*WMRA_transl(DH(1,2),0,0)*WMRA_rotz(qn(1))*WMRA_transl(0,0,DH(1,3));
        T2a=WMRA_rotx(DH(2,1))*WMRA_transl(DH(2,2),0,0)*WMRA_rotz(qn(2))*WMRA_transl(0,0,DH(2,3));
        T3a=WMRA_rotx(DH(3,1))*WMRA_transl(DH(3,2),0,0)*WMRA_rotz(qn(3))*WMRA_transl(0,0,DH(3,3));
        T4a=WMRA_rotx(DH(4,1))*WMRA_transl(DH(4,2),0,0)*WMRA_rotz(qn(4))*WMRA_transl(0,0,DH(4,3));
        T5a=WMRA_rotx(DH(5,1))*WMRA_transl(DH(5,2),0,0)*WMRA_rotz(qn(5))*WMRA_transl(0,0,DH(5,3));
        T6a=WMRA_rotx(DH(6,1))*WMRA_transl(DH(6,2),0,0)*WMRA_rotz(qn(6))*WMRA_transl(0,0,DH(6,3));
        T7a=WMRA_rotx(DH(7,1))*WMRA_transl(DH(7,2),0,0)*WMRA_rotz(qn(7))*WMRA_transl(0,0,DH(7,3));
        WMRA_ML_Animation(2, Ti, Td, Tiwc, T1a, T2a, T3a, T4a, T5a, T6a, T7a);
    end
    % Updating the old values with the new values for the next iteration:
    qo=qn;
    tt=tt+dt;
    % Pausing for the speed sync:
    pause(dt-toc);
end

% This function communicates with the physical USF WMRA system with 9 DOF to get the encoder readings and send the commands to be executed.
% The (.H) file and the (.DLL) file that contains the used functions should be in the directory containing this program.
% config=0: Set the current encoder readings to zeros, config=1: Read the encoder readings from the configuration txt file.
% config=2: Change the configuration file to the initial values provided by (qo), then read the encoder readings from the configuration txt file.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%% COPYRIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Function Declaration:
function [qn] = WMRA_ARM_Motion(ind, config, qo, dt)

% Declaring the global variables:
global L ptr e2r1 e2r2 e2r3 e2r4 e2r5 e2r6 e2r7 e2r8 e2r9 e2d

% The initialization of the Arm library:
if ind==1
    % Reading the Wheelchair's constant dimensions, all dimensions are converted in millimeters:
    L=WMRA_WCD;
    % Serial Communication properties:
    com=4; baud=19200;
    % PID controller gains:
    Kp=100; Kd=1000; Ki=0;
    % Conversion of encoder readings to radians: Note that the encoder readings are negative of the kinematic arrangements in the control code.
    e2r1=-pi/900000; e2r2=-pi/900000; e2r3=-pi/950000; e2r4=-pi/710000; e2r5=-pi/580000;
    %e2r6p=-pi/420000; %e2r6n=-pi/500000;
    e2r6=-pi/440000; e2r7=-pi/630000;
    e2r8=1; % Redwan: change this to forward motion when wheelchair controllers are installed (Only when reading the encoders).
    e2r9=1; % Redwan: change this to rotation motion when wheelchair controllers are installed (Only when reading the encoders).
    e2d=-1/100000;
    % The case when changing the configuration file to qo is required:
    if config==2
        % Converting the commanded angles to encoder readings:
        %qo=[qo(1)/e2r1; qo(2)/e2r2; qo(3)/e2r3; qo(4)/e2r4; qo(5)/e2r5; qo(6)/e2r6; qo(7)/e2r7; qo(8)/e2r8; qo(9)/e2r9; qo(10)/e2d];
        qo=[qo(1)/e2r1; qo(2)/e2r2; qo(3)/e2r3; qo(4)/e2r4; qo(5)/e2r5; qo(6)/e2r6; qo(7)/e2r7; qo(10)/e2d]; % Redwan: Replace this with the one above when wheelchair controllers are installed.
        % Changing the configuration file to qo:
        fid = fopen('configuration.txt', 'w');
        fprintf(fid, '%10.0f', qo);
        fclose(fid);
        config=1;
    end
    try % Closing the library in case it was open:
        calllib('controlMotor', 'close');
    catch
    end
    try % Loading the DLL library of functions:
        loadlibrary('controlMotor.dll', 'controlMotor.h');
    catch
    end
    % Establishing the connections, and setting the encoders to the current configuration:
    check=calllib('controlMotor', 'init', com, baud, config);
    if check == 0
        fprintf('\nWMRA initialization has failed, Please check your communications.\n');
    end
    % Setting the PID controller gains (All motors use the same gains in this case. Use the 'setParamsPID' command to set each individual motor to different PID gains):
    calllib('controlMotor', 'setParamsPIDAll', Kp, Kd, Ki);
    % Creating a pointer for the 10 joints to be used to read or set the encoders:
    dim = 1:8; % Redwan: Change 8 to 10 when wheelchair controllers are installed.
    ptr = libpointer('int32Ptr', dim);
    % Reading the current positions and converting them to radians:
    calllib('controlMotor', 'getPosAll', ptr);
    qc=double(ptr.Value);
    qc=[qc(1:7), 0, 0, qc(8)]; % Redwan: Remove when wheelchair controllers are installed.
    qn=[qc(1)*e2r1; qc(2)*e2r2; qc(3)*e2r3; qc(4)*e2r4; qc(5)*e2r5; qc(6)*e2r6; qc(7)*e2r7; qc(8)*e2r8; qc(9)*e2r9; qc(10)*e2d];

% Closing the Arm library:
elseif ind==3
    % Reading the current positions to be saved in the configuration file:
    calllib('controlMotor', 'getPosAll', ptr);
    % Closing the library and unloading:
    calllib('controlMotor', 'close');
    unloadlibrary('controlMotor');
    % Reporting the function output to be zero (This value will not be used):
    qn=0;

% Updating the Arm:
else
    % Reading the current positions:
    calllib('controlMotor', 'getPosAll', ptr);
    qc=double(ptr.Value)';
    % Converting the commanded angles to encoder readings:
    %qo=[qo(1)/e2r1; qo(2)/e2r2; qo(3)/e2r3; qo(4)/e2r4; qo(5)/e2r5; qo(6)/e2r6; qo(7)/e2r7; qo(8)/e2r8; qo(9)/e2r9; qo(10)/e2d];
    qo=[qo(1)/e2r1; qo(2)/e2r2; qo(3)/e2r3; qo(4)/e2r4; qo(5)/e2r5; qo(6)/e2r6; qo(7)/e2r7; qo(10)/e2d]; % Redwan: Replace this with the one above when wheelchair controllers are installed.
    % Finding the needed velocities for the arm, note that a factor of 33.8 is needed for encoder velocities and position conversion:
    qdo(1:7)=33.8*abs(qo(1:7)-qc(1:7))/dt;
    qddo=500*[1; 1; 1; 1; 1; 1; 1; 10];
    % Calculating the gripper's commanded position and velocity:
    qo(8)=qo(8)+qc(8);          % Redwan: Change 8 to 10 when wheelchair controllers are installed.
    qdo(8)=33.8*abs(qo(8)-qc(8)); % Redwan: Change 8 to 10 when wheelchair controllers are installed.
    % Splitting the negative sign to be used in the DLL functions:
    dir=[0;0;0;0;0;0;0;0]; % Redwan: Add two more zeros when wheelchair controllers are installed.
    for i=1:8 % Redwan: Change 8 to 10 when wheelchair controllers are installed.
        if sign(qo(i)) == -1
            qo(i) = -qo(i);
            dir(i) = 1;
        end
    end
    % Sending the commanded angles to the controller boards:
    calllib('controlMotor', 'posSelect', [1, 1, 1, 1, 1, 1, 1, 1, -1], qo', qdo', qddo', dir);
    % Reading the current positions and converting them to radians:
    calllib('controlMotor', 'getPosAll', ptr);
    qc=double(ptr.Value);
    qc=[qc(1:7), 0, 0, qc(8)]; % Redwan: Remove when wheelchair controllers are installed.
    qn=[qc(1)*e2r1; qc(2)*e2r2; qc(3)*e2r3; qc(4)*e2r4; qc(5)*e2r5; qc(6)*e2r6; qc(7)*e2r7; qc(8)*e2r8; qc(9)*e2r9; qc(10)*e2d];
end
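The per-joint encoder-count conversion and the sign-splitting step in WMRA_ARM_Motion are easy to get wrong when porting. A minimal Python sketch of the same two steps (the conversion factors are the ones listed in the Matlab function; the function names here are illustrative, not part of the WMRA code):

```python
import math

# Per-joint encoder-count-to-radian factors for the 7 arm joints, as defined in
# WMRA_ARM_Motion (negative because the encoders read opposite to the kinematic
# convention used in the control code).
E2R = [-math.pi/900000, -math.pi/900000, -math.pi/950000, -math.pi/710000,
       -math.pi/580000, -math.pi/440000, -math.pi/630000]

def counts_to_radians(counts):
    """Convert raw encoder counts for the 7 arm joints to joint angles in radians."""
    return [c * f for c, f in zip(counts, E2R)]

def split_sign(commands):
    """Split each commanded value into a non-negative magnitude and a direction
    flag (1 for negative, 0 otherwise), mirroring the loop that prepares the
    arguments for the DLL's posSelect call."""
    mags, dirs = [], []
    for q in commands:
        if q < 0:
            mags.append(-q)
            dirs.append(1)
        else:
            mags.append(q)
            dirs.append(0)
    return mags, dirs
```

This keeps the magnitude and sign channels separate, exactly as the controller-board interface expects them.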


% This function uses a 3rd order polynomial with a blending factor to find smooth trajectory points of a variable "q" along a straight line, given the initial and final variable values and the number of trajectory points.
% The output is the variable position.
% See Eq. 7.18 page 210 of Craig Book
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%% COPYRIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Function Declaration:
function [qt] = WMRA_BPolynomial(qi, qf, n)

% Blending Factor:
b=5;
% Initializing the time:
tt=0;
tf=abs((qf-qi));
dt=tf/(n-1);
if tf > 0.001
    % Blending procedure:
    % Time, position, velocity, and acceleration of the variable at the first blending point:
    qddb=b*4*(qf-qi)/tf^2;
    tb=tf/2-sqrt(qddb^2*tf^2-4*qddb*(qf-qi))/abs(2*qddb);
    qdb=qddb*tb;
    qb=qi+qddb*tb^2/2;
    % Calculating the polynomial factors at the first blending point: From Eq.7.18 page 210 of Craig Book
    a01=qi; a11=0; a21=0.5*qddb;
    a31=(20*(qb-qi)-8*qdb*tb-2*qddb*tb^2)/(2*tb^3);
    %a41=(30*(qi-qb)+14*qdb*tb+qddb*tb^2)/(2*tb^4); % Uncomment for 5th order polynomial.
    %a51=(12*(qb-qi)-6*qdb*tb)/(2*tb^5);           % Uncomment for 5th order polynomial.
    % Calculating the polynomial factors at the second blending point: From Eq.7.18 page 210 of Craig Book
    a02=qb+qdb*(tf-2*tb); a12=qdb; a22=-0.5*qddb;
    a32=(20*(qf-a02)-12*a12*tb+2*qddb*tb^2)/(2*tb^3);
    %a42=(30*(a02-qf)+16*a12*tb-qddb*tb^2)/(2*tb^4); % Uncomment for 5th order polynomial.
    %a52=(12*(qf-a02)-6*a12*tb)/(2*tb^5);            % Uncomment for 5th order polynomial.
end
% Calculating the intermediate joint angles along the trajectory from the initial to the final position:
for i=1:n
    if tf <= 0.001
        qt(i)=qi;
    elseif tt<=tb
        qt(i)=a01+a11*tt+a21*tt^2+a31*tt^3; %+a41*tt^4+a51*tt^5; % Uncomment before "+a41" for 5th order polynomial.
    elseif tt>=(tf-tb)
        qt(i)=a02+a12*(tt+tb-tf)+a22*(tt+tb-tf)^2+a32*(tt+tb-tf)^3; %+a42*(tt+tb-tf)^4+a52*(tt+tb-tf)^5; % Uncomment before "+a42" for 5th order polynomial.
    else
        qt(i)=qb-qdb*(tb-tt);
    end
    tt=tt+dt;
end

% This function is to stop the arm if it is moving towards a collision with itself, the wheelchair, or the human user.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%% COPYRIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Function Declaration:
function [dq]=WMRA_collide(dqi, T01, T12, T23, T34, T45, T56, T67)

% Reading the Wheelchair's constant dimensions, all dimensions are converted in millimeters:
L=WMRA_WCD;
% Collision Conditions:
gr=100-L(4)-L(5); % The ground buffer surface.
dq=dqi;

% 1- Collision of frame 3 using T03:
T03=T01*T12*T23;
% Collision with the ground:
if T03(3,4) <= gr
    dq=-0.01*dqi;
end
% Collision with the wheelchair's front left side:
if T03(1,4) >= 450 && T03(2,4) <= -150
    dq=-0.01*dqi;
end
% Collision with the wheelchair's rear left side:
if T03(1,4) <= 450 && T03(2,4) <= 100
    dq=-0.01*dqi;
end
% Collision with the wheelchair's rear left wheel:
if T03(1,4) <= 0 && T03(2,4) <= 100 && T03(3,4) <= 120
    dq=-0.01*dqi;
end

% 2- Collision of frame 4 using T04:
T04=T03*T34;
% Collision with the ground:
if T04(3,4) <= gr
    dq=-0.01*dqi;
end
% Collision with the wheelchair's front left side:
if T04(1,4) <= 450 && T04(1,4) >= -100 && T04(2,4) <= 0
    dq=-0.01*dqi;
end
% Collision with the wheelchair's rear left side:
if T04(1,4) <= -100 && T04(2,4) <= 100
    dq=-0.01*dqi;
end
% Collision with the wheelchair's rear left wheel:
if T04(1,4) <= -100 && T04(2,4) <= 100 && T04(3,4) <= 120
    dq=-0.01*dqi;
end

% 3- Collision of frame 5 using T05:
T05=T04*T45;
% Collision with the ground:
if T05(3,4) <= gr
    dq=-0.01*dqi;
end
% Collision with the wheelchair driver's left shoulder:
if T05(1,4) <= -100 && T05(1,4) >= -550 && T05(2,4) <= 150
    dq=-0.01*dqi;
end
% Collision with the wheelchair driver's lap:
if T05(1,4) <= 400 && T05(1,4) >= -100 && T05(2,4) <= 0 && T05(3,4) <= 470
    dq=-0.01*dqi;
end
% Collision with the wheelchair's battery pack:
if T05(1,4) <= -430 && T05(1,4) >= -630 && T05(2,4) <= 100 && T05(3,4) <= 50
    dq=-0.01*dqi;
end

% 4- Collision of frame 7 using T07:
T07=T05*T56*T67;
% Collision with the ground:
if T07(3,4) <= gr
    dq=-0.01*dqi;
end
% Collision with the wheelchair driver's left shoulder:
if T07(1,4) <= -50 && T07(1,4) >= -600 && T07(2,4) <= 200
    dq=-0.01*dqi;
end
% Collision with the wheelchair driver's lap:
if T07(1,4) <= 450 && T07(1,4) >= -50 && T07(2,4) <= 50 && T07(3,4) <= 520
    dq=-0.01*dqi;
end
% Collision with the wheelchair's battery pack:
if T07(1,4) <= -480 && T07(1,4) >= -680 && T07(2,4) <= 50 && T07(3,4) <= 100
    dq=-0.01*dqi;
end

% 5- Collision of the arm and itself using T37:
T37=T34*T45*T56*T67;
% Collision between the forearm and the upper arm:
if T37(1,4) <= 170 && T37(1,4) >= -170 && T37(2,4) >= -100 && T37(3,4) <= 0
    dq=-0.01*dqi;
end

% This function gives the WMRA's errors from the current position to the required trajectory position.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%% COPYRIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Function Declaration:
function [delta]=WMRA_delta(Ti, Td)

ep=Td(1:3,4)-Ti(1:3,4);
eo=0.5*( cross(Ti(1:3,1),Td(1:3,1)) + cross(Ti(1:3,2),Td(1:3,2)) + cross(Ti(1:3,3),Td(1:3,3)) ); % From equation 17 on page 189 of the book (Robot Motion: Planning and Control) by Michael Brady et al. Taken from the paper (Resolved-Acceleration Control of Mechanical Manipulators) by John Y. S. Luh et al.
delta=[ep; eo];
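WMRA_delta's orientation error is the classic half-sum of cross products between the corresponding columns of the current and desired rotation matrices (Luh et al.). A small Python sketch of the same computation on plain nested lists, useful for checking a port against the Matlab version (the name `wmra_delta` here is illustrative; the actual system uses the Matlab function above):

```python
def cross(a, b):
    """Cross product of two 3-vectors given as lists."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def wmra_delta(Ti, Td):
    """6-vector pose error between the current (Ti) and desired (Td) 4x4
    homogeneous transforms: translation difference ep, plus the orientation
    error eo = 0.5 * (n_i x n_d + o_i x o_d + a_i x a_d), where n, o, a are
    the columns of the rotation part."""
    ep = [Td[r][3] - Ti[r][3] for r in range(3)]
    eo = [0.0, 0.0, 0.0]
    for c in range(3):  # columns n, o, a of the 3x3 rotation block
        col_i = [Ti[r][c] for r in range(3)]
        col_d = [Td[r][c] for r in range(3)]
        cr = cross(col_i, col_d)
        eo = [e + 0.5 * x for e, x in zip(eo, cr)]
    return ep + eo  # [ep; eo]
```

For small rotations the eo component approximates axis-times-angle, which is what makes it usable as a feedback error in the resolved-rate loop.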


Appendix B. (Continued) 305% This function gives the DH-Parameters matrix to be used in the program. % Modifying the parameters on this file is sufficient to change these dimention in all related programs. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%% COPY RIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Function Declaration: function [DH]=WMRA_DH(q) % Inputting the D-H Parameters in a Matrix form, dimensions are in millimeters and radians: % Dimentions based on the actual physical arm: DH=[-pi/2 0 110 q(1) ; pi/2 0 146 q(2) ; -pi/2 0 549 q(3) ; pi/2 0 130 q(4) ; -pi/2 0 241 q(5) ; pi/2 0 0 q(6) ; -pi/2 0 179+131 q(7)]; % Dimentions based on the Virtual Reality arm model: % DH=[-pi/2 0 109.72 q(1) ; pi/2 0 118.66 q(2) ; -pi/2 0 499.67 q(3) ; pi/2 0 121.78 q(4) ; % -pi/2 0 235.67 q(5) ; pi/2 0 0 q(6) ; -pi/2 0 276.68 q(7)]; %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%% COPY RIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%% Thanks to Mayur Palankar %%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% function varargout = WMRA_error_gui(varargin) % WMRA_ERROR_GUI M-file for WMRA_error_gui.fig % WMRA_ERROR_GUI, by itself, creates a new WMRA_ERROR_GUI or raises the existing % singleton*. % % H = WMRA_ERROR_GUI returns the handle to a new WMRA_ERROR_GUI or the handle to % the existing singleton*. % % WMRA_ERROR_GUI('CALLBACK',hObject,eventData,handles,...) 
calls the local % function named CALLBACK in WMRA_ERROR_GUI.M with the given input arguments. % % WMRA_ERROR_GUI('Property','Value',...) creates a new WMRA_ERROR_GUI or raises the % existing singleton*. Starting from the left, property value pairs are % applied to the GUI before WMRA_error_gui_OpeningFunction gets called. An % unrecognized property name or invalid value makes property application % stop. All inputs are passed to WMRA_error_gui_OpeningFcn via varargin. % % *See GUI Options on GUIDE's Tools menu. Choose "GUI allows only one % instance to run (singleton)". % % See also: GUIDE, GUIDATA, GUIHANDLES % Edit the above text to modify the response to help WMRA_error_gui % Last Modified by GUIDE v2.5 03-Feb-2007 15:47:37 % Begin initialization code DO NOT EDIT gui_Singleton = 1; gui_State = struct( 'gui_Name' mfilename, ...

PAGE 327

Appendix B. (Continued) 306 'gui_Singleton' gui_Singleton, ... 'gui_OpeningFcn' @WMRA_error_gui_OpeningFcn, ... 'gui_OutputFcn' @WMRA_error_gui_OutputFcn, ... 'gui_LayoutFcn' [] ... 'gui_Callback' []); if nargin && ischar(varargin{1}) gui_State.gui_Callback = str2func(varargin{1}); end if nargout [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:}); else gui_mainfcn(gui_State, varargin{:}); end % End initialization code DO NOT EDIT % --Executes just before WMRA_error_gui is made visible. function WMRA_error_gui_OpeningFcn(hObject, eventdata, handles, varargin) % This function has no output args, see OutputFcn. % hObject handle to figure % eventdata reserved to be defined in a future version of MATLAB % handles structure with handles and user data (see GUIDATA) % varargin command line arguments to WMRA_error_gui (see VARARGIN) set (handles.edit1, 'String' varargin{1}); % Choose default command line output for WMRA_error_gui handles.output = hObject; % Update handles structure guidata(hObject, handles); % UIWAIT makes WMRA_error_gui wait for user response (see UIRESUME) uiwait(handles.figure1); % --Outputs from this function are returned to the command line. function WMRA_error_gui_OutputFcn(hObject, eventdata, handles) % varargout cell array for returning output args (see VARARGOUT); % hObject handle to figure % eventdata reserved to be defined in a future version of MATLAB % handles structure with handles and user data (see GUIDATA) % Get default command line output from handles structure %varargout{1} = handles.output; % --Executes on button press in pushbutton1. 
function pushbutton1_Callback(hObject, eventdata, handles) % hObject handle to pushbutton1 (see GCBO) % eventdata reserved to be defined in a future version of MATLAB % handles structure with handles and user data (see GUIDATA) close; function edit1_Callback(hObject, eventdata, handles) % hObject handle to edit1 (see GCBO) % eventdata reserved to be defined in a future version of MATLAB % handles structure with handles and user data (see GUIDATA) % Hints: get(hObject,'String') returns contents of edit1 as text % str2double(get(hObject,'String')) returns contents of edit1 as a double % --Executes during object creation, after setting all properties. function edit1_CreateFcn(hObject, eventdata, handles) % hObject handle to edit1 (see GCBO) % eventdata reserved to be defined in a future version of MATLAB % handles empty handles not created until after all CreateFcns called

PAGE 328

Appendix B. (Continued)

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%% COPY RIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

function varargout = WMRA_exit(varargin)
% WMRA_EXIT M-file for WMRA_exit.fig
%      WMRA_EXIT, by itself, creates a new WMRA_EXIT or raises the existing
%      singleton*.
%
%      H = WMRA_EXIT returns the handle to a new WMRA_EXIT or the handle to
%      the existing singleton*.
%
%      WMRA_EXIT('CALLBACK',hObject,eventData,handles,...) calls the local
%      function named CALLBACK in WMRA_EXIT.M with the given input arguments.
%
%      WMRA_EXIT('Property','Value',...) creates a new WMRA_EXIT or raises the
%      existing singleton*. Starting from the left, property value pairs are
%      applied to the GUI before WMRA_exit_OpeningFcn gets called. An
%      unrecognized property name or invalid value makes property application
%      stop. All inputs are passed to WMRA_exit_OpeningFcn via varargin.
%
%      *See GUI Options on GUIDE's Tools menu. Choose "GUI allows only one
%      instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help WMRA_exit

% Last Modified by GUIDE v2.5 14-Mar-2007 23:20:09

% Begin initialization code - DO NOT EDIT
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @WMRA_exit_OpeningFcn, ...
                   'gui_OutputFcn',  @WMRA_exit_OutputFcn, ...
                   'gui_LayoutFcn',  [], ...
                   'gui_Callback',   []);
if nargin && ischar(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT

% --- Executes just before WMRA_exit is made visible.
function WMRA_exit_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject    handle to figure


% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% varargin   command line arguments to WMRA_exit (see VARARGIN)

% Choose default command line output for WMRA_exit
handles.output = hObject;
% Update handles structure
guidata(hObject, handles);
global VAR_SCREENOPN
VAR_SCREENOPN = 1;
% UIWAIT makes WMRA_exit wait for user response (see UIRESUME)
% uiwait(handles.figure1);

% --- Outputs from this function are returned to the command line.
function varargout = WMRA_exit_OutputFcn(hObject, eventdata, handles)
% varargout  cell array for returning output args (see VARARGOUT);
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure
varargout{1} = handles.output;

% --- Executes on button press in pushbutton1.
function pushbutton1_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
global VAR_SCREENOPN
VAR_SCREENOPN = 0;
close;

% This function gives the Jacobian matrix and its determinant based on frame 0
% of the new USF WMRA, given the transformation matrices of each link.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%% COPY RIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Function Declaration:
function [J0,detJ0] = WMRA_J07(T1, T2, T3, T4, T5, T6, T7)

T=eye(4);
J0(1,7)=-T(1,1)*T(2,4)+T(2,1)*T(1,4);
J0(2,7)=-T(1,2)*T(2,4)+T(2,2)*T(1,4);
J0(3,7)=-T(1,3)*T(2,4)+T(2,3)*T(1,4);
J0(4,7)=T(3,1);
J0(5,7)=T(3,2);
J0(6,7)=T(3,3);
T=T7*T;
J0(1,6)=-T(1,1)*T(2,4)+T(2,1)*T(1,4);
J0(2,6)=-T(1,2)*T(2,4)+T(2,2)*T(1,4);
J0(3,6)=-T(1,3)*T(2,4)+T(2,3)*T(1,4);
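% Note (explanatory comment added for this edition, not in the original
% listing): each column of J0 is filled from the running product T, which at
% that point holds the hand frame expressed in the local frame of the joint
% whose column is being built. The first three rows of a column are the
% linear-velocity terms of that revolute joint and the last three rows are
% its local z-axis (the angular-velocity terms), both written in hand-frame
% coordinates; the block-diagonal rotation applied after the last column
% re-expresses the whole Jacobian in the arm's base frame 0.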


J0(4,6)=T(3,1);
J0(5,6)=T(3,2);
J0(6,6)=T(3,3);
T=T6*T;
J0(1,5)=-T(1,1)*T(2,4)+T(2,1)*T(1,4);
J0(2,5)=-T(1,2)*T(2,4)+T(2,2)*T(1,4);
J0(3,5)=-T(1,3)*T(2,4)+T(2,3)*T(1,4);
J0(4,5)=T(3,1);
J0(5,5)=T(3,2);
J0(6,5)=T(3,3);
T=T5*T;
J0(1,4)=-T(1,1)*T(2,4)+T(2,1)*T(1,4);
J0(2,4)=-T(1,2)*T(2,4)+T(2,2)*T(1,4);
J0(3,4)=-T(1,3)*T(2,4)+T(2,3)*T(1,4);
J0(4,4)=T(3,1);
J0(5,4)=T(3,2);
J0(6,4)=T(3,3);
T=T4*T;
J0(1,3)=-T(1,1)*T(2,4)+T(2,1)*T(1,4);
J0(2,3)=-T(1,2)*T(2,4)+T(2,2)*T(1,4);
J0(3,3)=-T(1,3)*T(2,4)+T(2,3)*T(1,4);
J0(4,3)=T(3,1);
J0(5,3)=T(3,2);
J0(6,3)=T(3,3);
T=T3*T;
J0(1,2)=-T(1,1)*T(2,4)+T(2,1)*T(1,4);
J0(2,2)=-T(1,2)*T(2,4)+T(2,2)*T(1,4);
J0(3,2)=-T(1,3)*T(2,4)+T(2,3)*T(1,4);
J0(4,2)=T(3,1);
J0(5,2)=T(3,2);
J0(6,2)=T(3,3);
T=T2*T;
J0(1,1)=-T(1,1)*T(2,4)+T(2,1)*T(1,4);
J0(2,1)=-T(1,2)*T(2,4)+T(2,2)*T(1,4);
J0(3,1)=-T(1,3)*T(2,4)+T(2,3)*T(1,4);
J0(4,1)=T(3,1);
J0(5,1)=T(3,2);
J0(6,1)=T(3,3);
T=T1*T;
J0=[T(1:3,1:3),zeros(3,3);zeros(3,3),T(1:3,1:3)]*J0;
detJ0=sqrt(det(J0*J0'));

% This function gives the WMRA's base Jacobian matrix based on the ground
% frame, given the wheelchair orientation angle about the z-axis.
% Dimensions are as supplied; angles are in radians.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%% COPY RIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Function Declaration:
function [J]=WMRA_Jga(ind, p, XY)

% Reading the wheelchair's constant dimensions; all dimensions are converted to millimeters:
L=WMRA_WCD;
% Deciding if the motion is in reference to the arm base (1) or the wheel axle center (0):
if ind == 0, L(2:4)=[0;0;0]; end
% Calculating the Jacobian:
J = [eye(2) zeros(2,3) [-(XY(1)*sin(p)+XY(2)*cos(p)) ; XY(1)*cos(p)-XY(2)*sin(p)] ; zeros(4,2) eye(4)] * ...
    (L(5)/2)*[cos(p)+2*(L(2)*sin(p)+L(3)*cos(p))/L(1) cos(p)-2*(L(2)*sin(p)+L(3)*cos(p))/L(1) ; ...
              sin(p)-2*(L(2)*cos(p)-L(3)*sin(p))/L(1) sin(p)+2*(L(2)*cos(p)-L(3)*sin(p))/L(1) ; ...
              0 0 ; 0 0 ; 0 0 ; -2/L(1) 2/L(1)] * ...
    [1 L(1)/(2*L(5)) ; 1 L(1)/(2*L(5))];

% This function gives the joint limit vector to be used in the program.
% Modifying the parameters in this file is sufficient to change these limits in all related programs.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%% COPY RIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Function Declaration:
function [qmin,qmax]=WMRA_Jlimit()

% Inputting the joint limits in vector form; dimensions are in radians:
% Dimensions are based on the actual physical arm:
qmin=-[170;170;170;170;170;100;200]*pi/180;
qmax= [170;170;170;170;170;100;200]*pi/180;

% This "new USF WMRA" script SIMULATES the joint control of the WMRA system
% with ANIMATION and plots for 9 DOF. All angles are in radians.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%% COPY RIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Defining used parameters:
d2r=pi/180;   % Conversion from degrees to radians.
r2d=180/pi;   % Conversion from radians to degrees.
% Reading the wheelchair's constant dimensions; all dimensions are converted to millimeters:
L=WMRA_WCD;
% User input prompts:
choice1 = input('\n Choose animation type or no animation: \n For Virtual Reality Animation, press "1", \n For Matlab Graphics Animation, press "2", \n For BOTH Animations, press "3", \n For NO Animation, press "4". \n', 's');
if choice1=='2'


    vr = 0; ml = 1;
elseif choice1=='3'
    vr = 1; ml = 1;
elseif choice1=='4'
    vr = 0; ml = 0;
else
    vr = 1; ml = 0;
end
choice2 = input('\n Would you like to run the actual arm? \n For no, press "0", \n For yes, press "1". \n', 's');
if choice2=='1'
    arm=1;
else
    arm=0;
end
choice3 = input('\n Press "1" if you want to start at the "ready" position, \n or press "2" if you want to enter the initial joint angles. \n', 's');
if choice3=='2'
    qi = input('\n Please enter the initial angles vector of the arm in radians (e.g. [pi/2;pi/2;0;pi/2;pi/2;pi/2;0]) \n');
    WCi = input('\n Please enter the initial x,y position and z orientation of the WMRA base in millimeters and radians (e.g. [200;500;0.3]) \n');
    ini=0;
else
    qi=[90;90;0;90;90;90;0]*d2r;
    WCi=[0;0;0];
    ini=0;
    if vr==1 || ml==1 || arm==1
        choice4 = input('\n Press "1" if you want to include "park" to "ready" motion, \n or press "2" if not. \n', 's');
        if choice4=='2'
            ini=0;
        else
            ini=1;
        end
    end
end
% Calculating the transformation matrix of the initial position of the WMRA's base:
Tiwc=WMRA_p2T(WCi(1),WCi(2),WCi(3));
% Calculating the initial wheelchair variables:
qiwc=[sqrt(WCi(1)^2+WCi(2)^2);WCi(3)];
% Calculating the initial and desired joint positions:
qi=[qi;qiwc];
qd = input('\n Please enter the desired angles and distance vector in radians and mm (e.g. [pi/3;-pi/3;pi/3;-pi/3;pi/3;-pi/3;pi/3;500;pi/3]) \n');
ts = input('\n Please enter the desired execution time in seconds (e.g. 2) \n');
% Calculating the initial and final transformation matrices:
[Ti, Tia, Tiwc, T01, T12, T23, T34, T45, T56, T67]=WMRA_Tall(1, qi(1:7), qi(8:9), Tiwc);
[Td, Tda, Tdwc, T01d, T12d, T23d, T34d, T45d, T56d, T67d]=WMRA_Tall(2, qd(1:7), qd(8:9), Tiwc);
% Calculating the number of iterations and the time increment (delta t):
dt=0.05;                  % Time increment in seconds.
total_time=ts;            % Total time of animation.
n=round(total_time/dt);   % Number of iterations, rounded.
dt=total_time/n;          % Adjusted time increment in seconds.
dq=(qd-qi)/n;
% Initializing the joint angles, the transformation matrix, and time:
qo=qi;
To=Ti;
Toa=Tia;
Towc=Tiwc;
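% Example (hypothetical input, added for this edition): with ts = 2 s and the
% 0.05 s nominal time increment, n = round(2/0.05) = 40 iterations, and
% dq = (qd-qi)/40 is the constant joint-space step applied on each iteration,
% so the trajectory reaches qd in exactly the requested execution time.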


tt=0;
i=1;
% Initializing the WMRA:
if ini==0   % When no "park" to "ready" motion is required.
    % Initializing Virtual Reality Animation:
    if vr==1
        WMRA_VR_Animation(1, Towc, qo);
    end
    % Initializing Robot Animation in Matlab Graphics:
    if ml==1
        WMRA_ML_Animation(1, To, Td, Towc, T01, T12, T23, T34, T45, T56, T67);
    end
    % Initializing the Physical Arm:
    if arm==1
        WMRA_ARM_Motion(1, 2, [qo;0], 0);
        ddt=0;
    end
elseif ini==1 && (vr==1 || ml==1 || arm==1)   % When "park" to "ready" motion is required.
    WMRA_park2ready(1, vr, ml, arm, Towc, qo(8:9));
    if arm==1
        ddt=0;
    end
end
% Re-drawing the animation:
if vr==1 || ml==1
    drawnow;
end
% Starting a timer:
tic
% Starting the iteration loop:
while i<=n
    % Calculating the new joint angles:
    qn=qo+dq;
    % Calculating the new transformation matrices:
    [Tn, Tna, Tnwc, T01, T12, T23, T34, T45, T56, T67]=WMRA_Tall(2, qn, dq(8:9), Towc);
    % Updating the physical arm:
    if arm==1
        ddt=ddt+dt;
        if ddt>=0.5 || i>=(n+1)
            WMRA_ARM_Motion(2, 1, [qn;0], ddt);
            ddt=0;
        end
    end
    % Updating Virtual Reality Animation:
    if vr==1
        WMRA_VR_Animation(2, Tnwc, qn);
    end
    % Updating Matlab Graphics Animation:
    if ml==1
        WMRA_ML_Animation(2, Ti, Td, Tnwc, T01, T12, T23, T34, T45, T56, T67);
    end
    % Re-drawing the animation:
    if vr==1 || ml==1
        drawnow;
    end


    % Updating the old values with the new values for the next iteration:
    qo=qn;
    To=Tn;
    Toa=Tna;
    Towc=Tnwc;
    tt=tt+dt;
    i=i+1;
    % Delay to comply with the required speed:
    if toc < tt
        pause(tt-toc);
    end
end
% Reading the elapsed time and printing it with the simulation time:
toc
if vr==1 || ml==1 || arm==1
    % Going back to the ready position:
    choice6 = input('\n Do you want to go back to the "ready" position? \n Press "1" for Yes, or press "2" for No. \n', 's');
    if choice6=='1'
        WMRA_any2ready(2, vr, ml, arm, Tnwc, qn);
        % Going back to the parking position:
        choice7 = input('\n Do you want to go back to the "parking" position? \n Press "1" for Yes, or press "2" for No. \n', 's');
        if choice7=='1'
            WMRA_ready2park(2, vr, ml, arm, Tnwc, qn(8:9));
        end
    end
    % Closing the arm library, the Matlab Graphics Animation, the Virtual Reality Animation, and the plot windows:
    choice8 = input('\n Do you want to close all simulation windows and arm controls? \n Press "1" for Yes, or press "2" for No. \n', 's');
    if choice8=='1'
        if arm==1
            WMRA_ARM_Motion(3, 0, 0, 0);
        end
        if vr==1
            WMRA_VR_Animation(3, 0, 0);
        end
        if ml==1
            WMRA_ML_Animation(3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
        end
    end
end

% This function uses a linear function to find equally spaced trajectory points
% of a variable "q" along a straight line, given the initial and final variable
% values and the number of trajectory points. The output is the variable position.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%% COPY RIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Function Declaration:
function [qt] = WMRA_Linear(qi, qf, n)


dq=(qf-qi)/(n-1);
for i=1:n
    qt(i)=qi+dq*(i-1);
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%% COPY RIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%% Thanks to Mayur Palankar %%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function varargout = WMRA_matrix_entry(varargin)
% WMRA_MATRIX_ENTRY M-file for WMRA_matrix_entry.fig
%      WMRA_MATRIX_ENTRY, by itself, creates a new WMRA_MATRIX_ENTRY or raises
%      the existing singleton*.
%
%      H = WMRA_MATRIX_ENTRY returns the handle to a new WMRA_MATRIX_ENTRY or
%      the handle to the existing singleton*.
%
%      WMRA_MATRIX_ENTRY('CALLBACK',hObject,eventData,handles,...) calls the local
%      function named CALLBACK in WMRA_MATRIX_ENTRY.M with the given input arguments.
%
%      WMRA_MATRIX_ENTRY('Property','Value',...) creates a new WMRA_MATRIX_ENTRY
%      or raises the existing singleton*. Starting from the left, property value
%      pairs are applied to the GUI before WMRA_matrix_entry_OpeningFcn gets
%      called. An unrecognized property name or invalid value makes property
%      application stop. All inputs are passed to WMRA_matrix_entry_OpeningFcn
%      via varargin.
%
%      *See GUI Options on GUIDE's Tools menu. Choose "GUI allows only one
%      instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help WMRA_matrix_entry

% Last Modified by GUIDE v2.5 21-Feb-2007 13:19:38

% Begin initialization code - DO NOT EDIT
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @WMRA_matrix_entry_OpeningFcn, ...
                   'gui_OutputFcn',  @WMRA_matrix_entry_OutputFcn, ...
                   'gui_LayoutFcn',  [], ...
                   'gui_Callback',   []);
if nargin && ischar(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT

% --- Executes just before WMRA_matrix_entry is made visible.
function WMRA_matrix_entry_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB


% handles    structure with handles and user data (see GUIDATA)
% varargin   command line arguments to WMRA_matrix_entry (see VARARGIN)
set(handles.edit3,'String',varargin{1});
% Choose default command line output for WMRA_matrix_entry
handles.output = hObject;
% Update handles structure
guidata(hObject, handles);
% UIWAIT makes WMRA_matrix_entry wait for user response (see UIRESUME)
uiwait(handles.figure1);

% --- Outputs from this function are returned to the command line.
function WMRA_matrix_entry_OutputFcn(hObject, eventdata, handles)
% varargout  cell array for returning output args (see VARARGOUT);
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure
%varargout{1} = handles.output;

function edit1_Callback(hObject, eventdata, handles)
% hObject    handle to edit1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% Hints: get(hObject,'String') returns contents of edit1 as text
%        str2double(get(hObject,'String')) returns contents of edit1 as a double

% --- Executes during object creation, after setting all properties.
function edit1_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called
% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

% --- Executes on button press in pushbutton1.
function pushbutton1_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
global VAR_MATRIX
Input=get(handles.edit1,'String');
VAR_MATRIX = Input;
close;

function edit3_Callback(hObject, eventdata, handles)
% hObject    handle to edit3 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% Hints: get(hObject,'String') returns contents of edit3 as text
%        str2double(get(hObject,'String')) returns contents of edit3 as a double

% --- Executes during object creation, after setting all properties.
function edit3_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit3 (see GCBO)


% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called
% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

% This function does the animation of the USF WMRA with 9 DOF using Matlab Graphics.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%% COPY RIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Function Declaration:
function WMRA_ML_Animation(i, Ti, Td, Towc, T01, T12, T23, T34, T45, T56, T67)

% Declaring the global variables:
global L arm wheelchairl wheelchairu wheelchairc initial desired hand
global arm_base ground initialco desiredco handco arm_baseco
% The initialization of the animation plot:
if i==1
    % Reading the wheelchair's constant dimensions; all dimensions are converted to millimeters:
    L=WMRA_WCD;
    % Arm:
    T1=Towc*T01;
    T2=T1*T12;
    T3=T2*T23;
    T4=T3*T34;
    T5=T4*T45;
    T6=T5*T56;
    T7=T6*T67;
    % Wheelchair:
    T8=Towc;                              % Arm base position.
    T9=T8*WMRA_transl(-L(2),-L(3),-L(4)); % Wheelbase center.
    T10=T9*WMRA_transl(0,-L(1)/2,0);      % Right wheel center.
    T11=T9*WMRA_transl(0,L(1)/2,0);       % Left wheel center.
    % Lower platform corners:
    T12=T9*WMRA_transl(-200,-L(1)/2,0);      % Rear right wheelchair corner.
    T13=T9*WMRA_transl(-200,L(1)/2,0);       % Rear left wheelchair corner.
    T14=T9*WMRA_transl(L(2)+200,L(1)/2,0);   % Front left wheelchair corner.
    T15=T9*WMRA_transl(L(2)+200,-L(1)/2,0);  % Front right wheelchair corner.
    % Upper platform corners:
    T16=T9*WMRA_transl(-200,-L(1)/2,L(4));     % Rear right wheelchair corner.
    T17=T9*WMRA_transl(-200,L(1)/2,L(4));      % Rear left wheelchair corner.
    T18=T9*WMRA_transl(L(2)+200,L(1)/2,L(4));  % Front left wheelchair corner.
    T19=T9*WMRA_transl(L(2)+200,-L(1)/2,L(4)); % Front right wheelchair corner.
    % Initial animation plot:
    figure(11);
    % Plots of the arm and wheelchair system:
    arm=plot3([T8(1,4), T1(1,4)],[T8(2,4), T1(2,4)],[T8(3,4),T1(3,4)],'-b', ...
        [T1(1,4), T2(1,4)],[T1(2,4), T2(2,4)],[T1(3,4),T2(3,4)],'-g', ...
        [T2(1,4), T3(1,4)],[T2(2,4),


T3(2,4)],[T2(3,4),T3(3,4)],'-b', ...
        [T3(1,4), T4(1,4)],[T3(2,4), T4(2,4)],[T3(3,4),T4(3,4)],'-g', ...
        [T4(1,4), T5(1,4)],[T4(2,4), T5(2,4)],[T4(3,4),T5(3,4)],'-b', ...
        [T5(1,4), T6(1,4)],[T5(2,4), T6(2,4)],[T5(3,4),T6(3,4)],'-g', ...
        [T6(1,4), T7(1,4)],[T6(2,4), T7(2,4)],[T6(3,4),T7(3,4)],'-g','LineWidth',3);
    hold on;
    wheelchairl=plot3([T12(1,4), T13(1,4)],[T12(2,4), T13(2,4)],[T12(3,4),T13(3,4)],'-g', ...
        [T13(1,4), T14(1,4)],[T13(2,4), T14(2,4)],[T13(3,4),T14(3,4)],'-g', ...
        [T14(1,4), T15(1,4)],[T14(2,4), T15(2,4)],[T14(3,4),T15(3,4)],'-g', ...
        [T15(1,4), T12(1,4)],[T15(2,4), T12(2,4)],[T15(3,4),T12(3,4)],'-g', ...
        [T10(1,4), T11(1,4)],[T10(2,4), T11(2,4)],[T10(3,4),T11(3,4)],'-b','LineWidth',3);
    wheelchairu=plot3([T16(1,4), T17(1,4)],[T16(2,4), T17(2,4)],[T16(3,4),T17(3,4)],'-g', ...
        [T17(1,4), T18(1,4)],[T17(2,4), T18(2,4)],[T17(3,4),T18(3,4)],'-g', ...
        [T18(1,4), T19(1,4)],[T18(2,4), T19(2,4)],[T18(3,4),T19(3,4)],'-g', ...
        [T19(1,4), T16(1,4)],[T19(2,4), T16(2,4)],[T19(3,4),T16(3,4)],'-g','LineWidth',3);
    wheelchairc=plot3([T12(1,4), T16(1,4)],[T12(2,4), T16(2,4)],[T12(3,4),T16(3,4)],'-g', ...
        [T13(1,4), T17(1,4)],[T13(2,4), T17(2,4)],[T13(3,4),T17(3,4)],'-g', ...
        [T14(1,4), T18(1,4)],[T14(2,4), T18(2,4)],[T14(3,4),T18(3,4)],'-g', ...
        [T15(1,4), T19(1,4)],[T15(2,4), T19(2,4)],[T15(3,4),T19(3,4)],'-g','LineWidth',3);
    % Plots of points of interest on the system:
    initial=plot3(Ti(1,4),Ti(2,4),Ti(3,4),'-co','LineWidth',5);
    desired=plot3(Td(1,4),Td(2,4),Td(3,4),'-ro','LineWidth',5);
    hand=plot3(T7(1,4),T7(2,4),T7(3,4),'-yo','LineWidth',5);
    arm_base=plot3(Towc(1,4),Towc(2,4),Towc(3,4),'-mo','LineWidth',5);
    ground=plot3(0,0,0,'-ko','LineWidth',5);
    % Plots of the x-y-z local coordinate lines of the points of interest on the system:
    initialco=plot3([Ti(1,4), Ti(1,4)+100*Ti(1,1)],[Ti(2,4), Ti(2,4)+100*Ti(2,1)],[Ti(3,4),Ti(3,4)+100*Ti(3,1)],'-r', ...
        [Ti(1,4), Ti(1,4)+100*Ti(1,2)],[Ti(2,4), Ti(2,4)+100*Ti(2,2)],[Ti(3,4),Ti(3,4)+100*Ti(3,2)],'-g', ...
        [Ti(1,4), Ti(1,4)+100*Ti(1,3)],[Ti(2,4), Ti(2,4)+100*Ti(2,3)],[Ti(3,4), Ti(3,4)+100*Ti(3,3)],'-b','LineWidth',1);
    desiredco=plot3([Td(1,4), Td(1,4)+100*Td(1,1)],[Td(2,4), Td(2,4)+100*Td(2,1)],[Td(3,4),Td(3,4)+100*Td(3,1)],'-r', ...
        [Td(1,4), Td(1,4)+100*Td(1,2)],[Td(2,4), Td(2,4)+100*Td(2,2)],[Td(3,4),Td(3,4)+100*Td(3,2)],'-g', ...
        [Td(1,4), Td(1,4)+100*Td(1,3)],[Td(2,4), Td(2,4)+100*Td(2,3)],[Td(3,4), Td(3,4)+100*Td(3,3)],'-b','LineWidth',1);
    handco=plot3([T7(1,4), T7(1,4)+100*T7(1,1)],[T7(2,4), T7(2,4)+100*T7(2,1)],[T7(3,4),T7(3,4)+100*T7(3,1)],'-r', ...
        [T7(1,4), T7(1,4)+100*T7(1,2)],[T7(2,4), T7(2,4)+100*T7(2,2)],[T7(3,4),T7(3,4)+100*T7(3,2)],'-g', ...
        [T7(1,4), T7(1,4)+100*T7(1,3)],[T7(2,4), T7(2,4)+100*T7(2,3)],[T7(3,4), T7(3,4)+100*T7(3,3)],'-b','LineWidth',1);
    arm_baseco=plot3([Towc(1,4), Towc(1,4)+100*Towc(1,1)],[Towc(2,4), Towc(2,4)+100*Towc(2,1)],[Towc(3,4),Towc(3,4)+100*Towc(3,1)],'-r', ...
        [Towc(1,4), Towc(1,4)+100*Towc(1,2)],[Towc(2,4), Towc(2,4)+100*Towc(2,2)],[Towc(3,4),Towc(3,4)+100*Towc(3,2)],'-g', ...
        [Towc(1,4), Towc(1,4)+100*Towc(1,3)],[Towc(2,4), Towc(2,4)+100*Towc(2,3)],[Towc(3,4), Towc(3,4)+100*Towc(3,3)],'-b','LineWidth',1);
    groundco=plot3([100,0],[0,0],[0,0],'-r',[0,0],[100,0],[0,0],'-g',[0,0],[0,0],[100,0],'-b','LineWidth',1);
    % Specifying plot properties:
    view(40,15);
    axis([-800 500 -500 800 0 1300]);
    grid on;
    title('WMRA Animation');
    xlabel('x (mm)');
    ylabel('y (mm)');
    zlabel('z (mm)');
    legend([arm(1), arm(2), wheelchairl(5), wheelchairl(1), initial(1), desired(1), hand(1), arm_base(1), ground(1), initialco(1), initialco(2), initialco(3)], ...
        'ROBOTIC','ARM','wheel axle','wheelchair','initial position','desired position','current position','arm base position','ground position','local x-axis','local y-axis','local z-axis',-1);
    hold off;
% Closing the animation plot:
elseif i==3
    close(figure(11));


% Updating the animation plot:
else
    % Arm:
    T1=Towc*T01;
    T2=T1*T12;
    T3=T2*T23;
    T4=T3*T34;
    T5=T4*T45;
    T6=T5*T56;
    T7=T6*T67;
    % Wheelchair:
    T8=Towc;                              % Arm base position.
    T9=T8*WMRA_transl(-L(2),-L(3),-L(4)); % Wheelbase center.
    T10=T9*WMRA_transl(0,-L(1)/2,0);      % Right wheel center.
    T11=T9*WMRA_transl(0,L(1)/2,0);       % Left wheel center.
    % Lower platform corners:
    T12=T9*WMRA_transl(-200,-L(1)/2,0);      % Rear right wheelchair corner.
    T13=T9*WMRA_transl(-200,L(1)/2,0);       % Rear left wheelchair corner.
    T14=T9*WMRA_transl(L(2)+200,L(1)/2,0);   % Front left wheelchair corner.
    T15=T9*WMRA_transl(L(2)+200,-L(1)/2,0);  % Front right wheelchair corner.
    % Upper platform corners:
    T16=T9*WMRA_transl(-200,-L(1)/2,L(4));     % Rear right wheelchair corner.
    T17=T9*WMRA_transl(-200,L(1)/2,L(4));      % Rear left wheelchair corner.
    T18=T9*WMRA_transl(L(2)+200,L(1)/2,L(4));  % Front left wheelchair corner.
    T19=T9*WMRA_transl(L(2)+200,-L(1)/2,L(4)); % Front right wheelchair corner.
    % Updating the animation plot:
    % Plots of the arm and wheelchair system:
    set(arm(1),'XData',[T8(1,4), T1(1,4)],'YData',[T8(2,4), T1(2,4)],'ZData',[T8(3,4),T1(3,4)]);
    set(arm(2),'XData',[T1(1,4), T2(1,4)],'YData',[T1(2,4), T2(2,4)],'ZData',[T1(3,4),T2(3,4)]);
    set(arm(3),'XData',[T2(1,4), T3(1,4)],'YData',[T2(2,4), T3(2,4)],'ZData',[T2(3,4),T3(3,4)]);
    set(arm(4),'XData',[T3(1,4), T4(1,4)],'YData',[T3(2,4), T4(2,4)],'ZData',[T3(3,4),T4(3,4)]);
    set(arm(5),'XData',[T4(1,4), T5(1,4)],'YData',[T4(2,4), T5(2,4)],'ZData',[T4(3,4),T5(3,4)]);
    set(arm(6),'XData',[T5(1,4), T6(1,4)],'YData',[T5(2,4), T6(2,4)],'ZData',[T5(3,4),T6(3,4)]);
    set(arm(7),'XData',[T6(1,4), T7(1,4)],'YData',[T6(2,4), T7(2,4)],'ZData',[T6(3,4),T7(3,4)]);
    set(wheelchairl(1),'XData',[T12(1,4), T13(1,4)],'YData',[T12(2,4), T13(2,4)],'ZData',[T12(3,4),T13(3,4)]);
    set(wheelchairl(2),'XData',[T13(1,4), T14(1,4)],'YData',[T13(2,4), T14(2,4)],'ZData',[T13(3,4),T14(3,4)]);
    set(wheelchairl(3),'XData',[T14(1,4), T15(1,4)],'YData',[T14(2,4), T15(2,4)],'ZData',[T14(3,4),T15(3,4)]);
    set(wheelchairl(4),'XData',[T15(1,4), T12(1,4)],'YData',[T15(2,4), T12(2,4)],'ZData',[T15(3,4),T12(3,4)]);
    set(wheelchairl(5),'XData',[T10(1,4), T11(1,4)],'YData',[T10(2,4), T11(2,4)],'ZData',[T10(3,4),T11(3,4)]);
    set(wheelchairu(1),'XData',[T16(1,4), T17(1,4)],'YData',[T16(2,4), T17(2,4)],'ZData',[T16(3,4),T17(3,4)]);
    set(wheelchairu(2),'XData',[T17(1,4), T18(1,4)],'YData',[T17(2,4), T18(2,4)],'ZData',[T17(3,4),T18(3,4)]);
    set(wheelchairu(3),'XData',[T18(1,4), T19(1,4)],'YData',[T18(2,4), T19(2,4)],'ZData',[T18(3,4),T19(3,4)]);
    set(wheelchairu(4),'XData',[T19(1,4), T16(1,4)],'YData',[T19(2,4), T16(2,4)],'ZData',[T19(3,4),T16(3,4)]);
    set(wheelchairc(1),'XData',[T12(1,4), T16(1,4)],'YData',[T12(2,4), T16(2,4)],'ZData',[T12(3,4),T16(3,4)]);


    set(wheelchairc(2),'XData',[T13(1,4), T17(1,4)],'YData',[T13(2,4), T17(2,4)],'ZData',[T13(3,4),T17(3,4)]);
    set(wheelchairc(3),'XData',[T14(1,4), T18(1,4)],'YData',[T14(2,4), T18(2,4)],'ZData',[T14(3,4),T18(3,4)]);
    set(wheelchairc(4),'XData',[T15(1,4), T19(1,4)],'YData',[T15(2,4), T19(2,4)],'ZData',[T15(3,4),T19(3,4)]);
    % Plots of points of interest on the system:
    set(initial(1),'XData',Ti(1,4),'YData',Ti(2,4),'ZData',Ti(3,4));
    set(desired(1),'XData',Td(1,4),'YData',Td(2,4),'ZData',Td(3,4));
    set(hand(1),'XData',T7(1,4),'YData',T7(2,4),'ZData',T7(3,4));
    set(arm_base(1),'XData',Towc(1,4),'YData',Towc(2,4),'ZData',Towc(3,4));
    % Plots of the x-y-z local coordinate lines of the points of interest on the system:
    set(initialco(1),'XData',[Ti(1,4), Ti(1,4)+100*Ti(1,1)],'YData',[Ti(2,4), Ti(2,4)+100*Ti(2,1)],'ZData',[Ti(3,4),Ti(3,4)+100*Ti(3,1)]);
    set(initialco(2),'XData',[Ti(1,4), Ti(1,4)+100*Ti(1,2)],'YData',[Ti(2,4), Ti(2,4)+100*Ti(2,2)],'ZData',[Ti(3,4),Ti(3,4)+100*Ti(3,2)]);
    set(initialco(3),'XData',[Ti(1,4), Ti(1,4)+100*Ti(1,3)],'YData',[Ti(2,4), Ti(2,4)+100*Ti(2,3)],'ZData',[Ti(3,4),Ti(3,4)+100*Ti(3,3)]);
    set(desiredco(1),'XData',[Td(1,4), Td(1,4)+100*Td(1,1)],'YData',[Td(2,4), Td(2,4)+100*Td(2,1)],'ZData',[Td(3,4),Td(3,4)+100*Td(3,1)]);
    set(desiredco(2),'XData',[Td(1,4), Td(1,4)+100*Td(1,2)],'YData',[Td(2,4), Td(2,4)+100*Td(2,2)],'ZData',[Td(3,4),Td(3,4)+100*Td(3,2)]);
    set(desiredco(3),'XData',[Td(1,4), Td(1,4)+100*Td(1,3)],'YData',[Td(2,4), Td(2,4)+100*Td(2,3)],'ZData',[Td(3,4),Td(3,4)+100*Td(3,3)]);
    set(handco(1),'XData',[T7(1,4), T7(1,4)+100*T7(1,1)],'YData',[T7(2,4), T7(2,4)+100*T7(2,1)],'ZData',[T7(3,4),T7(3,4)+100*T7(3,1)]);
    set(handco(2),'XData',[T7(1,4), T7(1,4)+100*T7(1,2)],'YData',[T7(2,4), T7(2,4)+100*T7(2,2)],'ZData',[T7(3,4),T7(3,4)+100*T7(3,2)]);
    set(handco(3),'XData',[T7(1,4), T7(1,4)+100*T7(1,3)],'YData',[T7(2,4), T7(2,4)+100*T7(2,3)],'ZData',[T7(3,4),T7(3,4)+100*T7(3,3)]);
    set(arm_baseco(1),'XData',[Towc(1,4), Towc(1,4)+100*Towc(1,1)],'YData',[Towc(2,4), Towc(2,4)+100*Towc(2,1)],'ZData',[Towc(3,4),Towc(3,4)+100*Towc(3,1)]);
    set(arm_baseco(2),'XData',[Towc(1,4), Towc(1,4)+100*Towc(1,2)],'YData',[Towc(2,4), Towc(2,4)+100*Towc(2,2)],'ZData',[Towc(3,4),Towc(3,4)+100*Towc(3,2)]);
    set(arm_baseco(3),'XData',[Towc(1,4), Towc(1,4)+100*Towc(1,3)],'YData',[Towc(2,4), Towc(2,4)+100*Towc(2,3)],'ZData',[Towc(3,4),Towc(3,4)+100*Towc(3,3)]);
end

% This function is for the resolved rate and optimization solution of the USF WMRA with 9 DOF.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%% COPY RIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Function Declaration:
function [dq]=WMRA_Opt(i, JLA, JLO, Jo, detJo, dq, dx, dt, q)

% Declaring a global variable:
global dHo
% Reading the wheelchair's constant dimensions; all dimensions are converted to millimeters:
L=WMRA_WCD;
% The case when wheelchair-only control is required with no arm motion:
if i==0
    WCA=3;
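% Note (explanatory comment added for this edition): WCA selects the control
% case used throughout this function: 3 = wheelchair-only control (2 DOF),
% 2 = arm-only control (7 DOF), and 1 = combined wheelchair-and-arm control
% (9 DOF), matching the three cases described in the comments below.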


    % Calculating the inverse of the Jacobian, which is always non-singular:
    pinvJo=inv(Jo(1:2,1:2));
    % Calculating the joint angle change:
    % Here, dq of the wheels is translated from radians to distance travelled after using the Jacobian.
    dq=pinvJo*dx;
    dq(1)=dq(1)*L(5);
else
    % Reading the physical joint limits of the arm:
    [qmin,qmax]=WMRA_Jlimit;
    % Creating the gradient of the optimization function to avoid joint limits:
    dH=[0;0;0;0;0;0;0];
    if JLA==1
        for j=1:7
            dH(j)=-0.25*(qmax(j)-qmin(j))^2*(2*q(j)-qmax(j)-qmin(j))/((qmax(j)-q(j))^2*(q(j)-qmin(j))^2);
            % Re-defining the weight in case the joint is moving away from its limit or the joint limit was exceeded:
            if abs(dH(j)) < abs(dHo(j)) && q(j) < qmax(j) && q(j) > qmin(j)
                dH(j)=0;
            elseif abs(dH(j)) < abs(dHo(j)) && (q(j) >= qmax(j) || q(j) <= qmin(j))
                dH(j)=inf;
            elseif abs(dH(j)) > abs(dHo(j)) && (q(j) >= qmax(j) || q(j) <= qmin(j))
                dH(j)=0;
            end
        end
    end
    dHo=dH;
    % The case when arm-only control is required with no wheelchair motion:
    if max(size(dq))==7
        WCA=2;
        wo=20000000;
        ko=350000;
        % The weight matrix to be used for the Weighted Least Norm solution with joint limit avoidance:
        W=diag(1*[1;1;1;1;1;1;1]+1*abs(dH));
        % The inverse of the diagonal weight matrix:
        dia=diag(W);
        Winv=diag([1/dia(1); 1/dia(2); 1/dia(3); 1/dia(4); 1/dia(5); 1/dia(6); 1/dia(7)]);
    % The case when wheelchair-and-arm control is required:
    else
        WCA=1;
        wo=34000000;
        ko=13;
        % The weight matrix to be used for the Weighted Least Norm solution:
        W=diag([1*[1;1;1;1;1;1;1]+1*abs(dH);10*[1;1]]);
        % The inverse of the diagonal weight matrix:
        dia=diag(W);
        Winv=diag([1/dia(1); 1/dia(2); 1/dia(3); 1/dia(4); 1/dia(5); 1/dia(6); 1/dia(7); 1/dia(8); 1/dia(9)]);
    end
    % Redefining the determinant based on the weight:
    if i==1 || i==2
        detJo=sqrt(det(Jo*Winv*Jo'));
    end
    dof=max(size(dx));
end
% SR-Inverse and Weighted Least Norm optimization:
if i==1
    % Calculating the variable scale factor, sf:
    if detJo

    end
    % Calculating the SR-Inverse of the Jacobian:
    pinvJo=Winv*Jo'*inv(Jo*Winv*Jo'+sf*eye(dof));
    % Calculating the joint angle change, optimized based on the Weighted Least Norm solution:
    % Here, dq of the wheels is translated from radians to distance travelled after using the Jacobian.
    if WCA==2
        dq=pinvJo*dx;
    else
        dq=pinvJo*dx;
        dq(8)=dq(8)*L(5);
    end
% Pseudo Inverse and Weighted Least Norm optimization:
elseif i==2
    % Calculating the Pseudo Inverse of the Jacobian:
    pinvJo=Winv*Jo'*inv(Jo*Winv*Jo');
    % Calculating the joint angle change, optimized based on the Weighted Least Norm solution:
    % Here, dq of the wheels is translated from radians to distance travelled after using the Jacobian.
    if WCA==2
        dq=pinvJo*dx;
    else
        dq=pinvJo*dx;
        dq(8)=dq(8)*L(5);
    end
% SR-Inverse and Projection Gradient optimization based on the Euclidean norm of errors:
elseif i==3
    % Calculating the variable scale factor, sf:
    if detJo

    %dq=pinvJo*dx+(eye(9)-pinvJo*Jo)*dq;
    dq=pinvJo*dx+0.001*(eye(9)-pinvJo*Jo)*[dH;0;0];
    dq(8)=dq(8)*L(5);
    end
end
if JLO==1
    % A safety condition to stop any arm joint that reaches its joint limit:
    if WCA~=3
        for k=1:7
            if q(k) >= qmax(k) || q(k) <= qmin(k)
                dq(k)=0;
            end
        end
    end
    % A safety condition to slow any joint that exceeds the velocity limits of the WMRA:
    if WCA==3
        dqmax=dt*[100;0.15]; % Joint velocity limits when the time increment is dt seconds.
    else
        dqmax=dt*[0.5;0.5;0.5;0.5;0.5;0.5;0.5;100;0.15]; % Joint velocity limits when the time increment is dt seconds.
    end
    for k=1:max(size(dq))
        if abs(dq(k)) >= dqmax(k)
            dq(k)=sign(dq(k))*dqmax(k);
        end
    end
end

% This function gives the transformation matrix of the WMRA's base on the wheelchair with 2 DOF, given the desired x,y position and z rotation angle.
% Dimensions are as supplied; angles are in radians.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%% COPY RIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Function Declaration:
function [T]=WMRA_p2T(x, y, a)
% Reading the wheelchair's constant dimensions; all dimensions are converted to millimeters:
L=WMRA_WCD;
% Defining the transformation matrix:
T=[cos(a) -sin(a) 0 x ; sin(a) cos(a) 0 y ; 0 0 1 L(4)+L(5) ; 0 0 0 1];

% This "new USF WMRA" function SIMULATES the arm going from the parking position to the ready position with ANIMATION. All angles are in radians.
% The parking position is assumed to be qi=[0;pi/2;0;pi;0;0;0] (radians),
% the ready position is assumed to be qd=[pi/2;pi/2;0;pi/2;pi/2;pi/2;0] (radians).
% ini=1 --> initialize animation figures, ini=2 or any --> just update the figures, ini=3 --> close the figures.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%% COPY RIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Function Declaration:
function WMRA_park2ready(ini, vr, ml, arm, Tiwc, qiwc)
% Closing the Arm library, the Matlab Graphics animation, the Virtual Reality animation, and the plot windows:
if ini==3
    if arm==1
        try WMRA_ARM_Motion(ini, 0, 0, 0); end
    end
    if vr==1
        try WMRA_VR_Animation(ini, 0, 0); end
    end
    if ml==1
        try WMRA_ML_Animation(ini, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0); end
    end
    return;
end
% Defining the used conditions:
qi=[0;pi/2;0;pi;0;0;0]; % Initial joint angles (Parking Position).
qd=[pi/2;pi/2;0;pi/2;pi/2;pi/2;0]; % Final joint angles (Ready Position).
ts=10; % (5 or 10 or 20) Simulation time to move the arm from the parking position to the ready position.
n=100; % Number of time steps.
dt=ts/n; % The time step to move the arm from the parking position to the ready position.
dq=(qd-qi)/(0.5*n+5); % Joint angle change at every time step.
% Initializing the physical Arm:
if arm==1
    WMRA_ARM_Motion(ini, 2, [qi;qiwc;0], dt);
    ddt=0;
end
% Initializing Virtual Reality Animation:
if vr==1
    WMRA_VR_Animation(ini, Tiwc, [qi;qiwc]);
end
% Initializing Robot Animation in Matlab Graphics:
if ml==1
    % Inputting the D-H parameters in matrix form:
    DH=WMRA_DH(qi);
    % Calculating the transformation matrices of each link:
    T01=WMRA_rotx(DH(1,1))*WMRA_transl(DH(1,2),0,0)*WMRA_rotz(DH(1,4))*WMRA_transl(0,0,DH(1,3));
    T12=WMRA_rotx(DH(2,1))*WMRA_transl(DH(2,2),0,0)*WMRA_rotz(DH(2,4))*WMRA_transl(0,0,DH(2,3));
    T23=WMRA_rotx(DH(3,1))*WMRA_transl(DH(3,2),0,0)*WMRA_rotz(DH(3,4))*WMRA_transl(0,0,DH(3,3));
    T34=WMRA_rotx(DH(4,1))*WMRA_transl(DH(4,2),0,0)*WMRA_rotz(DH(4,4))*WMRA_transl(0,0,DH(4,3));


    T45=WMRA_rotx(DH(5,1))*WMRA_transl(DH(5,2),0,0)*WMRA_rotz(DH(5,4))*WMRA_transl(0,0,DH(5,3));
    T56=WMRA_rotx(DH(6,1))*WMRA_transl(DH(6,2),0,0)*WMRA_rotz(DH(6,4))*WMRA_transl(0,0,DH(6,3));
    T67=WMRA_rotx(DH(7,1))*WMRA_transl(DH(7,2),0,0)*WMRA_rotz(DH(7,4))*WMRA_transl(0,0,DH(7,3));
    % Calculating the transformation matrices of the initial and desired arm positions:
    Ti=Tiwc*T01*T12*T23*T34*T45*T56*T67;
    Td=Tiwc*WMRA_q2T(qd);
    WMRA_ML_Animation(ini, Ti, Td, Tiwc, T01, T12, T23, T34, T45, T56, T67);
end
% Initialization:
qo=qi;
tt=0;
while tt <= (ts)
    % Starting a timer:
    tic;
    % Calculating the new joint angles:
    if tt==0
        qn=qo;
    elseif tt < (dt*(0.5*n-5))
        qn(1)=qo(1)+dq(1);
    elseif tt < (dt*(0.5*n+5))
        qn=qo+dq;
    elseif tt < (dt*(n-1))
        qn(2:7)=qo(2:7)+dq(2:7);
    end
    % Updating the physical Arm:
    if arm==1
        ddt=ddt+dt;
        if ddt>=0.5 || tt>=(ts)
            WMRA_ARM_Motion(2, 1, [qn;qiwc;0], ddt);
            ddt=0;
        end
    end
    % Updating Virtual Reality Animation:
    if vr==1
        WMRA_VR_Animation(2, Tiwc, [qn;qiwc]);
    end
    % Updating Matlab Animation:
    if ml==1
        % Calculating the new transformation matrices:
        T1a=WMRA_rotx(DH(1,1))*WMRA_transl(DH(1,2),0,0)*WMRA_rotz(qn(1))*WMRA_transl(0,0,DH(1,3));
        T2a=WMRA_rotx(DH(2,1))*WMRA_transl(DH(2,2),0,0)*WMRA_rotz(qn(2))*WMRA_transl(0,0,DH(2,3));
        T3a=WMRA_rotx(DH(3,1))*WMRA_transl(DH(3,2),0,0)*WMRA_rotz(qn(3))*WMRA_transl(0,0,DH(3,3));
        T4a=WMRA_rotx(DH(4,1))*WMRA_transl(DH(4,2),0,0)*WMRA_rotz(qn(4))*WMRA_transl(0,0,DH(4,3));
        T5a=WMRA_rotx(DH(5,1))*WMRA_transl(DH(5,2),0,0)*WMRA_rotz(qn(5))*WMRA_transl(0,0,DH(5,3));


        T6a=WMRA_rotx(DH(6,1))*WMRA_transl(DH(6,2),0,0)*WMRA_rotz(qn(6))*WMRA_transl(0,0,DH(6,3));
        T7a=WMRA_rotx(DH(7,1))*WMRA_transl(DH(7,2),0,0)*WMRA_rotz(qn(7))*WMRA_transl(0,0,DH(7,3));
        WMRA_ML_Animation(2, Ti, Td, Tiwc, T1a, T2a, T3a, T4a, T5a, T6a, T7a);
    end
    % Updating the old values with the new values for the next iteration:
    qo=qn;
    tt=tt+dt;
    % Pausing for the speed sync:
    pause(dt-toc);
end

% This function plots different animation variables for the USF WMRA with 9 DOF.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%% COPY RIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Function Declaration:
function WMRA_Plots(st, L, r2d, dt, i, tt, qn, dq, Tn, Tnwc, detjoa, detjo)
% Declaring the global variables:
global time q1 q2 q3 q4 q5 q6 q7 qll qrr
global qd1 qd2 qd3 qd4 qd5 qd6 qd7 qdl qdr
global x y z roll pitch yaw xc yc zc rollc pitchc yawc detJoa detJo
% Data collection at every iteration:
if st==1
    % Generating a time vector for plotting:
    time(i)=tt;
    % Joint Angles:
    q1(i)=qn(1)*r2d;
    q2(i)=qn(2)*r2d;
    q3(i)=qn(3)*r2d;
    q4(i)=qn(4)*r2d;
    q5(i)=qn(5)*r2d;
    q6(i)=qn(6)*r2d;
    q7(i)=qn(7)*r2d;
    qll(i)=qn(8)-L(1)*qn(9)/2;
    qrr(i)=qn(8)+L(1)*qn(9)/2;
    % Joint Velocities:
    qd1(i)=r2d*dq(1)/dt;
    qd2(i)=r2d*dq(2)/dt;
    qd3(i)=r2d*dq(3)/dt;
    qd4(i)=r2d*dq(4)/dt;
    qd5(i)=r2d*dq(5)/dt;
    qd6(i)=r2d*dq(6)/dt;
    qd7(i)=r2d*dq(7)/dt;
    qdl(i)=(dq(8)-L(1)*dq(9)/2)/dt;
    qdr(i)=(dq(8)+L(1)*dq(9)/2)/dt;
    % Hand Position and Orientation:
    x(i)=Tn(1,4);
    y(i)=Tn(2,4);
    z(i)=Tn(3,4);
    or=WMRA_T2rpy(Tn);


    roll(i)=or(1)*r2d;
    pitch(i)=or(2)*r2d;
    yaw(i)=or(3)*r2d;
    % Arm Base Position and Orientation on the Wheelchair:
    xc(i)=Tnwc(1,4);
    yc(i)=Tnwc(2,4);
    zc(i)=Tnwc(3,4);
    orc=WMRA_T2rpy(Tnwc);
    rollc(i)=orc(1)*r2d;
    pitchc(i)=orc(2)*r2d;
    yawc(i)=orc(3)*r2d;
    % Manipulability Measure:
    detJoa(i)=detjoa;
    detJo(i)=detjo;
% Plotting the data in graphs:
else
    figure(2);
    plot(time,qd1,'-y',time,qd2,'-m',time,qd3,'-c',time,qd4,'-r',time,qd5,'-g',time,qd6,'-b',time,qd7,'-k');
    grid on;
    title('Joint Angular Velocities vs Time'); xlabel('time, (sec)'); ylabel('joint velocities, (deg/s)');
    legend('\theta_1d','\theta_2d','\theta_3d','\theta_4d','\theta_5d','\theta_6d','\theta_7d',-1);
    figure(3);
    plot(time,qdl,'-b',time,qdr,'-g');
    grid on;
    title('Wheels Track Velocities vs Time'); xlabel('time, (sec)'); ylabel('wheels track velocities, (mm/s)');
    legend('\theta_Ld','\theta_Rd',-1);
    figure(4);
    plot(time,q1,'-y',time,q2,'-m',time,q3,'-c',time,q4,'-r',time,q5,'-g',time,q6,'-b',time,q7,'-k');
    grid on;
    title('Joint Angular Displacements vs Time'); xlabel('time, (sec)'); ylabel('joint angles, (deg)');
    legend('\theta_1','\theta_2','\theta_3','\theta_4','\theta_5','\theta_6','\theta_7',-1);
    figure(5);
    plot(time,qll,'-b',time,qrr,'-g');
    grid on;
    title('Wheels Track Distances vs Time'); xlabel('time, (sec)'); ylabel('wheels track distances, (mm)');
    legend('\theta_L','\theta_R',-1);
    figure(6);
    plot(time,x,'-y',time,y,'-m',time,z,'-c');
    grid on;
    title('Hand Position vs Time'); xlabel('time, (sec)'); ylabel('position, (mm)');
    legend('x','y','z',-1);
    figure(7);
    plot(time,roll,'-y',time,pitch,'-m',time,yaw,'-c');
    grid on;
    title('Hand Orientation vs Time'); xlabel('time, (sec)'); ylabel('orientation, (deg)');
    legend('roll','pitch','yaw',-1);
    figure(8);
    plot(time,xc,'-y',time,yc,'-m',time,zc,'-c');
    grid on;
    title('Arm Base Position vs Time'); xlabel('time, (sec)'); ylabel('position, (mm)');
    legend('x','y','z',-1);
    figure(9);
    plot(time,rollc,'-y',time,pitchc,'-m',time,yawc,'-c');
    grid on;
    title('Arm Base Orientation vs Time'); xlabel('time, (sec)'); ylabel('orientation, (deg)');
    legend('roll','pitch','yaw',-1);
    figure(10);
    plot(time,detJoa,'-y',time,detJo,'-m');


    grid on;
    title('Manipulability Measure vs Time'); xlabel('time, (sec)'); ylabel('Manipulability Measure');
    legend('W_A_R_M','W_W_M_R_A',-1);
end

% This function uses a 3rd-order polynomial with no blending factor to find smooth trajectory points of a variable "q" along a straight line, given the initial and final variable values and the number of trajectory points.
% The output is the variable position.
% See Eqs. 7.3 and 7.6, pages 204-205, of the Craig book.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%% COPY RIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Function Declaration:
function [qt] = WMRA_Polynomial(qi, qf, n)
tt=0;
tf=abs((qf-qi));
dt=tf/(n-1);
for i=1:n
    if tf <= 0.001
        qt(i)=qi;
    else
        qt(i)=qi+(qf-qi)*3*tt^2/tf^2-(qf-qi)*2*tt^3/tf^3; % From Eqs. 7.3 and 7.6, pages 204-205, of the Craig book.
    end
    tt=tt+dt;
end

% This function reads the BCI 2000 device output through a TCP/IP port,
% extracts the selected row and column of the screen interface out of the data sent from the BCI 2000,
% converts these row and column data to commanded Cartesian velocities from the BCI device,
% and sends them as an output.
% The optional input to this function is the port number.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%% COPY RIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%% Thanks to Eduardo Veras %%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Function declaration:
function dx=WMRA_psy(port1)
% Assigning the port number in case the user did not input it:
if nargin<1
    port1=19711;
end
% Opening the port:
try
    udp=pnet('udpsocket',port1);


catch
    pnet(udp,'close');
    clear udp;
end
% Initializing the loop variable:
i=1;
% Starting the loop:
while (i<2)
    % Trying to read the data packet from the port:
    try
        len=pnet(udp,'readpacket');
        % Condition to make sure that the data is read:
        if len>0
            % Reading a data block:
            data=pnet(udp,'read',36,'uint64','ieee-be','block');
            % Condition to make sure that the read data is not blank:
            if (isempty(data)~=1)
                % Condition to make sure that the read data is not a blank line:
                if length(data) > 0
                    % Finding the string 'SelectedRow' in that data line:
                    k1 = findstr(data,'SelectedRow');
                    % Condition in case the required string is found:
                    if (isempty(k1)~=1)
                        % The length of the string:
                        num1 = length(k1);
                        % Condition to ensure that the string length > 0:
                        if num1 > 0
                            % Reading the string that comes right after the selected string and converting it to a number:
                            mrow = str2double(data(13));
                        end
                    end
                    % Finding the string 'SelectedColumn' in that data line:
                    k2 = findstr(data,'SelectedColumn');
                    % Condition in case the required string is found:
                    if (isempty(k2)~=1)
                        % The length of the string:
                        num2 = length(k2);
                        % Condition to ensure that the string length > 0:
                        if num2 > 0
                            % Reading the string that comes right after the selected string and converting it to a number:
                            mcol = str2double(data(16));
                        end
                    end
                    % Modifying the output to the proper format:
                    rc=[mrow mcol];
                    % Assigning the directional velocity vector based on the selected row and column from the interface screen:
                    dx=[0;0;0;0;0;0;0];
                    if rc == [1 1]
                        dx=[0;0;1;0;0;0;0];
                    elseif rc == [1 2]


                        dx=[1;0;0;0;0;0;0];
                    elseif rc == [1 3]
                        dx=[0;0;0;0;0;0.003;0];
                    elseif rc == [1 4]
                        dx=[0;0;0;0;0.003;0;0];
                    elseif rc == [1 5]
                        dx=[0;0;0;0;0;-0.003;0];
                    elseif rc == [2 1]
                        dx=[0;1;0;0;0;0;0];
                    elseif rc == [2 2]
                        dx=[0;-1;0;0;0;0;0];
                    elseif rc == [2 3]
                        dx=[0;0;0;-0.003;0;0;0];
                    elseif rc == [2 4]
                        dx=[0;0;0;0;-0.003;0;0];
                    elseif rc == [2 5]
                        dx=[0;0;0;0.003;0;0;0];
                    elseif rc == [3 1]
                        dx=[0;0;-1;0;0;0;0];
                    elseif rc == [3 2]
                        dx=[-1;0;0;0;0;0;0];
                    elseif rc == [3 3]
                        dx=[0;0;0;0;0;0;0];
                    elseif rc == [3 4]
                        dx=[0;0;0;0;0;0;1];
                    elseif rc == [3 5]
                        dx=[0;0;0;0;0;0;-1];
                    else
                        fprintf('ERROR');
                    end
                    % dx=dx(1:6);
                    % Once we get the reading, we can get out of the loop:
                    i=2;
                end
            end
        end
    catch
    end
end
% Be a good citizen and clean up your mess:
pnet(udp,'close');
clear udp;

% This function gives the transformation matrix of the new USF WMRA with 7 DOF, given the joint angles in radians.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%% COPY RIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Function Declaration:
function [T]=WMRA_q2T(q)
% Inputting the D-H parameters in matrix form:
DH=WMRA_DH(q);
% Calculating the transformation matrices of each link:
T1=WMRA_rotx(DH(1,1))*WMRA_transl(DH(1,2),0,0)*WMRA_rotz(DH(1,4))*WMRA_transl(0,0,DH(1,3));


T2=WMRA_rotx(DH(2,1))*WMRA_transl(DH(2,2),0,0)*WMRA_rotz(DH(2,4))*WMRA_transl(0,0,DH(2,3));
T3=WMRA_rotx(DH(3,1))*WMRA_transl(DH(3,2),0,0)*WMRA_rotz(DH(3,4))*WMRA_transl(0,0,DH(3,3));
T4=WMRA_rotx(DH(4,1))*WMRA_transl(DH(4,2),0,0)*WMRA_rotz(DH(4,4))*WMRA_transl(0,0,DH(4,3));
T5=WMRA_rotx(DH(5,1))*WMRA_transl(DH(5,2),0,0)*WMRA_rotz(DH(5,4))*WMRA_transl(0,0,DH(5,3));
T6=WMRA_rotx(DH(6,1))*WMRA_transl(DH(6,2),0,0)*WMRA_rotz(DH(6,4))*WMRA_transl(0,0,DH(6,3));
T7=WMRA_rotx(DH(7,1))*WMRA_transl(DH(7,2),0,0)*WMRA_rotz(DH(7,4))*WMRA_transl(0,0,DH(7,3));
% Calculating the total transformation matrix of the given arm position:
T=T1*T2*T3*T4*T5*T6*T7;

% This "new USF WMRA" function SIMULATES the arm going from the ready position to any position with ANIMATION. All angles are in radians.
% The ready position is assumed to be qd=[pi/2;pi/2;0;pi/2;pi/2;pi/2;0] (radians).
% ini=1 --> initialize animation figures, ini=2 or any --> just update the figures, ini=3 --> close the figures.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%% COPY RIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Function Declaration:
function WMRA_ready2any(ini, vr, ml, arm, Tiwc, qd)
% Closing the Arm library, the Matlab Graphics animation, the Virtual Reality animation, and the plot windows:
if ini==3
    if arm==1
        try WMRA_ARM_Motion(ini, 0, 0, 0); end
    end
    if vr==1
        try WMRA_VR_Animation(ini, 0, 0); end
    end
    if ml==1
        try WMRA_ML_Animation(ini, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0); end
    end
    return;
end
% Defining the used conditions:
qi=[pi/2;pi/2;0;pi/2;pi/2;pi/2;0]; % Initial joint angles (Ready Position).
ts=10; % (5 or 10 or 20) Simulation time to move the arm from the ready position to any position.
n=100; % Number of time steps.
dt=ts/n; % The time step to move the arm from the ready position to any position.
% Initializing the physical Arm:
if arm==1
    WMRA_ARM_Motion(ini, 2, [qi;0;0;0], dt);
    ddt=0;


end
% Initializing Virtual Reality Animation:
if vr==1
    WMRA_VR_Animation(ini, Tiwc, [qi;0;0]);
end
% Initializing Robot Animation in Matlab Graphics:
if ml==1
    % Inputting the D-H parameters in matrix form:
    DH=WMRA_DH(qi);
    % Calculating the transformation matrices of each link:
    T01=WMRA_rotx(DH(1,1))*WMRA_transl(DH(1,2),0,0)*WMRA_rotz(DH(1,4))*WMRA_transl(0,0,DH(1,3));
    T12=WMRA_rotx(DH(2,1))*WMRA_transl(DH(2,2),0,0)*WMRA_rotz(DH(2,4))*WMRA_transl(0,0,DH(2,3));
    T23=WMRA_rotx(DH(3,1))*WMRA_transl(DH(3,2),0,0)*WMRA_rotz(DH(3,4))*WMRA_transl(0,0,DH(3,3));
    T34=WMRA_rotx(DH(4,1))*WMRA_transl(DH(4,2),0,0)*WMRA_rotz(DH(4,4))*WMRA_transl(0,0,DH(4,3));
    T45=WMRA_rotx(DH(5,1))*WMRA_transl(DH(5,2),0,0)*WMRA_rotz(DH(5,4))*WMRA_transl(0,0,DH(5,3));
    T56=WMRA_rotx(DH(6,1))*WMRA_transl(DH(6,2),0,0)*WMRA_rotz(DH(6,4))*WMRA_transl(0,0,DH(6,3));
    T67=WMRA_rotx(DH(7,1))*WMRA_transl(DH(7,2),0,0)*WMRA_rotz(DH(7,4))*WMRA_transl(0,0,DH(7,3));
    % Calculating the transformation matrices of the initial and desired arm positions:
    Ti=Tiwc*T01*T12*T23*T34*T45*T56*T67;
    Td=Tiwc*WMRA_q2T(qd);
    WMRA_ML_Animation(ini, Ti, Td, Tiwc, T01, T12, T23, T34, T45, T56, T67);
end
% Check for the shortest route:
diff=qd-qi;
for i=1:7
    if diff(i) > pi
        diff(i)=diff(i)-2*pi;
    elseif diff(i) < (-pi)
        diff(i)=diff(i)+2*pi;
    end
end
% Joint angle change at every time step:
dq=diff/n;
% Initialization:
qo=qi;
tt=0;
while tt <= (ts-dt)
    % Starting a timer:
    tic;
    % Calculating the new joint angles:
    qn=qo+dq;
    % Updating the physical Arm:
    if arm==1


        ddt=ddt+dt;
        if ddt>=0.5 || tt>=(ts-dt)
            WMRA_ARM_Motion(2, 1, [qn;0;0;0], ddt);
            ddt=0;
        end
    end
    % Updating Virtual Reality Animation:
    if vr==1
        WMRA_VR_Animation(2, Tiwc, [qn;0;0]);
    end
    % Updating Matlab Animation:
    if ml==1
        % Calculating the new transformation matrices:
        T1a=WMRA_rotx(DH(1,1))*WMRA_transl(DH(1,2),0,0)*WMRA_rotz(qn(1))*WMRA_transl(0,0,DH(1,3));
        T2a=WMRA_rotx(DH(2,1))*WMRA_transl(DH(2,2),0,0)*WMRA_rotz(qn(2))*WMRA_transl(0,0,DH(2,3));
        T3a=WMRA_rotx(DH(3,1))*WMRA_transl(DH(3,2),0,0)*WMRA_rotz(qn(3))*WMRA_transl(0,0,DH(3,3));
        T4a=WMRA_rotx(DH(4,1))*WMRA_transl(DH(4,2),0,0)*WMRA_rotz(qn(4))*WMRA_transl(0,0,DH(4,3));
        T5a=WMRA_rotx(DH(5,1))*WMRA_transl(DH(5,2),0,0)*WMRA_rotz(qn(5))*WMRA_transl(0,0,DH(5,3));
        T6a=WMRA_rotx(DH(6,1))*WMRA_transl(DH(6,2),0,0)*WMRA_rotz(qn(6))*WMRA_transl(0,0,DH(6,3));
        T7a=WMRA_rotx(DH(7,1))*WMRA_transl(DH(7,2),0,0)*WMRA_rotz(qn(7))*WMRA_transl(0,0,DH(7,3));
        WMRA_ML_Animation(2, Ti, Td, Tiwc, T1a, T2a, T3a, T4a, T5a, T6a, T7a);
    end
    % Updating the old values with the new values for the next iteration:
    qo=qn;
    tt=tt+dt;
    % Pausing for the speed sync:
    pause(dt-toc);
end

% This "new USF WMRA" function SIMULATES the arm going from the ready position to the parking position with ANIMATION. All angles are in radians.
% The parking position is assumed to be qi=[0;pi/2;0;pi;0;0;0] (radians),
% the ready position is assumed to be qd=[pi/2;pi/2;0;pi/2;pi/2;pi/2;0] (radians).
% ini=1 --> initialize animation figures, ini=2 or any --> just update the figures, ini=3 --> close the figures.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%% COPY RIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Function Declaration:
function WMRA_ready2park(ini, vr, ml, arm, Tiwc, qiwc)


% Closing the Arm library, the Matlab Graphics animation, the Virtual Reality animation, and the plot windows:
if ini==3
    if arm==1
        try WMRA_ARM_Motion(ini, 0, 0, 0); end
    end
    if vr==1
        try WMRA_VR_Animation(ini, 0, 0); end
    end
    if ml==1
        try WMRA_ML_Animation(ini, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0); end
    end
    return;
end
% Defining the used conditions:
qi=[pi/2;pi/2;0;pi/2;pi/2;pi/2;0]; % Initial joint angles (Ready Position).
qd=[0;pi/2;0;pi;0;0;0]; % Final joint angles (Parking Position).
ts=10; % (5 or 10 or 20) Simulation time to move the arm from the ready position to the parking position.
n=100; % Number of time steps.
dt=ts/n; % The time step to move the arm from the ready position to the parking position.
dq=(qd-qi)/(0.5*n+5); % Joint angle change at every time step.
% Initializing the physical Arm:
if arm==1
    WMRA_ARM_Motion(ini, 2, [qi;qiwc;0], dt);
    ddt=0;
end
% Initializing Virtual Reality Animation:
if vr==1
    WMRA_VR_Animation(ini, Tiwc, [qi;qiwc]);
end
% Initializing Robot Animation in Matlab Graphics:
if ml==1
    % Inputting the D-H parameters in matrix form:
    DH=WMRA_DH(qi);
    % Calculating the transformation matrices of each link:
    T01=WMRA_rotx(DH(1,1))*WMRA_transl(DH(1,2),0,0)*WMRA_rotz(DH(1,4))*WMRA_transl(0,0,DH(1,3));
    T12=WMRA_rotx(DH(2,1))*WMRA_transl(DH(2,2),0,0)*WMRA_rotz(DH(2,4))*WMRA_transl(0,0,DH(2,3));
    T23=WMRA_rotx(DH(3,1))*WMRA_transl(DH(3,2),0,0)*WMRA_rotz(DH(3,4))*WMRA_transl(0,0,DH(3,3));
    T34=WMRA_rotx(DH(4,1))*WMRA_transl(DH(4,2),0,0)*WMRA_rotz(DH(4,4))*WMRA_transl(0,0,DH(4,3));
    T45=WMRA_rotx(DH(5,1))*WMRA_transl(DH(5,2),0,0)*WMRA_rotz(DH(5,4))*WMRA_transl(0,0,DH(5,3));
    T56=WMRA_rotx(DH(6,1))*WMRA_transl(DH(6,2),0,0)*WMRA_rotz(DH(6,4))*WMRA_transl(0,0,DH(6,3));


    T67=WMRA_rotx(DH(7,1))*WMRA_transl(DH(7,2),0,0)*WMRA_rotz(DH(7,4))*WMRA_transl(0,0,DH(7,3));
    % Calculating the transformation matrices of the initial and desired arm positions:
    Ti=Tiwc*T01*T12*T23*T34*T45*T56*T67;
    Td=Tiwc*WMRA_q2T(qd);
    WMRA_ML_Animation(ini, Ti, Td, Tiwc, T01, T12, T23, T34, T45, T56, T67);
end
% Initialization:
qo=qi;
tt=0;
while tt <= (ts)
    % Starting a timer:
    tic;
    % Calculating the new joint angles:
    if tt==0
        qn=qo;
    elseif tt < (dt*(0.5*n-5))
        qn(2:7)=qo(2:7)+dq(2:7);
    elseif tt < (dt*(0.5*n+5))
        qn=qo+dq;
    elseif tt < (dt*(n-1))
        qn(1)=qo(1)+dq(1);
    end
    % Updating the physical Arm:
    if arm==1
        ddt=ddt+dt;
        if ddt>=0.5 || tt>=(ts)
            WMRA_ARM_Motion(2, 1, [qn;qiwc;0], ddt);
            ddt=0;
        end
    end
    % Updating Virtual Reality Animation:
    if vr==1
        WMRA_VR_Animation(2, Tiwc, [qn;qiwc]);
    end
    % Updating Matlab Animation:
    if ml==1
        % Calculating the new transformation matrices:
        T1a=WMRA_rotx(DH(1,1))*WMRA_transl(DH(1,2),0,0)*WMRA_rotz(qn(1))*WMRA_transl(0,0,DH(1,3));
        T2a=WMRA_rotx(DH(2,1))*WMRA_transl(DH(2,2),0,0)*WMRA_rotz(qn(2))*WMRA_transl(0,0,DH(2,3));
        T3a=WMRA_rotx(DH(3,1))*WMRA_transl(DH(3,2),0,0)*WMRA_rotz(qn(3))*WMRA_transl(0,0,DH(3,3));
        T4a=WMRA_rotx(DH(4,1))*WMRA_transl(DH(4,2),0,0)*WMRA_rotz(qn(4))*WMRA_transl(0,0,DH(4,3));
        T5a=WMRA_rotx(DH(5,1))*WMRA_transl(DH(5,2),0,0)*WMRA_rotz(qn(5))*WMRA_transl(0,0,DH(5,3));
        T6a=WMRA_rotx(DH(6,1))*WMRA_transl(DH(6,2),0,0)*WMRA_rotz(qn(6))*WMRA_transl(0,0,DH(6,3));
        T7a=WMRA_rotx(DH(7,1))*WMRA_transl(DH(7,2),0,0)*WMRA_rotz(qn(7))*WMRA_transl(0,0,DH(7,3));
        WMRA_ML_Animation(2, Ti, Td, Tiwc, T1a, T2a, T3a, T4a, T5a, T6a, T7a);


    end
    % Updating the old values with the new values for the next iteration:
    qo=qn;
    tt=tt+dt;
    % Pausing for the speed sync:
    pause(dt-toc);
end

% This function gives the homogeneous transformation matrix, given the rotation angle about the X axis.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%% COPY RIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [T]=WMRA_rotx(t)
c=cos(t); s=sin(t);
T=[1 0 0 0 ; 0 c -s 0 ; 0 s c 0 ; 0 0 0 1];

% This function gives the homogeneous transformation matrix, given the rotation angle about the Y axis.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%% COPY RIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [T]=WMRA_roty(t)
c=cos(t); s=sin(t);
T=[c 0 s 0 ; 0 1 0 0 ; -s 0 c 0 ; 0 0 0 1];

% This function gives the homogeneous transformation matrix, given the rotation angle about the Z axis.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%% COPY RIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [T]=WMRA_rotz(t)
c=cos(t); s=sin(t);
T=[c -s 0 0 ; s c 0 0 ; 0 0 1 0 ; 0 0 0 1];
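The three rotation helpers above build the standard 4x4 homogeneous rotation matrices used throughout the WMRA kinematics. As a quick cross-check of that structure (a Python sketch, not part of the original MATLAB toolbox), rotations about the same axis should compose additively:

```python
import math

def rotx(t):
    # 4x4 homogeneous rotation about X, mirroring WMRA_rotx
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]]

def rotz(t):
    # 4x4 homogeneous rotation about Z, mirroring WMRA_rotz
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def matmul(A, B):
    # Plain 4x4 matrix product, as MATLAB's * operator would compute
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Rz(a) * Rz(b) should equal Rz(a + b):
P = matmul(rotz(0.3), rotz(0.4))
Q = rotz(0.7)
assert all(abs(P[i][j] - Q[i][j]) < 1e-12 for i in range(4) for j in range(4))
```

The same additivity check holds for rotx and roty, which is why chains like WMRA_rotx(...)*WMRA_rotz(...) in the D-H transforms above compose safely in sequence.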


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%% COPY RIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%% Thanks to Mayur Palankar %%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function varargout = WMRA_screen(varargin)
% WMRA_SCREEN M-file for WMRA_screen.fig
%   WMRA_SCREEN, by itself, creates a new WMRA_SCREEN or raises the existing
%   singleton*.
%
%   H = WMRA_SCREEN returns the handle to a new WMRA_SCREEN or the handle to
%   the existing singleton*.
%
%   WMRA_SCREEN('CALLBACK',hObject,eventData,handles,...) calls the local
%   function named CALLBACK in WMRA_SCREEN.M with the given input arguments.
%
%   WMRA_SCREEN('Property','Value',...) creates a new WMRA_SCREEN or raises the
%   existing singleton*. Starting from the left, property value pairs are
%   applied to the GUI before WMRA_screen_OpeningFunction gets called. An
%   unrecognized property name or invalid value makes property application
%   stop. All inputs are passed to WMRA_screen_OpeningFcn via varargin.
%
%   *See GUI Options on GUIDE's Tools menu. Choose "GUI allows only one
%   instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help WMRA_screen

% Last Modified by GUIDE v2.5 04-Mar-2007 20:56:51

% Begin initialization code - DO NOT EDIT
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @WMRA_screen_OpeningFcn, ...
                   'gui_OutputFcn',  @WMRA_screen_OutputFcn, ...
                   'gui_LayoutFcn',  [], ...
                   'gui_Callback',   []);
if nargin && ischar(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end
if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT

% --- Executes just before WMRA_screen is made visible.
function WMRA_screen_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% varargin   command line arguments to WMRA_screen (see VARARGIN)

% Choose default command line output for WMRA_screen
handles.output = hObject;


% Update handles structure
guidata(hObject, handles);

global VAR_DX
global VAR_SCREENOPN
VAR_DX=[0;0;0;0;0;0;0];
VAR_SCREENOPN = 1;
if (varargin{1} == '1')
    set(handles.pushbutton1, 'Enable', 'on');
else
    set(handles.pushbutton1, 'Enable', 'off');
end
set(handles.togglebutton2, 'cdata', button11);
set(handles.togglebutton3, 'cdata', button12);
set(handles.togglebutton8, 'cdata', button13);
set(handles.togglebutton9, 'cdata', button14);
set(handles.togglebutton15, 'cdata', button15);
set(handles.togglebutton4, 'cdata', button21);
set(handles.togglebutton5, 'cdata', button22);
set(handles.togglebutton10, 'cdata', button23);
set(handles.togglebutton11, 'cdata', button24);
set(handles.togglebutton16, 'cdata', button25);
set(handles.togglebutton6, 'cdata', button31);
set(handles.togglebutton7, 'cdata', button32);
set(handles.pushbutton37, 'cdata', button33);
set(handles.togglebutton13, 'cdata', button34);
set(handles.togglebutton14, 'cdata', button35);
% UIWAIT makes WMRA_screen wait for user response (see UIRESUME)
% uiwait(handles.figure1);

function play = button11
play = iconize(imread('11.jpg'));
function play = button12
play = iconize(imread('12.jpg'));
function play = button13
play = iconize(imread('13.jpg'));
function play = button14
play = iconize(imread('14.jpg'));
function play = button15
play = iconize(imread('15.jpg'));
function play = button21
play = iconize(imread('21.jpg'));
function play = button22
play = iconize(imread('22.jpg'));
function play = button23
play = iconize(imread('23.jpg'));
function play = button24
play = iconize(imread('24.jpg'));
function play = button25
play = iconize(imread('25.jpg'));
function play = button31
play = iconize(imread('31.jpg'));
function play = button32
play = iconize(imread('32.jpg'));
function play = button33
play = iconize(imread('33.jpg'));
function play = button34
play = iconize(imread('34.jpg'));
function play = button35
play = iconize(imread('35.jpg'));

function out = iconize(a)


Appendix B. (Continued) 338[r,c,d] = size(a); r_skip = ceil(r/70); c_skip = ceil(c/70); out = a(1:r_skip:end,1:c_skip:end,:); % --Outputs from this function are returned to the command line. function varargout = WMRA_screen_OutputFcn(hObject, eventdata, handles) % varargout cell array for returning output args (see VARARGOUT); % hObject handle to figure % eventdata reserved to be defined in a future version of MATLAB % handles structure with handles and user data (see GUIDATA) % Get default command line output from handles structure varargout{1} = handles.output; % --Executes on button press in pushbutton1. function pushbutton1_Callback(hObject, eventdata, handles) % hObject handle to pushbutton1 (see GCBO) % eventdata reserved to be defined in a future version of MATLAB % handles structure with handles and user data (see GUIDATA) WMRA_Main_GUI; % --Executes on button press in pushbutton2. function pushbutton2_Callback(hObject, eventdata, handles) % hObject handle to pushbutton2 (see GCBO) % eventdata reserved to be defined in a future version of MATLAB % handles structure with handles and user data (see GUIDATA) global VAR_SCREENOPN VAR_SCREENOPN = 0; close; % --Executes on button press in togglebutton2. function togglebutton2_Callback(hObject, eventdata, handles) % hObject handle to togglebutton2 (see GCBO) % eventdata reserved to be defined in a future version of MATLAB % handles structure with handles and user data (see GUIDATA) global VAR_DX % 1 1 if (get(hObject, 'Value' ) == get(hObject, 'Max' )) set(hObject, 'BackgroundColor' 'red' ); set(handles.togglebutton6, 'Enable' 'off' ); VAR_DX(3) = 1; elseif (get(hObject, 'Value' ) == get(hObject, 'Min' )) set(hObject, 'BackgroundColor' 'white' ); set(handles.togglebutton6, 'Enable' 'on' ); VAR_DX(3) = 0; end % Hint: get(hObject,'Value') returns toggle state of togglebutton2 % --Executes on button press in togglebutton3. 
function togglebutton3_Callback(hObject, eventdata, handles)
% hObject    handle to togglebutton3 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
global VAR_DX
% 1 2
if (get(hObject, 'Value') == get(hObject, 'Max'))
    set(hObject, 'BackgroundColor', 'red');
    set(handles.togglebutton7, 'Enable', 'off');
    VAR_DX(1) = 1;
elseif (get(hObject, 'Value') == get(hObject, 'Min'))
    set(hObject, 'BackgroundColor', 'white');
    set(handles.togglebutton7, 'Enable', 'on');
    VAR_DX(1) = 0;
end
% Hint: get(hObject,'Value') returns toggle state of togglebutton3
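The `iconize` helper above shrinks each button image to roughly 70x70 pixels by keeping only every `ceil(r/70)`-th row and every `ceil(c/70)`-th column. A minimal NumPy sketch of the same stride-based downsampling (the Python function name and array shapes are illustrative, not part of the thesis code):

```python
import math
import numpy as np

def iconize(a):
    """Downsample an RGB image to at most ~70x70 pixels by row/column striding,
    mirroring the MATLAB a(1:r_skip:end, 1:c_skip:end, :) indexing."""
    r_skip = math.ceil(a.shape[0] / 70)
    c_skip = math.ceil(a.shape[1] / 70)
    return a[::r_skip, ::c_skip, :]

# A 210x140 image uses strides of 3 and 2, yielding a 70x70 icon:
img = np.zeros((210, 140, 3), dtype=np.uint8)
small = iconize(img)
print(small.shape)  # (70, 70, 3)
```

Striding trades quality for speed: it is nearest-neighbor decimation with no filtering, which is adequate for GUI button icons.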


% --- Executes on button press in togglebutton8.
function togglebutton8_Callback(hObject, eventdata, handles)
% hObject    handle to togglebutton8 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
global VAR_DX
% 1 3
if (get(hObject, 'Value') == get(hObject, 'Max'))
    set(hObject, 'BackgroundColor', 'red');
    set(handles.togglebutton15, 'Enable', 'off');
    VAR_DX(6) = 0.003;
elseif (get(hObject, 'Value') == get(hObject, 'Min'))
    set(hObject, 'BackgroundColor', 'white');
    set(handles.togglebutton15, 'Enable', 'on');
    VAR_DX(6) = 0;
end
% Hint: get(hObject,'Value') returns toggle state of togglebutton8

% --- Executes on button press in togglebutton9.
function togglebutton9_Callback(hObject, eventdata, handles)
% hObject    handle to togglebutton9 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
global VAR_DX
% 1 4
if (get(hObject, 'Value') == get(hObject, 'Max'))
    set(hObject, 'BackgroundColor', 'red');
    set(handles.togglebutton11, 'Enable', 'off');
    VAR_DX(5) = 0.003;
elseif (get(hObject, 'Value') == get(hObject, 'Min'))
    set(hObject, 'BackgroundColor', 'white');
    set(handles.togglebutton11, 'Enable', 'on');
    VAR_DX(5) = 0;
end
% Hint: get(hObject,'Value') returns toggle state of togglebutton9

% --- Executes on button press in togglebutton15.
function togglebutton15_Callback(hObject, eventdata, handles)
% hObject    handle to togglebutton15 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
global VAR_DX
% 1 5
if (get(hObject, 'Value') == get(hObject, 'Max'))
    set(hObject, 'BackgroundColor', 'red');
    set(handles.togglebutton8, 'Enable', 'off');
    VAR_DX(6) = -0.003;
elseif (get(hObject, 'Value') == get(hObject, 'Min'))
    set(hObject, 'BackgroundColor', 'white');
    set(handles.togglebutton8, 'Enable', 'on');
    VAR_DX(6) = 0;
end
% Hint: get(hObject,'Value') returns toggle state of togglebutton15

% --- Executes on button press in togglebutton4.
function togglebutton4_Callback(hObject, eventdata, handles)
% hObject    handle to togglebutton4 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
global VAR_DX
% 2 1
if (get(hObject, 'Value') == get(hObject, 'Max'))
    set(hObject, 'BackgroundColor', 'red');
    set(handles.togglebutton5, 'Enable', 'off');
    VAR_DX(2) = 1;
elseif (get(hObject, 'Value') == get(hObject, 'Min'))


    set(hObject, 'BackgroundColor', 'white');
    set(handles.togglebutton5, 'Enable', 'on');
    VAR_DX(2) = 0;
end
% Hint: get(hObject,'Value') returns toggle state of togglebutton4

% --- Executes on button press in togglebutton5.
function togglebutton5_Callback(hObject, eventdata, handles)
% hObject    handle to togglebutton5 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
global VAR_DX
% 2 2
if (get(hObject, 'Value') == get(hObject, 'Max'))
    set(hObject, 'BackgroundColor', 'red');
    set(handles.togglebutton4, 'Enable', 'off');
    VAR_DX(2) = -1;
elseif (get(hObject, 'Value') == get(hObject, 'Min'))
    set(hObject, 'BackgroundColor', 'white');
    set(handles.togglebutton4, 'Enable', 'on');
    VAR_DX(2) = 0;
end
% Hint: get(hObject,'Value') returns toggle state of togglebutton5

% --- Executes on button press in togglebutton10.
function togglebutton10_Callback(hObject, eventdata, handles)
% hObject    handle to togglebutton10 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
global VAR_DX
% 2 3
if (get(hObject, 'Value') == get(hObject, 'Max'))
    set(hObject, 'BackgroundColor', 'red');
    set(handles.togglebutton16, 'Enable', 'off');
    VAR_DX(4) = -0.003;
elseif (get(hObject, 'Value') == get(hObject, 'Min'))
    set(hObject, 'BackgroundColor', 'white');
    set(handles.togglebutton16, 'Enable', 'on');
    VAR_DX(4) = 0;
end
% Hint: get(hObject,'Value') returns toggle state of togglebutton10

% --- Executes on button press in togglebutton11.
function togglebutton11_Callback(hObject, eventdata, handles)
% hObject    handle to togglebutton11 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
global VAR_DX
% 2 4
if (get(hObject, 'Value') == get(hObject, 'Max'))
    set(hObject, 'BackgroundColor', 'red');
    set(handles.togglebutton9, 'Enable', 'off');
    VAR_DX(5) = -0.003;
elseif (get(hObject, 'Value') == get(hObject, 'Min'))
    set(hObject, 'BackgroundColor', 'white');
    set(handles.togglebutton9, 'Enable', 'on');
    VAR_DX(5) = 0;
end
% Hint: get(hObject,'Value') returns toggle state of togglebutton11

% --- Executes on button press in togglebutton16.
function togglebutton16_Callback(hObject, eventdata, handles)
% hObject    handle to togglebutton16 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
global VAR_DX


% 2 5
if (get(hObject, 'Value') == get(hObject, 'Max'))
    set(hObject, 'BackgroundColor', 'red');
    set(handles.togglebutton10, 'Enable', 'off');
    VAR_DX(4) = 0.003;
elseif (get(hObject, 'Value') == get(hObject, 'Min'))
    set(hObject, 'BackgroundColor', 'white');
    set(handles.togglebutton10, 'Enable', 'on');
    VAR_DX(4) = 0;
end
% Hint: get(hObject,'Value') returns toggle state of togglebutton16

% --- Executes on button press in togglebutton6.
function togglebutton6_Callback(hObject, eventdata, handles)
% hObject    handle to togglebutton6 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
global VAR_DX
% 3 1
if (get(hObject, 'Value') == get(hObject, 'Max'))
    set(hObject, 'BackgroundColor', 'red');
    set(handles.togglebutton2, 'Enable', 'off');
    VAR_DX(3) = -1;
elseif (get(hObject, 'Value') == get(hObject, 'Min'))
    set(hObject, 'BackgroundColor', 'white');
    set(handles.togglebutton2, 'Enable', 'on');
    VAR_DX(3) = 0;
end
% Hint: get(hObject,'Value') returns toggle state of togglebutton6

% --- Executes on button press in togglebutton7.
function togglebutton7_Callback(hObject, eventdata, handles)
% hObject    handle to togglebutton7 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
global VAR_DX
% 3 2
if (get(hObject, 'Value') == get(hObject, 'Max'))
    set(hObject, 'BackgroundColor', 'red');
    set(handles.togglebutton3, 'Enable', 'off');
    VAR_DX(1) = -1;
elseif (get(hObject, 'Value') == get(hObject, 'Min'))
    set(hObject, 'BackgroundColor', 'white');
    set(handles.togglebutton3, 'Enable', 'on');
    VAR_DX(1) = 0;
end
% Hint: get(hObject,'Value') returns toggle state of togglebutton7

% --- Executes on button press in pushbutton37.
function pushbutton37_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton37 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
global VAR_DX
% 3 3
dx = [0;0;0;0;0;0;0]; % dx=dx(1:6); Redwan.
VAR_DX = dx;
set(handles.togglebutton2, 'BackgroundColor', 'white');
set(handles.togglebutton3, 'BackgroundColor', 'white');
set(handles.togglebutton8, 'BackgroundColor', 'white');
set(handles.togglebutton9, 'BackgroundColor', 'white');
set(handles.togglebutton15, 'BackgroundColor', 'white');
set(handles.togglebutton4, 'BackgroundColor', 'white');


set(handles.togglebutton5, 'BackgroundColor', 'white');
set(handles.togglebutton10, 'BackgroundColor', 'white');
set(handles.togglebutton11, 'BackgroundColor', 'white');
set(handles.togglebutton16, 'BackgroundColor', 'white');
set(handles.togglebutton6, 'BackgroundColor', 'white');
set(handles.togglebutton7, 'BackgroundColor', 'white');
set(handles.togglebutton13, 'BackgroundColor', 'white');
set(handles.togglebutton14, 'BackgroundColor', 'white');
set(handles.togglebutton2, 'Value', 0);
set(handles.togglebutton3, 'Value', 0);
set(handles.togglebutton8, 'Value', 0);
set(handles.togglebutton9, 'Value', 0);
set(handles.togglebutton15, 'Value', 0);
set(handles.togglebutton4, 'Value', 0);
set(handles.togglebutton5, 'Value', 0);
set(handles.togglebutton10, 'Value', 0);
set(handles.togglebutton11, 'Value', 0);
set(handles.togglebutton16, 'Value', 0);
set(handles.togglebutton6, 'Value', 0);
set(handles.togglebutton7, 'Value', 0);
set(handles.togglebutton13, 'Value', 0);
set(handles.togglebutton14, 'Value', 0);
set(handles.togglebutton2, 'Enable', 'on');
set(handles.togglebutton3, 'Enable', 'on');
set(handles.togglebutton8, 'Enable', 'on');
set(handles.togglebutton9, 'Enable', 'on');
set(handles.togglebutton15, 'Enable', 'on');
set(handles.togglebutton4, 'Enable', 'on');
set(handles.togglebutton5, 'Enable', 'on');
set(handles.togglebutton10, 'Enable', 'on');
set(handles.togglebutton11, 'Enable', 'on');
set(handles.togglebutton16, 'Enable', 'on');
set(handles.togglebutton6, 'Enable', 'on');
set(handles.togglebutton7, 'Enable', 'on');
set(handles.togglebutton13, 'Enable', 'on');
set(handles.togglebutton14, 'Enable', 'on');

% --- Executes on button press in togglebutton13.
function togglebutton13_Callback(hObject, eventdata, handles)
% hObject    handle to togglebutton13 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
global VAR_DX
% 3 4
if (get(hObject, 'Value') == get(hObject, 'Max'))
    set(hObject, 'BackgroundColor', 'red');
    set(handles.togglebutton14, 'Enable', 'off');
    VAR_DX(7) = 1;
elseif (get(hObject, 'Value') == get(hObject, 'Min'))
    set(hObject, 'BackgroundColor', 'white');
    set(handles.togglebutton14, 'Enable', 'on');
    VAR_DX(7) = 0;
end
% Hint: get(hObject,'Value') returns toggle state of togglebutton13

% --- Executes on button press in togglebutton14.
function togglebutton14_Callback(hObject, eventdata, handles)
% hObject    handle to togglebutton14 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB


% handles    structure with handles and user data (see GUIDATA)
global VAR_DX
% 3 5
if (get(hObject, 'Value') == get(hObject, 'Max'))
    set(hObject, 'BackgroundColor', 'red');
    set(handles.togglebutton13, 'Enable', 'off');
    VAR_DX(7) = -1;
elseif (get(hObject, 'Value') == get(hObject, 'Min'))
    set(hObject, 'BackgroundColor', 'white');
    set(handles.togglebutton13, 'Enable', 'on');
    VAR_DX(7) = 0;
end
% Hint: get(hObject,'Value') returns toggle state of togglebutton14

% This function gives the Roll, Pitch, Yaw angles, given the transformation matrix.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%% COPY RIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [rpy]=WMRA_T2rpy(T)
rpy = zeros(1,3);
% Making sure there is no singularity:
if abs(T(1,1))

% Calculating the transformation matrices of each link:
T1=WMRA_rotx(DH(1,1))*WMRA_transl(DH(1,2),0,0)*WMRA_rotz(DH(1,4))*WMRA_transl(0,0,DH(1,3));
T2=WMRA_rotx(DH(2,1))*WMRA_transl(DH(2,2),0,0)*WMRA_rotz(DH(2,4))*WMRA_transl(0,0,DH(2,3));
T3=WMRA_rotx(DH(3,1))*WMRA_transl(DH(3,2),0,0)*WMRA_rotz(DH(3,4))*WMRA_transl(0,0,DH(3,3));
T4=WMRA_rotx(DH(4,1))*WMRA_transl(DH(4,2),0,0)*WMRA_rotz(DH(4,4))*WMRA_transl(0,0,DH(4,3));
T5=WMRA_rotx(DH(5,1))*WMRA_transl(DH(5,2),0,0)*WMRA_rotz(DH(5,4))*WMRA_transl(0,0,DH(5,3));
T6=WMRA_rotx(DH(6,1))*WMRA_transl(DH(6,2),0,0)*WMRA_rotz(DH(6,4))*WMRA_transl(0,0,DH(6,3));
T7=WMRA_rotx(DH(7,1))*WMRA_transl(DH(7,2),0,0)*WMRA_rotz(DH(7,4))*WMRA_transl(0,0,DH(7,3));
% Calculating the Transformation Matrix of the initial arm position:
Ta=T1*T2*T3*T4*T5*T6*T7;
% Calculating the Transformation Matrix of the initial WMRA system position:
T=Twc*Ta;
else
T1=WMRA_rotx(DH(1,1))*WMRA_transl(DH(1,2),0,0)*WMRA_rotz(q(1))*WMRA_transl(0,0,DH(1,3));
T2=WMRA_rotx(DH(2,1))*WMRA_transl(DH(2,2),0,0)*WMRA_rotz(q(2))*WMRA_transl(0,0,DH(2,3));
T3=WMRA_rotx(DH(3,1))*WMRA_transl(DH(3,2),0,0)*WMRA_rotz(q(3))*WMRA_transl(0,0,DH(3,3));
T4=WMRA_rotx(DH(4,1))*WMRA_transl(DH(4,2),0,0)*WMRA_rotz(q(4))*WMRA_transl(0,0,DH(4,3));
T5=WMRA_rotx(DH(5,1))*WMRA_transl(DH(5,2),0,0)*WMRA_rotz(q(5))*WMRA_transl(0,0,DH(5,3));
T6=WMRA_rotx(DH(6,1))*WMRA_transl(DH(6,2),0,0)*WMRA_rotz(q(6))*WMRA_transl(0,0,DH(6,3));
T7=WMRA_rotx(DH(7,1))*WMRA_transl(DH(7,2),0,0)*WMRA_rotz(q(7))*WMRA_transl(0,0,DH(7,3));
Ta=T1*T2*T3*T4*T5*T6*T7;
Twc=WMRA_w2T(1, Twc, dq);
T=Twc*Ta;
end

% This function finds the trajectory points along a straight line, given the initial and final transformations. Single-angle rotation about a single axis is used.
% See Eqs.
% 1.73-1.103, pages 30-32, of Richard Paul's book "Robot Manipulators".
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%% COPY RIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Function Declaration:
function [Tt] = WMRA_traj(ind, Ti, Td, n)
% Finding the rotation of the desired point based on the initial point:
R=Ti(1:3,1:3)'*Td(1:3,1:3);


% Initial single-angle representation of the rotation:
a=atan2(sqrt((R(3,2)-R(2,3))^2+(R(1,3)-R(3,1))^2+(R(2,1)-R(1,2))^2), (R(1,1)+R(2,2)+R(3,3)-1));
s=sin(a);
c=cos(a);
v=1-c;
% Finding the single-vector components for the rotation:
if a<0.001
    kx=1; ky=0; kz=0;
elseif a<pi/2
    kx=(R(3,2)-R(2,3))/(2*s);
    ky=(R(1,3)-R(3,1))/(2*s);
    kz=(R(2,1)-R(1,2))/(2*s);
else
    kx=sign(R(3,2)-R(2,3))*sqrt((R(1,1)-c)/v);
    ky=sign(R(1,3)-R(3,1))*sqrt((R(2,2)-c)/v);
    kz=sign(R(2,1)-R(1,2))*sqrt((R(3,3)-c)/v);
    if kx>ky && kx>kz
        ky=(R(2,1)+R(1,2))/(2*kx*v);
        kz=(R(1,3)+R(3,1))/(2*kx*v);
    elseif ky>kx && ky>kz
        kx=(R(2,1)+R(1,2))/(2*ky*v);
        kz=(R(3,2)+R(2,3))/(2*ky*v);
    else
        kx=(R(1,3)+R(3,1))/(2*kz*v);
        ky=(R(3,2)+R(2,3))/(2*kz*v);
    end
end
% Running the desired trajectory method:
% 1 == Polynomial with Blending function,
% 2 == Polynomial without Blending function,
% 3 == Linear function.
if ind == 2
    at=WMRA_Polynomial(0,a,n);
    xt=WMRA_Polynomial(Ti(1,4), Td(1,4), n);
    yt=WMRA_Polynomial(Ti(2,4), Td(2,4), n);
    zt=WMRA_Polynomial(Ti(3,4), Td(3,4), n);
elseif ind == 3
    at=WMRA_Linear(0,a,n);
    xt=WMRA_Linear(Ti(1,4), Td(1,4), n);
    yt=WMRA_Linear(Ti(2,4), Td(2,4), n);
    zt=WMRA_Linear(Ti(3,4), Td(3,4), n);
else
    at=WMRA_BPolynomial(0,a,n);
    xt=WMRA_BPolynomial(Ti(1,4), Td(1,4), n);
    yt=WMRA_BPolynomial(Ti(2,4), Td(2,4), n);
    zt=WMRA_BPolynomial(Ti(3,4), Td(3,4), n);
end
Tt(:,:,1)=Ti;
for i=2:n
    % Single-angle Change:
    da=at(i)-at(1);
    s=sin(da);
    c=cos(da);
    v=1-c;
    % Rotation and Position Change:
    dR=[kx^2*v+c kx*ky*v-kz*s kx*kz*v+ky*s;
        kx*ky*v+kz*s ky^2*v+c ky*kz*v-kx*s;
        kx*kz*v-ky*s ky*kz*v+kx*s kz^2*v+c];


    % Finding the trajectory points along the trajectory line:
    Tt(:,:,i)=[Ti(1:3,1:3)*dR [xt(i);yt(i);zt(i)] ; 0 0 0 1];
end

% % % Rotational Trajectory:
% % Single-angle Change:
% da=2*pi/(n-1);
% kx=1; ky=0; kz=0;
% s=sin(da);
% c=cos(da);
% v=1-c;
%
% % Rotation and Position Change:
% dR=[kx^2*v+c kx*ky*v-kz*s kx*kz*v+ky*s;
%     kx*ky*v+kz*s ky^2*v+c ky*kz*v-kx*s;
%     kx*kz*v-ky*s ky*kz*v+kx*s kz^2*v+c];
%
% % Finding the trajectory points along the trajectory line:
% Tt(:,:,1)=Ti;
% for i=2:n
%     x=Ti(1,4)+2000*cos((i-1)*da);
%     y=Ti(2,4)+2000*sin((i-1)*da);
%     Tt(:,:,i)=[Ti(1:3,1:3)*(dR)^(i-1) [x;y;Ti(3,4)] ; 0 0 0 1];
% end

% This function gives the homogeneous transformation matrix, given the X, Y, Z Cartesian translation values.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%% COPY RIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [T]=WMRA_transl(x, y, z)
T=[eye(3) [x;y;z]; 0 0 0 1];

% This function does the animation of the USF WMRA with 9 DOF using the Virtual Reality Toolbox.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%% COPY RIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Function Declaration:
function WMRA_VR_Animation(i, Twc, q)
% Declaring the global variables:
global L WMRA
% The initialization of the animation plot:
if i==1
    % Reading the wheelchair's constant dimensions; all dimensions are in millimeters:
    L=WMRA_WCD;
    % Opening the WMRA file:


    WMRA = vrworld('\9_wmra.wrl');
    open(WMRA);
    % Changing the View Point of the simulation:
    WMRA.DynamicView.translation=[Twc(1,4) 0 -Twc(2,4)];
    % Calculating the wheel-axle transform instead of the arm-base transform:
    Twc=Twc*[eye(3) -L(2:4) ; 0 0 0 1];
    % The orientation about Z of the wheelchair:
    phi=q(9);
    % Calculating the wheelchair's wheels' angles:
    ql=q(8)/L(5)-L(1)*q(9)/(2*L(5));
    qr=q(8)/L(5)+L(1)*q(9)/(2*L(5));
    % Updating the VRML file for the new angles and distances:
    WMRA.Chair.translation=[Twc(1,4) Twc(2,4) L(5)];
    WMRA.Chair.rotation=[0 0 1 phi];
    WMRA.LWheel.rotation=[0 1 0 ql];
    WMRA.RWheel.rotation=[0 1 0 qr];
    WMRA.ARM2.rotation=[0 0 -1 q(1)];
    WMRA.ARM3.rotation=[0 1 0 q(2)];
    WMRA.ARM4.rotation=[0 0 -1 q(3)];
    WMRA.ARM5.rotation=[0 1 0 q(4)];
    WMRA.ARM6.rotation=[0 0 -1 q(5)];
    WMRA.ARM7.rotation=[0 1 0 q(6)];
    WMRA.ARM8.rotation=[0 0 -1 q(7)];
    % Viewing the simulation:
    view(WMRA);
% Closing the animation plot:
elseif i==3
    close(WMRA);
    delete(WMRA);
% Updating the animation plot:
else
    WMRA.DynamicView.translation=[Twc(1,4) 0 -Twc(2,4)];
    Twc=Twc*[eye(3) -L(2:4) ; 0 0 0 1];
    phi=q(9);
    ql=q(8)/L(5)-L(1)*q(9)/(2*L(5));
    qr=q(8)/L(5)+L(1)*q(9)/(2*L(5));
    WMRA.Chair.translation=[Twc(1,4) Twc(2,4) L(5)];
    WMRA.Chair.rotation=[0 0 1 phi];
    WMRA.LWheel.rotation=[0 1 0 ql];
    WMRA.RWheel.rotation=[0 1 0 qr];
    WMRA.ARM2.rotation=[0 0 -1 q(1)];
    WMRA.ARM3.rotation=[0 1 0 q(2)];
    WMRA.ARM4.rotation=[0 0 -1 q(3)];
    WMRA.ARM5.rotation=[0 1 0 q(4)];
    WMRA.ARM6.rotation=[0 0 -1 q(5)];
    WMRA.ARM7.rotation=[0 1 0 q(6)];
    WMRA.ARM8.rotation=[0 0 -1 q(7)];
end

% This function gives the Transformation Matrix of the wheelchair with 2 DOF (ground to WMRA base), given the previous transformation matrix and the required wheelchair travel distance and angle.
% Dimensions are as supplied; angles are in radians.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%% COPY RIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


% Function Declaration:
function [T]=WMRA_w2T(ind, Tp, q)
% Reading the wheelchair's constant dimensions; all dimensions are in millimeters:
L=WMRA_WCD;
% Deciding if the motion is in reference to the arm base (1) or the wheel-axle center (0):
if ind == 0, L(2:4)=[0;0;0]; end
% Defining the Transformation Matrix between the wheelchair center and the WMRA's base:
Twa=[eye(3) L(2:4) ; 0 0 0 1];
% The previous transformation matrix from the ground to the wheelchair center:
Tp=Tp*inv(Twa);
% Defining the Transformation Matrix between the ground and the wheelchair center and the WMRA's base:
if abs(q(2))<=eps
    % Straight-line motion:
    Tp(1:2,4)=Tp(1:2,4)+q(1)*Tp(1:2,1);
    T=Tp*Twa;
else
    po=atan2(Tp(2,1),Tp(1,1));
    p=q(2);
    r=q(1)/p-L(1)/2;
    Tgw=[cos(po+p) -sin(po+p) 0 Tp(1,4)+sin(pi/2+po+p/2)*(r+L(1)/2)*sin(p)/cos(p/2) ;
         sin(po+p)  cos(po+p) 0 Tp(2,4)-cos(pi/2+po+p/2)*(r+L(1)/2)*sin(p)/cos(p/2) ;
         0 0 1 Tp(3,4) ;
         0 0 0 1];
    T=Tgw*Twa;
end

% This function gives the wheelchair dimensions matrix to be used in the program.
% Modifying the dimensions in this file is sufficient to change them in all related programs.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%% COPY RIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Function Declaration:
function L=WMRA_WCD()
L=[0;0;0;0;0]; % All dimensions are in millimeters.
L(1)=560; % Distance between the two driving wheels.
L(2)=440; % Horizontal distance between the wheels' axis of rotation and the arm mounting position (along x).
L(3)=230; % Horizontal distance between the midpoint between the two driving wheels and the arm mounting position (along y).
L(4)=182; % Vertical distance between the wheels' axis of rotation and the arm mounting position (along z).
L(5)=168; % Radius of the driving wheels.
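`WMRA_VR_Animation` converts the wheelchair's travel distance `q(8)` and heading change `q(9)` into left and right driving-wheel rotations using the wheel separation `L(1)` and wheel radius `L(5)` defined in `WMRA_WCD`. A minimal Python sketch of that differential-drive relation, using the thesis's dimension values (the function name is illustrative):

```python
# Wheelchair constants from WMRA_WCD, in millimeters:
WHEEL_BASE = 560.0    # L(1): distance between the two driving wheels
WHEEL_RADIUS = 168.0  # L(5): radius of the driving wheels

def wheel_angles(dist, dphi):
    """Left/right wheel rotation angles (radians) for a travel distance
    `dist` (mm) and a heading change `dphi` (rad), as in
    ql = q(8)/L(5) - L(1)*q(9)/(2*L(5)) and
    qr = q(8)/L(5) + L(1)*q(9)/(2*L(5))."""
    ql = dist / WHEEL_RADIUS - WHEEL_BASE * dphi / (2 * WHEEL_RADIUS)
    qr = dist / WHEEL_RADIUS + WHEEL_BASE * dphi / (2 * WHEEL_RADIUS)
    return ql, qr

# Rolling straight 336 mm turns both wheels by 336/168 = 2 rad:
print(wheel_angles(336.0, 0.0))  # (2.0, 2.0)
```

For a pure in-place turn (`dist = 0`), the two wheels rotate by equal and opposite amounts, which is why the second term carries opposite signs for `ql` and `qr`.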


B.3. Matlab Main Script and GUI Main File

% This "new USF WMRA" script SIMULATES the WMRA system with ANIMATION and plots for 9 DOF. All angles are in radians.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%% COPY RIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Declaring the global variables to be used for the touch screen control:
global VAR_DX
global VAR_SCREENOPN
global dHo
% Defining used parameters:
d2r=pi/180; % Conversion from degrees to radians.
r2d=180/pi; % Conversion from radians to degrees.
% Reading the wheelchair's constant dimensions; all dimensions are in millimeters:
L=WMRA_WCD;
% User input prompts:
choice000 = input('\n Choose what to control: \n For combined Wheelchair and Arm control, press "1", \n For Arm only control, press "2", \n For Wheelchair only control, press "3". \n', 's');
if choice000=='2'
    WCA=2;
    choice00000 = input('\n Choose whose frame to base the control on: \n For Ground Frame, press "1", \n For Wheelchair Frame, press "2", \n For Gripper Frame, press "3". \n', 's');
    if choice00000=='2'
        coord=2;
    elseif choice00000=='3'
        coord=3;
    else
        coord=1;
    end
    choice0000 = input('\n Choose the cartesian coordinates to be controlled: \n For Position and Orientation, press "1", \n For Position only, press "2". \n', 's');
    if choice0000=='2'
        cart=2;
    else
        cart=1;
    end
    choice5 = input('\n Please enter the desired optimization method: (1= SR-I & WLN, 2= P-I & WLN, 3= SR-I & ENE, 4= P-I & ENE) \n', 's');
    if choice5=='2'
        optim=2;
    elseif choice5=='3'
        optim=3;
    elseif choice5=='4'
        optim=4;
    else
        optim=1;
    end
    choice50 = input('\n Do you want to include Joint Limit Avoidance? (1= Yes, 2= No) \n', 's');
    if choice50=='2'
        JLA=0;


    else
        JLA=1;
    end
    choice500 = input('\n Do you want to include Joint Limit/Velocity and Obstacle Safety Stop? (1= Yes, 2= No) \n', 's');
    if choice500=='2'
        JLO=0;
    else
        JLO=1;
    end
elseif choice000=='3'
    WCA=3;
    choice00000 = input('\n Choose whose frame to base the control on: \n For Ground Frame, press "1", \n For Wheelchair Frame, press "2". \n', 's');
    if choice00000=='2'
        coord=2;
    else
        coord=1;
    end
    choice500 = input('\n Do you want to include Joint Velocity Safety Stop? (1= Yes, 2= No) \n', 's');
    if choice500=='2'
        JLO=0;
    else
        JLO=1;
    end
    cart=2;
    optim=0;
    JLA=0;
else
    WCA=1;
    choice00000 = input('\n Choose whose frame to base the control on: \n For Ground Frame, press "1", \n For Wheelchair Frame, press "2", \n For Gripper Frame, press "3". \n', 's');
    if choice00000=='2'
        coord=2;
    elseif choice00000=='3'
        coord=3;
    else
        coord=1;
    end
    choice0000 = input('\n Choose the cartesian coordinates to be controlled: \n For Position and Orientation, press "1", \n For Position only, press "2". \n', 's');
    if choice0000=='2'
        cart=2;
    else
        cart=1;
    end
    choice5 = input('\n Please enter the desired optimization method: (1= SR-I & WLN, 2= P-I & WLN, 3= SR-I & ENE, 4= P-I & ENE) \n', 's');
    if choice5=='2'
        optim=2;
    elseif choice5=='3'
        optim=3;
    elseif choice5=='4'
        optim=4;
    else
        optim=1;
    end
    choice50 = input('\n Do you want to include Joint Limit Avoidance? (1= Yes, 2= No) \n', 's');
    if choice50=='2'
        JLA=0;
    else
        JLA=1;
    end


    choice500 = input('\n Do you want to include Joint Limit/Velocity and Obstacle Safety Stop? (1= Yes, 2= No) \n', 's');
    if choice500=='2'
        JLO=0;
    else
        JLO=1;
    end
end
choice0 = input('\n Choose the control mode: \n For position control, press "1", \n For velocity control, press "2", \n For SpaceBall control, press "3", \n For Psychology Mask control, press "4", \n For Touch Screen control, press "5". \n', 's');
if choice0=='1'
    cont=1;
    Td = input('\n Please enter the transformation matrix of the desired position and orientation from the control-based frame \n (e.g. [0 0 1 1455;-1 0 0 369;0 -1 0 999; 0 0 0 1]) \n');
    v = input('\n Please enter the desired linear velocity of the gripper in mm/s (e.g. 50) \n');
    choice00 = input('\n Choose the Trajectory generation function: \n Press "1" for a Polynomial function with Blending, or \n press "2" for a Polynomial function without Blending, or \n press "3" for a Linear function. \n', 's');
    if choice00=='2'
        trajf=2;
    elseif choice00=='3'
        trajf=3;
    else
        trajf=1;
    end
elseif choice0=='2'
    cont=2;
    ts = input('\n Please enter the desired simulation time in seconds (e.g. 2) \n');
    if cart==2
        Vd = input('\n Please enter the desired 3x1 cartesian velocity vector of the gripper (in mm/s) (e.g. [70;70;-70]) \n');
    else
        Vd = input('\n Please enter the desired 6x1 cartesian velocity vector of the gripper (in mm/s, radians/s) (e.g. [70;70;-70;0.001;0.001;0.001]) \n');
    end
elseif choice0=='3'
    cont=3; % The SpaceBall will be used for control.
    v = input('\n Please enter the desired linear velocity of the gripper in mm/s (e.g. 50) \n');
elseif choice0=='4'
    cont=4; % The BCI 2000 Psychology Mask will be used for control.
    v = input('\n Please enter the desired linear velocity of the gripper in mm/s (e.g. 50) \n');
    port1 = input('\n Please enter the desired port number (e.g. 19711) \n');
else
    cont=5; % The Touch Screen will be used for control.
    v = input('\n Please enter the desired linear velocity of the gripper in mm/s (e.g. 50) \n');
end
choice1 = input('\n Choose animation type or no animation: \n For Virtual Reality Animation, press "1", \n For Matlab Graphics Animation, press "2", \n For BOTH Animations, press "3", \n For NO Animation, press "4". \n', 's');
if choice1=='2'
    vr = 0; ml = 1;
elseif choice1=='3'
    vr = 1; ml = 1;
elseif choice1=='4'
    vr = 0; ml = 0;


else
    vr = 1; ml = 0;
end
choice10 = input('\n Would you like to run the actual WMRA? \n For yes, press "1", \n For no, press "2". \n', 's');
if choice10=='1'
    arm=1;
else
    arm=0;
end
choice2 = input('\n Press "1" if you want to start at the "ready" position, \n or press "2" if you want to enter the initial joint angles. \n', 's');
if choice2=='2'
    qi = input('\n Please enter the arm''s initial angles vector in radians (e.g. [pi/2;pi/2;0;pi/2;pi/2;pi/2;0]) \n');
    WCi = input('\n Please enter the initial x,y position and z orientation of the WMRA base from the ground base in millimeters and radians (e.g. [200;500;0.3]) \n');
    ini=0;
else
    qi=[90;90;0;90;90;90;0]*d2r;
    WCi=[0;0;0];
    ini=0;
    if vr==1 || ml==1 || arm==1
        choice3 = input('\n Press "1" if you want to include "park" to "ready" motion, \n or press "2" if not. \n', 's');
        if choice3=='2'
            ini=0;
        else
            ini=1;
        end
    end
end
choice4 = input('\n Press "1" if you do NOT want to plot the simulation results, \n or press "2" if you do. \n', 's');
if choice4=='2'
    plt=2;
else
    plt=1;
end
% Calculating the Transformation Matrix of the initial position of the WMRA's base:
Tiwc=WMRA_p2T(WCi(1),WCi(2),WCi(3));
% Calculating the initial Wheelchair Variables:
qiwc=[sqrt(WCi(1)^2+WCi(2)^2);WCi(3)];
% Calculating the initial transformation matrices:
[Ti, Tia, Tiwc, T01, T12, T23, T34, T45, T56, T67]=WMRA_Tall(1, qi, [0;0], Tiwc);
if cont==1
    % Calculating the linear distance from the initial position to the desired position, and the linear velocity:
    if coord==2
        D=sqrt( (Td(1,4)-Tia(1,4))^2 + (Td(2,4)-Tia(2,4))^2 + (Td(3,4)-Tia(3,4))^2 );
    elseif coord==3
        D=sqrt( (Td(1,4))^2 + (Td(2,4))^2 + (Td(3,4))^2 );
    else
        D=sqrt( (Td(1,4)-Ti(1,4))^2 + (Td(2,4)-Ti(2,4))^2 + (Td(3,4)-Ti(3,4))^2 );
    end
    % Calculating the number of iterations and the time increment (delta t) if the linear step increment of the tip is 1 mm:
    dt=0.05; % Time increment in seconds.
    total_time=D/v; % Total time of animation.
    n=round(total_time/dt); % Number of iterations (rounded).
    dt=total_time/n; % Adjusted time increment in seconds.
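The step-count bookkeeping above appears in both the position- and velocity-control branches: the total time is split into nominally 0.05 s increments, the count is rounded, and `dt` is then re-derived so that `n` whole steps cover the motion exactly. A small Python sketch of the same arithmetic (the function name is illustrative):

```python
def time_steps(distance_mm, v_mm_s, dt=0.05):
    """Iteration count and adjusted time step, mirroring
    total_time=D/v; n=round(total_time/dt); dt=total_time/n."""
    total_time = distance_mm / v_mm_s
    n = round(total_time / dt)
    dt_adj = total_time / n  # so n * dt_adj recovers total_time exactly
    return n, dt_adj

# 100 mm of travel at 50 mm/s takes 2 s -> 40 steps of 0.05 s:
print(time_steps(100.0, 50.0))  # (40, 0.05)
```

Re-deriving `dt` after rounding keeps the trajectory endpoint aligned with the commanded goal instead of accumulating a fractional-step error.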


    % Calculating the Trajectory of the end effector; once the trajectory is calculated, "Td" is redefined based on the ground frame:
    if coord==2
        Tt=WMRA_traj(trajf, Tia, Td, n+1);
        Td=Tiwc*Td;
    elseif coord==3
        Tt=WMRA_traj(trajf, eye(4), Td, n+1);
        Td=Ti*Td;
    else
        Tt=WMRA_traj(trajf, Ti, Td, n+1);
    end
elseif cont==2
    % Calculating the number of iterations and the time increment (delta t) if the linear step increment of the gripper is 1 mm:
    dt=0.05; % Time increment in seconds.
    total_time=ts; % Total time of animation.
    n=round(total_time/dt); % Number of iterations (rounded).
    dt=total_time/n; % Adjusted time increment in seconds.
    dx=Vd*dt;
    Td=Ti;
elseif cont==3
    WMRA_exit(); % This is to stop the simulation in SpaceBall control when the user presses the exit key.
    dt=0.05;
    dx=v*dt*[spdata1(3)/20 ; -spdata1(1)/40 ; spdata1(2)/30 ; spdata1(6)/1500 ; spdata1(4)/900 ; spdata1(5)/1300];
    dg=spdata1(7);
    Td=Ti;
    n=1;
elseif cont==4
    WMRA_exit(); % This is to stop the simulation in Psychology Mask control when the user presses the exit key.
    dt=0.05;
    dx=v*dt*WMRA_psy(port1);
    dg=dx(7);
    dx=dx(1:6);
    Td=Ti;
    n=1;
else
    WMRA_screen('0'); % This is to start the screen controls. Argument: '0'=BACK button disabled, '1'=BACK button enabled.
    dt=0.05;
    dx=v*dt*VAR_DX(1:6);
    dg=VAR_DX(7);
    Td=Ti;
    n=1;
end
% Initializing the joint angles, the Transformation Matrix, and time:
dq=zeros(9,1);
dg=0;
qo=[qi;qiwc];
To=Ti;
Toa=Tia;
Towc=Tiwc;
tt=0;
i=1;
dHo=[0;0;0;0;0;0;0];
% Initializing the WMRA:
if ini==0 % When no "park" to "ready" motion is required.
    % Initializing Virtual Reality Animation:
    if vr==1
        WMRA_VR_Animation(1, Towc, qo);
    end
    % Initializing Robot Animation in Matlab Graphics:


    if ml==1
        WMRA_ML_Animation(1, To, Td, Towc, T01, T12, T23, T34, T45, T56, T67);
    end
    % Initializing the Physical Arm:
    if arm==1
        WMRA_ARM_Motion(1, 2, [qo;dg], 0);
        ddt=0;
    end
elseif ini==1 && (vr==1 || ml==1 || arm==1) % When "park" to "ready" motion is required.
    WMRA_park2ready(1, vr, ml, arm, Towc, qo(8:9));
    if arm==1
        ddt=0;
    end
end
% Re-Drawing the Animation:
if vr==1 || ml==1
    drawnow;
end
% Starting a timer:
tic
% Starting the Iteration Loop:
while i<=(n+1)
    % Calculating the 6x7 Jacobian of the arm in frame 0:
    [Joa,detJoa]=WMRA_J07(T01, T12, T23, T34, T45, T56, T67);
    % Calculating the 6x2 Jacobian based on the WMRA's base in the ground frame:
    phi=atan2(Towc(2,1),Towc(1,1));
    Jowc=WMRA_Jga(1, phi, Toa(1:2,4));
    % Changing the Jacobian reference frame based on which coordinate frame is referenced in the Cartesian control:
    % coord=1 for Ground Coordinates Control.
    % coord=2 for Wheelchair Coordinates Control.
    % coord=3 for Gripper Coordinates Control.
    if coord==2
        Jowc=[Towc(1:3,1:3)' zeros(3); zeros(3) Towc(1:3,1:3)']*Jowc;
    elseif coord==3
        Joa=[Toa(1:3,1:3)' zeros(3); zeros(3) Toa(1:3,1:3)']*Joa;
        Jowc=[To(1:3,1:3)' zeros(3); zeros(3) To(1:3,1:3)']*Jowc;
    elseif coord==1
        Joa=[Towc(1:3,1:3) zeros(3); zeros(3) Towc(1:3,1:3)]*Joa;
    end
    % Calculating the 3x9 or 6x9 augmented Jacobian of the WMRA system based on the ground frame:
    if cart==2
        Joa=Joa(1:3,1:7);
        detJoa=sqrt(det(Joa*Joa'));
        Jowc=Jowc(1:3,1:2);
        Jo=[Joa Jowc];
        detJo=sqrt(det(Jo*Jo'));
    else
        Jo=[Joa Jowc];
        detJo=sqrt(det(Jo*Jo'));
    end
    % Finding the Cartesian errors of the end effector:
    if cont==1
        % Calculating the Position and Orientation errors of the end effector, and the rates of motion of the end effector:
        if coord==2


            invTowc=[Towc(1:3,1:3)' -Towc(1:3,1:3)'*Towc(1:3,4); 0 0 0 1];
            Ttnew=invTowc*Tiwc*Tt(:,:,i);
            dx=WMRA_delta(Toa, Ttnew);
        elseif coord==3
            invTo=[To(1:3,1:3)' -To(1:3,1:3)'*To(1:3,4); 0 0 0 1];
            Ttnew=invTo*Ti*Tt(:,:,i);
            dx=WMRA_delta(eye(4), Ttnew);
        else
            dx=WMRA_delta(To, Tt(:,:,i));
        end
    elseif cont==2
        % "dx" was already computed before the loop for linear Cartesian motion.
    elseif cont==3
        dx=v*dt*[spdata1(3)/20; -spdata1(1)/40; spdata1(2)/30; spdata1(6)/1500; spdata1(4)/900; spdata1(5)/1300];
        dg=spdata1(7);
    elseif cont==4
        dx=v*dt*WMRA_psy(port1);
        dg=dx(7);
        dx=dx(1:6);
    else
        dx=v*dt*VAR_DX(1:6);
        dg=VAR_DX(7);
    end
    % Changing the order of Cartesian motion when the gripper reference frame is
    % selected for control with the screen, psy, or SpaceBall interfaces:
    if coord==3 && cont>=3
        dx=[-dx(2);-dx(3);dx(1);-dx(5);-dx(6);dx(4)];
    end
    if cart==2
        dx=dx(1:3);
    end
    % Calculating the resolved rate with optimization:
    % Index input values for "optim": 1= SR-I & WLN, 2= P-I & WLN, 3= SR-I & ENE, 4= P-I & ENE.
    if WCA==2
        dq=WMRA_Opt(optim, JLA, JLO, Joa, detJoa, dq(1:7), dx, dt, qo);
        dq=[dq;0;0];
    elseif WCA==3
        dq=WMRA_Opt(optim, JLA, JLO, Jowc, 1, dq(8:9), dx(1:2), dt, 1);
        dq=[0;0;0;0;0;0;0;dq];
    else
        dq=WMRA_Opt(optim, JLA, JLO, Jo, detJo, dq, dx, dt, qo);
    end
    % Calculating the new joint angles:
    qn=qo+dq;
    % Calculating the new transformation matrices:
    [Tn, Tna, Tnwc, T01, T12, T23, T34, T45, T56, T67]=WMRA_Tall(2, qn, dq(8:9), Towc);
    % A safety condition function to stop the joints that may cause collision of
    % the arm with itself, the wheelchair, or the human user:
    if JLO==1 && WCA~=3
        dq(1:7)=WMRA_collide(dq(1:7), T01, T12, T23, T34, T45, T56, T67);
        % Re-calculating the new joint angles:
        qn=qo+dq;
        % Re-calculating the new transformation matrices:
        [Tn, Tna, Tnwc, T01, T12, T23, T34, T45, T56, T67]=WMRA_Tall(2, qn, dq(8:9), Towc);
    end
    % Saving the plot data in case plots are required:


    if plt==2
        WMRA_Plots(1, L, r2d, dt, i, tt, qn, dq, Tn, Tnwc, detJoa, detJo);
    end
    % Updating the physical arm:
    if arm==1
        ddt=ddt+dt;
        if ddt>=0.5 || i>=(n+1)
            WMRA_ARM_Motion(2, 1, [qn;dg], ddt);
            ddt=0;
        end
    end
    % Updating the Virtual Reality animation:
    if vr==1
        WMRA_VR_Animation(2, Tnwc, qn);
    end
    % Updating the Matlab Graphics animation:
    if ml==1
        WMRA_ML_Animation(2, Ti, Td, Tnwc, T01, T12, T23, T34, T45, T56, T67);
    end
    % Re-drawing the animation:
    if vr==1 || ml==1
        drawnow;
    end
    % Updating the old values with the new values for the next iteration:
    qo=qn; To=Tn; Toa=Tna; Towc=Tnwc; tt=tt+dt; i=i+1;
    % Stopping the simulation when the exit button is pressed:
    if cont==3 || cont==4 || cont==5
        if (VAR_SCREENOPN == 1)
            n=n+1;
        else
            break
        end
    end
    % Delay to comply with the required speed:
    if toc < tt
        pause(tt-toc);
    end
end
% Reading the elapsed time and printing it with the simulation time:
if cont==1 || cont==2
    fprintf('\nSimulation time is %6.6f seconds.\n', total_time);
end
toc
% Plotting:
if plt==2
    WMRA_Plots(2, L, r2d, dt, i, tt, qn, dq, Tn, Tnwc, detJoa, detJo);
end
if vr==1 || ml==1 || arm==1
    % Going back to the ready position:
    choice6 = input('\n Do you want to go back to the "ready" position? \n Press "1" for Yes, or press "2" for No. \n', 's');
    if choice6=='1'


        WMRA_any2ready(2, vr, ml, arm, Tnwc, qn);
        % Going back to the parking position:
        choice7 = input('\n Do you want to go back to the "parking" position? \n Press "1" for Yes, or press "2" for No. \n', 's');
        if choice7=='1'
            WMRA_ready2park(2, vr, ml, arm, Tnwc, qn(8:9));
        end
    end
    % Closing the arm library, the Matlab Graphics animation, the Virtual
    % Reality animation, and the plot windows:
    choice8 = input('\n Do you want to close all simulation windows and arm controls? \n Press "1" for Yes, or press "2" for No. \n', 's');
    if choice8=='1'
        if arm==1
            WMRA_ARM_Motion(3, 0, 0, 0);
        end
        if vr==1
            WMRA_VR_Animation(3, 0, 0);
        end
        if ml==1
            WMRA_ML_Animation(3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
        end
        if plt==2
            close(figure(2),figure(3),figure(4),figure(5),figure(6),figure(7),figure(8),figure(9),figure(10));
        end
    end
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%% COPY RIGHTS RESERVED %%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% Developed By: Redwan M. Alqasemi %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%% Thanks to Mayur Palankar %%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% April 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

function varargout = WMRA_Main_GUI(varargin)
% WMRA_Main_GUI M-file for WMRA_Main_GUI.fig
%      WMRA_Main_GUI, by itself, creates a new WMRA_Main_GUI or raises the
%      existing singleton*.
%
%      H = WMRA_Main_GUI returns the handle to a new WMRA_Main_GUI or the
%      handle to the existing singleton*.
%
%      WMRA_Main_GUI('CALLBACK',hObject,eventData,handles,...) calls the local
%      function named CALLBACK in WMRA_Main_GUI.M with the given input arguments.
%
%      WMRA_Main_GUI('Property','Value',...) creates a new WMRA_Main_GUI or
%      raises the existing singleton*. Starting from the left, property value
%      pairs are applied to the GUI before WMRA_Main_GUI_OpeningFunction gets
%      called. An unrecognized property name or invalid value makes property
%      application stop. All inputs are passed to WMRA_Main_GUI_OpeningFcn
%      via varargin.
%
%      *See GUI Options on GUIDE's Tools menu. Choose "GUI allows only one
%      instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help WMRA_Main_GUI


% Last Modified by GUIDE v2.5 31-Mar-2007 16:02:05

% Begin initialization code - DO NOT EDIT
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @WMRA_Main_GUI_OpeningFcn, ...
                   'gui_OutputFcn',  @WMRA_Main_GUI_OutputFcn, ...
                   'gui_LayoutFcn',  [], ...
                   'gui_Callback',   []);
if nargin && ischar(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end
if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT

% --- Executes just before WMRA_Main_GUI is made visible.
function WMRA_Main_GUI_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% varargin   command line arguments to WMRA_Main_GUI (see VARARGIN)

% Choose default command line output for WMRA_Main_GUI
handles.output = hObject;
% Update handles structure
guidata(hObject, handles);
global VAR_SCREENOPN
global VAR_LOOP
global VAR_WCI
global VAR_QI
VAR_WCI = [0; 0; 0];
VAR_QI = [1.5708; 1.5708; 0; 1.5708; 1.5708; 1.5708; 0];
VAR_SCREENOPN = 0;
VAR_LOOP = 0;
% UIWAIT makes WMRA_Main_GUI wait for user response (see UIRESUME)
% uiwait(handles.figure1);

% --- Outputs from this function are returned to the command line.
function varargout = WMRA_Main_GUI_OutputFcn(hObject, eventdata, handles)
% varargout  cell array for returning output args (see VARARGOUT);
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure
varargout{1} = handles.output;

% --------------------------------------------------------------------
function file_menu_Callback(hObject, eventdata, handles)
% hObject    handle to file_menu (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% --------------------------------------------------------------------
function open_menu_Callback(hObject, eventdata, handles)


% hObject    handle to open_menu (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% --------------------------------------------------------------------
function save_menu_Callback(hObject, eventdata, handles)
% hObject    handle to save_menu (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% --------------------------------------------------------------------
function saveas_menu_Callback(hObject, eventdata, handles)
% hObject    handle to saveas_menu (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% --------------------------------------------------------------------
function exit_menu_Callback(hObject, eventdata, handles)
% hObject    handle to exit_menu (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
close;

% --------------------------------------------------------------------
function help_menu_Callback(hObject, eventdata, handles)
% hObject    handle to help_menu (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% --- Executes on selection change in popupmenu1.
function popupmenu1_Callback(hObject, eventdata, handles)
% hObject    handle to popupmenu1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
val = get(hObject, 'Value');
switch val
    case 1
        if (get(handles.popupmenu17, 'Value') == 2)
            set(handles.edit65, 'Enable', 'on');
            set(handles.edit66, 'Enable', 'on');
            set(handles.edit67, 'Enable', 'on');
            set(handles.text22, 'Enable', 'on');
        end
        if (get(handles.popupmenu17, 'Value') == 1)
            if (strcmp(get(handles.edit36, 'Enable'), 'off'))
                set(handles.edit36, 'String', 0);
                set(handles.edit37, 'String', 0);
                set(handles.edit38, 'String', 1);
                set(handles.edit39, 'String', -1);
                set(handles.edit40, 'String', 0);
                set(handles.edit41, 'String', 0);
                set(handles.edit42, 'String', 0);
                set(handles.edit43, 'String', -1);
                set(handles.edit44, 'String', 0);
                set(handles.edit36, 'Enable', 'on');
                set(handles.edit37, 'Enable', 'on');
                set(handles.edit38, 'Enable', 'on');
                set(handles.edit39, 'Enable', 'on');
                set(handles.edit40, 'Enable', 'on');
                set(handles.edit41, 'Enable', 'on');
                set(handles.edit42, 'Enable', 'on');
                set(handles.edit43, 'Enable', 'on');
                set(handles.edit44, 'Enable', 'on');
            end


        end
    case 2
        if (get(handles.popupmenu17, 'Value') == 2)
            set(handles.edit65, 'Enable', 'off');
            set(handles.edit66, 'Enable', 'off');
            set(handles.edit67, 'Enable', 'off');
            set(handles.text22, 'Enable', 'off');
        end
        if (get(handles.popupmenu17, 'Value') == 1)
            set(handles.edit36, 'String', 1);
            set(handles.edit37, 'String', 0);
            set(handles.edit38, 'String', 0);
            set(handles.edit39, 'String', 0);
            set(handles.edit40, 'String', 1);
            set(handles.edit41, 'String', 0);
            set(handles.edit42, 'String', 0);
            set(handles.edit43, 'String', 0);
            set(handles.edit44, 'String', 1);
            set(handles.edit36, 'Enable', 'off');
            set(handles.edit37, 'Enable', 'off');
            set(handles.edit38, 'Enable', 'off');
            set(handles.edit39, 'Enable', 'off');
            set(handles.edit40, 'Enable', 'off');
            set(handles.edit41, 'Enable', 'off');
            set(handles.edit42, 'Enable', 'off');
            set(handles.edit43, 'Enable', 'off');
            set(handles.edit44, 'Enable', 'off');
        end
end
% Hints: contents = get(hObject,'String') returns popupmenu1 contents as cell array
%        contents{get(hObject,'Value')} returns selected item from popupmenu1

% --- Executes during object creation, after setting all properties.
function popupmenu1_CreateFcn(hObject, eventdata, handles)
% hObject    handle to popupmenu1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: popupmenu controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

% --- Executes on selection change in popupmenu3.
function popupmenu3_Callback(hObject, eventdata, handles)
% hObject    handle to popupmenu3 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
val = get(hObject, 'Value');
switch val
    case 1
        if (strcmp(get(handles.popupmenu1, 'Enable'), 'off'))
            set(handles.popupmenu8, 'Enable', 'on');
            set(handles.checkbox1, 'Value', 1);
            set(handles.checkbox1, 'Enable', 'on');
            set(handles.popupmenu1, 'Enable', 'on');
            string_list = {'Ground'; 'Wheelchair'; 'Gripper'};
            set(handles.popupmenu21, 'String', string_list);
        end
    case 2
        if (strcmp(get(handles.popupmenu1, 'Enable'), 'off'))


            set(handles.popupmenu8, 'Enable', 'on');
            set(handles.checkbox1, 'Value', 1);
            set(handles.checkbox1, 'Enable', 'on');
            set(handles.popupmenu1, 'Enable', 'on');
            string_list = {'Ground'; 'Wheelchair'; 'Gripper'};
            set(handles.popupmenu21, 'String', string_list);
        end
    case 3
        if (get(handles.popupmenu21, 'Value') == 3)
            set(handles.popupmenu21, 'Value', 2);
        end
        string_list = get(handles.popupmenu21, 'String');
        set(handles.popupmenu21, 'String', string_list(1:2));
        set(handles.popupmenu8, 'Enable', 'off');
        set(handles.checkbox1, 'Value', 0);
        set(handles.checkbox1, 'Enable', 'off');
        set(handles.popupmenu1, 'Value', 2);
        set(handles.popupmenu1, 'Enable', 'off');
        if (get(handles.popupmenu17, 'Value') == 2)
            set(handles.edit65, 'Enable', 'off');
            set(handles.edit66, 'Enable', 'off');
            set(handles.edit67, 'Enable', 'off');
            set(handles.text22, 'Enable', 'off');
        end
        if (get(handles.popupmenu17, 'Value') == 1)
            set(handles.edit36, 'String', 1);
            set(handles.edit37, 'String', 0);
            set(handles.edit38, 'String', 0);
            set(handles.edit39, 'String', 0);
            set(handles.edit40, 'String', 1);
            set(handles.edit41, 'String', 0);
            set(handles.edit42, 'String', 0);
            set(handles.edit43, 'String', 0);
            set(handles.edit44, 'String', 1);
            set(handles.edit36, 'Enable', 'off');
            set(handles.edit37, 'Enable', 'off');
            set(handles.edit38, 'Enable', 'off');
            set(handles.edit39, 'Enable', 'off');
            set(handles.edit40, 'Enable', 'off');
            set(handles.edit41, 'Enable', 'off');
            set(handles.edit42, 'Enable', 'off');
            set(handles.edit43, 'Enable', 'off');
            set(handles.edit44, 'Enable', 'off');
        end
end
% Hints: contents = get(hObject,'String') returns popupmenu3 contents as cell array
%        contents{get(hObject,'Value')} returns selected item from popupmenu3

% --- Executes during object creation, after setting all properties.
function popupmenu3_CreateFcn(hObject, eventdata, handles)
% hObject    handle to popupmenu3 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: popupmenu controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end


% --- Executes on selection change in popupmenu4.
function popupmenu4_Callback(hObject, eventdata, handles)
% hObject    handle to popupmenu4 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
val = get(hObject, 'Value');
switch val
    case {1, 2, 3}   % These three selections enable the same controls.
        if (get(handles.popupmenu15, 'Value') == 1)
            set(handles.popupmenu16, 'Enable', 'on');
        end
        set(handles.popupmenu11, 'Enable', 'on');
        if (get(handles.popupmenu11, 'Value') == 1)
            set(handles.popupmenu14, 'Enable', 'on');
        end
        set(handles.popupmenu10, 'Enable', 'on');
    case 4
        if (get(handles.popupmenu6, 'Value') == 1)
            if (get(handles.popupmenu15, 'Value') == 1)
                set(handles.popupmenu16, 'Enable', 'off');
            end
            set(handles.popupmenu11, 'Enable', 'off');
            if (get(handles.popupmenu11, 'Value') == 1)
                set(handles.popupmenu14, 'Enable', 'off');
            end
            if (get(handles.popupmenu7, 'Value') == 1)
                set(handles.popupmenu10, 'Enable', 'off');
            end
        end
end
% Hints: contents = get(hObject,'String') returns popupmenu4 contents as cell array
%        contents{get(hObject,'Value')} returns selected item from popupmenu4

% --- Executes during object creation, after setting all properties.
function popupmenu4_CreateFcn(hObject, eventdata, handles)
% hObject    handle to popupmenu4 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called


% Hint: popupmenu controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

% --- Executes on selection change in popupmenu6.
function popupmenu6_Callback(hObject, eventdata, handles)
% hObject    handle to popupmenu6 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
val = get(hObject, 'Value');
switch val
    case 1
        if (get(handles.popupmenu4, 'Value') == 4)
            if (get(handles.popupmenu15, 'Value') == 1)
                set(handles.popupmenu16, 'Enable', 'off');
            end
            set(handles.popupmenu11, 'Enable', 'off');
            if (get(handles.popupmenu11, 'Value') == 1)
                set(handles.popupmenu14, 'Enable', 'off');
            end
            if (get(handles.popupmenu7, 'Value') == 1)
                set(handles.popupmenu10, 'Enable', 'off');
            end
        end
    case 2
        if (get(handles.popupmenu15, 'Value') == 1)
            set(handles.popupmenu16, 'Enable', 'on');
        end
        set(handles.popupmenu11, 'Enable', 'on');
        if (get(handles.popupmenu11, 'Value') == 1)
            set(handles.popupmenu14, 'Enable', 'on');
        end
        set(handles.popupmenu10, 'Enable', 'on');
end
% Hints: contents = get(hObject,'String') returns popupmenu6 contents as cell array
%        contents{get(hObject,'Value')} returns selected item from popupmenu6

% --- Executes during object creation, after setting all properties.
function popupmenu6_CreateFcn(hObject, eventdata, handles)
% hObject    handle to popupmenu6 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: popupmenu controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

% --- Executes on selection change in popupmenu7.
function popupmenu7_Callback(hObject, eventdata, handles)
% hObject    handle to popupmenu7 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% Hints: contents = get(hObject,'String') returns popupmenu7 contents as cell array
%        contents{get(hObject,'Value')} returns selected item from popupmenu7


val = get(hObject, 'Value');
switch val
    case 1
        if ((get(handles.popupmenu4, 'Value') == 4) && (get(handles.popupmenu6, 'Value') == 1))
            set(handles.popupmenu10, 'Enable', 'off');
        end
    case 2
        set(handles.popupmenu10, 'Enable', 'on');
end

% --- Executes during object creation, after setting all properties.
function popupmenu7_CreateFcn(hObject, eventdata, handles)
% hObject    handle to popupmenu7 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: popupmenu controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

% --- Executes on selection change in popupmenu8.
function popupmenu8_Callback(hObject, eventdata, handles)
% hObject    handle to popupmenu8 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% Hints: contents = get(hObject,'String') returns popupmenu8 contents as cell array
%        contents{get(hObject,'Value')} returns selected item from popupmenu8

% --- Executes during object creation, after setting all properties.
function popupmenu8_CreateFcn(hObject, eventdata, handles)
% hObject    handle to popupmenu8 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: popupmenu controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

% --- Executes on selection change in popupmenu10.
function popupmenu10_Callback(hObject, eventdata, handles)
% hObject    handle to popupmenu10 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% Hints: contents = get(hObject,'String') returns popupmenu10 contents as cell array
%        contents{get(hObject,'Value')} returns selected item from popupmenu10

% --- Executes during object creation, after setting all properties.
function popupmenu10_CreateFcn(hObject, eventdata, handles)
% hObject    handle to popupmenu10 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: popupmenu controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');


end

% --- Executes on selection change in popupmenu14.
function popupmenu14_Callback(hObject, eventdata, handles)
% hObject    handle to popupmenu14 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% Hints: contents = get(hObject,'String') returns popupmenu14 contents as cell array
%        contents{get(hObject,'Value')} returns selected item from popupmenu14

% --- Executes during object creation, after setting all properties.
function popupmenu14_CreateFcn(hObject, eventdata, handles)
% hObject    handle to popupmenu14 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: popupmenu controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

% --- Executes on selection change in popupmenu11.
function popupmenu11_Callback(hObject, eventdata, handles)
% hObject    handle to popupmenu11 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
val = get(hObject, 'Value');
switch val
    case 1
        set(handles.popupmenu14, 'Enable', 'on');
    case 2
        set(handles.popupmenu14, 'Enable', 'off');
end
% Hints: contents = get(hObject,'String') returns popupmenu11 contents as cell array
%        contents{get(hObject,'Value')} returns selected item from popupmenu11

% --- Executes during object creation, after setting all properties.
function popupmenu11_CreateFcn(hObject, eventdata, handles)
% hObject    handle to popupmenu11 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: popupmenu controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

% --- Executes on selection change in popupmenu16.
function popupmenu16_Callback(hObject, eventdata, handles)
% hObject    handle to popupmenu16 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% Hints: contents = get(hObject,'String') returns popupmenu16 contents as cell array
%        contents{get(hObject,'Value')} returns selected item from popupmenu16

% --- Executes during object creation, after setting all properties.
function popupmenu16_CreateFcn(hObject, eventdata, handles)
% hObject    handle to popupmenu16 (see GCBO)


% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: popupmenu controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

% --- Executes on selection change in popupmenu15.
function popupmenu15_Callback(hObject, eventdata, handles)
% hObject    handle to popupmenu15 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
global VAR_WCI
global VAR_QI
val = get(hObject, 'Value');
switch val
    case 1
        if ((get(handles.popupmenu4, 'Value') ~= 4) || (get(handles.popupmenu6, 'Value') == 2))
            set(handles.popupmenu16, 'Enable', 'on');
        end
        set(handles.edit49, 'String', 1.5708);
        set(handles.edit50, 'String', 1.5708);
        set(handles.edit51, 'String', 0);
        set(handles.edit52, 'String', 1.5708);
        set(handles.edit53, 'String', 1.5708);
        set(handles.edit54, 'String', 1.5708);
        set(handles.edit55, 'String', 0);
        set(handles.edit49, 'Enable', 'off');
        set(handles.edit50, 'Enable', 'off');
        set(handles.edit51, 'Enable', 'off');
        set(handles.edit52, 'Enable', 'off');
        set(handles.edit53, 'Enable', 'off');
        set(handles.edit54, 'Enable', 'off');
        set(handles.edit55, 'Enable', 'off');
        set(handles.pushbutton5, 'Enable', 'off');
        set(handles.pushbutton6, 'Enable', 'off');
        set(handles.edit56, 'String', 0);
        set(handles.edit57, 'String', 0);
        set(handles.edit58, 'String', 0);
        set(handles.edit56, 'Enable', 'off');
        set(handles.edit57, 'Enable', 'off');
        set(handles.edit58, 'Enable', 'off');
    case 2
        set(handles.popupmenu16, 'Enable', 'off');
        set(handles.edit49, 'String', VAR_QI(1,1));
        set(handles.edit50, 'String', VAR_QI(2,1));
        set(handles.edit51, 'String', VAR_QI(3,1));
        set(handles.edit52, 'String', VAR_QI(4,1));
        set(handles.edit53, 'String', VAR_QI(5,1));
        set(handles.edit54, 'String', VAR_QI(6,1));
        set(handles.edit55, 'String', VAR_QI(7,1));
        set(handles.edit49, 'Enable', 'off');
        set(handles.edit50, 'Enable', 'off');
        set(handles.edit51, 'Enable', 'off');
        set(handles.edit52, 'Enable', 'off');


        set(handles.edit53, 'Enable', 'off');
        set(handles.edit54, 'Enable', 'off');
        set(handles.edit55, 'Enable', 'off');
        set(handles.pushbutton5, 'Enable', 'off');
        set(handles.pushbutton6, 'Enable', 'off');
        set(handles.edit56, 'String', VAR_WCI(1,1));
        set(handles.edit57, 'String', VAR_WCI(2,1));
        set(handles.edit58, 'String', VAR_WCI(3,1));
        set(handles.edit56, 'Enable', 'off');
        set(handles.edit57, 'Enable', 'off');
        set(handles.edit58, 'Enable', 'off');
    case 3
        set(handles.popupmenu16, 'Enable', 'off');
        set(handles.edit49, 'Enable', 'on');
        set(handles.edit50, 'Enable', 'on');
        set(handles.edit51, 'Enable', 'on');
        set(handles.edit52, 'Enable', 'on');
        set(handles.edit53, 'Enable', 'on');
        set(handles.edit54, 'Enable', 'on');
        set(handles.edit55, 'Enable', 'on');
        set(handles.pushbutton5, 'Enable', 'on');
        set(handles.pushbutton6, 'Enable', 'on');
        set(handles.edit56, 'Enable', 'on');
        set(handles.edit57, 'Enable', 'on');
        set(handles.edit58, 'Enable', 'on');
end
% Hints: contents = get(hObject,'String') returns popupmenu15 contents as cell array
%        contents{get(hObject,'Value')} returns selected item from popupmenu15

% --- Executes during object creation, after setting all properties.
function popupmenu15_CreateFcn(hObject, eventdata, handles)
% hObject    handle to popupmenu15 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: popupmenu controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

% --- Executes on selection change in popupmenu19.
function popupmenu19_Callback(hObject, eventdata, handles)
% hObject    handle to popupmenu19 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% Hints: contents = get(hObject,'String') returns popupmenu19 contents as cell array
%        contents{get(hObject,'Value')} returns selected item from popupmenu19

% --- Executes during object creation, after setting all properties.
function popupmenu19_CreateFcn(hObject, eventdata, handles)
% hObject    handle to popupmenu19 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: popupmenu controls usually have a white background on Windows.
%       See ISPC and COMPUTER.


if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

% --- Executes on selection change in popupmenu17.
function popupmenu17_Callback(hObject, eventdata, handles)
% hObject    handle to popupmenu17 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
global VAR_SCREENOPN
val = get(hObject, 'Value');
switch val
    case 1
        if (get(handles.popupmenu1, 'Value') == 1)
            set(handles.edit36, 'Enable', 'on');
            set(handles.edit37, 'Enable', 'on');
            set(handles.edit38, 'Enable', 'on');
            set(handles.edit39, 'Enable', 'on');
            set(handles.edit40, 'Enable', 'on');
            set(handles.edit41, 'Enable', 'on');
            set(handles.edit42, 'Enable', 'on');
            set(handles.edit43, 'Enable', 'on');
            set(handles.edit44, 'Enable', 'on');
        else
            set(handles.edit36, 'String', 1);
            set(handles.edit37, 'String', 0);
            set(handles.edit38, 'String', 0);
            set(handles.edit39, 'String', 0);
            set(handles.edit40, 'String', 1);
            set(handles.edit41, 'String', 0);
            set(handles.edit42, 'String', 0);
            set(handles.edit43, 'String', 0);
            set(handles.edit44, 'String', 1);
            set(handles.edit36, 'Enable', 'off');
            set(handles.edit37, 'Enable', 'off');
            set(handles.edit38, 'Enable', 'off');
            set(handles.edit39, 'Enable', 'off');
            set(handles.edit40, 'Enable', 'off');
            set(handles.edit41, 'Enable', 'off');
            set(handles.edit42, 'Enable', 'off');
            set(handles.edit43, 'Enable', 'off');
            set(handles.edit44, 'Enable', 'off');
        end
        set(handles.edit15, 'Enable', 'on');
        set(handles.edit16, 'Enable', 'on');
        set(handles.edit17, 'Enable', 'on');
        set(handles.text15, 'Enable', 'on');
        set(handles.text16, 'Enable', 'on');
        set(handles.text17, 'Enable', 'on');
        set(handles.text18, 'Enable', 'on');
        set(handles.edit45, 'Enable', 'on');
        set(handles.text14, 'Enable', 'on');
        set(handles.text13, 'Enable', 'on');
        set(handles.pushbutton3, 'Enable', 'on');
        set(handles.popupmenu20, 'Enable', 'on');
        %%%%%%%%%%%%%%%%%%%%%%%
        set(handles.edit62, 'Enable', 'off');
        set(handles.edit63, 'Enable', 'off');
        set(handles.edit64, 'Enable', 'off');


        set(handles.text21, 'Enable', 'off');
        set(handles.edit65, 'Enable', 'off');
        set(handles.edit66, 'Enable', 'off');
        set(handles.edit67, 'Enable', 'off');
        set(handles.text22, 'Enable', 'off');
        set(handles.edit4, 'Enable', 'off');
        set(handles.text6, 'Enable', 'off');
        set(handles.text5, 'Enable', 'off');
        set(handles.pushbutton4, 'Enable', 'off');
        %%%%%%%%%%%%%%%%%%%%%%%
        set(handles.edit1, 'Enable', 'off');
        set(handles.text1, 'Enable', 'off');
        set(handles.text2, 'Enable', 'off');
        set(handles.edit69, 'Enable', 'off');
        set(handles.text23, 'Enable', 'off');
    case 2
        set(handles.edit36, 'Enable', 'off');
        set(handles.edit37, 'Enable', 'off');
        set(handles.edit38, 'Enable', 'off');
        set(handles.edit39, 'Enable', 'off');
        set(handles.edit40, 'Enable', 'off');
        set(handles.edit41, 'Enable', 'off');
        set(handles.edit42, 'Enable', 'off');
        set(handles.edit43, 'Enable', 'off');
        set(handles.edit44, 'Enable', 'off');
        set(handles.edit15, 'Enable', 'off');
        set(handles.edit16, 'Enable', 'off');
        set(handles.edit17, 'Enable', 'off');
        set(handles.text15, 'Enable', 'off');
        set(handles.text16, 'Enable', 'off');
        set(handles.text17, 'Enable', 'off');
        set(handles.text18, 'Enable', 'off');
        set(handles.edit45, 'Enable', 'off');
        set(handles.text14, 'Enable', 'off');
        set(handles.text13, 'Enable', 'off');
        set(handles.pushbutton3, 'Enable', 'off');
        set(handles.popupmenu20, 'Enable', 'off');
        %%%%%%%%%%%%%%%%%%%%%%%
        set(handles.edit62, 'Enable', 'on');
        set(handles.edit63, 'Enable', 'on');
        set(handles.edit64, 'Enable', 'on');
        set(handles.text21, 'Enable', 'on');
        if (get(handles.popupmenu1, 'Value') == 1)
            set(handles.edit65, 'Enable', 'on');
            set(handles.edit66, 'Enable', 'on');
            set(handles.edit67, 'Enable', 'on');
            set(handles.text22, 'Enable', 'on');
        end
        set(handles.edit4, 'Enable', 'on');
        set(handles.text6, 'Enable', 'on');
        set(handles.text5, 'Enable', 'on');
        set(handles.pushbutton4, 'Enable', 'on');
        %%%%%%%%%%%%%%%%%%%%%%%
        set(handles.edit1, 'Enable', 'off');
        set(handles.text1, 'Enable', 'off');


        set(handles.text2, 'Enable', 'off');
        set(handles.edit69, 'Enable', 'off');
        set(handles.text23, 'Enable', 'off');
    case 3
        set(handles.edit36, 'Enable', 'off');
        set(handles.edit37, 'Enable', 'off');
        set(handles.edit38, 'Enable', 'off');
        set(handles.edit39, 'Enable', 'off');
        set(handles.edit40, 'Enable', 'off');
        set(handles.edit41, 'Enable', 'off');
        set(handles.edit42, 'Enable', 'off');
        set(handles.edit43, 'Enable', 'off');
        set(handles.edit44, 'Enable', 'off');
        set(handles.edit15, 'Enable', 'off');
        set(handles.edit16, 'Enable', 'off');
        set(handles.edit17, 'Enable', 'off');
        set(handles.text15, 'Enable', 'off');
        set(handles.text16, 'Enable', 'off');
        set(handles.text17, 'Enable', 'off');
        set(handles.text18, 'Enable', 'off');
        set(handles.edit45, 'Enable', 'off');
        set(handles.text14, 'Enable', 'off');
        set(handles.text13, 'Enable', 'off');
        set(handles.pushbutton3, 'Enable', 'off');
        set(handles.popupmenu20, 'Enable', 'off');
        %%%%%%%%%%%%%%%%%%%%%%%
        set(handles.edit62, 'Enable', 'off');
        set(handles.edit63, 'Enable', 'off');
        set(handles.edit64, 'Enable', 'off');
        set(handles.text21, 'Enable', 'off');
        set(handles.edit65, 'Enable', 'off');
        set(handles.edit66, 'Enable', 'off');
        set(handles.edit67, 'Enable', 'off');
        set(handles.text22, 'Enable', 'off');
        set(handles.edit4, 'Enable', 'off');
        set(handles.text6, 'Enable', 'off');
        set(handles.text5, 'Enable', 'off');
        set(handles.pushbutton4, 'Enable', 'off');
        %%%%%%%%%%%%%%%%%%%%%%%
        set(handles.edit1, 'Enable', 'on');
        set(handles.text1, 'Enable', 'on');
        set(handles.text2, 'Enable', 'on');
        set(handles.edit69, 'Enable', 'off');
        set(handles.text23, 'Enable', 'off');
    case 4
        set(handles.edit36, 'Enable', 'off');
        set(handles.edit37, 'Enable', 'off');
        set(handles.edit38, 'Enable', 'off');
        set(handles.edit39, 'Enable', 'off');
        set(handles.edit40, 'Enable', 'off');
        set(handles.edit41, 'Enable', 'off');
        set(handles.edit42, 'Enable', 'off');
        set(handles.edit43, 'Enable', 'off');
        set(handles.edit44, 'Enable', 'off');
        set(handles.edit15, 'Enable', 'off');
        set(handles.edit16, 'Enable', 'off');


        set(handles.edit17, 'Enable', 'off');
        set(handles.text15, 'Enable', 'off');
        set(handles.text16, 'Enable', 'off');
        set(handles.text17, 'Enable', 'off');
        set(handles.text18, 'Enable', 'off');
        set(handles.edit45, 'Enable', 'off');
        set(handles.text14, 'Enable', 'off');
        set(handles.text13, 'Enable', 'off');
        set(handles.pushbutton3, 'Enable', 'off');
        set(handles.popupmenu20, 'Enable', 'off');
        %%%%%%%%%%%%%%%%%%%%%%%
        set(handles.edit62, 'Enable', 'off');
        set(handles.edit63, 'Enable', 'off');
        set(handles.edit64, 'Enable', 'off');
        set(handles.text21, 'Enable', 'off');
        set(handles.edit65, 'Enable', 'off');
        set(handles.edit66, 'Enable', 'off');
        set(handles.edit67, 'Enable', 'off');
        set(handles.text22, 'Enable', 'off');
        set(handles.edit4, 'Enable', 'off');
        set(handles.text6, 'Enable', 'off');
        set(handles.text5, 'Enable', 'off');
        set(handles.pushbutton4, 'Enable', 'off');
        %%%%%%%%%%%%%%%%%%%%%%%
        set(handles.edit1, 'Enable', 'on');
        set(handles.text1, 'Enable', 'on');
        set(handles.text2, 'Enable', 'on');
        set(handles.edit69, 'Enable', 'on');
        set(handles.text23, 'Enable', 'on');
    case 5
        set(handles.edit36, 'Enable', 'off');
        set(handles.edit37, 'Enable', 'off');
        set(handles.edit38, 'Enable', 'off');
        set(handles.edit39, 'Enable', 'off');
        set(handles.edit40, 'Enable', 'off');
        set(handles.edit41, 'Enable', 'off');
        set(handles.edit42, 'Enable', 'off');
        set(handles.edit43, 'Enable', 'off');
        set(handles.edit44, 'Enable', 'off');
        set(handles.edit15, 'Enable', 'off');
        set(handles.edit16, 'Enable', 'off');
        set(handles.edit17, 'Enable', 'off');
        set(handles.text15, 'Enable', 'off');
        set(handles.text16, 'Enable', 'off');
        set(handles.text17, 'Enable', 'off');
        set(handles.text18, 'Enable', 'off');
        set(handles.edit45, 'Enable', 'off');
        set(handles.text14, 'Enable', 'off');
        set(handles.text13, 'Enable', 'off');
        set(handles.pushbutton3, 'Enable', 'off');
        set(handles.popupmenu20, 'Enable', 'off');
        %%%%%%%%%%%%%%%%%%%%%%%
        set(handles.edit62, 'Enable', 'off');
        set(handles.edit63, 'Enable', 'off');
        set(handles.edit64, 'Enable', 'off');


        set(handles.text21, 'Enable', 'off');
        set(handles.edit65, 'Enable', 'off');
        set(handles.edit66, 'Enable', 'off');
        set(handles.edit67, 'Enable', 'off');
        set(handles.text22, 'Enable', 'off');
        set(handles.edit4, 'Enable', 'off');
        set(handles.text6, 'Enable', 'off');
        set(handles.text5, 'Enable', 'off');
        set(handles.pushbutton4, 'Enable', 'off');
        %%%%%%%%%%%%%%%%%%%%%%%
        set(handles.edit1, 'Enable', 'on');
        set(handles.text1, 'Enable', 'on');
        set(handles.text2, 'Enable', 'on');
        set(handles.edit69, 'Enable', 'off');
        set(handles.text23, 'Enable', 'off');
        if (VAR_SCREENOPN ~= 1)
            WMRA_screen('1');
            drawnow;
        end
end
% Hints: contents = get(hObject,'String') returns popupmenu17 contents as cell array
%        contents{get(hObject,'Value')} returns selected item from popupmenu17

% --- Executes during object creation, after setting all properties.
function popupmenu17_CreateFcn(hObject, eventdata, handles)
% hObject    handle to popupmenu17 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: popupmenu controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function edit1_Callback(hObject, eventdata, handles)
% hObject    handle to edit1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit1 as text
%        str2double(get(hObject,'String')) returns contents of edit1 as a double

% --- Executes during object creation, after setting all properties.
function edit1_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function edit4_Callback(hObject, eventdata, handles)
% hObject    handle to edit4 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB


% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit4 as text
%        str2double(get(hObject,'String')) returns contents of edit4 as a double

% --- Executes during object creation, after setting all properties.
function edit4_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit4 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function edit36_Callback(hObject, eventdata, handles)
% hObject    handle to edit36 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit36 as text
%        str2double(get(hObject,'String')) returns contents of edit36 as a double

% --- Executes during object creation, after setting all properties.
function edit36_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit36 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function edit37_Callback(hObject, eventdata, handles)
% hObject    handle to edit37 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit37 as text
%        str2double(get(hObject,'String')) returns contents of edit37 as a double

% --- Executes during object creation, after setting all properties.
function edit37_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit37 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function edit38_Callback(hObject, eventdata, handles)
% hObject    handle to edit38 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit38 as text


%        str2double(get(hObject,'String')) returns contents of edit38 as a double

% --- Executes during object creation, after setting all properties.
function edit38_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit38 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function edit39_Callback(hObject, eventdata, handles)
% hObject    handle to edit39 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit39 as text
%        str2double(get(hObject,'String')) returns contents of edit39 as a double

% --- Executes during object creation, after setting all properties.
function edit39_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit39 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function edit40_Callback(hObject, eventdata, handles)
% hObject    handle to edit40 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit40 as text
%        str2double(get(hObject,'String')) returns contents of edit40 as a double

% --- Executes during object creation, after setting all properties.
function edit40_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit40 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function edit41_Callback(hObject, eventdata, handles)
% hObject    handle to edit41 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit41 as text
%        str2double(get(hObject,'String')) returns contents of edit41 as a double

% --- Executes during object creation, after setting all properties.


function edit41_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit41 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function edit42_Callback(hObject, eventdata, handles)
% hObject    handle to edit42 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit42 as text
%        str2double(get(hObject,'String')) returns contents of edit42 as a double

% --- Executes during object creation, after setting all properties.
function edit42_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit42 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function edit43_Callback(hObject, eventdata, handles)
% hObject    handle to edit43 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit43 as text
%        str2double(get(hObject,'String')) returns contents of edit43 as a double

% --- Executes during object creation, after setting all properties.
function edit43_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit43 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function edit44_Callback(hObject, eventdata, handles)
% hObject    handle to edit44 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit44 as text
%        str2double(get(hObject,'String')) returns contents of edit44 as a double

% --- Executes during object creation, after setting all properties.
function edit44_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit44 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB


% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function edit16_Callback(hObject, eventdata, handles)
% hObject    handle to edit16 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit16 as text
%        str2double(get(hObject,'String')) returns contents of edit16 as a double

% --- Executes during object creation, after setting all properties.
function edit16_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit16 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function edit15_Callback(hObject, eventdata, handles)
% hObject    handle to edit15 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit15 as text
%        str2double(get(hObject,'String')) returns contents of edit15 as a double

% --- Executes during object creation, after setting all properties.
function edit15_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit15 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function edit17_Callback(hObject, eventdata, handles)
% hObject    handle to edit17 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit17 as text
%        str2double(get(hObject,'String')) returns contents of edit17 as a double

% --- Executes during object creation, after setting all properties.
function edit17_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit17 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.


%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function edit49_Callback(hObject, eventdata, handles)
% hObject    handle to edit49 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit49 as text
%        str2double(get(hObject,'String')) returns contents of edit49 as a double

% --- Executes during object creation, after setting all properties.
function edit49_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit49 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function edit50_Callback(hObject, eventdata, handles)
% hObject    handle to edit50 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit50 as text
%        str2double(get(hObject,'String')) returns contents of edit50 as a double

% --- Executes during object creation, after setting all properties.
function edit50_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit50 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function edit51_Callback(hObject, eventdata, handles)
% hObject    handle to edit51 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit51 as text
%        str2double(get(hObject,'String')) returns contents of edit51 as a double

% --- Executes during object creation, after setting all properties.
function edit51_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit51 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))


    set(hObject,'BackgroundColor','white');
end

function edit52_Callback(hObject, eventdata, handles)
% hObject    handle to edit52 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit52 as text
%        str2double(get(hObject,'String')) returns contents of edit52 as a double

% --- Executes during object creation, after setting all properties.
function edit52_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit52 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function edit53_Callback(hObject, eventdata, handles)
% hObject    handle to edit53 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit53 as text
%        str2double(get(hObject,'String')) returns contents of edit53 as a double

% --- Executes during object creation, after setting all properties.
function edit53_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit53 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function edit54_Callback(hObject, eventdata, handles)
% hObject    handle to edit54 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit54 as text
%        str2double(get(hObject,'String')) returns contents of edit54 as a double

% --- Executes during object creation, after setting all properties.
function edit54_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit54 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end


function edit55_Callback(hObject, eventdata, handles)
% hObject    handle to edit55 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit55 as text
%        str2double(get(hObject,'String')) returns contents of edit55 as a double

% --- Executes during object creation, after setting all properties.
function edit55_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit55 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function edit56_Callback(hObject, eventdata, handles)
% hObject    handle to edit56 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit56 as text
%        str2double(get(hObject,'String')) returns contents of edit56 as a double

% --- Executes during object creation, after setting all properties.
function edit56_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit56 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function edit57_Callback(hObject, eventdata, handles)
% hObject    handle to edit57 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit57 as text
%        str2double(get(hObject,'String')) returns contents of edit57 as a double

% --- Executes during object creation, after setting all properties.
function edit57_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit57 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function edit58_Callback(hObject, eventdata, handles)
% hObject    handle to edit58 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB


% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit58 as text
%        str2double(get(hObject,'String')) returns contents of edit58 as a double

% --- Executes during object creation, after setting all properties.
function edit58_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit58 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function edit45_Callback(hObject, eventdata, handles)
% hObject    handle to edit45 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit45 as text
%        str2double(get(hObject,'String')) returns contents of edit45 as a double

% --- Executes during object creation, after setting all properties.
function edit45_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit45 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

% --- Executes on selection change in popupmenu20.
function popupmenu20_Callback(hObject, eventdata, handles)
% hObject    handle to popupmenu20 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: contents = get(hObject,'String') returns popupmenu20 contents as cell array
%        contents{get(hObject,'Value')} returns selected item from popupmenu20

% --- Executes during object creation, after setting all properties.
function popupmenu20_CreateFcn(hObject, eventdata, handles)
% hObject    handle to popupmenu20 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: popupmenu controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function edit62_Callback(hObject, eventdata, handles)
% hObject    handle to edit62 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)


% Hints: get(hObject,'String') returns contents of edit62 as text
%        str2double(get(hObject,'String')) returns contents of edit62 as a double

% --- Executes during object creation, after setting all properties.
function edit62_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit62 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function edit63_Callback(hObject, eventdata, handles)
% hObject    handle to edit63 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit63 as text
%        str2double(get(hObject,'String')) returns contents of edit63 as a double

% --- Executes during object creation, after setting all properties.
function edit63_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit63 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function edit64_Callback(hObject, eventdata, handles)
% hObject    handle to edit64 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit64 as text
%        str2double(get(hObject,'String')) returns contents of edit64 as a double

% --- Executes during object creation, after setting all properties.
function edit64_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit64 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function edit65_Callback(hObject, eventdata, handles)
% hObject    handle to edit65 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit65 as text
%        str2double(get(hObject,'String')) returns contents of edit65 as a double

% --- Executes during object creation, after setting all properties.


function edit65_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit65 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called
% Hint: edit controls usually have a white background on Windows. See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function edit66_Callback(hObject, eventdata, handles)
% hObject    handle to edit66 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% Hints: get(hObject,'String') returns contents of edit66 as text
%        str2double(get(hObject,'String')) returns contents of edit66 as a double

% --- Executes during object creation, after setting all properties.
function edit66_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit66 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called
% Hint: edit controls usually have a white background on Windows. See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function edit67_Callback(hObject, eventdata, handles)
% hObject    handle to edit67 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% Hints: get(hObject,'String') returns contents of edit67 as text
%        str2double(get(hObject,'String')) returns contents of edit67 as a double

% --- Executes during object creation, after setting all properties.
function edit67_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit67 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called
% Hint: edit controls usually have a white background on Windows. See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

% --- Executes on selection change in popupmenu21.
function popupmenu21_Callback(hObject, eventdata, handles)
% hObject    handle to popupmenu21 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% Hints: contents = get(hObject,'String') returns popupmenu21 contents as cell array
%        contents{get(hObject,'Value')} returns selected item from popupmenu21

% --- Executes during object creation, after setting all properties.
function popupmenu21_CreateFcn(hObject, eventdata, handles)
% hObject    handle to popupmenu21 (see GCBO)


% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called
% Hint: popupmenu controls usually have a white background on Windows. See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function edit69_Callback(hObject, eventdata, handles)
% hObject    handle to edit69 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% Hints: get(hObject,'String') returns contents of edit69 as text
%        str2double(get(hObject,'String')) returns contents of edit69 as a double

% --- Executes during object creation, after setting all properties.
function edit69_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit69 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called
% Hint: edit controls usually have a white background on Windows. See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

% --- Executes on button press in pushbutton3.
function pushbutton3_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton3 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
global VAR_MATRIX
if (get(handles.popupmenu1, 'Value') == 1)
    WMRA_matrix_entry('(3 x 4)');
    TS = str2num(VAR_MATRIX);
    [row, col] = size(TS);
    if ((row == 3) && (col == 4))
        set(handles.edit36, 'String', TS(1,1));
        set(handles.edit37, 'String', TS(1,2));
        set(handles.edit38, 'String', TS(1,3));
        set(handles.edit16, 'String', TS(1,4));
        set(handles.edit39, 'String', TS(2,1));
        set(handles.edit40, 'String', TS(2,2));
        set(handles.edit41, 'String', TS(2,3));
        set(handles.edit15, 'String', TS(2,4));
        set(handles.edit42, 'String', TS(3,1));
        set(handles.edit43, 'String', TS(3,2));
        set(handles.edit44, 'String', TS(3,3));
        set(handles.edit17, 'String', TS(3,4));
    else
        WMRA_error_gui('Matrix size wrong. Size (3 x 4) expected');
    end
else
    WMRA_matrix_entry('(3 x 1)');
    TS = str2num(VAR_MATRIX);
    [row, col] = size(TS);
    if ((row == 3) && (col == 1))
        set(handles.edit16, 'String', TS(1,1));


        set(handles.edit15, 'String', TS(2,1));
        set(handles.edit17, 'String', TS(3,1));
    else
        WMRA_error_gui('Matrix size wrong. Size (3 x 1) expected');
    end
end

% --- Executes on button press in pushbutton4.
function pushbutton4_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton4 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
global VAR_MATRIX
if (get(handles.popupmenu1, 'Value') == 1)
    WMRA_matrix_entry('(2 x 3)');
    TS = str2num(VAR_MATRIX);
    [row, col] = size(TS);
    if ((row == 2) && (col == 3))
        set(handles.edit62, 'String', TS(1,1));
        set(handles.edit63, 'String', TS(1,2));
        set(handles.edit64, 'String', TS(1,3));
        set(handles.edit65, 'String', TS(2,1));
        set(handles.edit66, 'String', TS(2,2));
        set(handles.edit67, 'String', TS(2,3));
    else
        WMRA_error_gui('Matrix size wrong. Size (2 x 3) expected');
    end
else
    WMRA_matrix_entry('(1 x 3)');
    TS = str2num(VAR_MATRIX);
    [row, col] = size(TS);
    if ((row == 1) && (col == 3))
        set(handles.edit62, 'String', TS(1,1));
        set(handles.edit63, 'String', TS(1,2));
        set(handles.edit64, 'String', TS(1,3));
    else
        WMRA_error_gui('Matrix size wrong. Size (1 x 3) expected');
    end
end

% --- Executes on button press in pushbutton5.
function pushbutton5_Callback(hObject, eventdata, handles, varargin)
% hObject    handle to pushbutton5 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
global VAR_MATRIX
WMRA_matrix_entry('(3 x 1)');
TS = str2num(VAR_MATRIX);
[row, col] = size(TS);
if ((row == 3) && (col == 1))
    set(handles.edit56, 'String', TS(1,1));
    set(handles.edit57, 'String', TS(2,1));
    set(handles.edit58, 'String', TS(3,1));
else
    WMRA_error_gui('Matrix size wrong. Size (3 x 1) expected');
end

% --- Executes on button press in pushbutton6.
function pushbutton6_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton6 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB


% handles    structure with handles and user data (see GUIDATA)
global VAR_MATRIX
WMRA_matrix_entry('(7 x 1)');
TS = str2num(VAR_MATRIX);
[row, col] = size(TS);
if ((row == 7) && (col == 1))
    set(handles.edit49, 'String', TS(1,1));
    set(handles.edit50, 'String', TS(2,1));
    set(handles.edit51, 'String', TS(3,1));
    set(handles.edit52, 'String', TS(4,1));
    set(handles.edit53, 'String', TS(5,1));
    set(handles.edit54, 'String', TS(6,1));
    set(handles.edit55, 'String', TS(7,1));
else
    WMRA_error_gui('Matrix size wrong. Size (7 x 1) expected');
end

% --- Executes on button press in pushbutton1.
function pushbutton1_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
global VAR_STOP
VAR_STOP = 1;
drawnow;
set(handles.pushbutton1, 'Enable', 'off');
set(handles.pushbutton7, 'Enable', 'off');
set(handles.pushbutton8, 'Enable', 'on');
set(handles.pushbutton2, 'Enable', 'on');

% --- Executes on button press in pushbutton2.
function pushbutton2_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton2 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
global VAR_STOP
global VAR_DX
global VAR_SCREENOPN
global VAR_WCI
global VAR_QI
global spdata1
global VAR_LOOP
VAR_DX = [0;0;0;0;0;0;0];
VAR_STOP = 0;
error = 0;
notfilled = 0;
td_err = 0;
ts_err = 0;
set(handles.pushbutton1, 'Enable', 'on');
set(handles.pushbutton2, 'Enable', 'off');
set(handles.pushbutton8, 'Enable', 'off');
if (get(handles.popupmenu17, 'Value') == 3) || (get(handles.popupmenu17, 'Value') == 4) || (get(handles.popupmenu17, 'Value') == 5)
    set(handles.pushbutton7, 'Enable', 'on');
    VAR_LOOP = 1;
end
cart = get(handles.popupmenu1, 'Value');


WCA = get(handles.popupmenu3, 'Value');
if WCA == 3
    optim = 0;
    JLA = 0;
    JLO = get(handles.checkbox2, 'Value');
else
    optim = get(handles.popupmenu8, 'Value');
    JLA = get(handles.checkbox1, 'Value');
    JLO = get(handles.checkbox2, 'Value');
end
coord = get(handles.popupmenu21, 'Value');
t1 = get(handles.popupmenu4, 'Value');
if t1 == 1
    vr = 1; ml = 0;
elseif t1 == 2
    vr = 0; ml = 1;
elseif t1 == 3
    vr = 1; ml = 1;
else
    vr = 0; ml = 0;
end
arm = get(handles.popupmenu6, 'Value');
arm = arm - 1;
plt = get(handles.popupmenu7, 'Value');
choice8 = get(handles.popupmenu10, 'Value');
drawnow;
if VAR_STOP == 1
    stop = 1;
else
    stop = 0;
end
if stop == 0
    cont = get(handles.popupmenu17, 'Value');
    if cont == 1
        if cart == 1
            v_str = get(handles.edit36, 'String');
            v_1 = str2double(v_str);
            if isnan(v_1)
                error = 1; notfilled = 1;
            end
            if (v_1 > 1) || (v_1 < -1)
                error = 1; td_err = 1;
            end
            v_str = get(handles.edit37, 'String');
            v_2 = str2double(v_str);
            if isnan(v_2)
                error = 1; notfilled = 1;
            end
            if (v_2 > 1) || (v_2 < -1)
                error = 1; td_err = 1;
            end
            v_str = get(handles.edit38, 'String');
            v_3 = str2double(v_str);


            if isnan(v_3)
                error = 1; notfilled = 1;
            end
            if (v_3 > 1) || (v_3 < -1)
                error = 1; td_err = 1;
            end
            v_str = get(handles.edit39, 'String');
            v_5 = str2double(v_str);
            if isnan(v_5)
                error = 1; notfilled = 1;
            end
            if (v_5 > 1) || (v_5 < -1)
                error = 1; td_err = 1;
            end
            v_str = get(handles.edit40, 'String');
            v_6 = str2double(v_str);
            if isnan(v_6)
                error = 1; notfilled = 1;
            end
            if (v_6 > 1) || (v_6 < -1)
                error = 1; td_err = 1;
            end
            v_str = get(handles.edit41, 'String');
            v_7 = str2double(v_str);
            if isnan(v_7)
                error = 1; notfilled = 1;
            end
            if (v_7 > 1) || (v_7 < -1)
                error = 1; td_err = 1;
            end
            v_str = get(handles.edit42, 'String');
            v_9 = str2double(v_str);
            if isnan(v_9)
                error = 1; notfilled = 1;
            end
            if (v_9 > 1) || (v_9 < -1)
                error = 1; td_err = 1;
            end
            v_str = get(handles.edit43, 'String');
            v_10 = str2double(v_str);
            if isnan(v_10)
                error = 1; notfilled = 1;
            end
            if (v_10 > 1) || (v_10 < -1)
                error = 1; td_err = 1;
            end
            v_str = get(handles.edit44, 'String');
            v_11 = str2double(v_str);
            if isnan(v_11)
                error = 1; notfilled = 1;
            end
            if (v_11 > 1) || (v_11 < -1)
                error = 1;


                td_err = 1;
            end
        else
            v_1 = 1; v_2 = 0; v_3 = 0;
            v_5 = 0; v_6 = 1; v_7 = 0;
            v_9 = 0; v_10 = 0; v_11 = 1;
        end
        v_str = get(handles.edit16, 'String');
        v_4 = str2double(v_str);
        if isnan(v_4)
            error = 1; notfilled = 1;
        end
        v_str = get(handles.edit15, 'String');
        v_8 = str2double(v_str);
        if isnan(v_8)
            error = 1; notfilled = 1;
        end
        v_str = get(handles.edit17, 'String');
        v_12 = str2double(v_str);
        if isnan(v_12)
            error = 1; notfilled = 1;
        end
        Td = [v_1 v_2 v_3 v_4; v_5 v_6 v_7 v_8; v_9 v_10 v_11 v_12; 0 0 0 1];
        temp_str = get(handles.edit45, 'String');
        v = str2double(temp_str);
        if isnan(v)
            error = 1; notfilled = 1;
        end
        trajf = get(handles.popupmenu20, 'Value');
    elseif cont == 2
        v_str = get(handles.edit62, 'String');
        v_1 = str2double(v_str);
        if isnan(v_1)
            error = 1; notfilled = 1;
        end
        v_str = get(handles.edit63, 'String');
        v_2 = str2double(v_str);
        if isnan(v_2)
            error = 1; notfilled = 1;
        end
        v_str = get(handles.edit64, 'String');
        v_3 = str2double(v_str);
        if isnan(v_3)
            error = 1; notfilled = 1;
        end
        if (cart == 1)
            v_str = get(handles.edit65, 'String');
            v_4 = str2double(v_str);
            if isnan(v_4)
                error = 1; notfilled = 1;
            end


            v_str = get(handles.edit66, 'String');
            v_5 = str2double(v_str);
            if isnan(v_5)
                error = 1; notfilled = 1;
            end
            v_str = get(handles.edit67, 'String');
            v_6 = str2double(v_str);
            if isnan(v_6)
                error = 1; notfilled = 1;
            end
        end
        if (cart == 1)
            Vd = [v_1; v_2; v_3; v_4; v_5; v_6];
        else
            Vd = [v_1; v_2; v_3];
        end
        temp_str = get(handles.edit4, 'String');
        ts = str2double(temp_str);
        if isnan(ts)
            error = 1; notfilled = 1;
        end
        if ts < 0
            error = 1; ts_err = 1;
        end
    else
        temp_str = get(handles.edit1, 'String');
        v = str2double(temp_str);
        if isnan(v)
            error = 1; notfilled = 1;
        end
        if cont == 4
            temp_str = get(handles.edit69, 'String');
            port1 = str2double(temp_str);
            if isnan(port1)
                error = 1; notfilled = 1;
            end
        end
    end
end
if stop == 0
    drawnow;
    if VAR_STOP == 1
        stop = 1;
    else
        stop = 0;
    end
    if stop == 0
        start = get(handles.popupmenu15, 'Value');
        if start == 1
            if vr == 1 || ml == 1 || arm == 1
                if (get(handles.popupmenu16, 'Value') == 1)
                    ini = 1;
                else
                    ini = 0;
                end
            else
                ini = 0;


            end
            qi = [90;90;0;90;90;90;0]*pi/180;
            VAR_QI = qi;
            WCi = [0;0;0];
            VAR_WCI = WCi;
        else
            v_str = get(handles.edit49, 'String');
            v_1 = str2double(v_str);
            if isnan(v_1)
                error = 1; notfilled = 1;
            end
            v_str = get(handles.edit50, 'String');
            v_2 = str2double(v_str);
            if isnan(v_2)
                error = 1; notfilled = 1;
            end
            v_str = get(handles.edit51, 'String');
            v_3 = str2double(v_str);
            if isnan(v_3)
                error = 1; notfilled = 1;
            end
            v_str = get(handles.edit52, 'String');
            v_4 = str2double(v_str);
            if isnan(v_4)
                error = 1; notfilled = 1;
            end
            v_str = get(handles.edit53, 'String');
            v_5 = str2double(v_str);
            if isnan(v_5)
                error = 1; notfilled = 1;
            end
            v_str = get(handles.edit54, 'String');
            v_6 = str2double(v_str);
            if isnan(v_6)
                error = 1; notfilled = 1;
            end
            v_str = get(handles.edit55, 'String');
            v_7 = str2double(v_str);
            if isnan(v_7)
                error = 1; notfilled = 1;
            end
            qi = [v_1; v_2; v_3; v_4; v_5; v_6; v_7];
            VAR_QI = qi;
            v_str = get(handles.edit56, 'String');
            v_1 = str2double(v_str);
            if isnan(v_1)
                error = 1; notfilled = 1;
            end
            v_str = get(handles.edit57, 'String');
            v_2 = str2double(v_str);
            if isnan(v_2)
                error = 1; notfilled = 1;
            end
            v_str = get(handles.edit58, 'String');
            v_3 = str2double(v_str);
            if isnan(v_3)


                error = 1; notfilled = 1;
            end
            WCi = [v_1; v_2; v_3];
            VAR_WCI = WCi;
            ini = 0;
        end
        choice6 = get(handles.popupmenu11, 'Value');
        if (choice6 == 1)
            choice7 = get(handles.popupmenu14, 'Value');
        end
    end
end
if (error == 1)
    if notfilled == 1
        WMRA_error_gui('One or more required inputs are not filled or filled wrongly');
    end
    if td_err == 1
        WMRA_error_gui('Elements of Rd (Rotation Matrix) should be in between -1 to +1');
    end
    if ts_err == 1
        WMRA_error_gui('Ts should be greater than zero');
    end
end
if cont == 5
    if (VAR_SCREENOPN ~= 1)
        WMRA_screen('1');
        drawnow;
    end
end
% Declaring a global variable for Optimization in WMRA_Opt():
global dHo
if (stop == 0) && (error == 0)
    % Redwan's Code Entry
    drawnow;
    if VAR_STOP == 1
        stop = 1;
    else
        stop = 0;
    end
    if stop == 0
        % 1st point
        % This "new USF WMRA" script SIMULATES the WMRA system with ANIMATION and plots for 9 DOF. All angles are in Radians.
        % Defining used parameters:
        d2r = pi/180;   % Conversion from Degrees to Radians.
        r2d = 180/pi;   % Conversion from Radians to Degrees.
        % Reading the Wheelchair's constant dimensions; all dimensions are converted to millimeters:
        L = WMRA_WCD;
        % Calculating the Transformation Matrix of the initial position of the WMRA's base:
        Tiwc = WMRA_p2T(WCi(1), WCi(2), WCi(3));
        % Calculating the initial Wheelchair Variables:
        qiwc = [sqrt(WCi(1)^2 + WCi(2)^2); WCi(3)];
        % Calculating the initial transformation matrices:


        [Ti, Tia, Tiwc, T01, T12, T23, T34, T45, T56, T67] = WMRA_Tall(1, qi, [0;0], Tiwc);
        if cont == 1
            % Calculating the linear distance from the initial position to the desired position and the linear velocity:
            if coord == 2
                D = sqrt((Td(1,4)-Tia(1,4))^2 + (Td(2,4)-Tia(2,4))^2 + (Td(3,4)-Tia(3,4))^2);
            elseif coord == 3
                D = sqrt((Td(1,4))^2 + (Td(2,4))^2 + (Td(3,4))^2);
            else
                D = sqrt((Td(1,4)-Ti(1,4))^2 + (Td(2,4)-Ti(2,4))^2 + (Td(3,4)-Ti(3,4))^2);
            end
            % Calculating the number of iterations and the time increment (delta t) if the linear step increment of the tip is 1 mm:
            dt = 0.05;                 % Time increment in seconds.
            total_time = D/v;          % Total time of animation.
            n = round(total_time/dt);  % Number of iterations rounded up.
            dt = total_time/n;         % Adjusted time increment in seconds.
            % Calculating the Trajectory of the end effector; once the trajectory is calculated, "Td" is redefined based on the ground frame:
            if coord == 2
                Tt = WMRA_traj(trajf, Tia, Td, n+1);
                Td = Tiwc*Td;
            elseif coord == 3
                Tt = WMRA_traj(trajf, eye(4), Td, n+1);
                Td = Ti*Td;
            else
                Tt = WMRA_traj(trajf, Ti, Td, n+1);
            end
        elseif cont == 2
            % Calculating the number of iterations and the time increment (delta t) if the linear step increment of the gripper is 1 mm:
            dt = 0.05;                 % Time increment in seconds.
            total_time = ts;           % Total time of animation.
            n = round(total_time/dt);  % Number of iterations rounded up.
            dt = total_time/n;         % Adjusted time increment in seconds.
            dx = Vd*dt;
            Td = Ti;
        elseif cont == 3
            dt = 0.05;
            dx = v*dt*[spdata1(3)/20; -spdata1(1)/40; spdata1(2)/30; spdata1(6)/1500; -spdata1(4)/900; spdata1(5)/1300];
            dg = spdata1(7);
            Td = Ti;
            n = 1;
        elseif cont == 4
            dt = 0.05;
            dx = v*dt*WMRA_psy(port1);
            dg = dx(7);
            dx = dx(1:6);
            Td = Ti;
            n = 1;
        else
            dt = 0.05;
            dx = v*dt*VAR_DX(1:6);
            dg = VAR_DX(7);
            Td = Ti;
            n = 1;
        end
        drawnow;
        if VAR_STOP == 1
            stop = 1;
        else
            stop = 0;


        end
        if stop == 0
            % 2nd point
            % Initializing the joint angles, the Transformation Matrix, and time:
            dq = zeros(9,1);
            dg = 0;
            qo = [qi; qiwc];
            To = Ti;
            Toa = Tia;
            Towc = Tiwc;
            tt = 0;
            i = 1;
            dHo = [0;0;0;0;0;0;0];
            % Initializing the WMRA:
            if ini == 0   % When no "park" to "ready" motion is required.
                % Initializing Virtual Reality Animation:
                if vr == 1
                    WMRA_VR_Animation(1, Towc, qo);
                end
                % Initializing Robot Animation in Matlab Graphics:
                if ml == 1
                    WMRA_ML_Animation(1, To, Td, Towc, T01, T12, T23, T34, T45, T56, T67);
                end
                % Initializing the Physical Arm:
                if arm == 1
                    WMRA_ARM_Motion(1, 2, [qo; dg], 0);
                    ddt = 0;
                end
            elseif ini == 1 && (vr == 1 || ml == 1 || arm == 1)   % When "park" to "ready" motion is required.
                WMRA_park2ready(1, vr, ml, arm, Towc, qo(8:9));
                if arm == 1
                    ddt = 0;
                end
            end
            % Re-Drawing the Animation:
            if vr == 1 || ml == 1
                drawnow;
            end
            % Starting a timer:
            tic
            drawnow;
            if VAR_STOP == 1
                stop = 1;
            else
                stop = 0;
            end
            % Starting the Iteration Loop:
            while (i <= (n+1)) && (stop == 0)
                % 3rd point
                % Calculating the 6x7 Jacobian of the arm in frame 0:
                [Joa, detJoa] = WMRA_J07(T01, T12, T23, T34, T45, T56, T67);
                % Calculating the 6x2 Jacobian based on the WMRA's base in the ground frame:
                phi = atan2(Towc(2,1), Towc(1,1));
                Jowc = WMRA_Jga(1, phi, Toa(1:2,4));
                % Changing the Jacobian reference frame based on the choice of which coordinate frame is referenced in the Cartesian control:


                % coord=1 for Ground Coordinates Control.
                % coord=2 for Wheelchair Coordinates Control.
                % coord=3 for Gripper Coordinates Control.
                if coord == 2
                    Jowc = [Towc(1:3,1:3)' zeros(3); zeros(3) Towc(1:3,1:3)']*Jowc;
                elseif coord == 3
                    Joa = [Toa(1:3,1:3)' zeros(3); zeros(3) Toa(1:3,1:3)']*Joa;
                    Jowc = [To(1:3,1:3)' zeros(3); zeros(3) To(1:3,1:3)']*Jowc;
                elseif coord == 1
                    Joa = [Towc(1:3,1:3) zeros(3); zeros(3) Towc(1:3,1:3)]*Joa;
                end
                % Calculating the 3x9 or 6x9 augmented Jacobian of the WMRA system based on the ground frame:
                if cart == 2
                    Joa = Joa(1:3,1:7);
                    detJoa = sqrt(det(Joa*Joa'));
                    Jowc = Jowc(1:3,1:2);
                    Jo = [Joa Jowc];
                    detJo = sqrt(det(Jo*Jo'));
                else
                    Jo = [Joa Jowc];
                    detJo = sqrt(det(Jo*Jo'));
                end
                % Finding the Cartesian errors of the end effector:
                if cont == 1
                    % Calculating the Position and Orientation errors of the end effector, and the rates of motion of the end effector:
                    if coord == 2
                        invTowc = [Towc(1:3,1:3)' -Towc(1:3,1:3)'*Towc(1:3,4); 0 0 0 1];
                        Ttnew = invTowc*Tiwc*Tt(:,:,i);
                        dx = WMRA_delta(Toa, Ttnew);
                    elseif coord == 3
                        invTo = [To(1:3,1:3)' -To(1:3,1:3)'*To(1:3,4); 0 0 0 1];
                        Ttnew = invTo*Ti*Tt(:,:,i);
                        dx = WMRA_delta(eye(4), Ttnew);
                    else
                        dx = WMRA_delta(To, Tt(:,:,i));
                    end
                elseif cont == 2
                    % dx was set before the loop and stays constant.
                elseif cont == 3
                    dx = v*dt*[spdata1(3)/20; -spdata1(1)/40; spdata1(2)/30; spdata1(6)/1500; -spdata1(4)/900; spdata1(5)/1300];
                    dg = spdata1(7);
                elseif cont == 4
                    dx = v*dt*WMRA_psy(port1);
                    dg = dx(7);
                    dx = dx(1:6);
                else
                    dx = v*dt*VAR_DX(1:6);
                    dg = VAR_DX(7);
                end
                % Changing the order of Cartesian motion when the gripper reference frame is selected for control with the screen, psy, or SpaceBall interfaces:
                if coord == 3 && cont >= 3
                    dx = [-dx(2); -dx(3); dx(1); -dx(5); -dx(6); dx(4)];
                end
                if cart == 2
                    dx = dx(1:3);
                end


                % Calculating the resolved rate with optimization:
                % Index input values for "optim": 1 = SR-I & WLN, 2 = P-I & WLN, 3 = SR-I & ENE, 4 = P-I & ENE:
                if WCA == 2
                    dq = WMRA_Opt(optim, JLA, JLO, Joa, detJoa, dq(1:7), dx, dt, qo);
                    dq = [dq; 0; 0];
                elseif WCA == 3
                    dq = WMRA_Opt(optim, JLA, JLO, Jowc, 1, dq(8:9), dx(1:2), dt, 1);
                    dq = [0;0;0;0;0;0;0;dq];
                else
                    dq = WMRA_Opt(optim, JLA, JLO, Jo, detJo, dq, dx, dt, qo);
                end
                drawnow;
                if VAR_STOP == 1
                    stop = 1;
                else
                    stop = 0;
                end
                if stop == 0
                    % 4th point
                    % Calculating the new Joint Angles:
                    qn = qo + dq;
                    % Calculating the new Transformation Matrices:
                    [Tn, Tna, Tnwc, T01, T12, T23, T34, T45, T56, T67] = WMRA_Tall(2, qn, dq(8:9), Towc);
                    % A safety condition function to stop the joints that may cause collision of the arm with itself, the wheelchair, or the human user:
                    if JLO == 1 && WCA ~= 3
                        dq(1:7) = WMRA_collide(dq(1:7), T01, T12, T23, T34, T45, T56, T67);
                        % Re-calculating the new Joint Angles:
                        qn = qo + dq;
                        % Re-calculating the new Transformation Matrices:
                        [Tn, Tna, Tnwc, T01, T12, T23, T34, T45, T56, T67] = WMRA_Tall(2, qn, dq(8:9), Towc);
                    end
                    % Saving the plot data in case plots are required:
                    if plt == 2
                        WMRA_Plots(1, L, r2d, dt, i, tt, qn, dq, Tn, Tnwc, detJoa, detJo);
                    end
                    % Updating the Physical Arm:
                    if arm == 1
                        ddt = ddt + dt;
                        if ddt >= 0.5 || i >= (n+1)
                            WMRA_ARM_Motion(2, 1, [qn; dg], ddt);
                            ddt = 0;
                        end
                    end
                    % Updating the Virtual Reality Animation:
                    if vr == 1
                        WMRA_VR_Animation(2, Tnwc, qn);
                    end
                    % Updating the Matlab Graphics Animation:
                    if ml == 1
                        WMRA_ML_Animation(2, Ti, Td, Tnwc, T01, T12, T23, T34, T45, T56, T67);
                    end


                    % Re-Drawing the Animation:
                    if vr == 1 || ml == 1
                        drawnow;
                    end
                    % Updating the old values with the new values for the next iteration:
                    qo = qn;
                    To = Tn;
                    Toa = Tna;
                    Towc = Tnwc;
                    tt = tt + dt;
                    i = i + 1;
                    % Stopping the simulation when the exit button is pressed:
                    if cont == 3 || cont == 4
                        if (VAR_LOOP == 1)
                            n = n + 1;
                        else
                            break
                        end
                    end
                    if cont == 5
                        if (VAR_SCREENOPN == 1) && (VAR_LOOP == 1)
                            n = n + 1;
                        else
                            break
                        end
                    end
                    % Delay to comply with the required speed:
                    if toc < tt
                        pause(tt - toc);
                    end
                end
                drawnow;
                % 5th point
                if VAR_STOP == 1
                    stop = 1;
                else
                    stop = 0;
                end
            end
            drawnow;
            % 6th point
            if VAR_STOP == 1
                stop = 1;
            else
                stop = 0;
            end
            if stop == 0
                % 7th point
                % Reading the elapsed time and printing it with the simulation time:
                if cont == 1 || cont == 2
                    fprintf('\nSimulation time is %6.6f seconds.\n', total_time);
                end
                toc
                % Plotting:
                if plt == 2
                    WMRA_Plots(2, L, r2d, dt, i, tt, qn, dq, Tn, Tnwc, detJoa, detJo);
                end
                if vr == 1 || ml == 1 || arm == 1
                    % Going back to the ready position:
                    if choice6 == 1
                        WMRA_any2ready(2, vr, ml, arm, Tnwc, qn);


                        % Going back to the parking position:
                        if choice7 == 1
                            WMRA_ready2park(2, vr, ml, arm, Tnwc, qn(8:9));
                        end
                    end
                    % Closing the Arm library, the Matlab Graphics and Virtual Reality Animation windows, and the Plot windows:
                    if choice8 == 1
                        if arm == 1
                            WMRA_ARM_Motion(3, 0, 0, 0);
                        end
                        if vr == 1
                            WMRA_VR_Animation(3, 0, 0);
                        end
                        if ml == 1
                            WMRA_ML_Animation(3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
                        end
                        if plt == 2
                            close(figure(2), figure(3), figure(4), figure(5), figure(6), figure(7), figure(8), figure(9), figure(10));
                        end
                    end
                end
            end
        end
    end
end
VAR_QI = qn(1:7);
VAR_WCI = [Tnwc(1,4); Tnwc(2,4); phi];
if (get(handles.popupmenu15, 'Value') == 2)
    set(handles.edit49, 'String', VAR_QI(1,1));
    set(handles.edit50, 'String', VAR_QI(2,1));
    set(handles.edit51, 'String', VAR_QI(3,1));
    set(handles.edit52, 'String', VAR_QI(4,1));
    set(handles.edit53, 'String', VAR_QI(5,1));
    set(handles.edit54, 'String', VAR_QI(6,1));
    set(handles.edit55, 'String', VAR_QI(7,1));
    set(handles.edit56, 'String', VAR_WCI(1,1));
    set(handles.edit57, 'String', VAR_WCI(2,1));
    set(handles.edit58, 'String', VAR_WCI(3,1));
end
set(handles.pushbutton1, 'Enable', 'off');
set(handles.pushbutton7, 'Enable', 'off');
set(handles.pushbutton2, 'Enable', 'on');
set(handles.pushbutton8, 'Enable', 'on');

% --- Executes on button press in pushbutton7.
function pushbutton7_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton7 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
global VAR_LOOP
VAR_LOOP = 0;
drawnow;
set(handles.pushbutton1, 'Enable', 'off');
set(handles.pushbutton7, 'Enable', 'off');


set(handles.pushbutton2, 'Enable', 'on');
set(handles.pushbutton8, 'Enable', 'on');

% --- Executes on button press in checkbox1.
function checkbox1_Callback(hObject, eventdata, handles)
% hObject    handle to checkbox1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% Hint: get(hObject,'Value') returns toggle state of checkbox1

% --- Executes on button press in checkbox2.
function checkbox2_Callback(hObject, eventdata, handles)
% hObject    handle to checkbox2 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% Hint: get(hObject,'Value') returns toggle state of checkbox2

% --- Executes on button press in pushbutton8.
function pushbutton8_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton8 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
%%%%%%%%%%%%%%%%%% 1st column
set(handles.popupmenu1, 'Enable', 'on');
set(handles.popupmenu1, 'Value', 1);
set(handles.popupmenu3, 'Enable', 'on');
set(handles.popupmenu3, 'Value', 1);
set(handles.popupmenu21, 'Enable', 'on');
set(handles.popupmenu21, 'Value', 1);
set(handles.popupmenu4, 'Enable', 'on');
set(handles.popupmenu4, 'Value', 1);
set(handles.popupmenu6, 'Enable', 'on');
set(handles.popupmenu6, 'Value', 1);
set(handles.popupmenu7, 'Enable', 'on');
set(handles.popupmenu7, 'Value', 1);
set(handles.popupmenu8, 'Enable', 'on');
set(handles.popupmenu8, 'Value', 1);
set(handles.checkbox1, 'Enable', 'on');
set(handles.checkbox1, 'Value', 1);
set(handles.checkbox2, 'Enable', 'on');
set(handles.checkbox2, 'Value', 1);
set(handles.popupmenu10, 'Enable', 'on');
set(handles.popupmenu10, 'Value', 1);
%%%%%%%%%%%%%%%%%% 2nd column
set(handles.popupmenu17, 'Enable', 'on');
set(handles.popupmenu17, 'Value', 1);
set(handles.edit36, 'String', 0);
set(handles.edit37, 'String', 0);
set(handles.edit38, 'String', 1);
set(handles.edit39, 'String', -1);
set(handles.edit40, 'String', 0);


set(handles.edit41, 'String', 0);
set(handles.edit42, 'String', 0);
set(handles.edit43, 'String', -1);
set(handles.edit44, 'String', 0);
set(handles.edit36, 'Enable', 'on');
set(handles.edit37, 'Enable', 'on');
set(handles.edit38, 'Enable', 'on');
set(handles.edit39, 'Enable', 'on');
set(handles.edit40, 'Enable', 'on');
set(handles.edit41, 'Enable', 'on');
set(handles.edit42, 'Enable', 'on');
set(handles.edit43, 'Enable', 'on');
set(handles.edit44, 'Enable', 'on');
set(handles.edit15, 'String', 369);
set(handles.edit16, 'String', 1455);
set(handles.edit17, 'String', 999);
set(handles.edit15, 'Enable', 'on');
set(handles.edit16, 'Enable', 'on');
set(handles.edit17, 'Enable', 'on');
set(handles.text15, 'Enable', 'on');
set(handles.text16, 'Enable', 'on');
set(handles.text17, 'Enable', 'on');
set(handles.text18, 'Enable', 'on');
set(handles.edit45, 'String', 100);
set(handles.edit45, 'Enable', 'on');
set(handles.text14, 'Enable', 'on');
set(handles.text13, 'Enable', 'on');
set(handles.pushbutton3, 'Enable', 'on');
set(handles.popupmenu20, 'Enable', 'on');
set(handles.popupmenu20, 'Value', 1);
%%%%%%%%%%%%%%%%%%%%%%%
set(handles.edit62, 'String', 70);
set(handles.edit63, 'String', 70);
set(handles.edit64, 'String', -70);
set(handles.edit62, 'Enable', 'off');
set(handles.edit63, 'Enable', 'off');
set(handles.edit64, 'Enable', 'off');
set(handles.text21, 'Enable', 'off');
set(handles.edit65, 'String', 0.001);
set(handles.edit66, 'String', 0.001);
set(handles.edit67, 'String', 0.001);
set(handles.edit65, 'Enable', 'off');
set(handles.edit66, 'Enable', 'off');
set(handles.edit67, 'Enable', 'off');
set(handles.text22, 'Enable', 'off');
set(handles.edit4, 'String', 2);
set(handles.edit4, 'Enable', 'off');
set(handles.text6, 'Enable', 'off');
set(handles.text5, 'Enable', 'off');
set(handles.pushbutton4, 'Enable', 'off');
%%%%%%%%%%%%%%%%%%%%%%%
set(handles.edit1, 'String', 50);
set(handles.edit1, 'Enable', 'off');


set(handles.text1, 'Enable', 'off');
set(handles.text2, 'Enable', 'off');
set(handles.edit69, 'String', 19711);
set(handles.edit69, 'Enable', 'off');
set(handles.text23, 'Enable', 'off');
set(handles.pushbutton7, 'Enable', 'off');
%%%%%%%%%%%%%%%%%% 3rd column
set(handles.popupmenu15, 'Enable', 'on');
set(handles.popupmenu15, 'Value', 1);
set(handles.popupmenu16, 'Enable', 'on');
set(handles.popupmenu16, 'Value', 1);
set(handles.edit49, 'String', 1.5708);
set(handles.edit50, 'String', 1.5708);
set(handles.edit51, 'String', 0);
set(handles.edit52, 'String', 1.5708);
set(handles.edit53, 'String', 1.5708);
set(handles.edit54, 'String', 1.5708);
set(handles.edit55, 'String', 0);
set(handles.edit49, 'Enable', 'off');
set(handles.edit50, 'Enable', 'off');
set(handles.edit51, 'Enable', 'off');
set(handles.edit52, 'Enable', 'off');
set(handles.edit53, 'Enable', 'off');
set(handles.edit54, 'Enable', 'off');
set(handles.edit55, 'Enable', 'off');
set(handles.pushbutton5, 'Enable', 'off');
set(handles.pushbutton6, 'Enable', 'off');
set(handles.edit56, 'String', 0);
set(handles.edit57, 'String', 0);
set(handles.edit58, 'String', 0);
set(handles.edit56, 'Enable', 'off');
set(handles.edit57, 'Enable', 'off');
set(handles.edit58, 'Enable', 'off');
set(handles.popupmenu11, 'Enable', 'on');
set(handles.popupmenu11, 'Value', 1);
set(handles.popupmenu14, 'Enable', 'on');
set(handles.popupmenu14, 'Value', 1);
set(handles.pushbutton2, 'Enable', 'on');
set(handles.pushbutton1, 'Enable', 'off');
%%%%%%%%%%%%%%%%%%%%%%%


Appendix C. C++ Programs and DLL Library

C.1. DLL Library Functions


Appendix C. (Continued)


C.2. DLL Library Documentation

controlMotor.dll is a Windows DLL that includes functions for communicating with PIC-SERVO (v.4, v.5, and v.10) modules. It can be used with almost all Windows programming languages. The DLL was created using Microsoft Visual Studio 2003, and its source code is included with the DLL. This portion is being developed by Mayur Palankar.

Initialization/Closing

1) Initialize PIC-SERVO Modules

Opens the COM port and initializes the PIC-SERVO modules.

Command: init
Syntax: int init(int CommPort, long BaudRate, int fileOpen);

Return Value:
Negative number or zero: Error or failure (corresponds to a unique error number), or zero if no modules were found.
Positive number: Success. The number corresponds to the number of modules found in the system.

Arguments:
CommPort: COM port number (1 to 8)
BaudRate: Communication rate (19200, 57600, 115200, or 230400)
fileOpen: Set the PIC-SERVO modules to the last stored configuration (0 or 1)

Description:
Opens the COM port specified by CommPort at the specified BaudRate and initializes the network of motor controllers. Controller addresses are assigned dynamically, starting with the furthest controller at address 1. All group addresses are set to 0xFF. Returns the number of controllers found on the network. The PIC-SERVO modules are initialized to the last saved configuration if the fileOpen argument is passed a positive number; otherwise, they are initialized to zero. Note that this does not mean the arm will move; it means each module is assumed to be at the initial configuration that was saved previously.

CommPort and BaudRate: These arguments specify which communication port to use and the rate at which to communicate over it.


fileOpen: If a positive number is passed, the old configuration last stored (when close() was executed) will be restored. This is done using a file 'configuration.txt' which is local to this DLL. If the file is not found, the initial positions are used. Any changes made manually to the file will also be reflected if this option is chosen.
Examples:
init(4, 19200, 0)
This command will try to initialize the PIC-SERVO modules at COM port 4 with a baud rate of 19200, and they will be initialized at their start positions (fileOpen = 0).

2) Close Communication
Saves the current configuration, resets all the buffers and gracefully closes the COM port.
Command: close
Syntax: int close ();
Return Value:
Negative number or zero: Error or failure (corresponds to a unique error number)
Positive number: Success
Description:
This command saves the current configuration in the 'configuration.txt' file which is local to the DLL. This file is used to reopen the PIC-SERVO modules in the previous configuration. It then resets all the PIC-SERVO modules and shuts them down.
NOTE: No function of this DLL can be executed until the init function has been executed. After that, all functions (except init) can be executed. A new init can only be executed after the old connection has been closed.
CAUTION: The init command creates a thread for internal usage for each of the PIC-SERVO modules found. For a graceful shutdown, close() should be executed. If the parent process using the DLL does not execute close() before exiting, the lock on the PIC-SERVO modules still exists, causing the system to behave unpredictably. This responsibility rests solely on the programmer using this DLL.
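The documented return-value convention (a positive module count on success, zero or a negative error code otherwise) suggests the session pattern sketched below. This is only an illustration: the stub functions stand in for the real controlMotor.dll exports, which require the actual hardware and an open COM port.

```cpp
#include <cstdio>

// Hypothetical stand-ins for the controlMotor.dll entry points.
// They mimic only the documented return-value convention.
static int stub_init(int commPort, long baudRate, int fileOpen) {
    (void)fileOpen;
    // Pretend 7 modules are found on COM4 at 19200 baud, else fail.
    return (commPort == 4 && baudRate == 19200) ? 7 : -1;
}
static int stub_close() { return 1; }  // success is a positive number

// One session following the documented rules: init must succeed before
// any other call, and close() must run before the process exits so the
// configuration file is saved and the module locks are released.
int runSession(int commPort, long baudRate) {
    int modules = stub_init(commPort, baudRate, /*fileOpen=*/0);
    if (modules <= 0) {
        std::printf("init failed (code %d)\n", modules);
        return modules;
    }
    std::printf("found %d modules\n", modules);
    // ... move/status commands would go here ...
    return stub_close();  // graceful shutdown
}
```

With the real DLL, the only change would be replacing the stubs with the imported init/close functions.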


Reset/Clear Motor

1) Reset
Resets a particular PIC-SERVO module or group of modules.
Commands included: reset, resetAll, resetSelect
Syntax:
int reset (int module);
int resetAll ();
int resetSelect (int module[]);
Return Value:
Negative number or zero: Error or failure (corresponds to a unique error number)
Positive number: Success
Arguments:
module/module[]: module address (1 to 32)
Description:
This command resets a PIC-SERVO motor module to its start-up status.
module: This argument gives the PIC-SERVO module address that has to be reset. The number sent should be less than or equal to the number of modules present; otherwise it will result in an error (error code: -1). When an array of module numbers is passed, the index is used to address the module and its value decides whether that motor module will be reset or not. For example, [1 0 1 1], when passed, will reset motor modules 1, 3 and 4, while motor module 2 won't be affected. The length of the array (in this case 4) should be less than or equal to the number of modules present in the circuit.
Types:
reset: This command is used to reset one PIC-SERVO module.
resetAll: This command resets all the PIC-SERVO modules.
resetSelect: Using this command, individual PIC-SERVO modules can be reset. This command has the highest flexibility, and the others are special cases of it.
Examples:
reset(1)
Resets the PIC-SERVO module 1 to its start-up state.
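All of the select-style commands (resetSelect, clearSelect, enableSelect, and so on) share the mask convention described above: the array index addresses the module and a nonzero value opts it in. A small sketch of that selection logic, written for illustration only (it is not the DLL's internal code):

```cpp
#include <vector>
#include <cstddef>

// Apply the documented select-mask semantics: mask[i] != 0 means module
// i+1 is acted on. Returns the 1-based module numbers selected. A mask
// longer than the modules present is the documented error case (the DLL
// reports -1); here it yields an empty selection.
std::vector<int> selectModules(const std::vector<int>& mask, int modulesPresent) {
    std::vector<int> selected;
    if (static_cast<int>(mask.size()) > modulesPresent)
        return selected;  // error: mask longer than the network
    for (std::size_t i = 0; i < mask.size(); ++i)
        if (mask[i] != 0)
            selected.push_back(static_cast<int>(i) + 1);
    return selected;
}
```

For the documentation's own example, a mask of [1 0 1 1] on a four-module network selects modules 1, 3 and 4.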


2) Clear
Clears a particular PIC-SERVO module or group of modules.
Commands included: clear, clearAll, clearSelect
Syntax:
int clear (int module);
int clearAll ();
int clearSelect (int module[]);
Return Value:
Negative number or zero: Error or failure (corresponds to a unique error number)
Positive number: Success
Arguments:
module/module[]: module address (1 to 32)
Description:
This command clears a PIC-SERVO motor module's status bits.
module: This argument gives the PIC-SERVO module address whose status bits have to be cleared. The number sent should be less than or equal to the number of modules present; otherwise it will result in an error (error code: -1). When an array of module numbers is passed, the index is used to address the module and its value decides whether that motor module will be cleared or not. For example, [1 0 1 1], when passed, will clear motor modules 1, 3 and 4, while motor module 2 won't be affected. The length of the array (in this case 4) should be less than or equal to the number of modules present in the circuit.
Types:
clear: This command is used to clear one PIC-SERVO module.
clearAll: This command clears all the PIC-SERVO modules.
clearSelect: Using this command, individual PIC-SERVO modules can be cleared. This command has the highest flexibility, and the others are special cases of it.
Examples:
clear(1)
Clears the PIC-SERVO module 1's status bits.


Enable/Disable Motors

1) Enable
Enables a particular PIC-SERVO module or group of modules.
Commands included: enable, enableAll, enableSelect
Syntax:
int enable (int module);
int enableAll ();
int enableSelect (int module[]);
Return Value:
Negative number or zero: Error or failure (corresponds to a unique error number)
Positive number: Success
Arguments:
module/module[]: module address (1 to 32)
Description:
This command enables a PIC-SERVO motor module. No move command can be executed while the motor modules are disabled.
module: This argument gives the PIC-SERVO module address that has to be enabled. The number sent should be less than or equal to the number of modules present; otherwise it will result in an error (error code: -1). When an array of module numbers is passed, the index is used to address the module and its value decides whether that motor module will be enabled or not. For example, [1 0 1 1], when passed, will enable motor modules 1, 3 and 4, while motor module 2 won't be affected. The length of the array (in this case 4) should be less than or equal to the number of modules present in the circuit.
Types:
enable: This command is used to enable one PIC-SERVO module.
enableAll: This command enables all the PIC-SERVO modules.
enableSelect: Using this command, individual PIC-SERVO modules can be enabled. This command has the highest flexibility, and the others are special cases of it.
Examples:
enable(1)
Enables the PIC-SERVO module 1.


2) Off
Disables a particular PIC-SERVO module or group of modules.
Commands included: off, offAll, offSelect
Syntax:
int off (int module);
int offAll ();
int offSelect (int module[]);
Return Value:
Negative number or zero: Error or failure (corresponds to a unique error number)
Positive number: Success
Arguments:
module/module[]: module address (1 to 32)
Description:
This command disables a PIC-SERVO motor module. Move commands can no longer be executed on a disabled module.
module: This argument gives the PIC-SERVO module address that has to be disabled. The number sent should be less than or equal to the number of modules present; otherwise it will result in an error (error code: -1). When an array of module numbers is passed, the index is used to address the module and its value decides whether that motor module will be disabled or not. For example, [1 0 1 1], when passed, will disable motor modules 1, 3 and 4, while motor module 2 won't be affected. The length of the array (in this case 4) should be less than or equal to the number of modules present in the circuit.
Types:
off: This command is used to disable one PIC-SERVO module.
offAll: This command disables all the PIC-SERVO modules.
offSelect: Using this command, individual PIC-SERVO modules can be disabled. This command has the highest flexibility, and the others are special cases of it.
Examples:
off(1)
Disables the PIC-SERVO module 1.


Move Commands

1) Position Control
Loads a motion trajectory to move to a certain position.
Commands included: pos, posAll, posSelect, posSelectAll
Syntax:
int pos (int module, unsigned long pos, unsigned long vel, unsigned long acc, int rev);
int posAll (unsigned long pos, unsigned long vel, unsigned long acc, int rev);
int posSelect (int module[], unsigned long pos[], unsigned long vel[], unsigned long acc[], int rev[]);
int posSelectAll (int module[], unsigned long pos, unsigned long vel, unsigned long acc, int rev);
Return Value:
Negative number or zero: Error or failure (corresponds to a unique error number)
Positive number: Success
Arguments:
module/module[]: module address (1 to 32)
pos / pos[]: positive 32-bit position data (0 to +2,147,483,647)
vel / vel[]: positive 32-bit velocity data (0 to +83,886,080)
acc / acc[]: positive 32-bit acceleration data (0 to +2,147,483,647)
rev / rev[]: reverse motion (0 or 1)
Description:
This command sends the position, velocity and acceleration data needed for a particular motion to the appropriate PIC-SERVO motor module.
module: This argument gives the PIC-SERVO module address where the corresponding trajectory information has to be sent. The number sent should be less than or equal to the number of modules present; otherwise it will result in an error (error code: -1). When an array of module numbers is passed, the index is used to address the module and its value decides whether the trajectory is loaded or not. For example, [1 0 1 1], when passed, will load the corresponding trajectory information to modules 1, 3 and 4, while module 2 won't be affected. The length of the array (in this case 4) should be less than or equal to the number of modules present in the circuit. Array elements beyond the number of modules present will be discarded. Care should be taken that the entries of the module array match the corresponding pos, vel and acc entries; this is left solely to the user.
pos, vel and acc: Data for the motion trajectory.
rev: This argument determines the direction in which the motor moves. When its value is 1 or a positive number, the direction will be taken as reverse and the sign of the position field will be reversed. When 0, the direction will be taken as forward.
Types:
pos: This command is used to control one PIC-SERVO module.
posAll: This command controls all the PIC-SERVO modules and moves all of them to the same position with the same velocity and acceleration in the same direction.
posSelect: Using this command, individual PIC-SERVO modules can be controlled to move to their corresponding positions with their own corresponding velocities, accelerations and directions. This command has the highest flexibility, and the others are special cases of it.
posSelectAll: Similar to the above, but the trajectory for all selected modules will be the same.
Examples:
pos(1, 100000, 10000, 100, 1)
Moves the PIC-SERVO module 1 to position 100000 with velocity 10000 and acceleration 100. The direction is reverse.
posSelect([1 0 0 1], [200000 100 100000 100000], [10000 0 0 300000], [100 0 100 300], [0 1 1 1])
Moves the PIC-SERVO module 1 to position 200000 in the forward direction (velocity 10000 and acceleration 100) and moves the PIC-SERVO module 4 to position 100000 in the reverse direction (velocity 300000 and acceleration 300). The PIC-SERVO modules 2 and 3 won't be affected, and their entries will be discarded.

2) Velocity Control
Loads a motion trajectory to move with a certain velocity.
Commands included: vel, velAll, velSelect, velSelectAll


Syntax:
int vel (int module, unsigned long vel, unsigned long acc, int rev);
int velAll (unsigned long vel, unsigned long acc, int rev);
int velSelect (int module[], unsigned long vel[], unsigned long acc[], int rev[]);
int velSelectAll (int module[], unsigned long vel, unsigned long acc, int rev);
Return Value:
Negative number or zero: Error or failure (corresponds to a unique error number)
Positive number: Success
Arguments:
module/module[]: module address (1 to 32)
vel / vel[]: positive 32-bit velocity data (0 to +83,886,080)
acc / acc[]: positive 32-bit acceleration data (0 to +2,147,483,647)
rev / rev[]: reverse motion (0 or 1)
Description:
This command sends the velocity and acceleration data needed for a particular motion to the appropriate PIC-SERVO motor module.
module: This argument gives the PIC-SERVO module address where the corresponding trajectory information has to be sent. The number sent should be less than or equal to the number of modules present; otherwise it will result in an error (error code: -1). When an array of module numbers is passed, the index is used to address the module and its value decides whether the trajectory is loaded or not. For example, [1 0 1 1], when passed, will load the corresponding trajectory information to modules 1, 3 and 4, while module 2 won't be affected. The length of the array (in this case 4) should be less than or equal to the number of modules present in the circuit. Array elements beyond the number of modules present will be discarded. Care should be taken that the entries of the module array match the corresponding vel and acc entries; this is left solely to the user.
vel and acc: Data for the motion trajectory.
rev: This argument determines the direction in which the motor moves. When its value is 1 or a positive number, the direction will be taken as reverse. When 0, the direction will be taken as forward.
Types:
vel: This command is used to control one PIC-SERVO module.


velAll: This command controls all the PIC-SERVO modules and moves all of them with the same velocity and acceleration in the same direction.
velSelect: Using this command, individual PIC-SERVO modules can be controlled to move with their own corresponding velocities, accelerations and directions. This command has the highest flexibility, and the others are special cases of it.
velSelectAll: Similar to the above, but the trajectory for all selected modules will be the same.
Examples:
vel(1, 10000, 100, 1)
Moves the PIC-SERVO module 1 with velocity 10000 and acceleration 100. The direction is reverse.
velSelect([1 0 0 1], [10000 1000 30 300000], [100 1 100 300], [0 1 1 1])
Moves the PIC-SERVO module 1 in the forward direction with velocity 10000 and acceleration 100, and moves the PIC-SERVO module 4 in the reverse direction with velocity 300000 and acceleration 300. The PIC-SERVO modules 2 and 3 won't be affected, and their corresponding entries will be discarded.

3) PWM Control
Loads a motion trajectory to move with certain PWM information.
Commands included: pwm, pwmAll, pwmSelect, pwmSelectAll
Syntax:
int pwm (int module, unsigned char pwm, int rev);
int pwmAll (unsigned char pwm, int rev);
int pwmSelect (int module[], unsigned char pwm[], int rev[]);
int pwmSelectAll (int module[], unsigned char pwm, int rev);
Return Value:
Negative number or zero: Error or failure (corresponds to a unique error number)
Positive number: Success
Arguments:
module/module[]: module address (1 to 32)
pwm / pwm[]: positive 8-bit PWM data (0 to +255)
rev / rev[]: reverse motion (0 or 1)


Description:
This command sends the PWM data needed for a particular motion to the appropriate PIC-SERVO motor module.
module: This argument gives the PIC-SERVO module address where the corresponding PWM information has to be sent. The number sent should be less than or equal to the number of modules present; otherwise it will result in an error (error code: -1). When an array of module numbers is passed, the index is used to address the module and its value decides whether the information is loaded or not. For example, [1 0 1 1], when passed, will load the corresponding information to modules 1, 3 and 4, while module 2 won't be affected. The length of the array (in this case 4) should be less than or equal to the number of modules present in the circuit. Array elements beyond the number of modules present will be discarded. Care should be taken that the entries of the module array match the corresponding pwm entries; this is left solely to the user.
pwm: Data for the motion trajectory.
rev: This argument determines the direction in which the motor moves. When its value is 1 or a positive number, the direction will be taken as reverse. When 0, the direction will be taken as forward.
Types:
pwm: This command is used to control one PIC-SERVO module.
pwmAll: This command controls all the PIC-SERVO modules and moves all of them with the same PWM information.
pwmSelect: Using this command, individual PIC-SERVO modules can be controlled to move with their own PWM information. This command has the highest flexibility, and the others are special cases of it.
pwmSelectAll: Similar to the above, but the trajectory for all selected modules will be the same.
Examples:
pwm(1, 100, 1)
Moves the PIC-SERVO module 1 with PWM 100. The direction is reverse.
pwmSelect([1 0 0 1], [100 1 100 200], [0 1 1 1])
Moves the PIC-SERVO module 1 in the forward direction with PWM 100 and moves the PIC-SERVO module 4 in the reverse direction with PWM 200. The PIC-SERVO modules 2 and 3 won't be affected, and their corresponding entries will be discarded.
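The pos, vel and pwm commands all take an unsigned magnitude plus a rev flag, with the documented rule that a rev of 1 (or any positive value) reverses the sign of the target while 0 leaves it forward. A minimal sketch of that conversion:

```cpp
// Convert an unsigned trajectory magnitude plus the documented rev flag
// into a signed target: rev >= 1 means reverse (negated sign), rev == 0
// means forward. Illustrative only; the DLL performs this internally.
long long signedTarget(unsigned long magnitude, int rev) {
    long long value = static_cast<long long>(magnitude);
    return (rev >= 1) ? -value : value;
}
```

So pos(1, 100000, 10000, 100, 1) commands module 1 toward -100000 counts, while the same call with rev = 0 targets +100000.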


Stop Commands

1) Stop
Decelerates a PIC-SERVO module to a complete stop.
Commands included: stop, stopAll, stopSelect
Syntax:
int stop (int module);
int stopAll ();
int stopSelect (int module[]);
Return Value:
Negative number or zero: Error or failure (corresponds to a unique error number)
Positive number: Success
Arguments:
module/module[]: module address (1 to 32)
Description:
This command decelerates a moving PIC-SERVO motor module to a complete stop. The deceleration will have the same magnitude as the acceleration with which the motor was moving at the time of execution. If the motor is already stopped, the command will have no effect.
module: This argument gives the PIC-SERVO module address that has to be stopped. The number sent should be less than or equal to the number of modules present; otherwise it will result in an error (error code: -1). When an array of module numbers is passed, the index is used to address the module and its value decides whether that motor module will be stopped or left to run. For example, [1 0 1 1], when passed, will stop motor modules 1, 3 and 4, while motor module 2 won't be affected. The length of the array (in this case 4) should be less than or equal to the number of modules present in the circuit.
Types:
stop: This command is used to stop one PIC-SERVO module.
stopAll: This command stops all the PIC-SERVO modules.
stopSelect: Using this command, individual PIC-SERVO modules can be stopped. This command has the highest flexibility, and the others are special cases of it.
Examples:
stop(1)
Decelerates the PIC-SERVO module 1 to a complete stop.


stopAll()
Decelerates all the PIC-SERVO modules to a complete stop.
stopSelect([1 0 1 0])
Decelerates the PIC-SERVO modules 1 and 3 to a complete stop. PIC-SERVO modules 2 and 4 won't be affected.

2) Break
Immediately stops a PIC-SERVO module.
Commands included: brk, brkAll, brkSelect
Syntax:
int brk (int module);
int brkAll ();
int brkSelect (int module[]);
Return Value:
Negative number or zero: Error or failure (corresponds to a unique error number)
Positive number: Success
Arguments:
module/module[]: module address (1 to 32)
Description:
This command immediately stops a moving PIC-SERVO motor module. If the motor is already stopped, the command will have no effect.
module: This argument gives the PIC-SERVO module address that has to be stopped. The number sent should be less than or equal to the number of modules present; otherwise it will result in an error (error code: -1). When an array of module numbers is passed, the index is used to address the module and its value decides whether that motor module will be stopped or left to run. For example, [1 0 1 1], when passed, will stop motor modules 1, 3 and 4, while motor module 2 won't be affected. The length of the array (in this case 4) should be less than or equal to the number of modules present in the circuit.
Types:
brk: This command is used to stop one PIC-SERVO module.
brkAll: This command stops all the PIC-SERVO modules.
brkSelect: Using this command, individual PIC-SERVO modules can be stopped. This command has the highest flexibility, and the others are special cases of it.


Examples:
brk(1)
Immediately stops the PIC-SERVO module 1.
brkAll()
Immediately stops all the PIC-SERVO modules.
brkSelect([1 0 1 0])
Immediately stops the PIC-SERVO modules 1 and 3. PIC-SERVO modules 2 and 4 won't be affected.


Status Commands

1) Individual Parameters per PIC-SERVO
Returns the status of an individual parameter for a particular PIC-SERVO module.
Commands included: getPos, getVel, getAd, getHm
Syntax:
long getPos (int module);
long getVel (int module);
unsigned char getAd (int module);
long getHm (int module);
Return Value:
Negative number: Error or failure (corresponds to a unique error number)
Positive number or zero: The corresponding status.
Arguments:
module/module[]: module address (1 to 32)
Description:
This command returns the specific status parameter requested for the particular PIC-SERVO module.
module: This argument gives the PIC-SERVO module address whose status has to be read. The number sent should be less than or equal to the number of modules present; otherwise it will result in an error (error code: -1).
Types:
getPos: This command returns the current position of the PIC-SERVO module.
getVel: This command returns the current velocity of the PIC-SERVO module.
getAd: This command returns the current A/D value of the PIC-SERVO module.
getHm: This command returns the current motor home position of the PIC-SERVO module.
Examples:
getPos(1)
Returns the current position of PIC-SERVO module 1.


2) Individual Parameters
Returns the status of an individual parameter for all PIC-SERVO modules.
Commands included: getPosAll, getVelAll, getAdAll, getHmAll
Syntax:
int getPosAll (long x);
int getVelAll (long x);
int getAdAll (long x);
int getHmAll (long x);
Return Value:
Negative number: Error or failure (corresponds to a unique error number)
Positive number or zero: Success.
Arguments:
(*) x: A pointer to the array that receives the status values.
Description:
This command returns the specific status parameter for all PIC-SERVO modules.
x: A pointer to the array that receives the status values for all the PIC-SERVO modules. The size of the array will be equal to the number of modules present in the system.
Types:
getPosAll: This command returns the current positions of all the PIC-SERVO modules.
getVelAll: This command returns the current velocities of all the PIC-SERVO modules.
getAdAll: This command returns the current A/D values of all the PIC-SERVO modules.
getHmAll: This command returns the current motor home positions of all the PIC-SERVO modules.
Examples:
getPosAll(1)
Returns the current positions of all PIC-SERVO modules.
getVelAll(1)
Returns the current velocities of all PIC-SERVO modules.

3) Complete Status
Returns the status of all parameters for all or one PIC-SERVO module.


Commands included: status, statusAll
Syntax:
int status (int module, long x);
int statusAll (long x);
Return Value:
Negative number: Error or failure (corresponds to a unique error number)
Positive number or zero: Success.
Arguments:
module: module address (1 to 32)
(*) x: A pointer to the array that receives the status values.
Description:
This command returns all the status parameters for one or all PIC-SERVO modules.
x: A pointer to the array that receives the status values. The size of the array will be either 4 (for status) or 4 times the number of modules present in the system (for statusAll).
Types:
status: This command returns all the parameters for a particular PIC-SERVO module.
statusAll: This command returns all the parameters for all PIC-SERVO modules.
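statusAll fills a caller-supplied buffer with four parameters per module. Assuming the parameters come back in the order of the individual getters (position, velocity, A/D value, home position) — an assumption for illustration, since the buffer layout is not spelled out here — the buffer could be unpacked as follows:

```cpp
#include <vector>
#include <cstddef>

// Per-module status record; the (pos, vel, ad, hm) field order is
// assumed from the individual getter commands, not stated by the DLL.
struct ModuleStatus { long pos, vel, ad, hm; };

// Unpack a flat statusAll-style buffer of 4 values per module into
// per-module records. Trailing values that do not form a complete
// group of four are ignored.
std::vector<ModuleStatus> unpackStatus(const std::vector<long>& buf) {
    std::vector<ModuleStatus> out;
    for (std::size_t i = 0; i + 3 < buf.size(); i += 4)
        out.push_back({buf[i], buf[i + 1], buf[i + 2], buf[i + 3]});
    return out;
}
```

A two-module system would thus produce a buffer of eight longs, which unpacks into two ModuleStatus records.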


Set Motor

Changes the position value of the PIC-SERVO modules.
Commands included: setPos, setPosAll, setPosSelect, setPosSelectAll
Syntax:
int setPos (int module, unsigned long pos, int rev);
int setPosAll (unsigned long pos, int rev);
int setPosSelect (int module[], unsigned long pos[], int rev[]);
int setPosSelectAll (int module[], unsigned long pos, int rev);
Return Value:
Negative number: Error or failure (corresponds to a unique error number)
Positive number or zero: Success.
Arguments:
module/module[]: module address (1 to 32)
pos / pos[]: positive 32-bit position data (0 to +2,147,483,647)
rev / rev[]: reverse motion (0 or 1)
Description:
This command sets the position variable for a specific PIC-SERVO module or a set of modules.
module: This argument gives the PIC-SERVO module address whose position variable has to be changed. The number sent should be less than or equal to the number of modules present; otherwise it will result in an error (error code: -1). When an array of module numbers is passed, the index is used to address the module and its value decides whether that module's position will be changed or not. For example, [1 0 1 1], when passed, will change the position variable of modules 1, 3 and 4, while module 2 won't be affected. The length of the array (in this case 4) should be less than or equal to the number of modules present in the circuit. Array elements beyond the number of modules present will be discarded. Care should be taken that the entries of the module array match the corresponding pos entries; this is left solely to the user.


pos: The new position data.
rev: This argument determines the direction in which the motor is present. When its value is 1 or a positive number, the direction will be taken as reverse and the sign of the position field will be reversed. When 0, the direction will be taken as forward.
Types:
setPos: This command sets the position variable of one PIC-SERVO module.
setPosAll: This command sets the position variable of all PIC-SERVO modules with the same position and direction.
setPosSelect: Using this command, individual PIC-SERVO modules can be set to their corresponding positions and directions. This command has the highest flexibility, and the others are special cases of it.
setPosSelectAll: Similar to the above, but the position variable change for the selected modules is the same.
Examples:
setPos(1, 100000, 1)
Sets the position variable of the PIC-SERVO module 1 to 100000 in the reverse direction.
setPosSelect([1 0 0 1], [200000 100 100000 100000], [0 1 1 1])
Sets the position variable of PIC-SERVO module 1 to 200000 in the forward direction and sets the position of PIC-SERVO module 4 to 100000 in the reverse direction. The PIC-SERVO modules 2 and 3 won't be affected, and their corresponding entries will be discarded.
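Since the documentation leaves parallel-array consistency "solely to the user", a small caller-side guard (hypothetical, not part of the DLL) can catch mismatched select arguments before they are sent to a command such as setPosSelect:

```cpp
#include <vector>

// The select-style commands expect module[], pos[] and rev[] to line up
// index-for-index, and the DLL does not verify this. This helper returns
// true only when all parallel arrays have the same length and that
// length does not exceed the number of modules present on the network.
bool selectArgsConsistent(const std::vector<int>& module,
                          const std::vector<unsigned long>& pos,
                          const std::vector<int>& rev,
                          int modulesPresent) {
    return module.size() == pos.size() &&
           module.size() == rev.size() &&
           static_cast<int>(module.size()) <= modulesPresent;
}
```

Checking this before every select call costs almost nothing and avoids silently loading a trajectory into the wrong module.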


About the Author

Redwan M. Alqasemi graduated with honors from King Abdulaziz University in Jeddah, Saudi Arabia, with his bachelor's degree in Mechanical Engineering in 1995. He was awarded the Best University Student award for achievements during his undergraduate studies. He worked at Pepsi, then moved to Wichita, Kansas in 1996 and earned his master's degree from Wichita State University in 2001. He taught several labs, and was awarded the best project award by the Boeing and Raytheon aircraft companies for building a wall-climbing robot for aircraft maintenance. Redwan moved to Tampa, Florida in 2001 to pursue his Ph.D. degree in the field of rehabilitation robotics. He taught several courses and labs, and worked in the Center for Rehabilitation Engineering and Technology as the leader of its research group. He completed his Ph.D. degree in the field of combining mobility and manipulation of wheelchair-mounted robotic arms in 2007. Redwan is a member of the Tau Beta Pi and Phi Kappa Phi honor societies. He is also a member of ASME and IEEE. He has been actively involved in research and has published many papers in prestigious journals and conferences.

