USF Libraries
USF Digital Collections

Control of autonomous robot teams in industrial applications


Material Information

Title:
Control of autonomous robot teams in industrial applications
Physical Description:
Book
Language:
English
Creator:
Tsalatsanis, Athanasios
Publisher:
University of South Florida
Place of Publication:
Tampa, Fla
Publication Date:
2008
Subjects

Subjects / Keywords:
Mobile robot teams
Autonomous systems
Hybrid control
Control architecture
Supervisory control
Dissertations, Academic -- Industrial & Management Systems -- Doctoral -- USF   ( lcsh )
Genre:
non-fiction   ( marcgt )

Notes

Summary:
ABSTRACT: The use of teams of coordinated mobile robots in industrial settings such as underground mining, toxic waste cleanup, and material storage and handling is a viable and reliable approach to problems that require or involve automation. In this thesis, the abilities a team of mobile robots should demonstrate in order to successfully perform a mission in industrial settings are identified as a set of functional components. These components are related to navigation and obstacle avoidance, localization, task achieving behaviors and mission planning. The thesis focuses on designing and developing functional components applicable to diverse missions involving teams of mobile robots; in detail, the following are presented: 1. A navigation and obstacle avoidance technique to safely navigate the robot in an unknown environment. The technique relies on information retrieved by the robot's vision system and sonar sensors to identify and avoid surrounding obstacles. 2. A localization method based on Kalman filtering and fuzzy logic to estimate the robot's position. The method uses information derived from multiple robot sensors such as a vision system, odometer, laser range finder, GPS and IMU. 3. A target tracking and collision avoidance technique based on information derived from a vision system and a laser range finder. The technique is applicable in scenarios where an intruder is identified in the patrolling area. 4. A limited lookahead control methodology responsible for mission planning. The methodology is based on supervisory control theory and is responsible for task allocation between the robots of the team. The control methodology considers situations where a robot may fail during operation. The performance of each functional component has been verified through extensive experimentation in indoor and outdoor environments. As a case study, a warehouse patrolling application is considered to demonstrate the effectiveness of the mission planning component.
Thesis:
Dissertation (Ph.D.)--University of South Florida, 2008.
Bibliography:
Includes bibliographical references.
System Details:
Mode of access: World Wide Web.
System Details:
System requirements: World Wide Web browser and PDF reader.
Statement of Responsibility:
by Athanasios Tsalatsanis.
General Note:
Title from PDF of title page.
General Note:
Document formatted into pages; contains 182 pages.
General Note:
Includes vita.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 002001442
oclc - 319836403
usfldc doi - E14-SFE0002653
usfldc handle - e14.2653
System ID:
SFS0026970:00001




Full Text

PAGE 1

Control of Autonomous Robot Teams in Industrial Applications

by

Athanasios Tsalatsanis

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy
Department of Industrial and Management Systems Engineering
College of Engineering
University of South Florida

Co-Major Professor: Ali Yalcin, Ph.D.
Co-Major Professor: Kimon Valavanis, Ph.D.
Tapas Das, Ph.D.
Sudeep Sarkar, Ph.D.
Nikos Tsourveloudis, Ph.D.

Date of Approval: August 27, 2008

Keywords: mobile robot teams, autonomous systems, hybrid control, control architecture, supervisory control

Copyright 2008 Athanasios Tsalatsanis

PAGE 2

Dedication

This work is dedicated to all those who made it possible. To my family for their love and support throughout these years. To my advisors, Dr. Valavanis and Dr. Yalcin, for their guidance, support and friendship that made me feel as a part of their families. I would like to thank the members of my dissertation committee, Dr. Das, Dr. Sarkar, Dr. Tsourveloudis and Dr. Kaw, for their time and comments on my work. I would also like to thank Dr. Kandel for his suggestions on my research. Finally, I would like to specifically thank my Laura for her love, support and inspiration.

PAGE 3

Acknowledgments

This research was supported in part by two grants: ARO W911NF-06-1-0069 and SPAWAR N00039-06-C-0062.

PAGE 4

Table of Contents

List of Tables vii
List of Figures ix
Nomenclature xiv
Abstract xvii
Chapter 1 Introduction 1
1.1 Robots in Industry 1
1.2 Robots and Robot Teams: Definitions 2
1.3 Functional Requirements 5
1.4 Control Architectures 7
1.5 Functional Components 10
1.6 Patrolling and Inspection 13
1.7 Motivation 14
1.8 Research Objectives 15
1.9 Research Contributions 16
1.10 Outline 18
Chapter 2 Literature Review 19
2.1 Introduction 19
2.2 Distributed Robot Architectures 20

PAGE 5

2.2.1 ALLIANCE 20
2.2.2 DRS 22
2.2.3 ACTRESS 23
2.2.4 CAMPOUT 23
2.3 Centralized Architectures 25
2.3.1 CMUnited 25
2.4 Hybrid Architectures 26
2.4.1 GOFER 26
2.4.2 3TEAR 27
2.4.3 A Hybrid Control Architecture for Mobile Robots 27
2.4.4 Hybrid Algorithms of Multi-Agent Control of Mobile Robots 28
2.5 Hybrid Control Architecture for Autonomous Patrolling 28
Chapter 3 Navigation and Obstacle Avoidance 30
3.1 Introduction 30
3.2 Related Work and Comparisons 33
3.3 Vision System 35
3.3.1 Image Acquisition 36
3.3.2 Color Space Transformation 36
3.3.3 Applying Threshold 36
3.3.4 YCbCr Color Space 37
3.3.5 Gauss Filter 40
3.3.6 Extraction of Interesting Points 40

PAGE 6

3.3.7 Interesting Points Correspondence 41
3.3.8 Distance Computation 42
3.3.9 Computational Time 46
3.4 Motion Algorithm 46
3.5 Results 49
3.5.1 Vision System Algorithm 50
3.5.2 Motion Algorithm 51
3.6 Discussion 57
3.7 Conclusions 58
Chapter 4 Task Achieving Behaviors 60
4.1 Introduction 60
4.1.1 Related Research 62
4.1.2 Comparisons 64
4.2 Vision System 65
4.2.1 Image Acquisition 65
4.2.2 Target Selection 65
4.2.2.1 Predetermined Color 66
4.2.2.2 Dynamically Determined Color 67
4.2.3 Target Tracking 69
4.2.4 Extraction of Interesting Points 72
4.2.5 Interesting Points Correspondence 73
4.2.6 Distance Computation 74
4.2.7 Computational Time 76

PAGE 7

4.3 Motion Algorithm 77
4.4 Results 79
4.5 Conclusions 83
Chapter 5 Localization 85
5.1 Introduction 85
5.2 Related Work 87
5.3 Extended Kalman Filter 89
5.3.1 Constructing the System's Model 89
5.3.2 Constructing the Measurement's Model 91
5.3.3 Linearization 94
5.3.4 Implementation 94
5.4 Fuzzy Logic Controllers 95
5.4.1 Range Sensors 96
5.4.2 Odometer 97
5.4.3 GPS 98
5.4.4 IMU 99
5.4.5 Fuzzy Controllers Implementation 99
5.5 Case Study 99
5.5.1 Landmark Identification and Distance Computation 101
5.5.1.1 Image Acquisition 101
5.5.1.2 Landmark Identification 101
5.5.2 Distance Computation 102
5.5.2.1 Vision / Laser Registration 103

PAGE 8

5.5.3 GPS Conversions 105
5.5.4 IMU 105
5.5.5 Fuzzy Extended Kalman Filter 107
5.5.6 Results 110
5.5.6.1 Indoor Environment 111
5.5.6.2 Outdoor Environment 112
5.6 Conclusions 117
Chapter 6 Mission Planning 118
6.1 Introduction 118
6.2 Related Work 122
6.3 System Model Description 124
6.4 Utility Function Definition 129
6.4.1 Normalized Utility Function 136
6.5 Limited Lookahead Control Policy 137
6.5.1 Robot Failures and Repairs 138
6.6 Computational Results Using the Proposed Control Methodology 142
6.7 Conclusions 149
Chapter 7 Conclusions and Future Work 152
7.1 Contributions 153
7.1.1 Navigation and Obstacle Avoidance 153
7.1.2 Task Achieving Behaviors: Target Tracking 154
7.1.3 Localization 155
7.1.4 Mission Planning 155

PAGE 9

7.2 Future Work 157
References 159
Appendices 171
Appendix A HSI Color Space 172
Appendix B CMYK Color Space 174
Appendix C Color Segmentation Results 177
About the Author End Page

PAGE 10

List of Tables

Table N.1 Variables used in Chapters 3 and 4 xiv
Table N.2 Variables used in Chapter 5 xv
Table N.3 Variables used in Chapter 6 xvi
Table 5.1 Values of fitting parameters 106
Table 5.2 Distributions of the error in sensor readings with respect to the range measurements 108
Table 5.3 Distributions of the error in odometer readings with respect to the traveled distance 108
Table 5.4 Distributions of the error in GPS readings with respect to the satellite coverage 108
Table 5.5 Error comparison for indoor navigation 112
Table 6.1 Robot sensors 121
Table 6.2 Region/robot allocation 121
Table 6.3 Fuzzy logic membership functions for the patrolling scenario 143
Table 6.4 Experimental results for the case without failures 145
Table 6.5 Summary of experimental results for the case without failures 146
Table 6.6 Experimental results for the case with failures 146
Table 6.7 Summary of experimental results for the case with failures 147

PAGE 11

Table 6.8 ANOVA for total utility with 4 levels of LLD 148
Table 6.9 ANOVA for total utility with 3 levels of LLD 148

PAGE 12

List of Figures

Fig. 1.1 The Puma 500 robotic manipulator (a); a mobile robot (b) and an aerial vehicle (c) from the USF Unmanned Systems Laboratory; the underwater vehicle Aqua Explorer 1000 by KDD Co. (d) 3
Fig. 1.2 Functional components for a team of mobile robots 6
Fig. 1.3 Centralized control architecture for a team of mobile robots 9
Fig. 1.4 Distributed control architecture for mobile robot teams 10
Fig. 1.5 Hybrid control architecture for mobile robot teams 11
Fig. 3.1 Block diagram of vision system function 35
Fig. 3.2 Applying threshold technique in YCbCr color space 38
Fig. 3.3 Applying threshold technique to extract red, green, blue and yellow obstacles using the YCbCr color space 39
Fig. 3.4 Interesting point correspondence 42
Fig. 3.5 Horizontal field of view of monoscopic vision system 43
Fig. 3.6 Stereo from motion 44
Fig. 3.7 Depth from stereo vision 44
Fig. 3.8 Rotated stereoscopic vision system 46
Fig. 3.9 Location of the sonars on board ATRV-mini 47
Fig. 3.10 The block diagram of the motion algorithm 49

PAGE 13

Fig. 3.11 Real distance vs computed distance using data from the parallel vision system 51
Fig. 3.12 Real distance vs computed distance using data from the rotated vision system 51
Fig. 3.13 Collision avoidance using vision system data, experiment 1 52
Fig. 3.14 Collision avoidance using vision system data, experiment 2 52
Fig. 3.15 Collision avoidance using vision system data, experiment 3 53
Fig. 3.16 Collision avoidance using vision system data, experiment 4 53
Fig. 3.17 Collision avoidance using range measurements from the ultrasonic sensors, experiment 1 54
Fig. 3.18 Collision avoidance using range measurements from the ultrasonic sensors, experiment 2 54
Fig. 3.19 Collision avoidance using data from sonar and camera, experiment 1 55
Fig. 3.20 Collision avoidance using data from sonar and camera, experiment 2 55
Fig. 3.21 Collision avoidance using data from sonar and camera, experiment 3 55
Fig. 3.22 Collision avoidance using data from sonar and camera, experiment 4 56
Fig. 3.23 Robot's trajectory snapshots 56
Fig. 3.24 Interesting point correspondence for the case of monocular vision system 58
Fig. 4.1 Tracking control system 62
Fig. 4.2 Block diagram of the vision system function 66
Fig. 4.3 Region growing results for the segmentation of an object with known color 67

PAGE 14

Fig. 4.4 Region growing results for the segmentation of an object with unknown color 69
Fig. 4.5 Image segmentation for camera motion 70
Fig. 4.6 Location of the target related with the robot 72
Fig. 4.7 Horizontal field of view of monoscopic vision system 75
Fig. 4.8 Stereo from motion 75
Fig. 4.9 Depth from stereo vision 75
Fig. 4.10 Rotated stereoscopic vision system 76
Fig. 4.11 Segmentation of the laser finder beam in three regions 78
Fig. 4.12 Flow chart of the motion algorithm 79
Fig. 4.13 Tracking blue color 80
Fig. 4.14 Tracking red color, example 1 80
Fig. 4.15 Tracking red color, example 2 81
Fig. 4.16 Tracking red color and obstacle avoidance 81
Fig. 4.17 Tracking blue color and obstacle avoidance 82
Fig. 4.18 Tracking a mobile robot 82
Fig. 5.1 Landmark position with respect to a global coordinate system 90
Fig. 5.2 Block diagram for the EKF implementation 95
Fig. 5.3 Membership functions for the sensory readings and the variance of the error distribution 97
Fig. 5.4 Relation between the EKF and the fuzzy controllers 100
Fig. 5.5 Color threshold technique 102
Fig. 5.6 Stereo vision system in parallel configuration 103

PAGE 15

Fig. 5.7 Relation between the coordinate systems of the vision system and the laser range finder 104
Fig. 5.8 Membership functions of the vision system (a), laser range finder (b), GPS (c) and odometer (d) FL controllers 109
Fig. 5.9 Block diagram for the fuzzy EKF 110
Fig. 5.10 The vehicle trajectory according to three methods, experiment 1 113
Fig. 5.11 The vehicle trajectory according to three methods, experiment 2 113
Fig. 5.12 Performance of the EKF, experiment 1 114
Fig. 5.13 Performance of the fuzzy EKF, experiment 1 114
Fig. 5.14 Performance of the EKF, experiment 2 115
Fig. 5.15 Performance of the fuzzy EKF, experiment 2 115
Fig. 5.16 Performance of the EKF, experiment 3 116
Fig. 5.17 Performance of the fuzzy EKF, experiment 3 116
Fig. 6.1 Warehouse partitioning for the patrolling scenario 120
Fig. 6.2 Transition graph for Robot j 126
Fig. 6.3 Transition graphs for the task completion requirements 127
Fig. 6.4 Transition graph of the failure assumption automaton 128
Fig. 6.5 Membership functions of the fuzzy variable: robot's endurance 132
Fig. 6.6 Membership functions of the fuzzy variables: designer's choice, robot's efficiency and robot's ability 132
Fig. 6.7 Forward sweep (a) and backtracking pass (b) of dynamic programming 135
Fig. 6.8 Block diagram of the control algorithm 137
Fig. 6.9 Limited lookahead policy for the patrolling scenario 139
Fig. 6.10 Control algorithm with failure detection 141

PAGE 16

Fig. 6.11 Events executed in the UCSM after a temporary failure 141
Fig. 6.12 Events executed in the UCSM after a failure with task re-initialization 142
Fig. 6.13 Blocking example 151
Fig. A.1 Applying threshold technique in HSI color space 172
Fig. A.2 Thresholding to extract red, green, blue and yellow obstacles using the HSI color space 173
Fig. A.3 Applying threshold technique in CMYK color space 174
Fig. A.4 Thresholding to extract red, green, blue and yellow obstacles using the CMYK color space 176
Fig. A.5 Thresholding to extract red, green, blue and yellow obstacles using the three color spaces 177
Fig. A.6 Applying threshold technique to extract yellow obstacles using the YCbCr color space 182

PAGE 17

Nomenclature

Table N.1 Variables used in Chapters 3 and 4
x, y: coordinates of a pixel in the image plane
RGB, YCbCr, HSI: color spaces
T: a threshold value
H: function for an image's histogram
g, f: grayscale and color image functions, respectively
I_k: variance of illumination in direction k
r: correlation coefficient
d_x, d_y: pixel disparities
Z: depth between an object and the camera(s)
X: distance between the object and the camera on the horizontal axis
B: baseline
x_pi: pixel coordinate on the X axis
U_j: range data from sensor j
Vehicle's linear velocity and angular velocity, respectively
Angle between the camera and the X axis

PAGE 18

Table N.2 Variables used in Chapter 5
X_k: the state vector (system model) at time k
x_k = [x_k  y_k  θ_k]^T: the vehicle's posture at time k
u_k = [V_k  ω_k]^T: velocity vector consisting of the vehicle's linear and angular velocities at time k
gps_k = [gps_x,k  gps_y,k]^T: GPS readings related to the vehicle's position at time k
imu_k = [ax_k  eax_k  ay_k  eay_k  angle_z,k]^T: IMU readings related to the vehicle's accelerations, errors in acceleration, and steering angle at time k
odo_k = [odo_x,k  odo_y,k  odo_θ,k]^T: odometer readings related to the vehicle's position at time k
cam_k^i = [dcam_x,k^i  dcam_y,k^i]^T: distance between the i-th landmark and the vehicle as measured by the camera at time k
las_k^i = [dlas_x,k^i  dlas_y,k^i]^T: distance between the i-th landmark and the vehicle as measured by the laser range finder at time k
lan_k^i = [lan_x,k^i  lan_y,k^i]^T: position of the i-th landmark at time k
d_j,k^i = [dx_j,k^i  dy_j,k^i]^T: distance between the i-th landmark and the vehicle as measured by the j-th range sensor at time k
n_k: zero-mean Gaussian noise with covariance Q_k
w_k: zero-mean Gaussian noise with covariance R_k
Z_k: measurement model at time k
P_k: covariance matrix at time k

PAGE 19

Table N.3 Variables used in Chapter 6
k: task number
j: robot number
Σ_j: set of events that Robot j can execute
Q_j: set of states of Robot j
δ_j: transition function for Robot j
q_j0, Q_jm: initial and final states of Robot j
G_j = (Σ_j, Q_j, q_j0, δ_j, Q_jm): the finite automaton representing the uncontrolled behavior of Robot j
R_n = (Σ_n, Q_n, x_n0, δ_n, X_nm): requirements model
S: specifications model
s: a string
U(s): utility of a string
V(s): maximum utility value for a string
U_N(s): normalized utility function for a string

PAGE 20

Control of Autonomous Robot Teams in Industrial Applications

Athanasios Tsalatsanis

ABSTRACT

The use of teams of coordinated mobile robots in industrial settings such as underground mining, toxic waste cleanup, and material storage and handling is a viable and reliable approach to problems that require or involve automation. In this thesis, the abilities a team of mobile robots should demonstrate in order to successfully perform a mission in industrial settings are identified as a set of functional components. These components are related to navigation and obstacle avoidance, localization, task achieving behaviors and mission planning. The thesis focuses on designing and developing functional components applicable to diverse missions involving teams of mobile robots; in detail, the following are presented: 1. A navigation and obstacle avoidance technique to safely navigate the robot in an unknown environment. The technique relies on information retrieved by the robot's vision system and sonar sensors to identify and avoid surrounding obstacles. 2. A localization method based on Kalman filtering and fuzzy logic to estimate the robot's position. The method uses information derived from multiple robot sensors such as a vision system, odometer, laser range finder, GPS and IMU.

PAGE 21

3. A target tracking and collision avoidance technique based on information derived from a vision system and a laser range finder. The technique is applicable in scenarios where an intruder is identified in the patrolling area. 4. A limited lookahead control methodology responsible for mission planning. The methodology is based on supervisory control theory and is responsible for task allocation between the robots of the team. The control methodology considers situations where a robot may fail during operation. The performance of each functional component has been verified through extensive experimentation in indoor and outdoor environments. As a case study, a warehouse patrolling application is considered to demonstrate the effectiveness of the mission planning component.

PAGE 22

Chapter 1 Introduction

1.1 Robots in Industry

The use of robotic-based solutions in industry is driven by the need for safety, quality and efficiency. The increasing demand for inexpensive products of higher quality and increased reliability requires frequent modifications to the production process. The flexibility of the system to adapt to these modifications is limited by the expensive and time-consuming training of specialized personnel. In addition, many activities in industrial applications such as manufacturing, underground mining, toxic waste cleanup and material storage/handling of chemicals take place in hazardous environments harmful to human health. To address the issues of safety, quality and efficiency in industry, it is essential to reduce the human presence in such applications and assign them to autonomous robotic-based solutions. The scope of this chapter is to familiarize the reader with terms such as robots, robot teams, control architectures and functional requirements. Brief examples of different robots and control architectures are presented.

PAGE 23

1.2 Robots and Robot Teams: Definitions

The term robot is derived from the Czech word robota (forced labor) and was first used in the play Rossum's Universal Robots by Karel Capek, in which small, artificial, humanoid beings obeyed their lord's commands. However, the world of robotics has come a long way since then. Different types of autonomous robots such as robotic manipulators, mobile robots, aerial and underwater vehicles have been developed for industrial, military and civilian applications. The term autonomous refers to systems that are capable of performing a number of operations for long periods of time without any external interference [1].

The robotic manipulators shown in Fig. 1.1a are articulated devices of finite size able to perform highly repetitive operations such as pick and place activities, car body painting, welding and material handling. These robots are used in the automotive and electronics industries and are composed of two parts: a stationary part fixed on the ground or on a platform, and a moving part able to move with 2 to 6 degrees of freedom. Robotic manipulators have a limited range of operation because of their stationary part. Alternatively, robots with mobility capabilities have been developed. Depending on the operational environment, these robots are classified into three main categories: unmanned ground (UGVs), aerial (UAVs) and underwater (UUVs) vehicles. These robots consist of a set of sensors and actuators onboard a moving platform. The moving platform for UGVs, or mobile robots, shown in Fig. 1.1b, is a car-like vehicle with two or more wheels. In the case of UAVs, see Fig. 1.1c, the moving platform has the form of an airplane or a rotorcraft (helicopter), and in the case of UUVs, see Fig. 1.1d, the form of a miniature-size submarine.

PAGE 24

Robots with mobility capabilities have been extensively used in military applications such as battlefield surveillance and target tracking [2] or humanitarian minefield demolition [3], [4], while lately they have become available for civilian applications such as search and rescue [5], [6] and automated traffic monitoring [7].

Fig. 1.1 The Puma 500 robotic manipulator (a); a mobile robot (b) and an aerial vehicle (c) from the USF Unmanned Systems Laboratory; the underwater vehicle Aqua Explorer 1000 by KDD Co. (d)

Several well structured applications such as "pick and place" or material handling activities can be performed by single-robot solutions. However, as the complexity and the requirements of the application increase, the use of multi-robot solutions is more suitable. For instance, using a single robot in applications such as patrolling or mapping

PAGE 25

is a viable solution but not an effective one. For example, for a warehouse patrolling mission a mobile robot should carry the appropriate sensors to identify fire, chemical leaks and unauthorized human presence. Installing all these sensors on a single robot is not an optimal decision, since the robot could be trapped in a fire, resulting in total loss of these sensors. In addition, the probability of robot failure increases as the number of sensors increases, disqualifying the robot from achieving its mission. Furthermore, using one robot instead of many significantly increases the time required to accomplish the mission. A multi-robot system would be more efficient for such applications. The overall mission would be decomposed into tasks allocated to the robots according to the set of sensors each robot carries. Individual robot failures would be addressed by assigning the tasks of the failed unit to one or more other robots. In this manner, the probability that the multi-robot system achieves the overall objective increases even though some of the system's members are inoperative.

Multi-robot systems can be classified into three main categories: robot teams, cooperative robot teams and swarms. The term robot team refers to a small group of robots with different properties or capabilities. For example, a robot team can have members from aerial, underwater and ground vehicles, all working together to achieve a set of common objectives. The overall objective is decomposed into tasks for each robot; cooperation between the robots of the team to perform a task is not allowed. On the other hand, cooperative robot teams consider combined effort on task execution, allowing two or more robots to work on the same task, supplementing each other. Finally, the term swarm refers to a large group of robots with identical properties and capabilities. In

PAGE 26

this work, a team of mobile robots equipped with overlapping sensory capabilities, which allows multiple selections of robots for a task, is considered.

1.3 Functional Requirements

All mobile robot teams operate under a set of functional requirements related to the team's ability to perform a mission. These requirements can be broken down into seven main components:

1. Mission planning
2. Communication
3. Task execution
4. Motion control
5. Localization
6. Navigation and obstacle avoidance
7. Task achieving behaviors

Fig. 1.2 provides a schematic representation of the functional components and their relationship.

The Mission Planning component includes all the features related to higher level control, such as mission decomposition, task allocation, optimization, robot cooperation and observation. It receives information through the communication component from all

PAGE 27

the robots in the team regarding their posture, functionality and task status. The Mission Planning component generates the task assignments for each robot.

The Task Execution component is responsible for mission planning at the robot's level. It coordinates all the robot's actions to achieve a set of tasks, utilizing instructions on how to perform the current task from the Task Achieving Behaviors component. The Task Execution component notifies the Mission Planner of the status of the robot (operational or non-operational) and of the task (completed or uncompleted).

Fig. 1.2 Functional components for a team of mobile robots

The Task Achieving Behaviors component is a modular repository of behaviors that can be activated individually or in parallel, according to task specifications.

PAGE 28

The Motion Control component consists of all the physical mechanisms of the robot, such as actuators and wheels, responsible for the robot's mobility.

The Localization component provides estimates of the robot's posture, velocity, acceleration and orientation. It receives information from a set of position sensors such as GPS, IMU and odometer, and reports the current posture of the robot to the Task Execution component.

Navigation and Obstacle Avoidance refers to the set of algorithms responsible for the safe passage of the robot from an initial point to a final destination. The navigation component receives inputs from the Task Execution component and the robot's sensors and provides velocity commands to the Motion Control component.

The functional components described above are common to all mobile robot teams regardless of the mission's nature. A team of mobile robots has requirements for mission control, task execution, motion control, localization, communication and navigation whether it is assigned a planetary exploration or a toxic waste cleanup mission. However, there is significant diversity in how these functional components interact with each other, resulting in various degrees of flexibility for the robot team. This interaction between the functional components is represented by control architectures.

1.4 Control Architectures

In robot teams, there are numerous approaches to control architectures, ranging from fully centralized to fully distributed.

PAGE 29

The centralized control architectures, shown in Fig. 1.3, use one robot or a central controller as the team's leader. The controller is responsible for the team's actions, from mission planning to obstacle avoidance. The primary advantage of centralized control architectures is that the central controller has exact knowledge of each robot's activities, which allows for optimal planning. However, centralized approaches present many disadvantages. Since the central controller is responsible for all the actions in the team, the computational complexity of the system increases as the number of robots increases, which implies that the response of the system in a dynamic environment is rather slow. Furthermore, every team member communicates directly with the central controller, increasing the required communications bandwidth. Finally, if the communications link or the central controller fails, the team's members become vulnerable to dangerous situations. Fully centralized approaches are suited for applications such as RoboCup [8], with small teams of robots operating in static environments.

The distributed control architectures, shown in Fig. 1.4, allocate the team's intelligence to the robots. Each robot operates independently and has the capability to determine its actions based on the requirements of the mission and the actions of the other robots. Fully distributed systems are easier to implement but have significant drawbacks, usually in the area of optimization, since there is no overall monitoring of the system's processes. Additional functional requirements arise to resolve situations where more than one robot attempts to execute the same task (conflicts). Fully distributed control architectures have been used for hazardous waste cleanup [9] and humanitarian demining [10].
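To make the contrast concrete, the following minimal sketch shows the same allocation problem handled first by a central controller with a global view and then by robots claiming tasks on their own. It is illustrative only: the Task and Robot types, the least-loaded rule and the shared claim set are assumptions, not constructs from any of the cited systems.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    required_sensor: str

@dataclass
class Robot:
    name: str
    sensors: set
    assigned: list = field(default_factory=list)

def centralized_allocate(robots, tasks):
    """A central controller sees every robot and assigns each task itself."""
    for task in tasks:
        capable = [r for r in robots if task.required_sensor in r.sensors]
        if capable:
            # pick the least-loaded capable robot (one possible optimality criterion)
            min(capable, key=lambda r: len(r.assigned)).assigned.append(task)

def distributed_claim(robot, task, claimed):
    """Each robot decides on its own; 'claimed' stands in for peer-to-peer messages."""
    if task.name not in claimed and task.required_sensor in robot.sensors:
        claimed.add(task.name)          # announce the claim to the other robots
        robot.assigned.append(task)
        return True
    return False
```

The centralized routine can balance load because it sees every robot at once; the distributed one needs only local information but, as noted above, gives up global optimality and must resolve conflicting claims.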

PAGE 30

Fig. 1.3 Centralized control architecture for a team of mobile robots

Hybrid control architectures, shown in Fig. 1.5, combine characteristics from both centralized and distributed approaches. The intelligence of the robot team is separated into layers of control. The higher layer is responsible for the overall performance of the team, while the lower layers are responsible for each robot's functionalities such as task execution, navigation and localization. The control design allows the robots to carry on with their task assignments even if communications with the mission planner are lost. In addition, communications between the mission planner and the team's members are limited to task assignments, task completion and operational status, which implies that hybrid control architectures do not require broad communication bandwidth.
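A minimal sketch of this layering is given below; it is illustrative only, and the RobotLevelController interface and message fields are assumptions rather than part of the architecture in Fig. 1.5. The point it shows is that the planner and the robots exchange only assignments and status reports, and that a robot keeps executing its current task when the planner link is down.

```python
class RobotLevelController:
    """Lower control layer: executes the currently assigned task autonomously."""

    def __init__(self, name):
        self.name = name
        self.current_task = None

    def receive_assignment(self, task):
        self.current_task = task

    def step(self, planner_reachable):
        if self.current_task is None:
            return {"robot": self.name, "status": "idle"}
        status = self._execute_one_cycle(self.current_task)
        if not planner_reachable:
            # communications lost: keep working, report when the link returns
            return None
        return {"robot": self.name, "task": self.current_task, "status": status}

    def _execute_one_cycle(self, task):
        # placeholder for robot-level task execution, navigation and localization
        return "in_progress"
```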

PAGE 31

Fig. 1.4 Distributed control architecture for mobile robot teams

1.5 Functional Components

In Figs. 1.3-1.5, the team's functional components appear as structural modules in each control architecture. As noted earlier, these functional components are common to all missions involving mobile robot teams. However, the functional component design could be either generic, applicable to any mission, or mission specific. For example, the Mission Planner of a space exploration mission cannot be used in a material storage application. Nevertheless, both Mission Planners share common characteristics, such as how to decompose the overall mission into tasks for each robot or how to handle resource

PAGE 32

failures and repairs. On the other hand, the Navigation and Obstacle Avoidance component would be the same in both missions, space exploration and storage handling, assuming that the same sensors are used. The design of the functional components is a challenging problem. Each component should demonstrate characteristics that will allow the robot team to accomplish its mission effectively.

Fig. 1.5 Hybrid control architecture for mobile robot teams

PAGE 33

The main issues that should be addressed in the design of the Mission Planner are:

1. Mission decomposition and task allocation. Each robot of the team carries a set of sensors appropriate for a set of tasks. Based on these sensor capabilities, the Mission Planner identifies a suitable robot for each task.
2. Control optimization. Various optimization criteria, such as shortest route or use of the lowest-cost robot, can be utilized to maximize the performance of the robot team and at the same time minimize the associated cost.
3. Robot failures and repairs. A very important consideration in the Mission Planner design is that of resource failures and repairs. A failed robot could jeopardize the team's mission. The Mission Planner must demonstrate the ability to respond to robot failures by reallocating tasks to the remaining robots of the team. At the same time, since a repaired robot is a significant asset to the team's performance, the Mission Planner must be able to reintegrate the repaired robot into the team.
4. Completion of the overall objective. All tasks related to the mission must be completed, under the assumption that there are available resources.
5. Deadlocks and conflicts. The design of the Mission Planner must avoid situations where more than one robot is assigned to the same task or a robot reaches a state from which it cannot get out.
6. Computational requirements. The Mission Planner must be able to adapt to changes in the team's environment in real time.

In the design of the Navigation and Obstacle Avoidance component there are two key issues:

PAGE 34

1. Velocity commands. Based on the current position of the robot and a final destination, a set of algorithms computes the appropriate velocity commands that will move the robot to its target.
2. Obstacle avoidance. Each robot carries range sensors that provide information about the surrounding environment. The Navigation and Obstacle Avoidance component utilizes the sensor readings to identify a safe, obstacle-free passage for the robot.

The Localization component must be able to utilize readings from position and range sensors to determine the posture of each robot. The Mission Planner requires this information for task allocation. Also, the Navigation and Obstacle Avoidance component uses the posture information to derive the velocity vectors for the robot's motion. Finally, the Task Achieving Behaviors component encapsulates a set of behaviors each robot should demonstrate to achieve the mission. Since not all the robots are able to perform all the tasks in a mission, the design of the Task Achieving Behaviors component should be modular, so that each robot can activate the appropriate behavior to perform a given task.

1.6 Patrolling and Inspection

An effective functional component design can be demonstrated in various industrial applications that utilize mobile robot teams. One such application is autonomous patrolling and inspection, where a team of mobile robots is assigned to guard and

PAGE 35

monitor an industrial area (e.g., a warehouse). In general, patrolling refers to a group of individuals that perform security functions, collect information and report unexpected activities. Characteristics that a patrolling group should demonstrate are:

1. Mission decomposition. Each member of the patrol team is assigned to a region of the patrolled area. Not all regions have the same requirements for patrolling, thus each of the team's members can be assigned to a specific region.
2. Optimization. Depending on the number of members in the group, the number of regions and the member routes can be minimized while still completely covering the patrolled area.
3. Patrol frequency. Each region of the patrolled area must be revisited at regular or at random time intervals (a minimal scheduling sketch is given below).

These characteristics can be incorporated into the design of the functional components and enable a team of mobile robots to perform patrolling missions.

1.7 Motivation

From higher level functions such as mission decomposition and task allocation to lower level operations such as collision avoidance, the design of each functional component is vital to the successful operation of the robot team. A robot that cannot navigate safely cannot perform tasks requiring robot movement, and a team of robots cannot function efficiently unless there is a way of allocating tasks.
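To make the patrol-frequency characteristic of Section 1.6 concrete, the sketch below schedules revisits for each region either at a fixed period or at a random interval. The time ranges and region names are assumptions used only for illustration.

```python
import random
import time

def next_visit_time(last_visit, regular_period=None, random_range=(60.0, 300.0)):
    """Time at which a region should be revisited (seconds since the epoch)."""
    if regular_period is not None:
        return last_visit + regular_period            # regular patrol interval
    return last_visit + random.uniform(*random_range)  # randomized patrol interval

def due_regions(schedule, now=None):
    """Regions whose next visit time has passed; schedule maps region -> next time."""
    now = time.time() if now is None else now
    return [region for region, t in schedule.items() if t <= now]

# example: revisit the loading dock every 2 minutes, the storage aisles at random
schedule = {
    "loading_dock": next_visit_time(time.time(), regular_period=120.0),
    "storage_aisles": next_visit_time(time.time()),
}
```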

PAGE 36

In the literature, there is significant work on the design of each individual functional component. This work investigates how these functional components can interact with each other and allow the robot team to effectively perform a mission. Consider the situation where the Navigation and Obstacle Avoidance component uses information from a laser range finder to avoid collision with an obstacle, while the Localization component uses the same sensor to estimate the position of the robot, causing a conflict in sensor utilization. The design of each functional component has to consider the interaction between the components and resolve conflicts in sensor utilization.

1.8 Research Objectives

The goal of this research is to design functional components common to a wide range of autonomous mobile robot teams, such as Mission Planning, Navigation and Obstacle Avoidance, Localization and Task Achieving Behaviors, and to investigate their deployment under the umbrella of a hybrid control architecture. The specific objectives are:

1. Identify functional components applicable to any mission involving mobile robot teams.
2. Design individual functional components applicable to a warehouse patrolling mission.
3. Demonstrate individual functional component design and interaction using an autonomous patrolling application.

PAGE 37

1.9 Research Contributions

The main contribution of this research is that it integrates all the key elements required for the operation of a robot team as a whole. Through this research, functional components essential for most robot teams are identified and methodologies are developed to allow a robot team to complete missions such as warehouse patrolling. The rest of this section summarizes the contributions related to component design methodologies and the warehouse patrolling application.

A novel depth estimation technique is introduced in the design of the Navigation and Obstacle Avoidance component. The technique uses information from two cameras to derive depth estimates between the robot and an object. The novelty of this technique is that instead of calibrating the cameras and using standard stereo vision equations to derive depth, parameters such as the image size, the angle of view of the cameras and the relative position of two different captures of the same scene are used to derive a depth estimate. The developed technique presented a maximum error of 8% in depth measurements up to 8 m, better than most approaches in the literature.

In the design of the Task Achieving Behaviors component, a vision-based target tracking technique is presented. A main contribution of this technique is that it uses a stereo vision system where both cameras track a target independently, providing a redundant mechanism that helps avoid losing the target. This means that even if one camera loses the target, it can retrieve information from the other camera to find it again. In addition, the robot's direction is controlled by the pan/tilt angles of the cameras, allowing the robot to

PAGE 38

avoid obstacles and keep tracking a target, or to keep tracking a target that moves on uneven terrain.

For the Localization component, a fuzzy Extended Kalman Filter methodology is developed that fuses information from multiple sensors to derive an estimate of the robot's posture. A main contribution of this work is the consideration of five distinct sensors: GPS, IMU, stereo vision system, laser range finder and odometer, while most related approaches do not exceed three sensors. In addition, the fuzzy logic controller design allows for incorporation of the error in sensor readings into the EKF without the use of an error model. The error in sensor readings is approximated by multiple zero-mean Gaussian distributions and is included in the covariance matrix of the measurement model.

Finally, for the Mission Planning component a limited lookahead control policy is presented. Contributions of this work focus on the areas of computational complexity and task allocation. In this work, instead of constructing the complete supervisor, a limited lookahead policy is utilized that constructs a lookahead supervisor based on the evolution of the system in real time. The proposed methodology offers an advantage for highly dynamic systems, since the control model can change on the fly. In addition, the design of the Mission Planning component contributes to the warehouse patrolling application by modelling the mission using Discrete Event Systems.
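A minimal sketch of the idea behind the fuzzy EKF contribution above is shown below. The membership breakpoints, variance values and two-sensor measurement vector are assumptions chosen only for illustration; they are not the membership functions or error distributions used in Chapter 5.

```python
import numpy as np

def gps_variance(satellites):
    """Blend a 'poor' and a 'good' zero-mean Gaussian error model by satellite count."""
    good = min(max((satellites - 4) / 4.0, 0.0), 1.0)   # 0 below 4 satellites, 1 above 8
    poor = 1.0 - good
    return poor * 25.0 + good * 1.0                      # variance in m^2 (assumed values)

def build_measurement_covariance(satellites, laser_range):
    """Diagonal R whose entries are chosen by simple fuzzy-style rules."""
    laser_var = 0.01 if laser_range < 10.0 else 0.05     # assumed breakpoint
    return np.diag([gps_variance(satellites), gps_variance(satellites), laser_var])

def ekf_update(x, P, z, H, R):
    """Standard EKF measurement update; the fuzzy rules only supply R."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

The design choice illustrated here is that the filter equations stay standard; only the measurement covariance R is re-computed each cycle from the current sensing conditions, which is how an explicit error model is avoided.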

PAGE 39

1.10 Outline

This thesis is organized as follows. Chapter 2 is dedicated to the literature review. A number of well established architectures for teams of mobile robots are presented. The chapter focuses on centralized, distributed and hybrid control approaches.

Chapter 3 presents a Navigation and Obstacle Avoidance method for a mobile robot. The method uses range information from a stereo vision system and a set of sonar sensors to safely navigate the robot in an indoor environment. Also, this chapter presents a simple method of computing range from a stereo vision system.

Chapter 4 discusses the Task Achieving Behaviors functional component. Patrolling applications require the ability to recognize and track unauthorized presence in a secure facility. A vision-based target tracking method is presented that utilizes a stereo vision system and a laser range scanner to enable a mobile robot to track an intruder and notify the authorities.

In Chapter 5 a fuzzy Extended Kalman Filter is presented that uses information from multiple sensors (GPS, IMU, stereo vision system, laser range finder and odometer) to determine the posture of a vehicle travelling in indoor and/or outdoor environments.

Chapter 6 is devoted to the Mission Planner component. A dynamic task allocation and controller design methodology for cooperative robot teams based on limited lookahead policies is presented. Finally, Chapter 7 presents the contributions of the completed work and discusses directions for future work.

PAGE 40

Chapter 2 Literature Review

2.1 Introduction

Control architectures for a single autonomous mobile robot are designed based on three main paradigms: deliberative, reactive and hybrid deliberative/reactive [1]. The main characteristic of deliberative architectures, as in [11], [12], [13], is their hierarchical structure. Each layer of the architecture is dedicated to a different function of the robot (i.e., sensing, thinking and acting). In this sense, the robot's actions are determined by reasoning rather than just reaction to sensory information. On the other hand, reactive architectures, as in [14], [15], [16], consist of a set of behavioral modules, each designed for a different function of the robot. All the modules are activated in parallel and respond to direct sensory information. Examples of these behavioral modules are obstacle avoidance, map building and exploration. Finally, hybrid deliberative/reactive architectures, as in [17], [18], [19], [20], incorporate the reasoning part of the deliberative architectures and the behavioral modules of the reactive architectures. An additional planning module is used to provide reasoning on the activation of the appropriate behavior according to the sensory information and to the robot's mission.
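The reactive idea can be illustrated with the toy arbitration loop below. It is not taken from any of the cited architectures; the sensor fields, thresholds and velocity commands are assumptions. Each behavior inspects the raw sensor data, and a fixed priority order decides which output drives the robot.

```python
def avoid_obstacles(sensors):
    """Triggered when any sonar reading is closer than an assumed safety distance."""
    if min(sensors["sonar"]) < 0.5:                 # metres
        return {"linear": 0.0, "angular": 0.6}      # stop and turn away
    return None                                     # behavior stays silent

def explore(sensors):
    """Default behavior: keep moving forward."""
    return {"linear": 0.3, "angular": 0.0}

BEHAVIORS = [avoid_obstacles, explore]              # ordered from highest priority

def reactive_step(sensors):
    for behavior in BEHAVIORS:
        command = behavior(sensors)
        if command is not None:
            return command

# example: reactive_step({"sonar": [1.2, 0.4, 2.0]}) returns the turn-away command
```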

PAGE 41

These paradigms are used to design architectures for single mobile robot control. When the number of robots increases and mobile robot teams are formed, architectures that coordinate the robots' efforts to achieve a common mission are required. These architectures are classified into three main categories: centralized, distributed and hybrid architectures. In the centralized approaches, a central controller is responsible for all the functions of the team. Each robot has no intelligence and all sensor readings are transmitted to the central controller. The central controller decides every action that each robot should take, for example, which task to perform and how to avoid an obstacle. On the other hand, distributed architectures distribute the intelligence among the robots. Every robot has modules for behaviors such as obstacle avoidance and task execution. The robot team performs task allocation collectively. Finally, hybrid control architectures distribute a layer of control to the robots and keep task planning and task allocation on a central computer. This central computer can be a robot of the team. The rest of this chapter summarizes well established architectures for mobile robot teams.

2.2 Distributed Robot Architectures

2.2.1 ALLIANCE

ALLIANCE [9] is a distributed software architecture built for heterogeneous mobile robot control, emphasizing fault tolerance capabilities. It has been designed for small teams of robots that have to perform independent tasks. All the robots have the

PAGE 42

capability to determine their own actions based on the requirements of the mission, the activities of other robots, the current environmental conditions and the robot's internal state. The architecture uses a hierarchical behavior-based model. The lower level behaviors correspond to primitive survival behaviors such as obstacle avoidance, while the higher level behaviors correspond to task achieving functions such as map building or exploring.

ALLIANCE incorporates the use of mathematically modelled motivations such as impatience and acquiescence. The robot impatience incorporates two parameters: the time during which one robot is willing to allow another robot to affect the motivation of a behavior set, and the time that a robot allows another robot to accomplish its task before becoming impatient. The robot acquiescence determines the time that a robot needs to finish its task, reassigning the task to another robot if that time is exceeded, and the time before the robot gives up the current behavior and is assigned to another one.

Significant contributions of the ALLIANCE architecture are its distributed fault tolerance and adaptivity. Fault tolerance is the ability of the robot team to respond to individual robot failures or failures in communications that may occur during the mission. The author proposes dynamic task re-selection as the response to such failures. Adaptivity is the ability of the robot team to change its behavior over time in response to a dynamic environment or changes to the team's mission. The author demonstrates the effectiveness of the architecture using a team of three robots performing a laboratory version of hazardous waste cleanup.
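The following sketch is one way to read the impatience/acquiescence idea; it is illustrative only and does not reproduce ALLIANCE's actual motivation equations or parameter names.

```python
def update_impatience(motivation, dt, task_claimed_by_other, impatience_rate,
                      patience_budget, time_other_has_spent):
    """Motivation toward a task grows over time; another robot suppresses it only
    until that robot has exceeded the time this robot is willing to grant it."""
    if task_claimed_by_other and time_other_has_spent < patience_budget:
        return 0.0                                   # stay quiet and let the peer work
    return motivation + impatience_rate * dt         # otherwise keep growing impatient

def should_acquiesce(time_on_task, own_time_budget):
    """Give the task back to the team once the robot's own time allowance is spent."""
    return time_on_task > own_time_budget
```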

PAGE 43

2.2.2 DRS

DRS [21] addresses the issue of dynamic distributed task allocation when more than one robot can perform the same task. The architecture is fully distributed and assumes no centralized mechanism such as a shared CPU or shared memory. Further, the communications between the robots are considered limited. Each robot makes decisions independently and all robots cooperate to achieve a common goal. The architecture has been designed for applications in space exploration, defence operations and other automation systems where high reliability is required.

DRS is based on a set of distributed mutual exclusion algorithms that use a "sign board" for inter-robot communications. Each robot can read and write on the board, and any other robot of the team can read the message. The characteristics of the distributed mutual exclusion algorithms are:

1. Mutual exclusion: the capacity of a resource should never be exceeded.
2. Deadlock free: the algorithm should not result in situations where none of the robots can ever get a resource.
3. Lockout free: the algorithm should ensure that a robot that has requested access to a resource eventually gets its turn to check the resource out.

One of the issues not discussed in DRS is that of dynamic reallocation due to robot failures.
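A toy sign board is sketched below to illustrate the resource-capacity property. It is a centralized stand-in for what DRS achieves with message passing and is not the DRS algorithms themselves; the class and method names are assumptions.

```python
class SignBoard:
    """Centralized stand-in for a sign board (DRS itself avoids shared memory)."""

    def __init__(self, capacity):
        self.capacity = capacity          # resource name -> how many robots may hold it
        self.holders = {}                 # resource name -> set of robot names

    def claim(self, resource, robot):
        holders = self.holders.setdefault(resource, set())
        if robot in holders:
            return True                   # already holding the resource
        if len(holders) < self.capacity.get(resource, 1):
            holders.add(robot)            # capacity is never exceeded (mutual exclusion)
            return True
        return False                      # robot backs off and retries later

    def release(self, resource, robot):
        self.holders.get(resource, set()).discard(robot)

# usage: board = SignBoard({"charging_dock": 1}); board.claim("charging_dock", "robot_2")
```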

PAGE 44

2.2.3 ACTRESS

ACTRESS [22] is a decentralized control system for multi-robot systems that addresses the issues of communication, task assignment and path planning among heterogeneous robotic agents. Each robot is considered to have the ability to understand the target of tasks, recognize the surrounding environment, act and manage its own conditions. Further, each robot has the ability to communicate with any other robot of the team to work in parallel on specific tasks or to avoid interference on other tasks. The contribution of ACTRESS is the ability of recruitment: many robots can work on a task in parallel and request help when needed. The efficiency of ACTRESS has been tested on an application where mobile robots perform a box pushing task.

2.2.4 CAMPOUT

CAMPOUT [23] is a distributed control architecture based on a multi-agent behavior-based methodology. The architecture consists of a set of elementary architectural mechanisms for behavior representation, behavior composition, behavior coordination, group coordination and the interfaces between them.

Behavior representation is implemented using finite state machines. A behavior is formalized as a mapping that relates each possible sensory information sequence to an action. The most desired actions are assigned the value 1 and the undesired the value 0.

PAGE 45

Behavior composition refers to the mechanism used to build high level behaviors by combining lower level behaviors based on the behavior coordination mechanisms.

Behavior coordination mechanisms are divided into two complementary classes: arbitration and command fusion. Arbitration mechanisms select one behavior from a group of competing behaviors and give it complete control of the system until the next cycle. CAMPOUT uses priority-based arbitration, where behaviors with high priority are allowed to suppress the output of behaviors with lower priorities, and state-based arbitration, which is based on DES. Command fusion mechanisms combine multiple behaviors to form a control action. This approach allows all the behaviors to contribute to the control of the system in a cooperative manner. CAMPOUT uses the following command fusion techniques:

1. Voting techniques that interpret the output of each behavior as votes for or against all possible actions.
2. Fuzzy commands that use fuzzy inference to formalize the action selection process.
3. Multiple behavior fusion mechanisms that select the best trade-off between the task objectives that satisfies the behavioral objectives.

Group coordination is treated as the coordination of multiple distributed behaviors, where more than one decision maker is present. Finally, the robots communicate either by interaction or through direct communication.
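As an illustration of the voting flavour of command fusion in CAMPOUT, the sketch below uses assumed interfaces rather than CAMPOUT's mechanism: each behavior scores every candidate action in [0, 1] and the action with the highest weighted vote is executed.

```python
def fuse_by_voting(behaviors, weights, candidate_actions):
    """Each behavior returns a vote in [0, 1] for an action; pick the best total."""
    best_action, best_score = None, float("-inf")
    for action in candidate_actions:
        score = sum(w * vote(action) for vote, w in zip(behaviors, weights))
        if score > best_score:
            best_action, best_score = action, score
    return best_action

# example with two hypothetical behaviors voting over candidate turn rates (rad/s)
goal_vote = lambda turn: 1.0 - abs(turn - 0.2)              # prefers a slight left turn
clearance_vote = lambda turn: 1.0 if turn >= 0.0 else 0.2   # prefers non-negative turn rates
fuse_by_voting([goal_vote, clearance_vote], [1.0, 2.0], [-0.4, 0.0, 0.2, 0.4])
```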

PAGE 46

The architecture is designed for planetary surface exploration. The authors of the paper do not present a failure recovery mechanism incorporated into the architecture.

2.3 Centralized Architectures

Centralized approaches for control architectures are used primarily in RoboCup (soccer with robots), where a central computer receives inputs from the robots of the team and external sensors (a vision system mounted on top of the soccer field) and decides on the strategy that the robots should follow.

2.3.1 CMUnited

CMUnited [8] is an architecture that perceives the overall system as a combination of robots, external sensors and a centralized interface computer. The complete system is fully autonomous. The external sensor (vision) identifies the positions of each robot and the ball and transmits this information to the centralized computer. The computer runs a set of different algorithms that evaluate the team's state and decide what the robots should do next. The central computer then sends command actions to the robots of the team. This is a typical centralized control architecture. The major disadvantage of such architectures is that in case of communication failures each member of the team stops reacting.

PAGE 47

2.4 Hybrid Architectures

2.4.1 GOFER

GOFER [24] is a sense-model-plan-act architecture designed to control a team of multiple robots in indoor automation applications. The architecture includes a task planning system, a task allocation system, a motion planner and an execution system. The task planner derives plans made of high level actions such as "go to position A" or "get object K". The task allocation system orders and allocates tasks or actions to the robots. The motion planning system converts the high level actions into motion commands, and finally the execution system monitors the execution and reacts to unexpected events.

The task allocation system is partially centralized. Each robot communicates with the task allocator to determine its current task. The task allocator receives orders from the task planning system, generates plan structures and provides the available robots with a description of these plans. Each robot acquires a goal in two ways: the first is to ask or to be asked to perform a task, and the second is to autonomously generate tasks with respect to the current situation. For example, a robot may ask the other robots to determine what it can do for them. GOFER uses hierarchical Petri nets for interpretation of the plan decomposition and execution monitoring.

PAGE 48

2.4.2 3TEAR

3T [25] is a three-layered architecture, with skills, sequencing and planning layers. The planner constructs partially ordered plans, listing tasks for the robot to perform according to some goals. The sequencing layer decomposes the tasks that have been constructed by the planner into sets of actions. Each action corresponds to a set of skills that are activated in the skills layer. Additionally, event monitors activated in the skills layer notify the sequencing layer of the completion of the set of actions.

2.4.3 A Hybrid Control Architecture for Mobile Robots

The architecture presented in [26] combines aspects of classic control and behavior-based control. Both continuous and discrete event systems are considered. The DES part is modelled using Petri nets. The control architecture consists of four layers: user interface, action planning, motion planning control and obstacle avoidance. The highest level is the user interface, which stores the instructions that a robot should follow. These instructions are a set of actions that the robot should perform according to the sensor readings. The action planning layer considers the directions of the user interface and issues a reference for the motion control level. The action planning level is designed using Petri nets. The motion control level has a reactive control loop to avoid obstacles. This architecture is designed to control a single robot. The concept of control that the authors propose can easily be transferred to teams of robots, where the user interface and

PAGE 49

action planning layers are located on a centralized computer and the motion planning and obstacle avoidance layers are located in each robot.

2.4.4 Hybrid Algorithms of Multi-Agent Control of Mobile Robots

The authors of [27] combine methods from AI and neural network control to solve the problem of multi-agent robotic systems. The proposed architecture consists of a strategic (supervisor) and a tactical (local) level of control. The strategic level uses AI techniques to address task decomposition, optimal task allocation, global environment modelling and avoidance of collisions between the robots. The tactical level controls actions such as obstacle avoidance, optimal path planning and control of the robot actuators. Each robot is a complete, independent system that is able to perform navigation, acquire sensory data and communicate with the other robots and the supervisor. One drawback of the paper is that the authors use external databases to store the complete paths of each robot. In this way the communication channels are always busy carrying information from the robots to the supervisor and to the database. It would be sufficient to store only the current positions of the robots in the central computer.

2.5 Hybrid Control Architecture for Autonomous Patrolling

The control architectures presented in this chapter are applicable to mobile robot teams. Applications such as autonomous patrolling can be modelled using any of these


architectures. The approach followed in this work is closest to the hybrid control architectures. Each robot has a certain level of autonomy, which allows task execution, while a central computer or a robot is responsible for mission planning. The control architecture is decomposed into three levels of control: the highest level is responsible for task allocation between the robots, the middle level is responsible for task execution (monitoring), while the lowest level is responsible for each robot's operation. The overall architecture encompasses a continuous and a discrete control system. The continuous control system consists of a set of functional components that can be activated individually or in parallel and are related to each robot's task assignment. The discrete control system, the Mission Planner, is a Discrete Event Dynamic Systems (DEDS) supervisor that allocates tasks to the robots and oversees the behavior of the robot team.
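To make the decomposition concrete, the following is a minimal, illustrative Python sketch of the three control levels described above; the class names, task labels and the simple dispatch loop are hypothetical and are not taken from the dissertation's implementation.

```python
# Illustrative sketch of the three-level decomposition: a DEDS-style Mission
# Planner allocates tasks, and each robot executes its assigned functional
# components. All names are hypothetical.

class FunctionalComponent:
    """Continuous-control component running on a single robot (e.g. navigation)."""
    def __init__(self, name):
        self.name = name
    def step(self):
        print(f"  running component: {self.name}")

class Robot:
    """Lowest level: executes the functional components bound to its current task."""
    def __init__(self, robot_id):
        self.robot_id = robot_id
        self.components = []
    def assign(self, components):
        self.components = components
    def step(self):
        for c in self.components:
            c.step()

class MissionPlanner:
    """Highest level: supervisor that maps tasks to available robots."""
    def __init__(self, robots):
        self.robots = robots
    def allocate(self, tasks):
        # naive round-robin allocation; a DEDS supervisor would instead enable
        # or disable events according to the mission specification
        for i, task in enumerate(tasks):
            robot = self.robots[i % len(self.robots)]
            robot.assign([FunctionalComponent(name) for name in task])
            print(f"task {task} -> robot {robot.robot_id}")

if __name__ == "__main__":
    team = [Robot(0), Robot(1)]
    planner = MissionPlanner(team)
    planner.allocate([("navigate", "obstacle_avoidance"), ("patrol", "target_tracking")])
    for r in team:
        r.step()
```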


Chapter 3 Navigation and Obstacle Avoidance 3.1 Introduction A novel, efficient and robust approach is presented for vision based depth estimation. The method does not require any camera calibration technique, which would add computational load to the system; instead, it is based on the image size, the field of view of the camera(s) and the relative position of two image frames of the same scene. The image size is set by the user, the field of view is provided by the camera's manufacturer, and the relative distance can be measured. No additional information is required for a given application. A case study demonstrates the efficiency of the presented method for indoor mobile robot motion planning and collision avoidance. Simultaneous ultrasonic sensor and camera range measurements are used to identify obstacles and navigate the robot around them. Ultrasonic sensors have the advantage of fast and accurate range measurements, but in certain cases small surface objects, or objects situated at a wide angle relative to the ultrasonic sensor(s), cannot be detected. Given these limitations, a vision system may


31 be used to ident ify obstacles within the field of view of the robot; in this chapter obstacles located at distances up to 8m are identified accurately. This is achieved using a color space transformation, adaptive thresholding and the proposed method. The YC b C r color spa ce has been used to retrieve the obstacle's color from the acquired image; this choice over the HSI and CMYK color spaces is justified because the YC b C r color space demonstrated better performance in locating the object in question (however, results using all three color spaces are included for comparison purposes). Both sonar sensors and cameras are activated and operate simultaneously and in parallel to obtain range measurements from common search areas in the front of the mobile robot. The algorithm all ows for "back and forth" sonar / camera based obstacle identification, in the sense that although sonar sensors may initially identify a potential obstacle, camera data may be used to complement and decide about the presence / absence of potential obstacle s and navigate around them. The implemented vision system consists of two uncalibrated rotated color cameras mounted on top of the robot at a distance of 45 cm from each other. Experimental results and comparisons using a parallel stereoscopic, rotated and monocular vision system are presented. The computational complexity of the camera based range measurement is of order O(n 2 ) where n is the dimension of a square image, while the complexity of the sonar based range measurement is of order O(n) where n is the number of the sonar sensors. Experimental validation and verification of the proposed method has been demonstrated using the ATRV mini skid steering differential drive robot, equipped with 350 MHz PIII processor and 256 MB of RAM. The ATRV mini u ses 24 ultrasonic


32 sensors, a GPS, a compass unit and a vision system that consists of two pan/tilt video cameras, SONY EVI D30. The robot runs RedHat Linux 6.0 and Mobility interface. Obstacles are assumed to be of yellow, red, green, blue, or mixed color. No enhancements or modifications to the original system have been performed. Movement from an initial to a final goal point follows the fuzzy controller framework reported in [28] Experiments confirm the effectiveness of the presented approach; the maximum computational error as well as the Normalized Root Mean Square Error, rmse of range measurements using either a rotated, paralle l, or monocular vision system is about 3.5% (rmse 0.023), 4.21% (rmse 0.025) and 2% (rmse 0.011) respectively for obstacles lying at a distance of 27 800 cm from the robot. This computational error is considerably better compared to results presented in [29] [30] [31] [32] [33] [34] [35] The accuracy and error margin related to sonar sensor range measurements depends heavily on the type of sonars used; however, in all cases, accurate data is returned within a 2 meter distance. The chapter is org anized as follows. The next section refers to related work and comparisons. Section 3.3 describes the vision system algorithm, and Section 3.4 presents the motion algorithm. Section 3.5 is dedicated to experimental results. Section 3.6 discuses the case of monocular vision system (basically for completeness purposes) and Section 3.7 concludes the chapter.


33 3.2 Related Work and Comparisons In general, vision based range measurements require knowledge of the camera intrinsic and extrinsic parameters. A wi dely used technique to acquire this knowledge is camera calibration where a set of linear equations with 12 unknown parameters needs be solved [36] This technique is applicable to known or unknown sc enes assuming that the intrinsic parameters of the camera do not change for different views. This is a valid assumption when pinhole cameras are used. The modern CCD cameras tend to automatically adjust their intrinsic parameters in order to acquire clear images. Thus, calibration has to be repeated every time the scene or the camera orientation changes. Reported research in [29] a nd [30] estimates depth by fusing defocus and stereo with reported rmse errors of about 0.12 and 0.34 respectively. In [31] a sub pixel stereo method is proposed with reported computational errors of more than 10%. In [32] dynamic stereo images are used with reported computational error less than 5% In [33] a defo cus method is used with errors of less than 5% In [34] the robustness of image based visual servoing is explored with reported errors of 6% Research in [35] combines a monocular CCD camera and an inertial sensor with an error of about 5.42% Reported research in [35] includes only simulation results, therefore, no comparison is made. Regarding depth estimation, the presented approach dif fers from related work presented in [37] in which range through visual looming is computed using a monocular vision system. A major disadvantage of this approach is that an object's projection must fit entirely within the focal plane, as opposed to our approach where only a part of the


34 object is required for the same computation. Also the distance between the camera and the object is assumed to be measured from the object's center. In our appro ach, a distance value is calculated for each of the object's corners providing more accurate range measurement especially when the object is not parallel with the focal plane. Furthermore, even no calibration is required in [37] one has to know the physical size of the image plane as opposed to our approach where no such information is required. The approaches presented in [38] [39] [40] [41] [42] do not use the vision system for range measurements. The research in [38] uses vision system information for robot localiz ation and detection of static obstacles through an environment model, and sonar range measurements to detect moving obstacles. The approach followed in [39] for indoor navigation of robot teams uses sonar information to detect walls, open doors and corners through characteristic sonar footprints, while the vision system is activated to describe what has already been detected. Similarly, the work in [40] uses range information from ultrasonic sensors that is transferred to the visio n system in order to provide an absolute scale for the image coordinates. In [41] the role of the vision system is to track a color target while the ultrasonic sensors are used for collision avoidance. In [42] two cooperative robots are using visual information to detect landmarks for localization. The work in [43] presents robot architecture able to perform real time vision processing using 4 cameras mounted on the robot. Approaches to robot navigation using range measurements derived from ultrasonic sensors are the topic of [44] [28] [45] [46] [47] The presented approach demonstrates an effective and accurate way how data derived from a vision system can be converte d to depth measurements without using any


camera calibration technique. As a case study, sonar and vision system data are utilized simultaneously for robot navigation and collision avoidance in indoor environments. 3.3 Vision System The vision system of the ATRV mini consists of two uncalibrated pan/tilt color cameras mounted on top of the robot at a distance of 45 cm from each other. The main steps of the vision algorithm to convert image information to range data are shown in Fig. 3.1(a). The case of the monocular vision system differs in the frame grabbing sequence, meaning that the pair of images is grabbed sequentially, as illustrated in Fig. 3.1(b). Fig. 3.1 Block diagram of vision system function


36 3.3.1 Image Acquisition Image acquisition is ach ieved using the Video4Linux API at a rate of 30 fps Since both cameras share the same frame grabbing device, the frame rate is reduced to 14 fps for each camera. Each 24 bit color image has a resolution of 320x240 pixels. 3.3.2 Color Space Transformat ion The YC b C r color space is used to retrieve the obstacle's color from the acquired image; it is chosen over the HSI and CMYK color spaces because it demonstrated better performance in locating the object in question. The transformation from the origina l RGB color space is documented in [48] and [49] 3.3.3 Applying Threshold A threshold technique has been implemented in the YC b C r HSI and CMYK color spaces. This is basically done for comparison purposes. Experiments were conducted in an indoors lab environment under different lighting conditions. However the light was uniformly distributed on the obstacles. Images of the same color obstacles obtained in the YC b C r HSI and CMYK color spaces were used for identification and comparison purposes.


At first, an image containing a yellow obstacle was considered, followed by an image containing a multi color obstacle; the purpose of this second image is to illustrate the ability to recognize different colors (color segmentation) under different lighting conditions, given that the obstacle is located at various distances from the robot. 3.3.4 YCbCr Color Space The Cb component of the YCbCr color space is related to the amount of the blue component. Colors containing a high quantity of blue are represented brighter in the Cb image. Yellow contains a narrow quantity of blue; it is represented as the darkest of all colors in the Cb image. A threshold value to recognize yellow color is: $T = t$, when $H(t) > h$ and $t = \min(i)$ (3.1) where T is the threshold value, H() is the function of the Cb image histogram, h is the pixel count beyond which an obstacle is considered big enough to participate in the threshold calculation, i is the intensity of the Cb component and t is the smallest intensity value whose histogram count is greater than h. Because of light distribution variations, not every pixel (of the yellow obstacle represented in the Cb image) has the same intensity. A threshold area is used to distinguish the obstacle's pixels from the background, its center being the value T with boundaries between T - 0.1T and T + 0.1T. A new image representing only the object of interest is created according to the equation: $g(x,y) = \begin{cases} 255, & T - 0.1T < C_b(x,y) < T + 0.1T \\ 0, & \text{else} \end{cases}$ (3.2)


38 g(x, y) is the intensity function of the new image and C b (x, y) the intensity function of the C b component. The value of T is computed for each image separately using the image histogram. Thus, the system adapts to illumination variations. Fig. 3.2 illustrates a color image containing a ye llow obstacle (a), the result of the threshold technique (b), and the histogram of a C b component (c). T is the minimum pick on the histogram and the threshold area is 10% around T Fig. 3.2 Applying threshold technique in YC b C r color space However, using the properties of the YC b C r color space more colors such as red, blue and green may be extracted from a color image. In particular, red color corresponds to the highest intensity values on the C r component and to medium inten sity values between (130, 160) on the C b component. Similarly, blue color corresponds to the highest values on the C b component and to the lowest values on the C r component. Finally, green color corresponds to C b component values in the area between (70, 1 00) and to C r component values in the area between (85, 120) Fig. 3.3, shows a color image containing


the four color obstacle (a), the result of the threshold technique (b) according to equation (3.3), and the histograms of the Cb and Cr components. $g(x,y) = \begin{cases} 255, & C_r(x,y) > T_1 \text{ and } 130 \le C_b(x,y) \le 160 \\ 200, & 70 \le C_b(x,y) \le 100 \text{ and } 85 \le C_r(x,y) \le 120 \\ 150, & C_b(x,y) > T_2 \text{ and } C_r(x,y) < T_3 \\ 100, & T_4 - 0.1T_4 < C_b(x,y) < T_4 + 0.1T_4 \\ 0, & \text{else} \end{cases}$ (3.3) C_b(x,y) and C_r(x,y) are the intensity functions of the Cb and Cr components, respectively. T_1 and T_2 are the peaks with the highest intensity value in the Cr and Cb components, respectively, T_3 is the peak with the lowest intensity value in the Cr component and T_4 is as defined in (3.1). Implementation using the HSI and CMYK color spaces is described in the appendix. (a) (b) (c) (d) Fig. 3.3 Applying threshold technique to extract red, green, blue and yellow obstacles using the YCbCr color space
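As an illustration of the adaptive thresholding described in (3.1)-(3.3), here is a minimal numpy sketch that recovers the yellow threshold T from the Cb histogram and builds the binary mask of (3.2); the function names and the histogram-count parameter h are hypothetical choices, while the 10% band follows the text.

```python
import numpy as np

def yellow_threshold(cb, h=200):
    """Adaptive threshold T (Eq. 3.1): smallest Cb intensity whose
    histogram count exceeds h; yellow maps to the darkest Cb values."""
    hist, _ = np.histogram(cb, bins=256, range=(0, 256))
    candidates = np.nonzero(hist > h)[0]
    return int(candidates.min()) if candidates.size else None

def yellow_mask(cb, h=200):
    """Binary image of Eq. (3.2): 255 inside T +/- 10% of T, 0 elsewhere."""
    T = yellow_threshold(cb, h)
    if T is None:
        return np.zeros_like(cb, dtype=np.uint8)
    lo, hi = 0.9 * T, 1.1 * T
    return np.where((cb > lo) & (cb < hi), 255, 0).astype(np.uint8)

# usage: cb is the Cb plane of a 320x240 YCbCr image
cb = np.random.randint(0, 256, (240, 320)).astype(np.uint8)
mask = yellow_mask(cb)
```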


3.3.5 Gauss Filter The thresholded image may have pixels that do not belong to the object of interest, because some of the background elements contain a percentage of the color of interest. To eliminate these pixels a [5x5] Gauss filter is used [50]. 3.3.6 Extraction of Interesting Points In order to extract characteristic points in an image, the Moravec interest operator has been used [50]. The operator computes the directional variances (I_1, I_2, I_3, I_4) for each point and assigns an interest value to it. The directions are computed using all pixels in a window centered about the point, as shown in Equation (3.4). S represents the pixels of a window centered on this point; the window size used is [5x5]. $I_1 = \sum_{(x,y)\in S} [f(x,y) - f(x,y+1)]^2$, $I_2 = \sum_{(x,y)\in S} [f(x,y) - f(x+1,y)]^2$, $I_3 = \sum_{(x,y)\in S} [f(x,y) - f(x+1,y+1)]^2$, $I_4 = \sum_{(x,y)\in S} [f(x,y) - f(x+1,y-1)]^2$ (3.4) An interest value is assigned to each pixel according to: $I(x,y) = \min(I_1, I_2, I_3, I_4)$ (3.5) The image is then segmented in regions; each region's interest point is defined as the pixel with the maximum interest value.
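The following is a small numpy sketch of the Moravec operator of (3.4)-(3.5) over a 5x5 window; the function name and the use of zero interest at the image border are illustrative choices, not taken from the dissertation's code.

```python
import numpy as np

def moravec_interest(img, half=2):
    """Interest value per pixel (Eqs. 3.4-3.5): minimum of the four
    directional sums of squared differences over a (2*half+1)^2 window.
    Border pixels are left with zero interest."""
    f = img.astype(np.float64)
    h, w = f.shape
    out = np.zeros((h, w))
    shifts = [(0, 1), (1, 0), (1, 1), (1, -1)]   # the four directions of Eq. (3.4)
    for y in range(half, h - half - 1):
        for x in range(half + 1, w - half - 1):
            win = f[y - half:y + half + 1, x - half:x + half + 1]
            variances = []
            for dy, dx in shifts:
                shifted = f[y - half + dy:y + half + 1 + dy,
                            x - half + dx:x + half + 1 + dx]
                variances.append(np.sum((win - shifted) ** 2))
            out[y, x] = min(variances)
    return out
```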


3.3.7 Interesting Points Correspondence The area of each image that contains the object of interest is segmented in four regions. An interesting point belonging to a region of the first image corresponds to the interesting point belonging to the same region of the second image. The limitation of the method is that it can only retrieve information from the two closest obstacles. However, this method reduces computational time, since no correspondence algorithm is needed, which the ATRV mini's CPU cannot spare. Fig. 3.4 shows the images from the left (a) and right (b) camera as well as their corresponding pixels (c). Whereas the interesting point correspondence mentioned above reports accurate results, it depends on the output of the thresholded image. For this reason, a correlation algorithm may also be used. The correlation coefficient, r, is defined as the sum of the products of the pixel brightnesses divided by their geometric mean [51]: $r(d_x,d_y) = \dfrac{\sum_{(i,j)\in S} f(d_x+i, d_y+j)\, g(i,j)}{\sqrt{\sum_{(i,j)\in S} f^2(d_x+i, d_y+j)\; \sum_{(i,j)\in S} g^2(i,j)}}$ (3.6) where f, g are the left and the right image respectively, S is a pixel window centered around an interesting point (9x9), and d_x, d_y are the disparity of the pair of pixels to be matched. In this case all the interesting points of the right thresholded image derived by the Moravec operator are matched with pixels on the left image.
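A compact numpy sketch of the normalized correlation matching in (3.6) follows; the helper names and the candidate-search strategy (scanning disparities along the same row) are assumptions made for illustration.

```python
import numpy as np

def correlation(left, right, pt, disp, half=4):
    """Normalized correlation coefficient of Eq. (3.6) between a (2*half+1)^2
    window of the right image centered at pt=(row, col) and the window in the
    left image shifted by disp=(dy, dx). pt is assumed to lie far enough from
    the image border for both windows to be fully inside the images."""
    (y, x), (dy, dx) = pt, disp
    g = right[y - half:y + half + 1, x - half:x + half + 1].astype(np.float64)
    f = left[y + dy - half:y + dy + half + 1,
             x + dx - half:x + dx + half + 1].astype(np.float64)
    denom = np.sqrt((f ** 2).sum() * (g ** 2).sum())
    return (f * g).sum() / denom if denom > 0 else 0.0

def best_match(left, right, pt, max_disp=60, half=4):
    """Scan horizontal disparities along the same row and keep the best score."""
    scores = [(correlation(left, right, pt, (0, dx), half), dx)
              for dx in range(max_disp)]
    return max(scores)   # (best correlation coefficient, disparity)
```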


42 (a) (b) (c) (d) Fig. 3.4 Interesting point cor r espondence Both methods have been tested and both provide similar accuracy. For practical reasons, only the first method can be implemented on the ATRV mi ni on board computer, in order to get real time results. 3.3.8 Distance Computation The presented method for depth estimation uses only the size of the image, the angle of view of the camera and the relative position of two different captures of the s ame scene. Fig. 3.5 shows the field of view of a monocular camera, where A is the length in cm of the horizontal field of view in a distance of Z i cm from the camera system, (X i Yi, Z i ) the world coordinates of any point in the field of view, (x i ,y i ) the projection of the world point onto image plane and (y p x p ) its pixel coordinates. The origin of the pixel coordinate system coincides with the center of the image.


For an image of 320x240 pixels, one may observe that: $X_i = \frac{2}{320}\, x_p\, Z_i \tan(24.4^\circ)\;(cm)$ (3.7) A second image containing the same world point is acquired from a different position in order to compute depth. In the case of a monocular vision system this is achieved by moving towards the world point (stereo from motion). Fig. 3.6 demonstrates this procedure. For both images the following equation is derived: $Z = \frac{x_{p2}\, B}{x_{p1} - x_{p2}}\;(cm)$ (3.8) where B is the relative distance between the two shots in cm (baseline) and x_p1, x_p2 are the coordinates of the pixel which represents the world point in the first and second image respectively. Fig. 3.5 Horizontal field of view of monoscopic vision system


Fig. 3.6 Stereo from motion For the parallel stereoscopic vision system shown in Fig. 3.7 the depth is derived from: $Z = \frac{320\, B}{2\,(x_{p1} - x_{p2}) \tan(24.4^\circ)}\;(cm)$ (3.9) From Fig. 3.7, it follows that the fields of view of the two cameras converge at a distance of 49.6 cm from the vision system. This is the distance of the closest obstacle that the vision system can locate. Fig. 3.7 Depth from stereo vision
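A minimal sketch of the parallel-stereo range computation in (3.9), for a 320-pixel-wide image and a 24.4-degree half field of view; the function name and the example values are illustrative.

```python
import math

def stereo_depth_cm(xp1, xp2, baseline_cm=45.0, width_px=320, half_fov_deg=24.4):
    """Depth from parallel stereo (Eq. 3.9): Z = width * B / (2 * disparity * tan(half_fov)).
    xp1, xp2 are pixel x-coordinates of the same point, origin at the image center."""
    disparity = xp1 - xp2
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return width_px * baseline_cm / (2.0 * disparity * math.tan(math.radians(half_fov_deg)))

# example: a 30-pixel disparity with the 45 cm baseline used in this chapter
print(round(stereo_depth_cm(20, -10), 1))   # roughly 529 cm
```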


To minimize this distance, both cameras are rotated such that their optical axes converge closer (see Fig. 3.8). In this work, the angle of rotation, a, is set to 12 degrees, reducing the distance of the closest obstacle to 27 cm, which is the length between the front bumper of the mobile robot and the base of the vision system. For the rotating stereoscopic vision system of Fig. 3.8, depth is calculated as: $Z = \frac{\cos^2 a - (C - D)\sin a \cos a - CD \sin^2 a}{(C - D)\cos 2a + (1 + CD)\sin 2a}\, B$ (3.10) where a is the angle of rotation in degrees, $C = \frac{2}{320}\, x_{p1} \tan(24.4^\circ)$ (3.11) $D = \frac{2}{320}\, x_{p2} \tan(24.4^\circ)$ (3.12) Since the area of each image containing the object of interest is segmented into four regions, there are four pairs of corresponding interesting points and, therefore, four measurements of distance. In the case of one obstacle, the minimum and maximum measurements are ignored and the distance is defined as the average of the other two. In the case of two obstacles, or one obstacle located at an angle relative to the vision system, the distance has two values and it is defined as the average of the measurements which correspond to the same vertical edge.


46 Fig. 3.8 Rotated stereoscopic vision system 3.3.9 Computational Time The time required converting vision system data to range measurements using the rotating or the parallel vision system is 0.43 sec when runnin g on ATRV mini's on board computer, while its speed reaches 0.6m/sec. 3.4 Motion Algorithm The motion algorithm uses range data from both ultrasonic sensors and the vision system for robot motion planning and collision avoidance. Considering the ATRV mini, Fig. 3.9, only the eight front ultrasonic sensors have been utilized to derive the motion algorithm. The concept of implemented "collision avoidance" is to calculate the length of the area left and right of the obstacle and to follow the route wit h larger free space. The


cardinal sensor is the vision system, which means that in case both sensors detect an obstacle, the range measurements are calculated through vision data. Fig. 3.9 Location of the sonars on board ATRV mini Using data from the ultrasonic sensors, the free space calculation is done by comparing range measurement summations from the sensors mounted on the robot's front left and front right. If the summation of the left sensor data is greater than the summation of the right one, the robot moves to the obstacle's left; otherwise, it moves to the obstacle's right, according to: Turn right if: $\sum_{i=20}^{23} U_i < \sum_{i=0}^{3} U_i$ (3.13) Turn left if: $\sum_{i=20}^{23} U_i > \sum_{i=0}^{3} U_i$ (3.14) U_i is the range data in cm from sensor i. Using data from the vision system, X, the horizontal length of an area corresponding to x_p pixels in a 320x240 resolution image, is calculated as (see above):


$X = \frac{2}{320}\, x\, Z \tan(24.4^\circ)$ (3.15) Z is the distance between the obstacle and the cameras. To calculate the area's length left of the obstacle, x represents the difference in position between the first image pixel and the first obstacle pixel, while for the calculation of the area's length right of the obstacle, x represents the difference in position between the last obstacle pixel and the last image pixel, such that: $X_{left} = \frac{2}{320}\,(x_p - x_0)\, Z \tan(24.4^\circ)$ (3.16) $X_{right} = \frac{2}{320}\,(x_{320} - x_p)\, Z \tan(24.4^\circ)$ (3.17) x_0, x_320 are the first and the last image pixel respectively, and x_p is the obstacle's pixel. Since the vision system is under rotation, it is preferred to compute the length of the left area using data from the right camera and the length of the right area using data from the left camera. Collision avoidance is completed by the robot's turn to the side with the greater length, until none of the sensors detects an obstacle, followed by forward movement for Z cm. Fig. 3.10 presents the flow chart of the motion algorithm. To avoid an obstacle, the robot increases its rotational velocity and reduces its translational speed to half until none of the sensors detects the object. This response guides the robot safely left or right of the obstacle. The current position of the robot is entered into the fuzzy controller [28] and a motion vector reinstates the robot to its previous path towards the goal point.
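The turn decision of (3.13)-(3.17) can be summarized in a short sketch; the sonar index groups, function names and the camera-side bookkeeping are simplified assumptions for illustration.

```python
import math

def sonar_turn(ranges_cm, left_idx=range(20, 24), right_idx=range(0, 4)):
    """Eqs. (3.13)-(3.14): turn toward the side whose front sonars report
    more total free range. The index groups are assumptions for illustration."""
    left = sum(ranges_cm[i] for i in left_idx)
    right = sum(ranges_cm[i] for i in right_idx)
    return "left" if left > right else "right"

def vision_turn(first_obstacle_px, last_obstacle_px, z_cm,
                width_px=320, half_fov_deg=24.4):
    """Eqs. (3.15)-(3.17): compare the free width left and right of the obstacle."""
    k = 2.0 * z_cm * math.tan(math.radians(half_fov_deg)) / width_px
    x_left = k * (first_obstacle_px - 0)          # free width left of the obstacle
    x_right = k * (width_px - last_obstacle_px)   # free width right of the obstacle
    return "left" if x_left > x_right else "right"
```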


49 Fig. 3.10 The block diagram of the motion algorithm 3.5 Results For implementation purposes, two types of obstac les have been used: Type I with size 50x60x30 cm and Type II with size 40x30x20 cm. Type II obstacles are colored yellow and cannot be detected from ultrasonic sensors because of their height, which is equal to the distance between the ground and the ultra sonic sensors. Type I obstacles


50 cannot be detected from the vision system since they are not yellow. Multicolor obstacles are also used to illustrate capabilities of the approach in identifying different colors at different distances. Experiments have bee n conducted in an indoor lab environment with several "furniture obstacles" of different shapes, sizes and orientation, and with different lighting conditions. 3.5.1 Vision System Algorithm Experimental results showed that the threshold technique in t he YC b C r color space can identify more than 92% of the yellow obstacle without detecting any of the background pixels. On the other hand the techniques implemented in the HSI and CMYK color spaces did detect most of the obstacle's pixels (more than 70% and 85% respectively), but they also detected many pixels of the background. Hence, the YC b C r choice is justified. Experimental results of range measurements using a parallel and a rotated stereoscopic vision system demonstrate average errors of 2.26% and st andard deviation of 1.1 for the parallel system and 1.58% and standard deviation of 1 for the rotated one. The relation between the actual and the computed distance is illustrated in Fig. 3.11 and 3.12.


Fig. 3.11 Real distance vs computed distance using data from the parallel vision system Fig. 3.12 Real distance vs computed distance using data from the rotated vision system 3.5.2 Motion Algorithm The robot's trajectory is demonstrated from an initial point to a final goal point while collision avoidance is achieved using only the vision system. Fig. 3.13 demonstrates avoidance of two yellow obstacles located on the robot's right, suggesting left movement for avoidance.


52 Fig. 3.13 Collision avoidance using vision system data, experiment 1 Fig. 3.14 shows avoidance of two obstacles where the robot turns left because both obstacles are detected from the vision system. Fig. 3.14 Collision a voidance using vision system data, experiment 2 Fig. 3.15 demonstrates avoidance of three obstacles. The first obstacle that lies in the left of the robot, blocks the vision system from detecting the third obstacle. Thus, the robot has to move left since the vision system can detect the second obstacle.


53 Fig. 3.15 Collision avoidance using vision system data experiment 3 Fig. 3.16 shows another case with three obstacles. The first obstacle is located right of the robot and i t blocks the vision system from detecting the other two. Robot moves left and meets the second obstacle which lies left of the robot. The robot moves right and meets the third obstacle which is located right of the robot. Fig. 3.17 and 3.18 demonstrate the robot trajectory while it avoids three obstacles using range measurements from ultrasonic sensors. To avoid an obstacle, the robot turns left or right according to its relative position with the obstacle. Fig. 3.16 Collision a voidance using vision system data, experiment 4


Fig. 3.17 Collision avoidance using range measurements from the ultrasonic sensors, experiment 1 Fig. 3.18 Collision avoidance using range measurements from the ultrasonic sensors, experiment 2 Fig. 3.19 to 3.22 demonstrate the robot's trajectory and collision avoidance using data from both sonar and camera. For comparison purposes, the robot's trajectory using only the ultrasonic sensors is also shown. Type I obstacles are represented with green color and Type II with cyan.


55 Fig. 3.19 Collision avoidance using data from sonar and camera, experiment 1 Fig. 3.20 Collision avoidance using data from so nar and camera, experiment 2 Fig. 3.21 Collision avoidance using data from sonar and camera, experiment 3


56 Fig. 3.22 Collision avoidance using data from sonar and camera experiment 4 Fig. 3.23 show s snapshots of the robot's trajectory while it avoids three yellow obstacles. Fig. 3.23 Robot's trajectory snapshots


3.6 Discussion As mentioned earlier, the case of a monocular vision system has also been tested. Fig. 3.1(b) shows the main steps of the vision algorithm where the pair of images needed for range computation is grabbed sequentially. It is assumed that the mobile robot is moving directly towards the object of interest and no rotation is performed during the capture of each image pair. The previously derived equation $Z = \frac{x_{p2}\, B}{x_{p1} - x_{p2}}$ (3.18) is used for distance estimation. The basic steps of image acquisition, color space transformation, thresholding and filtering remain the same as presented above. However, using the assumption that the mobile robot is moving directly towards the object of interest, the interest point correspondence step can be improved. The Moravec interest operator increases considerably the computational time of the system (approximately 0.11 sec for each image). Instead, a method of image subtraction can be performed: the pair of thresholded images is subtracted and the outcome is a new image which reveals the edges of the object captured in the pair of consecutive images (Fig. 3.24). Performing a simple line scan algorithm, the corresponding pixels can be derived.
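A brief sketch of the monocular (stereo-from-motion) alternative: scan a row of each thresholded mask for the object's edge and apply (3.18). The edge-picking rule and function names are simplifications for illustration, not the dissertation's exact line-scan procedure.

```python
import numpy as np

def leftmost_edge(mask, row):
    """Column of the first nonzero pixel in a row, or None; the coordinate is
    shifted so the origin sits at the image center, as assumed by Eq. (3.18)."""
    cols = np.nonzero(mask[row])[0]
    return None if cols.size == 0 else int(cols[0]) - mask.shape[1] // 2

def monocular_depth(mask1, mask2, row, baseline_cm):
    """Stereo from motion (Eq. 3.18): Z = xp2 * B / (xp1 - xp2), where the robot
    advanced baseline_cm between the two thresholded captures."""
    xp1, xp2 = leftmost_edge(mask1, row), leftmost_edge(mask2, row)
    if xp1 is None or xp2 is None or xp1 == xp2:
        return None
    return xp2 * baseline_cm / (xp1 - xp2)
```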


Fig. 3.24 Interesting point correspondence for the case of monocular vision system The case of the monocular vision system requires accurate knowledge of the distance covered between two consecutive captured images; however, the robot's odometer cannot provide such accurate data. Thus, the computational error is significantly increased. This is the main reason why stereo vision is preferred. Indicative experimental results on range measurements using a monocular vision system, conducted without the use of the robot's odometer, demonstrate a mean error of 1.02% and a standard deviation of 0.54. 3.7 Conclusions This chapter has presented a simple vision based depth estimation technique using the image size, the angle of view of the camera and the relative position of two different captures of the same scene. The effectiveness of the proposed method has


59 been tested for mobile robot motion planning and collision avoidance in an indoors environment based on simultaneous ultrasonic sensor and/or camera range measurements. Compared to other related methods, experimental results have confirmed the simplicity and effectiveness of the presented approach as well as superior maximum computational errors and Normalized Root Mean Square Errors, rmse of range measure ments using either a rotated, parallel, or monocular vision system. Future work involves sensor fusion between laser scanner and stereoscopic vision system with the introduction of system failures.


60 Chapter 4 Task Achieving Behaviors 4.1 Introduct ion A robust and efficient method is presented for mobile robot dynamic target trac king and collision avoidance in indoor environments using stereo vision and a laser range finder. Key characteristics of the tracking system are real time operation, effic ient co mputational complexity and ability to adapt to different environments and moving targets. The computational complexity of the vision based tracking and range measurement is O(n 2 ) where n is the dimension of a square image; the complexity of the las er based range measurement is O(m) where m is the number of laser segments. Contrary to most existing approaches that are restricted to specific environments and ce r tain types of targets such as templates [52] [53] [54] cars [55] [56] and humans [57] [58] [59] [60] the presented method is rather general and applicable to any i ndoors env i ronment. The target is identified by its color using the HSI color space and a region growing algorithm The target color may be either pr e determined or dynam ically defined, regardless of the target's shape or other physical characte r istics. The distance between the target and the mobile robot is calculated using the stereo vision system data.


61 Limitations have been imposed on the cameras' motion so that pixels correspon d ences are found across a single epipolar line reducing in this way the computational complexity of the task. Distance is used to control the mobile robot's velocity. Collision avoidance with o b jects other than the target is accomplished using dat a from the laser range finder. The proposed tracking system operates as follows: As soon as a potential target has been identified, both cameras track the target independently of each other with their pan/tilt mech a nisms. The distance from the target is ca lculated using stereo geometry as the mobile robot moves towards the target. The robot's steering angle, is controlled by the angle of the pan mechanism as illustrated in Fig. 4. 1. Thus, the target is tracked even if one of the cameras fails to identify it; moreover, in this way, the target is being tracked even while the mobile robot avoids collision with surrounding obstacles, or the target moves in irregular terrains. Experimental validation and verification of the proposed method is demonstrated using the ATRV Jr skid steering differential drive robot, equipped with 3 GHz P IV pr ocessor and 1 GB of RAM. The ATRV Jr uses a 30m range laser finder that is located in front of the robot, a GPS a compass unit and a stereo vision system that consists of two uncalibrated pan/tilt video cameras, SONY EVI D30 mounted on top of the robot at a di stance of 35 cm fr om each other. The robot runs RedHat Linux 7.0 and Mobility interface. Suitable applications that the method may be used are warehouse patrolling and office building security and inspection, where the target may be any moving object and / or human (friend or enemy).


Fig. 4.1 Tracking control system The chapter is organized as follows: the rest of this section refers to related work and comparisons. Section 4.2 describes the vision system algorithm, while Section 4.3 presents the motion algorithm. Section 4.4 is dedicated to experimental results and Section 4.5 concludes the chapter. 4.1.1 Related Research Reported research in [52] uses template matching techniques for object tracking. The authors use a rate gyro and a camera to solve the problem of image deformation caused by the camera rolling and pitching during the mobile robot's motion on irregular terrains. In [55] a visual feedback controller is proposed using a fixed camera to control the mobile robot to track a moving target. The target is predetermined and its dimensions are known. In [61] and [57] the Condensation Algorithm has been used to implement vision based tracking of moving objects. Objects are tracked by their outlines and features.


63 Research in [62] solves the problem of targ et tracking and collision avoidance using magnetic sensors mounted on the left and right of the robot. A multi modal a n choring method has been used in [58] to track humans with a vision system and a laser range finder. Laser data are used to extract the legs of a person while the skin color is detected through camera images. In [53] a vision based tracking system is presented that uses trinocular sets of precisely aligned and rectified images to represent the trajectory of the object being tracked. In [63] a target tracking co n troller has been designed with coll ision avoidance consideration and simulation results are shown. In [56] and [64] an exper imental study has been performed on tracking two autonomous mobile robots using only distance sensors. Research in [65] d e rives a heuristic algorithm to match the velocity of target and tracker in an indoor struct ured e n vironment. In [59] a person tracking approach is presented using a stereo vision and face detection modules. This system ide n tifies skin color using a converted RGB color space and performs face detecti on using the face d etector library reported in [66] R e ported research in [67] uses infrared sensor to design a fuzzy controller to track a mobile robot. In [68] a fuzzy controller is presented for a ge neral target tracking sy s tem. Finally, research in [54] uses template matching techniques for visual target tracking. In [69] a fuzzy algorithm is used to detect an object based on a color cue and tracking is based on a maximum radius of displacement in su b sequent frames. Concerning vision based techni ques, research in [ 70] uses multivariate decision trees for piecewise linear non parametric function approximation to learn the color of the target object from training samples. The approach in [60] proposes a color model which includes the intensity information in HSI color space using B spline curves for face trac k


64 ing. Research in [71] uses the Histogram Intersection and Histogram Back projection techniques to match and locate a color in an image scene. In [72] color segmentation is performed based on contrast information and adaptive thresholds in static images. The authors in [73] use color segmentation to identify obstacles in indoor environments. Using a training set of images and the r g color space they present a model robust to shadows and illumination changes. Similar to [71] researchers in [74] propose color i ndexing using histograms of RBG color ratios. Another color tracking algorithm is pr esented in [75] where a neural network is used to robustly identify skin color regardless of lighting conditions. In [76] color histogram information is used to detect and track pede strian for a stationary visual su r veillance syste m. 4.1.2 Comparisons The main differences and advantages of the presented approach compared to r elated r e search are: 1. Using color histograms on the H I plane allows for a variety of o b jects to be used as a target, independently of the target's shape, size or other physical characteris tics. 2. Both cameras of the vision system track a target independently, providing a r edundant mechanism that helps avoiding loosing the target. This me ans that even if one camera lo ses the target', it can retrieve informati on from the other camera to find it again.


65 3. The robot's direction is controlled by the pan/tilt angles of the cameras, allo w ing the robot to avoid obstacles and keep tracking a target, or to keep tracking a target that moves on uneven terrains. 4.2 Visio n System The vision system of the ATRV Jr consists of two uncalibrated pan/tilt color cameras mounted on top of the robot at a distance of 35 cm from each other. The main steps of the pr o posed vision based tracking algorithm to convert image information t o camera's motion and range data are shown in Fig. 4.2. Each step is presented sep a rately. 4.2.1 Image Acquisition Image acquisition is achieved using the Video4Linux API at a rate of 30 fps Both cameras share the same frame grabbing device, which sup ports rates up to 30 fps from 4 analogue video inputs Each 24 bit color image has a resolution of 320x240 pixels. 4.2.2 Target Selection The target is being tracked by its color that can be either predetermined or d ynamically defined in the image scen e. Identifying color in an image using a region gro w


ing technique instead of template matching or pattern recognition techniques requires less computational time and allows the system to be robust in choosing multiple targets. The HSI color space has been chosen for the image manipulation techniques discussed below, because the HSI color space is similar to the human perception of colors [77]. Fig. 4.2 Block diagram of the vision system function 4.2.2.1 Predetermined Color When the target is a known object, the variation of the hue, saturation and intensity values of the color's representation in the HSI color space is known. A segmentation technique on the H I plane, based on a region growing algorithm, is used to separate the


67 target from the image's background. The basic concept is to identify a pixel, "seed", in the image that takes the mean hue and intensity values within the area of the obj ect's color and grow a region with similar neighbouring pixels. For example, the T shirt in Fig. 4.3a has hue and intensity values that vary between (310, 340) and (125, 145) respe ctively. Thus, the seed pixel will have a hue value of 325 and an intensity value of 135 The region growing algorithm will merge neighbouring to the seed pixels that have hue and intensity values in the former area. Fig. 4.3b shows the result of this technique. (a) (b) Fig. 4.3 Region growing results for the segmentation of an object with known color 4.2.2.2 Dynamically Determined Color When the target is unknown, it is defined as the first moving object in the image scene. Motion in a scene is identifie d by subtracting sequential frames. If the outcome of the subtraction is a black image then no motion has been detected in the image scene. On the other hand, if 2% or more of the outcome image pixels has values different than zero and it is 4 connected t hen motion has been detected. Initially, the or i ginal color images were used for the subtraction, but since the variations of color comp o nents are significant


even for images taken within short periods, it has been decided that the subtraction will occur in the greyscale images. The time difference between two frames must be proportional to the target's speed in order to identify the motion in the images. If the target is moving too fast and the time difference between the two frames is large, then the system will fail to identify motion. Conversely, if the target is moving slowly and the time difference between the two frames is small, the threshold of 2% of the image pixels will not be met. Experimentally it has been determined that for a walking man or for a moving mobile robot a difference of 0.3 sec is adequate to identify motion between the two frames. A median filter with window size 5x5 is applied to the subtracted image to eliminate individual pixels that were erroneously recognized as 'motion'. These pixels usually belong to objects with shining surfaces where light is irregularly reflected. When the target has been detected, the RGB to HSI transform is performed on the region of the image that surrounds it. This region will be denoted from now on as the region of interest. To identify the color of the moving object, the histograms of the hue and intensity components are computed for the region of interest. The hue and intensity value with the greatest frequency in the region of interest is used as the seed value for the region growing algorithm. To improve the accuracy of this selection, the seed is the pixel whose 8 connected neighbours present the maximum hue and intensity values in the region of interest. This is a strict criterion for the selection of the seed that helps to avoid locating objects in the scene with similar colors. Then, the region growing algorithm is applied to merge pixels that present hue and intensity values in a range around the seed's values. In Fig. 4.4 two sequential images (a), (b) are depicted as well as the region of interest derived by the motion (c) and the region growing algorithm (d).
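The following sketch illustrates this motion-triggered detection and region growing with numpy; the 2% motion threshold follows the text, while the tolerance values around the seed's hue and intensity and the function names are assumptions made for illustration.

```python
import numpy as np
from collections import deque

def motion_detected(gray1, gray2, frac=0.02):
    """Frame differencing on greyscale images: motion if at least `frac`
    of the pixels change (the 4-connectivity check is omitted here)."""
    diff = np.abs(gray1.astype(int) - gray2.astype(int)) > 0
    return diff.mean() >= frac

def grow_region(hue, inten, seed, tol_h=15, tol_i=10):
    """Region growing from `seed` (row, col): merge 4-connected neighbours whose
    hue and intensity stay within a tolerance of the seed's values."""
    h0, i0 = float(hue[seed]), float(inten[seed])
    mask = np.zeros(hue.shape, dtype=bool)
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if mask[r, c]:
            continue
        if abs(float(hue[r, c]) - h0) > tol_h or abs(float(inten[r, c]) - i0) > tol_i:
            continue
        mask[r, c] = True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < hue.shape[0] and 0 <= cc < hue.shape[1] and not mask[rr, cc]:
                queue.append((rr, cc))
    return mask
```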


69 (a) (b) (c) (d) Fig. 4.4 Region growing results for the segmentation of an object with unknown color The hue and intensity component histograms are only computed for the initial pair of i m ages and the maximum values are used to run the regi on growing algorithm for the rest of the images. Since illumination may vary even in indoor environments the max imum values of the Hue and Intensity components are recalculated for the region of inte rest in the case that the seed cannot be found. 4.2.3 Target Tracking One of the differences of the proposed method compared to others is that in exis ting approaches there is an effort to control the tracker's speed and steering angle to fo llow the target holding the vision system fixed. In the presented met hod, each camera of the stereo vision sy s tem tracks the target using their pan/tilt mechanisms. Thus, the target is being tracked even when the robot is in collision avoidance mode, or when the target is


70 moving in irregular terrains. The velocity of the ro bot is adjusted according to the relative distance between the robot and the ta r get, calculated using data derived from the stereo vision system. The robot's steering angle, is controlled by the angle of the pan mech anism of the cameras, Fig. 4.1. Each camera's motion is such that it keeps the target at the center of the image. Ther e fore, each image is segmented into 14 regions as shown in Fig. 4.5. When the mass center of the pixels belonging to the target falls into a numbered sub image, an appropr iate motion of the pan/tilt mechanism tends to reinstate the target to the center of the image. Experimentally it has been determined that when the mass center resides to the sub images 1 8, an angle of approximately 7 o on the horizontal axis and 5.4 o on the vert ical axis is efficient to reinstate the target to the center of the image. When the mass center resides on the sub images 9 14 a sharper motion is required, thus the angl e of the horizontal axis is set to 12 o Fig. 4.5 Image segmentation for camera motion
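A minimal sketch of the re-centering logic described above: the pan/tilt step is chosen from the sub-image in which the target's mass center falls, using the 7, 5.4 and 12 degree steps quoted in the text; the sub-image numbering helper and the sign convention are hypothetical stand-ins for the layout of Fig. 4.5.

```python
def recenter_command(sub_image, cx, cy, img_w=320, img_h=240):
    """Return (pan_step_deg, tilt_step_deg) that pushes the target's mass
    center (cx, cy) back toward the image center. sub_image is the region
    index of Fig. 4.5 (1-8 inner ring, 9-14 outer ring); signs are chosen
    here by comparing the mass center with the image center."""
    if sub_image is None:                        # target already centered
        return 0.0, 0.0
    pan_mag = 7.0 if sub_image <= 8 else 12.0    # sharper pan in the outer ring
    tilt_mag = 5.4
    pan = pan_mag if cx > img_w / 2 else -pan_mag
    tilt = tilt_mag if cy > img_h / 2 else -tilt_mag
    return pan, tilt

# example: mass center in an outer sub-image, right of and below the center
print(recenter_command(11, cx=300, cy=200))   # (12.0, 5.4)
```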


Since stereo correspondences are needed to calculate the distance between the robot and the target, both cameras are forced to the same angle in the vertical axis (tilt). This allows calculating pixel correspondences across a single epipolar line instead of the whole images, reducing the required computational time. The case of one camera failing to locate the target is also considered; then data derived and collected from the failed camera may no longer be useful. Two cases have been considered: first, the target is located in front of the robot, Fig. 4.6a, and second, the target is located at an angle with respect to the robot's direction, Fig. 4.6b. In the first case the failed camera will be forced to the opposite angle of the second camera and in the second case the failed camera will be forced to the angle of the second camera. Given the robot's velocity and the camera's angle on the horizontal axis, the steering velocity of the robot can be computed: $|y| = |x| \tan\theta$ (4.1) where y is the steering velocity, x is the robot's velocity and θ is the horizontal angle of the camera. The robot's velocity depends on the distance between the robot and the target and it varies from 0 to 1 m/s.


72 Fig. 4.6 Location of the target related with the robot 4.2.4 Extraction of Interesting Points In order to extract characteristic points in an image, the Mor avec interest operator has been used [50] The operator computes the directional variances ( I 1 I 2 I 3 I 4 ) for each point and applies an interest value to it. The directions are computed using all pixels in a window centered on the point, as shown in (2). S represents pixels of a window centered on this point. A window size of 5x5 is used. I 1 = [ f ( x y ) f ( x y + 1 )] 2 ( x y ) S # I 2 = [ f ( x y ) f ( x + 1 y )] 2 ( x y ) S # I 3 = [ f ( x y ) f ( x + 1 y + 1 )] 2 ( x y ) S # I 4 = [ f ( x y ) f ( x + 1 y 1 )] 2 ( x y ) S # (4.2) An interest value is assigned to each pixel according to:


$I(x,y) = \min(I_1, I_2, I_3, I_4)$ (4.3) The image is then segmented in regions where each region's interest point is defined as the pixel with the maximum interest value. 4.2.5 Interesting Points Correspondence A correlation algorithm is used to solve the correspondence problem. The correlation coefficient, r, is defined as the sum of the products of the pixel brightnesses divided by their geometric mean [51]: $r(d_x,d_y) = \dfrac{\sum_{(i,j)\in S} f(d_x+i, d_y+j)\, g(i,j)}{\sqrt{\sum_{(i,j)\in S} f^2(d_x+i, d_y+j)\; \sum_{(i,j)\in S} g^2(i,j)}}$ (4.4) where f, g are the left and the right image respectively, S is a 9x9 pixel window centered on an interesting point and d_x, d_y are the disparity of the pair of pixels to be matched. Since both cameras are forced to the same vertical angle, the assumption of a single epipolar line is valid. This means that every interesting point generated by the Moravec operator from the right image that belongs to a certain row has to be correlated with interesting points of the left image in the same row. Experimentally it has been determined that pairs of pixels with a correlation coefficient greater than 0.99 are considered corresponding.


74 4.2.6 Distance Computation The method that is used to compute the distance between the robot and the target requires the angle of horizontal view of the camera as the only parameter for depth est imation. Fig. 4.7 shows the field of view of a monocular camera, where A is the length in cm of the horizontal field of view in a distance of Z i cm from the camera system, (X i Yi, Z i ) is the world coordinates of any point in the field of view, (x i ,y i ) is the projection of the world point onto image plane and (y p x p ) are the pixel coordinates. The origin of the pixel coordinate system coincides with the center of the image. For an image of 320x240 pixels, one may observe that: X i = 2 320 x p Z i tan( 24 4 )( cm ) (4.5) A second image containing the same worl d point is acquired from a different pos ition in order to compute depth. In case of a monocular vision system this is achieved by moving towards the world point (stereo from motion). Fig. 4.8 demonstrates this proce dure. For both images the following equat ions are derived: Z = x p 2 B x p 1 x p 2 ( cm ) (4.6) where B is the relative distance between the two shots in cm (baseline) and x p1 x p2 are the c o ordinates of the pixel which represents the world point in the first and second image respectively. For the parallel stereoscopic vision system shown in Fig. 4.9 the depth is d e rived from:


$Z = \frac{320\, B}{2\,(x_{p1} - x_{p2}) \tan(24.4^\circ)}\;(cm)$ (4.7) Fig. 4.7 Horizontal field of view of monoscopic vision system Fig. 4.8 Stereo from motion Fig. 4.9 Depth from stereo vision Since both cameras are moving independently, there are three angles that have to be considered for the depth estimation: one for each pan and one for the tilt of the cameras. For this rotating stereoscopic vision system, depicted in Fig. 4.10, depth is calculated as: $Z = \frac{320 \cos(c)\, B}{2 \tan(24.4^\circ)\,[\cos(a)\, x_{p1} - \cos(b)\, x_{p2}] + 320\,[\sin(b) - \sin(a)]}\;(cm)$ (4.8)


where a and b are the horizontal angles of rotation in degrees of the left and the right camera respectively, c is the common vertical angle of rotation and B is the baseline. 4.2.7 Computational Time To reduce the computational complexity of the algorithm, distance calculations are not performed for each pair of frames. This action is executed every 30 pairs of frames and only if both cameras have identified the target. The computational time required to convert visual information to range measurements depends heavily on the part of the image that the target covers. The maximum frame rate reaches 20 fps. This is an adequate frame rate when the target is a walking man or a mobile robot, considering that the maximum velocity of the tracker cannot exceed 1 m/s. Fig. 4.10 Rotated stereoscopic vision system


77 4.3 Motion Algorithm The laser range finder is divid ed into three regions; each one is responsible for collision detection in the three main directions: front right and left A fourth range mea surement is co m puted by the stereo vision system and it is associated with the distance of the target from the r o b ot. All these inputs are fed to the motion algorithm which decides for the vehicle's rotational and translational speed. Fig. 4.12 depicts the flow chart related to the motion algorithm. The assumption is that the robot stays still until the vision system detects a target. This is a valid assumption since a failed robot will remain in its current location until it receives new commands from a supervisory controller, or a patrolling robot will visually inspect its territory until a motion is detected. As s oon as the target has been identified and its distance from the robot has been co m puted, the robot starts moving towards the target. Assuming that the robot has to reach the maximum velocity when its distance from the target is more than 4m then the robot 's velocity is linearly derived by: | x | = Z / 4 (4.9) where x is the robot's velocity and Z is the distance derived from the vision system. Given the robot's velocity and the camera's angle on the horizontal axis, the stee ring v e locity of the robot is obtained using (4.1). Collision possibilities are computed using the data derived from the laser range finder in the three main directions, front, right and left of the robot, as shown in Fig. 4.11.
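Putting (4.1) and (4.9) together, the tracker's speed commands can be sketched as below; the clamping of the translational speed to 1 m/s and the 1 m stop distance follow the text, while the function name and units handling are illustrative assumptions.

```python
import math

def tracking_speeds(target_range_m, pan_angle_deg, stop_range_m=1.0, max_speed=1.0):
    """Translational speed from Eq. (4.9) (|x| = Z/4, saturated at 1 m/s beyond 4 m)
    and steering velocity from Eq. (4.1) (|y| = |x| * tan(theta))."""
    if target_range_m is None or target_range_m < stop_range_m:
        return 0.0, 0.0                       # no target or close enough: stop
    v = min(target_range_m / 4.0, max_speed)  # linear up to 4 m, then capped
    w = v * math.tan(math.radians(pan_angle_deg))
    return v, w

# example: target 3 m away, cameras panned 10 degrees toward it
print(tracking_speeds(3.0, 10.0))   # (0.75, approximately 0.13)
```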


78 The safety distance from the robot has been arbitrarily set to 40 cm When an o bstacle is detected at a distance less than this threshold; a correction in the steering angle of the robot is performed driving it away from the obstacle. At the same time bot h ca meras keep tracking the target regardless of the robot's motion. When the vehicle passes the obstacle, it resumes its past route a c cording to the cameras' pan positions. The system stops the robot when the distance computed by the vision system is less than 1m Fig. 4.11 Segmentation of the laser finder beam in three regions The correction in the steering angle forces the robot to keep a distance greater than 40 cm from the obstacle, while the velocity of the robot reduces to half. This is pe rformed using a si m ple set of rules: 1. If an obstacle is detected in the right side of the robot, turn left until its di s tance is greater than 40cm. Then carry on the target's route. 2. If an obstacle is detected in the left side of the robot turn right until its di s tance is greater than 40cm. Then carry on the target's route. 3. If an obstacle is detected both in the left and the right side of the robot and the projections on the y axis ( Fig. 4. 10) of both distances are greater than 20cm, which is the distance between the center of the laser range finder and the edges of


79 the robot, then continue straight passing through the obstacles. Otherwise, co nventionally, turn right until the di s tance computed by the left segment of the laser is greater th an 40cm. 4. If an obstacle is detected in front of the robot, it is unlikely that this object is the target since the robot would have stopped moving, turn right until the distance co m puted by the left segment of the laser is greater than 40cm. Fig. 4.12 Flow chart of the motion algorithm 4.4 Results Experiments have been conducted in an indoor lab environment with several "fu rniture obstacles" of different shapes, sizes and colors. For implementation purposes a person wearing different color clothes is used as the target. In the frames that follow, one can note the difference in lighting conditions, as well as the ability of the a l gorithm to


80 distinguish the target from the background, even if objects with similar colors appear in the image scene. Fig. 4.13 demonstrates the ability of the vision system to detect the target wea r ing blue pants. The robot in this sequence is not moving. Fig. 4.14 demonstrates the same procedure but this time a red color is targeted Notice that the red cones on the bac kground does not confuse the tracking system. Fig. 4.13 Tracking blue color Fig. 4.14 Tracking red color, example 1


81 Fig. 4.15 presents the robot's view as it follows a person in a corridor. Notice the ca m eras' motion and the turn of the robot at the end of the corridor. Fig. 4.16 and 4.17 demonstrate tracking and obstacle avoidance from an exte r nal video source. Fig. 4.15 Tracking red color, example 2 Fig. 4.16 Tracking red color and obstacle avoidance


82 Fig. 4.16 (Continued) Fig. 4.17 Tracking blue color and obstacle av oidance Fig. 4.18 demonstrates the ability of the vision system to detect a mobile robot that passes in front of the tracker. Fig. 4.18 Tracking a mobile robot


83 Fig. 4.18 (Continued) 4.5 Conclusions This chapter has presented a vision based target tracking method for mobile robots using a stereoscopic vision system and a laser range finder. Experimental results demo nstrate the effe c tiveness and the robustness of the approach. Significant advantages over the other vision based target tracking techniques co ncern the ability on tracking a variety of objects and that the robot's motion derives fr om the horizontal angle of the cameras, which allows the robot to avoid obstacles and keep tracking a target, or to keep tracking a target that moves on irregular terrains.


The presented approach is initialized using motion estimation techniques to specify the target and compute its color histograms. In the presented results, the target is the only object moving in the image scene. However, if more than one object is moving, the algorithm can be trained on the one that occupies the greater portion of the image. Alternative methods can be used to avoid the randomness of motion estimation: methods such as template matching will specify with accuracy the object that needs to be tracked. This is only for initialization purposes; once the object of interest is located in the image, its color histograms are computed and the rest of the algorithm continues as is. Future work involves vision based target tracking in outdoor environments, where the variance in the color components is significant.
Chapter 5

Localization

5.1 Introduction

Localization is a prerequisite for accurate and autonomous Unmanned Ground Vehicle (UGV) navigation in uncertain and/or unknown environments. At any given time, the UGV should have exact knowledge of, or should be able to derive or accurately estimate, its position and orientation. The required information is basically acquired from position estimation sensors such as a Global Positioning System (GPS), an Inertial Measurement Unit (IMU) and odometers. The individual use of these sensors has specific disadvantages. For example: GPS cannot provide accurate position estimation when the UGV travels small distances or travels in indoor environments; an IMU is sensitive to sudden accelerations and magnetic interference; odometers lose track of the UGV due to wheel slippage. Thus, a method that fuses data and information acquired from multiple sensors to provide an accurate estimate of the UGV position and orientation is required.

This chapter proposes a multi sensor fusion method for UGV localization based on Fuzzy Logic (FL) and Extended Kalman Filters (EKF).
The sensors considered are a GPS, an IMU, a stereo vision system, a laser range finder, and the vehicle's odometer. All sensors are mounted on top of the UGV. Position sensors such as the GPS and the odometer provide information related to the vehicle's posture. Inertial sensors such as the IMU measure accelerations and angular rates. Finally, range sensors such as the stereo vision system and the laser range finder compute distances between landmarks and the vehicle. The FL controllers proposed in this chapter receive inputs such as sensor readings and environmental characteristics to compute the statistics of the error distributions in the measurements of the GPS, the odometer, the vision system and the laser range finder. In this chapter, the standard definition of error is used: the difference between the true value and the measured value, divided by the true value.

In contrast to the traditional EKF approach, where the error in sensor readings is either preset or an additional variable in the system model, the proposed fuzzy EKF approximates the error in sensor readings with multiple zero mean Gaussian distributions based on the performance of each sensor in a series of experiments. In this way, the error in sensor readings is incorporated into the covariance matrix R of the measurement model as a zero mean Gaussian distribution and not as an unknown variable in the system model.

Experimental validation and verification of the proposed method is carried out using the ATRV Jr skid steering differential drive UGV, equipped with a 3 GHz Pentium IV processor and 1 GB of RAM. The sensor suite of the ATRV Jr consists of a SICK laser range finder, a GPS, a Microstrain IMU, a compass unit and a stereo vision system. The vehicle runs RedHat Linux 7.0 and the Mobility interface. The experiments have been conducted indoors and outdoors, on different types of floors and in different weather conditions.
The rest of the chapter is organized as follows: Section 5.2 discusses work related to the UGV localization problem and summarizes the contributions of the proposed research. Section 5.3 presents the EKF, while Section 5.4 discusses the FL extension of the EKF. Section 5.5 demonstrates the use of the fuzzy EKF, and Section 5.6 concludes the chapter.

5.2 Related Work

The EKF has been widely used in mobile robot navigation applications, mostly to integrate data from position sensors such as GPS, INS and odometer, and/or range sensors such as laser range finders and ultrasonic sensors. The choice of the sensors to be integrated is related to the robot's usage and operational environment. For example, indoor applications may not benefit from a GPS unit due to signal scrambling, and outdoor applications may not benefit from ultrasonic sensors due to their limited performance outdoors. Examples of sensor integration for mobile robot localization include odometry and laser [78][79][80][81]; odometry and ultrasonic sensors [82]; odometer, gyroscope and GPS [83]; INS [84]; INS and vision system [85]; GPS, INS, odometer and vision system [86]; vision system and odometry [87][88][89][90][91]; and laser and vision system [92][93], to name a few.

A drawback of the EKF is that it requires precise knowledge of the system's dynamics, and noise distributions that are Gaussian with zero mean [94]. Various techniques have been proposed to overcome these problems, with some focusing on the use of FL.
Research in [95] proposes an adaptive fuzzy Kalman Filter (KF) method for mobile robot localization. A weighted EKF is used to fuse measurements from a GPS and an INS, while a fuzzy logic system is used to tune the weights based on the covariance and the mean value of the residuals. In this way, the EKF is prevented from diverging. In [96], two FL controllers are used to enhance the EKF: the first adjusts the measurement covariance matrix based on a covariance matching technique, and the second monitors the performance of the EKF and assigns a degree of confidence to the outputs of the first fuzzy controller. In [97], a Takagi Sugeno fuzzy model is used to linearize the EKF system model. In [98], a fuzzy KF is proposed based on the assumption that the measurement and model uncertainty is 'possibilistic' instead of Gaussian. In [99], a Neuro Fuzzy KF is presented to evaluate sensor measurements and measurement errors. Finally, in [95], an adaptive fuzzy logic system is discussed that modifies the Kalman gain, preventing the filter from diverging.

The contribution of this research can be summarized as follows:

1. The localization method presented fuses information derived from five different sensors: GPS, IMU, odometer, stereo vision system and laser range finder, while most related work utilizes up to three sensors.
2. A framework that handles data and information from multiple range sensors and multiple landmarks is provided.
3. The FL controller design allows for real time computation of the statistics of the error in the sensor readings used in the EKF, without prior knowledge of a mathematical model of the sensor.
4. An important characteristic of the presented method is that the landmarks' positions with respect to the vehicle, or any other general coordinate system, are not required.

5.3 Extended Kalman Filter

The EKF is a powerful estimation tool designed for systems described by non linear equations. To implement the EKF it is necessary to define models describing the system's dynamics and the sensor measurements. The rest of this section describes the development of the EKF models.

5.3.1 Constructing the System's Model

In localization applications, the system's model represents the belief of the vehicle's posture based on a series of mathematical equations such as the kinematics equations, landmark positions or error models. The kinematics equations for a differential drive vehicle, which provide an estimate of the vehicle's posture, are defined as:

$$
\mathbf{x}_{k+1} = \begin{bmatrix} x_k + V_k \Delta t \cos\theta_k + \tfrac{1}{2} ax_k \Delta t^2 \\ y_k + V_k \Delta t \sin\theta_k + \tfrac{1}{2} ay_k \Delta t^2 \\ \theta_k + \omega_k \Delta t \end{bmatrix} \tag{5.1}
$$
where $\mathbf{x}_k = [x_k\ y_k\ \theta_k]^T$ is the vehicle position at time $k$ with respect to a global coordinate system, $u_k = [V_k\ \omega_k]^T$ is the control input consisting of the vehicle's linear and angular velocities, and $ax_k$, $ay_k$ are the vehicle's accelerations on the x and y axes respectively (Fig. 5.1). It is assumed that the vehicle travels in a 2D environment. It is also assumed that the accelerations $ax_k$, $ay_k$ are constant:

$$
ax_{k+1} = ax_k, \qquad ay_{k+1} = ay_k \tag{5.2}
$$

In addition to the vehicle's kinematics, range readings derived from sensors such as the laser range finder provide an additional means of estimating the vehicle's position. Consider a set of landmarks at positions $lan^i_k = [lan^i_{x,k}\ lan^i_{y,k}]^T$, $i = 1,\ldots,n$, with respect to a global coordinate system, as depicted in Fig. 5.1. These landmarks are stationary, and their position over time can be expressed as:

$$
lan^i_{k+1} = lan^i_k \tag{5.3}
$$

Fig. 5.1 Landmark position with respect to a global coordinate system
Equations 5.1, 5.2 and 5.3 are used to describe the system's model for most UGV localization applications. The system's model is described by the equation:

$$
X_{k+1} = f(X_k, u_k) + n_k \tag{5.4}
$$

where $n_k$ is Gaussian noise with zero mean and covariance $Q_k$, related to the uncertainty of the velocity vector $u_k$. The non linear transition function $f$ describes the state updates and is given by:

$$
f(X_k, u_k) = \begin{bmatrix} \mathbf{x}_k \\ ax_k \\ ay_k \\ lan^1_k \\ \vdots \\ lan^n_k \end{bmatrix} = \begin{bmatrix} x_k + V_k \Delta t \cos\theta_k + \tfrac{1}{2} ax_k \Delta t^2 \\ y_k + V_k \Delta t \sin\theta_k + \tfrac{1}{2} ay_k \Delta t^2 \\ \theta_k + \omega_k \Delta t \\ ax_k \\ ay_k \\ lan^1_{x,k} \\ lan^1_{y,k} \\ \vdots \\ lan^n_{x,k} \\ lan^n_{y,k} \end{bmatrix} \tag{5.5}
$$
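A minimal sketch of the state-transition function of Equation 5.5 is given below, assuming the state vector is laid out as [x, y, θ, ax, ay, lan_x^1, lan_y^1, ...] in a NumPy array; the names and layout are illustrative, not the dissertation's implementation.

```python
import numpy as np

def f(X, u, dt):
    """State transition of Eq. 5.5. X = [x, y, theta, ax, ay, lx1, ly1, ...],
    u = [V, omega] (linear and angular velocity)."""
    x, y, theta, ax, ay = X[:5]
    V, omega = u
    X_next = X.copy()
    X_next[0] = x + V * dt * np.cos(theta) + 0.5 * ax * dt**2
    X_next[1] = y + V * dt * np.sin(theta) + 0.5 * ay * dt**2
    X_next[2] = theta + omega * dt
    # accelerations (Eq. 5.2) and landmark positions (Eq. 5.3) stay constant
    return X_next
```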
5.3.2 Constructing the Measurement's Model

The measurement's model shows how the information derived from the vehicle's sensors ties to the system's state. In this research, five sensors are considered: a GPS, an IMU, an odometer, a laser range finder and a stereo vision system.

The measurements acquired by the GPS and the odometer are related to the posture of the vehicle, namely the vector $\mathbf{x}_k$ of the system's model. Thus:

$$
\begin{bmatrix} gps_k \\ odo_k \end{bmatrix} = \begin{bmatrix} gps_{x,k} \\ gps_{y,k} \\ odo_{x,k} \\ odo_{y,k} \\ odo_{\theta,k} \end{bmatrix} = \begin{bmatrix} x_k \\ y_k \\ x_k \\ y_k \\ \theta_k \end{bmatrix} + \begin{bmatrix} w_{gps} \\ w_{odo} \end{bmatrix} \tag{5.6}
$$

where $gps$ and $odo$ denote the readings from the GPS and the odometer respectively, and $w_{gps}$, $w_{odo}$ are zero mean Gaussian noise terms associated with the GPS and odometer readings.

The measurements acquired from the range sensors are the distances between the landmarks and the vehicle and can be expressed as:

$$
\begin{bmatrix} d^1_{1,k} \\ \vdots \\ d^n_{m,k} \end{bmatrix} = \begin{bmatrix} R(\theta_k) & & \\ & \ddots & \\ & & R(\theta_k) \end{bmatrix} \begin{bmatrix} lan^1_{x,k} - x_k \\ lan^1_{y,k} - y_k \\ \vdots \\ lan^n_{x,k} - x_k \\ lan^n_{y,k} - y_k \end{bmatrix} + \begin{bmatrix} w^1_{1,k} \\ \vdots \\ w^n_{m,k} \end{bmatrix}, \qquad R(\theta_k) = \begin{bmatrix} \cos\theta_k & \sin\theta_k \\ -\sin\theta_k & \cos\theta_k \end{bmatrix} \tag{5.7}
$$

where $d^i_{j,k} = [dx^i_{j,k}\ dy^i_{j,k}]$ is the distance of the i-th landmark from the vehicle as measured by the j-th range sensor, and $w^i_j$ is zero mean Gaussian noise associated with the measurement of the j-th range sensor. The range sensors considered in this research are a stereo vision system and a laser range finder. Thus, Equation 5.7 is modified as follows:
$$
\begin{bmatrix} cam^1_k \\ \vdots \\ cam^n_k \\ las^1_k \\ \vdots \\ las^n_k \end{bmatrix} = \begin{bmatrix} R(\theta_k) & & \\ & \ddots & \\ & & R(\theta_k) \end{bmatrix} \begin{bmatrix} lan^1_{x,k} - x_k \\ lan^1_{y,k} - y_k \\ \vdots \\ lan^n_{x,k} - x_k \\ lan^n_{y,k} - y_k \end{bmatrix} + \begin{bmatrix} w^1_{cam,k} \\ \vdots \\ w^n_{cam,k} \\ w^1_{las,k} \\ \vdots \\ w^n_{las,k} \end{bmatrix} \tag{5.8}
$$

where $cam$ and $las$ denote the range readings from the vision system and the laser range finder respectively.

Finally, the measurements acquired from the IMU are associated with the accelerations and the steering angle of the vehicle and can be described by the equation:

$$
imu_k = \begin{bmatrix} \cos\theta_k & \sin\theta_k & 0 \\ -\sin\theta_k & \cos\theta_k & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} ax_k \\ ay_k \\ \theta_k \end{bmatrix} + w_{imu,k} \tag{5.9}
$$

Equations 5.6, 5.8 and 5.9 are used to describe the measurement's model for the presented localization application. The measurement's model is described by the equation:

$$
Z_k = h(X_k) + w_k \tag{5.10}
$$

Based on sensor availability, the transition function $h$ is defined as:

$$
h(X_k) = \begin{bmatrix} gps_k \\ odo_k \\ imu_k \\ cam^1_k \\ las^1_k \\ \vdots \end{bmatrix} \tag{5.11}
$$
5.3.3 Linearization

The transition functions $f$ and $h$ are non linear. To linearize them, the first order Taylor expansion (the Jacobian matrices of $f$ and $h$) is used. The linearized system and measurement models become:

$$
X_{k+1} = F_k X_k + n_k \tag{5.12}
$$
$$
Z_k = H_k X_k + w_k \tag{5.13}
$$

where

$$
F_k = \nabla f(X_k, u_k) \tag{5.14}
$$
$$
H_k = \nabla h(X_k) \tag{5.15}
$$

5.3.4 Implementation

The EKF is implemented in two steps. In the first step (propagation), the system's current state and the error covariance matrix are estimated based on the system's model. In the second step (update), the state estimate is improved by incorporating the sensor readings. Fig. 5.2 depicts the block diagram of the EKF implementation, where $Q_k$, $R_k$ are the covariance matrices of the system's and the measurement's model respectively, and $P_k$ is the error covariance.
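The propagation/update cycle of Fig. 5.2 corresponds to the standard EKF recursion sketched below; the function arguments (models, Jacobians and covariances passed in as callables and matrices) are an assumed interface, not the code actually run on the ATRV Jr.

```python
import numpy as np

def ekf_step(x, P, u, z, dt, f, h, jac_f, jac_h, Q, R):
    """One EKF cycle: propagate with the system model, then update with
    the measurement vector z (Eqs. 5.4, 5.10, 5.12-5.15)."""
    # propagation step
    x_pred = f(x, u, dt)
    F = jac_f(x, u, dt)
    P_pred = F @ P @ F.T + Q
    # update step
    H = jac_h(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```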
Fig. 5.2 Block diagram for the EKF implementation

5.4 Fuzzy Logic Controllers

The purpose of the EKF is to filter out the noise related to the system's and measurement's models and provide an estimate of the vehicle's position. As mentioned previously, the noise in both models is described by zero mean Gaussian distributions. However, the EKF cannot filter out the error in sensor readings unless a model of the sensor is developed. In this work, instead of creating a complex mathematical model for each sensor to estimate the error in its readings, fuzzy logic is employed to represent the error in sensor readings as zero mean Gaussian distributions. The development of the fuzzy logic controllers is based on the performance of each sensor in a series of experiments. The statistics of the error in each sensor's readings are incorporated into the measurement's model through the covariance matrix $R_k$.
5.4.1 Range Sensors

It has been observed that the error in range readings is related to the distance between the target and the range sensor. For example, it has been experimentally determined that the error associated with the laser readings is described by the distribution $N(0, \sigma_{max})$ when the target's distance from the sensor is 7 m to 9 m, and by the distribution $N(0, \sigma_{min})$ when the target's distance is less than 1 m. Based on this observation, a Mamdani type fuzzy controller has been developed that approximates the error in range sensor readings using multiple zero mean Gaussian distributions, based on the distance between the target and the sensor. In other words, the fuzzy controller computes the variance of the error distribution of every range sensor from the sensor readings. Three membership functions, Small, Medium and Large, are used to describe the range measurements and the variances of the error distributions. The FL controller for the range sensors uses the following 3 rules:

1. "If the range measurement is Small then the variance is Small."
2. "If the range measurement is Medium then the variance is Medium."
3. "If the range measurement is Large then the variance is Large."

Fig. 5.3 depicts an example of the membership functions describing the distance between a range sensor and a target, and the variance of the zero mean Gaussian distribution. Detailed membership function figures for all the sensors are shown in Section 5.5.
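As an illustration, the three rules above can be realized with triangular membership functions and a simple weighted-average defuzzification (a simplification of full Mamdani inference); the breakpoints and output values below are placeholders, not the tuned values of Section 5.5.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def range_error_variance(distance_m):
    """Fuzzy estimate of the variance of the zero-mean Gaussian error of a
    range sensor, given the measured distance (Sec. 5.4.1)."""
    mu = {"Small":  tri(distance_m, 0.0, 1.0, 3.0),
          "Medium": tri(distance_m, 1.0, 4.0, 7.0),
          "Large":  tri(distance_m, 5.0, 8.0, 12.0)}
    sigma = {"Small": 0.1, "Medium": 0.3, "Large": 0.7}   # representative sigmas
    weight = sum(mu.values()) or 1.0
    sigma_hat = sum(mu[k] * sigma[k] for k in mu) / weight
    return sigma_hat ** 2
```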
Fig. 5.3 Membership functions for the sensor readings and the variance of the error distribution

5.4.2 Odometer

The error in the odometer readings increases as the distance the vehicle travels increases. In addition, the error in the odometer readings is closely related to the type of floor the vehicle travels on and to the velocity at which the vehicle travels. For example, the error in the odometer readings is greater when the vehicle travels on tiles than when it travels on asphalt. The odometer cannot be incorporated into the EKF without an accurate error model that considers both the traveled distance and the type of floor the vehicle travels on. This is a significant drawback, since the odometer data provide an additional way of estimating the system's state. In this work, the error in the odometer readings is approximated by multiple zero mean Gaussian distributions using FL. A Mamdani type fuzzy controller has been developed that receives as inputs the type of floor the vehicle travels on and the distance the vehicle has traveled, and computes the statistics of the Gaussian distributions. The information regarding the type of floor the vehicle is traveling on is provided by the vehicle's operator.
Three membership functions, Small, Medium and Large, have been used to describe the floor slippage, the distance traveled and the variance of the zero mean Gaussian distribution. The FL controller designed for the odometer consists of 16 rules of the form:

1. "If the distance that the vehicle has traveled is Small and the floor slippage is Small then the variance is Small."
2. "If the distance that the vehicle has traveled is Medium and the floor slippage is Medium then the variance is Medium."
3. "If the distance that the vehicle has traveled is Large and the floor slippage is Large then the variance is Large."

5.4.3 GPS

There are many parameters that influence the error in the GPS readings, such as the satellite coverage and the weather conditions. In this work, a Mamdani type FL controller has been developed that approximates the error in GPS readings with multiple zero mean Gaussian distributions based on the satellite coverage, namely the number of satellites used for triangulation. The FL controller uses three membership functions, Small, Medium and Large, to describe the satellite coverage and the variance of the zero mean Gaussian distributions, and the following three rules:

1. "If the coverage is Large then the variance is Small."
2. "If the coverage is Medium then the variance is Medium."
3. "If the coverage is Small then the variance is Large."
5.4.4 IMU

The IMU provides information on the angular rate and acceleration of the vehicle. To estimate the position of the vehicle using the acceleration alone, a double integration is required. However, the error in the IMU readings is integrated as well, making the IMU useless within a short time. Unfortunately, there are no empirical rules that could be used to approximate the error in the IMU readings as zero mean Gaussian distributions. An error model needs to be developed in order to efficiently incorporate the IMU readings into the EKF. The developed error model is based on [84] and is described in the next section.

5.4.5 Fuzzy Controllers Implementation

The FL controllers described in the previous subsections are used to update the covariance matrix $R_k$ of the measurement's model in the EKF. Fig. 5.4 describes this procedure.

5.5 Case Study

The ATRV Jr UGV has been used to test the performance of the fuzzy EKF. The experiments were run indoors and outdoors using known color landmarks. The vision system identifies the landmarks and computes the distance between the vehicle and the landmarks.
A second set of range measurements to the landmarks is acquired from the laser range finder. A vision/laser registration algorithm is used to verify that both sensors provide range measurements for the same landmark. Additional position readings are acquired from the vehicle's odometer and GPS, and acceleration readings from the IMU.

Fig. 5.4 Relation between the EKF and the fuzzy controllers

The rest of this section describes the landmark identification technique, the stereo vision distance computation technique, the vision/laser registration algorithm, the GPS data conversion, the IMU error model and the fuzzy EKF.
5.5.1 Landmark Identification and Distance Computation

The vision system of the ATRV Jr consists of two pan/tilt color cameras mounted on top of the vehicle, 35 cm apart. The landmarks used in these experiments have distinct colors and can be separated from the image background by a color threshold technique. The main steps of the vision algorithm that identifies a landmark and computes its distance from the vehicle are:

5.5.1.1 Image Acquisition

Images are acquired at a rate of 30 fps using the Video for Linux API. Each 24 bit color image has a resolution of 320x240 pixels.

5.5.1.2 Landmark Identification

A landmark is identified by its color. Identifying color in an image requires less computational time than template matching techniques. The HSI color space has been used for the segmentation technique that follows. Since the landmark is a known object, the variation of the hue, saturation and intensity values of its color representation in the HSI color space is known. A segmentation technique on the H-I plane, based on a region growing algorithm, is used to separate the target from the image background. The basic concept is to identify a pixel, the "seed", that takes the mean hue and intensity values within the area of the object's color, and to grow a region with similar neighboring pixels.
For example, the cone shown in Fig. 5.5a has hue and intensity values that vary between (10, 20) and (216, 250) respectively. Thus, the seed pixel will have a hue value of 15 and an intensity value of 233. The region growing algorithm merges with the seed any neighboring pixels whose hue and intensity values lie within these ranges. Fig. 5.5b shows the result of this technique.

Fig. 5.5 Color threshold technique

5.5.2 Distance Computation

The method used to compute the distance between the vehicle and the landmark is presented in [100]. This method does not require a camera calibration technique, which would add computational load to the system; instead, it is based on the image size, the field of view of the cameras and the relative position of the cameras. Fig. 5.6 demonstrates the relationship between the fields of view of the two cameras and the perspective of a world point from each camera. The depth measurement is derived by:

$$
cam^i_k = \frac{320\, B}{2\,(xp_1 - xp_2)\tan(24.4^\circ)} \ \ (\mathrm{cm}) \tag{5.16}
$$
where $cam^i_k$ is the distance between the left camera and the i-th landmark, $B$ is the relative distance between the two cameras (the baseline), and $(xp_1 - xp_2)$ is the pixel disparity.

Fig. 5.6 Stereo vision system in parallel configuration

The distance between the left camera and the landmark along the Y axis is derived by:

$$
cam^i_{y,k} = \frac{2\, xp\, cam^i_k \tan(24.4^\circ)}{320} \ \ (\mathrm{cm}) \tag{5.17}
$$

where $xp$ is the image location of the landmark pixel, and

$$
cam^i_{x,k} = \sqrt{(cam^i_k)^2 - (cam^i_{y,k})^2} \tag{5.18}
$$
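Equations 5.16-5.18 translate directly into code. In the sketch below, the 320-pixel image width and the 24.4° half field of view come from the equations and the baseline is a parameter; treating the landmark's left-image column as the xp of Equation 5.17 is an assumption made only for illustration.

```python
import math

IMG_WIDTH_PX = 320
HALF_FOV = math.radians(24.4)

def stereo_landmark(xp_left, xp_right, baseline_cm=35.0):
    """Landmark distance from the left camera (Eqs. 5.16-5.18), given the
    landmark's pixel column in the left and right images."""
    disparity = xp_left - xp_right                       # assumed non-zero
    depth = IMG_WIDTH_PX * baseline_cm / (2.0 * disparity * math.tan(HALF_FOV))
    lateral = 2.0 * xp_left * depth * math.tan(HALF_FOV) / IMG_WIDTH_PX
    forward = math.sqrt(max(depth**2 - lateral**2, 0.0))
    return depth, lateral, forward                       # cam, cam_y, cam_x
```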
5.5.2.1 Vision / Laser Registration

An essential issue in data fusion applications is that all sensors must provide data for the same entity. In this case study, a stereo vision system and a laser range finder mounted on top of the vehicle are used to compute distances from a landmark. Both sensors are able to recognize objects located in front of the vehicle. The purpose of the vision/laser registration is to verify that both sensors provide range data for the same landmark. Fig. 5.7 depicts the coordinate systems of the two sensors, which need to be aligned.

Fig. 5.7 Relation between the coordinate systems of the vision system and the laser range finder (units in cm)

The depth computation from the vision system is derived from (5.16). This measurement has to be transferred to the laser's coordinate system. The transformation is achieved by translations along the X and Y axes (Fig. 5.7). Thus, the distance of an object computed by the vision system, expressed in the laser's coordinate system, is:

$$
las^i_{x,k} = cam^i_{x,k} - camtr_x, \qquad las^i_{y,k} = cam^i_{y,k} - camtr_y \tag{5.19}
$$

where $camtr_x = 43$ cm and $camtr_y = 17$ cm. Finally, the angle at which the i-th landmark lies in the laser's coordinate system is derived by:

$$
\theta = \arctan\!\left(\frac{las^i_{x,k}}{las^i_{y,k}}\right) \tag{5.20}
$$

The range measurement of a landmark taken by the laser sensor at angle $\theta$ corresponds to the measurement taken by the vision system for the same landmark.
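The registration of Equations 5.19-5.20 then amounts to a translation between the two sensor frames followed by an angle lookup; the 43 cm and 17 cm offsets are the values given in the text, while the function interface is illustrative.

```python
import math

CAMTR_X_CM = 43.0   # camera-to-laser translation on X (Eq. 5.19)
CAMTR_Y_CM = 17.0   # camera-to-laser translation on Y

def vision_to_laser(cam_x, cam_y):
    """Express a landmark measured by the stereo pair in the laser frame
    and return the laser beam angle expected to observe the same landmark."""
    las_x = cam_x - CAMTR_X_CM
    las_y = cam_y - CAMTR_Y_CM
    theta = math.atan2(las_x, las_y)     # Eq. 5.20
    return las_x, las_y, theta
```

The laser range returned at angle θ can then be compared against the stereo estimate to confirm that both sensors are observing the same landmark.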
5.5.3 GPS Conversions

The information derived from the GPS is converted from Latitude/Longitude to Universal Transverse Mercator (UTM) coordinates. UTM is a rectilinear mapping system in which map coordinates are represented as Cartesian coordinates and distance is calculated using Euclidean distance measures [101]. The units of the UTM system are meters, and the equations for the conversion can be found in [101].

5.5.4 IMU

As described in [84], the error in the IMU readings is approximated by the following function:

$$
\varepsilon(t) = C_1 (1 - e^{-t/T}) + C_2 \tag{5.21}
$$

where $C_1$, $C_2$ and $T$ are parameters. To estimate the values of these parameters, IMU readings were collected while the sensor was immobilized. Using the Levenberg-Marquardt least squares fit method, the IMU readings were fitted to Equation 5.21.
Table 5.1 summarizes the best fitting parameter values for each of the IMU outputs.

Table 5.1 Values of the fitting parameters

  Output               | C1            | C2            | T
  Acceleration on X    | 0.037 m/s^2   | 0.171 m/s^2   | 3.787 s
  Acceleration on Y    | 2.167 m/s^2   | 0.016 m/s^2   | 735.714 s
  Acceleration on Z    | 0.036 m/s^2   | 0.178 m/s^2   | 1.498 s
  Angle rate around X  | 0.009 rad/s   | 0.078 rad/s   | 8.329 s
  Angle rate around Y  | 0.002 rad/s   | 0.002 rad/s   | 1.536 s
  Angle rate around Z  | 0.001 rad/s   | 0.001 rad/s   | 14.141 s

The IMU error model is incorporated into the system's model of the EKF. It is assumed that the vehicle moves in a 2D space, so readings such as the acceleration on the Z axis and the angle rates around the X and Y axes are not considered.
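The fit of Equation 5.21 can be reproduced with an off-the-shelf Levenberg-Marquardt routine; the sketch below uses SciPy's curve_fit (its default method for an unconstrained problem) with an arbitrary initial guess, and is illustrative rather than the exact procedure used to produce Table 5.1.

```python
import numpy as np
from scipy.optimize import curve_fit

def imu_error(t, c1, c2, T):
    """Error model of Eq. 5.21: eps(t) = C1 * (1 - exp(-t/T)) + C2."""
    return c1 * (1.0 - np.exp(-t / T)) + c2

def fit_imu_channel(t, eps):
    """Fit one stationary IMU channel; t in seconds, eps the recorded error
    samples. Returns (C1, C2, T) as in Table 5.1."""
    params, _ = curve_fit(imu_error, t, eps, p0=(0.1, 0.1, 1.0), maxfev=10000)
    return params
```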
The system's model described in Equations 5.4 and 5.5 becomes:

$$
X_{k+1} = f(X_k, u_k) + n_k \tag{5.22}
$$

$$
f(X_k, u_k) = \begin{bmatrix} \mathbf{x}_k \\ imu_k \\ lan^1_k \\ \vdots \\ lan^n_k \end{bmatrix} = \begin{bmatrix} x_k + V_k \Delta t \cos\theta_k + \tfrac{1}{2} ax_k \Delta t^2 \\ y_k + V_k \Delta t \sin\theta_k + \tfrac{1}{2} ay_k \Delta t^2 \\ \theta_k + \omega_k \Delta t \\ ax_k + eax_k \dfrac{Tax_k}{Tax_k + \Delta t} \\ ay_k + eay_k \dfrac{Tay_k}{Tay_k + \Delta t} \\ angle_{z,k} \\ lan^1_{x,k} \\ lan^1_{y,k} \\ \vdots \\ lan^n_{x,k} \\ lan^n_{y,k} \end{bmatrix} \tag{5.23}
$$

where $imu_k = [ax_k\ eax_k\ ay_k\ eay_k\ angle_{z,k}]^T$ contains the IMU readings related to the vehicle's accelerations, the errors in acceleration, and the steering angle at time $k$, and $Tax_k$, $Tay_k$ are the corresponding parameters $T$ shown in Table 5.1.

5.5.5 Fuzzy Extended Kalman Filter

This section describes in detail the FL controllers and their involvement in the EKF. An FL controller has been designed for each of the following sensors: GPS, odometer, stereo vision system and laser range finder. These controllers are responsible for updating the statistics of the distributions describing the error in sensor readings. The design of the FL controllers involved a number of experiments that helped identify the error in the sensor readings under various conditions.
To compute the error in the odometer and GPS readings, experiments were run indoors and outdoors with the vehicle traveling at a constant speed of 0.5 m/s. The error in the range measurements, on the other hand, was computed with the vehicle immobilized, while multiple readings were collected at various distances from a target. Table 5.2 shows the statistics of the error in the range sensor readings as a function of the measured distance, Table 5.3 summarizes the performance of the odometer on different types of floors as a function of the traveled distance, and Table 5.4 shows the error in the GPS readings based on the number of satellites available.

Table 5.2 Distributions of the error in sensor readings with respect to the range measurement

  Distance       | 7-9 m        | 3-5 m        | 0-2 m
  Vision System  | N(0, 0.7^2)  | N(0, 0.3^2)  | N(0, 0.1^2)
  Laser          | N(0, 0.6^2)  | N(0, 0.2^2)  | N(0, 0.1^2)

Table 5.3 Distributions of the error in odometer readings with respect to the traveled distance

  Distance          | 0-5 m        | 10-15 m      | 20 m and beyond
  Odometer, Tile    | N(0, 0.2^2)  | N(0, 1.5^2)  | N(0, 2.5^2)
  Odometer, Grass   | N(0, 0.2^2)  | N(0, 0.7^2)  | N(0, 1.6^2)
  Odometer, Asphalt | N(0, 0.2^2)  | N(0, 0.6^2)  | N(0, 1.2^2)

Table 5.4 Distributions of the error in GPS readings with respect to the satellite coverage

  Satellites | 5          | 6          | 7
  GPS        | N(0, 7^2)  | N(0, 4^2)  | N(0, 2^2)
Fig. 5.8 depicts the membership functions of the FL controllers designed for the stereo vision system (a), the laser range finder (b), the GPS (c) and the odometer (d). The rules for each FL controller follow the structure described in Section 5.4.

Fig. 5.8 Membership functions of the FL controllers for the vision system (a), laser range finder (b), GPS (c) and odometer (d)

The fuzzy EKF is implemented by recursively computing the propagation equations, the measurement covariance matrix and the update equations. Fig. 5.9 demonstrates this procedure.
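Putting the pieces together, the recursion of Fig. 5.9 alternates propagation, a fuzzy update of the measurement covariance, and the measurement update. The sketch below reuses the ekf_step routine sketched in Section 5.3.4 and assumes the fuzzy controllers are wrapped in a single callable returning the per-measurement variances; both are assumptions made for illustration.

```python
import numpy as np

def fuzzy_ekf_run(x, P, steps, dt, f, h, jac_f, jac_h, Q, fuzzy_variances):
    """Recursive fuzzy EKF (Fig. 5.9). `steps` yields (u_k, z_k, context_k);
    `fuzzy_variances(context_k)` maps the sensor context (measured ranges,
    traveled distance, floor type, satellite count) to the diagonal of R_k."""
    for u, z, ctx in steps:
        R = np.diag(fuzzy_variances(ctx))      # FL controllers set R_k
        x, P = ekf_step(x, P, u, z, dt, f, h, jac_f, jac_h, Q, R)
    return x, P
```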
Fig. 5.9 Block diagram for the fuzzy EKF

5.5.6 Results

To evaluate the performance of the fuzzy EKF, a set of experiments was run indoors and outdoors. The distinction between indoor and outdoor environments has to be stated because it considerably influences the sensors' performance. For example, the GPS cannot establish a satellite connection indoors, while indoors the IMU readings are heavily distorted by the metallic surroundings.
The performance of the fuzzy EKF is compared with the performance of the standard EKF. Since no error models are available for the sensors used in the EKF, the errors in the sensor readings were assumed to be zero mean Gaussian, using the maximum variance for each sensor from Tables 5.3 and 5.4.

5.5.6.1 Indoor Environment

For the indoor experiments the vehicle was assigned to follow two trajectories. For the first trajectory (a straight line), the linear velocity is constant, V = 0.3 m/s, and the angular velocity is zero. For the second trajectory, the linear velocity is constant, V = 0.3 m/s, and the angular velocity is repeatedly switched between 0.2 rad/s and -0.2 rad/s, with a cycle time of 0.2 s. For comparison purposes, results derived using the traditional EKF are also presented.

Fig. 5.10 and 5.11 present the vehicle's trajectory as estimated by the EKF and the fuzzy EKF. For both experiments only one landmark is considered, located in front of the vehicle at a distance of 7.90 m. Table 5.5 summarizes the results for the two trajectories. As shown, for the first experiment (straight line) the error remains less than 4% for all methods. For the second experiment, on the other hand, the odometer error exceeds 20%, while the position estimation error for the EKF and the fuzzy EKF is 6% and 4% respectively.
5.5.6.2 Outdoor Environment

For the outdoor experiments the vehicle was assigned to traverse a square with edges of 10 m. The experiments were run on different days under different weather conditions, so that different satellite coverage would occur. Four landmarks are considered, one at the end of each edge of the square. In the following figures the actual positions of the landmarks appear as red squares and actual vehicle position samples appear as red stars. Since the experiments were run outdoors over long distances, it is impossible to know the exact route the vehicle followed; the vehicle position samples and the landmark positions are as close to reality as possible.

Fig. 5.12 and 5.13 present the performance of the EKF and the fuzzy EKF respectively. There was a lock on seven satellites for this experiment (Large, in terms of the FL membership function). The experiments in Fig. 5.14 and 5.15 were conducted with a six satellite lock (Medium, in terms of the FL membership function), and the experiments in Fig. 5.16 and 5.17 with a five satellite lock (Small, in terms of the FL membership function).

Table 5.5 Error comparison for indoor navigation

  State  | Odometer Exp. 1 | Odometer Exp. 2 | EKF Exp. 1 | EKF Exp. 2 | Fuzzy EKF Exp. 1 | Fuzzy EKF Exp. 2
  x      | 4%              | 5%              | 2%         | 2.3%       | 1%               | 1.3%
  y      | 2%              | 25%             | 2%         | 6%         | 2%               | 4%
  lan_x1 | -               | -               | 0%         | 2%         | 0%               | 1%
  lan_y1 | -               | -               | 0%         | 5%         | 0%               | 4%
Fig. 5.10 The vehicle trajectory according to the three methods, experiment 1

Fig. 5.11 The vehicle trajectory according to the three methods, experiment 2
Fig. 5.12 Performance of the EKF, experiment 1

Fig. 5.13 Performance of the fuzzy EKF, experiment 1
Fig. 5.14 Performance of the EKF, experiment 2

Fig. 5.15 Performance of the fuzzy EKF, experiment 2
Fig. 5.16 Performance of the EKF, experiment 3

Fig. 5.17 Performance of the fuzzy EKF, experiment 3
As shown in Fig. 5.12 to 5.17, the proposed fuzzy EKF is able to fuse information from multiple sensors to estimate the position of the vehicle. In most cases, the fuzzy EKF performs better than the EKF in terms of position accuracy. This is because the fuzzy EKF takes into consideration the error in the sensor readings. For example, in Fig. 5.12 and 5.13, where the satellite coverage is Large, the fuzzy EKF assumes a Small error in the GPS readings; thus, the estimated route is closer to the GPS route. In Fig. 5.16 and 5.17, however, where the satellite coverage is Small, the fuzzy EKF assumes a Large error in the GPS readings, resulting in a route closer to the odometer's route.

5.6 Conclusions

This chapter presented a method for UGV localization using Fuzzy Logic and Kalman Filtering. Information from five different sensors was fused to provide an estimate of the vehicle's position. Fuzzy Logic has been used to compute and adjust the parameters of the error distribution of the sensor readings and to update the covariance matrix of the measurement's model in the EKF. As demonstrated, the fuzzy EKF performs better than the EKF in terms of position accuracy. More extensive experimentation on the sensors' behavior may further improve the accuracy of the fuzzy EKF. Future work involves natural landmark selection: the vehicle should be able to dynamically identify the most dominant landmarks of the environment in which it moves.
Chapter 6

Mission Planning

6.1 Introduction

Many applications in the industrial, civilian and military fields benefit from mobile robot utilization. Application domains vary from warehouse patrolling to service robotics and space exploration. Mobile robots can be assigned to explore, map or inspect friendly or hostile territories [102][103][104], or to dispense medications in medical facilities [105]. Specifically in industrial applications such as manufacturing, underground mining, toxic waste clean up and material storage/handling, where many processes take place in hazardous environments harmful to human health, the choice of robotics based solutions is justifiable. Furthermore, as the complexity and requirements of an application increase, significant advantages may be drawn from the use of multi robot systems.

Multi robot systems are classified into cooperative robot teams and robot swarms [106]. The difference between the two is that in the former case the team members have different sensory capabilities, while in the latter case all members are identical. A cooperative robot team is considered in this chapter.
A major challenge when working with multi robot systems is that of task allocation and coordination. The overall mission is decomposed into multiple tasks, to each of which one or more robots are assigned. The task allocation problem is further complicated by the dynamic characteristics of the robot team, such as robot failures and repairs, which may lead to incomplete tasks. The robot team should be able to complete the mission even if some team members are no longer functional.

This chapter describes a general dynamic task allocation and controller design methodology for cooperative robot teams. The robot team is modeled as a DES, where each robot is modularly represented by a finite state automaton model. The mission requirements model is synthesized from individual finite state automata representing task completion requirements. The proposed control methodology is partially based on the RW supervisory control theory [107]. However, instead of synthesizing a complete supervisor, as the traditional RW theory suggests, a limited lookahead policy is adopted that enables or disables events in the system in real time, based on the evaluation of a utility function and on robot availability. The utility function uses fuzzy logic to quantify the ability of a robot to perform a task. The robot modules appear or disappear over time depending on the failure and repair events of the robots, and in case of failures the control methodology re-allocates tasks to the operational robots of the team to ensure mission completion.

Our work is motivated first by the fact that in traditional supervisory control theory, the acceptable sequences of event execution determined a priori are computationally intractable for realistic size problems. Furthermore, in applications where there is a significant degree of uncertainty associated with resource reliability and the environment, these sequences may not be executable. Instead, we propose a control approach based on a limited lookahead control policy for task allocation in real time.
Secondly, the criteria used in restricting system evolution are the marked states, which denote the acceptable states, and the legal language, which denotes acceptable system behavior. These criteria fail to describe a preferred behavior within the set of acceptable behaviors. In cooperative robot teams, several characteristics of the team members, such as endurance, reliability and efficiency, must be considered in task allocation decisions. We describe these characteristics as fuzzy variables and develop a fuzzy controller to determine the utility function value for each task allocation event. These values are then used to determine a preferred task allocation in real time as part of the proposed controller design methodology.

To further clarify the proposed control methodology, an application scenario, depicted in Fig. 6.1, in which a team of mobile robots is assigned to patrol a warehouse containing hazardous and security sensitive materials, is considered throughout the chapter. Three robots with different sensory capabilities are employed. Table 6.1 summarizes the sensor suite of each robot.

Fig. 6.1 Warehouse partitioning for the patrolling scenario

Based on the partition based patrolling strategy described in [108], the warehouse is partitioned into five regions as follows: Region 1 contains flammable materials, Region 2 chemical materials, Region 3 radioactive materials, Region 4 security sensitive materials, and Region 5 security sensitive and flammable materials.
This configuration allows us to introduce features commonly encountered in cooperative robot team missions, such as flexibility in task assignment (Region 1 can be assigned to Robot 1 or Robot 3) and cooperation between the robots (Region 5 must be assigned either to Robot 3, or first to Robot 1 and subsequently to Robot 2).

Table 6.1 Robot sensors

  Robot 1: Fire detector, Chemical detector, Geiger counter
  Robot 2: Chemical detector, Geiger counter, Vision system
  Robot 3: Fire detector, Vision system

Table 6.2 Region/robot allocation

  Region 1 (Flammable):                   Robot 1 or Robot 3
  Region 2 (Chemical):                    Robot 1 or Robot 2
  Region 3 (Radioactive):                 Robot 1 or Robot 2
  Region 4 (Security sensitive):          Robot 2 or Robot 3
  Region 5 (Flammable & Sec. sensitive):  Robot 3, or Robots 1 & 2

The team's mission is to inspect all five warehouse regions. The overall mission is divided into 5 tasks, where Task k corresponds to the patrolling/inspection of warehouse Region k.

The rest of this chapter is organized as follows: Section 6.2 discusses the related literature. Section 6.3 presents the DES models of the robot team and the mission requirements models for the task allocation problem. Section 6.4 describes the utility function concept and the fuzzy controller used to determine the utility function values for task allocation events.
Section 6.5 describes the proposed limited lookahead policy. Section 6.6 presents the experimental results obtained using the proposed controller design methodology, and Section 6.7 includes conclusions and future research directions.

6.2 Related Work

The task allocation problem has been addressed in the literature by utility based and auction based approaches, for both cooperative robot teams and robot swarms. Utility based approaches have been used for task allocation in many control architectures, as in [106], [109] and [110]. Each task is assigned to a robot based on various utility estimates: in [106] each robot is assigned a task based on utility estimates of acquiescence and impatience; in [109] utilities are computed as a function of relevant sensors, and the robot having the most relevant sensors for a task is assigned that task. Utility has also been used in robot team cooperation to estimate the cost of executing an action [111] and for sensor based metrics [112]. Auction based approaches, as in [113], [111] and [112], achieve task allocation based on the Artificial Intelligence concept of the Contract Net Protocol [114]: each robot bids for an available task and the robot with the highest bid is assigned that task. In the proposed control methodology, the dynamic task allocation problem is addressed using utility and fuzzy logic. Utility function values are computed based on the ability of each robot to perform a task, considering several factors.
Limited lookahead policies for supervisory control were first studied in [115], where a limited lookahead window is used to control the online behavior of the uncontrolled system model. The notion of pending traces is introduced to describe the legality of a trace in the lookahead window based on a conservative or an optimistic attitude. The notion of pending traces was later revisited in [116] by extending the uncontrolled system model behavior by arbitrary traces beyond the limited lookahead window. In [117] the authors present a methodology that recursively computes future control actions based on previously computed control actions. Later, in [118] and [119], the authors present an extension to the lookahead policies that copes with the computational complexity problem by making a control decision without exploring the whole lookahead window.

Further enhancements to limited lookahead policies for supervisory control have been proposed. In [120] a lookahead policy is presented for systems with partial observability. In [121], system uncertainty is considered by assigning probabilities to event occurrences, and in [122] by modeling all possible variations of the system. To our knowledge, there are no limited lookahead policies in the literature designed to control cooperative robot teams.

As noted in [123], only a few approaches, such as those described in [124] and [125], concentrate on time varying systems where system modules appear or disappear over time. In these approaches, resource modules disappear only after the completion of assigned tasks. In this work, we relax this assumption by considering failures during task execution. In coordinated robot teams the concept of robot failures and repairs is important, since a robot failure while executing a task will lead to an incomplete mission unless the control model reassigns the task elsewhere.
The lookahead policy presented in this chapter considers robot failures and repairs to ensure mission completion.

Supervisory control approaches based on discrete event systems have been used by a number of researchers to control mobile robot teams. However, although a limited amount of work considers robot failures, not much effort is found in the area of control decisions concerning robots rejoining the team after repairs. Specifically, the automata based approaches presented in [126], [127] and [128] consider situations where some robots go offline, but do not take into account situations where robots come back online. Similarly, the Petri Net controller in [129] disregards robot repairs. Finally, the control architecture presented in [130] handles only robot failures.

6.3 System Model Description

In RW supervisory control theory, the uncontrolled system's model (UCSM) and the mission requirements are modeled separately using finite automata. Considering the patrolling application described in Section 6.1, the finite automaton (FA) representing the uncontrolled behavior of Robot j is defined by the set of events Robot j can execute, the set of states, the transition function, and the initial and final states. The set of events consists of the controllable and the uncontrollable events.
For the patrolling application, the controllable events are the events start_jk, corresponding to "initiation of Task k by Robot j". Based on the robot sensory capabilities described in Tables 6.1 and 6.2, the possible robot/task allocations for Task k and Robot j are defined in (6.1).

The uncontrollable events are complete_jk, failure_jk, repair_jk and drop_jk, corresponding to "completion of Task k by Robot j", "failure of Robot j while executing Task k", "repair of Robot j", and "repair of Robot j with re-initialization of Task k" respectively.

The set of states for Robot j consists of an idle state, busy states (Robot j busy with Task k) and failed states (Robot j failed while executing Task k). The initial and final states of Robot j are the idle state. Fig. 6.2 depicts the transition graph for the automaton describing Robot j. An arrow marks the initial state; the marked state is shown as a dark circle.

The FA model design incorporates two different cases of robot failures and repairs. A robot failure may be treated as a temporary failure or as a failure with task re-initialization. In the first case, the failed robot continues executing its task as soon as it is repaired, while in the second case the task of the failed robot needs to be re-initialized.
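One convenient encoding of the robot module of Fig. 6.2 is an explicit transition table. The sketch below follows the event names shown in the figure; the interpretation of repair_jk as resuming the task and drop_jk as returning the robot to idle with the task re-initialized follows the two failure cases described above, and the data structure itself is illustrative.

```python
def robot_module(j, k):
    """Finite automaton for Robot j and Task k (Fig. 6.2).
    States: Ij (idle, initial and marked), Bjk (busy), Fjk (failed)."""
    I, B, F = f"I{j}", f"B{j}{k}", f"F{j}{k}"
    transitions = {
        (I, f"start_{j}{k}"):    B,   # controllable: task initiation
        (B, f"complete_{j}{k}"): I,   # task completed
        (B, f"failure_{j}{k}"):  F,   # robot fails during the task
        (F, f"repair_{j}{k}"):   B,   # temporary failure: resume the task
        (F, f"drop_{j}{k}"):     I,   # repair with task re-initialization
    }
    return {"states": {I, B, F}, "initial": I, "marked": {I},
            "transitions": transitions, "controllable": {f"start_{j}{k}"}}
```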
Fig. 6.2 Transition graph for Robot j

The UCSM that represents the uncontrolled behavior of the robot team is composed as the synchronous product [131] of the individual robot modules, as in (6.2).

The specification's model is the finite automaton that models the mission requirements; it is synthesized from individual task completion requirement models. Three alternative task completion requirements are modeled:

1. Alternative 1: Task k can be performed only by Robot j.
2. Alternative 2: Flexibility in task assignment: Task k can be performed by Robot j or by Robot i.
3. Alternative 3: Task sequencing and robot coordination: Task k must be performed first by Robot j and subsequently by Robot i.

Fig. 6.3 depicts the transition graphs for these three alternative task completion requirements. Fig. 6.3a describes the requirement where Task k can be performed only by Robot j; if Robot j fails during the task, the task may be re-initialized, as shown by the drop_jk event. Fig. 6.3b describes flexibility in task assignment, where Task k can be performed only by Robot j or by Robot i.
Finally, Fig. 6.3c describes task sequencing and robot coordination, where Task k must be completed first by Robot j and then by Robot i. For example, consider the patrolling scenario where Robots 1 and 2 must perform Task 5 (patrol of Region 5 of the warehouse). The region contains flammable and security sensitive materials. Since Robot 2 is not equipped with fire detection sensors (Table 6.1), it is vulnerable to fire. For this reason, Task 5 must be assigned first to Robot 1 and then to Robot 2.

The task completion requirements are modeled as finite automata of the form given in (6.3), where n is the number of individual task completion requirements. For the patrolling scenario described in Section 6.1, n = 5, corresponding to the patrolling of the five warehouse regions. Patrolling of warehouse Regions 1 to 4 is modeled based on Alternative 2, and warehouse Region 5 is modeled based on Alternative 3.

Fig. 6.3 Transition graphs for the task completion requirements: (a) Alternative 1, (b) Alternative 2, (c) Alternative 3
For the particular warehouse patrolling application, it is assumed that at most one robot may fail, once, during the mission. Otherwise, it may not be possible to complete the mission, and further discussion of the control methodology is rendered moot. This assumption is modeled as a finite automaton and is depicted in Fig. 6.4.

Fig. 6.4 Transition graph of the failure assumption automaton

The specification's model, S, is a finite automaton synthesized as the synchronous product of all mission requirements. Considering the specifications outlined in the warehouse patrolling scenario, the specification model is synthesized as in (6.4).

The supervisory control model for the team of robots consists of the coupled system model and the control pattern. The coupled model is defined as the product of the UCSM and the specification model, which includes all the events that are allowed by both models, as in (6.5). The control pattern is a function, based on the supremal controllable language of the coupled model, that enables (1) or disables (0) the controllable events in the UCSM so that desirable system behavior is guaranteed. The synthesis of the control pattern, and consequently the solution of the supervisory control problem, is a computationally prohibitive procedure for larger systems.
Furthermore, while a task allocation and desired control pattern determined a priori may be executable with reliable resources in controlled environments, such a sequence of events is very unlikely to be executed to completion in applications associated with cooperative robot teams, due to unreliable resources and the uncontrollable, and possibly hostile, environments in which robot teams typically operate.

In this chapter, instead of following the traditional supervisory control approach and synthesizing the complete supervisor for the system, a limited lookahead control policy is adopted. Limited lookahead control approaches are suitable for highly dynamic systems, since only the portion of the system corresponding to its behavior in the near future is evaluated. A limited lookahead window of finite depth is used to direct the behavior of the system. Every time an event is executed in the UCSM, the lookahead window is reconstructed and all possible sequences of events in the lookahead window are evaluated. The event leading to the highest evaluated string is enabled, while the rest of the controllable events are disabled. The evaluation criteria, based on the utility concept, are described in the next section.

6.4 Utility Function Definition

In cooperative robot teams, each robot possesses unique characteristics, including but not limited to sensory capabilities, cost, efficiency and endurance. For this reason, each robot presents a different level of ability to perform a certain task.
In addition, the utility function may be used to capture the system designer's preference to assign certain tasks to specific robots. For example, consider the case where the system designer knows that a sensor on a robot is functional but not efficient; the designer then wishes to assign this robot a task that does not make use of that particular sensor. These aspects complicate the task allocation problem, and an evaluation method that maximizes the overall performance of the robot team is required.

In supervisory control theory, the traditional system evaluation criteria are the marked states, which denote the acceptable states, and the legal language, which denotes acceptable system behavior. However, these criteria fail to describe undesirable yet acceptable behavior. For example, Robot 1 or Robot 3 can perform Task 1. Assigning the task to either robot is an acceptable action, but assigning Task 1 to Robot 1 ties up other sensory capabilities. In a sense, Robot 1 is "over qualified" to perform Task 1, making Robot 3 the more desirable choice.

The proposed control methodology employs a utility function that evaluates strings in the lookahead window. We define a utility function that associates an event with a utility value between 0 and 1, and we define the utility of a string of events as in (6.6).

The attributes that could be used to compute a robot's ability to perform a task, such as endurance, efficiency and designer's choice, represent vague concepts that are hard to describe mathematically. However, these concepts can be described in terms of fuzzy logic, as fuzzy variables with linguistic membership functions.
A Mamdani type fuzzy logic controller [132] is proposed that receives as inputs the memberships of each fuzzy variable and computes the ability of a robot to perform a task. Three fuzzy variables are considered: the robot's endurance, the designer's choice and the robot's efficiency. The first fuzzy variable has three membership functions and denotes how long a robot can remain functional. The second fuzzy variable, with three membership functions, denotes the system designer's preference to assign certain tasks to specific robots. Finally, the third fuzzy variable, with three membership functions, denotes the robot's efficiency level. The output of the fuzzy logic controller is also a fuzzy variable, with membership functions denoting a robot's ability to perform a task. Fig. 6.5 and 6.6 depict the fuzzy variables and their membership functions adopted for the patrolling application.

Considering a Task k and a Robot j, the ability of Robot j to perform Task k is computed based on a set of rules such as:

1. "If the robot's endurance is long, the designer's choice is high and the robot's efficiency is high, then the ability of Robot j to perform Task k is high."
2. "If the robot's endurance is fair, the designer's choice is medium and the robot's efficiency is medium, then the ability of Robot j to perform Task k is medium."
3. "If the robot's endurance is short, the designer's choice is low and the robot's efficiency is low, then the ability of Robot j to perform Task k is low."
There are 27 such rules in the fuzzy controller, covering all combinations of the membership functions of the input variables.

Fig. 6.5 Membership functions of the fuzzy variable: robot's endurance

Fig. 6.6 Membership functions of the fuzzy variables: designer's choice, robot's efficiency and robot's ability

The ability of a robot to perform a task is closely related to task allocation and, consequently, to the task initiation events start_jk. Thus, the utility function value of the event start_jk is equal to the ability of Robot j to perform Task k, as expressed in (6.7).
The drop_jk events, corresponding to task re-initialization due to robot failure, represent the cancellation of task assignments; their utility function values are given in (6.8). The higher the utility function value of an event, the more desirable that event is. Since uncontrollable events cannot be enabled or disabled, their utility function values are fixed, as in (6.9).

Each time an event occurs in the UCSM and the limited lookahead window is reconstructed, the utility function values of all the strings in the lookahead window are computed. The string with the highest utility function value corresponds to the most desirable system behavior. The maximization procedure is implemented as a dynamic programming problem [133] with a forward sweep and a backtracking pass. Fig. 6.7a shows a portion of the coupled model for the patrolling application; for this example, the event utilities are assumed to take the values given in (6.10).

In the forward sweep of the dynamic program, each state Q in the lookahead tree is assigned the maximum utility, denoted by V(Q), over the strings that lead to that state, as expressed in (6.11), where the maximization is taken over all strings from the current root state to state Q in the coupled model.
In Fig. 6.7a, using Equations (6.6) to (6.11), state S1 is assigned the value given in (6.12). Similarly, the value of state S4 is the utility of the string leading to it through S1, as in (6.13). In the same manner, the values of states S3, S5 and S6 are given in (6.14), (6.15) and (6.16).

The backtracking pass, initiated after the conclusion of the forward sweep, is illustrated in Fig. 6.7b. Starting from the final states in the lookahead window, a maximum utility value V is assigned to each immediate predecessor of the final states based on Equation (6.17), where each state is assigned the maximum value among its immediate successors, as expressed in (6.18). The procedure is repeated until the root state of the lookahead window is reached.
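The forward sweep and backtracking pass amount to a small dynamic program over the lookahead tree. The sketch below assumes the utility of a string is the sum of its event utilities, which is consistent with the worked values of Fig. 6.7, and uses an illustrative dictionary representation of the window.

```python
def evaluate_window(tree, root, utility):
    """Evaluate a lookahead window (Eqs. 6.11 and 6.17).
    `tree[state]` lists (event, next_state) pairs; `utility[event]` = u(e).
    Returns the root-exiting events that lead to the highest-valued string."""
    V = {}

    def forward(state, acc):                 # forward sweep: V(Q) = max U(s)
        V[state] = max(V.get(state, float("-inf")), acc)
        for event, nxt in tree.get(state, []):
            forward(nxt, acc + utility[event])

    def backtrack(state):                    # backtracking pass (Eq. 6.17)
        successors = tree.get(state, [])
        if successors:
            V[state] = max(backtrack(nxt) for _, nxt in successors)
        return V[state]

    forward(root, 0.0)
    backtrack(root)
    best = max(V[nxt] for _, nxt in tree[root])
    return [event for event, nxt in tree[root] if V[nxt] == best]
```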
135 new limited look a head window is constructed as soon as an event is executed and the utility maximization proc e dure is repeated. S2:V=1 S3:V=1 S1:V=0.5 S4:V=1 S0 S5:V=2 S6:V=1 start_23 start_22 start_11 complete_11 complete_12 start_12 S2:V=2 S3:V=1 S1:V=1 S4:V=1 S0; V=2 S5:V=2 S6:V=1 start_23 start_22 start_11 complete_11 complete_12 start_12 a b Fig. 6.7 Forward sweep (a) and backtracking pass (b) of dynamic programming The assignment of utility function values to events is a design parameter, which is cu s tomizable based on the goals of the mission and the designer. In the task allocation scenario we are considering, t hese values are assigned in a manner where strings co mple t ing a task assig n ment are not highly evaluated as shown in Equations (6.14) and (6.16). However, a different strategy may require prioritizing task completion. In this case events associated with co mpletion events may be assigned higher utility function values and the controller will evaluate strings with task completion events highly and guide the system accordingly.


6.4.1 Normalized Utility Function

In the evaluation process described in the previous section, a string that includes many task initiation events is evaluated higher than a string with fewer task initiation events. However, the highest evaluated string may not always correspond to the most desirable task allocation. Consider the case where the ability of Robot 3 to perform Task 1 is higher than that of Robot 1, so that assigning Task 1 to Robot 3 is the desired action. Suppose that the initiation of Task 1 by Robot 1 is part of one string and the initiation of Task 1 by Robot 3 is part of another string, in a limited lookahead window of depth 3 where only 3 future events are considered. The utility function value of the first string is higher than that of the second (3 and 2, respectively). Thus, the control methodology will disable the event corresponding to the more desirable task allocation.

To eliminate this bias arising from the limited depth of the lookahead window, a normalized utility function is used (Equation (6.19)): the utility of a string divided by the number of task initiation events in the string. The normalized utility values of the two strings would be 1 and 2, respectively, leading to the preferred choice of task allocation. It should be noted that the normalization of the utility function is also a design consideration and is customizable depending on the characteristics of the mission.
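The normalization of Equation (6.19) can be sketched in a few lines. The event labels and utilities below simply reproduce the worked example above, where a three-initiation string of utility 3 normalizes to 1 and a single-initiation string of utility 2 normalizes to 2; the label start_31 for Robot 3 initiating Task 1 is a hypothetical name following the same convention.

    # A small sketch of Equation (6.19): string utility divided by the number of
    # task initiation events it contains. Labels and values reproduce the worked
    # example above and are illustrative.

    def normalized_utility(string, util):
        total = sum(util[e] for e in string)
        n_init = sum(1 for e in string if e.startswith("start_")) or 1
        return total / n_init

    util = {"start_11": 1.0, "start_22": 1.0, "start_23": 1.0, "start_31": 2.0}
    s1 = ["start_11", "start_22", "start_23"]   # three initiations, utility 3 -> 1.0
    s2 = ["start_31"]                           # one initiation,   utility 2 -> 2.0
    print(normalized_utility(s1, util), normalized_utility(s2, util))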


6.5 Limited Lookahead Control Policy

This section presents the limited lookahead policy of the proposed control methodology for task allocation in a mobile robot team. Fig. 6.8 depicts the block diagram of the control algorithm, which consists of four main modules: system initialization, lookahead window formation, string evaluation and control decision.

Fig. 6.8 Block diagram of the control algorithm

In the system initialization module, the UCSM and the specifications are generated as described in Section 6.2. Based on the fuzzy logic controller, each event is assigned a utility function value using Equations (6.7), (6.8) and (6.9). Using the initial state of the UCSM and the specifications model as the root state, a lookahead window is formed that includes all the transitions starting from the initial state up to a certain predefined depth in the coupled model. The transitions in the lookahead window form a tree of strings that can be executed in the UCSM. Each string in the lookahead window is evaluated using the normalized utility function of Equation (6.19).
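The lookahead window formation step can be sketched as a bounded expansion of the coupled model from the current root state. The transitions(state) interface used below, returning the (event, successor) pairs of the coupled model, is an assumed interface rather than the dissertation's actual data structure.

    # A sketch of lookahead-window formation: expand the coupled model from the
    # current root state up to a fixed depth. transitions(state) is an assumed
    # interface returning the list of (event, successor) pairs.

    def build_lookahead_window(root, transitions, depth):
        # Returns {state: [(event, successor), ...]} for all states within `depth`.
        window, frontier = {root: []}, [(root, 0)]
        while frontier:
            state, d = frontier.pop()
            if d == depth:
                continue                      # do not expand past the window depth
            window[state] = list(transitions(state))
            for _, nxt in window[state]:
                window.setdefault(nxt, [])
                frontier.append((nxt, d + 1))
        return window

    # Usage sketch: window = build_lookahead_window(current_state, coupled_model_transitions, depth=3)

The resulting dictionary plays the role of the tree of strings evaluated by the string evaluation module.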


In the control decision module, the event exiting the root state that leads the string with the highest utility function value is enabled. All other controllable events exiting the root state are disabled. If two or more strings present the same utility function value, all events leading these strings are enabled. At the next state, the new system state becomes the root state of the lookahead window, and the procedure is repeated until the system reaches a marked state.

For the warehouse patrolling application, the forward sweep of the dynamic programming for a small section of a lookahead window of depth 3 is shown in Fig. 6.9a. The task initiation and task completion events are labeled in the figure; the utility function value of each event is shown underneath the event label, and the maximum utility V(Q), computed using Equation (6.11), is depicted at the final state of each string. Fig. 6.9b shows the result of the backtracking pass, where each state has been assigned the maximum utility value based on Equation (6.17). The string with the highest utility function value appears in bold. The first event of this string remains enabled, while all other controllable events exiting the root state are disabled.

6.5.1 Robot Failures and Repairs

Frequently during a mission, a robot may go offline due to a sensor failure or communication loss, resulting in an incomplete task. The control methodology should be able to compensate for robot failures by reallocating tasks to the operational robots of the team.
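A minimal sketch of the control decision step is shown below. It reuses the illustrative tree and the backtracked values V from the earlier sketches; the set of controllable events passed in is likewise an assumption.

    # A sketch of the control decision step: enable the controllable event(s) at
    # the root leading toward the highest-valued string, disable the rest.
    # Uncontrollable events are never disabled.

    def control_decision(tree, V, root, controllable):
        best = max(V[nxt] for _, nxt in tree[root])
        enabled = {e for e, nxt in tree[root] if V[nxt] == best and e in controllable}
        disabled = {e for e, nxt in tree[root] if e in controllable} - enabled
        return enabled, disabled

    enabled, disabled = control_decision(tree, V, "S0", {"start_11", "start_22", "start_23"})
    print(enabled, disabled)    # {'start_23'} and {'start_11', 'start_22'} here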


Two kinds of failures are considered in this work: temporary failures and failures with task re-initialization.

Fig. 6.9 Limited lookahead policy for the patrolling scenario

Temporary failures, such as communication loss, are failures that can be repaired in a short period of time; the robot may continue executing its task after the repair. When a temporary failure occurs, the events in the lookahead window associated with


task initiation or completion by the failed robot are masked until the robot is repaired, so that events associated with the failed robot are not considered. Masking is used to hide all the events associated with the failed robot. In essence, since the failed robot ceases to be part of the robot team until it is repaired, its FA module disappears from the UCSM. When the robot is repaired and its FA module reappears in the UCSM, all events associated with the repaired robot are unmasked.

To incorporate the failure information into the control methodology, an additional module called failure detection has been implemented. The module interacts with the rest of the algorithm components as shown in Fig. 6.10. When the failure detection module detects a robot failure in the robot team, the information to be used in the construction of the new limited lookahead window is forwarded to the next module. This information includes the set of events to be masked, the type of failure, and the expected time to repair. Fig. 6.11 demonstrates a temporary failure: Robot 1 fails in State 2 and is repaired in State 5 to continue executing its task assignment. The figure labels distinguish initiation, completion, failure, repair and task drop events.

Failures with task re-initialization are failures that require re-initialization of the task assigned to the failed robot. For example, consider a sensor failure that is not immediately recognized; during task execution the sensor failure is realized and the task needs to be re-initialized.
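Masking can be sketched as filtering the failed robot's transitions out of the lookahead window until the repair event is observed. The naming convention used to associate an event with a robot (the first digit of the start_ij / complete_ij suffix) is an assumption made for illustration.

    # A sketch of event masking after a robot failure: hide transitions whose
    # events belong to the failed robot. The event-to-robot naming convention is
    # an illustrative assumption.

    def mask_failed_robot(window, failed_robot):
        def belongs_to(event, robot):
            return event.split("_")[-1].startswith(str(robot))
        return {state: [(e, nxt) for e, nxt in arcs if not belongs_to(e, failed_robot)]
                for state, arcs in window.items()}

    masked = mask_failed_robot(tree, failed_robot=1)   # hides start_11, complete_11, start_12, ...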


When a failure with task re-initialization occurs, all initiation and completion events in the lookahead window associated with the failed robot are masked until the robot is repaired, and the task assigned to the failed robot is re-allocated to a different robot. Fig. 6.12 demonstrates a failure with task re-initialization: Robot 3 fails in State 3 while executing Task 1, and the failure is treated as a failure with task re-initialization. In State 7, Task 1 is dropped and re-assigned to Robot 1.

Fig. 6.10 Control algorithm with failure detection

Fig. 6.11 Events executed in the UCSM after a temporary failure


Fig. 6.12 Events executed in the UCSM after a failure with task re-initialization

6.6 Computational Results Using the Proposed Control Methodology

This section discusses the experimental results of the controller design methodology for dynamic task allocation using the warehouse patrolling application scenario. The membership functions for the three selected fuzzy variables, namely endurance, efficiency and designer's choice, are shown in Table 6.3. Using these membership functions, the ability of each robot to perform a task is computed by the fuzzy controller; the corresponding event utility function values, based on Equation (6.7), are listed in Equation (6.20). The maximum profit due to control actions is equal to 4.09 and is achieved when the corresponding task assignments are made in any order.

Two operational scenarios, one without failures and one with failures, are considered for varying depths of the limited lookahead window. Three performance measures of


interest are reported. Total utility refers to the sum of the utilities of the events executed in an experimental run; it is an indicator of the quality of the task allocation decisions, with higher values indicating better task allocation. Average state space refers to the average number of states explored each time a lookahead window is created during an experimental run; this metric reflects the computational requirement for real-time decision making. Finally, the total state space metric refers to the total number of new states explored during the entire experimental run.

Table 6.3 Fuzzy logic membership functions for the patrolling scenario (membership values listed per robot and fuzzy variable across Tasks 1 to 5)

Robot 1: Endurance: Long, Fair, Long, Long; Efficiency: Low, Medium, Medium, Medium; Des. Choice: Low, High, Low, Low
Robot 2: Endurance: Fair, Fair, Fair, Short; Efficiency: High, Medium, Medium, Medium; Des. Choice: Low, High, High, Low
Robot 3: Endurance: Short, Fair, Fair; Efficiency: Medium, Medium, Medium; Des. Choice: High, Low, High

Table 6.4 shows the results of 10 experimental runs for the case without failures for limited lookahead depths (LLD) of 2, 3, 6 and 12, and Table 6.5 summarizes these results. In this scenario, the sequence of events that completes the mission of patrolling the 5 warehouse regions has length 12, and the size of the coupled model is 735 states. For the case of LLD 12, the entire state space of the coupled model is explored and the maximum possible total utility is obtained in every run. For LLDs of 3 and 6, every experimental run returned the maximum total utility.
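Assuming each experimental run is logged as the list of executed events and the list of new states explored per lookahead window (the logging interface itself is an assumption), the three reported measures can be computed as in the following sketch.

    # A sketch of the three reported performance measures; `util` is the event
    # utility map used earlier, and the run log format is an assumption.

    def run_metrics(executed_events, new_states_per_window, util):
        total_utility = sum(util[e] for e in executed_events)
        average_state_space = sum(new_states_per_window) / len(new_states_per_window)
        total_state_space = sum(new_states_per_window)
        return total_utility, average_state_space, total_state_space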


While this is a strong indication that an LLD of 25% or 50% of the maximum lookahead depth can return results comparable to the maximum total utility, this performance cannot be guaranteed in every case. For LLDs of 3 and 6, the average computational requirements to generate a lookahead window were 10% and 56% of the computational requirements of the maximum lookahead depth, and 15% and 68% of the complete state space was explored. In the case of LLD 2, the average total utility is 75% of the maximum total utility, at a small fraction of the computational requirements of the LLD 12 case. In summary, these experiments indicate that a limited lookahead control policy for task allocation performs comparably to a control policy that considers the total state space, at a fraction of the total computational requirements.

Table 6.6 shows the results of 20 experimental runs for the operational scenario with robot failures for LLDs of 2, 3, 6 and 12. Robot failures are generated using a uniformly distributed random variable denoting the probability of each robot to fail. Furthermore, a uniformly distributed random variable R is defined to describe the severity of a robot failure: depending on its value, the failure is treated either as a failure with task re-initialization or as a temporary failure (Equation (6.21)). The random variable R is also used to determine the robot repair time (Equation (6.22)).
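A sketch of this failure-generation scheme is given below. The per-robot failure probability and the threshold of 5 that separates the two failure types are assumptions; the mapping of R to the repair time follows the worked example given after Equation (6.22), where R = 6 gives a repair after 10 - 6 = 4 event intervals.

    # A sketch of the failure-generation scheme of Equations (6.21) and (6.22).
    # p_fail and the threshold of 5 on R are assumptions.
    import random

    def sample_failure(p_fail=0.1):
        # Returns None, or a (failure_type, repair_delay_in_event_intervals) pair.
        if random.random() >= p_fail:
            return None                                  # robot keeps operating
        R = random.uniform(0.0, 10.0)
        if R < 5.0:
            return ("temporary", R)                      # repaired after R intervals
        return ("task_reinitialization", 10.0 - R)       # repaired after 10 - R intervals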


For example, if R = 4 the failure is considered temporary and the robot can be repaired after 4 event intervals. If R = 6, the failure is considered a failure with task re-initialization and the robot can be repaired after 10 - 6 = 4 event intervals.

Table 6.7 summarizes the results of the 20 experimental runs. In this scenario, the uncertain environment arising from robot failures produced a more mixed set of results. In all cases, including the maximum lookahead depth of 14 (the increase from 12 is due to the robot failure and repair/task re-initialization events), the average total utility was less than the maximum total utility. As expected, the computational requirements were comparable to the scenario without failures. Note that the percentages of computational requirements for LLDs of 2, 3 and 6 were lower, since the maximum lookahead depth in this scenario is higher.

Table 6.4 Experimental results for the case without failures


Table 6.5 Summary of experimental results for the case without failures

Table 6.6 Experimental results for the case with failures


Table 6.6 (Continued)

Table 6.7 Summary of experimental results for the case with failures

To demonstrate the impact of the LLD on task allocation decisions based on the total utility of the executed events, we compared the null hypothesis H0: μ2 = μ3 = μ6 = μ14 (equal mean total utility across LLD levels) against the alternative hypothesis that the means differ, using a single-factor ANOVA with four levels of LLD and 20 replications. The ANOVA is summarized in Table 6.8.
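The test can be reproduced with a standard one-way ANOVA routine, as in the sketch below; the arrays are placeholders for the 20 total-utility replications recorded at each LLD level.

    # A sketch of the single-factor ANOVA comparing mean total utility across LLD
    # levels. The data are placeholders; scipy's one-way ANOVA provides the F test.
    import numpy as np
    from scipy.stats import f_oneway

    rng = np.random.default_rng(0)
    total_utility = {lld: rng.normal(loc=3.5, scale=0.4, size=20)    # placeholder data
                     for lld in (2, 3, 6, 14)}

    f_stat, p_value = f_oneway(*(total_utility[lld] for lld in (2, 3, 6, 14)))
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")        # reject H0 if p is below 0.05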


The results indicate that there is a significant difference between the means; LLD is a factor that impacts the total utility level. Subsequently, we conducted a similar experiment using 3 levels of LLD: 3, 6 and 14. The ANOVA for this experimental design is summarized in Table 6.9. The results of this analysis show no significant difference between the means.

Table 6.8 ANOVA for total utility with 4 levels of LLD

Table 6.9 ANOVA for total utility with 3 levels of LLD

In summary, these preliminary experiments indicate that a limited lookahead control policy provides a computationally efficient approach to the problem of task allocation. However, the depth of the limited lookahead window must be carefully chosen based on the characteristics of the mission as well as the desired level of optimality, since the quality of the task allocations depends on this parameter. Furthermore, these preliminary results indicate that an LLD well below the maximum lookahead depth provides compounded reductions in the computational requirements: on average, less than 10% of the computational requirements are sufficient to make control decisions. However, it must be pointed out that this statement must be further verified with larger experimental designs.


6.7 Conclusions

In this chapter, we describe a novel control methodology for task allocation in cooperative robot teams. Finite automata formalism is used to model the robot team and the mission requirements as discrete event systems. In developing the system model, we considered flexibility in task assignment, robot coordination for task completion, and robot failures and repairs. These characteristics are commonly encountered in mission planning and execution of cooperative robot teams. We also describe a utility function for task allocation that uses fuzzy logic to capture robot capabilities that are difficult to quantify. Subsequently, a limited lookahead control policy coupled with a fuzzy controller is developed for task allocation in real time.

The use of limited lookahead policies presents significant advantages in terms of computational complexity. The computational complexity of the traditional supervisory control problem is polynomial [107] if the UCSM and the specifications model describe the behavior of the robot team and the mission requirements perfectly. However, if there is imperfect information about the system, the complexity becomes PSPACE-hard [134]. The limited lookahead policy proposed in this chapter is based on the coupled model of the system, where the computational complexity of the coupled model generation is linear. The computational results show that only a fraction of the coupled model


needs to be explored in the limited lookahead window to make task allocation decisions. The preliminary results show that task allocation decisions based on the limited lookahead window control policy produce results comparable to exploring the complete coupled model. Further completely randomized experimentation is required to generalize these findings.

The limited lookahead depth is a critical parameter affecting both the quality of task allocation and the computational complexity. The results in Section 6.6 indicate that larger limited lookahead depths lead to a higher number of states visited by the control algorithm. This is an expected result; however, the associated increase in the utility of task allocation is not as clear cut. In this work, as in most of the referenced literature, the depth of the lookahead window is chosen arbitrarily. In [115], the window depth is computed based on the number of uncontrollable events in the system. A future research direction involves determining the characteristics of cooperative robot teams and their missions that may be used to develop a methodology for calculating a dynamic limited lookahead depth in real time. Such an approach will result in a controller that is adaptable to the changing needs of missions and cooperative robot teams.

In the described patrolling application, robots carry multiple sensors to patrol one or more regions of a warehouse. Consider Robot 3, which carries a fire detection sensor and a vision system. A failure of the fire detection sensor can be treated by the control methodology as a partial failure of the robot, and a task that does not require fire detection capabilities, such as Task 4, can still be assigned to that robot. In this manner, the robot is considered partially functional and its ability to perform a task is recomputed by the


fuzzy controller presented in Section 6.4, leading to intelligent utilization of system resources.

In addition to partial failures, this work may be extended to include catastrophic failures, where the robot will not re-join the team. Pertaining to the warehouse patrolling scenario, consider the event sequence depicted in Fig. 6.13. Robot 1 is assigned to Task 5, which completes in State 2. Task 5 can be assigned either to Robot 3 or to Robot 1 and Robot 2 in that particular sequence. In State 7, a catastrophic failure event is initiated for Robot 2. Since Robot 2 will never be repaired, Task 5 will never be completed unless it is reset. Thus, any string initiated in State 8 of the figure will not reach a marked state and the team's mission will not be completed. State 8 is called a blocking state. Future research involves developing a methodology that addresses issues related to state blocking.

Fig. 6.13 Blocking example


Chapter 7 Conclusions and Future Work

Several industrial, military and civilian applications require autonomous multi-robot solutions. Along with each robot's capability to perform basic functions such as navigation and obstacle avoidance, a key characteristic of multi-robot systems is that of cooperation and coordination. The goal of this research is to design functional components common to a wide range of autonomous mobile robot teams and to investigate their deployment under the umbrella of a hybrid control architecture. This research concentrates on the development of four main functional components: Navigation and Obstacle Avoidance, Task Achieving Behaviors, Localization and Mission Planning. Chapter 3 described a method for navigation and obstacle avoidance using range sensors and a stereo vision system. Chapter 4 presented a target tracking method, while Chapter 5 described a localization method based on Fuzzy Logic and Kalman Filtering. Finally, Chapter 6 presented a mission planning method for robot coordination and accommodation of resource failures.

At the beginning of this research, there were questions in our minds regarding the ability of a robot team to efficiently and effectively perform complex missions such as warehouse patrolling. When the development of a functional component was completed,


it seemed that we were coming closer to the final goal, but at the same time new challenges, such as sensor operation in various environments, and new ideas, such as partial robot failures, were uncovered. We were able to address a number of these new challenges; the rest gave us directions for future research. Even though a real patrolling scenario has not been implemented yet, the advances of our work suggest that this is possible. Our vision is that autonomous robot teams are able to complete missions that involve cooperation and coordination, efficiently and effectively. This work demonstrates results that support and broaden this vision to include intelligent robots that can adapt to mission requirements in dynamic and uncertain environments. In addition, the diversity among the team members can extend the application domains from industrial and civilian environments to military applications in adversarial environments. The rest of this chapter summarizes the specific contributions of this research to date and the future work.

7.1 Contributions

7.1.1 Navigation and Obstacle Avoidance

A key characteristic of autonomous patrolling applications is that of navigation and obstacle avoidance. Each robot of the team should demonstrate the capability of navigating in a facility and avoiding collisions with surrounding objects. In Chapter 3, a navigation and obstacle avoidance method for mobile robots based on ultrasonic and vision sensors


has been presented. The main contribution of this work is the utilization of the stereo vision system as a range sensor. A depth estimation method with uncalibrated cameras was introduced that uses the image size, the angle of view of the cameras and the relative position of two different captures of the same scene. Range information from both the ultrasonic sensors and the vision system is then used in parallel to safely navigate a mobile robot. Parallel sensor utilization is particularly useful in patrolling applications, where lighting variations in different areas may affect the operation of the vision system but not the operation of the ultrasonic sensors. The navigation and obstacle avoidance approach outlined in Chapter 3 appears in [100] and [135].

7.1.2 Task Achieving Behaviors: Target Tracking

Patrolling applications require the ability to recognize and track unauthorized presence in a secure facility. As a Task Achieving Behavior, a target tracking method for mobile robots was developed in Chapter 4 and presented in [136] and [137]. The main differences and advantages of the presented approach compared to related research are:

1. Both cameras of the vision system track a target independently, providing a redundant mechanism that helps avoid losing the target. This means that even if one camera loses the target, it can retrieve information from the other camera to find it again.


2. The robot's direction is controlled by the pan/tilt angles of the cameras, allowing the robot to avoid obstacles while tracking a target, or to keep tracking a target that moves on uneven terrain.

7.1.3 Localization

Localization is an essential functionality for mobile robot applications such as autonomous patrolling. The processes of coordination and task allocation require exact knowledge of the position of all the robots in the team. Chapter 5 presented a localization method based on Fuzzy Logic and Kalman Filtering. The major contribution of this work is the number of sensors used. While most work in the literature utilizes up to three different sensors for robot localization, the work in Chapter 5 utilizes five different sensors: GPS, IMU, odometer, stereo vision system and laser range finder. In addition, the Fuzzy Logic controller design allows for incorporation of the error in sensor readings into the EKF without the use of an error model. The error in sensor readings is approximated by multiple zero-mean Gaussian distributions and is included in the covariance matrix of the measurement model. The localization method described in Chapter 5 appears in [138].

7.1.4 Mission Planning

Chapter 6 is dedicated to Mission Planning. A general task allocation supervisory controller has been developed to oversee the team of mobile robots that work together to


perform applications such as warehouse patrolling. The control methodology presented is partially based on the Ramadge-Wonham (RW) supervisory control theory. However, the controller utilizes a limited lookahead policy to enable and disable events in the system based on the evaluation of a fuzzy logic based utility function and robot availability.

The contributions of this work focus on the areas of computational complexity and task allocation. In traditional supervisory control theory, the acceptable event sequences are computed a priori. However, as the problem size increases, so does the state space of the supervisor. In addition, applications with a significant degree of uncertainty are highly dynamic and require frequent modifications to the control model. In this work, instead of constructing the complete supervisor, a limited lookahead policy is utilized that constructs a lookahead supervisor based on the evolution of the system in real time. The proposed methodology offers an advantage in highly dynamic systems, since the control model can change on the fly. Concerning the task allocation problem, most work in the literature utilizes random or bidding methodologies to assign robots to tasks. This work proposes a novel approach that uses a fuzzy logic controller, which introduces a measure of the ability of a robot to perform a task based on criteria such as endurance, efficiency and designer's choice. The supervisory control design presented in Chapter 6 appears in [139], while an earlier version of the controller was published in [140].


7.2 Future Work

Additional features can be incorporated into the design of the functional components to improve the performance of the patrolling system. Future work involves:

1. Enhancement of the Localization component to identify random objects in the environment the robot travels through, to be used as landmarks for the range sensors. However, including multiple landmarks in the system's model increases the state space of the localization problem. For this reason, a methodology such as compressed Kalman filtering is required to decrease the state space due to the incorporation of multiple landmarks in the system model. The compressed Kalman filtering methodology will remove states of the system's model that are no longer active.

2. Mission Planning enhancements involve the development of a controller tolerant to partial and catastrophic failures of the robot team members. In the presented configuration, when a failure in a robot's sensor occurs, the robot is considered failed by the controller. However, the failed robot may still be useful in a different task that does not require the failed sensor. In addition to partial failures, there are situations where during the mission a robot may fail and never re-join the team due to a catastrophic failure. Catastrophic failures result in blocking states, from which the system will not be able to move on. The Mission Planning component should be able to avoid blocking states.

3. Deployment of the robot team in an actual patrolling application. So far, the performance of most of the functional components has been verified and validated


experimentally. However, the performance of the Mission Planning component has been validated in a simulated setting.


159 References [1] G. A. Bekey, Autonomous Robots: From b io logical i nspiration to i mplement ation and c ontrol MIT Press, 2005. [2] C. Cheng, W. Han, C. Ng Teck, J. Ibanez Guzman, J. Shen, and W. Chan Chun, "Target tracking and path planning for vehicle following in jungle environment," in 8th Control, Automation, Robotics and Vision Conference 2004, pp. 455 460 vol.4. [3] Y. Zhang, M. Schervish, E. U. Acar, and H. Choset, "Probabilistic methods for robotic landmine search," in Proceedings of International Conference on Intell igent Robots and Systems 2001, pp. 1 525 1532 vol.3. [4] V. Kumar and F. Sahin, "Cognitive maps in swarm robots for the mine d e tection application," in IEEE International Conference on Systems, Man and Cybernetics 2003, pp. 3364 3369 vol.4. [5] J. S. Jennings, G. Whelan, and W. F. Evans, "Co operative search and rescue with a team of mobile robots," in 8th International Conference on Advanced Robotics 1997, pp. 193 200. [6] N. Ruangpayoongsak, H. Roth, and J. Chudoba, "Mobile robots for search and rescue," in IEEE International Workshop on Sa fety, Security and Rescue Robo tics 2005, pp. 212 217. [7] K. Kaaniche, B. Champion, C. Pegard, and P. Vasseur, "A v ision a lgorith m for dynamic d etection of moving v ehicles with a UAV," in Proceedings of the 2005 IEEE International Conference on Robotics a nd Automation 2005, pp. 1878 1883. [8] M. Veloso and P. Stone, Individual and collaborative behaviors in a team of homogeneous robotic soccer agents in roceedings of the Third Intern a tional Conference on Multi Agent Systems 1998, pp. 309 316. [9] L. E. Parker, "ALLIANCE: an architecture for fault tolerant multirobot cooper ation," IEEE Transactions on Robotics and Automation, 1998 pp. 220 240, vol. 14.


160 [10] M. Long, A. Gage, R. Murphy, and K. Valavanis, "Application of the d i s tributed field r obot a rchi tecture to a simulated d emining t ask," in Pr o ceedings of the 2005 IEEE International Conference on Robotics and Autom a tion 2005, pp. 3193 3200. [11] J. S. Albus, H. G. McCain, an d R. Lumia, "NASA/NBS standard r eference model for telerobot control system a rchitecture (NASREM)," in Technical Report, R obots Systems Division, National Bureau of Standar d s 1987. [12] J. E. Naranjo, C. Gonzalez, T. de Pedro, R. Garcia, J. Alonso, and M. A. S o telo, "AUTOPIA architecture for automatic driving and maneuvering," in IEEE Intell igent Transportation Systems Conference 2006, pp. 1220 1225. [13] H. A. Hagras, "A hierarchical type 2 fuzzy logic control architecture for auton omous mobile robots," IEEE Transactions on Fuzzy Systems, 2004 pp. 524 539, vol. 12. [14] H. Hu, J. M. Brady, J. Grothusen, F. Li, and P. J. Probert, "LICAs: a modular a rchitecture for intelligent control of mobile robots," in International Conference on Intelligent Robots and Systems 95. 'Human Robot Interaction and Cooperative Robots' 1995, pp. 471 4 76 vol.1. [15] R. Stenzel, "A behavior based control architecture," in IEEE International Co nference on Systems, Man, and Cybernetics 2000, pp. 3235 3240 vol.5. [16] R. Brooks, "A robust layered control system for a mobile robot," IEEE Journal of Robotics and Automation, 1986, pp. 14 23, vol. 2. [17] A. H. P. Selvatici and A. H. R. Costa, "A hybrid adaptive architecture for mobile robots based on reactive behaviors," in Fifth International Conference on Hybrid Intelligent Systems 2005 [1 8] B. Lenser and M Veloso, "A m odular h ierarchical b ehavior based a rchite c ture," in RoboCup 2001: Robot Soccer World Cup V 2002. [19] Y. Ya n, Q. Zhu, and C. Cai, "Hybrid c ontrol a rchitecture of mobile robot based on subsumption a rchitecture," in Proceedings of the 2006 IE EE I n ternational Conference on Mechatronics and Automation 2006, pp. 2168 2172. [20] M. Lindstrom, A. Oreback, and H. I. Christensen, "BERRA: a research archite cture for service robots," in IEEE International Conference on Robotics and Aut omation 2000, p p. 3278 3283 vol.4. [21] J. Wang, "DRS operating primitives based on distributed mutual exclusion," in International Conference on Intelligent Robots and Systems '93 1993, pp. 1085 1090 vol.2.


161 [22] H. Asama, A. Mat sumoto, and Y. Ishida, "Design of an a u tonomous a nd d istri buted r obot s ystem: Actress," in International Workshop on Intelligent Robots and Systems '89. The Autonomous Mobile Robots and Its Applications 1989, pp. 283 290. [23] T. Huntsberger, P. Pirjanian, A. Trebi Ollennu, H. Das Nayar, H. Ag ha z arian, A. J. Ganino, M. Garrett, S. S. Joshi, and P. S. Schenker, "CAMPOUT: a control a rchitecture for tightly coupled coordination of multirobot systems for planetary surface exploration," IEEE Transactions on Systems, Man and Cybernetics, Part A,, 200 3, pp. 550 559, vol. 33. [24] P. Caloud, C. Wonyun, J. C. Latombe, C. Le Pape, and M. Yim, "Indoor autom ation with many mobile robots," in Intelligent Robots and Systems '90. 'Towards a New Frontier of Applications' 1990, pp. 67 72 vol.1. [25] E. Gat, "On Three Layer Architectures," in Kortenkamp, D., Banasso, R.P. Art ificial Intelligence and Mobile Robots : MIT Press, 1997, pp. 195 210. [26] H. Secchi, V. Mut, R. Carelli, M. Schneebeli, and T. Bastos, "A hybrid Co n trol Architecture for Mobile Robots. Clas sic Control, Behavior Based Co n trol and Petri Nets." [27] A. V. Timofeev, F. A. Kolushev, and A. A. Bogdanov, "Hybrid algorithms of multi agent control of mobile robots," in International Joint Conference on Ne ural Networks 1999, pp. 4115 4118 vol.6. [28 ] L. Doitsidis, K. P. Valavanis, and N. C. Tsourveloudis, "Fuzzy logic based autonomous skid steering vehicle navigation," in IEEE International Confe r ence on Robotics and Automation 2002, pp. 2171 2177 vol.2. [29] A. N. Rajagopalan, S. Chaudhuri, and M. Uma, "Depth estimation and image re storation using defocused stereo pairs," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004, pp. 1521 1525, vol. 26. [30] U. Mudenagudi and S. Ghaudhuri, "Depth estimation using defocused stereo i mage pa irs," in The Proceedings of the Seventh IEEE International Confe r ence on Computer Vision 1999, pp. 483 488 vol.1. [31] K. Umeda and T. Takahashi, "Subpixel stereo method: a new methodology of st ereo vision," in IEEE International Conference on Robotics a nd Autom a tion 2000, pp. 3215 3220 vol.4. [32] J. Fan and T. E. Weymouth, "Depth from dynamic stereo images," in IEEE Co mputer Society Conference on Computer Vision and Pattern Recognition 1989, pp. 250 255.


162 [33] L. Shang Hong, F. Chang Wu, and C. Shyang "A generalized depth estim a tion algorithm with a single image," IEEE Transactions on Pattern Analysis and M achine Intelligence, 1992, pp. 405 411, vol. 14. [34] E. Malis and P. Rives, "Robustness of image based visual servoing with r e spect to depth distri bution errors," in IEEE International Conference on R o botics and Automation 2003, pp. 1056 1061 vol.1. [35] S. Derrouich, K. Izumida, and K. Shiiya, "A combination of monocular CCD camera and inertial sensor for range estimation," in IEEE 2002 28th Annua l Co nference of the Industrial Electronics Society 2002, pp. 2191 2196 vol.3. [36] M. Sonka, V. Hlavac, and R. Boyle, Image p rocessing, a nalysis and machine v ision : PWS Publishing, 1999. [37] E. Sahin and P. Gaudiano, "Mobile robot range sensing through visual loo m ing," in Proceedings of Intelligent Control 1998, pp. 370 375. [38] I. Ohya, A. Kosaka, and A. Kak, "Vision based navigation by a mobile robot with obstacle avoidance using single camera vision and ultrasonic sensing," IEEE Transactions on Robo tics and Automation, 1998, pp. 969 978, vol.14. [39] B. R. Duffy, C. Garcia, C. F. B. Rooney and G. M. P. O'Hare, "Sensor f u sion for social r obotics," in 31st Int. Symp. On Robotics MontrŽal, Canada, 2000, pp. 258 264. [40] R. C. Mann, J. P. Jones, M. Be ckerman, C. W. Glover, L. Farkas, J. Han, E. W acholder, and J. R. Einstein, "An intelligent integrated sensor system for the ORNL mobile robot," in IEEE International Symposium on Intelligent Control 1988, pp. 170 173. [41] G Gubber and H. Sahli, "Sensor integration on a m obile r obot," in 12th Intern ational Symposium on Measurement and Control in Robotics France, 2002. [42] C. Jennings, D. Murray, and J. J. Little, "Cooperative robot localization with v ision based mapping," in IEEE International Conferen ce on Robotics and Autom ation 1999, pp. 2659 2665 vol.4. [43] V. Tucakov, M. Sahota, D. Murray, A. Mackworth, J. Little, S. Kingdon, C. Jennings, and R. Barman, "Spinoza: a stereoscopic visually guided mobile robot," in Proceedings of the Thirtieth Hawai i International Conference on System Sc iences 1997, pp. 188 197 vol.5. [44] D. Fox, W. Burgard, and S. Thrun, "The dynamic window approach to coll i sion avoidance," IEEE Robotics & Automation Magazine, 1997, pp. 23 33, vol. 4.


163 [45] W. D. Rencken, "Autonomo us sonar navigation in indoor, unknown and unstru ctured environments," in International Conference on Intelligent Robots and Sy stems '94 1994, pp. 431 438 vol.1. [46] A. Elfes, "Sonar based real world mapping and navigation," IEEE Journal of R obotics and Automation, 1987, pp. 249 265, vol.3. [47] K. P. Valavanis, T. Hebert, R. Kolluru, and N. Tsourveloudis, "Mobile robot navigation in 2 D dynamic environments using an electrostatic potential field," IEEE Transactions on Systems, Man and Cybernetics, Part A, 2000, pp. 187 196, vol. 30. [48] CCIR, "Encoding parameter of digital television for studios," International Radio Consultative Committee 1990. [49] J. J. De Dios and N. Garcia, "Face detection based on a new color space YCgCr," in International Conferen ce on Image Processing 2003, pp. III 909 12 vol.2. [50] R. Jain, R. Kasturi, and B. G. Schunck, Machine v ision : McGraw Hill Intern ational Editions, 1995. [51] J. C. Russ, The i mage p rocessing h andbook : IEEE PRESS, 1995. [52] J. Ding, H. Kondou, H. Kimura, Y. Hada, and K. Takase, "Robust tracking for camera control on an irregular terrain vehicle," in Proceedings of the 41st SICE Annual Conference 2002, pp. 1191 1196 vol.2. [53] P. Saeedi, P. Lawrence, and D. Lowe, "3D motion tracking of a mobile robot in a natural environment," in IEEE International Conference on Robotics and Aut om a tion 2000, pp. 1682 1687 vol.2. [54] C. Balkenius and L. Kopp, "Visual tracking and target selection for mobile r obots," in Proceedings of the First Euromicro Workshop on Adv anced Mobile R obot 1996, pp. 166 171. [55] H. Y. Wang, S. Itani, T. Fukao, and N. Adachi, "Image based visual adaptive tracking control of nonholonomic mobile robots," in International Confe r ence on Intelligent Robots and Systems 2001, pp. 1 6 vol.1. [5 6] L. Fu Chang, T. Wei, and S. L. Tzuu Hseng, "An experimental study on tracking control of two autonomous mobile robots," in 23rd International Conference on Industrial Electronics, Control and Instrumentation 1997, pp. 1311 1316 vol.3. [57] H. M. Gross H. J. Boehme, and T. Wilhelm, "Contribution to vision based loca lization, tracking and navigation methods for an interactive mobile ser v ice robot," in IEEE International Conference on Systems, Man, and Cyberne t ics 2001, pp. 672 677 vol.2.


164 [58] M. Klein ehagenbrock, S. Lang, J. Fritsch, F. Lomker, G. A. Fink, and G. Sagerer, "Person tracking with a mobile robot based on multi modal ancho r ing," in 11th IEEE International Workshop on Robot and Human Interactive Communication 2002, pp. 423 429. [59] T. Darr ell, G. Gordon, M. Harville, and J. Woodfill, "Integrated person tracking using stereo, color, and pattern detection," in IEEE Computer Soc i ety Conference on Computer Vision and Pattern Recognition 1998, pp. 601 608. [60] L. Yong Beom, Y. Bum Jae, and L. Seong Whan, "A real time color based object tracking robust to irregular illumination variations," in IEEE International Co nference on Robotics and Automation 2001, pp. 1659 1664 vol.2. [61] E. B. Meier and F. Ade, "Using the condensation algorithm to im plement tracking for mobile robots," in Third European Workshop on Advanced M o bile Robots 1999, pp. 73 80. [62] J. Miyata, T. Murakami, and K. Ohnishi, "An approach to tracking motion of m obile robot for moving object," in 26th Annual Confjerence of the I EEE Industrial Electronics Society 2000, pp. 2249 2254 vol.4. [63] L. Sung On, C. Young Jo, H. B. Myung, Y. Bum Jae, and O. Sang Rok, "A st able target tracking control for unicycle mobile robots," in International Confe rence on Intelligent Robots and Sys tems 2000, pp. 1822 1827 vol.3. [64] T. Wei and S. L. Tzuu Hseng, "Realization of two dimensional target trac k ing problem via autonomous mobile robots using fuzzy sliding mode co n trol," in Proceedings of the 24th Annual Conference of the IEEE Industrial Electronics Society 1998, pp. 1158 1163 vol.2. [65] Z. Lin, V. Zeman, and R. V. Patel, "On line robot trajectory planning for catching a moving object," in IEEE International Conference on Robotics and Automation 1989, pp. 1726 1731 vol.3. [66] H. Rowl ey, S. Baluja, and T. Kanade, "Neural network b ased f ace d ete c tion," in IEEE Conference on Computer Vision and Pattern Recognition 1997, pp. 690 696. [67] T. H. S. Li, C. Shih Jie, and T. Wei, "Fuzzy target tracking control of autonomous mobile robots by using infrared sensors," IEEE Transactions on Fuzzy Systems, vol. 12, pp. 491 501, 2004. [68] R. C. Luo and C. Tse Min, "Autonomous mobile target tracking system based on grey fuzzy control algorithm," IEEE Transactions on Industrial Electro n ics, vol. 47, pp. 920 931, 2000.


165 [69] F. J. Montecillo Puente, V. Ayala Ramirez, A. Perez Garcia, and R. E. Sa n chez Yanez, "Fuzzy color tracking for robotic tasks," in IEEE International Conference on Systems, Man and Cybernetics 2003, pp. 2769 2773 vol.3. [70] S. Bul uswar and B. Draper, "Color Machine Vision for Autonomous Veh i cles," Engineering Applications of Artificial Intelligence, vol. 11, pp. 245 256, 1998. [71] M. J. Swain and D. H. Ballard, "Indexing via color histograms," in Third Intern ational Conference on Computer Vision 1990, pp. 390 393. [72] C. Hsin Chia, C. Wei Jung, and W. Sheng Jyh, "Contrast based color segment ation with adaptive thresholds," in International Conference on Image Processing 2002, pp. II 73 II 76 vol.2. [73] C. Yong and C. Zhu, "Obs tacle detection using adaptive color segmentation and planar projection stereopsis for mobile robots," in IEEE International Conference on Robotics, Intelligent Systems and Signal Processing 2003, pp. 1097 1101 vol.2. [74] B. V. Funt and G. D. Finlayson, "Color constant color indexing," IEEE Transa ctions on Pattern Analysis and Machine Intelligence, 1995, pp. 522 529, vol. 17. [75] W. Ying and T. S. Huang, "Nonstationary color tracking for vision based human computer interaction," IEEE Transactions on Neur al Networks, 2002 pp. 948 960, vol. 13. [76] J. Orwell, P. Remagnino, and G. A. Jones, "Multi camera colour tracking," in Second IEEE Workshop on Visual Surveillance 1999, pp. 14 21. [77] K. Sobottka and I. Pitas, "Segmentation and tracking of faces in co lor i m ages," in Proceedings of the Second International Conference on Automatic Face and Ge sture Recognition 1996, pp. 236 241. [78] A. Arsenio and M. I. Ribeiro, "Active range sensing for mobile robot localiz ation," in International Conference on Intelli gent Robots and Systems 1998, pp. 1066 1071 vol.2. [79] H. Huosheng and G. Dongbing, "Landmark based navigation of mobile r o bots in manufacturing," in 7th IEEE International Conference on Emerging Technologies and Factory Automation 1999, pp. 121 128 v ol.1. [80] H. Chou, M. Traonmilin, E. Ollivier, and M. Parent, "A simultaneous loca l ization and mapping algorithm based on Kalman filtering," in IEEE Intell i gent Vehicles Symposium 2004, pp. 631 635.


166 [81] U. Larsson, J. Forsberg, and A. Wernersson, "Mobil e robot localization: integra ting measurements from a time of flight laser," IEEE Transactions on Industrial Electronics, 1996 pp. 422 431, vol. 43. [82] T. Ching Chih, "A localization system of a mobile robot by fusing dead reckoning and ultrasonic measur ements," in IEEE Instrumentation and Measur ement Technology Conference 1998, pp. 144 149 vol.1. [83] P. Goel, S. I. Roumeliotis, and G. S. Sukhatme, "Robust localization using rel ative and absolute position estimates," in International Conference on I n te lligent Robots and Systems 1999, pp. 1134 1140 vol.2. [84] B. Barshan and H. F. Durrant Whyte, "Inertial navigation systems for mobile r obots," IEEE Transactions on Robotics and Automation, 1995 pp. 328 342, vol. 11. [85] S. Niwa, T. Masuda, and Y. Sezak i, "Kalman filter with time variable gain for a multisensor fusion system," in International Conference on Multisensor Fusion and Integration for Intelligent Systems 1999, pp. 56 61. [86] K. Seong Baek, L. Seung Yong, H. Tae Hyun, and C. Kyoung Ho, "An a d vanced approach for navigation and image sensor integration for land veh i cle navigation," in IEEE 60th Vehicular Technology Conference 2004, pp. 4075 4078 v ol. 6. [87] E. T. Baumgartner and S. B. Skaar, "An autonomous vision based mobile robot," IEEE Tra nsactions on Automatic Control, 1994, pp. 493 502, vol.39 [88] M. Marron, J. C. Garcia, M. A. Sotelo, E. Lopez, and M. Mazo, "Fusing odome tric and vision data with an EKF to estimate the absolute position of an auton omous mobile robot," in IEEE Conference Emerging Technologies and Factory Automation 2003, pp. 591 596 vol.1. [89] F. Chenavier and J. L. Crowley, "Position estimation for a mobile robot using v ision and odometry," in IEEE International Conference on Robotics and Autom ation 1992, pp. 2588 25 93 vol.3. [90] E. Stella, G. Cicirelli, F. P. Lovergine, and A. Distante, "Position estimation for a mobile robot using data fusion," in IEEE International Symposium on Intelligent Control 1995, pp. 565 570. [91] S. Kai Tai and T. Wen Hui, "Environment p erception for a mobile robot u s ing double ultrasonic sensors and a CCD camera," IEEE Transactions on I n dustrial Electronics 1996 pp. 372 379, vol. 43. [92] W. S. Wijesoma, K. R. S. Kodagoda, and A. P. Balasuriya, "A laser and a camera for mobile robot nav igation," in 7th International Conference on Control, Aut omation, Robotics and Vision 2002, pp. 740 745 vol.2.


167 [93] K. O. Arras, N. Tomatis, and R. Siegwart, "Multisensor on the fly localiz a tion using laser and vision," in International Conference on Int elligent Robots and Systems 2000, pp. 462 467 vol.1. [94] H. Choset, K. M. Lynch, S. Hutchinson, G. Kantor, W. Burgard, L. E. Kavraki, and S. Thrun, Principles of r obot m otion, t heory, a lgorithms and i mplement a tion : The MIT Press, 2005. [95] J. Z. Sasiad ek and P. Hartana, "Sensor data fusion using Kalman filter," in Third International Conference on Information Fusion 2000, pp. WED5/19 WED5/25 vol.2. [96] P. J. Escamilla Ambrosio and N. Mort, "A hybrid Kalman filter fuzzy logic arch itecture for multisen sor data fusion," in IEEE International Symposium on Intell igent Control 2001, pp. 364 369. [97] R. Carrasco and A. Cipriano, "Fuzzy logic based nonlinear Kalman filter applied to mobile robots modelling," in IEEE International Conference on Fuzzy Systems 2004, pp. 1485 1490 vol.3. [98] F. Martia, A. Jimenez, D. Rodr i guez Lo sada, and B. M. Al Hadithi, "A n ovel f uzzy Kalman f ilter for m obile r obots l ocalization," in Information Pro c essing and Management of Uncertainty in Knowledge Based Systems 2004. [99 ] A. Tiano, A. Zirilli, and F. Pizzocchero, "Application of interval and fuzzy tec hniques to integrated navigation systems," in Joint 9th IFSA World Co n gress and 20th NAFIPS International Conference 2001, pp. 13 18 vol.1. [100] A. Tsalatsanis, K. Valavan is, and N. Tsourveloudis, "Mobile r obot n avigation u sing s onar and range m easurements from u ncalibrated c ameras," in 14th Medite rranean Conference on Control and Automation 2006, pp. 1 7. [101] J. P. Snyder, A w orking m anual Washington: United States Gov ernment Printing Office, 1987. [102] V. Kumar and F. Sahin, "Cognitive maps in swarm robots for the mine d e tection application," in IEEE International Conference on Systems, Man and Cyberne tics. 2003, pp. 3364 3369 vol.4. [103] Y. Zhang, M. Schervish, E. U. Acar, and H. A. C. H. Choset, "Probabilistic met hods for robotic landmine search," in Proceedings of IEEE/RSJ International Co nference on Intelligent Robots and Systems. 2001, pp. 1525 1532 vol.3. [104] J. S. Jennings, G. Whelan, and W. F. Evans, "Co operative search and rescue with a team of mobile robots," in Proceedings of 8th International Conference on A dvanced Robotics, ICAR '97, 1997, pp. 193 200.


168 [105] B. Krishnamurthy and J. Evans, "HelpMate: A robotic courier for hospital use," in IEEE Intern ational Conference on Systems, Man and Cybernetics 1992, pp. 1630 1634 vol.2. [106] L. E. Parker, "ALLIANCE: an architecture for fault tolerant multirobot cooper ation," Robotics and Automation, IEEE Transactions on, 1998, pp. 220 240, vol. 14 [107] P. J. Ramadge and W. M. Wonham, "Supervisory control of a class of di s crete event processes," SIAM Journal on Control and Optimization, 1987, pp. 452 462, vol. 18. [108] Y. Chevaleyre, "Theoretical analysis of the multi agent patrolling problem," in IEEE/WIC/ACM International Conference on Intelligent Agent Technology. IAT 2004, pp. 302 308. [109] R. Zlot, A. Stentz, M. B. Dias, and S. A. T. S. Thayer, "Multi robot explor a tion controlled by a market economy," in Proceedings of IEEE International Confe rence on Ro botics and Automation. ICRA. 2002, pp. 3016 3023. [110] A. V. Timofeev, F. A. Kolushev, and A. A. Bogdanov, "Hybrid algorithms of multi agent control of mobile robots," in International Joint Conference on Ne ural Networks. IJCNN 1999, pp. 4115 4118 vol. 6. [111] S. C. Botelho and R. Alami, "M+: a scheme for multi robot cooperation through negotiated task allocation and achievement," in Proceedings of IEEE Intern ational Conference on Robotics and Automation. 1999, pp. 1234 1239 vol.2. [112] B. P. Gerkey and M. J. Mataric, "Sold!: auction methods for multirobot coordin ation," IEEE Transactions on Robotics and Automation, 2002 pp. 758 768, vol. 18. [113] M. G. Lagoudakis, M. Berhault, S. Koenig, P. A. K. P. Keskinocak, and A. J. A. K. A. J. Kleywegt, "Simpl e auctions with performance guarantees for multi robot task allocation," in Proceedings of IEEE/RSJ International Conference on Intell igent Robots and Systems. (IROS 2004). 2004, pp. 698 705 vol.1. [114] R. Davis and R. G. Smith, "Negotiation as a metapho r for distributed pro b lem solving," Artificial Intelligence, 1983 pp. 63 109, vol. 20. [115] S. L. Chung, S. Lafortune, and F. Lin, "Limited lookahead policies in supe r visory control of discrete event systems," IEEE Transactions on Automatic Control, 1992 pp. 1921 1935, vol.37 [116] R. Kumar, H. M. Cheung, and S. I. Marcus, "Extension based l imited l o o kahead supervision of d iscrete e vent s ystems," Automatica, 1998 pp. 1327 1344, vol. 34.


169 [117] S. L. Chung, S. Lafortune, and F. Lin, "Recursive computation of limited look ahead supervisory controls for discrete event systems," Discrete Event Dynamic Systems, 1993 pp. 71 100, vol.3 [118] S. L. Chung, S. Lafortune, and F. Lin, "Supervisory control using variable look ahead policies," Discrete Event Dynamic Sys tems, 1994 pp. 237 268, vol.4 [119] N. B. Hadj Alouane, S. Lafortune, and L. Feng, "Variable lookahead supe r visory control with state information," IEEE Transactions on Automatic Co n trol, 1994 pp. 2398 2410, vol. 39. [120] M. Heymann and F. Lin, "On line control of partially observed discrete event sy stems," Discrete Event Dynamic Systems, 1994 pp. 221 236, vol. 4. [121] R. Kumar and V. K. Garg, "Control of stochastic discrete event systems modeled by probabilistic languages," IEEE Transactions on Automat ic Co n trol, 2001, pp. 593 606, vol. 46. [122] F. Lin, "Robust and adaptive supervisory control of discrete event systems," IEEE Transactions on Automatic Control, 1993, pp. 1848 1852, vol.38. [123] L Grigorov and K. Rudie, "Near o ptimal o nline c ontrol of d ynamic d i s crete e vent s ystems," Discrete Event Dynamic Systems, 2006, pp. 419 449, vol.1 6. [124] C. Yi Liang, S. Laortune, and L. Feng, "How to reuse supervisors when di s crete event system models evolve," in Proceedings of the 36th IEEE Confe r ence on Decis ion and Control. 1997, pp. 2964 2969 vol.3. [125] D. Gordon and K. Kiriakidis, "Adaptive supervisory control of interco n nected discrete event systems," in Proceedings of the 2000 IEEE International Confe rence on Control Applications 2000, pp. 935 940. [1 26] D. Gordon Spears and K. Kiriakidis, "Reconfigurable robot teams: modeling and supervisory control," IEEE Transactions on Control Systems Technology, 2004 pp. 763 769, vol. 12. [127] K. Kiriakidis and D. Gordon, "Supervision of multiple robot systems," in Pr oceedings of the 2001 American Control Conference. 2001, pp. 2117 2120 vol.3. [128] W. Xi, P. Lee, A. Ray, and S. A. P. S. Phopa, "A behavior based collabor a tive multi agent system," in IEEE International Conference on Systems, Man and C ybernetics 2003, pp. 4242 4248 vol.5. [129] K. Jamie, R. K. Pretty, and R. G. Gosine, "Coordinated execution of tasks in a multiagent environment," IEEE Transactions on Systems, Man and Cybe r netics, Part A, 2003, pp. 615 619, vol.33.


170 [130] S. Kimura, M. Takahashi, T Okuyama, S. A. T. S. Tsuchiya, and Y. A. S. Y. S uzuki, "A fault tolerant control algorithm having a decentralized autonomous a rch i tecture for space hyper redundant manipulators," IEEE Transactions on Sy stems, Man and Cybernetics, Part A, 1998, pp. 521 52 7, vol. 28. [131] C. G. Cassandras and S. Lafortune, Introduction to d iscrete e vent s ystems Se c ond Edition ed. Norwell, Massachesetts, USA: Kluwer Academic Publis h ers, 1999. [132] E. H. Mamdani, "Advances in the linguistic synthesis of fuzzy controllers," Inte rnational Journal of Man Machine Studies, 1976, pp. 245 254, vol.8. [133] R. Bellman, "On the Theory of Dynamic Programming," in Proceedings of the National Academy of Sciences 1952. [134] V. D. Blondel and J. N. Tsitsiklis, "A survey of computational complexity r e sults in systems and control," Automatica, 2000, pp. 1249 1274, vol. 36. [135] A. Tsalatsanis, K. Valavanis, and N. Tsou rveloudis, "Mobile robot n avigation u sing sonar and range measurements from uncalibrated c ameras," Journal of Intell igent a nd Robotic Research Systems, 2007 pp. 253 284, vol. 48. [136] A. Tsalatsanis, K. Valavanis, and A. Yalcin, "Vision b ased t arget t rac king and collision a voidance for m obile r obots," in International Conference on Artif i cial Intelligence 2006, pp. 652 658 vol. 2. [137] A. Tsalatsanis, K. Valavanis, and A. Yalcin, "Vision based t arget t racking and collision a voidance for mobile r obots," Journal of Intelligent and Robotic R esearch Systems, 2008 pp. 285 304, vol. 48. [138] A. Tsalatsanis, K. P. Valavanis, A. Ka ndel, and A. Yalcin, "Multiple sensor based UGV localization using fuzzy extended Kalman filtering," in Mediterranean Co nference on Control & Automation 2007, pp. 1 8. [139] A. Tsalatsanis, A. Yalc in, and K. Valavanis, "Dynamic task a llocation in c oop er at ive robot teams: A limited l ookahead control p olicy," submitted at IEEE Tran sactions on Automation Science and Engineering, 2008. [140] A. Tsalatsanis, A. Yalcin, and K. P. Valavanis, "Automata based s upervi sory co ntroller for a mobile robot t eam," in IEEE 3rd Latin American Robotics Symp osium, 2006. LARS '06. 2006, pp. 53 59.


Appendices


Appendix A HSI Color Space

The hue H of a color is determined by its angle with respect to the red axis [16]; H = 60° corresponds to yellow. Thus, a threshold on the hue component in the range of 55° to 65° can easily extract yellow pixels from a color image. Additionally, the saturation S of a color is the degree to which the color is undiluted by white [16]. The saturation range for yellow obstacles has been experimentally determined to be within (0.35, 0.43), where 0 corresponds to unsaturated and 1 to fully saturated colors. Applying both hue and saturation thresholds to an HSI color space image results in a new image in which yellow pixels are represented as white: a pixel is set to white if 55° ≤ H(x, y) ≤ 65° and 0.35 ≤ S(x, y) ≤ 0.43, and to black otherwise. Fig. A.1 shows a color image containing a yellow obstacle and the corresponding output after thresholding.

Fig. A.1 Applying the threshold technique in the HSI color space
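A small sketch of this thresholding step is given below. The RGB-to-HSI conversion uses the standard arccos formulation for hue and 1 - 3*min/sum for saturation; the 55 to 65 degree hue band and the 0.35 to 0.43 saturation band are the ranges quoted above.

    # A sketch of the yellow-obstacle threshold in HSI space; the conversion
    # follows the standard RGB-to-HSI formulation.
    import numpy as np

    def rgb_to_hsi(rgb):
        # rgb: float array in [0, 1], shape (H, W, 3); returns hue (degrees) and saturation.
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        num = 0.5 * ((r - g) + (r - b))
        den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-9
        theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
        hue = np.where(b <= g, theta, 360.0 - theta)
        sat = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + 1e-9)
        return hue, sat

    def yellow_mask_hsi(rgb):
        # Binary mask: 1 where the pixel falls inside the yellow hue and saturation bands.
        hue, sat = rgb_to_hsi(rgb)
        return ((hue >= 55) & (hue <= 65) & (sat >= 0.35) & (sat <= 0.43)).astype(np.uint8)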


The HSI color space may also be used to extract red, blue and green colors. The hue value for red is nominally 0° (the red axis), while its saturation range has been determined to lie within (0.70, 0.80). Similarly, the hue value for green is nominally 120°, while its saturation varies between (0.35, 0.54). Finally, blue has a nominal hue of 240° and a saturation between (0.66, 0.94). Fig. A.2 presents a color image containing the four-color obstacle (a) and the result of the threshold technique (b), obtained by applying the same rule per color (see the sketch following Fig. A.2).

Fig. A.2 Thresholding to extract red, green, blue and yellow obstacles using the HSI color space
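A compact way to extend the rule to all four obstacle colors is to drive it from a table of per-color ranges, as in the sketch below. The saturation ranges are those quoted above; the hue bands (±5° around the nominal red, green and blue angles) are an assumption, since the exact hue thresholds are not reproduced here, and rgb_to_hsi() is the helper from the Appendix A sketch above.

    # Hypothetical per-color HSI thresholds: saturation ranges are from the text,
    # hue bands around the nominal angles are assumed.
    import numpy as np  # rgb_to_hsi() is defined in the previous sketch

    COLOR_RANGES_HSI = {
        "yellow": ((55.0, 65.0),   (0.35, 0.43)),
        "red":    ((355.0, 365.0), (0.70, 0.80)),   # wraps around the 0-degree red axis
        "green":  ((115.0, 125.0), (0.35, 0.54)),
        "blue":   ((235.0, 245.0), (0.66, 0.94)),
    }

    def color_masks_hsi(rgb):
        """One binary mask (255 / 0) per obstacle color."""
        h, s, _ = rgb_to_hsi(rgb)
        masks = {}
        for name, ((h_lo, h_hi), (s_lo, s_hi)) in COLOR_RANGES_HSI.items():
            in_hue = (h >= h_lo) & (h <= h_hi)
            if h_hi > 360.0:                        # handle the red wrap-around
                in_hue |= (h <= h_hi - 360.0)
            masks[name] = np.where(in_hue & (s >= s_lo) & (s <= s_hi), 255, 0).astype(np.uint8)
        return masks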


Appendix B: CMYK Color Space

The CMYK color space represents each color by its secondary spectral components: cyan, magenta and yellow. Since yellow is itself one of these components, the pixels of a yellow obstacle are the pixels with the highest intensity in the yellow component image, which can be identified from its histogram. Additionally, yellow pixels appear as black on the cyan component. A new binary image may therefore be created by thresholding the CMYK image, setting a pixel to white when its yellow intensity Y(x,y) exceeds T1 and its cyan intensity C(x,y) is near zero, where T1 is the highest peak in the yellow component's histogram minus a tolerance of 10% (a sketch of this rule follows Fig. A.3). Fig. A.3 shows a color image containing a yellow obstacle (a), the thresholded image (b) and the histogram of the yellow component (c).

Fig. A.3 Applying the threshold technique in the CMYK color space.
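A minimal sketch of this histogram-based rule follows. It is an assumed implementation, not the original code: the conversion to cyan/magenta/yellow uses the simple C = 1 - R, M = 1 - G, Y = 1 - B relation, the peak search is a simplified stand-in for reading the highest peak off the histogram, and the cyan cutoff cyan_max is an assumption (the text only states that yellow pixels appear black on the cyan component).

    # Sketch (assumed implementation): yellow extraction in CMY(K) space using the
    # histogram-peak threshold T1 described above.
    import numpy as np

    def cmy_components(rgb):
        """RGB image with floats in [0, 1] -> cyan, magenta, yellow in [0, 255]."""
        c = (1.0 - rgb[..., 0]) * 255.0
        m = (1.0 - rgb[..., 1]) * 255.0
        y = (1.0 - rgb[..., 2]) * 255.0
        return c, m, y

    def peak_intensity(channel, min_count=50):
        """Intensity of the highest-intensity histogram bin with a significant count
        (min_count is an assumed cutoff, used as a stand-in for 'the highest peak')."""
        hist, edges = np.histogram(channel, bins=256, range=(0.0, 255.0))
        significant = np.nonzero(hist >= min_count)[0]
        idx = significant[-1] if significant.size else np.argmax(hist)
        return edges[idx]

    def yellow_mask_cmyk(rgb, cyan_max=10.0, tolerance=0.10):
        """White (255) where yellow intensity is within 10% of its peak and cyan is near zero."""
        c, _, y = cmy_components(rgb)
        t1 = peak_intensity(y) * (1.0 - tolerance)   # T1: peak minus a 10% tolerance
        return np.where((y >= t1) & (c <= cyan_max), 255, 0).astype(np.uint8)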


The CMYK color space may also be used to extract red, green and blue colors from a color image. Red corresponds to the highest values on the magenta component with cyan values between (80, 115). Green corresponds to high values on the yellow component with magenta values between (80, 135). Finally, blue corresponds to the highest values on the cyan component with magenta values between (180, 230). Fig. A.4 presents a color image containing the four-color obstacle (a), the result of the threshold technique (b) according to equation (7), and the histograms of the cyan (c), magenta (d) and yellow (e) components. T1, T2 and T3 are the peaks with the highest intensity values in the yellow, cyan and magenta component histograms respectively, minus a tolerance of 10%; C(x,y), M(x,y) and Y(x,y) are the intensity functions of the cyan, magenta and yellow components respectively. A sketch of these combined conditions follows Fig. A.4.


Fig. A.4 Thresholding to extract red, green, blue and yellow obstacles using the CMYK color space: (a) color image; (b) thresholded result; (c) cyan, (d) magenta and (e) yellow component histograms
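The combined conditions for the remaining colors can be sketched in the same way. The code below reuses cmy_components() and peak_intensity() from the Appendix B sketch above; it is an assumed reading of equation (7) based on the ranges quoted in the text, not a reproduction of that equation.

    # Sketch (assumed reading of equation (7)): red, green and blue extraction in
    # CMY(K) space using the peak thresholds T1 (yellow), T2 (cyan), T3 (magenta).
    import numpy as np  # cmy_components() and peak_intensity() come from the previous sketch

    def rgb_masks_cmyk(rgb, tolerance=0.10):
        """One binary mask (255 / 0) per color, following the ranges quoted above."""
        c, m, y = cmy_components(rgb)
        t1 = peak_intensity(y) * (1.0 - tolerance)   # yellow peak threshold
        t2 = peak_intensity(c) * (1.0 - tolerance)   # cyan peak threshold
        t3 = peak_intensity(m) * (1.0 - tolerance)   # magenta peak threshold
        rules = {
            "red":   (m >= t3) & (c > 80) & (c < 115),    # near-peak magenta, cyan in (80, 115)
            "green": (y >= t1) & (m > 80) & (m < 135),    # high yellow, magenta in (80, 135)
            "blue":  (c >= t2) & (m > 180) & (m < 230),   # near-peak cyan, magenta in (180, 230)
        }
        return {name: np.where(rule, 255, 0).astype(np.uint8) for name, rule in rules.items()}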


Appendix C: Color Segmentation Results

Fig. A.5 illustrates a multicolor obstacle located at different distances from the robot and under different lighting conditions (a), and the result of the threshold technique applied in the YCbCr (b), HSI (c), and CMYK (d) color spaces.

Fig. A.5 Thresholding to extract red, green, blue and yellow obstacles using the three color spaces (Tests 1 and 2)


Fig. A.5 (Continued): Tests 3 and 4


Fig. A.5 (Continued): Tests 5 and 6


Fig. A.5 (Continued): Tests 7 and 8


Fig. A.5 (Continued): Tests 9 and 10

As observed, the threshold technique using the YCbCr color space performs better in identifying all colors, while background pixels are considerably suppressed.


Finally, Fig. A.6 shows a more realistic and complicated scene with yellow objects, demonstrating the ability of the threshold technique applied in the YCbCr color space to identify objects of different sizes and orientations.

Fig. A.6 Applying the threshold technique to extract yellow obstacles using the YCbCr color space


About the Author

Athanasios Tsalatsanis received his Diploma in Production Engineering and Management and his Master's in Production Engineering from the Technical University of Crete, Greece, in 2001 and 2003 respectively. As a doctoral student in the Department of Industrial and Management Systems Engineering at the University of South Florida, he worked on several research programs and published several articles related to autonomous robot teams. His research interests include soft computing, control of autonomous systems, and information systems.

