
Moonlight in Miami


Material Information

Title:
Moonlight in Miami: a field study of human-robot interaction in the context of an urban search and rescue disaster response training exercise
Physical Description:
Book
Language:
English
Creator:
Burke, Jennifer L
Publisher:
University of South Florida
Place of Publication:
Tampa, Fla.
Publication Date:
2004

Subjects

Subjects / Keywords:
rescue robotics
communication analysis
field research methods
technology
user studies
Dissertations, Academic -- Psychology -- Masters -- USF   ( lcsh )
Genre:
government publication (state, provincial, territorial, dependent)   ( marcgt )
bibliography   ( marcgt )
theses   ( marcgt )
non-fiction   ( marcgt )

Notes

Summary:
ABSTRACT: This study explores human-robot interaction during a 16-hour high-fidelity Urban Search and Rescue (USAR) disaster response drill with teleoperated robots. Situation awareness and team interaction were examined using communication analysis. Operators (n=5) sought assistance from team members to compensate for difficulties building or maintaining situation awareness. Operator-team member communication focused on relating what was seen through the robot's eye view with prior knowledge and planning search strategies. Results suggest operators need a new cognitive mental model to filter and comprehend data provided by the robot, and that robot-assisted search is a team task rather than an individual one. USAR technical search teams need a new shared mental model of robot-assisted search in order to coordinate activities effectively.
Thesis:
Thesis (M.A.)--University of South Florida, 2004.
Bibliography:
Includes bibliographical references.
System Details:
System requirements: World Wide Web browser and PDF reader.
System Details:
Mode of access: World Wide Web.
Statement of Responsibility:
by Jennifer L. Burke.
General Note:
Title from PDF of title page.
General Note:
Document formatted into pages; contains 68 pages.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 001450747
oclc - 54388226
notis - AJN8703
usfldc doi - E14-SFE0000220
usfldc handle - e14.220
System ID:
SFS0024916:00001




Full Text



PAGE 1

Moonlight in Miami: A Field Study of Human-Robot Interaction in the Context of an Urban Search and Rescue Disaster Response Training Exercise

by

Jennifer L. Burke

A thesis submitted in partial fulfillment of the requirements for the degree of Master of Arts
Department of Psychology
College of Arts and Sciences
University of South Florida

Major Professor: Michael D. Coovert, Ph.D.
Robin R. Murphy, Ph.D.
Walter C. Borman, Ph.D.

Date of Approval: September 8, 2003

Keywords: rescue robotics, communication analysis, field research methods, technology, user studies

Copyright 2003, Jennifer L. Burke

PAGE 2

DEDICATION

To my parents Jane and Garfield for their faith in me and unwavering support, my daughter Sara for her inspiration and encouragement, and most of all my husband Michael for his love, patience and understanding during the arduous journey of graduate study.

PAGE 3

ACKNOWLEDGEMENTS

This work was supported in part by DARPA under the Synergistic Cyber-Forces Seedling program (N66001-1074-411D) and the Cognitive Systems Exploratory Effort program (N66001-03-8921), and SAIC, Inc. The author would like to thank Jean Scholtz and Ron Brachman for their support. Many thanks to Jenn Casper, Mark Micire, and Brian Minten for their help in collecting the data, Thomas Fincannon for his assistance in editing and transcribing the videotapes, and Rescue Training Associates for providing the test venue. Michael Coovert provided expertise, encouragement and guidance as my advisor, and introduced me to Robin Murphy, who has served as role model and mentor throughout this process. I am grateful to both for the opportunities given and pathways opened.

PAGE 4

TABLE OF CONTENTS

LIST OF TABLES ii
LIST OF FIGURES iii
ABSTRACT iv
CHAPTER 1 INTRODUCTION 1
CHAPTER 2 OVERVIEW OF USAR AND TECHNICAL SEARCH 5
CHAPTER 3 ROBOTS ON THE SCENE 11
    What is a Robot? 11
    Related Work 13
        HRI Studies 13
    Situation Awareness 18
CHAPTER 4 METHOD 21
    Participants, Apparatus, Setting and Procedure 21
    Robot Assisted Search and Rescue Communication Coding Scheme 27
CHAPTER 5 RESULTS 33
    Situation Awareness 33
    Team Process and Communication 35
    Interaction of SA and Team Communication 40
CHAPTER 6 DISCUSSION 42
    Key Points 42
    Conclusions 47
REFERENCES 51
APPENDICES 57
    Appendix A Robot Assisted Search and Rescue Communication Coding Scheme (RASAR-CCS) 58
    Appendix B Intercorrelations between operator statement categories 61

PAGE 5

LIST OF TABLES

Table 1. Operator metrics 26
Table 2. Operator statement category frequencies and percentages 36
Table 3. Dyad frequencies and percentages for tether managers and team members 39
Table 4. Chi-square results for high and low SA operator statements 41

PAGE 6

LIST OF FIGURES

Figure 1. Organizational structure of USAR Task Force (FEMA, 1992) 8
Figure 2. Inuktun Microtrax and VGTV robots 23
Figure 3. Map of disaster response training site and robot run locations 28
Figure 4. Percentages of operator statements by content 34
Figure 5. Team member interactions 37

PAGE 7

Moonlight in Miami: A Field Study of Human-Robot Interaction in the Context of an Urban Search and Rescue Disaster Response Training Exercise

Jennifer L. Burke

ABSTRACT

This study explores human-robot interaction during a 16-hour high-fidelity Urban Search and Rescue (USAR) disaster response drill with teleoperated robots. Situation awareness and team interaction were examined using communication analysis. Operators (n=5) sought assistance from team members to compensate for difficulties building or maintaining situation awareness. Operator-team member communication focused on relating what was seen through the robot's eye view with prior knowledge and planning search strategies. Results suggest operators need a new cognitive mental model to filter and comprehend data provided by the robot, and that robot-assisted search is a team task rather than an individual one. USAR technical search teams need a new shared mental model of robot-assisted search in order to coordinate activities effectively.

PAGE 8

CHAPTER 1
INTRODUCTION

Urban search and rescue (USAR) has been posed by the DARPA/NSF study on human-robot interaction (Murphy & Rogers, 2001) as an exemplar domain for human-robot interaction (HRI). USAR involves the rescue of victims from the collapse of a man-made structure. The environment can be characterized as a pile of steel, concrete, dust, and other rubble and debris. The areas are perceptually disorienting; they no longer look like recognizable structures due to the collapse, it is dark, and everything is covered in gray dust from concrete or sheet rock. Robot-assisted search and rescue in this field domain requires that small, shoe-box-sized, physically situated robots operate under these unstructured, outdoor environmental conditions in real time to visually search areas that are either too narrow for safe human or canine entry or generally unsafe for human exploration. The robots are short, providing a viewpoint from less than one foot off the ground. This exacerbates any keyhole effects (Woods, Tittle, Feil & Roesler, in press).

These domain and agent characteristics present many challenges that distinguish USAR from other HRI settings, e.g., manufacturing, entertainment and office-oriented applications. The relationship between humans and robots in USAR is different from that in manufacturing, office, or even security applications of robots. Possibly the most interesting HRI aspect is that robots, much like search dogs, must physically team with people to perform any activity. Because of their small size and the mobility challenges

PAGE 9

imposed by the USAR environment, robots must be carried in backpacks to the voids targeted to be searched. Second, humans must interpret the video, audio, and thermal imaging data provided from the robots and fuse it with other data sources (e.g., building plans) and knowledge (e.g., time of day) in order to identify victims and structural anomalies as well as conduct and coordinate large-scale rescue efforts. The information extracted from the robot's search must be abstracted and propagated up a hierarchy of decision makers as well as distributed laterally among search specialists. Therefore, the human-robot team must cooperatively transform data into information and levels of knowledge. This means HRI in USAR must consider distributed information transfer and cooperation. Third, the operators and decision-makers (consumers of information provided by the robots) are under extreme cognitive and physical fatigue, introducing new issues not commonly seen in industrial settings. Any progress in HRI for USAR applications would likely be applicable to military and security applications, which are also time-critical, high-stress domains. Fourth, the high degree of human involvement is not expected to change in the near future. The robots are not autonomously mobile for the demanding conditions of a rubble pile, and the most optimistic roadmap posits only navigational autonomy within 10 years (Murphy, 2002). As a result robots require at least one operator, and often a robot will need a second operator to manipulate a tether or safety line for lowering into vertical voids. This introduces the possibility of a more diverse team, with humans serving multiple roles in controlling one robot. Fifth, USAR is a domain where the robots perform tasks that cannot be accomplished by a living creature; thus the operator has no higher metaphor or example of how to use the robot.

PAGE 10

By studying human interactions with USAR robots, it may be possible to learn how to accelerate the generation of new strategies for deploying robots.

This study investigates human-robot interaction during robot-assisted search and rescue activities observed as part of a high-fidelity USAR field training drill in Miami, Florida, managed by Rescue Training Associates. The 16-hour drill was conducted on November 30, 2001, in collapsed buildings and rubble piles, creating a realistic physical setting. It was the final exam for two days of classes in urban search and rescue for 75 firefighters and USAR workers. The Center for Robot-Assisted Search and Rescue (CRASAR) was permitted to tape how the robots were used by the students and instructors during the drill in exchange for providing a short classroom training session on how the robots were used for visual technical search at the World Trade Center response (Casper & Murphy, 2003; Casper, 2002; Micire, 2002). It should be emphasized that data collection was opportunistic and observational: the drill was not structured for a formal HRI study and there were no hypotheses generated beforehand. The conditions of the drill (mostly night-time, exposed rubble and rebar) made roving videotaping particularly unsafe, and only stationary activities (the operator at the control station after it was set up) could be recorded without risking injury.

Although data collection was conducted without a particular hypothesis, the analyses reported in this article focus on situation awareness (SA) and team process and communication. Previous work suggests SA and teamwork are needed for effective task performance in complex, high-stress work domains similar to USAR (Prince & Salas, 2000; Stout, Cannon-Bowers, Salas & Milanovich, 1999; Sonnenwald & Pierce, 2000) and HRI

PAGE 11

studies of USAR (Casper & Murphy, 2003; Casper & Murphy, 2002) support the need for SA. By establishing indicators of situational awareness in robot-assisted search and rescue, the study serves as a foundation for creating the appropriate cognitive augmentation needed for effective technical search. The findings concerning human-to-robot ratios have profound ramifications not only for USAR operations, but also for other robotic domains. In addition, investigation of the rescue team's communication as they work with the robots may provide insight into the development of both individual and shared mental models of the robot, the environment and the search task needed for robot-assisted search operations.

The rest of this study is organized as follows. Chapter 2 provides an overview of the USAR domain and the activities in the technical search task. Chapter 3 provides an overview of robotics and a summary of related HRI work in observational field studies, and defines situation awareness for the purposes of this study. Chapter 4 details the methodology used for the observational study, coding of the video data, and analyses. Results, including patterns of team communication and indicators of situation awareness, are presented in Chapter 5. Chapter 6 discusses the implications and questions raised by the findings, and notes the need for cognitive augmentation to improve human performance.

PAGE 12

CHAPTER 2
OVERVIEW OF USAR AND TECHNICAL SEARCH

The organizational structure of USAR poses interesting challenges for effective human-robot interaction. This section summarizes salient points about the technical search task and the use of robots; the reader is directed to Casper (2002) for a more in-depth description of USAR from a HRI viewpoint. It is important to note that rescue robots are not used by traditional response teams; instead the Center for Robot-Assisted Search and Rescue (CRASAR) maintains an independent team which deploys with national or international teams. The intent is to integrate mature robot technologies into the standard team cache. The description below represents the deployment strategy recommended by the CRASAR response team at the time of the Miami drill.

Technical search is one of many emergency response tasks. In the USA, operations at a mass-casualty incident are divided into twelve emergency support functions (ESF), ranging from medical support (organizing hospitals and ambulances) through logistics (making sure that food and portable toilets are available to workers). Each ESF is conducted by a specially trained task force and coordinated through an incident commander and the incident command staff. USAR is only one function, designated ESF 9, within the larger incident organization. Technical search is one task within USAR. Personnel who conduct technical search are highly trained members of a cohesive team and generally work in pairs (the buddy system) for safety. USAR functions and personnel require advanced training and equipment; as a result it is generally conducted

PAGE 13

by a designated federal or state task force. There are currently 28 federal task forces recognized by the Federal Emergency Management Agency and possibly up to four times as many teams responsible for highly populated urban areas of states. Both federal and regional teams typically share the same organization, fielding a 56-person Task Force in order to sustain operations around the clock (in 12-hour shifts) for a maximum of 10 consecutive days. The teams are composed of a) firefighters, paramedics and Emergency Medical Technicians and b) civilians, most often in canine search, structures, and hazardous materials. USAR workers routinely log over 200 hours of USAR-specific training each year. Most firefighters have not had four years of college, while most civilians have. Task forces are usually elite and highly cohesive, where the members are hand-picked for both skills and social dynamics.

USAR operations are physically and cognitively fatiguing. Every member who works in the hot zone (collapse site) must be able to physically negotiate rubble piles and uneven surfaces, work in confined spaces, climb ladders and work at heights, and quickly exit void spaces to avoid secondary collapses. Task force members wear specialized safety equipment, and are closely monitored for signs of physical exhaustion or stress (particularly Critical Incident Stress Syndrome) when working. Although the teams work in 12-hour shifts, the reality of both shifts setting up operations and infrastructure and working in the field during the first 24 hours leads to sleep deprivation. It is conventional wisdom that a responder will get less than 3 hours of continuous sleep during the first 48 hours of an incident. The sleep deficit does not decrease during the 10-day deployment.

PAGE 14

Technical search, as seen in Figure 1, is one of the four USAR functions: search, technical support, medical, and rescue or extrication. These four operations represent sub-specialties within the task force. While no two disasters are managed precisely the same way, USAR operations often begin with a manual reconnaissance of the area of damage, called the hot zone. Victims on the surface or easily removed from light rubble are extracted immediately as encountered. After reconnaissance, the command staff determines what the safest strategy is to effectively search the hot zone for survivors within the rubble. In areas that are deemed safe for humans to investigate, canine teams may be sent forward.

PAGE 15

Figure 1. Organizational structure of USAR Task Force (FEMA, 1992).

In most cases, technical search specialists wait until called for. When a dog has indicated signs of a survivor in an area, technical search specialists are summoned onto

PAGE 16

the pile. The command staff attempts to minimize the number of people in the hot zone, so technical search specialists wait at the forward station of the hot zone perimeter until called over the radio or assigned an area to search. A technical search specialist may carry a fiber-optic boroscope, thermal imager, or a video camera mounted on a wand for a visual inspection of the rubble, depending on the verbal description of the void or the specific request of a particular device by the leader. If a survivor is found, the search team and command staff bring in the medical and rescue teams, who call on members of the technical support team as needed. Before leaving the void, the technical search specialists mark the exterior of the void with symbols indicating that it has been searched, the structural condition, and presence of survivors/remains.

The visual inspection of a void is most often done with a boroscope or a camera on a wand. These technologies generally cannot penetrate more than 12 feet into a void, whereas robots are well-suited for voids longer than 20 feet. Regardless of tool, the search activity takes on the order of 3-30 minutes, and a technical search specialist may spend most of a 12-hour shift waiting, and then work furiously for a few minutes. The command staff may periodically evacuate the hot zone and cease all operations so that technical search specialists can apply sensitive acoustic listening devices. This also adds to the cognitive stress. No evacuations were called for during the Miami drill while the robots were deployed.

The field data collected in the Miami drill used the robots for a visual technical search task, where robots served as cameras on wheels. The visual technical search task consists of four activities in order of importance: search for signs of victims, report of

PAGE 17

findings to the team or task force leader, note any relevant structural information that might impact the further investigation of the void, and estimate the volume that has been searched and map it relative to the rubble pile. In this case, the technical search specialist operated a robot instead of a boroscope or thermal imager. It should be noted that the team leader is responsible for integrating the information about maps, safety risks, location of victims, and coverage of the pile. Thus, the technical search task is highly focused and generally limited to a short period of time where the searcher is called onto the pile, carries the technical equipment to the site, sets it up, gets results, and then returns to the forward station. The data collected during the drill attempted to capture how the operator was searching for signs of survivors and noting structural information, since these were the activities with direct human-robot interaction.

PAGE 18

CHAPTER 3
ROBOTS ON THE SCENE

What is a Robot?

The term robot came from Karel Čapek's 1921 play R.U.R. (Rossum's Universal Robots). It was used to describe a race of menial workers, artificial humans created from a vat of biological parts to serve as slave labor for real humans. Science fiction books and movies transformed robots into mechanical creatures, and perpetuated their menial stance by portraying them as factual-minded automatons that mimicked human qualities without understanding.

In reality, an intelligent robot is a mechanical creature which can function autonomously and interact with its world (Murphy, 2000). Intelligence implies it does not perform in a mindless fashion, while autonomy means it can adapt to changes in the environment (or itself) and continue to reach its goal. Brooks (2002) defines two principles that distinguish robots from computers: situatedness and embodiment. Robots are situated in that they are embedded in the world, and interact with the world through sensors which influence their behavior. They are embodied in the sense of having a physical body that experiences the world in part through the influence of the world on that body. Like computers, robots have evolved from research laboratories and military/industrial applications, and are rapidly gaining a presence in the worlds of entertainment, work and everyday life.

PAGE 19

Robots have traditionally been used for the three Ds: dull, dangerous or dirty work. Industrial robots have been developed for economic reasons in manufacturing, agriculture and service industries, to increase productivity and reduce inefficient human resource allocation, particularly in hard-to-staff menial labor positions. Because the original goal was precision and repeatability for use in mass production, little effort was put into machine intelligence or human factors considerations. As the space program evolved, the need for artificial intelligence, i.e., robots capable of learning, planning, reasoning and problem-solving, spurred research sponsored not only through NASA, but also by the Defense Advanced Research Projects Agency (DARPA). Mobile robots have developed more from safety and humanitarian concerns, and are the primary focus in nuclear, space exploration, military and rescue applications. This study is directed toward human-robot interaction with mobile robots. While the pervading notion in past research has been the substitution of robots for people, the current trend is toward robots as assistive technology, i.e., designed to complement humans rather than replace them.

The current state of the art in mobile robots is situated autonomy (the robot acts on its own using information from its sensors), though teleoperation is more common in practice. Teleoperation is when a human operator controls a robot from a distance using sensors and a display. (This differs from remote-control operation, where the operator has visual contact with the robot.) Some applications have moved to semi-autonomous control, where the robot is given an instruction or task to do on its own (but under supervision). Others have built upon the notion of shared control, where the robot does the dirty work and the human does that which requires finesse. Certainly there are more

PAGE 20

autonomous applications in the commercial sector (Honda's Asimo, Sony's Aibo robotic dog), but systemic problems have slowed the rate of development in military and governmental applications.

Related Work

Human-robot interaction is a relatively new field. For an overview, the reader is referred to the DARPA/NSF study on human-robot interaction (Murphy & Rogers, 2001). Our study differs from existing research in HRI in three dimensions: goals, methodology, and focus. Of the relatively small number of studies in HRI, only three studies address HRI in field domains, one using data from a USAR exercise in July 2001, one using data from the WTC, and the third with SWAT teams. Situation awareness emerged as a common theme across the three studies, and shared mental models of the problem space were a critical factor in the SWAT team study. Endsley's three-level model of situation awareness (1988) is used in analysis of the data collected, and is briefly reviewed.

HRI Studies

Human-robot interaction is significantly different from human-computer interaction in several ways (Scholtz, 2003). Robots are embodied and can move and interact with humans in dynamic, real-world environments. Their platforms hold sensors that can fail or degrade. Users may interact with more than one independent system, and systems may have varying degrees of autonomy and cognition. These dimensions pose

PAGE 21

challenges for designers of human-robot systems, and those who seek to best utilize the rich potential of complementary relationships between the two.

Human-robot interaction, in turn, is a relatively new field, and this study differs from existing research in HRI in three dimensions: goals, methodology and focus. Most studies have addressed social acceptance of robots or interface design (Breazeal, 2000; Arkin, Fujita, Takagi & Hasegawa, 2003; Draper, Pin, Rowe & Jansen, 1999; Wilkes, Alford, Cambron, Rogers, Peters & Kawamura, 1999; Khatib, Yokoi, Brock, Chang & Casal, 1999; Thrun, 1998; Nicolescu & Mataric, 2001). In contrast, this study examines direct relationships between humans and robots performing tasks in work contexts. Experiments in laboratory or other controlled settings, simulations and modeling techniques are the most common methods of HRI study, with few studies conducted in the field (Breazeal, 2003; Kiesler & Goetz, 2002; Kawamura, Nilas, Muguruma, Adams & Zhou, 2003; Severinson-Eklundh, Green & Huttenrauch, 2003; Langle & Worn, 2001; Nakamura, Ota & Arai, 2002; Fong, Thorpe & Baur, 2001). This study is an observational field study of users working with robots in real environments. Current theoretical models and taxonomies of human-robot interaction focus on levels of autonomy, existing or hypothesized, for known human tasks (Murphy & Rogers, 2001; Scholtz, 2003; Woods et al., in press). This study is concerned with identifying new tasks for robots in the search and rescue domain, with robots that are, for the present, teleoperated.

There is some research that is similar to the current study, i.e., applies to robots and humans in field work settings, rather than office, web, or manufacturing-type

PAGE 22

scenarios: current work in the USAR domain (Casper, 2002; Casper & Murphy, 2002; Micire, 2002), a field study of SWAT teams (Jones & Hinds, 2002), NASA's Robonaut research (Bluethmann et al., 2003) and Kraut, Fussell & Siegel's (2003) related remote collaboration study.

Existing research in robot-assisted USAR from pre-9/11 field trials and the first known deployment of robots in a disaster response (Casper, 2002; Casper & Murphy, 2002) revealed difficulties in operator teleproprioception and telekinesthesis, as described in Sheridan (1992). Prior to the World Trade Center disaster, one ethnographic study (Casper & Murphy, 2002) documented workflow patterns in field trials with rescue workers and two types of tactical mobile robots. The study identified collaborative teleoperation, i.e., two operators with two robots assisting one another, as a team-based work strategy for efficient navigation and error avoidance. While formal ethnographic methods were not used to study robot-assisted operations at the WTC, video data was collected and analyzed post-9/11 in Casper (2002) and Micire (2002). Important findings emerged regarding the environment, tasks, communication and logistics requirements, and social informatics (Casper, 2002). The high-stress environment present on-site quickly revealed the need to address cognitive deficits brought on by fatigue and lack of sleep, both ever-present conditions in USAR operations. Issues such as packability of the robots and complexity of the interfaces influenced rescue workers' willingness to use the robots in technical search tasks. Acceptance of the robots also appeared to be related to workers' prior experience with other technical search tools. Robot failures due to traction slippage, camera occlusion and lighting adjustments retarded the search process. Findings

PAGE 23

suggested that tether management, the lack of image processing, and difficulties in size and depth estimation must be addressed in order to aid fast and accurate victim detection (Micire, 2002). Finally, robot information is a one-to-many mapping with temporal and abstraction hierarchies. The timely and appropriate distribution of information is critical to effective use of rescue robotics.

In a domain very similar to USAR, Jones and Hinds (2002) observed police SWAT teams in training exercises, and identified leader roles in establishing common ground and coordinating distributed team member actions as factors transferable to system design for coordinating distributed robots. Like search and rescue teams, SWAT teams operate in high-stress, time-critical work environments. In this qualitative field study, researchers observed leaders' roles and actions in four field exercises as they coordinated and directed distributed SWAT teams. Leaders formed global mental models to build common ground (shared situation awareness) among distributed team members. They found SWAT teams use objects and spatial relations to coordinate actions, and that sharing common ground from the recipient's perspective increased situation awareness and team performance. These findings were incorporated into a system design using an object-centered electronic dialogue between an operator and multiple, distributed robots. A Correspondence Agent was created to assist the operator in building global SA, and to send commands to distributed robots using their own frame of reference.

This field study of team-based USAR operations differs from Jones and Hinds'

PAGE 24

17 findings regarding the criticality of shared awareness in team-based, dynamic work domains, however, are certainly applicable. Though studied through simulation rather than functional application, NASAs Robonaut research platform (Bluethmann et al., 2003) shares some commonality with USAR HRI, as well, in that the focus is on the operator-robot re lationship in a work context. Robonaut is designed to work in close proximity to humans, performing existing human tasks with existing tools, however, whil e robots in rescue operations go in places humans cannot (or should not) go, and perform ta sks that are yet to be fully defined. The remote collaborative physical task studi es reported in Kraut, Fussell & Siegel (2003) are not robot-related; how ever, there are important aspe cts that are relevant to human-robot interaction in search and rescue operations, namel y, the contribution of shared visual space to situation awareness. In two experiments examining the effects of visual information on a collaborative repair task, the researchers used conversation analysis to compare differences between expe rt assistance given side-by-side, remotely using shared visual space, and remotely through audial channels only. Researchers observed a worker wearing a head-mounted vide o system that provided a remote helper with a view of what the worker was looking at during a collaborative bicycle repair task. Findings were that side-by-side assistance was more effective than remote assistance augmented with shared visual information, due to the limitations in shared visual space, the lack of spatial orienta tion and other physical-perceptua l cues, and the consequent need to spend more time establishing co mmon ground. Remote visual assistance was more effective than audial-only assistance, however, emphasizing the increased situation

PAGE 25

awareness made possible through the visual information that was shared. Conversation analysis results showed the advantage of shared visual space in establishing common ground (shared situation awareness) between the worker and the remote helper. Recommendations included suggestions for video configurations for remote collaboration.

The findings from these studies all point to situation awareness, perception and communication during tasks as critical aspects of human-robot interaction. Operators in field tests and at the WTC did not know how to interpret what they saw through the robot's camera, partly because of fatigue, and partly because of the lack of expected perceptual cues (Casper, 2002; Casper & Murphy, 2002). Like the remote helper in the distributed collaborative task, what they saw did not match their internal mental model. While no formal hypotheses are posed, I anticipate these will be salient factors in USAR robotics.

Situation Awareness

The exploration of SA in robot-assisted search operations in Chapter 5 is based upon Endsley's three-level model, which defines situation awareness as "the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning and the projection of their status in the near future" (1988, p. 97) (italics added). Perception (Level 1 SA) is detection of sensory information: the perception of elements in the environment within a volume of time and space.

PAGE 26

Comprehension (Level 2 SA) is divided into two subcategories, identification and interpretation. Identification is defined as comprehension of perceived cues in terms of subjective meaning: e.g., identifying objects, locations and victims. Interpretation is defined as comprehension of perceived cues in terms of objective significance or importance to the current situation. Projection (Level 3 SA) is defined as the projection of future situation events and dynamics through projecting, generating and activating solutions/plans.

Endsley's model is based on an information-processing theory (Wickens, 1992), in which SA is acquired largely through sensory input: sight, sound, touch, taste and smell. Perception and attention are important elements in taking sensory data into working memory, where it is coded and pattern-matched with existing goals and mental models in long-term memory. Jones & Endsley (1996) noted that 76% of SA errors in pilots were due to problems in perception. This is of particular interest to this study, where the impact of perception on the control of robots is expected to be similar. SA also comes from many other sources in addition to sensory input, e.g., system knowledge, prior knowledge, and from other people in the environment.

Mental models play an important role in dealing with the limitations of working memory. Operators develop internal representations of the technology they use and the environment in which they use it. These mental models help direct limited attention efficiently, integrate information and provide a way of projecting future events or states. As Endsley (2000) states, "The use of mental models in achieving SA is considered to be dependent on the ability of the individual to pattern match between critical cues in the environment and

PAGE 27

elements in the mental model" (p. 16). Mental models support SA; they can also hinder it if the mental models are inaccurate. In the Jones & Endsley study referenced earlier, 20% of SA errors were associated with problems with mental models (1996).

Research on teams and mental models has suggested that having a shared mental model of the problem space can increase SA and team performance (Stout, Cannon-Bowers, Salas & Milanovich, 1999; Sonnenwald & Pierce, 2000). Effective planning and communication strategies were found to increase team shared mental models and correspondingly team performance. In a study of military command and control exercises, Sonnenwald & Pierce (2000) found frequent communications between team members about the work context and situation, work process and domain-specific information were needed to maintain shared situation awareness in dynamic, constraint-bound contexts.

PAGE 28

CHAPTER 4
METHOD

This chapter describes the participants, apparatus and setting of the field study, the Robot-Assisted Search and Rescue Communication Coding Scheme (RASAR-CCS), and the method of analysis used to interpret the data. Five operators were videotaped operating one of three Inuktun robots in a 16-hour disaster response drill. A description of the drill site, the training conducted prior to the drill, and a timeline of the exercise are presented. Statements made by or to the operators were coded by two independent raters into categories generated by a content analysis of the operator statements. Reliability analyses conducted showed acceptable ranges of Cohen's kappas for interrater reliability. Following the coding of each operator, raters through consensus assigned a global rating of situation awareness using a 5-point Likert scale. Correlational and chi-square analyses were conducted based on the data collected.

Participants, Apparatus, Setting and Procedure

The five participants in the study were three student participants of the disaster response training exercise and two instructors. Though demographic information for the five study participants was not available, they were a subset of the approximately 75 students and approximately 15 instructors involved in the drill who can be characterized as a) current USAR Task Force members serving as instructors or completing required

PAGE 29

recertification training hours; or b) first responders (firefighters and emergency medical technicians) seeking USAR certification in order to be eligible to serve on a regional Task Force team. The majority of students had no urban search and rescue experience for a weapon of mass destruction event or natural disaster (e.g., collapse of a large building due to an explosion or earthquake).

The apparatus used in the study consisted of three robotic systems: two Inuktun MicroTracs System robots and an Inuktun MicroVGTV robot (see Figure 2). The user interface offers little information beyond a visual view of the environment from the robot's camera. Scale, dimensionality and color resolution are known constraints. The three robots are small, tracked platforms equipped with a color CCD camera on a tilt unit and two-way audio through a set of microphones and speakers on the robot and operator control unit. The VGTV (Variable Geometry Tracked Vehicle) is a polymorphic robot which can change from a flat position to a raised, triangular position. Its design allows the vehicle to change shape while moving to meet terrain and obstacle challenges, and it is capable of lifting the camera up to a higher vantage point (about 10.5 inches high when raised to maximum height). All three robots are powered and controlled through a 100-foot tether cord that connects the operator control unit (OCU) and the robot. The Inuktun robots have limited communication capability. The operator is given basic control capability: traversal, power, camera tilt, focus, illumination, and height change for the VGTV.

The setting for this study was a 3-day disaster response training exercise offered by Rescue Training Associates, Inc. in Miami, FL on November 28-30, 2001. The exercise

PAGE 30

consisted of 2 days of intensive hands-on training, which included collapse shoring, concrete breaching & breaking, heavy metal cutting and crane operations, technical search operations and WMD/HazMat operations, followed by a 16-hour deployment drill on an actual collapse site. As part of the Technical Search Operations module, which exposes course participants to the latest technical search innovations, all students received 20 minutes of awareness-level instruction in rescue robotics conducted by researchers from USF's Center for Robot-Assisted Search and Rescue.

Figure 2. Inuktun Microtrax and VGTV robots.

PAGE 31

The awareness training course was designed to provide the students with a mental model of how the robot worked, and to provide an opportunity for hands-on experience teleoperating a robot (though time constraints precluded all students from having the chance to do so). The course did not cover any strategies for deployment, because CRASAR had not identified and codified any strategies at that time.

For the 16-hour high-fidelity response drill, a 2-story warehouse in a light industrial park near the airport was partially collapsed, creating a large rubble pile. In addition to the collapsed building, two large rubble piles and an abandoned automobile that was set on fire were used for training operations. Figure 3 shows the layout of the collapse site and debris and rubble piles. The site was not simplified and significant safety hazards were present. Large chunks of concrete walls, tangled rebar, and loose electrical wiring posed the main hazards to people on the piles. Weather and visibility conditions are not always conducive to rescue operations, but in this case the night was clear (almost full moon) and the temperature normal for the area (approximately 70°F).

The drill was attractive because it duplicated a real incident in terms of physical setting and in how the response was conducted. At the start of the drill, participants were checked in, divided into three teams, assigned roles and transported to the site. Once at the site, they established scene security, set up the Base of Operations, and conducted site safety and operational surveys. Field operations commenced at 10:30 P.M., approximately 4 hours after the drill began. During field operations, the robot cache was available for deployment on call. Robots were deployed in three areas of the hot zone, as shown in Figure 3. When a team requested a robot via radio, two or three researchers

PAGE 32

would move to the requested location and set up the robot for use, explaining the controls to the operator as needed. A student or researcher was designated as tether manager for the operator, i.e., uncoiled and recoiled the tether cord, and sometimes shook or popped the cord to free it from debris.

Figure 3. Map of disaster response training site and robot run locations.

The data collection process was a modified version of the procedure used by Casper (2002). Two cameras simultaneously recorded 1) the view through the robot's camera (what it sees) and 2) a view of the operator and the Operator Control Unit (what the operator is seeing and doing). When the robot was visible, a third video unit recorded an external view of the robot in use.

PAGE 33

The robot was deployed five times (see Table 1). Three of the five runs (runs 1, 2 and 5) were initiated on request by the teams. The first two runs searched the main rubble pile located next to the collapsed building. The fifth run used the robot during victim recovery operations on the smaller rubble pile in an attempt to get a visual of or pathway to the victim. The other two runs (runs 3 and 4) were initiated by instructors to gain hands-on experience with the robots. In these runs, areas that had already been searched by the teams were explored. In each run, members of the team self-organized to run the robot, with runs 2, 4 and 5 involving 2 members of the team. In runs 1 and 3, an additional participant became spontaneously involved by looking over the shoulder of the operator and interacting. The remainder of the team was either occupied with other tasks or passively observed. The five runs yielded a total of 66 min 16 sec of videotape for analysis.

Table 1. Operator metrics.

Operator #   Start Time (approx.)   Duration (min:sec)   Robot Used   H-R Ratio   Total # Operator Statements   Statements:Minute Ratio
1 (S)        10:45 P.M.             14:20                VGTV         3:1         82                            5.73:1
2 (S)        11:25 P.M.             13:48                VGTV         2:1         66                            4.78:1
3 (I)        12:45 P.M.             14:39                VGTV         3:1         54                            3.68:1
4 (I)        1:05 A.M.              14:52                VGTV         2:1         60                            4.03:1
5 (S)        3:15 A.M.              3:42                 MicroTrax    2:1         10                            2.70:1
Mean                                12:16                                         54                            4.4:1
SD                                  4:48                                          24                            1.17:1

(S) = student; (I) = instructor
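As a quick arithmetic check on the Statements:Minute column, each per-run rate is simply the statement count divided by the run duration, and the reported 4.4:1 mean is consistent with pooling all 272 statements over the summed run durations (about 61 minutes). The short sketch below recomputes these figures from Table 1; the variable and function names are my own and not part of the thesis, and small rounding differences from the table are possible.

# Sketch: recompute the statements-per-minute figures reported in Table 1.
# Run data copied from the table; helper names are illustrative only.
runs = [
    # (operator, duration "min:sec", operator statements)
    ("1 (S)", "14:20", 82),
    ("2 (S)", "13:48", 66),
    ("3 (I)", "14:39", 54),
    ("4 (I)", "14:52", 60),
    ("5 (S)", "3:42", 10),
]

def minutes(duration: str) -> float:
    """Convert a min:sec string to decimal minutes."""
    m, s = duration.split(":")
    return int(m) + int(s) / 60.0

for operator, duration, statements in runs:
    rate = statements / minutes(duration)
    print(f"Operator {operator}: {rate:.2f} statements/min")

# Pooled rate across all five runs (272 statements over the summed durations).
total_statements = sum(s for _, _, s in runs)
total_minutes = sum(minutes(d) for _, d, _ in runs)
print(f"Pooled: {total_statements / total_minutes:.2f} statements/min")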

PAGE 34

Robot Assisted Search and Rescue Communication Coding Scheme

Since robot-assisted search and rescue is a relatively new field, there are no existing domain-relevant methods of analysis (e.g., communication coding schemes). The FAA's Controller-to-Controller Communication and Coordination Taxonomy (C4T; Peterson, Bailey, & Willems, 2001) uses verbal information to assess team member interaction from communication exchanges in an air traffic control environment. The C4T is applicable to this work in that it captures the how and what of team communication by coding form, content and mode of communication. The goal here, however, is two-fold: not only to capture the how and what of USAR robot operator teams, but also the who, and to capture observable indicators of robot operator situational awareness. Therefore I developed a new coding scheme, the Robot Assisted Search and Rescue Communication Coding Scheme (RASAR-CCS). Although the development of the RASAR-CCS is guided by the structure of the C4T, and incorporates relevant portions of the C4T, the RASAR-CCS is domain-specific. It was developed to examine USAR robot operator interactions with team members and to capture observable indicators of robot operator situational awareness.

The RASAR-CCS addresses the goals of capturing team process and situational awareness by coding each statement on four categories: 1) speaker-recipient dyad, 2) form or grammatical structure of the communication, 3) function or intent of the

PAGE 35

communication, and 4) content or topic of the communication. By examining dyad, form and content, one can determine which team members are interacting and what they are communicating about. Similarly, exploring elements of content and function allows one to examine indicators of operator situation awareness. The development of the RASAR-CCS is described below and the complete coding scheme is provided in Appendix A.

Speaker-recipient dyad codes were developed as a function of speaker-recipient pairs of individuals anticipated in a USAR environment. Nine dyads were constructed to describe conversations between individuals. Five dyad codes classify statements made by the operator to another person (or persons): operator-tether manager, operator-another team member, operator-researcher/robot technician, operator-group, or operator-other. The remaining four classify statements received by the operator from another person: tether manager-operator, another team member-operator, researcher/robot technician-operator, or other-operator. The primary dyads involve the operator and tether manager (the person manipulating the robot's tether during teleoperation), operator and researcher, or operator and another team member. The element operator-other is used when the operator addresses a specific person who does not match one of those roles. The operator-group dyad is used when the operator is addressing those present as a group, or when the operator's statements are not clearly addressed to a specific individual. Verbalizations between individuals which did not include the operator were not coded.

Similar to the C4T taxonomy, the form category contains the elements: question, instruction, comment or answer (RASAR-CCS uses the label instruction while the C4T

PAGE 36

uses the label command to describe statements dictating that some task or action take place). Statements not matching these categories are classified as undetermined.

To establish content and function codes, a subset of operator statements (177 of the 272 total statements) were subjected to a Q-sort content analysis (Sachs, 2000). Two subject matter experts (SMEs) not involved in the study sorted operator statements on content according to the topic being discussed, and on function according to the high-level purpose of the statement. Q-sort categories were reviewed and refined by two additional SMEs to ensure the elements reflected the domain of content and function.

The Q-sort analysis based on content yielded seven elements representing the content category: 1) statements related to robot functions, parts, errors, or capabilities (State of the robot), 2) statements describing characteristics, conditions or events in the search environment (State of the environment), 3) statements reflecting associations between current observations and prior observations or knowledge (State of information gathered), 4) statements surrounding the robot's location, spatial orientation in the environment, or position (Robot situatedness), 5) indicators of direction of movement or route (Navigation), 6) statements reflecting search task plans, procedures or decisions (Search Strategy), and finally 7) statements unrelated to the task (Off Task).

The first four content elements are necessary for building and maintaining SA in search operations, while the elements of navigation and search strategy require SA. Situation awareness is generated through information perceived (Level 1) and comprehended (Level 2) about the robot and environment. Since navigation and search

PAGE 37

strategy are elements that cannot be executed efficiently without SA, statements reflecting these are indicators of operator SA (Level 3).

Eight elements were identified from the Q-sort to represent the function category: 1) asking for information from someone (Seek information), 2) sharing observations about the robot or environment (Report), 3) making a previous statement or observation more precise (Clarify), 4) affirming a previous statement or observation (Confirm), 5) expressing doubt, disorientation, or loss of confidence in a state or observation (Convey uncertainty), 6) projecting future goals or steps to goals (Plan), and 7) sharing information other than that described in Report, either in response to a question or offering unsolicited information (Provide information). For this study, the focus is on operator SA; hence an eighth element was included as a default for statements made by individuals other than the operator (Non-operator).

The function elements of reporting and providing information merit explanation, as they appear very similar. Reporting involves perception and comprehension of the state of the robot, robot situatedness, the environment or the state of information gathered. Any other information shared by an operator, in answer to a question or on his own, is classified as providing information (for example, search strategy or navigation).

Indicators of SA are captured in the function category primarily through the elements reporting and planning. When the operator shares information (reports) based on the robot's eye view, one can infer that the first two levels of SA, perception and comprehension, have taken place. The third SA level, planning and projection, is captured in the function category as the element plan.
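To make the scheme concrete, each transcribed statement receives one code from each of the four RASAR-CCS dimensions described above. The following sketch is an illustrative data model only: the element labels follow the text, but the class, enum, and field names are my own and are not part of the published coding scheme.

# Illustrative data model for a RASAR-CCS coded statement (identifiers are hypothetical).
from dataclasses import dataclass
from enum import Enum

class Dyad(Enum):
    OPERATOR_TO_TETHER_MANAGER = "operator-tether manager"
    OPERATOR_TO_TEAM_MEMBER = "operator-another team member"
    OPERATOR_TO_RESEARCHER = "operator-researcher/robot technician"
    OPERATOR_TO_GROUP = "operator-group"
    OPERATOR_TO_OTHER = "operator-other"
    TETHER_MANAGER_TO_OPERATOR = "tether manager-operator"
    TEAM_MEMBER_TO_OPERATOR = "another team member-operator"
    RESEARCHER_TO_OPERATOR = "researcher/robot technician-operator"
    OTHER_TO_OPERATOR = "other-operator"

class Form(Enum):
    QUESTION = "question"
    INSTRUCTION = "instruction"
    COMMENT = "comment"
    ANSWER = "answer"
    UNDETERMINED = "undetermined"

class Content(Enum):
    STATE_OF_ROBOT = "state of the robot"
    STATE_OF_ENVIRONMENT = "state of the environment"
    STATE_OF_INFORMATION = "state of information gathered"
    ROBOT_SITUATEDNESS = "robot situatedness"
    NAVIGATION = "navigation"
    SEARCH_STRATEGY = "search strategy"
    OFF_TASK = "off task"

class Function(Enum):
    SEEK_INFORMATION = "seek information"
    REPORT = "report"
    CLARIFY = "clarify"
    CONFIRM = "confirm"
    CONVEY_UNCERTAINTY = "convey uncertainty"
    PLAN = "plan"
    PROVIDE_INFORMATION = "provide information"
    NON_OPERATOR = "non-operator"

@dataclass
class CodedStatement:
    """One transcribed statement with its four RASAR-CCS codes."""
    text: str
    dyad: Dyad
    form: Form
    content: Content
    function: Function

Under this representation, the SA indicators discussed above correspond to statements whose content is navigation or search strategy, or whose function is report or plan.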

PAGE 38

The RASAR-CCS also obtains a global assessment of situational awareness, rated on a 5-point Likert scale (1=low, 5=high). This observer rating is a subjective measure reached by consensus between the two raters. Video recordings of the operators manipulating the robot were used to code statements made by both the operators and surrounding personnel.

Two raters were trained to code videotapes using the RASAR-CCS. One rater (the author) was involved in data collection. The second rater, though not naive, was not on site during data collection. Raters reviewed descriptions of the disaster drill and data collection procedures, and then reviewed definitions for all the codes. Coding guidelines were developed to reduce ambiguity and to enhance reliability. Behavioral examples selected from the videotapes were also reviewed. The majority of the training centered on coding statements together and reaching consensus. Training continued until both raters felt comfortable rating independently (approximately 8 hours).

A written transcript of each videotape was produced, yielding a fixed number of statements to be coded (502 statements across the five operators). Using the Noldus Observer Video-Pro (Noldus, Trienes, Hendriksen, Jansen & Jansen, 2000) observational coding software, raters coded 181 statements (36%) in the transcripts along the four RASAR-CCS dimensions: dyad (speaker-recipient pair), form (grammatical structure of the communication), function (intent of the communication) and content (topic). Cohen's kappa (κ) was computed to measure interrater agreement for each of the four coding dimensions: dyad, form, function, and content. Reliability analyses verified that raters agreed more than chance would predict, with Cohen's kappas of .72 for dyad, .78 for


statement type, .64 for statement content and .72 for statement function. The remaining statements were coded by a single rater. Codes for each of the 502 statements are used in the data analyses.

Frequencies, percentages and correlations of the RASAR-CCS categories and elements are generated to explore team process and communication: who is talking to whom (dyad), how (form), about what (content) and for what purpose (function). This is an exploratory study, in that I am looking for relationships that may have some bearing on effective human-robot interaction in the USAR domain. Therefore, all operator statement categories are included in the analysis. Significant relationships emerged and are presented in each of the four categories. All correlations reported are significant at p<.05 unless otherwise noted.

As mentioned previously, the RASAR-CCS obtains global assessments of situation awareness for each operator (5-point scale; 1=low, 5=high). These ratings were used to identify operators with high versus low situation awareness. Data from the two operators receiving a rating of one were combined to form a low SA group; similarly, data from operators receiving a four or five were combined to form a high SA group (data from the one operator receiving a three were not used in this analysis). Chi-square analyses are computed to determine differences in high and low SA operator statements relative to who the operator was communicating with (dyad), and the statement form, content and function.
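As a worked illustration of the reliability check (a sketch, not the analysis script used in the study), Cohen's kappa for one coding dimension can be computed from two raters' labels; the ten coded statements below are hypothetical.

    # Cohen's kappa for interrater agreement on one RASAR-CCS dimension
    # (here, function codes). The rater labels are invented for illustration.
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        n = len(rater_a)
        # Observed proportion of agreement
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Chance agreement from each rater's marginal proportions
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        categories = set(rater_a) | set(rater_b)
        p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
        return (p_o - p_e) / (1 - p_e)

    rater_1 = ["report", "plan", "seek", "report", "confirm",
               "report", "plan", "provide", "seek", "report"]
    rater_2 = ["report", "plan", "seek", "provide", "confirm",
               "report", "plan", "provide", "report", "report"]

    print(round(cohens_kappa(rater_1, rater_2), 2))  # 0.73 for this toy example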


CHAPTER 5

RESULTS

This section presents findings related to situation awareness and team communication. Correlational analyses, chi-square analyses and statement category frequencies/percentages are presented in three major areas: SA, team process and communication, and the interaction of SA and team communication.

Situation Awareness

Operators had difficulty building or maintaining SA, and spent over half of their time trying to do so. As shown in Figure 4, 54% of operator statements were related to gaining situation awareness at Levels 1 and 2 (state of robot and robot situatedness, 38%; state of environment, 13%; and information gathered, 3%), and considerably less time was spent talking about factors requiring situation awareness (Level 3) to perform (navigation, 21%, and search strategy, 16%).

Relationships between elements in the dimensions of content and function captured indicators of operator situation awareness (see Chapter 4 for a description of SA identifiers in the RASAR-CCS). The correlation matrix of operator statement categories (Appendix B) revealed that operator statements related to search strategy were strongly correlated with statements related to the state of the environment (r=.94) and the state of information gathered (r=.89). These two SA-related content areas were closely tied to each other (r=.91) as well, indicating the importance of linking what is being observed in the environment with what the operator already knows about the environment.
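A sketch of how a matrix like the one in Appendix B can be produced, assuming the values are Pearson correlations computed across the five operators from per-operator category counts; the counts below are invented, and whether raw counts or proportions were used is an assumption here.

    # Hypothetical per-operator statement counts for three content categories;
    # Pearson correlations across the five operators, in the spirit of Appendix B.
    import pandas as pd

    counts = pd.DataFrame(
        {
            "search_strategy":      [2, 14, 9, 6, 12],
            "state_of_environment": [3, 12, 8, 4, 9],
            "info_gathered":        [0,  4, 2, 1, 2],
        },
        index=["op1", "op2", "op3", "op4", "op5"],
    )

    print(counts.corr(method="pearson").round(2))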


Figure 4. Percentages of operator statements by content.

Search strategy and planning are an intuitive fit because of the need to plan search activities, and indeed, search strategy statements correlated with statements coded as planning (r=.95) in the function category. However, the significant correlation of planning (an SA Level 3 indicator) with the state of the environment (r=.98, p<.001) emphasizes the necessity of perception and comprehension in performing search operations. This is confirmed by another important relationship in this category, between the two functions of plan and report (r=.93). The report element is used strictly when the operator is reporting on the state of the robot (including situatedness), environment or


information gathered, all indicators of perception and comprehension (Levels 1 and 2 SA). This clearly ties situation awareness to operator planning (SA Level 3) in HRI.

Team Process and Communication

Operators demonstrated team-based processes and communication techniques while using the robot in search operations, a finding supported by statement frequencies, percentages, and correlations between statement categories. Results are first presented for the 272 statements made by the operators to team members, since the study's focus is on the operator's mental model and situational awareness. Additional results examining operator and team member statements are then presented. Table 2 provides the frequency and percentage of occurrence of each descriptor by coding category.

As seen in Table 1, operators spoke to other participants approximately four times per minute while teleoperating the robot (M=4.4, SD=1.17 statements/min). Almost 30% of the operators' statements were directed to team members directly connected to the task of navigating the robot in search of a victim (the tether manager and the other team member; see Table 2). Correlations (Appendix B) of the operator-team member dyad with other variables in the coding system also depict the team-oriented nature of the robot search task. The operator's statements to his or her teammate correlated significantly with statements coded as instructions (r=.97, p<.001). The content categories related to operator-team member statements were state of information gathered and search strategy (r=.94 for each), suggesting that in conversations with a


teammate, the operator related what he was seeing to something he had seen before (or had prior knowledge of), and articulated search strategies.

Table 2. Operator statement category frequencies and percentages.

Category/Subcategory                 Frequency    Percentage of total by category

Dyad
  Operator-Tether Manager                 46              17
  Operator-Researcher                    109              40
  Operator-Team Member                    30              11
  Operator-Other                          10               4
  Operator-Group                          77              28

Form
  Question                                45              17
  Instruction                              8               3
  Answer                                  99              36
  Comment                                120              44

Content
  State of the Robot                      62              23
  State of the Environment                36              13
  State of Information Gathered            9               3
  Robot Situatedness                      38              14
  Search Strategy                         43              16
  Navigation                              57              21
  Off Task                                23               9
  (Missing content)                       (4)              1

Function
  Seek Information                        29              11
  Report                                  62              23
  Clarify                                 11               4
  Confirm                                 17               6
  Convey Uncertainty                      18               7
  Provide Information                     88              32
  Plan                                    27              10
  (Missing function)                     (20)              7

Total number of statements = 272


Correlations of operator statement form with content suggest operators' instructions were related to search strategy, the state of the environment and the state of information gathered (r=.99, .95, .92, respectively). In addition, instruction statements made by operators correlated significantly with statements coded as having a planning function (r=.94).

Although the primary focus of this paper is on operator situation awareness and how operators talk to team members to facilitate SA, further analyses were conducted to explore information exchange between dyad members (see Figure 5). That is, operator statements to primary rescue team members (operator, tether manager, team member, and researcher/robot specialist) and from primary rescue team members to the operator were examined.

Figure 5. Team member interactions.


Previously I examined the frequency of statements for each element within a category (e.g., 45 questions, or 62 statements regarding the state of the robot). In this analysis I examined, by dyad, the frequency of statements based on form, content and function combined to give an integrated picture of information exchange between rescue team members (e.g., the operator asked a question of the tether manager seeking information about the state of the robot).

Naturally, at this level of detail, the number of possible combinations (4 forms, 7 topics, and 7 functions) is formidable. Therefore, Table 3 presents only the three highest frequency statement types (including ties), broken down by speaker-recipient for each dyad (i.e., operator-tether manager exchanges are presented as statements from the operator to the tether manager and as statements from the tether manager to the operator).

Operators clearly had distinct expectations for information exchange between themselves and members of their team. Operators requested information from tether managers regarding the state of the robot (9%), its situatedness (6%) and navigation (6%), and gave tether managers information and instructions (6% and 6%, respectively) regarding the state of the robot. Conversely, operators asked team members for information on the robot (7%), the environment (7%), and search strategy (7%), and offered information to team members regarding robot situatedness (7%), the environment (7%), and search strategy (7%).
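A sketch of how such an integrated tally can be produced, with invented coded statements standing in for the RASAR-CCS transcript; the handling of ties shown in Table 3 is omitted here.

    # Tally (form, content, function) combinations within each speaker->recipient
    # direction and keep the three most frequent per direction. The coded
    # statements here are invented placeholders for the coded transcript.
    import pandas as pd

    statements = pd.DataFrame(
        [
            ("operator", "tether_manager", "question", "state_of_robot", "seek_information"),
            ("operator", "tether_manager", "question", "state_of_robot", "seek_information"),
            ("tether_manager", "operator", "instruction", "navigation", "provide_information"),
            ("operator", "team_member", "comment", "state_of_environment", "report"),
            ("team_member", "operator", "comment", "state_of_robot", "report"),
        ],
        columns=["speaker", "recipient", "form", "content", "function"],
    )

    counts = (statements
              .groupby(["speaker", "recipient", "form", "content", "function"])
              .size()
              .rename("n")
              .reset_index())

    top3 = (counts.sort_values("n", ascending=False)
                  .groupby(["speaker", "recipient"])
                  .head(3))
    print(top3)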


Table 3. Dyad frequencies and percentages for tether managers and team members.*

Operator-Tether Manager Exchanges (n=83)

  Operator to Tether Manager (n=47) - statement type (% of speaker's statements):
    Question seeking information about State of the Robot - 9%
    Question seeking information about Robot Situatedness - 6%
    Question seeking information about Navigation - 6%
    Instruction planning State of the Robot - 6%
    Comment providing information on the State of the Robot - 6%

  Tether Manager to Operator (n=36) - statement type (% of speaker's statements):
    Instruction regarding Navigation - 22%
    Comment on Robot Situatedness - 11%
    Comment on Navigation - 11%
    Answer about State of the Robot - 11%
    Answer reporting Navigation - 6%
    Answer confirming Navigation - 6%

Operator-Team Member Exchanges (n=76)

  Operator to Team Member (n=27) - statement type (% of speaker's statements):
    Question seeking information about State of the Robot - 7%
    Question seeking information about State of the Environment - 7%
    Question seeking information about Search Strategy - 7%
    Comment reporting State of the Environment - 7%
    Comment reporting Robot Situatedness - 7%
    Comment planning Search Strategy - 7%
    Answer providing information about Search Strategy - 7%

  Team Member to Operator (n=49) - statement type (% of speaker's statements):
    Comment on State of the Robot - 14%
    Instruction regarding Navigation - 12%
    Comment on State of the Environment - 10%

*Percentages do not total 100% since only the three highest frequency statement types (including ties) are shown.


Interaction of SA and Team Communication

Comparisons between operators rated as having high versus low SA on a global rating scale offer support for the influence of team behaviors on situation awareness. Chi-square results (Table 4) suggest operator communication with the tether manager (χ² = 16.2, p<.001) and with other team members (χ² = 18.6, p<.001) was related to high situation awareness. High SA operators also provided instructions more frequently than their low SA counterparts (χ² = 4.5, p<.05).

Furthermore, chi-square analyses reveal that regardless of who they were speaking to, high SA operators made more statements than low SA operators about robot situatedness (χ² = 5.4, p<.05) and about search strategy (χ² = 12.9, p<.001). This suggests high SA operators had more knowledge of the robot's location and spatial orientation in the void space, and were more focused on goal-directed cues. It follows that the operator's situation awareness is a key factor in planning and executing search operations. Operators with low SA did not seem to have a plan as to how to search using the robot.

Finally, high SA operators engaged in higher levels of reporting, i.e., they talked more to their teammates about SA-related factors in the search environment (χ² = 4.74, p<.05). And though not significant at the .05 level, the data suggest that low SA operators conveyed uncertainty more frequently than high SA operators (χ² = 3.55, p=.06, ns).
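The dyad and form values reported in Table 4 are consistent with one-degree-of-freedom goodness-of-fit tests of the low- versus high-SA statement counts against equal expected frequencies; the minimal sketch below, written under that assumption, reproduces the reported statistics.

    # Chi-square goodness-of-fit sketch (assumes equal expected frequencies for
    # the low and high SA groups); observed counts are taken from Table 4.
    from scipy.stats import chisquare

    comparisons = {
        "Operator-Tether Manager dyad": (9, 36),   # (low SA, high SA) statement counts
        "Operator-Team Member dyad": (2, 24),
        "Instruction form": (1, 7),
    }

    for name, observed in comparisons.items():
        stat, p = chisquare(observed)              # expected defaults to an equal split
        print(f"{name}: chi-square = {stat:.1f}, p = {p:.3f}")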


Table 4. Chi-square results for high and low SA operator statements.

                                   Low SA operators     High SA operators     Chi-square    p-value
                                   (frequency) (N=2)    (frequency) (N=2)

Dyad
  Operator-Tether Manager                  9                   36                16.2        .000**
  Operator-Team Member                     2                   24                18.6        .000**
  Operator-Researcher                     56                   52                 .15        .70
  Operator-Other                           5                    4                 .11        .74
  Operator-Group                          42                   32                1.35        .25

Form
  Question                                16                   27                2.81        .09
  Instruction                              1                    7                4.5         .03*
  Answer                                  46                   49                 .09        .76
  Comment                                 51                   65                1.69        .19

Topic
  State of the Robot                      30                   30                0           1
  State of the Environment                14                   20                1.06        .30
  State of Information Gathered            3                    5                 .5         .48
  Robot Situatedness                      11                   25                5.4         .02*
  Search Strategy                          9                   32               12.9         .000**
  Navigation                              32                   24                1.14        .29
  Off Task                                 8                   15                2.13        .14
  (missing)                                0                    4                4           .04*

Function
  Seek Information                        10                   17                1.81        .17
  Reporting                               22                   39                4.74        .03*
  Clarify                                  5                    6                 .09        .76
  Confirm                                  5                   11                2.25        .13
  Convey Uncertainty                      13                    5                3.55        .06
  Provide Information                     36                   47                1.46        .23
  Plan                                    10                   16                1.38        .24
  (missing)                               13                    7                1.8         .18


CHAPTER 6

DISCUSSION

Several aspects of the results merit further discussion below: the challenges in perception and situation awareness, the importance of team communication in developing mental models of the problem space, and the support for findings of previous studies.

Key Points

SA is critical to effective utilization of rescue robots in USAR, and operators had difficulty building and maintaining SA. The most important (and perhaps surprising) finding is that fully half of the operators' communication surrounds perceiving and interpreting (or trying to interpret) what is happening in the world and with the robot, and relating that information to what is already known, with the remaining half related to planning search strategy, navigating and teleoperating. This finding is based on the fact that over half (54%) of the statements made by the five operators were coded as content associated with situation awareness (Levels 1 and 2), an important aspect of human-robot interaction (Scholtz, 2003). It is also supported by the correlations of SA-related content categories with search strategy and planning. This contradicts traditional wisdom in robotics, which assumes navigation and mission tasks are conducted simultaneously. However, it confirms Sheridan's (1992) findings regarding the difficulties in teleproprioception and telekinesthesis during teleoperation.


This suggests one of the main challenges in achieving effective human-robot interaction is bridging the cognitive gaps between the two entities. The cognitive control tasks of navigating, searching, mapping, interpreting what is being seen on the video monitor, and making decisions about what to do with that information are overloading the operator. Training and experience may assist the USAR robot operator in forming a mental model of how robot's eye information is conveyed and then interpreted. What is clear, however, is that the information being received from the robot does not match the operator's current mental model. One explanation may be that the perceptual cues, e.g., the keyhole effect noted by Woods et al. (in press), are indeed challenging the operator, and that is where the cognitive deficits begin to appear. This difficulty in integrating the robo-immersed view with expectancies regarding the search process mirrors Casper's observations at the WTC. In both cases fatigue certainly played a part; it seems likely, however, that lack of a cognitive model of how a robot sees is also a factor.

On an interesting note, videotapes recording the robot's eye view during the five operator deployments revealed an almost even split between the amount of time operators spent actually moving the robot (51%) as opposed to remaining stationary (49%). The percentage of time the robot spent stationary is very similar to the percentage of statements devoted to SA Levels 1 and 2 (both are around 50%). Correlational analysis of operator statements and robot movements is outside the scope of this study; however, it will be explored in future work.

The second key point is that it takes a team to use a robot in search operations, and not just physically: operators used team processes and communication to compensate


for the lack of SA; i.e., they tried to pool their perceptions to create a shared mental model, since they had difficulty coming up with one on their own. The operators' communication with teammates is significant in terms of frequency, form, content and function. The team-oriented organizational structure of USAR stresses the interdependence between team members in getting the job done effectively.

Operators discussed search strategy with their teammates using information about the environment, and relating it to what they already knew. Yet only 16% of their statements concerned the state of the environment, or related what they were seeing to known information, a telling percentage in light of the necessity of this information in search operations. This suggests operators were attempting to develop a shared mental model with teammates in order to increase situation awareness. They also used this information to plan and devise search strategies. The report function used in the coding scheme was defined as reporting about the state of the robot, environment or information gathered, all SA-related topics. What is exciting is that reporting and planning were clearly related; i.e., operators were using what they were seeing through the robot's eye to form a mental model of the search space (and the robot's position in that space) in order to devise search strategies. Planning not only facilitates the building of shared mental models with teammates, it can also result in improved team performance (Stout et al., 1999). While it is surprising that navigation statements correlated only with statements function-coded as conveying uncertainty (r=.93), this may be artifactual, reflecting the lack of SA in two of the operators.


The effective use of team processes and communication to compensate for the lack of SA suggests there is an interaction between SA and team communication. Operators with high SA talked to their teammates more about search strategies and robot situatedness, gave more instructions, and reported more on the state of the environment, robot, and information gathered. Talking about it helps create a mental model of what is happening. This is important for future training and development in USAR, and also for robot system design. Confirming or disconfirming their interpretation of what was seen with another individual, collaborating with a teammate to project, plan and make decisions, and sharing information with other team members were not necessarily new strategies to the Task Force workers; the application of those strategies to working with a new technology, however, definitely was. This finding supports previous findings from an Air Force study of F-15C pilots (Bell & Lyon, 2000) in which the most highly rated elements of SA were a) use of communication information and b) information integration from multiple sources. Other studies have noted the interrelation of team communication and situation awareness. Mosier & Chidester (1991) found the number of situation awareness-related communications predicted team performance, and Bailey & Willems (2002) reported that air traffic controllers increased communications to maintain situation awareness under conditions of high workload.

Operator statements reflect specific expectations regarding the nature of each team member's role (see Figure 5). The data suggest team members did not share the operator's role expectations. For example, although team members provided information


on the robot and the environment, and provided instructions for navigation, they paid little attention to search strategy. In addition, tether managers provided information on the robot and its situatedness; however, they mainly provided instructions regarding navigation. This suggests operators saw tether managers as a resource for obtaining information, whereas tether managers saw their role as providing assistance with navigation. While the operator saw team members as problem-holders, sharing pertinent information about the state of the robot and the environment and collaborating on search strategy, team members did not address operator needs regarding search strategy.

Lastly, quantitative analyses confirm previous research on HRI in search and rescue operations (Casper, 2002; Casper & Murphy, 2003), which suggested that these tasks will be short and require two operators, not one. Time-on-task with the robots was of short duration, with the average deployment drop lasting less than 15 minutes. (Time-on-task describes the time elapsed from the initial drop of the robot until the conclusion of the operator's run.) Four of the five operators utilized the robot for slightly under 15 minutes each (M=12.26 min, SD=4.8 min) in search operations. The fifth operator used the robot briefly during a rescue operation to try to see or get to the victim through a small void; when he saw that was not feasible, he terminated the run. These run times are similar to those of operators at the WTC (Casper, 2002). Actual drop times at the World Trade Center were even less, averaging 6-7 minutes. The ability to complete the search in a short time is a significant factor in the rescue workers' perception of the utility of a rescue robot. As new control tasks evolve utilizing the robots (e.g., carrying medical payloads to victims), operators may spend longer periods of time deploying them.


Conclusions

This study reports on human-robot data from a disaster response training exercise conducted on a collapsed building site. While the number of operators video recorded is small, the data are rich and the findings lend support to prior research in the USAR domain and results from the WTC, which indicated that perception, not navigation, is more significant than previously thought. The major findings of the study lead to the following conclusions:

Cognitive augmentation in the form of intelligent perceptual assistance is needed. On average, the operator is actively engaged in the search task only 32% of the time. In addition, 54% of the operators' statements centered on perception and comprehension of the robot and environment. Finally, the amount of time the robot was stationary was close to 50%. This suggests that it is extremely difficult for operators to establish situational awareness due to inherent perceptual challenges (the world is being perceived from an unnatural viewpoint, the lighting is uncontrolled, etc.) and lack of information in the user interface about the state of the robot (Is it upside down? What pose is it in?). This is consistent with the results of the previous studies of HRI in USAR (Casper & Murphy, 2002; Casper & Murphy, 2003).

Robot-assisted technical search is a team task rather than an individual one. The human-robot ratio was never less than 2:1, in part because physical robot operations require, at a minimum, an operator and a tether manager. In addition, the search task itself demands information exchange among team members. More frequent


communication with team members was related to higher ratings of operator SA (see Table 4). Furthermore, operator-team member communication was significantly related to statements involving search, instructions, and the state of information gathered (Appendix B).

Robot operators need a new cognitive mental model to filter and comprehend data provided by the robot, and to plan effective search strategies. More than half of operator statements were related to perception and comprehension of the robot and the environment perceived through the robot's eye view. Even so, the low frequency of statements regarding information needed to plan the search (the state of the environment, 13%; state of information gathered, 3%) suggests operators had difficulty reconciling information obtained from the robot's eye view with their existing knowledge of the search environment.

USAR technical search teams need a new shared mental model of the technical search task in order to coordinate activities effectively. Operators and their teammates did not have shared expectations regarding their roles in the search process (Figure 5). Operators saw tether managers as a resource for obtaining information about the robot in the environment along with navigation, whereas tether managers saw their role as primarily providing assistance with navigation. Similarly, the operator saw team members as problem-holders, sharing pertinent information about the state of the robot and the environment and collaborating on search strategy; however, team members did not address operator needs regarding search strategy (Figure 5).


Though the results of this study are preliminary and must be replicated, the findings give rise to numerous new questions:

Is the amount of time spent building or maintaining situation awareness stable, or will it change as operators gain experience?

Will cognitive augmentation shorten the time operators spend gaining SA?

What perceptual cues are critical for gaining situation awareness in technical search operations?

Will shared mental models of the robot, the environment and the search task improve operator performance in search operations?

Future research should examine these and other issues that emerge as USAR personnel acquire more experience working with increasingly sophisticated robotic technology. In particular, the use of visual information (the robot's eye view) as a resource has implications for new ways of conducting USAR operations. Sharing the robot information across various problem-holders in the organization (structural and medical specialists, incident commanders) could prove invaluable in reducing the time required to rescue disaster victims. The fact that these problem-holders may be physically remote suggests distributing robot information could reduce the effects of cognitive fatigue or localized noises and distractions that accompany search activities.

In addition, the RASAR Communication Coding Scheme generated to organize and examine human-robot interaction may provide insight into the nature of the man-machine relationship in USAR and in other robotic domains as robots continue to evolve and become a part of the workplace. Patterns of team process and communication may


emerge through analysis that will be useful in training; e.g., robot operators may train in teams rather than individually in order to capitalize on the interaction between SA and team communication.

Currently, research is ongoing that applies the techniques described in this study to new data collected from 40 rescue professionals in two similar 24-hour high-fidelity disaster response drills conducted in 2002-2003. The goals are to 1) identify operator and team mental models of robot-assisted search, 2) pinpoint the perceptual cues that increase situation awareness and spur development of these models, and 3) continue to study the evolving processes of team communication and collaboration as robots are incorporated into USAR operations. It is expected that the study results will be useful for the larger case of anticipating (and facilitating) roles, tasks, and strategies that emerge when a new technology is introduced.


REFERENCES

Arkin, R., Fujita, M., Takagi, T. & Hasegawa, R. (2003). An ethological and emotional basis for human-robot interaction. Robotics and Autonomous Systems, 42(3-4), March 2003.

Bailey, L. & Willems, B. (2002). The moderator effects of taskload on the interplay between en route intra-sector team communications, situation awareness, and mental workload (DOT/FAA/AM-02/18). Department of Transportation, Federal Aviation Administration, Office of Aerospace Medicine, Washington, D.C.

Bell, H. & Lyon, D. (2000). Using observer ratings to assess situation awareness. In M. Endsley & D. Garland (Eds.), Situation Awareness Analysis and Measurement (p. 129-146). Mahwah, NJ: Erlbaum.

Bluethmann, W., Ambrose, R., Diftler, M., Askew, S., Huber, E., Goza, M., Rehnmark, F., Lovchik, C. & Magruder, D. (2003). Robonaut: A robot designed to work with humans in space. Autonomous Robots, 14(2-3), 179-197.

Breazeal, C. (2000). Sociable machines: Expressive social exchange between humans and robots. Doctoral dissertation, Department of Electrical Engineering and Computer Science, MIT.

Breazeal, C. (2003). Toward sociable robots. Robotics and Autonomous Systems, 42(3-4), 167-175.

Brooks, R. (2002). Flesh and Machines: How Robots Will Change Us. New York: Pantheon.


Casper, J. (2002). Human-robot interactions during the robot-assisted Urban Search and Rescue response at the World Trade Center. M.S. thesis, Computer Science & Engineering, University of South Florida.

Casper, J. & Murphy, R. (2002). Workflow study on human-robot interaction in USAR. Proceedings of the International Conference on Robotics and Automation (ICRA) 2002.

Casper, J. & Murphy, R. (2003). Human-robot interactions during the robot-assisted search and rescue response at the World Trade Center. IEEE Transactions on Systems, Man and Cybernetics, Part B, 33(3), 367-385.

Draper, J., Pin, F., Rowe, J. & Jansen, J. (1999). Next generation munitions handler: Human-machine interface and preliminary performance evaluation. Proceedings of the 8th International Topical Meeting on Robotics and Remote Systems, Pittsburgh, PA, Compact Disc.

Endsley, M. (1988). Design and evaluation for situation awareness enhancement. In Proceedings of the Human Factors Society 32nd Annual Meeting, 1, 97-101. Santa Monica, CA: Human Factors Society.

Endsley, M. (2000). Theoretical underpinnings of situation awareness: A critical review. In M. Endsley & D. Garland (Eds.), Situation Awareness: Analysis and Measurement (p. 3-32). Mahwah, NJ: Erlbaum.

Fong, T., Thorpe, C. & Baur, C. (2001). Collaboration, dialogue, and human-robot interaction. Proceedings of the 10th International Symposium of Robotics Research, Lorne, Victoria, Australia.


Jones, D. & Endsley, M. (1996). Sources of situation awareness errors in aviation. Aviation, Space and Environmental Medicine, 67(6), 507-512.

Jones, H. & Hinds, P. (2002). Extreme work groups: Using SWAT teams as a model for coordinating distributed robots. In Proceedings of the ACM 2002 Conference on Computer Supported Cooperative Work (CSCW 2002), 372-381, New Orleans, LA.

Kawamura, K., Nilas, P., Muguruma, K., Adams, J. & Zhou, C. (2003). An agent-based architecture for an adaptive human-robot interface. Hawaii International Conference on System Sciences (HICSS-36), Big Island, Hawaii.

Khatib, O., Yokoi, K., Brock, O., Chang, K. & Casal, A. (1999). Robots in human environments: Basic autonomous capabilities. International Journal of Robotics Research, 18(7), 684-696.

Kiesler, S. & Goetz, J. (2002). Mental models of robotic assistants. In CHI '02 Extended Abstracts (Minneapolis, MN, April), ACM Press.

Kraut, R., Fussell, S. & Siegel, J. (2003). Visual information as a conversational resource in collaborative physical tasks. Human-Computer Interaction, 18, 13-49.

Langle, T. & Worn, H. (2001). Human-robot cooperation using multi-agent-systems. Journal of Intelligent & Robotic Systems, 32(2), 143-159.

Micire, M. (2002). Analysis of the robotic-assisted search and rescue response to the World Trade Center disaster. M.S. thesis, Computer Science & Engineering, University of South Florida.


Mosier, K. & Chidester, T. (1991). Situation assessment and situation awareness in a team setting. Proceedings of the 11th Congress of the International Ergonomics Association, 798-800.

Murphy, R. (2000). Introduction to AI Robotics. Cambridge, MA: MIT Press.

Murphy, R. (2002). Rats, robots, and rescue. IEEE Intelligent Systems, 17(5), 7-9.

Murphy, R. & Rogers, E. (2001). Human-robot interaction: Final report for DARPA/NSF study on human-robot interaction. Retrieved June 5, 2002, from http://www.cse.calpoly.edu/~erogers/HRI/HRI-report-final.html

Nakamura, A., Ota, J. & Arai, T. (2002). Human-supervised multiple mobile robot system. IEEE Transactions on Robotics and Automation, 18(5), 728-743.

Nicolescu, M. & Mataric, M. (2001). Learning and interacting in human-robot domains. IEEE Transactions on Systems, Man & Cybernetics, Part A: Systems and Humans, 31(5), 419-430.

Noldus, L., Trienes, R., Hendriksen, A., Jansen, H. & Jansen, R. (2000). The Observer Video-Pro: New software for the collection, management, and presentation of time-structured data from videotapes and digital media files. Behavior Research Methods, Instruments & Computers, 32, 197-206.

Peterson, L., Bailey, L. & Willems, B. (2001). Controller-to-controller communication and coordination taxonomy (C4T) (DOT/FAA/AM-01/19). Department of Transportation, Federal Aviation Administration, Office of Aerospace Medicine, Washington, D.C.


Prince, C. & Salas, E. (2000). Team situation awareness, errors, and crew resource management: Research integration for training guidance. In M. Endsley & D. Garland (Eds.), Situation Awareness Analysis and Measurement (p. 325-347). Mahwah, NJ: Erlbaum.

Sachs, J. (2000). Using a small sample Q sort to identify item groups. Psychological Reports, 86, 1287-1294.

Scholtz, J. (2003). Theory and evaluation of human robot interactions. Hawaii International Conference on System Sciences (HICSS-36), Big Island, Hawaii.

Severinson-Eklundh, K., Green, A. & Huttenrauch, H. (2003). Social and collaborative aspects of interaction with a service robot. Robotics and Autonomous Systems, 42(3-4), 223-234.

Sheridan, T. (1992). Telerobotics, automation, and human supervisory control. Cambridge, MA: MIT Press.

Sonnenwald, D. & Pierce, L. (2000). Information behavior in dynamic group work contexts: Interwoven situational awareness, dense social networks and contested collaboration in command and control. Information Processing and Management, 36(2000), 461-479.

Stout, R., Cannon-Bowers, J., Salas, E. & Milanovich, D. (1999). Planning, shared mental models, and coordinated performance: An empirical link is established. Human Factors, 41(1), 61-71.

Thrun, S. (1998). When robots meet people. IEEE Intelligent Systems, May/June 1998, 27-29.


United States Federal Emergency Management Agency (1992). Urban Search & Rescue Response Plan. United States.

Wickens, C. (1992). Engineering psychology and human performance. New York: Harper Collins.

Wilkes, D., Alford, A., Cambron, M., Rogers, T., Peters, R. & Kawamura, K. (1999). Designing for human-robot symbiosis. Industrial Robot, 26(1), 47-58.

Woods, D., Tittle, J., Feil, M. & Roesler, A. (in press). Envisioning human-robot coordination for future operations: A roboticist, cognitive engineer and problem holder confront demanding work settings. IEEE Transactions on Systems, Man & Cybernetics, Part C: Special Issue on Human-Robot Interaction.


APPENDICES


Appendix A: Robot Assisted Search and Rescue Communication Coding Scheme (RASAR-CCS)

Category: Sender/Recipient Dyad
Subcategories: Operator-Tether Manager; Tether Manager-Operator; Team Member-Operator; Operator-Team Member; Researcher-Operator; Operator-Researcher; Other-Operator; Operator-Other; Operator-Group
Definitions:
  Operator: individual teleoperating the robot
  Tether manager: individual manipulating the tether and assisting the operator with the robot
  Team member: one other than the tether manager who is assisting the operator (usually by interpreting)
  Researcher: individual acting as scientist or robot specialist
  Other: individual interacting with the operator who is not a tether manager, team member or researcher
  Group: set of individuals interacting with the operator

Category: Statement Form
  Question: Request for information
  Instruction: Direction for task performance
  Comment: General statement, initiated or responsive, that is not a question, instruction or answer
  Answer: Response to a question or an instruction

Category: Content
  State of the robot: Robot functions, parts, errors, capabilities, etc.
  State of the environment: Characteristics, conditions or events in the search environment
  State of information gathered: Connections between current observation and prior observations or knowledge
  Robot situatedness: Robot's location and spatial orientation in the environment; position
  Victim: Pertaining to a victim or possible victim
  Navigation: Direction of movement or route
  Search strategy: Search task plans, procedures or decisions
  Off task: Unrelated or extraneous subject

Category: Function
  Non-operator: Default for statements made by individuals other than the operator
  Seek information: Asking for information from someone
  Report: Sharing observations about the robot, environment, or victim
  Clarify: Making a previous statement or observation more precise
  Confirm: Affirming a previous statement or observation
  Convey uncertainty: Expressing doubt, disorientation, or loss of confidence in a state or observation
  Plan: Projecting future goals or steps to goals
  Provide information: Sharing information other than that described in Report, either in response to a question or offering unsolicited information


Appendix B: Correlation Matrix of Operator Statement Categories

Each row lists the variable's correlations with variables 1 through its own number (lower triangle; diagonal entries are 1.00).

Sender/Recipient Dyad
  1. Operator-Tether Manager: 1.00
  2. Operator-Researcher: 03.0 235.00 1. 0
  3. Operator-Team Member: -. 6 1
  4. Operator-Other: .02 .04 .20 1.00
  5. Operator-Group: .19 .68 -.03 .63 1.00

Statement Form
  6. Question: .14 .82 .59 -.28 .23 1.00
  7. Instruction: -.15 .43 .97** .41 .22 .57 1.00
  8. Answer: .31 .89* .08 -.23 .55 .83 .15 1.00
  9. Comment: .45 .41 .28 .80 .80 .22 .52 .30 1.00

Content
  10. State of the Robot: .65 .58 -.38 -.15 .57 .45 -.26 .84 .35 1.00
  11. State of Environment: -.22 .62 .87 .50 .46 .59 .95* .30 .61 -.12 1.00
  12. State of Information: -.30 .63 .94* .10 .16 .77 .92* .37 .28 -.15 .91* 1.00
  13. Robot Situatedness: .69 .53 .42 .00 .31 .77 .49 .67 .57 .58 .44 .47 1.00
  14. Search: -.04 .44 .94* .44 .26 .58 .99** .19 .60 -.18 .94* .89* .57 1.00
  15. Navigation: .56 .30 -.45 .53 .84 -.08 -.20 .39 .71 .71 -.01 -.34 .28 -.12 1.00
  16. Off Task: -.04 .76 -.27 -.37 .46 .51 -.24 .84 -.07 .72 -.02 .08 .16 -.25 .31 1.00

Function
  17. Seek Information: .28 .75 .46 -.39 .17 .98** .44 .85 .16 .54 .43 .65 .81 .46 -.05 .53 1.00
  18. Report: .35 .68 .66 .43 .60 .71 .80 .57 .82 .35 .83 .72 .84 .85 .34 .09 .65 1.00
  19. Clarify: .28 .65 -.09 -.66 .13 .74 -.13 .88* -.15 .76 -.06 .19 .52 -.11 .09 .83 .82 .22 1.00
  20. Confirm: .72 .44 .06 -.45 .09 .72 .06 .76 .17 .75 .01 .18 .86 .13 .20 .41 .84 .48 .81 1.00
  21. Convey Uncertainty: .26 .17 -.49 .67 .81 -.34 -.25 .13 .62 .44 -.03 -.41 -.07 -.19 .93* .21 -.36 .14 -.19 -.18 1.00
  22. Provide Information: .52 .49 .31 .70 .79 .35 .53 .42 .99* .44 .61 .33 .69 .62 .70 .01 .30 .87 -.01 .32 .56 1.00
  23. Plan: -.01 .65 .83 .52 .53 .64 .94* .39 .73 .04 .98** .86 .60 .95* .13 .00 .51 .93* .02 .18 .04 .75 1.00