USF Libraries
USF Digital Collections

Exploring the human interactivity with a robot to obtain the fundamental properties of materials


Material Information

Title:
Exploring the human interactivity with a robot to obtain the fundamental properties of materials
Physical Description:
Book
Language:
English
Creator:
Christian, William
Publisher:
University of South Florida
Place of Publication:
Tampa, Fla
Publication Date:
2010

Subjects

Subjects / Keywords:
Haptics
Robotics
Interaction
Engineering
Science
Dissertations, Academic -- Mechanical Engineering -- Masters -- USF   ( lcsh )
Genre:
non-fiction   ( marcgt )

Notes

Abstract:
ABSTRACT: This research studies the way in which humans and robots interact with each other. When two humans are working together through a set of robotic devices, do they tend to work together or fight with each other more? In which Cartesian direction do they have the most difficulty? Does fighting drastically affect the performance of the team? Finally, what measures can be taken to promote better cooperation between humans and robots to ultimately allow humans to work just as comfortably with a robotic partner as with a human partner? This research answers these questions and provides an analysis of human-robot interaction. It was found that significant fighting between the subjects does have a negative impact on the performance of the team. Out of the three Cartesian directions, the up-down direction was found to be the most difficult to cooperate in. Although the level of fighting varied greatly among different dyads, two things which greatly assisted in completing the experiments were force feedback and visual feedback. Different methods of feedback were tested, and subject performance in each was compared.
Thesis:
Thesis (MSME)--University of South Florida, 2010.
Bibliography:
Includes bibliographical references.
System Details:
Mode of access: World Wide Web.
System Details:
System requirements: World Wide Web browser and PDF reader.
Statement of Responsibility:
by William Christian.
General Note:
Title from PDF of title page.
General Note:
Document formatted into pages; contains X pages.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
usfldc doi - E14-SFE0004742
usfldc handle - e14.4742
System ID:
SFS0028039:00001




Full Text



PAGE 1

Exploring the Human Interactivity with a Robot to Obtain the Fundamental Properties of Materials

by

William L. Christian

A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Mechanical Engineering
Department of Mechanical Engineering
College of Engineering
University of South Florida

Major Professor: Kyle B. Reed, Ph.D.
Rajiv Dubey, Ph.D.
Don Dekker, Ph.D.

Date of Approval: October 14, 2010

Keywords: Haptics, Robotics, Interaction, Engineering, Science

Copyright 2010, William L. Christian

PAGE 2

Acknowledgements I would like to wholeheartedly thank my family for their support and encouragement throughout my education and throughout my life. With their help and support I was able to complete this research and the rest of my education thus far. Their adv ice, support, and dedication have been extremely beneficial throughout my educational path and I am ver y grateful for it. I would also like to thank my advisor, Dr. Kyle B. Reed for his support, advice, and dedication to this research and its success. He has greatly expanded and developed my knowledge of Engineering, robotics, haptics, and research. I would also like to thank my committee members, Dr. Rajiv Dubey and Dr. Don Dekker, for their support and assistance throughout this research as well as throughout my college career. Lastly, I would like to thank the University of South Florida College of Engineering and the Department of Mechanical Engineering for the knowledge, skills and abilities to perform this research, as well as for the Engineering fundamentals and education which will be immensely valuable throughout my entire career.

PAGE 3

Table of Contents

List of Tables ..... iii
List of Figures ..... iv
Abstract ..... vi
Chapter 1 Introduction ..... 1
Chapter 2 Background Research ..... 6
    2.1 The History of Robotics and Materials Science ..... 6
    2.2 Current Research in Robotics and Materials Science ..... 11
    2.3 Human Interaction with Robots ..... 17
    2.4 The Past, Present, and Future of Robotics in Space ..... 22
Chapter 3 Devices and Design Parameters ..... 26
    3.1 A Robotic Haptic Interface ..... 27
    3.2 Programming and Force Feedback ..... 30
    3.3 Forces, Work, and Motion Redundancies ..... 32
    3.4 Robotic Interaction with Materials ..... 36
Chapter 4 Experimental Protocol ..... 41
    4.1 The Necessary Hardware ..... 41
    4.2 Experimental Setup ..... 43
    4.3 Conducting the Experiments ..... 45
    4.4 Problems and Solutions ..... 52
    4.5 Post-Experiment Analysis and Calculations ..... 54
Chapter 5 Experimental Assessment of Virtual Environments ..... 61
    5.1 Human-Robot Interaction in the Box Interaction Experiment ..... 62
    5.2 The Fighting Factor ..... 69
    5.3 Force Feedback vs. Visual Feedback ..... 77
    5.4 Subject Feedback vs. Numerical and Statistical Analysis ..... 79

PAGE 4

Chapter 6 Results and Observations of Materials Analysis ..... 82
    6.1 Three Force Feedback Modes ..... 83
    6.2 Human-Robot Interaction in the Materials Analysis Experiment ..... 85
    6.3 Obtaining the Fundamental Properties of Materials ..... 90
    6.4 Subject Feedback vs. Numerical and Statistical Analysis ..... 93
Chapter 7 Future Developments, Experiments, and Applications ..... 97
    7.1 Future Developments and Expansions ..... 98
    7.2 Robots as Material Analyzers ..... 101
    7.3 Robotics in the 21st Century ..... 102
    7.4 The Future of Humans and Robots in Space ..... 105
Chapter 8 Conclusions ..... 107
References ..... 110

PAGE 5

List of Tables

Table 1. Interesting material properties and the hardware necessary to experimentally test for them ..... 10
Table 2. The basic scheduled timeline that the subjects followed when taking part in the experiments ..... 44
Table 3. The best-fit line and R² values of the force work vs. torque work data ..... 76
Table 4. The successfulness of the subjects in identifying the unknown materials in the materials analysis experiment ..... 92

PAGE 6

List of Figures

Figure 1. The complete experimental setup during the experiments ..... 42
Figure 2. A 3-D MATLAB representation of the initial position of all ten spheres in the sphere interaction simulation ..... 47
Figure 3. The sphere interaction simulation in progress ..... 48
Figure 4. The box interaction experiment in progress ..... 50
Figure 5. The materials analysis experiment in progress, as seen from the left, front, and right side ..... 51
Figure 6. An illustration of the relative position of the box in box coordinates, the absolute position of the box in world coordinates, and the box-frame forces ..... 55
Figure 7. An illustration of the instant forces and instant torques ..... 56
Figure 8. The average time taken to reach each target box in the box interaction experiment ..... 63
Figure 9. The average offset distance when each target box was reached in the box interaction experiment ..... 65
Figure 10. The average offset angle when each target box was reached in the box interaction experiment ..... 66
Figure 11. The individual forces and torques versus the joint forces and torques in the box interaction experiment ..... 67
Figure 12. The frequency of each fighting factor per simulation in the box interaction experiment ..... 71

PAGE 7

Figure 13. The average time taken to reach the target box per fighting factor in the box interaction experiment ..... 72
Figure 14. The average fighting factor per target box in the box interaction experiment ..... 73
Figure 15. The force work vs. torque work scatter plot for all subjects of fighting factors 1 through 4 in the box interaction experiment ..... 75
Figure 16. The fighting distance between the subjects per Cartesian direction in the materials analysis experiment ..... 85
Figure 17. The fighting velocity between the subjects per Cartesian direction in the materials analysis experiment ..... 87
Figure 18. The actual hardness test results from the materials analysis experiment ..... 91

PAGE 8

Abstract

This research studies the way in which humans and robots interact with each other. When two humans are working together through a set of robotic devices, do they tend to work together or fight with each other more? In which Cartesian direction do they have the most difficulty? Does fighting drastically affect the performance of the team? Finally, what measures can be taken to promote better cooperation between humans and robots, to ultimately allow humans to work just as comfortably with a robotic partner as with a human partner? This research answers these questions and provides an analysis of human-robot interaction. It was found that significant fighting between the subjects does have a negative impact on the performance of the team. Out of the three Cartesian directions, the up-down direction was found to be the most difficult to cooperate in. Although the level of fighting varied greatly among different dyads, two things which greatly assisted in completing the experiments were force feedback and visual feedback. Different methods of feedback were tested, and subject performance in each was compared.

PAGE 9

Chapter 1. Introduction

The general purpose of this research was to test the ability of two human subjects, working together with a set of robots, to interact with different virtual environments, perform materials testing, and acquire and analyze data. Over the last few decades, it has become commonplace for robotic devices to exist in our world. Through the development of these devices, humans must learn to work with and interact with robots in an efficient and effective way. Interacting with a robot is very different than interacting with another human, so it is crucial that we study the ways in which humans interact with each other and try to mimic those behaviors through a robotic device.

As the field of robotics has developed, robots have been built to perform more and more complex tasks. However, most robots are autonomous, and are developed to work on various simple tasks independently, without much, if any, human interaction. One of the greatest goals in the field of robotics engineering is to develop robots which are able to work and interact with humans using some degree of intelligence and skill.

There are several difficulties in achieving this. First of all, a robot has no natural intelligence. It cannot think for itself the way a human can. In fact, even the most sophisticated robots can only do exactly what they are programmed to do. If they are not programmed for something, they cannot do it. However, some degree of artificial intelligence can be achieved by programming the robot to know what to do in multiple

PAGE 10

situations it might encounter, or by programming adaptive control into the device. Adaptive control allows for the fact that some of the system parameters slowly change over time or are random, and allows the device to compensate for it.

Another difficulty is that humans naturally work differently with another human than they do with a robot. When working with another human, there are social factors involved. For instance, a person would not want to embarrass themselves in front of their friends. However, this human factor is typically removed when working with a non-human entity.

Yet another difficulty arises because the experience of working with a robot is quite different than working directly with another person. There is a very different feeling involved when performing a task virtually through a robotic device than when performing a similar task in real life. Even the simple task of two humans using a robotic device to move a virtual object is drastically different than the task of two humans moving a physical object of the same size across a room.

When working with a robot, it is common to encounter virtual objects. A virtual object is an object that a human user can touch, feel, and interact with through a robotic device, even though it is not a physical object. There are several ways that this is achieved. The most common is through the use of force feedback. When a subject pushes into a virtual object, the device applies a force that pushes him back out of the object, just as he would feel if it were a real object. This allows him to feel the shape, stiffness, and texture of the object and interact with it.

Another way to create a virtual object is through visual feedback. It is common for visual feedback to be used along with force feedback, although this is not always the

PAGE 11

case. In this research, visual feedback was used so that the subjects could see the objects they were interacting with in a 3-D view on the computer screen. This is extremely beneficial when interacting with complex virtual environments because it allows the subjects to see the entire environment and their position within it.

Yet another way to create a virtual object is through auditory feedback. This can be used with visual or force feedback, but it does not have to be. The basic concept is that the subject will hear a sound that increases in volume or pitch as he gets closer to the virtual object and decreases in volume or pitch as he gets farther away from it. Also common is the use of sensory substitution, in which one sense is substituted for another sense. For instance, the sense of touch and position may be substituted with the sense of hearing. However, auditory feedback was not used in this research.

If desired, multiple feedback modes can be used simultaneously. In this research, the desired combination was to use force and visual feedback together, because it allowed the subjects to see and feel the objects they were interacting with, which is the closest to real-life interaction.

In this research, each subject was able to practice with two basic virtual environments before completing the first experiment, which involved moving a virtual box towards a set of targets. Then, they completed the second experiment, which involved the testing of five real materials through the robotic devices, using force feedback to measure each material's hardness values. In the virtual environments, both force and visual feedback were used, and in the materials analysis experiment, only force feedback was used. After measuring a set of

PAGE 12

hardness values for each material, the subjects had to try and identify them, given a table of ten materials and their hardnesses.

A total of 20 subjects participated in this research, working in two-member pairs called dyads, for a total of ten independent experiments performed over a five-week period. Of the 20 subjects tested, 14 were male and six were female. Ten of the subjects stated in the pre-experiment survey that they had worked with a robotic device of some kind before, while the other ten stated that they had not. Through observation during the experiments, it was noticed that those who had not worked with a robotic device before approached the virtual environments slightly more cautiously than those who had.

This research demonstrated how well humans and robots interact with each other in performing experiments and acquiring data. The subjects had to adjust to the idea of working with each other through a series of robotic devices. The robots used were a set of four Phantom Omnis, developed by SensAble Technologies. The Omnis are excellent haptic devices, and can provide fast and accurate force feedback, allowing them to easily render complex virtual objects and environments (SensAble Technologies, 2010).

Robotics technology has many applications in the scientific community. One field of study where robotics has greatly enhanced the scope of knowledge is space science and engineering. Space probes have been sent all over the solar system to study other worlds. These robots must be programmed to think through the many complications and problems that will commonly arise throughout their journey.

Materials science can also benefit from robotics technology. In fact, the two go hand in hand. Robots can go places that no human can go, allowing them to perform tests on materials that no human could ever get near. Humans can then remotely operate

PAGE 13

5 the robot s and interact with t hem from a distance In some cases, humans must program the robot s in advance and allow them to work on their own if the time lag becomes too great for real time interaction For example, if a Mars rover discovers an interesting rock, it must be able to identify the object of interest and then perform tests on it to determine what materials it is made of and how it formed, completely on its own, using commands sent by missio n control several hours earlier. In fact, the Mars rovers Spirit and Opportunity have already explored a combined 24 kilometers of the Martian surface over the last six years, analyzing rocks and other interesting materials (Bentley, 2009) Thi s demonstr ates that the success of a mission such as this one depends on the simultaneous use of the fields of robotics and materials science. The ultimate goal of this research is to learn how a human robot team could someday travel in space and cooperatively stud y materials of extraterrestrial origin. Whether it involves studying an asteroid, a comet, or rocks on the Moon or Mars, the fields of robotics and materials science play a vital role in the success of such a miss ion. Scientists are always building more and more complex robots which can study complicated materials and alloys. In the future, humans will be able to travel to other worlds with these robots. Humans will work directly with them, drastically increasing the speed at which discoveries are made. However, before this can occur, we must learn how humans and robots interact, and to determine their strengths and weaknesses. Then, the weaknesses can be corrected and the strengths can be amplified. In turn, this research is very interesting and has a lot to offer to the scientific community.

PAGE 14

Chapter 2. Background Research

The application of robotics in materials science is central to this research. The major applications in this field are to study the way that a human-robot team can interact with virtual objects in a virtual environment, and how a human-robot team can interact with a set of materials, perform tests on them, acquire data, analyze that data, and ultimately determine their identity. This chapter discusses the history of the fields of robotics and materials science, some theory behind these fields, and some current research in them.

2.1 The History of Robotics and Materials Science

Over the last few decades, machines have greatly enhanced the speed and efficiency at which tasks can be performed. As better, faster, and smarter machines have been developed, it has become possible to study things which were not possible to study in detail previously. Eventually, the technology was developed to build a programmable machine which was capable of independently performing a task and relaying the results back to a human user. This was the beginning of the field of robotics.

The Czech playwright Karel Capek first used the word "robot" in his play titled "R.U.R.", in which he illustrated robots as mechanical machines which on the outside looked similar to humans, but could work endlessly and tirelessly, eventually turning against their masters to rise up and destroy the human race

PAGE 15

7 (Murray et al., 1994). This has been a popular science fiction concept over the years, and has been used in many books and movies. Real robots are indeed mechanical machines which can work endlessly and tirelessly, at least until the materials composing the robot fail due to fatigue or overheating. However, t hey do not typically resemble humans, although there is some element of arti ficial intelligence in which robots can exhibit. However, a robot will only do exactly what i t is programmed to do, and that i s it. If it makes any decisions on its own, it does that because it is programmed to do so. During the 1940s and 1950s, the simplest of true robots were developed. These early robots consisted of what was essentially a mechanical manipulator, otherwise known as a teleoperator, or a telemanipulator. In essence, a teleoperator is an electronic and/or mechanical system made up o f a master robot and a slave robot. The master user completely controls the slave robot using a master robot or controller and a communications device. The slave robot then uses the information sent from its user to work within its environment, providing feedback to the user through the master robot (Misra, Okamura, 2006). The first of these teleoperators was developed at Argonne and Oak Ridge National Laboratories. They were very simple linkage mechanisms which were built for the purpose of handling ra dioactive materials. By the late 1950s, the first computers had been developed to the point where computer numerically controlled (CNC) machines had been developed for manufacturing purposes. With this technology, CNC lathes and CNC milling machines were in research and development.

PAGE 16

8 CNC technology was then developed in the field of robotics as well. The master and slave teleoperators could then be replaced with reprogrammable CNC controllers. Once CNC robots had been developed, they could be programmed to perform simple tasks. Then, it was necessary to develop a programming language which could be used for programming CNC robots. The first such language, called WAVE, was developed at Stanford in 1973. This language formed the basis for programming a robot with more sophisticated commands (Murray et al., 1994). Now that a programming language had been defined, robots were able to perform more and more difficult tasks and experiments. Throughout the 1980s and 1990s, more sophisticated programming languages were developed. The C language was eventually developed, and then C++, both of which allowed for the programming of r obots to perform very complicated tasks, making them very useful to the scientific community. Today, C++ is one of the most common programming languages used in robotics. Just as the field of robotics has advanced greatly over the last 100 years, the fie ld of materials science has as well. We have discovered many more elements, and learned a great deal about their properties. We have learned about the way in which atoms interact with each other, and how to modify compou nds to improve their properties As we have learned more about materials, we have discovered that the strength of a material is not constant. The strength of the same alloy can vary tremendously, even by more than an order of magnitude, depending on a variety of factors. Some of these f actors include the size and shape of the alloy, the type of loading, and the cracks, voids, or other imperfections present ( Pitchumani et al., 2004). For instance, a very long and very sharp crack produces a large stress concentration factor, which can ca use the

PAGE 17

9 material to fail even at a relatively low stress. As the crack continues to propagate, the stress concentration factor can rise as high as 100 or even greater. Therefore, if the ultimate tensile strength without the crack was 50,000 psi, with the crack present, failure could occur at 500 psi or even less. Another major factor which contributes to the strength of a material is how it was formed, and what processes were used to make it. For example, a cast iron part will be much weaker than a forg ed iron part. Furthermore, an annealed part will be weaker, although more ductile, than a strain hardened part. The strain hardened piece will be stronger, although more brittle. As a result, the annealed part will actually have the higher fracture toug hness (Pitchumani et al., 2004). In recent years, robots have been used to study materials to learn of their properties and how to manipulate them. A robot can test a material for its yield and ultimate tensile strengths, obtain a stress strain curve, an d find material properties such as This can be done by applying a set of known forces to the material by the robot, and then reading a strain gauge attached to the material to get the strain The stress can then be ca lculated from the force, allowing for calculation of calculated as well. There are many significant properties which can be obtained by a robot, many of which can then be used to calcula te other properties or determine important characteristics of the material. Table 1 summarizes these properties, the hardware necessary to test them, and some general notes on what they are and how they are useful.

PAGE 18

10 Table 1. Interesting material properties and the hardware necessary to experimentally test for them. Desired Property Necessary Hardware Notes Hardness Hardness Tester A robot can test softer materials much easier than harder materials, so this property is essential to obtain. Density Scale and Beaker A water filled beaker can be used to find the volume and a scale can be used m / V Stress Tensile Tester For a known applied force and cro ss sectional area, F / A Strain Strain Gauge The strain can be measured directly with a strain gauge. Elastic Modulus Strain Gauge Once you know the stress and strain, E = / 2 Strain Gauges A strain gauge in the x direction and another in the z direction will give you x z Yield Strength Tensile Tester, Strain Gauge The point at which the material begins to yield. For brittle materials, fracture occurs shortly hereafter. Tensile Strength Tensile Tester The highest point on the engine ering stress strain curve. Fracture Stress Tensile Tester The point on the en gineering stress strain curve in which fracture occurs.

PAGE 19

Table 1. Interesting material properties and the hardware necessary to experimentally test for them.

Desired Property | Necessary Hardware | Notes
Hardness | Hardness Tester | A robot can test softer materials much more easily than harder materials, so this property is essential to obtain.
Density | Scale and Beaker | A water-filled beaker can be used to find the volume, and a scale can be used to find the mass; the density is m / V.
Stress | Tensile Tester | For a known applied force and cross-sectional area, the stress is F / A.
Strain | Strain Gauge | The strain can be measured directly with a strain gauge.
Elastic Modulus | Strain Gauge | Once you know the stress and strain, E = stress / strain.
Poisson's Ratio | 2 Strain Gauges | A strain gauge in the x direction and another in the z direction will give the x and z strains, whose ratio gives Poisson's ratio.
Yield Strength | Tensile Tester, Strain Gauge | The point at which the material begins to yield. For brittle materials, fracture occurs shortly thereafter.
Tensile Strength | Tensile Tester | The highest point on the engineering stress-strain curve.
Fracture Stress | Tensile Tester | The point on the engineering stress-strain curve at which fracture occurs.
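As an illustration of how the raw readings in Table 1 turn into material properties, the short program below performs the conversions for one specimen. It is a sketch only: the masses, forces, areas, and strain readings are made-up example values, not data from the experiments described in this thesis.

```cpp
#include <iostream>

int main() {
    // Hypothetical measurements for one specimen (illustrative values only).
    double mass_kg        = 0.0270;    // from the scale
    double volume_m3      = 1.0e-5;    // from water displacement in the beaker
    double force_N        = 500.0;     // known force applied by the tester
    double area_m2        = 5.0e-5;    // cross-sectional area of the specimen
    double strain_axial   = 2.0e-4;    // strain gauge along the loading (z) axis
    double strain_lateral = -6.0e-5;   // strain gauge across the loading (x) axis

    // Conversions listed in Table 1.
    double density = mass_kg / volume_m3;            // rho = m / V
    double stress  = force_N / area_m2;              // sigma = F / A
    double modulus = stress / strain_axial;          // E = sigma / epsilon
    double poisson = -strain_lateral / strain_axial; // nu = -eps_x / eps_z

    std::cout << "Density [kg/m^3]: " << density << '\n'
              << "Stress  [Pa]:     " << stress  << '\n'
              << "Modulus [Pa]:     " << modulus << '\n'
              << "Poisson ratio:    " << poisson << '\n';
    return 0;
}
```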

PAGE 20

12 over the past couple of decade s, there have been significant advancements, changes, and developments in the field of robotics. This has opened up the opportunity for robot s to become involved in a diverse range of scientific fields, including medicine, healthcare, logistics, manufacturing, and material analysis. It is becoming more and more apparent that robotics will greatly influence the world over the next 50 years, and there will be many exciting new inventions along the way (A Roadmap for US Robotics, 2009). As robots have advanced, they have begun to greatly influence the field of materials science. It has become possible to test materials using robots, to determine their properties, their history, and to learn of their imperfections. With this data, we have learned how different materials behave when put under stress, and further developed our knowledge base on how to manipulate and form them with other materials t o make stronger, tou gher, and more durable alloys. One area of materials research which has recently involved robotics has been in the study of human tissues. Since tissues are very soft materials, a robot can measure their properties relatively easily, by only applying a very small amount of force. It is convenient to work with soft materials because it requires a smaller force to deform these material s by a measurabl e amount, and many robots can easily deliver this range of force without deforming themselves or overheating their motors One such experiment involved comparing the force feedback for linear dynamic tissue models versus nonlinear dynamic tissue models. Up until this point, most researchers generally as sumed a linear elastic behavior for the modeling of tissues under stress (Misra et al., 2007). This seemed to be a reasonable assumption, because most other materials have an approximately linear stress strain curve in the elastic region.

PAGE 21

13 arch, robotic manipulators were used to test soft tissues using a nonlinear dynamic model. When the nonlinear model was applied, it was found that the Poynting effect developed when a shear force was applied to the tissue. T he Poynting effect is the crea tion of large differential normal stresses or strains as a result of shear stresses applied to a highly strained material. These normal stresses were not present in the linear model. As a result, there was a significant measurable difference in the force feedback for the linear and nonlinear models. T he largest difference in the maximum reaction forces between the two models was 51.2%. This demonstrated that soft tissues do not behave linearly in the elastic region (Misra et al., 2007). Another experim ent involving robotics in the testing of human tissues studied a robotically assisted teleoperated surgery. In this experiment, the surgeon manipulated a while perform ing tests on patients using the da Vinci surgical system (Yamamoto et al., 2008). relay that information back to the surgeon through various methods of force and visual f eedback at a distance (Yamamoto et al., 2008). During the teleoperated surgery experiment, various elastic tissue properties were measured, including elastic deformation and reaction to applied stresses, and were then compared to a general model. However, it is extreme ly difficult, if not impossible, to create a perfect mathematica l model for a real human tissue, although there are several good models that can approximate the dynamic behavior of tissues under stress. One of the major difficulties in performing this experiment was the lack of adequate h aptic

PAGE 22

14 feedback to the surgeon. As a result, the surgeon had to rely too heavily on visual cues such as tissue deformation to make a guess as to how much force the robot was actually exerting on the tissue (Yamamoto et al., 2009). h as taught us a lot more about soft human tissues, and how organic materials react to applied forces. However, there has also been a significant amount of research dealing with nonorganic materials as well. For the case of nonorganic materials, temperatures, forces, and pressur es which could never be withstood by any organic material are commonly dealt with. For nonorganic materials at high temperatures, a very significant deformation over time occurs. This deformation is called creep. At high enough temperatures, typically a t least one third the melting point in absolute temperature, an applied force which is smaller than the yield strength at that temperature can cause the material to creep. When a material creeps, its strain increases by a certain amount per unit time, unt il it eventually fails. As the temperature of the material increases, the creep rate increases exponentially. Creep can occur in nearly all materials, including ductile and brittle solids, polymers, and amorphous solids. Brittle solids will fail much qu icker when creep occurs, while ductile solids may creep for a very long time before failure occurs. Even ceramic matrix composites can undergo creep if left at an elevated temperature for a long period of time (Sodanapalli, Coon, 2002). There are other s ignificant methods of failure over time as well, including fatigue and corrosion. Fatigue failure can occur after a certain number of stress cycles. The number of cycles, called the fatigue life, can range from less than 1,000 to more than 500

PAGE 23

15 million. Corrosion occurs due to the chemical reactions of a material with its surroundings. For instance, water and oxygen will cause iron to rust, which is a very common form of corrosion. As a result, it can be very difficult to predict the properties and beha viors of materials with reasonable accuracy. Typically, models in the form of regression trees must be used to find a good approximation of how a material will behave. This is because material behavior is so complex that few linear models work, so nonlin ear regression techniques must be used (Li, 2006). Nevertheless, robotics has made the testing process much easier. This has resulted in a significantly increased amount of data which can be obtained. A robot can test how a material reacts to applied st resses at defined initial conditions by performing tensile tests, compression tests, bending tests, and torsion tests. The initial conditions themselves consist of how the material was formed, such as by annealing, cold working, casting, forging, etc., an d what defec ts are present in the material, such as voids, cracks dislocations, or vacancies Varying the initial conditions can drastically change the material properties, even though the material itself remains the same (Tryland et al., 2000). Another interesting experiment which has recently been performed involved using nano robots to manipulate nanomaterials at the nano scale. During these experiments, a three degree of freedom nanomanipulator was used to manipulate extremely small samples of mater ial with extremely high precision. These nanomanipulators are quite impressive, able to move a linear distance of 12 millimeters with a precision of 0.25

PAGE 24

16 nanometers. They are also able to move an angular distance of 120 with a precision of 0.02 seconds of arc (Saeidpourazar, J alili, 2008). One method of testing these nanomanipulators involves force scaling, in which general tests are scaled to larger dimensions before the very small scale tests are performed. Such precise robots can contribute signific antly to the field of nanomaterials and nanotechnology. They may even be able to develop nanomaterials which may someday be used to produce alloys of enormous strength, which could then be used to build structures which are nearly unimaginable today. T here has been and is currently a significant amount of research being done in the fields of robotics and materials. However, in order to make the most out of robotics technology, a huma n user must be able to work directly with a robotic device Many of t he experiments discussed in t his section involved direct human interaction, especially when the human user worked with a teleoperator system, either at the macro scale or at the nano scale. Human robot interaction is therefore very important in robotic ma terials analysis, and the field of robotics in general. Due to this, there is also a lot of research exploring human interactions with robots. Much of this research consists of ex amining the behavior of human human teams and comparing them to human robot teams. Some of these experiments involve materials testing, wh ile others are more focused on h aptics research, although all of the experiments contribute to the ultimate goal of improving the in teractions between humans and robots

PAGE 25

17 2.3 Human Interac tion with Robots It is very important in the field of robotics that the human user and the robot are able to interact with each other and to work together as a team. This applies for any robotics testing, including materials research. The user must be able to give a series of commands to the robot, the robot must then collect the appropriate data, make some basic interpretations of that data, and then relay the data and the interpretations back to the user. In order for this to be done effectively, the human user must be able to successfully work with a robot as a member of a human robot team. Often, several humans are working with several robotic devices, so cooperation between all members is essential. It is important that the robots themselves are designed to be as human like as possible. They must have sensors which can detect applied forces and motion. Then, they must be programmed to respond to the human users based on their sensors and the data that they collect. I t is also important in resea rch to determine the most suitable human characteristics which allow for cooperative work between two humans, and then use these characteristics to design a robot which is capable of smooth, humanlike movement (Baker et al., 2006). One of the original pro jects of this nature is the work of Reed and Peshkin. Reed and Peshkin performed research testing the ph ysical collaboration of human human teams and human robot teams, and then comparing the results (Reed, Peshkin, 2008). Reed and Peshkin state that mos t human human interactions are controlled by vision and sound. Humans tend to mimic the actions they see done by another person.

PAGE 26

18 They also state that humans are very capable of adjusting to changes in their environment (Reed, Peshkin, 2008). Also signif icant is the physical interaction between humans and robots. Whenever two members interact with each other, whether it is two humans or one human and one robot, some level of fighting will occur. This is because it is impossible for perfect cooperation t o occur, as there is always some element of resistance or human error. The level of cooperation can be increased by designing the robot to be more human like. To do this, the robot must possess many of the same qualities as humans. However, a major chal lenge is in design ing a robot which is capable of adjusting to the changes in its environment through adaptive control. It must have force sensors with a fast sampling rate, and be programmed to react quickly when an appli ed force changes. Another challenge arises from redundancies in the motion. Redundancies occur when there is more than one way to perform a task, and always exist in any h aptic interaction involving two or more members. Furthermore, one would expect that the more members present in the group, the more prone to fighting the group is. When the human members and the robotic members are fighting with each other, the efficiency of the interaction is greatly reduced. table with a curtain in the middle. One person operated each side of the table. The two participants had to move a lever towards a projected target in a one degree of freedom environment, and their performance was measured by the time it took them to suc cessfully reach the target. The two participants could not speak to or see each other, so communication was

PAGE 27

19 restricted to the forces and motions transmitted through the handles (Reed, Peshkin, 2008). In their first experiment, a single individual operate d the device alone, with nobody on the other side of the table. In the second experiment, two humans operated the table, one on each side. In the third experiment, one of the humans was replaced with a robot ic partner. In some cases, the remaining human was told he was working with a robot, while in other cases, he was led to believe he was still working with a human (Reed, Peshkin, 2008). The human human teams performed the task 8.5% faster than the solo individuals, even though many of the human human teams believed that their partner actually slowed them down However, the human robot teams where the human believed he was working with another person performed 0.9% faster than the solo individ uals, while the human robot teams where the human knew he w as working with a robot performed 3.9% slower than the solo individuals (Reed, Peshkin, 2008). research illustrate that there are significant psychological issues which m ust be addressed in the human robot teams. When th e human user knew he was working with a robot, he performed slower than when he thought he was working with another human, even though all other variables remained the same. These results demonstrate that, as mentioned in chapter one, there are important social factors involved in the human human teams which are not present in the human robot teams. These social factors exist primarily because humans naturally want to perform better if they think they are being watched and evaluated. It can be easy for

PAGE 28

20 s omeone to not care quite as much if they know their partner is a nonhuman entity. This is known as social facilitation, where an individual is motivated to perform better on simple tasks when being watched by someone else than if they were alone, and is a social obstacle which must be overcome if humans are to work with robots on a regular basis. Even more recently, there have been some further experiments which have expanded upon the work of Reed and Peshkin. Another recent experiment by Kelso involved virtual partner interaction, which is the study of the real time interacti ons between a human and a robot This research explored how humans coordinate with human like robots, with a primary focus of studying the continuous dynamics o f interaction between a human dynamics were very similar to that of a human (Kelso et al, 2009). measured their performance in working with a virtu al partner. It consisted of two initial scaling trials, lasting 200 seconds each, and 32 experimental trials, lasting 100 seconds each. The human subjects were told to make smooth, rhythmic movements with their right index finger for the duration of the experiment, and to not stop this motion at any time until the tests was complete. Their motion was rather slow, so that fatigue would not have a strong impact on the results (Kelso et al., 2009). ayed to a virtual partner. The effectiveness of the information flow between the human subject and the virtual partner was measured. It was noted that there was a weakness in the coupling of the virtual partner with the human subject (Kelso et al., 2009)

PAGE 29

21 Another recent experiment involved the collaboration of a human robot team in the precise positioning of a three dimensional flat object on a target. Both the human and the robot could exert forces and torques on the object in a six degree of freedom e nvironment. However, due to the lack of range sensor s on the robot, the human had to be the primary decision maker during the object manipulation In a three dimensional, six degree of freedom working environment, it is generally more challenging to prop erly position the object on the target than in a one dimensional, one degree of freedom environment. However, the robot was able to assist in the human robot interaction in order to successfully accomplish the task (Wojtara et al., 2009). It has also bee n of research interest to study the roles played by each member of a human human team and a human robot team. In two member teams, or dyads, there is an executer and a conductor. The role of the executer is primarily contributing to the execution of the task, while the role of the conductor is to make decisions and to control the motion. In a human robot team, the human user is typically the conductor and the robot is typically the executer ( Stefanov et al., 2009 ). All of these research studies have dem onstrated that it is possible for humans and robots to interact with each other successfully. Therefore, it must also be possible for a human robot team to be able to work together to evaluate material properties as well. The field of robotics is one of the most exciting branches of science to develop over the last 50 years and successful human robot interaction is crucial for its success. Therefore, it is crucial that human robot teams function just as well as human human teams. With humans and robots working together, the possibilities are limitless, and the discoveries made will be great.

PAGE 30

22 2.4 The Past, Present, and Future of Robotics in Space One of the most exciting applications of robotics is the use of robots in space exploration. For decades, robots have been sent out into the solar system far beyond where humans could possibly go. These robotic pioneers have been sent to other worlds to study t hem, test their materials, and look for signs of life. The first robot to travel into space was the Soviet satellite Sputnik I, which launched in October 1957. Today, there are hundreds of satellites in Earth orbit. There have been dozens of robotic exp lorers sent to other worlds. In fact, robots are the only manmade objects which have ever travelled beyond the Moon. However, someday, humans will accompany these robots on their adventure, and successful human robot interaction will be very important to the success of the mission. One major characteristic which all robots used for space exploration must possess is mobility. If a space probe in not mobile, then it is essentially stuck on the same surface forever, and has very limited scientific potentia l. However, if it is mobile, then it can move arou nd and study a much larger area. As a result, mobile robots are extremely important in planetary exploration. They are capable of taking measurements over a large area, and the y can go wherever the human scientists want them to go. For instance, if there are some interesting foothills one hundred meters away, mission control can simply program the robot to drive over there and begin performing some research (Schilling, Jungius, 1996) However, Schilling and Jungius state that there are several challenging design requirements for space faring robots which are not present in industrial or commercial mobile robots. The reason for this is that space probes must work in extremely harsh

PAGE 31

23 conditions, including working in a vacuum, dealing with low or zero gravity, and dealing with temperature extremes not seen here on Earth. Furthermore, the robot must be as lightweight and compact as possible, be able to work for months or years at a time on a very limited pow er supply, deal with communication time lags of anywhere from a few minutes to several hours, and endure a hibernation period of anywhere from a few months to several years during interplanetary travel (Schilling, Jungius, 1996). Nevertheless, these probl ems have been more or less overcome in the last 50 years. There is nothing which can be done about the harsh working conditions these robots have to face, so we just have to deal with them. Due to the long time lag, the human scientists typically send a series of commands to the robot at a time, which give s the robot work to do for several more hours. However, this means that the human robot interactions are even more critical. Since a real time teleoperator system is not possible, the scientists must f ully understand the robot and its capabilities. Fortunately, most space probes have a camera, so they can see their surroundings and take pictures providing visual feedback to the scientists operating them They all have force and range of motion sensors, allowing them to facilitate the exploration of the new world around them. Today, there are several robotic space missions underway, including the Mars rovers Spirit and Opportunity, the Saturn orbiter Cassini, and the Pluto fly by probe New Horizons, which is currently in route, and will arrive at Pluto in July 2015. However, as exciting as the prospects of robotics on other worlds is, another very important prospect in the application of robotics on the International Space Station. The Int ernational Space Station is a massive space based research facility in low Earth orbit. However, there are many engineering limits and cost constraints which limit

PAGE 32

24 the amount of payload, communication bandwidth, and number of astronauts the space station is able to carry. As a result, automated robots will be essential for smooth application of the space station in the near future. A n advanced type of robonaut, which is a humanoid type of robot specifically designed to perform more delicate tasks on the space station, could be implemented for such operations (Bluethmann et al, 2003) As the space station nears completion, there is a large amount of external maintenance which needs to be done much of which is too dangerous to safely perform or would simp ly place an overbearing workload on the astronauts Robotic assistants or robonauts, could drastically cut back on the number of human spacewalks necessary. Spacewalks are quite dangerous and expensive to perform However, with several robonauts in pla ce, the astronauts on board the station could directly interact with the m in a human robot team in order to get the job done, from safely inside th e station (Pippo et al., 1998). It is clear that the field of robotics plays a vital role in solar system exploration and beyond. This is partly due to the rapidly advancing field of electronics. While the equipped with modern technology and computers. This is one reason why robotic space exploration has been so popular over the years. It is much cheaper and far less dangerous to send a robot ic explorer to another planet than to go ourselves even though current space probes must deal with large time lags when interacting wit h human scientists back on Earth (Launius, McCurdy, 2007). However, one day, humans will travel beyond the Moon, and it will be direct human robot teams exploring other worlds together. A full scale mission to Mars may

PAGE 33

25 very well consist of a team of six astronauts and as many as a dozen robo nau ts. These robots will be able to work out in the environment when radiation or temperature levels do not permit the astronauts to go outside. Each astronaut may have two robo nau ts who assist him in performing expe riments, acquiring data, and making discoveries (Bluethmann et al, 2003) The majority of the work done will be in the field of materials science. Whether it involves testing new alloys, testing soil and atmospheric samples, or looking for signs of life, the principles of effective materials testing must always be utilized. When this occurs, who knows what amazing discoveries are waiting to be made? The field of robotics has been very significant indeed to the modern world. There have been all sorts of research in this field, from studying the collaboration of human robot teams, studying how robots can be used in the analysis of materials, and even sending robots to other worlds to perform research where no man has gone before There are also many unexplored parts of the Earth, such as the deep ocean, where humans and robots will go to study. There will be many fascinating new materials to study, and many exciting discoveries waiting to be made. Therefore, it is my goal to e xpand upon the current research, and to learn how to improve human robot interactions in different ways. It may prove extremely valuable someday when humans and robots travel in space together, and must work together to make discoveries. It will be the b eginning of the development of a human robot partnership which will last throughout the century, and will allow us to grow, to develop, and to explore.

PAGE 34

Chapter 3. Devices and Design Parameters

The most important devices for any robotics research are, of course, the robots themselves. To get the best results in a human-robot interaction, it is wise to select a robotic device with a fast servoloop frequency, preferably around 1,000 hertz, which is necessary for accurately rendering a haptic environment. It is also wise to select a device that is user friendly and comfortable, and that is not capable of exerting dangerous levels of force back to the subjects. Furthermore, to make the experiments themselves practical, it is a good idea to select a robot small enough to sit comfortably on a desktop. There are several reasons for this. First of all, large robots are very expensive, require considerably more power to operate, and can generally apply large forces back to the subjects. There are many research applications in which large robots are essential, but for this research a small desktop device is better. Furthermore, a large stylus is much heavier, causing fatigue in the subjects much more quickly. The robot selected for this research was the Phantom Omni, developed by SensAble Technologies (SensAble Technologies, 2010). This selection was made due to the availability of the devices, their cost, and the abilities of the Omnis in haptic interaction. They are small and lightweight enough to be safe and reliable for human-robot interaction, have a fast servoloop frequency of 1,000 hertz, and can be programmed using the C++
language to render virtually any small-scale haptic environment or force feedback simulation.

3.1 A Robotic Haptic Interface

A total of four Phantom Omnis were used, creating a six-member human-robot team consisting of two human subjects and four robots. The Omni is a member of the SensAble Phantom family of haptic devices, and it allows its user to simulate many different haptic interfaces. It has many specialized features, including motion in six degrees of freedom, a compact and portable design, a rubber stylus and inkwell for convenient and easy calibration, and two switches on the stylus which can be programmed to input or output data from the Omni (SensAble Technologies, 2010). The Phantom Omni is an impedance device (SensAble Technologies, 2010). There are two different types of robotic devices: impedance devices and admittance devices. An impedance device reads positions, velocities, and accelerations, and outputs a force back to its user. This allows impedance devices to be backdrivable and to generate inertia, making it possible for them to render very realistic force feedback. This is a major advantage in haptics, as an impedance device can easily determine from its position whether it is interacting with a virtual object or not. If it is not, no force is applied back to the user. If it is, then a force pushing the user back out of the object is applied (Siciliano, Khatib, 2008). An admittance device is just the opposite of an impedance device: it reads forces but outputs a position back to its user. Because of this, an admittance device can easily be programmed to follow a predefined path based strictly on the positions
involved. Programming them to follow a parabolic path, a circular path, or a series of more complex paths is therefore very straightforward. Although it is possible to program an impedance device to follow a predefined path, it must be done by applying a force in the direction of the desired position, set up like a spring between the actual and desired positions. This moves the stylus toward the desired position. The motion is fairly smooth and accurate at slower velocities, but it is still not nearly as good at this task as an admittance device would be. Admittance devices have the major disadvantage in haptics that it is difficult for them to render virtual objects or to generate adequate force feedback. The only way it could be done would be to program the device to respond to applied forces by moving to a new position that would feel similar to interacting with a virtual object. Even then, the effect would not be nearly as realistic as that which could be generated by an impedance device (Siciliano, Khatib, 2008). Fortunately, the Omni is an impedance device, which makes it an excellent tool for generating a haptic environment. For instance, consider the most simplistic type of haptic environment, a virtual wall at x = 0. As long as the user's x position is greater than zero, no force is applied. As soon as the x position reaches zero, the device begins to apply a force in the positive x direction, pushing the user out of the wall. If the user continues to push in the negative x direction, this force increases in proportion to the penetration into the wall, until the robot reaches its maximum possible force pushing the user back out of the wall.
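To make this concrete, the following is a minimal sketch, written in plain C++ and independent of any particular device library, of the one-dimensional wall force just described. The function name, the stiffness value, and the force cap are illustrative assumptions, not the values or code used in this research.

#include <algorithm>
#include <cstdio>

// One-dimensional virtual wall at x = 0 for an impedance device.
// Returns the force, in newtons, to command along the positive x direction.
double wallForce(double x_mm)
{
    const double k = 0.5;           // illustrative stiffness, N/mm
    const double maxForce = 3.3;    // saturate at the device's peak force

    if (x_mm >= 0.0)
        return 0.0;                 // free space: no force applied
    double f = -k * x_mm;           // force grows with penetration depth
    return std::min(f, maxForce);
}

int main()
{
    std::printf("force at 5 mm of penetration: %.2f N\n", wallForce(-5.0));
    return 0;
}

The saturation step reflects the fact that a real device such as the Omni cannot exceed its maximum output force, so deep penetrations are rendered no more stiffly than shallow ones once the cap is reached.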

The Omni has a maximum workspace of 320 mm x 240 mm x 140 mm. It has a good position resolution (0.055 mm) and is fairly lightweight (1.786 kg), which adds to its portability. It is capable of exerting a maximum force on the user of 3.30 Newtons (0.742 lb), although a continuous force in excess of 0.88 Newtons (0.198 lb) for an extended period of time can cause overheating or even damage to the motors and the device. Finally, it can produce a maximum stiffness of 1,260 N/m in the x direction, 2,310 N/m in the y direction, and 1,020 N/m in the z direction. In the Omni's workspace, the x direction refers to the left-right direction, the y direction refers to the up-down direction, and the z direction refers to the forward-backward direction (SensAble Technologies, 2010). The only real limitations of the Omni are the small workspace within which it can work, its maximum force limit of 3.30 N, and the fact that more than two Omnis in series is an unsupported configuration, which could theoretically lose calibration after an extended period of time. However, due to its default servoloop frequency of 1,000 hertz, the Omni refreshes its rendered force every millisecond, allowing for smooth and continuous feedback, and frequent calibration during the experiments prevents unwanted losses of calibration. This allows it to simulate realistic virtual surfaces and environments (SensAble Technologies, 2010). These features make it well suited for this application and for many other haptics research projects. However, before any experiments could be done, extensive C++ programming had to be done to actually create the virtual worlds that the subjects would interact with. It is this code that commands the Omnis to generate the force feedback necessary for the simulations.
3.2 Programming and Force Feedback

Before one can begin programming the Phantom Omnis to render virtual environments, one must first understand the workings of the C++ programming language, and more specifically how to call up an Omni and tell it to render a certain force function. There are several parts to a successful program. First of all, at the center of any C++ program is the main function. The main function is the first function called, so it is crucial that the instructions for initializing the Omnis be placed within it (Schildt, 2003). Generally, the main function will initialize each Omni, enable it to render forces, start the servoloop scheduler, display instructions on the screen for the users, run the callback function and main application loop, and finally shut down and turn off the Omnis when the program is terminated by the user. Two more separate functions are necessary: the main application loop and the servo callback function. The general purpose of the main application loop is to detect and interpret keypresses, and keyboard instructions can be programmed in this function as well. The servo callback function is called upon each servoloop tick, or every one millisecond. This function explicitly defines the force function for the Omnis to apply back to the user; it is within this function that the programmer must actually define the virtual environment, and the Omni's position and force data are updated each millisecond when this function is called.
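The listing below is a minimal sketch of that three-part structure. It deliberately does not use the vendor's API: the device-specific initialization, scheduling, and force-output calls are replaced by placeholder comments, and a simple timed thread stands in for the 1,000 hertz servoloop scheduler, so the sketch only illustrates how the pieces fit together.

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

std::atomic<bool> running(true);

// Stand-in for the servo callback: called once per servoloop tick (about 1 ms)
// to compute the force the Omni should render during that tick.
void servoTick()
{
    // 1. read the current device position (a device API call would go here)
    // 2. evaluate the force function for the current virtual environment
    // 3. command the resulting force back to the device (a device API call)
}

// Stand-in for the main application loop: interprets keyboard input and
// updates the on-screen instructions while the servo thread runs.
void applicationLoop()
{
    std::cout << "Simulation running; press Enter to quit." << std::endl;
    std::cin.get();
    running = false;
}

int main()
{
    // The real main function initializes each Omni, enables force rendering,
    // and starts the 1 kHz scheduler before entering the application loop.
    std::thread servo([] {
        while (running) {
            servoTick();
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
        }
    });

    applicationLoop();
    servo.join();
    // The real main function stops the scheduler and shuts the Omnis down here.
    return 0;
}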

Fortunately, C++ is an object-oriented programming language, so it has the power to generate more complex environments with multiple virtual objects. To take advantage of the object-oriented nature of C++ when creating several objects of the same type, it is best to create a class for that object type. In C++, a class is simply a template that defines the form of the objects and the members within them. Once a class and all of its member functions have been created, the construction of new objects of that type is very straightforward through the use of a constructor function (Schildt, 2003). One simulation used as a trial virtual environment involved the subjects interacting with ten virtual spheres in a dynamic environment. The spheres had a virtual size and weight, and moved based on the amount of force applied to them by the subjects. Because the spheres all had the same dynamics, they were all objects of the sphere class, which contained all of the necessary data, equations, and functions to simulate their motion. The use of the sphere class was very advantageous because it made the creation of new spheres very easy; the simulation could easily be expanded to include more and more spheres until the entire virtual world is filled with them. Another very useful element of a Phantom Omni C++ program is the inclusion of OpenGL graphics. The graphics add several more functions to the program, but they give the subjects visual feedback as well as force feedback when working within their environment. In a typical OpenGL display window, a three-dimensional view of the entire virtual environment is presented, including all active Omnis and all virtual objects present at the current time. The window updates itself at approximately 30 frames per second, allowing it to accurately show the state of the environment in real time. Once a C++ program has been successfully completed, tested, and debugged, proper force feedback can be rendered back to the users. As it turns out, there are many ways to generate force feedback, each with its own advantages and disadvantages.
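As an illustration of how such a sphere class might be organized, the sketch below uses a small vector type, a force accumulator, and explicit Euler integration. The member names, the mass values, and the integration scheme are assumptions made for the example rather than the actual thesis code.

#include <vector>

struct Vec3 { double x, y, z; };

// A simplified dynamic sphere for a haptic scene.
class Sphere {
public:
    Sphere(Vec3 startPos, double massKg) : pos(startPos), vel{}, force{}, mass(massKg) {}

    // Accumulate a force applied during this tick by an Omni cursor or another sphere.
    void applyForce(const Vec3& f) { force.x += f.x; force.y += f.y; force.z += f.z; }

    // Advance the state by one servoloop tick (dt in seconds, e.g. 0.001).
    void step(double dt)
    {
        vel.x += (force.x / mass) * dt;   // a = F / m, integrated to velocity
        vel.y += (force.y / mass) * dt;
        vel.z += (force.z / mass) * dt;
        pos.x += vel.x * dt;              // velocity integrated to position
        pos.y += vel.y * dt;
        pos.z += vel.z * dt;
        force = Vec3{};                   // clear the accumulator for the next tick
    }

    Vec3 position() const { return pos; }

private:
    Vec3 pos, vel, force;
    double mass;
};

int main()
{
    // Creating the ten spheres of the practice simulation becomes a short loop.
    std::vector<Sphere> spheres;
    for (int i = 0; i < 10; ++i)
        spheres.emplace_back(Vec3{20.0 * i, 0.0, 0.0}, 0.1);  // 0.1 kg, spaced along x
    for (auto& s : spheres)
        s.step(0.001);
    return 0;
}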

During the materials analysis experiment, three different modes of force feedback were used and compared, and it was found that the human-robot interaction with the materials varied quite a bit based on which feedback mode was active at the time. This proved true for the softer materials as well as the harder materials. The subjects also interacted with each other differently depending on the feedback mode currently being applied back to them.

3.3 Forces, Work, and Motion Redundancies

In any robotic haptic interaction, the human subjects apply forces to the virtual objects, and in turn the robots apply forces back to them. When applying forces to a virtual environment, work is being done. Even with only one human subject interacting with a virtual environment through one robotic device, a combination of positive and negative work can be done if the human and the robot are fighting with each other. However, with more than one human or more than one robot, the concepts of force and work become a little more complicated, and motion redundancies are introduced. Motion redundancies are always present when two or more members are working together, whether it is one human and one robot, two humans, two humans and four robots, or any other combination. This is because motion redundancies occur when there is more than one way to perform a task, or when a particular motion can be performed by either member of the team. Take, for example, two humans carrying a table across a room. One person could push back, or the other could. If both are pushing back, then the motion is redundant, but they are sharing the workload. However, if one is pushing
forward and the other is pushing back, then they are fighting with each other, which complicates the interaction. Assume, for example, that the table has a weight of 100 N and must be moved a distance of ten meters. It would then require a total of 1,000 J of work to move the table, neglecting the small height the table must be picked up off of the floor. If both humans cooperate perfectly, then each does 500 J of work. This would be represented as person one doing 50% of the work and person two doing 50% of the work. Now, assume that person one was doing nearly all of the work while person two was passively holding the table above the ground. The work analysis may now show person one as doing 95% of the work and person two as doing only 5%. Going further, what if person two were not actually helping move the table at all, but were instead pushing back in the direction of person one? In this case, the two subjects would be fighting with each other. For the case of both subjects fighting, the work analysis may show person one as doing 2,000 J of work and person two as doing -1,000 J of work, for a net total of 1,000 J. This concept of negative work arises because the two people are not working together but are instead fighting with each other. Because of this, it took three times more energy between both people than was necessary to move the table: person one had to work twice as hard as he would have if he were moving the table by himself in order to overcome the fighting of person two. Now, take for example a virtual box with each of the four Omnis attached to a different bottom corner. Subject one is responsible for the Omnis on the left side of the box and subject two is responsible for the Omnis on the right side of the box. The goal is
for them to cooperatively move the box toward a target box. This is similar to moving the table across the room; the only difference is that there is one more degree of freedom in the system. For the case of moving a table across the room, there is likely to be significant motion in the x and z directions, as well as rotational motion. For the box, significant motion in the y direction will occur as well. Another difference is that, since the Omnis are positioned at the corners and not at the center of a bottom edge, pushing with only one Omni applies a torque to the box, causing it to spin due to the offset distance. During this box interaction experiment, the subjects were instructed to work together as much as possible, and their level of cooperation was measured by first recording all of the positions of each Omni to a data file. MatLAB then analyzed this data and calculated the individual and joint forces and torques applied by each subject. A higher percentage of joint forces and joint torques indicates better cooperation between the subjects. MatLAB then calculated the work done by each subject for forces and for torques, based on the forces and torques themselves and the distances and angles involved, along with the total force work and total torque work for each target box throughout the entire simulation. Although the detailed analysis of this experiment is presented in chapter 5, it is important to note that the percentage of work indicates how well the subjects were cooperating with each other. One very good result would be each subject contributing approximately 50% of the work for both forces and torques. This would indicate that the subjects were cooperating very well in dividing up the workload to move the box toward the target. Another very good result would be to see one subject contributing most of the
work for the forces and the other subject contributing most of the work for the torques. In this case, the subjects are still cooperating very well with each other, although one is focusing on properly positioning the box while the other is focusing on properly orienting the box at the correct angle. Another, less ideal result which sometimes arose was one subject dominating both forces and torques. This indicated that one subject was doing nearly all of the work to reach the target while the other subject was passively holding on to their styluses to keep the box stable. This is not ideal because the subjects did not spread the workload equally between them. When negative work occurred, it indicated that the subjects were fighting more than they were working together to reach the target. The more negative a subject's work percentage, the more severe the fighting; some cases even showed the percentages to be more than 1,000% and -900%, indicating that the subjects were not working together at all. There are many different types of human-robot interactions present in this experiment, and they are discussed in detail in section 5.2. How MatLAB was programmed to calculate the fighting factor is also discussed in detail in section 4.5. The fighting factor is an integer between 1 and 5 which indicates how the subjects interacted with each other. Once the fighting factors had been determined for each target box for each dyad, the target boxes were compared to determine which had the highest or lowest fighting factors, how much the fighting factor affected the time required to reach the target box, and whether the first or second run of the simulation affected the fighting factor.
The box interaction experiment was only the first of two experiments. Extensive analysis was done on this experiment to determine whether translational motion or rotational motion caused the most fighting, and to determine whether there were any correlations between the time to reach a target and the fighting factor for that target. However, the concept of fighting between the subjects was examined even further in the second experiment performed, the materials analysis experiment.

3.4 Robotic Interaction with Materials

In the materials analysis experiment, the fighting distances and fighting velocities in each Cartesian direction of the world frame were calculated and compared. With increased fighting come negative work, wasted energy, and hindered performance. Therefore, the objective is to learn when and why members of the human-robot team fight with each other, so that measures can be taken to enable them to cooperate better. When interacting with the materials, the Omnis were set up as a three-Omni teleoperator system with two master robots and one slave robot. Each of the two subjects controlled one of the master robots, while the slave robot mimicked the average position and velocity of the two master robots. The fourth Omni was deactivated during the materials analysis experiment. The objective was for the two subjects to work together as much as possible to move the slave Omni toward the target material and perform a series of hardness tests on it. Each subject would feel a force applied back to them which, depending on the current
active force feedback mode, was based on either the hardness of the material the slave Omni was in contact with, the amount by which he was fighting with his partner, or both. In the materials analysis experiment, three force feedback modes were used. For all three, the slave Omni averages the position and velocity of the two master Omnis, mimicking the combined motions of the two human subjects. The fighting position and fighting velocity were calculated for each of the three Cartesian directions by writing the x, y, and z positions of each Omni to a data file every millisecond. MatLAB could then read this file and calculate the fighting distance and, by differentiating, the fighting velocity as well. The following eight equations were used to calculate the fighting distance and velocity for each Cartesian direction. Recall that, in the Omni's workspace, the x direction refers to the left-right direction, the y direction refers to the up-down direction, and the z direction refers to the forward-backward direction.

fight_pos_x = mean[abs(x_1 - x_2)]  (1)
fight_pos_y = mean[abs(y_1 - y_2)]  (2)
fight_pos_z = mean[abs(z_1 - z_2)]  (3)
fight_pos = sqrt[(fight_pos_x)^2 + (fight_pos_y)^2 + (fight_pos_z)^2]  (4)
fight_vel_x = mean[abs(vel_x_1 - vel_x_2)]  (5)
fight_vel_y = mean[abs(vel_y_1 - vel_y_2)]  (6)
fight_vel_z = mean[abs(vel_z_1 - vel_z_2)]  (7)
fight_vel = sqrt[(fight_vel_x)^2 + (fight_vel_y)^2 + (fight_vel_z)^2]  (8)
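A minimal sketch of how equations 1 through 4 can be evaluated from the logged positions is given below. The thesis performs this step in MatLAB; the sketch uses C++ for consistency with the other examples, and the function names and the assumption that positions are logged every millisecond are illustrative.

#include <cmath>
#include <cstddef>
#include <vector>

// Mean absolute difference between two equally long logs of one coordinate,
// one log per master Omni (equations 1 through 3).
double meanAbsDiff(const std::vector<double>& a, const std::vector<double>& b)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i)
        sum += std::fabs(a[i] - b[i]);
    return sum / static_cast<double>(a.size());
}

// Fighting distance (equation 4) from per-axis position logs of the two masters, in mm.
double fightingDistance(const std::vector<double>& x1, const std::vector<double>& x2,
                        const std::vector<double>& y1, const std::vector<double>& y2,
                        const std::vector<double>& z1, const std::vector<double>& z2)
{
    double fx = meanAbsDiff(x1, x2);
    double fy = meanAbsDiff(y1, y2);
    double fz = meanAbsDiff(z1, z2);
    return std::sqrt(fx * fx + fy * fy + fz * fz);
}

// The fighting velocity (equations 5 through 8) follows the same pattern after
// numerically differentiating each log, e.g. vel[i] = (pos[i] - pos[i - 1]) / 0.001.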

For example, assume that the x position of the first master Omni is 50 mm and the x position of the second master Omni is -50 mm. In this case, the slave Omni would be at the average x position of 0 mm, but the fighting distance in the x direction would be 100 mm. Now, assume that the first master Omni is moving upward at 50 mm/s and the second master Omni is moving downward at 50 mm/s. In this case, the slave Omni would remain stationary in the y direction, but the fighting velocity in the y direction would be 100 mm/s. One of the greatest difficulties in this experiment, however, was accurately rendering the stiffness of the material, as felt by the slave Omni, back to the subjects through their master Omnis. One reason for this was the small maximum force of 3.30 N that the Omnis are capable of producing. Another reason is that there are always differences between the way a virtual surface feels and the way a real surface feels. Therefore, three different force feedback modes were used and compared, to see which resulted in the best interaction between the subjects, the Omnis, and the materials. These modes were System Force Feedback, Social Force Feedback, and Dual Force Feedback. The general concept behind the three force feedback modes is as follows. In System Force Feedback, each master Omni feels a spring force between its position and the position of the slave Omni (equations 11 and 12). In Social Force Feedback, each master Omni feels a spring force between its position and the position of the other master Omni (equations 13 and 14). This mode has the advantage that it is easy to tell whether you are fighting with your partner, but has the disadvantage that you cannot obtain any information about the object the slave Omni is interacting with based on the force feedback provided.
Both of these modes have been used to some degree before (Glynn et al., 2001). However, a new type of force feedback, called Dual Force Feedback, was also used in this research. The general concept here is that both master Omnis feel the exact same force, equal to a spring force between the average position of the two master Omnis and the position of the slave Omni (equations 15 and 16). This mode tends to bring all three Omnis toward a stable equilibrium position. Dual Force Feedback has the advantage that both subjects feel the same force, so each subject knows that his partner feels the same force as he does, but it has the disadvantage that it is very difficult to know whether the forces experienced are due to the slave Omni interacting with a material or to the two subjects fighting with each other. The following nine equations were used to calculate the force to be rendered back to each of the three Omnis for each of the three force feedback modes. Note that for all three modes, both the position and velocity differences were used to calculate the magnitude of this spring force.

Desired Positions and Velocities:
desiredPos = 0.5 (pos_Omni_1 + pos_Omni_2)  (9)
desiredVel = 0.5 (vel_Omni_1 + vel_Omni_2)  (10)

System Force Feedback:
F_system_Omni_1 = -k (pos_Omni_1 - pos_Omni_3) - b (vel_Omni_1 - vel_Omni_3)  (11)
F_system_Omni_2 = -k (pos_Omni_2 - pos_Omni_3) - b (vel_Omni_2 - vel_Omni_3)  (12)

Social Force Feedback:
F_social_Omni_1 = -k (pos_Omni_1 - pos_Omni_2) - b (vel_Omni_1 - vel_Omni_2)  (13)
F_social_Omni_2 = -k (pos_Omni_2 - pos_Omni_1) - b (vel_Omni_2 - vel_Omni_1)  (14)
Dual Force Feedback:
F_dual_Omni_1 = k (pos_Omni_3 - desiredPos) + b (vel_Omni_3 - desiredVel)  (15)
F_dual_Omni_2 = F_dual_Omni_1  (16)

Slave Omni:
F_Omni_3 = k (desiredPos - pos_Omni_3) + b (desiredVel - vel_Omni_3)  (17)

From equation 17, it is clear that the slave Omni was drawn toward the average position and velocity of the two master Omnis. All of the force feedback modes were based on a spring-damper model, where k was the spring constant and b was the damping coefficient. In two of the force feedback modes, System Force Feedback and Dual Force Feedback, the subjects felt a force when the slave Omni was interacting with the material, in a manner similar to interacting with a virtual material directly through their own Omni. In the other force feedback mode, Social Force Feedback, the subjects could not feel anything that the slave Omni was encountering. These three modes will be discussed in greater detail with the results presented in sections 6.1 and 6.2, but it is important to note that the human-robot interaction with the materials was very different in each of the feedback modes. However, the fighting distance and velocity did remain fairly consistent for each of the five materials, indicating that feedback mode and Cartesian direction have a much larger impact on performance than the actual material being tested. Now that all of the theory behind the experiments has been defined, the next chapter focuses on the details of the actual experiment itself, from the time the subjects entered the laboratory until they left. It then goes on to discuss how the calculations were performed, the problems which were encountered along the way, and the solutions which were found to solve them.
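Before moving on, the sketch below collects equations 9 through 17 into a single function so that the sign conventions of the three modes are visible in one place. The vector helpers, the enumeration, and the default gain values for k and b are assumptions made for the example; they are not the gains or the code used in the experiments.

struct Vec3 { double x, y, z; };

static Vec3 add(Vec3 a, Vec3 b)     { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 sub(Vec3 a, Vec3 b)     { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 scale(double s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }

struct OmniState { Vec3 pos; Vec3 vel; };
struct Forces    { Vec3 master1, master2, slave; };

enum class Mode { System, Social, Dual };

// Forces rendered to the two master Omnis and the slave Omni under one feedback mode.
Forces feedbackForces(const OmniState& m1, const OmniState& m2,
                      const OmniState& slave, Mode mode,
                      double k = 0.2, double b = 0.005)   // illustrative gains
{
    // Equations 9 and 10: the desired slave state is the average of the masters.
    Vec3 desiredPos = scale(0.5, add(m1.pos, m2.pos));
    Vec3 desiredVel = scale(0.5, add(m1.vel, m2.vel));

    Forces f{};
    // Equation 17: the slave is drawn toward the averaged position and velocity.
    f.slave = add(scale(k, sub(desiredPos, slave.pos)),
                  scale(b, sub(desiredVel, slave.vel)));

    switch (mode) {
    case Mode::System:   // equations 11 and 12: spring from each master to the slave
        f.master1 = add(scale(-k, sub(m1.pos, slave.pos)), scale(-b, sub(m1.vel, slave.vel)));
        f.master2 = add(scale(-k, sub(m2.pos, slave.pos)), scale(-b, sub(m2.vel, slave.vel)));
        break;
    case Mode::Social:   // equations 13 and 14: spring from each master to the other master
        f.master1 = add(scale(-k, sub(m1.pos, m2.pos)), scale(-b, sub(m1.vel, m2.vel)));
        f.master2 = add(scale(-k, sub(m2.pos, m1.pos)), scale(-b, sub(m2.vel, m1.vel)));
        break;
    case Mode::Dual:     // equations 15 and 16: both masters feel the identical force
        f.master1 = add(scale(k, sub(slave.pos, desiredPos)), scale(b, sub(slave.vel, desiredVel)));
        f.master2 = f.master1;
        break;
    }
    return f;
}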

Chapter 4. Experimental Protocol

Before any of the actual experiments could be run, the C++ code for generating the virtual environments had to be written and tested, and IRB approval had to be granted for this research. Then, all of the necessary hardware had to be obtained and set up. It was crucial that everything be tested, checked, and rechecked before bringing in the subjects, so that everything would go as smoothly as possible when running the actual experiments.

4.1 The Necessary Hardware

Several different items were necessary for setting up and running the experiments. First and foremost was a medium-sized workstation containing the computer, four SensAble Phantom Omnis, and two chairs, one for each subject; the experiment operator stood during the experiments. Also necessary were a copy of the IRB consent form and survey for each subject, and a device for backing up and storing the data collected. Figure 1 shows the complete experimental setup with the sphere interaction simulation running.
Figure 1. The complete experimental setup during the experiments. In this photo, the sphere interaction simulation has just begun, and all four Omnis are taped to the table as they are during the experiments. The subjects sit in the chairs and each controls two of the Omnis during this practice trial. During the materials analysis experiment, the two subjects control only the left two Omnis, the third acts as the slave robot and actually interacts with the materials, and the fourth is deactivated.

Also needed was a box for the materials to be placed inside of during the materials analysis experiment. The next items necessary were the five materials themselves: a small block of soft foam, styrofoam, cardboard, soft wood, and aluminum. All of the materials were painted black to make them appear similar. The box was also painted black, so that the subjects could not easily determine the identity of the materials from sight alone. The box was
also placed about one meter away from the subjects, further reducing this possibility; this was possible because the subjects interacted with the materials through a teleoperator system.

4.2 Experimental Setup

Once all of the necessary hardware had been obtained, the experiments could actually be set up and conducted. There were a total of four parts to the entire session: two practice simulations to familiarize the subjects with the Omnis, force feedback, and virtual object interaction, as well as the two actual experiments themselves. During the experiments, the Omnis were taped to the table using double-sided tape so that they would not slide around. Once the Omnis were set up, the next step was to calibrate them, and to do so often. A set of four Omnis in series is an unsupported configuration, so the Omnis can easily become uncalibrated after as little as ten to fifteen minutes, causing jerky motions and poor force feedback; it was therefore essential to recalibrate them as often as possible during the experiments. By calibrating often, there were very few calibration errors during the experiments themselves. Table 2 presents a general minute-by-minute outline of the experimental procedure for the subjects. Sometimes the experiments finished in as little as 30 minutes if the subjects were quicker in the box interaction or materials analysis experiments, but the goal was to not let them run longer than 45 minutes, so as not to take up too much of the subjects' time.
Table 2. The basic scheduled timeline that the subjects followed when taking part in the experiments. The total time involved for each subject pair, or dyad, was approximately 45 minutes.

Time  Current Activity
0 min: The subjects enter the laboratory, IDRB 114, and I introduce myself to them.
1 min: Give both subjects the IRB consent form and allow them 5 minutes to read it over, sign it, and ask any questions they may have at the time.
6 min: Survey the subjects to get their subject number (1 through 20), gender, and whether they have ever worked with a robotic device before. It is made clear in the consent form that this data will not be attached to their names and will be used for statistical analysis purposes only.
7 min: Begin the first practice run, which is the simulation of the outside of a box. The subjects have up to 2 minutes to practice with this simulation.
9 min: Begin the second trial run, which is the sphere interaction simulation. The subjects have up to 3 minutes to practice with this simulation.
12 min: Explain the box interaction experiment instructions.
13 min: Perform the box interaction experiment twice, recording the data for each trial. The subjects should be able to complete each simulation in 4 minutes or less, allowing up to 8 minutes for this experiment.
21 min: Give a 5-minute break from the Omnis, allowing the subjects to rest. At this point, it is time to set up the materials analysis experiment and to explain the instructions and procedure for it. It is also necessary to explain the three force feedback modes to the subjects at this time.
Table 2. Continued

26 min: Begin the hardness tests on the first material. One minute will be allotted for each of the three force feedback modes, for a total of 3 minutes.
29 min: Set up the experiment for the second material.
30 min: Perform the hardness tests on the second material.
33 min: Set up the experiment for the third material.
34 min: Perform the hardness tests on the third material.
37 min: Set up the experiment for the fourth material.
38 min: Perform the hardness tests on the fourth material.
41 min: Set up the experiment for the fifth material.
42 min: Perform the hardness tests on the fifth material.
45 min: Save all of the data to a disk. Give the subjects the post-experiment survey and thank them sincerely for participating in this research project.

Table 2 presents the general timeline followed throughout the experiment and all four simulations involved in it. The next section goes into greater detail on each of the simulations and what was actually being done, studied, and measured for each of them.

4.3 Conducting the Experiments

Once the subjects entered the laboratory, their first task was to carefully read and fill out the IRB paperwork as I explained the fundamentals of robotics and haptics to them. Then, each subject got to practice with two virtual environments, each involving
four Omnis. The first of these environments involved simulating the outside of a virtual cube. The cube passively floated in the workspace; it did not move, deform, or change in any way as the subjects interacted with it. Each Omni had its own independent cube. This interaction had only force feedback; no visual feedback was presented. The cubes were perfectly smooth and frictionless. The purpose of this simple interaction was to get the subjects used to force feedback and the concept of interacting with a virtual object. The cubes were soft, and if enough force was applied, one could actually push straight through the cube and come out the other side. After a couple of minutes interacting with the cubes, the subjects were ready for a more interesting virtual environment. The second practice simulation involved moving spheres in a virtual haptic interaction simulation. This was a dynamic simulation: the net force vector applied to a particular sphere by a subject or by another sphere, divided by the mass of that sphere, gave the acceleration vector of that sphere. The acceleration was then integrated up to the new velocity vector and position of the sphere. A total of ten spheres and four Omnis were present in this virtual environment. The Omnis could interact with each other as well, the contact feeling like hitting a bump when one Omni's cursor ran into another's. The feature that distinguished this simulation from the first one was the visual feedback included. Through the use of OpenGL graphics, the subjects could see the positions of all ten spheres as well as their own positions in real time on the screen. This was extremely beneficial in a more complex virtual environment such as this one because
without it, the subjects would have had no idea of the actual positions and velocities of the spheres and the other Omnis. With the visual feedback, however, the subjects were easily able to visualize the virtual environment they were working within. All 20 subjects stated in the post-experiment survey that the OpenGL visual feedback was useful in the sphere and box interaction simulations. Figures 2 and 3 illustrate the virtual environment itself and the actual visual feedback available to the subjects during the simulation.

Figure 2. A 3-D MatLAB representation of the initial position of all ten spheres in the sphere interaction simulation.
Figure 3. The sphere interaction simulation in progress. This was the second of two practice environments the subjects worked with before the actual experiments began.

Once the subjects had adequate time to practice with these first two simulations, they were ready to begin the actual experiments. The first of these was the virtual box interaction experiment. In this experiment, the objective was for the subjects to cooperate as much as possible to move the virtual box toward a set of ten target boxes, all of which required translational and rotational motion of the box.
This simulation began with the box positioned in the center of the Omnis' workspaces and the first target box in the upper right-hand corner. There were four directions in which the box could move: left and right, up and down, forward and backward, and rotation about the y axis. Rotations about the x axis and the z axis were left out because this research focused primarily on planar motions. In order to reach a target box, the subjects had to position the virtual box within 20 millimeters of the target with an offset angle of no more than 30 degrees. In setting up the experiment, it was found that these constraints set a moderate difficulty level. Any stricter, and some of the dyads may not have been able to complete the simulation; any more lenient, and the dyads would have reached most of the targets far too quickly to properly analyze their level of cooperation. Once the first target box was reached, the second appeared, and once it was reached, the third appeared, and so on, until all ten target boxes had been reached. Each box was rotated 90 degrees from the orientation of the previous box, ensuring that the subjects had to apply both forces and torques to the box in order to reach the next target. Once all ten target boxes had been reached, the simulation was complete. The subjects then completed the entire simulation again, and their performance was compared between the first and second runs. Quantities which were compared include the time to reach each box, the offset angle when each box was reached, and the fighting factor for each box. For this experiment, OpenGL graphics were also utilized, just as in the sphere interaction simulation. The visual feedback included the virtual box to which the Omnis were attached, the current target box, and the Omnis themselves. The Omnis of the subject on the left were colored green and those of the subject on the right were colored blue. This interaction would have been nearly impossible without visual feedback.
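Returning to the reach criterion described above (within 20 millimeters and 30 degrees of the target), a sketch of that tolerance test might look as follows. The function name, the angle-wrapping step, and the use of degrees are assumptions made for the example.

#include <cmath>

// True when the virtual box is close enough to the current target box to count as reached:
// within 20 mm of the target position and within 30 degrees of its orientation.
bool targetReached(double boxX, double boxY, double boxZ, double boxAngleDeg,
                   double tgtX, double tgtY, double tgtZ, double tgtAngleDeg)
{
    const double posTolMm  = 20.0;
    const double angTolDeg = 30.0;

    double dx = boxX - tgtX, dy = boxY - tgtY, dz = boxZ - tgtZ;
    double offsetDist = std::sqrt(dx * dx + dy * dy + dz * dz);

    // Wrap the angular difference into [-180, 180) before comparing.
    double dAng = std::fmod(std::fmod(boxAngleDeg - tgtAngleDeg, 360.0) + 540.0, 360.0) - 180.0;

    return offsetDist <= posTolMm && std::fabs(dAng) <= angTolDeg;
}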

Figure 4 illustrates the box interaction experiment as the subjects are approaching the tenth target box. To reach this box, the subjects must move the virtual box up and to the right, as well as straighten out the box angle slightly.

Figure 4. The box interaction experiment in progress. This is the first of two experiments, and it is designed to measure the abilities of two human subjects working together in a virtual environment through a set of robotic devices.
After the box interaction experiment was completed two times, a five-minute break was given to allow the subjects to rest and to allow for the setup of the second experiment, the materials analysis experiment. During this experiment, a total of five materials were tested using the three force feedback modes described in section 3.4: System Force Feedback, Social Force Feedback, and Dual Force Feedback (Glynn et al., 2001). During the materials analysis experiment, the hardness of the material was calculated in the C++ program based on the deflection of the material and the forces applied to it by the third Omni. As mentioned in section 4.1, the materials were all painted black and placed inside of a black box approximately one meter away from the subjects so that their identity was not revealed too soon. Figure 5 shows the left, front, and right side views of the third Omni interacting with a block of soft wood, performing a hardness test on it.

Figure 5. The materials analysis experiment in progress, as seen from the left, front, and right sides. This is the second of two experiments, and it measures the ability of two human subjects to interact with a robot to perform an actual experiment and acquire data. All of the materials are painted black and are placed inside of a black box so that the subjects cannot easily determine their identity from sight alone.
As can be seen in figure 5, the stylus of the third Omni is taped up. This is because the stylus is not motor controlled and would otherwise flop around, making the hardness tests impossible to perform accurately. The view in figure 5 is looking down into the box; the subjects, however, were encouraged not to peer into the box and were strictly not permitted to touch the materials or interact with them in any way except through the third Omni. This kept the experiments fair and unbiased, as the final task for the subjects was to try to figure out the identity of the materials using a table of known material hardnesses. The table contained ten materials, five of which were the five they tested. After the experiment, the subjects filled out the post-experiment survey and were sincerely thanked for participating in this research study.

4.4 Problems and Solutions

Throughout the research process, there were a few minor problems which arose and had to be dealt with. The main issue was that, although the Omnis are excellent haptic devices and are well suited for this type of research, a set of four Omnis in series can lose calibration after about 10 to 15 minutes of continuous force feedback. This is because only dual-Omni setups are supported in a series configuration, while three- and four-Omni setups are not. When a miscalibration occurs, callback errors become more common, and eventually the uncalibrated Omni stops properly rendering force feedback and can even start vibrating or moving around uncontrollably. To prevent this problem from arising, the Omnis had to be recalibrated as often as possible throughout each experiment. They were calibrated a total of six times throughout the entire process: once at the very beginning, once after the first virtual box interaction, once after the sphere interaction simulation, once after each of the two box interaction simulations, and once after the materials analysis experiment. To calibrate,
the subjects were instructed to place the stylus of each Omni back into its inkwell, and the Phantom Test calibration tool was run (SensAble Technologies, 2010). Another issue which arose on one occasion was overheating of the motors. The box interaction simulation is very demanding on the motors and often applies the maximum force of 3.30 Newtons back to the subjects. Most of the simulations were completed in less than 4 minutes, so overheating did not occur. However, in one case, the time taken to complete the first simulation was 9 minutes and 27 seconds and the time taken to complete the second simulation was 4 minutes and 59 seconds, the longest the box interaction simulation had ever been run continuously. Towards the end of the second simulation, a warning message appeared on the screen that the second Omni had warm motors. This caused that Omni to immediately lose calibration, so the simulation had to be aborted and a ten-minute break was mandated to allow the motors to cool. After ten minutes, the second simulation was resumed on target box 7, which was where the issue first arose, and the simulation was finished without further problems. One possible solution for future work with simulations such as this one would be to limit the maximum force rendered. A force of 0.88 Newtons can be rendered continuously for 24 hours without causing overheating or other stresses on the device, so for a ten-minute simulation, the maximum force could be limited to somewhere between 0.88 Newtons and 2.00 Newtons, depending on how much of a safety factor is desired. Other than these two issues, however, the experiments ran very well and the data collected was very interesting. The next section discusses how MatLAB was used to calculate the quantities of interest from the original data files.
4.5 Post-Experiment Analysis and Calculations

For each dyad tested, three data files were generated by the Omnis. The first two were from the two runs of the box interaction experiment, and included the time, target box number, and the actual and desired x, y, and z positions of all four Omnis for each millisecond of the simulation. The third was from the materials analysis experiment, and included the time, material number, force feedback mode number, and the x, y, and z positions and forces for all three Omnis for each millisecond of the simulation. For the box interaction experiment, the time taken to reach each target and the offset distance and angle when each target was reached were written to a separate data file. The MatLAB analysis was applied only to the positions file for this experiment. The first task was to calculate the individual forces and torques and the joint forces and torques. First, the boxframe forces were summed for each time step. The boxframe forces are the forces that the subjects exerted on the Omnis, transformed into the moving reference frame of the box. This moving reference frame was determined by calculating the relative position of the box in box coordinates from the absolute position in world coordinates. This relative position was calculated by multiplying the world coordinates by the appropriate sine or cosine of the box angle. The concept of the boxframe forces on the box is illustrated in figure 6.
Figure 6. An illustration of the relative position of the box in box coordinates, the absolute position of the box in world coordinates, and the boxframe forces.

The self forces and self torques were then calculated, with subject A using Omnis 1 and 2 and subject B using Omnis 3 and 4. Then, the total forces and total torques were added up for each target box. The percentages of individual forces and torques were then calculated by dividing the self force for each subject by the total force, dividing the self torque for each subject by the total torque, and multiplying each value by 100%. The following eight equations were used to calculate these quantities.

instant_force = 2 min[abs(L, R)]   when sign(L) = sign(R)  (18)
instant_torque = d min[abs(L, R)]   when sign(L) is opposite to sign(R)  (19)
self_force_n = sum(instant_force)  (20)
self_torque_n = sum(instant_torque)  (21)
total_force = sum[abs(boxframe_forces)]  (22)
total_torque = sum[abs(d boxframe_forces)]  (23)
individual_force_n = (self_force_n / total_force) 100%  (24)
individual_torque_n = (self_torque_n / total_torque) 100%  (25)

In equations 18 and 19, instant_force and instant_torque are the individual force and torque quantities for each time step. The individual forces and torques are those contributed solely by one subject, and not jointly by both subjects. They are based on the force component, or boxframe force, of either translation or rotation of each of the four Omnis. The minimum absolute value of L and R was computed, since the Omni with the lower boxframe force produced the self force or torque. If L and R were applied in the same direction, then a force was applied, and if they were applied in opposite directions, then a torque was applied. This concept is illustrated in figure 7.

Figure 7. An illustration of the instant forces and instant torques.
In figure 7, the concept of the L and R forces is shown, as well as why the minimum of these forces was used in the calculation. The portion of the larger force which exceeded the magnitude of the minimum force was an ambiguous force. This force, shown within the blue circles, is ambiguous in terms of what the user wants it to do: either moving the box or rotating the box. In equations 20 and 21, the instant_force and instant_torque quantities were summed for each millisecond that the target box was active. Note that the subscript n refers to the subject number, either one or two. The variable d in equations 19 and 23 is the moment arm, the distance from the Omni's attachment point to the center of the box in the dimension currently being used, depending on the case. The individual forces and torques were then calculated using equations 24 and 25, by dividing the self force by the total force or by dividing the self torque by the total torque. Once the individual forces and torques had been calculated, the joint forces could be calculated by making the assumption that the rest of the forces and torques not contributed solely by one subject or the other must have been contributed jointly by the pair. Therefore, equations 26 and 27 were used to calculate these quantities.

joint_force = [1 - (self_force_1 + self_force_2) / total_force] 100%  (26)
joint_torque = [1 - (self_torque_1 + self_torque_2) / total_torque] 100%  (27)

For example, assume that, for a particular target box, the total force was 1,000 N and the total torque was 100 N*m, and that subject A individually contributed 150 N of force and 8
N*m of torque, while subject B individually contributed 200 N of force and 7 N*m of torque. From equations 26 and 27, the joint force would be 650 N and the joint torque would be 85 N*m. Therefore, the individual forces for subjects A and B would be 15% and 20%, the individual torques for subjects A and B would be 8% and 7%, the joint forces would be 65%, and the joint torques would be 85%. Once the joint forces and torques had been calculated, the next step was to calculate the self components of work, for both forces and torques, for subjects A and B. Then, the total work done by the translational forces and the total work done by the rotational torques was calculated. The percentages of individual work for forces and torques were then calculated by dividing the individual work done by each subject by the total work for that target box, for both forces and torques, and multiplying each value by 100%. Note that the force work was calculated based on the difference in the relative position of the box, and the torque work was calculated based on the difference in the box angle, from the current time step to the previous time step. The following six equations were used to calculate these quantities.

self_work_force_n = sum[instant_force (r_pos(time_loop) - r_pos(time_loop - 1))]  (28)
self_work_torque_n = sum[instant_torque (theta(time_loop) - theta(time_loop - 1))]  (29)
total_work_force = sum[boxframe_forces (r_pos(time_loop) - r_pos(time_loop - 1))]  (30)
total_work_torque = sum[boxframe_forces (theta(time_loop) - theta(time_loop - 1))]  (31)
force_work_n = (self_work_force_n / total_work_force) 100%  (32)
torque_work_n = (self_work_torque_n / total_work_torque) 100%  (33)
In equations 28 through 33, r_pos(time_loop) - r_pos(time_loop - 1) is the magnitude of the change in the relative position of the box from the previous time step to the current time step, and theta(time_loop) - theta(time_loop - 1) is the change in the box angle from the previous time step to the current time step. Just as for the individual forces and torques, the individual force work and torque work were calculated using equations 32 and 33, by dividing the self work force by the total work force or by dividing the self work torque by the total work torque. When the force work or torque work for subjects A and B is summed, the result is 100%. However, this does not mean that the individual percentages themselves are between 0% and 100%, as they were for the individual and joint forces and torques. In fact, the individual work component for one subject can be greater than 100%, meaning that the individual work component for the other subject is negative. As discussed in section 3.3, negative work indicates that there was more fighting than cooperation between the subjects for that particular target box. However, it was often seen that, even though there may have been immense fighting for translational forces, there was quite a bit of cooperation for torques, or vice versa, indicating that the two are not directly related to each other. A direct comparison, however, showed that there was consistently more cooperation in applying torques to the box than in applying translational forces to the box, which will be discussed in more detail in sections 5.1 and 5.2, along with the discussion of the fighting factor, which is derived directly from the force work and torque work calculated here.
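To tie the percentage bookkeeping together, the short program below reproduces the arithmetic of equations 24 through 27 using the numbers from the worked example in this section; the work percentages of equations 32 and 33 are formed the same way, with the self work and total work in place of the forces and torques. It is a restatement of the arithmetic, not the MatLAB analysis code.

#include <cstdio>

int main()
{
    // Totals and individual (self) contributions from the worked example.
    double totalForce  = 1000.0;            // N, summed boxframe forces for one target
    double selfForceA  = 150.0, selfForceB  = 200.0;
    double totalTorque = 100.0;             // N*m
    double selfTorqueA = 8.0,   selfTorqueB = 7.0;

    // Equations 24 and 25: individual percentages.
    double indForceA  = selfForceA  / totalForce  * 100.0;   // 15 %
    double indForceB  = selfForceB  / totalForce  * 100.0;   // 20 %
    double indTorqueA = selfTorqueA / totalTorque * 100.0;   //  8 %
    double indTorqueB = selfTorqueB / totalTorque * 100.0;   //  7 %

    // Equations 26 and 27: whatever is not individual is attributed jointly.
    double jointForce  = (1.0 - (selfForceA  + selfForceB)  / totalForce)  * 100.0;  // 65 %
    double jointTorque = (1.0 - (selfTorqueA + selfTorqueB) / totalTorque) * 100.0;  // 85 %

    std::printf("individual forces: %.0f%% / %.0f%%, joint forces: %.0f%%\n",
                indForceA, indForceB, jointForce);
    std::printf("individual torques: %.0f%% / %.0f%%, joint torques: %.0f%%\n",
                indTorqueA, indTorqueB, jointTorque);
    return 0;
}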

For the materials analysis experiment, the level of cooperation between the subjects was measured a bit differently. For this experiment, the measured hardness values were written to a separate data file. Just as for the box interaction experiment, the MatLAB analysis was applied only to the positions file. For each material, the fighting distance and velocity were calculated for each force feedback mode and for each Cartesian direction. This was done to measure the difference in position and velocity between the first two Omnis. Since this experiment was set up as a three-Omni teleoperator system with two master Omnis and one slave Omni, the first two Omnis were the master Omnis which the two subjects controlled. The position data was recorded to the data file every millisecond, and the velocity data for each millisecond was obtained through differentiation. The difference between the two master Omnis could then be easily obtained, allowing for the calculation of the fighting distance. The fighting velocity, in millimeters per second, was calculated for each Cartesian direction in the same manner. Equations 1 through 8 from section 3.4 were used to calculate these quantities for each discrete time step, and the average over all time steps for a particular feedback mode of a particular material was the quantity recorded for analysis. The purpose of this analysis was not only to measure how well a human-robot team cooperated, but also to measure which force feedback mode and which Cartesian direction gave the subjects the most difficulty, for both position and velocity. The next two chapters present the results for both experiments and answer these questions. They also reveal how the subjects' perceptions of each feedback mode compared to the actual numerical analysis of the data.
Chapter 5. Experimental Assessment of Virtual Environments

For the box interaction experiment, the subjects were instructed to cooperatively move the virtual box toward a target box, with tolerance levels of 20 millimeters and 30 degrees. There were ten target boxes in the simulation, and the entire simulation was completed twice. For subjects who had never worked with a robotic device before, this was the first time they had ever worked with a partner to interact with a virtual object in this manner. It is therefore of research interest to determine how successful the human-robot interaction in this environment was, whether distance or offset angle was the leading constraint in reaching the target box, and the fighting factors for both horizontal and rotated target boxes, in both the first and second simulations. The force feedback mode was constant throughout this experiment. Each Omni was attached to a bottom corner of the box, and the force applied back to the subjects was a spring force proportional to the distance the Omni was from the desired corner position, as calculated by equation 34.

F = k sqrt[(x_omni - x_corner)^2 + (y_omni - y_corner)^2 + (z_omni - z_corner)^2]  (34)

In equation 34, k is the spring constant of the virtual spring between the Omni and the box corner, F is the magnitude of the force in the direction of the corner, and x, y, and z are the positions of the Omni or the corner, whichever the case may be.
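A sketch of this per-corner spring force is shown below; the function name and the stiffness parameter are illustrative assumptions.

#include <cmath>

// Magnitude of the spring force drawing an Omni toward its assigned box corner (equation 34).
double cornerSpringForce(double xOmni, double yOmni, double zOmni,
                         double xCorner, double yCorner, double zCorner,
                         double k /* spring constant, illustrative */)
{
    double dx = xOmni - xCorner, dy = yOmni - yCorner, dz = zOmni - zCorner;
    return k * std::sqrt(dx * dx + dy * dy + dz * dz);   // applied toward the corner
}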

The constraints on the box were that it was limited to a maximum angular velocity of 1 rad/sec and was bound to the region between +100 and -100 in the x, y, and z directions; if one of these limits was reached, the box's motion was constrained accordingly. The box measured 120 x 60 x 60 millimeters and its density was 4,000 kg/m^3, or 4 times the density of water, giving it a virtual mass of 1.728 kilograms (3.810 pounds).

5.1 Human-Robot Interaction in the Box Interaction Experiment

Both the first and second simulations were exactly the same, although the subjects all felt as if the second time was somewhat easier. This was due to having some experience with the virtual box the second time through, versus having no experience with it the first time through. This was also indicated by the time taken to complete each simulation: the average time required to complete the simulation the first time was 3 minutes and 49 seconds, while the average time required to complete it the second time was 2 minutes and 27 seconds. In the box interaction experiment, boxes 1, 3, 5, 7, and 9 were the horizontal boxes and boxes 2, 4, 6, 8, and 10 were the rotated boxes. The horizontal boxes had their longest edge facing the screen and the rotated boxes had their narrow edge facing the screen. This ensured that the subjects would have to rotate the box 90 degrees after reaching a target box in order to reach the next one. The graphs presented in figures 8 through 15 fully analyze the data from the box interaction experiment. However, it is also of interest to see which pairs of data sets are actually statistically significantly different. In order to do this, a paired t-test was run for each of the comparisons in these figures.
If the p-value, the probability of obtaining a difference at least as large as the one observed if the null hypothesis were true, is less than 0.05, then the two data sets are considered statistically significantly different. If the p-value is greater than 0.05, the null hypothesis cannot be rejected and there is no statistically significant difference between the pairs. Figure 8 shows the average time to reach each target box for each simulation. Note that for all graphs with error bars present, the range represents one standard deviation from the mean of the data collected.

Figure 8. The average time taken to reach each target box in the box interaction experiment. The error bars represent one standard deviation from the mean.

In figure 8, it is clear that the average time to reach each target box, except boxes 3 and 10, was less in the second simulation than in the first. However, it is notable that there are very large standard deviations for some of the boxes, in particular boxes 2, 4, and 8. This is due to one or two dyads having great difficulty lining up the virtual box with the target box, taking more than three minutes to reach a single box in some cases. These pairs did not actually have a large amount of fighting; they simply could not get within the 20 mm and 30 degree limits for several attempts. Also notable is that all three of these boxes
64 are rotated boxes. Box 5, a horizontal box, also has a fairly large standard deviation for the first simulation, although it is not nearly as large as the ones for 2, 4, or 8. The longest time taken to reach a single box in the entire experiment was 3 minutes and 3 seconds, for target box 4 in the first simulation. The fastest a single box was reached in the entire experiment was 1.71 seconds for target b ox 3 in the first simulation. The longest time taken to complete the entire simulation was 9 minutes and 26 seconds in the first simulation and the fastest the entire simulation was completed was 1 minute and 15 seconds in the second simulation. Out of all ten dyad s tested, nine completed the second simulation faster than they completed the first simulation and one took 16.86 seconds longer to complete the second simulation than the first T his demonstrates that even a little practice can greatly increase the speed and efficiency at which a human robot team can interact with a virtual environment. However, were the times statistically significantly different between the two simulations? When the paired t test was run, it yielded a p value of 0.00 50. Therefore, it is safe to say that the null hypothesis can be rejected, and that the subjects did perform the second simulation statistically significantly faster than they performed the first simulation. The next comparison made was between the avera ge offset distances for each target box when reached. Due to the constraints on the experiment, the distance was always less than 20 millimeters. Figure 9 shows the average distance from the target box when reached, for each box in each simulation.

The next comparison made was between the average offset distances for each target box when reached. Due to the constraints on the experiment, the distance was always less than 20 millimeters. Figure 9 shows the average distance from the target box when reached, for each box in each simulation.

Figure 9. The average offset distance when each target box was reached in the box interaction experiment. The error bars represent one standard deviation from the mean.

As was observed in the actual data, distance was the leading constraint in reaching the target box 82% of the time in both the first and second simulations. This is verified by figure 9, as the average offset distances are quite close to 20 mm. For box 1, distance was the constraint 100% of the time. This demonstrates that, for the constraints given, it was more difficult to correctly position the box in the 3-D environment than to rotate it to the proper orientation. Through observation, it was common for the subjects to get the box to within 21 or 22 mm of the target, miss, and then have to try again. Perhaps if the tolerance had been increased to 30 mm, the results would have been quite different. However, for the constraints given, offset distance proved more difficult to meet than offset angle.

There is no statistically significant difference between the offset distances in the first and second simulation. When the paired t-test was run, it yielded a p-value of 0.96.

This indicates that there was essentially no improvement in the offset distances from the first run of the simulation to the second.

The next comparison made was between the average offset angles for each target box when reached. Due to the constraints on the experiment, the angle was always less than 30°. Figure 10 shows the average offset angle from the target box when reached, for each box in each simulation.

Figure 10. The average offset angle when each target box was reached in the box interaction experiment. The error bars represent one standard deviation from the mean.

In figure 10 it is clear that no box has an average offset angle close to 30°, indicating that the offset angle was rarely the leading constraint in reaching the target box. However, the large standard deviations indicate that the offset angle did vary significantly between the dyads for each box. There is no statistically significant difference between the angles in the first and second simulation; the t-test yielded a p-value of 0.75.

The lack of statistical significance between the first and second simulation, together with the data seen in figure 10, demonstrates that the offset angles were effectively random values between 0° and 30° when the target was reached. This makes sense, since the offset distance was the leading constraint in reaching the target box for 82% of the targets.

Once the time, offset distance, and offset angles had been analyzed, the individual and joint forces and torques were analyzed to measure the level of cooperation between the subjects. Figure 11 shows the average individual and joint forces and torques for each target box in the first and second simulation. In figure 11 the individual forces and torques presented are actually the sum of those for both subjects. For example, if the individual forces for subjects A and B were 20% and 20% and the joint forces were 60%, figure 11 would show 40% for individual forces and 60% for joint forces.

Figure 11. The individual forces and torques versus the joint forces and torques in the box interaction experiment.

As figure 11 shows, there was no statistically significant difference between the first and second simulations, but there was a statistically significant difference between the overall forces and torques, between the individual and joint forces, and between the individual and joint torques. Five t-tests were run on this data. Comparing the joint forces between the first and second simulation yielded a p-value of 0.68. Comparing the joint torques between the first and second simulation yielded a p-value of 0.78. However, comparing the overall joint forces to the overall joint torques yielded a p-value of less than 0.0001, comparing the individual forces to the joint forces yielded a p-value of less than 0.0001, and comparing the individual torques to the joint torques yielded a p-value of less than 0.0001. This demonstrates that the null hypothesis can be rejected with near certainty for these three cases, showing that the subjects did cooperate more for rotational motion than for translational motion and that there was a statistically significant difference between the individual and joint forces and between the individual and joint torques.

On average, approximately 63.46% of the total forces were joint forces and approximately 87.85% of the total torques were joint torques. This also demonstrates that the subjects cooperated more with rotational torques than with translational forces, as was stated in section 4.5, and as is also illustrated by the offset distance being the leading constraint in reaching the target box. The greater cooperation between the subjects for rotational torques than for translational forces is also supported by the analysis of work. Going beyond the individual and joint forces and torques, the work done by the forces and torques was obtained next using equations 28 through 33 from section 4.5. Once the force and torque work had been analyzed, the concept of the fighting factor could then be formed.
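The work values themselves come from equations 28 through 33 in section 4.5, which are not reproduced here. As a rough sketch of the underlying idea only, the snippet below accumulates each subject's translational work as the dot product of that subject's applied force with the box velocity at each timestep and converts the totals into the percentages on which the fighting factor is based; a negative percentage corresponds to the negative (fighting) work discussed in sections 3.3 and 4.5. The function name, array shapes, and sampling assumptions are placeholders, not the thesis's formulation.

    import numpy as np

    def work_percentages(forces_a, forces_b, box_velocity, dt):
        # forces_a, forces_b: (N, 3) arrays of each subject's applied force [N]
        # box_velocity:       (N, 3) array of the box's translational velocity [m/s]
        # dt:                 sample period [s]
        # Work done by each subject over the trial: sum of F . v dt.
        work_a = np.sum(forces_a * box_velocity) * dt
        work_b = np.sum(forces_b * box_velocity) * dt
        total = work_a + work_b   # assumed positive, since the box did reach the target
        # The two percentages sum to 100%; a negative value means that subject
        # worked against the box motion (negative, or "fighting", work).
        return 100.0 * work_a / total, 100.0 * work_b / total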

5.2. The Fighting Factor

The concept of positive and negative work was discussed in detail in sections 3.3 and 4.5. Since the work percentages indicate how well the subjects worked together to move the box, the fighting factor could then be defined. The fighting factor is an integer value between 1 and 5 and is based directly on these percentages.

A fighting factor of 1 indicates a high level of cooperation and very little fighting. Remember that subjects A and B each doing 50% of the work would indicate a perfect distribution of the work, and hence a very good level of cooperation. There was a good deal of cooperation, and the subjects distributed the work nearly equally in order to reach the target. A fighting factor of 1 is given even if the percentages are between [50%, 50%] and [30%, 70%], that is, 30% for one subject and 70% for the other.

A fighting factor of 2 is given if the percentages are between [30%, 70%] and [0%, 100%]. This indicates that the subjects are still working together, but one is doing most of the work. This is similar to the example given in section 3.3 where two people are carrying a table across the room, and one is passively holding the table above the ground while the other does most of the work to move it. This is also a very good result if one subject does the majority of the work for forces and the other does the majority of the work for torques, as they have still evenly distributed the work between them.

A fighting factor of 3 is given if the percentages are between [0%, 100%] and [−30%, 130%]. Similar to a fighting factor of 2, one subject is doing most of the work. However, the difference is that the subjects are fighting with each other more than they are cooperating, hence the negative work. For fighting factors of 3 through 5, the level of fighting between the subjects increases with each step.

A fighting factor of 4 is given if the percentages are between [−30%, 130%] and [−100%, 200%]. This indicates that the subjects were fighting considerably, which makes it significantly more difficult and more tiring to reach the target. For this case, anywhere from 1.3 to 2 times as much work is being done as is necessary to reach the target, which is problematic because it can lead to increased fatigue in both the subjects and the Omnis.

Lastly, a fighting factor of 5 is given if the percentages are beyond [−100%, 200%]. This indicates that the subjects were fighting the entire time and eventually got lucky enough to reach the target. Often in these cases, several times as much work was being done as was necessary. Also unique to this case is that both subjects have done more work than if they had moved the box by themselves.

The research interest here is to study whether horizontal or rotated boxes have the highest fighting factors, whether the fighting factor is highest for forces or torques, whether the fighting factor was lower in the second simulation than in the first, and how much the fighting factor affected the time taken to reach each target box.
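Because the two work percentages always sum to 100%, the banding just described can be expressed compactly by classifying the larger of the two percentages. The thresholds below follow the ranges given above; the function itself is only an illustrative sketch, not code from the experiment.

    def fighting_factor(percent_a, percent_b):
        # Classify a pair of work percentages (summing to 100%) into a factor of 1-5.
        high = max(percent_a, percent_b)   # share of the subject doing more of the work
        if high <= 70.0:
            return 1   # near-even split: high cooperation
        elif high <= 100.0:
            return 2   # one subject doing most of the work, still cooperating
        elif high <= 130.0:
            return 3   # the other subject doing up to -30%: fighting outweighs cooperating
        elif high <= 200.0:
            return 4   # considerable fighting: 1.3 to 2 times the necessary work
        else:
            return 5   # fighting the entire time: several times the necessary work

    # Example: one subject does 120% of the work and the other -20%.
    print(fighting_factor(120.0, -20.0))   # prints 3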

Figure 12 shows the number of each fighting factor for horizontal and rotated boxes in each simulation.

Figure 12. The frequency of each fighting factor per simulation in the box interaction experiment.

Note that each simulation has a total of 100 fighting factors, since there were ten target boxes reached for each of the ten dyads. In figure 12, the fighting factors are noticeably lower for torques than for forces, which also demonstrates that the subjects cooperated better in rotating the box than in positioning it, as was discussed in section 5.1. Another notable observation is the number of fighting factors of 2 for the rotated boxes, for both forces and torques. This indicated that, for the rotated boxes, it was very common for one subject to do most of the work for forces and the other to do most of the work for torques.

It is also seen in figure 12 that there were boxes with fighting factors of 4 and 5 in the force analysis, especially in the second simulation. This indicates that, for these boxes, the subjects had great difficulty in cooperating to move the box. This was observed during the experiments as well. It was common to see one subject push the box one way and the other push back the other way.

Also common was to see the box get very close to the target and then one subject make a strong move, causing the box to either spin around or fly away from the target, forcing the subjects to start over in reaching that target.

This brings up the next question: how does the fighting factor affect the time taken to reach the target box? Figure 13 shows the average time taken to reach each target box based on the fighting factor, for both the force analysis and the torque analysis.

Figure 13. The average time taken to reach the target box per fighting factor in the box interaction experiment.

Figure 13 also shows that there was a statistically significant difference between the times taken to complete the first and second simulations. However, the results were somewhat unexpected. One would expect the average time to increase steadily as the fighting factor increased. However, this was only somewhat seen. For force analysis in the first simulation, the boxes with the higher fighting factors did generally take the longest. However, for the second simulation, the behavior is not what one would expect.

For torque analysis in the first simulation, the pattern was similar: the times did not consistently grow longer with each step in fighting factor. When looking at the actual data, it becomes clear why some of these discrepancies occurred. Some dyads were much quicker at completing the simulation than others, so a high-fighting-factor box for a fast dyad may have taken no longer than an easier box did for another dyad. Also, some fighting factors occurred only a few times in the force analysis, so a handful of boxes with an unusually long time to reach could greatly skew the results shown in figure 13. Even so, figure 13 still gives a useful picture of how the fighting factor and the time taken to reach the target relate.

Yet another analysis regarding the fighting factor is the actual fighting factors themselves for each individual target box. Figure 14 shows the average fighting factor for each of the ten target boxes in both the first and second simulation.

Figure 14. The average fighting factor per target box in the box interaction experiment. The error bars represent one standard deviation from the mean.

Figure 14 reiterates that there was no statistically significant difference between the fighting factors in the first and second simulations, but that there was a statistically significant difference between forces and torques. However, it also indicates which boxes were easiest for the subjects to cooperate on and which were the most difficult. For force analysis, boxes 1 and 7 promoted the most fighting while box 10 promoted the least. For torque analysis, box 1 promoted the most fighting while boxes 2, 3, and 10 promoted the least. An interesting observation was that for boxes 4, 5, and 6, the second simulation had a much higher average torque fighting factor than the first simulation. This may have been due to the subjects rushing more in the second simulation, thinking that they were more skilled than they actually were. The standard deviations are also quite large for the fighting factors, indicating that some subject teams had a much higher level of cooperation than others. The first target box in the first simulation had a higher average fighting factor than the other boxes, which was likely due to it coming at the very beginning of the experiment, when the subjects were just getting used to the force feedback and the task at hand.

When performing the statistical analysis for this data, the results were exactly what was expected. Running a t-test to compare all of the fighting factors, for both forces and torques, between the first and second simulation yielded a p-value of 0.80, so, just as predicted, there was no statistically significant difference between the first and second simulation. However, running a t-test to compare the force fighting factors to the torque fighting factors yielded a p-value of less than 0.0001, which demonstrates that the null hypothesis can be rejected with near certainty for this case and that the torque fighting factor is statistically significantly less than the force fighting factor.

The final analysis regarding the fighting factor is the actual correlation between the work percentages themselves for forces and torques. The theory is that if a subject has a higher work percentage for forces, then he would have a lower work percentage for torques, as each subject would specialize in a separate portion of the task. Figure 15 illustrates this in the x-y plane, with the x axis representing the force work percentage and the y axis representing the torque work percentage.

Figure 15. The force work vs. torque work scatter plot for all subjects with fighting factors 1 through 4 in the box interaction experiment.

Figure 15 presents all of the individual force work and torque work data for fighting factors 1 through 4 in a scatter plot. All points with a fighting factor of 1 are colored blue, all points with a fighting factor of 2 are colored green, all points with a fighting factor of 3 are colored orange, and all points with a fighting factor of 4 are colored red.

The data for fighting factors of 5 were not analyzed here because the subjects were fighting too much to obtain any meaningful data for this analysis. As seen in figure 15, the trendline has a negative slope, indicating that the theory of a subject having a higher percentage of either force work or torque work, but a lower percentage of the other, is at least somewhat true. It is difficult to prove this theory, however, since the data points are greatly scattered, which is indicated by the very low value of R², and since the slope of the trendline is so shallow. However, this trendline and R² value cover only the data points with fighting factors 1 through 4. Table 3 lists the best-fit trendline and R² value for seven different combinations of fighting factors.

Table 3. The best-fit line and R² values of the force work vs. torque work data.

Fighting Factors    Best-Fit Trendline         R² Value
1                   y = -0.2800x + 63.998      0.1141
2                   y = -0.1213x + 56.067      0.0180
3                   y = -0.0793x + 53.966      0.0089
4                   y = -0.1216x + 56.079      0.0290
1 and 2             y = -0.1257x + 56.285      0.0194
1 through 3         y = -0.0934x + 54.670      0.0118
1 through 4         y = -0.1135x + 55.673      0.0223

In table 3, it is clear that there is not a large difference in the slope of the trendline or in the R² value for the different combinations of fighting factors.

The slope is negative in all seven cases, giving some support to the force vs. torque correlation. However, an ideal correlation would result in the trendline y = −x + 100, with an R² value of at least 0.50. The shallow slope of the trendline demonstrates that the subjects did not cooperate in this way as well as they could have in this experiment. This is also supported by the large amount of fighting and negative work seen in the analysis.

For improving the human-robot interaction in an experiment such as this one, the ultimate goal would be to eliminate fighting factors of 3, 4, and 5 altogether. For the best human-robot interaction, the fighting factors need to be 1 or 2, and the points on the scatter plot need to be as close to the line y = −x + 100 as possible. However, this is quite difficult to achieve. It would take considerably more practice on the part of the subjects, as well as better force feedback and visual feedback. If it is achieved, then the human-robot team could be capable of working with a much higher level of efficiency and speed, with each member focusing on separate parts of the task.

5.3. Force Feedback vs. Visual Feedback

The concepts of force feedback and visual feedback were introduced in chapter 1, but now it is time to expand on them and consider how they could be improved. The box interaction experiment was programmed such that only one method of force feedback and one method of visual feedback were used. However, better methods of feedback could certainly improve the fighting factor between the subjects.

While all 20 of the subjects stated that the visual feedback was useful, there was definitely some room for improvement.

Future versions of this experiment could include two OpenGL windows: the first would show the virtual box and the target box from the perspective of subject A, and the second would show them from the perspective of subject B. Another possibility would be to have the OpenGL window be based on a moving coordinate frame attached to the box; in the actual experiment, the window remained fixed in the same absolute reference frame, so the box moved within it. Yet another possibility would be to have two windows, one showing the absolute reference frame as in the actual experiment, and the other based on the moving coordinate frame described above. Future expansions of this project could include running this experiment with different types of visual feedback and then comparing the results.

Another type of future expansion could be to try different types of force feedback for the box. One possibility would be to instead have the force feedback based on the level of cooperation between the two subjects, similar to Social Force Feedback in the materials analysis experiment. The general concept here is that the subjects would feel a spring force keeping them attached to the box corner, just as they did in the experiment, but they would also feel an additional force pulling them slightly in the direction of their partner. This would be a very good teaching method, in that it would promote cooperation between a teacher and a student: the teacher would be someone who is highly experienced in haptic interactions such as this, and the student would be someone who is new to haptic interactions and is seeking practice in human-robot interactions.

After all, the best way to improve the cooperation between the subjects would be for them to practice with several virtual environments such as this one. Improved methods of force feedback and visual feedback may help, but overall, the best way to improve these skills is to practice.

More future expansions and developments will be presented in section 7.1, but it is significant to note that, even during the 30-45 minutes the subjects spent participating in this research, they all showed clear improvements in their robotic interaction skills and confidence by the end of the experiments. Most of the subjects, especially those who had never used a robotic device before, approached the experiments with a fair amount of caution and shyness in the beginning. However, by the end of the materials analysis experiment, they all showed significantly more confidence and comfort in using the Omnis and in interacting with their partner through them.

5.4. Subject Feedback vs. Numerical and Statistical Analysis

The statistical analysis performed for the box interaction experiment showed that there was indeed a statistically significant difference in the level of cooperation between the subjects for translational motion and rotational motion of the box. The subjects cooperated significantly more in rotating the box than in moving it through space, as was demonstrated by the analysis of the forces, torques, work, and the fighting factor. The percentage of joint torques was much higher than the percentage of joint forces, and the average fighting factor was significantly lower for torques than for forces. The statistical analysis also showed that the subjects performed the simulation significantly faster the second time than the first, indicating that practice does improve performance and efficiency by a statistically significant amount. However, what did the subjects have to say about this? Did their perception match the numerical results?

Interestingly enough, in the post-experiment survey, 55% of the subjects stated that rotational motion was in fact the most difficult to control. The other 45% stated that translational motion was the most difficult. This contradicts the numerical and statistical analysis, which demonstrated that the subjects had an easier time controlling the rotational motion than the translational motion. Some of the reasons stated by subjects who felt that rotation was more difficult to control were that it was difficult to know which way your partner is turning the box, that there was a limited frame of reference in viewing the box while it was rotating, and that it was difficult to see your Omnis when you were in the back of the OpenGL window. All of these issues would be corrected by the implementation of some of the visual and force feedback methods described in section 5.3. Some of the reasons stated by subjects who felt that translation was more difficult to control were that it required extensive coordination with your partner, that the box position would often place the Omnis at their extreme positions, where they placed unwanted torques on the box, and that the box sometimes felt too heavy or had too much momentum to be moved easily. These issues could be corrected by slightly reducing the size of the field within which the box can move and by reducing the density of the box. Other than that, more practice would be required to master the coordination issues in this human-robot interaction.

As stated previously, the box was bound to the region between +100 and −100 mm in the x, y, and z directions and had a density of 4,000 kg/m³.

By limiting the region to between +70 and −70 mm in the x, y, and z directions, reducing the density to 1,000 kg/m³, improving the force and visual feedback available to the subjects, and allowing the subjects to practice for a substantial amount of time with this and other similar haptic environments, an average fighting factor of between 1 and 2 for all target boxes may well become obtainable.

Good subject cooperation is essential when using human-robot teams to perform actual experiments, such as the materials analysis experiment. Chapter 6 discusses the results of this experiment and provides a detailed analysis of the cooperation between the subjects in three-dimensional Cartesian space under three different force feedback modes.

Chapter 6
Results and Observations of Materials Analysis

The hardness of a material is an easily obtainable, yet fundamental, property. There are many hardness testers that are better suited to the task of measuring a material's hardness, but they do not offer the interactivity that the Phantom Omni offers: they provide no haptic interaction between the user and the device, nor do they allow two subjects to cooperate in a human-robot interaction while performing the experiments. One limitation of the Phantom Omnis is that they are only able to perform these experiments on softer materials. As discussed in section 2.2, it requires a smaller force to deform these materials by a measurable amount, and many robots are unable to deliver larger amounts of force, the Omni being one of them. With that, it was observed that the harder the material, the less accurate the results and the more common repeat hardness measurements became, due to the Omni slipping on the hard surface or recording an erroneous value.

The five materials tested were soft foam, styrofoam, cardboard, soft wood, and aluminum. The hardness values recorded for soft wood and aluminum were slightly higher than the actual values, although this was due to the Omnis being unable to obtain measurements as accurately on these two harder materials. However, the most interesting aspect of this experiment was not the hardness values themselves, but how the subjects worked together to obtain them. They worked with three different force feedback modes to obtain the results.
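The hardness measured in this chapter is the Brinell hardness (see section 6.3). For background only — the measurement procedure actually used with the Omnis is defined in the earlier chapters and is not reproduced here — the standard Brinell relation expresses the hardness in terms of the applied load F, the indenter ball diameter D, and the indentation diameter d, with the load conventionally in kilograms-force and the diameters in millimeters:

    % Standard Brinell hardness relation (background only; not the thesis's equations).
    %   F : applied load (kgf),  D : ball diameter (mm),  d : indentation diameter (mm)
    \[
      \mathrm{HB} = \frac{2F}{\pi D \left( D - \sqrt{D^{2} - d^{2}} \right)}
    \]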

6.1. Three Force Feedback Modes

As discussed in section 3.4, the three force feedback modes utilized in the materials analysis experiment were System Force Feedback, Social Force Feedback, and Dual Force Feedback. Remember that System Force Feedback was based on a force proportional to the distance between each master Omni and the slave Omni, Social Force Feedback was based on a force proportional to the distance between the two master Omnis, and Dual Force Feedback was based on both master Omnis feeling the same force, proportional to the distance between the average position of both master Omnis and the position of the slave Omni. These feedback modes are mathematically defined in equations 9 through 17 in section 3.4; a simplified sketch of the three modes appears at the end of this section.

The hypothesis was that Dual Force Feedback would cause the most fighting, because it is difficult to tell whether the force you feel is due to fighting with your partner or due to the slave Omni being restricted by its interaction with a material. Social Force Feedback was expected to cause the least fighting, because all of the force feedback was based on the fighting distance between you and your partner: as the fighting distance increased, the force applied back to you increased, pulling you back towards your partner. In turn, System Force Feedback would be in the middle, and would be expected to produce more fighting than Social Force Feedback but less than Dual Force Feedback.

The subjects found all three force feedback modes to be different, but useful. Each has its advantages and disadvantages.

The advantage of System Force Feedback is that the force tends to pull all three Omnis toward the same position and gives the subject a sense of how stiff the material he is interacting with is; the disadvantage is that it is harder to know if you are fighting with your partner, so it can be more difficult to cooperate than in Social Force Feedback. The advantage of Social Force Feedback is that the mode makes it very easy to cooperate with your partner and to reduce the fighting distance and velocity substantially. However, the disadvantage is that it is impossible to know what the slave Omni is feeling in this mode, so there is no way to gauge the stiffness of the material from the force feedback alone. The advantage of Dual Force Feedback is that both subjects feel the exact same force, so each subject knows what his partner is feeling. This mode is very good if one person is trying to train another in force feedback, since the trainer could set up the system so that he feels the force he wants the trainee to feel, knowing that the trainee would feel that force as well. It also tends to bring all three Omnis towards a stable equilibrium position. However, the disadvantage is that it is very difficult to cooperate using this mode for an experiment such as this one, because it is hard to tell whether the forces experienced are due to the slave Omni interacting with a material or due to the two subjects fighting with each other. This means that this mode promotes a higher likelihood of fighting between the subjects than the other two modes, as the subjects can easily end up fighting more just trying to get the system back to equilibrium.

During the break between the box interaction experiment and the materials analysis experiment, the subjects were given a thorough explanation and demonstration of these three force feedback modes. Then, they were ready to begin the materials analysis experiment. Therefore, the next step is to analyze their interaction with the Omnis in each of the three force feedback modes in this experiment.
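The sketch below, referenced at the start of this section, renders the three modes as simple linear spring forces on each master Omni. It is only an interpretation of the verbal descriptions above; the exact formulations are equations 9 through 17 in section 3.4, and the gain value and function names here are placeholders.

    import numpy as np

    K = 0.25   # placeholder spring gain; the actual gains are defined in section 3.4

    def system_force(master, slave):
        # System Force Feedback: each master is pulled toward the slave Omni.
        return -K * (master - slave)

    def social_force(master, other_master):
        # Social Force Feedback: each master is pulled toward the other master,
        # so the force grows with the fighting distance between the partners.
        return -K * (master - other_master)

    def dual_force(master_a, master_b, slave):
        # Dual Force Feedback: both masters feel the identical force, based on the
        # distance between the average master position and the slave Omni.
        average = 0.5 * (master_a + master_b)
        return -K * (average - slave)

    # Example with placeholder positions (mm):
    a = np.array([10.0, 0.0, 5.0])
    b = np.array([4.0, 2.0, 5.0])
    s = np.array([6.0, 1.0, 0.0])
    print(system_force(a, s), social_force(a, b), dual_force(a, b, s))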

6.2. Human-Robot Interaction in the Materials Analysis Experiment

It was seen that, in a human-robot interaction within a virtual environment, the subjects had an easier time controlling rotational motion than translational motion, even though more subjects felt that rotational motion was actually more difficult to control. In the materials analysis experiment, however, the motion involved was all translational motion in a three-dimensional Cartesian space. There were five materials tested, and three different force feedback modes were used for each of the materials. Figure 16 shows the average fighting distance between the subjects for each force feedback mode and for each Cartesian direction in the materials analysis experiment.

Figure 16. The fighting distance between the subjects per Cartesian direction in the materials analysis experiment. The error bars represent one standard deviation from the mean.

As seen in figure 16, there was no significant difference between the five materials themselves. This indicates that the material hardness itself does not have a large impact on the fighting distance. However, there is a significant difference between the force feedback modes.

As was hypothesized, Dual Force Feedback consistently produced the largest fighting distance, then System Force Feedback, and Social Force Feedback produced the smallest fighting distance. The average fighting distance is statistically significantly smaller in the z direction than in the x or y directions. There is no statistically significant difference between the fighting distances in the x and y directions, although out of the 15 possible combinations, the y direction had the largest fighting distance of the three Cartesian directions in 9 cases, the x direction had the largest fighting distance in 6 cases, and the z direction never had the largest fighting distance.

To compare the statistical significance between the three Cartesian directions, a paired t-test was run for each comparison, similar to what was done for the box interaction experiment. The comparisons made were between the x and y directions, the x and z directions, and the y and z directions, independent of the force feedback mode. Then, System Force Feedback was compared to Social Force Feedback, System Force Feedback was compared to Dual Force Feedback, and Social Force Feedback was compared to Dual Force Feedback, using only the 3-D fighting vectors, so that these comparisons were independent of the Cartesian direction. This was done for both fighting distance and fighting velocity, for a total of 12 t-test comparisons.

The first six comparisons involved the fighting distances in figure 16. Comparing the x direction to the y direction yielded a p-value of 0.42. Comparing the x direction to the z direction yielded a p-value of 0.047. Comparing the y direction to the z direction yielded a p-value of less than 0.0001.

This confirms that the fighting distance in the z direction was statistically significantly smaller than in the x or y directions, but that there was no statistically significant difference between the fighting distances in the x and y directions. Comparing System Force Feedback to Social Force Feedback yielded a p-value of less than 0.0001. Comparing System Force Feedback to Dual Force Feedback yielded a p-value of less than 0.0001. Comparing Social Force Feedback to Dual Force Feedback yielded a p-value of less than 0.0001. This also confirms what was observed in figure 16: Social Force Feedback produced the smallest fighting distance, System Force Feedback produced significantly more than Social Force Feedback, and Dual Force Feedback produced significantly more than the other two modes.

The next quantity to be analyzed was the fighting velocity. Figure 17 shows the average fighting velocity between the subjects for each force feedback mode and for each Cartesian direction in this experiment.

Figure 17. The fighting velocity between the subjects per Cartesian direction in the materials analysis experiment. The error bars represent one standard deviation from the mean.

As seen in figure 17, there was no significant difference between the five materials themselves, indicating that the material hardness did not have a large impact on the fighting velocity, just as it did not have a large impact on the fighting distance. Again, it is clear that, for the fighting velocity, Dual Force Feedback produced the most fighting and Social Force Feedback produced the least.

The final six statistical comparisons involved the fighting velocities in figure 17. Comparing System Force Feedback to Social Force Feedback yielded a p-value of less than 0.0001. Comparing System Force Feedback to Dual Force Feedback yielded a p-value of less than 0.0001. Comparing Social Force Feedback to Dual Force Feedback yielded a p-value of less than 0.0001. This also confirms what was observed in figure 17: Social Force Feedback produced the smallest fighting velocity, System Force Feedback produced significantly more than Social Force Feedback, and Dual Force Feedback produced significantly more than the other two modes.

However, for fighting velocity, there was a much larger difference between the x, y, and z directions than there was for the fighting distance. Hence, the fighting velocities in the three Cartesian directions were far more statistically significantly different from each other than the fighting distances were. Comparing the x direction to the y direction yielded a p-value of less than 0.0001. Comparing the x direction to the z direction yielded a p-value of less than 0.0001. Lastly, comparing the y direction to the z direction yielded a p-value of less than 0.0001.

This analysis also confirms what was observed in figure 17: the fighting velocity in the x direction was statistically significantly smaller than in the other two directions, the fighting velocity in the z direction was statistically significantly larger than in the x direction but smaller than in the y direction, and the fighting velocity in the y direction was statistically significantly larger than in the other two directions. For all 15 cases in figure 17, the y direction has by far the largest fighting velocity, the z direction has the second largest, and the x direction has the smallest. This showed that motion in the y direction was much more difficult to match than motion in the x and z directions. This result indicates that, quite often during the experiment, one subject was moving his stylus up while the other was moving his down. Since most of the motion was up-and-down motion, it was somewhat expected that this direction might have a larger fighting velocity and fighting distance. However, the subjects were instructed to work together as much as possible in moving the slave Omni, and this data shows that they clearly were not doing so a good amount of the time. This may have been responsible for some of the repeat hardness measurements that had to be taken.

One other notable observation in both figures 16 and 17 is the standard deviations present in the data. For the fighting distance, the standard deviations were generally very large, indicating that some dyads cooperated much more than others during this experiment. This is largely due to the fact that in some dyads both subjects had previous experience working with robotic devices, while in others only one had previous experience, and in others neither subject had ever worked with a robotic device before. For fighting velocity, the standard deviations were much smaller than for fighting distance, but they were still quite large.

The best way to reduce the fighting distance and fighting velocity is, just as in the box interaction experiment, for the subjects to get substantial practice working with robotic devices in experiments such as this one.

Furthermore, running the experiment in Social Force Feedback mode all the time would further reduce the fighting distance and velocity, but unfortunately the subjects would not be able to receive haptic feedback from the material. Therefore, the best force feedback mode to use for this type of interaction would be System Force Feedback. This mode allows the subjects to feel the stiffness of the material, and it also allows them to feel if they are fighting with their partner. With enough practice in this feedback mode, the levels of fighting distance and velocity should be reduced greatly. Yet another possibility would be to introduce visual feedback into this experiment. The positions of both master Omnis and the material could be shown in an OpenGL window. That way, the subjects would see exactly how far apart they are from their partner, which would likely reduce the fighting distance by a significant amount. The fighting velocity should be reduced somewhat too, as each subject would see the motions of his partner on the screen and could more easily mimic them. These possibilities will be discussed further in section 7.1.

6.3. Obtaining the Fundamental Properties of Materials

The final objective in this research was to study the effectiveness with which a human-robot team could obtain the fundamental properties of materials such that the materials could be identified. Of the five materials tested, the property measured was the Brinell hardness. The measured hardness values were then compared to the actual hardness values, for each of the three force feedback modes. Figure 18 shows the average measured hardness value for each force feedback mode and for each material, compared to the actual hardness of that material.

Figure 18. The hardness test results from the materials analysis experiment. The error bars represent one standard deviation from the mean.

As seen in figure 18, the measured hardness values are quite close to the actual hardness values, within the standard deviations. As stated earlier, the measured values for soft wood and aluminum were consistently high, but this was due to the Omni having difficulty actually measuring these harder materials. As also seen in figure 18, Dual Force Feedback produced a slightly larger standard deviation than the other two feedback modes. There is definitely a significant difference between the measured hardness values of the five materials, indicating that the methods used were fairly accurate in calculating the hardness values. There was not a significant difference between the force feedback modes, indicating that the feedback mode did not have a strong impact on the calculated hardness value.

Once the materials analysis experiment was complete, the subjects were given a table of ten materials and their actual hardness values.

They were then shown all 75 hardness data points: five for each of the three force feedback modes, for each of the five materials. They then used this data, the table, and the haptic feedback rendered to them during the experiments to try to figure out the identity of the materials. It was found that some of the materials were more difficult to identify than others. This was partly due to the materials given in the table, and partly due to the subjects being more familiar with some of them than with others. The material that was easiest to identify was aluminum, and the most difficult to identify was soft wood. Table 4 lists the number of subjects who correctly identified each material and the resulting percentage.

Table 4. The success of the subjects in identifying the unknown materials in the materials analysis experiment. A total of 20 subjects participated in this experiment.

Material      Subjects Correctly Identifying    Percentage Correct
Soft Foam     17                                85%
Styrofoam     14                                70%
Cardboard     11                                55%
Soft Wood     7                                 35%
Aluminum      18                                90%

Out of all 20 subjects tested, only three were able to correctly identify all five materials.

For soft wood, the major factor that made it more difficult to identify was that the Omnis measured it to be slightly harder than it really was, causing seven of the subjects to incorrectly assume that it was hard wood. However, this should not have been such an issue, as soft wood has an actual hardness value of 1.6 while hard wood has an actual hardness value of 4.0, and the average values measured by the Omnis were around 1.9. Testing for more material properties, such as yield strength, elastic modulus, and others, would likely yield better results for subject identification. However, considering that there were ten materials in the table, the rate of success by chance alone would be two subjects per material. Since the actual number of subjects who correctly identified the materials was much higher than this, testing for hardness alone clearly eliminates a lot of possibilities. Even if you cannot pinpoint the identity after just this test, you can often reduce the number of possibilities by a significant amount.
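The identification step itself amounts to matching a measured hardness against the reference table, as in the hypothetical sketch below. The table values other than soft wood (1.6) and hard wood (4.0) are placeholders, not the table given to the subjects.

    # Hypothetical identification step: match a measured hardness to the closest
    # entry in a reference table (placeholder values except soft wood and hard wood).
    reference_hardness = {
        "soft foam": 0.05,
        "styrofoam": 0.3,
        "cardboard": 0.8,
        "soft wood": 1.6,
        "hard wood": 4.0,
        "aluminum": 15.0,
    }

    def identify(measured):
        # Return the candidate material whose tabulated hardness is closest.
        return min(reference_hardness, key=lambda m: abs(reference_hardness[m] - measured))

    # The Omnis measured soft wood at about 1.9, which is still closer to
    # soft wood (1.6) than to hard wood (4.0).
    print(identify(1.9))   # prints "soft wood"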

94 Social Force Feedback was significantly less than the other two, Dual Force Feedback was significantly more than the other two, and System Force Feedback was significantly more than Social Force Feedback but significantly less than Dual Forc e Feedback. The statistical analysis also proved that the y direction promoted the largest fighting velocity, while there was not nearly as much difference between the fighting distances. However, what did the subjects have to say about this? Did their perception match the numerical results? Out of the 20 subjects tested, three stated that they liked Sy stem Force Feedback the most, ten stated that they liked S ocial Force Feedback the most, six stated that they liked Dua l Force Feedback the most, and one stated no preference to either mode Then, the subjects were asked t o rate each mode on a scale of one to five wi th one indicating that they did not like the mod e or that it was too hard, and five indicating that they really liked the mode and found it to be very comfortable and user friendly S ystem Force Feedback received a mean score of 3.15, S ocial Force Feedback received a mean score of 3.35, and Dual Force Feedback received a mean score of 3.45. System Force Feedback had a median of 3 and a mode of 3, Social Force Feedback had a median of 4 and a mode of 4, and Dual Force Feedback had a median of 3 and a mode of 3. This indicates that the subjects as a whole liked Social Force Feedback and Dual Force Feedback more than they liked System Force Fe edback. Next, the subjects were asked which feedback mode they found to be the easiest and which they found to be the most difficult. Four stated that System Fo rce Feedback was the easiest, ten stated that Social F orce Feedback was the easiest, and five stated that Dual Force Feedback was the easiest As for which was the most difficult, six stated t hat

As for which was the most difficult, six stated that System Force Feedback was, five stated that Social Force Feedback was, and eight stated that Dual Force Feedback was; one subject stated no preference among the modes. This indicates that there was some disagreement between the subjects on which modes were easier or harder. It seems fairly clear from the subject feedback that they liked System Force Feedback the least, even though it would be the mode of choice for this experiment. The subjects indicated that they liked Dual Force Feedback, but that it was too difficult to control and to cooperate with your partner in. They also indicated that they liked Social Force Feedback because it was easy to use and easy to cooperate with your partner in. However, the biggest criticism of Social Force Feedback was that you cannot feel the material you are interacting with, which is the primary disadvantage of this mode.

With that, the subjects essentially felt the same way about the three feedback modes as the numerical analysis shows. Dual Force Feedback feels nice, but it is more difficult to control. Social Force Feedback is easy to control, but you cannot feel anything you are interacting with. System Force Feedback offers very good force feedback, but it requires a lot of practice to get used to and can often be confusing at first. Dual Force Feedback can often be confusing at first, too.

The final question on the survey asked the subjects whether interacting with their partner through a robotic device in a virtual environment, in either the box interaction experiment or the materials analysis experiment, was easier or more difficult than in real life. Only three subjects felt that it was easier, and 17 felt that it was more difficult. Most of the subjects who felt it was more difficult said that this was because it was a new experience, and not something they were used to doing on a regular basis.

Several then went on to say that with practice, such as in the second box simulation or by the end of the materials analysis experiment, the robotic interaction seemed much easier than at first. Of the three subjects who stated that it was easier than in real life, two had previous experience with a robotic device and one did not. The key reasons stated by the 17 subjects who felt that it was more difficult included that it was difficult to tell your position on the box from the visual feedback given, that the Omni made hand-eye coordination more difficult, and that there was an intermediate device connecting you to the object. Some possible solutions for overcoming these difficulties were presented in section 5.3, along with some of the possible force and visual feedback additions. After the completion of the survey, the subjects were sincerely thanked for participating, and their portion of the research was complete.

These results have a strong impact on the development of human-robot interactions. There are many future developments, experiments, and applications that can come from this research. Just as a lot was learned from this research, a lot more will be learned from future research in this field. The ultimate goal is to advance human-robot interactions to the point at which human-robot teams can perform advanced research anywhere on Earth and beyond, into the universe.

Chapter 7
Future Developments, Experiments, and Applications

This research has demonstrated some of the ways in which humans and robots can interact with each other in a research setting. Overall, there have been many interesting observations, results, and findings. However, there are several continuations of this research which may come in the near future and which can expand upon the knowledge gained from it. In general, there are two types of future developments, experiments, and applications which can come from this work. The first is a set of direct expansions upon this research, many of which were briefly introduced in chapters 5 and 6. These direct expansions would serve to improve human-robot interaction and performance, as well as study some other methods of human-robot interaction not fully observed in this research. The second type is a set of future developments that could eventually come from this and similar research, several years in the future. This set would include many of the concepts presented in chapter 2, such as robotically teleoperated surgery (Yamamoto et al., 2008), advanced robots as material analyzers, robots as personal assistants, and space-faring human-robot teams which travel beyond the Earth to study the cosmos. Both types of future research are very interesting and offer significant contributions to the scientific community, but the first set of research must be performed before the second set can.

We must learn more about human-robot interaction, develop better methods of force feedback and visual feedback, and develop the human social factors such that the subjects will naturally work just as well with a robotic device, or with another human through a robotic device, as they would when working with another human directly.

7.1. Future Developments and Expansions

Of the two types of future developments, experiments, and applications for this project, the first is the most straightforward to pursue. As mentioned in chapter 5, there are significant improvements in the visual feedback which can be implemented for the box interaction experiment. Several of the subjects stated in the survey that it was too difficult to determine their position on the box with the visual feedback given. In the experiment, one subject's attachment points on the box were color-coded green and the other subject's were color-coded blue. However, as the box rotated, these points could be hidden behind the box. This had the disadvantage that it made it more difficult for someone without considerable practice in the environment to know where he was. Even more so, it was difficult for the subjects to tell which of their Omnis on the screen corresponded to their left Omni and which corresponded to their right Omni. A direct expansion of this research would be to run the box interaction experiment with two or three different visual feedback methods and then compare the results, just as the results were compared for horizontal and rotated boxes, and for the first and second simulations, in this research. Different types of visual feedback were described in detail in section 5.3.

Section 5.3 also went into the possibility of comparing different force feedback methods for the box interaction experiment as well. While this may have some advantages, it is clear that improved visual feedback would be the most advantageous, at least for the box interaction experiment. None of the subjects suggested that the force feedback available was in need of improvement, although one did say that the Omnis simply could not render enough force back to them to create the virtual environments as accurately as would be necessary to compare to real life. However, several subjects mentioned the need for improved visual feedback in this experiment.

For the materials analysis experiment, there are several direct future expansions as well. As mentioned in section 6.2, the most productive force feedback mode for this experiment would be System Force Feedback. Although this mode was not the most popular with the subjects, it does provide the best force feedback for accurately rendering the stiffness of the materials and the fighting distance and velocity between the subjects. In a future version of this experiment, more materials could be tested, focusing primarily on soft materials with a hardness value of less than 2.0. Visual feedback, such as that mentioned in section 6.2, could be introduced, allowing the subjects to see the position of the material in the box on the screen, as well as the positions of the slave Omni and the two master Omnis. Not only would this reduce the fighting distance and velocity between the subjects, but it would also eliminate the need for the subjects to look in the direction of the slave Omni, further reducing the possibility of the subjects prematurely disturbing a measurement. However, as emphasized previously, the only way to truly get excellent human-robot interaction with very little fighting is through practice.

Therefore, in any future version of this research, it may be best to focus on just one experiment instead of two, and to allow the subjects to practice with several virtual environments of increasing complexity, instead of just two.

Now that both of these experiments have been completed and their results analyzed, there are not only these direct expansions and improvements which can be done, but also other similar research projects. One type of experiment which is also being done involves one subject performing a bimanual task in which his left Omni generates a preset motion and he must match it as closely as possible with his right Omni. This offers an additional type of human-robot interaction that could not be studied in this research. This bimanual experiment would further study human-robot interaction between one human and two robots. A variation on this would be to use one experiment operator, one subject, and two Omnis in Dual Force Feedback mode. The operator would move his Omni along a simple path and the subject would try to match the motion based on the feedback. In Dual Force Feedback, both the operator and the subject would feel the same force rendered back to them. This variation would be analyzed in a similar fashion to the bimanual experiment, except that it would apply force feedback to the subject, whereas the bimanual experiment does not, and it would explore the interaction of a two-person, two-robot team.

However, there is plenty of other research in the second category of future developments, that which would include very advanced applications, often taking many years to complete, and offering incredible new inventions and discoveries along the way. The next three sections will discuss some of these possibilities, and many of the exciting new applications which could come from them.

7.2. Robots as Material Analyzers

The materials analysis experiment demonstrated how a human-robot team can test a material for its properties. However, it only tested for one simple property, hardness. Any of the properties discussed in Table 1 in section 2.1, as well as many other properties, can be tested for by a more advanced robotic device. It is fairly unlikely that two people would be interacting with a series of robots in a more advanced materials analysis, as they did in this experiment. Instead, it would likely be one human scientist remotely operating either one device or a series of devices. However, the general concept is the same: the human scientist must be able to respond to force feedback and cooperate with the robot to get the job done.

Even more advanced applications may involve detailed material analysis at the micro or even nano scale. Human-robot interactions could still be performed at these scales through the use of force scaling, similar to what was done in Saeidpourazar and Jalili's nanorobotic manipulation research (Saeidpourazar, Jalili, 2008). The human scientist would move the stylus, and he would see an atomic force microscope image on the screen of the nanorobot and its actual interaction with the sample. As the scientist moved the stylus, the robot would mimic the motion, with the forces and distances scaled by a factor of, say, one million. Therefore, a motion by the scientist of 10 centimeters would correspond to a motion of the nanorobot of 100 nanometers; a minimal sketch of this kind of scaling is given at the end of this section. With this, it is clear that teleoperators play a massive role in human-robot materials analysis, in current research and in future research.

Furthermore, there are many materials which are simply impossible for humans to access directly. For accessing and studying materials in places such as nuclear waste sites, the interior of volcanoes, or the bottom of the ocean, it is simply too dangerous or outright impossible to send humans to interact directly with the environment, but teleoperated robotic systems make it feasible. An advanced slave robot, specifically designed not only to survive but to work effectively in such harsh conditions, is sent to the target location, while the human scientist sits comfortably in a laboratory interacting with it through a master robot and a computer (Buss et al., 2010). While the field of robotics offers great potential in materials science and analysis, robotics can also contribute greatly to the advancement of many other applications. Throughout this century, robots will likely be used to improve the lives of many individuals, not just in the scientific community, but in the lives of each and every one of us.
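As a minimal sketch of the force-scaling idea mentioned above, and under the stated assumption of a single scale factor of one million, the mapping below scales master-side displacements down and slave-side forces up. The factor and function names are illustrative placeholders only.

    SCALE = 1.0e6   # placeholder scale factor: 10 cm at the master maps to 100 nm at the slave

    def master_to_slave_position(master_displacement_m):
        # Scale a master-side displacement (meters) down to the nanorobot's workspace.
        return master_displacement_m / SCALE

    def slave_to_master_force(slave_force_n):
        # Scale a slave-side interaction force (newtons) up so the operator can feel it.
        return slave_force_n * SCALE

    # Example: a 10 cm hand motion commands a 100 nm motion of the nanorobot.
    print(master_to_slave_position(0.10))   # 1e-07 m, i.e. 100 nanometers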

7.3. Robotics in the 21st Century

As human-robot interaction continues to be explored and developed, robots will eventually become household items. In this research, the subjects and the robots were all in the same laboratory, sitting at the same workstation. However, it is also possible for a virtual haptic interaction to be conducted remotely, with the human subjects being hundreds or even thousands of miles apart, through the use of a Collaborative Virtual Environment (Chellali et al., 2010). In a Collaborative Virtual Environment, two or more subjects work together with one or more virtual objects in a digital space connected through the internet. This allows the users to be located anywhere in the world.


would be able to feel the same force feedback, see the same visual feedback, and communicate with each other just as if they were sitting right next to each other. Another advantage is that it would allow dozens or even hundreds of users to interact with a larger, more complex virtual environment, which would not be practical in a single laboratory setting (Chellali et al., 2010). As discussed in Section 2.3, interactions involving two members, whether two humans or one human and one robot, typically have an executer and a conductor. However, in virtual environments involving many people, it is much more difficult to strictly define these roles, and in some cases no member has a distinct role. Research is already being done on the interaction between two human subjects in which neither subject is permitted to take either the leader or the follower role. The goal is to establish a model in which both subjects work together to perform the task, instead of one person leading and the other passively following along. This concept can then be expanded to human-robot interactions involving several members (Evrard, Kheddar, 2009). While this research focused largely on the concept of work, another area for future development is the concept of energy. Some research is already taking place in this area, studying the energy exchange in a human-human-robot interaction. That research compares human-human, human-robot, and human-human-robot teams. It found that performance was better when two humans were involved than in any case with only one human, even when the mass of the virtual object was reduced by 50% (Feth et al., 2009).
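The energy exchange mentioned above can be estimated from logged data as the accumulated product of each partner's applied force and the object's velocity over time. The sketch below is a minimal, single-axis bookkeeping under that assumption, with invented sample values and an assumed 1 kHz logging rate; it is one plausible way to do the accounting, not the specific method used by Feth et al. (2009).

```cpp
// Sketch of estimating the mechanical energy a partner puts into (positive) or
// absorbs from (negative) a shared virtual object, as the sum of force times
// velocity over time. Sample values and the 1 kHz rate are invented for
// illustration; this is not the method of Feth et al. (2009).
#include <cstdio>
#include <vector>

struct Sample {
    double force;     // force applied along one axis, N
    double velocity;  // object velocity along that axis, m/s
};

double energyExchanged(const std::vector<Sample>& log, double dt)
{
    double energy = 0.0;
    for (const Sample& s : log)
        energy += s.force * s.velocity * dt;  // work done on the object in this step
    return energy;
}

int main()
{
    const double dt = 0.001;  // assumed 1 kHz servo/logging rate
    const std::vector<Sample> subjectA = {{1.0, 0.02}, {1.2, 0.03}, {0.8, 0.01}};
    const std::vector<Sample> subjectB = {{-0.5, 0.02}, {-0.6, 0.03}, {0.4, 0.01}};

    std::printf("Energy from subject A: %+.6f J\n", energyExchanged(subjectA, dt));
    std::printf("Energy from subject B: %+.6f J\n", energyExchanged(subjectB, dt));
    return 0;
}
```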


However, it would be of interest to study the energy exchange among many subjects in a large interaction, working with each other through a Collaborative Virtual Environment. Furthermore, it would be of interest to study the role that human social factors play in a typical single-laboratory setting versus in a Collaborative Virtual Environment. Does remote operation affect the way in which the subjects interact with each other and with the robotic devices? Lastly, one of the greatest potentials of the field of robotics is the use of robots as personal assistants. While a robot such as the Phantom Omni would not be capable of such a task, a more sophisticated robot may be. To achieve this, however, there are several things the robot must have. First of all, the robot must be programmed to understand spoken language very well, meaning that it must have voice recognition. Next, it must be programmed to respond to thousands of different vocal commands and, in turn, complete thousands of common household tasks relatively quickly and efficiently. This will likely require complex programming algorithms as well as a mechanical design which is very robust and adaptable. Finally, it must be easy to clean, maintain, and service, as well as widely available and affordable. Although not possible today, it is likely that later this century robotics technology will have advanced to the point where such a robot will be readily available at a cost which is not prohibitive to the average consumer. By then, enough research will have been done on human-robot interaction that the robots will be as human-like in their interaction as possible, and instructing the human owners on how to interact effectively with the devices will be relatively straightforward as well.


There are many exciting applications for the future of robotics here on Earth. However, another exciting prospect for this field is the future of robotics out in the final frontier. Some of the greatest discoveries are waiting to be made by human-robot teams in space.

7.4. The Future of Humans and Robots in Space

While designing robots tough enough to venture to some of the most inhospitable regions of the Earth is a significant challenge, designing robots tough enough to venture into space presents a whole new league of complexity. Beyond an altitude of 100 miles, the atmosphere is so tenuous that a spacecraft is essentially in a vacuum, better than even the best laboratory vacuums on Earth. Beyond 1,000 miles up, space is filled with high-energy cosmic radiation and micrometeoroids which could disable a robotic explorer or even kill a human astronaut (Schilling, Jungius, 1996). Humans as well as robotic explorers travelling to other worlds must be protected from these harsh conditions, which is very difficult and expensive to do. Without adequate protection, sending humans on longer space voyages beyond the Moon, such as to Mars or farther, would expose them to dangerous levels of radiation; the resulting radiation sickness could in turn cause cancer or other adverse health problems (Fry, 1984). Although it is much easier to protect a small space probe than a large manned spaceship, eventually the technology will reach the point where this obstacle is overcome. With advancements in propulsion, the obstacle of long travel times between planets will be overcome as well. With these two obstacles overcome, humans and


robots will travel together throughout the solar system to perform research, study materials, and search for life. In the future, space stations will be constructed in orbit around different worlds, and human-robot teleoperators will be necessary for repairs. It is much safer to have a robonaut perform an external repair than to have an astronaut perform a spacewalk. Furthermore, a manned mission to Mars would likely use robotic teleoperators to study the Martian environment. Just as when studying harsh regions of the Earth, the human scientist can sit comfortably inside the Mars spaceport and drive the robonaut to a site of interest, where it will take samples and perform on-site research (Bluethmann et al., 2003). There will be no communications time lag, since the human operators will be on Mars and not on Earth. Furthermore, they will be able to analyze the data in real time, greatly expediting the rate at which research can be performed. In the distant future, humans and robots may travel beyond the solar system, to distant worlds around distant suns. For a high-speed spacecraft flying through the cosmos, external repair robonauts will be essential, as will personal assistant robots, which will aid the astronauts in basic tasks throughout their multi-year flight. There are many great discoveries just waiting to be made, and many new adventures on the horizon. The field of robotics and materials science will play a central role in these adventures throughout the 21st century and beyond.
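To put the avoided communications lag in perspective, the short sketch below computes the one-way signal travel time between Earth and Mars from approximate minimum and maximum distances; the distance figures are rough assumed values used only for illustration.

```cpp
// Rough illustration of the Earth-to-Mars signal delay avoided by having the
// operators on Mars rather than on Earth. The distances are approximate,
// assumed values (closest approach versus near the far side of the Sun).
#include <cstdio>

int main()
{
    const double c    = 299792.458;  // speed of light, km/s
    const double dMin = 54.6e6;      // ~ minimum Earth-Mars distance, km
    const double dMax = 401.0e6;     // ~ maximum Earth-Mars distance, km

    std::printf("One-way radio delay: %.1f to %.1f minutes\n",
                dMin / c / 60.0, dMax / c / 60.0);  // roughly 3 to 22 minutes
    return 0;
}
```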


Chapter 8 Conclusions

Robotics has been a rapidly growing field throughout the second half of the 20th century and into the 21st, and it will continue to grow and develop at an accelerating pace throughout the 21st century and beyond. With that, human-robot interaction will become more and more common in the coming decades. A significant amount of research has been done in the fields of robotics and materials science, and a significant amount is going on right now. This research was unique in that it combined several aspects, including virtual environments, teleoperators, and material analysis. It studied the way two humans worked with each other in a virtual environment, and which Cartesian directions are easier to cooperate in and which are more difficult. This research set out to explore the human interactivity with a robot to obtain the fundamental properties of materials. In doing so, several questions needed to be answered. For instance, when two humans are working together through a set of robotic devices, do they tend to work together or to fight with each other more? In which Cartesian direction do they have the most difficulty? Does fighting drastically affect the performance of the team? Finally, what measures can be taken to promote better cooperation between humans and robots, to ultimately allow humans to work just as comfortably with a robotic partner as with a human partner?


Through analysis of the fighting factor, it was found that when two humans work together through a set of robotic devices, a considerable amount of fighting occurs. However, there is a considerable amount of cooperation as well. Out of all the trials performed, in about half the subjects were cooperating more than they were fighting, and in about half they were fighting more than they were cooperating. It is clear that the Cartesian direction in which the subjects have the most difficulty cooperating is the y direction. The fighting velocity was statistically significantly larger in the y direction than in the other two directions. The fighting distance was slightly larger in the y direction than in the x direction, although the difference was not statistically significant. The fighting distance was, however, statistically significantly larger in the x and y directions than in the z direction. It was also found that increased fighting did adversely affect the performance of the team, although not nearly as much as was hypothesized. A few of the target boxes with a fighting factor of 5 were still reached in under ten seconds, although this was more likely due to chance than to skill. In general, though, the higher the fighting factor, the longer it took to reach the target and the more fatigued the subjects became. Lastly, there are several measures which can be taken to promote better cooperation between humans and robots. First of all, improved force feedback and visual feedback, such as that discussed in Sections 5.3 and 7.1, can be implemented to reduce the fighting distance and fighting velocity, as well as to generate more fighting factors of 1 or 2 and reduce the number of higher ones. Also, the force feedback could be tailored


to help compensate for weaknesses in the interaction, such as fighting in the y direction. For instance, the spring force rendered back to the subjects could be larger for displacements in the y direction than for displacements in the x and z directions. Lastly, the subjects could be given more time to practice with several virtual environments leading up to the experiments, allowing them to become more comfortable with the devices, the virtual environments, and the overall haptic interaction. In conclusion, there are many future applications which can come from this research and others, some of which have the potential to change the world. Someday, we will live in a world in which robots are an everyday part of life. They will be common around the house and in the workplace, and may even become companions with which we can interact when no one else is around. Further in the future, humans and robots will travel out into the cosmos together, exploring and colonizing other worlds. New material alloys will be discovered, developed, and studied, and human-robot interaction will play a major role in the research performed on them. Eventually, massive colonies will exist throughout the solar system in which humans and robots will live, work, and interact on a daily basis. The challenges we face are great, but the rewards are even greater. The knowledge we have gained involving haptics, human-robot interaction, virtual environments, and more effective teleoperator systems will prove immensely valuable in that journey. And what an adventure it will be. Someday, our descendants will live in a world of which we can only dream. In the journey there, we will be able to dream, to inspire, and to explore.


References

A Roadmap for US Robotics: From Internet to Robotics. 86 pages. 2009.

tic of a two Degree of Freedom on Haptics. Pages 296-301. 2006.

New Scientist. Pages 42-45. 2009.

Bluethmann, W; Ambrose, R; Diftler, M; Askew, S; Huber, E; Goza, M; Rehnmark, F; et al. Robonaut: A Robot Designed to Work with Humans. Pages 179-197. 2003.

Buss, M; Peer, A; Schaub, T; Stefanov, N; Unterhinninghofen, U; Behrendt, S; Leupold, et al. Multi-Modal Multi-User. The International Journal of Robotics Research, Volume 29, No. 10. Pages 1298-1316. 2010.

Pages 1-8. 2010.

Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems. Pages 45-50. 2009.

Exchange in Haptic Human-Human Interaction in a Shared Virtual Object. Haptic Interfaces for Virtual Environment and Teleoperator Systems. Pages 338-343. 2009.

Research, Volume 4, Number 10. Pages 121-130. 1984.


Feedback Joysticks to Promote. Annual Meeting Proceedings, Virtual Environments. Pages 1911-1915. 2001.

Virtual Partner Interaction (VPI): Exploring Novel Behaviors via Coordination Dynamics. Issue 6. Pages 1-11. 2009.

Volume 29. Pages 271-282. 2007.

Materials Science and Engineering A, Volume 433. Pages 261-268. 2006.

IEEE Transactions on Haptics. Pages 301-307. 2006.

Linear versus Nonlinear E. Transactions on Haptics. Pages 519-524. 2007.

Murray, RM; Li, Z; Sastry, SS. A Mathematical Introduction to Robotic Manipulation. 467 pages. 1994.

ications of Robotics and Autonomous Systems, Volume 23. Pages 37-43. 1998.

Pitchumani, R; Zhupanska, O; Meesters, GMH; Characterization. Powder Technology, Volume 143-144. Pages 56-64. 2004.

Human-Human and Human-Robot. Number 2. Pages 108-120. 2008.


Saeidpourazar, R; Jalili, N. Robotic Manipulation Using a RRP Nanomanipulator: Part A, Mathematical Modeling and Development of a Robust Adaptive Driving Mechanism. Applied Mathematics and Computation, Volume 206. Pages 618-627. 2008.

Schildt, Herbert. 2nd Edition. Published by McGraw-Hill Osborne Media. 576 pages. 2003.

Planetary Exploration. Control Engineering Practice, Volume 4, Number 4. Pages 513-524. 1996.

SensAble Technologies. Phantom Omni Haptic Device. <http://www.sensable.com/haptic-phantom-omni.htm>. 2010.

Siciliano, B; Khatib, O. Springer Handbook of Robotics. Published by Springer. 1,611 pages. 2008.

Journal of Materials Science, Volume 37. Pages 4197-4202. 2002.

Stefanov, N; Peer, A; Buss, M. IEEE Transactions on Haptics. Pages 51-56. 2009.

492. 2000.

Wojtara, T; Uchihara, M; Murayama, H; Shimoda, S; Sakai, S; Fujimoto, H; Kimura, H. Human-Robot Collaboration in Precise Positioning of a Three-Dimensional Object. Automatica, Volume 45. Pages 333-342. 2009.

Transactions on Haptics. Pages 217-233. 2008.

Yamamoto, T; Vagvolgyi, B; Balaji, K; Whitcomb, LL; Okamura, AM. Estimation and Graphical Display for Teleoperated Robot. IEEE Transactions on Haptics. Pages 4239-4245. 2009.