USF Libraries
USF Digital Collections

Robot data and control server for Internet-based training on ground robots


Material Information

Title:
Robot data and control server for Internet-based training on ground robots
Physical Description:
Book
Language:
English
Creator:
Kalyadin, Dmitry
Publisher:
University of South Florida
Place of Publication:
Tampa, Fla.
Publication Date:
2007

Subjects

Subjects / Keywords:
Teleoperation
Rescue robots
Remote presence
Distributed systems
Java
Dissertations, Academic -- Computer Science -- Masters -- USF (lcsh)
Genre:
bibliography (marcgt)
theses (marcgt)
non-fiction (marcgt)

Notes

Summary:
ABSTRACT: To facilitate the emerging need for remote robot training and reachback, this thesis describes a system that allows for convenient web browser based robot operation over the Internet, while providing the means for recording and playback of all video, data, and user actions. Training of first responder personnel on rescue robots is hindered by the fact that these devices are very expensive and are affordable by only a few specialized organizations that make them available by request at the time of a disaster. The system described in this thesis will allow first responders to practice on the robots without having to be physically present at the same location. Having these capabilities of remote presence, the system can also be used in a real world response to transmit robot video and data to persons not present at the site of the incident, such as structural engineers or medical doctors. The recording capability will be used as an aid during training and to help resolve accountability issues in the real world scenario. Similar demands in the area of network video surveillance are met by the use of a network DVR that records and relays video and controls between IP cameras and Internet clients. The server implemented in this thesis is unique in that it extends these capabilities to include data from various robot sensors. All of the above-mentioned video, data, and controls are combined into a convenient web browser based graphical user interface. The server was implemented and tested using rescue robots, but could be tailored to any other distributed robot architecture where reliable and convenient web browser based robot operation over the Internet is desired. System testing validated the server's capabilities of remote multi-user robot operation, as well as its unique ability to store and play back an external camera view along with robot video and data, to help with situation awareness. Conclusions drawn from the experiments indicate that this system can indeed be used for Internet robot training, as well as for other robotics research, such as bandwidth regulation techniques or human-robot interaction studies by non-computer-science researchers who do not have physical access to robots.
Thesis:
Thesis (M.S.)--University of South Florida, 2007.
Bibliography:
Includes bibliographical references.
System Details:
System requirements: World Wide Web browser and PDF reader.
System Details:
Mode of access: World Wide Web.
Statement of Responsibility:
by Dmitry Kalyadin.
General Note:
Title from PDF of title page.
General Note:
Document formatted into pages; contains 84 pages.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 001920236
oclc - 187985727
usfldc doi - E14-SFE0002111
usfldc handle - e14.2111
System ID:
SFS0026429:00001




Full Text

PAGE 1

Robot Data and Control Server for Internet-Based Training on Ground Robots

by

Dmitry Kalyadin

A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer Science
Department of Computer Science and Engineering
College of Engineering
University of South Florida

Major Professor: Robin Murphy, Ph.D.
Miguel Labrador, Ph.D.
William Armitage, Ph.D.

Date of Approval: March 20, 2007

Keywords: teleoperation, rescue robots, remote presence, distributed systems, java

Copyright 2007, Dmitry Kalyadin

PAGE 2

Table of Contents

List of Tables iv
List of Figures v
Abstract vii
Chapter One Introduction 1
1.1 Motivation 2
1.2 Research Question 3
1.3 Contributions 4
1.4 Research Approach 4
1.5 Thesis Organization 7
Chapter Two Related Work 8
2.1 Network Video Distribution 8
2.1.1 Brief History of Network Video Surveillance 8
2.2 Robot Teleoperation Over a Network 18
2.3 Summary 21
Chapter Three Approach 23
3.1 High Level System Structure Overview 23
3.2 Design Requirements 23
3.2.1 Size Requirements of the Robot Computer 24
3.2.2 Field Network Restrictions 24
3.2.3 Limitations of the Current Robot Control System 25
3.2.4 External Camera Requirement 25
3.2.5 Summary of Design Requirements 26
3.3 Relay Server 26
Chapter Four Implementation 28
4.1 System Overview 28
4.2 Record Relay Server 29
4.2.1 Recording 30
4.2.1.1 Why Not Use Files 30
4.2.1.2 Why Use a Database 30
4.2.1.3 Database Structure 32
4.2.1.4 Recording Approach 33

PAGE 3

4.2.2 Relay Mechanism 35
4.2.2.1 Robot Connection 36
4.2.2.2 Camera Connection 38
4.2.2.3 Relay of Controls 39
4.2.3 Server Control 40
4.3 Play Server 41
4.4 Client Applet 43
4.4.1 Overview 43
4.4.2 Graphical User Interface 44
4.4.3 Joystick Capability 46
4.4.4 Java Applet Security Restrictions 47
4.5 Common Mechanisms 48
4.5.1 Port Multiplexing 48
4.5.2 Shared Variable Protection 49
4.5.3 Multi-User Capability and Security 50
4.5.4 Server GUI 51
4.5.5 Redundancy Control 53
4.5.6 TCP vs. UDP 53
4.6 Implementation Summary 54
Chapter Five Proof-of-Concept 56
5.1 Objectives 56
5.2 Experimental Setup 56
5.3 Proof-of-Concept Experiments 61
5.3.1 UK to USF 62
5.3.2 Minneapolis to USF 62
5.3.3 USF to San Diego 62
5.3.4 USF Campus Experiment 63
5.3.5 Other Local Experiments 64
5.3.6 Wireless Experiments 64
5.3.7 Experiment Summary 65
5.4 Scalability 66
5.5 Miscellaneous 69
5.6 Summary 69
Chapter Six Discussion 71
6.1 Limiting Factors 71
6.1.1 Network Limitations 71
6.1.2 Processing Power Limitations 72
6.2 Design and Implementation Issues 73
6.2.1 Port Blocking 73
6.2.2 Novice Users 74
6.2.3 User Policy 74
6.3 Notes on Experiments 75

PAGE 4

Chapter Seven Conclusions and Future Work 76
7.1 Conclusions 76
7.2 Future Work 77
7.2.1 System Expansion 77
7.2.2 TCP to UDP 78
7.2.3 Proxy Server 79
7.2.4 Dynamic Server Resource Management 80
References 82

PAGE 5

List of Tables

Table 1. Database Table Used to Store Recording Information 32
Table 2. Database Table Containing Recorded Data 33
Table 3. Equipment Specifications 59
Table 4. Proof-of-Concept Experiment Summary 66
Table 5. Summary of the Scalability Experiment 67

PAGE 6

List of Figures

Figure 1. Example of a Rescue Robot, MicroVGTV Extreme by Inuktun 1
Figure 2. High Level Relay Server System Diagram 5
Figure 3. Web Browser Based Graphical User Interface of the Server 6
Figure 4. A Typical CCTV System 9
Figure 5. Examples of Commercially Available Video Servers 11
Figure 6. Examples of Commercially Available IP Cameras 12
Figure 7. A Single-Source, Single-User Network Surveillance System 12
Figure 8. Multi-Source Network Video Surveillance System 13
Figure 9. Multi-Source, Multi-User Network Surveillance System 15
Figure 10. Examples of Commercially Available DVRs 17
Figure 11. Mercury Project, Robotic Arm and User Interface 19
Figure 12. Xavier, the First Mobile Online Robot 20
Figure 13. Relay Server from the Network Perspective 27
Figure 14. Three Main Parts of the Server Architecture 29
Figure 15. Rate Based Recording Approach 34
Figure 16. Record and Relay Mechanism of the Server 35
Figure 17. Graphical User Interface to DFRA 37
Figure 18. Software Structure Including Control Mechanism 40
Figure 19. Server Control 41

PAGE 7

Figure 20. Playback Mechanism of the Server 42
Figure 21. High Level Client-Side Software Diagram 43
Figure 22. Graphical User Interface 45
Figure 23. Using a Joystick from a Java Applet 47
Figure 24. Server GUI 52
Figure 25. Experimental Setup 57
Figure 26. Robot Side of the Configuration 57
Figure 27. Complete Configuration at the USF Robotics Lab 58
Figure 28. Outside Use of the Robot 60
Figure 29. Detailed View of the Robot-Side Setup 60
Figure 30. Screenshot of the GUI during Outside Robot Operation 61
Figure 31. Frame Rate and Network/CPU Utilization vs. Number of Users 68
Figure 32. Proxy Configuration 80

PAGE 8

Robot Data and Control Server for Internet-Based Training on Ground Robots

Dmitry Kalyadin

ABSTRACT

To facilitate the emerging need for remote robot training and reachback, this thesis describes a system that allows for convenient web browser based robot operation over the Internet, while providing the means for recording and playback of all video, data, and user actions. Training of first responder personnel on rescue robots is hindered by the fact that these devices are very expensive and are affordable by only a few specialized organizations that make them available by request at the time of a disaster. The system described in this thesis will allow first responders to practice on the robots without having to be physically present at the same location. Having these capabilities of remote presence, the system can also be used in a real world response to transmit robot video and data to persons not present at the site of the incident, such as structural engineers or medical doctors. The recording capability will be used as an aid during training and to help resolve accountability issues in the real world scenario.

Similar demands in the area of network video surveillance are met by the use of a network DVR that records and relays video and controls between IP cameras and Internet clients. The server implemented in this thesis is unique in that it extends these capabilities

PAGE 9

to include data from various robot sensors. All of the above-mentioned video, data, and controls are combined into a convenient web browser based graphical user interface. The server was implemented and tested using rescue robots, but could be tailored to any other distributed robot architecture where reliable and convenient web browser based robot operation over the Internet is desired.

System testing validated the server's capabilities of remote multi-user robot operation, as well as its unique ability to store and play back an external camera view along with robot video and data, to help with situation awareness. Conclusions drawn from the experiments indicate that this system can indeed be used for Internet robot training, as well as for other robotics research, such as bandwidth regulation techniques or human-robot interaction studies by non-computer-science researchers who do not have physical access to robots.

PAGE 10

Chapter One Introduction

Rescue robots are a fairly new breed of remotely operated devices that have been used in real world disaster areas to explore voids that are inaccessible or unsafe for humans or dogs. These devices usually consist of the robot itself, an Operator Control Unit (OCU), and a tether connecting the two through which controls and power are transmitted. Figure 1 shows a typical rescue robot, the Micro VGTV (Variable Geometry Tracked Vehicle) Extreme produced by Inuktun.

Figure 1. Example of a Rescue Robot, MicroVGTV Extreme by Inuktun

The system described in this thesis is a robot data and control server that allows multiple clients to operate these rescue robots remotely over the Internet through a convenient web browser interface, while storing all available audio, video, sensor data, and user actions for future playback.

PAGE 11

1.1 Motivation

The primary motivation for this work is the fact that rescue robot systems are very expensive and are affordable by only a small number of specialized agencies. In a real world scenario, however, these robots may need to be operated by first responders (fire and rescue, police) who may have never used or even seen them before. Consequently, there is a real need for training of first responder personnel on these devices. At present, only a few places have the capability to train in rescue robotics. Time limitations of first responders and a lack of educational programs allow for only a handful of trainees each year. If such training were possible over the Internet, through a distance learning system easily accessible from anywhere in the country, the number of students, and therefore competent operators, could be increased dramatically.

The secondary motivation comes from a real world need for reachback, the ability to transmit robot audio, video, and sensor data from the disaster site to the incident commander as well as the outside world. Even when the robot is operated from the physical control unit and not over the network, it may be helpful to make the available data accessible to others not present at the site of the incident, such as structural engineers or medical doctors. Murphy et al. recommend in [12] the "having two heads is better than one" approach to search and rescue robotics. Certainly, having a specialist assess the structural integrity of a building or identify body parts is more effective than the same task being performed by the robot operator, who may be trained only in robotics. This

PAGE 12

would provide us with valuable real time specialist advice otherwise not available with current equipment.

The idea of a black box, or being able to record robot audio, video, data, and user actions, can be employed in both scenarios described above. During training, recorded operations could be used to help students understand what they did wrong, while in a real world situation this data can help resolve accountability issues. This is important since only robot video and audio are currently recorded, while user actions can only be guessed from changes in video.

1.2 Research Question

How can a ground robot be operated over the Internet during training while having the ability to record and play back in the same interface all available audio, video (including a view from an external camera), and sensor data, as well as user actions? Also, how can multiple other users, such as an instructor or an observer, remotely monitor student activity in real time?

The question above can be broken down into three more specific questions that are answered in this thesis:

How can we reliably and conveniently control ground robots over the Internet?

How and where can we store and retrieve all available video and data so that it can be played back later from anywhere using the Internet?

PAGE 13

How can remote users view all video, data, and user actions while robot operation is in progress?

1.3 Contributions

With a completed system the following claims can be made:

Ground robots can be operated remotely using the Internet as part of a distance learning course without requiring students to be physically present at any specific location.

Training sessions can be recorded and reviewed later by any authorized person using the Internet.

Multiple clients anywhere in the world can be connected to the system at the same time to observe training in progress.

1.4 Research Approach

The implemented architecture employs a network connected machine running a web server and specially designed software that connects to the robot and the external IP camera. This machine then serves as a central point of distribution of robot audio, video, data, and controls to remote users connected via the Internet (Figure 2). A local database is used to store and retrieve all available information streaming through the server.

PAGE 14

Figure 2. High Level Relay Server System Diagram

The training scenario assumed for our approach is as follows: the robot will be set up at a rubble pile at USF, and trainees will be able to access the test bed from anywhere in the country using the Internet. The trainees will perform various tasks to understand the capabilities and limitations of the robots. To help with situation awareness, an external video camera will be set up along with the robot to provide an outside view of the test bed, as it is difficult to initially adjust to seeing the world through the robot camera alone.

The main goals of the system are remote robot operation using the Internet, data and video archiving, and multi-user capability. Let's describe these goals in more detail.

The system will allow its users to control the robot over the Internet through a web browser based interface. This interface (shown in Figure 3) includes robot video, robot data (temperature, shape, battery level, etc.), a robot control panel, 2-way audio, and a view from an external camera. The reason for the web browser based interface specifically is to make the system as easy to use as possible and avoid the complications of installing any special software. This also helps us with portability, or being able to use this system across different hardware and software platforms.

PAGE 15

Figure 3. Web Browser Based Graphical User Interface of the Server

The system will also allow users to record robot video, audio, data, and user actions, as well as video from an external camera. This could be used during training to review the actions of trainees, and could also be used in real world incident applications to resolve accountability issues.

Finally, the system will allow multiple users to view the available video, audio, and data, share control of the robot, and observe the control actions of others in real time (the trainee, instructor, tech support, etc.).

PAGE 16

The goals and architecture of this system make it different from other available equipment and software. Recording and distribution of video over a network is a task that has been fairly well exploited in the last decade. IP cameras, video servers, and network enabled Digital Video Recorders (DVRs) are examples of readily available equipment that make real time sharing and recording of video an easy task. The main shortcoming of this equipment is that these devices have no capability to distribute or record data along with video. And while attempts have been made to enable robot control over the Internet, these systems lack the ability to record video, audio, and user actions, and were built for specific platforms that do not have the same capabilities as the rescue robots for which this system is intended.

1.5 Thesis Organization

Chapter Two describes previous work done in the area of network video distribution and robot teleoperation. Chapter Three provides a high level explanation of the approach taken to achieve the goals set. Chapter Four examines in detail the structure and implementation of the server software. Experiments performed using this system are described in Chapter Five. Chapter Six discusses some of the issues encountered during development, and finally, Chapter Seven presents the conclusions of this project and suggests some directions for future work.

PAGE 17

Chapter Two Related Work

This chapter describes related work in network video distribution and robot teleoperation over the Internet. We first discuss the use of network video in surveillance systems and selected problems and solutions in this area, then look at previous robotics projects involving web browser based robot control.

2.1 Network Video Distribution

The most obvious and widespread use of network video distribution is in the area of surveillance systems. Let's start by looking at how video surveillance has been done historically, then look at the equipment and software needed to enable sharing of the video over the Internet.

2.1.1 Brief History of Network Video Surveillance

For years, the term video surveillance system has been associated with closed circuit television (CCTV). These systems usually consist of multiple analog video cameras physically connected by cables to a central monitoring location, where, using a

PAGE 18

video multiplexer, multiple video feeds are displayed on the screen [16] (see Figure 4). CCTV systems are usually single user systems in the sense that all the links lead to a single central location. These systems are almost always located near the monitored site to make the wire routing easier. Recording is done on site with a simple VCR. This type of system is simple, cheap, and has been around for many years.

Figure 4. A Typical CCTV System

The question is: what if you wanted to see video from the above-mentioned cameras from anywhere in the world? Say you wanted to check on your employees while you are at home sick, or check on your house while on a business trip. Now what if you wanted to let multiple people see the video produced by this system? The applications of such systems are endless.

A similar problem is faced by certain law enforcement agencies. There is a list of sites that are under their jurisdiction and that they are responsible for in case of an attack, industrial accident, or natural disaster. The list of sites ranges from amusement parks to chemical plants to airports, most of which have their own CCTV surveillance systems. In

PAGE 19

the case of an incident, it is desirable to be able to view the video from these sites at the police station, assess the damage, and allocate the needed resources before ever leaving the office [25]. With the distances separating the sources and the viewers, running wires is out of the question, so a reasonable solution is to use the Internet for the data transmission.

The Internet is a digital network, so we need a way to convert the video from analog to digital and distribute it over this network. This is where video servers come in. A typical video server is roughly a three part device that consists of an analog to digital (A/D) converter, a compressor, and a network server. The A/D converter takes the analog video, samples it every 1/30th of a second, and produces a series of digital images. The compressor then takes these images and reduces their size using a compression algorithm, usually of MPEG or MJPEG format. The task of the network server part of the device is to wait for incoming requests from users, upon which the images are sent out as a stream of data and displayed at the client. From a technical perspective, these devices have an IP address and a server program listening on a specific port. Given this information, the client software can connect to the server and receive the data using TCP or UDP sockets. Most video servers also come with a built in web server that listens on port 80 and can send the video using HTTP, which can then be viewed using a standard web browser.

Video servers come in a variety of shapes and sizes – at one extreme is a very small device that can fit in your pocket, and at the other is a powerful desktop machine with a video card and a network interface (see Figure 5). The differences between the two extremes include, among many others, the number of users that a device can service, as well as image processing and recording capabilities. Intuitively, the smaller the physical

PAGE 20

size of the device, the fewer capabilities it is likely to have. Axis [17] is one of the leading manufacturers of commercial video servers, and is a good source for understanding the capabilities of these devices.

Figure 5. Examples of Commercially Available Video Servers. Clockwise from top left: Fulbond XS1000P, GAO Tek GAODVS, Axis 241Q, Clover CNVS100, and Axis Rack Solution

Another way to distribute video over a network is to use a digital IP camera that combines an analog camera and a video server in one (see Figure 6). These devices are becoming more and more popular, as they easily enable video conferencing and are more user friendly than an analog camera / video server combination. Toshiba [23] and Sony [24] are among the more popular IP camera producers. The downside of these devices is that they usually do not offer the same features and capabilities as full size video servers.

PAGE 21

Figure 6. Examples of Commercially Available IP Cameras. Clockwise from top left: Micronet SP5520K, Toshiba IK-WB15A, Intellinet IDATA IP-PTZ, Axis 213, and Iqinvision Iqeye101

In either case, for a single source / single user setup, both the video server and IP camera approaches work very well, given an appropriate Internet connection (see Figure 7). Video can be sent continuously, and recording can be done at either end of the connection.

Figure 7. A Single-Source, Single-User Network Surveillance System

PAGE 22

Most of the problems are encountered when the system is expanded to include multiple sources and multiple clients. First, let's add more cameras to our scenario, say 100, or 1000 (see Figure 8):

Figure 8. Multi-Source Network Video Surveillance System

Network capabilities quickly become the limiting factor. Bandwidth bottlenecks at both the source and the client will not allow simultaneous transmission of all video streams. The upside is that in reality, no single person can comprehend 100 or 1000 camera views at one time. More likely, a user will want to quickly glance at the available video, then pick three to four cameras of interest and spend most of the time looking at those. Simple software written by companies such as Axis [16] lets us do just that – given parameters such as the IP address / port of the camera and a username / password,

PAGE 23

it allows you to connect to multiple cameras and view them simultaneously in a graphical user interface. As mentioned earlier, bandwidth is still the limiting factor, so to improve performance, the transmission rate of certain cameras can be reduced to 1 frame per second while receiving the full 30 fps from others. Another feature supported by some of the cameras is motion detection, where nothing is sent to the user until motion is detected in the field of view of the camera, upon which the service is upgraded to the full frame rate. More complex solutions, such as those offered by Broadware [17], Aimetis [20], and D3Data [22], also include stricter security policies, provide intelligent video analysis, and are usually built to be highly scalable.

In this multi-source system, the choice of position of the recording equipment also becomes very limited. At the client side, it is very easy to record what is being displayed on the screen, but due to network limitations this may not be much, so complete video recording can only be done at the site of surveillance or nearby. Also, video recorded at the client side cannot be easily shared between multiple geographically separated users.

Let's get back to expanding our distributed system scenario and allow multiple clients to access the system (see Figure 9):

PAGE 24

Figure 9. Multi-Source, Multi-User Network Surveillance System

Clearly, we have just multiplicatively increased our bandwidth problem. Another issue that comes up is that most video servers are relatively small devices with limited computational power and can only handle between five and ten users before suffering significant degradation in performance. This results in a limited frame rate received by all clients connected to the same source. One way to resolve this is to enhance our video servers by replacing them with more powerful network machines tailored to perform the same task. We can easily see that with hundreds or thousands of cameras, this solution becomes very costly and simply not feasible, especially if there are specific size constraints for a given application.

Another problem with sharing the system between multiple users arises when cameras with pan, tilt, and zoom (PTZ) capabilities are added to our scenario. Who gets to control these devices? Common video server settings allow us to either make the

PAGE 25

controls available to everyone or only to specific users, while others can only view the video. In either case, there is a possibility of multiple people controlling the camera at the same time, which can become chaotic and irritating. Another available option is to enable control queuing, where only one person can exercise controls at a given time and the others are queued on a first come, first served basis. This works great until you are the one sitting in line waiting for controls. Assume a scenario where an incident happens and gets called in to police dispatch. An officer who has control privileges decides to look at what is happening, logs on to the camera, and starts viewing and controlling it. Two minutes later, the incident commander arrives at his office and tries to control the camera. He cannot do so, even though he has the same privileges and even higher rank. If the incident commander is at a different location and does not know who is currently controlling the camera, there is no easy way for him to resolve this situation. It is obvious that simply specifying whether someone has control rights is not enough in a critical surveillance system, and that some sort of priority based system has to be in place to resolve situations such as the one described above.

Let's now look at some of the recording equipment available on the market. The days of VCRs are long gone; the quality of analog recording equipment does not compare to that of digital storage. Network based Digital Video Recorders (DVRs) are pieces of equipment that allow for recording of video streams from multiple network sources. Some of these devices even have capabilities typical of video servers, in the sense that a user, once connected, can not only view the video currently being stored, but also play back previously recorded capture (see Figure 10). American Dynamics [19] and Dedicated Micros [21] offer a variety of modern network DVRs. Just like in the case of

PAGE 26

typical video servers, a DVR is just a computer tailored to perform a specific task. It simply acts as a user: it connects to the camera, receives the video, and stores it on a hard drive. Meanwhile, a concurrent server program accepts connections from clients and lets them retrieve previously stored information.

Figure 10. Examples of Commercially Available DVRs. From left: VPON VP9000, American Dynamics EDVR, and iView DVR16CDRW

With time, network DVR capabilities expanded to also include accepting commands from clients and resending them to IP cameras that are equipped with a pan/tilt/zoom mechanism. At this point the clients are completely separated from the cameras – all traffic in both directions now has to go through the DVR. This may seem like making things more complicated than they should be, but in reality this approach gives us many advantages. First of all, if the position of the server on the network is chosen so that it can reliably connect to several cameras or video servers, it becomes a reliable place for video storage, which is frequently not practical at the sources. Also, by allowing only one connection to the camera, we significantly reduce the processing load on the devices. All other users actually connect to the DVR, which is usually a much

PAGE 27

more powerful machine. This also minimizes the network traffic on the link closest to the camera, which in many cases is wireless.

To summarize the above, there are two main problems that limit the functionality of a network video distribution system. The first problem is the physical size of the source devices, which is reduced due to application requirements. This limits the number of users that the equipment can handle and also does not allow for video or data storage at the source. The second problem is the network itself, which in many cases is the bottleneck in terms of how many video streams can be transmitted. These issues are addressed in the area of video surveillance by the use of a network DVR that relays video from cameras to users and commands from clients to the source devices. We will see in the next chapter how this idea can be beneficial in the area of online robotics.

2.2 Robot Teleoperation Over a Network

Remote robot operation is not new – remote manipulators have been used for years to handle hazardous materials. With the explosive growth of the Internet came the desire to also be able to control various devices over a wide area network. The reach, convenience, and capabilities of the Internet are incredibly attractive for this task. As a result, numerous teleoperated robot systems have been implemented in the last 20 years.

The first remotely operated robot to use HTTP and a browser interface was the Mercury Project, started in the spring of 1994 by Ken Goldberg at the University of Southern California [2]. The setup included a robotic arm with a mounted camera and an air blower over a box of sand (Figure 11). The sandbox contained buried artifacts

PAGE 28

inspired by the book Journey to the Centre of the Earth by Jules Verne. Users were to use the air blower to uncover the artifacts and try to guess the origin of the items. The user interface shown in Figure 11 included a clickable map of possible arm movements, allowing the fixture to move along the horizontal and vertical axes and pulse air onto the area just under the camera.

Figure 11. Mercury Project, Robotic Arm and User Interface (Source: www.usc.edu)

The first mobile online robot was Xavier (Figure 12), made available on the web in December of 1995 [3]. Xavier is a product of the robotics lab of Carnegie Mellon University and was meant to be an autonomous indoor robot. While testing a new navigation algorithm, the authors developed web pages to monitor the robot's progress and command its behavior. While remote operation was not the primary goal of the project, Xavier generated a large amount of interest as an online robot – in three years of operation it received over 30,000 requests and carried out over 4,700 different tasks.

PAGE 29

The robot operated in the computer science building of CMU, and the user interface allowed clients to tell it what room to go to and what to do (tell a joke, for example).

Figure 12. Xavier, the First Mobile Online Robot (Source: www.cs.cmu.edu)

These two examples are rather simple in terms of software architecture. As time went by, more complex software systems were developed to address the remote operation needs of various robot platforms (many of these are described in [15]). One to one robot operation over a reliable wired network is fairly straightforward. The complexity mainly comes from three sources, the first one being the network itself. Naturally, as the operating distance increases, so does the transmission delay, even more so in wireless and satellite networks. This becomes a problem when the visual feedback is so far behind that it is hard for the operator to estimate the effects of various commands on the robot. Take for example communications between human operators on Earth and Mars Rovers, where the one way delay can be as long as 10 minutes [26]. To mitigate this, if the operating conditions of the robot are known, it may be possible to predict robot movement by simulation before actually executing the command [5] [11]. Another source of

PAGE 30

architectural complexity is the task at hand. While teleoperation is sufficient for some applications, others may require some degree of autonomy. Now the robot has to juggle running an autonomous task while still waiting for user commands that may interrupt the current process. Finally, multi-user capability is yet another complication that adds the requirement of coordinating between multiple clients at the same time.

Most of the current teleoperated systems are very similar to the ones mentioned above in the sense that they all control a robot over a network. The hardware and software platforms are different and are often dictated by the specific robot design. All of them lack the ability to record and play back audio, video, and user actions. Something else to take note of is that most of the approaches concentrate on accomplishing a specific task, the details of which are known prior to the implementation. The field of rescue robotics, on the other hand, is complicated by never knowing what type of environment the next deployment is going to be in, making the use of predicting motion by simulation of little use.

2.3 Summary

From the previous work described in this chapter, we can see that the histories of video

PAGE 31

22 many robot platforms, IP cameras are usua lly very small and have limited processing power, which limits the amount of simultaneous users that the devi ce can serve at any given time, and also does not allow for video storage or processing onboard. The conventional approach for recording network vide o is to have a server nearby (a reliable network link away, not necessa rily physically close) that connects to the camera and constantly stores the received video. Clients can later connect to this network DVR and view the previous capture. Besi des the storage capa bility, these servers also allow users to view real time video and c ontrol the cameras. The key adva ntage of this approach is that instead of connecting directly to the camera, the clients actually connect to the server, which keeps the number of camera client s to one (the server). This approach also minimizes the traffic on the network link closest to the source, which is especially critical if that link is wireless. Thus, the combinat ion of the camera and the network DVR allows for best of both worlds – the size of the cam era can be minimized while still allowing for video storage and multi-user operation through th e server. We will see in the next chapter how the same concept also proves to be beneficial for remote robot operation.

PAGE 32

Chapter Three Approach

3.1 High Level System Structure Overview

The key idea behind this system is fairly simple – it is to move the point of distribution of robot controls and data to a more capable machine that is not directly connected to the robot. As in the network DVR scenario described in the previous chapter, this enables the users to control the robot over the Internet, provides a convenient place for data and video storage, and allows us to minimize the size of the computer attached to the robot. The rest of this chapter explains why this approach was taken and provides more high level details about the system structure.

3.2 Design Requirements

The main system goals are remote robot operation over the Internet, the ability to record robot video, data, and user actions, and multi-user capacity. We start by discussing several requirements and limitations that influenced the final system architecture, and then describe how the main system goals are achieved while satisfying these requirements.

PAGE 33

3.2.1 Size Requirements of the Robot Computer

Network communication between the robot and clients is enabled by a computer attached to the robot via a serial port interface. This computer is usually a ruggedized laptop, not a powerful workstation. The reason for the use of a portable computer is that rescue equipment may need to be carried for long distances to incident sites, so the size and weight of the gear are critical. This requirement does not allow us to perform video and data storage on site due to the limitations of laptop storage and processing capabilities. Multi-user performance is also affected by the size factor – CPU utilization with one connected client is around 90%, obviously not allowing for anyone else to use the system.

3.2.2 Field Network Restrictions

The network path between the robot and its users usually consists of multiple hops, the first one of which (from the robot to an Internet gateway, for example) is likely to be wireless in the field. Regardless of the technology, wireless networks can be easily saturated, degrading throughput for all users, especially when they are used to carry multiple video streams. It is to our advantage to minimize the amount of traffic present in this first hop from the robot.

PAGE 34

3.2.3 Limitations of the Current Robot Control System

The Distributed Field Robot Architecture (DFRA, described in more detail in the next chapter) currently used for robot control over a network has two main limitations. First of all, due to its dynamic host discovery mechanism, DFRA is only capable of communication over a local area network, not allowing us to connect to the robot using the Internet. Second, the software installation is fairly complicated and is currently only compatible with the Linux operating system. This installation requires detailed knowledge of the architecture as well as the operating system, and is only realistic for a person with a rich computer background. Since our goal is a system that will be used by non computer savvy people such as firefighters, doctors, or structural engineers, a simpler solution is needed.

3.2.4 External Camera Requirement

Since one of the intended uses for this system is robot training of persons who may have never seen these devices before, having an external view of the robot is essential. This not only helps with situation awareness while performing a specific task, but also helps users initially realize the basic capabilities of the robot in terms of its mobility and shape.

PAGE 35

3.2.5 Summary of Design Requirements

Initially, we had a system that allowed us to control the robot over a local area network (DFRA), and the understanding that having multiple client connections directly to the robot is not realistic due to network and processing power restrictions, which also do not allow for local video storage. What we needed was to extend the range of operation to that of the Internet, make the system easier to use and capable of serving multiple users, include an external camera, and provide a reliable place for video and data storage. All of these tasks are accomplished with the relay server described in the next section.

3.3 Relay Server

The key idea behind this system is to move the point of distribution of audio, video, and data (put simply, the machine to which the clients will actually be connecting) to a location where the storage and processing power of the server are not limited by its size. In a real world response, this could be the incident command center, while in a training scenario this server can be kept inside, with the robot outside in a test bed of rubble. The connection between the robot and server could be either wired (in a lab scenario, for example) or wireless in the field. The server software communicates with the robot over this link using DFRA, while the users connect to the server using HTTP (Figure 13). Audio, video, and data are received by the server, stored, and retransmitted to the clients connected to the system. All controls, video, and data, as well as the view from an

PAGE 36

external camera, are combined into a single web browser based interface, making the system easy to use and eliminating the need for any software installation.

Figure 13. Relay Server from the Network Perspective

This approach provides a convenient solution to the problems mentioned in this chapter in that it limits the processing load on the robot laptop, restricts wireless traffic to the amount generated by one user (the server), provides a convenient location for data storage, and greatly simplifies the use of the system.

PAGE 37

Chapter Four Implementation

4.1 System Overview

The server software created in this project consists of three main parts – the Record Relay Server, the Play Server, and the Client applet (Figure 14). The Record Relay Server is responsible for connecting to the robot and external camera, bringing in video and data from these devices, storing them in the database, and relaying this information to clients connected to the system. It is also responsible for managing the traffic in the other direction – relaying commands from users to the robot and camera, while storing them in the database along with the rest of the video and data. The Play Server portion of the software, on the other hand, is in charge of accessing the database, sending clients the list of available recorded sessions, and streaming those recordings once requested to play. Finally, the Client applet component is a Java applet embedded in an HTML document that runs on the connected user's computer and communicates with the server software. The rest of this chapter describes the implementation of these three components.

PAGE 38

Figure 14. Three Main Parts of the Server Architecture

4.2 Record Relay Server

The Record Relay Server portion of the software is probably the most complex of the three, since it is responsible for most of the server functionality. It takes care of several related tasks, such as connecting to the robot and camera, then storing and relaying the received information.

The tasks of recording, relay, and playback are very similar in nature. The first takes the data and puts it on a hard drive, while the last two do the same, except the information is put on a network line. We can take advantage of this similarity by reusing related modules and thus simplifying certain aspects of the implementation. Let's look at each of these tasks separately and see how they are implemented in our system, starting with recording.

PAGE 39

4.2.1 Recording

There are two ways we can store data on disk – we could save it in a file, or we could use a database. Each has its advantages and drawbacks. In this case a database was chosen, for the reasons described in the next two sections.

4.2.1.1 Why Not Use Files

Managing files can be very difficult when trying to store complex data. Once opened, it is usually very hard to tell a file's internal structure, let alone read it easily, especially when we are talking about storing large quantities of binary data such as robot video images and audio snapshots. Another disadvantage is that once a file format is decided upon, it is not trivial to change the way the data is stored and retrieved. For example, a lot of source code would have to be rewritten if we decided to add another sensor to the system. Also, it is almost impossible for an outsider to come in and modify the system without having to look through pages of complicated code to try to understand the way information is stored. Finally, once many recordings have been made, browsing and trying to find the one we want is not easy with files.

4.2.1.2 Why Use a Database

Unlike manual file access, databases already provide us with many of the needed mechanisms for complex data storage and retrieval. Numerous free graphical user interfaces exist to

PAGE 40

make database management and browsing a breeze, and the Structured Query Language (SQL) is a well known database programming language that is familiar to most software developers. Let's look at some of the built in database mechanisms that are useful to us in this project.

Databases are designed to do one thing, and they do it well – they efficiently store and retrieve data on disk. They take care of all the low level details of disk I/O, letting us simply tell them what to store. Databases keep track of system resources and provide security mechanisms to protect the data. Also, when we start thinking of system scalability, the database approach wins once again – if at some point we see that the server CPU is becoming overloaded by managing disk I/O and network requests, we can easily move the database to a different machine and let the server concentrate on the execution of the main program. In this case the only change to be made is one line in a configuration file specifying the database address (traditionally, all communication between the database and the program is done through a TCP socket, so it makes no difference whether the database is located on the same machine or connected via a network).

Also, anyone who has ever written a program with several threads trying to access and modify a shared resource knows that these resources have to be protected by some form of solution to the critical section problem. This is exactly the case we have – the data may be recorded by one user and played back by another at the same time. Databases already come with this protection mechanism built in. Clearly, the database approach is much more appropriate for the tasks at hand.

PAGE 41

4.2.1.3 Database Structure

The database employed by this system is the freely available MySQL, paired with a phpMyAdmin front end. The phpMyAdmin GUI is not considered a part of the system, but rather a tool to conveniently create the database and modify its settings. This tool can also be used by an outside person to quickly understand the structure of the tables and the way the data is organized.

Two tables contain all the information needed for recording and playback of robot data. The first one (see Table 1) contains information about the recordings already stored in the database. By looking at this table we can tell (among other things) how many previously recorded sessions we have, when each one was recorded, and its duration.

Table 1. Database Table Used to Store Recording Information

The second table (see Table 2) contains the actual data from all recorded sessions, such as video images and sensor readings. Each recorded session is identified by its session ID.
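
Since the printed contents of Tables 1 and 2 are not reproduced in this text, the following Java sketch shows what the two MySQL tables might look like when created through JDBC. It is only an illustration: every column name is an assumption inferred from the description above, not the actual schema used by the server.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Hypothetical schema setup. Assumes the MySQL Connector/J driver is on the
// classpath; the real column names are not shown in the thesis text.
public class SchemaSetup {
    public static void main(String[] args) throws Exception {
        Class.forName("com.mysql.jdbc.Driver"); // register the JDBC driver
        Connection db = DriverManager.getConnection(
                "jdbc:mysql://localhost/robotserver", "user", "password");
        Statement st = db.createStatement();
        // Table 1: one row per recorded session (when recorded, how long).
        st.executeUpdate(
            "CREATE TABLE IF NOT EXISTS session_info (" +
            " session_id INT AUTO_INCREMENT PRIMARY KEY," +
            " started DATETIME NOT NULL," +
            " duration_s INT NOT NULL DEFAULT 0)");
        // Table 2: one row per buffer sample, keyed by session ID and time;
        // BLOBs hold JPEG frames and audio, sensor readings stored as text.
        st.executeUpdate(
            "CREATE TABLE IF NOT EXISTS session_data (" +
            " session_id INT NOT NULL," +
            " sample_time BIGINT NOT NULL," +
            " robot_frame MEDIUMBLOB," +
            " camera_frame MEDIUMBLOB," +
            " audio_chunk MEDIUMBLOB," +
            " sensors TEXT," +
            " user_action TEXT," +
            " PRIMARY KEY (session_id, sample_time))");
        st.close();
        db.close();
    }
}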

PAGE 42

Table 2. Database Table Containing Recorded Data

4.2.1.4 Recording Approach

Besides the low level disk storage options mentioned above, there are two different higher level approaches to recording – it can be done based on events, or at a specified rate. A rate based recording approach is used in our software, for several reasons. Recall that we are trying to store data from multiple sources – two or more video cameras and various sensors. The slight complication is that all these readings are asynchronous – they do not arrive at the server at the same time. Using an event based approach, we would record each reading as it becomes available, pairing it up with a timestamp. The problem is that some sensor readings may change slightly many times within a short period of time; for example, the battery voltage may go up and down by a hundredth of a volt. Every time this happens we would have to store that change. Obviously, this is not very efficient, and we could prevent it by setting a threshold of change for every sensor. Still, this approach can be fairly wasteful, since it creates a large number of rows in the database, each one holding only a single reading. Also, during playback, when a timestamp is read, a timer would have to be started, the data displayed at its expiration, then a new row

PAGE 43

has to be read, and so forth. This is unnecessarily complicated from a programming standpoint, so let's look at the other, rate based approach.

Instead of having all sensor readings stored in separate variables, we can create a buffer that holds all the available data in one place. We can now read this buffer and store it in the database a certain number of times a second (Figure 15). This approach is much cleaner, yet it accomplishes the same task. We can easily set the recording quality by changing how frequently our buffer is sampled. The playback mechanism also becomes much simpler – all we have to do is read and display the data at the same rate it was recorded. The biggest advantage, however, comes from the above-mentioned similarity between the recording and relaying tasks. The same module used for storing our buffer in the database can, slightly modified, be used to send our buffer over a network.

Figure 15. Rate Based Recording Approach

Also, unlike the event based approach, this mechanism makes it possible for us to change the quality of the video and data sent to various types of users. For example, the operator may require video at 30 fps, while we can limit the display of an observer to, say, 5 fps (Figure 16).
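
As a concrete illustration of this rate based mechanism, consider the minimal Java sketch below. All class and method names are hypothetical, since the thesis does not reproduce its source code; the point is that one sampling loop can feed any "sink", so the same code path can write to the database or send to a network client, each at its own rate.

import java.util.Timer;
import java.util.TimerTask;

// A snapshot of everything the server knows at one instant.
class Snapshot {
    byte[] robotFrame, cameraFrame, audioChunk;
    String sensors;     // e.g. battery level, temperature, shape
    String userAction;  // last control issued, if any
    long timestamp;
}

// The shared buffer that the robot and camera threads keep up to date.
class SharedBuffer {
    private Snapshot latest = new Snapshot();
    synchronized void update(Snapshot s) { latest = s; }
    synchronized Snapshot read() { return latest; }
}

// Anything that accepts sampled snapshots: a database writer, or a
// per-client network sender running at that client's frame rate.
interface BufferSink {
    void consume(Snapshot s);
}

// Samples the buffer a fixed number of times per second.
class RateSampler {
    private final Timer timer = new Timer(true);

    void start(final SharedBuffer buffer, final BufferSink sink, final int perSecond) {
        timer.scheduleAtFixedRate(new TimerTask() {
            public void run() { sink.consume(buffer.read()); }
        }, 0, 1000L / perSecond);
    }

    void stop() { timer.cancel(); }
}

With this arrangement, recording at 30 samples per second while serving an observer at 5 fps is simply two RateSampler instances pointed at different sinks.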

PAGE 44

Figure 16. Record and Relay Mechanism of the Server

4.2.2 Relay Mechanism

The relay mechanism of the server is a two way task – video and data are relayed from the robot to clients, and controls are relayed from clients to the robot and external camera. Figure 16 shows the video and data brought into the buffer and distributed to clients. Before we talk about how controls are relayed in the other direction, let's see how this video and data are acquired from the robot and external camera.

PAGE 45

4.2.2.1 Robot Connection

The connection between the server and robot is established using the Distributed Field Robot Architecture (DFRA), which allows us to retrieve the available video and data and send commands to the robot.

DFRA is the only readily available architecture for network operation of rescue robots. It was developed by Matt Long [1] at the University of South Florida and is a distributed, object-oriented implementation of the SFX hybrid robot architecture that allows for dynamic discovery and acquisition of robot resources. The main advantage of DFRA is having a systematic layered approach for interoperability of various robot platforms while taking full advantage of each system's capabilities. The architecture is designed around Sun's Jini middleware layer, which takes care of the dynamic discovery, and uses the Java Remote Method Protocol and Java Extensible Remote Invocation (JERI) for remote communication. DFRA is implemented in Java for its platform independence and the variety of freely available libraries, such as XML parsers and Jini.

Besides DFRA, two other key tools were available for us to use in our system – serial interface robot drivers and a sample user interface. To enable the use of DFRA for a specific robot platform, a series of drivers have to be implemented that utilize each of the robot's capabilities and interface them with the architecture. Such drivers are freely available from CRASAR (developed by Jeff Craighead, craighea@cse.usf.edu). In plain words, DFRA fitted with the rescue robot drivers allows us to take advantage of all robot capabilities over a local area network. Further, a graphical user interface developed in part by Jennifer Riley at SA Technologies was also available at the time. This interface

PAGE 46

combined the sensor data and robot controls in a single graphic display in a Java application (Figure 17). The GUI was not yet fully completed and was slightly redesigned for this project (to include an external camera view); however, what it provided was a clear example of the API provided by DFRA, which is not altogether trivial.

Figure 17. Graphical User Interface to DFRA

Besides all the benefits, DFRA has two main shortcomings that are solved by this project. First of all, it cannot be reliably used over the Internet. Jini, the underlying service discovery mechanism, uses multicasting to find other compatible hosts on the network. The problem is that multicasting is not supported by many ISPs as a security measure. This can be overridden if the IP address of the robot is known and unicasting can be used (which of course defeats the purpose of dynamic discovery). However, even once the two machines can see each other, Jini and JERI use multiple ports for socket connections between the hosts, and if any kind of firewall is present anywhere in the path, many of them may get blocked. This seems like an insignificant problem at first

PAGE 47

glance, but after several experiments across the country, port blocking was the prevalent problem. It was realized that whatever protocol our system used in the end, the client connection had to use the smallest number of ports possible – one, if feasible. In addition, this had to be one of the ports that are almost never blocked, such as port 80 or 8080, normally used by HTTP.

The second shortcoming of DFRA is that its installation process is very involved and is currently limited to the Linux operating system. By combining DFRA with the server described in this thesis, the problem is eliminated by the use of Java applets in a web browser based user interface.

Clearly, it is to our advantage to use DFRA as part of the end system, as it already makes all existing robot functionality available over a LAN, which solves part of the problem in dealing with the network as well as low level communications with the device. The Internet limitation does not impair our intended functionality, since we want the server to be close to the robot (from the network perspective), and it will take care of the data distribution from that point on. Installation is not an issue, since it only has to be done once and is not required to be performed by the users.

4.2.2.2 Camera Connection

Communication with the camera is done using the Axis VAPIX API [27] for HTTP communication with many of the devices they manufacture, such as IP cameras and video servers. The API includes commands for requesting video and controlling the pan, tilt, and zoom mechanism, which are executed using the GET and POST methods of HTTP.
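
The sketch below illustrates what such HTTP communication could look like from Java. The CGI paths follow Axis's published VAPIX conventions, but parameter names and supported features vary by camera model and firmware, so the exact URLs should be treated as illustrative rather than authoritative.

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class AxisCamera {
    private final String host; // e.g. "192.168.0.90"

    public AxisCamera(String host) { this.host = host; }

    // Issue a relative pan command through the VAPIX PTZ CGI via HTTP GET.
    public void panBy(int degrees) throws Exception {
        URL url = new URL("http://" + host + "/axis-cgi/com/ptz.cgi?rpan=" + degrees);
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.getResponseCode(); // executes the request; 200 means accepted
        con.disconnect();
    }

    // Request the Motion JPEG stream; the caller parses the multipart reply
    // into individual JPEG frames for the shared buffer.
    public InputStream openVideoStream() throws Exception {
        return new URL("http://" + host + "/axis-cgi/mjpg/video.cgi").openStream();
    }
}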

4.2.2.3 Relay of Controls

With the mechanisms described above at our disposal, we can get the needed video and data to the server, store all available information, and distribute it over the network. We now need a control mechanism that allows remote clients to operate the robot and the camera while recording user actions in the database. There are two parts to this server component: the command sender threads that talk directly to the devices, and the threads that receive user commands (Figure 18). The threads connecting to the robot (labeled 'Robot vid/dat getter thread' and 'Robot com sender thread') do so using DFRA, while the threads that connect to the IP camera use HTTP. Communication between the command sender and user threads is done through function calls, and when these calls are made, a note is taken in the shared buffer. This information is then stored in the database along with the rest of the video and data, ensuring the capture of every user action.
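The pattern can be sketched as follows; the class and method names are illustrative stand-ins, not the actual API. A call from a user thread both relays the command and leaves a note for the recorder thread.

    import java.util.ArrayList;
    import java.util.List;

    public class CommandRelay {
        // Stand-in for the DFRA-backed connection to the robot.
        public interface RobotLink { void sendDrive(int translate, int rotate); }

        private final RobotLink robot;
        private final List<String> actionLog = new ArrayList<String>();

        public CommandRelay(RobotLink robot) { this.robot = robot; }

        // Called from a per-user thread when a control command arrives.
        public synchronized void drive(int translate, int rotate) {
            robot.sendDrive(translate, rotate); // relay to the device
            actionLog.add(System.currentTimeMillis() + " drive " + translate + " " + rotate);
        }

        // The recorder thread drains the pending notes for storage in the database.
        public synchronized List<String> drainActions() {
            List<String> pending = new ArrayList<String>(actionLog);
            actionLog.clear();
            return pending;
        }
    }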

Figure 18. Software Structure Including Control Mechanism

4.2.3 Server Control

While many tasks are automatic (methods executed when the user logs on to the system, for example), the server software also involves various functions that need to be controlled by the user, such as connecting to and disconnecting from the robot and camera, as well as starting and stopping the recording process. For this task, a separate thread is created for every user that allows them to turn basic server functions on and off (Figure 19). The Client applet communicates with this thread when the appropriate control buttons are pressed in the user interface.

Figure 19. Server Control

This concludes the discussion of the Record Relay Server part of the server software. The next significant component is the Play Server, discussed in the following section.

4.3 Play Server

The Play Server is the portion of our software responsible for relaying the contents of the database to clients. It is in charge of two tasks: listing the available recorded sessions and, once the user selects a desired recording, streaming the data and video back to the client.

The listing mechanism is fairly simple: the server receives a command from the user, gets the contents of the Session Data table from the database, and sends it to the client. The user then selects the desired recording and the playback mechanism is activated. This mechanism is very similar to the relay mechanism of the Record Relay Server; in this case, instead of connecting to the robot and camera to get the data, we connect to the database and stream the retrieved data and video back to the client, as shown in Figure 20.

Figure 20. Playback Mechanism of the Server

The reason for separating the retrieving thread from the sending thread is that network speeds may not allow us to send the video and data at the same rate at which they were recorded. When that happens, playback takes place at the network speed. For example, if the bandwidth only allows us to send the video at 10 fps and it was recorded at 30 fps, the video will play three times slower. The buffer used here is the exact same data structure as in the Record Relay Server.
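A minimal sketch of this decoupling, assuming a database reader thread that fills a bounded queue (names illustrative): the sender blocks at the pace of the client connection, so a slow network slows playback down rather than dropping data.

    import java.io.OutputStream;
    import java.util.concurrent.BlockingQueue;

    public class PlaybackSender implements Runnable {
        private final BlockingQueue<byte[]> frames; // filled by the database reader thread
        private final OutputStream client;

        public PlaybackSender(BlockingQueue<byte[]> frames, OutputStream client) {
            this.frames = frames;
            this.client = client;
        }

        public void run() {
            try {
                while (true) {
                    byte[] frame = frames.take(); // blocks until data is available
                    client.write(frame);          // blocks at the network's pace
                    client.flush();
                }
            } catch (Exception e) {
                // client disconnected or playback finished
            }
        }
    }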

The Play Server and the Record Relay Server are the two parts of the software executed on the server machine. The next section describes the Client applet, which runs on the client's computer.

4.4 Client Applet

4.4.1 Overview

The client-side portion of the system software is shown in more detail in Figure 21. The client program is a Java applet embedded in an HTML document. This document, along with the applet, resides on the server, and both are downloaded and run when the client types the server address into a web browser.

Figure 21. High Level Client-Side Software Diagram

Since the Client applet has to deal with several parts of the server, client tasks can be separated into those that receive and display video and data from the server; those that listen for keyboard, mouse, or joystick commands to be sent to the robot and camera; the server control panel that enables or disables server functionality; and, finally, the part responsible for listing and playing back previously recorded sessions. As can be concluded from the previous sections, the first three tasks deal with the Record Relay Server, while the last one communicates with the Play Server. These tasks are fairly simple, as the server does most of the work: the Client applet simply makes requests and waits for a response, which can be streaming data/audio/video, or simply an acknowledgment that a connection to the robot or camera has been successfully established.

4.4.2 Graphical User Interface

The user interface consists of six tabbed panels, each giving the user the specific capabilities described in the previous section. The most significant part of the GUI is the control panel (Figure 22). This part of the interface gives a client full control of the robot (lateral movement, raise/lower, camera up/down/zoom/focus, lights, laser) as well as of the external video camera (pan/tilt/zoom). The GUI also displays various robot sensor readings (video, audio, temperature, shape, lights, laser, motor currents, battery level, IP address) and allows the client to send audio to the device. The external camera view is also included to help with situation awareness. The recorder controls allow us to start and stop recording of all video, data, and user actions to the database.

The panel labeled 'Database' lists all currently available recorded sessions, while the 'Player' panel (which looks exactly like the panel shown in Figure 22) is used to play back the selected recording. For control of server functionality, the 'Server Control' panel allows the user to make the server connect to and disconnect from the robot and the external camera. The 'Status' panel displays response messages from the server, and finally, the 'Help' panel displays useful setup suggestions (if needed).

Figure 22. Graphical User Interface

The robot control panel (located in the lower right quarter of the control interface in Figure 22) also includes a pseudo joystick. This part of the GUI was added after the system was mostly put together: the four directional button controls are fairly limited in the sense that they only allow the robot to go full throttle in each direction, which is very unrealistic. The pseudo joystick can be clicked and dragged with the mouse, giving the full range of directional motion, just like the real joystick of the OCU.

4.4.3 Joystick Capability

Unlike Java applications at the present time (JDK 5.0.7), Java applets do not support the use of a joystick, which presents a unique problem if we want to offer online users this capability. There is, however, a web browser plug-in that lets us use the joystick from JavaScript, the web browser scripting language. This plug-in was developed by Carl Woffenden and is freely available from www.bigredswitch.co.uk. Since we can establish communication between a Java applet and JavaScript code, we can effectively use the joystick from our Client applet (Figure 23). The only shortcoming of the plug-in is that it was implemented using ActiveX, which means it is only compatible with Windows and Internet Explorer.
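The applet-to-JavaScript bridge is the standard JSObject mechanism; a minimal polling sketch is shown below. The JavaScript function name getJoystickX is hypothetical, as the actual names are defined by the plug-in.

    import java.applet.Applet;
    import netscape.javascript.JSObject;

    public class JoystickPoller {
        // Read the joystick X axis by calling into the page's JavaScript.
        public static double readX(Applet applet) {
            JSObject window = JSObject.getWindow(applet);
            Object x = window.call("getJoystickX", new Object[0]);
            return ((Number) x).doubleValue();
        }
    }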

Figure 23. Using a Joystick from a Java Applet

4.4.4 Java Applet Security Restrictions

Java applet security restrictions, even though well justified, introduce some inconveniences into the implementation of the Client applet. The major nuisance is that these constraints do not allow the applet to capture audio from the microphone connected to the client's computer without explicit authorization. This audio is used for the two-way audio communication between the client and the robot. To allow it, the Java policy file has to be edited manually by inserting a specially formatted permission line granting the user this capability. What makes this process even more confusing for people who are not computer savvy is that there are usually several versions of this file on a single machine, which makes giving someone directions for doing this much harder. To mitigate this, the 'Help' tab of the GUI includes a detailed explanation of the steps needed to enable two-way audio.
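A representative grant entry is sketched below, assuming the standard audio-capture permission; the codeBase URL is illustrative, and the entry must go into whichever policy file the browser's Java plug-in actually consults.

    // In the Java policy file (e.g. java.policy):
    grant codeBase "http://server.example.edu/-" {
        permission javax.sound.sampled.AudioPermission "record";
    };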

4.5 Common Mechanisms

While the three main software components are fairly independent, several concepts are shared or used by all of them. This section describes some of these ideas.

4.5.1 Port Multiplexing

Initially, the software included several server threads, one for each specific function such as robot control, video relay, playback, and so on, eight altogether. This is the traditional way of communicating over the Internet. After a few months of long-distance testing and troubleshooting, it was realized that one of the major problems was not the software itself, but the fact that firewalls along the way would block certain ports (not always the same ones), which disabled parts of the functionality. After changing the port numbers several times, it became obvious that this was only a temporary solution and that the number of ports had to be reduced to one.

Server threads in most multithreaded server programs are traditionally used as follows: a server thread listens for connections on a specified port; once a connection is accepted, a new thread (also called a service thread) is created, and it is this thread that does all the work from then on, while the server thread goes back to listening for more connections. What we needed was to reduce the number of server threads to one. Since functionality is usually separated by port, and we needed to combine all of it onto one, connection requests now had to state which service thread they expected to be created for them, which made this part of the communication somewhat more complicated. The port used by the server for this purpose is 8080; since it is sometimes used by HTTP, it is almost never blocked, which solved the problem.
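A minimal sketch of the idea, with illustrative service tags and the handler bodies elided (clients are assumed to send the tag with writeUTF):

    import java.io.DataInputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class MultiplexedServer {
        public static void main(String[] args) throws Exception {
            ServerSocket listener = new ServerSocket(8080); // the single open port
            while (true) {
                final Socket s = listener.accept();
                // The first message on every connection names the requested service.
                final String service = new DataInputStream(s.getInputStream()).readUTF();
                new Thread(new Runnable() {
                    public void run() { serve(service, s); }
                }).start();
            }
        }

        // Dispatch on the service tag; the real service threads (robot control,
        // video relay, playback, ...) would replace these placeholders.
        static void serve(String service, Socket s) {
            if ("ROBOT_CONTROL".equals(service)) { /* control loop */ }
            else if ("VIDEO_RELAY".equals(service)) { /* relay loop */ }
            else { try { s.close(); } catch (Exception ignored) {} }
        }
    }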

4.5.2 Shared Variable Protection

Since a shared buffer is used in both the Record Relay Server and the Play Server, it presents the traditional critical-section problem. In the Play Server, where there is only one reader (the network sender thread) and one writer (the database reading thread), simple semaphore protection is sufficient. The Record Relay Server, however, has three types of threads sharing the buffer: the threads that bring in the data and write it to the buffer, the thread that reads the buffer and stores its contents in the database, and the threads that read the buffer and send its contents to the clients. At first glance, it seems that we could use the traditional readers/writers solution to the critical-section problem, where multiple readers can use the buffer at the same time and writers have priority. This would not be altogether efficient, because even though the database recorder is technically a reader, it should have higher priority than the user threads, since it is more important to record the data than to send it over the network. To accomplish this, the standard readers/writers solution was modified to give the recorder thread the same priority as a writer. The integrity and efficiency of the algorithm are not disrupted: the recorder thread does not modify the contents of the buffer and can safely be in its critical section alongside other readers.
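One way to realize this modification is sketched below as a monitor-style lock; this is an illustration rather than the thesis's exact code. The recorder queues with writer priority, so it gets ahead of ordinary reader threads, but it enters as a reader and can therefore share the buffer with readers already inside.

    public class RecorderPriorityLock {
        private int readers = 0;          // threads currently reading
        private boolean writing = false;  // a data writer is inside
        private int priorityWaiting = 0;  // writers and the recorder waiting

        // Ordinary client reader: yields to any waiting writer or recorder.
        public synchronized void beginRead() throws InterruptedException {
            while (writing || priorityWaiting > 0) wait();
            readers++;
        }
        public synchronized void endRead() { readers--; notifyAll(); }

        // Recorder: waits like a writer, then reads alongside other readers.
        public synchronized void beginRecorderRead() throws InterruptedException {
            priorityWaiting++;
            while (writing) wait();
            priorityWaiting--;
            readers++;
            notifyAll();
        }
        public synchronized void endRecorderRead() { readers--; notifyAll(); }

        // Data writer: requires exclusive access to the buffer.
        public synchronized void beginWrite() throws InterruptedException {
            priorityWaiting++;
            while (writing || readers > 0) wait();
            priorityWaiting--;
            writing = true;
        }
        public synchronized void endWrite() { writing = false; notifyAll(); }
    }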

4.5.3 Multi-User Capability and Security

It should be noted that the Record Relay Server and the Play Server are both independent and multi-user capable; that is, several clients can be watching different recordings at the same time while multiple other users are controlling the robot. Recording is the only task performed one at a time, since it does not make sense for multiple users to store the same data in the database.

Since a separate thread is created to enable a particular server functionality for each user (robot control, for example), it is easy to allow or deny that ability by simply accepting or rejecting the connection that requests the particular thread to be created. This allows us to create a set of permissions for every user and store it in the database along with the username and password. This set includes a Boolean value for every possible server functionality available to the user, currently including robot control, camera control, server control, playback, database management, the ability to send two-way audio to the robot, and the ability to receive video and data. The table containing user information also includes user type and user priority, which will later be used for dynamic server resource management.

The first connection from client to server is made to verify user credentials. When the username and password are looked up in the database, a special process called UserManager creates an entry in its list of connected users, populated with the user's permissions, type, and priority. A unique user ID number is also created that can later be used to look up that client in the user table, and this number is sent back to the client. Every client-server connection made from this point on includes this user ID in its request. When a request is received for a particular functionality, the server first checks whether the user has adequate permission, and if not, the connection is closed.
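The per-request gate can be sketched as follows; the field and method names are illustrative stand-ins for the UserManager described above, not its actual code.

    import java.util.HashMap;
    import java.util.Map;

    public class UserManager {
        public static class Session {
            final Map<String, Boolean> permissions = new HashMap<String, Boolean>();
            int priority; // reserved for dynamic resource management
        }

        // Connected users, keyed by the ID issued at login.
        private final Map<Integer, Session> connected = new HashMap<Integer, Session>();

        // True only if the user is logged in and the requested functionality
        // (e.g. "robotControl", "playback") is enabled for them.
        public synchronized boolean authorize(int userId, String functionality) {
            Session s = connected.get(userId);
            return s != null && Boolean.TRUE.equals(s.permissions.get(functionality));
        }
    }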

By default, no user-from-user protection was implemented as part of this thesis. This means that once connected, all users have the same priority (aside from the initial permissions check), and no cooperative teleoperation mechanisms as in [13] [14] are present. A parallel project by Chris Williams [6] addressed this shortcoming by implementing a multiple-tier client hierarchy that limits user capabilities based on their needs. The same system also includes a knowledge-based bandwidth regulation mechanism that allows more concurrent users to be serviced by minimizing bandwidth usage depending on the robot's mode of operation.

4.5.4 Server GUI

To make server management easier and to help visualize its operation, a GUI was added to the application (see Figure 24). The interface tracks output from all server processes and displays it in a convenient logger panel. The upper left side of the GUI displays the currently running server threads, while the left side displays information about currently connected clients and the resources associated with each one. The interface also allows the super user to disconnect clients, disable specific functionality, and control the video frame rate received by different users. This GUI was written mostly using AspectJ [29], an aspect-oriented language extension to Java that allows crosscutting concerns to be implemented much better than in Java alone.

Keeping track of security or system state is not easy in a conventional programming style, as the code to implement such monitoring would have to be spread among many classes. This is much easier with an aspect-oriented programming language, since the whole program can be accessed from a single place.

Figure 24. Server GUI

This approach was first inspired by the Polymer [30] policy specification language, which is similar to AspectJ but allows more complex sets of policies to be composed. However, after several unresolved problems with Polymer, AspectJ was used instead; being a mature and widely supported tool, it has much more documentation and support available. The beauty of both Polymer and AspectJ is that, without modifying the source code, we can monitor the execution of the application, as well as modify its runtime behavior, from one place, as opposed to making changes to multiple classes of the program.
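A minimal sketch of the idea in AspectJ: a single aspect that observes every public method in a hypothetical server package without touching the classes themselves. The pointcut and output are illustrative only.

    public aspect ServerMonitor {
        // Every public method anywhere under the (hypothetical) server package.
        pointcut serverOps(): execution(public * server..*.*(..));

        before(): serverOps() {
            System.out.println("entering " + thisJoinPointStaticPart.getSignature());
        }
    }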

4.5.5 Redundancy Control

Because of the way the data is recorded, relayed, and played back (at a preset rate), it would be possible, without protection, to store or send the same data more than once, which has to be prevented. For example, if the video image has not changed in two seconds, there is no point in sending the same frame thirty times a second over and over again. To avoid this, every variable in the shared buffer has a flag associated with it that indicates whether that variable has been read since it was last updated. Before a value is stored in the database or sent over the network, the reader threads check the flag and ignore the contents if the buffer has not been updated recently.
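The flag idea can be sketched as a slot whose reader consumes a freshness bit. This illustrative version handles a single reader; with several independent readers, such as the recorder and the network senders, one flag per reader would be needed.

    public class FlaggedSlot<T> {
        private T value;
        private boolean fresh = false;

        // Writer thread: store a new value and mark it unread.
        public synchronized void put(T v) { value = v; fresh = true; }

        // Reader thread: returns the value only if it changed since the
        // last read, so stale frames are neither stored nor sent again.
        public synchronized T takeIfFresh() {
            if (!fresh) return null;
            fresh = false;
            return value;
        }
    }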

4.5.6 TCP vs. UDP

Transmission Control Protocol (TCP) [8] [9] is used for communication between the server components and the Client applet. TCP is the transport-layer protocol used by most Internet applications, as it provides reliable communication over the unreliable Internet Protocol. User Datagram Protocol (UDP) [7] is another transport-layer protocol; it provides unreliable service and essentially gives programmers direct access to the IP layer. The advantage of using UDP is that it usually achieves higher throughput than TCP, because it does not employ any congestion management mechanisms [10]; this is why UDP is used for certain real-time streaming video applications. The problem with using UDP is that there is no guarantee the data will reach its destination, so many error recovery mechanisms have to be implemented, making the communications part of the software relatively complicated. Another disadvantage of using UDP for the server-to-client connections in our scenario is that this protocol is not TCP-friendly; that is, UDP will drown out any other TCP connections present on the same link. Currently, the server machine has only one Network Interface Card (NIC), which is shared by the DFRA robot-server connection and the multiple server-client connections. If UDP were used for the latter link, it is very likely that the robot-server traffic would be negatively affected, which is highly undesirable. This could be circumvented either by rewriting DFRA to use UDP as its underlying protocol (which would require rewriting most of the architecture) or by splitting the network traffic across two NICs, one for the robot-server connection and one for the server-client link. All of this is not to say that there is no performance gain from switching to UDP at the transport layer, but simply that, for the goals set for this project, this task is left as possible future work.

4.6 Implementation Summary

As can be seen in this chapter, the key concept of the implementation is the way the data is relayed, recorded, and played back: everything revolves around the shared buffer data structure that is sampled and distributed by the various modules.

This allows us to take advantage of the similarities between the recording, relay, and playback mechanisms, and gives us other advantages such as the ability to control bandwidth and processing power for different types of users. Building on the available Distributed Field Robot Architecture, robot drivers, MySQL database, and Apache web server, this software allows multiple clients to control the robot over the Internet and records all available video, data, and user actions.

Chapter Five

Proof-of-Concept

5.1 Objectives

The primary objective of the proof-of-concept experiments was to verify system functionality in various setup scenarios. While seemingly simple, this testing sometimes revealed unexpected problems that resulted in significant changes to the system architecture. The secondary goal was to evaluate the performance of the system under the heavy load of multiple users, which increases the processing burden on the hardware and saturates the network.

5.2 Experimental Setup

Experiments performed with our system usually consisted of remote users operating the robot over the Internet while it was located at the robotics lab in the computer science building at the University of South Florida. A high-level view of this configuration is shown in Figure 25.

Figure 25. Experimental Setup

The part of the system covered by the shaded area labeled 'A' was sometimes located in the lab and at other times outside in a pile of rubble. A more detailed diagram of the devices contained in this mobile part of the setup is shown in Figure 26.

Figure 26. Robot Side of the Configuration

In this scenario the laptop computer performs most of the processing of the robot data (except the analog-to-digital (A/D) conversion of video, which is performed by a separate device). Commands are sent to the robot over USB, and the data is sent back on the same bus. Both this laptop and the external camera are connected to an Ethernet switch, the output of which is connected to a LAN either by an Ethernet cable or through a wireless/Ethernet bridge. A complete lab configuration, with both robot and server inside the lab, is shown in Figure 27.

Figure 27. Complete Configuration at the USF Robotics Lab

Table 3 describes the equipment currently used in our system. No minimum hardware requirements were defined for the server, as the machine was more than capable of performing the tasks dedicated to it, so the parameters given in the table are suggestions.

The equipment listed for the laptop computer, however, was pushed to its limits, so those specifications should be considered a minimum for this application. The Linux operating system is required for both the server and laptop machines, since it is necessary to accommodate DFRA. Fedora Core 4 Linux was used in our configuration, but any modern distribution should work.

Table 3. Equipment Specifications

Name                 Brand/Model                     Processor           Memory  Hard Drive  OS                   Network
Server               Dell Precision 370n             Pentium 4 3.4 GHz   2 GB    2x400 GB    Fedora Core 4 Linux  Gigabit Ethernet
Robot Laptop         Rugged Notebooks Talon P14N     Pentium 4M 2.4 GHz  1 GB    40 GB       Fedora Core 4 Linux  100 Mbps Ethernet
External IP Camera   Axis 213PTZ                     N/A                 N/A     N/A         Embedded Linux       100 Mbps Ethernet
Video A/D Converter  The Imaging Source DFG/1394-1e  N/A                 N/A     N/A         N/A                  FireWire

When used outside, the mobile part of the equipment was placed on a wheeled cart and taken to the test bed as shown in Figure 28, then connected back wirelessly, or through a long Ethernet cable when too much interference was encountered from background traffic (university wireless).

Figure 28. Outside Use of the Robot

Figure 29 shows in detail the devices used when part of the system is operated outside.

Figure 29. Detailed View of the Robot-Side Setup

A snapshot of the GUI during outside robot operation is shown in Figure 30.

Figure 30. Screenshot of the GUI During Outside Robot Operation

5.3 Proof-of-Concept Experiments

Multiple proof-of-concept experiments were performed using this system, during and after the implementation stages, from several places around the United States and one location in the United Kingdom.

5.3.1 UK to USF

The longest distance of operation was between Wirral, United Kingdom and USF in Tampa. The robot was operated by Carl Woffenden, creator of the web browser joystick plug-in used in the implementation. This experiment was successful, even with 4200 miles of physical distance separating the two locations (this and all other distances mentioned in this chapter were estimated linearly using the measuring tool of the Google Earth software). According to the operator, network lag was noticeable but tolerable, allowing him to safely navigate the hallways of the computer science building in Tampa, FL.

5.3.2 Minneapolis to USF

The second longest distance of successful operation was approximately 1300 miles, between Minneapolis, MN and USF in Tampa. The robot was operated by the author, as well as by several other people, at the fall 2006 Safety Security Rescue Research Center (SSRRC) meeting. Again, network delay, though present, was not an issue.

5.3.3 USF to San Diego

Another attempted experiment was to access from USF the robot set up at Strong Angel III (SA III, Integrated Disaster Response Demonstration, San Diego, CA, August 2006). A local area network was set up at the event, and on several occasions (about an hour or two at a time) Internet access was also provided.

In this case, Internet access meant that we could use our machines on the local network to access any resource on the web. Because of the way IP addressing and Network Address Translation (NAT) work, by default we could not access the machines on the local area network in the reverse direction, from the Internet. To enable this, simple changes had to be made to the configuration file of the gateway router. These changes were trivial, but because of the number of requests constantly flooding the network technicians, we were never able to get outside access to our server. Unfortunately, this would not be out of the ordinary in a real disaster either. Since the problem in this scenario was that we could not make connections into the desired LAN, it could possibly be fixed by setting up a proxy server somewhere on the Internet. Both the client and our server would connect to this proxy, which would then simply retransmit the data in both directions. This way the server would be making an outgoing connection, eliminating the problem. This would inevitably add more delay to the system because of the processing overhead of the extra hop, but considering the need for reliable control during disasters, this may be an option worth pursuing in the future.

5.3.4 USF Campus Experiment

Another experiment was performed between the psychology and computer science buildings on the USF campus. This exercise was part of a demonstration of current trends in Human-Robot Interaction (HRI) research by Dr. Jennifer Burke during the Department of Psychology graduate research colloquium series. This experiment demonstrated another possible use of the system: HRI research by students who do not have physical access to robots.

5.3.5 Other Local Experiments

Numerous other operations were performed from around the Tampa Bay area. These experiments were mostly used to verify specific system functionality, and many lessons were learned in the troubleshooting process. Not all of these lessons were technical: since many of the tests were performed by people who were not computer savvy, the interface and operating instructions were augmented after almost every run. On the technical side, as expected, the network was the cause of most problems. By nature, as the distance increases, so does the network delay, which makes operation more difficult. Still, even with the longest distances separating robot and clients, the system was completely operational and the network delay was tolerable; assuming the continued growth of high-speed Internet access and the development of faster networks, this will only improve with time.

5.3.6 Wireless Experiments

To simulate field operations, several tests were performed using a wireless link between the robot and the server. Different wireless technologies were used for this purpose, including 802.11b and a Motorola wireless mesh network. By the nature of these technologies, 802.11 provided much higher bandwidth and smaller delay, but only worked over short distances and was highly susceptible to interference. On the other hand, the Motorola mesh network worked much better over longer distances, but provided low throughput and high delay.

Overall, neither network would be acceptable for use in a field scenario because of their numerous shortcomings. To be clear, this does not mean that users cannot connect to the server over wireless. The user link is not particularly critical: as its performance degrades, the quality of the video received by that particular client may get worse, but compared to a disaster situation, the load on a regular access point is minuscule. It is only in the field that these technologies get pushed to their maximum capacity and eventually fail.

5.3.7 Experiment Summary

As a result of the experiments described above, all server functionality was verified as operational, the server control panels were simplified, and detailed instructions were added to the user interface to correct problems that may come up during use. Table 4 summarizes the various proof-of-concept experiments performed using this system.

Table 4. Proof-of-Concept Experiment Summary

Experiment               Network (robot-server link)  Distance     Duration                   Results
Wirral, UK               wired                        4200 miles   45 min                     success
Minneapolis              wired                        1300 miles   1 hour                     success
SA III, San Diego        wired                        2000 miles   over a period of two days  failure
USF campus               wired                        ~1 mile      15 min                     success
15-20 local experiments  wired                        10-50 miles  ~30 min each               success
5-10 local experiments   wireless                     10-50 miles  ~30 min each               mixed

5.4 Scalability

To test the scalability of the system, another experiment was performed with the number of users varied from 1 to 10 and the frame rate measured at each client. Several other relevant factors, such as server network and CPU utilization, were monitored during the process. Average client frame rate was used as the measure of system performance. Results of this experiment are shown in Table 5.

Table 5. Summary of the Scalability Experiment

Users  Avg Client Frame Rate (fps)  Server CPU (%)  Server Network (%)  Laptop CPU (%)  Laptop Network (%)
0      0                            0               5                   33              5
1*     0                            16              55                  60              85
1      30                           22              77                  60              85
2      30                           27              80                  60              85
3      30                           29              86                  60              85
4      30                           30              88                  60              85
5      30                           32              89                  60              85
6      30                           35              92                  60              85
7      25                           36              95                  60              85
8      24                           37              95                  60              85
9      20                           40              95                  60              85
10     15                           44              95                  60              85

As can be seen above, Table 5 contains two entries for the one-client scenario. In the first case (the row marked with an asterisk), a client connected to the server and commanded it to connect to the robot and the external camera. At this point the server was receiving video, audio, and data from both devices, but not relaying any of it to the client. This was done purposely, to see how much network and CPU resource was used just for the robot/server connection. The second row for one connected client shows the typical case, in which the server is receiving and relaying data in both directions. It is worth pointing out that the CPU utilization of the robot laptop with no connected users is 33%; this CPU time is used to receive and buffer video images from the robot over FireWire.

Figure 31 shows a plot of the data contained in Table 5, excluding the laptop metrics, since, as expected, once the server connects to the robot, network and CPU utilization stay approximately constant for that computer.

Figure 31. Frame Rate and Network/CPU Utilization vs. Number of Users

Clearly, the results indicate that network load is the major limiting factor on the number of clients and on the overall performance of the system. Still, even with 10 simultaneous users, the frame rate was around 15 fps, which can be considered acceptable.

It is important to note that these results were taken with all users receiving an equal share of the server's capabilities, 30 fps at best. If this high frame rate were offered only to the few selected users who really need it, while observers received only 10-20 fps, the overall number of clients that the server can handle simultaneously would increase dramatically. Also, even with the current configuration, if more users were to join the robot exercise, the common frame rate could be reduced to 20 fps, for example, which from experience is still very lifelike. In general, however, it is likely that the system will be used by 2-5 users, in which case the machine is fully capable of serving 30 fps to each connected client.

The same experiment was also performed with the recording mechanism enabled. The results were consistent: recording of all available video and data added about 15% to server CPU utilization, with no effect on client frame rate or network utilization.

5.5 Miscellaneous

Another experiment was performed to measure the delay added by the server itself, as compared to controlling the robot directly using DFRA. The test was performed on a local area network (to accommodate DFRA), and the server added about 100 ms of delay, which is negligible compared to the alternative (not being able to control the robot at all).

5.6 Summary

The main goal of the experiments performed using this system was to verify the server's ability to control ground robots over the Internet. This goal was achieved by repeatedly operating the robot across the thousands of miles separating the device and its operator. The scalability experiment showed that, under the current hardware configuration, up to six users can remotely share the full operational capabilities of the robot. Problems encountered during the attempted SA III experiment revealed that, in the presence of a non-configurable NAT router, it would be helpful to implement a proxy server that would reverse the initial connection between the server and clients.

Finally, the delay added by the server was found tolerable by all remote users.

Chapter Six

Discussion

Implementation of this system was for the most part fairly straightforward, largely a matter of programming the modules planned for during the design. There are, however, several factors that limit the performance of our server. This chapter describes these limitations, as well as other problems encountered during design and implementation.

6.1 Limiting Factors

6.1.1 Network Limitations

The main limiting factor for this system is the network connecting clients to the server, and especially the segment between robot and server. For the training scenario this is not much of an issue, since the needed network resources can be allocated at the training site prior to the experiments. In a real-world response, however, network capabilities are likely to be limited, which makes the use of the server for reachback questionable, especially the critical black-box recording capability. Since the robot-server link is most likely to be wireless, the probability of data loss is very high. With the improvement of the processing power of portable computers, however, it may soon be possible for the recording part of the software to be implemented on the laptop connected to the robot, without having to rely on the network. In either case, the server does guarantee that the commands executed on the robot by clients connected over the web will be recorded correctly.

6.1.2 Processing Power Limitations

The processing power of both the server and, especially, the robot laptop turned out to be a big factor in the overall performance. This is mostly due to the programming language selection: Java, being a bytecode-interpreted language, consumes a lot of processing power compared to lower-level languages such as C or C++. Since the hardware of the server can easily be upgraded, this is mostly a problem for the laptop attached to the robot. Simple data connections are not a problem, but any kind of image processing is out of the question. Image processing in this case can be as simple as compression: video comes from the robot at VGA resolution (640x480 pixels) in RGB format with 8 bits (256 levels) per channel, so an uncompressed frame is 640 x 480 x 3 = 921,600 bytes, roughly 0.9 MB. Transmitting 30 such frames per second would require a connection of over 200 Mbps, which at this time is unrealistic, so we have two choices: compress the images or reduce the resolution. As it turns out, both are computationally intensive tasks. Compression is an obvious one, but resolution reduction is a little more subtle. It can be done in several ways: we can either discard the unused pixels, in which case the image ends up looking jagged, or we can average the pixel values to produce a smoother, more natural image. The latter takes a lot more processing power, so currently the faster way is used. Video images are reduced to a quarter of their original size for transmission and storage, and are later expanded at the client display.

Since video processing takes up most of the computational power, the addition of a separate device that performs these tasks in hardware may be beneficial. This would take the processing load off the laptop and greatly improve the quality of the robot video received by clients.
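The two reduction options can be sketched as follows for a packed-RGB image, halving each dimension (illustrative code): decimation keeps every other pixel and is cheap but jagged, while averaging blends each 2x2 block and is smoother but costs several times more arithmetic per output pixel.

    public class Downsample {
        // img is packed RGB, one int per pixel (0xRRGGBB), row-major.
        public static int[] decimate(int[] img, int w, int h) {
            int[] out = new int[(w / 2) * (h / 2)];
            for (int y = 0; y < h / 2; y++)
                for (int x = 0; x < w / 2; x++)
                    out[y * (w / 2) + x] = img[(2 * y) * w + 2 * x]; // keep one of four
            return out;
        }

        public static int[] average(int[] img, int w, int h) {
            int[] out = new int[(w / 2) * (h / 2)];
            for (int y = 0; y < h / 2; y++)
                for (int x = 0; x < w / 2; x++) {
                    int r = 0, g = 0, b = 0;
                    for (int dy = 0; dy < 2; dy++)
                        for (int dx = 0; dx < 2; dx++) {
                            int p = img[(2 * y + dy) * w + (2 * x + dx)];
                            r += (p >> 16) & 0xFF;
                            g += (p >> 8) & 0xFF;
                            b += p & 0xFF;
                        }
                    out[y * (w / 2) + x] = ((r / 4) << 16) | ((g / 4) << 8) | (b / 4);
                }
            return out;
        }
    }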

6.2 Design and Implementation Issues

6.2.1 Port Blocking

Port blocking is done by firewalls as a security measure to prevent popular types of attacks, and consists of filtering out connections addressed to particular TCP or UDP ports. This creates a challenge from the programming standpoint, since various network-dependent tasks are typically separated into connections on different ports. For example, to receive video from the robot as well as send commands to the device, we would open two connections on separate ports: one to request and receive video, and the other to control the robot. Initially, the server had seven ports that were used for video and data transmission, server control, robot control, and so on. All functionality was verified locally, yet during cross-country tests certain aspects of the system (always different ones) were completely disabled. After examining the network traffic with a protocol analyzer, it became obvious that there was no evidence of connections on certain ports. This was very surprising, since the problem persisted even after users were advised to disable the firewall on their machines, which meant that the connections were being dropped somewhere along the way. In an attempt to fix the problem, the port numbers were changed, and even system ports were tried (21, 22, 25, 80, etc.), all with no improvement; there was no set that would work consistently. Finally, the port multiplexing described in the previous chapter was implemented, and the problem was eliminated.

6.2.2 Novice Users

Perhaps not surprisingly, about 50% of the setup time in the initial stages of validation was spent on the phone explaining to clients how to use the system. The questions were mostly unrelated to the actual software and concerned instead the use of their own computers to perform certain tasks, such as turning off the firewall or making sure their Java plug-in was up to date. After having to repeat the same directions to several users, a help tab was added to the interface, and the home page of the server was modified to include directions on how to make sure the client system is up to date, how to change Java security settings, and some overall suggestions on the use of the system. A diagnostic panel was also added to display system messages in an understandable format.

6.2.3 User Policy

This server was designed for multi-user operation from the standpoint of multithreading and managing available resources. It does not, however, include any user policies to resolve contention issues between various types of clients. For example, if everyone connected to the server has equal priority as far as robot control and quality of received video are concerned, commands are still sent to the robot on a first-come, first-served basis.

This issue is beyond the scope of this project, partially because user priority concerns were being addressed in a parallel project by Chris Williams [6]. Unfortunately, at the time of completion, those mechanisms had been validated by Williams using an earlier version of the software described in this thesis. In the future, the two systems will be combined to include the latest versions of each package.

6.3 Notes on Experiments

Since there was really nothing to prove numerically, or from a performance standpoint, 'proof-of-concept' is a more appropriate title for all the performed operations than 'experiments'. Also, for the scalability tests it is important to note that there is a constant tradeoff between frame rate and the quality of video and audio. For the experiments described, all parameters were set to medium quality; that is, both camera and robot images were 320x240 with 20-50% compression, and audio was sampled at 11.025 kHz in both directions. Of course, if less compression were applied to the video images or higher audio sampling rates were used, the results would differ, with consistently lower frame rates.

The measure of success can also be somewhat ambiguous. In this case, success was measured by the ability of remote users to safely operate the robot. Tolerance of network delay of course varies from user to user and is a subjective measure, but at this point no other mechanisms are in place to quantify user satisfaction.

Chapter Seven

Conclusions and Future Work

7.1 Conclusions

This thesis describes a system that distributes robot video, data, and controls over the Internet, and also provides a convenient place for network-based storage and playback of all pertinent information. The server is particularly useful because all of its functionality is made available through a web browser based graphical user interface that is accessible by any authorized user with an Internet connection, is platform independent, and does not require any software installation.

Recall that there were two intended uses for this system: training on ground robots, and reachback in a real-world disaster. The primary system goal was to enable robot control over the Internet and to record all available data and user actions. This goal was achieved, and it is safe to say that the server could very well be used for training on rescue robots, since the system behaves very well over long-distance wired networks. Reachback, the secondary intended functionality, is arguable, the main problem being the network: current wireless technologies are not yet robust enough for reliable field operations. Communications will certainly improve with time, and this server should work well over any Internet-type network that the future brings.

7.2 Future Work

Several directions for future work were identified during the development of this system. One is the general expansion of the system to support multiple robots, various robot platforms, and robot architectures; another is to implement a proxy server to overcome the real-world difficulty of outside access to the LAN containing the robot and server. Switching from TCP to UDP to achieve better throughput over the client-server connection also looks promising. Finally, a dynamic server resource management mechanism is needed to keep better track of security and user actions.

7.2.1 System Expansion

This system is essentially an Internet extension of an existing robot architecture for a single-robot, multiple-client scenario. In the future it could be expanded to include multiple robots, possibly of different platforms.

There are many distributed robot architectures that strive for platform independence and dynamic discovery, yet they all lack an easily accessible user interface and recording capabilities such as those described in this thesis. Consequently, it would be beneficial to include in the system an interface not only to the robots, but to the architecture as a whole. In the same manner that local area networks connect to the Internet through a gateway, this server could be expanded to perform a similar, higher-level gateway task for the distribution of robot data and controls, and for other ongoing tasks performed by the robot architecture, such as dynamic service discovery. By being able to retrieve the platform capabilities of the robot from the architecture and modify the interface on the fly, this could in the end become, as a whole, a platform-independent (on both the robot and client sides) robot architecture. This, of course, is not at all a trivial task, but with a framework for the distribution of robot controls, video, and data already in place, the problem is narrowed down to performing a similar task with a different set of information.

The system was built on top of DFRA, and could be tailored to any other robot architecture by changing the robot drivers. If the gateway functionality were attempted, a more general API should be designed, so that this system could easily be added to any other distributed architecture. The combination of the system's capabilities and a generalized API would make this server an extremely attractive tool for designers of robot architectures.

7.2.2 TCP to UDP

As discussed in section 4.5.6, there are some advantages to using UDP as the transport protocol instead of TCP. Since no congestion control mechanisms are implemented, UDP can usually achieve better throughput than TCP. There are also drawbacks to this approach, mainly from the implementation standpoint. UDP provides no guarantee that data will arrive in order, or even that it will arrive at all, so error recovery mechanisms have to be implemented manually. Also, unlike TCP, UDP is not a stream-based protocol, which means that outgoing data has to be separated into packets at the sending side and reassembled at the receiver by the programmer. There are, however, several available libraries that implement these mechanisms on top of UDP. The task of investigating these packages and their stability is left for future work.

7.2.3 Proxy Server

As mentioned in Chapter Five, in a real-world scenario where the LAN containing the robot and server cannot be configured, the system may not be accessible from the outside world; that is, connections can be made out to the Internet, but not in the opposite direction. A proxy server, shown in Figure 32, is a possible solution to this problem. This server would be a machine located somewhere on the Internet that relays traffic from the server to clients and from clients to the server. Initially, the robot server establishes communication with the proxy, which at that point starts listening for client connections. In a typical scenario, users connect directly to the robot server; in the configuration described here, they would actually connect to the proxy server, which has no incoming traffic restrictions. Once a user connection is established, the proxy simply relays user requests to the robot server and the data back to the clients. Since both the clients and our server make outgoing connections, the problem is eliminated. This idea is very similar to the super-node concept in the peer-to-peer networking community, with the Skype voice-over-IP software [28] being a perfect implementation example.
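The relay itself is simple; a minimal single-pair sketch is shown below (ports illustrative, one robot server and one client only). The robot server dials out to the proxy first, a client dials out next, and two pump threads copy bytes between the sockets in both directions.

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class RendezvousProxy {
        public static void main(String[] args) throws Exception {
            Socket server = new ServerSocket(9000).accept(); // robot server connects first
            Socket client = new ServerSocket(9001).accept(); // then a client
            pump(server.getInputStream(), client.getOutputStream());
            pump(client.getInputStream(), server.getOutputStream());
        }

        // Copy bytes from one socket to the other until the link closes.
        static void pump(final InputStream in, final OutputStream out) {
            new Thread(new Runnable() {
                public void run() {
                    byte[] buf = new byte[4096];
                    try {
                        int n;
                        while ((n = in.read(buf)) != -1) {
                            out.write(buf, 0, n);
                            out.flush();
                        }
                    } catch (Exception e) { /* link closed */ }
                }
            }).start();
        }
    }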

Figure 32. Proxy Configuration

A reasonable question to ask is: why not use this proxy scenario all the time? The reason is that the extra processing hop will almost certainly add more delay to the communications, making this configuration practical only as a backup when no other options are available.

7.2.4 Dynamic Server Resource Management

As mentioned in section 4.5.3, there is currently no mechanism to resolve contention issues between users with the same permissions. There is also no means to control server and network load depending on the number of connected users. Another future application of this system involves a high-capability Unmanned Surface Vehicle (USV) that needs to be accessed by several users at a time, some of whom may be connected over the Internet. This small boat has multiple on-board sensors, including six cameras, that provide various data to users on shore or connected over the Internet.

Because the network connection to the robot is wireless, there is no way for several people to be connected to the device at the same time while each receives a reasonable share of resources such as video from the multiple built-in cameras. Realistically, only a subset of the robot data can be transmitted over the link. In practice, a user would never need to access all the data at the same time; the problem is that different users may want to access different subsets of the available information. And since there is a definite distinction between the different types of users in this bandwidth-hungry configuration, a mechanism is needed to provide some form of quality-of-service distinction; otherwise, performance will be degraded for all users.

This wireless link limitation was also present in the original requirements of the server, but there the set of data to be transmitted to users was constant, and even though distinguishing between different types of users was considered beneficial in that scenario, it was not necessary.

All the data needed to implement this mechanism is already in place: the server keeps a list of connected users with their type, priority, and permissions. Now actual policies have to be implemented that govern server behavior: for example, denying a new connection if the machine is reaching its capacity or, where appropriate, disconnecting a user that is already connected, depending on their priorities. Frame rates and the selection of the sensor subset will also have to be changed dynamically according to these policies.

References

[1] Matthew Long, "Creating a Distributed Field Robot Architecture for Multiple Robots", Master's Thesis, November 2004.
[2] Ken Goldberg, Michael Mascha, Steven Gentner, Juergen Rossman, Nick Rothenberg, Carl Sutter and Jeff Wiegley, "Beyond the Web: Excavating the Real World via Mosaic (The Mercury Project)", Second International WWW Conference, Chicago, IL, October 17-21, 1994.
[3] Reid Simmons, Richard Goodwin, Karen Zita Haigh, Sven Koenig, Joseph O'Sullivan, "A Modular Architecture for Office Delivery Robots", in Autonomous Agents 1997, ACM, February 1997, pages 245-252.
[4] Anthony Cowley, Hwa-chow Oliver Hsu, and C. J. Taylor, "Opening the Dialog: Robotics and the Internet", 2006 IEEE International Conference on Robotics and Automation, Orlando, FL, May 2006.
[5] Antal K. Bejczy, Steven Venema, and Won S. Kim, "Role of computer graphics in space telerobotics: preview and predictive displays", in Cooperative Intelligent Robotics in Space, pp. 365-377, Proceedings of the SPIE, vol. 1387, November 1990.
[6] Chris Williams, "Knowledge-Based Video Compression for Robots and Sensor Networks", Master's Thesis, May 2006.
[7] J. Postel, RFC 768: User Datagram Protocol, August 28, 1980.
[8] V. Jacobson, "Congestion Avoidance and Control", in Proceedings of SIGCOMM, pp. 314-329, 1988.
[9] S. Floyd and K. Fall, "Promoting the Use of End-to-End Congestion Control in the Internet", IEEE/ACM Transactions on Networking, August 1999.
[10] Dmitry Kalyadin, "Performance of TCP over wireless networks", Honors Thesis, Department of Computer Science and Engineering, University of South Florida, December 2004.
[11] Nak Young Chong, Tetsuo Kotoku, Kohtaro Ohba, Kiyoshi Komoriya, Kazuo Tanie, Junji Oaki, Hideaki Hashimoto, Fumio Ozaki, Katsuhiro Maeda and Nobuto Matsuhira, "A collaborative multi-site teleoperation over an ISDN", Mechatronics, Volume 13, Issues 8-9, October 2003, pages 957-97.

[12] R. Murphy, J. L. Burke, "Human-Robot Interaction in USAR Technical Search: Two Heads are Better Than One", in Proceedings of the 13th IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN 2004).
[13] K. Goldberg, B. Chen, R. Solomon, S. Bui, B. Farzin, J. Heitler, D. Poon, G. Smith, "Collaborative teleoperation via the Internet", in Proceedings of Robotics and Automation (ICRA '00), 24-28 April 2000, vol. 2, pages 2019-202.
[14] K. Goldberg, Dezhen Song, A. Levandowski, "Collaborative teleoperation using networked spatial dynamic voting", Proceedings of the IEEE, Volume 91, Issue 3, March 2003, pages 430-439.
[15] Ken Goldberg, Roland Siegwart, Beyond Webcams: An Introduction to Online Robots, Cambridge, MA, MIT Press, 2002.
[16] Herman Kruegle, CCTV Surveillance: Video Practices and Technology, Butterworth-Heinemann, 1996.
[17] Axis Communications, http://www.axis.com.
[18] Broadware, http://www.broadware.com.
[19] American Dynamics, http://www.americandynamics.net.
[20] Aimetis Corp., http://www.aimetis.com.
[21] Dedicated Micros, http://www.dedicatedmicrosus.com.
[22] D3Data LLC, http://www.D3Data.com.
[23] Toshiba, http://www.toshiba.com.
[24] Sony, http://www.sony.com.
[25] W. D. Armitage, D. Kalyadin, M. Labrador, and R. Murphy, "Video and Biohazard Monitoring of Sites During Incidents", Proceedings of the Sharing Solutions for Emergencies and Hazardous Environments (SSFEHE) Conference, Salt Lake City, Utah, February 2006.

[26] A. H. Mishkin, J. C. Morrison, T. T. Nguyen, H. W. Stone, B. K. Cooper, B. H. Wilcox, "Experiences with operations and autonomy of the Mars Pathfinder Microrover", in Proceedings of the IEEE Aerospace Conference, Volume 2, 21-28 March 1998, pages 337-351.
[27] Axis VAPIX HTTP API, http://www.axis.com/techsup/cam_servers/dev/cam_http_api_2.htm.
[28] Skype, http://www.skype.com.
[29] G. Kiczales, E. Hilsdale, J. Hugunin, M. Kersten, J. Palm, and W. G. Griswold, "An Overview of AspectJ", Proceedings of the 15th European Conference on Object-Oriented Programming (ECOOP), pages 327-355, June 2001.
[30] Lujo Bauer, Jay Ligatti, David Walker, "Composing Security Policies with Polymer", PLDI '05, June 12-15, 2005.

