
Pervasive sensing and computing for natural disaster mitigation


Material Information

Title:
Pervasive sensing and computing for natural disaster mitigation
Physical Description:
Book
Language:
English
Creator:
Quintela, Daniel H
Publisher:
University of South Florida
Place of Publication:
Tampa, Fla.
Publication Date:
2005
Subjects

Subjects / Keywords:
Wireless sensor network
Remote sensing
Disaster management
Adaptable architecture
Motes
Dissertations, Academic -- Electrical Engineering -- Masters -- USF   ( lcsh )
Genre:
government publication (state, provincial, territorial, dependent)   ( marcgt )
bibliography   ( marcgt )
theses   ( marcgt )
non-fiction   ( marcgt )

Notes

Abstract:
ABSTRACT: This research proposed the use of state-of-the-art wireless communications and networked embedded systems technologies to provide environmental sensing for the early detection of natural disasters. The data is acquired, processed and transmitted from the location where the disaster originates to potentially threatened conurbations in order to promptly notify the population. The acquired data is transformed from its raw form into information that can be utilized by local authorities to rapidly assess emergency situations and then to apply disaster management procedures. Alternatively, the system can generate alerting signals without human intervention. Furthermore, recorded historical data can be made available for scientists to build models and to understand and forecast the behavior of disastrous events. An additional important contribution of this research was the analysis and application of Wireless Sensor Network technology for disaster monitoring and alerting.
Thesis:
Thesis (M.S.E.E.)--University of South Florida, 2005.
Bibliography:
Includes bibliographical references.
System Details:
System requirements: World Wide Web browser and PDF reader.
System Details:
Mode of access: World Wide Web.
Statement of Responsibility:
by Daniel H. Quintela.
General Note:
Title from PDF of title page.
General Note:
Document formatted into pages; contains 114 pages.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 001680961
oclc - 62413432
usfldc doi - E14-SFE0001160
usfldc handle - e14.1160
System ID:
SFS0025481:00001




Full Text


PAGE 1

Pervasive Sensing and Computing for Natural Disaster Mitigation

By

Daniel H. Quintela

A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Electrical Engineering
Department of Electrical Engineering
College of Engineering
University of South Florida

Major Professor: Wilfrido A. Moreno, Ph.D.
James T. Leffew, Ph.D.
Miguel Labrador, Ph.D.

Date of Approval: April 6, 2005

Keywords: Wireless Sensor Network, Remote Sensing, Disaster Management, Adaptable Architecture, Motes

Copyright 2005, Daniel H. Quintela

PAGE 2

ACKNOWLEDGEMENTS

I would like to express my appreciation for the opportunity given to me by Dr. Moreno, my major professor, in realizing this thesis. I would like to thank the committee members, Dr. Leffew and Dr. Labrador, for their guidance and support. I am indebted to many student colleagues for their encouragement and cooperation throughout the course of this thesis. I am especially grateful to Mauricio Castillo, Oscar Gonzalez, Jaime Dimate, Karim Souccar, Yohan Prevot, and Nhat Nguyen. Words cannot describe how grateful I am for all the help and support given by my family and friends throughout my graduate studies. Finally, I would like to express all my gratitude to the Electrical Engineering faculty for helping me succeed academically.

PAGE 3

TABLE OF CONTENTS

LIST OF TABLES iii
LIST OF FIGURES iv
ABSTRACT vi
CHAPTER 1 INTRODUCTION 1
1.1 Problem Statement 3
1.2 Research Scope 4
1.3 Thesis Organization 5
CHAPTER 2 STATE-OF-THE-ART SURVEY 6
2.1 Space Technology for Remote Sensing 6
2.2 Telemetry-based Solutions for Remote Sensing 7
2.3 Wireless Sensor Networks for Remote Sensing 8
CHAPTER 3 WIRELESS SENSOR NETWORK OVERVIEW 10
3.1 The WSN Concept 10
3.2 Hardware 11
3.2.1 Mote Platforms 12
3.2.2 Data Acquisition Boards 14
3.2.3 Sensors 15
3.2.4 Programming Board 16
3.2.5 Network Gateway 16
3.3 TinyOS 18
3.3.1 Concept 19
3.3.2 Component 20
3.3.3 Concurrency Model 21
3.3.4 Scheduling 22
3.3.5 Programming Language: nesC 22
3.4 MintRoute Algorithm 23
CHAPTER 4 OVERALL SYSTEM CONCEPT 25
4.1 Proactive Approach to Disaster Management 27
4.2 Remote Sensing Network Architecture 28
4.2.1 Identifying Sensing Parameters 29
4.2.2 Identifying Locations to Deploy Sensors 31
4.2.3 Identifying the Available Infrastructure 32
4.2.4 Cell Design 33
4.3 Central Processing Office 35
4.4 A WSN Applied to Natural Disasters 36
CHAPTER 5 SENSOR NODES 38
5.1 Data Acquisition Requirements for Flash-Flood Monitoring 39
5.2 Node Types 39
5.3 Signal Conditioning Techniques 41

PAGE 4

5.3.1 Pulse Generating Sensors 42
5.3.2 Resistive Transducers 44
5.3.3 Data Processing for COTS Sensor Boards 48
5.4 Sensor Drivers 49
5.5 Test Software 51
5.6 Cluster Configuration 56
5.6.1 Meteorological Mote 57
5.6.2 Hydrological Mote 59
5.6.3 Seismic Mote 59
5.6.4 Repeater Mote 60
CHAPTER 6 POWERING TECHNIQUES FOR MOTES AND STARGATE 61
6.1 Battery Power for Motes 61
6.2 Powering the Stargate Board 68
CHAPTER 7 NETWORK STATISTICS AND COMMUNICATION QUALITY 73
CHAPTER 8 NETWORK GATEWAY: STARGATE CONFIGURATION 78
8.1 Local Database 78
8.1.1 Alarm Generation 81
8.1.2 Fuzzy Logic 83
8.2 Connecting to the GPRS Network 86
8.2.1 Establishing a PPP Connection 86
8.3 Establishing WLAN Connectivity 87
8.4 Internet Connectivity 88
CHAPTER 9 CONCLUSIONS AND FUTURE WORK 89
REFERENCES 92
APPENDICES 94
Appendix A PPP Scripts 95
A.1 ppp-on script 95
A.2 ppp-on-dialer script 96
A.3 ppp-off script 97
Appendix B Intersema MS5534 Algorithm 98
Appendix C Taos TSL2550 Algorithm 100
Appendix D MDA300CA DAQ Board Schematic 101
Appendix E Sensor Driver 102
E.1 SoilDriver.nc 102
E.2 SoilDriverM.nc 103
E.3 TestDriver.nc 104
E.4 TestDriverM.nc 105

PAGE 5

LIST OF TABLES

Table 1: MICA2 and MICA2DOT Platform Specifications, [1] 13
Table 2: Absolute Maximum Ratings for the MDA300CA DAQ Board, [2] 15
Table 3: Sensing Parameters and COTS Equipment 41
Table 4: Data Packet Structure for the MTS420 Sensorboards 53
Table 5: Transducers Coupled with the Meteorological Mote 58
Table 6: Transducers Coupled with the Hydrological Mote 59
Table 7: Transducers Coupled with the Seismic Mote 60
Table 8: Stargate Computer Current Draw in Different Modes 68
Table 9: Multipliers Based on Winter Time Ambient Temperature 71
Table 10: Communication Quality and Network Statistics for the USF Experiment 75
Table 11: Communication Quality and Network Statistics for the Maracay Experiment 76
Table 12: Communication Quality and Network Statistics for the USF Experiment 77
Table 13: Sample Database Table Running on the Server and Client Computers 80
Table 14: Continuation of Sample Database Table Running on the Server and Client Computers 81
Table 15: Alarm Levels Used to Trigger Transmission of Database Files 83
Table 16: Fuzzy Logic Algorithm as a Proof of Concept 85
Table 17: Rules for the Fuzzy Logic Alarm Algorithm 85
Table 18: Continuation of the Rules for the Fuzzy Logic Alarm Algorithm 85

PAGE 6

LIST OF FIGURES

Figure 1: MICA2 Mote and Platform's Block Diagram, [1] 12
Figure 2: MICA2DOT Mote and Platform's Block Diagram, [1] 13
Figure 3: MDA300CA Data Acquisition Board for the MICA2 Motes, [2] 14
Figure 4: MDA500CA Data Acquisition Board for the MICA2DOT Motes, [2] 14
Figure 5: MIB510 Programming Board, [1] 16
Figure 6: Stargate Board, [3] 18
Figure 7: TinyOS Component-based Structure 19
Figure 8: TinyOS Task Scheduler, [4] 20
Figure 9: Component Block Diagram 21
Figure 10: MintRoute Multihopping Routing Algorithm Used in the Motes, [4] 24
Figure 11: Proactive vs. Reactive Approaches in Disaster Monitoring 28
Figure 12: Chain of Events Culminating in a Flash-Flood Disaster 30
Figure 13: System Concept Block Diagram 33
Figure 14: System Data Structure Concept 34
Figure 15: Basic Concept Architecture 35
Figure 16: "Davis" Rain Collector II 42
Figure 17: "Davis" Anemometer 43
Figure 18: "Swoffer" Water Flow Transducer 44
Figure 19: Pulse Signal Conditioning for Different Types of Transducers 44
Figure 20: Schematic for Resistive Signal Conditioning 45
Figure 21: "InterMountain" Float & Pulley Water Level Transducer 45
Figure 22: "Davis" Soil Moisture Sensor 46
Figure 23: Pressure and Temperature Readings from an Intersema MS5534 Sensor 48

PAGE 7

Figure 24: Signal Conditioning Circuitry for the Resistive Soil Moisture Sensor 50
Figure 25: Test Sensor Application Wiring 50
Figure 26: Soil Driver Configuration Wiring 51
Figure 27: Testing Configuration for Sensor Devices Prior to Deployment 52
Figure 28: Front Panel of the LabView Application Used to Test the MTS420CA Sensor Board 54
Figure 29: Block Diagram with the Implementation of the Sensor Algorithms 55
Figure 30: Implementation of the Sensor Algorithms 56
Figure 31: Cluster Architecture for Flash-Flood Monitoring 57
Figure 32: Block Diagram of the Meteorological Motes 58
Figure 33: Block Diagram of the Hydrological Mote 59
Figure 34: Block Diagram of the Seismic Mote 60
Figure 35: Powering Scheme Using Two AA Alkaline Batteries 62
Figure 36: Discharge Characteristics for 1.5V Panasonic Industrial Alkaline Batteries 63
Figure 37: System Characteristics Investigated for the Two Powering Schemes 64
Figure 38: Battery Lifetime vs. Battery Capacity for the Two Powering Schemes 65
Figure 39: Powering Scheme Using Three AA NiMH Batteries 66
Figure 40: Discharge Characteristics for 1.2V Energizer Rechargeable NiMH Batteries 67
Figure 41: Architecture to Provide Solar Energy for the Stargate Computer 69
Figure 42: Testing Network Statistics for the Sensor Network at USF 74
Figure 43: Deployment Site in Maracay, Venezuela 75
Figure 44: TinyOS Message Structure with Dynamic Payload Length 79
Figure 45: Fuzzy Inference System for the Alarm Algorithm 85
Figure 46: Intersema MS5534 Algorithm 98
Figure 47: Taos TSL2550 Algorithm 100
Figure 48: MDA300CA DAQ Board Schematic 101

PAGE 8

PERVASIVE SENSING AND COMPUTING FOR NATURAL DISASTER MITIGATION

Daniel H. Quintela

ABSTRACT

This research proposed the use of state-of-the-art wireless communications and networked embedded systems technologies to provide environmental sensing for the early detection of natural disasters. The data is acquired, processed and transmitted from the location where the disaster originates to potentially threatened conurbations in order to promptly notify the population. The acquired data is transformed from its raw form into information that can be utilized by local authorities to rapidly assess emergency situations and then to apply disaster management procedures. Alternatively, the system can generate alerting signals without human intervention. Furthermore, recorded historical data can be made available for scientists to build models and to understand and forecast the behavior of disastrous events. An additional important contribution of this research was the analysis and application of Wireless Sensor Network technology for disaster monitoring and alerting.

PAGE 9

CHAPTER 1
INTRODUCTION

Technology has evolved in quantum leaps in recent years. However, technology is still not widely used for mitigating natural disasters, which have devastating consequences in regions of the world characterized by a lack of preparedness and basic infrastructure. Such disasters are unpredictable by nature and will continue to be a threat to mankind in the years to come. However, it has been observed that where technology is available, strategies to lessen the impact of a disaster are employed in advance to mitigate the loss of lives and property. In developed countries, where technology is applied, the integration of state-of-the-art wireless communications and information technologies has become the foundation in the development of solutions to monitor, to alert and to mitigate such unpredictable events. Historical databases and behavioral models of the environment, which are extracted with ubiquitous sensing from the site, provide accurate and reliable data to authorities. Timely access to relevant information on hazardous environmental conditions provides time for the community to apply preparedness procedures that are capable of alleviating damage and reducing the number of casualties derived from the event. The contrast in disaster preparedness between developed and underdeveloped countries, which are still subject to a lack of information and infrastructure, is evident. In underdeveloped countries disaster management is reduced to response and recovery efforts from the governments after the event has

PAGE 10

occurred. Little can be done by the time civil defense and other assisting agencies arrive at ground zero. Disaster management procedures can be compared over time in those regions of the world where technology is currently present for monitoring and alerting. The Galveston Hurricane of September 1900, regarded as the greatest natural disaster to ever strike the United States, caused at least 8,000 deaths in the hours following the landfall of the hurricane in Texas, [5]. Even though warnings were issued at the time, they were not taken seriously and many chose to stay at home and not seek shelter. In 2004, from August to September, four hurricanes struck the Florida coast causing billions of dollars in property damage. However, these disasters accounted for only approximately 152 deaths, [6]. This relatively small number of casualties can be attributed to an effective and accurate monitoring system that alerted authorities and the population in advance. What is observed in underdeveloped countries nowadays is similar to what was seen in the United States at the beginning of the last century. The technology available to these countries is insufficient to provide their populations with reliable systems for disaster monitoring and management. The lack of the infrastructure and technology necessary for collaboration in the mitigation and management of disasters puts at risk lives and properties that could be preserved. The most recent natural disaster, which was one of the most devastating of all time, occurred at the end of 2004 when a tsunami hit South Asia causing approximately 221,100 deaths and several billions of dollars in property damage, [7]. The lack of communication channels from other disaster monitoring sites to authorities in the South Asia region, to warn that a catastrophe was imminent, contributed to the thousands of lives that were lost.

PAGE 11

In recent years, disasters caused by mankind have been as devastating as the ones caused by nature. The common ground between the two is their unpredictable nature and their consequences. Man-made disasters are even harder to predict since the parameters involved in the monitoring of such events are more subjective. In the 9/11 terrorist attacks of 2001, suspicious evidence that an attack was imminent was underestimated by United States governmental agencies. In the case of man-made disasters, technology can be applied to the creation of databases of suspects, the development of biochemical sensors and the development of an interoperable communication technology for emergency workers.

1.1 Problem Statement

Money and effort are normally invested to mitigate the effects of natural disasters after they have occurred. However, in order to lessen the effects of such events it is necessary to anticipate their occurrence. A monitoring system that provides authorities accurate and reliable information prior to a natural disaster gives the community time to apply preparedness procedures, which will save lives and minimize property loss. The majority of the current commercially available monitoring and alerting systems for disasters use telemetry solutions that are expensive, difficult to install and configured in centralized schemes that often compromise the reliability of the system.

PAGE 12

1.2 Research Scope

This research proposed the use of state-of-the-art wireless communications and networked embedded systems technologies to provide environmental sensing for the early detection of natural disasters. The scope of this research was restricted to identifying the best solution for environmental monitoring, establishing the requirements and overall system concept for the solution, adapting technology for natural disaster monitoring and discussing the test results obtained from one proven deployment. The central workstation where data is processed and analyzed is discussed conceptually. However, it was not implemented. Throughout the thesis, flash-floods will be used as the main example of the natural disaster being monitored. Other types of natural disasters would utilize the same system architecture. The only difference with respect to disasters other than the flash-flood solution presented here would be the adaptation of different sensors. Based on the fact that, for disaster monitoring, the sensing points are predetermined and strategically placed in locations that extract the most relevant data, the resulting network topologies are fixed. Such fixed topologies yield simpler routing algorithms. The environmental sensors for this research collected environmental data related to temperature, barometric pressure, precipitation, humidity, ambient luminosity, two-axis acceleration, water level, water flow and sound readings. These were relevant parameters for the sample application.

PAGE 13

1.3 Thesis Organization

This thesis consists of nine chapters. Chapter 2 presents a State-of-the-Art survey that was conducted to identify the technologies currently used for remote sensing related to natural disaster monitoring solutions. Chapter 3 discusses Wireless Sensor Networks, an emerging technology that is well suited for remote sensing. Basic concepts, including discussions of hardware and software components, are presented. Chapter 4 presents the system concept and identifies the requirements imposed by the problem statement for this research. Chapter 5 describes the hardware architecture used for the sensing nodes and how they were configured according to application-specific requirements to collect, process and transmit information from remote locations to the network gateway. Chapter 6 discusses the powering techniques used for the sensing nodes and the network gateway. Chapter 7 presents results from field tests regarding network statistics and communication quality. Chapter 8 describes the implementation of the network gateway and Chapter 9 includes the conclusions and recommendations for future work in this area.

PAGE 14

CHAPTER 2
STATE-OF-THE-ART SURVEY

There are several environmental monitoring systems currently used for disaster management, [8], [9], [10]. Traditionally, space technology and telemetry systems have been used in the remote sensing of the environment at risk. However, the emergence of Wireless Sensor Networks, (WSNs), in recent years has prompted researchers to investigate the possibility of implementing WSNs for disaster monitoring and management, [11], [12]. This section of the thesis describes the advantages and disadvantages of sensing the environment using space technology and telemetry-based systems. Afterwards, two examples of projects implementing WSNs for disaster monitoring and management are presented.

2.1 Space Technology for Remote Sensing

Satellite remote sensing is the most sophisticated technology used for environmental monitoring in the prediction of natural disasters. The satellites carry onboard sensors that are capable of providing information on every natural feature that prevails on the surface of the Earth. Depending on the type of disaster being monitored, different onboard sensors are employed. For example, thermal sensors capture fire

PAGE 15

hazards, infrared sensors are more suitable for floods and microwave sensors can record soil moisture, [13]. The two main types of satellites used to observe the Earth are the Geostationary and the Polar-Orbiting satellites. The Geostationary satellites are primarily used for meteorological observation whereas the Polar-Orbiting satellites are particularly important in the monitoring of natural disasters. The data extracted from the satellites is transmitted back to ground stations where the information is processed by computers designed for complex signal processing. The data extracted from the satellites is applied to precisely detect, map, measure and analyze the environment. The accuracy, the extended coverage and the spatial continuity obtained from satellite readings are among the main advantages available from this technology for remote sensing, [14]. Furthermore, satellite remote sensing provides real-time assessment of the event, which is helpful in identifying evacuation routes to safe zones away from the disaster. Unfortunately, not all countries can rely on space technology for remote sensing. In fact, most developing countries have limitations in terms of hardware, software and human resources, [14]. The satellite solution requires powerful high-end computers for signal processing, software such as a Geographical Information System, (GIS), to implement data analysis, statistics-based behavioral models and, most importantly, qualified professionals to operate the system. The cost to set up and operate a solution of this magnitude and complexity is also an issue for developing countries.

2.2 Telemetry-based Solutions for Remote Sensing

Many of the remote sensing solutions used for disaster management are based on telemetry systems, [10]. These solutions make use of remote terminal units that

PAGE 16

are coupled with sensors to collect data and, in a point-to-point strategy, transmit the data to a central terminal unit. Each remote terminal unit needs to be self-powered. The remote terminals can be powered by a solar panel or by using an Uninterruptible Power Supply, (UPS). The medium used for communication consists of elements such as cable, radio, telephone and satellite. Telemetry-based solutions utilize UHF, VHF and cellular networks for communication. However, a centralized scheme compromises the reliability of the solution since each section, or even the entire sensed field, might be isolated in the event of terminal unit failure or malfunction. In addition, these invasive architectures are difficult to deploy and to operate. The installation process is time consuming and, once established, the infrastructure is permanent and not easily extendable.

2.3 Wireless Sensor Networks for Remote Sensing

The use of Wireless Sensor Networks for natural disaster monitoring is still a novel approach in the attempt to minimize the loss of lives and property incurred as the result of a disastrous event. Initially fueled by the evident commonalities with environmental monitoring, disaster monitoring applications are rapidly evolving as the technology adapts to support the newly imposed requirements. To date, only a limited number of major projects have implemented WSNs for natural disaster monitoring and response. For example, the FireBug project focuses on extracting environmental data from remote locations to alert first responders and the population to the risk of wildfires, whereas the CodeBlue project exploits the use of WSN technology to obtain vital signs of patients in disaster response. The FireBug project was implemented by the University of California, Berkeley and sponsored by the NSF

PAGE 17

Information Technology Research Division. This effort was one of the initial attempts to make use of a WSN for such an application. The project aimed at the development of a platform to detect the initiation and to monitor the spread of wildfires in rapidly changing environments, [12]. Each sensing node is equipped with a GPS module and an environmental sensor board. The collected data is routed back to a central station and made available to first responders and the general public. The CodeBlue project, developed by Harvard University, explores applications of wireless sensor network technology to a range of medical applications that include pre-hospital and in-hospital emergency care, disaster response and stroke patient rehabilitation, [11]. The patients or victims wear motes equipped with a wireless pulse oximeter and a two-lead EKG to collect heart rate, oxygen saturation and EKG data. The collected data is routed back to remote stations such as PDAs, laptops or ambulances to be stored in the patient's profile. Additional features encompassed in the motes include alarms to notify first responders in the event that any vital sign reaches life-threatening levels. The results from both experiments showed that WSNs lend themselves well to natural disaster monitoring. However, these experiments also exposed issues that still need to be addressed in order to improve overall performance and reliability. This research identified aspects that need to be resolved for the appropriate implementation of a WSN for natural disaster monitoring, proposed solutions and suggests areas for future work.

PAGE 18

CHAPTER 3
WIRELESS SENSOR NETWORK OVERVIEW

3.1 The WSN Concept

Wireless Sensor Networks are "low-power, multihopping systems that combine multiple wireless nodes into an extendable network environment with non-Line-Of-Sight coverage and a self-healing data path" that provide ubiquitous sensing of any environment in the monitoring of natural disasters, [15]. WSN nodes communicate only with neighboring nodes, which reduces the need for high transmission power and eliminates the need for expensive transmitters and repeaters such as those used in traditional telemetry systems. Every node in a WSN can act as a data acquisition device, a data router or a data aggregator. As will be discussed later, the clustering architecture chosen for this solution maximized the redundancy and, consequently, the reliability of the entire monitoring system. The independence from third-party providers and the absence of infrastructure requirements, such as those of cellular-based telemetry systems, allow a WSN to be deployed quickly. Furthermore, in scenarios where threats may come from unexpected locations, having a dynamic and adaptable solution enables first responders to act according to critical situations. For example, these features facilitate the placement

PAGE 19

of additional nodes to provide a more comprehensive reading of the event as it happens and to replace "dead" nodes. In the event of network congestion, node failure or simply obstacles blocking line-of-sight communications, the meshed interconnection of wireless sensor nodes generates alternative paths for data routing from the source, where the phenomena occurred, to the destination, which is a network gateway. Network gateways in a WSN allow interaction with external systems that possess more storage and computational capacity to create historical databases and for purposes of modeling and forecasting.

3.2 Hardware

This section describes the hardware specifications of the equipment used to configure the proposed solution. All components were purchased off-the-shelf as general-purpose sensor network equipment for later adaptation in order to fit the objective of the research. The functionality of each component, and how their integration forms a Wireless Sensor Network to monitor natural disasters, is discussed in Chapters 4 and 5. The network was comprised of sensor nodes, or "motes", data acquisition boards, sensors and a network gateway. In addition, a programming board was required to download the nesC code into the motes. For further details of these devices refer to reference [4].

PAGE 20

3.2.1 Mote Platforms

The MICA2, (MPR410), and the MICA2DOT, (MPR510), were the two types of mote platforms used. These platforms are interoperable and the major difference between them is their physical size. Figures 1 and 2 respectively illustrate the MICA2 and MICA2DOT platforms along with their block diagrams. Table 1 describes the specifications of both platforms.

Figure 1: MICA2 Mote and Platform's Block Diagram, [1]

PAGE 21

Figure 2: MICA2DOT Mote and Platform's Block Diagram, [1]

Table 1: MICA2 and MICA2DOT Platform Specifications, [1]

PAGE 22

3.2.2 Data Acquisition Boards

There were two types of data acquisition boards used in the solution, one for each platform. The DAQ board used for the MICA2 motes was the MDA300CA, which is pictured in Figure 3. The MDA500CA board was used for the MICA2DOT motes and is pictured in Figure 4. Analog sensors can be attached to different channels of the MDA300CA boards based on the expected precision and dynamic range. Digital sensors can be attached to the digital or counter channels. A mote samples analog, digital or counter channels and can actuate via digital outputs or relays. The combination of a MICA2 mote and an MDA300CA can be used as a low-power wireless data acquisition device or process control machine. Table 2 details the absolute maximum ratings for various electrical parameters. For the MDA500CA boards, all of the major I/O signals of the MICA2DOT mote are routed to plated-through holes on the MDA500 circuit board, [2].

Figure 3: MDA300CA Data Acquisition Board for the MICA2 Motes, [2]

Figure 4: MDA500CA Data Acquisition Board for the MICA2DOT Motes, [2]
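Since the motes deliver raw counts from the DAQ board's analog channels, a small conversion step is needed before any sensor calibration can be applied. The C sketch below shows the idea; the 12-bit resolution and 2.5 V reference are illustrative assumptions and should be checked against the MDA300CA documentation.

    #include <stdio.h>

    #define ADC_BITS   12
    #define ADC_MAX    ((1 << ADC_BITS) - 1)   /* 4095 counts full scale */
    #define VREF_VOLTS 2.5

    /* Convert a raw ADC count from an analog channel into volts. */
    static double adc_to_volts(unsigned raw)
    {
        return (double)raw * VREF_VOLTS / ADC_MAX;
    }

    int main(void)
    {
        unsigned raw = 2048;   /* hypothetical mid-scale sample */
        printf("channel voltage = %.3f V\n", adc_to_volts(raw));
        return 0;
    }

The per-sensor transfer functions discussed in Chapter 5 would then map this voltage into engineering units such as soil moisture or water level.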

PAGE 23

Table 2: Absolute Maximum Ratings for the MDA300CA DAQ Board, [2]

3.2.3 Sensors

Several sensors are used to monitor the environment at risk. The motes are equipped with the following sensors:

• Crossbow MTS420CA and MTS510CA environmental sensor boards,
• InterMountain water level sensor,
• Swoffer water flow sensor,
• Davis Rain Collector II precipitation sensor,
• Davis Anemometer wind direction sensor,
• Davis Anemometer wind speed sensor,
• Davis Watermark soil moisture sensor.

PAGE 24

3.2.4 Programming Board

The MIB510 serial interface board is a multi-purpose board that is used to program the MICA2 and MICA2DOT motes. Code is downloaded to the ISP through an RS-232 serial port and the ISP programs the code into the mote. The ISP and mote share the same serial port. The ISP runs at a fixed baud rate of 115.2 kbaud and continually monitors incoming serial packets for a special multi-byte pattern. Once the pattern is detected, it disables the mote's serial RX and TX and takes control of the serial port, [1]. The MIB510 programming board is pictured in Figure 5.

Figure 5: MIB510 Programming Board, [1]

3.2.5 Network Gateway

The Stargate board is the "sink" of the Wireless Sensor Network. It possesses enhanced communications and signal processing capabilities. The features of the Stargate board are:

• 32-bit, 400 MHz Intel PXA-255 XScale RISC processor,
• SA1111 StrongARM Companion Chip for Multiple I/O Access,

PAGE 25

• 32 MB of Intel StrataFlash,
• 64 MB of SDRAM,
• 1 Type II CompactFlash+ Slot,
• 1 PCMCIA Slot,
• Small Form Factor with dimensions of 3.5" x 2.5",
• Reset Button,
• Real Time Clock,
• Lithium Ion Battery option,
• MICA2 Mote capability with GPIO/SSP and Other Signals via a 51-pin Expansion connector,
• I2C connector via an Installable Header,
• A 51-pin Daughter Card Interface,
• Wired Ethernet via a 10/100 Base-T Ethernet port,
• Host USB port,
• JTAG Port,
• External A/C power supply adapter,
• RS-232 Serial Port via a DB-9 Connector.

The Stargate board is pictured in Figure 6.

PAGE 26

Figure 6: Stargate Board, [3]

3.3 TinyOS

The unique characteristics of Wireless Sensor Networks required the development of an operating system tailored to the hardware constraints imposed by wireless embedded sensor networks. The Tiny Microthreading Operating System, (TinyOS), is a low-power, component-based, event-driven operating system designed to support intense concurrent operation. TinyOS was written in nesC, which will be discussed in section 3.3.5. The structure of TinyOS is illustrated in Figure 7.

PAGE 27

Figure 7: TinyOS Component-based Structure

3.3.1 Concept

TinyOS operation forces the motes to remain asleep while waiting for an event to happen. Whenever an external event is captured by the transceiver or the sensors, an interrupt is generated and the lower-level components signal events to the higher-level components. The event handlers then post tasks that run to completion unless preempted by another event. Tasks run asynchronously from events, which provides threaded system behavior. Tasks are placed in a queue inside a First In First Out, (FIFO), task scheduler. Figure 8 illustrates the storage of tasks. After all tasks have been executed and the queue is emptied, TinyOS shuts down the processor while keeping the peripherals operational.
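To make the event-to-task flow concrete, here is a minimal C model of the scheduler just described: handlers post tasks into a fixed-depth FIFO, each task runs to completion, and the processor sleeps once the queue is empty. This is an illustration of the concept, not the TinyOS source; the eight-entry depth follows section 3.3.4, and the sleep routine stands in for the real low-power mode.

    #include <stdbool.h>

    #define TASK_QUEUE_LEN 8              /* depth noted in section 3.3.4 */

    typedef void (*task_t)(void);

    static task_t task_queue[TASK_QUEUE_LEN];
    static unsigned head, tail, count;

    /* Event handlers call this to defer work; fails when the queue is
       full.  (A real implementation must guard these updates against
       interrupts.) */
    bool post_task(task_t t)
    {
        if (count == TASK_QUEUE_LEN)
            return false;
        task_queue[tail] = t;
        tail = (tail + 1) % TASK_QUEUE_LEN;
        count++;
        return true;
    }

    static void sleep_until_interrupt(void)
    {
        /* placeholder for the processor's low-power mode */
    }

    /* Drain the FIFO, running each task to completion, then sleep. */
    void scheduler_run(void)
    {
        for (;;) {
            while (count > 0) {
                task_t t = task_queue[head];
                head = (head + 1) % TASK_QUEUE_LEN;
                count--;
                t();   /* tasks are atomic with respect to other tasks */
            }
            sleep_until_interrupt();
        }
    }

The outer sleep loop is exactly what makes the power management strategy discussed next possible.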

PAGE 28

Perhaps the main issue regarding a WSN concerns power consumption. With regard to power consumption, TinyOS enforces a power management strategy within the task scheduler to power the processor only when events are detected.

Figure 8: TinyOS Task Scheduler, [4]

3.3.2 Component

The structure of the operating system is based on components that are comprised of a fixed-size frame, tasks, event handlers and command handlers. Figure 9 presents the block diagram for a component. Components, which are the building blocks of TinyOS, possess bi-directional interfaces as "communication ports" where interface providers implement commands and interface users implement events. The bi-directional interfaces reduce the density of data flow, which simplifies the system's structure. Commands are generated as non-blocking requests from higher-level components to lower-level components to request parameters, where a return status is expected, and to post tasks for later execution. Events are generated from lower-level components to

PAGE 29

higher-level components to signal asynchronous events that preempt tasks, to call commands and to post tasks, among other duties. Both commands and events, explicitly declared to favor modularity, are simply 'C'-like function calls implemented inside the program module. The component frame handles the internal state and memory. The frame, which is statically allocated before compilation, reduces the memory requirement and prevents the overhead associated with dynamic allocation, [16].

Figure 9: Component Block Diagram

3.3.3 Concurrency Model

In TinyOS the concurrency model is comprised of tasks and hardware event handlers. Tasks are functions that, once scheduled, run to completion. Tasks are atomic with respect to other tasks and may be preempted only by events. Tasks can call lower-level commands, signal higher-level events and schedule other tasks within a component. The run-to-completion semantics of tasks make it possible to allocate a single stack and assign it to the currently executing task. This capability is essential in memory constrained systems, [16]. Hardware event handlers are executed in response to a hardware interrupt. They run to completion but may preempt the execution of a task or

PAGE 30

another hardware event handler. When preemption occurs, the interrupt routine handler saves the status at the start and restores it when it ends. The context switch between the activated hardware event handler and another hardware event handler or a task is performed automatically without the need for any special context management.

3.3.4 Scheduling

TinyOS executes only one program, consisting of selected system components and the custom components required for a single application. A complete system configuration consists of a tiny scheduler and a graph of the components. A complete system configuration runs in a single address space and contains two execution environments. Interrupt handlers running at high priority comprise one execution environment. Tasks that are scheduled in FIFO order at low priority comprise the second execution environment. Tasks are stored in a FIFO that holds a maximum of 8 tasks.

3.3.5 Programming Language: nesC

The programming language nesC was specifically designed to handle the restrictions inherent to low-power networked embedded systems such as Wireless Sensor Networks. Derived from the 'C' programming language, this dialect was developed to support TinyOS-powered motes with the same syntax and structure as those possessed by the operating system. Given their limited resources, nesC addresses several fundamental issues in mote operation. Equipped with an executable code space of only 128 Kbytes of reprogrammable flash memory and severe power constraints, the MICA2 and MICA2DOT motes need to remain asleep for the majority of the time, wake

PAGE 31

up, execute the process quickly and go back to sleep. Limitations in computational power require the language to have a flexible and reusable architecture to ease the job of wiring components during the assembly process of an application. Components are the building blocks of TinyOS and nesC; they perform specific functions and implement the application-specific code. Components are explicitly wired together with bi-directional interfaces to form a configuration, or application code. A precompiler for nesC converts the wiring of high-level modules into code, where the nesC output is a 'C' program file that is compiled and linked using gnu and gcc tools for a specific mote, [17]. Configurations and modules comprise the two types of components available. Configurations are composed of one or more components wired together whereas modules contain the actual nesC code to be implemented.

3.4 MintRoute Algorithm

The power constraints of Wireless Sensor Networks require the use of multihopping routing schemes to minimize long-range transmissions from remote nodes to the base station. In this fashion, data packets are hopped from node to node in short-range transmissions, which conserves power and extends network lifetime. The MintRoute algorithm routes the data by selecting the path with the best link quality and the least transmission "cost". A dynamic routing table specifies the least power-consuming path from any node to the base station. The table is managed by the base station and is changed as the network topology changes or when better routing paths are discovered. Based on this multihopping concept, motes can communicate around obstacles that block Line-of-Sight and even exploit the environment by

PAGE 32

communicating through multipath reflections. Figure 10 illustrates the MintRoute algorithm.

Figure 10: MintRoute Multihopping Routing Algorithm Used in the Motes, [4]
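The route selection can be pictured with a short C sketch: estimate a cost for each neighbor from its link quality, add the neighbor's own advertised cost to the base station, and choose the minimum. The structures and the inverse-quality cost metric below are illustrative assumptions about how such a routing table might look, not the actual MintRoute implementation.

    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        uint16_t addr;          /* neighbor node address                  */
        double   link_quality;  /* estimated fraction of packets received */
        double   path_cost;     /* neighbor's advertised cost to the sink */
    } neighbor_t;

    /* Expected number of transmissions across this link: 1 / quality. */
    static double link_cost(const neighbor_t *n)
    {
        return (n->link_quality > 0.0) ? 1.0 / n->link_quality : 1.0e9;
    }

    /* Pick the neighbor minimizing total cost to the base station. */
    const neighbor_t *select_parent(const neighbor_t *table, size_t n)
    {
        const neighbor_t *best = NULL;
        double best_cost = 1.0e9;

        for (size_t i = 0; i < n; i++) {
            double cost = link_cost(&table[i]) + table[i].path_cost;
            if (cost < best_cost) {
                best_cost = cost;
                best = &table[i];
            }
        }
        return best;   /* NULL when no usable neighbor exists */
    }

Because each node only needs its neighbors' quality estimates and advertised costs, a broken link simply makes another neighbor the minimum, which is the self-healing behavior described in section 3.1.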

PAGE 33

CHAPTER 4
OVERALL SYSTEM CONCEPT

The main objective of designing a natural disaster monitoring solution is to gather environmental information. Based on the collected data, information is passed autonomously to the authorities and the population to alert them of the level of risk. In order to accomplish this goal it is necessary to first collect data, then transmit it to a centralized station and, consequently, to broadcast alerts. Compared to applications that focus only on collecting environmental data, a disaster monitoring system has more stringent requirements since the information delivered is of vital importance. Disaster information is considered to exist in "soft real-time". The system has to remain operational at all times, even when individual components fail. Therefore, the system requires the characteristics of distributed systems to avoid bottlenecks and to avoid making the reliability of the entire system depend upon individual components. The locations where the data gathering must take place often lack electrical and communication infrastructure, which makes conventional monitoring systems inappropriate. The remote sensing solution must adapt itself to the environment and rely solely on its own resources, which must be independent of third-party providers and work under unattended operational conditions. The possibility of extending the network coverage by integrating complementary networks would only enhance the system; the overall system's functionality would remain intact even if the

PAGE 34

complementary networks failed. As a consequence, one design priority is to maximize the autonomy and reliability of the remote sensing solution. Sensors must be easily deployable in order to be strategically placed in the locations prone to the events that are to be monitored. Another enhancement to the system is to extend the network during or after the event has occurred to obtain a more comprehensive reading of the sensed field and to compensate for malfunctioning sensors. Such an enhancement requires an extendable and flexible solution. The core design of the system architecture must be adaptable to all types of natural disasters. Different types of sensors must be used according to the phenomena being observed. It is of fundamental importance to develop a system that is sensitive and able to effectively recognize hazardous conditions, but at the same time the system must be "intelligent" enough not to overreact and trigger false alarms. A fundamental tradeoff for natural disaster monitoring systems is sensitivity versus false alarms. Regarding user interface related requirements, local authorities and first-response personnel have expressed the desire to be able to interact with the alerting system from the urban area through media such as cell phones and internet access, or to obtain information from the sensed field through portable equipment such as Personal Data Assistants, (PDAs), and laptop computers. The ability to assign a spatial location to specific events directly related to the occurrence of a disaster is essential. For example, spatial orientation is necessary in order to observe physical magnitudes on a map of the monitored region. According to the challenges and requirements identified in this research, the system has to fulfill the following tasks:

• Gather relevant data from the threatened region where the disasters originate,

PAGE 35

• Transmit the data from the "sensed field" to the urban area,
• Extract and graphically display relevant information to assist authorities in making decisions,
• Register the collected data and store it for later use,
• Generate alerting signals,
• Allow the users to interact with the system via mobile portable devices.

4.1 Proactive Approach to Disaster Management

As illustrated in Figure 11, communication and information technologies provide a proactive approach to disaster management. Early detection of the event provides the time necessary for the population and authorities to apply preparedness procedures in the hours anticipating the disaster. The response and recovery from the event can be planned by local authorities prior to the disaster and carried out quickly and effectively in the hours following the event. When technology is not available the disaster is unforeseen by the population and authorities. Response and recovery are more difficult because there has been no time for authorities to prepare themselves for the disaster. In addition, the panic caused by the unexpected event leads to an even greater number of injuries and deaths.

PAGE 36

Figure 11: Proactive vs. Reactive Approaches in Disaster Monitoring

4.2 Remote Sensing Network Architecture

Before designing the remote sensing network architecture for a natural disaster monitoring system it is fundamental that the region, and the phenomena that affect it, are well understood. Even though two different sites may be monitoring the same natural disaster, each environment is unique and may present different challenges in designing the architecture. A site survey is necessary to expose issues regarding communications, network density, cell architecture and connectivity to complementary networks, and to better envision the solution. Regardless of the phenomenon being monitored, the first steps in

PAGE 37

designing the system are identifying the parameters that need to be monitored and the locations where the sensors need to be deployed. Based on the location where the network should be deployed, the available infrastructure at the site must be determined. It is important to determine the possible complementary networks that will be used along with the WSN in order to structure data dissemination from the network source. With such information available, a cluster cell architecture can be designed to ensure that nodes are able to communicate among themselves and that communication quality metrics, such as link quality, packet loss, prediction, BER and RSSI, and network statistics, such as battery voltage, duty cycle, average level, level changes, parent changes, packets received, packets sent and success rate, are satisfactory within the network. It may be necessary to increase network density if any of the communication and network parameters are unsatisfactory.

4.2.1 Identifying Sensing Parameters

For each type of disaster monitored, application-specific sensors are required. Once the sensing parameters have been established, the flexibility of the system concept allows the designer to keep the core architecture of the solution intact by adapting and integrating only the required sensors to the network. Identifying the sensing parameters requires structuring the chain of events that will eventually trigger the disaster. The sensors are selected according to the requirements determined by the local authorities. The solution is designed to comply with the necessity of each "customer". Flash-floods, which are sudden discharges of large amounts of water, present the chain of events depicted in Figure 12.

PAGE 38

Figure 12: Chain of Events Culminating in a Flash-flood Disaster

Figure 12 demonstrates that the parameters that trigger flash-floods are the ones concentrated in the prediction and formation stages. Rainfall is the first predictor in flash-flood monitoring. The meteorological predictors that form the basis of the threat recognition consist of precipitation, wind direction, wind speed, temperature, humidity, luminosity and barometric pressure sensor readings. Combinations among these parameters indicate the likelihood of rainfall. The luminosity reading makes it possible to detect the intensity of the solar rays; under a torrid sun, rain is very unlikely. Slow winds, high barometric pressure and low humidity are other indicators that the climate is steady and that rain is not imminent. The precipitation sensor provides the decisive reading in determining whether rain is present or not. Changes in any one of these parameters might trigger a chain reaction leading to rainfall. Identifying other factors that may contribute to flash-floods is the next step. Depending on the geographical characteristics of the region, other phenomena such as landslides may contribute to flash-floods. Under high precipitation levels, mountainous terrains that surround the rivers become unstable. The

PAGE 39

heavy rain softens the ground, which triggers landslides that cause a dam-building effect on the rivers. Early detection of unstable terrain can be achieved using soil moisture sensors to sense how deep the water has penetrated the soil and to indicate the humidity level of the soil. The use of seismic sensors is appropriate to detect land movement in the locations prone to collapse. Eventually, these naturally built barriers cannot sustain the high potential energy of the accumulated water and finally break, discharging large amounts of water downstream. Water level and water flow sensors at different locations along a river provide differential readings that are used to predict the disaster. For example, if a sensor location along a river, which is known to be highly affected by landslides, senses abnormal increases in water level readings while, at another location downstream, the level readings are detected to be at or under normal levels, the differential can be used to indicate that a dam has formed at the location where the increase in level is detected and that a flash-flood disaster is imminent if that barrier is not broken. Water level and water flow readings also assist in identifying when the safety thresholds of the environment are crossed.

4.2.2 Identifying Locations to Deploy Sensors

Understanding the environment at risk optimizes the response of the network by extracting the most relevant data at the most appropriate locations. These locations are usually determined by professionals in the environmental field in conjunction with Civil Defense authorities. Such locations are characterized by their tendency to trigger events that lead to a disaster. By extracting the data from the field at these strategic locations, the parameters being investigated are likely to break safeguards at earlier stages of the

PAGE 40

monitoring process, which provides more time for authorities to alert the population and to apply preparedness procedures.

4.2.3 Identifying the Available Infrastructure

The remote sensing solution for disaster monitoring must be adaptable not only to the environment but also to the available infrastructure. Even though the network can be extended to reach higher networks, the basic remote sensing architecture should be independent of any complementary network and its architecture must be sufficient and efficient in providing the reliable data expected from a monitoring and alerting solution. It is important to understand that the network itself is independent of any third-party provider. However, the network sink still needs to communicate with a higher network or a backbone infrastructure in order to route the data sensed from the field to a central workstation for data processing. The expandability discussed here relates to the integration of the network gateway. In order to provide the population and Civil Defense authorities other means of receiving alerts, or of accessing data directly from the WSN to view current readings and the overall status of the environment at risk, the network gateway must be able to communicate with other networks such as cellular networks, 802.11 WLANs and the internet. These add-on features enhance the solution by ensuring that data dissemination reaches the maximum number of people in the least amount of time when an event occurs.
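Returning to the differential water-level scheme of section 4.2.1, the dam-detection rule reduces to a simple comparison once upstream and downstream stations report calibrated levels. The C sketch below is a minimal illustration; the station values, the rise margin and the alarm action are all hypothetical placeholders for thresholds that local authorities would set.

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        double level_m;    /* current water level in meters   */
        double normal_m;   /* expected seasonal level, meters */
    } station_t;

    #define RISE_MARGIN_M 0.75   /* assumed abnormal-rise margin */

    /* An abnormal upstream rise with a downstream level at or below
       normal suggests the channel may be occluded by a landslide dam. */
    static bool dam_suspected(const station_t *up, const station_t *down)
    {
        bool upstream_rising   = (up->level_m - up->normal_m) > RISE_MARGIN_M;
        bool downstream_normal = down->level_m <= down->normal_m;
        return upstream_rising && downstream_normal;
    }

    int main(void)
    {
        station_t upstream   = { 3.4, 2.1 };   /* hypothetical readings */
        station_t downstream = { 1.9, 2.0 };

        if (dam_suspected(&upstream, &downstream))
            printf("ALERT: possible dam formation upstream\n");
        return 0;
    }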

PAGE 41

4.2.4 Cell Design

As presented in Figure 13, the Monitoring Subsystem, often located in uninhabited or rural areas, performs data acquisition of all relevant variables and incorporates internal communication links that allow the transmission of information from the spatially scattered locations to an interfacing port.

Figure 13: System Concept Block Diagram

The Communication Subsystem manages the transmission of the collected information from the Monitoring Subsystem to the urban area and assumes the role of an interfacing bridge. Physically, the reception point can be a "Local Office". For example, the local office could be the nearest Civil Defense office or Fire Department, where the data can be collected and analyzed. The data collection and analysis must be handled by a system with sufficient computational resources and storage capacity.
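Because the Local Office must store and analyze everything the cells report, it helps to fix a per-sample record early. The C structure below is a purely hypothetical layout, sketched here only to make the storage role concrete; it is not the thesis's actual gateway database schema (that implementation is the subject of Chapter 8).

    #include <stdint.h>
    #include <time.h>

    /* Node categories anticipated for flash-flood monitoring (section 5.2). */
    typedef enum { NODE_METEOROLOGICAL, NODE_HYDROLOGICAL, NODE_SEISMIC } node_type_t;

    /* One stored sample as the gateway or Local Office might keep it. */
    typedef struct {
        uint16_t    node_id;      /* source mote address               */
        node_type_t node_type;    /* which cluster role produced it    */
        uint8_t     sensor_id;    /* which transducer on the node      */
        time_t      timestamp;    /* acquisition time                  */
        double      value;        /* reading in engineering units      */
        uint8_t     alarm_level;  /* 0 = normal; higher = more severe  */
    } sample_record_t;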

PAGE 42

Figure 14: System Data Structure Concept

Based on the processed data, an Alerting Subsystem is responsible for generating alerting messages that can be broadcast by different means. Figure 14 depicts the data acquisition and data dissemination of the proposed solution. The basic cell architecture for natural disaster monitoring is comprised of a minimum set of application-specific sensors and a network gateway connected to a backbone structure. This minimum set contains all sensors required to collect the data identified as relevant to the prediction and the formation of the disaster. The network gateway must serve as a local data storage unit in addition to its primary purpose of transmitting the sensed data from the field to a central processing station. The solution can be scaled depending on the required coverage area. For example, adjacent cells for flash-flood monitoring can be formed along the river and the surrounding regions, as illustrated in Figure 15.

PAGE 43

Figure 15: Basic Concept Architecture

4.3 Central Processing Office

The concept of a "Local Office" is any location where signal processing and data analysis take place. Civil Defense offices and Fire Departments, if not chosen to be the Local Office, must interface with the system in real-time in order to broadcast early alerts and respond to events quickly. The office must have connectivity to the remote network and, if possible, to all complementary networks supported by the network gateway. This interface must be robust to avoid isolating the remote sensing network. The data collected from the remote network is transmitted to a central processing station where analysis is performed. These stations must be equipped with software capable of implementing environmental models based on the historical database created by the system. Furthermore, the software must be able to generate alarms whenever thresholds are violated. Alarm generation is initiated by the software or by human intervention

PAGE 44

whenever hazardous conditions are detected. Alarms generated by the computer are based on safety thresholds established by the local authorities and first responders. The Local Office personnel can manually generate alarms if conditions are hazardous but have not been detected due to node failure or vandalism, for example. Depending on the infrastructure available at the Local Office, data dissemination may take different routes to reach the general public. Mass media is the most effective method to disseminate information since it reaches the largest number of people in the least amount of time. For people that are on the move, another possibility is the telecommunication media. The broadcast alert messages would describe the current status of the event and suggest an action to be taken, such as evacuate, seek shelter or return home.

4.4 A WSN Applied to Natural Disasters

Based on the imposed requirements, this research proposed the use of a WSN to provide the remote sensing of any environment for natural disaster monitoring. Some commonalities between a WSN for natural disaster monitoring and general WSN applications are:

• The sensing nodes must work in uninhabited environments. Therefore, nodes have to remain functional for long periods of time without human intervention. The unattended nature of the network requires the solution to be energy efficient in order to prolong the lifetime of the network. Energy optimization at all levels is regarded as the primary goal in WSN design.

• The solution needs to be flexible and extendable to accommodate network growth and topological changes. The number of nodes may increase at any time to improve redundancy or simply to expand the monitored region. Changes in topology are common in a WSN since nodes are subject to weather and nature-originated inclemencies. The network routing protocol needs to include node discovery strategies and a self-forming capability in order to enable new nodes to join the network.

• The network discussed can be regarded as a Distributed System, since sensors have to act cooperatively to provide the collected data to bridge nodes, to identify internal failures and to adapt to changes in topology. At the same time, sensor nodes have to collectively and dynamically use mechanisms that maximize the lifetime of the network, such as those proposed in [18].

• Exploitation of the processing power of the nodes cannot be overlooked. Localized processing algorithms transmit only useful data, which reduces data rate requirements, network congestion and transmission power consumption.

In order to adapt the generic concepts of Wireless Sensor Networks to the specific application presented in this thesis, a monitoring and alerting system was designed, as explained in the following chapters.

CHAPTER 5

SENSOR NODES

The MICA2 and MICA2DOT "motes" were used as sensor nodes for the Wireless Sensor Network. These motes were configured according to application-specific requirements to collect, process and transmit information from remote locations to the network gateway. As data acquisition devices, the motes are equipped with transducers to sense the environment at predetermined intervals or in an event-driven fashion. However, a major challenge in the design of sensing nodes is the selection of suitable application-specific sensors. Since low-power operation is one of the main requirements for a WSN, transducers must also operate with low power. In addition, the operating voltage is restricted to the range from 2.5 to 5.0V, since all parts of the system are powered by the same types of batteries. In order to compensate for the additional load that the transducers represent to the mote, the sensor devices are only activated when a measurement reading is scheduled. Passive sensors with high impedances are preferred to reduce the current draw when operating in the "ON" mode. The following sections describe the hardware architecture for all types of motes used in flash-flood monitoring and the integration of custom transducers into the system.

5.1 Data Acquisition Requirements for Flash-Flood Monitoring

As described in the system concept, the disaster must be well understood before the parameters to be monitored can be defined. According to the analysis presented in section 4.2.1, the following magnitudes need to be collected from the sensing field in order to monitor flash-floods:

• Precipitation,
• Wind Speed,
• Wind Direction,
• Soil Moisture,
• Water Level,
• Water Flow,
• 2-Axis Seismic Acceleration,
• Temperature,
• Pressure,
• Humidity,
• Luminosity.

5.2 Node Types

The transducers were grouped together in three categories to form the environmental nodes that comprise the flash-flood monitoring system. These categories are defined as:

• Meteorological nodes: Meteorological nodes are positioned in the fields surrounding the river that causes the flood.
These nodes are responsible for monitoring luminosity, temperature, humidity, barometric pressure, wind direction, wind speed and precipitation; they measure the meteorological conditions that are characteristic precursors of flash-floods.

• Hydrological nodes: Hydrological nodes are located on the shore along the river. These nodes monitor water level and water flow, magnitudes that are critical during the formation of flash-floods.

• Seismic nodes: Seismic nodes are strategically placed in hazardous locations in the neighboring mountains. These nodes collect soil moisture and 2-axis accelerometer magnitudes that indicate seismic movement. Since occlusion of the river has been identified as the main trigger for flash-floods, it is mandatory that these magnitudes be monitored.

Motes serve as low-power wireless data acquisition devices. This is accomplished by coupling transducers to the data acquisition boards and interfacing them to the motes through proper signal conditioning circuits. In addition to these environmental nodes, repeater nodes are placed in the cell architecture to enhance communication performance and overall system robustness. Table 3 describes the magnitudes that need to be monitored and the commercial-off-the-shelf, (COTS), sensor boards that may be used. COTS sensor boards are very practical since they incorporate all the sensors and signal conditioning modules required to measure specific variables.

Table 3: Sensing Parameters and COTS Equipment

As Table 3 indicates, there are many magnitudes that can be measured using sensor boards. However, there are some that require customized signal conditioning circuitry.

5.3 Signal Conditioning Techniques

The custom transducers used for flash-flood monitoring were classified according to their signal conditioning method as:

• Those that generate switching pulses. Such was the case for the precipitation, water speed and wind speed transducers,

• Those whose transduction principle is based on resistive changes according to the measured variable. Such was the case for the water level, soil moisture, and wind direction transducers.

5.3.1 Pulse Generating Sensors

The pulse generating transducers are responsible for sensing precipitation, water speed and wind speed.

Precipitation

The "Davis" Rain Collector II, pictured in Figure 16, was used to sense precipitation in the field. The number of bucket tips corresponds to pulses that are measured and counted by the digital input channel of a DAQ board. The rainfall calibration number, (CAL), was used to determine the amount of water each bucket tip represented. For the 0.01 inch rain collector calibration, CAL was equal to 100, meaning that the bucket tips, and rainfall is recorded, for every 0.01" of rain. The calculated rainfall is obtained as (1).

Figure 16: "Davis" Rain Collector II
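
Reading (1) from the calibration above (CAL tips per inch), the conversion presumably reduces to scaling the tip count by CAL:

\[
\text{rainfall (in)} = \frac{N_{\text{tips}}}{\text{CAL}} = \frac{N_{\text{tips}}}{100}
\]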

Anemometer

The "Davis" cup-type anemometer, which is pictured in Figure 17, measures wind speed based on the revolutions per minute of the transducer's arms. Each revolution is detected by magnetic switches and counted as pulses by the digital channel of the data acquisition board. The value is processed and then converted to wind gust and average wind speed readings. Unit transformations were implemented in the application software to output engineering units. These transformations were formulated as (2) and (3).

Figure 17: "Davis" Anemometer

Water Flow

The "Swoffer" fiber-optics water flow transducer, which is pictured in Figure 18, measures water velocity based on the number of turns of the propeller rotor. Each turn generates four pulses that are used to determine velocity. The calibration number represents the number of counts a specific rotor produces as it travels through 10 feet and 10 meters of still water. In order to obtain accurate measurements, the transducer must be calibrated in accordance with the formula.
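
A small Python sketch of these pulse-to-engineering-unit conversions; the 2.25 factor commonly published for Davis cup anemometers and the Swoffer calibration count are assumptions here, not values taken from this chapter:

def wind_speed_mph(pulses, seconds):
    # Davis cup anemometer: one pulse per revolution; 2.25 is the
    # mph conversion factor commonly published for this sensor.
    return pulses * 2.25 / seconds

def avg_and_gust_mph(pulse_windows, window_s):
    # Equations (2) and (3) in spirit: average speed over all sample
    # windows, gust taken as the fastest single window.
    speeds = [wind_speed_mph(p, window_s) for p in pulse_windows]
    return sum(speeds) / len(speeds), max(speeds)

def water_velocity_fps(pulses, seconds, counts_per_10ft=200):
    # Swoffer rotor: the calibration number is the pulse count the
    # rotor produces over 10 ft of still water (200 is a placeholder).
    return pulses / counts_per_10ft * 10.0 / seconds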

Figure 18: "Swoffer" Water Flow Transducer

The schematic for the pulse signal conditioning is illustrated in Figure 19.

Figure 19: Pulse Signal Conditioning for Different Types of Transducers

5.3.2 Resistive Transducers

The transducers that used resistive signal conditioning included water level, soil moisture and wind direction. The general schematic is presented in Figure 20.

Figure 20: Schematic for Resistive Signal Conditioning

Water Level

The "InterMountain" float & pulley water level transducer, presented diagrammatically in Figure 21, operates by moving with the water level. As the water level rises or falls, the float moves up or down and turns a pulley.

Figure 21: "InterMountain" Float & Pulley Water Level Transducer

The pulley shaft is coupled to a precision 5-turn potentiometer that can register a change of 5 feet. The data acquisition board provides the transducer with 2500 mV of excitation and detects voltage changes from 0 to 2.5 VDC according to the change in the potentiometer resistance, which ranges from 0 to 5 kohms. In other words, the voltage measured between the reference, which was 0 ohms, and the potentiometer leg that moved with the water level was the reading corresponding to the change in water level.
The analog signals from the water level transducer were measured by the single-ended channels of the data acquisition board, which were labeled A0 to A6. The ADC reading was converted to a voltage reading by the equation.

Soil Moisture

The electrical resistance type "Davis" soil moisture sensor, which is pictured in Figure 22, converts the electrical resistance of the sensor to a calibrated reading of soil water content, measured as soil water potential and given in bars. The principle of operation is that the resistance of electrodes embedded in a porous block is proportional to the block's water content: the wetter the block, the lower the resistance measured across the two embedded electrodes. The soil water potential is also directly influenced by the soil temperature.

Figure 22: "Davis" Soil Moisture Sensor

Resistance and temperature maintain a linear relationship when soil water content ranges from 0 to 2 bars. The resistance measurement was normalized to degrees C by a calibration equation.

In order to calculate the output resistance, a voltage divider circuit had to be implemented and interfaced with the data acquisition board. Soil water potential, (SWP), was then calculated from the measured resistance. An excitation voltage of 2500 mV was applied and the analog signals from the transducer were read by a single-ended analog channel of the data acquisition board.

Wind Direction

The "Davis" wind direction sensor uses a rotational potentiometer and converts the value to an offset from north. A 2500 mV excitation voltage was applied by the data acquisition board and the ADC readings were converted to a voltage reading.
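
The same two-step conversion serves the water level, soil moisture and wind direction channels: scale the raw ADC count against the 2500 mV excitation, then, for the divider-based sensors, recover the element's resistance. In this sketch the 12-bit converter width and the full-span mappings are assumptions:

def adc_to_mv(adc_count, v_exc_mv=2500, bits=12):
    # Raw ADC count -> millivolts; full scale assumed equal to the
    # 2500 mV excitation (12-bit width is an assumption here).
    return adc_count * v_exc_mv / (2 ** bits)

def divider_resistance_ohm(v_mv, r_fixed_ohm, v_exc_mv=2500):
    # Fixed resistor R in series with the sensing element; measuring
    # across the element gives R_s = R * V / (V_exc - V).
    return r_fixed_ohm * v_mv / (v_exc_mv - v_mv)

def water_level_ft(adc_count):
    # The 5-turn pot maps the full 0-2500 mV span onto 5 ft.
    return adc_to_mv(adc_count) / 2500.0 * 5.0

def wind_direction_deg(adc_count):
    # Rotational pot assumed to map the full span onto 0-360 degrees.
    return adc_to_mv(adc_count) / 2500.0 * 360.0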

5.3.3 Data Processing for COTS Sensor Boards

Whereas hardware and drivers are available for the sensor boards, their function has to be analyzed in order to develop the applications that receive the data. In this section, the MTS420CA sensor board is used as the main example of the data processing that had to take place in order to interpret the sensor data correctly.

The MTS420CA sensor board is capable of sensing the environmental parameters of temperature, barometric pressure, humidity, luminosity and 2-axis acceleration. The temperature reading from the Sensirion module was disregarded. The temperature and barometric pressure readings from the Intersema MS5534 module provided uncompensated 16-bit digital values for both parameters, designated D1 and D2 respectively, and compensation, represented by (Word1...4), was performed by an external microcontroller as illustrated in Figure 23.

Figure 23: Pressure and Temperature Readings from Intersema MS5534 Sensor

Every module was individually factory calibrated at two temperatures and two pressures. As a result, the 6 coefficients necessary to compensate for process variations and temperature variations were calculated and stored in the 64-bit PROM of each module.
These 64 bits were partitioned into four words of 16 bits each, read by the microcontroller software and used in the program that converted D1 and D2 into compensated pressure and temperature values.

For the ambient light sensor, the algorithm used is described in the Taos TSL2550 datasheet. The TSL2550 contains two ADC registers, designated channel 0 and channel 1. Each ADC register contains two component fields that are used to determine the logarithmic ADC count values, designated the CHORD bits and the STEP bits. The CHORD bits correspond to the most significant portion of the ADC value and specify a segment of the piece-wise linear approximation. The STEP bits correspond to the least significant portion of the ADC count value and specify a linear value within a segment. All CHORD and STEP bits equal to zero indicates that the light level is below the detection limit of the sensor; all CHORD and STEP bits equal to one indicates that an overflow condition exists. Each of the two ADC value registers contains seven data bits and a valid bit. Table 4 explains the meaning of these bits.

5.4 Sensor Drivers

Sensor drivers provide the software interface between transducers that require custom signal conditioning circuitry and the data acquisition boards. As an example, this section implements the driver interface for the soil moisture transducer using the MDA300CA data acquisition board. Figure 24 presents the circuit that provides signal conditioning to the soil moisture transducer. As illustrated by the DAQ board schematic presented in Appendix D.1, the transducer has two I/O lines.
The line designated INT1 corresponds to a power line for the DAQ board and the line designated ADC1 corresponds to the ADC input channel. Both lines were controlled by the software. The resistor "R" varied according to the specifications of the soil moisture transducer.

Figure 24: Signal Conditioning Circuitry for the Resistive Soil Moisture Sensor

An application called "TestSensor", which is depicted in Figure 25, was developed to test the driver. The diagram, which was generated using the "make mica2 docs" command and modified slightly for clarity, shows the components that were written to test the sensor driver. All red components are part of the TinyOS operating system, whereas the blue components were implemented for this particular application.

Figure 25: TestSensor Application Wiring

For simplicity, the application only scheduled a timer, read the channel each time the timer expired, and blinked the LEDs as it executed. The application consisted of the configuration wiring together the "TestSensorM" module, the "SoilDriver" driver, and two TinyOS components termed "TimerC" and "LedsC". The module controlled the initialization, starting and stopping of the components, the start of the clock, and the activation of an event to measure the sensor.

The "SoilDriver" driver, written in nesC, mapped the I/O lines of the soil moisture transducer onto TinyOS software commands and controlled the sensor using the TinyOS components that composed the driver. The "SoilDriver" configuration wired two components termed "SoilDriverM" and "ADCC". The "SoilDriverM" module contained the code written to implement the driver. The "ADCC" component controlled the A/D converter. The SoilDriver component could be decomposed into further internal components. The SoilDriver configuration wiring is illustrated in Figure 26.

Figure 26: SoilDriver Configuration Wiring

5.5 Test Software

A LabView application was developed in order to verify the functionality of the MTS420CA sensor boards.
It only reads the Intersema MS5534 sensor readings for temperature and pressure, the Taos light sensor readings and the battery voltage information contained in the data packet. This approach certified that the sensors were working properly prior to deployment in the remote field. Similar applications could be developed for the other sensors attached to any mote, since the concept does not change. The concept is illustrated in Figure 27. The MIB510 programming board must be connected to the computer's serial port for data transfer. A MICA2 mote serving as the base station was attached to the MIB510 board, and a remote MICA2 mote was coupled with the MTS420 sensor board and sent data packets wirelessly to the base station. The application read the raw data packets coming into the serial port, parsed the data stream and decoded the sensor readings from hexadecimal format into engineering units.

Figure 27: Testing Configuration for Sensor Devices Prior to Deployment

The IOInstrumentation.vi was an assistant to the application that communicated with the serial port without the need for a driver. After communication was established, the I/O assistant read and parsed the incoming packets. The data was manually parsed into tokens and assigned to variables according to the packet structure. A typical data packet read from the serial port and parsed by the I/O assistant is depicted by:

7e0000201d860101007d019803bf1aaccc1eabe3aacad090686545a30000000002e50100.
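
As a rough illustration of this tokenizing step, the hex stream can be split into bytes and carved into fields. The header/payload split and the word extraction below are placeholder assumptions rather than the actual Table 4 layout; the TSL2550 helper follows the chord/step scheme described in section 5.3.3:

PACKET_HEX = ("7e0000201d860101007d019803bf1aaccc1e"
              "abe3aacad090686545a30000000002e50100")

def le16(buf, i):
    # 16-bit little-endian field starting at byte offset i.
    return buf[i] | (buf[i + 1] << 8)

def tsl2550_count(reg):
    # TSL2550 ADC register: bit 7 = valid flag, bits 6:4 = CHORD,
    # bits 3:0 = STEP (piece-wise linear expansion of the 7 data bits).
    chord = (reg >> 4) & 0x07
    step = reg & 0x0F
    return 16.5 * (2 ** chord - 1) + step * 2 ** chord

raw = bytes.fromhex(PACKET_HEX)
header, payload = raw[:7], raw[7:]  # assumed header length
words = [le16(payload, i) for i in range(0, len(payload) - 1, 2)]
print(words)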

Table 4 describes the data packet format.

Table 4: Data Packet Structure for the MTS420 Sensor Boards

The front panel of the application captured current and past readings from the remote nodes for analysis. The environmental monitoring parameters extracted from the MTS420CA sensor board were temperature, pressure, and luminosity. Since the environmental readings do not change much over short periods of time, a histogram was available to better evaluate the changes over longer periods. In addition, the lifetime of the network was proportional to the battery lifetime. Therefore, voltage readings were also included in the data packets sent from the remote nodes to the base station. It is important to continuously monitor the battery status of the nodes to ensure that they remain within their operating voltage range. The front panel is pictured in Figure 28.

Figure 28: Front Panel of the LabView Application Used to Test the MTS420CA Sensor Board

The block diagram of the application is presented in Figure 29 and the implementation of the application is presented in Figure 30. Figures 29 and 30 present implementations of the algorithms that were described by the sensor manufacturer and are presented in Appendix B.1 and Appendix C.1.

Figure 29: Block Diagram of the Implementation of the Sensor Algorithms

Figure 30: Implementation of the Sensor Algorithms

5.6 Cluster Configuration

Based on the requirements imposed by the solution concept, the cell architecture for flash-flood disasters evolved as illustrated in Figure 31.
Civil Defense authorities in the city of Maracay, Venezuela expressed an interest in placing at least one transducer for each type of magnitude identified in the section titled "Data Acquisition Requirements for Flash-Flood Monitoring". Therefore, the basic cell architecture for flash-flood monitoring consisted of:

1. Stargate network gateway,
2. Meteorological Motes,
3. Hydrological Mote,
4. Seismic Mote,
5. Repeater Motes.

Figure 31: Cluster Architecture for Flash-Flood Monitoring

5.6.1 Meteorological Mote

Two MICA2 motes, illustrated in Figure 32, were used to implement the meteorological nodes.
One mote had the MTS420CA sensor board attached to the 51-pin connector interface to collect light, temperature, humidity and barometric pressure readings. The other mote had the MDA300CA Data Acquisition board attached to the connector interface and coupled to custom sensors that collected precipitation, wind direction and wind speed readings. The DAQ board was able to sustain the channel requirements for all the transducers; therefore, only one DAQ board was necessary. The precipitation and wind speed transducers used one digital channel each, and the wind direction transducer used one single-ended analog channel. Table 5 lists the COTS transducers used to compose this type of mote.

Figure 32: Block Diagram of the Meteorological Motes

Table 5: Transducers Coupled with the Meteorological Mote

5.6.2 Hydrological Mote

A MICA2 mote coupled with the MDA300 Data Acquisition board and the custom water level and water flow sensors was used to implement this node. The hydrological mote is depicted in Figure 33. The water level transducer used one single-ended analog channel and the water flow transducer used one digital channel. Table 6 describes the transducers used for this mote.

Figure 33: Block Diagram of the Hydrological Mote

Table 6: Transducers Coupled with the Hydrological Mote

5.6.3 Seismic Mote

Figure 34 illustrates the two MICA2DOTs that were used to implement these nodes. One mote was coupled to the MDA500 Data Acquisition board along with the soil moisture sensor, whereas the other mote was coupled with the MTS510CA sensor board. Table 7 describes the transducers used for this mote.

Figure 34: Block Diagram of the Seismic Mote

Table 7: Transducers Coupled with the Seismic Mote

5.6.4 Repeater Mote

Along with the nodes mentioned above, MICA2 motes were used to implement the repeater nodes. They were introduced to the network as redundant nodes and used primarily to ensure that all nodes within the cell could form a mesh network. The repeater nodes were important in the communication and power-saving schemes. They present themselves as alternate datapaths in the mesh topology, expand the routing table and ensure non-Line-of-Sight connectivity. Furthermore, the increased number of nodes alleviates communication overload on certain nodes of the topology and extends the battery life of each individual node, which extends the lifetime of the network.

CHAPTER 6

POWERING TECHNIQUES FOR MOTES AND STARGATE

Natural disaster monitoring systems usually operate in unattended regions where power grids are not available and access to the location is difficult. Therefore, motes must be self-powered and should include energy awareness as part of their operation. Power saving strategies used for such a monitoring system include reducing the duty cycle, using long-lasting batteries and coupling the motes with solar panels as an external power resource.

6.1 Battery Power for Motes

The MICA2 motes were originally designed to be powered by two AA, 1.5V, alkaline type A91 batteries. These provide the motes with 3V and enough power to operate the CC1000 transceiver, the Atmel processor and the sensor devices. The motes have guaranteed operation down to 2.7V, [1]. Below this threshold the functionality of the mote is compromised. Most sensors and I/O devices do not operate below 2.5V, the CC1000 radio transceiver does not operate under 2.1V, and the microprocessor operates down to approximately 2.2 to 2.3V.
One of the most important issues in natural disaster monitoring involves the fundamental tradeoff between sensitivity and false alarms. When this safety threshold has been reached, the accuracy of the sensed data is compromised, which puts the reliability of the system at risk. The operating voltage range provided by two alkaline AA batteries, from 2.7 to 3V, was concluded to be very narrow. Experiments involving the use of three AA, 1.2V, rechargeable Nickel-Metal Hydride batteries were performed to verify how far the mote lifetime could be extended; this scheme increased the operating voltage range from 2.5 to 3.6V. An experiment was first conducted that involved two AA, 1.5V, Panasonic Industrial alkaline batteries powering the motes. An experiment was then conducted that involved three AA, 1.2V, rechargeable Nickel-Metal Hydride batteries as the power source. For both schemes, two models with duty cycles of 1% and 0.5% were investigated. The initial configuration of the experiment, as presented in Figure 35, used the two alkaline AA batteries with a power supply load ranging from 10 to 15mA.

Figure 35: Powering Scheme Using Two AA Alkaline Batteries

Considering a constant discharge rate from both batteries, in conjunction with the fact that at 2.7V the functionality of the mote is compromised, each cell was only useful to the system until it discharged to 1.35V. At full operation, which was a 100% duty cycle, and with a power supply load of 10mA, the MICA2 mote lasted for approximately 170 hours.
This configuration provided 1700 mAh of battery capacity for the mote. The battery discharge characteristic is presented in Figure 36. By reducing the duty cycle of the mote to 1% or less, the expected lifetime of the same mote, using the same pair of batteries, was dramatically increased. For model 1, operating at a 1% duty cycle, the mote lasted for 10.45 months. For model 2, operating at a 0.5% duty cycle, the mote almost doubled the 1% duty cycle lifetime and functioned for 19.08 months.

Figure 36: Discharge Characteristics for 1.5V Panasonic Industrial Alkaline Batteries

Figure 37 describes the system specifications for the battery lifetime versus duty cycle modeling for both experiments. Since the logger was not used for this application, the radio was by far the most power consuming module on the mote. Of the total current of 0.2169 mAh used in one hour, the radio drew approximately half, or 0.0920 mAh.
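
The lifetime figures above follow from a simple capacity-over-average-draw model. A sketch that approximately reproduces the reported numbers, assuming the 0.2169 mA average draw at a 1% duty cycle and 730 hours per month:

def lifetime_months(capacity_mah, avg_current_ma, hours_per_month=730):
    # Lifetime is simply usable capacity divided by the average draw.
    return capacity_mah / avg_current_ma / hours_per_month

print(lifetime_months(1700, 0.2169))  # alkaline pair: ~10.7 (reported 10.45)
print(lifetime_months(2500, 0.2169))  # NiMH trio: ~15.8 (reported 15.79)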

Figure 37: System Characteristics Investigated for the Two Powering Schemes

Figure 38 illustrates the battery lifetime for both models as a function of battery capacity.

Figure 38: Battery Lifetime vs. Battery Capacity for the Two Powering Schemes

The second experiment, which is illustrated in Figure 39, used three AA, 1.2V, rechargeable Nickel-Metal Hydride batteries. Based on a 500mA, or 0.2C, discharge rate, the typical average capacity of each cell approximated 2500mAh.
At full operation with a power supply load of 10mA, the MICA2 mote lasted for 250 hours. At a 1% duty cycle, the mote lasted for 15.79 months.

Figure 39: Powering Scheme Using Three AA NiMH Batteries

Finally, at a 0.5% duty cycle, the battery lifetime of the mote was extended to 29.54 months. This represents more than 10 months of additional operation at a 0.5% duty cycle, and approximately 5 months at a 1% duty cycle, when compared to the first experiment. The second configuration increased the total voltage across the MICA2 mote to 3.6V. Making the same assumption that the discharge rate is constant for all three batteries, and without violating the mote's functional threshold of 2.7V, each cell now remained useful to the system until it discharged to 0.9V. By lowering the minimum voltage requirement from 1.35 to 0.9V, the mote was able to drain each cell for a longer period of time, which increased the usable battery capacity. This configuration allows the battery lifetime to be extended to nearly two and a half years when operating at a 0.5% duty cycle. The characteristics of the cell for the second experiment are presented in Figure 40.

Figure 40: Discharge Characteristics for 1.2V Energizer Rechargeable NiMH Batteries

From these experiments, it was concluded that in order to achieve multi-year performance from the Wireless Sensor Network, motes should sleep the majority of the time. Reducing the operating duty cycle of the motes to a minimum is required to obtain a long lifetime from any type of battery. The three-battery configuration proved to be the most adequate for remote sensing: the motes remain unattended for long periods of time, and maximizing battery lifetime reduces operating costs and increases network autonomy. It is important to emphasize that for this type of application, since the environment does not change rapidly, events are not missed even at a low duty cycle.

6.2 Powering the Stargate Board

The Stargate computer provides the communication link between the remote sensing network and the higher network responsible for data dissemination. The importance of this component in the design concept makes the selection of a robust powering scheme fundamental to the proper operation of the monitoring system. In the selection process, there were two major concerns:

• The Stargate board operates in ACTIVE mode at all times; Table 8 shows the Stargate current draw in its different modes,

• The Stargate board must operate unattended for long periods of time.

Table 8: Stargate Computer Current Draw in Different Modes

Based on these requirements, solar panels were chosen to power the gateway. Figure 41 illustrates the components involved in powering the Stargate using solar energy.

Figure 41: Architecture to Provide Solar Energy for the Stargate Computer

The solar module was connected to a charge controller in order to regulate the voltage and current feeding the battery at the right state-of-charge, (SOC). Since the controller only operates with 12V loads, a DC/DC converter was used to interface with the Stargate computer. Linear converters were not used because of their low efficiency; instead, a switching DC/DC converter with 85% efficiency was implemented in the design. The powering subsystem of the Stargate computer included a linear regulator that down-converted incoming voltages to the operating voltages of the different components. The regulator could handle input voltages from 5V and above. However, since the linear regulator was not equipped with a heat sink, and considering that the current could be as high as 0.5A, it was not advisable to input voltages higher than 6V for long periods of time. Such a situation would represent a power of 0.5W, ((6V-5V)*0.5A), which would have to be dissipated by the linear regulator. Operating at 5V and assuming the worst case conditions, with the Stargate in ACTIVE mode at all times and the cellular card ON, the peak power consumption was calculated.

Based on the 85% efficiency of the DC/DC converter, its input power requirement was determined. For continuous operation throughout one week, the Stargate operated for 168 Hrs/Wk, which produced the total watt-hours per week. The total amp-hours per week were computed from the 12V DC/DC converter supply, from which the total amp-hours for one day follow. Authorities in Maracay suggested that flash-flood formation takes an average of 3 days. Considering the possibility of cloudy weather during this period, the system was designed to remain autonomous for the same number of days, which set the amount of storage the system needed.

Dividing the amp-hours of storage by the battery discharge rate yields the theoretical battery bank size. To diminish the aging effect of deep discharge, a 30% depth of discharge was chosen, which sets the required battery capacity. Another factor that needed to be considered when sizing the battery was the winter-time ambient temperature: during cold weather the battery bank experiences charging limitations. Table 9 presents the required multipliers based on temperature. These values need to be multiplied by the battery capacity to ensure that the batteries will be able to overcome cold weather effects. For example, in Maracay the multiplier was 1.1, based on the ambient temperature during winter, which determined the total battery capacity required.

Table 9: Multipliers Based on Winter Time Ambient Temperature

The "Universal Battery" model UB12750 has a 75 Ah capacity, which matches the requirements of this application. Therefore, only one battery would be required.
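
Working the sizing chain through with the stated parameters (5 V at 0.5 A worst case, 85% converter efficiency, a 12 V bus, 3 days of autonomy, a 30% depth of discharge and the 1.1 winter multiplier) gives, as a worked sketch:

\begin{align*}
P_{\text{peak}} &= 5\,\mathrm{V} \times 0.5\,\mathrm{A} = 2.5\,\mathrm{W} \\
P_{\text{in}} &= 2.5\,\mathrm{W} / 0.85 \approx 2.94\,\mathrm{W} \\
E_{\text{week}} &= 2.94\,\mathrm{W} \times 168\,\mathrm{h} \approx 494\,\mathrm{Wh} \\
Q_{\text{week}} &= 494\,\mathrm{Wh} / 12\,\mathrm{V} \approx 41.2\,\mathrm{Ah}, \qquad Q_{\text{day}} \approx 5.9\,\mathrm{Ah} \\
Q_{3\,\text{days}} &\approx 17.6\,\mathrm{Ah} \\
C_{30\%\,\text{DoD}} &= 17.6\,\mathrm{Ah} / 0.30 \approx 58.8\,\mathrm{Ah} \\
C_{\text{total}} &= 58.8\,\mathrm{Ah} \times 1.1 \approx 64.7\,\mathrm{Ah} \le 75\,\mathrm{Ah}
\end{align*}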

Based on these specifications, the number of solar modules required to power the Stargate can be determined.

CHAPTER 7

NETWORK STATISTICS AND COMMUNICATION QUALITY

Wireless Sensor Network field tests were performed to verify network behavior in different environments. Two different sites were investigated. A network with nine motes was deployed, and communication quality and network statistics parameters were recorded. The motes in the nine-mote deployment were operated at a 100% duty cycle, which meant they were "always on". Motes #1 through #7 used the 3-battery configuration whereas motes #8 and #9 used the 2-battery configuration. Prior to deployment, experiments were performed at the USF campus. The network was established outside the Engineering II building in an open field with direct Line-of-Sight, (LOS), communication among all the motes. Motes were placed one meter above the ground with an average distance of 15 meters between motes. It is important to place the motes at least one meter off the ground because of the antenna height; otherwise, the communication range drops to approximately one tenth of the maximum range, which was 1000 ft. Figure 42 illustrates the test network topology, which represented a relatively flat environment. Results in such a flat environment yielded reliable fidelity between transmitted and received packets. The data in Table 10 verifies that the motes remained within the desired voltage operating range of 2.7V to 3.9V. Table 10 presents and relates the most relevant parameters extracted from the USF site.
The flat field allowed link qualities that approached 100%, which means that nearly all transmitted packets were received. As a consequence, BER results were negligible.

Figure 42: Testing Network Statistics for the Sensor Network at USF

Table 10: Communication Quality and Network Statistics for the USF Experiment

The same experiment was performed in the mountains surrounding Maracay, Venezuela, a region constantly under the threat of flash-floods. As pictured in Figure 43, this environment presented several issues that needed to be considered before deployment.

Figure 43: Deployment Site in Maracay, Venezuela

The network topology for this type of environment has to rely on multihopping routing schemes, since Line-of-Sight communication was not possible. The dense vegetation and the rocky terrain scatter, and sometimes block, communication between nodes. In addition, the traffic on the road increased the frequency selectivity of the wireless channel. However, in some situations these adverse characteristics actually help the communication: the multipath reflections created by the mountains and the vegetation create additional datapaths for the network, and these reflections are beneficial in reaching hidden motes. Each mote was placed approximately 1.5 meters above the ground. The results for communication quality and network statistics, which are presented in Table 11, validated the use of a Wireless Sensor Network in such an environment.

Table 11: Communication Quality and Network Statistics for the Maracay Experiment

Even in harsh environmental conditions such as those found at the site in Venezuela, the network performed with satisfactory quality. The average link quality was 86.365% and the average BER was 0.13634%. To improve the results, repeater motes could be added to the network.
Such additions would create additional datapaths, reduce per-hop transmission distances and increase the network lifetime, since the network load would be distributed among a greater number of nodes.

A final experiment investigated the effects of increasing network density on link quality. Three repeater nodes, using the 3-battery configuration, were added to the network. This experiment was performed at USF. Table 12 presents the results. When compared to Table 10, the additional datapaths are shown to have prevented packet loss and improved link quality.

Table 12: Communication Quality and Network Statistics for the USF Experiment

CHAPTER 8

NETWORK GATEWAY: STARGATE CONFIGURATION

The Stargate board is responsible for performing the functionalities, described in Chapter 4, of a WSN network gateway for natural disaster monitoring. The Stargate participates in all phases of the monitoring and alerting system:

• Network Sink: It collects and locally stores the data sensed from the network,

• Communication Link: It transmits the database files from the remote location to a central workstation for further processing,

• Gateway for higher networks: It is connected to backbone infrastructures such as the internet, 802.11 WLAN and the cellular network. Such connectivity extends the range of communication from the network to the Local Office,

• Alarm Generation: It generates alarms to Civil Defense authorities if there is a communication breakage between the remote network and the Local Office.

8.1 Local Database

The data sensed from the remote field was collected by the base station and processed by the Stargate computer.
An application running on the gateway reads the SPI port of the base station, (the MICA2 mote), decodes the data packets based on mote-specific algorithms and writes the parameters into a relational database server stored on the network gateway. The server then communicates with the Local Office database client through query requests or by triggers. Queries may be generated by the client database at any time; since the server continuously monitors client requests, local authorities have the flexibility to download database files at their convenience. The other possibility is the generation of triggers based on alarm levels: as the alarm level increases, the transmission interval decreases. The alarm levels and the fuzzy logic algorithm that determine the transmission intervals are discussed in the following section. Regardless of how the transmission is initiated, the server database generates a comma-separated-value, (csv), file and sends the information via the cellular network to the client database at the Local Office. At the client end, database files are stored in relational databases and later used in applications such as LabView and Geographical Information Systems, (GIS).

The TinyOS message structure, which is illustrated in Figure 44, dynamically allocates memory space in the data packet payload according to the type of mote transmitting the message and the number of transducers coupled to the device. Thus, each type of mote has its own packet length, and the payload includes only the parameters sensed by that mote. The database server checks the mote ID on the packet to identify the algorithm to be used in parsing and decoding the packet.

Figure 44: TinyOS Message Structure with Dynamic Payload Length
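
A minimal sketch of this ID-keyed dispatch, with hypothetical decoder layouts and a CSV sink standing in for the relational tables:

import csv
import time

# Hypothetical per-mote payload layouts, standing in for the real
# decoding algorithms described in Chapter 5.
def decode_meteorological(words):
    return {"temperature": words[0], "pressure": words[1],
            "humidity": words[2], "luminosity": words[3]}

def decode_hydrological(words):
    return {"water_level": words[0], "water_flow": words[1]}

DECODERS = {
    1: decode_meteorological,  # Meteorological Mote #1
    2: decode_meteorological,  # Meteorological Mote #2
    3: decode_hydrological,    # Hydrological Mote
}

def append_reading(node_id, words, path="readings.csv"):
    # Look up the decoder by mote ID, then append one row to a CSV
    # file, mirroring the server's ID-keyed parsing.
    row = {"timestamp": time.time(), "node": node_id}
    row.update(DECODERS[node_id](words))
    with open(path, "a", newline="") as f:
        csv.DictWriter(f, fieldnames=list(row)).writerow(row)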

The uniform structure in which data is stored allows the creation of dynamic tables that adjust themselves depending on the number of magnitudes, such as temperature, humidity, water flow and pressure, being monitored. Database tables were arranged in a similar fashion to Tables 10 and 11 but with additional fields to store the transducer readings; therefore, both the server and the client have tables with the same format. The following node assignments are assumed:

• NodeID 0: Base station + Stargate gateway,
• NodeID 1: Meteorological Mote #1,
• NodeID 2: Meteorological Mote #2,
• NodeID 3: Hydrological Mote,
• NodeID 4: Seismic Mote #1,
• NodeID 5: Seismic Mote #2,
• NodeID 6: Repeater Mote.

A sample database table for flash-flood monitoring is presented in Tables 13 and 14.

Table 13: Sample Database Table Running on the Server and Client Computers

Table 14: Continuation of the Sample Database Table Running on the Server and Client Computers

8.1.1 Alarm Generation

The life cycle of flash-flood monitoring is as follows:

• No signs of flash-flood: The WSN nodes indicate stable parameters throughout the sensed field. Characteristics of a stable condition include dry weather, normal water level and flow, high barometric pressure with respect to other areas of the country, and static behavior of the mountains,

• Rain formation: Pressure can be an important indicator of rain formation, since it tends to drop before rainfall. For instance, the meteorological, hydrological and seismic nodes may be alerted by the pressure sensors to the possibility of rainfall in order to begin a network-wide flash-flood check. The check verifies that hazardous conditions are met before triggering alarms,

• Rain: Precipitation is of vital importance for a flash-flood monitoring system. Landslides and river overflow are products of sustained rainfall, or sometimes of short but intense downpours,

• Landslides: These are monitored to prevent dams forming along rivers. Rainfall, seismic activity and unstable soil are a few examples of how landslides may be triggered. Such parameters need to be carefully monitored, as does the sudden disappearance of nodes,

• Dam forming: In most cases of flash-floods studied in the region, landslides were the major cause of dam forming. As the mountain collapses, the earth rolls downhill and eventually reaches the rivers. Such a situation should trigger high-priority alarms to initiate actions by first responders. Under these circumstances the water level rises rapidly in the upstream portions of the river and the flow is reduced drastically downstream,

• Flash-flood: This situation is very hard to monitor and it is not expected that most alerting messages will have taken place before it happens. Nodes being wiped out and extreme readings from some sensors are characteristic of this stage of the disaster.

In order to generate alarm messages, the occurrence of a flash-flood has to be identified and/or predicted. The life cycle of a flash-flood can be associated with different alarm levels; higher levels reflect a more critical situation, implying the imminent occurrence of a flash-flood. An automated alarm system could use two types of mechanisms whose results could be compared in order to avoid false alarms. Such mechanisms could consist of:

• A simple inference mechanism, based on fuzzy logic or intelligent systems, that uses actual measurements, or changes with respect to previous measurements, to identify which phase of the flash-flood exists. Such a mechanism was formerly termed an "expert" system,

• A more sophisticated mechanism that makes use of models to forecast the disaster or critical situations.

Table 15 describes the alarm levels used to trigger transmission of database files from the Stargate to the Local Office.

Table 15: Alarm Levels Used to Trigger Transmission of Database Files

8.1.2 Fuzzy Logic

The proposed solution employed a fuzzy inference system to generate the alarms. A Fuzzy Inference System, (FIS), has the following advantages:

• It offers the possibility of using human knowledge,

• It is possible to constantly improve the system,

• It is simple to maintain, improve and update,

• It possesses the ability to handle uncertainty: since the alerting levels are based on "conjectures", the system has to handle subjective information,

• It possesses the capability of working with linguistic variables, which makes the user interface very simple. The system could use a color code to indicate a measure of:

• The time left before the disaster happens,

• The probability that the disaster will strike.

The execution of such an inference engine does not demand high computational resources or access to the historical register of data collected from the field. Therefore, it could be implemented in a computer with direct access to the sensor network, or even in a distributed manner within the WSN.

The fuzzy logic algorithm needs to be developed with the assistance of Civil Defense authorities and experts on the environment at risk. This is because the algorithm takes all variables into account and, based on certain combinations, outputs alarm levels corresponding to the situation. As discussed in Chapter 4, it is of fundamental importance to develop a system that is sensitive and able to effectively recognize hazardous conditions. At the same time, however, the system must be "intelligent" enough not to overreact and trigger false alarms. For example, if "Luminosity" is LOW, "Pressure" is LOW, "Precipitation" is HIGH, and "Water Level" is HIGH, then there is a high probability that a disaster is imminent, and the algorithm would output the alarm level that corresponds to this scenario.

As a proof of concept, a fuzzy logic algorithm with 3 alarm levels and 3 monitored magnitudes was implemented. Table 16 describes the 3 alarm levels that were used. The magnitudes monitored were water level, precipitation and soil moisture.

Table 16: Fuzzy Logic Algorithm as a Proof of Concept

For this proof of concept, fifteen (15) rules were generated. Figure 45 illustrates the Mamdani inference system that was utilized.

Figure 45: Fuzzy Inference System for the Alarm Algorithm

The rules for this FIS are what determined the sensitivity of the alarms. Tables 17 and 18 relate the 3 magnitudes and the possible combinations among them; for each combination an alarm is generated.

Table 17: Rules for the Fuzzy Logic Alarm Algorithm

Table 18: Continuation of the Rules for the Fuzzy Logic Alarm Algorithm
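
A compact sketch of such a Mamdani-style engine in Python. The membership breakpoints and the abbreviated rule table below are illustrative placeholders, not the fifteen rules of Tables 17 and 18:

def tri(x, a, b, c):
    # Triangular membership function on [a, c], peaking at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def low(x):
    return tri(x, -0.5, 0.0, 0.5)   # inputs normalized to [0, 1]

def high(x):
    return tri(x, 0.5, 1.0, 1.5)

# (water level, precipitation, soil moisture) -> alarm level 1..3.
RULES = [
    ((low, low, low), 1),
    ((low, high, low), 2),
    ((high, high, low), 3),
    ((high, high, high), 3),
    # ...the remaining combinations would complete the 15-rule table.
]

def alarm(level, precip, soil):
    # Mamdani-style inference: min for rule firing strength, max to
    # aggregate per output level, then a weighted average to defuzzify.
    strength = {1: 0.0, 2: 0.0, 3: 0.0}
    for (m1, m2, m3), out in RULES:
        w = min(m1(level), m2(precip), m3(soil))
        strength[out] = max(strength[out], w)
    total = sum(strength.values())
    if total == 0.0:
        return 1
    return round(sum(k * w for k, w in strength.items()) / total)

print(alarm(0.9, 0.8, 0.2))  # high water + heavy rain -> level 3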

8.2 Connecting to the GPRS Network

The Sierra Wireless Aircard 750 connected to the Stargate PCMCIA slot provided the gateway with connectivity to a Global System for Mobile Communications, (GSM), cellular network. The General Packet Radio Service, (GPRS), network used for data communication was mounted on top of the GSM network. The Wireless Sensor Network used the GPRS network as the communication link between the remote monitoring subsystem and the data analysis and alerting subsystem. The GPRS network was also used as the system for broadcasting alerts to the population and local authorities. For example, if a safety threshold was violated, text messages would be sent to cellular phone numbers cataloged with the Civil Defense Authority; the population would be instructed to evacuate or to apply preparedness procedures, and local authorities would be instructed to take the necessary measures. The Stargate computer and the Local Office workstations accessed the GPRS network by establishing a Point-to-Point Protocol, (PPP), connection. Such a network represents an ideal backbone infrastructure for this type of application, since it can be used for both data communication and data dissemination. In such a network, database files are transmitted from the Stargate gateway, which is the server, to the Local Office, which is the client, via the GPRS network.

8.2.1 Establishing a PPP Connection

The Aircard emulates a serial port over the PCMCIA connection, which enables a PPP connection to be initiated once the fuzzy logic model determines that it is time to transmit database files.
PPP is a mechanism for creating and running the Internet Protocol and other network protocols over a serial connection, a telnet-established link, or a link established through modems and telephone lines. The Stargate gateway computer and the Local Office workstation were each configured as both PPP client and PPP server, because transmissions could be initiated by client requests or by database triggers. When the call was initiated by triggers in the database server, the Stargate computer acted as the PPP client and the Local Office workstation acted as the PPP server; when authorities sent queries to the network gateway, the reciprocal was true. In order to secure the transfer of the files, the PPP connection requires both peers to authenticate themselves using the Password Authentication Protocol, (PAP): once the connection is established, each end requests the other to authenticate itself by sending a user name and a password. Scripts similar to the ones presented in Appendix A.1 were used to automatically initiate PPP connections from both the client and server ends.

8.3 Establishing WLAN Connectivity

The Stargate board, which was used as the network gateway, had a CompactFlash slot on the motherboard that allowed the insertion of an 802.11 wireless CompactFlash card. Due to the short range of the technology, which is usually around 100 meters, a WLAN cannot be used as the communication link between the network gateway and the Local Office. However, it does permit Civil Defense authorities to establish 802.11 connectivity with the Stargate and download database files from the gateway to a laptop computer or a PDA that has a relational database installed.

8.4 Internet Connectivity

The internet can perform a two-fold function for the monitoring and alerting system. One of the features of the network gateway is that it runs an Apache HTTP server. The data collected from the sensed field and stored in the local database is transmitted to the Local Office primarily through the GPRS network; another possibility is to publish the database files on the HTTP server and let the workstation at the Local Office download the files remotely. This redundancy is important in order to guarantee that the database files will reach the central workstations for data processing. The other function of the internet is in data dissemination: the data can be made available to the general public after it has been processed by the Local Office. This provides the population with almost real-time information on the monitoring of a disaster.

CHAPTER 9

CONCLUSIONS AND FUTURE WORK

Wireless Sensor Networks have proven themselves to be a reliable solution for providing remote sensing for natural disaster monitoring systems. The motes were adapted according to application-specific requirements to sense, collect and transmit the relevant parameters. The integration of custom transducers with the motes required the implementation of signal conditioning algorithms to accurately extract the data from the devices and apply it to the data acquisition board channels. The DAQ boards allowed different mote configurations, such as meteorological, hydrological and seismic, to be designed. The information collected from the network was stored in a local database and made available for transmission via the GPRS network to Local Office workstations for analysis. Since the cell architecture was comprised of different types of sensor nodes, unique data packets were generated according to the types of transducers coupled to each mote. The TinyOS message structure allowed adaptations in which the payload length could be customized as desired.

Passive transducers were selected to minimize current draw. Experiments with three 1.2V batteries increased battery lifetime and, consequently, network lifetime.
At a 1% duty cycle, the three-battery setup enabled the motes to operate for approximately 16 months. More experiments involving battery capacity and mote lifetime are recommended. The integration of a solar panel to power the motes is also an alternative to be investigated. It is vital to continue using energy-aware approaches, since energy consumption continues to be the limiting factor of Wireless Sensor Network technology.

In the field tests performed at USF and in Maracay, the network was established in approximately 20 minutes. The lack of infrastructure requirements enabled easy and fast deployment at both sites. The flexibility to adapt to any environment makes WSNs desirable for the monitoring of several natural disasters. The network proved to be robust under the conditions tested. For regions such as USF, with flat characteristics, communication quality and network statistics approximated 100% fidelity; in the mountainous environment of Maracay the results, on average, remained above 85%. Particularly in Maracay, several challenges involving network communications, connectivity to higher networks and accessibility were analyzed. The WSN used for remote sensing successfully established communication. However, future work should include investigations with respect to transmitting the collected data from the network gateway to the Civil Defense office.

A second experiment involved increasing network density. Improvements in the fidelity of results were detected as additional datapaths were created. Furthermore, transmission distances were reduced, which diminished transmission costs and prolonged battery lifetime in each mote.

Presently, Wireless Sensor Networks have generic features for a variety of applications. However, the requirements for disaster monitoring are unique and differ from those for industrial or environmental monitoring.
In fact, monitoring the same phenomena in different locations may require different cell architectures, since each environment is unique. In the near future it will be critical to adapt the technology to comply with all the requirements imposed by natural disaster monitoring.

The use of WSNs to remotely sense the environment is a particularly reasonable solution in places that lack money and human resources. It is affordable to operate and maintain, adaptable to any environment, scalable, and provides the population with accurate information regarding the threatened region, which saves lives and minimizes property damage. The adaptability and flexibility for each environment are what make Wireless Sensor Networks attractive for use in remote sensing. The power of using a WSN in conjunction with a signal processing station lies in the ability to transform raw data, such as the data coming from the WSN, into useful information. The workstation, in the hands of decision makers and authorities, becomes a powerful instrument that helps in the rapid assessment of critical situations.

The monitoring requirements for these concept solutions are determined by local authorities. More specifically, the types of transducers to be used and the strategic locations where motes will be deployed to obtain optimum readings require an understanding of the environment to be sensed. Increasing the understanding of the environment with the data collected by the solution can be used to enhance the system, for example by improving the fuzzy logic algorithm that is responsible for the transmission interval of packets from the Stargate gateway to the Local Office. If data is not available prior to deployment, it is difficult to establish thresholds for the alerting of hazardous conditions.

REFERENCES

[1] MPR/MIB User's Manual, Crossbow Technology

[2] MTS/MDA Sensor and Data Acquisition Boards User's Manual, Revision A, Crossbow Technology, April 2004

[3] Stargate Developer's Guide, Revision A, Crossbow Technology, February 2004

[4] Crossbow Technology, available online at http://www.xbow.com

[5] I. Cline, "Special Report on the Galveston Hurricane", Monthly Weather Review, September 1900

[6] "Climate of 2004 Atlantic Hurricane Season", National Oceanic and Atmospheric Administration, Tech. Rep., 2004. [Online]. Available: http://www.ncdc.noaa.gov/oa/climate/research/2004/hurricanes04.html

[7] F. Hossain, L. Karklis, B. Cordyack, N. Hsu and A. Hurt, "Tsunami in South Asia", The Washington Post Company, http://www.washingtonpost.com/wp-srv/world/daily/graphics/tsunami_122804.html, 2004

[8] N. Helm, "Overview of Disaster Monitoring Activities of the IAF, IAA and IEEE", Proceedings of the Second United Nations and JUSTAP Joint Symposium on Space Technology Applications for Natural Disaster, pp. 23-26, November 1998

[9] K. Arai, "An Expectation on Remote Sensing Technology for Disaster Management and Response", Proceedings of the First United Nations and JUSTAP Joint Symposium on Space Technology Applications for Natural Disaster, 4-5 November 1997, Hawaii, USA

[10] S. Herwitz, J. Leung, R. Higgins, S. Dunagan and J. Arvesen, "Remote Command-and-Control of Imaging Payloads using Commercial Off-the-Shelf Technology", Int'l Geoscience & Remote Sensing Symp., Toronto, Canada, 24-28 June 2002

[11] D. Malan, T. Fulford-Jones, M. Welsh and S. Moulton, "CodeBlue: An Ad Hoc Sensor Network Infrastructure for Emergency Medical Care", International Workshop on Wearable and Implantable Body Sensor Networks, April 2004

[12] D. Doolin, S. Glaser and N. Sitar, "Software Architecture for a GPS-Enabled Wildfire Sensor Board", TinyOS Technology Exchange, February 2004

[13] Nirupama and S. Simonovic, "Role of Remote Sensing in Disaster Management", ICLR Research, Paper Series No. 21, September 2002

[14] I. Dowman and L. Wald, "Remote Sensing for the Detection, Monitoring and Mitigation of Natural Disasters", UNISPACE III ISPRS/EARSeL Workshop, no. 4, pp. 20-25, December 1997

[15] W. Webb, "Mesh Technology Boosts Wireless Performance", EDN Magazine, November 2003

[16] J. Hill, R. Szewczyk, A. Woo, S. Hollar, D. Culler and K. Pister, "System Architecture Directions for Networked Sensors", Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, 2000

[17] D. Gay, P. Levis, R. von Behren, M. Welsh, E. Brewer and D. Culler, "The nesC Language: A Holistic Approach to Networked Embedded Systems", ACM SIGPLAN Conference on Programming Language Design and Implementation, San Diego, CA, June 2003

[18] W. Heinzelman, A. Chandrakasan and H. Balakrishnan, "Energy-Efficient Communication Protocol for Wireless Microsensor Networks", Proceedings of the 33rd Hawaii International Conference on System Sciences, (HICSS 00), January 2000

PAGE 102

APPENDICES


Appendix A PPP Scripts

A.1 ppp-on script

#!/bin/sh
# Script to initiate a PPP connection.
# These are the parameters. Change as needed.
TELEPHONE=974-4551
ACCOUNT=network
PASSWORD=monitor
LOCAL_IP=0.0.0.0
REMOTE_IP=0.0.0.0
NETMASK=255.255.255.0

export TELEPHONE ACCOUNT PASSWORD

DIALER_SCRIPT=/etc/ppp/ppp-on-dialer

# Initiate the connection
exec /usr/sbin/pppd debug /dev/ttyS1 115200 \
        $LOCAL_IP:$REMOTE_IP \
        connect $DIALER_SCRIPT


Appendix A (continued)

A.2 ppp-on-dialer script

This script performs the connection protocol for the desired connection.

#!/bin/sh
/usr/sbin/chat -v \
        TIMEOUT 3 \
        ABORT '\nBUSY\r' \
        ABORT '\nNO ANSWER\r' \
        ABORT '\nRINGING\r\n\r\nRINGING\r' \
        '' \rAT \
        'OK-+++\c-OK' ATH0 \
        TIMEOUT 30 \
        OK ATDT$TELEPHONE \
        CONNECT '' \
        ogin:--ogin: $ACCOUNT \
        assword: $PASSWORD


Appendix A (continued)

A.3 ppp-off script

#!/bin/sh
# Determine the device to be terminated.
if [ "$1" = "" ]; then
        DEVICE=ppp0
else
        DEVICE=$1
fi

# If the ppp0 pid file is present, then the program is running. Stop it.
if [ -r /var/run/$DEVICE.pid ]; then
        kill -INT `cat /var/run/$DEVICE.pid`

        # If the kill did not work, then there is no process running for
        # this pid. It may also mean that the lock file was left behind;
        # you may wish to delete it at the same time.
        if [ ! "$?" = "0" ]; then
                rm -f /var/run/$DEVICE.pid
                echo "ERROR: Removed stale pid file"
                exit 1
        fi

        # Success. Let pppd clean up.
        echo "PPP link to $DEVICE terminated."
        exit 0
fi

# The ppp process is not running for ppp0.
echo "ERROR: PPP link is not active on $DEVICE"
exit 1
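Taken together, these three scripts follow the familiar Linux PPP-HOWTO pattern. Assuming they are installed under /etc/ppp/, as the DIALER_SCRIPT path suggests, the dial-up link from the Stargate gateway to the Local Office would typically be brought up by invoking ppp-on and torn down by invoking ppp-off.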


Appendix B Intersema MS5534 Algorithm

Figure 46: Intersema MS5534 Algorithm
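Since the flowchart itself cannot be reproduced in text, the C sketch below outlines the first-order compensation arithmetic that the Intersema MS5534 datasheet prescribes for converting raw pressure (D1) and temperature (D2) words. It is a sketch under stated assumptions, not the thesis code: the unpacking of coefficients c1-c6 from the sensor's four calibration words is omitted, and the numeric constants should be verified against the datasheet before use.

/* Minimal sketch of MS5534 first-order compensation, assuming the
 * calibration coefficients c1..c6 have already been unpacked from the
 * four calibration words. Constants follow the datasheet formulas and
 * should be verified against the datasheet. */
#include <stdint.h>

typedef struct {
    int32_t temp_0p1C;    /* compensated temperature, 0.1 degC units */
    int32_t press_0p1mb;  /* compensated pressure, 0.1 mbar units    */
} ms5534_result_t;

ms5534_result_t ms5534_compensate(uint16_t d1,  /* raw pressure word    */
                                  uint16_t d2,  /* raw temperature word */
                                  uint16_t c1, uint16_t c2, uint16_t c3,
                                  uint16_t c4, uint16_t c5, uint16_t c6)
{
    ms5534_result_t r;

    /* Temperature: difference from the factory reference point. */
    int32_t ut1 = 8 * (int32_t)c5 + 20224;
    int32_t dT  = (int32_t)d2 - ut1;
    r.temp_0p1C = 200 + (dT * ((int32_t)c6 + 50)) / 1024;

    /* Pressure: temperature-compensated offset and sensitivity.
     * The sens * (d1 - 7168) product can exceed 32 bits, hence the
     * 64-bit intermediate. */
    int32_t off  = (int32_t)c2 * 4 + (((int32_t)c4 - 512) * dT) / 4096;
    int32_t sens = (int32_t)c1 + ((int32_t)c3 * dT) / 1024 + 24576;
    int32_t x    = (int32_t)(((int64_t)sens * ((int32_t)d1 - 7168)) / 16384)
                   - off;
    r.press_0p1mb = x * 10 / 32 + 2500;

    return r;
}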


Appendix B (continued)

Figure 46: Continued


Appendix C Taos TSL2550 Algorithm

Figure 47: Taos TSL2550 Algorithm
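As with the previous appendix, the flowchart is summarized here by a C sketch rather than reproduced. The TSL2550 returns a logarithmically compressed ADC byte per channel (channel 0 senses visible plus infrared light, channel 1 infrared only); the sketch expands each byte to a linear count and applies the empirical lux approximation published in the TAOS datasheet. The bit layout and the 0.46 / -3.13 constants are assumptions to verify against the datasheet.

/* Illustrative TSL2550 decoding and lux estimation. */
#include <math.h>
#include <stdint.h>

/* Expand one ADC byte (1 valid bit, 3 chord bits, 4 step bits) into a
 * linear count; returns -1 if the valid bit is not set. The truncated
 * 16.5 * (2^chord - 1) chord bases reproduce the datasheet lookup
 * table (0, 16, 49, 115, 247, 511, 1039, 2095). */
static int tsl2550_count(uint8_t adc_byte)
{
    int chord, step;

    if (!(adc_byte & 0x80))
        return -1;
    chord = (adc_byte >> 4) & 0x07;
    step  = adc_byte & 0x0F;
    return (int)(16.5 * ((1 << chord) - 1)) + step * (1 << chord);
}

/* Estimate illuminance from channel 0 (visible + IR) and channel 1
 * (IR only) using lux ~= ch0 * 0.46 * exp(-3.13 * ch1/ch0). */
double tsl2550_lux(uint8_t ch0_byte, uint8_t ch1_byte)
{
    int c0 = tsl2550_count(ch0_byte);
    int c1 = tsl2550_count(ch1_byte);

    if (c0 <= 0 || c1 < 0)
        return 0.0;
    return (double)c0 * 0.46 * exp(-3.13 * (double)c1 / (double)c0);
}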


Appendix D MDA300CA DAQ Board Schematic

Figure 48: MDA300CA DAQ Board Schematic


Appendix E Sensor Driver

E.1 SoilDriver.nc

includes sensorboard;

// Configuration that exposes the soil-moisture sensor as a standard
// ADC channel. TOS_ADC_SOIL_PORT is expected to be defined in
// sensorboard.h.
configuration SoilDriver {
  provides interface ADC as SensorData;
  provides interface StdControl;
}
implementation {
  components SoilDriverM, ADCC;

  StdControl = SoilDriverM;
  SensorData = ADCC.ADC[TOS_ADC_SOIL_PORT];
  SoilDriverM.ADCControl -> ADCC;
}


Appendix E (continued)

E.2 SoilDriverM.nc

module SoilDriverM {
  provides interface StdControl;
  uses {
    interface ADCControl;
  }
}
implementation {
  // Power up the sensor by driving its control pin, then initialize
  // the ADC subsystem.
  command result_t StdControl.init() {
    TOSH_MAKE_SOIL_CTL_OUTPUT();
    TOSH_SET_SOIL_CTL_PIN();
    return call ADCControl.init();
  }

  command result_t StdControl.start() {
    TOSH_MAKE_SOIL_CTL_OUTPUT();
    TOSH_SET_SOIL_CTL_PIN();
    return SUCCESS;
  }

  // Clear the control pin to power the sensor down.
  command result_t StdControl.stop() {
    TOSH_CLR_SOIL_CTL_PIN();
    return SUCCESS;
  }
}


Appendix E (continued)

E.3 TestDriver.nc

// Top-level test configuration; this component does not provide any
// interface.
configuration TestSensor {
}
implementation {
  components Main, TestSensorM, LedsC, TimerC, SoilDriver as Sensor;

  Main.StdControl -> TestSensorM;
  Main.StdControl -> TimerC;

  // Wiring for the new sensor
  TestSensorM.SensorControl -> Sensor;
  TestSensorM.SensorData -> Sensor;
  TestSensorM.Leds -> LedsC;
  TestSensorM.Timer -> TimerC.Timer[unique("Timer")];
}


Appendix E (continued)

E.4 TestDriverM.nc

module TestSensorM {
  provides {
    interface StdControl;
  }
  uses {
    interface StdControl as SensorControl;
    interface ADC as SensorData;
    interface Timer;
    interface Leds;
  }
}
implementation {
  /* Initialize the component. */
  command result_t StdControl.init() {
    call Leds.init();
    call SensorControl.init();
    return SUCCESS;
  }

  /* Start the component and the clock. */
  command result_t StdControl.start() {
    call Leds.redOn();
    call SensorControl.start();
    call Timer.start(TIMER_REPEAT, 1000);
    return SUCCESS;
  }

  /* Stop the component. */
  command result_t StdControl.stop() {
    call SensorControl.stop();
    return SUCCESS;
  }

  /* Measure the sensor on every timer tick. */
  event result_t Timer.fired() {
    call SensorData.getData();
    call Leds.redToggle();
    return SUCCESS;
  }

  /* Sensor ADC data is ready; toggle the green LED to signal a
     completed sample. */
  async event result_t SensorData.dataReady(uint16_t data) {
    call Leds.greenToggle();
    return SUCCESS;
  }
}
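Assuming a standard TinyOS 1.x build tree, these files would typically be compiled and installed from the application directory with the usual make targets (for example, make mica2). Note that the nesC toolchain requires each component's file name to match the component name, so the TestSensor and TestSensorM components above would live in files named TestSensor.nc and TestSensorM.nc.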