Detecting red tides on the west Florida shelf by classification of SeaWiFS satellite imagery


Material Information

Title:
Detecting red tides on the west Florida shelf by classification of SeaWiFS satellite imagery
Creator:
Zhang, Haiying, 1974-
Place of Publication:
Tampa, Florida
Publisher:
University of South Florida
Publication Date:
Physical Description:
vii, 65 leaves : ill. (chiefly col.) ; 29 cm.


Subjects / Keywords:
Red tide -- Florida -- Gulf Coast ( lcsh )
Oceanography -- Remote sensing ( lcsh )
Dissertations, Academic -- Computer Science -- Masters -- USF ( FTS )


General Note:
Thesis (M.S.C.S.)--University of South Florida, 2002. Includes bibliographical references (leaves 62-65).

Record Information

Source Institution:
University of South Florida
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
029445815 ( ALEPH )
52888953 ( OCLC )
F51-00027 ( USFLDC DOI )
f51.27 ( USFLDC Handle )



Office of Graduate Studies
University of South Florida
Tampa, Florida

CERTIFICATE OF APPROVAL

This is to certify that the thesis of HAIYING ZHANG in the graduate degree program of Computer Science was approved on November 8, 2002 for the Master of Science in Computer Science degree.

Examining Committee:

Co-Major Professor: Lawrence O. Hall, Ph.D.
Co-Major Professor: Dmitry B. Goldgof, Ph.D.
Member: Robert F. Chen, Ph.D.
Member: Chuanmin Hu, Ph.D.
Member: Frank E. Muller-Karger, Ph.D.

Committee Verification: Dean


DETECTING RED TIDES ON THE WEST FLORIDA SHELF BY CLASSIFICATION OF SEAWIFS SATELLITE IMAGERY

by

HAIYING ZHANG

A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer Science, Department of Computer Science and Engineering, College of Engineering, University of South Florida

Date of Approval: November 8, 2002

Co-Major Professor: Lawrence O. Hall, Ph.D.
Co-Major Professor: Dmitry B. Goldgof, Ph.D.


TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
ABSTRACT

CHAPTER 1. INTRODUCTION
1.1 Review of Prior Work
1.2 Objective
1.3 Research Region and Data
1.3.1 Research Region
1.3.2 In-situ Data
1.3.3 SeaWiFS Satellite Data
1.4 Outline

CHAPTER 2. METHODOLOGY
2.1 Overall Structure of the System
2.2 Expert Knowledge Acquisition
2.3 Segmentation
2.3.1 Fuzzy C-Means Segmentation (FCM)
2.3.2 Validity Measure of FCM
2.3.3 Bit-reduction FCM (brFCM)
2.3.4 Texture Operations
2.4 Cluster Mapping
2.5 Neural Network and Quickprop Method

CHAPTER 3. SEGMENTATION OF SEAWIFS IMAGES
3.1 Preparations for Segmentation
3.1.1 Satellite Imagery
3.1.2 Image Collection and Knowledge Base Acquisition
3.1.3 Image Ground Truthing
3.1.4 Feature Selection
3.2 Results Using FCM
3.3 Results Using brFCM
3.4 Results Using Texture Operations

CHAPTER 4. CLASSIFICATION USING NEURAL NETWORK
4.1 Neural Network Tuning and Applications
4.1.1 Quickprop Neural Network Algorithm
4.1.2 Number of Nodes
4.1.3 Convergence and Local Minima
4.1.4 Generalization, Overfitting and Stopping Criterion
4.1.5 Cross-validation for NN Performance Evaluation
4.1.6 Other Work on NN Training and Testing
4.2 Development of the Neural Network
4.2.1 Training/Testing Dataset
4.2.2 Neural Network Structure
4.2.3 Performance of the Neural Network
4.3 Year 2001 Case Study
4.3.1 Neural Network Results for the Tampa Bay-Charlotte Harbor Area in 2001
4.3.2 Neural Network Results for the Entire WFS in 2001
4.4 Adding New Images

CHAPTER 5. SUMMARY AND DISCUSSION

REFERENCES


LIST OF TABLES

Table 1. brFCM vs. FCM for SeaWiFS S2000273.hdf on the WFS
Table 2. Statistical Comparison of Segmentation Results between brFCM Using 7 Features and brFCM Using 8 Features
Table 3. Confusion Matrix of the Neural Network Results at the Cluster Level
Table 4. Confusion Matrix of the Neural Network Results at the Image Level
Table 5. Confusion Matrix of the Neural Network Results at the Pixel Level


LIST OF FIGURES

Figure 1. Study Area: West Florida Shelf
Figure 2. EcoHAB Cruise Sample Stations and K. brevis Cell Count Contour in November, 2001
Figure 3. Satellite Coverage at USF (chl-a image: S2001237.png)
Figure 4. 7 Features from SeaWiFS
Figure 5. Flow Chart of FCM
Figure 6. Example of Texture Operation
Figure 7. Neural Network Structure
Figure 8. Feature Selection: 3 Features vs. 7 Features
Figure 9. Segmentation Results Using 1 Band on Oct. 1, 2000
Figure 10. Classification Results Using FCM (c=10, s=7, r=0) on Sept. 29, 2000
Figure 11. Classification Result Using FCM for the WFS Area (c=10)
Figure 12. Examples of Textures Extracted from chl_a Image
Figure 13. Segmentation Comparison between brFCM Using 7 Original Features and brFCM Using an Extra Texture Feature
Figure 14. Neural Network Structure Used in this Study
Figure 15. Red Tides Classification Result During Early-mid August, 2001
Figure 16. Red Tides Classification Result and Ground Truth During Late August, 2001
Figure 17. Red Tides Classification Result and Ground Truth During September, 2001
Figure 18. Classification Result of WFS: First Day of Red Tides in 2001


DETECTING RED TIDES ON THE WEST FLORIDA SHELF BY CLASSIFICATION OF SEAWIFS SATELLITE IMAGERY

by

HAIYING ZHANG

An Abstract of a thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer Science, Department of Computer Science and Engineering, College of Engineering, University of South Florida

Date of Approval: November 8, 2002

Co-Major Professor: Lawrence O. Hall, Ph.D.
Co-Major Professor: Dmitry B. Goldgof, Ph.D.


Red tides are a recurring problem on the West Florida Shelf with numerous human, economic and ecosystem impacts. In this study, we report on an automatic classification system for SeaWiFS images, consisting of an unsupervised clustering algorithm FCM (fuzzy c-means) and a neural network classifier, which was developed to monitor and detect red tides on the West Florida Shelf. Forty SeaWiFS images collected and processed at the University of South Florida, and ground truth gathered from Ecohab (Ecology and Oceanography of Harmful Algal Blooms) ship observations in the Tampa Bay-Charlotte Harbor region, both from 1998 to 2001, are used to train and validate the system. Seven original features from SeaWiFS multi-spectral bands are used as inputs for clustering. A neural network is trained for automated cluster labeling, and an overall accuracy of 95% is obtained by leave-one-out validation. This neural network is then applied to 87 unseen images taken in 2001 from January to November to generate a complete series of segmented red tide images. The classification results of red tides agree well with Ecohab data sets and are used to describe the presence and movement of the harmful algal blooms in 2001.


CHAPTER 1
INTRODUCTION

Red tide is a kind of harmful algal bloom (HAB) in which one or a few species of algae dominate and create a population above the "background" level. This event can cause water discoloration, presence of toxins in water, mass mortality of fish and marine mammals, and serious illness in humans, thereby leading to numerous impacts on humans, the economy and the ecosystem. Red tides occur in a wide variety of coastal regions in the US and throughout the world. While their occurrences are increasingly being reported, the causes of these phenomena remain hotly debated because of their diversity and complexity. Thus, apart from their intrinsic scientific interest, detection of red tides in their early stage is a critical issue for protection of the aquaculture environment and public health (Cullen et al., 1997). However, the oceanographer's conventional methods, i.e., station sampling and ship measurement, giving either a time series at a fixed point or a synoptic chart, cannot be used to detect the outbreak of red tides because of their large scale in space and rapid development in time.

Satellite remote sensing provides repeated monitoring of the Earth-atmosphere system in several spectral bands with a high spatial resolution. It provides a great tool for measuring the ocean color in marine environments. Providing broader spatial and temporal coverage, and additional environmental information, remote sensing


has the potential to play an important role in red tide prediction and monitoring (for example, see Millie et al., 1997, Kahru and Mitchell, 1998, Cracknell et al., 2001 and Walsh et al., 2002). Several ocean color sensors have been launched, for example, CZCS (the Coastal Zone Color Scanner, from 1978 to 1986), SeaWiFS (the Sea-viewing Wide Field-of-view Sensor, from 1997 to present), OCTS, and MODIS (the Moderate Resolution Imaging Spectroradiometer, from 1999 to present). Among them, SeaWiFS is designed to monitor the bio-optical properties of the world's oceans. It was launched in September 1997 aboard the OrbView-2 platform as successor to the CZCS instrument. The sensor provides temporal coverage of every 1-2 days with a spatial resolution of a 1-km pixel view at nadir (1-km Local Area Coverage and 4-km global coverage). It has eight spectral bands: 412 nm, 443 nm, 490 nm, 510 nm, 555 nm, 670 nm, 765 nm, and 865 nm. The instrument is able to tilt up to 20 degrees from nadir to avoid sun glint from the sea surface, thereby simplifying the ground processing and ensuring that the SeaWiFS calibration, polarization and angular scanning characteristics are identical for all tilt positions. Improvements of SeaWiFS over CZCS include more bands, a higher signal-to-noise ratio, improved atmospheric correction capabilities, and the construction of more reliable bio-optical algorithms to derive chlorophyll-a concentration. After more than 4.5 years of operational use, SeaWiFS data have provided unprecedented depictions of the ocean biosphere and biogeochemical processes (Siegel, 2002). The SeaWiFS homepage, containing more information about the instrument and datasets, can be found at http://seawifs.gsfc.nasa.gov/SEAWIFS.html.


Due to its superior performance and long operational period, we use SeaWiFS measurements in this study.

1.1 Review of Prior Work

In recent years, a variety of methods have been proposed for the automatic classification of red tides by processing satellite imagery. Stumpf (1997) proposed a simple method to detect red tides on the West Florida Shelf using only the chlorophyll concentration; that is, abnormally high chlorophyll concentration corresponds to red tides. However, several limitations of this method have been identified. First, of the several organisms causing HABs in different locations of the US, only K. brevis (the main organism that causes red tides in the Gulf of Mexico) tends to produce a major portion of the chlorophyll concentration, so this method needs a complicated regional (GOM) algorithm to re-process chlorophyll measurements, which limits its application to the GOM only. Second, for those blooms in the GOM that are not caused by K. brevis, this method still needs additional spectral information on optical characteristics for discrimination; otherwise, false positive signals will be generated frequently. Third, recent research found that low-chlorophyll red tides also occur in the GOM, which cannot be detected by this method (Stumpf, 1997).

A more conventional approach has focused on using bio-optical properties of the ocean color for detecting and characterizing harmful algal blooms (Carder et al., 1985, Millie et al., 1997, Cullen et al., 1997 and Kahru and Mitchell, 1998), for example, absorption signatures, reflectance, backscatter coefficients or certain spectral ratios. Although it is theoretically possible to find the unique pigments of K. brevis (or other


organisms that cause red tides) and its absorption characteristics using a few bands, in practice they may not be sufficient to distinguish red tides from diatoms, CDOM (colored dissolved organic matter) or sediments because of the optically complex situation in coastal water. Meanwhile, much valuable information in the other bands of multispectral satellite data is lost in this method. Moreover, the true data (field-observed red tide optical data) are very limited, which hampers this approach and makes it not robust.

Fuzzy set classification is another classification method, with a different perspective from the first two. It can easily expand to multiple bands (using information from all bands if necessary) and also takes into account the fuzzy characteristics of ocean color in the real world, thus providing a useful technique to automatically separate various types of environmental conditions. This method has been applied to multi-spectral medical images (Hall et al., 1992, Li et al., 1993, and Clark et al., 1994) and remote sensing images. For example, Zhang et al., (2000) proposed a fuzzy classification method for multi-spectral CZCS imagery. However, due to the limited red tide ground truth collected at that time and the fewer bands on CZCS, only simple rules based on the projections of the cluster centroids on a certain feature space were generated to label the red tide class, which oversimplified the labeling process.

In the last ten years, artificial neural network (NN) classifiers have begun to be adopted for remotely sensed dataset classification and have become increasingly common. Examples of applications include global land cover classifications on an AVHRR dataset by Gopal et al., (1999), on MODIS-like data by Borak et al., (1999) and by Muchoney et al., (2000), cloud classifications using GOES 8 data by Tian et al., (1999), and snow cover


classification by Simpson and McIntire (2001). In nearly all cases, the NN classifiers have proven superior to conventional classifiers (like decision trees or maximum likelihood classifiers) and show great promise with good performance in general.

Multilayer NNs provide a way to analyze satellite data because they easily accommodate multidimensional feature spaces and create nonlinear boundaries in feature space. Their architecture is quite flexible and can be easily modified to optimize performance (Gopal et al., 1999). The primary advantages of using NNs for classification concern processing structure, fault tolerance and statistical flexibility. In terms of processing, NNs are inherently parallel, and parallel processing is faster than the serial structures used in most classifiers. Secondly, neural networks tend to be more robust with respect to missing or noisy information. Finally, neural networks are not bound by statistical assumptions; they are completely nonparametric. One limitation is that, during the training process, each known pixel with its value(s) and type is fed individually into the NN and processed without considering the spatial neighborhood context. However, there is rich contextual information in images, i.e., classes are likely to span a region instead of appearing in isolated pixels. Proper utilization of such contextual information can help to improve the final classification accuracy (Tian et al., 1999).

1.2 Objective

The purpose of this study is to combine the advantages of both fuzzy clustering and neural networks to set up an accurate and efficient automatic classification system to identify red tides from satellite imagery.


The system, overcoming the limitations of the methods mentioned above, consists of two main components: an unsupervised clustering algorithm FCM and a neural network classifier. A fuzzy clustering algorithm, which can offer important domain knowledge and semantic meanings, was used to obtain the initial segmentation of satellite images. The cluster centroids are the input fed to the NN classifier. Instead of the fuzzy c-means (FCM) clustering algorithm, the bit-reduction FCM (brFCM, Ke, 1999), which significantly reduces the computation time by neglecting the last few insignificant bits (see Chapter 3 for details), was used. Texture information was also computed and combined into the feature domain for clustering. Then a "quickprop" neural network (Fahlman, 1988) was trained using the centroid information from the ground-truthed images, which incorporates spatial context into the NN to label each cluster. Testing was done on validation images for the recognition of red tides.

1.3 Research Region and Data

1.3.1 Research Region

The west Florida shelf is chosen as the study area because it was covered extensively by SeaWiFS with its best spatial resolution (see 3.1.2 for details), and also because a substantial number of ground truth images were available during the same period of time (1998 to present) for this region.

The study area extends from the Big Bend in the north to the Florida Keys in the south, bounded by 24.5 to 30.5 degrees north latitude and 86.5 to 80.0 degrees west longitude (Figure 1). The small area from Tampa Bay to Charlotte Harbor, shown in the white box in Figure 1, is where most of the ground truth data was obtained. Satellite data from this small area


was used to build and train the neural network. Once a satisfactory NN was developed, it was applied to the larger area to get the red tide distribution over the entire west Florida shelf.

Figure 1. Study Area: West Florida Shelf. Satellite data in the small area (from Tampa Bay to Charlotte Harbor), along with the ground truth in this area, are used to develop a neural network.

1.3.2 In-Situ Data

The Gulf of Mexico (GOM) has frequent red tide outbreaks. In the GOM, the causative agent is identified as the dinoflagellate K. brevis. During red tides, population levels of >1×10⁵ cells L⁻¹ occur, compared to background abundances of ≤1×10³ cells L⁻¹ in nature.


In order to get reliable field information, a multi-year scientific project called Ecohab-Florida (Ecology and Oceanography of Harmful Algal Blooms), sponsored by NOAA/EPA, has been carried out since June 1998 (Vargo, 1999). Water samples to measure K. brevis concentration, along with other biological and physical parameters, are collected on monthly sampling cruises. Figure 2 shows the route and sampling stations for the Ecohab cruises, along with the contour plot of K. brevis cell counts collected in November of 2001 by R/V Suncoaster. Considerable K. brevis cells were measured south of Tampa Bay in the middle of November.

Figure 2. EcoHAB Cruise Sample Stations and K. brevis Cell Count Contour in November, 2001.

The Ecohab-Florida project provides us with well-characterized validation and training sets that are used to build a classification system and validate the output product by testing the classifier.


1.3.3 SeaWiFS Satellite Data

In this study, the satellite data used to build the classification system are SeaWiFS data from the period 1998-2001, the same period as the Ecohab-Florida project. As one of the NASA High Resolution Picture Transmission ground stations in the US, the Institute for Marine Remote Sensing at the University of South Florida receives real-time data from the SeaWiFS satellite, covering the Caribbean Sea and Gulf of Mexico (see Figure 3) with high spatial resolution (1-km Local Area Coverage). The daily data set consists of water-leaving radiance at 412 nm, 443 nm, 490 nm, 510 nm, 555 nm, 670 nm, 765 nm, and 865 nm and the derived chlorophyll-a concentration based on O'Reilly's algorithm (O'Reilly et al., 1998). The data from the first 6 bands and the chlorophyll-a data (see more detail in 3.1.2) are used. Each band/chlorophyll matrix is treated as an intensity image, with each pixel in the water having 7 associated features. The value of each feature is saved in 16 bits. An example of SeaWiFS Level 3 data received at bands 412, 443, 490, 510, 555 and 670 nm and the derived chl-a data for the west Florida shelf on October 2, 2001 is shown in Figure 4. Pixels of land are colored black in the figure, and the cloud pixels and missing data are in gray. Pixels of valid data are shown in different colors according to their geophysical values. The image size is 600×650.
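The per-pixel feature layout described above (six radiance bands plus the derived chlorophyll-a, each a 600×650 16-bit intensity image) can be illustrated with a small NumPy sketch; the random placeholder values and variable names here are ours, not the actual SeaWiFS processing chain:

```python
import numpy as np

# Placeholder sketch: each band/chlorophyll matrix is a 600x650 16-bit
# intensity image; stacking them gives every pixel a 7-element feature vector.
rows, cols = 600, 650
rng = np.random.default_rng(0)
bands = [rng.integers(0, 2**16, (rows, cols), dtype=np.uint16)
         for _ in range(6)]                    # radiances at 412..670 nm
chl_a = rng.integers(0, 2**16, (rows, cols), dtype=np.uint16)

features = np.dstack(bands + [chl_a])          # shape (600, 650, 7)
pixel_vector = features[100, 200]              # 7 features of one water pixel
print(features.shape, pixel_vector.shape)
```

In this layout, a clustering algorithm can treat each water pixel as one 7-dimensional feature vector.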


Figure 3. Satellite Coverage at USF (chl-a image: S2001237.png)


Figure 4. 7 Features from SeaWiFS on Oct. 2, 2001. (a) band 412 nm (b) band 443 nm (c) band 490 nm (d) band 510 nm (e) band 555 nm (f) band 670 nm (g) derived chl-a


1.4 Outline

The remaining four chapters are organized as follows: Chapter 2 describes the algorithms, including the fuzzy c-means (FCM) algorithm, the brFCM enhancement, the texture operation, and the neural network method used in this work. Chapter 3 introduces the knowledge base and dataset, and then presents the experimental results from FCM, brFCM and texture operations. Chapter 4 discusses the training, validating and testing results from a neural network. Finally, Chapter 5 provides a summary and discussion.


CHAPTER 2
METHODOLOGY

2.1 Overall Structure of the System

In this study, the construction of the automatic classification system is divided into three major stages: (1) expert knowledge acquisition; (2) image segmentation by FCM (brFCM); and (3) cluster labeling by training and validating a neural network.

2.2 Expert Knowledge Acquisition

Generally, expert knowledge acquisition consists of image collection and image ground truthing. First, a series of images is chosen by an expert, mainly based on the availability of ground truth and the clarity of the key features he/she uses to recognize the objects. These chosen images are later segmented by the unsupervised clustering algorithm (see Section 2.3 for details of the method and Section 3.1 for details of this application). Then the ground-truthed images are used for training and testing a classifier and for labeling each cluster as a certain type of interest. The image ground truthing works as follows:

1. The expert writes down a list of all the objects he/she is interested in. Based on knowledge and experience, the expert picks one or a few feature images from the multi-feature satellite images for ground truthing consideration.


2. By displaying this feature image or a composite of feature images, along with the ground truth if available and reliable, the expert outlines the objects on the image.

3. Other unground-truthed feature images of the same multi-feature image are registered to the ground-truthed feature image.

2.3 Segmentation

Segmentation refers to clustering an image at the signal level into meaningful image regions. It can be realized by using image-processing techniques such as thresholding, edge detection, texture segmentation and fuzzy clustering.

The problem caused by mixed pixels in image classification is well known in remote sensing. The instantaneous field of view (IFOV) of a sensor records the reflected radiance from heterogeneous mixtures of materials such as pigments, water, and suspended sediments. Thus, for areas where spatial boundaries between phenomena are diffuse, classification methods that construct mutually exclusive clusters seem inappropriate. Fuzzy set classification, which takes into account the heterogeneous and imprecise nature of the real world, is used in this study. Satellite images are clustered to generate fuzzy clusters and form a feature space of the cluster centroids before a classifier (a neural network in this study) is applied.

2.3.1 Fuzzy C-Means Segmentation (FCM)

FCM (Bezdek, 1981 and Hall et al., 1992) has been widely used in many applications. Wang (1990) described a supervised fuzzy classification method to determine land-use classes in Landsat MSS imagery. Li et al., (1992, 1993) proposed an automatic classification for tissue labeling of MR images of the human brain. Zhang (1998) applied FCM classification to CZCS satellite images for red tide detection on the west


Florida Shelf, and Yao (2000) applied FCM to SeaWiFS images for "green river" classification.

FCM works as follows. Consider a set of n vectors X = {x_1, x_2, ..., x_n} to be clustered into c groups of like data. Each vector x_i ∈ R^s consists of s features, which are real-valued measurements describing the object represented by x_i. The features could be length, width, color, etc.

Fuzzy clusters of the objects can be represented by a fuzzy membership matrix called a fuzzy partition. The set of all c × n non-degenerate constrained fuzzy partition matrices is denoted by M_fcm and defined as:

M_fcm = { U ∈ R^{c×n} | Σ_{i=1}^{c} u_{ik} = 1, 0 < Σ_{k=1}^{n} u_{ik} < n, and u_{ik} ∈ [0,1] }   (2.1)

The clustering criterion used to define good clusters for fuzzy c-means partitions is the FCM function:

J_m(U,V) = Σ_{i=1}^{c} Σ_{k=1}^{n} (u_{ik})^m D_{ik}(v_i, x_k)   (2.2)

where U ∈ M_fcm is the fuzzy partition matrix; m ∈ [1, ∞) is the weighting exponent on each fuzzy membership (the larger m is, the fuzzier the partition); V = [v_1, ..., v_c] is a matrix of prototype parameters (cluster centers), v_i ∈ R^s; and D_{ik}(v_i, x_k) is a measure of the distance from x_k to the ith cluster prototype. The Euclidean distance metric (Bezdek, 1981) is used for all FCM results reported here.

Good cluster structure in X is again taken as a (U,V) minimizer of (2.2). Typically, optimal (U,V) pairs are sought using an alternating optimization scheme of the type generally described in (Bezdek, 1981). The membership matrix U and


the cluster centers V are calculated as follows (also illustrated in the flow chart in Figure 5):

Step 1. Initialize V randomly.

Step 2. Choose the stopping condition ε > 0.

Step 3. Compute U:

u_{ik} = 1 / Σ_{j=1}^{c} ( D_{ik} / D_{jk} )^{2/(m-1)}, when D_{jk} > 0 for all j   (2.3)

If D_{jk} = 0 for some j, set u_{jk} = 1 and u_{ik} = 0 for i ≠ j.   (2.4)

Step 4. Compute V:

v_i = Σ_{k=1}^{n} (u_{ik})^m x_k / Σ_{k=1}^{n} (u_{ik})^m   (2.5)

Step 5. If ΔU < ε, then stop; otherwise, set b = b + 1, go to Step 3 to update U and the cluster centers V, and repeat the above process until the stopping criterion is satisfied, where

ΔU = Σ_{i=1}^{c} Σ_{k=1}^{n} || u_{ik}^{(b)} - u_{ik}^{(b-1)} ||   (2.6)

Step 6. Assign each pixel to the class with the maximum membership.
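As an illustration, Steps 1-6 can be written as a compact NumPy sketch. This is a generic FCM, not the thesis's implementation; the toy data and the choices of c, m and ε are ours:

```python
import numpy as np

def fcm(X, c, m=2.0, eps=1e-4, max_iter=100, seed=0):
    """Minimal fuzzy c-means following Steps 1-6 above (Euclidean distance)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    V = X[rng.choice(n, size=c, replace=False)]          # Step 1: random centers
    U_old = np.zeros((c, n))
    for _ in range(max_iter):
        # Step 3: distances D_ik and membership update (eq. 2.3)
        D = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2)
        D = np.fmax(D, 1e-12)                            # crude guard for eq. 2.4
        W = D ** (-2.0 / (m - 1.0))
        U = W / W.sum(axis=0)
        # Step 4: weighted-mean center update (eq. 2.5)
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)
        # Step 5: stopping criterion Delta-U (eq. 2.6)
        if np.abs(U - U_old).sum() < eps:
            break
        U_old = U
    return U, V

# Toy data: two well-separated 2-D blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)), rng.normal(5.0, 0.5, (50, 2))])
U, V = fcm(X, c=2)
labels = U.argmax(axis=0)        # Step 6: harden memberships
```

Each column of U sums to 1 as required by equation (2.1), and hardening with argmax implements Step 6.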


Figure 5. Flow Chart of FCM.

2.3.2 Validity Measure of FCM

Given a data set, different fuzzy clustering algorithms, or the same algorithm with different values of its parameters, are likely to produce different groupings of the data. In practice, a partition generated by a clustering algorithm does not always correspond to the actual underlying classes. Consequently, there is a need for procedures which can evaluate algorithmically generated partitions in a quantitative and objective way, and thereby help to improve the quality of image segmentation.

Cluster validity is concerned with the quality of clustering results. Cluster properties such as compactness and separation are often used for validity measures that are based on the data. In this study, we adopt a validity measure that accounts for both


properties of the fuzzy memberships and the structure of the data (Xie and Beni, 1980). The compactness of a given fuzzy c-partition of a data set is defined as:

Compactness = ( Σ_{i=1}^{c} Σ_{j=1}^{n} u_{ij}^m || x_j - v_i ||^2 ) / n   (2.7)

The parameter c is the number of clusters the algorithm partitions the data set into, and m is defined in the previous section. Compactness refers to the variation or spread manifested by the elements that belong to the same cluster; it measures how compact each cluster is. The more compact the clusters are, the smaller the compactness is. The compactness is a function of the distribution characteristics of the data set itself, and more importantly a function of how we divide the data set into clusters.

The separation of the fuzzy c-partition is defined as the minimum distance between cluster centroids:

Separation = min_{i ≠ j} || v_i - v_j ||^2   (2.8)

Separation represents the isolation of clusters from one another. The compactness and separation validity function S is thus defined as the ratio of compactness to separation:

S = Compactness / Separation   (2.9)

The more separate the clusters are, the larger the Separation and the smaller the Compactness are, and the smaller S is. However, as mentioned in Xie and Beni (1980), S monotonically decreases as the number of clusters c is increased. Bensaid et al., (1996) introduced the partition index SC in their validity-guided clustering algorithm:


SC = Σ_{i=1}^{c} [ ( Σ_{j=1}^{n} u_{ij}^m || x_j - v_i ||^2 ) / ( n_i Σ_{k=1}^{c} || v_k - v_i ||^2 ) ]   (2.10)

where n_i is the total number of data points partitioned into cluster C_i. Different from the cluster validity S defined in equation (2.9), which compares partitions that move elements between clusters, the partition index SC defined in equation (2.10) is designed to evaluate the relative merits of different partitions that divide a data set into the same number of clusters.

2.3.3 Bit-reduction FCM (brFCM)

Due to the iterative nature and the large number of image pixels involved in the calculations, the FCM algorithm discussed above is a computationally intensive process (Hall et al., 1992). Recently, bit-reduction fuzzy c-means (brFCM) was presented to speed up FCM (Ke et al., 1999, 2000). In their studies, experiments were carried out on 32 magnetic resonance images (MRIs), and results from brFCM and FCM were compared to the ground truth images. Statistical hypothesis testing shows that the discrepancies between the results from brFCM and the ground truth images are not greater than the discrepancies between the results from FCM and the ground truth images. Therefore, brFCM preserves the quality of the final partitions and the convergence property of FCM, with an average speedup of 11.904.

Since FCM is designed to group similar feature vectors, by reducing some of the least important bits we can anticipate getting many identical feature vectors. Identical feature vectors will partially belong to the same clusters; therefore, they share identical memberships in all the clusters.
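The validity measure of Section 2.3.2, equations (2.7)-(2.9), can be sketched as follows. This is a generic implementation in the notation above, with toy data of our own; a small S indicates compact, well-separated clusters:

```python
import numpy as np

def validity_s(U, V, X, m=2.0):
    """Compactness / Separation ratio S, following eqs. (2.7)-(2.9)."""
    n = X.shape[0]
    d2 = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) ** 2  # (c, n)
    compactness = np.sum((U ** m) * d2) / n                          # eq. 2.7
    vdiff2 = np.sum((V[:, None, :] - V[None, :, :]) ** 2, axis=2)
    np.fill_diagonal(vdiff2, np.inf)                                 # i != j only
    separation = vdiff2.min()                                        # eq. 2.8
    return compactness / separation                                  # eq. 2.9

# Two tight clusters, near-hard memberships, well-placed centers:
X = np.array([[0.0, 0.0], [0.0, 0.1], [8.0, 8.0], [8.0, 8.1]])
U = np.array([[1.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 1.0]])
V = np.array([[0.0, 0.05], [8.0, 8.05]])
print(validity_s(U, V, X))    # small value for a good partition
```

Moving both centers close together (poor separation, poor compactness) makes S grow, matching the discussion above.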


In this study, the same brFCM algorithm (Ke, 1999) is applied to the SeaWiFS satellite images to test if brFCM is effective for satellite image segmentation. brFCM works as follows:

Stage I: Bit Reduction and Weight Calculation

Step 1. For each feature vector x_i in X = {x_1, x_2, ..., x_n}, construct a bin structure Bin[i]. The bin structure Bin[.] includes a long integer w for the weight of the bin (initialized to 1), a long integer pointer p (pointing to dynamically allocated memory for the indices of feature vectors in the bin), and an s-dimensional vector f to store the feature values for a bin.

Step 2. Read in the data set X = {x_1, x_2, ..., x_n} and reduce the r lowest bits of each feature value, yielding reduced vectors y_i. For i from 1 to n, set Bin[i].f = y_i.

Step 3. There are two methods to update the weight w:
3a. Simple looping: search forward to calculate the weights Bin[i].w.
3b. Hash table.

Step 4. After the last step, patterns with w ≥ 1 are the only patterns left. A pattern with w > 1 is representative of one or more original patterns. The number of remaining patterns is denoted by n_0.

Stage II: Similar to FCM, Clustering Utilizing Bins with Weights

Step 1. Initialize V randomly.

Step 2. Choose the stopping condition ε > 0.

Step 3. Compute U by equation (2.4). Compute

|| U - U' || = Σ_{i=1}^{c} Σ_{k=1}^{n_0} w_k ( u_{ik} - u'_{ik} )^2   (2.11)


If || U - U' || ≤ ε, stop and go to Step 6; otherwise, go to Step 4.

Step 4. Compute V by equation (2.12):

v_i = Σ_{k=1}^{n_0} w_k (u_{ik})^m y_k / Σ_{k=1}^{n_0} w_k (u_{ik})^m   (2.12)

Step 5. Distribute the representative u_j to all the examples in the bin, utilizing the indices stored in the bin structure. Copy U to U'. Go to Step 3.

Step 6. Harden the U matrix and output the clustering result.

The running time of finding V from U is O(c·s·n_0), and the running time of finding U from V is O(c²·s·n_0). In general, n_0 is far smaller than n; therefore, brFCM is time-efficient compared to FCM.

2.3.4 Texture Operation

A good definition of texture is: repetition of a pattern or patterns over a region. The goal of a texture operation is to improve image segmentation (Parker, 1997). Simple texture operations like moments, skewness and kurtosis can help solve some segmentation problems which "thresholding" or "region growing" cannot solve. For example, in Figure 6(a), no threshold can recognize the two different areas with different textures. But after a texture operation (mean grey level), a reasonable threshold will separate the area on the left of the image from the area on the right.
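The mean-grey-level texture operation just described can be sketched as follows. The two synthetic toy textures and the threshold value are ours, and window edges are simply cropped:

```python
import numpy as np

# Toy image: left half is a 0/255 checkerboard, right half a flat grey of 200.
# No single threshold on the raw image separates the halves, because the
# checkerboard contains values both above and below any cut.
img = np.zeros((32, 32))
img[:, :16] = (np.indices((32, 16)).sum(axis=0) % 2) * 255.0   # checkerboard
img[:, 16:] = 200.0

# Texture operation: mean grey level in a 3x3 running window.
windows = np.lib.stride_tricks.sliding_window_view(img, (3, 3))
mean_img = windows.mean(axis=(2, 3))       # shape (30, 30)

# One threshold on the texture image now separates the regions:
# checkerboard windows average near 128, flat windows average 200.
mask = mean_img > 160
```

This mirrors the Figure 6 idea: the threshold acts on a local texture statistic rather than on raw grey levels.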


(a) (b) (c)

Figure 6. Example of Texture Operation. (a) An example of an image displaying regions characterized by 2 textures. (b) Image consisting of the mean grey level in a 3x3 running window over all pixels in (a). (c) A thresholding of this image showing two possible regions with different textures.

In order to extract more texture information from the original image, a grey level co-occurrence matrix (GLCM) is generally used and is also applied in this study. A GLCM contains information about the positions of pixels having similar grey level values. It scans the image and keeps track of how often pixels that differ by Δz in value are separated by a fixed distance d in position (see the examples in Parker, 1997). Usually four directions between the two pixels are considered: horizontal, vertical and the two diagonals. But this is really too much data, often more than in the original image. What is usually done is to analyze these matrices and compute a few simple numerical values that encapsulate the information. These values are called descriptors, and 6 of them (mean, standard deviation, contrast, homogeneity, energy and entropy) are examined here.


Mean: μ = Σ_i Σ_j i · M[i,j]

Standard deviation: σ = [ Σ_i Σ_j (i − μ)² M[i,j] ]^(1/2)

Contrast: C = Σ_i Σ_j |i − j|^k · M[i,j]^n (k and n are small integer parameters; k = 2, n = 1 gives the common form)

Homogeneity: G = Σ_i Σ_j M[i,j] / (1 + |i − j|)

Energy: E = Σ_i Σ_j M[i,j]²

Entropy: Ep = −Σ_i Σ_j M[i,j] log(M[i,j])

In these formulas, M[i,j] contains the number of pixel pairs (P1, P2) in the original image for which P1 = i and P2 = j, where P1 and P2 are separated by d pixels in a certain direction.

2.4 Cluster Mapping

After an image is clustered, the centroid of each cluster in that image, specifically its type and feature values, will be put into the training set of a neural network for NN learning. To label a cluster centroid, each pixel in that cluster is registered against the ground truth in the following way: a pixel P_xy in a cluster is labeled as object O_i if it is mapped to a pixel belonging to object O_i in the ground truth image. The coordinates (x, y) of the pixel are used for the mapping. A cluster is identified as an expected part of an object if the majority of the pixels (90% in this study) in the cluster belong to a unique object in the ground truth image.
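The 90% majority rule of section 2.4 can be sketched as follows. The function name, the dict-based ground-truth layout, and the toy data are hypothetical illustrations, not the thesis implementation:

```python
def label_cluster(cluster_pixels, ground_truth, majority=0.9):
    """Map a cluster to a ground-truth object when at least `majority` of its
    pixels fall on a single object; otherwise leave it unlabeled (None)."""
    counts = {}
    for (x, y) in cluster_pixels:
        obj = ground_truth.get((x, y))
        if obj is not None:
            counts[obj] = counts.get(obj, 0) + 1
    total = sum(counts.values())
    if not total:
        return None
    obj, n = max(counts.items(), key=lambda kv: kv[1])
    return obj if n / total >= majority else None

# toy ground truth: 10 pixels, 9 on "red tide", 1 on "case I"
gt = {(x, 0): "red tide" for x in range(9)}
gt[(9, 0)] = "case I"
label = label_cluster([(x, 0) for x in range(10)], gt)  # 9/10 = 90% meets the rule
```

A cluster split evenly between two objects stays unlabeled, which matches the intent of requiring a unique majority object.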


2.5 Neural Network and Quickprop Method

Neural networks were originally developed to model the functioning of the human brain. A neural network derives its computing power from its massively parallel, distributed structure and its ability to learn and therefore generalize. A complete introduction to the different types of neural networks and their applications can be found in Beale and Jackson (1990), Dayhoff (1990), Mitchell (1997) and Haykin (2001). The following is a general description based on the above references.

A neural network consists of a system of interconnected nodes (see Figure 7(a)). The first layer of nodes (the input layer) brings the information to be processed into the network. Nodes in subsequent layers are called neurons, as they perform a neuron-like function. A neuron with n inputs is represented graphically in Figure 7(b). The signals received by neuron j are first weighted by the weights w_nj associated with each input/node link and routed through the summation function. Then the sum is adjusted by the corresponding bias b_j and passed through a nonlinear activation function, which maps the output of neuron j to a normalized value. Generally, the activation function belongs to the sigmoid family of functions F(x), which map data from the set of reals R to an interval [a, b] such that the mapping is approximately linear in the middle of the interval and highly nonlinear elsewhere.
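The neuron computation just described can be sketched directly. The logistic sigmoid used here is one common member of the sigmoid family; the input and weight values are arbitrary illustrations:

```python
import math

def neuron(inputs, weights, bias):
    """One neuron j: weighted sum of the inputs plus bias b_j, squashed by a
    logistic sigmoid that maps the reals to the interval (0, 1)."""
    net = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-net))

y = neuron([0.2, -0.5, 0.1], [0.4, 0.3, -0.6], bias=0.05)  # output lies in (0, 1)
```

With a zero net input the sigmoid returns exactly 0.5, the midpoint of its approximately linear region.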


(b)

Figure 7. Neural Network Structure. (a) A feed-forward network with one hidden layer. (b) A single neuron j, whose weighted inputs are summed with the bias b_j and passed through the activation function.

The most popular approach in neural networks is feed-forward networks trained with the backpropagation algorithm. Figure 7(a) shows a network with one hidden layer of neurons. In the backpropagation learning algorithm, training patterns (for example, pixels with their multiple features) are presented one at a time. For each training pattern, the unit activations propagate forward through the network, ultimately producing a set of activations on the output units. These actual outputs are compared to the desired outputs for that training pattern. If the network computes an output vector that matches the target, the weights are not changed. If there is an error (a difference between the output and the target), it is propagated back through each layer and the weights w_ij are adjusted to


reduce this error. Typically, a feed-forward neural network (FFNN) is trained using gradient descent techniques to minimize an error function (often the root-mean-square difference between the desired output specified for the training data and the network output node values computed from the input training data). Gradient descent techniques compute the derivative of the error function to approximate the direction of the minimum along the decision surface. The forward/backward cycle for a single training case is called a presentation. For problems that have a finite, reasonably small set of training patterns, it is typical to present them all, one after another, and do a weight update after all have been presented. This cycle, containing a single presentation of each training pattern, is called an epoch (Fahlman, 1988).

However, the backpropagation learning algorithm is too slow for many applications, and it scales up poorly as real-world tasks become larger and more complex. Fahlman made several modifications to the original backpropagation algorithm and introduced "quickprop" by (1) adding a constant 0.1 to the value of the sigmoid derivative before the value is used to scale the back-propagated error; (2) replacing the difference computed at each output node by the hyperbolic arctangent (atanh) of that difference; (3) changing the adjustment of the weights: while the backpropagation algorithm calculates only the partial first derivative of the overall error with respect to each weight, quickprop uses second-order information for the weight update.
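A simplified sketch of Fahlman's second-order weight update is given below. It omits the 0.1 sigmoid-derivative offset and the atanh error transform mentioned above, and the `epsilon` and `mu` values are conventional quickprop parameters, not values from the thesis:

```python
def quickprop_update(slope, prev_slope, prev_step, epsilon=0.5, mu=1.75):
    """One quickprop weight step: fit a parabola through the current and
    previous error slopes and jump toward its minimum. Falls back to plain
    gradient descent on the first step and caps step growth at mu."""
    if prev_step == 0.0:
        return -epsilon * slope                 # bootstrap with gradient descent
    step = prev_step * slope / (prev_slope - slope)
    if abs(step) > mu * abs(prev_step):         # limit runaway steps
        step = mu * abs(prev_step) * (1 if step > 0 else -1)
    return step

# on the quadratic error E(w) = w^2 (slope 2w) the parabola fit is exact,
# so the minimum at w = 0 is reached in very few steps
w, prev_slope, prev_step = 3.0, 0.0, 0.0
for _ in range(2):
    slope = 2.0 * w
    step = quickprop_update(slope, prev_slope, prev_step)
    w += step
    prev_slope, prev_step = slope, step
```

For a truly quadratic error surface the second step is zero because the first already lands on the minimum, which is the intuition behind quickprop's speed on backprop-style error surfaces.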


CHAPTER 3
SEGMENTATION OF SEAWIFS IMAGES

3.1 Preparations for the Segmentation

3.1.1 Satellite Imagery

When visible light from the sun illuminates the ocean surface, it is subject to several optical effects, foremost among them reflection and absorption. Reflection beneath the water surface is generally inefficient, returning only a small percentage of the light intensity falling on the ocean surface. Absorption selectively removes some wavelengths of light while allowing the transmission of others. In the ocean, light reflects off particulate matter suspended in the water, and light absorption is primarily due to the photosynthetic pigments (chlorophyll) present in phytoplankton. These optical interactions produce a modified emergent flux from the ocean surface, the so-called "water-leaving radiance". SeaWiFS, as a spectroradiometer, measures radiance in specific bands of the visible light spectrum to study the optically significant components of the water body.

As one of the NASA HRPT receiving stations in the US, the University of South Florida receives SeaWiFS real-time raw data (level-0), which covers the Caribbean Sea and the Gulf of Mexico with high spatial resolution (1-km local area coverage, LAC). Preprocessing of SeaWiFS images consists of sensor calibration, atmospheric correction and noise reduction (Hu et al., 2000). The SeaDAS software (SeaWiFS Data


Analysis System) is used to generate level-2 products (derived water-leaving radiance and chlorophyll estimates) and level-3 products (remapped data). Standard NASA procedures are incorporated into this data-processing package, along with the latest implementation of the systematic and bio-optical algorithms based on McClain et al. (1995) and O'Reilly et al. (1998). All of the data products from SeaWiFS are stored in the Hierarchical Data Format (HDF), developed by the National Center for Supercomputing Applications (NCSA) at the University of Illinois. For a more detailed technical description, refer to the SeaWiFS Technical Report Series, Volume 1, "An Overview of SeaWiFS and Ocean Color", or the online description at http://seawifs.gsfc.nasa.gov/SEAWIFS/SEASTAR/SPACECRAFT.html.

Each level-3 SeaWiFS image consists of 8 useful bands: 412 nm, 443 nm, 490 nm, 510 nm, 555 nm, 670 nm, 765 nm, and 865 nm. The last two bands are mainly used for atmospheric correction and noise reduction. The water-leaving radiance measured by the first 6 bands and the derived chlorophyll (McClain et al., 1995) are used in this study as the 7 features for each pixel to be classified. In such a multispectral image, each band is treated as an intensity image and mapped to our study area as 600x650 pixels, with 16-bit floating point data for each pixel. Land pixels and cloud pixels are pre-excluded from further processing.

Based on the histogram of each band, the chlorophyll-a concentration is less than 20 mg m^-3 for most of the pixels, and the water-leaving radiances of bands 412, 443, 490, 510, 555 and 670 nm are less than 3.0 mW cm^-2 µm^-1 sr^-1 for most of the pixels. Values beyond these ranges are considered unrealistic. To better focus on the values falling in these reasonable ranges, and to magnify the differences among the values within them,


two conversion techniques, linear and nonlinear, are applied to convert the floating point data to 8 bits before clustering the SeaWiFS images with the FCM algorithm. Chlorophyll-a concentrations higher than 0 and lower than 20.0 mg m^-3 are nonlinearly stretched and scaled from 1 to 250 (converted value = log(1 + chlorophyll)/0.00519) and saved as 8-bit data. (The reason for the nonlinear conversion is that chlorophyll concentration is distributed log-normally in nature.) Water-leaving radiance values higher than 0 and lower than 3.0 mW cm^-2 µm^-1 sr^-1 for the other 6 bands are linearly stretched from 1 to 250 and saved as 8-bit data. Since the FCM algorithm is designed to group similar feature vectors, using the 8-bit data type instead of the float data type sufficiently preserves the precision of the original data for classification (marine biologists generally use 8-bit data for scientific research). Meanwhile, the linear/nonlinear stretch efficiently increases the differences between values within the reasonable data range, improving the classification result (Yao, 1999). Moreover, it reduces the computation time for clustering the images with the FCM algorithm.

3.1.2 Image Collection and Knowledge Base Acquisition

During the operational period of the ECOHAB project, among fifty-two 3-4 day monthly cruises from 1998 to 2001, biologists found 10 red tide events, which mainly occurred during the fall and winter of each year. Based on this ground truth, 40 SeaWiFS images from 1998 through 2001 were picked out by an oceanography expert who has more than ten years of experience in image analysis and remote sensing. These images, with good coverage of our study area, are concurrent with the ground truth when red tides were found and will be used for training and testing the classification system.
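The 8-bit stretches of section 3.1.1 can be sketched as below. The base-10 logarithm is inferred from the 0.00519 scale factor (log10(1 + 20)/0.00519 ≈ 250), and clipping out-of-range values to [1, 250] is my assumption, as is the exact form of the linear endpoint mapping:

```python
import math

def convert_chl(chl):
    """Nonlinear 8-bit stretch for chlorophyll-a (mg m^-3):
    converted value = log10(1 + chl) / 0.00519, clipped to [1, 250]."""
    v = math.log10(1.0 + chl) / 0.00519
    return min(max(v, 1.0), 250.0)

def convert_radiance(lw):
    """Linear 8-bit stretch mapping water-leaving radiance in (0, 3.0)
    mW cm^-2 um^-1 sr^-1 onto [1, 250] (endpoint handling assumed)."""
    v = 1.0 + (lw / 3.0) * 249.0
    return min(max(v, 1.0), 250.0)
```

The log stretch expands the crowded low-chlorophyll range while compressing rare high values, matching the log-normal distribution noted above.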


In this study, the knowledge base refers to the various types of water in our study area, which are expected to be identified as different types of clusters in a satellite image. Morel and Prieur (1977) optically classified seawater according to its constituents: seawater is basically divided into case I and case II water according to its optical properties. Waters in which phytoplankton and their covarying detrital material play the dominant role in determining the optical properties are called case I waters. Waters in which inorganic suspended material (such as what might be resuspended from the bottom in shallow areas) plays an important role, or in which detrital material, DOM, or both are uncorrelated with chl-a, are referred to as case II waters. Most open ocean waters are case I water, while for coastal water it is often hard to separate case I from case II water, so we use "case II-like water" to represent it. Red tides occur in both case I and case II water. Because of our special interest in red tides in this study, we consider them a separate class even though it can geographically overlap with the others.

3.1.3 Image Ground Truthing

The objects we are interested in are case I water, case II-like water and red tides, which are listed by the oceanography expert for ground truthing. The expert prefers to use ENVI (a remote sensing software package, the Environment for Visualizing Images) to visually inspect an RGB composite image, with band 555 nm as red, 490 nm as green and 443 nm as blue, along with the K. brevis concentrations from ship measurements, to outline the red tide areas on the image with great confidence. To outline case I water, the chl-a image is used and found sufficient, since clear boundaries exist between the open


ocean water, with a blue-green background showing low chl-a concentration, and the coastal water, with much more variation showing high chl-a concentration. All other areas are labeled as optically complex case II water.

3.1.4 Feature Selection

Feature selection is an important stage in pattern recognition. For this water classification problem, the trained expert mainly relies on a composite image of bands 443 nm, 490 nm and 555 nm for red tide identification from satellite images. Therefore our initial clustering experiment takes the same three bands as the three-feature input for FCM. The results are qualitatively compared, by visual evaluation, with the ground truth from ship measurements and with the clustering result using all 7 features. Basically, using 3 features can capture the right location and shape of a red tide patch(es), but yields many false positive points, which affects the learning/labeling process significantly. For example, the segmentation results using 3 features and 7 features for the image of Oct. 1, 2000 are shown in Figures 8(c) and 8(d), respectively. The number of clusters is set to 10 in both cases, with each cluster synthetically colored for visual presentation. Compared with the ground truth (Figure 8(b)), we can see that in Figure 8(c) the segmentation using 3 features does capture the red tide patch as 2 clusters (green and red) north of Charlotte Harbor, but many false positive clusters are also generated offshore of Tampa Bay. Better performance is found in the segmentation results using all 7 features, where the red tide clusters agree very well with the ground truth. All 7 features are therefore used thereafter as the input for our clustering algorithm.


(a) (b) (c) (d)

Figure 8. Feature Selection: 3 Features vs. 7 Features (image of Oct. 1, 2000). (a) Ship observation during October 4-6, 2000; a red tide patch was found north of Charlotte Harbor. (b) RGB composite ground truth image. (c) Segmentation results using 3 features (443 nm, 490 nm and 555 nm), c = 10. (d) Segmentation results using all 7 features, c = 10.

To further test whether any of the 7 bands is more important than the others in terms of representing the red tide phenomenon, FCM is applied to each individual band. Segmentation results for the Oct. 1 image (Figure 9) show mixed clusters present in all bands, suggesting that no particular band is more important than the others.


(a) (b) (c) (d) (e) (f) (g)

Figure 9. Segmentation Results Using 1 Band on Oct. 1, 2000


3.2 Results Using FCM

Using seven features as input to FCM (Figure 4 gives an example of the 7 inputs to FCM), i.e., an n x 7 floating point matrix, where n is the number of valid pixels in each chl-a image and each row vector consists of the 7 features associated with a pixel, we tested different values for c, the number of classes used in the segmentation. By comparing the ground truthed red tide patch(es) with the clusters in the segmentation image at a pixel-by-pixel level, and also considering the validity metric presented in section 2.3.2, ten was chosen as the number of clusters into which a SeaWiFS image is partitioned. Although this differs from the number of objects we are interested in, which is three, this over-clustering approach is generally adopted to reduce the chance that pixels from different classes are combined into one cluster, at the cost of splitting some true types into multiple clusters. The stopping criterion ε is set to 0.0225 and the fuzzy index m is set to 2. Figure 10 shows that the results generated for September 29, 2000 agree well with the ground truth from the ship observation.

(a) (b)

Figure 10. Classification Results Using FCM. (a) Ground truth image. (b) Classification result (c = 10, s = 7, r = 0) on Sept. 29, 2000
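Assembling the n x 7 input matrix from the band images can be sketched as follows. The function name, the boolean validity mask (land and cloud pixels marked False), and the row layout are hypothetical choices consistent with the description above:

```python
def stack_features(bands, mask):
    """Build the n x 7 feature matrix from 7 co-registered band images,
    keeping only valid (non-land, non-cloud) pixels. Returns the matrix
    and the (x, y) coordinate of each row for later cluster mapping."""
    rows, coords = [], []
    h, w = len(mask), len(mask[0])
    for y in range(h):
        for x in range(w):
            if mask[y][x]:                      # True = valid water pixel
                rows.append([b[y][x] for b in bands])
                coords.append((x, y))
    return rows, coords

# toy 2x2 scene: 7 constant bands, one masked (cloud/land) pixel
bands = [[[i, i], [i, i]] for i in range(7)]
mask = [[True, False], [True, True]]
rows, coords = stack_features(bands, mask)      # 3 valid pixels, 7 features each
```

Keeping the coordinates alongside the rows is what later allows each pixel's cluster label to be registered against the ground truth image.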


Each output cluster is assigned a unique color for display purposes. The colors are chosen from a palette in IDL to provide a good visualization of the segmented regions. With the number of classes being 10, 10 different colors are used in Figure 10 to represent the different clusters. The clouds, indicated in gray, are not involved in the clustering. Visual examination of this result indicates that the different water types are well separated and agree with the expert's outlining. The green and blue patches in the middle of the shelf are obviously case I waters, as they have the background color in the chl-a image. Moving closer to the coast, we see more colors, indicating different classes of water in the optically complicated coastal zone. Based on the contour plot of K. brevis cell counts from ship measurements, the light-blue patch near Charlotte Harbor is mapped as red tide. It parallels the Sarasota-Charlotte Harbor shoreline and stretches southwestward about 50 miles offshore. With all 7 features as input, this FCM cluster agrees well with the ship measurements. The feature values of the ten cluster centroids from this image are then added to the training set for neural network training.

Figure 11 shows the result of FCM clustering on the same day for the whole WFS area. Although no other ground truth outside the small area is available for that time, we can see that FCM gives reasonable clustering results, showing high spatial context consistency within individual clusters and detectable characteristics among different clusters. Meanwhile, the red tide patch outside of Charlotte Harbor is also visible in this image.


Figure 11. Classification Result Using FCM for the WFS Area (c = 10, s = 7, r = 0) on Sept. 29, 2000

Although FCM provides reasonable segmentation results, it is found to be very time consuming. The running time of finding V from U is O(c·s·n), and the running time of finding U from V is O(c²·s·n). To obtain a result like the one shown in Figure 11, it takes more than 1 hour on a Sun UltraSPARC machine (2 CPUs, system clock frequency 100 MHz, memory size 1024 megabytes). For routine real-time satellite image processing, FCM is not efficient or practical. To solve this problem, a technique called brFCM was tested, as described in the following section, to speed up FCM.


3.3 Results from Using brFCM

Due to the iterative nature of the algorithm and the large number of image pixels involved in the study, brFCM is implemented and tested to replace FCM (see the brFCM description in section 2.3.2). Five images are used for this test. Table 1 shows the comparison of speed and accuracy between brFCM and FCM segmentations for October 1, 2000. Each input image, covering the entire WFS, has 144536 valid pixels out of a total of 396000 pixels. The first column in the table is the number of bits reduced. The second column is the time spent by FCM; since FCM does not involve bit reduction, this is constant for the whole column. The third column is the time spent by brFCM without a hash table. As the number of reduced bits increases, the amount of time spent by brFCM decreases.

Table 1. brFCM vs. FCM for SeaWiFS S2000273.hdf on the WFS (time in seconds; discrepancy in pixels)

Reduced bits | Time for FCM | Time for brFCM without hash table | Time for brFCM with hash table | Speed-up, brFCM vs. FCM | Discrepancy, brFCM vs. FCM
r=0 | 5707.7 | 2073.6 | 1257.3 | 4.5 | 0
r=1 | 5707.7 | 1101.5 | 344.3 | 15.6 | 2618
r=2 | 5707.7 | 283.5 | 102.7 | 55.6 | 8931
r=3 | 5707.7 | 83.5 | 44.5 | 128.2 | 21124
r=4 | 5707.7 | 33.2 | 18.5 | 308.5 | 54273
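The speed-up in Table 1 comes from clustering the n0 weighted bin representatives instead of all n pixels. The two brFCM stages can be sketched as below; this is a minimal plain-Python illustration with toy data and initial centroids of my choosing, not the thesis implementation:

```python
import math

def build_bins(X, r):
    """Stage I: drop the r lowest bits of each integer feature value and
    group identical reduced vectors into weighted bins via a hash table."""
    bins = {}
    for x in X:
        key = tuple(v >> r for v in x)
        if key not in bins:
            bins[key] = [0, key]                 # [weight w, reduced features f]
        bins[key][0] += 1
    return list(bins.values())

def weighted_fcm_iteration(bins, V, m=2.0):
    """Stage II: one membership (U) and centroid (V) update over the n0 bin
    representatives, with each bin counted w times."""
    c, dim = len(V), len(V[0])
    U = [[0.0] * len(bins) for _ in range(c)]
    for k, (w, f) in enumerate(bins):
        d = [max(math.dist(f, v), 1e-12) for v in V]
        for i in range(c):
            U[i][k] = 1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0)) for j in range(c))
    newV = []
    for i in range(c):
        num, den = [0.0] * dim, 0.0
        for k, (w, f) in enumerate(bins):
            g = w * U[i][k] ** m                 # weight times fuzzified membership
            den += g
            for s in range(dim):
                num[s] += g * f[s]
        newV.append([v / den for v in num])
    return U, newV

# toy data: two tight groups of 8-bit vectors; r=2 collapses each to one bin
X = [(200, 16), (201, 17), (202, 18), (40, 120), (41, 121)]
bins = build_bins(X, r=2)                        # n0 = 2 bins instead of n = 5
U, V = weighted_fcm_iteration(bins, V=[[60.0, 5.0], [5.0, 40.0]])
```

Even with r = 0, identical vectors collapse into shared bins, which is why the r=0 row of Table 1 already shows a 4.5x speed-up with zero discrepancy.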


The speed-up in the fifth column is the time used by FCM divided by the time used by brFCM with the hash table. Again, as the number of reduced bits increases, the time spent by brFCM decreases and the speed-up becomes more significant. The r=0 row deserves particular attention, because two factors affect the running time: one is the bit reduction itself, and the other is the grouping of identical feature vectors, which speeds up the algorithm even without any bit reduction. Comparing the time used by brFCM (r=0) with the time used by FCM, we see that the clustering becomes faster simply by combining identical vectors. More time is saved when using brFCM with r>0, indicating a more time-efficient algorithm. The fourth column shows the time used by brFCM without a hash table. Note that using a hash table is always 2 to 3 times faster than the corresponding run with the same bit reduction but no hash table. Within each row, since the number of reduced bits is the same, the information lost by bit reduction is the same, so the same classification result is obtained.

The discrepancy between brFCM and FCM is shown in the last column. When r=0, no information is lost, so the discrepancy between the two classification results is 0, but brFCM is still faster than FCM for the reasons stated above. The discrepancy increases as the number of reduced bits increases. When r=2, the discrepancy between brFCM and FCM is 8931 pixels; with 144536 valid pixels in this image, the discrepancy percentage is about 6%. When focusing on the red tide patch, we find a total of 2004 pixels in the ground truth image


are located within the patch. When using brFCM with r=2, we find a total of 2017 pixels in the patch. The difference, only 0.6%, is acceptable. Statistically, an average speed-up of 15 is obtained when r=2. Overall, the discrepancy between brFCM and FCM over all valid pixels is about 6%, with the discrepancy between brFCM and the ground truthed red tide clusters being about 0.5%. Based on the speed-up and performance of brFCM, we choose brFCM with r=2, s=7 and c=10 for further processing.

3.4 Results Using Texture Operations

As mentioned in the previous chapters, texture operations often provide an effective segmentation tool by supplying extra features. In this study, an experiment is performed to see whether texture indeed improves the FCM red tide segmentation. Since chlorophyll is generally believed to be one of the important indicators of red tides, the texture operation is applied to the chl-a image using the GLCM (grey level co-occurrence matrix) method to obtain the 6 most popular texture features: entropy, energy, average, standard deviation, contrast and homogeneity (see Section 2.3.4). The GLCM method is based on the differences between pairs of pixels having grey levels I and J, separated by a distance d in a fixed direction. The distance d used to calculate grey-level co-occurrence is set to 5 pixels to capture the change in texture, and the direction is set to horizontal. As an example, the chl-a image of September 29, 2000 and its associated texture features (average, entropy, standard deviation, contrast and homogeneity) are shown in Figure 12.
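The GLCM construction with horizontal pairs at distance d = 5, together with two of the descriptors, can be sketched as follows. Normalizing the counts to probabilities before computing descriptors is an implementation choice of mine:

```python
import math

def glcm(img, d=5, levels=8):
    """Grey-level co-occurrence counts for horizontal pixel pairs a distance
    d apart, normalized so the entries sum to 1."""
    M = [[0.0] * levels for _ in range(levels)]
    n = 0
    for row in img:
        for x in range(len(row) - d):
            M[row[x]][row[x + d]] += 1
            n += 1
    return [[v / n for v in r] for r in M]

def contrast(M):
    """Sum of (i - j)^2 weighted by co-occurrence probability."""
    return sum((i - j) ** 2 * M[i][j]
               for i in range(len(M)) for j in range(len(M)))

def entropy(M):
    """Shannon entropy of the co-occurrence distribution."""
    return -sum(v * math.log(v) for r in M for v in r if v > 0)

# a uniform image produces only (k, k) pairs: zero contrast and zero entropy
flat = glcm([[3] * 10, [3] * 10])
```

For a strongly alternating image the same descriptor rises sharply, which is exactly the kind of signal the texture experiment above looks for in the chl-a field.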


(a) (b) (c) (d) (e) (f) (g)

Figure 12. Examples of Textures Extracted from the chl-a Image. The texture operation is based on chl-a with d = 5 pixels, direction = horizontal. (a) chl-a image on September 29, 2000. (b)-(g) energy, average, entropy, standard deviation, contrast and homogeneity, respectively.


Figure 13 shows the classification result using brFCM (r=2) with an input of 8 features (one texture feature fused with the 7 original spectral features). The result using contrast is shown in Figure 13(b), compared with the segmentation result using the 7 original features (Figure 13(a)). Visual inspection shows little difference in the red tide patch generated by adding the extra texture feature.

(a) (b)

Figure 13. Segmentation Comparison Between brFCM Using the 7 Original Features and Using an Extra Texture Feature. (a) brFCM based on the 7 original features. (b) brFCM based on 8 features (7 original features + contrast)

Statistical results based on 5 test images are shown in Table 2. The first row is the average discrepancy, between brFCM using the 7 original features and using 8 features (with one extra texture feature), over all valid pixels in each image. Among the texture features, entropy, average and homogeneity affect the segmentation result to a much larger degree than


the other three. The second row, focusing only on the red tide clusters, gives the average discrepancy of the segmentation result in that cluster when using the different features. None of the texture features changes the segmentation significantly. Since our main goal is red tide detection, we decided to use the 7 original features as our brFCM inputs. More experiments will be carried out to further investigate the effect of extra texture features on red tide classification, using different bands to extract the texture information and different combinations of direction and distance in the GLCM calculation.

Table 2. Statistical Comparison of Segmentation Results between brFCM Using 7 Features and Using 8 Features

Extra feature | Entropy | Energy | Average | Stddev | Contrast | Homogeneity
Difference in entire image | 31% | 2% | 13% | 4% | 2% | 29%
Difference in red tide cluster | 0.1% | 1.9% | 2.5% | 0.8% | 1.7% | 0.4%


CHAPTER 4
CLASSIFICATION USING NEURAL NETWORKS

Neural networks (NN) provide a general, practical method for learning real-valued, discrete-valued, and vector-valued functions from examples. The backpropagation algorithm is the most commonly used NN learning technique. In chapter 3, brFCM (r=2, c=10, s=7) was applied to 40 SeaWiFS multi-band images of the Tampa Bay-Charlotte Harbor area. The cluster centers generated by brFCM are fed into a backpropagation NN in this chapter for automated cluster labeling.

The rest of this chapter is organized as follows. We first consider the problems in tuning/testing a backpropagation NN and give examples of NN applications to multi-band satellite image classification. Then, a NN is built and tested based on the satellite image segmentation results and their concurrent ground truth, with an examination of all the tuning/testing problems. Once the network is developed, cluster centers from new images are fed to the classifier, and classification outputs are generated from the network established during the learning phase. Finally, 2001 is taken as the case study year, and unseen images from this year are used to complete the time series analysis of red tide evolution.
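The labeling pipeline just outlined can be sketched as follows. Here `fake_net` is a stand-in for the trained backpropagation network, the ±0.5 target-code convention follows the training setup described later in this chapter, and the toy centroids (with chlorophyll as the 7th feature) are hypothetical:

```python
def decode_output(out):
    """Pick the water type whose target code the network output is closest to
    (equivalently, the index of the largest output component)."""
    classes = ["red tide", "case I", "case II"]
    return classes[max(range(3), key=lambda i: out[i])]

def label_image(centroids, net, memberships):
    """Classify each brFCM cluster centroid with the network `net`, then give
    every pixel the label of its (hardened) cluster index."""
    cluster_labels = [decode_output(net(c)) for c in centroids]
    return [cluster_labels[k] for k in memberships]

# stand-in "network" reacting only to the 7th feature (derived chlorophyll):
fake_net = lambda c: [0.5, -0.5, -0.5] if c[6] > 10 else [-0.5, 0.5, -0.5]

centroids = [[0, 0, 0, 0, 0, 0, 15],   # high-chlorophyll cluster
             [0, 0, 0, 0, 0, 0, 1]]    # low-chlorophyll cluster
labels = label_image(centroids, fake_net, [0, 1, 0])
```

The key design point is that only c = 10 centroids per image pass through the network, so labeling an entire 600x650 scene costs little beyond the clustering itself.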


4.1 Neural Network Tuning and Applications

4.1.1 Quickprop Neural Network Algorithm

The neural network training algorithm used in this chapter is summarized as follows:

Step 1. Each training example is a cluster centroid, a pair of the form (X, t_d), where X is a 7-dimensional vector and t_d is the crisp water type that the cluster belongs to according to the ground truth, coded as [0.5, -0.5, -0.5] for the red tide class, [-0.5, 0.5, -0.5] for the case I water class and [-0.5, -0.5, 0.5] for the case II water class.

Step 2. Initialize each weight w_i to some small random value. The weights of the NN are first randomly initialized between -1 and 1 according to a uniform probability distribution.

Step 3. Until the termination condition is met, do:

3.a Input the instance X to the sigmoid units: net = Σ(x_i · w_i), z = F(net). The sigmoid unit keeps the linear weighted combination of inputs and adds a nonlinear output function to give the neural network its nonlinearity. Nonlinearity is needed here because the distribution boundary of the cluster centers of different water types is nonlinear. The function F is also referred to as the "squashing function", since it maps a very large input domain to a small range of outputs.

3.b Compute the training error E = 1/2 Σ_d (t_d − o_d)², where t_d and o_d are the target value and the unit output value for training example d.

3.c Update the weights. Quickprop uses the current and previous error slopes, S(t) = ∂E/∂w(t), to take a second-order step:

Δw(t) = [ S(t) / (S(t−1) − S(t)) ] · Δw(t−1)    (3.1)

while standard backpropagation with momentum updates each weight as

Δw_ij(n) = −η (∂E/∂w_ij) + α Δw_ij(n−1)    (3.2)


where η is a positive constant called the learning rate, which determines the step size in the gradient descent search. The first term on the right side of equation (3.2) is calculated in the direction of steepest descent along the error surface. The second term is called the momentum term, where α (0 ≤ α ≤ 1) is a constant called the momentum.

3.d For each unit weight w_j, do w_j ← w_j + Δw_j. Weights are the primary means of long-term storage in neural networks, and learning takes place by updating the weights.

4.1.2 Number of Nodes

The number of hidden layer nodes needed depends on the complexity of the function to be approximated. If we choose a network that is too small (too few hidden units), the model will be incapable of representing the desired function. The network needs sufficient units to model the function correctly. If we choose a network that is too large, it will be able to memorize all the examples by forming a large lookup table, but will not generalize well to inputs it has not seen before. Increasing the number of hidden nodes also inflates computation and training time, so the need for precision must be balanced against the convenience of quick processing.

4.1.3 Convergence and Local Minima

As shown above, the backpropagation algorithm implements a gradient descent search through the space of possible network weights, incrementally reducing the error E between the training example target values and the network outputs. Because the error surface for multilayer networks may contain many different local minima, gradient descent can become trapped in any of these. As a result, backpropagation is only


guaranteed to converge toward some local minimum of E, and not necessarily to the global minimum error. Two common heuristics that attempt to alleviate the problem of local minima are adding a momentum term and using stochastic gradient descent.

4.1.4 Generalization, Overfitting and the Stopping Criterion

What is an appropriate condition for terminating the weight update loop? One obvious choice is to continue training until the error E on the training examples falls below some predetermined threshold. In fact, this is a poor strategy, because backpropagation is susceptible to overfitting the training examples at the cost of decreased generalization accuracy over unseen examples (Mitchell, 1997). One of the most successful methods for overcoming the overfitting problem is simply to provide a set of validation data to the algorithm in addition to the training data. The algorithm monitors the error with respect to this validation set while using the training set to drive the gradient descent search. The number of iterations that produces the lowest error over the validation set is then used for the weight-tuning iterations, since this number is the best indicator of network performance over unseen examples.

4.1.5 Cross-validation for NN Performance Evaluation

Ideally, the predictive accuracy of a neural network constructed from the clusters of a training set should be estimated on new, unseen clusters from a testing set. Unless there are very many clusters in both sets, this estimate can be rather erratic. One way to get a more reliable estimate of predictive accuracy is by cross-validation. The whole set of labeled clusters is divided into f blocks of roughly the same size and class distribution. For each block in turn, a classifier is constructed from the clusters in the remaining blocks and tested on the cluster centers in the held-out block. In this way, each cluster is used exactly once as a test cluster. The error rate of a classifier produced from all the clusters is estimated as the ratio of the total number of errors to the total number of clusters.

4.1.6 Prior Work on NN Training and Testing

There are many variations of NN models developed for multi-feature image classification in different application domains. This section discusses some prior work using NN classifiers in remote sensing.

Tian et al. (1999) used neural networks for cloud classification from Geostationary Operational Environmental Satellite (GOES) 8 imagery. Textural features were also examined for the sake of comparison and found very useful for improving performance. Additionally, a postprocessing scheme was developed that utilizes the contextual information in the satellite images to improve the final classification accuracy. An overall classification rate of 83.4% was obtained.

In Simpson and McIntire (2001), two approaches to effectively and accurately detect clear land, cloud and the areal extent of snow in satellite data are developed. A feed-forward neural network (FFNN) is used to classify individual images, and a recurrent NN (RNNCCS) is used to classify sequences of images, since GOES provides high temporal sampling (hourly or faster). The recurrent NN combines a "short-term memory" (data and information from the previous RNNCCS analysis) with a "long-term


memory" (the RNNCCS weights and biases) to determine the current classification. Validation with independent in situ data confirms the classification accuracy (94% for the feed-forward NN, 97% for the recurrent NN).

Gopal et al. (1999) applied a neural network architecture called fuzzy ARTMAP to an annual sequence of composited normalized difference vegetation index (NDVI) values from an AVHRR data set to classify eleven global land-cover types. When fuzzy ARTMAP is trained using 80% of the data and tested on the remaining (unseen) 20%, classification accuracy is more than 85%, compared with 78% for the maximum likelihood classifier. This fuzzy ARTMAP will be used in a global land-cover classification algorithm presently under development for processing data from the MODIS instrument, to minimize requirements for human intervention.

There is also some work that uses a NN as an alternative to analytic transfer functions. In Keiner et al. (1999), considering the nonlinear and noisy nature of the transfer function, a NN algorithm was constructed to estimate oceanic chlorophyll concentration from SeaWiFS data. The algorithm was trained and tested using data compiled at the SeaWiFS Bio-optical Algorithm Mini-workshop. A neural network using the five visible SeaWiFS bands as inputs, with ten nodes in a single hidden layer, estimated chlorophyll concentrations efficiently and accurately, modeling the nonlinear transfer function between surface chlorophyll concentrations and remotely sensed reflectance data more accurately than traditional regression methods.

Gross et al. (1999) also used a NN to retrieve chlorophyll pigments in the near-surface ocean from ocean color measurements. Since it is difficult to gather the amount of data necessary to build a NN properly, they used simulated datasets (pairs of


marine reflectance and pigment concentrations) as a training set and built a NN with a learning rate of 0.01 and 2 hidden layers (6 and 4 hidden nodes). By comparing with the polynomial fit conventionally employed for this problem, they showed advantages of neural function approximation, such as the combination of nonlinear complexity and noise filtering.

4.2 Development of the Neural Network

4.2.1 Training/Testing Dataset

In this study, a total of 40 ground-truthed images covering the Tampa Bay-Charlotte Harbor area from the last 4 years were selected (see section 3.1.2 on image collection). Among them, 30 images are ground-truthed as red tide images (at least one cluster in the image is labeled as red tide by an expert) and 10 images are ground-truthed as non-red-tide images. The NN is trained on the centroid values of the clusters of the training images, which are the purest data in the corresponding clusters. In total, 400 centroids from these 40 images, each having 7 features and its type, compose the training/testing/validation set for our quick-propagation NN development.

These 40 images are further divided into 2 disjoint subsets: 10 images (5 red tide images and 5 non-red-tide images) are selected as the validation set to help determine when the weight update loop terminates (see 4.1.4). The other 30 images are used to form the training/testing set.
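The validation-driven stopping rule of section 4.1.4 can be sketched as follows. Here `train_step` and `val_error` are hypothetical stand-ins for one quick-propagation training pass and the error over the held-out validation images; they are not the thesis's actual code.

```python
def train_with_early_stopping(train_step, val_error, max_epochs):
    """Run up to max_epochs training passes, monitoring error on a held-out
    validation set; return the epoch (and error) at which the validation
    error was lowest -- the best indicator of generalization."""
    best_err, best_epoch = float("inf"), 0
    for epoch in range(1, max_epochs + 1):
        train_step(epoch)            # one gradient-descent pass on training data
        err = val_error()            # monitored only, never trained on
        if err < best_err:
            best_err, best_epoch = err, epoch
    return best_epoch, best_err

# Toy run: validation error falls, then rises as the net starts to overfit.
history = [0.9, 0.5, 0.3, 0.25, 0.28, 0.4]
state = {"epoch": 0}
best = train_with_early_stopping(
    lambda e: state.update(epoch=e),
    lambda: history[state["epoch"] - 1],
    max_epochs=len(history),
)
print(best)   # -> (4, 0.25): epoch 4 gives the lowest validation error
```

The training error keeps decreasing past epoch 4, but the validation curve identifies epoch 4 as the point where generalization stops improving.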


4.2.2 Neural Network Structure

Figure 14 illustrates the network architecture used in this study. The number of input neurons is 7, corresponding to the 7 feature values of a cluster centroid. In total, 300 centroids are fed into the NN one by one for training/testing. The hyperbolic arc tangent is used for F, although any nonlinear function that is continuous, monotonically increasing, and finite over the interval (-inf, inf) may be used. Several experimental runs determined the number of nodes in the "hidden" layer, where the weighted summation and squashing functions are performed. Three nodes in the output layer represent the 3 types of water we seek to identify, i.e., Case I, Case II-like and red tides. The output from each node is trained to return a value between -0.5 and 0.5. The final crisp class is determined by the largest of these 3 values.

[Figure: network diagram with inputs including Band 412, Band 443 and chl-a, and outputs Case I water, Case II-like water and red-tide water]

Figure 14. Neural Network Structure Used in This Study


In the training experiment, "leave-one-out" cross-evaluation (see 4.1.5) is carried out as follows: the training set is further divided into two subsets, with 29 images used to train the network and 1 image used to evaluate its performance. For each network structure (a certain number of hidden nodes and certain values of the learning rate and momentum), the partition is repeated 30 times, with a different image assigned as the evaluation image each time. Several network-training runs are conducted using different numbers of hidden units (3, 10, 16, 20) and different values of the learning rate and momentum (0.01, 0.02, 0.1, 0.2, 0.5). The number of epochs is chosen based on performance on the validation set, since a small SSE does not necessarily imply good generalization (Mitchell, 1997). A network with one hidden layer of 10 neurons, a learning rate of 0.02 and a momentum of 0.1 was finally chosen as the architecture with optimal classification accuracy.

4.2.3 Performance of the Neural Network

The performance of this NN is evaluated by "leave-one-out" and shown in the following confusion matrices at the cluster level, image level and pixel level. The overall accuracy, which refers to the percentage of all valid pixels of all classes that are correctly classified, is also calculated in each case.
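The momentum update rule from section 4.1.1, with the finally chosen learning rate η = 0.02 and momentum α = 0.1, can be sketched as follows; the quadratic error below is a toy stand-in for the network's error surface, used only to show the update converging.

```python
import numpy as np

def momentum_update(w, grad, prev_dw, lr=0.02, momentum=0.1):
    """Step 3.d with momentum: delta_w = -lr * grad + momentum * prev_delta_w,
    then w <- w + delta_w."""
    dw = -lr * grad + momentum * prev_dw
    return w + dw, dw

# Illustrative descent on the toy error E(w) = w^2, whose gradient is 2w:
w, dw = np.array([1.0]), np.array([0.0])
for _ in range(200):
    w, dw = momentum_update(w, 2 * w, dw)   # gradient of E at the current w
print(w)   # close to the minimum at w = 0
```

The momentum term keeps a fraction of the previous step, which smooths the trajectory and can carry the search through small local minima.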


Table 3. Confusion Matrix of the Neural Network Results at the Cluster Level

                 Red tides   Non red tides
Red tides             53            12
Non red tides          2           233

The overall performance yields an accuracy of 1 - (12+2)/300 = 95% at the cluster level.

Table 4. Confusion Matrix of the Neural Network Results at the Image Level

                 Red tides   Non red tides
Red tides             28             2
Non red tides          0            10

The overall performance yields an accuracy of 1 - 2/40 = 95% at the image level.

Table 5. Confusion Matrix of the Neural Network Results at the Pixel Level

                 Red tides   Non red tides
Red tides         114897         17496
Non red tides       4619        810010


The overall performance yields an accuracy of 1 - (17496+4619)/947022 = 97% at the pixel level.

These overall favorable performances indicate the strong learning and generalization ability of neural networks.

4.3 Year 2001 Case Study

4.3.1 Neural Network Results for the Tampa Bay-Charlotte Harbor Area in 2001

After the optimally performing network is obtained through training/testing, the NN is applied to an additional 87 unseen images from 2001 to generate a series of images in an attempt to find the origination and evolution of red tides in the Tampa Bay-Charlotte Harbor area in this case study year. These 87 images, spanning the period from January to November of 2001 (SeaWiFS December images have misnavigation errors and will be added after correction), are images with good satellite coverage but no ground truth available at the corresponding time. The classification results are put into an animation that can be viewed at http://imars In each image, the red tide class is colored red, while clouds and Case I/II water are grey and white, respectively.

The classification system identifies red tides on August 9th, 2001, almost 20 days earlier than the time when the monthly Ecohab cruise found the first red tide event of 2001 at the end of August. Figure 15, showing the sequence of classification results during August, indicates that discontinuous small red tide patches appeared at the mouth of Tampa Bay, along the coast between Sarasota and Charlotte Harbor, and south of Charlotte Harbor on August 9th, then grew into 2 compact patches on August 11th. The larger one stretched from south of Tampa Bay to Charlotte Harbor while the smaller one


stayed to the south of it. Expanding further on August 12th, these two patches moved southward and offshore from August 13 to August 23, then stayed there for a week (based on images from August 23rd, 25th, 28th, and 29th) and were captured by the August Ecohab cruise. This movement is consistent with the southwestward surface currents between August 12th and 23rd (R. He, personal communication).

On September 3rd, the above structure became chaotic (Figure 16). Unfortunately, with Tropical Storm Gabrielle moving northeastward and passing over Florida from September 5th to September 16th, no good satellite images were available until September 17th. A unique filamentary red tide structure appeared on the 17th and 18th and seemed to be associated with a gyre formed during the storm. Analysis of images with good satellite coverage thereafter, on September 19th and 22nd and October 1st, 2nd and 3rd, shows severe red tides covering the entire coastline from Tampa Bay to Charlotte Harbor, which matches the Ecohab cruises well. In October and November, red tides were continuously found along the coast from Tampa Bay to Charlotte Harbor.

An interesting result is that the appearance of red tides seems to be related to storm/hurricane systems. The classification system identifies red tides after Tropical Storm Barry and Tropical Storm Gabrielle in August and September, respectively. Similarly, after Hurricane Michelle (October 29 to November 6), red tides are also detected in the Tampa Bay-Charlotte Harbor area. The reason for this relation might be a significant amount of land-origin nutrients transported by strong river runoff after the storms, which stimulates the growth of red tides.
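As an arithmetic check, the overall accuracies quoted for Tables 3-5 follow directly from the confusion-matrix entries:

```python
def overall_accuracy(matrix):
    """Diagonal (correct) count over total count of a confusion matrix."""
    total = sum(sum(row) for row in matrix)
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    return correct / total

cluster = [[53, 12], [2, 233]]                 # Table 3: cluster level
image = [[28, 2], [0, 10]]                     # Table 4: image level
pixel = [[114897, 17496], [4619, 810010]]      # Table 5: pixel level

for name, m in (("cluster", cluster), ("image", image), ("pixel", pixel)):
    print(f"{name} level: {overall_accuracy(m):.1%}")
# cluster 95.3%, image 95.0%, pixel 97.7% (rounded to 95%, 95%, 97% in the text)
```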


Figure 15. Red Tides Classification Results During Early-to-mid August 2001


[Figure: classification maps with ground-truth cell-count measurements, late August 2001]

Figure 16. Red Tides Classification Result and Ground Truth During Late August 2001


[Figure: classification results and ground truth, early September 2001]

4.3.2 Neural Network Results for the Entire WFS in 2001

K. brevis cells are positively phototactic and concentrate in the upper water column during the day. They therefore behave like surface drifters, and their movement can be explained by ocean circulation patterns (Tester, 1997). To estimate the possible origin of red tides and to better understand their movement, the same neural network is applied to a bigger area covering the entire WFS. The complete series can be seen in an animation at http://imars.marine

Red tides first appeared on July 28th in the WFS area (Figure 18). A small red tide patch showed up close to Charlotte Harbor, which may not have been big enough to form a cluster or be labeled as red tide in the small-area case. The patch grew a little and stretched northward on August 8th and 9th, when the small-area case detected red tides. On July 28th, besides the small patch near Charlotte Harbor, a relatively big red tide patch was found by the classification system near the Big Bend area, although ground truth was unavailable at that time for validation. No clear connection or similarity can be found between these two patches except that both were close to river runoff areas.


Figure 18. Classification Result of WFS: First Day of Red Tides in 2001

4.4 Adding New Images

In practice, adding a new training image to the training set means adding the new labeled clusters of that image to the feature space of the original training set. The distribution of labeled clusters will therefore be temporarily altered. After the system extracts the cluster labeling rules and updates the knowledge base, the clusters of the training images (original and new) must be relabeled using the updated knowledge base. The system performance is computed for the training set (both original and new sets). Once it is shown that the addition improves the system performance, the new rule set is stored and the performance is recorded for the updated knowledge base.


CHAPTER 5

SUMMARY AND DISCUSSION

Red tides are a recurring problem on the West Florida Shelf with numerous human, economic and ecosystem impacts. In this study, an automatic classification system for SeaWiFS images, consisting of an unsupervised clustering algorithm, FCM (fuzzy c-means), and a neural network classifier, is developed to detect red tides on the West Florida Shelf. Forty SeaWiFS images collected and processed at the University of South Florida, and ground truth gathered from Ecohab (Ecology and Oceanography of Harmful Algal Blooms) cruise observations in the Tampa Bay-Charlotte Harbor region, both from 1998 to 2001, are used to train and validate the system.

Seven original features from SeaWiFS multi-spectral bands are used as input for FCM, after qualitative and quantitative comparisons with clustering results using 1 feature, 3 features, and 8 features (the extra one being a texture feature extracted from chl-a images). brFCM (bit-reduction FCM, with r=2 with a hash table, s=7 and c=10), which provides significant speedup and good agreement with FCM and the ground truth, is used as the operational clustering algorithm. The cluster centers generated by brFCM are then fed into a neural network trained for automated cluster labeling, and an overall accuracy of 95% is obtained by "leave-one-out" evaluation, showing the strong learning and generalization ability of neural networks. This optimally performing neural network is then applied to 87 unseen images from January to November 2001 to


present a complete series of the red tide evolution. The classification results for red tides agree well with Ecohab ship measurements. The time series of red tide patches identified by the system is then used to describe the presence, progress and movement of the harmful algal bloom during that period. This study demonstrates that brFCM and neural network systems are promising tools for satellite imagery classification of red tides.

At the same time, the limited availability of ground truth for training the NN still limits the capability of this system. In reality, CDOM, which is highly similar to red tides in its optical characteristics, may be misclassified as red tide. More in situ ship measurements are needed to improve the quality of the NN training set. Other environmental factors, such as salinity and temperature, are considered important in affecting red tide patterns. Future research, for instance, could incorporate daily 1-km resolution sea surface temperature collected by the satellite AVHRR sensor and daily salinity output from physical numerical models of the west Florida shelf to improve the accuracy of partitioning and classification.
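The two-stage pipeline summarized here (unsupervised fuzzy c-means clustering followed by labeling of the cluster centers) can be sketched in miniature. Note that this is plain FCM rather than the brFCM bit-reduction variant, the synthetic two-class data are invented for illustration, and the simple mean-value threshold stands in for the trained neural network labeler.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Standard fuzzy c-means: alternate membership and center updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                        # fuzzy memberships sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)
        d = np.maximum(d, 1e-12)              # avoid division by zero
        U = 1.0 / d ** (2 / (m - 1))          # inverse-distance memberships
        U /= U.sum(axis=0)
    return centers, U

# Two well-separated synthetic "water types" in a 7-feature space; cluster,
# then label each resulting center (threshold replaces the trained NN).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (50, 7)), rng.normal(5.0, 0.1, (50, 7))])
centers, U = fcm(X, c=2)
labels = ["red tide" if ctr.mean() > 2.5 else "other" for ctr in centers]
print(sorted(labels))
```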


REFERENCES

R. Beale and T. Jackson, Neural Computing: An Introduction, Adam Hilger, Bristol, Philadelphia and New York, 1990.

A.M. Bensaid, L.O. Hall, J.C. Bezdek and L.P. Clarke, Partially supervised clustering for image segmentation, Pattern Recognition, 29, 859-871, 1996.

J.C. Bezdek, Pattern Recognition with Fuzzy Objective Function Algorithms, Plenum Press, New York, 1981.

J.C. Bezdek, L.O. Hall, and L.P. Clarke, Review of MR image segmentation techniques using pattern recognition, Med. Phys., 20(4): 1033-1048, 1993.

J.S. Borak and A.H. Strahler, Int. J. Remote Sensing, 20(5), 919-938, 1999.

K.L. Carder and R.G. Steward, A remote-sensing reflectance model of a red-tide dinoflagellate off west Florida, Limnol. Oceanogr., 30(2), 286-298, 1985.

J.J. Cullen, A.M. Ciotti, R.F. Davis and M.R. Lewis, Optical detection and assessment of algal blooms, Limnol. Oceanogr., 42(5, part 2), 1223-1239, 1997.

J.E. Dayhoff, Neural Network Architectures: An Introduction, Van Nostrand Reinhold, New York, 1990.

S.E. Fahlman, Faster learning variations on back-propagation: an empirical study, in Proceedings of the 1988 Connectionist Models Summer School, Morgan Kaufmann, 1988.


S. Gopal, C.E. Woodcock, and A.H. Strahler, Fuzzy neural network classification of global land cover from a 1° AVHRR data set, Remote Sens. Environ., 67: 230-243, 1999.

H.R. Gordon, J.W. Brown, et al., Nimbus-7 CZCS: reduction of its radiometric sensitivity with time, Appl. Opt., vol. 22, pp. 3929-3931, 1983.

L. Gross, S. Thiria and R. Frouin, Applying artificial neural network methodology to ocean color remote sensing, Ecological Modelling, 120, 237-246, 1999.

L.O. Hall, A.M. Bensaid, et al., A comparison of neural network and fuzzy clustering techniques in segmenting magnetic resonance images of the brain, IEEE Transactions on Neural Networks, vol. 3, no. 5, 672-682, 1992.

S.S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice Hall, 2001.

C. Hu, K.L. Carder and F.E. Muller-Karger, Atmospheric correction of SeaWiFS imagery over turbid coastal waters: a practical method, Remote Sens. Environ., 74: 195-206, 2000.

M. Kahru and B.G. Mitchell, Spectral reflectance and absorption of a massive red tide off southern California, J. Geophys. Res., vol. 103, 21601-21609, 1998.

J. Ke, L.O. Hall, and D.B. Goldgof, Fast accurate fuzzy clustering through reduced precision, in press.

J. Ke, Fast accurate fuzzy clustering through reduced precision, Master's thesis, University of South Florida, 1999.

C. Li, D.B. Goldgof, and L.O. Hall, Knowledge-based classification and tissue labeling of MR images of human brain, IEEE Trans. Medical Imaging, vol. 12, no. 4, pp. 740-750, 1993.


C. Li, L.O. Hall, and D.B. Goldgof, Knowledge-based classification and tissue labeling of MR images of human brain, in SPIE Conf. on Biomedical Image Processing and Biomedical Visualization, 1993.

C. Li, D.B. Goldgof, and L.O. Hall, Towards automatic classification and tissue labeling of MR brain images, in Proc. Int. Workshop on Structural and Syntactic Pattern Recognition, 1992.

C.R. McClain, et al., SeaWiFS Algorithms, Part 1, SeaWiFS Technical Report Series, Vol. 28, NASA Technical Memorandum 104566, NASA Goddard Space Flight Center, Greenbelt, MD, 1995.

D.F. Millie, O.M. Schofield, G.J. Kirkpatrick, G. Johnsen and B.T. Vinyard, Detection of harmful algal blooms using photopigments and absorption signatures: a case study of the Florida red tide dinoflagellate, Gymnodinium breve, Limnol. Oceanogr., 42(5, part 2), 1240-1251, 1997.

T. Mitchell, Machine Learning, McGraw-Hill, 1997.

A. Morel and L. Prieur, Analysis of variations in ocean color, Limnol. Oceanogr., 22, 709, 1977.

D. Muchoney, J. Borak, H. Chi, et al., Application of the MODIS global supervised classification model to vegetation and land cover mapping of Central America, Int. J. Remote Sensing, vol. 21, no. 6&7, 1115-1138, 2000.

J.E. O'Reilly, S. Maritorena, B.G. Mitchell, D.A. Siegel, K.L. Carder, S.A. Garver, M. Kahru, and C. McClain, Ocean color chlorophyll-a algorithms for SeaWiFS, J. Geophys. Res., 103(C11): 24,937-24,953, 1998.

J.R. Parker, Algorithms for Image Processing and Computer Vision, 1997.


J.J. Simpson and T.J. McIntire, A recurrent neural network classifier for improved retrievals of areal extent of snow cover, IEEE Transactions on Geoscience and Remote Sensing, vol. 39, no. 10, 2135-2147, 2001.

P.A. Tester and K.A. Steidinger, Gymnodinium breve red tide blooms: initiation, transport, and consequences of surface circulation, Limnol. Oceanogr., 42(5, part 2), 1039-1051, 1997.

B. Tian, M.A. Shaikh, M.R. Azimi-Sadjadi, et al., A study of cloud classification with neural networks using spectral and texture features, IEEE Trans. on Neural Networks, vol. 10, no. 1, 138-151, 1999.

J.J. Walsh, K.D. Haddad, D.A. Dieterle, et al., A numerical analysis of landfall of the 1979 red tide of Karenia brevis along the west coast of Florida, Cont. Shelf Research, 22, 15-38, 2002.

X.L. Xie and G. Beni, A validity measure for fuzzy clustering, IEEE Trans. PAMI, 13: 841-847, 1991.

W. Yao, Knowledge-based classification of SeaWiFS satellite images for monitoring phytoplankton blooms off West Florida, Master's thesis, University of South Florida, 1999.

M. Zhang, Generic knowledge-guided image segmentation and labeling with applications, Ph.D. dissertation, University of South Florida, 1998.

