USF Libraries
USF Digital Collections

Evaluation of digital imaging systems used in highway applications


Material Information

Title:
Evaluation of digital imaging systems used in highway applications
Physical Description:
Book
Language:
English
Creator:
Mraz, Alex
Publisher:
University of South Florida
Place of Publication:
Tampa, Fla.
Publication Date:
2004

Subjects

Subjects / Keywords:
assessment
image
quality
noise
filtering
pavement
Dissertations, Academic -- Civil Engineering -- Doctoral -- USF   ( lcsh )
Genre:
government publication (state, provincial, territorial, dependent)   ( marcgt )
bibliography   ( marcgt )
theses   ( marcgt )
non-fiction   ( marcgt )

Notes

Summary:
ABSTRACT: Manual pavement condition surveys are gradually being replaced by more comprehensive automated surveys conducted by multi-function highway evaluation vehicles. Highway evaluation vehicles are generally equipped with laser profiling, land navigation, and imaging sub-systems. The imaging system consists of three cameras: forward-view and side-view digital area-scan cameras for capturing images of traffic signs and right-of-way safety features, and a pavement digital line-scan or area-scan camera for capturing images of the pavement surface. In addition to the 3-laser and accelerometer-based profiling system, these vehicles are also equipped with differential global positioning equipment (DGPS) and an inertial measurement unit (IMU) for cross-slope, curvature and grade measurements. Digital imaging systems installed in automated highway evaluation vehicles are generally designed on a modular basis where subsystems by different manufacturers are assembled to customize the system and fulfill the users' needs while minimizing the cost. In most such cases, manufacturers' specifications for a subsystem would not be reliable with respect to the eventual performance of that subsystem as part of the entire assembly. On the other hand, no guidelines are available for performance evaluation of imaging systems as assemblies of discrete subsystems. Moreover, images acquired by digital cameras can become contaminated by random noise affecting their quality and the ability to identify important features. These issues surfaced during the development and testing of the Florida Department of Transportation (FDOT) highway evaluation vehicle. The first phase of the work involved in this dissertation research concerns the study of basic criteria for the evaluation of image quality through measurement of well-defined properties of images such as color reproduction, tone reproduction, and detail reproduction, as well as the levels of noise and optical distortion. Standard and reliable methods that can be adopted for evaluation of the above properties are discussed first. Then, by applying the above evaluation criteria to the imaging systems of the FDOT highway evaluation vehicle, it is shown how the sources of substandard image quality can be recognized and the optimum settings achieved. The second phase of the dissertation research is focused on the investigation of the sources of noise that can affect digital line-scan distress images. As a result of this study, a novel technique was developed to filter out noise present in pavement distress images by using intensity measurements obtained from a standard grayscale target. In addition, a detailed experimental study was conducted to investigate the effect of the speed of evaluation and lighting conditions on the accuracy and repeatability of digital line-scan images in representing the actual distress condition of a pavement. The conclusions drawn from the second phase can be used to minimize the effect of noise on digital images of pavement distress and to improve the accuracy of evaluation of pavement cracks based on digital images. Hence the results of this study will certainly enhance the overall efficiency of the automated evaluation of pavement distress and highway features.
Thesis:
Thesis (Ph.D.)--University of South Florida, 2004.
Bibliography:
Includes bibliographical references.
System Details:
System requirements: World Wide Web browser and PDF reader.
System Details:
Mode of access: World Wide Web.
Statement of Responsibility:
by Alexander Mraz.
General Note:
Includes vita.
General Note:
Title from PDF of title page.
General Note:
Document formatted into pages; contains 208 pages.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 001498256
oclc - 57709253
notis - AJU6861
usfldc doi - E14-SFE0000487
usfldc handle - e14.487
System ID:
SFS0025178:00001




Full Text


PAGE 1

Evaluation of Digital Imaging Systems Used in Highway Applications

by

Alexander Mraz

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy
Department of Civil & Environmental Engineering
College of Engineering
University of South Florida

Major Professor: Manjriker Gunaratne, Ph.D.
Ram Pendyala, Ph.D.
Alaa Ashmawy, Ph.D.
Thomas Davis, Ph.D.
Sudeep Sarkar, Ph.D.
Paul Zandbergen, Ph.D.

Date of Approval: July 21, 2004

Keywords: pavement, assessment, image, quality, noise, filtering

Copyright 2004, Alexander Mraz

PAGE 2

DEDICATION

This dissertation is dedicated to my parents Ludovit and Zlata, my brother Michal, and my friends Mary and Roy. Without their help and comprehension, I could never have reached where I am today.

PAGE 3

ACKNOWLEDGMENTS

I would like to express my most sincere gratitude to my major professor, Dr. Manjriker Gunaratne, for his constructive advice, continuous encouragement, and patience throughout my graduate study. I can never thank him enough. I would like to extend my appreciation to the other members of my doctoral committee, Dr. Ram Pendyala, Dr. Alaa Ashmawy, Dr. Tom Davis, Dr. Sudeep Sarkar, and Dr. Paul Zandbergen, for their constructive suggestions. During my graduate study, the financial support provided by the Department of Civil and Environmental Engineering, University of South Florida, in terms of a teaching assistantship is gratefully acknowledged. A special note of appreciation extends to Mr. Abdenour Nazef and the staff of the Florida Department of Transportation for their support and valuable contribution to this project. I also gratefully acknowledge the support from the Department of Transportation for funding the research projects "Study of the Feasibility of Video Logging with Pavement Condition Evaluation" (Grant BC965) and "Evaluation and Validation of the High-Speed Multi-Function System for Automated Pavement Condition Survey" (Grant BD544). The opinions, findings, and conclusions expressed here are those of the author and not those of the supporting agency.

PAGE 4

TABLE OF CONTENTS

LIST OF TABLES iv
LIST OF FIGURES vi
ABSTRACT xv

CHAPTER 1 EVALUATION OF PAVEMENT DISTRESS AND HIGHWAY FEATURES USING DIGITAL IMAGING 1
1.1 Development of the FDOT Highway Evaluation Vehicle 1
1.1.1. Forward-View and Side-View Imaging Systems 3
1.1.2. Pavement Imaging System 4
1.1.3. Position and Orientation System 6
1.1.4. Laser Profiling System 8
1.2 Overview of Digital Imaging 10
1.2.1. State-of-the-Art of Digital Imaging 10
1.2.2. Analog to Digital Conversion 11
1.2.3. Spatial Resolution 12
1.2.4. Bit Depth 14
1.2.5. Software Used for Intensity Measurements of Digital Images (ImageJ) 16
1.2.6. Storage of Digital Image Data 18
1.2.7. JPEG Compression 19
1.3 Relationships Used in Optical Analysis 21
1.4 An Overview of Pavement Surface Distress 22
1.5 Research Goals 25
1.6 Organization of Dissertation 25

CHAPTER 2 GUIDELINES FOR PERFORMANCE ASSESSMENT OF DIGITAL IMAGING SYSTEMS USED IN HIGHWAY APPLICATIONS 26
2.1 Introduction 26
2.2 Quality of Images 29
2.3 Intensity Measurements 31
2.4 Evaluation of Color Reproduction Quality of Digital Images 32
2.4.1. Selection of Aperture Setting Using Color Evaluation 34
2.4.2. Setting of White Balance Using Color Evaluation 36
2.5 Tone Reproduction Quality of Digital Images 40

PAGE 5

2.5.1. Evaluation of Dynamic Range 42
2.5.2. Evaluation of Proper Exposure Settings 45
2.6 Detail Reproduction Quality of a Digital Image 46
2.6.1. Theoretical Limits of Spatial Resolution 46
2.6.2. Modulation Transfer Function (MTF) Approach for Determining Spatial Resolution 49
2.6.3. Contrast Transfer Function (CTF) 53
2.7 Level of Noise 60
2.7.1. Definition 60
2.7.2. Evaluation of Noise through Measurements of Signal-to-Noise Ratio 63
2.7.3. Effect of the Noise on Dynamic Range 66
2.8 Optical Distortion 68
2.9 Guidelines for Maximizing the Efficiency of Imaging Operations 70

CHAPTER 3 INVESTIGATION OF NOISE AFFECTING PAVEMENT DISTRESS IMAGES AND ENHANCEMENT OF IMAGES USING NOISE FILTRATION 72
3.1 Noise that Affect CCD Sensor 72
3.1.1. Overview of Charge-Coupled Device (CCD) 74
3.1.2. Gain and Exposure 76
3.1.3. Area-scan CCD Sensor 79
3.1.4. Line-scan CCD Sensor 81
3.2 Types of Noise in CCD Cameras 83
3.2.1. Object-dependent Noise 85
3.2.1.1. Photon Noise 85
3.2.1.2. Thermal Noise 88
3.2.1.3. Bias Noise 92
3.2.2. Object-independent Noise 93
3.2.2.1. Read-out Noise 93
3.2.2.2. Quantization Noise 94
3.2.2.3. Noise Due to the Optical System and Defects on the CCD Sensor 95
3.2.2.4. Compression Noise 97
3.2.3. Effect of Saturation 99
3.2.4. Effect of Flare 101
3.3 SNR (Signal-to-Noise Ratio) 103
3.4 Mathematical Representation of the Noise in the Signal 107
3.5 Noise Filtration 110
3.6 Filtering Method Based on Local Statistics 115
3.6.1. Determination of Noise Variance 122
3.6.2. Illustration of the Filtering Technique 125
3.6.3. Use of the Grayscale Target Procedure 128
3.6.4. Verification of the Filtering Technique 130

PAGE 6

CHAPTER 4 EXPERIMENTAL EVALUATION OF THE PAVEMENT IMAGING SUBSYSTEM OF THE FDOT PAVEMENT EVALUATION VEHICLE 134
4.1 General Description 134
4.2 Evaluation of the Noise Due to Speed 138
4.3 Evaluation of the Gain Due to Lighting System 142
4.4 Evaluation of Pavement Images for Ability to Recognize Cracks 145
4.5 Evaluation of the Effect of the Vehicle Movement on Image 167

CHAPTER 5 CONCLUSION 175

REFERENCES 179

BIBLIOGRAPHY 183

APPENDICES 184
Appendix A: Sample Report for MTF Evaluation Using PhotoES_AM Plugin for ImageJ 185
Appendix B: Sample Report for SNR Evaluation Using PhotoES_AM Plugin for ImageJ 186
Appendix C: Sample Report for Filtering Technique Using PhotoES_AM Plugin for ImageJ 187

ABOUT THE AUTHOR End Page

PAGE 7

LIST OF TABLES

Table 1.1 Common Image Formats and Their Attributes (DeMello 2003) 19
Table 1.2 Asphalt Concrete Surfaced Pavement Distress Types 23
Table 1.3 Jointed Concrete Surfaced Pavement Distress Types 23
Table 2.1 Specified and Measured Intensity Values and Errors for Patch Number 15 (red) 35
Table 2.2 Dynamic Ranges of Forward-View Camera for Different Apertures 44
Table 2.3 Change in the Image Size (in pixels) Due to Vehicle Movement 49
Table 2.4 Sample Guidelines for Setting Focusing Distances (in meters) of the Forward-View and Side-View Cameras (with a pixel pitch of 6.7 µm and MTF 10 = 74 lp/mm) 59
Table 2.5 Sample Guidelines for Setting Focusing Distances (in meters) of the Pavement Camera (with a pixel pitch of 10.0 µm and MTF 10 = 28 lp/mm) 59
Table 2.6 Effect of the Vehicular Vibrations (vibration in pixels) on the FDOT Vehicle's Imaging Systems 63
Table 2.7 Relationship Between SNR and Image Quality (Specht 2004) 63
Table 2.8 Results of the SNR Testing for Forward-View Camera at Various Aperture Settings (sunny conditions) 66
Table 3.1 Full Well Capacity of Three Given Cameras (Courtesy of Roper Scientific) and Computed Value for Basler L-100 Camera 88
Table 3.2 Magnitude of the Best and Worst SNR for Forward-View and Pavement Cameras of FDOT Highway Evaluation Vehicle 95
Table 3.3 Intensity Values for 7 x 7 Window with the One for the Central Pixel Highlighted 126

PAGE 8

Table 3.4 Mean Values of the Sub-areas 127
Table 3.5 Intensity Values for the Directional Subset 4 127
Table 3.6 Mean and Variance of Luminance Values of Each Wedge of the Gray-scale Target (from Fig. 3.31 (b)) 129
Table 3.7 Results of the Noise Filtration Test on Synthetic Image 131
Table 4.1 Description of Experiment Conditions 135
Table 4.2 Definition of Notations Used in Figures 4.16 - 4.25 147
Table 4.3 Definition of Notations Used in Figures 4.26 - 4.39 154
Table 4.4 Dimensions of Crack Features Evaluated Based on the Images and the Corresponding Errors () at Different Speeds (US 441, lights on) 161
Table 4.5 Dimensions of Crack Features Evaluated Based on the Images and the Corresponding Errors () at Different Speeds (US 441, lights off) 161
Table 4.6 Dimensions of Crack Features Evaluated Based on the Images and the Corresponding Errors () at Different Speeds (SR 331, lights on) 162
Table 4.7 Dimensions of Crack Features Evaluated Based on the Images and the Corresponding Errors () at Different Speeds (SR 331, lights off) 162
Table 4.8 Dimensions of Crack Features Evaluated Based on the Images and the Corresponding Errors () at Different Speeds (SR 28, lights on) 163
Table 4.9 Dimensions of Crack Features Evaluated Based on the Images and the Corresponding Errors () at Different Speeds (SR 28, lights off) 163
Table 4.10 Repeatability of Crack Evaluation 165
Table 4.11 Number of Pixels in Image Displacement Due to Vehicle Movement (25 mph) 174

PAGE 9

LIST OF FIGURES

Figure 1.1 FDOT Highway Evaluation Vehicle 3
Figure 1.2 Forward-View Camera 4
Figure 1.3 Pavement Camera 5
Figure 1.4 Pavement Lighting System 6
Figure 1.5 Applanix POS TM LV System Used in FDOT Highway Evaluation Vehicle 8
Figure 1.6 Block Diagram of Applanix POS TM LV System in FDOT Highway Evaluation Vehicle 8
Figure 1.7 Laser Profiling Unit Installed on the Front Bumper 9
Figure 1.8 Schematic Diagram of a Digital Camera 11
Figure 1.9 Effect of Resolution on Image Interpretation 13
Figure 1.10 Color Depth of the Pavement Segment 14
Figure 1.11 The RGB Color Cube 15
Figure 1.12 Main Window of the ImageJ Image Processing Software 16
Figure 1.13 Main Window of the PhotoES_AM Plugin Written for ImageJ Software 17
Figure 1.14 Illustration of JPEG 3-stage Compression Algorithms 21
Figure 1.15 Illustration of the Image, Object, and Focal Length Relationship 22
Figure 1.16 (a) Alligator (fatigue) Cracking; (b) Rutting on Flexible Pavement 24
Figure 1.17 (a) Corner Break; (b) Large Panel Crack on Rigid Pavement 24

PAGE 10

Figure 2.1 Forward-View Image: Unclear Traffic Signs as a Result of Low Resolution (640x480) 30
Figure 2.2 Forward-View Image (resolution of 1300 x 1024): Undesirable Color Quality Due to Use of a Near Infra-red Filter Partially Absorbing the Visible Spectrum 30
Figure 2.3 Forward-View Image (resolution of 1300 x 1024): Unnatural Color Quality Due to Incorrect White Balance Settings 31
Figure 2.4 Forward-View Image (resolution of 1300 x 1024): Poor Quality Due to Noise Introduced with Long Exposure Settings 31
Figure 2.5 Digital Images of a Macbeth Color Checker Taken with the Forward-View Camera with Aperture Setting of F2.8 33
Figure 2.6 Digital Images of a Macbeth Color Checker Taken with the Forward-View Camera with Aperture Setting of F4.0 33
Figure 2.7 Digital Images of a Macbeth Color Checker Taken with the Forward-View Camera with Aperture Setting of F5.6 33
Figure 2.8 Digital Images of a Macbeth Color Checker Taken with the Forward-View Camera with Aperture Setting of F8.0 33
Figure 2.9 Luminance Error in the Color Test for Different Apertures 36
Figure 2.10 Intensity Errors for Each Color Channel of the Forward-View Camera (sunlight-based white balance settings) 38
Figure 2.11 Intensity Errors for Each Color Channel of the Forward-View Camera (incandescent light-based white balance settings) 38
Figure 2.12 Comparison of Two Different Near Infra-red Filters Mounted on Forward-View Camera 39
Figure 2.13 Flare Created by Sunlight Directly Impacting the Optical System 41
Figure 2.14 Sunshades on Cameras to Prevent Flare 41
Figure 2.15 Density Step Target with Optical Density Range from 0.07 to 1.5 43
Figure 2.16 Dynamic Ranges for Density Step Target (Figure 2.15) with Camera Aperture Setting of F2.8 43

PAGE 11

Figure 2.17 Insufficient Gray Scale Range of Forward-View Image with Corresponding Luminance Histogram Plots 45
Figure 2.18 Sufficient Gray Scale Range of Forward-View Image with Corresponding Luminance Histogram Plots 46
Figure 2.19 Arbitrary Position of the Starting Point 51
Figure 2.20 ISO 12233 Resolution Chart (inset is shown for the horizontal resolution bar 8) 55
Figure 2.21 Intensity Profile Plot for the Image of an ISO 12233 Target for Scale Value of 8 (Figure 2.20) Corresponding to a Spatial Frequency of 58 lp/mm 55
Figure 2.22 MTF Response of the Forward-View Digital Camera (1296x1024) 57
Figure 2.23 Comparison of MTF Response of Mega-pixel (1296x1024) and VGA (640x480) Resolution Cameras 57
Figure 2.24 Minimum Crack Width Recognition from Image Captured by the FDOT Highway Evaluation Vehicle 59
Figure 2.25 Images Taken with the Side-View Camera of FDOT Highway Evaluation Vehicle in Static and Vibratory Modes 61
Figure 2.26 Trace of the Laser Beam for an Engine Speed of 3,000 rpm 62
Figure 2.27 Measurement of Average White and Black Intensity Values and Standard Deviations of the Image of Macbeth Color Checker for an Aperture Setting of F5.6 65
Figure 2.28 (a) Maximum Tolerable Noise Within Gray Levels; (b) Example of Low Level of Noise; and (c) Example of High Level of Noise Prohibiting Crack Recognition 67
Figure 2.29 Images Affected by Barrel and Pincushion Lens Distortions 68
Figure 2.30 10% Barrel Distortion of the Forward-View Camera Lens in the FDOT Highway Evaluation Vehicle 69

PAGE 12

Figure 2.31 Evaluation of Lens Distortion Based on Comparison of Object and Image Lengths 70
Figure 3.1 Typical CCD Sensor Containing Micro-lenses, Colored Filters, and Photosensitive Diodes (Source: Fuji Photo Film USA) 72
Figure 3.2 Gaussian Probability Density Function 73
Figure 3.3 The CCD Sensor Detail 74
Figure 3.4 (a) Front Face of the Line-scan Camera Basler L103 Without Optics; (b) Digital Camera DVC 1310c 75
Figure 3.5 Example of the Pixel Intensity Values of the Image 79
Figure 3.6 (a) Basic Structure of the Picture Element Cell of the CCD Sensor; (b) Structure of the Area-scan Interline-transfer CCD Sensor 81
Figure 3.7 Structure of the Line-scan CCD Sensor 82
Figure 3.8 Noise Due to Improper Processing of Odd and Even Pixels on the Line-scan Sensor 83
Figure 3.9 Sources of Noise and the Source of Their Occurrence in Digital Camera 84
Figure 3.10 (a) Example of Dark Frame Image; (b) Dark Frame Image Showing Luminance Due to the On-chip Amplifier 89
Figure 3.11 Details of the Cooling Fan for Forward-View Imaging System of FDOT Highway Evaluation Vehicle 90
Figure 3.12 Peltier Cooling Element: (a) Principle; (b) Structure 91
Figure 3.13 Example of Bias Frame Image 93
Figure 3.14 Example of Column Defect in a Flat Frame Image 96
Figure 3.15 (a) Example of Typical Flat Frame Image; (b) Flat Frame Showing Some Vignetting on the Corners as Well as Dark Halos From the Dust Specks in the Optical Path 97
Figure 3.16 JPEG Artifacts from Pavement Image (zoomed-in) where Lines Demarcate 8 by 8 Pixel Blocks Processed by JPEG Algorithm 98

PAGE 13

Figure 3.17 (a) Saturation of the Parallel Registers Highlighted in White; (b) Saturation in Forward-View Image of FDOT Highway Evaluation Vehicle 100
Figure 3.18 (a) Saturation of the Image from Point Light Sources; (b) Serial Saturation of the Image Where Low Intensity Areas are Intact While High Intensity Areas are Saturated 100
Figure 3.19 Diffusely Scattered Light in the Camera Resulting in Flare 102
Figure 3.20 (a) Flare in the Pavement Image of the Standard Grayscale Target; (b) Example of Flare that Produces Low Readability of the Text; (c) Example of Heavy Flare Problem Resulting in Ghosting (on the left) and Veiling Glare 102
Figure 3.21 Effects of the Sunlight and Pavement Lighting System on Visibility of the Crack 105
Figure 3.22 SNR vs. Exposure Time for Digital Cameras (Courtesy of Roper Scientific) 106
Figure 3.23 (a) Example of Poisson Distribution for Different Mean Values; (b) Example of Gaussian and Poisson Distributions Plotted for a Mean Value of 10 109
Figure 3.24 Example of 3 x 3 Convolution Kernel and the Corresponding Image Neighborhood 113
Figure 3.25 (a) High Contrast Neighborhood of a Pixel; (b) Demarcation of 3 x 3 Sub-areas in the Neighborhood of the Pixel in Figure 3.25 (a) (9 x 9) 118
Figure 3.26 Directional Indices 119
Figure 3.27 Mean Intensity Values of Sub-areas 120
Figure 3.28 Definition of Directional Subsets (unshaded areas) on One Side of the Edges 121
Figure 3.29 Overlapping of the 3 x 3 Sub-areas in the 7 x 7 Neighborhood 122
Figure 3.30 Fifteen-wedge Grayscale Target 123

PAGE 14

Figure 3.31 (a) Evaluating the Mean and Standard Deviation of the Intensity of One Patch Using ImageJ Software; (b) Plot of Variances of Each Patch of the Grayscale Target 124
Figure 3.32 (a) Evaluated Image; (b) Zoomed-in 7 x 7 Window with the Pixel Under Consideration at its Center 126
Figure 3.33 ImageJ Loaded with an Image of the 15-wedge Grayscale Target 128
Figure 3.34 Synthetic Image Created for Verification of Filtering Technique 131
Figure 3.35 Example of Noise Removal from Synthetic Image 132
Figure 3.36 Example of Noise Removal from Pavement Image (concrete pavement) 132
Figure 3.37 Example of Noise Removal from Pavement Image (asphalt pavement) 133
Figure 4.1 (a) Temporary Closure of the Road; (b) Standard Targets Placed on the Road 136
Figure 4.2 (a) FDOT Highway Evaluation Vehicle Driven Through the Testing Area; (b) Verification of the Optical Distortion Due to Optics Used in the Pavement Imaging System 137
Figure 4.3 SNR and Gain vs. Speed Plot for US 441 with Pavement Lights On 138
Figure 4.4 SNR and Gain vs. Speed Plot for US 441 with Pavement Lights Off 139
Figure 4.5 SNR and Gain vs. Speed Plot for SR 331 with Pavement Lights On 139
Figure 4.6 SNR and Gain vs. Speed Plot for SR 331 with Pavement Lights Off 140
Figure 4.7 SNR and Gain vs. Speed Plot for SR 28 with Pavement Lights On 140
Figure 4.8 SNR and Gain vs. Speed Plot for SR 28 with Pavement Lights Off 141
Figure 4.9 Variation of SNR and Gain for US 441 with Pavement Lights On (Exposure 1/40,000 sec) 142

PAGE 15

Figure 4.10 Variation of SNR and Gain for US 441 with Pavement Lights Off (Exposure 1/40,000 sec) 142
Figure 4.11 Variation of SNR and Gain for SR 331 with Pavement Lights On (Exposure 1/40,000 sec) 143
Figure 4.12 Variation of SNR and Gain for SR 331 with Pavement Lights Off (Exposure 1/40,000 sec) 143
Figure 4.13 Variation of SNR and Gain for SR 28 with Pavement Lights On (Exposure 1/40,000 sec) 144
Figure 4.14 Variation of SNR and Gain for SR 28 with Pavement Lights Off (Exposure 1/40,000 sec) 144
Figure 4.15 Variation of SNR and Gain for SR 28 with Pavement Lights Off (Exposure 1/19,000 sec) 145
Figure 4.16 Verification of Manual Survey (US 441, image DF) with Image Captured by DiMage5 Digital Camera 148
Figure 4.17 Verification of Manual Survey (US 441, image E) with Image Captured by DiMage5 Digital Camera 148
Figure 4.18 Verification of Manual Survey (US 441, image BC) with Image Captured by DiMage5 Digital Camera 149
Figure 4.19 Verification of Manual Survey (US 441, image A) with Image Captured by DiMage5 Digital Camera 149
Figure 4.20 Verification of Manual Survey (SR 331, image CD) with Image Captured by DiMage5 Digital Camera 150
Figure 4.21 Verification of Manual Survey (SR 331, image AB) with Image Captured by DiMage5 Digital Camera 150
Figure 4.22 Verification of Manual Survey (SR 331, image EF) with Image Captured by DiMage5 Digital Camera 151
Figure 4.23 Verification of Manual Survey (SR 28, image CE) with Image Captured by DiMage5 Digital Camera 151
Figure 4.24 Verification of Manual Survey (SR 28, image D) with Image Captured by DiMage5 Digital Camera 152

PAGE 16

Figure 4.25 Verification of Manual Survey (SR 28, image AB) with Image Captured by DiMage5 Digital Camera 152
Figure 4.26 Widths of the Crack Features (US 441, lights on) Measured from Images at Different Speeds 155
Figure 4.27 Lengths of the Crack Features (US 441, lights on) Measured from Images at Different Speeds 155
Figure 4.28 Widths of the Crack Features (US 441, lights off) Measured from Images at Different Speeds 156
Figure 4.29 Lengths of the Crack Features (US 441, lights off) Measured from Images Captured at Different Speeds 156
Figure 4.30 Widths of the Crack Features (SR 331, lights on) Measured from Images at Different Speeds 157
Figure 4.31 Lengths of the Crack Features (SR 331, lights on) Measured from Images at Different Speeds 157
Figure 4.32 Widths of the Crack Features (SR 331, lights off) Measured from Images at Different Speeds 158
Figure 4.33 Lengths of the Crack Features (SR 331, lights off) Measured from Images at Different Speeds 158
Figure 4.34 Widths of the Crack Features (SR 28, lights on) Measured from Images at Different Speeds 159
Figure 4.35 Lengths of the Crack Features (SR 28, lights on) Measured from Images at Different Speeds 159
Figure 4.36 Widths of the Crack Features (SR 28, lights off) Measured from Images at Different Speeds 160
Figure 4.37 Lengths of the Crack Features (SR 28, lights off) Measured from Images at Different Speeds 160
Figure 4.38 Correlation Between Variability in Crack Measurements and SNR Values for US 441 166

PAGE 17

Figure 4.39 Correlation Between Variability in Crack Measurements and SNR Values for SR 28 166
Figure 4.40 Global vs. Body Coordinate System 167
Figure 4.41 Applanix POSPack Main Window 168
Figure 4.42 Body Acceleration Data on US 441 169
Figure 4.43 Body Velocity Data in the X Direction on US 441 169
Figure 4.44 Incremental Distance Traveled in the X Direction on US 441 170
Figure 4.45 Incremental Distance Traveled in the Y Direction on US 441 170
Figure 4.46 Incremental Distance Traveled in the Z Direction on US 441 171
Figure 4.47 Incremental Distance Traveled in the X Direction on SR 331 171
Figure 4.48 Incremental Distance Traveled in the Y Direction on SR 331 172
Figure 4.49 Incremental Distance Traveled in the Z Direction on SR 331 172

PAGE 18

EVALUATION OF DIGITAL IMAGING SYSTEMS USED IN HIGHWAY APPLICATIONS

Alexander Mraz

ABSTRACT

Manual pavement condition surveys are gradually being replaced by more comprehensive automated surveys conducted by multi-function highway evaluation vehicles. Highway evaluation vehicles are generally equipped with laser profiling, land navigation, and imaging sub-systems. The imaging system consists of three cameras: forward-view and side-view digital area-scan cameras for capturing images of traffic signs and right-of-way safety features, and a pavement digital line-scan or area-scan camera for capturing images of the pavement surface. In addition to the 3-laser and accelerometer-based profiling system, these vehicles are also equipped with differential global positioning equipment (DGPS) and an inertial measurement unit (IMU) for cross-slope, curvature and grade measurements. Digital imaging systems installed in automated highway evaluation vehicles are generally designed on a modular basis where subsystems by different manufacturers are assembled to customize the system and fulfill the users' needs while minimizing the cost. In most such cases, manufacturers' specifications for a subsystem would not be reliable with respect to the eventual performance of that subsystem as part of the entire assembly.

PAGE 19

On the other hand, no guidelines are available for performance evaluation of imaging systems as assemblies of discrete subsystems. Moreover, images acquired by digital cameras can become contaminated by random noise affecting their quality and the ability to identify important features. These issues surfaced during the development and testing of the Florida Department of Transportation (FDOT) highway evaluation vehicle. The first phase of the work involved in this dissertation research concerns the study of basic criteria for the evaluation of image quality through measurement of well-defined properties of images such as color reproduction, tone reproduction, and detail reproduction, as well as the levels of noise and optical distortion. Standard and reliable methods that can be adopted for evaluation of the above properties are discussed first. Then, by applying the above evaluation criteria to the imaging systems of the FDOT highway evaluation vehicle, it is shown how the sources of substandard image quality can be recognized and the optimum settings achieved. The second phase of the dissertation research is focused on the investigation of the sources of noise that can affect digital line-scan distress images. As a result of this study, a novel technique was developed to filter out noise present in pavement distress images by using intensity measurements obtained from a standard grayscale target. In addition, a detailed experimental study was conducted to investigate the effect of the speed of evaluation and lighting conditions on the accuracy and repeatability of digital line-scan images in representing the actual distress condition of a pavement. The conclusions drawn from the second phase can be used to minimize the effect of noise on digital images of pavement distress and to improve the accuracy of evaluation of

PAGE 20

pavement cracks based on digital images. Hence the results of this study will certainly enhance the overall efficiency of the automated evaluation of pavement distress and highway features.

PAGE 21

CHAPTER 1 EVALUATION OF PAVEMENT DISTRESS AND HIGHWAY FEATURES USING DIGITAL IMAGING

1.1 Development of the FDOT Highway Evaluation Vehicle

Until recently, the Florida Department of Transportation (FDOT) used forward-view images of the highway network, which were made available on a three-year cycle through a Consultant. In May of 2000, a study was conducted to explore potential improvements to FDOT's videolog program (Dougan, 2001), which resulted in the following recommendations: (1) increasing the frequency of video-logging, (2) upgrading the scale of image acquisition to obtain right-of-way data from the outer or center lane and condition data from the pavement, and (3) creating a department-wide unit to manage the consolidated image and field data acquisition, processing, storage and retrieval operations. Hence, a research project was initiated to explore a fully automated exhaustive evaluation operation that includes adding the following functionalities to the existing imaging in the forward direction: (1) imaging in the right-of-way mode to identify up-to-date roadway features, including safety related features such as bridge and railroad crossing identification, the edge line of pavements, and images of ramps,

PAGE 22

(2) imaging in the downward direction, (3) global positioning for location referencing of the collected data, (4) acquiring roadway cross-slope data, and (5) collecting pavement roughness and rut data for pavement distress evaluation. It was envisioned that the above objectives could be achieved by using a profiler van equipped with a video camera system for imaging in the forward, sideward, and downward directions, an inertial measurement unit for collecting pavement cross-slope data, and Differential Global Positioning System (DGPS) equipment for data geo-referencing purposes. At present, pavement evaluation crews of FDOT and several other Departments of Transportation (DOTs) conduct windshield surveys at relatively slow speeds to identify the types of surface cracking and other distresses on pavements. In addition to this, when necessary, they have to be physically present in a travel lane to take manual rut measurements, exposing themselves to hazardous conditions. Pavement imaging can preclude the need for conducting hazardous surveys, especially on high-speed facilities. On the other hand, pavement images, which provide a permanent record of the pavement surface condition, can be used to evaluate and analyze data using a desktop computer and also to perform quality control of the windshield survey pavement evaluation data. As the first task of the research project, in April 2002, International Cybernetics Corporation (ICC) in Largo, Florida, manufactured the highway evaluation vehicle commonly referred to as the Florida Department of Transportation (FDOT) Highway

PAGE 23

Evaluation Vehicle (Figure 1.1). This digital image data collection system consists of forward-view, side-view, and pavement digital imaging systems (ICC 2000). Moreover, this vehicle is also equipped with the Applanix Position Orientation System (POS), consisting of a Differential Global Position System (DGPS) and an Inertial Measurement Unit (IMU), capable of delivering accurate and instantaneous information about the position, speed, and orientation of the vehicle, as well as the grade and curvature of the roadway. Furthermore, the vehicle is capable of obtaining longitudinal profiling (IRI) data through a laser accelerometer system mounted on the front bumper of the highway evaluation vehicle.

Figure 1.1 FDOT Highway Evaluation Vehicle

1.1.1. Forward-View and Side-View Imaging Systems

The FDOT highway evaluation vehicle uses two high resolution (1300 x 1024 pixels) digital area-scan cameras for forward-view and side-view images at a rate of up to 12 frames per second, enabling video capture at highway operating speeds of up to 75 mph.
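For a sense of scale (an illustrative calculation, not a figure from the vehicle's documentation), the roadway length covered between consecutive frames at the quoted speed and frame rate can be computed directly:

public class FrameSpacing {
    public static void main(String[] args) {
        double speedMph = 75.0;                // maximum survey speed quoted above
        double fps = 12.0;                     // area-scan frame rate
        double speedMs = speedMph * 0.44704;   // mph -> m/s
        double metersPerFrame = speedMs / fps; // ground distance between frames
        System.out.printf("%.2f m of roadway per frame%n", metersPerFrame);
        // ~2.79 m per frame at 75 mph and 12 frames per second
    }
}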

PAGE 24

The forward-view imaging system is used to record highway inventory features such as pavement markings, number of lanes, permanent roadway signage in front of the vehicle, work zones, traffic control and data collection devices. The side-view imaging system is used to specifically record the roadway signs and maintenance and safety features. The forward-view camera (Figure 1.2) captures the panoramic view of the road while the side-view camera is set up to obtain the side-view right-of-way data. These cameras are mounted in a Pelco enclosure with a fan and a heater to protect them from environmental effects. The forward-view camera uses a C-mount lens with a 16.5 mm focal length while the side-view camera uses a C-mount lens with a 25 mm focal length.

Figure 1.2 Forward-View Camera

1.1.2. Pavement Imaging System

The pavement (downward-view) imaging system of the FDOT highway evaluation vehicle consists of a Basler L-103 line-scan camera (Figure 1.3) with a Sigma Fisheye lens of 15 mm focal length, controlled by a Windows 2000 operating system

PAGE 25

and ICC LineScan capturing software. The pavement camera is mounted 9.25 ft over the pavement surface, providing the ability to capture an area with a width and length of 14.5 ft and 20 ft, respectively. The image created by the capturing software contains 2048 by 2942 pixels, representing the width and length of the captured area, respectively. Each image is built up of 2942 image lines captured separately and combined together to create one image. Each image line can be captured in preset exposure times of 1/19,000 or 1/40,000 second depending on the lighting conditions. To ensure good quality images of the pavement during the short period of time in which each image line is captured, to minimize the effect of the shadow cast on the visibility of pavement features such as cracks, and to enable pavement surveys in the night time, the FDOT highway evaluation vehicle is equipped with a built-in pavement lighting system, shown in Figure 1.4. The pavement lighting system consists of ten polished reflectors, each containing a 150 Watt lamp.

Figure 1.3 Pavement Camera
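A back-of-the-envelope check of these numbers shows why such short line exposures are needed; the 60 mph survey speed below is assumed purely for illustration:

public class LineScanTiming {
    public static void main(String[] args) {
        double imageLengthFt = 20.0;              // longitudinal coverage of one image
        int linesPerImage = 2942;                 // image lines per captured image
        double lineSpacingMm = imageLengthFt * 304.8 / linesPerImage;
        double speedMph = 60.0;                   // assumed survey speed (illustration only)
        double speedMmPerS = speedMph * 0.44704 * 1000.0;
        double lineRateHz = speedMmPerS / lineSpacingMm; // required line trigger rate
        System.out.printf("line spacing: %.2f mm, required line rate: %.0f lines/s%n",
                lineSpacingMm, lineRateHz);
        // ~2.07 mm per line and about 12,900 lines/s at 60 mph, i.e. a line period
        // of roughly 77 microseconds; the preset exposures of 1/19,000 s and
        // 1/40,000 s fit within that period.
    }
}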

PAGE 26

Figure 1.4 Pavement Lighting System

1.1.3. Position and Orientation System

The Global Position System (GPS) is an excellent positioning system, especially for slow moving vehicles in open areas. However, in road surveys one cannot afford the occasional loss in position data caused by blocked satellites, and one requires data updated more frequently than is possible with current receiver technology. In many such cases, the requirement can be met with the integration of an inertial system, which can update information at a frequency of 200 Hz or higher. Use of inertial technology to measure position and orientation has a number of advantages, especially for moving vehicles. It provides high accuracy irrespective of vehicle motion, and is self-contained. However, errors grow over time, making an inertial system best suited for short-term observations only. Thus, inertial systems require an external position fix at the start and end of the vehicle's run. This provides a geographic context for the system's

PAGE 27

observations. If both systems, GPS and the inertial system, are coupled together, regular external position fixes are provided through the GPS unit onboard the vehicle. The central part of an inertial navigation system is the Inertial Measurement Unit (IMU), which is a self-contained sensor consisting of three silicon accelerometers and three fiber-optic gyroscopes. This sensor is bolted to the vehicle, so that it undergoes the same motion as the vehicle. The accelerometers measure accelerations along each of the three axes: X in the travel direction, Y in the lateral right direction, and Z towards the vehicle's floorboard. If the IMU's initial location is known, double integration of the accelerations experienced by the vehicle will yield the vehicle position. Similarly, the three gyroscopes measure the rates of angular rotation about the X, Y, and Z axes and are used to determine vehicle orientation as well as the grade and cross-slope of a road. In the FDOT highway evaluation vehicle, the above functions are accomplished by a mounted Applanix POS TM LV (Position and Orientation System for Land Vehicles) system which integrates Differential GPS (DGPS) and inertial technologies into one precise position and orientation location system, generating a stable, reliable, and repeatable positioning solution that provides the benefits of both systems while minimizing their shortcomings. The core of the Applanix system, shown in Figure 1.5, is the IMU, which provides the inertial solution. The IMU is complemented by two GPS receivers, whose position information serves to provide the inertial solution with position updates, thereby controlling the error growth. If the GPS receiver is unable to provide position information (e.g. due to blocked satellites), the IMU will continue to provide

PAGE 28

position and orientation information unaided. Figure 1.6 shows the block configuration of the Applanix system.

Figure 1.5 Applanix POS TM LV System Used in FDOT Highway Evaluation Vehicle

Figure 1.6 Block Diagram of Applanix POS TM LV System in FDOT Highway Evaluation Vehicle

1.1.4. Laser Profiling System

The FDOT highway evaluation vehicle is equipped with a profiling system, shown in Figure 1.7, that consists of three laser units, two of which have a sampling frequency of 32 kHz and a third one with a sampling frequency of 16 kHz. The lasers are mounted on

PAGE 29

the front bumper, and the two 32 kHz laser units are positioned above the wheel paths to profile the road surface and evaluate the rut depth. The sensors used in the vehicle are semi-conductor laser diodes that use appropriate optics to project a laser spot on the pavement surface. The reflected laser can be used to determine the vertical distance from the vehicle's bumper to the pavement. In order to establish a reference plane from which the profile is measured, an accelerometer has to be used. Hence the vehicle also utilizes a vertical position sensing Jewell LCA-100 Series accelerometer with a sensitivity range of 0.5/g to 10/g. Accelerometer readings can be used to determine the vertical position of the vehicle bumper with respect to the reference plane. Then, the road profile is determined from the numerical difference between the vehicle's vertical position and the distance between the vehicle body and the pavement surface. Finally, the International Roughness Index (IRI) is computed from the road profile along the left and right wheel paths.

Figure 1.7 Laser Profiling Unit Installed on the Front Bumper
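A minimal single-axis sketch of this profile computation is given below; it assumes simple rectangular (Euler) integration and hypothetical sample values, and is not ICC's actual processing code:

public class ProfileSketch {
    // Double-integrate vertical acceleration to get the bumper elevation,
    // then subtract the laser height reading to obtain the road profile.
    // The result is relative to an arbitrary datum, as described in the text.
    static double[] computeProfile(double[] accel, double[] laser, double dt) {
        double v = 0.0, z = 0.0;                // vertical velocity and position
        double[] profile = new double[accel.length];
        for (int i = 0; i < accel.length; i++) {
            v += accel[i] * dt;                 // first integration: velocity
            z += v * dt;                        // second integration: position
            profile[i] = z - laser[i];          // bumper position minus laser distance
        }
        return profile;
    }

    public static void main(String[] args) {
        double[] accel = {0.00, 0.02, -0.01, 0.03};    // m/s^2 (hypothetical samples)
        double[] laser = {0.300, 0.301, 0.299, 0.302}; // m, bumper-to-pavement
        for (double p : computeProfile(accel, laser, 1.0 / 32000.0)) // 32 kHz sampling
            System.out.printf("%.6f%n", p);
    }
}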

PAGE 30

1.2 Overview of Digital Imaging

1.2.1. State-of-the-Art of Digital Imaging

According to Wang (Wang 2000), a common method of imaging pavement surfaces was using the analog format through area-scan cameras. A digitizing process then converts the analog-based images, in which analog data is transformed into a computer-understandable digital format. Wang (Wang 2000) discusses the advantages of the relatively new digital camera technology. On the other hand, line-scan cameras scan one line at a time with a resolution as high as 6,000 pixels per line (2Kx2K) with a data rate of 30 MHz. Captured single lines are then compiled to form a 2-D area for analysis. Although several problems associated with analog area-scan cameras, such as the relatively low resolution and the necessity for digitizing, do not exist with digital line-scan cameras, Wang (Wang 2000) emphasizes the need for higher light intensity in line-scan cameras. For area scanning at highway speeds, the maximum available exposure time is about 60 µs, while in line scanning the maximum available exposure for one line is about 50 µs in order to capture a crack that is less than 2 mm. Since these short exposure times require high illumination intensity, strobe-illuminating devices are effective for area-scan cameras. However, for line-scan cameras, high intensity continuously illuminating devices are needed. The main difficulty associated with automated surveys of pavement surface distress is the rapid rate of data collection and the corresponding extraordinary computational needs when real-time processing is to be implemented.

PAGE 31

However, real-time processing technology is still in development. When a compromise is made with respect to computing performance, both the data quality and the performance speed are affected. However, this issue is gradually being resolved with the continued development of high-speed processors. Wang (Wang 2002) also describes a pavement imaging system that is capable of analyzing automated distress survey data on a real-time basis at speeds of up to 20 mph.

1.2.2. Analog to Digital Conversion

In electronic terms, analog signals contain data over a continuous range. Analog voltage signals represent the intensity or brightness of an image over a given area. However, modern computers can work only with digital data that are represented by discrete numbers. Therefore, the primary function of digital cameras, the schematic diagram of which is shown in Figure 1.8, is to convert analog image intensities into digital values. The process of measuring the intensity values in a continuous image at discrete intervals in space is known as sampling.

Figure 1.8 Schematic Diagram of a Digital Camera

PAGE 32

In a digital camera, the image sensor consists of a large number of individual pixels, each of which measures the intensity of light reflected or transmitted from a real scene. Each pixel generates a voltage signal in analog form which is proportional to the amount of light received. The process that converts the analog signal to digital data is performed by an analog-to-digital converter (ADC). The number of gray levels a given camera can recognize between black and white depends on the type of ADC used in the camera. For example, an 8-bit ADC would allow 256 different levels of intensity values between black and white, or pure black, pure white, and 254 different shades of gray in the resulting digital image.

1.2.3. Spatial Resolution

Digital images are composed of pixels and the resolution of a digital image will be largely, but not totally, dependent upon the physical size of a pixel in the image. For a given field of view, dense sampling will produce high resolution images, in which there are a relatively large number of pixels representing a small part of a scene. Spatial resolution dictates the amount of useful information that can be extracted from an image. The effect of resolution on the interpretation of a pavement image is illustrated in Figure 1.9, which displays the same portion of the pavement image at three different resolutions.

PAGE 33

Figure 1.9 Effect of Resolution on Image Interpretation. (a) 16 by 24 pixels; (b) 34 by 50 pixels; and (c) 136 by 200 pixels

To determine the resolution of the imaging sensor in a given direction, in terms of the number of pixels per given distance unit, the total number of pixels has to be divided by the dimension of the sensor in that direction. For example, the Basler L103 line-scan camera used in the pavement imaging system of the FDOT highway evaluation vehicle contains a Thomson TH7814A linear sensor which is 20.48 mm wide and comprises 2048 photosensitive pixels. Then, the maximum attainable resolution on the sensor is 2048/20.48, or approximately 100 pixels per mm. The rate at which the intensity value changes over an image is measured by the spatial frequency. Rapid changes in intensity are characterized by high spatial frequencies and they can be represented accurately only in a densely-sampled image. Whenever possible, the sampling chosen for a given image must satisfy the Nyquist criterion, which states that the sampling frequency must be at least twice the highest spatial frequency found in the image (Efford 2000). If the image sampling does not satisfy the Nyquist criterion, then the image may suffer from the effect of aliasing. In aliasing, a signal of a certain frequency that has been undersampled can appear at a lower frequency upon image reconstruction. This obviously has the effect of distorting the signal, introducing frequency components that are unrepresentative of the original signal.
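The sensor-resolution and Nyquist figures quoted above can be verified with a few lines of arithmetic:

public class NyquistCheck {
    public static void main(String[] args) {
        int pixels = 2048;                 // Thomson TH7814A photosensitive pixels
        double widthMm = 20.48;            // sensor width in mm
        double samplingPxPerMm = pixels / widthMm;     // 100 pixels per mm
        double nyquistLpPerMm = samplingPxPerMm / 2.0; // highest representable frequency
        System.out.printf("sampling: %.0f px/mm, Nyquist limit: %.0f lp/mm%n",
                samplingPxPerMm, nyquistLpPerMm);
        // A pattern finer than 50 line pairs/mm on this sensor would alias.
    }
}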

PAGE 34

1.2.4. Bit Depth

The bit depth is determined by the number of possible values used to define each pixel. In computing, one bit is the smallest amount of data that can be processed by a computer. A bitonal image made of pixels consisting of one bit each can represent two tones only, which are typically black and white. This image type uses the value zero for the intensity of the black color and the value one for the intensity of the white color. The number of available gray levels is then 2^b, where b represents the bit depth. Accordingly, for black and white imaging, 256 gray levels, or 8 bits, are necessary for the eye to recognize a continuous range of gray tones without any noticeable banding. A comparison of images of the same pavement section at different bit depths is shown in Figure 1.10.

Figure 1.10 Color Depth of the Pavement Segment. (a) 1-bit (black and white); (b) 4-bit (16 gray levels); and (c) 8-bit (256 gray levels)
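The bit-depth reduction shown in Figure 1.10 can be sketched by requantizing an 8-bit intensity to b bits; this is an illustrative routine, with the rounding scheme chosen for simplicity:

public class BitDepth {
    // Requantize an 8-bit intensity (0-255) to b bits and map back to 0-255.
    static int requantize(int intensity, int b) {
        int levels = 1 << b;                     // 2^b available gray levels
        int step = 255 / (levels - 1);           // spacing between retained levels
        return Math.round((float) intensity / step) * step;
    }

    public static void main(String[] args) {
        System.out.println(requantize(200, 4));  // 16 gray levels, as in Fig. 1.10(b)
        System.out.println(requantize(200, 1));  // black or white, as in Fig. 1.10(a)
    }
}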

PAGE 35

On the other hand, digital color images are usually composed of three primary colors: red (R), green (G), and blue (B). In a color image conforming to the RGB model, the color value of a given image pixel is a vector with three components, corresponding to the R, G, and B colors. R, G, and B can be regarded as orthogonal axes defining a three-dimensional color intensity space where every possible color is mapped onto a point inside the color cube shown in Figure 1.11 (Efford 2000). Eight bits per pixel are required for each of the three colors in order to display continuous tone and a photo-realistic image. Therefore, a single pixel has to have 8 bits each for the R, G, and B colors, for a total of 24 bits (256 x 256 x 256 = 2^24), allowing the representation of more than 16.7 million color combinations.

Figure 1.11 The RGB Color Cube
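In Java, the language in which ImageJ (Section 1.2.5) is written, a 24-bit RGB value is conventionally packed into a single int, which makes the 2^24 color-combination count concrete:

public class RgbPacking {
    public static void main(String[] args) {
        int r = 255, g = 128, b = 0;            // 8 bits per channel
        int rgb = (r << 16) | (g << 8) | b;     // pack into one 24-bit value
        int red   = (rgb >> 16) & 0xFF;         // unpack the red channel
        int green = (rgb >> 8) & 0xFF;
        int blue  = rgb & 0xFF;
        System.out.printf("%d combinations, pixel = (%d,%d,%d)%n",
                1 << 24, red, green, blue);     // 16,777,216 combinations
    }
}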

PAGE 36

1.2.5. Software Used for Intensity Measurements of Digital Images (ImageJ)

ImageJ (Rasband 2004) is public domain image processing software that is programmed in the Java programming environment. It can be executed either as an online applet or as a downloadable application on any computer equipped with a Java 1.1 or newer virtual machine environment. The main window of the program is illustrated in Figure 1.12.

Figure 1.12 Main Window of the ImageJ Image Processing Software

The ImageJ program can display, edit, analyze, process, save, and print grayscale or color 8-bit, 16-bit, and 32-bit images. It can read data files in many formats including BMP, JPG, TIFF, GIF, and raw image formats. The ImageJ program is designed to work as a multi-threading application so that time-consuming operations such as image file reading can be performed in parallel with other operations. The program can simultaneously process a number of images limited only by the available memory. The ImageJ program has been designed with an open architecture that provides extensibility via Java plugins. Custom acquisition, analysis, and processing plugins can be developed using ImageJ's built-in editor and Java compiler. As a result, user-written plugins can facilitate solutions to almost any image processing or analysis problem. To solve analytical problems related to the images captured by the FDOT highway

PAGE 37

evaluation vehicle and to post-process pavement distress images by filtering, a PhotoES_AM plugin was written in the Java language. The main window of this plugin is shown in Figure 1.13.

Figure 1.13 Main Window of the PhotoES_AM Plugin Written for ImageJ Software

The PhotoES_AM plugin specifically enables one to measure the Signal-to-Noise Ratio (SNR) of an imaging system based on an image of a standard grayscale target. It also allows a color quality assessment using an image of the Macbeth Color Checker (Figure 2.5) and determination of the Modulation Transfer Function (MTF) (Section 2.6.3) using an image of the bar patterns of a standard resolution target. In addition, it allows the use of local intensity statistics to filter noise out of pavement images to improve their quality.
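For orientation, the skeleton below shows what an ImageJ plugin of this kind looks like; it is a generic sketch built on ImageJ's PlugInFilter interface, not the actual PhotoES_AM source, and the class name and the statistic it reports are illustrative only:

import ij.IJ;
import ij.ImagePlus;
import ij.plugin.filter.PlugInFilter;
import ij.process.ImageProcessor;

// Hypothetical plugin: reports the mean and standard deviation of intensity,
// the kind of measurement PhotoES_AM makes on grayscale-target images.
public class Patch_Statistics implements PlugInFilter {
    public int setup(String arg, ImagePlus imp) {
        return DOES_8G;                       // accept 8-bit grayscale images
    }

    public void run(ImageProcessor ip) {
        int w = ip.getWidth(), h = ip.getHeight();
        double sum = 0, sumSq = 0;
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                int v = ip.getPixel(x, y);
                sum += v;
                sumSq += (double) v * v;
            }
        double n = (double) w * h;
        double mean = sum / n;
        double sd = Math.sqrt(sumSq / n - mean * mean);
        IJ.log("mean = " + mean + ", sd = " + sd);
    }
}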

PAGE 38

1.2.6. Storage of Digital Image Data

If the total number of pixels in an image is known, the file size of the uncompressed image, s_file, can be determined as:

s_file = (p_total x b) / 8        (1.1)

where p_total is the total number of pixels in the image and b represents the image bit depth. For example, if a 24-bit image has 1200 x 800 pixels, then from Eqn. (1.1), the size of the image file will be 2,880,000 bytes, or 2.8 MB. However, the final size of the file that stores an image will be determined by the file format used to save it. Important attributes of the five most commonly used formats are listed in Table 1.1. Many formats, such as JPEG or TIFF, have compression capabilities which will reduce the size of the stored file. When compressing images, equal consideration must be given to the file size and the image quality. There are two basic types of compression methods: (1) lossless, where no information is lost during the compression and decompression processes and the reconstructed image is mathematically and visually identical to its original, and (2) lossy, where some information is lost during the compression process and, depending on the degree of compression, the lost information may or may not be noticeable.
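Eqn. (1.1) and the worked example translate directly into code:

public class FileSize {
    public static void main(String[] args) {
        long pTotal = 1200L * 800L;           // total pixels in the image
        int b = 24;                           // bit depth (24-bit color assumed)
        long sFile = pTotal * b / 8;          // Eqn. (1.1), size in bytes
        System.out.println(sFile + " bytes"); // 2880000, i.e. about 2.8 MB
    }
}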

PAGE 39

Table 1.1 Common Image Formats and Their Attributes (DeMello 2003)

TIFF 6.0 (Tagged Image File Format)
  Extension(s): .tif, .tiff
  Bit-depth(s): 1-bit bitonal; 4- or 8-bit grayscale or palette color; up to 64-bit color
  Compression: Uncompressed; Lossless: ITU-T.6, LZW; Lossy: JPEG
  Color Mgmt.: RGB, Palette, YCbCr, CMYK, CIE L*a*b*
  Web Support: Plug-in or external application
  Metadata Support: Basic set of labeled tags

GIF 89a (Graphics Interchange Format)
  Extension(s): .gif
  Bit-depth(s): 1-8 bit bitonal, grayscale, or color
  Compression: Lossless: LZW
  Color Mgmt.: Palette
  Web Support: Native since Microsoft Internet Explorer 2, Netscape Navigator 2
  Metadata Support: Free-text comment field

JPEG (Joint Photographic Expert Group)/JFIF (JPEG File Interchange Format)
  Extension(s): .jpeg, .jpg, .jif, .jfif
  Bit-depth(s): 8-bit grayscale; 24-bit color
  Compression: Lossy: JPEG
  Color Mgmt.: YCbCr
  Web Support: Native since Microsoft Internet Explorer 3, Netscape Navigator 2
  Metadata Support: Free-text comment field

JP2-JPX/JPEG 2000
  Extension(s): .jp2, .jpx, .j2k, .j2c
  Bit-depth(s): supports up to 2^14 channels, each with 1-38 bits; gray or color
  Compression: Lossless/Lossy: Wavelet
  Color Mgmt.: Palette, YCC, RGB, sRGB, some ICC
  Web Support: Plug-in or external application
  Metadata Support: Basic set of labeled tags

PDF 1.4 (Portable Document Format)
  Extension(s): .pdf
  Bit-depth(s): 4-bit grayscale; 8-bit color support
  Compression: Uncompressed; Lossless: ITU-T.6, LZW, JBIG; Lossy: JPEG
  Color Mgmt.: RGB, YCbCr, Palette, CMYK
  Web Support: Plug-in or external application
  Metadata Support: Basic set of labeled tags


images. The JPEG compression technique relies on the fact that the human eye is much more sensitive to changes of intensity (luminance) in an image than to color (chrominance). The image data are thus separated into luminance and chrominance, and lossy compression algorithms are then applied to the chrominance data only.

JPEG compression is a three-stage process illustrated in Figure 1.14. First, the system divides the image into blocks of 8 by 8 pixels, to which a mathematical transform known as the forward discrete cosine transform (FDCT) is applied. The purpose of this is to transform the image from the spatial domain into the frequency domain. This has its basis in the Fourier transform used by electrical engineers for analyzing the frequency components of signals. Then, the compression procedure averages the 24-bit intensity values (or 8-bit values for a grayscale image) of every pixel inside the block. This average value is stored in the top left-hand corner of the block and the remaining 63 pixels are assigned values relative to this average.

In the next step, quantization is performed, where the values produced by the FDCT are divided by the values in the quantization matrix. When a quantization matrix with sufficiently high quantizing values is used, many of the values produced by the FDCT are reduced to zero, resulting in a blocky appearance in the low frequency part of an image (Figure 3.16). After quantization, the 64 coefficients are prepared for entropy encoding. In this step, the previous coefficient is used to predict the current coefficient and the difference is encoded. The quantized coefficients are then passed to the entropy encoding procedure, which compresses the data further. Huffman and arithmetic encodings are the encoding procedures implemented in the JPEG compression algorithm because they


produce the most effective compression for a wide range of image types (Davies and Fennessy 2001). During the reordering, most zero values can be represented by a very short piece of computer code (ITU-T 1992).

Figure 1.14 Illustration of the 3-Stage JPEG Compression Algorithm

1.3 Relationships Used in Optical Analysis

The most important characteristics of a lens are its magnifying power and its light gathering capacity. The magnification factor, m, of the lens can be expressed as (Efford, 2000):

m = image size / object size = v / u    (1.2)

where u and v are the distances from the lens to the object and to the image sensor, respectively. The power of a lens is usually expressed in terms of the reciprocal of its focal length, f, which represents the distance from the lens to the point at which parallel incident rays converge (Figure 1.15). The commonly used relationship between u, v, and f is as follows (Efford, 2000):

1/f = 1/u + 1/v    (1.3)

Figure 1.15 Illustration of the Image, Object, and Focal Length Relationship

Eqn. (1.3) has been used frequently in this dissertation for analyses involving the digital camera optical system.
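For a concrete sense of Eqns. (1.2) and (1.3), the short Java sketch below solves the lens equation for the image distance and the magnification; the class name, method names, and the sample distances are illustrative assumptions only.

public class LensExample {
    // Eqn. (1.3): 1/f = 1/u + 1/v, solved here for the image distance v
    static double imageDistance(double f, double u) {
        return 1.0 / (1.0 / f - 1.0 / u);
    }

    // Eqn. (1.2): m = v / u
    static double magnification(double f, double u) {
        return imageDistance(f, u) / u;
    }

    public static void main(String[] args) {
        double f = 8.5;      // focal length [mm], e.g., a forward-view lens
        double u = 30480.0;  // object distance [mm], roughly 100 ft
        System.out.printf("v = %.3f mm, m = %.6f%n",
                imageDistance(f, u), magnification(f, u));
    }
}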


1.4 An Overview of Pavement Surface Distress

Pavement distress can be classified based on the type of pavement surface. The Strategic Highway Research Program (SHRP) Distress Identification Manual (SHRP, 1992) recognizes three main types of pavement surfaces: asphalt concrete pavements, jointed Portland cement concrete pavements, and continuously reinforced pavements. Types of pavement surface distress based on pavement type are described in Tables 1.2 and 1.3 for asphalt and rigid pavements, respectively.


Table 1.2 Asphalt Concrete Surfaced Pavement Distress Types

Distress Type | Unit of Measure | Defined Severity Levels?
A. Cracking
  1. Fatigue Cracking | m | yes
  2. Block Cracking | m | yes
  3. Edge Cracking | m | yes
  4a. Wheel Path Longitudinal Cracking | m | yes
  4b. Non-Wheel Path Longitudinal Cracking | m | yes
  5. Reflection Cracking at Joints:
     Transverse Reflection Cracking | Number, m | yes
     Longitudinal Reflection Cracking | m | yes
  6. Transverse Cracking | Number, m | yes
B. Patching and Potholes
  7. Patch / Patch Deterioration | Number, m | yes
  8. Potholes | Number, m | yes
C. Surface Deterioration
  9. Rutting | mm | no
  10. Shoving | Number, m | no
D. Surface Defects
  11. Bleeding | m | yes
  12. Polished Aggregate | m | no
  13. Raveling | m | yes
E. Miscellaneous Distresses
  14. Lane-to-Shoulder Dropoff | mm | no
  15. Water Bleeding and Pumping | Number, m | no

Table 1.3 Jointed Concrete Surfaced Pavement Distress Types

Distress Type | Unit of Measure | Defined Severity Levels?
A. Cracking
  1. Corner Breaks | Number | yes
  2. Durability Cracking ("D" Cracking) | Number, m | yes
  3. Longitudinal Cracking | m | yes
  4. Transverse Cracking | Number, m | yes
B. Joint Deficiencies
  5a. Transverse Joint Seal Damage | Number | yes
  5b. Longitudinal Joint Seal Damage | Number, m | no
  6. Spalling of Longitudinal Joints | m | yes
  7. Spalling of Transverse Joints | Number, m | yes
C. Surface Defects
  8a. Map Cracking | Number, m | no
  8b. Scaling | Number, m | no
  9. Polished Aggregate | m | no
  10. Popouts | Number, m | no
D. Miscellaneous Distresses
  11. Blowups | Number | no
  12. Faulting of Transverse Joints and Cracks | mm | no
  13. Lane-to-Shoulder Dropoff | mm | no
  14. Lane-to-Shoulder Separation | mm | no
  15. Patch / Patch Deterioration | Number, m | yes
  16. Water Bleeding and Pumping | Number, m | no


In asphalt pavements, the most common distress types are alligator (fatigue) cracking (Figure 1.16(a)) and rutting (Figure 1.16(b)), while in rigid pavements they are corner breaks (Figure 1.17(a)) and panel cracks (Figure 1.17(b)). The majority of roads (97%) in the State of Florida are built of asphalt pavements.

Figure 1.16 (a) Alligator (fatigue) Cracking; (b) Rutting on Flexible Pavement

Figure 1.17 (a) Corner Break; (b) Large Panel Crack on Rigid Pavement


1.5 Research Goals

The objectives of this dissertation research are as follows:
(1) using fundamentals of optics, identify the key components of a standard digital imaging system that influence the quality of images,
(2) describe scientific methods of assessing digital imaging systems,
(3) lay out a systematic methodology to achieve the optimum settings of digital imaging systems,
(4) identify the major noise sources affecting the quality of digital images of pavements,
(5) develop an effective filtering method that uses a standard grayscale target,
(6) identify the effects of the speed of the highway evaluation vehicle and of lighting conditions on the quality of images, and
(7) uncover any relationship between the signal-to-noise ratio and the ability to recognize cracks in images.

1.6 Organization of the Dissertation

Chapter 2 of the dissertation describes the development of rational guidelines for performance assessment of digital imaging systems. Major sources of noise present in digital images are identified in Chapter 3, and a novel technique is developed for filtering noise in digital images. Then, in Chapter 4, the experimental methodology that was set up to investigate the effect of vehicle speed and lighting conditions on the digital images is described and the results are exemplified. Finally, in Chapter 5, conclusions based on the research findings are summarized.


CHAPTER 2

GUIDELINES FOR PERFORMANCE ASSESSMENT OF DIGITAL IMAGING SYSTEMS USED IN HIGHWAY APPLICATIONS

2.1 Introduction

State-of-the-art highway evaluation systems have become multi-functional by incorporating combinations of pavement surface roughness sensors, high resolution cameras, inertial measurement units (IMU), DGPS, and distance measurement instrumentation (DMI). Typical information obtained in automated highway evaluation includes data on roadway geometry (horizontal and vertical curvature, longitudinal and transverse profiles), pavement condition (distress) and texture, rutting, and safety features. Evaluation data are stored and archived in formats compatible with data stored in pavement management databases. By associating spatial information from GPS or linear measurements, users can query the system and retrieve information relevant to damage assessment, design, planning and maintenance, location of roadway and roadside assets, selection of locations for traffic-monitoring systems, and quality control and quality assurance (QC/QA). The images can be used for public hearing presentations, responses to questions raised by public and private individuals, and inspection of sites prior to and after construction. In addition, images collected at regular intervals can be helpful in providing useful historical records for condition assessment, and in facilitating various engineering


evaluations. Hence, imaging of traffic and safety features and pavement distress is a valuable tool for highway asset and pavement management.

Traditional pavement surveys range from a thorough walking survey of 100% of the pavement surface, in which all distress types, severities, and quantities are measured, recorded, and mapped, to a windshield survey at normal traffic speed, in which the rater assigns the pavement a general category or sufficiency rating without identifying individual distress types. In either case, the inspection of the pavement surface is direct, and human cognition is used to categorize and determine the type, severity, and quantity of distress present on the pavement surface. Overall, manual surveys are considered labor intensive, slow, expensive, and sometimes unsafe. They also invariably involve a certain degree of human subjectivity. Automated highway evaluation, in contrast, saves an enormous amount of evaluation time and effort while obviating the frequent safety concerns associated with traditional manual evaluation.

In the 1970s, many states initiated imaging of highway features using photographic cameras mounted on vans. Later, in the 1980s, imaging systems were improved to capture images on videotape or videodisc for subsequent transfer to optical discs that could be viewed at dedicated view stations (Overturf 2001). In the past decade, digital imaging and automated data acquisition systems have become standard equipment for many DOTs, and their use has led to further elimination or curtailment of traditional labor-intensive evaluation methods (Dougan, 2001).

A digital imaging system is a combination of optics, interfacing electronics, and software (Figure 1.8). In most cases, digital imaging systems are built of modules from


different manufacturers, allowing different components to interact with each other. Moreover, manufacturers use their own proprietary tools to benchmark their systems without following any standard set of techniques and targets, and hence provide widely varying evaluations of two similarly performing systems (Reichmann 2003). Therefore, the overall quality of images cannot be determined solely from the manufacturer's specification of different attributes of image quality such as the dynamic range, signal-to-noise ratio (SNR), spatial resolution, etc. For example, the manufacturer of the DVC 1310C high-resolution color digital camera with 16.5 mm Pentax video lenses (C-mount) used in the FDOT highway evaluation vehicle claims that the equipment has a SNR of 60 dB. However, when installed in the FDOT highway evaluation vehicle, the SNR was determined to be at most 50 dB, even under well-lit conditions.

Furthermore, emerging developments in the science and art of high-resolution digital imaging also enable precise measurement of attributes of interest, such as distances or heights, directly from image records. In addition, several automated algorithms for pavement surface distress evaluation are being developed and tested (Wang 2000). However, the success of the above automated pavement evaluation techniques eventually depends on the attributes of image quality such as spatial and tonal resolution, and the levels of noise and distortion. Currently, there are no definitive guidelines or standards for evaluating the output from imaging systems. Manufacturers' specifications of each discrete system component are not reliable in assessing the performance of the assembled imaging system.


Cases have been reported where limited knowledge of modern imaging systems has led to unexpected delays in their implementation (Overturf 2001, Gunaratne et al. 2003). FDOT has developed an automated highway evaluation vehicle in which several problems, such as unnatural color quality of forward-view images or overly dark pavement images, have surfaced with regard to the quality of images (Gunaratne et al. 2003). Many attempts to resolve these problems by trial-and-error adjustments have proven to be inefficient with regard to time and cost (Gunaratne et al. 2003). Connecticut Transit (CTTransit) has been seeking to videolog all bus routes, incorporating photolog-generated Global Positioning System (GPS) data into their Geographical Information System (GIS) database with the idea of improving the quality of bus route maps and information tools for both CTTransit and its clients. However, the contractor's lack of familiarity with the new photolog system impeded its full implementation in 1995 (Overturf 2001). On the other hand, quality assurance tests conducted by the Pennsylvania DOT on its panoramic imaging program revealed a substantial number of illegible signposts in the collected images (Stoffels, 2003).

2.2 Quality of Images

Figures 2.1 through 2.4 illustrate the variation of digital image quality due to common deficiencies of imaging systems. Image quality can be objectively assessed through physical measurements of image properties such as color reproduction, tone reproduction, detail reproduction, level of noise, and optical distortion. Specially designed targets are available to make objective measurements of each of the above


properties in terms of intensity or luminance error, dynamic range, spatial resolution, signal-to-noise ratio, and the degree of distortion, respectively.

Figure 2.1 Forward-View Image: Unclear Traffic Signs as a Result of Low Resolution (640x480)

Figure 2.2 Forward-View Image (resolution of 1300 x 1024): Undesirable Color Quality Due to Use of a Near Infra-red Filter Partially Absorbing the Visible Spectrum


Figure 2.3 Forward-View Image (resolution of 1300 x 1024): Unnatural Color Quality Due to Incorrect White Balance Settings

Figure 2.4 Forward-View Image (resolution of 1300 x 1024): Poor Quality Due to Noise Introduced with Long Exposure Settings

2.3 Intensity Measurements

Many of the evaluation procedures discussed in this chapter are based on the measurement of intensity values. A black and white digital image (e.g., from the pavement camera) contains 8 bits of gray resolution with 2^8 = 256 intensity levels ranging from 0 to 255. Similarly, in a color digital image (from the forward-view camera), any given pixel


contains three intensity values from 0 to 255, one for each of the primary colors (red, green, and blue), forming 24 bits (3 x 8) of color resolution. Intensity values encoded into image files through JPG, TIFF, or BMP formats can be measured with imaging programs such as Adobe Photoshop or Paint Shop Pro. For the present study, the imaging software ImageJ (Rasband 2004) was used; it also allows scripts to be written in Java for additional processing of parameters such as the signal-to-noise ratio, color evaluation, and others.

2.4 Evaluation of the Color Reproduction Quality of Digital Images

In some instances, images show unnatural colors (e.g., Figure 2.2) caused by improper white balance, optics, filters, or image software. A Standard Color Checker (Figure 2.5) can be used to compare, measure, and analyze the outputs from different imaging systems in terms of color reproduction under different settings. It is a unique test pattern of 24 scientifically prepared colored squares, designed to help determine the true color balance of an imaging system.

Color reproduction is assessed by comparing the color intensity values measured from the image of the Color Checker with the corresponding standard intensity values. The comparison must be made in the same color space, with the same chromaticity of the white point. A color space is a model of colors that allows the color information to be expressed. There are two types of color spaces:
(1) device dependent, in which the same color definition is rendered differently by different devices (e.g., the sRGB color space used by digital cameras), and
(2) device independent; by using a mathematical model, device dependent colors can be converted to device independent colors.


Figure 2.5 Digital Images of a Macbeth Color Checker Taken with the Forward-View Camera with Aperture Settings of F2.8

Figure 2.6 Digital Images of a Macbeth Color Checker Taken with the Forward-View Camera with Aperture Settings of F4.0

Figure 2.7 Digital Images of a Macbeth Color Checker Taken with the Forward-View Camera with Aperture Settings of F5.6

Figure 2.8 Digital Images of a Macbeth Color Checker Taken with the Forward-View Camera with Aperture Settings of F8.0


Most available digital cameras produce images in the sRGB color space with the D65 white point (Lindbloom 2003). However, the intensity data for the Macbeth Color Checker used in the present study are in the CIE XYZ color space with the C white point. Hence, one has to transform the specified color intensity values from the device-independent CIE XYZ color space into the sRGB color space to be able to use this color checker. This transformation was programmed on the basis of the mathematical procedure described in (Lindbloom 2003).

2.4.1. Selection of Aperture Setting Using Color Evaluation

The setting of the appropriate aperture based on color evaluation is illustrated by the following test. Images of the Macbeth Color Checker were obtained with the FDOT highway evaluation vehicle's forward-view camera at various aperture settings. The camera is a DVC 1310C with 1296 by 1024 pixel resolution and a wide angle Pentax lens with a focal length of 8.5 mm, and it saves images as JPG files with a 75% compression ratio. Figures 2.5 to 2.8 show the images of the Macbeth Color Checker captured at each aperture setting with the appropriate white balance. The color images can be evaluated for the individual intensities of the primary colors or by using luminance as the sole criterion. Luminance (Eqn. (2.1)) represents color brightness as perceived by humans and provides a single value for comparison, as opposed to three separate values for each color channel. The generally accepted formula for luminance is (Studelle-Schwarz 2004):

Y = 0.3 R + 0.59 G + 0.11 B    (2.1)


where Y is the luminance and R, G, and B represent the intensities of the red, green, and blue colors, respectively, in a particular pixel. For example, patch number 15 (red) of the Macbeth Color Checker (Fig. 2.5), imaged under two different apertures (F4.0 and F5.6), can be evaluated as seen in Table 2.1. The device independent values of patch number 15 provided by the manufacturer in the C2 color space are X_C2 = 20.65, Y_C2 = 12.00, and Z_C2 = 5.7. These values were transformed to the sRGB color space (Lindbloom 2003) used by the digital camera as shown in Table 2.1. Table 2.1 also shows the average intensity values of the two images of the red patch of the Color Checker.

Table 2.1 Specified and Measured Intensity Values and Errors for Patch Number 15 (red)

Channel | Intensity value provided (sRGB) | Measured intensity (image 1, aperture 1) | Measured intensity (image 2, aperture 2) | Error image 1 | Error image 2
R | 177 | 187 | 187 | 10 | 10
G | 51 | 71 | 56 | 20 | 5
B | 61 | 66 | 81 | 5 | 20
Average R,G,B error | | | | 11.67 | 11.67
Luminance error | | | | 15.35 | 8.15

The image quality cannot be assessed on the basis of the average error of the three color channels since, as seen in Table 2.1 for images 1 and 2, the average error can be the same. However, Table 2.1 also shows that the luminance error, obtained from the differential form of Eqn. (2.1), can be used as the differentiating criterion between images 1 and 2.
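The luminance errors in Table 2.1 can be reproduced with a few lines of Java; the sketch below applies Eqn. (2.1) to the specified and measured RGB values of patch 15 (the class and method names are illustrative only).

public class LuminanceErrorExample {
    // Eqn. (2.1): Y = 0.3 R + 0.59 G + 0.11 B
    static double luminance(double r, double g, double b) {
        return 0.3 * r + 0.59 * g + 0.11 * b;
    }

    public static void main(String[] args) {
        double yRef = luminance(177, 51, 61); // specified sRGB values for patch 15
        double y1 = luminance(187, 71, 66);   // measured values, image 1
        double y2 = luminance(187, 56, 81);   // measured values, image 2
        // Prints 15.35 and 8.15, matching the last row of Table 2.1
        System.out.printf("%.2f  %.2f%n", Math.abs(y1 - yRef), Math.abs(y2 - yRef));
    }
}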


Based on the luminance criterion, Figure 2.9 shows that the optimum aperture settings for the tested camera under daylight conditions are between F2.8 and F4.0. In addition, Figure 2.9 also shows the differences in the reproduction of specific colors by the sensor.

Figure 2.9 Luminance Error in the Color Test for Different Apertures (luminance error, max. 255, plotted against the color square numbers 1-24 for aperture settings F1.5, F2, F2.8, F4.0, and F5.6)

2.4.2. Setting of White Balance Using Color Evaluation

White balance refers to the adjustment of the colors in visible light so that white objects appear white under any given lighting condition. However, white balance depends on the color temperature of the light source and on the lighting condition. A white object generates different intensity values for its Red (R), Green (G), and Blue (B) color components depending on the color temperature of the light source.


The simplest white balance correction involves equalization of the intensity data: the green intensity values are first fixed, and the red and blue intensity values are then adjusted using appropriate gain coefficients so that the final R, G, and B intensities are equal for an image of a white object. For example, the image of a white board obtained under incandescent light can produce intensity values of R = 200, G = 250, and B = 110. Then, the red and blue gain coefficients that produce white balancing would be k_R = 250/200 = 1.25 and k_B = 250/110 = 2.27, respectively. Incorrect setting of the white balance can cause a color shift in the image, as seen in Figure 2.2. Thus, color evaluation is vital for evaluating the white balance of a camera under the given lighting conditions, in addition to setting the aperture.
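The gain computation in the example above is simple enough to sketch in Java; the following fragment is an illustrative assumption of how such a correction could be coded, not the camera vendor's white balance routine.

public class WhiteBalanceExample {
    public static void main(String[] args) {
        // Average channel intensities of a white board under incandescent light
        double r = 200.0, g = 250.0, b = 110.0;
        // Green is held fixed; red and blue gains equalize the three channels
        double kR = g / r; // 1.25
        double kB = g / b; // 2.27
        System.out.printf("kR = %.2f, kB = %.2f%n", kR, kB);
        // A corrected pixel would then be (kR * R, G, kB * B), clipped to 255
    }
}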


White balance evaluation of the imaging system can also be performed by capturing the image of the Macbeth Color Checker with different white balance settings under different lighting conditions (Figures 2.10 and 2.11). As seen in Figures 2.10 and 2.11, high luminance errors for any channel indicate an improper white balance.

Figure 2.10 Intensity Errors for Each Color Channel (R, G, B) of the Forward-View Camera (sunlight-based white balance settings)

Figure 2.11 Intensity Errors for Each Color Channel (R, G, B) of the Forward-View Camera (incandescent light-based white balance settings)


In addition, color evaluation can also be used to test the efficiency of different filters, such as near infra-red or neutral density filters, and their effect on the color quality of the imaging system. Charge Coupled Device (CCD) sensors are sensitive to the near infrared (NIR) spectrum, and hence longer exposures with CCD cameras usually produce blooming effects in the images. NIR filters are therefore used to eliminate the near infra-red spectrum (from 700 nm to 950 nm) while maintaining the transmission of intensities across the visible spectrum. These filters are also effective in removing the blooming effect from over-exposed images. The results from the tests on two NIR filters are shown in Figure 2.12, for which images were captured under similar lighting conditions, white balance, and aperture (F4.0) settings.

Figure 2.12 Comparison of Two Different Near Infra-red Filters Mounted on the Forward-View Camera (luminance error, max. 255, plotted against the color square number)


Figure 2.12 shows that the response of NIR filter 2 is superior to that of NIR filter 1, which was originally used in the camera. As predicted, changing the filter improved the quality of images obtained from the FDOT imaging vehicle, indicating that the originally used NIR filter had partially absorbed the visible spectrum (Figure 2.12). A neutral density filter can be a remedy for situations where it is necessary to lower the amount of light reaching the CCD sensor without affecting the color balance. Therefore, the color evaluation tests also facilitate the selection of appropriate filters.

2.5 Tone Reproduction Quality of Digital Images

Tone reproduction is necessary to ensure that the wide range of light in a real-world scene is conveyed on a display with limited capabilities. Tone reproduction mostly depends on the lighting conditions and, to some extent, on camera settings such as the aperture and software settings such as the gain or exposure. Three attributes that affect the tone reproduction are: (1) the opto-electronic conversion function (OECF) of the camera sensor, (2) flare, and (3) dynamic range.

Flare in an image can be seen in the form of flashes, and it is caused by excessive light shining or reflecting directly onto the camera lens. The ISO 12233 target shown in Figure 2.20, with areas of minimal (white) and maximal (black) optical density, can be used to measure the flare introduced by an imaging system. When the intensities of the resulting black and white (8-bit) images are measured, the intensity values should be 0 and 255, respectively. By measuring the extent of the reduction of the original contrast in the


digital image, the influence of flare can be determined (Franziska and James 1999). To minimize flare (Figure 2.13), the imaging system can be fitted with a sunshade that prevents the direct impact of sunlight, as shown in Figure 2.14. The effects of flare can also be reduced by avoiding imaging operations during the hour after sunrise and the hour before sunset.

Figure 2.13 Flare Created by Sunlight Directly Impacting the Optical System

Figure 2.14 Sunshades on Cameras to Prevent Flare


2.5.1. Evaluation of the Dynamic Range

The dynamic range (DR) of a digital imaging system indicates the range of the gray scale (black to white) that the system can differentiate. Hence, it is expected that DR would be a fitting indicator of an imaging system and of its applicability in accurate image interpretation. DR is measured in units of the optical density (OD) of a surface, defined as (Dulis 2004):

OD = log10(1/R)    (2.2)

where R represents the reflectance of the surface, such as that of a standard target.

DR can be determined practically by evaluating the number of gray-level wedges that the given imaging system can recognize in a standard Density Step Target (Figure 2.15). To accomplish this, an image of the target is captured by the imaging system and the intensity of each gray level wedge is measured. Then, by visual observation of the corresponding intensity vs. optical density plot (Figure 2.16), the dynamic range of the camera can be determined under the given lighting conditions. In the example shown in Figure 2.16, only the first 11 out of 15 gray wedges are visually distinguishable. Each gray wedge corresponds to a dynamic range of 0.1, and hence the evaluated digital camera is determined to possess a dynamic range of 1.1 with respect to the Density Step Target.


Figure 2.15 Density Step Target with Optical Density Range from 0.07 to 1.5

The above target (Figure 2.15) was used to study the variation of the dynamic range of the forward-view camera of the FDOT highway evaluation vehicle at different aperture settings (an example is shown in Figure 2.16), and the results are shown in Table 2.2. It is seen that the optimum aperture settings for the tested camera under daylight conditions are F4.0 and F5.6, where the dynamic range is maximum. It is noted that during testing, a fixed exposure time was set in the capturing software to control the amount of light striking the camera sensor.

Figure 2.16 Dynamic Range for the Density Step Target (Figure 2.15) with a Camera Aperture Setting of F2.8


Table 2.2 Dynamic Ranges of the Forward-View Camera for Different Apertures

Aperture | Dynamic Range
F1.5 | 0.7
F2.0 | 1.0
F2.8 | 1.1
F4.0 | 1.5
F5.6 | 1.5
F8.0 | 0.9

Benchmark values for a satisfactory dynamic range are often specified by the user. For example, the Federal Bureau of Investigation (FBI) has defined a criterion in which the dynamic range must cover at least 200 gray levels for an image to be of acceptable quality (FBI 1999), while the National Archives uses in their guidelines a range from 8 to 247 (i.e., 240) gray levels (Franziska and James 1999). These criteria can be converted to dynamic ranges of 0.9 and 1.4, respectively, on the optical density scale. During the current research, it was found that the FBI dynamic range criterion is sufficient for imaging of highway features as well as pavement distress. Figures 2.17 and 2.18 show a comparison of two images captured by the forward-view imaging system of the FDOT highway evaluation vehicle, one with an insufficient dynamic range (Figure 2.17) and one with a sufficient dynamic range (Figure 2.18). Figures 2.17 and 2.18 also show the corresponding intensity or luminance histograms for each image, with the intensity values plotted along the x-axis and the frequency (i.e., the number of pixels found at that intensity) on the y-axis. Most image editing software has a built-in capability of producing such a histogram for a given image (Chastain 2004).


2.5.2. Evaluation of Proper Exposure Settings

The gray scale range of an image captured by an imaging system can also reveal whether the exposure of the camera has been properly set. The following characteristics must be checked in the intensity or luminance histogram (a simple automated screen along these lines is sketched after the figures below):
(1) to ensure smooth transitions between the tones, there should be an even distribution of intensities (or tones) through the entire range of tones, with no sharp rises or drops (Arrow 1 in Figure 2.18),
(2) to ensure that the image is not under-exposed, there should not be a significant number of low intensities in the image (Figure 2.17), and
(3) to ensure that the image is not over-exposed with consequent loss of information, there should be gaps in the low and high intensity areas (Arrows 2 in Figure 2.18).

Figure 2.17 Insufficient Gray Scale Range of a Forward-View Image with the Corresponding Luminance Histogram Plot

Figure 2.18 Sufficient Gray Scale Range of a Forward-View Image with the Corresponding Luminance Histogram Plot
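Checks (2) and (3) lend themselves to a simple automated screen over the 8-bit histogram. The following Java sketch is a hypothetical helper under assumed thresholds (bins 0-7 and 248-255, 5% of the pixels); it is not a tool described in this study.

public class ExposureCheckExample {
    // hist holds pixel counts for each of the 256 luminance levels
    static void screen(long[] hist) {
        long total = 0, low = 0, high = 0;
        for (int i = 0; i < 256; i++) total += hist[i];
        for (int i = 0; i < 8; i++) low += hist[i];      // near-black bins
        for (int i = 248; i < 256; i++) high += hist[i]; // near-white bins
        if (low > 0.05 * total) System.out.println("possible under-exposure");
        if (high > 0.05 * total) System.out.println("possible over-exposure");
    }

    public static void main(String[] args) {
        long[] hist = new long[256];
        java.util.Arrays.fill(hist, 100); // a flat, well-spread histogram
        screen(hist);                     // prints nothing: exposure acceptable
    }
}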


If any of the above conditions are not met, the exposure settings on the camera must be changed and the test repeated.

2.6 Detail Reproduction Quality of a Digital Image

2.6.1. Theoretical Limits of Spatial Resolution

Image resolution is a significant attribute of imaging systems because it ensures the retention of information relating to highway features. Moreover, a higher image resolution permits more accurate estimates of dimensions, such as distances between features of interest or crack widths, to be made from the images. The resolution of an imaging system is influenced not only by the sensor sensitivity but also by the optical settings. Therefore, evaluation of the image resolution can also ensure that the imaging system is in proper focus.


The magnification, M (the ratio of the image height H_1 to the object height H_0), of an object at a distance O from an imaging system with focal length f can be computed using the lens equation as (Edmund Industrial Optics 2003):

M = H_1 / H_0 = f / (O - f)    (2.3)

Every camera has its system limitations, and when the frequency of information flow exceeds the frequency at which a pixel can register, the phenomenon of aliasing occurs. The highest frequency that the CCD sensor can record is called the Nyquist frequency (N), which can be determined as 1/(2p), where p is the pixel pitch. For example, for the DVC1310C digital camera used in the FDOT highway evaluation vehicle, with a pixel pitch of 6.7 µm, the Nyquist frequency is 74.6 cycles/mm, or 74.6 line pairs (lp)/mm.

Therefore, for any object feature in an image to appear clearly on the sensor, the required minimum height of its image, H_1, must theoretically be equal to the sensor's pixel pitch p. Then, if the height of the object feature (such as text on traffic and safety signs, or a crack width), H_0, is known, one can obtain the focal length required for clear recognition of the image of that feature by substituting H_1 = p in Eqn. (2.3):

f = p O / (H_0 + p)    (2.4)

Therefore, for given f and O, using Eqn. (2.4) the theoretical minimum height of an object feature (H_0) can be determined as:


H_0 = p (O - f) / f    (2.5)

In Section 4.2, the above criterion will be compared with an alternative criterion based on a practical evaluation of the spatial resolution.

One can also compute the effect of the exposure time and vehicle speed on the sharpness of an image. If the change in the image size on the sensor within the exposure time is more than half of the pixel pitch, blurred images can occur. From Eqn. (2.3), one can express the height of the image of an object on the sensor of a moving vehicle at a given instant (H_1). If the exposure time is T_E and the vehicle speed is V, the change in the size of the image on the sensor within the exposure time can be obtained from Eqn. (2.3) as:

ΔH_1 = |dH_1/dt| T_E = |dH_1/dO| |dO/dt| T_E = H_0 f V T_E / (O - f)^2    (2.6)
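Eqn. (2.6) is easy to verify numerically. The Java sketch below evaluates the motion blur for the forward-view geometry quoted in Table 2.3 (the class name and the unit choices are illustrative assumptions).

public class MotionBlurExample {
    // Eqn. (2.6), expressed in pixels; all lengths in metres, time in seconds
    static double blurPixels(double h0, double f, double o,
                             double v, double tE, double pitch) {
        return (h0 * f * v * tE) / ((o - f) * (o - f)) / pitch;
    }

    public static void main(String[] args) {
        double h0 = 0.2794;    // 279.4 mm (11 inch) sign height
        double pitch = 6.7e-6; // 6.7 um pixel pitch
        // Forward-view camera: f = 8.5 mm, O = 30.48 m, 20 mph, 40 ms exposure
        double blur = blurPixels(h0, 0.0085, 30.48, 8.94, 0.040, pitch);
        // Prints about 0.14, in line with the 0.13 entry of Table 2.3 (rounding)
        System.out.printf("%.2f pixels%n", blur);
    }
}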


Table 2.3 shows the maximum possible change in the size of an image on the sensors of the FDOT vehicle's optical systems, computed based on a 279.4 mm (11 inch) tall highway sign (H_0).

Table 2.3 Change in the Image Size (in pixels) Due to Vehicle Movement

Camera | Vehicle speed [m/s] | T_E = 40 ms (1/25 s) | T_E = 10 ms (1/100 s) | T_E = 4 ms (1/250 s)
Forward-View (f = 8.5 mm, O = 30.48 m) | 8.94 (20 mph) | 0.13 | 0.03 | 0.01
 | 22.35 (50 mph) | 0.34 | 0.07 | 0.03
 | 31.29 (70 mph) | 0.48 | 0.09 | 0.05
Side-View (f = 20 mm, O = 15.24 m) | 8.94 (20 mph) | 1.3 | 0.31 | 0.13
 | 22.35 (50 mph) | 3.2 | 0.81 | 0.32
 | 31.29 (70 mph) | 4.5 | 1.1 | 0.45

According to Table 2.3, exposure times of 40 milliseconds affect the side-view camera at speeds higher than 20 mph, since the change exceeds half a pixel. Furthermore, it can be seen that for the forward-view camera, exposure times of 10 milliseconds or lower do not affect the quality of images at any speed up to 70 mph.

2.6.2. Modulation Transfer Function (MTF) Approach for Determining Spatial Resolution

In the past, the resolving power of an imaging system, i.e., its ability to separate two lines, was measured in line pairs per millimeter based on the USAF 1951 lens test chart. Because human perception and judgment were involved in determining the highest resolution pattern in which detail was still visible, repeatability could not be assured with this method. This problem has now been resolved with the introduction of the Modulation Transfer Function (MTF). MTF is a measure of the contrast transmission capability of an imaging system at a given spatial frequency, ν, typically measured in line pairs per millimeter (lp/mm). The MTF of an imaging system can be defined as the ratio


of the modulation of the image of a standard sinusoidal (bar pattern) target captured by that imaging system, M_i, to that of the target itself, M_0 (Nill 2001):

MTF(ν) = M_i / M_0    (2.7)

Therefore, it is necessary to first determine the modulation values for each sinusoidal area of the target. The transmittance of a sinusoidal area of this target, T_sin(x), can be described by the equation:

T_sin(x) = t_0 + t_1 cos(2πνx) + t_2 cos(4πνx) + t_3 cos(6πνx) + ...    (2.8)

where t_1 is the amplitude of the fundamental frequency, t_2, t_3, etc. are the amplitudes of the harmonics, and ν is the spatial frequency, which can be expressed in cycles per mm. The MTF is defined as the frequency amplitude response, or the modulus of the Fourier transform, of the line spread function (Dallas 2004). According to Fourier theory, if T_sin(x) is a periodic function, then it can also be written as follows (Lamberts 2004):

T_sin(x) = a_0/2 + Σ_{n=1..∞} [a_n cos(2πnνx) + b_n sin(2πnνx)]    (2.9)

where:

a_n = 2ν ∫_{-1/(2ν)}^{1/(2ν)} T_sin(x) cos(2πnνx) dx    (n = 0, 1, 2, 3, ...)    (2.10)


b_n = 2ν ∫_{-1/(2ν)}^{1/(2ν)} T_sin(x) sin(2πnνx) dx    (n = 0, 1, 2, 3, ...)    (2.11)

Equation (2.8) includes terms up to infinity, but with most test patterns the magnitude of the harmonics beyond n = 3 is negligible. In most cases, the starting point of the cosine function is arbitrary, resulting in a phase angle that has to be taken into account (Figure 2.19).

Figure 2.19 Arbitrary Position of the Starting Point

Then, by using the trigonometric identity cos(x - y) = cos x cos y + sin x sin y, Eqn. (2.9) can be rewritten as:

T_sin(x) = a_0/2 + Σ_{n=1..∞} c_n cos(2πnνx - φ_n)    (2.12)

where:

a_n = c_n cos φ_n    (2.13)


b_n = c_n sin φ_n    (2.14)

φ_n = arctan(b_n / a_n)    (2.15)

c_n = (a_n^2 + b_n^2)^{1/2}    (2.16)

Then, based on Eqn. (2.12), one can determine the maximal and minimal transmittances as:

T_max = a_0/2 + c_1 + c_2 + c_3    (if x = 0 and φ = 0)    (2.17)

T_min = a_0/2 - c_1 + c_2 - c_3    (if x = 1/(2ν) and φ = 0)    (2.18)

Based on Equations (2.17) and (2.18), the peak-to-peak modulation can be determined as:

M = (T_max - T_min) / (T_max + T_min) = 2(c_1 + c_3) / (a_0 + 2c_2)    (2.19)

For reflecting targets, M_0 can be defined as (Nill 2001):

M_0 = (R_max - R_min) / (R_max + R_min)    (2.20)

where R_min and R_max are the minimum and maximum reflectances of the bar pattern of a given spatial frequency of the target viewed against a uniformly illuminated background. M_0 can also be expressed in terms of the corresponding optical densities (Eqn. (2.2)) as:


M_0 = (10^{-D_min} - 10^{-D_max}) / (10^{-D_min} + 10^{-D_max})    (2.21)

where D_min and D_max represent the minimum and maximum density values, respectively. Hence, M_0 can be evaluated by a photoelectric or densitometer scanner with its gain calibrated to directly measure the optical densities of the bar pattern target. On the other hand, the modulation of the image can be determined as (Nill 2001):

M_i = (I_max - I_min) / (I_max + I_min)    (2.22)

where I_max and I_min are the maximum and the minimum intensity values of the image of the sinusoidal bar pattern corresponding to the desired spatial frequency. To obtain M_i, the intensity values are measured on the basis of the electrical pulses generated on the sensor and converted to actual intensity values using the Opto-electronic Conversion Function (OECF) of the camera sensor. The OECF defines the relationship between the input luminance and the grayscale or digital intensity output from the camera. The OECF of a camera can be measured by using a test chart with known gray levels of its patches and the method described in ISO 14524 (McDowell 1999) or the algorithm described in (Hasler and Susstrunk 2002).

2.6.3. Contrast Transfer Function (CTF)

The contrast transfer function (CTF) of an imaging system is its relative contrast response to a square wave modulation. It is determined from the detector response to a target containing a series of black and white resolution bars with 100 percent contrast and


increasing spatial frequency (Figure 2.20). An approximate expression for determining the CTF solely from the image of the target is (Koren 2003):

CTF(ν) = C(ν) / C(0) × 100%    (2.23)

where C(ν) is the contrast at the spatial frequency ν, equal to:

C(ν) = (I_W - I_B) / (I_W + I_B)    (2.24)

and C(0) is the low frequency (black-white bars) contrast, computed as:

C(0) = (I_max - I_min) / (I_max + I_min)    (2.25)

where
I_B is the average intensity of the black areas I_1, I_2, I_3, ..., I_9 at a given frequency (A-B in Figure 2.21),
I_W is the average intensity of the white areas I_11, I_12, I_13, ..., I_18 at the given frequency, and
I_max and I_min are the positive and negative peak intensities for the bar pattern at the given frequency.

The intensity profile plot for a scale number (SN) of 8 on the ISO 12233 target is shown in Figure 2.21. It is noted in the inset that points A and B lie on the white and black areas, respectively, of the target. The corresponding spatial frequency on the sensor can be computed from SN using the generic equation of the target as:


ν = 50 SN / PH = 50 × 8 / 6.9 mm ≈ 58 lp/mm

where SN is the scale number on the ISO 12233 target and PH represents the picture height on the sensor (6.9 mm for the tested camera).

Figure 2.20 ISO 12233 Resolution Chart (the inset shows the horizontal resolution bars for scale number 8)

Figure 2.21 Intensity Profile Plot for the Image of an ISO 12233 Target for a Scale Value of 8 (Figure 2.20), Corresponding to a Spatial Frequency of 58 lp/mm


A series expansion can convert the square wave CTF to its equivalent sine wave MTF as (Nill 2001):

MTF(ν) = (π/4) [ CTF(ν) + CTF(3ν)/3 - CTF(5ν)/5 + CTF(7ν)/7 + CTF(11ν)/11 - ... ]    (2.26)

The number of CTF modulation terms in Eqn. (2.26) depends on the cut-off frequency ν_c at which the MTF equals 0%. For 3ν ≥ ν_c and C(ν) < 0.7 C(0), Eqn. (2.26) can be approximated by:

MTF(ν) = (π/4) CTF(ν)    (2.27)

Therefore, for the visual resolution bars of the ISO 12233 resolution chart corresponding to a spatial frequency of 58 lp/mm (Figure 2.21), the MTF value can be computed as:

C(58) = (130.5 - 56.2) / (130.5 + 56.2) = 0.40
C(0) = (155.5 - 13.4) / (155.5 + 13.4) = 0.84
MTF(58) = 0.785 × (0.40 / 0.84) × 100% ≈ 37%
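The numerical steps above can be reproduced with the short Java sketch below (illustrative names; it implements Eqns. (2.23)-(2.25) and the single-term approximation of Eqn. (2.27)).

public class MtfFromCtfExample {
    public static void main(String[] args) {
        // Bar-pattern intensities read from Figure 2.21 (SN = 8, i.e., 58 lp/mm)
        double iW = 130.5, iB = 56.2;     // average white and black intensities
        double iMax = 155.5, iMin = 13.4; // low-frequency peak intensities
        double cNu = (iW - iB) / (iW + iB);        // Eqn. (2.24)
        double c0 = (iMax - iMin) / (iMax + iMin); // Eqn. (2.25)
        double ctf = cNu / c0 * 100.0;             // Eqn. (2.23), in percent
        double mtf = Math.PI / 4.0 * ctf;          // Eqn. (2.27)
        // Prints roughly "CTF = 47%, MTF = 37%"
        System.out.printf("CTF = %.0f%%, MTF = %.0f%%%n", ctf, mtf);
    }
}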


Figure 2.22 shows the results of the resolution test for the forward-view camera of the FDOT highway evaluation vehicle. If one adopts an MTF value of 10% as the minimum allowable, based on the Rayleigh diffraction limit MTF of 9%, then the horizontal resolution of the above camera is seen to be 42 lp/mm, while its vertical resolution is 78 lp/mm. An MTF plot can also be used to evaluate the relative resolution powers of different digital cameras. As seen in Figure 2.23, for an MTF value of 10%, the vertical resolution of the mega-pixel camera used in the FDOT vehicle's forward-view imaging system (75 lp/mm) is almost double that of the VGA camera (42 lp/mm).

Figure 2.22 MTF Response of the Forward-View Digital Camera (1296x1024) (vertical and horizontal resolution; MTF [%] vs. spatial resolution [lp/mm])

Figure 2.23 Comparison of the MTF Responses of Mega-pixel (1296x1024) and VGA (640x480) Resolution Cameras


The spatial resolution values plotted in an MTF plot are applicable to the sensor. In order to verify whether a given imaging system is capable of legibly registering information from a target object, the sensor resolution has to be transformed to the spatial resolution at the desired distance in the field of view using Eqn. (2.3). Then, setting the minimum image height to H_1 = 1/(2 ν_MTF10%), the minimum recognizable height of a feature, H_0, can be determined in terms of the distance, O, as:

H_0 = (O - f) / (2 f ν_MTF10%)    (2.28)

Similarly, based on the evaluation of the spatial resolution of the imaging system, the minimum recognizable crack width can be determined from a captured image (Figure 2.24). For example, the pavement imaging system of the FDOT highway evaluation vehicle employs a Basler L103 line-scan digital camera and Sigma Fisheye optics with a focal length of 15 mm. The pixel pitch of the sensor, p, is 10 µm and its distance from the pavement is 9.25 ft. Hence, the Nyquist frequency of this imaging system is 1/(2p), or 50 lp/mm. However, spatial resolution testing of this camera revealed that MTF(10%) corresponds to a spatial frequency of 28 lp/mm, indicating that the minimum recognizable crack width is 3.37 mm (0.13 inch) (Sokolic et al. 2004) (Figure 2.24). It is seen that Eqns. (2.5) and (2.28) furnish two distinct criteria for the minimum recognizable feature size, based on the theoretical Nyquist frequency and the evaluated MTF, respectively.
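The two criteria can be compared side by side in code. The following Java sketch evaluates Eqns. (2.5) and (2.28) for the pavement camera geometry quoted above; the 2819 mm object distance (9.25 ft) is a rounded assumption, which is why the MTF-based result comes out near, rather than exactly at, 3.37 mm.

public class MinFeatureExample {
    // Eqn. (2.5): Nyquist-based minimum feature height (all lengths in mm)
    static double h0Nyquist(double p, double f, double o) {
        return p * (o - f) / f;
    }

    // Eqn. (2.28): MTF-based minimum feature height; nuMtf10 in lp/mm
    static double h0Mtf(double nuMtf10, double f, double o) {
        return (o - f) / (2.0 * f * nuMtf10);
    }

    public static void main(String[] args) {
        double p = 0.01;   // 10 um pixel pitch [mm]
        double f = 15.0;   // focal length [mm]
        double o = 2819.0; // 9.25 ft camera-to-pavement distance [mm]
        double nu = 28.0;  // measured MTF(10%) frequency [lp/mm]
        // Prints roughly "Nyquist: 1.87 mm, MTF10: 3.34 mm"
        System.out.printf("Nyquist: %.2f mm, MTF10: %.2f mm%n",
                h0Nyquist(p, f, o), h0Mtf(nu, f, o));
    }
}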


Figure 2.24 Minimum Crack Width Recognition from an Image Captured by the FDOT Highway Evaluation Vehicle

Table 2.4 Sample Guidelines for Setting Focusing Distances (in meters) of the Forward-View and Side-View Cameras (pixel pitch of 6.7 µm and MTF10 = 74 lp/mm); for each focal length f, the first value is based on the Nyquist criterion (Ny.) and the second on the MTF criterion (MTF10)

Height of roadway sign text | f = 6.0 mm | f = 8.5 mm | f = 12.0 mm | f = 20.0 mm | f = 25.0 mm
1 cm | 8.9 / 8.9 | 12.7 / 12.6 | 17.9 / 17.7 | 29.8 / 29.6 | 37.3 / 37.2
2 cm | 17.9 / 17.8 | 25.4 / 25.2 | 35.8 / 35.5 | 59.7 / 59.2 | 74.6 / 74.0
5 cm | 44.8 / 44.4 | 63.4 / 62.9 | 89.5 / 88.8 | 149.2 / 148.0 | 186.5 / 185.0
10 cm | 89.5 / 88.8 | 126.9 / 125.8 | 179.1 / 177.6 | 298.5 / 296.0 | 373.1 / 370.0
20 cm | 179.1 / 177.6 | 253.7 / 251.6 | 358.2 / 355.2 | 597.0 / 592.0 | 746.2 / 740.0
30 cm | 268.7 / 266.6 | 380.6 / 377.4 | 537.3 / 532.8 | 895.5 / 888.0 | 1119.4 / 1110.0
50 cm | 447.8 / 444.0 | 634.3 / 629.0 | 895.5 / 888.0 | 1492.5 / 1480.0 | 1865.6 / 1850.0

Table 2.5 Sample Guidelines for Setting Focusing Distances (in meters) of the Pavement Camera (pixel pitch of 10.0 µm and MTF10 = 28 lp/mm); Nyquist-based / MTF-based values as in Table 2.4

Crack width | f = 4.7 mm | f = 6.0 mm | f = 8.5 mm | f = 15.0 mm | f = 20.0 mm
1 mm | 0.5 / 0.3 | 0.6 / 0.3 | 0.8 / 0.5 | 1.5 / 0.8 | 2.0 / 1.1
2 mm | 0.9 / 0.5 | 1.2 / 0.7 | 1.7 / 0.9 | 3.0 / 1.7 | 4.0 / 2.2
3 mm | 1.4 / 0.8 | 1.8 / 1.0 | 2.5 / 1.4 | 4.5 / 2.5 | 6.0 / 3.3
4 mm | 1.9 / 1.0 | 2.4 / 1.3 | 3.4 / 1.9 | 6.0 / 3.3 | 8.0 / 4.5
5 mm | 2.3 / 1.3 | 3.0 / 1.7 | 4.2 / 2.4 | 7.5 / 4.2 | 10.0 / 5.6


Tables 2.4 and 2.5 were developed based on Eqn. (2.5) (Nyquist frequency criterion) and Eqn. (2.28) (MTF criterion) to illustrate the determination of the focusing distances to objects that need to be clearly captured by an optical system for specific focal lengths. A pixel pitch of 6.7 µm was assumed for Table 2.4 and 10.0 µm for Table 2.5.

2.7 Level of Noise

2.7.1. Definition

Unrelated energy fluctuations in the optical signal are referred to as noise, which can occur during image capture, transmission, or processing, depending on the contents of the image. Noise has a significant impact on the quality of images, and hence evaluation and subsequent control of noise can certainly improve image quality. The primary sources of noise that originate from the imaging system are: (1) optical imperfections, (2) amplifier noise, (3) fixed pattern noise, (4) color shift noise, (5) compression artifact noise, and (6) temporally varying noise, which is random noise due to photon noise, dark noise, and the read noise of the sensor.


The causes of the above sources of noise are described in detail in (Roper Scientific 2003) and (Askey 2003). The compression artifact noise can be minimized by performing an appropriate compression setup in the capturing software. Through visual testing of uncompressed images captured by the forward-view and side-view imaging systems of the FDOT highway evaluation vehicle, it was determined that images captured with an 80% JPEG compression show minimal pixelization and distortion (Gunaratne et al. 2003). On the other hand, noise becomes a significant factor when optical systems with relatively long focal lengths are used in a moving vehicle, especially under low-lighting conditions and low shutter speeds. Furthermore, vehicle vibration caused by road roughness also introduces vibration noise. Figure 2.25 shows two images taken by the side-view camera under static (engine idling) and vibratory (engine speed of 3,000 rpm) conditions, where a slight blur is seen to occur due to the vehicle's vibration.

Figure 2.25 Images Taken with the Side-View Camera of the FDOT Highway Evaluation Vehicle in Static and Vibratory Modes

The FDOT highway evaluation vehicle's vibration effect was quantified by attaching a laser pointer to the forward-view camera enclosure. In order to quantify the effects of vibration, the laser beam was traced on a white board 8 meters away from the


camera for different engine speeds, from idle up to 4,000 rpm (the example shown in Figure 2.26 is for an engine speed of 3,000 rpm).

Figure 2.26 Trace of the Laser Beam for an Engine Speed of 3,000 rpm

The maximum vertical angular deviation of the laser beam due to vehicular vibration is given by:

α = Δx / D_board    (2.29)

where Δx is the maximum vertical deviation of the traced laser beam and D_board is the distance from the camera to the board. Based on the above angular deviation, the maximum vertical position change (on the sensor) of the image of an object at a distance O due to the vibration effect can be determined as (cf. Eqn. (2.5)):

δ_vibration [pixels] = f α O / ((O - f) × pixel pitch)    (2.30)
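To make Eqns. (2.29) and (2.30) concrete, the following Java sketch back-computes the sensor displacement for the forward-view camera; the 7 mm laser-trace deviation on a board 8 m away is an illustrative assumption chosen to land near the values in Table 2.6.

public class VibrationExample {
    // Eqn. (2.30): vertical image displacement on the sensor, in pixels
    static double vibrationPixels(double f, double alpha, double o, double pitch) {
        return f * alpha * o / ((o - f) * pitch);
    }

    public static void main(String[] args) {
        double alpha = 0.007 / 8.0; // Eqn. (2.29): 7 mm deviation at 8 m [rad]
        double pitch = 6.7e-6;      // pixel pitch [m]
        // Forward-view camera: f = 8.5 mm, O = 30.48 m -> about 1.1 pixels
        System.out.printf("%.1f pixels%n",
                vibrationPixels(0.0085, alpha, 30.48, pitch));
    }
}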


Table 2.6 shows the magnitude of vibration for the imaging systems tested in this work. Based on the results, one would expect a more pronounced effect of vibration on the side-view camera. It was also seen that the effect of the vibration on image quality can be minimized by using a special rubber shock-absorption casing for the camera. However, if distance measurements are to be performed based on the images, the effect of vehicular vibration needs to be accounted for.

Table 2.6 Effect of the Vehicular Vibrations (δ_vibration in pixels) on the FDOT Vehicle's Imaging Systems

Camera | Given values | Horizontal displacement [pixels] | Vertical displacement [pixels]
Forward-View | f = 8.5 mm, O = 30.48 m (100 ft) | 0.6 | 1.1
Side-View | f = 25 mm, O = 15.24 m (50 ft) | 1.9 | 3.3

2.7.2. Evaluation of Noise through Measurements of the Signal-to-Noise Ratio

The signal-to-noise ratio (SNR) describes the relative magnitude of a signal compared to the noise, or uncertainty, in that signal. Hence, the SNR is also considered an important parameter in the assessment of image quality (Table 2.7).

Table 2.7 Relationship Between SNR and Image Quality (Specht 2004)

SNR (dB) | Image Quality
More than 30 | Excellent, no perceptible noise.
25 | Good, only little noise.
20 | Average, noise visible as fine granulation.
15 | Bad, intensive noise, information drowns.
Less than 10 | Unusable.


Although there are several methods of measuring the SNR (Hasler and Susstrunk 2002), the SNR was determined in this work by measuring the luminance (Eqn. (2.1)) of at least 1,000 randomly chosen sub-areas in the image. In order to perform this evaluation, Munsell N9 white matte and Munsell N3 black matte boards (or the black and white patches on the Standard Color Checker shown in Figure 2.5) were imaged. Under these conditions, the black and white SNR can be defined separately as (Young et al. 1998):

SNR_black (dB) = 20 log10 [ (μ_white - μ_black) / σ_black ]    (2.31)

SNR_white (dB) = 20 log10 [ (μ_white - μ_black) / σ_white ]    (2.32)

where (μ_white, σ_white) and (μ_black, σ_black) represent the averages and the standard deviations of the pixel intensities of the white and black patches of the image of the Macbeth Color Checker (Figure 2.5). Eqns. (2.31) and (2.32) were used to evaluate the SNR at six different aperture settings under sunny conditions. This was facilitated by the ImageJ image processing software, which computes the standard deviations and average values of the pixel intensities for regions of interest (Figure 2.27). From the results of similar tests, users of imaging systems would be able to determine the optimum aperture settings and lighting conditions that minimize the noise.
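A minimal Java sketch of Eqns. (2.31) and (2.32), applied to the F4.0 row of Table 2.8 below, is given here; the class and method names are illustrative and independent of ImageJ.

public class SnrExample {
    // Eqns. (2.31)/(2.32): SNR in dB from patch statistics
    static double snr(double muWhite, double muBlack, double sigma) {
        return 20.0 * Math.log10((muWhite - muBlack) / sigma);
    }

    public static void main(String[] args) {
        // F4.0 row of Table 2.8: averages 239.4 / 48.3, deviations 0.592 / 0.548
        System.out.printf("SNR_white = %.1f dB%n", snr(239.4, 48.3, 0.592)); // 50.2
        System.out.printf("SNR_black = %.1f dB%n", snr(239.4, 48.3, 0.548)); // 50.8
    }
}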


Figure 2.27 Measurement of the Average White and Black Intensity Values and Standard Deviations from the Image of a Macbeth Color Checker for an Aperture Setting of F5.6

As seen in Table 2.8, the most desirable SNR_white and SNR_black (about 50 dB) are obtained at an aperture setting of F4.0. For the DVC1310C camera used in this test, the manufacturer-specified SNR is 60 dB for 10-bit images. Because of the transformation from a 10-bit to an 8-bit (JPG) image, a lower SNR (e.g., 50 dB) can naturally be expected. However, any further lowering of the SNR (Table 2.8) due to aperture settings or lighting condition variations can be attributed to the previously mentioned sources of noise. This further underscores the need for independent, user-driven evaluation of imaging systems. These criteria also enable the user to determine the most favorable lighting conditions. For example, the optimal white and black SNR values for the forward-view camera at an aperture setting of F4.0 in cloudy conditions were found to be 39.3 and 40.5 dB, respectively.


Table 2.8 Results of the SNR Testing for the Forward-View Camera at Various Aperture Settings (sunny conditions)

Aperture | AVE white | AVE black | StDEV white | StDEV black | SNR white [dB] | SNR black [dB]
F1.5 | 236.5 | 101.7 | 0.936 | 1.995 | 43.2 | 36.6
F2.0 | 232.0 | 77.1 | 1.147 | 0.885 | 42.6 | 44.8
F2.8 | 229.5 | 64.9 | 0.685 | 1.300 | 47.6 | 42.0
F4.0 | 239.4 | 48.3 | 0.592 | 0.548 | 50.2 | 50.8
F5.6 | 185.4 | 28.1 | 1.522 | 0.444 | 40.3 | 51.0
F8.0 | 67.2 | 5.7 | 1.141 | 0.887 | 34.6 | 36.8

2.7.3. Effect of Noise on the Dynamic Range

If images are used in the automatic detection of features, and in the evaluation of pavement distress in particular, the maximum tolerable noise level that will allow a specific algorithm to detect a specified difference in grayscale (e.g., 5%) becomes a key issue. At a certain level of noise, the detection algorithm fails to recognize and distinguish the required features (such as pavement cracks) from their background. In general, the threshold of recognition of one intensity level from another can be expressed in terms of a signal-to-noise ratio. If one considers images where the pixels at levels 1 and 2 have average intensity values of μ_1 and μ_2, then, based on Eqn. (2.31) or (2.32), the recognition threshold for level 1, k_level1, can be defined as:

k_level1 = 20 log10 [ (μ_2 - μ_1) / σ_1 ] dB    (2.33)

Assume that the camera captures a 10-bit grayscale image with 1024 (2^10) gray levels. After capture, the image is transformed into an 8-bit BMP file with only 256 (2^8) different gray levels, where 5% of the gray scale represents approximately 13 (256 × 5%)


gray levels (e.g., levels 1 and 2 in Figure 2.28(a), with intensities of 13 and 26, respectively). Figures 2.28(b) and 2.28(c) show two images of a crack with the noise represented by intensity standard deviations of 3.7 and 10.7, respectively. Based on Eqn. (2.33), the k_level1 values for the images in Figures 2.28(b) and 2.28(c) are 11 dB and 1.7 dB, respectively. Then, judging from the criteria in Table 2.7, it can be understood why automatic detection of the crack in Figure 2.28(c) is practically impossible. Hence, Eqn. (2.33) and Table 2.7 provide an excellent set of tools to assess the limitations of a pavement imaging system in advance of automatic distress (crack) detection.

Figure 2.28 (a) Maximum Tolerable Noise Within Gray Levels; (b) Example of a Low Level of Noise; and (c) Example of a High Level of Noise Prohibiting Crack Recognition
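The two threshold values quoted above follow directly from Eqn. (2.33), as the short Java check below shows (names illustrative).

public class NoiseThresholdExample {
    // Eqn. (2.33): recognition threshold between two gray levels, in dB
    static double kLevel(double mu1, double mu2, double sigma1) {
        return 20.0 * Math.log10((mu2 - mu1) / sigma1);
    }

    public static void main(String[] args) {
        System.out.printf("%.1f dB%n", kLevel(13, 26, 3.7));  // ~11 dB, Fig. 2.28(b)
        System.out.printf("%.1f dB%n", kLevel(13, 26, 10.7)); // ~1.7 dB, Fig. 2.28(c)
    }
}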


2.8 Optical Distortion

Although imaging technology has progressed to the point where it is now possible to correct many lens defects and color shifts, most digital cameras still suffer from lens distortion. Lens distortion is caused by lenses of relatively inferior quality, in which the magnification varies from the center of the lens to the edges. In fixed focal length lenses, the effect is less prominent than in zoom lenses. In general photography, distortion may hardly be noticeable. However, when parts of the image are subjects composed of straight edges, such as buildings or right-of-way highway features, barrel or pincushion distortion may affect the quality of the image (Figure 2.29). Moreover, if the images are used for evaluations such as distance gauging between certain image features, then a correction must be made for the level of distortion.

Figure 2.29 Images Affected by Barrel and Pincushion Lens Distortions

One method of measuring distortion is based on capturing the image of a specially printed target, such as the Edmund Optics 5%, 10%, and 15% distortion targets, at radial heights between 20 mm and 80 mm. When a curved line of a specific distortion target (i.e., 10%, as shown in Fig. 2.30) at a given radial distance appears straight, that specific


distortion (i.e., 10%) can be considered as the distortion in the image at the given radial distance. It was found that the forward-view camera with a wide angle lens of focal length 8.5 mm used in this illustration suffers from a 10% barrel distortion at a radial distance of 60 mm.

Figure 2.30 10% Barrel Distortion of the Forward-View Camera Lens in the FDOT Highway Evaluation Vehicle

A more objective method of measuring the image distortion is to capture an image of a uniform grid (Figure 2.31). When the distance between two given points on the target and the distance between the corresponding points on the image are measured separately, the distortion can be determined. For example, Figure 2.31 shows a standard grid and its image captured by the forward-view camera of the FDOT highway evaluation vehicle at a distance of 1 m from the grid target. The standard length of the line segment AB is 50 mm. By using the program ImageJ (Rasband 2004), 69 pixels can be counted on A'B', which is the image of AB. This represents a distance of 462 µm (69 × 6.7 µm) on the sensor. If the camera produced no distortion, Eqn. (2.3) could be used, with a


focal length of 8.5 mm and a target distance of 1 m, to determine the ideal size of the image of AB as 425 µm. Hence, the barrel distortion of this camera can be computed as (462 - 425)/425, or 8.7%, for the given radial distance.
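This grid-based estimate reduces to a few lines of arithmetic, sketched below in Java; the small difference from the 8.7% quoted above comes from carrying 69 × 6.7 µm = 462.3 µm without rounding.

public class DistortionExample {
    public static void main(String[] args) {
        double f = 8.5, o = 1000.0;      // focal length and target distance [mm]
        double objectLen = 50.0;          // true length of segment AB [mm]
        double pitch = 6.7e-3;            // pixel pitch [mm]
        double measured = 69 * pitch;     // 69 pixels counted on A'B'
        double ideal = objectLen * f / o; // ideal image size, about 0.425 mm
        double distortionPct = (measured - ideal) / ideal * 100.0;
        System.out.printf("barrel distortion = %.1f%%%n", distortionPct); // ~8.8%
    }
}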

PAGE 91

optimum aperture, exposure, and gain combination that satisfies both the color reproduction and noise criteria. The image compression ratio that would minimize the compression artifact noise can also be determined; (3) assess the color reproduction capability of the imaging system by using a standard Color Checker. The optimum lighting conditions as well as the corresponding white balance can be determined using the described procedure; (4) evaluate the spatial resolution of the imaging system with respect to highway/pavement feature identification needs by deriving the Modulation Transfer Function of the system using a spatial resolution target. The criteria provided here will also enable setup of the field of view and positions of the imaging systems; and (5) check the degree of distortion in the images using the presented methodology. By estimating the level of image distortion, the estimates of crack widths and distances between desired features can be refined. Moreover, the same criteria can be used to select appropriate optics that minimizes the level of distortion.
CHAPTER 3

INVESTIGATION OF NOISE AFFECTING PAVEMENT DISTRESS IMAGES AND ENHANCEMENT OF IMAGES USING NOISE FILTRATION

3.1 Noise that Affects CCD Sensors

Noise that degrades the quality of digital images can be described as the visible effects of a cumulative electronic error, other interferences, or compression artifacts that appear in the final image obtained from a digital camera. Images acquired using modern CCD sensors (Figure 3.1) can become contaminated by random noise originating from a variety of sources such as variations in the detector sensitivity, environmental variations, the discrete nature of radiation, transmission or quantization errors, etc. (Figure 3.9). Noise, which is not an inherent part of the signal, arises as a result of unmodelable processes persistent in the production and capture of a signal. Therefore, noise reflects how well the sensor and the digital signal processing systems function inside the camera.

Figure 3.1 Typical CCD Sensor Containing Micro-lenses, Colored Filters, and Photosensitive Diodes (Source: Fuji Photo Film USA)
Noise can be described through the variation in the pixel intensities of a digital image of a uniformly bright area. Noise is usually described by its probabilistic characteristics. Idealized noise, also called white noise, is a signal with a power spectrum (energy per unit time) falling within given frequency bins homogeneously distributed across all frequencies. Having power at all frequencies, the total power of such a signal would be infinite, and therefore the white noise signal is purely a theoretical concept. By considering the noise of a certain signal as white noise in the frequency domain, one can define important statistical properties of that noise in time. As an example, if a random process w[n] is white noise, its values w[n] and w[m] are uncorrelated for every n and m where m ≠ n:

E[w[n] w[m]] = 0    (3.1)

A close approximation to the noise that occurs in many practical cases related to digital cameras is Gaussian noise (Sonka et al. 1999). Gaussian noise, a special case of white noise, is a random noise with a normal probability distribution (Figure 3.2). Here, Gaussian noise will be considered as additive noise with a Gaussian distribution.

Figure 3.2 Gaussian Probability Density Function
In order to comprehend the mechanisms that introduce noise to CCD sensors, one must possess a thorough understanding of the structure of CCD sensors. Hence the following sections describe the important aspects of the composition of CCD sensors.

3.1.1. Overview of the Charge-Coupled Device (CCD)

The heart of the digital imaging system is the CCD sensor, which relies on the physical conversion of light, or photons, to an electronic charge. The charge is generated by electrons excited from the poly-silicon valence band to the silicon dioxide conduction band (Figure 3.3) due to a reaction between the silicon and the impinging light (Davies and Fennessy 2001). The charge is stored as a potential in the silicon substrate layer located directly under the sensor. Therefore the number of electrons created for a given wavelength of light will be a linear function of the number of photons per unit time and unit area.

Figure 3.3 The CCD Sensor Detail (Davies and Fennessy 2001)
The CCD sensor is created by using a technique allowing the assembly of thousands or millions of separate elements, or pixels, together. Generally, the greater the number of pixels, the greater the detail that can be achieved in the resulting image. The current market favors the full frame CCD and the interline CCD, which differ in both quality and cost. The pavement imaging system of the FDOT highway evaluation vehicle contains a Basler L-103 camera with a line-scan CCD sensor, shown in Figure 3.4 (a), with a linear array of 2048 pixels which is continuously integrated to form high resolution images. The forward-view and side-view imaging systems of the FDOT highway evaluation vehicle use DVC-1310c color cameras (Figure 3.4 (b)) that utilize a 2/3-inch Sony ICX085AK interline CCD sensor. This sensor utilizes 1300 x 1030 effective pixels.

Figure 3.4 (a) Front Face of the Line-scan Camera Basler L103 Without Optics; (b) Digital Camera DVC 1310c
3.1.2. Gain and Exposure

In CCD imaging, the gain refers to the level of amplification a given system will produce. Gain is reported in terms of electrons/ADU (analog-to-digital unit). As an example, a gain of 8 associated with a signal means that the Analog-to-Digital Converter (ADC) of the camera digitizes the signal so that each ADU corresponds to 8 photoelectrons. Usually the factory default gain value is set in the imaging system so that, with optimal lighting and exposure settings, the linear output range of the CCD sensor maps to the input range of the ADC. Under these conditions, black color will produce a gray value of 1 from the ADC and white will produce a gray value of 254. Then the amplification of the signal, as a multiple of the raw signal from the CCD before amplification, can be computed as follows:

k = 254 g_act / c_full    (3.2)

where k represents the gain coefficient, g_act is the actual gain value obtained after amplification of the signal in units of electrons per ADU, and c_full is the full well capacity of the sensor pixel in electrons. As an example, if the Basler L-103 camera reports a gain of 500 associated with a given image, and the full-well capacity of this sensor based on its pixel size of 10 µm x 10 µm (Table 3.1) is 82,000 electrons, then by using Eqn. (3.2) the actual multiplication gain coefficient k is equal to 1.55. The above concept will be used in Chapter 4 to obtain information on gain.
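As a quick illustration of Eqn. (3.2), the following Python sketch reproduces the Basler L-103 example; the function name is illustrative and the 82,000-electron full-well figure is the typical value from Table 3.1:

    def gain_coefficient(g_act, c_full, gray_levels=254):
        """Multiplicative gain k = gray_levels * g_act / c_full (Eqn. 3.2)."""
        return gray_levels * g_act / c_full

    print(gain_coefficient(500, 82_000))  # about 1.55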
The gain of a camera can also be selected under software control to meet the needs of a given application. For example, the gain can be increased when the survey of asphalt pavements is conducted under low lighting (photon-starved) conditions and a high-sensitivity mode is required. Alternatively, the gain can be reduced when imaging of concrete pavements under bright light conditions is photon-noise limited and a high SNR mode is required. When the gain setting of a camera is increased, both the signal and the noise are amplified, resulting in no change in the SNR, as shown in Eqn. (3.5). If µ_x is the mean of all intensity values x in the image, σ_x² represents their variance, and a is the signal intensity amplification, then from basic statistics:

µ_ax = a µ_x    (3.3)

σ_ax = a σ_x    (3.4)

Based on Eqns. (2.30) and (2.31):

SNR(ax) = µ_ax / σ_ax = a µ_x / (a σ_x) = µ_x / σ_x = SNR(x)    (3.5)

Therefore, the gain is not an effective tool for increasing the amount of information contained in the signal from a CCD. Gain only changes the contrast of an existing image. It must be noted that although the SNR would not change with a gain, the gain could hinder the contrast between two bright areas, because both areas could reach the saturation pixel intensity limit of 255 during amplification of the signal.
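The invariance of the SNR under amplification (Eqns. (3.3)-(3.5)) can be verified numerically. The sketch below uses synthetic, normally distributed intensities; it is an illustration, not camera data:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(100.0, 5.0, size=100_000)   # synthetic pixel intensities
    a = 3.0                                    # signal amplification (gain)

    snr = lambda v: v.mean() / v.std()
    print(snr(x), snr(a * x))                  # identical SNR values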
At present, more and more digital cameras use on-chip exposure to regulate the conversion of light captured by the pixels. On-chip exposure involves accumulating photons in the CCD sensor pixel wells over a defined period of time. As photons strike the pixels, a corresponding electron charge is collected in each pixel. This amount of electron charge is directly proportional to the amount of light that has been accumulated from the sample. Once the predefined exposure has ended, this accumulated electron charge is converted from an analog to a digital signal.

The downward imaging system of the FDOT highway evaluation vehicle, based on its Basler L-103 line-scan camera, is equipped to use two different exposure times: 1/19,000 and 1/40,000 sec. The exposure time is set for each line constituent of the image. Setting the proper exposure time is based on evaluation of the gain used for the previous line captured by the camera. As an example, if lighting becomes too bright, the gain value is reduced to a preset minimum value, and if pixels in the captured image are still too bright, then the exposure time changes to 1/40,000 second. For the Basler L-103, the manufacturer recommends a minimum exposure time of 1/50,000 second.

The following method can be used to determine the proper exposure time for the captured image. A test image for a given exposure time T, e.g., 1 sec, has to be captured and analyzed by using an appropriate image processing software, such as ImageJ. The intensity values of the brightest and darkest pixels are recorded. As an example, the brightest and the darkest intensity values shown in Figure 3.5 are equal to I_max = 255 and I_min = 80 respectively; the difference between them is 175. As a rule (Stein 2004), the brightest pixel in the image should not exceed 80% of the maximum intensity value of the grayscale; above that level the signal becomes non-linear and weak. For a camera which produces 8-bit images, the threshold value based on the 80% criterion is I_threshold = 205.
Then, the correct exposure time T_cor can be obtained as (Stein 2004):

T_cor = T (I_threshold - I_min) / (I_max - I_min) = 1 sec x (205 - 80)/(255 - 80) = 0.71 sec    (3.6)

Figure 3.5 Example of the Pixel Intensity Values of the Image
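A small sketch of this exposure-correction rule, assuming an 8-bit camera and the 80% criterion (Stein 2004); the function name is illustrative:

    def corrected_exposure(t_test, i_max, i_min, bit_depth=8, criterion=0.8):
        """Scale the test exposure so the brightest pixel lands at the threshold."""
        i_threshold = criterion * (2**bit_depth - 1)   # about 204-205 for 8 bits
        return t_test * (i_threshold - i_min) / (i_max - i_min)

    print(corrected_exposure(1.0, 255, 80))  # about 0.71 s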
3.1.3. Area-scan CCD Sensor

The forward-view and side-view imaging systems of the FDOT highway evaluation vehicle utilize area-scan interline CCD sensors. The interline sensor type has been designed to capture live action at 25 frames per second. These sensors move the charges from the sensor elements first sideways into their own charge-transfer region, then down the vertical shift registers, and finally out via the horizontal shift registers (Figure 3.6 (b)) (Electus Distribution 2004). Inside each cell is a light sensitive pixel element, a very tiny photodiode, together with a charge transfer area that forms part of a long vertical shift register. There are also two control elements called the readout gate and the overflow gate (Figure 3.6 (a)). Moreover, each cell also contains a short section of a long vertical structure called the overflow drain.

When light falls on the sensor element, the photons generate charge carriers, and as a result a small quantity of charge builds up in that part of the cell. Then, after a short time, a voltage pulse is applied to the readout gate, which has the effect of lowering the retaining wall on that side of the photodiode, allowing the accumulated charge to flow out of the photodiode and into the charge-transfer area. Thus, after the readout pulse, the charge that was generated in the sensor element by the incident light has been shifted into the charge-transfer area alongside. As mentioned earlier, this area is actually part of a long vertical shift register which links all of the charge-transfer areas in a complete column of cells. This shift register is used to transport the charges in each of the charge-transfer areas down the columns, and ultimately out of the chip.

The overflow gate and the drain are designed to prevent the sensor elements from accumulating too much charge in case the light falling on them is too strong due to overexposure. The idea is that the overflow gate is held at a voltage level where the retaining wall on that side of the sensor photodiode is a little lower than on the charge-transfer side. This means that if the charge builds up to reach that level, any further charge simply flows over the wall into the overflow drain, where it is drained away. This system prevents the photodiode from ever completely filling with charge, which would tend to saturate the CCD image.
Figure 3.6 (a) Basic Structure of the Picture Element Cell of the CCD Sensor; (b) Structure of the Area-scan Interline-transfer CCD Sensor

3.1.4. Line-scan CCD Sensor

The Basler L103 line-scan camera used in the FDOT highway evaluation vehicle employs CCD sensor chips which provide features such as electronic exposure time control and anti-blooming. The major components in the camera electronics include the CCD sensor, two amplifiers, and two Analog-to-Digital Converters. The pixels in the CCD sensor output voltage signals when they are exposed to the light. At readout, accumulated charges are transported from the light-sensitive sensor elements to the CCD shift registers. The charges from even and odd pixels are processed separately in two channels, as shown in Figure 3.7. The charges then move from the two lines of shift registers to the output amplifiers, where they are converted to voltages proportional to the accumulated charges. The voltages are digitized and transmitted by the camera. The video data is transmitted as a single 8-bit video data stream.
Figure 3.7 Structure of the Line-scan CCD Sensor

Since the line-scan camera used in the pavement imaging system of the FDOT highway evaluation vehicle is also a CCD camera, it inherits all of the sources of noise that a CCD camera has. In addition, in a line-scan camera, the manufacturer has to assemble the line-scan images together to form one two-dimensional pavement image. Hence, additional sources of error may be present in the line-scan CCD camera due to errors in assembling. As seen in Figure 3.7, the Basler L-103 processes odd and even pixels separately in two different data streams. Consequently, this process can introduce another type of noise, as shown in the images in Figure 3.8 captured with the FDOT pavement imaging system during the experiment described in Section 4.
Figure 3.8 Noise Due to Improper Processing of Odd and Even Pixels on the Line-scan Sensor

3.2 Types of Noise in CCD Cameras

With this understanding of how CCD cameras work, one can now classify the different types of noise. There are a variety of noise sources present in pavement image data acquired with a Charge-Coupled Device (CCD) line-scan camera. The photo-conversion process by which object light is converted into photoelectrons introduces object-dependent noise characterized statistically as a Poisson random process. Non-ideal effects introduce extraneous electrons that are indistinguishable from object-dependent photoelectrons. Examples of this noise include object-independent photoelectrons, bias electrons, and thermo-electrons, and their cumulative effect can be described by the term background count.
Read-out noise further contributes to the degradation of images acquired with the CCD camera aboard the pavement imaging system. This noise is characterized as a Gaussian random process. The majority of the noise sources attributing to the loss of image quality, along with their origins, are shown in Figure 3.9.

Figure 3.9 Sources of Noise and the Source of Their Occurrence in Digital Camera

Based on (Snyder et al. 1994), the CCD image data can be described by the following mathematical model:

r_j = n_j^obj + n_j^b + g_j,    j = 1, ..., J    (3.7)

where r_j is the number of electrons acquired by reading out pixel j of the CCD sensor array, n_j^obj is the number of object-dependent photoelectrons, n_j^b is the number of electrons of the background count, g_j is the readout noise, and J is the number of pixels in the CCD camera array. The random variables n_j^obj, n_j^b, and g_j are statistically independent of each other.
3.2.1. Object-dependent Noise

3.2.1.1. Photon Noise

Today, the use of modern CCD cameras, such as the forward-view and pavement cameras of the FDOT highway evaluation vehicle, is common. The CCD sensor inside the camera is an array of photosensitive elements, each one of which generates photoelectrons in response to light and stores them as a charge. Photons incident on the CCD chip convert to photoelectrons within the device's silicon layer. Since light consists of discrete photons that do not arrive at a camera sensor in a steady stream, the number of photons collected on a sensor within a given time interval is a random variable. Therefore photoelectrons constitute not just the signal, but also carry the statistical variation in the photon arrival rate at a given point. Modern CCD cameras are sensitive enough to count the individual photons within a finite time, and hence photon noise is introduced into the signal (Young et al. 1998). Most of the observations made about noise and its various sources hold equally well for other imaging modalities such as Complementary Metal Oxide Semiconductor (CMOS) sensors or photographic film.

The photon noise problem arises from the fundamentally statistical nature of photon production, which is governed by the laws of quantum physics. The quantification of photons must be limited to only an average number within a given observation time window. The probability distribution of p photons in an observation window of length T
seconds can be closely approximated by a Poisson distribution with a mean of ρT, where ρ is the photon flux, an intensity rate parameter measured in photons per second. Therefore, the number of photoelectrons collected by a CCD pixel also follows a Poisson distribution, which has the property that its variance is equal to its mean, i.e., σ² = µ. Therefore the relationship between the signal and the noise can be written as:

n_p(m,n) = sqrt(x(m,n))    (3.8)

where x(m,n) represents the signal constituted by the photoelectrons collected inside the CCD pixel. As an example, for given lighting conditions, if the photon flux is 9,000 photons per pixel per second, the CCD sensor will capture 9 photons per pixel for an exposure time (T) of 1 millisecond. For this exposure time, the photon noise will represent sqrt(9) = 3 photons, or 33.3% of the signal. But if the exposure time is prolonged to 0.1 second, the noise will represent sqrt(900) = 30 photons, or 3.3%. Thus, as the exposure time increases, the effect of the noise on the signal diminishes. The signal-to-noise ratio for an image with Poisson noise is then given by:

SNR_Poisson = ρT / sqrt(ρT) = sqrt(ρT)    (3.9)

where sqrt(ρT) is the standard deviation, or photon noise. Based on Eqn. (3.9), one can conclude that even if there are no other noise sources in the imaging sequence, the statistical fluctuations associated with counting over a finite time interval T would still lead to a finite signal-to-noise ratio (SNR).
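The example above can be checked with a few lines of Python; the 9,000 photons-per-second flux is the figure from the text, and everything else follows directly from Eqns. (3.8) and (3.9):

    import math

    def photon_snr(flux_per_s, exposure_s):
        """SNR of a Poisson photon-counting process (Eqn. 3.9)."""
        return math.sqrt(flux_per_s * exposure_s)

    for t in (0.001, 0.1):                 # exposure times from the example
        signal = 9000 * t                  # mean photons collected
        noise = math.sqrt(signal)
        print(t, noise / signal * 100, photon_snr(9000, t))
    # 1 ms: noise is ~33% of the signal; 100 ms: ~3.3%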
Even though photon noise cannot be reduced via camera design (Roper Scientific 2004), it can be minimized through the imaging method. For very bright signals, for which ρT exceeds 10^5, the photon noise, or the fluctuations due to photon statistics, can be ignored if the sensor has a sufficiently high saturation level. This can be achieved by increasing the exposure time of an image, thus capturing more photons.

Generally, the pixels in CCD cameras have a finite well capacity, reaching their limit at the saturation level. It defines the amount of charge an individual pixel can carry before saturating. Full well capacity depends upon the pixel size of the CCD, the operating voltage used on the CCD, and the aperture of the camera lens. The Basler L-100 line-scan camera used in the pavement imaging system of the FDOT highway evaluation vehicle uses a one-pixel-line CCD sensor with a pixel size of 10 x 10 µm. Since the manufacturer of the L-100 digital camera does not provide information regarding the full well capacity, the typical full well capacity for a camera with a pixel size of 10 x 10 µm had to be derived from the data provided by Roper Scientific (Roper Scientific 2004), as shown in Table 3.1. It is seen that the full well capacity of this camera is large enough to handle longer exposure times. However, the maximum exposure time is limited because of the rate at which the camera has to capture images to suit the vehicle speed.
Table 3.1 Full Well Capacity of Three Given Cameras (Courtesy of Roper Scientific) and Computed Value for Basler L-100 Camera

CCD Type            Pixel size (µm)    Typical Full Well (e-)
Kodak KAF1401E      6.8 x 6.8          45,000
Basler L-100        10 x 10            82,000
Marconi CCD37-10    15 x 15            165,000
Kodak KAF1000       24 x 24            630,000

3.2.1.2. Thermal Noise

Normally, light falls on the CCD, interacts with the silicon layer of the sensor, and frees up electrons. These electrons are then moved out of the CCD and counted by the ADC. Besides light, there are several ways electrons can leak into a pixel. The dominant source of these electrons is dark current: electrons freed from the CCD material itself through thermal vibration get trapped in the CCD well and become indistinguishable from "true" photoelectrons. The thermal noise due to the dark current of the CCD is about 1 electron per pixel per second if the chip temperature is 0°C (Stein 2004). As the exposure time T increases, the number of thermal electrons increases. Dark current describes the rate of generation of thermal electrons at a given CCD temperature. The probability distribution of thermal electrons is also a Poisson distribution where the rate parameter is an increasing function of temperature. Thermal noise is the square root of the number of thermal electrons generated within a given exposure time and can be written as:

n_t(m,n) = sqrt(DT)    (3.10)

where D represents the dark current and T represents the exposure time interval.
As a measure of the thermal noise, one can look at the time necessary to produce a sufficient number of thermal electrons for the image to move from one brightness level to the next in the absence of photoelectrons striking the sensor through the optics. This last condition, the absence of photoelectrons, is the reason for the name dark current. Mapping of pixels that have some source of signal other than light can be achieved by taking a dark frame image (Figure 3.10 (a)). It is an image with an exposure of several minutes, but without any light allowed to strike the CCD sensor, taken at the same temperature. Moreover, a dark frame can also show the on-chip amplifier circuit emitting photons onto the CCD array due to the heating by the amplifiers themselves, which can be seen in the lower left corner of Figure 3.10 (b). With the voltages required for read-out, the intensity of this light can become quite high and can affect the ability to capture images correctly.

Figure 3.10 (a) Example of Dark Frame Image; (b) Dark Frame Image Showing Luminance Due to the On-chip Amplifier
There are alternative techniques for suppressing dark noise, and these usually involve estimating the intensity difference corresponding to the average dark current for the given exposure time and then subtracting this value from the CCD pixel intensity values before the ADC processing occurs. While this technique does reduce the dark current average, it also reduces the possible dynamic range of the signal. To achieve the most favorable working conditions for an imaging system, it can be maintained in favorable temperature conditions using a fan and a sensor mounted inside the camera enclosure, as shown in Figure 3.11. This technique is employed in the forward-view, side-view, and pavement imaging systems of the Florida DOT highway evaluation vehicle.

Figure 3.11 Details of the Cooling Fan for Forward-View Imaging System of FDOT Highway Evaluation Vehicle
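A minimal sketch of the dark-signal subtraction described above, assuming 8-bit images held in numpy arrays; real systems may apply the offset before the ADC, as the text notes:

    import numpy as np

    def subtract_dark(raw, dark_frame):
        """Subtract a matched dark frame (same exposure and temperature),
        clipping the result to the valid 8-bit range."""
        corrected = raw.astype(np.int32) - dark_frame.astype(np.int32)
        return np.clip(corrected, 0, 255).astype(np.uint8)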
Other cooling techniques are based upon Peltier cooling elements, with which it is straightforward to achieve temperature differences of up to 130 K depending on the element type and air humidity (Norton 2004). Peltier cooling elements are based on pumping the heat from one side of a two-semiconductor element to the other, utilizing the Peltier effect. The principle of Peltier cooling elements is shown in Figure 3.12 (a), while their structure is shown in Figure 3.12 (b). If one places a drop of water in the hollow on the joint of p-type (such as antimony, Sb) and n-type (such as bismuth, Bi) semiconductors and switches on the current, the drop will freeze, and with the reversal of the direction of the current the drop will melt (Rudometov and Rudometov 2004). This cooling technique leads to low thermal electron production rates. If night time surveys are performed during hot summer nights when outside temperatures are high, dark or thermal noise may grow large enough to influence the quality of the pavement image.

Figure 3.12 Peltier Cooling Element: (a) Principle; (b) Structure
3.2.1.3. Bias Noise

There is a certain amount of electric potential difference that must be imposed on the chip so that each pixel can act as a photosensitive element able to gather electrons. Applying this potential difference on the chip causes an electron build-up in the pixels, even if no light is incident on the CCD sensor. The number of bias electrons remains constant no matter how long one exposes the CCD chip, as long as the potential difference applied on the chip does not change.

To evaluate the bias noise, a bias frame (Figure 3.13) is captured with a zero exposure time and no light striking the CCD (with the shutter closed). Moreover, the bias frame also determines the amount of the read-out noise described in Section 3.2.2.1, since an image of a bias frame that captures bias noise is also affected by the noise due to the read-out of the sensor. If a zero-length exposure is not allowed by the capturing software, the use of the shortest possible exposure time is a solution. The histogram of a typical averaged bias frame reveals a Gaussian distribution (Howell 2000). The intensity of pixels in a bias frame should have an average value somewhere above zero, randomly varying among pixels. It is important that no pixel in a bias frame image has an intensity value of zero, which would represent a value that was outside the lower range of the ADC that processes the analog signal from the sensor; such pixels would carry no signal and therefore no statistical information on the noise.
Figure 3.13 Example of Bias Frame Image

3.2.2. Object-independent Noise

3.2.2.1. Read-out Noise

Another type of signal-independent noise is read-out noise. All electronic noise sources inherent to the digital camera and the CCD sensor are collectively referred to as read-out noise, and it represents the error introduced during the process of reading the signal from the sensor, in this case through the field effect transistor (FET) of a CCD chip. The read-out noise of a typical CCD camera is about 15 electrons per pixel for every read-out process (Stein 2004). Read-out noise for a given imaging system can be evaluated by capturing the bias frame described in Section 3.2.1.3. The standard model for this type of noise is additive, Gaussian, and independent of the signal. As mentioned in Section 1.1.2, since the line-scan sensor is a CCD sensor with one line of pixels, the same concepts apply to line-scan cameras as well. This noise is present even at exposure times of zero length. The magnitude of this noise varies with the gain and also over time.
A major component of read-out noise arises from the on-chip amplifier, and it can be reduced to manageable levels by appropriate read-out rates and proper electronics.

3.2.2.2. Quantization Noise

Noise that occurs in the analog-to-digital converter (ADC) is called quantization noise. The noise is additive and independent of the signal when the number of image bits B ≥ 4, which is equivalent to a number of gray levels L = 2^B ≥ 16. For a signal that has been converted to an electrical form, and thus has minimum and maximum voltage values, the ADC is adjusted so that zero corresponds to the minimum voltage value and 2^B - 1 corresponds to the maximum voltage value. The following equation for the SNR can be used to evaluate the quantization noise q_n (Young 1998):

SNR = 6B + 11 dB    (3.11)

Quantization noise can usually be ignored, as the overall SNR of a complete imaging system is typically dominated by the smallest SNR. The overall magnitudes of the SNR for the forward-view and pavement imaging systems of the FDOT highway evaluation vehicle are given in Table 3.2 for the best and worst cases. The forward-view camera was tested in November 2003 for different aperture settings, and the pavement camera was tested in May 2004 for different pavements, lighting conditions, and speeds. For example, the maximum SNR for the line-scan camera used in the FDOT highway evaluation vehicle with a preset JPEG (8-bit) image format depends on the quantization noise. Based on Eqn. (3.11), SNR_linescan = 6 x 8 + 11 = 59 dB.
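Eqn. (3.11) is easy to evaluate for the formats in use; the 12-bit value below is included only for comparison:

    def quantization_snr_db(bits):
        """Rule-of-thumb quantization SNR (Eqn. 3.11)."""
        return 6 * bits + 11

    print(quantization_snr_db(8))   # 59 dB, the 8-bit line-scan case
    print(quantization_snr_db(12))  # 83 dB for a 12-bit format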
Table 3.2 Magnitude of the Best and Worst SNR for Forward-View and Pavement Cameras of FDOT Highway Evaluation Vehicle

Camera            Conditions                                                      Average SNR (dB)
Forward-View      Aperture F4.0                                                   50.8
(area-scan CCD)   Aperture F8.0                                                   35.7
                  Aperture F1.5                                                   39.9
Pavement          Concrete pavement, sunny, pavement lights ON                    28.5
(line-scan CCD)   Concrete pavement, mostly sunny, pavement lights OFF            30.4
                  Asphalt pavement, sunny with shadow overcast, pavement lights ON   20.0
                  Asphalt pavement, sunny with shadow overcast, pavement lights OFF  30.4
                  Asphalt pavement, cloudy, pavement lights ON                    33.1
                  Asphalt pavement, cloudy, pavement lights OFF                   34.1

3.2.2.3. Noise Due to the Optical System and Defects on the CCD Sensor

In a CCD sensor, the most common noise sources due to the optical system are vignetting and shadows from out-of-focus dust specks in the optical system and on the sensor. Moreover, most CCD sensors will have a few pixels that do not respond to light properly due to defects in the CCD structure that occur during the manufacturing process. The result is that those pixels are not as sensitive to the light as the surrounding pixels and therefore they appear dark. Another form of defect that can affect the ability to recognize pavement features is the column defect, due to bad pixels that can trap or drain charge away, resulting in dark spots. As the image is read from the CCD, the charge must be pushed from pixel to pixel in the vertical direction.
As the charge for each pixel passes through a bad pixel, it is drained away. Thus each pixel below the trap becomes dark, as shown in Figure 3.14.

Figure 3.14 Example of Column Defect in a Flat Frame Image

If the image is captured with an evenly diffused light source, also called the flat frame (Figure 3.15 (a)), the noise due to the optical system (Figure 3.15 (b)) as well as any defects on the CCD (Figure 3.14) will be unveiled. To obtain a flat frame image, one has to place the camera face up with the window of the optics exposed. The light can be diffused by placing one or more white paper sheets over the optical lens. The key is to take a shot with the exposure time high enough to achieve an average intensity level in the image of half the maximum intensity level, i.e., 128 out of 256 levels for an 8-bit camera. Exposure lengths of 0.5 - 2.0 seconds are ideal, with average intensity values in the image of 1,000 to 3,000 for a 12-bit camera producing 4,096 different intensity levels, or 10,000 to 40,000 for a 16-bit camera which is able to differentiate 65,536 intensity levels.
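A flat frame is typically used to divide out vignetting and dust shadows (flat-field correction). The text implies but does not spell out this normalization; the sketch below is one standard way to do it, assuming numpy arrays:

    import numpy as np

    def flat_field_correct(raw, flat):
        """Divide out the per-pixel sensitivity recorded in the flat frame."""
        gain_map = flat.astype(np.float64) / flat.mean()    # ~1.0 where uniform
        corrected = raw.astype(np.float64) / np.maximum(gain_map, 1e-6)
        return np.clip(corrected, 0, 255).astype(np.uint8)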
Figure 3.15 (a) Example of Typical Flat Frame Image; (b) Flat Frame Showing Some Vignetting on the Corners as Well as Dark Halos From the Dust Specks in the Optical Path

3.2.2.4. Compression Noise

In addition to the noise added inherently by the sensor, image processing techniques can also corrupt the image with noise. Very often, the raw image acquired by the sensor is processed using various operations such as filtering, compression, and enhancement. The JPEG image format is the predominant format used by digital cameras, among others such as BMP or TIFF. Even professional cameras have a JPEG mode (Figure 3.16) despite its significant compression, which allows high compression ratios at the expense of loss of information. This compression algorithm has become the de facto image format for electronic storage of photographic images, mainly due to its ability to reduce an image file size by a ratio of 8:1 to 10:1 without any degradation in image quality to the human eye at normal viewing magnification. The compression algorithm is based on the fact that humans have only a restricted capability to perceive high frequencies. The more densely one compresses the file, the more information one loses and the more artifacts are created in the image.
JPEG is particularly susceptible to artifacts because of the way it attempts to maintain details (edges) against large plain color areas (Askey 2003). Because JPEG analyzes the image in 8 by 8 blocks, these artifacts can sometimes appear with sharp "square" edges. Furthermore, JPEG cannot handle very noisy images effectively because of the excessive amount of information in a very noisy image that JPEG has to discard, thus introducing more artifacts.

Figure 3.16 JPEG Artifacts from Pavement Image (zoomed-in) where Lines Demarcate 8 by 8 Pixel Blocks Processed by JPEG Algorithm

If the image contains features that are one-dimensional, such as hairline cracks, compression artifacts can lower the edge contrast between the crack and the background. They can also create phantom features with contrast high enough to be wrongly recognized as cracks when analyzed manually or using automatic means.
3.2.3. Effect of Saturation

A CCD pixel can contain only a finite number of electrons, about 3 x 10^5 (Morrison 2004). The capacity of each pixel is referred to as the full-well capacity. If a pixel is illuminated by a bright object and/or if the exposure time is long enough, the well will start to fill, the photometric response of the pixel departs from linearity, and the random noise starts to be clipped at the top end. Once this capacity is reached, saturation occurs. The 12-bit ADC of the CCD camera saturates at about 4.1 x 10^5 electrons, and any charge above this level is lost. If the saturation level exceeds the full-well capacity, the charge begins to spill into adjoining pixels. Because the barriers defining the pixel are lower in the parallel direction, the spill occurs along the parallel register of the CCD, as shown in Figure 3.17(a). Thus the vertical blooming spikes called parallel saturation (Figure 3.18(a)) occur. In most CCD sensors, the capacity of the serial register pixels is designed to be twice the capacity of the parallel register pixels. In some CCD sensors, it is possible to exceed the charge capacity of the serial register, and then serial saturation occurs (Figure 3.18(b)). It was recognized that the forward-view and side-view imaging systems of the FDOT highway evaluation vehicle can suffer from the saturation problem if strong light sources, such as ones bounced from moving vehicles in front of the highway evaluation vehicle, enter the lens, as shown in Figure 3.17(b).
Figure 3.17 (a) Saturation of the Parallel Registers Highlighted in White; (b) Saturation in Forward-View Image of FDOT Highway Evaluation Vehicle

Figure 3.18 (a) Saturation of the Image from Point Light Sources; (b) Serial Saturation of the Image Where Low Intensity Areas are Intact While High Intensity Areas are Saturated

If the saturation occurs during exposure, it is also possible to exceed the capacity of the serial register pixels. When this occurs, charge begins to spill along the serial register, usually in the horizontal direction. The saturation of the CCD sensor should occur near the saturation level of the ADC, which for a 12-bit camera must be close to an intensity value of 4095.
If not, the camera would have an adjustment problem, which has to be corrected by the camera manufacturer. To minimize this saturation effect, manufacturers implement a special drain to draw off the excess electrons before they can spill into adjacent pixels. The price for implementing the anti-blooming drains is that they take up room in the CCD structure and consequently lead to a lower full well capacity.

3.2.4. Effect of Flare

When light from a strong light source is incident on the optics of the camera, flare may occur on the image. Flare can occur even if the strong light source is not included in the image. The phenomenon of flare occurs due to light bouncing off the glass surfaces of the lens, as shown in Figure 3.19, i.e., internally reflecting rather than transmitting through. Flare presents itself in two different forms: (1) contrast deterioration (flare) and (2) ghosting. Flare, shown in Figure 3.20(b), is seen as a washed-out area near the bright spot on the image, while ghosting (Figure 3.20(c)) appears like a string of dots, in a color image usually of green, purple, or violet color, that has the shape of the aperture of the lens and is not a part of the actual scene. Sometimes, flare can present itself as an evenly fogged image, shown in Figure 3.20(c), with veiling glare causing a lowering of the overall image quality. Most wide angle lenses suffer from this problem. Because of this internal reflection, image contrast and tonality are degraded.
Figure 3.19 Diffusely Scattered Light in the Camera Resulting in Flare

To decrease the problem due to direct sunlight, the glass surface of good quality lenses is multicoated with special anti-reflection chemicals to prevent flare and ghosting. However, even with a multicoated lens, flare cannot be eliminated completely. To overcome flare due to parasitic light sources outside the image, the field of view can be blocked using a lens hood.

Figure 3.20 (a) Flare in the Pavement Image of the Standard Grayscale Target; (b) Example of Flare that Produces Low Readability of the Text; (c) Example of Heavy Flare Problem Resulting in Ghosting (on the left) and Veiling Glare
On forward-view images of the FDOT highway evaluation vehicle, the effects of flare and ghosting were encountered especially when direct sunlight entered the lens of the forward-view and side-view cameras. This problem was significantly reduced by designing a lens hood for these cameras (Mraz et al. 2004). Also, the pavement imaging system of the FDOT highway evaluation vehicle would suffer from flare if a strong light source directly enters the lens while capturing the standard grayscale target, as shown in Figure 3.20(a).

3.3 SNR (Signal-to-Noise Ratio)

A good estimation of the SNR can be achieved by using the CCD equation for SNR (Roper Scientific 2003):

SNR = Q_e ρ T / sqrt(Q_e ρ T + Q_e R_bg T + D T + N_r²)    (3.12)

where Q_e ρ T represents the signal from the object of interest, i.e., the crack, and sqrt(Q_e ρ T) represents the photon noise due to the uncertainty of the incoming light from the object. The signal of the object of interest can be written using the average photon flux ρ incident on one pixel of the CCD per given exposure time interval T. The photons entering the CCD must be multiplied by the quantum efficiency (Q_e) of the CCD camera to determine the number of electrons detected, as described in (Deiries 2004). R_bg represents the number of photons per pixel per second from the background, which can arise from many sources and is usually scattered light that is not of interest to the observer. The number of these photons also has to be multiplied by Q_e, and it exhibits a Poisson distribution having a square root relationship between the signal and the noise.
D is the number of electrons per pixel per second due to dark current, and N_r is the readout noise in units of root mean square electrons (electrons RMS) per pixel. The larger the SNR (the stronger the signal and/or the lower the noise), the more desirable the image quality is. Under low-lighting conditions, read noise exceeds photon noise and the image is said to be read-noise limited. If the read noise is the dominant noise for short exposure times T, Eqn. (3.12) can be simplified to the form:

SNR = Q_e ρ T / N_r    (3.13)

Another question that can arise is how the light reflected from the pavement surface affects the quality of the image. Figure 3.21 shows three cases with two sources of light: (1) sunlight, and (2) light from the pavement lighting system, which is preset to illuminate the pavement area captured by the line-scan camera at a fixed orientation. The only variable here is the position of the sun during the survey. Figure 3.21 also shows how a combination of both sources of light will affect the brightness of the crack on the image. In all of these cases, the actual crack width is not clearly seen in the image. This is worse when the sunlight is not directly above. By using the improved filtering algorithm described in Sections 3.5 and 3.6, the border between the dark (internal part of the crack) and the bright (background pavement surface) areas can be enhanced.
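A sketch of Eqns. (3.12) and (3.13) in Python; the symbols follow the text, while the numeric values in the usage example are purely illustrative, not measurements from the FDOT vehicle:

    import math

    def ccd_snr(qe, rho, t, r_bg, dark, read_noise):
        """Full CCD SNR equation (Eqn. 3.12)."""
        signal = qe * rho * t
        noise = math.sqrt(signal + qe * r_bg * t + dark * t + read_noise**2)
        return signal / noise

    def read_noise_limited_snr(qe, rho, t, read_noise):
        """Short-exposure simplification (Eqn. 3.13)."""
        return qe * rho * t / read_noise

    # The two estimates agree when the exposure is short enough for the
    # read noise to dominate (illustrative values):
    print(ccd_snr(0.4, 5000, 0.001, 100, 1.0, 15))
    print(read_noise_limited_snr(0.4, 5000, 0.001, 15))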
Figure 3.21 Effects of the Sunlight and Pavement Lighting System on Visibility of the Crack

If the integration time is prolonged, photon noise exceeds both read noise and dark noise, and at this point the image is said to be photon-noise limited. Then, Equation (3.12) can be written as:

SNR = Q_e ρ sqrt(T) / sqrt(Q_e ρ + Q_e R_bg + D)    (3.14)

An example of the relation between the SNR and the exposure time for a digital imaging system is shown in Figure 3.22, where the points computed using Eqns. (3.12), (3.13), and (3.14) are respectively represented by boxes, a solid line fitting the computed points for shorter exposure times, and a dashed line fitting the computed points for longer exposure times. The intersection of Eqns. (3.13) and (3.14) divides the graph into two sections, the read-noise-limited and photon-noise-limited regions.
Figure 3.22 SNR vs. Exposure Time for Digital Cameras (Courtesy of Roper Scientific)

The SNR increases linearly with the exposure time T if the mean pixel value is within the read-noise-limited region (Roper Scientific 2003). A single 1.0-second exposure has about ten times the SNR of a single 0.1-second exposure in this region. However, adding together multiple exposures increases the SNR only by the square root of the number of exposures. Therefore, if ten 0.1-second exposures are taken, the SNR will increase by a factor of about three (sqrt(10) ≈ 3.2). On the other hand, in the photon-noise-limited region, the SNR increases only as the square root of the exposure time T, and a 100-second exposure has essentially the same SNR as ten exposures of 10-second duration. For the cameras used in the FDOT highway evaluation vehicle, the SNR could fall in either of the two regions depending on the lighting conditions and the camera settings.
3.4 Mathematical Representation of the Noise in the Signal

Ideally, each pixel in the CCD sensor would have exactly the same response, in the form of the intensity value, for uniform light striking the sensor. In reality, the intensity of each pixel is a little above or below this ideal value by some random factor, or noise, that is added to each pixel in the image.

The level of noise present in an image can be reduced conveniently if the scene is static. However, dynamic modes can be considered as quasi-static if the movement of the vehicle is negligible in comparison to the short exposure time during which the image is captured. This is because the noise distribution in a static image can be regarded as approximately symmetrical with a mean of zero (Efford 2000). As a result, positive perturbations of the intensity value of a pixel by a given amount are just as likely as negative perturbations by the same amount, and hence there will be a tendency for these perturbations to cancel out when several noise values are added.

Photon noise can be modeled with multiplicative or non-linear models. The Poisson distribution modeling photon noise can be expressed as:

P(p | λ, T) = e^(-λT) (λT)^p / p!    (3.15)

where P is the probability of having p photons strike the sensor during a time interval T, and λ is the photon flux, an intensity rate parameter measured in photons per second. A Poisson distribution is similar to a Gaussian distribution except that it is used for discrete random variables, applies only to non-negative quantities, and has the property that its variance is equal to its mean, i.e., σ² = µ.
Moreover, the Poisson distribution is generally non-symmetric, and therefore its maximum, or peak, does not correspond to the mean value. However, as µ becomes large (Figure 3.23), the distribution becomes more and more symmetric and approaches a Gaussian form. The probability density function of Gaussian noise can be expressed by:

p(x) = (1 / (σ sqrt(2π))) e^(-(x - µ)² / (2σ²))    (3.16)

where x is the intensity of the sampled signal, µ is its mean, and σ is its standard deviation (noise). For µ ≥ 20, a Gaussian becomes a relatively valid approximation and can be used in place of a Poisson distribution in numerical modeling of noise. When µ < 20, modeling of photon and thermal noises becomes mathematically complicated, and therefore the noise in the imaging system is usually assumed to be signal-independent only (Fisher et al. 2000). Another main source of noise, read noise, is distributed about the ideal value following the Gaussian distribution.
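The quality of the Gaussian approximation at µ = 20 can be checked directly from Eqns. (3.15) and (3.16); the probabilities agree closely near the mean, which is the regime that matters for the noise model:

    import math

    def poisson_pmf(p, mu):
        return math.exp(-mu) * mu**p / math.factorial(p)

    def gaussian_pdf(x, mu):
        sigma = math.sqrt(mu)                  # Poisson property: var = mean
        return math.exp(-(x - mu)**2 / (2 * mu)) / (sigma * math.sqrt(2 * math.pi))

    mu = 20
    for p in (15, 20, 25):
        print(p, poisson_pmf(p, mu), gaussian_pdf(p, mu))  # close near the mean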
Figure 3.23 (a) Example of Poisson Distribution for Different Mean Values; (b) Example of Gaussian and Poisson Distributions Plotted for µ = 10

Therefore, based on Eqn. (3.7), the intensity of a digital image captured by an imaging system containing noise can be modeled as:

z(m,n) = x(m,n) + η(m,n)    (3.17)

where z(m,n) represents the intensity of the image, x(m,n) the intensity of a noiseless image, and η(m,n) the resulting Gaussian noise on a pixel defined by (m,n). Noise is often described by the variance of pixel intensities in an area of an image with a more or less uniform brightness. The variance of pixel intensities, which is a direct measure of the noise shown in Figure 3.2, can be defined as:

σ² = (1/N) Σ (z - µ)²    (3.18)

where z represents the intensity values in the image, µ represents the mean intensity value of the image, and N represents the number of pixels in the image. By defining the noise in a pavement image based on Eqn. (3.17), the noise removal technique based on local statistics filtering can be used, as described in the following sections.

3.5 Noise Filtration

Some of the simplest image enhancing operations, also called linear mapping, involve the adjustment of brightness, contrast, or color in an image. A common reason for linear mapping manipulations, which modify a pixel's intensity value independently of all other pixels, is the need to compensate for difficulties in image acquisition, such as under-exposure. The simplest operation on single pixels is linear mapping, where the overall adjustment of brightness (b) and contrast (c) is made. Brightness is defined as a relative expression of the intensity of the energy output of a visible light source, while contrast is defined as a variation in intensity of an image formed by an optical system. The general expression for brightness and contrast modification is:

g(m,n) = c z(m,n) + b    (3.19)

where g(m,n) represents the post-processed intensity value for a given pixel and z(m,n) is the original intensity value of the pixel under consideration. If the brightness b > 0, the overall brightness is increased, and if b < 0, it is decreased. Similarly, if c > 1, the contrast is enhanced, whereas if c < 1, it is reduced. Brightness and contrast modification are the simplest image processing operations that can be applied to a pavement distress image to improve its overall quality. This will magnify the tiny residual variations in contrast to reveal enough detail to allow proper interpretation of the features, such as cracks.
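A sketch of Eqn. (3.19) with the clipping needed for 8-bit images (the clipping step is implied by the 0-255 intensity range rather than stated explicitly):

    import numpy as np

    def linear_map(z, c=1.0, b=0.0):
        """Brightness/contrast adjustment: c > 1 raises contrast, b > 0 brightness."""
        g = c * z.astype(np.float64) + b
        return np.clip(g, 0, 255).astype(np.uint8)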
An isolated pixel carries information on the intensity and color, but it cannot express any information about the way in which these properties vary spatially. Therefore, processes such as linear mapping cannot be used to investigate or control spatial variations in image intensity or color noise. To be able to minimize these variations, the variation of intensity over a designated area of the image has to be evaluated. Therefore, the upgraded intensity value of a pixel has to be computed from its original value and the intensity values of pixels in its vicinity. These neighborhood operations are more costly in terms of computing time than single-point processes, but they facilitate the achievement of a range of useful effects, including noise filtering.

One of the fundamental neighborhood operations of image processing is convolution. This operation can be used to filter images and suppress noise. In convolution, the weighted sum of intensity values, or grey levels, from the neighborhood surrounding a pixel is evaluated. The neighborhood includes the pixel under consideration, and it is customary for it to be disposed symmetrically about the considered pixel (Efford 2000). Then, obviously, the neighborhood has to have odd dimensions, e.g., 3 x 3, 5 x 5, 7 x 7, etc. Although the neighborhood does not need to be a square, a square neighborhood is usually selected since there is rarely any reason to bias the calculations in the x or y directions.
In this approach, the intensity values of the neighborhood of the considered pixel are weighted by coefficients that are elements of a matrix called the convolution kernel. Thus, the kernel's size defines the size of the neighborhood where the evaluation takes place, and it is centered on the pixel of interest. The size of the kernel has to be small relative to the image size. During convolution, each kernel coefficient is multiplied by an intensity value from the neighborhood of the image lying under the kernel. The kernel is applied in such a way that the value at the top-left corner of the kernel is multiplied by the value at the bottom-right corner of the neighborhood. This summation can be generally expressed for a p x q kernel as:

g(m,n) = Σ_{k=-q..q} Σ_{j=-p..p} h(j,k) z(m-j, n-k)    (3.20)

where p and q are odd numbers that represent the width and the height of the kernel respectively, h(j,k) defines the kernel, and g(m,n) and z(m,n) represent the updated and the original intensity values, respectively, of the pixel under consideration. This kernel is then successively moved across the image until every pixel has been operated on. For the example of the kernel and the neighborhood illustrated in Figure 3.24, the operation in Eqn. (3.20) can be expressed as:

g(0,0) = Σ_{k=-1..1} Σ_{j=-1..1} h(j,k) z(-j,-k)    (3.21)

and the result of the convolution operation can be expressed by the modified intensity of the pixel as:

g(0,0) = (-1 x 82) + (1 x 88) + (-2 x 65) + (2 x 76) + (-1 x 60) + (1 x 72) = 40

If the kernel h(j,k) is defined appropriately, g(0,0) represents the intensity of the pixel (0,0) with reduced noise.
Figure 3.24 Example of 3 x 3 Convolution Kernel and the Corresponding Image Neighborhood

    h(j,k):  -1  0  1        z(m,n):  72  53  60
             -2  0  2                 76  56  65
             -1  0  1                 88  78  82

After the convolution operation, a new image g has to be created to store the results of the convolution, because applying convolution in place to any pixel except the first would make use of intensity values already altered by a prior convolution operation. Any convolution kernel where all the coefficients are positive will act as a low pass filter. In the simplest case, all coefficients in the kernel are equal and their sum is equal to 1, so that the kernel is normalized (Eqn. (3.22)). Thus the convolution operation will ensure that the upgraded intensities of all pixels lie in the range of (0 - 255).

    0.111  0.111  0.111
    0.111  0.111  0.111    (3.22)
    0.111  0.111  0.111
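The convolution of Eqn. (3.20) can be checked against the worked example above; np.rot90(h, 2) performs the 180-degree kernel rotation that distinguishes convolution from correlation:

    import numpy as np

    def convolve_pixel(z, h):
        """Convolution at the window center (Eqn. 3.20), kernel rotated 180 deg."""
        return float(np.sum(np.rot90(h, 2) * z))

    h = np.array([[-1, 0, 1],
                  [-2, 0, 2],
                  [-1, 0, 1]])
    z = np.array([[72, 53, 60],
                  [76, 56, 65],
                  [88, 78, 82]])
    print(convolve_pixel(z, h))   # 40.0, matching the worked example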
Convolution with normalized kernels, as in Eqn. (3.22), is equivalent to computing the mean gray level value (intensity) over the neighborhood defined by the kernel. For this reason, these kernels are also called mean filters. Although mean filters suppress noise in an image, they do not eliminate it. Moreover, they can also blur the objects of interest, such as cracks in a pavement, making their edges less well defined, because this technique is based on the assumption that most points in an image are spatially coherent with their neighbors, a hypothesis that is not valid at edge or feature points such as cracks (Owens 1997).

Therefore, the use of an alternative noise filtering algorithm based on local statistics, developed by Lee (Lee 1981), is more applicable in enhancing pavement distress or crack images. This technique is based on the use of the local mean and variance, without the need for the modeling of the original image required by other methods such as Kalman or Wiener filtering techniques (Lee 1981). The only assumption is that the sample mean and variance of the intensity of a given pixel are equal to the local mean and variance of the pixels within a fixed neighborhood surrounding it. This assumption is generally valid in low-contrast areas, where the estimated pixel intensity value approaches the local mean intensity value. To ensure that this algorithm works in high-contrast areas, where cracks occur, a redefinition of the neighborhood is implemented, in which the variation of the local means and variances is tracked. This technique is based on accounting for the orientation of the edge, or incorporation of the local gradient into the local-mean and local-variance filtering algorithm. For each pixel with a high local variance, or with contrast over a preset threshold, a gradient is computed in the local area in different orientations to obtain the orientation of the crack edge.
Then two subsets of pixels in the local areas on each side of the crack edge are defined. It has to be first determined to which subset the pixel under consideration belongs. Since the required subset contains pixels only on one side of the edge, the local mean and variance computed in this subset more precisely represent the mean and variance of the pixel under consideration.

3.6 Filtering Method Based on Local Statistics

As shown in Eqn. (3.17), the noise in a two-dimensional N x N image can be modeled as z(m,n) = x(m,n) + η(m,n). In most filtering algorithms, the apriori mean and variance of x(m,n) are obtained from an assumed noise model, while in the local-statistics method the apriori mean x̄(m,n) and variance Q(m,n) are approximated by the local mean z̄(m,n) and variance of all pixels in the neighborhood surrounding z(m,n) as:

x̄(m,n) = z̄(m,n)    (3.23)

and:

Q(m,n) = E[(z(m,n) - z̄(m,n))²] - σ²(m,n)    (3.24)

where σ²(m,n) represents the noise variance, or σ² in its general form. Under this assumption, the minimum mean-square filter (Lee 1981) giving the estimated pixel intensity value before degradation, x̂(m,n), can be obtained as:

x̂(m,n) = x̄(m,n) + k(m,n) (z(m,n) - z̄(m,n))    (3.25)

where:
k(m,n) = Q(m,n) / (Q(m,n) + σ²)    (3.26)

Since Q(m,n) and σ² are both positive, k(m,n) will lie between 0 and 1. For a low-contrast area, Q(m,n) would be relatively small, and therefore x̂(m,n) ≈ x̄(m,n). On the contrary, in high-contrast areas, Q(m,n) is much larger than σ², and x̂(m,n) ≈ z(m,n). For most noisy images this algorithm produces quite satisfactory results, since human vision is more sensitive to noise in a flat area than in an edge region (Lee 1981). However, in the case of pavement crack images, it is desirable to smoothen out the noise around the edge areas as well. Thus, an improved filtering algorithm known as the gradient mask method provides a more feasible solution to the computation of the local mean and variance. In this method, a subset of the neighborhood where the pixel under consideration is located is used for the computation of the local statistics. As can be seen in Figure 3.25 (a), the point x(m,n) is more likely to be a member of the subset of pixels in the unshaded area rather than a member of the entire neighborhood. If the local mean and variance are computed based on the pixels in the unshaded subset, the new Q(m,n) obtained is considerably smaller than the Q(m,n) obtained from the entire set. As a result, the noise will be smoothened at the edge. Then, based on Eqns. (3.25) and (3.26):

x̂(m,n) ≈ x̄(m,n)    (3.27)

where x̄(m,n) is the local mean of the subset. To determine the subset to which the pixel under consideration (m,n) belongs, one has to know the orientation of the crack edge and on which side of that edge the pixel lies.
lies. To determine the ra mask is used. To minimize the noise efftaiodivided into nine, 3 x 3 sub-areas as shown i deans of each sub-area is computed. Then, the 3mecal means of these ub-areas (Figure 3.25 (b)). To determine the direction of the gradient, Sobel kernels expressed in Eqn. (3.30) are applied to the means of the sub-area as expressed by the following pair of convolution operations (Sonka et al. 1999): mnxxmgsz (3.28) n (3.29) edge o ientati on, a 3 x 3 p ixel lo cal-gr dient ect on he loc l grad ent, th e 9 x 9 wind w is n Figur e 3.25 (b) an the lo cal m x 3 gra dient ask is appli d to e stimate the lo s ,n ,,mnyymgsz 101121 1120200044xyss and 101121 (3.30) where represents orBecause Sobel contain both positive and negatfficients, the output can be negative or positive. The kernel is sensitive to intensity changes in the x direction, or to edges is sensitive to intensity changes in the y directiothe kernels and are normalized by a factor of iginal intensity values. ,mnz kernels ive coe xsrunning vertically while ys n, or edges that run horizontally. To keep g and g values between 0 and 255, xy x y s s 1 4,mnx,mnycomponents of a gradient vector gexpressed by: (3.31) Gradients and are g g gxygg 117


This vector is oriented along the direction of change and normal to the direction in which the edge runs. The magnitude and direction of the gradient can be expressed as:

|g| = \sqrt{g_x^2 + g_y^2}    (3.32)

\theta = \tan^{-1}(g_y / g_x)    (3.33)

where \theta is measured relative to the x axis (Figure 3.26). The direction of the gradient mask with the maximum absolute value of the gradient is used as the direction of the edge. The directional indices for the gradient mask proposed by Lee (Lee 1981) were refined to evaluate the edge orientation for every 15 degrees in comparison to 45 degrees, and they are also shown in Figure 3.26.

Figure 3.25 (a) High Contrast Neighborhood of a Pixel; (b) Demarcation of 3 x 3 Sub-areas in the Neighborhood of the Pixel in Figure 3.25 (a) (9 x 9)


Once the edge orientation is identified, the mean pixel intensities of the sub-areas (Figure 3.27) orthogonal to the edge are compared to determine on which side of the edge the considered pixel falls. Subsets corresponding to all of the directions in Figure 3.26 are shown in Figure 3.28. For the example in Figure 3.25 (b), a comparison of |a_{31} - a_{22}| and |a_{13} - a_{22}| determines whether the relevant subset is in direction 3 or 15 (Figure 3.26). If |a_{31} - a_{22}| > |a_{13} - a_{22}|, then subset 15 (Figure 3.28) will be chosen as the one which contains the pixel (m,n). Accordingly, all the pixels in the unshaded area will be used in the computation of the local mean and variance. This will ensure that the intensity variation among pixels of similar brightness is less marked. Moreover, areas of inherently different brightness will acquire a much higher contrast, perhaps leading to much easier identification of crack boundaries.

Figure 3.26 Directional Indices
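As a concrete sketch of Eqns. (3.28)-(3.33) and the side test above, the following fragment computes the gradient magnitude and directional index from the nine sub-area means; it uses the data of the worked example in Section 3.6.2. NumPy and the function name are illustrative assumptions and do not reproduce the original ImageJ plugin code.

```python
import numpy as np

# Normalized Sobel kernels of Eqn. (3.30); the 1/4 factor keeps the
# gradient components within the 0-255 intensity range.
SX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]) / 4.0
SY = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]]) / 4.0

def edge_orientation(sub_means):
    """Gradient magnitude (Eqn. 3.32) and 15-degree directional index
    (Fig. 3.26) from the 3 x 3 matrix of sub-area mean intensities."""
    gx = float((SX * sub_means).sum())            # Eqn. (3.28)
    gy = float((SY * sub_means).sum())            # Eqn. (3.29)
    magnitude = float(np.hypot(gx, gy))           # Eqn. (3.32)
    theta = np.degrees(np.arctan(gy / gx)) if gx else 90.0  # Eqn. (3.33)
    return magnitude, int(round(theta / 15.0)) % 24

# Sub-area means of Table 3.4 (a vertical edge crosses the window)
means = np.array([[156.9,   148.422, 120.94],
                  [160.633, 158.396, 127.942],
                  [158.354, 152.699, 120.221]])
print(edge_orientation(means))   # magnitude of about 34.9, index 0
```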


Figure 3.27 Mean Intensity Values of the Sub-areas

For easier implementation of edge orientation identification, a 7 x 7 window is used, with each sub-area containing a 3 x 3 pixel matrix overlapping with its neighbors, as shown in Figure 3.29.


Figure 3.28 Definition of Directional Subsets (unshaded areas) on One Side of the Edges


Figure 3.29 Overlapping of the 3 x 3 Sub-areas in the 7 x 7 Neighborhood

3.6.1. Determination of Noise Variance

In most practical applications, the noise variance \sigma^2 (Eqn. (3.24)) is unknown and spatially variant. The noise variance of a local area can be estimated reasonably well by the local variance of a more or less equally intense (flat) area. This idea is implemented in the adaptive filtering technique discussed by Lee (Lee 1981). Another alternative for obtaining the noise variance is based on the use of a grayscale target (Figure 3.30). The implementation of this technique should result in a practical noise filtering algorithm that precludes the need both for a priori image modeling, as in the case of Kalman or Wiener filtering techniques, and for the adaptive filtering technique. The latter technique evaluates each pixel in a 7 x 7 window for the local mean and variance, and the average of the five smallest variances then determines the noise variance (Lee 1981).


Each patch of the grayscale target represents a flat area (wedge) with a fairly uniform optical density. Because imaging systems used for practical purposes are not ideal systems, noise composed of undesirable signal components arising from the various sources described in Section 3.2 is introduced into the signal. As a result, slight variations in intensity value can occur even within a wedge or patch. Therefore, any variation from the mean value for a given wedge represents the \sigma^2 of the noise described in Section 3.6.

Figure 3.30 Fifteen-wedge Grayscale Target

For each wedge, the mean and variance of intensity can be computed using image editing software such as ImageJ, as shown in Figures 3.31 (a) and 3.31 (b).


Figure 3.31 (a) Evaluating the Mean and Standard Deviation of the Intensity of One Patch Using ImageJ Software; (b) Plot of Variances of Each Patch of the Grayscale Target

After the subset mean is computed for a pixel of an image under consideration using the procedure described in Section 3.6, the noise variance for any pixel of that image is determined through linear interpolation of the noise variance values corresponding to the patches with the closest mean intensity. This method should give a more precise determination of the true level of noise for a given pixel than the adaptive filtering method, which assumes the variance to be the average of the five smallest variances computed from the 7 x 7 windows in the neighborhood. The latter would be true only if all the pixels used for the computation of the mean value and variance originally represented a flat area and any change in intensity values was due to noise. On the other hand, each wedge of the grayscale target is uniform, and any deviation of the signal can be assumed to be caused by the noise. The assumption in this case is that the pixel under consideration with a given intensity value has the same \sigma^2 of the noise as the grayscale wedge with a mean intensity value equal to that of the pixel. This method is also more efficient than adaptive filtering, as there is no need for the extra computation of means and variances for each pixel inside a 7 x 7 window.

3.6.2. Illustration of the Filtering Technique

An example of a vertical noisy edge in a 7 x 7 window is provided to illustrate the application of the concepts discussed in the previous sections. The image is of an Edmund Optics Optical Density Target containing 15 uniform-density grayscale steps separated by vertical edges. A Minolta DiMage5 camera was used for capturing this image in non-compressed TIFF format. The sensitivity of the camera was set to an equivalent of ISO 800, at which the noise is most visible. Then, ImageJ's plugin PhotoES_AM was used for evaluating the mean and variance values for each patch and also for the computation of the updated intensities of the pixels considered, with respect to the previously described methods. Figure 3.32 (a) shows a part of the image containing a vertical edge, while Figure 3.32 (b) shows a zoomed-in 7 x 7 window of this image, with the intensity values for the pixels inside the window shown in Table 3.3. The pixel (4,4) highlighted in Figure 3.32 (b) was chosen for the application of the noise reduction technique discussed above.


Figure 3.32 (a) Evaluated Image; (b) Zoomed-in 7 x 7 Window with the Pixel Under Consideration at its Center Highlighted

Table 3.3 Intensity Values for the 7 x 7 Window, with the Value for the Central Pixel Highlighted

158.72  155.03  140.17  135.18  137.40  115.24   80.08
160.24  160.07  153.18  145.01  153.54  119.85   94.49
166.27  162.37  156.05  149.29  166.16  123.82   97.88
158.70  163.23  157.82  155.06  158.51  126.33  101.06
157.39  162.15  161.72  167.25  153.70  125.31   98.71
157.01  160.04  158.80  153.74  145.85  123.44   85.74
158.23  158.00  151.85  144.13  137.25  121.57   90.42

The intensity value of the central pixel under consideration, which has to be filtered, is 155.06. If local statistics over the full window are used in filtering, the mean of the 7 x 7 area is \bar{x}_{4,4} = 141.29 and its variance is \sigma^2_{orig} = 590.74.


Next, the 3 x 3 mask described in Section 3.6 is applied to obtain the mean values of the sub-areas a_{11}, ..., a_{33}, as shown in Table 3.4.

Table 3.4 Mean Values of the Sub-areas

156.9    148.422  120.94
160.633  158.396  127.942
158.354  152.699  120.221

By applying Eqns. (3.28)-(3.33), the magnitude and direction of the gradient are evaluated as |g| = 34.95 and \theta \approx 4 degrees, respectively. The gradient direction of approximately 4 degrees corresponds to the directional index of 0 (zero), as shown in Figure 3.26. To determine on which side of the edge the pixel under consideration is located, the value |a_{21} - a_{22}| = 2.24 is compared to the value |a_{23} - a_{22}| = 30.45. Based on the criterion defined in Section 3.6, the pixel under consideration lies on the left of the edge. Hence the directional Subset 12 (Figure 3.28) of the 7 x 7 window in Table 3.3 is used for the estimation of the mean and variance of the pixel (4,4) highlighted in Figure 3.32 (b). This subset is separately shown in Table 3.5.

Table 3.5 Intensity Values for the Directional Subset

158.72  155.03  140.17  135.18
160.24  160.07  153.18  145.01
166.27  162.37  156.05  149.29
158.70  163.23  157.82  155.06
157.39  162.15  161.72  167.25
157.01  160.04  158.80  153.74
158.23  158.00  151.85  144.13


The mean intensity \bar{x}_{sub} and the variance \sigma^2_{sub} of this subset are 155.95 and 55.81, respectively, which represents a reduction of the variance by a factor of about 10 when compared to the 7 x 7 variance \sigma^2_{orig} of 590.74. In the next step, the noise variance \sigma^2 (Eqn. (3.24)) is determined using the method described in Section 3.6.3.

3.6.3. Use of the Grayscale Target Procedure

First, the image of the Edmund Optics Density Target is captured by the digital imaging system and then loaded into the ImageJ imaging software, as shown in Figure 3.33.

Figure 3.33 ImageJ Loaded with an Image of the 15-wedge Grayscale Target

In the next step, the mean and variance values are computed for each wedge; the results are shown in Table 3.6 and Figure 3.31 (b). If a color image is used, the luminance value L of any pixel is computed by using Eqn. (2.1). On the other hand, for black and white digital images, such as those produced by the pavement imaging system of the FDOT highway evaluation vehicle, the intensity values are used directly.

Table 3.6 Mean and Variance of the Luminance Values of Each Wedge of the Grayscale Target (from Fig. 3.31 (b))

PATCH      AVE_SCALE_MEAN  AVE_SCALE_VAR
1_white    221.67           3.80
2          206.47           7.55
3          193.44          12.08
4          170.08          21.41
5          144.43          35.61
6          121.74          49.21
7          104.50          60.31
8           94.96          64.60
9           81.76          70.87
10          74.44          74.34
11          65.69          80.27
12          60.22          83.84
13          53.37          88.36
14          50.38          85.63
15_black    48.39          75.43

With respect to the illustration in Fig. 3.31 (b), since the mean intensity \bar{x}_{sub} of the subset to which the pixel under consideration belongs is 156, the corresponding noise variance can be interpolated from Table 3.6 as \sigma^2 = 29.21. Then, from Eqn. (3.24), the variance of the directional subset, Q_{4,4}, would be 26.6. Finally, by using Eqns. (3.25) and (3.26), the estimated intensity value \hat{x}_{4,4} for the pixel under consideration (4,4) is 155.5.
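The computation above can be condensed into a short routine. The following is a minimal sketch of Eqns. (3.24)-(3.26) combined with the grayscale-target interpolation, written in Python with NumPy; the function name and data layout are illustrative assumptions and do not reproduce the actual PhotoES_AM source.

```python
import numpy as np

# Mean intensity and noise variance of each wedge (Table 3.6),
# sorted by ascending mean so that np.interp can be used directly.
wedge_mean = np.array([48.39, 50.38, 53.37, 60.22, 65.69, 74.44, 81.76,
                       94.96, 104.50, 121.74, 144.43, 170.08, 193.44,
                       206.47, 221.67])
wedge_var = np.array([75.43, 85.63, 88.36, 83.84, 80.27, 74.34, 70.87,
                      64.60, 60.31, 49.21, 35.61, 21.41, 12.08, 7.55, 3.80])

def filter_pixel(z, subset):
    """Estimate the noise-free intensity of one pixel from its
    directional subset, following Eqns. (3.24)-(3.26)."""
    z_bar = subset.mean()                             # local mean of the subset
    sigma2 = np.interp(z_bar, wedge_mean, wedge_var)  # noise variance from target
    q = max(subset.var() - sigma2, 0.0)               # Eqn. (3.24), clipped at zero
    k = q / (q + sigma2)                              # Eqn. (3.26)
    return z_bar + k * (z - z_bar)                    # Eqn. (3.25)

# Directional subset of Table 3.5; the central pixel value is 155.06
subset = np.array([158.72, 155.03, 140.17, 135.18,
                   160.24, 160.07, 153.18, 145.01,
                   166.27, 162.37, 156.05, 149.29,
                   158.70, 163.23, 157.82, 155.06,
                   157.39, 162.15, 161.72, 167.25,
                   157.01, 160.04, 158.80, 153.74,
                   158.23, 158.00, 151.85, 144.13])
print(round(filter_pixel(155.06, subset), 1))  # approximately 155.5
```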


3.6.4. Verification of the Filtering Technique

The filtering technique discussed in Section 3.6.3 was coded in the PhotoES_AM plugin of the ImageJ software. To verify the applicability of the proposed filtering technique, two sets of tests were conducted. First, a synthetic image (Figure 3.34) was created in the Microsoft Paint program so that the background and feature intensity values were equal to 130 and 42, respectively. Then, by using the Add Noise built-in function of the Jasc Paint Shop Pro program (Jasc Software 2004), three different amounts of Gaussian noise (5%, 10%, and 15% coefficient of variation) were introduced into the original image. Finally, these noisy images were filtered by the PhotoES_AM plugin under the ImageJ image editing software, and the SNR values for both the noisy and the filtered images were computed. The results of this test are presented in Table 3.7, and the effectiveness of the noise removal technique is illustrated in Figure 3.35. The second evaluation was based on the removal of noise from actual pavement images taken by the FDOT highway evaluation vehicle. Figures 3.36 and 3.37 show the results of noise filtration for concrete and asphalt pavements.


Figure 3.34 Synthetic Image Created for Verification of the Filtering Technique

Table 3.7 Results of the Noise Filtration Test on the Synthetic Image

Area    Coefficient of Variation  Type    SNR Background [dB]  SNR Object [dB]  Improvement Background [dB]  Improvement Object [dB]
Area 1  5%                        Before  29.1                 29.0
                                  After   42.1                 40.9             13.0                         11.9
        10%                       Before  21.6                 21.5
                                  After   35.2                 29.9             13.6                          8.4
        15%                       Before  17.9                 18.0
                                  After   32.3                 29.7             14.4                         11.7
Area 2  5%                        Before  29.1                 29.0
                                  After   42.8                 38.4             13.7                          9.4
        10%                       Before  21.9                 21.7
                                  After   36.3                 34.4             14.4                         12.7
        15%                       Before  17.9                 17.7
                                  After   32.2                 27.0             14.3                          9.3

From the results given in Table 3.7 it can be seen that the improved filtering technique, based on local statistics and the use of a grayscale target, improves the image quality of the object representing the crack and of the background by more than 9 dB and 13 dB, respectively.


Figure 3.35 Example of Noise Removal from the Synthetic Image: (a) noisy image (5%); (b) filtered image

Figure 3.36 Example of Noise Removal from a Pavement Image (concrete pavement): (a) original image; (b) filtered image


Figure 3.37 Example of Noise Removal from a Pavement Image (asphalt pavement): (a) original image; (b) filtered image


CHAPTER 4
EXPERIMENTAL EVALUATION OF THE PAVEMENT IMAGING SUBSYSTEM OF THE FDOT PAVEMENT EVALUATION VEHICLE

4.1 General Description

One important objective of this dissertation research was to determine the accuracy and the reliability of the pavement imaging subsystem with respect to evaluating pavement cracks. In keeping with this objective, an experiment was conducted in Gainesville, Florida on May 18 and 19, 2004 to evaluate the accuracy of pavement images captured by the imaging subsystem of the FDOT highway evaluation vehicle and to investigate the effect of noise on the images. The repeatability and accuracy of the downward camera in imaging cracks were tested under the following conditions: (1) different lighting conditions (sunny, cloudy, and overcast), (2) with and without the pavement lighting system, (3) different vehicle speeds (25 mph, 35 mph, and 43-45 mph), and (4) different pavement types (asphalt and concrete). Table 4.1 describes all the different testing conditions under which the experiment was conducted. All the other significant variables, such as the aperture, were maintained constant. The gain and exposure time were automatically regulated by the capturing software.


Table 4.1 Description of Experiment Conditions

Road Name  Direction  Date   Surveying Time  Pavement Type  Lighting Conditions  Pavement Lighting System  Speed [mph]
SR 28      South      05/18  11.33-12.15 pm  Concrete       Sunny                On                        25, 35, 45
SR 28      South      05/18  12.19-12.35 pm  Concrete       Sunny                Off                       25, 35, 45
SR 331     North      05/18  03.25-03.45 pm  Asphalt        Cloudy               Off                       25, 35, 44
SR 331     North      05/18  03.25-03.45 pm  Asphalt        Cloudy               On                        25, 35, 45
US 441     North      05/19  10.45-11.05 am  Asphalt        Overcast             On                        25, 35, 45
US 441     North      05/19  11.11-11.40 am  Asphalt        Overcast             Off                       25, 35, 45

To appropriately prepare the selected road sections for testing, they had to be temporarily closed for traffic, as shown in Figure 4.1 (a). First, the longitudinal, transverse, and alligator cracks on the selected sections were manually traced on a transparent paper in order to determine the dimensions (length and width) of the cracks. Then, the cracks were imaged by a hand-held Minolta DiMage5 3.3-megapixel camera and the images were saved as uncompressed TIFF files. Next, two poster boards containing the grayscale and the standard resolution targets (Figure 4.1 (b)) were placed in front of the selected crack locations so that the highway evaluation vehicle would pass through and image the targets before imaging the selected cracks. Intensity data obtained from the images of these targets would be used in the study of the effect of noise on the images. Each speed was surveyed three times to evaluate the repeatability of the pavement imaging system.


Figure 4.1 (a) Temporary Closure of the Road; (b) Standard Targets Placed on the Road

Finally, the FDOT highway evaluation vehicle was driven over the tested area (Figure 4.2 (a)) while capturing the images of the targets and cracks under the different lighting, pavement lighting, and speed conditions indicated in Table 4.1. Each run was repeated three times to account for statistical variations. Subsequently, the data acquired during testing were analyzed to verify the accuracy of the pavement imaging subsystem in recording crack information, based on the following criteria: (1) identification ability: the ability to identify a crack in its entirety, especially with respect to hairline cracks; (2) crack length: based on the measured lengths of the cracks from the imprint on the transparency and from the image; the length of a crack can be determined from the number of pixels forming the image of the crack using the software ImageJ; (3) crack thickness: based on the measured width of the cracks and the computation of the width from the number of pixels corresponding to the crack width in the image using the software ImageJ; this determination was limited to easily identifiable locations of the cracks only; and (4) extent of distortion of the crack: based on the measurement of crack propagation in the longitudinal and transverse directions of the image and comparison with the corresponding dimensions based on the imprint on the transparency.

The second phase of the experiment was to determine the distortion in the pavement image due to the optics used in the downward imaging system of the FDOT highway evaluation vehicle at different speeds. The relevant testing area was chosen within the FDOT State Materials Office complex. A grid of small bright tacks was set up at one location of this site with a relatively new asphalt pavement (Figure 4.2 (b)). This 10 in. x 10 in. grid was of square shape, with tacks or colored nails placed at the nodes.

Figure 4.2 (a) FDOT Highway Evaluation Vehicle Driven Through the Testing Area; (b) Verification of the Optical Distortion Due to the Optics Used in the Pavement Imaging System


4.2 Evaluation of the Noise Due to Speed

The maximum SNR for the Basler L-103 line-scan camera solely due to quantization noise would be 59 dB. Generally, because of the presence of other noise sources, mainly photon noise, the actual SNR value will be significantly lower than this theoretical one. One standard way to evaluate the effect of the overall noise on the images captured by the pavement imaging system is to analyze the Signal-to-Noise Ratio (SNR). Therefore, the SNR for each testing scenario in Table 4.1 was computed by measuring the intensity values of the images of the black and white patches of the resolution target (Figure 4.1 (b)). The average SNR value for each individual testing condition was determined, and the results are shown in Figures 4.3 to 4.8.

Figure 4.3 SNR and Gain vs. Speed Plot for US 441 with Pavement Lights On
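As a sketch of how such patch-based SNR values can be computed, the fragment below evaluates one nominally uniform target patch; the 20 log10(mean/standard deviation) definition is a common convention assumed here for illustration, and the synthetic patch is a stand-in for a patch cropped from a captured target image.

```python
import numpy as np

def patch_snr_db(patch):
    """SNR of a nominally uniform target patch, assuming the common
    definition SNR = 20 * log10(mean signal / noise standard deviation)."""
    patch = np.asarray(patch, dtype=float)
    return 20.0 * np.log10(patch.mean() / patch.std())

# Synthetic stand-in for the black patch cropped from a captured image:
rng = np.random.default_rng(0)
patch = rng.normal(loc=80.0, scale=2.0, size=(50, 50))
print(round(patch_snr_db(patch), 1))   # about 32 dB for this stand-in
```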


Figure 4.4 SNR and Gain vs. Speed Plot for US 441 with Pavement Lights Off

Figure 4.5 SNR and Gain vs. Speed Plot for SR 331 with Pavement Lights On


Figure 4.6 SNR and Gain vs. Speed Plot for SR 331 with Pavement Lights Off

Figure 4.7 SNR and Gain vs. Speed Plot for SR 28 with Pavement Lights On


Figure 4.8 SNR and Gain vs. Speed Plot for SR 28 with Pavement Lights Off

Based on Figures 4.3 to 4.8, the maximum and minimum standard deviations of the SNR are 1.31 dB and 0.0 dB, respectively. The maximum standard deviation of 1.31 dB is observed in the plot in Figure 4.3, probably due to the occurrence of flare in the images caused by excessive lighting; the flare effect on the target can also be seen in Figure 3.20 (a). Hence, it can be concluded that the SNR does not seem to depend on the speed. In addition, the SNR results obtained from SR 28 at a speed of 45 mph without pavement lights were excluded, because at that point the exposure time of the camera automatically changed from 1/40,000 seconds to 1/19,000 seconds per imaging line due to the low-lighting conditions. Also, based on the plots presented in Figures 4.3 to 4.8, one can conclude that the vehicle's lighting system does introduce a change in SNR, due to photon noise and the saturation effect, of about 1 dB for asphalt pavements under cloudy conditions and about 10 dB for asphalt pavements under overcast conditions.


Moreover, the plots also seem to show that the vehicle speed does not contribute to a change in the gain coefficient.

4.3 Evaluation of the Gain Due to Lighting Conditions

Another set of plots, presented in Figures 4.9 to 4.15, was generated to show the variation of the SNR and the gain coefficient (Eqn. (3.2)) for different trials on a given section under a specific lighting condition, irrespective of the vehicle speed.

Figure 4.9 Variation of SNR and Gain for US 441 with Pavement Lights On (Exposure 1/40,000 s)

Figure 4.10 Variation of SNR and Gain for US 441 with Pavement Lights Off (Exposure 1/40,000 s)


SR 331, Lights OFF, cloudy2.02.22.42.62.83.03.23.4123456789TrialsGain Coefficient152025303540SRN [dB] Gain SNR(black) Figure 4.11 Variation of SNR and Gain for SR 331 with Pavement Lights On (Exposure 1/40,000 s SR 331, Lights ON, cloudy 2.02.22.42.62.83.23.4101112131415161719TrialsGain Coe1520253540SNR [d 3.0fficient30B] Gain SNR(black) Figure 4.12 Variation of SNR and Gain for SR 331 with Pavement Lights Off (Exposure 1/40,000 s) 143


SR 28, Lights ON, sunny 2.02.22.42.63.03.23.4124579111213TrialsGain Coefent1520253540SNR [ 2.8fici30dB] Gain SNR(black) Figure 4.13 Variation of SNR and Gain for SR 28 with Pavement Lights On (Exposure 1/40,000 s) SR 28 (part 1), Lights OFF, sunny2.02.22.42.62.83.03.23.4141516171819TrialsGain Coefficient152025303540SNR [dB] Gain SNR(black) Figure 4.14 Variation of SNR and Gain for SR 28 with Pavement Lights Off (Exposure 1/40,000 s) 144


SR 28 (part 2), Lights OFF, sunny2.02.22.42.62.83.03.23.4202122TrialsGain Coefficient152025303540SNR [dB] Gain SNR(black) Figure 4.15 Variation of SNR and Gain for SR 28 with Pavement Lights Off (Exposure 1/19,000 s) The above plots indicate that, when the pavement lighting system is on, the gain coutside lighting condition (i.e. sunny or cloudy). Moreover, it can be seen that the gain coefficient generally depends not on the paveent type but on the lighting conditions sconditions either when the pavement lights are turned on (Figure 4.9 and 4.13) or when they are turned off (Figure 4.10 and 4.14). The plots also show that for a given pavement type and outside lighting conditions, the gain coefficients do not vary substantially (standard deviation less than 0.12) with or without the vehicles lighting system. 4.4 Evaluation of Pavement Images for Ability to Recognize Cracks Images captured by the FDOT highway evaluation vehicle were also analyzed to determine the ability of recognition of different crack features. First, the crack features traced on a transparent paper during manual survey (Section 4.1), were used to measure oefficient is consistently lowered for both asphalt and concrete pavement under any m ince both asphalt and concrete pavements have similar gain coefficients under sunny 145

PAGE 166

crack widths and lengths between distinctive points. Then, images of these features captured by the hand-held Minolta DiMage5 3.3-megapixel camera were also evaluated. Next, by using the Equation 2.5, the actual sizes of the same features were computed. Verification of the results of the manual survey was performed by comparing the feature sizes from the manual survey with those computed from the hand-held digital camera. The results of this comparison are shown in Figure 4.16 4.25 and Table 4.2 defines the notation used to identify crack features. The error, dist presented in these figures was computed as: [%]100crackcrackdistcrackddd (4.1) where and that computed from the paveme by the Minolta DiMage5 camera, respectively. crackd and crackdrepresent the crack dimensions obtained from the manual survey nt image captured s 146
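The error measure of Eqn. (4.1), combined with a pixel count and the roughly 2 mm pixel footprint discussed later in this section, can be sketched as follows; the function name and the example values are illustrative.

```python
def dist_error_pct(d_manual_mm, d_image_mm):
    """Relative dimension error of Eqn. (4.1), in percent."""
    return abs(d_manual_mm - d_image_mm) / d_manual_mm * 100.0

# A 3 mm hairline crack whose width spans two pixels in the image,
# at an assumed footprint of about 2 mm per pixel:
d_image = 2 * 2.0                      # pixel count times pixel size [mm]
print(dist_error_pct(3.0, d_image))    # about 33 percent
```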


Table 4.2 Definition of the Notations Used in Figures 4.16-4.25

Notation                   Description
a_DF_1 - a_DF_10           US 441 section, image DF, crack features 1 to 10
a_DF_vert1 - a_DF_vert3    US 441 section, image DF, scale in transverse direction, rectangles 1 to 3
a_DF_hor1 - a_DF_hor3      US 441 section, image DF, scale in longitudinal direction, rectangles 1 to 3
a_E_1 - a_E_5              US 441 section, image E, crack features 1 to 5
a_E_vert1 - a_E_vert3      US 441 section, image E, scale in transverse direction, rectangles 1 to 3
a_E_hor1 - a_E_hor3        US 441 section, image E, scale in longitudinal direction, rectangles 1 to 3
a_BC_1 - a_BC_3            US 441 section, image BC, crack features 1 to 3
a_BC_vert1 - a_BC_vert3    US 441 section, image BC, scale in transverse direction, rectangles 1 to 3
a_BC_hor1 - a_BC_hor3      US 441 section, image BC, scale in longitudinal direction, rectangles 1 to 3
a_A_1 - a_A_3              US 441 section, image A, crack features 1 to 3
a_A_vert1 - a_A_vert3      US 441 section, image A, scale in transverse direction, rectangles 1 to 3
a_A_hor1 - a_A_hor3        US 441 section, image A, scale in longitudinal direction, rectangles 1 to 3
b_CD_1 - b_CD_9            SR 331 section, image CD, crack features 1 to 9
b_CD_vert1 - b_CD_vert3    SR 331 section, image CD, scale in transverse direction, rectangles 1 to 3
b_CD_hor1 - b_CD_hor3      SR 331 section, image CD, scale in longitudinal direction, rectangles 1 to 3
b_AB_1 - b_AB_4            SR 331 section, image AB, crack features 1 to 4
b_AB_vert1 - b_AB_vert3    SR 331 section, image AB, scale in transverse direction, rectangles 1 to 3
b_AB_hor1 - b_AB_hor3      SR 331 section, image AB, scale in longitudinal direction, rectangles 1 to 3
b_EF_1 - b_EF_5            SR 331 section, image EF, crack features 1 to 5
b_EF_vert1 - b_EF_vert3    SR 331 section, image EF, scale in transverse direction, rectangles 1 to 3
b_EF_hor1 - b_EF_hor3      SR 331 section, image EF, scale in longitudinal direction, rectangles 1 to 3
c_CE_1 - c_CE_9            SR 28 section, image CE, crack features 1 to 9
c_CE_vert1 - c_CE_vert3    SR 28 section, image CE, scale in transverse direction, rectangles 1 to 3
c_CE_hor1 - c_CE_hor3      SR 28 section, image CE, scale in longitudinal direction, rectangles 1 to 3
c_D_1 - c_D_5              SR 28 section, image D, crack features 1 to 5
c_D_vert1 - c_D_vert3      SR 28 section, image D, scale in transverse direction, rectangles 1 to 3
c_D_hor1 - c_D_hor3        SR 28 section, image D, scale in longitudinal direction, rectangles 1 to 3
c_AB_1 - c_AB_5            SR 28 section, image AB, crack features 1 to 5
c_AB_vert1 - c_AB_vert3    SR 28 section, image AB, scale in transverse direction, rectangles 1 to 3
c_AB_hor1 - c_AB_hor3      SR 28 section, image AB, scale in longitudinal direction, rectangles 1 to 3
f                          Focal length set on the DiMage5 camera
O                          Distance from the DiMage5 lens to the pavement


Figure 4.16 Verification of Manual Survey (US 441, image DF; f=16.8, O=4 ft 3 in) with Image Captured by DiMage5 Digital Camera

Figure 4.17 Verification of Manual Survey (US 441, image E; f=16.5, O=4 ft 1 in) with Image Captured by DiMage5 Digital Camera


Figure 4.18 Verification of Manual Survey (US 441, image BC; f=16.8, O=4 ft 0.5 in) with Image Captured by DiMage5 Digital Camera

Figure 4.19 Verification of Manual Survey (US 441, image A; f=16.5, O=4 ft 0.5 in) with Image Captured by DiMage5 Digital Camera


Figure 4.20 Verification of Manual Survey (SR 331, image CD; f=16.3, O=4 ft 1 in) with Image Captured by DiMage5 Digital Camera

Figure 4.21 Verification of Manual Survey (SR 331, image AB; f=16.3, O=4 ft 1.5 in) with Image Captured by DiMage5 Digital Camera


Figure 4.22 Verification of Manual Survey (SR 331, image EF; f=16.3, O=4 ft) with Image Captured by DiMage5 Digital Camera

Figure 4.23 Verification of Manual Survey (SR 28, image CE; f=16.3, O=4 ft 1.5 in) with Image Captured by DiMage5 Digital Camera


Figure 4.24 Verification of Manual Survey (SR 28, image D; f=16.3, O=4 ft) with Image Captured by DiMage5 Digital Camera

Figure 4.25 Verification of Manual Survey (SR 28, image AB; f=16.3, O=4 ft 3 in) with Image Captured by DiMage5 Digital Camera


From Figures 4.16 to 4.25 it can be seen that the computed error for features larger than 10 mm is less than 9.6%. It can also be seen that, as the feature size decreases, the error generally increases. This is to be expected, since a pixel represents a pavement area as large as 2 mm x 2 mm. Therefore, if a hairline crack is 3 mm wide and is visually estimated as consisting of two pixels, its width is evaluated as 4 mm, resulting in an error of about 33%. Moreover, the precision of the measurements based on the image was checked by measuring the lengths of two scales attached to the pavement in the longitudinal and transverse directions. Each rectangle printed on these scales has a length of 20 mm. The maximum error arising from the recognition of the scale, measured as an average value from the 3 rectangles in each direction, was 0.85 mm (0.85/20 x 100 = 4.5%) in the transverse direction and 0.84 mm (0.84/20 x 100 = 4.4%) in the longitudinal direction. Measurement of the crack features in images BC (US 441), EF (SR 331), and AB (SR 28) produced higher errors due to the lower precision of the manual survey performed by inexperienced personnel. Considering the above discussed limitations, and based on the plots in Figures 4.16-4.25, it can be concluded that the manual survey was reasonably precise.

Based on the manually verified crack widths and lengths, the ability to recognize the same features from the images captured by the pavement imaging system was investigated. First, the influence of the speed on the recognition ability was tested, and the results are shown in Figures 4.26 to 4.38 and in Tables 4.4 to 4.9. Table 4.3 describes the notation used for the identification of the crack features. It must be noted that cracks recorded with zero width or length were unrecognizable from the images due to darkness resulting from low-lighting conditions.

Table 4.3 Definition of the Notations Used in Figures 4.26-4.39

Notation                          Description
a_f1_w, a_f3_w, a_f5_w            US 441 section, image F, width of cracks 1, 3, 5
a_d8_w - a_d10_w                  US 441 section, image D, width of cracks 8 to 10
a_e1_w, a_e2_w                    US 441 section, image E, width of cracks 1, 2
a_c1_w, a_c2_w                    US 441 section, image C, width of cracks 1, 2
a_vert1 - a_vert3                 US 441 section, image E, scale in transverse direction, rectangles 1 to 3
a_hor1 - a_hor3                   US 441 section, image E, scale in longitudinal direction, rectangles 1 to 3
a_e3_t, a_e5_t                    US 441 section, image E, length of cracks 3, 5
b_c1_w, b_c2_w                    SR 331 section, image C, width of cracks 1, 2
b_d9_w                            SR 331 section, image D, width of crack 9
b_e2_w                            SR 331 section, image E, width of crack 2
b_f3_w, b_f5_w                    SR 331 section, image F, width of cracks 3, 5
b_vert1 - b_vert3                 SR 331 section, image C, scale in transverse direction, rectangles 1 to 3
b_hor1 - b_hor3                   SR 331 section, image C, scale in longitudinal direction, rectangles 1 to 3
b_c3_t, b_c4_t, b_c8_t            SR 331 section, image C, length of cracks 3, 4, 8
b_a1_t                            SR 331 section, image A, length of crack 1
b_b2_t, b_b3_t                    SR 331 section, image B, length of cracks 2, 3
c_d1_w - c_d3_w, c_d5_w           SR 28 section, image D, width of cracks 1 to 3, 5
c_c3_w - c_c5_w, c_c7_w - c_c9_w  SR 28 section, image C, width of cracks 3 to 5, 7 to 9
c_a1_w, c_a2_w, c_a4_w            SR 28 section, image A, width of cracks 1, 2, 4
c_vert1 - c_vert3                 SR 28 section, image C, scale in transverse direction, rectangles 1 to 3
c_hor1 - c_hor3                   SR 28 section, image C, scale in longitudinal direction, rectangles 1 to 3


Figure 4.26 Widths of the Crack Features (US 441, lights on) Measured from Images at Different Speeds

Figure 4.27 Lengths of the Crack Features (US 441, lights on) Measured from Images at Different Speeds


Figure 4.28 Widths of the Crack Features (US 441, lights off) Measured from Images at Different Speeds

Figure 4.29 Lengths of the Crack Features (US 441, lights off) Measured from Images Captured at Different Speeds

Figure 4.30 Widths of the Crack Features (SR 331, lights on) Measured from Images at Different Speeds

Figure 4.31 Lengths of the Crack Features (SR 331, lights on) Measured from Images at Different Speeds


Figure 4.32 Widths of the Crack Features (SR 331, lights off) Measured from Images at Different Speeds

Figure 4.33 Lengths of the Crack Features (SR 331, lights off) Measured from Images at Different Speeds


Figure 4.34 Widths of the Crack Features (SR 28, lights on) Measured from Images at Different Speeds

Figure 4.35 Lengths of the Crack Features (SR 28, lights on) Measured from Images at Different Speeds


Figure 4.36 Widths of the Crack Features (SR 28, lights off) Measured from Images at Different Speeds

Figure 4.37 Lengths of the Crack Features (SR 28, lights off) Measured from Images at Different Speeds


Table 4.4 Dimensions of the Crack Features Evaluated from the Images and the Corresponding Errors (delta) at Different Speeds (US 441, lights on)

Feature   Measured [mm]  Dim./Std. Dev., 25 mph [mm]  delta 25 mph [%]  Dim./Std. Dev., 35 mph [mm]  delta 35 mph [%]  Dim./Std. Dev., 45 mph [mm]  delta 45 mph [%]
a_f1_w    16.5           17.3 / 1.44                  4.9               19.9 / 2.72                  20.7              17.1 / 2.76                  13.6
a_f3_w    4.0            5.8 / 1.91                   45.4              5.7 / 1.76                   42.7              6.3 / 1.09                   57.5
a_f5_w    2.5            5.1 / 0.90                   103.2             4.4 / 1.09                   76.4              1.9 / 0.00                   24.4
a_vert1   20.0           18.3 / 1.09                  8.7               17.0 / 0.00                  15.0              17.6 / 1.09                  11.8
a_vert2   20.0           18.3 / 1.09                  8.7               17.0 / 0.00                  15.0              17.0 / 0.07                  14.8
a_vert3   20.0           17.0 / 0.00                  15.0              17.0 / 1.89                  15.0              17.6 / 1.09                  11.8
a_hor1    20.0           22.7 / 0.00                  13.4              22.7 / 0.00                  13.4              22.1 / 1.04                  10.4
a_hor2    20.0           22.7 / 0.00                  13.4              20.8 / 1.89                  3.9               21.4 / 1.09                  7.1
a_hor3    20.0           22.0 / 1.09                  10.2              22.0 / 1.09                  10.2              22.0 / 1.09                  10.2
a_d8_w    9.0            7.7 / 0.85                   14.2              9.6 / 0.85                   7.2               6.9 / 0.65                   23.8
a_d9_w    18.5           18.0 / 1.72                  2.4               18.4 / 2.66                  0.7               16.8 / 1.32                  9.4
a_d10_w   4.0            5.6 / 0.90                   39.9              3.3 / 3.09                   17.5              4.2 / 0.55                   9.8
a_e1_w    12.0           11.3 / 5.00                  5.5               13.5 / 0.26                  12.8              12.3 / 0.96                  5.3
a_e2_w    15.0           15.5 / 2.62                  3.5               14.5 / 3.64                  3.0               11.3 / 1.34                  24.8
a_e3_t    157.0          148.8 / 4.28                 5.3               143.7 / 2.70                 8.5               143.1 / 6.58                 8.9
a_e5_t    168.0          178.0 / 17.2                 5.9               185.8 / 1.64                 10.6              173.6 / 7.47                 3.8
a_c1_w    16.0           17.7 / 2.21                  10.4              17.2 / 3.43                  7.4               14.6 / 1.08                  8.6
a_c2_w    6.0            8.2 / 1.09                   36.5              8.2 / 1.09                   36.5              5.0 / 1.09                   16.0

Table 4.5 Dimensions of the Crack Features Evaluated from the Images and the Corresponding Errors (delta) at Different Speeds (US 441, lights off)

Feature   Measured [mm]  Dim./Std. Dev., 25 mph [mm]  delta 25 mph [%]  Dim./Std. Dev., 35 mph [mm]  delta 35 mph [%]  Dim./Std. Dev., 45 mph [mm]  delta 45 mph [%]
a_f1_w    16.5           Not visible                  n/a               Not visible                  n/a               Not visible                  n/a
a_f3_w    4.0            Not visible                  n/a               Not visible                  n/a               Not visible                  n/a
a_f5_w    2.5            Not visible                  n/a               Not visible                  n/a               Not visible                  n/a
a_vert1   20.0           17.6 / 1.09                  11.8              17.6 / 1.09                  11.8              18.3 / 1.09                  8.7
a_vert2   20.0           17.0 / 0.00                  15.0              17.6 / 2.18                  11.8              16.4 / 1.09                  18.1
a_vert3   20.0           18.9 / 0.00                  5.5               17.6 / 2.18                  11.8              18.9 / 0.00                  5.5
a_hor1    20.0           22.0 / 1.09                  10.2              23.6 / 0.95                  17.8              22.7 / 1.89                  13.4
a_hor2    20.0           22.0 / 1.09                  10.2              22.0 / 1.09                  10.2              21.4 / 1.09                  7.1
a_hor3    20.0           22.0 / 1.09                  10.2              22.7 / 0.00                  13.4              22.0 / 1.09                  10.2
a_d8_w    9.0            Not visible                  n/a               Not visible                  n/a               Not visible                  n/a
a_d9_w    18.5           Not visible                  n/a               Not visible                  n/a               Not visible                  n/a
a_d10_w   4.0            Not visible                  n/a               Not visible                  n/a               Not visible                  n/a
a_e1_w    12.0           8.6 / 0.95                   28.6              8.6 / 0.95                   28.6              6.4 / 1.27                   46.7
a_e2_w    15.0           9.1 / 0.70                   39.3              6.0 / 0.26                   60.2              6.5 / 0.65                   56.7
a_e3_t    157.0          Not visible                  n/a               Not visible                  n/a               Not visible                  n/a
a_e5_t    168.0          Not visible                  n/a               Not visible                  n/a               Not visible                  n/a
a_c1_w    16.0           Not visible                  n/a               Not visible                  n/a               Not visible                  n/a
a_c2_w    6.0            Not visible                  n/a               Not visible                  n/a               Not visible                  n/a


Table 4.6 Dimensions of the Crack Features Evaluated from the Images and the Corresponding Errors (delta) at Different Speeds (SR 331, lights on)

Feature   Measured [mm]  Dim./Std. Dev., 25 mph [mm]  delta 25 mph [%]  Dim./Std. Dev., 35 mph [mm]  delta 35 mph [%]  Dim./Std. Dev., 45 mph [mm]  delta 45 mph [%]
b_c1_w    3.2            3.8 / 1.89                   18.1              4.8 / 1.19                   50.6              4.3 / 0.45                   34.4
b_c2_w    11.0           15.2 / 3.83                  38.1              14.9 / 0.95                  35.6              11.5 / 0.00                  4.8
b_c3_t    9.0            11.4 / 2.34                  26.6              12.6 / 2.32                  39.8              10.0 / 2.84                  11.3
b_vert1   20.0           17.0 / -                     15.0              Not visible                  n/a               Not visible                  n/a
b_vert2   20.0           15.1 / -                     24.4              Not visible                  n/a               Not visible                  n/a
b_vert3   20.0           17.0 / -                     15.0              Not visible                  n/a               Not visible                  n/a
b_hor1    20.0           20.2 / 1.89                  0.8               Not visible                  n/a               Not visible                  n/a
b_hor2    20.0           19.5 / 2.18                  2.4               Not visible                  n/a               Not visible                  n/a
b_hor3    20.0           19.8 / 1.34                  0.8               Not visible                  n/a               Not visible                  n/a
b_c4_t    8.0            10.1 / 2.89                  26.0              12.0 / 1.09                  49.6              13.3 / 0.09                  66.0
b_c8_t    12.0           13.0 / 0.96                  8.2               14.2 / 2.30                  18.6              14.2 / 0.33                  18.2
b_d9_w    9.0            10.7 / 1.09                  19.0              10.2 / 1.14                  13.5              10.8 / 1.01                  20.5
b_a1_t    28.6           29.0 / 5.54                  1.3               26.6 / 1.67                  7.0               30.9 / 3.87                  8.1
b_b2_t    31.3           33.6 / 4.37                  7.4               33.7 / 6.79                  7.8               29.9 / 1.44                  4.6
b_b3_t    79.0           71.2 / 1.51                  9.9               59.5 / 12.83                 24.6              70.4 / 1.91                  10.9
b_e2_w    2.0            3.7 / 1.54                   83.6              3.7 / 1.54                   83.6              4.3 / 0.45                   115.0
b_f3_w    3.8            4.4 / 1.09                   16.0              5.7 / 0.00                   49.2              5.3 / 0.64                   39.5
b_f5_w    5.5            6.2 / 1.65                   12.4              7.8 / 1.95                   40.9              6.0 / 1.34                   8.6

Table 4.7 Dimensions of the Crack Features Evaluated from the Images and the Corresponding Errors (delta) at Different Speeds (SR 331, lights off)

Feature   Measured [mm]  Dim./Std. Dev., 25 mph [mm]  delta 25 mph [%]  Dim./Std. Dev., 35 mph [mm]  delta 35 mph [%]  Dim./Std. Dev., 45 mph [mm]  delta 45 mph [%]
b_c1_w    3.2            6.1 / 0.00                   91.1              3.8 / -                      18.1              Not visible                  n/a
b_c2_w    11.0           13.1 / 0.23                  18.8              13.8 / -                     25.8              Not visible                  n/a
b_c3_t    9.0            15.9 / 0.99                  77.2              14.0 / -                     55.4              Not visible                  n/a
b_vert1   20.0           17.0 / 0.00                  15.0              17.0 / -                     15.0              Not visible                  n/a
b_vert2   20.0           17.0 / 0.00                  15.0              15.1 / -                     24.4              Not visible                  n/a
b_vert3   20.0           17.0 / 0.00                  15.0              17.0 / -                     15.0              Not visible                  n/a
b_hor1    20.0           20.2 / 1.03                  1.0               21.7 / 1.34                  8.7               20.8 / -                     3.9
b_hor2    20.0           20.2 / 2.89                  0.8               20.8 / 0.00                  3.9               18.9 / -                     5.5
b_hor3    20.0           20.8 / 3.27                  3.9               21.7 / 1.34                  8.7               20.8 / -                     3.9
b_c4_t    8.0            13.4 / 0.00                  67.3              13.2 / -                     65.3              Not visible                  n/a
b_c8_t    12.0           12.1 / 1.11                  1.0               11.5 / -                     4.0               Not visible                  n/a
b_d9_w    9.0            16.1 / 4.53                  79.0              12.1 / -                     34.1              Not visible                  n/a
b_a1_t    28.6           36.0 / 0.00                  25.7              41.5 / 2.65                  44.9              Not visible                  n/a
b_b2_t    31.3           43.7 / 8.29                  39.6              40.3 / 8.10                  28.7              40.5 / 23.83                 29.4
b_b3_t    79.0           69.4 / 2.69                  12.1              67.8 / 1.41                  14.2              70.6 / 3.36                  10.6
b_e2_w    2.0            2.8 / 1.34                   41.7              4.6 / -                      128.1             Not visible                  n/a
b_f3_w    3.8            4.9 / 1.65                   30.2              7.9 / -                      107.0             Not visible                  n/a
b_f5_w    5.5            4.2 / 0.55                   24.2              1.9 / -                      65.6              Not visible                  n/a


Table 4.8 Dimensions of the Crack Features Evaluated from the Images and the Corresponding Errors (delta) at Different Speeds (SR 28, lights on)

Feature   Measured [mm]  Dim./Std. Dev., 25 mph [mm]  delta 25 mph [%]  Dim./Std. Dev., 35 mph [mm]  delta 35 mph [%]  Dim./Std. Dev., 45 mph [mm]  delta 45 mph [%]
c_d1_w    7.0            7.1 / 0.89                   1.0               6.1 / 0.00                   12.6              6.1 / 0.00                   12.6
c_d2_w    7.8            7.8 / 0.18                   0.5               7.3 / 1.01                   6.6               6.6 / 0.83                   15.4
c_d3_w    0.6            Not visible                  n/a               Not visible                  n/a               1.9 / -                      214.9
c_d5_w    1.3            2.5 / 1.09                   93.8              3.8 / 1.89                   190.7             3.8 / 0.00                   190.7
c_vert1   20.0           17.6 / 1.09                  11.8              18.3 / 1.09                  8.7               17.0 / 0.00                  15.0
c_vert2   20.0           17.0 / 0.00                  15.0              17.0 / 0.00                  15.0              17.0 / 0.00                  15.0
c_vert3   20.0           17.0 / 0.00                  15.0              17.0 / 0.00                  15.0              17.0 / 0.00                  15.0
c_hor1    20.0           21.4 / 1.09                  7.1               22.0 / 1.09                  10.2              21.4 / 1.09                  7.1
c_hor2    20.0           21.4 / 1.09                  7.1               20.2 / 1.09                  0.8               20.8 / 1.09                  3.9
c_hor3    20.0           21.4 / 1.09                  7.1               21.4 / 1.09                  7.1               21.4 / 1.09                  7.1
c_c3_w    4.0            7.8 / 1.67                   95.2              7.6 / 1.89                   89.0              6.5 / 1.16                   63.7
c_c4_w    0.8            3.8 / -                      372.4             3.2 / 1.89                   303.2             6.1 / -                      664.4
c_c5_w    1.0            2.5 / 1.09                   151.9             2.5 / 1.09                   151.9             2.5 / 1.09                   151.9
c_c7_w    4.0            5.3 / 1.35                   33.4              5.6 / 0.90                   39.9              5.1 / 0.90                   27.0
c_c8_w    1.0            1.9 / 0.00                   89.0              3.2 / 2.29                   222.6             1.9 / 0.00                   89.0
c_c9_w    0.7            3.2 / 2.29                   360.8             Not visible                  n/a               Not visible                  n/a
c_a1_w    5.0            6.7 / 1.01                   34.0              6.1 / 0.00                   22.3              6.5 / 0.65                   29.8
c_a2_w    21.0           22.9 / 0.61                  9.0               22.7 / 0.95                  8.0               22.7 / 0.95                  8.0

Table 4.9 Dimensions of the Crack Features Evaluated from the Images and the Corresponding Errors (delta) at Different Speeds (SR 28, lights off)

Feature   Measured [mm]  Dim./Std. Dev., 25 mph [mm]  delta 25 mph [%]  Dim./Std. Dev., 35 mph [mm]  delta 35 mph [%]  Dim./Std. Dev., 45 mph [mm]  delta 45 mph [%]
c_d1_w    7.0            6.1 / 0.00                   12.6              6.7 / 1.01                   4.3               6.1 / 0.00                   12.6
c_d2_w    7.8            8.4 / 1.15                   7.3               8.2 / 1.84                   4.7               8.7 / 0.91                   12.2
c_d3_w    0.6            1.9 / -                      214.9             3.8 / -                      529.9             3.8 / -                      529.9
c_d5_w    1.3            4.4 / 1.09                   239.2             Not visible                  n/a               3.8 / 1.89                   190.7
c_vert1   20.0           17.6 / 1.09                  11.8              17.6 / 1.09                  11.8              17.0 / 0.00                  15.0
c_vert2   20.0           17.0 / 0.00                  15.0              17.6 / 1.09                  11.8              17.0 / 0.00                  15.0
c_vert3   20.0           17.7 / 1.06                  11.6              17.6 / 1.09                  11.8              17.0 / 0.00                  15.0
c_hor1    20.0           22.7 / 0.00                  13.4              22.0 / 1.09                  10.2              22.7 / 0.00                  13.4
c_hor2    20.0           21.4 / 1.09                  7.1               20.8 / 0.00                  3.9               21.7 / 1.34                  8.7
c_hor3    20.0           22.7 / 0.00                  13.4              21.4 / 1.09                  7.1               22.7 / 0.00                  13.4
c_c3_w    4.0            7.4 / 2.60                   84.2              8.5 / 1.13                   111.5             7.0 / 1.49                   74.4
c_c4_w    0.8            4.6 / 0.00                   470.2             5.1 / 0.78                   539.4             4.6 / -                      470.2
c_c5_w    1.0            2.8 / 1.34                   183.4             5.7 / -                      466.9             3.8 / 1.89                   277.9
c_c7_w    4.0            5.1 / 0.90                   27.0              7.0 / 1.49                   74.4              6.5 / 2.09                   61.5
c_c8_w    1.0            2.8 / 1.54                   178.0             3.7 / 1.54                   267.1             4.0 / 1.94                   304.0
c_c9_w    0.7            1.9 / -                      169.9             3.8 / -                      439.9             1.9 / 0.00                   169.9
c_a1_w    5.0            6.1 / 0.00                   22.3              7.4 / 0.36                   48.9              8.6 / 2.37                   72.0
c_a2_w    21.0           23.5 / 0.50                  11.8              24.8 / 1.24                  18.2              23.2 / 1.64                  10.3


Based on Figures 4.26 to 4.37, it can be concluded that the vehicle speed does not play a significant role in the ability to recognize crack widths and lengths. When the vehicle lighting system is on, the largest difference between the widths estimated at different speeds is 3.2 mm (Figure 4.26), occurring under overcast conditions for crack widths of less than 10 mm. Similarly, when the vehicle lighting system is off, the largest difference between the widths estimated at different speeds is 3.1 mm (Figure 4.28), again under overcast conditions for crack widths of less than 10 mm. Hence it can be concluded that overcast conditions create the environment in which the pavement imaging system has the most problems, introducing a higher uncertainty into the recognition of the crack features. The above figures also show that, for asphalt pavements, the recognition error with the lighting system on is lower than when it is off. Moreover, without the lighting system there were a number of situations in which pavement features could not be recognized at all. For concrete pavements, on the other hand, recognition ability was not an issue, although the error with the lighting system was still smaller than that without it. Based on the results in Tables 4.4 to 4.9, conclusions regarding the repeatability of image-based crack feature evaluation are presented in Table 4.10, which depicts the maximum and minimum standard deviations involved in crack feature evaluation based on images at any speed.


Table 4.10 Repeatability of Crack Evaluation

                                Standard Deviation of Crack Evaluation (Max/Min)
Feature Size  Lighting System   US 441 (asphalt)     SR 331 (asphalt)     SR 28 (concrete)
< 3 mm        on                1.09 / 0.00          1.54 / 0.45          Not Visible / 0.00
< 3 mm        off               Not Visible          Not Visible / 1.34   Not Visible / 1.94
3-6 mm        on                3.09 / 0.55          1.89 / 0.00          1.89 / 0.00
3-6 mm        off               Not Visible          Not Visible / 1.34   1.49 / 0.00
> 6 mm        on                17.2 / 0.00          12.83 / 0.00         1.09 / 0.00
> 6 mm        off               Not Visible / 0.00   Not Visible / 8.29   1.89 / 0.00

In the next step, the images were evaluated to investigate any possible correlation between the SNR values presented in the plots in Figures 4.3 to 4.8 and the errors associated with the evaluation of crack width and length shown in Figures 4.26 to 4.37. Based on Figures 4.38 and 4.39, one can see that when the SNR value is relatively high, the error associated with the measured widths or lengths of the crack features generally reduces. Only the plots in which the difference between the maximum and minimum SNR values is more than 1 dB were used to generate Figures 4.38 and 4.39.


Figure 4.38 Correlation Between Variability in Crack Measurements and SNR Values for US 441

Figure 4.39 Correlation Between Variability in Crack Measurements and SNR Values for SR 28


4.5 Evaluation of the Effect of the Vehicle Movement on the Image

Inertial data produced by the IMU unit are integrated by the POS computer in the FDOT highway evaluation vehicle to evaluate data relevant to the roadway geometry. Therefore, even within the time interval corresponding to one image frame, one can extract the relevant data from the IMU, such as the velocities in the x, y, and z directions in the global coordinate system and the accelerations in the x, y, and z directions of the body coordinate system. These raw data were processed through the Applanix POSPac program shown in Figure 4.41. Then, the velocities in the global coordinate system (v_x^{global}, v_y^{global}, v_z^{global}) were transformed into the corresponding velocities in the body coordinate system (v_x^{body}, v_y^{body}, v_z^{body}) using the following equations:

v_x^{body} = v_x^{global} \cos\psi + v_y^{global} \sin\psi    (4.2)

v_y^{body} = v_y^{global} \cos\psi - v_x^{global} \sin\psi    (4.3)

v_z^{body} = v_z^{global} \cos\phi    (4.4)

where \psi is the heading and \phi represents the roll or cross-slope.

Figure 4.40 Global vs. Body Coordinate System


Figure 4.41 Applanix POSPac Main Window

Next, the incremental distances d_i^{body} traveled by the vehicle in the x, y, and z directions of the body coordinate system were computed from the corresponding velocities (v) and accelerations (a) as:

d_i^{body} = \frac{v_i^{body} + v_{i+1}^{body}}{2}(t_{i+1} - t_i), \qquad v_{i+1}^{body} = v_i^{body} + \frac{a_i^{body} + a_{i+1}^{body}}{2}(t_{i+1} - t_i)    (4.5)

where t_{i+1} - t_i is the considered time interval, equal to 1/200 second (consistent with Table 4.11). For the testing run on US 441, the acceleration and velocity results for the testing duration (time interval of 312,297.2524 to 312,313.6973) are shown in Figures 4.42 and 4.43. The distances computed using Eqn. (4.5) are plotted in Figures 4.44 to 4.46 and 4.47 to 4.49 for US 441 and SR 331, respectively.
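A minimal sketch of the transformation and integration in Eqns. (4.2)-(4.5) is given below, assuming the reconstructed forms of the equations above; the function names and NumPy usage are illustrative and not part of the POSPac software.

```python
import numpy as np

def global_to_body(v_global, heading_rad, roll_rad):
    """Project global-frame velocities (vx, vy, vz) into the body
    frame, following Eqns. (4.2)-(4.4)."""
    vx, vy, vz = v_global
    vx_b = vx * np.cos(heading_rad) + vy * np.sin(heading_rad)
    vy_b = vy * np.cos(heading_rad) - vx * np.sin(heading_rad)
    vz_b = vz * np.cos(roll_rad)
    return vx_b, vy_b, vz_b

def incremental_distances(v, a, dt=1.0 / 200.0):
    """Distance traveled per IMU interval from sampled body velocities
    and accelerations, Eqn. (4.5): a trapezoidal velocity update
    followed by a trapezoidal distance integration."""
    v = np.asarray(v, dtype=float)
    a = np.asarray(a, dtype=float)
    v_next = v[:-1] + 0.5 * (a[:-1] + a[1:]) * dt
    return 0.5 * (v[:-1] + v_next) * dt

# A vehicle at roughly 11.2 m/s (25 mph) with small speed fluctuations:
d = incremental_distances([11.2, 11.21, 11.19], [0.05, -0.03, -0.02])
print(d * 1000.0)   # increments of roughly 56 mm, as in Table 4.11
```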


Figure 4.42 Body Acceleration Data on US 441

Figure 4.43 Body Velocity Data in the X Direction on US 441


Figure 4.44 Incremental Distance Traveled in the X Direction on US 441

Figure 4.45 Incremental Distance Traveled in the Y Direction on US 441


Figure 4.46 Incremental Distance Traveled in the Z Direction on US 441

Figure 4.47 Incremental Distance Traveled in the X Direction on SR 331


Figure 4.48 Incremental Distance Traveled in the Y Direction on SR 331

Figure 4.49 Incremental Distance Traveled in the Z Direction on SR 331


The maximum and average distance values, representing the displacement of the vehicle in a period of 1/200 s corresponding to the IMU data collection frequency of 200 Hz, were extracted from Figures 4.44 to 4.49. Then the simple lens formula (Eqn. (2.3)) was used to determine the movement of the pavement features on the sensor in the three directions x, y, and z as:

d_x^{sensor} = d_x^{object} \frac{f}{O - f}    (4.6)

d_y^{sensor} = d_y^{object} \frac{f}{O - f}    (4.7)

d_z^{sensor} = \frac{f}{O - d_z^{object} - f} - \frac{f}{O - f} \quad \text{(for a unit size)}    (4.8)

Then, the displacement in pixels in the x, y, or z direction can be computed as:

\Delta_{sensor} = \frac{d_{sensor}}{p}    (4.9)

where d_{sensor} represents the distance change on the sensor due to the movement of the vehicle in the x, y, or z direction, and p is the pixel pitch of the Basler L103 line-scan camera. The complete results of this evaluation are presented in Table 4.11. Based on the results for a speed of 25 mph, it can be concluded that the vertical (z) movement of the vehicle for exposure times of 1/19,000 or 1/40,000 seconds does not introduce any error. In the x and y directions, the movement of the vehicle does not introduce any substantial error, except in the x direction on both US 441 and SR 331 when an exposure time of 1/19,000 seconds is used. The magnitude of this error is approximately 32% of a pixel, close to the Nyquist frequency limit. For highway speeds and an exposure time of 1/19,000 seconds, the movement of the vehicle in the x direction can therefore become a limiting factor in image quality evaluation.

Table 4.11 Number of Pixels of Image Displacement Due to Vehicle Movement (25 mph)

Road    Body        Avg Distance    Max Distance    Avg Displacement    Max Displacement    Avg Displacement    Max Displacement
Name    Coordinate  Traveled [mm]   Traveled [mm]   of Image [pixels]   of Image [pixels]   of Image [pixels]   of Image [pixels]
                    in 1/200 s      in 1/200 s      in 1/19,000 s       in 1/19,000 s       in 1/40,000 s       in 1/40,000 s
US 441  x           56.0            57.0            0.32                0.31                0.15                0.15
        y           0.7             2.5             0.014               0.004               0.007               0.002
        z           1.0             4.5             0                   0                   0                   0
SR 331  x           54.0            57.0            0.32                0.30                0.15                0.14
        y           0.2             1.2             0.007               0.001               0.003               0.001
        z           0.5             1.25            0                   0                   0                   0
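The chain from vehicle motion to image displacement in Eqns. (4.6) and (4.9) can be sketched as follows. The focal length, object distance, and 10 micrometer pixel pitch used below are hypothetical stand-ins rather than the actual parameters of the FDOT pavement camera, although they yield displacements of the same order as those in Table 4.11.

```python
def image_displacement_px(d_object_mm, exposure_s,
                          interval_s=1.0 / 200.0,
                          f_mm=12.0, o_mm=2200.0, pitch_mm=0.010):
    """Pixels of image smear during one exposure, via Eqns. (4.6)
    and (4.9). d_object_mm is the distance traveled per IMU interval."""
    d_exposure = d_object_mm * exposure_s / interval_s  # motion during exposure
    d_sensor = d_exposure * f_mm / (o_mm - f_mm)        # Eqn. (4.6)
    return d_sensor / pitch_mm                          # Eqn. (4.9)

# 56 mm of travel per 1/200 s (25 mph) and a 1/19,000 s exposure:
print(round(image_displacement_px(56.0, 1.0 / 19000.0), 2))  # about 0.32
```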


175CHAPTER 5 CONCLUSION Guidelines for the objective assessment of imaging systems are presented in this dissertation for the benefit of agencies th at use imaging systems to conduct periodic monitoring of pavement, traffic, and safety features. The dissertation also describes how the above guidelines can be used to identify the appropria te settings of the imaging systems and the most favorable imaging conditions which would improve the efficiency and reliability of infrastructure monitori ng operations. The guidelines were developed based on the assessment of the Florida Department of Transportation (FDOT) highway evaluation vehicle’s imaging system compone nts; forward-view and side-view digital area-scan cameras as well as the digital line-scan pavement camera. The spatial resolution tests were used to determine the relationships among assessment parameters such as the minimum iden tifiable size of relevant features and the contrast and optical settings such as the fo cal length, exposure time and the distance of view. These findings will enable the user to set the field of view of evaluation cameras depending on their optical characteristics and the evaluation needs. As for the highway feature imaging system of the FDOT highway evaluation vehicle, the optimum aperture settings were determined based on the maximum dynamic range criterion. A key adjustment that influences the color quality of images is the white balance and this study revealed that it can be effectively achieved by the evaluation of color resolution. The

PAGE 196

176 results of the study further indicate the need for repeating white balancing if lighting conditions vary significantly during the imaging operation. In routine monitoring operations, the creation of a number of pres et white balance initi alization files that correspond to different lighting conditions would certainly expedite the evaluation process. Assessment of color resolution was also shown to be helpful in determining the optimum aperture settings and the effectiven ess of filters in upgr ading the quality of images. The study also indicated how the signal-tonoise ratio represents another criterion for the selection of aperture setting and the optimum lighti ng conditions. In addition, the noise effects due to vehicular vibration were quantified in terms of the degree of image displacement as a prelude to development of criteria for overcoming blurring in images. Also, the noise evaluation provides a methodi cal means of assessing the effectiveness of techniques that can be utilized to mitigate the effects of vibration such as rubber shockabsorption casings and reduced exposure ti mes. Useful relationships were also formulated to quantify the effects of exposur e times and vehicle speed on the clarity of images. To quantify the degree of distortion induced by imaging systems and to identify improved optics such as aspherical lenses that can minimize the distortion effects, standard distortion evaluation methods were presented. It is common for some transportation agencies to out-source the imaging operations and other evaluations while othe r agencies adopt in-house operations and evaluations. The image quality assessment gui delines presented in this dissertation will furnish definitive criteria for specification of the resolution expectations of the


Furthermore, in the case of in-house evaluators, these guidelines will provide objective criteria to assess the capabilities and limitations of their imaging systems and the impact of operating conditions, such as illumination, vehicle speed, and road roughness, on the image quality. These criteria can in turn be used to identify the appropriate operating conditions that will mitigate the undesirable effects.

Common imaging issues, such as distortion and lack of sharpness, are generally resolved with improved optics. However, the eventual performance of such newly introduced optical components within a broader imaging system, one that also contains capturing and post-processing software, cannot be predicted from the manufacturers’ specifications alone. In this respect, the protocol laid out in this dissertation for maximizing the efficiency of imaging operations will be useful in assessing the effectiveness of optical sub-systems once they become integral parts of an imaging system. On the other hand, issues concerning appropriate settings will have to be resolved by the users themselves. As such, the assessment methods discussed here furnish convenient tools to achieve white balance and optimum aperture, exposure, and gain settings.

Finally, an experimental investigation was conducted to study the effects of vehicle speed, pavement type, and different lighting conditions on the pavement images captured with the digital line-scan imaging system. It showed that the vehicle speed does not significantly affect the noise in the images, the camera gain settings, or the ability to recognize cracks in a pavement. Moreover, the research findings show that conditions producing a relatively higher SNR in fact improve the accuracy of crack evaluation.


The study also shows that the lighting system introduces a significant level of noise into the images while reducing the gain on both asphalt and concrete pavements. Crack measurements are seen to have a higher variability in surveys with the pavement lighting system turned off, due to the higher uncertainty in crack recognition. Based on the research findings, it can be concluded that for hairline cracks with a thickness of less than 5 mm, the error in recognition ability is relatively high, sometimes exceeding 100%. On the other hand, for features larger than 10 mm in width or length, the above error is less than 9.6%.

The frontier of imaging technology lies in the development of software that can accurately classify and quantify pavement distress on a real-time basis. Such automation efforts would indeed be boosted by the knowledge gained from this research in general, and by the adoption of the presented assessment techniques in particular, since they provide the tools to determine the tonal and spatial resolution limitations of imaging systems and the means of enhancing accurate imaging capabilities.


REFERENCES

Askey, P. (2003) Artifacts.

Chastain, S. (2004) Graphic Software – Histogram.

Criminal Justice Information Services Division of Federal Bureau of Investigation (1999) Electronic Fingerprint Transmission Specification. Department of Justice.

Dallas, W. (2004) Image Quality of Display Devices.

Davies, A., Fennessy, P. (2001) Digital Imaging for Photographers. Focal Press, Oxford.

Deiries, S. (2004) How CCD Quantum Efficiency is Computed.

DeMello, C., Jacovski, V., McClellan, G. (2003) Moving Theory into Practice – Digital Imaging Tutorial. <http://www.library.cornell.edu/preservation/tutorial/presentation/table7-1.html>.

Dougan, C., Hudson, J., Bowers, D. (2001) Florida Department of Transportation – Videolog Program. Connecticut Transportation Institute.

Dulis, P. (2004) Choosing Right Scanner – Dynamic Range.

Edmund Industrial Optics (2003) 2003 Optics and Optical Instruments Catalog. Edmund Industrial Optics.

Edmund Industrial Optics (2004) Large Grayscale Target.

Efford, N. (2000) Digital Image Processing – A Practical Introduction Using Java. Pearson Education Limited, Essex, England.


Electus Distribution (2004) Reference Data Sheet – Understanding and Using CCD Cameras.

Fisher, R., Perkins, S., Walker, A., Wolfart, E. (2000) Image Synthesis – Noise Generation.

Franziska S. R., James M. R. (1999) Digital Imaging for Photographic Collections. Image Permanence Institute and Rochester Institute of Technology.

Gunaratne, M., Mraz, A., Sokolic, I. (2003) Study of the Feasibility of Video Logging with Pavement Condition Evaluation. Department of Civil and Environmental Engineering, University of South Florida.

Gunaratne, M., Mraz, A., Sokolic, I., Nazef, A. (2002) Development of Florida’s Comprehensive Pavement Evaluation Vehicle. Washington, DC: TRB 2003 Annual Meeting.

Hasler, D., Susstrunk, S. (2003) Colour Handling in Panoramic Photography. Audiovisual Communication Laboratory, Swiss Federal Institute of Technology, Lausanne, Switzerland.

Howell, S. (2000) Handbook of CCD Astronomy. Cambridge University Press.

ICC (International Cybernetics Corporation) (2000) Imaging Vehicle Operation Manual. International Cybernetics Corporation.

ITU-T (1992) Recommendation T.81. The International Telegraph and Telephone Consultative Committee.

Jasc Software (2004) Paint Shop Pro 8. <http://www.jasc.com/products/paintshoppro/>.

Koren, N. (2003) Understanding Image Sharpness, Part 5: Lens Testing.

Lamberts, R. L. (2004) MTF Engineering Notes: Use of Sinusoidal Test Pattern for MTF Evaluation.

Lee, J-S. (1981) Refined Filtering of Image Noise Using Local Statistics. Computer Graphics and Image Processing 15, pp. 380-389.


Lindbloom, B. J. (2003) XYZ to RGB.

Mai, L. (2004) Introduction to Computer Vision and Image Processing. <http://www.netnam.vn/unescocourse/computervision/computer.htm>.

McDowell, D. (1999) Standards Update. IS&T Reporter, Vol. 14(4), pp. 1-2.

Morrison, M. (2004) Yohkoh Analysis Guide: Saturation Effect.

Mraz, A., Gunaratne, M., Nazef, A. (2004) Guidelines for Evaluation of Digital Imaging Systems Used in Highway Applications. Accepted by the ASCE Journal of Transportation Engineering, July 2004.

Nill, N. R. (2001) Conversion Between Sine Wave and Square Wave Spatial Frequency Response of an Imaging System. The Mitre Corporation.

Norton, P. (2004) How Does the Peltier Cooler Work.

Overturf, B. (2001) Development of a Photolog/PLV Application for Connecticut Transit – Final Report. Connecticut Department of Transportation in cooperation with the U.S. Department of Transportation, Federal Highway Administration.

Owens, R. (1997) Image Enhancement – Spatial Domain Methods.

Rasband, W. (2004) ImageJ – Documentation.

Reichmann, M. (2003) Understanding MTF – The Modulation Transfer Function Explained.

Roper Scientific (2004) Encyclopedia: Full Well Capacity.

Roper Scientific (2003) Encyclopedia: Signal-to-Noise Ratio.

Rudometov, V., Rudometov, E. (2004) Peltier Coolers.


SHRP (Strategic Highway Research Program) (1992) Distress Identification Manual for Long-Term Performance Program. U.S. Department of Transportation, Federal Highway Administration.

Snyder, D., Helstrom, C., White, R. (1994) Compensation in Read-out Noise in HST Image Restoration. Space Telescope Science Institute.

Sokolic, I., Gunaratne, M., Mraz, A., Nazef, A. (2004) Evaluation of Pavement Distress Imaging Systems. Annual TRB Meeting, Washington, D.C.

Sonka, M., Hlavac, V., Boyle, R. (1999) Image Processing, Analysis, and Machine Vision. PWS Publishing.

Specht, F. (2004) Webcam Astronomy.

Stein, J. (2004) CCD Parameter.

Stoffels, S. M. (2003) Videologging QA – Development, Implementation and Analysis. Final project report submitted to the Pennsylvania Dept. of Transportation.

Trudelle-Schwarz, R. (2004) Converting Color Image to Grayscale.

Wang, K. C. (2000) Design and Implementation of Automated System for Pavement Surface Distress Survey. Journal of Infrastructure Systems, March 2000, pp. 24-30.

Young, I., Gerbrands, J., Van Vliet, L. (1998) Image Processing Fundamentals.


BIBLIOGRAPHY

American Society for Testing and Materials (2000) Standard Guide for Classification of Automated Pavement Condition Survey Equipment. Designation: E 1656 – 94 (Reapproved 2000). Philadelphia.

Bright, Newbury & Steel (1998) Visibility of Objects in Computer Simulations of Noisy Micrographs. Journal of Microscopy, Volume 189, Issue 1.

International Standard Organization (2000) Photography – Electronic Still-Picture Cameras – Resolution Measurements. ISO copyright office, Geneva, Switzerland.

Project PCS/Law Engineering (1993) Distress Interpretation from 35mm Film for the LTPP Experiments. Washington, DC: SHRP-P-642, National Research Council.

Strategic Highway Research Program (1993) Distress Identification Manual for the Long-Term Pavement Performance. Washington, DC: SHRP-P-338, National Research Council.

Technical Advisory Service for Images (2002) Colour and Resolution Targets. TASI.


APPENDICES


Appendix A: Sample Report for MTF Evaluation Using PhotoES_AM Plugin for ImageJ

MTF from HOR or VERT visual resolution bars (6-20)
-----------------------------------------------------------
Vert. size of the sensor:                     5.32 mm
Horiz. size of the sensor:                    7.18 mm
Vert. number of pixels on sensor:             1546.0 pixels
Horiz. number of pixels on sensor:            2048.0 pixels
Nyquist Frequency:                            143.10 lp/mm
Pixel Size/Spacing (ideal square):            3.5 micrometers
Scale (frequency):                            12 (112.8 lp/mm)
Computed frequency:                           115.2 lp/mm
Error in frequency:                           2.1 %
The ave luminance for the black areas:        29.0
The ave luminance for the white areas:        155.0
Low frequency (black-white) contrast C(0):    0.70
Contrast at spatial frequency C(112.8 lp/mm): 0.01
MTF(112.8 lp/mm):                             4.1 %
CTF(112.8 lp/mm):                             5.3 %
---------------------------------------

MTF from HOR or VERT visual resolution bars (6-20)
-----------------------------------------------------------
Measurement No:                               60
Starting point X:                             214
Starting point Y:                             15
Length of LINE:                               0.96
** ERROR: System could not recognize black white black white ... black pattern **
** WARNING: System could not recognize whole pattern (before column 39.0) **
** ADVICE: Please try to use lower frequency **
Vert. size of the sensor:                     5.32 mm
Horiz. size of the sensor:                    7.18 mm
Vert. number of pixels on sensor:             1546.0 pixels
Horiz. number of pixels on sensor:            2048.0 pixels
Nyquist Frequency:                            143.10 lp/mm
Pixel Size/Spacing (ideal square):            3.5 micrometers
Scale (frequency):                            13 (122.2 lp/mm)
Computed frequency:                           115.2 lp/mm
Error in frequency:                           5.7 %
The ave luminance for the black areas:        24.0
The ave luminance for the white areas:        156.0
Low frequency (black-white) contrast C(0):    0.71
Contrast at spatial frequency C(122.2 lp/mm): 0.00
MTF(122.2 lp/mm):                             0.9 %
CTF(122.2 lp/mm):                             0.6 %
---------------------------------------
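Note: the Nyquist frequency and the low-frequency contrast C(0) in the report follow directly from the pixel pitch and the average black and white luminances. The sketch below reproduces these two standard formulas; it is illustrative only and is not the PhotoES_AM source code, so small differences from the reported values are due to rounding.

/**
 * Minimal sketch of two quantities in the MTF report above, using the
 * standard textbook formulas. Not the PhotoES_AM plugin source; the
 * method names are illustrative.
 */
public class MtfBasics {

    /** Nyquist frequency in line pairs per mm for a given pixel pitch. */
    static double nyquistLpPerMm(double pixelPitchMicrons) {
        double pitchMm = pixelPitchMicrons / 1000.0;
        return 1.0 / (2.0 * pitchMm); // one line pair spans two pixels
    }

    /** Michelson contrast from average white and black luminances. */
    static double contrast(double white, double black) {
        return (white - black) / (white + black);
    }

    public static void main(String[] args) {
        // 3.5-micron pixels -> ~142.9 lp/mm, close to the reported 143.10 lp/mm
        System.out.printf("Nyquist frequency: %.1f lp/mm%n", nyquistLpPerMm(3.5));

        // Black/white luminance means from the first measurement block
        System.out.printf("C(0) = %.2f%n", contrast(155.0, 29.0)); // ~0.68 (reported as 0.70)
    }
}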


Appendix B: Sample Report for SNR Evaluation Using PhotoES_AM Plugin for ImageJ

-----------------------------------------------------------
| Creating File: SNRtesting_compass_BMP_15percentGaussNoise |
-----------------------------------------------------------

SIGNAL TO NOISE RATIO (MacBeth Target)
-----------------------------------------------------------
Measurement No:                      -5
Width:                               11 pixels
Height:                              334 pixels
Node coordinate X:                   542 pixels
Node coordinate Y:                   102 pixels
Average Black value:                 42.0
Average White value:                 130.0
Standard Deviation (for active ROI): 11.432
Pixel Count:                         3674
-----------------------------------------------------------
Black SNR:      7.6 (17.7 dB)
Black SNRarea:  466.4
-----------------------------------------------------------

SIGNAL TO NOISE RATIO (MacBeth Target)
-----------------------------------------------------------
Measurement No:                      -4
Width:                               91 pixels
Height:                              93 pixels
Node coordinate X:                   340 pixels
Node coordinate Y:                   171 pixels
Average Black value:                 42.0
Average White value:                 130.0
Standard Deviation (for active ROI): 11.104
Pixel Count:                         8463
-----------------------------------------------------------
White SNR:      7.9 (17.9 dB)
White SNRarea:  728.7
-----------------------------------------------------------
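Note: the SNR values reported above are consistent with taking the difference between the average white and black patch values as the signal and the standard deviation of the active ROI as the noise, with the decibel figure given by 20 log10(SNR). The sketch below reproduces that arithmetic; it is an inference from the reported numbers, not the plugin’s source code.

/**
 * Minimal sketch of the SNR arithmetic implied by the report above:
 * signal = (mean white patch - mean black patch), noise = std. dev.
 * of the active ROI. Inferred from the reported values; not the
 * PhotoES_AM plugin source.
 */
public class SnrFromPatches {

    static double snr(double whiteMean, double blackMean, double roiStdDev) {
        return (whiteMean - blackMean) / roiStdDev;
    }

    static double toDecibels(double snr) {
        return 20.0 * Math.log10(snr);
    }

    public static void main(String[] args) {
        double black = snr(130.0, 42.0, 11.432); // measurement -5
        double white = snr(130.0, 42.0, 11.104); // measurement -4
        // Report gives 7.6 (17.7 dB) and 7.9 (17.9 dB); small differences are rounding.
        System.out.printf("Black SNR: %.1f (%.1f dB)%n", black, toDecibels(black));
        System.out.printf("White SNR: %.1f (%.1f dB)%n", white, toDecibels(white));
    }
}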


Appendix C: Sample Report for Filtering Technique Using PhotoES_AM Plugin for ImageJ

--------------------------------------------
Measure 15 gray-scale wedge mean int and var
--------------------------------------------
Mean/Var(1):  130.0/9.58
Mean/Var(2):  130.0/9.58
Mean/Var(3):  130.0/9.58
Mean/Var(4):  130.0/9.58
Mean/Var(5):  130.0/9.58
Mean/Var(6):  130.0/9.58
Mean/Var(7):  130.0/9.58
Mean/Var(8):  130.0/9.58
Mean/Var(9):  42.0/9.48
Mean/Var(10): 42.0/9.48
Mean/Var(11): 42.0/9.48
Mean/Var(12): 42.0/9.48
Mean/Var(13): 42.0/9.48
Mean/Var(14): 42.0/9.48
Mean/Var(15): 42.0/9.48

------------------------------------------------------------
Filtering method using local statistics and targets from ROI
------------------------------------------------------------
Measurement No:     -2
Width:              418
Height:             411
Node coordinate X:  330
Node coordinate Y:  50
------------------------------------------------------------
7 x 7 matrix with central pixel [130.0] in (343,273):
Average Intensity/Variance for 7x7 matrix: 111.612/1306.409
*** g_mag(45.3) > ThreshMin(16.0) ***
Subset [12] average/variance: 129.89/6.46
Noise Variance [posit= 8 <--> 9 ]: 9.5
*** sigma_noise > v_var_dir7x7 ***
----------------- end of (343,273) -----------------
7 x 7 matrix with central pixel [130.0] in (541,301):
Average Intensity/Variance for 7x7 matrix: 92.632/1938.320
*** g_mag(87.1) > ThreshMin(16.0) ***
Subset [12] average/variance: 130.25/10.41
Noise Variance [posit= <-1 ]: 9.5
Estimated intensity value (Q_ij=0.83/k_ij=0.07): 130.2
----------------- end of (541,301) -----------------
7 x 7 matrix with central pixel [48.0] in (542,301):
Average Intensity/Variance for 7x7 matrix: 79.734/1954.990
*** g_mag(88.3) > ThreshMin(16.0) ***
Subset [0] average/variance: 41.96/17.44
Noise Variance [posit= 15 --> ]: 9.4
Estimated intensity value (Q_ij=7.96/k_ij=0.45): 44.7
----------------- end of (542,301) -----------------
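Note: the estimated intensity values in the report follow the local-statistics filter of Lee (1981): the signal variance Q_ij is the local (subset) variance less the noise variance measured on the grayscale wedge, the gain is k_ij = Q_ij / (Q_ij + noise variance), and the filtered value is the local mean plus k_ij times the deviation of the pixel from that mean. The sketch below shows this per-pixel update only; the plugin’s directional 7x7 subset selection is omitted, and this is not the plugin’s own source.

/**
 * Core per-pixel update of the local-statistics (Lee, 1981) noise
 * filter illustrated in Appendix C. The noise variance comes from the
 * grayscale wedge measurements; the directional 7x7 subset selection
 * performed by the plugin is omitted here.
 */
public class LeeFilterStep {

    /**
     * @param pixel     noisy central pixel value
     * @param localMean mean of the chosen local window (or subset)
     * @param localVar  variance of the chosen local window (or subset)
     * @param noiseVar  noise variance estimated from the grayscale wedge
     * @return filtered estimate of the pixel intensity
     */
    static double filteredValue(double pixel, double localMean,
                                double localVar, double noiseVar) {
        double q = Math.max(0.0, localVar - noiseVar); // signal variance Q_ij
        double k = q / (q + noiseVar);                 // gain k_ij in [0, 1)
        return localMean + k * (pixel - localMean);
    }

    public static void main(String[] args) {
        // Pixel (542,301) from the report: central pixel 48.0, subset mean 41.96,
        // subset variance 17.44, wedge noise variance 9.48 (displayed as 9.4).
        double est = filteredValue(48.0, 41.96, 17.44, 9.48);
        System.out.printf("Estimated intensity: %.1f%n", est); // ~44.7, as reported
    }
}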


About the Author

Alexander Mraz received a Master’s Degree in Civil Engineering from the Slovak University of Technology in 1997. In 2001, he entered the Ph.D. program at the University of South Florida.

While in the Ph.D. program at the University of South Florida, Mr. Mraz was very active in research projects with the Florida Department of Transportation that evaluated the feasibility of an imaging vehicle for pavement surveys. He has also co-authored two publications and made several paper presentations at meetings of the Transportation Research Board.