
Criteria to evaluate the quality of pavement camera systems in automated evaluation vehicles


Material Information

Title:
Criteria to evaluate the quality of pavement camera systems in automated evaluation vehicles
Physical Description:
Book
Language:
English
Creator:
Sokolic, Iván F
Publisher:
University of South Florida
Place of Publication:
Tampa, Fla.
Publication Date:

Subjects

Subjects / Keywords:
automated pavement distress data collection
pavement distress imaging systems
pavement cracking
pavement surface distress
imaging quality
Dissertations, Academic -- Civil Engineering -- Masters -- USF   ( lcsh )
Genre:
government publication (state, provincial, territorial, dependent)   ( marcgt )
bibliography   ( marcgt )
theses   ( marcgt )
non-fiction   ( marcgt )

Notes

Summary:
ABSTRACT: The use of high technology in common daily tasks is reaching all areas of civil engineering, and pavement evaluation is no exception. Current pavement imaging systems can collect images at highway speeds, and with the use of proper software this digital information can be translated into pavement distress reports in which all distresses are classified and presented by their type, extent, severity, and location. However, a number of issues regarding the quality of pavement images and the appropriate conditions for acquiring them remain to be addressed. These issues surfaced during the development of a pavement evaluation vehicle for the Florida Department of Transportation (FDOT). The work involved in this thesis proposes basic criteria to evaluate the performance of pavement imaging systems. Four main parameters, (1) spatial resolution, (2) brightness resolution, (3) optical distortion, and (4) signal to noise ratio, have been identified to assess the quality of a pavement imaging system. First, each of the four parameters is studied in detail in USF's Visual Imaging Laboratory to formulate relevant criteria that can be used to evaluate imaging systems. Then, the developed criteria are used to evaluate the FDOT Survey Vehicle's pavement imaging system. The evaluation speed does not seem to have any significant influence on the spatial resolution, brightness resolution or signal to noise ratio. Little or no optical distortion was observed in the images of the wheel paths. Limitations of the imaging system were also determined in terms of the brightness resolution and noise. The conclusions drawn from this study can be used to (1) enhance pavement imaging systems and (2) set up appropriate guidelines for performing automated distress surveys under varying lighting conditions and speeds to obtain good quality images.
Thesis:
Thesis (M.S.C.E.)--University of South Florida, 2004.
Bibliography:
Includes bibliographical references.
System Details:
System requirements: World Wide Web browser and PDF reader.
System Details:
Mode of access: World Wide Web.
Statement of Responsibility:
by Iván F. Sokolic.
General Note:
Title from PDF of title page.
General Note:
Document formatted into pages; contains 79 pages.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
oclc - 55731068
notis - AJR1154
usfldc doi - E14-SFE0000301
usfldc handle - e14.301
System ID:
SFS0024996:00001




Full Text



PAGE 1

Criteria to Evaluate the Quality of Pavement Camera Systems in Automated Evaluation Vehicles by Iván Sokolic A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Civil Engineering Department of Civil and Environmental Engineering College of Engineering University of South Florida Major Professor: Manjriker Gunaratne, Ph.D. Alaa Ashmawy, Ph.D. Ram Pendyala, Ph.D. Date of Approval: July 17, 2003 Keywords: automated pavement distress data collection, pavement distress imaging systems, pavement surface distress, pavement cracking, imaging quality. Copyright 2004, Iván Sokolic

PAGE 2

DEDICATION This thesis is dedicated to my father Iván, my mother Heidhy (may she rest in peace), my siblings Vlado, Mirko and Karina, my son Franzo, and my lovely wife Sisy, who has shown me her true love and understanding.

PAGE 3

ACKNOWLEDGEMENTS I would like to express my true gratitude to Dr. Manjriker Gunaratne for his continuous guidance, persistent encouragement and true support throughout my Master's program. I am also grateful to Dr. Ram Pendyala and Dr. Alaa Ashmawy for serving on the committee. Special thanks to my colleague Alexander Mraz for his insightful and constructive criticism and for his unconditional support with the computer program routines that helped me in the computer-assisted part of this thesis. Thanks are also due to Mr. Abdenour Nazef from the Florida Department of Transportation for his valuable contribution to this project. I gratefully acknowledge the support of the Florida Department of Transportation for funding the research project 'Study of The Feasibility of Video Logging with Pavement Condition Evaluation' (Grant BC965). The opinions, findings and conclusions expressed here are those of the author and not necessarily those of the supporting agency.

PAGE 4

i

TABLE OF CONTENTS

LIST OF TABLES iv
LIST OF FIGURES v
ABSTRACT vii
CHAPTER 1 INTRODUCTION 1
1.1 Problem Statement 1
1.2 Overview of Image Processing 2
1.2.1 Digital Imaging 2
1.2.2 Image Enhancement 3
1.2.3 Image Analysis 4
1.2.3.1 Segmentation 4
1.2.3.2 Feature Extraction 5
1.2.3.3 Classification 5
1.3 Image Acquisition 5
1.3.1 Scope of the Problem 5
1.3.2 Imaging Hardware 6
1.4 The Modulation Transfer Function 7
1.5 Overview of Pavement Surface Distress 10
1.5.1 Common Pavement Surface Distress Types 10
1.5.2 The SHRP Distress Identification Manual 11
1.5.3 The FDOT Pavement Condition Survey Procedure 13
CHAPTER 2 PRESENT TECHNOLOGIES FOR AUTOMATED PAVEMENT DISTRESS EVALUATION 14
2.1 History 14
2.2 Classification of Automated Pavement Condition Survey Equipment 14
2.3 State-of-the-Art of Automated Distress Evaluation 16
2.3.1 Roadware's Automatic Road Analyzer 17
2.3.2 Samsung Data Collection Vehicle 17
2.3.3 The Fugro ADVantage 18
2.4 The FDOT Survey Vehicle 19
2.4.1 Systems and Components of the FDOT Survey Vehicle 20

PAGE 5

ii

2.4.1.1 Camera Imaging Systems 20
2.4.1.2 The Illumination System 21
2.4.1.3 The Generator 21
2.4.2 The Downward Imaging System 21
2.4.3 The Workstation Program 22
CHAPTER 3 CRITERIA FOR EVALUATING THE QUALITY OF PAVEMENT IMAGING SYSTEMS 24
3.1 Criteria for Evaluation of Imaging Systems 24
3.2 Proposed Quality Evaluation Criteria for Pavement Evaluation Imaging Systems 24
3.2.1 Spatial Resolution 25
3.2.2 Dynamic Range 26
3.2.3 Optical Distortion 28
3.2.4 Signal to Noise Ratio 31
CHAPTER 4 EXPERIMENTAL PROCEDURE 32
4.1 Testing Variables 32
4.1.1 Speed 32
4.1.2 Lighting Condition 33
4.2 Spatial Resolution Test 33
4.2.1 Spatial Resolution Target 33
4.2.2 Procedure and Considerations 34
4.3 Brightness Resolution Test 35
4.3.1 Optical Density Step Target 37
4.3.2 Procedure and Considerations 38
4.4 Optical Distortion Test 38
4.4.1 Optical Distortion Target 39
4.4.2 Procedure and Considerations 39
4.5 Preliminary Testing 40
4.5.1 Spatial Resolution Preliminary Testing Results 40
4.5.2 Optical Density Preliminary Testing Results 43
4.5.3 Optical Distortion Preliminary Testing Results 45
4.5.4 Signal to Noise Ratio Preliminary Testing Results 45
CHAPTER 5 FINDINGS AND CONCLUSIONS 46
5.1 Results from Field Studies 46
5.1.1 Spatial Resolution Testing Results 46
5.1.2 Dynamic Range Resolution Testing Results 48
5.1.3 Optical Distortion Testing Results 50
5.1.4 Signal to Noise Ratio Testing Results 51
5.2 Testing Limitations and Sources of Error 52

PAGE 6

iii

5.3 Conclusions 53
REFERENCES 55
BIBLIOGRAPHY 56
APPENDICES 57
Appendix A: Sample Report for MTF Evaluation Using Photoes_am Plugin for ImageJ 58
Appendix B: Sample Report for Signal to Noise Ratio and Optical Density Evaluation Using Photoes_am Plugin for ImageJ 63

PAGE 7

iv

LIST OF TABLES

Table 1.1 Pixels Needed in an Image for Different Pavement Dimensions and Desired Resolution 6
Table 1.2 Asphalt Concrete Surfaced Pavement Distress Types 11
Table 1.3 Jointed Concrete Surfaced Pavement Distress Types 12
Table 2.1 Equipment Capability Measuring Cracking of Pavement Surfaces (ASTM E 1656, 2000) 15
Table 3.1 Severity Levels for Longitudinal and Transverse Cracking (Distress Identification Manual, 1993) 26
Table 4.1 Spatial Resolution Testing 33
Table 4.2 Optical Density Testing 35
Table 4.3 Density and Reflectance Values for the Optical Density Target 38
Table 4.4 Optical Distortion Testing 39
Table 4.5 Characteristics, Distances and Magnifications for Minolta Digital Camera 42
Table 4.6 MTF Results from Laboratory Testing for Minolta Digital Camera 42
Table 5.1 SNR Testing Results 52

PAGE 8

v

LIST OF FIGURES

Figure 1.1 Digital Image of a Pavement Surface and its Corresponding Brightness Histogram 3
Figure 1.2 Conceptualization of the Modulation Transfer Function 8
Figure 1.3 Comparison of Square and Sine Wave Brightness Functions 9
Figure 1.4 Rutting in Asphalt Pavements 10
Figure 1.5 Alligator Cracking in Asphalt Pavements 10
Figure 2.1 Roadware's ARAN Pavement Evaluation Vehicle 17
Figure 2.2 Samsung SDS America's Data Collection Vehicle 18
Figure 2.3 Fugro ADVantage Pavement Evaluation Vehicle 19
Figure 2.4 The FDOT Survey Vehicle 20
Figure 2.5 Downward Camera of the FDOT Survey Vehicle 22
Figure 3.1 Original Image without Optical Distortion 29
Figure 3.2 Image Showing the Barrel Distortion Effect 29
Figure 3.3 Image Showing the Pincushion Distortion Effect 29
Figure 3.4 Sketch of an Aerial View of Linescan Camera and Distorted Parallel Lines 30
Figure 3.5 Digital Image Showing Optical Linear Distortion 30
Figure 4.1 Simple Lens Equation Illustration 35
Figure 4.2 Iván Sokolic Resolution Target 2003 36
Figure 4.3 Optical Density Step Target 37
Figure 4.4 Iván Sokolic Resolution Target 2003 for Linescan Camera Imaging Systems 41
Figure 4.5 MTF Curves for Different Tests Using a Minolta DiMage5 Digital Camera 42
Figure 4.6 Grayscale Variation in the Density Step Target (room light + additional illumination) 44
Figure 4.7 Grayscale Variation in the Density Step Target (only room light) 44

PAGE 9

vi

Figure 4.8 Evaluation of the Color Response of a Digital Imaging System (Minolta Camera) 45
Figure 5.1 Setup of Targets on the Pavement Surface 46
Figure 5.2 MTF Plots for the FDOT Survey Vehicle at Different Speeds 47
Figure 5.3 Brightness Profile of a Uniform White Target (illumination system used) 48
Figure 5.4 Brightness Profile of a Uniform White Target (illumination system not used) 48
Figure 5.5 Brightness Intensity Values for Optical Density Target 49
Figure 5.6 Negligible Variance in Intensity due to Speed 49
Figure 5.7 Optical Distortion Present in the FDOT Survey Vehicle 50
Figure 5.8 Optical Distortion Testing Scene 50
Figure 5.9 Left Half of a Digital Image of the Optical Distortion Target 51

PAGE 10

vii CRITERIA TO EVALUATE THE QUALITY OF PAVEMENT CAMERA SYSTEMS IN AUTOMATED EVALUATION VEHICLES Iván Sokolic ABSTRACT The use of high technology in common daily tasks is reaching all areas of civil engineering, and pavement evaluation is no exception. Current pavement imaging systems can collect images at highway speeds, and with the use of proper software this digital information can be translated into pavement distress reports in which all distresses are classified and presented by their type, extent, severity, and location. However, a number of issues regarding the quality of pavement images and the appropriate conditions for acquiring them remain to be addressed. These issues surfaced during the development of a pavement evaluation vehicle for the Florida Department of Transportation (FDOT). The work involved in this thesis proposes basic criteria to evaluate the performance of pavement imaging systems. Four main parameters, (1) spatial resolution, (2) brightness resolution, (3) optical distortion, and (4) signal to noise ratio, have been identified to assess the quality of a pavement imaging system. First, each of the four parameters is studied in detail in USF's Visual Imaging Laboratory to formulate relevant criteria that can be used to evaluate imaging systems. Then, the developed criteria are used to evaluate the FDOT Survey Vehicle's pavement imaging system. The evaluation speed

PAGE 11

viii does not seem to have any significant influence on the spatial resolution, brightness resolution or signal to noise ratio. Little or no optical distortion was observed in the images of the wheel paths. Limitations of the imaging system were also determined in terms of the brightness resolution and noise. The conclusions drawn from this study can be used to (1) enhance pavement imaging systems and (2) set up appropriate guidelines for performing automated distress surveys under varying lighting conditions and speeds to obtain good quality images.

PAGE 12

1 CHAPTER 1 INTRODUCTION 1.1 Problem Statement Pavements, both flexible and rigid, have an inherent characteristic in the way they perform during their operational life; they deteriorate due to traffic and environmental factors. The most representative expression of the deterioration is cracking. Cracking is a phenomenon that pavement design and maintenance engineers have to prevent from occurring for a reasonable duration after construction. For asphalt pavements another big consideration that must be taken into account when designing a pavement structure is rutting, a groove caused in the wheelpath due to traffic. The allocation of funding for road maintenance and rehabilitation requires a continuous evaluation of the state of the highway network. Periodic survey of the condition of the pavements reveals how necessary it is to intervene with a major rehabilitation as an overlay or with crack sealing as a maintenance routine. Traditional pavement surveys range from a thorough walking survey of 100% of the pavement surface in which all distress types, severities, and quantities are measured, recorded, and mapped to a windshield survey at normal traffic speed in which the rater assigns the pavement a general category or sufficiency rating without identifying individual distress types. In either case, the inspection of the pavement surface is direct and human cognition is used to categorize and determine the type of distress, severity and quantity of distress present on the pavement surface. Overall, manual surveys are considered labor intensive, slow, expensive, and sometimes unsafe. They also invariably involve a certain degree of human subjectivity. Automated or semi-automated pavement evaluation surveys consist of the use of computer systems to help the survey personnel to acquire, store, process and/or analyze the distress data collected from the pavement surface under study. An ideal ‘fully

PAGE 13

2 automated' pavement condition system should be able to record the surface of the pavement and process that imaging information to objectively determine the pavement's condition following a stipulated distress classification criterion. At this stage no system is found to successfully fulfill the previous description. However, as presented in this thesis, a few pavement evaluation vehicles are on their way to achieving the above-mentioned goal. In order to identify low severity cracking, defined as cracks with a mean width of less than 6 mm according to the Distress Identification Manual (SHRP), imaging systems must satisfy certain spatial resolution requirements. The American Society for Testing and Materials' (ASTM) Standard Guide for Classification of Automated Pavement Condition Survey Equipment (E 1656-94, reapproved 2000) is the only available standard that relates to such equipment. However, it lacks a definitive criterion for evaluating the quality of pavement surface imaging systems. This research proposes a methodology to determine the quality of a pavement imaging system. Using the proposed methodology, the capabilities of any given imaging system can be quantified and evaluated in a manner relevant to automatic distress identification. 1.2 Overview of Image Processing 1.2.1 Digital Imaging A continuous-tone image has various shades that blend without disruptions. A digital image is composed of discrete points of gray tone rather than continuously varying tones. The process of breaking up a continuous-tone image into discrete points is known as sampling, and obtaining the brightness value at each sample is referred to as quantization. A quantized sample is referred to as a pixel or picture element. The quality of a digital image is highly related to its capability to represent the details of the natural scene. These aspects are referred to as image resolution. In digital imaging there are two types of resolution: (1) the spatial resolution, which corresponds to the number of pixels that are used to sample the image, and (2) the brightness resolution, which refers to the number of gray levels that are used to categorize each pixel in grayscale images.
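To make the sampling and quantization steps concrete, the short sketch below converts a synthetic continuous-tone brightness profile into discrete 8-bit pixel values. It is only an illustration of the two operations described above; the pixel pitch and the profile itself are assumed values, not parameters taken from this thesis.

```python
import numpy as np

# A finely sampled stand-in for a continuous-tone brightness profile along the pavement.
x_mm = np.arange(0.0, 10.0, 0.001)                  # position in millimetres
scene = 0.5 + 0.4 * np.sin(2 * np.pi * x_mm)        # brightness in the range [0, 1]

# Sampling: keep one value every 0.5 mm (the assumed spatial resolution).
pixel_pitch_mm = 0.5
step = int(pixel_pitch_mm / 0.001)
samples = scene[::step]

# Quantization: map each sampled brightness to one of 256 gray levels (brightness resolution).
pixels = np.clip(np.round(samples * 255), 0, 255).astype(np.uint8)

print(len(pixels), "pixels;", int(pixels.min()), "to", int(pixels.max()), "gray levels used")
```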

PAGE 14

3 Sampling aliasing is a phenomenon related to the appearance of corrupted sectors in the image when more resolution is demanded than the imaging system is able to reproduce. It happens at high spatial frequencies, with a lower limit known as the Nyquist frequency. Typically, the Nyquist frequency is defined as half the sampling frequency. Therefore, for many (but not all) applications, the Nyquist frequency represents the highest spatial frequency that can be captured without unwanted frequency distortion. 1.2.2 Image Enhancement The brightness histogram is a key concept for understanding how image enhancement can be performed. This tool can help in identifying the satisfactory or unsatisfactory performance of a digital image. Brightness histograms are distribution charts of the gray levels of pixels within the image. The gray level is indicated on the horizontal axis while the vertical axis shows the number of pixels at a specific gray level. For instance, in an 8-bit grayscale digital image, there will be 256 (2^8 = 256) gray levels ranging from 0, corresponding to black, up to 255, corresponding to white. A typical digital image of an asphalt pavement and its corresponding brightness histogram are shown in Figure 1.1. Figure 1.1 Digital Image of a Pavement Surface and its Corresponding Brightness Histogram Contrast is a term that is often used to describe the brightness attributes of an image. Contrast is easy to observe in the brightness histogram. For instance, low contrast

PAGE 15

4 images have histograms with a tightly grouped mound of pixel brightnesses in the gray scale, leaving the rest of the gray levels clear or unoccupied. There are two basic types of digital processing: (1) pixel point processing and (2) pixel group processing. With the first type of processing, the gray level of each pixel is modified in some way, for example by adding or subtracting a constant value to every pixel, known as histogram sliding, or by multiplying or dividing the pixels by a certain value, termed histogram stretching. Both of these processes enhance the contrast of a digital image by redistributing the brightness histogram. In contrast, in pixel group processing the gray value of each pixel is modified taking into account the brightness values of the neighboring pixels. The resulting brightness value of a given pixel is obtained by an operation called convolution, in which the brightness values of the neighboring pixels and the input pixel are weighted using an array of coefficients called the convolution mask. 1.2.3 Image Analysis Operations in image analysis involve measurement and classification of the digital image information. Results from these operations are usually not pictorial. Elements in the image will be quantified, including such things as measurement of size, indicators of shape and descriptions of outlines. Image analysis involves three types of operations: (1) segmentation, (2) feature extraction and (3) classification. 1.2.3.1 Segmentation Segmentation is an operation that isolates or highlights the individual objects within an image. It is performed in three stages: Preprocessing, which is a simplification of an image by removing undesired information from it. Initial object discrimination, which deals with isolating the object of interest from the background and highlighting the edges of objects. Object boundary cleanup, which is basically a clarification (simplification) of the structure of objects.
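The following sketch illustrates the operations just described on a synthetic pavement patch: histogram sliding and stretching (pixel point processing), a 3 x 3 mean-filter convolution (pixel group processing) and a crude threshold-based segmentation. It is a minimal, assumed example of the concepts, not the processing chain used by any of the systems discussed in this thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 8-bit pavement patch: grey background with a dark vertical 'crack'.
patch = rng.normal(120, 5, (100, 100)).clip(0, 255).astype(np.uint8)
patch[:, 50] = 40

# Pixel point processing: histogram sliding (add a constant) and stretching (rescale).
slid = np.clip(patch.astype(int) + 40, 0, 255).astype(np.uint8)
stretched = np.clip((patch.astype(float) - 40.0) * 255.0 / 160.0, 0, 255)

# Pixel group processing: 3x3 mean convolution mask (edges wrap around; acceptable for a sketch).
smoothed = np.zeros_like(stretched)
for di in (-1, 0, 1):
    for dj in (-1, 0, 1):
        smoothed += np.roll(np.roll(stretched, di, axis=0), dj, axis=1)
smoothed /= 9.0

# Crude segmentation: flag pixels well below the background (median) brightness.
mask = smoothed < (np.median(smoothed) - 30)
print("crack-like pixels flagged:", int(mask.sum()))   # roughly the crack column and its neighbours
```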

PAGE 16

5 1.2.3.2 Feature Extraction Once the objects are isolated in the image (segmentation), their relevant features are measured. The kinds of features that are sought include brightness, texture, color, shape and boundary descriptions (the most precise way to measure shape), among others. With these measurements, a comparison against known measures can be executed to classify an object. 1.2.3.3 Classification This operation compares the measurements of the highlighted objects with the measurements of a known object or with a set of identification criteria. By doing so it is possible to determine whether or not the object belongs to a particular category of objects. Generally it involves: Determination of those particular features of the object that are to be used in classifying. Determination of how close to a given criterion these measurements must be. Creation of the particular categories to which the objects will be assigned. 1.3 Image Acquisition In this section the question of "What capabilities are needed in a pavement distress imaging system?" is posed. The answer will depend on what features are to be evaluated in the image, for instance, down to a certain thickness of pavement cracking. Resolution will play a fundamental role in this issue. What matters is the overall resolution of the entire system rather than that of individual components such as the camera's charge-coupled device (CCD) sensor or lens. 1.3.1 Scope of the Problem An initial estimate can be made to get an idea of the amount of information that will be handled in the acquisition process. Assuming that images will cover the entire width of the lane, i.e. 3.66 m (12 ft), that the other dimension of the image is half that width, 1.83 m (6 ft), that the desirable minimum visible crack thickness (image resolution) is 1 mm, and that the pixel pitch is 0.5 mm (two samples per millimeter, following the Nyquist criterion), every pavement

PAGE 17

6 digital image with the described characteristics would require 26,791,200 pixels (26.8 megapixels). Multiple scenarios with varying dimensions of the digital image with respect to the pavement width, the pavement length and the image resolution are presented in Table 1.1. The digital image size is directly related to the number of pixels it contains, as can be seen from Table 1.1. Although the resolution needed to capture a 0.5 mm detail in the pavement images can be achieved with currently available hardware, such an amount of information is virtually impossible to handle and store in real time.

Table 1.1 Pixels Needed in an Image for Different Pavement Dimensions and Desired Resolution
Image length (m) / Image resolution (mm): pixels for image widths of 3.66 m, 1.83 m and 0.91 m
3.66 / 0.5: 214,329,600; 107,164,800; 53,289,600
3.66 / 1.0: 53,582,400; 26,791,200; 13,322,400
3.66 / 2.0: 13,395,600; 6,697,800; 3,330,600
1.83 / 0.5: 107,164,800; 53,582,400; 26,644,800
1.83 / 1.0: 26,791,200; 13,395,600; 6,661,200
1.83 / 2.0: 6,697,800; 3,348,900; 1,665,300
0.91 / 0.5: 53,289,600; 26,644,800; 13,249,600
0.91 / 1.0: 13,322,400; 6,661,200; 3,312,400
0.91 / 2.0: 3,330,600; 1,665,300; 828,100

Therefore, it is seen that it is first critical to define the expectations of the digital imaging system for acquiring surface pavement distress in an acceptable manner. 1.3.2 Imaging Hardware The hardware will play an important role in the imaging system. The hardware for a pavement imaging system will consist of a camera, or cameras, and recording devices. The configuration of the hardware in the evaluation vehicle will depend on the type of camera in use, i.e. area-scan or line-scan. Basically, a digital camera converts a natural scene into digital information by converting light into an electric signal in the CCD sensor. Important characteristics of a camera are its resolution, framing rate, data per frame, number of pixels in the sample size and the shutter speed. Electronic cameras can be framing or line-scan cameras.
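As a cross-check of the arithmetic behind the estimate above and Table 1.1, and of the data rates it implies for a moving camera, the sketch below reproduces the pixel counts (two samples per resolvable detail, as stated in the text) and adds a rough line-rate estimate. The survey speed and the 8-bit pixel depth used in the second part are illustrative assumptions, not specifications from this thesis.

```python
# Pixels needed to resolve a detail of size r (mm): the Nyquist reasoning in the text
# calls for two samples per r, i.e. a pixel pitch of r/2 in both directions.
def pixels_needed(length_m, width_m, resolution_mm):
    pitch_mm = resolution_mm / 2.0
    return round(length_m * 1000 / pitch_mm) * round(width_m * 1000 / pitch_mm)

print(pixels_needed(1.83, 3.66, 1.0))      # 26,791,200 pixels, the estimate quoted above

# Rough data-rate estimate for a linescan camera at survey speed (assumed values).
speed_kmh = 100.0                          # assumed survey speed
pitch_mm = 0.5                             # pixel pitch for a 1 mm resolvable detail
pixels_per_line = round(3.66 * 1000 / pitch_mm)
lines_per_s = (speed_kmh * 1e6 / 3600.0) / pitch_mm
print(f"{lines_per_s:,.0f} lines/s, about {lines_per_s * pixels_per_line / 1e6:,.0f} MB/s at 8 bits per pixel")
```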

PAGE 18

7 Video cameras and still cameras are both framing cameras. A framing camera captures an image of a certain size and resolution. The distance between the object and the lens and the focal length of the lens will determine the size of the area captured. The larger the area covered, the lower the resolution of the image will be, because each pixel will cover a greater area. In contrast to digital still cameras, video cameras have a smaller framing rate, which is the number of frame shots that the device can capture per second. Linescan cameras, in contrast to area-scan cameras, capture one strip (one pixel wide) at a time. Linescan cameras are better suited for capturing moving objects, which displace with respect to the object-camera reference system, for example the camera moving linearly along the pavement. Hence motion of either the camera or the object must occur perpendicular to this strip for an entire image to be captured. These types of cameras do not have a framing rate but instead have a data acquiring rate, which is an indication of the number of lines they can image in one second. In this research, the two available types of cameras used for pavement distress imaging, framing and linescan cameras, will be analyzed. The output information in the survey process has to be stored in data storage devices, and the amount of information and the rate at which it can be stored will depend on these devices. Lighting will certainly play an important role in pavement distress imaging systems. Considering that most asphalt pavements are dark, the brightness of the captured image will greatly depend on the characteristics and magnitude of the light applied to the pavement surface. All these individual hardware components operating as a system will determine the quality of the captured images. 1.4 The Modulation Transfer Function The Modulation Transfer Function (MTF) is one scientific means of evaluating the spatial resolution performance of an imaging system, or of components of that system. The resolution is a measure of how well spatial details are preserved. Two characteristics of an image that need to be measured in order to define MTF are: (1) spatial detail and (2) preservation. It is these fundamental metrics of detail and preservation that define MTF.

PAGE 19

8 "Detail and preservation metrics are not single measurements, but rather a continuum of measurements, which is why a functional curve quantifying them, i.e., the MTF, can be plotted" (Williams 2001). For every frequency at which specific spatial details are to be shown, there will be a corresponding response of the imaging system, indicating how well the output preserves the input. An entire set of point pairs is then plotted, with a measure of spatial detail or "frequency" on the horizontal axis and the extent of preservation of that detail on the vertical axis, to compose the MTF curve as illustrated in Figure 1.2. Figure 1.2 Conceptualization of the Modulation Transfer Function (extent of preservation, in percent, plotted against spatial frequency in cycles per unit of length) One of the attractions of the MTF is that it provides a continuum of unique rankings by which to judge a device's resolution performance. Spatial detail can be directly related to the spatial frequency content of a given feature. A line pair (dark-line/white-space pair) is universally referred to, in image processing, as one cycle. The higher the spatial frequency, the greater the detail, the greater the number of cycles per unit distance, and the more closely spaced the lines become. Square wave signals, which have abrupt changes in brightness value from one extreme (black) to the other (white) within one cycle, are easy to produce. However, since these are not considered as building blocks, it is technically inappropriate to employ them as reference signals in determining MTF curves. On the other hand, sine-wave signals are more appropriate for such an application. Figure 1.3 shows these two signals and their corresponding plots.

PAGE 20

9 Figure 1.3 Comparison of Square and Sine Wave Brightness Functions Spatial frequencies are always plotted as the independent variable on the x-axis of the MTF plot. To complete the MTF metric, a measure of how well each sine-wave frequency is preserved after being imaged, i.e., transferred through an imaging device, is required. This measure, called modulation transfer, is plotted on the y-axis for each available frequency to obtain the MTF. The modulation of any signal is defined in terms of the maximum light intensity (or brightness) value, Imax, and the minimum light intensity value, Imin. Modulation is defined as the ratio of their difference to their sum. Equation (1.1) expresses the described formulation.

Modulation: M = (I_max - I_min) / (I_max + I_min)   (1.1)

The goal in determining the MTF is to measure how well the input modulation is preserved after being imaged. Hence the modulation transfer is quantified by comparing the modulation of the output sine wave of the image to the input sine-wave modulation of the target.

Modulation Transfer = Output modulation / Input modulation = Mo / Mi   (1.2)
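As a small illustration of how Equations (1.1) and (1.2) could be applied to measured brightness profiles, consider the sketch below; the sine-wave target and the attenuated image profile are hypothetical, chosen only to show the computation.

```python
import numpy as np

def modulation(profile):
    """Modulation M = (Imax - Imin) / (Imax + Imin) of a brightness profile (Eq. 1.1)."""
    i_max, i_min = float(np.max(profile)), float(np.min(profile))
    return (i_max - i_min) / (i_max + i_min)

def modulation_transfer(target_profile, image_profile):
    """Ratio of output to input modulation at one spatial frequency (Eq. 1.2)."""
    return modulation(image_profile) / modulation(target_profile)

# Hypothetical example: a sine-wave target imaged with reduced contrast.
x = np.linspace(0, 2 * np.pi, 200)
target = 128 + 100 * np.sin(5 * x)     # input sine-wave brightness
image = 128 + 55 * np.sin(5 * x)       # same frequency, attenuated by the system
print(round(modulation_transfer(target, image), 2))   # -> 0.55
```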

PAGE 21

10 Then, the ratio Mo/Mi, expressed in Equation (1.2), is plotted against each spatial frequency, yielding the Modulation Transfer Function, or MTF (Figure 1.2). 1.5 Overview of Pavement Surface Distress 1.5.1 Common Pavement Surface Distress Types According to the SHRP Distress Identification Manual, the classification of pavement distresses is grouped depending on the type of pavement analyzed. Hence there are three major groups of distress: asphalt concrete surfaced pavements; jointed (plain and reinforced) Portland cement concrete surfaced pavements; and continuously reinforced concrete surfaced pavements. The last category will not be considered since it represents a very low percentage of the pavements nationwide. Table 1.2 and Table 1.3 illustrate the pavement distresses corresponding to asphalt and rigid pavements respectively, while Figure 1.4 and Figure 1.5 show two of the most common distress types on asphalt pavements: rutting and fatigue (alligator) cracking, respectively. Figure 1.4 Rutting in Asphalt Pavements Figure 1.5 Alligator Cracking in Asphalt Pavements

PAGE 22

11 1.5.2 The SHRP Distress Identification Manual In 1987, the Strategic Highway Research Program (SHRP) initiated the longest (20 years) and most comprehensive pavement performance test in history, the Long Term Pavement Performance (LTPP) program.

Table 1.2 Asphalt Concrete Surfaced Pavement Distress Types
(Distress Type | Unit of Measure | Defined Severity Levels?)
A. Cracking
1. Fatigue Cracking | m² | yes
2. Block Cracking | m² | yes
3. Edge Cracking | m | yes
4a. Wheel Path Longitudinal Cracking | m | yes
4b. Non-Wheel Path Longitudinal Cracking | m | yes
5. Reflection Cracking at Joints
   Transverse Reflection Cracking | Number, m | yes
   Longitudinal Reflection Cracking | m | yes
6. Transverse Cracking | Number, m | yes
B. Patching and Potholes
7. Patch / Patch Deterioration | Number, m | yes
8. Potholes | Number, m | yes
C. Surface Deterioration
9. Rutting | mm | no
10. Shoving | Number, m | no
D. Surface Defects
11. Bleeding | m² | yes
12. Polished Aggregate | m² | no
13. Raveling | m² | yes
E. Miscellaneous Distresses
14. Lane-to-Shoulder Dropoff | mm | no
15. Water Bleeding and Pumping | Number, m | no

US highway agencies, collaborating with 15 other countries, would collect data on pavement condition, climate, and traffic volumes and loads from a large sample of pavement test sections. The SHRP Distress Identification Manual designated for the LTPP was developed to provide a consistent basis for collecting distress data for the LTPP program. It will allow states and other agencies to provide accurate, uniform, and comparable information on the condition of LTPP test sections.

PAGE 23

12 Table 1.3 Jointed Concrete Surfaced Pavement Distress Types
(Distress Type | Unit of Measure | Defined Severity Levels?)
A. Cracking
1. Corner Breaks | Number | yes
2. Durability Cracking ("D" Cracking) | Number, m² | yes
3. Longitudinal Cracking | m | yes
4. Transverse Cracking | Number, m | yes
B. Joint Deficiencies
5a. Transverse Joint Seal Damage | Number | yes
5b. Longitudinal Joint Seal Damage | Number, m | no
6. Spalling of Longitudinal Joints | m | yes
7. Spalling of Transverse Joints | Number, m | yes
C. Surface Defects
8a. Map Cracking | Number, m² | no
8b. Scaling | Number, m² | no
9. Polished Aggregate | m² | no
10. Popouts | Number, m | no
D. Miscellaneous Distresses
11. Blowups | Number | no
12. Faulting of Transverse Joints and Cracks | mm | no
13. Lane-to-Shoulder Dropoff | mm | no
14. Lane-to-Shoulder Separation | mm | no
15. Patch / Patch Deterioration | Number, m² | yes
16. Water Bleeding and Pumping | Number, m | no

Although developed as a tool for the LTPP program, the SHRP Distress Identification Manual has much broader applications. It provides a common basis for evaluating the pavement distresses mentioned above, such as cracks, potholes, rutting, spalling, etc. As a "distress dictionary," the manual will improve inter- and intra-agency communication and lead to more uniform evaluations of pavement performance. Methods for measuring the extent of distress and assigning severity levels are provided in the manual. The document also describes how to conduct the distress survey, from the traffic control stage to distress evaluation. This manual will be the reference base for the field testing phase of this research. Distress evaluation will follow the guidelines proposed in the Distress Identification Manual. It was chosen because of its wide acceptance in most states and this author's familiarity with it. However, the author developed his own formats for survey data collection.

PAGE 24

13 1.5.3 The FDOT Pavement Condition Survey Procedure This document describes the procedures for conducting visual, mechanical and automated condition evaluations of Florida's highway pavement network. The guidelines contained in this handbook provide tools to evaluate the surface distress and determine the ride quality of a pavement. There are two separate versions of this handbook, designated for flexible and rigid pavements. The Flexible Pavement Condition Survey Manual has been developed as a reference to be used by personnel responsible for conducting distress surveys on asphalt pavements. Features evaluated in flexible pavement surveys include: Riding Quality, Class IB Cracking, Class II Cracking, Class III Cracking, Manual Rut Depth, Profiler Rut Depth, Patching and Raveling. On the other hand, the Rigid Pavement Condition Survey Manual enables one to evaluate: Riding Quality, Surface Deterioration, Spalling, Patching, Transverse Cracking, Longitudinal Cracking, Corner Cracking, Shattered Slabs, Faulting, Pumping and Joint Condition. Although the field testing phase of this thesis will not require these manuals as reference methodologies, they are mentioned here because the evaluation vehicle used in this research project belongs to the Florida Department of Transportation (FDOT).

PAGE 25

14 CHAPTER 2 PRESENT TECHNOLOGIES FOR AUTOMATED PAVEMENT DISTRESS EVALUATION 2.1 History Interest in evaluating the condition of pavements started in the 1950s and 1960s during the conduct of the AASHO Road Test. The serviceability concept came into being, with cracking, patching, rutting, and roughness continuously evaluated in order to judge the performance of the test pavements. Hence the use of automated pavement condition survey systems has been a key goal of highway managers and also one direction of continued research efforts for related technology companies. In addition, with the advent of the Strategic Highway Research Program's LTPP studies, the need for permanent, high quality pavement distress records arose. The first system, completed in 1970, used photogrammetry principles to obtain a continuous high resolution 35 mm strip film of the pavement's surface at highway speeds. The second system, completed in 1975, used 35 mm film technology combined with photogrammetry principles and computer digitizing technology to obtain a transverse profile of the pavement's surface with a high level of accuracy. 2.2 Classification of Automated Pavement Condition Survey Equipment The only available standard for such classification was developed by the American Society for Testing and Materials; it is the Standard Guide for Classification of Automated Pavement Condition Survey Equipment (ASTM Designation: E 1656), originally created in 1994 and reapproved in 2000. The above guide illustrates a methodology for the classification of pavement condition survey equipment in terms of its capability of measuring longitudinal profile, transverse profile or cracking of pavement surfaces while operating at or near

PAGE 26

15 traffic speeds. However, the standard does not address the processing of measured data. In its section on Equipment for Measuring Cracking of Pavement Surfaces, the ASTM standard E 1656 states that the equipment capability depends on the stationary repeatability precision with which crack widths can be measured, the transverse sampling interval and the longitudinal sampling interval. Table 2.1 shows the classes in which the equipment can be classified in terms of its capabilities.

Table 2.1 Equipment Capability Measuring Cracking of Pavement Surfaces (ASTM E 1656, 2000)
Measured Attribute: Code C | Cracking of Pavement Surface
Crack Width Stationary Repeatability Precision:
  Code 1 | Less than or equal to 0.25 mm (0.01 in)
  Code 2 | Greater than 0.25 mm to 0.5 mm (0.01 in to 0.02 in)
  Code 3 | Greater than 0.5 mm to 1 mm (0.02 in to 0.04 in)
  Code 4 | Greater than 1 mm to 3 mm (0.04 in to 0.12 in)
  Code 5 | Greater than 3 mm (0.12 in)
Longitudinal Sampling Interval:
  Code 1 | Less than or equal to 0.25 mm (0.01 in)
  Code 2 | Greater than 0.25 mm to 0.5 mm (0.01 in to 0.02 in)
  Code 3 | Greater than 0.5 mm to 1 mm (0.02 in to 0.04 in)
  Code 4 | Greater than 1 mm to 3 mm (0.04 in to 0.12 in)
  Code 5 | Greater than 3 mm (0.12 in)
Transverse Sampling Interval:
  Code 1 | Less than or equal to 0.25 mm (0.01 in)
  Code 2 | Greater than 0.25 mm to 0.5 mm (0.01 in to 0.02 in)
  Code 3 | Greater than 0.5 mm to 1 mm (0.02 in to 0.04 in)
  Code 4 | Greater than 1 mm to 3 mm (0.04 in to 0.12 in)
  Code 5 | Greater than 3 mm (0.12 in)
Transverse Coverage Width:
  Code 1 | Greater than 3.7 m (12 ft)
  Code 2 | Greater than 2.7 m to 3.7 m (9 ft to 12 ft)
  Code 3 | Greater than 1.8 m to 2.7 m (6 ft to 9 ft)
  Code 4 | Less than or equal to 1.8 m (6 ft)

As an example, if the pavement condition equipment is able to measure cracking with a crack width stationary repeatability precision of 2 mm (Code 4), a

PAGE 27

16 longitudinal sampling of 1 mm (Code 3), a transverse sampling of 1 mm (Code 3) and a transverse coverage of 4.15 m (Code 1), it is classified as a Code C4331 unit. The above standard does not dictate any specification on the performance of an imaging system. Furthermore, the effect of the lighting conditions is not addressed. 2.3 State-of-the-Art of Automated Distress Evaluation Human observation is still the most widely used means to inspect and evaluate pavements. Such evaluations are known as manual surveys and involve a high degree of subjectivity, hazard and low production rates, and in addition they are extremely labor intensive. An ideal automated distress detection and recognition system must be able to determine all types of surface distresses at any severity under any collection speed and weather conditions. Such equipment must be affordable and easy to operate. Technology has evolved tremendously in recent years, and innovations in computer hardware and image recognition techniques have provided new alternatives for automated distress evaluation surveys in a cost-effective way. However, despite the performance improvements of newer generation equipment over the older systems, problems still remain in the areas of costs of implementation, processing speed, and accuracy (Wang, 1999). At present, the Florida Department of Transportation (FDOT) is exploring this new technology with a multi-function survey vehicle, which is used to collect images of the right of way and the pavement surface, utilizing framing and linescan cameras, respectively. The vehicle has the capability to collect other types of data, such as roughness and rutting, as well. Once the survey is completed, the images can be analyzed within the comfort of the office. The FDOT pavement survey vehicle, which does not have image processing capability, will be described in detail later in this chapter. Prior to that, three different, currently well-known vehicles with such image processing capabilities will be described.
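Returning to the ASTM E 1656 classification example above, the coding scheme of Table 2.1 can be sketched as a small helper; the thresholds follow the table, while the function itself is merely an illustrative convenience and is not part of the standard.

```python
# Illustrative sketch of the ASTM E 1656 coding scheme summarized in Table 2.1.
def e1656_code(precision_mm, long_interval_mm, trans_interval_mm, coverage_m):
    def size_code(value_mm):                      # codes 1-5 for the mm-based characteristics
        for code, limit in enumerate((0.25, 0.5, 1.0, 3.0), start=1):
            if value_mm <= limit:
                return code
        return 5

    def coverage_code(width_m):                   # codes 1-4 for transverse coverage width
        if width_m > 3.7:
            return 1
        if width_m > 2.7:
            return 2
        if width_m > 1.8:
            return 3
        return 4

    return "C{}{}{}{}".format(size_code(precision_mm), size_code(long_interval_mm),
                              size_code(trans_interval_mm), coverage_code(coverage_m))

print(e1656_code(2.0, 1.0, 1.0, 4.15))   # -> 'C4331', the example given in the text
```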

PAGE 28

17 2.3.1 Roadware's Automatic Road Analyzer Roadware's Automatic Road Analyzer (ARAN) is able to collect a wide variety of pavement information at highway speeds, such as longitudinal profile/roughness (IRI), transverse profile/rutting, grade, cross-slope, pavement condition or distress, panoramic right-of-way images, pavement images and feature location. WiseCrax is a specific software package that can be used with the ARAN. This program is claimed to detect cracks as small as three millimeters automatically. High speed cameras on retractable booms record high contrast pavement images of 1.5 m x 4 m (4.9 ft x 13 ft) at variable highway speeds up to 80 km/h (50 mph). Images are recorded as a continuous series on a non-overlapping basis. The addition of synchronized strobe lights is intended to eliminate shadows from trees, bridges, tunnels, and other overhead objects. Images are processed off-line overnight at the office workstation by a unique open architecture process using advanced image recognition software. A sketch of Roadware's ARAN is shown in Figure 2.1. Figure 2.1 Roadware's ARAN Pavement Evaluation Vehicle 2.3.2 Samsung Data Collection Vehicle The Samsung Data Collection Vehicle provides real-time pavement image acquisition and inventory collection equipment that is capable of acquiring pavement images and location data at user-defined distance intervals. Advanced digital

PAGE 29

18 progressive area scan camera technology is used to provide objective data about pavement distress conditions. Figure 2.2 shows the Samsung data collection vehicle. The digital progressive area scan camera is mounted on the back of the vehicle with its line of sight normal to the pavement surface. It collects pavement images continuously at evenly spaced intervals. The collected pavement images are shown in real time on the on-board PC monitor and recorded to CD-ROM. The image resolution is 758 x 580 pixels, and each image covers a 3.7 m by 2.8 m (12 ft by 9.2 ft) pavement area. The size of each pixel is therefore approximately 5 mm (0.19 inch) in width by 5 mm (0.19 inch) in length. Figure 2.2 Samsung SDS America's Data Collection Vehicle 2.3.3 The Fugro ADVantage The Fugro ADVantage gathers high-resolution digital images (1300 x 1024 pixels or 2048 x 1024 pixels) of the pavement using a system of strobe lights synchronized with the shooting of four digital cameras in order to create a composed image of a section of the pavement. The vehicle is capable of covering one hundred percent (100%) of the pavement surface at highway speeds over 100 km/h (60 mph) on lanes up to 4.25 m (14 feet) in width. Distress classification and rating criteria can be incorporated in the system in order to produce distress indices based on the given input. Such a classification could be, for instance, based on the AASHTO protocol. All data acquisition and processing is conducted in real time. Cracks along the pavement are converted to pixels and identified using the crack identification

PAGE 30

19 software. The Fugro ADVantage is powered by technologies from Waylink Systems Corporation. Figure 2.3 shows an image of the Fugro ADVantage pavement evaluation vehicle. Figure 2.3 Fugro ADVantage Pavement Evaluation Vehicle 2.4 The FDOT Survey Vehicle International Cybernetics Corporation (ICC) in Largo, Florida, manufactured the FDOT Survey Vehicle. This Digital Image Data Collection System consists of three different camera systems intended to collect Frontview, Sideview and Downward digital information from pavements. The vehicle is also equipped with a DGPS and an IMU unit, both capable of delivering high accuracy location information. Furthermore, the front bumper has three lasers that acquire longitudinal profiling (IRI) data.

PAGE 31

20 Figure 2.4 The FDOT Survey Vehicle 2.4.1 Systems and Components of the FDOT Survey Vehicle The FDOT surveying van contains four computers. They are: Downward-view camera computer Forward-view camera computer Sideview camera computer DOS Mobile Data Recorder (MDR) computer Computers associated with the downward camera, the forward-view camera, and the sideview camera computer collect data from the relevant digital cameras. All the above computers use Intel Pentium IV processors with 512 MB of system memory and are loaded with Microsoft Windows 2000. A line scan camera computer performs all of the processes related to the pavement camera. This computer contains a special encoder board that controls the timing of the pavement camera triggering and a capturing card that controls capture of the videologs. 2.4.1.1 Camera Imaging Systems Three camera systems are mounted outside the vehicle. The image capture system also utilizes software operating under Windows 2000. The system receives

PAGE 32

21 commands to capture images from the Mobile Data Recorder (MDR) computer and sends image catalog information back to the MDR data collection computer to correlate with the other data. The system also provides the ability to review the images on an LCD display on a real-time basis or later in the office with viewing software. The subsystem is designed to withstand shock, vibrations and the environmental impacts that a vehicle traveling up to 120 km/h (75 mph) is usually subjected to. The system operates on 115 VAC, requiring less than 300 Watts. The images are captured on a high-speed 24-bit color PCI imaging PCB and displayed on a 15" active-matrix flat panel display. The subsystem contains a fixed 80 GB hard disk for the Windows operating system and software and two 80 GB removable hard disks for data storage. A 3.5" floppy disk, CD-ROM reader/writer, keyboard and mouse are provided for software loading and data preparation. 2.4.1.2 The Illumination System The vehicle possesses a lighting system composed of 10 lamps at 150 watts each with polished reflectors, which is used to illuminate the road. This artificial lighting is used to ensure that the downward camera acquires good quality images of the pavement within a very short period of time. 2.4.1.3 The Generator The AuraGen G5000 is a 5-kilowatt, maintenance-free generator mounted in the motor compartment of the vehicle that produces 60 Hz AC power. The electrical power generated by this system is adequate to meet all the needs of the vehicle. The AuraGen supplies 400 volts, which is converted to 120 volts AC with the aid of a computer. 2.4.2 The Downward Imaging System The main characteristics of the FDOT survey vehicle's downward camera system are listed below. Figure 2.5 shows a picture of the downward linescan camera installed on the FDOT Survey vehicle.
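A quick arithmetic check of the figures just quoted shows why the 5 kW generator is described as adequate; the sketch below simply tallies the loads stated in the text (any loads not mentioned there are ignored).

```python
# Stated loads from the text versus the generator capacity.
lamps_w = 10 * 150          # illumination system: 10 lamps at 150 W each
imaging_subsystem_w = 300   # image capture subsystem: "less than 300 Watts"
generator_w = 5000          # AuraGen G5000 rated output

used_w = lamps_w + imaging_subsystem_w
print(f"accounted load: {used_w} W of {generator_w} W "
      f"({100 * used_w / generator_w:.0f} % of generator capacity)")
# -> accounted load: 1800 W of 5000 W (36 % of generator capacity)
```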

PAGE 33

22 Ability to perform real-time collection of high-resolution digital pavement images under different lighting conditions and different posted traffic speeds. Capability to work in conjunction with ICC’s imaging workstation software. A line scan camera with a linear resolution of 2048 pixels covering a width of approximately 14.5 feet. This camera is attached to a system, which is able to produce image lengths according to the use r’s need. For example, 6m (20 feet) is the current image length of the downward images, providing a frame resolution of 2048 x 2942 pixels. Figure 2.5 Downward Camera of the FDOT Survey Vehicle 2.4.3 The Workstation Program The Imaging Workstation has been designed for pavement surface analysis using digital image information collected by ICC imaging vehicles. The software has been designed to expedite the distress rating process and to manage and maintain

PAGE 34

23 rating data. The Imaging Workstation allows users to synchronize images from multiple cameras. The application has tools to assist in distress analysis and measurement. Users can categorize and save all pavement distress information, which can then be printed or exported in several formats. The ICC distress manager software includes a help file that exemplifies and explains the different distresses that can occur on a road surface. This information must be uploaded in advance of the rating process. In such cases the software might be customized according to the project. Users can enter the distress manager, select a road surface type and severity level, and view an image that exemplifies the distress specification. The Distress Manager allows users to create a multitude of categories using any combination of crack type, location, severity, crack width, and method of measurement.
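Before moving on to the evaluation criteria of Chapter 3, a back-of-the-envelope check of the downward imaging system described in Section 2.4.2 is useful; the sketch below converts the quoted figures (a 2048-pixel line over roughly 14.5 ft, and 2942 lines per 6 m image) into an approximate pixel footprint on the pavement. This is simple arithmetic on the quoted numbers, not an additional specification from the thesis.

```python
# Approximate ground footprint of one pixel of the downward linescan camera,
# using only the figures quoted in Section 2.4.2.
FT_TO_MM = 304.8
swath_mm = 14.5 * FT_TO_MM          # transverse coverage of the 2048-pixel line
image_length_mm = 6000.0            # one frame covers about 6 m (20 ft) of pavement

print(f"transverse:   {swath_mm / 2048:.2f} mm per pixel")        # about 2.2 mm
print(f"longitudinal: {image_length_mm / 2942:.2f} mm per line")  # about 2.0 mm
```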

PAGE 35

24 CHAPTER 3 CRITERIA FOR EVALUATING THE QUALITY OF PAVEMENT IMAGING SYSTEMS 3.1 Criteria for Evaluation of Imaging Systems Imaging systems in pavement evaluation vehicles are intended to capture pavement surface images that can be used for condition assessment of the road. These images of the pavement surface will be analyzed manually or automatically. Basically, the information will be analyzed using a standard distress classification manual, and the density, type and severity level of the different distresses will be reported as the overall Pavement Condition Index (PCI) of the particular section of the roadway or the network of roadways. Different severity levels of distress will lead to different treatment strategies. Hence, having an imaging system that is able to provide pavement managers with accurate images of the pavement surface is vital to pavement management decision-making. The capability of an imaging system to recognize different levels of cracking, especially at the low severity level, will be assessed by determining the spatial resolution of the imaging system. Actual dimensions and true shapes of the different distresses will be addressed by the optical distortion parameter, while the capability of the system to show the different tones of gray, when working with monochromatic images, is determined by evaluating its optical density response. 3.2 Proposed Quality Evaluation Criteria for Pavement Evaluation Imaging Systems Due to the versatility of different imaging systems and the variation of the performance of a given system under different operating conditions, the necessity of evaluating optical systems arises. The speed at which the images are collected and the intensity and effectiveness of the illumination system are the two major factors that govern the

PAGE 36

25 performance of an imaging system. Some of the latest imaging systems used in pavement evaluation are able to capture images at highway speeds, i.e. 70 mph. Operating at highway speeds assures performance of the survey without interrupting traffic and causing related safety hazards. In order to evaluate the capabilities of a given imaging system, four different characteristics can be determined. They are (1) spatial resolution, (2) optical density, (3) optical distortion, and (4) signal to noise ratio. Each property is equally important; hence, unacceptability of any one of them in a given image would lead to a poor quality image. 3.2.1 Spatial Resolution The spatial resolution is the ability of the imaging system to recognize detail in the image. The higher the spatial resolution, the greater the ability to recognize details, the larger the number of pixels used to characterize the image, and the greater the detail stored in the individual pixels. In its simplest form, one can define spatial resolution as the smallest discernible detail in a visual presentation. Optics researchers generally define spatial resolution in terms of the Modulation Transfer Function (Section 1.4). As previously stated (Section 1.4), the Modulation Transfer Function (MTF) characterizes the response of an imaging system at any given input frequency expressed in line pairs per millimeter (lp/mm). Different percentages of image preservation corresponding to a range of spatial frequencies will define the MTF curve for a given imaging system. Table 3.1 shows the description of the different severity levels for longitudinal cracking and transverse cracking. The MTF curve of an imaging system at the frequency corresponding to low severity cracking must provide a reasonable preservation of the original scene. In other words, the imaging system must be able to identify cracks with a mean width less than 6 mm (Table 3.1).
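This check can be automated once an MTF curve has been measured. The sketch below interpolates a hypothetical measured curve at the spatial frequency of a 6 mm detail and applies the 50 % threshold adopted later in this section; note that the conversion from crack width to spatial frequency (one line pair spanning twice the detail width) is an assumption, since the text does not spell it out.

```python
import numpy as np

def mtf_at(frequency, freqs, mtf_values):
    """Linearly interpolate a measured MTF curve at the given spatial frequency."""
    return float(np.interp(frequency, freqs, mtf_values))

def resolves_low_severity(freqs_lp_mm, mtf_values, crack_width_mm=6.0, threshold=0.5):
    """Check the criterion discussed here: MTF of at least 50 % at the frequency of a 6 mm detail.
    Assumes one line pair spans twice the detail width, so f = 1 / (2 * width)."""
    f_target = 1.0 / (2.0 * crack_width_mm)      # lp/mm on the pavement surface
    return mtf_at(f_target, freqs_lp_mm, mtf_values) >= threshold

# Hypothetical measured curve (spatial frequency in lp/mm on the pavement vs. MTF).
freqs = np.array([0.02, 0.05, 0.08, 0.12, 0.20])
mtf = np.array([0.95, 0.80, 0.60, 0.40, 0.15])
print(resolves_low_severity(freqs, mtf))   # True: MTF ~= 0.58 at 0.083 lp/mm
```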

PAGE 37

26 Table 3.1 Severity Levels for Longitudinal and Transverse Cracking (Distress Identification Manual, 1993)
Low: Cracks with a mean width ≤ 6 mm (0.25 in.); or sealed cracks with sealant material in good condition and with a width that cannot be determined.
Medium: Cracks with a mean width > 6 mm (0.25 in.) and ≤ 19 mm (0.75 in.); or any crack with a mean width ≤ 19 mm (0.75 in.) and adjacent low severity random cracking.
High: Cracks with a mean width > 19 mm (0.75 in.); or any crack with a mean width ≤ 19 mm (0.75 in.) and adjacent moderate to high severity random cracking.

On the other hand, storing more detail also requires bigger files, which could create storage and processing problems. Hence, if one wants to see the upper limit of low severity cracking using a pavement distress imaging system, it must have an MTF value of at least 50% at the spatial frequency corresponding to a 6 mm detail of the captured object, which for this analysis is the pavement surface. An MTF of 50% means that the contrast preserved in the image is half that of a perfect reproduction of the scene. This percentage has been observed to offer an adequate degree of acceptability, especially when a human being is to perform the survey. 3.2.2 Dynamic Range The dynamic range defines the ability of an imaging device to record or display the full range of optical density. The color of an object is often referred to as surface color, and its nature is determined by surface reflectance properties. Human beings possess the ability to perceive and judge these relative surface reflectance measures despite their varying wavelengths. Brightness is proportionally dependent both on the wavelength (intensity or illumination) and on the surface reflectance. Hence, in order to determine the color information one has to obtain the reflectance values as well. Optical density (D) is a property of materials related to the reflectance of light;


it ranges from 0 (pure white) to 4 (pure black), and it can be expressed mathematically in terms of the Reflectance (R) or the Opacity (O), as shown in Equation (3.1). Higher density corresponds to lower brightness. Density is measured on a logarithmic scale: a density of 3.0 corresponds to a reflected intensity ten times lower than a density of 2.0. An intensity range of 100:1 is a density range of 2.0, and 1000:1 is a range of 3.0. For precise quantitative measurements, density is expressed in terms of the light incident upon an image and the light reflected by the image. Reflectance (R) is the ratio of the intensity of reflected light to that of the incident light. It is the inverse of Opacity (O), which refers to the amount of light absorbed.

The dynamic range defines the ability of an imaging device (such as a camera, scanner, display monitor, or printer) to record or display the full range of brightness, from full white (density 0.0) to absolute black (density 4.0). It is expressed in terms of two values, Dmax and Dmin, which are the maximum and the minimum values of optical density capable of being captured. The dynamic range of the imaging system is then Dmax - Dmin. Systems with a larger dynamic range can detect greater image detail in dark shadow areas. If the imaging system's Dmin were 0.2 and Dmax were 3.1, its dynamic range would be 2.9; a greater dynamic range resolves more detail in the dark shadow areas of the image because the range is extended at the black end.

Brightness resolution refers to the number of brightness levels that can be recorded in any given pixel. In 8-bit grayscale (black and white) images, each pixel is black, white, or one of 254 shades of gray (2^8 = 256 levels). An optical density step target was utilized in this investigation in order to determine the dynamic range of the evaluated imaging system, which has a brightness resolution of 256 levels. Optical Density (D) can be expressed mathematically as the base-10 logarithm of the Opacity. These relations are shown in Equation (3.1) and Equation (3.2).


D = log10(O) = log10(1/R)    (3.1)

R = 1/O = 10^(-D)    (3.2)

3.2.3 Optical Distortion

Optical distortion refers to the changes in shape and dimensions appearing in the images of objects when they are photographed. "Barrel distortion" and "pincushion distortion" are two common types of optical distortion. The effect of these phenomena is revealed when actually straight lines appear to be curved. Barrel distortion is a lens effect that causes images to be spherized at their center and is associated with wide-angle lenses; it occurs only at the wide end of a zoom lens. In contrast, pincushion distortion causes images to be pinched at their centers and is associated with zoom lenses or with telephoto adapters. The latter distortion occurs only at the telephoto end of a zoom lens and is most noticeable when a very straight edge is placed near the side of the image frame. Figure 3.2 and Figure 3.3 show optical distortion, the barrel and pincushion effects respectively, applied to the original image shown in Figure 3.1. These effects are noticeable mostly in images taken by area-scan cameras, in which the spherical lens covers the area (in two dimensions) of the captured object. On the other hand, linescan cameras compose an image by assembling adjacent lines of 1 pixel thickness, thereby producing an image with the width of the linescan lens calibration times the number of pixel lines that the user specifies. Hence, the optical distortion occurring in images produced by linescan cameras is quite different from that produced by area-scan cameras. Assuming that the vehicle in which the linescan camera system is installed moves along a straight line, the image of a line 0.5 m away from the center (parallel to the line of movement) will always be displayed as a straight line parallel to the line of movement. Similarly, a line perpendicular to the line of movement will be captured in one scan of the linescan camera and will be displayed as a straight line perpendicular to the line of movement. Therefore neither barrel nor pincushion effects


are applicable to images from linescan cameras, due to their one-dimensional imaging nature.

Figure 3.1 Original Image without Optical Distortion

Figure 3.2 Image Showing the Barrel Distortion Effect

Figure 3.3 Image Showing the Pincushion Distortion Effect

On the other hand, a different type of distortion can be expected due to the one-dimensional imaging nature of the linescan camera lens. Straight diagonal lines in the captured scene might be displayed as curved ones after the image composition. This can be explained by the variation of the field of vision of the linescan camera lens as it scans objects away from the centerline of movement. Figure 3.4 shows an aerial view of the downward camera system of a pavement evaluation vehicle, where equidistant lines will not appear as equidistant in the digital image. A distortion in the spacing between lines is seen to occur away from the centerline.


Figure 3.4 Sketch of an Aerial View of Linescan Camera and Distorted Parallel Lines

This phenomenon is noticed in an image (Figure 3.5) taken when surveying a metal bridge on Hillsborough Avenue in Tampa, Hillsborough County, Florida. This bridge was imaged during the pilot phase of a research project conducted for FDOT.

Figure 3.5 Digital Image Showing Optical Linear Distortion

Figure 3.5 shows the digital image, in which the constant spacing between any adjacent pair of metal strips is clearly distorted as the scan moves away from the centerline. It must be noted that the color in the image has been inverted for easy recognition of the metal strips.


3.2.4 Signal to Noise Ratio

Noise in digital imaging refers to the visible effects of an electronic error (or interference) in the final image produced by an imaging system. Noise is a function of how well the sensor and the digital signal processing systems inside the digital camera work together in replicating the details of an original scene in an image. From previous research (Bright, 1998) it has been found that the visibility of objects depends on the average difference in intensity between the object and the background, the noise level, and the number of pixels representing the object; on the other hand, visibility does not depend on the object shape. The Signal to Noise Ratio (SNR) can be determined using Equation (3.3) and Equation (3.4):

k = (n - no) / s    (3.3)

K = [(n - no) / s] · √A    (3.4)

where k is the SNR for an individual pixel, K is the SNR for a group of pixels, n is the mean signal (pixel intensity) of the object, no is the mean signal of the background, A is the number of pixels in the object, and s is the standard deviation of the signal.
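A minimal sketch (not part of the thesis) of these two relations, assuming the object and background intensities are available as NumPy arrays; the function names are illustrative only, and the square-root form of Equation (3.4) is consistent with the K and k values reported later in Table 5.1.

    import numpy as np

    def snr_single_pixel(object_pixels, background_pixels):
        """Equation (3.3): k = (n - no) / s for an individual pixel."""
        n = object_pixels.mean()           # mean signal (pixel intensity) of the object
        no = background_pixels.mean()      # mean signal of the background
        s = object_pixels.std()            # standard deviation of the signal
        return (n - no) / s

    def snr_group(object_pixels, background_pixels):
        """Equation (3.4): K = k * sqrt(A) for an object of A pixels."""
        A = object_pixels.size
        return snr_single_pixel(object_pixels, background_pixels) * np.sqrt(A)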


CHAPTER 4

EXPERIMENTAL PROCEDURE

4.1 Testing Variables

Based on the experience gained from a related project performed for FDOT, the author identified the two predominant variables that might affect the performance of the pavement imaging system. These variables are: 1) the speed and 2) the lighting condition of the environment in which the images are captured. Other parameters, such as pavement type, pavement roughness, and extreme weather conditions, also play roles of some importance; however, a detailed study of these factors was excluded from the current research. The field-testing phase of this research consists of collecting digital images of the standard targets placed on pavements under different speeds and lighting conditions. The images are then processed and evaluated to determine the effect of speed and illumination.

4.1.1 Speed

The speed of the evaluation vehicle in which the pavement imaging system is installed becomes an important parameter, since the purpose of these automated or semi-automated evaluation vehicles is to be able to perform surveys at up to highway speeds. Furthermore, on arterials and collectors the vehicle must be able to mingle smoothly with the normal traffic while performing its evaluation functions. Hence a high degree of versatility is expected from the survey vehicle with respect to speed. The goal of this analysis is to determine the effect of speed on the images. The testing will include acquisition of images at speeds varying from 10 mph to 50 mph in increments of 10 mph.


4.1.2 Lighting Condition

It was stated in Section 3.2.2 that lighting intensity and reflectance are the key parameters that determine the brightness of images and thus ease the recognition of distress details on pavements. With respect to illumination, there are three distinct light environments that are encountered when performing surveys: 1) in daylight without the vehicle's illumination system, 2) in daylight with the vehicle's illumination system, and 3) at night time with the vehicle's illumination system. Of these, the daylight condition can have two subcategories depending on whether or not the vehicle itself or other objects project shadows onto the image. Because of power supply limitations and possible overheating of the pavement surface, adequate illumination cannot be provided to overcome the shadows.

4.2 Spatial Resolution Test

In order to evaluate spatial resolution, three different lighting conditions are to be used: 1) in daylight with the sun positioned at the back of the vehicle (no vehicle shadows on the images), 2) in daylight facing the sun (vehicle shadows appearing), and 3) at night time. Table 4.1 illustrates the testing details. Each test is named by two letters and a number; the first letter stands for daytime or night time, the second one for the illumination condition, and the number for the speed.

Table 4.1 Spatial Resolution Testing

Speed (mph)   Daylight, no shadows   Daylight, shadows   Night time
10            DN10                   DS10                NI10
20            DN20                   DS20                -
30            DN30                   DS30                -
40            DN40                   DS40                -
50            DN50                   DS50                -
60            DN60                   DS60                -

4.2.1 Spatial Resolution Target

The Iván Sokolic Resolution Target 2003 is intended to be used for testing the spatial resolution of the downward camera system. The target is enclosed in a rectangular


border with dimensions of 17 cm x 25 cm. It is basically a plot of stripes whose thicknesses decrease following a quadratic function. There are nine (9) black stripes with a white stripe in between each pair, producing a total of seventeen (17) stripes of the same thickness (i.e., 9 black and 8 white). The stripe thickness can be read at different locations along the horizontal axis using the Scale Numbers (SN) indicated (Figure 4.2). The chart was created using Microsoft Excel.

4.2.2 Procedure and Considerations

When a digital image is taken using the total active height of the CCD sensor to capture the total height of the chart, the Scale Number (SN) is determined from Equation (4.1). For instance, if the reading is 25, the corresponding scale number is 4, as computed using Equation (4.1): SN = (25 + 7)/8 = 4.

SN = (reading + 7) / 8    (4.1)

The spatial frequency (w) in line pairs per millimeter (lp/mm) at the camera sensor is calculated using Equation (4.2). The coefficient 10 in the equation is derived from the analysis for SN = 1, in which the line pair width is equal to 2 cm and has to be multiplied by 10 in order to cover the full height of the target.

w (lp/mm) = SN · 10 / Sensor Active Height    (4.2)

The magnification factor, which is the ratio of the size of the object to the size of the sensor, can be calculated using the simple lens equation shown in Equation (4.3), where D is the distance between the object and the lens, d is the distance between the lens and the sensor, and f is the focal length of the camera lens.

1/D + 1/d = 1/f    (4.3)
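As a quick illustration (not from the thesis) of Equations (4.1) and (4.2), the sketch below computes the Scale Number for the worked example above and the corresponding sensor frequency. The 5.32 mm sensor height is that of the Minolta camera used later in Section 4.5.1, and the function names are illustrative only.

    def scale_number(reading):
        """Equation (4.1): SN = (reading + 7) / 8."""
        return (reading + 7) / 8.0

    def sensor_frequency(sn, active_height_mm):
        """Equation (4.2): w = SN * 10 / sensor active height, in lp/mm."""
        return sn * 10.0 / active_height_mm

    print(scale_number(25))                # 4.0, matching the worked example above
    print(sensor_frequency(4.0, 5.32))     # about 7.5 lp/mm for a 5.32 mm active height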


Figure 4.1 Simple Lens Equation Illustration

If the magnification M is defined as M = D/d, then, based on similar triangles, Equation (4.4) can be written:

M = D/d = H/h    (4.4)

where H is the height of the object and h is the height of the object as projected on the sensor. By rearranging the terms in Equation (4.3) and using Md instead of D, one obtains Equation (4.5), which allows one to determine the distance between the camera lens and the object from the magnification (the ratio of the target height to the active sensor height) and the focal length of the lens:

D = (M + 1) f    (4.5)

4.3 Brightness Resolution Test

This test will be conducted using a procedure similar to that described for the Spatial Resolution Test, as summarized in Table 4.2.

Table 4.2 Optical Density Testing

Speed (mph)   Daylight, no shadows   Daylight, shadows   Night time
10            DN10                   DS10                NI10
20            DN20                   DS20                -
30            DN30                   DS30                -
40            DN40                   DS40                -
50            DN50                   DS50                -
60            DN60                   DS60                -


Figure 4.2 Iván Sokolic Resolution Target 2003 (stripe pattern with Scale Numbers 1 through 50 marked along the horizontal axis)


4.3.1 Optical Density Step Target

In order to evaluate the capability of the system in distinguishing different levels within the grayscale spectrum (white to black), a grayscale target was used. In the 8.5 in. x 11 in. Mylar variable Optical Density Step Target there are 15 density steps, from 0.07 to 1.5 in optical density, arranged in two progressions: one varying from the highest to the lowest value on the upper scale, and one advancing from the lowest to the highest on the lower scale. The variation between density steps is linear, which leads to a logarithmic change in diffuse reflectivity. Figure 4.3 shows this target. Since the limits of density in the progressions are 0.07 and 1.5 and there are 15 steps in each, the linear step value between the individual square elements is 0.102, which is the result of the operation (1.5 - 0.07)/14. Based on these increments and Equation (3.2), Table 4.3 was constructed for the different values of optical density and the corresponding reflectance values.

Figure 4.3 Optical Density Step Target
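The density-to-reflectance conversion behind the table follows directly from Equation (3.2); a minimal sketch (not part of the thesis) that reproduces the tabulated values:

    # Reproducing the density/reflectance pairs of Table 4.3 from Equation (3.2):
    # 15 equally spaced density steps between 0.07 and 1.5 (step = (1.5 - 0.07) / 14).
    steps = [0.07 + i * (1.5 - 0.07) / 14 for i in range(15)]
    for density in steps:
        reflectance = 10.0 ** (-density)       # Equation (3.2)
        print(f"D = {density:.3f}   R = {reflectance:6.2%}")
    # The first and last lines give approximately 85.1% and 3.2%, in line with Table 4.3.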


Table 4.3 Density and Reflectance Values for the Optical Density Target

Density   Reflectance
0.070     85.11%
0.172     67.30%
0.274     53.21%
0.376     42.07%
0.478     33.27%
0.580     26.30%
0.682     20.80%
0.784     16.44%
0.886     13.00%
0.988     10.28%
1.090     8.13%
1.192     6.43%
1.294     5.08%
1.396     4.01%
1.498     3.18%

4.3.2 Procedure and Considerations

The procedure to evaluate the brightness resolution of the imaging system requires computer assistance, with software capable of determining the intensity value at any pixel of the digital image. From the digital image of the Optical Density Step Target, each block included in the grayscale progression is analyzed. The intensity of each of the 15 grayscale blocks is compared to the grayscale value provided on the target; Equation (3.1) can be used for this purpose. A comparison of the corresponding values determines how much the imaging system has altered the original image intensities.

4.4 Optical Distortion Test

In contrast to the Spatial Resolution and the Brightness Resolution tests, this test needs only one image taken at very low speed, i.e., 10 mph. The illumination is not relevant unless a problem in visualizing the target occurs. Table 4.4 indicates the testing for optical distortion.


Table 4.4 Optical Distortion Testing

Speed (mph)   Daylight, no shadows
10            DN10

4.4.1 Optical Distortion Target

The Iván Sokolic Distortion Target 2003 is to be used for the optical distortion evaluation of the pavement imaging systems. Considering that the system to be evaluated is a linescan camera, the author felt the necessity of creating his own target, given the nonexistence of distortion targets for evaluating linescan camera systems. The Iván Sokolic Distortion Target 2003 for linescan camera systems basically consists of equidistant lines, parallel and perpendicular to the direction of movement, separated by 1 cm. Diagonal lines are also included for easy recognition of the optical distortion phenomenon. With linescan cameras, the goal of using this target is to measure the distortion in the longitudinal direction as well as in the transverse direction. Figure 4.4 shows the Iván Sokolic Distortion Target 2003.

In Section 3.2.3 it was illustrated how a linescan camera distorts an image away from the centerline of movement defined by the camera's travel position. Therefore, testing with the distortion target is necessary to indicate the degree of optical distortion of the pavement imaging system and the optimum width that can be imaged by the pavement evaluation vehicle in one scan.

4.4.2 Procedure and Considerations

The procedure described next is only applicable to linescan camera systems. The relevant target is shown in Figure 4.4; the further any square is from the centerline of the image, the shorter its dimension perpendicular to the centerline becomes. The distortion is then expressed as a percentage of the original square dimension perpendicular to the centerline, for any given distance from the centerline (line of movement). For instance, if the square dimensions become 0.7 cm x 0.7 cm at a distance of 2.2 m from the centerline, one can say


that the distortion at 2.2 m is 100% x (1.0 - 0.7)/1.0, or 30%. Due to the one-dimensional nature of the imaging system, the distortion in the dimension parallel to the centerline is constant and assumed to be negligible if the calibrations of the shutter and the encoder are properly set up.

4.5 Preliminary Testing

In preparation for the actual testing, the author performed some preliminary testing in order to become familiar with the testing procedures. Multiple photographic tests were performed in the USF Visual Laboratory by the investigator and his research team in order to validate the developed methodology.

4.5.1 Spatial Resolution Preliminary Testing Results

The Iván Sokolic Resolution Target 2003 (Figure 4.2) was printed on letter-size photographic paper manufactured by Hewlett Packard using a high-resolution Epson inkjet printer. A matte paper was chosen to avoid excess reflection of light. The color of the paper is white and the color of the ink used is black. A Minolta DiMage5 digital camera, with sensor dimensions of 7.18 mm x 5.32 mm providing a maximum resolution of 3.2 megapixels, was used to take multiple images of the target under different conditions. The telephoto setting, with a focal length of 50.8 mm, was chosen to capture the images. When evaluating any given digital camera based on Equation (4.5), one can produce tables like Table 4.5, which relates the characteristics, distances, and magnifications for the camera. A subroutine computer program called PhotoES_AM, developed by a member of the research team, was used to obtain the Modulation Transfer Function (MTF) of the evaluated imaging system. The MTF values can be approximated from the Contrast Transfer Function (CTF), as discussed in Section 1.4. The results from six tests, three at the selected first distance and another three at double that distance, are shown in Table 4.6 and Figure 4.5.


Figure 4.4 Iván Sokolic Distortion Target 2003 for Linescan Camera Imaging Systems


Table 4.5 Characteristics, Distances and Magnifications for Minolta Digital Camera (Active Sensor Height = 5.32 mm, Focal Length = 50.8 mm)

Distance lens/object (mm)   Magnification
1673.9                      31.95
3297.4                      63.91
2000.0                      38.37

Table 4.6 MTF Results from Laboratory Testing for Minolta Digital Camera (MTF, %)

             Normal distance                      Double distance
w (lp/mm)    p1481   p1482   p1483   St.Dev      p1485   p1486   p1487   St.Dev
51.136       64.4%   67.8%   65.6%   0.017       69.6%   66.7%   64.3%   0.027
60.724       55.3%   58.5%   54.6%   0.021       59.6%   61.5%   55.7%   0.030
70.312       37.3%   42.5%   39.0%   0.027       41.6%   39.2%   41.5%   0.014
83.096       23.8%   18.5%   18.2%   0.032       26.9%   23.5%   23.5%   0.020
102.27       4.6%    4.7%    4.7%    0.001       6.9%    8.4%    7.8%    0.008

As noticed in Figure 4.5, the MTF plot is unique for a given optical setup, irrespective of the distance between the object and the camera. Two objectives were achieved by performing these tests: 1) the repeatability of the process was verified, and 2) it was verified that the MTF needs no adjustment for the distance between the object (target) and the camera lens; the adjustment is achieved only through the appropriate scaling factors (Equation (4.5)).

Figure 4.5 MTF Curves for Different Tests Using a Minolta DiMage5 Digital Camera (MTF versus spatial frequency, approximately 50 to 110 lp/mm, for tests p1481-p1483 and p1485-p1487)
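As a quick consistency check (not part of the thesis), the lens-to-object distances of Table 4.5 can be reproduced from Equation (4.5):

    # Checking the lens-to-object distances of Table 4.5 with Equation (4.5),
    # D = (M + 1) * f, for the Minolta DiMage5 setup (focal length f = 50.8 mm).
    f = 50.8
    for M in (31.95, 63.91, 38.37):
        D = (M + 1.0) * f
        print(f"M = {M:6.2f}   D = {D:7.1f} mm")
    # Gives about 1673.9 mm, 3297.4 mm and 2000.0 mm, in agreement with Table 4.5.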


Knowing the above results, one can set any distance between the target and the camera lens and mathematically estimate the scaling factor needed to determine the spatial frequencies for the different MTF values obtained from the test. As an example, if the distance x between the object and the camera lens is 2000 mm and the rest of the parameters are unaltered (a focal length of 50.8 mm and an effective sensor height h of 5.32 mm), then using Equation (4.5) the resulting scaling factor M is 38.37. The spatial frequencies are then determined using Equation (4.6). In order to use Equation (4.6) one must first determine the base distance D, which is the distance at which the total active height of the sensor captures the entire height of the target. For the tested camera setup, the base distance was found to be 1673 mm.

w = (x / D) · (17 · SN / h)    (4.6)

Therefore, when the imaging distance is changed, the corresponding values of the spatial frequency change in the same proportion (Equation (4.5)).

4.5.2 Optical Density Preliminary Testing Results

As part of the preliminary series of tests, two images of the Optical Density Step Target were taken in the USF Visual Laboratory. The resulting grayscale values (intensities) for the different gray steps of the target were plotted in the charts shown in Figure 4.6 and Figure 4.7. The first image, corresponding to Figure 4.6, was taken using a 4200 K (Kelvin) bulb in addition to the room light, while the second image was taken under the room lights only.


Figure 4.6 Grayscale Variation in the Density Step Target (room light plus additional illumination); intensity value (0-255) versus the optical density steps (0.07-1.498)

Figure 4.7 Grayscale Variation in the Density Step Target (only room light); intensity value (0-255) versus the optical density steps (0.07-1.498)

Figure 4.8 shows the results from the two different images together with an ideal intensity curve based on the reflectance values of Table 4.3. From this plot it can be concluded that the lighting environment plays an important role in defining the color intensity of the images. It is also important to mention that no image enhancement was applied before analyzing the pictures.


Figure 4.8 Evaluation of the Color Response of a Digital Imaging System (Minolta Camera); intensity (0-255) versus reflectance for the ideal curve, the room + lights case, and the room illumination case

4.5.3 Optical Distortion Preliminary Testing Results

No preliminary distortion testing was performed.

4.5.4 Signal to Noise Ratio Preliminary Testing Results

No preliminary signal to noise ratio (SNR) testing was performed.


CHAPTER 5

FINDINGS AND CONCLUSIONS

5.1 Results from Field Studies

During the month of July 2003 the author performed the field testing necessary to validate the developed criteria. A sparsely trafficked location inside the University of South Florida campus was chosen for this purpose. The FDOT Survey Vehicle's imaging system was used for the testing. The spatial resolution, optical density, and signal to noise targets were assembled on a black foam board, while the optical distortion target was placed separately because of its oversized dimensions. The pavement imaging vehicle passes over all of the targets placed on the pavement surface. Figure 5.1 shows the setup of the targets during testing. A public domain program, ImageJ (Rasband, 2003), was used to view and process the digital images.

Figure 5.1 Setup of Targets on the Pavement Surface

5.1.1 Spatial Resolution Testing Results

The testing condition of the daylight environment with the illumination system of the vehicle was not employed, since the camera aperture was set up to capture the images


of dark objects. Since the spatial resolution target contains a large area of white, the illumination can cause over-saturation of white on the images. Hence, images of the target in daylight without the illumination system were captured at different speeds and used in this analysis. The results produced the plot presented in Figure 5.2. No major differences in the MTF are found due to speed. In Figure 5.2, a 50% value of MTF corresponds to a spatial frequency of approximately 18 lp/mm, while an MTF of 10% corresponds to 28 lp/mm. These results indicate that the system is poor at resolving detail, in terms of contrast between white and black, at frequencies higher than 28 lp/mm. This is confirmed by comparing the cut-off frequency of 28 lp/mm with the computed Nyquist Frequency of the system of 50 lp/mm. The scaling factor for this scenario follows from covering 6.096 m (20 ft) of the pavement surface with 2948 pixels; with a pixel pitch of 0.01 mm, this gives 29.48 mm as the total length of the CCD sensor in that direction (2948 pixels x 0.01 mm/pixel = 29.48 mm), leading to a scaling factor of 206.8 (6096 mm / 29.48 mm = 206.8). The MTF results thus show the low performance of the evaluated imaging system in recognizing detail at spatial frequencies higher than about 30 lp/mm, which represents 60% of the Nyquist Frequency (50 lp/mm). This spatial frequency corresponds to a line pair of 206.8 / (30 lp/mm) = 6.9 mm on the pavement, or cracks of 6.9 mm / 2 = 3.45 mm.

Figure 5.2 MTF Plots for the FDOT Survey Vehicle at Different Speeds (MTF versus spatial frequency, 10 to 30 lp/mm, at 10, 20, 30, 40, and 50 mph)
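A minimal sketch (not part of the thesis) of the arithmetic just described, going from the lane coverage and pixel count to the scaling factor and the minimum resolvable crack width:

    # Minimum resolvable crack width implied by the observed cut-off frequency,
    # following the values quoted in Section 5.1.1.
    pavement_width_mm = 6096.0                              # 20 ft of pavement imaged per scan line
    pixels = 2948
    pixel_pitch_mm = 0.01
    sensor_length_mm = pixels * pixel_pitch_mm              # 29.48 mm
    scaling_factor = pavement_width_mm / sensor_length_mm   # about 206.8
    cutoff_lp_per_mm = 30.0
    line_pair_on_pavement_mm = scaling_factor / cutoff_lp_per_mm   # about 6.9 mm
    crack_width_mm = line_pair_on_pavement_mm / 2.0                # about 3.45 mm
    print(round(scaling_factor, 1), round(crack_width_mm, 2))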


5.1.2 Dynamic Range Testing Results

Figure 5.3 and Figure 5.4 show the brightness profiles of a uniform white target when imaged with and without the illumination system at low speed (10 mph). Both profiles reveal the non-uniformity of the light received by the CCD sensor (in both cases) as well as of the light projected on the pavement (in the first case).

Figure 5.3 Brightness Profile of a Uniform White Target (illumination system used)

Figure 5.4 Brightness Profile of a Uniform White Target (illumination system not used)

The Optical Density Step Target was used for evaluating the dynamic range response of the system. Three different lighting environments were evaluated in order to observe the changes in response: (1) in daylight using the illumination system, (2) in daylight without using the illumination system, and (3) at night time using the illumination system. Figure 5.5 shows the brightness intensities for the images from the three lighting environments; it can be seen that for the last five steps the brightness levels


remain almost constant. These steps correspond to optical density values of approximately 1.0 to 1.5. Hence the range of optical densities over which the system produces a satisfactory response is approximately 0 to 0.9, so the dynamic range of the imaging system is calculated as 0.9 - 0 = 0.9. As previously stated, the imaging system was set up to image dark objects, which explains why the white steps of the optical density target have intensity values of 255 (highly saturated images). No major effect of speed on the resulting intensity values is seen in this test, as demonstrated in Figure 5.6.

Figure 5.5 Brightness Intensity Values for the Optical Density Target (intensity versus optical density, 0.07 to 1.50, for the ideal curve and the three lighting conditions)

Figure 5.6 Negligible Variance in Intensity due to Speed (intensity versus optical density, 0.07 to 1.50, at 10, 30, and 60 mph)


5.1.3 Optical Distortion Testing Results

As discussed in Section 4.4.2, the distortion is seen to occur significantly only in the direction perpendicular to the direction of movement, and it is one-dimensional due to the linear characteristics of the linescan camera. The optical distortion is expressed as a percentage with respect to the original width of the square components of the target. Figure 5.7 shows the different degrees of distortion along a line starting at the centerline. From the figure it can be seen that at 25 cm from the centerline a distortion of about 15% occurs, while at 2 m away it is reduced to almost -25%. There is no distortion 1 m away from the centerline, where the wheelpaths are located. A polynomial function, y = -4E-06 x^2 - 0.0013 x + 0.1806, is used to fit the optical distortion variation, as seen in Figure 5.7.

Figure 5.7 Optical Distortion Present in the FDOT Survey Vehicle (optical distortion versus distance from the centerline, 0 to 200 cm, with the fitted polynomial)

Figure 5.8 Optical Distortion Testing Scene
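A small sketch (not part of the thesis) that evaluates this fitted trend at a few offsets; the coefficients are taken from Figure 5.7, with the sign of the linear term as implied by the distortion values quoted above:

    # Evaluating the distortion trend fitted in Figure 5.7 (x in cm from the
    # centerline, result as a fraction of the original square width).
    def distortion(x_cm):
        return -4e-6 * x_cm ** 2 - 0.0013 * x_cm + 0.1806

    for x in (25, 100, 200):
        print(f"{x:3d} cm from centerline: {distortion(x):+.1%}")
    # Roughly +15% at 25 cm, near 0% at 1 m (the wheelpath) and about -24% at 2 m.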


It must be mentioned that the impact of the road cross-slope on image distortion is negligible. Figure 5.8 depicts the optical distortion testing scene, while Figure 5.9 shows the digital image of the optical distortion target captured using the FDOT Survey Vehicle's imaging system. As previously stated in Section 4.4, speed is not a variable in the optical distortion testing.

Figure 5.9 Left Half of a Digital Image of the Optical Distortion Target

5.1.4 Signal to Noise Ratio Testing Results

Based on previous research done by the FBI and used in their Integrated Automated Fingerprint Identification System (IAFIS) project (Appendix F, Electronic Fingerprint Transmission Specification), both the ratio of signal to white noise standard deviation (k_WHITE) and the ratio of signal to black noise standard deviation (k_BLACK) of a digital imaging system shall be greater than or equal to 125; this standard is only applicable to images captured using the illumination system. Results of the SNR testing are presented in Table 5.1, where the values obtained for k_WHITE (139.3, 124.6, and 127.8) and for k_BLACK (70.7, 83.3, and 80.1) during daytime at different speeds indicate the low performance of the system in reducing the amount of noise in the resulting images, considering the mentioned criterion (k ≥ 125), when collecting images during daytime with the illumination system. On the other hand, the values of k_WHITE and k_BLACK of 259.6 and 132.2, respectively, denote a better response from the system when collecting images during night time. K is the SNR for a group of pixels and k is the SNR for an individual pixel, A is the number of pixels in the object, and s is the standard deviation of the signal (Equation (3.3)


and Equation (3.4)). Table 5.1 shows no major effect of the different testing speeds on the resulting k values of the signal to noise evaluation.

Table 5.1 SNR Testing Results

                 Daytime with lights            Night time with lights
Parameter        10 mph     30 mph     50 mph   10 mph
K_WHITE          2428.9     2173.2     2228.7   4526.6
k_WHITE          139.3      124.6      127.8    259.6
s_WHITE          1.451      1.606      1.595    0.862
A_WHITE          304        304        304      304
K_BLACK          1233.7     1452.5     1396.9   2306.2
k_BLACK          70.7       83.3       80.1     132.2
s_BLACK          2.857      2.403      2.544    1.692
A_BLACK          304        304        304      304

5.2 Testing Limitations and Sources of Error

There were limitations in this testing program, as in any scientific study, although efforts were made to minimize the sources of error. Some of the limitations are:

- The sizes of the steps in the optical density target (4.4 cm) and in the Macbeth target (16 cm) utilized for the signal to noise evaluation are too small considering the small number of pixels (130 and 480, respectively) that the imaging system used to capture them. This is due to the large area covered and the distance between the camera and the pavement surface.
- The sizes of the wedges in the spatial resolution target are too small, for the same reason as above.
- Night time testing was performed only in quasi-static mode or at very low speeds.
- The background of the targets (the pavement surface) was maintained the same and was not considered as an extra variable.

Other sources of error could be as follows:

- The possibility of the downward camera not being properly centered.


- Non-uniformity of the lighting provided by the illumination system.
- The intensity of sunlight did not remain constant during the entire testing program.

5.3 Conclusions

Pavement imaging systems can be characterized by different factors that can affect the quality of the digital images. In this thesis study, four of these factors were evaluated in detail: (1) spatial resolution, (2) brightness resolution (referred to as the optical density response), (3) optical distortion, and (4) signal to noise ratio. Based on the results of this investigation, the following conclusions can be drawn:

- It was found that speed does not influence the quality of images within the range of speeds used for field testing. However, the maximum speed reached during the field testing was 50 mph, which, as previously stated, does not quite represent the speeds used on the interstate highways of the state roadway network.
- The following deficiencies were found with the FDOT Survey Vehicle's pavement imaging system:
  o In terms of the spatial resolution evaluation, the imaging system was unable to offer an acceptable MTF value of 50% or more at the Nyquist Frequency of 50 lp/mm, which corresponds to a crack width of 2.1 mm. The low performance of the pavement imaging system is confirmed by the cut-off frequency of 28 lp/mm, which corresponds to a crack width of about 3.45 mm.
  o In terms of the response of the system in recognizing different brightness levels, the dynamic range of the system was calculated to be slightly greater than 1.0 for the night time illuminated condition and less than 1.0 for the daytime conditions, whether or not the artificial lighting system was used.
  o The optical distortion test results revealed the inability of the imaging system to reproduce the geometry of images occupying the entire field


of view. However, since the purpose of such systems is to detect cracks, which are considered more important if located within the wheelpath, the evaluated imaging system seems satisfactory; only a little distortion is observed at 1 m away from the centerline, which is the approximate location of the wheelpath.
  o The signal to noise ratio results exposed the inclusion of undesirable frequencies in the main frequency, resulting in a loss of image quality, especially when collecting images during daytime.
- The brightness value, or intensity, of the images depends directly on the illumination provided to the pavement surface. Therefore, a controlled illumination system is necessary if daytime surveys are to be performed. Otherwise, the author recommends night time surveys in order to obtain comparable images of the pavement without the need to change the setup of the imaging system or to perform image enhancement.
- In order to improve the quality of the images outside the wheelpaths, some remedial measures must be taken, such as changing the type of camera lens or installing an extra camera to image half of the pavement surface.


REFERENCES

Baxes, G. (1994). Digital Image Processing. New York: John Wiley & Sons, Inc.

Gramling, W. and Hunt, J. (1993). Photographic Pavement Distress Record Collection and Transverse Profile Analysis. Washington, DC: SHRP-P-660, National Research Council.

Gunaratne, M., Mraz, A., Sokolic, I., and Nazef, A. (2002). Development of Florida's Comprehensive Pavement Evaluation Vehicle. Washington, DC: TRB 2003 Annual Meeting.

Koren, N. (2003). Understanding Image Sharpness, Part 1: Introduction to Resolution and MTF Curves. http://www.normankoren.com/Tutorials/MTF.html.

Lamberts, R. Use of Sinusoidal Test Patterns for MTF Evaluation. MTF Engineering Notes. http://www.sinepatterns.com/MTF/EngNotes.htm.

Sitter, D. N., Goddard, J. S., and Ferrell, R. K. (1995). Method for the Measurement of the Modulation Transfer Function of Sampled Imaging Systems from Bar-Target Patterns. Applied Optics, Vol. 34, No. 4.

Wang, K. and Elliott, R. (1999). Investigation of Image Archiving for Pavement Surface Distress Survey. Fayetteville: University of Arkansas.

Williams, D. and Burns, P. (2001). Diagnostics for Digital Capture Using MTF. Rochester, NY: Eastman Kodak.

Williams, D. (2001). What is MTF ... and Why Should You Care? Eastman Kodak. http://www.rlg.org/preserv/diginews/diginews21.html#technical.


BIBLIOGRAPHY

American Society for Testing and Materials (2000). Standard Guide for Classification of Automated Pavement Condition Survey Equipment. Designation: E 1656-94 (Reapproved 2000). Philadelphia.

Bright, Newbury and Steel (1998). Visibility of Objects in Computer Simulations of Noisy Micrographs. Journal of Microscopy, Vol. 189, Issue 1.

International Standard Organization (2000). Photography - Electronic Still-Picture Cameras - Resolution Measurements. Geneva, Switzerland: ISO copyright office.

Project PCS/Law Engineering (1993). Distress Interpretation from 35mm Film for the LTPP Experiments. Washington, DC: SHRP-P-642, National Research Council.

Strategic Highway Research Program (1993). Distress Identification Manual for the Long-Term Pavement Performance Project. Washington, DC: SHRP-P-338, National Research Council.

Technical Advisory Service for Images (2002). Colour and Resolution Targets. TASI. http://www.tasi.ac.uk/.


APPENDICES


58Appendix A: Sample Report for MTF Evaluation Using Photoes_am Plugin for Imagej ----------------------------------------------------------| | Creating File: 10mphfile | ----------------------------------------------------------MTF from HOR or VERT visual resolution bars (6-20) ----------------------------------------------------------Measurement No: 1 Starting point X: 114 Starting point Y: 144 Lenght of LINE: 104.00 ** WARNING: Edge contrast less than 20% occurred (Code 91) ** Vert. size of the sensor: 5.32 mm Horiz size of the sensor: 7.18 mm Vert. number of pixels on sensor: 1546.0 Hori. number of pixels on sensor: 2048.0 Nyquist Frequency: 143.10 lp/mm Pixel Size/Spacing (ideal square): 3.5 microMeters Scale (frequency): 1 (9.4 lp/mm) Comput frequency: 26.8 lp/mm Error in frequency: 184.10 % Low frequency (black-white) contrast C(0): 0.50 Contrast at spatia l frequency C(1): 0.50 NOTE: C(f) is NOT less than 0.7*C(0) NOTE: If you reached value near scale = 6 please try NOTE: to use bar 6-20 and clicking on button <<>>. NOTE: If the value is less than 6 please try to use higher frequency. MTF(9.4 lp/mm): 72.2 % CTF(9.4 lp/mm): 92.9 % Num Pixels on the LINE: 103 Angle of LINE: 180.6 deg ----------------------------------------------------------------------------MTF from HOR or VERT visual resolution bars (6-20) ----------------------------------------------------------Measurement No: 2 Starting point X: 112 Starting point Y: 136 Lenght of LINE: 93.00 ** WARNING: Edge contrast less than 20% occurred (Code 91) ** Vert. size of the sensor: 5.32 mm Horiz size of the sensor: 7.18 mm Vert. number of pixels on sensor: 1546.0 Hori. number of pixels on sensor: 2048.0 Nyquist Frequency: 143.10 lp/mm


59Appendix A: (Continued) Pixel Size/Spacing (ideal square): 3.5 microMeters Scale (frequency): 1 (9.4 lp/mm) Comput frequency: 34.9 lp/mm Error in frequency: 271.3 % Low frequency (black-white) contrast C(0): 0.60 Contrast at spatia l frequency C(1): 0.51 NOTE: C(f) is NOT less than 0.7*C(0) NOTE: If you reached value near scale = 6 please try NOTE: to use bar 6-20 and clicking on button <<>>. NOTE: If the value is less than 6 please try to use higher frequency. MTF(9.4 lp/mm): 68.0 % CTF(9.4 lp/mm): 87.3 % Num Pixels on the LINE: 92 Angle of LINE: 180.0 deg ----------------------------------------------------------------------------MTF from HOR or VERT visual resolution bars (6-20) ----------------------------------------------------------Measurement No: 3 Starting point X: 105 Starting point Y: 126 Lenght of LINE: 77.00 ** WARNING: Edge contrast less than 20% occurred (Code 91) ** Vert. size of the sensor: 5.32 mm Horiz size of the sensor: 7.18 mm Vert. number of pixels on sensor: 1546.0 Hori. number of pixels on sensor: 2048.0 Nyquist Frequency: 143.10 lp/mm Pixel Size/Spacing (ideal square): 3.5 microMeters Scale (frequency): 1 (9.4 lp/mm) Comput frequency: 47.10 lp/mm Error in frequency: 410.6 % Low frequency (black-white) contrast C(0): 0.51 Contrast at spatia l frequency C(1): 0.40 NOTE: C(f) is NOT less than 0.7*C(0) NOTE: If you reached value near scale = 6 please try NOTE: to use bar 6-20 and clicking on button <<>>. NOTE: If the value is less than 6 please try to use higher frequency. MTF(9.4 lp/mm): 58.0 % CTF(9.4 lp/mm): 74.0 % Num Pixels on the LINE: 76 Angle of LINE: 180.0 deg ----------------------------------------------------------------------------MTF from HOR or VERT visual resolution bars (6-20) ----------------------------------------------------------Measurement No: 4


60Appendix A: (Continued) Starting point X: 109 Starting point Y: 116 Lenght of LINE: 74.00 ** WARNING: Edge contrast less than 20% occurred (Code 91) ** Vert. size of the sensor: 5.32 mm Horiz size of the sensor: 7.18 mm Vert. number of pixels on sensor: 1546.0 Hori. number of pixels on sensor: 2048.0 Nyquist Frequency: 143.10 lp/mm Pixel Size/Spacing (ideal square): 3.5 microMeters Scale (frequency): 2 (18.8 lp/mm) Comput frequency: 65.8 lp/mm Error in frequency: 250.1 % Low frequency (black-white) contrast C(0): 0.60 Contrast at spatia l frequency C(2): 0.30 MTF(18.8 lp/mm): 34.0 % CTF(18.8 lp/mm): 43.10 % Num Pixels on the LINE: 73 Angle of LINE: 180.0 deg ----------------------------------------------------------------------------MTF from HOR or VERT visual resolution bars (6-20) ----------------------------------------------------------Measurement No: 5 Starting point X: 108 Starting point Y: 107 Lenght of LINE: 68.00 ** WARNING: Edge contrast less than 20% occurred (Code 82) ** Vert. size of the sensor: 5.32 mm Horiz size of the sensor: 7.18 mm Vert. number of pixels on sensor: 1546.0 Hori. number of pixels on sensor: 2048.0 Nyquist Frequency: 143.10 lp/mm Pixel Size/Spacing (ideal square): 3.5 microMeters Scale (frequency): 2 (18.8 lp/mm) Comput frequency: 85.3 lp/mm Error in frequency: 353.8 % Low frequency (black-white) contrast C(0): 0.60 Contrast at spatia l frequency C(2): 0.20 MTF(18.8 lp/mm): 23.0 % CTF(18.8 lp/mm): 29.9 % Num Pixels on the LINE: 67 Angle of LINE: 180.0 deg ----------------------------------------------------------------------------


61Appendix A: (Continued) MTF from HOR or VERT visual resolution bars (6-20) ----------------------------------------------------------Measurement No: 6 Starting point X: 108 Starting point Y: 97 Lenght of LINE: 64.00 ** WARNING: Edge contrast less than 20% occurred (Code 91) ** Vert. size of the sensor: 5.32 mm Horiz size of the sensor: 7.18 mm Vert. number of pixels on sensor: 1546.0 Hori. number of pixels on sensor: 2048.0 Nyquist Frequency: 143.10 lp/mm Pixel Size/Spacing (ideal square): 3.5 microMeters Scale (frequency): 3 (28.2 lp/mm) Comput frequency: 121.2 lp/mm Error in frequency: 329.9 % Low frequency (black-white) contrast C(0): 0.60 Contrast at spatia l frequency C(3): 0.10 MTF(28.2 lp/mm): 10.0 % CTF(28.2 lp/mm): 12.9 % Num Pixels on the LINE: 63 Angle of LINE: 180.0 deg ----------------------------------------------------------------------------MTF from HOR or VERT visual resolution bars (6-20) ----------------------------------------------------------Measurement No: 7 Starting point X: 106 Starting point Y: 87 Lenght of LINE: 59.00 ** ERROR: System could not recognize black white black white ... black pattern ** ** WARNING: System could not recognize whole pattern (before column 46.0) ** ** ADVICE: Please try to use lower frequency ** WARNING: Edge contrast less than 20% occurred (Code 62) ** Vert. size of the sensor: 5.32 mm Horiz size of the sensor: 7.18 mm Vert. number of pixels on sensor: 1546.0 Hori. number of pixels on sensor: 2048.0 Nyquist Frequency: 143.10 lp/mm Pixel Size/Spacing (ideal square): 3.5 microMeters Scale (frequency): 3 (28.2 lp/mm) Comput frequency: 65.8 lp/mm Error in frequency: 133.4 % Low frequency (black-white) contrast C(0): 0.60 Contrast at spatia l frequency C(3): 0.10


62Appendix A: (Continued) MTF(28.2 lp/mm): 13.0 % CTF(28.2 lp/mm): 17.1 % Num Pixels on the LINE: 46 Angle of LINE: 180.0 deg --------------------------------------| Created in PhotoES_AM plugin for ImageJ (by Alexander Mraz) -----------------------------------------------------------


63Appendix B: Sample Report for Signal to Noise Ratio and Optical Density Evaluation Using Photoes_am Plugin for Imagej ----------------------------------------------------------| | Creating File: 30mph 001677file | ----------------------------------------------------------SIGNAL TO NOISE RATIO (MacBeth Target) ----------------------------------------------------------Measurement No: 0 Width: 19 pixels Height: 16 pixels Node coordinate X: 81 pixels Node coordinate Y: 11 pixels Average Black value: 54.1 Average White value: 254.3 Standard Deviation (for active ROI): 2.403 Pixel Count: 304 ----------------------------------------------------------Black SNR: 83.3 Black SNRarea: 1452.5 ----------------------------------------------------------SIGNAL TO NOISE RATIO (MacBeth Target) ----------------------------------------------------------Measurement No: 1 Width: 19 pixels Height: 16 pixels Node coordinate X: 85 pixels Node coordinate Y: 120 pixels Average Black value: 54.1 Average White value: 254.3 Standard Deviation (for active ROI): 1.606 Pixel Count: 304 ----------------------------------------------------------White SNR: 124.6 White SNRarea: 2173.2 ----------------------------------------------------------OPTICAL DENSITY TEST (Edmund Scientific target) Grayscale #1: 0.07 ----------------------------------------------------------Measurement No: 2 Width: 15 Height: 6 Node coordinate X: 154 Node coordinate Y: 119 Average Intensity: 254.966 Optical Density Measured: 0.000 Optical Density Given: 0.07 Optical Density Difference: 0.069 Intensity Difference: 37.926


64Appendix B: (Continued) --------------------------------------------------------------------------------------------------------------------OPTICAL DENSITY TEST (Edmund Scientific target) Grayscale #2: 0.17214 ----------------------------------------------------------Measurement No: 3 Width: 15 Height: 6 Node coordinate X: 154 Node coordinate Y: 112 Average Intensity: 254.566 Optical Density Measured: 0.000 Optical Density Given: 0.17214 Optical Density Difference: 0.171 Intensity Difference: 83.016 --------------------------------------------------------------------------------------------------------------------OPTICAL DENSITY TEST (Edmund Scientific target) Grayscale #3: 0.27428 ----------------------------------------------------------Measurement No: 4 Width: 15 Height: 6 Node coordinate X: 154 Node coordinate Y: 106 Average Intensity: 254.511 Optical Density Measured: 0.000 Optical Density Given: 0.27428 Optical Density Difference: 0.273 Intensity Difference: 118.911 --------------------------------------------------------------------------------------------------------------------OPTICAL DENSITY TEST (Edmund Scientific target) Grayscale #4: 0.37642 ----------------------------------------------------------Measurement No: 5 Width: 15 Height: 6 Node coordinate X: 153 Node coordinate Y: 99 Average Intensity: 230.677 Optical Density Measured: 0.043 Optical Density Given: 0.37642 Optical Density Difference: 0.332 --------------------------------------------------------------------------------------------------------------------OPTICAL DENSITY TEST (Edmund Scientific target) Grayscale #5: 0.47856


65Appendix B: (Continued) ----------------------------------------------------------Measurement No: 6 Width: 15 Height: 6 Node coordinate X: 153 Node coordinate Y: 93 Average Intensity: 182.844 Optical Density Measured: 0.144 Optical Density Given: 0.47856 Optical Density Difference: 0.334 Intensity Difference: 98.124 --------------------------------------------------------------------------------------------------------------------OPTICAL DENSITY TEST (Edmund Scientific target) Grayscale #6: 0.5807 ----------------------------------------------------------Measurement No: 7 Width: 15 Height: 6 Node coordinate X: 153 Node coordinate Y: 86 Average Intensity: 142.633 Optical Density Measured: 0.252 Optical Density Given: 0.5807 Optical Density Difference: 0.328 Intensity Difference: 75.673 --------------------------------------------------------------------------------------------------------------------OPTICAL DENSITY TEST (Edmund Scientific target) Grayscale #7: 0.68284 ----------------------------------------------------------Measurement No: 8 Width: 15 Height: 6 Node coordinate X: 153 Node coordinate Y: 79 Average Intensity: 118.511 Optical Density Measured: 0.332 Optical Density Given: 0.68284 Optical Density Difference: 0.350 Intensity Difference: 65.581 --------------------------------------------------------------------------------------------------------------------OPTICAL DENSITY TEST (Edmund Scientific target) Grayscale #8: 0.78498 ----------------------------------------------------------Measurement No: 9 Width: 15


66Appendix B: (Continued) Height: 6 Node coordinate X: 152 Node coordinate Y: 71 Average Intensity: 106.888 Optical Density Measured: 0.377 Optical Density Given: 0.78498 Optical Density Difference: 0.407 Intensity Difference: 65.048 --------------------------------------------------------------------------------------------------------------------OPTICAL DENSITY TEST (Edmund Scientific target) Grayscale #9: 0.88712 ----------------------------------------------------------Measurement No: 10 Width: 15 Height: 6 Node coordinate X: 152 Node coordinate Y: 63 Average Intensity: 92.011 Optical Density Measured: 0.442 Optical Density Given: 0.88712 Optical Density Difference: 0.444 Intensity Difference: 58.941 --------------------------------------------------------------------------------------------------------------------OPTICAL DENSITY TEST (Edmund Scientific target) Grayscale #10: 0.98926 ----------------------------------------------------------Measurement No: 11 Width: 15 Height: 6 Node coordinate X: 152 Node coordinate Y: 58 Average Intensity: 85.822 Optical Density Measured: 0.472 Optical Density Given: 0.98926 Optical Density Difference: 0.516 Intensity Difference: 59.682 --------------------------------------------------------------------------------------------------------------------OPTICAL DENSITY TEST (Edmund Scientific target) Grayscale #11: 1.0914 ----------------------------------------------------------Measurement No: 12 Width: 15 Height: 6 Node coordinate X: 152 Node coordinate Y: 53


67Appendix B: (Continued) Average Intensity: 78.166 Optical Density Measured: 0.513 Optical Density Given: 1.0914 Optical Density Difference: 0.577 Intensity Difference: 57.506 --------------------------------------------------------------------------------------------------------------------OPTICAL DENSITY TEST (Edmund Scientific target) Grayscale #12: 1.19354 ----------------------------------------------------------Measurement No: 13 Width: 15 Height: 6 Node coordinate X: 152 Node coordinate Y: 48 Average Intensity: 72.711 Optical Density Measured: 0.544 Optical Density Given: 1.19354 Optical Density Difference: 0.648 Intensity Difference: 56.381 --------------------------------------------------------------------------------------------------------------------OPTICAL DENSITY TEST (Edmund Scientific target) Grayscale #13: 1.29568 ----------------------------------------------------------Measurement No: 14 Width: 15 Height: 6 Node coordinate X: 151 Node coordinate Y: 40 Average Intensity: 65.755 Optical Density Measured: 0.588 Optical Density Given: 1.29568 Optical Density Difference: 0.707 Intensity Difference: 52.845 --------------------------------------------------------------------------------------------------------------------OPTICAL DENSITY TEST (Edmund Scientific target) Grayscale #14: 1.39782 ----------------------------------------------------------Measurement No: 15 Width: 15 Height: 6 Node coordinate X: 151 Node coordinate Y: 34 Average Intensity: 63.177 Optical Density Measured: 0.605 Optical Density Given: 1.39782 Optical Density Difference: 0.791


68Appendix B: (Continued) Intensity Difference: 52.977 --------------------------------------------------------------------------------------------------------------------OPTICAL DENSITY TEST (Edmund Scientific target) Grayscale #15: 1.5 ----------------------------------------------------------Measurement No: 16 Width: 15 Height: 6 Node coordinate X: 151 Node coordinate Y: 27 Average Intensity: 61.911 Optical Density Measured: 0.614 Optical Density Given: 1.5 Optical Density Difference: 0.885 Intensity Difference: 53.851 --------------------------------------------------------------------------------------------------------------------| | Created in PhotoES_AM plugin for ImageJ (by Alexander Mraz) | -----------------------------------------------------------