USF Libraries
USF Digital Collections

Digital holography applications in ophthalmology, biometry, and optical trapping characterization


Material Information

Title:
Digital holography applications in ophthalmology, biometry, and optical trapping characterization
Physical Description:
Book
Language:
English
Creator:
Potcoava, Mariana Camelia
Publisher:
University of South Florida
Place of Publication:
Tampa, Fla
Publication Date:
2009
Subjects

Subjects / Keywords:
Three-dimensional tomography
Digital interference holography
Retina
Fingerprinting
Optical tweezers
Dissertations, Academic -- Physics -- Doctoral -- USF   ( lcsh )
Genre:
non-fiction   ( marcgt )

Notes

Summary:
ABSTRACT: This dissertation combines various holographic techniques with applications to the two- and three-dimensional imaging of ophthalmic tissue, fingerprints, and microsphere samples with micrometer resolution. Digital interference holography (DIH) uses scanned wavelengths to synthesize short-coherence interference tomographic images. We used DIH for in vitro imaging of the human optic nerve head and retina. Tomographic images were produced by superposition of holograms, which were obtained with a signal-to-noise ratio of approximately 50 dB. Optic nerve head characteristics (shape, diameter, cup depth, and cup width) were quantified with few-micron resolution (4.06-4.8 microns). Multiple layers were distinguishable in cross-sectional images of the macula. To our knowledge, this is the first reported use of DIH to image human macular and optic nerve tissue. Holographic phase microscopy is used to produce images of thin film patterns left by latent fingerprints. Two or more holographic phase images with different wavelengths are combined for optical phase unwrapping of images of patent prints. We demonstrated digital interference holography images of a plastic print and latent prints. These demonstrations point to significant contributions to biometry by using digital interference holography to identify and quantify Level 1 (pattern), Level 2 (minutiae points), and Level 3 (pores and ridge contours) features. Quantitative studies of physical and biological processes and precise non-contact manipulation of nanometer- and micrometer-scale trapped objects can be carried out with nanometer accuracy thanks to the development of optical tweezers. A three-dimensional gradient trap is produced at the focus of a high-NA microscope objective. Particles are trapped axially and laterally by the gradient force. The particle is confined in a potential well, and the trap acts as a harmonic spring. The elastic constant, or stiffness, along any axis is determined from the particle displacements in time along that axis. Thus, we report the sensing of small particles using optical trapping in combination with digital Gabor holography to calibrate the optical force and the position of the copolymer microsphere in the x, y, and z directions with nanometer precision.
Thesis:
Dissertation (Ph.D.)--University of South Florida, 2009.
Bibliography:
Includes bibliographical references.
System Details:
Mode of access: World Wide Web.
System Details:
System requirements: World Wide Web browser and PDF reader.
Statement of Responsibility:
by Mariana Camelia Potcoava.
General Note:
Title from PDF of title page.
General Note:
Document formatted into pages; contains 181 pages.
General Note:
Includes vita.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 002063963
oclc - 558825207
usfldc doi - E14-SFE0003037
usfldc handle - e14.3037
System ID:
SFS0027354:00001




Full Text

Digital Holography Applications in Ophthalmology, Biometry, and Optical Trapping Characterization

by

Mariana Camelia Potcoava

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Department of Physics, College of Arts and Sciences, University of South Florida.

Major Professor: Myung K. Kim, Ph.D.
Dennis K. Killinger, Ph.D.
Martin Muschol, Ph.D.
George S. Nolas, Ph.D.
David W. Richards, Ph.D.

Date of Approval: June 12, 2009

Keywords: Three-dimensional tomography, digital interference holography, retina, fingerprinting, optical tweezers, Gabor holography

Copyright 2009, Mariana C. Potcoava

Dedication

To my family.

Acknowledgements

There are many people to whom I owe gratitude, and I hope I have included all of them below. First and foremost, I am indebted to my advisor, Dr. Myung K. Kim, who gave me the opportunity to work on very challenging experiments. Under his guidance I have learned about digital holography, digital interference holography (DIH), wavelength scanning, optical coherence tomography (OCT), and optical trapping, and I developed optical imaging instrumentation requiring the integration of various electro-optic subsystems to probe new physics. I would especially like to thank the people who worked closely with me on the DIH project within the last two years: Dr. David Richards, Christine Kay, and Dr. Curtis Margo from the Ophthalmology department at USF. This thesis would not have been possible without their help. I wish to thank all of my committee members, Dennis Killinger, Martin Muschol, George Nolas, and David Richards, for their comments. I especially thank Dr. Richards for all of his suggestions and his kind encouragement to write grant proposals. Thanks are also due to Dr. Kim for signing all documents related to the grant proposal submissions. I greatly appreciate all of their help in proofreading the final copy of this thesis as well. I would also like to thank Dr. Manoug Manougian for chairing my defense examination.

I also thank Dr. Hwang from NIST for valuable discussions about the dynamical imaging of erythrocytes. I would like to thank all of the DHML group members who were working on various other experiments in the lab alongside me: Leo Krzewina, Lingfeng Yu (Frank), Christopher Mann, Nilanthi Warnasooriya, Alexander Khmaladze, and William Ash. Thanks are also due to the support staff at USF; their help over the years was invaluable. Many thanks to my dear friends Mihaela Popa-McKiver and Richard McKiver for their real help when it was needed. Finally, I owe special gratitude to my family members for their constant love and support: to my father for his hard work in choosing good schools for me; to my mom and my sister, who were always there for me; to my husband George, for all his kind support and encouragement; and to my daughter Ana Karina, for the joy she brings.

TABLE OF CONTENTS

LIST OF FIGURES vi
ABSTRACT ix
CHAPTER 1. GENERAL INTRODUCTION 1
  1.1. Holography and Three-Dimensional Imaging 1
  1.2. Ophthalmic Imaging 3
  1.3. Biometry Imaging 7
  1.4. Optical Trapping Imaging 9
  1.5. Research Contribution 10
  1.6. Thesis Organization 12
  1.7. Bibliography 13
CHAPTER 2. SCALAR DIFFRACTION THEORY AND OPTICAL FIELD RECONSTRUCTION METHODS 23
  2.1. Introduction 23
  2.2. Green Functions. The Integral Theorem of Helmholtz and Kirchhoff. The Rayleigh-Sommerfeld Diffraction Formula 25
  2.3. Optical Field Reconstruction Methods 29
    2.3.1. Fresnel Approximation 30
    2.3.2. The Angular Spectrum of a Plane Wave 33
  2.4. Results 34
  2.5. Conclusions 39
  2.6. Bibliography 39
CHAPTER 3. DIGITAL INTERFERENCE HOLOGRAPHY 41
  3.1. Introduction 41
  3.2. Principle of Digital Interference Holography 43
  3.3. Multiple-Wavelength Optical Phase Unwrapping by Digital Interference Holography 46
  3.4. Experimental Setup 47
  3.5. Experimental Calibration 50
  3.6. Conclusions 52
  3.7. Bibliography 52
CHAPTER 4. OPTIMIZATION OF DIGITAL INTERFERENCE HOLOGRAPHY 59
  4.1. Dispersion Compensation-Phase Matching 59
  4.2. Signal-to-Noise Ratio 63
  4.3. Results 63
  4.4. Conclusions 69
  4.5. Bibliography 70
CHAPTER 5. IN-VITRO IMAGING OF OPHTHALMIC TISSUE BY DIGITAL INTERFERENCE HOLOGRAPHY 72
  5.1. Introduction 72
  5.2. Methods 74
  5.3. Theory 76
  5.4. Ophthalmic DIH Scanning System 80
  5.5. Results 82
  5.6. Conclusions 87
  5.7. Bibliography 92
CHAPTER 6. FINGERPRINT BIOMETRY APPLICATIONS OF DIGITAL INTERFERENCE HOLOGRAPHY 94
  6.1. Introduction 94
  6.2. Theory 96
  6.3. Digital Interference Holography Fingerprint Scanner Setup 98
  6.4. Sample Characteristics 100
  6.5. Results 102
  6.6. Conclusions 111
  6.7. Bibliography 112
CHAPTER 7. THREE-DIMENSIONAL SPRING CONSTANTS OF AN OPTICAL TRAP MEASURED BY DIGITAL GABOR HOLOGRAPHY 115
  7.1. Introduction 115
  7.2. Theory 118
    7.2.1. Principle of Digital Gabor Holography 118
    7.2.2. Principle of Optical Trapping 120
    7.2.3. Force Calibration Methods 121
    7.2.4. Computational System for Motion Tracking 124
    7.2.5. Centroid Position Identification Algorithm 124
  7.3. Experimental Setup 126
    7.3.1. Digital Gabor Holography Arm 127
    7.3.2. Optical Trap Arm 128
  7.4. Results 129
  7.5. Conclusions 136
  7.6. Bibliography 138
CHAPTER 8. CONCLUSIONS AND FUTURE WORK 141
  8.1. Conclusions 141
  8.2. Future Work 143
  8.3. Bibliography 150
REFERENCES 152
APPENDICES 154
  Appendix A: Digital Interference Holography Wavelengths Superposition 155
  Appendix B: Diffraction Reconstruction Methods Comparison 160
  Appendix C: Fourier Transform 163
  Appendix D: Digital Interference Holography Computer Interface 166
  Appendix E: Brownian Motion and Optical Trapping Computer Interface 174
  Appendix F: List of Accomplishments 178
ABOUT THE AUTHOR End Page

LIST OF FIGURES

Figure 1.1. Structure of the Retina 6
Figure 2.1. Geometric Illustration for the Helmholtz-Kirchhoff Integral Theorem 27
Figure 2.2. Huygens-Fresnel Principle in Rectangular Coordinates 28
Figure 2.3. Holography of a USAF Resolution Target 36
Figure 2.4. Holography of the Onion Skin 37
Figure 2.5. Holography of a US Coin 38
Figure 3.1. Digital Interference Holography Geometry 45
Figure 3.2. Digital Interference Holography Apparatus 47
Figure 3.3. Rays Diagram 49
Figure 3.4. Polarization Control in Digital Interference Holography 50
Figure 3.5. Tuning Curve of the Rhodamine 6G 51
Figure 4.1. The Reconstructed Volume of the Resolution Target 64
Figure 4.2. Signal-to-Noise-Ratio Improvement 65
Figure 4.3. The Reconstructed Volume of the Retina with Filled Blood Vessels 67
Figure 4.4. The Reconstructed Volume of the Retina with Empty Blood Vessels 67
Figure 4.5. Phase-Matching Demonstration on Human Macula Sample 68
Figure 5.1. Optic Disc Geometry and Parameter Representation 75
Figure 5.2. Sketch of Object, Hologram, and Reconstruction Planes 77
Figure 5.3. Experimental Apparatus 82
Figure 5.4. The Reconstructed Volume of the Human Macula Sample 83
Figure 5.5. The Reconstructed Volume of the Human Optic Nerve Sample. Big FOV 84
Figure 5.6. The Reconstructed Volume of the Human Optic Nerve Sample. Small FOV 86
Figure 5.7. Y-Z Cross Section Images of the Reconstructed Volume of the Human Optic Nerve Sample 87
Figure 6.1. Digital Interference Holography Fingerprint Scanner Setup 100
Figure 6.2. Fingerprints Samples 102
Figure 6.3. Enamel Visible Fingerprints 104
Figure 6.4. Reconstructed Volume of the Plastic Print on a Mixture of Clay and Silver Enamel Sample 105
Figure 6.5. Reconstructed Volume of the Plastic Print on Clay. Small FOV 106
Figure 6.6. Reconstructed Volume of the Plastic Print on Clay. Big FOV 108
Figure 6.7. Reconstructed Volume of the Plastic Cement Print Sample 109
Figure 6.8. Latent Fingerprints. Multiple-Wavelength Optical Phase Unwrapping 111
Figure 7.1. Particle Displacements Histogram 123
Figure 7.2. Centroid Position Identification 125
Figure 7.3. Optical Tweezers Sample Chamber 126
Figure 7.4. Optical Tweezers with Digital Gabor Holography Microscope 127
Figure 7.5. Focused Trapping Light 129
Figure 7.6. Three-Dimensional Single Particle Tracking 130
Figure 7.7. The Mean-Square-Displacement Versus Time Intervals 131
Figure 7.8. The Mean-Square-Displacement in the Z Direction Versus Time Intervals 132
Figure 7.9. 3D Scatterplots of an Optically Trapped Bead 133
Figure 7.10. Equipartition Calibration Method 134
Figure 7.11. Boltzmann Statistics Calibration Method. Potential Well 135
Figure 7.12. Boltzmann Statistics Calibration Method. Spring Constants 135

Digital Holography Applications in Ophthalmology, Biometry, and Optical Trapping Characterization

Mariana Camelia Potcoava

ABSTRACT

This dissertation combines various holographic techniques with applications to the two- and three-dimensional imaging of ophthalmic tissue, fingerprints, and microsphere samples with micrometer resolution.

Digital interference holography (DIH) uses scanned wavelengths to synthesize short-coherence interference tomographic images. We used DIH for in vitro imaging of the human optic nerve head and retina. Tomographic images were produced by superposition of holograms, which were obtained with a signal-to-noise ratio of approximately 50 dB. Optic nerve head characteristics (shape, diameter, cup depth, and cup width) were quantified with few-micron resolution (4.06-4.8 µm). Multiple layers were distinguishable in cross-sectional images of the macula. To our knowledge, this is the first reported use of DIH to image human macular and optic nerve tissue.

Holographic phase microscopy is used to produce images of thin film patterns left by latent fingerprints. Two or more holographic phase images with different wavelengths are combined for optical phase unwrapping of images of patent prints. We demonstrated digital interference holography images of a plastic print and latent prints. These demonstrations point to significant contributions to biometry by using digital interference holography to identify and quantify Level 1 (pattern), Level 2 (minutiae points), and Level 3 (pores and ridge contours) features.

Quantitative studies of physical and biological processes and precise non-contact manipulation of nanometer- and micrometer-scale trapped objects can be carried out with nanometer accuracy thanks to the development of optical tweezers. A three-dimensional gradient trap is produced at the focus of a high-NA microscope objective. Particles are trapped axially and laterally by the gradient force. The particle is confined in a potential well, and the trap acts as a harmonic spring. The elastic constant, or stiffness, along any axis is determined from the particle displacements in time along that axis. Thus, we report the sensing of small particles using optical trapping in combination with digital Gabor holography to calibrate the optical force and the position of the copolymer microsphere in the x, y, and z directions with nanometer precision.

CHAPTER 1
GENERAL INTRODUCTION

This chapter presents a brief history of holography and an overview of existing imaging techniques for biomedical optics, biometry, and optical trapping. A brief review of holography and three-dimensional imaging is presented in Section 1.1. An overview of ophthalmic imaging devices and the structure of the retina are given in Section 1.2. In Section 1.3, fingerprint characteristics and the biometry imaging techniques designed for fingerprint imaging are presented. Optical trapping imaging and its relation to holography are described in Section 1.4. The motivation for this research and a summary of the original contributions of this dissertation are presented in Section 1.5. Finally, Section 1.6 outlines the organization of this thesis.

1.1 Holography and Three-Dimensional Imaging

The principle of holography was introduced by Dennis Gabor [1] in 1948 as a technique in which the wavefronts from an object were recorded and reconstructed in such a way that not only the amplitude but also the phase of the wave field was recovered. Gabor called this interference pattern a hologram, from the Greek word holos, "the whole," because it contained the whole information: the entire three-dimensional wave field as amplitude and phase. In 1967, J. Goodman demonstrated the feasibility of numerical reconstruction of holographic images using a densitometer-scanned holographic plate [2]. Schnars and Jueptner, in 1994, were the first to use a CCD camera connected to a computer as the input, completely eliminating the photochemical process, in what is now referred to as digital holography [3]. Various useful and special techniques have been developed to enhance the capabilities and to extend the range of applications. Phase-shifting digital holography allows elimination of the zero-order and twin-image components even in an on-axis arrangement [6-8]. Optical scanning holography can generate holographic images of fluorescence [9]. Three-channel color digital holography has been demonstrated [10]. Application of digital holography in microscopy is especially important because of the extremely narrow depth of focus of high-magnification systems [11, 12]. Numerical focusing of holographic images can be accomplished from a single exposed hologram. Direct accessibility of the phase information can be utilized for numerical correction of various aberrations of the optical system, such as field curvature and anamorphism [13]. Digital holography has been particularly useful in metrology, deformation measurement, and vibrational analysis [14-16]. Microscopic imaging by digital holography has been applied to imaging of microstructures and biological systems [14, 17-18]. Digital interference holography for optical tomographic imaging [19-24], as well as multiwavelength quantitative phase-contrast digital holography for high-resolution microscopy [25-28], has been demonstrated.

1.2 Ophthalmic Imaging

Examples of noninvasive ocular imaging technologies are scanning laser polarimetry (Retinal Nerve Fiber Analyzer, GDx) [29, 30], confocal scanning laser tomography (Heidelberg Retinal Tomograph) [31], and optical coherence tomography (OCT) [29-44]. For the purposes of this discussion, OCT will be described in order to serve as a comparison to our technology, digital interference holography (DIH). OCT is a non-contact, non-invasive optical imaging technique that uses a low-coherence source to determine the retinal thickness and to image the optic nerve by means of cross-sectional images. OCT is probably the most significant development in ophthalmic imaging in the past decade [32-36]. The most basic form of OCT, time-domain OCT (TDOCT), is based on the interference of low-coherence light in a Michelson interferometer, with the reference mirror moved mechanically in order to scan the z axis. TDOCT generates an interference signal only when the reference mirror is at the same distance as the object's reflecting surface. The distances need to match within the coherence length of the light, which therefore determines the axial resolution. OCT uses a low-coherence, i.e. broadband, light source, such as a tungsten lamp or a superluminescent diode (SLD). OCT is used in clinical practice to create cross-sectional images of the in vivo retina at a resolution of approximately 10-15 microns, taking advantage of the fact that the various layers of the retina vary in their reflectivity [44]. The highest reflection occurs in layers of the retina with cell surfaces and membranes, which include the internal limiting membrane and the retinal pigment epithelium (RPE). Less reflective layers include the inner and outer nuclear layers. OCT imaging has become an important tool in the imaging and evaluation of retinal cross-sectional anatomy, allowing retinal specialists to diagnose diseases such as epiretinal membrane and macular hole, and to monitor conditions such as macular edema with objective measurements. It also supplies reproducible estimates of retinal thickness with accuracy not previously possible. New developments in OCT with resolution under 10 µm include spectral-domain OCT (SD-OCT), where the mechanical z-scanning of TDOCT is replaced with spectral analysis, and swept-source OCT (SS-OCT), where the spectral analysis is replaced with wavelength scanning of the light source [37-43]. An axial resolution of 1-2 µm has been reported using a femtosecond laser [39]. TDOCT provides the necessary resolution, but the images are two-dimensional only. The newer developments of FDOCT and SSOCT can now generate B-scan (cross-sectional) images at video rate, but to image one square centimeter of the posterior pole of the retina without interpolation, at least 1000 linear OCT scans are required, and these have to be re-assembled by computer to give a 3D volume image. In our lab [45], this approach was demonstrated for surface and sub-surface imaging of biological tissues, based on the principle of wide-field optical coherence tomography (WFOCT) and capable of providing full-color three-dimensional views of a tissue structure with about 10 µm axial resolution, about 100-200 µm penetration depth, and 50-60 dB dynamic range. The WFOCT technique is similar to OCT but, without x-y scanning, provides full color information. The experiments were performed in three color channels (three LEDs: red, blue, and green) and the results were combined to generate the contour and the tissue structures of the specimen in their natural color.
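For reference, the link between source bandwidth and axial resolution invoked above is usually quantified with the standard Gaussian-spectrum estimate (this relation is added here for orientation; it is not stated in the text and is not specific to the instruments described):

\[
\delta z \;\approx\; \frac{2\ln 2}{\pi}\,\frac{\lambda_0^{2}}{\Delta\lambda},
\]

where \(\lambda_0\) is the center wavelength and \(\Delta\lambda\) is the full width at half maximum of the source spectrum. A broader bandwidth, or equivalently a wider scanned wavelength range in swept-source OCT and in DIH, therefore yields a finer axial resolution.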

The digital interference holography (DIH) technique is based on an original numerical method developed in the DHM Laboratory of the Physics Department at USF, in which the three-dimensional microscopic structure of a specimen is reconstructed from a succession of holograms recorded using an extended group of scanned wavelengths. DIH technology will be explained more elaborately in Chapter 3.

1.2.1 Structure of the Retina

Glaucoma is a group of eye diseases in which vision is lost due to damage of the optic nerve. More precisely, the pathologic process results in the loss of retinal ganglion cells and their axons in the retinal nerve fiber layer, resulting in thinning of the retinal nerve fiber layer (RNFL) [46, 47]. A yellowish-white ring surrounding the optic disk, indicating atrophy of the choroid in glaucoma, is called glaucomatous. Measurement of macular thickness is not only important in the diagnosis and monitoring of macular diseases; it has also been found to be useful in evaluating glaucomatous changes, since up to seven layers of retinal ganglion cells are located at the macula [48, 49]. In Figure 1.1A, from top to bottom, the layers are: Inner Limiting Membrane, Nerve Fiber Layer, Ganglion Cell Layer, Inner Plexiform Layer, Inner Nuclear Layer, Outer Plexiform Layer, Outer Nuclear Layer, Inner and Outer Segments of Photoreceptors, Retinal Pigment Epithelium, and Choroid. The ganglion cell layer is the layer with dark red nuclei (second blue arrow from the top). Another arrangement of the retinal layers, showing the basic circuitry of the retina, is illustrated in Figure 1.1B and Figure 1.1C [50]. OCT cannot image these nuclei. The best that current OCT can do is measure the thickness of the "ganglion cell complex," which consists of the top three layers (top three blue arrows of Figure 1.1A). Adaptive optics cannot do it either. For the diagnosis and management of glaucoma, we would like to have maps of the density of ganglion cells as a function of location in the back of the eye. The challenge for the future will be to develop a 3D imaging technology to identify what percentage of cells is lost due to retinal damage.

Figure 1.1. Structure of the Retina. (A) Section of retina (Kansas University Medical Center). (B) Section of the retina showing the overall arrangement of retinal layers. (C) Diagram of the basic circuitry of the retina. A three-neuron chain (photoreceptor, bipolar cell, and ganglion cell) provides the most direct route for transmitting visual information to the brain. Horizontal cells and amacrine cells mediate lateral interactions in the outer and inner plexiform layers, respectively. The terms inner and outer designate relative distances from the center of the eye (inner, near the center of the eye; outer, away from the center, or toward the pigment epithelium).

1.3 Biometry Imaging

Available biometric technologies rely on the recognition of DNA residue, face, voice, iris, signature, hand geometry, and fingerprints. Depending on the complexity of the sensing method, these technologies may be classified in terms of accuracy, simplicity, acceptability, and stability. One of the simplest and most acceptable human authentication methods is fingerprint recognition. Ancient Babylonian and Chinese civilizations used fingerprint impressions as a method to sign documents. Later, in the 1880s, the first fingerprint considerations were published by Henry Faulds in Nature [51]. He collected fingerprints from different nationalities and concluded that a copy of the forever-unchangeable finger furrows may assist in the identification of criminals. After a few years of experimental work, the Galton-Henry system of fingerprint classification was published and quickly introduced in the USA in 1901 for criminal-identification records [52]. Fingerprint recognition systems can be classified into four main methods: ink-technique, solid-state, ultrasonic, and optical. The traditional ink technique is based on using liquids and powder to enhance the contrast of the print's template [53]. The solid-state sensor method uses an array of sensing elements such as pyro-electric material (thermal-type), piezoelectric material (pressure-type), or capacitor electrodes (capacitance-type), covered with a hard protective layer. For example, the thermal-type sensing technique is based on the temperature differences between the surface of the finger (ridges/valleys) and each thermo-element sensor. The temperature difference data are read by a sensor that performs an 8-bit analog-to-digital (AD) conversion to output an image of the fingerprint. The cross-sectional reconstruction of a silicone rubber fingerprint model was performed with a valley width of 100 µm and a height of 50 µm [54]. Ultrasonic scanners [55] use sound waves to see through skin, fat, and tissue. The difference in acoustic impedance between the finger pattern and the plate is obtained, and the echo signal is recorded by the receiver and transformed into ridge-depth data. This technology allows imaging of difficult prints because the quality of the images is not affected by dirt, grease, or grime. Fingerprint sensing by optical sensors has been used since 1970. The first optical sensor was based on total internal reflection (TIR): the finger is illuminated through a prism and a reflectance profile of the object is built from the light reflected by the fingerprint. For instance, the LightPrintTM sensor, developed by Lumidigm, uses an optical sensor based on TIR. The skin layers are scanned by a range of wavelengths to improve the quality of the images under different skin conditions and to improve the spoofing protection of the scanners [56]. An application of optical polarization was demonstrated [57] to enhance the visibility of the latent fingerprint without using any chemical treatment. A novel optical coherence tomography-based system was demonstrated for depth-resolved 2-D and 3-D imaging, providing information on both artificial and natural ridge and furrow patterns simultaneously [58-60]. More recently, another scanner, full-field swept-source optical coherence tomography, uses a combination of a superluminescent broadband light source and an acousto-optic tunable filter; the light source is tuned to operate at different wavelengths. This scanner was used in forensic science to image the three-dimensional structure of latent fingerprints [61-63].

1.3.1 Fingerprint Patterns

Features of fingerprints can be classified into three levels [53, 64-69]. Level 1 features refer to the pattern type, such as arch, tented arch, left loop, right loop, double loop, and whorl. Level 2 features are formed when the ridge flow is interrupted by some irregularity; these are known as minutiae. Examples of minutiae are bifurcation, ending, line-unit, line-fragment, eye, and hook. Level 3 features include other dimensional characteristics such as pores, creases, line shape, incipient ridges, scars, and warts.

1.4 Optical Trapping Imaging

Matter-light interaction reveals physical phenomena and object characteristics by monitoring the fluctuations of an optically trapped object about equilibrium. Optical trapping microscopes can be classified according to the illumination method, optical trapping scheme, optical detection mode, and application. Position-tracking algorithms and the trapping light (laser) are an integral part of the applications, and they are chosen as a function of the object being characterized. Commercial optical trapping systems are preferable due to the flexibility of being attached to any microscope arm, but home-made systems are more convenient, giving the possibility of upgrading the system easily at low cost. Starting from simple configurations of one or two trapping laser beams [70-72] used to cool and trap neutral atoms, optical trapping systems have become sophisticated devices. The invention of the laser and the ability to control object position by applying piconewton forces have found applications in physics and biology. The main use of the optical trap is the manipulation of biological structures to study molecular motors and the physical properties of DNA [73, 74]. Optical sorting tweezers use an optical lattice to sort cells by size and by refractive index [75, 76]. The evanescent field and, more recently, surface plasmon waves propel microparticles along their propagating path [77, 78]. Optofluidics is a joint technology between microfluidics and micro-photonics; optical control of microfluidic elements using optical tweezers has also been reported [79]. Another application of optical trapping techniques is in integrated lab-on-a-chip technologies, where optical force landscapes are highly desirable to manipulate multiple microparticles in parallel [80]. Position detection, trapping-beam alignment, and the high-NA microscope are the most challenging parts of the trapping system. Position detection [81] is possible using video-based position detection (CCD) [82, 83], imaging position detection (QPD), laser-based position detection (QPD with back-focal-plane detection of the laser beam), and axial position detection technologies. Video-based position detection is limited by the unavailability of cameras with high video acquisition rates. The benefit of this method is that the trapped sample can be imaged directly onto a CCD camera, which makes it desirable for holography applications.

1.5 Research Contribution

The motivation of this work has been to develop and characterize optical imaging instruments for ophthalmology, biometry, and optical trapping, based on the latest developments in digital holography.

PAGE 25

11 My early work focused primarily on developing a retinal sc anner, based on Digital Interference Holography. The developm ent of this instrument requires electrooptic system integration including software de velopment, as well as an understanding of biological specimens behavior, morphol ogy, and physiology. Holograms acquisition, optical field reconstruction, and optical field superposition programs were developed to characterize the sample under study. This in strument uses high-speed, non-contact, noninvasive technology, has no mechanical moving pa rts, has an axial resolution better than 5 m, and signal-to-noise ratio (SNR) of about 50 dB. To achieve these characteristics, the calibration scheme was modified by intr oducing a phase-matching technique that accounts for the dispersion in the system. A pha se variable was introduced that minimizes the errors resulting from phase mismatch. Calibration experiments using a resoluti on target demonstrates improvement of SNR with increasing number of holograms consistent with th eoretical prediction. Imaging experiments on pig retinal tissue rev eal topography of blood vessels as well as optical thickness profil e of the retinal layer [84, 94]. We reported for the first time the use of DIH to image human macular and optic ne rve tissue [85-93, 95, 96]. This might be of significance to researchers and clinicians in the diagnosis and treatment of many ocular diseases, including glaucoma and a variety of macular diseases. DIH also offers phase unwrapping capability. By choosing appropriate wavelengths, the beat wavelength can be made large enough to cover the range of optical thickness of the object being imaged. Together with various techniques such as low coherence tomography and digita l holography microscopy, we also demonstrated the use of DIH for imaging fingerprints [86].
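One plausible way to read the theoretical prediction mentioned above (a standard noise-averaging argument, offered here as an assumption rather than as the dissertation's own derivation) is that when N holographic fields with uncorrelated noise are superposed, the signal amplitude grows as N while the noise amplitude grows only as the square root of N, so that

\[
\mathrm{SNR}(N) \;\propto\; \frac{N^{2}}{N} \;=\; N,
\qquad
\Delta\mathrm{SNR}\big|_{\mathrm{dB}} \;=\; 10\,\log_{10} N,
\]

i.e. doubling the number of superposed holograms would be expected to buy roughly 3 dB of signal-to-noise ratio.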

Most recently, I have focused primarily on the design and characterization of holographic optical tweezers for trapping and manipulating microspheres undergoing Brownian motion [87]. Hologram acquisition, optical field reconstruction, particle tracking, and statistics programs were developed to characterize the trapped particle. The future goal of this project is to develop a new tool to study how cells ingest foreign particles through the process known as phagocytosis, or to understand a variety of other biophysical processes.

1.6 Thesis Organization

This dissertation is organized in the following way. Chapter 2 presents scalar field theory and discusses the reconstruction of the optical field by the angular spectrum method and the Fresnel approximation. Chapter 3 discusses in more detail the theoretical background of digital interference holography, the experimental apparatus, and calibration. Chapter 4 covers the optimization methods of the digital interference holography system. In Chapter 5, the in-vitro imaging of ophthalmic tissue by digital interference holography is presented. The application of digital interference holography in biometry is presented in Chapter 6. The digital Gabor holography microscope, together with the optical trapping apparatus, is described and experimental results are presented in Chapter 7. Major conclusions and future directions are summarized in Chapter 8.
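Referring back to the trap characterization summarized in Section 1.5 and developed in Chapter 7, the stiffness calibration from tracked particle positions can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the dissertation's own code: it uses the standard equipartition relation (the "Equipartition Calibration Method" of Figure 7.10), hypothetical array names and stiffness values, and simulated positions in place of the holographic tracking output.

```python
import numpy as np

k_B = 1.380649e-23  # Boltzmann constant, J/K

def trap_stiffness(positions, temperature=295.0):
    """Equipartition estimate of the trap stiffness along each axis.

    positions: (M, 3) array of tracked bead coordinates (x, y, z) in meters.
    Returns (kx, ky, kz) in N/m from k_i * <(q_i - <q_i>)^2> = k_B * T.
    """
    variance = np.var(positions, axis=0)
    return k_B * temperature / variance

if __name__ == "__main__":
    # Simulated stand-in for the positions recovered by holographic particle tracking.
    rng = np.random.default_rng(0)
    true_k = np.array([2.0e-6, 2.0e-6, 5.0e-7])    # hypothetical stiffnesses, softer along z
    sigma = np.sqrt(k_B * 295.0 / true_k)          # thermal position spread per axis
    tracked = rng.normal(0.0, sigma, size=(20000, 3))
    print(trap_stiffness(tracked))                 # should recover values close to true_k
```

The same position records also feed the mean-square-displacement and Boltzmann-statistics calibrations listed for Chapter 7 in the table of contents.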


13 1.7 Bibliography [1] Gabor D 1971 Holography Nobel Lecture http://nobelprize.org/nobel _prizes/physics/laureat es/1971/gabor-lecture.pdf [2] Goodman J W and Lawrence R W 1967 Digital image formation from electronically detected holograms Appl. Phys.Lett. 11 77-79 [3] Schnars U 1994 Direct phase determination in hologram interferometry with use of digitally recorded holograms J. Opt. Soc. Am A 11 2011-5 [4] Schnars U and Jueptner W 1994 Direct recording of holograms by a CCD target and numerical reconstruction Appl.Opt. 33 179 [5] Schnars U and Jueptner W 2002 Digital re cording and numerical reconstruction of holograms Meas. Sci. Technol. 13 R85-R101 [6] Yamaguchi I, Kato J, Ohta S and Miz uno J 2001 Image formation in phase-shifting digital holography and applications to microscopy Appl. Optics 40 6177-86 [7] Yamaguchi I and Zhang T Phase-shifting dig ital holography 1997 Opt. Lett. 22 1268 [8] Zhang T and Yamaguchi I 1998 Three-dimensional microscopy with phase-shifting digital holography Opt. Lett. 23 1221 [9] Poon T C 2003 Three-dimensional image processing and optical scanning holography Adv. Imaging & Electron Phys 126 329-50 [10] Yamaguchi I, Matsumura T and Kato J 2002 Phase-shifting color digital holography Opt. Lett. 27 1108 [11] Barty A, Nugent K A, Paganin D and Roberts A 1998 Quantitative optical phase microscopy Opt. Lett. 23 817


14 [12] Cuche E, Bevilacqua F and Depeursi nge C 1999 Digital holography for quantitative phase-contrast imaging 1999 Opt. Lett. 24 291 [13] Ferraro P, De Nicola S, Finizio A, C oppola G, Grilli S, Magr o C and Pierattini G 2003 Compensation of the inherent wave front curvature in digital holographic coherent microscopy for quantitative phase-contrast imaging Appl. Opt. 42 1938-46 [14] Xu M L, Peng X, Miao J and Asundi A 2001 Studies of digital microscopic holography with applications to microstructure testing Appl. Opt. 40 5046-51 [15] Pedrini G and Tiziani H J 1997 Quantitativ e evaluation of two-dimensional dynamic deformations using digital holography Opt. Laser Technol 29 249 [16] Picart P, Leval J, Mounier D and G ougeon S 2005 Some opportunities for vibration analysis with time averaging in digital Fresnel holography Appl. Opt 44 337 [17] Haddad W S, Cullen D, Solem J C, Lo ngworth J W, McPherson A, Boyer K and Rhodes C K 1992 Fourier-transform holographic microscope Appl. Opt. 31 4973-8 [18] Xu W, Jericho M H, Meinertzhagen I A and Kreuze r H J 2001 Digital in-line holography for biological applications Proc. Natl. Acad. Sci. USA 98 11301-05 [19] Kim MK 1999 Wavelength scanning digital interfer ence holography for optical section imaging Opt. Letters 24 1693 [20] Kim MK 2000 Tomographic three-dimensional imaging of a biological specimen using wavelength-scanning digi tal interference holography Opt. Express 7 305-10 [21] Dakoff A, Gass J and Kim M K 2003 Mi croscopic three-dimensional imaging by digital interference holography J. Electr. Imag 12 643-647 [22] Yu L, Myung M K 2005 Wa velength scanning digital in terference holography for variable tomographic scanning Opt. Express 13 5621-7


15 [23] Yu L, Myung M K 2005 Wa velength-scanning digital in terference holography for tomographic 3D imaging using the angular spectrum method Opt. Lett. 30 2092 [24] Kim M K, Yu L and Mann C J 2006 Inte rference techniques in digital holography J. Opt. A: Pure Appl. Opt. 8 512-23 [25] Gass J, Dakoff A and Kim M K 2003 Phase imaging without 2-pi ambiguity by multiwavelength digital holography Opt. Lett. 28 1141-3 [26] Mann C J, Yu L, Lo C M and Kim M K 2005 High-resolution quantitative phasecontrast microscopy by digital holography Opt. Express 13 8693-98 [27] Parshall D and Kim M K 2006 Digita l holographic micr oscopy with dual wavelength phase unwrapping Appl. Opt. 45 451-59 [28] Mann C, Yu L and Kim M K 2006 Movies of cellular and sub-cellular motion by digital holographic microscopy Biomed. Engg. Online 5 21 [29]. Chiseli D, Danielescu C and Apostol A. Correlation between structural and functional analysis in glaucoma suspects. Oftalmologia. 2008; 52:111-8. [30]. Zaveri MS, Conger A, Salter A, et al Retinal imaging by laser polarimetry and optical coherence tomography evidence of axonal degeneration in multiple sclerosis. Arch Neurol 2008; 65:924-8. [31]Ycel YH, Gupta N, Kalichman MW, et all. Relationship of Optic Disc Topography to Optic Nerve Fiber Number in Glaucoma. Arch Ophthalmol. 1998;116:493-497. [32]. Huang D, Swanson EA, Lin CP, et al. Optical coherence tomography. Science 1991; 254:1178-1181. [33]. Brezinski M. Optical Coherence Tomography. Principles and Applications. Burlington, MA: Elsevier; 2006.


16 [34]. Wojtkowski M, Ba jraszewski T, Targowski P, et al Real-time in vivo imaging by high-speed spectral optical coherence tomography. Opt. Lett. 2003; 28: 1745-1747. [35]. Podoleanu AG, Rogers JA, Jackson DA, Dunne S. Three-dimensional OCT images from retina and skin. Opt Exp. 2000;7:292. [36]. Rogers JA, Podoleanu AG, Dobre GM Jackson DA, Dunne S. Topography and volume measurements of the optic nerve using en-face optical coherence tomography. Opt Exp. 2001;9:533. [37]. Schuman JS, Puliafito CA, Fujimoto JG. Optical Coherence Tomography of Ocular Diseases. 2. Thorofare, NJ: SL ACK Inc; 2004: 21. [38]. Wojtkowski M, Srinivasan V, Fujimoto JG et all. Three-dimensional Retinal Imaging with High-Speed Ultrahigh-Re solution Optical Coherence Tomography. Ophthalmology. 2005; 112: 1734-1746. [39]. Drexler W, Morgner U, Ghanta RK Schuman JS, Krtner FX, Fujimoto JG. Ultrahigh resolution ophthalmologic optical coherence tomography. Nat Med. 2001;7:502. [40]. Srinivasan VJ, Gorczynska, and Fujimoto GJ. High-speed, high-resolution optical coherence tomography retinal imaging with frequency-swept laser at 850 nm. Opt. Lett. 2007:32: 361-363. [41]. Srinivasan VJ, Ko TH, Wojtkowski M, et al. Noninvasive Volumetric Imaging and Morphometry of the Rodent Retina with High-Speed, Ultrahigh-Resolution Optical Coherence Tomography. Invest. Ophthalmol.Vis. Sci. 2006; 47:5522-5528.


17 [42]. Yasuno Y, Hong Y, Makita S, et al. In vivo high-contrast imag ing of deep posterior eye by 1m swept source optical coherence tomogr aphy and scattering optical coherence angiography. Opt. Express 2007; 15: 6121-6139. [43]. Wollstein G, Paunescu LA, Ko TH. Ultrahigh-resolution optical coherence tomography in glaucoma. Ophthalmology. 2005;112:229. [44]. Dubois A, Vabre L, Boccara AC and Beaurepaire E. High-resolution full-field optical coherence tomography with Linnik microscope. Appl. Opt. 2002; 41: 805-812. [45]. Lingfeng Yu and M.K. KimFull-color three-dimensional microscopy by wide-field optical coherence tomography, Vol. 12, No. 26 / OPTICS EXPRESS 6632` 2004 [46]. Quigley HA, Dunkelberger GR, Green WR Retinal ganglion cell atrophy correlated with automated perimetry in human eyes with glaucoma. Am J Ophthalmol. 107, 453467 (1989). [47]. Cense B, Chen TC, Pierce MC, De Boer JF. Thickness and Birefringence of Healthy Retinal Nerve Fiber Layer Tissue Measured with Polarization-Sensitive Optical Coherence Tomography. Investigative Opht halmology & Visual Science, 45, 2606-2612 (2004). [48]. Leung CK, Chan WM, Yung WH, et al. Comparison of macular and peripapillary measurements for the detection of glauco ma: an optical coherence tomography study. Ophthalmology 112, 391-400 (2005). [49]. Wollstein G, Ishikawa H, Wang J, et al. Comparison of th ree optical coherence tomography scanning areas for detection of glaucomatous damage. Am J Ophthalmol. 139, 39 (2005). [50]. Dale Purves et al. ,Neuroscience, second edition, 2001.


18 [51]. H. Faulds, On the Skin-furrows of the Hand, Nature 22 605 (1880). [52]. F. Galton, Pers onal Identification and Description, Nature 38, 201-202 (1888). [53]. A. M Knowles, Aspects of physicochemical methods fo r the detection of latent fingerprints, Phys. E: Sci. Instrum. 11, 713-721 (1978). [54]. J. Han, Z. Tan, K Sato and M Shikida, Thermal characteriza tion of micro heater arrays on a polyimide film substrate for fingerprint sensing applica tions, J. Micromech. Microeng. 15 282-289 (2005). [55]. M. Pluta, W. Bicz, Ultrasonic Se tup for Fingerprint Pa tterns Detection and Evaluation, Acoustical Imaging 22, Plenum Press (1996) [56]. R. K. Rowe, S. P. Cocoran, K. A. Nixon, and R. E. Nostrom, Multispectral Fingerprint Biomet rics, Proc. SPIE 5694, 90 (2005). [57]. S. S. Lin, K. M. Yemelyanov, E. N. Pugh Jr., N. Engheta, Polarizationand Specular-Reflection-Based, Non-co ntact Latent Fingerprint Imag ing and Lifting, J. Opt. Soc. Am. A 23, 2137-2153 (2006). [58]. S. Chang, Y. Mao, S. Sherif and C. Flueraru, Full-fiel d optical coherence tomography used for security and document identity, Proc. of SPIE, 6402, 64020Q (2006). [59]. Y. Cheng and K. V. Larin, Artific ial fingerprint recognition by using optical coherence tomography with autocorr elation analysis, Appl. Opt. 45, 9238-9245 (2006). [60]. Y. Cheng and K. V. Larin, In Vivo Twoand Three-Dimensional Imaging of Artificial and Real Fingerprints With Op tical Coherence Tomography, Photonics Technology Letters, 19, 1634-1636 (2007).


19 [61]. S. Chang, Y. Cheng, K.V. Larin, Y. Ma o1, S. Sherif, and C. Flueraru, Optical coherence tomography used for security and finger print-sensing applic ations, IET Image Process., 2, 48 (2008). [62]. S. K. Dubey, T. Anna, C. Shakher, and D. S. Mehta, Fingerprint detection using full-field swept-source optical coherence, Tomography, Appl. Phys. Lett. 91, 181106 (2007). [63]. S. K. Dubey, D. S. Mehta, A. Ana nd and C. Shakher, Simultaneous topography and tomography of latent fingerprints usi ng full-field swept-source optical coherence tomography, J. Opt. A: Pure Appl. Opt. 10, 015307 (2008). [64]. L. OGorman, Overview of fingerpri nt verification technologies, Elsevier Information Security Technical Report 3, (1998). [65]. A. K. Jain, L. Hong, S. Pankanti, and R. Bolle, An Identity-A uthentication System Using Fingerprints, Proc. of the IEEE 85, 1365-1388 (1997). [66]. A. K. Jain, J. Feng, A. Nagar a nd K. Nandakumar, On Matching Latent Fingerprints, Workshop on Biometrics, CVPR, (2008). [67]. U. Park, S. Pankanti and A. K. Ja in, "Fingerprint Veri fication Using SIFT Features, Proc. of SPIE Defense and Security Symposium, (2008). [68]. Y. Zhu, S.C. Dass and A.K. Jain, "Statistical Models for Assessing the Individuality of Fingerprints", IEEE Transactions on Information Forensics and Security, 2, 391-401 (2007). [69]. A. K. Jain, Y. Chen, M. Demirkus, P ores and Ridges: HighResolution Fingerprint Matching Using Level 3 Featur es, IEEE Transactions on Pa ttern Analysis and Machine Intelligence 29, 15-27 (2007).


20 [70]. A. Ashkin, Acceleration and trapping of particles by radiation pressure, Phys. Rev. Lett. 24 156-159, (1970). [71]. A. Ashkin and J..M. Dziedzic, Optical levitation by radiation pressure, Appl. Phys. Lett., 1971. [72]. A. Ashkin, J..M. Dziedzic, J.E. Bjor kholm, and S. Chu, Observation of a singlebeam gradient force optical trap for diel ectric particles, Optics Lett., (1986). [73]. J.C.H. Tan and R.A. Hitchings, Invest. Ophthalmol.Vis. Sci. 44 1132 (2003). [74]. K.H. Min, G.J. Seong, Y.J. Hong, et al Kor. J. Ophthalmol 19 189 (2005). [75]. J. Xu, H. Ishikawa, G. Wollstein, et al Invest. Ophthalmol.Vis. Sci. 49 2512 (2008). [76]. O. Geyer, A. Michaeli-Cohen, D.M. Silver, et al Br. J. Ophthalmol. 82 14 (1998). [77. F.S. Mikelberg, Can. J. Ophthalmol. 42 421 (2007). [78]. J.B. Jonas, G.C. Gusek, and G.O.H. Naumann, Invest. Ophthalmol.Vis. Sci. 29 1151 (1998). [79]. N.V. Swindale, G. S tjepanovic, A. Chin, et al. Invest. Ophthalmol.Vis. Sci. 41 1730 (2000). [80]. C. Bowd, L.M. Zangwill, E.Z. Blumenthal, et al. J. Opt. Soc. Am. A 19 197 (2002). [81]. K. C. Neuman and S. M. Blocka, Optical trapping, Rev Sci Instrum 2004 September ; 75(9): 2787. [82]. Gosse C, Croquette V. Bi ophys J 2002;82:3314. [PubMed: 12023254] [83]. Keller M, Schilling J, Sackmann E. Rev Sci Instrum 2001;72:3626. [84]. M. C. Potcoava and M.K. Kim, Optic al tomography for biomed ical applications by digital interference hologr aphy Meas. Sci. Technol Vol. 19, 074010 (2008).


21 [85]. Mariana C. Potcoava, Christine N. Kay, Myung K. Kim, and David W. Richards, Digital Interference Hologr aphy in Ophthalmology, J ournal of Modern Optics. (Accepted). [86]. M. C. Potcoava and M.K. Kim, Fingerprint Biometry A pplications Digital Interference Holography and Low-Cohere nce Interferography, Applied Optics (In Review). [87]. M. C. Potcoava, L. Kr ewitza and M.K. Kim, Brownia n motion of optically trapped particles by digital Gabor hol ography (In preparation). [88]. Myung K. Kim and Mariana Potcoava, Fingerprint Biometry Applications of Digital Holography and Low-Coherence Interf erence Microscopy in Digital Holography and Three-Dimensional Imaging, (Op tical Society of America, 2009). [89]. M. C. Potcoava and M.K. Kim, Fi ngerprints scanner usi ng Digital Interference Holography , in Biometric Technology for Human Identification VI, (SPIE Defense, Security, and Sensing 2009), paper presentation 7306B-80. [90]. Mariana C. Potcoava, Myung K. Kim, Christine N. Kay, Wavelength scanning digital interference hologra phy for high-resolution ophthalmic imaging, in Ophthalmic Technologies XIX, (SPIE 2009 BiOS), paper presentation 7163-10. [91]. Kay CN, Potcoava M, Kim MK, Rich ards DW. Digital Holography Imaging of Human Macula, Florida Society of Ophtha lmology Resident Symposium. Palm Beach, FL, 2008 (Second place), paper presentation. [92]. M.C. Potcoava, C.N. Ka y, M.K. Kim, D.W. Richards. Digital Interference Holography in Ophthalmology, ARVO 2008, paper presentation 4011.


22 [93]. M. C. Potcoava and M. K. Kim, "3-D Representation of Retinal Blood Vessels through Digital Interference Holography," in Digital Holography and Three-Dimensional Imaging, OSA Technical Digest (CD), (Op tical Society of America, 2008), paper presentation DMB2. [94]. M. C. Potcoava and M. K. Ki m, "Animal Tissue Tomography by Digital Interference Holography," in Adaptive Op tics: Analysis and Methods/Computational Optical Sensing and Imaging/Information Photonics/Signal Recovery and Synthesis Topical Meetings on CD-ROM, OSA Technical Digest (CD) (Optical Society of America, 2007), paper presentation DWC6. [95]. C. Potcoava,Digital In terference Holography in the 21 st Century, USF Graduate Research Symposium 2008, poster presentation, (F irst Place). [96]. Kay CN, Potcoava M, Kim MK, Rich ards DW, Pavan PR. Digital Holography Imaging of Human Macula, ASRS (American Society of Retina Specialists 2008), poster presentation.

CHAPTER 2
SCALAR DIFFRACTION THEORY AND OPTICAL FIELD RECONSTRUCTION METHODS

This chapter reviews numerical reconstruction algorithms for digital holography, with emphasis on the angular spectrum method and the Fresnel approximation method. A brief review of diffraction principles is presented in Section 2.1. Numerical reconstruction methods are reviewed in Section 2.2. In Section 2.3, a comparison between the angular spectrum method and the Fresnel transform is presented. Results of the two reconstruction methods are shown in Section 2.4. Conclusions are presented in Section 2.5.

2.1. Introduction

The first definition of diffraction was given by Sommerfeld [1]: any deviation of light rays from rectilinear paths which cannot be interpreted as reflection or refraction. The explanation of this phenomenon was offered by Christiaan Huygens as an answer to the question of why the transition from light to shadow is gradual rather than abrupt [2]. After Thomas Young introduced the concept of interference, progress in further understanding diffraction was made in 1818 by Fresnel, who made assumptions about the amplitude and phase of Huygens' secondary sources. He also calculated the distribution of light in diffraction patterns with excellent accuracy and introduced the obliquity, or inclination, factor in order to account for the deficiency in the back-propagating wave. The ideas of both Huygens and Fresnel were put together by Kirchhoff in a mathematical description of the boundary values of the light incident on the surfaces [3]. Kirchhoff formulated the so-called Huygens-Fresnel principle, which must be regarded as a first approximation. The difficulties of this theory arise because the boundary conditions must be imposed both on the field strength and on its normal derivative. The Rayleigh-Sommerfeld diffraction theory eliminates the use of the light amplitude at the boundary by making use of the theory of Green's functions [4, 5]. The Kirchhoff and Rayleigh-Sommerfeld theories require the electromagnetic field to be treated as a scalar phenomenon, the diffracting aperture to be large compared with a wavelength, and the diffracted fields not to be observed too close to the aperture [6].

This study will be presented as a scalar theory, ignoring the vectorial nature of the electric and magnetic fields that make up light waves. The vectorial nature becomes important in dealing with polarization and non-isotropic media. On solving Maxwell's wave equation, the electromagnetic wave has the form $E(x,y,z;t) = u(x,y,z)\,e^{-i\omega t}$, where $u(x,y,z)$ is the complex amplitude of the wave and $e^{-i\omega t}$ is its absolute phase time variation (see Appendix A). To apply the scalar theory, one assumes that the polarization direction of the field, given by a constant unit vector $\hat{\varepsilon}$, does not vary, so that the vector field $\vec{u}(x,y,z) = \hat{\varepsilon}\,u(x,y,z)$ reduces to the scalar field $u(x,y,z)$; consequently, the spatial part of the electromagnetic wave satisfies the scalar Helmholtz equation:

\[
(\nabla^{2} + k^{2})\,u(x,y,z) = 0,
\tag{2.1}
\]

where $k = \omega/c$ is the wavenumber, $\omega$ is the frequency of the light, $c$ is the speed of light in vacuum, and $\nabla^{2}$ is the Laplacian operator. This equation can be used to derive the equation for a general diffraction problem, i.e. an equation for the light field and, hence, the intensity as a function of position behind an obstacle placed between the observation point and a given source.

2.2. Green Functions. The Integral Theorem of Helmholtz and Kirchhoff. The Rayleigh-Sommerfeld Diffraction Formula

Let U and V be any two complex-valued functions of position, and let S be a closed surface surrounding a volume V. If U, V, and their first and second partial derivatives are single-valued and continuous within and on S (Figure 2.1), Green's theorem, obtained from the Gauss divergence theorem, can be applied to the two fields:

\[
\iiint_{V}\left(U\,\nabla^{2}V - V\,\nabla^{2}U\right)dv
= \iint_{S}\left(U\,\frac{\partial V}{\partial n} - V\,\frac{\partial U}{\partial n}\right)ds,
\tag{2.2}
\]

where $\partial/\partial n$ denotes the partial derivative in the outward normal direction at each point on S. This theorem is the prime foundation of scalar diffraction theory. A Green function G is chosen as the auxiliary scalar function for Equation (2.1). With the observation point $\mathbf{r}'$ excluded from the volume by a small sphere $S_{\varepsilon}$ of radius $\varepsilon$, the derivative of the outgoing Green function integrated over that sphere has the value

\[
\iint_{S_{\varepsilon}} \frac{\partial G(\mathbf{r}-\mathbf{r}')}{\partial n}\, ds = 4\pi .
\tag{2.3}
\]

Within the volume V, G is forced to satisfy the Helmholtz equation,

\[
(\nabla^{2} + k^{2})\,G = 0.
\tag{2.4}
\]
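As a short aside (a standard derivation added here for completeness, not taken from the text): Equation (2.1) follows from the scalar wave equation once the time-harmonic form introduced above is substituted,

\[
\nabla^{2}E - \frac{1}{c^{2}}\frac{\partial^{2}E}{\partial t^{2}} = 0,
\qquad
E(x,y,z;t) = u(x,y,z)\,e^{-i\omega t}
\;\Longrightarrow\;
\nabla^{2}u + \frac{\omega^{2}}{c^{2}}\,u = \left(\nabla^{2} + k^{2}\right)u = 0,
\]

since the time derivative contributes $-\omega^{2}u\,e^{-i\omega t}$ and the common exponential factor cancels.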


Substituting the two Helmholtz equations (2.1) and (2.4) into the left-hand side of Green's theorem (2.2), applied to the volume V' lying between S and $S_\varepsilon$, we find

$$\iiint_{V'} \left( U\,\nabla^2 G - G\,\nabla^2 U \right) dv = -\iiint_{V'} \left( U G k^2 - G U k^2 \right) dv = 0 \qquad (2.5)$$

The right-hand member of Equation (2.5) cancels, so the theorem reduces to

$$\iint_{S'} \left( U\,\frac{\partial G}{\partial n} - G\,\frac{\partial U}{\partial n} \right) ds = 0 \qquad (2.6)$$

or, splitting the total surface $S' = S + S_\varepsilon$,

$$-\iint_{S_\varepsilon} \left( U\,\frac{\partial G}{\partial n} - G\,\frac{\partial U}{\partial n} \right) ds = \iint_{S} \left( U\,\frac{\partial G}{\partial n} - G\,\frac{\partial U}{\partial n} \right) ds \qquad (2.7)$$

For a general point $\mathbf{r}$ on $S_\varepsilon$ we have

$$G(\mathbf{r}-\mathbf{r}') = \frac{\exp\!\left(ik|\mathbf{r}-\mathbf{r}'|\right)}{|\mathbf{r}-\mathbf{r}'|} \quad \text{and} \quad \iint_{S_\varepsilon} \frac{\partial G(\mathbf{r}-\mathbf{r}')}{\partial n}\, ds \rightarrow 4\pi \qquad (2.8)$$

Letting $\varepsilon$ become arbitrarily small, i.e. in the limit of $S_\varepsilon$ shrinking onto the point $P'$, Equation (2.6) becomes

$$\iint_{S} \left( U\,\frac{\partial G}{\partial n} - G\,\frac{\partial U}{\partial n} \right) ds + 4\pi\, U(\mathbf{r}') = 0 \qquad (2.9)$$

and therefore

$$U(\mathbf{r}') = \frac{1}{4\pi}\iint_{S} \left( G\,\frac{\partial U}{\partial n} - U\,\frac{\partial G}{\partial n} \right) ds \qquad (2.10)$$

Considering a volume V'' complementary to V, bounded by the surface S'',

$$\frac{1}{4\pi}\iint_{S''} \left( G\,\frac{\partial U}{\partial n} - U\,\frac{\partial G}{\partial n} \right) ds = 0 \qquad (2.11)$$


Substitution of this result into Equation (2.7), taking account of the negative sign, yields

$$U(\mathbf{r}') = \frac{1}{4\pi}\iint_{S} \left( \frac{\partial U}{\partial n}\,\frac{\exp\!\left(ik|\mathbf{r}-\mathbf{r}'|\right)}{|\mathbf{r}-\mathbf{r}'|} - U\,\frac{\partial}{\partial n}\frac{\exp\!\left(ik|\mathbf{r}-\mathbf{r}'|\right)}{|\mathbf{r}-\mathbf{r}'|} \right) ds \qquad (2.12)$$

This result is known as the integral theorem of Helmholtz and Kirchhoff. It allows the field at any point $P'$ to be expressed in terms of the boundary values of the wave on any closed surface surrounding that point. With an appropriate choice of Green function, the field $U(\mathbf{r}')$ can finally be expressed as

$$U(\mathbf{r}') = -\frac{1}{2\pi}\iint_{S} U\,\frac{\partial G}{\partial n}\, ds \qquad \text{or} \qquad U(\mathbf{r}') = \frac{1}{2\pi}\iint_{S} G\,\frac{\partial U}{\partial n}\, ds \qquad (2.13)$$

These results are known as the Rayleigh-Sommerfeld diffraction formulas of the first and second kind, respectively (Goodman). If a potential function and its normal derivative vanish simultaneously along any finite curve segment, then the potential function must vanish in the entire plane; this is why boundary conditions cannot be imposed on both quantities at once.

Figure 2.1. Geometric Illustration for the Helmholtz-Kirchhoff Integral Theorem.

Now we want to calculate the field U at a point $P'$ diffracted by a semi-transparent window $S_A$ cut in an opaque screen. For distances large compared with the wavelength, the normal derivative of the Green function is

$$\frac{\partial G(\mathbf{r}-\mathbf{r}')}{\partial n} = \left( ik - \frac{1}{|\mathbf{r}-\mathbf{r}'|} \right)\cos\!\left(\mathbf{n},\mathbf{r}-\mathbf{r}'\right)\,G(\mathbf{r}-\mathbf{r}') \approx ik\,G(\mathbf{r}-\mathbf{r}') \qquad (2.14)$$


and finally,

$$U(\mathbf{r}') = -\frac{1}{2\pi}\iint_{S_A} U\,\frac{\partial G}{\partial n}\, ds \approx \frac{k}{2\pi i}\iint_{S_A} U(\mathbf{r})\,G(\mathbf{r}-\mathbf{r}')\, ds = \frac{1}{i\lambda}\iint_{S_A} U(\mathbf{r})\,G(\mathbf{r}-\mathbf{r}')\, ds \qquad (2.15)$$

This is known as the Huygens-Fresnel integral. The field $U(\mathbf{r}')$ in the observation plane can be calculated from the field $U(\mathbf{r})$ in the source plane. Let us consider two parallel planes, $(x, y; z)$ and $(x_0, y_0; 0)$, at normal distance z from each other. The diffracting aperture (source) lies in the $(x_0, y_0)$ plane and the observation (reconstruction) plane is the $(x, y)$ plane.

Figure 2.2: Huygens-Fresnel Principle in Rectangular Coordinates. (Adapted after J. W. Goodman, Introduction to Fourier Optics, Third Edition.)

Huygens' law states that the field $u(\mathbf{r})$ at a time t is related to the field $u(\mathbf{r}')$ at an earlier time t' by the integral equation

$$u(\mathbf{r}) = \int_V u(\mathbf{r}')\,G(\mathbf{r},\mathbf{r}')\, dv' \qquad (2.16)$$


where the time dependence has been suppressed. Equation (2.13) can then be stated as

$$U(x,y,z) = \frac{1}{i\lambda}\iint_{S_A} U(x_0,y_0;0)\,\frac{\exp(ikr)}{r}\,\cos\theta\; ds \qquad (2.17)$$

where $\theta$ is the angle between the outward normal $\mathbf{n}$ and the vector $\mathbf{r}$ joining $(x_0,y_0;0)$ and $(x,y,z)$, so that

$$\cos\theta = \frac{z}{r}, \qquad r = \sqrt{z^2 + (x-x_0)^2 + (y-y_0)^2}$$

and the Huygens-Fresnel principle can be written

$$U(x,y,z) = \frac{z}{i\lambda}\iint_{S_A} U(x_0,y_0;0)\,\frac{\exp(ikr)}{r^2}\; dx_0\, dy_0 \qquad (2.18)$$

2.3. Optical Field Reconstruction Methods

Optical field reconstruction using diffraction methods involves the determination of the object amplitude and phase. The amplitude is proportional to the square root of the intensity in the diffraction pattern and represents the strength of the interference at a specific point. The phase is the relative time of arrival of the scattered radiation at a particular point (e.g. on a photographic film), and this information is lost when only the intensity of the diffraction pattern is recorded.

In digital holography the hologram is recorded digitally. The object field $O(x,y)$ interferes with the reference field $R(x,y)$ at the hologram plane. Here we use a setup in off-axis geometry, meaning that the reference field interferes with the object field at an angle. The interference between the object wave $O(x,y) = \mathrm{Amp}_O(x,y)\exp[i\varphi_O(x,y)]$ and the plane reference wave $R(x,y) = \exp(i\varphi_R)\exp[2\pi i(q_x x + q_y y)]$ is recorded in the hologram plane $(x_0, y_0)$ in the form of an intensity $h(x,y)$.


$\mathrm{Amp}_O(x,y)$ is the amplitude and $\varphi_O(x,y)$ is the phase of the object beam. The other two quantities, $q_x$ and $q_y$, are the carrier frequencies of the reference beam in the x and y directions, respectively. The complex amplitude of the interference pattern is $U(x,y) = R(x,y) + O(x,y)$, and the hologram intensity pattern recorded digitally by the CCD is

$$h(x,y) = \left|R(x,y) + O(x,y)\right|^2 = 1 + \mathrm{Amp}_O^2(x,y) + 2\,\mathrm{Amp}_O(x,y)\cos(\varphi_O - \varphi_R)$$
$$= 1 + \mathrm{Amp}_O^2(x,y) + O(x,y)\exp\!\left[-2\pi i(q_x x + q_y y)\right] + O^{*}(x,y)\exp\!\left[2\pi i(q_x x + q_y y)\right] \qquad (2.19)$$

The recorded image $h(x,y)$ contains information about both the amplitude and the phase of the object beam. Various methods are used to reconstruct the object optical field from the recorded holograms; optical (forward) methods are preferred here to statistical and inverse methods. We review numerical reconstruction algorithms for digital holography with emphasis on the Fresnel approximation and angular spectrum methods. The relationship between the two methods, i.e. how to derive the Fresnel approximation starting from the angular spectrum of a plane wave, is given in Appendix B. The mathematical background of the Fourier transform is given in Appendix C.

2.3.1 Fresnel Approximation

The Fresnel transform, as an approximation to the Kirchhoff diffraction integral (Equation 2.12), plays a significant role in evaluating the propagation of wave fields. In the one-dimensional case it is defined by

$$f(\xi) = \mathrm{Fr}_D\{a(x)\} = \int a(x)\,\exp\!\left[\,i\pi\,(x-\xi)^2/D^2\right]\, dx \qquad (2.20)$$


where $f(\xi)$ is called the integral transform of the signal $a(x)$, or its spectrum, and D is a transform parameter (Jaroslavsky). When the complex amplitude of the wave field is linked to the wave field amplitude in a Fresnel plane of the object, $D^2$ is the product of the illumination wavelength $\lambda$ and the distance z between the object and the Fresnel (observation) plane, so $D^2 = \lambda z$.

We apply the Rayleigh-Sommerfeld formula of the first kind to the calculation of $U(\mathbf{r}')$ by computing the surface integral on S surrounding the volume V. A more usable expression of the Huygens-Fresnel principle requires approximations for the absolute distance $r$, Equation (2.21), and for the wavevector component along the propagation direction, $k_z = \sqrt{k^2 - k_x^2 - k_y^2}$, Equation (2.22):

$$r = z\left[1 + \frac{(x-x_0)^2 + (y-y_0)^2}{z^2}\right]^{1/2} = z\left[1 + \frac{(x-x_0)^2 + (y-y_0)^2}{2z^2} - \frac{\left((x-x_0)^2 + (y-y_0)^2\right)^2}{8z^4} + \dots\right] \approx z + \frac{(x-x_0)^2 + (y-y_0)^2}{2z} \qquad (2.21)$$

and

$$k_z = k\left[1 - \frac{k_x^2 + k_y^2}{k^2}\right]^{1/2} = k\left[1 - \frac{k_x^2 + k_y^2}{2k^2} - \frac{\left(k_x^2 + k_y^2\right)^2}{8k^4} - \dots\right] \approx k - \frac{k_x^2 + k_y^2}{2k} \qquad (2.22)$$

With the approximation (2.21), Equation (2.18) becomes

$$U(x,y,z) = \frac{\exp(ikz)}{i\lambda z}\iint U(x_0,y_0;0)\,\exp\!\left\{\frac{ik}{2z}\left[(x-x_0)^2 + (y-y_0)^2\right]\right\}\, dx_0\, dy_0 \qquad (2.23)$$

Equation (2.23) is a convolution between the field at the source and the convolution kernel

$$h(x,y,z) = \frac{\exp(ikz)}{i\lambda z}\,\exp\!\left[\frac{ik}{2z}\left(x^2 + y^2\right)\right]$$


Rearranging this expression further, we get

$$U(x,y,z) = \frac{\exp(ikz)}{i\lambda z}\,\exp\!\left[\frac{ik}{2z}\left(x^2+y^2\right)\right]\iint U(x_0,y_0;0)\,\exp\!\left[\frac{ik}{2z}\left(x_0^2+y_0^2\right)\right]\exp\!\left[-\frac{ik}{z}\left(x x_0 + y y_0\right)\right]\, dx_0\, dy_0 \qquad (2.24)$$

Ignoring the factor in front of the integral, the integral represents the Fourier transform of the product of the complex field just behind the aperture and a quadratic phase exponential (Goodman, Hariharan, Schnars, Kuo, Scott). Expression (2.24) can therefore be written as

$$U(x,y,z) = \exp\!\left[\frac{ik}{2z}\left(x^2+y^2\right)\right]\,\mathcal{F}\!\left[\,U(x_0,y_0;0)\;h\,\right] \qquad (2.25)$$

where

$$h = \frac{\exp(ikz)}{i\lambda z}\,\exp\!\left[\frac{ik}{2z}\left(x_0^2+y_0^2\right)\right]$$

is the point spread function (PSF) of the system. There are two common ways to calculate the Fresnel transform. The first is to evaluate the Huygens integral directly for back-propagating waves; the second is to multiply the Fourier transform of the Fresnel field by the Fresnel optical transfer function h and then perform an inverse Fourier transform. Here we discuss the latter, which is the most common hologram reconstruction method since it requires only one FFT.

The minimum reconstruction distance is imposed by the discrete Fourier transform and has the expression

$$z_{\min} = \frac{a^2}{N\lambda}$$

where $a = N\,\Delta x$ is the lateral size of the hologram, $N \times N$ is the hologram area in pixels, and $\Delta x$ is the pixel size. The lateral resolution of the reconstruction is

$$\Delta x = \frac{z\,\lambda}{N\,\Delta x_0}$$

where z is the reconstruction distance and $\Delta x_0$ is the pixel size of the CCD camera; at the minimum reconstruction distance $z_{\min}$, the reconstructed pixel size equals the camera pixel size. A Fresnel reconstruction of a hologram recorded at $z < z_{\min}$ suffers from aliasing.
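A minimal Python/NumPy sketch of the single-FFT Fresnel reconstruction of Equation (2.25) is given below, together with an evaluation of the minimum reconstruction distance. It is a schematic illustration under assumed parameters (256 x 256 pixels, 1040 µm hologram width, λ = 0.575 µm, and a placeholder input field); it is not the reconstruction code actually used for the figures in this chapter.

import numpy as np

def fresnel_reconstruct(field0, dx0, wavelength, z):
    """Single-FFT Fresnel reconstruction, Eq. (2.25): Fourier transform of the
    input field multiplied by a quadratic phase factor (chirp)."""
    N = field0.shape[0]                                  # assume a square N x N field
    k = 2 * np.pi / wavelength
    x0 = (np.arange(N) - N // 2) * dx0
    X0, Y0 = np.meshgrid(x0, x0)
    chirp_in = np.exp(1j * k * (X0**2 + Y0**2) / (2 * z))
    U = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field0 * chirp_in)))
    # Output-plane pixel size follows the lateral-resolution formula above.
    dx = wavelength * z / (N * dx0)
    x = (np.arange(N) - N // 2) * dx
    X, Y = np.meshgrid(x, x)
    chirp_out = np.exp(1j * k * z) / (1j * wavelength * z) * np.exp(1j * k * (X**2 + Y**2) / (2 * z))
    return chirp_out * U, dx

# Example parameters mimicking the resolution-target hologram of Figure 2.3.
N, dx0, wavelength = 256, 1040.0 / 256, 0.575            # pixels, µm, µm
z_min = (N * dx0)**2 / (N * wavelength)                  # minimum reconstruction distance
print(round(z_min))                                      # 7348 µm, the value quoted for Figure 2.3

field0 = np.ones((N, N), dtype=complex)                  # placeholder hologram-plane field
U, dx = fresnel_reconstruct(field0, dx0, wavelength, z_min)

Reconstructing at z smaller than the printed z_min would make the output pixel size smaller than the camera pixel, which is where the aliasing discussed above originates.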


2.3.2 The Angular Spectrum of a Plane Wave

The scalar diffraction theory can be reformulated using the theory of linear, shift-invariant systems. The Fourier components of the disturbance in an arbitrary plane are interpreted as plane waves traveling in various directions away from that plane. The resultant field amplitude in any other plane is the superposition of all these plane waves, each with a phase shift acquired through propagation.

Taking the Fourier transform of Equation (2.19), we obtain the spatial-frequency content of the hologram:

$$H(k_x,k_y) = \mathcal{F}\{h(x,y)\} = \delta(k_x,k_y) + \mathcal{F}\{\mathrm{Amp}_O^2(x,y)\} + A_0^{*}\!\left(k_x + k_{qx},\,k_y + k_{qy}\right) + A_0\!\left(k_x - k_{qx},\,k_y - k_{qy}\right) \qquad (2.26)$$

The first two terms represent the zero-order term; the third and fourth represent the two conjugate images, the virtual image centered at $(-k_{qx}, -k_{qy})$ and the real image centered at $(k_{qx}, k_{qy})$. The first three terms can be filtered out in Fourier space, and the fourth term is shifted to the center of the frequency coordinates to obtain the angular spectrum of the object, $A_0(k_x,k_y)$, in the hologram plane. To obtain the spectrum in the object plane, $A_0(k_x,k_y)$ is propagated backward in the frequency domain along the propagation distance z = Z, giving

$$A(k_x,k_y;z) = A_0(k_x,k_y)\,\exp\!\left(i k_z z\right) \qquad (2.27)$$

Taking the inverse Fourier transform, we obtain the reconstructed object wavefront,


$$U(x,y,z) = \mathcal{F}^{-1}\!\left\{ A_0(k_x,k_y)\,\exp\!\left(i k_z z\right) \right\} \qquad (2.28)$$

Writing the reconstructed complex field in polar form,

$$U(x,y,z) = \mathrm{Amp}_{O,\mathrm{rec}}(x,y)\,\exp\!\left[i\,\varphi_{O,\mathrm{rec}}(x,y)\right] \qquad (2.29)$$

where $\mathrm{Amp}_{O,\mathrm{rec}}(x,y)$ and $\varphi_{O,\mathrm{rec}}(x,y)$ represent the reconstructed object wavefront amplitude and phase at the plane z. In this way we have access to both the amplitude and the phase information. Using the angular spectrum method for hologram reconstruction does not require any minimum reconstruction distance. Another benefit of this method is the ability to filter in the frequency space, removing the background (DC) and virtual-image terms; a numerical sketch of the full procedure follows.
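To make the sequence of Equations (2.26)-(2.29) concrete, the following Python/NumPy sketch reconstructs the complex object field from a single off-axis hologram by Fourier filtering and angular spectrum propagation. It is a minimal illustration rather than the software actually used in this work; the function name, the assumption of a square hologram, and the filter position and radius are all illustrative.

import numpy as np

def angular_spectrum_reconstruction(hologram, dx, wavelength, z, center, radius):
    """Reconstruct amplitude and phase from an off-axis hologram.

    hologram   : 2D real array, recorded intensity h(x, y) (assumed square, N x N)
    dx         : pixel size in the hologram plane (same units as wavelength and z)
    wavelength : illumination wavelength
    z          : propagation distance from the hologram plane to the object plane
    center     : (row, col) of the real-image order in the centered Fourier plane
    radius     : radius, in pixels, of the circular filter around that order
    """
    N = hologram.shape[0]
    H = np.fft.fftshift(np.fft.fft2(hologram))           # hologram spectrum, Eq. (2.26)

    # Keep only the real-image order with a circular mask, then shift it to the
    # center of the frequency coordinates.
    rows, cols = np.indices(H.shape)
    mask = (rows - center[0])**2 + (cols - center[1])**2 <= radius**2
    A0 = np.roll(H * mask, (N // 2 - center[0], N // 2 - center[1]), axis=(0, 1))

    # Propagation phase exp(i kz z) with kz = sqrt(k^2 - kx^2 - ky^2), Eq. (2.27).
    k = 2 * np.pi / wavelength
    fx = np.fft.fftshift(np.fft.fftfreq(N, d=dx))
    KX, KY = np.meshgrid(2 * np.pi * fx, 2 * np.pi * fx)
    KZ = np.sqrt(np.maximum(k**2 - KX**2 - KY**2, 0.0))
    A_z = A0 * np.exp(1j * KZ * z)

    # Inverse transform gives the complex object field, Eqs. (2.28)-(2.29).
    U = np.fft.ifft2(np.fft.ifftshift(A_z))
    return np.abs(U), np.angle(U)

Because the propagation phase is exact within scalar theory, no minimum reconstruction distance is needed here, in contrast to the Fresnel sketch of the previous subsection.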


2.4. Results

In the previous section and Appendix B we concluded that the two optical reconstruction methods are identical within the paraxial approximation. Since the angular spectrum method makes no paraxial approximation, the Fresnel method will not reproduce its results at small reconstruction distances, unless numerical parametric lenses are introduced in the wavefront reconstruction to make small reconstruction distances possible without aliasing, at the cost of increased computational load [7, 8].

To give a concrete idea of how objects are imaged with the two reconstruction methods, we present results for several samples: a USAF 1951 resolution target, onion skin, and a US coin. The corresponding holograms, recorded in off-axis geometry, are shown in Figure 2.3a, Figure 2.4a, and Figure 2.5a, and their Fourier transforms are displayed in Figure 2.3b, Figure 2.4b, and Figure 2.5b. The bright spot in the center of each spectrum is the DC term, or zero order of diffraction; it can be separated from the real and virtual images (located symmetrically about the DC term) by choosing an appropriate angle between the object and reference wavefronts. The DC term represents the background, or low-frequency features of the object, while the virtual and real images carry the high-frequency object features. By applying a circular filter (white circle) in the Fourier space, we remove the zero-order term, the virtual image, and other noise present in the image.

Figure 2.3c and Figure 2.3d show the amplitude and phase images reconstructed by the angular spectrum method from the hologram of Figure 2.3a, over an area of 1040 x 1040 µm². The object is situated at a distance z = 270 µm from the hologram. Figures 2.3e-h show the amplitude and phase images reconstructed from the same hologram by the Fresnel approximation: Figure 2.3e and Figure 2.3f at the minimum reconstruction distance $z_{\min} = a^2/(N\lambda) = 7348\ \mu m$ (imposed by the discrete Fourier transform), and Figure 2.3g and Figure 2.3h at the distance z = 7000 µm, which is smaller than $z_{\min}$.


Figure 2.3: Holography of a USAF Resolution Target. The image area is 1040 x 1040 µm² (256 x 256 pixels) and the object is at z = 270 µm from the hologram; λ = 0.575 µm, a = 1040 µm, N = 256: (a) hologram; (b) angular spectrum; (c) amplitude and (d) phase images by the angular spectrum method; (e) amplitude and (f) phase images at the minimum reconstruction distance $z_{\min} = a^2/(N\lambda) = 7348\ \mu m$ by the Fresnel transform method; (g) amplitude and (h) phase images at the reconstruction distance z = 7000 µm (z < $z_{\min}$) by the Fresnel transform method.


Figure 2.4 is an example of how the lateral resolution is affected by the minimum-reconstruction-distance requirement. In Section 2.3.1 we showed that the lateral resolution is $\Delta x = z\lambda/(N\,\Delta x_0)$. The area of the hologram (Figure 2.4a), reconstructed amplitude (Figure 2.4c), and phase (Figure 2.4d) is 235 x 176 µm² (640 x 480 pixels), which gives two different minimum reconstruction distances in the x and y directions,

$$z_x \geq z_{\min,x} = \frac{a_x^2}{N_x\lambda} = 150\ \mu m, \qquad z_y \geq z_{\min,y} = \frac{a_y^2}{N_y\lambda} = 112\ \mu m$$

The angular spectrum method is not constrained by the hologram area, and its lateral resolution is limited only by the optics.

Figure 2.4. Holography of the Onion Skin. The image area is 236 x 176 µm² (640 x 480 pixels) and the object is at z = 4.95 µm from the hologram: (a) hologram; (b) angular spectrum; (c) amplitude and (d) phase images by the angular spectrum method.
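As a quick numerical check of the minimum-reconstruction-distance expression, the following short Python calculation reproduces the two values quoted above from the hologram dimensions, assuming the same wavelength of 0.575 µm used for the other targets in this chapter.

wavelength = 0.575            # µm, assumed to match the other holograms in this chapter
ax, ay = 235.0, 176.0         # µm, hologram extent in x and y
Nx, Ny = 640, 480             # pixels

z_min_x = ax**2 / (Nx * wavelength)     # ~150 µm
z_min_y = ay**2 / (Ny * wavelength)     # ~112 µm
print(round(z_min_x), round(z_min_y))   # 150 112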


Figure 2.5 shows one more example, a hologram recorded when the object is situated at a distance z = 198 µm from the hologram. This distance is smaller than the minimum reconstruction distance $z_{\min} = a^2/(N\lambda) = 8370\ \mu m$, so the Fresnel amplitude and phase reconstructions are of poor quality; for z < $z_{\min}$, aliasing occurs (Figure 2.5e-h).

Figure 2.5. Holography of a US Coin. The image area is 1110 x 1110 µm² (256 x 256 pixels) and the object is at z = 198 µm from the hologram; λ = 0.575 µm, a = 1110 µm, N = 256 pixels: (a) hologram; (b) angular spectrum; (c) amplitude and (d) phase images by the angular spectrum method; (e) amplitude and (f) phase images at the minimum reconstruction distance $z_{\min} = a^2/(N\lambda) = 8370\ \mu m$ by the Fresnel transform method; (g) amplitude and (h) phase images at the reconstruction distance z = 8000 µm (z < $z_{\min}$) by the Fresnel transform method.


In summary, the unique capabilities of the angular spectrum method compared to the Fresnel approximation are: a higher degree of accuracy, as seen in all images obtained by the angular spectrum method; the possibility of filtering in the frequency domain, shown in Figure 2.3b, Figure 2.4b, and Figure 2.5b; and the absence of a minimum reconstruction distance.

2.5. Conclusion

We demonstrated the capabilities of the two diffraction reconstruction methods, the angular spectrum and the Fresnel approximation, by imaging a resolution target, onion skin, and a coin. The two reconstruction methods are identical within the paraxial approximation. Since the angular spectrum method involves no paraxial approximation, the Fresnel approximation cannot match its results outside that regime, in particular at small reconstruction distances.

2.6. Bibliography

[1] J. W. Goodman, Introduction to Fourier Optics, McGraw-Hill Publishing Company, New York (1968).
[2] Huygens, C., Traité de la lumière (completed in 1678, published in Leyden, 1690).
[3] Kirchhoff, G., Zur Theorie der Lichtstrahlen, Wiedemann Ann. 1883, 18(2), 663.
[4] Sommerfeld, A., Mathematische Theorie der Diffraction, Math. Ann. 1896, 47, 317.
[5] Sommerfeld, A., Die Greensche Funktion der Schwingungsgleichung, Jahresber. Deut. Math. 1912, 21, 309.
[6] Wolf, E.; Marchand, E. W., Comparison of the Kirchhoff and the Rayleigh-Sommerfeld theories of diffraction at an aperture, J. Opt. Soc. Am. 1964, 54(5), 587-594.


[7] F. Montfort, F. Charrière, T. Colomb, E. Cuche, P. Marquet, and C. Depeursinge, Purely numerical correction of the microscope objective induced curvature in digital holographic microscopy, J. Opt. Soc. Am. A 23, 2944 (2006).
[8] T. Colomb, F. Montfort, J. Kühn, N. Aspert, E. Cuche, A. Marian, F. Charrière, S. Bourquin, P. Marquet, and C. Depeursinge, Numerical parametric lens for shifting, magnification, and complete aberration compensation in digital holographic microscopy, J. Opt. Soc. Am. A 23 (2006).


41 CHAPTER 3 DIGITAL INTERFERENCE HOLOGRAPHY This chapter introduces the principle of dig ital interference holography (DIH), geometry, apparatus, calibration, and pha se unwrapping theory. The chapte r is organized as follows: Section 3.1 describes the digital interfer ence holography in comp arison to the other optical imaging techniques. The digital interf erence holography technique is reviewed in Section 3.2. Section 3.3 presents the phase unwrapping theory based on DIH. Section 3.4 describes the design of the DIH apparatus. Section 3.5 reviews the setup calibration and the scanning characteristics of the light so urce. Finally, conclusions are presented in Section 3.6. 3.1. Introduction One of the important challenges for biomedi cal optics is noninvasive three dimensional imaging, and various techniques have been proposed and available. For example, confocal scanning microscopy provides high-reso lution sectioning and in-focus images of a specimen. However, it is intrinsically limited in frame rate due to serial acquisition of the image pixels. Ophthalmic imagi ng applications of laser scanning in vivo confocal microscopy have been recently reviewed [1 ]. Another technique, optical coherence tomography (OCT), is a scanning microscopic imaging technique with micrometer scale axial and lateral resolution, based on low c oherence or white light interferometry to


42 coherently gate backsc attered signal from different depths in the object [2, 3]. Sweptsource optical coherence tomography is a significant improvement over the time-domain OCT [4-6], in terms of the acquisition speed and signal-to-noise ratio (SNR). A related technique of wavelength sca nning interferometry uses th e phase of the interference signal, between the reference light and the obj ect light which varies in the time while the wavelength of a source is swept over a range. A height resolution of about 3 m has been reported using Ti:sapphire laser with wavelength scan ning range of about 100 nm [7, 8]. The technique of stru ctured illumination microsc opy provides wide-field depthresolved imaging with no requirement for time-of-flight gated detection [9]. In the last few years, the scanning wa velength technique in various setups has been adopted by researchers for three-di mensional imaging of microscopic and submicroscopic samples. When digital hologra phy is combined with optical coherence tomography, a series of holograms are obtained by varying the reference path length [38]. A new tomographic method that combines the principle of DIH with spectral interferometry has been developed using a broadband source and a line-scan camera in a fiber-based setup [39]. Sub-wavelength resolution phase microscopy has been demonstrated [40] using a full-field swep t-source for surface profiling. Nanoscale cell dynamics were reported using cross-secti onal spectral domain phase microscopy (SDPM) with lateral resolution better than 2.2 m and axial resolution of about 3 m [41]. A spectral shaping technique for DIH is seen to suppress the sidelobes of the amplitude modulation function and to improve the perf ormance of the tomographic system [42]. Submicrometer resolution of DIH has been demonstrated [43].


Another optical tomographic technique, applied widely for the determination of the refractive index [44-49], is based on acquiring multiple interferograms while the sample rotates. The phase distribution is reconstructed using a filtered back-projection algorithm and then scaled to refractive index values. The refractive index distribution reveals information about the internal cellular structure of a transparent or semitransparent specimen.

In this work we use computational and holographic techniques, in the form of digital interference holography (DIH), to accurately and consistently identify and quantify the structure of different objects with micrometer resolution. The technique is based on an original numerical method [28] in which the three-dimensional microscopic structure of a specimen is reconstructed from a succession of holograms recorded with an extended set of scanned wavelengths.

3.2. Principle of Digital Interference Holography

Suppose an object is illuminated by a laser beam of wavelength $\lambda$. A point $\mathbf{r}_0$ on the object scatters the light into a Huygens wavelet, $A(\mathbf{r}_0)\exp(ik|\mathbf{r}-\mathbf{r}_0|)$, where the object function $A(\mathbf{r}_0)$ is proportional to the amplitude and phase of the wavelet scattered or emitted by the object point (Figure 3.1a). For an extended object, the field at $\mathbf{r}$ is

$$E(\mathbf{r}) \sim \int A(\mathbf{r}_0)\,\exp\!\left(ik|\mathbf{r}-\mathbf{r}_0|\right)\, d^3\mathbf{r}_0$$

where the integral is over the object volume. The amplitude and phase of this field at the hologram plane z = 0 are recorded by the hologram $H(x_h, y_h; \lambda)$. The holographic process is repeated using N different wavelengths, generating the holograms $H(x_h,y_h;\lambda_1), H(x_h,y_h;\lambda_2), \dots, H(x_h,y_h;\lambda_N)$. From each of the holograms, the field $E(x,y,z;\lambda)$ is calculated as a complex 3D array over the volume in the vicinity of the object (Figure 3.2a).


Superposition of these N 3D arrays results in

$$\sum_{k} E(\mathbf{r};\lambda_k) \sim \sum_{k} \int A(\mathbf{r}_0)\,\exp\!\left(ik|\mathbf{r}-\mathbf{r}_0|\right)\, d^3\mathbf{r}_0 \sim \int A(\mathbf{r}_0)\,\delta(\mathbf{r}-\mathbf{r}_0)\, d^3\mathbf{r}_0 \sim A(\mathbf{r})$$

That is, for a large enough number of wavelengths, the resultant field is proportional to the field at the object and is nonzero only at the object points. In practice, if one uses a finite number N of wavelengths with a uniform increment $\Delta(1/\lambda)$ of the inverse wavelengths, then the object image $A(\mathbf{r})$ repeats itself (other than the diffraction/defocusing effect of propagation) at a beat wavelength $\Lambda = \left[\Delta(1/\lambda)\right]^{-1}$, with axial resolution $\delta = \Lambda/N$. By use of appropriate values of $\Delta(1/\lambda)$ and N, the beat wavelength can be matched to the axial range of the object and to the desired level of axial resolution (see Appendix A). A numerical sketch of this superposition follows.
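To visualize the superposition, the following Python/NumPy toy model computes the axial response of a single point reflector from N scanned wavenumbers and sums the fields. It is a one-dimensional illustration of the principle only; the depth z0, the axial grid, and the unit amplitudes are assumed example values rather than experimental data.

import numpy as np

N = 50                                             # number of scanned wavelengths (example)
inv_lam = np.linspace(1 / 0.602, 1 / 0.568, N)     # uniform increments of 1/lambda (1/µm)
k = 2 * np.pi * inv_lam                            # wavenumbers

z = np.linspace(0.0, 400.0, 4000)                  # µm, axial coordinate in the object volume
z0 = 150.0                                         # µm, assumed depth of a point reflector

# Each wavelength contributes a unit-amplitude reconstructed field exp(i k (z - z0));
# the DIH image is the squared magnitude of their sum.
E = np.exp(1j * np.outer(k, z - z0))               # shape (N, z.size)
dih = np.abs(E.sum(axis=0))**2

# The peak sits at z = z0, its width scales as Lambda/N, and the image repeats with
# the beat wavelength Lambda = 1 / Delta(1/lambda), here about 493 µm.
print(z[np.argmax(dih)])                           # ~150 µm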


Figure 3.1: Digital Interference Holography Geometry: a) DIH volume representation, and b) process of DIH. H: hologram; E: optical field in the object volume; A: object function. See text for more details.


3.3 Multiple-Wavelength Optical Phase Unwrapping by Digital Interference Holography

The optical thickness profile of a transparent object can be obtained from quantitative phase images with sub-wavelength accuracy. Two important parameters are subsequently derived from the optical thickness profile: the physical thickness and the index of refraction of the sample. Quantitative phase imaging has already been demonstrated using several digital holography techniques based on two or three wavelengths [50-54]. Using digital interference holography, we want to make use of the phase information to determine the physical height of the sample. The combination of phase images at two different wavelengths $\lambda_1$ and $\lambda_2$ results in another phase image whose effective, or beat, wavelength is

$$\Lambda_{12} = \frac{\lambda_1 \lambda_2}{|\lambda_1 - \lambda_2|}$$

By choosing the two wavelengths close enough, the beat wavelength can be made large enough to cover the range of optical thickness of the object. This is another example of a capability of digital holography that is not possible in real-space holography. The phase difference between the two wavelengths is

$$\Delta\varphi_{12} = \varphi_1 - \varphi_2 = 4\pi\, h\, n(\lambda)\left(\frac{1}{\lambda_1} - \frac{1}{\lambda_2}\right) = \frac{4\pi\, h\, n(\lambda)}{\Lambda_{12}}$$

where $n(\lambda)$ is an approximate value of the refractive index of the sample being imaged. Consequently, by choosing an appropriate combination of wavelengths, the height profile of an image is

$$h = \frac{\Delta\varphi_{12}}{4\pi\, n(\lambda)}\,\Lambda_{12} = \frac{\Delta\varphi_{12}}{4\pi\, n(\lambda)}\,\frac{\lambda_1 \lambda_2}{|\lambda_1 - \lambda_2|} \qquad (3.1)$$
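A minimal Python/NumPy sketch of the two-wavelength unwrapping idea is given below, using the reflection-geometry convention of Equation (3.1). The two wavelengths, the refractive index, and the simulated step height are illustrative values only, not parameters of the actual experiment.

import numpy as np

lam1, lam2 = 0.575, 0.580               # µm, two nearby wavelengths (example values)
n = 1.0                                 # approximate refractive index (reflection in air)
beat = lam1 * lam2 / abs(lam1 - lam2)   # beat wavelength Lambda_12, ~66.7 µm

# Simulate a 10 µm step: each single-wavelength phase is wrapped into (-pi, pi].
h_true = np.array([0.0, 10.0])          # µm
phi1 = np.angle(np.exp(1j * 4 * np.pi * n * h_true / lam1))
phi2 = np.angle(np.exp(1j * 4 * np.pi * n * h_true / lam2))

# The difference phase wraps only once per beat wavelength, so the step height is
# recovered directly from Eq. (3.1) without single-wavelength unwrapping.
dphi = np.mod(phi1 - phi2, 2 * np.pi)
h = dphi * beat / (4 * np.pi * n)
print(h)                                # approximately [0., 10.]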


473.4. Experimental Setup The basic configuration of the apparatus is a Michelson interferometer, Figure 3.2. The light source is a Coherent 699 ring dye laser, pumped by Millenia V diode-pumped solidstate laser, tunable over a range of 565 nm to 615 nm with an output power of up to 500 mW. The laser output is spatial-filtered and collimated. The focusing lens L2 focuses the laser on the back focus of the objective lens L3, so that the object is illuminated by a collimated beam. Figure 3.2. Digital Interference Ho lography Apparatus. RDL: ring dye laser; Ms: mirrors; SF: spatial filter and expander; Ls: lenses; Ps: polarizers; BS: polarizing beamsplitter; QWs: quarter waveplates; A: aperture; H: hologram plane; OBJ: object; REF: reference mirror; MM: motorized micrometer; MMC: controller for MM.


48 The lenses L3 and L5 form a microscope pair, Figure 3.3, so that the CCD acquires a magnified image of a plane H in the vicinity of the object plane. The reference mirror is an optical conjugate of the pl ane H through the matching objective lens L4. Then the image acquired by the CCD is equiva lent to a holographic interference between a plane reference wave and the object wave that has propagated (diffracted) over a distance z from the object plane. In general the object plane may be at an arbitrary distance z from the hologram plane H, and the object can be numerically brought back in focus by the digital holography process. But in practice, it is advantageous to keep the object plane in focus to simplify the optical alignment and to help identify the object portion being imaged, as well as minimizing pot ential secondary abe rration effects. The polarization optics polarizer P2, analyzer P3, quarter wave plates, and polarizing beam splitter is used to continuously adjust the relative part ition of optical power between the object and referernce fields and to maximize th e interference contrast. The polarizer P1 at the output of laser is used to continuously adjust the overall power input to the interferometer. The CCD camera (Sony XC-ST50) has 780 x 640 pixels with 9 m pitch, and is digitized with an 8-bit monoch rome image acquisition board (NI IMAQ PCI1407). Slight rotations of the reference mirror and object planes enable the acquisition of off-axis hologram. A variable aperture placed at the back focal (Fourier) plane of the objective lens L3 can be useful in controlli ng the angular spectrum of the object field. The aperture acts as a Fourier filter that st ops the scattered light coming from the object. Hence, relatively most of the light that passes the small disk filter corresp onds to a small scattering angle.


49 Figure 3.3. Rays diagram When laser light travels through the lin ear polarizer, Figure 3.4a, a selected vibration plane is passed by the polarizer (parallel to th e transmission axis) and the electric field vectors vibrating in all other orientations are blocked. The polarized beam splitter reflects part of the laser beam, linearly polarized at 90o from its original plane, to the object and also the BS transmits part of the laser beam to the reference. When the light is incident of a quarter-wave plate, QW the light is divided into two equal electric field components and one of them is re tarded by a quarter-wave plate, producing a


50 circularly polarized light. After passing b ack through the QW plate, Figure 3.4b, the reflected light is linearly polarized at 90o from its original plane. The reflected light from the object passes as a transmitted wave through the beam splitter and it combines at the analyzer plane with the reflected light from the reference mirror. The analyzer is utilized to control the amount of light passing through the crossed pa ir (polarizer-a nalyzer), and can be rotated in the light path to enable various amplitudes of polarized light to pass through. Figure 3.4. Polarization Control in Di gital Interference Holography 3.5. Experimental Calibration 3.5.1 The tuning characteristics of the light source The light source used in this experiment was a dye laser pumped by a so lid state laser. To obtain the tuning range of about 40 nm nece ssary for our experiments, Rhodamine 6G


(R6G) dissolved in ethylene glycol was chosen. The tuning curve is shown in Figure 3.5. The wavelength range tuned in this experiment was 34 nm, from 568 nm to 602 nm. The tunable wavelength range determines the axial resolution of the image, while the tuning resolution, or wavelength increment, determines the axial range, i.e. the axial size of the object that can be imaged. The ability to distinguish the axial positions of various layers of a tissue is called the axial resolution, $\delta z$. These scanning parameters are related by

$$\delta z = \frac{\Lambda}{N} = \frac{\pi}{N\,\Delta k} = \frac{\lambda^2}{2N\,\Delta\lambda} \qquad (3.2)$$

where $\lambda$ is the center wavelength, $\Delta\lambda$ is the wavelength increment, k is the wavevector, $\Delta k$ is the wavenumber increment, $\delta z$ is the axial resolution, and $\Lambda$ is the object axial size.

The hologram acquisition process is described in Appendix D. Figure D.1 shows the main screen of the DIH software, written in LabVIEW 8.5. The wavelength scanning is controlled by a stepper motor that rotates the birefringent filter of the laser in small increments, changing the laser wavelength as it rotates (Figure D.2).

Figure 3.5. Tuning Curve of the Rhodamine 6G Dye Laser (RDL calibration, 01/12/2007): output power (mW) versus wavelength (µm).
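As a worked example of these relations (a back-of-the-envelope evaluation, not a measured result), the following Python lines compute the axial range and axial resolution for N = 50 wavelengths scanned uniformly in 1/λ over the 568-602 nm range quoted above, including the factor of two for reflection geometry as in Equation (3.2).

lam_min, lam_max, N = 0.568, 0.602, 50               # µm, µm, number of scan steps
d_inv_lambda = (1.0 / lam_min - 1.0 / lam_max) / N   # increment of 1/lambda per step

optical_range = 1.0 / d_inv_lambda                   # beat length of the scan, ~503 µm
physical_range = optical_range / 2.0                 # reflection doubles the optical path
physical_resolution = physical_range / N             # delta z of Eq. (3.2)
print(round(optical_range), round(physical_range), round(physical_resolution, 2))
# -> 503 251 5.03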


3.5.2 The calibration curve

Digital interference holography requires a sequence of wavelengths within the Rhodamine 6G spectral range. To obtain the correct wavelengths as the motorized micrometer rotates the birefringent filter, we used a monochromator (CVI Digikrom 240) with a spectral resolution of about 1-2 cm^-1. Two calibrations were performed: micrometer position versus wavelength (Figure D.3) and micrometer position versus wavevector (Figure D.4). The points represent measurements and the solid curves the fitted values. The calibration curves are

$$z = a_0 + a_1\lambda + a_2\lambda^2 \qquad \text{and} \qquad z = b_0 + b_1\left(\frac{1}{k}\right) + b_2\left(\frac{1}{k}\right)^2$$

and vice versa, where $a_0, a_1, a_2$ and $b_0, b_1, b_2$ are the calibration coefficients.

3.6. Conclusions

We have characterized the digital interference holography instrument in terms of its scanning parameters: the wavelength range of the source, the axial size of the object, and the axial resolution of the system. In the following chapters we optimize and evaluate the DIH technique for imaging and characterizing human and animal tissue and fingerprint patterns.

3.7. Bibliography

[1] Patel D V and McGhee C N 2007 Contemporary in vivo confocal microscopy of the living human cornea using white light and laser scanning techniques: a major review Clin. Experiment. Ophthalmol. 35(1) 71-88
[2] Huang D et al 1991 Optical coherence tomography Science 254 1178


53 [3] Fujimoto J G, Brezinski M E, Tearne y G J, Boppart S A, Bouma B, Hee M R, Southern J F and Swanson E A 1995 Optical bi opsy and imaging using optical coherence tomography Nature Med. 1 970 [4] Srinivasan V J, Huber R, Gorczyns ka I and Fujimoto J G 2007 High-speed, highresolution optical coherence tomography retinal imaging with a frequency-swept laser at 850 Optics Letters 32(4) 361-63 [5] Hiratsuka H, Morisak K and Yoshim ur T 2000 Optical Coherence Tomography System based on Synthesis of Optical Cohe rence Function With a Wavelength-Scanning Laser Source Optical Review 7(5) 442-47 [6] He Z and Hotate K 1999 Synthesized optic al coherence tomography for imaging of scattering objects by use of a stepwise frequency-modulated tunable laser diode Optics Letters 24(21) 1502-04 [7] Yamamoto A, Kuo C, Sunouchi K, Wada S, Yamaguchi I and Tashiro H 2001 Surface Shape Measurement by Wavelength Scanning Interferometry Using an Electronically Tuned Ti:Sapphire Laser Optical Review 8(1) 59-63 [8] Yamaguchi I, Yamamoto A and Yano M 2000 Surface topography by wavelength scanning interferometry Opt. Eng. 39(1) 40 [9] Ansari Z et al 2002 Wide-f ield, real time depth-resolved imaging using structured illumination with photorefractive holography Applied Physics Letter 81 2148-50 [10] Gabor D 1971 Holography Nobel Lecture [11] Goodman J W and Lawren ce R W 1967 Digital image formation from electronically detected holograms Appl. Phys.Lett. 11 77-79


54 [12] Schnars U 1994 Direct phase determinatio n in hologram interferometry with use of digitally recorded holograms J. Opt. Soc. Am. A 11 2011-5 [13] Schnars U and Jueptner W 1994 Direct recording of holograms by a CCD target and numerical reconstruction Appl.Opt. 33 179 [14] Schnars U and Jueptner W 2002 Digital recording and numerical reconstruction of holograms Meas. Sci. Technol. 13 R85-R101 [15] Yamaguchi I, Kato J, Ohta S and Mizuno J 2001 Image formation in phase-shifting digital holography and applic ations to microscopy Appl. Optics 40 6177-86 [16] Yamaguchi I and Zhang T Ph ase-shifting digital holography 1997 Opt. Lett. 22 1268 [17] Zhang T and Yamaguchi I 1998 Three-di mensional microscopy with phase-shifting digital holography Opt. Lett. 23 1221 [18] Poon T C 2003 Threedimensional image processing and optical scanning holography Adv. Imaging & Electron Phys. 126 329-50 [19] Yamaguchi I, Matsumura T and Kato J 2002 Phase-shifting color digital holography Opt. Lett. 27 1108 [20] Barty A, Nugent K A, Paganin D and Roberts A 1998 Quantitative optical phase microscopy Opt. Lett. 23 817 [21] Cuche E, Bevilacqua F and Depeursinge C 1999 Digital holography for quantitative phase-contrast imaging 1999 Opt. Lett. 24 291 [22] Ferraro P, De Nicola S, Finizio A, C oppola G, Grilli S, Magr o C and Pierattini G 2003 Compensation of the inherent wave front curvature in digital holographic coherent microscopy for quantitative phase-contrast imaging Appl. Opt. 42 1938-46


55 [23] Xu M L, Peng X, Miao J and Asundi A 2001 Studies of digital microscopic holography with applications to micros tructure testing Appl. Opt. 40 5046-51 [24] Pedrini G and Tiziani H J 1997 Quantitativ e evaluation of two-dimensional dynamic deformations using digital holography Opt. Laser Technol. 29 249 [25] Picart P, Leval J, M ounier D and Gougeon S 2005 Some opportunities for vibration analysis with time averaging in digital Fresnel holography Appl. Opt. 44 337 [26] Haddad W S, Cullen D, Solem J C, Lo ngworth J W, McPherson A, Boyer K and Rhodes C K 1992 Fourier-transform holographic microscope Appl. Opt. 31 4973-8 [27] Xu W, Jericho M H, Meinertzhagen I A and Kreuze r H J 2001 Digital in-line holography for biological applications Proc. Natl. Acad. Sci. USA 98 11301-05 [28] Kim MK 1999 Wavelength scanning digital interfer ence holography for optical section imaging Opt. Letters 24 1693 [29] Kim MK 2000 Tomographic three-dimensional imaging of a biological specimen using wavelength-scanning digi tal interference holography Opt. Express 7 305-10 [30] Dakoff A, Gass J and Kim M K 2003 Mi croscopic three-dimensional imaging by digital interference holography J. Electr. Imag. 12 643-647 [31] Yu L, Myung M K 2005 Wa velength scanning digital in terference holography for variable tomographic scanning Opt. Express 13 5621-7 [32] Yu L, Myung M K 2005 Wa velength-scanning digital in terference holography for tomographic 3D imaging using the angular spectrum method Opt. Lett. 30 2092 [33] Kim M K, Yu L and Mann C J 2006 Inte rference techniques in digital holography J. Opt. A: Pure Appl. Opt. 8 512-23


56 [34] Gass J, Dakoff A and Kim M K 2003 Phase imaging without 2-pi ambiguity by multiwavelength digital holography Opt. Lett. 28 1141-3 [35] Mann C J, Yu L, Lo C M and Kim M K 2005 High-resolution quantitative phasecontrast microscopy by digital holography Opt. Express 13 8693-98 [36] Parshall D and Kim M K 2006 Digital holographic micr oscopy with dual wavelength phase unwrapping Appl. Opt. 45 451-59 [37] Mann C, Yu L and Kim M K 2006 Movies of cellular and s ub-cellular motion by digital holographic microscopy Biomed. Engg. Online 5 21 [38] Sarunic M V, Weinberg S, and Iz att J A 2006 Full-field swept-source phase microscopy Opt. Lett. 30 1462-64 [39] Massatsch P, Charriere F, Cuche E, Marquet P, and Depeursinge C D 2005 Timedomain optical coherence tomography with digital holographic microscopy Appl. Opt. 44 1806-12 [40] Yu L and Chen Z 2007 Digital hol ographic tomography based on spectral interferometry Opt. Lett. 32, 3005-07 [41] Ellerbee A K, Creazzo T L and Izatt J A Investigating nanoscale cellular dynamics with cross-sectional spectral domain phase microscopy Opt. Express 15 8115-24 [42] Yu L and Chen Z 2007 Improved tomogr aphic imaging of wavelength scanning digital holographic microscopy by us e of digital spectral shaping Opt. Express 15 878-86 [43] Montfort F, Colomb T, Charrire F, K hn J, Marquet P, Cuche E, Herminjard S, and Depeursinge C 2006 Submicrometer optical tomography by multiple-wavelength digital holographic microscopy Appl. Opt. 45 8209-17


57 [44] Meneses-Fabian C, Rodriguez-Zurita G, and Arrizn V 2006 Optical tomography of trans-parent objects with phase-shifting inte rferometry and stepwise-shifted Ronchi ruling J. Opt. Soc. Am. A 23 298-305 [45] Gorski W 2006 Tomographic microinterferometry of optical fibers Optical Engineering 45 125002 [46] Charriere F, Pavillon N, Colomb T, Hege r T, Mitchell E, Marquet P, Rappaz B, and Depeursinge G 2006 Living specimen tomogr aphy by digital holographic microscopy: morphometry of testate amoeba Opt. Express 14 7005-13 [47] Charriere F, Marian A, Montfort F, Kuhn J, Colomb T, Cuche E, Marquet P, and Depeursinge C 2006 Cell refrac tive index tomography by digital holographic microscopy Opt. Lett. 31 178-180 [48] Vishnyakov G N, Levin G G, Minaev V L, Pickalov V V, and Likhachev A V 2004 Tomographic Interference Mi croscopy of Living Cells Microscopy and Analysis 1815-17 [49] Choi W, Fang-Yen C, Ba dizadegan K, Oh S, Lue N, Dasari R R, and Feld M S 2007 Tomographic phase microscopy Nature Methods 4 717-19. [50]. N. Warnasooriya & MK Kim, LED -based multi-wavelength phase imaging interference microscopy, Opt. Expr. 15, 9239-9247 (2007). [51]. D. Parshall & M.K. Kim, Digital holographic microscopy with dual wavelength phase unwrapping, Appl. Opt. 45, 451-459 (2006). [52]. C.J. Mann, L. Yu, C.M. Lo, & M.K. Kim, High-resolution quantitative phasecontrast microscopy by digital holography, Opt. Express 13, 8693-8698 (2005). (An image from this paper was featured on the masthead of the issue.)


58 [53]. C. Mann, L. Yu, & M.K. Kim, Movi es of cellular and sub-cellular motion by digital holographic microscopy, Biomed. Engg. Online, 5, 21 (2006). [54]. A Khmaladze, MK Kim, & CM Lo, Phase imaging of cells by simultaneous dualwavelength reflection digital holography, Opt. Express 16, 10900-10911 (2008).


59 CHAPTER 4 OPTIMIZATION OF DIGITAL INTERFERENCE HOLOGRAPHY In the previous chapter, we presented the principle of the digita l interference holography and the fundamental parameters that char acterize the digital interference holography system. An improved digital interference hologra phy (DIH) technique is proposed. This technique incorporates a dispersion comp ensation algorithm to minimize the phase variation in the system. Using this instru ment we acquired successfully tomographic images of the resolution target wrapped with scotch tape with a signal-to-noise ratio of about 50 dB. To demonstrate the capabilitie s of our system we also reconstructed tomographic and volume fundus images in human and animal eyes with narrow axial resolution less than 5 m. The chapter is organized as follows: Section 4. 1 describes the dispersion compensation based on wavelength. The significan ce of the signal-to-noise ratio in DIH system is presented in Section 4.2. In Section 4.3 are shown various results. Conclusions are presen ted in Section 4.4. 4.1. Dispersion Compensation-Phase Matching Tunable lasers are particularly se nsitive to chromatic-dispersion, () n characteristics of materials, in particular second-order '' k and third order dispersion ''' k, which typically cause broadening of the axial point spread f unction [3]. Any mismatch in the length of


60 the reference and sample arms of the interf erometer will generate wavelength dependent phase error. Precise measurements of any sample characteristics using tunable lasers acquire accurate measuring of the media disp ersion effect in the sample arm of the interferometer. Recent studies have been reported to meas ure the refractive index of ocular media using white-light interferometry [4]. The study concluded the dispersion of aqueous and vitreous humors for bovine, monke y, goat, and rabbit did not vary from the dispersion of water. Numerous approaches ha ve been used to measure the second order dispersion. The most common method is to inse rt a dispersive material in the reference arm to compensate the sample dispersion in the object arm [5, 6]. A numerical dispersion compensation method was introduced for optical coherence tomography [7] to compensate for the depth resolution loss. It is based on the correlati on of the depth scan signal with a depth-dependent correlation kernel taken as a Gaussian temporal distribution. Another numerical compensation of dispersion mismatch was demonstrated in real space directly to experimental hol ogram using wavenumber scanning. The cosine function (that contains the dispersion mismatchi ) in the expression of the interferogram is transformed to a complex function followed by multiplication with an exp()ij in the complex Fourier transf orm operation [8, 10]. In DIH [13, 14], the laser beam is tuned from short to high wavelengths. The phase calculated by digital holography is given by /2 zkz where z is the distance of an object point relative to the position of the reference mirror and is the wavelength of the laser. Uncertainty in k or z leads to phase error, which needs to be corrected for. Dispersion error (uncertainty in k) tends to have severe effect on the accuracy due to its accumulative nature. Short wavelengths are associated with higher frequency periodicity


of interference, while long wavelengths are associated with a slower periodicity of interference. As the optical path difference between the reference and object arms increases, e.g. because of the ocular media, the modulation across the spectrum increases in frequency, because short wavelengths accumulate more phase than longer wavelengths do [9]. All optical components and the ocular media contribute dispersion that distorts the spectrum. The function $\varphi(k)$ is not linear with respect to the wavenumber $k = 2\pi/\lambda$. It can be expanded in a Taylor series around the center wavenumber $k_c = 2\pi/\lambda_c$, with $\lambda_c$ being the center wavelength:

$$\varphi(k) = \varphi(k_c) + \frac{\partial\varphi}{\partial k}\bigg|_{k=k_c}(k-k_c) + \frac{1}{2}\,\frac{\partial^2\varphi}{\partial k^2}\bigg|_{k=k_c}(k-k_c)^2 + \dots + \frac{1}{n!}\,\frac{\partial^n\varphi}{\partial k^n}\bigg|_{k=k_c}(k-k_c)^n \qquad (4.1)$$

The dispersion components are: the first term, a constant offset; the second term, the group delay; the third term, the group-velocity dispersion or chromatic dispersion; and the remaining terms, higher-order dispersion. The third term in Equation (4.1) causes broadening of the input signal as it travels through a dispersive medium, which directly worsens the axial resolution and the signal-to-noise ratio of the optical field superposition. The higher-order terms in Equation (4.1) further complicate the dispersion compensation.

We employed the following method to estimate the phase corrections. From each hologram $H(\lambda_n)$ a 3D object field is reconstructed. The error-free phase of this field, a 2D phase profile taken at a value of z that corresponds to the location of the object, is

$$\varphi_n(x,y) = \mathrm{phase}\!\left[\,\mathrm{obj}(x,y,z;\lambda_n)\,\right] = k_n\, z(x,y) \qquad (4.2)$$

where $k_n$ are the scanned wavenumbers.


The next step is to calculate the difference profiles,

$$\Delta\varphi_n(x,y) = \varphi_{n+1}(x,y) - \varphi_n(x,y) = \Delta k\; z(x,y) \qquad (4.3)$$

where $z(x,y)$ is the z-profile of the object being imaged. If there are no errors in the wavenumber estimation, then all $\Delta\varphi_n$ for the various n should be identical, since the $k_n$ are perfectly equally spaced between holograms. Otherwise, the uncertainties introduce phase errors $\varepsilon_n = \delta k_n\, z$, and the measured phase profiles become

$$\varphi_n'(x,y) = \left(k_n + \delta k_n\right) z(x,y) \qquad (4.4)$$

where $\delta k_n$ is the deviation from the nominal wavenumber, so that the new difference profiles are

$$\Delta\varphi_n'(x,y) = \varphi_{n+1}'(x,y) - \varphi_n'(x,y) = \Delta k\, z(x,y) + \left(\varepsilon_{n+1} - \varepsilon_n\right) \qquad (4.5)$$

The idea is to find the series $\varepsilon_2, \varepsilon_3, \dots, \varepsilon_n$ that makes $\Delta\varphi_2', \Delta\varphi_3', \dots, \Delta\varphi_n'$ as nearly identical as possible. This is done by taking the difference $\Delta\varphi_n'(x,y) - \Delta\varphi_1'(x,y)$ modulo $2\pi$ and finding the $\varepsilon_n$ that minimize $\sum_{x,y}\left|\Delta\varphi_n'(x,y) - \Delta\varphi_1'(x,y) - \varepsilon\right|$ with respect to the variable $\varepsilon$. This procedure assumes that $z(x,y)$ is a well-defined 2D function, and it gives a very accurate and straightforward estimate of the $\varepsilon$'s. With diffuse or multilayered objects it is more difficult to obtain a completely deterministic procedure for the $\varepsilon$'s; in that case one needs to restrict the domain $(x,y)$ to an area of the object known to have a well-defined single-surface profile. A sketch of this estimation procedure is given below.
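The following Python/NumPy sketch illustrates one way such per-hologram phase offsets can be estimated and removed, in the spirit of Equations (4.3)-(4.5). It uses a circular mean over the region of interest in place of the absolute-difference minimization described above, and the function name, array layout, and roi mask are assumptions of this example rather than the actual correction code of the instrument.

import numpy as np

def estimate_phase_corrections(fields, roi):
    """Estimate a phase offset for each wavelength so that successive
    difference profiles become as nearly identical as possible.

    fields : complex array of shape (N, ny, nx), reconstructed object fields at
             the object plane, one per scanned wavenumber
    roi    : boolean mask selecting a region with a well-defined single surface
    """
    N = fields.shape[0]
    phases = np.angle(fields)                                  # phi_n(x, y)
    diffs = np.angle(np.exp(1j * np.diff(phases, axis=0)))     # wrapped delta phi_n

    # Offset of each difference profile relative to the first one, averaged over
    # the ROI on the unit circle so the estimate is insensitive to 2*pi wrapping.
    eps = np.zeros(N)
    for n in range(1, N - 1):
        resid = np.exp(1j * (diffs[n] - diffs[0]))[roi]
        eps[n + 1] = eps[n] + np.angle(resid.mean())

    # Apply the corrections: each field is rotated by -eps_n before superposition.
    corrected = fields * np.exp(-1j * eps)[:, None, None]
    return corrected, eps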


4.2. Signal-to-Noise Ratio

Addition of a series of N cosines, or imaginary exponentials, yields

$$S = \sum_{n=1}^{N} \exp\!\left(i\,n\,\Delta k\, z\right), \qquad |S|^2 = \frac{\sin^2\!\left(N\,\Delta k\, z/2\right)}{\sin^2\!\left(\Delta k\, z/2\right)}$$

where $\Delta k = 2\pi\,\Delta(1/\lambda)$ is the wavenumber increment. The signal-to-noise ratio (SNR) of the peaks at $z = 0, \Lambda, 2\Lambda, \dots$ grows proportionally to $N^2$, while the width of each peak narrows as $\sim\Lambda/N$. This behavior of the SNR and resolution is achieved only if the amplitudes and phases of all the cosines are identical. Each hologram captured by the camera is therefore normalized by its 2D average, to compensate for the laser power variation across the tuning range:

$$H_{\mathrm{norm}}(x_h,y_h;\lambda_n) = \frac{H(x_h,y_h;\lambda_n)}{\displaystyle\sum_{x_h,y_h} H(x_h,y_h;\lambda_n)\,/\,(N_x N_y)} \qquad (4.6)$$

4.3. Results

4.3.1. Resolution Target

We used a U.S. Air Force resolution target, with an area of 1040 x 1040 µm² (256 x 256 pixels), as a standard to calibrate the system. The area selected in Figure 4.1a contains group 2, element 3 and group 4, elements 2-6 of the resolution target. The bars in group 4, element 6, with a width of 17.54 µm, are clearly resolved. The reconstruction distance z, i.e. the distance from the object to the hologram plane, is 643.56 µm. The complex field of the resolution target is computed separately for 50 wavelengths by numerical diffraction using the angular spectrum method, which gives an axial range of 500 µm and an axial resolution of $\delta z = 10\ \mu m$. All 3D electric fields are added together to obtain the 3D electric field of the object being imaged. Cross-sections of


the volume can be taken in the x, y, and z planes; cross-sectional images in the y-z planes (Figure 4.1b) and x-z planes (Figure 4.1c) are shown below. The resolution target is an object without internal structure, and the reflection of the laser beam takes place at its surface. A piece of clear tape was placed on top of the resolution target to provide a second surface for the demonstration of tomographic imaging. The first (interrupted) layer in Figures 4.1b and 4.1c is the reflection from the chromium-coated surface; the second layer visible in Figures 4.1b and 4.1c is the reflection from the attached tape surface.

Figure 4.1. The Reconstructed Volume of the Resolution Target: (a) x-y cross-section, 1040 x 1040 µm²; (b) y-z cross-sections at various x values (from left to right x1, x2, and x3), 500 x 1040 µm²; (c) x-z cross-sections at various y values (from top to bottom y1, y2, and y3), 1040 x 500 µm².


We have tested the improvement of SNR with increasing number of holograms N. As described above, the SNR is expected to grow as $N^2$. As seen in Figure 4.2, the four-fold increase in N from 100 to 400 lowers the noise floor from about -30 dB to about -45 dB, which is consistent with $10\log_{10}16 \approx 12\ \mathrm{dB}$. This data set was taken without the clear tape attachment.

Figure 4.2. Signal-to-Noise-Ratio Improvement. The peak in each semi-log graph represents the surface of the resolution target. As N increases four-fold, the SNR increases by 12 dB, as expected.

4.3.2 Biological Samples: Pig Retina

In the following we present a few examples of tomographic imaging of biological specimens using digital interference holography. Figure 4.3 and Figure 4.4 are images of porcine eye tissue provided by the Ophthalmology Department at USF. The eye was preserved in formaldehyde and refrigerated, and a piece of the sclera, with retinal tissue attached, was cut out for imaging. The holographic image acquisition and computation of


the optical field are carried out for each of 50 wavelengths in the range from 565 nm to 602 nm. Superposition of the images, in the DIH process described above, reveals the principal features of the retinal anatomy. The imaged surface areas are 0.67 x 0.67 mm² for Figure 4.3 and 1.04 x 1.04 mm² for Figure 4.4; the axial range is $\Lambda = 500\ \mu m$ and the axial resolution is $\delta z = 10\ \mu m$ for both image sets. The measured SNR for these images was about 45-55 dB.

In Figure 4.3, the images reveal the convex surfaces of blood vessels, as well as an approximately 150 µm thick layer of retina on top of the choroidal surface. Apparently the blood vessels in Figure 4.3 were fixed with blood in them, while Figure 4.4 shows mostly empty and flattened blood vessels. The preparation and handling of the tissue sample resulted in a tear of some of the retinal tissue: the upper right half of Figure 4.4 has intact retinal tissue, while the lower left half is missing the retinal layer, and the choroidal surface is exposed. In Figure 4.4b, the boundary marked c is the bare choroidal surface, while surface a is the choroidal surface seen through the retinal surface b. The index of refraction of the retinal layer causes the choroidal surface to appear at a different depth compared to the bare surface, causing the break in the outline of the choroidal surface in Figures 4.4b and 4.4c.


67 a) b) c) Figure 4.3. The Reconstructed Volume of the Retina with Filled Blood Vessels: (a) x-y cross-section, 670 x 670 2m. (b) y-z cross section along x1, 500 x 670 2m. (c) x-z cross sections at various y values, from top to bottom, y1 and y2, 670 x 5002m. a) b) c) Figure 4.4. The Reconstructed Volume of the Retina with Empty Blood Vessels: (a) x-y cross-section, 1040 x 1040 2m. (b) y-z cross sections along x1, x2, and x3, 500 x 1040 2m. (c) x-z cross sections at various y pl anes, from top to bottom, y1, y2 and y3, 1040 x 500 2m.


The dispersion-compensation method is applied to all of the reconstructed samples above. As an example, the phase-matching technique was demonstrated on a human macula sample. The imaged surface area is 5020 x 5020 µm² (Figure 4.5a), with a physical axial range of 209.75 µm and a physical axial resolution of $\delta z = 4.19\ \mu m$. Tomographic images of the human macula are shown in Figure 4.5b and Figure 4.5c, respectively. The nerve fiber layer and the retinal pigment epithelium are emphasized in Figure 4.5b by applying the phase correction. The optical thickness between the retinal nerve fiber layer (NFL) and the retinal pigment epithelial layer (RPE) is about 84 µm.

Figure 4.5. Phase-Matching Demonstration on a Human Macula Sample: a) x-y cross-section, FOV = 5000 x 5000 µm²; b) x-z cross-section along the dashed line in Figure 4.5a, 5020 x 209.75 µm², with the phase-matching scheme included; c) x-z cross-section along the same dashed line, 5020 x 209.75 µm², without the phase-matching scheme; λ: 0.560-0.600 µm; δz = 4.19 µm; Δx = Δy = 19.6 µm; NX = NY = 256 pixels; NZ = 50 pixels.


694.4. Conclusion We have presented results of imaging expe riments using digital interference holography. Calibration experiments using re solution target demonstrates improvement of SNR with increasing number of holograms consistent with theoretical prediction. Imaging experiments on retinal tissue reveal topography of blood vessels as well as optical thickness profile of the retinal layer. The SNR of tissue images is comparable to that of resolution target, implying the imaging syst em is operating at close to theoretical optimum. Further improvement in SNR may be achieved if the hologram number N is increased further. At this point, however, imperfection in the phase matching scheme seems to be limiting such improvement. M odification of the hologram exposure method have to be made so that the holograms are ta ken at equal intervals of wave vectors, not wavelengths, as well as automatically mini mize the reference-object distance difference by a simple interferometric tracking and fee dback. The next chapter covers the in-vitro imaging of the human ophthalmic tissue using DIH, and a comparison of ophthalmic devices capabilities is given relative to the DIH. Acknowledgments The authors thank Dr. Hilbelink and Dr. Mar go of USF School of Me dicine for providing the tissue samples.


704.5. Bibliography [1]. Kim, M.K., Yu, L., and Mann C.J., Inter ference techniques in digital holography, J. Opt. A: Pure Appl. Opt. 8, 512-23 (2006). [2]. Potcoava, M.C., and Kim, M.K., Optical tomography for biomed ical applications by digital interference holography, Meas. Sci. Technol., 19, 074010 (2008). [3]. C. K. Hitzenberger, A. Baumgartner, W. Drexler, and A. F. Fercher, Dispersion effects in partial coherence interferometry: implications for intraocular ranging, J. Biomed. Opt. 4, 144 (1999). [4]. Hammer D.X., Welch A.J., Spectrally resolved white-light interferometry for measurement of ocular dispersion, J. Opt. Soc. Am. A., 16, 2092-2102 (1999). [5]. C.K. Hitzenberger, A. Baum gartner, W. Drexler, A.F. Fe rcher, Dispersion effects in partial coherence interferometry: implications for intraocular ranging, J. Biomed. Opt. 4, 144-151 (1999). [6]. Drexler, W., Morgner, U., Krtner, F. X., Pitris, C., Boppart, S. A., Li X. D., Ippen E. P., and Fujimoto J. G., In vivo ultrahigh-resolution optical coherence tomography, Opt. Lett., 24 1221-1223 (1999). [7]. Fercher, A. F., Hitzenberger, C. K., Sticker, M., Zawa dzki, R., Karamata, B., Lasser, T., Numerical dispersion compensation for Pa rtial Coherence Interf erometry and Optical Coherence Tomography, Opt. Express 9 12, 610-615 (2001) [8]. Choi, D., Hiko-Oka, H., Amano, T., Furukawa, H., Kano, Nakanishi, F., M., Shimizu, K., and Ohbayashi, K., Numerical compensation of dispersion mismatch in discretely swept optic al-frequency-domain-reffectometry optical coherence tomography, Jpn. J. Appl. Phys. Part 1 45, 6022 (2006).


71 [9]. Tumlinson, A.R., Hofer, B., Winkler, A.M., Povazay, B., Drexler, W., Barton, J.K., Inherent homogenous m edia dispersion co mpensation in frequency domain optical coherence tomography by accurate k-sampling, Appl Opt. 47, 687-93 (2008). [10]. Cense, B., and Nassif, N.A., Ultrahighresolution high-speed retinal imaging using spectral-domain optical coherence tom ography, Opt. Express 12, 2435-2447 (2004).


72 CHAPTER 5 IN-VITRO IMAGING OF OPHT HALMIC TISSUE BY DIGITAL INTERFERENCE HOLOGRAPHY This chapter introduces the in-vitro imaging of human optic nerve head and retina by the digital interference holography. Samples of peripheral retina, macula, and optic nerve head from two formaldehyde-preserved human eyes were dissected and mounted onto slides. Holograms were captured by a monoc hrome CCD camera (Sony XC-ST50, with 780 x 640 pixels and pixel size of ~ 9m ). Light source was a so lid-state pumped dye laser with tunable wavelength range of 560605 nm. Using about 50 wavelengths in this band, holograms were obtained and numerical ly reconstructed using custom software based on NI LabView. Tomographic images were produced by superposition of holograms. Section 5.1 reviews some of the eye fundus diseases identified with existent ophthalmic devices. Section 5. 2 gives details of samples preparation and ophthalmic tissue. Section 5.3 discusses the experimental setup for ophthalmic imaging. Section 5.4 presents experimental results. Finally section 5.5 concludes this chapter. 5.1. Introduction Imaging methods have become a critical player in the field of ophthalmology, and currently the clinical ophthalmologist depends heavil y on such imaging to guide


73 diagnosis and treatment decisions. The optic nerve and macula are emphasized in currentday ocular imaging. Diseases of optic nerve (glaucoma) and macula (age related macular degeneration, epiretinal membrane, cystoid or diabetic macular edema) can be objectively diagnosed and monitored with the aid of optical imagi ng techniques. For glaucoma management, accurate evaluation and asse ssment of optic disc topography and neuroretinal rim area is helpful for tracki ng glaucomatous change [1-8]. Macular thickness in particular nerve fiber layer a nd ganglion cell layer th ickness may also be important in evaluating glaucomatous change s [9,10]. Studies have shown correlations between optic nerve cup meas urements (cup volume, the ratio of cup area to disc area, and cup shape) and nerve fiber number [9]. Treatment of macular diseases, such as ag e-related macular dege neration and cystoid macular edema, depends on accurate knowledge of retinal thickness and microanatomy. Drusen can be identified in dry macula r degeneration with tomographic imaging techniques; more importantly, subretinal fl uid can be identified in exudative macular degeneration. Epiretinal membranes can be visualized a bove the nerve fiber layer and aid in surgical planning. Macular edema, often difficult to assess clinically, can be easily diagnosed and objectively followed throughout treatment to mark resolution of intraretinal fluid by tomographic imaging. Digital Interference holography (DIH) is similar to OCT, in that it offers rapid 3D imaging with theoretically higher resolution [1 7-25]. The goal of the present study is to evaluate the thickness of the macula and to quantify optic nerve head characteristics (shape, diameter, cup depth, and cup width), using micrometer reso lution DIH. DIH may provide another option in ocular imaging, potentially providing high resolution 3D


information which could potentially aid in guiding diagnosis and treatment of many ocular diseases.

5.2. Methods

5.2.1 Specimen Preparation

The two formaldehyde-preserved human eyes were obtained from the Lions Eye Institute for Transplant & Research (Tampa, FL), and each was dissected into two hemispheres. Sample preparation was performed under a microscope to minimize tissue damage. The inner vitreous was peeled away using tweezers and the sclera was removed using microforceps. Samples of the peripheral retina, macula, and the optic nerve head were removed and flat-mounted on microscope slides. Due to desiccation of tissue, the measured macula thickness and optic disc parameters are smaller compared to normal values. The research did not meet the USF definition of human research activities; therefore, IRB approval was not required.

5.2.2 Macula and Optic Nerve Characteristics

The term "macula" clinically refers to the area of the retina within the temporal vascular arcades, typically spanning a diameter of 5-6 mm. Histologically, it is a region with multiple layers of ganglion cell nuclei. The neurosensory retina can be divided into several layers, which include (from anterior to posterior): internal limiting membrane, nerve fiber layer, ganglion cell layer, inner plexiform layer, inner nuclear layer, outer plexiform layer, outer nuclear layer, inner and outer segments of photoreceptors, and


retinal pigment epithelium. Studies report central foveal thickness in normal patients to be an average of 182 microns by OCT measurement [28, 32]. A schematic of optic disc head geometry is shown in Figure 5.1. The anterior surface of the optic nerve is visible ophthalmoscopically as the optic disc, an oval structure measuring 1.5 mm horizontally and 1.75 mm vertically, with a depression known as the physiologic cup located slightly temporal to the geometric center of the disc. The central retinal artery passes through this cup, vascularizing the retina. The optic nerve head consists of four parts: the superficial nerve fiber layer, the prelaminar region, the laminar region, and the retrolaminar region. Glaucoma damages the superficial nerve fiber layer [26].

Figure 5.1. Optic Disc Geometry and Parameter Representation.


5.3 Theory

A hologram can record all information present in a wave front, including both amplitude and phase. For this reason it can reproduce the three-dimensional wavefront structure of the object. In DIH [17-25], images are reconstructed from a number of 2D digitally recorded holograms while the wavelength is varied at regular intervals. The amplitude and phase information of the optical field are reconstructed from the digitally recorded holograms using the angular spectrum method [21-24, 33]. The optical field E(x, y; z) is assumed to be a solution of the wave equation in the frequency domain (the Helmholtz equation), which has the form

(∇² + k²) E(x, y; z) = 0,

where k² = k_x² + k_y² + k_z², k = 2π/λ is the wavenumber (the modulus of the propagation vector), and k_x, k_y, k_z represent the spatial frequencies along x, y, z respectively. One way to solve the wave equation is to use the angular spectrum method [21-24, 33]. Starting at the hologram plane, z = 0 (the aperture plane), the Fourier transform of the optical field E(x, y; 0) represents the angular spectrum at that plane, A_0(k_x, k_y; 0). A complex transfer function is used to propagate the angular spectrum along the z axis toward the reconstruction plane at z = Z, and the angular spectrum becomes

A(k_x, k_y; z) = A_0(k_x, k_y; 0) exp(i k_z z).

The field at any other z-plane can be calculated with just one inverse Fourier transform of the angular spectrum at that plane. The superposition of the multiple three-dimensional reconstructed optical fields yields tomographic images with narrow axial resolution. The range of physical sizes and resolution of objects are controlled by the proper choice of wavelength interval.
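In software, this propagation step reduces to a pair of FFTs. The following sketch is illustrative Python/NumPy, not the author's LabView code; the function name and arguments are ours:

    import numpy as np

    def angular_spectrum_propagate(E0, wavelength, dx, z):
        """Propagate a complex field E0, sampled with pixel pitch dx (same
        length units as wavelength and z), from the plane z = 0 to a plane z."""
        ny, nx = E0.shape
        k = 2 * np.pi / wavelength
        kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
        ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
        KX, KY = np.meshgrid(kx, ky)
        kz_sq = k**2 - KX**2 - KY**2
        kz = np.sqrt(np.maximum(kz_sq, 0.0))
        H = np.exp(1j * kz * z) * (kz_sq > 0)   # transfer function; evanescent components dropped
        A0 = np.fft.fft2(E0)                    # angular spectrum at z = 0
        return np.fft.ifft2(A0 * H)             # reconstructed field at the plane z

Because only the transfer function depends on z, a single filtered hologram can be refocused to any plane at the cost of one inverse FFT per plane.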


Under the Huygens principle, the source is treated as many individual point sources located at r_P from the center of the source, Figure 5.2.

Figure 5.2. Sketch of Object, Hologram and Reconstruction Planes. Illustration of the object wave scattered off the object points and the propagation of the real image wave to the reconstruction planes; k_x, k_y, k_z represent the spatial carrier frequencies in the frequency space, imparted by the offset angle of the reference wave with respect to the optical axis of the scattered wave.

Scatter from a point initiates spherical waves. When an object is illuminated by a laser beam of wavelength λ, each point on a wave front serves as a source of secondary spherical wavelets, O(r_P) exp(ik|r - r_P|), where the object function O(r_P) is proportional to the amplitude and phase of the wavelet scattered or emitted by the object point. The object


field E_OBJ,i for a specific wavelength interferes with the reference field E_REF,i at the hologram plane, and the amplitude and phase of the object field at the hologram plane are recorded by the hologram in the form of intensity H_i(x_h, y_h):

H_i(x_h, y_h) = |E_OBJ,i|² + |E_REF,i|² + E_OBJ,i E*_REF,i + E*_OBJ,i E_REF,i.   (5.1)

The first two terms represent the zero-order term, and the third and fourth represent the two conjugate images, virtual and real. We call the optical field at the hologram plane E_0(x_0, y_0; z_0 = 0). Accordingly, a complex field E_0(x_0, y_0; 0) at a position (x_0, y_0; 0) can be decomposed into its spectrum of plane-wave components A(k_x, k_y; 0) defined by the Fourier transform

A(k_x, k_y; 0) = ∫∫ E_0(x_0, y_0; 0) exp[-i(k_x x_0 + k_y y_0)] dx_0 dy_0.   (5.2)

Given the field at a specified plane (z = 0), we wish to calculate the field at another plane z where the object is located. The Fourier transforms of the first three terms of Equation (5.1) are eliminated by applying a filter in the Fourier space of the object field, Equation (5.2). The angular spectrum can then be propagated in space along the z axis, perpendicular to the hologram plane, by multiplying Equation (5.2) by exp[i k_z z]. The reconstructed complex wave-field E_i(x, y, z) is found from

E_i(x, y, z) = (1/4π²) ∫∫ A(k_x, k_y; 0) exp[i(k_x x + k_y y + √(k² - k_x² - k_y²) z)] dk_x dk_y = F⁻¹{ F{E_0} · F{h} } = F⁻¹{ A_0 · H },   (5.3)

where H(k_x, k_y) = exp[i z √(k² - k_x² - k_y²)] is the Fourier transform of the Huygens PSF h, for k_x² + k_y² ≤ k². The holographic process is repeated using N different wavelengths, generating the holograms H_1(x_h, y_h), ..., H_N(x_h, y_h). From each of the holograms, the


optical field E_i(x, y, z; λ) is reconstructed as a complex 3D array over the volume in the vicinity of the object. This process is illustrated using the superposition principle,

E(r) = Σ_{i=1..N} E_i(r) ~ Σ_{i=1..N} ∫ O(r_P) exp(i k_i |r - r_P|) d³r_P = ∫ O(r_P) Σ_{i=1..N} exp(i k_i |r - r_P|) d³r_P ~ ∫ O(r_P) δ(r - r_P) d³r_P ~ O(r).   (5.4)

When N goes to infinity, the sum under the volume integral in Equation (5.4) becomes a delta function. In this way, the conjugate of the digitized scattered wave is reconstructed at various z positions and is proportional to the real object image wave at r, O(r). The addition of the 3D optical fields behaves as a periodic sequence of pulse-like peaks with an optical period or optical beat wavelength (optical axial extent) Λ_OPD = 2π/δk and optical axial resolution δz_OPD = Λ_OPD/N, where the wavenumber range is Δk = k_max - k_min and the wavenumber increment is δk = Δk/(N - 1). In the Michelson interferometer, the physical difference in length between the two arms, Δz, depends on the optical path difference (OPD) between the object wave and the reference wave, the optical-frequency-dependent index of refraction n(ν), and the double pass in reflection mode:

Δz = OPD / (2 n(ν)).   (5.5)

Assuming an index of refraction of 1.38 in the macular retina, the physical difference in length becomes Δz = 0.36 (OPD). Therefore, the physical beat wavelength becomes Λ = 0.36 Λ_OPD and the physical axial resolution becomes δz = 0.36 δz_OPD. Knowing the axial resolution δz and the number of pixels N_z along


the axial scan (z axis), one can obtain quantitative information on the height profile of the object.

5.4 Ophthalmic DIH Scanning System

The DIH instrument uses a high-speed, high-resolution, non-contact, non-invasive technology and has no mechanical moving parts. It consists of an off-axis Michelson interferometer in a backscattering geometry, shown in Figure 5.3. The light source is generated by a Millenia V solid-state laser pumping a ring dye laser (Coherent 699), centered on 575 nm with an output power of up to 500 mW. The laser beam, after being spatially filtered, collimated (BE) and linearly polarized (P2), is divided by the polarizing beam splitter (PBS) into two beams, the sample (OBJ) and reference (REF) waves. Using the lens L1, the laser beam is focused on the back focus of the lens L2, so that the object is illuminated with collimated light. The objective L3 is placed in the reference arm to compensate for the phase curvature induced by L2 in the object arm. The lens L2 projects the hologram plane H (an optical conjugate of the reference mirror plane) onto the CCD camera through the C-mount lens L4, which forms the image at infinity. The digital camera (monochrome CCD, 30 frames per second, 780 x 640 pixels, with square pixels of 9 μm) acquires an image of the hologram at H, a superposition between a plane reference wave and the object wave that has diffracted over a distance z = Z from the object plane. We call Z the reconstruction distance because the object can be numerically brought into focus by adjusting the distance Z without moving the object or the CCD camera. Polarizers P1, P2, analyzer P3 and quarter-wave plates QW are used in conjunction with the polarizing beam splitter PBS to continuously adjust the overall laser


power in the experimental setup and inside the interferometer. The analyzer P3 is cross-polarized with respect to the polarizer P2, and the quarter-wave plates are oriented at 45°. The light reflected at the object and at the reference mirror passes the quarter-wave plates twice, changing the polarization plane by 90°, and interferes at the CCD plane after passing the analyzer P3. The role of the polarizer P3 is to pass the light reflected from the sample and to block stray light from the optics [27]. A variable aperture placed at the back focal (Fourier) plane of the objective lens L2 can be useful in controlling the angular spectrum of the object field.
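In software, the wavelength scan then reduces to the superposition of Equation (5.4): each filtered hologram is propagated to a stack of z planes and the complex fields are summed. The sketch below is again illustrative Python rather than the LabView implementation, and it reuses angular_spectrum_propagate from Section 5.3; the input fields are assumed to be already filtered in Fourier space so that only the object term of Equation (5.1) remains.

    import numpy as np

    def dih_volume(object_fields, wavelengths, dx, z_planes):
        """Superpose per-wavelength angular-spectrum reconstructions into a
        tomographic volume, following Eq. (5.4)."""
        ny, nx = object_fields[0].shape
        volume = np.zeros((len(z_planes), ny, nx), dtype=complex)
        for E0, wl in zip(object_fields, wavelengths):
            for iz, z in enumerate(z_planes):
                volume[iz] += angular_spectrum_propagate(E0, wl, dx, z)
        # |volume| peaks where the per-wavelength fields add in phase,
        # i.e. at the reflecting structures of the object
        return np.abs(volume)

For the macula data set described below, such a loop would run over roughly 50 wavelengths between 0.56 and 0.60 μm and a comparable number of reconstruction planes.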


Figure 5.3. Experimental Apparatus. RDL: ring dye laser; M1 and M2: mirrors; REF: reference mirror; L1, L2, L3, L4 and L5: lenses; BE: beam expander; P1, P2 and P3: polarizers; PBS: polarizing beamsplitter; QWs: quarter waveplates; A: aperture; H: hologram plane; OBJ: object; MM: motorized micrometer; MMC: controller for MM.

5.5. Results

The amplitude images shown here were obtained from a healthy excised human eye supplied to us by the Lions Eye Institute for Transplant & Research of Tampa, FL. The holographic image acquisition and computation of the optical field of the macula sample are carried out for about 50 wavelengths in the range of 560-600 nm. Superposition of images, in the DIH process described above, reveals the topographic mapping within the macular tissue, Figure 5.4a, and clearly delineates borders of blood vessel segments. The imaged surface area is 5020 x 5020 μm², with a physical axial range of 209.75 μm and physical


axial resolution δz = 4.19 μm. Different layers are distinguishable in the cross-sectional images of the human macula, Figure 5.4b, 5.4c. The optical thickness between the retinal nerve fiber layer (NFL) and the retinal pigment epithelial layer (RPE) is about 84 μm.

Figure 5.4. The Reconstructed Volume of the Human Macula Sample: (a) x-y cross-section, FOV = 5020 x 5020 μm²; (b) y-z cross sections at various x values, 5020 x 209.75 μm², from left to right, x1, x2, and x3; (c) x-z cross sections at various y values, 209.75 x 5020 μm², from top to bottom, y1, y2, and y3; λ: 0.560-0.600 μm; δz = 4.19 μm; Δx = Δy = 19.6 μm; NX = NY = 256 pixels; NZ = 50 pixels.

Figure 5.5a represents the en-face reconstructed 3D structure of the optic nerve region with an area of 5020 x 5020 μm². We can identify the scleral ring (disc) diameter Ddisc = 1750 μm and the cup diameter Dcup = 660 μm, Figure 5.5b. Our measurements cannot be clinically correlated with normal anatomic values as the tissue was postmortem and edematous, falsely enlarging the disc diameter with swelling of the surrounding nerve fiber layer tissue. Vitreous papillary adhesions also had to be removed


with forceps from the optic nerve tissue, and any remnants lying atop the nerve tissue rim could have falsely enlarged our measurements. The optic nerve sample was slightly tilted when it was imaged, resulting in the left side of the scleral ring appearing darker than the right. The optic cup has a well distinguished shape with high reflectivity, depicted by brighter colors in the 3D imaging. The higher reflectivity within the cup depth explains the higher noise in the region of the optic cup (Figure 5.5a). The black valley around the disc is a false valley due to a phase jump at the edge of the disc.

Figure 5.5. The Reconstructed Volume of the Human Optic Nerve Sample: (a) x-y cross-section, FOV = 5020 x 5020 μm²; Z0 = 446 μm; λ: 0.563-0.605 μm; δz = 4.06 μm; Δx = Δy = 19.6 μm; NX = NY = 256 pixels; NZ = 50 pixels; (b) cross section along the dashed line in Figure 5.5a; Dcup = cup diameter, Ddisc = disc diameter.

Figure 5.6a represents the en-face reconstructed optical field of the rhombus region from Figure 5.5a. The cross-sectional images in the y-z planes, Figure 5.6b, and x-z planes, Figure 5.6c, are also shown. In practice, if one uses a finite number of


wavelengths, N, with a uniform wavenumber increment δk, then the object image repeats itself at a beat wavelength Λ = 2π/δk, with axial resolution δz = Λ/N. This occurred in the x-z cross sections, Figure 5.6c top, where the bottom of the cup depth, h2, is found at the top. Therefore, to find the cup depth, h1 is added to h2, where h1 (relative to the baseline height) and h2 are the upper and bottom parts of the cup. By using appropriate values of δk and N, the physical axial size (beat wavelength) Λ can be matched to the axial range of the object, and δz to the desired level of physical axial resolution. The height of the x-z cross sections, Figure 5.6c top, represents the physical axial size of 280.35 μm, made of 50 pixels (the number of wavelengths being scanned), with a physical axial resolution δz = 5.61 μm. Using this information one can quantify the cup depth as h = 355.11 μm and the cup slope, s, as about 47°.
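The sampling relations used above, together with the optical-to-physical conversion of Equation (5.5), can be collected in a few lines. The following is an illustrative Python sketch (not a re-derivation of the reported figures, which depend on the exact wavelength sampling used); depths exceeding the returned axial range wrap around modulo that range, which is why the cup depth was recovered as h1 + h2.

    import numpy as np

    def axial_sampling(lambda_min, lambda_max, N, n_medium=1.0):
        """Axial range (beat wavelength) and axial resolution of a DIH scan of
        N wavelengths spaced uniformly in wavenumber, converted to physical
        depth for reflection through a medium of index n_medium (Eq. 5.5)."""
        k_max = 2 * np.pi / lambda_min
        k_min = 2 * np.pi / lambda_max
        dk = (k_max - k_min) / (N - 1)      # wavenumber increment
        beat_opd = 2 * np.pi / dk           # axial range in optical path length
        scale = 1.0 / (2.0 * n_medium)      # double pass in reflection mode
        return beat_opd * scale, beat_opd * scale / N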


Figure 5.6. The Reconstructed Volume of the Human Optic Nerve Sample (the rhombus-shaped volume from Figure 5.5a): (a) x-y cross-section, FOV = 1100 x 1100 μm²; (b) y-z cross sections at various x values, 1100 x 280.35 μm², from left to right, x1, x2, and x3; (c) x-z cross sections at various y values, 280.35 x 1100 μm², from top to bottom, y1, y2, and y3; Z = 29.7 μm; λ: 0.565-0.595 μm; δz = 5.61 μm; Δx = Δy = 4.32 μm; NX = NY = 256 pixels; NZ = 50 pixels; s = the slope; h1, h2: heights.

The y-z cross-section images of the reconstructed volume of the second eye optic nerve at various x values are shown in Figure 5.7a. The characteristics of the cross-sectional area are: the area imaged is 4696 x 239.85 μm², the physical axial extent is 239.85 μm, the reconstruction distance Z = 5049.5 μm, the lateral resolution is 18.37 μm (256 pixels), and the physical axial resolution is δz = 4.8 μm (51 pixels). A cross section (x3) of the intensity-level data along the dotted line is shown in Figure 5.7b, from which the disc diameter and cup depth were quantified: Ddisc = 1750 μm, and the cup


depth is about 240 μm. The cup-shaped depression is located slightly temporal to its geometric center, as shown in all three cross-sections.

Figure 5.7. (a) Y-Z Cross Section Images of the Reconstructed Volume of the Human Optic Nerve Sample (the second eye optic nerve): at various x values, 4696 x 239.85 μm², from left to right, x1, x2, and x3; (b) intensity levels as a function of x with temporal, cup, and nasal peaks; Z = 5049.5 μm; λ: 0.562-0.597 μm; δz = 4.8 μm; Δx = Δy = 18.37 μm; NX = NY = 256 pixels; NZ = 51 pixels; Ddisc: disc diameter; high intensity in the region of the cup does not correspond to physical shape (see discussion in the text).

5.6. Conclusions

Imaging techniques in ophthalmology are currently evolving rapidly. Histological resolution approaching that of in-situ imaging is the goal of many imaging techniques. The ideal imaging device should be a robust instrument that uses high-speed, non-


contact, non-invasive technology, has no mechanical moving parts, and has an axial resolution better than 5 μm. There is no doubt that OCT, with additional extensions, has become one of the most advanced imaging techniques in the ophthalmology area. For ophthalmic imaging, the light source has to be chosen as a function of the characteristics of the reflected spectrum. This spectrum varies with three histological parameters: RPE melanin, haemoglobin, and choroidal melanin [31]. The intensity decreases in the green region as the melanin increases and the RPE response becomes weaker. Also, the absorption of the ophthalmic tissue depends on haemoglobin in the visible and on water in the infrared. Water absorption in the vitreous at 950 nm and above 1100 nm limits the scanning wavelength range to about 150 nm [13, 14]. Therefore, light sources operating at a center wavelength between 800 and 850 nm are necessary to avoid absorption in the ocular media. The wavelength band centered on 830 nm has been employed in clinical OCT instruments. An ultrahigh-resolution (UHR) OCT technology using a broadband Ti:Al2O3 laser (centered on 800 nm) with an axial resolution of 2-3 μm has been demonstrated for in-vivo imaging of retinal and corneal morphology [12, 16]. Other recent Fourier domain detection methods utilize high-speed UHR OCT imaging with a bandwidth of ~150 nm centered near ~900 nm [11, 14]. Spectral/Fourier domain detection (SD-OCT) utilizes high-speed imaging at 1300 nm with an axial resolution of 10 μm [41], and at 1060 nm with an axial resolution of 10.4 μm [15]. Swept source/Fourier domain detection (SS-OCT) for 3D volumetric imaging of the retina [13] utilizes a bandwidth centered at 850 nm with an axial resolution of < 7 μm. Optical imaging of tissue at a longer wavelength (1300 nm) offers deeper choroidal penetration, in spite of


the fact that tissue exposure is higher for wavelengths above 1000 nm. Also, the retinal tissue is more transparent for wavelengths in the near-infrared (above 1000 nm). Despite these advances in the OCT optical biopsy, there is still room for improvement of OCT techniques. The results of the Advanced Imaging for Glaucoma Study (AIGS) indicate that SD-OCT is still in its infancy [29]. The question of why SD-OCT has been unsuccessful compared to time-domain OCT (TD-OCT) in detecting glaucoma by imaging the circumpapillary retinal nerve fiber layer (cpRNFL) remains a problem in retinal imaging technology [29]. All OCT devices are characterized by complicated software for image processing and registration of cross-sectional OCT images with a fundus image from the same OCT data set. Digital interference holography (DIH) offers rapid 3D imaging with theoretically higher resolution than OCT, and without the need to reassemble images from scans. Our research has demonstrated that, in vitro, DIH can measure the dimensions of the scleral ring and provide a definitive answer regarding the size of the optic disc, a clinically important parameter that is not provided by current OCT instruments. Numerical focusing of holographic images can be accomplished from a single exposed hologram. This work is, to our knowledge, a novel and innovative approach to retinal imaging. The goal in the future is to improve DIH imaging parameters to a level compatible with clinical applications and to identify and address technical challenges for such applications. At this point, our scanner has to overcome the signal-to-noise ratio issue to provide clinically relevant information. The scanning time is 30 s and the signal-to-noise ratio (SNR) is about 50 dB [23]. The imaging dynamic range or SNR will be improved to


about 90 dB by replacing the dye laser with a Ti:Sapphire laser (longer wavelengths), introducing a high-speed camera, and increasing the number of holograms from 50 to 500. With increased speed and resolution, DIH has the potential to provide a significant improvement in terms of information captured, both for the diagnosis of disease and for the understanding of normal histopathology and physiology. With better axial resolution and greater axial range, we expect to be able to extract more information about retinal thickness and structure. In-vivo imaging of the human eye needs to be fast enough to avoid blurring due to eye movement (tremors, drifts, saccades). More attention needs to be paid to scanning of wavelengths at high speed. Normally, the dye laser wavelength is tuned by rotating a birefringent filter (BRF) with a micrometer. The micrometer moves at a speed of ~0.5 mm/s, and information on the laser spectral and power behavior during a high-speed scan is not readily available. A Ti:Sapphire laser with an appropriate actuator and a sweep function parameterized by time is a good option that works very well with SS-OCT systems. Also, a retinal tracker system for three-dimensional retinal morphology and function will be developed and integrated in the DIH setup. We plan to translate and adapt the optical bench apparatus onto a fundus camera. Optically, the apparatus will consist of a Michelson interferometer with a fundus camera attached to the object arm as an optimized imaging lens for the object, i.e. the eye. Use of optical fibers and couplers allows a flexible and compact design of the holography module. With the holography imaging module in place, a set of basic digital holography imaging experiments is to be carried out in order to establish and optimize the imaging characteristics of the fundus camera used as a holography camera. The reference mirror


will have a motorized z-translation stage for proper matching of the relative distance to the object (retina), and the optics will be modified to compensate for the eye's optics in the object arm. Another challenge in ophthalmic imaging applications is to ensure that the level of laser radiation on the eye is not damaging to vision. The SNR of the system needs to be sufficient so that a radiation level weak enough to be safe can still generate good-quality images. Once new equipment is integrated in the setup, we will carry out a set of measurements to calibrate the irradiance arriving at the object. The image quality, i.e. SNR, is to be measured as a function of the irradiance as well as of various other optical parameters, such as polarization, object-reference intensity ratio, and the type of object being imaged. Admittedly, limitations inherent to post-mortem cadaveric models did exist in regard to our ability to correlate measurements from imaging with known normal anatomic findings. However, the images do illustrate the ability of DIH imaging to successfully depict and measure the contour of ocular tissue. Our goal in imaging was not to replicate known measurements, but rather to illustrate the ability of DIH to image a 3D ocular structure with adequate resolution. With full development of its capabilities, DIH may provide another option in ocular imaging, providing high-resolution 3D information which could potentially aid in guiding the diagnosis and treatment of many ocular diseases.

Acknowledgments

This work was supported by NSF BISH grant #0755705.


5.7. Bibliography

[1]. J.C.H. Tan and R.A. Hitchings, Invest. Ophthalmol. Vis. Sci. 44, 1132 (2003).
[2]. K.H. Min, G.J. Seong, Y.J. Hong, et al., Kor. J. Ophthalmol. 19, 189 (2005).
[3]. J. Xu, H. Ishikawa, G. Wollstein, et al., Invest. Ophthalmol. Vis. Sci. 49, 2512 (2008).
[4]. O. Geyer, A. Michaeli-Cohen, D.M. Silver, et al., Br. J. Ophthalmol. 82, 14 (1998).
[5]. F.S. Mikelberg, Can. J. Ophthalmol. 42, 421 (2007).
[6]. J.B. Jonas, G.C. Gusek, and G.O.H. Naumann, Invest. Ophthalmol. Vis. Sci. 29, 1151 (1998).
[7]. N.V. Swindale, G. Stjepanovic, A. Chin, et al., Invest. Ophthalmol. Vis. Sci. 41, 1730 (2000).
[8]. C. Bowd, L.M. Zangwill, E.Z. Blumenthal, et al., J. Opt. Soc. Am. A 19, 197 (2002).
[9]. C.K. Leung, W.M. Chan, W.H. Yung, et al., Ophthalmology 112, 391 (2005).
[10]. G. Wollstein, H. Ishikawa, J. Wang, et al., Am. J. Ophthalmol. 139, 39 (2005).
[11]. M. Wojtkowski, V. Srinivasan, J.G. Fujimoto, et al., Ophthalmology 112, 1734 (2005).
[12]. W. Drexler, U. Morgner, R.K. Ghanta, J.S. Schuman, F.X. Kärtner, J.G. Fujimoto, Nat. Med. 7, 502 (2001).
[13]. V.J. Srinivasan, I. Gorczynska, and J.G. Fujimoto, Opt. Lett. 32, 361 (2007).
[14]. V.J. Srinivasan, T.H. Ko, M. Wojtkowski, et al., Invest. Ophthalmol. Vis. Sci. 47, 5522 (2006).
[15]. Y. Yasuno, Y. Hong, S. Makita, et al., Opt. Express 15, 6121 (2007).
[16]. G. Wollstein, L.A. Paunescu, T.H. Ko, Ophthalmology 112, 229 (2005).
[17]. M.K. Kim, Opt. Lett. 24, 1693 (1999).


[18]. M.K. Kim, Opt. Express 7, 305 (2000).
[19]. A. Dakoff, J. Gass and M.K. Kim, J. Electr. Imag. 12, 643 (2003).
[20]. L. Yu and M.K. Kim, Opt. Express 13, 5621 (2005).
[21]. L. Yu and M.K. Kim, Opt. Lett. 30, 2092 (2005).
[22]. M.K. Kim, L. Yu and C.J. Mann, J. Opt. A: Pure Appl. Opt. 8, 512 (2006).
[23]. M.C. Potcoava and M.K. Kim, Meas. Sci. Technol. 19, 074010 (2008).
[24]. M.C. Potcoava, C.N. Kay, M.K. Kim, and D.W. Richards, Ophthalmic Technologies XIX 7163-10, SPIE BIOS 2009.
[25]. F. Montfort, T. Colomb, F. Charrière, et al., Appl. Opt. 45, 8209-8217 (2006).
[26]. C.K. Leung, W. Chan, Y. Hui, et al., Invest. Ophthalmol. Vis. Sci. 46, 891 (2005).
[27]. H. Piller, Zeiss-Werkschrift 34, 87 (1959).
[28]. A. Chan, J.S. Duker, T.H. Ko, et al., Arch. Ophthalmol. 124, 193 (2006).
[29]. S.J. Preece and E. Claridge, Phys. Med. Biol. 47, 2863 (2002).
[30]. R. Huber, K. Taira, M. Wojtkowski, and J.G. Fujimoto, Proc. SPIE 6079, 60790U (2006).
[31]. J.G. Fujimoto, Nature Biotechnology 21, 1361 (2003).
[32]. American Academy of Ophthalmology, Basic and Clinical Science Course: Fundamentals and Principles of Ophthalmology (2006).
[33]. J.W. Goodman, Introduction to Fourier Optics (3rd Edition, Roberts & Company Publishers, 2005).


CHAPTER 6

FINGERPRINT BIOMETRY APPLICATIONS OF DIGITAL INTERFERENCE HOLOGRAPHY

Digital interference holography (DIH) is a multiwavelength optical technique that can be used to build up the three-dimensional structure of fingerprints holographically. This chapter proposes to show how the DIH technique could be used in the field of forensic science as a powerful fingerprint scanner to identify and quantify Level 1 (pattern), Level 2 (minutia points), and Level 3 (pores and ridge contours) fingerprint characteristics from the amplitude images. Section 6.1 reviews some of the fingerprint features used in the enrollment, verification, and identification phases. Section 6.2 covers the theory of fingerprint imaging and object field reconstruction. Section 6.3 discusses the experimental setup for fingerprint imaging. Sample characteristics are discussed in Section 6.4. Section 6.5 presents experimental results. Conclusions are presented in Section 6.6.

6.1. Introduction

There are three kinds of fingerprints that can be identified where they were left behind: latent, visible, and plastic prints. Latent prints are left on the surface of objects and are invisible; a chemical is used to make them visible. Visible prints are


left when a finger is coated with a colored substance. Plastic prints are formed when the finger presses onto a soft surface such as wax, soap, or putty. All these prints are called exemplar fingerprints when they are obtained from human fingers using scientific tests under supervision. The features of fingerprints can be classified in three levels [1-7]. Level 1 features refer to the pattern type, such as arch, tented arch, left loop, right loop, double loop, and whorl. Level 2 features are formed when the ridge flow is interrupted by some irregularities, known as minutiae. Examples of minutiae are bifurcation, ending, line-unit, line-fragment, eye, and hook. Level 3 features include other dimensional characteristics like pores, creases, line shape, incipient ridges, scars, and warts. Finger recognition is a complex process that occurs in three phases: enrollment, identification and verification [2]. During the enrollment phase, fingerprints from different individuals are recorded digitally by the CCD camera. The identification process, or one-to-many matching, refers to finding the person who committed the crime based on matching his fingerprints against an existing database of known fingerprints. The verification process, or one-to-one matching, refers to the comparison of an individual's fingerprints against those of his/her enrolled fingerprint template. When enough similarities are found between three or more nearby minutiae in both claimant and enrollee fingerprints, then the fingerprints are said to match. This is called minutia matching. Another matching scheme is correlation matching [2], performed in the frequency domain because fingerprints in the space domain are subject to alignment errors such as elasticity, different noise between claimant and enrollee images, as well as translational and rotational freedoms. Most fingerprint technologies rely on the minutiae matching approach, which has higher recognition accuracy. The matching process is a


challenging task; usually it requires a few combined algorithms to improve the accuracy of the measurements. Therefore, there is still much potential for algorithmic improvement [8]. We use the multi-wavelength digital interference holography (DIH) scanner [13-22] and fingerprint recognition to build up the three-dimensional structure of the fingerprints and to identify and match fingerprint features that could be used in the two phases, identification and verification. The wavelength range is swept automatically, and for specific wavelengths the hologram is recorded digitally, followed by numerical reconstruction of the optical field. The axial resolution is a parameter that depends on the wavelength scanning range and is obtained by superposing all optical fields. The DIH scanner setup is a Michelson interferometer in off-axis reflection geometry. This work proposes to show how the DIH technique could be used in the field of forensic science to identify and quantify Level 1 (pattern), Level 2 (minutia points), and Level 3 (pores and ridge contours) fingerprint features. Our contribution is as follows: a new optical scanner, DIH, is introduced in the area of fingerprint recognition; a database of fingerprints is created for each enrolled subject using different print materials; the fingerprint 3D images along with minutiae extraction and minutiae matching are presented; using this technique we were able to obtain information not only about the ridges, but also the depth and the width of the ridges; and we built a three-dimensional structure of the fingerprint templates with micron-size axial resolution.

6.2. Theory

Diffraction is described by two mathematical methods. In the transfer function method, the object field at the observation plane is the inverse Fourier transform of the


Fourier transform of the object field at the aperture multiplied by the free-space transfer function. The second method is based on calculating the spatial distribution of the object field at the observation plane as an integral over individual responses of each point of the aperture through the point-spread function of the system. Depending on the type of wavelet arriving at the observation plane, the point-spread function can take various mathematical expressions. The theory developed below is based on the Fourier approach. Suppose an object is illuminated by a laser beam of wavelength λ. Considering the actual wave-front amplitude O(r_OBJ), the point r_OBJ on the object scatters the light into a Huygens wavelet, O(r_OBJ) exp(ik|r - r_OBJ|), where the object function O(r_OBJ) is proportional to the amplitude and phase of the wavelet scattered or emitted by the object point. For an extended object, the field at r is found by linearly superposing the wavelets from all points inside the object, and it has the expression

U(r) = ∫ O(r_OBJ) exp(ik|r - r_OBJ|) d³r_OBJ,   (6.1)

where the integral is over the object volume. In holography, the amplitude and the phase of the object field are recorded at the hologram plane (x_h, y_h, 0) as H_N(x, y) for each Nth wavelength. The total optical field resulting from the superposition of all N reconstructed 3D arrays is

U(x, y, z) = Σ_k ∫ O(r_OBJ) exp(ik|r - r_OBJ|) d³r_OBJ ~ ∫ O(r_OBJ) δ(r - r_OBJ) d³r_OBJ ~ O(r).   (6.2)


If a large number of wavelengths is used with uniform increment δ(1/λ), the object image repeats itself at the beat wavelength (axial size) Λ = 1/δ(1/λ), with an axial resolution of δz = Λ/N. We perform the reconstruction of the optical field using the angular spectrum algorithm. The advantages of the angular spectrum method over the more commonly used Fresnel transformation (or Huygens convolution) method are as follows. Firstly, the Fresnel transformation is accurate only when small angles of diffraction are involved [22]; the angular spectrum method and the Fresnel approximation are equivalent within the small-angle (paraxial) approximation. Secondly, the noise and frequency components in the Fourier space can be easily controlled by eliminating unwanted frequencies. Thirdly, no minimum reconstruction distance is necessary, as in the case of the Fresnel approximation [17]. Once the angular spectrum at z = 0 is calculated by a Fourier transform, the field at any other z-plane can be calculated with just one more Fourier transform.

6.3. Digital Interference Holography Fingerprint Scanner Setup

A schematic diagram of the DIH scanner setup is presented in Figure 6.1. We used a Michelson interferometer in a backscattering geometry with a ring dye laser source, tunable over a range of 563 nm to 615 nm with an output power of up to 500 mW. The object (fingerprint sample) is illuminated by a collimated beam as follows. The laser output, after being spatially filtered, collimated and linearly polarized, is split into sample and reference waves at the polarizing beam splitter, PBS. The focusing lens, L2, focuses the laser on the back focus of the objective lens L3, and L5 (a C-mount lens) forms the


image at infinity. The reference mirror is at the optical conjugate of the plane H through the matching objective lens L4. A digital CCD camera (8 bit, 30 frames per second, 780 x 640 pixels with 9 μm pitch) acquires the holographic interference between the plane reference wave and the object wave that has propagated (diffracted) over a distance z from the object plane. We call z the reconstruction distance because the object can be numerically brought into focus by adjusting the distance z without moving the object or the CCD camera. The fingerprint on a glass surface induces differences in the optical polarization or reflection, or both, between the clean part of the surface and that bearing the print [10]. The polarization optics (polarizers P1, P2, analyzer P3, and quarter-wave plates) and the polarizing beam splitter are used to continuously adjust the overall laser power in the experimental setup and inside the interferometer. A variable aperture placed at the back focal (Fourier) plane of the objective lens L3 can be useful in controlling the angular spectrum of the object field. Samples are attached to a microscope slide and mounted on a lens holder.
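The numerical focusing mentioned above can be automated by scanning the reconstruction distance and scoring image sharpness. The fragment below is an illustrative Python sketch, not part of the dissertation's software; the gradient-energy metric and the search range are arbitrary choices, and angular_spectrum_propagate is the propagation routine sketched in Chapter 5.

    import numpy as np

    def autofocus(E0, wavelength, dx, z_candidates):
        """Return the reconstruction distance that maximizes a simple
        sharpness metric (gradient energy of the reconstructed amplitude)."""
        best_z, best_score = None, -np.inf
        for z in z_candidates:
            amp = np.abs(angular_spectrum_propagate(E0, wavelength, dx, z))
            gy, gx = np.gradient(amp)
            score = np.sum(gx**2 + gy**2)
            if score > best_score:
                best_z, best_score = z, score
        return best_z

    # e.g. z_best = autofocus(E0, 0.58, 9.0, np.arange(500.0, 1500.0, 10.0))  # units: um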


Figure 6.1. Digital Interference Holography Fingerprint Scanner Setup. RDL: ring dye laser; Ms: mirrors; SF: spatial filter and expander; L1, L2: lenses with focal length of 25 cm; L3, L4: lenses with focal length of 15 cm; L5: C-mount lens set to infinity focus; PBS: polarizing beamsplitter; QWs: quarter waveplates; A: aperture; H: hologram plane; OBJ: object (visible, clay, cement); REF: reference mirror; MM: motorized micrometer; MMC: controller for MM.

6.4. Sample Characteristics

For the purpose of evaluating our DIH system in the enrollment phase, we created two sets of image databases, DB1 with a field of view (FOV) of 5.0 x 5.0 mm² and DB2 with an FOV of 10.4 x 10.4 mm². Each database contains image data from a) the visible fingerprint, b) the clay fingerprint, c) the plastic print on a mixture of clay and silver enamel, d) the plastic print on clay (the thumb was coated with a light layer of enamel before pressing into the clay), e) the plastic clay print only, and f) the plastic cement print of the


thumb fingerprint, from three subjects A, B, and C, with 5 impressions for each subject: visible fingerprints on glass using silver enamel, Figure 6.2a (subject A) and Figure 6.2b (subject B); the plastic print on a mixture of clay and silver enamel, Figure 6.2c; the plastic print on clay (the thumb was coated with a light layer of enamel before pressing into the clay), Figure 6.2d; the plastic clay print only, Figure 6.2e; and the plastic cement print, Figure 6.2f. We tested different kinds of household materials. One set of prints was made with Crayola Solid White Model Magic FUSION mixed with TESTORS Gloss Enamel Craft Paint, 1180 Steel Acier, Figure 6.2c, to make the imprints more reflective. The two materials become sticky if too much enamel is added, resulting in poor fingerprint impressions. The other set of prints was made with Van Aken Claytoon Modeling Clay only. Usually the plastic fingerprints are lifted using different impression materials. To obtain the positive (real) fingerprints, a few drops of Duco (ITW Devcon Corporation) household cement were poured on top of the negative (reverse of the real print) clay print [11, 12]. This kind of cement dries hard and clear, making the fingerprint features look natural. After a few hours, the clay was removed from below the cement print surface, and extra work was done to clean the positive fingerprint mold of impurities. Air bubbles are inevitable since the clay entraps air inside of it, Figure 6.2f. In order to affix the fingerprint samples in the optical setup, a small scanning screen was made with a microscope slide attached to a lens holder, Figure 6.1. All clay print samples in their different combinations and the cement sample were fixed on microscope slides facing the CCD camera.


Figure 6.2. Fingerprint Samples: the enamel visible fingerprints, a) subject A, b) subject B; the clay fingerprints, c) the plastic print on a mixture of clay and silver enamel, d) the plastic print on clay (the thumb was coated with a light layer of enamel before pressing into the clay), e) the plastic clay print only; and f) the plastic cement print.

6.5. Results

When a finger tip coated with enamel touches the microscope cover slip, a visible pattern of ridges and valleys is left behind, Figure 6.3a, 6.3d. From the experimental setup, one can see that the collimated laser beam illuminates the microscope cover glass at a right angle and the CCD camera plane is parallel to the object plane; therefore the light reflected from the glass only (valleys) is bright, since the specular reflection happens at a


right angle. The light reflected from the enamel prints (ridges) represents a combination of specular and diffuse light, so the ridge prints appear darker. We can notice a double-loop fingerprint pattern for subject A, with a few examples of bifurcation points (A1) and ending points (A2) in both the direct image, Figure 6.3a, and the image reconstructed using DIH, Figure 6.3b. The quality of the reconstruction of the visible fingerprints using DIH is high, since the flow of the ridges follows the same pattern on both the direct and reconstructed prints. Moreover, the closed and open pores (A3), located on the ridges, are very well distinguishable in both images. Taking into consideration that the lateral resolution in the x or y direction is Δx = Δy = 40.56 μm, with the same number of pixels, Nx = Ny = 256, the width of the fingerprint ridge to which arrow A4 points is 212 μm for subject A, in Figure 6.3b, and 454 μm (arrow B4) for subject B in Figure 6.3d. In the identification or verification stages, the examiner also determines the orientation of the imprint left by the fingerprint ridges. Here, using the 3D representation of the reconstructed optical field by DIH, Figure 6.3c, an examiner could obtain significant information about the orientation of the imprint left by the fingerprint ridges under different views. As a comparison, a direct image of subject B, with a right loop pattern, bifurcation point B1, ending point B2, and pores B3, is shown in Figure 6.3d. Therefore, based on all this information, DIH can act as an optical lifting tool in the area of visible fingerprint recognition.
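The widths quoted here follow directly from the pixel pitch of the reconstruction grid; a short illustration (the field of view and grid size are the stated values, and the helper name is ours):

    fov_um = 10383.0            # stated field of view in micrometers
    n_pixels = 256              # reconstruction grid size
    pitch = fov_um / n_pixels   # lateral pixel pitch, about 40.6 um per pixel

    def feature_width_um(n_px):
        """Physical width of a feature spanning n_px pixels in the image."""
        return n_px * pitch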


Figure 6.3. Enamel Visible Fingerprints, ridges (dark), valleys (bright): a) x-y direct image data, subject A; b) x-y reconstructed optical field, subject A; c) the reconstructed and rotated (Euler angles 4.32°, 234°, 0.23°) optical field, subject A; d) direct image data, subject B; FOV = 10.383 x 10.383 mm²; Δx = Δy = 40.56 μm; Nx = Ny = 256 pixels; the A and B labels represent Level 2 and Level 3 fingerprint characteristics.

Fingerprints left at a crime scene do not always present good quality. This is the case for the sample in Figure 6.2c, where the core part of the fingerprint is not shown. This is a fingerprint pattern with high reflectivity, similar to a silver coin. Using DIH, we can reconstruct the 3D fingerprint profile of this sample, the plastic print on a mixture of clay and silver enamel, Figure 6.4. The holographic image acquisition and computation of the optical field are carried out for each of 50 wavelengths in the range from 560 nm to 600 nm. Superposition of images, in the DIH process described above, reveals the wavy features of the fingerprint pattern. The imaged surface area is about 5.0 x 5.0 mm². The axial range is 210.5 μm, the lateral resolution is Δx = Δy = 19.6 μm, and δz = 4.205 μm. A valley/ridge in a negative clay print corresponds to a ridge/valley in a positive clay print. From the fingerprint topography in Figure 6.4b, 6.4c, one can quantify the height of the fingerprint ridges from valley to top and the width of the fingerprint


ridges from valley to valley, as h = 80.5 μm (about 19 pixels of size 4.205 μm) and w = 764 μm (about 39 pixels of size 19.6 μm), respectively.

Figure 6.4. The Reconstructed Volume of the Plastic Print on a Mixture of Clay and Silver Enamel Sample: (a) x-y cross-section, 5017 x 5017 μm²; (b) y-z cross sections at various x values, 210.5 x 5017 μm², from left to right, x1, x2 and x3; (c) x-z cross sections at various y values, 5017 x 210.5 μm², from top to bottom, y1, y2 and y3; λ: 0.560-0.600 μm; Z = 743 μm; Λ = 210.5 μm; Δx = Δy = 19.6 μm, δz = 4.205 μm, Nx = Ny = 256 pixels, Nz = 50 pixels.

The next example is a small-area sample, FOV ~5.0 x 5.0 mm²; the thumb of subject C was coated with a light layer of liquid enamel and pressed against the clay mold to make the prints. In the en-face image, Figure 6.5a, there is an important Level 2 (minutiae) feature, called a crossover, which is a short ridge that runs between two parallel ridges. The end points of this feature are P1 and P2. The distance between the two points is 767 μm along the x axis and 1505 μm along the y axis, respectively. In the tomographic images, Figure 6.5b, 6.5c, the crests of the ridges (the actual valleys of the


fingerprint) are shown. The two end points of the crossover feature are also visible in the tomographic images. The dark gaps between crests correspond to the actual ridges of the fingerprint; they do not reflect the light since the silver enamel is no longer present there.

Figure 6.5. The Reconstructed Volume of the Plastic Print on Clay (finger coated with enamel before pressing into the clay) Sample: (a) x-y cross-section, 5017 x 5017 μm²; (b) y-z cross sections at various x values, 209.5 x 5017 μm², from left to right, x1, x2 and x3; (c) x-z cross sections at various y values, 5017 x 209.5 μm², from top to bottom, y1, y2 and y3; λ: 0.560-0.600 μm; Z = 743 μm; Λ = 209.5 μm; Δx = Δy = 19.6 μm, δz = 4.19 μm, Nx = Ny = 256 pixels, Nz = 50 pixels; the P labels represent Level 2 fingerprint characteristics.

The reconstructed optical field, with an area of 10.4 x 10.4 mm², of the core part of the sample in Figure 6.2d is shown in Figure 6.6a. This case is similar to the previous sample; the only difference is a doubled FOV. Because of this, the fingerprint ridges appear closer to each other in this case, and the gaps between them are not seen in the


tomographic images, Figure 6.6b, 6.6c. The gaps are visible when the images are magnified. A closer analysis gives a reasonable comparison and a basis for matching a visible print with a 3D print. The valley pattern and the valley contours are similar to the double-loop fingerprint pattern and the contour of the ridges in the silver enamel visible prints, Figure 6.3a, 6.3b. During the reconstruction process, we observe that the features are preserved in spite of the noise present in any optical setup and from various sample conditions. In the fingerprint verification or identification phases, fingerprints are rejected if they do not have the same flow of ridges, direction, and location in the two situations. After all these basic similarities are resolved, we need to see whether the valley and ridge widths match. The valley A5, with a width of 404 μm in Figure 6.3a, is approximately equal in width to the ridge A5 in Figure 6.6a, which is 448 μm. The second one is wider since the width of the ridge of a plastic print is measured from valley to valley and not from one edge to the other edge, as in the case of the silver enamel visible fingerprints. Moreover, the height of the ridge R1 in Figure 6.6b is 38.64 μm and the width of the same ridge is 677 μm.


Figure 6.6. The Reconstructed Volume of the Plastic Print on Clay (finger coated with enamel before pressing into the clay) Sample: (a) x-y cross-section, 10 x 10 mm²; (b) y-z cross sections at various x values, 212.5 x 10383 μm², from left to right, x1, x2 and x3; (c) x-z cross sections at various y values, 10383 x 212.5 μm², from top to bottom, y1, y2 and y3; λ: 0.563-0.603 μm; Z = 8911 μm; Λ = 212.5 μm; Δx = Δy = 40.56 μm, δz = 8.497 μm, Nx = Ny = 256 pixels, Nz = 50 pixels; A5 and R5 are ridges.

Once plastic prints are found at the scene of a crime, their ridges and furrows must be preserved. A solution for lifting prints from a rough surface is to use casting materials that fill in the whole area and do not break up the fingerprint pattern when the whole print is lifted. A few casting materials used to record impression marks are: liquid silicone rubber with catalyst, DuroCast casting putty, TexturLift liquid silicone, and common materials used for dental impressions. Usually, before applying the casting materials, the plastic prints should first be dusted with magnetic powder that contains ferromagnetic particles to increase the contrast of the prints. Here, we used Duco household cement, due to the fact that it was cheaper than the other materials, without any magnetic powder applied to enhance the contrast. As an example, Figure 6.7


represents the reconstructed volume of the plastic cement print sample: the en-face image, Figure 6.7a, with an area of 10.4 x 10.4 mm², and the tomographic images, Figure 6.7b, 6.7c, with areas of 264 x 10383 μm² (10383 x 264 μm²). The axial resolution for this sample is δz = 5.26 μm and the lateral resolution is Δx = Δy = 40.56 μm. Sometimes, trying to image the core part of a fingerprint is not simple since the details are very small. The fingerprint surface is clearly distinguishable but the shape of the ridges is barely seen; this means that a resolution better than 5.26 μm is needed to reveal the wavy pattern of prints in the tomographic images, or a different casting material should be used.

Figure 6.7. The Reconstructed Volume of the Plastic Cement Print Sample: (a) x-y cross-section, 10.383 x 10.383 mm²; (b) y-z cross sections at various x values, 264 x 10383 μm², from left to right, x1, x2 and x3; λ: 0.561-0.593 μm; Z = 2975 μm; Λ = 264 μm; Δx = Δy = 40.56 μm, δz = 5.26 μm, Nx = Ny = 256 pixels, Nz = 50 pixels.

In Figure 6.8a, a direct (non-holographic) image of a patent print is shown, where a finger tip coated with enamel is lightly pressed on a slide glass, leaving a visible print.


The amplitude image, Figure 6.8b, clearly reproduces the double loop pattern of this finger. Such two-dimensional images of patent prints can be obtained using conventional imaging methods, and holography does not offer particular advantages. Holographic phase images, on the other hand, can provide additional information on the third dimension. Figure 6.8c is a phase image obtained from the same hologram used in Figure 6.8b. Although there is some hint of the loop structure as well as some low-frequency variation of the enamel thickness over the field of view, the film is too thick and the phase map is severely wrapped, resulting in an overall very noisy pattern. Another hologram is acquired using a slightly different wavelength, 577.71 nm vs. 580.17 nm, and its phase image is shown in Figure 6.8d, which is just as noisy. The two phase images are now combined using the optical phase unwrapping procedure described above, and the result is shown in Figure 6.8e. The beat or synthetic wavelength is then Λ12 = 136 μm, which is the full z-scale or color scale of Figure 6.8e. The fact that there is little phase wrapping in Figure 6.8e implies that the enamel thickness is about 100 μm. These images include bifurcation points (A1), end points (A2), as well as open pores (A3), and the ridge-to-ridge width is measured to be about 212 μm.
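The two-wavelength optical phase unwrapping used for Figure 6.8 can be written compactly: subtracting the two single-wavelength phase maps and rewrapping the difference to [0, 2π) gives a phase map that lives on the much longer synthetic wavelength Λ12 = λ1 λ2 / |λ1 - λ2|. A minimal Python sketch (the function name is ours; the inputs are the reconstructed complex fields at the two wavelengths):

    import numpy as np

    def two_wavelength_thickness(E1, E2, lambda1, lambda2):
        """Coarse optical-thickness map from two reconstructed complex fields
        at nearby wavelengths; the difference phase is unwrapped over the
        synthetic wavelength rather than a single optical wavelength."""
        dphi = np.mod(np.angle(E1) - np.angle(E2), 2 * np.pi)
        beat = lambda1 * lambda2 / abs(lambda1 - lambda2)
        return dphi / (2 * np.pi) * beat

    # With lambda1 = 0.57771 um and lambda2 = 0.58017 um the synthetic
    # wavelength is about 136 um, the full color scale of Figure 6.8e.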


Figure 6.8. Latent Fingerprints. Optical phase unwrapping of fingerprint images by two-wavelength holographic phase microscopy: a) direct image of a patent fingerprint made with a lightly enamel-coated finger; b) holographic amplitude image; c) and d) phase images from two holograms made with wavelengths 577.71 nm and 580.17 nm; e) optically phase-unwrapped image by combination of c) and d). FOV is and the gray (color) scale of e) corresponds to 136 μm of optical thickness range.

6.6. Conclusions

To summarize, a new non-invasive optical scanner, DIH, was successfully introduced to forensic science, more precisely in the area of fingerprint recognition. The three-dimensional imaging of fingerprints and also their role in identification and verification systems were demonstrated. The selection of the optimum casting materials depends on many variables, including: (1) the material has to be soft enough not to stress the ridges when pressing the finger onto the clay; (2) it has to set up in a short period of time; (3) it does not shrink or expand when it sets up; and (4) the casting material fills in the entire area and does not break up the fingerprint pattern when the whole print is lifted. To obtain good-quality fingerprints, a few mold (print) and casting materials should be tested to see the effectiveness of the materials. We were able to obtain 2D and 3D enrollee fingerprint structures. Various fingerprint characteristics, Level 1 (pattern), Level 2 (minutia points),


and Level 3 (pores and ridge contours) were identified. Also, the height and the width of the ridges were quantified with micron axial resolution (~5 μm). In order to improve this method, we will replace the ring dye laser with a Ti:Sapphire laser to increase the tunable wavelength range and subsequently push the axial resolution below 5 μm, and we will try different casting materials. Advanced commercial fingerprint systems, live-scan sensors, reveal the skin layers using a multiwavelength technique (TIR) [9]. Our future work will concentrate on building up the 3D structure of live-scan fingerprints using our optical scanner based on the DIH technique. Once the real fingerprint features are stored as digital information in the computer, the real and the casting-material fingerprints could be used to map and match identical features. By doing live-scan fingerprints we will also have access to the 3D information of the pores not captured in the print or casting materials.

6.7. Bibliography

[1]. A. M. Knowles, Aspects of physicochemical methods for the detection of latent fingerprints, J. Phys. E: Sci. Instrum. 11, 713-721 (1978).
[2]. L. O'Gorman, Overview of fingerprint verification technologies, Elsevier Information Security Technical Report 3 (1998).
[3]. A. K. Jain, L. Hong, S. Pankanti, and R. Bolle, An Identity-Authentication System Using Fingerprints, Proc. of the IEEE 85, 1365-1388 (1997).
[4]. A. K. Jain, J. Feng, A. Nagar and K. Nandakumar, On Matching Latent Fingerprints, Workshop on Biometrics, CVPR (2008).


[5]. U. Park, S. Pankanti and A. K. Jain, Fingerprint Verification Using SIFT Features, Proc. of SPIE Defense and Security Symposium (2008).
[6]. Y. Zhu, S. C. Dass and A. K. Jain, Statistical Models for Assessing the Individuality of Fingerprints, IEEE Transactions on Information Forensics and Security 2, 391-401 (2007).
[7]. A. K. Jain, Y. Chen, M. Demirkus, Pores and Ridges: High-Resolution Fingerprint Matching Using Level 3 Features, IEEE Transactions on Pattern Analysis and Machine Intelligence 29, 15-27 (2007).
[8]. R. Cappelli, D. Maio, D. Maltoni, J. L. Wayman, and A. K. Jain, Performance evaluation of Fingerprint Verification Systems, IEEE Transactions on Pattern Analysis and Machine Intelligence 28, 3-18 (2007).
[9]. R. K. Rowe, S. P. Corcoran, K. A. Nixon, and R. E. Nostrom, Multispectral Fingerprint Biometrics, Proc. SPIE 5694, 90 (2005).
[10]. S. S. Lin, K. M. Yemelyanov, E. N. Pugh Jr., N. Engheta, Polarization- and Specular-Reflection-Based, Non-contact Latent Fingerprint Imaging and Lifting, J. Opt. Soc. Am. A 23, 2137-2153 (2006).
[11]. Y. Cheng and K. V. Larin, In Vivo Two- and Three-Dimensional Imaging of Artificial and Real Fingerprints With Optical Coherence Tomography, Photonics Technology Letters 19, 1634-1636 (2007).
[12]. S. Chang, Y. Cheng, K. V. Larin, Y. Mao, S. Sherif, and C. Flueraru, Optical coherence tomography used for security and fingerprint-sensing applications, IET Image Process. 2, 48 (2008).


[13]. M. K. Kim, Wavelength scanning digital interference holography for optical section imaging, Opt. Lett. 24, 1693-1695 (1999).
[14]. M. K. Kim, Tomographic three-dimensional imaging of a biological specimen using wavelength-scanning digital interference holography, Opt. Express 7, 305-310 (2000).
[15]. L. Yu and M. K. Kim, Wavelength scanning digital interference holography for variable tomographic scanning, Opt. Express 13, 5621-5627 (2005).
[16]. L. Yu and M. K. Kim, Wavelength-scanning digital interference holography for tomographic 3D imaging using the angular spectrum method, Opt. Lett. 30, 2092-2094 (2005).
[17]. M. K. Kim, L. Yu and C. J. Mann, Interference techniques in digital holography, J. Opt. A: Pure Appl. Opt. 8, 518-523 (2006).
[18]. L. Yu and Z. Chen, Digital holographic tomography based on spectral interferometry, Opt. Lett. 32, 3005-3007 (2007).
[19]. M. C. Potcoava and M. K. Kim, Optical tomography for biomedical applications by digital interference holography, Meas. Sci. Technol. 19, 074010 (2008).
[20]. L. Yu and Z. Chen, Improved tomographic imaging of wavelength scanning digital holographic microscopy by use of digital spectral shaping, Opt. Express 15, 878-86 (2007).
[21]. F. Montfort, T. Colomb, F. Charrière, J. Kühn, P. Marquet, E. Cuche, S. Herminjard, and C. Depeursinge, Submicrometer optical tomography by multiple-wavelength digital holographic microscopy, Appl. Opt. 45, 8209-8217 (2006).
[22]. J. W. Goodman, Introduction to Fourier Optics, 3rd Edition (2005).


CHAPTER 7

THREE-DIMENSIONAL SPRING CONSTANTS OF AN OPTICAL TRAP MEASURED BY DIGITAL GABOR HOLOGRAPHY

This chapter presents experimental results on quantitative mapping of the three-dimensional optical force acting on an optically trapped co-polyester particle in a three-dimensional parabolic potential by using digital Gabor holography. A brief review of optical trapping technologies is presented in Section 7.1. The background of the optical trapping process, the digital Gabor holography theory, calibration methods, and the motion tracking algorithm are described in Section 7.2. In Section 7.3 the sample characteristics and the experimental apparatus are presented. The results are presented in Section 7.4. Finally, in Section 7.5 conclusions are made.

7.1. Introduction

Quantitative studies of physical and biological processes and precise non-contact manipulation of nanometer/micrometer trapped objects can be carried out with nanometer accuracy thanks to the development of optical tweezers. A three-dimensional gradient trap is produced at the focus position of a high-NA microscope objective. Particles are trapped axially and laterally due to the gradient force. Particles are confined in a potential well and the trap acts as a harmonic spring. The elastic constant, or the stiffness along any axis, is determined from the particle displacements in time along each


116 specific axis. Optical tweezers have been used to trap dielectric spheres, living cells, organelles, viruses, and bacteria. The main use for optical trap is the manipulation of biological structures to study of the molecular motors and the physical properties of DNA [1, 2]. Optical sorting tweezers use an optical la ttice to sort cells by size and by refractive index [3, 4]. The evanescent field and more recently surface plasmon waves propel microparticles along their propagating path [5 6]. Optofluidics is a joint technology between microfluidics and micro-photonics. Op tical control of the mi crofluidic elements using optical tweezers was also reported [7]. Another application of optical trapping techniques includes integrated lab-on-a-chi p technologies where optical force landscapes are highly desirable to manipulate multiple microparticles in parallel [8]. Several optical trap geometries have been reported [1 14]. Cells and colloidal particles have been manipulated in recent years using single or multiple optical beam traps. The first counter propagating beam optical trap experiment was introduced by Ashkin in 1970 [9]. Optical scattering forc es and gradient forces were obtained on submicron silica spheres. The optical fountain is the first optical trapping in three dimensions [10]. It uses gravity against the "scattering" force due to the beam pushing the particle. The single beam gradient trap [11] is considered the most accurate trap in 3D. As with elastic forces, the optical restor ing force is proportional to the particle displacement. A stiffness of 0.2 pN/nm was demonstrated for a laser power of 50 mW. There are two main advantages of using holography in combination with optical tweezers [12-14] instead of quadrant photodetector detectors (QPD). Firstly, in holography the entire 3D structure is reconstr ucted from a single hologram. It means the 3D position of a particle is also encrypted in a hologram. Recording many holograms in


117 time we can track the 4D particle position with nanometer precis ion. The quadrant photodetector detector (QPD) cannot yield the three-dimensional information of the particle position but is favored due to the high recording speed. Video-imaging of the the particle requires high speed CCD camera. Sec ondly, a laser beam sent into a hologram is divided into a myriad of sub-beams (HOT holographic optical tweezers), which can independently suspend and manipulate numerou s tiny objects for possible transportation, mixing or reacting. Movies of ensembles of microspheres moved into patterns and set to spinning by holographically sc ulpted light fields were al so demonstrated [12, 13]. Another factor that has to be taken in consideration in build ing optical tweezers is the drift of center of mass of particle. It is difficult to isolate the sample from the environmental and instrumental disturbances to observe the pristine particle motion [22, 25]. The drift frequency is lower than the optically trapped particle motion frequency. Also, this noise is a smooth function of time, independent of the data. The low frequency data components may be excluded by either appl ying a high pass-filter [24] or by fitting with a smooth curve to be us ed as a local baseline [25]. Thus, we report the three-dimensional sensing and identification of trapped copolyester particles of 9.6 m diameter, trapped in ethylen e glycol, using digital Gabor holographic microscopy, with a precision of 1 nm in the z direction and 10 nm along the x and y directions. Holograms were recorded by an IMAQ USB camera at 10 fps for various power values of the trapping laser beam. High resolution complex -valued images of the particles are numerica lly reconstructed along the opti cal axis from the holograms using the angular spectrum method [15 17]. Three-dimensional position tracking software was developed to compute reconstr uctions from holograms using CUDA/C and


a graphics processing unit (GPU), the GeForce 8800 GT. Using the object displacements we calculated the stiffness of the optical trap by two calibration methods: the equipartition theorem and Boltzmann statistics. We calculated the optical force constants in a three-dimensional parabolic potential model, and the results confirm the linear relationship between the stiffness and the trapping laser power. The value of the spring constant of the radiation force in the axial (z) direction is different from (and weaker than) those in the transverse (x) and (y) directions. The average values of the spring constants are k_x = 7.6 × 10⁻⁶ N/m along the x axis, k_y = 4.8 × 10⁻⁶ N/m along the y axis, and k_z = 5.0 × 10⁻⁷ N/m along the z axis. Software to visualize, record, and reconstruct particle positions is written in National Instruments LabVIEW, Matlab, and the NVidia CUDA environment.
7.2. Theory
7.2.1. Principle of Digital Gabor Holography
The fundamental setup of a digital holography system is the in-line configuration. Digital in-line, or digital Gabor, holography (DGH) is a 3D imaging technique in which a hologram is formed by interference of the original non-diffracted beam with the component diffracted by the object in its path. Information for both the virtual and real images is situated in-line along the same axis; therefore, the focused image of either is polluted by its defocused twin. This is one of the drawbacks of DGH.


For the purpose of tracking particle positions, however, DGH is a particularly simple yet effective solution.
As indicated by Equation (7.1), the intensity of the interference pattern recorded by the CCD may be decomposed into four terms. The first two appear as a zero-order background and are approximately removed by subtracting the average intensity of the entire image from each pixel. The third and fourth terms represent the two conjugate real and virtual images:

h(x,y) = |O(x,y)|^{2} + |R(x,y)|^{2} + O(x,y)R^{*}(x,y) + O^{*}(x,y)R(x,y).   (7.1)

The recorded image h(x,y) contains information about both the amplitude and the phase of the object beam. This makes it possible to reconstruct the image from the hologram, and we use the angular spectrum method. The Fourier transform of Equation (7.1) is calculated to obtain the spectrum in the object plane, A_0(k_x, k_y). The spectrum is then propagated in the frequency domain to any desired distance z along the optical axis:

A(k_x, k_y; z) = A_0(k_x, k_y)\exp(i k_z z), \qquad k_z = \sqrt{k^{2} - k_x^{2} - k_y^{2}}.   (7.2)

Taking the inverse Fourier transform, we obtain the reconstructed object wavefront,

E(x, y, z) = \mathcal{F}^{-1}\{A_0(k_x, k_y)\,\mathcal{F}[h(x, y; z)]\},   (7.3)

where h(x, y; z) = \frac{ik}{2\pi z}\exp\left(ik\sqrt{x^{2}+y^{2}+z^{2}}\right) is the Huygens PSF, whose Fourier transform is the propagation factor \exp(i k_z z). Breaking the reconstructed complex field into its polar components, we get

E(x, y, z) = |E(x, y, z)|\exp[i\phi(x, y, z)],   (7.4)

where |E(x, y, z)| and \phi(x, y, z) represent the reconstructed object wavefront amplitude and phase at r.
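As an illustration of Equations (7.1)-(7.3), the following is a minimal MATLAB sketch of the angular spectrum reconstruction. It assumes a square hologram, a pixel pitch dx, a wavelength lambda, and a reconstruction distance z; it is not the LabVIEW/CUDA code actually used in this work.

% Minimal MATLAB sketch of the angular spectrum reconstruction of
% Equations (7.1)-(7.3). Variable names (holo, dx, lambda, z) are
% illustrative, not taken from the dissertation software.
% (To be saved as angular_spectrum_reconstruct.m.)
function E = angular_spectrum_reconstruct(holo, dx, lambda, z)
    % holo  : recorded hologram intensity (N x N, double)
    % dx    : pixel pitch at the hologram plane [m]
    % lambda: illumination wavelength [m]
    % z     : reconstruction distance along the optical axis [m]
    [Ny, Nx] = size(holo);
    k = 2*pi/lambda;

    % Suppress the zero-order terms by removing the mean intensity
    holo = holo - mean(holo(:));

    % Spatial-frequency grids (angular frequencies kx, ky)
    fx = (-Nx/2:Nx/2-1)/(Nx*dx);
    fy = (-Ny/2:Ny/2-1)/(Ny*dx);
    [FX, FY] = meshgrid(fx, fy);
    kx = 2*pi*FX;  ky = 2*pi*FY;

    % Angular spectrum at the hologram plane, A0(kx,ky)
    A0 = fftshift(fft2(holo));

    % Propagation factor exp(i*kz*z); evanescent components are set to zero
    kz2 = k^2 - kx.^2 - ky.^2;
    prop = exp(1i*z*sqrt(max(kz2, 0))) .* (kz2 > 0);

    % Eqs. (7.2)-(7.3): propagate and return to the spatial domain
    E = ifft2(ifftshift(A0 .* prop));
end

For example, a call such as E = angular_spectrum_reconstruct(holo, 7.4e-6, 635e-9, 50e-6) (with assumed parameter values) returns the complex field whose modulus and argument are the amplitude and phase of Equation (7.4).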


In this way, we have access to both the amplitude and the phase information. However, in this experiment we use only the amplitude information.
7.2.2 Principle of Optical Trapping
Since the diameter of the particles used in this experiment is d = 9.6 μm and the trapping laser wavelength is λ = 0.532 μm, the scattering of light by the particles is described by Mie theory (d >> λ) [18]. Also, the trapping position is determinable by analysis of the reflected or refracted light from the particles. The optical trapping forces acting on the particles are radiation pressure forces of two kinds: the scattering (external) force and the gradient (internal) force. The scattering force points away from the light source and arises when photons reflect from the particle. The gradient force is directed toward the focus of the laser beam, as can be seen by conserving momentum when a photon is refracted twice through a sphere. When the gradient force in the region beyond the focus is greater than the scattering force, trapping is stable [11]. The total force acting on the particle has the expression

F_{tot} = F_s + iF_g,   (7.5)

where F_s and F_g are the scattering force and the gradient force. Summing the contribution of the reflected ray and of the rays emerging after successive internal reflections, the total force exerted by a single ray of power P is

F_{tot} = \frac{nP}{c}\left[1 + R\cos 2\theta + iR\sin 2\theta - T^{2}\sum_{n} R^{n} e^{i(\alpha + n\beta)}\right],   (7.6)

with \alpha = 2\theta - 2r and \beta = \pi - 2r. Carrying out the summation, the two force components are

F_s = \frac{nP}{c}\left\{1 + R\cos 2\theta - \frac{T^{2}[\cos(2\theta - 2r) + R\cos 2\theta]}{1 + R^{2} + 2R\cos 2r}\right\},   (7.7)


F_g = \frac{nP}{c}\left\{R\sin 2\theta - \frac{T^{2}[\sin(2\theta - 2r) + R\sin 2\theta]}{1 + R^{2} + 2R\cos 2r}\right\}.   (7.8)

In the two equations, n is the refractive index of the surrounding medium, P is the laser beam power, c is the speed of light, R is the reflectance of light at the surface of the particle, T is the transmittance of light, θ is the angle of incidence, and r is the angle of refraction. The trapping force can be written as F = Q(nP/c), where Q is the trapping efficiency and nP/c is the incident momentum per second of a ray of power P in a medium of refractive index n. In conclusion, the fluctuations of an optically trapped particle depend on its size, on the temperature and viscosity of the immersion medium, on the numerical aperture of the focusing beam, and on the laser power.
7.2.3 Force Calibration Methods
Calibration of the optical trap is necessary to determine the force acting on a micro- or nano-sized object at a given position. An optically trapped particle, whose inertia is neglected, behaves as a damped harmonic oscillator. The linear equation of motion of a trapped particle in a harmonic potential is given by the reduced Langevin equation [27]:

\gamma\,\dot{x}(t) = k\,[x_{trap}(t) - x(t)] + F_{thermal}(t),   (7.9)

where the first term, \gamma\dot{x}, is the drag force, proportional to the velocity of the bead relative to a fixed position; the second term, k[x_{trap}(t) - x(t)], is the optical restoring force; and the third term, F_{thermal}(t), represents the random thermal agitation. Here x(t) is the particle position, x_{trap}(t) is the trap position, \gamma = 6\pi\eta a is the Stokes drag coefficient, \eta is the dynamic viscosity of the fluid (here the medium is ethylene glycol), a is the radius of the particle, and k is the stiffness of the optical trap, which is proportional to the trapping laser power.
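To make the magnitudes entering Equation (7.9) concrete, here is a brief order-of-magnitude calculation; the viscosity and bead radius are those of the experiment described in Section 7.3, while the temperature and the representative stiffness are assumed for illustration only.

% Hedged numerical illustration (temperature and stiffness are assumed):
% Stokes drag and thermal position noise for the beads used in this work.
eta = 0.0161;          % dynamic viscosity of ethylene glycol [N s/m^2]
a   = 4.8e-6;          % particle radius (9.6 um diameter) [m]
kB  = 1.380649e-23;    % Boltzmann constant [J/K]
T   = 295;             % assumed room temperature [K]
k   = 5e-6;            % representative trap stiffness [N/m]

gamma = 6*pi*eta*a;    % ~1.5e-6 N s/m, drag coefficient in Eq. (7.9)
x_rms = sqrt(kB*T/k);  % ~29 nm RMS excursion expected from equipartition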


Since photon-particle interactions result in an effective restoring force F = -kx, it is customary to describe the trap behavior with the spring constant k, which is to be determined. In three dimensions, there are three spring constants to be found (k_x, k_y, k_z), expected to have lateral symmetry (k_x ≈ k_y) and weaker strength along the optical axis (k_z < k_x). The optical trap calibration methods described below involve observation of the particle's Brownian motion within the harmonic potential by measurement of the displacement r(t). Particles undergoing Brownian motion are characterized by the Boltzmann energy k_B T, where k_B is the Boltzmann constant and T is the temperature of the medium in kelvin.
The first calibration method uses the equipartition analysis of the form

\frac{1}{2}k_B T = \frac{1}{2}k_i\langle r_i^{2}\rangle = \frac{1}{2}k_i\,\mathrm{var}(r_i).   (7.10)

By measuring the variance var(r_i) of the particle displacements r(t) for a known medium temperature, one can find the trap stiffness k_i = k_B T/\mathrm{var}(r_i), with i = x, y, z.
The second method is based on Boltzmann statistics, with the probability distribution of the particle displacements of the form

P(r_i) = N_i\,e^{-E(r_i)/k_B T},   (7.11)

where E(r_i) is the potential energy along the i-th axis and the N_i are normalization constants. The parabolic potentials E(r) are obtained in two ways: the first way is to find the fitted distribution P_{fit}(r)


by fitting the probability distribution with a Gaussian function, Figure 7.1a (red); after that, the potential energy E(r) in units of k_B T is written as E(r_i) = -[\ln(P_{fit}(r)) - \ln(N_i)], which gives the parabola shape of Figure 7.1b. The second way to obtain the potential energy is to write the energy as E(r_i) = k_i r^{2}/2. This energy is known only if we also have information about the spring constant k_i. We can use the spring constant values from the equipartition theorem, but if we want to use the Boltzmann statistics alone, the spring constant can be obtained by fitting a parabola to the experimental data E(r) (not in units of k_B T). This implies the spring constant k_i = 2a k_B T, where a is the coefficient of r^{2} in the parabolic fit of k_B T[\ln(N) - \ln(P(r))]; for a Gaussian distribution it has the form a = 1/(2\,\mathrm{var}(r_i)), consistent with the equipartition result.
Figure 7.1. a) Particle displacements histogram along the y direction: frequency or counts (blue), starting Gaussian distribution (green), and ending Gaussian distribution (red); b) the potential energy in units of k_B T derived from the probability distribution of Figure 7.1a.
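The two calibration procedures of this section can be summarized by the following minimal MATLAB sketch; x is assumed to be a vector of tracked displacements along one axis, in meters, and the bin count and temperature are illustrative assumptions rather than values taken from the experiment.

% Minimal MATLAB sketch of the two calibration methods of Section 7.2.3,
% applied to a vector x of tracked particle displacements [m] along one axis.
kB = 1.380649e-23;          % Boltzmann constant [J/K]
T  = 295;                   % assumed medium temperature [K]
x  = x - mean(x);           % measure displacements about the mean position

% Method 1: equipartition theorem, Eq. (7.10)
k_equi = kB*T / var(x);

% Method 2: Boltzmann statistics, Eq. (7.11)
[counts, centers] = hist(x, 50);          % displacement histogram
P = counts / trapz(centers, counts);      % normalized probability density
valid = P > 0;                            % keep bins with nonzero counts
E = -kB*T * log(P(valid));                % potential energy up to a constant
p = polyfit(centers(valid), E, 2);        % parabolic fit E(r) ~ a*r^2 + ...
k_boltz = 2*p(1);                         % stiffness k = 2a

For a harmonic well the two estimates should agree to within statistical error, which provides a useful consistency check on the calibration.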


1247. 2.4. Computational System for Motion Tracking The angular spectrum requires two Fourier transforms once the field is known at the hologram plane, one forward FFT to switch to the frequency space and one inverse FFT to return to the space domain obtain after the spectrum propagation to the object plane. Using a Pentium IV CPU, the time required to reconstruct 1000 holograms of 150 x 150 pixels is 100 seconds. GPGPU (General -purpose on graphics processing unit) computation with a CUDA (Compute Unif ied Device Architecture) based on C-like language allows numerical manipulation of da ta using a GPU (graphical processing unit) faster than CPU when big grid data ar rays are involved. The GPU multi-parallel processors perform operations at 32-bit prec ision which make it highly useful for large data volume processing. Holograms are stored in the computer (CPU) and processed at a later time by the GPU. The GPU is a GeFor ce 8800 with 112 stream processors, memory of 512 Mbytes, and GPU clock of 1500 MH z, memory clock of 900 MHz. Holograms stored on the CPU are called by the CUDA functions and sent to the GPU memory. While there, the terms of Equation (7.3) are processed by the GPU chip. The reconstructed amplitude images and consequently the reconstructed (x,y,z) particle position of each hologram is transferred to the CPU. 7. 2. 5. Centroid Position Identification Algorithm Single particle tracking algor ithms usually are useful for motion tracking of objects of varying sizes such as molecules, biological cells, subcellular components, and microbeads. Three-dimensional tracking conf ers a problem since the lateral and axial resolution is not equivalent for the x, y, z axis. The center of mass does not have a


significant shift along the z axis as it does along the x, y axes. Examples of tracking algorithms are center-of-mass (centroid), cross-correlation, sum-absolute-difference, and direct fitting of Gaussian curves to the intensity profile [26]. Using digital Gabor holography, the best estimate of the z position is found at the amplitude maximum of the diffracted optical field, Figure 7.2. We recorded 1000 holograms and, using GPGPU technology, the reconstructed position (x, y, z) was determined for each reconstructed optical field. The particle is in focus where the amplitude of the optical field A(x, y, z) is maximized, which is the peak in Figure 7.2. If the axial range is \Delta z and we desire a precision of \delta z, a set of M = \Delta z/\delta z equally spaced axial positions is examined. As an example, for a reconstruction range of 3.5 μm and M = 1000, the precision is \delta z = 3.5 nm.
Figure 7.2. Centroid position identification: focus metric (reconstructed amplitude, arbitrary units) versus axial position z (mm).
The center of the particle, c_0(x, y), is determined using a weighted average,

c_0(x, y) = \frac{\sum_{i,j} A(i,j)\, c_{i,j}(x, y)}{\sum_{i,j} A(i,j)},   (7.12)

where A(i,j) is the amplitude of each pixel and c_{i,j}(x, y) is the center of each pixel in the field of view. Thus, the centroid is the amplitude-weighted average of the pixel positions. Nanometer precision is also obtained in the x and y directions thanks to the amplitude weighting and the purely fractional-pixel lateral motion; noise and edge effects are negligible.
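A minimal MATLAB sketch of the axial focus search and of the centroid estimate of Equation (7.12) is given below; it reuses the illustrative angular_spectrum_reconstruct helper sketched in Section 7.2.1, and the scan limits, step count, and variable names are assumptions.

% Focus-metric search over candidate axial positions (assumed scan range)
zs = linspace(40e-6, 60e-6, 1000);       % candidate axial positions [m]
metric = zeros(size(zs));
for m = 1:numel(zs)
    E = angular_spectrum_reconstruct(holo, dx, lambda, zs(m));
    metric(m) = max(abs(E(:)));          % focus metric: peak amplitude
end
[~, best] = max(metric);
z_particle = zs(best);                   % best-focus axial position

% Amplitude-weighted centroid in the best-focus plane, Eq. (7.12)
E = abs(angular_spectrum_reconstruct(holo, dx, lambda, z_particle));
[Ny, Nx] = size(E);
[X, Y] = meshgrid((0:Nx-1)*dx, (0:Ny-1)*dx);   % pixel-center coordinates
x_c = sum(E(:).*X(:)) / sum(E(:));
y_c = sum(E(:).*Y(:)) / sum(E(:));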


7.3. Experimental Setup
Our sample consisted of co-polyester particles of 9.6 μm diameter from Duke Scientific Corporation, with a 20% coefficient of variation. The particles were suspended in ethylene glycol with viscosity η = 0.0161 N·s/m². A custom-made sample chamber, Figure 7.3, consists of a microscope slide (the bottom) under a cover slip, spaced apart in z by pieces of another cover slip and sealed with glue.
Figure 7.3. Optical tweezers sample chamber; a) chamber made of a microscope slide and pieces of cover slip; b) chamber filled with particles immersed in ethylene glycol and covered by a cover slip.
A schematic of the experimental apparatus is shown in Figure 7.4. The setup consists of two arms. The digital Gabor holography arm is utilized for illumination/imaging and for three-dimensional position tracking. The trap arm is utilized


127 to trap the particle at the focus position of the high NA lens, MO2. Each of the two arms will be described in turn below. Figure 7.4. Optical Tweezers and Digital Gabor Holography Microscope; a) CCD, charged-coupled device camera, Ls lenses MOs, microscope objectives; b) Gabor holography illustration; c) Illust ration of the focus position; 7. 3.1 Digital Gabor Holography Arm A diode laser (635 nm) illuminates the sample from below, in transmission, along the DGH arm, Figure 7.3a, 7.3b. A fiber optic tip acts as a point source from where the light is radiated in spherical waves. The laser beam is collimated by a 4X microscope objective. Then it is diffracted (object wave) by the object points and interferes with the undiffracted (reference wave) lig ht at the CCD plane to form a hologram. The imaging


128 lens is a Zeiss Plan-Neofluar 100X, 1.25 NA, oil-immersion objective with a back aperture of 6 mm. A red filter and a polarizer (not shown in the figure) are inserted between the CCD and the dichroic mirror. The red filter blocks reflections from the trapping light and the polarizer adjusts light intensity at the camera. A set of 1000 holograms are recorded in 100 seconds, by the CCD and save d into computer for postprocessing. 7.3.2. Optical Trap Arm A Millenia V Spectra Physics 532 nm laser is us ed to trap the particle, as shown in Figure 7.3a, 7.3c. The trapping beam passes thr ough a combination of mirrors to position the beam at the desired height. Microscope objective MO1 expands the beam and the combination of L1 (f1 = 10cm), L2 (f2 = 15 cm ) adjusts the beam diam eter to just overfill the back aperture of the trapping optics. Af ter that, the beam enters the microscope through a dichroic mirror. The dichroic mirro r reflects the trapping beam and then it is focused by MO2, thus forming the optical trap near the sample plane. The front working distance is 0.17mm, and the back image di stance is 160 mm. The lens L2 with a focal length of 150 mm is positioned at 31 cm from the back of MO2. The dichroic mirror transmits the illumination beam and reflec ts the trapping beam. The trapping light backward-scattered by the trapped object has tw o parts; one is the re flected light outside of the microscope, and the other one, approxima tely 15% of the scattered light, arrives at the camera if the red filter is removed and may be used to initially find the focal position Figure 7.5a. Figure 7.5b shows an optically trapped bead with a field-of-view of 11.511.5 x m or 150x150 pixels.
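For orientation, the effective object-plane pixel size implied by the stated field of view can be estimated as follows; this side calculation is illustrative and is not quoted from the dissertation.

% Hedged side calculation (values assumed from the stated field of view):
% effective object-plane pixel size of the cropped tracking images.
fov_m  = 11.5e-6;            % field of view of the cropped images [m]
npix   = 150;                % pixels across the field of view
dx_obj = fov_m / npix;       % ~77 nm per pixel at the sample plane
% The quoted ~10 nm lateral precision therefore corresponds to roughly
% one eighth of a pixel, achievable through the amplitude-weighted centroid.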


Figure 7.5. a) Focused trapping light; b) optically trapped particle; FOV = 11.5 × 11.5 μm, or 150 × 150 pixels.
7.4. Results
We performed experiments on optically trapped co-polyester microspheres immersed in ethylene glycol, with a viscosity of 0.0161 N·s/m². The tracking of a single particle along the x, y, and z axes is shown in Figure 7.6. We noticed a low-frequency variation in the x, y data, which is due to the low speed of the CCD camera. We also noticed a drift along the z axis, indicating that the particle tended to escape the trap.


Figure 7.6. Three-dimensional single-particle tracking as a function of time; a) along the x axis; b) along the y axis; and c) along the z axis.
We obtained the mean-square displacement (MSD), for the individual axes and in 3D, from the measured displacements of the optically trapped microsphere in ethylene glycol, according to the formula [28]:

\mathrm{MSD}(n\Delta t) = \frac{1}{N - 1 - n}\sum_{j=1}^{N-1-n}\left[r\big((j + n)\Delta t\big) - r(j\Delta t)\right]^{2},   (7.13)


where \Delta t is the time resolution, r((j+n)\Delta t) - r(j\Delta t) describes the particle displacement over a time interval n\Delta t after starting at position r(j\Delta t), N is the total number of frames, and n, j are integers. Evidence for distinct trapping, hopping, and hindered-diffusive regimes is seen in the mean-square displacement and in the probability distribution P(x) [29].
Figures 7.7a-b depict the MSD in x, y, and z versus the time interval \tau = n\Delta t. Looking at the three curves MSD_x, MSD_y, and MSD_z versus the time interval up to \tau = 25 s, Figure 7.7a, MSD_x and MSD_y at first increase with increasing \tau and later become stationary, Figure 7.7b. This behavior of the MSD shows clear evidence of trapping.
Figure 7.7. The mean-square displacement versus time interval of an optically trapped particle in ethylene glycol; a) MSD along the x, y, z axes, \tau up to 25 s; b) MSD along the x, y axes, \tau up to 40 s.
MSD_z does not follow the same trend as MSD_x and MSD_y: it first increases quickly, then attains a plateau and temporarily ceases to increase, Figure 7.8. A hopping particle was observed to escape the trap (the drift in z); at large \tau, the MSD_z of a hopping particle increases with increasing \tau.
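A minimal MATLAB sketch of the MSD estimator of Equation (7.13) for a single axis is given below; the position vector x, the frame interval, and the maximum lag are assumed names and values, and the estimator simply averages over all available starting frames.

% Minimal MATLAB sketch of the MSD estimate of Eq. (7.13) for one axis,
% given a vector x of tracked positions [m] sampled every dt seconds.
dt      = 0.1;                       % 10 fps recording -> 0.1 s per frame
N       = numel(x);
max_lag = 400;                       % largest lag considered [frames]
msd = zeros(max_lag, 1);
for n = 1:max_lag
    d      = x(1+n:N) - x(1:N-n);    % displacements over a lag of n frames
    msd(n) = mean(d.^2);             % average over all starting frames j
end
tau = (1:max_lag)' * dt;             % lag times, for plotting msd versus tau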


Figure 7.8. The mean-square displacement in the z direction versus time interval for an optically trapped particle in ethylene glycol, along the z axis, \tau up to 90 s.
Increasing the optical power of the trapping beam from 38 mW to 201 mW, the 3D distribution of the particle displacements becomes more compact, as shown in Figure 7.9.


Figure 7.9. 3D scatterplots of an optically trapped bead; a) the 3D distribution of the particle positions at a laser power P = 38 mW; b) the 3D distribution of the particle positions at a laser power P = 201 mW.
Using the equipartition theorem we calculated the spring constants k_x, k_y, k_z for each coordinate x, y, z, and the results are graphed versus the laser power in Figure 7.10. The spring constant in the axial (z) direction is different from (and weaker than) those in the transverse (x and y) directions (in fact, for nano- and micro-particles the spring constants along x and y also differ from each other, because the polarization induces a symmetry breaking).


Figure 7.10. Equipartition calibration method: optical trap stiffness (N/m) as a function of the laser beam power (mW) for the spring constants k_x, k_y, and k_z (linear fits with R² values of 0.50, 0.61, and 0.72). R² (= regression sum of squares / total sum of squares) is a figure of merit of the curve fitting; R² = 1 means a perfect fit.
We also followed the procedure described by Equation (7.11). The probability distributions are fitted with a Gaussian curve to derive the potential energy of each of the x, y, z wells. In Figure 7.11a the optical parabolic potentials E(x), E(y), and E(z) are displayed for an optically trapped particle at P = 38 mW; E(x) is stronger than E(y) and E(z). In Figure 7.11b the optical parabolic potentials E(x) for an optically trapped particle at P = 2.5 mW, 45 mW, 107 mW, 156 mW, 173 mW, and 201 mW are also shown. The corresponding spring constants are shown in Figure 7.12. The linearity between the spring constants and the laser power is preserved.


Figure 7.11. Boltzmann statistics calibration method. a) Optical parabolic potentials E(x), E(y), and E(z) for an optically trapped particle at P = 38 mW. b) Optical parabolic potentials E(x) for an optically trapped particle at P = 2.5 mW, 45 mW, 107 mW, 156 mW, 173 mW, and 201 mW.
The narrower the displacement distribution is, the stronger the stiffness of the potential well. As can be seen, E(x) becomes steeper and the force constant becomes larger and larger as the optical power increases.
Figure 7.12. Boltzmann statistics calibration method: optical trap stiffness (N/m) as a function of the laser beam power (mW); the linear fit has R² = 0.69. R² (= regression sum of squares / total sum of squares) is a figure of merit of the curve fitting; R² = 1 means a perfect fit.
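The linear stiffness-versus-power fits of Figures 7.10 and 7.12 can be reproduced with a few lines of MATLAB; P and k are assumed to be vectors of laser powers and calibrated stiffnesses, and for an ordinary least-squares fit the R² computed below coincides with the regression-sum-of-squares definition used in the captions.

% Minimal MATLAB sketch of the stiffness-versus-power fit and its R².
% P [mW] and k [N/m] are assumed measured vectors of equal length.
p      = polyfit(P, k, 1);                % linear model k = p(1)*P + p(2)
k_fit  = polyval(p, P);
SS_res = sum((k - k_fit).^2);             % residual sum of squares
SS_tot = sum((k - mean(k)).^2);           % total sum of squares
R2     = 1 - SS_res/SS_tot;               % coefficient of determination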


7.5. Conclusions
We have demonstrated how optical tweezers may be calibrated using digital Gabor holography of a co-polymer particle. We have characterized the Brownian motion as an essential element for optical force calibration with optical tweezers. Spring constants, and subsequently forces, were determined from the statistical analysis of the oscillations of the optically trapped particle. The 3D position is tracked by analysis of the complex optical field reconstructed via the angular spectrum method. The stiffness of the optical trap is calculated by two calibration methods: the equipartition theorem and Boltzmann statistics. The results confirm the linear relationship between the stiffness and the trapping laser power. The value of the spring constant of the radiation force in the axial (z) direction is different from (and weaker than) those in the transverse (x) and (y) directions.
The digital Gabor holography microscope, together with the optical trapping arm, can be used as a new tool to study how cells ingest foreign particles through the process known as phagocytosis, and to understand a variety of biophysical processes such as organelle-membrane interactions or cytoskeletal rearrangements. Moreover, the digital interference holography techniques could be combined with optical tweezers to monitor and quantify the physical properties of the membrane in response to the evolution of the malaria parasite acting on erythrocytes. The three-dimensional map of the cell refractive index subsequently gives information on the nanoscale cell-membrane thickness fluctuations in infected erythrocytes, which is the leading factor in quantifying cell deformation. Another possible investigation is to understand the cytoadherence of infected cells under physiological shear stresses in blood vessels and capillaries, by monitoring healthy and infected cells on various substrates.


Acknowledgments
This work is supported in part by the National Science Foundation under Grant #0755705. The particle tracking code was developed by Dr. Leo Krzewina. The experimental work, hologram recording, data processing, and simulation were performed by me.


1387. 6. Bibliography [1]. M.J. Lang, P.M. Fordyce, A.M. Engh, K.C Neuman, S. M. Block, Simultaneous, coincident optical trapping and single molecule fluorescence, Nature Methods 1, 133139 (2004) [2]. S. M. Block, Fifty ways to love your lever: myosin motors, Cell 87, 151-157 (1996). [3]. Macdonald MP, Spalding GC, Dholakia K, "Microfluidic sort ing in an optical lattice, Nature 421: 421-424 (2003). [4]. K. Ladavac, K. Kasza and D. G. Grier, Physical Review E 70, 010901(R) (2004). [5]. S. Kawata and T. Sugiura, Opt. Lett. 17, 772 (1992). [6]. M. Righini, G. Volpe, G, C. Girard, D. Petrov, and R. Quidant, Surface Plasmon Optical Tweezers : Tunable Opti cal Manipulation in the Femt onewton Range, Phys. Rev. Lett. 100, 186804 (2008). [7]. P Domachuk, F G Omenetto, B J Eggl eton and M Cronin-Golomb, Optofluidic sensing and actuation with optical tweezer s, J. Opt. A: Pure Appl. Opt. 9 S129-S133 (2007). [8]. K. Uhrig, R. Kurre, Christian Schmitz, Jennifer E. Curtis, et al., Optical force sensor array in a microfluidic device based on holographic optical tweeze, Lab Chip, 9, 661 668 (2009). [9]. A. Ashkin, Acceleration and trapping of particles by radiation pressure, Phys. Rev. Lett. 24, 156-159, (1970). [10]. A. Ashkin and J..M. Dziedzic, Optical levitation by radia tion pressure, Appl. Phys. Lett., 1971.


139 [11]. A. Ashkin, J..M. Dziedzic, J.E. Bjor kholm, and S. Chu, Observation of a singlebeam gradient force optical trap for diel ectric particles, Optics Lett., (1986). [12]. J. E. Curtis, B. A. Koss and D. G. Grier, Dynamic holographic optical tweezers, Optics Communications 207, 169-175 (2002). [13]. D. G. Grier, A revol ution in optical manipulation , Nature 424, 810-816 (2003). [14]. B. Sun, Y. Roichman and D. G. Gr ier, Theory of holographic optical trapping Optics Express 16, 15765-15776 (2008). [15]. M. K. Kim, Tomographic three-dime nsional imaging of a biological specimen using wavelength-scanning digital inte rference holography, Opt. Express; 7: 305-310 (2000). [16]. M. K. Kim, L. Yu and C. J. Mann. Inte rference techniques in digital holography. J. Opt. A: Pure Appl. Optics.; 8:512-523 (2006). [17]. M. C. Potcoava, and M. K. Kim, Op tical tomography for biomedical applications by digital interference holography, Meas. Sci. Technol. 2008; 19: 074010 (2008). [18]. G. Mie, "Articles on the optical characteri stics of turbid tubes, especially colloidal metal solutions," Ann. Phys.-Berlin 25, 377-445 (1908). [19]. K. C. Neuman and S. M. Block. Optical trapping. Rev. Sci. Instr., 75, 2787 (2004). [20]. L. P. Ghislain and W. W. Webb, Scanning-force micr oscope based on an optical trap, Opt. Lett. 18, pp. 1678 (1993). [21]. L. P. Ghislain, N. A. Switz, and W. W. Webb, Measurement of small forces using an optical trap, Rev. Sci. Instrum. 65, 2762 (1994).


140 [22]. J. R. Moffitt, Y. R. Chemla, D. Izhaky, and C. Bustamante, Differential detection of dual traps improves the spatial resolution of optical tweezers, PNAS, 103, 9006 (2006). [23]. A. Carter, G. King, T. Ulrich, W. Halsey., D. Alchenberger, and T. Perkins, Stabilization of an opti cal microscope to 0.1 nm in 3D, Appl. Optics. 46, 421-427 (2007). [24]. W. P.Wong, and K. Halvorsen, The effect of integration time on fluctuation measurements: calibrating an optical tr ap in the presence of motion blur, Optics Express 14, 12517-12531 (2006). [25]. M. L. Salit and G. C. Turk, A Drift Correction Procedure, Anal. Chem. 70, 31843190 (1999). [26]. M K Cheezum, W F Walk er, and W H Guilford, Quantitative comparison of algorithms for tracking single fl uorescent particles, Biophys J. 81, 2378 (2001). [27]. S. F. Tolic-Nrrelykke, E. Schffer J. Howard, F. S. Pavone, F. Jlicher, H. Flyvbjerg, Calibration of optic al tweezers with positional de tection in the back focal plane, REVIEW OF SCIENTIFIC INSTRUMENTS 77, 103101 2006 [28]. Sako, Y., and A. Kusumi. 1994. Comp artmentalized structure of the plasma membrane for receptor movements as reve aled by a nanometer-level motion analysis. J. Cell Biol. 125:1251. [29]. L. Luo, G. D. J. Phillies, Brownian motion through a two-dimensional glass: Trapping, hopping, and diffusion, J. Chem. Phys. 105 (2) (1996).


141 CHAPTER 8 CONCLUSIONS AND FU TURE WORK 8. 1 Conclusions In this dissertation, I have demonstrated and applied the latest development of digital holography techniques. Results confirm the cap ability of digital interference holography instrument (DIH) to image in-vitro human ma cular, optic nerve tissue, and fingerprint patterns. This work is to my knowledge a novel and innovative approach to ophthalmic and fingerprint imaging. I have also succe ssfully demonstrated the Brownian motion tracking of the optically trapped partic les by digital Gabor holography. I have demonstrated the capability of digital in terference holography to produce micron-size images without out-of-focus of the entire specimen. First, I have characterized the digital interference holography systems in terms of wavelength scanning capabilities. The scanning time is 30 s and the signal-to-noise ratio (SNR) is about 50 dB. The axial resolution is a parameter that depends on the wavelength scanning range and is obtained by superposing a ll optical fields. The axial resolution of the system is 5m Calibration experiments using a resolution target demonstrates improvement of SNR with increasing number of holograms consistent with theoretical prediction. The SNR of tissue images, 50 dB is comparable to that of the resolution target, implying the imaging system is ope rating at close to theoretical optimum.


142 Tunable lasers are particularly sensitive to chromatic-dispersion, () n characteristics of materials, in particular second-order '' k and third order dispersion ''' k, which typically cause broadening of the axia l point spread functi on. I introduced and demonstrated a phase-matching scheme to make consecutive wavelength phase differences as identical as possible. Imaging experiments on retinal tissue reve al topography of blood vessels as well as optical thickness profile of th e retinal layer. I have already demonstrated that, in vitro, DIH can measure the dimensions of the sclera l ring and provide an answer regarding the size of the optic disk, a clinically important parameter. The disc diameter is about 1750 m and the cup depth is about 240m The depth between the retinal fiber and the retinal pigmented epithelium layers is about 84 m This method could be applied in ophthalm ology as a noninvasive, high speed tool to image the retinal and choroida l sub-structure and have si gnificant applications in a variety of retinal disorders, especially macular degeneration, di abetic retinopathy and glaucoma. This research has also shown how the DIH technique could be used in the field of forensic science as a fingerprint scanner to identify and quantify Leve l 1 (pattern), Level 2 (minutia points), and Level 3 (pores and ri dge contours) fingerprint characteristics from amplitude images. The optical thickness profile of a tr ansparent object can be obtained from quantitative phase images. Usually, two importa nt parameters are subsequently derived from the optical thickness prof ile, the physical thickness and th e index of refraction of the sample. Holographic phase microscopy DIH is used to produce images of thin film


patterns left by latent fingerprints. The optical thickness range of the latent fingerprints measured in our experiment was 136 μm. These results have revealed that DIH is highly effective for biometry applications.
Furthermore, I have demonstrated a new technique for position and force calibration of optically trapped particles combined with digital Gabor holography. The system offers high-resolution 3D particle-tracking capabilities. The Gabor holography instrument is used to track and monitor co-polyester microsphere beads moving in depth over time, and when an optical trapping arm is attached to the microscope, the system can track and monitor optically trapped small objects undergoing Brownian motion. The 3D position is tracked by analysis of the complex optical field reconstructed via the angular spectrum method. The stiffness of the optical trap is calculated by two calibration methods: the equipartition theorem and Boltzmann statistics. The results confirm the linear relationship between the stiffness and the trapping laser power. The value of the spring constant of the radiation force in the axial (z) direction is different from (and weaker than) those in the transverse (x) and (y) directions. The average values of the spring constants are k_x = 7.6 × 10⁻⁶ N/m along the x axis, k_y = 4.8 × 10⁻⁶ N/m along the y axis, and k_z = 5.0 × 10⁻⁷ N/m along the z axis. Optical tweezers, in addition to digital holography, can be used to investigate systematically the large-deformation characteristics of human red blood cells infected by the malaria parasite and to perform quantitative assessment and optimization of a variety of surface attachment techniques.


1448. 2 Future Work 8.2.1 Tunable Source and the Wavelength Scanning System I have developed a modification of the hol ogram exposure method so that the holograms are taken at equal intervals of wave vectors, not wavelengths (Figure D. 4). I noticed no improvement in the SNR of the reconstructed images. I believe the DIH imaging dynamic range or SNR will be improved by replacin g the dye laser with a swept source, introducing a high-speed camera and increa sing the number of recorded holograms. The light source is one of the most important components in any wavelength scanning imaging system. At this point, our scanner has to overcome the signal-to-noise ratio issue, to provide clinically relevant information. The scanning time is 30 s and the signal-to-noise ratio (SNR) is about 50 dB. A swept light source with an appropriate actuator and a sweep function pa rameterized by time is desirable for the DIH system to improve the system performance. Important key parts of a swept source are the optical gain medium and the linearity of the light source (data recorded evenly in k space). Research based on the use of semiconductor optical amplifier (SOA) as optical gain medium was already demonstrated [1-3]. 8.2.2 Multiplicative Noise (Speckle) Reduction The digital interference hologr aphy apparatus needs improveme nt in terms of speckle, energy loss, and other noise introduced by the stray reflections or CCD. There are two types of noise that we en counter in imaging systems, additive and multiplicative. The additive noise is not part of the signal and is spatially uncorrelated, this means the noise for each pixel is i ndependent and identically distributed (iid ). Also, it


145 may be caused by a wide range of sources, e.g. variations in the detector sensitivity, environmental variations, the discrete nature of radiation, transm ission or quantization errors, etc. In multiplicative noise, speckle is dependent on image data and it arises when the radiation is scattered by the object. Digitally recorded and reconstructe d holograms contain not only spectral information but also the coherent noise, speckl e, introduced by the illumination source, in our case the laser, which decreases the spatial resolution of the image. When coherent radiation is reflected, the surface imperfections of the il luminated object generate a random interference that corrupts the image w ith a diffuse pattern called speckle noise, which is very difficult to filter because of its random and multiplicative nature. Speckle noise is a common phenomenon in all coherent imaging systems like laser, acoustic and SAR imagery [4, 7, 10]. Its undesired effect was recognized from the very beginning of holography [5, 8]. The dig itally recorded and r econstructed data are stored in the computer as complex numbers. To improve the spatial resolution and image quality of holographic images, the complex noise stored with the data can be reduced by applying various speckle re duction filters [6, 7]. Our scanning digital interference system does not benefit from these approaches. The coherent noise level explodes with the incr easing of the laser intensity in the time of the wavelength scanning and we cannot control it. Goodman [8] showed that the speck le can be reduced by superimposing M uncorrelated speckle patterns using an active diffuser placed in an intermediary image plane. The active diffuser is moved in the in termediate image plane in a rotation about a display system optic axis in order to create a shifting phase at a display screen. Moreover,


146 all M independent speckle configurations have equal mean intensities and the speckle contrast is reduced from 1 to 2/1/1 M. In our noise multiplicative model the observed value 2E will be modeled as a random variable resulting from the product of two independent random variables, which correspond to the image in the reconstructed optical field 1E and the stationary multiplicative noise n. After elimination of zero-order term and virtual image, the real image optical field 2E can be represented as: 21EnE where 100exp() EEi and exp()nnEin 2100exp[()]nEnEnEi .The intensity distribution of this optical field becomes 22 220 nIEEEE. If M holograms of the same scene and wavelength are recorded, each of them is individually recons tructed and added on an intensity basis. The result contains the object information 2 0 E and a new noisenewn, where 1 2 0 M N newn inE This noise can be characterized as random multiplicative noise, following a Gamma distribution as both real and imaginary parts of the speckle have zero-mean Gaussian density, and the contrast of this type of distribution behaves as 1/21/ M[8, 11]. A motorized diffuser will be added in the front of the object, and it will rotate in N different positions but remain motionless during the detectors integration time. 8.2.3 Fundus Camera Adaptation for Digita l Interference Holography Imaging Imaging measurements and experiments will be performed using the fundus camera. The performances will be assessed and optimized to be compatible with current ophthalmic


147 imaging applications. The retinal camera is a model TRC-50X (mydriatic retinal camera); Topcon Corp., Tokyo, Japan equipped with a digital back piece, MEGAPLUS model 1.4; Eastman Kodak, San Diego, CA and a PC-bas ed image-management system (Ophthalmic Imaging Systems Inc., Sacramento, CA). The fundus camera will be incorporated in to the optical design in the object arm, and the reference arm length has to be modified in order to match the coherence length of the laser beam. The light coming from the lase r reflects back from the object (the fundus camera image) and the reference mirror, in terfere both at CCD plane and form the interference patern. A few methods have been previously re ported [8] for holographic imaging of the eye using a fundus camera. A modified Zeis s fundus camera was used [9] to obtain holograms. Resolution near the resolution limit of the camera, 20 m, has been obtained in holograms of the fundus of anaesthetized cats. Finally, the laser power has to be m odified to agree with the American Conference of Government Industrial Hygien ists (ACGIH) standards. Then, the DIH setup will be substantially improved and suitable for clinical usage. 8.2.4 Integrated optical diagnostics technique to investigate mechanical properties of single erythrocytes infected with malaria parasite A Plasmodium falciparum (Pf) is a malaria parasite which infects human erythrocytes, resulting in changes in their biological f unctions and modifications to their physical properties. Notable changes in physical characteristics of the cell membrane, such as elasticity and permeability, have been observed and believed to be inter-related with the


148 adherence force of erythrocytes to endothe lial cells. However, there has been no systematic, quantitative study of this aspect Erythrocyte membrane provides a barrier maintaining the integrity of the cell and consists of three major components: a mixed lipid-protein bilayer; a cytoskeletal ne twork structure whose main components are spectrin proteins [12, 13], and transmembrane proteins such as glygoproteins, band 3, and glygophorin. Stability in the macr oscopic characteristics of the plasma membrane is the key to maintain the physical integrity and biological function of the cell. In the macroscopic scale, differences in mechanic al properties between healthy and malariainfected cells have been investigated in an effort to develop im age-based single cell diagnostics. Healthy red blood cells experien ce numerous deforma tions and significant shape changes in flow, being squeezed and el ongated through narrow capillaries [14]. On the other hand, the elasticity of Pf -infected cells is substantia lly decreased and tend to aggregate in a blood vessel. Furthermore, in fected cells tend to adhere to endothelial cells, where Pfs export proteins to the er ythrocyte plasma membrane to result in proteinrich knob regions in the membrane, enabling th e infected cells to r eadily adhere to the receptors of endothelial cells [15] The expression of kno b proteins on the membrane may induce sub-micron scale flicker motion of the membrane, which is also associated with the cooperative motion between its cy toskeleton and lipid bilayer. The key hypothesis of a future experiment will be that this sub micron scale membrane fluctuation influences the cytoadherence of erythrocytes to endothelial cells. In malaria disease, adherence of erythrocytes to endothelial cel ls is one of the key processes of human infection. Therefore, measurements will focus on inter-relating the cytoadherence properties with submicron scale mechanical motion of the membrane.


149 The idea is to develop a novel imaging method where digital interference holography technique is combined with optic al tweezers to enable quantitative dynamical imaging of single erythrocytes in a native c ondition. The role of optical tweezers is twofold: (1) immobilizing erythrocytes without attaching them onto a substrate to enable imaging them in a native condition; (2) assess ing adherence force of erythrocytes on the surface of a monolayer of endothelial cells or on the surface of biomim etic substrates of endothelial cells. Erythrocytes which are attached onto a biomimetic substrate mimicking endothelial cell surface can be shear-stretched by optical tweezers to assess the stretching behaviors between normal cells and infected cells. Shear modulus calculation using digital interference holography will be performed and compared with the shear modulus obtained by the optical tweezers. We expect to build up the 3D struct ure of the cell by di gital interference holography, to extract the threedimensiona l cell thickness fluctu ation maps, and to quantify the elastic shear modulus using both optical tweezers tec hnique and the digital holography on a variety of substrates. A list of potential impact areas includes better understanding of the pathophys iology of the disease, th e development of medical diagnostic devices that are not only novel, portable and inex pensive, but also accessible to the developing world where some diseases are especially rampant.


1508. 3 Bibliography [1]. R. Huber, M. Wojtkowski, K. Taira, J. G. Fujimoto, K. Hsu, Amplified, frequency swept lasers for frequency domain reflectom etry and OCT imaging: design and scaling principles, Opt. Express 13, no. 9, pp. 3513-3528, 2005. [2]. R. Huber, M. Wojtkowski, J. G. Fujim oto, Fourier Domain Mode Locking (FDML): a new laser operating regime and applica tions for optical coherence tomography, Opt. Express 14, no. 8, pp. 3225-3237, 2006. [3]. Youxin Mao, Costel Flueraru, Sherif Sherif, Shoude Chang, High performance wavelength-swept laser with mode-locking t echnique for optical coherence tomography, Optics Communications 282, 1067(2009). [4] H. H. Arsenault, G. April, Properties of Speckle Integrated with a Finite Aperture and Logarithmically Transformed", J. Opt. Soc. Am., Vol. 66, pp. 1160-1163, 1976 [5] D. Gabor, Laser speckl e and its elimination, IBM J Res. Develop. 14, 509, 1970. [6] L. Gagnon, Wavelet Filtering of Speckle Noise Some Numerical Results, Vision Interface '99, Trois-Rivires, Canada, 19-21 May,1999. [7] L. Gagnon and A. Jouan, Speckle Filt ering of SAR Images A Comparative Study Between Complex-Wavelet-Based and Standa rd Filters, SPIE Proc. #3169, conference "Wavelet Applications in Signal an d Image Processing V", San Diego, 1997 [8] J. W. Goodman, Some F undamental Properties of Speckle ", J. Opt. Soc. Am., Vol. 66, pp. 1145-1150, 1976 [9] E. N. Leith and J. Upatnieks, Wavefr ont reconstruction with diffused ilumination and three-dimensional objects, J. Opt. Soc. Am. 54, 129, 1964.


151 [10] J. S. Lim, H. Nawab, Techniques fo r Speckle Noise Removal", Opt. Engineering, Vol. 20, pp. 472-480, 1981 [11] J. Trisnadi, Hadamar d speckle contrast reduction, Opt. Lett. 29, 11, 2004. [12]. Browicz, T. 1890. Further observati on of motion phenomena on red blood cells in pathological states. Zbl. med. Wissen. 28:625. [13]. Parpart, A. K., and J. F. Hoffman 1956. Flicker in erythrocytes; vibratory movements in the cytoplasm. J. Cell. Physiol. 47:295. [14]. Skalak, R. and Branemark, P. ( 1969) Deformation Of Red Blood Cells In Capillaries. Science, 164, 717. [15]. G. Crivat, J. M. Sa, F. Tokumasu, T. E. Wellems, and J. Hwang, Fluorescence Reporter Protein for Studying the Protein Tra fficking of Malaria Infected Human Red Blood Cell, Conference Poster.


152 REFERENCES J.W. Goodman, Introduction to Fourier Optics Third Edition Roberts and Company Publishers, Inc., Englewood, CO (2005). P. Hariharan, Optical Interferometry, Second Edition, Academic Press ISBN 0123116309 (2003). U. Schnars, W Jueptner, Digital Holography: Digital Hologram Recording, Numerical Reconstruction, and Related Techniques, Springer; ISBN: 354021934X (2004). C. Scott, Introduction to Optics and Optical Imaging IEEE Press, ISBN: 078033440X (1998). R. Jones and C. Wykes, Holographic and Speckle Interferometry, Second Edition Cambridge University Press, ISBN: 052134417 (1989). C. J. Kuo and M. H. Tsai, Three-Dimensional Holographic Imaging John Wiley and Sons, Inc., ISBN: 0471358940 (2002).


153 L. Yaroslavsky and M. Eden, Fundamentals of Digital Optics Birkhauser Boston, ISBN: 0817638229 (1996). G. Saxby, Practical Holography, Prentice Hall International (UK) Ltd, ISBN: 0136937977 (1988). D. Clarke and J. F. Grainger, Introduction to Polarized Li ght and Optical Measurement Pergamon Press., ISBN: 080163203 (1971).


154 APPENDICES


155 Appendix A: Digital Interference Holography Wavelengths Superposition The electromagnetic signal is represented as a pair of real and imaginary signals. By convention, the measured signal is the real signal. As an example, the sinusoidal time varying signal has the expression, ()()()()(;) ()ikzwt iwtiziwtztAeuzeAee (A.1) where A is the amplitude, ()kzwt is the absolute phase, 2 k is the wavevector, 2 wf is the angular frequency, f is frequency, t is the time, z is the propagation distance. Also, the function()()izuzAe is called phasor, or complex amplitude of the signal, and () zkz is called the phase. The real part of the Equation. (A.1) is ()[]cos()ikzwtEAeAkzwt Ignoring the time variation of Equation. (A.1), we obtain the expression: ()()() cos()izikzuzAeAeAkz (A.2) In the digital interfer ence holography, we perform wavelength scanning and a hologram is recorded for each wavelength. The superposition of the reconstructed optical fields or cosine waves ()()() cos()iiizikz iiiiiuzAeAeAkz for each wavelength i behaves as a periodic function of pulse-like picks with the period 2 c and the width of each pulse 22 ccz NN Here, N is the number of wavelengths, c is the


156 central wavelength of the wavelength range, is the wavelength increment, and is the wavelength range. If we consider the amplitude of each wave,iA being constant over the wavelength range, and each 2i ik with an increment of maxmin maxmin11 kkk the addition of the N waves has the expression: ) ..... 1( .....)1( 2 11 3 2 1kzNi kzikzikzi zik zik zik zikzik N i zike eeeee eeeei i (A.3) The summation of the exponential function is a geometric progression with a constant ratio of kzier. The Equation (A.3) becomes, 1 1 1123(1) 1 222 2221 (1.....) 1 (1)() (1) ()iiNkz N ikz ikzikz ikzikzikziNkz ikz i NkzNkzNkz iii iNkz ikzikz kzkzkz ikz iiie eeeeeee e eeee ee e eee (A.4) Using the exponential expression for 2 sinixixee x we get, ) 2 1 sin( ) 2 sin(2 2 11kz kz N e e eekz i kzN i zik N i ziki (A.5) We know the wavevector is proportional w ith the reciprocal of the wavelength, 2 k The derivative of the wavevector is 22 2cdk the period of the pulses in the optical field superpos ition function has the form 2 k and the width of each


157 pulse is z N where N is the number of wavelengths. We also call the axial extent of the object and z the axial resolution of the system. The simulation of the super position of one (Figure A1.a), 50 (Figure A1.b), 100 (Figure A1.c), and 200 (Figure A1.d) optical fields versus the reconstruction distance (3000 z m ) are shown below. The wavelength range is 35nm (min0.565 m ,max0.600 m ). The digital interference holography apparatus is an off axis holography setup in reflection geometry so all parameters from the graph with significance of distance have to be divided by 2. For the case with one optical field there is no periodicity so I will no t give details about this grap h (Figure A1.a). For the other cases, the parameters are: for N = 50 wavelengths, -10.013m k 242.1429m 4.8429m z for N = 100 wavelengths, -10.0065m k 484.2857m 4.8429m z for N = 200 wavelengths, -10.0032m k 968.5714m 4.8429m z


158 Figure A1. Optical field superposition simulation, 35nm ; a) One wavelength; b) 50 wavelengths; c) 100 wave lengths; d) 200 wavelengths. If the wavelength range inte rval is doubled than before, 70 nm (min0.560 m ,max0.630 m ), the width of the pulses are thinner by a factor of two. It means the axial resolution increase s by increasing the wavelength range. Figure A2 shows the optic al field superposition 70 nm The scanning parameters are: for N = 50 wavelengths, -10.0249m k 126m 2.52mz for N = 100 wavelengths, -10.0125m k 252m 2.52mz for N = 200 wavelengths, -10.0062m k 504m 2.52mz


159 Figure A2. Optical field superposition simulation, 70 nm ; a) One wavelength; b) 50 wavelengths; c) 100 wave lengths; d) 200 wavelengths.
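A minimal MATLAB sketch of the superposition described in this appendix is given below; the wavelength range is the 35 nm scan quoted above, while the number of wavelengths, the axial grid, and the variable names are assumptions. The summed intensity shows the expected pulse train with period 2π/Δk and pulse width of roughly 2π/(NΔk).

% Minimal MATLAB sketch of the Appendix A superposition: N unit-amplitude
% fields with equally spaced wavevectors are summed along the axial
% coordinate z, producing a periodic train of sharp pulses.
lam_min = 0.565e-6;  lam_max = 0.600e-6;        % scan range [m]
N  = 100;                                       % number of wavelengths
k  = linspace(2*pi/lam_max, 2*pi/lam_min, N);   % equally spaced wavevectors
dk = k(2) - k(1);

z = linspace(0, 3000e-6, 20000);                % reconstruction axis [m]
E = zeros(size(z));
for m = 1:N
    E = E + exp(1i*k(m)*z);                     % superpose the N optical fields
end
I = abs(E).^2;                                  % pulse train versus z
Lambda = 2*pi/dk;                               % beat period of the train
dz     = Lambda/N;                              % estimate of the pulse width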


160 Appendix B: Diffraction Recons truction Methods Comparison This appendix contains two parts. The firs t part covers the deri vation of the Fresnel transform from the angular spectrum of a plan e wave. In the second part, the interface of the computer program written in LabView is presented which is used to reconstruct numerically a hologram using the angular spectrum and the Fresnel method. B1. From the Angular Spectrum to the Fresnel Transform The optical field can be rec onstructed using any reconstruc tion distance using the angular spectrum method. The accuracy of the Fresnel integral is g ood to distances close to the aperture (Goodman). Here we wa nt to derive the Fresnel a pproximation from the point of view of the angular spectrum method starting from the tran sfer function of propagation through space, 22exp[ ] (,) 0xy xy j zkkk Hkk (B1) where the exponential is defined for 222xykk and the transfer function is 0 otherwise. The more usable expression for the Huygens-Fresnel principle needs approximations for the absolute distance 222rxyz Equation (B2) and for the wavevector along the propagation distance 222 zxykkkk Equation (B3).


161 22 2222 22 2222 1/2 22 22[(1)](1 .....)(1) 242 x yxyxyxy xyzz z z zzzz (B2) 2222 x yzkkkk, 2222 22 222 2.....) 242 x yxy xy zxy zkkkkkk kkkkkk kkk (B3) The optical field at the hologram plane is );,(0000zyxE. A complex field );,(0000zyxE at a position vector )0;,(000 zyx can be decomposed into its spectrum of plane-wave components )0;,(0 yxkkA defined by the Fourier transform, 00 0 0 00 0 0 0(,;0)(,;)exp[()]xy xyAkkExyzikxkydxdy (B4) The angular spectrum can then be propagated in space along the z axis, perpendicular to the hologram plan e, multiplying the Equation (B4) by ]exp[ zikz. 22 000000000(,;)exp[()](,;)exp[()] 2xy xy xykk AkkzikzExyzikxkydxdy k (B5) The reconstructed complex wave-field (,,) Exyz is found by: 0 22 00000000 22 00000000(,,) (,;0)exp[( )] ((,;)exp[()])exp[()()] 2 exp()((,;)exp[()])exp[( 2xyxy xyz xy xy xy xy xy xy xyExyzdkdkAkkikxkykz kk dkdkExyzikxkydxdyikxkyikz k kk ikzdkdkExyzikxkydxdyi k 22 00000000] exp()(,;)exp[()()()] 2xy xy xyxyzkxky kk ikzExyzdxdyizikxxikyydkdk k (B6) To solve the exponential integral we need the identity: 2 2exp()exp() 4 iib iaxibxdx aa (B7) where 2 z a k and 0() bxx or 0() byy Using this identity, the Equation (B6) becomes,


162 22 00000000(,,)2exp()(,;)exp[()()] 2 ik ik Exyz ikzExyzdxdyxxyy zz (B8) Developing further this equation we get, 00 0022 22 000000 00 22 22 0000(,,)2exp()exp[()(,;)exp[()2()] 22 2exp()exp[()(,;)exp[()][,] 22xyik ik ik Exyz ikzxyExyzdxdyxyxxyy zz z ik ik ik ikzxyExyzxykk zz z F (B9) As a conclusion, we obtained the optical field based on the Fresnel approximation from optical field based on the angular spec trum method. The two optical fields are identical within the approximations (B2) and (B3). B2. Difraction Reconstruction Methods. Computer Main Screen. Figure B.1: Main screen of difr action reconstruction methods Labview program.
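The two reconstruction routes compared in this appendix can be sketched side by side in MATLAB as follows; the angular_spectrum_reconstruct helper is the illustrative function sketched in Chapter 7, and the hologram, pixel pitch, wavelength, and distance are assumed inputs.

% Minimal MATLAB sketch comparing the angular spectrum route with the
% single-Fourier-transform Fresnel approximation on the same hologram.
h0   = holo - mean(holo(:));                      % suppress the zero order
E_as = angular_spectrum_reconstruct(holo, dx, lambda, z);

% Fresnel reconstruction (Eq. B9, up to constant amplitude/phase factors)
[Ny, Nx] = size(h0);
k = 2*pi/lambda;
[x0, y0] = meshgrid((-Nx/2:Nx/2-1)*dx, (-Ny/2:Ny/2-1)*dx);
chirp = exp(1i*k/(2*z) * (x0.^2 + y0.^2));        % quadratic phase factor
E_fr  = fftshift(fft2(h0 .* chirp));              % single Fourier transform
% Note: the Fresnel output is sampled on a different grid (pixel size
% lambda*z/(N*dx)), so compare image features rather than pixel-by-pixel
% values; in the paraxial regime the amplitude images should look alike.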


163 Appendix C: Fourier Transform FOURIER TRANSFORM If f(x) is a reasonably we ll-behaved function, then 11 exp 2 1 exp 2 fxFkikxdk Fkfxikxdx FkfxkfxFkx FF FOURIER SERIES If f(x) is a periodic function of period then fx Fnexp inKxn Fn 1 fxexp inKxdx0 where K 2 is the fundamental frequency. DISCRETE FOURIER TRANSFORM f(x) defined in [-X/2, X/2]: fx 0 for x X 2 X 2 fs(x) sampled at x intervals: fsx fx comb x x i.e., fsxi fxi for xi X 2 :x : X 2


Fourier transform F_s(k):

$$F_s(k)=\mathcal{F}\{f_s(x)\}(k)=\mathcal{F}\!\left\{f(x)\,\mathrm{comb}\!\left(\frac{x}{\Delta x}\right)\right\}(k)=F(k)\circledast\frac{\Delta x}{2\pi}\,\mathrm{comb}\!\left(\frac{k}{2\pi/\Delta x}\right)=\sum_n F\!\left(k-n\,\frac{2\pi}{\Delta x}\right)$$

Therefore, if $F(k)=0$ for $k\notin[-K/2,K/2]$, where $K=2\pi/\Delta x$, then $F_s(k)=F(k)$ for $k\in[-K/2,K/2]$.

Conversely, F(k) defined in [-K/2, K/2]:

$$F(k)=0 \quad \text{for} \quad k\notin\left[-\tfrac{K}{2},\tfrac{K}{2}\right]$$

F_s(k) sampled at $\Delta k$ intervals:

$$F_s(k)=F(k)\,\mathrm{comb}\!\left(\frac{k}{\Delta k}\right)$$

Inverse Fourier transform f_s(x):

$$f_s(x)=f(x)\circledast\frac{\Delta k}{2\pi}\,\mathrm{comb}\!\left(\frac{x}{2\pi/\Delta k}\right)$$

Therefore, if $f(x)=0$ for $x\notin[-X/2,X/2]$, where $X=2\pi/\Delta k$, then $f_s(x)=f(x)$ for $x\in[-X/2,X/2]$.

Therefore, if both f(x) and F(k) are discretized with N+1 points, then:

$$K=N\,\Delta k=N\,\frac{2\pi}{X}=\frac{2\pi}{\Delta x}, \qquad \Delta k=\frac{K}{N}=\frac{2\pi}{N\,\Delta x}=\frac{2\pi}{X},$$
$$X=N\,\Delta x=N\,\frac{2\pi}{K}=\frac{2\pi}{\Delta k}, \qquad \Delta x=\frac{X}{N}=\frac{2\pi}{N\,\Delta k}=\frac{2\pi}{K}.$$

DIRAC DELTA FUNCTION PROPERTIES

$$\int\delta(x-x_0)\,dx=1, \qquad \int f(x)\,\delta(x-x_0)\,dx=f(x_0), \qquad \delta(ax)=\frac{1}{|a|}\,\delta(x).$$
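The reciprocal sampling relations above can be checked numerically. The short Python/NumPy snippet below is illustrative only; the sample count and spatial extent are arbitrary assumptions, and NumPy's FFT uses N points rather than the N+1 of the relations, so the relations hold with N in place of N+1.

```python
import numpy as np

N = 1024          # number of samples (illustrative)
X = 2.0e-3        # spatial extent in meters (illustrative)
dx = X / N        # sampling interval

# Angular-frequency axis produced by the FFT
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
dk = k[1] - k[0]
K = N * dk

print(np.isclose(dk, 2 * np.pi / X))    # True: dk = 2*pi/X
print(np.isclose(K, 2 * np.pi / dx))    # True: K  = 2*pi/dx = N*dk
```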


Appendix D: Digital Interference Holography Computer Interface

This appendix contains a few computer programs, for digital interference holography, written in LabVIEW. The main Digital Interference Holography computer interface (Figure D.1) has designated buttons, and each of them performs specific functions. They are:

set mike, the micrometer setup, which is used for the calibration.
Holo EE, the reconstruction of a hologram using the angular spectrum method.
set camera, the camera settings.
ws HHH, which is used to acquire holograms from the IMAQ CCD camera for each wavelength in the wavelength range.
DIH calc, which has the option to perform or not to perform the phase-matching correction, does the optical field superposition, and saves the stack of holograms (a minimal sketch of this superposition step follows the list).
DH View, which is used to visualize various image files already saved.
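The following is a hedged Python/NumPy sketch of the wavelength-superposition step that DIH calc performs; it is not the LabVIEW code itself, and the names (dih_superposition, reconstruct_field, holograms, wavelengths) are illustrative assumptions. Each hologram acquired at a scanned wavelength is reconstructed at the desired depth and the complex fields are summed.

```python
import numpy as np

def dih_superposition(holograms, wavelengths, dx, z, reconstruct_field):
    """Sum the complex fields reconstructed from holograms recorded at scanned
    wavelengths; reconstruct_field(holo, wavelength, dx, z) is any diffraction
    reconstruction routine (e.g. the angular spectrum method)."""
    total = None
    for holo, wl in zip(holograms, wavelengths):
        field = reconstruct_field(holo, wl, dx, z)
        # an optional phase-matching correction would be applied to `field` here
        total = field if total is None else total + field
    return np.abs(total)   # superposed amplitude at depth z
```

Repeating this over a range of z values builds the stack of images that the program saves at the end.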


Figure D.1: Main screen of the Digital Interference Holography LabVIEW program.


Figure D.2: Calibration via micrometer controller.


Figure D.3: Calibration curve, micrometer position versus wavelength.

Figure D.4: Calibration curve, micrometer position versus wavevector.


Figure D.5: Digital interference holography, hologram acquisition from the IMAQ CCD camera.


Figure D.6: Hologram acquisition. Block Diagram.


Figure D.7: Phase-match, filter correction. Block Diagram.


Figure D.8: Phase-unwrapping using DIH. Block Diagram.

Figure D.9: Phase-unwrapping using DIH. Block Diagram.


Appendix E: Brownian Motion and Optical Trapping Computer Interface

This appendix contains selected computer programs, for Brownian motion and optical trapping, written in LabVIEW. The main computer interface functions are:

set camera, the camera settings.
Reconstruction distance, the reconstruction of a hologram using the angular spectrum method.
Get holograms, which is used to capture holograms from the IMAQ CCD camera over a time interval.
EE Rec, which has two options: to perform individual hologram reconstructions, or to perform hologram differences for particle tracking (a minimal sketch of the difference approach follows the list). In both cases the results are saved at the end.
Gabor View, which is used to visualize various image files already saved.
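As a rough illustration of the hologram-difference option of EE Rec, the Python/NumPy sketch below subtracts consecutive holograms, reconstructs the difference, and localizes the particle. It is not the dissertation's actual algorithm; the names (track_particle, reconstruct_field) and the brightest-pixel localization are illustrative assumptions.

```python
import numpy as np

def track_particle(holograms, wavelength, dx, z, reconstruct_field):
    """Estimate (x, y) particle positions over time from differences of
    consecutive Gabor holograms reconstructed at depth z."""
    positions = []
    for h_prev, h_next in zip(holograms[:-1], holograms[1:]):
        diff = h_next.astype(float) - h_prev.astype(float)    # static background cancels
        amp = np.abs(reconstruct_field(diff, wavelength, dx, z))
        iy, ix = np.unravel_index(np.argmax(amp), amp.shape)  # brightest spot as a crude localization
        positions.append((ix * dx, iy * dx))
    return np.array(positions)
```

Subtracting consecutive frames suppresses the stationary background terms common to both holograms, leaving mainly the contribution of the displaced particle.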


Figure E.1: Main screen of the Brownian motion and optical trapping LabVIEW program.


Figure E.2: Brownian Motion and Optical Trapping, hologram acquisition from the IMAQ CCD camera.


Figure E.3: Sequence acquisition of multiple holograms. Block Diagram.


Appendix F: List of Accomplishments

REFEREED PUBLICATIONS

M. C. Potcoava and M. K. Kim, "Optical tomography for biomedical applications by digital interference holography," Meas. Sci. Technol., Vol. 19, 074010 (2008).

M. C. Potcoava, C. N. Kay, M. K. Kim, and D. W. Richards, "Digital Interference Holography in Ophthalmology," Journal of Modern Optics (accepted).

M. K. Kim and M. C. Potcoava, "Fingerprint Biometry Applications of Digital Interference Holography," Applied Optics (in review).

M. C. Potcoava, L. Krewitza, and M. K. Kim, "Brownian motion of optically trapped particles by digital Gabor holography" (to be submitted to Optics Express).

PROCEEDINGS

M. C. Potcoava and M. K. Kim, "Fingerprints scanner using Digital Interference Holography," in Biometric Technology for Human Identification VI (SPIE Defense, Security, and Sensing 2009), paper presentation 7306B-80.

M. C. Potcoava, M. K. Kim, and Christine N. Kay, "Wavelength scanning digital interference holography for high-resolution ophthalmic imaging," in Ophthalmic Technologies XIX (SPIE BiOS 2009), paper presentation 7163-10.


CONFERENCES

M. K. Kim and M. C. Potcoava, "Fingerprint Biometry Applications of Digital Holography and Low-Coherence Interference Microscopy," in Digital Holography and Three-Dimensional Imaging, Optical Society of America, Vancouver, Canada, 2009.

M. C. Potcoava and M. K. Kim, "Fingerprints scanner using Digital Interference Holography," in Biometric Technology for Human Identification VI, SPIE Defense, Security, and Sensing, Orlando FL, 2009, paper presentation 7306B-80.

M. C. Potcoava, M. K. Kim, and C. N. Kay, "Wavelength scanning digital interference holography for high-resolution ophthalmic imaging," in Ophthalmic Technologies XIX, SPIE BiOS, San Jose CA, 2009, paper presentation 7163-10.

C. N. Kay, M. C. Potcoava, M. K. Kim, and D. W. Richards, "Digital Holography Imaging of Human Macula," Florida Society of Ophthalmology Resident Symposium, Palm Beach FL, 2008, paper presentation (second place).

M. C. Potcoava, C. N. Kay, M. K. Kim, and D. W. Richards, "Digital Interference Holography in Ophthalmology," ARVO, Fort Lauderdale FL, 2008, paper presentation 4011.

M. C. Potcoava and M. K. Kim, "3-D Representation of Retinal Blood Vessels through Digital Interference Holography," in Digital Holography and Three-Dimensional Imaging, OSA, St. Petersburg FL, 2008, paper presentation DMB2.

M. C. Potcoava and M. K. Kim, "Animal Tissue Tomography by Digital Interference Holography," in Adaptive Optics: Analysis and Methods / Computational Optical Sensing and Imaging / Information Photonics / Signal Recovery and Synthesis Topical Meetings, OSA Technical Digest, OSA, Baltimore MD, 2007, paper presentation DWC6.

M. C. Potcoava, "Digital Interference Holography in the 21st Century," USF Graduate Research Symposium, Tampa FL, 2008, poster presentation (first place).

M. C. Potcoava, C. N. Kay, M. K. Kim, D. W. Richards, and P. R. Pavan, "Digital Holography Imaging of Human Macula," ASRS, Maui HI, 2008, poster presentation.


About the Author

Mariana Potcoava received a Bachelor's Degree in Physics from Bucharest University in 1994, an M.S. in Photonics from Politechnica University Bucharest in 1997, and another M.S. in Atmospheric Science from the University of Michigan in 2003. She entered the Ph.D. program in Applied Physics at the University of South Florida in Fall 2004. She completed an Industrial Practicum at Imaging Eyes, Orsay, France, as part of the Applied Physics practical training.

While in the Ph.D. program at the University of South Florida, Ms. Potcoava presented her work at numerous technical conferences, such as OSA, SPIE, and ARVO, and published the results of this dissertation in various academic journals, including Measurement Science and Technology, Journal of Modern Optics, Applied Optics, and Optics Express.