<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<record xmlns="http://www.loc.gov/MARC21/slim" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd">
  <leader>nam Ka</leader>
  <controlfield tag="001">001910730</controlfield>
  <controlfield tag="003">fts</controlfield>
  <controlfield tag="005">20070928150738.0</controlfield>
  <controlfield tag="006">med</controlfield>
  <controlfield tag="007">cr mnuuuuuu</controlfield>
  <controlfield tag="008">070928s2006 flu sbm 000 0 eng d</controlfield>
  <datafield tag="024" ind1="8" ind2=" ">
    <subfield code="a">E14SFE0001709</subfield>
  </datafield>
  <datafield tag="040" ind1=" " ind2=" ">
    <subfield code="a">FHM</subfield>
    <subfield code="c">FHM</subfield>
  </datafield>
  <datafield tag="035" ind1=" " ind2=" ">
    <subfield code="a">(OCoLC)173488751</subfield>
  </datafield>
  <datafield tag="049" ind1=" " ind2=" ">
    <subfield code="a">FHMM</subfield>
  </datafield>
  <datafield tag="090" ind1=" " ind2=" ">
    <subfield code="a">QC21.2 (ONLINE)</subfield>
  </datafield>
  <datafield tag="100" ind1="1" ind2=" ">
    <subfield code="a">Mann, Christopher J.</subfield>
  </datafield>
  <datafield tag="245" ind1="0" ind2="0">
    <subfield code="a">Quantitative biological microscopy by digital holography</subfield>
    <subfield code="h">[electronic resource] /</subfield>
    <subfield code="c">by Christopher J. Mann.</subfield>
  </datafield>
  <datafield tag="260" ind1=" " ind2=" ">
    <subfield code="a">[Tampa, Fla] :</subfield>
    <subfield code="b">University of South Florida,</subfield>
    <subfield code="c">2006.</subfield>
  </datafield>
  <datafield tag="520" ind1="3" ind2=" ">
    <subfield code="a">ABSTRACT: In this dissertation, improved techniques in digital holography that have produced high-resolution, high-fidelity images are discussed. In particular, the angular spectrum method of calculating the holographic optical field is noted to have several advantages over the more commonly used Fresnel transformation or Huygens convolution method. It is observed that spurious noise and interference components can be tightly controlled through the analysis and filtering of the angular spectrum. In the angular spectrum method, the reconstruction distance does not have a lower limit, and the off-axis angle between the object and reference waves can be lower than that of the Fresnel requirement, while still allowing the zero-order background to be cleanly separated. Holographic phase images are largely immune from the coherent noise commonly found in amplitude images. With the use of a miniature pulsed laser, the resulting images have 0.5 μm diffraction-limited lateral resolution and the phase profile is accurate to about several nanometers of optical path length. Samples such as ovarian cancer cells (SKOV-3) and mouse-embryo fibroblast cells have been imaged. These images display intracellular and intranuclear organelles with clarity and quantitative accuracy. This technique clearly exceeds currently available methods in phase-contrast optical microscopy in both resolution and detail and provides a new modality for imaging morphology of cellular and intracellular structures that is not currently available. Furthermore, we also demonstrate that phase imaging digital holographic movies provide a novel method of noninvasive quantitative viewing of living cells and other objects. This technique is shown to have significant advantages over conventional microscopy.</subfield>
  </datafield>
  <datafield tag="502" ind1=" " ind2=" ">
    <subfield code="a">Dissertation (Ph.D.)--University of South Florida, 2006.</subfield>
  </datafield>
  <datafield tag="504" ind1=" " ind2=" ">
    <subfield code="a">Includes bibliographical references.</subfield>
  </datafield>
  <datafield tag="516" ind1=" " ind2=" ">
    <subfield code="a">Text (Electronic dissertation) in PDF format.</subfield>
  </datafield>
  <datafield tag="538" ind1=" " ind2=" ">
    <subfield code="a">System requirements: World Wide Web browser and PDF reader.</subfield>
    <subfield code="a">Mode of access: World Wide Web.</subfield>
  </datafield>
  <datafield tag="500" ind1=" " ind2=" ">
    <subfield code="a">Title from PDF of title page.</subfield>
    <subfield code="a">Document formatted into pages; contains 209 pages.</subfield>
    <subfield code="a">Includes vita.</subfield>
  </datafield>
  <datafield tag="590" ind1=" " ind2=" ">
    <subfield code="a">Adviser: Myung K. Kim, Ph.D.</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">Computer holography.</subfield>
    <subfield code="a">Holographic interferometry.</subfield>
    <subfield code="a">Interference microscopy.</subfield>
    <subfield code="a">Numerical reconstruction.</subfield>
    <subfield code="a">Phase unwrapping.</subfield>
    <subfield code="a">Phase-contrast microscopy.</subfield>
  </datafield>
  <datafield tag="690" ind1=" " ind2=" ">
    <subfield code="a">Dissertations, Academic</subfield>
    <subfield code="z">USF</subfield>
    <subfield code="x">Physics</subfield>
    <subfield code="x">Doctoral or Masters.</subfield>
  </datafield>
  <datafield tag="773" ind1=" " ind2=" ">
    <subfield code="t">USF Electronic Theses and Dissertations.</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="u">http://digital.lib.usf.edu/?e14.1709</subfield>
  </datafield>
</record>
Quantitative Biological Microscopy by Digital Holography

by

Christopher J. Mann

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy
Department of Physics
College of Arts and Sciences
University of South Florida

Major Professor: Myung K. Kim, Ph.D.
Dennis K. Killinger, Ph.D.
Chun-Min Lo, Ph.D.
George S. Nolas, Ph.D.

Date of Approval: July 14, 2006

Keywords: computer holography, holographic interferometry, interference microscopy, numerical reconstruction, phase unwrapping, phase-contrast microscopy

Copyright 2006, Christopher J. Mann
Dedications

To all my family and friends who have supported me on my journey.
Acknowledgments

I would like to first express my sincere appreciation to Dr. Kim for his support and guidance during this project and for his continued patience and encouragement throughout my study. He is an outstanding advisor. I would also like to thank all my committee members, Dr. Dennis Killinger, Dr. Chun-Min Lo, and Dr. George Nolas, for providing technical advice and serving on my dissertation and candidacy exam committees. I would also like to thank Dr. Shekhar Bhansali for chairing my defense examination.

I would also like to convey my thanks to my fellow lab members Leo Krzewina, Alex Khmaladze, Nilanthi Warnasooriya, and Mariana Potcoava, as well as Dr. Lingfeng Yu, who helped me to build this work. Also, I must say a huge thank you to Anali Makoui, who has provided me with much impetus and gave so much valuable support and encouragement throughout this project. I also acknowledge the numerous helpful discussions with Avis Pal, Ed Navarro, and Yaroslav Shtogun. Finally, I would like to thank the staff of the Physics department for their help: Sue Wolfe, Evelyn Keeton-Williams, Sam Valenti, and Phil Bergeron.
Table of Contents

List of Tables
List of Figures
Abstract
1. Introduction
   1.1. Foundations of Holography
   1.2. Digital Holography
   1.3. Applications to Microscopy
      1.3.1. Three-Dimensional Microscopy
      1.3.2. Phase-Contrast Microscopy
      1.3.3. Numerical Focusing
      1.3.4. Phase Unwrapping
   1.4. Research Contributions
   1.5. Thesis Organization
2. Principles of Holography
   2.1. Wave Theory of Light
   2.2. Interference
   2.3. Coherence
      2.3.1. Temporal Coherence
      2.3.2. Spatial Coherence
   2.4. Diffraction Theory
      2.4.1. Huygens-Fresnel Principle
      2.4.2. Integral Theorem of Kirchhoff and Helmholtz
      2.4.3. Fresnel-Kirchhoff Diffraction
      2.4.4. Rayleigh-Sommerfeld Diffraction
   2.5. Speckles
   2.6. Holographic Recording and Reconstruction
3. Digital Holography
   3.1. Digital Recording of Holograms
   3.2. Numerical Diffraction
      3.2.1. Reconstruction by Fresnel Transform
      3.2.2. Reconstruction by Fresnel Convolution
      3.2.3. Reconstruction by Huygens Convolution
      3.2.4. Reconstruction by Angular Spectrum
   3.3. Separation of Conjugate Image and DC Term
      3.3.1. Suppression of DC Term
   3.4. Recording Digital Holograms
      3.4.1. CCD Cameras in Digital Holography
      3.4.2. Spatial Frequency Requirements
      3.4.3. Lateral Resolution of the Optical System
      3.4.4. Calibration of Optical System in Digital Holography
4. Digital Gabor Holography for Microscopy
   4.1. Introduction
   4.2. Gabor Holography Simulation
   4.3. Limitations of Gabor Holography
   4.4. Recording of Digital Gabor Holograms
   4.5. Experimental Results
   4.6. Digital Gabor Holographic Movies
      4.6.1. Automatic Focusing for Digital Gabor Movies
   4.7. Conclusions
5. Quantitative Off-Axis Digital Holography for Microscopy
   5.1. Introduction
   5.2. Quantitative Evaluation of an Object
   5.3. Experimental Setup for Off-Axis Holography
      5.3.1. Optical Alignment
   5.4. Phase Unwrapping
   5.5. Comparison of Numerical Reconstruction Methods
   5.6. Phase Imaging Digital Holography
   5.7. Quantitative Phase-Contrast Microscopy by Digital Holography
   5.8. Off-Axis Digital Holographic Movies
   5.9. Discussion and Conclusions
6. Multi-Wavelength Phase Imaging Digital Holography (MWPIDH)
   6.1. Introduction
   6.2. Theory of MWPIDH
   6.3. Experimental Setup for MWPIDH
   6.4. Two-Wavelength PIDH Results
   6.5. MWPIDH Holographic Movies
   6.6. Conclusions
7. Conclusions and Future Work
References
Bibliography
Appendices
   Appendix A: Theorems
   Appendix B: LabView Programs
   Appendix C: Matlab Programs
   Appendix D: List of Accomplishments
About the Author
List of Tables

Table 3.1. Specification data for a 1951 USAF resolution test target
List of Figures

Figure 1.1. Process of Digital Holography
Figure 2.1. Interference of Plane Waves
Figure 2.2. Diffraction from Aperture
Figure 2.3. Huygens' Principle
Figure 2.4. Huygens-Fresnel Principle
Figure 2.5. Integration Surface in Green's Theorem
Figure 2.6. Fresnel-Kirchhoff Diffraction
Figure 2.7. Speckle Pattern
Figure 2.8. Recording a Hologram
Figure 2.9. Reconstruction of a Hologram
Figure 3.1. Recording a Digital Hologram
Figure 3.2. Coordinate System in Digital Holography
Figure 3.3. Angular Spectrum Propagation
Figure 3.4. Fourier Spectrum of a Hologram
Figure 3.5. USAF 1951 Resolution Test Pattern
Figure 4.1. Digital Gabor Holography Simulation
Figure 4.2. Twin Image Formation
Figure 4.3. Digital Gabor Experimental Setup
Figure 4.4. Bird's-Eye View of Experimental Setup
Figure 4.5. Optical Layout of Pulsed Laser
Figure 4.6. Pulsed Laser Specifications
Figure 4.7. Triggering of the Optical System
Figure 4.8. USAF Resolution Target 1
Figure 4.9. USAF Resolution Target 2
Figure 4.10. Fourier Transform of Gabor Hologram
Figure 4.11. Onion Cells
Figure 4.12. Copolymer Microspheres 1
Figure 4.13. Copolymer Microspheres 2
Figure 4.14. Paramecium
Figure 4.15. Anatomy of Paramecium
Figure 4.16. Time-Series Holograms of a Microbe
Figure 4.17. Time-Series Reconstructed Amplitude Images of a Microbe
Figure 4.18. Time-Series Images of Euglenas
Figure 4.19. Conventional Microscopic Image of Euglenas
Figure 4.20. Autofocusing of a Test Target
Figure 5.1. Phase Shift Produced by a Transparent Object
Figure 5.2. Experimental Setup for Off-Axis Holography
Figure 5.3. Bird's-Eye View of Experimental Setup for Off-Axis Holography
Figure 5.4. Phase Variation of a Tilted Object
Figure 5.5. Removing 2π Discontinuities
Figure 5.6. Holography of Resolution Target
Figure 5.7. Numerical Focusing of Resolution Target
Figure 5.8. Holography of Paramecium
Figure 5.9. Holography of Paramecium 2
Figure 5.10. Holography of Layer of Onion Cells
Figure 5.11. Quantitative Evaluation of Resolution Target
Figure 5.12. Quantitative Evaluation of Resolution Target 2
Figure 5.13. Quantitative Evaluation of Resolution Target 3
Figure 5.14. High-Magnification Evaluation of Resolution Target
Figure 5.15. Sample Preparation
Figure 5.16. Mouse-Embryo Cell
Figure 5.17. Onion Nucleus
Figure 5.18. Onion Nucleus 2
Figure 5.19. Red Blood Cells
Figure 5.20. Red Blood Cells, Colormap Change
Figure 5.21. Red Blood Cell
Figure 5.22. Red Blood Cells 2
Figure 5.23. SKOV-3 Ovarian Cancer Cell
Figure 5.24. HUVEC Cell
Figure 5.25. Confluent SKOV-3 Ovarian Cancer Cells
Figure 5.26. SKOV-3 Ovarian Cancer Cells
Figure 5.27. SKOV-3 Ovarian Cancer Cell 2
Figure 5.28. Cheek Cell
Figure 5.29. Quartz Fragment
Figure 5.30. Water Droplet
Figure 5.31. Time-Series Amplitude Images of Paramecium
Figure 5.32. Time-Series Phase Images of Paramecium
Figure 5.33. Time-Series Amplitude Images of Paramecium 2
Figure 5.34. Time-Series Phase Images of Paramecium 2
Figure 5.35. Time-Series Images of Mouse-Embryo Cell Migration
Figure 5.36. Time-Series Quantitative 3D Images of Mouse-Embryo Cell Migration
Figure 5.37. Fibroblast Cells Undergoing Mitosis
Figure 6.1. Simulation of Two-Wavelength Phase Imaging
Figure 6.2. Simulation of Two-Wavelength Phase Imaging with Noise Removal
Figure 6.3. Simulation of Three-Wavelength Phase Imaging
Figure 6.4. Experimental Setup for Two-Wavelength Phase Imaging
Figure 6.5. MWPIDH of Resolution Target
Figure 6.6. MWPIDH of Resolution Target 2
Figure 6.7. MWPIDH of Onion Cells
Figure 6.8. Spectral Response of CCD Camera
Figure 6.9. Time-Series Amplitude Images of Rotifer
Figure 6.10. Time-Series Phase Images of Rotifer
Figure 6.11. Time Series of Images by MWPIDH of Rotifer
Figure 6.12. Time-Series Amplitude Images of Paramecium
Figure 6.13. Time-Series Phase Images of Paramecium
Figure 6.14. Time Series of Images by MWPIDH of Paramecium
Quantitative Biological Microscopy by Digital Holography

Christopher J. Mann

ABSTRACT

In this dissertation, improved techniques in digital holography that have produced high-resolution, high-fidelity images are discussed. In particular, the angular spectrum method of calculating the holographic optical field is noted to have several advantages over the more commonly used Fresnel transformation or Huygens convolution method. It is observed that spurious noise and interference components can be tightly controlled through the analysis and filtering of the angular spectrum. In the angular spectrum method, the reconstruction distance does not have a lower limit, and the off-axis angle between the object and reference waves can be lower than that of the Fresnel requirement, while still allowing the zero-order background to be cleanly separated.

Holographic phase images are largely immune from the coherent noise commonly found in amplitude images. With the use of a miniature pulsed laser, the resulting images have 0.5 μm diffraction-limited lateral resolution and the phase profile is accurate to about several nanometers of optical path length. Samples such as ovarian cancer cells (SKOV-3) and mouse-embryo fibroblast cells have been imaged. These images display intracellular and intranuclear organelles with clarity and quantitative accuracy. This technique clearly exceeds currently available methods in phase-contrast optical microscopy in both resolution and detail and provides a new modality for imaging morphology of cellular and intracellular structures that is not currently available. Furthermore, we also demonstrate that phase imaging digital holographic movies provide a novel method of noninvasive quantitative viewing of living cells and other objects. This technique is shown to have significant advantages over conventional microscopy.
Chapter 1

Introduction

Holography, formed from the Greek words Holo (whole) and graphe (write), is a method of recording both the intensity and phase information of a wavefield. It was first suggested by Dennis Gabor in 1948 while working to improve upon the resolution of electron microscopy [1], which suffers from limiting aberrations. Since its discovery, holography has become an active field of research in modern optics and is now an established tool for scientific and engineering studies.

This chapter presents an introduction to digital holography and the motivations behind our research, in the following order: Section 1.1 discusses the foundations of holography and its development in a historical context. Section 1.2 describes the advent of digital holography, which has seen rapid progression due to advances in computers and technology. Its applications to microscopy are investigated in Section 1.3. Section 1.4 summarizes research contributions and motivations. Finally, Section 1.5 outlines the organization of this thesis.

1.1 Foundations of holography

An object illuminated with a coherent light source reflects or transmits the light wave, which consequently carries information about the object's physical and optical properties. The intensity or amplitude information of the wave is associated with the reflectance or attenuation property of the object, while the phase information is related to
the topography or thickness. Recording devices such as Charge-Coupled Device (CCD) cameras and photographic plates are able to capture the intensity of the incoming light, but not the phase information. As an inherent three-dimensional technique, holography presents a means of encoding the phase information as an intensity variation termed a hologram, which can then be recorded by devices sensitive only to the intensity component of light.

A hologram is defined as the recorded interference pattern between a coherent object wave, transmitted or reflected by an object, and a reference wave which interferes directly with this wave at the recording medium. The hologram contains information about the entire three-dimensional wavefield (amplitude and phase), which is encoded in the form of interference lines at high spatial frequencies. In conventional holography the recorded object wave is reconstructed optically by illuminating the chemically processed hologram with an optical replica of the reference wave. This produces a reconstructed object which is indistinguishable from the original object and contains the 3D information, both amplitude and phase.

In the original experimental setup of Gabor, both the object wave and reference wave are located along the optical axis normal to the recording medium. This leads to a reconstructed image superimposed by the bright reconstruction wave, called the dc term or zero order, and a second component, the so-called twin or conjugate image. These unwanted components obscure the true amplitude and phase of the reconstructed object wave.

While the basic foundations of holography were laid down by Gabor, it was not until the 1960s that holography gained widespread interest and practical use. This was
due to the availability of the laser and the introduction of the off-axis technique by Leith and Upatnieks [2], in which the twin images and the dc term are spatially separated by the introduction of an angle between the object and reference waves. In the late 1960s, the first paper on Holographic Interferometry (HI) was published by Stetson, Powell et al. [3]. Holographic Interferometry was designed as a non-contact optical test tool for deformation studies of strains, vibrations, and heating, and enabled nondestructive measurement of surfaces with an accuracy of less than a micron.

Other significant advances in holography made around the same time were the creation of computer-generated holograms using Fraunhofer diffraction. Numerical methods for reconstruction of the hologram were initially proposed by Goodman et al. [4] and Kronrod et al. [5]. These methods included digitizing holograms recorded on photographic plates, which were then reconstructed numerically.

1.2 Digital holography

The conventional process of performing holography using photographic plates is time-consuming and cumbersome. This has precluded the widespread use of conventional holography except in cases where measurements are considered important enough to justify the time and expense involved. Also, real-time processing of a conventional hologram is not feasible unless one uses photorefractives and other nonlinear optical materials. The advances in digital imaging and computation technologies have now made it feasible and advantageous to replace the photochemical processing of conventional holography with CCD arrays and numerical computation [6, 7]. With the development of higher-performance CCDs and computational techniques, digital holography is fast
becoming an increasingly attractive alternative to conventional film-based holography. It offers a number of significant advantages, such as simple, fast image acquisition and the availability of many powerful digital processing algorithms. By calculating the complex optical field of an image volume, the amplitude and phase of the optical field are simultaneously available for direct manipulation [8-12].

There are numerous digital processing techniques for manipulating the optical field information in ways that are difficult or impossible in real-space processing. For example, optical system aberration can be numerically corrected [13, 14] and holographic interferometry can be performed between remotely situated objects through telecommunication links [15].

Figure 1.1 illustrates the digital holography process. A digital hologram is created by the interference between a coherent object and a reference beam, which is recorded by a CCD camera and digitally transferred to a computer, where it is processed by computational methods to obtain the holographic images. The digital hologram contains not only amplitude information of the object, but also phase. Moreover, the ability of the CCD camera to quantify the recorded light gives rise to a number of post-processing methods that can, for instance, be used to obtain quantitative three-dimensional topography of an object surface from the phase information. The fact that quantities such as the phase of the wave cannot be measured directly with conventional detectors, which are only sensitive to the wavefield's intensity, makes digital holography a particularly attractive technique for measuring morphological properties of an object.
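The conversion from a reconstructed phase map to surface topography mentioned above can be sketched in a few lines. The dissertation's own programs were written in LabView and Matlab (Appendices B and C), so the NumPy fragment below is only an illustrative translation; the wavelength and refractive indices are assumed values, not ones taken from the text.

```python
import numpy as np

# Sketch: converting an unwrapped phase map into optical and physical
# thickness for a transmission sample. All numbers are assumptions.
wavelength = 633e-9                 # HeNe wavelength in metres (assumed)
n_sample, n_medium = 1.38, 1.335    # assumed indices (cell vs. buffer)

# A toy unwrapped phase map in radians. A real map would come from the
# reconstructed complex field (np.angle) followed by phase unwrapping.
phase = np.array([[0.0, np.pi],
                  [2 * np.pi, 3 * np.pi]])

opl = phase * wavelength / (2 * np.pi)     # optical path length (m)
thickness = opl / (n_sample - n_medium)    # physical thickness (m)
```

A phase step of 2π thus corresponds to one wavelength of optical path length, which the index contrast then scales up to a physical thickness of several micrometres, consistent with the nanometre-scale path-length sensitivity discussed later.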
Figure 1.1: The process of digital holography
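The recording step in this process is easy to simulate numerically. The sketch below (NumPy, as an illustrative stand-in for the Matlab programs in Appendix C; all parameters are assumed) forms an off-axis hologram as the intensity of a superposed object and reference wave. Its Fourier spectrum shows the dc term at the centre, with the twin-image terms displaced by the carrier frequency introduced by the off-axis tilt, as described in Section 1.1.

```python
import numpy as np

# Simulate recording an off-axis digital hologram (assumed parameters).
N = 256                              # pixels per side
wavelength = 633e-9                  # HeNe wavelength (assumed)
k = 2 * np.pi / wavelength
x = np.linspace(-1e-3, 1e-3, N)      # 2 mm square sensor (assumed)
X, Y = np.meshgrid(x, x)

# Object wave: unit amplitude carrying a smooth phase "object".
O = np.exp(1j * 2.0 * np.exp(-(X**2 + Y**2) / (0.3e-3) ** 2))

# Reference wave: plane wave tilted by a small off-axis angle.
theta = np.radians(1.0)
R = np.exp(1j * k * np.sin(theta) * X)

# Hologram: the intensity a CCD would record.
hologram = np.abs(O + R) ** 2

# Fourier spectrum: dc term at the centre, +1/-1 orders displaced
# by the carrier frequency sin(theta)/wavelength.
spectrum = np.fft.fftshift(np.fft.fft2(hologram))
```

Because both waves here have unit amplitude, the recorded intensity stays between 0 and 4, and the strong zero-order peak dominates the centre of the spectrum, which is exactly why the off-axis angle is needed to move the image terms away from it.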
1.3. Applications to microscopy

Digital holography has been applied in diverse fields, including metrology [16], deformation measurement [17], vibrational analysis [18], art conservation [19], microstructures [20, 21] and, more recently, biological microscopy [22-27]. The applications to microscopy are particularly appealing due to a number of technical advantages.

1.3.1 Three-Dimensional Microscopy

One of the main drawbacks in conventional optical microscopy is the small depth of focus at higher spatial resolution, due to the objective's large numerical aperture and magnification. Only two-dimensional information of an object can be obtained. More specifically, the diffraction-limited depth of focus is given by

    Δz_focus = nλ / NA²    (1.1)

where n is the index of refraction of the medium surrounding the object, λ is the illumination wavelength, and NA is the numerical aperture of the focusing lens or the microscope objective.

Common ways to circumvent the problem of the low depth of focus in conventional microscopy include sectioning of the sample or a mechanical motion along the optical axis to scan the complete experimental volume. However, physical sectioning of a biological sample is invasive, and the mechanical operation of scanning increases the time for acquisition of three-dimensional images. These are constraints for three-dimensional imaging of a live sample.
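To make Eq. (1.1) concrete, the following worked example plugs in assumed values for a high-NA water-immersion objective viewing a cell in aqueous medium; these numbers are illustrative, not taken from the dissertation's experiments.

```python
# Worked example of Eq. (1.1): diffraction-limited depth of focus.
# All three values are illustrative assumptions.
n = 1.33             # refractive index of the surrounding medium (water)
wavelength = 0.5e-6  # illumination wavelength, 0.5 um
NA = 1.2             # numerical aperture of the objective

dz_focus = n * wavelength / NA**2
print(f"Depth of focus: {dz_focus * 1e6:.2f} um")
```

The result is roughly half a micrometre, far thinner than a typical cell, which illustrates why a single conventional image cannot capture the whole sample volume in focus.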
This problem can be resolved using digital holographic microscopy. The information on a complete volume can be recorded in just a single hologram and numerically reconstructed, allowing visual 3D observation.

1.3.2 Phase Contrast Microscopy

As well as having a limited depth of focus, conventional bright-field microscopes have difficulty observing transparent samples such as living biological cells, which exhibit little intensity contrast. The conventionally applied phase contrast techniques, the phase contrast of Zernike and the differential interference contrast (DIC) microscopy of Nomarski, were developed for visualization of unstained transparent specimens but do not offer direct quantitative evaluation of the phase information. The unavailability of quantitative phase information in these techniques presents a difficulty in observing and interpreting morphological changes and properties of a sample. Quantitative phase imaging is particularly important in microscopy because it enables determination of either the refractive index or the optical thickness variations from the measured optical path length with an accuracy of a few nanometers. Unlike the above-mentioned conventional phase-contrast microscopic techniques, interferometric techniques allow for quantitative measurement of the phase information produced by the object. In addition, digital holography, as an interferometric technique, offers quantitative phase information and high-fidelity, high-resolution images with a precision of optical thickness on the order of tens of nanometers [28]. While other interferometric configurations such as interference microscopy are commonly used in metrology, very few applications have been reported in biological microscopy. These
techniques often require the phase to be measured through a process of multiple image acquisition and phase modulation. Digital holography, on the other hand, requires only a single image (hologram) and no phase-modifying devices in order to obtain phase information. This is a significant advantage for real-time sample analysis. A common presumption is that coherent imaging suffers from the image-degrading effect of coherent noise; however, through careful control of the quality of the laser beam and the other optics, remarkably clean images can be obtained. This is especially true of phase imaging in digital holography because of its relative immunity to the amplitude or phase noise of the laser profile.

1.3.3 Numerical Focusing

Another unique capability of digital holography is numerical focusing, which emulates the focusing control of conventional microscopes. As the focus can be adjusted in the reconstruction process, digital holography is free from the process of mechanical focusing and can be used to monitor the dynamic change of objects. A number of algorithms have been developed for use in the reconstruction of a hologram, such as the Fresnel transform, convolution and angular spectrum methods [28-31]. In particular, the use of the angular spectrum reconstruction algorithm provides a significant advantage in focusing and reconstruction [32]. It has no minimum distance requirement from the object plane to the hologram plane, and allows for flexible and effective filtering and control of the dc term and spurious noise components from sources such as stray reflections within the experiment.
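A minimal numerical sketch of the angular spectrum method follows. This is not the implementation used in this work; the grid, pixel pitch and wavelength in the test values are placeholders. The field is Fourier transformed, multiplied by the free-space transfer function exp(i k_z z), and transformed back; z may be positive, negative, or arbitrarily small, which is the "no minimum distance" property noted above.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field by a distance z (sign and magnitude
    arbitrary) using the angular spectrum of plane waves."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)              # spatial frequencies, 1/m
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = (1.0 / wavelength) ** 2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)        # transfer function; evanescent
    return np.fft.ifft2(np.fft.fft2(field) * H)  # components are discarded
```

Because the transfer function has unit modulus for propagating components, refocusing is lossless and invertible: propagating by +z and then -z returns the original field. Filtering the dc term and spurious components amounts to zeroing regions of the spectrum before the inverse transform.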
1.3.4 Phase Unwrapping

From a computational point of view, phase images are directly determined from the computed field. However, the phase images must be unwrapped in order to simplify their interpretation. Of particular significance is the 2\pi discontinuity problem in phase imaging. A conventional approach is to apply one of many phase-unwrapping algorithms [33, 34], but often these require substantial user intervention and impose strict requirements on the level of phase noise and phase discontinuity. It has long been recognized that the range of unambiguous phase measurement can be extended beyond a single wavelength by synthesizing a beat wavelength between two wavelengths [35, 36]. The 2\pi discontinuities inherent in the wrapped phase image can be effectively removed, without the addition of any noise, by using a multi-wavelength digital holographic approach in a microscope configuration [37]. This technique, unlike the mathematically challenging and often time-consuming processes of conventional phase-unwrapping algorithms, makes real-time processing much simpler to perform, because any calibration for removing 2\pi discontinuities can be made prior to any image capture. The drawback of this method lies in the increased technical effort required for optical alignment of more than one laser (unless the same laser produces the required wavelengths) along the same optical path.

1.4 Research Contributions

As a recently developed technique, digital holography has yet to be firmly established in the microscopy world, and many of its technical advantages are yet to be fully explored and examined. The motivation behind this work has been to improve and
develop techniques of digital holography in order to obtain images which effectively provide more information and clarity than images produced by other comparable microscopy techniques. In this thesis, techniques of digital holography have been vastly improved to obtain high-resolution, high-fidelity, quantitative phase-contrast images of microscopic samples. The level of resolution and detail in these images clearly exceeds currently available techniques in phase-contrast optical microscopy and provides a new modality for imaging the morphology of cellular, intracellular and intranuclear structures that is not currently available with noninvasive optical methods. Furthermore, for the first time to our knowledge, quantitative dynamic effects have been measured on living biological samples using digital holography [38]. This research will help with precise measurements of such properties as the physical thickness of a cell, to a vertical resolution of just a few nanometers. This may be of significance to biologists, who can then accurately investigate the physical morphology of specimens like cancer cells, which are known to have different thickness and other morphological properties as compared to healthy cells [39]. We also demonstrate the use of the angular spectrum reconstruction method for digital holography, which is seen to solve some of the significant problems that have so far prevented the wider use of digital holography in biomedical imaging applications. Spurious noise components can be easily controlled, and there is no minimum reconstruction distance, as there is with other methods of numerical reconstruction. Also, the dc term of diffraction and the conjugate image can be eliminated by an appropriate selection of spatial frequencies in the angular spectrum of an off-axis hologram.
Finally, we show that multi-wavelength phase imaging digital holography is an effective and efficient technique for the removal of 2\pi discontinuities in the phase image by extension of the axial range, and that it also overcomes many of the limitations imposed by conventional, mathematically based phase-unwrapping techniques.

1.5 Thesis organization

This thesis is organized into seven chapters and four appendices. The main body of content in this dissertation is contained in Chapters 2, 3, 4, 5, and 6. Chapter 2 presents some of the basic optical concepts and principles of holography, beginning with an analysis of the wave theory of light and later describing the properties and roles of effects such as interference, coherence, diffraction, and speckle in holographic recording and reconstruction. Chapter 3 presents the technique of recording digital holograms and discusses the reconstruction of the recorded optical wave field by the Fresnel transform, the convolution approach and the angular spectrum. In Chapter 4, digital Gabor holography is described and experimental results are presented which show its capabilities for microscopy. Chapter 5 describes methods of determining quantitative phase information and demonstrates the high-quality results achievable by digital off-axis holography. Chapter 6 describes phase imaging digital holography using two or more wavelengths, which achieves unambiguous phase unwrapping by an optical method. Future work and conclusions are presented in Chapter 7.
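The axial-range extension provided by the multi-wavelength approach can be quantified through the synthetic (beat) wavelength \Lambda = \lambda_1\lambda_2/|\lambda_1 - \lambda_2|. The wavelength pair below is hypothetical, chosen only to illustrate the formula, not the lasers used in this work.

```python
def beat_wavelength(lam1, lam2):
    """Synthetic (beat) wavelength for two-wavelength phase imaging:
    Lambda = lam1 * lam2 / |lam1 - lam2|."""
    return lam1 * lam2 / abs(lam1 - lam2)

# Hypothetical pair: 633 nm and 612 nm give a synthetic wavelength of
# about 18.4 um, extending the unambiguous axial range from a fraction
# of a visible wavelength to tens of micrometers.
Lam = beat_wavelength(633e-9, 612e-9)
```

The closer the two wavelengths, the longer the synthetic wavelength, at the cost of amplifying any phase noise by the same factor.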
Chapter 2

Principles of Holography

This chapter introduces a number of fundamental optical principles which together form the conceptual and mathematical foundations of holography. Section 2.1 discusses the wave theory of light, in which both the intensity and phase information form the three-dimensional nature of the light wave. Section 2.2 describes the phenomenon of interference, which stems directly from the wave theory of light. Section 2.3 examines different aspects of coherence. Section 2.4 includes a detailed analysis of the development of models of diffraction, which provide the foundations of wavefield propagation in holography. Finally, Sections 2.5 and 2.6 introduce speckle and the theoretical background and underlying principles of holography.

2.1 Wave Theory of Light

A light wave can produce both diffraction and interference effects, which form the basis of holographic recording and reconstruction. Light is an electromagnetic wave, and the theory used to explain its motion is described by the wave model. The propagation of a light wave in a vacuum can be described by

\[
\nabla^2 E - \frac{1}{c^2}\frac{\partial^2 E}{\partial t^2} = 0 \tag{2.1}
\]

where E is the electric field and \nabla^2 is the Laplace operator, defined as
\[
\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2} \tag{2.2}
\]

The electric field E is a vector quantity and therefore has a direction associated with it. It can oscillate in any direction perpendicular to the light wave's propagation direction. However, it frequently oscillates only within a single plane. In such cases we say that the light is linearly polarized, and the scalar wave equation can be considered, in which the light propagates in one direction. If the light is propagating in the z direction, then

\[
\frac{\partial^2 E}{\partial z^2} - \frac{1}{c^2}\frac{\partial^2 E}{\partial t^2} = 0 \tag{2.3}
\]

The electromagnetic wave can be considered in terms of two components, namely the real part and the imaginary part of a complex quantity. A sinusoidal time-varying signal \psi(z,t), at a time t and at a point z of a wave propagating in the z direction, can be represented by

\[
\psi(z,t) = A e^{i(kz - \omega t)} \tag{2.4}
\]

where A is the amplitude, \omega is the angular frequency and k is the wave number. The term (kz - \omega t) is the absolute phase and is dimensionless. The real part of Equation (2.4) is taken to be that of the measurable signal E:

\[
E = \mathrm{Re}\left[A e^{i(kz - \omega t)}\right] = A\cos(kz - \omega t) \tag{2.5}
\]

Equation (2.4) can be rewritten as

\[
\psi(z,t) = A e^{i\varphi(z)} e^{-i\omega t} = u(z)\, e^{-i\omega t} \tag{2.6}
\]

where u(z) = A e^{i\varphi(z)} is the complex amplitude of the signal and \varphi(z) = kz is the phase.
The only directly measurable quantity is the intensity, which is proportional to the time average of the square of the electric field:

\[
I = c\varepsilon_0 \left\langle E^2 \right\rangle_t = c\varepsilon_0 \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} E^2\, dt \tag{2.7}
\]

The intensity is calculated by taking the square of the modulus of the complex amplitude. The intensity at a point z is then defined as

\[
I(z) = u(z)\, u^*(z) = \left|u(z)\right|^2 = A^2 \tag{2.8}
\]

2.2 Interference

The concept of interference was first introduced by Thomas Young in 1804. He noted that, under appropriate conditions, light could be combined with light to create darkness. Interference is, in essence, the superposition of two or more waves in space. The result of adding two waves of the same frequency depends on the value of the phase of the wave at the point at which the waves are added. For two sources of electromagnetic waves to interfere they must be coherent, have the same frequency and polarization, and the superposition principle must apply. If we consider each individual wave of the form E_j(r,t), which is a solution to the wave equation, the superposition of the waves is found by

\[
E(r,t) = \sum_j E_j(r,t), \qquad j = 1, 2 \tag{2.9}
\]

For two monochromatic waves with the same polarization direction and frequency, the complex amplitudes of the waves are
\[
A_1(x,y,z) = a_1 e^{i\varphi_1}, \qquad A_2(x,y,z) = a_2 e^{i\varphi_2} \tag{2.10}
\]

The scalar approach can be considered in this case because the waves have the same polarization. The addition of the single waves in Equation (2.10) creates a new resultant wave with complex amplitude A,

\[
A = A_1 + A_2 \tag{2.11}
\]

From Equation (2.8), then

\[
I = \left|A_1 + A_2\right|^2 = a_1^2 + a_2^2 + 2a_1 a_2 \cos(\varphi_1 - \varphi_2) = I_1 + I_2 + 2\sqrt{I_1 I_2}\cos\Delta\varphi \tag{2.12}
\]

where I_1 and I_2 are the individual intensities and the phase difference between the sources of the two waves is

\[
\Delta\varphi = \varphi_1 - \varphi_2 \tag{2.13}
\]

If the phase difference \Delta\varphi is constant, the two sources are mutually coherent. If the sources are mutually incoherent, then \Delta\varphi varies randomly with time. In this latter case the average value of the \cos\Delta\varphi term is zero and therefore no interference exists; in effect, this explains why no interference occurs between two incoherent sources. In Equation (2.12) the total intensity is calculated as the sum of the individual intensities I_1 and I_2 plus the last term, 2\sqrt{I_1 I_2}\cos\Delta\varphi.
This last term depends on the phase difference between the waves, as seen from Equation (2.13). The intensity reaches its maximum at points for which

\[
\Delta\varphi = 2n\pi, \qquad n = 0, 1, 2, \ldots \tag{2.14}
\]

which represents the condition for constructive interference. The intensity is a minimum where

\[
\Delta\varphi = (2n+1)\pi, \qquad n = 0, 1, 2, \ldots \tag{2.15}
\]

This is the condition for destructive interference; n represents the interference order and is an integer. On examination of an interference pattern, one sees bright and dark fringes as a consequence of the constructive and destructive interference of the light waves. Consider the superposition of two plane waves E_1 (dashed lines) and E_2 (solid lines), which creates an interference pattern with equidistant spacing d, as illustrated in Figure 2.1.

Figure 2.1: The interference between two plane waves. (Adapted from Digital Holography by U. Schnars, W. Jueptner.)
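Equation (2.12) can be checked numerically by comparing a direct superposition against the interference formula; the amplitudes and phases below are arbitrary test values, not quantities from the text.

```python
import numpy as np

# Direct superposition |A1 + A2|^2 versus the expansion of Eq. (2.12).
a1, a2 = 1.5, 0.7          # arbitrary real amplitudes
phi1, phi2 = 0.3, 1.9      # arbitrary phases (rad)
A1 = a1 * np.exp(1j * phi1)
A2 = a2 * np.exp(1j * phi2)
I_direct = np.abs(A1 + A2) ** 2
I1, I2 = a1**2, a2**2
I_formula = I1 + I2 + 2 * np.sqrt(I1 * I2) * np.cos(phi1 - phi2)
```

Setting phi1 - phi2 to an even or odd multiple of pi reproduces the constructive and destructive conditions of Equations (2.14) and (2.15).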
The distance between the interference maxima formed at points P_1 and P_2 is defined by the fringe spacing d. By geometry the following relations hold:

\[
\sin\theta_1 = \frac{l_1}{d}, \qquad \sin\theta_2 = \frac{l_2}{d} \tag{2.16}
\]

where \theta_1 and \theta_2 represent the angles between the vertical plane and the propagation directions of the individual wavefronts. l_2 is the path difference of the wavefront E_2 with respect to that of the wavefront E_1 at the position of the interference maximum P_1; in addition, l_1 is the path difference of the wavefront E_1 with respect to E_2. One notices in the figure that E_2 travels a longer distance to P_1 than E_1 and, conversely, at P_2 the opposite is true, with E_1 traveling a further distance than E_2. It follows that the path difference between the maxima at points P_1 and P_2 is

\[
l_1 + l_2 \tag{2.17}
\]

This path difference is equal to one wavelength, so that

\[
l_1 + l_2 = \lambda \tag{2.18}
\]

By combination of these equations and substitution,

\[
\frac{\lambda}{d} = \sin\theta_1 + \sin\theta_2 = 2\sin\frac{\theta_1 + \theta_2}{2}\cos\frac{\theta_1 - \theta_2}{2} \tag{2.19}
\]

By substitution of the approximation \cos[(\theta_1 - \theta_2)/2] \approx 1 and \theta = \theta_1 + \theta_2, we obtain the fringe spacing

\[
d = \frac{\lambda}{2\sin(\theta/2)} \tag{2.20}
\]
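Equation (2.20) also sets the sampling requirement for digital recording: the fringes must be resolved by the camera pixels. The sketch below uses an assumed 633 nm wavelength and an assumed 6.7 um pixel pitch purely for illustration.

```python
import numpy as np

def fringe_spacing(wavelength, theta):
    """Fringe spacing of two interfering plane waves, Eq. (2.20):
    d = lambda / (2 * sin(theta / 2)), theta the full angle between beams."""
    return wavelength / (2 * np.sin(theta / 2))

d = fringe_spacing(633e-9, np.deg2rad(1.0))   # ~36 um at a 1 degree angle
pixel = 6.7e-6                                # assumed CCD pixel pitch
fringes_resolved = d > 2 * pixel              # need >= 2 pixels per fringe
```

Because d shrinks as the angle between the beams grows, the pixel pitch caps the usable off-axis angle in digital holography.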
2.3 Coherence

Coherence is the measure of the ability of light to interfere. The coherence of two waves follows from how well correlated the waves are, as quantified by the cross-correlation function, and is derived from the phase relationship between two points, separated in either time or space, along a wavetrain. Consider two points along the train that are spatially separated and moving with the train. If the phase relationship between the waves at these points remains constant in time, then the waves between these points are coherent. On the other hand, if the phase relationship is random or rapidly changing, the waves at these two points are incoherent.

2.3.1 Temporal Coherence

Temporal coherence describes the correlation of a wave with itself at different instances in time and is related to the finite bandwidth of the source. If light were strictly monochromatic, the conditions \Delta\nu = 0 and \Delta\lambda = 0 would apply; this is not practically attainable. However, over a short enough period of time a wave can behave as if it were monochromatic. The temporal coherence length L_c is the greatest distance between two points for which there is a phase difference that remains constant in time. For points separated by a distance greater than the coherence length, there is no phase correlation. The coherence time \tau_c is defined as the temporal interval required for the train to traverse the coherence length when traveling in vacuum, L_c = c\tau_c. In optics, temporal coherence is measured by combining beams from the same source but having a known path length difference, and observing the interference pattern produced.
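The relation L_c = c tau_c is straightforward to evaluate; the nanosecond coherence time below is a hypothetical value, not a measured property of any source in this work.

```python
def coherence_length(tau_c):
    """Coherence length from coherence time: L_c = c * tau_c."""
    c = 299_792_458.0    # speed of light in vacuum, m/s
    return c * tau_c

# Hypothetical source with a 1 ns coherence time: L_c is about 0.3 m,
# so interference persists only for path differences below ~30 cm.
L_c = coherence_length(1e-9)
```

This is why the path lengths of the object and reference arms of a holographic setup must be matched to within the coherence length of the source.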
2.3.2 Spatial Coherence

As with temporal coherence, where a source is considered never to be strictly monochromatic, spatial coherence derives from the fact that a source is never truly a point source. Spatial coherence depicts the mutual correlation of different parts of the same wavefront. Thus, for two laterally displaced points residing on the same wavefront at a given time, the fields are spatially coherent.

2.4 Diffraction Theory

A mathematical description of the propagation of light waves must recognize diffraction effects. Diffraction is broadly defined as the spreading out of light from its geometrically defined path. Figure 2.2 illustrates the problem posed by diffraction. Light emitted by a source at a location Q falls on an opaque plate containing an aperture. The problem is then to describe the light field that will be received at a point R to the right of the aperture. One of the earliest solutions was put forward by Huygens, whose principle rests on the idea that each point on a wavefront acts as a source of secondary wavelets. In terms of Figure 2.2, this means that the illuminated aperture can be replaced by an array of point sources. Huygens' theory was mathematically described by Fresnel, who integrated the appropriate contributions, and Kirchhoff, who put in place all the correct multiplying terms. The problem was later solved by Fresnel and Fraunhofer in the form established by Huygens, and in more detail by Kottler and Sommerfeld. Kottler considered the vectorial nature of light, and Sommerfeld presented a rigorous scalar derivation.
Figure 2.2: Diffraction from an aperture.
2.4.1 Huygens-Fresnel Principle

Diffraction is a phenomenon by which the wavefronts of propagating waves bend in the neighborhood of obstacles. It can be described qualitatively by Huygens' principle, as depicted in Figure 2.3: every point on a primary wavefront serves as the source of spherical secondary wavelets, such that the primary wavefront at some later time is the envelope of these wavelets. Moreover, the wavelets advance with a speed and frequency equal to those of the primary wave at each point in space. By simple assumptions about the amplitude and phase of the secondary waves that form the envelopes, Huygens' principle was able to accurately determine the light distribution of diffraction patterns.

Figure 2.3: Huygens's principle. The construction of envelopes propagating from a point source.
Huygens' principle accounted satisfactorily for the reflection and refraction of light waves. However, his theory did not explain why there is no backward propagation of the waves. Furthermore, even though it predicts diffraction, it does not explain quantitatively the results obtained from diffraction experiments. Fresnel made changes to Huygens's principle to account for the deficiency concerning the backward-propagating wave. He introduced a new term, known as the obliquity or inclination factor K(\chi), in order to solve this problem. Figure 2.4 illustrates a geometrical representation of a spherical wavefront arising from a point source P_0. In the figure, S represents the instantaneous position of the wavefront, with a radius r_0 and a frequency \nu. P is a point at which the light disturbance is to be determined. Omitting the periodic factor e^{-i\omega t}, the disturbance at a point T on the wavefront may be represented by A e^{ikr_0}/r_0, where A is the amplitude at a unit distance from the source.

Figure 2.4: Huygens-Fresnel principle. (Adapted from M. Born, E. Wolf, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light.)
From the Huygens-Fresnel principle, each point on the wavefront is the source of a secondary disturbance which propagates as a spherical wave. For an element dS at T, the contribution to du(P) is found by

\[
du(P) = K(\chi)\,\frac{A e^{ikr_0}}{r_0}\,\frac{e^{ikr}}{r}\, dS \tag{2.21}
\]

where r = TP, and the obliquity factor K(\chi) is introduced to describe the variation with direction of the amplitudes of the secondary waves; \chi is the angle between the normal to S at T and the direction TP. Fresnel made the assumption that K(\chi) is a maximum in the original direction of propagation, i.e. for \chi = 0, and that it rapidly decreases with increasing \chi, eventually being zero when TP is tangential to the wavefront, i.e. when \chi = \pi/2. The total disturbance at P is then found by

\[
u(P) = \frac{A e^{ikr_0}}{r_0} \iint_S K(\chi)\,\frac{e^{ikr}}{r}\, dS \tag{2.22}
\]

The Huygens-Fresnel principle is a superposition integral, or alternatively it can be regarded as a convolution integral.

2.4.2 Integral Theorem of Kirchhoff and Helmholtz

The Huygens-Fresnel principle contained certain inconsistencies, which were remedied by Kirchhoff. He established that the results and theory could be directly derived from the wave equation. The scalar wave equation in three dimensions is

\[
\nabla^2 \psi(x,t) = \frac{1}{c^2}\frac{\partial^2 \psi(x,t)}{\partial t^2} \tag{2.23}
\]
where \psi(x,t) is the wavefield at the point x = (x, y, z)^T at time t and c is the speed of light in vacuum. For a monochromatic scalar wave,

\[
\psi(x,t) = u(x)\, e^{-i\omega t} \tag{2.24}
\]

Using separation of variables, u(x) satisfies the time-independent wave equation

\[
\left(\nabla^2 + k^2\right)u = 0 \tag{2.25}
\]

Equation (2.25) is known as the Helmholtz equation. Referring to Figure 2.5, if V is the volume bounded by a closed surface S, and P is a point within it, we can assume that u(x) possesses continuous first- and second-order partial derivatives within and on S. If f and g are scalar functions which are continuous and integrable in V and bounded by S, then we have the general form of Green's theorem:

\[
\iint_S \left(g\frac{\partial f}{\partial n} - f\frac{\partial g}{\partial n}\right) dS = \iiint_V \left(g\nabla^2 f - f\nabla^2 g\right) dV \tag{2.26}
\]

Figure 2.5: Integration surface in Green's theorem.
If v(x) is another function which satisfies the same continuity requirements as u(x), we have, by substitution into Green's theorem in Equation (2.26),

\[
\iint_S \left[u(x')\frac{\partial v(x')}{\partial n} - v(x')\frac{\partial u(x')}{\partial n}\right] d^2x' = \iiint_V \left[u(x')\nabla'^2 v(x') - v(x')\nabla'^2 u(x')\right] d^3x' \tag{2.27}
\]

where \partial/\partial n is the directional derivative along the inward normal to S. One now chooses for v a Green function of the Helmholtz equation (2.25), so that v(x') = G(x,x'). This function describes the disturbance at a point x' on the surface S due to a spherical point source at a position x. Within the volume the disturbance G(x,x') satisfies

\[
\left(\nabla'^2 + k^2\right)G(x,x') = -\delta^3(x - x') \tag{2.28}
\]

Equation (2.28) possesses spherical symmetry, and G(x,x') = G(r), where r = |x - x'|. By the spherical-coordinate relation

\[
\nabla^2 G = \frac{1}{r}\frac{\partial^2}{\partial r^2}\left(rG\right) \tag{2.29}
\]

and subsequent substitution into Equation (2.28),

\[
\frac{1}{r}\frac{\partial^2}{\partial r^2}\left(rG\right) + k^2 G = -\delta^3(x - x') \tag{2.30}
\]

For r > 0, then

\[
\frac{\partial^2}{\partial r^2}\left(rG\right) + k^2\left(rG\right) = 0 \tag{2.31}
\]
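Equation (2.31) can be checked numerically: with f(r) = rG(r) proportional to e^{ikr}, a finite-difference second derivative of f should satisfy f'' + k^2 f ≈ 0 away from the origin. The wavelength and radial window below are arbitrary choices for the check.

```python
import numpy as np

# Finite-difference check that f(r) = exp(i k r)/(4 pi), i.e. r*G(r) with
# G = exp(ikr)/(4 pi r), satisfies f'' + k^2 f = 0 (Eq. (2.31)) for r > 0.
k = 2 * np.pi / 633e-9                 # arbitrary optical wave number
r = np.linspace(1e-6, 2e-6, 20001)     # radial window away from the origin
h = r[1] - r[0]
f = np.exp(1j * k * r) / (4 * np.pi)
f2 = (f[2:] - 2 * f[1:-1] + f[:-2]) / h**2     # central second difference
residual = np.abs(f2 + k**2 * f[1:-1])
```

The residual is orders of magnitude smaller than either term, confirming that the outgoing spherical wave solves the radial Helmholtz equation away from the source point.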
This implies that rG = A' e^{ikr} + B' e^{-ikr}, where A' and B' are arbitrary constants. Writing A' = A/4\pi and B' = B/4\pi,

\[
G = \frac{A}{4\pi}\frac{e^{ikr}}{r} + \frac{B}{4\pi}\frac{e^{-ikr}}{r} \tag{2.32}
\]

To avoid back waves, B = 0; then

\[
G = \frac{A e^{ikr}}{4\pi r} \qquad \text{for } r > 0 \tag{2.33}
\]

Now, extending G to all values of r,

\[
G = \lim_{\varepsilon \to 0}\, \theta(r - \varepsilon)\,\frac{A e^{ikr}}{4\pi r} \tag{2.34}
\]

where \theta(x) = 1 if x > 0 and \theta(x) = 0 for all other x; \theta(x) is the Heaviside function. Applying the radial Laplacian of Equation (2.29) and carrying out the differentiation distributionally,

\[
\nabla^2 G = \lim_{\varepsilon \to 0}\, \frac{1}{r}\frac{\partial^2}{\partial r^2}\left[\theta(r - \varepsilon)\,\frac{A e^{ikr}}{4\pi}\right] = -k^2 G - A\,\delta^3(x - x') \tag{2.35}
\]

the derivatives of the Heaviside function collecting into the \delta-function contribution at the origin, and so

\[
\left(\nabla^2 + k^2\right)G(x,x') = -A\,\delta^3(x - x') \tag{2.36}
\]
Choosing A = 1 to agree with Green's theorem, then

\[
G(x,x') = \frac{e^{ik|x - x'|}}{4\pi|x - x'|} \tag{2.37}
\]

Substituting the Helmholtz equations for u(x) and G(x,x') ((2.25) and (2.36)) into the volume integral in Green's theorem (2.27), we obtain

\[
u(x) = \frac{1}{4\pi}\iint_S \left[\frac{e^{ik|x - x'|}}{|x - x'|}\frac{\partial u(x')}{\partial n} - u(x')\frac{\partial}{\partial n}\left(\frac{e^{ik|x - x'|}}{|x - x'|}\right)\right] d^2x' \tag{2.38}
\]

This is the integral theorem of Helmholtz and Kirchhoff.

2.4.3 Fresnel-Kirchhoff Diffraction

Consider a monochromatic wave propagating from a point source Q to an opening in a plane opaque screen, as depicted in Figure 2.6. The problem is to find the light disturbance at a point R. The disturbance can be found by taking Kirchhoff's integral over a surface S formed by the opening A, a portion of the non-illuminated side of the screen B, and a portion C of a large sphere of radius R_0 centered at R, which together with A and B forms a closed surface. Using Equation (2.38), we obtain the total disturbance at point R, u(R), as

\[
u(R) = \frac{1}{4\pi}\iint_{A+B+C} \left[\frac{e^{ik|x - x'|}}{|x - x'|}\frac{\partial u(x')}{\partial n} - u(x')\frac{\partial}{\partial n}\left(\frac{e^{ik|x - x'|}}{|x - x'|}\right)\right] d^2x' \tag{2.39}
\]

where r = |x - x'| is the distance of the element dS from R and \partial/\partial n denotes differentiation along the inward normal to the surface of integration.
Figure 2.6: Fresnel-Kirchhoff diffraction. (Adapted from M. Born, E. Wolf, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light.)

The values of u and \partial u/\partial n on A, B and C are not known accurately, which presents a difficulty. On the other hand, by the assumption that at A, u and \partial u/\partial n do not differ significantly from the values obtained in the absence of the screen, and that at B these quantities are approximately zero, the boundary conditions are
\[
\text{on } A:\quad u = u_i,\quad \frac{\partial u}{\partial n} = \frac{\partial u_i}{\partial n}; \qquad \text{on } B:\quad u = 0,\quad \frac{\partial u}{\partial n} = 0 \tag{2.40}
\]

The conditions in Equation (2.40) are known as Kirchhoff's boundary conditions, and they allow a substantial simplification of the results. If the aperture is large compared to the wavelength, then the boundary conditions can yield very accurate results. For the incident field,

\[
u_i = \frac{A e^{ikr_0}}{r_0}, \qquad \frac{\partial u_i}{\partial n} = \frac{A e^{ikr_0}}{r_0}\left[ik - \frac{1}{r_0}\right]\cos(n, r_0)
\]

where A is a constant and \cos(n, r_0) is the cosine of the angle between n and r_0 in Figure 2.6. Finally, one considers the portion C. The field there is produced by a source that starts to emit at a time t_i. At a later time t > t_i, the wave propagates outward from the source; however, the field's outer boundary is at a distance not greater than c(t - t_i) from Q. Therefore, if one chooses the radius R_0 to be large enough that, at the time when the disturbance at R is considered, no contributions from C have reached R, the integral over C will vanish. Thus, on substituting the above conditions into Equation (2.39), we obtain

\[
u(R) = -\frac{iA}{2\lambda}\iint_A \frac{e^{ik(r + r_0)}}{r\, r_0}\left[\cos(n, r) - \cos(n, r_0)\right] d^2x' \tag{2.41}
\]

Equation (2.41) is the Fresnel-Kirchhoff diffraction formula.
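The structure of Equation (2.41) can be illustrated by summing spherical wavelets e^{ikr}/r directly. The sketch below replaces an aperture with two point sources (a double slit) and drops the obliquity factor, which is nearly constant at the small angles involved; all geometric parameters are invented for the example.

```python
import numpy as np

# Two Huygens wavelets standing in for an aperture: the screen intensity
# should show cos^2 fringes of period lambda * L / s.
lam = 633e-9
k = 2 * np.pi / lam
s, L = 100e-6, 0.5                       # slit separation and screen distance
x_src = np.array([-s / 2, s / 2])        # the two wavelet sources
x_scr = np.linspace(-0.02, 0.02, 2001)   # screen coordinates
r = np.sqrt(L**2 + (x_scr[:, None] - x_src[None, :])**2)
u = np.sum(np.exp(1j * k * r) / r, axis=1)   # superposed spherical wavelets
I = np.abs(u) ** 2
period = lam * L / s                     # expected fringe period, ~3.2 mm
```

The direct sum reproduces the familiar double-slit fringes, which is the discrete analogue of carrying out the diffraction integral over the aperture.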
2.4.4 Rayleigh-Sommerfeld Diffraction

The Fresnel-Kirchhoff diffraction theory has limitations; its attached boundary conditions make it mathematically inconsistent. The difference between the Rayleigh-Sommerfeld theorem and the Fresnel-Kirchhoff theorem lies in the inclination factor. For Kirchhoff, the obliquity factor can be written as

\[
K(\chi) = \frac{\cos(n, r) - \cos(n, r_0)}{2} \tag{2.42}
\]

In the Rayleigh-Sommerfeld approach the obliquity factor is

\[
K(\chi) = \cos(n, r) \tag{2.43}
\]

A drawback of Kirchhoff's theory lies in the attached boundary conditions which are required for the field and its derivative. If a two-dimensional potential function and its normal derivative vanish together along any finite curve segment, then that potential function must vanish over the entire plane. Similarly, if a solution of the three-dimensional wave equation vanishes on any finite surface element, it must vanish in all space. The two Kirchhoff boundary conditions together imply that the field is identically zero everywhere behind the aperture, a result which contradicts the known physical situation. From the integral theorem of Helmholtz and Kirchhoff, Equation (2.38),

\[
u(x) = \frac{1}{4\pi}\iint_S \left[G(x,x')\frac{\partial u(x')}{\partial n} - u(x')\frac{\partial G(x,x')}{\partial n}\right] d^2x' \tag{2.44}
\]

By making adjustments to the Green's function in Equation (2.44), either G or its derivative can be made to disappear at the aperture A and the non-illuminated portion B. In both cases, the necessity of imposing simultaneous boundary conditions on u and \partial u/\partial n is
removed. Using the former case, in which G vanishes over the aperture, the integral theorem is

\[
u(x) = -\frac{i}{\lambda}\iint_A u(x')\,\frac{e^{ikr}}{r}\cos(n, r)\, d^2x' \tag{2.45}
\]

and the Rayleigh-Sommerfeld diffraction formula is

\[
u(x) = -\frac{iA}{\lambda}\iint_A \frac{e^{ik(r + r_0)}}{r\, r_0}\cos(n, r)\, d^2x' \tag{2.46}
\]

This derivation is more self-consistent than Kirchhoff's derivation.

2.5 Speckles

Holography uses coherent light sources to form the interference pattern which creates the hologram. However, a significant issue with using a coherent source is the degradation in image quality due to a phenomenon called speckle. A rough, uneven surface illuminated with a coherent light source produces a coarse, indeterminate pattern, as displayed in Figure 2.7. The coherent light which illuminates the surface is scattered by the surface and produces randomly oriented bright and dark patches.

Figure 2.7: A typical speckle pattern formed by coherent illumination of a rough surface. (Obtained from http://www.polytec.com/)
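Before quantifying these statistics, the qualitative picture can be checked by simulation: summing many randomly phased contributions at each observation point produces an intensity pattern whose standard deviation equals its mean, i.e. unit contrast. The scatterer and sample counts below are arbitrary simulation parameters.

```python
import numpy as np

# Simulated fully developed speckle: each observation point sums many
# scattered contributions with uniformly random phases.
rng = np.random.default_rng(1)
n_points, n_scatterers = 20_000, 100
phases = rng.uniform(0, 2 * np.pi, (n_points, n_scatterers))
field = np.exp(1j * phases).sum(axis=1) / np.sqrt(n_scatterers)
I = np.abs(field) ** 2          # intensities; mean normalized to ~1
contrast = I.std() / I.mean()   # ~1 for fully developed speckle
```

The sample contrast comes out close to unity, matching the negative-exponential intensity statistics of fully developed speckle.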
The occurrence of speckle is a result of the interference between different beams scattered from the surface. This kind of speckle pattern is created if the height variations of the rough surface are larger than the wavelength of the light. The coherent noise of speckle disturbs the image quality to a particularly high degree and makes the identification of features in highly scattering structures difficult. This has been a major barrier to widespread coherent imaging and, in particular, microscopy. The probability density function for the intensity in a speckle pattern is given by

\[
P(I)\, dI = \frac{1}{\langle I \rangle}\, e^{-I/\langle I \rangle}\, dI \tag{2.47}
\]

where P(I) dI is the probability that the intensity at a certain point lies between I and I + dI, and \langle I \rangle is the mean intensity of the entire speckle field. The most probable intensity value of a speckle is therefore zero, which is why most speckles, in fact, appear black. The standard deviation is found to be

\[
\sigma_I = \langle I \rangle \tag{2.48}
\]

That means the intensity variations are of the same order as the mean value. The contrast is given by

\[
V = \frac{\sigma_I}{\langle I \rangle} \tag{2.49}
\]

The contrast of the speckle pattern is unity.

2.6 Holography Recording and Reconstruction

In conventional imaging techniques, such as photography, what is recorded is simply the intensity distribution of the original object. As a result, all information about
the relative optical paths to the different parts of the object is lost. The unique characteristic of holography is the recording of both the intensity and the phase information of the light wave reflected or transmitted by an object. As recording media respond only to intensity, it is necessary to convert the phase information into variations in intensity. This is performed by using coherent illumination. The general principle of conventional holography is shown in Figure 2.8: one wave illuminates the object and is scattered onto the recording medium. This is called the object beam. The second wave, the reference beam, illuminates the recording medium directly. Both waves interfere, and the pattern is recorded onto the photographic plate. This recorded interference pattern is termed a hologram. Let the complex amplitudes of the object and reference waves at the photographic plate be denoted by U_o and U_r respectively. The complex amplitude of the object wave is described by

\[
U_o(x,y) = a_o(x,y)\exp\!\left(i\varphi_o(x,y)\right) \tag{2.50}
\]

with real amplitude a_o and phase \varphi_o. The reference wave is described by

\[
U_r(x,y) = a_r(x,y)\exp\!\left(i\varphi_r(x,y)\right) \tag{2.51}
\]

with real amplitude a_r and phase \varphi_r. Both waves interfere at the recording medium, and the intensity may be written as

\[
I(x,y) = \left|U_o + U_r\right|^2 = \left(U_o + U_r\right)\left(U_o + U_r\right)^* = U_o U_o^* + U_r U_r^* + U_o U_r^* + U_r U_o^* \tag{2.52}
\]
Figure 2.8: Recording a hologram.
The amplitude transmission of the developed photographic plate is proportional to the intensity and can be represented by

t(x,y) = t_0 + \beta I(x,y) = t_0 + \beta \left[ |U_o|^2 + |U_r|^2 + U_o U_r^* + U_o^* U_r \right] \qquad (2.53)

where t_0 and \beta are constants determined by the type of recording medium. The original object wave is reconstructed by illuminating the recorded hologram at the photographic plate with the reference beam, as shown in Figure 2.9. The observer sees the virtual image, which is indistinguishable from the image of the original object. When the plate is illuminated by the reference beam for reconstruction,

U_r t = t_0 U_r + \beta \left[ U_r |U_o|^2 + U_r |U_r|^2 + |U_r|^2 U_o + U_r^2 U_o^* \right] \qquad (2.54)

The first term inside the brackets on the right-hand side of Equation (2.54) is usually called the ambiguity term; it is the product of the object wave's intensity and the reference wave. Since |U_o|^2 is generally non-uniform, it produces a wave that propagates along the direction of the illuminating plane wave within a cone, the angular aperture of which depends on the spatial spectrum of the object wave. The second term inside the brackets is the product of the intensity of the reference wave and the illuminating wave. Since the reference wave is a plane wave, |U_r|^2 is uniform, and this term represents a plane wave propagating along the direction of the illuminating plane wave. Together with the t_0 U_r term, these first two bracketed terms form the zero order of diffraction, also known as the dc term.
Figure 2.9: Reconstruction of a hologram
The third and fourth terms inside the brackets in Equation (2.54) are the holographic interference terms, and each of them generates an image of the object. The third term, |U_r|^2 U_o, is the reconstructed object wave, forming the virtual image located at the position initially occupied by the object. The fourth term, U_r^2 U_o^*, produces a real image located on the other side of the hologram. Since this term is formed from the conjugate of the object wave U_o, the depth is inverted: points on the virtual image and the real image are located at equal distances from the hologram plane but in opposite directions. The real image therefore appears inside out and is called a pseudoscopic image.
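The term structure of Equations (2.53)–(2.54) can also be verified numerically; the amplitudes, phases and the constants t_0 and \beta below are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(2)

# Arbitrary object wave and a unit-amplitude plane reference (illustrative)
U_o = rng.normal(size=8) + 1j * rng.normal(size=8)
U_r = np.exp(1j * 0.3) * np.ones(8)

t0, beta = 0.5, 0.1                       # plate constants (arbitrary)
t = t0 + beta * np.abs(U_o + U_r) ** 2    # Eq. (2.53)

# Illuminating the plate with the reference wave, Eq. (2.54)
lhs = U_r * t
rhs = t0 * U_r + beta * (U_r * np.abs(U_o)**2        # ambiguity term
                         + U_r * np.abs(U_r)**2      # reference intensity term
                         + np.abs(U_r)**2 * U_o      # virtual image (object wave)
                         + U_r**2 * np.conj(U_o))    # real image (conjugate)

assert np.allclose(lhs, rhs)
```

The last two terms are the holographic images: |U_r|^2 U_o is the object wave up to a uniform factor, while U_r^2 U_o^* carries the conjugate and hence the depth-inverted real image.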
Chapter 3
Digital Holography

This chapter discusses the principles of digital holography. Section 3.1 examines the physical and mathematical aspects of recording holograms onto a CCD camera. Section 3.2 analyses numerical diffraction methods for reconstructing the digital hologram in the diffraction planes of the holographic terms. Section 3.3 describes methods of separating the dc and holographic terms. In Section 3.4, the use of CCD cameras for holographic recording is discussed. Finally, Section 3.5 examines resolution and calibration issues in the holographic optical system.

3.1 Digital Recording of Holograms

The method of recording digital holograms is illustrated in Figure 3.1. A coherent object wave, reflected or transmitted from an object, and a reference wave interfere at the surface of a CCD camera. The holographic interference pattern formed at the CCD is electronically recorded and stored. In digital holography this recorded hologram is subsequently reconstructed computationally, using numerical algorithms that provide direct access to the amplitude and phase information of the object.
Figure 3.1: Recording of a digital hologram.
In the numerical reconstruction of the hologram, the holographic terms, the virtual image and the real image, are formed at distances d and d' respectively from the hologram plane, which is assumed to lie in the plane d = d' = 0, as displayed in Figure 3.2, which illustrates the coordinate system for hologram reconstruction. In classical holography, the hologram is reconstructed by illumination with an optical replica of the reference wavefield. In digital holography, reconstruction takes place by multiplication with a numerical model of the reference wavefield. The interference between the object wave o(\xi,\eta) transmitted by the object located at the object plane and the reference wave r(\xi,\eta) is recorded in the hologram plane (\xi,\eta). The complex object wave at the hologram plane is

o(\xi,\eta) = o_0 \exp\!\left\{ \frac{ik}{2d} \left[ (\xi - x)^2 + (\eta - y)^2 \right] \right\} \qquad (3.1)

where o_0 is the amplitude of the object wave. The complex amplitude of the reference wave diverging from a point (x_r, y_r, d_r) is

r(\xi,\eta) = r_0 \exp\!\left\{ \frac{ik}{2d_r} \left[ (\xi - x_r)^2 + (\eta - y_r)^2 \right] \right\} \qquad (3.2)

where r_0 is the amplitude of the reference wave. The complex amplitude of the interference pattern at the hologram plane is

U(\xi,\eta) = o(\xi,\eta) + r(\xi,\eta) \qquad (3.3)

A CCD sensor records this interference pattern in the form of the intensity distribution

h(\xi,\eta) = U(\xi,\eta) \cdot U^*(\xi,\eta) \qquad (3.4)

which is stored in the computer as the digital hologram.
Figure 3.2: Coordinate system for digital holography
The interference pattern is then mathematically described by

h(\xi,\eta) = |o(\xi,\eta)|^2 + |r(\xi,\eta)|^2 + o(\xi,\eta)\, r^*(\xi,\eta) + o^*(\xi,\eta)\, r(\xi,\eta) \qquad (3.5)

3.2 Numerical Diffraction

Once the hologram has been captured, the optically diffracted field is numerically propagated by the use of reconstruction algorithms. The diffraction pattern is calculated at a distance d' behind the CCD focal plane (hologram plane); that is, it reconstructs the complex amplitude in the plane of the real image. Note, however, that one can focus on either the real or the virtual image, located at respective distances d' and d from the hologram plane. The magnitudes of the distances are the same but the signs are opposite. In this thesis we define the general expression for the reconstruction distance, for either of the holographic images, as z.

Mathematically, the reconstruction process reduces to the calculation of the Rayleigh-Sommerfeld diffraction integral. If one treats the digital holographic system as a coherent imaging system, then the reconstruction algorithm functions like the lens in an optical system. All computer calculations can be performed from the Rayleigh-Sommerfeld diffraction formula; however, using this formula directly is often time-consuming. There are a number of computationally efficient numerical methods available to calculate the holographic diffraction [40]. The most commonly applied method has been the discrete Fresnel transform, where the approximation of the spherical Huygens wavelet by a parabolic surface allows the calculation of the diffraction integral using a single Fourier transform [6].
In the convolution method, the diffraction integral is calculated without such approximation, using three Fourier transforms through the convolution theorem [41]. The angular spectrum method, on the other hand, involves two Fourier transforms plus simple filtering of the angular spectrum [32]. Here we review these methods.

3.2.1 Reconstruction by the Fresnel Transform Method

If the wavefield is known in one plane, it can be calculated in any other plane in three-dimensional space. The Rayleigh-Sommerfeld diffraction theory is a precise mathematical form of this statement. Referring to Figure 3.2, the Rayleigh-Sommerfeld diffraction formula in this coordinate system, for the reconstruction of the holographic image in the image plane, is

b(x',y') = \frac{1}{i\lambda} \iint h(\xi,\eta)\, r(\xi,\eta)\, \frac{\exp(ik\rho)}{\rho} \cos\theta \; d\xi\, d\eta \qquad (3.6)

where b(x',y') is the reconstructed image in the image plane, h(\xi,\eta) is the hologram recorded by the CCD sensor array, r(\xi,\eta) is the reference wavefield, and \rho is the distance between a point in the hologram plane and a point in the reconstruction plane, defined as

\rho = \sqrt{ d'^2 + (x' - \xi)^2 + (y' - \eta)^2 } \qquad (3.7)

Here d' is the reconstruction distance, i.e. the distance between the hologram and image planes. Because the angles between the hologram normal and the rays from the hologram to the image points are small, the obliquity factor can be set to \cos\theta = 1 in Equation (3.6). The expression in Equation (3.7) can be expanded in a Taylor series so that
\rho = d' \left[ 1 + \frac{(x'-\xi)^2}{2d'^2} + \frac{(y'-\eta)^2}{2d'^2} - \frac{1}{8} \frac{\left[ (x'-\xi)^2 + (y'-\eta)^2 \right]^2}{d'^4} + \ldots \right] \qquad (3.8)

The fourth term can be neglected if it is small compared to the wavelength, so that

\rho \approx d' + \frac{(x'-\xi)^2}{2d'} + \frac{(y'-\eta)^2}{2d'} \qquad (3.9)

With the additional approximation of replacing the factor \rho in the denominator of Equation (3.6) by the reconstruction distance d',

b(x',y') = \frac{\exp(ikd')}{i\lambda d'} \iint h(\xi,\eta)\, r(\xi,\eta) \exp\!\left[ \frac{i\pi}{\lambda d'} \left( (x'-\xi)^2 + (y'-\eta)^2 \right) \right] d\xi\, d\eta \qquad (3.10)

Expanding the squares in the argument of the exponential yields

b(x',y') = \frac{\exp(ikd')}{i\lambda d'} \exp\!\left[ \frac{i\pi}{\lambda d'} (x'^2 + y'^2) \right] \iint h(\xi,\eta)\, r(\xi,\eta) \exp\!\left[ \frac{i\pi}{\lambda d'} (\xi^2 + \eta^2) \right] \exp\!\left[ -\frac{2\pi i}{\lambda d'} (x'\xi + y'\eta) \right] d\xi\, d\eta \qquad (3.11)

Defining the impulse response function of the optical system as

g(\xi,\eta) = \frac{\exp(ikd')}{i\lambda d'} \exp\!\left[ \frac{i\pi}{\lambda d'} (\xi^2 + \eta^2) \right] \qquad (3.12)

the reconstructed wavefield is then

b(x',y') = \exp\!\left[ \frac{i\pi}{\lambda d'} (x'^2 + y'^2) \right] \iint h(\xi,\eta)\, r(\xi,\eta)\, g(\xi,\eta) \exp\!\left[ -\frac{2\pi i}{\lambda d'} (x'\xi + y'\eta) \right] d\xi\, d\eta \qquad (3.13)

This equation is known as the Fresnel approximation. It requires that the size of the aperture be small in comparison to the distance to the viewing plane.
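A minimal numerical sketch of this reconstruction, discretizing Equation (3.13) with a single FFT (the discrete form is developed below as Equations (3.20)–(3.24)); the camera parameters are assumed example values, and a plane reference wave is used for illustration:

```python
import numpy as np

def fresnel_reconstruct(hologram, ref, wavelength, d, dx):
    """Single-FFT discrete Fresnel reconstruction (sketch of Eq. (3.24)).
    A square N x N hologram with pixel pitch dx is assumed."""
    N = hologram.shape[0]
    k = 2 * np.pi / wavelength
    n = np.arange(N) - N // 2
    X, Y = np.meshgrid(n * dx, n * dx)            # hologram-plane coordinates
    # chirp inside the transform: exp[i*pi/(lambda*d) * (xi^2 + eta^2)]
    chirp = np.exp(1j * np.pi / (wavelength * d) * (X**2 + Y**2))
    F = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(hologram * ref * chirp)))
    # image-plane pixel pitch, Eq. (3.25): scales with the distance d
    dX = wavelength * d / (N * dx)
    Xi, Yi = np.meshgrid(n * dX, n * dX)
    prefactor = (np.exp(1j * k * d) / (1j * wavelength * d)
                 * np.exp(1j * np.pi / (wavelength * d) * (Xi**2 + Yi**2)))
    return prefactor * F

# Toy usage: plane reference, synthetic hologram (illustrative values only)
N, dx, lam, d = 256, 6.7e-6, 633e-9, 0.05
holo = np.random.rand(N, N)
b = fresnel_reconstruct(holo, np.ones((N, N)), lam, d, dx)
```

The intensity and phase then follow as `np.abs(b)**2` and `np.angle(b)`, in the sense of Equations (3.14)–(3.15).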
Fresnel diffraction is essentially the Fourier transform of the aperture multiplied by the phase factor of the light at the aperture. When the reconstruction distance z (d or d') is sufficiently large for this approximation to be accurate, the observer is said to be in the region of Fresnel diffraction. For the Fresnel approximation to remain valid, it is only necessary that the higher-order terms of the expansion do not change the value of the integral in Equation (3.13). The intensity is calculated by

I(x',y') = |b(x',y')|^2 \qquad (3.14)

and the phase by

\varphi(x',y') = \arctan \frac{\operatorname{Im} b(x',y')}{\operatorname{Re} b(x',y')} \qquad (3.15)

For digitization of the Fresnel transform, the following spatial frequencies are introduced:

\nu = \frac{x'}{\lambda d'} \quad \text{and} \quad \mu = \frac{y'}{\lambda d'} \qquad (3.16)

Equation (3.13) then becomes

b(\nu,\mu) = \exp\!\left[ i\pi\lambda d' (\nu^2 + \mu^2) \right] \iint h(\xi,\eta)\, r(\xi,\eta)\, g(\xi,\eta) \exp\!\left[ -2\pi i (\nu\xi + \mu\eta) \right] d\xi\, d\eta \qquad (3.17)

Comparing Equation (3.17) with the definition of the two-dimensional Fourier transform,

\mathcal{F}\{ f(\xi,\eta) \} = \iint f(\xi,\eta) \exp\!\left[ -2\pi i (\nu\xi + \mu\eta) \right] d\xi\, d\eta \qquad (3.18)

with f(\xi,\eta) = h(\xi,\eta)\, r(\xi,\eta)\, g(\xi,\eta), Equation (3.17) becomes
b(\nu,\mu) = \exp\!\left[ i\pi\lambda d' (\nu^2 + \mu^2) \right] \mathcal{F}\{ f(\xi,\eta) \} \qquad (3.19)

The function b(\nu,\mu) can be digitized if the hologram h(\xi,\eta) is sampled on an M \times N pixel array on the CCD sensor, with steps \Delta\xi and \Delta\eta along the coordinates. \Delta\xi and \Delta\eta therefore represent the distances between neighboring pixels in the horizontal and vertical directions, and so

b(m,n) = \frac{\exp(ikd')}{i\lambda d'} \exp\!\left[ i\pi\lambda d' \left( m^2 \Delta\nu^2 + n^2 \Delta\mu^2 \right) \right] \sum_{k}\sum_{l} h(k,l)\, r(k,l) \exp\!\left[ \frac{i\pi}{\lambda d'} \left( k^2 \Delta\xi^2 + l^2 \Delta\eta^2 \right) \right] \exp\!\left[ -2\pi i \left( km\,\Delta\xi\,\Delta\nu + ln\,\Delta\eta\,\Delta\mu \right) \right] \qquad (3.20)

where k, l, m, n are integers with -M/2 \le k, m \le M/2 and -N/2 \le l, n \le N/2. The pixel sizes (\Delta\xi, \Delta\eta) of the CCD array located at the hologram plane are different from the pixel sizes (\Delta x', \Delta y') in the image plane. The discrete Fourier transform theorem tells us that the maximum frequency is determined by the sampling interval in the spatial domain, and so

\Delta\nu = \frac{1}{M \Delta\xi} \quad \text{and} \quad \Delta\mu = \frac{1}{N \Delta\eta} \qquad (3.21)

Substitution of these terms provides the relations between the hologram and image planes,

\Delta x' = \frac{\lambda d'}{M \Delta\xi} \quad \text{and} \quad \Delta y' = \frac{\lambda d'}{N \Delta\eta} \qquad (3.22)

which gives the reconstructed wavefield as

b(m,n) = \frac{\exp(ikd')}{i\lambda d'} \exp\!\left[ \frac{i\pi}{\lambda d'} \left( m^2 \Delta x'^2 + n^2 \Delta y'^2 \right) \right] \sum_{k}\sum_{l} h(k,l)\, r(k,l) \exp\!\left[ \frac{i\pi}{\lambda d'} \left( k^2 \Delta\xi^2 + l^2 \Delta\eta^2 \right) \right] \exp\!\left[ -2\pi i \left( \frac{km}{M} + \frac{ln}{N} \right) \right] \qquad (3.23)
This is the discrete Fresnel transform. The calculation in Equation (3.23) can be made more efficient by applying the fast Fourier transform (FFT), so that Equation (3.23) becomes

b(m,n) = \frac{\exp(ikd')}{i\lambda d'} \exp\!\left[ \frac{i\pi}{\lambda d'} \left( m^2 \Delta x'^2 + n^2 \Delta y'^2 \right) \right] \mathrm{FFT}\!\left\{ h(k,l)\, r(k,l) \exp\!\left[ \frac{i\pi}{\lambda d'} \left( k^2 \Delta\xi^2 + l^2 \Delta\eta^2 \right) \right] \right\} \qquad (3.24)

The pixel resolution \Delta x' and \Delta y' of the reconstructed images, determined directly from the Fresnel transform, varies as a function of the reconstruction distance d' as

\Delta x' = \frac{\lambda d'}{M \Delta\xi} \quad \text{and} \quad \Delta y' = \frac{\lambda d'}{N \Delta\eta} \qquad (3.25)

The reconstructed image therefore changes in size according to the reconstruction distance, which is an inconvenient limitation of this method. Equation (3.25) also gives the horizontal and vertical diameters of the Airy disk in the image plane and sets the diffraction-limited resolution of the optical system.

Although the Fresnel diffraction formula can still give an accurate reconstruction for smooth and slowly varying objects where the Fresnel approximation is not strictly satisfied, it cannot correctly reconstruct near wavefields of more diffractive objects, where the higher-order terms in the Taylor expansion of Equation (3.8) are more significant. The Fresnel transform also requires that the distance between the object and the hologram plane be sufficiently large in comparison to the size of the object or the hologram. This sets the minimum reconstruction distance requirement

z_{\min} = \frac{a_x^2}{N\lambda} \qquad (3.26)
where a_x is the image size and N is the number of pixels. At too close a distance, the spatial frequency of the pixelated hologram is not high enough to reproduce a large angular size of the object without aliasing.

3.2.2 Reconstruction by the Fresnel Convolution Method

The Rayleigh-Sommerfeld diffraction formula in Equation (3.6) can be rewritten as a superposition integral,

b(x',y') = \iint h(\xi,\eta)\, r(\xi,\eta)\, g(x',y',\xi,\eta)\, d\xi\, d\eta \qquad (3.27)

where the Fresnel impulse response function is

g(x',y',\xi,\eta) = \frac{\exp(ikd')}{i\lambda d'} \exp\!\left[ \frac{i\pi}{\lambda d'} \left( (x'-\xi)^2 + (y'-\eta)^2 \right) \right] \qquad (3.28)

The linear system is space-invariant, so the impulse response function satisfies

g(x',y',\xi,\eta) = g(x'-\xi,\, y'-\eta) \qquad (3.29)

The superposition integral in (3.27) can therefore be expressed as a convolution,

b(x',y') = \mathcal{F}^{-1}\{ \mathcal{F}[ h(\xi,\eta)\, r(\xi,\eta) ] \cdot \mathcal{F}[ g(\xi,\eta) ] \} \qquad (3.30)

where \mathcal{F}[ g(\xi,\eta) ] is the Fourier transform of the impulse response function. From the definition of the Fourier transform in Equation (3.18), the transform of the impulse response function is

\mathcal{F}[ g(\xi,\eta) ] = \iint g(\xi,\eta) \exp\!\left[ -2\pi i (\nu\xi + \mu\eta) \right] d\xi\, d\eta = \exp(ikd') \exp\!\left[ -i\pi\lambda d' (\nu^2 + \mu^2) \right] \qquad (3.31)

The reconstructed wavefield is then
b(x',y') = \mathcal{F}^{-1}\left\{ \mathcal{F}[ h(\xi,\eta)\, r(\xi,\eta) ] \exp(ikd') \exp\!\left[ -i\pi\lambda d' (\nu^2 + \mu^2) \right] \right\} \qquad (3.32)

The convolution approach gives an image with constant pixel dimensions between the hologram and image planes,

\Delta x' = \Delta\xi \quad \text{and} \quad \Delta y' = \Delta\eta \qquad (3.33)

However, due to the more complex algorithm, the convolution approach is slower than the Fresnel transform approach. The whole process requires three Fourier transforms, which are carried out using the FFT algorithm for more efficient calculation.

The sampling requirements of the Fresnel and convolution algorithms are the same, and therefore this method is also subject to the minimum reconstruction distance set by Equation (3.26). At too close a distance, the spatial frequency of the hologram is too low and aliasing occurs. Normally the object is placed just outside the minimum distance found from Equation (3.26).

3.2.3 Reconstruction by the Huygens Convolution Method

The Fresnel approximation in convolution is not wholly justified for all z values. The Huygens convolution method often yields better-quality images, especially when the hologram and image planes are close [31]. The Rayleigh-Sommerfeld diffraction formula in Equation (3.6) is again rewritten as a superposition integral,

b(x',y') = \frac{1}{i\lambda} \iint h(\xi,\eta)\, r(\xi,\eta)\, \frac{\exp(ik\rho)}{\rho}\, d\xi\, d\eta \qquad (3.34)

where the impulse response function is

g(x',y',\xi,\eta) = \frac{1}{i\lambda} \frac{\exp(ik\rho)}{\rho} \qquad (3.35)
The Huygens impulse response function is given by

g(x',y') = \frac{1}{i\lambda} \frac{\exp\!\left[ ik \sqrt{ d'^2 + x'^2 + y'^2 } \right]}{\sqrt{ d'^2 + x'^2 + y'^2 }} \qquad (3.36)

and approximated by

g(x',y') \approx \frac{1}{i\lambda d'} \exp\!\left[ ik \sqrt{ d'^2 + x'^2 + y'^2 } \right] \qquad (3.37)

Finally,

b(x',y') = \mathcal{F}^{-1}\{ \mathcal{F}[ h(\xi,\eta)\, r(\xi,\eta) ] \cdot \mathcal{F}[ g(\xi,\eta) ] \} \qquad (3.38)

The final inverse Fourier transform brings the convolution result back to the spatial domain, whereby the reconstructed pixel size becomes independent of z, and one can conveniently focus the reconstruction at different reconstruction distances without the rescaling issue of the Fresnel transform.

The whole process requires three Fourier transforms, which are carried out using the FFT algorithm for a more efficient calculation. The pixel sizes of the images reconstructed by this convolution approach are equal to that of the hologram, as in the Fresnel convolution. The minimum distance for reconstruction of the image by this method is again given by Equation (3.26).

3.2.4 Reconstruction by the Angular Spectrum Method

Suppose that, due to some unspecified system of monochromatic sources, a plane wave is incident on the x_0 y_0 plane of Figure 3.3, traveling in the positive z direction. Let the complex field across that plane be represented by E_0(x, y; 0). The objective is to
calculate the consequent field E(x, y; z) that appears at a second point with coordinates (x, y, z). By use of the generalized form of the two-dimensional Fourier transform,

\mathcal{F}\{ f(x,y) \} = F(u,v) = \iint f(x,y) \exp\!\left[ -2\pi i (ux + vy) \right] dx\, dy \qquad (3.39)

the function E_0 across the x_0 y_0 plane has a two-dimensional Fourier transform given by

A(f_x, f_y; 0) = \iint E_0(x_0, y_0, 0) \exp\!\left[ -2\pi i (f_x x_0 + f_y y_0) \right] dx_0\, dy_0 \qquad (3.40)

where A(f_x, f_y; 0) is the angular spectrum. With the substitutions

f_x = \frac{k_x}{2\pi} \quad \text{and} \quad f_y = \frac{k_y}{2\pi} \qquad (3.41)

we obtain

A(k_x, k_y; 0) = \iint E_0(x_0, y_0, 0) \exp\!\left[ -i (k_x x_0 + k_y y_0) \right] dx_0\, dy_0 \qquad (3.42)

where k_x and k_y are the corresponding spatial frequencies of x and y.

Figure 3.3: Coordinate system for reconstruction of the hologram in the angular spectrum method.
Fourier-domain filtering can be applied to the spectrum to block unwanted spectral terms in the hologram and to select a region of interest corresponding only to the object spectrum. A modified wavefield E_0(x_0, y_0; 0) can then be written as the inverse Fourier transform of the filtered angular spectrum A(k_x, k_y; 0). The operation of a Fourier transform may be regarded as a decomposition of a complicated function into a collection of simpler complex-exponential functions, so we can write E_0 as an inverse transform of its spectrum:

E_0(x_0, y_0; 0) = \iint A(k_x, k_y; 0) \exp\!\left[ i (k_x x_0 + k_y y_0) \right] dk_x\, dk_y \qquad (3.43)

Propagating plane waves are of the form (writing the dot product of the vectors k and r out fully, r being the position vector)

b(x,y,z) = \exp\!\left[ i (k_x x + k_y y + k_z z) \right] \qquad (3.44)

where

k_z = \sqrt{ k^2 - k_x^2 - k_y^2 }, \qquad k = \frac{2\pi}{\lambda} \qquad (3.45)

The new angular spectrum A(k_x, k_y; z) at the plane z is calculated from A(k_x, k_y; 0) as

A(k_x, k_y; z) = A(k_x, k_y; 0) \exp(i k_z z) \qquad (3.46)

The spatial-frequency diffraction kernel is the z portion of Equation (3.44), and substitution of Equation (3.45) gives

\exp(i k_z z) = \exp\!\left[ iz \sqrt{ k^2 - k_x^2 - k_y^2 } \right] \qquad (3.47)

This can be written, by substitution of the terms in Equation (3.41), as

\exp(i k_z z) = \exp\!\left[ 2\pi i z \sqrt{ \frac{1}{\lambda^2} - f_x^2 - f_y^2 } \right] \qquad (3.48)
Equation (3.48) defines the angular spectrum diffraction kernel, or optical transfer function,

H(f_x, f_y) = \exp\!\left[ 2\pi i z \sqrt{ \frac{1}{\lambda^2} - f_x^2 - f_y^2 } \right] \qquad (3.49)

The reconstructed complex wavefield at any plane perpendicular to the propagation axis z is found by

E(x,y;z) = \iint A(k_x, k_y; z) \exp\!\left[ i (k_x x + k_y y) \right] dk_x\, dk_y \qquad (3.50)

Therefore, by substitution of Equations (3.46) and (3.48),

E(x,y;z) = \iint A(k_x, k_y; 0) \exp\!\left[ 2\pi i z \sqrt{ \frac{1}{\lambda^2} - f_x^2 - f_y^2 } \right] \exp\!\left[ i (k_x x + k_y y) \right] dk_x\, dk_y \qquad (3.51)

E(x,y;z) = \mathcal{F}^{-1}\left\{ \mathrm{filter}\!\left[ \mathcal{F}\{ E_0(x_0, y_0, 0) \} \right] \exp\!\left[ 2\pi i z \sqrt{ \frac{1}{\lambda^2} - f_x^2 - f_y^2 } \right] \right\} \qquad (3.52)

Here "filter" represents filtering in the spectral domain. The effect of propagation over a distance z is simply a change in the relative phases of the various components of the angular spectrum. Since each plane-wave component propagates at a different angle, each travels a different distance to reach a given observation point, and relative phase delays are thus introduced.

In the calculation of Equation (3.52), two Fourier transforms are needed, in comparison to the one needed by the Fresnel transform. However, once the field is known at any one plane, only one additional Fourier transform is needed to calculate the field at different values of z. This method allows frequency-domain spectrum filtering to be applied, which can be used, for example, to block or remove the
disturbance of the dc term and twin-image components. The angular spectrum method of calculating the holographic optical field thus has several advantages over the more commonly used Fresnel transformation and Huygens convolution methods. Spurious noise and interference components can be tightly controlled through the analysis and filtering of the angular spectrum. The reconstruction distance has no lower limit, and the off-axis angle between the object and reference waves can be lower than the Fresnel requirement while still allowing the zero-order background to be cleanly separated.

3.3 Separation of the Virtual Image, Real Image and dc Term

In Gabor (in-line) holography, the virtual image, real image and dc term are superimposed, which creates difficulties for the acquisition of the object wave information. Numerous ideas have been proposed as solutions to this problem, including some by Gabor himself; however, most of these require extra technical effort and are not efficient procedures [42]. Off-axis holography, as first devised by Leith and Upatnieks, provides an effective and simple means of laterally separating the dc and holographic terms through the introduction of a sufficiently large angle between the object and reference beams. Increasing the angle between the beams increases the spatial separation between the dc and holographic terms, which is desirable because it allows the relevant frequency information of the holographic image to be easily extracted. There is, however, a limit to this spatial separation, set by the low spatial resolution of CCD cameras, which means that these terms may still partially overlap. In this section we explore the anatomy of a digital hologram.
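The angular spectrum reconstruction of Section 3.2.4 can be sketched compactly; the grid, wavelength and distance below are assumed example values, and a simple band limit (suppressing evanescent components) stands in for the kind of object-spectrum filter discussed here:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, z, dx):
    """Propagate a complex field over distance z with the angular spectrum
    method: two FFTs and the transfer function of Eq. (3.49).
    Square grid with pixel pitch dx; evanescent components are zeroed."""
    N = field.shape[0]
    fx = np.fft.fftfreq(N, d=dx)                  # spatial frequencies (1/m)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)           # Eq. (3.49) + band limit
    A = np.fft.fft2(field)                        # angular spectrum at z = 0
    return np.fft.ifft2(A * H)                    # Eq. (3.52), trivial filter

# Propagating forward then backward recovers the field (a toy consistency check)
N, dx, lam = 128, 5e-6, 633e-9
rng = np.random.default_rng(0)
E0 = np.exp(1j * rng.uniform(0, 2 * np.pi, (N, N)))
E1 = angular_spectrum_propagate(E0, lam, 1e-3, dx)
E2 = angular_spectrum_propagate(E1, lam, -1e-3, dx)

assert np.allclose(E2, E0)
```

A real object-spectrum filter would replace the `(arg > 0)` mask with a window selecting only the holographic term's region of the spectrum; because the transfer function only rephases the spectrum, refocusing to another z needs just one extra inverse FFT.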
3.3.1 Suppression of the dc Term

Figure 3.4 displays a digital hologram recorded in an off-axis configuration, Fig. 3.4(a), and its associated Fourier transform, Fig. 3.4(b). The bright area at the centre of the image is the undiffracted reconstruction wave, known as the dc term or zero order. This term holds most of the energy in the image. The holographic terms, the virtual image and the real image, are located slightly shifted to the top left and bottom right of the dc term. The spatial frequencies of the interference terms are located symmetrically with respect to the center of the image; this shift is caused by the angle introduced between the object and reference beams. Higher-order holographic terms are also present at the extreme top left and bottom right of the figure, and one can also see spurious noise components originating from the CCD camera. The energy is concentrated around three main frequencies: (0, 0) for the dc term, (k_x, k_y) for the image and (-k_x, -k_y) for the conjugate image.

Figure 3.4: Absolute value of the Fourier transform of the hologram. The frequency origin is in the center.
As the reconstruction distance increases, the dc term stays located in the central part of the image, while the holographic terms move away from the center. Either of the holographic terms contains the entire information of the object wave, so in the numerical reconstruction one focuses on either the virtual or the real image, located at the object and image planes respectively.

The objective is then to remove the dc term, the conjugate image and any residual noise in the hologram; alternatively, one can simply cut out the relevant frequency information of the holographic image and discard the rest via a band-pass filtering procedure. This procedure can be implemented in either the spatial or the frequency domain. A combination of windows of various shapes and sizes can be used to filter the undesired spatial frequencies while keeping intact the contributions of the interference terms. Inserting a mask into the image spectrum in Fourier space allows the holographic term to be isolated and cut out. On the other hand, one notices in Fig. 3.4(b) that even if either holographic term were cut out, weak frequency components of the dc term would remain; these would still disturb the selected holographic image and consequently degrade the quality of the reconstruction. Unlike the easily recognizable spurious noise components, these components cannot be removed by filtering in either domain.

A simple method for suppression of the dc term is to subtract the average intensity from the hologram before reconstruction. A similar effect can be achieved by the subtraction of two holograms, which provides a particularly effective experimental means of dc suppression. A disadvantage of either of these methods, however, is the increased experimental effort.
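The mean-subtraction method of dc suppression can be sketched as follows; the hologram here is a synthetic array standing in for a recorded frame:

```python
import numpy as np

# Synthetic "hologram": a strong constant offset plus fluctuations, standing in
# for the recorded intensity (illustrative values, not experimental data).
rng = np.random.default_rng(1)
holo = 100.0 + rng.normal(0.0, 5.0, (256, 256))

# Subtract the average intensity before reconstruction.
# (The alternative mentioned above is to subtract two recorded holograms,
# which removes the dc term in the same way.)
holo_dc_suppressed = holo - holo.mean()

# The zero-frequency (dc) bin of the spectrum is now numerically negligible
# compared with the original hologram's dc bin.
dc_before = abs(np.fft.fft2(holo)[0, 0])
dc_after = abs(np.fft.fft2(holo_dc_suppressed)[0, 0])
assert dc_after < 1e-6 * dc_before
```

Only the `[0, 0]` Fourier bin is affected, so the holographic (interference) terms pass through unchanged; the residual dc components that originate from the non-uniform |o|^2 term, as noted above, are not removed by this step.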
3.4 Recording of Digital Holograms

3.4.1 CCD Cameras for Digital Holography

Digital holography is very different from conventional holography in terms of the recording medium and the way in which the hologram is stored. In conventional holography the hologram is recorded on a photographic plate, typically made of silver halide, which is then chemically processed. The original object wave is subsequently obtained by illuminating the recorded hologram with the original reference wave. In digital holography, the hologram is recorded directly onto the CCD and digitally stored in a computer, where the object wave is reconstructed numerically.

The exposure time of a CCD camera recording a hologram is significantly shorter than that of a photographic plate, which relaxes the stability requirement on the recording system; as a result, moving objects can be recorded with a short pulse of light. The elimination of chemical processing, quantitative information, easy data storage, and the fast recording of holograms are just some of the important advantages of using a CCD camera in digital holography. CCD cameras are an essential component in exploiting the many advantages of digital holography and should be carefully selected depending on the specific application. A number of CCD cameras with differing architectures can be used effectively in digital holography. In a full-frame CCD, the entire detector surface is first exposed to light, and then the light is externally blocked off so that readout can occur without further charge accumulation during the readout process. In an interline-transfer CCD, alternate columns of the detector array are masked off with opaque material. To read out the
image, the charge in each column of light-sensitive pixels is simultaneously shifted into the adjacent dark column, which is blocked off from light. The drawback of full-frame CCDs is that they require an external means of controlling exposure, typically a mechanical shutter. This increases camera cost, size and weight. Readout speed, which influences image noise, is also slower than for interline-transfer CCDs.

3.4.2 Spatial Frequency Requirements for Hologram Recording

For a hologram to be effectively recorded, the CCD camera must be able to record all of the interference fringes in the hologram. From signal sampling theory, the Shannon criterion requires the sampling frequency to be at least twice the highest signal frequency. This means that every interferometric fringe of the hologram has to be sampled by at least two pixels of the CCD array to resolve the fringe frequency, which in turn requires that the angle between the object and reference waves be sufficiently small. The low spatial resolution of a CCD camera means that the maximum angle between the reference wave and the object wave is limited to just a few degrees. The maximum spatial frequency f_max that needs to be resolved, for the wavelength \lambda, is determined by the maximum angle \theta_max between the reference and object beams as

f_{\max} = \frac{2}{\lambda} \sin\frac{\theta_{\max}}{2} \qquad (3.53)

or

f_{\max} = \frac{1}{2\Delta x} \qquad (3.54)

where \Delta x is the spacing between the pixel elements on the CCD sensor.
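The limits in Equations (3.53)–(3.54) can be evaluated for representative numbers; a He-Ne wavelength and a pixel pitch of several microns are assumed purely for illustration:

```python
import numpy as np

wavelength = 633e-9   # He-Ne laser wavelength (assumed example)
dx = 6.7e-6           # CCD pixel pitch (assumed example)

# Highest fringe frequency the sensor can sample, Eq. (3.54)
f_max = 1.0 / (2.0 * dx)

# Corresponding maximum object-reference angle, inverting Eq. (3.53)
theta_max = 2.0 * np.arcsin(wavelength * f_max / 2.0)

theta_max_deg = np.degrees(theta_max)   # ~2.7 degrees for these values,
                                        # i.e. "a few degrees", as the text states
```

Halving the pixel pitch doubles f_max and (in the small-angle regime) roughly doubles the permissible off-axis angle, which is why sensor resolution directly limits the separation of the holographic terms.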
As the maximum spatial frequency f_max recorded on the CCD is limited by the pixel size of the CCD, the angle between the reference and object waves at any point of the CCD must not exceed the maximum value \theta_max. If the angle at some region of the CCD does exceed this value, the interference fringes become undersampled; this results in a decrease in the resolution of the reconstruction, and aliasing appears in the recorded image of the fringes. Hence the distance between neighboring pixels is what limits the maximum angle between the object and reference waves: in digital holography, the angle between the reference and object waves must be kept below the maximum value \theta_max.

The resolution of CCD cameras available today compares rather poorly to that of photographic plates. This limits the maximum angle between the object and reference beams and hence the spatial separation that can occur between the real and virtual images, as shown earlier in Figure 3.4. As a consequence, the three diffracted waves often partially overlap during reconstruction. With the introduction of better CCD cameras this restriction will become less of a problem.

3.4.3 Lateral Resolution of the Optical System

The resolution of the reconstructed image depends on the information recorded in the hologram. This information is influenced by the size and the spatial resolution of the recording material. For digital holography, the information recorded in the hologram is determined by the number of pixels, the pixel size and the dynamic range of the CCD [43, 44].
In particular, the lateral resolution in digital holography is restricted by the pixel size of the CCD sensor, and therefore the resolution of the reconstructed image is determined by the highest spatial frequency of the object wave recorded by the CCD camera. For this reason, microscope lenses have to be used to magnify the object.

3.4.4 Calibration of the Optical System in Digital Holography

The USAF 1951 test target shown in Figure 3.5 is one of the most commonly used resolution targets in optics. It allows one to determine the lateral resolution of the optical system. Table 3.1 displays the chart for the 1951 resolution target, which specifies the frequency and line width for each group and element. This standard test object contains horizontal and vertical three-bar patterns in the form of a reflecting chromium coating set on a glass substrate. The vertical bars are used to calculate horizontal resolution and the horizontal bars are used to calculate vertical resolution.

Figure 3.5: USAF-1951 resolution test target pattern (see www.sinepatterns.com).
1951 USAF Resolution Target Data

Group  Element  lp/mm    Width (um)   |  Group  Element  lp/mm    Width (um)
 -2       1     0.2500     2000       |    4       1     16.00      31.3
 -2       2     0.2806     1782       |    4       2     17.96      27.8
 -2       3     0.3150     1587       |    4       3     20.16      24.8
 -2       4     0.3536     1414       |    4       4     22.63      22.1
 -2       5     0.3969     1260       |    4       5     25.40      19.7
 -2       6     0.4454     1122       |    4       6     28.51      17.5
 -1       1     0.5000     1000       |    5       1     32.00      15.6
 -1       2     0.5612      891       |    5       2     35.92      13.9
 -1       3     0.6300      794       |    5       3     40.32      12.4
 -1       4     0.7071      707       |    5       4     45.25      11.0
 -1       5     0.7937      630       |    5       5     50.80      9.84
 -1       6     0.8909      561       |    5       6     57.02      8.77
  0       1     1.000       500       |    6       1     64.00      7.81
  0       2     1.122       445       |    6       2     71.84      6.96
  0       3     1.260       397       |    6       3     80.63      6.20
  0       4     1.414       354       |    6       4     90.51      5.52
  0       5     1.587       315       |    6       5     101.6      4.92
  0       6     1.782       281       |    6       6     114.0      4.38
  1       1     2.000       250       |    7       1     128.0      3.91
  1       2     2.245       223       |    7       2     143.7      3.48
  1       3     2.520       198       |    7       3     161.3      3.10
  1       4     2.828       177       |    7       4     181.0      2.76
  1       5     3.174       157       |    7       5     203.2      2.46
  1       6     3.564       140       |    7       6     228.1      2.19
  2       1     4.000       125       |    8       1     256.0      1.95

Table 3.1: Specification data for the groups and elements of a 1951 USAF resolution test target (spatial frequency in line pairs per millimetre and line width in micrometres).
The resolution target is a vital instrument in the calibration of the holographic optical system. It allows determination of parameters such as the image size, the number of pixels and the lateral resolution of the hologram and reconstructed image. The methodology for obtaining the correct image size and number of pixels is as follows. One counts the number of pixels on the CCD camera required to draw a square box around a specific element made up of 2.5 line pairs. From Table 3.1 one can then calculate the physical dimensions of the square box corresponding to that number of pixels on the camera. One then increases the required image size to use the highest number of square pixels on the camera.
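A sketch of this calibration arithmetic, assuming group 4, element 1 (16.00 lp/mm from Table 3.1) and a hypothetical pixel count across the box:

```python
# Calibration of magnified pixel size from a USAF-1951 element.
# The element choice and the pixel count across the box are hypothetical
# illustration values, not figures taken from the experiments.
freq_lp_per_mm = 16.00       # group 4, element 1 (Table 3.1)
n_line_pairs = 2.5           # the box spans 2.5 line pairs, as described above
pixels_across_box = 120      # counted on the camera image (hypothetical)

# One line pair occupies 1/freq mm, so the box is n_line_pairs/freq mm wide.
box_size_mm = n_line_pairs / freq_lp_per_mm          # 0.15625 mm
pixel_size_um = box_size_mm * 1000.0 / pixels_across_box
```

The resulting `pixel_size_um` is the effective (magnified) pixel pitch at the object, from which the image size for any chosen number of pixels follows directly.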
Chapter 4
Digital Gabor Holography for Microscopy

The structure of this chapter is as follows: Section 4.1 introduces the background of Gabor holography and the theoretical basis for recording and reconstruction of the hologram. Section 4.2 presents a simulation of digital Gabor holography using a theoretical object. Section 4.3 examines some of the main applications and limitations of the Gabor holographic technique. In Section 4.4, technical details of the experiment are discussed. Section 4.5 presents a host of applications of digital Gabor holography in microscopy. Section 4.6 looks at the use of digital Gabor holographic movies for microscopy and demonstrates some of their technical advantages over conventional microscopy. Finally, Section 4.7 summarizes and concludes the chapter.

4.1 Introduction to Digital Gabor Holography

Gabor, or in-line, holography, as it is commonly known, provides a simple method of high-resolution imaging that routinely achieves micron-level lateral and depth resolution in three-dimensional imaging. The same beam serves as the reference wave and illuminates the object; no mirrors or beam splitters are strictly needed, though they can add further capabilities to the holographic optical system.

In Gabor holography, the object is required to be mostly transparent, with small thin obstructions which diffract and scatter the illuminating beam. The light wave
arriving at the detector plane consists of the undiffracted plane wave, the reference (R), and the diffracted object wave (O), which interfere to give the holographic interference pattern |H|^2 at the CCD camera:

|H|^2 = |R + O|^2 = |R|^2 + |O|^2 + R*O + RO*        (4.1)

The reference term |R|^2 on the right side of Equation (4.1) is removed by exposing another frame with the object removed, whereas the |O|^2 term cannot be removed. However, if the object is mostly transparent, this term contributes only a small amount of background noise. The two remaining terms are the virtual image R*O and the real image RO*. These terms are spatially superimposed in the reconstructed field, which results in the loss of phase information. Therefore, when performing numerical focusing in the reconstruction of the hologram, the defocused blur from the conjugate image, although weak, is problematic: the superposition of these terms effectively blurs the details and structures of the reconstructed object wave. This effect, combined with the need for a mostly transparent object to minimize the |O|^2 term, is the main limitation of Gabor holography.

4.2 Gabor Holography Simulation

Figure 4.1 shows a simulation of Gabor holography using a cross and circle as the theoretical object. For a mostly transparent object (Fig. 4.1a), this results in a positive image against a dark background (Fig. 4.1b). On the other hand, if one uses a Michelson interferometer setup with a separate reference wave, as in off-axis holography, then both of the first
two terms on the right side of Equation (4.1) can be subtracted, and the holographic image of the object is dark against a light background (Fig. 4.1c). For a mostly opaque object (Fig. 4.1d), there is not enough undiffracted reference in Gabor holography to generate the image correctly (Fig. 4.1e), whereas Michelson holography generates a correct image (Fig. 4.1f).

Figure 4.1: Gabor and Michelson holography of a mostly transparent object: a) mostly transparent object; b) Gabor holographic image; c) Michelson holographic image. Gabor and Michelson holography of a mostly opaque object: d) mostly opaque object; e) Gabor holographic image; f) Michelson holographic image.
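A simulation in this spirit can be sketched with the angular spectrum method: propagate the transmitted field to the detector, record the intensity of Equation (4.1), and back-propagate. This is a hedged sketch, not the code used for Figure 4.1; the grid size, pixel pitch, and cross-shaped object are illustrative:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z (angular spectrum method)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                 # spatial frequencies, cycles/m
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)          # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Mostly transparent object: unit transmission with a thin opaque cross.
n, dx, lam, z = 256, 7.4e-6, 532e-9, 2.0e-3
t = np.ones((n, n), dtype=complex)
t[n//2 - 2:n//2 + 2, n//2 - 30:n//2 + 30] = 0.0
t[n//2 - 30:n//2 + 30, n//2 - 2:n//2 + 2] = 0.0

field_at_ccd = angular_spectrum_propagate(t, lam, dx, z)       # R + O at the CCD
hologram = np.abs(field_at_ccd)**2                             # |R + O|^2, Eq. (4.1)
recon = angular_spectrum_propagate(field_at_ccd, lam, dx, -z)  # back-propagation
```

Back-propagating the full complex field recovers the object exactly; reconstructing from the intensity-only `hologram` instead reproduces the twin-image blur discussed below.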
4.3 Limitations of Gabor Holography

Gabor holography suffers from certain limitations which restrict the extent of its applicability. From the simulation in Figure 4.1 one can see that Gabor holography is best suited to imaging low-density fields of small, discrete opaque objects such as particle fields, or larger but highly transparent objects such as biological cells. This type of object minimizes the amount of noise from the disturbing |O|^2 term.

Besides being limited to specific types of objects, a second limitation of Gabor holography is the overlapping conjugate image, as depicted in Figure 4.2. The problem lies in the fact that the twin images are not spatially separable, as they are in off-axis holography. When the real image is brought into focus it is accompanied by the out-of-focus virtual image. Alternately, an observer looking at a focused virtual image also sees a defocused real image.

The conjugate image, however, is generally not considered a significant problem in Gabor holography because at large enough recording distances its effect appears as a background signal in the reconstructed image which is often negligibly small. The out-of-focus conjugate image amplitude is proportional to the inverse of the recording distance z, so the in-focus image increasingly dominates as z grows. At small recording distances, the contrast between the image and its conjugate image is low and the two images merge. Therefore, the recording distance must be sufficiently large for adequate suppression of the disturbing conjugate image.

A number of methods have been developed for removal of the conjugate image and recovery of the phase information [45,46]. If the phase distribution in the hologram plane is recovered, then it is possible to reconstruct the whole wave field exactly.
Figure 4.2: Formation of the conjugate image in a Gabor hologram.
Phase-shifting digital holography is one technique in particular which has attracted widespread interest [47]. In the phase-shifting technique, the dc term and conjugate image are removed through multi-exposure holographic recording while shifting the phase of the reference field by an integer fraction of 2π. While this method has the disadvantage of increasing the experimental effort, a more serious limitation for practical use in microscopy is the requirement of multiple exposures, which hinders the investigation of dynamic objects.

Although Gabor holography does not record the phase, it is still very useful for the study of suitable objects at high resolution. By taking advantage of the large depth of field and the plane-to-plane numerical reconstruction capability of digital holography, one can produce 4-D representations of the paths followed by micron-sized objects such as biological samples and particles [48].

4.4 Recording of Digital Gabor Holograms

Figures 4.3 and 4.4 depict the optical setup for hologram recording in the digital Gabor holographic optical system in transmission geometry. The 532 nm coherent light from a frequency-doubled Q-switched Nd:YAG laser (Continuum Minilite II, shown schematically with specifications in Figures 4.5 and 4.6), operating at ~2 mJ/pulse, is used for hologram recording.

The main advantage of using a pulsed laser is that it can record an object field at selected times, and so it serves as a useful tool in studying the dynamics of fast phenomena. To ensure proper timing in the experiment, a digital delay/pulse generator (Stanford DG535) is used to synchronize the pulsed laser and digital camera.
Figure 4.3: Experimental setup for recording of digital Gabor holograms.
Figure 4.4: Bird's-eye photograph of the experimental setup for digital Gabor holography (labeled components: pulsed laser, spatial filter, lenses L1 and L2, object, CCD camera).
Figure 4.5: Optical layout of the Continuum Nd:YAG Minilite pulsed laser.

Figure 4.6: Continuum Minilite laser optical layout and specifications.
Three channels on the delay/pulse generator are used for triggering: channels T0, A, and B, as displayed in Figure 4.7. Channel T0 is used for triggering the flashlamp and Channel A is used to trigger the Q-switch on the pulsed laser. Channel B is used to trigger the CCD camera to capture a frame. Each channel is triggered with a TTL pulse. A delay of 250 µs between Channels T0 and A fires the pulsed laser at maximum output. Channel B is delayed by 80 ns with respect to Channel A. The main purpose of this type of triggering is that it allows the frame capture on the CCD to synchronize with the dynamics of the object. The CCD camera is a Sony DFW-V500 YUV/monochrome, 640 x 480 pixels with pixel size 7.4 µm x 7.4 µm. The camera can switch between frame rates of 3.75, 7.5, 15, and 30 frames per second, as well as one-shot capture.

A pulse fired from the laser is reflected from mirrors M1 and M2, of quality λ/10. The spatial filter brings the beam into sharp focus by means of a 20x microscope objective with focal length 9.0 mm. A 5 µm pinhole placed at the focal plane of the microscope objective constitutes the filter. This acts to remove the unwanted multiple-order energy peaks and pass only the central maximum of the diffraction pattern; this is a diffraction plane in itself. A lens of focal length f = 3.0 cm is used to collimate the beam from the spatial filter. The coherent plane wave then propagates through the object in the setup, which is located a distance z from the focal plane of the CCD detector, H. The light scattered by the object serves as the object wave and the unscattered light serves as the reference wave. The interference between these two waves forms a geometrically magnified diffraction pattern on the CCD sensor using a 20x lens (L1). Only a small amount of spherical aberration is introduced to the image since both reference and object beam angles are normal to the recording plane. The role of
reconstruction is to obtain the 3-D structure of the object from the 2-D hologram on the screen or, in physical terms, to reconstruct the wave front at the object. In the reconstruction stage the same plane wave is diffracted numerically. A number of LabVIEW and MATLAB programs are used for control of the experiment, for numerical computations, and for presentation of the resulting images.

Figure 4.7: Triggering the system using the pulsed laser in order to capture dynamic events on the CCD.
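The numerical diffraction step can be sketched as an FFT-based convolution of the hologram with a spherical-wavelet kernel. This is a minimal sketch of the Huygens convolution method used throughout this chapter, not the thesis' LabVIEW/MATLAB code; it omits the constant 1/(iλ) prefactor, which only rescales the amplitude:

```python
import numpy as np

def huygens_reconstruct(hologram, wavelength, dx, z):
    """Numerically diffract the hologram to distance z by convolving it with
    the spherical Huygens wavelet exp(ikr)/r, evaluated via FFTs."""
    n = hologram.shape[0]
    k = 2 * np.pi / wavelength
    x = (np.arange(n) - n // 2) * dx
    X, Y = np.meshgrid(x, x)
    r = np.sqrt(X**2 + Y**2 + z**2)
    kernel = np.exp(1j * k * r) / r
    # Center the kernel at the array origin for a circular FFT convolution.
    return np.fft.ifft2(np.fft.fft2(hologram) * np.fft.fft2(np.fft.ifftshift(kernel)))
```

Taking `np.abs(...)` of the returned field gives the amplitude image; repeating the call with varied z implements the numerical focusing used in the experiments below.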
4.5 Experimental Results of Gabor Holography

Figures 4.8(a) and 4.8(b) are the hologram and the reconstructed amplitude image of a U.S. Air Force (USAF) 1951 positive resolution test target recorded with the experimental setup in Figure 4.3. The image area is 250 x 250 µm^2, recorded with 464 x 464 pixels, and the image is reconstructed using the Huygens convolution algorithm at z = 1.30 mm from the hologram. The determination of the best focus for the reconstructed image is performed by observation alone.

After image capture of the hologram and digital transfer to the computer, the object is removed from the experimental setup and twenty consecutive images of the reference beam are captured. The reference beam images are then digitally transferred to the computer and averaged to facilitate the removal of the |R|^2 term from the holographic interference in Equation (4.1). This in turn leaves

|H|^2 - |R|^2 = |O|^2 + R*O + RO*        (4.2)

The dc term is mainly made up of the contribution of the reference wave, and therefore this term is largely eliminated by this subtraction of the average intensity from the hologram. Note, however, that this method does not fully remove the dc term: the contribution of the |O|^2 term remains and cannot be removed.

The smallest observable reconstructed vertical three-bar pattern of the resolution target in the reconstructed amplitude image is that of the third element of group 7, which corresponds to a spatial frequency of 161 line pairs per mm, a line width of 3.1 µm, and a line length of 0.0155 mm (see Table 3.1 in Chapter 3). The details of this element are
clearly observable in the reconstructed amplitude image. This agrees well with the predicted resolution limit of around 2 µm using

δ ≈ λz / (NΔx)        (4.3)

where δ is the lateral resolution, λ is the wavelength (532 nm), z is the distance between the object and the CCD array, N is the number of pixels, and Δx is the CCD pixel size (7.4 µm).

Figure 4.8: The hologram (a), the reconstructed amplitude image (b), and the direct image (c) of groups 6 and 7 of the USAF 1951 positive resolution target. (Area = 250 µm x 250 µm, Pixels = 464 x 464, z = 1.30 mm)
Comparing the reconstructed amplitude image to the direct image displayed in Fig. 4.8(c), one sees that the image quality of the reconstructed amplitude image is degraded. The opaque bars of the resolution target occupy a significant portion of the image area, reducing the overall image transparency; as a result there is not enough undiffracted light to reconstruct the image correctly. In addition, the localized presence of the defocused conjugate image in the reconstructed image plane creates a blur around the bars in the image.

Figures 4.9(a) and 4.9(b) are the hologram and reconstructed amplitude image of groups 6 and 7 of the resolution target. The image area is 250 x 250 µm^2, recorded with 464 x 464 pixels, and the image is reconstructed using the Huygens convolution algorithm at z = 2.20 mm from the hologram. Note the relatively higher image quality of the amplitude reconstruction compared to that shown in Figure 4.8(b). This could be because of the larger recording distance used in this example. However, the image is still largely degraded due to the reduction in overall transparency of the image.

Figure 4.9: The hologram (a) and reconstructed amplitude image (b) of groups 6 and 7 of the USAF 1951 positive resolution target. (Area = 250 µm x 250 µm, Pixels = 464 x 464, z = 2.20 mm)
Figure 4.10 displays the Fourier transform of the hologram in Fig. 4.9(a). The overlap of the dc and holographic terms is seen. Most of the energy in the image is concentrated at low frequencies from the dc term, most of which can be removed by subtracting the average of the reference beam.

Figure 4.10: Fourier transform of the digital Gabor hologram in Fig. 4.9(a).

Figures 4.11(a) and 4.11(b) are the hologram and the reconstructed amplitude image of an assembly of onion cells, displaying an area of 400 µm x 400 µm recorded at 456 x 456 pixels, and reconstructed using the Huygens convolution algorithm at a distance of z = 2.50 mm. The onion cell size and structure are well established and easily identifiable, with cells ranging from 0.25 to 0.4 millimeters in length. The samples were mounted by cutting into the surface of fresh onion material and peeling off a layer of the outermost cells with a razor blade. The sample was then moistened and applied directly between the coverslip and microscope slide.
Figure 4.11: The hologram (a), the reconstructed amplitude image (b), and the direct image (c) of a collection of onion cells. (Area = 400 µm x 400 µm, Pixels = 456 x 456, z = 2.50 mm)

The amplitude reconstruction is able to partially reconstruct the conspicuous cell walls. Nonetheless, when compared to the direct image in Figure 4.11(c), one can see that the image quality is again rather poor. There are large-scale residual features in the interior of the cells, which suggests that the cells may be dead or dying; this introduces a significant amount of coherent noise into the image. These residual features also reduce the overall transparency of the cell, and in turn generate
adverse effects on the Gabor reconstruction due to the disturbing |O|^2 term.

The large depth of field of digital Gabor holographic microscopy with numerical reconstruction provides an ideal tool for the study of small discrete objects at different focal planes. The hologram and the reconstructed amplitude image of copolymer microspheres with a mean diameter of 9.6 µm are shown in Figures 4.12(a) and (b) respectively, each displaying an area of 300 µm x 300 µm recorded at 364 x 364 pixels, and reconstructed at a distance of z = 5.20 mm using the Huygens convolution method. The microspheres were prepared by mounting in a thin layer of deionized water between a microscope slide and a glass coverslip. The diffraction rings of each particle are recognizable in the hologram. When the distance between the particles and the CCD sensor is increased, the spacing between concentric rings belonging to an individual particle also increases. Figure 4.12(c) shows the direct image of the microspheres.

In comparison with the direct image, the reconstructed amplitude image clearly resolves all isolated spheres as well as the two spheres that appear to be in direct contact. The increased overall transparency of the hologram minimizes the degrading effect of the |O|^2 term and results in a higher-quality image reconstruction compared to the previous examples of the resolution target and onion cells.

Figures 4.13(a) and 4.13(b) are the hologram and the reconstructed amplitude image of a particle field containing differing densities of copolymer microspheres within the image. The image area is 300 x 300 µm^2, recorded with 364 x 364 pixels, and the image is reconstructed using the Huygens convolution algorithm at z =
5.60 mm from the hologram.

Figure 4.12: The hologram (a), the reconstructed amplitude image (b), and the direct image (c) of copolymer microspheres. (Area = 300 µm x 300 µm, Pixels = 364 x 364, z = 5.20 mm)

Figure 4.13: The hologram (a) and the reconstructed amplitude image (b) showing differing densities of copolymer microspheres. (Area = 300 µm x 300 µm, Pixels = 364 x 364, z = 5.60 mm)

The speckle noise in the image increases with the particle density. Therefore, for a large group of small objects such as the microspheres clustered
together as in the bottom part of the images in Fig. 4.13, the speckle intensity may actually be comparable to the focused particle intensity, which complicates individual particle detection. One may see this effect in the amplitude reconstruction in Fig. 4.13.

Figures 4.14(a) and 4.14(b) are the hologram and the reconstructed amplitude image of a paramecium, a small unicellular organism found in freshwater ponds. The image area is 250 µm x 250 µm, recorded with 464 x 464 pixels, and reconstructed at a distance of z = 0.90 mm using the Huygens convolution method.

Figure 4.14: The hologram (a), the reconstructed amplitude image (b), and the direct image (c) of a paramecium. (Area = 250 µm x 250 µm, Pixels = 464 x 464, z = 0.90 mm)
Paramecia belong to the protozoa family and range in size from approximately 100 to 300 µm in length. These organisms contain many complex and interesting features within a single cell, as shown in the anatomical diagram presented in Figure 4.15 [49]. In the reconstructed amplitude image it is difficult to identify specific intracellular features as depicted in the anatomical diagram. However, when one makes a comparison with the direct image in Fig. 4.14(c), it becomes clear that the paramecium contains a number of unresolved intracellular particles with a large range of structural scales, which effectively increase the speckle noise and simultaneously reduce the transparency of the paramecium.

Figure 4.15: Anatomy of a paramecium.
4.6 Digital Gabor Holographic Movies

Gabor holography offers both a rapid and efficient approach to recording digital holograms. This property is particularly useful in capturing the motion of objects moving in depth. A digital hologram contains information on the whole optical field in a three-dimensional image volume, which allows for calculation of the image field in any focal plane. A time series of digital holograms then provides complete four-dimensional information: the object's three-dimensional spatial images as well as the time evolution of those images. Once a movie of digital holograms is recorded, one can reconstruct the images of the object and focus on any focal plane as a specimen under observation moves up and down in the image space. This is the most unique capability of digital holographic movies, in contrast to conventional video microscopy, where only the images of the specific focal plane used in recording are preserved and the information of all the other planes is lost. This is a critical advantage, especially in the microscopic recording of animated microbes that are constantly swimming in and out of a large range of focal distances. In conventional video microscopy it is not feasible to track the focal distances of rapidly moving microbes, whereas in digital holography a series of holograms can be recorded at a fixed distance, the images can later be numerically focused at leisure, and one is able to track the rapidly moving microbes as needed.

To demonstrate these technical advantages for the creation of holographic movies, Figure 4.16 displays a time series of twelve holograms showing a microbe moving progressively through pond water between each of the hologram frames. The image area of each frame is 100 x 100 µm^2, recorded at 428 x 428 pixels, and each frame is captured at an interval of Δt = 0.40 s.
Figure 4.16: Time series of holograms of a microbe moving progressively in pond water. (Area = 100 µm x 100 µm, Pixels = 428 x 428)

The holograms are recorded by synchronizing the camera with the laser pulses under the control of the digital delay generator. The time-series amplitude images in Figure 4.17, corresponding to the holograms in Figure 4.16, are reconstructed using the Huygens convolution method, adjusting image distances in the range z = 0.352-0.354 mm for best focus of the microbe (highlighted by a circle) under observation. The images show the track of the specimen to demonstrate
that time resolution is achieved. The reconstructed amplitude images are then composed into AVI files to produce a holographic movie in which the microbe is consistently in focus even though it is moving through focal planes in depth. This is not possible with conventional microscopy.

Figure 4.17: Time series of reconstructed amplitude images of a microbe moving in pond water. (Area = 100 µm x 100 µm, Pixels = 428 x 428, z = 0.352-0.354 mm)
The next biological example concerns euglena, single-celled organisms with spindle-shaped bodies ranging in size from 0.025 to 0.050 mm in length. Figure 4.18(a-d) shows a time series of holograms recorded 0.40 s apart, displaying a number of euglenas at different focal depths swimming in pond water. Figures 4.18(e-h) are the corresponding amplitude images, reconstructed using the Huygens convolution method, adjusting image distances in the range z = 0.464-0.471 mm for best focus of the individual euglenas. Reconstruction at different depths throughout the volume brings different individuals into focus. One can see how we have focused on one particular euglena moving between frames. Note how the other euglenas, one of which appears to be in focus in the holograms, now appear out of focus in the reconstructed amplitude frames, as they are located at depths different from the plane of focus. In addition, one may observe smaller objects swimming about; closer examination reveals them to be algae of about 10 µm in diameter.

Figure 4.18: Time series of holograms (a-d) and their corresponding reconstructed amplitude images (e-h) of a number of euglenas swimming in different focal planes. (Area = 200 µm x 200 µm, Pixels = 400 x 400, z = 0.464-0.471 mm)
Any of the individual euglenas can be focused throughout the entire movie. This is a significant advantage for a biologist interested in a number of objects moving in depth during a time sequence. By contrast, in a frame captured from conventional video microscopy, as in Figure 4.19, the information on euglenas at focal planes other than the one viewed in the figure is effectively lost.

Figure 4.19: A single frame of euglenas in pond water captured by a conventional video microscope.

4.6.1 Automatic Focusing for Digital Gabor Movies

After a number of holograms have been recorded, it remains to reconstruct each of the holograms. If an object recorded in the holograms is moving in depth, then the reconstruction distance is required to change between holographic frames. For a user to reconstruct, for example, hundreds of holograms for the object's best focus by observation alone becomes a time-consuming and tedious procedure.
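This focusing step can be automated by scoring candidate reconstruction distances with a sharpness metric and keeping the distance that maximizes it. A hedged sketch: the gradient-variance metric is one common choice among several, and `reconstruct` is a placeholder for a numerical diffraction routine such as the Huygens convolution, not a function defined in this work:

```python
import numpy as np

def focus_score(img):
    """Sharpness metric: variance of the intensity-gradient magnitude.
    Defocused images have little high-frequency content, so the score
    rises as the image comes into focus."""
    gy, gx = np.gradient(img)
    return float(np.var(np.hypot(gx, gy)))

def autofocus(hologram, reconstruct, z_candidates):
    """Return the candidate distance whose reconstructed amplitude image
    maximizes the sharpness metric."""
    scores = [focus_score(np.abs(reconstruct(hologram, z))) for z in z_candidates]
    return z_candidates[int(np.argmax(scores))]
```

In practice the scan would start from a user-specified distance and refine iteratively rather than evaluate a fixed list, and, as discussed below, noise and crowded scenes can defeat any single metric.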
An interesting property of Gabor holograms is that in the reconstructed amplitude image the object appears bright against a mostly dark background. Coupled with the fact that only specific objects with certain characteristics are suitable for imaging with Gabor holography, it is possible to construct autofocusing algorithms that use the contrast and other properties of the image to find the best focus of the object. Upon examining the frequency components of an unfocused image, one will find relatively few high-frequency components; as the image comes into focus, the high-frequency components increase.

An example of the use of an autofocus program is illustrated in Figure 4.20. The figure displays the hologram and the reconstructed amplitude image of the resolution target. The program uses two approaches. The first uses frequency content to determine edge sharpness. The other uses an edge detector and measures the standard deviation of the intensity of the edges; the larger the value, the better the focus. The program works by iteratively changing the reconstruction distance, starting from a user-specified value, to find the largest numerical values of contrast and therefore the best focus.

The program algorithm is not able to find the correct best focus in the figure, even though the starting z value was already fairly close to the actual reconstruction plane for best focus. The methodology is probably also made more difficult by how the noise affects different parts of the image, and the sheer number of objects involved may also cause problems. There are a number of other approaches that require investigation, and other
contrast methods, such as squared-gradient, absolute-variation, or Laplacian methods, may yield better results. The successful integration of a program that automatically identifies the correct focus of an object will create a significant improvement for the application of this technique in holographic movies. This is a subject of future study.

Figure 4.20: Autofocusing of the resolution target.

4.7 Conclusions of Digital Gabor Holography

In this chapter, we have described digital Gabor holography for applications to biological microscopy. The holographic optical system records Gabor holograms directly from an object onto a CCD camera and then reconstructs the object wavefield numerically by use of the Huygens convolution method. Although there is a minimum reconstruction distance requirement associated with the Huygens convolution method, this is insignificant since large recording distances are usually required in order to
minimize the merging of the holographic terms. The superposition between the focused holographic image and the unfocused conjugate image cannot be avoided, and some noise is therefore inevitably introduced into the image. However, by increasing the recording distance, it is possible to partially suppress the conjugate image so that it does not disturb the image to a significant degree.

The microscopic capability of the Gabor holographic system is demonstrated through the experimental results of reconstructed amplitude images of diverse objects such as a resolution target, copolymer microspheres, onion cells, and protozoa. The simulation and experimental results clearly show that only specific types of objects with certain physical characteristics are suitable for imaging with Gabor holography. These objects are required to be either discrete, small opaque objects, or larger but mostly transparent objects. In either case the overall transparency of the image must be high enough that the |O|^2 term amounts to only a small amount of background noise. Biological samples such as paramecia and onion cells, which are considered in microscopy terms to be mostly transparent, actually contain a large range of intracellular features which create speckle noise in the coherent imaging system. These features, in turn, reduce the transparency and cause further degradation in image quality.

Digital Gabor holographic movies are seen to offer a rapid and efficient approach to image capture of a time series of holograms. In the experimental results, we have demonstrated that high-resolution tracking of many objects in 4-D from a single hologram data set can be performed. High lateral resolution is achievable, and, thus,
tracking of organisms as small as bacteria may be possible. By the use of numerical focusing, each single hologram in the data set can be reconstructed at selected depths, so that the trajectory and speed of a collection of objects can be captured as 3-D data sets by stacking of the 2-D reconstructions. The results show that the digital Gabor holographic system outperforms conventional microscopic methods and can accurately calculate an object's position in the z direction. Other advantages of digital Gabor holography include the simplicity of sample preparation, particularly for biological samples, where neither sectioning nor staining is required; this means that living cells, which are largely transparent, can also be viewed. Outside of biological microscopy, there are many other applications for 4-D Gabor holographic movies, including particle tracking.
Chapter 5
Quantitative Digital Holographic Microscopy

This chapter presents the results of digital holography experiments conducted in an off-axis configuration. Section 5.1 begins the chapter with an introduction to phase-contrast microscopy and reviews some of the commonly used techniques in this field. Section 5.2 examines how phase information from an object is produced and how one may employ this information to obtain quantitative evaluation. Section 5.3 compares the different reconstruction methods on a standard resolution test target. Section 5.4 studies the differences between phase and amplitude information in biological objects. Section 5.5 presents examples of quantitative phase images obtained from digital holography experiments that examine the resolution of the optical system and demonstrate the quantitative and qualitative capabilities of digital holography for biological microscopy. Section 5.6 discusses the use of quantitative digital holographic movies in microscopy and demonstrates the advantages over conventional techniques. Finally, Section 5.7 ends this chapter with a conclusion and discussion.

5.1 Introduction

Many microscopic biological specimens, such as living cells and their intracellular constituents, are mostly transparent and therefore problematic for conventional bright-field microscopy. A number of techniques have been developed
for rendering transparent phase objects visible; these have played very important roles in the development of modern biology and medicine [50], and include dark-field, Zernike phase-contrast, and Nomarski differential interference contrast (DIC) microscopies.

In dark-field microscopy, only the scattering centers and boundaries contribute to the image signal against a zero background. In the phase-contrast microscope, the phase variation is converted into amplitude variation, and in DIC the interference of two sheared polarization components results in images that have a shadow effect and thus give a three-dimensional perception of the object. In any of these techniques, the phase-to-amplitude conversion is nonlinear and there are significant artifacts in the images, such as the halo in phase contrast and the disappearance of contrast along the direction perpendicular to the shear in DIC. Quantitative phase imaging is not feasible with these techniques.

Quantitative phase imaging is important because it allows the determination of the optical thickness profile of a transparent object with subwavelength accuracy. The optical thickness profile depends on the physical thickness as well as the optical index variation, and thus one can extract this information with great accuracy [51]. White-light interference microscopy [22] and optical coherence microscopy [52] have been used to generate quantitative phase images, but these require multiple exposures or mechanical scanning.

Digital holography offers an excellent approach to quantitative phase imaging. A hologram consisting of the interference between the object and reference beams is recorded by a CCD camera, and the holographic image is numerically reconstructed inside a computer using the results of diffraction theory. Calculation of the complex optical field
allows direct access to both the amplitude and the phase information of the optical field, and by numerical focusing the images can be obtained at any distance from a single recorded hologram.

5.2 Quantitative Evaluation by Digital Holography

Amplitude objects change only the amplitude component of the light, leaving the phase unchanged. Phase objects, on the other hand, change the phase of the light passing through them, leaving the amplitude unaltered, as illustrated in Figure 5.1. As each wavefront passes through the specimen, it is deformed and retarded (or advanced) according to the geometry, refractive index differential, and thickness of the specimen.

An incident wavefront divides into two different components upon passing through a phase specimen. The first component is the undiffracted, zero-order wavefront that passes through and around the specimen but does not interact with it. The second component is the deviated or diffracted wavefront, which becomes scattered in many directions and increases with specimen size. This scattering is essentially caused by discontinuities in the specimen's refractive index. A non-absorbing substance with a homogeneous refractive index is transparent; biological tissue, however, often contains many refractive index discontinuities.

A refractive index change in a transparent medium leads to a change of the optical path length, which is proportional to the product of the refractive index and the optical thickness. This optical path length change can be formulated mathematically as a phase change in the wavefront.
Figure 5.1: A phase shift is produced when a wavefield impinges on an object with a refractive index different from that of the surrounding medium. The phase shift is a product of the variation of the relative index of refraction and the optical thickness of the object.
The corresponding change in the interference phase is

$$\Delta\varphi(x,y) = \frac{2\pi}{\lambda}\int \left[ n(x,y,z) - n_0 \right] dz \qquad (5.1)$$

where $n_0$ is the refractive index of the medium under observation in its initial, unperturbed state and $n(x,y,z)$ is the final refractive index distribution. From Equation (5.1) the physical thickness of the object can be expressed as

$$d_{obj} = \frac{\lambda}{2\pi}\,\frac{\Delta\varphi_{obj}}{n_{obj} - n_0} \qquad (5.2)$$

where $d_{obj}$ represents the physical thickness of the object, $\Delta\varphi_{obj}$ is the phase change produced by the object with refractive index $n_{obj}$, and $n_0$ is the refractive index of the surrounding medium. Once the phase information $\Delta\varphi_{obj}$ is obtained from the reconstruction of the hologram, knowledge of the physical thickness of the sample allows calculation of the refractive index, or vice versa. This method can be applied to the shape measurement of transparent microscopic samples, such as living cells.

5.3 Experimental Setup for Off-Axis Digital Holography

The digital holography experiments are performed using the apparatus schematically depicted in Figure 5.2; Figure 5.3 shows a photograph giving a bird's-eye view of the setup. A miniature pulsed Nd:YAG laser (Continuum Minilite) operates at 532 nm, with an energy of 2 mJ per 10 ns Q-switched pulse and a pulse repetition rate of 15 Hz. The laser output, which is spatially filtered and collimated, is split into a reference and an object beam in a transmissive interferometer setup based on the Mach-
Zehnder configuration. The benefit of this type of interferometer is that the path lengths in both arms are equal due to the symmetry of the setup; it can therefore be used to measure the amount of phase modulation caused by an object placed in one arm of the interferometer. The object specimen, mounted on an xyz-translation stage, is placed at a distance z from the hologram plane H, whose magnified image is projected onto the CCD camera along with the reference beam. A pair of similar microscope objectives, either 20X, 0.4 NA, or 40X, 0.65 NA depending on the desired lateral magnification, is used in the two optical branches to match the curvatures of the two wavefronts. A slight angle is introduced between the object and the reference beams by tilting the beam splitter BS2 for off-axis holography. The camera (Sony DFW-V500) has an array of 640 x 480 pixels on a 4.7 x 3.6 mm² active area, with 8-bit gray scale output. A digital delay generator (Stanford Research DG535) triggers both the laser and the camera at a repetition rate of 15 Hz. The microscope objectives L1 and L2 and the beam splitter BS2 are adjusted so that the interference fringes are straight, avoiding the need for any digital correction of the spherical aberration introduced by the microscope objectives. Another parameter that characterizes the interference fringes is the spacing between them. An IEEE1394 cable connects the camera to the desktop computer, which processes the acquired images and calculates the holographic diffraction using a number of programs based on LabVIEW and MatLab.
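The fringe spacing allowed by these camera parameters can be checked with a short calculation: the fringe period must span at least two pixels, which bounds the usable off-axis angle. This is a back-of-the-envelope sketch (variable names are illustrative, not from the thesis code):

```python
import math

# Parameters from the setup described above
wavelength = 532e-9        # m, Nd:YAG second harmonic
sensor_width = 4.7e-3      # m, active-area width of the Sony DFW-V500
pixels_across = 640

pixel_pitch = sensor_width / pixels_across   # ~7.3 um per pixel

# Nyquist limit: fringe period >= 2 pixels bounds the off-axis angle
# between object and reference beams.
max_angle_rad = math.asin(wavelength / (2 * pixel_pitch))
print(f"pixel pitch: {pixel_pitch * 1e6:.2f} um")
print(f"max off-axis angle: {math.degrees(max_angle_rad):.2f} deg")
```

For these values the off-axis tilt introduced at BS2 must stay within roughly two degrees, which is why the alignment procedure described below monitors the angular spectrum in real time.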
Figure 5.2: Digital holographic setup for off-axis holography.
Figure 5.3: Bird's-eye view of the digital holographic setup for off-axis holography.
The LabVIEW programs allow the user to view the amplitude, the phase, and the real and imaginary parts of the reproduced object. The program can also change the angle of the plane at which the object is recreated, which is equivalent to viewing the object from a different angle. This is a very useful feature for flat objects, because the off-axis hologram reproduces the image with a horizontal tilt equal to the angle of the reference mirror, which would generate phase maps that exceed the repeat distance and wrap around. The angle is selected so that the phase maps have the least number of fringes. This can also compensate for any tilt the object may have originally had.

5.3.1 Optical alignment

An important factor in the experimental methodology is to quickly determine whether the hologram produced at the CCD camera is of sufficiently good quality. It is very difficult to know whether a hologram will provide a high-quality reconstruction of the amplitude and phase image simply by looking at the holographic image. Furthermore, the high sensitivity of the hologram to minute changes in alignment makes a substantial difference to the quality of the image reconstruction. An efficient methodology for accurate alignment of the holographic optical system is to observe the angular spectrum (Fourier transform) of the hologram recorded by the CCD camera in real time. Any changes made in the experiment are registered in the angular spectrum in real time, allowing one to make subtle adjustments to obtain the best possible spectrum that will be able to
produce a good reconstruction. Factors to consider are sufficient angular separation of the dc and holographic terms, and any spurious noise. Particular attention must be paid to the adjustment of the angle between the object and reference beams, which must not exceed a maximum value.

5.4 Phase Unwrapping

Once the hologram has been reconstructed, both the amplitude and the phase information of the object are available. The phase information, however, contains 2π discontinuities wherever the extreme values corresponding to either +π or −π are reached. In general, the true phase may range over an interval greater than 2π, in which case the phase contains artificial discontinuities. To improve interpretation of the reconstructed phase image, the process of phase unwrapping removes all appropriate multiple-of-2π phase errors from the image. Unwrapping these discontinuities is a matter of adding an appropriate integer multiple of 2π to each pixel element of the wrapped phase map. In practice, however, the presence of noise and residues complicates effective phase unwrapping. Figure 5.4 illustrates the problem of the 2π discontinuity. The theoretical object is a tilted ramp of arbitrary height. Wherever a phase variation of 2π exists, a phase discontinuity occurs. The true unwrapped phase image can be expressed as

$$\varphi_{unwrap}(r) = \varphi_{wrap}(r) + 2\pi\, n(r) \qquad (5.3)$$

where $\varphi_{unwrap}(r)$ is the unwrapped phase, $\varphi_{wrap}(r)$ is the wrapped phase in the range $-\pi$ to $+\pi$, $r$ represents the pixel position, and $n(r)$ is an integer factor to be determined.
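Equation (5.3) can be illustrated with a minimal 1-D sketch of the tilted-ramp case. This is not the thesis code; it reproduces the idea with NumPy under the simplest noise-free assumption:

```python
import numpy as np

# Wrapped phase of a tilted ramp (the situation in Figure 5.4): the true
# phase grows linearly well past 2*pi, but is observed folded into (-pi, pi].
true_phase = np.linspace(0.0, 6.0 * np.pi, 200)
wrapped = np.angle(np.exp(1j * true_phase))   # fold into (-pi, pi]

# 1-D unwrapping per Eq. (5.3): wherever the jump between neighboring
# samples exceeds pi in magnitude, accumulate a multiple of 2*pi.
jumps = np.diff(wrapped)
n = np.cumsum(np.round(-jumps / (2 * np.pi)))  # the integer factor n(r)
unwrapped = wrapped.copy()
unwrapped[1:] += 2 * np.pi * n

assert np.allclose(unwrapped, true_phase)          # ramp recovered
assert np.allclose(unwrapped, np.unwrap(wrapped))  # matches NumPy's built-in
```

On noisy 2-D data this naive neighbor-difference rule fails at residues, which is exactly why the path-dependence and flood-fill issues discussed below matter.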
Figure 5.4: The phase variation of a tilted ramp, displaying discontinuities wherever the object height is greater than the phase variation of 2π.

Figure 5.5 presents the unwrapped phase data for the tilted object in Figure 5.4 with the implementation of Equation (5.3). The discontinuities are removed by the addition of integer multiples of 2π.

Figure 5.5: Removal of 2π discontinuities for the tilted object in Figure 5.4 by addition of integer multiples of 2π.
A number of phase unwrapping algorithms have been developed, each with varying complexity and requirements [53, 54]. The simplest algorithm checks adjacent pixels in the wrapped phase and, when the difference exceeds some threshold value, takes this difference as the phase jump and adds 2π. However, depending on the initial start position, and thereby the unwrapping path chosen, one may obtain different unwrapped phase fields. As phase unwrapping is an integrated process, an error at a previous point can propagate along the unwrapping path. Therefore, determining an optimized path is a critical step toward completely reliable phase unwrapping. Other issues in phase unwrapping include the presence of speckle noise and of complex surfaces, which may cause an acute change in the fringe distribution, or even fringe discontinuity. The process is therefore not as simple as it may first seem. In this thesis the phase images are all processed by a flood-fill algorithm that performs computational phase unwrapping, as outlined in Appendix C. Like many other phase unwrapping algorithms, it starts from a defined position, calculates the difference between neighboring pixels in the wrapped phase, and, when this exceeds some threshold value, takes the difference as the phase jump. The algorithm also makes use of edge detection before unwrapping to determine features in the image that may cause errors. The main drawback of this algorithm is the rather long processing time it requires: the average time to unwrap an image is approximately three minutes, which creates a serious limitation for real-time processing of dynamic effects.
5.5 Comparison of numerical reconstruction methods

Figure 5.6 shows the application of the Fresnel, convolution, and angular spectrum methods to the reconstruction of the image of a resolution target. A 25 x 25 µm² area of the USAF resolution target, containing group 7, element 6, is imaged using 452 x 452 pixels. Figure 5.6(a) shows the holographic interference pattern recorded by the CCD camera, and its Fourier transform in Fig. 5.6(b) is the angular spectrum. It contains three main peaks that correspond to the spectra of the zero order and the two twin images. One of these peaks, the highlighted rectangular area, is selected, multiplied by a propagation phase factor (z = 7 µm), and inverse-Fourier transformed to obtain the amplitude image in Fig. 5.6(c) and the phase image in Fig. 5.6(d). Any further filtering may be carried out in the selected rectangular area highlighted in Fig. 5.6(b) to remove spurious noise components. The noise contributions are clearly observable in the hologram spectrum because they often arrive at the CCD with incidence angles that are distinct from that of the object wave. In addition, noise associated with the camera can also be removed. The individual bars displayed in the figure are 2.2 µm wide and are clearly resolved, consistent with the diffraction-limited resolution of 0.5 µm for a 0.65 NA lens. The resolution target is a positive mask with an opaque chrome film pattern on a transparent glass plate, used in transmission. From the analysis of the phase map, the phase step of the metal strip is ~52°. The physical thickness of the film is given by Equation (5.2), where λ is the wavelength, Δφ_obj is the phase step, and n_obj − n_0 is the index difference between the film and air.
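Rearranging Equation (5.2) to solve for the index difference gives a quick consistency check on these numbers. A sketch, using the values quoted in the text (variable names are illustrative):

```python
import math

wavelength = 532e-9                 # m
phase_step = math.radians(52)       # measured phase step of the chrome strip
film_thickness = 50e-9              # m, assumed known film thickness
n0 = 1.0                            # surrounding medium is air

# From Eq. (5.2): n_obj - n0 = lambda * dphi / (2*pi * d_obj)
delta_n = wavelength * phase_step / (2 * math.pi * film_thickness)
n_film = n0 + delta_n
print(f"estimated film index (real part): {n_film:.2f}")
```

The result, about 2.5, agrees with the estimate quoted for the chrome film.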
Using a known estimate of the film thickness of 50 nm, the optical index (real part) of the metal film is ~2.5. The noise level of the flat area is ~10°, which corresponds to a glass thickness variation of ~30 nm. (On the film-coated bar areas, the lack of light causes larger uncertainty in phase.) The phase map is rendered in pseudocolor 3D perspective in Fig. 5.6(i). Especially notable in the phase map is the lack of the coherent noise that is conspicuous in the amplitude image and prevalent in most other holographic imaging methods. The amplitude and phase images obtained from the Huygens convolution method are shown in Figs. 5.6(e) and 5.6(f), as well as those obtained from the Fresnel method in Figs. 5.6(g) and 5.6(h). The main reason for the obvious degradation of these images is the insufficient off-axis angle to separate out the zero-order component. The effect is most detrimental in the Fresnel images, where part of the holographic image is buried in the zero-order background and its phase image is completely scrambled; in the convolution images the effect shows up as spurious interference patterns. While the minimum off-axis angle must be strictly satisfied in order to avoid the zero-order intrusion in the Fresnel or convolution methods, the control and removal of the zero-order component is straightforward and flexible in the angular spectrum method. Another potential problem is that the Fresnel and convolution methods require a minimum hologram distance to avoid aliasing, whereas the angular spectrum method has no such minimum and the image can be calculated even at zero distance [15]. For the particular example shown here the minimum distance happens to be 2.6 µm and is therefore not an issue, but for images obtained at z = 5 µm < z_min = 18 µm, the Fresnel or convolution methods would not have worked.
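The angular spectrum reconstruction itself reduces to a Fourier transform, a transfer-function multiplication, and an inverse transform. A minimal sketch follows (the function name and parameters are illustrative, not the thesis LabVIEW/MATLAB code; spectrum filtering and carrier removal are omitted):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field by distance z: FFT, multiply by
    the transfer function exp(i*kz*z), inverse FFT. Valid at any z,
    including z = 0, which is the method's advantage discussed above."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fx, fy = np.meshgrid(fx, fx)
    # kz from the dispersion relation; evanescent components are cut off
    arg = (1.0 / wavelength) ** 2 - fx**2 - fy**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Hypothetical parameters loosely matching the example above
wavelength, dx = 532e-9, 55e-9           # ~25 um / 452 px sampling
field = np.ones((452, 452), dtype=complex)  # a uniform plane wave
out = angular_spectrum_propagate(field, wavelength, dx, 7e-6)
# A plane wave only acquires a constant phase; its amplitude is preserved
print(np.allclose(np.abs(out), 1.0))
```

In the actual processing the input to this propagation step is the filtered twin-image peak selected from the angular spectrum, not the raw hologram.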
Figure 5.6: Holography of a resolution target. The image area is 25 x 25 µm² (452 x 452 pixels) and the image is at z = 7 µm from the hologram: (a) hologram; (b) angular spectrum; (c) amplitude and (d) phase images by the angular spectrum method; (e) amplitude and (f) phase images by the Huygens convolution method; (g) amplitude and (h) phase images by the Fresnel transform method; (i) 3D pseudocolor rendering of (d). The individual bars are 2.2 µm wide.
The optical field can be calculated at any number of image planes from a single hologram. Figure 5.7 illustrates the use of the angular spectrum method in numerical focusing of a group 6 element of the resolution target. The panel shows a sequence of eight images calculated in the range z = 1 to 15 µm in steps of 2 µm. Each image is a 30 x 30 µm² area of the resolution target. As the focus is scanned, one observes the bars come into focus as the calculation passes through the various image planes. Note that the object comes into focus at around 10 µm; for the Huygens and Fresnel methods this distance would have been smaller than z_min.

Figure 5.7: Numerical focusing in digital holography of an element of a USAF 1951 resolution target from a single hologram. Images are of a 30 x 30 µm² area (360 x 360 pixels) with z scanned from 1 to 15 µm in steps of 2 µm.
5.6 Phase Imaging Digital Holography

Coherent noise is a major problem for imaging with lasers and often leads to degradation of the image quality. In digital holography, the phase images tend to suffer from coherent noise to a significantly lesser degree than the amplitude images, because the amplitude image reflects the intensity variations in the reference wave, whereas the phase noise mostly comes from the quality of the optical surfaces in the imaging system. Furthermore, while amplitude measurements are equivalent in resolution and quality to classical optical microscopy, phase measurements can lead to images with an optical thickness resolution of a few nanometers. Figure 5.8 displays digital holography of a paramecium, a partially transparent unicellular protozoan with considerable intracellular complexity. The image area is 250 x 250 µm², recorded at 464 x 464 pixels, and the images are reconstructed at z = 770 µm from the hologram. The most conspicuous feature in the amplitude image in Fig. 5.8(b) is one of the two contractile water vacuoles, which appears as a bright spot. The other water vacuole and the macronucleus are not readily visible in the amplitude image, while there is a hint of them in the phase images. Slightly different directions of the reconstruction reference wave are used in the phase images of Figs. 5.8(c) and 5.8(d), which may be useful for emphasizing different aspects of the microscopic images. One may notice from these images that the paramecium is not highly transparent, as demonstrated by the amplitude image, and that there exists a large number of features with varying indices of refraction that produce speckle and somewhat degrade the image quality.
Figure 5.8: Holography of a paramecium, showing (a) hologram, (b) amplitude, and (c), (d) phase images of a 250 x 250 µm² (464 x 464 pixels) area with z = 770 µm.

Figure 5.9 presents another example of digital holography of a paramecium. Note the phase variations that occur on the edge of and within the paramecium itself in the phase image in Fig. 5.9(c). Compared to the amplitude image in Fig. 5.9(b), the phase image reveals more information about the intracellular features.

Figure 5.9: Holography of a paramecium, showing (a) hologram, (b) amplitude, and (c) phase image (area = 250 x 250 µm², 464 x 464 pixels, z = 0.561 mm).

Examples of digital holography of onion cells are shown in Figure 5.10. In the upper row, a 100 x 100 µm² area of a layer of onion cells is shown. The cell walls are sharply focused in the amplitude image, and the phase image shows an accurate representation of the optical thickness, modulo wavelength, of the cell bodies. The
images of a 70 x 70 µm² area in the lower row focus on the nucleus of a cell. The phase image gives a clear view of the optical thickness variation of the nucleus in the middle of the bulged body of the cell. A simple quantitative analysis of the cell's index of refraction is possible. By counting the number of fringes, the optical thickness of the nucleus is easily determined to be 3.5λ = 1.86 µm thicker than the cell body. If we assume that the shape of the nucleus is spherical, so that its thickness is the same as the 19 µm diameter of its circular image, then the refractive index difference between the nucleus and the cell body is 0.093.

Figure 5.10: Holography of a layer of onion cells. Images of (a) hologram, (b) amplitude, (c) wrapped phase, and (d) unwrapped phase in the upper row are of a 100 x 100 µm² (416 x 416 pixels) area with z = 174 µm, while images (e), (f), (g), and (h) in the lower row are of a 70 x 70 µm² (464 x 464 pixels) area with z = 6 µm.
5.7 Quantitative Phase-Contrast Microscopy by Digital Holography

Quantitative phase imaging is particularly straightforward in digital holography: one only needs to plot the phase of the calculated complex optical field. Figure 5.11 displays digital holography of the smallest elements of group 7 on the resolution target. The panel displays the (a) hologram, (b) amplitude image, (c) phase image, (d) phase image unwrapped by software algorithm, (e) z-profile of the optical thickness cross section, and (f) pseudocolor 3D rendering of the unwrapped phase image. The hologram is recorded at a distance z = 3 µm from the focal plane of the CCD camera and thus is not recorded in focus. The reconstruction is performed by application of the angular spectrum algorithm, which has no minimum reconstruction distance; the best focus of the image is determined by observation. The smallest resolvable bars are those of group 7, element 6, corresponding to a line width of 2.2 µm. The optical thickness resolution was determined quantitatively by analysis of the noise level in the flat area of the unwrapped phase image, which corresponds to glass thickness variation and is calculated to be around 8 nm by use of Equation (5.2). (On the film-coated bar areas, the lack of light causes a larger uncertainty in phase.) The low-noise characteristics of phase imaging are further demonstrated in Figure 5.12, which shows digital holography of group 6, element 2 of the resolution target. The panel displays the (a) unwrapped phase image, (b) z-profile of the optical thickness cross section, and (c) pseudocolor 3D rendering of the unwrapped phase image. The quantified noise in the flat areas of the unwrapped phase image is determined to be around 3 nm, as displayed in the R.M.S. cross-section profile.
Figure 5.11: Digital holographic images of group 6 of a resolution target with area 50 x 50 µm². The image is at z = 3 µm from the hologram: (a) hologram; (b) amplitude; (c) wrapped phase image; (d) unwrapped phase image; (e) z-profile of the cross section displayed in (d); (f) 3D pseudocolor rendering of (d).
Figure 5.12: Digital holography of group 6, element 2 of the resolution target. The images are the (a) unwrapped phase image, (b) z-profile of the cross section displayed in (a), and (c) 3D pseudocolor rendering of (a).
Figure 5.13 shows another example of digital holography of a group 6 element of the resolution target. The panel displays the (a) hologram, (b) amplitude image, (c) phase image, (d) phase image unwrapped by software algorithm, and (e) pseudocolor 3D rendering of (d). In Fig. 5.13(e), one may notice the large amount of noise produced in the bars due to a lack of signal from an opaque object in a transmission setup. However, this noise is reduced considerably at the edges of the bars, suggesting that there may indeed be some signal from these areas. Further evidence of this phenomenon is observed in the amplitude image reconstruction, where there is a small hole in one of the bars; in the phase image the immediate area around this hole appears to be fairly noiseless with respect to the inner parts of the bars. The area around the bars is again demonstrated to contain low noise, as can be seen in the 3D profile of the unwrapped phase image. Figure 5.14 displays higher magnification digital holographic images of groups 8 and 9 of the resolution target, where the smallest resolvable bars are those of group 9, element 2, with a line width of 0.87 µm. This value is consistent with the Abbe criterion

$$0.61\,\lambda/\mathrm{N.A.} \approx 0.81\ \mu\mathrm{m} \qquad (5.4)$$

for diffraction-limited imaging of the optical system. Thus the lateral resolution is restricted only by the diffraction limit. In general, one may note that the lateral resolution of the system given by Equation (5.4) can be improved either by use of a higher-N.A. objective or by an increase in the size of the beam incident on the objective lens.
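Equation (5.4) is easy to evaluate for both objectives used in the setup. A sketch; the pairing of the 0.81 µm figure with the 0.4 NA objective is an inference from the objectives listed in Section 5.3, not stated explicitly in the text:

```python
wavelength_um = 0.532   # Nd:YAG second harmonic, in micrometers

# Abbe limit 0.61 * lambda / NA for the two objectives used in the setup
for label, na in [("20X, 0.4 NA", 0.4), ("40X, 0.65 NA", 0.65)]:
    d = 0.61 * wavelength_um / na
    print(f"{label}: diffraction limit ~{d:.2f} um")
```

The 0.4 NA objective gives ~0.81 µm, matching Eq. (5.4), while the 0.65 NA objective gives the ~0.5 µm limit quoted in Section 5.5.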
Figure 5.13: Digital holographic images of a group 6 element of the resolution target with area 40 x 40 µm². The image is at z = 3 µm from the hologram: (a) hologram; (b) amplitude; (c) wrapped phase image; (d) unwrapped phase image; (e) 3D pseudocolor rendering of (d).
Figure 5.14: Digital holographic images of groups 8 and 9 of a resolution target with area 25 x 25 µm² (424 x 424 pixels). The image is at z = 2 µm from the hologram: (a) hologram; (b) amplitude; (c) wrapped phase image; (d) unwrapped phase image; (e) 3D pseudocolor rendering of (d). The smallest bar is that of group 9, element 2, and is 0.78 µm wide.
A problem with the experimental setup as depicted in Figure 5.2 is that the sample must be mounted vertically instead of horizontally, as is the case in most conventional microscope systems. This is fine for most samples that require no specific preparation and can be managed with some sticky tape and a lens holder. However, it causes an inconvenience when performing imaging analysis of living biological cells. A further problem is that cells require both the correct amount of CO2 and a temperature of around 37.2 °C in order to be sustained over long periods of time. The usual methodology in conventional microscopy is to place the living cells horizontally in a chamber maintained at the correct temperature and CO2 gas concentration. With these factors in mind, adaptations were made to both the sample preparation and the mounting in order to satisfy the requirements for the experiments. The sample preparation for live cell imaging is shown schematically in Figure 5.15. The cells were grown directly on a cover slip in a culture solution. The cover slip was then put inside a rectangular quartz cell (STARNA rectangular cell with path length 1 mm) as illustrated in Figure 5.15, and DPBS (1X, with Ca and Mg) solution was simultaneously added into the quartz cell along with the cover slip containing the cells in order to keep the cells alive for periods of up to approximately 4-6 hours. The object beam entered the sample from the slide glass side. As the cell was attached to the cover slip, the focus was adjusted to the inner side of the cover slip. Although we were able to keep the cells alive for sustained periods of time using the DPBS, the cells were not in optimum condition; this, we believe, severely reduced both the mobility and the quality of the cells. Long periods of cell analysis require a special preparation of the biological sample. This preparation mainly refers to the fixation of the biological sample.
Figure 5.15: Schematic of live sample preparation.

The fixant is applied to kill the cell and to keep its form at a certain stage of development or of change by external influence. This is particularly useful for studies of cells in an essentially frozen state. Figure 5.16 shows images of a living mouse embryo fibroblast cell. Figure 5.16(e), the pseudocolor 3D rendering of Fig. 5.16(d), shows the high quality and contrast of the surface detail of the cell obtained with phase imaging and provides an accurate, quantitative profile of the optical thickness. To produce the optical thickness profile via Equation (5.2), a refractive index estimate of 1.375 was used for the cell [55].
Figure 5.16: Holography of a mouse embryo fibroblast cell. The image area is 60 x 60 µm² (424 x 424 pixels) and the image is at z = 14 µm from the hologram: (a) hologram; (b) amplitude; (c) wrapped phase image; (d) unwrapped phase image; (e) 3D pseudocolor rendering of (d).
Figure 5.17 shows digital holography of an onion nucleus. The panels display the (a) hologram, (b) amplitude image, (c) phase image, and (d) phase image unwrapped by a software algorithm; a pseudocolour 3D rendering of (d) is shown in (e). The image size is 30 x 30 µm² with 436 x 436 pixels, and the image is reconstructed at z = 22 µm from the hologram. The phase image gives a clear view of the optical thickness variation of the nucleus in the middle of the body of the cell.

Figure 5.17: Holography of an onion nucleus. The image area is 30 x 30 µm² (436 x 436 pixels) and the image is at z = 22 µm from the hologram: (a) hologram; (b) holographic amplitude and (c) phase images; (d) unwrapped phase image; (e) 3D pseudocolour rendering of (d).
Figure 5.18 shows another example of digital holography of an onion nucleus. The panels display the (a) hologram, (b) amplitude image, (c) phase image, and (d) phase image unwrapped by a software algorithm; a pseudocolour 3D rendering of (d) is shown in (e). The image size is 40 x 40 µm² with 452 x 452 pixels, and the image is reconstructed at z = 13 µm from the hologram. The nucleus appears close to the cell wall and appears to be enlarged to a significant degree.

Figure 5.18: Holography of an onion nucleus. The image area is 40 x 40 µm² (452 x 452 pixels) and the image is at z = 13 µm from the hologram: (a) hologram; (b) holographic amplitude and (c) phase images; (d) unwrapped phase image; (e) 3D pseudocolour rendering of (d).
Figure 5.19 displays digital holography of red blood cells. The panels display the (a) hologram, (b) amplitude image, (c) phase image, and (d) phase image unwrapped by a software algorithm; a pseudocolour 3D rendering of (d) is shown in (e). The surrounding medium is air, and we apply a constant refractive index estimate of n = 1.375 for the blood cells to produce the optical thickness scaling seen in the pseudocolor 3D rendering. From the analysis of the phase map we infer the average optical thickness of the blood cells to be around 0.6 µm. Figure 5.20 shows the 3D rendering from Fig. 5.19(e) with the colormap adjusted for better color visualization of the red blood cells.

Figure 5.19: Digital holographic images of red blood cells of area 60 x 60 µm² (408 x 408 pixels). The image is at z = 4 µm from the hologram: (a) hologram; (b) amplitude; (c) wrapped phase image; (d) unwrapped phase image; (e) 3D pseudocolor rendering of (d).
Figure 5.20: 3D pseudocolor rendering with colormap adjustment of the unwrapped phase image in Fig. 5.19(d).
Figure 5.21 displays digital holography of a single red blood cell. The panels display the (a) hologram, (b) reconstructed amplitude image, (c) reconstructed phase image, and (d) phase image unwrapped by a software algorithm. The angular spectrum is shown in Fig. 5.21(e), and the pseudocolor 3D rendering of (d) is shown in (f). Figure 5.21(g) is an SEM image of a red blood cell at 14,000x. The morphological shape of the cell in the 3D rendering clearly defines the cell as a flat disk with the center pushed in, which provides a larger surface area for the absorption of oxygen. The shape in the 3D rendering corresponds well to that of the SEM image. The thickness of the cell is again measured to be approximately 0.6 µm and the diameter is approximately 8 µm. The angular spectrum shows the clear separation of the holographic terms and the dc component, achieved by careful control of the optical elements in combination with the use of the angular spectrum in real time. Figure 5.22 shows digital holography of a number of red blood cells, some of which are seen to be clumped together. The panels display the (a) hologram, (b) reconstructed amplitude image, (c) reconstructed phase image, and (d) phase image unwrapped by a software algorithm; a pseudocolor 3D rendering of (d) is shown in (e). The red blood cells show the well-defined central indentation in the form of the familiar doughnut shape. Note the uniformity in the shape of the cells but the slight variations in size. The variation in size could be due to drying over time, which would undoubtedly affect the morphological properties of the cells. The red blood cells are seen to have no internal organelles and are internally uniform and amorphous.
Figure 5.21: Digital holographic images of a single red blood cell of area 60 x 60 µm² (408 x 408 pixels). The image is at z = 4 µm from the hologram: (a) hologram; (b) amplitude; (c) wrapped phase image; (d) unwrapped phase image; (e) angular spectrum; (f) 3D pseudocolor rendering of (d); (g) SEM image at 14,000x.
Figure 5.22: Digital holographic images of red blood cells with area 50 x 50 µm². The image is at z = 6 µm from the hologram: (a) hologram; (b) wrapped phase image; (c) unwrapped phase image; (d) 3D pseudocolor rendering of (c).
Figure 5.23 displays the reconstruction of a SKOV-3 ovarian cancer cell. The image area is 60 x 60 µm², recorded with 424 x 424 pixels. Figure 5.23(a) is the holographic interference pattern recorded by the CCD camera, and its Fourier transform in Figure 5.23(b) is the angular spectrum. It contains both the zero order and the twin images, as well as an artifact due to stray interference components. The virtual image component, the highlighted circular area, is selected, multiplied by a propagation phase factor (z = 1.0 µm), and finally inverse-Fourier transformed to obtain the amplitude image in Figure 5.23(c) and the phase image in Figure 5.23(d). The layer of lamellipodia around the edge of the cell is found to be about 110 nm thick, assuming n = 1.375 for the cell. The phase map is rendered in pseudocoloured 3D perspective in Figure 5.23(h). Especially notable in the phase map is the lack of the coherent noise that is conspicuous in the amplitude image and prevalent in most other holographic imaging methods. The amplitude and phase images obtained from the Huygens convolution method are shown in Figures 5.23(f) and 5.23(g), while those obtained from the Fresnel method are omitted because they are completely scrambled. The main reason for the obvious degradation of these images is the insufficient off-axis angle at such a short z distance to separate out the zero-order component. In Figure 5.24 we show digital holography of a living HUVEC (human umbilical vein endothelial cell). The image demonstrates high clarity and quality, displaying the morphological structure of the nuclear membrane in detail. The lamellipodia of the cell are seen to extend out to occupy a large area as the cell attempts to migrate. The improvements in the digital holographic process are achieved in part by the use of the
Figure 5.23: Digital holography of a SKOV-3 ovarian cancer cell. The image area is 60 x 60 µm² (424 x 424 pixels) and the image is at z = 1.0 µm from the hologram: (a) hologram; (b) angular spectrum; (c) amplitude and (d) phase images by the angular spectrum method; (e) unwrapped phase image of (d); (f) phase and (g) amplitude images by the Huygens convolution method; (h) 3D perspective rendering of (e).
Figure 5.24: Holography of a HUVEC cell. The image area is 70 × 70 µm² (448 × 448 pixels) and the image is at z = 12 µm from the hologram: (a) hologram; (b) amplitude and (c) phase images; (d) unwrapped phase image; (e) 3D perspective rendering of (d).
angular spectrum method for the diffraction calculation, which has several advantages over the more commonly used Fresnel transformation or Huygens convolution methods. Spurious noise and interference components can be tightly controlled through the analysis and filtering of the angular spectrum. The reconstruction distance does not have a lower limit, and the off-axis angle between the object and reference waves can be lower than the Fresnel requirement while still allowing the zero-order background to be cleanly separated. Figure 5.25 presents holographic images of a confluent group of SKOV-3 ovarian cancer cells by angular spectrum digital holography. The panels display (a) the Zernike phase contrast image (of a similar but different portion of the sample than the holography images), (b) holographic amplitude and (c) phase images, and (d) the phase image unwrapped by a software algorithm. A pseudocolor 3D rendering of (d) is shown in (e). The image size is 60 × 60 µm² with 404 × 404 pixels. In the figure we see the phenomenon of cuboidal cells connecting together into an epithelial sheet and producing the grooves between cells. A gap in the confluence is also accurately imaged in Figure 5.25(e), except for a few spikes due to a defect in the phase-unwrapping algorithm. The comparison between the amplitude and phase images highlights the main differences in image quality and noise dependence. In the amplitude image many features of the cells are hard to distinguish from the background, and significant intensity variation may indicate the presence of coherent noise, which is not visible in the phase images.
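The reconstruction pipeline described above (Fourier transform of the hologram, selection of the virtual-image region of the angular spectrum, multiplication by a propagation phase factor, and an inverse transform) can be sketched as follows. This is an illustrative NumPy implementation, not the author's code; the band-pass filtering of the virtual-image region is noted but omitted, and all parameter values are hypothetical.

```python
import numpy as np

def angular_spectrum_reconstruct(hologram, wavelength, pixel_size, z):
    """Propagate the hologram field a distance z by the angular spectrum method."""
    n = hologram.shape[0]
    fx = np.fft.fftfreq(n, d=pixel_size)          # spatial frequencies (1/m)
    fx, fy = np.meshgrid(fx, fx)
    spectrum = np.fft.fft2(hologram)               # the angular spectrum
    # In practice the highlighted virtual-image region would be band-pass
    # filtered here before propagation; that step is omitted in this sketch.
    kz_sq = (1.0 / wavelength) ** 2 - fx ** 2 - fy ** 2
    propagating = kz_sq > 0                        # discard evanescent waves
    phase = 2 * np.pi * np.sqrt(np.where(propagating, kz_sq, 0.0)) * z
    spectrum *= np.exp(1j * phase) * propagating   # propagation phase factor
    field = np.fft.ifft2(spectrum)
    return np.abs(field), np.angle(field)          # amplitude and phase images
```

For a 60 µm field sampled over 424 pixels, the pixel size would be roughly 0.14 µm; calling the function with z = 10e-6 then yields the refocused amplitude and phase maps.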
Figure 5.25: Holography of confluent SKOV-3 ovarian cancer cells. The image area is 60 × 60 µm² (404 × 404 pixels) and the image is at z = 10 µm from the hologram: (a) Zernike phase contrast image; (b) holographic amplitude and (c) phase images; (d) unwrapped phase image; (e) 3D pseudocolor rendering of (d).
Figure 5.26 is a particularly unambiguous demonstration of the level of image resolution and fidelity that can be obtained by the present technique, displaying the nuclear membranes and chromosomes. The panels display (a) the Zernike phase contrast image (of a similar but different portion of the sample than the holography images), (b) holographic amplitude and (c) phase images, and (d) the phase image unwrapped by a software algorithm. A pseudocolor 3D rendering of (d) is shown in (e). The image size is 60 × 60 µm² with 404 × 404 pixels. The overall height of the cell is calculated to be about 2.8 µm, assuming an average refractive index of 1.375 for the cell. The thickness of the lamellipodium around the edge of the cell is determined to be about 320 nm. The noise level in the substrate area is 60 nm, which may be partly due to residues from fixing of the cells. From this we infer that the corrugated texture of the cellular surfaces in Figure 5.26(e) is not likely to be noise or an artifact of the holographic process. Such texture is evident in the Zernike phase contrast image of Figure 5.26(a) and is known to exist in such confluent cells. The depth of the texture is estimated to be about 120 nm. Figure 5.27 presents digital holography of a single SKOV-3 ovarian cancer cell. The panels display the (a) hologram, (b) holographic amplitude and (c) phase images, and (d) the phase image unwrapped by a software algorithm. A pseudocolor 3D rendering of (d) is shown in (e). The image size is 60 × 60 µm² with 404 × 404 pixels. The image shows a clear view of the thin but broad projections of the lamellipodia at the edge of the mobile cell, which push out in front of the cell in the direction of movement as it migrates. The mechanical tension generated by the lamellipodia appears to stretch the cell into an elongated form when compared to the SKOV-3 cells shown in Figure 5.26. Also of interest here is the rather irregular shape of the lamellipodia.
Figure 5.26: Holography of non-confluent SKOV-3 cells. The image area is 60 × 60 µm² (404 × 404 pixels) and the image is at z = 5 µm from the hologram: (a) Zernike phase contrast image; (b) holographic amplitude and (c) phase images; (d) unwrapped phase image; (e) 3D pseudocolor rendering of (d).
Figure 5.27: Holography of a single SKOV-3 cell. The image area is 60 × 60 µm² (404 × 404 pixels) and the image is at z = 4 µm from the hologram: (a) hologram; (b) amplitude and (c) phase images; (d) unwrapped phase image; (e) 3D pseudocolor rendering of (d).
Figure 5.28 displays images of a single human epithelial cheek cell obtained by quantitative phase contrast microscopy. The image size is 60 × 60 µm² with 404 × 404 pixels. The cheek cell is about 50 µm in diameter, which again corresponds well with the literature. The nucleus and cell membrane are clearly visible, as are the mitochondria distributed in the proximity of the nucleus. Also of note here are the irregular shape and contours of the cell membrane, with some sections of the cell showing folding. Figure 5.28: Digital holographic images of a human cheek cell with area 60 × 60 µm². The image is at z = 9 µm from the hologram: (a) Zernike phase contrast image; (b) amplitude; (c) wrapped phase image; (d) unwrapped phase image; (e) z-profile cross section; and (f) 3D pseudocolor rendering of (d).
As well as biological applications for microscopy, digital holography can also be applied to accurate and high-resolution shape measurement in a number of other fields, including materials characterization. Figure 5.29 presents an example of quantitative digital holography of a fragment of quartz. The panels display the (a) hologram, (b) holographic amplitude and (c) phase images, and (d) the phase image unwrapped by a software algorithm. A pseudocolor 3D rendering of (d) is shown in (e). The image size is 60 × 60 µm² with 404 × 404 pixels. The contour of the quartz is seen to be somewhat graded and jagged in its thickness profile. Figure 5.29: Holography of a fragment of quartz material. The image area is 60 × 60 µm² (404 × 404 pixels) and the image is at z = 7 µm from the hologram: (a) Zernike phase contrast image; (b) holographic amplitude and (c) phase images; (d) unwrapped phase image; (e) 3D pseudocolor rendering of (d).
Figure 5.30 displays digital holography of a water droplet. The shape is spherical and smooth, with the thickest part of the droplet occurring at the center. Figure 5.30: Holography of a water droplet. The image area is 60 × 60 µm² (404 × 404 pixels) and the image is at z = 12 µm from the hologram: (a) hologram; (b) phase image; (c) unwrapped phase image; (d) 3D pseudocolor rendering of (c).
5.8 Off-Axis Digital Holographic Movies

In digital holography, a series of holograms can be recorded at a fixed distance and the reconstructed images later numerically focused at leisure. In this way, one is able to track rapidly moving objects quantitatively, as needed. We have recorded a series of holograms by synchronizing the camera with the laser pulses under the control of the digital delay generator. The amplitude and phase images are calculated by the angular spectrum method while adjusting the image distances for best focus of the object under observation. The reconstructed images are subsequently composed into AVI files. The phase movie generated is quantitative, and therefore it is possible to make dynamic measurements of physical and morphological changes in the sample over time. In particular, this has many applications in microscopy, where changes in events such as mitosis in cells can be tracked and analyzed in detail. Figure 5.31 shows a time series of selected frames from the reconstructed amplitude movie of a paramecium swimming around with a number of euglenas. The image area of each frame is 250 × 250 µm², recorded with 464 × 464 pixels. The amplitude images are reconstructed from a series of holograms while adjusting the image distances over a 20 µm range, z = 850–870 µm, in order to keep the paramecium in focus. The time interval between each reconstructed frame is Δt = 800 ms, and the reconstruction of each recorded hologram is achieved in around 300 ms. Figure 5.32 shows the reconstructed phase images in a time series corresponding to the amplitude images in Figure 5.31. Note that the phase information of the paramecium's intracellular structure is much more definitive than the amplitude information.
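The per-frame focus adjustment described above can be automated with a simple sharpness criterion. The sketch below picks the best-focused reconstruction from a stack of candidate distances using a gradient-variance metric; this metric and the helper names are assumptions for illustration, not the method used in the dissertation, where the distances were adjusted by the operator.

```python
import numpy as np

def sharpness(amplitude):
    """Gradient-variance focus metric: sharper images have stronger edges."""
    gy, gx = np.gradient(amplitude)
    return np.var(np.hypot(gx, gy))

def best_focus_distance(reconstructions, distances):
    """Pick the candidate distance whose reconstruction is sharpest.

    reconstructions: complex fields, one reconstructed at each candidate distance.
    """
    scores = [sharpness(np.abs(field)) for field in reconstructions]
    return distances[int(np.argmax(scores))]
```

In a movie pipeline, each hologram would be reconstructed at a handful of distances spanning the expected range (e.g. 850–870 µm) and the sharpest frame kept.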
Figure 5.31: Time series of reconstructed amplitude images of a moving paramecium. The image area is 250 × 250 µm² (464 × 464 pixels). The numerical focus was adjusted in the range z = 850–870 µm in order to keep the paramecium in focus. Figure 5.32: Time series of reconstructed phase images of a moving paramecium corresponding to the amplitude images in Figure 5.31.
Higher-magnification time-series amplitude images, selected from an amplitude movie, are illustrated in Figure 5.33. The image area of each frame is 80 × 80 µm², recorded with 452 × 452 pixels and reconstructed at z = 123 µm. The time interval between each reconstructed frame is Δt = 800 ms. In order to keep the paramecium within the field of view for long enough, it is slowed down using a drop of the thickening agent methylcellulose. In the amplitude images, one observes flickering due to ciliary motion in the oral groove that forces food (bacteria) into the food vacuoles. Note the large range of intracellular structures and particles, which cause speckle and image degradation. Figure 5.33: Time series of reconstructed amplitude images of a paramecium with image area 80 × 80 µm² (452 × 452 pixels) at z = 123 µm.
Figure 5.34 shows the time series of reconstructed phase images corresponding to the amplitude images in Figure 5.33. The phase images are reconstructed simultaneously with the amplitude images, and both movies are available to the user at the same time. A notable phenomenon in the phase movie is the shrinking of the contractile vacuole (highlighted) as it pumps water out of the paramecium. This feature is not easily observed in the amplitude frames in Figure 5.33. In areas where the paramecium contains a large number of intracellular features, the speckle noise is much less of a problem in the phase images than in the amplitude images. Figure 5.34: Time series of reconstructed phase images of a paramecium corresponding to the amplitude images in Figure 5.33.
Figure 5.35 displays a time series of selected frames from the amplitude and phase movies of the migration process of living mouse-embryo fibroblast cells (3T3). The image area is 170 × 170 µm², recorded with 460 × 460 pixels, and the reconstruction distance is z = 6 µm. In the upper row, Figures 5.35(a–d) show selected frames from the amplitude movie, and in the bottom row Figures 5.35(e–h) are the simultaneously produced phase images from the phase movie. The time interval between each frame is one hour. Cell migration is a fundamental function of normal cellular processes, and its understanding remains an active research issue. The cells are not very mobile, suggesting that the conditions for migration are not ideal. This is almost certainly because of the low temperature and CO2 concentration, which inhibit cell motion. Figure 5.35: Selected frames from the amplitude movie (a–d) and from the corresponding phase-unwrapped movie (e–h) of mouse-embryo fibroblast cells in the process of migration.
Figure 5.36 presents selected frames from the 3D optical thickness profile phase movie corresponding to the reconstructed amplitude and phase images presented in Figure 5.35. The nucleus is clearly seen in the cells, as are smaller intranuclear characteristics. The cells exhibit directed migration by the formation of lamellipodial extensions towards a specific target cell. Connections between some of the cells are also displayed. Figure 5.36: Selected frames from the 3D optical thickness profile phase movie of mouse-embryo fibroblast cells in the process of migration.
Figure 5.37 displays selected frames from the 3D optical thickness profile phase movie showing the process of fibroblast cell mitosis. The study demonstrates that time-lapse video microscopy is a simple but very useful approach to monitor the dynamics of movements that vary in speed and frequency during migration and mitosis of living cells. Figure 5.37: Twelve-hour time-lapse movie of fibroblast cells undergoing mitosis, obtained by digital holographic microscopy. The pseudocolor 3D rendering represents the optical thickness profile of the cells.
5.9 Discussion and Conclusion

Digital holography offers a highly sensitive and versatile means of measuring and monitoring optical path variations. From the recorded hologram, both amplitude and phase information are simultaneously available, enabling digital holography to act essentially as both a conventional and a phase microscope. The holographic reconstruction is carried out by the angular spectrum method in an off-axis configuration, so that the twin images and the dc-term background can be subtracted out using frequency-domain analysis. The use of the angular spectrum method is also seen to solve some of the significant problems that have prevented wider use of digital holography in biomedical imaging applications. These advantages include tight control of spurious noise components and no minimum recording distance. In addition, a combination of filtering windows of various shapes and sizes can be used to filter out undesired spatial frequencies while keeping intact the contributions of the interference terms. Phase imaging digital holography is particularly promising in terms of the lack of coherent noise and the precision with which the optical thickness can be profiled, which leads to images with high optical thickness resolution. Experimental results have been presented that demonstrate the effectiveness of digital holography for quantitative phase-contrast microscopic imaging of transparent objects. Quantitative knowledge of the structure and dynamics of a transparent specimen is of great importance in microscopy, particularly in cell biology. Conventional phase-contrast microscopies, such as Zernike and DIC, produce high-contrast qualitative images of transparent objects, but they cannot be directly applied to quantitative studies.
The experimental results display 0.5 µm diffraction-limited resolution, with the noise level in the phase profile corresponding to several nanometers of optical thickness. The lateral and longitudinal resolutions obtained are consistent with diffraction-limited imaging. Reconstructed images of objects such as living biological cells display intracellular and intranuclear organelles with sufficient clarity and quantitative accuracy for applications in biomedical research. The level of resolution and detail in these images clearly exceeds currently available techniques in phase-contrast optical microscopy, and provides a new modality for imaging the morphology of cellular, intracellular, and intranuclear structures that is not currently available with noninvasive optical methods. We have also presented experimental results that demonstrate the capabilities of digital holographic movies for biological microscopy. A series of holograms is recorded and the images are reconstructed with numerically adjustable focus, so that moving objects can be accurately tracked. On the other hand, the speckle noise of a coherent imaging system is a significant issue for biological microscopy, where there is often a large range of structural scales. For example, in the presented holographic amplitude images of the paramecium, the intracellular regions contain various assortments of unresolved particles, which may be food particles or other organelles. These degrade the quality of the images to various degrees. However, one also notices that the phase images tend to suffer from the coherent noise to a significantly lesser degree than the amplitude images. A disadvantage of the current method of producing digital holographic movies is the time-consuming procedure of phase unwrapping, which prevents the user from
observing changes in an object in real time. There are other phase unwrapping procedures available that will require testing, and this will be a subject of future study. In the next chapter we describe and present results of multi-wavelength digital holography, which allows for real-time phase imaging. We note that in this study we have presented the images with minimal post-processing, the goal being the demonstration of the digital holography processes. Other than overall brightness and contrast adjustments, we have not applied any of the numerous image-enhancement techniques that are available, which can significantly improve the perceived image quality for biological applications. This will be the subject of a future study. Digital holography and digital holographic movies are seen to be a useful new tool for biological microscopy, with noteworthy advantages over traditional microscopic techniques for biological imaging. As a noninvasive and high-resolution measurement technique, digital holography offers enormous advantages to the field of microscopy. The number of potentially useful applications is vast. Examples include profiling of the 3D morphological structures of diseased cells, such as cancer cells, where the nuclei are often enlarged and the elasticity differs from that of healthy cells. Cell changes over time can also be quantitatively assessed; one may therefore obtain high-contrast 4D movies. A limitation of the current experimental setup is that the sample is required to be vertical. For observation of living cells over long periods, under favorable conditions for migration and other processes, changes in the setup for axial illumination of the object are necessary. We are currently investigating the possibility of modifying an interference microscope.
Chapter 6 Multi-Wavelength Phase Imaging Digital Holography (MWPIDH)

This chapter introduces a novel optical technique for the removal of 2π discontinuities from the phase image. Section 6.1 reviews some of the motivations for and advantages of using an optically based approach to phase unwrapping, as opposed to the computational algorithms demonstrated in the previous chapter. Section 6.2 introduces the mathematical and theoretical principles of using multiple wavelengths for phase unwrapping. Section 6.3 discusses the experimental setup and procedure for MWPIDH. Section 6.4 presents experimental results. Section 6.5 displays selected frames from MWPIDH movies, which demonstrate the capabilities and future potential for quantitative analysis of dynamic objects or samples, such as living cells, in real time. Finally, Section 6.6 concludes this chapter.

6.1 Introduction

Digital holography has been shown to be an effective method for performing high-resolution quantitative phase microscopy. However, the phase images contain 2π discontinuities for objects of optical depth greater than the wavelength, which presents a difficulty in both interferometry and phase imaging. A conventional approach is to apply one of a number of phase unwrapping algorithms that have been developed to remove these discontinuities and improve the quality and interpretation of the image. On the other hand, these algorithms require both substantial user intervention and that the levels of phase noise and phase discontinuity lie
within strict limits. Furthermore, complex object shapes are often particularly difficult to unwrap correctly and may create phase errors. Another significant disadvantage is that the unwrapping procedure is often time consuming, and hence the capability for real-time imaging is lost. It has been recognized that the phase measurement range can be extended to that of a longer synthetic, or beat, wavelength by generating and combining two phase maps using two or more separate wavelengths. The phase discontinuities in the image are removed in this process by the effective extension of the axial range. The noise in the final phase profile produced by combining the phase images from the multiple wavelengths is equal to that of a single wavelength. With further extension to more wavelengths, it should be possible to achieve an even longer axial range with undiminished resolution. The technique can be applied to incoherent interference imaging as well as holographic imaging.

6.2 Theory of Multi-Wavelength Phase Imaging Digital Holography

The principle of multi-wavelength phase imaging is described by referring to Figures 6.1 and 6.2, with the numerical values that were used in generating the simulation plots. Suppose the object is a tilted plane of height h = 5.0 µm. Figures 6.1(a) and 6.1(b) display the phase maps φ₁ and φ₂ of the tilted object using wavelengths λ₁ = 532 nm and λ₂ = 633 nm, respectively. Each phase map contains a 2π discontinuity wherever the height is a multiple of the corresponding wavelength. Subtraction of the two phase maps in Figures 6.1(a) and 6.1(b) results in a new phase map φ₁₂ = φ₁ − φ₂, shown in Figure 6.1(c), which has numerous discontinuities of 2π.
Adding 2π to the phase map in Figure 6.1(c) wherever φ₁₂ < 0 produces a new phase map φ₁₂′(x) = φ₁₂(x) + 2π (φ₁₂ < 0) with a longer range free of discontinuities and an extended axial range. The new phase map is equivalent to that of a longer beat wavelength. Using wavelengths of λ₁ = 532 nm and λ₂ = 633 nm, the new axial range is defined as

$$\Lambda_{12} = \frac{\lambda_1 \lambda_2}{|\lambda_1 - \lambda_2|} = 3.33~\mu\text{m}. \qquad (6.1)$$

A phase map φₘ(x) can be converted to the corresponding surface profile Zₘ using

$$Z_m = \frac{\lambda_m}{2\pi}\,\phi_m. \qquad (6.2)$$

The corresponding surface profile of the object is the coarse map shown in Figure 6.1(d), defined over this new axial range as

$$Z_{12}(x) = \frac{\Lambda_{12}}{2\pi}\,\phi_{12}'(x). \qquad (6.3)$$

By proper choice of the two wavelengths, the axial range Λ₁₂ can be adjusted to any value that fits the axial size of the object being imaged. This technique provides a straightforward and efficient phase imaging method for a wide range of applications. On the other hand, a limitation is that any phase noise in each single-wavelength phase map is amplified by a factor equal to the magnification of the wavelengths. Suppose the single-wavelength phase maps φₘ(x) contain phase noise of 2πεₘ, so that the corresponding surface profiles Zₘ(x) contain a noise level of

$$\varepsilon_m \lambda_m \sim 12~\text{nm}, \qquad (6.4)$$
where we have used εₘ ~ 2% to simulate the noise. The noise in the difference phase map φ₁₂(x) is

$$2\pi\varepsilon_{12} = 2\pi(\varepsilon_1 + \varepsilon_2), \qquad (6.5)$$

and that in the surface profile Z₁₂(x) is

$$\varepsilon_{12}\Lambda_{12} \sim 130~\text{nm}. \qquad (6.6)$$

The noise has in effect been amplified by a factor of approximately 2Λ₁₂/λₘ, as one can see by comparing the coarse map Z₁₂(x) in Figure 6.1(d) with the single-wavelength phase maps in Figures 6.1(a) and 6.1(b). The other half of the phase imaging method consists of an algorithm to reduce the noise in Z₁₂(x) back to the level of the single-wavelength phase maps. This method is described by referring to Figure 6.2. The coarse profile Z₁₂(x) shown in Figure 6.1(d) can be divided into integer multiples of either of the wavelengths to produce a new profile Z₁₂′(x), as illustrated in Figure 6.2(a). Using, for example, λ₁, the new profile is defined as

$$Z_{12}'(x) = \lambda_1\,\mathrm{int}\!\left[\frac{Z_{12}(x)}{\lambda_1}\right]. \qquad (6.7)$$

Pasting the single-wavelength map Z₁(x) onto the newly created profile Z₁₂′(x) in Figure 6.2(a) produces another profile Z₁₂″(x), as shown in Figure 6.2(b), such that

$$Z_{12}''(x) = Z_{12}'(x) + Z_1(x). \qquad (6.8)$$

This almost recovers the surface profile with reduced noise, except at the boundaries of the wavelength intervals, where the noise in the single-wavelength phase map causes numerous jumps of size λ₁, as displayed in Figure 6.2(c). If the noise level is not
excessive, most of the spikes can be removed simply by comparing Z₁₂″(x) with the coarse map Z₁₂(x) to produce Z_b, as shown in Figure 6.2(d): if the difference is more than λ₁/2, then λ₁ is either added or subtracted, depending on the sign of the difference. Finally, Figure 6.2(e) shows the final result, the fine map, where the noise level is approximately the same as that of the single-wavelength profile Z₁(x), at around 12 nm. The remaining spikes in the map occur where the noise in the coarse map exceeds one half of λ₁. That is, the maximum noise level for the method to work properly is given approximately by

$$\varepsilon_m \sim \frac{\lambda_m}{4\Lambda_{12}} \sim 4\%. \qquad (6.9)$$

The phase-unwrapping technique can be further extended to an iterative procedure with three or more wavelengths, as displayed in Figure 6.3. The object simulated this time is a tilted plane of height h = 10.0 µm, and the noise level is taken to be εₘ ~ 5%. First note that the axial range Λ₁₂ can be increased by using closer values of λ₁ and λ₂, as one can see from Equation (6.1). Figures 6.3(a) and 6.3(b) are the surface profiles Z₁ and Z₂ of the object using wavelengths λ₁ = 620 nm and λ₂ = 580 nm, respectively. The axial range from Equation (6.1) is then Λ₁₂ = 8.99 µm. From Equation (6.9), the noise limit for two-wavelength phase imaging is then

$$\varepsilon_m \sim \frac{\lambda_m}{4\Lambda_{12}} \sim 1.7\%. \qquad (6.10)$$
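The two-wavelength construction of Equations (6.1)–(6.3) is easy to verify numerically. The short sketch below uses the 532/633 nm pair from the simulation; it is illustrative NumPy code, not the author's simulation, and the tilted-plane height is kept below the beat range so the coarse map is discontinuity-free.

```python
import numpy as np

lam1, lam2 = 532e-9, 633e-9
beat = lam1 * lam2 / abs(lam2 - lam1)        # Eq. (6.1): beat wavelength ~3.33 um

h = np.linspace(0.0, 3e-6, 1000)             # tilted plane, below the beat range
phi1 = (2 * np.pi * h / lam1) % (2 * np.pi)  # wrapped single-wavelength phases
phi2 = (2 * np.pi * h / lam2) % (2 * np.pi)

phi12 = phi1 - phi2                          # difference phase map
phi12[phi12 < 0] += 2 * np.pi                # add 2*pi wherever phi12 < 0
coarse = beat * phi12 / (2 * np.pi)          # Eq. (6.3): coarse height map
```

Although each of `phi1` and `phi2` wraps several times over the 3 µm incline, the corrected difference map recovers the height profile in a single discontinuity-free ramp, as the theory predicts.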
The noise limit has been effectively reduced because of the larger Λ₁₂. Figures 6.3(c) and 6.3(d) show the coarse map Z₁₂(x) and the fine map generated from the phase maps at λ₁ and λ₂. The noise in the coarse map Z₁₂(x), displayed in Figure 6.3(j), corresponds to

$$\varepsilon_{12}\Lambda_{12} \sim 900~\text{nm}, \qquad (6.11)$$

which is much larger than half of λ₁; therefore the fine map has too many λ₁ spikes in it, as one can see from Figure 6.3(k). With the addition of a third wavelength, λ₃ = 500 nm, whose surface profile is displayed in Figure 6.3(e), new coarse maps Z₁₃(x) and Z₂₃(x) are generated, as shown in Figures 6.3(f) and 6.3(g), with beat wavelengths Λ₁₃ = 2.58 µm and Λ₂₃ = 3.63 µm. Combining the two coarse maps Z₁₃(x) and Z₂₃(x) using the coarse-map procedure produces a new coarse map Z₁₃₋₂₃, as seen in Figure 6.3(h). Instead of pasting Z₁ onto Z₁₂, which is too noisy to produce a useful result, as shown in Figure 6.3(k), we go through the following two steps. First, Z₁₃ is pasted onto Z₁₃₋₂₃ to obtain an intermediate fine map, as shown in Figure 6.3(h). The noise level in this intermediate map, shown in Figure 6.3(l), is

$$(\varepsilon_1 + \varepsilon_3)\Lambda_{13} = (0.05 + 0.05)\times 2.58~\mu\text{m} \sim 260~\text{nm}. \qquad (6.12)$$

This is now smaller than one half of λ₁, and we can paste on Z₁ to obtain the final fine map, Figure 6.3(i). The noise in this map is approximately the same as that of Z₁: ε₁λ₁ ~ 31 nm. The maximum noise level εₘ in the single-wavelength phase maps for three-wavelength phase imaging to work is given by the smaller of

$$\frac{\Lambda_{13}}{4\Lambda_{13\text{-}23}} \sim 7\% \quad\text{or}\quad \frac{\lambda_1}{4\Lambda_{13}} \sim 6\%. \qquad (6.13)$$
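The fine-map construction of Equations (6.7)–(6.8), together with the spike-removal comparison, can be sketched in a few lines. This is an illustrative NumPy implementation with hypothetical variable names; with noise-free inputs it reproduces the true profile exactly.

```python
import numpy as np

def fine_map(coarse, z1, lam1):
    """Combine a beat-wavelength coarse map with a single-wavelength map z1."""
    z_int = lam1 * np.floor(coarse / lam1)   # Eq. (6.7): integer multiples of lam1
    fine = z_int + z1                        # Eq. (6.8): paste z1 on
    # Spike removal: where the pasted map disagrees with the coarse map by more
    # than lam1/2, shift it by one wavelength toward the coarse map.
    diff = fine - coarse
    fine = np.where(diff > lam1 / 2, fine - lam1, fine)
    fine = np.where(diff < -lam1 / 2, fine + lam1, fine)
    return fine
```

Only noise at the single-wavelength level survives this step, so the final profile carries roughly the ~12 nm noise of Z₁ rather than the ~130 nm of the coarse map.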
Figure 6.1: Simulation of two-wavelength phase imaging digital holography. (a) Actual height profile of the object, a tilted plane of axial height h = 5.0 µm; (b) height profile Z₁(x) of the 5 µm high incline derived from the phase φ₁(x) at λ₁ = 532 nm; (c) Z₂(x) derived from the phase φ₂(x) at λ₂ = 633 nm; (d) phase map φ₁(x) at λ₁ = 532 nm; (e) phase map φ₂(x) at λ₂ = 633 nm; (f) difference phase map φ₁₂(x) = φ₁ − φ₂; (g) coarse map Z₁₂(x) with beat wavelength Λ₁₂ = 3.33 µm.
Figure 6.2: Simulation of two-wavelength phase imaging digital holography for noise removal. (a) Z₁₂′(x), where Z₁₂(x) is divided into integer multiples of λ₁; (b) Z₁₂″(x), where Z₁(x) is pasted onto Z₁₂′(x); (c) Z_a, derived by comparing Z₁₂″(x) with the coarse map Z₁₂(x); (d) Z_b, resulting from the addition or subtraction of λ₁ in Z_a to remove spikes; (e) the fine map.
Figure 6.3: Simulation of three-wavelength phase imaging digital holography. (a) Height profile Z₁(x) of a 10 µm high incline derived from the phase φ₁(x) at λ₁ = 620 nm; (b) Z₂(x) derived from φ₂(x) at λ₂ = 580 nm; (c) coarse map Z₁₂(x) with beat wavelength Λ₁₂ = 8.99 µm; (d) fine map from (c); (e) Z₃(x) derived from φ₃(x) at λ₃ = 500 nm; (f) coarse map Z₁₃(x) with beat wavelength Λ₁₃ = 2.58 µm; (g) coarse map Z₂₃(x) with beat wavelength Λ₂₃ = 3.63 µm; (h) intermediate fine map, where Z₁₃(x) is pasted onto the combined coarse map Z₁₃₋₂₃(x); (i) final fine map, where Z₁(x) is pasted on; (j) noise in (c); (k) noise in (d); (l) noise in (h); (m) noise in (i).
6.3 Experimental Setup for MWPIDH

Figure 6.4 depicts the experimental setup for MWPIDH. A continuous-wave (CW) He-Ne laser (633 nm) is added to the experimental setup of Figure 5.2 to provide the second wavelength for the MWPIDH process. The beam from the He-Ne laser is combined with the beam from the Nd:YAG laser (532 nm) at BS1 to form a two-color beam, which is then split into an object and a reference beam. The beams require highly accurate optical alignment so that they traverse exactly the same optical path. Figure 6.4: Experimental setup for MWPIDH.
6.4 Two-Wavelength Phase Imaging Digital Holography Results

Figure 6.5 presents holographic images of a group 6 element of the resolution target. The procedure for MWPIDH image capture and processing is as follows. First, a hologram is captured using only the beam emitted from the Nd:YAG laser at the green wavelength, λ₁ = 532 nm, with the He-Ne laser beam simultaneously blocked off before BS1 so that it does not enter the holographic optical system. Once this hologram is captured, the Nd:YAG beam is blocked off and a second hologram is captured using the beam from the He-Ne laser at the red wavelength, λ₂ = 633 nm. The two holograms are then processed separately to obtain their respective reconstructed amplitude and phase images. The top row of Figure 6.5 displays the hologram (a), amplitude (b), and phase (c) images at the green wavelength. The bottom row of Figure 6.5 shows the hologram (d), amplitude (e), and phase (f) images at the red wavelength. The image size is 60 × 60 µm², recorded at 452 × 452 pixels. The 2π discontinuities are present in both of the phase images. One also notices that the quality of both the hologram and the reconstructed amplitude and phase images at the red wavelength, Figures 6.5(d–f), is significantly poorer than at the green wavelength, Figures 6.5(a–c). In the amplitude image in Figure 6.5(e), for example, one observes faint horizontal lines suggesting a disturbing interference source. Combining the two wavelength phase maps as outlined in Section 6.2 creates a longer range free of 2π discontinuities with an extended axial range. The fine map is displayed in Figure 6.5(h). Some discontinuities still remain in the image, as
the bars contain large phase fluctuations due to the small amount of signal obtained from these areas. Figure 6.5: (a) Hologram, (b) amplitude, and (c) wrapped phase map at the green wavelength λ₁ = 0.532 µm, and (d) hologram, (e) amplitude, and (f) wrapped phase map at the red wavelength λ₂ = 0.633 µm; (h) the fine map obtained from the phase maps shown in (c) and (f). (Area = 60 × 60 µm²; pixels = 452 × 452.)
Another example of two-wavelength phase imaging digital holography is illustrated in Figure 6.6. The combination of the phase maps at the green wavelength, Figure 6.6(a), and the red wavelength, Figure 6.6(b), produces a fine map with a new, larger beat wavelength Λ₁₂ = 3.33 µm, Figure 6.6(c). The software-implemented phase unwrapping algorithm in Figure 6.6(d) has a defect that propagates beyond the noisy regions. Figure 6.6: (a) The wrapped phase map reconstructed from the hologram at the green wavelength λ₁ = 0.532 µm and (b) at the red wavelength λ₂ = 0.633 µm; (c) the fine map obtained from the phase maps shown in (a) and (b); (d) the phase map unwrapped by a software program.
Figure 6.7 displays two-wavelength phase imaging digital holography of onion cells. The image size is 250 × 250 μm², recorded at 452 × 452 pixels. The panels display the (a) hologram, (b) reconstructed amplitude image and (c) reconstructed phase image by the green wavelength, and the (d) hologram, (e) reconstructed amplitude image and (f) reconstructed phase image by the red wavelength; (g) is the fine map produced by combination of the phase maps. The onion cell walls are easily visible in the reconstruction, and the fine phase map is of good quality. Of note again here is that the red wavelength reconstruction is significantly degraded compared to the green wavelength reconstruction, which consequently causes some image degradation in the fine map.

Figure 6.7: (a) The hologram, (b) amplitude and (c) wrapped phase map by the green wavelength λ1 = 0.532 μm, and (d) the hologram, (e) amplitude and (f) wrapped phase map by the red wavelength λ2 = 0.633 μm; (g) is the fine map obtained from the phase maps shown in (c) and (f).
6.5 MWPIDH Movies

Besides the numerous difficulties involved in phase unwrapping with computational algorithms, a significant disadvantage of their use is that most of them are time consuming, and therefore the capability to perform real-time or close to real-time processing is lost. The procedure of MWPIDH for phase unwrapping, on the other hand, can be performed in real time: phase images reconstructed from holograms recorded at two different wavelengths are simply combined to produce a fine phase map without the usual 2π discontinuities.

For a static object we capture holograms successively for each individual wavelength. However, producing multi-wavelength holographic movies of dynamic changes requires that the holograms corresponding to each wavelength be captured simultaneously on the CCD camera. The problem then becomes how to separate the individual holograms corresponding to each wavelength from the two-color beam. An effective method of doing this is to use the spectral properties of the CCD camera. The CCD captures images in 24-bit color, i.e. Red, Green and Blue (RGB) with 8 bits per color, and the separate planes can be extracted. Figure 6.8 shows the relative response of the CCD as a function of wavelength. By choosing wavelengths near the CCD's peak response for each RGB color, one effectively maximizes the response of that particular color plane while simultaneously minimizing the response of the other colors. For our specific example, at λ1 = 532 nm the relative response of the green color plane is approximately the highest while it is fairly low for red and blue, and the same is true for our second selected wavelength at λ2 = 633 nm.
Figure 6.8: Spectral characteristics of the color CCD camera.
After digital capture on the CCD camera of the two-color-beam holographic image corresponding to the selected wavelengths λ1 = 532 nm and λ2 = 633 nm, the red, green and blue planes of the captured image are extracted, yielding the holograms specific to each wavelength. The blue extraction is subtracted as noise from both the red and green holograms. The two holograms are then processed separately to obtain the phase images, which are subsequently combined, as outlined earlier in the theory, to remove 2π discontinuities and extend the axial range.

Figure 6.9 shows a time series of selected amplitude images from an amplitude movie illustrating the motion of a rotifer through water. The amplitude images are reconstructed from holograms created by the green wavelength; the image area is 70 × 70 μm², recorded with 460 × 460 pixels. The time interval between each image is Δt = 800 ms. The camera's frame rate for image capture is 25 frames/sec, and the holograms are reconstructed and combined in 300 ms.

Rotifers generally have a thick cylindrical body, so when looking at them from above there will always be a problem with depth of focus in conventional light microscopes. Furthermore, they cannot be fixed and flattened by the pressure of a cover glass because they would disintegrate and die at once. Digital holography, as a noninvasive technique with three-dimensional information, overcomes these limitations. The rotifer's cilia are used to trap food and to move around through the water. A feature of interest in the series of images in the figure is the view of the stomach, which appears somewhat indented in the images. Some food particles are also visible moving throughout the body.
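The channel-separation step described above can be sketched as follows. This is an illustrative Python/NumPy reconstruction of the procedure, not the author's LabVIEW implementation; the (H, W, 3) 8-bit array layout and the clipping behavior are assumptions.

```python
import numpy as np

def split_two_color_hologram(rgb):
    """Separate a single 24-bit RGB frame into the green (532 nm) and
    red (633 nm) holograms, using the blue plane as a noise estimate.
    `rgb` is an (H, W, 3) uint8 array; names are illustrative."""
    r, g, b = (rgb[..., i].astype(np.float64) for i in range(3))
    # The blue plane should carry no hologram signal at either laser line,
    # so it is subtracted from both color planes as a common noise term.
    green_holo = np.clip(g - b, 0, 255)
    red_holo = np.clip(r - b, 0, 255)
    return green_holo, red_holo
```

Each returned plane is then reconstructed separately, exactly as in the static two-wavelength case.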
Figure 6.9: Time series of reconstructed amplitude images showing a rotifer moving across the field of view. (Area = 70 μm × 70 μm, Pixels = 460 × 460, z = 0.253 mm)

Figure 6.10 presents the time series of phase images reconstructed simultaneously with the amplitude images in Figure 6.9 by the green wavelength. The 2π discontinuities are seen to be present in the images.

Figure 6.10: Time series of reconstructed phase images showing a rotifer moving across the field of view. (Area = 70 μm × 70 μm, Pixels = 460 × 460, z = 0.253 mm)
Figure 6.11 shows the corresponding time series of unwrapped fine-map phase images created by the combination of phase maps reconstructed from the holograms produced by the green and red wavelengths. Some discontinuities are still present in the images where the object thickness is greater than the new axial range of 3.3 μm.

Figure 6.11: Time series of unwrapped phase images by MWPIDH showing a rotifer moving across the field of view. (Area = 70 μm × 70 μm, Pixels = 460 × 460)

Figure 6.12 shows a time series of selected amplitude images from an amplitude movie showing a high-magnification view of a paramecium feeding in water. The amplitude images are reconstructed from holograms created by the green wavelength; the image area is 60 × 60 μm², recorded with 442 × 442 pixels. The time interval between each image is Δt = 800 ms. There are some slight variations between the frames, but nothing particularly striking; in fact, the interior of the paramecium appears dense and noisy.
Figure 6.12: Time series of reconstructed amplitude images of a paramecium feeding in pond water. (Area = 60 μm × 60 μm, Pixels = 442 × 442)

Figure 6.13 presents the time series of phase images reconstructed simultaneously with the amplitude images in Figure 6.12 by the green wavelength. The 2π discontinuities are seen to be present in the images.

Figure 6.13: Time series of reconstructed phase images of a paramecium feeding in pond water. (Area = 60 μm × 60 μm, Pixels = 442 × 442)
Figure 6.14 presents the images created by the combination of the phase maps by the green and red wavelengths. The contractile vacuole (highlighted in the second frame) is shown to contract as it pumps out water; this phenomenon cannot be observed in the amplitude images in Fig. 6.12. Some discontinuities are still present in the images where the paramecium's physical thickness is greater than the axial range.

Figure 6.14: Time series of unwrapped phase images by MWPIDH of a paramecium feeding in pond water.

6.6 Conclusions and Discussion

MWPIDH offers a simple and effective way of removing the discontinuities in the phase image. The axial range can be changed to a predetermined value depending on the wavelengths selected for the procedure; therefore one can predetermine the axial range required for a specific object. If both a short wavelength and a long wavelength are used, it is possible to examine a larger object and still have high resolution. The object would first be imaged with the long wavelength; this would provide a range for the object's position in space. Then the object would be imaged with the shorter wavelength to obtain higher resolution. The idea is to use the longer wavelength to resolve the degeneracies of the shorter wavelength: the longer wavelength is used to give information about which 2π discontinuity the object point is on. This requires that the uncertainty in distance given by the longer wavelength be less than the shorter wavelength. One must take care when choosing the beat wavelength: if it is too long, the uncertainty in distance will be greater than the single-frequency wavelength, causing the system to determine the fringe number inaccurately.

Conventional computational algorithms for phase unwrapping are limited by a number of factors, such as the object shape, noise requirements and significant user intervention; as they are also fairly time consuming, real-time imaging is difficult. This is a serious limitation for the continued growth of digital holography for microscopy. MWPIDH is seen to overcome these problems. The technique can be performed in close to real time and is quantitative; it is therefore an ideal method for performing quantitative phase-contrast microscopy.

The presented results of test and biological samples show that the methodology for MWPIDH works well. However, the results also confirm that the red wavelength reconstruction is of inferior quality compared to the green wavelength reconstruction, and this causes image degradation in the final fine-map phase image. Analysis of other recording wavelengths and CCD cameras will require further investigation, and the use of the angular spectrum should be invaluable for this process.
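The coarse-to-fine idea just described, using the long-wavelength map only to decide which 2π fringe each point of the short-wavelength map lies on, can be sketched as follows. This is an illustrative Python/NumPy fragment, not the dissertation's implementation; variable names and the wrapping convention are assumptions.

```python
import numpy as np

def refine_with_fringe_order(h_coarse, phi_fine, lam_fine):
    """Use a coarse height map (e.g. from the beat wavelength) to decide
    which 2*pi fringe each point of the fine wrapped phase lies on.
    h_coarse and the returned map are optical path lengths in the same
    units as lam_fine; phi_fine is a wrapped phase in [-pi, pi)."""
    h_fine = phi_fine * lam_fine / (2 * np.pi)    # fine height, ambiguous mod lam_fine
    m = np.round((h_coarse - h_fine) / lam_fine)  # integer fringe number per pixel
    return h_fine + m * lam_fine
```

For the rounding to pick the correct integer, the coarse map's error must stay below half the fine wavelength, which is precisely the caution about choosing the beat wavelength stated above.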
Chapter 7

Conclusion and Future Work

In this research we have successfully demonstrated the improvements made in techniques of digital holography. Experiments conducted in both digital Gabor and off-axis holographic configurations have revealed both of these methods to be highly effective for applications in biological microscopy.

In Gabor holography the method is limited by specific requirements on the object's physical characteristics: the object is required to be either highly transparent, or small and opaque. This condition is necessary in order to maximize the degree of transparency in the overall image; if it is not satisfied, degradation in the reconstructed image occurs. Furthermore, phase information is not available in Gabor holography due to the superposition of the holographic terms. While these limitations tend to produce reconstructed images which are generally not of high quality, the advantage of this technique over conventional bright-field microscopic techniques is observed in the 3D reconstruction capability. From a single hologram, numerical reconstruction can be performed in numerous axial planes; by contrast, in conventional bright-field microscopy the information is only available for one specific axial plane. Therefore, digital Gabor holography is able to track any object moving in depth over time within a certain field of view, so that it remains in focus throughout the entire time frame. If one is interested specifically in the trajectory, speed, motion pattern, or any dynamic analysis of the object, digital Gabor holography is a particularly attractive technique.

One may also produce three-dimensional optical profiles of motion by stacking numerous 2D reconstructions of selected depths. For this purpose, the use of algorithms for autofocusing will also be of enormous advantage for the application of this method to the creation of holographic movies.

Off-axis digital holography enables simultaneous reconstruction of both the amplitude and phase information of the object. The dc and holographic terms can be spatially separated due to the introduction of an angle between the object and reference beams. However, due to the low spatial resolution of CCD cameras this angle is limited, which in turn limits the distance of spatial separation between the terms.

There are a number of numerical reconstruction algorithms available for use in digital holography. In this thesis we have reviewed each of the methods and their application to the reconstruction of a resolution test target and a biological sample. The angular spectrum algorithm is shown to be a particularly advantageous method for holographic reconstruction, since it makes it possible to filter the undesired spatial frequencies while keeping the contributions from the interference terms intact. Also, a significant advantage of the angular spectrum algorithm is the absence of a minimum reconstruction distance, which is a strict requirement in the more commonly used Huygens convolution and Fresnel transform reconstruction methods. Furthermore, in this thesis we have demonstrated the application of the angular spectrum method for obtaining high-quality images of biological objects with quantitative phase analysis. The use of the angular spectrum is seen to solve the problems that have so far prevented the widespread use of digital holography for microscopy.

Digital holography provides amplitude images, as obtained by a conventional optical microscope. The advantage over conventional bright-field microscopes is in the 3D reconstruction, which overcomes the limited depth of focus. However, the significant advantage of off-axis digital holography is actually seen in the phase information, which is reconstructed simultaneously with the amplitude information. Conventional phase-contrast microscopy techniques are unable to yield quantitative phase information and have other disadvantages associated with the image quality, such as the halo effect. Digital holography, on the other hand, offers quantitative phase information and therefore reveals the 3D topography of an object's optical profile with a resolution at the nanometer scale along the optical axis.

Quantitative holographic movies can also be created for the analysis of moving objects such as living cells. We have demonstrated, as shown in the experimental results, that digital holography is well suited to living-cell analysis and has many advantages over traditionally applied methods. However, the current experimental setup needs to be modified to allow horizontal mounting of the sample.

The phase image needs to be unwrapped to improve interpretation of the image. A conventional approach is to apply any one of a number of computational phase unwrapping techniques; however, there are a number of disadvantages associated with their use. The main constraint for live-sample imaging is the time taken to unwrap each phase image, which essentially takes away the ability for a biologist to view the motion in real time or close to real time. Extensive investigation of other phase unwrapping techniques is required, which may produce good-quality unwrapping in less time than the present flood-fill algorithm.

We have shown the use of a novel, optically based approach for unwrapping the phase by use of multiple wavelengths. This technique offers a convenient and attractive alternative to using a software-based phase unwrapping algorithm. The advantage of the multi-wavelength imaging technique is clearly demonstrated when unwrapping an object that does not fulfill the strict requirements of the conventional unwrapping algorithms. As shown in the experimental results, this method is effective and able to perform unwrapping in close to real time. Although the reconstructed image is shown to be of fairly good quality with the current setup, it could be significantly enhanced by improving the quality of the red image reconstructed from the red hologram.

We are currently performing investigations to pinpoint the cause of the poor quality of the images reconstructed using the red wavelength. The angular spectrum has been invaluable in eliminating possible causes so far. However, the reason for this degradation is as yet unknown, although we suspect that the camera itself may be the main cause. This will be the subject of future investigations.
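As a concrete illustration of the angular spectrum reconstruction summarized in this chapter, a minimal numerical sketch is given below. It mirrors the structure of the Matlab routine listed in Appendix C.1.3 but is an independent, simplified Python/NumPy version added for illustration; the grid parameters are arbitrary and the spatial-frequency filtering step is omitted.

```python
import numpy as np

def angular_spectrum_propagate(E0, dx, wavelength, z):
    """Propagate a sampled complex field E0 (N x N, pixel pitch dx) a
    distance z using the angular spectrum of plane waves. All lengths
    share the same unit. Unlike the Fresnel transform, no minimum z is
    required: z = 0 simply returns the input field."""
    N = E0.shape[0]
    fx = np.fft.fftfreq(N, d=dx)                  # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0))  # axial wavenumber
    H = np.exp(1j * kz * z) * (arg > 0)           # transfer function; evanescent waves dropped
    return np.fft.ifft2(np.fft.fft2(E0) * H)
```

Band-pass filtering of the hologram's angular spectrum, as discussed above, would simply zero selected regions of `np.fft.fft2(E0)` before applying the transfer function.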
Appendices
Appendix A Theorems

A.1. The Fourier Transform

The one-dimensional Fourier transform of a reasonably well-behaved function f(x) is

$\mathcal{F}\{f(x)\} = F(u) = \int f(x)\,\exp[-i 2\pi u x]\,dx$

The inverse is then found by

$\mathcal{F}^{-1}\{F(u)\} = f(x) = \int F(u)\,\exp[i 2\pi u x]\,du$

where f(x) and F(u) are Fourier transform pairs.

The two-dimensional Fourier transform of f(x, y) is

$\mathcal{F}\{f(x,y)\} = F(u,v) = \iint f(x,y)\,\exp[-i 2\pi (u x + v y)]\,dx\,dy$

The inverse is then

$\mathcal{F}^{-1}\{F(u,v)\} = f(x,y) = \iint F(u,v)\,\exp[i 2\pi (u x + v y)]\,du\,dv$

(x, y) are the spatial coordinates and (u, v) are the corresponding spatial frequencies.

A.2. Theorems of Fourier Transforms

A.2.1 Convolution: the convolution operation is defined by

$(f \otimes g)(x, y) = \iint dx'\,dy'\; f(x', y')\, g(x - x',\, y - y')$
A.2.2 Correlation:

$(f \star g)(x) = \int dx'\, f(x')\, g^{*}(x' - x)$, with the property $(f \star g)(x) = (g \star f)^{*}(-x)$.

A.2.3 Auto-correlation:

$(f \star f)(x) = \int dx'\, f(x')\, f^{*}(x' - x)$

A.2.4 Other useful convolution theorems (written with the frequency convention of A.1, in which no numerical prefactors appear):

$\mathcal{F}\{f \otimes g\} = F\,G$, $\mathcal{F}\{f\,g\} = F \otimes G$

$\mathcal{F}\{f \star g\} = F\,G^{*}$, $\mathcal{F}\{f\,g^{*}\} = F \star G$

$\mathcal{F}\{f \star f\} = |F|^{2}$, $\mathcal{F}\{|f|^{2}\} = F \star F$

Convolution is commutative and associative, $f \otimes g = g \otimes f$ and $(f \otimes g) \otimes h = f \otimes (g \otimes h)$, so that $\mathcal{F}\{f \otimes g \otimes h\} = F\,G\,H$.
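The convolution theorem has an exact discrete counterpart that is easy to check numerically. The following Python/NumPy sketch, added here for illustration, verifies that the FFT of a circular convolution equals the product of the FFTs (NumPy's unit-frequency DFT convention matches A.1, so no 2π factor appears).

```python
import numpy as np

# Discrete check of F{f (x) g} = F G for circular convolution.
rng = np.random.default_rng(0)
f = rng.standard_normal(64)
g = rng.standard_normal(64)

# Circular convolution computed directly from the definition ...
conv = np.array([np.sum(f * np.roll(g[::-1], k + 1)) for k in range(64)])
# ... equals the inverse FFT of the product of the two spectra.
conv_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

assert np.allclose(conv, conv_fft)
```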
Useful delta-function identities:

$f(x) \otimes \delta(x - a) = f(x - a)$

$f(x)\,\delta(x - a) = f(a)\,\delta(x - a)$

$\delta(x - a) \otimes \delta(x - b) = \delta(x - (a + b))$

A.3. Imaging with a coherent light source

A.3.1 Impulse Response Function, h:

A point source at $(X_0, Y_0)$ in the input plane of the optical system produces the response $h(x_1 - X_1,\, y_1 - Y_1)$ in the output plane, where $X_1 = m X_0$ and $Y_1 = m Y_0$. Let the magnification $m = 1$. Then

$E_1(x_1, y_1) = \iint dx_0\,dy_0\; E_0(x_0, y_0)\, h(x_1 - x_0,\, y_1 - y_0)$

that is, $E_1(x_1, y_1) = (E_0 \otimes h)(x_1, y_1)$. A coherent imaging system is linear in complex amplitude.

A.3.2 Coherent Transfer Function (CTF), H:

$\mathcal{F}\{E_0\} = F_0$, $\mathcal{F}\{E_1\} = F_1$, $\mathcal{F}\{h\} = H$, with the frequency variables $k_x = k x_1 / z_1$ and $k_y = k y_1 / z_1$. Then

$F_1(k_x, k_y) = F_0(k_x, k_y)\, H(k_x, k_y)$

For a coherently illuminated object, the image intensity pattern is:
$I_1(x_1, y_1) = |E_1(x_1, y_1)|^{2} = |(E_0 \otimes h)(x_1, y_1)|^{2}$

A.3.3 Transfer function, H:

Fourier decomposition of the fields, with $\mathbf{k} = (k_x, k_y) = (k x / z,\; k y / z)$:

$\mathcal{F}\{f_0\} = F_0(\mathbf{k})$, $\mathcal{F}\{f_1\} = F_1(\mathbf{k})$, $\mathcal{F}\{h\} = H(\mathbf{k})$

Then $F_1(\mathbf{k}) = F_0(\mathbf{k})\, H(\mathbf{k})$.

For an invariant system, the effect of the imaging system on a sinusoidal input is limited to an amplitude change and a phase shift. The transfer-function approach is valid only in an invariant system.

A.4. Discrete Fourier Transform

A.4.1 Fourier Series

If f(x) is a periodic function of period L, then

$f(x) = \sum_{n=-\infty}^{\infty} F_n \exp(i n K x)$, with $F_n = \frac{1}{L}\int_0^L f(x)\exp(-i n K x)\,dx$

where $K = 2\pi / L$ is the fundamental frequency.

A.4.2 Cosine and Sine series:
$f(x) = \tfrac{1}{2} C_0 + \sum_{n=1}^{\infty} C_n \cos nKx + \sum_{n=1}^{\infty} S_n \sin nKx$

$C_n = \frac{2}{L}\int_0^L f(x)\cos nKx\,dx$, $S_n = \frac{2}{L}\int_0^L f(x)\sin nKx\,dx$, $F_n = \tfrac{1}{2}(C_n - i S_n)$

A.4.3 Discrete Fourier Transform

Let f(x) be defined on $[-X/2, X/2]$, with $f(x) = 0$ for $|x| > X/2$, and let F(k) be its transform, defined on $[-K/2, K/2]$. Sampling f at intervals $\delta x$ gives

$f_s(x) = f(x)\,\mathrm{comb}(x / \delta x)$, i.e. $f_s(x_i) = f(x_i)$ for $x_i = -\tfrac{X}{2} : \delta x : \tfrac{X}{2}$

The Fourier transform of the sampled function is the periodic replication

$F_s(k) = \mathcal{F}\{f(x)\,\mathrm{comb}(x/\delta x)\} = F(k) \otimes \frac{2\pi}{\delta x}\,\mathrm{comb}\!\left(\frac{k}{2\pi/\delta x}\right) = \frac{2\pi}{\delta x}\sum_n F\!\left(k - \frac{2\pi n}{\delta x}\right)$
Therefore, if $F(k) = 0$ for $|k| > K/2$, where $K = 2\pi / \delta x$, then $F_s(k) = F(k)$ for $|k| \le K/2$.

Conversely, let F(k) be defined on $[-K/2, K/2]$, with $F(k) = 0$ for $|k| > K/2$, and sample it at intervals $\delta k$:

$F_s(k) = F(k)\,\mathrm{comb}(k / \delta k)$

whose inverse transform is $f_s(x) = f(x) \otimes \frac{2\pi}{\delta k}\,\mathrm{comb}\!\left(\frac{x}{2\pi/\delta k}\right)$. Therefore, if $f(x) = 0$ for $|x| > X/2$, where $X = 2\pi / \delta k$, then $f_s(x) = f(x)$ for $|x| \le X/2$.

Therefore, if both f(x) and F(k) are discretized with N+1 points, then:

$K = N\,\delta k = \frac{2\pi}{\delta x} = \frac{2\pi N}{X}$,  $\delta k = \frac{K}{N} = \frac{2\pi}{N\,\delta x} = \frac{2\pi}{X}$

$X = N\,\delta x = \frac{2\pi}{\delta k} = \frac{2\pi N}{K}$,  $\delta x = \frac{X}{N} = \frac{2\pi}{N\,\delta k} = \frac{2\pi}{K}$
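These reciprocal sampling relations can be confirmed numerically. The following Python/NumPy sketch, added here for illustration with an arbitrary grid, checks $\delta k = 2\pi/X$ and $K = 2\pi/\delta x$ using NumPy's FFT frequency helper (the factor 2π converts unit frequencies to angular frequencies).

```python
import numpy as np

N, dx = 256, 0.1          # number of samples and spatial step (arbitrary units)
X = N * dx                # total spatial extent
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)   # angular-frequency samples

dk = k[1] - k[0]          # angular-frequency step
K = N * dk                # total frequency extent

assert np.isclose(dk, 2 * np.pi / X)      # delta_k = 2*pi / X
assert np.isclose(K, 2 * np.pi / dx)      # K = 2*pi / delta_x
```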
Appendix B Listing of LabVIEW Programs

This appendix contains a listing of the computer programs written in LabVIEW which are used to acquire holograms from the FireWire CCD camera and perform numerical reconstruction of the hologram.
B.1 Calibration of data parameters and optical alignment via the angular spectrum (Firecamera.vi)
B.1.1 Front Panel
B.1.2 Block Diagram

B.2 Image capture of holograms (ExperimentalHolo.vi)
B.2.1 Front Panel
B.2.2 Block Diagram

B.3 Reconstruction of the hologram (Holodiffract.vi)
B.3.1 Front Panel
B.3.2 Block Diagram: Reading the hologram into the reconstruction program
B.3.3 Block Diagram: Performing frequency domain analysis
B.3.4 Block Diagram: Calculating the angular spectrum
B.3.5 Block Diagram: Saving images

B.4 Calculating Huygens diffraction

B.5 Calculating Fresnel diffraction

B.6 Multi-wavelength phase imaging

(The front panels and block diagrams are screenshots and are not reproduced in this text version.)
198 Appendix C Listing of Matlab programs This appendix contains a listing of the computer programs written in Matlab which are used to numerically reconstruct the hologram and unwrap the reconstructed phase image. These algorithms can be integr ated into the Labview environment one may observe in the block diagrams displayed in Appendix B. C.1 Numerical Reconstruction in Matlab C.1.1 Huygens Algorithm function E1= Huygens(E0,ax0,ax1,nx1,zz,lam,xth,yth) % function E2= Huygens(E0,ax0,ax2,nx2,zz,lam,xth,yth) % xth and yth are horizontal and vertical angular shift of object % in degrees relative to reference for offaxis hologram. % % 3/10/2005 mkk % if zz==0; E1= E0; return ; end pi2= pi*2; kk= pi2/(lam*1e6); nx0= size(E0,1); dx0= ax0/nx0; dx1= ax1/nx1; XX0= [ax0/2:dx0:ax0/2dx0]; [X0,Y0]= meshgrid(XX0,XX0); XX1= [ax1/2:dx1:ax1/2dx1]; [X1,Y1]= meshgrid(XX1,XX1); xth= pi*xth/180; yth= pi*yth/180; E0= E0.*exp(i*kk*(xth*X0+yth*Y0)); Eo= reframe(E0, 'n',ax0,nx0,ax1,nx1);
PAGE 214
199 %SS= pi2*(i*kk/(pi2*zz))*exp(i*kk*sqrt(X1.^2+Y1.^2+zz^2)); SS= exp(i*kk*sqrt(X1.^2+Y1.^2+zz^2)); Eo= fft2(Eo)/nx1; SS= fft2(SS)/nx1; E1= fft2(Eo.*SS)/nx1; E1= fftshift(E1); E1= flipud(fliplr(E1)); C.1.2 Fresnel Algorithm function E2= Fresnel(E0,ax0,ax2,nx2,zz,lam,xsh,ysh) % function E2= Fresnel(E0,ax0,ax2,nx2,zz,lam,xsh,ysh) % xsh and ysh are horizontal and vertical shift of object % relative to reference for offaxis hologram. % % 6/24/01 mkk % if zz==0; E2= E0; return ; end pi2= pi*2; kk= pi2/(lam*1e6); nx0= size(E0,1); dx0= ax0/nx0; ax1= axbx(ax0,nx0,zz,kk); nx1= nx0; dx1= ax1/nx1; XX0= [ax0/2:dx0:ax0/2dx0]; [X0,Y0]= meshgrid(XX0,XX0); XX1= [ax1/2:dx1:ax1/2dx1]; [X1,Y1]= meshgrid(XX1,XX1); thx= xsh/zz; thy= ysh/zz; E0= E0.*exp(i*kk*(thx*X0+thy*Y0)); S0= exp((i*kk/(2*zz))*(X0.^2+Y0.^2)); %SS= pi2*(i*kk/(pi2*zz))*exp(i*kk*(zz+(X1.^2+Y1.^2)/(2*zz))); SS= exp(i*kk*(zz+(X1.^2+Y1.^2)/(2*zz))); E1= fft2(E0.*S0)/nx0; E1= SS.*E1; E1= fftshift(E1); E2= reframe(E1, 'n',ax1,nx1,ax2,nx2);
PAGE 215
200 C.1.3 Angular Spectrum Algorithm function [E2,SO]= Angspc(E0,ax0,ax2,nx2,zz,lam,xsh,ysh); % function E2= Angspc(E0,ax0,ax2,nx2,zz,lam,xsh,ysh); % xsh and ysh are horizontal and vertical shift of object % relative to reference for offaxis hologram. % [M,N]=size(E0); I1=fftshift(fft2(double(E0))); %figure(1); imshow(mat2gray(abs(I1))/0.003); %========================================================Fresnel Diffraction wlength=lam*1e9; %594e9; x=ax0*1e3; y=ax0*1e3; d=1e3*zz; %============================================filtering I2=zeros(M,N); %I2(:,1:70)=I1(:,1:70); I2(1:M,1:100)=I1(1:M,1:100); %layer102=shiftxy(I2,xsh,ysh); nx=xsh*100; ny=ysh*100; F00=zeros(M,N); F00(1:Mnx,1:Nny)=I2(nx+1:M,ny+1:N); F00(Mnx+1:M,1:Nny)=I2(1:nx,ny+1:N); F00(1:Mnx,Nny+1:N)=I2(nx+1:M,1:ny); F00(Mnx+1:M,Nny+1:N)=I2(1:nx,1:ny); SO=F00; %========================================================Angular Spectrum F01=(conj(F00)); MM=M/2; NN=N/2; %%d is the position of reconstruction %d=1.2; c1=j*2*pi*d; c2=1/wlength^2; dk=1/x; dl=1/y; for k=1:M for l=1:N
T3(k,l)= F01(k,l)*exp(c1*sqrt(c2-((MM-k)*dk)^2-((NN-l)*dl)^2));
end
end
E2= ifft2(fftshift(T3));   %/(j*wlength*d);

C.2 Flood Fill Phase Unwrapping Programs

C.2.1 Main.m Loads a wrapped phase image

a= imread('Phase1.bmp','bmp');
c= double(a);
size_of_b= size(c);
for i=1:size_of_b(1,1)
for j=1:size_of_b(1,1)
b(i,j)= c(i,j,1);   %^2+c(i,j,2)^2+c(i,j,3)^2)^0.5;
end
end
arraydisp(b,0,255);   %display image
theta= b*2*pi/255-pi;
quality_map= qualitymap(b,size_of_b(1,1));
arraydisp(quality_map,0,0.4);   %display quality map
ture_quality_map= rmd(quality_map,size_of_b(1,1),0.01);
arraydisp(ture_quality_map,0,0.4);   %display quality map without discontinuity

C.2.2 add2list.m Adds a position and quality value to adjoin list

function [a1,a2]= add2list(adjoin1,adjoin2,x,y,q)
% x,y is the position, q is the quality
[h,long]= size(adjoin2);
for i=1:long
if adjoin2(i)==0
adjoin1(i,1)= x;
adjoin1(i,2)= y;
adjoin2(i)= q;
break;
end
end
if i==long
adjoin1(long+1,1)= x;
adjoin1(long+1,2)= y;
adjoin2(long+1)= q;
end
a1= adjoin1;
a2= adjoin2;

C.2.3 arraydisp.m Displays the array image

function arraydisp(array,minv,maxv)
% ARRAYDISP() display the array image
figure
imshow(mat2gray(array,[minv,maxv]))
axis on

C.2.4 main2.m Central program for phase unwrapping

%for i=1:232
%for j=1:232
%test(i,j)= theta(i,j);
%test_map(i,j)= ture_quality_map(i,j);
%end
%end
uwphase= qgunwrap(theta,ture_quality_map,size_of_b(1,1),190,190);
mdisp(uwphase,min(min(uwphase)),max(max(uwphase)));
cmax= max(max(uwphase));
cmin= min(min(uwphase));
%uwph= uwphase-cmin;
uwph= cmax-cmin-(uwphase-cmin);
max(max(uwph))
value= 80-uwph;
newvalue= max(max(value));
uwphnew= newvalue+uwph;
figure(2)
imshow(mat2gray(uwphnew))
imwrite(uint8(255*mat2gray(uwphnew)/1),colormap,'Unwrapphaserr123234newcellnewTa0.bmp')

C.2.5 qmdisp.m display for array

function mdisp(map,min,max)
%QMDISP
%t= map.*mask0+mask1*max;
arraydisp(map,min,max);
% FIGURE
C.2.6 qgunwrap.m quality guided path following method

function uwphase= qgunwrap(theta,qmap,N,s_x,s_y)
%QGUNWRAP.m
%N=100;
start_time= datestr(now);
disp(['File generating begins on ',start_time,', please wait ...'])
mphase= theta;
uwphase= zeros(N);
ifunwrap= zeros(N);
% start to unwrap phase using quality guided plus floodfill algorithm
start_x= s_x;   %starting point
start_y= s_y;
uwphase(start_x,start_y)= mphase(start_x,start_y);
ifunwrap(start_x,start_y)= 1;
long= 1;
ifempty= 1;
adjoin1(1,1)= start_x;
adjoin1(1,2)= start_y;
adjoin2(1)= qmap(start_x,start_y);
tt= 0;
% perform floodfill algorithm until the adjoin list is empty
while ifempty>0
[t,position]= max(adjoin2);   % first find the maximum value from the adjoin list
i= adjoin1(position,1);
j= adjoin1(position,2);
adjoin1(position,1)= 0;   % remove the pixel being operated on from the adjoin list
adjoin1(position,2)= 0;
adjoin2(position)= 0;
if i>1 & i<N & j>1 & j<N
if ifunwrap(i,j-1)==0
d= unwrp(mphase(i,j),mphase(i,j-1));
uwphase(i,j-1)= uwphase(i,j)+d;
ifunwrap(i,j-1)= 1;
[adjoin1,adjoin2]= add2list(adjoin1,adjoin2,i,j-1,qmap(i,j-1));
end
if ifunwrap(i-1,j)==0
d= unwrp(mphase(i,j),mphase(i-1,j));
uwphase(i-1,j)= uwphase(i,j)+d;
ifunwrap(i-1,j)= 1;
[adjoin1,adjoin2]= add2list(adjoin1,adjoin2,i-1,j,qmap(i-1,j));
end
if ifunwrap(i,j+1)==0
d= unwrp(mphase(i,j),mphase(i,j+1));
uwphase(i,j+1)= uwphase(i,j)+d;
ifunwrap(i,j+1)= 1;
[adjoin1,adjoin2]= add2list(adjoin1,adjoin2,i,j+1,qmap(i,j+1));
end
if ifunwrap(i+1,j)==0   %& mask0(i+1,j)==1   %if not border and not an unwrapped pixel, perform phase unwrapping
d= unwrp(mphase(i,j),mphase(i+1,j));
uwphase(i+1,j)= uwphase(i,j)+d;
ifunwrap(i+1,j)= 1;
[adjoin1,adjoin2]= add2list(adjoin1,adjoin2,i+1,j,qmap(i+1,j));
end
end
ifempty= sum(adjoin2);
tt= tt+1
end
tt
end_time= datestr(now);
disp(['File generating ends on ',end_time,'.']);

C.2.7 qualitymap.m

function map= qualitymap(theta,s)
%QUALITYMAP.m calculate the quality map of input image
start_time= datestr(now);
disp(['File generating begins on ',start_time,', please wait ...'])
theta_temp= (flipud(theta))';
for i=1:2:(s-3)
for j=1:(s-1)
ma= theta_temp(i+2,j)-theta_temp(i,j);
mb= theta_temp(i,j+1)-theta_temp(i,j);
mc= theta_temp(i,j+1)-theta_temp(i+2,j+1);
md= theta_temp(i+2,j)-theta_temp(i+2,j+1);
c(i,j)= 2/sqrt(ma^2+4*mb^2+4);
c(i+1,j)= 2/sqrt(mc^2+4*md^2+4);
end
end
map= flipud(c');
map(:,s-1:s)= 1;
map(s,:)= 1;
end_time= datestr(now);
disp(['Program running ends on ',end_time,'.'])

C.2.8 rmd.m remove the discontinuity, s is image size, t is threshold

function mapt= rmd(map,s,t)
%RMD.m
disp('Processing ...')
mapt= map;
if mapt(1,1)

C.2.9 unwrp.m computes the wrapped phase difference

function d= unwrp(finish,not)
%UNWRP.m
t= not-finish;
if t>pi
d= not-2*pi-finish;
elseif t<-pi
d= not+2*pi-finish;
else
d= not-finish;
end

C.2.10 view2D.m view unwrapped phase image

A= imread('UnwraNewPhase','bmp');
%A= double(A)/255;
%A= double(A)*2*pi/255;
cmax= max(max(A));
cmin= min(min(A));
%A= (A*0.532)/(2*pi)/(0.375);
%A= (A*0.532)/(2*pi);
A= (double(A)/255)*(20.8700);
A= (A*0.532)/(2*pi*0.375);
cmax= max(max(A));
cmin= min(min(A));
A= flipud(A);
surf(A);
shading flat;
axis tight;
rotate3d on;
colormap hsv;
lighting phong;
light;
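The core arithmetic behind the flood-fill unwrapper in C.2.6 and C.2.9 is the rewrapped neighbour difference: each newly visited pixel is offset from an already-unwrapped neighbour by their phase difference mapped back into (-pi, pi]. The sketch below reproduces that step in Python and verifies it on a 1-D wrapped ramp; it is an illustrative translation, not the dissertation's code.

```python
import numpy as np

def wrapped_difference(current, neighbour):
    """Phase step from current to neighbour, rewrapped into (-pi, pi],
    mirroring the if/elseif branches of unwrp.m."""
    t = neighbour - current
    if t > np.pi:
        return t - 2 * np.pi
    elif t < -np.pi:
        return t + 2 * np.pi
    return t

def unwrap_line(wrapped):
    """Unwrap a 1-D wrapped-phase array by accumulating rewrapped
    differences along the line (the flood fill applies the same step
    along a quality-guided path in 2-D)."""
    out = np.empty_like(wrapped)
    out[0] = wrapped[0]
    for n in range(1, len(wrapped)):
        out[n] = out[n - 1] + wrapped_difference(wrapped[n - 1], wrapped[n])
    return out

# A smooth ramp spanning several multiples of 2*pi is recovered exactly
true_phase = np.linspace(0.0, 12.0, 200)
wrapped = np.angle(np.exp(1j * true_phase))   # wrapped into (-pi, pi]
recovered = unwrap_line(wrapped)
print(np.allclose(recovered, true_phase))  # True
```

The recovery is exact only because neighbouring true-phase steps stay below pi; that sampling assumption is exactly what the quality map of C.2.7 protects, by steering the flood fill through smooth regions first.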
Appendix D List of Accomplishments

Journal Publications

1. C. J. Mann, L. Yu, C. Lo, and M. K. Kim, "High-resolution quantitative phase-contrast microscopy by digital holography," Opt. Express 13, 8693-8698 (2005). http://www.opticsexpress.org/issue.cfm?volume=13&issue=22
2. C. J. Mann, L. Yu, and M. K. Kim, "Movies of cellular and subcellular motion by digital holographic microscopy," BioMedical Engineering OnLine 5, 21 (2006). http://www.biomedicalengineeringonline.com/content/5/1/21/abstract
3. M. K. Kim, L. Yu, and C. J. Mann, "Interference techniques in digital holographic microscopy," J. Opt. A: Pure Appl. Opt. 8, S518-S523 (2006).

Conference Papers

1. C. J. Mann and M. K. Kim, "Digital Gabor holography for particle field imaging," OSA Biomedical Topical Meetings, Miami, FL (April 14-17, 2004), poster presentation.
2. C. J. Mann and M. K. Kim, "Digital Gabor holographic movie of animal microbes," OSA Annual Meeting, Rochester, NY (October 10-14, 2004), oral presentation.
3. C. J. Mann and M. K. Kim, "Movies of amplitude and phase images of paramecium by digital holographic microscopy," USF Interdisciplinary Graduate Research Symposium (April 2005), poster presentation.
4. C. J. Mann and M. K. Kim, "Phase imaging digital holographic movies of animal cells," OSA CLEO/QELS, Baltimore, MD (May 24-26, 2005), oral presentation.
5. M. K. Kim, L. Yu, and C. J. Mann, "Interference techniques in digital holographic microscopy," OMS '05 Optical Microsystems, Capri, Italy (September 2005), oral presentation.
6. C. J. Mann and M. K. Kim, "Quantitative phase-contrast microscopy by angular spectrum digital holography," SPIE BiOS, San Jose, CA (January 21-26, 2006), oral presentation.
7. C. J. Mann and M. K. Kim, "Quantitative biological microscopy of cells by digital holography," OSA Biomedical Topical Meetings, Fort Lauderdale, FL (March 19-22, 2006), oral presentation.
8. C. J. Mann and M. K. Kim, "Microscopy by digital holography," USF Interdisciplinary Graduate Research Symposium (April 2006), poster presentation.
Book Chapters

M. K. Kim, L. Yu, and C. J. Mann, "Digital holography and multi-wavelength interference techniques," Chap. 2 in Digital Holography and Three-Dimensional Display, T.-C. Poon, ed. (Springer, 2006).
About the Author

Christopher Mann currently resides in the Tampa Bay area. He grew up in Willenhall, West Midlands, England, and received a Bachelor of Science degree in Physics from the University of Birmingham, England, in 2001 and a Master of Science degree in Physics with Astrophysics from the University of Birmingham in 2002. He entered the Ph.D. program in Applied Physics at the University of South Florida in Fall 2002. He completed an industrial practicum at MediVision, Tel Aviv, Israel, as part of the Applied Physics practical training. He has two journal publications, in Optics Express and BioMedical Engineering OnLine, and has presented at numerous technical conferences, including CLEO, SPIE, and OSA topical meetings.
