
Three-dimensional microscopy by laser scanning and multi-wavelength digital holography


Material Information

Title:
Three-dimensional microscopy by laser scanning and multi-wavelength digital holography
Creator:
Khmaladze, Alexander
Publisher:
University of South Florida
Place of Publication:
Tampa, Fla
Publication Date:
2008

Subjects / Keywords:
Laser scanning microscopy
Computer holography
Holographic interferometry
Interference microscopy
Phase-contrast microscopy
Dissertations, Academic -- Physics -- Doctoral -- USF   ( lcsh )
non-fiction   ( marcgt )


ABSTRACT: This dissertation presents techniques of three-dimensional microscopy. First, an economical method of microscopic image formation is presented that employs a raster-scanning laser beam focused on a sample while a non-imaging detector receives the scattered light. The images produced by this method are analogous to those of scanning electron microscopy, with visible effects of shadowing and reflection. Compared to a conventional wide-field imaging system, the system allows for greater flexibility, as a variety of optical detectors, such as a PMT or a position-sensitive quadrant photodiode, can be used to acquire images. The system demonstrates a simple, low-cost method of achieving resolution on the order of a micron. A further gain in resolution and depth of focus from using Bessel rather than Gaussian beams is discussed. Then, a phase-imaging technique to quantitatively study the three-dimensional structure of reflective and transmissive microscopic samples is presented. The method, based on simultaneous dual-wavelength digital holography, allows for a higher axial range over which unambiguous phase imaging can be performed. The technique is capable of nanometer axial resolution. The noise level, which increases as a result of using two wavelengths, is then reduced to the level of a single wavelength. The method compares favorably to software unwrapping, as it does not produce non-existent phase steps. Curvature mismatch between the reference and object beams is numerically compensated. 3D images of porous coal samples and SKOV-3 ovarian cancer cells are presented.
Dissertation (Ph.D.)--University of South Florida, 2008.
Includes bibliographical references.
System Details:
Mode of access: World Wide Web.
System requirements: World Wide Web browser and PDF reader.
Statement of Responsibility:
by Alexander Khmaladze.
General Note:
Title from PDF of title page.
General Note:
Document formatted into pages; contains 133 pages.
General Note:
Includes vita.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 002001405
oclc - 319715412
usfldc doi - E14-SFE0002638
usfldc handle - e14.2638
System ID:

This item is only available as the following downloads:

Full Text


Three-Dimensional Microscopy by Laser Scanning and Multi-Wavelength Digital Holography

by

Alexander Khmaladze

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy
Department of Physics
College of Arts and Sciences
University of South Florida

Major Professor: Myung K. Kim, Ph.D.
Srikanth Hariharan, Ph.D.
Dennis Killinger, Ph.D.
Garrett Matthews, Ph.D.

Date of Approval: September 12, 2008

Keywords: laser scanning microscopy, computer holography, holographic interferometry, interference microscopy, phase-contrast microscopy

Copyright 2008, Alexander Khmaladze


Table of Contents

List of Figures
Abstract
1. Introduction to optical microscopy
   1.1. Brief history of microscopy
   1.2. Scanning versus full-field microscopy
   1.3. Confocal, scanning electron and atomic force microscopy
      1.3.1. Confocal microscope
      1.3.2. Atomic force microscope
      1.3.3. Scanning electron microscope
   1.4. Phase imaging microscopy
      1.4.1. Zernike phase contrast microscopy
      1.4.2. Differential interference contrast microscopy
   1.5. Digital holographic microscopy
2. Design and imaging properties of laser scanning microscope
   2.1. Introduction
   2.2. Laser scanning microscope design
   2.3. Imaging properties
   2.4. Laser scanning with the position-sensitive detector
   2.5. Bessel beams
3. Holography and three-dimensional imaging
   3.1. Introduction to holography
   3.2. The retention of phase information
   3.3. Phase and height maps
4. The principles of digital holography
   4.1. Angular spectrum method
   4.2. Curvature correction
5. Dual-wavelength phase imaging by digital holography
   5.1. Phase unwrapping
   5.2. Multi-wavelength phase imaging
   5.3. Application of the phase noise reduction algorithm (fine map)
6. Study of the porous coal samples
   6.1. Experiment
   6.2. Two-wavelength optical phase unwrapping
   6.3. Comparison between dual-wavelength and software phase unwrapping
   6.4. The application of the fine map algorithm
7. Cell imaging
   7.1. Experiment
   7.2. SKOV-3 ovarian cancer cells
8. Conclusions
   8.1. Laser scanning microscope
   8.2. Dual-wavelength digital holography
References
Appendices
   Appendix A: Laser spot size
   Appendix B: Laser scanning microscope LabVIEW program
   Appendix C: Axicon
   Appendix D: Theory of dual-wavelength phase imaging
   Appendix E: Dual-wavelength digital holography LabVIEW programs
   Appendix F: Dual-wavelength digital holography C programs
   Appendix G: List of publications


List of Figures

Figure 1.1: Scanning confocal microscope
Figure 1.2: Atomic force microscope
Figure 1.3: Scanning electron microscope
Figure 1.4: Zernike phase contrast microscope
Figure 1.5: Differential interference contrast microscope
Figure 1.6: Digital holographic microscope
Figure 2.1: Laser scanning microscope setup
Figure 2.2: Scanning apparatus
Figure 2.3: Scanning controls
Figure 2.4: USAF resolution target groups 4 and 5 and zoomed 6 and 7
Figure 2.5: The images of a one cent coin
Figure 2.6: Seeds of Ipomoea violacea (morning glory) imaged by PMT
Figure 2.7: Lantern fly's eye area
Figure 2.8: Dragonfly's eye
Figure 2.9: Ten images of the ant
Figure 2.10: One cent coin imaged with a position-sensitive detector
Figure 2.11: Lantern fly's head imaged with a position-sensitive detector
Figure 2.12: Ant's legs imaged with a position-sensitive detector
Figure 3.1: Phase map and height profile for λ = 633 nm
Figure 4.1: Two-wavelength hologram of a USAF resolution target
Figure 4.2: Curvature correction
Figure 4.3: The reconstructed phase image of the USAF resolution target
Figure 5.1: Phase maps resulting from imaging a 5 micron sloped surface
Figure 5.2: Phase maps of the resolution target
Figure 5.3: Height profiles of the resolution target
Figure 5.4: Line intensity profiles of a flat area
Figure 6.1: Dual-wavelength digital holography setup for coal samples
Figure 6.2: Digital holographic microscope system
Figure 6.3: Boundary in a porous coal sample
Figure 6.4: Images of a porous coal sample (dual-wavelength vs. software)
Figure 6.5: Images of a porous coal sample (fine map)
Figure 7.1: Dual-wavelength digital holography setup for imaging of cells
Figure 7.2: Dual interferometers
Figure 7.3: Confluent SKOV-3 ovarian cancer cells
Figure 7.4: A single SKOV-3 cell
Figure 7.5: Comparison between optical and software unwrapping
Figure A.1: Mathematica simulation of Gaussian beam
Figure A.2: Measuring laser spot in three places
Figure A.3: Mathematica simulation of beam profile
Figure A.4: Mathematica simulation of the final beam profile
Figure B.1: Main screen of laser scanning microscope LabVIEW program
Figure C.1: The depth of focus of the Bessel beam
Figure C.2: Mathematica simulation of axicon parameters
Figure C.3: The axicon illuminated by a Gaussian beam
Figure C.4: The intensity distribution in the focal plane of axicon
Figure D.1: Dual-wavelength phase imaging digital holography
Figure D.2: Fine map generation
Figure E.1: Main screen of dual-wavelength LabVIEW program
Figure E.2: Dual-wavelength phase unwrapping LabVIEW diagram
Figure E.3: Curvature correction LabVIEW diagram


Three-Dimensional Microscopy by Laser Scanning and Multi-Wavelength Digital Holography

Alexander Khmaladze

ABSTRACT

This dissertation presents techniques of three-dimensional microscopy. First, an economical method of microscopic image formation is presented that employs a raster-scanning laser beam focused on a sample while a non-imaging detector receives the scattered light. The images produced by this method are analogous to those of scanning electron microscopy, with visible effects of shadowing and reflection. Compared to a conventional wide-field imaging system, the system allows for greater flexibility, as a variety of optical detectors, such as a PMT or a position-sensitive quadrant photodiode, can be used to acquire images. The system demonstrates a simple, low-cost method of achieving resolution on the order of a micron. A further gain in resolution and depth of focus from using Bessel rather than Gaussian beams is discussed.

Then, a phase-imaging technique to quantitatively study the three-dimensional structure of reflective and transmissive microscopic samples is presented. The method, based on simultaneous dual-wavelength digital holography, allows for a higher axial range over which unambiguous phase imaging can be performed. The technique is capable of nanometer axial resolution. The noise level, which increases as a result of using two wavelengths, is then reduced to the level of a single wavelength. The method compares favorably to software unwrapping, as it does not produce non-existent phase steps. Curvature mismatch between the reference and object beams is numerically compensated. 3D images of porous coal samples and SKOV-3 ovarian cancer cells are presented.
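The higher unambiguous axial range comes from combining the phases at the two wavelengths into a synthetic (beat) wavelength Λ = λ1·λ2/|λ1 − λ2|, which is much longer than either wavelength alone. The sketch below illustrates the idea; the wavelength pair is an assumed example, not necessarily the pair used in this work:

```python
# Sketch: dual-wavelength synthetic (beat) wavelength.
# The two wavelengths below are illustrative assumptions.
lam1 = 633e-9   # meters (He-Ne red line)
lam2 = 612e-9   # meters (second assumed laser line)

beat = lam1 * lam2 / abs(lam1 - lam2)   # synthetic wavelength Lambda

# In reflection, a single wavelength gives an unambiguous height range
# of lambda/2; the beat wavelength extends this to Lambda/2.
range_single = lam1 / 2
range_dual = beat / 2

print(f"Lambda = {beat * 1e6:.2f} um")                      # ~18.45 um
print(f"single-wavelength range = {range_single * 1e9:.1f} nm")
print(f"dual-wavelength range   = {range_dual * 1e6:.2f} um")
```

Choosing two closely spaced wavelengths makes Λ large (here tens of microns), which is why the method can image steps far taller than λ/2 without software unwrapping.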


CHAPTER 1
INTRODUCTION TO OPTICAL MICROSCOPY

1.1 Brief history of microscopy

Microscopy can be defined as a technique for producing images of structures or details too small to otherwise be seen by the human eye. The device that is employed in that process is called a microscope. The microscopy field has been around for many centuries and has evolved greatly as various technological advances were applied to the development of microscopes. Two main branches of microscopy can be identified: wide-field and scanning microscopy.

Wide-field microscopy uses diffraction, reflection, or refraction of radiation incident upon the subject of study, with the subsequent collection of this scattered radiation in order to build up an image. Scanning microscopy involves the interaction of a scanning probe with the surface or object of interest. Developments in the field of microscopy continue to play a paramount role in cell biology, medical science, and the study of materials in chemistry and physics, and are also an essential tool in many other areas of science and technology.

Recently, there has been an increased demand for tools that can be used for analyzing volume structures as small as a few nanometers in size. Furthermore, when investigating the properties of, for example, living cells, the samples can also be almost transparent and


very fragile, which requires additional efforts in order to effectively visualize the structure. In order to obtain high resolution, one can use electrons (instead of photons) to image the sample. However, such electron microscopes have to operate in vacuum and therefore cannot be used for in-vivo imaging. There are different scanning microscopic techniques, such as atomic force microscopy (AFM) and scanning tunneling microscopy (STM), with spatial resolution in the nanometer range, but they have limitations when dealing with certain materials. A contact-mode AFM tip can damage soft samples by scraping, and shear forces can distort the image features. This problem may be partially solved by employing the tapping mode, but that comes at the price of reduced scan speed and resolution. STM requires the material to be conductive, thus complex sample preparation (e.g. gold plating) is needed before imaging biological samples.

Due to the finite focal depth of the imaging lens, a typical optical microscope can only provide two-dimensional information about an object. Moreover, out-of-focus light creates background, which increases noise. Optical sectioning microscopy (OSM) and confocal scanning microscopy (CSM) have been used for quantitative analysis of 3-D objects, but the mechanical scanning operation is often time consuming, thus making in-vivo imaging difficult.

Finally, for living cells, the lack of sufficient contrast makes it difficult to employ the ordinary optical microscope, as the internal structure of cells is typically colorless and almost transparent. One way to increase contrast is to stain the different structures with selective dyes, but this often involves killing cells and fixing the sample. Staining can introduce artifacts caused by the processing of the specimen, which are thus not a legitimate feature of the specimen. This can be, to some degree, overcome by specific microscopy


techniques, which can non-invasively increase the contrast of the image. In general, these techniques make use of differences in the refractive index of cell structures and convert the difference in phase that light acquires while passing through the sample into amplitude (intensity) variation. The latter can then be observed by the human eye.

1.2 Scanning versus full-field microscopy

The resolution of standard optical wide-field microscopy is diffraction limited to several hundred nanometers. Also, the technique can only effectively image strongly reflecting (or refracting) objects. In order to image features on a sub-nanometer scale, one needs to employ a scanning probe microscope.

The idea of scanning microscopy has been around since the mid-20th century; however, the hardware needed to do effective scanning largely became available only in the 1970s. Scanning probe microscopy really started in 1981, when Gerd Binnig and Heinrich Rohrer introduced the scanning tunneling microscope; later, in 1986, Binnig, together with C.F. Quate and C.H. Gerber, developed atomic force microscopy. These techniques gave birth to a wide variety of scanning probe microscopy instrumentations, and their applications have been increasing exponentially in diverse fields of physical sciences, engineering and technology [1].

Scanning probe microscopes allow imaging of a wide variety of material structures, such as man-made and natural systems, including biological systems, at exceedingly small scales. The family of scanning probe microscopes uses no lenses, but rather a probe that interacts with the sample surface. While these techniques are generally slower than full-field microscopy, the spatial resolution is far greater.
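The diffraction limit quoted for wide-field microscopy can be made concrete with the Abbe criterion, d = λ/(2·NA). The numbers below are illustrative assumptions (green light and a high-NA oil-immersion objective), not values from this work:

```python
# Sketch: Abbe diffraction limit for wide-field optical microscopy.
# Wavelength and numerical aperture are illustrative assumptions.
def abbe_limit(wavelength_nm: float, na: float) -> float:
    """Smallest resolvable separation d = lambda / (2 * NA), in nm."""
    return wavelength_nm / (2.0 * na)

d = abbe_limit(550.0, 1.4)  # green light, oil-immersion objective
print(f"d = {d:.0f} nm")    # ~196 nm, i.e. "several hundred nanometers"
```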


1.3 Confocal, scanning electron and atomic force microscopy

The type of interaction between the probe and the sample surface defines the type of scanning probe microscope being used. The method of laser scanning microscopy presented in chapter 2 is, to some degree, related to several of those techniques, which are briefly reviewed below.

1.3.1 Confocal microscope

In a laser confocal microscope, the scanning laser beam is expanded to fill the objective lens and is then focused onto a fluorescent specimen (see Figure 1.1). The mixture of reflected light and emitted fluorescent light is captured by the same objective and is focused onto a non-imaging photodetector via a beamsplitter. The reflected light is deviated by the beamsplitter, while the emitted fluorescent light passes through in the direction of the photodetector. A confocal aperture (pinhole) is placed in front of the photodetector, such that the fluorescent light from points on the specimen that are not within the focal plane (out-of-focus light) is largely obstructed by the pinhole. Thus, the confocal microscope is very efficient in terms of observing the signal from a very thin slice of the sample [2].


Figure 1.1: Scanning confocal microscope. Confocal aperture (pinhole) is used to block out-of-focus light.


Note that at any given instant, only one point of the sample is observed. The relative intensity of the fluorescent light emitted from each point corresponds to the intensity of the resulting pixel in the image. The photodetector is attached to a computer, which builds up the image one pixel at a time. By scanning many thin sections through the Z-axis, a very clean 3D image can be built. In order to scan the image plane effectively, various scanning mirrors and often acousto-optical modulators are used. The Z-axis scanning is usually done by a computer-controlled fine-stepping motor which moves the microscope stage up and down.

1.3.2 Atomic force microscope

The atomic force microscope (AFM) works by scanning an atomistically sharp tip over the surface of a specimen (or scanning the specimen under the stationary tip). The tip is positioned at the end of a cantilever beam (see Figure 1.2). As the tip is repelled by or attracted to the surface, the cantilever beam deflects. The magnitude of the deflection is captured by a laser that reflects at an oblique angle from the very end of the cantilever. A plot of the laser deflection versus tip position on the sample surface provides the resolution of the hills and valleys that constitute the topography of the surface. Height image data obtained by the AFM is three-dimensional. The usual method for displaying the data is to color map heights on the computer screen [3].


Figure 1.2: Atomic force microscope.


The resolution of AFM images is far superior to any optical imaging method and can, in principle, be a fraction of a nanometer. However, either for the duration of the entire scan, if imaging in contact mode, or for a short period of time, if the changes in resonant frequency are measured in tapping mode, the sample needs to be in contact with the mechanical tip, which often makes imaging of soft samples difficult.

1.3.3 Scanning electron microscope

Electron microscopes function similarly to optical microscopes, but instead of light illuminating the sample, they use a sharply focused beam of electrons to scan across the surface of the sample (see Figure 1.3). The scanning electron microscope (SEM) provides topographical and elemental information at magnifications of up to 100,000x, which translates to a lateral resolution of several nanometers with virtually unlimited depth of field [4]. The electrons interact with the sample and generate secondary (Auger) electrons, backscattered electrons, and characteristic X-rays. All of them can be analyzed to gain information about the sample structure and composition.

A stream of electrons is formed by the electron source and accelerated toward the specimen. The stream is condensed by the first condenser magnetic lens, which is used in conjunction with the condenser aperture to eliminate the high-angle electrons from the beam. The second condenser lens forms the electrons into a thin, tight, coherent beam. The final lens, the objective, focuses the scanning beam onto the spot on the specimen. When the beam strikes the sample and dwells on a particular spot for a few microseconds, interactions occur inside the sample.


Figure 1.3: Scanning electron microscope.


These interactions are detected with various instruments, which count the number of interactions and record a pixel whose intensity is determined by the number of interactions (i.e. more interactions result in a brighter pixel). The entire pattern can be scanned 30 times per second. The clear downside of SEM imaging is the fact that, in order for it to work, the sample has to be conductive. As a result, various specimens, including many biological materials, are unsuitable for SEM imaging.

1.4 Phase imaging microscopy

Phase imaging techniques are used to convert phase variations that a light wave acquired while passing through objects into amplitude variations that can be observed. Over the years, there have been a number of techniques developed to qualitatively perform this conversion. The examples include Zernike phase contrast (ZPC) microscopy and differential interference contrast (DIC) microscopy.

1.4.1 Zernike phase contrast microscopy

Phase contrast microscopy, first proposed in 1934 by Dutch physicist Frits Zernike, is a phase contrast-enhancing optical technique that can be utilized to produce high-contrast images of transparent specimens, such as live cells, medical tissue slices, microorganisms, fibers, etc. [5, 6].


Presented in Figure 1.4 is a diagram of a modern upright phase contrast microscope together with a schematic illustration of the phase contrast optical path. Partially coherent illumination, produced by the tungsten-halogen lamp, is directed through a collector lens and focused on a condenser annulus. Wave fronts passing through the annulus illuminate the specimen and either pass through unperturbed or are diffracted and retarded in phase by structures and phase gradients present in the specimen. Both non-diffracted and diffracted light waves are collected by the objective, segregated at the rear focal plane by a phase plate, and focused at the intermediate image plane to form the final phase contrast image observed in the eyepieces.

1.4.2 Differential interference contrast microscopy

Differential interference contrast (DIC) microscopy, also known as Nomarski interference contrast (NIC), uses interferometry of two polarized light beams which take slightly different paths through the sample [7]. Because the length of each optical path differs, the beams interfere when they are recombined, which gives the appearance of a three-dimensional image.

The light is polarized and split into two beams by a Wollaston prism (see Figure 1.5). These two beams are focused by the condenser so that they pass through two adjacent points in the sample, a fraction of a micron apart. This results in a phase difference, which is then converted into an amplitude variation by recombining the two beams in the second Wollaston prism and another polarizer (analyzer), thus making them interfere. The interference pattern then contains the information about the phase variation in the sample.


Figure 1.4: Zernike phase contrast microscope.


Figure 1.5: Differential interference contrast microscope.


1.5 Digital holographic microscopy

In ZPC and DIC phase contrast microscopy, the phase-to-amplitude conversion is nonlinear. Therefore, these methods cannot be used to quantify the phase change. The importance of quantitative phase information lies in the fact that the phase change indicates the change in the optical path length the light has travelled. The optical path length can then be converted to physical thickness, providing the sample's height information. Thus, quantitative phase imaging is a 3D imaging technique.

There have been a number of quantitative phase imaging techniques proposed in recent years. Barone-Nugent et al. [8] have demonstrated a quantitative phase imaging microscope that separates phase information from amplitude information and produces pure phase images. Several phase shift interferometry techniques have been used for quantitative phase imaging [9]. Another technique is diffraction phase and fluorescence (DPF) microscopy [10], which uses simultaneous quantitative phase imaging and epifluorescence imaging of living cells.

Recently, due to advances in computer technologies as well as the availability of high-resolution CCD cameras, digital holographic microscopy (DHM) has emerged as a powerful tool to obtain quantitative phase information and provide information about the 3D structure of microscopic samples. In digital holography (as in the case of conventional holography) the process of recording is done optically, but the recording medium is usually a CCD array, which allows for easy digitization and storage of the holographic recording in computer memory. This recording then contains the information of not just the intensity of light (amplitude signal), but also its phase.
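The phase-to-thickness conversion described here can be written out explicitly: in transmission, a phase shift Δφ at wavelength λ corresponds to a thickness t = λΔφ / (2π(n − n0)), where n and n0 are the refractive indices of the sample and the surrounding medium; in reflection, a surface height h = λΔφ/(4π). A minimal sketch, with illustrative index and wavelength values (not taken from this work):

```python
import numpy as np

# Sketch: converting a quantitative phase map to physical height/thickness.
# Wavelength and refractive indices are illustrative assumptions.
LAM = 633e-9                     # wavelength, m
N_CELL, N_MEDIUM = 1.38, 1.33    # assumed sample / medium indices

def thickness_transmission(dphi, n=N_CELL, n0=N_MEDIUM, lam=LAM):
    """Physical thickness (m) from a transmission phase shift in radians."""
    return lam * dphi / (2 * np.pi * (n - n0))

def height_reflection(dphi, lam=LAM):
    """Surface height (m) from a reflection phase shift in radians."""
    return lam * dphi / (4 * np.pi)

# A full 2*pi phase wrap in reflection corresponds to lambda/2 of height:
print(height_reflection(2 * np.pi) * 1e9)   # 316.5 nm for 633 nm light
```

This λ/2 wrap interval is exactly the ambiguity that the dual-wavelength method of later chapters is designed to extend.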


After the phase and amplitude information is recorded, it can be extracted by numerically diffracting and propagating the reference wave through the holographic recording. The resulting complex wave field yields the amplitude and phase maps simply as the amplitude and phase of a set of complex numbers. The process of holographic reconstruction is therefore essentially reduced to a numerical diffraction problem, which is done entirely by a computer (see Figure 1.6). One of the main advantages of such a technique is that image processing algorithms can easily be applied at various stages of the reconstruction. Here, we use the angular spectrum algorithm, which provides a number of significant advantages in filtering in the Fourier domain, software curvature correction and numerical focusing.


Figure 1.6: Digital holographic microscope.


CHAPTER 2

DESIGN AND IMAGING PROPERTIES OF LASER SCANNING MICROSCOPE

2.1 Introduction

For years, the conversion of images produced by a microscope into digital form posed a number of challenges. CCD arrays have more than a few disadvantages in terms of resolution and image quality. Even in higher-end systems, increasing the number of pixels of the CCD matrix leads to a decrease in sensitivity, requiring longer exposure times and brighter illumination. In addition, the optical system of the microscope itself can give rise to aberrations, further reducing the image quality. Various scanning microscopy systems have been developed that offer a number of significant advantages over conventional wide-field microscope imaging systems. The resolution of scanning microscopes is not limited by the diffraction of the imaging optics, but only by the size of the probe. These techniques gave origin to a wide variety of scanning probe microscopy instruments. In laser scanning confocal microscopy, a laser beam is projected onto a point on a sample, which is then imaged and apertured at the detector. The image is formed by illuminating the sample point by point, with subsequent reconstruction by a computer. This principle is quite versatile and allows for a large array of variations of imaging techniques. For example, scanning tunneling microscopy (STM) has spatial resolution in the nanometer range but also has


limitations when dealing with certain materials. STM requires the material to be conductive, thus complex sample preparation (e.g. gold plating) is needed before imaging biological samples. Scanning electron microscopy (SEM) is yet another well-known technique, which offers high resolution and striking contrast, but requires complex sample preparation. Here we present a development in a laser scanning microscopy technique [11]: a method of imaging that is analogous to SEM, but performed entirely by means of optical scanning. Thus, the method neither requires an imaging array detector nor needs complicated sample preparation. It uses a laser beam, which is focused by a system of lenses into a single spot on a sample, while a large-numerical-aperture non-imaging detector receives the scattered light. The technique is similar to a scanned beam endoscope [12], where imaging is also conducted by scanning the laser spot over the area of the sample while measuring the scattered energy for each point. Here, the laser scanning microscope was used together with a position-sensitive detector to study the orientation of the object surface. Because all the illumination energy falls on the particular spot of interest, there is no need to form a conjugate image plane. Moreover, there is no need to exclude the light from elsewhere in the field of view with a lens or aperture. Thus, the system does not use a pinhole (typical for a laser scanning confocal microscopic system). Instead, the scanning laser beam parameters are manipulated freely by the system of lenses to produce a beam profile of the required cross-section and confocal beam parameter. It is worth noting that the lenses used in the system are simple glass lenses (not microscope objectives, as they are not used to form an image, but only to focus the beam), which


greatly reduces the system's cost. Even with these simple lenses the system remains free from all non-longitudinal aberrations such as coma, astigmatism, field curvature, and distortion. The lateral resolution, as well as the depth of focus of this system, is effectively determined by the laser beam spot size. If visible light is used, the lateral resolution can be on the order of a micron, while the depth of focus (axial resolution) is determined by the laser beam profile. The non-invasive nature of this method, as well as the ease of operation and low cost, far outweighs the shortcoming of lower-than-SEM resolution for a variety of scientific and industrial applications.

2.2 Laser scanning microscope design

In order to build the laser scanning microscope with the position-sensitive detector, one first needs to consider the parameters of the laser beam, such as its intensity, divergence and spot size, required to generate images of the desired axial and lateral resolution. In our system, the laser (assumed to be emitting the TEM00 Gaussian mode) spot size was measured by gradually blocking the laser beam with a knife edge while recording the change in the overall beam intensity, and numerically reconstructing the beam intensity profile (see Appendix A). The size of the laser spot on the sample can be determined theoretically by using the complex beam parameter q(z) [13]. The location of the beam's waist W0 and its size were determined by measuring the beam's spot in several locations and effectively reconstructing the intensity profile of the Gaussian beam. The initial laser beam waist was found to be equal to 360 μm and was located near the output mirror of the laser. At the waist, the inverse of the complex beam parameter was found to be:


1/q(z) = 1/R(z) − iλ/(πW²(z)) = −1.55i m⁻¹   (2.1)

where λ = 0.63 μm is the laser wavelength, R(z) is the radius of curvature and W(z) is the beam width, both functions of the axial coordinate z. At the waist, R(z) is infinite and W(z) = W0. In the case of a Gaussian beam, its depth of focus is determined as twice the Rayleigh range (2z0 = 2πW0²/λ = 1.29 m), which shows that in order to achieve higher lateral resolution and longer focal depth, a compromise is necessary, as lowering the spot size on the sample will lead to a lowering of the focal depth.

In the paraxial approximation any optical system is completely characterized by its ray transfer ABCD matrix. We used this approach for the propagation of the Gaussian beam and the determination of the lateral range of the system. For the system of three focusing lenses presented here (focal lengths of the lenses L1, L2 and L3 are 10 cm, 100 cm and 5 cm, respectively), we constructed its ray transfer matrix and then evaluated it by a computer. The output complex beam parameter q_out was found using the matrix elements and the parameter q_in of the input beam:

q_out = (A q_in + B)/(C q_in + D) = (3.56 + 3.83i) × 10⁻⁵ m   (2.2)

The final beam spot size was then computed using Equation (2.1) and was equal to 2.7 μm, with a depth of focus of 76 μm (see Appendix A).
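As an illustration, the q-parameter propagation can be sketched numerically. The focal lengths are from the text, but the element spacings (30 cm to L1, telescope spacing f1 + f2, 20 cm to L3) are assumptions, so the output values are indicative only:

```python
import numpy as np

# Sketch of the ABCD/q-parameter propagation of Section 2.2.
lam = 0.63e-6                        # He-Ne wavelength, m
W0 = 360e-6                          # measured initial waist, m
q_in = 1j * np.pi * W0**2 / lam      # q at the laser waist (R infinite)

free = lambda d: np.array([[1.0, d], [0.0, 1.0]])
lens = lambda f: np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

f1, f2, f3 = 0.10, 1.00, 0.05        # L1, L2, L3 focal lengths, m
# assumed layout: laser -> 0.30 m -> L1 -> (f1+f2) -> L2 -> 0.20 m -> L3
M = lens(f3) @ free(0.20) @ lens(f2) @ free(f1 + f2) @ lens(f1) @ free(0.30)
(A, B), (C, D) = M

q_out = (A * q_in + B) / (C * q_in + D)   # Equation (2.2)
zR = q_out.imag                           # Rayleigh range (invariant in free space)
W_focus = np.sqrt(lam * zR / np.pi)       # waist radius at focus, from Equation (2.1)
print(f"spot size {W_focus * 1e6:.2f} um, depth of focus {2 * zR * 1e6:.1f} um")
```

With these assumed spacings the sketch gives a spot of roughly 2.7 μm and a depth of focus of roughly 70 μm, close to the values quoted above; the exact numbers depend on the actual element separations.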


Figure 2.1 shows the schematics of the apparatus. The 15 mW He-Ne laser beam is steered by mirrors and expanded by lenses L1 and L2, which together form a telescopic system to produce a beam with a larger cross-section in front of lens L3. Lens L3 can then focus the beam into a small spot on an object. The object is mounted on a translation stage driven by a motorized micrometer, and is scanned along the vertical axis (see Figure 2.2). The fast horizontal scanning is achieved by means of a galvanometer-based scanning mirror. With a typical fast-scanning frequency of 100 Hz, an entire image of a hundred lines can be acquired in a second. It is worth noting that the system can easily be used with two scanning mirrors, in which case the scan speed can be increased to many frames per second, but at a higher cost. The equation

Δx = f tan(θ)   (2.3)

where f is the focal length of the focusing lens L3 and θ is the mechanical angle of mirror rotation, is used to determine the actual horizontal scanning range Δx due to the mirror. Because both the sample and the scanning mirror are at the focal points of the lens L3, there is no need for geometric correction, as the beam's angle of incidence on the sample is always zero degrees.
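As a quick numerical check of Equation (2.3) (the mirror angle of 1 degree is an assumed value, since the actual scan angle is not specified here):

```python
import math

f = 0.05                           # focal length of L3, m
theta = math.radians(1.0)          # assumed mechanical mirror angle
dx = f * math.tan(theta)           # Equation (2.3): displacement on the sample
print(f"scan displacement: {dx * 1e3:.3f} mm")
```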


Figure 2.1: Laser scanning microscope setup. Neutral density (ND) filters were used to avoid damage to fragile specimens. Focal lengths of lenses L1, L2, L3 and L4 are 10 cm, 100 cm, 5 cm and 10 cm, respectively. The focusing lens L3 can be an ordinary spherical lens or an axicon.


Figure 2.2: Scanning apparatus.


The light scattered from the sample is partially collected by lens L4, and the intensity information is obtained by the photodetector PD. The system was first used with a photodiode and a photomultiplier tube (PMT). Finally, a quadrant position-sensitive detector was employed. A 12-bit data acquisition board feeds the detected signal into a computer, where a set of LabVIEW programs (see Appendix B) controls the scanning and the generation of the two-dimensional image of the object (see Figure 2.3).

2.3 Imaging properties

Some images produced by our laser scanning system are shown below. First, in order to assess the system's resolution, a USAF resolution target is imaged (Figure 2.4) using a simple photodiode. The image is 1176 x 1296 pixels and the pixel size is 0.9 x 1 μm. The measured beam spot size at the target was around 5 μm. Note that since the laser beam intensity profile is Gaussian, by making the pixel (step) size smaller than the spot size (over-sampling), the actual resolution is even slightly higher than the spot size. The lateral resolution can be further improved by using a system with a higher ratio of the focal lengths of lenses L2 and L1. There is still a practical limit, however, after which the resolution does not improve, due to the diffraction limit of the finite apertures. Also, due to the quadratic dependence between the beam waist size and the Rayleigh range, a system with a smaller spot size will have a much lower depth of focus.


Figure 2.3: Scanning controls.


Figure 2.4: USAF resolution target, groups 4 and 5, and zoomed image of groups 6 and 7. Group 7 element 2 (line thickness less than 3.5 μm) is resolved.


Figure 2.5 shows an area of a penny. The images are 500 x 500 pixels and the pixel size is 2.79 x 3 μm. The images exhibit the perception of the 3D relief structure of the letters, evident as highlighted and shadowed regions. Note that since the letters on the penny protrude from the background, the image appears as if the object is illuminated by a light source on the left with a camera viewing it from the front. In reality, the laser beam illumination is from the front while a point detector is located on the left. Note also that the grainy patterns of the background are due to the actual texture of the surfaces. Although the light source is a coherent laser, any coherent speckle effect is averaged out by the large aperture of the detector, thus removing one of the common sources of noise in laser-based imaging systems. Figure 2.6 shows the image of a flower seed. The sample is of low reflectivity, which results in a signal level too low to be collected by an ordinary photodiode, so a PMT was used instead. Note the reversed shadow effect, as described for Figure 2.5. To first order, these images are equivalent to ones that can be obtained using conventional wide-field microscopy, but with the scanning system the optics is greatly simplified. Some images of insects are shown in Figure 2.7, Figure 2.8 and Figure 2.9. A lantern fly's head and eye area is visible in Figure 2.7, where the individual ommatidia are resolved. Similarly, Figure 2.8 shows the eye of a dragonfly. The images shown in Figure 2.9 show the body of an ant for various axial positions of the sample. For each consecutive image, the axial position of the sample was changed by 250 μm. When looking at the individual features, one can estimate the depth of field in this case to be about 1.5 mm.


(a) (b) Figure 2.5: The images of a one-cent coin (parts of the inscription "E-PLURIBUS UNUM" are shown). The image frames are 1.4 x 1.5 mm² (500 x 500 pixels).


Figure 2.6: Seeds of Ipomoea violacea (Morning Glory) imaged by PMT. The image frame is 1.8 x 1.2 mm² (434 x 289 pixels).


Figure 2.7: Lantern fly's eye area (500 x 793 pixels).


Figure 2.8: Dragonfly's eye (500 x 1000 pixels). The image frame is 2.1 x 3.3 mm².


Figure 2.9: Ten images of an ant spanning the axial range. The image frame is 2.1 x 4.2 mm² (500 x 1000 pixels). The individual features (see the grey square) on the first image of the series are still visible on the seventh image.


2.4 Laser scanning with the position-sensitive detector

The detector was then changed to the position-sensitive (quadrant) detector. The detector consists of 4 areas (A, B, C and D diodes) positioned in 4 quadrants, each capable of detecting the light intensity separately from the others. The detector output contains 3 channels: the Sum intensity of all 4 quadrants, Top-minus-Bottom (T-B) and Left-minus-Right (L-R). If the quadrant detector is initially positioned in such a way that the scattered light collected by the lens L4 in Figure 2.1 falls in the middle of the detector, the T-B and L-R channels are approximately zero, while the Sum channel registers the same signal as in the case of an ordinary photodiode. As the laser beam was scanned over the features of the sample, the position of the intensity maximum of the scattered light moved up/down and sideways to reflect the spatial orientation of the scanned surface. The T-B and/or L-R channels were monitored, and the sign and the magnitude of the signal were then indicative not only of the reflective properties of the sample, but of the way the surface is oriented at a particular spot. The image in Figure 2.10 is of a one-cent coin. The reflectivity of the surface was approximately the same everywhere on the coin, but the scattering in the direction of the detector is higher when the surface is turned towards the detector and lower when the surface is turned away, thus showing the 3D structure of an individual column. Similarly, the images of a lantern fly's head and an ant's legs, where the individual 3D features are highlighted, are shown in Figures 2.11 and 2.12.
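The three output channels can be sketched as follows (the mapping of the A-D diodes onto the quadrants is an assumption for illustration, as the text does not specify the layout):

```python
def quadrant_channels(A, B, C, D):
    """Combine the four quadrant photodiode signals into the three output
    channels. Assumed layout: A top-left, B top-right, C bottom-left,
    D bottom-right."""
    total = A + B + C + D            # Sum channel (ordinary photodiode signal)
    tb = (A + B) - (C + D)           # Top-minus-Bottom
    lr = (A + C) - (B + D)           # Left-minus-Right
    return total, tb, lr

# a centered spot gives equal power in all quadrants, so T-B and L-R vanish
s, tb, lr = quadrant_channels(1.0, 1.0, 1.0, 1.0)
print(s, tb, lr)        # 4.0 0.0 0.0
# light pushed toward the top quadrants produces a positive T-B signal
s, tb, lr = quadrant_channels(1.5, 1.5, 0.5, 0.5)
print(s, tb, lr)        # 4.0 2.0 0.0
```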


Figure 2.10: One-cent coin imaged with a position-sensitive detector: a column of the Lincoln Memorial. The image frame is 0.56 x 1.5 mm² (200 x 500 pixels).


Figure 2.11: Lantern fly's head imaged with a position-sensitive detector. The image frame is 1.4 x 3.0 mm² (500 x 1000 pixels).


Figure 2.12: Ant's legs imaged with a position-sensitive detector. The image frame is 1.95 x 2.4 mm² (700 x 800 pixels).


2.5 Bessel beams

In conventional optics, the lateral resolution and the depth of focus are conflicting requirements and cannot both be maximized at the same time. A possible solution is the use of a Bessel beam generated by an axicon lens [14]. On-axis illumination of the axicon by a Gaussian beam produces a Bessel beam, an interference pattern, which results in energy redistribution between the central maximum and the side lobes (see Appendix C). The use of Bessel beams has already been demonstrated in OCT applications [15], for example. In the case of an ideal Bessel beam, the size of the central maximum remains the same as the beam propagates through space [16, 17]. If this beam is used in a laser scanning microscope, such a system will have an infinitely long depth of focus. Even for a real Bessel beam, the depth of focus is still much longer than that of a Gaussian beam [18]. Mathematically, the influence of a thin axicon on the transmitted Gaussian beam can be taken into account by the radial phase factor exp[−2πi(n_a − 1)αρ/λ], where λ is the wavelength of the laser light, n_a is the refractive index, ρ is the radial coordinate, and α is the wedge angle of the axicon (the wedge angle is equal to (180° − cone angle)/2) [19]. The Gaussian beam of amplitude A(ρ) transmitted through the axicon can be written as

A'(ρ) = A(ρ) exp[−2πi(n_a − 1)αρ/λ]   (2.4)

Physically, the focusing of the beam means the phasing-in of its spatial components. To model the propagation of the beam in space, one can obtain its spatial spectrum and use the Fresnel integral. The resulting spatial spectrum can be converted back to the amplitude form via the inverse Fourier transform. It is possible to obtain an approximate solution (Equation 26 in [19]) using the asymptotic approximation of the Bessel function.


The beam intensity is then obtained as the squared amplitude, and can be used to estimate the size of the Bessel beam spot. The calculations based on this approximation, for an incident beam spot size (Gaussian TEM00 mode) of W = 8.0 mm, index of refraction n_a = 1.5 and wedge angle α = 10°, would produce an axicon focal depth

L_a = W / ((n_a − 1)α) ≈ 9.2 cm   (2.5)

with the beam spot size at the object of about 3 μm. The significant downside of the use of an axicon in this system is the presence of the side lobes, which draw a significant amount of energy, thus reducing the overall contrast of the system. However, using different wedge angles may result in an acceptable tradeoff between lower contrast and higher depth of focus.
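The estimates above can be reproduced numerically under the small-angle approximation; the first zero of the Bessel function J0 (at 2.405) is used here as a standard estimate of the central-lobe radius:

```python
import math

lam = 0.63e-6                      # wavelength, m
W = 8.0e-3                         # incident Gaussian beam size, m
n_a = 1.5                          # axicon refractive index
alpha = math.radians(10.0)         # wedge angle

beta = (n_a - 1.0) * alpha         # deflection angle of the refracted rays
L_a = W / beta                     # focal depth, Equation (2.5)
k_r = 2 * math.pi / lam * beta     # radial wavenumber of the Bessel beam
r_core = 2.405 / k_r               # central-lobe radius (first zero of J0)
print(f"focal depth {L_a * 100:.1f} cm, core radius {r_core * 1e6:.2f} um")
```

The core radius comes out near 2.8 μm, consistent with the "about 3 μm" spot quoted above.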


CHAPTER 3

HOLOGRAPHY AND THREE-DIMENSIONAL IMAGING

3.1 Introduction to holography

Holography (from the Greek "holos", meaning "the whole"), a method of recording both intensity and phase information of a complex wave field, was first proposed by Dennis Gabor in 1948 [20]. However, it was only in the 1960s that the construction of the laser and the introduction of the off-axis technique, pioneered by Leith and Upatnieks [21], made it practical. When an object is illuminated with a light wave, the reflected (or transmitted) wave's amplitude and phase are perturbed as a result of the wave's interaction with the object. In the case of ordinary photography, only the amplitude information is retained. If the object light wave is sufficiently coherent, it can be made to interfere with a reference wave. The resulting interference pattern then contains the information about both amplitude and phase. This process is commonly referred to as holographic recording, which can be done on a photographic plate or an electronic device, such as a CCD. In order to reconstruct the hologram, one needs to illuminate it with the reference wave. When the reference wave illuminates the holographic recording, it diffracts from it. The diffracted wave still contains the information about both the amplitude and phase of the original object wave, and the resulting diffraction pattern produces a 3D image of the


original object, the hologram. Physically, it is the presence of phase information that gives the depth perception for a human observer. It is also then possible to use this information to actually measure the depth (physical thickness) of the original object, which makes holography an important tool in scientific and engineering research. If the reference and object beams are at an angle to each other, the resulting holographic recording still retains all the properties described above, but the reconstruction then produces spatially separated zeroth and first orders of diffraction (known as the DC, real and mirror images). In this case, the real holographic image is not obscured by the DC background. Holographic interferometry, designed as a non-contact method for studies of deformation, strains, vibrations and heating, was proposed by Stetson et al. [22]. It made it possible to non-invasively profile surfaces with an accuracy of less than a micron. Also in the late 1960s, the origin of the first computer-generated holograms can be traced to Goodman and Lawrence [23] and Kronrod et al. [24]. These methods involve either numerical generation of the holographic recording with subsequent reconstruction, or optical recording of the hologram with the reconstruction done digitally by a computer. The conventional process of holographic recording on photographic plates is rather complicated and time-consuming, which makes real-time imaging difficult. In the past decade, the emphasis has been shifting to digital holography [25]. In this case, the hologram is recorded by a high-resolution CCD array [26-28]. As in the case of ordinary holography, the hologram contains the information of not just the amplitude distribution of light, but also of its phase. After the hologram recording, the extraction of


the amplitude and phase can be accomplished by numerically propagating the reference wave. The complete and accurate description of the propagation of the optical field by diffraction theory allows numerical reconstruction of an image as an array of complex numbers, which represent the amplitude and phase of the optical field [29]. In addition to the ability of fast image acquisition and the retrieval of both quantitative amplitude and phase information, digital holography offers the versatility of various image processing techniques that can be applied to the complex field, which may not be feasible in real-space holography. A number of different methods have been considered for numerical reconstruction, including the Fresnel transform, Huygens convolution, and angular spectrum [30-32]. Application of digital holography in microscopy is especially important because of the extremely narrow depth of focus of high-magnification systems [33, 34]. Microscopic imaging by digital holography has been applied to image microstructures and biological systems [35-37]. Numerical focusing of holographic images can be accomplished from a single exposed hologram. Direct accessibility of the phase information can be used for numerical correction of different aberrations of the optical system, such as field curvature and anamorphism [38].

3.2 The retention of phase information

Numerically, if a light wave passes through a specimen of thickness t, its phase shift changes due to the variation of the refractive index. This can be expressed as

Δφ = 2π Δn t / λ   (3.1)


where Δn is the change in the refractive index of the specimen relative to the surrounding medium and λ is the wavelength. Phase-contrast techniques convert the phase changes suffered by the light wave, while passing through or reflecting from objects, into observable intensity variations. Over the years, a number of techniques have been developed to qualitatively perform this conversion. As mentioned in Chapter 1, the ZPC and DIC phase contrast microscopy techniques, while giving an appearance of 3D imaging, cannot easily be used to quantify the phase change, since the conversion of phase to intensity modulation is nonlinear. Since the phase change indicates the change in the optical path length, it can then be converted to physical thickness, providing the sample's height information. Thus, the direct access to the quantitative phase information makes digital holographic microscopy a true 3D imaging technique.

3.3 Phase and height maps

If the light wave reflects from an object, its surface is described by a height map h(x, y), which is determined from the phase map φ(x, y) of the holographic reconstruction at a given wavelength λ by

h(x, y) = (λ / 4π) φ(x, y)   (3.2)

(there is a factor of 2 in the denominator due to the fact that light travels to the surface and then reflects back). Figure 3.1a shows the phase map of the aluminum-covered USAF resolution target. The step size is approximately 2.2 radians. According to Equation 3.2, it is proportional to the target height map, whose profile corresponding to the line over the phase map is


sketched. The step height is about 100 nm. This result is confirmed by the AFM scan shown in Figure 3.1b. On the other hand, if the object is a mostly transparent cell on a reflective substrate, so that the light propagates through it, reflects from the substrate and propagates back, the physical thickness is

h(x, y) = λ φ(x, y) / (4π (n − n0))   (3.3)

where (n − n0) is the refractive index difference between the cell and air.
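Equations (3.2) and (3.3) translate directly into code; a minimal sketch (the function names are illustrative):

```python
import numpy as np

lam = 633e-9   # wavelength, m

def height_reflective(phase):
    """Equation (3.2): height map of a reflective sample."""
    return lam * phase / (4 * np.pi)

def height_transmissive(phase, n, n0=1.0):
    """Equation (3.3): thickness of a transparent cell on a reflective
    substrate (n0 = 1 for air)."""
    return lam * phase / (4 * np.pi * (n - n0))

# the ~2.2 rad step measured on the aluminum-coated USAF target
step = height_reflective(2.2)
print(f"step height = {step * 1e9:.0f} nm")   # ~111 nm, consistent with the AFM scan
```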


Figure 3.1: Phase map and height profile for λ = 633 nm: (a) the profile is taken along the line over the phase map and (b) AFM image and height profile that confirm the results in (a).


CHAPTER 4

THE PRINCIPLES OF DIGITAL HOLOGRAPHY

4.1 Angular spectrum method

Once a hologram has been acquired, it is reconstructed by numerically propagating the optical field along the direction perpendicular to the hologram plane (the z-direction) in accordance with the laws of diffraction. The Fresnel-Kirchhoff formula can be expressed as a Fourier integral [39]:

A(k_x, k_y; 0) = ∬ E0(x, y; 0) exp[−i(k_x x + k_y y)] dx dy   (4.1)

where k_x and k_y are the spatial frequencies corresponding to x and y, respectively, and E0(x, y; z = 0) is the intensity distribution recorded by the CCD camera. This is the expression for the Fourier transform, and A(k_x, k_y; 0) is the angular spectrum of the optical field E0(x, y; z = 0) at the hologram plane z = 0. The object's angular spectrum consists of a zero-order and a pair of first-order terms. One of the first-order terms is the angular spectrum of the object field and the other is its phase-inverted version. Figure 4.1(a) shows the hologram of a USAF resolution target recorded by our dual-wavelength experimental setup. The two crossing interference fringe patterns, formed by the two wavelengths, can be clearly seen. Figure 4.1(b) presents the Fourier spectrum with the two pairs of first-order components, corresponding to the two wavelengths, plainly visible.


46 Figure 4.1: Two-wavelength hologram of th e USAF resolution target: (a) digital hologram (640x480 pixe ls) and (b) its Fourier sp ectrum of the hologram with the red and the green wavelengt hs first order components shown.


47 The field 0(,;0) Exyz can be regarded as a pr ojection of many plane waves propagating in different directions in sp ace and with the complex amplitude of each component equal to0(,;0)xyAkk. The angular spectrum can then be propagated in space along the zaxis: 0(,;)(,;0)exp[] x yxyzAkkzAkkikz (4.2) where z ikzexp is the complex transfer function and 222 zxykkkk where 2/ k Here, there is no requirement for z to be larger than a certain minimum value, as in the case of Fresnel transform or H uygens convolution. The complex wave-field at an arbitrary z can be obtained by performing th e inverse Fourier transform: (,;)(,;)exp[()] x yxyxyExyzAkkzikxkydkdk (4.3). As both integrals in Equations (4.1) and (4.3) are computed via FFT algorithm, the angular spectrum method is well su ited for the real-time imaging. 4.2 Curvature correction The angular spectrum method described a bove is based on the premise that the reference and object waves are both plane wave s. However, in the real setup, each wave has its wavefront curvature, resulting in a curvature mismatch. Consider the complex field captured by a CCD camera (see Figure 4.2). The phase mismatch can be compensated numerically, by multiplying the original “flat” field 0(,;0) Exyz by the phase factor exp[] i where kd is the phase difference between A and O. Here,2/ k where is the wavelength of light and d is the optical path difference:


48 Figure 4.2: Curvature correction. R is the wa ve’s radius of curvature centered at C which can be determined experimentally for a given setup, r is the vector from the center of the CCD matrix (point O ) to an arbitrary point A and 0r is the vector from the center of the CCD matr ix to the projection of the center of curvature on the CCD matrix P Here 22rxy x and y are the coordinates of A and 22 0 00rxy x0 and y0 are the coordinates of P


49 2222dCACOCPPACPPO (4.4). From geometry: 00 000022 22 222222dRrrRrRxxyyRxy (4.5). The difference can be positive or negative, depending on the angle of the curvature we are compensating. Finally, 22 2222 00000(,;0)(,;0)exp[()] ExyExyikRxxyyRxy (4.6), which is the exact expression for the curvatur e-corrected field. This expression agrees with the approximation from reference [38], in the case Rr and 00r : 22 22 22 2222 ()1111 2 2 Rxy rr kRrRkR R RR (4.7). It is worth noting that Equation (4.7) is a known expression for Newton’s rings, which means that if the object is a plane mirro r, the resulting interference pattern would be a set of concentric rings with the dark fringes of radius of mR where m=0,1,2… Therefore, for a wavelength of 532 nm, R = 3 cm, the radius of a first fringe is 126 micron and there is a total of 3 fringes visibl e in 174 micron frame (see Figure 4.3). If the field of view is increased, th ere are going to be mo re fringes visible a nd at some point the aliasing may occur. One can use this formula as an analytical expression to avoid fringe aliasing. For example, for the parameters above, in order for the fringes to alias (less than 2 pixel per fringe), one would have to ha ve a field of view large enough for over 100 fringes.


50 If the parameters are chosen correctly, ev en a substantial curvature mismatch can be compensated. Figure 4.3 shows the phase imag e of the USAF resolution target covered with a layer of aluminum to make it entire ly reflective. The pattern on the resolution target is elevated approximately 100 nm above the flat background. Figure 4.3(a) shows the reconstructed image before the curvature correction. Figure 4.3(b) is the same image after the curvature correction was applied, and the curvature mismatch completely compensated.


51 Figure 4.3: The reconstructed phase image of the USAF resolution target: (a) without curvature correction and (b) w ith curvature correction applied. The images are 174x174 m2 (450x450 pixels).


52 CHAPTER 5 DUAL-WAVELENGTH PHASE IMAG ING BY DIGITAL HOLOGRAPHY 5.1 Phase unwrapping. Phase images of objects with variations in optical thickness greater than the wavelength are ambiguous and results in phas e wrapping. Consequently, the phase map exhibits discontinuities at the positions where the total phase change exceeds 2 The phase in equations 3.2 and 3.3 can only vary from 0 to 2 which corresponds to optical thickness variation of 0 to The phase imaging of objects with the optical thickness variation higher then that is ambiguous and gives rise to 2 -discontinuities in the phase image. There are various phase unwrapping method available, which involve dividing the phase image into horizontal lines and unwra pping them by scanning pixels and adding a 2 offset to each pixel, when needed. Afte r that, the unwrapping process is done along vertical lines. Software algorithms that exist for detecting and removing 2 discontinuities often require user interven tion, computationally slow and habitually produce erroneous results when the phase profile is noisy. 5.2 Multi-wavelength phase imaging We have introduced a multiple-wavelength phase-imaging technique that removes the 2 -discontinuities [40]. Unlike software algo rithmic approaches to phase unwrapping, it


53 does not require user intervention and has onl y minimal requirement on the level of phase noise and discontinuity. Furthermore, the met hod allows imaging to be performed faster – the only time constraint being the speed at which the Fourier tr ansforms in angular spectrum method is performed. Suppose we image an object such as a mirror, which is positioned at an angle to the optical axis of the setup. If the angle is different from 90 degrees, the surface of the mirror will have a slope. When it is imaged by using a wavelength smaller than its overall height, the phase image will contain 2 discontinuities, as s hown in Figure 5.1a and Figure 5.1b. However, if the simultaneous dual-wavele ngth phase imaging is performed, the discontinuities of the two maps will o ccur at different positions, since the two wavelengths are different. It allows unwrappi ng the phase by comparing the two maps, as stated below. In this way, the 2 jumps are removed and then the phase ambiguities are resolved on a bigger range, which corresponds to a new synthetic “beat” wavelength (see Figure 5.1c). Figure 5.2 shows the phase images of the USAF resolution target imaged at an angle. The images produced with single wavelengths exhibit multiple phase steps (Figure 5.2a and Figure 5.2b). The phase maps 1 and2 derived from each wavelength are subtracted, so that 1212 is obtained. Adding 2 wherever 120 yields a new phase map, practically free of discontinuities. It is equivale nt to a phase map created by a single synthetic “beat” wavelength: 12 = 12/| 1 2| (5.1)


Figure 5.1: Phase maps resulting from imaging a 5 micron sloped surface by λ2=532 nm (a) and λ1=633 nm (b), where multiple discontinuities are clearly visible, and (c) the resulting beat-wavelength phase map with the extended range (each map is 78.4 μm x 78.4 μm, 256 x 256 pixels).


Figure 5.2: Phase maps of the resolution target for (a) λ1=532 nm and (b) λ2=633 nm. (c) Synthetic dual-wavelength phase map with beat wavelength Λ12 = 3334 nm and (d) its 3D rendering (the images are 174x174 μm2 and the vertical scale for (a-c) is in radians).


For the wavelengths λ1=633 nm and λ2=532 nm, the beat wavelength is Λ12 = 3334 nm (Figure 5.2c and Figure 5.2d). Here, while the phase images produced by a single wavelength exhibit multiple discontinuities, in the final synthetic-wavelength map the discontinuities are removed. In fact, here the synthetic wavelength is such that the range of the dual-wavelength phase map is barely enough to resolve the discontinuities (some even remain on the left and right of Figure 5.2c).

5.3 Application of the phase noise reduction algorithm (fine map)

The drawback of the dual-wavelength method is the amplification of phase noise by the same factor as the range. Furthermore, the two phase maps differ in their noise distributions, so that the final dual-wavelength phase map can remain quite noisy even if the noise in the single-wavelength phase maps is low. However, one can use this dual-wavelength "coarse" map as a guide, together with one of the original phase maps (φ1 or φ2), to produce a low-noise "fine" phase map. The method (detailed in Appendix D and reference [40]) involves dividing the height of the coarse map into an integer number of one of the original wavelengths, say λ1. Then, the wavelength-high segments from the phase map φ1 are pasted into this coarse map, which achieves the desired effect of reduced noise together with the extended range. In practice, the areas near the boundaries of the wavelength intervals are somewhat problematic. There, the noise present in the single-wavelength map φ1 causes the height to change erratically by one wavelength. To partially solve this problem, one can compare this map to the coarse map and, if the difference is more than λ1/2, add or subtract λ1 depending on the sign of the difference.


However, if the noise is excessive, this last step results in the shift of small portions of the final image by λ1 from their true position. Since the height of such a shift is always λ1, it can be fixed in software by looking for steps of this height and shifting them up (or down) by λ1.

Indeed, the high noise level makes it very difficult to measure the height of the object's individual features from the dual-wavelength phase map, as shown by the height profile in Figure 5.3(a) obtained from the map in Figure 5.2(c), despite the fact that the overall shape of the object is still preserved by the dual-wavelength map. In contrast, the height profile of the "fine" map, shown in Figure 5.3(c), again yields a phase image where the step height of 100 nm is clearly observable (see chapter 3).

To numerically estimate the noise levels in the system, the height profiles of a flat area for a single wavelength, the coarse map, and the fine map (Figure 5.4) were taken and their rms noise was measured. While the rms noise in the coarse map is substantial (of order 54 nm), the noise in the fine map is almost equal to that of the single-wavelength phase map (of order 8.5 nm and 6.5 nm respectively).
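The pasting-and-correction procedure can be sketched as follows (the actual implementation is described in Appendix D and reference [40]; the maps here are synthetic, with deterministic oscillations standing in for the measured noise):

```python
import math

def fine_map(h_coarse, h_single, lam):
    """Paste wavelength-high segments of the low-noise single-wavelength
    height map into the coarse map, then correct points that landed one
    wavelength off by comparing against the coarse map."""
    out = []
    for hc, hs in zip(h_coarse, h_single):
        h = lam * math.floor(hc / lam) + hs
        if h - hc > lam / 2:      # segment chosen one wavelength too high
            h -= lam
        elif hc - h > lam / 2:    # or one wavelength too low
            h += lam
        out.append(h)
    return out

lam = 532.0                                    # nm
true = [i * 37.0 for i in range(100)]          # true heights, 0 .. 3663 nm
# Deterministic stand-ins for noise: large on the coarse map, small on the
# single-wavelength map
coarse = [h + 80.0 * math.sin(i) for i, h in enumerate(true)]
single = [(h % lam) + 4.0 * math.sin(3.0 * i) for i, h in enumerate(true)]

fine = fine_map(coarse, single, lam)
```

The fine map inherits the single-wavelength noise (here within ±4 nm) while keeping the coarse map's extended range; without the λ/2 comparison, points near the segment boundaries would jump by a full wavelength.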


Figure 5.3: Height profiles of the resolution target for the (a) "coarse" and (b) "fine" phase maps. (c) Final "fine" map and (d) 3D rendering of (c). The image sizes are 174x174 μm2.


Figure 5.4: Line intensity profiles of a flat area for the coarse, fine, and single-wavelength phase maps respectively.


CHAPTER 6

STUDY OF POROUS COAL SAMPLES

6.1 Experiment

Activated coals, as well as coals treated with pyrolysis, are highly porous materials. This is the reason why these coals have a high capacity for absorption, which makes them very important in processes such as purification and filtering. These processes often depend on the size and morphology of the pores. Normally, porosity is evaluated using chemical methods, but these techniques can be rather complex and time consuming. Optical microscopy and digital imaging analysis have been previously used [41] to investigate coal samples, but these methods have their own limitations in terms of the minimum size of pores that can be observed. As mentioned earlier, ordinary microscopic techniques do not provide 3D depth information about the sample. Porosity evaluation using AFM fails because the pores are too deep for the tip. Such limitations in analyzing the pores of coal samples can be overcome by reflection digital holographic microscopy, as shown in this work.

The samples imaged here were acquired from the Coal Group at the National University of Colombia, Campus Medellin. These samples were treated by pyrolysis and prepared in a mixture with epoxy resin; then they were ground and polished with aluminum oxide abrasives (1, 0.5, 0.03 μm grain size). The process ensured that the samples are reflective and firm enough to be imaged.


Figures 6.1 and 6.2 show the experimental setup, based on two overlapping Michelson interferometers, which enables fine-tuning of the location of the first-order components associated with each wavelength in Fourier space. Its configuration is similar to setups based on modified Mach-Zehnder interferometers [42-46], as is typical for reflection digital holographic microscopy. He-Ne (λ1=633 nm) and diode-pumped solid-state (λ2=532 nm) lasers were used as light sources. Neutral density filters (ND) control the intensity of the laser beams. The 20x microscope objectives OBJ11 and OBJ12, together with the pinholes A and the collimator lenses L11/L12, produce uniform plane waves, whose intensity is further controlled by the polarizing filters P1 and P2. Beam splitters BS1 and BS2 divide the beams into the reference and object waves, which are reflected by the reference mirrors and the object. Thereafter, the beam splitters direct the waves toward the CCD camera.

Lenses L21 and L22 and the 20x microscope objectives OBJ21 and OBJ22 again collimate the reference waves. Two separate reference arms are used to match the object path lengths for each object wave. An interference filter is placed into the reference arm of the diode-pumped solid-state (λ2=532 nm) laser. It is designed to allow only this wavelength to pass and to block the inverse reflection of the other laser.

The 20x microscope objective OBJ1 focuses a magnified image of the sample onto the sensor of the CCD camera, where the interference pattern between the reflected reference and object waves is recorded. The images are acquired and processed using a set of Labview (see Appendix E) and C programs (see Appendix F), which were developed for this project.


Figure 6.1: Dual-wavelength digital holography setup for coal samples. The lateral magnification of all microscope objectives (OBJ) is 20x. The focal lengths of the lenses L21 and L22 are 17.5 cm and 10 cm respectively. The ND filters and polarizers P1 and P2 are used to control the intensity of the laser beams. Pinholes A are used to select only the central part of the Gaussian beam. Lenses L11, L12, L21 and L22 and objectives OBJ1, OBJ21 and OBJ22 assure an appropriate collimation of the waves (i.e. the beam waist is kept at "infinity").


Figure 6.2: Digital holographic microscope system.


An angle between the object wave and each of the reference waves can be introduced by slightly tilting the reference mirrors. Furthermore, by tilting the two reference beams orthogonally to each other, we can precisely control the location of each spectral component in Fourier space. As a result, the two spectral components can be sufficiently separated to enable effective filtering in the Fourier domain, which in turn allows for real-time imaging.

6.2 Two-wavelength optical phase unwrapping

In Figure 6.3, a boundary between the porous coal and the resin is shown. The amplitude image, Figure 6.3a, is similar to what a regular microscope would display. The area corresponding to the resin is at the lower right corner. The boundary is barely visible in the amplitude image, as there is no significant difference between the reflection from the coal and the resin.

Figure 6.3b and Figure 6.3c show the single-wavelength phase images, which display multiple discontinuities; these were removed in the dual-wavelength coarse map (Figure 6.3d). The phase noise is significantly reduced in the fine map (Figure 6.3e), as can be appreciated in its 3D rendering (Figure 6.3f). The latter image shows the 3D surface profiles of the coal and resin surfaces, which appear to "bend" towards each other at the boundary between the two surfaces, so the boundary itself is seen very clearly.


Figure 6.3: Boundary in a porous coal sample: (a) amplitude image; phase maps reconstructed at (b) λ1=0.63 μm and (c) λ2=0.53 μm; (d) the dual-wavelength coarse phase map, (e) fine map and (f) its 3D rendering. All images are 98x98 μm2 and the vertical scale of (b-e) is in radians.


6.3 Comparison between dual-wavelength and software phase unwrapping

Figure 6.4 shows the images of porous coal samples treated with pyrolysis. Once again, the phase images at a single wavelength clearly exhibit 2π phase steps (Figure 6.4b and Figure 6.4c), while the dual-wavelength unwrapped phase map in Figure 6.4d shows very few discontinuities. The parts of the images where a discontinuity is still present correspond to low-reflectivity areas of the sample hologram, where no interference pattern is visible. Consequently, the phase there is essentially random noise, which gives rise to multiple 2π phase jumps. These areas can generally be identified as deeper pores. With this method, pores with a lateral size on the order of a micron can be identified.

It is worth noting that unwrapping the single-wavelength phase images using conventional algorithms is very problematic, as illustrated in Figure 6.4e and Figure 6.4f. A typical software unwrapping algorithm starts at a certain point of an image and moves along a 1D path (e.g. a straight line or a spiral). If it encounters what looks like a phase wrap, it shifts the map down or up. If the image has noisy areas (the corners in Figure 6.4e and Figure 6.4f), where the phase oscillates randomly, the software algorithm takes them as real features and creates nonexistent steps in the phase/height profile, which clearly do not correspond to the real height profile of the sample.
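This failure mode is easy to reproduce. Below is a sketch of a line-scanning unwrapper (an illustration, not the specific algorithm behind Figures 6.4e and 6.4f): on a clean wrapped ramp it recovers the true phase exactly, but a single noisy pixel that lands just past the 2π detection threshold creates a nonexistent step that propagates to the end of the line.

```python
import math

def unwrap_line(phase):
    """Scan along one line, accumulating a 2*pi offset whenever the jump
    between neighbouring pixels looks like a phase wrap."""
    out = [phase[0]]
    offset = 0.0
    for prev, cur in zip(phase, phase[1:]):
        d = cur - prev
        if d > math.pi:
            offset -= 2 * math.pi
        elif d < -math.pi:
            offset += 2 * math.pi
        out.append(cur + offset)
    return out

true = [0.05 * i for i in range(300)]             # smooth ramp up to ~15 rad
wrapped = [t % (2 * math.pi) for t in true]
recovered = unwrap_line(wrapped)                  # matches `true`

noisy = wrapped[:]
noisy[150] += 3.14            # one noisy pixel just past the wrap threshold
bad = unwrap_line(noisy)      # everything after pixel 150 shifts by -2*pi
```

The spike triggers the wrap test on the way up but not on the way down, so the spurious 2π offset is never cancelled, exactly the kind of nonexistent step visible in the corners of Figure 6.4e and Figure 6.4f.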


Figure 6.4: Images of a porous coal sample: (a) amplitude image; phase maps reconstructed at (b) λ1=532 nm and (c) λ2=633 nm; (d) 3D rendering of the dual-wavelength phase map; software unwrapped phase maps reconstructed at (e) λ1=633 nm and (f) λ2=532 nm for comparison. All image sizes are 98x98 microns. The vertical scales of the phase maps are in radians.


6.4 The application of the fine map algorithm

Figure 6.5 shows the images of an activated porous coal in a resin sample. The areas on the left and right sides of the image are too dark, but the central region is well illuminated. Two vertical scratches due to the coal-cutting process are clearly visible. Figure 6.5a and Figure 6.5b represent the coarse and fine phase maps respectively. A noise reduction by a factor of five is apparent by comparing the height profiles in Figure 6.5c and Figure 6.5d, taken from the coarse and fine maps respectively. For further comparison, the single-wavelength phase map and the same line profile are shown in Figure 6.5e and Figure 6.5f respectively.

The "spots" in Figure 6.5c and Figure 6.5d, as well as the spike around 120 μm in Figure 6.5f, result from the high noise level in the single-wavelength images. Consequently, some of the wavelength segments are erroneously shifted by a wavelength. Artifacts like these are rare and do not prevent us from obtaining an accurate 3D picture of the sample. From the depth information available in the fine map (Figure 6.5d), one can see, for example, that the depth of a small feature (the scratch located around 75 μm from the left in all the line profiles) is about 100 nm, which is again consistent with the single-wavelength phase map (Figure 6.5f). Obviously, the single-wavelength map possesses multiple artifacts due to phase wrapping.


Figure 6.5: Images of a porous coal sample: (a) coarse phase map, (b) fine phase map and line profiles for the (c) coarse and (d) fine maps. For comparison, (e) single-wavelength phase map at λ=633 nm and (f) its line profile (the image sizes are 138x138 μm2 and the vertical scales of the phase images are in radians).


CHAPTER 7

CELL IMAGING

7.1 Experiment

The dual-wavelength phase-imaging technique was also used to quantitatively study the three-dimensional structure of cells. We have obtained 3D images of SKOV-3 ovarian cancer cells with diffraction-limited lateral resolution and axial resolution on the order of 5 nm. The cells display intracellular features with sufficient clarity to measure the thickness of the cell's lamellipodium and observe the features of its nucleus. A similar study had been previously done using a single wavelength and software phase unwrapping [47].

Figure 7.1 shows the experimental apparatus. It is similar to the design presented in Figure 6.1. The only major difference is that here the wave fronts in both reference arms remain spherical and the resulting curvature mismatch removal is entirely numerical. Once again, a relative angle can be introduced between the object and each of the two reference beams by slightly tilting the reference arm mirrors. By introducing different tilts in two orthogonal directions for the two reference beams, we can separate each spectral component in Fourier space, which allows us to capture both wavelengths simultaneously. The actual interferometers are shown in Figure 7.2.
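The separation of the two wavelengths by distinct reference tilts can be illustrated with a one-dimensional toy model (the real holograms are two-dimensional, with the tilts chosen along orthogonal directions; the carrier frequencies and fringe phases below are arbitrary):

```python
import cmath, math

N = 128
f1, f2 = 10, 25     # carrier frequencies produced by the two reference tilts
# One camera frame containing both fringe patterns superposed
frame = [math.cos(2 * math.pi * f1 * n / N + 0.3) +
         math.cos(2 * math.pi * f2 * n / N + 1.1) for n in range(N)]

def dft(x):
    """Direct discrete Fourier transform (O(N^2), fine for a demo)."""
    n_pts = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / n_pts)
                for n in range(n_pts)) for k in range(n_pts)]

spectrum = [abs(c) for c in dft(frame)]
# The two first-order components sit at distinct, well-separated frequencies,
# so each can be isolated with its own band-pass filter before reconstruction.
peaks = sorted(sorted(range(1, N // 2), key=lambda k: spectrum[k])[-2:])
```

Because the two carriers occupy different regions of the spectrum, a single captured frame contains both holograms, which is what makes the simultaneous single-shot acquisition possible.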


Figure 7.1: Dual-wavelength digital holography setup for imaging of cells. The focal lengths of the lenses L21 and L22 are 17.5 cm and 10 cm respectively. The beams are collimated between L11 and L21 and between L12 and L22, and are again collimated after the 20x microscope objective OBJ1.


Figure 7.2: Dual interferometers.


7.2 SKOV-3 ovarian cancer cells

Here, we have applied the dual-wavelength phase imaging method to 3D imaging of SKOV-3 ovarian cancer cells. Figure 7.3 shows a confluent group of cells: Figure 7.3a shows the intensity image, which is similar to what one would see using an ordinary microscope, while Figure 7.3b displays a single-wavelength wrapped phase image, and Figure 7.3c shows the coarse dual-wavelength unwrapped phase image. Finally, Figure 7.3d displays a 3D rendering of the final fine map, where we see the cells connecting together with grooves between them. The area at the bottom of the images is the exposed part of the gold substrate, to which the cells are bound. The measurements of the optical thickness of the cells can then be performed using equation 3.3. One also needs to make an assumption about the cells' refractive index, which we took to be 1.375. While it may not be possible to precisely determine the refractive index of the cell at each individual point, this number is always close to the refractive index of water and unlikely to deviate by more than a few percent.

Figure 7.4 shows the image of a single SKOV-3 cell, where the cell's nucleus and pseudopodia are clearly seen. Once again, by using the phase-to-thickness conversion, we can easily determine the 3D features of the cell. In addition to the phase images for a single wavelength (Figure 7.4a), the coarse map (Figure 7.4b) and a 3D rendering of the fine map (Figure 7.4c), Figure 7.4d displays the line thickness profile, which indicates, for example, that the overall cell height is about 1.47 μm. A separate measurement indicates that the thickness of the cell's pseudopodia (lamellipodia) is around 270 nm.
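Equation 3.3 is not reproduced in this chapter; purely as an illustration, the sketch below assumes a double-pass (reflection off the substrate) relation Δφ = 4π(n_cell − n_medium)t/λ, with the medium index 1.33 (an assumption, near water) and the cell index 1.375 from the text:

```python
import math

n_cell = 1.375     # assumed cell refractive index (value from the text)
n_medium = 1.33    # assumed index of the surrounding medium (near water)
lam = 532e-9       # wavelength, m

def phase_to_thickness(dphi, lam, n_cell, n_medium):
    """Physical thickness from unwrapped phase, assuming the beam crosses
    the cell twice on reflection from the substrate:
    dphi = 4*pi*(n_cell - n_medium)*t / lam."""
    return dphi * lam / (4 * math.pi * (n_cell - n_medium))

# Round trip for a cell of the height quoted in the text (~1.47 um)
t_true = 1.47e-6
dphi = 4 * math.pi * (n_cell - n_medium) * t_true / lam
t_back = phase_to_thickness(dphi, lam, n_cell, n_medium)
```

Note that the recovered thickness is inversely proportional to the assumed index difference, which is why the refractive-index assumption matters for quantitative cell measurements.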


Figure 7.3: Confluent SKOV-3 ovarian cancer cells: (a) amplitude image, (b) reconstructed phase for λ=532 nm, (c) dual-wavelength coarse phase image and (d) 3D rendering of the fine map. All images are 92x92 μm2 (240x240 pixels).


Figure 7.4: A single SKOV-3 cell: (a) reconstructed phase for λ=633 nm, (b) dual-wavelength coarse phase image, (c) 3D rendering of the fine map and (d) line thickness profile. All images are 63.5x59 μm2 (165x153 pixels).


Finally, the image in Figure 7.5 shows a different confluent area of the same sample. Once again, the phase images generated using one wavelength clearly exhibit a number of 2π phase steps (see Figure 7.5b), while the dual-wavelength unwrapped phase map in Figure 7.5c shows a few spots where discontinuities are still present. These spots correspond to the lower-intensity areas of the sample where no interference fringes were obtained. As a result, the phase there is random noise, which gives rise to multiple 2π phase steps. The images in Figure 7.5d and Figure 7.5e show the results of optical and software unwrapping respectively. Since this phase image has a noisy area, where the phase oscillates randomly, the software algorithm took it as a real feature and created a step in the phase profile (upper right corner of Figure 7.5e), which clearly does not correspond to the real thickness profile of the sample.


Figure 7.5: Comparison between optical and software unwrapping: (a) amplitude image; (b) single-wavelength phase image, (c) coarse map, (d) 3D rendering of the dual-wavelength fine phase map and (e) software unwrapped phase map. Images are 98x98 μm2 (256x256 pixels).


CHAPTER 8

CONCLUSIONS

8.1 Laser scanning microscope

The concept of the laser scanning microscope with a position-sensitive detector, demonstrated here, has a number of potential advantages. The resolution-determining parameter is the actual size of the illuminating scanning beam, not the ability of the detector to receive and resolve the image through optics. The optical system is used only to focus the laser beam to an axial point. Therefore, the system is free from all non-longitudinal aberrations such as coma, astigmatism, field curvature, and distortion.

If both the lateral resolution and the depth of focus of this system are improved (by means of a stronger telescopic system or by using Bessel beams), the system can be very effective in capturing images with submicron lateral resolution, while maintaining quite a substantial depth of focus. The position-sensitive detector is only one of the possible detectors that can be used; in fact, there is much freedom in the choice of detector. For example, weak or polarization-sensitive fluorescence can be detected by a lock-in technique and the use of appropriate polarization elements in the system.

Faster scanning using an acousto-optic modulator or micro-electromechanical scanners can be considered to achieve video-rate acquisition. In addition to improving the scanning beam characteristics, we can also look at the possibility of collecting light with


higher efficiency via an optical fiber. Although the signal-to-noise ratio was acceptable for the samples that we have looked at, in order to make the system more versatile, adding a high-NA fiber, both to increase the amount of light that reaches the detector and to make it easier to collect light from different angles, can be advantageous. Also, by inserting a beamsplitter between the laser and the scanning system, and capturing the reflected image, the system can be configured as a confocal microscope. Finally, our system can also be used to scan in color. For that purpose, three lasers can be employed to simultaneously scan the sample, while three detectors collect light for the RGB components.

In addition to all that, the system is very inexpensive even in comparison to a simple microscope, can be made very compact, and does not require complicated sample preparation. In theory, the method can be an optical analog of SEM capable of generating high-quality images over a significant depth of focus. Although the lateral resolution is limited to the micron range, the samples can be imaged in their natural state and the imaging system is simple and cost-effective.

8.2 Dual-wavelength digital holography

The dual-wavelength phase imaging digital holography technique proved to be a powerful method of 3D imaging with the 2π-ambiguity resolved. Its application to the detection and study of pores in coal samples has been demonstrated. In comparison to software unwrapping, the dual-wavelength optical unwrapping method is advantageous, as it requires no intensive computational procedures and can handle complex phase topologies. The method provides high-resolution, accurate quantitative profiles of surfaces and can be an effective tool for studying small- and large-scale 3D features of many natural and


manmade samples. The use of two wavelengths together with the fine map algorithm allows us to increase the maximum height of the features which can be imaged, while keeping the noise low (a few nanometers).

The selection of two wavelengths which are closer to each other increases the axial range, but also increases the noise to the levels where the fine map algorithm begins to fail. In order to further increase the axial range and still keep the final noise levels low, the same procedure can be applied at three or more wavelengths. Furthermore, it is possible to use a tunable laser to iteratively increase the range while reducing the noise to the desired levels.

We have also demonstrated the application of digital holography to studying cells. The accuracy and the level of detail of the dual-wavelength images of cells presented here are superior to what has been previously demonstrated. The proposed method of curvature correction is simple and effective enough to easily implement the experiment without microscope objectives in the reference arms of the Michelson interferometer. This greatly simplifies the optical setup and makes it much easier to do the initial adjustments of the apparatus. The simultaneous dual-wavelength setup, utilized together with the angular spectrum algorithm, provides an easy way to acquire single-frame images in real time, which can be used to study cell migration.

The results of this work have been published in several journals and conference proceedings (see Appendix G).


REFERENCES

[1] D.A. Bonnell, ed., Scanning Probe Microscopy and Spectroscopy: Theory, Techniques, and Applications (Wiley, N.Y., 2001)

[2] P.C. Cheng, T.H. Lin, W.L. Wu, J.L. Wu, eds., Multidimensional Microscopy (Springer-Verlag, N.Y., 1994)

[3] A.D.L. Humphris, M.J. Miles, J.K. Hobbs, "A Mechanical Microscope: High-speed atomic force microscopy," Applied Physics Letters 86, 034106 (2005)

[4] T.R. Corle and G.S. Kino, Confocal Scanning Optical Microscopy and Related Imaging Systems (Academic Press, San Diego, 1996)

[5] F. Zernike, "Phase contrast, a new method for the microscopic observation of transparent objects," Physica 9, 686-698 (1942)

[6] F. Zernike, "Phase contrast, a new method for the microscopic observation of transparent objects Part II," Physica 9, 974-986 (1942)

[7] R.D. Allen, G.B. David, and G. Nomarski, "The Zeiss-Nomarski differential interference equipment for transmitted-light microscopy," Z. Wiss. Mikrosk. 69, 193-221 (1969)

[8] E.D. Barone-Nugent, A. Barty, K.A. Nugent, "Quantitative phase amplitude microscopy I: Optical Microscopy", J. Microscopy 206, 194-203 (2002)


[9] N. Warnasooriya, M.K. Kim, "LED-based multi-wavelength phase imaging interference microscopy", Optics Express 15, 9239-9247 (2007)

[10] P. Marquet, B. Rappaz, T. Colomb, F. Charriere, J. Kuhn, Y. Emery, E. Cuche, C. Depeursinge, P. Magistretti, "Digital holographic microscopy, a new optical imaging technique to investigate cellular dynamics," Biophotonics and New Therapy Frontiers, R. Grzymala and O. Haeberle, eds., Proc. SPIE 6191, 61910U (2006)

[11] A. Khmaladze and M.K. Kim, "Scanning Photon Microscopy," Proc. SPIE 5873, 156-162 (2005)

[12] J.R. Lewis, M. Holton, M. Kykta, A. Malik, F. Metting, C. Ryerson, C. Wiklof, and J. Xu, "Scanned Beam Medical Imager," Proc. SPIE 5348, 40-51 (2004)

[13] A. Yariv, Quantum Electronics (Wiley, N.Y., 1975)

[14] W. Chi and N. George, "Electronic Imaging Using a Logarithmic Asphere," Opt. Lett. 26, 875-877 (2001)

[15] Z. Ding, H. Ren, Y. Zhao, J.S. Nelson, and Z. Chen, "High-resolution optical coherence tomography over a large depth range with an axicon lens," Opt. Lett. 27, 243-245 (2002)

[16] J. Durnin, J.J. Miceli and J.H. Eberly, "Comparison of Bessel and Gaussian Beams," Opt. Lett. 13, 79-80 (1988)

[17] B. Hafizi and P. Sprangle, "Diffraction Effects in Directed Radiation Beams," J. Opt. Soc. Am. A 8, 705-717 (1991)

[18] R. Arimoto, C. Saloma, T. Tanaka, S. Kawata, "Imaging Properties of Axicon in a Scanning Optical System," Appl. Opt. 31, 6653-6657 (1992)


[19] V. Jarutis, R. Paskauskas, A. Stabinis, "Focusing of Laguerre-Gaussian Beams by Axicon," Optics Communications 184, 105-112 (2000)

[20] D. Gabor, "A New Microscope Principle", Nature 161, 777-778 (1948)

[21] E. Leith and J. Upatnieks, "Reconstructed Wavefronts and Communication Theory", J. Opt. Soc. Am. 52, 1123 (1962)

[22] R.L. Powell, K.A. Stetson, "Interferometric Vibration Analysis by Wavefront Reconstruction," J. Opt. Soc. Am. 55, 1593-1598 (1965)

[23] J.W. Goodman and R.W. Lawrence, "Digital Image Formation from Electronically Detected Holograms," Applied Physics Letters 11, 77-79 (1967)

[24] M.A. Kronrod, N.S. Merzlyakov, L.P. Yaroslavsky, "Reconstruction of a Hologram with a Computer," Soviet Physics-Technical Physics 17, 333-334 (1972)

[25] W. Jueptner and U. Schnars, Digital Holography (Springer Verlag, 2004)

[26] U. Schnars, "Direct phase determination in hologram interferometry with use of digitally recorded holograms", J. Opt. Soc. Am. A 11, 2011-2015 (1994)

[27] U. Schnars and W.P. Jueptner, "Direct recording of holograms by a CCD target and numerical reconstruction", Appl. Opt. 33, 179-181 (1994)

[28] U. Schnars and W.P. Jueptner, "Digital recording and numerical reconstruction of holograms", Meas. Sci. Technol. 13, R85-R101 (2002)

[29] J.W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, New York, 1996)

[30] S. Grilli, P. Ferraro, S. De Nicola, A. Finizio, G. Pierattini and R. Meucci, "Whole optical wavefields reconstruction by digital holography", Opt. Express 9, 294-302 (2001)


[31] L.F. Yu and L.L. Cai, "Iterative algorithm with a constraint condition for numerical reconstruction of a three-dimensional object from its hologram", J. Opt. Soc. Am. A 18, 1033-1045 (2001)

[32] K. Matsushima, H. Schimmel and F. Wyrowski, "Fast calculation method for optical diffraction on tilted planes by use of the angular spectrum of plane waves", J. Opt. Soc. Am. A 20, 1755-1762 (2003)

[33] A. Barty, K.A. Nugent, D. Paganin and A. Roberts, "Quantitative optical phase microscopy", Opt. Lett. 23, 817-819 (1998)

[34] E. Cuche, F. Bevilacqua and C. Depeursinge, "Digital holography for quantitative phase-contrast imaging", Opt. Lett. 24, 291-293 (1999)

[35] L. Xu, X. Peng, J. Miao and A.K. Asundi, "Studies of digital microscopic holography with applications to microstructure testing", Appl. Opt. 40, 5046-5051 (2001)

[36] W.S. Haddad, D. Cullen, J.C. Solem, J.W. Longworth, A. McPherson, K. Boyer and C.K. Rhodes, "Fourier-transform holographic microscope", Appl. Opt. 31, 4973-4978 (1992)

[37] W. Xu, M.H. Jericho, I.A. Meinertzhagen and H.J. Kreuzer, "Digital in-line holography for biological applications", Proc. Natl. Acad. Sci. USA 98, 11301-11305 (2001)

[38] P. Ferraro, S. De Nicola, A. Finizio, G. Coppola, S. Grilli, C. Magro and G. Pierattini, "Compensation of the inherent wave front curvature in digital holographic


coherent microscopy for quantitative phase-contrast imaging", Appl. Opt. 42, 1938-1946 (2003)

[39] M. Born and E. Wolf, Principles of Optics (Pergamon, 1964)

[40] J. Gass, A. Dakoff, and M.K. Kim, "Phase imaging without 2π-ambiguity by multiple-wavelength digital holography," Opt. Lett. 28, 1141-1143 (2003)

[41] W. Branch and H.J. Mesa, "Caracterización de poros de carbones tratados térmicamente empleando procesamiento digital de imágenes y microscopía asistida por computador" [Characterization of pores in thermally treated coals using digital image processing and computer-assisted microscopy], Avances en Sistemas e Informática 1 (2004)

[42] K. Tobin and P. Bingham, "Optical Spatial Heterodyned Interferometry for Applications in Semiconductor Inspection and Metrology", Proc. SPIE 6162 (2005)

[43] J. Kühn, T. Colomb, F. Montfort, F. Charrière, Y. Emery, E. Cuche, P. Marquet, C. Depeursinge, "Real-time dual-wavelength digital holographic microscopy with a single hologram acquisition", Opt. Express 15, 7231-7242 (2007)

[44] P. Ferraro, L. Miccio, S. Grilli, M. Paturzo, S. De Nicola, A. Finizio, R. Osellame, P. Laporta, "Quantitative Phase Microscopy of microstructures with extended measurement range and correction of chromatic aberrations by multiwavelength digital holography", Opt. Express 15, 14591-14600 (2007)

[45] D. Parshall and M.K. Kim, "Digital holographic microscopy with dual wavelength phase unwrapping", Appl. Opt. 45, 451-459 (2006)

[46] M.K. Kim, L. Yu, and C.J. Mann, "Digital holography and multi-wavelength interference techniques", Chap. 2 in Digital Holography and Three-Dimensional Display, T.C. Poon, ed. (Springer, 2006)


[47] C.J. Mann, L. Yu, C.M. Lo and M.K. Kim, "High-resolution quantitative phase-contrast microscopy by digital holography," Opt. Express 13, 8693-8698 (2005)

[48] J.H. McLeod, "The axicon: a new type of optical element," J. Opt. Soc. Am. 44, 592-597 (1954)

[49] J.H. McLeod, "Axicons and their uses," J. Opt. Soc. Am. 50, 166-169 (1960)




APPENDIX A

LASER SPOT SIZE

In order to measure the laser spot size on the sample, the beam was gradually blocked using a sharp blade attached to a computer-controlled precision moving stage. The overall intensity was then measured by a power meter (Newport model 1815-c). If the beam profile is assumed to be Gaussian, the blocking of part of it can be simulated (see Figure A.1) to obtain the percentage of the beam that passes through when everything up to the beam waist radius is transmitted.

Thus, when the detector registers 97.72% of the total intensity, the beam is blocked at its waist radius. When the detector registers 50% of the total intensity, half the beam is blocked, which corresponds to the center of the beam. The relative distance the blade needs to be moved to pass between these two intensity levels is then equal to the beam spot size.

The laser beam waist size and location were measured by simply measuring the size of the laser spot in three places, as shown in Figure A.2. The experimental values of the beam spot size were then compared to the theoretical values (Figure A.3). Thus, the beam waist is located on the output mirror and its size is 360 microns.
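The two threshold percentages can be checked against the error function. Figure A.1 used Mathematica; the following is an equivalent minimal sketch:

```python
import math

def knife_edge_fraction(x, w):
    """Fraction of the total power of a Gaussian beam (1/e^2 radius w,
    centred at 0) that passes a knife edge blocking everything below x.
    Integrating exp(-2 y^2 / w^2) from x to infinity gives an erf."""
    return 0.5 * (1.0 - math.erf(math.sqrt(2.0) * x / w))

w = 1.0
# Edge at -w: blocked exactly up to the beam waist radius -> 97.72% passes
p_waist = knife_edge_fraction(-w, w)
# Edge at the centre: half the power passes
p_half = knife_edge_fraction(0.0, w)
# The blade travel between these two readings equals the beam radius w
```

This confirms the 97.72% and 50% criteria used above: the blade displacement between the two power-meter readings directly yields the 1/e² beam radius.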


Finally, Figure A.4 shows the calculation of the final beam spot size (as projected onto the sample) and the scanning parameters for a galvanometer-based scanning mirror and a telescopic system.
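The theoretical curve compared against in Figure A.3 follows the standard Gaussian-beam propagation law. A sketch is given below; treating the quoted 360 micron "size" as the waist diameter (radius 180 μm) is an assumption of this illustration.

```python
import math

def spot_radius(z, w0, lam):
    """1/e^2 beam radius at distance z from a Gaussian waist of radius w0:
    w(z) = w0 * sqrt(1 + (z / z_R)^2)."""
    z_r = math.pi * w0 ** 2 / lam       # Rayleigh range
    return w0 * math.sqrt(1.0 + (z / z_r) ** 2)

lam = 633e-9        # He-Ne wavelength, m
w0 = 180e-6         # waist radius, m (assuming the 360 um figure is a diameter)

# At the waist the radius is w0; far from the waist it approaches the
# linear divergence lam * z / (pi * w0)
w_far = spot_radius(10.0, w0, lam)
w_asym = lam * 10.0 / (math.pi * w0)
```

With these numbers the Rayleigh range comes out to roughly 16 cm, so spot measurements taken tens of centimeters apart, as in Figure A.2, are enough to locate the waist.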


Figure A.1: Mathematica simulation of the Gaussian beam.


Figure A.2: Measuring the laser spot in three places.


Figure A.3: Mathematica simulation of the beam profile.


Figure A.4: Mathematica simulation of the final beam profile.


APPENDIX B

LASER SCANNING MICROSCOPE LABVIEW PROGRAM

Figure B.1 below shows the main screen of the program used to acquire images with the laser scanning microscope.

PAGE 104

95 Figure B.1: Main screen of laser scanni ng microscope Labview program.


APPENDIX C AXICON

The word "axicon" was first proposed by J. McLeod [48-49] to characterize any figure of revolution that, by reflection, refraction, or both, bends light from a point source on the axis of revolution so that it crosses the axis not at a single point, as would be the case for a lens, but along a continuous line of points extending over a substantial portion of the axis. Thus, the definition of an axicon is not limited to the conical lens, but includes many rotationally symmetric structures, such as a circular slit.

In the case of on-axis illumination by a Gaussian beam, an axicon produces a kind of interference pattern that can be thought of as non-diffractive. The idea is that the solution of the wave equation in this case is separable into two parts, one depending only on the transverse coordinate and the other on the axial coordinate. The intensity distribution can then be expressed in terms of the Bessel function. While the beam propagates along the axial direction, energy flows in and out of the central maximum. In the case of the (ideal) Bessel beam, this does not result in an overall increase of the size of the central maximum. One should point out, however, that if the central maximum is isolated, it spreads out as quickly as a Gaussian beam of the same waist.

Although for the ideal (infinite aperture) Bessel beam the size of the central maximum stays the same indefinitely, for a non-ideal Bessel beam it does not. In fact, it has been shown both theoretically [16] and experimentally [17] that the maximum intensity of the Bessel beam oscillates increasingly along the axial direction before rapidly decaying when it reaches the geometrical shadow [18] (Figure C.1).

The conical surface of an axicon generates the Bessel beam under on-axis illumination. As was shown before, a useful approximation exists to calculate the intensity distribution produced by an axicon (Equation 27 from [19]). The Mathematica routine shown in Figure C.2 was written to try out different wedge angles and see what the resulting spot size and depth of focus would be. The results of this simulation were consistent with the theory outlined in [16] and [19]. The basic conclusion is that a smaller beam spot size (the size of the central maximum) requires a smaller cone angle, while a longer depth of focus requires a bigger cone angle. Running the simulation with realistic parameters, it was concluded that for a sharp (approximately 1 micron) first peak and a still relatively long (11 mm) depth of focus, an axicon with a cone angle of 140 degrees could be used (see Figure C.3).

Unfortunately, the images obtained with the axicon exhibited rather poor contrast. The intensity distribution projected onto the object has a clearly defined zero-order diffraction peak. When this peak is used for scanning, it forms a bright line on the object, which is clearly visible with the naked eye (see Figure C.4). However, looking at the intensity profile in two dimensions and integrating, the fraction of the intensity in the central peak was calculated to be only 1.4% of the total energy. As a result, since our non-imaging detector collects the total energy, there was very little light for it to work with.


Figure C.1: The depth of focus of the Bessel beam. L is the "focal length" of the axicon.

Figure C.2: Mathematica simulation of axicon parameters.


Figure C.3: The axicon illuminated by a Gaussian beam (left) and the theoretical intensity distribution in the focal plane (right).

Figure C.4: The intensity distribution in the focal plane of the axicon (left) and a zoom of the central peak (right). The units of the x- and y-axes are meters.


APPENDIX D THEORY OF DUAL-WAVELENGTH PHASE IMAGING

The principle of multi-wavelength phase imaging presented here is from reference [40]. Suppose we image a sloped mirror, which will appear as a tilted plane with multiple discontinuities (see Figure 5.1). Let us assume that the overall height of this plane is h = 5.0 μm. Figures D.1 (a) and D.1 (b) display the phase maps φ1 and φ2 of the tilted object using the wavelengths λ1 = 532 nm and λ2 = 633 nm, respectively. The phase maps contain multiple 2π discontinuities wherever the height is a multiple of the wavelength.

Subtraction of the two phase maps φ1 and φ2 in Figures D.1 (a) and D.1 (b) results in a new phase map φ12 = φ1 − φ2, as shown in Figure D.1 (c). Adding 2π to the phase map in Figure D.1 (c) wherever φ12 < 0 produces a new phase map φ12'(x) = φ12 + 2π (where φ12 < 0), which has a longer range free of discontinuities and thus an extended axial range (Figure D.1 (d)). The new phase map is equivalent to that of a "beat wavelength" Λ12, which, for λ1 = 532 nm and λ2 = 633 nm, is found as

    Λ12 = λ1 λ2 / |λ1 − λ2| = 3.33 μm    (D.1)


Figure D.1: Dual-wavelength phase imaging digital holography. (a) Phase map φ1(x) for λ1 = 532 nm; (b) phase map φ2(x) for λ2 = 633 nm; (c) difference phase map φ12(x) = φ1 − φ2; (d) coarse map Z12(x) with beat wavelength Λ12 = 3.33 μm.


However, by amplifying the range, any phase noise is amplified as well, by the same factor. Suppose the single-wavelength phase maps φm(x) contain phase noise 2π εm, where εm ~ 2%. Then the corresponding surface profiles Zm(x) contain a noise level of

    εm λm ~ 12 nm    (D.2)

The noise in the difference phase map φ12(x) is then

    2π ε12 = 2π (ε1 + ε2)    (D.3)

and the corresponding noise in the surface profile Z12(x) is

    ε12 Λ12 ~ 130 nm    (D.4)

Thus, the noise has in effect been amplified approximately by a factor of 2Λ12/λm, as one can see in the coarse map Z12(x) in Figure D.1 (d) when compared to the single-wavelength phase maps in Figure D.1 (a) or Figure D.1 (b).

Now, by using the fine map algorithm, it is possible to reduce the noise in Z12(x) back to the level of the single-wavelength phase maps. The coarse profile Z12(x) shown in Figure D.1 (d) can be divided into integer multiples of either of the wavelengths to produce a new profile Z12'(x), as illustrated in Figure D.2 (a). Using λ1, the new profile is defined as

    Z12'(x) = λ1 int( Z12(x) / λ1 )    (D.5)

One can then paste segments of the single-wavelength profile onto this profile to obtain the profile Z12''(x) shown in Figure D.2 (b), such that


Figure D.2: Fine map generation. (a) Z12'(x), obtained by dividing Z12(x) into integer multiples of λ1; (b) Z12''(x), where Z1(x) is pasted onto Z12'(x); (c) Za, derived from comparing Z12''(x) with the coarse map Z12(x); (d) Zb, resulting from the addition or subtraction of λ1 in Za to remove spikes; (e) the fine map.


    Z12''(x) = Z12'(x) + Z1(x)    (D.6)

At the boundaries of the wavelength intervals, the noise in the single-wavelength phase map causes numerous jumps of size λ1, as displayed in Figure D.2 (c). If the noise level is not excessive, most of these spikes can be removed by simply comparing Z12''(x) with the coarse map Z12(x) (see Figure D.2 (d)): if the difference is more than λ1/2, then λ1 is either added or subtracted, depending on the sign of the difference. Figure D.2 (e) shows the final result of the fine map procedure, where the noise level is approximately the same as that of Z1(x), the single-wavelength profile, at around 12 nm. The remaining spikes in the map occur at places where the error of the coarse map exceeds one half of λ1. In this work, these remaining spikes are removed by a software routine that scans over the image and removes these λ1-high artifacts. It is estimated [40] that the maximum noise level at which the method still works properly is given approximately by

    εm ~ λm / (4 Λ12) ≈ 4%    (D.7)

If the noise levels are not too high, the phase-unwrapping technique can be further extended to an iterative procedure with three or more wavelengths, which would yield an even bigger axial range.


APPENDIX E DUAL-WAVELENGTH DIGITAL HOLOGRAPHY LABVIEW PROGRAMS

During the course of this work, a number of Labview programs were developed to both acquire and process the holographic images. Figure E.1 below shows the main screen of the program for dual-wavelength digital holography. It incorporates both wavelength images with curvature corrections, Fourier transforms with multiple filtering capabilities, phase images, and coarse and fine maps.

Figure E.2 shows the diagram of the dual-wavelength phase unwrapping. Figure E.3 shows the diagram of the curvature correction Labview routine. This routine is called from the main program, with the set of parameters specified by the user on the main screen (Figure E.1).


Figure E.1: Main screen of the dual-wavelength Labview program.

Figure E.2: Dual-wavelength phase unwrapping Labview diagram.

Figure E.3: Curvature correction Labview diagram.


APPENDIX F DUAL-WAVELENGTH DIGITAL HOLOGRAPHY C PROGRAMS

The printouts of all C routines developed to increase the speed of the FFT in the angular spectrum method are shown below.

// spectrum.cpp : Defines the entry point for the DLL application.
/* Call Library source file */
//#include "extcode.h"
#include "stdafx.h"
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include "fftn.h"

#undef REAL
#define REAL double
#define P2 6.283185307179586476925286766559

/* Typedefs */
typedef struct {
    double re, im;
} cmplx128;

typedef struct {
    long dimSizes[2];
    cmplx128 Numeric[1];
} TD2;
typedef TD2 **TD2Hdl;

typedef struct {
    double dxUm;
    double dyUm;
    TD2Hdl array2Dc;
} TD1;

typedef struct {
    double lambdaUm;
    double ZUm;
} TD3;

typedef struct {
    long x0;
    long y0;
} TD4;


__declspec( dllexport ) void angspec(TD1 *HH2Dc, TD3 *diffractParam, TD1 *EE2DcDll,
    TD1 *filter2Dc, TD4 *shift, TD1 *FF2DcDll);

__declspec( dllexport ) void angspec(TD1 *HH2Dc, TD3 *diffractParam, TD1 *EE2DcDll,
    TD1 *filter2Dc, TD4 *shift, TD1 *FF2DcDll)
{
    void shift_map(int *, int, int, int, int);
    double *grid(int, int, double);

    // *EE2DcDll = *HH2Dc;
    // *FF2DcDll = *HH2Dc;
    // TD2Hdl arrayc;
    double *arre, *arim;
    int *map_fourier, *map_filter;
    int i, j;
    int ret;
    int dims[2]; /* pass fft dimensions */
    double k2, kz2;
    double *kx2, *ky2;
    double phase, mult;
    int numrow, numcol;
    struct CENTER { int y, x; };
    struct CENTER center;

    /* dimSizes[0] is the number of rows */
    numrow = (*(HH2Dc->array2Dc))->dimSizes[0];
    numcol = (*(HH2Dc->array2Dc))->dimSizes[1];
    center.y = (int)(numrow / 2);
    center.x = (int)(numcol / 2);
    FF2DcDll->dxUm = P2 / numrow / HH2Dc->dxUm;
    FF2DcDll->dyUm = P2 / numcol / HH2Dc->dyUm;
    k2 = P2 * P2 / (diffractParam->lambdaUm) / (diffractParam->lambdaUm);
    // numrow = HH2Dc->array2Dc.dimSizes[0]; // *array2Dc->dimSizes[0];
    /* dimSizes[1] is the number of columns */
    // numcol = TD1->array2Dc->dimSizes[1];
    dims[0] = numcol; /* x,y array dimensions to pass */
    dims[1] = numrow;

/* scale one of these ways: */
/* this is what MATLAB does, but it was then adjusted in Labview, so see below
#define FORWARD_SCALE 0.0
#define INVERSE_SCALE -1.0
*/
//#define FORWARD_SCALE -1.0
//#define INVERSE_SCALE 0.0
#define FORWARD_SCALE -2.0
#define INVERSE_SCALE -2.0

    /* 2D forward fft */
    fflush(stdout);

    /* Create separate real/imaginary arrays and shifting map */
    arre = (double *)malloc(numrow * numcol * sizeof(double));
    arim = (double *)malloc(numrow * numcol * sizeof(double));
    // kx2 = (double *)malloc(numcol * sizeof(double)); // check row and col
    // ky2 = (double *)malloc(numrow * sizeof(double));
    map_fourier = (int *)malloc(numrow * numcol * sizeof(int));
    map_filter = (int *)malloc(numrow * numcol * sizeof(int));


    for (i = 0; i < numrow; i++) { /* copy data into the arrays */
        for (j = 0; j < numcol; j++) {
            arre[(i * numcol) + j] = (*(HH2Dc->array2Dc))->Numeric[(i * numcol) + j].re;
            arim[(i * numcol) + j] = (*(HH2Dc->array2Dc))->Numeric[(i * numcol) + j].im;
        }
    }

    /* Call to FFT fwd transform, separate real/imaginary arrays */
    ret = fftn(2, dims, arre, arim, 1, FORWARD_SCALE);
    arre[0] = arim[0] = 0; // That is how it is done in MATLAB for display purposes
    // if (ret) return 1;

    /* DO THE SHIFTING MAP to shift the arrays; shift is negative to match Labview */
    shift_map(map_fourier, numrow, numcol, (center.y - shift->y0), (center.x - shift->x0));
    shift_map(map_filter, numrow, numcol, shift->y0, shift->x0);

    /* ---------- now copy into FF ---------- */
    /* REMEMBER arim returned is (-arim) as it would have been in MATLAB */
    for (i = 0; i < numrow; i++) {
        for (j = 0; j < numcol; j++) {
            (*(FF2DcDll->array2Dc))->Numeric[(i * numcol) + j].re =
                arre[map_fourier[(i * numcol) + j]] *
                    (*(filter2Dc->array2Dc))->Numeric[map_filter[(i * numcol) + j]].re +
                arim[map_fourier[(i * numcol) + j]] *
                    (*(filter2Dc->array2Dc))->Numeric[map_filter[(i * numcol) + j]].im;
            (*(FF2DcDll->array2Dc))->Numeric[(i * numcol) + j].im =
                arim[map_fourier[(i * numcol) + j]] *
                    (*(filter2Dc->array2Dc))->Numeric[map_filter[(i * numcol) + j]].re +
                arre[map_fourier[(i * numcol) + j]] *
                    (*(filter2Dc->array2Dc))->Numeric[map_filter[(i * numcol) + j]].im;
        }
    }
    fflush(stdout);

    /* ---------- get kx and kz grid ---------- */
    kx2 = grid(center.x, numcol, FF2DcDll->dxUm);
    ky2 = grid(center.y, numrow, FF2DcDll->dyUm);
    shift_map(map_fourier, numrow, numcol, -center.y, -center.x);


    /* ---------- multiply by exp ---------- */
    for (i = 0; i < numrow; i++) {
        for (j = 0; j < numcol; j++) {
            kz2 = k2 - kx2[j] - ky2[i];
            if (kz2 >= 0) {
                phase = diffractParam->ZUm * sqrt(kz2);
                arre[map_fourier[(i * numcol) + j]] =
                    (*(FF2DcDll->array2Dc))->Numeric[(i * numcol) + j].re * cos(phase) -
                    (*(FF2DcDll->array2Dc))->Numeric[(i * numcol) + j].im * sin(phase);
                arim[map_fourier[(i * numcol) + j]] =
                    (*(FF2DcDll->array2Dc))->Numeric[(i * numcol) + j].re * sin(phase) +
                    (*(FF2DcDll->array2Dc))->Numeric[(i * numcol) + j].im * cos(phase);
                (*(FF2DcDll->array2Dc))->Numeric[(i * numcol) + j].re *= 2;
                (*(FF2DcDll->array2Dc))->Numeric[(i * numcol) + j].im *= 2;
            } else {
                arre[(i * numcol) + j] = 0;
                arim[(i * numcol) + j] = 0;
            }
        }
    }

    /* Call to FFT inverse transform, separate real/imaginary arrays */
    ret = fftn(2, dims, arre, arim, 1, INVERSE_SCALE); // changed -1 to 1 !!!!
    mult = 1. / numcol / numrow; // /sqrt((double) numcol*numrow);
    for (i = 0; i < numrow; i++) {
        for (j = 0; j < numcol; j++) {
            (*(EE2DcDll->array2Dc))->Numeric[(i * numcol) + j].re = arre[(i * numcol) + j] * mult;
            (*(EE2DcDll->array2Dc))->Numeric[(i * numcol) + j].im = arim[(i * numcol) + j] * mult;
        }
    }
    // if (ret) return 1;

    /* done, free up dynamically allocated memory */
    fft_free();
    free(arim);
    free(arre);
    free(kx2);
    free(ky2);
    free(map_fourier);
    free(map_filter);
}

#include "stdafx.h"
#include <stdio.h>
#include <stdlib.h>

void shift_map(int *map, int ysize, int xsize, int yshift, int xshift)
{
    int i, j;
    int index1, index2;

    while (yshift < 0) yshift += ysize;
    while (xshift < 0) xshift += xsize;
    while (yshift > ysize) yshift -= ysize;
    while (xshift > xsize) xshift -= xsize;

    for (i = 0; i < ysize; i++) { /* go over the array filling the shift map array */
        for (j = 0; j < xsize; j++) {


            index1 = i - yshift;             /* shift in y */
            if (index1 < 0) index1 += ysize; /* if out of bounds by y coordinate, wrap */
            index2 = j - xshift;             /* shift in x */
            if (index2 < 0) index2 += xsize; /* if out of bounds by x coordinate, wrap */
            map[i * xsize + j] = index1 * xsize + index2;
        }
    }
}

#include "stdafx.h"
#include <stdio.h>
#include <stdlib.h>

double *grid(int center, int size, double delta)
{
    double value;
    double *array;

    value = delta * (size * 0.5 - 1); /* this is how it is done in Labview */
    array = (double *)malloc(size * sizeof(double));
    while (size--) {
        array[size] = value * value;
        value -= delta;
    }
    return array;
}

THE FILE BELOW WAS DOWNLOADED FROM THE INTERNET AND ADAPTED FOR THIS PROJECT

/*--------------------------------*-C-*---------------------------------*
 * File: fftn.c
 * Public: fft_free (); fftn / fftnf ();
 * Private: fftradix / fftradixf ();
 * Descript: multivariate complex Fourier transform, computed in place
 *   using mixed-radix Fast Fourier Transform algorithm.
 * Fortran code by: RC Singleton, Stanford Research Institute, Sept. 1968
 *   translated by f2c (version 19950721).
 * Revisions:
 *   26 July 95  John Beale: added maxf and maxp as parameters to fftradix()
 *   28 July 95  Mark Olesen: cleaned-up the Fortran 66 goto spaghetti,
 *     only 3 labels remain; added fft_free() to provide some measure of
 *     control over allocation/deallocation; added fftn() wrapper for
 *     multidimensional FFTs; use -DFFT_NOFLOAT or -DFFT_NODOUBLE to avoid
 *     compiling that precision. Note suffix `f' on the function names
 *     indicates float precision.


 *     revised documentation
 *   31 July 95  Mark Olesen: added GNU Public License; more cleanup;
 *     define SUN_BROKEN_REALLOC to use malloc() instead of realloc() on
 *     the first pass through, apparently needed for old libc; removed
 *     #error directive in favour of some code that simply won't compile
 *     (generate an error that way)
 *   1 Aug 95  Mark Olesen: define FFT_RADIX4 to only have radix 2 and
 *     radix 4 transforms; made fftradix / fftradixf () static scope,
 *     just use fftn() instead. If you have good ideas about fixing the
 *     factors in fftn() please do so.
 *   8 Jan 95  mj olesen: fixed typos, including one that broke scaling
 *     for scaling by total number of matrix elements or the square root
 *     of same; removed unnecessary casts from allocations
 * ======================================================================*
 * NIST Guide to Available Math Software.
 * Source for module FFT from package GO.
 * Retrieved from NETLIB on Wed Jul 5 11:50:07 1995.
 * ======================================================================*
 *-----------------------------------------------------------------------*
 * int fftn (int ndim, const int dims[], REAL Re[], REAL Im[],
 *           int iSign, double scaling);
 *
 * NDIM = the total number of dimensions
 * DIMS = a vector of array sizes; if NDIM is zero then DIMS must be
 *   zero-terminated
 * RE and IM hold the real and imaginary components of the data, and
 *   return the resulting real and imaginary Fourier coefficients.
 *   Multidimensional data *must* be allocated contiguously. There is no
 *   limit on the number of dimensions.
 * ISIGN = the sign of the complex exponential (ie, forward or inverse
 *   FFT); the magnitude of ISIGN (normally 1) is used to determine the
 *   correct indexing increment (see below).
 * SCALING = normalizing constant by which the final result is *divided*;
 *   if SCALING == -1, normalize by total dimension of the transform;
 *   if SCALING < -1, normalize by the square-root of the total dimension
 *
 * example: tri-variate transform with Re[n1][n2][n3], Im[n1][n2][n3]
 *   int dims[3] = {n1,n2,n3};
 *   fftn (3, dims, Re, Im, 1, scaling);
 *-----------------------------------------------------------------------*
 * int fftradix (REAL Re[], REAL Im[], size_t nTotal, size_t nPass,
 *               size_t nSpan, int iSign, size_t max_factors,
 *               size_t max_perm);
 *
 * RE, IM - see above documentation
 * Although there is no limit on the number of dimensions, fftradix() must
 *   be called once for each dimension, but the calls may be in any order.
 * NTOTAL = the total number of complex data values
 * NPASS = the dimension of the current variable
 * NSPAN/NPASS = the spacing of consecutive data values while indexing the
 *   current variable


 * ISIGN - see above documentation
 *
 * example: tri-variate transform with Re[n1][n2][n3], Im[n1][n2][n3]
 *   fftradix (Re, Im, n1*n2*n3, n1, n1, 1, maxf, maxp);
 *   fftradix (Re, Im, n1*n2*n3, n2, n1*n2, 1, maxf, maxp);
 *   fftradix (Re, Im, n1*n2*n3, n3, n1*n2*n3, 1, maxf, maxp);
 *
 * single-variate transform, NTOTAL = N = NSPAN = (number of complex data
 * values):
 *   fftradix (Re, Im, n, n, n, 1, maxf, maxp);
 *
 * The data can also be stored in a single array with alternating real and
 * imaginary parts; the magnitude of ISIGN is changed to 2 to give the
 * correct indexing increment, and data[0] and data[1] are used to pass
 * the initial addresses for the sequences of real and imaginary values.
 * example:
 *   REAL data [2*NTOTAL];
 *   fftradix ( &data[0], &data[1], NTOTAL, nPass, nSpan, 2, maxf, maxp);
 *
 * for temporary allocation:
 * MAX_FACTORS >= the maximum prime factor of NPASS
 * MAX_PERM >= the number of prime factors of NPASS. In addition, if the
 *   square-free portion K of NPASS has two or more prime factors, then
 *   MAX_PERM >= (K-1)
 * storage in FACTOR for a maximum of 15 prime factors of NPASS; if NPASS
 *   has more than one square-free factor, the product of the square-free
 *   factors must be <= 210; array storage for maximum prime factor of 23;
 *   the following two constants should agree with the array dimensions.
 *-----------------------------------------------------------------------*
 * void fft_free (void);
 *
 * free-up allocated temporary storage after finishing all the Fourier
 * transforms.
 *----------------------------------------------------------------------*/

#ifndef _FFTN_C
#define _FFTN_C

#include "stdafx.h"
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include "fftn.h"

/* double precision routine */
static int fftradix (double Re[], double Im[],
    size_t nTotal, size_t nPass, size_t nSpan, int isign,
    int max_factors, int max_perm);

/* float precision routine */
static int fftradixf (float Re[], float Im[],
    size_t nTotal, size_t nPass, size_t nSpan, int isign,
    int max_factors, int max_perm);

/* parameters for memory management */
static size_t SpaceAlloced = 0;
static size_t MaxPermAlloced = 0;


/* temp space, (void *) since both float and double routines use it */
static void *Tmp0 = NULL; /* temp space for real part */
static void *Tmp1 = NULL; /* temp space for imaginary part */
static void *Tmp2 = NULL; /* temp space for Cosine values */
static void *Tmp3 = NULL; /* temp space for Sine values */
static int *Perm = NULL;  /* Permutation vector */

#define NFACTOR 11
static int factor [NFACTOR];

#if defined (__FILE__) && defined (lint)
Error: your compiler is sick! define __FILE__ yourself (a string)
eg, something like -D__FILE__=\"fftn.c\"
#endif

#ifndef M_PI
# define M_PI 3.14159265358979323846264338327950288
#endif

#ifndef SIN60
# define SIN60 0.86602540378443865 /* sin(60 deg) */
# define COS72 0.30901699437494742 /* cos(72 deg) */
# define SIN72 0.95105651629515357 /* sin(72 deg) */
#endif

/* re-include this source file on the second pass through */
#undef REAL
#undef FFTN
#undef FFTNS
#undef FFTRADIX
#undef FFTRADIXS

#ifndef FFT_NOFLOAT
# define REAL float
# define FFTN fftnf            /* trailing 'f' for float */
# define FFTNS "fftnf"         /* name for error message */
# define FFTRADIX fftradixf    /* trailing 'f' for float */
# define FFTRADIXS "fftradixf" /* name for error message */
//# include __FILE__           /* include this file again */
#endif

#undef REAL
#undef FFTN
#undef FFTNS
#undef FFTRADIX
#undef FFTRADIXS

#ifndef FFT_NODOUBLE
# define REAL double
# define FFTN fftn
# define FFTNS "fftn"
# define FFTRADIX fftradix
# define FFTRADIXS "fftradix"
//# include __FILE__           /* include this file again */
#endif

#if defined (FFT_NOFLOAT) && defined (FFT_NODOUBLE) && defined (lint)
Error: cannot have both -DFFT_NOFLOAT and -DFFT_NODOUBLE
#endif

//#else /* _FFTN_C */
/* */
void fft_free (void)
{
    SpaceAlloced = MaxPermAlloced = 0;


    if (Tmp0 != NULL) { free (Tmp0); Tmp0 = NULL; }
    if (Tmp1 != NULL) { free (Tmp1); Tmp1 = NULL; }
    if (Tmp2 != NULL) { free (Tmp2); Tmp2 = NULL; }
    if (Tmp3 != NULL) { free (Tmp3); Tmp3 = NULL; }
    if (Perm != NULL) { free (Perm); Perm = NULL; }
}

int FFTN (int ndim, const int dims[], REAL Re [], REAL Im [],
          int iSign, double scaling)
{
    size_t nSpan, nPass, nTotal;
    int ret, i, max_factors, max_perm, inSpan;

    /* tally the number of elements in the data array
       and determine the number of dimensions */
    nTotal = 1;
    if (ndim && dims [0]) {
        for (i = 0; i < ndim; i++) {
            if (dims [i] <= 0) {
                fputs ("Error: " FFTNS "() dimension error\n", stderr);
                fft_free (); /* free-up memory */
                return -1;
            }
            nTotal *= dims [i];
        }
    } else {
        ndim = 0;
        for (i = 0; dims [i]; i++) {
            if (dims [i] <= 0) {
                fputs ("Error: " FFTNS "() dimension error\n", stderr);
                fft_free (); /* free-up memory */
                return -1;
            }
            nTotal *= dims [i];
            ndim++;
        }
    }

    /* determine maximum number of factors and permutations */
#if 1
    /* follow John Beale's example, just use the largest dimension and
       don't worry about excess allocation. Maybe someone else will do it? */
    max_factors = max_perm = 1;
    for (i = 0; i < ndim; i++) {
        nSpan = dims [i];
        inSpan = (int) nSpan;
        if (inSpan > max_factors) max_factors = inSpan;
        if (inSpan > max_perm) max_perm = inSpan;
    }
#else
    /* use the constants used in the original Fortran code */
    max_factors = 23;
    max_perm = 209;
#endif


    /* loop over the dimensions: */
    nSpan = 1;
    for (i = 0; i < ndim; i++) {
        nPass = dims [i];
        nSpan *= nPass;
        ret = FFTRADIX (Re, Im, nTotal, nPass, nSpan, iSign,
                        max_factors, max_perm);
        /* exit, clean-up already done */
        if (ret) return ret;
    }

    /* Divide through by the normalizing constant: */
    if (scaling && scaling != 1.0) {
        if (iSign < 0) iSign = -iSign;
        // scaling = sqrt (scaling);
        if (scaling < 0.0)
            scaling = (scaling < -1.0) ? sqrt ((double) nTotal) : nTotal;
        scaling = 1.0 / scaling; /* multiply is often faster */
        for (i = 0; i < (int) nTotal; i += iSign) {
            Re [i] *= scaling;
            Im [i] *= scaling;
        }
    }
    return 0;
}

/*----------------------------------------------------------------------*/
/* singleton's mixed radix routine
 * could move allocation out to fftn(), but leave it here so that it's
 * possible to make this a standalone function */
static int FFTRADIX (REAL Re[], REAL Im[], size_t nTotal, size_t nPass,
                     size_t nSpan, int iSign, int max_factors, int max_perm)
{
    int ii, mfactor, kspan, ispan, inc;
    int j, jc, jf, jj, k, k1, k2, k3, k4, kk, kt, nn, ns, nt;
    REAL radf;
    REAL c1, c2, c3, cd, aa, aj, ak, ajm, ajp, akm, akp;
    REAL s1, s2, s3, sd, bb, bj, bk, bjm, bjp, bkm, bkp;
    REAL *Rtmp = NULL; /* temp space for real part */
    REAL *Itmp = NULL; /* temp space for imaginary part */
    REAL *Cos = NULL;  /* Cosine values */
    REAL *Sin = NULL;  /* Sine values */
    REAL s60 = SIN60;  /* sin(60 deg) */
    REAL c72 = COS72;  /* cos(72 deg) */
    REAL s72 = SIN72;  /* sin(72 deg) */
    REAL pi2 = M_PI;   /* use PI first, 2 PI later */

    /* gcc complains about k3 being uninitialized, but I can't find out
       where or why ... it looks okay to me.
     * initialize to make gcc happy */


    k3 = 0;
    /* gcc complains about c2, c3, s2, s3 being uninitialized, but they're
       only used for the radix 4 case and only AFTER the (s1 == 0.0) pass
       through the loop, at which point they will have been calculated.
     * initialize to make gcc happy */
    c2 = c3 = s2 = s3 = 0.0;

    /* Parameter adjustments, was fortran so fix zero-offset */
    Re--;
    Im--;

    if (nPass < 2) return 0;

    /* allocate storage */
    if (SpaceAlloced < max_factors * sizeof (REAL)) {
#ifdef SUN_BROKEN_REALLOC
        if (!SpaceAlloced) /* first time */ {
            SpaceAlloced = max_factors * sizeof (REAL);
            Tmp0 = malloc (SpaceAlloced);
            Tmp1 = malloc (SpaceAlloced);
            Tmp2 = malloc (SpaceAlloced);
            Tmp3 = malloc (SpaceAlloced);
        } else {
#endif
            SpaceAlloced = max_factors * sizeof (REAL);
            Tmp0 = realloc (Tmp0, SpaceAlloced);
            Tmp1 = realloc (Tmp1, SpaceAlloced);
            Tmp2 = realloc (Tmp2, SpaceAlloced);
            Tmp3 = realloc (Tmp3, SpaceAlloced);
#ifdef SUN_BROKEN_REALLOC
        }
#endif
    } else {
        /* allow full use of alloc'd space */
        max_factors = (int) SpaceAlloced / sizeof (REAL);
    }
    if ((int) MaxPermAlloced < max_perm) {
#ifdef SUN_BROKEN_REALLOC
        if (!MaxPermAlloced) /* first time */
            Perm = malloc (max_perm * sizeof (int));
        else
#endif
            Perm = (int *) realloc (Perm, max_perm * sizeof (int));
        MaxPermAlloced = max_perm;
    } else {
        /* allow full use of alloc'd space */
        max_perm = (int) MaxPermAlloced;
    }
    if (Tmp0 == NULL || Tmp1 == NULL || Tmp2 == NULL || Tmp3 == NULL
        || Perm == NULL)
        goto Memory_Error_Label;

    /* assign pointers */
    Rtmp = (REAL *) Tmp0;
    Itmp = (REAL *) Tmp1;
    Cos = (REAL *) Tmp2;
    Sin = (REAL *) Tmp3;


122 /* Function Body */ inc = iSign; if (iSign < 0) { s72 = -s72; s60 = -s60; pi2 = -pi2; inc = -inc; /* absolute value */ } /* adjust for strange increments */ nt = inc ( int ) nTotal; ns = inc ( int ) nSpan; kspan = ns; nn = nt inc; jc = ns / (( int ) nPass); radf = pi2 ( double ) jc; pi2 *= 2.0; /* use 2 PI from here on */ ii = 0; jf = 0; /* determine the factors of n */ mfactor = 0; k = ( int ) nPass; while (k % 16 == 0) { mfactor++; factor [mfactor 1] = 4; k /= 16; } j = 3; jj = 9; do { while (k % jj == 0) { mfactor++; factor [mfactor 1] = j; k /= jj; } j += 2; jj = j j; } while (jj <= k); if (k <= 4) { kt = mfactor; factor [mfactor] = k; if (k != 1) mfactor++; } else { if (k (k / 4 << 2) == 0) { mfactor++; factor [mfactor 1] = 2; k /= 4; } kt = mfactor; j = 2; do { if (k % j == 0) { mfactor++; factor [mfactor 1] = j; k /= j; } j = ((j + 1) / 2 << 1) + 1; } while (j <= k); } if (kt) { j = kt; do { mfactor++; factor [mfactor 1] = factor [j 1];


123 j--; } while (j); } /* test that mfactors is in range */ if (mfactor > NFACTOR) { fputs ("Error: FFTRADIXS "() exceeded number of factors\n", stderr); goto Memory_Error_Label; } /* compute fourier transform */ for (;;) { sd = radf / ( double ) kspan; cd = sin(sd); cd = 2.0 cd cd; sd = sin(sd + sd); kk = 1; ii++; switch (factor [ii 1]) { case 2: /* transform for factor of 2 (including rotation factor) */ kspan /= 2; k1 = kspan + 2; do { do { k2 = kk + kspan; ak = Re [k2]; bk = Im [k2]; Re [k2] = Re [kk] ak; Im [k2] = Im [kk] bk; Re [kk] += ak; Im [kk] += bk; kk = k2 + kspan; } while (kk <= nn); kk -= nn; } while (kk <= jc); if (kk > kspan) goto Permute_Results_Label; /* exit infinite loop */ do { c1 = 1.0 cd; s1 = sd; do { do { do { k2 = kk + kspan; ak = Re [kk] Re [k2]; bk = Im [kk] Im [k2]; Re [kk] += Re [k2]; Im [kk] += Im [k2]; Re [k2] = c1 ak s1 bk; Im [k2] = s1 ak + c1 bk; kk = k2 + kspan; } while (kk < nt); k2 = kk nt; c1 = -c1; kk = k1 k2; } while (kk > k2); ak = c1 (cd c1 + sd s1); s1 = sd c1 cd s1 + s1; c1 = 2.0 (ak ak + s1 s1); s1 *= c1; c1 *= ak; kk += jc; } while (kk < k2); k1 += inc + inc; kk = (k1 kspan) / 2 + jc; } while (kk <= jc + jc); break ;


124 case 4: /* transform for factor of 4 */ ispan = kspan; kspan /= 4; do { c1 = 1.0; s1 = 0.0; do { do { k1 = kk + kspan; k2 = k1 + kspan; k3 = k2 + kspan; akp = Re [kk] + Re [k2]; akm = Re [kk] Re [k2]; ajp = Re [k1] + Re [k3]; ajm = Re [k1] Re [k3]; bkp = Im [kk] + Im [k2]; bkm = Im [kk] Im [k2]; bjp = Im [k1] + Im [k3]; bjm = Im [k1] Im [k3]; Re [kk] = akp + ajp; Im [kk] = bkp + bjp; ajp = akp ajp; bjp = bkp bjp; if (iSign < 0) { akp = akm + bjm; bkp = bkm ajm; akm -= bjm; bkm += ajm; } else { akp = akm bjm; bkp = bkm + ajm; akm += bjm; bkm -= ajm; } /* avoid useless multiplies */ if (s1 == 0.0) { Re [k1] = akp; Re [k2] = ajp; Re [k3] = akm; Im [k1] = bkp; Im [k2] = bjp; Im [k3] = bkm; } else { Re [k1] = akp c1 bkp s1; Re [k2] = ajp c2 bjp s2; Re [k3] = akm c3 bkm s3; Im [k1] = akp s1 + bkp c1; Im [k2] = ajp s2 + bjp c2; Im [k3] = akm s3 + bkm c3; } kk = k3 + kspan; } while (kk <= nt); c2 = c1 (cd c1 + sd s1); s1 = sd c1 cd s1 + s1; c1 = 2.0 (c2 c2 + s1 s1); s1 *= c1; c1 *= c2; /* values of c2, c3, s2, s3 that will get used next time */ c2 = c1 c1 s1 s1; s2 = 2.0 c1 s1; c3 = c2 c1 s2 s1; s3 = c2 s1 + s2 c1; kk = kk nt + jc; } while (kk <= kspan); kk = kk kspan + inc; } while (kk <= jc); if (kspan == jc)


125 goto Permute_Results_Label; /* exit infinite loop */ break ; default : /* transform for odd factors */ #ifdef FFT_RADIX4 fputs ("Error: FFTRADIXS "(): compiled for radix 2/4 only\n", stderr); fft_free (); /* free-up memory */ return -1; break ; #else /* FFT_RADIX4 */ k = factor [ii 1]; ispan = kspan; kspan /= k; switch (k) { case 3: /* transform for factor of 3 (optional code) */ do { do { k1 = kk + kspan; k2 = k1 + kspan; ak = Re [kk]; bk = Im [kk]; aj = Re [k1] + Re [k2]; bj = Im [k1] + Im [k2]; Re [kk] = ak + aj; Im [kk] = bk + bj; ak -= 0.5 aj; bk -= 0.5 bj; aj = (Re [k1] Re [k2]) s60; bj = (Im [k1] Im [k2]) s60; Re [k1] = ak bj; Re [k2] = ak + bj; Im [k1] = bk + aj; Im [k2] = bk aj; kk = k2 + kspan; } while (kk < nn); kk -= nn; } while (kk <= kspan); break ; case 5: /* transform for factor of 5 (optional code) */ c2 = c72 c72 s72 s72; s2 = 2.0 c72 s72; do { do { k1 = kk + kspan; k2 = k1 + kspan; k3 = k2 + kspan; k4 = k3 + kspan; akp = Re [k1] + Re [k4]; akm = Re [k1] Re [k4]; bkp = Im [k1] + Im [k4]; bkm = Im [k1] Im [k4]; ajp = Re [k2] + Re [k3]; ajm = Re [k2] Re [k3]; bjp = Im [k2] + Im [k3]; bjm = Im [k2] Im [k3]; aa = Re [kk]; bb = Im [kk]; Re [kk] = aa + akp + ajp; Im [kk] = bb + bkp + bjp; ak = akp c72 + ajp c2 + aa; bk = bkp c72 + bjp c2 + bb; aj = akm s72 + ajm s2; bj = bkm s72 + bjm s2; Re [k1] = ak bj; Re [k4] = ak + bj; Im [k1] = bk + aj; Im [k4] = bk aj;


                  ak = akp * c2 + ajp * c72 + aa;
                  bk = bkp * c2 + bjp * c72 + bb;
                  aj = akm * s2 - ajm * s72;
                  bj = bkm * s2 - bjm * s72;
                  Re [k2] = ak - bj;
                  Re [k3] = ak + bj;
                  Im [k2] = bk + aj;
                  Im [k3] = bk - aj;
                  kk = k4 + kspan;
               } while (kk < nn);
               kk -= nn;
            } while (kk <= kspan);
            break;
         default:
            if (k != jf) {
               jf = k;
               s1 = pi2 / (double) k;
               c1 = cos (s1);
               s1 = sin (s1);
               if (jf > max_factors)
                  goto Memory_Error_Label;
               Cos [jf - 1] = 1.0;
               Sin [jf - 1] = 0.0;
               j = 1;
               do {
                  Cos [j - 1] = Cos [k - 1] * c1 + Sin [k - 1] * s1;
                  Sin [j - 1] = Cos [k - 1] * s1 - Sin [k - 1] * c1;
                  k--;
                  Cos [k - 1] = Cos [j - 1];
                  Sin [k - 1] = -Sin [j - 1];
                  j++;
               } while (j < k);
            }
            do {
               do {
                  k1 = kk;
                  k2 = kk + ispan;
                  ak = aa = Re [kk];
                  bk = bb = Im [kk];
                  j = 1;
                  k1 += kspan;
                  do {
                     k2 -= kspan;
                     j++;
                     Rtmp [j - 1] = Re [k1] + Re [k2];
                     ak += Rtmp [j - 1];
                     Itmp [j - 1] = Im [k1] + Im [k2];
                     bk += Itmp [j - 1];
                     j++;
                     Rtmp [j - 1] = Re [k1] - Re [k2];
                     Itmp [j - 1] = Im [k1] - Im [k2];
                     k1 += kspan;
                  } while (k1 < k2);
                  Re [kk] = ak;
                  Im [kk] = bk;
                  k1 = kk;
                  k2 = kk + ispan;
                  j = 1;
                  do {
                     k1 += kspan;
                     k2 -= kspan;
                     jj = j;
                     ak = aa;
                     bk = bb;
                     aj = 0.0;
                     bj = 0.0;
                     k = 1;
                     do {
                        k++;


                        ak += Rtmp [k - 1] * Cos [jj - 1];
                        bk += Itmp [k - 1] * Cos [jj - 1];
                        k++;
                        aj += Rtmp [k - 1] * Sin [jj - 1];
                        bj += Itmp [k - 1] * Sin [jj - 1];
                        jj += j;
                        if (jj > jf) {
                           jj -= jf;
                        }
                     } while (k < jf);
                     k = jf - j;
                     Re [k1] = ak - bj;
                     Im [k1] = bk + aj;
                     Re [k2] = ak + bj;
                     Im [k2] = bk - aj;
                     j++;
                  } while (j < k);
                  kk += ispan;
               } while (kk <= nn);
               kk -= nn;
            } while (kk <= kspan);
            break;
         }
         /* multiply by rotation factor (except for factors of 2 and 4) */
         if (ii == mfactor)
            goto Permute_Results_Label;		/* exit infinite loop */
         kk = jc + 1;
         do {
            c2 = 1.0 - cd;
            s1 = sd;
            do {
               c1 = c2;
               s2 = s1;
               kk += kspan;
               do {
                  do {
                     ak = Re [kk];
                     Re [kk] = c2 * ak - s2 * Im [kk];
                     Im [kk] = s2 * ak + c2 * Im [kk];
                     kk += ispan;
                  } while (kk <= nt);
                  ak = s1 * s2;
                  s2 = s1 * c2 + c1 * s2;
                  c2 = c1 * c2 - ak;
                  kk = kk - nt + kspan;
               } while (kk <= ispan);
               c2 = c1 - (cd * c1 + sd * s1);
               s1 += sd * c1 - cd * s1;
               c1 = 2.0 - (c2 * c2 + s1 * s1);
               s1 *= c1;
               c2 *= c1;
               kk = kk - ispan + jc;
            } while (kk <= kspan);
            kk = kk - kspan + jc + inc;
         } while (kk <= jc + jc);
         break;
#endif	/* FFT_RADIX4 */
      }
   }
   /* permute the results to normal order---done in two stages */
   /* permutation for square factors of n */
Permute_Results_Label:
   Perm [0] = ns;
   if (kt) {
      k = kt + kt + 1;
      if (mfactor < k)
         k--;
      j = 1;
      Perm [k] = jc;


      do {
         Perm [j] = Perm [j - 1] / factor [j - 1];
         Perm [k - 1] = Perm [k] * factor [j - 1];
         j++;
         k--;
      } while (j < k);
      k3 = Perm [k];
      kspan = Perm [1];
      kk = jc + 1;
      k2 = kspan + 1;
      j = 1;
      if (nPass != nTotal) {
         /* permutation for multivariate transform */
Permute_Multi_Label:
         do {
            do {
               k = kk + jc;
               do {
                  /* swap Re [kk] <> Re [k2], Im [kk] <> Im [k2] */
                  ak = Re [kk]; Re [kk] = Re [k2]; Re [k2] = ak;
                  bk = Im [kk]; Im [kk] = Im [k2]; Im [k2] = bk;
                  kk += inc;
                  k2 += inc;
               } while (kk < k);
               kk += ns - jc;
               k2 += ns - jc;
            } while (kk < nt);
            k2 = k2 - nt + kspan;
            kk = kk - nt + jc;
         } while (k2 < ns);
         do {
            do {
               k2 -= Perm [j - 1];
               j++;
               k2 = Perm [j] + k2;
            } while (k2 > Perm [j - 1]);
            j = 1;
            do {
               if (kk < k2)
                  goto Permute_Multi_Label;
               kk += jc;
               k2 += kspan;
            } while (k2 < ns);
         } while (kk < ns);
      } else {
         /* permutation for single-variate transform (optional code) */
Permute_Single_Label:
         do {
            /* swap Re [kk] <> Re [k2], Im [kk] <> Im [k2] */
            ak = Re [kk]; Re [kk] = Re [k2]; Re [k2] = ak;
            bk = Im [kk]; Im [kk] = Im [k2]; Im [k2] = bk;
            kk += inc;
            k2 += kspan;
         } while (k2 < ns);
         do {
            do {
               k2 -= Perm [j - 1];
               j++;
               k2 = Perm [j] + k2;
            } while (k2 > Perm [j - 1]);
            j = 1;
            do {
               if (kk < k2)
                  goto Permute_Single_Label;
               kk += inc;
               k2 += kspan;
            } while (k2 < ns);
         } while (kk < ns);
      }
      jc = k3;


   }
   if ((kt << 1) + 1 >= mfactor)
      return 0;
   ispan = Perm [kt];
   /* permutation for square-free factors of n */
   j = mfactor - kt;
   factor [j] = 1;
   do {
      factor [j - 1] *= factor [j];
      j--;
   } while (j != kt);
   kt++;
   nn = factor [kt - 1] - 1;
   if (nn > max_perm)
      goto Memory_Error_Label;
   j = jj = 0;
   for (;;) {
      k = kt + 1;
      k2 = factor [kt - 1];
      kk = factor [k - 1];
      j++;
      if (j > nn)
         break;				/* exit infinite loop */
      jj += kk;
      while (jj >= k2) {
         jj -= k2;
         k2 = kk;
         k++;
         kk = factor [k - 1];
         jj += kk;
      }
      Perm [j - 1] = jj;
   }
   /* determine the permutation cycles of length greater than 1 */
   j = 0;
   for (;;) {
      do {
         j++;
         kk = Perm [j - 1];
      } while (kk < 0);
      if (kk != j) {
         do {
            k = kk;
            kk = Perm [k - 1];
            Perm [k - 1] = -kk;
         } while (kk != j);
         k3 = kk;
      } else {
         Perm [j - 1] = -j;
         if (j == nn)
            break;			/* exit infinite loop */
      }
   }
   max_factors *= inc;
   /* reorder a and b, following the permutation cycles */
   for (;;) {
      j = k3 + 1;
      nt -= ispan;
      ii = nt - inc + 1;
      if (nt < 0)
         break;				/* exit infinite loop */
      do {
         do {
            j--;
         } while (Perm [j - 1] < 0);
         jj = jc;
         do {
            kspan = jj;
            if (jj > max_factors) {


               kspan = max_factors;
            }
            jj -= kspan;
            k = Perm [j - 1];
            kk = jc * k + ii + jj;
            k1 = kk + kspan;
            k2 = 0;
            do {
               k2++;
               Rtmp [k2 - 1] = Re [k1];
               Itmp [k2 - 1] = Im [k1];
               k1 -= inc;
            } while (k1 != kk);
            do {
               k1 = kk + kspan;
               k2 = k1 - jc * (k + Perm [k - 1]);
               k = -Perm [k - 1];
               do {
                  Re [k1] = Re [k2];
                  Im [k1] = Im [k2];
                  k1 -= inc;
                  k2 -= inc;
               } while (k1 != kk);
               kk = k2;
            } while (k != j);
            k1 = kk + kspan;
            k2 = 0;
            do {
               k2++;
               Re [k1] = Rtmp [k2 - 1];
               Im [k1] = Itmp [k2 - 1];
               k1 -= inc;
            } while (k1 != kk);
         } while (jj);
      } while (j != 1);
   }
   return 0;			/* exit point here */

   /* alloc or other problem, do some clean-up */
Memory_Error_Label:
   fputs ("Error: " FFTRADIXS "() - insufficient memory.\n", stderr);
   fft_free ();			/* free-up memory */
   return -1;
}
//#endif	/* _FFTN_C */

/*--------------------------------*-C-*---------------------------------*
 * File: fftn.h
 * ---------------------------------------------------------------------*
 * Re []:    real value array
 * Im []:    imaginary value array
 * nTotal:   total number of complex values
 * nPass:    number of elements involved in this pass of transform
 * nSpan:    nspan/nPass = number of bytes to increment pointer
 *           in Re [] and Im []
 * isign:    exponent: +1 = forward, -1 = reverse
 * scaling:  normalizing constant by which the final result is *divided*
 *           scaling == -1, normalize by total dimension of the transform
 *           scaling <  -1, normalize by the square-root of the total dimension
 * ---------------------------------------------------------------------*
 * See the comments in the code for correct usage!
 */
#ifndef _FFTN_H
#define _FFTN_H

extern void fft_free (void);


/* double precision routine */
extern int fftn (int ndim, const int dims [],
                 double Re [], double Im [],
                 int isign, double scaling);

/* float precision routine */
extern int fftnf (int ndim, const int dims [],
                  float Re [], float Im [],
                  int isign, double scaling);

#endif	/* _FFTN_H */


APPENDIX G

LIST OF PUBLICATIONS

The results of this dissertation have been published in the following journals and conference proceedings:

1. A. Khmaladze, M. K. Kim, C.-M. Lo, "Phase imaging of cells by simultaneous dual-wavelength reflection digital holography", Optics Express, Vol. 16, No. 15, 16900-16911 (2008).

2. A. Khmaladze, A. Restrepo-Martínez, M. K. Kim, R. Castañeda, A. Blandón, "Simultaneous Dual-Wavelength Reflection Digital Holography Applied to the Study of the Porous Coal Samples", Applied Optics, Vol. 47, No. 21, 3203-3210 (2008).

3. A. Khmaladze, M. Kim, "Design and imaging properties of a laser scanning microscope with a position-sensitive detector", Journal of Modern Optics (in press).

4. A. Khmaladze, A. Restrepo-Martínez, M. K. Kim, R. Castañeda, A. Blandón, "The Application of Dual-Wavelength Reflection Digital Holography for Detection of Pores in Coal Samples", in Digital Holography and Three-Dimensional Imaging (DH), Technical Digest (CD) (Optical Society of America, 2008), paper DMB5.

5. A. Khmaladze, C. J. Mann, M. K. Kim, "Phase Contrast Movies of Cell Migration by Multi-Wavelength Digital Holography", in Digital Holography and Three-Dimensional Imaging (DH), Technical Digest (CD) (Optical Society of America, 2007), paper DMB3.

6. A. Khmaladze, M. Kim, "Quantitative Phase Contrast Imaging of Cells by Multi-Wavelength Digital Holography", in Conference on Lasers and Electro-Optics (CLEO), Technical Digest (CD) (Optical Society of America, 2007), paper JTuA52A.

7. C. Mann, A. Khmaladze, M. K. Kim, "Phase Contrast Movies of Cell Migration by Multi-Wavelength Digital Holography", in Conference on Lasers and Electro-Optics (CLEO), Technical Digest (CD) (Optical Society of America, 2006), paper CTuG5.

8. A. Khmaladze, M. K. Kim, "Imaging Properties of Scanning Photon Microscope", in Biomedical Topical Meeting (BIO), Technical Digest (CD) (Optical Society of America, 2006), paper TuI74.

9. A. Khmaladze, M. K. Kim, "Use of Bessel Beams and Position-Sensitive Detectors in Scanning Photon Microscope for Improved Field Depth and Contrast", Proc. SPIE, 6090, 173-180 (2006).

10. A. Khmaladze, M. K. Kim, "Scanning Photon Microscope", Proc. SPIE, 5873, 156-162 (2005).


About the Author

Alexander Khmaladze received a Master of Science degree in Physics from the University of South Carolina in Columbia, SC in 2000. In 2003 he entered the PhD program in Applied Physics at the University of South Florida, where he worked in the Digital Holography & Microscopy Laboratory with Professor M. K. Kim.