USF Libraries
USF Digital Collections

Structured light for three-dimensional microscopy


Material Information

Title:
Structured light for three-dimensional microscopy
Physical Description:
Book
Language:
English
Creator:
Krzewina, Leo G
Publisher:
University of South Florida
Place of Publication:
Tampa, Fla
Publication Date:

Subjects

Subjects / Keywords:
Optical sectioning
Spatial light modulator
Structured illumination
Extended focused imaging
CSIM
Phase-unwrapping
Chromatic aberration
Dissertations, Academic -- Physics -- Doctoral -- USF
Genre:
bibliography (marcgt)
theses (marcgt)
non-fiction (marcgt)

Notes

Abstract:
ABSTRACT: The conventional light microscope is an indispensable tool for many physical and life science applications, but is of limited usefulness for three-dimensional imaging due to its increasingly narrow depth of field at high magnifications. Focused regions may be obscured by defocused neighbors or noise from extraneous light sources and subsurface scattering. By rejecting light originating from outside the depth of focus it is possible to minimize these problems. When a contiguous series of such focused slices, or optical sections, are obtained along an axis of an extended object they may be combined to form a complete, focused three-dimensional surface image. Here, a variety of methods to obtain optical sections in a reflective setup are presented. The first employs an optical feedback loop through a spatial light modulator (SLM) to selectively illuminate focused regions. The SLM is a flexible electro-optical device that also allows (non-feedback) experiments of an intensity modulated light source resulting in illumination with a linear structure. This structured illumination microscopy is an established sectioning technique, which requires three frame captures per axial position. By developing a color grid and exploiting the red, green, and blue channels of a CCD camera, the three frames have been reduced to one. The speed increase comes at a cost and the limiting effects of chromatic aberration are discussed. Digital holography offers an alternative to axial scanning by allowing the surface to be reconstructed from a single exposure. Use of multiple wavelength illumination with this extended focus imaging is proposed and preliminary results are shown.
Thesis:
Dissertation (Ph.D.)--University of South Florida, 2006.
Bibliography:
Includes bibliographical references.
System Details:
System requirements: World Wide Web browser and PDF reader.
System Details:
Mode of access: World Wide Web.
Statement of Responsibility:
by Leo G. Krzewina.
General Note:
Title from PDF of title page.
General Note:
Document formatted into pages; contains 131 pages.
General Note:
Includes vita.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 001790600
oclc - 144810587
usfldc doi - E14-SFE0001515
usfldc handle - e14.1515
System ID:
SFS0025833:00001




Full Text


Structured Light for Three-Dimensional Microscopy

by

Leo G. Krzewina

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy
Department of Physics
College of Arts and Sciences
University of South Florida

Major Professor: Myung K. Kim, Ph.D.
Wei Chen, Ph.D.
Pritish Mukherjee, Ph.D.
David Rabson, Ph.D.

Date of Approval: March 29, 2006

Keywords: optical sectioning, spatial light modulator, structured illumination, extended focused imaging, CSIM, phase-unwrapping, chromatic aberration

Copyright 2006, Leo G. Krzewina


To my family.


Acknowledgements

Without encouragement from my then future wife Mary Lynn, I probably would not have returned to graduate school after an eight-year hiatus. She also deserves credit for continued patience and support throughout the ongoing effort, which persistently remained two or three years from completion over five (good) years. As far as advisors go, I can hardly imagine a better one than Dr. Kim. He offered guidance when needed, and of equal importance, a fair degree of latitude to research topics of personal interest. Of course, without my undergraduate mentor Don Cox at the University of Wisconsin-Madison and graduate astronomy advisor Bill Saslaw at the University of Virginia I may never have reached this point either. I also appreciate the efforts of the graduate committee members, who will be some of the select few to read this document. Numerous helpful discussions with Frank (Dr. Lingfeng Yu), Dr. David Rabson, Chris Mann, Alex Khmaladze, Nilanthi Warnasooriya, and Mariana Potcoava helped formulate many of the concepts that became the results presented herein.


Table of Contents

List of Figures
Abstract
1. Introduction
   1.1. Three-Dimensional Microscopy
   1.2. Structured Illumination
   1.3. Overview of Contemporary Methods
      1.3.1. Confocal Scanning Microscopy (CSM)
      1.3.2. Structured Illumination Microscopy (SIM)
      1.3.3. Fluorescence Imaging
      1.3.4. Digital Holography
      1.3.5. Other Three-Dimensional Microscopy Techniques
      1.3.6. Computational Microscopy
      1.3.7. Fastest Three-Dimensional Microscopy Methods
   1.4. Summary of Content: My Contribution
      1.4.1. Selective Illumination Feedback Microscopy (SIFM)
      1.4.2. Color Structured Illumination Microscopy (CSIM)
      1.4.3. Chromatic Aberration
      1.4.4. Proposal to Improve Extended Focused Imaging
      1.4.5. Summary of Appendices
2. Optical Sectioning by Selective Illumination Feedback Microscopy
   2.1. Introduction
   2.2. Experimental Setup
   2.3. Calibration
   2.4. Selective Illumination
   2.5. Results
   2.6. Conclusions
3. Single-Exposure Optical Sectioning by Color Structured Illumination Microscopy
   3.1. Introduction
   3.2. Experimental Setup
   3.3. Results
   3.4. Discussion
   3.5. Conclusions
4. Color Structured Illumination Microscopy for Imaging in Noisy Environments
   4.1. Introduction
   4.2. Color Structured Illumination Microscopy
   4.3. Experimental Setup
   4.4. Sectioning and Noise Reduction: Results
5. Dynamic Structured Illumination Microscopy: focused imaging and optical sectioning for moving objects
   5.1. Introduction
   5.2. Experimental Setup
   5.3. Method of Structured Illumination
   5.4. Color Grid Fabrication
   5.5. Results
   5.6. Discussion
   5.7. Conclusions
6. Chromatic Aberration in Structured Illumination Microscopy
   6.1. Introduction
   6.2. Experimental Setup
   6.3. Results
   6.4. Conclusions
7. Multi-Wavelength Extended Focused Imaging
   7.1. Introduction
   7.2. Phase-Shifting Digital Holography
   7.3. Multi-Wavelength Approach
   7.4. Conclusions
8. Concluding Remarks
References
Bibliography
Appendices
   Appendix 1: Spatial Light Modulator
   Appendix 2: Feedback Effects
   Appendix 3: Phase-Unwrapping Algorithms
   Appendix 4: Convolution Theorem
   Appendix 5: Software Inventory
   Appendix 6: Geometrical Point Spread Function
   Appendix 7: Setup for a Reflective Mask
   Appendix 8: List of Accomplishments
About the Author


List of Figures

Figure 1.1. Linear Sinusoidal Grid
Figure 1.2. Artifact Correction in Structured Illumination Microscopy
Figure 1.3. Binary and Sinusoidal Axial Response Comparison
Figure 1.4. Deconvolution of Ant Head
Figure 1.5. Evolution of Imaging Quality
Figure 2.1. SIFM Experimental Setup
Figure 2.2. SIFM Axial Response
Figure 2.3. Response vs. Light Structure Size
Figure 2.4. SIFM Applied
Figure 3.1. CSIM Experimental Setup
Figure 3.2. CSIM Axial Response
Figure 3.3. Moth Abdomen
Figure 4.1. Experimental Setup
Figure 4.2. Wood Surface
Figure 4.3. Turtox
Figure 5.1. Example Color Grid Spatial Period
Figure 5.2. Emission Spectrum of a Fluorescent Lamp
Figure 5.3. Red Thin Film Transmission Spectrum
Figure 5.4. Camera Sensitivity
Figure 5.5. Calculated System Response
Figure 5.6. Lens Paper
Figure 5.7. Pigeon Feather
Figure 5.8. SIM Error vs. Phase Error
Figure 6.1. Experimental Setup
Figure 6.2. Chromatic Aberration Observation
Figure 6.3. Red, Green, and Blue Response
Figure 6.4. Simulated Effects of Chromatic Aberration
Figure 6.5. Fractional Broadening versus Aberration
Figure 7.1. Multi-Wavelength Interferometer Setup
Figure 7.2. Composing the Extended Focused Image
Figure A.1.1. Spatial Light Modulator
Figure A.2.1. Ideal Feedback
Figure A.2.2. Image Quality Test
Figure A.2.3. Feedback Misalignment Effects
Figure A.3.1. Cheek Cell
Figure A.3.2. SKOV Cancer Cell
Figure A.3.3. Resolution Target
Figure A.3.4. Noisy Resolution Target
Figure A.4.1. Sample Point Spread Functions
Figure A.4.2. Test Image for Convolution
Figure A.4.3. Convolution Quadrant Error
Figure A.4.4. Numerical Round-Off Error
Figure A.4.5. Convolution with Blurring
Figure A.5.1. Wavy Light Pattern
Figure A.5.2. Butterfly Eye
Figure A.5.3. Simulated Diffraction
Figure A.5.4. Spectral Gaussian Low-Pass Filter
Figure A.6.1. Fixed Array Size Point Spread Function
Figure A.6.2. Dynamic Array Size Point Spread Function
Figure A.7.1. Setup for Reflective Spatial Light Modulator


Structured Light for Three-Dimensional Microscopy

Leo G. Krzewina

ABSTRACT

The conventional light microscope is an indispensable tool for many physical and life science applications, but is of limited usefulness for three-dimensional imaging due to its increasingly narrow depth of field at high magnifications. Focused regions may be obscured by defocused neighbors or noise from extraneous light sources and subsurface scattering. By rejecting light originating from outside the depth of focus it is possible to minimize these problems. When a contiguous series of such focused slices, or optical sections, is obtained along an axis of an extended object, they may be combined to form a complete, focused three-dimensional surface image. Here, a variety of methods to obtain optical sections in a reflective setup are presented. The first employs an optical feedback loop through a spatial light modulator (SLM) to selectively illuminate focused regions. The SLM is a flexible electro-optical device that also allows (non-feedback) experiments with an intensity-modulated light source, resulting in illumination with a linear structure. This structured illumination microscopy is an established sectioning technique, which requires three frame captures per axial position. By developing a color grid and exploiting the red, green, and blue channels of a CCD camera, the three frames have been reduced to one. The speed increase comes at a cost, and the limiting effects of chromatic aberration are discussed. Digital holography offers an alternative to axial scanning by allowing the surface to be reconstructed from a single exposure. Use of multiple wavelength illumination with this extended focus imaging is proposed and preliminary results are shown.


1. Introduction

The conventional light microscope has been used to observe objects on the size scale of the micron for centuries, and continues to be a powerful laboratory tool today. However, as the object size decreases (the magnification increases), the region extending along the optical axis that appears in good focus becomes very narrow. Specifically, the diffraction-limited depth of focus is given by

Δz_focus = nλ/NA²    (1.1)

In Eq. (1.1), n is the index of refraction of the medium surrounding the object, λ is the illumination wavelength, and NA is the numerical aperture of the focusing lens or microscope objective. It is easier to think of this in terms of the size of the object being viewed, and to understand the effect by noting that the depth of focus decreases as the square of the field size. This implies that at high magnification the variation in depth of the surface of the object may exceed the depth of focus of the microscope, and defocused regions within the field will overlap those that are in focus to cause blurring.

The classical way to avoid this problem is to cut the object into thin slices and place them onto glass slides for viewing. This is called physical sectioning. Since each section appears in focus, the entire volume of the object may be viewed if a large number of contiguous sections can be obtained. This process is obviously destructive and not appropriate for living specimens. Furthermore, it is both difficult to obtain contiguous sections from a single object and inconvenient to swap slides between observations.
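For scale, Eq. (1.1) is easy to evaluate; a minimal Python check (the 0.25 NA objective and green illumination match values used later in this work):

    n, wavelength, NA = 1.0, 0.55, 0.25   # air, green light in microns, 0.25-NA objective
    dz_focus = n * wavelength / NA**2     # Eq. (1.1)
    print(f"depth of focus = {dz_focus:.1f} microns")   # -> 8.8 microns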


1.1. Three-Dimensional Microscopy

To circumvent the problems associated with physical sectioning, numerous means have been devised to obtain sections without altering the object, instead rejecting defocused light through modifications to the microscope. This way of obtaining sections is called optical sectioning. Since each optical section provides a clear view of the focused slice of the object at any particular axial (z) position, all that is necessary to view the entire object surface or volume is to scan it along the z axis in a series of steps separated by approximately Δz_focus.

For surface imaging, as in a reflective setup, the sections may be composed into a single focused view of the entire field. This is referred to as the autofocus image, since all modern instruments compose it automatically. In addition to generating an autofocus image, a system capable of sectioning also records the axial position of each slice so that a depth map of the surface is easily generated. For translucent objects, the autofocus image may not be useful if multiple features appear at the same lateral position along the z axis, but depth information is still available. Therefore both cases are examples of three-dimensional microscopy; however, since the reflective mode is used exclusively in this work, only surface reconstructions will be shown in the following chapters.
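The compositing step described above reduces to a per-pixel search over the stack. A minimal sketch (assuming, as sectioned data implies, that intensity peaks at best focus; the function name and array conventions are mine, not from the dissertation):

    import numpy as np

    def compose_autofocus(sections, z_positions):
        """Combine a z-stack of optical sections into an autofocus image and
        a depth map by keeping the brightest (best-focused) slice per pixel."""
        stack = np.asarray(sections, dtype=float)        # shape (nz, ny, nx)
        best = np.argmax(stack, axis=0)                  # index of winning slice
        autofocus = np.take_along_axis(stack, best[None], axis=0)[0]
        depth_map = np.asarray(z_positions, dtype=float)[best]
        return autofocus, depth_map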


1.2. Structured Illumination

One highly successful approach to obtaining three-dimensional information has been to use structured illumination. This has both general and specific connotations. Most generally, a system is said to use structured light when a pattern has been imposed upon the cross section of the illuminating beam, as by passing it through a mask. A variety of structures have been used, examples of which include the point, line, and checkerboard. After illuminating the object with the structure, a measure of the focus is implicit in how well the structure is transferred to the recording medium (typically a CCD camera). A mask in the recording plane or computer analysis then extracts the focused component. In the specific case when a moving linear grid is used to modulate the incident beam, the term structured illumination microscopy (SIM) has been adopted in the literature, and is of special importance to this research.

1.3. Overview of Contemporary Methods

Many approaches to three-dimensional microscopy have been developed, often tailored to a specific regime of applicability. This section provides a brief description of those that are most popular; more exhaustive reviews are available [1-4]. Systems vary greatly in cost, image quality and information content, and acquisition speed. Some are destructive to the sample, while others are non-invasive and suitable for living specimens. Several were first applied in a confocal arrangement and then expanded for wide-field use, such as optical coherence tomography and coherent anti-Stokes Raman scattering microscopy. The fastest three-dimensional microscopes are able to image an entire volume with a single frame capture, and are described at the end of this subsection.

1.3.1. Confocal Scanning Microscopy (CSM)

In confocal microscopy, a small region is illuminated on the object, and a pinhole or spatial filter in the conjugate image plane discriminates between focused and unfocused light.


Therefore, if the illuminated region of the object is in focus, most of the light from that point will reach the detector, whereas the incident intensity will be spread out over defocused regions such that only a small fraction of scattered light reaches the detector. Since only one point is illuminated at a time, the system must scan laterally (in x and y) to complete a single section, then along the optical axis to sample the volume. Marvin Minsky devised the first confocal system in 1961 in an effort to view Golgi apparatus in brain tissue [5, 6]. Curiously, about two decades passed before the confocal scanning microscope received much interest.

Today the CSM sets the standard for three-dimensional imaging quality. Because it restricts illumination to a focused point, it has a strongly peaked response at the focal plane, or high sectioning strength. The disadvantage of the system is the (sometimes prohibitive) length of time required to complete a volume scan. However, the acquisition speed has been greatly improved by using multiple pinholes to structure the light and allow parallel confocal scanning. The spinning Nipkow disk [7] is one such example system that allows real-time performance. Although typical disks might have hundreds of pinholes, as many as tens or even hundreds of thousands have been used [8]. An alternative is to employ a spatial light modulator (SLM) to generate a mask of pixel-sized pinholes, as in programmable array microscopy (PAM), to eliminate mechanical moving parts [9]. There is a tradeoff between the number of pinholes and sectioning strength, since if the spacing between holes is too small, nearest neighbors begin to pollute one another. The minimum spacing needed to preserve confocality has been measured [10], so that the total acquisition time can be estimated based upon the scanning speed of the mask. Another variation illuminates a narrow line across the entire field so that scanning is necessary in only one dimension [11]. This allows an obvious speed gain but leads to anisotropic images with reduced sectioning along one dimension.


Overall, the CSM is the primary method used for three-dimensional imaging and will likely be so in the future. Its main weakness, long scan times, has been substantially reduced through parallelization. In some cases, with the use of image processing to help correct for object motion, the CSM has been applied to live specimens such as a retina [12] for detection of glaucoma. Yet real-time confocal systems are complicated and expensive, so alternatives such as structured illumination microscopy offer competitive advantages.

1.3.2. Structured Illumination Microscopy (SIM)

In 1997, a faster method of obtaining optical sections based upon structured light was proposed [13]. In structured illumination microscopy a linear sinusoidal grid imposes a periodic pattern of a single spatial frequency on the incident beam cross section. Three images corresponding to three phases of the grid are obtained by triggering the camera to a mechanical grid actuator. The focused section is then extracted computationally from differences between these images; the complete mathematical details are available in Section 5. Figure 1.1 shows the three phases of an example grid.

Figure 1.1: Three different phases of the linear sinusoidal grid used for structured illumination microscopy.
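The grids of Fig. 1.1 are simple to synthesize; a sketch assuming unit modulation depth (parameter values are illustrative only):

    import numpy as np

    def sim_grid(nx, ny, period_px, phase):
        """Linear sinusoidal grid: S(x) = (1 + cos(2*pi*x/period + phase)) / 2."""
        x = np.arange(nx)
        row = 0.5 * (1.0 + np.cos(2 * np.pi * x / period_px + phase))
        return np.tile(row, (ny, 1))

    # the three SIM phase offsets: 0, 2*pi/3, 4*pi/3
    grids = [sim_grid(320, 240, 16, k * 2 * np.pi / 3) for k in range(3)]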


Due to its simplicity and speed, SIM is increasingly popular in both machine vision and bioscience applications. Its sectioning strength is only slightly inferior to that of the CSM [14]. It is readily implemented in reflective mode, as was done initially and in my experiments. SIM also performs well in conjunction with fluorescence microscopy for transparent specimens, as in the ApoTome imaging system available from Zeiss [15], since it excludes emission from fluorophores excited outside of the depth of focus that would otherwise contribute a strong defocused signal [16]. It has been proven capable of sectioning within optically dense tissue [17] and was successfully applied to confocal endoscopy [18].

The main disadvantage of SIM is that linear artifacts sometimes appear in the extracted sectioned image, an example of which is shown in Fig. 1.2. These may be caused by improper synchronization of camera and grid actuator leading to imperfect phase sampling, flaws in the grid construction, fluctuations of light levels between frame captures, use of a binary rather than sinusoidal grid, and other less common causes.

Figure 1.2: Structured illumination sections before (left) and after (right) artifact reduction, from [19].


Several procedures have been developed to correct for these linear artifacts. The simplest is to normalize the intensity of the three phase-offset images, which is appropriate provided that the grid is fine enough to ensure the same total intensity in the three frames, assuming a light source of constant brightness. Another straightforward correction is to suppress the artifact frequency component from within the spectrum after a Fourier transform. It is also possible to track the phase of the grid as observations are recorded and compute the section from several semi-arbitrary grid positions. These manipulations allow significant improvement in the quality of the final focused image, as shown in Fig. 1.2 from [19]. Another post-processing technique offers further gains by scaling the three intensities according to optimization of several parameters [20]. An area modulation approach demonstrates one way to project the sinusoidal fringe pattern with fewer defects than the usual grid [21]. When these corrections are applied, most linear artifacts can be removed.

Structured illumination microscopy entails other subtleties that imply both advantages and disadvantages. Experiments indicate that a binary, rather than sinusoidal, grid results in increased light efficiency for higher contrast images. My simulation of the axial sectioning strength of the system by scanning a planar mirror through focus (Fig. 1.3) shows a similar response for the two grid styles. However, the use of the binary grid leads to artifacts at higher harmonics of the fundamental grid frequency. Interestingly, higher harmonics can be useful to obtain super-resolved images by recording their beat frequencies with normally unobservable object spatial frequencies [22], as in saturated SIM. Examples such as this and my own work on color structured illumination microscopy suggest that other benefits of SIM are yet to be realized.


Figure 1.3: Comparison of simulated SIM axial response (normalized response versus distance in microns) with binary (on/off) versus sinusoidal grid.

1.3.3. Fluorescence Imaging

Perhaps the most rapidly expanding area is that of fluorescence imaging. As described above, fluorescence is used in both confocal and structured illumination setups. Fluorescence imaging also offers super-resolution capability [23]. In the two-photon implementation [24], it provides another way to achieve confocality. Since two photons of lower energy must be absorbed by the same fluorophore to excite its fluorescence level, the emission rate increases quadratically with incident intensity. This leads to sectioning, as the intensity is highly concentrated at the focal plane. Two-photon fluorescence microscopy is well suited to biological imaging as it deeply penetrates turbid media with nondestructive light intensity [25, 26] and maintains a higher signal to noise ratio than CSM. While a confocal arrangement is most typical, a wide-field version is also available for rapid acquisition [27].


Fluorescence is used in other sectioning devices besides those employing two-photon or multiphoton excitation. Speckle patterns have recently proven successful for wide-field sectioning [28]. Another variant, known as fluorescence lifetime imaging microscopy (FLIM), tracks specific fluorophores by recording radiative and non-radiative decay rates. FLIM has been applied in conjunction with both parallel confocal and structured illumination setups [8, 29].

One of the main drawbacks of fluorescence microscopy is that it is sometimes difficult to find fluorophores that bind in the desired distribution within the sample or fluoresce at desired wavelengths. Quantum dots may help resolve this problem as they can be tailored to specific applications. However, they do not alleviate the other major problem of toxicity of fluorophores to living specimens.

1.3.4. Digital Holography

Digital holography (DH) is inherently three-dimensional as it records both the phase and amplitude at the image plane where coherent reference and object waves interfere [30, 31]. In the usual monochromatic laser setup, depth information in the phase map contains ambiguities when the object extends axially beyond one wavelength of the illumination. While these ambiguities are often resolvable by computer processing (see Appendix 3), use of multiwavelength DH offers an experimental solution [32-34]. Multiple wavelengths are also used with short coherence sources, such as LEDs, to generate holograms from interferograms to obtain the extended surface profile [35, 36], which adds a firm foundation to my proposed work on extended focused imaging [37] in Section 7.


When a large number of wavelengths are scanned to record holograms of the same object, complete three-dimensional information is obtained, making sectioning possible [38]. Efforts are currently being made in KimLab to apply this to retinal measurements. DH may also be combined with fluorescence imaging to obtain the benefits of both methods for rapid three-dimensional image acquisition [39].

1.3.5. Other Three-Dimensional Microscopy Techniques

Recent advances in optical physics and computer technology have made a variety of sectioning techniques available. The list is too long to include thorough descriptions, but a fair amount of literature is available, and the basic principles of some of them are listed here. A number of nonlinear interactions have been exploited that depend upon the dielectric susceptibility of the object, and therefore are particularly useful for surface imaging or interface detection. These include second harmonic generation [40-42], sum-frequency generation [40, 42], third harmonic generation [43, 44], and coherent anti-Stokes Raman scattering (CARS) [45-47]. For example, the nonlinear polarization generated in second harmonic microscopy is given by

P^(2)(2ω) = ε_0 χ^(2) E(ω)E(ω)    (1.2)

where χ^(2) is the nonlinear susceptibility of the medium [42]. Since χ^(2) vanishes under inversion symmetry in the bulk of the medium, interfaces are emphasized.
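The symmetry argument, made explicit (a standard step not spelled out above): under inversion, P → −P and E → −E, so Eq. (1.2) would require −P^(2) = ε_0 χ^(2) (−E)(−E) = +P^(2), which forces χ^(2) = 0 in a centrosymmetric bulk medium.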


A short-coherence-length method known as optical coherence tomography (OCT) is well known for its high-resolution images of biological samples at depths beyond the limit of conventional and confocal microscopes [48]. It may be thought of as the optical equivalent of ultrasound, a competing technology, but with higher resolution due to the shorter wavelength of light. Although originally a grayscale pointwise scanning method [49], OCT has recently been expanded to encompass both wide field and color [50, 51]. It is fast, non-invasive, and currently used in the medical community to measure retinal blood vessel response to pressure for glaucoma detection.

Lesser-known optical sectioning techniques abound. An example of a near-field scanning optical microscope (NSOM) is the total internal reflection microscope (TIRM), which can be used for three-dimensional microscopy after application of inverse-scattering theory [52]. Selective plane illumination microscopy (SPIM) is a fast sectioning method, usually applied to larger samples such as fish embryos [53], in which planes are recorded from multiple directions. In combination with 4Pi and theta microscopy, an extended, very high-resolution three-dimensional imaging system has been implemented using six microscope objectives [54]. Medical imaging is so vital and complex [55] that one may expect continuing advances in current technology and an ever-expanding set of new microscopes to be invented.

1.3.6. Computational Microscopy

To this point physical and optical sectioning have been discussed, but there is another tool for sectioning that does not fall into either of these categories, that of computational microscopy [56]. In this scenario, a conventional light microscope captures a series of images along the optical axis and a purely numerical attempt is made to extract the autofocus image. In one version of computational microscopy, a point spread function is assumed known (via measurement or from theory) and applied to obtain best focus within each layer.


Since the layers are analyzed as a set, the defocused component arising in any given layer due to its neighbors may be subtracted for much improved clarity. Information and ready-made software packages are available from several vendors [57-59]. When little or no information is known about the point spread function (PSF) a priori, blind deconvolution is applied [60]. In this case, an initial PSF is guessed and improved by iteration. An example using freely available software is shown in Fig. 1.4: an ant head from Alex Khmaladze's scanning photon microscope. The main advantages of computational microscopy are cost, ease of use (since no special hardware is needed), and functionality at low light levels, allowing for long exposures of biological specimens without causing damage.

Figure 1.4: Ant head before (left) and after (right) deconvolution by the program Unshake.
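As a concrete (non-blind) stand-in for such packages, Richardson-Lucy deconvolution with an assumed Gaussian PSF runs in a few lines with scikit-image; this is a sketch under that PSF assumption, not the algorithm used by Unshake:

    import numpy as np
    from skimage import restoration

    def gaussian_psf(size=9, sigma=2.0):
        """Assumed point spread function; in practice the PSF would be
        measured or derived from theory as described above."""
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
        return psf / psf.sum()

    def deconvolve(image, iterations=30):
        """Iterative Richardson-Lucy restoration of one defocused layer."""
        return restoration.richardson_lucy(image, gaussian_psf(), num_iter=iterations)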


1.3.7. Fastest Three-Dimensional Microscopy Methods

New advances toward obtaining three-dimensional images at the theoretically fastest possible rate of a single image per volume have recently proven viable. One example is wavefront coding, in which the focus of light is essentially spread evenly throughout the axial region of interest and decoded into a focused image by computer processing [61, 62]. Another is the use of rotational shear interferometry in a conventional wide-field microscope, which is somewhat complex but has a nearly infinite depth of field [63]. In extended focused imaging (EFI), intersections between intensity reconstructions and the phase-map height profile are used to find the most focused component at each axial reconstruction position, and these are composed into the autofocus image [64]. To date this has been accomplished only with uncomplicated synthetic objects and is susceptible to coherent noise, but the straightforward modification proposed in Sec. 7 is likely to improve results. While it seems that the unbeatable speed of these methods would make them the researcher's choice of the future, tradeoffs between acquisition time and information content/quality suggest that they will complement scanning and sectioning microscopes as laboratory tools.

1.4. Summary of Content: My Contribution

Now that the variety of three-dimensional microscopic imaging systems has been at least partially reviewed, we turn to the work at hand to understand how it enriches the field. Firstly, I've adopted the European format of compiling the body of this document from peer-reviewed manuscripts. This is most efficient and ensures overall high quality. However, a number of appendices have been included to supplement the journal and conference articles, which were necessarily of limited detail due to size constraints.


1.4.1. Selective Illumination Feedback Microscopy (SIFM)

Section 2 introduces selective illumination feedback microscopy (SIFM). The basic idea was to obtain an interesting gadget, the spatial light modulator (SLM), and use it in a feedback loop to perform optical sectioning. The SLM is a miniature LCD display that is capable of modulating either the phase or intensity of light passing through it; refer to Appendix 1 for details. In SIFM, the SLM acts to mask light from reaching unfocused areas. Doing so requires knowledge of which areas are unfocused and correspondence between pixels on the SLM and those on the camera — and thus, feedback. In its original form the SIFM simply projected the same image as was recorded in the previous loop back onto the object. Though this does lead to focusing, it requires many loops and is therefore slow and subject to alignment errors in feedback; if the reader is curious about alignment and feedback error propagation, details are available in Appendix 2. The SLM was later used to structure light into a checkerboard pattern that, with simple image processing, allowed the implementation of a mask after just one cycle.

Figure 1.5 shows how my laboratory skills and setup improved over the course of this project. The Image Quality Test (IQ Test), shown in its ideal form on the left of the figure, was viewed for various setups over a period of several months until an acceptable quality of feedback was attained.

Figure 1.5: The leftmost frame shows a bitmap of the IQ test. The second through fifth frames show imaging improvement over several months, partly attributable to switching from a laser to an LED but mostly due to using a microscope objective rather than a lens.


After the feedback system was functional, a focusing algorithm was soon added that achieved optical sectioning with a total of three image acquisitions per axial position. Note that this is fast, but no faster than SIM as described above. Nevertheless, a new three-dimensional imaging system was successfully developed.

1.4.2. Color Structured Illumination Microscopy (CSIM)

Because my independent comparison found that SIM is less sensitive to noise and of comparable speed to SIFM, work shifted toward improving SIM. A parallel form of SIM was devised [65]. The parallel method of color structured illumination microscopy (CSIM) is conceptually simple but took a great deal of experimental effort to prove, and is the subject of Sections 3 through 5. CSIM produces optical sections with a single camera exposure, so it is a factor of three faster than standard SIM for about the same quality. The first published work on the subject is reproduced in Section 3, where the principle, setup, and results proving effectiveness are shown.

Besides optical sectioning, CSIM may be viewed as a tool for noise reduction in two-dimensional imaging. Section 4 is derived from a brief conference article in which this idea is exemplified. Simulation also shows that if the background noise level is fluctuating, CSIM has potentially improved image quality compared to standard SIM. A reader only casually interested in the subject could skip this section.

Section 5 is a more thorough examination of CSIM that covers material not included in Sections 3 and 4, such as a proposal for a substantially simpler calibration procedure. It also contains a more complete mathematical description of SIM for quick reference, and additional results from computer modeling and experiment.


Section 5 is the most comprehensive of the three regarding CSIM and suffices as a stand-alone introduction to the subject. The work was presented at the SPIE conference in 2006 [66].

1.4.3. Chromatic Aberration

As CSIM uses a multi-wavelength approach, it is subject to effects from chromatic aberration (CA), as indicated in this work as well as that of other research groups. However, it was not known whether CA was the main culprit, since the results could have been compromised by a variety of other aberrations. In Section 6 it is concluded that longitudinal CA is indeed a major contributor to system performance degradation, but one that is recognizable and somewhat controllable. Results from a computer model clearly explain how CA leads to the characteristic broadening of the axial response, or decrease in sectioning strength, of SIM under white light.

1.4.4. Proposal to Improve Extended Focused Imaging

In Section 7 the recently developed digital holographic method of extended focused imaging (EFI) [64] is reviewed more carefully. It is shown how EFI can be combined with a multi-wavelength setup to remove two-pi ambiguities, which will allow biological imaging to be performed. Furthermore, by illuminating with short-coherence-length LEDs rather than a laser, speckle noise is reduced. Since the proposed work will combine already established principles, it should lead to the desired result of high-quality three-dimensional imaging with a single exposure.


1.4.5. Summary of Appendices

More information about the spatial light modulator used in both the SIFM and chromatic aberration experiments is available in Appendix 1. Appendix 2 shows how the many undesirable feedback effects caused by imperfect alignment of the SIFM restrict its useful number of feedback iterations. My work with Chris Mann involved development of algorithms to unwrap the phase maps obtained from digital holograms. Since this also relates to the content of Section 7, details about phase unwrapping are included in Appendix 3. The convolution theorem is so generally useful for image processing that I was compelled to present it somewhere in this document, and it found a place in Appendix 4. In particular, it was used as a rapid way to implement a geometrical point spread function for simulation of the axial response of CSIM to a moving object. Numerous programs were written in association with my work and to assist other researchers on their projects; these are summarized in Appendix 5. The geometrical point spread function used to obtain results such as those in Fig. 1.3 is explained in Appendix 6. Although I used a transmissive SLM, a reflective setup is also possible, as shown in Appendix 7. This might be preferable, since most light modulators are reflective and a higher light efficiency could be achieved even with the double pass through the beam splitter. Finally, Appendix 8 lists my scientific accomplishments to date.


2. Optical Sectioning by Selective Illumination Feedback Microscopy

Selective illumination feedback microscopy is introduced as a new technique for optical sectioning microscopy. A liquid crystal spatial light modulator (SLM) is used to project structured light onto an object, whose image is captured by a CCD camera and processed by computer to extract the in-focus areas of the image. The processed image is fed back to the SLM to illuminate only the in-focus areas of the object. The final image captured by the CCD exhibits optical sectioning. The selective illumination principle is demonstrated both experimentally and with computer simulations, implying a range of potential three-dimensional microscopy applications.

Keywords: Three-dimensional microscopy; Optical sectioning; Image processing

(This section, with minor modification, was submitted for publication to the Journal of Modern Optics by L. G. Krzewina and M. K. Kim, 2005.)

2.1. Introduction

The ability of a microscope to gather in-focus information while suppressing unfocused areas is highly advantageous. To that end, the confocal scanning microscope (CSM) [7] has proven an invaluable instrument in both the physical and life sciences. However, by limiting detector throughput to a point-like region, the CSM must be scanned both laterally and longitudinally to obtain a complete three-dimensional image. A much faster method, structured illumination microscopy (SIM) [13], has been used to obtain wide-field focused images by illumination with three-phase sinusoidal linear grating patterns.


The imaging speed and ease of integration into a conventional microscope make SIM appealing, but even with precision instrumentation it is difficult to fully subtract the illumination pattern from the final image, and linear artifacts often result [20]. Here, we present an alternative approach that achieves wide-field optical sectioning via structured illumination through a spatial light modulator plus a feedback loop to acquire selectively illuminated, in-focus images.

The SLM is increasingly utilized in a variety of optical systems. For example, an SLM has been placed in a feedback loop to correct for uneven reflectivity in topographic measurements by fringe projection [67]. An excellent example of selective illumination is spatially selective laser irradiation [68], where an SLM was used to specifically target pathological tissue on biological surfaces in real time, leaving neighboring healthy regions undamaged. The SLM also proves a convenient way to generate structured light, and may be used in lieu of mechanical moving parts such as the spinning Nipkow disk in multiple-aperture confocal imaging systems [9, 10]. In our setup, we employ an SLM first to produce a structured light pattern, a checkerboard, and again after a feedback loop as a mask to illuminate only selected areas of the field, in what we refer to as selective illumination feedback microscopy (SIFM).

2.2. Experimental Setup

The SIFM setup is shown in Fig. 2.1. Incoherent light from a high-intensity white LED is directed toward the SLM in a Köhler illumination arrangement. The SLM is a Kopin model KCD-QD01-AA CyberDisplay transmissive LCD panel having physical dimensions of 4.80 x 3.60 mm², partitioned into 320 x 240 pixels with 8-bit grayscale precision.


Its quoted contrast ratio is 90:1, with a transmissivity of 7%. The intensity-modulated light passes through a non-polarizing beam splitter to be focused by a microscope objective (focal length 16 mm, 0.25 NA) onto an object. Light scattered by the object passes back through the microscope objective and beam splitter, and is imaged onto the camera, an 8-bit grayscale CCD array of resolution 640 x 480 pixels with 9 μm pitch.

Figure 2.1: SIFM setup with LED, collimating lens L, the SLM, microscope objective MO, object or specimen S, beam splitter BS, and the CCD camera connected to personal computer PC.


The camera and SLM are both conjugate to the object. The magnification M of the system is defined by the magnification from the object plane to the SLM plane. Since the SLM has a width of 4.80 mm, we have

M = 4.80×10³/w    (2.1)

where w is the field width in microns. After image capture and analysis, a new pattern is displayed on the SLM that masks light from reaching defocused areas.

2.3. Calibration

In order to ascertain pixelwise correspondence between the SLM and CCD arrays, the SIFM process starts with mapping pixels between the SLM and those on the camera, similarly to [67]. This is done only once, during calibration. Since the SLM has fewer pixels than the camera, the map Φ from the SLM to the camera will be a one-to-many function. Consequently, the inverse map Φ⁻¹ transposes a group of pixels on the camera to its source of illumination on the SLM in a many-to-one mapping. The source pixel most strongly illuminating a particular camera pixel is then found from

(i,j)_SLM = Φ⁻¹(i,j)_camera    (2.2)

Here the (x, y) pixel positions are denoted by integer values (i, j). Selective illumination is achieved by analyzing the image at the camera to determine the desired pixels to illuminate, then using Φ⁻¹ to display the appropriate illumination mask on the SLM. To map an individual SLM point, a cross is displayed so that its image, reflected from a plane mirror object, appears focused at the camera.


The summed intensity values for camera rows and columns are computed:

I_x(i) = Σ_{j=0}^{N−1} I(i,j)  and  I_y(j) = Σ_{i=0}^{M−1} I(i,j)    (2.3)

In these expressions, I is the intensity, N is the number of rows, and M is the number of columns on the camera. The center of the cross is located at the (i,j) where I_x(i)·I_y(j) is maximized. This method is more robust than simply displaying a single point and searching for its location in the camera image. To generate Φ, four points near the corners of the field of view are mapped and the rest are interpolated. This fast mapping assumes an aberration-free field; for curved fields a slower approach is possible. After the mapping, the system acts as if the SLM were aligned and stretched to exactly cover the active camera region, to within integer mapping round-off errors, allowing closed-loop feedback implementation.
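Both calibration steps condense to a few array operations. A sketch (function names, the bilinear interpolation model, and default sizes are my assumptions, consistent with the aberration-free field assumed above):

    import numpy as np

    def locate_cross(image):
        """Eq. (2.3): the cross center is the (i, j) maximizing I_x(i) * I_y(j).
        image is indexed [row, col]; i is the column index, j the row index."""
        Ix = image.sum(axis=0)            # I_x(i): summed over rows
        Iy = image.sum(axis=1)            # I_y(j): summed over columns
        return int(np.argmax(Ix)), int(np.argmax(Iy))

    def build_map(slm_pts, cam_pts, slm_w=320, slm_h=240):
        """Interpolate the SLM -> camera map Phi from four measured corner
        correspondences (lists of (i, j) pairs) via a bilinear fit."""
        A = np.array([[1.0, i, j, i * j] for i, j in slm_pts])
        ci = np.linalg.solve(A, np.array([p[0] for p in cam_pts], dtype=float))
        cj = np.linalg.solve(A, np.array([p[1] for p in cam_pts], dtype=float))
        jj, ii = np.mgrid[0:slm_h, 0:slm_w]
        basis = np.stack([np.ones_like(ii), ii, jj, ii * jj]).astype(float)
        cam_i = np.tensordot(ci, basis, axes=1).round().astype(int)
        cam_j = np.tensordot(cj, basis, axes=1).round().astype(int)
        return cam_i, cam_j               # camera (i, j) hit by each SLM pixel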


2.4. Selective Illumination

After calibration, structured light is used to determine the in-focus areas. A checkerboard pattern is displayed on the SLM, with each square, or superpixel, having n_s × n_s pixels (superpixels on the camera have corresponding size N_c × N_c pixels). The CCD image is captured, then the inverse checkerboard is displayed and its image is captured. Where the two superposed images overlap, they are presumed to be out of focus. To quantify this, the state of an image pixel is

S(i,j,k) = 1 if I(i,j,k) > ε, and 0 otherwise    (2.4)

where ε ≥ 0 is the noise threshold parameter, k specifies either the first (k = 0) or second (k = 1) checkerboard image, and I(i,j,k) has been normalized. This acts as an analog-to-digital converter with single-bit precision, and is used for Boolean logic calculations where 1 is on and 0 is off. A new N × M calculation array is filled via

S_out(i,j) = S(i,j,0) ⊕ S(i,j,1)    (2.5)

Here ⊕ is the symbol for the exclusive OR operation. Effectively, the two images are combined into an output array that is on where exactly one of the input images is on. Finally, the fraction of on values in each superpixel is calculated and each superpixel is then illuminated according to this fraction. A nonlinearity and good contrast are obtained by generating an SLM illumination mask

I_SLM(x,y) = γ f_ON²(x,y)    (2.6)

where (x,y) denotes superpixel location, γ is a factor to ensure 8-bit grayscale intensity values, and f_ON is given by

f_ON(x,y) = (1/N_c²) Σ_{i=0}^{N_c−1} Σ_{j=0}^{N_c−1} S_out(i,j)    (2.7)

With this, illumination to out-of-focus areas is rapidly attenuated. Note that by converting input images to Booleans, the system responds properly to variations in reflectivity, so long as the reflectivity leads to a signal above the noise level, adjustable via ε for different types of surfaces. Illumination of the SLM with the mask completes the feedback loop. Now light is projected only onto the in-focus areas of the sample and recorded by the camera.
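Eqs. (2.4)-(2.7) likewise reduce to a handful of array operations; a minimal sketch (normalized float images and the helper name are my assumptions):

    import numpy as np

    def illumination_mask(img0, img1, eps=0.1, nc=6, gamma=255.0):
        """SLM mask from the two checkerboard captures. eps: noise threshold
        (Eq. 2.4); nc: camera superpixel size N_c; gamma: 8-bit scale factor."""
        s_out = np.logical_xor(img0 > eps, img1 > eps)    # Eqs. (2.4)-(2.5)
        ny, nx = s_out.shape
        ny, nx = ny // nc * nc, nx // nc * nc             # trim to whole superpixels
        f_on = (s_out[:ny, :nx]
                .reshape(ny // nc, nc, nx // nc, nc)
                .mean(axis=(1, 3)))                       # Eq. (2.7)
        return gamma * f_on**2                            # Eq. (2.6)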


2.5. Results

We tested the axial response of the system by scanning a plane mirror through focus. A series of images was taken along z, separated by Δz = 3.2 μm, with a field width of 340 μm (M = 14×). The normalized response curve is shown in Fig. 2.2; it has a full width at half maximum (FWHM) of 19.5 μm, using a superpixel size of 6 × 6 pixels, or 90 × 90 μm² on the SLM.

Figure 2.2: SIFM normalized axial response with a plane mirror object, taken with a field size of 340 x 255 μm² (magnification of 14×) and superpixel size of 6 x 6 pixels, corresponding to 90 x 90 μm² on the SLM or 6.4 x 6.4 μm² in the object plane.

To demonstrate how the optical sectioning ability improves with higher spatial frequency light structure, the axial resolution was also measured as a function of the superpixel size, for n_s = 2, 4, 6, 8, 10, and 12 pixels, at a magnification of 6.2×.
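FWHM values like those quoted here and in Fig. 2.3 come from sampled response curves; a small helper of the sort one might use (a sketch assuming a single-peaked curve whose endpoints fall below half maximum):

    import numpy as np

    def fwhm(z, response):
        """Full width at half maximum of a sampled axial response curve,
        with linear interpolation between samples."""
        z = np.asarray(z, dtype=float)
        r = np.asarray(response, dtype=float)
        r = (r - r.min()) / (r.max() - r.min())
        above = np.where(r >= 0.5)[0]
        lo, hi = above[0], above[-1]
        z_lo = np.interp(0.5, [r[lo - 1], r[lo]], [z[lo - 1], z[lo]])
        z_hi = np.interp(0.5, [r[hi + 1], r[hi]], [z[hi + 1], z[hi]])
        return z_hi - z_lo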


For comparison, we calculated the theoretical result for structured illumination with a sinusoidal linear grating having period √2·n_s, according to equation (10) from [13], with the normalized spatial frequency ν̃ = λ/(NA·T) simplified to

ν̃ = 2.2/T    (2.8)

assuming values of λ = 550 nm, NA = 0.25, and the period T of the sinusoidal grid in microns. Note that the √2 factor accounts for the unit cell periodicity of the checkerboard along the diagonal, and that the theoretical expression for SIM sectioning ability has proven consistent with previous experimental results [13, 69]. The FWHM versus spatial period obtained experimentally for SIFM and calculated for SIM are shown in Fig. 2.3, along with output from our numerical SIFM model. Threshold values of ε = 0.1 and 0.9 were applied in simulations A and B, respectively, to show the predicted range of sectioning available at different noise levels. A value of ε = 0.1 was used in the SIFM experiment, and comparison to the simulated curve suggests better results may yet be obtainable. In terms of spatial frequency, it is apparent that by using smaller superpixels SIFM results will substantially improve, since, as was shown previously [69], both axial and transverse resolutions suffer from large pixels. SIFM and SIM each require just three camera exposures per axial position, fast by comparison to the minimum of sixteen needed for multiple-aperture confocal systems [10]. Although the SIM theory result is plotted over the entire range, due to the phase offset of 1/3 period between successive frames, the minimum spatial period available on a discrete grid is six pixels, or 90 μm in this case, so that the leftmost data point is not actually attainable.


In principle, SIFM could go further, to n_s = 1, for an even larger advantage; however, this would prove experimentally challenging due to the necessity of submicron alignment and mapping accuracy.

Figure 2.3: FWHM of response versus light structure size for experimentally observed and simulated selective illumination, and structured illumination calculated according to [13], all at 6.2× magnification. Simulations A and B applied noise tolerances of ε = 0.1 and 0.9, respectively.

The transverse resolution was measured by imaging a resolution target. We found, as expected, a lateral illumination resolution of approximately one superpixel width. However, for small superpixels the system response deteriorates, as a fractionally larger area is improperly sampled. Because of this trade-off, we used somewhat large superpixels, n_s = 6 or 8, to obtain most of our images in these proof-of-principle experiments.


Note that even if the illumination resolution is coarse, the final image still shows details in the lit regions, defined by the resolution of the camera. Transverse resolution is also diminished at longitudinal surface discontinuities, when an unfocused area pollutes its focused neighbor. Additional feedback loops may be a good way to solve this problem. Neighbor effects will decrease if several iterations are performed in which the least focused areas in each iteration are masked off. Multiple feedback loops impose stringent mapping accuracy, though, since a mapping error will be magnified N-fold after N iterations.

Figure 2.4 demonstrates topographic imaging by SIFM, where the technique is applied to the upper half of a digit in the date on a penny. The field is 780 μm wide and the superpixel size is n_s = 8 pixels. The images in the first column, (a) and (e), show conventional bright field images of two object planes separated by 44 μm. The second column, (b) and (f), shows images of the checkerboard pattern, while the third column, (c) and (g), shows the processed SLM illumination masks. The final images in the last column, (d) and (h), clearly show optical sectioning absent from the conventional images.

2.6. Conclusions

To conclude, we have demonstrated optical sectioning by selective illumination feedback microscopy. It is competitive with other surface imaging techniques in terms of resolution and acquisition speed, with comparatively minor artifacts. While for the same fast acquisition time structured illumination microscopy [13] is generally superior, particularly for noise reduction and subsurface imaging, the SIFM offers stronger absolute sectioning ability on a discrete illumination system (an SLM), and is a good candidate for niche machine vision applications such as materials or currency inspection.


Future work includes more detailed numerical simulations, the use of a high-resolution, high-contrast SLM, and the implementation of additional iterations for an increased signal-to-noise ratio.

Figure 2.4: SIFM applied to a digit in the date on a penny. The two rows, (a)-(d) and (e)-(h), correspond to planes separated by 44 μm. Bright field conventional images are shown in (a) and (e). The second column, (b) and (f), shows one of the two camera captures with illumination modulated by the checkerboard pattern. The calculated illumination masks are in the third column, (c) and (g). Finally, (d) and (h) have been selectively illuminated through the masks (c) and (g), and optical sectioning is achieved.


3. Single-Exposure Optical Sectioning by Color Structured Illumination Microscopy

Structured illumination microscopy (SIM) is a wide-field technique that rivals confocal microscopy in optical sectioning ability at a small fraction of the acquisition time. For standard detectors such as a CCD camera, SIM requires a minimum of three sequential frame captures, limiting its usefulness to static objects. By using a color grid and camera we surpass this limit and achieve optical sectioning with just a single image acquisition. The extended method is now applicable to moving objects, and improves the three-dimensional imaging speed for static objects by at least a factor of three.

OCIS Codes: 180.6900, 110.0180, 110.6880

(This section, with minor modification, has been published in Optics Letters by L. G. Krzewina and M. K. Kim (2006) [2].)

3.1. Introduction

In many physical and life science applications, the conventional light microscope suffers a loss of clarity when in-focus features are obscured by defocused neighboring regions. In recent years, instruments capable of optical sectioning have been developed to overcome this limitation. The confocal scanning microscope is a well-known example that has proven invaluable, but due to its pointwise light structure, it must be scanned both laterally and longitudinally to obtain a complete three-dimensional image. The much faster method of structured illumination microscopy (SIM) proposed by Neil et al. [13] introduces a moving sinusoidal linear grating into the illumination path and, after straightforward computer processing, yields optical sectioning with just three image acquisitions per axial position.


The grating motion must be precisely synchronized with the camera to ensure the proper phase difference between frames, implying that the object must remain stationary over a timescale typically limited by the camera frame rate. With special hardware [67] the total acquisition time may be reduced enough to apply SIM to moving objects, but this is relatively expensive. Here, we exploit the red, green, and blue (RGB) channels of a color camera to combine the three separate captures, by replacing the moving monochromatic grid with a fixed color grid. With this simple scheme, the three phase offset images are captured in parallel. Because a color grid is used, we refer to this as Color SIM (CSIM).

3.2. Experimental Setup

The experimental setup is shown in Fig. 3.1. Incoherent light from a high-intensity white LED is directed toward the grid in a Köhler illumination arrangement. The structured light passes through a non-polarizing beam splitter to be focused by an Edmund Optics microscope objective (10×/0.25 NA, tube length corrected) onto the object. Light scattered by the object passes back through the microscope objective and beam splitter, and is imaged onto the camera, a Sony XCD-X710CR color camera with a Bayer filter, for later processing. The grid and camera are both conjugate to the object, and the magnification M of the system is defined as the magnification from the object to the grid plane. Throughout this paper, M = 15.5 and the object field size is 240 × 180 μm², imaged onto an 800 × 600 CCD array.


Figure 3.1: Experimental setup. L: collimating lens; GRID: slide of color grid pattern; BS: beam splitter; MO: microscope objective (10×/0.25 NA); S: sample.

As the grid is of prime importance, we will discuss it in some detail. In standard SIM, a sinusoidal grating yields a fringe pattern S_i(x) of the approximate form

S_i(x) = 1 + m·cos(2πx/T + φ_i)    (3.1)

Here the spatial period of the grid at the object is given by

T = T_0/M    (3.2)

where T_0 is the unmagnified grid period and m is the modulation depth.

PAGE 44

where T_0 is the unmagnified grid period and m is the modulation depth. Images having intensity I_i(x, y) are captured for the three phase offsets φ_1 = 0, φ_2 = 2π/3, and φ_3 = 4π/3. It can be shown [13] that the optically sectioned image of interest may be obtained from

I_sectioned = [(I_1 - I_2)² + (I_2 - I_3)² + (I_1 - I_3)²]^(1/2)    (3.3)

Evidently the indices i ∈ {1, 2, 3} can be replaced by the color channels RGB provided the responses of the channels are separable. To simplify the separation procedure and to increase light efficiency, we adopt a rectangular rather than sinusoidal grid, for which the artifact ramifications have been described previously [19, 20]. This implies a grid composed of a repeating pattern of six equally spaced stripes: green, yellow, red, magenta, blue, and cyan.

The color grids were provided by Sprint Multimedia, Inc. of Tampa, Florida as 35 mm slides. The claimed print resolution of 10.6 μm per line implies a minimum T_0 of 63.6 μm, but we found this grid size to be too noisy. For the data presented here we used T_0 = 254.4 μm (T = 16.4 μm at the object) and note that by adjusting M a smaller value of T may be obtained, as seen from Eq. (3.2).

The system is calibrated by finding the color values on the slides that lead to the desired levels at the camera, and by measuring the constants used in the color-decoupling equations.
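The single-exposure processing step is compact enough to state as code. The following sketch is not from the original work (the function name and array layout are illustrative); it applies Eq. (3.3) with the three phase images taken as the R, G, and B channels of one color capture:

import numpy as np

def csim_section(rgb):
    """Optical section from a single color frame via Eq. (3.3),
    treating the R, G, B channels as the three SIM phase images.
    rgb: float array of shape (H, W, 3), already color-decoupled."""
    i1, i2, i3 = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.sqrt((i1 - i2)**2 + (i2 - i3)**2 + (i1 - i3)**2)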


The generic spectral response is obtained by calibrating to a planar mirror object; for other objects, the individual response is considered during post-processing. Given raw image color channel intensities I_R0, I_G0, and I_B0, we assume a first-order linear correction:

I_R = α_RR I_R0 - α_RG I_G0 - α_RB I_B0
I_G = -α_GR I_R0 + α_GG I_G0 - α_GB I_B0
I_B = -α_BR I_R0 - α_BG I_G0 + α_BB I_B0,    with α_XY ≥ 0    (3.4)

By adjusting the levels on the 35 mm slides, we were able to use α_xx = 1. The other calibration constants were found by measuring the response to various levels of R+G, R+B, and G+B. For example, α_GR was found from the change in measured green intensity for two slides having different red levels. The raw camera RGB data is thus roughly corrected for color mixing in the light source, slides, and camera. To compensate for object color, we also balance the channels by scaling I_R, I_G, and I_B to their maximum average value. This is equivalent to the uniform intensity normalization approach used by Cole et al. [19] for SIM, a straightforward post-processing technique that substantially reduces linear artifacts.
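As an illustration of this correction pipeline, the sketch below applies the linear decoupling of Eq. (3.4) followed by the channel-balancing step. The matrix values are invented for the example and assume the sign convention of Eq. (3.4) as written above; in practice the α_XY must be measured as described.

import numpy as np

# Illustrative decoupling matrix: unit diagonal (alpha_xx = 1) with
# small negative off-diagonal entries that subtract channel crosstalk.
ALPHA = np.array([[ 1.00, -0.10, -0.05],
                  [-0.08,  1.00, -0.07],
                  [-0.03, -0.12,  1.00]])

def decouple_and_balance(raw_rgb):
    """Apply Eq. (3.4) to raw (H, W, 3) intensities, then balance the
    channels by scaling each to the maximum of the three channel means,
    the uniform-intensity normalization described in the text."""
    corrected = np.clip(raw_rgb @ ALPHA.T, 0.0, None)
    means = corrected.reshape(-1, 3).mean(axis=0)
    return corrected * (means.max() / means)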


3.3. Results

For CSIM to be successful, its optical sectioning ability must be comparable to that of standard SIM. The system response to an axially translated planar mirror is shown in Fig. 3.2, along with the theoretical curve from Eq. (10) of Ref. [13], using λ = 550 nm in the normalized spatial frequency expression ν̃ = λν/NA with ν = 1/T. The FWHM of the response from theory and from our experiment are 24.2 μm and 32.6 μm, respectively. This apparently low sectioning strength scales with the projected grid period T of Eq. (3.2) and should also improve with an optimized combination of lamp, grid, and camera.

Figure 3.2: CSIM normalized axial response with a plane mirror object (solid line) and the theoretical response calculated from Eq. (10) of Ref. [13] (dashed).

Our experimental curve exhibits the same asymmetry and broadening as that of Mitić et al. [69], which was attributed to longitudinal chromatic aberration. Since theory predicts only a 0.6 μm difference between the FWHM calculated at 450 and 650 nm (monochromatic, with other parameters the same), the broadening in our result is very likely due to spherical and chromatic aberrations from our microscope objective and
collimating lens, which are not plan corrected. Additionally, the 35 mm film has varying surface curvature due to its fabrication method.

The effectiveness of our microscope is demonstrated in Fig. 3.3, which shows a moth abdomen. A total volume of 240 x 180 x 229 μm³ was scanned in nine axial steps of size Δz = 25.4 μm, so exactly ten images were acquired.

Figure 3.3: Moth abdomen. (a) Single conventional image taken midway along z. (b) CSIM autofocus image composed from the brightest pixels in the ten sections. (c) A single CSIM section. (d) Height map in which gray levels correspond to each axial position.

The conventional view (a) shows very little of the hair structure in a single frame taken from midway along the scan. The autofocus image (b) is composed from the brightest pixels of all ten sections. One particular section is shown in (c). A height map may also be constructed from the raw data, as in (d), with ten grayscale levels indicating the z position of the various features.
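Constructing the autofocus image and height map amounts to a per-pixel maximum over the stack of sections. A minimal sketch (the names are mine, not those of the dissertation software):

import numpy as np

def autofocus_and_height(sections, dz=25.4):
    """sections: (N, H, W) stack of optical sections along z.
    Returns the autofocus image (brightest value across the stack at
    each pixel) and a height map in the same units as the step dz."""
    idx = np.argmax(sections, axis=0)                       # (H, W)
    autofocus = np.take_along_axis(sections, idx[None], axis=0)[0]
    return autofocus, idx * dz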


For instance, the bright region in the upper-right of (b) is actually recessed, as seen in (d).

3.4. Discussion

Since the application of CSIM to moving objects depends on the camera exposure time and magnification, which vary widely between systems, the ability to estimate the maximum object velocity from these parameters is desirable. First consider transverse motion, in the worst case of velocity perpendicular to the grid lines. Although it is not difficult to derive an analytical estimate for v_max, we present results from a simulation in which we found an approximately linear response:

R = 1 - vt/T  for  vt ≤ T;    R = 0  for  vt > T    (3.5)

Here R is the normalized integrated response when the grid pattern at the object shifts in phase due to either grid or object motion with relative velocity v during the exposure time t. For slow enough objects, with vt not exceeding one period, requiring a response R ≥ 0.5 allows Eq. (3.5) to be solved for v to find:

v_max = T_0 / (2Mt)    (3.6)

This expression is physically intuitive and readily employed. For example, for the pollen grain of Ref. [13], with T_0 = 25 μm and M = 13.9, and assuming an exposure time of t = 0.001 s, the maximum velocity is about 900 μm/s. This is a high velocity considering the field size of only 100 μm x 70 μm.


It remains to determine the maximum observable axial velocity v_max,z. We turn again to Eq. (10) of Ref. [13], which may be solved graphically (e.g. with Mathematica) to observe the behavior of the solution in terms of R and T. The key here is to require that the object stay within the FWHM axial range during the exposure time:

v_max,z = FWHM / t    (3.7)

While Eqs. (3.6) and (3.7) give velocities for which the system can function, other constraints, such as contiguous sectioning, may require slower translation as limited by the camera frame rate.

3.5. Conclusions

To conclude, we summarize the strengths and weaknesses of CSIM. By requiring only a single camera capture to achieve optical sectioning, it is possible to obtain sectioned images of moving objects. For static objects, a complete volume may be constructed with just one image per axial position, and since the translator does not need to stop at each z, the total acquisition time is reduced by more than a factor of three. Since no grid actuator is needed, the microscope setup is simplified and phase-offset artifacts are eliminated. The 35 mm slides are readily available and very inexpensive. As for negative points, it takes some effort to properly calibrate the system to the lamp spectrum and the camera spectral response in order to obtain a custom-made grid. Improved results are to be expected from higher quality grids, as the 35 mm slides suffer from noise and non-uniformities; one candidate for an upgraded grid is the dielectric film [70]. Even after calibration, color inherent to the object may introduce linear artifacts.


Post-processing [19, 20] and color filters help minimize this problem. The lamp must be carefully chosen to provide both optimal color channel separation and light intense enough to take advantage of the brief exposure times available in CSIM. Since the light efficiency is reduced by grid filtering by approximately a factor of ten (the lamp-to-stripe bandwidth ratio), a bright lamp or a sensitive camera is needed for this method to be applicable.

We would like to thank Sprint Multimedia for providing the color grids, often as quickly as overnight, and Bill Sargent for photography advice. This work is supported in part by a grant from the National Science Foundation.


4. Color Structured Illumination Microscopy for Imaging in Noisy Environments

In addition to single-exposure optical sectioning, color structured illumination microscopy may be applied within the noisy environments often encountered in biological imaging. Advantages over other techniques are demonstrated by both computer modeling and experiment.

OCIS codes: (180.6900) Three-dimensional microscopy; (110.0180) Microscopy

Note: This section, with minor modification, has been accepted for publication in the conference proceedings of the Biomedical Optics Topical Meeting as L. G. Krzewina and M. K. Kim (2006).

4.1. Introduction

In conventional microscopy, focused regions may be polluted by defocused neighbors and by other noise such as subsurface scattering in translucent materials and stray light from external sources. While a longitudinal scan can be performed in a series of steps to obtain the surface topology, computer processing to extract the autofocus image is not usually successful under noisy conditions. An alternative is to illuminate the object with structured light, from which focused sections, and therefore three-dimensional information, may be obtained. The simplest structure is the point, and this is the basis of the highly successful confocal microscopy. Excellent focus is obtained from pointwise illumination (and detection), but complete surface reconstruction requires scanning in x, y, and z and is slow. Parallel methods help reduce the acquisition time, but sixteen or more frame captures are still needed [9, 10].


The much faster sectioning method of structured illumination microscopy (SIM) was introduced in 1997 [13], requiring only three frames. Here a grid masks the light so that the object is illuminated with a linear pattern. Ideally, the grid is sinusoidal and its projection onto the object has the form

S_i(x) = 1 + m cos(2πx/T + φ_i)    (4.1)

In this expression m is the modulation depth and i indexes the three captures needed, each offset one-third of a spatial period T from the last, so that φ_1 = 0, φ_2 = 2π/3, and φ_3 = 4π/3. From the corresponding three image captures having intensities I_i(x, y), the sectioned image may be obtained from:

I_sectioned = [(I_1 - I_2)² + (I_2 - I_3)² + (I_1 - I_3)²]^(1/2)    (4.2)

4.2. Color Structured Illumination Microscopy

If the intensities of Eq. (4.2) are captured in parallel, only one camera acquisition is needed to achieve optical sectioning. This is possible by using a color grid and replacing the three indices by the color camera channels red, green, and blue (RGB). When the three phase-offset grids are superposed, they overlap such that the resulting grid is composed of repeating stripes of equal size colored green, yellow (Y), red, magenta (M), blue, and cyan (C). The six GYRMBC stripes together form one period T_0 that is repeated many times across the entire grid.

The grids were fabricated by Sprint Multimedia, Inc., of Tampa, Florida as 35 mm slides. In our experiments we used T_0 of 127.2 μm and 254.4 μm, with the smallest
size limited by manufacturing constraints. To simplify color channel separation and to increase light efficiency, we used solid stripes rather than a sinusoidal pattern; the artifact ramifications of this are not severe and have been described previously [19, 20]. The slides are very inexpensive and adequate for proof-of-principle experiments, but will eventually be replaced by higher quality equipment. Details about system calibration and axial response are too long to be included here but are available elsewhere [65].

4.3. Experimental Setup

The setup is shown in Fig. 4.1. Incoherent light from a high-intensity white LED is directed toward the grid in a Köhler illumination arrangement. The structured light passes through a non-polarizing beam splitter and is focused by a microscope objective (10x/0.25 NA, tube-length corrected) onto the object.

Fig. 4.1: Experimental setup. L: collimating lens; GRID: slide of color grid pattern; BS: beam splitter; MO: microscope objective (10x/0.25 NA); S: sample.


Light scattered by the object passes back through the microscope objective and beam splitter, and is imaged onto the camera, a Sony XCD-X710CR color camera with a Bayer filter, for later processing. The grid and camera are both conjugate to the object, and the magnification M of the system is defined as the magnification from the object to the grid plane. The field is imaged onto an 800 x 600 CCD array. The sectioning strength of the system increases as the period T of the grid projected onto the object decreases, and T is calculated from:

T = T_0 / M    (4.3)

Therefore, better sectioning is obtained by either a finer grid or increased magnification. By adding a lens, a real minified image of the grid could be used in place of the actual grid to improve sectioning strength.

4.4. Sectioning and Noise Reduction: Results

SIM is often used with an axial scan to compile a number of sections into the autofocus image. Another use is to reduce noise at a single longitudinal position. An example is shown in Fig. 4.2, in which a wood grain surface is imaged with a single frame capture. CSIM substantially reduces the noise from multiply scattered light that arrives at the camera as if it came from a defocused object. Here, M = 11.8 and the field size is 407 x 305 μm². Of course standard SIM would work just as well in this case, but three image acquisitions would be needed. Another possible source of noise is an external light source. If it varies in intensity on the same timescale as the camera frame rate, SIM will suffer from artifacts
due to differing noise levels in the three phase images. To show this, we ran a simple computer simulation. A focused image from a conventional microscope was subdivided into three images as if it had been illuminated by a rectangular grid at the three phases of SIM. In Fig. 4.3 (left), the result of processing the image with no noise added shows a good reproduction of the original. Then a noise level of 20% of the mean image brightness was added, and this level fluctuated by 20% between each of the three frames. The ensuing artifacts are visible in Fig. 4.3 (right). Since CSIM requires only one frame capture, the brightness fluctuations would not lead to this problem. However, CSIM suffers similar artifacts if the noise spectrum differs substantially from that of the microscope lamp to which the system is calibrated. Thus, CSIM increases the acquisition rate of SIM by a factor of three and, under certain circumstances, offers improved noise reduction.

Figure 4.2: Wood surface. The conventional microscope image (left) is degraded by noise from subsurface scattering, which CSIM (right) substantially reduces. Each image was obtained with a single camera capture.
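A sketch of this simulation is given below. It is a reimplementation under stated assumptions (binary rectangular grid, a uniform noise floor that fluctuates between the three captures), not the original program:

import numpy as np

def sim_with_fluctuating_noise(img, period=12, noise=0.2, fluct=0.2, seed=0):
    """Three-phase SIM of a focused image under an external noise
    source whose level varies between captures. img: (H, W) floats."""
    rng = np.random.default_rng(seed)
    x = np.arange(img.shape[1])
    frames = []
    for i in range(3):
        # Rectangular grid, stepped by one-third of a period per frame.
        grid = (((x + i * period // 3) % period) < period // 2).astype(float)
        level = noise * img.mean() * (1.0 + fluct * rng.uniform(-1.0, 1.0))
        frames.append(img * grid[None, :] + level)
    i1, i2, i3 = frames
    # Eq. (4.2); the frame-to-frame noise differences do not cancel
    # and appear as linear artifacts in the section.
    return np.sqrt((i1 - i2)**2 + (i2 - i3)**2 + (i1 - i3)**2)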


Figure 4.3: Turtox object used for simulated SIM with noise. The left image resulted from standard SIM with no noise added. When a 20% external illumination noise level with 20% fluctuation between frames is added, linear artifacts are evident (right).


5. Dynamic Structured Illumination Microscopy: Focused Imaging and Optical Sectioning for Moving Objects

Structured illumination microscopy (SIM) is a valuable tool for three-dimensional microscopy and has numerous applications in bioscience. Its success has been limited to static objects, though, as three sequential image acquisitions are required per final processed, focused image. To overcome this problem we have developed a multicolored grid which, when used in tandem with a color camera, is capable of performing SIM with just a single exposure. Images demonstrating optical sectioning of three-dimensional objects are presented, and results of applying color SIM for wide-field focused imaging are compared to those of SIM. From computer modeling and analytical calculations, a theoretical estimate of the maximum observable object velocity in both the lateral and axial directions is available, implying that the new method will be capable of imaging a variety of live objects. Sample images of the technique applied to lens paper and a pigeon feather are included to show both advantages and disadvantages of CSIM.

Keywords: Three-dimensional microscopy, microscopy, optical sectioning, structured illumination

Note: This section, with minor modification, has been accepted for publication in the proceedings of the SPIE Photonics West conference as L. G. Krzewina and M. K. Kim (2006).

5.1. Introduction

Imaging of microscopic biological objects presents numerous challenges. As the objects often have complicated structure, under high magnification they extend beyond
the depth of field of the microscope. When this occurs, a focused region may be severely polluted by its unfocused neighbors, and blurry images result. In a reflective setup (as used here), subsurface scattering from translucent objects may add noise over the entire surface. Optical sectioning techniques substantially improve image quality by accepting only light from within the depth of focus. This has been accomplished by a number of methods whose relative strengths and weaknesses depend upon the particular object under observation and its environment. Confocal scanning microscopy (CSM) [7] has been very successful and is in widespread use. In CSM a point is illuminated and scanned in three dimensions to sample the entire object surface or volume. A conjugate point in the detector plane rejects unfocused light through spatial filtering. Due to its pointwise scanning, CSM exhibits high sectioning strength but is comparatively slow. The speed has been increased by parallelization approaches [9, 10], but a minimum of sixteen frame captures per axial position is still required to maintain confocality. The recently introduced sectioning technique of structured illumination microscopy (SIM) [13] is a faster alternative. In SIM, a linear grid is introduced into the illumination path and three images are captured per axial position. Each image corresponds to a lateral position (phase) of the grid, as described below in Section 5.3. While SIM typically has at least the same sectioning strength as CSM, it may exhibit linear artifacts. These artifacts arise from the linear structure of the modulated light, and several methods have been developed to remove them [19, 20]. The main advantages of SIM are that it is straightforward to implement by modifying a conventional microscope and that, by requiring only three images per section, it rapidly obtains a
complete three-dimensional image. Here, we exploit the red, green, and blue (RGB) channels of the camera and replace the monochromatic, moving grid of SIM by a stationary color grid to further simplify the microscope components and reduce acquisition time. With color structured illumination microscopy (CSIM) only a single frame is needed per optical section. Since the camera shutter speed is typically orders of magnitude faster than its frame rate, this implies that CSIM is capable of sectioning moving objects that are unobservable with CSM or SIM due to motion blur.

5.2. Experimental Setup

The experimental setup is identical to that of the previous chapter. It is shown in Fig. 4.1 and fully described in Section 4.3.

5.3. Method of Structured Illumination

In the standard three-phase SIM setup, the grid is monochromatic and attached to an actuator that is synchronized with the camera so that images are taken at grid phase offsets of φ_1 = 0, φ_2 = 2π/3, and φ_3 = 4π/3. The fringe pattern appearing at the object due to the grid has the approximate form

S_i(x) = 1 + m cos(2πx/T + φ_i)    (5.1)
In this expression, m is the modulation depth, x is the lateral position, and the grid period at the object is given by:

T = T_0 / M    (5.2)

Here T_0 is the unmagnified grid period. After acquiring three images having intensities I_i(x, y), i ∈ {1, 2, 3}, the sectioned image is obtained from:

I_sectioned = [(I_1 - I_2)² + (I_2 - I_3)² + (I_1 - I_3)²]^(1/2)    (5.3)

To summarize previous results [13] for quick reference, the theoretical approximate axial response for the focused component of monochromatic light is

I(z) = |2 J_1(σ) / σ|    (5.4)

This employs the Bessel function of the first kind, J_1, and the parameter σ defined by:

σ = u ν̃ (1 - ν̃/2)    (5.5)

In this definition the convenient optical coordinates are used, so that u = (8π/λ) z sin²(α/2) and ν̃ = λν / sin(α). The grid spatial frequency is ν = 1/T, and α is obtained from the numerical aperture and index of refraction via sin(α) = NA/n. Experimental SIM results obtained via Eq. (5.3) have matched the predictions of Eq. (5.4) closely when monochromatic light was used.
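Equations (5.4)-(5.5) are straightforward to evaluate numerically. The sketch below does so; the default parameters echo values quoted in this dissertation (λ = 550 nm, T = 16.4 μm, NA = 0.25), and the function name is mine:

import numpy as np
from scipy.special import j1  # Bessel function of the first kind, order 1

def sim_axial_response(z, wavelength=550e-9, T=16.4e-6, NA=0.25, n=1.0):
    """Theoretical SIM axial response, Eqs. (5.4)-(5.5):
    I(z) = |2 J1(sigma)/sigma| with sigma = u*nu*(1 - nu/2)."""
    alpha = np.arcsin(NA / n)
    u = (8 * np.pi / wavelength) * z * np.sin(alpha / 2)**2
    nu = wavelength * (1.0 / T) / np.sin(alpha)   # normalized frequency
    sigma = u * nu * (1.0 - nu / 2.0)
    sigma = np.where(np.abs(sigma) < 1e-12, 1e-12, sigma)  # avoid 0/0
    return np.abs(2.0 * j1(sigma) / sigma)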


In CSIM, white light is used to illuminate a color grid. The grid is stationary, and the numerical indices of Eq. (5.3) are replaced by the color channels RGB. This implies a grid composed of a repeating pattern of six equally spaced stripes: green, yellow, red, magenta, blue, and cyan, as shown in Fig. 5.1, where Y, M, and C represent yellow, magenta, and cyan. This single period T_0 is repeated many times. For simplicity as well as increased light efficiency, solid rather than sinusoidal stripes are used.

Figure 5.1: Example of a single spatial period of the color grid. If the RGB components of the camera image are separated, they are (ideally) equivalent to the three images obtained with the moving grid of SIM.

5.4. Color Grid Fabrication

To avoid overlapping of light between the channels, it is necessary to construct a grid that is calibrated to the light source and the camera spectral response. Though the object will affect the final spectrum, the system is calibrated to a planar mirror, and corrections due to object color are applied through post-processing as discussed below. There are two approaches to designing the color grid. The experimental way is to measure the camera response to the different colors available (i.e., the gel dye colors or dielectric film transmissivity), then analyze the result and choose the colors that give the best response. This was done in our experiment, which used 35 mm slides. The slides were designed as JPEG images at a resolution of 3300 x 2200 dots, so that each dot was 10.6 x 10.6 μm², and were manufactured by Sprint Multimedia, Inc. of Tampa, Florida. Test slides were made having an 8 x 8 array of colors that were combinations of brightness levels of two of the primary colors R, G, or B. For example, one slide displayed eight levels of red along the left side and eight levels of green along the top, and these
values blended into shades of yellow in the other elements. This slide allowed measurement of the R, G, and Y slide values that lead to the same response at the camera. With two more slides, R+B and G+B, the other brightness levels were found. Besides requiring the same brightness response at the camera for the desired color, one must also minimize the overlap of this color into other channels. For example, pure green on the slide may overlap into the red channel of the camera, which would then falsely be recognized as a yellow stripe, so the green on the slide could be corrected by tinting it blue. However, even with care and several iterations, it is likely that there will be undesired overlap between the color channels measured from the final grid. This may be compensated by assigning color-decoupling coefficients to each raw measurement from the camera, as discussed in our initial work [65]. We label these coefficients as α_XY, where X represents the desired color and Y is the pollution into it by a different color. If the raw measured intensities are labeled I_X0, then the corrected values I_X are obtained from the transform, with α_XY ≥ 0:

I_R = α_RR I_R0 - α_RG I_G0 - α_RB I_B0
I_G = -α_GR I_R0 + α_GG I_G0 - α_GB I_B0
I_B = -α_BR I_R0 - α_BG I_G0 + α_BB I_B0    (5.6)

The decoupling coefficients may be measured from the test slides by increasing the Y component slightly and recording the increase in X. This experimental calibration approach was necessary for the 35 mm slides because we did not know their spectral properties; while effective, it was tedious and time consuming. A far simpler way to calibrate the system exists when spectral information is available. Given the fractional emissivity of the lamp E(λ), the transmissivity of a
grid stripe T_Y(λ), and the camera sensitivity S(λ), the net measured intensity is just:

I = ∫ E(λ) T(λ) S(λ) dλ    (5.7)

The decoupling coefficients are then readily calculated. For example, after transmission through the green stripe with transmissivity T_G, the pollution of the red channel is

α_RG = I_R(T_G) / I_G(T_G)    (5.8)

where I_X(T_Y) denotes the intensity of Eq. (5.7) evaluated with stripe transmissivity T_Y and channel sensitivity S_X. This assumes a linear response of the system to varying lamp intensity, and as such it is a first-order correction. In principle the α_XY may be even more carefully calculated or measured, but in practice the color of the object and other chromatic variations make doing so superfluous. Although our results have thus far been obtained with a white LED and a 35 mm slide for the grid, future work includes a brighter lamp with an evenly distributed spectrum and an improved grid. This will allow us to perform the calculation of Eq. (5.7) and improve the results, since the LED has large intensity variations over the visible spectrum and the slides are not as uniform as typical laboratory equipment. A high-frequency 14 W fluorescent ring illuminator, model number TLC 8, is available from Hi-Tech Lamps [71]. Its emission spectrum is shown in Fig. 5.2. One potential supplier of an improved color grid is Brewer Science [72], from which a prototype dichroic thin film filter could be obtained. This company has films with transmission curves matching all six GYRMBC color grid stripes. They also provide software for fine-tuning the transmission based on film thickness. A sample of the transmission through their red film is shown in Fig. 5.3.
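Given tabulated spectra, Eqs. (5.7) and (5.8) reduce to numerical quadrature. A minimal sketch, assuming the curves are sampled on a common wavelength grid (the array and function names are illustrative):

import numpy as np

def measured_intensity(wl, E, T_stripe, S_channel):
    """Eq. (5.7): the integral of E*T*S over wavelength, evaluated by
    the trapezoidal rule on the sampled grid wl."""
    return np.trapz(E * T_stripe * S_channel, wl)

def decoupling_coefficient(wl, E, T_stripe, S_polluted, S_desired):
    """Eq. (5.8)-style ratio, e.g. alpha_RG = I_R(T_G) / I_G(T_G)."""
    return (measured_intensity(wl, E, T_stripe, S_polluted) /
            measured_intensity(wl, E, T_stripe, S_desired))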


The spectral response of the camera is shown in Fig. 5.4. Finally, in Fig. 5.5 we have calculated the net expected response for the fluorescent lamp, the red thin film of Fig. 5.3, and the camera. In Figs. 5.2-5.5 the wavelength range is 400 to 700 nm and normalized intensity is plotted. From Fig. 5.5 it appears that this film has a good response for the given lamp and camera, as the undesired green and blue components are small compared to that of red. Quantitatively, from the areas under the curves it is found from Eq. (5.8) that α_GR = 0.24 and α_BR = 0.08. Once all of the α_XY have been calculated, if they are small enough that Eq. (5.6) is consistent, the grid can be manufactured with some confidence.

Figure 5.2: Emission spectrum of a fluorescent lamp.

Figure 5.3: Example red thin film transmission.


Figure 5.4: Camera sensitivity for each channel.

Figure 5.5: System response calculated from Eq. (5.7).

5.5. Results

After calibrating the setup to a plane mirror, we scanned the mirror through focus and recorded the response as a function of axial position. We found a response consistent with Eq. (5.4), but broadened by about 30%. This is still sufficient to clearly show optical sectioning. Lens paper was selected as a suitable example of a synthetic object and is shown in Fig. 5.6. A total volume of 240 x 180 x 127 μm³ was scanned in five axial steps of size Δz = 25.4 μm, so exactly six images were acquired. The magnification was M = 15.5 with a grid of spatial period T_0 = 254.4 μm. The conventional microscope view of Fig. 5.6a shows good detail in some areas, but much of the image is clouded by unfocused regions. By performing the CSIM axial scan and composing the autofocus image from the brightest pixels at each lateral position across all the sections, a complete focused image is obtained, as in Fig. 5.6b. The first section is shown in Fig. 5.6c,
where the suppression of unfocused light is apparent. In addition to a focused image, a surface height profile is also available, with resolution given by the sectioning strength, as shown in Fig. 5.6d. The height map helps interpret the autofocus image: the dark region in the upper-left of Fig. 5.6d shows that the corresponding structure in the autofocus image is recessed.

Figure 5.6: Lens paper. (a) Single conventional image taken midway along z. (b) CSIM autofocus image composed from the brightest pixels in the six sections. (c) A single CSIM section. (d) Height map in which gray levels correspond to each axial position.

Figure 5.7 shows a 240 x 180 μm² area of a pigeon feather. On the left side is a conventional image. The autofocus image composed of 11 sections is central to the figure. On the right is the height map, in which the three distinct depths are highlighted. To compensate for object color, we balance the RGB channels by scaling to their maximum average value. This is equivalent to the uniform intensity normalization approach used by Cole et al. [19] for SIM, a straightforward post-processing technique that substantially reduces linear artifacts, though some are still present here.


Although these results are for inanimate objects, we have successfully applied CSIM to section a laterally oscillating metal loop. Because the loop is not interesting to view, we omit displaying it and instead turn to a calculation of the object velocities for which CSIM may be applied.

Figure 5.7: Pigeon feather. Left to right: conventional image; autofocus from 11 sections; height map.

5.6. Discussion

The results show that CSIM is able to obtain optical sections and provide three-dimensional information. For stationary objects an axial scan will be completed at least three times faster than with SIM, since it is not necessary to pause at each axial position where only a single frame acquisition is needed. Because the total time needed to obtain a section is limited by the camera shutter speed (assuming ample light), CSIM may be applied to moving objects. A version of SIM with a millisecond acquisition time for the three frames has been reported, which would also allow dynamic imaging, but it relied on special hardware [69]. In any case, it can be seen from Fig. 5.8 that three-phase SIM is very sensitive to phase errors between the images processed via Eq. (5.3). To ensure a small intensity error, the phase error must be kept under about 20%.


Figure 5.8: The fractional intensity error calculated as a function of the phase error between frames of three-phase SIM.

The error is estimated analytically as follows. The absolute phase error introduced between two successive frames separated by time t, for an object moving with velocity v(t), is

Δφ = (2π/T) ∫₀ᵗ v(t′) dt′    (mod 2π)    (5.9)

Assuming the severe case of v(t) perpendicular to the grid, and using v(t) ≈ v_0 for small enough t, this becomes

Δφ = 2π v_0 t / T    (mod 2π)    (5.10)

Since a relative phase of 2π/3 is desired between frames, the fractional phase error is

ε = Δφ / (2π/3) = 3 v_0 t / T    (5.11)

in which the modulo 2π has been dropped assuming a small error. Accordingly, to restrict the error to some desired limit ε, the maximum observable velocity of the object is:

v_0 = ε T / (3t)    (5.12)

If three images are needed, the time t is the inverse of the camera frame rate. Supposing 30 frames per second, a fractional phase error of 0.2, and our experimental value of T = 16.4 μm, a maximum transverse velocity of 33 μm/s is obtained. For CSIM, given a shutter speed of 1 ms, a velocity of over 1,000 μm/s could be successfully sectioned. This agrees qualitatively with our simulated velocity estimate [65], though the simulated CSIM expression has a larger constant multiplier because of its fixed relative phase positions.
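A quick numerical check of Eq. (5.12) reproduces the two limits quoted above:

# Eq. (5.12): v0 = eps * T / (3 t)
T = 16.4          # grid period at the object, microns
eps = 0.2         # allowed fractional phase error
t_sim = 1 / 30    # SIM: frame time at 30 frames per second, s
t_csim = 1e-3     # CSIM: 1 ms shutter time, s

print(eps * T / (3 * t_sim))   # ~33 microns/s for three-phase SIM
print(eps * T / (3 * t_csim))  # ~1090 microns/s for CSIM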


5.7. Conclusions

From the above we conclude that CSIM is a good tool for optical sectioning of moving objects. Its main drawbacks include reduced light efficiency, linear artifacts, and, in some systems, a broadened response due to longitudinal chromatic aberration. With proper calibration and equipment, these potential problems may be minimized or eliminated.


6. Chromatic Aberration in Structured Illumination Microscopy

The recently introduced optical sectioning method of color structured illumination microscopy shows broadening apparently due to longitudinal chromatic aberration. Reduced sectioning strength has also been observed under polychromatic lighting in other optical sectioning microscopes. The experiments and computer simulations presented in this section provide insight into this problem by offering a simple way to recognize and characterize the effects of longitudinal chromatic aberration in structured illumination microscopy.

Note: This section, with minor modification, will be submitted for publication to Optics Communications as L. G. Krzewina and M. K. Kim (2006).

6.1. Introduction

At high magnification the depth of field of the conventional light microscope becomes so narrow that large portions of extended objects appear out of focus at any particular axial position. A variety of solutions have been developed to extend the effective depth of field and make possible the visualization of focused three-dimensional surfaces. The confocal scanning microscope (CSM) has been highly successful in this area and is an industry standard. In CSM, a point-like region is illuminated, scanned throughout the object volume, and imaged at a conjugate detector point. A straightforward optimization is to use multiple point illumination-detector pairs to perform parallel CSM. This has been achieved with the spinning Nipkow disk [7].


To eliminate moving parts, the spatial light modulator (SLM) has also been employed [9, 10]. While longitudinal chromatic aberration (LCA) is generally disadvantageous, when a known amount of LCA is introduced into the CSM setup it provides depth information via color, making the axial scan unnecessary [73]. In structured illumination microscopy (SIM) [13] an optical section, or focused slice, is obtained at any axial position by interjecting a moving linear sinusoidal grid into the illumination path and capturing three sequential images corresponding to grid phases incrementally offset by one-third of a spatial period. Simple image processing combines the three images into a focused section, and the autofocus image is composed from the brightest points of the images resulting from an axial scan. Lower than expected sectioning strength in a recent SIM experiment using a very fast smart pixel detector array under white light illumination was attributed to LCA [69] but not discussed in detail. We observed similar reduced sectioning strength in the parallel SIM method of color structured illumination microscopy (CSIM), in which the three phase offsets of SIM are substituted by the red, green, and blue channels of the CCD camera [65]. Our present goal is to better understand the limitations imposed by LCA. Returning to other three-dimensional imaging methods, the fastest way to extend the depth of field is to eliminate scanning entirely. In digital holography, the intensity image and phase-unwrapped surface profile are successfully combined to produce the extended focused image from a single exposure [37]. Wavefront coding is another promising single-acquisition approach, employing a pupil mask and computer processing to deconvolve a nominally blurred image and visualize the extended depth of field [74]. Here we concentrate on the effects of chromatic aberration in SIM. However,
CA should be considered in any polychromatic experiment, and some of the results presented are general enough to be applied elsewhere.

6.2. Experimental Setup

The experimental setup is shown in Fig. 6.1. Incoherent light from a high-intensity white LED is directed toward a grid in a Köhler illumination arrangement. The structured light passes through a non-polarizing beam splitter and is focused by a microscope objective (10x/0.25 NA, tube-length corrected) onto the object.

Figure 6.1: Experimental setup. L: collimating lens; GRID: spatial light modulator or 35 mm slide of color grid pattern; BS: beam splitter; MO: microscope objective (10x/0.25 NA); S: sample.

Light scattered by the object passes back through the microscope objective and beam splitter, and is imaged onto the camera, a Sony XCD-X710CR color camera with a Bayer filter. The capture area is 800 x 600 pixels of size 4.65 x 4.65 μm². The image is processed on a standard personal computer. The grid and camera are both conjugate to
the object, the magnification M of the system is defined as the magnification from the object to the grid plane, and the unmagnified grid spatial period is referred to as T_0. The three-phase grid is generated by patterns displayed on a spatial light modulator (SLM), so no actuator is needed. The SLM is a Kopin model KCD-QD01-AA CyberDisplay transmissive LCD panel having physical dimensions of 4.80 x 3.60 mm², partitioned into 320 x 240 pixels with 8-bit grayscale precision. Its quoted contrast ratio is 90:1 with a transmissivity of 7%.

6.3. Results

We begin by measuring the sectioning strength of the system under polychromatic illumination and observe a broadened response compared to a theoretical idealization that has accurately described previous monochromatic results. Then, the red, green, and blue (RGB) components measured by the color camera are analyzed individually to ascertain differences attributable to color. Next we perform computer modeling including LCA effects to see whether the experimental result is reproduced. Sectioning strength is commonly measured by stepping a planar mirror through focus and recording the response versus axial position z, as was done here. Color images were captured and then converted to grayscale, with each channel receiving 1/3 weight. In Fig. 6.2, we show the results of this procedure for SIM with white light illumination compared to an ideal, monochromatic theoretical model assuming λ = 550 nm (see Eq. (6.5) below). We used magnification M = 16.6, grid period T_0 = 253 μm, and an axial step size of 3.2 μm.


Figure 6.2: SIM experimental axial response under white lighting compared to the ideal result from monochromatic theory, Eq. (6.5).

It is seen that the experimental response is broadened and asymmetric by comparison to the theoretical curve. The FWHM of these responses are 22.8 μm (theory) and 26.1 μm (experiment), a broadening of 14.5%. Similar broadening in previous results [65, 69] was attributed to longitudinal chromatic aberration. Is LCA really to blame? Straightforward data analysis provides experimental evidence to answer this question. Because the response to the planar mirror scan was recorded with a color camera, the RGB components were already separated by the Bayer filter. These components can be considered separately and the response calculated for each, as shown in Fig. 6.3.


Figure 6.3: Experimental response for the red, green, and blue camera channels considered individually.

While there are slight differences between R and G, it is clear that the maximum response of B is longitudinally shifted. The R and G channels show less broadening (24.4 μm and 25.0 μm) than the white-light value, and the atypical shape of the B curve is due to the uneven spectrum of the white LED and overlap between color channels on the camera. Because the RGB camera channels are spectrally broad, we do not expect a perfect monochromatic result here, but this is a convenient way to observe LCA with the knowledge that the light paths for the colors were not changed between measurements. For comparison, when we used narrower-band colored LEDs as approximately monochromatic illumination, the theoretical response curve was obtained but uncertainty was added to the longitudinal positions. It is not difficult to predict LCA effects in SIM computationally. The
monochromatic axial intensity response may be obtained from the defocused optical transfer function, a good approximation of which is given by Eq. (11) of [75]. This was accomplished in Eq. (10) of [13], which we reformulate here in convenient notation for the computer model. Begin with a grid having the projected form

S_i(x) = 1 + m cos(2πx/T + φ_i)    (6.1)

where m is the modulation depth, φ_i is the phase of the three exposures (φ_i = 0, 2π/3, 4π/3), and T = T_0/M is the grid period at the object. The grid spatial frequency is then ν = 1/T. Making use of optical coordinates, with the angle α defined in terms of the numerical aperture (NA) and index of refraction n via sin(α) = NA/n, we have

u = (8π/λ) z sin²(α/2)    (6.2)

and

ν̃ = λν / sin(α)    (6.3)

Here λ is the wavelength of the incident light. After defining σ as

σ = u ν̃ (1 - ν̃/2)    (6.4)

the axial intensity response is approximated by

I(z) = |2 J_1[σ(z)] / σ(z)|    (6.5)

This function shows substantially more wavelength dependence when LCA is present than its ideal response over the visual spectrum. To simulate the axial response function including LCA, we suppose the full response to be the superposition of
responses from Eq. (6.5) at numerous wavelengths spanning the spectrum, with different wavelengths exhibiting their maximum axial response (focus) at position z_focus(λ):

I_total(z) = Σ_{i=1..N} I(z - z_focus(λ_i))    (6.6)

We used a wavelength range of 475 nm ≤ λ_i ≤ 675 nm with evenly spaced λ_i and N = 100, approximating uniform lighting with 2 nm spacing between the λ_i. The simulation assumed perfect white light, but a spectral weighting factor could easily be added to Eq. (6.6) to account for particular sources. More accurate results might be obtained by adding complexity, such as careful point spread function calculations [76], but our model is extremely fast, readily implemented, and consistent with experiment. Experimentally measured LCA data giving the focus position as a function of wavelength, relative to z_focus(546 nm) = 0, was obtained from Fig. (2.7) of [42]. This sample data illustrates that the LCA common to many microscope objectives can be expected to result in response curves similar to those observed. For the simulation we used a magnification of M = 33.2, twice that of our experimental setup, because the sample data was for a microscope objective exhibiting smaller LCA effects. The simulated curve is compared to monochromatic theory in Fig. 6.4; the expected asymmetric broadening is apparent.
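The model of Eqs. (6.5)-(6.7) is compact enough to restate as code. The sketch below is a reimplementation under stated assumptions (uniform spectrum, parabolic z_focus, and an illustrative scale a0), not the original simulation:

import numpy as np
from scipy.special import j1

def mono_response(z, wl, T=15.2e-6, NA=0.25, n=1.0):
    """Monochromatic axial response, Eq. (6.5); T = 253 um / 16.6."""
    alpha = np.arcsin(NA / n)
    u = (8 * np.pi / wl) * z * np.sin(alpha / 2)**2
    nu = wl * (1.0 / T) / np.sin(alpha)
    sigma = u * nu * (1.0 - nu / 2.0)
    sigma = np.where(np.abs(sigma) < 1e-12, 1e-12, sigma)
    return np.abs(2.0 * j1(sigma) / sigma)

def lca_response(z, a0=6e7, lam0=546e-9, N=100):
    """Eq. (6.6): superpose responses whose focus is displaced by the
    parabolic model of Eq. (6.7), z_focus = a0*(wl - lam0)**2.
    a0 = 6e7 per meter is illustrative (~1 um shift at the band edges)."""
    wls = np.linspace(475e-9, 675e-9, N)
    total = sum(mono_response(z - a0 * (wl - lam0)**2, wl) for wl in wls)
    return total / total.max()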

PAGE 78

Figure 6.4: Simulated effects of LCA for the microscope objective data from [42] compared to the ideal monochromatic theory.

Now that we have a specific example showing the shape of the LCA curve, a more general result is desirable. We apply a variation of a simple model [77], appropriate for common microscope objectives, based upon measured axial response maxima for the position of focus versus wavelength [42]:

z_focus(λ) = a_0 (λ - λ_0)²    (6.7)

This supposes that light at different wavelengths deviates quadratically from some central position at wavelength λ_0. The scale factor a_0 gives the desired range of z_focus(λ), and λ_0 = 546 nm is chosen for the central wavelength. Next we define the aberration range Δz_LCA as the average distance from focus over all wavelengths, which can be calculated from

Δz_LCA = (1/N) Σ_{i=1..N} |z_focus(λ_i)|    (6.8)

Or, for the parabolic focus distribution of Eq. (6.7), with (λ_1, λ_2) = (475, 675) nm:
Δz_LCA,parabola = a_0 [(λ_2 - λ_0)³ - (λ_1 - λ_0)³] / [3(λ_2 - λ_1)]    (6.9)

Though somewhat arbitrary, this definition allows the fractional broadening of the axial response to be given as

ΔF(Δz_LCA) = F/F_0 - 1    (6.10)

Here the FWHM of the axial response is abbreviated by F, so that F_0 is the FWHM from monochromatic theory and F(Δz_LCA) increases with the chromatic aberration. In our model we apply Eq. (6.9) and vary a_0 to simulate different levels of LCA (for a particular setup, one would obtain a_0 by measuring Δz_LCA). Finally, the fractional aberration is defined as:

Δz_frac = Δz_LCA / F_0    (6.11)

The fractional broadening ΔF versus fractional aberration Δz_frac is shown in Fig. 6.5. The axial response is substantially broadened once the LCA spreads focus over a range comparable to the depth of field.
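Pairing the simulation above with a simple FWHM estimate yields the quantities of Eqs. (6.10) and (6.11) directly. A minimal sketch, assuming the half-maximum crossings lie well inside the sampled z range:

import numpy as np

def fwhm(z, response):
    """FWHM of a sampled axial response, interpolating linearly at
    the half-maximum crossings (assumed interior to the array)."""
    half = response.max() / 2.0
    above = np.nonzero(response >= half)[0]
    i0, i1 = above[0], above[-1]
    zl = np.interp(half, [response[i0 - 1], response[i0]], [z[i0 - 1], z[i0]])
    zr = np.interp(half, [response[i1 + 1], response[i1]], [z[i1 + 1], z[i1]])
    return zr - zl

def fractional_broadening(F, F0):
    """Eq. (6.10): Delta_F = F/F0 - 1; dividing the aberration range
    by F0 likewise gives the fractional aberration of Eq. (6.11)."""
    return F / F0 - 1.0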


Figure 6.5: Fractional broadening of the axial response versus increasing fractional chromatic aberration.

6.4. Conclusions

Longitudinal chromatic aberration is a limiting factor in polychromatic structured illumination microscopy experiments. From the characterization of Fig. 6.5, an expectation of sectioning performance for a given setup can be obtained once the LCA has been measured. While spherical aberration, transverse CA, and other effects are also important, we studied LCA because of its impact on our experiments [65] and found it to be the primary source of degradation in our results. The best way to avoid LCA effects is of course to use monochromatic light, which is appropriate for three-phase SIM. The smart pixel detector array of [69]
could therefore be modified to achieve improved sectioning capability. When a wide spectrum is necessary, as in CSIM, the best solution is to use a high-NA plan-apochromat microscope objective stopped down to eliminate LCA [42]. Another possible alternative is to restrict illumination to narrow linewidths, as in a white laser, and consider the RGB channels of the camera separately. The peak axial responses could then be shifted into alignment by automated computer processing, rendering the sharpest images and maximum sectioning strength. Optics specifically corrected for LCA at the RGB wavelengths would offer the best solution, since no additional computer processing would be necessary.


7. Multi-Wavelength Extended Focused Imaging

Digital holography (DH) has recently been used to reconstruct the complete three-dimensional microscopic image in a process termed extended focused imaging (EFI) [64]. This allows the entire surface to be visualized from just a single camera acquisition, but relies on phase-unwrapping algorithms to extract height information. I propose adopting a multi-wavelength approach to eliminate the phase-unwrapping dependency, and show preliminary results.

7.1. Introduction

Ideally a three-dimensional object appears focused over its entire volume. On the macroscopic scale this idealization may be observed, at least locally, as in photographs within a small room. However, since the depth of field decreases approximately as the square of the transverse field size, the focal plane is extremely narrow when viewing microscopic objects. As described in Section 6.1, a number of scanning techniques have been applied to piece together focused sections of the object [7, 13, 65]. The fastest possible method, which is equivalent to the idealization, would be to obtain a complete three-dimensional image from a single exposure. Wavefront coding is one such method that shows promise [42]. In wavefront coding, a pupil mask having known phase properties is used to distribute the focus of incoherent light evenly over the object, and computer processing is used to deconvolve
the resulting image. Coherent light makes possible another contemporary approach known as extended focused imaging (EFI). In EFI, intensity and surface height information are obtained in implicit form by capturing a hologram, which records both the amplitude and phase of light scattered by the object. Digital holography (DH) allows an image of the object to be reconstructed at any plane along the optical axis. At a given reconstruction distance, some regions of the image may be in focus. These are found by applying phase unwrapping to determine the object profile and intersecting it with the reconstruction planes, which act as sections. If these focused regions are extracted and the reconstruction is performed over a longitudinal (z-axis) range covering the entire object depth, the EFI image can be compiled. There are two main problems to be solved in EFI. One is the uncertainty in the profile introduced by discrepancies within phase-unwrapping algorithms (see Appendix 3). The proposed solution to this problem is to employ a multi-wavelength approach that avoids the 2π ambiguities inherent to a single wavelength over the axial range of interest [32]. The other problem in EFI is that images appear gritty due to speckle noise from the coherent illumination. This noise might be substantially reduced by using short-coherence-length, narrow-band LEDs instead of a laser and deriving the hologram from a series of interferograms. Unfortunately, the reduced noise comes at the cost of needing to acquire several images; however, this is still many times faster than the most efficient scanning methods and is the approach presented here.


7.2. Phase-Shifting Digital Holography

Phase-shifting digital holography [78] was introduced in 1997 as a way to generate a hologram from a series of phase-shifted interferograms. The interferogram is the result of interference between a split light beam in which one path is phase-shifted by a reference mirror and the other illuminates the object, as in the Michelson-type interferometer of Fig. 7.1.

Figure 7.1: Setup for multi-wavelength phase imaging interferometry. Courtesy of Nilanthi Warnasooriya.

To generate the hologram, begin by assuming the object wavefronts at the CCD camera are expressed by plane waves of the form

U(x, y) = A e^{iφ}    (7.1)

Here A is the amplitude and φ is the phase. The reference wave is similarly given by

U_R(x, y) = A_R e^{iφ_R}    (7.2)


These waves will interfere at the CCD plane in a predictable way provided that the path difference between them is within the coherence length of the illumination. For the LEDs considered, this coherence length is a few microns. The amplitude of the combined wave in the image is then

U + U_R = A e^{iφ} + A_R e^{iφ_R}    (7.3)

The intensity (hologram) recorded is therefore

I(x, y; φ_R) = |U + U_R|² = A² + A_R² + 2 A A_R cos(φ - φ_R)    (7.4)

This is easily derived from Eq. (7.3) with use of the identity

cos(θ) = (e^{iθ} + e^{-iθ}) / 2    (7.5)

Of interest to form the hologram are the amplitudes A and A_R and the phase φ. A is readily measured by blocking the reference beam to obtain the object intensity I_O(x, y) in the imaging plane and taking A = sqrt(I_O(x, y)). The phase is calculated after recording four images at reference phases offset by one-quarter period each, via

φ(x, y) = tan⁻¹{ [I(x, y; 3π/2) - I(x, y; π/2)] / [I(x, y; 0) - I(x, y; π)] }    (7.6)

Finally, A_R may be calculated from Eq. (7.4) at a single reference phase, or it may be measured by blocking the object light path. In practice it may be preferable to measure A_R and calculate A, since the reference wave typically has the higher intensity. Once the hologram has been constructed in this manner, it can be used as is commonly done in DH to reconstruct the image of the object at varying axial positions.
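In code, Eq. (7.6) is one line if the two-argument arctangent is used, which also resolves the quadrant ambiguity of tan⁻¹. A minimal sketch (array names are illustrative):

import numpy as np

def phase_from_interferograms(I0, I90, I180, I270):
    """Four-step phase-shifting, Eq. (7.6); the arguments are the
    intensities recorded at reference phases 0, pi/2, pi, 3*pi/2.
    Returns the wrapped object phase in [0, 2*pi)."""
    return np.mod(np.arctan2(I270 - I90, I0 - I180), 2 * np.pi)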


The phase image can also be plotted, consisting of grayscale values calculated from

grayscale(x, y) = 255 · φ(x, y) / 2π    (7.7)

This can be interpreted as a height map of the surface, but due to the periodic nature of the light used for measurement it has an unambiguous range of only Δz = λ, the wavelength of the illumination. Within this region, the wrapped height is given by

z_w(x, y) = (λ/2π) φ(x, y)    (7.8)

Clearly Eqs. (7.7) and (7.8) differ only by a scale factor, so once the bitmap image of Eq. (7.7) is stored, it is easily converted into the corresponding physical height. Note that for cameras having greater than eight bits per pixel of precision, the images should also be stored in a higher precision format to avoid information loss.

7.3. Multi-Wavelength Approach

Although depth resolution is limited to λ for any particular wavelength, when two distinct colors are used for illumination the unambiguous measurement extends to the beat wavelength

Λ_12 = λ_1 λ_2 / |λ_1 - λ_2|    (7.9)

This is achieved by combining the phase maps for the two wavelengths [32]. Applying this multi-wavelength approach in lieu of phase unwrapping should allow EFI to be used for a wide variety of objects, including biological specimens having complicated phase maps.
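The combination step of [32] can be sketched as follows: the difference of the two wrapped single-wavelength phase maps, rewrapped into [0, 2π), is itself a phase map at the beat wavelength of Eq. (7.9). This is a sketch under those assumptions only; the fine-map refinement used to recover single-wavelength precision is omitted.

import numpy as np

def two_wavelength_height(phi1, phi2, lam1, lam2):
    """Coarse height map, unambiguous over the beat wavelength
    Lambda12 = lam1*lam2/|lam1 - lam2| of Eq. (7.9).
    phi1, phi2: wrapped phase maps in [0, 2*pi) at lam1 < lam2."""
    beat = lam1 * lam2 / abs(lam1 - lam2)
    dphi = np.mod(phi1 - phi2, 2 * np.pi)  # rewrap the difference
    return beat * dphi / (2 * np.pi)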


At the time of this writing, multi-wavelength interferograms were not available for testing with EFI. However, I wrote a computer program to compose the EFI given a hologram and depth map (unwrapped phase map). Results from this program applied to a test hologram obtained with a laser are shown in Fig. 7.2. In this case, the level of focus does not vary much over the range of the phase map, so it is difficult to discern differences between the reconstructed sections and the autofocus image. This was a useful test, though, in preparation for multi-wavelength data. The test also made evident that care must be taken to align the depth map with the hologram reconstruction distance coordinate, because the depth map has a floating zero level, like voltage. This can be done by picking a clearly focused feature in the reconstructed image and matching it with a slice from the depth map. Alternatively, a range of starting positions could be tested and the most focused EFI accepted, as determined by a focus algorithm (as in Appendix 5). This automated procedure would be convenient and fast provided that the range of starting positions is searched quickly; a binary partitioning algorithm should prove suitable for the task.

Figure 7.2: Composing the EFI. A total of ten images of the SKOV cancer cell were reconstructed evenly spaced along the z-axis. The first image is shown on the left; the tenth is in the middle. The EFI (right) shows subtle differences from both. This test does not show the full power of EFI but was done in preparation for soon-to-be-obtained multi-wavelength, short-coherence-length holograms.
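The composition logic can be sketched compactly (a reimplementation under stated assumptions, not the original program): given reconstructions at a set of axial distances and a depth map already registered to the same coordinate, each output pixel is copied from the reconstruction plane nearest the surface height at that pixel.

import numpy as np

def compose_efi(reconstructions, z_planes, depth_map):
    """reconstructions: (N, H, W) amplitude images reconstructed at
    distances z_planes (shape (N,)); depth_map: (H, W) surface height
    in the same coordinate. Returns the (H, W) extended focused image."""
    idx = np.abs(z_planes[:, None, None] - depth_map[None]).argmin(axis=0)
    return np.take_along_axis(reconstructions, idx[None], axis=0)[0]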


7.4. Conclusions

Extended focused imaging can probably be combined with multi-wavelength digital holography to eliminate errors due to phase unwrapping. Replacement of lasers with short-coherence-length LEDs will reduce noise. The range of depth will be limited to the coherence length of the LEDs, and since this is only a few microns, the method will be most useful at high magnifications where the depth of field is small. Chromatic aberration may add difficulty to data acquisition, since at these magnifications the difference in focus positions for the two wavelengths could exceed the depth of field.


8. Concluding Remarks

Optical microscopy is a rich subject that continues to evolve rapidly despite its long history. The creation of many new devices has been made possible by advances in computation, new materials, and manufacturing processes. For example, structured illumination microscopy and wavefront coding both rely heavily on image processing. These and other techniques offer substantially reduced acquisition times for three-dimensional imaging.

The selective illumination feedback microscope is technically interesting but has not yet realized any significant advantage over its competitors. It may be that such a system is better suited to motion detection, filtering, or other image enhancements often done by computation alone. In terms of sectioning, there is a potential speed gain over structured illumination microscopy if the entire field can be analyzed in a single pass, allowing selective illumination and focused imaging with just two camera acquisitions. This might be accomplished by applying the contrast focusing algorithm to the raw image data (see Appendix 5, Fig. A.5.2). Focus discrimination would probably improve with additional feedback loops, up to the point where this gain is offset by drift errors and an overall speed decrease. An even more important advantage could be the avoidance of the linear artifacts common to SIM.

Color structured illumination microscopy is almost certainly the most important contribution of this work. CSIM is a proven way to obtain optical sections from a single image acquisition with high sectioning strength. The obstacles of system color
calibration and defocus due to chromatic aberration are not particularly difficult to overcome, and an outline of how to do so has been provided here. It is also conceivable that CSIM could be used in a fluorescence microscope, another important project for future work.

The proposed improvement to extended focused imaging, removing speckle noise and depth ambiguities, will almost certainly be accomplished within the next year. The necessary pieces of this puzzle have already been made, and it should not be difficult to put them together. So although some progress has been made here, there is no shortage of new frontiers to explore. In these concluding remarks alone are three suggestions for projects that could lead to publishable results and, more importantly, new instrumentation for biological imaging. Besides the fastest possible three-dimensional image visualization, this is the final goal of my work.


References

[1] J.-A. Conchello and J. W. Lichtman, Optical sectioning microscopy, Nature Methods 2, 920-931 (2005)

[2] D. A. Agard, Optical sectioning microscopy: cellular architecture in three dimensions, Ann. Rev. Biophys. Bioeng. 13, 191-219 (1984)

[3] Y. Garini, B. J. Vermolen, and I. T. Young, From micro to nano: recent advances in high-resolution microscopy, Curr. Opin. Biotechnol. 16, 3-12 (2005)

[4] K. König, Multiphoton microscopy in life sciences, J. Microsc. 200, 83-104 (2000)

[5] M. Minsky, Microscopy apparatus, US Patent 3013467 (1961)

[6] M. Minsky, Memoir on inventing the confocal scanning microscope, Scanning 10, 128-139 (1988)

[7] G. Q. Xiao, T. R. Corle, and G. S. Kino, Real time confocal scanning optical microscope, Appl. Phys. Lett. 53, 716-718 (1988)

[8] D. M. Grant, D. S. Elson, D. Schimpf, C. Dunsby, J. Requejo-Isidro, E. Auksorius, I. Munro, M. A. A. Neil, and P. M. W. French, Optically sectioned fluorescence lifetime imaging using a Nipkow disk microscope and a tunable ultrafast continuum excitation source, Opt. Lett. 30, 3353-3355 (2005)

[9] P. J. Smith, C. M. Taylor, A. J. Shaw, and E. M. McCabe, Programmable array microscopy with a ferroelectric liquid-crystal spatial light modulator, Appl. Opt. 39, 2664-2669 (2000)

[10] M. Liang, R. L. Stehr, and A. W. Krause, Confocal pattern period in multiple-aperture confocal imaging systems with coherent illumination, Opt. Lett. 22, 751-753 (1997)

[11] K.-B. Im, S. Han, H. Park, D. Kim, and B.-M. Kim, Simple high-speed confocal line-scanning microscope, Opt. Express 13, 5151-5156 (2005)

[12] B. R. Masters, Three-dimensional confocal microscopy of the human optic nerve in vivo, Opt. Express 3, 356-359 (1998)

[13] M. A. A. Neil, R. Juskaitis, and T. Wilson, Method of obtaining optical sectioning by using structured light in a conventional microscope, Opt. Lett. 22, 1905-1907 (1997)

[14] A. C. Puche, Resolution performance in structured illumination microscopy: comparison of OptiGrid to traditional confocal microscopes, Independent Resolution Testing, Thales Optem Inc. (2005)

[15] www.zeiss.de/apotome

[16] H. Bauch, Structured illumination in fluorescence microscopy, Bioscience Technology (2003)

[17] T. S. Tkaczyk, M. Rahman, V. Mack, K. Sokolov, and J. D. Rogers, High resolution, molecular-specific, reflectance imaging in optically dense tissue phantoms with structured-illumination, Opt. Express 12, 3745-3758 (2004)

[18] D. Karadaglic, R. Juskaitis, and T. Wilson, Confocal endoscopy via structured illumination, Scanning 24, 301-304 (2002)

[19] M. J. Cole, J. Siegel, S. E. D. Webb, R. Jones, K. Dowling, M. J. Dayel, D. Parsons-Karavassilis, P. M. W. French, M. J. Lever, L. O. D. Sucharov, M. A. A. Neil, R. Juskaitis, and T. Wilson, Time-domain whole-field fluorescence lifetime imaging with optical sectioning, J. Microsc. 203, 246-257 (2001)

[20] L. H. Schaefer, D. Schuster, and J. Schaffer, Structured illumination microscopy: artifact analysis and reduction utilizing a parameter optimization approach, J. Microsc. 216, 165-174 (2004)

[21] T. Xian and X. Su, Area modulation grating for sinusoidal structure illumination on phase-measuring profilometry, Appl. Opt. 40, 1201-1206 (2001)

[22] M. G. L. Gustafsson, Non-linear structured-illumination microscopy: wide-field fluorescence imaging with theoretically unlimited resolution, PNAS 102, 13081-13086 (2005)

[23] A. Egner and S. W. Hell, Fluorescence microscopy with super-resolved optical sections, Trends Cell Biol. 15, 207-215 (2005)

[24] P. T. C. So, C. Y. Dong, B. R. Masters, and K. M. Berland, Two-photon excitation fluorescence microscopy, Annu. Rev. Biomed. Eng. 2, 399-429 (2000)

[25] P. Theer, M. T. Hasan, and W. Denk, Two-photon imaging to a depth of 1000 μm in living brains by use of a Ti:Al2O3 regenerative amplifier, Opt. Lett. 28, 1022-1024 (2003)

[26] E. Beaurepaire and J. Mertz, Epifluorescence collection in two-photon microscopy, Appl. Opt. 41, 5376-5382 (2002)

[27] D. Oron, E. Tal, and Y. Silberberg, Scanningless depth-resolved microscopy, Opt. Express 13, 1468-1476 (2005)

[28] C. Ventalon and J. Mertz, Quasi-confocal fluorescence sectioning with dynamic speckle illumination, Opt. Lett. 30, 3350-3352 (2005)

[29] M. J. Cole, J. Siegel, S. E. D. Webb, R. Jones, K. Dowling, P. M. W. French, M. J. Lever, L. O. D. Sucharov, M. A. A. Neil, R. Juskaitis, and T. Wilson, Whole-field optically sectioned fluorescence lifetime imaging, Opt. Lett. 25, 1361-1363 (2000)

[30] P. Hariharan, Optical Holography (Cambridge U. Press, Cambridge, 1996)

[31] J. Garcia-Sucerquia, W. Xu, S. K. Jericho, P. Klages, M. H. Jericho, and H. J. Kreuzer, Digital in-line holographic microscopy, Appl. Opt. 45, 836-850 (2006)

[32] J. A. Gass, A. Dakoff, and M. K. Kim, Phase imaging without 2π ambiguity by multiwavelength digital holography, Opt. Lett. 28, 1141-1143 (2003)

[33] D. Parshall, Phase imaging digital holography for biological microscopy, Master's thesis, University of South Florida Department of Physics (2004)

[34] B. Javidi, P. Ferraro, S.-H. Hong, S. De Nicola, A. Finizio, D. Alfieri, and G. Pierattini, Three-dimensional image fusion by use of multiwavelength digital holography, Opt. Lett. 30, 144-146 (2005)

[35] D. N. Wang, Y. N. Ning, K. T. V. Grattan, A. W. Palmer, and K. Weir, Optimized multiwavelength combination sources for interferometric use, Appl. Opt. 33, 7326-7333 (1994)

[36] R. Chmelik and Z. Harna, Surface profilometry by a parallel-mode confocal microscope, Opt. Eng. 41, 744-745 (2002)

[37] P. Ferraro, S. Grilli, D. Alfieri, S. De Nicola, A. Finizio, G. Pierattini, B. Javidi, G. Coppola, and V. Striano, Extended focused image in microscopy by digital holography, Opt. Express 13, 6738-6749 (2005)

[38] M. K. Kim, Wavelength-scanning digital interference holography for optical section imaging, Opt. Lett. 24, 1693-1695 (1999)

[39] B. W. Schilling, T.-C. Poon, G. Indebetouw, B. Storrie, K. Shinoda, Y. Suzuki, and M. H. Wu, Three-dimensional holographic fluorescence microscopy, Opt. Lett. 22, 1506-1508 (1997)

[40] J. F. McGilp, A review of optical second-harmonic and sum-frequency generation at surfaces and interfaces, J. Phys. D 29, 1812-1821 (1996)

[41] M. Gu and L. Fu, Three-dimensional image formation in fiber-optical second-harmonic-generation microscopy, Opt. Express 14, 1175-1181 (2006)

[42] P. Török and Fu-Jen Kao (editors), Optical Imaging and Microscopy: Techniques and Advanced Systems, Springer, ISBN 3-540-43493-3 (2003)

[43] J. A. Squier, M. Muller, G. J. Brakenhoff, and K. R. Wilson, Third harmonic generation microscopy, Opt. Express 3, 315-324 (1998)

[44] D. Yelin and Y. Silberberg, Laser scanning third-harmonic-generation microscopy in biology, Opt. Express 5, 169-175 (1999)

[45] A. Zumbusch, G. R. Holtom, and X. S. Xie, Three-dimensional vibrational imaging by coherent anti-Stokes Raman scattering, Phys. Rev. Lett. 82, 4142-4145 (1999)

[46] J.-X. Cheng, L. D. Book, and X. S. Xie, Polarization coherent anti-Stokes Raman scattering microscopy, Opt. Lett. 26, 1341-1343 (2001)

[47] T. W. Kee and M. T. Cicerone, Simple approach to one-laser, broadband coherent anti-Stokes Raman scattering microscopy, Opt. Lett. 29, 2701-2703 (2004)

[48] J. M. Schmitt, Optical coherence tomography (OCT): a review, IEEE J. Sel. Top. Quantum Electron. 5, 1205-1215 (1999)

[49] D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, Optical coherence tomography, Science 254, 1178-1181 (1991)

[50] M. Akiba, K. P. Chan, and N. Tanno, Full-field optical coherence tomography by two-dimensional heterodyne detection with a pair of CCD cameras, Opt. Lett. 28, 816-818 (2003)

[51] L. Yu and M. K. Kim, Full-color three-dimensional microscopy by wide-field optical coherence tomography, Opt. Express 12, 6632-6641 (2004)

[52] P. S. Carney and J. C. Schotland, Three-dimensional total internal reflection microscopy, Opt. Lett. 26, 1072-1074 (2001)

[53] J. Huisken, E. Stelzer, and J. Swoger, G.I.T. Imag. & Micr. 1, 2-3 (2005)

[54] O. Haeberle, C. Xu, A. Dieterlen, and S. Jacquey, Multiple-objective microscopy with three-dimensional resolution near 100 nm and a long working distance, Opt. Lett. 26, 1684-1686 (2001)

[55] S. A. Boppart, T. F. Deutsch, and D. W. Rattner, Optical imaging technology in minimally invasive surgery: current status and future directions, Surg. Endosc. 13, 718-722 (1999)

[56] J. G. McNally, T. Karpova, J. Cooper, and J. A. Conchello, Three-dimensional imaging by deconvolution microscopy, Methods 19, 373-385 (1999)

[57] Vaytek: http://www.vaytek.com/guide.htm

[58] Scanalytics: http://www.scanalytics.com/index.html

[59] Applied Precision: http://www.api.com/dvsystems.html

[60] G. R. Ayers and J. C. Dainty, Iterative blind deconvolution method and its applications, Opt. Lett. 13, 547-549 (1988)

[61] K. Kubala, E. Dowski, and W. T. Cathey, Reducing complexity in computational imaging systems, Opt. Express 11, 2102-2108 (2003)

[62] S. C. Tucker, W. T. Cathey, and E. Dowski, Extended depth of field and aberration control for inexpensive digital microscope systems, Opt. Express 4, 467-474 (1999)

[63] P. Potuluri, M. Fetterman, and D. Brady, High depth of field microscopic imaging using an interferometric camera, Opt. Express 8, 624-630 (2001)

[64] P. Ferraro, S. Grilli, D. Alfieri, S. De Nicola, A. Finizio, G. Pierattini, B. Javidi, G. Coppola, and V. Striano, Extended focused image in microscopy by digital holography, Opt. Express 13, 6738-6749 (2005)

[65] L. G. Krzewina and M. K. Kim, Single-exposure optical sectioning by color structured illumination microscopy, Opt. Lett. 31, 477-479 (2006)

[66] L. G. Krzewina and M. K. Kim, Dynamic structured illumination microscopy: focused imaging and optical sectioning for moving objects, to be published in proceedings of SPIE (2006)

[67] K.-P. Proll, J.-M. Nivet, C. Voland, and H. J. Tiziani, Application of a liquid-crystal spatial light modulator for brightness adaptation in microscopic topometry, Appl. Opt. 39, 6430-6435 (2000)

[68] F. Gassem, R. Farcy, L. Pruvost, A. Bersellini, H. Aroui, and S. Ghrissi, Spatially selective laser irradiation method controlled by real-time image analysis: optical aspects, Appl. Opt. 44, 887-892 (2005)

[69] J. Mitic, T. Anhut, M. Meier, M. Ducros, A. Serov, and T. Lasser, Optical sectioning in wide-field microscopy obtained by dynamic structured light illumination and detection based on a smart pixel detector array, Opt. Lett. 28, 698-700 (2003)

[70] J. Kvavle, C. Bell, J. Henrie, S. Schultz, and A. Hawkins, Improvement to reflective dielectric film color pictures, Opt. Express 12, 5789-5794 (2004)

[71] http://www.hi-techlamps.com/

[72] http://www.brewerscience.com/

[73] H. J. Tiziani and H.-M. Uhde, Three-dimensional image sensing by chromatic confocal microscopy, Appl. Opt. 33, 1838-1843 (1994)

[74] M. R. Arnison, C. J. Cogswell, C. J. R. Sheppard, and P. Török, Wavefront coding fluorescence microscopy using high aperture lenses, in Optical Imaging and Microscopy, Springer, ISBN 3-540-43493-3 (2003)

[75] P. A. Stokseth, Properties of a defocused optical system, J. Opt. Soc. Am. 59, 1314 (1969)

[76] C. Font, J. C. Escalera, and M. J. Yzuel, Polychromatic point spread function: calculation accuracy, J. Mod. Opt. 41, 1401-1413 (1994)

[77] M. Takeda, Chromatic aberration matching of the polychromatic optical transfer function, Appl. Opt. 20, 684-687 (1981)

[78] I. Yamaguchi and T. Zhang, Phase-shifting digital holography, Opt. Lett. 22, 1268-1270 (1997)

[79] F. T. S. Yu, Q. W. Song, and X. J. Lu, Implementation of Boolean logic gates using a microchannel spatial light modulator with liquid-crystal televisions, Opt. Lett. 12, 962-964 (1987)

[80] V. Bagnoud and J. D. Zuegel, Independent phase and amplitude control of a laser beam by use of a single-phase-only spatial light modulator, Opt. Lett. 29, 295-297 (2004)

[81] B. Wattellier, C. Sauteret, J.-C. Chanteloup, and A. Migus, Beam-focus shaping by use of programmable phase-only filters: application to an ultralong focal line, Opt. Lett. 27, 213 (2002)

[82] T.-C. Poon, B. W. Schilling, M. H. Wu, K. Shinoda, and Y. Suzuki, Real-time two-dimensional holographic imaging by using an electron-beam-addressed spatial light modulator, Opt. Lett. 18, 63 (1993)

[83] H. A. Zebker and Y. Lu, Phase unwrapping algorithms for radar interferometry: residue-cut, least-squares, and synthesis algorithms, J. Opt. Soc. Am. A 15, 586-598 (1998)

[84] A. Asundi and Z. Wensen, Fast phase-unwrapping algorithm based on a gray-scale mask and flood fill, Appl. Opt. 37, 5416-5420 (1998)

[85] A. Baldi, Phase unwrapping by region growing, Appl. Opt. 42, 2498-2505 (2003)

[86] V. V. Volkov and Y. Zhu, Deterministic phase unwrapping in the presence of noise, Opt. Lett. 28, 2156-2158 (2003)

[87] M. A. Schofield and Y. Zhu, Fast phase unwrapping algorithm for interferometric applications, Opt. Lett. 28, 1194-1196 (2003)

[88] L. Yu and M. K. Kim, Wavelength-scanning digital interference holography for tomographic three-dimensional imaging by use of the angular spectrum method, Opt. Lett. 30, 2092-2094 (2005)

[89] G. Ade, A digital method for noise reduction in holographic reconstructions and electron microscopical images, Scan. Micros. 11, 375-378 (1997)

[90] H. A. Eltoukhy and S. Kavusi, A computationally efficient algorithm for multi-focus image reconstruction, SPIE Conference Proceedings (2003)


Appendices

Appendix 1: Spatial Light Modulator

In order to perform selective or structured illumination, a mask must be placed between the light source and the object to restrict light from reaching undesired areas. A modern device convenient for this purpose is the spatial light modulator (SLM). Spatial light modulators have many commercial and experimental uses. An area of intense research has been their use in optical computing, in which an array of calculations can be performed in parallel by using the pixels as logic gates. Yu [79] has used an SLM with feedback to demonstrate all fundamental logic operations. The SLM has also been used to shape laser beams [80] and for focal spot control [81]. In microscopy, an SLM was used to construct a parallel confocal microscope [9, 10] and to correct brightness levels in microscopic topometry [67]. Its phase modulation capability has led to numerous experiments in digital holography [82].

A transmissive SLM is essentially a miniature liquid crystal display (LCD). It is used to control the intensity or polarization of light transmitted through it, hence the term light modulator. Reflective SLMs are also available, such as the digital micromirror device from Texas Instruments, but the transmissive LCD version was convenient for this work. In a typical LCD, the panel is lit from behind and light first passes through a linear polarizer. Liquid crystals then rotate the polarization by up to 90 degrees according to a voltage set by the user. The light is then transmitted fractionally through a final linear polarizer, termed the analyzer, which is orthogonal to the first. Intensity modulation is performed when the LCD arrangement is as described above. Phase modulation can be accomplished by omitting the analyzer, leaving a change in phase determined by the state of the crystals. In this experiment, intensity modulation is used exclusively. Through an interface to a personal computer, controlling the voltage is equivalent to controlling the pixel values on a computer monitor. That is, the SLM and computer monitor both receive output from the video card of the computer.

After an extensive search for a suitable, inexpensive SLM, the Monochrome CyberDisplay Video Module KCD-MQ01-AA was selected. It has an active area of 4.80 mm by 3.60 mm divided into 320 x 240 pixels, giving a 15 μm pitch, with intensity modulation of 256 grayscale levels. This SLM is commercially available from Kopin Corporation (www.kopin.com). The factory-provided driver for the SLM is documented as underpowered, and only the interior 290 x 210 pixels are displayed with maximum sharpness, but no significant degradation of sharpness in the 15-pixel border was observed. Due to the polarizing layer and the opacity of the electronics within the SLM, its transmissivity is approximately 7%. Though the optical qualities of this SLM are inferior compared to its high resolution competitors, its cost was an order of magnitude less. For this initial development of SIFM, it was deemed appropriate. A picture of the SLM is shown in Fig. A.1.1 along with a dime for size comparison.

Figure A.1.1: The spatial light modulator from Kopin Corporation. The size of 4.80 mm by 3.60 mm is compared to the dime. The active area is surrounded by a plastic frame and interfaced via a data ribbon.
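The intensity modulation described above follows a Malus-type polarization argument: light polarized by the first filter and rotated by an angle theta in the liquid crystal passes the crossed analyzer with fraction sin²(theta). The minimal sketch below illustrates this, assuming a linear mapping of gray level to rotation angle, which is an illustrative simplification; a real panel's drive response is nonlinear.

```cpp
// Minimal sketch: idealized intensity modulation of one transmissive SLM
// pixel between crossed polarizers. The liquid crystal rotates the
// polarization by theta, so the transmitted fraction is sin^2(theta).
// The linear gray-level-to-angle mapping is an assumption for illustration.
#include <cmath>
#include <cstdio>

int main() {
    const double PI = 3.14159265358979;
    for (int gray = 0; gray <= 255; gray += 51) {
        double theta = (gray / 255.0) * (PI / 2.0);       // assumed linear drive
        double transmitted = std::sin(theta) * std::sin(theta);
        std::printf("gray %3d -> transmitted fraction %.3f\n", gray, transmitted);
    }
    return 0;
}
```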

Appendix 2: Feedback Effects

In an ideal optical feedback loop, the initial focused image will be projected with perfect fidelity back onto its source object indefinitely. Defocused regions will also be self-projected, but with less intensity per loop, as illustrated in Fig. A.2.1. This could be used to obtain optical sectioning for an object having constant reflectivity across its surface. In real situations, though, the reflectivity will vary; if it is assumed to have a value R(x, y) between 0 and 1, then after n loops the illumination will decrease to R^n(x, y).

Figure A.2.1: Ideal focused and defocused feedback.

A test image may be used to check whether the feedback is reasonably well aligned for a single loop before additional iterations are performed. The image used to calibrate the SIFM was called the image quality test, or IQ test. It is shown in Fig. A.2.2.

Figure A.2.2: IQ test bitmap (left) and imaged after mapping (right).

From Fig. A.2.2 it appears that good feedback alignment was obtained. However, there is a slight blurring visible in the mapped image that cannot be avoided even under the best circumstances, due to the diffraction-limited point spread function. Also, since the CCD detector and SLM are integer arrays, unless they are aligned to submicron precision they will not map perfectly to one another and effectively suffer round-off errors. Supposing the minimal error of one pixel, the displacement error will propagate by one pixel for each loop, so after n loops the local alignment will be inaccurate by n pixels. Since the current hardware is divided into just hundreds of pixels, this drift error becomes intolerable after a low value of n, that is, a few loops. This is the main reason why feedback of only one loop was used in the experiment.

There are also other effects that become problematic with only slight misalignment. If an improper scale factor is obtained during the mapping process, there will be a size mismatch between the SLM and camera. Although not much astigmatism is expected to be present in high quality optics, when it does appear it is nearly impossible to correct and is detrimental to feedback. The SLM and camera must be mounted such that there is no relative rotation between them about the longitudinal axis. If there is a rotation, it is difficult to compensate for when mapping with square pixels. The feedback effects of size mismatch, astigmatism, and rotation are shown in Fig. A.2.3. These, along with positional drift, all severely limit the number of useful feedback loops.

Figure A.2.3: Effects due to various misalignments of mapping in feedback. Size mismatch, astigmatism, and rotation are shown, but other effects are also possible. In an experimental sense, optical feedback is fragile.
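The two degradation mechanisms just described, illumination decaying as R^n and alignment drifting by roughly one pixel per loop, can be tabulated with a few lines. This is a minimal sketch under assumed parameters (the reflectivity value and drift rate are illustrative), not a model of the actual SIFM hardware.

```cpp
// Minimal sketch of the two feedback error models above: per-loop intensity
// decay R^n for a uniform reflectivity R, and worst-case alignment drift of
// one pixel per loop. Both values here are assumptions for illustration.
#include <cmath>
#include <cstdio>

int main() {
    const double R = 0.7;       // assumed uniform surface reflectivity
    const int maxLoops = 10;
    for (int n = 1; n <= maxLoops; ++n) {
        double illumination = std::pow(R, n); // R^n(x,y) with uniform R
        int driftPixels = n;                  // 1 px of drift per loop
        std::printf("loop %2d: relative illumination %.4f, drift %d px\n",
                    n, illumination, driftPixels);
    }
    return 0;
}
```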

Appendix 3: Phase-Unwrapping Algorithms

By recording interference patterns via digital holography, one may obtain phase information of light scattered by the surface of the object. The light waves are periodic and have physical wavelength λ, allowing surface profiles of total depth variation less than λ to be measured unambiguously. For larger depth variations, there is ambiguity as to which wave was measured, since they all look the same, and the phase value wraps around to zero at every 2π interval. Thus the phase, having a range of 2π, measures local height variations, but must be unwrapped to extend the depth range, by either visual inspection (a very slow approach) or a computer program.

Numerous algorithms founded on a variety of assumptions have accomplished phase unwrapping. From a utilitarian point of view, they differ in speed and in the ability to correctly process noisy data. The simplest algorithm checks adjacent points for a phase discontinuity greater than π and adds 2π where one occurs, assuming that a wrap has taken place (a minimal sketch of this method appears at the end of this appendix). This is typically implemented along scan lines in a grayscale bitmap, so results are dependent upon the direction of the scan. This method, common to such programs as MatLab, is extremely fast but is not usable for most experiments. This is because most real data sets contain errors, or inconsistencies. In this context, an inconsistency occurs when a net change in height is calculated after unwrapping along a closed loop, since we require that the starting and ending pixel (the same pixel) heights be the same. In path-dependent algorithms, inconsistencies are apparent when different paths lead to dissimilar results.

Phase unwrapping became a topic of intense research when it showed practical military applicability; a recent example applies to synthetic aperture radar [83]. The different approaches are too numerous to be detailed here, but it may interest the reader to observe how such dissimilar ways of viewing the problem lead to the end product of an unwrapped image [84-87]. For example, a residue-cut algorithm may try to avoid inconsistencies by cutting them out of the data, whereas more mathematical routines are relatively impervious to the inconsistencies.

As phase unwrapping presents a technical challenge with so many solutions, none of which is yet ideal, I attempted my own. Also, Chris Mann was using a program that gave good results but took several minutes to process each image, so another motivation was improved speed. His phase images were fairly low in noise, and appeared to have 2π ambiguities around clusters or regions due to the biological nature of the samples. Therefore I adopted a region-growing approach that first groups pixels into regions of similar values, and then assumes that neighboring regions are separated by no more than one phase jump. The program is called LPU (Leo's Phase Unwrapper) and includes one variable parameter to adjust the tolerance for region growing. By increasing the tolerance, larger regions are grown and the speed will increase, at the potential expense of susceptibility to inconsistencies. Results from LPU are shown in the figures below. Overall, this algorithm is extremely fast, but not as tolerant to noise as its competitors. Because a similar region-growing method was already recently published [85], the program is used as an internal lab tool and was not submitted for publication elsewhere.

Figure A.3.1: Cheek cell, wrapped (left) and unwrapped by LPU (right). The image size was 456 x 456 pixels and took 120 msec to unwrap. (All wrapped phase images courtesy of Chris Mann.)

Figure A.3.2: Skov cancer cell. This took 72 msec to unwrap with an original image size of 404 x 404.

Figure A.3.3: LPU has a little trouble with the resolution target images. The algorithm fails where the data become inconsistent, which means there is enough noise to cause improper phase boundaries. The area on the far right should be the brightest shade but is incorrectly unwrapped.

Figure A.3.4: Another resolution target image, which was fairly noisy. By adjusting the LPU tolerance parameter, a reasonably good unwrapping resulted after four attempts. However, this noise level is probably not suitable for LPU except for use as a preview to slower, more robust programs.
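For comparison with LPU, the simple scan-line method described at the start of this appendix fits in a few lines. This is a minimal, hypothetical sketch (not the LPU code) operating on a single row of phase values in radians; it ignores inconsistencies entirely, which is exactly why it fails on noisy data like the resolution targets above.

```cpp
// Minimal sketch of the simplest scan-line phase unwrapper: whenever
// adjacent samples jump by more than pi, adjust a running offset by 2*pi.
// Path dependence and noise sensitivity are the known weaknesses.
#include <cmath>
#include <vector>
#include <cstdio>

std::vector<double> unwrapRow(const std::vector<double>& wrapped) {
    const double PI = 3.14159265358979;
    std::vector<double> out(wrapped.size());
    if (wrapped.empty()) return out;
    double offset = 0.0;
    out[0] = wrapped[0];
    for (size_t i = 1; i < wrapped.size(); ++i) {
        double d = wrapped[i] - wrapped[i - 1];
        if (d > PI)  offset -= 2.0 * PI;  // apparent upward jump: wrap down
        if (d < -PI) offset += 2.0 * PI;  // apparent downward jump: wrap up
        out[i] = wrapped[i] + offset;
    }
    return out;
}

int main() {
    // A noise-free phase ramp wrapped into [-pi, pi] serves as a test row.
    const double PI = 3.14159265358979;
    std::vector<double> wrapped;
    for (int i = 0; i < 100; ++i)
        wrapped.push_back(std::remainder(0.25 * i, 2.0 * PI));
    std::vector<double> unwrapped = unwrapRow(wrapped);
    std::printf("last value: wrapped %.3f, unwrapped %.3f (true %.3f)\n",
                wrapped.back(), unwrapped.back(), 0.25 * 99);
    return 0;
}
```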

Appendix 4: Convolution Theorem

Optics is one of many areas in which mathematical convolution is commonly used. Convolution is a way of combining two functions and is defined as

$f(x) \otimes g(x) = \int dx'\, f(x')\, g(x - x')$   (A.4.1)

To understand this, consider a single point x' in one of the functions, f. The value of f at this point is multiplied with the values of the other function g over the entire range of x. Then the procedure is continued with the next point in f and added to the original result, until f has been evaluated at all points. The example below should help clarify this. Note that the definition is symmetric, so that f(x) ⊗ g(x) = g(x) ⊗ f(x).

For an optical system this is closely related to the point spread function (PSF). For a linear system in which superposition holds, it is assumed that the total response for all points is the sum of individual responses at each point. Often a PSF is assumed that has the same shape for any source point in the system. More specifically, suppose the object points are located in a source plane perpendicular to the optical axis (the z-axis) and have a brightness f(x, y) ranging from zero to one. The PSF, when applied to one object point at (x, y), will spread its value over an area in the image plane. This blurring effect is unavoidable (at best the diffraction-limited Airy pattern is obtained) and almost inevitably circularly symmetric. The result is that it is impossible to obtain an image of any object with perfect clarity, but the narrower the PSF, the better the result.

Computer models of optical systems often employ the PSF, as I did for the simulation mentioned in Section 3.4. The impossible, ideal PSF is represented by a single point at the origin. For real systems, the amount this point is blurred indicates imaging ability. Example PSFs are shown in Fig. A.4.1. The leftmost is the ideal case, and blurring increases toward the rightmost frame.

Figure A.4.1: Sample point spread functions. The leftmost is an ideal case of a delta function response, or perfect imaging. The central case shows the point blurred into a Gaussian function of radius 5 pixels. The rightmost is similar but has radius 13 pixels. The bounding regions are 256 x 256 pixels. The PSFs have been inverted to save ink and a border has been added.

Computationally, the simulation can apply the PSF by direct evaluation of Eq. (A.4.1), iterating a double loop of the object points convolved with the PSF. This means the time to perform the convolution will increase as N·M, where N is the number of pixels in the object bitmap and M is the number of pixels in the PSF bitmap. Of course M will be small when little blurring occurs, but for the general case this will be a slow operation.

The convolution theorem states that the Fourier transform of the convolution of two functions f and g is the product of the Fourier transforms of those functions:

$T\{f \otimes g\} = T\{f\} \cdot T\{g\}$   (A.4.2)

Here T is used to denote the Fourier transform. Application of the inverse Fourier transform allows us to write

$f \otimes g = T^{-1}\{\,T\{f\} \cdot T\{g\}\,\}$   (A.4.3)

Thus, the convolution of two functions may be accomplished by two Fourier transforms and one inverse Fourier transform. These operations all take the same amount of computation time and are typically implemented via a fast Fourier transform (FFT). The FFT is most efficient for images having dimensions that are powers of two, in which case the computation time increases as N log(N) rather than N², which is much faster for large images.

I performed tests to check my usage of Eq. (A.4.3). The test image I used is a photograph of Dr. Kim, as shown in Fig. A.4.2.

Figure A.4.2: A happy test image.

I applied Eq. (A.4.3) with a common FFT, using the ideal PSF from Fig. A.4.1, and obtained the image shown in Fig. A.4.3:

Figure A.4.3: Result of performing convolution with Eq. (A.4.3) and a common fast Fourier transform.

The expected result of convolution with the ideal PSF would be simply to keep the original image, as it can be thought of as imaging in a perfect optical system. So, my first test made evident that the quadrants of the final convolved image must be rearranged to be sensible. This is a side effect of some FFT implementations. I compared the rearranged output image with the original by taking a pixelwise difference between them, as shown in Fig. A.4.4. Again the figure was inverted to save ink, and an artificial border was added. The fact that there are differences can be explained by numerical round-off errors. Finally I applied the other two PSFs from Fig. A.4.1 to see the effects of more realistic blurring in an optical system. The results are shown in Fig. A.4.5 and they are as expected. Applying the convolution theorem to evaluate the PSF is just one example application of Eq. (A.4.3). See Goodman's Introduction to Fourier Optics for more. A small numerical check of Eq. (A.4.3) is sketched after the figures below.

Figure A.4.4: Difference between the original test image and the output from convolution with the ideal PSF from the left side of Fig. A.4.1. If there were no numerical round-off errors, the images would be the same.

Figure A.4.5: Application of Eq. (A.4.3) for convolution with the middle and rightmost PSFs from Fig. A.4.1. The expected increased blurring of the output image with the size of the PSF is present.
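The theorem itself can be verified numerically in one dimension. The sketch below uses a naive O(N²) discrete Fourier transform for clarity rather than an FFT, and compares a directly computed circular convolution against T⁻¹{T{f}·T{g}}; the signals and sizes are illustrative, not taken from the actual test program.

```cpp
// Minimal 1D check of the convolution theorem, Eq. (A.4.3), using a naive
// DFT. The DFT convolution theorem holds for circular convolution, which
// is what the direct double loop below computes.
#include <complex>
#include <vector>
#include <cmath>
#include <cstdio>

typedef std::complex<double> cd;

std::vector<cd> dft(const std::vector<cd>& in, bool inverse) {
    const double PI = 3.14159265358979;
    size_t n = in.size();
    std::vector<cd> out(n);
    double sign = inverse ? 1.0 : -1.0;
    for (size_t k = 0; k < n; ++k) {
        cd sum(0.0, 0.0);
        for (size_t j = 0; j < n; ++j)
            sum += in[j] * std::polar(1.0, sign * 2.0 * PI * k * j / n);
        out[k] = inverse ? sum / double(n) : sum;
    }
    return out;
}

int main() {
    const int N = 8;
    std::vector<cd> f(N), g(N);
    for (int i = 0; i < N; ++i) {           // arbitrary test signals
        f[i] = cd(std::sin(0.5 * i), 0.0);
        g[i] = cd(i < 3 ? 1.0 : 0.0, 0.0);  // a small box "PSF"
    }

    // Direct circular convolution: (f*g)[k] = sum_j f[j] g[(k-j) mod N]
    std::vector<cd> direct(N);
    for (int k = 0; k < N; ++k)
        for (int j = 0; j < N; ++j)
            direct[k] += f[j] * g[(k + N - j) % N];

    // Convolution theorem: multiply spectra, then inverse transform.
    std::vector<cd> F = dft(f, false), G = dft(g, false), FG(N);
    for (int k = 0; k < N; ++k) FG[k] = F[k] * G[k];
    std::vector<cd> viaTheorem = dft(FG, true);

    for (int k = 0; k < N; ++k)   // the two columns should agree
        std::printf("k=%d direct=%.6f theorem=%.6f\n",
                    k, direct[k].real(), viaTheorem[k].real());
    return 0;
}
```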

Appendix 5: Software Inventory

The methods of structured illumination, selective illumination with feedback, and digital holography all rely on computer processing. The programs I've written related to these subjects (and a few others) are cataloged here and will be provided upon request. They were written in Visual C++ version 6.0; some, made for my own use, were executed from within the compiler, though they need not be. Others were run from a command prompt and include directions for use. It is convenient to install the Command Prompt Here Power Toy for quick access to these programs after copying them to a system path directory. Their source code locations, relative to a generic parent directory root, are listed for posterity.

A.5.1. Autofocus (root\_Archive\Autofocus; run from command prompt)
Applies a contrast-based algorithm to determine which of a set of images is most focused. For each image, the sum of the squares of the differences between each pixel and its four nearest neighbors is computed, and the image having the greatest sum is assumed to be in best focus (a sketch of this metric accompanies the SetFocus entry, A.5.39, below). This is fairly successful for images of the same object at different axial positions, but is susceptible to high-frequency noise. The intent was to automatically find the ideal holographic reconstruction distance.

A.5.2. GAR (root\_Archive\GetAxialResponse; run from command prompt)
Sectioning strength is measured by scanning a planar mirror through focus and determining the FWHM of the system (axial) response. To do this, a series of images is captured as the mirror is scanned through focus. GAR analyzes these images and determines the FWHM. It allows a central square size to be set as the active region to reduce non-paraxial effects, such as distortion from spherical aberration.

A.5.3. Holofocus (root\_Archive\Holofocus; run from command prompt)
This uses the Autofocus algorithm of A.5.1 and subdivides images into squares of user-defined size, then composes the squares of best focus into a final image. This was later replaced by SetFocus (see below).

A.5.4. Poprgb (root\_Archive\poprgb; run from command prompt)
Separates a color image into its red, green, and blue components and stores the result as three distinct images.

A.5.5. Profile (root\_Archive\profile; run from command prompt)
Takes the central row of an image and plots it as a side view, or profile.

A.5.6. Bessel1 (root\_LibraryBase\Bessel1; run from command prompt)
Calculates the Bessel function J_m(x) of the first kind. This code was later used in my chromatic aberration calculation. It uses the asymptotic approximation

$J_m(x) \approx \sqrt{2/(\pi x)}\,\cos\!\left(x - (2m+1)\pi/4\right)$   (A.5.1)

for large x; in this case, large means x > 30.
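The asymptotic form (A.5.1) is easy to cross-check against the integral representation of J_m for integer m. The midpoint-rule evaluation below is purely an illustrative check, not the Bessel1 code itself.

```cpp
// Minimal sketch comparing the large-argument approximation (A.5.1) with
// the integral representation J_m(x) = (1/pi) Int_0^pi cos(m t - x sin t) dt,
// evaluated here by a simple midpoint rule as a cross-check.
#include <cmath>
#include <cstdio>

double besselIntegral(int m, double x) {
    const double PI = 3.14159265358979;
    const int steps = 10000;
    double sum = 0.0, h = PI / steps;
    for (int i = 0; i < steps; ++i) {
        double t = (i + 0.5) * h;
        sum += std::cos(m * t - x * std::sin(t));
    }
    return sum * h / PI;
}

double besselAsymptotic(int m, double x) {
    const double PI = 3.14159265358979;
    return std::sqrt(2.0 / (PI * x)) * std::cos(x - (2 * m + 1) * PI / 4.0);
}

int main() {
    for (int m = 0; m <= 2; ++m)   // x = 40 is in the "large" regime
        std::printf("m=%d x=40: integral %.6f, asymptotic %.6f\n",
                    m, besselIntegral(m, 40.0), besselAsymptotic(m, 40.0));
    return 0;
}
```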

A.5.7. Bmp (root\_LibraryBase\bmp; non-executable)
This is code for linking with other programs that allows reading and writing of Windows bitmap files. The run-length encoding compression appears to have a bug in it, so it is best to write uncompressed files.

A.5.8. fwBase (root\_LibraryBase\fwBase; run from compiler)
This is a basic application allowing image acquisition from a FireWire camera. It works for grayscale, often called black and white, cameras. The FireWire drivers were linked in as a library, niimaq1394.lib, available from National Instruments.

A.5.9. fwBayer (root\_LibraryBase\fwBayer; run from compiler)
This is similar to fwBase but works for the color camera. It is what I used for my CSIM experiments with the Sony camera described in previous sections. The camera actually returns what would appear to be a grayscale image, but it is color-encoded after passing through a Bayer filter. A two-row by two-column group of pixels is arranged as:

GB
RG

Thus red and blue pixels are interpolated from less data than green, so they are more susceptible to noise. While this was the assumed filter, I also verified it by capturing images illuminated with red, green, and blue LEDs and analyzing the pixels. A sketch of the corresponding demosaicing step follows.
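For the GB/RG pattern above, green is sampled wherever (row + column) is even, while red and blue each occupy one quarter of the pixels. The minimal sketch below fills in the green channel by averaging sampled neighbors; the function name, image layout, and data are illustrative, not taken from fwBayer.

```cpp
// Minimal sketch of demosaicing the GB/RG Bayer pattern: green exists where
// (row+col) is even; elsewhere it is interpolated from the four green
// neighbors. Red and blue (not shown) come from fewer samples, hence more
// noise, as noted in the text.
#include <vector>
#include <cstdio>

// raw is a W x H single-channel Bayer image, row-major.
std::vector<double> greenChannel(const std::vector<double>& raw, int W, int H) {
    std::vector<double> g(W * H, 0.0);
    for (int r = 0; r < H; ++r)
        for (int c = 0; c < W; ++c) {
            if ((r + c) % 2 == 0) {
                g[r * W + c] = raw[r * W + c];      // green sampled here
            } else {
                double sum = 0.0; int n = 0;        // average green neighbors
                if (c > 0)     { sum += raw[r * W + c - 1]; ++n; }
                if (c < W - 1) { sum += raw[r * W + c + 1]; ++n; }
                if (r > 0)     { sum += raw[(r - 1) * W + c]; ++n; }
                if (r < H - 1) { sum += raw[(r + 1) * W + c]; ++n; }
                g[r * W + c] = sum / n;
            }
        }
    return g;
}

int main() {
    const int W = 4, H = 4;
    std::vector<double> raw(W * H, 100.0);  // flat illustrative field
    std::vector<double> g = greenChannel(raw, W, H);
    std::printf("g(1,2) = %.1f\n", g[1 * W + 2]);
    return 0;
}
```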

A.5.10. ImaqBase (root\_LibraryBase\ImaqBase; run from compiler)
Similar to fwBase but for Imaq cameras. It links to Imaq.lib. The actual program I used with an Imaq camera for SIFM was called testImaq, described below.

A.5.11. MathBase (root\_LibraryBase\MathBase; non-executable)
This is code that includes numerical libraries for linking with other programs. For example, it includes the fast Fourier transform.

A.5.12. PSF (root\_LibraryBase\psf; run from command prompt)
Applies a sample point spread function to an input image, which acts as the object. Defocus is varied based upon the parameters of beam waist and axial position. It uses a geometrical PSF, so it does not properly model diffraction-limited cases.

A.5.13. Tga (root\_LibraryBase\tga; non-executable)
This is code for linking with other programs that allows reading and writing of Targa image files.

A.5.14. ReSIM (root\_SE_ALL\ReSim; run from compiler)
An Imaq grayscale camera program that was intended to help remove linear artifacts from SIM images. The idea is that the entire axial scan can be performed using SIM; then, after the planes of best focus are found, they can be re-illuminated selectively without a linear pattern. This had some success but is probably not really viable.

A.5.15. seCompose (root\_SE_ALL\seCompose; run from command prompt)
Once the selective illumination checkerboard images have been captured in a series along the z-axis, this program allows them all to be analyzed and composed into the final autofocus image.

A.5.16. seFiltered (root\_SE_ALL\seFiltered; run from command prompt)
Dr. Kim suggested passing the results of SIFM through a high-pass filter to remove background noise and neighbor blurring effects. This was accomplished here but was not pursued because it did not give very satisfying results. It appeared to work best as an edge enhancer.

A.5.17. seScan (root\_SE_ALL\seScan; run from compiler)
There are two ways to perform SIFM. One is to section on a per-frame basis by setting the noise threshold parameter globally and adjusting it until good overall focus is obtained. The other involves a double pass, in which the first set is used to determine the best planes of focus throughout and the second pass is the selective illumination. seScan was used for this with some success, but since the scan time is increased by a factor of two, this was deemed unsatisfactory.

A.5.18. testImaq (root\_SE_ALL\testImaq; run from compiler)
My most poorly named program: it began as a way to test using Imaq cameras and evolved into the selective illumination mainstay. It contains both my final and vestigial code for performing mapping and analysis of the checkerboard patterns.

A.5.19. Falsecolor (root\_SI_ALL\falsecolor; run from command prompt)
Combines the autofocus image and height map generated by siCompose into one false color image meant to help show all information simultaneously.

A.5.20. MakeColorGrid (root\_SI_ALL\MakeColorGrid; command prompt)
CSIM requires color grids, which can be various sizes. This program creates these grids with parameters including color and spatial period, for printing to 35 mm slides.

A.5.21. MCG2 (root\_SI_ALL\MCG2; command prompt)
Second version of MakeColorGrid. Easier to use and more flexible.

A.5.22. PhaseError (root\_SI_ALL\PhaseError; command prompt)
Analyzes the effect of improper phase alignment of the grid in SIM. Under perfect conditions, the three consecutive sine waves are out of phase by 2π/3; here, phase noise is added. This was used to create Fig. 5.8.

A.5.23. SI1 (root\_SI_ALL\SI1; command prompt)
Analyzes a series of CSIM frame captures to generate the focused axial images. This program must be modified in source if different color balancing values are needed. (The underlying three-phase sectioning computation is sketched below.)
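Programs such as SI1, SI3, and PhaseError share the standard three-phase sectioning computation of structured illumination [13]: given three frames with the grid shifted by 2π/3, the in-focus (modulated) light survives and the constant background cancels. The sketch below is illustrative; array handling and values are not taken from SI1 itself.

```cpp
// Minimal sketch of three-phase SIM sectioning [13]: per pixel,
//   Is = sqrt( (I1-I2)^2 + (I2-I3)^2 + (I3-I1)^2 ).
#include <cmath>
#include <vector>
#include <cstdio>

std::vector<double> sectioned(const std::vector<double>& I1,
                              const std::vector<double>& I2,
                              const std::vector<double>& I3) {
    std::vector<double> Is(I1.size());
    for (size_t i = 0; i < I1.size(); ++i) {
        double a = I1[i] - I2[i], b = I2[i] - I3[i], c = I3[i] - I1[i];
        Is[i] = std::sqrt(a * a + b * b + c * c);
    }
    return Is;
}

int main() {
    // One pixel under a sinusoidal grid at phases 0, 2pi/3, 4pi/3:
    // constant background 1.0 plus modulation depth 0.5 (in-focus light).
    const double PI = 3.14159265358979;
    std::vector<double> I1(1), I2(1), I3(1);
    I1[0] = 1.0 + 0.5 * std::cos(0.3);
    I2[0] = 1.0 + 0.5 * std::cos(0.3 + 2.0 * PI / 3.0);
    I3[0] = 1.0 + 0.5 * std::cos(0.3 + 4.0 * PI / 3.0);
    // The background cancels; the result depends only on modulation depth.
    std::printf("sectioned = %.4f\n", sectioned(I1, I2, I3)[0]);
    return 0;
}
```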

A.5.24. SI3 (root\_SI_ALL\SI3; command prompt)
Similar to SI1, but for three-phase illumination at just one axial position. This was used after standard SIM was performed with the SLM.

A.5.25. siAber (root\_SI_ALL\siAber; command prompt)
My initial attempt to analyze longitudinal chromatic aberration in SIM. A point spread function dependent upon wavelength is applied to the grid and the axial response is found. This program was never finished, due to finding the better method of Section 6.

A.5.26. siCompose (root\_SI_ALL\siCompose; command prompt)
Given the focused frames from SI1 or SI3, composes the autofocus image and a corresponding height (or depth) map. This is done by taking the brightest pixel from the focused frames at each lateral position; the height map is then made from indices to these planes, multiplied by a scale factor to maximize image contrast. (A sketch of this compose step appears after entry A.5.33 below.)

A.5.27. siNoise (root\_SI_ALL\siNoise; command prompt)
Given an image bitmap, subdivides it into three phase images composed of stripes, as would be done in perfectly focused SIM. Then adds a varying noise level to two of these images and computes the new SIM result. This was used to generate Fig. 4.3.

A.5.28. siPatterns (root\_SI_ALL\siPatterns; run from compiler)
In another attempt to remove linear artifacts from SIM, I tried patterns other than lines. My reasoning was that this may not have been tried previously, since a mechanical grid of more complicated patterns than sinusoidal lines might not have been easily manufactured. Such patterns are easy to make on the SLM, though. In addition to lines, I tried wavy lines, zig-zags, checkerboards, and even random patterns. In the end, the artifacts were inevitably present and reminiscent of the original light structure. In the case of the random patterns, the result was not sensible because of an unpredictable effective spatial period, so the axial response varied locally. An example for the wavy pattern is shown in Fig. A.5.1 below.

Figure A.5.1: Wavy structured light pattern on the SLM (left). This is phase-shifted horizontally and captured three times, as in SIM. The resulting focused image (right) shows artifacts at twice the base frequency, as in standard SIM.

A.5.29. Slide (root\_SI_ALL\slide; run from command prompt)
This simulation examines the effects of motion of the object during three-phase SIM acquisition. A moving object is equivalent to a moving grid, and if the object is a planar mirror, as is used to measure axial response, the simulation is particularly simple. A grid is drawn and its contribution to the virtual camera exposure is added, then the grid is slid a bit horizontally, added, etc., until the final result has been summed. Parameters allow exposure time and slide speed to be adjusted, as well as grid spatial period.

A.5.30. Slidefw (root\_SI_ALL\slidefw; run from command prompt)
Similar to Slide, above, but for axial (forward) motion. Here, a geometrical PSF is used to allow defocus with axial position. Together with Slide, this generated the simulation results discussed in Section 5.

A.5.31. Stokseth (root\_SI_ALL\stokseth; run from command prompt)
The approximation of choice for the theoretical sectioning response of SIM. This program allows many parameters to be varied, such as grid period, illumination wavelength, and axial scan range. It also allows chromatic aberration curves to be computed by including a wavelength range and microscope objective aberration data.

A.5.32. StructuredIllumination (root\_SI_ALL\StructuredIllumination; compiler)
My original Imaq code for SIM, eventually replaced by fwBayer for color images.

A.5.33. TestPatterns (root\_SI_ALL\TestPatterns; run from command prompt)
The CSIM of Section 3 requires camera calibration. The hard way of doing this (when spectral knowledge of the optical components is not available) is to measure the camera response to various shades of color. This program creates images having rectangles of these different colors, for later 35 mm slide printing.
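The compose step of siCompose (A.5.26) reduces to a per-pixel brightest-frame search over the focus stack. This is a minimal sketch; the function name, stack layout, and scale factor are illustrative, not from the siCompose source.

```cpp
// Minimal sketch of the siCompose step (A.5.26): at each lateral position,
// take the brightest pixel across the sectioned frames for the autofocus
// image, and record the winning plane index (times a contrast scale factor)
// as the height map.
#include <vector>
#include <cstdio>

void compose(const std::vector<std::vector<double> >& stack,  // focus stack
             int W, int H, double scale,
             std::vector<double>& autofocus, std::vector<double>& height) {
    autofocus.assign(W * H, 0.0);
    height.assign(W * H, 0.0);
    for (int p = 0; p < W * H; ++p) {
        size_t best = 0;
        for (size_t f = 1; f < stack.size(); ++f)
            if (stack[f][p] > stack[best][p]) best = f;
        autofocus[p] = stack[best][p];
        height[p] = best * scale;   // plane index -> height value
    }
}

int main() {
    // Two 2x2 frames: pixel 0 is brightest in frame 1, the rest in frame 0.
    std::vector<std::vector<double> > stack(2, std::vector<double>(4));
    double f0[4] = {10, 80, 70, 60}, f1[4] = {90, 20, 30, 40};
    for (int i = 0; i < 4; ++i) { stack[0][i] = f0[i]; stack[1][i] = f1[i]; }
    std::vector<double> af, h;
    compose(stack, 2, 2, 100.0, af, h);
    std::printf("pixel 0: value %.0f, height %.0f\n", af[0], h[0]);
    return 0;
}
```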

A.5.34. Angspec (root\angspec\; run from command prompt)
One project headed by Frank (Lingfeng Yu) involved applying the angular spectrum method to digital holography [88]. I worked on code to implement this, which is a way of representing the reconstruction as a sum of plane waves having different directions; in reciprocal space this results in an angular spectrum. This was going to be an easy second-author paper for me, but Frank was so fast that he got the code done before I did, so I told him to remove my name. This program was therefore never finished, although it would be convenient to have as a DLL for LabView.

A.5.35. Convolve (root\convolve\; run from command prompt)
Performs the convolution of two images, as was used for Appendix 4.

A.5.36. Dic3D (root\dic3D\; run from command prompt)
Integrates a contrast image, as generated by the differential interference contrast method, to obtain a three-dimensional version. Much like phase unwrapping, this is subject to inconsistencies and, in this form, only partially successful.

A.5.37. EFI (root\efi\; run from command prompt)
Given the unwrapped phase and amplitude images of an object, computes the extended focused image, or EFI [37].

A.5.38. Feedback (root\Feedback; run from compiler)
An early program meant to help understand the magnitude of feedback effects (see Appendix 2), employing several possible geometrical point spread functions. It simulates a closed loop with feedback and iteration of the PSF per cycle.

A.5.39. SetFocus (root\ForOthers\Alex\setfocus; run from command prompt)
Merges the focused parts from a series of axially acquired images into the autofocus image, using a variation of the contrast algorithm of A.5.1. The user can specify the rectangle size into which to subdivide the images. I applied this to some test images that Alex Khmaladze supplied from his scanning microscope, with some success, as can be seen in Fig. A.5.2. However, this approach is not original [90], so it was not included in submission for publication. Also, I applied a deconvolution program to the raw data to get the best preprocessed focus. SetFocus does well, but rectangular groups show substantial linear artifacts. Therefore, in the final version each pixel is evaluated individually by computing its local contrast from the sum of the contrast within the rectangle of the given size in which it is centered.
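A minimal sketch of the four-neighbor contrast metric shared by Autofocus (A.5.1) and SetFocus follows; the image layout and test data are illustrative, not taken from either program.

```cpp
// Minimal sketch of the contrast metric: at each pixel, sum the squared
// differences to the four nearest neighbors; the image (or local region)
// with the greatest total is taken as best focused.
#include <vector>
#include <cstdio>

double contrastSum(const std::vector<double>& img, int W, int H) {
    double total = 0.0;
    for (int r = 0; r < H; ++r)
        for (int c = 0; c < W; ++c) {
            double p = img[r * W + c];
            if (c > 0)     { double d = p - img[r * W + c - 1]; total += d * d; }
            if (c < W - 1) { double d = p - img[r * W + c + 1]; total += d * d; }
            if (r > 0)     { double d = p - img[(r - 1) * W + c]; total += d * d; }
            if (r < H - 1) { double d = p - img[(r + 1) * W + c]; total += d * d; }
        }
    return total;
}

int main() {
    const int W = 3, H = 3;
    double sharpData[9] = {0, 9, 0, 9, 0, 9, 0, 9, 0};  // high contrast
    double blurData[9]  = {4, 5, 4, 5, 4, 5, 4, 5, 4};  // low contrast
    std::vector<double> sharp(sharpData, sharpData + 9);
    std::vector<double> blur(blurData, blurData + 9);
    std::printf("sharp %.0f vs blurred %.0f\n",
                contrastSum(sharp, W, H), contrastSum(blur, W, H));
    return 0;
}
```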

Figure A.5.2: Butterfly eye. Left: a conventional image from midway along the axial scan. Middle: a deconvolved, SetFocus-flattened image composed of rectangles from different axial positions. Right: also SetFocus-flattened, but with each pixel evaluated individually rather than in rectangular groups.

A.5.40. Lase (root\ForOthers\Anali\Lase; run from command prompt)
Anali Makoui requested a simulation of a pumped laser for the LIDAR lab, including something called a chopper that blocks the beam part of the time, so I wrote one for her. This was done because the version she had programmed in a different language was not fast enough. She is using it now and has the most recent version.

A.5.41. Phase Image Pack (root\ForOthers\Chris\; run from command prompt)
Includes several programs written for Chris Mann to help massage holographic reconstructions into images suitable for compilation into movies. These are phase2bmp, untilt, normbmp, maxbmp, floor, and smooth. For example, normbmp evaluates a group of bitmaps and shifts them to all have the same background level, so that when plotted as height images they do not have vertical jitter between frames. He is probably the only person who will ever use them, but if interested, contact me for more information.

A.5.42. LPU (\root\ForOthers\Chris\lpu; run from command prompt)
The phase unwrapping program described in Appendix 3.

A.5.43. zeropad (\root\ForOthers\Chris\zeropad; run from command prompt)
Adds a border of zeros of specified thickness to a sequence of image files. This is sometimes useful to make file sizes into powers of two, to please fast Fourier transforms.

A.5.44. punwrap, punwrap2 (\root\ForOthers\Chris\; run from command prompt)
My attempts to implement some great-looking phase-unwrapping algorithms [86, 87]. They seem straightforward enough, but I was frustrated, and after several days of wasted effort decided to write my own (LPU). If the reader has these algorithms in coded form, please let me know so I can try them.

A.5.45. diffraction (\root\ForOthers\Justin\diffraction; run from command prompt)
In the summer of 2005, Dr. Kim accepted an undergraduate for a research project, and I was requested to serve as his mentor. The project I proposed for Justin was to generate a hologram from a computer program, have it printed on 35 mm film, and then view its reconstruction with a laser. He learned how to use LabView and was almost successful in making the holograms, but not quite. So I wrote something to simulate diffraction through small apertures, including an axial position dependence of the aperture based on grayscale level in a source image. For example, in Fig. A.5.3 the two circles are the same size but at different axial positions, so the diffraction patterns they produce at some reconstruction distance appear different. It should be noted that this program uses direct summation of point light sources, rather than the convolution theorem optimization, so it is inefficient, but functional. Eventually Justin was able to view reconstructions of more complicated holograms and completed his project.

Figure A.5.3: Simulated diffraction from circular apertures at different axial positions denoted by grayscale level. The apertures (left) result in the diffraction pattern in the image plane (right).

A.5.46. MakeHolo (\root\ForOthers\Nilanthi\makeholo; run from command prompt)
From an object amplitude image, reference amplitude image, and phase difference image (R-O) generated from four interferograms, calculates the hologram. This relates to the interferograms used for Section 7.

A.5.47. dllGauss (\root\LabViewDLLs\dllGauss; dynamic link library)
Noise reduction in holography experiments using a standard low-pass filter does not typically lead to good results. By transforming the hologram into its Fourier spectrum, applying a low-pass filter with a Gaussian envelope, and then transforming back into a hologram, I was able to obtain more properly filtered images. Fig. A.5.4 shows this procedure applied to the phase map of an onion cell. Unfortunately, I later found that this work was first done in 1997 [89].

Figure A.5.4: Spectral Gaussian low-pass filter applied to the onion phase map. The distance scale parameter shown in the upper right of each image is used to get the desired smoothing. Here, the value of 60 leads to substantially decreased noise but maintains important features, such as the thickness of the cell nucleus.
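The spectral filtering step itself is a per-coefficient Gaussian weighting. The sketch below shows only that step, with the forward and inverse transforms omitted (see the DFT sketch in Appendix 4); the function name and spectrum layout are illustrative, not the dllGauss source, and "scale" plays the role of the distance scale parameter of Fig. A.5.4.

```cpp
// Minimal sketch of the dllGauss filtering step: weight each Fourier
// coefficient of the hologram by a Gaussian envelope centered on zero
// frequency, then (not shown) inverse-transform back to a hologram.
#include <cmath>
#include <vector>
#include <complex>
#include <cstdio>

typedef std::complex<double> cd;

// spectrum is W x H, row-major; zero frequency assumed centered at (W/2, H/2).
void gaussianLowPass(std::vector<cd>& spectrum, int W, int H, double scale) {
    for (int r = 0; r < H; ++r)
        for (int c = 0; c < W; ++c) {
            double kx = c - W / 2.0, ky = r - H / 2.0;
            spectrum[r * W + c] *=
                std::exp(-(kx * kx + ky * ky) / (2.0 * scale * scale));
        }
}

int main() {
    const int W = 8, H = 8;
    std::vector<cd> spec(W * H, cd(1.0, 0.0));  // flat illustrative spectrum
    gaussianLowPass(spec, W, H, 2.0);
    std::printf("center %.3f, corner %.6f\n",
                spec[(H / 2) * W + W / 2].real(), spec[0].real());
    return 0;
}
```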

A.5.48. dllTriNoise (\root\LabViewDLLs\dllTriNoise; dynamic link library)
In multi-wavelength digital holography, holograms recorded at different wavelengths are combined to extend the unambiguous depth measurement to the beat wavelength of the pair. A good way to test this is by imaging a tilted mirror, the height of which will appear plotted as a sawtooth wave, where discontinuities in the measurement occur at 2π ambiguities. This plot will of course contain noise, and by subtracting the underlying perfect sawtooth form, the noise can be extracted. This is done by fitting a line with the correct slope to match the waveform, and was accomplished in dllTriNoise. Nilanthi Warnasooriya used this to find the noise level of some of her data in her candidacy, though I do not know if it is being used any longer.

Appendix 6: Geometrical Point Spread Function

Beginning with the paraxial approximation to the Helmholtz equation, the electric field of the Gaussian beam at position r is

$E(\mathbf{r}) = A(\mathbf{r})\, e^{ikz}$   (A.6.1)

with A(r) given by

$A(\mathbf{r}) = \frac{A_1}{q(z)}\, e^{ik\rho^2 / 2q(z)}$   (A.6.2)

Here A_1 is a constant, ρ² = x² + y², and q(z) is defined by

$\frac{1}{q(z)} = \frac{1}{z - iz_0} = \frac{z + iz_0}{z^2 + z_0^2}$   (A.6.3)

Thus (A.6.2) becomes

$A(\mathbf{r}) = A_1\, \frac{z + iz_0}{z^2 + z_0^2}\, \exp\!\left[\frac{ik\rho^2 (z + iz_0)}{2(z^2 + z_0^2)}\right]$   (A.6.4)

or

$A(\mathbf{r}) = A_1\, \frac{z + iz_0}{z^2 + z_0^2}\, \exp\!\left[\frac{ik\rho^2 z}{2(z^2 + z_0^2)}\right] \exp\!\left[\frac{-k\rho^2 z_0}{2(z^2 + z_0^2)}\right]$   (A.6.5)

Obtain the point spread function for intensity via I(r) = E E*, resulting in

$I(\mathbf{r}) = \frac{A_1^2}{z^2 + z_0^2}\, \exp\!\left[\frac{-k\rho^2 z_0}{z^2 + z_0^2}\right]$   (A.6.6)

The wavenumber is k = 2π/λ and the Rayleigh range is given by

$z_0 = \frac{\pi w_0^2}{\lambda}$   (A.6.7)

In SIM experiments, a sinusoidal or linear grid is focused onto the object. To obtain the point spread function, I used the focused size of a single pixel in the grid aperture, that is,

$w_0 = w_{pixel} / M$   (A.6.8)

Here M is the magnification between the grid and the object, and w_pixel for the 35 mm slide and the SLM is 10.6 μm and 15 μm, respectively.

Thus relation (A.6.6) can be evaluated to within a scale factor, so a square array of weights can be calculated as the intensity PSF, normalized, and scaled by source pixel value to add the image of the point to the output image. This is accomplished in the program psf listed in Appendix 5 (a minimal sketch of the weight calculation appears at the end of this appendix). I applied this to a circular aperture with a pixel size of 10.6 microns at a magnification of 10X, with a wavelength of 550 nm, at axial distances of 0.1, 10, 100, and 1000 μm from the aperture. Here is the result:

Figure A.6.1: PSF applied to a circular hole at 0.1, 10, 100, and 1000 μm distance with the array size fixed at 7.

The four circles above are with a fixed PSF array size, i.e., the area over which the point spreads out. Next I made the computer automatically determine the array size, with a maximum spread of 301 pixels, by requiring that the sum of weights in the central square (of the total PSF area) be greater than 0.9 (that is, 90% of the total). I redid the above with the dynamic PSF size and got a much better, and expected, result:

Figure A.6.2: PSF with dynamic array size. Note the rightmost area is dim, not totally black. The corresponding array sizes were 7x7, 11x11, 63x63, and 301x301 (301 is saturated).

Since the final distance of 1 mm led to a maximal array size, I observed that the fraction contained in the center square for this case was 0.49, rather than the otherwise required 0.90. However, at such far distances not much can be seen, so this is beyond the reasonable scope of this program and, as such, is acceptable.
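The weight calculation of Eq. (A.6.6) reduces to filling a square array with Gaussian values and normalizing. This is a minimal sketch, not the psf program itself; the parameters follow the worked example in the text (10.6 μm pixel demagnified 10X, 550 nm illumination), while the sample spacing and fixed 7x7 array size are illustrative.

```cpp
// Minimal sketch of the geometrical PSF weights of Eq. (A.6.6): a square
// array of Gaussian weights whose width grows with axial distance z,
// normalized to unit sum before being scaled by the source pixel value.
#include <cmath>
#include <vector>
#include <cstdio>

int main() {
    const double PI = 3.14159265358979;
    const double lambda = 0.550;              // um
    const double w0 = 10.6 / 10.0;            // um, w_pixel / M (Eq. A.6.8)
    const double z0 = PI * w0 * w0 / lambda;  // um, Rayleigh range (Eq. A.6.7)
    const double k = 2.0 * PI / lambda;       // 1/um
    const double z = 10.0;                    // um, axial distance
    const double pixel = 1.06;                // um per PSF sample (assumed)
    const int half = 3;                       // 7x7 array as in Fig. A.6.1

    std::vector<double> w((2 * half + 1) * (2 * half + 1));
    double sum = 0.0;
    for (int r = -half; r <= half; ++r)
        for (int c = -half; c <= half; ++c) {
            double rho2 = (r * r + c * c) * pixel * pixel;
            double v = std::exp(-k * rho2 * z0 / (z * z + z0 * z0))
                       / (z * z + z0 * z0);   // Eq. (A.6.6), up to a constant
            w[(r + half) * (2 * half + 1) + (c + half)] = v;
            sum += v;
        }
    for (size_t i = 0; i < w.size(); ++i) w[i] /= sum;  // normalize to 1
    std::printf("center weight %.4f, corner weight %.6f\n",
                w[half * (2 * half + 1) + half], w[0]);
    return 0;
}
```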

Appendix 7: Setup for Reflective Mask

The spatial light modulator used as a mask throughout my experiments was transmissive. Most are reflective, so one improvement to this work could be to use a reflective SLM setup. Figure A.7.1 shows a schematic. The obvious disadvantage is that the beam intensity decreases by a factor of two each time it traverses the beam splitter. This is offset by the fact that transmissive modulators also absorb a large fraction of incident light (the Kopin CyberDisplay is only 7% transmissive), while reflective modulators have high light efficiency, approaching 100%.

Figure A.7.1: Setup for reflective spatial light modulator.

Appendix 8: List of Accomplishments

[1] L. G. Krzewina and M. K. Kim, Single-exposure optical sectioning by color structured illumination microscopy, Opt. Lett. 31, 477-479 (2006)

[2] L. G. Krzewina and M. K. Kim, Dynamic structured illumination microscopy: focused imaging and optical sectioning for moving objects, oral presentation (by MKK); SPIE conference proceedings (2006)

[3] L. G. Krzewina and M. K. Kim, Color structured illumination microscopy for imaging in noisy environments, poster presentation; Optical Society of America Biomedical conference proceedings (2006)

[4] L. G. Krzewina and M. K. Kim, Three-dimensional microscopy by selective illumination with feedback, poster presentation; CLEO conference proceedings (2005)

[5] L. G. Krzewina and M. K. Kim, Optical sectioning by selective illumination feedback microscopy, J. Mod. Opt., in review (2006)

[6] L. G. Krzewina and M. K. Kim, Chromatic aberration in structured illumination microscopy, Opt. Comm., to be submitted (2006)

[7] L. G. Krzewina and W. C. Saslaw, Minimal spanning tree statistics for the analysis of large-scale structure, Mon. Not. R. Astron. Soc. 278, 869-876 (1996)

[8] R. K. Smith, L. G. Krzewina, and D. P. Cox, Time-dependent cooling and grain destruction in hot dusty plasmas: a simplified model and principal results, Ap. J. 473, 864-872 (1996)

About the Author

Leo Krzewina lives with his wife Mary Lynn in the Tampa Bay area. He grew up on a dairy farm in Wisconsin and received Bachelor of Science degrees in physics and in mathematics from UW-Madison in 1993. He was awarded a Master of Arts degree in astronomy from the University of Virginia in 1995, where he also taught ballroom dancing. At this point he temporarily broke from the physical sciences to move to sunny Florida and make videogames for Gorilla Systems Corporation, where he soon became lead programmer. His work on countless Barbie titles that sold millions of copies has made Krzewina an almost household name. In 2001 he completed his first independent computer game, entitled Colors of War, which was published by eGames, Inc. That same year he resumed graduate studies at the University of South Florida, and eventually selected the field of optical physics for doctoral work.

