WAVELET-BASED MONITORING AND ANALYSIS OF CARDIORESPIRATORY RESPONSE TO HYPOXIA

by

Vuslat Nazilli

A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Industrial Engineering, Department of Industrial and Management Systems Engineering, College of Engineering, University of South Florida

Major Professor: José L. Zayas-Castro, Ph.D.
Tapas K. Das, Ph.D.
Kendall F. Morris, Ph.D.
A.N.V. Rao, Ph.D.

Date of Approval: July 21, 2005

Keywords: principal curve, intermittent hypoxia, multichannel signal monitoring, factor analysis, wavelet transformation

Copyright 2005, Vuslat Nazilli

DEDICATION

To My Family

ACKNOWLEDGEMENTS

I would like to thank my advisor, Dr. José L. Zayas-Castro, for the support and mentoring he has generously provided since I started my graduate studies at USF. Dr. Zayas-Castro has always kept my best interests in mind, and for that I sincerely thank him. I also want to thank Dr. Qiang Huang for introducing me to this interesting research topic and allowing me to work on his method. I thank Dr. Tapas K. Das, Dr. A.N.V. Rao, and Dr. Kendall F. Morris for serving on my committee and for their valuable time. I sincerely appreciate Dr. Morris's support in providing the data for my thesis and explaining the physical mechanisms related to my research. Last but not least, I would like to thank my family and friends for their never-ending love and support through all the struggles.
TABLE OF CONTENTS

LIST OF FIGURES

LIST OF TABLES

ABSTRACT

CHAPTER 1. INTRODUCTION
1.1 A Physiological Application
1.1.1 Effects of Hypoxia
1.1.2 Description of Data
1.2 Thesis Organization

CHAPTER 2. LITERATURE REVIEW
2.1 Research on the Analysis of Interaction between Biological Systems
2.1.1 Time-domain Methods
2.1.2 Frequency-domain Methods
2.1.3 Phase Locking
2.2 Wavelet Analysis
2.2.1 Basics of Multiresolution Analysis (MRA) and Wavelet Transformation
2.3 Principal Curves

CHAPTER 3. WAVELET TRANSFORMATIONS OF CARDIORESPIRATORY SIGNALS AND PRE-ANALYSIS
3.1 Extraction of Detail and Approximation Coefficients
3.2 Analysis of Energy Redistributions in Wavelet Coefficients

CHAPTER 4. DETECTION OF MULTISCALE CHANGES IN MULTICHANNEL DATA BY PRINCIPAL CURVE REGRESSION TEST (PCuR-TEST)
4.1 PCuR-Test Algorithm
4.2 Detection of Changes in Wavelet Coefficients by PCuR-Test

CHAPTER 5. CONCLUSIONS AND FUTURE RESEARCH
5.1 Conclusions
5.2 Future Research
5.2.1 Factor Analysis
5.2.2 Multiple Battery Factor Analytic Model of Common Variation Patterns

REFERENCES

LIST OF FIGURES

Figure 1.1 Schematic Representation of the Procedure Followed in this Thesis
Figure 1.2 Schematic Representation of Effects of OSA (*OSA: Obstructive Sleep Apnea, LV: Left Ventricular, BP: Blood Pressure, HR: Heart Rate)
Figure 1.3 1-min. Phrenic Nerve, Lumbar Nerve and Blood Pressure Signals Recorded Before Hypoxic Stimuli
Figure 2.1 A Damped Sinusoidal Signal and Its Representation on the Unit Circle
Figure 2.2 Some Wavelet Families
Figure 2.3 Three-level Wavelet Decomposition Tree
Figure 2.4 DWT of a Digital Signal
Figure 2.5 A Noisy Signal and the Reconstructed Signal After Denoising
Figure 2.6 A Sample Signal and the Reconstructed Form After Hard- and Soft-thresholding
Figure 2.7 A Sample Principal Curve by the HS Definition
Figure 3.1 Level-5 Decomposition of a Before-Hypoxia Phrenic Nerve Signal
Figure 3.2 Level-5 Decomposition of a Before-Hypoxia Lumbar Nerve Signal
Figure 3.3 Level-5 Decomposition of a Before-Hypoxia Blood Pressure Signal
Figure 3.4 Original and Thresholded Coefficients and Original and Reconstructed Signals for a Phrenic Nerve Signal
Figure 3.5 Original (red) and Reconstructed (blue) Forms of Lumbar Nerve and Blood Pressure Signal Samples
Figure 3.6 Selection of Wavelet Coefficients
Figure 3.7 Relative Energy Distributions at Different Scales of a Phrenic Nerve Signal
Figure 3.8 Relative Energy Distributions at Different Scales of a Lumbar Nerve Signal
Figure 3.9 Relative Energy Distributions at Different Scales of a Blood Pressure Signal
Figure 3.10 Total Energy Distributions for Phrenic Nerve, Lumbar Nerve, and Blood Pressure Signals Respectively (Dashed Line Indicates the Hypoxic Period)
Figure 4.1 Principal Curve Regression
Figure 4.2 PCuR-Test Results for Some Samples from the Validation Dataset
Figure 4.3 PCuR-Test Results for Some Samples from the Validation Dataset
Figure 4.4 PCuR-Test Results for Some Samples from the Test Dataset
Figure 4.5 PCuR-Test Results for Some Samples from the Test Dataset
Figure 4.6 Relative Efficiency Graph for Selected Levels
Figure 5.1 Plots of Unrotated and Varimax-rotated Factor Loadings for a Sample Dataset

LIST OF TABLES

Table 3.1 Selected Wavelet Coefficients
Table 3.2 Mapping between Levels and Frequency/Period Values for the Daubechies-4 Wavelet

WAVELET-BASED MONITORING AND ANALYSIS OF CARDIORESPIRATORY RESPONSE TO HYPOXIA

Vuslat Nazilli

ABSTRACT

Obstructive sleep apnea is a potentially life-threatening condition characterized by repetitive episodes of upper airway obstruction that occur during sleep, usually associated with a reduction in blood oxygen saturation. In the US population, 9% of women, 24% of men, and 2% of children have been diagnosed with obstructive sleep apnea, suggesting that 18 million people may suffer from the consequences of nightly episodes of apnea. One of the most significant symptoms of obstructive sleep apnea is profound and repeated hypoxia.
The analysis of the interaction between cardiovascular and respiratory signals has been a widely explored area of research due to the significance of the results in describing a functional relationship between the underlying physiologic systems; however, statistical and analytical approaches to analyze the changes in these signals before and after hypoxia are still in their early stages of evolution. A major motivation for this research has been the lack of methodologies to detect mean and/or variance shifts and to identify root sources of variation in the time-frequency characteristics of multichannel data. The contributions of this thesis are twofold. First, multiscale energy distributions based on wavelet transformations of the analyzed physiological signals are examined. This is followed by the development of an online multichannel monitoring approach, based on principal curves, that detects changes in the wavelet coefficients extracted from the analyzed signals.

CHAPTER 1. INTRODUCTION

Today's advanced multi-electrode array technologies allow researchers to simultaneously obtain large amounts of multichannel signal data. It is known that physiological signals have multiscale properties and generally represent processes with different localizations in time and frequency. External factors or faults may cause certain changes, such as mean shifts and/or variance shifts, in these signals simultaneously, yet affect the signals at different scale levels. This type of data contains important but unobservable diagnostic information about the underlying mechanisms or sources of variation that contribute to process variability, resulting in distinct variation patterns in the data. The effects of these variation sources can be categorized as spatial (describing the interaction of different signals) and temporal (indicating the evolution of a variation source through time).
The identification or diagnosis of these underlying mechanisms can only be accomplished through an interpretable separation of the different variation patterns. Wavelet decomposition has proven to be an effective approach for obtaining time-frequency localized information from both stationary and nonstationary signals. This thesis aims to exploit this advantage of wavelet analysis for the detection of spatiotemporal changes, by integrating the multiscale information from multiple signals, and to identify the variation sources that cause these changes. The analysis procedure followed throughout this thesis is shown in Figure 1.1. Section 1.1 includes a brief explanation of the analyzed problem and the motivation for undertaking this study. Section 1.1.1 explains the importance of exploring the potential effects of hypoxia on the malfunction of the cardiovascular and respiratory systems and gives a short introduction to the characteristics of hypoxic effects. Section 1.1.2 gives a description of the dataset to be used in the subsequent analysis steps. Computations throughout the document have been performed in MATLAB and R.

Figure 1.1 Schematic Representation of the Procedure Followed in this Thesis [flowchart; first steps: input data (multichannel signals), then discrete wavelet transformation and extraction of wavelet and scaling coefficients]

1.1 A Physiological Application

It is well known that the cardiovascular and respiratory systems do not act independently, although the nature of this interdependence is not completely understood. Most of the previous studies related to the analysis of hypoxic effects on these systems have been constrained to the analysis of the systems individually, using standard statistical methods. The studies related to the analysis of the interaction of these systems, on the other hand, do not include an online detection and exploration of changes in the relation between these systems subsequent to hypoxic conditions.
The goal of this research is to detect and identify root causes of changes in the mutual activity of these two vital systems by integrating their responses, occurring at different scales, to hypoxic stimulation.

[Figure 1.1, remaining flowchart steps: denoising by thresholding and formation of training and test datasets from significant coefficients; detection of out-of-control coefficients (scale levels) by PCuR-Test; multiscale root cause identification by multiple battery factor analysis]

The next subsection describes hypoxia and the different types of effects that it is known or hypothesized to exert on cardiorespiratory systems.

1.1.1 Effects of Hypoxia

Sleep-related breathing disorders represent a variety of abnormalities ranging from simple snoring to obstructive sleep apnea. Obstructive sleep apnea (OSA) is a potentially life-threatening condition characterized by repetitive episodes of upper airway obstruction that occur during sleep, usually associated with a reduction in blood oxygen saturation. The acute hemodynamic alterations of OSA include systemic and pulmonary hypertension, increased right and left ventricular afterload, and increased cardiac output. Figure 1.2 illustrates potential effects of OSA on cardiorespiratory mechanisms. The risks of undiagnosed obstructive sleep apnea include heart attacks, strokes, impotence, irregular heartbeat, pulmonary and systemic hypertension, and heart disease. One of the most significant symptoms of obstructive sleep apnea is profound and repeated hypoxia during sleep.

The ventilatory and cardiovascular responses during or following hypoxia are highly complex in mammals and are hypothesized to result from an interplay between several time-dependent facilitatory and inhibitory mechanisms. Some characteristics of the mechanisms contributing to responses in these systems during or following hypoxia can be differentiated according to the following criteria:

i. Pattern, intensity, and duration of the hypoxic exposure that triggers them
ii. Time domain of their effects (milliseconds to years)
iii. The direction of their effects (facilitatory vs. inhibitory)
iv. Type of stimulus (hypoxia vs. carotid sinus nerve (CSN) stimulation)
v. The subject model (species of the test animal, state of consciousness, etc.)

Figure 1.2 Schematic Representation of Effects of OSA (*OSA: Obstructive Sleep Apnea, LV: Left Ventricular, BP: Blood Pressure, HR: Heart Rate) [diagram; nodes include intrathoracic pressure, hypoxemia, arousal, LV transmural pressure, LV afterload, myocardial contractility, sympathetic tone, hypertension, BP, HR, LV hypertrophy, myocardial O2 supply and demand, cardiac impairment, and cardiac failure, with acute and chronic pathways]

To be specific, three types of ventilatory responses have been identified in mammals after brief (seconds to minutes) hypoxic exposures based on these criteria [2]: the acute response (AR), short-term potentiation (STP), and short-term depression (STD). AR corresponds to the immediate increase and decrease in respiratory activity at the onset and completion of the hypoxic stimulus, respectively. STP is a further increase in respiratory activity, characterized by an increase in the amplitude and frequency of ventilation for a period of seconds to minutes, and is a condition mostly observed in experiments with electrical stimulation of the CSN of cats, dogs, and rats. STD, usually lasting from many seconds to a few minutes, describes a transient decrease in the frequency of the phrenic nerve response at the termination of the hypoxic stimulus and has been observed only in anaesthetized rats [3].

Responses observed in phrenic nerve activity during or after repeated episodes of hypoxic stimuli can be categorized into two types: progressive augmentation (PA) and long-term facilitation (LTF). PA represents an increase in the amplitude of the hypoxic ventilatory response and has been observed in a few cases, including the inspiratory activity of anaesthetized cats.
LTF refers to a progressive increase, lasting from many minutes to several hours, in respiratory output during non-stimulated intervals between successive hypoxic stimuli [5]. It has been observed that LTF is induced by repeated hypoxia, chemical stimulation of the carotid chemoreceptors, and electrical stimulation of the carotid sinus nerve or brain stem midline, but not by hypercapnia. The mechanisms behind LTF have not been completely revealed yet; however, it may have a significant role in maintaining stable respiration during sleep. Some of the respiratory mechanisms identified in response to a hypoxic stimulus lasting from several minutes to years are hypoxic ventilatory decline (HVD), ventilatory acclimatization to hypoxia (VAH), ventilatory deacclimatization from hypoxia (VDH), and hypoxic desensitization (HD).

The effects of intermittent hypoxia on the cardiovascular system are far less clear than those of continuous hypoxia. Recent studies support the hypothesis that repetitive airway obstruction can cause systemic hypertension [6]. A review of the beneficial and adverse effects of intermittent hypoxia on blood pressure is available in [7]. A description of the physiological data is presented next.

1.1.2 Description of the Data

The data have been provided by the Department of Physiology at the University of South Florida College of Medicine. The dataset consists of two samples of phrenic nerve, lumbar nerve, and arterial blood pressure signals recorded before and after a hypoxic stimulus generated by repeated carotid chemoreceptor stimulation in a vagotomized, artificially ventilated adult cat. The signals were amplified and digitized at a 40 Hz sampling frequency. Phrenic and lumbar nerve discharges were integrated with a leaky resistance-capacitance circuit to obtain a moving time average of activity in the nerves. Detailed information related to the procedures and methods is available in [5].
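In discrete time, a leaky resistance-capacitance integration of this kind behaves like a first-order recursive filter. The sketch below is an illustrative reconstruction, not the laboratory's circuit or code: the 40 Hz rate matches the recordings, but the time constant and the synthetic burst train are invented for the example.

```python
import math

def leaky_integrator(x, tau, fs=40.0):
    """First-order leaky integration y[n] = a*y[n-1] + (1-a)*x[n],
    where a = exp(-1/(tau*fs)) mimics an RC circuit with time constant
    tau (seconds) sampled at fs Hz."""
    a = math.exp(-1.0 / (tau * fs))
    y, out = 0.0, []
    for sample in x:
        y = a * y + (1.0 - a) * sample
        out.append(y)
    return out

# Rectified burst-like "nerve discharge": 2 active samples out of every 10.
raw = [1.0 if n % 10 < 2 else 0.0 for n in range(200)]
envelope = leaky_integrator(raw, tau=0.1)  # 100 ms time constant (assumed)
```

The output rises during each burst and decays between bursts, producing the kind of smooth moving time average of nerve activity described above.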
Figure 1.3 displays 1-min. samples of the integrated phrenic nerve, lumbar nerve, and blood pressure signals recorded before the hypoxic stimuli.

Figure 1.3 1-min. Phrenic Nerve, Lumbar Nerve and Blood Pressure Signals Recorded Before Hypoxic Stimuli [three time-series panels over one minute; approximate signal ranges: phrenic nerve 0-2000, lumbar nerve 900-1300, blood pressure 800-1600 (arbitrary units)]

1.2 Thesis Organization

This thesis is organized as follows. Chapter 2 presents a review of the methodologies that have been proposed for the analyzed problem and introduces the components of the methodologies proposed in this thesis. Chapter 3 explains the procedure followed in extracting wavelet coefficients from the phrenic nerve and blood pressure signals and presents a multiscale energy analysis of the physiologic signals in response to hypoxia. Chapter 4 introduces the Principal Curve Regression Test (PCuR-Test) as a tool to detect changes in the wavelet coefficients obtained from the phrenic nerve and blood pressure signals and analyzes the test results. Chapter 5 discusses future research directions and presents an approach based on Multiple Battery Factor Analysis for the diagnosis of common variation patterns in multichannel data.

CHAPTER 2. LITERATURE REVIEW

Previous research on the analysis of the interaction between respiratory and cardiovascular mechanisms has been performed under the domain of detection of synchronization between these two systems. This chapter includes a brief literature review of the synchronization studies for neurophysiologic systems, organized by the domain in which the systems have been analyzed. This is followed by a brief explanation of wavelet analysis with applications in process monitoring.
The following sections describe the principal curve and factor analysis methods, which are used in the methodologies introduced in Chapters 4 and 5, respectively.

2.1 Research on the Analysis of Interactions between Biological Systems

Most of the previous research on the analysis of the interrelationships between biological systems has been performed in the field of synchronization analysis. Synchronization has been a well-studied concept since its discovery by C.H. Huygens [8] and has been used in the modeling of many biological systems. Detection of synchrony between pairs of oscillatory signals has been a problem of interest in many neuronal activities. Most of these studies take the form of correlation detection between two neuronal signals in order to find a functional relationship between them.

One of the most interesting and important problems in this area has been the interaction of the cardiovascular and respiratory systems. It has been known for more than a century that heart rate is modulated by a respiratory-related rhythm, a phenomenon referred to as Respiratory Sinus Arrhythmia (RSA) [9]. The mechanisms resulting in RSA are found to be the modulation of baroreflex inputs to the autonomic neurones within the brainstem by ventilation, feedback from arterial baroreceptors, and a central rhythm generator in the brainstem [10]. Recent work suggests that RSA may improve energy efficiency by matching perfusion to ventilation within each respiratory cycle [11]. Another interaction between heart beats and respiratory activity is cardioventilatory coupling (CVC), which can be defined as mutual time conformity or synchronization of respiratory-cycle phase with cardiac-cycle phase, observed in subjects under low arousal (i.e., sleep, anaesthesia, sedation). Previous studies have described CVC as the triggering of inspiratory timing after a constant interval following a heart beat [13].
The existing approaches to the analysis of temporal correlations between neural signals can be categorized into three groups, based on the analysis of the time domain, the frequency domain, or the nonlinear dynamics of phase synchronization. The traditional idea in all categories has been to obtain a correlation coefficient, using either special marker events (e.g., the times of R peaks in ECG signals) or the original raw data, and to evaluate its statistical significance. In the following sections, the synchronization studies on the interaction of the cardiovascular and respiratory systems are reviewed.

2.1.1 Time-domain Methods

A standard method used in time-domain analysis when the data consist of spike trains is the cross-correlogram or its shuffle-corrected version [14]; however, it has been observed that interpretation of the peaks in a cross-correlogram is quite ambiguous due to the possibility of various factors causing those peaks [15]. Lindsey et al. [16] and Arata et al. [17] used the gravity method in order to identify correlations between neuronal assemblies and to visualize the fluctuations in baroresponsive neuronal assemblies during the respiratory cycle and baroreceptor stimulation, but were not able to make inferences about the strength of synchrony and the duration of correlated activity in these studies. Orem and Dick [18] proposed the use of an index, defined as the ratio of the variance occurring across fractions of the activity of a respiratory neuron to the total variance, to quantify the degree of respiratory-modulated activity in that neuron as well as the consistency of this modulation from breath to breath. Dick and Morris [19] modified this index to assess the strength and consistency of arterial pulse modulation of respiratory-modulated neural activity.

2.1.2 Frequency-domain Methods

A common method used in frequency-domain analysis is the coherence spectrum.
In [20, 21], heart rate fluctuations were analyzed by power spectrum in order to assess the functioning of short-term cardiovascular control systems. Galletly et al. [22] explored the relationship between CVC and simple spectral and nonlinear measures (i.e., fractal dimension and approximate entropy) of HRV (Heart Rate Variability), motivated by the idea that CVC should have an effect on the pattern of HRV as well as on inspiratory timing, and concluded that these methods were ineffective in revealing the presence of CVC in heart rate time series. Another set of tools used to analyze interactions between two (or more) signals in the frequency domain has been spectral open-loop and closed-loop models [23]. Open-loop models simply define a linear input/output relationship between signals through a single transfer function in the frequency domain. Closed-loop models have been developed in the frequency domain to model the feedforward mechanisms of cardiovascular control [24, 25].

2.1.3 Phase Locking

The third group of approaches is aimed at revealing the nonlinear dynamical interdependence between coupled neural signals, which cannot be described by methods based on linear correlations like cross-correlation. The studies in this domain brought about the new concept of phase locking, defined in general as

n·φ_1(t) − m·φ_2(t) = constant   (2.1)

where φ_1(t) and φ_2(t) denote the phase series of the analyzed signals, and the integers n and m describe the orders of the synchronous states (i.e., the ratio of the frequencies of the signals is n:m). The values of n and m are usually determined by trial and error, using relative frequency plots. A number of nonlinear statistical measures such as mutual information [26, 27], maximal correlation, phase difference entropy [28], and phase coherence [29] have been suggested for the phase synchronization analysis of neural signals.
The motivation behind using these measures in synchronization analysis is the fact that, unlike conventional statistical approaches in the time or frequency domains, they discard the effect of amplitude correlations in order to detect correlations in timing only. In another neurophysiologic problem, Schiff et al. [30] proposed the use of mutual prediction in order to verify the existence of nonlinear coupling in a neural ensemble.

A general problem encountered in the synchronization analysis of neurophysiologic systems is that the temporal correlation between signals from such systems is usually disturbed by nonstationarity in the interacting systems and/or their coupling after some time, which conflicts with the underlying stationarity assumption of the standard statistical methods used in this analysis. The possibility of such nonstationarities imposes a difficulty on the selection of an optimal analysis period length, as well as on the conclusions related to the synchronization between these systems. Although the analysis of more oscillation cycles would result in a more reliable analysis under stationarity conditions, in the case of nonstationarity, longer observations will prevent the detection of weaker or shorter interactions. One common method of dealing with this situation is to use sliding windows in the estimation of the correlation coefficient. Schafer et al. [31] and Hurtado et al. [27] addressed the nonstationarity problem of the analyzed oscillatory systems by using Hilbert transforms of the signals in the phase reconstruction of the time-series data, since this method does not require stationarity of the transformed signals. A tool used in defining the phase of an arbitrary signal is known in the signal processing arena as the analytic signal concept [32]. This
A review of other possible phase extraction methods is available in [33]. Figure 2.1 A Damped Sinusoidal Signal and its Representation on Unit Circle Another issue that should be taken into consideration in this analysis is whether or not to base the analysis on singletrial or multitrial data. This factor has significant implications on the reliability of the estimated statistics because, particularly in the case of neurophysiologic data, the interactions may display great variability from trial to trial or may last for a very short duration. Despite the extensive literature on the interaction between respiration and cardiovascular systems, the exact relationship between CVC and RSA still remains unknown. There is a general belief that the interactive relationship between CVC and RSA forms a complex feedback and feedforward mechanism; however they are distinct processes rather than just the reverse of each other [30, 31]. Galletly and Larsen [34] suggested that CVC aligns heart beats relative to inspiratory timing such that heart beats occur at positions in the ventilatory cycle where they were maximally affected by RSA, which might be due to the role of CVC in improving the variation in the heart beat interval with respiration during sleep. Galletly and Larsen [35] also noted that RSA appears to persist at all levels of arousal whereas CVC is observed best at low arousal and that RSA diminishes with age while there is no effect of age on CVC. In [31], the relationship between cardiorespiratory synchronization and RSA is analyzed using synchrograms, a graphic 12 PAGE 21 tool, and it is suggested that these phenomena might be stimulated by two competing physiological mechanisms. Dick and Morris [19] observed a correspondence between respiratoryand pulsemodulated activities and indicated that the hypothesized modulations may not be the only factors driving this association. 
The next section explains the basic concepts of the wavelet transformation and reviews its applications in statistical process monitoring.

2.2 Wavelet Analysis

Wavelets literally mean small waves. Wavelets are smooth continuous functions that form building blocks in function decompositions, in a way similar to the sine and cosine functions in the Fourier transform. Unlike the sine and cosine bases used in Fourier transforms, which are perfectly localized in frequency, wavelets are localized in both time and frequency, decaying to zero outside a finite interval (compact support). This property of wavelets enables the modeling of irregular or hidden patterns in signals and the extraction of process information at different frequency levels at different times. The ability of wavelets to zoom in on details is also of significant value in visualizing complex data; on the other hand, details can also be suppressed easily, and thus wavelets can be used for data smoothing. Figure 2.2 displays the Daubechies-4, Daubechies-3, and Symmlet-8 wavelet functions.

Figure 2.2 Some Wavelet Families

The next section reviews the fundamental mathematical properties of wavelets and wavelet transformations. For a more detailed description, the reader is referred to [37, 38].

2.2.1 Basics of Multiresolution Analysis (MRA) and Wavelet Transform

A multiresolution analysis involves a decomposition of the function space $L^2(\mathbb{R})$ into a sequence of closed subspaces $V_j$, $j \in \mathbb{Z}$, such that

i. Each subspace is contained in all higher subspaces, i.e.
$\cdots \subset V_{-2} \subset V_{-1} \subset V_0 \subset V_1 \subset V_2 \subset \cdots$  (2.2)

ii. Every $f(t) \in L^2(\mathbb{R})$ is included at the finest resolution and only the zero function at the coarsest level, i.e.
$\overline{\bigcup_{j} V_j} = L^2(\mathbb{R})$, $\quad \bigcap_{j} V_j = \{0\}$

iii. $f(t) \in V_j \iff f(2t) \in V_{j+1}$ (scale or dilation invariance),

iv. $f(t) \in V_0 \iff f(t-k) \in V_0$ (translation or shift invariance),

v.
There exists a function $\phi(t)$, called the scaling function, such that $\{\phi(t-k)\}_{k \in \mathbb{Z}}$ is an orthonormal basis of $V_0$.

Following the properties outlined in (2.2), the scaling function $\phi(t)$ generates all the basis functions of each subspace $V_j$ through dilations and translations, i.e. $\phi_{jk}(t) = 2^{j/2}\phi(2^j t - k)$, $j, k \in \mathbb{Z}$. In addition to the scaling functions, which represent the low-frequency components or smoothness of a signal, a set of wavelet functions $\psi_{jk}(t)$ is required to represent the high-frequency components of the signal, i.e. the fine-scale deviations from its overall smoothness. The $\psi_{jk}$'s are generated through the translations and dilations of a mother wavelet $\psi$, i.e. $\psi_{jk}(t) = 2^{j/2}\psi(2^j t - k)$.

The Discrete Wavelet Transform (DWT) approximates any continuous function $f(t) \in L^2(\mathbb{R})$ as

$f(t) = \sum_{k=1}^{K_J} a_{J,k}\,\phi_{J,k}(t) + \sum_{j=1}^{J}\sum_{k=1}^{K_j} d_{j,k}\,\psi_{j,k}(t)$  (2.3)

where $J$ denotes the number of resolution levels and $K_j$ is the number of coefficients at the $j$th level. Obviously, unlike the case of Fourier transforms, there exists a large selection of wavelet families depending on the choice of the mother wavelet. The choice of a suitable set of basis functions depends on the level of smoothness, the number of vanishing moments, and the degree of time-frequency localization required by the data or the application. Some of the wavelet bases available in the literature are the Haar, Daubechies, coiflet, Morlet, symlet, and biorthogonal wavelets. The Haar basis is the simplest of the wavelet bases; however, it is not very effective in time-frequency localization or in estimating smooth functions. More complex functions like the Daubechies wavelets produce overlapping averages and differences that provide a better average than the Haar wavelet at lower resolutions. In order to extract the scaling coefficients $a_{j,k}$ and wavelet coefficients $d_{j,k}$, Mallat [39] developed an efficient algorithm referred to as the Pyramidal Algorithm. This algorithm requires the number of data points, $n$, to be dyadic, i.e. $n = 2^k$, $k \in \mathbb{Z}^+$.
This algorithm is based on successive averaging and differencing, referred to as low-pass and high-pass filtering respectively in signal processing terminology. Figure 2.3 displays a three-level wavelet decomposition tree, where G0, H0, and X[n] represent the low-pass filter, the high-pass filter, and a signal of length $n$ respectively. In this process, the low-pass filter removes the high frequencies of the data. Since details (e.g. sharp changes in the data) correspond to high frequencies, the averaging procedure tends to smooth the data. Therefore, at each level $j$ of the DWT, H0 produces the detail coefficients $d_j$, and G0 produces the coarse approximations $a_j$. Figure 2.4 illustrates the concepts of averaging and differencing for a simple digital signal. The procedure of reconstructing a signal is the reverse of the decomposition process.

Figure 2.3 Three-level Wavelet Decomposition Tree

Figure 2.4 DWT of a Digital Signal

Note that during this process, each step produces a set of averages and coefficients that is half the size of the input data. For example, if the time series contains 256 data points, the first step will produce 128 averages and 128 detail coefficients. The averages then become the input for the next step (e.g., 128 averages resulting in a new set of 64 averages and 64 detail coefficients). This continues until one average and one coefficient are calculated. In practice, signals can be estimated using far fewer wavelet coefficients than the original number of data points in the signal. A major contribution of wavelet analysis has been a reasonable estimation of the number of coefficients based on a certain criterion. This property is used in signal estimation algorithms, also referred to as denoising, that attempt to remove the parts of the signal that fall into a particular model of noise, for data compression purposes.
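The averaging-and-differencing idea can be sketched with the orthonormal Haar filter pair (the simplest choice, not the Daubechies-4 filters used later in the thesis; the function names are mine):

```python
import numpy as np

def haar_step(x):
    """One level of the pyramidal algorithm with Haar filters: the low-pass
    branch (G0) takes scaled pairwise averages, the high-pass branch (H0)
    takes scaled pairwise differences, each followed by downsampling by 2."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (coarse) coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_dwt(x, levels):
    """Full decomposition: feed the averages back in until `levels` is reached."""
    details = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_step(a)
        details.append(d)
    return a, details

# A dyadic signal of length 8: three levels halve the averages 8 -> 4 -> 2 -> 1.
a, details = haar_dwt([4, 6, 10, 12, 8, 6, 5, 5], levels=3)
print(len(a), [len(d) for d in details])  # 1 [4, 2, 1]
```

Because the filters are orthonormal, the sum of squared coefficients equals the sum of squared samples, which is the property exploited in the energy analysis of Chapter 3.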
Figure 2.5 displays a sample noisy signal and the same signal after denoising. A fundamental denoising method is shrinking the wavelet coefficients via hard thresholding or soft thresholding [40, 41, 42]. Both methods eliminate all coefficients that are smaller than a specified threshold value, the only difference being that hard thresholding maintains the original values of the remaining coefficients, whereas soft thresholding shrinks them toward zero by decreasing their values by the magnitude of the threshold. Figure 2.6 illustrates how these thresholding methods work on a sample signal. The choice of an appropriate threshold value is important because a very large threshold will likely result in oversmoothing of the data, while a very small threshold will result in a rough reconstruction. One of the most common threshold measures in the literature has been the universal threshold suggested by Donoho et al. [40],

$t_j = \sigma_j \sqrt{2\log(n)}$  (2.4)

where $\sigma_j$ is the standard deviation of the noise at level $j$ and $n$ is the signal length. An extensive review of threshold selection is available in [43]. Various other denoising schemes are available in the literature. The SUREShrink method [44] introduced an adaptive procedure that assigns a threshold value to each decomposition level based on the criterion of minimizing Stein's Unbiased Risk Estimate. Saito [45] proposed the AMDL (Approximation Minimum Description Length) procedure, which minimizes a cost objective that is a function of the number of eliminated coefficients. Other thresholding schemes and their applications are available in references [46, 47, 48].
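A minimal sketch of the two shrinkage rules and the universal threshold of (2.4) follows; the MAD-based noise estimate is a common convention for $\sigma_j$, not something specified in the text:

```python
import numpy as np

def hard_threshold(d, t):
    """Zero out coefficients below t; keep the survivors unchanged."""
    return np.where(np.abs(d) > t, d, 0.0)

def soft_threshold(d, t):
    """Zero out coefficients below t; shrink survivors toward zero by t."""
    return np.sign(d) * np.maximum(np.abs(d) - t, 0.0)

def universal_threshold(d, n):
    """Universal threshold t = sigma * sqrt(2 log n); here the noise scale
    sigma is estimated robustly from the coefficients via the median absolute
    deviation (an assumed, commonly used estimator)."""
    sigma = np.median(np.abs(d)) / 0.6745
    return sigma * np.sqrt(2 * np.log(n))

d = np.array([-3.0, 0.4, 2.5, -0.2, 0.9])
h = hard_threshold(d, 1.0)   # -3.0 and 2.5 survive unchanged
s = soft_threshold(d, 1.0)   # survivors shrink by 1.0: -2.0 and 1.5
```

With a threshold of 1.0 both rules discard the three small coefficients; only the treatment of the survivors differs, exactly as Figure 2.6 illustrates.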
Figure 2.5 A Noisy Signal and the Reconstructed Signal After Denoising

Figure 2.6 A Sample Signal and the Reconstructed Form After Hard- and Soft-thresholding (Note: all numbers are in generic values)

The next section describes principal curves, which form the basis for the methodology described in Chapter 4.

2.3 Principal Curves

Principal component analysis (PCA) is a standard analysis approach in multivariate statistics [49]. PCA finds the directions along which the data have the highest variability through an eigenvalue decomposition of the variance matrix of the data, and it has been used extensively for feature extraction and data compression purposes. The first principal component of a random vector x is defined as the straight line which maximizes the variance of the projection of x onto a line and minimizes the distance between the line and x. It is known that, if the distribution of x is elliptical, the first principal component is self-consistent, meaning that any point of the line is the average of all points in x that project to this point.

Hastie [50] and Hastie and Stuetzle [51] (hereafter HS) generalized the self-consistency property of the first principal component to principal curves. HS defined principal curves as one-dimensional, self-consistent smooth curves that pass through the middle of a p-dimensional distribution or data cloud. Therefore, principal curves provide a good one-dimensional summary of a high-dimensional dataset.
Let $f(t) = (f_1(t), f_2(t), \ldots, f_p(t))$ be a smooth, infinitely differentiable curve in $\mathbb{R}^p$, where $f_1(t), f_2(t), \ldots, f_p(t)$ are coordinate functions in p-dimensional space and the parameter $t$ is generally defined as the arc length along the curve. The projection index $t_f : \mathbb{R}^p \to \mathbb{R}^1$ is defined as

$t_f(x) = \sup\{\,t : \|x - f(t)\| = \inf_{\tau}\|x - f(\tau)\|\,\}$

i.e. $t_f(x)$ is the largest value of $t$ for which $f(t)$ is closest to $x$. According to the definition of HS, $f(t)$ is a principal curve if it satisfies the following properties:

i. $f$ is a unit-speed curve defined over a closed interval,

ii. $f$ does not intersect itself, i.e. $t_1 \neq t_2 \Rightarrow f(t_1) \neq f(t_2)$,

iii. $f$ is self-consistent, i.e. $f(t) = E(\,x \mid t_f(x) = t\,)$.

The algorithm proposed by HS for constructing principal curves starts with some prior summary, such as the usual principal component line, and iterates between a projection step and an expectation step until convergence. The algorithm to obtain the principal curve of the distribution $h$ of x is summarized as follows:

Step 0. Set $f^{(0)}(t) = \bar{x} + a\,t$, where $\bar{x}$ is the mean of x and $a$ is the first linear principal component of the density function $h$. Set $t^{(0)}(x) = t_{f^{(0)}}(x)$ and $j = 1$.

Step 1. (Expectation) Define $f^{(j)}(t) = E(\,x \mid t_{f^{(j-1)}}(x) = t\,)$.

Step 2. (Projection) Set $t^{(j)}(x) = t_{f^{(j)}}(x)$ for $x \sim h$.

Step 3. Stop if $|D^2(h, f^{(j)}) - D^2(h, f^{(j-1)})| \,/\, D^2(h, f^{(j-1)})$ falls below a tolerance, where $D^2(h, f) = E_h\|x - f(t_f(x))\|^2$. Otherwise, let $j = j + 1$ and go to Step 1.

In the algorithm for fitting principal curves from data sets, rather than from a density function $h$, the expectations and the distance function in Steps 1 and 3 are replaced by their empirical counterparts. In this analysis, cubic smoothing splines are used for local averaging in Step 1. Figure 2.7 displays a sample principal curve fit to simulated data using the HS algorithm.
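The projection/expectation iteration can be sketched on simulated two-dimensional data. This toy version substitutes a crude running-mean smoother for the cubic smoothing splines and uses a rank-based projection index, so it only illustrates the structure of the HS algorithm, not a faithful implementation:

```python
import numpy as np

def fit_principal_curve(X, iters=5, span=0.05):
    """Toy HS-style iteration on 2-D data (simplifying assumptions noted above)."""
    n = len(X)
    # Step 0: initialize with the first principal component line.
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    t = (X - mu) @ Vt[0]                  # projection index along the PC line
    w = max(3, int(span * n))             # half-width of the smoother window
    f = mu + np.outer(t, Vt[0])
    for _ in range(iters):
        order = np.argsort(t)
        # Expectation: local average of the data, ordered by projection index.
        f_new = np.empty_like(X)
        for rank, idx in enumerate(order):
            lo, hi = max(0, rank - w), min(n, rank + w + 1)
            f_new[idx] = X[order[lo:hi]].mean(axis=0)
        f = f_new
        # Projection: the rank of the nearest curve point gives the new index.
        d2 = ((X[:, None, :] - f[None, order, :]) ** 2).sum(axis=-1)
        t = np.argmin(d2, axis=1).astype(float)
    return f

# Noisy points on a half-circle: the fitted curve should trace the arc.
rng = np.random.default_rng(0)
s = rng.uniform(0, np.pi, 200)
X = np.c_[np.cos(s), np.sin(s)] + rng.normal(0, 0.05, (200, 2))
f = fit_principal_curve(X)
r = np.hypot(f[:, 0], f[:, 1])
print(round(float(r.mean()), 2))
```

A straight first-principal-component line cannot summarize the half-circle, whereas the curve points stay close to the unit radius, which is exactly the generalization the HS definition provides.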
Figure 2.7 A Sample Principal Curve by HS Definition

HS showed that no infinitesimally small perturbation of a principal curve will decrease $E\|x - f(t_f(x))\|^2$, which generalizes the minimum squared error property of PCA to the distribution of x. HS did not answer in detail the questions of the existence and uniqueness of principal curves; however, they proved the existence of principal curves for some special distributions, such as elliptical and spherical distributions. Tibshirani [52] stated that, with the algorithm proposed by HS, maximization of the likelihood over all parameters may lead to inconsistent or inefficient estimates, and developed an alternative algorithm based on a mixture model. This approach makes use of the EM algorithm, originally presented by Dempster [53], to compute maximum likelihood estimates of missing data that can take the form of unrecorded observations, unobservable parameters, or unobservable data. Some studies exist on using principal curves in process monitoring for multi-sensor manufacturing processes. Kim et al. [54] proposed a fault classification method for a forging process based on the distances between the projections of multi-channel tonnage signals and a principal curve estimated using these signals. Principal curve estimation has been used in a variety of nonlinear data analysis studies [55, 56, 57].

The next chapter explains the process followed to extract the wavelet coefficients to be used in Chapter 4. It also presents a preliminary multiscale analysis of the coefficients.

CHAPTER 3. WAVELET TRANSFORMATIONS OF CARDIORESPIRATORY SIGNALS

The methodologies to be presented in Chapters 4 and 5 consist of statistical tests and analysis tools whose inputs are the wavelet coefficients obtained from each signal.
In this chapter, the procedure for extracting the wavelet coefficients from the phrenic nerve, lumbar nerve, and blood pressure signals is described, and a preliminary multiscale analysis of these signals is performed.

3.1 Extraction of Scaling and Wavelet Coefficients

In order to extract the scaling and wavelet coefficients described in Section 2.2 using the DWT, only the first 102.4 seconds of each sample signal, corresponding to n = 4096 data points, were taken. Daubechies-4 was chosen as a suitable wavelet family for the DWT of the signals. Figures 3.1, 3.2, and 3.3 display a level-5 wavelet decomposition of a sample phrenic nerve signal, lumbar nerve signal, and blood pressure signal respectively. The y-axes denote the signal value and the x-axes indicate the observation index. The left column displays the approximation components and the right column shows the detail components of the signal. Naturally, the highest frequencies appear at the lower levels of the decomposition, while the successive approximations appear less and less noisy, progressively losing more high-frequency information. The coefficients were hard-thresholded using the scale-dependent universal threshold explained in Section 2.2. A visual check of the reconstructed signals displayed in the right columns of Figures 3.4, 3.5, and 3.6 shows that the thresholding strategy keeps the significant wavelet coefficients, allowing almost perfect reconstruction.
Figure 3.1 Level-5 Decomposition of a Before-Hypoxia Phrenic Nerve Signal (X-axes Indicate Index Value and Y-axes Indicate Signal Value)

Figure 3.2 Level-5 Decomposition of a Before-Hypoxia Lumbar Nerve Signal (X-axes Indicate Index Value and Y-axes Indicate Signal Value)

Figure 3.3 Level-5 Decomposition of a Before-Hypoxia Blood Pressure Signal (X-axes Indicate Index Value and Y-axes Indicate Signal Value)

Figure 3.4 displays the original wavelet coefficients and those selected by hard thresholding for a sample phrenic nerve signal. Figures 3.5 and 3.6 show examples of the reconstructed forms of the lumbar nerve and blood pressure signals respectively. The thresholding procedure naturally results in the extraction of a different number of coefficients from each signal. The methodology to be introduced in Chapter 4 requires an equal number of wavelet coefficients as inputs; therefore, a common set of coefficients must be selected from the signals. In order to avoid loss of information, the union of the selected coefficient positions is used to form the observation vectors, as illustrated in Figure 3.6. This approach results in the extraction of 95 coefficients, which are listed in Table 3.1.
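The union-of-positions step can be sketched as follows; random values stand in for the thesis coefficients, and the 2.0 threshold is an arbitrary placeholder:

```python
import numpy as np

# Hypothetical per-channel coefficient vectors of a common length; a
# coefficient "survives" hard thresholding if its magnitude exceeds t.
rng = np.random.default_rng(1)
coeffs = rng.normal(0, 1, (3, 128))      # 3 channels x 128 coefficients
kept = np.abs(coeffs) > 2.0              # per-channel survivors

# Union of surviving positions across channels: a position is retained if
# ANY channel kept it, so no channel loses information when a common,
# equal-length coefficient set is formed.
union = np.where(kept.any(axis=0))[0]
common = coeffs[:, union]                # equal-length observation vectors
print(common.shape[0], common.shape[1] == len(union))  # 3 True
```

Each channel then contributes a vector of identical length, which is the input format the PCuR-Test of Chapter 4 requires.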
Figure 3.4 Original and Thresholded Coefficients and Original and Reconstructed Signals for a Phrenic Nerve Signal Sample (Note: In Coefficient Tables the Y-axis Indicates Decomposition Level and the X-axis Indicates Coefficient Position)

Figure 3.5 Original (red) and Reconstructed (blue) Forms of Lumbar Nerve and Blood Pressure Signal Samples

Figure 3.6 Selection of Wavelet Coefficients (Note: In Coefficient Tables the y-axis Indicates Decomposition Level and the x-axis Indicates Coefficient Position)

Table 3.1 Selected Wavelet Coefficients

Scale | Coefficients
A9 | a_{9,1}, a_{9,2}, ..., a_{9,8}
D9 | d_{9,1}, d_{9,2}, ..., d_{9,8}
D8 | d_{8,1}, d_{8,3}, d_{8,4}, d_{8,5}, d_{8,15}
D7 | d_{7,30}
D6 | d_{6,61}
D5 | d_{5,1}, d_{5,3}, d_{5,4}, d_{5,5}, d_{5,7}, d_{5,8}, d_{5,10}, d_{5,11}, d_{5,15}, d_{5,16}, d_{5,21}, d_{5,22}, d_{5,27}, d_{5,30}, d_{5,31}, d_{5,32}, d_{5,33}, d_{5,41}, d_{5,42}, d_{5,44}, d_{5,46}, d_{5,50}, d_{5,53}, d_{5,54}, d_{5,61}, d_{5,64}, d_{5,65}, d_{5,66}, d_{5,67}, d_{5,68}, d_{5,71}, d_{5,72}, d_{5,73}, d_{5,84}, d_{5,89}, d_{5,90}, d_{5,94}, d_{5,95}, d_{5,98}, d_{5,99}, d_{5,101}, d_{5,103}
D4 | d_{4,1}, d_{4,2}, d_{4,4}, d_{4,11}, d_{4,12}, d_{4,15}, d_{4,17}, d_{4,18}, d_{4,21}, d_{4,22}, d_{4,27}, d_{4,31}, d_{4,35}, d_{4,36}, d_{4,38}, d_{4,39}, d_{4,40}, d_{4,61}, d_{4,73}, d_{4,101}, d_{4,113}, d_{4,116}, d_{4,133}, d_{4,151}, d_{4,159}, d_{4,201}, d_{4,202}, d_{4,214}

3.2 Analysis of Energy Redistributions in Wavelet Coefficients

Wavelet transforms provide a set of powerful exploratory data analysis tools by giving insight into the dominant scales of variation in the analyzed time series. It is of potential value to quantitatively analyze the spectral energy redistribution in continuously monitored patients before and after hypoxia, since different effects of hypoxic stimuli may have different time-frequency localizations. It is also important to describe how the energy in different frequency bands is modified in different types of signals.
The total energy contained in a signal $x(t)$ is defined as its integrated squared magnitude, i.e.

$E_{tot} = \int x(t)^2\,dt$  (3.1)

The wavelet spectrum is defined as

$E_j = \sum_k d_{j,k}^2, \qquad j = 1, \ldots, J$  (3.2)

where $J$ is the cutoff level; for orthogonal transformations, the level energies sum to the total energy. The wavelet spectrum has proven useful for detecting dominant scales of variation: peaks in $E_j$ highlight the dominant energetic scales within the analyzed signal. It is beneficial to normalize $E_j$ by $E_{tot}$ to emphasize large peaks and sharp discontinuities and to de-emphasize low-amplitude background noise.

Figures 3.7, 3.8, and 3.9 display the relative energy distributions $E_j/E_{tot}$ through time for the phrenic nerve, lumbar nerve, and blood pressure signals at levels $j = 1, \ldots, 9$ respectively. Figure 3.10 shows the total energy levels for these signals. For the phrenic nerve signal, levels $j = 1, \ldots, 5$ undergo an obvious relocation of energy at the termination of the hypoxic stimulation. This period is characterized by a small trough followed by a local peak in the relative distribution. This phenomenon may indicate an increase in high-frequency activity. The lumbar nerve signal experiences an abrupt increase in relative energy at levels $j = 1, \ldots, 7$ in response to hypoxia, which indicates that the relocation of energy in lumbar nerve signals covers both low-frequency and high-frequency levels. The energy relocation patterns in these decomposition levels are similar to each other and are mainly caused by the sudden decrease in the total energy of the lumbar nerve activity as a consequence of the hypoxic stimulus. In the case of the blood pressure signal, the redistribution pattern is not uniform among scales and is more complicated.
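A minimal sketch of computing $E_j$ and the relative energies from a set of detail coefficients follows; the coefficients are generated here with one-level orthonormal Haar filter steps purely for illustration:

```python
import numpy as np

def wavelet_spectrum(details, approx):
    """Level energies E_j = sum_k d_{j,k}^2 (eq. 3.2), with the coarse
    approximation energy appended, plus the relative energies E_j / E_tot."""
    levels = list(details) + [approx]
    E = np.array([float(np.sum(np.asarray(d) ** 2)) for d in levels])
    return E, E / E.sum()

# Build coefficients by repeated orthonormal Haar averaging/differencing, so
# that, as for any orthogonal transform, level energies sum to the signal energy.
x = np.array([4.0, 6, 10, 12, 8, 6, 5, 5])
a, details = x.copy(), []
for _ in range(3):
    details.append((a[0::2] - a[1::2]) / np.sqrt(2))
    a = (a[0::2] + a[1::2]) / np.sqrt(2)

E, rel = wavelet_spectrum(details, a)
print(bool(np.isclose(E.sum(), np.sum(x ** 2))))  # True: energy is preserved
```

Plotting `rel` over successive time windows is, in essence, what Figures 3.7 to 3.9 display for the physiological signals.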
Figure 3.7 Relative Energy Distributions at Different Scales of a Phrenic Nerve Signal

Figure 3.8 Relative Energy Distributions at Different Scales of a Lumbar Nerve Signal

Figure 3.9 Relative Energy Distributions at Different Scales of a Blood Pressure Signal

Figure 3.10 Total Energy Distributions Through Time for the Phrenic Nerve, Lumbar Nerve, and Blood Pressure Signals (dashed line indicates the hypoxic period)

It should be noted that for all wavelets there is a one-to-one relationship between scale and period. This is important in the interpretation of the scale-dependent information. The scale refers to the width of the wavelet. As the scale increases and the wavelet gets wider, it includes more of the time series, and the finer details get smeared out.
The scale can be defined as the distance between oscillations in the wavelet (e.g. for the Morlet), or as some average width of the entire wavelet (e.g. for the Marr or Mexican hat). The period (or inverse frequency) is the approximate Fourier period that corresponds to the oscillations within the wavelet. The relationship can be derived by computing the wavelet transform of a pure cosine wave with a known Fourier period and then finding the scale at which the wavelet power spectrum reaches its maximum. Table 3.2 displays this relationship between decomposition levels and the implied (or pseudo) frequencies and periods for the Daubechies-4 wavelet.

Table 3.2 Mapping Between Levels and Frequency/Period Values for the Daubechies-4 Wavelet

Level | Pseudo-Frequency (Hz) | Pseudo-Period (s)
1 | 14.2857 | 0.07
2 | 7.1429 | 0.14
3 | 3.5714 | 0.28
4 | 1.7857 | 0.56
5 | 0.8929 | 1.12
6 | 0.4464 | 2.24
7 | 0.2232 | 4.48
8 | 0.1116 | 8.96
9 | 0.0558 | 17.92

The next chapter introduces the PCuR-Test and its application to the wavelet coefficients selected from the phrenic nerve, lumbar nerve, and blood pressure signals as described above.

CHAPTER 4. DETECTION OF MULTISCALE CHANGES IN MULTICHANNEL DATA USING THE PRINCIPAL CURVE REGRESSION TEST (PCuR-TEST)

Previous studies exploring the effects of hypoxia on biological systems are offline monitoring procedures in which abnormalities or changes in the recorded signals are analyzed after the experiment is conducted. However, in cases where a quick response to a hypoxic condition is of vital importance, an online monitoring system is necessary. The motivation for using the PCuR-Test [58] to monitor the wavelet coefficients of the blood pressure and integrated phrenic nerve signals is to integrate their mutual multiscale behavior and to identify the coefficients, or equivalently the scale levels, at which a shift from nominal conditions occurs whenever new data become available.
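The values in Table 3.2 can be reproduced approximately from the centre frequency of the Daubechies-4 wavelet and the 40 Hz sampling rate implied by 4096 samples in 102.4 s; the centre-frequency value below is an assumed approximation, not a figure stated in the thesis:

```python
# Pseudo-frequency of DWT level j: f_j = Fc * fs / 2**j, where Fc is the
# wavelet centre frequency at unit scale and fs is the sampling rate.
FC_DB4 = 0.7143          # approximate centre frequency of Daubechies-4 (assumed)
FS = 4096 / 102.4        # 40 Hz sampling rate implied by the thesis signals

for j in range(1, 10):
    f = FC_DB4 * FS / 2 ** j
    print(j, round(f, 4), round(1 / f, 2))
# Level 1 gives about 14.286 Hz (period 0.07 s); level 9 gives about
# 0.0558 Hz (period 17.92 s), matching Table 3.2.
```

The halving of the pseudo-frequency from one level to the next mirrors the downsampling by 2 at each stage of the pyramidal algorithm.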
The data compression feature of the wavelet coefficients is of significant value in this example, since it decreases the number of tested data points by a great proportion. In Section 4.1, a general description of the PCuR-Test is given. Section 4.2 analyzes the results of applying the PCuR-Test to the wavelet coefficients of the phrenic nerve and blood pressure signals.

4.1 PCuR-Test Algorithm

Let $x_k = (x_{k1}, \ldots, x_{kp})^T$ be the observed value of the kth feature of a p-channel signal, so that a signal with n features is of the form $\{x_k\}_{k=1}^n$. If N such signals are observed, denote by $\{x_{k,i}\}_{k=1}^n$ the ith observation of $\{x_k\}_{k=1}^n$, where $x_{k,i} = (x_{k1,i}, \ldots, x_{kp,i})^T$, $i = 1, \ldots, N$. The N samples are of the form $\{X_k\}_{k=1}^n$, where $X_k = [x_{k,1}, x_{k,2}, \ldots, x_{k,N}]^T$ is an $N \times p$ matrix, $k = 1, \ldots, n$.

The basic idea of PCuR is to extract a principal curve for each sample and build a regression model between the sample and its principal curve (Figure 4.1). Using N in-control observations, N principal curves are extracted as the initial step. The principal curve f of $\{x_k\}_{k=1}^n$ is a vector of p functions of a single variable t, i.e., $f(t) = (f_1(t), f_2(t), \ldots, f_p(t))^T$, where $f_1(t), \ldots, f_p(t)$ are coordinate functions in p-dimensional space. If $y_k$ is the projection of $x_k$ on f, we can use $\{y_k\}_{k=1}^n$ to describe f. The ith principal curve, fitted from $\{x_{k,i}\}_{k=1}^n$, is denoted as $\{y_{k,i}\}_{k=1}^n$ and is organized into an $n \times p$ matrix $[y_{1,i}, y_{2,i}, \ldots, y_{n,i}]^T$. Define the $N \times p$ matrix $Y_k = [y_{k,1}, y_{k,2}, \ldots, y_{k,N}]^T$; $Y_k$ contains the projected points of $X_k$ on the corresponding N principal curves.

With N principal curves in the form of $\{Y_k\}_{k=1}^n$, we establish regression models at each observation point. A multivariate linear regression model is assumed to adequately relate the response $y_k$ to the predictors $x_k$ at the kth argument value,

$y_k^T = [1\ \ x_k^T]\,B_k + \epsilon_k^T$  (4.1)

$E(\epsilon_k) = 0, \quad Cov(\epsilon_k) = \Sigma_k, \quad k = 1, \ldots, n.$  (4.2)

Let $B_k = [\beta_k^{(1)}\ \beta_k^{(2)}\ \cdots\ \beta_k^{(p)}]$. Define the N-vector $1_N = (1, \ldots, 1)^T$.
To simplify notation, the subscript N will be dropped when the dimension of the vector $1_N$ is clear from the context.

For the data sets $X_k$ and $Y_k$ of the kth feature,

$Y_k = [1\ \ X_k]\,B_k + E_k, \quad k = 1, 2, \ldots, n,$  (4.3)

$E_k = [\epsilon_k^{(1)}, \epsilon_k^{(2)}, \ldots, \epsilon_k^{(p)}], \quad E(\epsilon_k^{(j)}) = 0, \quad Cov(\epsilon_k^{(j)}, \epsilon_k^{(g)}) = \sigma_k^{(j,g)} I, \quad j, g = 1, 2, \ldots, p.$  (4.4)

The N observations on the kth trial have covariance matrix $\Sigma_k = \{\sigma_k^{(j,g)}\}$, but observations from different channels at time k are assumed to be uncorrelated. If $\epsilon_k$ follows a normal distribution and rank$([1\ \ X_k]) = p+1$, $N \geq 2p+1$, the maximum likelihood estimator (MLE) of $B_k$ is [49] the estimated $(p+1) \times p$ projection matrix

$\hat{B}_k = \left([1\ \ X_k]^T [1\ \ X_k]\right)^{-1} [1\ \ X_k]^T\, Y_k$  (4.5)

where $\hat{B}_k$ has a normal distribution with $E(\hat{B}_k) = B_k$ and $Cov(\hat{\beta}_k^{(j)}, \hat{\beta}_k^{(g)}) = \sigma_k^{(j,g)} ([1\ \ X_k]^T [1\ \ X_k])^{-1}$, and is independent of the MLE of $\Sigma_k$ given by

$\hat{\Sigma}_k = \frac{1}{N}\, \hat{E}_k^T \hat{E}_k$  (4.6)

where $N\hat{\Sigma}_k$ follows a Wishart distribution with $N-p-1$ degrees of freedom, i.e., $W_{p,\,N-p-1}(\Sigma_k)$.

Figure 4.1 Principal Curve Regression (a data point $x_k$ in channel 1 and its projection point $y_k$ in channel 2)

Accordingly, the predicted values and residuals at time k are computed as

$\hat{Y}_k = [1\ \ X_k]\,\hat{B}_k = [1\ \ X_k]\left([1\ \ X_k]^T [1\ \ X_k]\right)^{-1} [1\ \ X_k]^T\, Y_k$  (4.7)

$\hat{E}_k = Y_k - \hat{Y}_k = \left(I - [1\ \ X_k]\left([1\ \ X_k]^T [1\ \ X_k]\right)^{-1} [1\ \ X_k]^T\right) Y_k$  (4.8)

Once the PCuR model is established from in-control signals, it can be used to determine whether a process change occurs in future observations. The principal curve $\{y_k\}_{k=1}^n$ of a new observation $\{z_k\}_{k=1}^n$ is treated as the new response. By the results of multivariate regression [49], with $\tilde{z}_k = (1, z_k^T)^T$, the prediction ellipsoid for the new response $y_k$ at time k is

$(y_k - \hat{B}_k^T \tilde{z}_k)^T \left(\frac{N}{N-p-1}\hat{\Sigma}_k\right)^{-1} (y_k - \hat{B}_k^T \tilde{z}_k) \;\leq\; \left(1 + \tilde{z}_k^T\left([1\ \ X_k]^T[1\ \ X_k]\right)^{-1}\tilde{z}_k\right)\frac{(N-p-1)\,p}{N-2p}\,F_{p,\,N-2p}(\alpha)$  (4.9)

A process change is believed to occur in a test sample if any point falls outside its control ellipsoid, indicating an out-of-control condition. In the next section, the wavelet coefficients extracted from the phrenic nerve and blood pressure signals as described in Chapter 3 are monitored by the PCuR-Test.
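Equation (4.5) is ordinary least squares applied feature by feature; a minimal numpy sketch on simulated data (all dimensions and values are hypothetical):

```python
import numpy as np

# With design Z = [1  X_k] (N x (p+1)) and responses Y_k (N x p), the MLE of
# B_k in eq. (4.5) is the ordinary least-squares solution (Z^T Z)^{-1} Z^T Y_k.
rng = np.random.default_rng(2)
N, p = 40, 3
X = rng.normal(size=(N, p))                    # simulated observations X_k
B_true = rng.normal(size=(p + 1, p))           # hypothetical true coefficients
Z = np.column_stack([np.ones(N), X])
Y = Z @ B_true + rng.normal(0, 0.01, (N, p))   # responses with tiny noise

B_hat, *_ = np.linalg.lstsq(Z, Y, rcond=None)  # numerically stable OLS
resid = Y - Z @ B_hat                          # residuals E_k, as in eq. (4.8)
print(bool(np.allclose(B_hat, B_true, atol=0.1)))  # True with tiny noise
```

Using `lstsq` rather than forming `(Z.T @ Z)` explicitly avoids the conditioning problems of the normal equations while computing the same estimator.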
4.2 Detection of Changes in Phrenic Nerve and Blood Pressure Signals Using the PCuR-Test

To apply the PCuR-Test to the physiology data, the selected coefficients from the before-hypoxia and after-hypoxia conditions were placed in 58×95 and 15×95 matrices respectively. Since there are three channels of signals in this case, p = 3. The first 30 of the before-hypoxia samples were selected to form the training set, and the remaining samples were used for validation of the model. The 15 samples from the after-hypoxia condition form the test dataset. Figures 4.2, 4.3, 4.4, and 4.5 display the results of the test at a significance level of α = 0.01. Each point on these figures corresponds to a different wavelet coefficient, and the points above the threshold (in red) represent wavelet coefficients where a change in the process has occurred. Figures 4.2 and 4.3 indicate that there are some out-of-control coefficients in the before-hypoxia samples; however, these remain within a tolerable false alarm level. As Figures 4.4 and 4.5 display, the PCuR-Test detects a significant number of out-of-control coefficients in the after-hypoxia samples.

Figure 4.2 PCuR-Test Results for Some Samples from the Validation Dataset
Figure 4.3 PCuR-Test Results for Some Samples from the Validation Dataset

Figure 4.4 PCuR-Test Results for Some Samples from the Test Dataset
Figure 4.5 PCuR-Test Results for Some Samples from the Test Dataset

Figure 4.6 displays the average relative frequency of the selected levels, computed by averaging, over all trials, the number of cases in which the coefficients at each level were out of control.

Figure 4.6 Relative Efficiency Graph for Selected Levels (A9, D9, D8, D7, D6, D5, D4)

The next chapter discusses the results of this research and potential future directions.

CHAPTER 5. CONCLUSIONS AND FUTURE RESEARCH

5.1 Conclusions

This thesis presents an online process monitoring tool, the PCuR-Test, to detect changes in the wavelet coefficients of multichannel signals.
The main contributions of this research can be summarized as follows:

i. Detecting changes in processes that involve the fusion of information from multiple channels.

ii. Identifying the multiscale levels in signals where a change in relative wavelet energy occurs.

iii. Identifying the wavelet coefficients where a process change is observed, using the PCuR-Test.

iv. Applying the PCuR-Test to the online monitoring of physiological signals and to the detection of the changes in these signals that are observed as a consequence of hypoxic stimuli.

The identification of out-of-control wavelet coefficients is believed to indicate the time-frequency levels that undergo significant changes as a result of hypoxia. The results show that the methodology can successfully distinguish between before-hypoxia and after-hypoxia samples with low false alarm rates. The procedure is also shown to perform better than monitoring the channels separately.

5.2 Future Research

An important problem in the analysis of multichannel signals is the diagnosis of the simultaneous effects of certain variation sources on signals from different channels. It is clear that the analysis of multichannel data requires the identification of spatial variation patterns and of the temporal behavior of these patterns. A natural motivation throughout this chapter is that understanding the nature of the sources of variation that affect multichannel signals will help in the identification and prevention of the problems causing process variability. In particular, the results of the proposed methodology will be demonstrated to illuminate the dynamics of the underlying mechanisms that influence both respiratory and blood pressure signals under hypoxic conditions. The next section explains the proposed method and its relation to Inter-Battery Factor Analysis [75, 77]. This model aims at the blind identification of a single set of unobservable sources that accounts for the correlation between observations from separate channels.
The term blind is used here because the common variation sources are extracted entirely from the data, without specifying potential factors before the analysis is conducted. An advantage of this model is that it does not require an equal number of features or variables from the signals. The following sections discuss the implications of modeling the cardiorespiratory interactions using the proposed model.

5.2.1 Factor Analysis

Factor analysis is a multivariate statistical tool used to analyze the interrelationships among a large number of variables and to explain these variables in terms of their common underlying dimensions (factors), based on the internal structure of covariance or correlation matrices. Since its development by Spearman [59], it has been used in a range of studies in the social sciences, mainly in the analysis of psychological or mental tests and of economic quantities [60, 61, 62]. Factor analysis partitions a multivariate observation vector into an unobserved systematic part, composed of a relatively small number of factors, and an unobserved error part, and separates the effect of the factors from the error. In contrast to principal component analysis, which attempts to identify uncorrelated linear combinations of the original variables that account for a major portion of the variability in the data, factor analysis aims to explain the covariance matrix by a minimum number of hypothetical variates, or factors. In this sense, principal component analysis is considered to be variance-oriented, whereas factor analysis is covariance-oriented.

Let x = (x1, x2, ..., xn)' be the n-dimensional vector of observations. The model for factor analysis is defined as

x = μ + Λf + e    (5.1)

where μ and e are column vectors of n components, f is a column vector of k (k < n) components, and Λ is an n × k matrix. In model (5.1), the elements λij of the factor loading matrix Λ simply represent the weight of the i-th test variate on the j-th hypothetical (latent) factor.
It is assumed that e is distributed independently of f, with mean E(e) = 0 and covariance matrix E(ee') = Ψ, where Ψ is assumed to have a diagonal form. To understand the physical meaning of the components of the model in (5.1), consider the case of mental tests. In this context, each component of x denotes the score of a subject in a given test, and the corresponding component of μ denotes the mean score of this test in the population of subjects. It will be assumed, without loss of generality, that μ = 0 (note that this assumption can easily be satisfied by subtracting the mean of x from the data). When a battery of tests is given to a group of subjects, it is observed that the score of a subject in a given test is more closely related to his/her scores on the other tests than to the scores of other individuals on those tests. Therefore, the motivation behind using a factor analysis model in the case of mental tests is to explain the interrelation between the tests of a particular subject through the separation of the effects acting on a test score into two parts. The first part, contained in f, consists of common factors that account for the correlations between the scores in different tests; k denotes the number of unobserved (latent) common factors, and f is assumed to be a zero-mean random vector with covariance matrix Φ. The vector e contains the factors that are specific to the corresponding test. In other words, e is the part of the test score not explained by the common factors and is therefore often referred to as the uniqueness. Assuming that the model in (5.1) fits the data and that the above set of assumptions is satisfied, a direct consequence is that the covariance matrix of x can be expanded as

Σ = E[(Λf + e)(Λf + e)'] = ΛΦΛ' + Ψ    (5.2)

which is often called the fundamental theorem of factor analysis.
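The fundamental theorem (5.2) can be verified numerically by simulating from model (5.1) and comparing the sample covariance of x with ΛΦΛ' + Ψ. In the sketch below, Φ = I is assumed, and the loadings and uniquenesses are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)

# n = 5 observed variates, k = 2 latent factors (arbitrary illustrative values)
Lambda = np.array([[0.9, 0.0],
                   [0.8, 0.1],
                   [0.1, 0.7],
                   [0.0, 0.8],
                   [0.5, 0.5]])
Psi = np.diag([0.2, 0.3, 0.25, 0.2, 0.3])   # diagonal uniqueness covariance

N = 200_000
f = rng.standard_normal((N, 2))                          # common factors, Cov(f) = I
e = rng.standard_normal((N, 5)) * np.sqrt(np.diag(Psi))  # specific factors
x = f @ Lambda.T + e                                     # model (5.1) with mu = 0

S = np.cov(x, rowvar=False)          # sample covariance of the simulated data
Sigma = Lambda @ Lambda.T + Psi      # fundamental theorem (5.2) with Phi = I
err = np.max(np.abs(S - Sigma))      # shrinks toward 0 as N grows
```

With a sample of this size the entrywise discrepancy between S and ΛΛ' + Ψ is small, which is exactly what (5.2) asserts.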
Therefore, the problem of factor analysis becomes whether there exists a triplet (Λ, Φ, Ψ), with Φ positive definite and Ψ positive definite and diagonal, that satisfies (5.2) for a given covariance matrix Σ and number of factors k. If a solution exists and is unique, the model is said to be identified. There is a fundamental indeterminacy in the factor analysis model due to the fact that any triplet (Λ, Φ, Ψ) can be transformed, through multiplication by a nonsingular matrix C, into an equivalent structure Λ* = ΛC, Φ* = C⁻¹Φ(C')⁻¹, Ψ [63]. To rule out this indeterminacy problem, k² identification conditions have to be imposed on Λ and Φ. A common approach is to add the restrictions that Φ = I and that

J = Λ'Ψ⁻¹Λ    (5.3)

is diagonal. If the diagonal elements of J are ordered and different from each other, then Λ is uniquely determined. Another set of restriction conditions imposes that the first k rows of Λ form a lower triangular matrix.

A natural approach to determining the number of common factors in factor analysis has been the analysis of the eigenvalues λ1 ≥ λ2 ≥ ... ≥ λn of the covariance matrix of the data, Σ. A simple visual tool is the scree plot [64], where the eigenvalues are plotted against the number of factors and the number of factors at the cutoff point is taken as the optimal number of factors, k. Alternative procedures, based on maximum likelihood functions, are the Akaike information criterion (AIC) [65] and the minimum description length (MDL) [66]. These criteria are based on a sequential chi-square test of

AIC(k) = N(n − k) log(a_k / g_k) + 2k(2n − k)    (5.4)

and

MDL(k) = N(n − k) log(a_k / g_k) + (1/2) k(2n − k) log N    (5.5)

for k = 0, ..., n−1, where a_k and g_k denote the arithmetic and geometric means of the (n − k) smallest eigenvalues of Σ. In the AIC and MDL methods, the optimal number of factors k is chosen as the k that minimizes (5.4) and (5.5), respectively.
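Both criteria can be evaluated directly from the ordered eigenvalues of the covariance matrix. The sketch below implements (5.4) and (5.5) as reconstructed above (the exact penalty constants vary slightly across the literature) on an illustrative spectrum with two dominant eigenvalues:

```python
import numpy as np

def aic_mdl(eigvals, N):
    """AIC(k) and MDL(k) for k = 0..n-1, computed from the ordered eigenvalues
    of the covariance matrix of N observations."""
    lam = np.sort(np.asarray(eigvals, dtype=float))[::-1]
    n = len(lam)
    aic, mdl = [], []
    for k in range(n):
        tail = lam[k:]                       # the (n - k) smallest eigenvalues
        a_k = tail.mean()                    # arithmetic mean
        g_k = np.exp(np.mean(np.log(tail)))  # geometric mean
        fit = N * (n - k) * np.log(a_k / g_k)
        aic.append(fit + 2 * k * (2 * n - k))
        mdl.append(fit + 0.5 * k * (2 * n - k) * np.log(N))
    return np.array(aic), np.array(mdl)

# Two dominant eigenvalues over a flat noise floor: both criteria pick k = 2.
aic, mdl = aic_mdl([10.0, 5.0, 1.0, 1.0, 1.0, 1.0], N=500)
k_aic, k_mdl = int(np.argmin(aic)), int(np.argmin(mdl))
```

For the flat tail the log of the arithmetic-to-geometric mean ratio vanishes, so only the penalty term remains, and the minimum lands at the true number of dominant components.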
Bozdogan [67] proposed an alternative information complexity (ICOMP) criterion that combines a badness-of-fit term (minus twice the maximum log-likelihood) with a measure of the complexity of the model which, unlike AIC or MDL, takes into account the interdependencies of the parameter estimates as well as the dependencies of the model residuals. A review of various model selection procedures is available in [68].

Once a set of factor loadings has been obtained, the next crucial step is the interpretation of the factor loadings in order to extract meaningful information about the analyzed data. This step requires the transformation of the factor loadings through a rotation technique, which is to be selected depending on the objectives and characteristics of the problem and data. In particular, for solutions with more than one factor (k > 1), prior to rotation the first axis will lie in between the clusters of variables, and the variables will not sort well on the factors, obscuring their physical meaning. Rotation of the axes allows the factor loadings of each variable to be more clearly differentiated by factor. For instance, in the case of psychological tests, each variable or factor in general has a positive direction, such that more ability leads to higher test scores, which implies that a factor loading should assume a positive value if it is nonzero. Four different studies [69, 70, 71, 72] independently proposed orthogonal rotation criteria based on different reasoning but yielding the same solution, known as quartimax, which maximizes the variance of the loadings of the variables (rows) over all factors (columns). As a contrasting method, Kaiser [73] proposed the popular varimax criterion, an orthogonal rotation of the factor axes that maximizes the variance of the squared loadings of a factor (column) on all the variables (rows) in the factor matrix.
Therefore, varimax minimizes the number of variables that have high loadings on any one given factor, and each factor consequently tends to have either large or small loadings for particular variables. Equimax rotation was proposed as a compromise between the quartimax and varimax methods, trying to accomplish the goals of both; however, it has not gained much popularity.

[Figure 5.1: Plots of unrotated and varimax-rotated factor loadings for a sample dataset (note: the numbers indicate the variable index, and all values are generic).]

As an alternative to orthogonal rotations, direct oblimin rotation and promax rotation [74] have been proposed as oblique (non-orthogonal) rotation methods. Promax rotation is known to be computationally faster than the direct oblimin method and is therefore sometimes used for very large datasets. Some of the other oblique factor rotation techniques are oblimax, quartimin, biquartimin, binormamin, and maxplane. Figure 5.1 displays plots of the loadings of variables from a sample dataset on unrotated and varimax-rotated factor axes. These plots show that varimax rotation has simplified the structure of the loadings, with each variable depending on only one factor, which makes it possible to describe each factor in terms of the variables that it affects.

5.2.2 Multiple Battery Factor Analytic Model of Common Variation Patterns

Let x = (x1', x2', ..., xp')', where xi (i = 1, ..., p) denotes the set of wavelet coefficients extracted from the i-th channel, mi denotes the number of variables obtained from channel i, and m = m1 + ... + mp. The common variation sources are represented by the k-vector f (k < m). It is assumed that the observations obey the following model:

x1 = μ1 + Λ1f + Γ1y1 + e1
x2 = μ2 + Λ2f + Γ2y2 + e2    (5.6)
...
xp = μp + Λpf + Γpyp + ep

where Λi (an mi × k matrix) contains the k common loading vectors describing the strength of the common variation sources on the observations from the i-th channel, and Γi contains the channel-specific loadings indicating the significance of the various sources acting on the i-th channel only (i = 1, ..., p). Let y = (y1', ..., yp')', μ = (μ1', ..., μp')', and e = (e1', ..., ep')' contain the channel-specific variation sources, the mean vectors of the channel observations, and the residuals, respectively. It is assumed that e is a zero-mean vector that is independent of f, i.e., Cov(f, e) = 0, and that the mean vector μ is subtracted from the observation vectors, i.e., μ = 0. It is also assumed that the common factors are independent and scaled to have unit variance, i.e., Cov(f, f) = I. Therefore, the model in (5.6) attempts to model the effects of the k independent variation sources that are common to all channels of data, represented by Λif, by separating them from the channel-specific variation sources. The objective of model (5.6) is then the estimation of the matrices Λi, Γi (i = 1, ..., p) and the vector f. Since the elements of f are scaled to have unit variance, the j-th column of Λi indicates the strength of the j-th common variation pattern on the observations from the i-th channel.

The structure and objective of model (5.6) are similar to those of Multiple Battery Factor Analysis, where the elements of the vector f are referred to as inter-battery factors and the matrices Λi are referred to as inter-battery factor loadings. Multiple battery factor analysis is an extension of the two-battery case, Inter-Battery Factor Analysis, which was first proposed by Tucker [75] to provide information on the stability of factors over two batteries of tests. McDonald [76] generalized Tucker's factor analysis method to multiple batteries. Based on Tucker's work, Browne [77] derived the maximum likelihood estimates for the two-battery factor analysis and later generalized the maximum likelihood solutions to factor analysis with multiple batteries [78].
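Browne's maximum likelihood solution is too involved for a short sketch, but the core of Tucker's original inter-battery idea, that the common factors live in the cross-covariance block between the two batteries, can be illustrated with a singular value decomposition: a single shared source makes that block approximately rank one. The channel dimensions, loadings, and noise levels below are assumptions made for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

N = 20_000
f = rng.standard_normal(N)              # one common (inter-battery) factor

# Two channels ("batteries") with illustrative common loadings plus independent noise
lam1 = np.array([0.9, 0.7, 0.5])        # loadings of channel 1 on f
lam2 = np.array([0.8, 0.6, 0.4, 0.9])   # loadings of channel 2 on f
x1 = np.outer(f, lam1) + 0.5 * rng.standard_normal((N, 3))
x2 = np.outer(f, lam2) + 0.5 * rng.standard_normal((N, 4))

# Cross-covariance block S12 between the two channels: the channel-specific
# noise cancels out in expectation, leaving S12 close to lam1 lam2'.
S12 = (x1 - x1.mean(0)).T @ (x2 - x2.mean(0)) / (N - 1)

# With a single common source, S12 is approximately rank one:
s = np.linalg.svd(S12, compute_uv=False)
share = s[0] / s.sum()                  # fraction of cross-structure in the first pair
```

The dominant singular vector pair plays the role of the common loading direction in each battery; with k shared sources, the first k singular values would dominate instead.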
Note that the total number of parameters to be estimated in the proposed model is

km − k(k − 1)/2 + [m1(m1 + 1) + ... + mp(mp + 1)]/2

which may result in a very large optimization problem, depending on the method chosen. Therefore, the maximum likelihood estimates based on the Gauss-Seidel algorithm proposed by Browne [78] will be used here in the estimation of the factor loading matrices. It is assumed that the vector of inter-battery factors, f, is normally distributed with zero mean and covariance matrix I, implying that the common factors are assumed to be orthogonal. It is also assumed that yi ~ N(0, I) and ei ~ N(0, Ψi), i = 1, ..., p. A consequence of this set of assumptions is that x follows a multivariate normal distribution with zero mean and m × m covariance matrix

Σ = ΛΛ' + Ψ    (5.7)

where Λ = (Λ1', Λ2', ..., Λp')'. This equation is the same as equation (2.5), except that the m × m matrix Ψ is assumed to have the following block-diagonal form instead of the diagonal form of standard factor analysis:

Ψ = | Ψ11   0  ...   0  |
    |  0   Ψ22 ...   0  |    (5.8)
    | ...  ... ...  ... |
    |  0    0  ...  Ψpp |

where Ψii = ΓiΓi' + Ψi (i = 1, ..., p). Due to the block-diagonal form of Ψ, the identification problem of standard factor analysis becomes more complicated in inter-battery factor analysis. A possible approach is to restrict the possible transformations of the loading matrices to those that are permissible in standard factor analysis.

Let S be an unbiased estimate of the covariance matrix Σ obtained from N independent observations,

S = | S11  S12  ...  S1p |
    | S21  S22  ...  S2p |    (5.9)
    | ...  ...  ...  ... |
    | Sp1  Sp2  ...  Spp |

Then (N − 1)S has a Wishart distribution with N − 1 degrees of freedom and parameter matrix Σ. The maximum likelihood estimates of Λ and Ψ may be derived by the minimization of the maximum likelihood discrepancy function

F(Λ, Ψ) = ln|Σ| − ln|S| + tr(SΣ⁻¹) − m    (5.5)

To minimize (5.5), the following equations must be satisfied [77]:

∂F/∂Λ = 2Σ⁻¹(Σ − S)Σ⁻¹Λ = 0    (5.10)

∂F/∂Ψ = Bdiag(Σ⁻¹(Σ − S)Σ⁻¹) = 0    (5.11)

where Bdiag(X) is the block-diagonal matrix formed from the principal submatrices of X corresponding to Ψii (i = 1, ..., p).

REFERENCES

[1] Fletcher, E.C.
Physiological and Genomic Consequences of Intermittent Hypoxia: Invited Review: Physiological consequences of intermittent hypoxia: systemic blood pressure. Journal of Applied Physiology, 90: 1600-1605, 2001.

[2] Powell, F.L., Milsom, W.K., Mitchell, G.S. Time domains of the hypoxic ventilatory response. Respiration Physiology, 112: 123-134, 1998.

[3] Hayashi, F., Coles, S.K., Bach, K.B., Mitchell, G.S., McCrimmon, D.R. Time-dependent phrenic nerve responses to carotid afferent activation: intact vs. decerebellate rats. American Journal of Physiology, 265: 811-819, 1993.

[4] Coles, S.K., Dick, T.E. Neurons in the ventrolateral pons are required for post-hypoxic frequency decline in rats. Journal of Physiology, 497: 79-94, 1996.

[5] Morris, K.F., Gozal, D. Persistent respiratory changes following intermittent hypoxic stimulation in cats and human beings. Respiratory Physiology and Neurobiology, 140: 1-8, 2004.

[6] Nieto, F.J., Young, T.B., Lind, B.K., Shahar, E., Samet, J.M., Redline, S., D'Agostino, R.B., Newman, A.B., Lebowitz, M.D., Pickering, T.G. Association of sleep-disordered breathing, sleep apnea, and hypertension in a large community-based study. Journal of the American Medical Association, 283: 1829-1836, 2000.

[7] Neubauer, J.A. Physiological and Genomic Consequences of Intermittent Hypoxia: Invited Review: Physiological and pathophysiological responses to intermittent hypoxia. Journal of Applied Physiology, 90: 1593-1599, 2001.

[8] Huygens, C. Horologium Oscillatorium. Paris, France, 1673.

[9] Anrep, G.V., Pascual, W., Rossler, R. Respiratory variations of the heart rate. The central mechanism of the respiratory arrhythmia and the interrelations between the central and reflex mechanisms. Proceedings of the Royal Society of London, 119: 191-217, 1935.

[10] Berger, R.D., Akselrod, S., Gordon, D., Cohen, R.J. An efficient algorithm for spectral analysis of heart rate variability.
IEEE Transactions on Biomedical Engineering, 33: 900-904, 1986.

[11] Hayano, J., Yasuma, F., Okada, A., Mukai, S., Fujinami, T. Respiratory sinus arrhythmia: a phenomenon improving pulmonary gas exchange and circulatory efficiency. Circulation, 94: 842-847, 1996.

[12] Giardino, N.D., Glenny, R.W., Borson, S., Chan, L. Respiratory sinus arrhythmia is associated with efficiency of pulmonary gas exchange in healthy humans. American Journal of Physiology: Heart and Circulatory Physiology, 284: 1585-1591, 2003.

[13] Bucher, K., Bucher, K.E. Cardiorespiratory synchronisms: synchrony with artificial circulation. Research in Experimental Medicine, 171: 33-39, 1977.

[14] Palm, G., Aertsen, A.M.H.J., Gerstein, G.L. On the significance of correlations among neuronal spike trains. Biological Cybernetics, 59: 1-11, 1988.

[15] Brody, C.B. Correlations without synchrony. Neural Computation, 11: 1537-1551, 1997.

[16] Lindsey, B.G., Arata, A., Morris, K.F., Hernandez, Y.M., Shannon, R. Medullary raphe neurones and baroreceptor modulation of the respiratory motor pattern in the cat. Journal of Physiology, 512: 863-882, 1998.

[17] Arata, A., Hernandez, Y.M., Lindsey, B.G., Morris, K.F., Shannon, R. Transient configurations of baroresponsive respiratory-related brainstem neuronal assemblies in the cat. Journal of Physiology, 525: 509-530, 2000.

[18] Orem, J., Dick, T.E. Consistency and signal strength of respiratory neuronal activity. Journal of Neurophysiology, 50: 1098-1107, 1983.

[19] Dick, T.E., Morris, K.F. Quantitative analysis of cardiovascular modulation in respiratory neural activity. Journal of Physiology, 556: 959-970, 2004.

[20] Sayers, B.McA. Analysis of heart rate variability. Ergonomics, 16: 17-32, 1973.

[21] Akselrod, S., Gordon, D., Ubel, F., Shannon, D., Barger, A. Power spectrum analysis of heart rate fluctuation: a quantitative probe of beat-to-beat cardiovascular control. Science, 213: 220-222, 1981.

[22] Larsen, P.D., Galletly, D.C.
Cardioventilatory coupling in heart rate variability: the value of standard analytical techniques. British Journal of Anaesthesia, 87: 819-826, 2001.

[23] Barbieri, R., Parati, G., Saul, J.P. Closed- vs. open-loop assessment of heart rate baroreflex. IEEE Engineering in Medicine and Biology Magazine, 20: 33-42, 2001.

[24] Baselli, G., Cerutti, S., Civardi, S., Malliani, A., Pagani, M. Cardiovascular variability signals: towards the identification of a closed-loop model of the neural control mechanisms. IEEE Transactions on Biomedical Engineering, 35: 1033-1046, 1988.

[25] Saul, J.P., Kaplan, D.T., Kitney, R.I. Nonlinear interactions between respiration and heart rate: classical physiology or entrained nonlinear oscillations. IEEE Computers in Cardiology Proceedings, pp. 3-8, 1988.

[26] Paulus, M., Hoyer, D. Detecting nonlinearity and phase synchronization with surrogate data. IEEE Engineering in Medicine and Biology Magazine, 17: 40-45, 1998.

[27] Hurtado, J.M., Rubchinsky, L.L., Sigvardt, K.A. Statistical method for detection of phase-locking episodes in neural oscillations. Journal of Neurophysiology, 91: 1883-1898, 2003.

[28] Tass, P., Rosenblum, M.G., Weule, J., Kurths, J., Pikovsky, A., Volkmann, J., Schnitzler, A., Freund, H.J. Detection of n:m phase locking from noisy data: application to magnetoencephalography. Physical Review Letters, 81: 3291-3294, 1998.

[29] Feige, B., Aertsen, A., Kristeva-Feige, R. Dynamic synchronization between multiple cortical motor areas and muscle activity in phasic voluntary movements. Journal of Neurophysiology, 84: 2622-2629, 2000.

[30] Schiff, S.J., So, P., Chang, T., Burke, R.E., Sauer, T. Detecting dynamical interdependence and generalized synchrony through mutual prediction in a neural ensemble. Physical Review E, 54: 6708-6724, 1996.

[31] Schafer, C., Rosenblum, M.G., Abel, H.H., Kurths, J. Synchronization in the human cardiorespiratory system. Physical Review E, 60: 857-870, 1999.

[32] Gabor, D. Theory of communication.
Journal of the Institution of Electrical Engineers, 93: 429-457, 1946.

[33] Rosenblum, M.G., Pikovsky, A.S., Kurths, J., Schafer, C., Tass, P.A. Phase synchronization: from theory to data analysis. In Neuro-Informatics, edited by Moss, F. and Gielen, S. Amsterdam: Elsevier Science, 279-321, 2001.

[34] Galletly, D.C., Larsen, P.D. Cardioventilatory coupling in heart rate variability: methods for qualitative and quantitative determination. British Journal of Anaesthesia, 87: 827-833, 2001.

[35] Galletly, D.C., Larsen, P.D. Relationship between cardioventilatory coupling and respiratory sinus arrhythmia. British Journal of Anaesthesia, 80: 164-168, 1998.

[36] Galletly, D.C., Larsen, P.D. Cardioventilatory coupling during anaesthesia. British Journal of Anaesthesia, 79: 35-40, 1997.

[37] Strang, G., Nguyen, T. Wavelets and Filter Banks. Wellesley-Cambridge Press, Wellesley, MA, 1996.

[38] Daubechies, I. Ten Lectures on Wavelets. SIAM, Philadelphia, 1992.

[39] Mallat, S.G. Multifrequency channel decomposition of images and wavelet models. IEEE Transactions on Acoustics, Speech and Signal Processing, 37: 2091-2110, 1989.

[40] Donoho, D.L., Johnstone, I.M., Kerkyacharian, G., Picard, D. Wavelet shrinkage: asymptopia? (with discussion). Journal of the Royal Statistical Society, B, 57: 301-369, 1995.

[41] Donoho, D.L. De-noising by soft thresholding. IEEE Transactions on Information Theory, 41: 613-627, 1995.

[42] Donoho, D.L., Johnstone, I.M. Ideal spatial adaptation by wavelet shrinkage. Biometrika, 81: 425-455, 1994.

[43] Antoniadis, A., Gijbels, I., Gregoire, G. Model selection using wavelet decomposition and applications. Biometrika, 84: 751-763, 1997.

[44] Donoho, D.L., Johnstone, I.M. Adapting to unknown smoothness via wavelet shrinkage. Journal of the American Statistical Association, 90: 1200-1224, 1995.

[45] Saito, N.
Simultaneous noise suppression and signal compression using a library of orthonormal bases and the minimum description length criterion. In Wavelets in Geophysics. New York: Academic Press, 1994.

[46] Lada, E.K., Lu, J.C., Wilson, J.R. A wavelet-based procedure for process fault detection. IEEE Transactions on Semiconductor Manufacturing, 15: 79-90, 2002.

[47] Jin, J., Shi, J. Feature-preserving data compression of stamping tonnage information using wavelets. Technometrics, 41: 327-339, 1999.

[48] Jeong, M.K., Chen, D., Lu, J.C. Thresholded scalogram and its applications in process fault detection. Applied Stochastic Models in Business and Industry, 19: 231-244, 2003.

[49] Johnson, R.A., Wichern, D.W. Applied Multivariate Statistical Analysis. Fifth edition. New Jersey: Prentice Hall, 2002.

[50] Hastie, T. Principal Curves and Surfaces. Technical Report, Stanford University, 1984.

[51] Hastie, T., Stuetzle, W. Principal curves. Journal of the American Statistical Association, 84: 502-516, 1989.

[52] Tibshirani, R. Principal curves revisited. Statistics and Computing, 2: 183-190, 1992.

[53] Dempster, A.P., Laird, N.M., Rubin, D.B. Maximum likelihood from incomplete data via the EM algorithm (with discussion). Journal of the Royal Statistical Society, B, 39: 1-38, 1977.

[54] Kim, J., Huang, Q., Shi, J. Online multichannel forging tonnage monitoring and fault pattern classification using principal curve. Accepted by ASME Transactions, Journal of Manufacturing Science and Engineering, 2003.

[55] Delicado, P. Another look at principal curves and surfaces. Journal of Multivariate Analysis, 77: 84-116, 2001.

[56] Dong, D., McAvoy, T.J. Nonlinear principal component analysis based on principal curves and neural networks. Computers in Chemical Engineering, 20: 65-78, 1995.

[57] Chang, K., Ghosh, J. A unified model for probabilistic principal surfaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23: 22-41, 2001.

[58] Huang, Q.
Principal Curve Regression and Analysis of Multiple Curve Data, 2004, unpublished.

[59] Spearman, C. General intelligence, objectively determined and measured. American Journal of Psychology, 15: 201-293, 1904.

[60] Burt, C. The Factors of the Mind. London University Press, 1940.

[61] Cattell, R.B. The meaning and strategic use of factor analysis. In R.B. Cattell (Ed.), Handbook of Multivariate Experimental Psychology, Chicago: Rand McNally, 1988.

[62] Maxwell, A.E. Recent trends in factor analysis. Journal of the Royal Statistical Society, A, 124: 49-59, 1961.

[63] Anderson, T.W. An Introduction to Multivariate Statistical Analysis. Second edition. John Wiley & Sons, 1984.

[64] Cattell, R.B. The scree test for the number of factors. Multivariate Behavioural Research, 1: 245-276, 1966.

[65] Akaike, H. Factor analysis and AIC. Psychometrika, 52: 317-332, 1987.

[66] Rissanen, J. Modeling by shortest data description. Automatica, 14: 465-471, 1978.

[67] Bozdogan, H. Model selection and Akaike's information criterion (AIC): the general theory and its analytical extensions. Psychometrika, 52: 345-370, 1987.

[68] Forster, M.R. Key concepts in model selection: performance and generalizability. Journal of Mathematical Psychology, 44: 205-231, 2000.

[69] Carroll, J.B. An analytical solution for approximating simple structure in factor analysis. Psychometrika, 18: 23-38, 1953.

[70] Neuhaus, J.O., Wrigley, C. The quartimax method: an analytical approach to orthogonal simple structure. British Journal of Statistical Psychology, 7: 81-91, 1954.

[71] Saunders, D.R. An analytic method for rotation to orthogonal simple structure. Princeton, NJ: Educational Testing Service Research Bulletin, No. 53-10, 1953.

[72] Ferguson, G.A. The concept of parsimony in factor analysis. Psychometrika, 19: 281-290, 1954.

[73] Kaiser, H.F. The varimax criterion for analytic rotation in factor analysis. Psychometrika, 23: 187-200, 1958.

[74] Hendrickson, A.E., White, P.O.
PROMAX: a quick method for rotation to oblique simple structure. British Journal of Statistical Psychology, 17: 65-70, 1964.

[75] Tucker, L.R. An inter-battery method of factor analysis. Psychometrika, 23: 111-136, 1958.

[76] McDonald, R.P. Three common factor models for groups of variables. Psychometrika, 35: 111-128, 1970.

[77] Browne, M.W. The maximum-likelihood solution in inter-battery factor analysis. British Journal of Mathematical and Statistical Psychology, 32: 75-86, 1979.

[78] Browne, M.W. Factor analysis of multiple batteries by maximum likelihood. British Journal of Mathematical and Statistical Psychology, 33: 184-199, 1980.

[79] Lawley, D.N., Maxwell, A.E. Factor Analysis as a Statistical Method. London: Butterworth, 1971.

[80] Joreskog, K.G. Some contributions to maximum likelihood factor analysis. Psychometrika, 32: 443-482, 1967.

[81] Green, P.J., Silverman, B.W. Nonparametric Regression and Generalized Linear Models: A Roughness Penalty Approach. London: Chapman and Hall, 1994.