USF Libraries
USF Digital Collections

An in-depth analysis of face recognition algorithms using affine approximations


Material Information

Title:
An in-depth analysis of face recognition algorithms using affine approximations
Physical Description:
Book
Language:
English
Creator:
Reguna, Lakshmi
Publisher:
University of South Florida
Place of Publication:
Tampa, Fla.
Publication Date:
2003

Subjects

Subjects / Keywords:
Biometrics
Optimal affine transformation
Principal component analysis
Eigen space
Affine space
Dissertations, Academic -- Computer Science -- Masters -- USF   ( lcsh )
Genre:
government publication (state, provincial, territorial, dependent)   ( marcgt )
bibliography   ( marcgt )
theses   ( marcgt )
non-fiction   ( marcgt )

Notes

Summary:
ABSTRACT: In order to foster the maturity of face recognition analysis as a science, a well-implemented baseline algorithm and good performance metrics are essential to benchmark progress. In the past, face recognition algorithms based on Principal Components Analysis (PCA) have often been used as a baseline algorithm. The objective of this thesis is to develop a strategy to estimate the best affine transformation which, when applied to the eigen space of the PCA face recognition algorithm, can approximate the results of any given face recognition algorithm. The affine approximation strategy outputs an optimal affine transform that approximates the similarity matrix of the distances between a given set of faces generated by any given face recognition algorithm. The affine approximation strategy helps in comparing how close a face recognition algorithm is to the PCA-based face recognition algorithm. This thesis shows how the affine approximation algorithm can be used as a valuable tool to evaluate face recognition algorithms at a deep level. Two test algorithms were chosen to demonstrate the usefulness of the affine approximation strategy: the Linear Discriminant Analysis (LDA) based face recognition algorithm and the Bayesian interpersonal and intrapersonal classifier based face recognition algorithm. Our studies indicate that both algorithms can be approximated well. These conclusions were arrived at by analyzing the raw similarity scores and by studying the identification and verification performance of the algorithms. Two training scenarios were considered: in the first, both the face recognition algorithm and the affine approximation algorithm were trained on the same data set; in the second, different data sets were used to train the two algorithms. Gross error measures like the average RMS error and Stress-1 error were used to directly compare the raw similarity scores.
The histogram of the difference between the similarity matrices also clearly showed that the error spread is small for the affine approximation algorithm. The performance of the algorithms in the identification and verification scenarios was characterized using traditional CMS and ROC curves. McNemar's test showed that the difference between the CMS and ROC curves generated by the test face recognition algorithms and the affine approximation strategy is not statistically significant. The differences were statistically insignificant at rank 1 for the first training scenario, but for the second training scenario they became insignificant only at higher ranks. This difference in performance can be attributed to the different training sets used in the second training scenario.
Thesis:
Thesis (MSComS)--University of South Florida, 2003.
Bibliography:
Includes bibliographical references.
System Details:
System requirements: World Wide Web browser and PDF reader.
System Details:
Mode of access: World Wide Web.
Statement of Responsibility:
by Lakshmi Reguna.
General Note:
Title from PDF of title page.
General Note:
Document formatted into pages; contains 84 pages.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 001681129
oclc - 62792706
usfldc doi - E14-SFE0000616
usfldc handle - e14.616
System ID:
SFS0025306:00001




Full Text

PAGE 1

AN IN-DEPTH ANALYSIS OF FACE RECOGNITION ALGORITHMS USING AFFINE APPROXIMATIONS

by

LAKSHMI REGUNA

A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science, Department of Computer Science and Engineering, College of Engineering, University of South Florida

Major Professor: Sudeep Sarkar, Ph.D.
Dmitry Goldgof, Ph.D.
Nagarajan Ranganathan, Ph.D.

Date of Approval: May 19, 2003

Keywords: affine space, eigen space, principal component analysis, optimal affine transformation, biometrics

Copyright 2003, Lakshmi Reguna

PAGE 2

DEDICATION

To my husband and my parents, without whom I would not have been able to come so far.

PAGE 3

ACKNOWLEDGEMENTS

I would like to thank Dr. Sudeep Sarkar, my major professor, for his guidance and support throughout my Master's degree program, and for carefully reviewing my thesis writeup. This thesis has been a great learning experience for me. I sincerely thank Dr. Sudeep Sarkar for giving me the opportunity to work on this project. I would also like to thank Dr. Goldgof and Dr. Ranganathan for being a part of my thesis committee. I would also like to thank Dr. P. Jonathon Phillips from NIST for his valuable inputs. I would also like to thank Dr. Ross Beveridge and his students from Colorado State University for their patience and help.

PAGE 4

TABLE OF CONTENTS

LIST OF TABLES ii
LIST OF FIGURES iii
ABSTRACT vi
CHAPTER 1 RELATED WORK 1
  1.1 The FERET Protocol 3
    1.1.1 FERET 1994-1996 3
    1.1.2 Facial Recognition Vendor Test 5
  1.2 Evaluating Statistical Significance of the Results 7
    1.2.1 Significance of the Affine Approximating Algorithm 8
CHAPTER 2 AFFINE APPROXIMATION ALGORITHM 9
CHAPTER 3 TEST ALGORITHMS 15
  3.1 Principal Component Analysis (PCA) Algorithm 15
  3.2 Linear Discriminant Analysis (LDA) Algorithm 16
  3.3 Bayesian Intrapersonal and Extrapersonal Classifier 17
CHAPTER 4 EXPERIMENTAL SETUP 18
  4.1 Data Description 18
  4.2 Training Scenarios 18
CHAPTER 5 ANALYSIS OF RESULTS 21
  5.1 Visualization of Distance Matrixes 21
  5.2 Gross Error Measures 22
  5.3 Analysis of Affine Approximation Algorithm 30
  5.4 Eigen Values of B Matrix 35
  5.5 Performance of Identification and Verification Scenarios 41
  5.6 McNemar's Test 41
  5.7 Difference Caused by Training Scenarios 52
CHAPTER 6 CONCLUSION 53
  6.1 Future Work 54
REFERENCES 55
APPENDICES 58
  Appendix A More Results 59

PAGE 5

LIST OF TABLES

Table 5.1 McNemar's Test for Training Scenario 1 at Rank 1. 51
Table 5.2 McNemar's Test for Training Scenario 2 at Rank 1. 51
Table 5.3 McNemar's Test for Training Scenario 2. 51
Table A.1 Training Scenario 2: Average RMS Error. 61
Table A.2 Training Scenario 2: Stress 1. 61
Table A.3 McNemar's Test for Training Scenario 1 at Rank 1. 74
Table A.4 McNemar's Test for Training Scenario 2 at Rank 1. 74
Table A.5 McNemar's Test for Training Scenario 2. 74

PAGE 6

LIST OF FIGURES

Figure 2.1 Problem Definition. 9
Figure 2.2 Flow Chart of the Affine Approximation Algorithm. 13
Figure 4.1 Training Scenario 1: Block Diagram. 19
Figure 4.2 Training Scenario 2: Block Diagram. 19
Figure 5.1 Visualization of Similarity Matrix for the PCA Algorithm Using the Euclidean Distance Measure. 23
Figure 5.2 Visualization of Similarity Matrix for the PCA Algorithm Using the Cosine Distance Measure. 24
Figure 5.3 Visualization of Similarity Matrix for the LDA Algorithm. 25
Figure 5.4 Visualization of Similarity Matrix for the Bayesian Algorithm. 27
Figure 5.5 Training Scenario 1: Plot of RMS Error. 28
Figure 5.6 Training Scenario 1: Stress Plot. 28
Figure 5.7 Training Scenario 2: Plot of RMS Error. 29
Figure 5.8 Training Scenario 2: Stress Plot. 29
Figure 5.9 Training Scenario 1: Visualization of Affine Transformation Matrix for the PCA Algorithm Using the Euclidean Distance Measure. 31
Figure 5.10 Training Scenario 1: Visualization of Affine Transformation Matrix for the PCA Algorithm Using the Cosine Distance Measure. 31
Figure 5.11 Training Scenario 1: Visualization of Affine Transformation Matrix for the LDA Algorithm. 32
Figure 5.12 Training Scenario 1: Visualization of Affine Transformation Matrix for the Bayesian Algorithm. 32
Figure 5.13 Training Scenario 2: Visualization of Affine Transformation Matrix for the PCA Algorithm Using the Euclidean Distance Measure. 33
Figure 5.14 Training Scenario 2: Visualization of Affine Transformation Matrix for the PCA Algorithm Using the Cosine Distance Measure. 33

PAGE 7

Figure 5.15 Training Scenario 2: Visualization of Affine Transformation Matrix for the LDA Algorithm. 34
Figure 5.16 Training Scenario 2: Visualization of Affine Transformation Matrix for the Bayesian Algorithm. 34
Figure 5.17 Training Scenario 1: Eigen Values for the PCA Algorithm Using the Euclidean Distance Measure. 35
Figure 5.18 Training Scenario 1: Eigen Values for the PCA Algorithm Using the Cosine Distance Measure. 36
Figure 5.19 Training Scenario 1: Eigen Values for the LDA Algorithm. 37
Figure 5.20 Training Scenario 1: Eigen Values for the Bayesian Algorithm. 37
Figure 5.21 Training Scenario 2: Eigen Values for the PCA Algorithm Using the Euclidean Distance Measure. 38
Figure 5.22 Training Scenario 2: Eigen Values for the PCA Algorithm Using the Cosine Distance Measure. 39
Figure 5.23 Training Scenario 2: Eigen Values for the LDA Algorithm. 39
Figure 5.24 Training Scenario 2: Eigen Values for the Bayesian Algorithm. 40
Figure 5.25 Training Scenario 1: CMC Curve for PCA Algorithm Using the Euclidean Distance Measure. 42
Figure 5.26 Training Scenario 1: ROC Curve for PCA Algorithm Using the Euclidean Distance Measure. 42
Figure 5.27 Training Scenario 1: CMC Curve for PCA Algorithm Using the Cosine Distance Measure. 43
Figure 5.28 Training Scenario 1: ROC Curve for PCA Algorithm Using the Cosine Distance Measure. 43
Figure 5.29 Training Scenario 1: CMC Curve for LDA Algorithm. 44
Figure 5.30 Training Scenario 1: ROC Curve for LDA Algorithm. 44
Figure 5.31 Training Scenario 1: CMC Curve for Bayesian Algorithm. 45
Figure 5.32 Training Scenario 1: ROC Curve for Bayesian Algorithm. 45
Figure 5.33 Training Scenario 2: CMC Curve for PCA Algorithm Using the Euclidean Distance Measure. 46
Figure 5.34 Training Scenario 2: ROC Curve for PCA Algorithm Using the Euclidean Distance Measure. 46

PAGE 8

Figure 5.35 Training Scenario 2: CMC Curve for PCA Algorithm Using the Cosine Distance Measure. 47
Figure 5.36 Training Scenario 2: ROC Curve for PCA Algorithm Using the Cosine Distance Measure. 47
Figure 5.37 Training Scenario 2: CMC Curve for LDA Algorithm. 48
Figure 5.38 Training Scenario 2: ROC Curve for LDA Algorithm. 48
Figure 5.39 Training Scenario 2: CMC Curve for Bayesian Algorithm. 49
Figure 5.40 Training Scenario 2: ROC Curve for Bayesian Algorithm. 49
Figure 5.41 Eigen Faces of Set C1. 52
Figure 5.42 Eigen Faces of Set C2. 52
Figure A.1 Eigen Faces of Set C1. 59
Figure A.2 Eigen Faces of Set C2. 60
Figure A.3 Training Scenario 1: Plot of RMS Error. 60
Figure A.4 Training Scenario 1: Stress Plot. 61
Figure A.5 Training Scenario 1: CMC Curve for PCA Algorithm. 62
Figure A.6 Training Scenario 1: ROC Curve for PCA Algorithm. 63
Figure A.7 Training Scenario 1: CMC Curve for LDA Algorithm. 64
Figure A.8 Training Scenario 1: ROC Curve for LDA Algorithm. 65
Figure A.9 Training Scenario 1: CMC Curve for Bayesian Algorithm. 66
Figure A.10 Training Scenario 1: ROC Curve for Bayesian Algorithm. 67
Figure A.11 Training Scenario 2: CMC Curve for PCA Algorithm. 68
Figure A.12 Training Scenario 2: ROC Curve for PCA Algorithm. 69
Figure A.13 Training Scenario 2: CMC Curve for LDA Algorithm. 70
Figure A.14 Training Scenario 2: ROC Curve for LDA Algorithm. 71
Figure A.15 Training Scenario 2: CMC Curve for Bayesian Algorithm. 72
Figure A.16 Training Scenario 2: ROC Curve for Bayesian Algorithm. 73

PAGE 9

AN IN-DEPTH ANALYSIS OF FACE RECOGNITION ALGORITHMS USING AFFINE APPROXIMATIONS

Lakshmi Reguna

ABSTRACT

In order to foster the maturity of face recognition analysis as a science, a well-implemented baseline algorithm and good performance metrics are essential to benchmark progress. In the past, face recognition algorithms based on Principal Components Analysis (PCA) have often been used as a baseline algorithm. The objective of this thesis is to develop a strategy to estimate the best affine transformation which, when applied to the eigen space of the PCA face recognition algorithm, can approximate the results of any given face recognition algorithm. The affine approximation strategy outputs an optimal affine transform that approximates the similarity matrix of the distances between a given set of faces generated by any given face recognition algorithm. The affine approximation strategy helps in comparing how close a face recognition algorithm is to the PCA-based face recognition algorithm. This thesis shows how the affine approximation algorithm can be used as a valuable tool to evaluate face recognition algorithms at a deep level. Two test algorithms were chosen to demonstrate the usefulness of the affine approximation strategy: the Linear Discriminant Analysis (LDA) based face recognition algorithm and the Bayesian interpersonal and intrapersonal classifier based face recognition algorithm. Our studies indicate that both algorithms can be approximated well. These conclusions were arrived at by analyzing the raw similarity scores and by studying the identification and verification performance of the algorithms. Two training scenarios were considered: in the first, both the face recognition algorithm and the affine approximation algorithm were trained on the same data set; in the second, different data sets were used to train the two algorithms.
Gross error measures like the average RMS error and Stress-1 error were used to directly compare the raw similarity

PAGE 10

scores. The histogram of the difference between the similarity matrices also clearly showed that the error spread is small for the affine approximation algorithm. The performance of the algorithms in the identification and verification scenarios was characterized using traditional CMS and ROC curves. McNemar's test showed that the difference between the CMS and ROC curves generated by the test face recognition algorithms and the affine approximation strategy is not statistically significant. The differences were statistically insignificant at rank 1 for the first training scenario, but for the second training scenario they became insignificant only at higher ranks. This difference in performance can be attributed to the different training sets used in the second training scenario. We believe that this difference in performance between the first and the second training scenarios can be reduced by using a larger training set.
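The two gross error measures named in the abstract operate directly on the K x K similarity matrices. The following is a minimal sketch of how they can be computed, assuming both matrices are available as NumPy arrays; the exact normalization conventions used in the thesis may differ.

```python
import numpy as np

def average_rms_error(d_alg, d_affine):
    """Root-mean-square difference between two K x K distance matrices."""
    diff = d_alg - d_affine
    return np.sqrt(np.mean(diff ** 2))

def stress_1(d_alg, d_affine):
    """Kruskal's Stress-1: residual between the matrices, normalized by the
    magnitude of the reference distances."""
    num = np.sum((d_alg - d_affine) ** 2)
    den = np.sum(d_alg ** 2)
    return np.sqrt(num / den)

# toy 3 x 3 distance matrices (illustrative data only)
d1 = np.array([[0.0, 1.0, 2.0],
               [1.0, 0.0, 1.5],
               [2.0, 1.5, 0.0]])
d2 = d1 + 0.1   # a uniformly perturbed approximation
```

Because Stress-1 is normalized by the reference distances, it is easier to compare across algorithms whose similarity scores live on different scales than the raw RMS error is.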

PAGE 11

CHAPTER 1
RELATED WORK

The rapid development of face recognition technology necessitated the development of effective protocols and techniques to evaluate the performance of face recognition algorithms. If automatic face recognition systems are to be deployed in real-world scenarios, then effective means of comparing the performance of independent face recognition systems have to be developed. Researchers used the insight gained from publicly available protocols for evaluating biometric systems to develop methods for evaluating face recognition systems. The UK Biometrics Working Group's Biometric Test Programme Report compared six different biometrics [12]. The report is the first evaluation that directly compares the performance of different biometrics for the same application. Face, fingerprint, hand geometry, iris, vein and voice recognition systems were tested for verification in a normal office environment with cooperative, non-habituated users. The NIST speaker recognition evaluations measure verification performance [14]. The protocols used for evaluating fingerprint technology also greatly helped in developing methods to evaluate face recognition systems [7, 8, 9]. The Biometric Testing Center at San Jose State University explored a number of essential questions relating to the science underpinning biometric technologies. The results of their endeavors contain evaluation results from the INSPASS Hand Geometry System, the Philippine AFIS System and numerous other small-scale evaluations [13]. The most important lesson learned from these evaluations is that large sets of test images are essential for adequate evaluation. Also, the sample should be statistically as similar as possible to the images that would arise in the application considered. The costs of errors in recognition should be reflected in the scoring process.
The reject error behavior should be studied, and not just forced recognition. The operation of a pattern recognition

PAGE 12

system is statistical, and no universal distributions of success and failure exist. The distributions of success and failure depend heavily on the application being considered, and no theory exists that can predict these distributions for new applications. So the evaluation protocol should be designed to follow the application being considered as closely as possible. Phillips et al. [20, 12] identified three basic scenarios in which biometric systems should be evaluated. They are as follows:

1. Technology Evaluation.
2. Scenario Evaluation.
3. Operational Evaluation.

Technology evaluation attempts to compare competing algorithms from a single technology. The testing of all the algorithms is carried out on a standardized database collected by a universal sensor. The database should be created such that it is neither too difficult nor too easy for the algorithms being tested. Although sample data may be distributed for developmental or tuning purposes, the actual testing would be done on data not previously seen by the algorithm developers. Testing is carried out using offline processing of the data. Since the database is fixed, the results are repeatable. Technology evaluations have been very crucial in understanding the strengths and weaknesses of biometric systems.

The goal of scenario evaluation is to determine the overall system performance in a prototype or simulated application. Testing is carried out on a complete system in an environment that attempts to model a real-world target application of interest. Each tested system would have its own acquisition sensor and so will receive slightly different data. However, care should be taken that the data collection across all tested systems happens in the same environment with the same population. The testing may be a combination of offline and online comparisons.
The extent to which the test results would be repeatable depends directly on how tightly the testing environment is controlled. Operational evaluation aims to determine the performance of a complete biometric system in a specific application environment with a specific target population. Offline testing

PAGE 13

may be possible depending upon the data capabilities of the tested device. Operational test results will not be repeatable because of unknown and undocumented differences between operational environments.

The evaluation protocol determines how a biometric system is tested, how test data is selected and how measures of performance are chosen. It should be neither too easy nor too hard. The evaluation protocol should also spread the performance scores over a range so that it is easy to distinguish between the different algorithms. Also, biometric systems should be tested on previously unseen data. If the biometric systems are not tested on previously unseen data, the test would only measure how well a biometric system tunes to a particular data set [20]. One of the most significant evaluation protocols in face recognition is the FERET evaluation protocol [16, 17, 18]. The subsequent section deals with the FERET protocol in more detail.

1.1 The FERET Protocol

The most important step in the direction of developing a standardized technique for evaluating face recognition systems was brought about by the FERET program. Before the FERET program was developed, most face recognition systems reported perfect performance on small data sets of images. But when deployed in a real-world scenario, the reality was far from the reported performance.

1.1.1 FERET 1994-1996

Three FERET evaluations of laboratory algorithms were carried out in 1994, 1995 and 1996 [16, 17, 19]. This was followed by evaluations of commercial face recognition systems in 2000 and 2002 [24, 23]. As a part of the FERET program, a large database of still images was collected between August 1993 and July 1996 that consists of 14,126 images of 1,199 individuals. Later, the Facial Recognition Vendor Test [23] conducted in 2002 used a much larger database consisting of 121,589 operational images of 37,437 subjects.
The images were provided from the U.S. Department of State's Mexican non-immigrant visa archive.

PAGE 14

The FERET evaluation protocol designed for the tests during 1994, 1995 and 1996 was a general evaluation strategy to measure the performance of laboratory algorithms on a common database. The FERET test was not concerned with the performance of individual components of an algorithm, nor was it concerned with the performance of the algorithms under various operational scenarios. The algorithms were evaluated against different categories of images in order to obtain a robust assessment of performance. The images differed by changes in illumination and lighting, the presence or absence of glasses, and the time of acquisition of images of the same subject.

The first FERET evaluation test was administered in August 1994 [19]. During this evaluation the PCA face recognition algorithm was used as a baseline algorithm to compare the performance of other face recognition algorithms. This evaluation could measure the performance of algorithms that could automatically locate, normalize and identify faces. The FERET protocol defined the gallery set as the set of known individuals and the probe set as the set of unknown individuals [16]. The evaluation in August 1994 consisted of three tests, each with a different probe and gallery set. The first test measured the identification performance from a gallery of 316 individuals with one image per person. The second test was a false alarm test, which measured how well an algorithm rejects faces not in the gallery. The third test baselined the effects of pose changes on performance.

The FERET protocol characterized the performance of the face recognition algorithms in the identification scenario using Cumulative Match Score (CMS) curves and the performance in the verification scenario using Receiver Operating Characteristic (ROC) curves. The CMS curve plots the percentage of queries in which the correct answer can be obtained within a certain rank.
The ROC curve plots the false reject error versus the false alarm error. Mansfield and Wayman [12] also recommended the use of Detection Error Trade-off (DET) curves to report the performance of biometric systems. The DET curve plots the false match rate versus the false non-match rate. They defined the false match rate (FMR) as "the expected probability that a sample will be falsely declared to match a single randomly selected 'non-self' template" and the false non-match rate (FNMR) as "the expected probability that a sample will be falsely declared not to match a template of the same

PAGE 15

measure from the same user applying the sample". The DET curve is essentially the same as the ROC curve; it plots the two error rates against each other instead of the detection rate versus the false alarm rate as in the ROC curve. The false reject rate is 1 minus the detection rate. False alarm and false reject rates are computed over the number of comparisons, whereas the false match/non-match rates are computed over the number of transactions made by the user.

The second FERET test was administered in March 1995. One of the main emphases of this test was on duplicate probes. A duplicate probe is usually an image of a person whose corresponding gallery image was taken on a different day. The algorithms were evaluated on larger galleries, and progress from the previous FERET evaluation in 1994 was measured. This evaluation consisted of a single test that measured identification performance from a gallery of 817 individuals.

The third FERET test was administered in September 1996 and March 1997. The design of this evaluation was more complex than the first two evaluations, and allowed for a more detailed performance characterization of face recognition systems. One of the main goals of FERET96 was to measure the improvements in performance of face recognition algorithms on different probe and gallery image sets since the previous two FERET evaluations. FERET96 also tried to measure the effect of pose variation and digital modification of the probe images on performance.

1.1.2 Facial Recognition Vendor Test

From 2000, the FERET program also began to evaluate commercial face recognition systems. The Facial Recognition Vendor Test 2000 (FRVT 2000) [24] was conducted to assess the technical capabilities of commercial face recognition systems. FRVT 2000 conducted two kinds of tests:

1. Recognition Performance Test: a technology evaluation.
2. Product Usability Test: a limited scenario evaluation.

The recognition performance test was conducted to evaluate the technical performance of the face recognition systems. The tests were conducted on the standardized FERET
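The CMS identification measure used throughout these evaluations is simple to state operationally: for each probe, rank all gallery entries by distance and record the rank at which the correct identity appears. The sketch below assumes a distance matrix with one row per probe and one column per gallery entry; the variable names are illustrative, not from the thesis.

```python
import numpy as np

def cms_curve(distances, gallery_ids, probe_ids):
    """Cumulative Match Score: fraction of probes whose correct gallery
    identity appears within the top r ranks, for r = 1..len(gallery)."""
    n_gallery = distances.shape[1]
    cms = np.zeros(n_gallery)
    for p, probe_id in enumerate(probe_ids):
        order = np.argsort(distances[p])  # closest gallery entries first
        rank = np.where(np.asarray(gallery_ids)[order] == probe_id)[0][0]
        cms[rank:] += 1                   # a hit at rank r counts at r and beyond
    return cms / len(probe_ids)

# toy example: 3 probes against a 3-person gallery
dist = np.array([[0.1, 0.9, 0.8],   # probe 0: closest to gallery 0 (correct)
                 [0.7, 0.2, 0.9],   # probe 1: closest to gallery 1 (correct)
                 [0.3, 0.8, 0.4]])  # probe 2: correct match only at rank 2
gallery_ids = [0, 1, 2]
probe_ids = [0, 1, 2]
curve = cms_curve(dist, gallery_ids, probe_ids)
```

The rank-1 value of this curve is the headline identification rate; plotting `curve` against rank gives the CMS (also called CMC) curve that FERET reports.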

PAGE 16

database. The change in performance due to the following factors was quantified using CMS and ROC curves:

1. Compression: Estimate the effect of lossy image compression.
2. Distance: Estimate the effect of positioning the subject at varying distances from a fixed camera.
3. Expression: Evaluate performance when comparing images of the same person with different facial expressions.
4. Illumination: Analyze the effect of changes in subject illumination.
5. Media: Estimate the effect of comparing images stored on different media.
6. Pose: Evaluate performance as the viewpoint from which facial images are taken changes.
7. Image resolution: Evaluate performance as image resolution is varied.
8. Temporal: Analyze the effect of time delay between the first and subsequent captures of facial images.

The Product Usability Test was a limited scenario evaluation. The scenario for the test was access control with live subjects. This test was carried out even though some of the participant face recognition systems were not designed for access control applications. During the product usability test, some of the parameters that were varied are the start distance, behavior mode and backlighting. The test subjects performed each test in the cooperative, repeatable and indifferent behavior modes.

The next face recognition vendor test was conducted in 2002 (FRVT 2002) [23]. FRVT 2002 consisted of two subtests, namely:

1. High Computational Intensity (HCInt) test.
2. Medium Computational Intensity (MCInt) test.

The HCInt test was designed to evaluate the performance of the face recognition systems on a very large database. The MCInt test was designed to evaluate the capability of the face

PAGE 17

recognition system to perform the face recognition task with several different formats of imagery (still and video) under varying conditions like indoor and outdoor lighting. Performance of the face recognition systems was also measured on watch lists. A watch list is a list of the subjects that the face recognition system is on the lookout for. When a probe is presented to the face recognition system, the system checks to see if it is present on the watch list. The performance of this application is characterized by the watch list ROC, which plots the trade-off between the detection and identification rate and the false alarm rate. One of the new features of FRVT 2002 was that it computed the variance in performance for multiple probe and gallery combinations. Error ellipses were used in the ROC curves to estimate the range of performance. The error ellipses are measured using disjoint galleries and probe sets, which avoids the issues with resampling techniques. Error ellipses are a measure of variance and, unlike confidence intervals, they do not provide error bounds on the ROC. Some of the covariates that were explored for the first time by FRVT 2002 are the sex and age of an individual.

1.2 Evaluating Statistical Significance of the Results

The FERET protocol was a major step in standardizing the evaluation of face recognition algorithms. However, the FERET protocol stopped short of addressing the critical question of statistical variability [26]. Not all the measured differences between the performance of algorithms are statistically significant; some of the differences occur by chance. So it was essential to develop sound techniques to study the statistical significance of the measured differences. The FERET protocol did not establish a common means of testing when the difference between two curves is significant,
so methods needed to be developed to address this issue. Micheals et al. [28] derive mean and standard deviation estimates for recognition rates at different ranks. By using stratified sampling and a statistical technique called Balanced Repeated Resampling (BRR), they generate standard error bars for CMS curves. They used their technique to compare the difference between the PCA algorithm and

PAGE 18

t w o algorithms from the Visionics F aceIt SDK on the FERET data set. One of the simplest tec hnique for ev aluating statistical signicance is the McNemar's test [25 ]. If t w o trained algorithms are tested with the same gallery and prob e images then this test w ould b e v ery appropriate to compare the statistical signicance of the results. Bev eridge et al also p erm uted the prob e and the gallery images to v erify the statistical signicance of the results and th us generated CMS curv es with error bars indicating condence in terv als [25 ]. Jonathon Phillips et al [15 ] explored the issue of whether h uman sub jects and face recognition algorithms b oth nd the same faces iden tical. The results w ere represen ted using biplots. Leigh et al [10 ] used Phi-PIT transformations to represen t the similarit y matrixes using b o x plots and analyze the results using ANO V A. The b o x plots help in c haracterizing the distribution of the data set. Rukhin et al [31 ] used partial rank correlations to correlate the p erformance of face recognition algorithms. 1.2.1 Signicance of the Ane Appro ximating Algorithm The ane appro ximation algorithm is essen tially a to ol for carrying out the tec hnology ev aluation of face recognition algorithms at a deep lev el. The PCA face recognition algorithm has frequen tly b een used as a baseline algorithm to ev aluate the p erformance of other face recognition algorithms [16 21 ]. So far the comparison with the PCA algorithm has only b een made in the iden tication and the v erication scenarios using nal p erformance scores and statistical analysis. The ane appro ximation algorithm tak es a step further b y not treating the face recognition algorithm as a blac k b o x. It tries to generate an ane transform that attempts to transform the eigen space of the PCA algorithm so that it matc hes the results of the face recognition algorithm as closely as p ossible. 
The affine transform will be an identity matrix if the input face recognition algorithm is the PCA algorithm. The closer the affine transform is to an identity transform, the closer the input face recognition algorithm is to the PCA algorithm. Subsequent sections deal with the error measures and techniques used to compare the similarity matrices obtained by the face recognition algorithm and the affine approximating algorithm.


CHAPTER 2

AFFINE APPROXIMATION ALGORITHM

Figure 2.1. Problem Definition. Find the matrix A such that the Euclidean distances between the transformed images, i.e. (A)(Input Images), are equal to the given distances.

This chapter (Figure 2.1) outlines the algorithm that is used to approximate the face recognition algorithm. Every face recognition algorithm can output a similarity matrix between the images in the gallery and the probe set. The affine approximation algorithm uses this similarity matrix to produce an optimal affine transformation. The affine transformation matrix is a closed form solution. This affine transformation can translate, rotate, stretch and shear the eigen space of the PCA algorithm such that, when the images are embedded in this space, the Euclidean distances between the images match the similarity scores produced by the face recognition algorithm. If the input face recognition algorithm is the PCA algorithm, then the affine transformation will be an identity matrix. The closer the affine transformation is to an identity matrix, the closer the behavior of an algorithm is to that of the PCA algorithm. In this manner any input face recognition algorithm can be compared with the baseline PCA algorithm.


The following notation will be used to describe the linear transform strategy:

1. Let x_i be the N^2 x 1 sized column vector formed by row scanning the N x N i-th image.

2. Let K denote the number of images.

3. Let d^A_ij be the distance between x_i and x_j that a given algorithm computes. These distances can be arranged as a K x K matrix D, where K is the given number of images.

4. Let the matrix A be an M x N^2 sized array that is used to linearly transform the input image vector:

    y_i = A x_i    (2.1)

The rows of the matrix A denote the axes of the reduced M-dimensional space. For a PCA based space, the rows of A will be orthogonal to each other.

5. The (squared) Euclidean distance between y_i and y_j is denoted by

    d_E(y_i, y_j) = Σ_{k=1}^{M} (y_i(k) - y_j(k))^2 = (y_i - y_j)^T (y_i - y_j)    (2.2)

Problem Definition: The matrix A, which is the affine transform, has to be determined such that

    d_E(y_i, y_j) = d^A_ij    (2.3)

Expanding the left-hand side,

    d_E(y_i, y_j) = (y_i - y_j)^T (y_i - y_j)
                  = (A x_i - A x_j)^T (A x_i - A x_j)
                  = (A (x_i - x_j))^T (A (x_i - x_j))
                  = (x_i - x_j)^T (A^T A) (x_i - x_j)    (2.4)

(As an aside, it is worth noting that if the rows of A were orthonormal (e.g. in PCA) and the size of A was N^2 x N^2, then A^T A = A A^T = I, the identity matrix. In other words, Euclidean distances would be preserved: d_E(y_i, y_j) = d_E(x_i, x_j).)
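As a quick numerical check of the identity in Eq. 2.4, the sketch below (toy sizes, random data, and numpy are assumptions of the example, not from the thesis) confirms that distances in the transformed space equal the quadratic form in the Gram matrix A^T A:

```python
import numpy as np

rng = np.random.default_rng(0)
N2, M = 16, 4                      # flattened image size and reduced dimension (toy values)
A = rng.standard_normal((M, N2))   # a hypothetical transform A
x_i = rng.standard_normal(N2)      # two "images" as row-scanned vectors
x_j = rng.standard_normal(N2)

# Left side of Eq. 2.4: squared Euclidean distance in the transformed space
y_i, y_j = A @ x_i, A @ x_j
d_left = float((y_i - y_j) @ (y_i - y_j))

# Right side of Eq. 2.4: the same distance via the quadratic form with A^T A
delta = x_i - x_j
d_right = float(delta @ (A.T @ A) @ delta)

assert np.isclose(d_left, d_right)
```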


Let:

1. B = A^T A, where B is an N^2 x N^2 sized matrix. Note that B is symmetric, i.e. B^T = B.

2. δ_ij = x_i - x_j, an N^2 x 1 sized column vector.

Using the above notation,

    d_E(y_i, y_j) = δ_ij^T B δ_ij = Σ_{k=1}^{N^2} Σ_{l=1}^{N^2} B(k,l) δ_ij(k) δ_ij(l)    (2.5)

The above double sum can be expressed as the product of two column vectors, defined as follows:

1. b is an N^2(N^2+1)/2 sized column vector formed by scanning the lower triangular entries (including the diagonal) of B. Thus,

    b(k(k-1)/2 + l) = B(k,l),  for l <= k; k = 1, ..., N^2    (2.6)

2. φ_ij is an N^2(N^2+1)/2 sized column vector such that

    φ_ij(k(k-1)/2 + l) = δ_ij(k)^2           for k = l
                       = 2 δ_ij(k) δ_ij(l)   for l < k    (2.7)

Using the above equations,

    d_E(y_i, y_j) = φ_ij^T b    (2.8)

b should be determined such that

    φ_ij^T b = d^A_ij    (2.9)

for every pair of images. These K(K-1)/2 equations can be compactly expressed in matrix notation as follows:

    Φ b = d^A    (2.10)
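The scanning of B into b (Eq. 2.6) and the companion vector of Eq. 2.7 can be sketched as follows; the helper names `lower_tri_vec` and `phi_vec` are illustrative, not from the thesis, and the check verifies that Eq. 2.5 and Eq. 2.8 agree:

```python
import numpy as np

def lower_tri_vec(B):
    """Scan the lower-triangular entries of symmetric B (incl. diagonal) into b (Eq. 2.6)."""
    n = B.shape[0]
    return np.array([B[k, l] for k in range(n) for l in range(k + 1)])

def phi_vec(delta):
    """Companion vector (Eq. 2.7): delta(k)^2 on diagonal slots, 2*delta(k)*delta(l) off them."""
    n = delta.shape[0]
    return np.array([delta[k] ** 2 if k == l else 2 * delta[k] * delta[l]
                     for k in range(n) for l in range(k + 1)])

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((3, n))
B = A.T @ A                          # symmetric, positive semi-definite
delta = rng.standard_normal(n)

# The quadratic form of Eq. 2.5 equals the dot product of the two vectors (Eq. 2.8)
assert np.isclose(delta @ B @ delta, phi_vec(delta) @ lower_tri_vec(B))
```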


where Φ is a K(K-1)/2 x N^2(N^2+1)/2 sized matrix formed by stacking the row vectors φ_ij^T, and d^A is the column vector of the given distances. In the above equation, b is unknown and can be solved for using any standard linear equation solver. The only constraint is that K >= N^2 + 1, so that the number of equations is at least equal to the number of unknowns. Given b, the matrix B can be formed, from which we would like to form A such that B = A^T A. This can be done using the eigenvectors (u_i) and eigenvalues (λ_i) of B. The matrix B is factored into U Λ U^T, where the columns of U are the eigenvectors u_i of B and Λ is a diagonal matrix formed out of the eigenvalues. Such a factorization is guaranteed to exist because B is symmetric. In fact, we can also claim that the eigenvalues are real and non-negative: symmetric matrices have real eigenvalues, from Eq. 2.5 it follows that B is positive semi-definite because distances are always greater than or equal to zero, and the eigenvalues of positive semi-definite matrices are greater than or equal to zero. Given the eigenvalue and eigenvector decomposition of B, we can choose

    A = Λ^{1/2} U^T    (2.11)

or in other words the rows of A are the scaled eigenvectors u_i. In particular, the i-th row of A will be sqrt(λ_i) u_i^T. Thus, the non-zero rows of A are determined by the number of non-zero eigenvalues of B. A is the affine approximation matrix which will attempt to duplicate the results of any input face recognition algorithm. To summarize, the steps are:

1. Form the K(K-1)/2 x N^2(N^2+1)/2 sized matrix Φ from the input images as described above. The constraint is that K >= N^2 + 1.

2. Form the column vector d^A from the given distances.

3. Find B by solving the linear equation Φ b = d^A, where b is related to B as described above.
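The factorization B = A^T A via the eigendecomposition of Eq. 2.11 might look like this in numpy (a sketch; the clipping of round-off negatives is an implementation detail, not part of the derivation):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
C = rng.standard_normal((4, n))
B = C.T @ C                          # a symmetric positive semi-definite matrix, standing in for the recovered B

# Eigendecomposition B = U Lambda U^T; eigh is the right routine for symmetric matrices
lam, U = np.linalg.eigh(B)
lam = np.clip(lam, 0.0, None)        # clip tiny negative values caused by floating-point round-off

# Eq. 2.11: A = Lambda^{1/2} U^T, i.e. the i-th row of A is sqrt(lam_i) * u_i^T
A = np.sqrt(lam)[:, None] * U.T

assert np.allclose(A.T @ A, B)       # the factorization reproduces B
```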


Figure 2.2. Flow Chart of the Affine Approximation Algorithm.


4. Find the eigenvectors (u_i) and eigenvalues (λ_i) of B.

5. Form A such that the i-th row of A is sqrt(λ_i) u_i^T.

One of the concerns with this approach is the requirement that the number of images available (K) be greater than the size of the image (N^2), the dimension of the pixel-based space of the raw images. This requirement can be relaxed if we first perform a principal component analysis (PCA), which preserves Euclidean distances, on the original pixel-based image space, to arrive at a smaller subspace, say of size P dimensions with P << N^2. In that case the requirement is that the number of images available be larger than this number of dimensions. The overall steps of the algorithm are illustrated in Figure 2.2. The estimated matrices are T_PCA, which captures the rigid transform, and A, which captures the non-rigid part. The overall affine transform is A T_PCA. If A is the identity matrix, then we get the plain PCA algorithm. The affine transform matrix shears and stretches the eigen space of the algorithm so that it can embed the images such that the Euclidean distance between them equals the original similarity scores.
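Putting the summarized steps together, a minimal end-to-end sketch follows. The toy data, the dimensions, and the hidden random transform (which stands in for an arbitrary face recognition algorithm's similarity scores) are all assumptions of the example; the vectors here play the role of images already reduced by the PCA step:

```python
import numpy as np

rng = np.random.default_rng(3)
P, K = 4, 30                          # reduced dimension and number of images; need K >= P + 1
X = rng.standard_normal((K, P))       # toy "images", one per row, already projected by T_PCA

# A hidden ground-truth transform produces the target squared distances d^A_ij
T = rng.standard_normal((P, P))
pairs = [(i, j) for i in range(K) for j in range(i + 1, K)]
d_A = np.array([np.sum((T @ (X[i] - X[j])) ** 2) for i, j in pairs])

# Steps 1-2: build Phi row by row (Eq. 2.7) and the target distance vector
def phi_vec(delta):
    n = len(delta)
    return np.array([delta[k] ** 2 if k == l else 2 * delta[k] * delta[l]
                     for k in range(n) for l in range(k + 1)])

Phi = np.array([phi_vec(X[i] - X[j]) for i, j in pairs])

# Step 3: solve Phi b = d^A for b and unpack it into the symmetric matrix B
b, *_ = np.linalg.lstsq(Phi, d_A, rcond=None)
B = np.zeros((P, P))
idx = 0
for k in range(P):
    for l in range(k + 1):
        B[k, l] = B[l, k] = b[idx]
        idx += 1

# Steps 4-5: eigendecompose B and form A with rows sqrt(lam_i) * u_i^T
lam, U = np.linalg.eigh(B)
A = np.sqrt(np.clip(lam, 0, None))[:, None] * U.T

# The embedded Euclidean distances reproduce the given similarity scores
Y = X @ A.T
d_fit = np.array([np.sum((Y[i] - Y[j]) ** 2) for i, j in pairs])
assert np.allclose(d_fit, d_A)
```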


CHAPTER 3

TEST ALGORITHMS

Three standard algorithms were chosen to test the performance of the affine approximation strategy. The algorithms chosen were:

1. Principal Component Analysis (PCA) algorithm [5].

2. Principal Component Analysis (PCA) coupled with the Linear Discriminant Analysis (LDA) algorithm [1].

3. Bayesian intrapersonal and extrapersonal classifier [3].

An implementation of the algorithms was available from [29]. A brief description of each of these algorithms follows.

3.1 Principal Component Analysis (PCA) Algorithm

Kirby et al. developed a technique to represent faces by using the Karhunen-Loève projection [4]. This technique is known as Principal Component Analysis (PCA). Essentially, PCA is a statistical dimensionality reduction method which produces the optimal linear least-squared decomposition of a training set. This technique was extended by Turk et al. for the task of face recognition [5]. In the PCA algorithm, first each training image is unrolled into a vector of n pixel values. Then the mean image of the training set is calculated and subtracted from each training image, so the resulting training images are "mean centered". All the mean centered images are placed as the columns of a matrix, say M. The covariance matrix is then

    Ω = M M^T    (3.1)


The covariance matrix characterizes the distribution of the images in R^n. The eigenvectors of the covariance matrix form the subspace in which the gallery and the probe images will be embedded for comparison. The eigenvectors are normalized so that they become orthonormal. The eigenvalue of each eigenvector is proportional to the amount of variance that eigenvector represents: the eigenvector with the highest eigenvalue corresponds to the direction of maximum variance, and so on. These eigenvectors are the Principal Components of the training images. During the testing phase, the gallery and the probe images are mean centered and projected onto the eigen space. The distance between the gallery and the probe images in the eigen space represents their similarity score. Different distance measures can be used to compute the distances between the gallery and the probe images. Two distance measures were chosen to test the affine approximation algorithm: the Euclidean distance measure and the cosine distance measure. As anticipated, the affine transform matrix is an identity matrix when the distances between the gallery and the probe images are computed using the Euclidean distance measure.

3.2 Linear Discriminant Analysis (LDA) Algorithm

The Linear Discriminant Analysis algorithm was developed by Zhao and Chellappa [1]. The LDA algorithm is based on Fisher's Linear Discriminants. Essentially, LDA tries to produce an optimal linear discriminant function that emphasizes the differences between classes and minimizes the differences within classes. In face recognition each class refers to the set of images of one subject, so while training the LDA algorithm more than one sample of each subject is required. Initially, PCA is performed to reduce the dimensionality of the feature vectors.
Then LDA is performed so that the class-distinguishing features are preserved. The PCA and LDA basis vectors are multiplied to produce the LDA transformation matrix. The gallery and the probe images are projected onto the LDA space and are compared as before.
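A compact sketch of the PCA step of Section 3.1 followed by the LDA step above, on toy data; the dimensionalities, the synthetic labels, and the simplified Fisher criterion (inverse within-class scatter times between-class scatter) are illustrative assumptions, not the exact implementation of [1]:

```python
import numpy as np

rng = np.random.default_rng(4)
n_subjects, per_subject, n_pix = 5, 4, 50
labels = np.repeat(np.arange(n_subjects), per_subject)
X = rng.standard_normal((n_subjects * per_subject, n_pix)) + labels[:, None]  # toy images, one row each

# PCA step (Section 3.1): mean-center and keep the top principal components
mean = X.mean(axis=0)
Xc = X - mean
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)   # rows of Vt are the principal components
W_pca = Vt[:10]                                     # keep 10 dimensions (a toy choice)
Z = Xc @ W_pca.T                                    # projected training images

# LDA step (Section 3.2): Fisher discriminants in the PCA space
Sw = np.zeros((10, 10))
Sb = np.zeros((10, 10))
for c in range(n_subjects):
    Zc = Z[labels == c]
    mc = Zc.mean(axis=0)
    Sw += (Zc - mc).T @ (Zc - mc)                                           # within-class scatter
    Sb += len(Zc) * np.outer(mc - Z.mean(axis=0), mc - Z.mean(axis=0))      # between-class scatter
evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
order = np.argsort(-evals.real)
W_lda = evecs.real[:, order[:n_subjects - 1]].T     # at most C-1 discriminant directions

# The PCA and LDA bases multiplied give the combined transformation matrix
W = W_lda @ W_pca
probe = (X[0] - mean) @ W.T                         # project a probe image for comparison
assert probe.shape == (n_subjects - 1,)
```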


3.3 Bayesian Intrapersonal and Extrapersonal Classifier

The Bayesian intrapersonal and extrapersonal classifier was developed by Moghaddam and Pentland [3]. This algorithm uses a probabilistic measure of similarity for comparison. There are two variants of the algorithm:

1. Maximum Likelihood (ML) Classifier.

2. Maximum a Posteriori (MAP) Classifier.

This algorithm tries to model two mutually exclusive classes of differences: intra-personal and extra-personal differences. The intra-personal differences are variations in the appearance of the same individual due to different expressions, lighting changes, etc. The extra-personal differences are the variations in appearance between different individuals. The MAP classifier uses both the intra-personal and the extra-personal classes, whereas the ML classifier uses only the intra-personal differences. Results show that the ML classifier performs just as well as the MAP classifier [3]. Also, the ML classifier is computationally less expensive. In this study the ML classifier was used for comparison with the affine approximation algorithm.
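A heavily simplified sketch of the ML variant's idea: fit a zero-mean Gaussian to intra-personal difference vectors and score an image pair by the log-likelihood of its difference. The toy data and the plain full-covariance Gaussian are assumptions of this example; the actual classifier of [3] uses an eigenspace factorization of the density that is omitted here:

```python
import numpy as np

rng = np.random.default_rng(5)
d = 20
# Toy intra-personal training differences: image pairs of the same subject
intra_diffs = rng.standard_normal((200, d)) * 0.5

# Fit a zero-mean Gaussian to the intra-personal difference class
cov = intra_diffs.T @ intra_diffs / len(intra_diffs)
cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(d))      # small ridge for numerical stability

def ml_similarity(img1, img2):
    """ML similarity: log-likelihood (up to a constant) of the difference under the intra-personal model."""
    delta = img1 - img2
    return -0.5 * delta @ cov_inv @ delta            # higher score = more likely the same person

same = ml_similarity(np.zeros(d), 0.1 * np.ones(d))  # a small difference
diff = ml_similarity(np.zeros(d), 3.0 * np.ones(d))  # a large difference
assert same > diff                                   # small differences score higher
```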


CHAPTER 4

EXPERIMENTAL SETUP

This chapter describes the data set and the training scenarios that were used to compare the face recognition algorithm with the affine approximation algorithm.

4.1 Data Description

Three disjoint image sets C1, C2 and C3 were selected from the FERET database [16, 18]. All three sets consisted of images of the type fa (regular facial expression) and fb (alternate facial expression of the subject, taken with the same lighting conditions). Sets C1 and C2 consisted of 100 images of 25 subjects, i.e. 4 images per subject. Set C3 consisted of 600 images of 300 subjects, i.e. 2 images per subject. Sets C1 and C2 were used to train the face recognition algorithms and the affine approximation algorithm, while set C3 was used as a validation set to compare the similarity matrices generated by the face recognition algorithm and the affine approximation algorithm. The validation set C3 consisted of images of type fa and fb; the corresponding fb image for each fa image was taken on the same day. All the images were preprocessed using the normalization code developed at NIST. The images were spatially normalized such that the eyes were always placed at fixed points in the imagery, based upon a ground truth file of eye coordinates provided with the FERET data. The images were cropped to a standard size of 150 by 130 pixels. Pixels not lying within an oval shaped face region were masked out. Then the pixel values were histogram equalized, and then shifted and scaled such that the mean value of all the pixels in the face region is zero and the standard deviation is one.

4.2 Training Scenarios

Two training scenarios for the algorithms were considered:


Figure 4.1. Training Scenario 1: Block Diagram. Face recognition algorithm and affine approximation algorithm trained on the same image set.

Figure 4.2. Training Scenario 2: Block Diagram. Face recognition algorithm and affine approximation algorithm trained on different image sets.


1. Both the face recognition algorithm and the affine approximation algorithm were trained using the same set (either C1 or C2).

2. The face recognition algorithm was trained on set C1 and the affine approximation algorithm was trained on set C2.

The Linear Discriminant Analysis classifier algorithm and the Bayesian interpersonal and intrapersonal classifier algorithm both first perform a PCA to reduce the dimensionality of the features. In the first training scenario the dimensionality of the PCA step in the affine approximation algorithm was maintained constant at 60 dimensions. The results from the first training scenario significantly improved when the dimensionality of the affine approximation was gradually increased to match the dimensionality of the PCA step in the face recognition algorithms. The best results were obtained when the dimensionality of the affine approximation was made equal to the dimensionality of the PCA space. The results did not show significant difference when the dimensionality of the affine approximation was made greater than the dimensionality of the PCA space. So in the second training scenario the dimensionality of the PCA step was always made equal to the dimensionality of the affine approximation algorithm. The second scenario is a more stringent way of comparing a face recognition algorithm with the PCA algorithm, since it seeks to compare how close an already trained face recognition algorithm is to the PCA algorithm. In the first scenario both the face recognition algorithm and the affine approximation algorithm are trained on the same image set; in the second training scenario they are trained on different image sets.


CHAPTER 5

ANALYSIS OF RESULTS

This chapter looks into some of the techniques that have been used to compare the similarity matrices generated by the face recognition algorithm and the affine approximation algorithm, and then presents the results. Both the face recognition algorithm and the affine approximation algorithm output a similarity matrix. The similarity matrix indicates the similarity score between each pair of images in the test set C3. The raw similarity scores produced by the face recognition algorithm and the affine approximation algorithm are compared directly by visualizing the distribution of the values in the similarity matrices as images, by studying the histogram of the difference between the similarity matrices generated by the affine approximation algorithm and the face recognition algorithm, and by using standard gross error measures like the average RMS error and Stress-1. Alternatively, the performance of the face recognition algorithm and the affine approximation algorithm in the identification and the verification scenarios is also used as an index to judge the performance of the two algorithms. When the input face recognition algorithm is the PCA face recognition algorithm, the affine transformation matrix will be an identity matrix. So the closer the affine approximation matrix is to an identity matrix, the closer the face recognition algorithm is to the PCA algorithm. Techniques are therefore also used to visualize the affine transformation matrix. The subsequent sections elaborate on these techniques.

5.1 Visualization of Distance Matrices

In Figures 5.1 to 5.4 the distance matrices generated by the affine approximation algorithm and the face recognition algorithm are directly compared. The individual distance matrices, along with the error matrices, are visualized as images, with red denoting larger values than blue, mapped according to the legend bar alongside each image. The histogram of the difference matrices clearly shows that the distance matrix produced by the affine approximation algorithm closely matches the distance matrix produced by the face recognition algorithm. This is more pronounced in the first training scenario than in the second, probably because of the difference in the training sets. In training scenario 1, the dimensionality of the PCA step in all three algorithms is made 60. In the PCA face recognition algorithm using the Euclidean distance measure, when the dimensionality of the affine space becomes equal to the dimensionality of the PCA space (i.e. 60 dimensions), the similarity matrix produced by the affine approximation algorithm becomes identical to the similarity matrix produced by the PCA algorithm (see Figure 5.1). In the PCA face recognition algorithm using the cosine distance measure, the histogram shows that there is a significant difference between the distance matrix generated by the affine approximation algorithm and the PCA algorithm (see Figure 5.2). In the case of the LDA algorithm, the best results are obtained when the dimensionality of the affine space becomes equal to the dimensionality of the LDA space (i.e. 24 dimensions). The Bayesian algorithm does not use an explicit multidimensional space to classify the images, but the results become better as the dimensionality of the affine space is gradually increased to match the dimensionality of the PCA space. Significant differences in the results are observed between training scenario 1 and training scenario 2 for the PCA and the LDA face recognition algorithms. But for the Bayesian face recognition algorithm, changes in the training set do not seem to make much difference (see Figure 5.4).
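The difference-matrix and histogram comparison used in these figures can be sketched as follows; the synthetic matrices and the noise level are assumptions of the example. A close approximation yields a difference histogram tightly peaked at zero:

```python
import numpy as np

rng = np.random.default_rng(9)
D_fr = np.abs(rng.standard_normal((600, 600)))           # face recognition similarity matrix (toy)
D_affine = D_fr + 0.1 * rng.standard_normal((600, 600))  # affine approximation's matrix (toy, close fit)

diff = D_affine - D_fr                    # the difference matrix shown in panel (c)
counts, edges = np.histogram(diff, bins=50)  # the histogram shown in panel (d)

# For a good approximation the differences are centered on zero
assert abs(diff.mean()) < 0.01
assert abs(edges[counts.argmax()]) < 0.05    # the modal bin sits near zero
```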
5.2 Gross Error Measures

The normalized average RMS error and Stress-1 [32], a standard error measure used in multidimensional scaling (MDS), were used to quantify the difference between the similarity matrix generated by the face recognition algorithm and the distance matrix generated by the affine approximation algorithm.

Figure 5.1. Visualization of Similarity Matrix for the PCA Algorithm Using the Euclidean Distance Measure (Training Scenarios 1 and 2). Values are in generic units. (a) Similarity matrix produced by the PCA algorithm. (b) Similarity matrix produced by the affine approximation algorithm. (c) Difference between the two similarity matrices. (d) Histogram of the difference matrix.

Figure 5.2. Visualization of Similarity Matrix for the PCA Algorithm Using the Cosine Distance Measure (Training Scenarios 1 and 2). Values are in generic units. (a) Similarity matrix produced by the PCA algorithm. (b) Similarity matrix produced by the affine approximation algorithm. (c) Difference between the two similarity matrices. (d) Histogram of the difference matrix.

Figure 5.3. Visualization of Similarity Matrix for the LDA Algorithm (Training Scenarios 1 and 2). Values are in generic units. (a) Similarity matrix produced by the LDA algorithm. (b) Similarity matrix produced by the affine approximation algorithm. (c) Difference between the two similarity matrices. (d) Histogram of the difference matrix.

The normalized average RMS error is defined as follows:

    (1 / max(D)) sqrt( (1 / N^2) Σ_{i,j} (D_ij - D'_ij)^2 )    (5.1)

where D is the distance matrix obtained using the face recognition algorithm, D' is the distance matrix obtained using the affine approximation algorithm, and max(D) is the maximum similarity score of the distance matrix D. Stress-1 is defined as follows:

    Stress-1 = sqrt( Σ_{i,j} (D_ij - D'_ij)^2 / Σ_{i,j} D_ij^2 )    (5.2)

Figure 5.5 shows the plot of the average RMS error and Figure 5.6 shows the stress plot for training scenario 1. From Figure 5.5 and Figure 5.6 it can be observed that once the dimensionality of the affine approximation algorithm becomes equal to the dimensionality of the face space used by the PCA (Euclidean distance measure) and the LDA algorithms, the error values become constant. No such phenomenon is observed for the Bayesian intrapersonal and extrapersonal classifier algorithm. The error values for the PCA face recognition algorithm using the cosine distance measure are the highest, and they keep increasing as the dimensionality of the affine space is increased. This is because the image of the distance matrix generated by the PCA algorithm using the cosine distance measure shows that the algorithm tends to cluster the images into groups, whereas the affine approximation algorithm tends to distribute the images more evenly in the affine space (see Figure 6.2). But the CMS and the ROC curves show that the performance of the affine approximation algorithm is better than that of the PCA face recognition algorithm using the cosine distance measure, and this performance improves as the dimensionality of the affine space is increased. Figure 5.7 and Figure 5.8 show the average RMS error values and Stress-1 values for training scenario 2. In this case, the dimensionality of the affine space is made equal to the dimensionality of the PCA space in each of the face recognition algorithms.
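Both gross error measures are direct to compute; a sketch follows, with the N^2 in Eq. 5.1 interpreted as the number of entries of the distance matrix (an assumption, since the thesis does not gloss that symbol), and a toy symmetric matrix standing in for real similarity scores:

```python
import numpy as np

def avg_rms_error(D, D_prime):
    """Normalized average RMS error between two distance matrices (Eq. 5.1)."""
    n = D.shape[0]
    return np.sqrt(np.sum((D - D_prime) ** 2) / n ** 2) / D.max()

def stress1(D, D_prime):
    """Stress-1 (Eq. 5.2): residual sum of squares normalized by the given distances."""
    return np.sqrt(np.sum((D - D_prime) ** 2) / np.sum(D ** 2))

rng = np.random.default_rng(6)
D = np.abs(rng.standard_normal((8, 8)))
D = (D + D.T) / 2                     # symmetric toy distance matrix

assert avg_rms_error(D, D) == 0.0     # identical matrices give zero error
assert stress1(D, D) == 0.0
assert stress1(D, np.zeros_like(D)) == 1.0  # an all-zero fit gives maximal Stress-1
```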


Figure 5.4. Visualization of Similarity Matrix for the Bayesian Algorithm (Training Scenarios 1 and 2). Values are in generic units. (a) Similarity matrix produced by the Bayesian algorithm. (b) Similarity matrix produced by the affine approximation algorithm. (c) Difference between the two similarity matrices. (d) Histogram of the difference matrix. The difference between training scenario 1 and training scenario 2 does not seem to be significant.


Figure 5.5. Training Scenario 1: Plot of Average RMS Error vs. Number of Dimensions of the Affine Space (PCA Euclidean, PCA Cosine, LDA and Bayesian). Dimensionality of the PCA step in all three algorithms was set to 60 dimensions.

Figure 5.6. Training Scenario 1: Stress Plot (Stress-1 vs. Number of Dimensions of the Affine Space). Dimensionality of the PCA step in all three algorithms was set to 60 dimensions.


Figure 5.7. Training Scenario 2: Plot of Average RMS Error vs. Number of Dimensions of the Affine Space (PCA Euclidean, PCA Cosine, LDA and Bayesian). Dimensionality of the PCA step in all three algorithms was made equal to the dimensionality of the affine space.

Figure 5.8. Training Scenario 2: Stress Plot (Stress-1 vs. Number of Dimensions of the Affine Space). Dimensionality of the PCA step in all three algorithms was made equal to the dimensionality of the affine space.


5.3 Analysis of the Affine Approximation Algorithm

The affine approximation algorithm uses the similarity matrix generated by a face recognition algorithm to come up with an optimal affine transformation. The affine transformation will try to stretch and shear the eigen space of the PCA algorithm so that the images can be embedded in the space such that the Euclidean distance between the embedded images matches the similarity scores generated by the face recognition algorithm. If the input face recognition algorithm is the PCA algorithm, then we would expect the affine transformation to be an identity matrix. This is seen in Figure 5.9. It is worthwhile to analyze the affine transformation matrix itself. The affine transformation matrix has been visualized as an image with the rows and columns permuted so that the diagonal elements contain the most dominant element. Taking a cue from discrete dynamical systems, the eigenvalues of A were computed. If an eigenvalue λ_i < 1, the affine space is being compressed along the i-th dimension; if λ_i > 1, the affine space is being stretched along the i-th dimension. So the overall distribution of the eigenvalues gives an idea of what the affine transformation is doing to the eigen space: whether it is being stretched, sheared or compressed. Figures 5.9 to 5.12 show the affine transform for all four algorithms for training scenario 1, and Figures 5.13 to 5.16 show the affine transform for training scenario 2.
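This eigenvalue reading of the affine transform can be sketched as follows; the random matrix A is hypothetical, and complex eigenvalues are judged by magnitude, an interpretation this example assumes:

```python
import numpy as np

rng = np.random.default_rng(7)
# A hypothetical estimated affine transformation matrix: roughly diagonal with a small perturbation
A = np.diag(rng.uniform(0.2, 1.8, size=10)) + 0.05 * rng.standard_normal((10, 10))

eigvals = np.linalg.eigvals(A)
mags = np.abs(eigvals)                # complex eigenvalues are judged by their magnitude

compressed = np.sum(mags < 1)         # dimensions along which the affine space shrinks
stretched = np.sum(mags > 1)          # dimensions along which the affine space expands
identity_like = np.allclose(A, np.eye(10), atol=1e-8)  # the PCA/Euclidean case would be True

assert compressed + stretched <= 10
assert not identity_like              # this toy A is not the identity
```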


Figure 5.9. Training Scenario 1: Visualization of Affine Transformation Matrix for the PCA Algorithm Using the Euclidean Distance Measure. Values are in generic units. (a) Affine transformation matrix with the rows and columns rearranged so that the diagonal elements contain the largest value. In this case the affine transformation matrix is an identity matrix. (b) The plot of the eigenvalues of the affine transformation matrix shows that the eigen space has not been stretched or sheared.

Figure 5.10. Training Scenario 1: Visualization of Affine Transformation Matrix for the PCA Algorithm Using the Cosine Distance Measure. Values are in generic units. (a) Affine transformation matrix with the rows and columns rearranged so that the diagonal elements contain the largest value. (b) The plot of the eigenvalues of the affine transformation matrix shows that the eigen space has been compressed by the affine transform.


Figure 5.11. Training Scenario 1: Visualization of Affine Transformation Matrix for the LDA Algorithm. Values are in generic units. (a) Affine transformation matrix with the rows and columns rearranged so that the diagonal elements contain the largest value. (b) The plot of the eigenvalues of the affine transformation matrix shows that the eigen space has been compressed by the affine transform.

Figure 5.12. Training Scenario 1: Visualization of Affine Transformation Matrix for the Bayesian Algorithm. Values are in generic units. (a) Affine transformation matrix with the rows and columns rearranged so that the diagonal elements contain the largest value. (b) The plot of the eigenvalues of the affine transformation matrix shows that the eigen space has been compressed by the affine transform.


Figure 5.13. Training Scenario 2: Visualization of Affine Transformation Matrix for the PCA Algorithm Using the Euclidean Distance Measure. Values are in generic units. (a) Affine transformation matrix with the rows and columns rearranged so that the diagonal elements contain the largest value. (b) The plot of the eigenvalues of the affine transformation matrix shows that the eigen space has been compressed by the affine transform.

Figure 5.14. Training Scenario 2: Visualization of Affine Transformation Matrix for the PCA Algorithm Using the Cosine Distance Measure. Values are in generic units. (a) Affine transformation matrix with the rows and columns rearranged so that the diagonal elements contain the largest value. (b) The plot of the eigenvalues of the affine transformation matrix shows that the eigen space has been compressed by the affine transform.


Figure 5.15. Training Scenario 2: Visualization of Affine Transformation Matrix for the LDA Algorithm. Values are in generic units. (a) Affine transformation matrix with the rows and columns rearranged so that the diagonal elements contain the largest value. (b) The plot of the eigenvalues of the affine transformation matrix shows that the eigen space has been compressed by the affine transform.

Figure 5.16. Training Scenario 2: Visualization of Affine Transformation Matrix for the Bayesian Algorithm. Values are in generic units. (a) Affine transformation matrix with the rows and columns rearranged so that the diagonal elements contain the largest value. (b) The plot of the eigenvalues of the affine transformation matrix shows that the eigen space has been compressed by the affine transform.

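The eigenvalue plots in panels (b) above can be reproduced from any estimated affine matrix: if every eigenvalue has magnitude below one, the transform contracts the eigen space. A minimal sketch, using a hypothetical 3x3 matrix standing in for the estimated transform:

```python
import numpy as np

def eigenvalue_compression(A):
    """Return the eigenvalues of an affine matrix A and whether every
    |eigenvalue| < 1, i.e. whether the map contracts the eigen space."""
    eigvals = np.linalg.eigvals(A)
    return eigvals, bool(np.all(np.abs(eigvals) < 1.0))

# Hypothetical affine matrix (not the thesis data).
A = np.array([[0.5, 0.1, 0.0],
              [0.0, 0.4, 0.2],
              [0.1, 0.0, 0.3]])
vals, compressed = eigenvalue_compression(A)
```

Plotting the real and imaginary parts of `vals`, as in the panels above, shows how far inside the unit circle the spectrum sits.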

5.4 Eigen Values of B Matrix

The affine approximation matrix A is obtained by the eigenvector decomposition of the B matrix, so it is worthwhile to study how the eigenvalues of the B matrix decrease as the dimensionality of the affine space is increased. Figures 5.17 to 5.24 plot the eigenvalues of the B matrix. In training scenario 1 the eigenvalues of the B matrix are nearly unity while the dimensionality is less than 60, and after crossing 60 the eigenvalues become zero. In the LDA algorithm the eigenvalues become zero once the dimensionality of the affine space crosses the dimensionality of the LDA space, i.e., 24 dimensions. For the other algorithms the eigenvalues of the B matrix become nearly zero as the dimensionality is increased further. Thus the eigenvalues of B can be used for determining the optimal dimensionality of the affine space.

Figure 5.17. Training Scenario 1: Eigen Values for the PCA Algorithm Using the Euclidean Distance Measure. The eigenvalues are nearly unity until the dimensionality reaches 60 and become zero after that.

Figure 5.18. Training Scenario 1: Eigen Values for the PCA Algorithm Using the Cosine Distance Measure.
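The dimensionality-selection rule just described (keep the dimensions whose B-matrix eigenvalues stay well above zero) can be sketched as follows; `B` here is a small hypothetical positive semi-definite matrix, not the one estimated from the FERET data.

```python
import numpy as np

def affine_dimensionality(B, tol=1e-6):
    """Eigendecompose the symmetric matrix B and count the eigenvalues
    that are significantly above zero; this count serves as the optimal
    dimensionality of the affine space."""
    eigvals = np.linalg.eigvalsh(B)   # real eigenvalues, ascending order
    return int(np.sum(eigvals > tol))

# Hypothetical B with an effective rank of 2.
X = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])
B = X @ X.T                           # 3x3 positive semi-definite, rank 2
dim = affine_dimensionality(B)
```

The tolerance absorbs the numerically tiny eigenvalues that should be treated as zero.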

Figure 5.19. Training Scenario 1: Eigen Values for the LDA Algorithm. The eigenvalues become zero when the dimensionality crosses the dimensionality of the LDA space.

Figure 5.20. Training Scenario 1: Eigen Values for the Bayesian Algorithm.

Figure 5.21. Training Scenario 2: Eigen Values for the PCA Algorithm Using the Euclidean Distance Measure.

Figure 5.22. Training Scenario 2: Eigen Values for the PCA Algorithm Using the Cosine Distance Measure.

Figure 5.23. Training Scenario 2: Eigen Values for the LDA Algorithm.

Figure 5.24. Training Scenario 2: Eigen Values for the Bayesian Algorithm.

5.5 Performance in Identification and Verification Scenarios

The previous subsection compared the raw similarity scores. However, it is also necessary to look into the performance of a face recognition algorithm in the identification and verification scenarios, so the distance matrices from the affine approximation algorithm and the face recognition algorithms were tested in these scenarios as well. The validation set C3 was partitioned into two disjoint subsets to form the gallery and probe sets. The gallery set consisted of all the images of type fa (normal expression) in the validation set C3, and the probe set consisted of all the images of type fb (different expression, same lighting, same day) in the validation set. The performance of the affine approximation algorithm and the face recognition algorithms in the identification and verification scenarios was characterized using the standard Cumulative Match Score (CMS) curve and the Receiver Operating Characteristic (ROC) curve. Figures 5.25, 5.27, 5.29 and 5.31 show the CMS curves for the three algorithms for training scenario 1, while Figures 5.26, 5.28, 5.30 and 5.32 show the ROC curves. From the CMS and ROC curves it is evident that as the dimensionality of the affine approximation algorithm is increased, the CMS curve of the affine approximation algorithm moves closer to that of the face recognition algorithm. The ROC curves also show that the affine approximation algorithm approximates the face recognition algorithms very well. Figures 5.33, 5.35, 5.37 and 5.39 show the CMS curves for training scenario 2. The CMS curves for the second training scenario are not as good as those for the first training scenario because the algorithms are trained on different sets. Figures 5.34, 5.36, 5.38 and 5.40 show the ROC curves for the second training scenario. The ROC curves of the affine approximation algorithm and the face recognition algorithms for the second training scenario are very close.

5.6 McNemar's Test

Beveridge et al. [25] demonstrated the appropriateness of using McNemar's test for comparing two identification rates. McNemar's test is very suitable for paired data. If the

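The CMC curves shown in the figures below can be computed from a probe-by-gallery distance matrix; this minimal sketch uses made-up distances and a hypothetical `cmc` helper, not the thesis evaluation code (the ROC curve is built analogously by sweeping a distance threshold and counting verifications and false alarms).

```python
import numpy as np

def cmc(dist, true_gallery_index):
    """Cumulative match characteristic: fraction of probes whose correct
    gallery entry appears within the top k ranks, for every k.
    dist has shape (n_probes, n_gallery); smaller distance = better match."""
    ranks = np.argsort(dist, axis=1)          # best match first
    n_probes, n_gallery = dist.shape
    hits = np.zeros(n_gallery)
    for p in range(n_probes):
        rank_of_true = np.where(ranks[p] == true_gallery_index[p])[0][0]
        hits[rank_of_true:] += 1              # a hit at rank r counts for all k >= r
    return hits / n_probes

# Two probes, three gallery entries; probe i's true match is gallery entry i.
dist = np.array([[0.1, 0.9, 0.8],
                 [0.7, 0.6, 0.2]])
curve = cmc(dist, true_gallery_index=[0, 1])
```

Here probe 0 matches at rank 1 and probe 1 at rank 2, so the curve starts at 0.5 and reaches 1.0.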

Figure 5.25. Training Scenario 1: CMC Curve for PCA Algorithm Using the Euclidean Distance Measure.

Figure 5.26. Training Scenario 1: ROC Curve for PCA Algorithm Using the Euclidean Distance Measure.

Figure 5.27. Training Scenario 1: CMC Curve for PCA Algorithm Using the Cosine Distance Measure.

Figure 5.28. Training Scenario 1: ROC Curve for PCA Algorithm Using the Cosine Distance Measure.

Figure 5.29. Training Scenario 1: CMC Curve for LDA Algorithm.

Figure 5.30. Training Scenario 1: ROC Curve for LDA Algorithm.

Figure 5.31. Training Scenario 1: CMC Curve for Bayesian Algorithm.

Figure 5.32. Training Scenario 1: ROC Curve for Bayesian Algorithm.

Figure 5.33. Training Scenario 2: CMC Curve for PCA Algorithm Using the Euclidean Distance Measure.

Figure 5.34. Training Scenario 2: ROC Curve for PCA Algorithm Using the Euclidean Distance Measure.

Figure 5.35. Training Scenario 2: CMC Curve for PCA Algorithm Using the Cosine Distance Measure.

Figure 5.36. Training Scenario 2: ROC Curve for PCA Algorithm Using the Cosine Distance Measure.

Figure 5.37. Training Scenario 2: CMC Curve for LDA Algorithm.

Figure 5.38. Training Scenario 2: ROC Curve for LDA Algorithm.

Figure 5.39. Training Scenario 2: CMC Curve for Bayesian Algorithm.

Figure 5.40. Training Scenario 2: ROC Curve for Bayesian Algorithm.

gallery and the probe set used for testing two algorithms are identical, then McNemar's test can be applied to study whether the differences in performance between the algorithms are statistically significant. In such a situation there may be four outcomes for each comparison:

1. Both algorithms succeed (SS).
2. Both algorithms fail (FF).
3. Algorithm A succeeds but algorithm B fails (SF).
4. Algorithm A fails but algorithm B succeeds (FS).

McNemar's test discards the first two outcomes, SS and FF. The null hypothesis H_0 is that the probability of observing SF is equal to the probability of observing FS; the alternative hypothesis is that the two probabilities are not equal. Let n_{SF} denote the number of times SF is observed and n_{FS} the number of times FS is observed. Then

P[n_{SF}] = P[n_{FS}] = \sum_{i=0}^{n_{FS}} \frac{n!}{i!\,(n-i)!}\, 0.5^{n},   (5.3)

where n = n_{SF} + n_{FS}. This probability is the p-value for accepting the null hypothesis H_0. Table 5.1 shows the results of McNemar's test for the first training scenario, and Tables 5.2 and 5.3 show the results for the second training scenario. In the first training scenario, when the dimensionality of the affine approximation algorithm is less than the dimensionality of the eigen space or the LDA space, the null hypothesis H_0 becomes true only at higher ranks. Such behavior can also be observed in the case of the Bayesian algorithm, even though it does not use a multidimensional space to classify the probe and gallery images. Once the dimensionality of the affine approximation algorithm becomes equal to that of the eigen space or the LDA space, H_0 becomes true at rank 1. However, in the second training scenario the differences between P_{SF} and P_{FS} for all the algorithms except the PCA face recognition algorithm using the cosine distance measure become insignificant only at higher ranks. This is probably because the algorithms are trained on two different sets.
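Equation 5.3 can be evaluated directly. The sketch below implements the formula as written, a binomial sum with success probability 0.5 over the discordant pairs, with illustrative SF/FS counts; it is not the code used to produce the tables that follow.

```python
from math import comb

def mcnemar_p(n_sf, n_fs):
    """p-value of Eq. 5.3: sum_{i=0}^{n_FS} C(n, i) * 0.5**n,
    with n = n_SF + n_FS.  The SS and FF outcomes are discarded."""
    n = n_sf + n_fs
    return sum(comb(n, i) for i in range(n_fs + 1)) * 0.5 ** n

# Illustrative counts: 7 SF outcomes and 1 FS outcome.
p = mcnemar_p(7, 1)
```

With these counts the sum is (C(8,0) + C(8,1)) / 2^8 = 9/256, a small p-value, so the null hypothesis of equal SF and FS probabilities would be doubtful.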

Table 5.1. McNemar's Test for Training Scenario 1 at Rank 1. SF denotes the number of times the face recognition algorithm succeeds in recognizing the image but the affine approximation algorithm fails. FS denotes the number of times the affine approximation algorithm succeeds in recognizing the image but the face recognition algorithm fails. RRF denotes the recognition rate of the face recognition algorithm and RRA the recognition rate of the affine approximation algorithm.

Algorithm         SF   FS   p-value          RRF      RRA
PCA (Euclidean)    0    0   not applicable   72%      72%
PCA (Cosine)      12   12   0.8383           69.67%   69.67%
LDA                7    1   0.0771           79.33%   77.33%
Bayesian          14    8   0.2864           83%      81%

Table 5.2. McNemar's Test for Training Scenario 2 at Rank 1. SF denotes the number of times the face recognition algorithm succeeds in recognizing the image but the affine approximation algorithm fails. FS denotes the number of times the affine approximation algorithm succeeds in recognizing the image but the face recognition algorithm fails. RRF denotes the recognition rate of the face recognition algorithm and RRA the recognition rate of the affine approximation algorithm.

Algorithm         SF   FS   p-value   RRF      RRA
PCA (Euclidean)   25    1   0.0001    72.67%   64.67%
PCA (Cosine)      13   12   1.0       71.67%   71.33%
LDA               28   11   0.0104    76.67%   71%
Bayesian          21    5   0.0033    83%      77.67%

Table 5.3. McNemar's Test for Training Scenario 2. The rank column denotes the rank from which the difference becomes insignificant. RRF denotes the recognition rate of the face recognition algorithm and RRA the recognition rate of the affine approximation algorithm at that rank.

Algorithm         Rank   SF   FS   p-value   RRF      RRA
PCA (Euclidean)      7   12    5   0.1456    83.67%   81.3%
PCA (Cosine)         1   13   12   1.0       71.67%   71.33%
LDA                  4   12    2   0.0704    88.3%    85%
Bayesian            11    7    2   0.1824    96.3%    94.67%

Figure 5.41. Eigen Faces of Set C1.

Figure 5.42. Eigen Faces of Set C2.

5.7 Difference Caused by Training Scenarios

All the results show that the performance in the first training scenario is always better than in the second. In the first training scenario both algorithms are trained with the same data set, while in the second they are trained with different data sets. Figure 5.41 and Figure 5.42 show the top eigen faces of training set C1 and training set C2 respectively. The top eigen faces of the two sets are very different. This difference could probably be reduced by using a larger training set, but we were unable to explore this since the computational cost and time become prohibitively expensive as the training set size increases.

CHAPTER 6
CONCLUSION

We have presented a novel method of evaluating face recognition algorithms at a much deeper level. The affine approximation algorithm uses the similarity matrix of any input algorithm to generate an affine transformation which, when applied to the eigen space of the standard PCA algorithm, can duplicate the results of the face recognition algorithm. Our strategy is superior to traditional Multidimensional Scaling (MDS) techniques since the transformation from input to output is explicitly computed. MDS merely embeds the output in some space, and MDS methods give no guidance on how to map a "new" data point onto the MDS space; the affine approximation algorithm, however, is able to do this. The data for training and testing was taken from the FERET database. The linear discriminant analysis (LDA) face recognition algorithm and the Bayesian intrapersonal and extrapersonal classifier face recognition algorithm were tested to evaluate the performance of the affine approximation algorithm. The PCA algorithm was also chosen, mainly to reaffirm the affine approximation strategy. Two training scenarios were considered: in one, both algorithms were trained on the same set; in the other, they were trained on different sets. The closeness between the similarity matrices generated by the affine approximation algorithm and the face recognition algorithm was characterized by different techniques. Gross error measures, the normalized RMS and Stress-1 error measures, were used to compare the raw similarity scores. The performance in the verification and identification scenarios was compared with standard CMS and ROC curves. Finally, the statistical significance of the results was tested using McNemar's test.
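The out-of-sample advantage over MDS noted above can be illustrated with a short sketch: given a PCA basis and a learned affine map, a new face embeds in one line, whereas classical MDS offers no such rule for unseen points. All names, shapes and values here are hypothetical, not the thesis models.

```python
import numpy as np

def embed_new_face(image_vec, mu, W, A, t):
    """Project a new image into the PCA eigen space (columns of W are
    eigenfaces, mu is the mean face), then apply the learned affine map
    x -> A x + t.  Classical MDS has no analogue of this step."""
    pca_coords = W.T @ (image_vec - mu)
    return A @ pca_coords + t

rng = np.random.default_rng(0)
mu = rng.normal(size=16)                          # hypothetical mean face (16-pixel toy image)
W = np.linalg.qr(rng.normal(size=(16, 4)))[0]     # orthonormal 4-dim eigenface basis
A, t = np.eye(4) * 0.5, np.zeros(4)               # hypothetical affine parameters
y = embed_new_face(rng.normal(size=16), mu, W, A, t)
```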

As anticipated, for the first training scenario the affine transformation matrix was an identity matrix when the input algorithm was the PCA algorithm. Excellent results were obtained for the LDA and Bayesian algorithms as well. The results improved as the dimensionality of the affine approximation algorithm increased. For the PCA and LDA algorithms the results reached a plateau once the dimensionality of the affine approximation algorithm became equal to the dimensionality of the eigen space (in the case of the PCA algorithm) or the LDA space (in the case of the LDA algorithm). The Bayesian algorithm showed improved results as the dimensionality of the affine approximation algorithm was increased. McNemar's test showed that once the dimensionality reached 60, the difference between the performances in the identification and verification scenarios became insignificant at rank 1. In the second training scenario, owing to the difference in the training sets, the results were not as good as in the first training scenario. However, a similar relationship between the errors and the dimensionality was observed. The difference between the performances in the identification and verification scenarios became insignificant only at higher ranks. The affine approximation algorithm would thus serve as a very important benchmark for comparing face recognition algorithms more closely with the PCA algorithm. The PCA algorithm has often been used as a baseline algorithm for face recognition [16, 21]. In order to allow face recognition to mature as a science, not only is it important to establish a baseline algorithm, it is also essential to develop good techniques to compare the performance of any face recognition algorithm with the baseline algorithm.

6.1 Future Work

An important goal of future work in this area would be to increase the size of the training set. It would also be interesting to study the performance of the affine approximation algorithm with a larger number of face recognition algorithms. Only images of type fa and fb from the FERET data set were used for training and testing; further experimentation needs to be done with other categories of images as well.

REFERENCES

[1] W. Zhao, R. Chellappa and A. Krishnaswamy, "Discriminant Analysis of Principal Components for Face Recognition," Proc. Third IEEE Intl. Conf. on Automatic Face and Gesture Recognition, pp. 336-341, 1998.
[2] W. Zhao, R. Chellappa, A. Rosenfeld and P. Jonathon Phillips, "Face Recognition: A Literature Survey," UMD CfAR Technical Report CAR-TR-948, 2000.
[3] B. Moghaddam and A. Pentland, "Beyond Eigenfaces: Probabilistic Matching for Face Recognition," Proc. Intl. Conf. on Automatic Face and Gesture Recognition, Nara, Japan, pp. 30-35, April 1998.
[4] M. Kirby and L. Sirovich, "Application of the Karhunen-Loeve Procedure for the Characterization of Human Faces," IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 12, No. 1, pp. 103-108, January 1990.
[5] M. Turk and A. Pentland, "Eigenfaces for Recognition," J. Cognitive Neuroscience, Vol. 3, No. 1, pp. 71-86, 1991.
[6] J. J. Weng and D. Swets, "Face Recognition," in Biometrics: Personal Identification in a Networked Society, Kluwer Academic Publishers, 1999.
[7] G. T. Candela and R. Chellappa, "Comparative Performance of Classification Methods for Fingerprints," Technical Report, National Institute of Standards and Technology, 1993.
[8] D. Maio, D. Maltoni, R. Cappelli, J. L. Wayman and A. K. Jain, "FVC2000: Fingerprint Verification Competition," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 3, pp. 402-412.
[9] D. Maio, D. Maltoni, R. Cappelli, J. L. Wayman and A. K. Jain, "FVC2002: Second Fingerprint Verification Competition," Proc. 16th International Conference on Pattern Recognition, Vol. 3, pp. 811-814.
[10] S. D. Leigh, A. Rukhin, A. Heckert, P. Grother, P. Jonathon Phillips, M. Moody, K. Kniskern and S. Heath, "Transformation, Ranking, and Clustering for Face Recognition Algorithm Performance," Proc. Third Workshop on Automatic Identification Advanced Technologies, Tarrytown, NY, 2002.
[11] P. J. Grother and G. T. Candela, "Comparison of Handprinted Digit Classifiers," Technical Report, National Institute of Standards and Technology, 1993.
[12] A. J. Mansfield and J. L. Wayman, "Best Practices in Testing and Reporting Performance of Biometric Devices," Technical Report, National Physical Laboratory, UK, 2002.
[13] J. L. Wayman, "National Biometric Test Center Collected Works 1997-2000, Version 1.2," Technical Report, San Jose State University.
[14] A. Martin and M. Przybocki, "The NIST 1999 Speaker Recognition Evaluation - An Overview," Digital Signal Processing, Vol. 10, pp. 1-18, 2000.
[15] P. Jonathon Phillips, A. O'Toole, Y. Cheng, B. Ross and H. Wild, "Assessing Algorithms as Computational Models for Human Face Recognition," Technical Report NISTIR 6348, National Institute of Standards and Technology, June 1999.
[16] P. Jonathon Phillips, H. Moon, S. Rizvi and P. J. Rauss, "The FERET Evaluation Methodology for Face-Recognition Algorithms," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 10, October 2000.
[17] S. Rizvi, P. Jonathon Phillips and H. Moon, "The FERET Verification Testing Protocol for Face Recognition Algorithms," Proc. Third IEEE Intl. Conf. on Automatic Face and Gesture Recognition, pp. 48-53, 1998.
[18] http://www.itl.nist.gov/iad/humanid/feret/, 17th May 2003.
[19] P. Jonathon Phillips, H. Wechsler, J. Huang and P. J. Rauss, "The FERET Database and Evaluation Procedure for Face Recognition Algorithms," Image and Vision Computing, Vol. 16, pp. 295-306, 1998.
[20] P. Jonathon Phillips, A. Martin, C. L. Wilson and M. Przybocki, "An Introduction to Evaluating Biometric Systems," Computer, Vol. 33, No. 2, pp. 56-63, February 2000.
[21] H. Moon and P. Jonathon Phillips, "Analysis of PCA Based Face Recognition Algorithms," Empirical Evaluation Techniques in Computer Vision, IEEE Computer Society Press, Los Alamitos, California, 1998.
[22] www.frvt.org, 20th April 2003.
[23] P. Jonathon Phillips, P. Grother, R. J. Micheals, D. M. Blackburn, E. Tabassi and J. M. Bone, "FRVT 2002: Evaluation Report," March 2003. http://www.frvt.org/FRVT2002/documents.htm.
[24] D. M. Blackburn, J. M. Bone and P. Jonathon Phillips, "FRVT 2000: Evaluation Report," http://www.frvt.org/FRVT2000/documents.htm, 22nd May 2003.
[25] J. R. Beveridge, K. She, B. A. Draper and G. H. Givens, "Parametric and Nonparametric Methods for the Statistical Evaluation of Human ID Algorithms," Workshop on Empirical Evaluation Methods in Computer Vision, December 2001.
[26] J. R. Beveridge, K. She, B. A. Draper and G. H. Givens, "A Nonparametric Statistical Comparison of Principal Component and Linear Discriminant Subspaces for Face Recognition," Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 535-542, December 2001.
[27] K. Baek, B. A. Draper, J. R. Beveridge and K. She, "PCA vs. ICA: A Comparison on the FERET Data Set," International Conference on Computer Vision, Pattern Recognition and Image Processing in conjunction with the 6th JCIS, Durham, North Carolina, 2001.
[28] R. J. Micheals and T. Boult, "Efficient Evaluation of Classification and Recognition Systems," Proc. of IEEE Computer Vision and Pattern Recognition 2001, December 2001.
[29] http://www.cs.colostate.edu/evalfacerec/algorithms4.html, 9th January 2003.
[30] W. Yambor, B. A. Draper and J. R. Beveridge, "Analyzing PCA-based Face Recognition Algorithms: Eigenvector Selection and Distance Measures," in Empirical Evaluation Methods in Computer Vision, H. Christensen and P. Jonathon Phillips (eds.), World Scientific Press, Singapore, 2002.
[31] A. Rukhin, P. Grother, P. Jonathon Phillips, S. Leigh, A. Heckert and E. Newton, "Dependence Characteristics of Face Recognition Algorithms," Proc. of 16th International Conference on Pattern Recognition, Vol. 2, Quebec City, Canada, August 2002.
[32] I. Borg and P. Groenen, Modern Multidimensional Scaling, p. 34, First Edition, Springer, 1997.

APPENDICES


Appendix A: More Results

Figure A.1. Eigen Faces of Set C1.

In order to study the effect of the database, another set of experiments was performed with a different data set. As before, three disjoint image sets C1, C2 and C3 were selected from the FERET database [16, 18]. All three sets, C1, C2 and C3, consisted of images of type fa (regular facial expression) and fb (alternate facial expression of the subject taken on the same day). Sets C1 and C2 consisted of 100 images of approximately 2040 subjects. Set C3 consisted of 482 images of 200 subjects. Sets C1 and C2 were used to train the face recognition algorithms and the affine approximation strategy, while set C3 was used as a validation set to compare the similarity matrices generated by the face recognition algorithm and the affine approximation algorithm. The gallery and probe sets were chosen from the validation set C3: the gallery set consisted of images of type fa and the probe set of images of type fb. It was later discovered that in most of the images in C1 and C2 the subjects were wearing spectacles, and this was captured in all the eigenvectors. Figure A.1 and Figure A.2 show the top eigen faces of training set C1 and training set C2 respectively. The Euclidean distance measure was used for computing the distances between the gallery and probe images in the PCA face recognition algorithm.

Appendix A (Continued)

Figure A.2. Eigen Faces of Set C2.

Figure A.3. Training Scenario 1: Plot of RMS Error. Dimensionality of the PCA step in all three algorithms was set to 60 dimensions.

Figure A.4. Training Scenario 1: Stress Plot. Dimensionality of the PCA step in all three algorithms was set to 60 dimensions.

Table A.1. Training Scenario 2: Average RMS Error.

Cutoff   PCA      LDA      Bayesian
20       6.227    4.1404   0.0229
40       6.3186   2.8581   0.0344
60       6.4896   1.9132   0.05
80       6.5312   1.767    0.122

Table A.2. Training Scenario 2: Stress-1.

Cutoff   PCA      LDA      Bayesian
20       0.1522   0.2447   0.2627
40       0.1437   0.248    0.2233
60       0.1414   0.2615   0.1707
80       0.1397   0.2474   0.167
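The average RMS error and Stress-1 values reported in Tables A.1 and A.2 can be computed from a pair of distance matrices roughly as below; Stress-1 follows the standard MDS definition [32], and the two matrices here are toy inputs, not the thesis data.

```python
import numpy as np

def avg_rms_error(D_face, D_affine):
    """Root-mean-square difference between corresponding entries of the
    two distance matrices."""
    return float(np.sqrt(np.mean((D_face - D_affine) ** 2)))

def stress1(D_face, D_affine):
    """Stress-1: sqrt( sum_ij (d_ij - dhat_ij)^2 / sum_ij d_ij^2 ),
    normalizing the squared residuals by the squared target distances."""
    num = np.sum((D_face - D_affine) ** 2)
    den = np.sum(D_face ** 2)
    return float(np.sqrt(num / den))

# Toy 2x2 distance matrices.
D1 = np.array([[0.0, 1.0], [1.0, 0.0]])
D2 = np.array([[0.0, 0.8], [0.8, 0.0]])
rms, s1 = avg_rms_error(D1, D2), stress1(D1, D2)
```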

Figure A.5. Training Scenario 1: CMC Curve for PCA Algorithm.

Figure A.6. Training Scenario 1: ROC Curve for PCA Algorithm.

Figure A.7. Training Scenario 1: CMC Curve for LDA Algorithm.

Figure A.8. Training Scenario 1: ROC Curve for LDA Algorithm.

Figure A.9. Training Scenario 1: CMC Curve for Bayesian Algorithm.

Figure A.10. Training Scenario 1: ROC Curve for Bayesian Algorithm.

Figure A.11. Training Scenario 2: CMC Curve for PCA Algorithm.

Figure A.12. Training Scenario 2: ROC Curve for PCA Algorithm.

Figure A.13. Training Scenario 2: CMC Curve for LDA Algorithm.

Figure A.14. Training Scenario 2: ROC Curve for LDA Algorithm.

Figure A.15. Training Scenario 2: CMC Curve for Bayesian Algorithm.

Figure A.16. Training Scenario 2: ROC Curve for Bayesian Algorithm.

Table A.3. McNemar's Test for Training Scenario 1 at Rank 1. SF denotes the number of times the face recognition algorithm succeeds in recognizing the image but the affine approximation algorithm fails. FS denotes the number of times the affine approximation algorithm succeeds in recognizing the image but the face recognition algorithm fails.

Algorithm   SF   FS   p-value
PCA          0    0   not applicable
LDA          3    0   0.2482
Bayesian     9    4   0.2673

Table A.4. McNemar's Test for Training Scenario 2 at Rank 1.

Algorithm   SF   FS   p-value
PCA         14    9   0.4042
LDA         13    6   0.1687
Bayesian    10    4   0.1814

Table A.5. McNemar's Test for Training Scenario 2. The rank column denotes the rank from which the difference becomes insignificant.

Algorithm   Rank   SF   FS   p-value
PCA            5    4    0   0.1336
LDA            1   13    6   0.1687
Bayesian       1   10    4   0.1814

