CSE 455 - Computer Vision
Paul Larpenteur
Winter - 2003 - Steve Seitz

Project #4 - EigenFaces

Experiment  #1: In Class Images - Recognizing smiling faces using non-smiling faces.

Eigenfaces & Recognition Results:

These are the 7 eigenfaces that I computed from the non-smiling class images.

Average Face:    Eigenfaces in order:  

This is the table of my results when attempting to recognize each user's smiling face, given a userbase created from the 7 non-smiling eigenfaces shown above.

(Please note that these results were taken before I fixed a small detail in the isFace function, so the results from running my application might differ slightly. However, the top results were almost always the same, so I have not recalculated the table; the changes are insignificant.)
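Before the results, here is a rough sketch of the eigenface computation and projection that the numbers below are based on. It is written in Python/NumPy purely for illustration (the actual assignment skeleton is separate code), and the function and variable names here are mine, not the skeleton's:

    import numpy as np

    def compute_eigenfaces(faces, num_eigenfaces=7):
        """faces: (num_images, height*width) array of flattened grayscale face images."""
        mean_face = faces.mean(axis=0)
        A = faces - mean_face              # subtract the average face from every image
        # The right singular vectors of A are the eigenvectors of the covariance
        # matrix A^T A, i.e. the eigenfaces, ordered by decreasing singular value.
        _, _, vt = np.linalg.svd(A, full_matrices=False)
        return mean_face, vt[:num_eigenfaces]

    def project(face, mean_face, eigenfaces):
        """Coefficients of a face in eigenface space; recognition compares these vectors."""
        return eigenfaces @ (face - mean_face)

With this setup, each userbase entry stores its coefficient vector, and a query face is matched to the entry whose coefficients are closest.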

Face to Find    1st    2nd    3rd    4th    Position of Correct Match    Correct? (1 = yes, 0 = no)
adeakin adeakin ddewey squinn tshail 1 1
alissah alissah margaux paullarp ddewey 1 1
amiratuw amiratuw esp squinn jwkim 1 1
crosetti paullarp crosetti alissah margaux 2 0
ddewey ddewey alissah paullarp margaux 1 1
djj djj tanderl squinn amiratuw 1 1
esp esp amiratuw squinn jwkim 1 1
galen galen mbixby djj amiratuw 1 1
jaydang jaydang tshail adeakin melissa 1 1
jwkim jwkim amiratuw esp mbixby 1 1
margaux margaux alissah paullarp ddewey 1 1
mbixby mbixby amiratuw rhennig esp 1 1
melissa margaux alissah melissa ddewey 3 0
merlin merlin djj seitz galen 1 1
mhl melissa alissah squinn margaux 9 0
paullarp paullarp alissah margaux crosetti 1 1
rhennig rhennig mbixby amiratuw esp 1 1
seitz seitz merlin djj galen 1 1
squinn djj tanderl seitz squinn 4 0
tamoore tamoore jwkim melissa alissah 1 1
tanderl tanderl djj squinn amiratuw 1 1
tshail adeakin tshail mhl ddewey 2 0
           
Total People: 22
Average Position of Correct Match: 1.68
Total Correct: 17
Percent Correct: 77.27%

Questions:

#1: Correct Results?

The program recognized 17 of the 22 faces correctly and 5 incorrectly, as shown in the table above.

#2: Average Position of Correct Result

Counting all 22 queries, whether or not the 1st entry was correct, the average position of the correct entry is about 1.7.

Given only the 5 cases where the 1st entry was not correct, the average is the 4th position. This is skewed upward a little by the one outlier, "mhl", whose correct match came back in 9th position; I think this was due to changes in facial orientation between her two images.
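For reference, both averages come straight from the "Position of Correct Match" column of the table:

    (17*1 + 2 + 3 + 9 + 4 + 2) / 22 = 37 / 22 ≈ 1.68
    (2 + 3 + 9 + 4 + 2) / 5 = 20 / 5 = 4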

#3: Finding Females or Finding Males?

Face to Find   M/F   F in Top 4   M in Top 4   F in Top 4 (query is F)   M in Top 4 (query is M)   F in 2nd-4th (1st result F)   M in 2nd-4th (1st result M)
adeakin   F   3   1   3   -   2   -
alissah   F   2   2   2   -   1   -
amiratuw   M   3   1   -   1   -   0
crosetti   M   2   2   -   2   -   1
ddewey   M   2   2   -   2   -   1
djj   M   2   2   -   2   -   2
esp   F   3   1   3   -   2   -
galen   M   0   4   -   4   -   3
jaydang   M   3   1   -   1   -   0
jwkim   F   2   2   2   -   1   -
margaux   F   2   2   2   -   1   -
mbixby   M   1   3   -   3   -   1
melissa   F   3   1   3   -   2   -
merlin   M   0   4   -   4   -   3
mhl   F   4   0   4   -   3   -
paullarp   M   2   2   -   2   -   1
rhennig   M   1   3   -   3   -   2
seitz   M   0   4   -   4   -   3
squinn   F   1   3   1   -   -   2
tamoore   F   4   0   4   -   3   -
tanderl   M   1   3   -   3   -   2
tshail   F   3   1   3   -   2   -

Totals:
Females in the top 4 when searching for a female:   27 of 40   (67.5%)
Males in the top 4 when searching for a male:   31 of 48   (64.6%)
Females in positions 2-4 when the 1st result is female:   17 of 27   (63.0%)
Males in positions 2-4 when the 1st result is male:   21 of 39   (53.8%)

My results show that when searching for a female face, on average 67.5% of the top four results were also female.

When searching for a male face, on average 64.6% of the top four results were also male. So this percentage is about the same for male and female queries.

However, when the 1st result returned is female, on average 63% of the next 3 faces are also female.

When the 1st result returned is male, on average only about 54% of the next 3 faces are also male.
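These percentages come directly from the column totals in the table: each query returns 4 faces, 10 of the 22 people are female and 12 are male, and 9 of the 22 first results were female while 13 were male:

    27 / (10 * 4) = 0.675     females in the top 4 when searching for a female
    31 / (12 * 4) ≈ 0.646     males in the top 4 when searching for a male
    17 / (9 * 3)  ≈ 0.630     females in positions 2-4 when the 1st result is female
    21 / (13 * 3) ≈ 0.538     males in positions 2-4 when the 1st result is male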

#4: Real Life: Would you use the full user set (1000's of people) to compute your eigenfaces?

In real life, I would not use the entire user set to compute my eigenfaces if the user set were very large. The matrices involved would become incredibly large (the number of users would start to outnumber the number of pixels in the images), and the detail gained would be small, because you still only use roughly the top handful of eigenfaces to encode the rest of the faces. Instead, I would compute the eigenfaces from a random selection of about 10% of the user set (maybe 200 images).
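As a hedged illustration of that idea (reusing the hypothetical compute_eigenfaces sketch from above; the function name, fraction, and numbers are chosen just for the example):

    import random
    import numpy as np

    def eigenfaces_from_sample(all_faces, fraction=0.10, num_eigenfaces=10):
        """all_faces: list of flattened face images; sample ~10% and compute eigenfaces from that."""
        k = max(num_eigenfaces, int(len(all_faces) * fraction))
        sample = random.sample(all_faces, k)
        return compute_eigenfaces(np.array(sample), num_eigenfaces)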

#5: Why might it be better to use a face set independent of the user set to compute the eigenfaces?

The faces in your user set may bias the eigenfaces toward a certain kind of individual. When a new user is added to the system and their face differs from the majority of faces in the user set, their face will lie far from face space and will be hard to project and recognize; the reconstruction of their face will be very vague because no combination of the eigenfaces can recreate it well. Instead, an independent face set could be used that incorporates many different types of people and faces, including differences in gender, race, age, and even facial expression!

Experiment  #2: U Grads Images

Eigenfaces & Recognition Results:

Average Face:    Eigenfaces in order:

These are the eigenfaces created from the collection of undergraduate "cropped" images. The table below shows how many of the in-class non-smiling faces were recognized correctly, given a userbase created from the undergraduate cropped images of just the 17 people in our class that had undergraduate images.

# of Eigenfaces   # Recognized Correctly   Total Faces
5 2 17
10 4 17
12 3 17
15 3 17
20 2 17
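To make "recognized with k eigenfaces" concrete, here is a rough sketch of the matching step (Python/NumPy, continuing the hypothetical helpers from Experiment #1; the real skeleton differs in its details). Each userbase face is stored as its eigenface coefficient vector, and a query is assigned to the nearest stored entry:

    import numpy as np

    def recognize(query, userbase_coeffs, mean_face, eigenfaces, k):
        """Return the index of the userbase entry whose first k coefficients are closest to the query's."""
        q = eigenfaces[:k] @ (query - mean_face)     # project the query onto the first k eigenfaces
        dists = [np.linalg.norm(q - coeffs[:k]) for coeffs in userbase_coeffs]
        return int(np.argmin(dists))

Changing k (5, 10, 12, 15, 20 in the table above) only changes how many coefficients are compared.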

Questions:

#0: Why is it best to use as few eigenfaces as possible while still getting good results?

As you can vaguely see from the results in the table above, the number of faces recognized correctly grows as the number of eigenfaces increases, but after a certain point it starts to decrease again. Thus it appears that recognition accuracy forms a curve with an optimal peak, corresponding to the number of eigenfaces that recognizes the most faces correctly. However, my results here are not enough to conclude that this hypothesis is always true.

In addition, using too many eigenfaces simply wastes space and computation time, both of which could be large for databases with thousands of users in the user set.

#1: How many students did it recognize correctly?

From the table above, we see that when using 10 eigenfaces created from the undergraduate "cropped" images, the algorithm recognized 4 of the 17 students correctly. Using 12 or 15 eigenfaces, 3 of the 17 students were recognized correctly. Lastly, using 5 or 20 eigenfaces, only 2 of the 17 students were recognized correctly.

#2: Are the incorrect identifications reasonable?

The results are somewhat, what should I say, "fishy" for this part. I think the changes in brightness between the images make a difference in which faces can be represented well with the eigenfaces. As a result, some faces are returned as the nearest result for many of our in-class images; although they may not be the person in that image, they are the most similar kind of picture to our in-class picture.

Actually, 15 out of the 17 people were recognized as girls. I think that for some reason the girls' faces, such as "tamoore", "esp", "melissa", "margaux", and "mhl", were recognized a lot.

For example, "tamoore" was an undergraduate face that mapped well to eigen space and returned #1 for a lot of my queries.

Searching for: 'mhl'   Recognized as: 'tamoore'

Notice how the faces are both straight on and non-smiling, with dark eyes and distinct eyebrows.

Searching for: 'tshail'   Recognized as: 'tamoore'

Notice that these pictures are both of girls, and how closely the eyebrows and eyes match tamoore's picture, as well as mhl's picture.

Searching for: 'tamoore'   Recognized as: 'tamoore'

Notice how this is indeed tamoore and she has been recognized correctly.

Last, but not least, the results of the search for my very own face.

Searching for: 'paullarp'   Recognized as: 'melissa'   Followed by: 'esp'

I don't know how it figured out that I wear glasses... but I do. ...um, no comment.

Just for a sanity check, I searched for what "jwkim" would look like if she had an undergrad picture.

Searching for: 'jwkim'   Recognized as: 'esp'

This is what I was expecting: the glasses were recognized correctly. Hooray! My program is actually working correctly after all. Thus, in conclusion, I have shown that even though the results of this recognition process are mostly incorrect, they are still rational most of the time.

#3: Why is the program worse at recognizing people now?

The program's decrease in accuracy can be explained mainly by the differences between the undergraduate "cropped" images and the in-class images: as discussed in question #2 above, the brightness and lighting of the two sets differ, and a few undergraduate faces (such as "tamoore") map so well to the eigenface space that they are returned as the nearest match for many different queries.

#4: How did changing the number of eigenfaces affect your results?

Please read my comment above for question "#0" from this section. As you can see, my best results were found using 10 eigenfaces in this experiment.

Experiment  #3: Cropping the undergraduate images.

My Cropping Results:

Original Picture    Cropped Picture    Your Optimal Cropped Result (if in our class)

#1. What scales did you use?

I ended up using a min_scale of .15 and a max_scale of .20 with a step size of .01. These parameters worked for the face sizes found in most of the images while taking about 10 seconds to compute. The scale step of .01 also worked out pretty well for getting the correct part of the face that we want.
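As an illustration only (the actual face-finding code lives in the assignment skeleton and differs in its details), the scale search I describe can be sketched roughly like this in Python/NumPy; the function names are mine:

    import numpy as np
    from scipy.ndimage import zoom   # simple image rescaling

    def face_error(window, mean_face, eigenfaces):
        """MSE between a candidate window and its reconstruction from the eigenfaces."""
        v = window.ravel() - mean_face
        recon = eigenfaces.T @ (eigenfaces @ v)
        return np.mean((v - recon) ** 2)

    def find_best_face(image, mean_face, eigenfaces, face_shape,
                       min_scale=0.15, max_scale=0.20, step=0.01):
        """Scan a range of scales and window positions; return (error, scale, row, col) of the best match."""
        fh, fw = face_shape
        best = None
        scale = min_scale
        while scale <= max_scale + 1e-9:
            small = zoom(image, scale)               # rescale the whole image
            for r in range(small.shape[0] - fh + 1):
                for c in range(small.shape[1] - fw + 1):
                    err = face_error(small[r:r + fh, c:c + fw], mean_face, eigenfaces)
                    if best is None or err < best[0]:
                        best = (err, scale, r, c)
            scale += step
        return best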

#2: How many of your crop results look correct? How many look incorrect?

6 out of the 10 images came out perfectly. 2 of the 10 came out well enough but included a little bit too much of the neck. The last 2 were slightly off because the face was either too small or too big; to correct this, the min/max scale would have to be adjusted a little for those images.

Still, 6 out of the 10 were perfect, so the face finding and cropping seems to work pretty well when there is only one face to find.

#3. What's the problem with using a min_scale that is too small?

Sometimes when I make the min_scale too small, the algorithm's closest match is a crop in which the person's face looks far away (the window covers much more than just the face), which is not exactly what we want.

If I decrease the min_scale so far that the scaled image becomes smaller than an eigenface, I am simply wasting time, since it only makes sense to search scales where an eigenface can actually fit into the image. The image may also become so small and distorted that it no longer contains any distinct facial features. Since my program already takes a long time to process images, I try to set the scale range as close to correct as possible.

Experiment  #4: Finding faces in group photos.

Picture #1

This is one of my graduation pictures with my friends at the Tacoma Dome.

This is the result of the face finding operation.

The parameters used for this image were: min_scale = .42; max_scale = .54; step = .02;

As you can see, it nearly got two of the faces correct! But this is acceptable because some of the people's faces are slightly tilted and my face has weird lighting conditions. My two African American friends were also not recognized, which might be due to the lack of training faces similar to theirs in the user set that I created the eigenfaces from.
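For completeness, here is a rough sketch (again Python, names hypothetical, not the skeleton's actual code) of the general idea behind marking several faces in a group photo: score candidate windows as in the cropping sketch above, then greedily keep the best-scoring ones that do not overlap anything already chosen:

    def boxes_overlap(a, b):
        """a, b: (error, row, col, height, width) candidates; True if the two boxes intersect."""
        _, ra, ca, ha, wa = a
        _, rb, cb, hb, wb = b
        return not (ra + ha <= rb or rb + hb <= ra or ca + wa <= cb or cb + wb <= ca)

    def pick_faces(candidates, num_faces):
        """Greedily keep the num_faces lowest-error candidates that don't overlap one another."""
        chosen = []
        for cand in sorted(candidates, key=lambda c: c[0]):   # lowest error first
            if all(not boxes_overlap(cand, kept) for kept in chosen):
                chosen.append(cand)
                if len(chosen) == num_faces:
                    break
        return chosen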

Picture #2

This is a picture of my favorite J-POP singing group, SPEED. Heck of a lot nicer picture than the one above! ;-)

Here are the results of my face search.

It recognized, from left to right, Eriko, Takako, and Hitoe, but it sorta missed Hiro's face on the far right.

The parameters used for this image were: min_scale = .26; max_scale = .42; step = .02;

There was one error in this image, so I tried dividing the MSE by the variance, but this caused more errors and selected regions with strong edges in them, such as where the left side of the left-most girl's hair meets the background. To improve the algorithm, I think some extra normalization steps could be taken on the input image so that regions that are totally dark and could not possibly be a face are made even darker, while the colors that usually represent faces are preserved.
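For reference, the variance-weighted score I experimented with looks roughly like this (a sketch only; the window handling is elided). Dividing by the variance lowers the score of high-contrast regions, which is exactly why strong edges like the hair/background boundary started winning:

    import numpy as np

    def variance_weighted_error(window, reconstruction):
        """Reconstruction MSE divided by the window's variance; rewards high-contrast regions."""
        mse = np.mean((window - reconstruction) ** 2)
        var = np.var(window)
        return mse / var if var > 0 else np.inf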

Conclusion

This project has also proved to be a very fun one, although it was challenging at first and slightly discouraging until I got it to start recognizing faces CORRECTLY! Once I had that working, the project was fun. I read that there is supposed to be an N-squared algorithm for solving the eigenvector equation instead of an N-cubed one; I think the math is harder, but that might be an interesting thing to look into someday. I really like how well the program crops a single person's face when you give it a picture of just one person.
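I believe the speedup I read about is the trick from the original eigenfaces paper by Turk and Pentland: if A is the N x M matrix whose columns are the M centered face images (N pixels each, with M much smaller than N), you diagonalize the small M x M matrix A^T A instead of the huge N x N covariance matrix A A^T, because

    A^T A v = lambda v   implies   (A A^T)(A v) = lambda (A v),

so each eigenvector v of the small matrix gives an eigenface A v with the same eigenvalue.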

Thanks

Thanks to David Dewey for the time he spent on the skeleton source code, and also to Jiwon Kim for all the help that she has given us in creating and grading the interesting projects in this class. Also thanks to our Professor Steve Seitz for making the class material very understandable, even though much of what we learned and discussed is not yet even in a textbook, but rather more on the research frontier. And as I'm thanking everybody, I thank my classmates for making some outstanding projects, especially some of the panoramas and those project 3 artifacts with all of the stairs! Oh my goodness, that must have taken a lot of patience!

Thanks for the great quarter,

-Paul Larpenteur

Visit Paul's website at: http://students.washington.edu/paullarp/