Laure Thompson

Computer Vision (CSE 455), Winter 2012

Project 4: Eigenfaces

Project Abstract

Objectives

In this project, we applied PCA (Principal Component Analysis) to create a face recognition system. Each face image is flattened into a vector, and PCA is then used to reduce the dimensionality of these vectors. This reduction is possible because valid faces occupy only a small subspace of the full vector space. The eigenvectors that define this subspace (in this case called eigenfaces) can be used to describe each face. Then, by comparing the eigenface coefficients of a new face to those of the known faces, the closest match can be found.
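As a rough illustration of this pipeline (not the project's actual skeleton code), the following NumPy sketch computes eigenfaces from a set of flattened 25x25 face vectors, projects a face onto them, and picks the closest stored user; the function and variable names are my own.

    import numpy as np

    def compute_eigenfaces(faces, num_eigenfaces=10):
        # faces: (n_images, 625) array of flattened 25x25 grayscale face images
        mean_face = faces.mean(axis=0)
        centered = faces - mean_face                        # subtract the average face
        # Use the small n x n Gram matrix instead of the full 625x625 covariance matrix
        gram = centered @ centered.T
        eigvals, eigvecs = np.linalg.eigh(gram)
        order = np.argsort(eigvals)[::-1][:num_eigenfaces]  # keep the largest eigenvalues
        eigenfaces = (centered.T @ eigvecs[:, order]).T     # lift back to pixel space
        eigenfaces /= np.linalg.norm(eigenfaces, axis=1, keepdims=True)
        return mean_face, eigenfaces

    def project(face, mean_face, eigenfaces):
        # coefficients of a face vector in the eigenface subspace
        return eigenfaces @ (face - mean_face)

    def recognize(face, mean_face, eigenfaces, userbase):
        # userbase: dict of name -> stored coefficient vector; smallest MSE wins
        coeffs = project(face, mean_face, eigenfaces)
        return min(userbase, key=lambda name: np.mean((userbase[name] - coeffs) ** 2))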

Challenges

Debugging turned out to be quite a challenge until I created a debugging image that displayed the calculated error scores for each face window at each scale. This allowed me to see the actual reasons for false positives and negatives. In my particular case, I had forgotten to initialize a number of Face objects, which caused random noise to appear within my results.
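For reference, a minimal sketch of the kind of debug image I mean, assuming the per-window error scores for one scale have already been collected into a 2D array (the helper name and the normalization are illustrative):

    import numpy as np
    from PIL import Image

    def save_score_map(score_map, path):
        # score_map: 2D array of per-window error scores (lower = more face-like)
        s = (score_map - score_map.min()) / (np.ptp(score_map) + 1e-9)
        Image.fromarray((255 * (1 - s)).astype(np.uint8)).save(path)  # bright = low error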

Lessons Learned

Testing is much easier with automation. Writing scripts proved to be very valuable. Additionally, comments are extremely important: reading through code to gain understanding is time-consuming and error-prone.

Implementation

I completed all of the required skeleton code. I also made a few changes to some of the provided code. I will explain these changes after the discussion of all of the skeleton components.

The project is structured as follows:

Minor changes to given code:

Experiment: Recognition

Average Face

10 25x25 Eigenfaces

Face Recognition for 10 Eigenfaces

When using the 10 25x25 eigenfaces, the program correctly matched 23 of the 33 cropped, smiling students to their corresponding cropped, non-smiling photos. Three of the mismatches are displayed below:

Correct Recognition vs Number of Eigenfaces

Methodology

For this experiment, I created a batch script which, for each eigenface count of interest (10, and every odd number from 1 to 33), first calculated the 25x25 eigenfaces and the userbase using the following commands:

main --eigenfaces NUM 25 25 nonsmiling_cropped\list.txt eigNUM.face

main --constructuserbase eigNUM.face nonsmiling_cropped\list.txt userNUM.user

Then, for each cropped smiling image, I ran the --recognizeface command:

main --recognizeface smiling_cropped\IMG userNUM.user eigNUM.face 1

Finally, I counted the number of correct face matches for each eigenface count and generated the plot using Excel.
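A Python sketch of roughly what the batch script plus counting did is shown below. It assumes the smiling images are .tga files, that each smiling image shares a base name with its non-smiling counterpart, and that --recognizeface prints the best match on its first output line; those details are assumptions for illustration, not part of the program's documented behavior.

    import glob, os, subprocess

    def run(args):
        return subprocess.run(args, capture_output=True, text=True, check=True).stdout

    correct_counts = {}
    for num in sorted(set([10] + list(range(1, 34, 2)))):    # 10, plus every odd count 1-33
        eig, user = f"eig{num}.face", f"user{num}.user"
        run(["main", "--eigenfaces", str(num), "25", "25",
             r"nonsmiling_cropped\list.txt", eig])
        run(["main", "--constructuserbase", eig, r"nonsmiling_cropped\list.txt", user])

        correct = 0
        for img in glob.glob(r"smiling_cropped\*.tga"):
            out = run(["main", "--recognizeface", img, user, eig, "1"])
            expected = os.path.splitext(os.path.basename(img))[0]
            first_line = out.splitlines()[0] if out.strip() else ""
            if expected in first_line:                        # assumed output format
                correct += 1
        correct_counts[num] = correct

    print(correct_counts)  # copied into Excel to plot correct matches vs. eigenface count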

Questions

Question 1: The plot shows that the number of matches initially increases as the number of eigenfaces increases, but the improvement quickly tapers off. This suggests that simply increasing the eigenface count does not generally improve the quality of matching. The best choice appears to be between 5 and 9 eigenfaces, since this costs less computation time than 33 eigenfaces while producing nearly the same results. The reason the "best" number of eigenfaces is so small may be the small image base: with only 33 images, the later eigenfaces become less and less meaningful.


Question 2: Of the ten mistakes made when using the 10 25x25 eigenfaces, all of the errors seemed reasonable. The mismatched images all had very unusual facial expressions that deviated considerably from the corresponding neutral expressions. The correct neutral face generally appeared within the top 5 matches, which seems reasonable.

Experiment: Find Faces

Methodology

For this experiment, I simply used the --findface command to test my program's face detection. I tested various parameters in order to obtain accurate results.
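For context, this is roughly how I understand the --findface scale sweep to behave; the resizing helper, the variance penalty, and the scoring below are my own sketch under those assumptions, not the actual skeleton code.

    import numpy as np
    from scipy.ndimage import zoom

    def find_best_face(image, mean_face, eigenfaces,
                       min_scale=0.45, max_scale=0.55, scale_step=0.01):
        # image: 2D grayscale array; mean_face and eigenfaces as in the earlier sketch
        best = None
        for scale in np.arange(min_scale, max_scale + 1e-9, scale_step):
            scaled = zoom(image, scale)                     # resize the image at this scale
            h, w = scaled.shape
            for y in range(h - 24):
                for x in range(w - 24):
                    window = scaled[y:y + 25, x:x + 25].reshape(-1)
                    coeffs = eigenfaces @ (window - mean_face)
                    recon = mean_face + eigenfaces.T @ coeffs
                    mse = np.mean((window - recon) ** 2)
                    score = mse / max(window.var(), 1e-6)   # penalize flat, low-variance windows
                    if best is None or score < best[0]:
                        best = (score, x / scale, y / scale, 25 / scale)
        return best  # (score, x, y, face size) in original-image coordinates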

Cropping the elf.tga image

For this part, I used the following command to crop the image shown above:

main --findface elf.tga eig10.face 0.45 0.55 0.01 crop1 result_elf.tga

This results in the following cropped image:

These results are what I would expect. The program does in fact recognize the baby's face.

Cropping an image of me

Question 1: For cropping my face from the above image, I used the following parameters:

min_scale = 0.05, max_scale = 0.15, scale_step = .01

This produced a nice crop of my face, which is shown below:


Question 2: Initially, I had allowed the max_scale value to be larger; in this particular error it was 0.15. This bad face match is shown below:

This caused the result to be focused on my bangs, capturing just my eyebrows. This mistake might have occurred because there is dark shadowing in my hair right where the "eye" region of the window would be. My guess is that this shadowing gave the window a reasonable score; the fact that my hair is very pale in this photo, and thus likely a reasonable approximation of a skin tone, does not help either.

Cropping a group neutral image

Question 1: I tested face detection on the third neutral group image. To produce the image with correctly detected faces shown above, I used the following parameters:

min_scale = 0.5, max_scale = 0.8, scale_step = .1


Question 2: In this case, I did not initially test small enough face sizes for the actual faces to be detected: I originally had max_scale set to the smaller value of 0.6. The incorrect face detection image is shown below:

Because the max_scale value was too small, the tested face windows were too large in comparison to the actual faces. This caused the faces to be overlooked, while the upper portions of the image were selected instead. These selections are likely caused by the low variance of those windows, as well as the slight shadowing within each window.

Cropping an image of hobbits

Question 1: For my second group image test, I took an image of the hobbits from The Lord of the Rings. To produce the accurate face detection shown above, I used the following parameters:

min_scale = 0.85, max_scale = 0.85, scale_step = .1

Question 2: While I was trying to figure out the scale that corresponded to the faces in the image, I ran into a number of false positives. One of these instances is shown below:

In this case I used the following parameters:

min_scale = 0.5, max_scale = 0.7, scale_step = .1

In this case, there were two false positives, and Merry and Pippin's faces were not recognized. I think this is partially due to their small face sizes in comparison to the window. I think the false matches ranked highly for different reasons. The false match made up largely of the white background was likely chosen for its very low variance; the only variance within that window is in the bottom left corner. Frodo's sleeve was likely detected as a face due to the shading within the region, as well as its level of red, since I weighted red to be more important than green or blue.
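As a side note on the red weighting, something along these lines is what I mean; the specific weights here are illustrative, not the values I actually used.

    import numpy as np

    def to_intensity(rgb_image, weights=(0.5, 0.3, 0.2)):  # hypothetical red-heavy weights
        # rgb_image: (H, W, 3) array; returns a grayscale image where red counts the most
        return rgb_image @ np.asarray(weights)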

Experiment: Verify Faces

Methodology

For this experiment, I used a script that iterates through all of the users in the database and compares each user n with their own smiling image as well as with the smiling image of user (n mod 33) + 1. I ran this script for different MSE thresholds, noting the number of false positives and false negatives.
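A minimal sketch of this sweep, assuming the per-pair MSE values have already been collected (the list names and the way the pairs are formed are assumptions for illustration):

    def sweep_thresholds(own_mses, other_mses, thresholds=range(5000, 200001, 5000)):
        # own_mses[i]: MSE of user i against their own smiling photo (should be accepted)
        # other_mses[i]: MSE of user i against another user's smiling photo (should be rejected)
        results = []
        for t in thresholds:
            false_negatives = sum(mse > t for mse in own_mses)     # genuine pairs rejected
            false_positives = sum(mse <= t for mse in other_mses)  # impostor pairs accepted
            results.append((t, false_positives, false_negatives))
        return results  # pick the threshold minimizing false positives + false negatives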

Results


Questions

Question 1: I tried all threshold values between 5000 and 200000 in increments of 5000, plotted the results, and chose the threshold that minimized the combined false positives and false negatives. The best threshold value was 125000.

Question 2:

False positives = 4

False negatives = 4