Chris Gamble

0831405

Computer Vision (CSE 455), Winter 2012

Project 4: Eigenfaces

Project Abstract

Objectives

The objective of this project was to create a program that performs facial recognition on a variety of head-on images of faces. It needs to be able to build the eigenfaces from a set of training images, decide whether an image is a face, match a new face against known faces, and find multiple faces within a single image.

Challenges

The most challenging portion of this project was testing. I was testing with the debug build because I failed to notice that we were advised to use the release build. This made my code run far slower than it needed to and stretched out the time between editing and testing. I learned that the release build was much faster slightly too late, but it did allow me to run the experiments faster than before.

Lessons Learned

The main lesson I learned is that I need to read the entire assignment and not gloss over parts; that is the main reason I had issues with my testing. I also learned that this procedure is very simplified: it cannot find faces that have undergone an affine transformation within the image, so the subjects must be looking straight at the camera and cannot be tilted at all.

Implementation

The portion of the code I wrote was everything related to finding or creating faces from an image or set of images. Faces::eigenFaces builds the basis of eigenfaces used by the rest of the operations. projectFace uses these eigenfaces to compute the coefficients needed to recreate a given face from the eigenfaces. constructFace reconstructs a face from a given set of coefficients using the known eigenfaces. isFace tests a given image by checking whether the reconstruction error against the eigenfaces is below a given threshold. verifyFace reports the error between a user's stored coefficients and a face that is expected to belong to the same user. recognizeFace does the same as verifyFace against an entire database of users, then sorts the users by how well they match the given face.
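To make the relationship between these routines concrete, here is a minimal C++ sketch of the projection and reconstruction math. The FaceVec type, the function names, and the toy example in main are illustrative stand-ins of my own, not the skeleton's actual Face/EigFaces classes or the code I submitted.

#include <cstddef>
#include <cstdio>
#include <vector>

// One face image flattened into a vector of pixel intensities.
typedef std::vector<double> FaceVec;

// Project (face - averageFace) onto each eigenface to get its coefficients.
std::vector<double> projectFace(const FaceVec& face,
                                const FaceVec& averageFace,
                                const std::vector<FaceVec>& eigenfaces)
{
    std::vector<double> coeffs(eigenfaces.size(), 0.0);
    for (std::size_t i = 0; i < eigenfaces.size(); ++i)
        for (std::size_t p = 0; p < face.size(); ++p)
            coeffs[i] += (face[p] - averageFace[p]) * eigenfaces[i][p];
    return coeffs;
}

// Rebuild a face as averageFace + sum_i coeffs[i] * eigenfaces[i].
FaceVec constructFace(const std::vector<double>& coeffs,
                      const FaceVec& averageFace,
                      const std::vector<FaceVec>& eigenfaces)
{
    FaceVec out(averageFace);
    for (std::size_t i = 0; i < eigenfaces.size(); ++i)
        for (std::size_t p = 0; p < out.size(); ++p)
            out[p] += coeffs[i] * eigenfaces[i][p];
    return out;
}

// isFace-style test: mean squared error between a face and its reconstruction.
double reconstructionMSE(const FaceVec& face,
                         const FaceVec& averageFace,
                         const std::vector<FaceVec>& eigenfaces)
{
    FaceVec rebuilt = constructFace(projectFace(face, averageFace, eigenfaces),
                                    averageFace, eigenfaces);
    double mse = 0.0;
    for (std::size_t p = 0; p < face.size(); ++p) {
        double d = face[p] - rebuilt[p];
        mse += d * d;
    }
    return mse / face.size();
}

int main()
{
    // Toy 4-pixel example: one unit-length eigenface, an average face, a test face.
    FaceVec average(4, 0.5);
    std::vector<FaceVec> eigenfaces(1, FaceVec(4, 0.5));
    FaceVec face(4, 1.0);

    std::printf("reconstruction MSE = %f\n", reconstructionMSE(face, average, eigenfaces));
    return 0;
}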

The project is structured as follows:

Experiment: Recognition

Methodology

I used Part 1 of the script below to save the output for various numbers of eigenfaces. I then took the output and created a plot from the data in Excel. I found that the best configuration gives about a 73% success rate.

Questions

Question 1: The trend shows dramatic improvement up to about 6 eigenfaces, and from there it hovers around 70% accuracy. Near the total number of available eigenfaces it starts to improve again, as more minute details are captured by the additional eigenvectors. From this data I think the best choice would be to use 7-10 eigenfaces, since computing and comparing those takes less time than using all 33 eigenfaces.

Question 2: See below.

Sample Results

The average_face and the 33 eigenfaces

Mistakes

main --eigenfaces 30 25 25 nonsmiling_cropped/files.txt eig30.face

main --constructuserbase eig30.face nonsmiling_cropped/files.txt base1.user

main --recognizeface smiling_cropped/%%A base1.user eig30.face 1

This mistake is probably caused by the outrageousness of the smiling face. The facial features are much more pronounced than in any of the neutral ones, and so it can't find good coefficients to match it.

The face in this one is more asymmetrical than most faces, and I believe that is preventing it from taking advantage of the symmetry present in the eigenfaces.

The open mouth distorts the angle that the eyes and nose make toward the camera, which is preventing the eigenfaces from matching up properly.

I think this too can be attributed to the lack of symmetry in the smiling face.

This face isn't looking directly at the camera, since the eyes are pulled back, and the bright corner on one side but not the other is probably causing some issues aligning it with the eigenfaces.

Once again, I think this is caused by the lack of symmetry in the smiling face.

The best explanation I have is that the face is turned slightly away from the camera, but it was still able to recognize the glasses and similar hair. I think this is almost a half-correct match.

Again, a lack of symmetry.

The problem here is similar to the first image: it cannot correlate the outrageousness of the smile with proper eigenface coefficients.

Experiment: Find Faces

Methodology

The images I used for testing were elf.tga, group_neutral(2).tga, and two photos from Facebook that I have been tagged in. To estimate the scales for these images, I used image editing software to select an approximate guess of the size of a face. I then took the ratio of the eigenface size to that face size to get an approximate scale. From that scale, I expanded by about .05-.1 on either side to create some variety in the scales, and then stepped by either .01 or .02 to try a range of scales.
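Below is a small sketch of how that ratio works out, assuming 25x25 eigenfaces (as in my --eigenfaces command above) and a hypothetical face width of 50 pixels measured in the editor; the numbers and variable names are illustrative, not my actual measurements.

#include <cstdio>

int main()
{
    const double eigenfaceWidth = 25.0;  // width of the training faces (25x25 here)
    const double faceWidth      = 50.0;  // rough face width measured in an image editor

    // findFace scales the whole image, so a face this big needs to be shrunk
    // down to roughly the eigenface size: scale ~= eigenfaceWidth / faceWidth.
    double center   = eigenfaceWidth / faceWidth;
    double minScale = center - 0.05;     // widen the range by ~.05-.1 on either side
    double maxScale = center + 0.05;
    double step     = 0.01;              // or .02 for a coarser sweep

    std::printf("--findface ... %.2f %.2f %.2f ...\n", minScale, maxScale, step);
    return 0;
}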

Questions

Question 1: See the commands below for the scale range and step I used for each image.

Question 2: False negatives occur, I think, when the faces are too tilted or not looking directly at the camera. False positives occur because the algorithm is asked to return the top n faces, so those false positives most likely come from the bottom of that list. That means the reconstruction MSE for that window happens to make it look more like the average face than anything else around it.
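A rough sketch of the top-n behaviour I am describing: the candidate windows are simply ranked by their reconstruction MSE and the n lowest are reported, so if fewer than n real faces exist, the tail of the list fills up with background windows. The Candidate struct and topNFaces function are my own illustration, not the skeleton's code.

#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <vector>

// One candidate window from the face search.
struct Candidate {
    int x, y;
    double scale;
    double mse;   // reconstruction error against the eigenfaces
};

static bool byMSE(const Candidate& a, const Candidate& b) { return a.mse < b.mse; }

// Sort by MSE and keep the n best windows, face or not.
std::vector<Candidate> topNFaces(std::vector<Candidate> candidates, std::size_t n)
{
    std::sort(candidates.begin(), candidates.end(), byMSE);
    if (candidates.size() > n)
        candidates.resize(n);  // the tail of this list is where false positives come from
    return candidates;
}

int main()
{
    // Three toy windows; only the first two are "real" faces.
    Candidate c1 = {10, 10, 0.5, 120.0};
    Candidate c2 = {80, 12, 0.5, 150.0};
    Candidate c3 = {40, 90, 0.5, 600.0};  // background patch, still reported when n = 3
    std::vector<Candidate> all;
    all.push_back(c1); all.push_back(c2); all.push_back(c3);

    std::vector<Candidate> best = topNFaces(all, 3);
    for (std::size_t i = 0; i < best.size(); ++i)
        std::printf("face %zu at (%d,%d) mse %.1f\n", i + 1, best[i].x, best[i].y, best[i].mse);
    return 0;
}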

Sample Results

main --findface findfaces\elf.tga eig10.face .45 .55 .01 crop 1 results\cropped_elf.tga

This is a demonstration of the program performing correctly, in my opinion, because the child's face aligns much better with the eigenfaces. It is interesting, though, that a set of faces drawn primarily from 20-22 year olds was still able to capture a child's face as the best match.

main --findface findfaces\self.tga eig10.face .9 1 .01 crop 1 results\cropped_self.tga

I used the self-portrait image that was taken in class because, looking through my Facebook pictures, I found that I don't often look head-on at the camera. The Facebook pictures I tried to use weren't finding my face correctly, since the face isn't set up properly for this method.

main --findface findfaces\group_neutral(2).tga eig10.face 0.74 0.88 0.02 mark 3 results\3faces.tga

This is the test image from the group pictures, and it shows my program finding all three faces properly, as expected.

main --findface findfaces\sax_seniors.tga eig10.face .48 .52 .02 mark 13 results\sax_faces.tga

A couple of the faces are tilted, so I was certain it wouldn't be able to find those. As sad as it sounds, I was impressed that it found about 5 of the 9 faces I consider viable faces to find.

Experiment: Verify Faces

Methodology

I ran Part 3 of the script below to determine the answers to the following questions.

Questions

Question 1: I used the thresholds (1000, 1500, 2000, 2500, 3000, 3500, 4000, 4500) and found that 4500 worked best, as it gave 100% accuracy on the set of correct matches. I found this by choosing a range of values to iterate over and testing the set at each threshold.

Question 2: The false negative rate at the best MSE threshold was 0, since it returned a perfect score on every pair that was correct. The false positive rate was about 58%, which is really high, so to get the best of both worlds I think a threshold of about 2500 is better: it gives a false negative rate of about 12%, but a false positive rate of 27%, which is less than half of what 4500 gives.
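The sweep itself is simple; here is a sketch of the counting behind those rates, with made-up trial data standing in for the actual verifyFace output (the real numbers came from the batch script below).

#include <cstddef>
#include <cstdio>
#include <vector>

// One verifyFace trial: the reported MSE and whether the pair really matches.
struct Trial {
    double mse;
    bool samePerson;
};

int main()
{
    // Placeholder trials just to keep this runnable; not my real results.
    std::vector<Trial> trials;
    Trial a = {1800.0, true};  trials.push_back(a);
    Trial b = {2600.0, true};  trials.push_back(b);
    Trial c = {2300.0, false}; trials.push_back(c);

    const double thresholds[] = {1000, 1500, 2000, 2500, 3000, 3500, 4000, 4500};
    const std::size_t numThresholds = sizeof(thresholds) / sizeof(thresholds[0]);

    for (std::size_t t = 0; t < numThresholds; ++t) {
        int falseNeg = 0, falsePos = 0, genuine = 0, impostor = 0;
        for (std::size_t i = 0; i < trials.size(); ++i) {
            if (trials[i].samePerson) {
                ++genuine;
                if (trials[i].mse > thresholds[t]) ++falseNeg;   // rejected a true match
            } else {
                ++impostor;
                if (trials[i].mse <= thresholds[t]) ++falsePos;  // accepted a wrong match
            }
        }
        std::printf("threshold %.0f: false negatives %d/%d, false positives %d/%d\n",
                    thresholds[t], falseNeg, genuine, falsePos, impostor);
    }
    return 0;
}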

Test Script

The batch script I used to create the information for the experiments can be found here.