Adam Stritzel
CSEP 576
Project 3


Buddha
(Result images: Normals, Depth, Albedo)


Cat
(Result images: Normals, Depth, Albedo)


Owl
(Result images: Normals, Depth, Albedo)


Thoughts

I feel that this procedure worked surprisingly well across all of the example datasets. I'm pleased with the results, especially after the frustration in the panorama project caused by non-constant exposures.

There seem to be two major places for improvement in the algorithm. The most glaring, in my opinion, is the behavior of the normals around the silhouette edge of the object. You can see this in the RGB normal view, especially along the bottom of the cat and the owl: rather than a smooth transition across colors, there are several places where the color jumps abruptly. If I were to try to improve this behavior, I might a) enforce smoothness of the normals around the silhouette, b) extend the surface by extrapolating a short distance beyond it (essentially a variant of (a)), or c) enforce that the normal along the silhouette is perpendicular to the viewing direction. I'm not sold on the last suggestion, because for a non-smooth object this isn't necessarily true, but I'd guess it would be good enough for most practical purposes.
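To make option (a) concrete, here is a minimal sketch of the kind of post-processing I have in mind, assuming numpy/scipy, an HxWx3 normal map, and a boolean object mask; the function and parameter names are mine, not the assignment's.

    import numpy as np
    from scipy import ndimage

    def smooth_silhouette_normals(normals, mask, band_width=3, sigma=2.0):
        # normals: HxWx3 float array; mask: HxW bool object mask.
        # Pixels within band_width of the silhouette edge, inside the object.
        interior = ndimage.binary_erosion(mask, iterations=band_width)
        band = mask & ~interior

        # Masked (normalized-convolution) blur, so background pixels do not
        # bleed into the average near the edge.
        w = mask.astype(np.float64)
        blurred = np.stack(
            [ndimage.gaussian_filter(normals[..., c] * w, sigma) for c in range(3)],
            axis=-1)
        weight = ndimage.gaussian_filter(w, sigma)[..., None]
        blurred = blurred / np.maximum(weight, 1e-8)

        # Replace normals only in the boundary band, then renormalize.
        out = normals.copy()
        out[band] = blurred[band]
        length = np.linalg.norm(out, axis=-1, keepdims=True)
        out = out / np.maximum(length, 1e-8)
        return np.where(mask[..., None], out, normals)

Option (b) would look similar, except the blurred normals would be written into a thin band just outside the mask instead of inside it.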

The other place for improvement is very dark regions. You can see the problem in the owl's right eye (left in the image). The first depth image hides the problem somewhat because the camera is looking into the owl's eye, but the second image shows quite clearly the pupil jutting out in an ugly fashion. I believe the problem is the nearly absolute-black color of the owl's eyes: the image of a perfectly black body would not change under the various lighting conditions, so the reconstruction has essentially no information to work with there. One way of addressing this would be to recognize such regions and enforce some local smoothing. Any region that stays very dark across all lights will be problematic and can be searched for on exactly that basis. While smoothing might not be the right thing to do in all cases, it would make the final model simpler than the complex jaggies seen in the owl, a compromise that is likely acceptable for many applications.
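The detection step could be as simple as the following sketch (assuming a list of grayscale images scaled to [0, 1]; the threshold value is a guess, not something I have tested):

    import numpy as np
    from scipy import ndimage

    def find_always_dark(images, threshold=0.02, grow=2):
        # Flag pixels that stay below the threshold under every light, then
        # dilate slightly so the smoothing region covers the whole problem area.
        stack = np.stack(images, axis=0)          # N x H x W
        dark = np.all(stack < threshold, axis=0)
        return ndimage.binary_dilation(dark, iterations=grow)

The resulting mask could then drive local smoothing of the recovered normals or depths, for example with the same masked blur sketched above.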

Since you requested feedback on this project, I'd say that it held my interest, but I was a bit disappointed by the prohibitive difficulty of generating my own datasets. One appealing aspect of the first two projects was being able to put my own touch on the final artifact.

Code-wise, I think everything was in place, and I wouldn't have known this was the first run-through of the project. My only mild gripe is that, since we don't solve the system on our own, it is understandable that everyone I've talked to was surprised to find NaN results in the normal map. I don't think this behavior needs to change, but it might be good to warn students that such results are expected.
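For what it's worth, the NaNs seem consistent with the all-black pixels discussed above: if the recovered per-pixel vector g is (nearly) zero, the albedo |g| is zero and normalizing g divides by zero. That is my reading of the behavior, not something confirmed by the assignment; a small guard like the following sketch would at least make it explicit:

    import numpy as np

    def normal_and_albedo(g, eps=1e-8):
        # g: HxWx3 vector recovered from the per-pixel least squares solve.
        albedo = np.linalg.norm(g, axis=-1, keepdims=True)
        # Where the albedo is ~0 the normal is undefined; fall back to a
        # camera-facing normal instead of producing NaN.
        normal = np.where(albedo > eps,
                          g / np.maximum(albedo, eps),
                          np.array([0.0, 0.0, 1.0]))
        return normal, albedo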

Oh, and one more bit that I found awkward: the separate functions for filling in the M matrix and then the V vector. I would have preferred a single function. It's not a big deal, but it felt odd to break all of the equations up across two functions; it is much clearer to state things one full set of constraints at a time.
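To illustrate what I mean, here is a sketch of the depth-from-normals setup as I understand it (the constraint form, names, and the choice to build a sparse constraint matrix directly are my reconstruction, not the project's actual interface), where each constraint contributes its row of the matrix and its entry of the right-hand side in the same place:

    import numpy as np
    from scipy import sparse

    def build_depth_system(normals, mask):
        H, W, _ = normals.shape
        index = -np.ones((H, W), dtype=int)
        index[mask] = np.arange(mask.sum())      # one unknown depth per masked pixel

        rows, cols, vals, rhs = [], [], [], []

        def add_constraint(coeffs, value):
            # coeffs: list of (unknown index, coefficient) for one equation.
            r = len(rhs)
            for idx, c in coeffs:
                rows.append(r)
                cols.append(idx)
                vals.append(c)
            rhs.append(value)

        for y in range(H):
            for x in range(W):
                if not mask[y, x]:
                    continue
                nx, ny, nz = normals[y, x]
                # Tangent constraints: nz * (z[y, x+1] - z[y, x]) = -nx, and
                #                      nz * (z[y+1, x] - z[y, x]) = -ny.
                if x + 1 < W and mask[y, x + 1]:
                    add_constraint([(index[y, x + 1], nz), (index[y, x], -nz)], -nx)
                if y + 1 < H and mask[y + 1, x]:
                    add_constraint([(index[y + 1, x], nz), (index[y, x], -nz)], -ny)

        M = sparse.coo_matrix((vals, (rows, cols)), shape=(len(rhs), int(mask.sum())))
        return M, np.array(rhs)

Whether M here is the raw constraint matrix or its normal-equations form doesn't change the point; keeping each equation's coefficients and right-hand side together reads more naturally than filling them in across two passes.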