CSEP 576 Photometric stereo project

Bob Archer


[Image table: original image, RGB normals, needle normals, albedos, views without albedos, views with albedos]

Discussion

The buddha, cat and rock images worked particularly well. These images have no overlapping parts and no specular highlights, both of which confuse the algorithm.

Since the algorithm only produces a single depth value for each (x, y) coordinate, it does not do a good job with images that have overlapping regions. The beak on the owl and the front legs of the horse are two examples of this: the owl's left eye has been overwritten by the beak, while the front legs of the horse appear joined together. Note also that the tail of the horse appears attached to one of the hind legs.

There probably isn't much we can do about this with the algorithm as it stands. The algorithm assumes that the input is a Monge patch and produces results accordingly. It might be possible to look for large discontinuities in the depth values and assume that they belong to different rather than adjacent parts of the object; however, I suspect that would have its own set of problems. To get really decent results for overlapping portions of the image we would need to take pictures from different angles and combine them.


Specular highlights don't work well because the model does not allow for specular illumination. The two cases where this really shows up are the right eye of the owl and the belly of the owl: the highlight in the middle of the eye produces a spike, and the highlight on the belly gives it a more pointed appearance than it should have.
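
To make the problem concrete, the per-pixel solve can be sketched roughly as below (this is a simplified illustration, not the project code; the function name and array shapes are my own). Under the Lambertian model each intensity is albedo times the dot product of the light direction and the normal, so a specular pixel's extra brightness simply gets folded into the least-squares fit and bends the recovered normal, which is what produces the spikes.

    # Simplified sketch of the per-pixel Lambertian solve (not the project code).
    # Under the model I = albedo * dot(L, N), stacking one row per light gives
    # L g = I with g = albedo * N, solved by least squares. A specular highlight
    # violates this model, so its brightness is absorbed into g and bends the
    # recovered normal.
    import numpy as np

    def solve_pixel_normal(intensities, light_dirs):
        L = np.asarray(light_dirs, dtype=float)    # (k, 3) unit light directions
        I = np.asarray(intensities, dtype=float)   # (k,) pixel brightness per image
        g, *_ = np.linalg.lstsq(L, I, rcond=None)  # g = albedo * N
        albedo = np.linalg.norm(g)
        normal = g / albedo if albedo > 0 else np.array([0.0, 0.0, 1.0])
        return albedo, normal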



Extra 1 - smooth the normals

I added code to allow the normals to be smoothed before their values are used in the surface reconstruction function. Since the normals are just a set of vectors, I can use a convolution function supplied with a kernel to smooth them.

I experimented with four different kernels: a 3 x 3 Gaussian and uniform kernels at three sizes (3 x 3, 5 x 5 and 11 x 11).
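
The smoothing step itself is straightforward; a rough sketch is below, assuming the normals are stored as an (H, W, 3) array and using scipy's convolution for illustration in place of the project's own convolution function. Each component is convolved separately and the result is renormalised to unit length.

    # Rough sketch of smoothing a normal map with a kernel (an assumption-laden
    # illustration, not the project code).
    import numpy as np
    from scipy.ndimage import convolve

    def smooth_normals(normals, kernel):
        # Convolve each of the x, y, z components with the same kernel,
        # then renormalise so the results are still unit normals.
        smoothed = np.stack(
            [convolve(normals[..., c], kernel, mode="nearest") for c in range(3)],
            axis=-1)
        lengths = np.linalg.norm(smoothed, axis=-1, keepdims=True)
        return smoothed / np.maximum(lengths, 1e-8)

    # Kernels of the sizes used in the experiments (the 3 x 3 Gaussian weights
    # shown here are a standard choice, not necessarily the exact ones used).
    uniform_3x3 = np.full((3, 3), 1.0 / 9.0)
    uniform_5x5 = np.full((5, 5), 1.0 / 25.0)
    uniform_11x11 = np.full((11, 11), 1.0 / 121.0)
    gaussian_3x3 = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0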

[Image series: original algorithm, 3 x 3 Gaussian, 3 x 3 uniform smoothing, 5 x 5 uniform smoothing, 11 x 11 uniform smoothing]

The 3 x 3 kernels don't seem to have much effect; it isn't until we get to the 5 x 5 that we really start to see a difference.

The close-up of the eye shows that the spike narrows as the kernel gets bigger, and finally disappears altogether with the 11 x 11 kernel.

Close-ups of the belly don't show much difference. The bump is a much bigger feature than the spike in the eye and would need a much larger kernel to smooth out completely.

The images above are all somewhat deceptive because they use the albedo map, which isn't affected by the smoothing at all. Looking at this series of images without the albedo map really shows the effect of smoothing the normals: edges become less distinct as the convolution kernel gets bigger (look at the edge of the right eye socket).

Conclusion

This gives good results on small errors like the spike in the eye. It does blur the edges, although when combined with the albedo map this blurring is not particularly noticeable. Large specular highlights are not handled. Kernels up to 11 x 11 do not significantly flatten out the object; even though edges have been smoothed, the 3D shape is retained.


Extra 2 - modify the surface

When constructing the matrices for the point ( x, y ), the surface reconstruction algorithm uses the vectors from ( x, y ) to ( x+1, y ) and from ( x, y ) to ( x, y+1 ). I decided to use different vectors. When calculating the surface at point ( x, y ) I use the vectors from ( x-n, y ) to ( x+n, y ) and from ( x, y-n ) to ( x, y+n ). The results of these tests for n = 1 and n = 2 are shown below.
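
The difference between the two constraint sets can be written down compactly; the sketch below is my own paraphrase of the idea rather than the actual code. Under the usual convention that the normal is proportional to (-dz/dx, -dz/dy, 1), a normal (nx, ny, nz) gives the slope constraint nz * dz/dx = -nx, which is discretised either with the forward difference z(x+1, y) - z(x, y) or, in the modified version, with the symmetric difference z(x+n, y) - z(x-n, y), whose right-hand side scales by 2n because the baseline is 2n pixels wide.

    # Sketch of the two constraint variants (a paraphrase, not the project code).
    # Each returned row means nz * (z[b] - z[a]) = rhs, and the rows from all
    # pixels feed the least-squares solve for the depth map.

    def forward_constraints(x, y, nx, ny, nz):
        # Original algorithm: differences from (x, y) to its right and lower neighbours.
        return [((x, y), (x + 1, y), -nx),
                ((x, y), (x, y + 1), -ny)]

    def symmetric_constraints(x, y, nx, ny, nz, n=1):
        # Modified algorithm: differences spanning 2n pixels centred on (x, y);
        # the right-hand side scales by 2n to keep the same slope -nx / nz per pixel.
        return [((x - n, y), (x + n, y), -2 * n * nx),
                ((x, y - n), (x, y + n), -2 * n * ny)]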

[Image series: original algorithm, n = 1, n = 2]

Notice that we seem to have lost the spike in the eye and also the point on the belly, although as n increases the object gets flatter and other artifacts start to appear.

A close-up of the right eye shows that the spike is still there, although greatly reduced in size.

Close-ups of the belly show pretty good results, although again some other artifacts are beginning to show themselves; the image is very grainy.

Conclusion

This gives good results for removing the belly bump; however, it does this by flattening the object (this doesn't show up much in the images but is very noticeable in the viewer). It also introduces other artifacts: there's an odd stippling effect.


Extra 3 - change the weighting

When solving for the normals, each term is weighted by the image intensity. This has the effect of overweighting specular highlights, since they're particularly bright. I tried two methods to reduce the weight given to specular highlights (both are sketched in code below):

  1. Multiply the intensity value by a constant factor (in this case one quarter)
  2. Clamp the intensity value to a given maximum

[Image series: original algorithm, intensity scaled by one quarter, intensity clamped to a maximum]
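
A sketch of the weighted solve with both variants is below (my own reconstruction of the idea, not the original code, building on the earlier per-pixel sketch). One thing worth noting: scaling every weight by the same constant factor scales both sides of every equation equally, so the least-squares solution is unchanged, which on its own would account for the first method having no visible effect; clamping changes the relative weights of only the brightest pixels, so its effect depends on how many pixels exceed the threshold.

    # Sketch of the weighted per-pixel solve with both variants (my own
    # reconstruction, not the original code). Each equation I = albedo * dot(L, N)
    # is multiplied by a weight w derived from the intensity.
    import numpy as np

    def solve_weighted_normal(intensities, light_dirs, scale=1.0, clamp=None):
        I = np.asarray(intensities, dtype=float)
        L = np.asarray(light_dirs, dtype=float)
        w = I * scale                    # method 1: scale every weight by a constant
        if clamp is not None:
            w = np.minimum(w, clamp)     # method 2: cap the weight of bright pixels
        g, *_ = np.linalg.lstsq(L * w[:, None], I * w, rcond=None)
        albedo = np.linalg.norm(g)
        normal = g / albedo if albedo > 0 else np.array([0.0, 0.0, 1.0])
        return albedo, normal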

Conclusion

This produced no noticeable difference; I've obviously misunderstood how this is supposed to work.


Although not specifically intended as an extra, I also modified the code so that the program can be scripted. This made it easier to produce the large number of images required.