CSEP576 Project 3
Ryan Kaminsky

Write-Up
The project steps were broken down very well in the skeleton code. First, the lighting directions were determined from the images of the chrome spheres. Determining the normal vector at the highlight proved a little tricky, but once I worked out that calculation, the lighting directions were easy to compute: the light direction is the mirror reflection of the viewing direction about the highlight's normal.
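As a rough illustration, here is a minimal sketch of that calculation, assuming the sphere's center, radius, and highlight centroid have already been extracted from the chrome-sphere mask and image (the helper name and inputs are hypothetical, not the skeleton's):

import numpy as np

def lighting_direction(center, radius, highlight):
    # center, radius: the chrome sphere's center (cx, cy) and radius
    # in pixels; highlight: centroid (hx, hy) of the brightest pixels.
    cx, cy = center
    hx, hy = highlight
    # Normal at the highlight: x and y come from the offset on the
    # sphere, z from the unit-sphere equation nx^2 + ny^2 + nz^2 = 1.
    nx = (hx - cx) / radius
    ny = (hy - cy) / radius
    nz = np.sqrt(max(0.0, 1.0 - nx * nx - ny * ny))
    n = np.array([nx, ny, nz])
    # Mirror-reflect the viewing direction R = (0, 0, 1) about n:
    # L = 2 (n . R) n - R points toward the light.
    r_view = np.array([0.0, 0.0, 1.0])
    return 2.0 * np.dot(n, r_view) * n - r_view

Depending on the image coordinate convention, the y component may need its sign flipped.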

The next step was to compute the normals from the lighting directions and pixel values. Using the formulas from the class slides, I built the M matrix and b vector for each pixel and solved the resulting 3x3 linear system for the normal.
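A per-pixel sketch of that solve, assuming each image contributes the constraint I_i = L_i . g with g = kd * N and both sides weighted by I_i (the function name and the guard at the end are my own):

import numpy as np

def solve_normal(intensities, lights):
    # intensities: the pixel's value I_i in each of the n images
    # lights: n x 3 array of lighting directions L_i
    # Weighting both sides by I_i downweights shadowed (dark) pixels.
    I = np.asarray(intensities, dtype=float)
    L = np.asarray(lights, dtype=float)
    A = I[:, None] * L                  # rows are I_i * L_i
    rhs = I * I                         # entries are I_i * I_i
    M = A.T @ A                         # the 3x3 M matrix
    b = A.T @ rhs                       # the b vector
    g = np.linalg.lstsq(M, b, rcond=None)[0]
    norm = np.linalg.norm(g)
    # Guard the all-dark case so no NaNs leak into later stages
    # (see the depth discussion below).
    return g / norm if norm > 1e-8 else np.array([0.0, 0.0, 1.0])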

Computing the albedo map was the next step in the process. This involved applying the formula kd = (I . J) / (J . J), where J_i = L_i . N and the dot products sum over the images in the data set, once per color channel.
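A sketch of that formula for a single pixel and color channel (the helper name is hypothetical):

import numpy as np

def albedo(intensities, lights, normal):
    # With J_i = L_i . N, least squares over the images gives
    # kd = (I . J) / (J . J).
    I = np.asarray(intensities, dtype=float)
    J = np.asarray(lights, dtype=float) @ np.asarray(normal, dtype=float)
    denom = float(np.dot(J, J))
    return float(np.dot(I, J)) / denom if denom > 1e-8 else 0.0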

The final step involved computing the depths for the images. To do this, the M matrix and v vector were filled with at most two constraints on z for each pixel in the image. The first time I tested this code, the M^T M matrix failed the symmetry test, which should be impossible since M^T M is symmetric by construction. After investigation I found that this was due to NaN values in the matrix, which I traced back to the normal calculations, where I hadn't accounted for NaN values. Once that code was fixed, the M^T M matrix passed the symmetry test and my depth values appeared correct in the 3D rendering.
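The sketch below shows one way the depth solve and the NaN guard might fit together, using the common tangent-vector constraints nz * (z_right - z) = -nx and nz * (z_down - z) = -ny; the sign convention, the gauge-fixing row, and the function shape are assumptions, not the skeleton's actual code. It also assumes the object mask is a single connected region.

import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def solve_depths(normals, mask):
    # normals: H x W x 3 unit normals; mask: H x W booleans marking
    # object pixels.
    H, W = mask.shape
    idx = -np.ones((H, W), dtype=int)
    idx[mask] = np.arange(mask.sum())
    n_pix = int(mask.sum())
    M = lil_matrix((2 * n_pix + 1, n_pix))
    v = []
    r = 0
    for y in range(H):
        for x in range(W):
            if not mask[y, x]:
                continue
            nx, ny, nz = normals[y, x]
            # Skip degenerate normals; unguarded NaNs here were the
            # source of the "non-symmetric" M^T M described above.
            if not np.isfinite([nx, ny, nz]).all() or abs(nz) < 1e-8:
                continue
            if x + 1 < W and mask[y, x + 1]:    # nz*(z_right - z) = -nx
                M[r, idx[y, x]] = -nz
                M[r, idx[y, x + 1]] = nz
                v.append(-nx)
                r += 1
            if y + 1 < H and mask[y + 1, x]:    # nz*(z_down - z) = -ny
                M[r, idx[y, x]] = -nz
                M[r, idx[y + 1, x]] = nz
                v.append(-ny)
                r += 1
    # Pin one depth to zero: depth is only defined up to a constant,
    # and this keeps M^T M nonsingular.
    M[r, 0] = 1.0
    v.append(0.0)
    r += 1
    M = M[:r].tocsr()
    v = np.asarray(v)
    z = spsolve((M.T @ M).tocsr(), M.T @ v)   # solve M^T M z = M^T v
    depth = np.full((H, W), np.nan)
    depth[mask] = z
    return depth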


Buddha
[Figures: Normals, Albedo Map, 3D View 1, 3D View 2]


Owl
[Figures: Normals, Albedo Map, 3D View 1, 3D View 2]


Cat
[Figures: Normals, Albedo Map, 3D View 1, 3D View 2]


Extras  
Weighting Constraints. In this enhancement I weighted the constraints in the M matrix and v vector when calculating the depths. The idea is that dark pixels carry less reliable information than bright pixels and should therefore receive less weight. To achieve this, I weighted each constraint by the sum of the pixel's values across all images: pixels that were dark in every image got a low weight, pixels that were bright in every image got a high weight, and pixels whose values varied across images fell in between. This enhancement produced interesting results. In some cases it had the desired effect of reducing abnormal normals, such as around the owl's eye, but it also seems to distort the normals in other areas. For instance, on the cat, the muzzle is significantly more defined and less smooth than in the original solution.
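A sketch of how this weighting might slot into the constraint loop of the solve_depths sketch above; intensity_sum is a hypothetical H x W array holding each pixel's summed value across all input images:

# Inside the constraint loop, scale both sides of each constraint by a
# per-pixel weight derived from the summed intensities; the y-direction
# constraint is scaled the same way.
w = intensity_sum[y, x] / intensity_sum.max()
M[r, idx[y, x]] = -w * nz
M[r, idx[y, x + 1]] = w * nz
v.append(-w * nx)
r += 1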
 
[Figures: Original vs. Weighted]
Weighting Normals. In the standard project solution, when solving for the normals we weight each side of the equation by the image intensity to reduce the influence of shadows in the image. As an alternative, I weighted each side of the normal equation by the average of the pixel's intensities over all of the images. This results in a more gradual blending of the normals, which can be seen in the RGB normal image for the cat: on the left side of the cat's neck the blending is more gradual than in the original, and looking closely at the cat's whiskers shows the shallower normals this average weighting produces. The effects carry through to the albedo map and depths computed from these normals, which are much smoother and more gradual than the originals.
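A sketch of this variant, mirroring the solve_normal sketch above (the function name is hypothetical):

import numpy as np

def solve_normal_avg(intensities, lights):
    # Weight every constraint for the pixel by its mean intensity
    # across all images, rather than by each image's own I_i.
    I = np.asarray(intensities, dtype=float)
    L = np.asarray(lights, dtype=float)
    w = I.mean()
    A = w * L                       # rows are w * L_i
    rhs = w * I                     # entries are w * I_i
    g = np.linalg.lstsq(A.T @ A, A.T @ rhs, rcond=None)[0]
    norm = np.linalg.norm(g)
    return g / norm if norm > 1e-8 else np.array([0.0, 0.0, 1.0])

Since the weight is a single scalar per pixel, the difference from the original solution comes from dropping the per-image I_i weighting rather than from the scalar itself.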
 
[Figures: Original vs. Weighted]

Gaussian Filter on Normals. In yet another attempt to smooth the normals and ultimately the depths, once I calculated the normals I applied a Gaussian smoothing kernel to them. This averages each normal with its neighbors, which should make the normal field more continuous (and smoother). The effect is visible on the Buddha, where the RGB normal map and the depths are smoother (and blurrier). On the cat images the smoothing shows in the whiskers, which are not nearly as defined as in the original. The owl images show slight improvement around the eye, though more improvement is still possible.
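A sketch of that smoothing pass, assuming scipy's gaussian_filter and an arbitrary sigma (the value is not from the project):

import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_normals(normals, sigma=1.0):
    # Smooth each component of the H x W x 3 normal field
    # independently, then renormalize back to unit length.
    smoothed = np.stack(
        [gaussian_filter(normals[..., c], sigma) for c in range(3)],
        axis=-1)
    norms = np.linalg.norm(smoothed, axis=-1, keepdims=True)
    return smoothed / np.clip(norms, 1e-8, None)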
 
[Figures: Original vs. Gaussian]