
Project 2: Ray Tracing


Assigned Date: Tue. 10/27
Project Due Date: Thu. 11/12

Artifact Due Date: Tue. 11/17

Help Session: Thu. 10/29
AC 678 at 4:00

 

Project Description

Trace is a program that constructs recursively ray-traced images of fairly simple scenes. It is similar in functionality to the POV-Ray raytracer. You might try browsing around the POV-Ray web site for artifact and extra credit inspiration. And POV-Ray can be downloaded for free, so if you want a taste of what a really powerful raytracer can do, then go check it out!

Getting Started

Getting the work directory: To get started, check out the skeleton code from your SVN repository.

Sanity checks: Run the sample solution executable and try rendering some scenes. You'll find some sample scene files (all the files with the .ray extension) in the "scenes" folder. These are text files that describe some geometry and the materials that should be applied to it. Refer to this page for a brief explanation. If you run into errors, try the executable sample_solution_no_threads.exe; the sample solution uses multi-threading to speed up the ray tracing, and your system may not support it.

Compile and run the skeleton code to check that it builds and runs without errors. There's a chance that the code won't compile or run due to multi-threading issues. If you hit such errors, go here.

Visual debugger for your code: The raytracer features a debugging window that allows you to see individual rays bouncing around the scene. This window provides a lot of visual feedback that can be enormously useful when debugging your application. Look at this web page for a detailed explanation of how to use the debugging window. Note that you do not need to make any changes to the code for the debugging display.

Understanding the code structure: The Trace project is a very large collection of files and object-oriented C++ code. Fortunately, you only need to work directly with a subset of it. However, you will probably want to spend a bit of time up front getting familiar with the layout and class hierarchy, so that when you code you know what classes and methods are available for your use. We strongly suggest that you go through the project road map, which gives a high-level view of the code and what you need to do.

The ray tracer can run in both text mode and graphics mode; text mode is usually faster. Running without any arguments executes the program in graphics mode. For usage, see 'ray --help'.

Creating Your Own Scenes

As you get into the project, you'll probably want to use some scenes of your own invention. There is a help page available about the file format. That page also describes the specifications of all the primitives you are required to implement. To create realistic refractive objects, you'll need their indices of refraction.

Testing Your Project

Once you implement the requirements listed below, you'll be able to test your ray tracer with this automated tool (and this one for the Mac). The tool compares your program's output against the sample solution's. A readme is included to help you use the tool. It is in your best interest to test with the tool, as the TA will be using it to get a sense of how your program compares to the sample solution. Don't worry if your solution doesn't give exactly the same output (rounding errors, among other things, are a fact of life); the tool is only meant to give an idea of where to look for problems. Note that your bells and whistles must be disabled for the tool to accurately check against the sample.

Required Functionality

You are required to implement:
  1. The Whitted illumination model, which includes Blinn-Phong shading (emissive, ambient, diffuse, and specular terms) as well as reflection and refraction terms. You only need to handle directional and point light sources, i.e. no area lights, but you should be able to handle multiple lights.
  2. Ray-sphere intersection (see the sketch after this list).
  3. Ray-triangle intersection.  You may take an approach in 3D (described here) or take the faster 2D approach described in the "affine transformations" and "ray tracing" lectures.
  4. Phong interpolation of normals on triangle meshes. (See this example.)
  5. Anti-aliasing. Regular super-sampling is acceptable; more advanced anti-aliasing will be considered as an extension.
  6. An acceleration data structure that speeds up the intersection computations in large scenes.
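
For requirement 2, here is a minimal sketch of ray-sphere intersection against a unit sphere centered at the origin (the usual object-space setup). The Vec3 type and function names are illustrative, not the skeleton's actual API:

    #include <cmath>

    struct Vec3 { double x, y, z; };

    static double dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // Intersect ray p + t*d with the unit sphere at the origin.
    // Substituting into |p + t*d|^2 = 1 gives a quadratic in t; with d
    // normalized, the leading coefficient is 1.
    bool intersectUnitSphere(const Vec3& p, const Vec3& d, double& t) {
        const double eps = 1e-6;                  // rejects self-intersections
        double b = 2.0 * dot(p, d);
        double c = dot(p, p) - 1.0;
        double disc = b * b - 4.0 * c;
        if (disc < 0.0) return false;             // ray misses the sphere
        double s = std::sqrt(disc);
        double t0 = (-b - s) / 2.0;               // nearer root
        double t1 = (-b + s) / 2.0;               // farther root
        t = (t0 > eps) ? t0 : t1;                 // nearest hit in front of the ray
        return t > eps;
    }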

Notes on Whitted's illumination model

The Whitted illumination model consists of a direct lighting term (requiring you to trace rays towards each light, and then apply Blinn-Phong shading) and recursively computed reflection and refraction terms. (Notice that the number of reflected and refracted rays that will be calculated is limited by the "depth" setting in the ray tracer. This means that to see reflections and refraction, you must set the depth to be greater than zero!)
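
In outline, the recursion looks like the sketch below. Every name here (Isect, shadeBlinnPhong, reflectedRay, and so on) is a hypothetical stand-in for whatever the skeleton actually provides; only the control flow is the point:

    struct Vec3 { double x, y, z; };
    struct Ray  { Vec3 origin, dir; };
    struct Isect {
        bool hit;
        Vec3 kr, kt;     // reflective and transmissive coefficients
    };

    // Hypothetical helpers standing in for the skeleton's own machinery.
    Isect intersect(const Ray& r);                         // nearest hit in the scene
    Vec3  shadeBlinnPhong(const Isect& i, const Ray& r);   // direct term, with shadow rays
    Ray   reflectedRay(const Isect& i, const Ray& r);
    bool  refractedRay(const Isect& i, const Ray& r, Ray& out);  // false on TIR
    Vec3  add(Vec3 a, Vec3 b);
    Vec3  mul(Vec3 a, Vec3 b);                             // component-wise product

    Vec3 traceRay(const Ray& r, int depth) {
        Isect i = intersect(r);
        if (!i.hit) return Vec3{0, 0, 0};                  // background

        // Direct lighting: emissive + ambient + per-light Blinn-Phong,
        // with a shadow ray traced toward each light.
        Vec3 color = shadeBlinnPhong(i, r);

        if (depth > 0) {
            // Reflection term, weighted by the surface's reflectivity.
            color = add(color, mul(i.kr, traceRay(reflectedRay(i, r), depth - 1)));
            // Refraction term, skipped on total internal reflection.
            Ray t;
            if (refractedRay(i, r, t))
                color = add(color, mul(i.kt, traceRay(t, depth - 1)));
        }
        return color;
    }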

If an intersection point is inside a surface (based on testing the ray direction against the normal), you should negate the normal before doing any shading calculations or casting reflected/refracted rays.  The intersection routines generally return outward-pointing normals, which is the opposite of what you need if you're inside the surface.

The skeleton code doesn't implement Phong interpolation of normals. You need to add code for this (only for meshes with per-vertex normals).
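
A minimal sketch of that interpolation, assuming you already have the barycentric coordinates (alpha, beta, gamma) of the hit point and the three vertex normals; the names are illustrative:

    #include <cmath>

    struct Vec3 { double x, y, z; };

    // Phong normal interpolation: blend the three vertex normals using the
    // barycentric coordinates of the hit point, then renormalize. Assumes
    // the vertex normals are not all zero.
    Vec3 interpolateNormal(const Vec3& n0, const Vec3& n1, const Vec3& n2,
                           double alpha, double beta, double gamma) {
        Vec3 n{ alpha * n0.x + beta * n1.x + gamma * n2.x,
                alpha * n0.y + beta * n1.y + gamma * n2.y,
                alpha * n0.z + beta * n1.z + gamma * n2.z };
        double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        return Vec3{ n.x / len, n.y / len, n.z / len };
    }

The same barycentric weights can later be reused for per-vertex material interpolation (an extra credit item below).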

You may assume that objects are not nested inside other objects. If a refracted ray enters a solid object, it will pass completely through the object and back outside before refracting into another object. Improving your refraction code to handle more general cases such as a refractive sphere contained inside another refractive sphere is an extra credit option as described below. In addition, you may assume that the camera itself is not placed inside an object. The initial rays that are sent out through the projection plane will always be moving through air.
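
For the refraction direction itself, the standard vector form of Snell's law with a total internal reflection check looks like this sketch. It assumes d and n are unit vectors, n faces against the incoming ray (i.e., after any flipping described above), and eta = n_i / n_t is the ratio of indices across the interface:

    #include <cmath>

    struct Vec3 { double x, y, z; };

    static Vec3 scale(const Vec3& v, double s) { return {v.x * s, v.y * s, v.z * s}; }
    static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Refract unit direction d through unit normal n (facing the incoming ray).
    // eta = n_incident / n_transmitted. Returns false on total internal reflection.
    bool refractDir(const Vec3& d, const Vec3& n, double eta, Vec3& t) {
        double cosI  = -dot(d, n);                       // cosine of incidence angle
        double sin2T = eta * eta * (1.0 - cosI * cosI);  // Snell: sin^2 of transmitted angle
        if (sin2T > 1.0) return false;                   // total internal reflection
        double cosT = std::sqrt(1.0 - sin2T);
        // t = eta*d + (eta*cosI - cosT)*n
        t = sub(scale(d, eta), scale(n, cosT - eta * cosI));
        return true;
    }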

Some of the provided scenes rely on features that are not required for this assignment, and so will not render correctly without them. shell.ray, sier.ray, and house.ray all require material interpolation support (which is extra credit) to appear as they do in the sample solution, and texture_box.ray and texture_reflection.ray use texture mapping (also extra credit). So, if you are testing with these scenes, do not worry if your results differ (as long as other scenes appear fine).

Here is a document that lists equations that will come in handy when writing your shading and ray tracing algorithms.
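
For reference, the direct term of that model takes the standard Blinn-Phong form (shown here in LaTeX notation; the symbols follow the usual conventions, which may differ slightly from the linked document's):

    I = k_e + k_a I_a
        + \sum_{\ell} A_\ell \, I_\ell \left[ k_d \max(N \cdot L_\ell, 0)
        + k_s \max(N \cdot H_\ell, 0)^{n_s} \right]
        + k_r I_{\mathrm{reflect}} + k_t I_{\mathrm{refract}}

where A_\ell folds in distance and shadow attenuation for light \ell, H_\ell = (L_\ell + V) / \lVert L_\ell + V \rVert is the half-vector, and the last two terms are the recursively traced reflection and refraction contributions.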

Anti-aliasing

Once you've implemented the shading model and can generate images, you will notice that the images you generate are filled with "jaggies". You should implement an anti-aliasing technique to smooth these rough edges. In particular, you are required to perform super-sampling and averaging down. You should provide a slider that allows the user to control the number of samples per pixel (1, 4, 9 or 16 samples). You need only implement a box filter for the averaging-down step. More sophisticated anti-aliasing methods are left as bells and whistles below.
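
A minimal sketch of regular supersampling with a box filter. The names are illustrative (tracePixelRay is a hypothetical stand-in for the skeleton's camera-ray routine), and n is the samples per side, so n = 1, 2, 3, 4 gives the 1/4/9/16 options:

    struct Vec3 { double x, y, z; };

    // Hypothetical: trace a ray through normalized image coords (u, v).
    Vec3 tracePixelRay(double u, double v);

    // Regular n x n supersampling with box-filter averaging for one pixel.
    Vec3 renderPixel(int px, int py, int width, int height, int n) {
        Vec3 sum{0, 0, 0};
        for (int i = 0; i < n; ++i) {
            for (int j = 0; j < n; ++j) {
                // Place samples at the centers of an n x n subgrid of the pixel.
                double u = (px + (i + 0.5) / n) / width;
                double v = (py + (j + 0.5) / n) / height;
                Vec3 c = tracePixelRay(u, v);
                sum.x += c.x; sum.y += c.y; sum.z += c.z;
            }
        }
        double inv = 1.0 / (n * n);           // box filter: plain average
        return Vec3{ sum.x * inv, sum.y * inv, sum.z * inv };
    }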

Accelerated ray-surface intersection

The goal of this portion of the assignment is to speed up the ray-surface intersection module in your ray tracer. In particular, we want you to improve the running time of the program when ray tracing complex scenes containing large numbers of objects (usually triangles). There are two basic approaches:
  1. Specialize and optimize the ray-object intersection test to run as fast as possible.
  2. Add data structures that speed the intersection query when there are many objects.

Most of your effort should be spent on approach 2, i.e., reducing the number of ray-object intersection tests. You are free to experiment with any of the acceleration schemes described in the textbook, or in Chapter 6, ''A Survey of Ray Tracing Acceleration Techniques,'' of Glassner's book. Of course, you are also free to invent new acceleration methods.
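
Whatever structure you choose (BVH, uniform grid, octree, kd-tree), the workhorse is a cheap ray-vs-axis-aligned-bounding-box test, such as the slab method sketched below; the component arrays for the ray and box are illustrative:

    #include <algorithm>

    // Slab test: does the ray o + t*d hit the AABB [bmin, bmax] within (t0, t1)?
    // Arrays hold x/y/z components; invD is 1/d per axis, precomputed per ray.
    bool hitAABB(const double o[3], const double invD[3],
                 const double bmin[3], const double bmax[3],
                 double t0, double t1) {
        for (int a = 0; a < 3; ++a) {
            double tNear = (bmin[a] - o[a]) * invD[a];
            double tFar  = (bmax[a] - o[a]) * invD[a];
            if (tNear > tFar) std::swap(tNear, tFar);  // handle negative direction
            t0 = std::max(t0, tNear);                  // tighten entry time
            t1 = std::min(t1, tFar);                   // tighten exit time
            if (t0 > t1) return false;                 // slabs don't overlap: miss
        }
        return true;
    }

A bounding volume hierarchy, for instance, only descends into a node's children (or tests its primitives) when this box test passes, which is what turns thousands of per-ray triangle tests into a few dozen.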

Make sure that you design your acceleration module so that it is able to handle the current set of geometric primitives - that is, triangles, spheres, squares, boxes, and cones.

The sample scenes include several simple scenes and three complex test scenes: trimesh1, trimesh2, and trimesh3. You will notice that trimesh1 has per-vertex normals and materials, and trimesh2 has per-vertex materials but not normals. Per-vertex normals and materials imply interpolation of these quantities at the current ray-triangle intersection point (using barycentric coordinates).

Acceleration grading criteria

The test scenes will each contain up to thousands of primitives. A portion of your grade for this assignment will be based on the speed of your ray tracer running on these scenes. To earn full credit, there should be a substantial improvement in the performance of your ray tracer after accelerating it, though it certainly does not need to be as fast as the sample solution.  In addition, the fastest performing ray tracers will earn some extra credit.

We will evaluate the performance of all the ray tracers on the same machine running 32-bit Windows.  You will need to compile your ray tracer on one of the department's Windows Terminal Servers.  Details on exactly how to do this are here.

For grading on rendering speed, the scenes will be traced at a specified size with one ray traced per pixel, and the rays should be traced with 5 levels of recursion, i.e., each ray should bounce 5 times. If during these bounces a ray strikes a surface with zero specular reflectance and zero refraction, stop there. At each bounce, rays should be traced to all light sources, including shadow testing. The command for testing rendering speed looks like:

ray -b -w 400 -r 5 in.ray out.bmp
[For fairness, don't include other stopping criteria (except for the one mentioned above) for the -b option.]

You are welcome to precompute scene-specific (but not viewpoint-specific) acceleration data structures and make other time-memory tradeoffs, but your precomputation time and memory use should be reasonable. Don't try to customize your ray tracer for the test scenes; we will also use other scenes during grading. If you have any questions about what constitutes a fair acceleration technique, ask us. Coding your inner loops in machine language is unfair. Using multiple processors is unfair. Compiling with optimization enabled is fair. In general, don't go overboard tuning aspects of your system that aren't related to tracing rays.

Artifact Requirements

You are required to submit one artifact per person. Name the file <your-cse-netid>.jpg or <your-cse-netid>.png. The scene you trace cannot simply be one of the provided .ray files; it must at least be modified in some way, or be a completely new scene. With each artifact, a text file describing the scene may also be submitted (<your-cse-netid>.txt). This can be as simple as two sentences describing the placement of the objects and lights to get the desired effect, or a detailed description of the bells and whistles used to create the scene. The text file will be linked with your artifact.

If you implement any bells or whistles, you need to provide examples of those features in effect. You should present your extra credit features at grading time, either by rendering scenes that demonstrate the features during the grading session or by showing images you rendered in advance. You might need to pre-render images if they take a while to compute (longer than 30 seconds). These pre-rendered examples, if needed, must be turned in with your project on the project due date. The scenes you use for demonstrating features can be different from what you end up submitting as an artifact.

Bells and Whistles

This assignment is large, and, furthermore, the optimization element is completely open-ended, so you can profitably work on that until the project is due. Therefore we don't necessarily expect a bunch of bells and whistles. We can't stop you, though. Below are some interesting extensions to this project.

Shirley's book is a reasonable resource for implementing bells and whistles. In addition, Glassner's book on ray tracing is a very comprehensive exposition of a whole bunch of ways ray tracing can be expanded or optimized (and it's really well written). Foley's book may provide additional explanations for bells and whistles. If you're planning on implementing any of these, you are encouraged to read the relevant sections in these books as well.

Remember that you'll need to establish to our satisfaction that you've implemented the extension! You should have test cases that clearly demonstrate the effect of the code you've added to the ray tracer. Sometimes different extensions can interact, making it hard to tell how each contributed to the final image, so it's also helpful to add controls to selectively enable and disable your extensions. In fact, we require that all extensions be disabled by default, with controls to turn them on one by one.

Here are some examples of effects you can get with ray tracing. (None of these were created with past students' ray tracers.)

[whistle] Implement an adaptive termination criterion for tracing rays, based on ray contribution. Control the adaptation threshold with a slider.
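
A minimal sketch of the idea, assuming you thread the accumulated path weight through the recursion; all names are illustrative:

    struct Vec3 { double x, y, z; };

    // 'weight' is the product of all kr/kt coefficients applied so far along
    // this ray path; once it falls below the slider threshold in every
    // channel, further bounces cannot visibly change the pixel, so stop.
    bool worthTracing(const Vec3& weight, double threshold) {
        return weight.x > threshold || weight.y > threshold || weight.z > threshold;
    }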

[whistle] Modify your antialiasing to implement the first stage of distribution ray tracing by jittering the sub-pixel samples. The noise introduced by jittering should be evident when casting 1 ray per pixel.
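
Relative to the regular supersampling sketch above, the only change is replacing the fixed cell-center offset (i + 0.5) / n with a random point inside each cell; std::rand is used here only for brevity:

    #include <cstdlib>

    // Random offset within cell i of an n-cell subdivision of the pixel.
    double jitteredOffset(int i, int n) {
        double r = std::rand() / (RAND_MAX + 1.0);   // uniform in [0, 1)
        return (i + r) / n;
    }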

[whistle] Modify shadow attenuation to use Beer's law, so that thicker objects cast darker shadows than thinner ones with the same transparency constant.
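
Beer's law attenuates by the distance traveled through the medium rather than by a flat constant. A sketch of one common mapping, which treats the material's transparency constant kt (in (0, 1]) as the attenuation over unit distance:

    #include <cmath>

    // Beer's law: attenuation falls off exponentially with the distance d
    // the ray travels inside the object, so thick objects block more light.
    // attenuation = kt^d = exp(d * ln(kt)).
    double beerAttenuation(double kt, double d) {
        return std::pow(kt, d);
    }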

[whistle] Include a Fresnel term so that the amount of reflected and refracted light at a transparent surface depends on the angle of incidence and the index of refraction.
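
Schlick's approximation is the usual inexpensive way to get this behavior; a sketch, where cosTheta is the cosine of the angle of incidence (use the transmitted angle's cosine instead when exiting the denser medium):

    #include <cmath>

    // Schlick's approximation to the Fresnel reflectance.
    // n1, n2 are the indices of refraction on each side of the interface.
    // The transmitted fraction is then 1 - R for a non-absorbing dielectric.
    double fresnelSchlick(double n1, double n2, double cosTheta) {
        double r0 = (n1 - n2) / (n1 + n2);
        r0 *= r0;                                      // reflectance at normal incidence
        return r0 + (1.0 - r0) * std::pow(1.0 - cosTheta, 5.0);
    }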

[whistle] Add code for interpolating the material properties of a triangle's vertices.

[bell] Implement spot lights (described in the Angel text). You'll have to extend the parser to handle spot lights, but don't worry, this is low-hanging fruit.

[bell] Add a menu option that lets you specify a background image to replace the environment's ambient color during the rendering. That is, any ray that goes off into infinity behind the scene should return a color from the loaded image, instead of just black. The background should appear as the backplane of the rendered image, with suitable reflections and refractions applied to it.

[bell] Deal with overlapping objects intelligently.  In class, we discussed how to handle refraction for non-overlapping objects in air.  This approach breaks down when objects intersect or are wholly contained inside other objects. Add support to the refraction code for detecting this and handling it in a more realistic fashion.  Note, however, that in the real world, objects can't coexist in the same place at the same time. You will have to make assumptions as to how to choose the index of refraction in the overlapping space.  Make those assumptions clear when demonstrating the results.

[bell+whistle] Implement antialiasing by adaptive supersampling, as described in Glassner, Chapter 1, Section 4.5 and Figure 19 or in Foley, et al., 15.10.4.  For full credit, you must show some sort of visualization of the sampling pattern that results.  For example, you could create another image where each pixel is given an intensity proportional to the number of rays used to calculate the color of the corresponding pixel in the ray traced image.  Implementing this bell/whistle is a big win -- nice antialiasing at low cost.

[bell+whistle] Implement more versatile lighting controls, such as the Warn model described in Foley 16.1.5. This allows you to do things like control the shape of the projected light.

[bell] [bell] Add texture mapping support to the program. To get full credit for this, you must add uv coordinate mapping to all the built-in primitives (sphere, box, cylinder, cone) except trimeshes. The square object already has coordinate mapping implemented for your reference. The most basic kind of texture mapping is to apply the map to the diffuse color of a surface. But many other parameters can be mapped. Reflected color can be mapped to create the sense of a surrounding environment. Transparency can be mapped to create holes in objects. Additional (variable) extra credit will be given for such additional mappings. The basis for this bell is built into the skeleton, and the parser already handles the types of mapping mentioned above. Additional credit will be awarded for quality implementation of texture mapping on general trimeshes.

[bell] [bell] Implement bump mapping (Watt 8.4; Foley, et al. 16.3.3). Check this out!

[bell] [bell] Implement solid textures or some other form of procedural texture mapping, as described in Foley, et al., 20.1.2 and 20.8.3. Solid textures are a way to easily generate a semi-random texture like wood grain or marble.

[bell] [bell] Add some new types of geometry to the ray tracer. Consider implementing tori or general quadrics. Many other objects are possible here.

[bell] [bell] Extend the ray-tracer to create Single Image Random Dot Stereograms (SIRDS). Click here to read a paper on how to make them. Also check out this page of examples. Or, create 3D images like this one, for viewing with red-blue glasses.

[bell] [bell] for the first, [bell] for each additional. Implement distribution ray tracing to produce one or more of the following effects: depth of field, soft shadows, motion blur, glossy reflection.

[bell] [bell] [bell] Implement 3D fractals and extend the .ray file format to provide support for these objects. Note that you are not allowed to "fake" this by just drawing a plain old 2D fractal image, such as the usual Mandelbrot Set. Similarly, you are not allowed to cheat by making a .ray file that arranges objects in a fractal pattern, like the sier.ray test file. You must raytrace an actual 3D fractal, and your extension to the .ray file format must allow you to control the resulting object in some interesting way, such as choosing different fractal algorithms or modifying the base pattern used to produce the fractal.

Here are two really good examples of raytraced fractals that were produced by students during a previous quarter: Example 1, Example 2
And here are a couple more interesting fractal objects: Example 3, Example 4

[bell] [bell] [bell] [bell] Implement 4D quaternion fractals and extend the .ray file format to provide support for these objects. These types of fractals are generated by using a generalization of complex numbers called quaternions. What makes the fractal really interesting is that it is actually a 4D object. This is a problem because we can only perceive three spatial dimensions, not four. In order to render a 3D image on the computer screen, one must "slice" the 4D object with a three dimensional hyperplane. Then the points plotted on the screen are all the points that are in the intersection of the hyperplane and the fractal. Your extension to the .ray file format must allow you to control the resulting object in some interesting way, such as choosing different generating equations, changing the slicing plane, or modifying the surface attributes of the fractal.

Here are a few examples, which were created using the POV-Ray raytracer (yes, POV-Ray has quaternion fractals built in!): Example 1, Example 2, Example 3, Example 4. And, this is an excellent example from a previous quarter.

To get started, visit this web page to brush up on your quaternion math. Then go to this site to learn about the theory behind these fractals. Then, you can take a look at this page for a discussion of how a raytracer can perform intersection calculations.

[bell] [bell] [bell] [bell] Implement a more realistic shading model. Credit will vary depending on the sophistication of the model. A simple model factors in the Fresnel term to compute the amount of light reflected and transmitted at a perfect dielectric (e.g., glass). A more complex model incorporates the notion of a microfacet distribution to broaden the specular highlight. Accounting for the color dependence in the Fresnel term permits a more metallic appearance. Even better, include anisotropic reflections for a plane with parallel grains or a sphere with grains that follow the lines of latitude or longitude. Sources: Shirley, Chapter 24, Watt, Chapter 7, Foley et al, Section 16.7; Glassner, Chapter 4, Section 4; Ward's SIGGRAPH '92 paper; Schlick's Eurographics Rendering Workshop '93 paper.

This all sounds kind of complex, and the physics behind it is. But the coding doesn't have to be. It can be worthwhile to look up one of these alternate models, since they do a much better job at surface shading. Be sure to demo the results in a way that makes the value added clear.

Theoretically, you could also invent new shading models. For instance, you could implement a less realistic model! Could you implement a shading model that produces something that looks like cel animation? Variable extra credit will be given for these "alternate" shading models.  Links to ideas: Comic Book Rendering

Note that you must still implement the Phong model.

[bell] [bell] [bell] [bell] Implement CSG, constructive solid geometry. This extension allows you to create very interesting models. See page 108 of Glassner for some implementation suggestions. An excellent example of CSG was built by a grad student here in the grad graphics course.

[bell] [bell] [bell] [bell] Add a particle systems simulation and renderer (Foley 20.5, Watt 17.7, or see instructor for more pointers).

[bell] [bell] [bell] [bell] Implement caustics by tracing rays from the light source and depositing energy in texture maps (a.k.a., illumination maps, in this case). Caustics are variations in light intensity caused by refractive focusing--everything from simple magnifying-glass points to the shifting patterns on the bottom of a swimming pool. A paper discussing some methods. 2 bells each for refractive and reflective caustics. (Note: caustics can be modeled without illumination maps by doing "photon mapping", a monster bell described below.)

Here is a really good example of caustics that were produced by two students during a previous quarter: Example



There are innumerable ways to extend a ray tracer. Think about all the visual phenomena in the real world. The look and shape of cloth. The texture of hair. The look of frost on a window. Dappled sunlight seen through the leaves of a tree. Fire. Rain. The look of things underwater. Prisms. Do you have an idea of how to simulate this phenomenon? Better yet, how can you fake it but get something that looks just as good? You are encouraged to dream up other features you'd like to add to the base ray tracer. Obviously, any such extensions will receive variable extra credit depending on merit (that is, coolness!). Feel free to discuss ideas with the course staff before (and while) proceeding!


Monster Bells

Disclaimer: please consult the course staff before spending any serious time on these. These are all quite difficult (I would say monstrous) and may qualify as impossible to finish in the given time. But they're cool.

Sub-Surface Scattering

The trace program assigns colors to pixels by simulating a ray of light that travels, hits a surface, and then leaves the surface at the same position. This is good when it comes to modeling a material that is metallic or mirror-like, but fails for translucent materials, or materials where light is scattered beneath the surface (such as skin, milk, or plants). Check this paper out to learn more.

Metropolis Light Transport

Not all rays are created equal. Some light rays contribute more to the image than others, depending on what they reflect off of or pass through on the route to the eye. Ideally, we'd like to trace the rays that have the largest effect on the image, and ignore the others. The problem is: how do you know which rays contribute most? Metropolis light transport solves this problem by randomly searching for "good" rays. Once those rays are found, they are mutated to produce others that are similar in the hope that they will also be good. The approach uses statistical sampling techniques to make this work. Here's some information on it, and a neat picture.


Photon Mapping

Photon mapping is a powerful variation of ray tracing that adds speed, accuracy and versatility. It's a two-pass method: in the first pass, photon maps are created by emitting packets of energy (photons) from the light sources and storing these as they hit surfaces within the scene. The scene is then rendered using a distribution ray tracing algorithm optimized by using the information in the photon maps. It produces some amazing pictures. Here's some information on it.

Also, if you want to implement photon mapping, we suggest you look at the SIGGRAPH 2001 course 38 notes. The TAs can point you to a copy, if you are interested.


References