CSE 557 1998 Rendering Competition

CSE 557 - Computer Graphics
Winter quarter, 1998
Instructor: Brian Curless
Teaching assistant: Jonathan Shade


Grand Prize Winner

Tinker Crane

by Daniel Azuma

The final image of the Tinker Crane uses raycom2's level 2 adaptive antialiasing (maximum 16 samples per pixel). There are over 200 separate CSG-based objects in this scene, comprising more than a thousand individual primitives.
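
As a rough illustration of what level-limited adaptive antialiasing does, here is a minimal C++ sketch that subdivides a pixel only where its corner samples disagree. The function names, the difference threshold, and the placeholder traceRay are assumptions, not raycom2's actual scheme; in particular, sharing corner samples between neighboring cells (which is what keeps the count bounded at 16) is omitted for brevity.

```cpp
// Minimal sketch of level-limited adaptive supersampling: subdivide a pixel
// cell only where its corner samples disagree.  Names and threshold are
// assumptions, not raycom2's real API.
#include <cmath>

struct Color { double r, g, b; };

// Placeholder for the real renderer: shoot a ray through image point (x, y).
Color traceRay(double x, double y) {
    double v = (std::sin(x * 40.0) * std::cos(y * 40.0) > 0.0) ? 1.0 : 0.0;
    return Color{v, v, v};
}

static double diff(const Color &a, const Color &b) {
    return std::fabs(a.r - b.r) + std::fabs(a.g - b.g) + std::fabs(a.b - b.b);
}

static Color average(const Color &a, const Color &b,
                     const Color &c, const Color &d) {
    return Color{(a.r + b.r + c.r + d.r) / 4.0,
                 (a.g + b.g + c.g + d.g) / 4.0,
                 (a.b + b.b + c.b + d.b) / 4.0};
}

// Sample the square cell whose lower-left corner is (x, y).  If the corner
// colors disagree and subdivision levels remain, split into four quadrants;
// capping the depth bounds the number of samples spent on a pixel.
Color adaptiveSample(double x, double y, double size, int level) {
    Color c00 = traceRay(x,        y);
    Color c10 = traceRay(x + size, y);
    Color c01 = traceRay(x,        y + size);
    Color c11 = traceRay(x + size, y + size);

    const double threshold = 0.05;               // assumed tolerance
    bool smooth = diff(c00, c11) < threshold && diff(c10, c01) < threshold;
    if (level == 0 || smooth)
        return average(c00, c10, c01, c11);

    double h = size / 2.0;
    return average(adaptiveSample(x,     y,     h, level - 1),
                   adaptiveSample(x + h, y,     h, level - 1),
                   adaptiveSample(x,     y + h, h, level - 1),
                   adaptiveSample(x + h, y + h, h, level - 1));
}
```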

Most objects are texture mapped, although it's subtle: you can see the particle-board texture on the wheels if you look closely, but the wood grain on the sticks is almost impossible to make out. All the sticks and wheels are diffuse except for the little orange sticks at the top of the model, which are plastic (you can see a specular highlight if you look closely). The tabletop uses a texture I lifted from a web page somewhere, tiled several times across the surface.

A few of the objects in the scene are large (4x the size of everything else) to give a better look at the detail. Note especially the objects hanging from the crane: you can see through the hole in the middle of the wheel, and through the notch at the end of the magenta stick.

First Runner Up

View of Saturn from the moon Dione

by Dawn Werner and Brian Meyer

This image uses a fractal primitive to simulate the surface of the moon. Saturn is texture mapped using a bitmap downloaded from NASA. The rings are a swept surface that is both texture mapped and transparency mapped. Notice how the gaps in the rings show up in the shadow cast on the planet's surface. The star patterns are generated using a different fractal routine applied to the atmosphere.
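
The gaps appearing in the shadow imply that shadow rays are attenuated by the rings' transparency map rather than simply blocked. Below is a minimal sketch of that idea, assuming a hypothetical ringTransparency lookup and hit record; it is not the authors' code.

```cpp
// Minimal sketch of letting a shadow ray pass partially through a
// transparency-mapped surface, so the ring gaps show up in the planet's
// shadow.  The hit record and transparency lookup are assumptions.
#include <vector>

struct Hit { double u, v; };                     // texture coordinates at the hit

// Assumed lookup in [0, 1]: 1 = fully transparent gap, 0 = opaque ring band.
double ringTransparency(double u, double v) {
    // Placeholder pattern: alternating opaque bands and gaps along u.
    int band = static_cast<int>(u * 8.0);
    return (band % 3 == 0) ? 1.0 : 0.1;
}

// Multiply together the transparency of every mapped surface the shadow ray
// crosses; a result of 0 is a hard shadow, 1 is fully lit.
double shadowAttenuation(const std::vector<Hit> &hitsAlongShadowRay) {
    double attenuation = 1.0;
    for (size_t i = 0; i < hitsAlongShadowRay.size(); ++i)
        attenuation *= ringTransparency(hitsAlongShadowRay[i].u,
                                        hitsAlongShadowRay[i].v);
    return attenuation;
}
```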

Island

by Dawn Werner and Brian Meyer


This image was generated from a 320,000-triangle terrain built with a heterogeneous multi-dimensional fractal. The water effects come from a procedural bump map generated by a one-dimensional fractal. The sky uses the same one-dimensional fractal routine as the water, but applied to the atmosphere instead. The background water uses a similar algorithm with different colors so that it shades darker and reads as water.
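
As a rough sketch of the kind of one-dimensional fractal routine described here, the following C++ sums a few octaves of smoothed value noise (classic fBm). The hash function and octave count are assumptions, not the authors' actual routine.

```cpp
// Minimal sketch of a one-dimensional fractal (fBm-style) value that could
// drive a procedural bump map or sky pattern.  Hashing and octave count are
// assumptions.
#include <cmath>

// Assumed integer hash giving a repeatable pseudo-random value in [0, 1].
static double hashNoise(int i) {
    unsigned int n = static_cast<unsigned int>(i);
    n = (n << 13) ^ n;
    n = n * (n * n * 15731u + 789221u) + 1376312589u;
    return (n & 0x7fffffffu) / 2147483647.0;
}

// Smoothly interpolated 1-D value noise.
static double valueNoise(double x) {
    int    i = static_cast<int>(std::floor(x));
    double f = x - i;
    double t = f * f * (3.0 - 2.0 * f);          // smoothstep blend
    return hashNoise(i) * (1.0 - t) + hashNoise(i + 1) * t;
}

// Sum several octaves, halving the amplitude and doubling the frequency
// each time (classic fractional Brownian motion).
double fractal1D(double x, int octaves = 5) {
    double sum = 0.0, amplitude = 1.0, frequency = 1.0;
    for (int o = 0; o < octaves; ++o) {
        sum       += amplitude * valueNoise(x * frequency);
        amplitude *= 0.5;
        frequency *= 2.0;
    }
    return sum;
}
```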

The following clips were generated by sweeping the camera around the center point of the view in each of the previous two images. Each clip contains 360 frames. Each frame was oversampled four times, but the clips were then compressed into the movie files, so some aliasing does occur.

MPEG movie showing the island during the day.
MPEG movie showing the island during the night.

Second Runner Up

Liberated

by Patrick Crowley and Miguel Figueroa


Our distribution ray tracer distributes both reflection rays and shadow rays. Reflection rays are distributed within a cone about the mirror reflection direction, with a radius determined by a scale factor times 1/shininess. We implemented area lights as spheres, so shadow rays are distributed within the cone formed between the intersection point and the spherical area light. The number of distributed reflection and shadow rays, as well as the aforementioned scale factor, are user-definable parameters.
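
Below is a minimal sketch of distributing shadow rays within the cone subtended by a spherical area light, assuming a simple vector type and a jittered disc sample; the exact distribution the authors use is not stated.

```cpp
// Minimal sketch of one shadow-ray direction toward a spherical area light.
// The vector helpers and the jittered disc sampling are assumptions.
#include <cmath>
#include <cstdlib>

struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3 &v) const { return Vec3{x + v.x, y + v.y, z + v.z}; }
    Vec3 operator-(const Vec3 &v) const { return Vec3{x - v.x, y - v.y, z - v.z}; }
    Vec3 operator*(double s)      const { return Vec3{x * s, y * s, z * s}; }
};

static Vec3 cross(const Vec3 &a, const Vec3 &b) {
    return Vec3{a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static double length(const Vec3 &v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }
static Vec3   normalize(const Vec3 &v) { return v * (1.0 / length(v)); }
static double urand() { return std::rand() / (RAND_MAX + 1.0); }

// One shadow-ray direction from "point" toward a spherical light of radius
// "lightRadius" centered at "lightCenter".  The cone's width is set by how
// large the sphere appears from the shaded point.
Vec3 sampleShadowRay(const Vec3 &point, const Vec3 &lightCenter, double lightRadius) {
    const double kPi = 3.14159265358979323846;
    Vec3   toLight = lightCenter - point;
    double dist    = length(toLight);
    Vec3   axis    = toLight * (1.0 / dist);

    // Perpendicular frame around the axis (assumes axis is not parallel to "up").
    Vec3 up = (std::fabs(axis.z) < 0.9) ? Vec3{0, 0, 1} : Vec3{1, 0, 0};
    Vec3 u  = normalize(cross(up, axis));
    Vec3 v  = cross(axis, u);

    // Jitter the target point over the disc the sphere presents to the point.
    double r     = lightRadius * std::sqrt(urand());
    double theta = 2.0 * kPi * urand();
    Vec3 target  = lightCenter + u * (r * std::cos(theta)) + v * (r * std::sin(theta));
    return normalize(target - point);
}
```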

We implemented texture maps on three material properties: diffuse color, specular color, and surface normal (bump maps). Diffuse and specular maps use a simple bilinear interpolation algorithm to get the diffuse and specular color components from the texture map. Bump maps determine the gradient of the texture map at the point in question and use it to perturb the surface normal at that point.
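
For illustration, here is a minimal sketch of the bilinear interpolation step, assuming a row-major RGB image addressed by (u, v) in [0, 1]; the texture layout is an assumption, not the authors' actual data structure.

```cpp
// Minimal sketch of bilinear texture sampling: blend the four texels that
// surround the lookup point.  Image layout is an assumption.
#include <cmath>
#include <vector>

struct Texel { double r, g, b; };

struct Texture {
    int width, height;
    std::vector<Texel> pixels;                   // row-major, width * height

    // Fetch a texel, clamping coordinates at the image border.
    Texel texelAt(int x, int y) const {
        if (x < 0) x = 0; if (x >= width)  x = width  - 1;
        if (y < 0) y = 0; if (y >= height) y = height - 1;
        return pixels[y * width + x];
    }

    // Bilinear interpolation of the four texels surrounding (u, v) in [0, 1].
    Texel sample(double u, double v) const {
        double fx = u * (width - 1), fy = v * (height - 1);
        int    x  = static_cast<int>(std::floor(fx));
        int    y  = static_cast<int>(std::floor(fy));
        double sx = fx - x, sy = fy - y;

        Texel t00 = texelAt(x, y),     t10 = texelAt(x + 1, y);
        Texel t01 = texelAt(x, y + 1), t11 = texelAt(x + 1, y + 1);

        Texel top    = { t00.r + (t10.r - t00.r) * sx,
                         t00.g + (t10.g - t00.g) * sx,
                         t00.b + (t10.b - t00.b) * sx };
        Texel bottom = { t01.r + (t11.r - t01.r) * sx,
                         t01.g + (t11.g - t01.g) * sx,
                         t01.b + (t11.b - t01.b) * sx };
        return { top.r + (bottom.r - top.r) * sy,
                 top.g + (bottom.g - top.g) * sy,
                 top.b + (bottom.b - top.b) * sy };
    }
};
```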

Other contestants

Red mountains at sunset

by Denise Pinnel and Greg Badros


We generated fractal mountains using the midpoint-perturbation method on polysets. The most difficult part of applying this method to polysets was that the algorithm works on triangles, so we had to keep track of the perturbation of shared edges to ensure that the triangles generated from neighboring polygons in the polyset would continue to meet at those edges. If two polygons share an edge, then by the time we begin to perturb the second polygon, one or more of its edges has already been perturbed, and we pass this previously computed information into the fractal generator. We control the amount of perturbation with two variables: the maximum perturbation per level and the number of levels. The number of levels determines how many times the original polygon is subdivided. The maximum perturbation applies to the first level and is halved at each subsequent level; it is the maximum amount that a midpoint can be displaced in either the positive or negative y direction. We also tried perturbing along the normal of the original polygon (in either direction), but this did not give good results for the mountains we were generating.
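
Here is a minimal sketch of the midpoint-perturbation subdivision itself, assuming simple point and triangle types; the edge-sharing bookkeeping described above (which keeps neighboring polygons meeting at their edges) is omitted.

```cpp
// Minimal sketch of midpoint-displacement subdivision: split each triangle at
// its edge midpoints and move each midpoint up or down in y by at most
// "maxPerturb", which is halved at every level.  Shared-edge caching (the
// hard part the authors mention) is left out here.
#include <cstdlib>
#include <vector>

struct Point { double x, y, z; };
struct Triangle { Point a, b, c; };

static double randomOffset(double maxPerturb) {
    return (2.0 * std::rand() / (RAND_MAX + 1.0) - 1.0) * maxPerturb;
}

// Midpoint of an edge, displaced in the +/- y direction.
static Point perturbedMidpoint(const Point &p, const Point &q, double maxPerturb) {
    Point m = { (p.x + q.x) / 2.0,
                (p.y + q.y) / 2.0 + randomOffset(maxPerturb),
                (p.z + q.z) / 2.0 };
    return m;
}

// Recursively subdivide; "levels" and "maxPerturb" play the roles of the two
// control variables described above.
void subdivide(const Triangle &t, int levels, double maxPerturb,
               std::vector<Triangle> &out) {
    if (levels == 0) { out.push_back(t); return; }

    Point ab = perturbedMidpoint(t.a, t.b, maxPerturb);
    Point bc = perturbedMidpoint(t.b, t.c, maxPerturb);
    Point ca = perturbedMidpoint(t.c, t.a, maxPerturb);

    double next = maxPerturb / 2.0;              // halve at each deeper level
    subdivide(Triangle{t.a, ab, ca}, levels - 1, next, out);
    subdivide(Triangle{ab, t.b, bc}, levels - 1, next, out);
    subdivide(Triangle{ca, bc, t.c}, levels - 1, next, out);
    subdivide(Triangle{ab, bc, ca}, levels - 1, next, out);
}
```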

SGI movie showing fractal earth and mountains at night.

Another Night's Reading

by Denise Pinnel and Greg Badros


"Another Night's Reading" was rendered in about 3 hours on an SGI Indy box. The scene is a realistic duplication of the artist's nightstand, complete with the "Introduction to Raytracing" text which decorated the table-top for the duration of the intensive graduate course. Three non-coincident point lights inside the lamp shade account for the complex shadows. The texture-mapping of the nightstand itself involves various wood-grain images and provides an especially realistic effect. The framed picture on the wall is of an image generated by "Impressionist," an earlier project in the course.

Poolballs

by Justin Miller

This image is a reasonably accurate representation of a set of pool balls, racked and ready. The felt table is bump-mapped using an actual felt image as the bump map. The pool ball texture images were all created by hand. To add a sense of space to the image, a reflection map of a living room was applied to the scene. The long streaks reflected in the balls are a result of the windows in the environment map.
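
A reflection map is typically indexed by the mirror direction at the hit point; below is a minimal sketch using a latitude/longitude lookup. The map layout and the sampleImage placeholder are assumptions, since the scene's actual indexing scheme isn't stated.

```cpp
// Minimal sketch of an environment (reflection) map lookup: convert a
// normalized reflection direction into lat/long coordinates and fetch the
// image color there.  Layout and image access are assumptions.
#include <cmath>

struct Vec3  { double x, y, z; };
struct Color { double r, g, b; };

// Placeholder for the living-room image; a real lookup would read pixels.
Color sampleImage(double u, double v) { return Color{u, v, 0.5}; }

// dir must be normalized; u is longitude around the y axis, v is latitude.
Color environmentLookup(const Vec3 &dir) {
    const double kPi = 3.14159265358979323846;
    double u = 0.5 + std::atan2(dir.z, dir.x) / (2.0 * kPi);
    double v = std::acos(dir.y) / kPi;
    return sampleImage(u, v);
}
```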

Blackboard

by Tapan Parikh and Geoff Hulten

This scene was rendered using distribution ray tracing, texture mapping, and bump mapping. For all reflections and shadows, a number of rays are distributed within a cone centered on the ideal reflection or shadow ray. For shadows the radius of the cone is given by the radius of the area light (encoded in the .out file as a spot_light, using the cutOffAngle as the radius), and for reflections the radius of the cone is determined by the shininess of the surface (the shinier, the smaller the radius). Rays are weighted towards the center of the cone: on average an equal number of rays is cast per unit distance from the center, so the samples are denser near the middle.
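
Here is a minimal sketch of that cone sampling, assuming a simple vector type: picking the offset distance uniformly (rather than uniformly by area) casts the same number of rays per unit distance from the center, which is what concentrates samples toward the middle of the cone.

```cpp
// Minimal sketch of center-weighted cone sampling: the offset radius is
// chosen uniformly in distance, so samples bunch up near the cone's axis.
// Vector math and RNG details are assumptions.
#include <cmath>
#include <cstdlib>

struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3 &v) const { return Vec3{x + v.x, y + v.y, z + v.z}; }
    Vec3 operator*(double s)      const { return Vec3{x * s, y * s, z * s}; }
};

static Vec3 cross(const Vec3 &a, const Vec3 &b) {
    return Vec3{a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static Vec3 normalize(const Vec3 &v) {
    double l = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return v * (1.0 / l);
}
static double urand() { return std::rand() / (RAND_MAX + 1.0); }

// "axis" is the ideal reflection or shadow direction; "coneRadius" comes from
// the area-light radius or from the surface shininess, as described above.
Vec3 sampleCone(const Vec3 &axis, double coneRadius) {
    const double kPi = 3.14159265358979323846;
    Vec3 up = (std::fabs(axis.z) < 0.9) ? Vec3{0, 0, 1} : Vec3{1, 0, 0};
    Vec3 u  = normalize(cross(up, axis));
    Vec3 v  = cross(axis, u);

    double r     = coneRadius * urand();         // uniform in distance => center-weighted
    double theta = 2.0 * kPi * urand();
    return normalize(axis + u * (r * std::cos(theta)) + v * (r * std::sin(theta)));
}
```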

Broccoli Shrip

by Craig Wilcox and Erik Vee

This creature was modeled using L-systems, as described in the book The Algorithmic Beauty of Plants, and rendered with our raytracer.
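
For readers unfamiliar with L-systems, here is a minimal sketch of the string-rewriting step; the branching rule shown is a textbook-style example for illustration only, not the rule used to grow this creature.

```cpp
// Minimal sketch of L-system string rewriting in the style of
// "The Algorithmic Beauty of Plants": apply the production rules to every
// symbol, in parallel, once per generation.
#include <iostream>
#include <map>
#include <string>

std::string rewrite(const std::string &axiom,
                    const std::map<char, std::string> &rules,
                    int generations) {
    std::string current = axiom;
    for (int g = 0; g < generations; ++g) {
        std::string next;
        for (size_t i = 0; i < current.size(); ++i) {
            std::map<char, std::string>::const_iterator it = rules.find(current[i]);
            next += (it != rules.end()) ? it->second : std::string(1, current[i]);
        }
        current = next;
    }
    return current;
}

int main() {
    // A classic branching example: F draws a segment, +/- turn, [ ] push/pop.
    std::map<char, std::string> rules;
    rules['F'] = "F[+F]F[-F]F";
    std::cout << rewrite("F", rules, 2) << std::endl;  // string a turtle would interpret
    return 0;
}
```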

Tranquility

by Craig Wilcox and Erik Vee

We used L-systems again to generate the tree. In addition, we used a procedurally generated terrain, texture mapping, and procedural bump and transparency mapping (for the water).

Last update: May 26, 1998

shade@cs.washington.edu