
Project 4: Final Project

Assigned: November 25

Project proposal: December 3

Project meeting: December 4

Due: December 18


Project Description

The topic of the final project is open: you get to define your own project. It should be ambitious enough to demonstrate a significant result, but not so hard that you aren't confident about shipping a reasonably complete product. The project ideas below are provided for reference; feel free to come up with your own. You can leaf through recent SIGGRAPH proceedings in Grail for additional inspiration, or browse online.

Some Project Ideas

Image Processing

Intelligent scissors with alpha matting. Image matting is the process of extracting a foreground element from the background so that the foreground can be placed over a novel background. Intelligent scissors is a method for interactively drawing along the contour of a foreground object while the computer tries to "snap" the curve to the nearest edge. The result of intelligent scissoring is a nice separation, but it doesn't model fractional mixing of foreground and background colors, i.e., alpha computation. In this project, you would combine intelligent scissors with alpha estimation. You could start from the intelligent scissors project in 490CV and enhance it to include a simple variant of Chuang's digital image matting method. The compositing relation that the matte must satisfy is sketched in the first example after this list.
Object-based image editing.  Allow the user to take an image, select a portion of the image, and warp that portion using a curve interface.  This idea is described in more detail in a SIGGRAPH 2002 paper.  The system in the SIGGRAPH paper has a number of components; we recommend you focus on the tools for selecting a portion of an object (e.g., using an intelligent scissors algorithm) and the curve tools for deforming shapes.  You can ignore the texture filling problem by working with photographs imaged against a constant background and using that color for fill.
Digital photomontage.  A photomontage is a combination of multiple images into a new image, ideally one that seamlessly combines the input images.  Implement a system that allows a user to loosely select regions from photographs and then automatically cut out good regions and blend them into a final composite.  The idea is described in more detail in this SIGGRAPH paper, and a version was released as GroupShot.
Context-aware image resizing.  Allow the user to resize an image so that, rather than just filtering and resampling, pixels are deleted or added in a way that preserves scene content.  The idea -- called "seam carving" -- is discussed in this SIGGRAPH 2007 paper.  Be sure to check out the demo.  A minimal seam-search sketch appears in the second example after this list.
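
The matting idea above rests on the standard compositing equation: each observed pixel C is modeled as C = alpha*F + (1 - alpha)*B, a blend of foreground F and background B. Here is a minimal sketch of placing a matted element over a new background; the Color struct is an illustrative stand-in for whatever image type your code already uses.

struct Color { float r, g, b; };

// Composite a matted foreground pixel over a pixel of the new background.
// fg and alpha are what the matting stage estimates; newBg is the novel scene.
Color compositeOver(const Color& fg, float alpha, const Color& newBg) {
    return { alpha * fg.r + (1.0f - alpha) * newBg.r,
             alpha * fg.g + (1.0f - alpha) * newBg.g,
             alpha * fg.b + (1.0f - alpha) * newBg.b };
}

The matting problem itself is the inverse: estimating alpha and F from C and a rough boundary such as the one intelligent scissors produces.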
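
For the seam-carving idea, the core step is a dynamic program over a per-pixel energy map (e.g., gradient magnitude). The sketch below assumes the energy has already been computed; the function name and types are illustrative, not taken from the paper's code.

#include <vector>
#include <algorithm>

// Returns, for each row y, the column of the minimum-energy vertical seam.
// Removing that pixel from each row shrinks the image by one column.
std::vector<int> findVerticalSeam(const std::vector<std::vector<float>>& energy) {
    const int h = static_cast<int>(energy.size());
    const int w = static_cast<int>(energy[0].size());
    std::vector<std::vector<float>> cost = energy;                    // cumulative cost
    std::vector<std::vector<int>> from(h, std::vector<int>(w, 0));    // backpointers

    for (int y = 1; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float best = cost[y - 1][x]; int bx = x;
            if (x > 0 && cost[y - 1][x - 1] < best) { best = cost[y - 1][x - 1]; bx = x - 1; }
            if (x + 1 < w && cost[y - 1][x + 1] < best) { best = cost[y - 1][x + 1]; bx = x + 1; }
            cost[y][x] += best;
            from[y][x] = bx;
        }
    }
    // Trace back from the cheapest entry in the bottom row.
    int x = static_cast<int>(std::min_element(cost[h - 1].begin(), cost[h - 1].end()) - cost[h - 1].begin());
    std::vector<int> seam(h);
    for (int y = h - 1; y >= 0; --y) { seam[y] = x; x = from[y][x]; }
    return seam;
}

Repeating this (or its horizontal counterpart) once per removed column or row gives the basic content-aware resize; seam insertion works the same way but duplicates the seam pixels instead of deleting them.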

Rendering

Distribution ray tracing.  Implement semi-diffuse reflections and refractions by distributing the secondary rays emanating from each surface according to a bidirectional reflectance distribution function (BRDF) of your own choosing. Allow slider control over one or more parameters of the BRDF. Stop ray recursion if the weight for a ray's color drops below a slider-selectable threshold. You can refer to Ward's paper as an example of a BRDF representation. You can also implement distribution ray tracing for area light sources to simulate penumbrae, for a finite aperture to simulate depth of field, and for a finite shutter interval to simulate motion blur.  This extension to the ray tracing project should be combined with another extension, such as texture and bump mapping and/or advanced material properties.  A small sketch of sampling secondary rays about the mirror direction appears after this list.
Photon mapping.  Caustics (re-focusing of light through refractive objects) and complex volumetric scattering (multiple scattering events through participating media like thick smoke and clouds) are important visual effects that cannot be modeled without tracing some rays from the light as well as from the viewer.  One method for modeling these effects is to trace light rays and deposit photons on surfaces and in clouds -- a technique called "photon mapping."  Implement photon mapping to demonstrate one or both of these effects.  Henrik Wann Jensen's web page has a variety of links to examples and papers about this approach.
Subsurface scattering.  Accurate modeling of semi-translucent materials such as skin and marble requires simulation of subsurface scattering effects. Implement a distribution ray tracer that accounts for this important visual effect.  Henrik Wann Jensen's subsurface scattering web page has a number of examples and pointers to papers for implementing this idea.
Hierarchical radiosity. This is actually not as hard as it sounds. To implement the Hanrahan hierarchical solver, the main ingredients are a simple polygonal scene, a routine to break a triangle or rectangle into a few smaller pieces, a patch-to-patch visibility solver (your ray tracer already does this), and a simple mechanism for traversing a hierarchy of polygons. The simplest way to visualize your results would be to write out a file in the obj format accepted by your subdivision editor, extended with an rgb triple for each point in addition to the xyz values, and extend your subdivision modeler to read and render the vertex colors. Alternatively, you can do a "final gather" ray-tracing pass to achieve smooth shading. For best results, combine the radiosity solution with a ray-tracing pass that incorporates specular highlights and textures.
Light field renderer. Ray tracing, especially for complex models and many interreflections, is generally not real-time. An alternative approach is to ray trace a set of images and then reuse these rays when generating new images. This idea is called light field or lumigraph rendering. Using some simple tricks with graphics hardware and ray traced images for viewpoints on a regular grid, it is actually possible to reuse rays in real-time to create new renderings quickly. See Levoy and Hanrahan's paper for details. A more complex method when geometry is also available (in addition to the images) is described by Grzeszczuk, et al.
Non-photorealistic lighting model. Technical illustration has some important characteristics that make it very different from realistic rendering, so popular shading models like Phong shading are not appropriate for technical illustrations. Gooch et al. introduced a shading model for technical illustrations in their SIGGRAPH '98 paper; a sketch of its cool-to-warm blend appears after this list. Another option is to make a cel shader (cartoon style), or use art-based rendering to make Dr. Seuss-like pictures.
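
For distribution ray tracing, one simple choice of glossy lobe is a Phong-style cos^n falloff about the mirror direction; drawing cosAlpha = u^(1/(n+1)) for uniform u gives samples with that density. The sketch below is illustrative only -- the Vec3 type and basis construction stand in for your ray tracer's own utilities, and the lobe is a placeholder for whatever BRDF you choose.

#include <cmath>
#include <random>

struct Vec3 {                                   // stand-in for your own vector class
    double x, y, z;
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
    Vec3 operator+(const Vec3& v) const { return {x + v.x, y + v.y, z + v.z}; }
};
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(const Vec3& v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Draw one secondary-ray direction distributed about the mirror direction r
// with density proportional to cos^n of the angle away from r.
Vec3 samplePhongLobe(const Vec3& r, double n, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    const double kPi = 3.14159265358979323846;
    double cosA = std::pow(u(rng), 1.0 / (n + 1.0));
    double sinA = std::sqrt(std::fmax(0.0, 1.0 - cosA * cosA));
    double phi = 2.0 * kPi * u(rng);
    // Orthonormal basis (t, b, r) around the mirror direction.
    Vec3 up = std::fabs(r.z) < 0.9 ? Vec3{0.0, 0.0, 1.0} : Vec3{1.0, 0.0, 0.0};
    Vec3 t = normalize(cross(up, r));
    Vec3 b = cross(r, t);
    return normalize(t * (sinA * std::cos(phi)) + b * (sinA * std::sin(phi)) + r * cosA);
}

Your tracer would average several such samples per hit and stop recursing once the accumulated color weight falls below the slider-selectable threshold.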
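
For the non-photorealistic lighting idea, a common formulation of the Gooch-style cool-to-warm ramp blends from a cool (bluish) tone where the surface faces away from the light to a warm (yellowish) tone where it faces the light. The constants and the Color struct below are illustrative choices, not values from the paper.

struct Color { float r, g, b; };

// Cool-to-warm shading: t maps N.L from [-1, 1] to [0, 1] and interpolates
// between a blue-shifted and a yellow-shifted version of the object color.
Color goochShade(float nDotL, const Color& objectColor) {
    const float alpha = 0.25f, beta = 0.5f;          // tone weights (assumed values)
    Color cool = {alpha * objectColor.r,
                  alpha * objectColor.g,
                  0.55f + alpha * objectColor.b};    // cool = blue shift + alpha * object color
    Color warm = {0.3f + beta * objectColor.r,
                  0.3f + beta * objectColor.g,
                  beta * objectColor.b};             // warm = yellow shift + beta * object color
    float t = (1.0f + nDotL) * 0.5f;
    return {cool.r + t * (warm.r - cool.r),
            cool.g + t * (warm.g - cool.g),
            cool.b + t * (warm.b - cool.b)};
}

Edge lines and a small specular highlight on top of this ramp get most of the technical-illustration look; a cel shader would instead quantize N.L into a few flat bands.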

Geometric modeling

Subdivision surfaces. Subdivision surfaces have gained increasing attention in industry and are widely used in animation production, e.g., in "Geri's Game" and "Toy Story II". As a final project, you would implement both a non-interpolating scheme (Loop or Catmull-Clark) and an interpolating one (Butterfly). Use the evaluation masks to calculate final vertex positions if necessary, and the tangent masks to calculate vertex normals. Loop and Butterfly subdivision were described in sufficient detail in lecture; you can refer to this paper for more information about the masks used in Catmull-Clark subdivision. You also need to support feature edges and vertices. To soften creases, you can modify the subdivision scheme slightly to use sharp subdivision rules for a finite number of steps, followed by smooth subdivision to push to the limit position; see this paper for more discussion of this simple, useful idea. Your subdivision editor should have an interface for selecting a vertex or an edge and then marking (or unmarking) it as a feature. You can start from a previous quarter's subdivision surface skeleton code (in MFC only). A sketch of the Loop rule for repositioning existing vertices appears after this list.
Shape deformations.  Implement a system for taking a given shape, such as a triangle mesh, and deforming it based on free-form deformations or skeletal blending weights.  The deformation tool should be incorporated into your animation system so that you can animate the deformation.
Sketch interface for modeling. Teddy, introduced at SIGGRAPH '99, presents a new paradigm for modeling: users sketch in a drawing window and the system automatically infers a plausible underlying 3D model. As a final project, you can design your own interface for modeling or follow Teddy's design (you are not required to implement all operations introduced in the paper).
Multi-resolution curves for keyframing. Create a multiresolution keyframing system that can edit the motion curves at different levels of detail. The curves should be able to interpolate the keyframe constraints by changing the coefficients at a specific level of detail. Devise a scheme that can interpolate at non-integer levels of detail. There is a paper by Finkelstein and Salesin on multiresolution curves.
Physically based vegetation. Design a dynamic model of a tree that would move realistically (e.g., in response to wind or a person brushing through). The tree could be built using L-system rules. Different parts of the tree would have different amounts of elasticity. For a sufficiently complex tree (or lots of trees), you may need to explore efficient ways of simulating the tree physics in order to keep the simulation fast.
Procedural architecture and cities. Design a general L-system for modeling buildings (simple structure and image-based facades) and cityscapes. Explore ways of parameterizing the buildings by attributes such as age, style, and location. This SIGGRAPH '01 paper can be used as a guide.
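
For the subdivision project, the Loop rule for repositioning an existing ("even") interior vertex is a weighted average of the vertex and its one-ring neighbors. A minimal sketch follows, assuming you can already enumerate the one-ring; Vec3 stands in for your mesh code's vertex type, and boundary or crease vertices would use the 1/8-6/8-1/8 boundary rule instead.

#include <vector>
#include <cmath>

struct Vec3 { double x, y, z; };     // stand-in for your own vertex type

// New position of an interior vertex v of valence n with one-ring neighbors q_i:
//   v' = (1 - n*beta) * v + beta * sum(q_i),
//   beta = (1/n) * (5/8 - (3/8 + (1/4) cos(2*pi/n))^2)   (Loop's original weight)
Vec3 loopEvenVertex(const Vec3& v, const std::vector<Vec3>& oneRing) {
    const int n = static_cast<int>(oneRing.size());
    const double kPi = 3.14159265358979323846;
    double c = 3.0 / 8.0 + 0.25 * std::cos(2.0 * kPi / n);
    double beta = (5.0 / 8.0 - c * c) / n;
    Vec3 out = {(1.0 - n * beta) * v.x, (1.0 - n * beta) * v.y, (1.0 - n * beta) * v.z};
    for (const Vec3& q : oneRing) {
        out.x += beta * q.x;
        out.y += beta * q.y;
        out.z += beta * q.z;
    }
    return out;
}

The new edge ("odd") vertices use the usual 3/8-3/8-1/8-1/8 stencil, and the sharp-crease variants simply swap in the boundary stencils for marked edges.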

Animation

Cartoon physics simulation. Design a simulation environment that models various aspects of cartoon physics. Augment the traditional simulation of Newtonian physics with additional constraints, damping parameters, and other modifications that produce exaggerated, cartoon-like behavior.
Inverse kinematics. Implement inverse kinematics for character modeling and animation. Given a sequence of trajectories for a few end effectors on the character's body, compute joint angles that interpolate the end-effector trajectories while ensuring that the character moves "naturally". See the lecture on inverse kinematics for details. There is a section on IK in the CSE558 course notes from 2000. A small sketch of one simple solver appears after this list.
Cloth Simulator. Design a realistic cloth simulation system. Your work can focus on stable and accurate cloth simulation, handling of collisions in a reliable manner, and/or modeling of various kinds of cloth such as wool, cotton, or silk.
Rigid body simulation. Incorporate rigid body simulation with the keyframe animator. The keyframed character should be able to interact in realistic ways with the simulated objects in the scene. See this set of course notes to get you started.
Efficient collision detection.  Implement an efficient algorithm for determining when two rigid bodies are in contact.  You can leverage ideas you developed for efficient ray tracing.  
Secondary motion.  Develop a method for adding secondary motion to your animations so that, e.g., flesh jiggles when it moves.  The technique could be built around a simple lattice that defines a spatial deformation of the geometry inside it and is driven by particle-system physics.  Implementing a more stable solution method, such as implicit integration, is recommended so that this simulation runs in real time.
Motion Warping. Implement a technique for editing already existing animations, by warping the motion curves to meet the new constraints. See Zoran Popović's paper for more information.
Explosions. Develop a particle system approach to modeling and rendering realistic explosions (fire, dust plumes, etc.). A minimal particle time step appears after this list.
Water Flow. Simulate and render the flow of whitewater rapids.
Crowds and cars. Design a model of pedestrian crowds and/or vehicular rush-hour traffic. The crowd model should be parameterized by the number of people, demographics (age, sex, etc.), and general direction. The pedestrians should exhibit some variations in behavior as well (speed, interacting with others, etc.). Lower-level human movement could be built from hand-animated examples or from captured human motion data. Vehicular traffic can be parameterized by time of day, volume, source, and destination. In either case, collisions between pedestrians and/or vehicles must be avoided.
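
For the inverse kinematics idea, cyclic coordinate descent (CCD) is one simple solver (not necessarily the method covered in lecture): sweep the joints from the end of the chain back to the root, rotating each so the end effector swings toward its target. The planar-chain sketch below uses illustrative types and relative joint angles.

#include <vector>
#include <cmath>

struct P2 { double x, y; };          // 2D point, stand-in for your own math types

// Forward kinematics: world positions of every joint plus the end effector,
// given relative joint angles and fixed bone lengths, with the root at the origin.
static std::vector<P2> forwardKinematics(const std::vector<double>& angles,
                                         const std::vector<double>& lengths) {
    std::vector<P2> pos(angles.size() + 1, {0.0, 0.0});
    double a = 0.0;
    for (std::size_t i = 0; i < angles.size(); ++i) {
        a += angles[i];
        pos[i + 1] = {pos[i].x + lengths[i] * std::cos(a),
                      pos[i].y + lengths[i] * std::sin(a)};
    }
    return pos;
}

// CCD: repeatedly rotate each joint so the end effector moves toward the target.
void ccdSolve(std::vector<double>& angles, const std::vector<double>& lengths,
              P2 target, int iterations) {
    for (int it = 0; it < iterations; ++it) {
        for (int j = static_cast<int>(angles.size()) - 1; j >= 0; --j) {
            std::vector<P2> pos = forwardKinematics(angles, lengths);
            P2 joint = pos[j];
            P2 effector = pos.back();
            double current = std::atan2(effector.y - joint.y, effector.x - joint.x);
            double desired = std::atan2(target.y - joint.y, target.x - joint.x);
            angles[j] += desired - current;   // swing this joint toward the target
        }
    }
}

Joint limits, damping, or an explicit "naturalness" objective would sit on top of this basic loop.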
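
Several of the ideas above (explosions, secondary motion, water) build on a particle system. Here is a minimal semi-implicit (symplectic) Euler time step, with gravity and a simple linear drag as the only forces and an illustrative Particle layout of our own choosing.

#include <vector>

struct Particle {
    float px, py, pz;     // position
    float vx, vy, vz;     // velocity
    float life;           // remaining lifetime in seconds
};

// Advance all particles by dt seconds. Velocity is updated first and the new
// velocity is used for the position update (semi-implicit Euler), which is
// noticeably more stable than plain explicit Euler.
void stepParticles(std::vector<Particle>& particles, float dt) {
    const float gravity = -9.8f;      // m/s^2 along y (assumed convention)
    const float drag = 0.2f;          // linear air-drag coefficient (assumed)
    for (Particle& p : particles) {
        p.vx += (-drag * p.vx) * dt;
        p.vy += (gravity - drag * p.vy) * dt;
        p.vz += (-drag * p.vz) * dt;
        p.px += p.vx * dt;
        p.py += p.vy * dt;
        p.pz += p.vz * dt;
        p.life -= dt;
    }
    // The emitter would recycle particles whose life has run out (not shown).
}

For fire and dust plumes, much of the interesting work is in the emitter (spawn rates, initial velocity distributions) and the renderer (textured billboards, blending), with forces such as buoyancy added per effect.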

Demos

You should prepare in advance for the demo, since you'll be the expert, not us. If you choose a rendering project you should have precomputed some images. If your project is interactive you may not need precomputed images, but you'll want to have inputs ready that effectively demonstrate your work. In any case, please generate an artifact to show off your project. It can be a rendering, a model, or a visualization of your results.