Morph is an interactive program that uses a feature-based metamorphosis technique to create an animation that simultaneously warps and blends one image into another.
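At the heart of the morph, each output pixel blends a sample warped from the first image with a sample warped from the second. As a minimal sketch (the warped samples would come from the line-based warp; here they are just parameters, and the function name is a placeholder):

```cpp
#include <cassert>
#include <cstdint>

// One channel of one output pixel: blend the warped source sample with the
// warped destination sample. 't' runs from 0 (all source) to 1 (all
// destination) over the course of the animation.
inline uint8_t morphPixel(uint8_t warpedSrc, uint8_t warpedDst, float t) {
    float v = (1.0f - t) * warpedSrc + t * warpedDst;
    if (v < 0.0f) v = 0.0f;
    if (v > 255.0f) v = 255.0f;
    return static_cast<uint8_t>(v + 0.5f);  // round to nearest
}
```

The full morph applies this blend after warping both images toward an intermediate line configuration.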
Before you start coding, take a picture of yourself using one of the IndyCams (by running capture at a Unix prompt). Everyone's picture should be similar in size, lighting, proportion of head to background, and orientation of head. If yours is not, one of the TAs can assist you in retaking it. There are a couple of pieces of white poster board in the instructional lab; use one as a backdrop when taking your picture. The image should be 200x200 pixels and should be saved as "/cse/courses/cse457/images/morph/<yourlogin>.rgb". The following are examples of how your pictures should look (minus any dithering and quantizing from your web viewer):
To help you take good pictures, you might try making adjustments to the IndyCams: they can be focused by turning the lens cylinder, and the white balance and signal controls are accessible from capture's menus, along with the shutter speed. If necessary, the images may be "doctored-up" with the programs xpaint and imgworks. See also the "Home Page Tutorial" in the Help Session #0 page.
The starting point files are in /cse/courses/cse457/projects/morph. Compile and link them to create the starting point version of the program.
Control lines can be added by pressing the left mouse button, dragging it, and releasing the button. Although the lines can be specified, they are not currently used by the program.
There is a button labeled "Movie". When it is pressed, it does three things: first, it produces 16 cross-dissolved image files, "result00.rgb" through "result15.rgb"; second, it calls the movie creation program "makemovie" to build the movie "firstmovie.mov"; finally, it calls the movie player program "movieplayer". The result is a very coarse animation. You will improve the functionality of the Movie button in this assignment.
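Generating the 16 frames amounts to stepping a dissolve fraction from 0 to 1 and writing each frame under a zero-padded name. A sketch of those two pieces (helper names are placeholders, not the skeleton's actual functions):

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Dissolve fraction for frame i of an n-frame movie: frame 0 is entirely the
// first image, frame n-1 entirely the second.
inline float dissolveT(int i, int n) {
    return static_cast<float>(i) / static_cast<float>(n - 1);
}

// Output filename in the "result00.rgb" ... "result15.rgb" pattern.
inline std::string frameName(int i) {
    char buf[32];
    std::snprintf(buf, sizeof(buf), "result%02d.rgb", i);
    return std::string(buf);
}
```

The Movie button's loop would call these for i = 0..15, blend the two images with the returned fraction, and save each result.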
To view your movie after creating it, run the "movieplayer" program on the movie file.
Save your movie files in "/cse/courses/cse457/artifacts/morph_97S/<yourlogin>_<nextpersonslogin>.mov".
These movies will later be concatenated to create a single animation.
You are also encouraged to go beyond the minimum number of bells and whistles by implementing more from the approved list, or by thinking up your own unapproved extensions.
Implement the capability for loading and saving lines. This will save you time redrawing the lines.
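One simple approach is a plain-text file format: a line count followed by one record per control line. A sketch under that assumption (the Line struct and format are hypothetical, not the skeleton's):

```cpp
#include <cassert>
#include <sstream>
#include <vector>

// A hypothetical control-line record: one endpoint pair.
struct Line { float x1, y1, x2, y2; };

// Write lines as plain text: a count, then one record per line.
void saveLines(std::ostream& out, const std::vector<Line>& lines) {
    out << lines.size() << "\n";
    for (const Line& l : lines)
        out << l.x1 << " " << l.y1 << " " << l.x2 << " " << l.y2 << "\n";
}

// Read the same format back.
std::vector<Line> loadLines(std::istream& in) {
    size_t n = 0;
    in >> n;
    std::vector<Line> lines(n);
    for (Line& l : lines) in >> l.x1 >> l.y1 >> l.x2 >> l.y2;
    return lines;
}
```

A text format also lets you inspect and hand-edit saved line sets while debugging.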
The direction of a line is important in the warping algorithm. Add arrowheads to indicate the direction of the lines.
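The two barbs of an arrowhead can be found by backing up from the line's tip along its direction and swinging out by a fixed angle on each side. A sketch of that geometry (names are placeholders):

```cpp
#include <cassert>
#include <cmath>

struct Pt { float x, y; };

// Barb endpoints of an arrowhead at b for a line drawn from a to b:
// back up toward a by 'len' and rotate by +/-angleRad.
void arrowhead(Pt a, Pt b, float len, float angleRad, Pt* left, Pt* right) {
    float theta = std::atan2(a.y - b.y, a.x - b.x);  // direction back toward a
    left->x  = b.x + len * std::cos(theta + angleRad);
    left->y  = b.y + len * std::sin(theta + angleRad);
    right->x = b.x + len * std::cos(theta - angleRad);
    right->y = b.y + len * std::sin(theta - angleRad);
}
```

Drawing two short segments from the tip to these barbs gives the arrowhead.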
If the image colors are too close to the color of the control lines, the lines will be hard to see. Add a button labeled "Fade out"; when it is pressed, the image is dimmed so that the lines stand out.
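Dimming can be as simple as scaling every channel of the image buffer toward black. A minimal sketch, assuming an interleaved 8-bit RGB buffer:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// "Fade out": scale every channel of an interleaved RGB buffer toward black.
// factor = 1 leaves the image alone; a small factor dims it strongly so the
// control lines stand out.
void fadeImage(uint8_t* rgb, size_t count, float factor) {
    for (size_t i = 0; i < count; ++i)
        rgb[i] = static_cast<uint8_t>(rgb[i] * factor);
}
```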
Endpoint interpolation doesn't work well when a pair of corresponding lines point in more or less opposite directions. Implement midpoint-angle-length interpolation.
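The idea is to interpolate a line's midpoint, orientation, and length separately, so a nearly-reversed line pair rotates through intermediate frames instead of collapsing. A sketch of one way to do it (the Seg struct and function name are placeholders):

```cpp
#include <cassert>
#include <cmath>

struct Seg { float x1, y1, x2, y2; };

// Interpolate a line pair by midpoint, angle, and length rather than by
// endpoints; the angle takes the shorter rotational route.
Seg interpMidAngleLen(Seg a, Seg b, float t) {
    const float kPi = 3.14159265358979f;
    // midpoints
    float mx = (1 - t) * 0.5f * (a.x1 + a.x2) + t * 0.5f * (b.x1 + b.x2);
    float my = (1 - t) * 0.5f * (a.y1 + a.y2) + t * 0.5f * (b.y1 + b.y2);
    // angles, wrapped so we rotate the short way around
    float anga = std::atan2(a.y2 - a.y1, a.x2 - a.x1);
    float angb = std::atan2(b.y2 - b.y1, b.x2 - b.x1);
    float d = angb - anga;
    while (d >  kPi) d -= 2 * kPi;
    while (d < -kPi) d += 2 * kPi;
    float ang = anga + t * d;
    // lengths
    float len = (1 - t) * std::hypot(a.x2 - a.x1, a.y2 - a.y1)
              + t       * std::hypot(b.x2 - b.x1, b.y2 - b.y1);
    float hx = 0.5f * len * std::cos(ang);
    float hy = 0.5f * len * std::sin(ang);
    return Seg{mx - hx, my - hy, mx + hx, my + hy};
}
```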
During movie animation, allow the user to see how lines interpolate from frame to frame. This can be accomplished by displaying the intermediate lines in the middle window.
Implement three sliders that let the user change the a, b, and p parameters that control the weight of a line. The libui documentation on uiAddSlider might help.
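For reference, these parameters appear in the feature-based warp's per-line weight, weight = (length^p / (a + dist))^b, where dist is the pixel's distance to the line. A direct transcription:

```cpp
#include <cassert>
#include <cmath>

// Per-line weight in the feature-based warp: weight = (length^p / (a + dist))^b.
// 'a' controls how tightly the warp sticks to the line (smaller = tighter),
// 'b' how quickly influence falls off with distance, and 'p' how much longer
// lines dominate shorter ones.
float lineWeight(float length, float dist, float a, float b, float p) {
    return std::pow(std::pow(length, p) / (a + dist), b);
}
```

Sliders over sensible ranges of a, b, and p let the user see each parameter's effect interactively.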
When there are many lines, the computation can become slow. One remedy is to limit the region of influence of each line pair, so that only a small number of pairs need to be considered for any one pixel.
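A simple way to do this is to skip any line farther from the pixel than a cutoff distance. A sketch (struct and function names are placeholders):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Seg2 { float x1, y1, x2, y2; };

// Distance from point (px,py) to segment s.
float segDist(const Seg2& s, float px, float py) {
    float dx = s.x2 - s.x1, dy = s.y2 - s.y1;
    float len2 = dx * dx + dy * dy;
    float t = len2 > 0 ? ((px - s.x1) * dx + (py - s.y1) * dy) / len2 : 0.0f;
    if (t < 0) t = 0;
    if (t > 1) t = 1;
    return std::hypot(px - (s.x1 + t * dx), py - (s.y1 + t * dy));
}

// Indices of the lines close enough to matter for this pixel; lines beyond
// 'cutoff' are skipped entirely, which is where the speedup comes from.
std::vector<int> nearbyLines(const std::vector<Seg2>& lines,
                             float px, float py, float cutoff) {
    std::vector<int> idx;
    for (int i = 0; i < static_cast<int>(lines.size()); ++i)
        if (segDist(lines[i], px, py) <= cutoff) idx.push_back(i);
    return idx;
}
```

Note that a hard cutoff can produce visible seams where a line's influence ends, so the cutoff should be generous relative to the weight falloff.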
Allow the control lines to be moved, deleted, and resized.
When very few quadrilaterals are used to do texture mapping, the morphed image computed as above will probably look lousy. Conversely, if we use more quads than are really necessary, it will take too long to compute the image. A good solution to this problem is to compute the image in an adaptive manner, i.e., using more quads in regions where a single quad is insufficient. Add an option that implements such a method. (The "togglebutton" widget in libui may come in handy here.)
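One way to structure the adaptive scheme is a recursive subdivision: split a quad into four children whenever some error estimate over it exceeds a tolerance. This sketch only counts the quads such a scheme would produce; the error callback is a hypothetical stand-in for your own measure of how badly one texture-mapped quad approximates the warp over a region:

```cpp
#include <cassert>
#include <functional>

// Count the quads an adaptive subdivision would use over the square region
// at (x,y) with side 'size': split into four when the warp error exceeds
// 'tol', recursing at most 'maxDepth' levels.
int adaptiveQuads(float x, float y, float size, float tol, int maxDepth,
                  const std::function<float(float, float, float)>& error) {
    if (maxDepth == 0 || error(x, y, size) <= tol) return 1;
    float h = size / 2;
    return adaptiveQuads(x,     y,     h, tol, maxDepth - 1, error)
         + adaptiveQuads(x + h, y,     h, tol, maxDepth - 1, error)
         + adaptiveQuads(x,     y + h, h, tol, maxDepth - 1, error)
         + adaptiveQuads(x + h, y + h, h, tol, maxDepth - 1, error);
}
```

The real version would texture-map each leaf quad instead of counting it, but the recursion structure is the same.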
Add an option to do progressive display in the preview window. The idea is to progressively display the current approximation to the morphed image during the adaptive texture mapping process. The morphed image will initially be displayed using a single texture-mapped quad per image, then it will progressively get better as more and more quads are used, in breadth-first order.
Add an option to let the user use points instead of (or in addition to) lines to indicate the corresponding features between images.
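One convenient trick is to treat a point feature as a degenerate, very short line, so all the existing line-based warp machinery applies unchanged. A sketch (names are placeholders):

```cpp
#include <cassert>

struct Line3 { float x1, y1, x2, y2; };

// Represent a point feature as a very short horizontal line centered on the
// point; 'eps' keeps the length nonzero so the warp math stays well-defined.
Line3 pointAsLine(float x, float y, float eps) {
    return Line3{x - eps, y, x + eps, y};
}
```

Because such a line has negligible length, it attracts pixels toward the point without imposing much orientation, which is the behavior you usually want from a point feature.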
Come up with a good user-interface and back-end that allows the user to edit two video clips instead of two still images. To begin, you can devise a way to add new keyframes to the line interpolation.