High sensitivity to how features are defined
Warping does not always work well because it is extremely sensitive to how features are defined in the two source images. For simple images our code works well, but results degrade for complicated images such as faces. The three problems we found are: (a) sampling pixels far outside the bounds of the source images, (b) unnatural twisting of the source image, and (c) mysterious white dots in the final intermediate frame, which we traced back to being a direct consequence of (a). Click here to see some of our heartbreaking, botched intermediate frames.
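One simple guard against problem (a), and hence against the white dots in (c), is to clamp every sampled coordinate back into the image before reading it. The sketch below is only illustrative (it is not our actual sampling code) and assumes NumPy with nearest-neighbour sampling:

```python
import numpy as np

def sample_clamped(image, xs, ys):
    """Sample pixels at coordinates (xs, ys), clamping any coordinate
    that falls outside the image to the nearest border pixel, instead
    of reading garbage or leaving holes (white dots) in the frame."""
    h, w = image.shape[:2]
    xs = np.clip(np.round(xs).astype(int), 0, w - 1)
    ys = np.clip(np.round(ys).astype(int), 0, h - 1)
    return image[ys, xs]
```

With this guard, a warped coordinate that lands outside the source simply picks up the nearest edge pixel, which is usually far less visible than a blank dot.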
__Our solution
We spent a tremendous amount of time (more than 50 hours combined) just trying to find out why we kept getting problems (a) and (b): we went over our code line by line more than five times to ensure that every function does exactly what it is supposed to do. We worked with simple images and found our code performing reasonably well on them. We printed out the computed coordinates of the pixels to sample. Still, we just could not figure out why our code sometimes fails.
Beier and Neely's paper makes no specific mention of the problems we encountered. All it says is that weird results can be corrected by adding a few more pairs of control lines in the two source images. We experimented with that suggestion, but the problems persisted. By adjusting the control parameters a, b, and p from the paper, we can sometimes avoid some of these problems, but rarely all of them.
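For reference, the influence that each control line contributes at a pixel in Beier and Neely's scheme is weight = (length^p / (a + dist))^b, where dist is the pixel's distance to the line. A minimal sketch of that formula follows; the default values for a, b, and p here are illustrative placeholders, not the paper's recommendations:

```python
def line_weight(length, dist, a=1.0, b=2.0, p=0.5):
    """Beier-Neely weight of one control line at a pixel.
    length: length of the control line.
    dist:   distance from the pixel to the line.
    a: smoothness near the line (smaller -> pixels on the line are
       pinned more tightly to it).
    b: how quickly influence falls off with distance.
    p: how much longer lines dominate shorter ones (p=0 ignores length).
    """
    return (length ** p / (a + dist)) ** b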
On the other hand, it is interesting that when we assemble our intermediate frames into a movie, the result doesn't look poor at all, even though some individual frames don't turn out perfectly.
Jaggy edges
Even when morphing simple images, we can get jagged, uneven edges.
__Our solution
We applied ordinary antialiasing to smooth out the jaggies: we first compute the pixels of the entire intermediate frame, and then run the antialiasing pass over the finished frame.
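Such a post-pass can be as simple as averaging each pixel with its eight neighbours (a 3x3 box filter) over the finished frame. A minimal sketch for a grayscale frame, assuming NumPy (our actual antialiasing pass may differ):

```python
import numpy as np

def smooth_frame(frame):
    """Post-process antialiasing: replace each pixel with the mean of
    its 3x3 neighbourhood to soften jagged edges. Border pixels are
    handled by replicating the edge row/column."""
    h, w = frame.shape
    padded = np.pad(frame.astype(float), 1, mode="edge")
    out = np.zeros((h, w), dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
    return out / 9.0
```

A box filter blurs the whole frame slightly, not just the jagged edges; a Gaussian filter or edge-aware smoothing would trade sharpness against smoothness differently.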