Online Interscene Alignment with Change Detection: Movies

Accompanying Evan Herbst's PhD thesis, 2014. These videos demonstrate the value of optimizing a change-detection-based objective for aligning frames from a video of one scene to a map of a similar scene.

Here we compare two methods based on visual point feature matching alone and one method that combines point feature matching with optimization of a differencing-based objective. Adding the gradient-based optimization greatly reduces jitter and significantly improves alignment, most noticeably at the edges of the table and on the dinosaur.
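
As a rough illustration of this two-stage approach, the sketch below (requires NumPy and SciPy) initializes a 6-DoF pose, as feature matching would, and then refines it by minimizing a depth-differencing score between the incoming frame and the map. Everything here is an assumption for illustration, not the thesis implementation: the synthetic tilted-wall data, the small-angle rotation parameterization, the truncated-absolute-difference score, and the derivative-free optimizer (the thesis optimizes its objective with gradients).

```python
import numpy as np
from scipy.optimize import minimize

def se3_from_params(p):
    """Build a 4x4 rigid transform from (rx, ry, rz, tx, ty, tz) using a
    small-angle rotation approximation (adequate near the feature-based init)."""
    rx, ry, rz, tx, ty, tz = p
    T = np.eye(4)
    T[:3, :3] = np.array([[1.0, -rz,  ry],
                          [ rz, 1.0, -rx],
                          [-ry,  rx, 1.0]])
    T[:3, 3] = [tx, ty, tz]
    return T

def differencing_score(params, frame_pts, map_depth, K, cap=0.1):
    """Project the frame's 3-D points into the map's depth image under the
    candidate pose and return a truncated mean absolute depth difference --
    a stand-in for a change-detection-based alignment objective."""
    T = se3_from_params(params)
    pts = frame_pts @ T[:3, :3].T + T[:3, 3]
    uv = pts @ K.T
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    h, w = map_depth.shape
    ok = (pts[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    if not ok.any():
        return cap  # nothing projects into the map: worst possible score
    diff = np.abs(map_depth[v[ok], u[ok]] - pts[ok, 2])
    return np.minimum(diff, cap).mean()

# Synthetic scene: a wall about 1 m away, tilted so depth grows across the image.
fx = fy = 525.0; cx, cy = 320.0, 240.0
K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1.0]])
map_depth = np.tile(1.0 + 0.0005 * np.arange(640), (480, 1))

# Backproject a pixel grid to get the frame's points (ground-truth pose = identity).
us, vs = np.meshgrid(np.arange(0, 640, 16), np.arange(0, 480, 16))
z = 1.0 + 0.0005 * us.ravel().astype(float)
frame_pts = np.stack([(us.ravel() - cx) / fx * z,
                      (vs.ravel() - cy) / fy * z, z], axis=1)

init = np.array([0.0, 0.0, 0.0, 0.05, 0.0, -0.03])  # stand-in for a SIFT-based estimate
res = minimize(differencing_score, init, args=(frame_pts, map_depth, K),
               method="Nelder-Mead")
print("initial score:", differencing_score(init, frame_pts, map_depth, K))
print("refined score:", res.fun, "pose params:", np.round(res.x, 4))
```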


the two scenes (in the videos below, frames from the second are aligned to a map previously made of the first)
change detection results with respect to the map of the first scene (red = aligns well; yellow = surfaces not in the other scene; orange = surfaces occluded in the other scene; blue = no information due to invalid depth readings; black = space not covered by the map); a sketch of this per-pixel labeling appears after the video list
change detection results with respect to the incoming frame (colors have the same meaning as in the first column, but black is not applicable here)
online change detection results using alignment by matching FAST + Calonder descriptors to keyframes selected with place recognition (a keyframe-selection sketch appears at the end of this page)
online change detection results using alignment by matching SIFT descriptors to keyframes selected with place recognition
online change detection results using alignment by direct optimization of a change-detection-based score, initialized with SIFT matching
online change detection results using alignment by direct optimization of a heuristic objective (Patch Volumes)
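
To make the color legend above concrete, here is one plausible per-pixel labeling rule, sketched under the assumption that change detection compares the frame's measured depth against the map's expected depth rendered from the aligned viewpoint. The decision rule, the tolerance, and the synthetic inputs are all illustrative assumptions, not the thesis code.

```python
import numpy as np

# Color labels from the legend above.
RED, YELLOW, ORANGE, BLUE, BLACK = range(5)  # aligns / new / occluded / invalid / uncovered

def classify(frame_depth, map_depth, tol=0.02):
    """Label each pixel by comparing the frame's measured depth to the map's
    expected depth rendered from the frame's (aligned) viewpoint."""
    labels = np.full(frame_depth.shape, RED, dtype=np.uint8)
    invalid = ~np.isfinite(frame_depth) | (frame_depth <= 0)  # no depth reading
    uncovered = ~np.isfinite(map_depth) | (map_depth <= 0)    # map has no surface here
    nearer = frame_depth < map_depth - tol   # observed surface in the map's free space
    farther = frame_depth > map_depth + tol  # observed surface behind the map's surface
    labels[farther] = ORANGE   # occluded in the other scene
    labels[nearer] = YELLOW    # surface not in the other scene
    labels[uncovered] = BLACK  # space not covered by the map
    labels[invalid] = BLUE     # invalid depth readings override everything else
    return labels

# Tiny synthetic example: a 1 m wall in the map; the frame adds a box at 0.6 m,
# sees past part of the wall (depth 1.5 m), and has a dead sensor region (NaN).
map_depth = np.full((4, 4), 1.0)
frame_depth = np.full((4, 4), 1.0)
frame_depth[0, :2] = 0.6      # new object -> YELLOW
frame_depth[1, :2] = 1.5      # surface behind the map wall -> ORANGE
frame_depth[2, :2] = np.nan   # invalid readings -> BLUE
map_depth[3, :2] = np.nan     # map doesn't cover this space -> BLACK
print(classify(frame_depth, map_depth))
```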

The Patch Volumes videos were made differently: both are projected onto the frame rather than onto the map, and they play faster than the rest.
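
For the feature-matching methods above, a keyframe must first be chosen by place recognition. One common way to do this, sketched below under the assumption of a bag-of-visual-words model (the thesis may use a different place-recognition backend), is to quantize each frame's descriptors against a visual vocabulary and pick the keyframe with the most similar word histogram.

```python
import numpy as np

def quantize(descriptors, vocabulary):
    """Assign each descriptor to its nearest visual word and return a
    normalized word histogram (a bag-of-visual-words vector)."""
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / (np.linalg.norm(hist) + 1e-12)

def select_keyframe(frame_desc, keyframe_descs, vocabulary):
    """Return the index of the keyframe whose word histogram has the highest
    cosine similarity to the incoming frame's histogram."""
    q = quantize(frame_desc, vocabulary)
    sims = [quantize(kd, vocabulary) @ q for kd in keyframe_descs]
    return int(np.argmax(sims)), sims

# Synthetic 8-D "descriptors": three keyframes, one incoming frame near keyframe 1.
rng = np.random.default_rng(0)
vocabulary = rng.normal(size=(32, 8))             # visual-word centers
keyframe_descs = [rng.normal(loc=c, size=(200, 8)) for c in (-1.0, 0.0, 1.0)]
frame_desc = rng.normal(loc=0.05, size=(150, 8))  # looks most like keyframe 1
best, sims = select_keyframe(frame_desc, keyframe_descs, vocabulary)
print("selected keyframe:", best, "similarities:", np.round(sims, 3))
```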