Robot programming by demonstration has typically been a data-intensive process, requiring more numerous and higher-quality demonstrations than a typical user is willing to provide. We believe instead that skills and tasks can be transferred to robots more effectively through alternative interactions seeded by a single demonstration. Such interactions include interactive visualizations of skills and tasks that allow the user to edit them directly, and natural language interactions that augment demonstrations with meta-information. In this context, we are investigating how the robot's behavior and interface elements help the person form a sound mental model of what the robot can perceive and how it represents skills and tasks.