Intelligent agents must function in a world characterized by high uncertainty, missing information, and a rich structure of objects, classes, and relations. Current AI systems can, for the most part, handle one of these issues but not both. Overcoming this limitation will lay the foundation for the next generation of AI, bringing it significantly closer to human-level performance on the hardest problems. In particular, learning algorithms almost invariably assume that all training examples are mutually independent, yet in most real domains the examples are richly interrelated. We are developing learners for this case and applying them to domains such as link-based Web search, adaptive Web navigation, viral marketing, and social network modeling. We are also developing statistical learning and inference techniques for time-changing relational domains and applying them to fault diagnosis and other problems. More generally, our goal is to develop learners that can learn from noisy input in rich first-order languages, not just human-designed attribute vectors, and are thus much more autonomous and widely applicable.
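To make the contrast with the i.i.d. assumption concrete, here is a minimal sketch of collective classification, where an example's predicted label depends on the labels of the examples it is linked to rather than on its own attributes alone. The graph, labels, and function below are purely illustrative assumptions, not any of the group's actual systems:

```python
def collective_classify(edges, seed_labels, rounds=10):
    """Propagate labels over an undirected graph by majority vote of neighbors.

    edges: iterable of (u, v) pairs
    seed_labels: dict mapping a node to its known label
    """
    # Build adjacency lists.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)

    labels = dict(seed_labels)
    for _ in range(rounds):
        updates = {}
        for node in adj:
            if node in seed_labels:
                continue  # observed labels stay fixed
            # Count the labels currently assigned to this node's neighbors.
            votes = {}
            for nbr in adj[node]:
                if nbr in labels:
                    votes[labels[nbr]] = votes.get(labels[nbr], 0) + 1
            if votes:
                updates[node] = max(votes, key=votes.get)
        labels.update(updates)  # synchronous update after each pass
    return labels


# Toy example: two chains of linked pages, one seeded "spam", one "ham".
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("e", "f"), ("f", "g")]
seeds = {"a": "spam", "e": "ham"}
result = collective_classify(edges, seeds)
```

Under the i.i.d. assumption, nodes b, c, d, f, and g would be unclassifiable (they carry no attributes of their own); exploiting the link structure lets the seed labels propagate through each chain.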