CSE 648
206-543-2969
pedrod@cs.washington.edu
Areas of interest: 

Machine learning, artificial intelligence, data science

Tractable Deep Learning

In machine learning, as throughout computer science, there is a tradeoff between expressiveness and tractability. On the one hand, we need powerful model classes to capture the richness and complexity of the real world. On the other, we need inference in those models to remain tractable, or their potential for widespread practical use is limited. Deep learning can induce powerful representations, with multiple layers of latent variables, but these models are generally intractable. We are developing new classes of similarly expressive but still tractable models, including sum-product networks and tractable Markov logic. These models capture both class-subclass and part-subpart structure in the domain, and are in some respects more expressive than traditional graphical models like Bayesian networks and Markov random fields. Our research includes designing representations, studying their properties, developing efficient algorithms for learning them, and applying them to challenging problems in natural language understanding, vision, and other areas.
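To make the tractability claim concrete, here is a minimal sketch (in Python, with illustrative names; not code from any of our systems) of sum-product network evaluation over two binary variables. Because inference is a single bottom-up pass, joint and marginal queries both take time linear in the size of the network; marginalizing a variable out simply sets its leaves to 1.

    # A tiny sum-product network over two binary variables X1 and X2.
    # Sum nodes mix distributions; product nodes combine disjoint
    # variable scopes. One bottom-up pass answers any joint or
    # marginal query in time linear in the network's size.

    class Leaf:
        # Bernoulli leaf over one variable: P(X = 1) = p.
        def __init__(self, var, p):
            self.var, self.p = var, p
        def value(self, evidence):
            x = evidence.get(self.var)    # None = marginalized out
            if x is None:
                return 1.0                # summed over both states
            return self.p if x == 1 else 1.0 - self.p

    class Product:
        def __init__(self, children):
            self.children = children
        def value(self, evidence):
            result = 1.0
            for child in self.children:
                result *= child.value(evidence)
            return result

    class Sum:
        def __init__(self, weighted_children):  # weights sum to 1
            self.weighted_children = weighted_children
        def value(self, evidence):
            return sum(w * c.value(evidence) for w, c in self.weighted_children)

    # A mixture of two product distributions over {X1, X2}.
    spn = Sum([(0.6, Product([Leaf("X1", 0.9), Leaf("X2", 0.2)])),
               (0.4, Product([Leaf("X1", 0.1), Leaf("X2", 0.8)]))])

    print(spn.value({"X1": 1, "X2": 0}))  # joint P(X1=1, X2=0)
    print(spn.value({"X1": 1}))           # marginal P(X1=1), same cost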

Alchemy

Alchemy is a software package providing a suite of algorithms for statistical relational learning and probabilistic logic inference, based on the Markov logic representation. Alchemy makes it easy to develop a wide range of AI applications, including the following (a sketch of the underlying Markov logic semantics appears after the list):
  • Collective classification
  • Link prediction
  • Entity resolution
  • Social network modeling
  • Information extraction
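Underlying all of these is the Markov logic semantics: a possible world's probability is proportional to the exponentiated sum of the weights of the ground formulas it satisfies. The Python sketch below illustrates that semantics on the classic friends-and-smokers example by brute-force enumeration; it is not Alchemy's own API, which works in a first-order language and uses far more scalable inference algorithms.

    import itertools, math

    # Markov logic assigns each possible world x a probability
    # proportional to exp(sum_i w_i * n_i(x)), where n_i(x) counts
    # the satisfied groundings of formula i in x.

    PEOPLE = ["Anna", "Bob"]

    def n_smoking_causes_cancer(world):
        # 1.5  Smokes(x) => Cancer(x)
        return sum(1 for p in PEOPLE
                   if not world["Smokes", p] or world["Cancer", p])

    def n_friends_smoke_alike(world):
        # 1.1  Friends(x, y) => (Smokes(x) <=> Smokes(y))
        return sum(1 for x, y in itertools.product(PEOPLE, PEOPLE)
                   if not world["Friends", x, y]
                   or world["Smokes", x] == world["Smokes", y])

    FORMULAS = [(1.5, n_smoking_causes_cancer), (1.1, n_friends_smoke_alike)]

    ATOMS = ([("Smokes", p) for p in PEOPLE]
             + [("Cancer", p) for p in PEOPLE]
             + [("Friends", x, y) for x in PEOPLE for y in PEOPLE])

    def weight(world):
        # Unnormalized probability of one complete truth assignment.
        return math.exp(sum(w * n(world) for w, n in FORMULAS))

    def all_worlds():
        for bits in itertools.product([False, True], repeat=len(ATOMS)):
            yield dict(zip(ATOMS, bits))

    # P(Cancer(Anna) | Smokes(Anna)) by enumeration over 2^8 worlds.
    num = sum(weight(w) for w in all_worlds()
              if w["Smokes", "Anna"] and w["Cancer", "Anna"])
    den = sum(weight(w) for w in all_worlds() if w["Smokes", "Anna"])
    print(num / den)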

Collective Knowledge Bases

The production and use of knowledge is a collective enterprise, and communication between its participants is the bottleneck. The costs of this bottleneck include duplicated work, misdirected work, slower progress, and suboptimal decisions made for lack of knowledge that is actually available. The Internet has greatly reduced the physical barriers to communication and coordination; our focus is on helping overcome the intellectual ones.

Large-Scale Machine Learning

In many domains, data now arrives faster than we are able to learn from it. To avoid wasting this data, we must switch from the traditional "one-shot" machine learning approach to systems that can mine continuous, high-volume, open-ended data streams as they arrive. We have identified a set of desiderata for such systems and developed an approach to building stream mining algorithms that satisfies all of them. The approach is based on explicitly minimizing the number of examples used in each learning step, while guaranteeing that user-defined targets for predictive performance are met.
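Hoeffding-tree learners such as VFDT exemplify this idea: the Hoeffding bound says how far an observed mean can stray from the true mean after n examples, so the learner can stop collecting examples as soon as the leading option is separated from the runner-up by more than that margin. A minimal sketch of the core statistical test, with illustrative names:

    import math

    def hoeffding_bound(value_range, delta, n):
        # After n i.i.d. observations of a quantity with the given range,
        # the true mean lies within this epsilon of the observed mean
        # with probability at least 1 - delta.
        return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

    def enough_examples(best, runner_up, value_range, delta, n):
        # Stop as soon as the top-scoring option (e.g., a candidate
        # split) beats the runner-up by more than the bound: with
        # confidence 1 - delta, it is the true winner.
        return best - runner_up > hoeffding_bound(value_range, delta, n)

    # Comparing information gains (range about 1 bit for a binary class):
    print(enough_examples(0.30, 0.24, 1.0, delta=1e-6, n=5000))  # True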

Machine Reading

We seek to apply natural language processing, information extraction, and machine learning methods to build semantic representations of individual texts and of large corpora such as the WWW.

Statistical Relational Learning

Intelligent agents must function in a world characterized by high uncertainty and missing information, and by a rich structure of objects, classes, and relations. Current AI systems are, for the most part, able to handle one of these issues but not both. Overcoming this limitation will lay the foundation for the next generation of AI, bringing it significantly closer to human-level performance on the hardest problems. In particular, learning algorithms almost invariably assume that all training examples are mutually independent, but in real domains the examples often have complex relations among them.
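As an illustration of why the independence assumption fails, consider labeling linked web pages: each page's class depends on its neighbors' classes, so the examples must be classified collectively. The sketch below (illustrative names and numbers, not one of our released systems) runs a simple iterative collective inference on a toy graph, mixing each node's local evidence with its neighbors' current beliefs.

    # Toy collective classification over three linked pages A, B, C.
    # Each node has a local score toward the positive class, but its
    # final belief also depends on its neighbors' beliefs, so the
    # nodes cannot be classified independently.

    EDGES = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
    LOCAL = {"A": 0.9, "B": 0.5, "C": 0.2}  # standalone P(positive)
    ALPHA = 0.5                             # weight on local evidence

    belief = dict(LOCAL)
    for _ in range(20):                     # iterate to (near) convergence
        belief = {node: ALPHA * LOCAL[node]
                  + (1 - ALPHA) * sum(belief[nbr] for nbr in nbrs) / len(nbrs)
                  for node, nbrs in EDGES.items()}

    # B is locally undecided (0.5) but is pulled up by its link to A.
    print({node: round(p, 3) for node, p in belief.items()})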