CSE 590 AI, Autumn 2004

Current Research in Artificial Intelligence

New Time: Fridays 12:30-1:30
New Location: CSE 403

Organizer:
Henry Kautz

Mailing list: https://mailman.cs.washington.edu/mailman/listinfo/cse590ai

This weekly seminar presents a mix of research talks by UW students and faculty, talks by invited speakers, and discussions of recent papers.

All students and faculty are warmly encouraged to attend!  Please email the organizer if you would like to sign up to discuss your research, or if you have suggestions for speakers to invite or papers to discuss.

Oct 1 AI Lunch in Room 609 (The AI Laboratory)

Enjoy pizza while schmoozing with other students and faculty.

Oct 8 Oren Etzioni (UW CSE)

Web-scale Information Extraction in KnowItAll (Preliminary Results) 

Manually querying search engines in order to accumulate a large body of factual information is a tedious, error-prone process of piecemeal search. Search engines retrieve and rank potentially relevant documents for human perusal, but do not extract facts, assess confidence, or fuse information from multiple documents. My talk introduces KnowItAll, a system that aims to automate the tedious process of extracting large collections of facts from the web in an autonomous, domain-independent, and scalable manner. I will also speculate on the long-term implications of the work for building an intelligent system that engages in "life-long" learning. 
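The kind of domain-independent fact extraction described above can be illustrated with generic lexical patterns (Hearst-style "such as" patterns, a standard technique for this task). The sketch below is a hypothetical minimal example, not KnowItAll's actual extraction rules:

```python
import re

# A hedged sketch of generic pattern-based fact extraction.
# The two patterns below are standard Hearst-style patterns; the names
# SUCH_AS, AND_OTHER, and extract() are illustrative inventions.

SUCH_AS = re.compile(r"(\w+)\s+such as\s+([A-Z]\w+)")
AND_OTHER = re.compile(r"([A-Z]\w+)\s+and other\s+(\w+)")

def extract(text):
    """Return (instance, 'is-a', class) triples found by the two patterns."""
    facts = set()
    for m in SUCH_AS.finditer(text):
        facts.add((m.group(2), "is-a", m.group(1).lower()))
    for m in AND_OTHER.finditer(text):
        facts.add((m.group(1), "is-a", m.group(2).lower()))
    return facts

facts = extract("We visited cities such as Seattle, and Portland and other cities.")
# → {('Seattle', 'is-a', 'cities'), ('Portland', 'is-a', 'cities')}
```

A full system would of course also assess confidence in each candidate fact and fuse evidence across many documents, as the abstract notes.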

Oct 15 Henry Kautz (UW CSE) and Tanzeem Choudhury (Intel Research Seattle)

Why Get Wired?

A future where computers monitor every move we make has long been the vision of dystopic science fiction.  Now that such a future is nearly here, some people (such as the writers of Wired magazine) look forward to all the cool things it will bring us - which all boil down to ways to make it easier to watch television and buy stuff.  In this talk, by contrast, we will discuss some applications of sensors, monitoring, and artificial intelligence that are neither evil nor stupid.

Oct 22

12:00 Free lunch in 609

12:30 Talk in 403

Heidi Dixon (University of Oregon, Eugene)

Satisfiability, Structure, and Permutation Groups

Many problem domains of interest contain structure that is apparent in high-level problem descriptions but is lost when instances are translated into CNF for solution by standard satisfiability engines. The term 'structure' is somewhat vague, but generally means that a problem contains recognizable patterns. It has long been argued that we need to exploit problem structure if we want to solve problems efficiently. In my talk I'll argue that we need a general language for describing and reasoning about problem structure, and I'll show that solving even a simple embedded pigeonhole problem is likely to require the ability to reason about more than one type of problem structure. I'll present a new propositional representation that allows problem structure to be described and reasoned about efficiently and uniformly. The basic unit of representation is a Boolean clause together with a permutation group on the set of literals in the problem. Multiple database axioms can be obtained by operating on the single Boolean axiom with the permutation group. I'll introduce ZAP, a solver based on these ideas, and present initial results from our implementation showing polynomial scaling on pigeonhole problems, Urquhart problems, and clique coloring problems.

Oct 29 Jeff Bilmes (UW EE)

Speech, Language, and Graphical Models: Representation and Computation

Graphical models are a general statistical abstraction that can represent the inherent uncertainty within many scientific and engineering tasks. Recent research has indicated that they hold great promise for advancing speech and language processing. These time series, however, have properties that introduce unique and challenging research problems. The first part of this talk will present a variety of graph features and structures that solve many problems in speech and language. Two styles of structure determination are of particular interest: those whose goal is to represent knowledge, and those that are determined in a data-driven fashion specifically to reduce errors. Next, we will outline the computational challenges in applying this framework to speech and language, and describe our solutions. These include dynamic graph triangulation methods, dynamic exact and approximate probabilistic inference methods, and a host of other techniques, which together make computation tractable even with an enormous number of system variables.

Nov 5

12:00 Free lunch in 609

12:30 Talk in 403

Sebastian Thrun (Stanford University)

Robotic Mapping: Solved?

This talk addresses a decades-old debate in the robotic mapping community. It presents a unified probabilistic framework spanning the two major philosophical paradigms in the field, known as topological and metric mapping. I will argue that one is just the inverse of the other, and that by cleverly jumping between both paradigms we can achieve unprecedented scalability (from 10^3 to 10^8). I will present results obtained with an autonomous robot developed for mapping abandoned mines.

Nov 12

12:00 Free lunch in 609

12:30 Talk in 403

Stuart Russell (University of California at Berkeley)

Uncertainty in an Unknown World

Recent advances in knowledge representation for probability models have allowed for uncertainty about the properties of objects and the relations that might hold among them. Such models, however, typically assume exact knowledge of which objects exist and of which object is which---that is, they assume *domain closure* and *unique names*. These assumptions greatly simplify the sample space for probability models, but are inappropriate for many real-world situations. This talk presents a formal language, BLOG, for defining probability models over worlds with unknown objects and in which several terms may refer to the same object. The language has a simple syntax based on first-order logic, combined with local probability functions for quantifying conditional dependencies. A key additional element is the *number* statement, which specifies a conditional distribution over the number of objects that satisfy a given property. Subject to certain acyclicity constraints, every BLOG model specifies a unique probability distribution over the full set of possible worlds for the first-order language. Furthermore, complete inference algorithms exist for a large fragment of the language. I will present several example models and discuss interesting issues arising from the treatment of evidence in such languages.
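The role of a number statement can be illustrated with a tiny generative sketch. This is a hypothetical toy in Python, not the BLOG language itself; the aircraft/blip scenario, the specific distributions, and the function name sample_world are all illustrative assumptions:

```python
import random

# A hedged sketch of a model with unknown objects and no unique names.
# A "number statement" draws how many objects of a type exist; attributes
# are then drawn per object, and observations refer to objects ambiguously.

def sample_world(rng):
    # Number statement: a distribution over the number of aircraft
    # (a simple categorical stands in for the conditional distribution).
    n_aircraft = rng.choice([1, 2, 3])
    # Local probability function: each aircraft's position.
    positions = [rng.uniform(0.0, 100.0) for _ in range(n_aircraft)]
    # Observations lack unique names: each radar blip refers to a
    # uniformly chosen aircraft, so two blips may be the same object.
    blips = [rng.choice(positions) + rng.gauss(0.0, 1.0) for _ in range(4)]
    return n_aircraft, positions, blips

rng = random.Random(0)
n, positions, blips = sample_world(rng)
```

Inference in such a model must reason jointly over how many objects exist and which observation came from which object, which is exactly what the domain-closure and unique-names assumptions sweep away.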

Nov 19 Paul Beame (UW CSE)

What can complexity theory tell us about automated inference?

Dec 3 Marina Meila (UW Statistics)

Spectral clustering meets machine learning

Dec 10

12:00 Free lunch in 609

12:30 Talk in 403

Bart Selman (Cornell University)

Algorithmic adventures at the interface of computer science, statistical physics, and combinatorics

I will cover a series of recent developments in the design and study of combinatorial algorithms. Among the most exciting is a new class of probabilistic methods coming out of statistical physics and information theory. I will also cover recent progress in sampling from combinatorial spaces using random walks and Markov chain methods.
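The random-walk idea mentioned above can be illustrated with a classic local-search procedure for satisfiability (Papadimitriou-style random walk, a generic textbook method; the talk's specific techniques may differ, and the function name and parameters below are assumptions):

```python
import random

# A hedged sketch of exploring a combinatorial space by random walk:
# repeatedly pick an unsatisfied clause and flip one of its variables.

def random_walk_sat(clauses, n_vars, max_flips=10000, rng=None):
    """Search for a satisfying assignment of a CNF formula.
    Clauses are tuples of signed ints (+v true, -v false)."""
    rng = rng or random.Random()
    assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]  # 1-indexed

    def satisfied(clause):
        return any(assign[abs(lit)] == (lit > 0) for lit in clause)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return assign[1:]                 # found a solution
        lit = rng.choice(rng.choice(unsat))   # literal in a failing clause
        assign[abs(lit)] = not assign[abs(lit)]
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
clauses = [(1, 2), (-1, 3), (-2, -3)]
solution = random_walk_sat(clauses, 3, rng=random.Random(1))
```

Such walks, and their Markov-chain generalizations, can also be used to sample (near-)uniformly from the set of solutions rather than merely find one.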