Tuesday, May 22, 2012 - 12:30

Learning Hierarchical Representations for Recognition

Speaker: Liefeng Bo, Dept. Computer Science & Eng., Univ. Washington
Location: Gates Commons

Humans are able to recognize objects despite significant
variations in their appearance due to changing viewpoints, scales,
lighting conditions and deformations. This ability fundamentally relies
on robust representations/features of the physical world. In this talk,
I will introduce our recent work on representation/feature learning:
hierarchical kernel descriptor and hierarchical matching pursuit.
Hierarchical kernel descriptor is a kernel based feature learning
approach that uses efficient matching kernels to build feature
hierarchies. Hierarchical matching pursuit is a multi-layer sparse
coding approach that learns multi-level features from raw sensor data by
recursively applying matching pursuit coding followed by spatial pyramid
pooling. Our models outperform the state-of-the-art, often by a large
margin, on more than fifteen popular computer vision benchmarks for many
types of recognition tasks, including RGB(-D) object recognition,
fine-grained object recognition, digit recognition, face recognition,
material recognition, scene recognition, and RGB(-D) scene labeling. In
addition, I will discuss the connections and differences between our
models and bag-of-words models, deformable part models, and deep
networks/feature learning in terms of the underlying architecture.
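
The abstract describes hierarchical matching pursuit as matching pursuit
coding followed by spatial pyramid pooling, applied layer by layer. The
sketch below is not the authors' implementation; it is a hypothetical
single layer that codes image patches against a random dictionary with
greedy matching pursuit and then max-pools the codes over a two-level
spatial pyramid.

    import numpy as np

    rng = np.random.default_rng(0)

    def matching_pursuit(x, D, n_nonzero=4):
        # Greedy matching pursuit: represent x with a few dictionary atoms.
        code = np.zeros(D.shape[1])
        residual = x.copy()
        for _ in range(n_nonzero):
            scores = D.T @ residual                # correlations with all atoms
            k = int(np.argmax(np.abs(scores)))     # pick the best-matching atom
            code[k] += scores[k]
            residual -= scores[k] * D[:, k]        # peel off the explained part
        return code

    def spatial_pyramid_pool(code_grid, levels=(1, 2)):
        # Max-pool patch codes over 1x1 and 2x2 spatial cells and concatenate.
        gh, gw, _ = code_grid.shape
        pooled = []
        for n in levels:
            for i in range(n):
                for j in range(n):
                    cell = code_grid[i * gh // n:(i + 1) * gh // n,
                                     j * gw // n:(j + 1) * gw // n]
                    pooled.append(np.abs(cell).max(axis=(0, 1)))
        return np.concatenate(pooled)

    # Toy first layer: 8x8 patches on a 4x4 grid, random 64x100 dictionary.
    patch_dim, n_atoms, grid = 64, 100, 4
    D = rng.standard_normal((patch_dim, n_atoms))
    D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
    patches = rng.standard_normal((grid, grid, patch_dim))

    codes = np.stack([[matching_pursuit(patches[i, j], D) for j in range(grid)]
                      for i in range(grid)])
    feature = spatial_pyramid_pool(codes)          # layer-one image feature
    print(feature.shape)                           # (500,) = (1 + 4) cells x 100 atoms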

Tuesday, May 8, 2012 - 12:30

Linguistic Structure Prediction with AD³

Speaker: Noah Smith, Carnegie Mellon University
Location: Gates Commons

In this talk, I will present AD³ (Alternating Directions Dual
Decomposition), an algorithm for approximate MAP inference in loopy
graphical models with discrete random variables, including structured
prediction problems. AD³ is simple to implement and well-suited to
problems with hard constraints expressed in first-order logic. It often
finds the exact MAP solution, giving a certificate when it does; when it
doesn't, it can be embedded within an exact branch and bound technique.
I'll show experimental results on two natural language processing tasks,
dependency parsing and frame-semantic parsing. This work was done in
collaboration with André Martins, Dipanjan Das, Pedro Aguiar, Mário
Figueiredo, and Eric Xing.
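
AD³ replaces the subgradient step of classical dual decomposition with
quadratic local subproblems; the sketch below is therefore not AD³ itself
but the simpler subgradient dual decomposition it improves on, applied to
a hypothetical three-variable chain split into two pairwise factors. It
shows the shared structure: each factor solves its own MAP exactly, and
Lagrange multipliers are adjusted until the copies of the shared variable
agree, at which point the solution is certified optimal.

    import numpy as np

    # MAP inference by subgradient dual decomposition on a toy binary chain
    # x1 - x2 - x3, split into factor A over (x1, x2) and factor B over (x2, x3).
    # The factors must agree on x2; a Lagrange multiplier lam enforces agreement.

    unary = np.array([[0.0, 1.2],    # scores for x1 = 0/1
                      [0.5, 0.4],    # scores for x2 = 0/1 (split evenly between A and B)
                      [1.0, 0.0]])   # scores for x3 = 0/1
    pair_A = np.array([[1.0, 0.0], [0.0, 1.0]])   # (x1, x2) prefers agreement
    pair_B = np.array([[0.0, 1.0], [1.0, 0.0]])   # (x2, x3) prefers disagreement

    def argmax_pair(u_first, u_second, pairwise):
        # Exact MAP over one pairwise factor by enumerating the 2x2 assignments.
        scores = u_first[:, None] + u_second[None, :] + pairwise
        return np.unravel_index(np.argmax(scores), scores.shape)

    lam = np.zeros(2)                # multiplier on the two values of the shared x2
    for t in range(100):
        step = 1.0 / (t + 1)
        # Subproblem A sees +lam on x2, subproblem B sees -lam.
        x1, x2_A = argmax_pair(unary[0], unary[1] / 2 + lam, pair_A)
        x2_B, x3 = argmax_pair(unary[1] / 2 - lam, unary[2], pair_B)
        if x2_A == x2_B:             # copies agree: this is the exact MAP (certificate)
            break
        # Subgradient step pushes the two copies of x2 toward agreement.
        g = np.eye(2)[x2_A] - np.eye(2)[x2_B]
        lam -= step * g

    print("MAP assignment:", x1, x2_A, x3)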

Tuesday, April 24, 2012 - 12:30

Decision making in the primate brain: A model based on POMDPs and reinforcement learning

Speaker: Raj Rao, University of Washington
Location: Room 305

How does the brain learn to make decisions based on noisy sensory
information and incomplete knowledge of the world? In this talk, I
will sketch a neural model of action selection and decision making
based on the general framework of partially observable Markov decision
processes (POMDPs). The model postulates that actions are selected so
as to maximize expected cumulative reward, where rewards can be
external (e.g., food) or internal (e.g., penalty for delay). Action
selection is based on the posterior distribution over states (the
"belief" state) computed using Bayesian inference. A reinforcement
learning algorithm known as temporal difference (TD) learning
maximizes the expected reward. I will describe how such a model
provides a unified framework for explaining recent experimental
results on decision making in the primate brain. The resulting neural
architecture posits an active role for the neocortex in Bayesian
inference while ascribing a role to the basal ganglia in value
computation and action selection. The model suggests an important role
for interactions between the neocortex and the basal ganglia in
learning the mapping between probabilistic sensory representations and
actions that maximize rewards.
(Joint work with Yanping Huang and Abe Friesen)
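
As a rough, hypothetical illustration of the two ingredients combined in
the model (Bayesian belief updating and TD learning), the toy simulation
below has an agent accumulate noisy evidence about a binary hidden state,
update its belief by Bayes' rule, and learn the value of discretized
belief states with TD(0) under a fixed decision threshold. All parameters
are made up; this is not the neural model described in the talk.

    import numpy as np

    rng = np.random.default_rng(0)

    # Two hidden states, noisy Gaussian observations, a sampling cost,
    # and a fixed confidence threshold for committing to a decision.
    SIGMA, SAMPLE_COST, THRESHOLD = 1.0, -0.05, 0.9
    ALPHA, GAMMA, N_BINS = 0.1, 1.0, 21

    def likelihood(obs, state):
        # p(obs | state): Gaussian around +1 for state 1, -1 for state 0.
        mean = 1.0 if state == 1 else -1.0
        return np.exp(-0.5 * ((obs - mean) / SIGMA) ** 2)

    def belief_update(b, obs):
        # Bayes' rule on the posterior probability that the hidden state is 1.
        num = b * likelihood(obs, 1)
        return num / (num + (1 - b) * likelihood(obs, 0))

    V = np.zeros(N_BINS)                         # value of discretized belief states
    bin_of = lambda b: int(b * (N_BINS - 1))

    for trial in range(5000):
        state = rng.integers(2)
        b = 0.5
        while THRESHOLD > b > 1 - THRESHOLD:     # fixed policy: sample until confident
            obs = rng.normal(1.0 if state == 1 else -1.0, SIGMA)
            b_next = belief_update(b, obs)
            # TD(0) update for the "sample" step (pay the cost, move to the next belief).
            V[bin_of(b)] += ALPHA * (SAMPLE_COST + GAMMA * V[bin_of(b_next)] - V[bin_of(b)])
            b = b_next
        choice = int(b >= THRESHOLD)             # commit to the more probable state
        reward = float(choice == state)
        V[bin_of(b)] += ALPHA * (reward - V[bin_of(b)])   # terminal TD update

    print("value near b=0.5:", V[bin_of(0.5)], " value near b=0.95:", V[bin_of(0.95)])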

Tuesday, April 10, 2012 - 12:30

Spectral Clustering and Higher-Order Cheeger Inequalities

Speaker: James Lee, Dept. Computer Science & Eng., Univ. Washington
Location: Gates Commons

I will try to tell a story of spectral clustering, from
heuristics, to experiments, to a new algorithm and its formal analysis.
In an interesting twist, the same tools used to analyze spectral
partitioning yield new ways to mine the structure of noisy systems of
linear equations modulo a prime. This, in turn, allows us to mount a
spectral attack on one of the most prominent open problems in complexity
theory. [This is based partly on joint work with Shayan Oveis-Gharan and
Luca Trevisan.]
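
For readers unfamiliar with the spectral partitioning heuristic that
Cheeger-type inequalities analyze, here is a minimal sketch (not from the
talk): it computes the second eigenvector of the normalized Laplacian of a
small graph and performs a sweep cut, returning the threshold set of
lowest conductance.

    import numpy as np

    def sweep_cut(A):
        # Spectral partitioning: second eigenvector of the normalized Laplacian,
        # then a sweep over thresholds, keeping the cut of lowest conductance.
        d = A.sum(axis=1)
        D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
        L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt   # normalized Laplacian
        _, vecs = np.linalg.eigh(L)
        fiedler = D_inv_sqrt @ vecs[:, 1]                  # embedding used by the sweep
        order = np.argsort(fiedler)

        best_set, best_cond = None, np.inf
        vol_total = d.sum()
        for i in range(1, len(A)):
            in_S = np.zeros(len(A), dtype=bool)
            in_S[order[:i]] = True
            cut = A[in_S][:, ~in_S].sum()                  # edge weight leaving S
            vol = min(d[in_S].sum(), vol_total - d[in_S].sum())
            if cut / vol < best_cond:
                best_set, best_cond = order[:i], cut / vol
        return best_set, best_cond

    # Two 4-cliques joined by a single edge: the sweep should separate the cliques.
    A = np.zeros((8, 8))
    A[:4, :4] = 1; A[4:, 4:] = 1
    np.fill_diagonal(A, 0)
    A[3, 4] = A[4, 3] = 1
    S, phi = sweep_cut(A)
    print(sorted(S.tolist()), "conductance:", phi)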

Tuesday, March 6, 2012 - 12:30

Gaussian Processes for Approximating Complex Models

Speaker: Murali Haran, Pennsylvania State University
Location: EEB 037

This is a tutorial on the use of Gaussian processes to approximate complex computer models, such as those used to simulate climate or disease dynamics. The tutorial will begin with an introduction to Gaussian processes, followed by a discussion of how they can be used for both computer model emulation (approximation) and calibration.
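
A minimal sketch of the emulation idea, with a stand-in function playing
the role of the expensive simulator: a Gaussian process is conditioned on
a handful of simulator runs and then predicts the output, with
uncertainty, at new inputs. The kernel and all settings below are
illustrative assumptions, not recommendations from the tutorial.

    import numpy as np

    def expensive_simulator(x):
        # Stand-in for a costly computer model run.
        return np.sin(3 * x) + 0.5 * x

    def rbf_kernel(a, b, length=0.5, variance=1.0):
        # Squared-exponential covariance between two sets of 1-D inputs.
        return variance * np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

    # A few simulator runs at design points.
    X_train = np.linspace(0.0, 2.0, 6)
    y_train = expensive_simulator(X_train)

    # GP emulator: condition on the runs, predict at new inputs.
    noise = 1e-6                                  # jitter for numerical stability
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_inv_y = np.linalg.solve(K, y_train)

    X_test = np.linspace(0.0, 2.0, 5)
    K_star = rbf_kernel(X_test, X_train)
    mean = K_star @ K_inv_y                                        # predictive mean
    cov = rbf_kernel(X_test, X_test) - K_star @ np.linalg.solve(K, K_star.T)
    std = np.sqrt(np.clip(np.diag(cov), 0.0, None))                # predictive std

    for x, m, s, truth in zip(X_test, mean, std, expensive_simulator(X_test)):
        print(f"x={x:.2f}  emulator={m:+.3f} +/- {s:.3f}  simulator={truth:+.3f}")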

Tuesday, February 28, 2012 - 12:30

Causality

Speaker: Thomas Richardson, University of Washington
Location: EEB 037

As these talks are intended as overviews, time permitting, I plan to give two mini-tutorials. The first will cover the idea of counterfactuals/potential outcomes, essentially answering the question: how does data obtained from a simple randomized experiment differ from data obtained from an observational study, and how does that weaken the inferences that can be drawn? The second will assume a little background on Bayesian networks and will answer the question: if we obtain data from a subset of the variables in a causal Bayesian network, which causal effects are identified, and how can they be computed efficiently?
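
As a small, hypothetical illustration of the identification question in
the second mini-tutorial, the simulation below generates data with one
observed confounder and compares the naive observational contrast with
the back-door adjustment estimate of the effect of X on Y; only the
adjusted estimate recovers the effect a randomized experiment would
measure.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000

    # Confounder Z influences both treatment X and outcome Y; the true effect of X on Y is +0.2.
    Z = rng.binomial(1, 0.5, n)
    X = rng.binomial(1, 0.2 + 0.6 * Z)            # treated more often when Z = 1
    Y = rng.binomial(1, 0.1 + 0.2 * X + 0.5 * Z)  # outcome raised by both X and Z

    # Naive observational contrast: biased because Z is not held fixed.
    naive = Y[X == 1].mean() - Y[X == 0].mean()

    # Back-door adjustment: average the X-contrast within each stratum of Z, weighted by P(Z=z).
    adjusted = sum(
        (Y[(X == 1) & (Z == z)].mean() - Y[(X == 0) & (Z == z)].mean()) * (Z == z).mean()
        for z in (0, 1)
    )

    print(f"naive estimate:    {naive:.3f}")
    print(f"adjusted estimate: {adjusted:.3f}   (true effect: 0.200)")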

Tuesday, February 21, 2012 - 12:30

Counting and Sampling Solutions of Combinatorial Problems

Speaker: Ashish Sabharwal, IBM T.J. Watson Research Center
Location: EEB 037

This tutorial presents a survey of algorithms for counting and sampling the solutions of SAT problems.
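
Practical counters and samplers are far more sophisticated than this, but
the hypothetical sketch below shows the baseline they improve on:
brute-force enumeration to count the satisfying assignments of a small CNF
formula, and rejection sampling to draw a uniformly random solution.

    import itertools
    import random

    # CNF over variables 1..4: a clause is a list of literals, negative = negated variable.
    # (x1 or ~x2) and (x2 or x3) and (~x1 or ~x3 or x4)
    clauses = [[1, -2], [2, 3], [-1, -3, 4]]
    n_vars = 4

    def satisfies(assignment, clauses):
        # assignment maps variable -> bool; a clause holds if any of its literals is true.
        return all(any(assignment[abs(l)] == (l > 0) for l in clause) for clause in clauses)

    # Exact model counting by enumerating all 2^n assignments (the brute-force baseline).
    count = 0
    for bits in itertools.product([False, True], repeat=n_vars):
        assignment = dict(zip(range(1, n_vars + 1), bits))
        if satisfies(assignment, clauses):
            count += 1
    print("number of satisfying assignments:", count)

    # Uniform sampling of solutions by rejection: propose random assignments, keep a SAT one.
    random.seed(0)
    while True:
        proposal = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
        if satisfies(proposal, clauses):
            print("uniform sample over solutions:", proposal)
            break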

Tuesday, February 14, 2012 - 12:30

Knowledge Extraction for Biomedical Text

Speaker: Hoifung Poon, Microsoft Research
Location:

This tutorial summarizes the literature on knowledge extraction in scientific domains such as biomedical text.

Tuesday, February 7, 2012 - 12:30

Machine Learning for Information Retrieval

Speaker: Niranjan Balasubramanian, University of Washington
Location: EEB 037

This tutorial discusses machine learning techniques popular in the information retrieval community, such as learning-to-rank methods and PageRank.
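
One of the techniques mentioned, PageRank, fits in a few lines; the sketch
below (illustrative only, with a made-up link graph) runs power iteration
with a damping factor until the ranks converge.

    import numpy as np

    def pagerank(adj, damping=0.85, tol=1e-10):
        # Power iteration for PageRank with a damping (teleport) factor.
        n = len(adj)
        out_deg = adj.sum(axis=1)
        # Column-stochastic transition matrix; dangling pages link uniformly everywhere.
        P = np.where(out_deg[:, None] > 0,
                     adj / np.maximum(out_deg[:, None], 1), 1.0 / n).T
        rank = np.full(n, 1.0 / n)
        while True:
            new_rank = (1 - damping) / n + damping * P @ rank
            if np.abs(new_rank - rank).sum() < tol:
                return new_rank
            rank = new_rank

    # Tiny web graph: page 0 -> 1, 2; page 1 -> 2; page 2 -> 0; page 3 -> 2.
    adj = np.array([[0, 1, 1, 0],
                    [0, 0, 1, 0],
                    [1, 0, 0, 0],
                    [0, 0, 1, 0]], dtype=float)
    print(pagerank(adj).round(3))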

Tuesday, January 31, 2012 - 12:30

Trajectory Optimization with Differential Dynamic Programming

Speaker: Tom Erez, University of Washington
Location: EEB 037

This tutorial gives an introduction to control theory, in particular discussing trajectory optimization techniques such as differential dynamic programming.
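
Differential dynamic programming repeatedly fits a quadratic model of the
cost-to-go around a nominal trajectory; for linear dynamics and quadratic
costs that inner step reduces to the finite-horizon LQR Riccati recursion.
The sketch below is a stand-in rather than full DDP: it runs that backward
pass and a forward rollout for a hypothetical 1-D double integrator.

    import numpy as np

    # Discrete-time double integrator: state = [position, velocity], control = acceleration.
    dt = 0.1
    A = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([[0.0], [dt]])
    Q = np.diag([1.0, 0.1])          # state cost
    R = np.array([[0.01]])           # control cost
    T = 50                           # horizon

    # Backward pass: Riccati recursion gives time-varying feedback gains K[t].
    V = Q.copy()                     # terminal cost-to-go Hessian
    K = [None] * T
    for t in reversed(range(T)):
        Quu = R + B.T @ V @ B
        Qux = B.T @ V @ A
        K[t] = np.linalg.solve(Quu, Qux)          # feedback law u_t = -K[t] x_t
        V = Q + A.T @ V @ A - Qux.T @ K[t]

    # Forward pass: roll the controller out from an initial state.
    x = np.array([[1.0], [0.0]])                  # start 1 m from the goal, at rest
    for t in range(T):
        u = -K[t] @ x
        x = A @ x + B @ u
    print("final state:", x.ravel().round(4))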

Machine Learning Seminars
