Title: Lower Bounds for Interactive Compression and Linear Programs

Advisor: Anup Rao

Supervisory Committee: Anup Rao (Chair), Christopher Hoffman (GSR, Math), Paul Beame, Dan Suciu, and Thomas Rothvoss

Abstract: This thesis proves concrete lower bounds in two computational settings: interactive compression and linear programs.

The first part of this thesis studies data compression in interactive models. In the classical setting of data compression, a sender wishes to convey a message to a receiver by transmitting as few bits as possible. Since the pioneering work of Shannon, it has been known how to encode messages so that the number of bits sent essentially equals the amount of information contained in the message, and this is optimal. Interactive compression generalizes the classical setting to the case where multiple parties interact with each other while possessing only partial knowledge about each other's messages. This thesis furthers the understanding of interactive compression in the settings of communication complexity and streaming algorithms.
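For context, the classical one-way guarantee referred to above is Shannon's source coding theorem; a minimal statement in standard notation (not specific to this thesis) is:

% Shannon's source coding theorem (classical one-way compression): a message X
% drawn from a known distribution p can be encoded so that the expected number
% of bits transmitted essentially matches its entropy H(X).
\[
  H(X) \;=\; \sum_{x} p(x)\,\log_2\frac{1}{p(x)},
  \qquad
  H(X) \;\le\; \min_{C \text{ prefix-free}} \;\mathbb{E}\bigl[\,|C(X)|\,\bigr] \;<\; H(X) + 1 .
\]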

Specifically, the first part of this thesis proves the following lower bounds for interactive compression:

Exponential Separation between Information and Communication: Given two parties with correlated inputs X and Y who communicate to compute a function f(X,Y), can the communication be compressed down to the amount of information exchanged? This thesis introduces a new technique, the notion of fooling distributions, to prove that the information can be exponentially smaller than the communication.
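To make the comparison concrete, here is a schematic of the two quantities being compared (illustrative notation; the thesis's exact definitions are not reproduced here). For a protocol \(\pi\) with transcript \(\Pi\) on inputs \((X,Y)\) drawn from a distribution \(\mu\),

% internal information cost is always at most the communication cost
\[
  \mathrm{IC}_{\mu}(\pi) \;=\; I(\Pi ; X \mid Y) \;+\; I(\Pi ; Y \mid X)
  \;\le\; \mathrm{CC}(\pi),
\]

and the separation shows, roughly, that there are functions f for which the communication required by any protocol is exponentially larger than the information cost, i.e. \(\mathrm{CC}(f) \ge 2^{\Omega(\mathrm{IC}(f))}\).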

Compression and Direct-sum for Streaming: We consider a direct-sum question for streaming algorithms: if a streaming algorithm processes k input streams arriving in parallel, does it need k times as much memory to perform the computation? This thesis introduces a notion of information cost for streaming algorithms and shows that if the memory of a streaming algorithm could always be compressed to its information cost, then no streaming algorithm could do significantly better than using k times the memory on k parallel streams.
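Schematically, the conditional statement has the following form (the notation for memory and streaming information cost below is illustrative, not the thesis's):

% if memory can be compressed to information cost, then k parallel streams
% require roughly k times the information cost of a single stream
\[
  \mathrm{Memory}\bigl(k \text{ parallel copies of } f\bigr)
  \;\ge\; \Omega(k)\cdot \mathrm{InfoCost}(f),
\]

i.e., no streaming algorithm could do significantly better than running k independent copies.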

The second part of this thesis studies linear programs for combinatorial optimization problems. Many combinatorial optimization problems can naturally be expressed as linear programs, albeit with exponentially many inequalities. It turns out that in certain instances, the size of an exact (or approximate) linear program for an optimization problem can be reduced significantly by introducing auxiliary variables. Such linear programs are known as extended formulations or higher-dimensional lifts. They capture the best known algorithms for many combinatorial optimization problems, as well as the LP hierarchies (Sherali-Adams, Lov\'asz-Schrijver). In recent years, strong lower bounds have been proven on the size of extended formulations for many well-known combinatorial optimization problems. This thesis proves the following result about extended formulations:

Approximate Extended Formulations for the Matching Polytope: We prove a tight lower bound on the size of approximate extended formulations for the Matching polytope. In the course of proving this lower bound, we also obtain a tight lower bound on the non-negative rank of a family of matrices known as lopsided unique disjointness.
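The link between extended formulations and non-negative rank used here is the standard one given by Yannakakis' factorization theorem; a brief statement in generic notation (not specific to this thesis): for a polytope \(P = \{x : Ax \le b\} = \mathrm{conv}(\{v_1,\dots,v_n\})\), the slack matrix \(S_P\) has entries \(S_P(i,j) = b_i - A_i v_j \ge 0\), and

% extension complexity equals the non-negative rank of the slack matrix
\[
  \mathrm{xc}(P) \;=\; \mathrm{rank}_{+}(S_P),
\]

where \(\mathrm{xc}(P)\) is the minimum number of inequalities in any extended formulation of P, and \(\mathrm{rank}_{+}(S_P)\) is the smallest r for which \(S_P = T\,U\) with entrywise non-negative factors \(T \in \mathbb{R}^{m \times r}\) and \(U \in \mathbb{R}^{r \times n}\). This is why a lower bound on the non-negative rank of a suitable matrix family (here, lopsided unique disjointness) translates into a lower bound on the size of (approximate) extended formulations.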

Place: CSE 403
When: Wednesday, August 8, 2018, 14:30 to 16:30