PIMS - SFU Theory Seminar: Russell Impagliazzo
Topic
Speakers
Russell Impagliazzo (UCSD)
Details
A theme that cuts across many domains of computer science and mathematics is finding simple representations of complex mathematical objects such as graphs, functions, or distributions on data. These representations need to capture how the object interacts with a class of tests, and to approximately determine the outcomes of those tests.
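To make "approximately determine the outcomes of those tests" concrete, one standard formalization (a sketch in our own notation, not taken verbatim from the abstract) treats the object as a distribution D, the representation as a model distribution M, and asks that

\[ \bigl|\, \mathbb{E}_{x \sim D}[T(x)] - \mathbb{E}_{x \sim M}[T(x)] \,\bigr| \le \varepsilon \quad \text{for every test } T \in \mathcal{F}, \]

where \(\mathcal{F}\) is the class of tests and \(\varepsilon\) is the accuracy; that is, no test in the class can distinguish the model from the object by more than \(\varepsilon\).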
For example, suppose a scientist is trying to find a mathematical model explaining observed data about some phenomenon, such as the kinds of butterflies in a forest. A minimal criterion for success is that the model should accurately predict the results of future observations. When is this possible?

This general situation arises in many contexts in computer science and mathematics. In machine learning, the object might be a distribution on data points (high-dimensional real vectors), and the tests might be half-spaces; the goal would be to learn a simple representation of the data that determines the probability of any half-space, or possibly of intersections of half-spaces. In computational complexity, the object might be a Boolean function or a distribution on strings, and the tests are functions of low circuit complexity. In graph theory, the object is a large graph and the tests are the cuts; the representation should approximately determine the size of any cut. In additive combinatorics, the object might be a function or distribution over a group, and the tests might be correlations with linear functions or polynomials.
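As a small, self-contained illustration of the machine-learning instance above, the sketch below (our own; the function name, the random choice of half-space tests, and the Gaussian model in the demo are illustrative assumptions, not part of the talk) estimates the largest disagreement between observed data and samples from a candidate model over a family of half-space tests:

```python
# Minimal sketch (our own illustration, not code from the talk): measuring how well
# a candidate model matches a data distribution against a family of half-space tests.
import numpy as np

def halfspace_discrepancy(data, model_samples, num_tests=1000, rng=None):
    """Estimate max_T |Pr_data[T] - Pr_model[T]| over random half-space tests T.

    A half-space test T_{w,b}(x) = 1[w . x >= b]; the model "fools" the test
    family if every such probability agrees up to a small epsilon.
    """
    rng = np.random.default_rng(rng)
    dim = data.shape[1]
    worst = 0.0
    for _ in range(num_tests):
        w = rng.standard_normal(dim)               # random direction
        b = rng.standard_normal()                  # random threshold
        p_data = np.mean(data @ w >= b)            # acceptance rate on real samples
        p_model = np.mean(model_samples @ w >= b)  # acceptance rate on model samples
        worst = max(worst, abs(p_data - p_model))
    return worst

# Demo: a Gaussian model fitted only to the mean and covariance of the data.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.standard_normal((5000, 10))
    model = rng.multivariate_normal(data.mean(0), np.cov(data.T), size=5000)
    print("max half-space discrepancy:", halfspace_discrepancy(data, model, rng=1))
```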
This talk will illustrate the connections between these contexts. In particular, we'll show how to convert boosting algorithms into unsupervised learning algorithms similar to GANs. These algorithms produce "dense models" from samples of a distribution: high-entropy distributions that are indistinguishable from the samples by a class of tests. To illustrate the versatility of the techniques, we'll show direct applications to Fourier analysis of functions, leakage-resilient cryptography, and graph theory.
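The abstract does not spell out the conversion, but boosting-style proofs of dense model theorems typically take the form of a multiplicative-weights loop: repeatedly find a test that distinguishes the samples from the current model, then reweight the model to shrink that advantage. The sketch below is a hedged illustration of that loop over a finite domain with an explicit finite class of 0/1 tests (all names and parameter choices are our own, not from the talk):

```python
# Hedged sketch of a boosting-style loop for building a dense model (our own
# simplification over a finite domain with an explicit, finite class of tests).
import numpy as np

def dense_model(sample_dist, tests, epsilon=0.05, eta=0.1, max_rounds=500):
    """sample_dist: empirical distribution over a finite domain (array summing to 1).
    tests: 0/1 matrix, one row per test, one column per domain element.
    Returns a high-entropy model distribution that no test in `tests`
    distinguishes from the samples by more than epsilon (if the loop converges).
    """
    n = len(sample_dist)
    weights = np.ones(n)                     # start from the uniform (max-entropy) model
    for _ in range(max_rounds):
        model = weights / weights.sum()
        advantages = tests @ (sample_dist - model)   # E_sample[T] - E_model[T] per test
        i = np.argmax(np.abs(advantages))
        if abs(advantages[i]) <= epsilon:    # no remaining distinguishing test
            return model
        # Multiplicative-weights update: push model mass toward (or away from)
        # the region accepted by the most distinguishing test.
        weights *= np.exp(eta * np.sign(advantages[i]) * tests[i])
    return weights / weights.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    samples = rng.dirichlet(np.ones(64))                       # stand-in empirical distribution
    tests = (rng.random((200, 64)) < 0.5).astype(float)        # random 0/1 tests
    print(dense_model(samples, tests))
```

Each round either certifies that no test in the class has advantage more than epsilon, in which case the current model is the desired dense model, or uses the best distinguishing test to update the model; this distinguishing test plays the same role the discriminator plays in a GAN.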
We will then discuss how to implement the algorithms in question and, in particular, bounds on the sample complexity required to ensure generalization.
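For context on that sample-complexity question, a standard uniform-convergence calculation (Hoeffding's inequality plus a union bound; not a bound claimed in the abstract) gives, for a finite class \(\mathcal{F}\) of bounded tests,

\[ m = O\!\left( \frac{\ln |\mathcal{F}| + \ln(1/\delta)}{\varepsilon^{2}} \right) \]

samples, after which, with probability at least \(1 - \delta\), every test's empirical average is within \(\varepsilon\) of its true expectation; for infinite classes such as half-spaces, \(\ln|\mathcal{F}|\) is replaced by a capacity measure such as the VC dimension.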
Additional Information
SFU TASC 1 9204