UWashington Distinguished Seminar in Optimization and Data: Misha Belkin
Topic
Speakers
Misha Belkin, University of California, San Diego
Details
Remarkable recent successes of deep learning have relied heavily on very large neural networks. Time and time again, larger networks have shown better performance than smaller networks trained on comparably sized data sets. Since large networks generally work better, why not train infinitely large networks directly to get the best performance? This is not a rhetorical question. Recent work on the Neural Tangent Kernel showed that in certain regimes infinitely wide neural networks are equivalent to kernel machines with an explicitly computable kernel function. These machines can be trained directly by solving linear systems. However, there are two primary challenges in training infinitely large networks. First, such networks are not able to take advantage of feature learning, a key ingredient in the success of neural networks. Second, scaling kernel machines to large data sizes has been a serious computational challenge. In this talk, I will first show how feature learning can be incorporated into kernel machines without backpropagation, resulting in Recursive Feature Machines (RFMs) that outperform multilayer perceptrons. Like standard kernel machines, RFMs can be trained by solving linear systems of equations. These machines demonstrate state-of-the-art performance on tabular data and are considerably more efficient than neural networks for small and medium data sizes. Second, I will discuss some of our recent work on scaling kernel machines to much larger data. While the data sizes used to train modern LLMs remain beyond our reach, given sufficient resources, scaling to these data sizes is not out of the realm of possibility.
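For readers who want a concrete picture of the two ingredients mentioned in the abstract, the sketch below shows a kernel machine trained by solving a single linear system, followed by an RFM-style loop that learns a feature matrix from the average outer product of the predictor's gradients. This is a minimal illustration, not the speaker's implementation: the Gaussian kernel, bandwidth, ridge parameter, trace rescaling, and toy data are assumptions made for the example, and the RFMs discussed in the talk may differ in these details.

```python
import numpy as np


def gaussian_kernel_M(X, Z, M, bandwidth=2.0):
    """Gaussian kernel with a feature matrix M:
    k_M(x, z) = exp(-(x - z)^T M (x - z) / (2 * bandwidth**2))."""
    XM = X @ M
    sq = (XM * X).sum(axis=1)[:, None] + ((Z @ M) * Z).sum(axis=1)[None, :] - 2.0 * XM @ Z.T
    return np.exp(-np.clip(sq, 0.0, None) / (2.0 * bandwidth ** 2))


def fit_kernel_machine(K, y, ridge=1e-3):
    """Train a kernel machine by solving one linear system: (K + ridge * I) alpha = y."""
    return np.linalg.solve(K + ridge * np.eye(K.shape[0]), y)


def rfm(X, y, n_iters=5, bandwidth=2.0, ridge=1e-3):
    """RFM-style alternation: kernel regression, then update the feature matrix M
    from the average outer product of the learned predictor's gradients (AGOP)."""
    n, d = X.shape
    M = np.eye(d)
    for _ in range(n_iters):
        K = gaussian_kernel_M(X, X, M, bandwidth)
        alpha = fit_kernel_machine(K, y, ridge)   # the only "training" step: a linear solve
        # Gradient of f(x) = sum_i alpha_i k_M(x, x_i) at each training point x_j
        # (sign dropped; it cancels in the outer product):
        W = K * alpha[None, :]                    # W[j, i] = alpha_i * k_M(x_j, x_i)
        G = (W.sum(axis=1, keepdims=True) * X - W @ X) @ M / bandwidth ** 2
        M = G.T @ G / n                           # AGOP update
        M /= np.trace(M) / d + 1e-12              # rescale for stability (sketch-only choice)
    return alpha, M


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((300, 10))
    y = X[:, 0] * X[:, 1]    # toy target that depends on only the first two coordinates
    alpha, M = rfm(X, y)
    print(np.round(np.diag(M), 3))
```

On the toy target above, which depends on only two input coordinates, the diagonal of the learned feature matrix typically concentrates on those coordinates; that kind of learned emphasis on relevant directions is the feature learning the abstract refers to.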
Speaker biography: Mikhail Belkin is a Professor at the Halicioglu Data Science Institute and the Computer Science and Engineering Department at UCSD and an Amazon Scholar. He received his Ph.D. in 2003 from the Department of Mathematics at the University of Chicago. His research interests are broadly in the theory and applications of machine learning and data analysis. Some of his well-known work includes the widely used Laplacian Eigenmaps, Graph Regularization, and Manifold Regularization algorithms, which brought ideas from classical differential geometry and spectral graph theory to data science. His recent work has been concerned with understanding remarkable mathematical and statistical phenomena observed in the context of deep learning. This empirical evidence necessitated revisiting some of the basic concepts in statistics and optimization. One of his key recent findings is the "double descent" risk curve, which extends the textbook U-shaped bias-variance trade-off curve beyond the point of interpolation.
Additional Information
Time: 3:30 pm Pacific
Location: Gates Commons (CSE 691), Allen Center
Seminar webpage: https://sites.math.washington.edu/~thomas/DSOS.html