Doctoral Thesis Oral Defense - Runtian Zhai

Time:
3:30pm ET

Location:
In Person and Virtual: Reddy Conference Room, Gates Hillman 4405, and Zoom

Speaker:
RUNTIAN ZHAI, Ph.D. Candidate, Computer Science Department, Carnegie Mellon University
https://www.runtianzhai.com/

Contextures: The Mechanism of Representation Learning

This thesis establishes the contexture theory to mathematically characterize the mechanism of representation learning, also known as pretraining. Despite the remarkable empirical success of foundation models, it remains unclear what representations they learn and why those representations are useful for such a wide range of downstream tasks. A scientific understanding of representation learning is critical, especially now that scaling up model size is producing diminishing returns and designing new pretraining methods is imperative for further progress. Prior work treated different representation learning methods largely in isolation; the contexture theory provides a unified framework for characterizing the representations these methods learn.

The central argument is that a representation is learned from the association between the input X and a context variable A. We prove that if an encoder captures the maximum information of this association, in which case we say that the encoder learns the contexture, then it is optimal on the class of tasks that are compatible with the context. We also show that a context is most useful when the association between X and A is neither too strong nor too weak. An important implication of the contexture theory is that increasing the model size alone yields diminishing returns, and further advances require better contexts. We demonstrate that many existing pretraining objectives can learn the contexture, including supervised learning, self-supervised learning, and generative models. Building on this, we introduce two general objectives, SVME and KISE, for learning the contexture. We also show how to mix multiple contexts, a simple way to create better contexts from existing ones. We then prove statistical learning bounds for representation learning and extend the framework to spectrally transformed kernel regression for semi-supervised learning. Finally, we discuss the effect of distribution shift from pretraining to the downstream task.
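To make the idea of "capturing the association between X and A" concrete, the toy Python sketch below shows one standard spectral construction for discrete variables: form a normalized joint-distribution matrix and take its top singular vectors as a low-dimensional encoder of X. The matrix Q, the normalization, and the variable names are illustrative assumptions for this sketch, not code or notation taken from the thesis itself.

```python
# Toy sketch (assumptions, not the thesis's code): for discrete X and A,
# summarize their association by Q(x, a) = P(x, a) / sqrt(P(x) P(a)).
# The top singular vectors of Q give a d-dimensional encoder of X that
# captures the strongest modes of the X-A dependence.
import numpy as np

rng = np.random.default_rng(0)

# A random joint distribution over |X| = 8 inputs and |A| = 5 context values.
joint = rng.random((8, 5))
joint /= joint.sum()

p_x = joint.sum(axis=1)          # marginal of X
p_a = joint.sum(axis=0)          # marginal of A

# Normalized dependence matrix; its singular values lie in [0, 1],
# and the top singular value 1 corresponds to constant functions.
Q = joint / np.sqrt(np.outer(p_x, p_a))

U, s, Vt = np.linalg.svd(Q)

d = 3
# Encoder features for X: the top-d non-constant singular vectors,
# rescaled so they are orthonormal under the marginal of X.
encoder = U[:, 1:d + 1] / np.sqrt(p_x)[:, None]

print("singular values:", np.round(s, 3))
print("encoder shape:", encoder.shape)   # (|X|, d)
```

In this discrete analogue, a downstream task that depends on X mainly through its association with A can be approximated well by a linear model on these few encoder features, which is the flavor of optimality result the abstract describes.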

Thesis Committee

Pradeep Ravikumar (Co-Chair)
Zico Kolter (Co-Chair)
Andrej Risteski
Yuandong Tian (Meta)

In Person and Zoom Participation. See announcement.

