Transformer Predictor Dynamics and Task Diversity

Best AI papers explained - A podcast by Enoch H. Kang - Saturdays

This paper models the behavior of Transformer models during training, focusing on in-context learning (ICL), which exhibits a transient phase of generalization before giving way to memorization. The authors use a Bayesian model built from two predictors, a Memorizing predictor (M) and a Generalizing predictor (G), and show that it accurately captures the Transformer's observed behavior on tasks such as linear regression and classification. The paper examines how training steps and task diversity determine which predictor dominates, concluding that parameters tied to Kolmogorov complexity and sample efficiency are needed to explain the observed transient-generalization phenomenology. The accompanying heatmaps illustrate how these factors drive the shift between generalization (blue) and memorization (red) over the course of training.
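To make the described trade-off concrete, here is a minimal sketch of Bayesian model averaging between a memorizing and a generalizing predictor. It assumes a simple form in which each predictor's posterior weight trades off a complexity-based prior (a Kolmogorov-complexity-style simplicity penalty) against a likelihood that accumulates with training samples (sample efficiency). All function names, parameter values, and the specific functional form are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def log_posterior_weights(n_steps, n_tasks,
                          loss_m=0.10, loss_g=0.12,
                          bits_per_task=8.0, bits_g=16.0):
    """Illustrative Bayesian model averaging over two predictors.

    Hypothetical parameters (not from the paper):
      loss_m, loss_g -- per-sample negative log-likelihood of the memorizing (M)
                        and generalizing (G) predictors on the training
                        distribution (a proxy for sample efficiency).
      bits_per_task  -- complexity cost of memorizing one task, so M's
                        description length grows with task diversity.
      bits_g         -- fixed description length of G.
    """
    log2 = np.log(2.0)
    # Log prior ~ -complexity: a simplicity prior favoring the shorter predictor.
    log_prior_m = -bits_per_task * n_tasks * log2
    log_prior_g = -bits_g * log2
    # Log likelihood accumulates linearly with the number of training samples,
    # so the better-fitting predictor eventually overcomes its prior penalty.
    log_lik_m = -loss_m * n_steps
    log_lik_g = -loss_g * n_steps
    scores = np.array([log_prior_m + log_lik_m, log_prior_g + log_lik_g])
    weights = np.exp(scores - scores.max())
    return weights / weights.sum()  # [P(M | data), P(G | data)]

# Sweep task diversity to mimic the heatmap's axes: G dominates early and at
# high diversity; M takes over once enough training samples have accumulated.
for n_tasks in (4, 64, 1024):
    crossover = next((t for t in range(0, 400_000, 100)
                      if log_posterior_weights(t, n_tasks)[0] > 0.5), None)
    print(f"tasks={n_tasks:5d}  M overtakes G at ~{crossover} steps")
```

Under these assumed numbers, higher task diversity inflates M's complexity penalty and pushes the memorization crossover later in training, which is the qualitative pattern the episode attributes to the paper's heatmaps.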