WEBINAR 2026

The Physics behind Optimization and Generalization

Speaker: Dr Liu Ziyin, NTT Research and MIT, USA
Date/Time: Tuesday, 24 Mar, 10am
Location: via Zoom, registration link:
https://nus-sg.zoom.us/webinar/register/WN_ZYcxHFzjRKyR6RNi4RxeVQ
Note: attendees should register using a Zoom-registered email address
Host: A/Prof Duane Loh

Abstract

AI has become an empirical science. We are discovering more and more interesting phenomena in AI models, yet the organizing principles of modern AI remain unclear. In this seminar, I will discuss what I call the Symmetry-Irreversibility Framework (SIF), which leverages symmetry and irreversibility, two central concepts and tools from science in general and physics in particular, to analyze and understand the phenomenology of deep learning. I will discuss how interesting phenomena such as implicit sparsity, collapse, the edge of stability, and the more recently discovered Platonic representation hypothesis could be consequences of the model’s hidden symmetries and the irreversibility of the training dynamics. Lastly, I will also discuss how the SIF can be applied to make AI models more efficient, interpretable, and controllable.

Biography

Dr. Liu Ziyin is a postdoctoral researcher at the Physics & Informatics Laboratories at NTT Research and an IAIFI Affiliate at MIT. He received his Ph.D. in physics from the University of Tokyo. Ziyin’s research focuses on identifying the scientific and mathematical principles underlying the mechanisms of learning in neural networks. Ziyin is also interested in theoretical physics and computational neuroscience.