
Kaifeng Lyu 吕凯风

I am a tenure-track assistant professor at the Institute for Interdisciplinary Information Sciences (IIIS) at Tsinghua University. Before that, I was a postdoctoral research fellow at the Simons Institute at UC Berkeley. I obtained my Ph.D. in Computer Science from Princeton University in 2024, where I was very fortunate to be advised by Prof. Sanjeev Arora. I did my undergraduate studies at Tsinghua University and received a B.Eng. in Computer Science and Technology in 2019. During my time at Tsinghua, I was a student in the Yao Class headed by Prof. Andrew Chi-Chih Yao, and I was very fortunate to be advised by Prof. Jian Li.

Research Interests

I am primarily interested in machine learning theory, AI safety/alignment, and optimization.

I believe that a good theory should come from practice and be able to advance practice: it should start from important real-world phenomena or problems, explain or solve them theoretically, and then be applied to guide practice. Based on this philosophy, I conduct research with both theory and experiments, aiming to develop solid foundations for modern machine learning and to make AI in the era of large models more efficient, safe, and reliable.

Below is a list of topics I am actively thinking about/working on:

  1. Training Dynamics of Neural Networks: The training dynamics of neural networks are extremely complex, but are there universal laws that allow us to make predictions? What are the phase transitions that make model performance unpredictable? Can we predict even these phase transitions? Our previous works include:

    • Theoretical analyses of how to scale hyperparameters in distributed training [1], [2], [3]
    • What is the best learning rate schedule for large models? [4]
    • The amount of knowledge learned from mixed data does not grow linearly with model size, but may exhibit phase transitions [5]
    • Grokking: Why does the test accuracy of a neural network suddenly improve after training on the training set for a long time? [6] (see the sketch after this list)
    • How do normalization layers help neural network training? [7], [8], [9]
    • Implicit Bias of GD: Even if you do not add regularization to your network, gradient descent will add it for you [10], [11], [12]
  2. Generalization Paradigms of Foundation Models: Modern large foundation models integrate various unsupervised, supervised, and reinforcement learning paradigms, and can achieve remarkable generalization after training on large amounts of data. How do we understand these new generalization paradigms? How do we better combine algorithms, architectures, and data to enhance the model's capabilities? Can these understandings guide us to better select, mix, or even generate data? Our previous works include:

    • How do architectures affect Chain-of-Thought reasoning performance? [13]
    • Weak-to-strong Generalization: Does the phenomenon that GPT-4 supervised by GPT-2 outperforms GPT-2 also happen in simpler models? [14]
  3. Foundations of AI Safety/Alignment: I am also interested in AI safety and alignment. Machine learning usually optimizes a model's performance in the “average case”, but AI safety issues are often exposed in extreme cases. What is the fundamental reason that models fail in these extreme cases? What are the limitations of current AI alignment methods, and which safety issues cannot be completely avoided? In the long run, can we find a systematic method, like cryptography, to solve a large class of AI safety problems once and for all? Our previous works include:

    • Current RLHF-based safety alignment is very shallow: the difference in output token distributions between base and aligned models is only a few tokens deep [15]
    • Fine-tuning a well-aligned model may degrade its safety, but simple data format adjustments can mitigate this problem [16]
    • Theoretically, why do neural networks lack adversarial robustness? [17], [18]
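
To make the grokking bullet above concrete, here is a minimal sketch of the kind of experiment in which grokking is typically observed: a small network trained on modular addition with weight decay. This is an illustrative PyTorch sketch, not the exact setup of [6]; the architecture, modulus, training fraction, and optimizer settings below are assumptions chosen for illustration, and whether and when the late jump in test accuracy occurs depends on them.

```python
# Minimal grokking sketch (illustrative; not the exact setup of [6]).
# Train a small MLP on modular addition from a small training fraction with
# weight decay, and log train/test accuracy over many steps.
import torch
import torch.nn as nn

P = 97           # modulus for the addition task (assumed)
TRAIN_FRAC = 0.4 # fraction of all (a, b) pairs used for training (assumed)
STEPS = 20000

torch.manual_seed(0)

# Full dataset of pairs (a, b) -> (a + b) mod P, with one-hot inputs.
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
labels = (pairs[:, 0] + pairs[:, 1]) % P
inputs = torch.cat([nn.functional.one_hot(pairs[:, 0], P),
                    nn.functional.one_hot(pairs[:, 1], P)], dim=1).float()

perm = torch.randperm(len(pairs))
n_train = int(TRAIN_FRAC * len(pairs))
train_idx, test_idx = perm[:n_train], perm[n_train:]

model = nn.Sequential(nn.Linear(2 * P, 256), nn.ReLU(), nn.Linear(256, P))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

def accuracy(idx):
    with torch.no_grad():
        return (model(inputs[idx]).argmax(-1) == labels[idx]).float().mean().item()

for step in range(1, STEPS + 1):
    batch = train_idx[torch.randint(0, n_train, (512,))]
    loss = loss_fn(model(inputs[batch]), labels[batch])
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        print(f"step {step:6d}  train acc {accuracy(train_idx):.3f}  "
              f"test acc {accuracy(test_idx):.3f}")
```

In runs of this kind, training accuracy usually saturates early while test accuracy stays near chance for many steps before rising sharply; the dichotomy of early- and late-phase implicit biases studied in [6] is one explanation for this transition.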

Conference Papers

Weak-to-Strong Generalization Even in Random Feature Networks, Provably
  Marko Medvedev*, Kaifeng Lyu*, Dingli Yu, Sanjeev Arora, Zhiyuan Li, Nathan Srebro

A Multi-Power Law for Loss Curve Prediction Across Learning Rate Schedules
  Kairong Luo, Haodong Wen, Shengding Hu, Zhenbo Sun, Zhiyuan Liu, Maosong Sun, Kaifeng Lyu, Wenguang Chen

RNNs are not Transformers (Yet): The Key Bottleneck on In-context Retrieval
  Kaiyue Wen*, Xingyu Dang*, Kaifeng Lyu

Safety Alignment Should Be Made More Than Just a Few Tokens Deep
  Xiangyu Qi, Ashwinee Panda, Kaifeng Lyu, Xiao Ma, Subhrajit Roy, Ahmad Beirami, Prateek Mittal, Peter Henderson
  Oral Presentation (Top 1.8%). Outstanding Paper Award (Top 3/3827=0.08%).

Feature Averaging: An Implicit Bias of Gradient Descent Leading to Non-Robustness in Neural Networks
  Binghui Li*, Zhixuan Pan*, Kaifeng Lyu, Jian Li

Efficient Stagewise Pretraining via Progressive Subnetworks
  Abhishek Panigrahi*, Nikunj Saunshi*, Kaifeng Lyu, Sobhan Miryoosefi, Sashank Reddi, Satyen Kale, Sanjiv Kumar

Towards Understanding Text Hallucination of Diffusion Models via Local Generation Bias
  Rui Lu*, Runzhe Wang*, Kaifeng Lyu, Xitai Jiang, Gao Huang, Mengdi Wang

Keeping LLMs Aligned After Fine-tuning: The Crucial Role of Prompt Templates
  Kaifeng Lyu*, Haoyu Zhao*, Xinran Gu*, Dingli Yu, Anirudh Goyal, Sanjeev Arora

A Quadratic Synchronization Rule for Distributed Deep Learning
  Xinran Gu*, Kaifeng Lyu*, Sanjeev Arora, Jingzhao Zhang, Longbo Huang

Dichotomy of Early and Late Phase Implicit Biases Can Provably Induce Grokking
  Kaifeng Lyu*, Jikai Jin*, Zhiyuan Li, Simon S. Du, Jason D. Lee, Wei Hu

DistillSpec: Improving Speculative Decoding via Knowledge Distillation
  Yongchao Zhou, Kaifeng Lyu, Ankit Singh Rawat, Aditya Krishna Menon, Afshin Rostamizadeh, Sanjiv Kumar, Jean-François Kagy, Rishabh Agarwal

The Marginal Value of Momentum for Small Learning Rate SGD
  Runzhe Wang, Sadhika Malladi, Tianhao Wang, Kaifeng Lyu, Zhiyuan Li

Understanding incremental learning of gradient descent: A fine-grained analysis of matrix sensing
  Jikai Jin, Zhiyuan Li, Kaifeng Lyu, Simon S. Du, Jason D. Lee

Why (and When) does Local SGD Generalize Better than SGD?
  Xinran Gu*, Kaifeng Lyu*, Longbo Huang, Sanjeev Arora

Understanding the Generalization Benefit of Normalization Layers: Sharpness Reduction
  Kaifeng Lyu, Zhiyuan Li, Sanjeev Arora

On the SDEs and Scaling Rules for Adaptive Gradient Algorithms
  Sadhika Malladi*, Kaifeng Lyu*, Abhishek Panigrahi, Sanjeev Arora

New Definitions and Evaluations for Saliency Methods: Staying Intrinsic, Complete and Sound
  Arushi Gupta*, Nikunj Saunshi*, Dingli Yu*, Kaifeng Lyu, Sanjeev Arora
  Oral Presentation (Top 1.9%).

Gradient Descent on Two-layer Nets: Margin Maximization and Simplicity Bias
  Kaifeng Lyu*, Zhiyuan Li*, Runzhe Wang*, Sanjeev Arora

Towards Resolving the Implicit Bias of Gradient Descent for Matrix Factorization: Greedy Low-Rank Learning
  Zhiyuan Li, Yuping Luo, Kaifeng Lyu (alphabetical order)

Reconciling Modern Deep Learning with Traditional Optimization Analyses: The Intrinsic Learning Rate
  Zhiyuan Li*, Kaifeng Lyu*, Sanjeev Arora

Gradient Descent Maximizes the Margin of Homogeneous Neural Networks
  Kaifeng Lyu, Jian Li
  Oral Presentation (Top 1.9%).

Theoretical Analysis of Auto Rate-Tuning by Batch Normalization
  Sanjeev Arora, Zhiyuan Li, Kaifeng Lyu (alphabetical order)

Fine-grained complexity meets IP = PSPACE
  Lijie Chen, Shafi Goldwasser, Kaifeng Lyu, Guy N Rothblum, Aviad Rubinstein (alphabetical order)

Single-Source Bottleneck Path Algorithm Faster than Sorting for Sparse Graphs
  Ran Duan, Kaifeng Lyu, Hongxun Wu, Yuanhang Xie (alphabetical order)

Learning gradient descent: Better generalization and longer horizons
  Kaifeng Lv*, Shunhua Jiang*, Jian Li
(Authors are listed in contribution order by default; an asterisk (*) denotes equal contribution.)

Students

PhD Students:
  • Haodong Wen (incoming)
  • Kexian Tang (incoming)
Master's Student:
  • Rui Chen (incoming)

Planned Courses

  • Fall 2025. Large Language Models from Scratch: Theory and Practice, Tsinghua University.

Teaching Assistant Experience

Professional Services

  • Organizer, NeurIPS 2024 Workshop on Mathematics of Modern Machine Learning (M3L 2024).
  • Organizer, NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning (M3L 2023).
  • Conference Reviewer: ICML (2020-2025), NeurIPS (2020-2023), ICLR (2022-2025), COLT (2020, 2025), AAAI (2020), KDD (2022).
  • Journal Reviewer: TMLR, JMLR, TPAMI, AIJ.
  • Organizer, Yao Class Seminar, Tsinghua University (Fall 2019, Fall 2020, Spring 2021).

Universal Online Judge

  • I founded the Universal Online Judge (UOJ), a popular online judge system in China, in 2014.
  • UOJ can judge both traditional and non-traditional programming problems used in OI (Olympiad in Informatics). A team of top OI players regularly hosts programming contests on UOJ.
  • [Link] [GitHub] [Docs]