
Kaifeng Lyu 吕凯风
I am a tenure-track assistant professor at the Institute for Interdisciplinary Information Sciences (IIIS) at Tsinghua University. Before that, I was a postdoctoral research fellow at the Simons Institute for the Theory of Computing at UC Berkeley. I obtained my Ph.D. in Computer Science from Princeton University in 2024, where I was very fortunate to be advised by Prof. Sanjeev Arora. I did my undergraduate studies at Tsinghua University, receiving a B.Eng. in Computer Science and Technology in 2019. During my time at Tsinghua, I was a student in the Yao Class headed by Prof. Andrew Chi-Chih Yao, and I was very fortunate to be advised by Prof. Jian Li.
Research Interests
I am primarily interested in machine learning theory, AI safety/alignment, and optimization.
I believe that a good theory should come from practice and be able to advance practice: it should start from important real-world phenomena or problems, explain or solve them theoretically, and then be applied to guide practice. Based on this philosophy, I conduct research with both theory and experiments, aiming to develop solid foundations for modern machine learning and to make AI in the era of large models more efficient, safe, and reliable.
Below is a list of topics I am actively thinking about/working on:
- Training Dynamics of Neural Networks: The training dynamics of neural networks are extremely complex, but are there universal laws that let us make predictions? What are the phase transitions that make model performance unpredictable? Can we predict even these phase transitions? Our previous works include:
- Theoretical analyses of how to scale hyperparameters in distributed training [1], [2], [3]
- What is the best learning rate schedule for large models? [4]
- The amount of knowledge learned from mixed data does not grow linearly with model size, but may exhibit phase transitions [5]
- Grokking: Why does the test accuracy of a neural network suddenly improve after training on the training set for a long time? [6]
- How do normalization layers help neural network training? [7], [8], [9] (a small illustration follows this list)
- Implicit Bias of GD: Even if you do not add regularization to your network, gradient descent will add it for you [10], [11], [12] (see the sketch after this list)
- Generalization Paradigms of Foundation Models: Modern large foundation models integrate unsupervised, supervised, and reinforcement learning paradigms, and can achieve remarkable generalization after training on large amounts of data. How do we understand these new generalization paradigms? How can we better combine algorithms, architectures, and data to enhance model capabilities? Can this understanding guide us to better select, mix, or even generate data? Our previous works include:
- Foundations of AI Safety/Alignment: I am also interested in AI safety and alignment. Machine learning usually optimizes a model's performance in the “average case”, but AI safety issues are often exposed in extreme cases. What is the fundamental reason that models fail in these extreme cases? What are the limitations of current AI alignment methods, and which safety issues cannot be completely avoided? In the long run, can we find a systematic method, like cryptography, to solve a large class of AI safety problems once and for all? Our previous works include:
- Current RLHF-based safety alignment is shallow: the difference in output token distribution between base and aligned models is only a few tokens deep [15] (see the sketch after this list)
- Fine-tuning a well-aligned model may degrade its safety, but simple data format adjustments can mitigate this problem [16]
- Theoretically, why do neural networks lack adversarial robustness? [17], [18]
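
To make the normalization item above more concrete, here is a minimal sketch (illustrative only, not taken from [7], [8], or [9]) of one property underlying many analyses of normalization layers: the output of a normalization layer is unchanged when the weights feeding into it are rescaled, so only the direction of those weights matters.

```python
# Minimal sketch: scale invariance induced by a normalization layer.
# Rescaling the weights of the preceding (bias-free) linear layer does not
# change the normalized output, so only the weight direction matters.
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(4, 16)
linear = nn.Linear(16, 32, bias=False)
norm = nn.LayerNorm(32, elementwise_affine=False)

out_before = norm(linear(x))
with torch.no_grad():
    linear.weight.mul_(10.0)  # rescale all weights by a constant factor
out_after = norm(linear(x))

# True up to numerical precision (LayerNorm's eps slightly breaks exactness).
print(torch.allclose(out_before, out_after, atol=1e-4))
```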
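
The implicit-bias item above can also be illustrated with a toy experiment (a minimal sketch, not the setting of [10], [11], or [12]): running plain gradient descent on the logistic loss over linearly separable data, with no explicit regularization, the weight norm keeps growing while the weight direction stabilizes, converging toward the maximum-margin separator.

```python
# Minimal sketch: implicit bias of gradient descent on separable logistic
# regression. No regularization is added, yet the direction w / ||w||
# converges toward the maximum-margin separator while ||w|| slowly diverges.
import numpy as np

rng = np.random.default_rng(0)
n = 100
X = np.vstack([rng.normal(+2.0, 0.5, size=(n, 2)),
               rng.normal(-2.0, 0.5, size=(n, 2))])
y = np.concatenate([np.ones(n), -np.ones(n)])

w = np.zeros(2)
lr = 0.1
for step in range(1, 50001):
    margins = y * (X @ w)
    # Gradient of the average logistic loss: mean of -y_i * x_i / (1 + exp(margin_i)).
    grad = -(X * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
    w -= lr * grad
    if step % 10000 == 0:
        print(f"step {step:6d}  ||w|| = {np.linalg.norm(w):7.3f}  "
              f"direction = {w / np.linalg.norm(w)}")
```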
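
As a rough illustration of the shallow-alignment item above (a sketch only; the model names are placeholders, and this is not the evaluation protocol of [15]), one can compare the per-position next-token distributions of a base model and its aligned counterpart on the same text; under the shallow-alignment observation, the divergence concentrates in the first few response tokens.

```python
# Sketch: compare per-position next-token distributions of a base vs. an
# aligned causal LM on the same text. Model names below are placeholders.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

base_name = "base-model-placeholder"        # hypothetical identifier
aligned_name = "aligned-model-placeholder"  # hypothetical identifier

tok = AutoTokenizer.from_pretrained(aligned_name)
base = AutoModelForCausalLM.from_pretrained(base_name).eval()
aligned = AutoModelForCausalLM.from_pretrained(aligned_name).eval()

text = "How can I pick a lock? I'm sorry, but I can't help with that."
ids = tok(text, return_tensors="pt").input_ids

with torch.no_grad():
    logp_base = F.log_softmax(base(ids).logits, dim=-1)
    logp_aligned = F.log_softmax(aligned(ids).logits, dim=-1)

# Per-position KL(aligned || base) over the vocabulary; under shallow
# alignment, large values appear mostly at the first few response positions.
kl = (logp_aligned.exp() * (logp_aligned - logp_base)).sum(dim=-1).squeeze(0)
for pos, value in enumerate(kl.tolist()):
    print(f"position {pos:3d}  KL = {value:.4f}")
```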
Conference Papers
PhD Students:
- Haodong Wen (incoming)
- Kexian Tang (incoming)
Master's Student:
- Rui Chen (incoming)
Planned Courses
- Fall 2025. Large Language Models from Scratch: Theory and Practice, Tsinghua University.
Teaching Assistant Experience
- Spring 2024. Teaching Assistant for COS324: Introduction to Machine Learning (by Prof. Sanjeev Arora & Prof. Elad Hazan), Princeton University.
- Fall 2022. Teaching Assistant for COS521: Advanced Algorithm Design (by Prof. Matt Weinberg & Prof. Huacheng Yu), Princeton University.
- Spring 2021. Teaching Assistant for COS598B: Advanced Topics in Computer Science: Mathematical Understanding of Deep Learning (by Prof. Sanjeev Arora), Princeton University.
- Spring 2020. Teaching Assistant for Mathematics for Computer Science (by Prof. Andrew Chi-Chih Yao), Tsinghua University.
- Spring 2019. Teaching Assistant for Distributed Computing (by Prof. Wei Chen), Tsinghua University.
Professional Services
- Organizer, NeurIPS 2024 Workshop on Mathematics of Modern Machine Learning (M3L 2024).
- Organizer, NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning (M3L 2023).
- Conference Reviewer: ICML (2020-2025), NeurIPS (2020-2023), ICLR (2022-2025), COLT (2020, 2025), AAAI (2020), KDD (2022).
- Journal Reviewer: TMLR, JMLR, TPAMI, AIJ.
- Organizer, Yao Class Seminar, Tsinghua University (Fall 2019, Fall 2020, Spring 2021).
Universal Online Judge
- In 2014, I founded the Universal Online Judge (UOJ), a popular online judge system in China.
- UOJ can judge both traditional and non-traditional programming problems from OI (Olympiad in Informatics), and a team of top OI players regularly hosts programming contests on it.
- [Link] [GitHub] [Docs]