
Kaifeng Lyu 吕凯风
I am a tenure-track assistant professor at the Institute for Interdisciplinary Information Sciences (IIIS) at Tsinghua University. Before that, I was a postdoctoral research fellow at the Simons Institute at UC Berkeley. I obtained my Ph.D. in Computer Science from Princeton University in 2024, where I was very fortunate to be advised by Prof. Sanjeev Arora. I completed my undergraduate studies at Tsinghua University, receiving a B.Eng. in Computer Science and Technology in 2019. During my time at Tsinghua, I was a student in the Yao Class headed by Prof. Andrew Chi-Chih Yao, and I was very fortunate to be advised by Prof. Jian Li.
Research Interests
I am primarily interested in machine learning theory, AI safety/alignment, and optimization.
I believe that a good theory should come from practice and be able to advance practice: it should start from important real-world phenomena or problems, explain or solve them theoretically, and then guide practice in turn. Following this philosophy, I conduct research with both theory and experiments, aiming to develop solid foundations for modern machine learning and to make AI in the era of large models more efficient, safe, and reliable.
The following are some of the topics I am actively thinking about/working on:
- Science of Large-Scale Training: The training of large models is extremely complex, but are there universal laws that let us predict its behavior? How can we make the training process more predictable? Large language models will not be the last large models that humans train; are there universal laws that can transfer to the training of the next generation of large models?
- Principles in Data-Centric ML: Progress in large model capabilities comes from two sources: scaling up training and improving data quality. Distillation is easy, and stacking engineering tricks will reliably yield performance gains. But beyond that, can we identify basic principles that apply broadly and help us better select, mix, and even generate data?
- Foundations of AI Safety/Alignment: I am also interested in AI safety and alignment. Machine learning usually optimizes a model's performance in the "average case," but AI safety issues are often exposed in extreme cases. What is the fundamental reason models fail in these extreme cases? What are the limitations of current AI alignment methods, and which safety issues can never be completely avoided? In the long run, can we find a systematic approach, as cryptography provides for security, to solve a large class of AI safety problems once and for all?
Please see the Chinese version for more details, especially if you are looking for PhD recruitment information! [Chinese]
Students
PhD Students:
- Haodong Wen (2025–present)
- Kexian Tang (2025–present)
- Huaijie Wang (2023–present, joined our group in 2026)
- Jinhan Li (incoming)
- Tingkai Yan (incoming)
PhD Students in Close Research Collaboration:
- Kairong Luo (2024–present, advised by Prof. Wenguang Chen)
- Haofeng Huang (incoming, advised by Prof. Andrew Yao)
Master's Student:
- Rui Chen (2025–present, co-advised by Prof. Shuran Zheng)
Alumni / Graduating Soon:
- Xinghan Li (Undergraduate Class of 2026, joining the University of Washington as a PhD student)
- Yiran Zhang (Undergraduate Class of 2026, joining UC Berkeley as a PhD student)
- Xingyu Dang (Undergraduate Class of 2025, now PhD student at Princeton)
- Kaiyue Wen (Undergraduate Class of 2024, now PhD student at Stanford)
Courses
- Spring 2026 (upcoming). Mathematics for Computer Science and Artificial Intelligence, Tsinghua University.
- Fall 2025. Large Language Models from Scratch: Theory and Practice, Tsinghua University (Top 5% of teaching evaluations).
Teaching Assistant Experience
- Spring 2024. Teaching Assistant for COS324: Introduction to Machine Learning (by Prof. Sanjeev Arora & Prof. Elad Hazan), Princeton University.
- Fall 2022. Teaching Assistant for COS521: Advanced Algorithm Design (by Prof. Matt Weinberg & Prof. Huacheng Yu), Princeton University.
- Spring 2021. Teaching Assistant for COS598B: Advanced Topics in Computer Science: Mathematical Understanding of Deep Learning (by Prof. Sanjeev Arora), Princeton University.
- Spring 2020. Teaching Assistant for Mathematics for Computer Science (by Prof. Andrew Chi-Chih Yao), Tsinghua University.
- Spring 2019. Teaching Assistant for Distributed Computing (by Prof. Wei Chen), Tsinghua University.
Professional Services
- Organizer, NeurIPS 2024 Workshop on Mathematics of Modern Machine Learning (M3L 2024).
- Organizer, NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning (M3L 2023).
- Conference Area Chair: NeurIPS (2025), ICLR (2026).
- Conference Reviewer: ICML (2020–2025), NeurIPS (2020–2023), ICLR (2022–2025), COLT (2020, 2025), AAAI (2020), KDD (2022).
- Journal Reviewer: TMLR, JMLR, TPAMI, AIJ.
- Organizer, Yao Class Seminar, Tsinghua University (Fall 2019, Fall 2020, Spring 2021).
Universal Online Judge
- I founded the Universal Online Judge (UOJ), a popular online judge system in China, in 2014.
- UOJ supports judging both traditional and non-traditional programming problems in OI (Olympiad in Informatics). A team of top OI competitors regularly hosts programming contests on UOJ.
- [Link] [GitHub] [Docs]