Tianjian Li
Center for Language and Speech Processing, Johns Hopkins University

Hi👋, I’m Tianjian! I’m a PhD student in Computer Science at Johns Hopkins University, proudly advised by Prof. Daniel Khashabi. I am also a research intern at Meta FAIR, where I have the privilege of being advised by Tianlu Wang.
Previously, I completed my Master’s degree in Computer Science at JHU, where I worked with my wonderful advisors Kenton Murray and Philipp Koehn. Before that, I was an undergraduate at New York University.
My research lies at the intersection of machine learning and natural language processing.
I prefer solutions that are simple, generalizable, and theoretically sound.
If you have anything to share with me, please feel free to reach out via email: tli104 at jhu.edu
news
- Sep 4, 2025: Our new work, Jointly Reinforcing Diversity and Quality in Language Model Generations, is out! In this work, we study how to make language models generate diverse outputs without sacrificing quality using online reinforcement learning.
- May 1, 2025: SimpleMix is accepted to ICML 2025! In this work, we study the interplay between on- and off-policy data in preference optimization.
- Jan 23, 2025: 3 papers are accepted to NAACL 🎉, including my work on training on heavily imbalanced datasets, Jack’s work on making language models produce verbatim quotes from training data, and Yining’s work on evaluating the creativity of language models on code generation. I am super grateful to my wonderful co-authors!
- Dec 11, 2024: I will be joining Meta AI Research (FAIR) as a research intern in summer 2025!
- Dec 6, 2024: New blog post on why the log-probabilities of both the chosen and the rejected responses decrease during DPO, and why this is, to some extent, beneficial for alignment.
selected publications
- SIMPLEMIX: Frustratingly Simple Mixing of Off- and On-policy Data in Language Model Preference Learning. In ICML 2025.
- Verifiable by Design: Aligning Language Models to Quote from Pre-Training Data. In NAACL 2025.
- Error Norm Truncation: Robust Training in the Presence of Data Noise for Text Generation Models. In ICLR 2024 (Spotlight, Top 5%).
- Why Does Zero-shot Cross-lingual Generation Fail? An Explanation and A Solution. In ACL 2023 (Findings).