Work
-
Orby AI - A Uniphore Company
Studying preference learning for LLM fine-tuning; honored to be advised by Peng Qi and Gang Li.
Education
-
2020.01 - Present |
Atlanta, GA |
Georgia Institute of Technology
Statistics
-
2016.08 - 2020.06 |
Hefei, Anhui, China |
University of Science and Technology of China
Statistics
Awards
-
2025.08.26
Alice and John Jarvis, Ph.D. Student Paper Competition
-
2022.08
SURE program, Georgia Institute of Technology
-
2021.08
Georgia Institute of Technology
-
2017
University of Science and Technology of China
Publications
-
The Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS 2025)
It quantifies the trade-offs among three categories of fairness notions (independence, separation, and calibration), yielding an equalization-based fairness metric that helps achieve fairness across a class of downstream tasks.
-
arXiv
It addresses the over-confidence phenomenon in state-of-the-art LLMs, proposing a fine-tuning method that detects modifications of keywords in text prompts or of items in image prompts.
-
Reject and Resubmit, Journal of Machine Learning Research
It provides a general inequality for establishing finite-sample bounds for optimization problems whose objective functions involve kernel-based statistics, revealing that the convergence rate (error bound) for such statistics depends on the input dimension only up to a logarithmic factor.
-
Journal of Nonparametric Statistics
This work studies a stepwise dimension reduction method that avoids the elliptical distribution assumption required by widely studied Sufficient Dimension Reduction (SDR) methods, detecting nonlinear subspaces for both the response and the predictors.
-
INFORMS Journal on Computing