About Me
I am Kaixuan Huang (黄凯旋), a Ph.D. student in the Department of Electrical and Computer Engineering at Princeton University, where I am fortunate to be advised by Professor Mengdi Wang. Before that, I received a B.S. in Mathematics and a B.S. in Computer Science from Peking University, where I was advised by Prof. Zhihua Zhang during my undergraduate research. In 2019, I visited Georgia Tech as a research intern supervised by Prof. Tuo Zhao, and in 2020, I visited Tsinghua University as a research intern supervised by Prof. Longbo Huang. I also work closely with Prof. Jason Lee.
I am interested in deep learning and artificial general intelligence. At a high level, my research aims to (1) understand the interplay between network architectures and the intrinsic structure of data, and how it influences the generalization and extrapolation behavior of neural networks, and (2) understand the forms of human perception, knowledge, reasoning, and decision-making, and how to design next-generation AGI systems that are more human-like.
Currently, I am interested in diffusion models and large language models. I am open to potential collaborations and visiting opportunities. If you are interested, please contact me by email or WeChat.
Publications and Preprints
- Deep Reinforcement Learning for Efficient and Fair Allocation of Health Care Resources
Yikuan Li*, Chengsheng Mao*, Kaixuan Huang*, Hanyin Wang*, Zheng Yu*, Mengdi Wang, Yuan Luo
arXiv preprint. [link]
- Visual Adversarial Examples Jailbreak Large Language Models
Xiangyu Qi*, Kaixuan Huang*, Ashwinee Panda, Peter Henderson, Mengdi Wang, Prateek Mittal
ICML 2023 Workshop on New Frontiers in Adversarial Machine Learning. (Oral) [link] [Code]
- Scaling In-Context Demonstrations with Structured Attention
Tianle Cai*, Kaixuan Huang*, Jason D. Lee, Mengdi Wang
ICML 2023 Workshop on Efficient Systems for Foundation Models. [link]
- Reward-Directed Conditional Diffusion: Provable Distribution Estimation and Reward Improvement
Hui Yuan, Kaixuan Huang, Chengzhuo Ni, Minshuo Chen, Mengdi Wang
In Advances in Neural Information Processing Systems (NeurIPS), 2023. [link] [Code]
- Score Approximation, Estimation and Distribution Recovery of Diffusion Models on Low-Dimensional Data
Minshuo Chen*, Kaixuan Huang*, Tuo Zhao, Mengdi Wang
In International Conference on Machine Learning (ICML), 2023. [link]
- Deep Reinforcement Learning for Cost-Effective Medical Diagnosis
Zheng Yu*, Yikuan Li*, Joseph Kim*, Kaixuan Huang*, Yuan Luo, Mengdi Wang
In International Conference on Learning Representations (ICLR), 2023. [link]
- Going Beyond Linear RL: Sample Efficient Neural Function Approximation
Baihe Huang, Kaixuan Huang, Sham M. Kakade, Jason D. Lee, Qi Lei, Runzhe Wang, Jiaqi Yang (alphabetical)
In Advances in Neural Information Processing Systems (NeurIPS), 2021. [link]
- Optimal Gradient-based Algorithms for Non-concave Bandit Optimization
Baihe Huang, Kaixuan Huang, Sham M. Kakade, Jason D. Lee, Qi Lei, Runzhe Wang, Jiaqi Yang (alphabetical)
In Advances in Neural Information Processing Systems (NeurIPS), 2021. [link]
- A Short Note on the Relationship of Information Gain and Eluder Dimension
Kaixuan Huang, Sham M. Kakade, Jason D. Lee, Qi Lei (alphabetical)
ICML 2021 Workshop on Reinforcement Learning Theory. [link]
- Fast Federated Learning in the Presence of Arbitrary Device Unavailability
Xinran Gu*, Kaixuan Huang*, Jingzhao Zhang, Longbo Huang
In Advances in Neural Information Processing Systems (NeurIPS), 2021. [link]
- Why Do Deep Residual Networks Generalize Better than Deep Feedforward Networks? --- A Neural Tangent Kernel Perspective
Kaixuan Huang*, Yuqing Wang*, Molei Tao, Tuo Zhao
In Advances in Neural Information Processing Systems (NeurIPS), 2020. [link]
- On the Convergence of FedAvg on Non-IID Data
Xiang Li*, Kaixuan Huang*, Wenhao Yang*, Shusen Wang, Zhihua Zhang
In International Conference on Learning Representations (ICLR), 2020. (Oral Presentation) [link]
Misc.
I love classical music, and I practice the piano 40 hours a day.
I built a tiny [tool] that helps me filter interesting papers from the daily arXiv feed.
Contact Info
Email: kaixuanh AT princeton DOT edu
Wechat: [QR Code]