About Me

I am Kaixuan Huang (黄凯旋), a 4th-year Ph.D. student in the Department of Electrical and Computer Engineering at Princeton University, where I am fortunate to be advised by Professor Mengdi Wang. Before that, I received my B.S. in Mathematics and B.S. in Computer Science from Peking University. My current research interests include reasoning, alignment science and techniques for language models and diffusion models, safety and OOD-robustness of language models, and language model agents. My research is supported by a Google PhD Fellowship.

News

  • 11/2024: Thrilled to receive the Google PhD Fellowship 2024.
  • 10/2024: I will give a talk at INFORMS 2024 about CRISPR-GPT.
  • 03/2024: I started my internship at Google DeepMind, working with Zheng Wen and Csaba Szepesvari.

Selected Publications

  • A Theoretical Perspective for Speculative Decoding Algorithm

    Ming Yin, Minshuo Chen, Kaixuan Huang, and Mengdi Wang
    NeurIPS 2024 [link]

  • Latent Diffusion Models for Controllable RNA Sequence Generation

    Kaixuan Huang*, Yukang Yang*, Kaidi Fu, Yanyi Chu, Le Cong, Mengdi Wang
    NeurIPS 2024 Workshop on AI for New Drug Modalities [link]

  • SORRY-Bench: Systematically Evaluating Large Language Model Safety Refusal Behaviors

    Tinghao Xie, Xiangyu Qi, Yi Zeng, Yangsibo Huang, Udari Madhushani Sehwag, Kaixuan Huang, Luxi He, Boyi Wei, Dacheng Li, Ying Sheng, Ruoxi Jia, Bo Li, Kai Li, Danqi Chen, Peter Henderson, Prateek Mittal
    arXiv preprint [link] [Website]

  • SpecDec++: Boosting Speculative Decoding via Adaptive Candidate Lengths

    Kaixuan Huang, Xudong Guo, Mengdi Wang
    ICML 2024 workshop on Efficient Systems for Foundation Models (ES-FoMo) [link] [Code]

  • AI Risk Management Should Incorporate Both Safety and Security

    Xiangyu Qi, Yangsibo Huang, Yi Zeng, Edoardo Debenedetti, Jonas Geiping, Luxi He, Kaixuan Huang, Udari Madhushani, Vikash Sehwag, Weijia Shi, Boyi Wei, Tinghao Xie, Danqi Chen, Pin-Yu Chen, Jeffrey Ding, Ruoxi Jia, Jiaqi Ma, Arvind Narayanan, Weijie J Su, Mengdi Wang, Chaowei Xiao, Bo Li, Dawn Song, Peter Henderson, Prateek Mittal
    arXiv preprint [link]

  • CRISPR-GPT: An LLM Agent for Automated Design of Gene-Editing Experiments

    Kaixuan Huang*, Yuanhao Qu*, Henry Cousins, William A. Johnson, Di Yin, Mihir Shah, Denny Zhou, Russ Altman, Mengdi Wang, Le Cong
    arXiv preprint [link]

  • Embodied LLM Agents Learn to Cooperate in Organized Teams

    Xudong Guo, Kaixuan Huang, Jiale Liu, Wenhui Fan, Natalia Vélez, Qingyun Wu, Huazheng Wang, Thomas L. Griffiths, Mengdi Wang
    arXiv preprint [link]

  • Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications

    Boyi Wei*, Kaixuan Huang*, Yangsibo Huang*, Tinghao Xie, Xiangyu Qi, Mengzhou Xia, Prateek Mittal, Mengdi Wang, Peter Henderson
    ICML 2024 [link] [Code]

  • A 5' UTR Language Model for Decoding Untranslated Regions of mRNA and Function Predictions

    Yanyi Chu*, Dan Yu*, Yupeng Li, Kaixuan Huang, Yue Shen, Le Cong, Jason Zhang, Mengdi Wang
    Nature Machine Intelligence (2024) [link]

  • Visual Adversarial Examples Jailbreak Large Language Models

    Xiangyu Qi*, Kaixuan Huang*, Ashwinee Panda, Peter Henderson, Mengdi Wang, Prateek Mittal
    AAAI 2024 (Oral); ICML 2023 AdvML Workshop (Oral) [link] [Code]

  • Scaling In-Context Demonstrations with Structured Attention

    Tianle Cai*, Kaixuan Huang*, Jason D. Lee, Mengdi Wang
    ICML 2023 Workshop on Efficient Systems for Foundation Models [link]

  • Reward-Directed Conditional Diffusion: Provable Distribution Estimation and Reward Improvement

    Hui Yuan, Kaixuan Huang, Chengzhuo Ni, Minshuo Chen, Mengdi Wang
    In Advances in Neural Information Processing Systems (NeurIPS), 2023. [link] [Code]

  • Score Approximation, Estimation and Distribution Recovery of Diffusion Models on Low-Dimensional Data

    Minshuo Chen*, Kaixuan Huang*, Tuo Zhao, Mengdi Wang
    In International Conference on Machine Learning (ICML), 2023. [link]

  • Fast Federated Learning in the Presence of Arbitrary Device Unavailability

    Xinran Gu*, Kaixuan Huang*, Jingzhao Zhang, Longbo Huang
    In Advances in Neural Information Processing Systems (NeurIPS), 2021. [link]

  • Why Do Deep Residual Networks Generalize Better than Deep Feedforward Networks? --- A Neural Tangent Kernel Perspective

    Kaixuan Huang*, Yuqing Wang*, Molei Tao, Tuo Zhao
    In Advances in Neural Information Processing Systems (NeurIPS), 2020. [link]

  • On the Convergence of FedAvg on Non-IID Data

    Xiang Li*, Kaixuan Huang*, Wenhao Yang*, Shusen Wang, Zhihua Zhang
    In International Conference on Learning Representations (ICLR), 2020. (Oral Presentation) [link]

Misc.

I love classical music and I practice playing piano 40 hours a day.

I implemented a tiny [tool] to help me filter out interesting daily arXiv papers.

Some philosophical thoughts: I am fascinated by the interplay between network model structures and the intrinsic structures of data, and by the generalization and extrapolation behaviors of neural networks. This resembles how humans' scientific discoveries come to match the real physical world, a question investigated by great philosophers such as Immanuel Kant. The forms of human perception, knowledge, reasoning, and decision-making have also long been studied by fields outside computer science.


Contact Info

Email: kaixuanh AT princeton DOT edu

Wechat: [QR Code]