About Me

I am Kaixuan Huang (黄凯旋), a Ph.D. student in the Department of Electrical and Computer Engineering at Princeton University, where I am fortunate to be advised by Professor Mengdi Wang. Before that, I received a B.S. in Mathematics and a B.S. in Computer Science from Peking University, where I was advised by Prof. Zhihua Zhang during my undergraduate research. In 2019, I visited Georgia Tech as a research intern, supervised by Prof. Tuo Zhao. In 2020, I visited Tsinghua University as a research intern, supervised by Prof. Longbo Huang. I also work closely with Prof. Jason Lee.

I am broadly interested in all aspects of machine learning and data science, with an emphasis on deep learning. My general research goal is to understand the science and the art of solving machine learning problems. I aim to develop theories that provide insights into how to design practical models and algorithms, and I also enjoy building practical algorithms and toolsets to solve specific problems.

I am open to potential collaborations and visiting opportunities. If you are interested, please contact me by email or WeChat.

Publications and Preprints

  • Going Beyond Linear RL: Sample Efficient Neural Function Approximation

    Baihe Huang, Kaixuan Huang, Sham M. Kakade, Jason D. Lee, Qi Lei, Runzhe Wang, Jiaqi Yang (alphabetical)
    In Advances in Neural Information Processing Systems (NeurIPS), 2021. [link]

  • Optimal Gradient-based Algorithms for Non-concave Bandit Optimization

    Baihe Huang, Kaixuan Huang, Sham M. Kakade, Jason D. Lee, Qi Lei, Runzhe Wang, Jiaqi Yang (alphabetical)
    In Advances in Neural Information Processing Systems (NeurIPS), 2021. [link]

  • A Short Note on the Relationship of Information Gain and Eluder Dimension

    Kaixuan Huang, Sham M. Kakade, Jason D. Lee, Qi Lei (alphabetical)
    ICML2021 Workshop on Reinforcement Learning Theory. [link]

  • Fast Federated Learning in the Presence of Arbitrary Device Unavailability

    Xinran Gu*, Kaixuan Huang*, Jingzhao Zhang, Longbo Huang
    In Advances in Neural Information Processing Systems (NeurIPS), 2021. [link]

  • Why Do Deep Residual Networks Generalize Better than Deep Feedforward Networks? --- A Neural Tangent Kernel Perspective

    Kaixuan Huang*, Yuqing Wang*, Molei Tao, Tuo Zhao
    In Advances in Neural Information Processing Systems (NeurIPS), 2020. [link]

  • On the Convergence of FedAvg on Non-IID Data

    Xiang Li*, Kaixuan Huang*, Wenhao Yang*, Shusen Wang, Zhihua Zhang
    In International Conference on Learning Representations (ICLR), 2020. (Oral Presentation) [link]

Contact Info

Email: kaixuanh AT princeton DOT edu

WeChat: [QR Code]