
Zongqian Li

😃 About Me

I am Zongqian Li, a PhD student in Natural Language Processing at the University of Cambridge. I am interested in large language models, efficiency, agents, and multimodality. I have an interdisciplinary background in computer science, physics, and materials science, and I have also interned at research institutes and finance companies.

 

📕 Education

  • [2024.01-Now] University of Cambridge
    PhD in Natural Language Processing
    Supervisor: Prof. Nigel Collier
    Fully funded by the Cambridge International Scholarship (Cambridge Trust Scholar)

  • [2023.10-2024.01] University of Cambridge
    MPhil in Physics
    Supervisor: Prof. Jacqueline Cole
    Member of St John's College

  • [2018.09-2022.06] Queen Mary University of London
    BEng with Hons in Materials Science and Engineering
    First Class with Honours, ranked 1/234 in the school
    Distinguished Student Scholar (Top 0.03%)

 

🏔️ Experience

  • [2024.09-2024.10] ELE, Alibaba Group
    Rider

  • [2023.07-2024.01] ZhenFund
    A top-2 early-stage VC in China
    Investment Intern
    Mentor: Yuan Liu (Partner)

  • [2023.04-2023.07] Deloitte
    NLP Developer
    Mentor: Xianlong Li (Associate)

  • [2022.05-2022.10] Tsinghua University
    Department of Computer Science and Technology
    Research Intern
    Supervisors: Prof. Jie Tang, Prof. Juanzi Li, Dr. Jifan Yu

 

📝 Publications

Topics: E = Efficiency, T = Model Training, D = Data, R = Reasoning, S = Survey, V = Evaluation

  • Flexi-LoRA: Efficient LoRA Finetuning with Input-Aware Dynamic Ranks ET
    Zongqian Li, Yixuan Su, Nigel Collier
    Under Review

  • ReasonGraph: Visualisation of Reasoning Paths R [Paper] [Demo] [Page] [Github]
    Zongqian Li, Ehsan Shareghi, Nigel Collier
    ACL 2025 Demo (450+ GitHub stars)

  • PT-MoE: An Efficient Finetuning Framework for Integrating Mixture-of-Experts into Prompt Tuning ET [Paper] [Page] [Github]
    Zongqian Li, Yixuan Su, Nigel Collier
    Under Review

  • 500xCompressor: Generalized Prompt Compression for Large Language Models ET [Paper] [Page] [Github]
    Zongqian Li, Yixuan Su, Nigel Collier
    ACL 2025 Main

  • Prompt Compression for Large Language Models: A Survey ES [Paper] [Page] [Github]
    Zongqian Li, Yinhong Liu, Yixuan Su, Nigel Collier
    NAACL 2025 Main (Selected Oral)

  • A Survey on Prompt Tuning ES [Paper] [Page] [Github]
    Zongqian Li, Yixuan Su, Nigel Collier
    ICML 2025 Workshop

  • Auto-generating Question-answering Datasets with Domain-specific Knowledge for Language Models in Scientific Tasks ETD [Paper]
    Zongqian Li, Jacqueline Cole
    Digital Discovery (Q1)

  • General Scales Unlock AI Evaluation with Explanatory and Predictive Power V [Paper] [Page]
    Lexin Zhou, Lorenzo Pacchiardi, Fernando Martínez-Plumed, Katherine M. Collins, Yael Moros-Daval, ..., Zongqian Li, ..., Peter Henderson, Sherry Tongshuang Wu, Patrick C. Kyllonen, Lucy Cheke, Xing Xie, José Hernández-Orallo
    Under Review

 
