Aili Chen

Ph.D. Student

Fudan University

Biography

Hi, I'm Aili Chen (ι™ˆθ‰Ύεˆ©). I am a first-year Ph.D. student at Fudan University, advised by Prof. Yanghua Xiao at the Knowledge Work Lab. Previously, I received my Bachelor's degree from Fudan University in 2024. I am interested in large language models, especially reasoning models and autonomous agents.

Interests
  • Reasoning & Planning
  • Language Agents
  • LLM Personalization
  • Music, Photography, and Travel
Education
  • Ph.D. in CS, 2024 - 2029 (expected)

    Fudan University

  • B.S. in Information Security, 2020 - 2024

    Fudan University

News

  • Jul. 2025: πŸ‡¦πŸ‡Ή Attending ACL 2025 @ Vienna! I will present DEEPER! Looking forward to meeting everyone!

  • Jun. 2025: πŸ”” Introducing MiniMax-M1, the world's first open-weight, large-scale hybrid-attention reasoning model. MiniMax-M1 is powered by a hybrid Mixture-of-Experts (MoE) architecture combined with a lightning attention mechanism!

  • May. 2025: πŸ”” Check out Enigmata! A complete pipeline for advancing logical reasoning in LLMs, from data generation β†’ verification β†’ RLVR training β†’ evaluation.

  • May. 2025: πŸ”” Check out ARIA! We propose ARIA, a method that Aggregates Rewards in Intention space to enable efficient and effective training of language Agents.

  • May. 2025: πŸ”” Check out ARM! LLMs often suffer from "overthinking" - excessive reasoning that wastes computational resources. ARM introduces adaptive reasoning formats and multiple modes to optimize token usage while maintaining performance.

  • May. 2025: πŸ”” Check out SynLogic! A comprehensive logical reasoning data synthesis framework that generates diverse, verifiable reasoning data at scale for learning logical reasoning and beyond.

  • May. 2025: πŸŽ‰ Our paper DEEPER is accepted to ACL 2025!

  • Mar. 2025: πŸŽ‰ Our paper SelfGoal is accepted to NAACL 2025!

  • Jan. 2025: πŸŽ‰ Our paper Think Thrice Before You Act is accepted to ICLR 2025!

  • Sep. 2024: πŸŽ‰ Our survey paper on role-playing agents is accepted to TMLR!

  • Aug. 2024: πŸ”” Check out TravelAgent! We introduce an LLM-powered travel planning system that generates rational, comprehensive, and personalized itineraries.

Experience

MiniMax
Research Intern
Jan 2025 – Present Β· Shanghai, China
Work on LLMs' reasoning capabilities, language agents, and multi-turn RL.
Alibaba Group
Research Intern
Mar 2024 – Feb 2025 Β· Hangzhou, China
Research on personalized LLM-based agents and RAG.
Meituan
Information Security Engineer Intern
Jul 2023 – Sep 2023 Β· Beijing, China
Information Security Division, Mobility Security Research Department.

Awards

  • Outstanding Graduate of Shanghai (Top 1%)
  • National Second Prize (Top 10%)
  • Meritorious Award (Top 6%)
  • Huawei Scholarship (Top 3%)
  • Outstanding Student Leader of Fudan University (Top 2%)
  • Outstanding Communist Youth League Member of Fudan University (Top 5%)

Recent Publications

(2025). MiniMax-M1: Scaling Test-Time Compute Efficiently with Lightning Attention. Preprint.

(2025). Enigmata: Scaling Logical Reasoning in Large Language Models with Synthetic Verifiable Puzzles. Technical Report.

(2025). ARIA: Training Language Agents with Intention-Driven Reward Aggregation. Preprint.

(2025). ARM: Adaptive Reasoning Model. Preprint.

(2025). DEEPER Insight into Your User: Directed Persona Refinement for Dynamic Persona Modeling. The 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025).