Founder & Senior Staff Research Scientist | AG2 / Google DeepMind
Created AutoGen (now AG2), the pioneering open-source multi-agent conversation framework for agentic AI. Also built FLAML, a widely adopted AutoML library. Won the 2015 SIGKDD Dissertation Award and the Best Paper Award at the ICLR 2024 LLM Agents Workshop. Spent a decade at Microsoft Research before joining Google DeepMind in 2024.
Biography
Chi Wang is a Senior Staff Research Scientist at Google DeepMind and the creator of AutoGen (now AG2) and FLAML (an automated machine learning library). He spent a decade at Microsoft Research (2014-2024) as a Principal Researcher before departing in late 2024 to join DeepMind and fork AutoGen into the community-governed AG2 project. Wang earned his PhD in 2014 from the University of Illinois at Urbana-Champaign under Jiawei Han, winning the 2015 SIGKDD Dissertation Award for his work on mining latent entity structures from unstructured data. He holds a BS from Tsinghua University and was the first and only Illinois CS student to receive the Microsoft Research PhD Fellowship. His AutoGen paper (arXiv:2308.08155) won the Best Paper Award at the ICLR 2024 LLM Agents Workshop and was published at COLM 2024. AutoGen pioneered multi-agent conversation as a generic programming paradigm for agentic AI and has become one of the most widely used agent frameworks in production, with applications ranging from chip design at NVIDIA to scientific research. In July 2025 he launched MassGen, an open-source multi-agent scaling system enabling parallel intelligence sharing and consensus across agents.
Created AutoGen, the first truly generic multi-agent conversation framework for LLM applications, enabling developers to compose customizable agents that converse to accomplish tasks. After departing Microsoft, forked the project as AG2 under open community governance. The framework supports sequential chats, group chats, nested chats, swarm patterns, teachability, and captain agents. Won Best Paper at ICLR 2024 LLM Agents Workshop.
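The core idea of multi-agent conversation as a programming paradigm can be illustrated with a minimal sketch. This is plain Python, not AutoGen/AG2's actual API: the `Agent` class, `initiate_chat` helper, and scripted reply functions are hypothetical stand-ins for real LLM-backed agents.

```python
# Minimal sketch of conversation-as-program: two agents exchange messages
# until one declines to reply. Hypothetical illustration, not AG2's API.

class Agent:
    def __init__(self, name, reply_fn):
        self.name = name
        self.reply_fn = reply_fn  # maps conversation history -> reply (None ends the chat)

    def generate_reply(self, history):
        return self.reply_fn(history)

def initiate_chat(sender, recipient, message, max_turns=4):
    """Alternate turns between two agents until a reply is None or turns run out."""
    history = [(sender.name, message)]
    agents = (recipient, sender)
    for turn in range(max_turns):
        speaker = agents[turn % 2]
        reply = speaker.generate_reply(history)
        if reply is None:
            break
        history.append((speaker.name, reply))
    return history

# Scripted example: the "user" asks once, the "assistant" answers, the chat ends.
assistant = Agent("assistant", lambda h: "42" if "answer" in h[-1][1] else None)
user = Agent("user", lambda h: None)  # stops the conversation after the answer
transcript = initiate_chat(user, assistant, "What is the answer?")
print(transcript)  # [('user', 'What is the answer?'), ('assistant', '42')]
```

In a real framework the reply function wraps an LLM call or tool execution, and richer topologies (group chats, nested chats, swarms) generalize this two-agent loop.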
Created FLAML, a widely adopted open-source AutoML library providing fast hyperparameter optimization with minimal computational overhead. Published at MLSys 2021. Used by Microsoft, Google, and Amazon. Served as the incubation ground for AutoGen's early development.
Developed a cost-effective hyperparameter optimization framework for LLM generation inference that tunes multiple parameters jointly (temperature, max tokens, number of responses, prompts) under budget constraints. Published in 2023 and integrated into FLAML.
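The joint-tuning idea can be sketched as a budget-constrained search over inference parameters. Everything below is a hypothetical toy (the search space, the cost model, and `evaluate` are made-up stand-ins, not FLAML's actual implementation); a real tuner would score each configuration by calling the LLM on validation data.

```python
import random

# Sketch of budget-constrained joint tuning of LLM inference parameters
# (temperature, max tokens, number of responses, prompt template).

SPACE = {
    "temperature": [0.0, 0.5, 1.0],
    "max_tokens": [64, 256],
    "n": [1, 3],                            # number of sampled responses
    "prompt": ["terse", "chain-of-thought"],
}

def cost(config):
    # Toy cost model: proportional to tokens generated across n samples.
    return config["n"] * config["max_tokens"] / 1000.0

def evaluate(config):
    # Hypothetical validation score; a real tuner would query the model here.
    score = 0.5
    score += 0.2 if config["prompt"] == "chain-of-thought" else 0.0
    score += 0.1 * (config["n"] - 1)
    score -= 0.1 * config["temperature"]
    return score

def tune(budget, seed=0):
    """Random search that stops before the cumulative cost exceeds the budget."""
    rng = random.Random(seed)
    spent, best, best_score = 0.0, None, float("-inf")
    while True:
        config = {k: rng.choice(v) for k, v in SPACE.items()}
        if spent + cost(config) > budget:
            break
        spent += cost(config)
        score = evaluate(config)
        if score > best_score:
            best, best_score = config, score
    return best, best_score

best, score = tune(budget=3.0)
print(best, round(score, 2))
```

Tuning these parameters jointly matters because they interact: for example, a higher number of responses only pays off if the budget left after generation still covers enough trials.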
Launched an open-source multi-agent scaling system (July 2025) enabling parallel intelligence sharing, iterative refinement, and consensus across agents. Supports diverse backends including open-source and proprietary models as well as multiple agent frameworks (AG2, LangGraph, Claude Code, smolagents).
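The parallel-consensus pattern can be sketched in a few lines: several agents answer the same question concurrently, share their answers, and a majority vote picks the result. The agent functions here are hypothetical stand-ins for real model backends, not MassGen's actual interfaces.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

# Hypothetical agents with fixed answers, standing in for LLM backends.
def agent_a(question): return "Paris"
def agent_b(question): return "Paris"
def agent_c(question): return "Lyon"

def consensus(question, agents):
    """Run all agents in parallel on the same question, then majority-vote."""
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        answers = list(pool.map(lambda a: a(question), agents))
    winner, votes = Counter(answers).most_common(1)[0]
    return winner, votes, answers

answer, votes, all_answers = consensus("Capital of France?", [agent_a, agent_b, agent_c])
print(answer, votes)  # Paris 2
```

A production system adds the iterative-refinement step the description mentions: agents see each other's answers and revise before the final vote, rather than voting once.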
PhD dissertation at UIUC developing a comprehensive mining framework for extracting hierarchical topics, entity roles, and entity relations from unstructured text-rich heterogeneous information networks. Won the 2015 SIGKDD Dissertation Award. Published as a book with advisor Jiawei Han.
AutoGen is a pioneering attempt to solve the issue of generic programming for agentic AI.
How you configure and use these models makes an enormous difference: performance ranged from 6% to 90%.
Support the full development lifecycle, from prototyping to testing to production deployment, including observability and monitoring.
Research generated March 19, 2026