Product Leader / Co-creator of Ray Tune | Anyscale
Co-creator of Ray Tune, co-author of the Ray OSDI paper and O'Reilly's 'Learning Ray'. UC Berkeley RISELab researcher turned Anyscale product leader.
Biography
Richard Liaw is a co-creator of Ray Tune and a product leader at Anyscale, the company commercializing the Ray distributed computing framework. A UC Berkeley RISELab graduate researcher under Joseph Gonzalez, Ion Stoica, and Ken Goldberg, Liaw is a co-author of the foundational Ray paper (OSDI 2018; 1,560 citations), the RLlib reinforcement learning library (ICML; 1,006 citations), and the Tune hyperparameter tuning platform (1,079 citations). He co-authored 'Learning Ray' (O'Reilly, 2023) and has an h-index of 13 with over 4,300 total citations across 20 publications. His research spans distributed systems, hyperparameter optimization (HyperSched, RubberBand), and scalable reinforcement learning.
Co-created Tune, a unified framework for distributed hyperparameter tuning and model selection that supports ASHA, BOHB, and PBT, and integrates with PyTorch, TensorFlow, XGBoost, and Keras. Over 1,079 citations.
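The schedulers named above (ASHA, BOHB) share a successive-halving core: repeatedly evaluate all surviving configurations, promote the top fraction to a larger training budget, and stop the rest. A minimal pure-Python sketch of that idea follows — this is not Ray Tune's actual implementation, and the scoring function is a synthetic stand-in for a real validation metric:

```python
import random

def successive_halving(configs, evaluate, budget=1, eta=2):
    """Evaluate all surviving configs at the current budget, keep only the
    top 1/eta fraction, multiply the budget by eta, and repeat until one
    config remains -- the core idea behind ASHA-style schedulers."""
    survivors = list(configs)
    while len(survivors) > 1:
        # Rank surviving configs by their score at the current budget.
        scored = sorted(survivors, key=lambda c: evaluate(c, budget), reverse=True)
        # Promote the best 1/eta fraction; the rest get no more resources.
        survivors = scored[: max(1, len(scored) // eta)]
        budget *= eta
    return survivors[0]

# Toy example (hypothetical): "training" a config is a noisy function of
# its learning rate, and more budget means a less noisy estimate.
random.seed(0)
configs = [{"lr": lr} for lr in (0.001, 0.01, 0.1, 1.0)]

def evaluate(config, budget):
    noise = random.gauss(0, 1.0 / budget)
    return -abs(config["lr"] - 0.01) + noise

best = successive_halving(configs, evaluate)
print(best)
```

The geometric budget growth is what makes the approach cheap: most configurations are discarded after only a small evaluation, and only the finalists receive the full training budget.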
Co-authored the foundational Ray paper introducing a general-purpose distributed framework for emerging AI applications with fault-tolerant primitives. 1,560 citations.
Co-authored the scalable reinforcement learning library providing composable abstractions for multi-agent environments and policy serving. 1,006 citations.
Co-authored the definitive O'Reilly book on Ray with Max Pumperla and Edward Oakes, covering distributed Python for machine learning including training and cluster management.
Developed a dynamic application-level resource scheduler that reallocates resources to best-performing hyperparameter trials under deadline constraints. Published at ACM SoCC 2019.
Co-developed RubberBand, a cloud-based hyperparameter tuning system for cost-efficient model selection. Published at EuroSys 2021.
Ray is a compute engine for scaling AI workloads, offering libraries for training, serving, and data processing.
Scaling from local development to large clusters without needing a distributed systems PhD was a key design principle.
Ray Data makes batch inference and image processing pipelines 100 times easier than forcing them into weird setups.
Research generated March 19, 2026