Co-Founder & CEO | Anyscale
Co-creator of Ray, the open-source distributed computing framework powering AI workloads at OpenAI, Amazon, and NVIDIA. PhD from UC Berkeley under Ion Stoica. Built Anyscale to $1B+ valuation.
Biography
Robert Nishihara is the Co-Founder and CEO of Anyscale and co-creator of Ray, the open-source distributed computing framework that powers AI workloads at companies including OpenAI, Amazon, NVIDIA, Uber, and Visa. He earned a BA in Mathematics from Harvard University and a PhD in Computer Science from UC Berkeley, where he worked in the RISELab and Berkeley AI Research Lab under Ion Stoica and Michael I. Jordan. His dissertation, "On Systems and Algorithms for Distributed Machine Learning," laid the groundwork for Ray. Under his leadership, Anyscale has raised over $250M at a $1B+ valuation, and Ray now orchestrates more than 1 million clusters per month. In late 2025, Anyscale transferred the Ray project to the PyTorch Foundation, establishing it as the neutral industry-standard operating system for AI workloads.
Co-created Ray, the open-source distributed computing framework for AI applications. Ray provides a unified interface for task-parallel and actor-based computation, now used by 10,000+ organizations including OpenAI (for training GPT-4), Amazon, NVIDIA, and Uber. Orchestrates 1M+ clusters per month.
Co-founded and leads Anyscale, the managed Ray platform that runs AI workloads 2x faster and 6x cheaper than DIY cloud deployments. Raised $250M+ at $1B+ valuation. Powers mission-critical AI at Amazon, Cohere, Hugging Face, NVIDIA, OpenAI, and Visa.
Foundational systems paper presenting Ray's architecture: a distributed scheduler and fault-tolerant object store that unify task-parallel and actor-based computation for emerging AI workloads including reinforcement learning and model serving.
Co-developed Ray RLlib, an open-source reinforcement learning library offering a scalable, unified API for RL research and production, built on Ray's distributed execution engine.
Led the development of RayTurbo, an optimized version of Ray for peak performance, and the foundational shift to GPU-native architecture announced at Ray Summit 2024.
UC Berkeley doctoral thesis covering distributed systems design and optimization algorithms for machine learning, forming the theoretical foundation for Ray's architecture.
One of our goals with Ray is to enable developers to build scalable applications in a day without any knowledge of distributed systems. We're going to enable developers to reason only about their application logic.
We're at the outset of tremendous value being created by AI. To realize that value, much of the hard work ahead involves taking existing and future capabilities, making them incredibly reliable in the real world, and building out the underlying hardware and software infrastructure to enable them throughout every industry.
There's clearly going to be a massive infrastructure build-out for AI. On the hardware side, we all know how successful Nvidia is. But the software piece, there's a lot of work to do. There's a lot of complexity to rein in that's growing in AI, and that's a lot of what we're trying to do.
Anyscale's aspiration is to build the fastest, most cost-efficient infrastructure for running LLMs and AI workloads.
AI systems are growing in complexity, from reinforcement learning pipelines that combine simulation, data generation, training, and inference, to multimodal data preparation for RAG and robotics.
Research generated March 19, 2026