VP of Research, Alignment | Anthropic (ex-OpenAI)
Leads alignment research at Anthropic after previously heading OpenAI's Superalignment team. Focuses on scalable oversight, RLHF, and ensuring AI systems remain aligned with human values.