Director, Dept. of Empirical Inference | Max Planck Institute for Intelligent Systems
Pioneer of kernel methods and causal inference in ML. Co-founded ELLIS, leads causal representation learning research, and contributed to exoplanet discovery.
Biography
Bernhard Schölkopf is a German computer scientist and director at the Max Planck Institute for Intelligent Systems in Tübingen, where he heads the Department of Empirical Inference. He is a founding Scientific Director of the ELLIS Institute Tübingen, an affiliated professor at ETH Zürich, and an Amazon Distinguished Scholar. A pioneer of kernel methods and support vector machines alongside Vladimir Vapnik, he co-developed kernel PCA, proved the representer theorem for reproducing kernel Hilbert spaces, and co-founded the field of kernel embeddings of distributions. Starting around 2005 he shifted focus to causal inference in machine learning, arguing that the hardest open problems of AI are intrinsically linked to causality. He co-authored the landmark textbook 'Elements of Causal Inference' (MIT Press, 2017) and has introduced the concepts of independent causal mechanisms and sparse mechanism shift. His group has also applied ML to exoplanet discovery, contributing to the detection of K2-18b, the first habitable-zone exoplanet with water vapour in its atmosphere. He is among the most cited computer scientists in the world, with over 200,000 citations on Google Scholar.
Introduced kernel PCA and proved the representer theorem, showing that SVMs are a special case of a much larger class of algorithms expressible in terms of dot products, all generalizable to nonlinear settings via reproducing kernels. Co-founded the field of kernel methods.
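The idea behind kernel PCA can be illustrated in a few lines: run ordinary PCA implicitly in a feature space by working only with the kernel (dot-product) matrix, never with explicit features. This is a minimal numpy sketch, not Schölkopf's original implementation; the function names, the RBF kernel choice, and the `gamma` parameter are illustrative.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2): dot products
    # in an (infinite-dimensional) feature space, computed without ever
    # constructing the features themselves.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def kernel_pca(X, n_components=2, gamma=1.0):
    """PCA in feature space via the kernel trick."""
    K = rbf_kernel(X, gamma)
    n = K.shape[0]
    # Center the data in feature space, expressed purely through K.
    one_n = np.ones((n, n)) / n
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    # Eigendecompose the centered kernel matrix (eigh: ascending order).
    eigvals, eigvecs = np.linalg.eigh(Kc)
    idx = np.argsort(eigvals)[::-1][:n_components]
    alphas = eigvecs[:, idx]          # expansion coefficients
    lambdas = np.maximum(eigvals[idx], 0.0)
    # Projections of the training points onto the principal components.
    return alphas * np.sqrt(lambdas)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
Z = kernel_pca(X, n_components=2, gamma=0.5)
print(Z.shape)  # (50, 2)
```

With a linear kernel this reduces to standard PCA; swapping in a nonlinear kernel is what lets the same algorithm extract nonlinear structure, which is exactly the "expressible in terms of dot products" observation described above.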
Co-authored with Alexander Smola, this textbook became the definitive reference for support vector machines, regularization, and kernel-based learning algorithms. One of the most cited ML books ever written.
Co-authored with Jonas Peters and Dominik Janzing, this open-access book provides a self-contained introduction to causal inference foundations and learning algorithms, bridging graphical models with modern ML.
Landmark paper arguing that the hard open problems of ML and AI -- including transfer, generalization, and robustness -- are intrinsically related to causality, and introducing the independent causal mechanisms principle.
Laid out a research agenda for discovering high-level causal variables from low-level observations, connecting causal inference with representation learning, transfer, and generalization.
Applied ML methods to transit photometry data, contributing to the discovery of multiple exoplanets including K2-18b, the first habitable-zone exoplanet found to have water vapour in its atmosphere.
Co-founded and serves as chairman of ELLIS, a pan-European network of AI research labs aiming to ensure Europe remains competitive in fundamental AI research and attracts world-class talent.
We are extremely far away from a machine being more intelligent than a human being.
The hard open problems of machine learning and AI are intrinsically related to causality.
Current models and the field of representation learning mostly represent statistical dependences. Thinking would require models that also tell us about the effect of our actions or interventions in the world -- i.e., world models.
Research generated March 19, 2026