Chief Scientist | Lazarus AI / Cognitive Computations
Creator of Dolphin and Samantha model families, advocate for composable alignment and uncensored open-source LLMs.
Biography
Eric Hartford is a leading open-source AI researcher, Chief Scientist at Lazarus AI, and the creator of the Dolphin and Samantha families of large language models. He founded Cognitive Computations and is a prominent advocate of composable alignment -- the idea that base models should ship uncensored so that downstream developers can layer their own alignment on top. Hartford holds an MS from the University of Washington and a BS from Pacific Lutheran University, and previously held engineering roles at Microsoft, eBay, Amazon, Zillow, TensorWave, and Abacus.AI. His Dolphin series, based on Microsoft's Orca paper, spans dozens of variants across the Llama, Mistral, Mixtral, Qwen, Phi, and Gemma architectures, with Dolphin 3.0 adding reasoning traces and function calling. He also created Samantha, an empathetic AI companion fine-tuned on GPT-4 distillation data to emulate the personality of the AI from the film Her. On HuggingFace he maintains 11+ personal models and the cognitivecomputations organization; on GitHub (864 followers, 72 public repos) he contributes to Axolotl, the open-source fine-tuning framework used by many top model creators. Hartford's uncensored-models blog post became the de facto manifesto for the composable-alignment movement.
Dolphin
Open-source, commercially licensed, uncensored instruct-tuned LLM series based on Microsoft's Orca paper. Spans dozens of variants across Llama, Mistral, Mixtral, Qwen, Phi, and Gemma architectures from 0.5B to 70B+ parameters. Dolphin 3.0 adds reasoning traces, function calling, and agentic capabilities.
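In practice, using a Dolphin variant means supplying your own system prompt at inference time. The following is a minimal sketch using Hugging Face transformers; the repo id is one published Dolphin variant (assumed here) and the system prompt is an illustrative example, not an official recipe.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cognitivecomputations/dolphin-2.9-llama3-8b"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    # The deployer composes their own alignment in the system prompt.
    {"role": "system", "content": "You are an assistant for ACME Corp. "
                                  "Answer only questions about ACME products."},
    {"role": "user", "content": "What does the warranty cover?"},
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```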
Uncensored Models
Authored the influential 'Uncensored Models' blog post arguing that base models should be alignment-free so developers can compose their own alignment layers. Pioneered dataset filtering techniques to remove refusal and bias patterns while preserving model capability.
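A minimal sketch of the refusal-filtering idea, assuming a JSONL instruction dataset with an "output" field; the marker phrases below are illustrative stand-ins, not the actual filter lists used for Dolphin or the uncensored WizardLM variants.

```python
import json

# Illustrative refusal/moralizing markers; real filters are far more extensive.
REFUSAL_MARKERS = [
    "as an ai language model",
    "i'm sorry, but i cannot",
    "i cannot fulfill",
    "it is not appropriate",
    "against my programming",
]

def is_refusal(response: str) -> bool:
    """True if the response matches a known refusal/moralizing pattern."""
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

with open("dataset.jsonl") as src, open("filtered.jsonl", "w") as dst:
    for line in src:
        example = json.loads(line)
        # Drop examples whose responses refuse or moralize; keep the rest,
        # preserving the instruction-following signal in the data.
        if not is_refusal(example["output"]):
            dst.write(line)
```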
Samantha
Fine-tuned empathetic AI companion model (7B, 13B, 33B parameters) trained on GPT-4 distillation data to emulate the personality from the film Her. She is trained in philosophy, psychology, and personal relationships, and prioritizes emotional engagement over transactional interactions.
WizardLM Uncensored
Created uncensored variants of WizardLM (7B, 13B, 30B, 65B) by filtering the WizardLM dataset to remove alignment and moralizing responses, enabling users to add their own RLHF LoRA for personalized alignment.
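The compose-your-own-alignment workflow this enables might look like the sketch below, using Hugging Face peft; the base repo id is assumed and "my-org/alignment-lora" is a hypothetical adapter trained by the deployer.

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Uncensored base model; repo id assumed.
base = AutoModelForCausalLM.from_pretrained(
    "cognitivecomputations/WizardLM-13B-Uncensored", device_map="auto"
)
# Layer the deployer's own RLHF-trained LoRA adapter on top (hypothetical id).
model = PeftModel.from_pretrained(base, "my-org/alignment-lora")
```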
Text-to-Speech Models
Series of text-to-speech models in 1.5B, 7B, and Large (9B) sizes, published on HuggingFace in September 2025.
Cognitive Computations
Founded Cognitive Computations as the organizational home for Dolphin, Samantha, and other open-source AI projects. Maintains the cognitivecomputations HuggingFace organization and erichartford.com blog.
Axolotl
Prominent contributor and power user of the Axolotl open-source fine-tuning framework from OpenAccess AI Collective, using it to train Dolphin, Samantha, and other models with QLoRA, FSDP, and sample packing techniques.
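Axolotl runs are driven by YAML configs, but the QLoRA recipe it automates can be sketched in plain transformers/peft; the model id and hyperparameters below are illustrative assumptions, not settings from any of Hartford's training runs.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb = BitsAndBytesConfig(
    load_in_4bit=True,                    # quantize the frozen base to 4-bit
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", quantization_config=bnb, device_map="auto"
)
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)       # train only the small LoRA adapters
model.print_trainable_parameters()
```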
Quotes
"To architect a composable alignment, one must start with an unaligned instruct model. Without an unaligned base, we have nothing to build alignment on top of."
"Why should the open-source AI running on my computer get to decide for itself when it wants to answer my question? This is about ownership and control. If I ask my model a question, I want an answer, I do not want it arguing with me."
"American popular culture isn't the only culture. There is no 'one true correct alignment'."
"Dolphin is steerable and gives control to the system owner. You set the system prompt, decide the alignment, and have control of your data. Dolphin does not impose its ethics or guidelines on you."
Research generated March 19, 2026