Co-founder & Head of Post-Training | Nous Research
Creator of the Hermes fine-tuned model family and OpenHermes datasets, pioneering large-scale synthetic data curation for open-source LLM post-training.
Biography
Teknium (Ryan Teknium) is Co-founder and Head of Post-Training at Nous Research, an open-source AI lab focused on advancing foundational models through community-driven research. He is best known as the creator of the Hermes series of fine-tuned language models and the OpenHermes datasets, which have been collectively downloaded over 55 million times and power more than 120 applications. Before co-founding Nous Research in 2023, he worked at Stability AI. Starting from early GPT-4-generated instruction datasets like GPTeacher, he pioneered large-scale synthetic data curation for LLM post-training, scaling from 242K examples in OpenHermes 1 to over 5 million samples in Hermes 4. His work on neutrally aligned, highly steerable instruct models has made the Hermes family one of the most widely adopted open-source fine-tune lines in the LLM ecosystem.
Flagship series of neutrally aligned, highly steerable instruct-tuned language models released through Nous Research, spanning Hermes, Hermes 2, Hermes 3 (Llama 3.1), DeepHermes 3, and Hermes 4. Collectively downloaded over 55 million times and powering 120+ applications. Known for strong system prompt adherence and tool-use capabilities.
Open-source series of fine-tuned models and their training datasets, starting with 242K GPT-4-generated examples in OpenHermes 1 and scaling to 1M samples in OpenHermes 2.5. The OpenHermes 2.5 Mistral 7B model became one of the most popular open-source fine-tunes, with 152K+ downloads on Hugging Face.
Early collection of modular GPT-4-generated datasets including General-Instruct, Roleplay, Code-Instruct, and Toolformer modules. Over 1,600 GitHub stars. Became a foundational dataset component in the OpenHermes training pipeline and influenced the broader synthetic data movement for LLM fine-tuning.
Co-founded open-source AI research lab in 2023 that grew from a volunteer collective into a $65M-funded company. Produces the Hermes model family, Psyche decentralized training network, and Atropos RL environments framework. Serves as Head of Post-Training, overseeing all fine-tuning, alignment, and RLHF pipelines.
Popular open-source toolkit for prompt engineering with over 420 GitHub stars, providing structured approaches to crafting effective prompts for large language models.
I saw Alpaca, and I wanted to remake it with GPT-4.
Post-training takes pre-trained models and molds them into smarter, more steerable, fine-tuned versions that can be aligned to the morals or ethics you want, or built specifically to work with your product or other tooling.
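At its core, the post-training described here usually begins with supervised fine-tuning: continuing to train the pre-trained model on instruction–response pairs, with loss computed only on the response tokens so the model learns to answer rather than to repeat the prompt. A minimal sketch of that loss-masking idea, using toy token IDs and an illustrative helper name (not any specific library's API):

```python
# Sketch of SFT loss masking: the model is trained to predict response
# tokens, while prompt positions are excluded from the loss by setting
# their labels to an ignore value (-100 is a common convention).

def build_sft_example(prompt_tokens, response_tokens, ignore_index=-100):
    """Return (input_ids, labels) for one instruction-tuning example.

    input_ids: prompt followed by response, as the model sees it.
    labels:    ignore_index over the prompt, real token IDs over the
               response, so only response tokens contribute to the loss.
    """
    input_ids = list(prompt_tokens) + list(response_tokens)
    labels = [ignore_index] * len(prompt_tokens) + list(response_tokens)
    return input_ids, labels

# Toy example: a 3-token prompt and a 3-token response.
inp, lab = build_sft_example([101, 7, 8], [42, 43, 102])
print(inp)  # [101, 7, 8, 42, 43, 102]
print(lab)  # [-100, -100, -100, 42, 43, 102]
```

Dataset curation work like OpenHermes supplies the instruction–response pairs this step consumes; alignment and steerability then come from which examples (and system prompts) are included.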
Research generated March 19, 2026