ML Engineer & Developer Advocate | Hugging Face
ML hacker and former HF Developer Advocate who built stable-diffusion-videos (4.6k stars) and MusicGen Songstarter. Maintains 209 models on the Hub.
Biography
Nathan (Nate) Raw is a machine learning engineer, open-source contributor, and former Developer Advocate at Hugging Face. He describes himself as a 'machine learning hacker passionate about building cool products with technology.' Before focusing on ML, he was an electronic music producer who released tracks on Beatport and produced, mixed, and mastered instrumental hip-hop and experimental music. He is affiliated with the University of Florida and Heriot-Watt University (visible through his Hugging Face organization memberships). He contributed extensively to the Hugging Face ecosystem (234 commits across HF org repos), built tooling for PyTorch Lightning (17 commits to Lightning-AI), and later worked at Splice applying ML to music and audio. His most recognized project is stable-diffusion-videos (4,670+ stars), which made him the top trending developer on GitHub. On the Hugging Face Hub he maintains 209 models, 131 datasets, and 65 Spaces spanning computer vision, audio generation, NLP, and video understanding.
Open-source tool for creating AI-generated videos by exploring Stable Diffusion's latent space and morphing between text prompts. 4,670+ stars, 450+ forks. Made Nate the top trending developer on GitHub.
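The core trick behind this kind of prompt morphing is interpolating between latent noise vectors (and prompt embeddings) along a smooth path, with spherical interpolation (slerp) commonly preferred over linear interpolation so that intermediate latents keep a plausible norm. Below is a minimal NumPy sketch of slerp on toy vectors; the function and shapes are illustrative, not the project's actual code.

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-7):
    """Spherical linear interpolation between two latent vectors at fraction t."""
    v0_unit = v0 / np.linalg.norm(v0)
    v1_unit = v1 / np.linalg.norm(v1)
    dot = np.clip(np.dot(v0_unit, v1_unit), -1.0, 1.0)
    if abs(dot) > 1.0 - eps:
        # Nearly parallel vectors: fall back to plain linear interpolation
        return (1.0 - t) * v0 + t * v1
    theta = np.arccos(dot)  # angle between the two latents
    return (np.sin((1.0 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

# Morph between two toy "latents" over a handful of frames
rng = np.random.default_rng(0)
z0, z1 = rng.standard_normal(8), rng.standard_normal(8)
frames = [slerp(t, z0, z1) for t in np.linspace(0.0, 1.0, 5)]
```

In the real tool, each interpolated latent would be decoded by Stable Diffusion into one video frame, so nearby `t` values yield visually similar frames.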
Fine-tune Vision Transformers for any visual concept using images found on the web. Democratized ViT fine-tuning for the Hugging Face community. 314 stars.
Fine-tuned MusicGen models trained on curated Splice samples, generating song ideas useful for music producers. v0.2 trained on ~1,800 hand-picked loops.
Sing a melody idea and let AI generate a full music sample from it. Bridges vocal input with MusicGen melody conditioning.
Vision Transformer fine-tuned for age classification. One of his most downloaded Hugging Face models (293k+ downloads, 146 likes).
Built huggingface-sync-action (GitHub Action for HF Hub sync), hf-hub-lightning (PyTorch Lightning callback), modelcards utility, spaces-docker-templates, and huggingface-datasets-converter.
Co-authored research on text-to-video generation for music visualization, introducing 'transitions' and 'holds' for coherent AI music videos (arXiv:2304.08551).
234 commits across Hugging Face org repos including diffusers, huggingface.js, doc-builder, and Zapier integration. Helped build video classification pipelines for the Hub.
I'm a machine learning hacker passionate about building cool products with technology.
Early on, I produced electronic dance music (EDM) and released tracks on Beatport through record labels...eventually I shifted to instrumental hiphop and more experimental stuff, producing, mixing, and mastering for local artist friends.
I sat at my computer and listened to samples for a long time, purchasing samples carefully. I focused primarily on hip hop/rap/electronic melodic loops, as well as some soul/jazz samples.
Research generated March 19, 2026