Lucas Valbuena|10 questions
Generated by a 20-agent research pipeline that gathers intelligence from GitHub, arXiv, ORCID, HuggingFace, and blog posts, then synthesizes domain-specific interview questions grounded in the person's actual work.
Your 'system-prompts-and-models-of-ai-tools' repository became a massive transparency resource with over 135,000 stars. How did the process of reverse-engineering and publishing those proprietary prompts directly lead to the founding of ZeroLeaks? Was there a specific vulnerability you uncovered that made you realize this needed to be a product, not just a GitHub repo?
Before ZeroLeaks, you built 'better-clawd' as an open-source, telemetry-free alternative to Claude Code. That project required deep product thinking. How did the experience of building a full-stack AI developer tool shape your approach to founding and building ZeroLeaks as a company?
In your blog, you position yourself as an investigator of AI systems. For ZeroLeaks, what were the key architectural decisions in designing a platform that can reliably test for prompt injection and extraction across different LLM providers and application frameworks? How do you balance comprehensiveness with performance?
Your 'market-ai-resolution' project explores rule-based AI reasoning for blockchain oracles. That's a highly specific technical intersection. How does your research into formal methods and deterministic AI systems for DeFi influence the way you architect tests for non-deterministic, conversational LLM vulnerabilities at ZeroLeaks?
Your work rests on a tension: you've built a business (ZeroLeaks) by exposing the vulnerabilities of the very AI tools (like those from Cursor, Warp) that many developers, including yourself, use to build things. Is this a sustainable, symbiotic relationship, or an inherently adversarial one? What's your long-term view on this dynamic?
You have a clear stance against telemetry and vendor lock-in, as seen in 'better-clawd'. In an AI security context, does this philosophy extend to believing that the most secure AI applications will necessarily be open-source and self-hosted? Is there a future where you trust a proprietary, cloud-based AI system to be secure?
Your 'awesome-solana-ai' repo curates resources at the AI/blockchain intersection. Have specific collaborations or conversations within that niche community influenced the roadmap for ZeroLeaks? For instance, are you seeing unique AI security threats emerge in on-chain or agentic applications?
With 3,500+ GitHub followers gained largely from one repo, you have a highly engaged but specific audience of developers interested in AI tooling internals. How are you leveraging that community not just for distribution, but for collaborative threat research to fuel ZeroLeaks' detection capabilities?
Zero Calendar is an AI-native app, and ZeroLeaks secures AI apps. Do you see these paths converging? Is the long-term vision for ZeroLeaks to evolve into or inspire a suite of secure-by-default, AI-native development frameworks or primitives?
You've explored AI for prediction market resolution. Looking ahead, do you believe the biggest AI security challenges will shift from prompt leakage in today's chat applications to securing autonomous, economically incentivized AI agents? How is ZeroLeaks preparing for that potential future?