Hugging Face: Interview with Head of Product Victor Mustar
TL;DR
Hugging Face started as a tiny model-sharing hub before the ChatGPT era — Victor Mustar says five years ago it was basically “plain HTML” with around 25 models and a small community using transformers mostly for narrow tasks like image classification, not today’s generative AI workflows.
Open model releases create visible traffic spikes on Hugging Face — Victor says the hub sees bumps whenever major models drop, citing waves around Llama 2, Mixtral, Qwen, DeepSeek, GLM, and Minimax, which reflects Hugging Face’s role as the default distribution layer for open models.
Victor now codes “only with the models” and prefers lightweight agent harnesses — his daily stack is Claude Code with the latest Opus plus Aider-like tooling on open models such as GLM, and he specifically likes minimalist harnesses that don’t over-steer different models with huge system prompts.
A big Hugging Face product shift is designing for agents, not just humans browsing a UI — Victor says this became obvious around late last year, and he points to new Spaces features like an “agent button” that exposes a simple curl/doc interface so agents can chain apps into workflows.
Hugging Face is intentionally trying to be infrastructure, not the coordinator of the ecosystem — Victor frames the company as the foundation developers build on top of, while acknowledging the ecosystem is still messy, with recurring issues like broken chat templates after each new model release.
Victor’s strongest conviction is that open source AI is necessary “to survive” — his argument is simple: if one of the most powerful tools ever created isn’t broadly ownable, the outcome is dangerous, so open models matter even more than classic open-source software did.
The Breakdown
From 25 Models to the Main On-Ramp
Victor Mustar introduces himself as Hugging Face’s head of product, five years into the ride, and describes the early platform as basically the “first Amazon page” for models: plain HTML, about 25 models, and a tiny believer community. Back then it was pre-generative AI, more “classifying birds” than building agents, but the core idea was already there: people needed a place to find and share models.
Why He Joined Before the Hype
Before Hugging Face, Victor was running a classic SaaS company when Hugging Face CTO Julien reached out because they both felt AI was going to be huge. Victor had already been playing with GPT-1 and GPT-2, and by the time he joined, GPT-3-era Davinci was the frontier. He sold his company at just the right time and joined the Paris-based team to focus almost entirely on product and the early Hub.
The Product Job: Keep a Wildly Expanding Surface Area Usable
Victor says Hugging Face still doesn’t run on some grand long-term roadmap; it’s more like a living short- and mid-term set of issues and docs handled by a small team that has worked together for three or four years. The hard part now isn’t inventing from scratch — it’s stopping the product from turning into an incomprehensible blob as features keep piling up. He still specs heavily in Figma with a “red pencil” mindset and sounds very aware that a deeper rework may eventually be needed.
The Models He Actually Uses
Victor’s background is design, but he calls himself a “nerdy designer” who has coded for 10 years — and today he says he codes “only with the models.” His main setup is Claude Code with the latest Opus, plus GLM a lot, which he says is becoming seriously competitive; he even built a 3GS-style car game with it and hit that personal “wow, I should go for it” moment after it solved problem after problem. He also makes a strong case for minimalist harnesses because different open models break when you over-steer them with too much system prompt logic.
Hugging Face as Agent Infrastructure
One of the most interesting product turns in the conversation is Victor’s view that Hugging Face is becoming something agents navigate better than humans. He says the platform is now so broad it feels a bit like Cloudflare, and he’s seeing strong results from using agents with Hugging Face tooling like the CLI. A concrete example: Spaces now has an “agent button” that exposes a simple curl/docs interface so an agent can discover and chain apps — say, a 3D Space with an image-generation Space — into much more powerful workflows.
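The chaining pattern Victor describes can be sketched roughly as follows. Everything here is a hypothetical placeholder — the Space URLs, the shape of the payloads, and the `call_space` helper are illustrative stand-ins for the simple curl-style interface he mentions, not a real Hugging Face API:

```python
# Hypothetical sketch of an agent chaining two Hugging Face Spaces.
# The Space URLs, payload shapes, and call_space() are illustrative
# placeholders, NOT a documented Hugging Face API.

def call_space(space_url: str, payload: dict) -> dict:
    """Stand-in for an HTTP POST to a Space's curl-style endpoint.

    A real agent would first read the docs exposed by the Space's
    "agent button", then POST the payload and parse the JSON reply.
    Responses are stubbed here so the sketch runs without network access.
    """
    if "image-gen" in space_url:
        return {"image": f"image-for:{payload['prompt']}"}
    if "to-3d" in space_url:
        return {"model_3d": f"mesh-from:{payload['image']}"}
    raise ValueError(f"unknown space: {space_url}")

def chain(prompt: str) -> str:
    """Chain two Spaces: text -> image, then image -> 3D model."""
    # Step 1: an image-generation Space turns the prompt into an image.
    img = call_space("https://example.hf.space/image-gen", {"prompt": prompt})
    # Step 2: a 3D Space turns that image into a mesh.
    mesh = call_space("https://example.hf.space/to-3d", {"image": img["image"]})
    return mesh["model_3d"]

print(chain("a red vintage car"))
```

The point of the pattern is that each Space only needs to expose one plainly documented endpoint; the agent supplies the composition logic, which is why a simple curl/docs surface is enough to make the apps chainable.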
Not Trying to Coordinate the Chaos
The interviewer pushes on how fragmented everything feels, and Victor more or less agrees — every big release follows the same ritual of model drop, bug, broken chat template, confusion. But his stance is that Hugging Face shouldn’t be the central coordinator; it should be the foundation. That posture carries into onboarding too: local AI is still hard, regular users struggle even with basic API/tool setup, and Victor doesn’t pretend there’s a perfect product solving that yet.
Where This Is Headed — and Why Open Source Matters So Much
By the end, the tone turns more philosophical. Victor says he’s convinced the world will be “drastically different in five years,” even if most people outside the bubble don’t act like it yet, and he thinks the near-term reality is more augmentation than instant total replacement because people still want human relationships. But the part he feels most strongly about is open source: if one of the most powerful tools ever created isn’t available for people to own and run themselves, he says, “it cannot end well,” and that’s why open models matter even more than open-source software did.
What He Wants People to Try Right Now
Asked for something practical, Victor surprisingly lands on storage. He highlights Hugging Face Buckets, which can be mounted as a virtual file system and are especially useful for training workflows and Spaces-based projects, adding that storage will matter for agents more than people realize. He also plugs the company’s traces effort — still rough in the UI, by his own admission — and suggests users upload traces, ideally after privacy cleanup with their own tooling.