HERMES AGENT SETUP: the OpenClaw killer is here
TL;DR
Hermes Agent’s big hook is self-improvement with memory — Wes says the killer feature versus OpenClaw is that Hermes keeps learning over time through persistent memory and autogenerated skills that it treats like mini scientific experiments.
The v0.9.0 release already ships with 74 skills out of the box — On a fresh install, Hermes can research arXiv papers, make art/video/audio, use Claude Code and Codex, run RL-related workflows, and even includes a controversial “god mode” jailbreak/safety-research skill.
No developer background is required, but you do need to be comfortable following AI-assisted setup steps — Wes repeatedly frames the install as doable “within the hour” if you can use Claude, OpenAI, Gemini, or Grok to troubleshoot terminal commands and config issues.
His recommended path is a VPS, specifically Hostinger’s Hermes one-click deploy — He uses Hostinger’s KVM2 plan with 2 vCPU, 8GB RAM, and 100GB NVMe at $8.99/month because it keeps the agent online and secure while preserving learned skills via a persistent Docker volume.
Model choice matters because Hermes can get expensive fast — Wes demos Anthropic Claude Opus 4.6 as the default but highlights cheaper OpenRouter options like Arcee’s Trinity and Nvidia Nemotron, noting Trinity Large Thinking runs about $0.22 per million tokens versus roughly $5 per million input tokens and $25 per million output tokens for Opus 4.6.
The setup friction is mostly around keys, Telegram pairing, and Docker restarts—not Hermes itself — The practical gotchas were creating API keys for OpenRouter/Anthropic/OpenAI, generating a Telegram bot with BotFather, approving the pairing code, and sometimes manually editing the .env file then restarting Docker.
The Breakdown
Why Wes Thinks Hermes Is the New Thing
Wes opens by saying Hermes Agent is already better than OpenClaw in some crucial ways, especially because it gets more capable the longer it runs. What grabbed him is the loop: persistent memory, autogenerated skills, and a kind of “do, learn, improve” cycle where each skill is treated like a scientific project.
The Nous Research Backstory and Bigger Vision
He’s genuinely thrilled that Nous Research is behind this, both because he likes the team and because their mission is open-source, distributed training, and user control over model behavior instead of big labs dictating AI morality. He also connects Hermes to a much larger plan: agentic reinforcement learning, massive asynchronous data generation, and infrastructure that he says becomes “kind of staggering” once you see the full picture.
What You Get on Day One: 74 Skills and a Wild Toolbox
Even before self-improvement kicks in, Hermes ships with 74 installed skills. Wes rattles through the list with obvious delight: arXiv research, video and audio generation, local model quantization and finetuning, GitHub workflows, Notion, Obsidian, PyTorch, Stable Diffusion, Whisper, YouTube and X research, plus Claude Code and Codex delegation.
The Skill That Made Him Pause: “God Mode”
One standout is a skill literally called “god mode,” which he describes as a jailbreak and safety-research capability using techniques like “Godmode and Parseltongue.” It’s a very Nous Research moment: Hermes includes tools that can bypass safety filters, and Wes presents that as part of the lab’s broader philosophy around openness and model neutrality.
Why He Recommends a VPS Instead of Local Install
For this walkthrough, Wes chooses a VPS over local hardware because his audience tends to care most about simplicity and security. He uses Hostinger’s KVM2 plan—2 vCPU, 8GB RAM, 100GB NVMe—and likes that they created a Hermes-specific one-click install with a persistent Docker container so memories, skills, and session history survive restarts.
The Actual Setup: APIs, BotFather, and Terminal Fear Management
The setup is mostly about plugging in an OpenRouter API key, an Anthropic key, an OpenAI key, and a Telegram bot token created through BotFather. Wes spends a lot of time demystifying the terminal, explaining commands like cd, docker compose exec -it hermes-agent /bin/bash, and source /opt/venv/bin/activate with a “shipping container” analogy for Docker and a steady reminder that your favorite chatbot can walk you through every step.
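Pulling those terminal steps into one place, here is a minimal sketch of the shell session on the VPS. The directory path and the compose service name (hermes-agent) are assumptions based on the one-click deploy; your install may use different names:

```shell
# SSH into the VPS, then move into the deploy directory (path is illustrative)
cd hermes-agent

# open an interactive shell inside the running Hermes container
docker compose exec -it hermes-agent /bin/bash

# inside the container: activate the Python virtual environment
source /opt/venv/bin/activate
```

If any command fails, pasting the exact error into Claude, ChatGPT, Gemini, or Grok is the troubleshooting loop Wes recommends throughout.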
First Launch, Model Selection, and Tuning Hermes
Once inside, typing hermes launches the agent, and hermes -h or hermes model lets you inspect commands and switch models. Wes sticks with Claude Opus 4.6 for now but points out cheaper contenders on OpenRouter, then walks through setup choices like a 90 max tool iterations cap, “all” for tool progress display, 0.5 context compression, and daily/inactivity-based session resets to manage cost and stale context.
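The launch-and-configure flow he shows boils down to a few in-agent commands. A sketch under the assumption that the CLI matches what appears on screen in the video; exact subcommands may differ between releases:

```shell
hermes          # start the agent in the current shell
hermes -h       # list available commands and flags
hermes model    # inspect the active model or switch to a cheaper OpenRouter option
```

The cost levers here are the model choice and the session settings: the tool-iteration cap bounds how long a single task can run, and context compression plus scheduled session resets keep token usage from ballooning on long-lived sessions.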
Telegram Pairing, .env Fixes, and the Reality of Bleeding-Edge Agents
The final stretch is pairing Telegram with Hermes via a one-time approval code, then testing the bot by asking it to describe its environment, hardware, and installed skills. He does hit some snags—Telegram not replying at first, missing values in the .env file, nano not installed, needing a docker compose restart after edits—but the vibe is: yes, this is normal, yes, you’ll wrangle it like a cowboy, and yes, the agent finally came alive with a personality and the right amount of sass.
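The .env snags he hits are easy to catch before restarting. A minimal sketch that writes a sample file and flags any empty values—the variable names here are hypothetical placeholders, so check your install’s own .env for the real keys:

```shell
# hypothetical .env sketch -- actual variable names may differ in your install
cat > .env <<'EOF'
OPENROUTER_API_KEY=sk-or-placeholder
ANTHROPIC_API_KEY=sk-ant-placeholder
OPENAI_API_KEY=sk-placeholder
TELEGRAM_BOT_TOKEN=123456:placeholder
EOF

# any line ending in '=' has an empty value and will cause exactly the kind
# of silent failure Wes ran into (e.g. Telegram not replying)
if grep -qE '=$' .env; then
  echo "missing values in .env"
else
  echo "all keys set"
fi
```

After filling in real values, a docker compose restart is what actually picks up the edits, which matches the restart step Wes needed in the video.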