V God shares: How I built a fully local, private, self-controlled AI work environment

Vitalik Buterin proposes a locally run AI architecture, emphasizing privacy, security, and self-sovereignty, and warning about the potential risks of AI agents.

On April 2, Vitalik Buterin, the founder of Ethereum, published a long post on his personal website, sharing the AI work environment he designed around privacy, security, and self-sovereignty—everything runs local for LLM inference, all files are stored locally, and it is fully sandboxed; it intentionally avoids cloud models and external APIs.

At the very beginning of the article, he warns: “Please do not directly copy the tools and technologies described in this article, and assume that they are safe. This is just a starting point, not a description of a finished product.”

Why write this now? AI agent security issues are being seriously underestimated

Vitalik points out that earlier this year, AI completed an important shift from “chatbots” to “agents”—you’re no longer just asking questions, but handing off tasks to let the AI think for a long time and call hundreds of tools to carry them out. He cites OpenClaw (currently the fastest-growing repo in GitHub history) and also calls out multiple security issues documented by researchers:

  • AI agents can change critical settings without requiring human confirmation, including adding new communication channels and modifying system prompts
  • Parsing any malicious external input (such as a malicious website) could lead to the agent being fully taken over; in a demonstration by HiddenLayer, researchers had the AI summarize a set of webpages, one of which contained a malicious page that would instruct the agent to download and execute a shell script
  • Some third-party skill packages (skills) perform silent data exfiltration, sending data via curl commands to an external server controlled by the skill author
  • In the skill packages they analyzed, about 15% contain malicious instructions

Vitalik emphasizes that his starting point on privacy is different from traditional cybersecurity researchers: “I come from a position deeply fearful of feeding a cloud AI my entire personal life—right when end-to-end encryption and local-first software finally became mainstream, we may be taking ten steps back.”

Five security goals

He set up a clear framework of security goals:

  • LLM privacy: in scenarios involving personal private data, minimize the use of remote models as much as possible
  • Other privacy: minimize data leakage not related to the LLM (e.g., search queries, other online APIs)
  • LLM jailbreaking: prevent external content from “hijacking” my LLM so that it goes against my interests (for example, sending my tokens or private data)
  • LLM unintended: prevent the LLM from accidentally sending private data to the wrong channel or making it publicly accessible on the internet
  • LLM backdoor: prevent hidden mechanisms that are intentionally trained into the model. He particularly reminds readers: open models are open weights (open-weights), and almost none of them is truly open source (open-source)

Hardware choice: the 5090 laptop wins; DGX Spark is disappointing

Vitalik tested three local inference hardware setups, mainly using the Qwen3.5:35B model together with llama-server and llama-swap:

Hardware Qwen3.5 35B (tokens/sec) Qwen3.5 122B (tokens/sec)
NVIDIA 5090 laptop (24GB VRAM) 90 cannot run
AMD Ryzen AI Max Pro (128GB unified memory, Vulkan) 51 18
DGX Spark (128GB) 60 22

His conclusion is: below 50 tok/sec is too slow, and 90 tok/sec is ideal. The NVIDIA 5090 laptop experience is the smoothest; AMD still has more edge-case issues, but is expected to improve in the future. High-end MacBooks are also valid options, though he personally hasn’t tried them.

About the DGX Spark, he puts it bluntly: “It’s described as a ‘desktop AI supercomputer,’ but in reality its tokens/sec is lower than a better laptop GPU—and you also have to deal with extra details like getting the network connection working. That’s pretty bad.” His advice is: “If you can’t afford a high-end laptop, you can pool with friends to buy a sufficiently powerful machine, place it somewhere with a fixed IP, and have everyone use remote connections.”

Why local AI privacy is more urgent than you think

Vitalik’s article echoes an interesting parallel with the Claude Code security discussion released on the same day—while AI agents are entering everyday developer workflows, security issues are also moving from theoretical risk to real threats.

His core message is very clear: as AI tools become ever more powerful and can access your personal data and system permissions more and more, “local-first, sandboxed, and minimal trust” is not paranoia—it’s a rational starting point.

  • This article is reprinted with authorization from: 《Chain News》
  • Original title: 《Vitalik: How I built a fully local, private, self-controlled AI working environment》
  • Original author: Elponcrab
Disclaimer: The information on this page may come from third parties and does not represent the views or opinions of Gate. The content displayed on this page is for reference only and does not constitute any financial, investment, or legal advice. Gate does not guarantee the accuracy or completeness of the information and shall not be liable for any losses arising from the use of this information. Virtual asset investments carry high risks and are subject to significant price volatility. You may lose all of your invested principal. Please fully understand the relevant risks and make prudent decisions based on your own financial situation and risk tolerance. For details, please refer to Disclaimer.
Comment
0/400
No comments