Over the past couple of days, I have been fascinated by the productivity gains people claim to be getting from OpenClaw. While the system suffers from massive prompt injection risks and the current generation of models still has room to grow, I believe it is showing us where the future of AI agents is heading. In terms of privacy, there are risks but also some fairly obvious solutions. Here are three, in increasing order of difficulty:
- Instead of WhatsApp, users can (and should) use Signal to prompt the agent. This is already possible and documented.
- Many users give their agent the ability to make payments using USDC. It would be nice if there were a Zcash CLI wallet that integrates NEAR intents, but in the meantime, a `Skills.md` for using the existing CLI tools would also be a step up.
- Recently, Moxie Marlinspike created a ChatGPT clone, which uses e2e encryption, TEEs, and remote attestation to make private inference available (assuming the TEE has no vulnerability). His work is open source. At the same time, there is a new MIT-licensed model named Kiwi 2.5K from China, which comes close to Claude Sonnet 4.5 in terms of agentic performance (at much lower cost). It should be possible to create a private inference API optimized for agentic workloads using these components.
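To make that last point a bit more concrete, here is a rough sketch of what the client side of such a private inference API could look like. Everything in it is an assumption for illustration: the endpoint, the JSON fields, and the placeholder attestation check are made up, and this is not how Moxie's system or any existing service actually works. The basic flow is just: verify the enclave's attestation, derive a shared key with it, and only ever send encrypted prompts through the untrusted infrastructure.

```python
"""Sketch of a client for a hypothetical attested, end-to-end encrypted
inference API. Endpoint paths, field names, and the attestation check
are assumptions for illustration, not an existing service."""
import os
import requests
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey,
)
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

API = "https://inference.example.com"  # hypothetical TEE-hosted endpoint


def verify_attestation(quote: bytes, enclave_pubkey: bytes) -> None:
    """Placeholder: a real client would use the TEE vendor's verification
    library to check the quote against the vendor's root of trust and
    confirm it binds the enclave's public key and expected code hash."""
    if not quote:
        raise ValueError("missing attestation quote")


def private_completion(prompt: str) -> bytes:
    # 1. Fetch the enclave's attestation quote and ephemeral public key.
    info = requests.get(f"{API}/attestation", timeout=30).json()
    quote = bytes.fromhex(info["quote"])
    enclave_pub_raw = bytes.fromhex(info["x25519_public_key"])
    verify_attestation(quote, enclave_pub_raw)

    # 2. Derive a shared session key (X25519 + HKDF), so only the enclave
    #    that was just attested can decrypt the request.
    client_priv = X25519PrivateKey.generate()
    shared = client_priv.exchange(X25519PublicKey.from_public_bytes(enclave_pub_raw))
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"agent-inference-session").derive(shared)

    # 3. Encrypt the prompt; anything outside the TEE only sees ciphertext.
    nonce = os.urandom(12)
    ciphertext = ChaCha20Poly1305(key).encrypt(nonce, prompt.encode(), None)

    # 4. Send it along with our public key so the enclave can derive the same key.
    client_pub = client_priv.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    resp = requests.post(f"{API}/v1/completions", json={
        "client_public_key": client_pub.hex(),
        "nonce": nonce.hex(),
        "ciphertext": ciphertext.hex(),
    }, timeout=120)
    resp.raise_for_status()
    # The response would be encrypted the same way; decryption is omitted here.
    return resp.content
```

An agent framework like OpenClaw could call something along these lines instead of a normal provider SDK, so that neither the machine running the agent's tools nor the operator of the inference service ever sees prompts or completions in plaintext.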