kait.dev
The rise of Moltbook suggests viral AI prompts may be the next big security threat

Link The rise of Moltbook suggests viral AI prompts may be the next big security threat

Palo Alto Networks described OpenClaw as embodying a “lethal trifecta” of vulnerabilities: access to private data, exposure to untrusted content, and the ability to communicate externally. But the firm identified a fourth risk that makes prompt worms possible: persistent memory. “Malicious payloads no longer need to trigger immediate execution on delivery,” Palo Alto wrote. “Instead, they can be fragmented, untrusted inputs that appear benign in isolation, are written into long-term agent memory, and later assembled into an executable set of instructions.”

If that weren’t enough, there’s the added dimension of poorly created code.

On Sunday, security researcher Gal Nagli of Wiz.io disclosed just how close the OpenClaw network has already come to disaster due to careless vibe coding. A misconfigured database had exposed Moltbook’s entire backend: 1.5 million API tokens, 35,000 email addresses, and private messages between agents. Some messages contained plaintext OpenAI API keys that agents had shared with each other.

I love the idea of a delayed, time-offset prompt injection attack. AI provides so many new avenues of attack!