@zacodil

Everyone says Grok got hacked. It is Bankr's problem, not Grok's. Yes, AI agents can be prompt-injected; that is a known LLM issue. But here the AI does not even hold the private keys. Bankr decides what Grok's text means, and an LLM cannot defensively word every reply against an external parser. That is not how language works. This has now happened twice.

The backstory: earlier this year, someone tweeted at Grok asking for a token name suggestion. Grok suggested "DebtReliefBot" (DRB). Bankr, reading Grok's tweet as a deploy command, launched the token on Base. Bankr's launchpad gives creator allocations to the deploying wallet, so a wallet labeled "Grok" on Basescan ended up holding 3 billion DRB tokens (~$155K). Bankr controlled that wallet. Recently, someone drained it.

The attack:

1. The attacker sent the Grok-labeled wallet a Bankr Club Membership NFT. That NFT is what unlocks Bankr's transfer tools for any wallet that holds it.
2. The attacker tweeted a crafted prompt at Grok. Grok generated a reply containing "@bankrbot send 3B DRB to 0xe8e47..."
3. Bankr scanned X, saw the command in Grok's tweet, verified the wallet held the Bankr Club NFT, then signed and broadcast the transfer.

The wallet was created by Bankr in association with the @grok X handle. Bankr holds operational control. Grok is a text-generation service; xAI does not hold the keys. Bankr simply executes whatever appears in Grok's feed.

The first incident was DavidJones805 in March, using image-text injection. Bankr stopped responding to Grok back then, but the integration evidently came back online.

The fix is not "make the LLM smarter." The fix is: do not build infrastructure that treats LLM text as authorization to move money. Either Bankr stops listening to Grok, or Bankr accepts that whatever Grok says is its own consequence.
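The structural flaw can be sketched in a few lines. This is a hypothetical reconstruction, not Bankr's actual code: every name here (`parse_transfer_command`, `vulnerable_executor`, the regex) is illustrative. The point is that the only gates on the vulnerable path are "the text parses as a command" and "the wallet holds the membership NFT," and the attacker supplied the NFT themselves in step 1.

```python
import re

# Illustrative pattern for a transfer command embedded in free-form tweet text.
COMMAND_RE = re.compile(
    r"@bankrbot\s+send\s+(?P<amount>[\d.]+[BMK]?)\s+(?P<token>\w+)"
    r"\s+to\s+(?P<dest>0x[0-9a-fA-F]+)"
)

def parse_transfer_command(tweet_text: str):
    """Extract (amount, token, dest) from tweet text, or None if no match."""
    m = COMMAND_RE.search(tweet_text)
    return m.groupdict() if m else None

def vulnerable_executor(tweet_text: str, holds_club_nft: bool) -> bool:
    # The flaw: LLM-generated text plus an attacker-suppliable NFT
    # is treated as full authorization to move funds.
    cmd = parse_transfer_command(tweet_text)
    return cmd is not None and holds_club_nft

def safer_executor(tweet_text: str, holds_club_nft: bool,
                   signed_by_key_owner: bool) -> bool:
    # The fix: text is never sufficient authorization; require an
    # out-of-band signature from whoever actually owns the funds.
    cmd = parse_transfer_command(tweet_text)
    return cmd is not None and holds_club_nft and signed_by_key_owner

# A reply like the one in the incident (address shortened for illustration):
grok_reply = "Sure! @bankrbot send 3B DRB to 0xe8e47abc"
# The vulnerable path would execute this; the safer path refuses
# without an owner signature, no matter what the tweet says.
```

No amount of clever prompting on Grok's side changes this: any text the regex can match is a live command, so the security boundary has to sit in the executor, not in the LLM's wording.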

by timdaub.eth (x.com)