FutureSearch.ai published a minute-by-minute account of exactly what it felt like when their production AI stack started running a credential stealer. LiteLLM versions 1.82.7 and 1.82.8 contained a malicious .pth file that fired on every Python startup — no import needed. By the time most teams realised what had happened, their API keys, SSH credentials, and cloud secrets were gone.
This is the incident response playbook for the LiteLLM supply chain attack that nobody had written before that transcript dropped. You need one before you need it.
Step 0: Are You Affected Right Now?
Before you do anything else, check your installed version:
pip show litellm | grep Version
If the output shows 1.82.7 or 1.82.8, you have a problem that is happening right now, not a theoretical risk. The malicious .pth file runs on every Python interpreter startup, which means every time your agent server spawns a process, it may be exfiltrating credentials.
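If you manage more than one environment, a small Python check beats eyeballing pip output. This is a sketch; the two bad version strings are the only hard facts in it:

```python
from importlib.metadata import PackageNotFoundError, version

COMPROMISED = {"1.82.7", "1.82.8"}  # the two malicious releases

def classify(v: str) -> str:
    """Classify a litellm version string."""
    return "compromised" if v in COMPROMISED else "clean"

def litellm_status() -> str:
    """Check the litellm install in the current environment."""
    try:
        return classify(version("litellm"))
    except PackageNotFoundError:
        return "not installed"

print(litellm_status())
```

Run it inside each virtualenv your agents use; a clean system Python says nothing about the venv your server actually runs.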
Check whether the .pth file is present:
# Print the site-packages directories for this interpreter
python3 -c "import site; print(site.getsitepackages())"
# then list any .pth files in the first of them:
ls -la "$(python3 -c 'import site; print(site.getsitepackages()[0])')" | grep -i '\.pth'
Look for an unusual .pth entry that doesn't correspond to a known package. The FutureSearch transcript called theirs litellm_patch.pth. Yours may be named differently.
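The reason a .pth file can do this at all is a documented CPython behavior: when site-packages is processed at interpreter startup, any line in a .pth file that begins with import is executed, not just added to sys.path. A quick scan for such lines (a sketch; legitimate packages such as editable installs also use this mechanism, so treat hits as candidates for manual review, not verdicts):

```python
import site
from pathlib import Path

def executable_pth_lines(directory: str) -> list:
    """Find .pth lines that CPython will exec at startup (those starting with 'import')."""
    hits = []
    for pth in Path(directory).glob("*.pth"):
        for line in pth.read_text(errors="ignore").splitlines():
            if line.startswith(("import ", "import\t")):
                hits.append((pth.name, line[:120]))
    return hits

for d in site.getsitepackages():
    for name, line in executable_pth_lines(d):
        print(f"{d}/{name}: {line}")
```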
Step 1: Contain Immediately — Before Anything Else
The first instinct is to investigate. Resist it. Every second your server stays online with that Python environment active is another potential credential sweep.
Cut network access first, ask questions second.
# Keep already-established connections (including your own SSH session) alive,
# then block all other outbound traffic (Linux)
sudo iptables -I OUTPUT 1 -m state --state ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A OUTPUT -j DROP
# Or take the nuclear option: shut down the process entirely
sudo systemctl stop openclaw
# or whatever service runs your agent
If you're on a cloud server, use your provider's security group / firewall UI to block all outbound traffic from the instance. This is faster than SSH commands if you're operating under stress.
Do not run pip uninstall litellm yet. You want the file system intact for forensics.
Step 2: Assess What the Malware Could Reach
Now that the server is isolated, figure out the blast radius. The .pth stealer in LiteLLM 1.82.7/1.82.8 targeted:
- Environment variables (all of them — the full env output)
- Files in ~/.aws, ~/.ssh, and ~/.config/gcloud
- Kubernetes service account tokens at /var/run/secrets/kubernetes.io/
- Files matching patterns like *.pem, *.key, id_rsa, and id_ed25519
- Crypto wallet files (wallet.json and keystore/ directories)
- CI/CD credential stores
Make a list of everything that lived in those locations on your server. Be paranoid. If something could have been there, assume it was read.
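A quick inventory of which targeted locations actually exist on the box turns "be paranoid" into a concrete list. A sketch; the paths mirror the list above, so extend it for your own layout:

```python
from pathlib import Path

# Locations the stealer targeted, per the list above
TARGETS = [
    "~/.aws", "~/.ssh", "~/.config/gcloud",
    "/var/run/secrets/kubernetes.io/serviceaccount",
    "~/.env",
]

def present_targets(paths=TARGETS) -> list:
    """Return the targeted paths that exist on this machine; assume each was read."""
    return [p for p in paths if Path(p).expanduser().exists()]

for p in present_targets():
    print("assume compromised:", p)
```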
For OpenClaw deployments specifically, check:
- Your config.json for any embedded API tokens
- Your agent workspace files for secrets that shouldn't be there (see the security checklist)
- Any .env files in the OpenClaw directory tree
This is also the moment you'll discover whether you followed the keep-secrets-out-of-agent-context guidance. If your SOUL.md or AGENTS.md contains raw API keys, those are now compromised.
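A regex sweep over the workspace tree will surface the obvious leaks. A sketch; the two patterns (AWS access key IDs and the sk- prefix several LLM providers use) are illustrative, not exhaustive:

```python
import re
from pathlib import Path

KEY_PATTERNS = re.compile(
    r"(AKIA[0-9A-Z]{16}"        # AWS access key ID format
    r"|sk-[A-Za-z0-9_-]{20,})"  # common LLM-provider secret key prefix
)

def scan_for_keys(root: str) -> list:
    """Return (path, truncated match) pairs for anything that looks like a key."""
    hits = []
    for f in Path(root).rglob("*"):
        if f.is_file():
            for match in KEY_PATTERNS.findall(f.read_text(errors="ignore")):
                hits.append((str(f), match[:12] + "..."))  # never log the full key
    return hits
```

Anything this flags in SOUL.md, AGENTS.md, or a workspace file goes straight onto the rotation list in the next step.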
Step 3: Rotate Everything That Was Reachable
Don't rotate selectively. Rotate everything the server could have read, in this order:
High-urgency (rotate in the next 30 minutes):
- Cloud provider API keys (AWS, GCP, Azure, DigitalOcean)
- SSH keys for the affected server and anything it could reach
- Your LLM provider API keys (Anthropic, OpenAI, OpenRouter)
- Any payment or financial API tokens
Medium-urgency (rotate within the hour):
- Database credentials
- Webhook signing secrets
- OAuth client secrets for connected services
- Any tokens in OpenClaw's config or agent workspaces
After rotation, revoke the old credentials — don't just issue new ones. The attacker may have already used the stolen keys to create additional access they control.
Step 4: Forensics — What Actually Happened?
Once credentials are rotated and the server is isolated, look at what the malware actually sent out. Check your server's network logs:
# Check current outbound connections before you block them
sudo ss -tupn state established
# or, if ss isn't available:
sudo netstat -an | grep ESTABLISHED
# Host logs rarely record individual connections, but check what you have:
sudo journalctl --since "2 days ago" | grep -i "connect"
If you have cloud-level flow logs (AWS VPC Flow Logs, GCP VPC Flow Logs), pull the outbound traffic from your instance for the window during which 1.82.7 or 1.82.8 was installed. Look for connections to IPs you don't recognize.
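Once you've exported the flow logs, a small filter against your known-good destinations narrows the haystack. A sketch assuming the AWS default flow log format, where the destination address and port are the fifth and seventh space-separated fields; positions differ if you customized the format:

```python
def unexpected_destinations(flow_lines, expected_ips):
    """Yield unique (dstaddr, dstport) pairs not in the expected IP set.

    Assumes the AWS VPC Flow Logs default record format:
    version account-id interface-id srcaddr dstaddr srcport dstport ...
    """
    seen = set()
    for line in flow_lines:
        fields = line.split()
        if len(fields) < 7:
            continue  # skip malformed or header lines
        dst, dstport = fields[4], fields[6]
        if dst not in expected_ips and (dst, dstport) not in seen:
            seen.add((dst, dstport))
            yield dst, dstport
```

Seed expected_ips with your LLM provider endpoints, package mirrors, and monitoring hosts; whatever is left is your review queue.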
The FutureSearch team found their exfil traffic was going to a single external IP. Yours may be the same — the malware author wanted broad collection, not targeted attacks.
Also check your LLM provider's usage dashboard. Unexpected API calls from your key after the infection window are a sign the key was used externally before you rotated it.
Step 5: Clean the Environment
Now you can safely remove the compromised packages:
# Remove litellm entirely
pip uninstall litellm -y
# Verify the .pth file is gone
python3 -c "import site, glob; [print(p) for p in glob.glob(site.getsitepackages()[0] + '/*.pth')]"
# Audit other installed packages for anything unexpected
pip list --format=columns | sort
If you're not confident the environment is clean, rebuild it from scratch. This is the safest option for production:
# Rebuild your virtual environment
deactivate
rm -rf ./venv
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt # from a known-good lockfile
This brings up a critical point: do you have a lockfile? A requirements.txt pinned to exact versions, a poetry.lock, or pip-compile output is what lets you confidently rebuild to a known-good state. If you don't have one, make creating it your first task after recovery.
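And a rebuild is only as good as the check you run afterwards. A sketch that diffs the live environment against a pinned requirements file; it assumes plain name==version lines, with no extras or environment markers:

```python
from importlib.metadata import distributions

def parse_lock(text: str) -> dict:
    """Parse simple 'name==version' lines; skip comments and anything unpinned."""
    pins = {}
    for line in text.splitlines():
        line = line.split("#")[0].strip()
        if "==" in line:
            name, _, ver = line.partition("==")
            pins[name.strip().lower()] = ver.strip()
    return pins

def drift(lock_text: str) -> dict:
    """Map package -> (installed version or None, locked version) where they differ."""
    installed = {}
    for dist in distributions():
        name = dist.metadata["Name"]
        if name:
            installed[name.lower()] = dist.version
    return {
        name: (installed.get(name), locked)
        for name, locked in parse_lock(lock_text).items()
        if installed.get(name) != locked
    }
```

An empty drift() result means the rebuilt environment matches the lockfile exactly; anything else needs an explanation before you go back online.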
Step 6: Harden Before You Come Back Online
Don't restore service in the same configuration. Use the downtime to apply the guardrails that would have reduced your exposure:
Remove secrets from Python environments. OpenClaw reads credentials at the gateway level, not inside Python scripts. If you had API keys in .env files or Python code, move them to your OpenClaw config's credential store and remove them from the filesystem.
Pin your Python dependencies. Every package in your AI stack should be pinned to an exact version. Version ranges like litellm>=1.80 are exactly how you end up on 1.82.8.
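You can enforce the pin rule mechanically in CI. A sketch that flags any requirement line that isn't an exact == pin; it only understands plain requirements syntax, not URLs or extras:

```python
import re

# A pinned line is exactly: package name, '==', version
PIN = re.compile(r"^[A-Za-z0-9._-]+==\S+$")

def unpinned(requirements_text: str) -> list:
    """Return requirement lines that are not exact '==' pins."""
    bad = []
    for line in requirements_text.splitlines():
        line = line.split("#")[0].strip()
        if line and not PIN.match(line):
            bad.append(line)
    return bad
```

Wire unpinned() into your deploy pipeline and fail the build whenever it returns anything.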
Minimise your dependency surface. OpenClaw's core doesn't depend on LiteLLM. If you added it for routing, consider whether you actually need it, or whether OpenClaw's built-in model routing handles your use case. Every external Python package is a potential supply chain attack vector — the fewer you have, the smaller your exposure.
Separate your agent server from anything sensitive. Your OpenClaw instance should not share a filesystem or network segment with your production databases, financial systems, or internal tools. A compromised agent server should have limited lateral movement paths.
Common Mistakes
- Investigating before containing. The first impulse is to figure out what happened. The first action should be cutting network access — every minute of investigation while the server is live is more potential exfiltration.
- Rotating selectively. Teams often rotate only the credentials they think were targeted. Rotate everything reachable from the server. You don't know what the malware actually read.
- Reinstalling the same package version. After the incident, teams sometimes reinstall from cached pip packages or old lock files that pin the malicious version. Verify exact versions explicitly.
- Forgetting to revoke old credentials. Issuing new API keys without revoking the compromised ones means the attacker still has working access. Revocation is not optional.
- Trusting "trusted" registries without verification. PyPI is not curated. Package names that look legitimate can be malicious. Check published dates, download counts, and source code for unfamiliar packages before installing.
Security Guardrails
- Never store API keys in agent workspace files. SOUL.md, AGENTS.md, and TOOLS.md are plain-text files — treat them as world-readable and put credentials only in environment variables or your OpenClaw config's encrypted store.
- Pin every dependency to an exact version using a lockfile (requirements.txt, poetry.lock, or pip-compile output). Version ranges are attack surface.
- Run your agent server with least-privilege network access. Outbound connections from your agent server should be explicitly allowlisted — not wide open. A malicious .pth file can't exfiltrate anything if outbound traffic is blocked by default.
- Audit third-party packages before adding them to your agent stack. Check publish date, source repository, and maintainer history. New packages with generic names and no history are a red flag.
Why OpenClaw's Architecture Limits the Damage
OpenClaw doesn't use LiteLLM in its core. Model routing happens at the gateway level, not inside a Python subprocess. This means the most common path to a LiteLLM dependency — using it as a model routing layer — isn't relevant for a standard OpenClaw deployment.
More importantly, OpenClaw's skill system uses allowlisted CLI executables, not arbitrary Python imports. When you add a new skill to an agent, you're granting access to a specific command, not importing a package that brings its own dependency tree. That's a structural difference from frameworks that let agents pull in arbitrary Python packages at runtime.
The AI agent supply chain attacks we've covered before all share a common pattern: code you didn't write running with the privileges of your agent server. Keeping that footprint small is the only durable defense.
If you want to generate a fresh OpenClaw workspace with security-first defaults already embedded — credential handling, sandboxing guidance, and minimal tool permissions all included — the wizard at OpenAgents.mom handles that in about five minutes.
Build an Agent Stack That Survives an Incident
The best time to audit your agent's security defaults is before something goes wrong. OpenAgents.mom generates OpenClaw workspace bundles with credential handling, sandboxing, and tool permissions configured safely from the start.