Secure Your AI Agents Before They Execute
Real-time tool call validation, behavioral analysis, and human-in-the-loop approvals. Stop prompt injections, prevent data exfiltration, and maintain complete control over your AI agents.
<100ms
Scanning latency for real-time tool validation
50+
Threat patterns detected across 6 categories
6
Enforcement actions: allow, block, approve, rate limit, sanitize, warn
Comprehensive Agent Security
Six layers of protection for AI agents that use tools, from real-time scanning to human approval workflows.
Tool Call Validation
Every tool call scanned before execution (see the request sketch after this list):
- Shell injection detection
- Credential exposure prevention
- Risk scoring (0-1.0)
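A minimal sketch of what a validation request can look like, using the /v1/guard/{proxyID}/agent/validate endpoint shown further down. The base URL, proxy ID, and the response fields (decision, risk_score) are illustrative assumptions, not a documented schema.
// Sketch only: base URL, proxy ID, and response shape are assumptions.
const BASTIO_URL = "https://api.bastio.example";   // hypothetical base URL
const proxyID = "proxy_123";                        // hypothetical proxy ID

async function validateToolCall(name: string, args: Record<string, unknown>) {
  const res = await fetch(`${BASTIO_URL}/v1/guard/${proxyID}/agent/validate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ tool_calls: [{ name, arguments: args }] }),
  });
  // Assumed response fields: decision ("allow" | "block" | "approve") and risk_score (0-1.0)
  return (await res.json()) as { decision: string; risk_score: number };
}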
Policy Engine
Flexible rules for any use case (example rules after this list):
- Allow/Block/Approve actions
- Rate limiting & sanitization
- Priority-based evaluation
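As a sketch of how priority-ordered rules could be expressed; the field names here are illustrative assumptions, not the product's documented schema, while the actions mirror the six enforcement actions above.
// Illustrative rule shapes only; evaluated lowest priority number first.
const policyRules = [
  { priority: 10, match: { tool: "execute_shell" }, action: "approve" },                  // needs human sign-off
  { priority: 20, match: { tool: "http_request" }, action: "rate_limit", perMinute: 60 }, // throttle outbound calls
  { priority: 90, match: { tool: "*" }, action: "allow" },                                // default: allow
];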
Chain Analysis
Detect multi-step attack patterns (sketch after this list):
- Data exfiltration chains
- Reconnaissance patterns
- Privilege escalation
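A hedged sketch of what a multi-step pattern could look like, here a read-secrets-then-send-out exfiltration chain; the structure is an assumption for illustration, not Bastio's actual chain format.
// Assumption: a chain pattern as an ordered list of tool-call matchers.
const exfiltrationChain = {
  name: "read-then-exfiltrate",
  steps: [
    { tool: "read_file", argsMatch: { path: /\.env|id_rsa|credentials/ } },  // step 1: touch secrets
    { tool: "http_request", argsMatch: { method: "POST" } },                 // step 2: send them out
  ],
  action: "block",
};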
Anomaly Detection
Learn and detect unusual behavior (illustration after this list):
- Baseline learning (30+ samples)
- Time-of-day analysis
- Risk spike detection
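As a rough illustration of the baseline idea only (not Bastio's actual model): once roughly 30 samples exist, a call whose risk score sits far above the learned mean can be flagged as a spike.
// Generic baseline + spike illustration, not the product's algorithm.
function isRiskSpike(history: number[], current: number, minSamples = 30): boolean {
  if (history.length < minSamples) return false;            // still learning the baseline
  const mean = history.reduce((a, b) => a + b, 0) / history.length;
  const variance = history.reduce((a, b) => a + (b - mean) ** 2, 0) / history.length;
  return current > mean + 3 * Math.sqrt(variance);          // spike: > 3 standard deviations
}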
Human-in-the-Loop
Approval workflows for sensitive actions:
- Email/Slack/Teams notifications
- Approval groups & escalation
- Complete audit trail
Agent Identity
Cryptographic authentication (signing sketch after this list):
- Ed25519 key pairs
- Trust levels & permissions
- Key rotation & revocation
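Ed25519 itself is standard; a minimal Node.js sketch of generating an agent key pair and signing a request body follows. How Bastio expects the signature to be attached to a request is not shown here and would follow the SDK docs.
import { generateKeyPairSync, sign, verify } from "node:crypto";

// One-time: generate an Ed25519 key pair for the agent (store the private key securely).
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Sign the raw request body so the gateway can verify which agent sent it.
const body = Buffer.from(JSON.stringify({ tool_calls: [{ name: "write_file" }] }));
const signature = sign(null, body, privateKey);    // Ed25519 takes no digest algorithm
console.log("valid:", verify(null, body, publicKey, signature));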
How It Works
Three simple steps to protect your AI agents. Works with OpenAI, Anthropic, and any tool-calling framework.
Agent Makes Tool Call
Your agent decides to execute a tool like execute_shell or write_file.
Bastio Validates
Real-time scanning, policy evaluation, behavioral analysis - all in under 100ms.
Allow, Block, or Approve
Safe tools execute immediately. Dangerous ones are blocked or escalated to humans, as sketched below.
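Put together, an integration can branch on the verdict; the decision strings mirror the enforcement actions above, while the field names and callbacks are assumptions for illustration.
// Sketch: acting on a verdict. Field names and the run/escalate callbacks are placeholders.
type Verdict = { decision: "allow" | "block" | "approve"; risk_score: number };

async function guardAndRun(verdict: Verdict, run: () => Promise<void>, escalate: () => Promise<void>) {
  switch (verdict.decision) {
    case "allow":
      await run();            // safe: execute immediately
      break;
    case "approve":
      await escalate();       // route to a human approver
      break;
    case "block":
      console.warn(`Blocked tool call (risk ${verdict.risk_score})`);
      break;
  }
}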
6 Threat Categories Detected
Comprehensive threat detection specifically designed for AI agent tool calls.
Shell Injection
Malicious command execution.
File Access
Unauthorized file operations.
Network Abuse
Data exfiltration patterns.
Prompt Injection
LLM manipulation in tool args.
Privilege Escalation
Unauthorized access attempts.
Data Exfiltration
Sensitive data leakage.
Works With Your Existing Code
Native support for OpenAI Tools API and Anthropic Claude Tool Use. No framework changes required.
OpenAI Tools API
// Works with existing OpenAI format
POST /v1/guard/{proxyID}/agent/openai-tools
{
  "tools": [{
    "type": "function",
    "function": {
      "name": "execute_shell",
      "arguments": "{\"command\": \"ls -la\"}"
    }
  }]
}
Anthropic Claude Tool Use
// Native Claude tool_use support
POST /v1/guard/{proxyID}/agent/validate
{
  "tool_calls": [{
    "name": "write_file",
    "arguments": {
      "path": "/tmp/output.txt",
      "content": "Hello world"
    }
  }]
}
Agent Security SDK
Get early access to our TypeScript and Python SDKs for seamless agent security integration. Beta testers get direct engineering support and free Pro tier access.
No spam. Unsubscribe anytime.
Built For Every Agent
Customer Support Agents
Prevent data leakage, block unauthorized database queries, require approval for refunds.
Coding Assistants
Block malicious shell commands, prevent credential theft, control file system access.
Research & RAG Agents
Scan retrieved content for injection, block data exfiltration, rate limit API calls.
Autonomous Business Agents
Human approval for transactions, compliance audit trails, behavioral monitoring.
Enterprise Ready
Custom Chain Patterns
Define your own attack sequences
Approval Routing
Risk-based escalation rules
Slack/Teams Integration
Approvals where you work
Agent Identity
Cryptographic authentication
Protect Your AI Agents Today
Start with our free tier - full agent security with 1,000 API requests per month. Scale up as your agents grow.
Need help securing your agents? Contact us for a free consultation.