Providers

One gateway, every AI provider

Connect to 9 providers and 50+ models through a single integration. Direct API, cloud gateways, enterprise platforms, and self-hosted — all with unified security and zero vendor lock-in.

OpenAI
Anthropic
Google Gemini
Mistral AI
DeepSeek
Vertex AI
Azure AI Foundry
AWS Bedrock
Ollama

9 providers supported
50+ models available
4 credential modes

Provider Routing

9 providers, one unified API.

Provider    | Type        | Top Models                          | Status
OpenAI      | Direct API  | GPT-5.2, GPT-4o, o1                 | ✓ healthy
Anthropic   | Direct API  | Opus 4.6, Sonnet 4.5, Haiku 3.5     | ✓ healthy
Google      | Direct API  | Gemini 3 Pro, 3 Flash, 2.5 Pro      | ✓ healthy
Mistral     | Direct API  | Large, Small, Codestral             | ✓ healthy
DeepSeek    | Direct API  | Chat, Reasoner                      | ✓ healthy
Vertex AI   | Multi-vendor| Gemini + Claude + Mistral + Llama   | ✓ healthy
Azure AI    | Multi-vendor| GPT-4o + Llama + Mistral + DeepSeek | ✓ healthy
AWS Bedrock | Cloud       | Claude Opus, Sonnet, Haiku          | ✓ healthy
Ollama      | Self-hosted | Llama, Mistral, Qwen, Phi           | ✓ healthy
Credential Modes

Flexible authentication for every environment.

BYOK (Bring Your Own Key)

Use your own API keys. Keep your enterprise pricing and negotiated rates.

Platform-Managed

Use Bastio-managed credentials. No API keys to manage or rotate.

Cloud IAM

GCP Service Account, AWS IAM, or Azure AD credentials for enterprise SSO.

Self-Hosted

Connect to local Ollama instances. Zero external API calls, complete data control.

Integration Categories

Four ways to connect, from direct API to self-hosted.

Category         | Providers   | Description                          | Models
Direct API       | 5 providers | Connect directly with API keys       | 50+ models across OpenAI, Anthropic, Google, Mistral, DeepSeek
Multi-Vendor     | 2 providers | Multiple AI vendors, one credential  | Vertex AI (4 vendors) and Azure AI Foundry (5 vendors)
Enterprise Cloud | 1 provider  | AWS-native governance and compliance | Claude models via AWS Bedrock with IAM integration
Self-Hosted      | 1 provider  | On-premises, air-gapped deployment   | Any Ollama-compatible model, zero API costs

What's included

Every provider, fully integrated

All providers connected through Bastio get automatic security scanning, caching, failover, and observability with no extra configuration.

Streaming support for all providers
BYOK and platform-managed credentials
Unified security scanning
Automatic provider failover
Circuit breaker protection
OpenAI SDK compatibility
Cost tracking per provider
Rate limiting per key
PII redaction in transit
Model-level routing
Response caching
Request/response logging
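The failover idea above can be sketched client-side in a few lines (Bastio applies this server-side; `ProviderError` and the stub below are hypothetical names used only for illustration):

```python
class ProviderError(Exception):
    """Raised when an upstream provider call fails."""

def complete_with_failover(call, models):
    """Try call(model) for each model in order; return the first success."""
    last_error = None
    for model in models:
        try:
            return call(model)
        except ProviderError as err:
            last_error = err  # remember the failure, fall through to the next model
    raise last_error

# Usage with a stub provider that fails for the primary model:
def stub_provider(model):
    if model == "gpt-4o":
        raise ProviderError("upstream 503")
    return f"ok:{model}"

result = complete_with_failover(stub_provider, ["gpt-4o", "claude-sonnet-4-5"])
print(result)  # → ok:claude-sonnet-4-5
```

A circuit breaker extends the same loop by skipping models whose recent error rate crosses a threshold, rather than retrying them on every request.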

Switch Providers

Change one line to route through a different provider. Same SDK, same code.

from openai import OpenAI

client = OpenAI(
    api_key="sk-your-bastio-key",
    base_url="https://api.bastio.com/v1/guard/px_..."
)

response = client.chat.completions.create(
    model="gpt-4o",        # OpenAI
    # model="claude-sonnet-4-5",  # Anthropic
    # model="gemini-3-pro",       # Google
    messages=[{"role": "user", "content": "Hello"}],
    stream=True
)

for chunk in response:
    if chunk.choices[0].delta.content:  # final chunks can carry no content
        print(chunk.choices[0].delta.content, end="")

Anthropic SDK

Native Anthropic SDK support. Point your base URL at Bastio and use the Claude API natively.

import anthropic

client = anthropic.Anthropic(
    api_key="sk-your-bastio-key",
    base_url="https://api.bastio.com/v1/guard/px_..."
)

message = client.messages.create(
    model="claude-sonnet-4-5",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Hello, Claude"}
    ]
)

print(message.content[0].text)

Unified Security Layer

Every provider, every model protected by the same threat detection, PII scanning, and jailbreak prevention.

Bring Your Own Keys

Keep your negotiated enterprise pricing. Store keys encrypted, rotate without code changes.

Zero Lock-in

Switch providers with a config change. Compare performance and cost across vendors in real time.

Start routing to any AI provider today

9 providers included with every plan. Connect your own API keys or use platform-managed credentials.