Observability

You can't secure what you can't see

Full visibility into every AI request — traces, sessions, cost, and security context — so you can debug, optimize, and protect your LLM stack.

< 5s · Live trace updates
90 days · Data retention
CSV · One-click export

Request Tracing

Every LLM request captured with full context.

Trace ID       Model        Latency    Tokens   Cost      Status
tr_7kL9mN...   gpt-4o       847ms      1,801    $0.0023
tr_8mP2qR...   claude-3.5   312ms      924      $0.0014
tr_2nQ5sT...   gpt-4o       1,203ms    3,412    $0.0051   ⚠ PII
tr_9rV3wX...   claude-3.5   428ms      1,156    $0.0018
tr_4tY6zA...   gpt-4o       695ms      2,048    $0.0031
Session Management

Group related requests into conversations.

10:23:45  How do I reset my password?      (gpt-4o · $0.002)
10:24:12  Can you help with my account?    (gpt-4o · $0.003)
10:25:01  What about my billing?           (gpt-4o · $0.002)

Analytics Dashboard

Visualize request volume, latency, and cost over time.
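Conceptually, these time-series views come down to bucketing traces by time and aggregating. A minimal sketch, assuming illustrative timestamps and field names (not Bastio's internal schema):

```python
from collections import defaultdict

# (unix_timestamp, latency_ms, cost_usd) per request -- illustrative data
traces = [
    (1718000100, 847, 0.0023),
    (1718000250, 312, 0.0014),
    (1718003700, 1203, 0.0051),
]

def bucket_by_hour(traces):
    """Aggregate request count, mean latency, and total cost per hour."""
    buckets = defaultdict(lambda: {"requests": 0, "latency_sum": 0, "cost": 0.0})
    for ts, latency_ms, cost in traces:
        hour = ts - ts % 3600  # truncate to the start of the hour
        b = buckets[hour]
        b["requests"] += 1
        b["latency_sum"] += latency_ms
        b["cost"] += cost
    return {
        hour: {
            "requests": b["requests"],
            "avg_latency_ms": b["latency_sum"] / b["requests"],
            "cost_usd": round(b["cost"], 4),
        }
        for hour, b in buckets.items()
    }

stats = bucket_by_hour(traces)
```

The same grouping generalizes to any bucket width (minute, day) by changing the modulus.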

What's captured

Every data point, every request

Each trace records the full context of an LLM request so you can debug, audit, and optimize without guessing.

Trace ID
Timestamp
Model used
Latency (ms)
Input tokens
Output tokens
Cost (USD)
Status code
Session ID
User ID
Security events
Full payloads
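Taken together, the fields above describe one trace record. A minimal sketch as a Python dataclass — the field names here are assumptions for illustration, not Bastio's actual schema:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Trace:
    """One captured LLM request. Field names are illustrative only."""
    trace_id: str                 # e.g. "tr_7kL9mN..."
    timestamp: float              # Unix epoch seconds
    model: str                    # e.g. "gpt-4o"
    latency_ms: int
    input_tokens: int
    output_tokens: int
    cost_usd: float
    status_code: int
    session_id: Optional[str] = None
    user_id: Optional[str] = None
    security_events: list[str] = field(default_factory=list)  # e.g. ["PII"]
    request_payload: dict = field(default_factory=dict)       # full request body
    response_payload: dict = field(default_factory=dict)      # full response body

trace = Trace("tr_7kL9mN", 1718000000.0, "gpt-4o", 847, 1200, 601, 0.0023, 200)
```

Session ID, user ID, security events, and payloads are optional because not every request carries them.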

Automatic Sessions

Provide a user parameter and Bastio groups requests into sessions automatically.

from openai import OpenAI

client = OpenAI()  # point the client at your Bastio proxy per your setup

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "How do I reset my password?"}],
    user="user_12345"  # Auto-session: requests with this ID are grouped
)

Explicit Sessions

Use the X-Session-ID header to group requests into named sessions.

curl -X POST .../chat/completions \
  -H "X-API-Key: YOUR_KEY" \
  -H "X-Session-ID: checkout-flow" \
  -d '{"model":"gpt-4o",...}'

Smart Filtering

Filter by model, cost, status, time range, or security events.
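These filters compose as predicates over trace records. A minimal sketch using plain dicts, with assumed field names (not Bastio's API):

```python
traces = [
    {"model": "gpt-4o", "cost_usd": 0.0051, "security_events": ["PII"]},
    {"model": "claude-3.5", "cost_usd": 0.0014, "security_events": []},
    {"model": "gpt-4o", "cost_usd": 0.0023, "security_events": []},
]

def matches(trace, model=None, min_cost=None, has_security_event=None):
    """Return True only if the trace satisfies every supplied filter."""
    if model is not None and trace["model"] != model:
        return False
    if min_cost is not None and trace["cost_usd"] < min_cost:
        return False
    if has_security_event is not None and bool(trace["security_events"]) != has_security_event:
        return False
    return True

# Combine filters: gpt-4o traces that triggered a security event
flagged = [t for t in traces if matches(t, model="gpt-4o", has_security_event=True)]
```

Each criterion narrows the result set independently, which is why adding a filter never returns more traces.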

Full-Text Search

Search across request and response content instantly.
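At its simplest, full-text search means matching a query against stored request and response content. A naive case-insensitive sketch for illustration (an indexed search engine would do this far faster; the data is made up):

```python
traces = [
    {"trace_id": "tr_7kL9mN", "request": "How do I reset my password?",
     "response": "Use the 'Forgot password' link on the sign-in page."},
    {"trace_id": "tr_8mP2qR", "request": "What about my billing?",
     "response": "Your latest invoice is available in Settings."},
]

def search(traces, query):
    """Return traces whose request or response contains the query (case-insensitive)."""
    q = query.lower()
    return [
        t for t in traces
        if q in t["request"].lower() or q in t["response"].lower()
    ]

hits = search(traces, "password")
```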

Payload Viewer

Inspect complete request and response data per trace.

Start observing your AI requests

Traces, sessions, and analytics included with every plan. No extra cost.