Monitor AI model calls—inputs, outputs, latency, and errors—across services in real time.
30-day free trial of paid features. No credit card required to start.


Record structured inputs/outputs with optional field-level redaction.
Track p50/p90/p99, error codes, and failure hotspots across endpoints.
Attribute token usage and spend to models, teams, and routes.
Compare error rate, latency, and cost across model providers.
Set thresholds with dedupe; route alerts to Slack, Telegram, or Webhooks.
Invite teammates, share dashboards, and audit configuration changes.
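The p50/p90/p99 latency tracking above can be sketched in a few lines. This is an illustrative stand-in, not the product's implementation — AI-Traces computes these percentiles for you across endpoints:

```python
# Minimal nearest-rank percentile sketch over recorded call latencies
# (illustrative only; AI-Traces aggregates these server-side).
import math

def percentile(latencies_ms, p):
    """Nearest-rank percentile of a list of latencies in milliseconds."""
    ordered = sorted(latencies_ms)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

latencies = [120, 95, 310, 87, 1450, 102, 230, 98, 105, 640]
for p in (50, 90, 99):
    print(f"p{p}: {percentile(latencies, p)} ms")
```

The spread between p50 and p99 is what surfaces failure hotspots: a healthy median can hide a long tail of slow or failing calls.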
Integrate AI-Traces to capture prompts, responses, latency, tokens, and costs from server routes and workers.
View documentation
Add AI-Traces to Python services to stream inputs/outputs and performance metrics with minimal overhead.
View documentation
Follow our quickstart guide to integrate AI-Traces into your service.
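Conceptually, the integration wraps each model call to capture its inputs, output, latency, and errors. The sketch below is a runnable stand-in for what such an SDK wrapper does — the real AI-Traces initialization and names differ, and `trace_call` here is hypothetical:

```python
# Illustrative stand-in for an AI-Traces-style call wrapper: record
# inputs, output, latency, and errors per call. `trace_call` and
# RECORDS are hypothetical; the real SDK ships records to a backend.
import functools
import time

RECORDS = []  # stand-in for the SDK's export pipeline

def trace_call(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        record = {"name": fn.__name__, "inputs": {"args": args, "kwargs": kwargs}}
        try:
            result = fn(*args, **kwargs)
            record["output"] = result
            return result
        except Exception as exc:
            record["error"] = repr(exc)
            raise
        finally:
            record["latency_ms"] = (time.perf_counter() - start) * 1000
            RECORDS.append(record)
    return wrapper

@trace_call
def generate(prompt):
    # placeholder for a real model call
    return f"echo: {prompt}"

generate("hello")
```

Because the wrapper records in a `finally` block, latency and error details are captured even when the model call raises.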
View setup instructions
Prompts/inputs, responses/outputs, latency, token usage, cost, and error details per request.
Node.js (JavaScript/TypeScript) and Python SDKs with simple initialization.
Enable field-level redaction and masking; TLS enforced in transit.
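Field-level redaction means masking sensitive keys before a payload leaves your service. A minimal sketch of the idea, assuming hypothetical field names and mask (the real configuration is set in AI-Traces, not hand-rolled):

```python
# Sketch of field-level redaction on a nested payload (illustrative;
# the field names and mask value are assumptions, not product config).
REDACT_FIELDS = {"email", "api_key", "ssn"}

def redact(payload, mask="[REDACTED]"):
    """Return a copy of a (possibly nested) dict with sensitive fields masked."""
    out = {}
    for key, value in payload.items():
        if key in REDACT_FIELDS:
            out[key] = mask
        elif isinstance(value, dict):
            out[key] = redact(value, mask)
        else:
            out[key] = value
    return out

event = {"prompt": "summarize", "user": {"email": "a@b.co", "plan": "pro"}}
print(redact(event))
# → {'prompt': 'summarize', 'user': {'email': '[REDACTED]', 'plan': 'pro'}}
```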
Yes, configure thresholds and route alerts to Slack, Telegram, or Webhooks with deduplication.
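Threshold alerting with deduplication works by firing at most once per dedupe window for the same alert key. A minimal sketch under assumed parameters (the window length and key format are illustrative, and delivery to Slack/Telegram/Webhooks is stubbed out):

```python
# Sketch of threshold alerting with dedupe: suppress repeats of the
# same alert key inside a window. Window and keys are assumptions.
import time

DEDUPE_WINDOW_S = 300  # suppress repeats of the same alert for 5 minutes
_last_fired = {}

def maybe_alert(key, value, threshold, now=None):
    """Return True if an alert should fire for `key`, applying dedupe."""
    now = time.time() if now is None else now
    if value < threshold:
        return False
    last = _last_fired.get(key)
    if last is not None and now - last < DEDUPE_WINDOW_S:
        return False  # duplicate within the window; suppressed
    _last_fired[key] = now
    return True  # here the real system would post to Slack/Telegram/webhook

print(maybe_alert("latency_p99:/chat", 2400, 2000, now=0))  # → True
```

Keying alerts per metric and route keeps a single noisy endpoint from silencing unrelated alerts.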
Need help integrating AI-Traces or have questions?
Contact us