Trace every LLM call, track cost across models, version prompts, and get alerts before users do. Auto-wraps popular model SDKs, and works with any provider via manual tracing.
ZappyBee Beta is free while we collect feedback. Fair-use monthly limits apply to prevent abuse.
import { ZappyBee } from "zappybee";
import Anthropic from "@anthropic-ai/sdk";

ZappyBee.init({
  apiKey: process.env.ZAPPYBEE_API_KEY || "tc_live_...",
  baseUrl: process.env.ZAPPYBEE_BASE_URL, // optional
});

const client = new Anthropic();
ZappyBee.wrap(client); // auto-traces all calls

const response = await client.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 512,
  messages: [{ role: "user", content: "Hello!" }],
});

From prototype to production. Know exactly what your agents are doing, how much they cost, and when something breaks.
Full trace timeline: LLM calls, tool calls, reasoning steps, inputs/outputs, tokens, latency, and cost.
Claude, GPT, Gemini, Grok, Mistral, Llama, DeepSeek, and more. Auto-wrap the Anthropic and OpenAI SDKs, or use manual tracing for anything else.
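The manual tracing surface isn't shown on this page, so the sketch below uses hypothetical startTrace, step, and end names purely to illustrate the shape of recording a call from an unwrapped provider:

import { ZappyBee } from "zappybee";

// Hypothetical manual-tracing API: startTrace, step, and end are
// illustrative names, not confirmed SDK methods.
const trace = ZappyBee.startTrace({ name: "support-agent" });

const step = trace.step({
  type: "llm",
  model: "my-self-hosted-llama", // any provider, even self-hosted
  input: { messages: [{ role: "user", content: "Hello!" }] },
});

// Call the provider however you normally would, then record the outcome:
step.end({
  output: "Hi! How can I help?",
  tokens: { input: 12, output: 9 },
  latencyMs: 840,
});

trace.end();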
Model usage, daily spend, error rate, average duration, and top models across your agents and projects.
Version prompts, ship changes safely, and compare versions against real usage and outcomes.
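As a sketch of what fetching a versioned prompt could look like in code (the prompts.get method and its options are assumptions, not documented SDK surface):

// Hypothetical prompt-registry call; names and options are assumptions.
const prompt = await ZappyBee.prompts.get("support-greeting", {
  label: "production", // or pin an exact version, e.g. { version: 3 }
});

const response = await client.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 512,
  messages: [{ role: "user", content: prompt.render({ name: "Ada" }) }],
});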
Trigger alerts on error rate, cost, and latency. Get notified via email or Slack webhook.
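A minimal sketch of defining such an alert in code, assuming a hypothetical alerts.create method (alerts may equally be configured in the dashboard):

// Hypothetical alert definition; the shape below is illustrative only.
await ZappyBee.alerts.create({
  name: "error-spike",
  metric: "error_rate", // also: "cost", "latency"
  threshold: 0.05,      // fire when errors exceed 5%
  window: "15m",
  channels: ["email", "slack"],
});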
Send HMAC-signed events for traces, steps, and alert triggers to your incident pipeline.
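Verifying the signature on the receiving end looks roughly like this with Node's built-in crypto module; the header name and exact signing scheme (HMAC-SHA256 over the raw body, hex-encoded) are assumptions to confirm against your webhook settings:

import { createHmac, timingSafeEqual } from "node:crypto";

// Recompute the HMAC over the raw request body and compare it to the
// signature header in constant time. Header name and encoding are assumed.
function verifyWebhook(rawBody: string, signature: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  return a.length === b.length && timingSafeEqual(a, b);
}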
Invite teammates, create shareable dashboards, and enforce retention policies for production readiness.
One SDK, plus integrations for popular frameworks.