# ToolPulse

> Agent tool reliability monitoring. One-line decorator wraps any AI agent tool (MCP, function, API). Records latency, success/failure, fingerprints response shape, detects schema drift before your agent acts on bad data, and runs synthetic health checks.

ToolPulse helps engineers building AI agents detect and prevent silent tool failures. The Python and TypeScript SDKs are open source (MIT). The hosted backend offers a free tier (100K calls/month).

## Getting started

- [Quickstart guide](https://toolpulse.pages.dev/docs/quickstart.md): Install the SDK, add the @monitor decorator, and see calls in the dashboard within 60 seconds.
- [PyPI package: toolpulse](https://pypi.org/project/toolpulse/): pip install toolpulse
- [npm package: toolpulse](https://www.npmjs.com/package/toolpulse): npm install toolpulse
- [GitHub repository](https://github.com/toolpulse/toolpulse): Source, examples, and integrations.

## Integrations

- [LangChain integration](https://toolpulse.pages.dev/docs/langchain.md)
- [LlamaIndex integration](https://toolpulse.pages.dev/docs/llamaindex.md)
- [MCP integration](https://toolpulse.pages.dev/docs/mcp.md)
- [OpenAI SDK integration](https://toolpulse.pages.dev/docs/openai.md)
- [Anthropic SDK integration](https://toolpulse.pages.dev/docs/anthropic.md)

## Live data

- [Public status page](https://toolpulse.pages.dev/status.md): Real latency and uptime for popular LLM tools we monitor.
- [State of LLM Tools weekly report](https://toolpulse.pages.dev/blog/state-of-llm-tools.md)

## Comparisons

- [ToolPulse vs Langfuse](https://toolpulse.pages.dev/compare/langfuse.md): LLM engineering platform with traces and evals.
- [ToolPulse vs Helicone](https://toolpulse.pages.dev/compare/helicone.md): LLM observability via proxy with prompt-level analytics.
- [ToolPulse vs Arize Phoenix](https://toolpulse.pages.dev/compare/arize-phoenix.md): OSS LLM observability and tracing.
- [ToolPulse vs AgentOps](https://toolpulse.pages.dev/compare/agentops.md): Agent observability with session replay.
- [ToolPulse vs Langtrace](https://toolpulse.pages.dev/compare/langtrace.md): OpenTelemetry-based LLM tracing.

## Recent comparison posts

- [ToolPulse vs Langfuse: when to pick which](https://toolpulse.pages.dev/blog/toolpulse-vs-langfuse-when-to-pick-which.md): An honest side-by-side comparison: Langfuse for prompt traces and evals, ToolPulse for tool-call reliability and schema drift. Where they overlap, where they don't, and which to choose.

## Recent technical deep-dives

- [Why schema drift is the silent killer of agent reliability](https://toolpulse.pages.dev/blog/why-schema-drift-is-the-silent-killer-of-agent-reliability.md): An API changes a field from int to string. Your agent doesn't crash; it just silently makes worse decisions. Here's how schema drift propagates through tool chains, and how to detect it before users see the consequences.

## Recent case studies

- [The 3am drift event: how a popular search API quietly changed shape and what we caught](https://toolpulse.pages.dev/blog/the-3am-drift-event-tool-x-quietly-changed-shape.md): A real drift event from our own monitored agent stack. A search tool added a new top-level field and removed an inner one, and our agent started giving worse answers for two hours, until the alert fired.

## Pricing

- Indie (free): 100K calls/month, 10 tools, 7-day retention
- Pro ($149/mo): 1M calls/month, 50 tools, 90-day retention, schema drift alerts
- Team ($499/mo): unlimited calls, unlimited tools, SSO, custom retention
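The core idea behind the decorator described above (record latency and success/failure, fingerprint the response shape, flag drift) can be sketched in plain Python. This is an illustrative toy, not the ToolPulse SDK: the `monitor`, `fingerprint`, and `DriftError` names here are hypothetical, and the real @monitor decorator presumably reports to the dashboard rather than raising locally.

```python
import functools
import json
import time

def fingerprint(value, depth=0, max_depth=5):
    """Summarize a response's shape (keys and value types), ignoring values."""
    if depth >= max_depth:
        return "..."
    if isinstance(value, dict):
        return {k: fingerprint(v, depth + 1, max_depth) for k, v in value.items()}
    if isinstance(value, list):
        return [fingerprint(value[0], depth + 1, max_depth)] if value else []
    return type(value).__name__

class DriftError(Exception):
    """Raised when a tool's response shape diverges from its baseline."""

def monitor(fn, _baselines={}):  # toy in-memory baseline store
    """Record latency, then compare the response's shape fingerprint
    against the first shape ever seen for this tool."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)  # a real SDK would also record failures here
        latency_ms = (time.perf_counter() - start) * 1000
        shape = json.dumps(fingerprint(result), sort_keys=True)
        baseline = _baselines.setdefault(fn.__name__, shape)
        if shape != baseline:
            raise DriftError(
                f"{fn.__name__}: response shape drifted (call took {latency_ms:.1f} ms)"
            )
        return result
    return wrapper

# Simulate an upstream API quietly changing a field from float to str.
responses = [
    {"results": [{"title": "a", "score": 1.0}]},
    {"results": [{"title": "b", "score": "1.0"}]},  # score: float -> str
]

@monitor
def search(query):
    return responses.pop(0)
```

The first `search()` call establishes the baseline shape; the second returns the same keys with a changed value type, so the fingerprint no longer matches and `DriftError` fires even though nothing crashed.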