ReasonBlocks is a Python SDK that makes AI agents smarter and more reliable. It wraps your LangChain or OpenAI Agents application with a middleware layer that scores each reasoning step, detects when the agent is struggling, and injects targeted guidance, all without changing your agent's logic or message history.

Documentation Index
Fetch the complete documentation index at: https://reasonblocks.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
Quick Start
Add ReasonBlocks to your agent in under five minutes
How It Works
Understand FSM states, E-traces, and the monitoring pipeline
LangChain Guide
Step-by-step guide for LangChain and LangGraph agents
API Reference
Full reference for the ReasonBlocks class and all SDK exports
What ReasonBlocks does
ReasonBlocks sits between your agent loop and the LLM. On every step it:
- Scores the agent's reasoning for difficulty (hedging, errors, entity density)
- Classifies the run into a state — FAST, NORMAL, SLOW, or SKIP
- Monitors for unhealthy patterns like loops, repeated test failures, and edit thrashing
- Injects targeted E-trace guidance from a pattern store into the system prompt
- Routes the model call to a cheaper or more powerful model based on current difficulty
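The per-step pipeline above can be sketched in plain Python. Everything here is illustrative: the function names, thresholds, and model labels are assumptions that mirror the concepts (difficulty scoring, state classification, guidance injection, model routing), not the actual ReasonBlocks API — see the API Reference for the real class and exports.

```python
# Hypothetical sketch of one middleware step; not the ReasonBlocks API.
HEDGES = ("maybe", "not sure", "i think", "possibly")

def score_difficulty(reasoning: str, error_count: int) -> float:
    """Crude difficulty score from hedging language and recent errors."""
    text = reasoning.lower()
    hedge_hits = sum(text.count(h) for h in HEDGES)
    return min(1.0, 0.2 * hedge_hits + 0.3 * error_count)

def classify_state(score: float) -> str:
    """Map a difficulty score onto the FAST / NORMAL / SLOW / SKIP states."""
    if score < 0.25:
        return "FAST"
    if score < 0.6:
        return "NORMAL"
    if score < 0.9:
        return "SLOW"
    return "SKIP"  # agent looks stuck; stop burning tokens

def route_model(state: str) -> str:
    """Cheaper model on easy steps, stronger model when struggling."""
    return {"FAST": "haiku", "NORMAL": "sonnet",
            "SLOW": "opus", "SKIP": "none"}[state]

def wrap_step(reasoning: str, error_count: int, system_prompt: str):
    score = score_difficulty(reasoning, error_count)
    state = classify_state(score)
    if state in ("SLOW", "SKIP"):
        # Inject targeted guidance into the system prompt only,
        # leaving the agent's message history untouched.
        system_prompt += "\n[E-trace] Re-read the failing test before editing."
    return state, route_model(state), system_prompt

state, model, prompt = wrap_step("I think this maybe fails", 1, "You are an agent.")
```

The key design point this sketch preserves: guidance and routing decisions piggyback on state the middleware already computes, so the agent loop itself never changes.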
Key capabilities
E-Trace Injection
Inject instance-level, pattern-level, and universal guidance from a live pattern store
FSM State Machine
Track agent difficulty across FAST, NORMAL, SLOW, and SKIP states with hysteresis
Health Monitors
Detect loops, hedging, edit-revert thrashing, and test-repeat failures automatically
Model Routing
Route to Haiku on easy steps and Sonnet/Opus when the agent is struggling
Codebase Memory
Persist and recall per-repo findings semantically across agent runs
Token Saving
Compress stale tool outputs and nudge stuck agents to exit early
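The hysteresis mentioned under "FSM State Machine" can be illustrated with a small sketch. The class, thresholds, and `patience` parameter below are hypothetical stand-ins for whatever ReasonBlocks does internally; the point is only the mechanism: a transition fires after several consecutive readings agree, so a single noisy step cannot flip the agent between models.

```python
# Hypothetical hysteresis sketch; names and thresholds are illustrative.
class DifficultyFSM:
    def __init__(self, patience: int = 2):
        self.state = "NORMAL"
        self.patience = patience  # consecutive readings needed to move
        self._pending = None      # candidate state we are drifting toward
        self._streak = 0          # how many readings agreed with it

    def target_state(self, score: float) -> str:
        if score < 0.25:
            return "FAST"
        if score < 0.6:
            return "NORMAL"
        if score < 0.9:
            return "SLOW"
        return "SKIP"

    def update(self, score: float) -> str:
        target = self.target_state(score)
        if target == self.state:
            # Reading agrees with current state: cancel any pending move.
            self._pending, self._streak = None, 0
        elif target == self._pending:
            self._streak += 1
            if self._streak >= self.patience:
                self.state = target
                self._pending, self._streak = None, 0
        else:
            # New candidate state: start counting from one.
            self._pending, self._streak = target, 1
        return self.state

fsm = DifficultyFSM(patience=2)
fsm.update(0.8)  # one hard step: still NORMAL
fsm.update(0.8)  # second hard step in a row: escalate to SLOW
```

One blip in the other direction (say a single easy step while in SLOW) would likewise not de-escalate until `patience` consecutive easy readings arrive.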