Documentation Index

Fetch the complete documentation index at: https://reasonblocks.mintlify.app/llms.txt

Use this file to discover all available pages before exploring further.

ReasonBlocks is a Python SDK that makes AI agents smarter and more reliable. It wraps your LangChain or OpenAI Agents application with a middleware layer that scores each reasoning step, detects when the agent is struggling, and injects targeted guidance — all without changing your agent’s logic or message history.

Quick Start

Add ReasonBlocks to your agent in under five minutes

How It Works

Understand FSM states, E-traces, and the monitoring pipeline

LangChain Guide

Step-by-step guide for LangChain and LangGraph agents

API Reference

Full reference for the ReasonBlocks class and all SDK exports

What ReasonBlocks does

ReasonBlocks sits between your agent loop and the LLM. On every step it:
  1. Scores the agent’s reasoning for difficulty (hedging, errors, entity density)
  2. Classifies the run into a state — FAST, NORMAL, SLOW, or SKIP
  3. Monitors for unhealthy patterns like loops, repeated test failures, and edit thrashing
  4. Injects targeted E-trace guidance from a pattern store into the system prompt
  5. Routes the model call to a cheaper or more powerful model based on current difficulty
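The per-step pipeline above can be sketched in a few lines. This is purely illustrative: the scoring heuristics, thresholds, and function names below are assumptions for demonstration, not the SDK's actual internals.

```python
# Illustrative sketch of ReasonBlocks' per-step pipeline (NOT the real internals).
# Scoring heuristics and thresholds here are made-up assumptions.
import re

HEDGE_WORDS = {"maybe", "perhaps", "possibly", "unsure", "might"}

def score_difficulty(reasoning: str) -> float:
    """Crude difficulty score from hedging and error mentions (0.0-1.0)."""
    words = re.findall(r"[a-z']+", reasoning.lower())
    if not words:
        return 0.0
    hedges = sum(w in HEDGE_WORDS for w in words)
    errors = reasoning.lower().count("error")
    return min(1.0, (hedges + 2 * errors) / max(len(words) / 10, 1))

def classify(score: float) -> str:
    """Map a difficulty score to an FSM state (thresholds are illustrative)."""
    if score < 0.2:
        return "FAST"
    if score < 0.5:
        return "NORMAL"
    if score < 0.9:
        return "SLOW"
    return "SKIP"
```

Confident, clean reasoning classifies as FAST; hedging and error mentions push the run toward SLOW or SKIP, which then drives guidance injection and model routing.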
1. Install the SDK

```bash
pip install reasonblocks
```
2. Initialize ReasonBlocks

```python
from reasonblocks import ReasonBlocks

rb = ReasonBlocks(api_key="rb_live_...")
```
3. Add middleware to your agent

```python
agent = create_agent(
    model="anthropic:claude-sonnet-4-20250514",
    tools=[...],
    system_prompt="...",
    middleware=[rb.middleware()],
)
```
4. Run your agent normally

ReasonBlocks operates transparently. Your agent code is unchanged — steering happens inside the middleware.

Key capabilities

E-Trace Injection

Inject instance-level, pattern-level, and universal guidance from a live pattern store

FSM State Machine

Track agent difficulty across FAST, NORMAL, SLOW, and SKIP states with hysteresis
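Hysteresis means the state does not flip on a single noisy reading. A minimal sketch, assuming a state only changes after two consecutive steps vote the same way (the streak length and mechanics are illustrative, not the SDK's):

```python
# Sketch of difficulty tracking with hysteresis: a state changes only after
# `streak` consecutive steps propose the same new state. State names match
# the docs; everything else is an illustrative assumption.
class DifficultyFSM:
    def __init__(self, streak: int = 2):
        self.state = "NORMAL"
        self.streak = streak
        self._pending = None
        self._count = 0

    def update(self, proposed: str) -> str:
        """Move to `proposed` only after `streak` consecutive votes for it."""
        if proposed == self.state:
            self._pending, self._count = None, 0
        elif proposed == self._pending:
            self._count += 1
            if self._count >= self.streak:
                self.state = proposed
                self._pending, self._count = None, 0
        else:
            self._pending, self._count = proposed, 1
        return self.state
```

One hard step does not demote a FAST run to SLOW; a sustained pattern does. That keeps routing and guidance injection stable across transient noise.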

Health Monitors

Detect loops, hedging, edit-revert thrashing, and test-repeat failures automatically
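A loop monitor of the kind described can be sketched as a sliding window over tool calls; the window size and repeat threshold below are assumptions for illustration:

```python
# Minimal loop-detector sketch: flags when the agent repeats the same tool
# call with identical arguments within a sliding window. Window size and
# threshold are illustrative assumptions, not the SDK's defaults.
from collections import deque

class LoopMonitor:
    def __init__(self, window: int = 6, threshold: int = 3):
        self.recent = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, tool: str, args: str) -> bool:
        """Record a tool call; return True once a loop is detected."""
        key = (tool, args)
        self.recent.append(key)
        return self.recent.count(key) >= self.threshold
```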

Model Routing

Route to Haiku on easy steps and Sonnet/Opus when the agent is struggling

Codebase Memory

Persist and recall per-repo findings semantically across agent runs
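As a toy model of per-repo recall, the sketch below ranks stored findings by keyword overlap with a query. Semantic recall in the SDK presumably uses embeddings; this scoring is purely a stand-in for illustration:

```python
# Toy sketch of per-repo memory with keyword-overlap recall. The real SDK
# presumably embeds findings; this overlap score is illustrative only.
class RepoMemory:
    def __init__(self):
        self.notes: dict[str, list[str]] = {}

    def remember(self, repo: str, finding: str) -> None:
        self.notes.setdefault(repo, []).append(finding)

    def recall(self, repo: str, query: str, k: int = 1) -> list[str]:
        """Return the k findings sharing the most words with the query."""
        q = set(query.lower().split())
        scored = sorted(
            self.notes.get(repo, []),
            key=lambda n: len(q & set(n.lower().split())),
            reverse=True,
        )
        return scored[:k]
```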

Token Saving

Compress stale tool outputs and nudge stuck agents to exit early
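Compressing stale tool outputs can be sketched as truncating everything outside the last few turns; the cutoff, character budget, and stub format below are assumptions, not the SDK's actual policy:

```python
# Sketch of stale tool-output compression: tool outputs outside the most
# recent turns are truncated to a stub. Cutoff and stub format are
# illustrative assumptions.
def compress_stale(messages: list[dict], keep_last: int = 2,
                   max_chars: int = 80) -> list[dict]:
    """Truncate long tool outputs except in the last `keep_last` messages."""
    out = []
    for i, msg in enumerate(messages):
        stale = i < len(messages) - keep_last
        if stale and msg.get("role") == "tool" and len(msg["content"]) > max_chars:
            msg = {**msg, "content": msg["content"][:max_chars] + " …[truncated]"}
        out.append(msg)
    return out
```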