

make_langchain_tools wraps a CodebaseMemory instance and an optional ImportGraph in a list of LangChain @tool-decorated functions. Pass the returned list directly to create_agent or a LangGraph node alongside your own tools. The tools share the memory and graph objects you provide, so findings stored during one step are immediately available to the next recall.
from reasonblocks.integrations import make_langchain_tools

Installation

The LangChain integration requires langchain-core. Install it alongside the ReasonBlocks SDK:
pip install reasonblocks langchain-core

make_langchain_tools

make_langchain_tools(
    memory: CodebaseMemory | None = None,
    graph: ImportGraph | None = None,
    *,
    recall_top_k: int = 5,
    recall_threshold: float = 0.25,
    enable_recall: bool = True,
    enable_store: bool = True,
    enable_impact: bool = True,
) -> list
Returns a list of LangChain Tool objects. The list length depends on which flags are enabled and which arguments are provided: up to three tools (recall_findings, store_finding, impact_analysis).

Parameters

memory
CodebaseMemory | None
default:"None"
The CodebaseMemory instance the tools read from and write to. Bind one instance per agent run. When None, both recall_findings and store_finding are omitted from the returned list regardless of enable_recall and enable_store.
graph
ImportGraph | None
default:"None"
Optional ImportGraph for the repository. When provided and enable_impact is True, an impact_analysis tool is added to the list. Pass None to omit the tool.
recall_top_k
int
default:"5"
Maximum number of findings to return from a single recall_findings call. Increasing this value returns more context but uses more tokens.
recall_threshold
float
default:"0.25"
Minimum similarity score (0–1) a finding must reach to be included in recall results. Lower values return more results with potentially lower relevance; higher values return fewer, higher-confidence results.
enable_recall
bool
default:"true"
Include the recall_findings tool. Set to False to produce a write-only or impact-only tool set.
enable_store
bool
default:"true"
Include the store_finding tool. Set to False for read-only scenarios where the agent should not persist new observations.
enable_impact
bool
default:"true"
Include the impact_analysis tool when a graph is provided. Set to False to suppress it even when graph is not None.
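The omission rules above can be sketched in plain Python. This is illustrative only: expected_tool_names is a hypothetical helper mirroring the documented behavior, not part of the SDK.

```python
def expected_tool_names(memory, graph, *,
                        enable_recall=True, enable_store=True, enable_impact=True):
    """Mirror the documented rules for which tools make_langchain_tools returns."""
    names = []
    # recall/store require a memory instance regardless of their flags.
    if memory is not None and enable_recall:
        names.append("recall_findings")
    if memory is not None and enable_store:
        names.append("store_finding")
    # impact_analysis requires a graph and enable_impact=True.
    if graph is not None and enable_impact:
        names.append("impact_analysis")
    return names

print(expected_tool_names(None, object()))  # ['impact_analysis']
print(expected_tool_names(object(), object(), enable_store=False))
# ['recall_findings', 'impact_analysis']
```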

Returns

tools
list[Tool]
A list of LangChain Tool objects. Safe to concatenate with your own tool list: tools=[*rb_tools, *your_tools].

Tools

recall_findings(query)

Searches CodebaseMemory for findings relevant to query. The agent should call this before reading a file — if findings already exist, it can skip the file read entirely.
query
str
Natural-language description of what you are looking for
Returns a formatted string containing matching findings, or a message indicating nothing was found.
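How recall_top_k and recall_threshold interact can be sketched with a toy ranking. This is a simplified model of the documented filtering, not the SDK's actual similarity search:

```python
def select_findings(scored, *, top_k=5, threshold=0.25):
    """Keep findings scoring at or above the threshold, best-first, capped at top_k."""
    ranked = sorted(scored, key=lambda pair: pair[1], reverse=True)
    kept = [finding for finding, score in ranked if score >= threshold]
    return kept[:top_k]

scored = [("uses JWT auth", 0.81), ("README typo", 0.10), ("retries on 503", 0.30)]
print(select_findings(scored, top_k=2, threshold=0.25))
# ['uses JWT auth', 'retries on 503']
```

Lowering the threshold admits "README typo" at 0.10; lowering top_k to 1 keeps only the best match.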

store_finding(content, file_path, finding_type)

Persists a new finding to CodebaseMemory so future agent runs can recall it. Store small, self-contained facts rather than long paragraphs.
content
str
The finding text (under 8,000 characters)
file_path
str
default:""
Repo-relative path the finding is about, if applicable
finding_type
str
default:"note"
Short tag: bug, behavior, pattern, or note
Returns "stored (id=<fid>)" on success or "store failed" on error.
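A caller-side sanity check matching the constraints above might look like this. check_finding is a hypothetical helper; the SDK may apply its own validation internally:

```python
ALLOWED_TYPES = {"bug", "behavior", "pattern", "note"}

def check_finding(content, finding_type="note"):
    """Reject oversized findings; fall back to the default tag for unknown types."""
    if len(content) > 8000:
        raise ValueError("finding text must stay under 8,000 characters")
    if finding_type not in ALLOWED_TYPES:
        finding_type = "note"
    return content, finding_type

print(check_finding("retry loop caps at 5 attempts", "behavior"))
# ('retry loop caps at 5 attempts', 'behavior')
```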

impact_analysis(file_path)

Queries the ImportGraph to return the dependents (files that import this file) and dependencies (files this file imports). Use it to judge the blast radius of a proposed change.
file_path
str
Repo-relative path, e.g. "pydantic/main.py"
Returns a formatted string describing the file’s dependents and dependencies.
impact_analysis is only present in the returned list when you pass a non-None graph and enable_impact=True. If you conditionally build your tool list, check len(rb_tools) rather than assuming a fixed index.
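Conceptually, dependents and dependencies are the two directions of the same import edge. A toy model with plain dicts (not the real ImportGraph API) makes the distinction concrete:

```python
# file -> list of files it imports
imports = {
    "pkg/api.py":  ["pkg/main.py"],
    "pkg/cli.py":  ["pkg/main.py"],
    "pkg/main.py": ["pkg/db.py"],
}

def dependencies(path):
    """Files this file imports."""
    return sorted(imports.get(path, []))

def dependents(path):
    """Files that import this file -- the blast radius of changing it."""
    return sorted(f for f, deps in imports.items() if path in deps)

print(dependents("pkg/main.py"))    # ['pkg/api.py', 'pkg/cli.py']
print(dependencies("pkg/main.py"))  # ['pkg/db.py']
```

Editing pkg/main.py risks breaking both api.py and cli.py, while editing pkg/db.py affects only main.py's direct behavior plus everything upstream of it.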

Complete example

from reasonblocks import ReasonBlocks
from reasonblocks.codebase_memory import CodebaseMemory
from reasonblocks.import_graph import ImportGraph
from reasonblocks.integrations import make_langchain_tools
from langchain.agents import create_openai_tools_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

import pathlib

rb     = ReasonBlocks(api_key="rb_live_...")
memory = CodebaseMemory(codebase_id="my-repo")
graph  = ImportGraph().build_from_files(
    {str(p): p.read_text() for p in pathlib.Path("myrepo").rglob("*.py")}
)

rb_tools = make_langchain_tools(
    memory,
    graph,
    recall_top_k=8,
    recall_threshold=0.3,
)

your_tools = []  # add your own @tool-decorated functions here
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a careful code reviewer."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

llm      = ChatOpenAI(model="gpt-4o")
agent    = create_openai_tools_agent(llm, [*rb_tools, *your_tools], prompt)
executor = AgentExecutor(agent=agent, tools=[*rb_tools, *your_tools])

with rb.middleware(run_id="run-1", agent_name="reviewer", task="review PR #42") as mw:
    result = executor.invoke({"input": "Review the changes in PR #42"})
Call make_langchain_tools once per agent run, not once per application start. This ensures memory is bound to the correct run-scoped CodebaseMemory instance.