feat(tool): add rlm_repl tool for recursive LLM pattern #8555
base: dev
Conversation
This adds a new experimental tool that enables true Recursive Language Model (RLM) capabilities, allowing the LLM to write code that programmatically invokes sub-LLM calls in loops rather than requiring explicit tool calls.

Based on the RLM paper (https://arxiv.org/html/2512.24601v1), this addresses two key limitations of typical sub-agent implementations:

1. The model can't write O(N) sub-calls as individual tool calls (it can't verbalize that many).
2. Long prompts can't fit in context (pointer-based access is needed).

The tool provides:

- sub_llm(prompt, agent?) - invoke a sub-LLM call
- sub_llm_parallel(prompts[], agent?) - parallel sub-LLM calls
- context.store/load/chunk/keys - pointer-based data access

Enabled via OPENCODE_EXPERIMENTAL_RLM_REPL=true or OPENCODE_EXPERIMENTAL=true.

Fixes anomalyco#8554

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
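The loop-of-sub-calls pattern the description refers to could look like the following sketch. Here `sub_llm` is a local stub standing in for the sandbox-provided function, so the shape of the pattern can be seen without the actual tool; only the `sub_llm(prompt, agent?)` signature comes from the PR, everything else is illustrative:

```javascript
// Illustrative sketch of the O(N) sub-call pattern. `sub_llm` is a
// local stub here; inside rlm_repl it would be provided by the
// sandbox and perform a real sub-LLM call.
async function sub_llm(prompt) {
  return `[summary of: ${prompt.slice(0, 30)}]`; // stub response
}

async function summarizeAll(documents) {
  const results = [];
  for (const doc of documents) {
    // One sub-LLM call per document: too many to verbalize as
    // individual tool calls, but trivial to express as a loop.
    results.push(await sub_llm(`Summarize: ${doc}`));
  }
  return results.join("\n");
}
```

The point of the pattern is that the number of sub-calls scales with the data, not with how many tool invocations the model can spell out.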
The following comment was made by an LLM and may be inaccurate: No duplicate PRs found

do we have to invoke anything special to trigger this?
@androolloyd Great question! To enable the tool, set the environment variable before running OpenCode:

```sh
OPENCODE_EXPERIMENTAL_RLM_REPL=true opencode
```

Or enable all experimental features at once:

```sh
OPENCODE_EXPERIMENTAL=true opencode
```

Once enabled, the LLM can use the rlm_repl tool to write code such as:

```js
// Process chunks in parallel
const results = await sub_llm_parallel(
  chunks.map(c => `Analyze this: ${c}`)
)
return results.join("\n")
```

The tool becomes available automatically - no special invocation syntax needed. The LLM will choose to use it when the task benefits from programmatic sub-LLM calls rather than individual tool invocations. Let me know if you have any other questions!
Note on Sandboxing

To clarify how the current sandbox works and its limitations:

Current Implementation (No External Dependencies)

The sandbox passes an explicit allowlist of globals as parameters to new Function():

```js
const sandbox = {
  sub_llm,
  sub_llm_parallel,
  context,
  JSON, Array, Object, String, Number, Boolean, Date, Math, Promise, Map, Set, RegExp, Error,
  parseInt, parseFloat, isNaN, isFinite, encodeURIComponent, decodeURIComponent,
  // Explicitly blocked:
  setTimeout: undefined,
  setInterval: undefined,
  fetch: undefined,
  require: undefined,
  eval: undefined,
  Function: undefined,
}
const asyncFn = new Function(...sandboxKeys, wrappedCode)
```

Limitation: This is NOT True Sandboxing

A clever payload can escape via the constructor chain:

```js
const F = [].constructor.constructor
const evil = F('return process')()
evil.exit(1) // Escapes sandbox
```

The code acknowledges this at lines 247-248:
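The escape works because any object created inside the sandboxed code, such as an array literal, still links back to the real Function constructor through its prototype chain, so shadowing the `Function` name with `undefined` is not enough. A runnable demonstration (the helper name `runSandboxed` is illustrative, not the PR's code):

```javascript
// Demonstrates that shadowing globals via new Function parameters
// does not block the constructor-chain escape described above.
function runSandboxed(code) {
  // Minimal "sandbox" that shadows dangerous names with undefined.
  const sandbox = { fetch: undefined, require: undefined, Function: undefined };
  const keys = Object.keys(sandbox);
  const fn = new Function(...keys, `"use strict"; return (${code});`);
  return fn(...keys.map((k) => sandbox[k]));
}

// Benign code sees the shadowed names:
const blocked = runSandboxed("typeof Function"); // "undefined" inside the sandbox
// But the constructor chain still reaches the real Function:
const escaped = runSandboxed("[].constructor.constructor('return process')()");
// `escaped` is the real Node `process` object, despite the shadowing.
```

This is why parameter shadowing can only be treated as a convenience layer, not a security boundary.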
Options for Stronger Isolation
Current Assessment

For an experimental feature behind a flag, this is acceptable as a proof-of-concept. The other safeguards (50 call limit, 5min timeout, 10MB context cap, restricted sub-agent permissions) provide additional protection. However, if this moves toward production, it should be hardened with one of the stronger isolation options above.
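The call-count safeguard mentioned above is straightforward to enforce by wrapping the sub-LLM entry point. A hedged sketch of the idea; the function names and the way the limit is applied here are assumptions for illustration, not the PR's actual implementation:

```javascript
// Illustrative sketch of a sub-LLM call budget, similar in spirit
// to the PR's 50-call limit. Names and wiring are assumptions,
// not the actual rlm_repl code.
function withCallLimit(fn, limit = 50) {
  let calls = 0;
  return async (...args) => {
    if (++calls > limit) {
      throw new Error(`sub-LLM call limit of ${limit} exceeded`);
    }
    return fn(...args);
  };
}

// Usage with a stubbed sub-LLM and a small limit for demonstration:
const sub_llm = withCallLimit(async (prompt) => `echo: ${prompt}`, 2);
```

Because the wrapper closes over the counter, sandboxed code cannot reset it; it can only spend the budget.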
standardizing around strong isolation is probably a net good
I have tried this pull request using OPENCODE_EXPERIMENTAL=true opencode
Summary

This PR adds a new experimental rlm_repl tool that enables true Recursive Language Model (RLM) capabilities in OpenCode.

Fixes #8554
Background
Based on the RLM paper, this addresses two key limitations of typical sub-agent implementations:

1. The model can't write O(N) sub-calls as individual tool calls (it can't verbalize that many).
2. Long prompts can't fit in context (pointer-based access is needed).
What This PR Adds
A new built-in tool that allows the LLM to write JavaScript code that programmatically invokes sub-LLM calls.
Available Functions
- sub_llm(prompt, agent?)
- sub_llm_parallel(prompts[], agent?)
- context.store(key, data)
- context.load(key)
- context.chunk(key, size)
- context.keys()

Example: Processing Large Data
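A sketch of the large-data pattern follows. The `context.*` store and `sub_llm_parallel` are stubbed locally so the pattern is self-contained; only the function signatures come from the PR, the stub bodies and the chunk size are assumptions:

```javascript
// Sketch of the "Processing Large Data" pattern. context.* and
// sub_llm_parallel are local stubs; inside rlm_repl the sandbox
// provides the real implementations.
const store = new Map();
const context = {
  store: (key, data) => store.set(key, data),
  load: (key) => store.get(key),
  chunk: (key, size) => {
    const data = store.get(key);
    const chunks = [];
    for (let i = 0; i < data.length; i += size) chunks.push(data.slice(i, i + size));
    return chunks;
  },
  keys: () => [...store.keys()],
};
const sub_llm_parallel = async (prompts) =>
  Promise.all(prompts.map(async (p) => `[analysis of ${p.length} chars]`));

async function analyzeLargeText(text) {
  context.store("doc", text); // pointer-based storage, not in-prompt text
  const chunks = context.chunk("doc", 1000); // 1000-char pieces (illustrative size)
  const results = await sub_llm_parallel(
    chunks.map((c) => `Analyze this: ${c}`)
  );
  return results.join("\n");
}
```

The large input never has to fit in any single prompt: it lives behind the "doc" pointer and only chunk-sized slices reach each sub-call.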
Security
How to Enable
```sh
OPENCODE_EXPERIMENTAL_RLM_REPL=true opencode
# or
OPENCODE_EXPERIMENTAL=true opencode
```

Files Changed
- packages/opencode/src/tool/rlm-repl.ts - New tool implementation
- packages/opencode/src/tool/registry.ts - Register the tool
- packages/opencode/src/flag/flag.ts - Add experimental flag

Testing
Tested that the code compiles without syntax errors. The tool uses the same SessionPrompt.prompt() pattern that TaskTool uses for spawning sub-agents.

Notes

The sandbox is built with new Function(), which is a simplified approach. For production hardening, consider using isolated-vm or vm2 for stronger isolation.