The authors use as the environment a Python REPL that can itself call other instances of the LM. The prompt is manipulated programmatically as a Python variable in the REPL.
The motivation is for the LM to use Python commands, including commands that call other LM instances, to figure out how best to modify the context at inference time.
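Roughly, I'd picture the loop like this sketch (llm_call and the "FINAL:" stop convention are my guesses at the shape of the thing, not the authors' actual code):

    def llm_call(prompt: str) -> str:
        """Stub for a call to the underlying LM (e.g. GPT-5-mini)."""
        raise NotImplementedError

    def rlm(query: str, long_context: str) -> str:
        # The huge context lives as a REPL variable; the root LM only
        # ever sees the slices it chooses to inspect.
        env = {"context": long_context, "llm_call": llm_call}
        transcript = (f"Task: {query}\n"
                      "You may run Python; `context` holds the full input.")
        while True:
            step = llm_call(transcript)      # root LM emits Python
            if step.startswith("FINAL:"):    # assumed termination signal
                return step[len("FINAL:"):]
            try:
                # Expressions only in this toy version,
                # e.g. llm_call(context[:2000])
                result = eval(step, env)
            except Exception as exc:
                result = repr(exc)
            transcript += f"\n>>> {step}\n{result}"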
The results from early testing look impressive at first glance: an RLM wrapping GPT-5-mini outperforms GPT-5 by a wide margin on long-context tasks, at significantly lower cost.
I've added this to my reading list.
See e.g. https://textbooks.cs.ksu.edu/cc210/16-recursion/08-recursion...
EDIT: makes me think of many computation systems in various substrates, and how they work. Focus vs distraction/creativity. ADHD workers in hierarchies of capitalism, purpose of breadth vs depth of exploration at various levels of the stack, who's at the "top" and why, etc etc
It’s not relying on the LM context much. You can generally code away for an hour before you run out of context and have to run a compression step or just start fresh.
> Lastly, in our experiments we only consider a recursive depth of 1 — i.e. the root LM can only call LMs, not other RLMs.
> but we felt that for most modern “long context” benchmarks, a recursive depth of 1 was sufficient to handle most problems.
I don't think an algorithm with a call stack of size 2 should be regarded as 'recursive'.
It feels a little disingenuous to call it a Recursive Language Model when the recursive depth of the study was only 1.
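To make the complaint concrete, here's the difference as I see it (llm_call, summarize_with_llm, and LIMIT are made-up stand-ins, not names from the paper):

    LIMIT = 10_000  # context length that fits comfortably in one call

    # What the paper evaluates: two fixed levels, no self-reference.
    def root(query, context):
        notes = summarize_with_llm(context)  # one layer of plain LM sub-calls
        return llm_call(query + notes)

    # What "recursive" usually implies: self-reference down to a base case.
    def rlm(query, context):
        if len(context) < LIMIT:             # base case: answer directly
            return llm_call(query + context)
        mid = len(context) // 2
        left = rlm(query, context[:mid])     # sub-problems may recurse again
        right = rlm(query, context[mid:])
        return llm_call(query + left + right)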
1. Recursion is used to break down the large context and dispatch pieces to different LLM calls that extract the useful context (see the sketch below).
2. This can mean longer test-time execution on large contexts (even with parallelism in deep recursion), and the monetary cost can increase rapidly.
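A rough sketch of point 1, which also shows why point 2 follows (CHUNK, llm_call, and the prompts are my assumptions): each chunk costs one extra sub-call, so spend grows with context length even when the calls run in parallel.

    from concurrent.futures import ThreadPoolExecutor

    CHUNK = 50_000  # characters per sub-call; arbitrary for illustration

    def llm_call(prompt: str) -> str:
        raise NotImplementedError  # stand-in for a real LM API call

    def answer(query: str, context: str) -> str:
        chunks = [context[i:i + CHUNK] for i in range(0, len(context), CHUNK)]
        # Parallelism hides latency but not cost: one sub-call per chunk.
        with ThreadPoolExecutor() as pool:
            notes = list(pool.map(
                lambda c: llm_call(f"{query}\n\nExtract anything relevant:\n{c}"),
                chunks))
        # ...plus a final synthesis call over the gathered notes.
        return llm_call(f"{query}\n\nNotes from sub-calls:\n" + "\n".join(notes))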
I think it’s a different idea from using RAG or manually maintaining a context window. Correct me if I'm wrong.
Please correct me if I'm wrong... is this just a subagent architecture?
It simply hopes two drunks are more coherent than one drunk.
It existed even before the rise of LLMs.
The authors may want to consider a more specific name.