Recursive Language Models (RLMs) are a task-agnostic inference paradigm that lets language models (LMs) handle near-infinite-length contexts by enabling the LM to programmatically examine, decompose, and recursively query its context.
Unlike traditional RAG (Retrieval-Augmented Generation), an RLM treats the document context as an external variable in a Python REPL environment. The LM never sees the full document; instead, it writes code that inspects, slices, and spawns recursive LM calls over the relevant pieces of that variable.
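The loop described above can be sketched as follows. This is a minimal illustration under assumed names (`stub_root_lm`, `recursive_lm`, `rlm_answer` are all hypothetical, not the actual RLM API): the document lives only as a variable in a REPL namespace, and the root LM, stubbed here with scripted actions, emits Python snippets that peek at the context and recurse over slices of it.

```python
def recursive_lm(snippet: str) -> str:
    """Stand-in for a fresh sub-LM call over a small slice of context."""
    # A real system would send `snippet` to an LM; here we just label it.
    return f"summary({len(snippet)} chars)"


def stub_root_lm(observation: str, step: int) -> str:
    """Stand-in for the root LM: returns Python code to run in the REPL."""
    # Hypothetical scripted trajectory in place of real LM generations.
    scripted = [
        "result = context[:40]",                      # peek at the head
        "result = recursive_lm(context[1000:2000])",  # recurse on a slice
        "FINAL = result",                             # commit final answer
    ]
    return scripted[step]


def rlm_answer(context: str, max_steps: int = 3) -> str:
    # The full document is a REPL variable, never part of the prompt.
    env = {"context": context, "recursive_lm": recursive_lm, "result": ""}
    observation = f"context loaded: {len(context)} chars"
    for step in range(max_steps):
        code = stub_root_lm(observation, step)
        exec(code, env)  # run the LM-written snippet in the REPL namespace
        if "FINAL" in env:
            return env["FINAL"]
        # Feed back only a truncated view of the result, not the document.
        observation = str(env["result"])[:200]
    return str(env.get("result", ""))


print(rlm_answer("x" * 5000))  # -> summary(1000 chars)
```

The key design point the sketch captures is that the root model's prompt only ever contains metadata and truncated REPL outputs, so the context length the root LM must attend over stays bounded regardless of document size.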