Recursive Language Models (RLMs) are a new interface for LLMs, with cool ideas by Alex Zhang!
⚠️ LLMs struggle with long prompts → attention overload & lost info
🔄 RLMs inspect, split & call themselves on chunks, then aggregate the results
✅ Handles millions of tokens, reduces noise, improves reasoning
💡 A system prompt guides the recursion
🎯 RLM trajectories can be used for RL training or distillation (OpenEnv + TRL!!)
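The split-recurse-aggregate loop above can be sketched in a few lines. This is a minimal illustration of the pattern only, not the actual RLM implementation; `call_model` is a hypothetical stand-in for a real LLM API call:

```python
# Minimal sketch of the recursive pattern: if the prompt fits, answer
# directly; otherwise split it, recurse on each chunk, then aggregate.
# `call_model` is a hypothetical stand-in for a real LLM API call.

def call_model(prompt: str) -> str:
    # Placeholder "model": truncates to 50 chars as a mock answer.
    return prompt[:50]

def rlm(prompt: str, max_len: int = 200) -> str:
    if len(prompt) <= max_len:
        return call_model(prompt)          # base case: prompt fits in context
    mid = len(prompt) // 2
    left = rlm(prompt[:mid], max_len)      # recurse on each half
    right = rlm(prompt[mid:], max_len)
    # Aggregate: ask the model to combine the partial answers.
    return call_model(left + " " + right)

print(len(rlm("x" * 10_000)))  # the final answer stays bounded regardless of input size
```

A real setup would also let the model *inspect* the prompt and pick split points itself (the system prompt guiding the recursion), rather than blindly halving.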