Papers
arxiv:2601.18116

FABLE: Forest-Based Adaptive Bi-Path LLM-Enhanced Retrieval for Multi-Document Reasoning

Published on Jan 26 · Submitted by sunlin on Jan 28
Abstract

FABLE is a forest-based adaptive bi-path retrieval framework that enhances LLM-based information retrieval through hierarchical indexing and structured evidence acquisition, achieving superior performance with reduced token usage compared to traditional RAG methods.

AI-generated summary

The rapid expansion of long-context Large Language Models (LLMs) has reignited debate on whether Retrieval-Augmented Generation (RAG) remains necessary. However, empirical evidence reveals persistent limitations of long-context inference, including the lost-in-the-middle phenomenon, high computational cost, and poor scalability for multi-document reasoning. Conversely, traditional RAG systems, while efficient, are constrained by flat chunk-level retrieval that introduces semantic noise and fails to support structured cross-document synthesis. We present FABLE, a Forest-based Adaptive Bi-path LLM-Enhanced retrieval framework that integrates LLMs into both knowledge organization and retrieval. FABLE constructs LLM-enhanced hierarchical forest indexes with multi-granularity semantic structures, then employs a bi-path strategy combining LLM-guided hierarchical traversal with structure-aware propagation for fine-grained evidence acquisition, with explicit budget control for adaptive efficiency trade-offs. Extensive experiments demonstrate that FABLE consistently outperforms SOTA RAG methods and achieves comparable accuracy to full-context LLM inference with up to 94% token reduction, showing that long-context LLMs amplify rather than fully replace the need for structured retrieval.

Community

Paper author · Paper submitter

With the rise of 1M+ context windows in Gemini and Claude, the biggest debate in AI right now is: "Do we still need RAG, or should we just dump everything into the prompt?"

Today's pick, FABLE: Forest-Based Adaptive Bi-Path LLM-Enhanced Retrieval for Multi-Document Reasoning, provides a compelling answer: Long context is great, but structured navigation is better (and much cheaper).

What makes FABLE stand out from the sea of RAG papers is that it reimagines the LLM's role. It stops treating the LLM as just a "reader" and turns it into a "librarian":

Forests > Flat Chunks: Instead of the traditional "chunk + vector search" which loses global context, FABLE uses LLMs to pre-build Hierarchical Knowledge Forests. This allows the system to actively "zoom in" for granular details or "zoom out" for high-level synthesis depending on the query.

The Bi-Path Innovation: It doesn't rely on just one retrieval method. It runs a Bi-Path Strategy: one path uses LLM reasoning to navigate the document structure (symbolic/logic), and the other uses vector propagation (semantic similarity). This hybrid approach captures subtle connections that vector databases often miss.
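One plausible way to picture the fusion of the two paths is a weighted score merge followed by budget-constrained selection. This is a sketch under assumptions, not the paper's actual algorithm: `traversal_scores` stands in for the LLM-guided structural path, `vector_scores` for the similarity path, and the budget mirrors the paper's explicit token-budget control.

```python
def bi_path_retrieve(traversal_scores: dict[str, float],
                     vector_scores: dict[str, float],
                     passage_tokens: dict[str, int],
                     token_budget: int,
                     alpha: float = 0.5) -> list[str]:
    """Illustrative fusion of two retrieval paths under a token budget.

    alpha weights the structural (LLM-traversal) path against the
    vector-similarity path; passages are then picked greedily by fused
    score until the token budget is exhausted.
    """
    fused: dict[str, float] = {}
    for pid in set(traversal_scores) | set(vector_scores):
        fused[pid] = (alpha * traversal_scores.get(pid, 0.0)
                      + (1 - alpha) * vector_scores.get(pid, 0.0))

    selected, used = [], 0
    for pid in sorted(fused, key=fused.get, reverse=True):
        cost = passage_tokens.get(pid, 0)
        if used + cost <= token_budget:
            selected.append(pid)
            used += cost
    return selected
```

Note how a passage found by only one path still competes: a chunk the vector path misses can still win on structural evidence, which is exactly the kind of "subtle connection" the hybrid design is meant to capture.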

Insane Efficiency: The results are the real hook here. FABLE achieves the same reasoning accuracy as full-context inference (517k tokens) while using only ~31k tokens. That is a 94% reduction in compute/cost without sacrificing quality.
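The 94% figure checks out against the reported token counts:

```python
# Sanity check of the reported savings: full-context inference uses
# ~517k tokens vs. ~31k for FABLE.
full_context_tokens = 517_000
fable_tokens = 31_000
reduction = 1 - fable_tokens / full_context_tokens
print(f"{reduction:.0%}")  # → 94%
```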

Why read this? If you are working on Agents or Multi-Document QA, you know that "Lost-in-the-middle" is a real pain. FABLE proves that giving LLMs a map (the semantic forest) is more effective than just giving them a bigger backpack (context window). It beats graph-based baselines like HippoRAG and is a refreshing take on how to scale reasoning without scaling costs.

A must-read for anyone trying to optimize RAG pipelines for complex tasks!

