---
tags:
- pyterrier
- pyterrier-artifact
- pyterrier-artifact.corpus_graph
- pyterrier-artifact.corpus_graph.np_topk
task_categories:
- text-retrieval
viewer: false
---

# ragwiki-corpusgraph

## Description

A corpus graph for the Wikipedia corpus (`rag:nq_wiki`), built using the `ragwiki-terrier` sparse index. The graph encodes document similarity by connecting each document to its `k=16` nearest neighbours, as scored by a `bm25` retriever over that index.

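
Conceptually, the `np_topk` format stores the graph as a dense `doc_count × k` table of neighbour ids. The following is a toy numpy sketch of that layout, to illustrate the idea only; it is not the artifact's actual storage code:

```python
import numpy as np

k = 2
# Row i holds the internal ids of document i's k highest-scoring neighbours.
edges = np.array([
    [1, 3],  # neighbours of doc 0
    [0, 2],  # neighbours of doc 1
    [1, 4],  # neighbours of doc 2
    [0, 4],  # neighbours of doc 3
    [2, 3],  # neighbours of doc 4
], dtype=np.uint32)

def neighbours(docid: int, limit: int = k) -> list[int]:
    """Look up the first `limit` stored neighbours of `docid`."""
    return edges[docid, :limit].tolist()

print(neighbours(2))     # [1, 4]
print(neighbours(2, 1))  # [1]
```

Because the rows are score-ordered, restricting lookups to a smaller `limit` is cheap; this is the same idea as `to_limit_k` shown below.
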
## Usage

```python
# Load the artifact
import pyterrier as pt

artifact = pt.Artifact.from_hf('astappiev/ragwiki-corpusgraph')
```

### Usage with GAR

```python
import pyterrier as pt
from pyterrier_adaptive import GAR, NpTopKCorpusGraph
from pyterrier_t5 import MonoT5ReRanker

# First-stage BM25 retrieval over the ragwiki-terrier sparse index
sparse_index: pt.terrier.TerrierIndex = pt.Artifact.from_hf("pyterrier/ragwiki-terrier")
retriever = pt.rewrite.tokenise() >> sparse_index.bm25(include_fields=["docno", "text", "title"]) >> pt.rewrite.reset()

# MonoT5 scorer; qid and docno are cast to str before the text is loaded
get_text = sparse_index.text_loader(["docno", "text", "title"])
prepare_text = pt.apply.generic(lambda df: df.assign(qid=df["qid"].map(str), docno=df["docno"].map(str))) >> get_text
scorer = prepare_text >> MonoT5ReRanker(verbose=False, batch_size=64)

# Load the corpus graph, keeping only the top 8 of the 16 stored neighbours
graph: NpTopKCorpusGraph = pt.Artifact.from_hf("astappiev/ragwiki-corpusgraph").to_limit_k(8)

pipeline = retriever >> GAR(scorer, graph) >> get_text
pipeline.search("hello world")
```

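
For intuition, GAR-style adaptive retrieval alternates between scoring batches from the initial ranking and batches of corpus-graph neighbours of the best documents scored so far. The following is a heavily simplified, self-contained sketch of that idea with hypothetical helper signatures; it is not the `pyterrier_adaptive` implementation:

```python
def adaptive_rerank(initial_ranking, neighbours, score, budget=5, batch_size=2):
    """Alternate between scoring a batch from the initial ranking and a
    batch of corpus-graph neighbours of the best documents scored so far."""
    scored = {}                      # docid -> score
    initial = list(initial_ranking)  # frontier fed by the first-stage retriever
    graph_frontier = []              # frontier fed by the corpus graph
    use_graph = False
    while len(scored) < budget and (initial or graph_frontier):
        source = graph_frontier if (use_graph and graph_frontier) else initial
        batch, source[:] = source[:batch_size], source[batch_size:]
        for d in batch:
            if d not in scored:
                scored[d] = score(d)
        if scored:
            # neighbours of the current best document join the graph frontier
            best = max(scored, key=scored.get)
            graph_frontier += [n for n in neighbours(best)
                               if n not in scored and n not in graph_frontier]
        use_graph = not use_graph
    return sorted(scored, key=scored.get, reverse=True)

# Toy corpus: doc 0 scores highest but is absent from the initial ranking;
# the graph surfaces it via doc 1's neighbourhood.
scores = {0: 0.9, 1: 0.8, 2: 0.1, 3: 0.7, 4: 0.2, 5: 0.05}
graph_edges = {1: [0, 2], 3: [1, 5], 4: [2]}
result = adaptive_rerank([3, 1, 4], lambda d: graph_edges.get(d, []), scores.get)
print(result)  # [0, 1, 3, 4, 2]
```

The point of the graph is visible in the toy run: document 0, missed by the first-stage retriever, is reached through the neighbourhood of a highly-scored document.
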
## Benchmarks

*TODO: Provide benchmarks for the artifact.*

## Reproduction

This graph was constructed using [PyTerrier Adaptive](https://github.com/terrierteam/pyterrier_adaptive).

```python
import pyterrier as pt
from pyterrier_adaptive import NpTopKCorpusGraph

build_graph_k = 16  # number of neighbours to store per document
slurm_cpus = 16     # BM25 retrieval threads; set to the CPUs available

dataset: pt.datasets.Dataset = pt.get_dataset("rag:nq_wiki")
sparse_index: pt.terrier.TerrierIndex = pt.Artifact.from_hf("pyterrier/ragwiki-terrier")
bm25 = pt.rewrite.tokenise() >> sparse_index.bm25(include_fields=["docno", "text", "title"], threads=slurm_cpus) >> pt.rewrite.reset()

# Retrieve k+1 results per document so the document itself can be
# dropped from its own neighbour list
graph = NpTopKCorpusGraph.from_retriever(
    bm25 % (build_graph_k + 1),
    dataset.get_corpus_iter(),
    "../index/ragwiki-corpusgraph",
    k=build_graph_k,
    batch_size=65_536,
)
graph.to_hf('astappiev/ragwiki-corpusgraph')
```

However, the code above would take roughly two months to run on a single machine. The graph was therefore built using a modified version of `from_retriever` that enables parallel computation on a cluster.

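
One hypothetical way to parallelise the build (an illustration, not the actual modification used): since each document's neighbour list is computed independently of all others, the corpus iterator can be split into shards, each shard's neighbour rows retrieved on a separate node, and the partial results concatenated in corpus order. A toy numpy sketch of the merge step:

```python
import numpy as np

k = 4
# Hypothetical per-shard outputs: each shard holds the k-neighbour rows
# for a contiguous slice of the corpus (docids 0-2 and 3-5 here).
shard0 = np.arange(3 * k, dtype=np.uint32).reshape(3, k)
shard1 = np.arange(3 * k, 6 * k, dtype=np.uint32).reshape(3, k)

# Merging is a plain concatenation in corpus order, because row i depends
# only on document i's own retrieval results.
edges = np.concatenate([shard0, shard1], axis=0)
print(edges.shape)  # (6, 4)
```
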
It took around 14,340 CPU-hours to build the graph on our cluster.

## Metadata

```json
{
  "type": "corpus_graph",
  "format": "np_topk",
  "package_hint": "pyterrier-adaptive",
  "doc_count": 21015324,
  "k": 16
}
```
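
From the metadata above one can estimate the raw size of the neighbour table. Assuming each neighbour id is stored as a 4-byte unsigned integer (an assumption suggested by the `np_topk`/uint32 naming, not confirmed here), the edges array takes `doc_count × k × 4` bytes:

```python
doc_count = 21_015_324
k = 16

bytes_total = doc_count * k * 4           # 4-byte ids, assumed
print(bytes_total)                        # 1344980736
print(round(bytes_total / 2**30, 2))      # ~1.25 GiB
```
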