---
tags:
- pyterrier
- pyterrier-artifact
- pyterrier-artifact.corpus_graph
- pyterrier-artifact.corpus_graph.np_topk
task_categories:
- text-retrieval
viewer: false
---

# ragwiki-corpusgraph

## Description

A corpus graph for the Wikipedia corpus (`rag:nq_wiki`), built using the `ragwiki-terrier` sparse index.
The graph encodes similarity between documents by connecting each document to its `k=16` nearest neighbors, as ranked by a BM25 retriever.
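As a rough illustration (toy data, not the real artifact), the `np_topk` format can be thought of as a dense integer matrix with one row of `k` neighbor docids per document, so a neighborhood lookup is a single row slice:

```python
import numpy as np

k = 2          # the real artifact uses k=16
doc_count = 3  # the real artifact covers 21,015,324 documents

# Toy neighbor matrix: row i holds the docids of the k nearest
# neighbors of document i (a document is never its own neighbor).
edges = np.array([
    [1, 2],
    [0, 2],
    [0, 1],
])

def neighbours(docid: int) -> np.ndarray:
    """Look up the k nearest neighbors of `docid` as a row slice."""
    return edges[docid]

print(neighbours(1).tolist())  # → [0, 2]
```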

## Usage

```python
# Load the artifact
import pyterrier as pt
artifact = pt.Artifact.from_hf('astappiev/ragwiki-corpusgraph')
```

### Usage with GAR

```python
import pyterrier as pt
from pyterrier_adaptive import GAR, NpTopKCorpusGraph
from pyterrier_t5 import MonoT5ReRanker

sparse_index: pt.terrier.TerrierIndex = pt.Artifact.from_hf("pyterrier/ragwiki-terrier")
retriever = pt.rewrite.tokenise() >> sparse_index.bm25(include_fields=["docno", "text", "title"]) >> pt.rewrite.reset()

get_text = sparse_index.text_loader(["docno", "text", "title"])
prepare_text = pt.apply.generic(lambda df: df.assign(qid=df["qid"].map(str), docno=df["docno"].map(str))) >> get_text
scorer = prepare_text >> MonoT5ReRanker(verbose=False, batch_size=64)

graph: NpTopKCorpusGraph = pt.Artifact.from_hf("astappiev/ragwiki-corpusgraph").to_limit_k(8)

pipeline = retriever >> GAR(scorer, graph) >> get_text
pipeline.search("hello world")
```

## Benchmarks

*TODO: Provide benchmarks for the artifact.*

## Reproduction

This graph was constructed using [PyTerrier Adaptive](https://github.com/terrierteam/pyterrier_adaptive).

```python
import pyterrier as pt
from pyterrier_adaptive import NpTopKCorpusGraph

build_graph_k = 16  # number of neighbours to store per document
num_threads = 8     # retrieval threads; set to the CPUs available on your machine

dataset: pt.datasets.Dataset = pt.get_dataset("rag:nq_wiki")
sparse_index: pt.terrier.TerrierIndex = pt.Artifact.from_hf("pyterrier/ragwiki-terrier")
bm25 = pt.rewrite.tokenise() >> sparse_index.bm25(include_fields=["docno", "text", "title"], threads=num_threads) >> pt.rewrite.reset()

# Retrieve k+1 results per document: the top hit is the document itself.
graph = NpTopKCorpusGraph.from_retriever(
    bm25 % (build_graph_k + 1),
    dataset.get_corpus_iter(),
    "../index/ragwiki-corpusgraph",
    k=build_graph_k,
    batch_size=65_536,
)
graph.to_hf('astappiev/ragwiki-corpusgraph')
```

However, the code above would take roughly two months to run on a single machine. The graph was instead built with a modified version of `from_retriever` that enables parallel computation on a cluster.

It took around 14,340 CPU-hours to build the graph on our cluster.
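One way the work can be parallelized (a hypothetical sketch, not the actual modified `from_retriever`) is to split the docid range into contiguous shards, retrieve neighbors for each shard as an independent cluster job, and concatenate the per-shard results afterwards:

```python
# Hypothetical sharding helper: split [0, doc_count) into num_shards
# contiguous, near-equal ranges. Each (start, end) range would then be
# processed by one cluster job.
def shard_ranges(doc_count: int, num_shards: int):
    base, extra = divmod(doc_count, num_shards)
    start = 0
    for i in range(num_shards):
        size = base + (1 if i < extra else 0)  # spread the remainder
        yield (start, start + size)
        start += size

print(list(shard_ranges(10, 3)))  # → [(0, 4), (4, 7), (7, 10)]
```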

## Metadata

```json
{
  "type": "corpus_graph",
  "format": "np_topk",
  "package_hint": "pyterrier-adaptive",
  "doc_count": 21015324,
  "k": 16
}
```