---
license: apache-2.0
language:
- en
---
# ๐Ÿ“„ ddro-docids

This repository provides the **generated document IDs (DocIDs)** used for training and evaluating the DDRO (Direct Document Relevance Optimization) models.

Two types of DocIDs are included:
- **PQ (Product Quantization) DocIDs**: Compact semantic representations based on quantized document embeddings.
- **TU (Title + URL) DocIDs**: Tokenized document identifiers constructed from document titles and/or URLs.

---

### ๐Ÿ“š Contents
- `pq_msmarco_docids.txt`: PQ DocIDs for MS MARCO (MS300K).
- `tu_msmarco_docids.txt`: TU DocIDs for MS MARCO (MS300K).
- `pq_nq_docids.txt`: PQ DocIDs for Natural Questions (NQ320K).
- `tu_nq_docids.txt`: TU DocIDs for Natural Questions (NQ320K).

Each file maps a document ID to its corresponding tokenized docid representation.
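A loader for these mapping files might look like the sketch below. The tab delimiter and comma-separated token IDs are assumptions about the file layout; check the actual files before relying on this.

```python
# Minimal sketch of loading a DocID mapping file.
# Assumes each line is "<doc_id>\t<comma-separated token ids>";
# the exact delimiter in the released files may differ.

def load_docid_map(path):
    """Return a dict mapping document IDs to token-id lists."""
    mapping = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            doc_id, tokens = line.rstrip("\n").split("\t", 1)
            mapping[doc_id] = [int(t) for t in tokens.split(",")]
    return mapping
```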

---

### ๐Ÿ“Œ Details
- **Maximum Length**:
  - PQ DocIDs: Up to **24 tokens** (24 subspaces for PQ coding).
  - TU DocIDs: Up to **99 tokens** (after tokenization and truncation).
- **Tokenizer and Model**:  
  - The **T5-Base** tokenizer and model are used for DocID tokenization.
- **DocID Construction**:
  - **PQ DocIDs**: Generated by quantizing dense document embeddings obtained from a **SentenceTransformer (GTR-T5-Base)** model.
  - **TU DocIDs**: Generated by tokenizing reversed URL segments, or the document title combined with its domain, depending on which is semantically richer.
- **Final Adjustment**:  
  - Every DocID ends with the `[1]` (end-of-sequence) token for consistent decoding.
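The PQ construction described above can be sketched as follows. This is an illustrative product-quantization encoder, not the released implementation: real codebooks are learned (e.g. via per-subspace k-means) on the corpus embeddings, whereas the random codebooks and the codebook size of 256 here are assumptions for demonstration.

```python
import numpy as np

# Illustrative PQ encoding: split each 768-d embedding into 24 subspaces
# of 32 dims and assign the nearest centroid in each subspace.
# Random codebooks stand in for trained ones (assumption).

NUM_SUBSPACES = 24            # 24 subspaces -> up to 24 tokens per PQ DocID
SUBDIM = 768 // NUM_SUBSPACES
CODEBOOK_SIZE = 256           # centroids per subspace (assumed)

rng = np.random.default_rng(0)
codebooks = rng.normal(size=(NUM_SUBSPACES, CODEBOOK_SIZE, SUBDIM))

def pq_encode(embedding):
    """Map a 768-d embedding to 24 integer codes, one per subspace."""
    codes = []
    for i in range(NUM_SUBSPACES):
        sub = embedding[i * SUBDIM:(i + 1) * SUBDIM]
        dists = np.linalg.norm(codebooks[i] - sub, axis=1)
        codes.append(int(np.argmin(dists)))
    return codes
```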

---

### ๐Ÿ› ๏ธ Code for Embedding and DocID Generation

#### Step 1: Generate Document Embeddings

Document embeddings are generated using a SentenceTransformer model (`gtr-t5-base` by default).
The script used to generate these embeddings is available [here](https://github.com/kidist-amde/ddro/blob/main/src/data/preprocessing/generate_doc_embeddings.py).


Example:
```bash
python generate_embeddings.py \
    --input_path path/to/input.jsonl \
    --output_path path/to/save/embeddings.txt \
    --model_name sentence-transformers/gtr-t5-base \
    --batch_size 128 \
    --dataset msmarco
```

- `input_path`: Path to the document corpus.
- `output_path`: Destination for the generated embeddings.
- `dataset`: Choose `msmarco` or `nq`.

**Note**: For NQ, documents are loaded differently (from gzipped TSV format).
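The two loading paths mentioned in the note can be sketched as below. The JSON field names (`docid`, `title`, `body`) and the TSV column order are assumptions; adapt them to the actual corpus files.

```python
import gzip
import json

# Sketch of the two corpus-loading paths: MS MARCO as JSON lines,
# NQ as gzipped TSV. Field names and column order are assumptions.

def load_msmarco_jsonl(path):
    """Yield (doc_id, text) pairs from a JSON-lines corpus."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            doc = json.loads(line)
            text = f"{doc.get('title', '')} {doc.get('body', '')}".strip()
            yield doc["docid"], text

def load_nq_tsv_gz(path):
    """Yield (doc_id, text) pairs from a gzipped TSV corpus."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            doc_id, title, body = line.rstrip("\n").split("\t", 2)
            yield doc_id, f"{title} {body}".strip()
```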

---
### ๐Ÿ› ๏ธ Code for DocID Generation
The script used to generate these DocIDs is available [here](https://github.com/kidist-amde/ddro/blob/main/src/data/generate_instances/generate_encoded_docids.py).

Key functionality:
- Loading document embeddings and documents.
- Encoding document IDs with PQ codes or URL/title-based tokenization.
- Applying consistent token indexing for training generative retrievers.

Example usage:
```bash
python generate_encoded_docids.py \
    --encoding pq \
    --input_doc_path path/to/documents.jsonl \
    --input_embed_path path/to/embeddings.txt \
    --output_path path/to/save/pq_docids.txt \
    --pretrain_model_path transformer_models/t5-base
```

Supported encoding options: `atomic`, `pq`, `url`, `summary`
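For the `url` encoding, the reversed-URL construction can be sketched as follows. The real pipeline tokenizes the resulting string with the T5-Base tokenizer and appends the end-of-sequence token; simple word splitting stands in for the tokenizer here, and the segment ordering is an assumption.

```python
from urllib.parse import urlparse

# Illustrative URL-based (TU) DocID construction: reverse the URL's path
# segments, append the domain, and truncate to the maximum length.
# Word splitting stands in for the T5-Base tokenizer (assumption).

MAX_TOKENS = 99  # TU DocIDs are truncated to at most 99 tokens

def url_to_docid_text(url, max_tokens=MAX_TOKENS):
    """Return a truncated list of word-level tokens for a URL-based DocID."""
    parsed = urlparse(url)
    segments = [s for s in parsed.path.split("/") if s]
    pieces = list(reversed(segments)) + [parsed.netloc]
    tokens = " ".join(pieces).replace("-", " ").split()
    return tokens[:max_tokens]
```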

---

### ๐Ÿ“– Citation
If you use these DocIDs, please cite:

```bibtex
@article{mekonnen2025lightweight,
  title={Lightweight and Direct Document Relevance Optimization for Generative Information Retrieval},
  author={Mekonnen, Kidist Amde and Tang, Yubao and de Rijke, Maarten},
  journal={arXiv preprint arXiv:2504.05181},
  year={2025}
}
```

---