---
license: apache-2.0
language:
- en
---

# ddro-docids

This repository provides the **generated document IDs (DocIDs)** used for training and evaluating the DDRO (Direct Document Relevance Optimization) models.

Two types of DocIDs are included:

- **PQ (Product Quantization) DocIDs**: Compact semantic representations based on quantized document embeddings.
- **TU (Title + URL) DocIDs**: Tokenized document identifiers constructed from document titles and/or URLs.
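As a rough illustration of the TU variant, the sketch below reverses a URL's path segments and appends the domain. The helper `reversed_url_text` is hypothetical; the actual construction lives in the DDRO repo and additionally tokenizes the result with the T5-Base tokenizer.

```python
from urllib.parse import urlparse

# Illustrative sketch only: the real TU DocID construction is in the DDRO repo.
def reversed_url_text(url: str) -> str:
    """Reverse a URL's path segments and append its domain."""
    parsed = urlparse(url)
    segments = [s for s in parsed.path.split("/") if s]
    return " ".join(reversed(segments)) + " " + parsed.netloc
```

For example, `reversed_url_text("https://en.wikipedia.org/wiki/Product_quantization")` yields `"Product_quantization wiki en.wikipedia.org"`, putting the most specific (often most semantically rich) segment first.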

---

### Contents

- `pq_msmarco_docids.txt`: PQ DocIDs for MS MARCO (MS300K).
- `tu_msmarco_docids.txt`: TU DocIDs for MS MARCO (MS300K).
- `pq_nq_docids.txt`: PQ DocIDs for Natural Questions (NQ320K).
- `tu_nq_docids.txt`: TU DocIDs for Natural Questions (NQ320K).

Each file maps a document ID to its corresponding tokenized DocID representation.
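A minimal loader for such a mapping might look like the sketch below. The exact file layout should be checked against the files themselves; this ASSUMES one tab-separated pair per line (a document ID, then comma-separated DocID token IDs), which is only an illustrative guess.

```python
# Hedged sketch: assumed layout is "doc_id<TAB>tok,tok,tok" per line.
def load_docid_map(path: str) -> dict[str, list[int]]:
    mapping = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines
            doc_id, token_str = line.split("\t", 1)
            mapping[doc_id] = [int(t) for t in token_str.split(",")]
    return mapping
```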

---

### Details

- **Maximum Length**:
  - PQ DocIDs: Up to **24 tokens** (one per PQ subspace).
  - TU DocIDs: Up to **99 tokens** (after tokenization and truncation).
- **Tokenizer and Model**:
  - The **T5-Base** tokenizer and model are used for tokenization.
- **DocID Construction**:
  - **PQ DocIDs**: Generated by quantizing dense document embeddings obtained from a **SentenceTransformer (GTR-T5-Base)** model.
  - **TU DocIDs**: Generated by tokenizing reversed URL segments, or document titles combined with domains, depending on semantic richness.
- **Final Adjustment**:
  - Every DocID is appended with the `[1]` (end-of-sequence) token for consistent decoding.
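To make the PQ construction concrete, here is a minimal numpy sketch, not the repository's code: a 768-dimensional embedding (the GTR-T5-Base output size) is split into 24 subvectors, and each subvector is assigned the index of its nearest centroid in a per-subspace codebook. Real codebooks are learned from the corpus (e.g. via k-means); random ones are used here purely for illustration.

```python
import numpy as np

# Illustrative PQ sketch: 24 subspaces of 32 dims each (24 * 32 = 768),
# with 256 centroids per subspace. Codebooks are random for demonstration;
# in practice they are learned from the document embeddings.
NUM_SUBSPACES, SUB_DIM, CODEBOOK_SIZE = 24, 32, 256

rng = np.random.default_rng(0)
codebooks = rng.normal(size=(NUM_SUBSPACES, CODEBOOK_SIZE, SUB_DIM))

def pq_codes(embedding: np.ndarray) -> list[int]:
    """Map a 768-d embedding to 24 codebook indices (one per subspace)."""
    subvectors = embedding.reshape(NUM_SUBSPACES, SUB_DIM)
    codes = []
    for i, sub in enumerate(subvectors):
        dists = np.linalg.norm(codebooks[i] - sub, axis=1)
        codes.append(int(np.argmin(dists)))
    return codes

docid = pq_codes(rng.normal(size=768)) + [1]  # append end-of-sequence token
```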

---

### Code for Embedding and DocID Generation

#### Step 1: Generate Document Embeddings

Document embeddings are generated using a SentenceTransformer model (`gtr-t5-base` by default).
The script used to generate these embeddings is available [here](https://github.com/kidist-amde/ddro/blob/main/src/data/preprocessing/generate_doc_embeddings.py).

Example:

```bash
python generate_embeddings.py \
  --input_path path/to/input.jsonl \
  --output_path path/to/save/embeddings.txt \
  --model_name sentence-transformers/gtr-t5-base \
  --batch_size 128 \
  --dataset msmarco
```

- `input_path`: Path to the document corpus.
- `output_path`: Destination for the generated embeddings.
- `dataset`: Choose `msmarco` or `nq`.

**Note**: For NQ, documents are loaded differently (from a gzipped TSV file rather than JSONL).
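A hedged sketch of that NQ loading path follows; the three-column layout assumed here (doc ID, title, body) is an illustration only, so consult the preprocessing script for the real format.

```python
import csv
import gzip
import sys

# Long document bodies can exceed csv's default field size limit.
csv.field_size_limit(sys.maxsize)

def iter_nq_docs(path: str):
    """Yield (doc_id, title, body) tuples from a gzipped TSV file.

    ASSUMED layout: three tab-separated columns per line.
    """
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t"):
            if len(row) >= 3:
                yield row[0], row[1], row[2]
```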

---

### Code for DocID Generation

The script used to generate these DocIDs is available [here](https://github.com/kidist-amde/ddro/blob/main/src/data/generate_instances/generate_encoded_docids.py).

Key functionality:

- Loading documents and their precomputed embeddings.
- Encoding document IDs with PQ codes or URL/title-based tokenization.
- Applying consistent token indexing for training generative retrievers.

Example usage:

```bash
python generate_encoded_docids.py \
  --encoding pq \
  --input_doc_path path/to/documents.jsonl \
  --input_embed_path path/to/embeddings.txt \
  --output_path path/to/save/pq_docids.txt \
  --pretrain_model_path transformer_models/t5-base
```

Supported encoding options: `atomic`, `pq`, `url`, `summary`.
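The script's command-line surface can be approximated with the argparse sketch below. Flag names are taken from the example invocation above and the `choices` list mirrors the supported options, but the defaults, `required` settings, and help text are assumptions.

```python
import argparse

# Hedged sketch of the CLI, based only on the example invocation and the
# listed encoding options; defaults and help text are illustrative.
def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Generate encoded DocIDs.")
    parser.add_argument("--encoding", choices=["atomic", "pq", "url", "summary"],
                        default="pq", help="DocID construction strategy.")
    parser.add_argument("--input_doc_path", required=True)
    parser.add_argument("--input_embed_path", help="Needed for --encoding pq.")
    parser.add_argument("--output_path", required=True)
    parser.add_argument("--pretrain_model_path", default="transformer_models/t5-base")
    return parser
```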

### Citation

If you use these DocIDs, please cite:

```bibtex
@article{mekonnen2025lightweight,
  title={Lightweight and Direct Document Relevance Optimization for Generative Information Retrieval},
  author={Mekonnen, Kidist Amde and Tang, Yubao and de Rijke, Maarten},
  journal={arXiv preprint arXiv:2504.05181},
  year={2025}
}
```

---