---
license: cc-by-4.0
task_categories:
- text-retrieval
- summarization
language:
- en
tags:
- legal
- law
size_categories:
- n<1K
source_datasets:
- launch/gov_reports
dataset_info:
- config_name: default
  features:
  - name: query-id
    dtype: string
  - name: corpus-id
    dtype: string
  - name: score
    dtype: float64
  splits:
  - name: test
    num_examples: 973
- config_name: corpus
  features:
  - name: _id
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: corpus
    num_examples: 973
- config_name: queries
  features:
  - name: _id
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: queries
    num_examples: 970
configs:
- config_name: default
  data_files:
  - split: test
    path: data/default.jsonl
- config_name: corpus
  data_files:
  - split: corpus
    path: data/corpus.jsonl
- config_name: queries
  data_files:
  - split: queries
    path: data/queries.jsonl
pretty_name: GovReport MTEB Benchmark
---
# GovReport MTEB Benchmark 🏋

This is the test split of the [GovReport](https://huggingface.co/datasets/launch/gov_report) dataset, formatted in the [Massive Text Embedding Benchmark (MTEB)](https://github.com/embeddings-benchmark/mteb) information retrieval dataset format.

This dataset is intended to facilitate consistent, reproducible evaluation of information retrieval models on GovReport with the [`mteb`](https://github.com/embeddings-benchmark/mteb) embedding model evaluation framework.

More specifically, this dataset tests the ability of information retrieval models to retrieve US government reports given their summaries.

This dataset was processed into the MTEB format by [Isaacus](https://isaacus.com/), a legal AI research company.
## Methodology 🧪

To understand how GovReport was created, refer to its creators' [paper](https://arxiv.org/abs/2104.02112).

This dataset was formatted by labeling the `summary` column of the source data as queries (or anchors) and treating the `document` column as relevant (or positive) passages.
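That mapping can be sketched as follows. This is an illustrative, hypothetical snippet: the helper name `to_mteb`, the sample row, and the generated IDs are all invented for demonstration, not part of the dataset's actual build script.

```python
# Hypothetical sketch: convert GovReport-style rows (a `document` plus its
# `summary`) into MTEB-format retrieval records. IDs here are invented.
def to_mteb(rows):
    corpus, queries, qrels = [], [], []
    for i, row in enumerate(rows):
        doc_id, query_id = f"d{i}", f"q{i}"
        corpus.append({"_id": doc_id, "title": row.get("title", ""), "text": row["document"]})
        queries.append({"_id": query_id, "text": row["summary"]})
        # Each summary is relevant only to the report it summarizes.
        qrels.append({"query-id": query_id, "corpus-id": doc_id, "score": 1.0})
    return corpus, queries, qrels

sample = [{"title": "Example Report", "document": "Full report text.", "summary": "Short summary."}]
corpus, queries, qrels = to_mteb(sample)
print(qrels[0])  # {'query-id': 'q0', 'corpus-id': 'd0', 'score': 1.0}
```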
## Structure 🗂️

As per the MTEB information retrieval dataset format, this dataset comprises three splits: `default`, `corpus`, and `queries`.

The `default` split pairs queries (`query-id`) with relevant passages (`corpus-id`), each pair having a `score` of 1.

The `corpus` split contains the government reports, with the raw text of a report stored under the `text` key and its id stored under the `_id` key.

The `queries` split contains summaries, with the text of a report summary stored under the `text` key and its id stored under the `_id` key.
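As an illustrative sketch of how the three splits fit together (all IDs and texts below are invented, not drawn from the dataset), relevance pairs can be joined back to their underlying texts by id:

```python
# Invented sample records mirroring the schemas of the three splits.
qrels = [{"query-id": "q1", "corpus-id": "d1", "score": 1.0}]
corpus = [{"_id": "d1", "title": "Example Report", "text": "Full report text..."}]
queries = [{"_id": "q1", "text": "Summary of the example report..."}]

# Index corpus entries and queries by their `_id` for lookup.
corpus_by_id = {doc["_id"]: doc for doc in corpus}
queries_by_id = {q["_id"]: q for q in queries}

# Resolve each relevance pair to (query text, passage text, score).
pairs = [
    (queries_by_id[r["query-id"]]["text"], corpus_by_id[r["corpus-id"]]["text"], r["score"])
    for r in qrels
]
```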
## License 📜

To the extent that any intellectual property rights reside in the contributions made by Isaacus in formatting and processing this dataset, Isaacus licenses those contributions under the same license terms as the source dataset. You are free to use this dataset without citing Isaacus.

The source dataset is licensed under [CC BY 4.0](https://choosealicense.com/licenses/cc-by-4.0/).

## Citation 🔖
```bibtex
@inproceedings{huang-etal-2021-efficient,
    title = "Efficient Attentions for Long Document Summarization",
    author = "Huang, Luyang  and
      Cao, Shuyang  and
      Parulian, Nikolaus  and
      Ji, Heng  and
      Wang, Lu",
    booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jun,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.naacl-main.112",
    doi = "10.18653/v1/2021.naacl-main.112",
    pages = "1419--1436",
    abstract = "The quadratic computational and memory complexities of large Transformers have limited their scalability for long document summarization. In this paper, we propose Hepos, a novel efficient encoder-decoder attention with head-wise positional strides to effectively pinpoint salient information from the source. We further conduct a systematic study of existing efficient self-attentions. Combined with Hepos, we are able to process ten times more tokens than existing models that use full attentions. For evaluation, we present a new dataset, GovReport, with significantly longer documents and summaries. Results show that our models produce significantly higher ROUGE scores than competitive comparisons, including new state-of-the-art results on PubMed. Human evaluation also shows that our models generate more informative summaries with fewer unfaithful errors.",
    eprint = {2104.02112}
}
```