---
license: cc-by-sa-4.0
task_categories:
- text-retrieval
- question-answering
language:
- en
tags:
- legal
- law
size_categories:
- n<1K
source_datasets:
- reglab/barexam_qa
dataset_info:
- config_name: default
  features:
  - name: query-id
    dtype: string
  - name: corpus-id
    dtype: string
  - name: score
    dtype: float64
  splits:
  - name: test
    num_examples: 117
- config_name: corpus
  features:
  - name: _id
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: corpus
    num_examples: 116
- config_name: queries
  features:
  - name: _id
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: queries
    num_examples: 117
configs:
- config_name: default
  data_files:
  - split: test
    path: data/default.jsonl
- config_name: corpus
  data_files:
  - split: corpus
    path: data/corpus.jsonl
- config_name: queries
  data_files:
  - split: queries
    path: data/queries.jsonl
pretty_name: Bar Exam QA (MTEB format)
---

# Bar Exam QA (MTEB format)

This is the test split of the [Bar Exam QA](https://huggingface.co/datasets/reglab/barexam_qa) dataset, formatted in the [Massive Text Embedding Benchmark (MTEB)](https://github.com/embeddings-benchmark/mteb) information retrieval dataset format.

This dataset is intended to facilitate the consistent and reproducible evaluation of information retrieval models on Bar Exam QA with the [`mteb`](https://github.com/embeddings-benchmark/mteb) embedding model evaluation framework.

More specifically, this dataset tests the ability of information retrieval models to identify legal provisions relevant to US bar exam questions.

This dataset forms part of the [Massive Legal Embedding Benchmark (MLEB)](https://isaacus.com/mleb), the largest, most diverse, and most comprehensive benchmark for legal text embedding models.
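
For intuition, the retrieval task can be simulated end to end with toy embeddings: rank every corpus passage against each query by cosine similarity, then score the top-ranked passage against the relevance judgments. This is an illustrative sketch with made-up two-dimensional vectors, not the `mteb` evaluation pipeline.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings for two queries and three passages.
query_vecs = {"q1": [1.0, 0.0], "q2": [0.0, 1.0]}
corpus_vecs = {"c1": [0.9, 0.1], "c2": [0.1, 0.9], "c3": [0.5, 0.5]}
qrels = {"q1": "c1", "q2": "c2"}  # the relevant passage for each query

# Rank passages per query and count how often the relevant one comes first.
hits = 0
for qid, qvec in query_vecs.items():
    ranked = sorted(corpus_vecs, key=lambda cid: cosine(qvec, corpus_vecs[cid]), reverse=True)
    hits += ranked[0] == qrels[qid]

recall_at_1 = hits / len(query_vecs)  # -> 1.0 for these toy vectors
```

In practice, `mteb` computes a fuller suite of ranking metrics (e.g. nDCG@10) over the real corpus and queries.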

## Methodology 🧪

To understand how Bar Exam QA was created, refer to its [documentation](https://huggingface.co/datasets/reglab/barexam_qa).

This dataset was formatted by concatenating the `prompt` and `question` columns of the source data, delimited by a single space (or, where there was no `prompt`, using the `question` alone), into queries (or anchors), and by treating the `gold_passage` column as the relevant (or positive) passages.
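
The query construction described above can be sketched as follows. This is a minimal illustration using the source column names (`prompt`, `question`, `gold_passage`); the exact preprocessing may differ in detail.

```python
def build_query(row: dict) -> str:
    """Concatenate `prompt` and `question` delimited by a single space,
    falling back to `question` alone when there is no `prompt`."""
    prompt = row.get("prompt") or ""
    question = row["question"]
    return f"{prompt} {question}" if prompt else question

# The `gold_passage` column is then treated as the relevant passage
# for the resulting query (hypothetical example row):
row = {"prompt": "A sues B for breach.", "question": "Who prevails?", "gold_passage": "..."}
query = build_query(row)  # -> "A sues B for breach. Who prevails?"
```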

## Structure 🏗️

As per the MTEB information retrieval dataset format, this dataset comprises three splits: `default`, `corpus`, and `queries`.

The `default` split pairs queries (`query-id`) with relevant passages (`corpus-id`), each pair having a `score` of 1.

The `corpus` split contains the relevant passages from Bar Exam QA, with the text of a passage stored in the `text` key and its id in the `_id` key.

The `queries` split contains the queries, with the text of a query stored in the `text` key and its id in the `_id` key.
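
A minimal sketch of how the three splits fit together, using made-up records that follow the schema described above (not actual data from the dataset):

```python
# Made-up example records in the shapes of the three splits.
corpus = [{"_id": "c1", "title": "", "text": "A contract requires consideration."}]
queries = [{"_id": "q1", "text": "What makes a contract enforceable?"}]
qrels = [{"query-id": "q1", "corpus-id": "c1", "score": 1.0}]

# Index the corpus and queries by their `_id`s.
corpus_by_id = {doc["_id"]: doc for doc in corpus}
queries_by_id = {q["_id"]: q for q in queries}

# Resolve each relevance judgment into a (query text, passage text) pair.
pairs = [
    (queries_by_id[r["query-id"]]["text"], corpus_by_id[r["corpus-id"]]["text"])
    for r in qrels
]
```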

## License 📄

This dataset is licensed under [CC BY-SA 4.0](https://choosealicense.com/licenses/cc-by-sa-4.0/).

## Citation 📚

If you use this dataset, please cite the [Massive Legal Embedding Benchmark (MLEB)](https://arxiv.org/abs/2510.19365) as well.

```bibtex
@inproceedings{Zheng_2025,
  series={CSLAW '25},
  title={A Reasoning-Focused Legal Retrieval Benchmark},
  url={http://dx.doi.org/10.1145/3709025.3712219},
  DOI={10.1145/3709025.3712219},
  booktitle={Proceedings of the Symposium on Computer Science and Law},
  publisher={ACM},
  author={Zheng, Lucia and Guha, Neel and Arifov, Javokhir and Zhang, Sarah and Skreta, Michal and Manning, Christopher D. and Henderson, Peter and Ho, Daniel E.},
  year={2025},
  month=mar,
  pages={169--193},
  collection={CSLAW '25},
  eprint={2505.03970}
}

@misc{butler2025massivelegalembeddingbenchmark,
  title={The Massive Legal Embedding Benchmark (MLEB)},
  author={Umar Butler and Abdur-Rahman Butler and Adrian Lucas Malec},
  year={2025},
  eprint={2510.19365},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2510.19365},
}
```