---
dataset_info:
- config_name: documents
  features:
  - name: chunk_id
    dtype: string
  - name: chunk
    dtype: string
  splits:
  - name: train
    num_bytes: 2486993
    num_examples: 3351
  download_size: 1280365
  dataset_size: 2486993
- config_name: queries
  features:
  - name: chunk_id
    dtype: string
  - name: query
    dtype: string
  - name: answer
    dtype: string
  splits:
  - name: train
    num_bytes: 220871
    num_examples: 1111
  download_size: 113284
  dataset_size: 220871
configs:
- config_name: documents
  data_files:
  - split: train
    path: documents/train-*
- config_name: queries
  data_files:
  - split: train
    path: queries/train-*
---
# ConTEB - Covid-QA

This dataset is part of *ConTEB* (Context-aware Text Embedding Benchmark), designed to evaluate the capabilities of contextual embedding models. It focuses on the theme of **Healthcare**, drawing on articles about the COVID-19 pandemic.

## Dataset Summary

This dataset was designed to elicit contextual information. It is built upon [the COVID-QA dataset](https://aclanthology.org/2020.nlpcovid19-acl.18/). To build the corpus, we start from the pre-existing collection of documents, extract the text, and chunk it (using [LangChain](https://github.com/langchain-ai/langchain)'s RecursiveCharacterTextSplitter with a 1000-character limit). We use GPT-4o to annotate which chunk within the gold document best contains the information needed to answer the query. Since chunking is done a posteriori, without considering the questions, chunks are not always self-contained, and eliciting document-wide context can help build meaningful representations.
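
As an illustration, here is a minimal sketch of this chunking step (not the exact script used to build the dataset; only the 1000-character limit comes from the description above, the overlap setting is an assumption):

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

# 1000-character chunks as described above; chunk_overlap=0 is an assumption,
# since the card does not specify an overlap.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)

def chunk_document(doc_id: str, text: str) -> list[dict]:
    """Split one document and assign IDs of the form `doc-id_chunk-id`."""
    return [
        {"chunk_id": f"{doc_id}_{i}", "chunk": chunk}
        for i, chunk in enumerate(splitter.split_text(text))
    ]
```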

This dataset provides a focused benchmark for contextualized embeddings. It includes a curated set of original documents, the chunks stemming from them, and the associated queries.

* **Number of Documents:** 115
* **Number of Chunks:** 3351
* **Number of Queries:** 1111
* **Average Number of Tokens per Chunk:** 153.9

## Dataset Structure (Hugging Face Datasets)

The dataset is organized into two configurations with the following columns:

* **`documents`**: Contains chunk information:
    * `"chunk_id"`: The ID of the chunk, of the form `doc-id_chunk-id`, where `doc-id` is the ID of the original document and `chunk-id` is the position of the chunk within that document (see the parsing sketch after this list).
    * `"chunk"`: The text of the chunk.
* **`queries`**: Contains query information:
    * `"query"`: The text of the query.
    * `"answer"`: The answer to the query, taken from the original dataset.
    * `"chunk_id"`: The ID of the gold chunk for the query, in the same `doc-id_chunk-id` format as above.
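
The document ID and chunk position can be recovered from a `chunk_id`, e.g. for document-level aggregation. A small sketch (assuming the position is the final underscore-separated field, so document IDs may themselves contain underscores):

```python
def parse_chunk_id(chunk_id: str) -> tuple[str, int]:
    """Split a `doc-id_chunk-id` string into document ID and chunk position."""
    doc_id, position = chunk_id.rsplit("_", 1)
    return doc_id, int(position)

assert parse_chunk_id("covid-doc-42_7") == ("covid-doc-42", 7)  # hypothetical ID
```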

## Usage

We will upload a Quickstart evaluation snippet soon.
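
In the meantime, the two configurations can be loaded with the `datasets` library. A minimal sketch (the repository ID below is a placeholder for this dataset's actual Hub ID):

```python
from datasets import load_dataset

repo_id = "<org>/conteb-covid-qa"  # placeholder: replace with the actual Hub ID

documents = load_dataset(repo_id, "documents", split="train")
queries = load_dataset(repo_id, "queries", split="train")

# Join each query to its gold chunk via `chunk_id`.
chunk_text = dict(zip(documents["chunk_id"], documents["chunk"]))
example = queries[0]
print(example["query"], "->", chunk_text[example["chunk_id"]][:100])
```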

## Citation

We will add the corresponding citation soon.

## Acknowledgments

This work is partially supported by [ILLUIN Technology](https://www.illuin.tech/), and by a grant from ANRT France.

## Copyright

All rights are reserved to the original authors of the documents.