---
dataset_info:
- config_name: documents
  features:
  - name: chunk
    dtype: string
  - name: chunk_id
    dtype: string
  splits:
  - name: test
    num_bytes: 3161302
    num_examples: 3702
  download_size: 1775726
  dataset_size: 3161302
- config_name: queries
  features:
  - name: chunk_ids
    sequence: string
  - name: query
    dtype: string
  - name: answer
    dtype: string
  splits:
  - name: test
    num_bytes: 19746
    num_examples: 36
  download_size: 14925
  dataset_size: 19746
configs:
- config_name: documents
  data_files:
  - split: test
    path: documents/test-*
- config_name: queries
  data_files:
  - split: test
    path: queries/test-*
---
# ConTEB - ESG Reports
This dataset is part of *ConTEB* (Context-aware Text Embedding Benchmark), designed to evaluate the capabilities of contextual embedding models. It focuses on the theme of **Industrial ESG Reports**, in particular reports from the fast-food industry.
## Dataset Summary
This dataset was designed to elicit the use of contextual information. It is built upon [a subset of the ViDoRe Benchmark](https://huggingface.co/datasets/vidore/esg_reports_human_labeled_v2). To build the corpus, we start from the pre-existing collection of ESG reports, extract their text, and chunk it using [LangChain](https://github.com/langchain-ai/langchain)'s `RecursiveCharacterTextSplitter` with a chunk size of 1,000 characters, as sketched below. Queries were manually crafted in the original dataset; we manually re-annotate them to ensure each query is linked to the relevant chunk of the originally annotated page.
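For reference, a minimal sketch of the chunking step. Beyond the 1,000-character chunk size stated above, the splitter parameters (left at library defaults here), the `langchain_text_splitters` packaging, and the placeholder `doc_id`/`document_text` are assumptions, not the exact preprocessing pipeline:

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

doc_id = "0"            # illustrative document ID (assumption)
document_text = "..."   # stands in for text extracted from one ESG report

# Split the extracted text into chunks of at most 1,000 characters;
# all other parameters are left at the library defaults.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000)
chunks = splitter.split_text(document_text)

# Chunk IDs take the form "<doc-id>_<chunk-position>" (see Dataset Structure below).
chunk_records = [
    {"chunk_id": f"{doc_id}_{i}", "chunk": text}
    for i, text in enumerate(chunks)
]
```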
This dataset provides a focused benchmark for contextualized embeddings. It includes a curated set of original documents, the chunks derived from them, and queries.
* **Number of Documents:** 30
* **Number of Chunks:** 3702
* **Number of Queries:** 36
* **Average Number of Tokens per Chunk:** 205.5
## Dataset Structure (Hugging Face Datasets)
The dataset is organized into two configurations, each with the following columns:
* **`documents`**: Contains chunk information:
* `"chunk_id"`: The ID of the chunk, of the form `doc-id_chunk-id`, where `doc-id` is the ID of the original document and `chunk-id` is the position of the chunk within that document.
* `"chunk"`: The text of the chunk
* **`queries`**: Contains query information:
* `"query"`: The text of the query.
* `"answer"`: The answer relevant to the query, from the original dataset.
* `"chunk_ids"`: A list of chunk IDs that are relevant to the query. This is used to link the query to the relevant chunks in the `documents` dataset.
## Usage
We will upload a Quickstart evaluation snippet soon. In the meantime, the sketch below illustrates a simple retrieval evaluation.
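This is a minimal sketch, not the official ConTEB evaluation: the repo ID and the embedding model are assumptions, and the benchmark's official metrics may differ:

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, util

REPO_ID = "illuin-conteb/esg-reports"  # assumed repo ID; replace with the actual path

documents = load_dataset(REPO_ID, "documents", split="test")
queries = load_dataset(REPO_ID, "queries", split="test")

# Any text embedding model can stand in here; this one is just a small default.
model = SentenceTransformer("all-MiniLM-L6-v2")
chunk_ids = documents["chunk_id"]
chunk_emb = model.encode(documents["chunk"], convert_to_tensor=True)
query_emb = model.encode(queries["query"], convert_to_tensor=True)

# For each query, rank chunks by cosine similarity and check whether
# at least one gold chunk appears in the top 10.
hits = util.semantic_search(query_emb, chunk_emb, top_k=10)
n_hit = sum(
    any(chunk_ids[h["corpus_id"]] in q["chunk_ids"] for h in hs)
    for q, hs in zip(queries, hits)
)
print(f"Hit rate @ 10: {n_hit / len(queries):.3f}")
```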
## Citation
We will add the corresponding citation soon.
## Acknowledgments
This work is partially supported by [ILLUIN Technology](https://www.illuin.tech/), and by a grant from ANRT France.
## Copyright
All rights are reserved to the original authors of the documents.