  - split: test
    path: queries/test-*
---

# ConTEB - ESG Reports

This dataset is part of *ConTEB* (Context-aware Text Embedding Benchmark), designed for evaluating contextual embedding model capabilities. It focuses on the theme of **Industrial ESG Reports**, primarily from the fast-food industry.

## Dataset Summary

This dataset was designed to elicit contextual information. It is built upon [a subset of the ViDoRe Benchmark](https://huggingface.co/datasets/vidore/esg_reports_human_labeled_v2). To build the corpus, we start from the pre-existing collection of ESG reports, extract the text, and chunk it (using [LangChain](https://github.com/langchain-ai/langchain)'s `RecursiveCharacterTextSplitter` with a chunk size of 1000 characters). Queries were manually crafted in the original dataset; we then manually re-annotate them to ensure that each query is linked to the relevant chunk of the annotated original page.
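
For concreteness, the chunking step might look like the sketch below; the `page_texts` placeholder and the zero chunk overlap are assumptions for illustration, not the exact original pipeline:

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

# `page_texts` is a hypothetical stand-in for the text extracted from the
# original ESG reports; the extraction step itself is not shown here.
page_texts = ["...full text of report 0...", "...full text of report 1..."]

# chunk_size matches the 1000-character setting described above;
# chunk_overlap=0 is an assumption, not a documented choice.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)

documents = []
for doc_id, text in enumerate(page_texts):
    for pos, chunk in enumerate(splitter.split_text(text)):
        # Chunk IDs follow the dataset's `doc-id_chunk-id` convention.
        documents.append({"chunk_id": f"{doc_id}_{pos}", "chunk": chunk})
```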

This dataset provides a focused benchmark for contextualized embeddings. It includes a curated set of original documents, chunks stemming from them, and queries.

* **Number of Documents:** 30
* **Number of Chunks:** 3702
* **Number of Queries:** 36
* **Average Number of Tokens per Chunk:** 205.5

## Dataset Structure (Hugging Face Datasets)

The dataset is organized into the following subsets, each with its own columns:

* **`documents`**: Contains chunk information:
  * `"chunk_id"`: The ID of the chunk, of the form `doc-id_chunk-id`, where `doc-id` is the ID of the original document and `chunk-id` is the position of the chunk within that document.
  * `"chunk"`: The text of the chunk.
* **`queries`**: Contains query information:
  * `"query"`: The text of the query.
  * `"answer"`: The answer relevant to the query, from the original dataset.
  * `"chunk_ids"`: A list of chunk IDs that are relevant to the query. This is used to link the query to the relevant chunks in the `documents` subset.

## Usage

We will upload a Quickstart evaluation snippet soon.
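
In the meantime, the sketch below shows one rough, unofficial way to run a retrieval-style check, reusing `docs` and `queries` from the snippet above; the embedding model and top-1 scoring are illustrative choices, not the ConTEB protocol:

```python
from sentence_transformers import SentenceTransformer, util

# Illustrative model choice only; swap in the embedding model under test.
model = SentenceTransformer("all-MiniLM-L6-v2")

chunk_ids = [row["chunk_id"] for row in docs]
chunk_emb = model.encode([row["chunk"] for row in docs], convert_to_tensor=True)

hits = 0
for q in queries:
    q_emb = model.encode(q["query"], convert_to_tensor=True)
    best = int(util.cos_sim(q_emb, chunk_emb).argmax())  # nearest chunk index
    hits += chunk_ids[best] in q["chunk_ids"]
print(f"Top-1 retrieval accuracy: {hits / len(queries):.3f}")
```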

## Citation

We will add the corresponding citation soon.

## Acknowledgments

This work is partially supported by [ILLUIN Technology](https://www.illuin.tech/), and by a grant from ANRT France.

## Copyright

All rights are reserved by the original authors of the documents.