---
dataset_info:
- config_name: documents
  features:
  - name: chunk_id
    dtype: string
  - name: chunk
    dtype: string
  splits:
  - name: train
    num_bytes: 19898
    num_examples: 60
  download_size: 6011
  dataset_size: 19898
- config_name: queries
  features:
  - name: chunk_id
    dtype: string
  - name: query
    dtype: string
  splits:
  - name: train
    num_bytes: 9514
    num_examples: 120
  download_size: 2687
  dataset_size: 9514
configs:
- config_name: documents
  data_files:
  - split: train
    path: documents/train-*
- config_name: queries
  data_files:
  - split: train
    path: queries/train-*
---

# ConTEB - Insurance 

This dataset is part of *ConTEB* (Context-aware Text Embedding Benchmark), designed to evaluate the capabilities of contextual embedding models. It focuses on the theme of **Insurance** and is built from a document published by EIOPA (the European Insurance and Occupational Pensions Authority).

## Dataset Summary

*Insurance* is built from a single long document containing insurance-related statistics for each country of the European Union. To build the corpus, we extract the text of the document and chunk it using [LangChain](https://github.com/langchain-ai/langchain)'s `RecursiveCharacterTextSplitter` with a threshold of 1,000 characters. Countries are often not named in the body text, but only once in the section title, so certain chunks require knowledge of their position within the document to be disambiguated from others. Queries are manually crafted to require this structural understanding for accurate chunk matching. Since the queries are written after the chunking step, the query-to-chunk annotation follows directly from the manual question generation process.
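
As a rough illustration of this chunking step (not the authors' exact script), the sketch below uses LangChain's `RecursiveCharacterTextSplitter`; the overlap setting, input path, and ID scheme are assumptions for illustration only.

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Split the extracted document text into ~1,000-character chunks.
splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,   # the 1,000-character threshold mentioned above
    chunk_overlap=0,   # assumption: overlap is not specified in this card
)

with open("eiopa_report.txt") as f:  # hypothetical path to the extracted text
    text = f.read()

chunks = splitter.split_text(text)
records = [
    {"chunk_id": f"0_{i}", "chunk": c}  # doc-id 0, chunk position i (see ID format below)
    for i, c in enumerate(chunks)
]
```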

This dataset provides a focused benchmark for contextualized embeddings. It includes a curated set of original documents, chunks stemming from them, and queries.

*   **Number of Documents:** 1 
*   **Number of Chunks:** 60 
*   **Number of Queries:** 120 
*   **Average Number of Tokens per Doc:** 80.7 

## Dataset Structure (Hugging Face Datasets)
The dataset is organized into two subsets, each with the following columns:

*   **`documents`**: Contains chunk information:
    *   `"chunk_id"`: The ID of the chunk, of the form `doc-id_chunk-id`, where `doc-id` is the ID of the original document and `chunk-id` is the position of the chunk within that document.
    *   `"chunk"`: The text of the chunk.
*   **`queries`**: Contains query information:
    *   `"query"`: The text of the query.
    *   `"chunk_id"`: The ID of the chunk the query relates to, in the same `doc-id_chunk-id` format.

## Usage

We will upload a Quickstart evaluation snippet soon.
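
Until then, the unofficial sketch below illustrates one way to run a simple retrieval evaluation on this data. The embedding model, the `sentence-transformers` dependency, and the accuracy@1 metric are assumptions for illustration, not the benchmark's official protocol.

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

REPO_ID = "<org>/<conteb-insurance>"  # placeholder: replace with the actual repository ID

documents = load_dataset(REPO_ID, "documents", split="train")
queries = load_dataset(REPO_ID, "queries", split="train")

# Any text embedding model can stand in here; this one is only an example choice.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

doc_emb = model.encode(documents["chunk"], normalize_embeddings=True)
query_emb = model.encode(queries["query"], normalize_embeddings=True)

# Cosine similarity (embeddings are L2-normalized), then take the top-ranked chunk per query.
scores = query_emb @ doc_emb.T
best = scores.argmax(axis=1)

chunk_ids = documents["chunk_id"]
hits = sum(chunk_ids[i] == gold for i, gold in zip(best, queries["chunk_id"]))
print(f"accuracy@1: {hits / len(queries):.3f}")
```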

## Citation

We will add the corresponding citation soon.

## Acknowledgments

This work is partially supported by [ILLUIN Technology](https://www.illuin.tech/), and by a grant from ANRT France.

## Copyright

All rights are reserved to the original authors of the documents.