---
dataset_info:
- config_name: documents
  features:
  - name: chunk_id
    dtype: string
  - name: chunk
    dtype: string
  splits:
  - name: train
    num_bytes: 19898
    num_examples: 60
  download_size: 6011
  dataset_size: 19898
- config_name: queries
  features:
  - name: chunk_id
    dtype: string
  - name: query
    dtype: string
  splits:
  - name: train
    num_bytes: 9514
    num_examples: 120
  download_size: 2687
  dataset_size: 9514
configs:
- config_name: documents
  data_files:
  - split: train
    path: documents/train-*
- config_name: queries
  data_files:
  - split: train
    path: queries/train-*
---

# ConTEB - Insurance

This dataset is part of *ConTEB* (Context-aware Text Embedding Benchmark), designed for evaluating the capabilities of contextual embedding models. It focuses on the theme of **Insurance**, drawing on a document published by EIOPA (the European Insurance and Occupational Pensions Authority).

## Dataset Summary

*Insurance* is built from a single long document containing insurance-related statistics for each country of the European Union. To build the corpus, we extract the text of the document and chunk it using [LangChain](https://github.com/langchain-ai/langchain)'s `RecursiveCharacterTextSplitter` with a chunk size of 1000 characters. Countries are often not named in the body text, only once in the section title, so certain chunks require knowledge of their position within the document to be disambiguated from others. Questions are manually crafted to require this structural understanding for accurate chunk matching. Since questions are written after chunking, the gold chunk annotations follow directly from the manual question-writing process.

This dataset provides a focused benchmark for contextualized embeddings. It includes a curated set of original documents, the chunks extracted from them, and queries.

* **Number of Documents:** 1
* **Number of Chunks:** 60
* **Number of Queries:** 120
* **Average Number of Tokens per Doc:** 80.7

## Dataset Structure (Hugging Face Datasets)

The dataset is organized into two configurations:

* **`documents`**: Contains chunk information:
  * `"chunk_id"`: The ID of the chunk, of the form `doc-id_chunk-id`, where `doc-id` is the ID of the original document and `chunk-id` is the position of the chunk within that document.
  * `"chunk"`: The text of the chunk.
* **`queries`**: Contains query information:
  * `"query"`: The text of the query.
  * `"chunk_id"`: The ID of the chunk the query relates to, in the same `doc-id_chunk-id` form.
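
Because each query carries the `chunk_id` of its gold chunk, the two configurations can be joined directly on that column. A minimal sketch using pandas with toy rows that mirror the schema above (the rows are illustrative, not real dataset content):

```python
import pandas as pd

# Toy rows mirroring the dataset schema (not real dataset content).
documents = pd.DataFrame(
    {
        "chunk_id": ["0_0", "0_1"],         # doc-id_chunk-id
        "chunk": ["Text of chunk 0", "Text of chunk 1"],
    }
)
queries = pd.DataFrame(
    {
        "chunk_id": ["0_1", "0_0", "0_1"],  # gold chunk for each query
        "query": ["q1", "q2", "q3"],
    }
)

# Each query is annotated with the chunk_id it should retrieve,
# so a left join recovers the gold chunk text per query.
qrels = queries.merge(documents, on="chunk_id", how="left")
print(qrels[["query", "chunk_id", "chunk"]])
```

Note that several queries can point to the same chunk (here there are 120 queries over 60 chunks), so the join is many-to-one from queries to documents.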

## Usage

We will upload a Quickstart evaluation snippet soon.
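
Until the official snippet is available, one plausible shape for a retrieval-style evaluation is sketched below. The `embed` function is a stand-in that returns random vectors, and `chunk_ids`/`gold` are toy lists; in practice you would embed the `chunk` and `query` texts with a real (ideally context-aware) embedding model and score against the gold `chunk_id` annotations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in corpus and per-query gold labels; in practice these come
# from the `documents` and `queries` configs of this dataset.
chunk_ids = ["0_0", "0_1", "0_2"]
gold = ["0_2", "0_0"]  # gold chunk_id for each query

def embed(texts):
    # Placeholder embedding function: replace with a real model.
    return rng.normal(size=(len(texts), 8))

doc_emb = embed(chunk_ids)
query_emb = embed(gold)

# Cosine similarity between every query and every chunk.
doc_emb /= np.linalg.norm(doc_emb, axis=1, keepdims=True)
query_emb /= np.linalg.norm(query_emb, axis=1, keepdims=True)
scores = query_emb @ doc_emb.T

# Recall@1: fraction of queries whose top-ranked chunk is the gold one.
top1 = np.asarray(chunk_ids)[scores.argmax(axis=1)]
recall_at_1 = float(np.mean(top1 == np.asarray(gold)))
print(f"recall@1 = {recall_at_1:.2f}")
```

With random embeddings the score is meaningless; the point is only the shape of the loop: embed chunks and queries once, rank chunks per query by similarity, and check the top hits against the annotated `chunk_id`.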

## Citation

We will add the corresponding citation soon.

## Acknowledgments

This work is partially supported by [ILLUIN Technology](https://www.illuin.tech/), and by a grant from ANRT France.

## Copyright

All rights are reserved to the original authors of the documents.