dataset_info:
  - config_name: documents
    features:
      - name: chunk_id
        dtype: string
      - name: chunk
        dtype: string
    splits:
      - name: train
        num_bytes: 19898
        num_examples: 60
    download_size: 6011
    dataset_size: 19898
  - config_name: queries
    features:
      - name: chunk_id
        dtype: string
      - name: query
        dtype: string
    splits:
      - name: train
        num_bytes: 9514
        num_examples: 120
    download_size: 2687
    dataset_size: 9514
configs:
  - config_name: documents
    data_files:
      - split: train
        path: documents/train-*
  - config_name: queries
    data_files:
      - split: train
        path: queries/train-*

ConTEB - Insurance

This dataset is part of ConTEB (Context-aware Text Embedding Benchmark), a benchmark for evaluating the capabilities of contextual embedding models. It focuses on the insurance domain and is built from a document published by EIOPA (the European Insurance and Occupational Pensions Authority).

Dataset Summary

Insurance is built from a single long document containing insurance-related statistics for each country of the European Union. To build the corpus, we extract the text of the document and chunk it using LangChain's RecursiveCharacterTextSplitter with a chunk size of 1,000 characters. Countries are often not named in the text itself, but only once in the section title; certain chunks therefore require knowledge of their position within the document to be disambiguated from similar chunks about other countries. Questions are manually crafted to require this structural understanding for accurate chunk matching. Since questions are written after chunking, the query-chunk annotation comes directly from the manual question-generation process.
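The chunking step above uses LangChain's RecursiveCharacterTextSplitter; as a rough illustration of the idea, the following dependency-free sketch splits on progressively finer separators, recurses into oversized pieces, and greedily merges pieces back up to the size limit. It is a simplified stand-in, not the actual pipeline, and the example document is invented.

```python
def recursive_split(text, chunk_size=1000, separators=("\n\n", "\n", " ", "")):
    """Approximate recursive character splitting: split on the coarsest
    separator, recurse into oversized pieces with finer separators,
    then greedily merge adjacent pieces back up to the size limit."""
    if len(text) <= chunk_size:
        return [text] if text.strip() else []
    sep, finer = separators[0], separators[1:]
    if sep == "":
        # Last resort: hard cut at the size limit.
        return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    pieces = []
    for part in text.split(sep):
        if len(part) <= chunk_size:
            pieces.append(part)
        else:
            pieces.extend(recursive_split(part, chunk_size, finer))
    # Greedy merge so chunks approach (but never exceed) chunk_size.
    chunks, current = [], ""
    for piece in pieces:
        candidate = current + sep + piece if current else piece
        if len(candidate) <= chunk_size:
            current = candidate
        else:
            if current.strip():
                chunks.append(current)
            current = piece
    if current.strip():
        chunks.append(current)
    return chunks

# Invented example: a short section title followed by a long body.
document = "Section: Austria\n\n" + "Premium statistics line. " * 80
chunks = recursive_split(document, chunk_size=1000)
print(len(chunks), all(len(c) <= 1000 for c in chunks))
```

Note how the section title ("Section: Austria") ends up in its own chunk while the body chunks never mention the country, which is exactly the disambiguation problem the dataset targets.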

This dataset provides a focused benchmark for contextualized embeddings. It includes a curated set of original documents, chunks stemming from them, and queries.

  • Number of Documents: 1
  • Number of Chunks: 60
  • Number of Queries: 120
  • Average Number of Tokens per Doc: 80.7

Dataset Structure (Hugging Face Datasets)

The dataset is split into two configurations, each with the following columns:

  • documents: Contains chunk information:
    • "chunk_id": The ID of the chunk, of the form doc-id_chunk-id, where doc-id is the ID of the original document and chunk-id is the position of the chunk within that document.
    • "chunk": The text of the chunk.
  • queries: Contains query information:
    • "query": The text of the query.
    • "chunk_id": The ID of the chunk that the query answers, in the same doc-id_chunk-id form.
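Since chunk_id encodes both the source document and the chunk's position, matching a query to its gold chunk is a plain string join. A minimal sketch (the IDs and rows below are illustrative, not taken from the dataset):

```python
def parse_chunk_id(chunk_id: str):
    """Split a 'doc-id_chunk-id' identifier into document ID and position.
    The position is the final underscore-separated field, so splitting
    from the right tolerates doc IDs that contain underscores."""
    doc_id, position = chunk_id.rsplit("_", 1)
    return doc_id, int(position)

# Illustrative rows mimicking the documents and queries configs.
documents = {"eiopa-report_0": "Section: Austria ...", "eiopa-report_1": "Section: Belgium ..."}
queries = [{"query": "Which country ...?", "chunk_id": "eiopa-report_1"}]

for q in queries:
    doc_id, pos = parse_chunk_id(q["chunk_id"])
    gold_chunk = documents[q["chunk_id"]]   # gold chunk for this query
    print(doc_id, pos)
```

The `int(position)` cast assumes the position field is numeric, which follows from the card's description of chunk-id as the chunk's position within the document.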

Usage

We will upload a Quickstart evaluation snippet soon.
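Until then, the evaluation loop implied by the data layout (embed all chunks, retrieve the nearest chunk for each query, score by chunk_id match) can be sketched as below. The bag-of-words `embed` function is a deliberate stand-in: a real run would substitute an actual (ideally context-aware) embedding model and load the documents and queries configs instead of the toy rows.

```python
import math
from collections import Counter

def embed(text):
    """Stand-in bag-of-words 'embedding'; replace with a real
    embedding model for any meaningful evaluation."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def accuracy_at_1(documents, queries):
    """Fraction of queries whose top-ranked chunk is the gold chunk."""
    chunk_vecs = {cid: embed(chunk) for cid, chunk in documents.items()}
    hits = 0
    for q in queries:
        qv = embed(q["query"])
        best = max(chunk_vecs, key=lambda cid: cosine(qv, chunk_vecs[cid]))
        hits += best == q["chunk_id"]
    return hits / len(queries)

# Toy corpus with invented content; real runs would load the dataset.
docs = {"d_0": "premiums written in Austria", "d_1": "claims paid in France"}
qs = [{"query": "claims paid France", "chunk_id": "d_1"}]
print(accuracy_at_1(docs, qs))
```

Because ConTEB targets contextual embeddings, per-chunk scoring like this understates the benchmark's point: chunks here only become distinguishable once their position in the document is taken into account.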

Citation

We will add the corresponding citation soon.

Acknowledgments

This work is partially supported by ILLUIN Technology, and by a grant from ANRT France.

Copyright

All rights are reserved to the original authors of the documents.