---
annotations_creators:
  - machine-generated
language_creators:
  - found
language:
  - en
license:
  - mit
multilinguality:
  - monolingual
size_categories:
  - 100K<n<1M
source_datasets:
  - original
task_categories:
  - text-classification
  - sentence-similarity
task_ids:
  - semantic-similarity-classification
pretty_name: Wiki-727K
tags:
  - text segmentation
  - document segmentation
  - topic segmentation
  - topic shift detection
  - semantic chunking
  - chunking
  - nlp
  - wikipedia
dataset_info:
  features:
    - name: id
      dtype: string
    - name: ids
      sequence: string
    - name: sentences
      sequence: string
    - name: titles_mask
      sequence: uint8
    - name: levels
      sequence: uint8
    - name: labels
      sequence:
        class_label:
          names:
            '0': semantic-continuity
            '1': semantic-shift
  splits:
    - name: train
      num_bytes: 4754764877
      num_examples: 582160
    - name: validation
      num_bytes: 595209014
      num_examples: 72354
    - name: test
      num_bytes: 608033007
      num_examples: 73232
  download_size: 1569504207
  dataset_size: 5958006898
---

# Dataset Card for Wiki-727K

Wiki-727K is a large dataset for text segmentation, automatically extracted and labeled from Wikipedia. It frames segmentation as a sentence-level sequence labeling task: each sentence is labeled as either continuing the current topic or marking a semantic/topic shift.

## Dataset Overview

- **Train**: 582,160 documents
- **Validation**: 72,354 documents
- **Test**: 73,232 documents

## Features

- `id` (string): Document ID.
- `ids` (sequence of strings): Per-sentence IDs within each document.
- `sentences` (sequence of strings): Sentences of each document.
- `titles_mask` (sequence of uint8): Mask indicating whether each sentence is a section title (optional).
- `levels` (sequence of uint8): Hierarchical section level of each sentence (optional).
- `labels` (sequence of class labels): Binary labels per sentence: `semantic-continuity` (0) or `semantic-shift` (1).
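The per-sentence sequences above are parallel: `sentences[i]`, `titles_mask[i]`, `levels[i]`, and `labels[i]` all describe the i-th sentence. A minimal sketch of turning the labels into topical segments, on a made-up record that follows the schema (assuming `semantic-shift` marks the first sentence of a new segment; check the loading script for the exact convention):

```python
# Toy record following the Wiki-727K schema (invented content, not real data).
doc = {
    "id": "doc-0",
    "sentences": [
        "Paris is the capital of France.",
        "It is known for the Eiffel Tower.",
        "Penguins live in the Southern Hemisphere.",
        "They are flightless birds.",
    ],
    # 0 = semantic-continuity, 1 = semantic-shift
    "labels": [0, 0, 1, 0],
}

def split_into_segments(sentences, labels):
    """Group sentences into segments, opening a new segment at each shift."""
    segments = []
    for sentence, label in zip(sentences, labels):
        if label == 1 or not segments:
            segments.append([])
        segments[-1].append(sentence)
    return segments

segments = split_into_segments(doc["sentences"], doc["labels"])
# Two segments: the two France sentences, then the two penguin sentences.
```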

## Usage

The dataset can be loaded with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load with section titles included as sentences
titled_dataset = load_dataset('saeedabc/wiki727k', num_proc=8, trust_remote_code=True)

# Load with section titles dropped
untitled_dataset = load_dataset('saeedabc/wiki727k', drop_titles=True, num_proc=8, trust_remote_code=True)
```
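When titles are kept, `titles_mask` can also be used to filter them out manually. A sketch on a toy record (assuming a mask value of 1 marks a section title, per the feature description above):

```python
# Toy record with one section title followed by two body sentences
# (invented content, not real data).
doc = {
    "sentences": [
        "History.",
        "The town was founded in 1850.",
        "It grew quickly.",
    ],
    "titles_mask": [1, 0, 0],  # assumption: 1 = section title, 0 = body sentence
}

# Keep only the body sentences
body_sentences = [
    sentence
    for sentence, is_title in zip(doc["sentences"], doc["titles_mask"])
    if not is_title
]
```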

## Dataset Details