---
configs:
  - config_name: all
    data_files:
      - path:
          - all.jsonl.zst
        split: train
  - config_name: sample_k1000
    data_files:
      - path:
          - sample_k1000.jsonl.zst
        split: train
  - config_name: sample_k10000
    data_files:
      - path:
          - sample_k10000.jsonl.zst
        split: train
  - config_name: sample_k100000
    data_files:
      - path:
          - sample_k100000.jsonl.zst
        split: train
  - config_name: sample_k1000000
    data_files:
      - path:
          - sample_k1000000.jsonl.zst
        split: train
  - config_name: sample_k2000
    data_files:
      - path:
          - sample_k2000.jsonl.zst
        split: train
  - config_name: sample_k20000
    data_files:
      - path:
          - sample_k20000.jsonl.zst
        split: train
  - config_name: sample_k200000
    data_files:
      - path:
          - sample_k200000.jsonl.zst
        split: train
  - config_name: sample_k5000
    data_files:
      - path:
          - sample_k5000.jsonl.zst
        split: train
  - config_name: sample_k50000
    data_files:
      - path:
          - sample_k50000.jsonl.zst
        split: train
  - config_name: sample_k500000
    data_files:
      - path:
          - sample_k500000.jsonl.zst
        split: train
task_categories:
  - text-generation
language:
  - en
tags:
  - wikipedia
  - paragraphs
---

Wikipedia Paragraphs Complete Dataset

This dataset consists of English Wikipedia paragraphs between 1 000 and 8 000 characters in length. It was sourced from the wikimedia/wikipedia dataset, dump 20231101.en.
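The length window can be expressed as a simple filter. This is an illustrative sketch only — the `keep_paragraph` helper and the inclusive boundary semantics are assumptions, not the dataset's actual pipeline:

```python
# Sketch: keep paragraphs in the 1 000–8 000 character window described above.
# Whether the bounds are inclusive is an assumption.

MIN_CHARS = 1_000
MAX_CHARS = 8_000

def keep_paragraph(text: str) -> bool:
    """Return True if the paragraph falls inside the length window."""
    return MIN_CHARS <= len(text) <= MAX_CHARS

paragraphs = ["short", "x" * 5_000, "y" * 9_000]
kept = [p for p in paragraphs if keep_paragraph(p)]
print(len(kept))  # only the 5 000-character paragraph survives
```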

Preprocessing Steps

The dataset has undergone extensive cleaning and normalization, including:

  • Removing brackets
  • Removing HTML tags
  • Normalizing bullet points, hyphenated words, quotation marks, Unicode characters, and whitespace
  • Replacing email addresses, emojis, hashtags, phone numbers, URLs, and user handles
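The cleaning steps above might look roughly like the following sketch. All regex patterns, placeholder tokens, and the processing order are illustrative assumptions; the dataset's actual pipeline is not published here:

```python
import re

# Illustrative normalization sketch; patterns and placeholders are
# assumptions, not the dataset's actual cleaning code.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
URL = re.compile(r"https?://\S+")
HANDLE = re.compile(r"@\w+")
HTML_TAG = re.compile(r"<[^>]+>")
WHITESPACE = re.compile(r"\s+")

def clean(text: str) -> str:
    text = HTML_TAG.sub("", text)             # remove HTML tags
    text = EMAIL.sub("[EMAIL]", text)         # replace email addresses
    text = URL.sub("[URL]", text)             # replace URLs
    text = HANDLE.sub("[USER]", text)         # replace user handles
    text = text.replace("“", '"').replace("”", '"')  # normalize quotes
    return WHITESPACE.sub(" ", text).strip()  # normalize whitespace

print(clean("Contact <b>me</b> at foo@bar.com   or https://example.com"))
# → Contact me at [EMAIL] or [URL]
```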

Clustering

Paragraphs have been clustered using Snowflake/snowflake-arctic-embed-xs embeddings.

Multiple cluster granularities are available as separate configurations with cluster counts of:
1 000 | 2 000 | 5 000 | 10 000 | 20 000 | 50 000 | 100 000 | 200 000 | 500 000 | 1 000 000
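The clustering step could be reproduced along these lines. This sketch substitutes random vectors for real Snowflake/snowflake-arctic-embed-xs embeddings (the 384-dimension figure is an assumption) and uses a minimal k-means, not the exact procedure behind this dataset:

```python
import numpy as np

# Minimal k-means sketch. In practice each paragraph would be embedded
# with Snowflake/snowflake-arctic-embed-xs; random vectors stand in
# here, and the 384-dimension figure is an assumption.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(200, 384)).astype(np.float32)

def kmeans(X, k, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each vector to its nearest center (squared Euclidean).
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # Recompute each center as the mean of its assigned vectors.
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

labels = kmeans(embeddings, k=10)
print(len(set(labels.tolist())))  # at most 10 clusters
```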

Example Data Sample

{
  "title": "SS Arrow",
  "text": "On February 5, 1970, a mile-long oil slick had formed and was heading for Cape Breton Island, the northern side of the bay. Small aircraft attempted to disperse the oil, dropping a chemical dispersant known as COREXIT on the spill, but this failed. The oil spread and washed ashore on many beaches in the bay. Within a week, the oil had spread to cover 75 miles of beaches and threatened to spread even further. Ultimately, the oil spill affected 190 miles of shoreline, with environmental degradation still evident decades later. The clean-up took months. The pollution crippled the local fishing industry, with fishermen catching lobsters and fish coated in bunker C. The Fisheries Research Board of Canada conducted experiments in May 1970 to assess aquatic life and imposed regulations on commercial fishing to protect public health. The combined cleanup and environmental impact costs ran into millions of dollars."
}
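Each record is a JSON object with title and text fields, one per line in the .jsonl.zst shards. After decompressing a shard (the zstandard package is the usual tool for .zst files, though any Zstandard decompressor works), lines can be parsed with the standard library; the truncated sample line below is illustrative:

```python
import json

# Each line of a decompressed .jsonl.zst shard is one JSON record
# with "title" and "text" fields, as in the sample above.
line = '{"title": "SS Arrow", "text": "On February 5, 1970, ..."}'
record = json.loads(line)
print(record["title"])  # SS Arrow
```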

Limitations

  • Some extraneous spaces and minor formatting errors remain
  • Parentheses, brackets, and other formatting elements are omitted
  • LaTeX expressions and table markdown are not well preserved

See Also

For a smaller subset with higher-quality formatting, consider agentlans/wikipedia-paragraphs.