---
license: cc-by-sa-3.0
dataset_info:
  features:
    - name: text
      dtype: string
    - name: category
      dtype: string
    - name: url
      dtype: string
    - name: title
      dtype: string
    - name: embeddings
      sequence: float64
  splits:
    - name: train
      num_bytes: 4949572549
      num_examples: 518092
  download_size: 3787534362
  dataset_size: 4949572549
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
language:
  - en
pretty_name: STEMWikiSmallRAG
tags:
  - RAG
  - Retrieval Augmented Generation
  - Small Chunks
  - Wikipedia
  - Science
  - Scientific
  - Scientific Wikipedia
  - Science Wikipedia
  - 512 tokens
  - STEM
task_categories:
  - text-generation
  - text-classification
  - question-answering
---

# STEMWikiSmallRAG with embeddings

This dataset contains Wikipedia entries from STEM fields. Unfortunately, it also includes Business & Economics entries, but those may hold some useful data as well, even if by accident.

Processed version of millawell/wikipedia_field_of_science, prepared for use in small-context-length RAG systems. Chunk length is tokenizer-dependent, but each chunk should be around 512 tokens. Longer Wikipedia pages have been split into smaller entries, with the title added as a prefix. Embedded using mixedbread-ai/mxbai-embed-large-v1, with truncation to 512 tokens.
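Once loaded, the `embeddings` column can be used directly for retrieval. A minimal cosine-similarity sketch with NumPy (the toy 3-dimensional vectors and row texts below are made up for illustration; real rows carry full mxbai embeddings, and the query would be embedded with the same model):

```python
import numpy as np

# Toy stand-ins for dataset rows: each row has `text` and `embeddings`.
# These vectors are invented purely to illustrate the retrieval step.
corpus = [
    {"text": "Chunk about thermodynamics", "embeddings": [0.9, 0.1, 0.0]},
    {"text": "Chunk about graph theory",   "embeddings": [0.1, 0.9, 0.2]},
    {"text": "Chunk about supply chains",  "embeddings": [0.0, 0.2, 0.9]},
]

def top_k(query_vec, corpus, k=1):
    """Return the k corpus rows most cosine-similar to query_vec."""
    q = np.asarray(query_vec, dtype=np.float64)
    mat = np.asarray([row["embeddings"] for row in corpus], dtype=np.float64)
    sims = mat @ q / (np.linalg.norm(mat, axis=1) * np.linalg.norm(q))
    order = np.argsort(sims)[::-1][:k]
    return [corpus[i] for i in order]

print(top_k([1.0, 0.0, 0.1], corpus)[0]["text"])
```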

Non-embedded 256- and 512-token versions are also available:

- Laz4rz/wikipedia_science_chunked_small_rag_512
- Laz4rz/wikipedia_science_chunked_small_rag_256

If you wish to prepare a different chunk length:

- use millawell/wikipedia_field_of_science
- adapt the chunker function:
```python
import re

def chunker_clean(results, example, length=512, approx_token=3, prefix=""):
    """Recursively split `example` into chunks of roughly `length` tokens,
    approximated as `length * approx_token` characters, cutting at sentence
    boundaries where possible."""
    # On the first call, collapse newlines/surrounding whitespace and strip
    # any pre-existing copy of the prefix from the text.
    if len(results) == 0:
        regex_pattern = r'[\n\s]*\n[\n\s]*'
        example = re.sub(regex_pattern, " ", example).strip().replace(prefix, "")
    chunk_length = length * approx_token
    if len(example) > chunk_length:
        first = example[:chunk_length]
        # Cut at the last full sentence inside the character budget
        chunk = ".".join(first.split(".")[:-1])
        if len(chunk) == 0:
            # No period in the window: fall back to a hard character cut,
            # keeping every character for the remainder
            chunk = first
            rest = example[len(chunk):]
        else:
            rest = example[len(chunk) + 1:]  # skip the trailing period
        results.append(prefix + chunk.strip())
        if len(rest) > chunk_length:
            chunker_clean(results, rest.strip(), length=length,
                          approx_token=approx_token, prefix=prefix)
        else:
            results.append(prefix + rest.strip())
    else:
        results.append(prefix + example.strip())
    return results
```
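A quick usage sketch with a deliberately tiny budget so the splits are visible (the function is repeated here so the snippet runs standalone; the sample text and `length=8` are made up for illustration):

```python
import re

def chunker_clean(results, example, length=512, approx_token=3, prefix=""):
    # Same function as above, repeated so this snippet is self-contained.
    if len(results) == 0:
        example = re.sub(r'[\n\s]*\n[\n\s]*', " ", example).strip().replace(prefix, "")
    chunk_length = length * approx_token
    if len(example) > chunk_length:
        first = example[:chunk_length]
        chunk = ".".join(first.split(".")[:-1])
        if len(chunk) == 0:
            chunk = first
            rest = example[len(chunk):]
        else:
            rest = example[len(chunk) + 1:]
        results.append(prefix + chunk.strip())
        if len(rest) > chunk_length:
            chunker_clean(results, rest.strip(), length=length,
                          approx_token=approx_token, prefix=prefix)
        else:
            results.append(prefix + rest.strip())
    else:
        results.append(prefix + example.strip())
    return results

# Budget of 8 "tokens" * 3 chars = 24 characters per chunk.
text = "First sentence here. Second sentence follows. Third one ends it."
chunks = chunker_clean([], text, length=8, approx_token=3, prefix="Title: ")
for c in chunks:
    print(c)
```

Each chunk ends at a sentence boundary and carries the title prefix, mirroring how the 512-token dataset entries were produced.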