---
license: cc-by-4.0
task_categories:
  - question-answering
  - text-retrieval
language:
  - en
tags:
  - chunking
  - scientific
  - academic-papers
  - nlp
  - qasper
  - rag
  - retrieval
size_categories:
  - 1K<n<10K
configs:
  - config_name: corpus
    data_files:
      - split: train
        path: corpus/train-*
  - config_name: questions
    data_files:
      - split: train
        path: questions/train-*
dataset_info:
  - config_name: corpus
    features:
      - name: id
        dtype: string
      - name: title
        dtype: string
      - name: text
        dtype: string
      - name: num_sections
        dtype: int64
    splits:
      - name: train
        num_bytes: 6489700
        num_examples: 243
    download_size: 3222047
    dataset_size: 6489700
  - config_name: questions
    features:
      - name: id
        dtype: string
      - name: paper_id
        dtype: string
      - name: question
        dtype: string
      - name: answer
        dtype: string
      - name: chunk-must-contain
        dtype: string
    splits:
      - name: train
        num_bytes: 985455
        num_examples: 1507
    download_size: 476865
    dataset_size: 985455
---

# 🍵 Sencha: Scientific Paper Chunking Assessment

*Scientific Challenges*: a dataset for evaluating chunking algorithms on academic papers.

## Overview

Sencha is designed to test how well chunking algorithms handle long-form scientific documents. It contains full-text NLP research papers with questions that require finding specific information across multiple sections.

### Key Challenges

- Handling structured sections (Abstract, Methods, Results, etc.)
- Preserving citation context (BIBREF tags)
- Managing hierarchical section headers
- Chunking technical content with equations and terminology

## Dataset Structure

### Corpus

The `corpus` config contains 243 full-text NLP papers.

| Column | Type | Description |
|---|---|---|
| `id` | string | arXiv paper ID |
| `title` | string | Paper title |
| `text` | string | Full paper text in markdown format |
| `num_sections` | int | Number of sections in the paper |
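Since `text` is stored as markdown, a paper can be split back into its sections at heading lines. This is a toy sketch of the idea; the exact heading convention used in the corpus (heading depth, preamble handling) is an assumption, not part of the dataset spec:

```python
import re

def split_sections(markdown_text: str) -> list[str]:
    """Split markdown paper text into sections at heading lines
    (lines starting with '#' through '######' plus a space).
    Empty fragments are dropped."""
    parts = re.split(r"(?m)^(?=#{1,6} )", markdown_text)
    return [p.strip() for p in parts if p.strip()]

paper = "## Abstract\nWe study chunking.\n## Methods\nWe split text.\n"
sections = split_sections(paper)
print(len(sections))  # 2 (Abstract, Methods)
```

In practice you can compare the count from a splitter like this against the `num_sections` column to sanity-check your parsing.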

### Questions

The `questions` config contains 1,507 questions about paper content.

| Column | Type | Description |
|---|---|---|
| `id` | string | Unique question identifier |
| `paper_id` | string | Reference to a corpus document (arXiv ID) |
| `question` | string | Question about the paper content |
| `answer` | string | Answer to the question |
| `chunk-must-contain` | string | Evidence passage that answers the question |
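The `chunk-must-contain` field supports a simple notion of chunk quality: a chunk "preserves" the evidence if it contains the passage verbatim, and a chunker that splits mid-evidence loses it. A minimal sketch on toy data (this containment rule is a simplifying assumption, not the benchmark's official scoring):

```python
def find_evidence_chunk(chunks: list[str], evidence: str) -> int:
    """Return the index of the first chunk containing the evidence
    passage verbatim, or -1 if chunking split the evidence across
    chunk boundaries."""
    for i, chunk in enumerate(chunks):
        if evidence in chunk:
            return i
    return -1

# Toy example: the second chunk keeps the evidence span intact.
chunks = ["We introduce a new model.", "It achieves 92.1 F1 on the test set."]
print(find_evidence_chunk(chunks, "92.1 F1"))                 # 1
print(find_evidence_chunk(chunks, "new model. It achieves"))  # -1 (split across chunks)
```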

## Statistics

| Metric | Value |
|---|---|
| Papers | 243 |
| Questions | 1,507 |
| Avg paper length | 26,400 chars (~5,300 words) |
| Min paper length | ~5,600 chars |
| Max paper length | ~98,500 chars |
| Avg must-contain length | 613 chars |
| Domain | NLP / computational linguistics |

## Usage

```python
from datasets import load_dataset

# Load the corpus
corpus = load_dataset("chonkie-ai/sencha", "corpus", split="train")

# Load the questions
questions = load_dataset("chonkie-ai/sencha", "questions", split="train")

# Use with the MTCB evaluator
from mtcb import SenchaEvaluator
from chonkie import RecursiveChunker

evaluator = SenchaEvaluator(
    chunker=RecursiveChunker(chunk_size=512),
    embedding_model="voyage-3-large"
)
result = evaluator.evaluate(k=[1, 3, 5, 10])
```
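If you prefer to score retrieval yourself rather than through MTCB, recall@k over the gold evidence reduces to a containment check against the top-k retrieved chunks. A hand-rolled sketch (this metric definition is an assumption; the official evaluator may score differently):

```python
def recall_at_k(ranked_chunks: list[str], evidence: str,
                ks: list[int]) -> dict[int, float]:
    """For each cutoff k, score 1.0 if any of the top-k ranked chunks
    contains the evidence passage verbatim, else 0.0. Averaging these
    scores over all questions gives per-k recall."""
    return {k: float(any(evidence in c for c in ranked_chunks[:k]))
            for k in ks}

ranked = ["chunk about results", "the model uses BERT embeddings", "misc"]
scores = recall_at_k(ranked, "BERT embeddings", ks=[1, 3])
print(scores)  # {1: 0.0, 3: 1.0}
```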

## Sample Topics

The papers cover various NLP topics, including:

- Sentiment analysis and affective computing
- Word embeddings and language models
- Text classification and NER
- Question answering systems
- Machine translation
- Social media analysis
- Clinical NLP

## Source

Derived from QASPER (NAACL 2021) by Allen AI, a dataset for question answering on scientific research papers.

## License

CC-BY-4.0 (following the QASPER license)