
scientific_lay_summarisation - PLOS - normalized

This dataset is a modified version of tomasg25/scientific_lay_summarization, containing scientific lay summaries that have been preprocessed with this code. The preprocessing fixes punctuation and whitespace problems and computes the token length of each text sample with a T5 tokenizer.

For details on the source data, see the original dataset card (tomasg25/scientific_lay_summarization). The processing applied here is described below.

Data Cleaning

The text in both the "article" and "summary" columns was processed to make punctuation and whitespace consistent. The fix_punct_whitespace function (sketched after this list) was applied to each text sample to:

  • Remove spaces before punctuation marks (except for parentheses)
  • Add a space after punctuation marks (except for parentheses) if missing
  • Handle spaces around parentheses
  • Add a space after a closing parenthesis if followed by a word or opening parenthesis
  • Handle spaces around quotation marks
  • Handle spaces around single quotes
  • Handle commas in numbers (e.g., "1, 000" → "1,000")
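
The exact rules live in the linked preprocessing code; as a rough illustration, a minimal regex-based sketch of such a function could look like this (the function name matches the card, but the body is an assumption, not the verbatim original):

import re

def fix_punct_whitespace(text: str) -> str:
    # Illustrative sketch; the linked preprocessing code is authoritative
    # Remove spaces before punctuation marks (parentheses excluded)
    text = re.sub(r"\s+([.,;:!?])", r"\1", text)
    # Add a missing space after punctuation when a letter or "(" follows,
    # leaving numbers such as 3.14 untouched
    text = re.sub(r"([.,;:!?])(?=[A-Za-z(])", r"\1 ", text)
    # No space just inside parentheses
    text = re.sub(r"\(\s+", "(", text)
    text = re.sub(r"\s+\)", ")", text)
    # Space after a closing parenthesis followed by a word or "("
    text = re.sub(r"\)(?=[A-Za-z(])", ") ", text)
    # Re-join digits split around a comma, e.g. "1, 000" -> "1,000"
    text = re.sub(r"(\d),\s+(\d)", r"\1,\2", text)
    return text

print(fix_punct_whitespace("Significant ( p < 0.05 ) ,with 1, 000 samples."))
# -> "Significant (p < 0.05), with 1,000 samples."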

Tokenization

The length of each text sample was calculated in tokens using a T5 tokenizer. The calculate_token_length function encodes each text sample with the tokenizer and returns the number of resulting tokens; these lengths were added as the article_length and summary_length columns.
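
In code, this amounts to encoding each text and counting the ids. A short sketch, assuming the t5-base checkpoint (the card only says "a tokenizer from the T5 model") and a pandas dataframe df with the article and summary columns:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-base")  # checkpoint is an assumption

def calculate_token_length(text: str) -> int:
    # Encode the text and count the resulting token ids
    return len(tokenizer.encode(text, truncation=False))

df["article_length"] = df["article"].apply(calculate_token_length)
df["summary_length"] = df["summary"].apply(calculate_token_length)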

Data Format

The resulting processed data files are stored in Apache Parquet format and can be loaded with the pandas library or with the Hugging Face datasets library. The relevant column names and data types for summarization are:

DatasetDict({
    train: Dataset({
        features: ['article', 'summary', 'section_headings', 'keywords', 'year', 'title', 'article_length', 'summary_length'],
        num_rows: 24773
    })
    test: Dataset({
        features: ['article', 'summary', 'section_headings', 'keywords', 'year', 'title', 'article_length', 'summary_length'],
        num_rows: 1376
    })
    validation: Dataset({
        features: ['article', 'summary', 'section_headings', 'keywords', 'year', 'title', 'article_length', 'summary_length'],
        num_rows: 1376
    })
})

Usage

Load the desired Parquet file(s) with pandas or datasets. Here is an example using pandas:

# Download the dataset files first (e.g. via "use in datasets" on the hub page)
import pandas as pd

# Load the train split from the local Parquet file
df = pd.read_parquet("scientific_lay_summarisation-plos-norm/train.parquet")
print(df.info())

And here is an example using datasets:

from datasets import load_dataset

dataset = load_dataset("pszemraj/scientific_lay_summarisation-plos-norm")
train_set = dataset['train']
# Print the first few samples
for i in range(5):
    print(train_set[i])
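
Because the token lengths are precomputed, you can filter by length without re-tokenizing. For example, to keep only articles that fit a given token budget (16384 here is an arbitrary choice, not a property of the dataset):

# Keep only rows whose precomputed article length fits the budget
max_len = 16384  # illustrative budget
filtered = train_set.filter(lambda ex: ex["article_length"] <= max_len)
print(f"kept {len(filtered)} of {len(train_set)} articles")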

Token Lengths

For train split:

(Figure: token length distribution for the train split.)

