---
dataset_info:
  features:
    - name: text
      dtype: string
    - name: id
      dtype: string
    - name: url
      dtype: string
    - name: edu_score
      dtype: float32
    - name: stem_score
      dtype: float32
    - name: toxic_score
      dtype: float32
  splits:
    - name: train
      num_bytes: 285576324067
      num_examples: 96975210
  download_size: 164777356452
  dataset_size: 285576324067
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

πŸ“š ClassiCC-PT: Classified Common Crawl Corpus for Portuguese

πŸ“– Overview

ClassiCC-PT (Classified Common Crawl – Portuguese) is a large-scale web corpus containing ~120B Portuguese tokens extracted from Common Crawl snapshots. It is specifically curated for training large language models in Portuguese, with a focus on data quality, language specificity, and targeted filtering.

This corpus was created as part of a study on continued pretraining for adapting English-trained LLMs to Portuguese.

πŸ— Dataset Construction

Source snapshots: CC-2021-31, CC-2021-39, CC-2022-40

Steps:

  • Language Filtering

    Selected only pages tagged with Portuguese in Common Crawl metadata (~2% of each CC crawl).
    
  • HTML to Text Extraction

    Used Trafilatura to remove boilerplate and extract main content.
    
  • Deduplication

    Applied MinHash intra-crawl deduplication, removing ~40% of documents as duplicates.
    
  • Neural-Based Filtering

    Trained three BERTimbau-based classifiers on GPT-4o-annotated Portuguese data, scoring each document for:

    - Educational content (ClassiCC-PT-edu)
    - STEM content (ClassiCC-PT-STEM)
    - Toxic content (ClassiCC-PT-toxic)

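The MinHash deduplication step can be sketched in plain Python. This is a toy illustration, not the authors' pipeline; production setups use many signatures per document and banded locality-sensitive hashing to find candidate pairs at scale:

```python
import hashlib

def shingles(text, n=5):
    """Character n-gram shingles of a whitespace-normalized document."""
    text = " ".join(text.lower().split())
    return {text[i:i + n] for i in range(max(1, len(text) - n + 1))}

def minhash(text, num_hashes=64):
    """MinHash signature: for each seed, keep the smallest shingle hash."""
    sig = []
    for seed in range(num_hashes):
        sig.append(min(
            int.from_bytes(hashlib.md5(f"{seed}:{s}".encode()).digest()[:8], "big")
            for s in shingles(text)
        ))
    return sig

def est_jaccard(sig_a, sig_b):
    """Fraction of matching slots estimates Jaccard similarity of shingle sets."""
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)

a = "Portuguese web page about science education and schools."
b = "Portuguese web page about science education and school."   # near-duplicate
c = "A completely different document on another topic entirely."

# Near-duplicates share most signature slots; unrelated texts share few.
assert est_jaccard(minhash(a), minhash(b)) > est_jaccard(minhash(a), minhash(c))
```

Documents whose estimated similarity exceeds a chosen threshold are treated as duplicates and all but one copy is dropped.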
Final Corpus

Retained ~106M documents / ~125B tokens (Llama 2 tokenizer)

πŸš€ Performance Impact

When used for continued pretraining of TinyLlama-1.1B (1T EN tokens), ClassiCC-PT improved Portuguese benchmark performance (Poeta v1) significantly, outperforming mC4-PT and matching ClueWeb-22-PT. The model trained with ClassiCC-PT is called CuriΓ³ 1.1B and is available on Hugging Face.

| Model | Training Regimen | Poeta v1 NPM |
|---|---|---|
| TinyLlama-1T (EN) | – | 17.4 |
| mC4-PT | cont. pretraining | ~20 |
| ClueWeb-22-PT | cont. pretraining | ~27 |
| ClassiCC-PT (CuriΓ³-1.1B) | cont. pretraining | 27.1 |

πŸ“₯ Download & Usage

```python
from datasets import load_dataset

# Load the full training split (~165 GB download)
ds = load_dataset("ClassiCC-Corpus/ClassiCC-PT", split="train")

print(ds[0])
# {
#   'text': '...',
#   'id': '...',
#   'url': '...',
#   'edu_score': 4.0,
#   'stem_score': 1.0,
#   'toxic_score': 0.0
# }
```

πŸ“œ Citation

If you use ClassiCC-PT, please cite:

```bibtex
@article{almeida2025building,
  title={Building High-Quality Datasets for Portuguese LLMs: From Common Crawl Snapshots to Industrial-Grade Corpora},
  author={Almeida, Thales Sales and Nogueira, Rodrigo and Pedrini, Helio},
  journal={Journal of the Brazilian Computer Society},
  volume={31},
  number={1},
  pages={1246--1262},
  year={2025}
}
```

Acknowledgements

We thank the Google TPU Research Cloud (TRC) program, which generously granted us the resources necessary for the development of this research.