NBR-500 Corpus

🇧🇷 Pre-training corpus for the NBR-500 model, a 500M-parameter Small Language Model optimized for Brazilian Portuguese.

📊 Statistics

Metric     Value
Documents  3.89M
Tokens     ~1.5B
Language   Brazilian Portuguese
Format     Parquet

🔍 Processing Pipeline

The dataset went through a rigorous quality pipeline based on the SmolLM Training Playbook (a sketch of the three stages follows the list):

  1. Quality Filtering

    • Removal of short texts (< 100 characters)
    • Removal of repetitive content
    • Filtering of spam and low-quality content
  2. Language Detection

    • FastText LID to guarantee 100% Portuguese
    • Confidence threshold > 0.8
  3. Deduplication

    • MinHash LSH (datasketch)
    • Near-duplicate removal
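
A minimal Python sketch of the three stages, for illustration only. It uses the fasttext package with the pre-trained lid.176.bin LID model and the datasketch library, both named above; the unique-word-ratio repetition heuristic and the 0.85 LSH similarity threshold are assumptions, not the card's exact settings.

import fasttext
from datasketch import MinHash, MinHashLSH

lid_model = fasttext.load_model("lid.176.bin")  # pre-trained FastText LID model

def passes_quality(text: str) -> bool:
    # Stage 1: drop short texts (< 100 characters) and repetitive content.
    if len(text) < 100:
        return False
    words = text.split()
    # Illustrative repetition heuristic: a low unique-word ratio flags spam.
    return len(set(words)) / max(len(words), 1) > 0.3

def is_portuguese(text: str, threshold: float = 0.8) -> bool:
    # Stage 2: FastText LID with a confidence threshold > 0.8.
    labels, probs = lid_model.predict(text.replace("\n", " "))
    return labels[0] == "__label__pt" and probs[0] > threshold

def deduplicate(docs):
    # Stage 3: MinHash LSH (datasketch) near-duplicate removal.
    lsh = MinHashLSH(threshold=0.85, num_perm=128)  # 0.85 is an assumed threshold
    kept = []
    for i, text in enumerate(docs):
        mh = MinHash(num_perm=128)
        for token in set(text.split()):
            mh.update(token.encode("utf-8"))
        if not lsh.query(mh):  # keep only if no near-duplicate was kept before
            lsh.insert(str(i), mh)
            kept.append(text)
    return kept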

📁 Sources

  • Wikipedia PT-BR
  • CulturaX Portuguese
  • OSCAR Portuguese
  • Other Brazilian corpora

🚀 Usage

from datasets import load_dataset

# Load the full corpus from the Hugging Face Hub
dataset = load_dataset("limajr/nbr-500-corpus", split="train")

# Inspect the first document
for example in dataset:
    print(example["text"][:200])
    break
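
With ~3.89M documents, streaming mode avoids downloading all the Parquet files up front; a minimal sketch using the datasets streaming API:

from datasets import load_dataset

# streaming=True returns an IterableDataset; rows are fetched lazily
stream = load_dataset("limajr/nbr-500-corpus", split="train", streaming=True)

for example in stream.take(3):  # inspect the first three documents
    print(example["text"][:200])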

🎯 Purpose

This corpus was created specifically to train NBR-500, a small, efficient language model aimed at:

  • ✅ Running on edge devices (Edge AI)
  • ✅ Brazilian Portuguese applications
  • ✅ Low latency and low memory usage
  • ✅ Quantization to GGUF (Q4, Q8)

📦 Related Model

  • Model: limajr/nbr-500
  • Tokenizer: native BPE with a 32k vocabulary (46% more efficient than GPT-2 for PT-BR; see the sketch below)
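
The efficiency claim can be checked empirically by comparing token counts on the same text. A hedged sketch, assuming the NBR-500 tokenizer is hosted under limajr/nbr-500 and loadable with transformers (an assumption about how it is packaged):

from transformers import AutoTokenizer

text = "O modelo foi otimizado para o Português Brasileiro."
gpt2 = AutoTokenizer.from_pretrained("gpt2")
nbr500 = AutoTokenizer.from_pretrained("limajr/nbr-500")  # assumption: Hub-hosted tokenizer

# Fewer tokens for the same text means higher per-token efficiency for PT-BR.
print("GPT-2 tokens:  ", len(gpt2(text)["input_ids"]))
print("NBR-500 tokens:", len(nbr500(text)["input_ids"]))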

📜 License

Apache 2.0

🙏 Credits

Based on the practices of HuggingFace's SmolLM Training Playbook.
