Stage 1 (S1): General Knowledge Anchor — 6M FineWeb-Edu-Dedup
1. Project Overview
This dataset constitutes the General Knowledge Acquisition Phase (S1) of a research project developing a domain-adaptive LLM for ISO 27001 information-security auditing.
S1 serves as the cognitive foundation: the corpus is designed to establish high-level linguistic proficiency and general reasoning ability before specialized regulatory standards are introduced in Stage 2.
2. Dataset Summary
- Total Samples: 6,000,000
- Primary Source: HuggingFaceTB/smollm-corpus
- Role: General Reasoning & Scientific Logic Base.
3. Sourcing & Preprocessing (S1 Methodology)
The sourcing logic for this 6M-sample slice prioritized knowledge density over raw volume:
- Educational Filtering: Only samples with a high classifier-based "educational score" were retained, so the model learns professional, structured language.
- Sharding: Organized into 120 Parquet shards to support high-throughput, multi-node training.
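The selection-and-sharding steps above could be sketched as follows. This is a hypothetical reconstruction, not the actual S1 pipeline: the `edu_score` field name and the 0.75 threshold are illustrative assumptions, and only the shard count of 120 comes from the card.

```python
# Hypothetical sketch of the S1 selection logic: keep only samples whose
# classifier-based educational score clears a threshold, then assign the
# survivors round-robin to a fixed number of shards (written as Parquet
# in the real pipeline; plain lists here for illustration).
# `edu_score` and THRESHOLD are assumptions, not the card's actual values.

THRESHOLD = 0.75   # assumed cut-off for the educational-score classifier
NUM_SHARDS = 120   # matches the shard count stated in the card

def select_and_shard(samples):
    """samples: iterable of dicts with 'text' and 'edu_score' keys."""
    shards = [[] for _ in range(NUM_SHARDS)]
    kept = 0
    for sample in samples:
        if sample["edu_score"] >= THRESHOLD:
            shards[kept % NUM_SHARDS].append(sample["text"])
            kept += 1
    return shards

demo = [
    {"text": "A structured lecture on cryptographic hashing.", "edu_score": 0.91},
    {"text": "lol random chat log", "edu_score": 0.12},
]
shards = select_and_shard(demo)
```

Round-robin assignment keeps shard sizes balanced, which matters for the multi-node training the card targets: each worker can read a disjoint subset of shards at similar throughput.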
4. Technical Specifications
| Parameter | Value |
|---|---|
| Format | Parquet (Compressed) |
| Typical Sequence Length | 600–4096 tokens |
| Language | English (High-Proficiency) |
5. Usage in Continual Pre-training
This dataset is intended to be interleaved with Math/Code and Multilingual streams to reach a Stage 1 target of 10B tokens.
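Probabilistic interleaving of the three streams could be sketched as below. The stream names other than this dataset's role and the 60/25/15 mixing weights are illustrative assumptions; the card does not specify the actual ratios. (In practice, `datasets.interleave_datasets` with `probabilities=` performs this mixing directly on `Dataset`/`IterableDataset` objects.)

```python
import random

# Hypothetical sketch of stream interleaving toward the 10B-token Stage 1 mix.
# Weights of 0.60 / 0.25 / 0.15 are assumptions for illustration only.

def interleave(streams, weights, n, seed=0):
    """Draw n examples, picking each source stream with the given probability."""
    rng = random.Random(seed)
    names = list(streams)
    iters = {name: iter(streams[name]) for name in names}
    out = []
    for _ in range(n):
        name = rng.choices(names, weights=weights)[0]
        out.append((name, next(iters[name])))
    return out

streams = {
    "fineweb_edu":  (f"edu-{i}" for i in range(100)),
    "math_code":    (f"mc-{i}" for i in range(100)),
    "multilingual": (f"ml-{i}" for i in range(100)),
}
mix = interleave(streams, weights=[0.60, 0.25, 0.15], n=10)
```

Sampling per-example (rather than concatenating whole streams) keeps the mixture ratio roughly constant throughout training, which avoids distribution shift late in the run.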
Loading for Training (Streaming)

```python
from datasets import load_dataset

# Stream the dataset so training can start without downloading all 120 shards up front.
dataset = load_dataset("JoTeqtheFirstAI/fineweb-edu-dedup6m", split="train", streaming=True)
```
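With a streaming dataset, a fixed token budget (such as this stream's share of the 10B-token Stage 1 target) can be enforced by cutting the pass off once the budget is spent. The sketch below is a hedged illustration: the whitespace "tokenizer" is a stand-in for the model's real tokenizer, and the demo stream replaces the actual `load_dataset` output.

```python
from itertools import count, islice

# Hypothetical sketch: consume a streaming dataset only up to a token budget.
# len(text.split()) is a crude token-count stand-in (assumption), not a real tokenizer.

def take_token_budget(stream, budget):
    """Yield examples from `stream` until roughly `budget` tokens are consumed."""
    used = 0
    for example in stream:
        n = len(example["text"].split())
        if used + n > budget:
            break
        used += n
        yield example

# Demo stream standing in for the real streaming dataset: 10 tokens per example.
demo_stream = ({"text": "word " * 10} for _ in count())
taken = list(take_token_budget(demo_stream, budget=55))
```

Because the generator stops pulling from the stream once the budget is hit, no bytes beyond the needed shards are fetched, which is the point of `streaming=True` for a 6M-sample corpus.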