# Ettin Pre-training Data

Phase 1 of 3: the diverse pre-training data mixture (1.7T tokens) used to train the Ettin model suite.

This dataset contains the pre-training-phase data used to train all Ettin encoder and decoder models. The data is provided in MDS format, ready for use with Composer and the ModernBERT training repository.
## Data Composition
| Data Source | Tokens (B) | Percentage | Description |
|---|---|---|---|
| DCLM | 837.2 | 49.1% | High-quality web crawl data |
| CC Head | 356.6 | 20.9% | Common Crawl head documents |
| Starcoder | 263.9 | 15.5% | Code repositories and files |
| Reddit | 80.3 | 4.7% | Social discussion threads |
| PeS2o | 57.3 | 3.4% | Scientific papers |
| Arxiv | 28.0 | 1.6% | Academic preprints |
| StackExchange | 19.6 | 1.2% | Q&A forums |
| Tulu Flan | 16.6 | 1.0% | Instruction-following data |
| Open-Web-Math | 12.7 | 0.7% | Mathematical content |
| Algebraic StackExchange | 12.6 | 0.7% | Math Q&A |
| CC News | 7.3 | 0.4% | News articles |
| Wikipedia | 7.3 | 0.4% | Encyclopedia articles |
| Total | 1,704.7 | 100.0% | Diverse mixture for foundation training |
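The percentage column follows directly from the token counts. A minimal sketch of the arithmetic (values copied from the table; note that the itemized rows sum to about 1,699.4B, slightly under the stated 1,704.7B total):

```python
# Token counts (in billions) per source, copied from the table above.
tokens_b = {
    "DCLM": 837.2, "CC Head": 356.6, "Starcoder": 263.9, "Reddit": 80.3,
    "PeS2o": 57.3, "Arxiv": 28.0, "StackExchange": 19.6, "Tulu Flan": 16.6,
    "Open-Web-Math": 12.7, "Algebraic StackExchange": 12.6,
    "CC News": 7.3, "Wikipedia": 7.3,
}

stated_total = 1704.7  # "Total" row in the table
itemized_total = sum(tokens_b.values())

# Percentage share of each source against the stated total.
shares = {name: round(100 * t / stated_total, 1) for name, t in tokens_b.items()}

print(f"itemized rows sum to {itemized_total:.1f}B of {stated_total}B stated")
print(f"DCLM share: {shares['DCLM']}%")
```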
## Usage

For pre-training, see the ModernBERT repo: https://github.com/AnswerDotAI/ModernBERT

### Direct Access
```python
from streaming import StreamingDataset

# Load the streaming dataset
dataset = StreamingDataset(
    remote='https://huggingface.co/datasets/jhu-clsp/ettin-pretraining-data',
    local='/tmp/ettin-pretraining-data',
    shuffle=True,
)

# Access samples
for sample in dataset:
    text = sample['text']
    # Process your data...
```
## Structure
Each folder contains one data source in MDS (Mosaic Data Shard) format:
- `arxiv/` - Academic papers from ArXiv
- `books/` - Literature and reference books
- `cc_head/` - High-quality Common Crawl documents
- `cc_news/` - News articles from Common Crawl
- `dclm/` - DataComp-LM filtered web data
- `open_web_math/` - Mathematical web content
- `algebraic_stackexchange/` - Math Q&A from StackExchange
- `pes2o/` - Scientific papers (PeS2o dataset)
- `reddit/` - Reddit discussion threads
- `stackexchange/` - General StackExchange Q&A
- `starcoder/` - Code from GitHub repositories
- `tulu_flan/` - Instruction-following examples
- `wikipedia/` - Wikipedia articles
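Since each source sits in its own subfolder, a single source can be streamed by pointing `remote` at that subfolder rather than the repository root. A minimal sketch of building the per-source paths (the `source_paths` helper is hypothetical; verify that your `streaming` setup accepts a per-subfolder remote before relying on it):

```python
# Hypothetical helper for per-source paths; folder names match the list above.
BASE_REMOTE = "https://huggingface.co/datasets/jhu-clsp/ettin-pretraining-data"

SOURCES = [
    "arxiv", "books", "cc_head", "cc_news", "dclm", "open_web_math",
    "algebraic_stackexchange", "pes2o", "reddit", "stackexchange",
    "starcoder", "tulu_flan", "wikipedia",
]

def source_paths(name, cache_root="/tmp/ettin-pretraining-data"):
    """Return (remote, local) for one MDS subfolder, e.g. 'wikipedia'."""
    if name not in SOURCES:
        raise ValueError(f"unknown source: {name!r}")
    return f"{BASE_REMOTE}/{name}", f"{cache_root}/{name}"

remote, local = source_paths("wikipedia")
# Pass these to StreamingDataset(remote=remote, local=local) as in the Usage section.
```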
## Related Resources
- Models: Ettin Model Suite (17M-1B parameters)
- Phase 2: Mid-training Data (250B tokens)
- Phase 3: Decay Phase Data (50B tokens)
- Training Order: Batch-level Data Order
- Paper: Arxiv link
- Code: GitHub Repository
## Citation

```bibtex
@misc{weller2025seqvsseqopen,
      title={Seq vs Seq: An Open Suite of Paired Encoders and Decoders},
      author={Orion Weller and Kathryn Ricci and Marc Marone and Antoine Chaffin and Dawn Lawrie and Benjamin Van Durme},
      year={2025},
      eprint={2507.11412},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.11412},
}
```