---
dataset_info:
  features:
  - name: text_trg
    dtype: string
  splits:
  - name: train
    num_bytes: 2535717504
    num_examples: 1268362
  - name: test
    num_bytes: 99932966
    num_examples: 50000
  download_size: 1649827296
  dataset_size: 2635650470
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
task_categories:
- text-generation
language:
- en
---
# COSMOS: Compressed and Smooth Latent Space for Text Diffusion Modeling
This repository contains a pre-processed dataset used in the paper "Compressed and Smooth Latent Space for Text Diffusion Modeling". The paper introduces COSMOS, a novel approach to text generation that operates entirely in a compressed, smooth latent space tailored specifically for diffusion. This dataset is part of the collection of pre-processed datasets recommended for training and evaluating COSMOS models across various generative tasks, including story generation, question generation, summarization, and detoxification.
The official code for the COSMOS framework is available on GitHub: [MeshchaninovViacheslav/cosmos](https://github.com/MeshchaninovViacheslav/cosmos)
## Paper Abstract
Autoregressive language models dominate modern text generation, yet their sequential nature introduces fundamental limitations: decoding is slow, and maintaining global coherence remains challenging. Diffusion models offer a promising alternative by enabling parallel generation and flexible control; however, their application to text generation is hindered by the high dimensionality of token-level representations. We introduce Cosmos, a novel approach to text generation that operates entirely in a compressed, smooth latent space tailored specifically for diffusion. This space is learned using an autoencoder trained simultaneously for token-level reconstruction and alignment with frozen activations from a pretrained language encoder, providing robust semantic grounding and enabling effective perturbation-based augmentations. Empirically, we demonstrate that text representations can be compressed by $8\times$ while maintaining generation quality comparable to token-level diffusion models. Furthermore, increasing the latent sequence length allows Cosmos to surpass both diffusion-based and autoregressive baselines. We evaluate Cosmos on four diverse generative tasks, including story generation, question generation, summarization, and detoxification, and compare it with various generative paradigms. Cosmos achieves comparable or superior generation quality while offering more than $2\times$ faster inference. Code is released at https://github.com/MeshchaninovViacheslav/cosmos.
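To make the training recipe behind these latents concrete, here is a minimal, hypothetical PyTorch sketch of the objective described in the abstract: a token-level reconstruction loss combined with an alignment loss against frozen activations of a pretrained language encoder. Every interface below (`autoencoder.encode/decode/project`, `frozen_encoder`, the weight `alpha`) is an illustrative assumption, not the repository's actual API.

```python
import torch
import torch.nn.functional as F

def cosmos_ae_loss(autoencoder, frozen_encoder, input_ids, attention_mask, alpha=1.0):
    """Sketch of the COSMOS autoencoder objective (assumed interfaces):
    token-level reconstruction + alignment with a frozen pretrained encoder."""
    # Encode tokens into a compressed latent sequence, then decode back to token logits.
    latents = autoencoder.encode(input_ids, attention_mask)   # (B, num_latents, d)
    logits = autoencoder.decode(latents, attention_mask)      # (B, seq_len, vocab)

    # Token-level reconstruction loss.
    rec_loss = F.cross_entropy(logits.flatten(0, 1), input_ids.flatten())

    # Alignment loss: match latent representations (via an assumed projection head
    # that maps them back to token positions) to frozen encoder activations.
    with torch.no_grad():
        target = frozen_encoder(input_ids, attention_mask)    # (B, seq_len, hidden)
    aligned = autoencoder.project(latents)                    # (B, seq_len, hidden)
    align_loss = F.mse_loss(aligned, target)

    return rec_loss + alpha * align_loss
```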
## Sample Usage
This section provides steps for using this dataset within the COSMOS framework, adapted from the official GitHub repository.
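If you only want to inspect the data outside the COSMOS pipeline, it can be loaded with the 🤗 `datasets` library. The repository ID below is a placeholder; substitute this dataset's actual Hub ID:

```python
from datasets import load_dataset

# NOTE: replace the placeholder with this dataset's actual Hub repository ID.
ds = load_dataset("<hub-username>/<this-dataset>")

print(ds)                          # DatasetDict with 'train' (1,268,362 rows) and 'test' (50,000 rows)
print(ds["train"][0]["text_trg"])  # each example is a single 'text_trg' string
```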
### Dataset Preparation
This dataset, along with the others used in the paper, is pre-processed and available on the Hugging Face Hub. The COSMOS training scripts automatically download and save these datasets locally. To use this dataset, update the `dataset` field in your configuration file (`conf/config.yaml`) and run the data loading script:
```yaml
# Example in conf/config.yaml for the 'rocstories' dataset
- dataset: "rocstories"
```
Then, run the data loading utility from the project root:
```bash
python -m utils.load_to_hub --config_path ../conf/ --load_from_hub
# or: uv run python -m utils.load_to_hub --config_path ../conf/ --load_from_hub
```
### Generation
After training the autoencoder and diffusion models with this or other COSMOS datasets, you can generate new text samples using the `generate.py` script. Ensure you have the appropriate model checkpoints loaded.
```bash
CUDA_LAUNCH_BLOCKING=1 \
HYDRA_FULL_ERROR=1 \
uv run \
torchrun --nproc_per_node=4 --master_port=12345 \
generate.py \
    dataset=rocstories \
    diffusion.dynamic.N=200 \
    diffusion.dynamic.d=5 \
    diffusion.training.batch_size=512 \
    encoder.latent.num_latents=16 \
    encoder.embedding.max_position_embeddings=128 \
    decoder.latent.num_latents=16 \
    decoder.embedding.max_position_embeddings=128 \
    autoencoder.model.load_checkpoint='"autoencoder-num_latents=16-wikipedia-final-128/100000.pth"' \
    diffusion.model.load_checkpoint='"diffusion-rocstories-16-d=5-final/180000.pth"' \
    diffusion.generation.num_gen_texts=2000 \
    training=""
```
## Citation
If you use this dataset or the associated work, please cite the paper:
```bibtex
@article{Meshchaninov2025CompressedAS,
  title={Compressed and Smooth Latent Space for Text Diffusion Modeling},
  author={Viacheslav Meshchaninov and Egor Chimbulatov and Alexander Shabalin and Aleksandr Abramov and Dmitry P. Vetrov},
  journal={ArXiv},
  year={2025},
  volume={abs/2506.21170},
  url={https://arxiv.org/abs/2506.21170}
}
```