---
task_categories:
- text-generation
language:
- en
tags:
- story-generation
- question-generation
- summarization
- detoxification
dataset_info:
features:
- name: target
dtype: string
splits:
- name: train
num_bytes: 20493594
num_examples: 88161
- name: test
num_bytes: 2310690
num_examples: 10000
download_size: 14376849
dataset_size: 22804284
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# COSMOS Dataset
This repository hosts pre-processed datasets used with **COSMOS: Compressed and Smooth Latent Space for Text Diffusion Modeling**, as presented in the paper [Compressed and Smooth Latent Space for Text Diffusion Modeling](https://huggingface.co/papers/2506.21170).
COSMOS is an approach to text generation that operates entirely in a compressed, smooth latent space tailored for diffusion. Operating in this latent space enables parallel generation and flexible control, with quality comparable or superior to baselines on tasks such as story generation, question generation, summarization, and detoxification.
The official code implementation can be found on GitHub: [MeshchaninovViacheslav/cosmos](https://github.com/MeshchaninovViacheslav/cosmos).
## Sample Usage
After training the autoencoder and diffusion model as described in the [GitHub repository](https://github.com/MeshchaninovViacheslav/cosmos), you can generate new text samples using the following command:
```bash
CUDA_LAUNCH_BLOCKING=1 \
HYDRA_FULL_ERROR=1 \
uv run \
torchrun --nproc_per_node=4 --master_port=12345 \
generate.py \
dataset=rocstories \
diffusion.dynamic.N=200 \
diffusion.dynamic.d=5 \
diffusion.training.batch_size=512 \
encoder.latent.num_latents=16 \
encoder.embedding.max_position_embeddings=128 \
decoder.latent.num_latents=16 \
decoder.embedding.max_position_embeddings=128 \
autoencoder.model.load_checkpoint='\"autoencoder-num_latents=16-wikipedia-final-128/100000.pth\"' \
diffusion.model.load_checkpoint='\"diffusion-rocstories-16-d=5-final/180000.pth\"' \
diffusion.generation.num_gen_texts=2000 \
training=""
```
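The pre-processed splits themselves can be inspected directly with the 🤗 `datasets` library. A minimal sketch, assuming the card's metadata above (a single string column named `target`, with `train` and `test` splits); the repo id below is a placeholder, so substitute this dataset's actual Hub id:

```python
from datasets import load_dataset

# Placeholder repo id -- replace with this dataset's actual Hub id.
ds = load_dataset("<user>/rocstories")

print(ds)                        # DatasetDict with train/test splits
print(ds["train"][0]["target"])  # one pre-processed story string
```

Per the metadata, the `train` split holds 88,161 examples and the `test` split 10,000, each example being a single `target` string.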