Improve dataset card: Add paper/code links, task categories, tags, and sample usage
#2
by nielsr (HF Staff) - opened

README.md CHANGED
```diff
@@ -19,4 +19,45 @@ configs:
     path: data/train-*
   - split: test
     path: data/test-*
+task_categories:
+- text-generation
+tags:
+- diffusion-models
+- latent-space
 ---
```
# COSMOS Dataset

This repository contains the `rocstories` dataset, one of the pre-processed datasets used in the paper "[Compressed and Smooth Latent Space for Text Diffusion Modeling](https://huggingface.co/papers/2506.21170)".
The paper introduces COSMOS, a novel approach to text generation that operates entirely in a compressed, smooth latent space tailored specifically for diffusion. This space is learned by an autoencoder trained for token-level reconstruction and for alignment with frozen activations from a pretrained language encoder. The datasets are used to train and evaluate text diffusion models across several generative tasks, including story generation, question generation, summarization, and detoxification.
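Once on the Hub, the `train` and `test` parquet splits defined in this card's `configs` section can be read with the `datasets` library. A minimal sketch — note that `<namespace>/rocstories` is a placeholder, since this card does not state its own Hub repo id:

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the actual Hub id of this dataset card.
ds = load_dataset("<namespace>/rocstories")

print(ds)              # DatasetDict with "train" and "test" splits
print(ds["train"][0])  # first training example
```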
## Paper

[Compressed and Smooth Latent Space for Text Diffusion Modeling](https://huggingface.co/papers/2506.21170)

## Code

The official implementation can be found on GitHub: [https://github.com/MeshchaninovViacheslav/cosmos](https://github.com/MeshchaninovViacheslav/cosmos)
## Sample Usage

After training the autoencoder and diffusion models as described in the [GitHub repository](https://github.com/MeshchaninovViacheslav/cosmos), you can generate new text samples with the `generate.py` script. The following command is an example of generating text with a diffusion model trained on a dataset such as `rocstories`:
```bash
CUDA_LAUNCH_BLOCKING=1 \
HYDRA_FULL_ERROR=1 \
uv run \
torchrun --nproc_per_node=4 --master_port=12345 \
generate.py \
    dataset=rocstories \
    diffusion.dynamic.N=200 \
    diffusion.dynamic.d=5 \
    diffusion.training.batch_size=512 \
    encoder.latent.num_latents=16 \
    encoder.embedding.max_position_embeddings=128 \
    decoder.latent.num_latents=16 \
    decoder.embedding.max_position_embeddings=128 \
    autoencoder.model.load_checkpoint='"autoencoder-num_latents=16-wikipedia-final-128/100000.pth"' \
    diffusion.model.load_checkpoint='"diffusion-rocstories-16-d=5-final/180000.pth"' \
    diffusion.generation.num_gen_texts=2000 \
    training=""
```