Improve dataset card: Add description, links, metadata, and sample usage

#1
by nielsr - opened
Files changed (1)
  1. README.md +73 -0
README.md CHANGED
@@ -19,4 +19,77 @@ configs:
    path: data/train-*
  - split: test
    path: data/test-*
task_categories:
- text-generation
language:
- en
---

# COSMOS: Compressed and Smooth Latent Space for Text Diffusion Modeling

This repository contains a pre-processed dataset used in the paper "[Compressed and Smooth Latent Space for Text Diffusion Modeling](https://huggingface.co/papers/2506.21170)". The paper introduces **COSMOS**, an approach to text generation that operates entirely in a compressed, smooth latent space tailored specifically for diffusion. This dataset is one of the pre-processed datasets recommended for training and evaluating COSMOS models across generative tasks, including story generation, question generation, summarization, and detoxification.

The official code for the COSMOS framework is available on GitHub: [https://github.com/MeshchaninovViacheslav/cosmos](https://github.com/MeshchaninovViacheslav/cosmos)

## Paper Abstract

Autoregressive language models dominate modern text generation, yet their sequential nature introduces fundamental limitations: decoding is slow, and maintaining global coherence remains challenging. Diffusion models offer a promising alternative by enabling parallel generation and flexible control; however, their application to text generation is hindered by the high dimensionality of token-level representations. We introduce Cosmos, a novel approach to text generation that operates entirely in a compressed, smooth latent space tailored specifically for diffusion. This space is learned using an autoencoder trained simultaneously for token-level reconstruction and alignment with frozen activations from a pretrained language encoder, providing robust semantic grounding and enabling effective perturbation-based augmentations. Empirically, we demonstrate that text representations can be compressed by $8\times$ while maintaining generation quality comparable to token-level diffusion models. Furthermore, increasing the latent sequence length allows Cosmos to surpass both diffusion-based and autoregressive baselines. We evaluate Cosmos on four diverse generative tasks, including story generation, question generation, summarization, and detoxification, and compare it with various generative paradigms. Cosmos achieves comparable or superior generation quality while offering more than $2\times$ faster inference. Code is released at [GitHub](https://github.com/MeshchaninovViacheslav/cosmos).

## Sample Usage

This section shows how to use this dataset within the COSMOS framework, adapted from the [official GitHub repository](https://github.com/MeshchaninovViacheslav/cosmos).

### Dataset Preparation

This dataset, along with the others used in the paper, is pre-processed and available on the Hugging Face Hub. The COSMOS training scripts automatically download and cache these datasets locally. To use this dataset, update the `dataset` field in your configuration file (`conf/config.yaml`):

```yaml
# Example in conf/config.yaml for the 'rocstories' dataset
- dataset: "rocstories"
```

Then, run the data loading utility from the project root:

```bash
python -m utils.load_to_hub --config_path ../conf/ --load_from_hub
# or: uv run python -m utils.load_to_hub --config_path ../conf/ --load_from_hub
```

### Generation

After training the autoencoder and diffusion models on this or other COSMOS datasets, you can generate new text samples with the `generate.py` script. Make sure the corresponding model checkpoints are available.

```bash
CUDA_LAUNCH_BLOCKING=1 \
HYDRA_FULL_ERROR=1 \
uv run \
torchrun --nproc_per_node=4 --master_port=12345 \
generate.py \
dataset=rocstories \
diffusion.dynamic.N=200 \
diffusion.dynamic.d=5 \
diffusion.training.batch_size=512 \
encoder.latent.num_latents=16 \
encoder.embedding.max_position_embeddings=128 \
decoder.latent.num_latents=16 \
decoder.embedding.max_position_embeddings=128 \
autoencoder.model.load_checkpoint='"autoencoder-num_latents=16-wikipedia-final-128/100000.pth"' \
diffusion.model.load_checkpoint='"diffusion-rocstories-16-d=5-final/180000.pth"' \
diffusion.generation.num_gen_texts=2000 \
training=""
```
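As a quick sanity check on the hyperparameters above (an illustration of my own, not part of the official scripts): encoding a 128-position token sequence into 16 latents corresponds to the $8\times$ sequence-length compression reported in the abstract.

```python
# Hyperparameters taken from the generation command above.
max_position_embeddings = 128  # token-level sequence length
num_latents = 16               # compressed latent sequence length

# Sequence-length compression factor achieved by the autoencoder.
compression = max_position_embeddings // num_latents
print(f"{compression}x compression")  # 8x compression
```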

## Citation

If you use this dataset or the associated work, please cite the paper:

```bibtex
@article{Meshchaninov2025CompressedAS,
  title={Compressed and Smooth Latent Space for Text Diffusion Modeling},
  author={Viacheslav Meshchaninov and Egor Chimbulatov and Alexander Shabalin and Aleksandr Abramov and Dmitry P. Vetrov},
  journal={arXiv preprint arXiv:2506.21170},
  year={2025},
  url={https://arxiv.org/abs/2506.21170}
}
```