Improve dataset card: Add metadata, paper/code links, and sample usage

#2
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +41 -0
README.md CHANGED
@@ -1,4 +1,13 @@
 ---
+task_categories:
+- text-generation
+language:
+- en
+tags:
+- story-generation
+- question-generation
+- summarization
+- detoxification
 dataset_info:
   features:
   - name: target
@@ -20,3 +29,35 @@ configs:
   - split: test
     path: data/test-*
 ---
+
+# COSMOS Dataset
+
+This repository hosts pre-processed datasets used with **COSMOS: Compressed and Smooth Latent Space for Text Diffusion Modeling**, as presented in the paper [Compressed and Smooth Latent Space for Text Diffusion Modeling](https://huggingface.co/papers/2506.21170).
+
+COSMOS introduces a novel approach to text generation that operates entirely in a compressed, smooth latent space tailored specifically for diffusion. This method enables parallel generation and flexible control, achieving comparable or superior quality in tasks such as story generation, question generation, summarization, and detoxification.
+
+The official code implementation can be found on GitHub: [MeshchaninovViacheslav/cosmos](https://github.com/MeshchaninovViacheslav/cosmos).
+
+## Sample Usage
+
+After training the autoencoder and diffusion model as described in the [GitHub repository](https://github.com/MeshchaninovViacheslav/cosmos), you can generate new text samples using the following command:
+
+```bash
+CUDA_LAUNCH_BLOCKING=1 \
+HYDRA_FULL_ERROR=1 \
+uv run \
+torchrun --nproc_per_node=4 --master_port=12345 \
+generate.py \
+dataset=rocstories \
+diffusion.dynamic.N=200 \
+diffusion.dynamic.d=5 \
+diffusion.training.batch_size=512 \
+encoder.latent.num_latents=16 \
+encoder.embedding.max_position_embeddings=128 \
+decoder.latent.num_latents=16 \
+decoder.embedding.max_position_embeddings=128 \
+autoencoder.model.load_checkpoint='\"autoencoder-num_latents=16-wikipedia-final-128/100000.pth\"' \
+diffusion.model.load_checkpoint='\"diffusion-rocstories-16-d=5-final/180000.pth\"' \
+diffusion.generation.num_gen_texts=2000 \
+training=""
+```
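
Since the card's frontmatter declares train/test splits with a single `target` text column, a loading snippet could round out the Sample Usage section. A minimal sketch, assuming the `datasets` library is installed; the repo id below is a placeholder, as the dataset's actual Hub path is not shown in this diff:

```python
from datasets import load_dataset

# Placeholder repo id -- replace with this dataset's actual path on the Hub.
ds = load_dataset("user/cosmos-data")

# The card's configs declare train and test splits, each with a "target" text field.
print(ds["test"][0]["target"])
```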