---
dataset_info:
  features:
    - name: text_trg
      dtype: string
  splits:
    - name: train
      num_bytes: 9033045649
      num_examples: 16086245
    - name: test
      num_bytes: 28125706
      num_examples: 50000
  download_size: 6107844688
  dataset_size: 9061171355
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
task_categories:
  - text-generation
tags:
  - diffusion-models
  - latent-space
---

# COSMOS Dataset

This repository contains the `wikipedia` dataset, one of the pre-processed datasets used in the paper "Compressed and Smooth Latent Space for Text Diffusion Modeling".

The paper introduces COSMOS, a novel approach to text generation that operates entirely in a compressed, smooth latent space tailored specifically for diffusion. This space is learned using an autoencoder trained for token-level reconstruction and alignment with frozen activations from a pretrained language encoder. The datasets are integral for training and evaluating text diffusion models across various generative tasks, including story generation, question generation, summarization, and detoxification.
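As the metadata above indicates, each record holds a single string field, `text_trg`. The split sizes also pin down the average example size (roughly 560 bytes of text for both splits). A minimal sketch for checking this and for peeking at the data with the `datasets` library follows; the `peek` helper is illustrative, and the repo id you pass to it depends on where this card is hosted:

```python
# Split statistics copied from the dataset card metadata above.
TRAIN_BYTES, TRAIN_EXAMPLES = 9_033_045_649, 16_086_245
TEST_BYTES, TEST_EXAMPLES = 28_125_706, 50_000

# Both splits average roughly 560 bytes of text per example,
# suggesting the same preprocessing was applied to train and test.
avg_train_bytes = TRAIN_BYTES / TRAIN_EXAMPLES
avg_test_bytes = TEST_BYTES / TEST_EXAMPLES


def peek(repo_id: str, n: int = 3) -> None:
    """Print the start of the first n `text_trg` entries.

    Streaming avoids downloading the full ~6 GB archive up front.
    """
    from datasets import load_dataset  # pip install datasets

    stream = load_dataset(repo_id, split="train", streaming=True)
    for i, example in enumerate(stream):
        if i >= n:
            break
        print(example["text_trg"][:120])
```

Calling `peek("<namespace>/wikipedia")` with the actual repo id of this dataset prints the opening characters of a few training examples without materializing the whole split.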

## Paper

Compressed and Smooth Latent Space for Text Diffusion Modeling

## Code

The official implementation can be found on GitHub: https://github.com/MeshchaninovViacheslav/cosmos

## Sample Usage

After training the autoencoder and diffusion models as described in the GitHub repository, you can generate new text samples with the `generate.py` script. The following command is an example of generating text with a diffusion model trained on the `rocstories` dataset:

```bash
CUDA_LAUNCH_BLOCKING=1 \
HYDRA_FULL_ERROR=1 \
uv run \
torchrun --nproc_per_node=4 --master_port=12345 \
generate.py \
dataset=rocstories \
diffusion.dynamic.N=200 \
diffusion.dynamic.d=5 \
diffusion.training.batch_size=512 \
encoder.latent.num_latents=16 \
encoder.embedding.max_position_embeddings=128 \
decoder.latent.num_latents=16 \
decoder.embedding.max_position_embeddings=128 \
autoencoder.model.load_checkpoint='"autoencoder-num_latents=16-wikipedia-final-128/100000.pth"' \
diffusion.model.load_checkpoint='"diffusion-rocstories-16-d=5-final/180000.pth"' \
diffusion.generation.num_gen_texts=2000 \
training=""
```