---
size_categories:
  - 1K<n<10K
task_categories:
  - text-generation
dataset_info:
  features:
    - name: path
      sequence: int64
    - name: edge_list
      sequence:
        sequence: int64
    - name: source
      dtype: int64
    - name: goal
      dtype: int64
  splits:
    - name: train
      num_bytes: 15200000
      num_examples: 50000
    - name: validation
      num_bytes: 30400
      num_examples: 100
    - name: test
      num_bytes: 1520000
      num_examples: 5000
  download_size: 1165940
  dataset_size: 16750400
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
tags:
  - synthetic
---

# Insertion Language Models Dataset

Synthetic data for the paper [Insertion Language Models: Sequence Generation with Arbitrary-Position Insertions](https://arxiv.org/abs/2505.05755).
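
The splits can be loaded with the Hugging Face `datasets` library. A minimal sketch; the repo id below is an assumption, so substitute the actual Hub id of this dataset:

```python
from datasets import load_dataset

# Assumed repo id; replace with the actual Hub id of this dataset.
ds = load_dataset("dhruvdcoder/star-small")

example = ds["train"][0]
print(example["source"], example["goal"])  # integer node ids
print(example["edge_list"])  # list of [u, v] node-id pairs
print(example["path"])       # node sequence connecting source to goal
```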

## Abstract

Autoregressive models (ARMs), which predict subsequent tokens one-by-one "from left to right," have achieved significant success across a wide range of sequence generation tasks. However, they struggle to accurately represent sequences that require satisfying sophisticated constraints or whose sequential dependencies are better addressed by out-of-order generation. Masked Diffusion Models (MDMs) address some of these limitations, but the process of unmasking multiple tokens simultaneously in MDMs can introduce incoherences, and MDMs cannot handle arbitrary infilling constraints when the number of tokens to be filled in is not known in advance. In this work, we introduce Insertion Language Models (ILMs), which learn to insert tokens at arbitrary positions in a sequence; that is, they jointly select both the position and the vocabulary element to be inserted. By inserting tokens one at a time, ILMs can represent strong dependencies between tokens, and their ability to generate sequences in arbitrary order allows them to accurately model sequences where token dependencies do not follow a left-to-right sequential structure. To train ILMs, we propose a tailored network parameterization and use a simple denoising objective. Our empirical evaluation demonstrates that ILMs outperform both ARMs and MDMs on common planning tasks. Furthermore, we show that ILMs outperform MDMs and perform on par with ARMs in an unconditional text generation task while offering greater flexibility than MDMs in arbitrary-length text infilling. The code is available at https://github.com/dhruvdcoder/ILM.
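
As a rough illustration of the generation process described above (not the paper's actual parameterization), an ILM repeatedly chooses a (position, token) pair and inserts the token until a stop decision is made. A toy, runnable sketch in which a random policy stands in for the trained model:

```python
import random

VOCAB = list(range(10))  # toy vocabulary of integer tokens

def toy_insertion_policy(seq):
    """Stand-in for a trained ILM: returns (position, token), or None to stop.
    A real ILM would jointly score every (position, token) pair."""
    if len(seq) >= 8 or (seq and random.random() < 0.2):
        return None  # stop decision
    position = random.randint(0, len(seq))  # any of the len(seq)+1 gaps
    token = random.choice(VOCAB)
    return position, token

def generate(policy):
    seq = []  # generation starts from the empty sequence
    while (action := policy(seq)) is not None:
        position, token = action
        seq.insert(position, token)  # exactly one insertion per step
    return seq

print(generate(toy_insertion_policy))
```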

## Project Page

https://dhruveshp.com/projects/ilm

## Code

The code for the paper is available at: https://github.com/dhruvdcoder/ILM