---
language:
- en
license: unknown
task_categories:
- text-generation
tags:
- insertion-language-model
- text-infilling
- planning
dataset_info:
  features:
  - name: text
    dtype: string
  - name: source
    dtype: string
  splits:
  - name: train
    num_bytes: 1947171484
    num_examples: 2198247
  - name: validation
    num_bytes: 24180392
    num_examples: 41623
  download_size: 1035549086
  dataset_size: 1971351876
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
---
# Insertion Language Models: Sequence Generation with Arbitrary-Position Insertions
This dataset contains the "Stories" data associated with the paper *Insertion Language Models: Sequence Generation with Arbitrary-Position Insertions*.
Project page: https://dhruveshp.com/projects/ilm
## Abstract
Autoregressive models (ARMs), which predict subsequent tokens one-by-one "from left to right," have achieved significant success across a wide range of sequence generation tasks. However, they struggle to accurately represent sequences that require satisfying sophisticated constraints or whose sequential dependencies are better addressed by out-of-order generation. Masked Diffusion Models (MDMs) address some of these limitations, but the process of unmasking multiple tokens simultaneously in MDMs can introduce incoherences, and MDMs cannot handle arbitrary infilling constraints when the number of tokens to be filled in is not known in advance. In this work, we introduce Insertion Language Models (ILMs), which learn to insert tokens at arbitrary positions in a sequence -- that is, they select jointly both the position and the vocabulary element to be inserted. By inserting tokens one at a time, ILMs can represent strong dependencies between tokens, and their ability to generate sequences in arbitrary order allows them to accurately model sequences where token dependencies do not follow a left-to-right sequential structure. To train ILMs, we propose a tailored network parameterization and use a simple denoising objective. Our empirical evaluation demonstrates that ILMs outperform both ARMs and MDMs on common planning tasks. Furthermore, we show that ILMs outperform MDMs and perform on par with ARMs in an unconditional text generation task while offering greater flexibility than MDMs in arbitrary-length text infilling.
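As a rough illustration of the generation procedure described in the abstract, the sketch below implements an insertion loop: at each step a scoring function jointly ranks (position, token) pairs together with a stop action, and the selected token is inserted at the selected position. This is a toy sketch only; `score_insertions`, the small vocabulary, and the stopping heuristic are invented for illustration and are not the paper's parameterization or training objective.

```python
import random

VOCAB = ["the", "cat", "sat", "on", "a", "mat"]

def score_insertions(sequence):
    """Toy stand-in for an ILM head: assigns a random score to every
    (position, token) pair plus a special "stop" action. A real ILM
    computes these scores jointly with a trained network."""
    scores = {"stop": 0.1 if len(sequence) < 6 else 10.0}
    for pos in range(len(sequence) + 1):  # insertion slots, including both ends
        for token in VOCAB:
            scores[(pos, token)] = random.random()
    return scores

def generate_ilm(prompt_tokens, max_steps=32):
    sequence = list(prompt_tokens)
    for _ in range(max_steps):
        scores = score_insertions(sequence)
        action = max(scores, key=scores.get)  # jointly pick position and token (or stop)
        if action == "stop":
            break
        pos, token = action
        sequence.insert(pos, token)  # insert one token at an arbitrary position
    return sequence

# Infilling: the given tokens stay in place while new tokens are inserted around them.
print(generate_ilm(["cat", "mat"]))
```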
## Code and Usage
The paper states that code is available ("The code is available at: this https URL" in the abstract), but a direct repository link is not reproduced in this card. For code and usage details, refer to the project page linked above: https://dhruveshp.com/projects/ilm
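The splits and features declared in the metadata above can be loaded with the Hugging Face `datasets` library. A minimal sketch is shown below; the repository ID is a placeholder and should be replaced with this dataset's actual Hub ID.

```python
from datasets import load_dataset

# NOTE: "<user>/<dataset-name>" is a placeholder; substitute this dataset's
# actual Hugging Face Hub repository ID.
ds = load_dataset("<user>/<dataset-name>")

train = ds["train"]            # 2,198,247 examples
validation = ds["validation"]  # 41,623 examples

# Each example has two string fields: "text" and "source".
example = train[0]
print(example["source"])
print(example["text"][:200])
```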