Add dataset card for ELF pre-tokenized XSum

#2
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +52 -0
README.md ADDED
@@ -0,0 +1,52 @@
---
license: mit
task_categories:
- summarization
language:
- en
tags:
- diffusion
- flow-matching
- language-modeling
- elf
---

This repository contains the pre-tokenized XSum dataset used in the paper [ELF: Embedded Language Flows](https://huggingface.co/papers/2605.10938).

The dataset is tokenized with the T5 tokenizer and prepared for use with ELF, a class of continuous-time Flow Matching models that operate in the continuous embedding space of a frozen T5 encoder.
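
As a minimal sketch of how these token IDs map into the continuous space ELF operates in: the IDs can be fed through a frozen T5 encoder. The `t5-base` checkpoint below is an assumption, not necessarily the one ELF uses; see the GitHub repository for the exact setup.

```python
import torch
from transformers import T5EncoderModel

# Illustrative sketch only: embed pre-tokenized IDs with a frozen T5 encoder.
# The "t5-base" checkpoint is an assumption; see the GitHub repository for the
# exact encoder ELF uses.
encoder = T5EncoderModel.from_pretrained("t5-base")
encoder.requires_grad_(False)  # frozen, as described above
encoder.eval()

input_ids = torch.tensor([[37, 9, 1]])  # toy token IDs; 1 is T5's </s>
with torch.no_grad():
    embeddings = encoder(input_ids=input_ids).last_hidden_state
print(embeddings.shape)  # (batch, seq_len, d_model) = (1, 3, 768) for t5-base
```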

- **GitHub Repository:** [https://github.com/lillian039/ELF](https://github.com/lillian039/ELF)
- **Paper:** [ELF: Embedded Language Flows](https://huggingface.co/papers/2605.10938)

### Dataset Summary

For the summarization task (XSum), the data is structured for conditional generation:
- `condition_input_ids`: Tokenized source text (the article).
- `input_ids`: Tokenized target text (the summary).

The ELF model prepends the condition IDs to the input IDs and applies specific attention masks during training and inference.
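
A rough sketch of that layout follows. This is not the actual training code: the helper name and the padding convention are illustrative assumptions (0 is T5's pad token ID).

```python
import torch

def build_conditioned_example(condition_input_ids, input_ids, pad_id=0):
    """Illustrative only: prepend condition IDs and mask out padding."""
    full_ids = torch.cat([condition_input_ids, input_ids], dim=-1)
    attention_mask = (full_ids != pad_id).long()  # 1 = real token, 0 = padding
    return full_ids, attention_mask

condition = torch.tensor([37, 9, 1])  # toy source (article) IDs
target = torch.tensor([42, 5, 1])     # toy target (summary) IDs
full_ids, mask = build_conditioned_example(condition, target)
print(full_ids)  # tensor([37,  9,  1, 42,  5,  1])
print(mask)      # tensor([1, 1, 1, 1, 1, 1])
```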

### Sample Usage

You can load this dataset directly with the `datasets` library:

```python
from datasets import load_dataset

# Load the training split
dataset = load_dataset("embedded-language-flows/xsum_train_t5")

# Inspect one item; each example contains the fields described above
print(dataset["train"][0])
```
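
To inspect an example as text, the IDs can be decoded back with the T5 tokenizer. This is a sketch: the card only states that the data was tokenized with the T5 tokenizer, so `t5-base` is an assumed checkpoint.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("embedded-language-flows/xsum_train_t5")
# Assumption: "t5-base"; check the GitHub repository for the exact tokenizer.
tokenizer = AutoTokenizer.from_pretrained("t5-base")

example = dataset["train"][0]
print("article:", tokenizer.decode(example["condition_input_ids"], skip_special_tokens=True))
print("summary:", tokenizer.decode(example["input_ids"], skip_special_tokens=True))
```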

### Citation

```bibtex
@article{elf2026,
  title={ELF: Embedded Language Flows},
  author={Hu, Keya and Qiu, Linlu and Lu, Yiyang and Zhao, Hanhong and Li, Tianhong and Kim, Yoon and Andreas, Jacob and He, Kaiming},
  journal={arXiv preprint arXiv:2605.10938},
  year={2026}
}
```