Add dataset card, link to paper and GitHub

#1
by nielsr HF Staff - opened

Files changed (1):
  1. README.md +50 -0

README.md ADDED
---
license: mit
task_categories:
- text-generation
language:
- en
tags:
- diffusion-models
- flow-matching
---

# ELF: Embedded Language Flows

This repository contains the pre-tokenized datasets used in the paper [ELF: Embedded Language Flows](https://huggingface.co/papers/2605.10938).

[**GitHub**](https://github.com/lillian039/ELF) | [**Paper**](https://huggingface.co/papers/2605.10938)

ELF is a class of diffusion models that operates in continuous embedding space and is built on continuous-time Flow Matching. The datasets provided here are pre-tokenized with the T5 tokenizer and encoded with a frozen T5-small encoder, as described in the paper.

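For intuition, the tokenize-then-encode step described above can be sketched with the `transformers` library. This is a hypothetical reproduction, not the authors' preprocessing script: the exact `t5-small` checkpoint name, padding, and truncation settings are assumptions based on the card.

```python
# Hedged sketch of the pre-tokenization/encoding pipeline described above.
# Assumptions: the Hugging Face "t5-small" checkpoint and simple
# padding/truncation; the authors' actual preprocessing may differ.
import torch
from transformers import AutoTokenizer, T5EncoderModel

tokenizer = AutoTokenizer.from_pretrained("t5-small")
encoder = T5EncoderModel.from_pretrained("t5-small")
encoder.eval()  # frozen encoder: weights are never updated

texts = [
    "Flow matching operates in a continuous space.",
    "A second example sentence.",
]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    # last_hidden_state has shape (batch, seq_len, 512) for t5-small
    embeddings = encoder(**batch).last_hidden_state

print(tuple(embeddings.shape))
```

The resulting continuous embeddings, rather than discrete token ids, are what a flow-matching model of this kind operates on.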
## Dataset Details

The authors provide pre-tokenized splits for several benchmarks:
- **OpenWebText**: used for unconditional generation.
- **WMT14 De-En**: used for machine translation.
- **XSum**: used for abstractive summarization.

## Usage

You can load the pre-tokenized datasets directly with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Example: load the OpenWebText pre-tokenized dataset
dataset = load_dataset("embedded-language-flows/openwebtext-t5")

# Example: load the WMT14 De-En validation set
dataset_val = load_dataset("embedded-language-flows/wmt14_de-en_validation_t5")
```

## Citation

```bibtex
@article{elf2026,
  title={ELF: Embedded Language Flows},
  author={Hu, Keya and Qiu, Linlu and Lu, Yiyang and Zhao, Hanhong and Li, Tianhong and Kim, Yoon and Andreas, Jacob and He, Kaiming},
  journal={arXiv preprint arXiv:2605.10938},
  year={2026}
}
```