victordelima and nielsr (HF Staff) committed on
Commit ab79558
·
1 Parent(s): eb52490

Add task category (#1)

- Add task category (1f3cad2f9724dd267801d036d7adceb932fb71d9)


Co-authored-by: Niels Rogge <nielsr@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +31 -29
README.md CHANGED
@@ -1,30 +1,32 @@
- ---
- language:
- - en
- license: cc-by-4.0
- ---
-
- ## Dataset Information
-
- Most conversational agents (CAs) are designed to satisfy user needs through user-driven interactions. However, many real-world settings, such as academic interviewing, judicial proceedings, and journalistic investigations, involve broader institutional decision-making processes and require agents that can elicit information from users. To enable systematic research on this setting, we present *YIELD*, a 26M-token dataset of 2,281 ethically sourced, human-to-human dialogues. For full details, see the accompanying paper [here](https://doi.org/10.48550/arXiv.2604.10968).
-
-
- ## Code Repository
-
- GitHub: https://github.com/infosenselab/yield
-
-
- ## Citing YIELD
-
- If you use this resource in your projects, please cite the following paper.
-
-
- ```bibtex
- @misc{De_Lima_YIELD_A_Large-Scale_2026,
-   author = {De Lima, Victor and Yang, Grace Hui},
-   doi = {10.48550/arXiv.2604.10968},
-   title = {{YIELD: A Large-Scale Dataset and Evaluation Framework for Information Elicitation Agents}},
-   url = {https://arxiv.org/abs/2604.10968},
-   year = {2026}
- }
+ ---
+ language:
+ - en
+ license: cc-by-4.0
+ task_categories:
+ - text-generation
+ ---
+
+ ## Dataset Information
+
+ Most conversational agents (CAs) are designed to satisfy user needs through user-driven interactions. However, many real-world settings, such as academic interviewing, judicial proceedings, and journalistic investigations, involve broader institutional decision-making processes and require agents that can elicit information from users. To enable systematic research on this setting, we present *YIELD*, a 26M-token dataset of 2,281 ethically sourced, human-to-human dialogues. For full details, see the accompanying paper [here](https://doi.org/10.48550/arXiv.2604.10968).
+
+
+ ## Code Repository
+
+ GitHub: https://github.com/infosenselab/yield
+
+
+ ## Citing YIELD
+
+ If you use this resource in your projects, please cite the following paper.
+
+
+ ```bibtex
+ @misc{De_Lima_YIELD_A_Large-Scale_2026,
+   author = {De Lima, Victor and Yang, Grace Hui},
+   doi = {10.48550/arXiv.2604.10968},
+   title = {{YIELD: A Large-Scale Dataset and Evaluation Framework for Information Elicitation Agents}},
+   url = {https://arxiv.org/abs/2604.10968},
+   year = {2026}
+ }
  ```