Add pipeline tag and library name metadata

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +38 -36
README.md CHANGED
@@ -1,37 +1,39 @@
- ---
- base_model:
- - meta-llama/Llama-3.1-8B-Instruct
- datasets:
- - infosense/yield
- language:
- - en
- license: llama3.1
- ---
-
- # YIELD Fine-Tuning Adapters
-
- This repository contains the persona adapter models presented in the paper [YIELD: A Large-Scale Dataset and Evaluation Framework for Information Elicitation Agents](https://doi.org/10.48550/arXiv.2604.10968).
-
- ## Model Information
-
- All adapters in this repository are LoRA adapters trained on top of the `Llama-3.1-8B-Instruct`, `Llama-3.2-3B-Instruct`, and `DeepSeek-R1-Distill-Llama-8B` models. All models are fine-tuned using both the Supervised Fine-Tuning (SFT) and Offline Reinforcement-Learning (ORL) pipelines detailed in the paper.
-
- ## Resources
-
- - **Code Repository**: [GitHub - infosenselab/yield](https://github.com/infosenselab/yield)
- - **Dataset**: [infosense/yield](https://huggingface.co/datasets/infosense/yield)
- - **Paper**: [arXiv:](https://doi.org/10.48550/arXiv.2604.10968)
-
- ## Citing YIELD
-
- If you use this resource in your projects, please cite the following paper:
-
- ```bibtex
- @misc{De_Lima_YIELD_A_Large-Scale_2026,
-   author = {De Lima, Victor and Yang, Grace Hui},
-   doi = {10.48550/arXiv.2604.10968},
-   title = {{YIELD: A Large-Scale Dataset and Evaluation Framework for Information Elicitation Agents}},
-   url = {https://arxiv.org/abs/2604.10968},
-   year = {2026}
- }
+ ---
+ base_model:
+ - meta-llama/Llama-3.1-8B-Instruct
+ datasets:
+ - infosense/yield
+ language:
+ - en
+ license: llama3.1
+ library_name: peft
+ pipeline_tag: text-generation
+ ---
+
+ # YIELD Fine-Tuning Adapters
+
+ This repository contains the persona adapter models presented in the paper [YIELD: A Large-Scale Dataset and Evaluation Framework for Information Elicitation Agents](https://huggingface.co/papers/2604.10968).
+
+ ## Model Information
+
+ All adapters in this repository are LoRA adapters trained on top of the `Llama-3.1-8B-Instruct`, `Llama-3.2-3B-Instruct`, and `DeepSeek-R1-Distill-Llama-8B` models. All models are fine-tuned using both the Supervised Fine-Tuning (SFT) and Offline Reinforcement Learning (ORL) pipelines detailed in the paper. These models are designed to act as Information Elicitation Agents (IEAs), which aim to elicit information from users to support institutional or task-oriented objectives.
+
+ ## Resources
+
+ - **Code Repository**: [GitHub - infosenselab/yield](https://github.com/infosenselab/yield)
+ - **Dataset**: [infosense/yield](https://huggingface.co/datasets/infosense/yield)
+ - **Paper**: [Hugging Face Papers](https://huggingface.co/papers/2604.10968)
+
+ ## Citing YIELD
+
+ If you use this resource in your projects, please cite the following paper:
+
+ ```bibtex
+ @misc{De_Lima_YIELD_A_Large-Scale_2026,
+   author = {De Lima, Victor and Yang, Grace Hui},
+   doi = {10.48550/arXiv.2604.10968},
+   title = {{YIELD: A Large-Scale Dataset and Evaluation Framework for Information Elicitation Agents}},
+   url = {https://arxiv.org/abs/2604.10968},
+   year = {2026}
+ }
  ```
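The card states these are LoRA adapters but does not explain the mechanism. As background, the low-rank update a LoRA adapter applies to a frozen base weight can be sketched with toy NumPy matrices (the dimensions `d`, `r`, and scaling `alpha` below are illustrative placeholders, not the repository's actual adapter configuration):

```python
import numpy as np

# Illustrative sketch of the LoRA idea behind these adapters (not the
# repository's code): the frozen base weight W is augmented by a low-rank
# update (alpha / r) * B @ A, where A and B are the small trained matrices
# stored in the adapter files.
rng = np.random.default_rng(0)
d, r, alpha = 16, 4, 8          # hidden size, LoRA rank, scaling -- toy values

W = rng.normal(size=(d, d))     # frozen base weight (never updated)
A = rng.normal(size=(r, d))     # trained down-projection
B = np.zeros((d, r))            # trained up-projection, zero-initialized

# Effective weight used at inference time.
W_eff = W + (alpha / r) * B @ A

# Because B starts at zero, the adapter contributes nothing initially,
# so the adapted model begins exactly at the base model's behavior.
assert np.allclose(W_eff, W)
```

Training then updates only `A` and `B` (2 * d * r parameters here) instead of the full d * d weight, which is why the adapters in this repository are small relative to their 3B- and 8B-parameter base models.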