edbeeching (HF Staff) committed
Commit 00f5bf2 · 1 Parent(s): ec7f30e

Update README.md

Files changed (1):
  README.md (+16 -14)
README.md CHANGED
@@ -7,29 +7,31 @@ tags:
 - code
 ---
 
-Model Card
-Llama-se-rl-adapter
-Adapted weights of an RL fine-tuned model based on LLama. Authored by Edward Beeching, Younes Belkada, Kashiv Rasul, Lewis Tunstall and Leandro von Werra.
+# Llama-se-rl-adapter
+Adapter weights of an RL fine-tuned model based on LLaMa. Authored by Edward Beeching, Younes Belkada, Kashif Rasul, Lewis Tunstall and Leandro von Werra.
 
 
-Model Description
-Llama-se-rl is a Llama based model that has been first fine-tuned on the Stack Exchange dataset and the RL fine-tuned using a reward model . This dataset consists of questions and answers from various domains in Stack Exchange, such as programming, mathematics, physics, and more. The model is designed to generate human-like responses to questions in these domains. The model has been training the respond to prompts with the following template:
+## Model Description
+**Llama-se-rl** is a Llama-based model that has first been fine-tuned on the Stack Exchange dataset and then RL fine-tuned using a Stack Exchange Reward Model. This dataset consists of questions and answers from various domains in Stack Exchange, such as programming, mathematics, physics, and more. The model is designed to generate human-like responses to questions in these domains. The model has been trained to respond to prompts with the following template:
 
+```
 Question: <Query>
 
 Answer: <Response>
+```
 
-Intended Uses & Limitations
-Llama-se-rl is intended for use in generating responses to questions related to the Stack Exchange dataset. It is suitable for generating answers to questions in the domains covered by the dataset, such as programming, mathematics, and physics. However, the model may not perform well on questions outside these domains or on questions requiring highly specific or technical knowledge.
+## Intended Uses & Limitations
+**Llama-se-rl** is intended for use in generating responses to questions related to the Stack Exchange dataset. It is suitable for generating answers to questions in the domains covered by the dataset, such as programming, mathematics, and physics. However, the model may not perform well on questions outside these domains or on questions requiring highly specific or technical knowledge.
 
-Limitations and Bias
-The Llama-se-rl model inherits limitations and biases from the Llama model and also those contained in the Stack Exchange dataset. The Stack Exchange dataset may contain biases in terms of the topics it covers and the users who contribute to it. It may not include all possible domains, and the quality of answers may vary. Additionally, the model may generate answers that are incorrect or misleading due to biases in the training data or the inherent limitations of the Llama architecture.
+## Limitations and Bias
+The **Llama-se-rl** model inherits limitations and biases from the Llama model and also those contained in the Stack Exchange dataset. The Stack Exchange dataset may contain biases in terms of the topics it covers and the users who contribute to it. It may not include all possible domains, and the quality of answers may vary. Additionally, the model may generate answers that are incorrect or misleading due to biases in the training data or the inherent limitations of the Llama architecture.
 
-BibTeX entry and citation info
-bibtex
-Copy code
+## BibTeX entry and citation info
+
+```bibtex
 @misc{beeching2023llama,
-title={Llama: A Fine-tuned GPT-2 Model for Stack Exchange},
-author={Edward Beeching and Younes Belkada and Kashiv Rasul and Lewis Tunstall and Leandro von Werra},
+title={StackLLaMa: An RL Fine-tuned LLaMa Model for Stack Exchange Question and Answering},
+author={Beeching, Edward and Belkada, Younes and Rasul, Kashif and Tunstall, Lewis and von Werra, Leandro},
 year={2023}
 }
+```
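
The prompt template added in this commit can be sketched in code. The helper below is hypothetical (not part of this repository); it only illustrates wrapping a query in the `Question:`/`Answer:` layout the card says the model was trained on. The commented lines show one plausible way to apply the adapter weights with the `peft` library, with placeholder repo ids, since the base LLaMA checkpoint is not named in the card.

```python
# Hypothetical helper illustrating the prompt template from the model card.
def format_prompt(query: str) -> str:
    """Wrap a user query in the Question/Answer template used at training time."""
    return f"Question: {query}\n\nAnswer: "

prompt = format_prompt("How do I sort a list in Python?")
print(prompt)

# To actually generate text, the adapter weights would be applied on top of a
# base LLaMA checkpoint, e.g. (repo ids are placeholders, not from the card):
#
# from transformers import AutoModelForCausalLM, AutoTokenizer
# from peft import PeftModel
#
# base = AutoModelForCausalLM.from_pretrained("<base-llama-checkpoint>")
# model = PeftModel.from_pretrained(base, "<this-adapter-repo>")
# tokenizer = AutoTokenizer.from_pretrained("<base-llama-checkpoint>")
# inputs = tokenizer(prompt, return_tensors="pt")
# outputs = model.generate(**inputs, max_new_tokens=128)
```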