---
license: llama2
---
# youri-2x7b_dev
This model is a Mixture of Experts (MoE) merge of the following two models:
- [rinna/youri-7b-instruction](https://huggingface.co/rinna/youri-7b-instruction)
- [rinna/youri-7b-chat](https://huggingface.co/rinna/youri-7b-chat)
## 🧩 Configuration
This model was built with a custom version of the [mergekit](https://github.com/cg123/mergekit) library (mixtral branch), using the following configuration:
```yaml
base_model: rinna/youri-7b-chat
gate_mode: hidden # one of "hidden", "cheap_embed", or "random"
dtype: bfloat16 # output dtype (float32, float16, or bfloat16)
experts:
  - source_model: rinna/youri-7b-chat
    positive_prompts:
      - "質問と回答の選択肢を入力として受け取り、選択肢から回答を選択してください。"
      - "前提と仮説の関係を含意、矛盾、中立の中から回答してください。"
      - "以下のテキストを、ポジティブまたはネガティブの感情クラスのいずれかに分類してください。"
      - "以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。要求を適切に満たす応答を書きなさい。"
  - source_model: rinna/youri-7b-instruction
    positive_prompts:
      - "質問に対する回答を題名と文章から一言で抽出してください。回答は名詞で答えてください。"
      - "与えられたニュース記事を要約してください。"
      - "与えられた文が文法的であるかを回答してください。"
```
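
To reproduce the merge, a configuration like the one above is fed to mergekit's MoE entry point. A rough sketch, assuming the mixtral branch exposes the `mergekit-moe` command; the file and output paths (`config.yaml`, `./youri-2x7b_dev`) are illustrative:

```shell
# Install the mixtral branch of mergekit (branch name and entry point
# are assumptions based on the description above).
pip install git+https://github.com/cg123/mergekit.git@mixtral

# Run the MoE merge: reads the YAML config, downloads the expert
# models, and writes the merged model to the output directory.
mergekit-moe config.yaml ./youri-2x7b_dev
```

Note that this downloads both 7B expert models, so it requires substantial disk space and memory.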
The `positive_prompts` in the configuration above are taken from the instructions of the benchmark tasks on which each model excels.
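
Conceptually, `gate_mode: hidden` derives each expert's router vector from the base model's hidden representations of that expert's `positive_prompts`, so tokens that resemble those prompts get routed to that expert. A toy sketch of the idea in pure Python — the vectors below are made up for illustration, and this is not mergekit's actual implementation:

```python
import math

# Toy "hidden states" for each expert's positive prompts. In the real
# merge these come from the base model's hidden layers; here they are
# hand-made 3-d vectors.
expert_prompt_vectors = {
    "chat": [[1.0, 0.1, 0.0], [0.9, 0.2, 0.1]],         # classification-style prompts
    "instruction": [[0.0, 0.2, 1.0], [0.1, 0.1, 0.9]],  # extraction/summarization prompts
}

def mean(vectors):
    """Average a list of equal-length vectors component-wise."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

# Each expert's router vector is the mean hidden state of its prompts.
router = {name: mean(vs) for name, vs in expert_prompt_vectors.items()}

def route(token_vector):
    """Return softmax gate weights over experts for one token."""
    logits = {name: sum(a * b for a, b in zip(w, token_vector))
              for name, w in router.items()}
    z = max(logits.values())  # subtract max for numerical stability
    exps = {name: math.exp(l - z) for name, l in logits.items()}
    total = sum(exps.values())
    return {name: e / total for name, e in exps.items()}

# A token resembling a chat-style classification prompt is routed
# mostly to the chat expert.
gates = route([1.0, 0.0, 0.0])
```

This is why the choice of `positive_prompts` matters: they define the directions in hidden-state space that the gate compares each token against.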
For the benchmark results of each model, see [rinnakk's LM Benchmark](https://rinnakk.github.io/research/benchmarks/lm/index.html).
These benchmarks provide a detailed overview of the areas where each individual model performs particularly well, guiding the effective use of the merged model in various natural language processing tasks.
|