---
base_model:
- DataPilot/ArrowPro-7B-RobinHood
- Aratako/AntlerStar-RP
- ohwi/japanese-stablelm-instruct-gamma-7b-dpo-uf-v1
- stabilityai/japanese-stablelm-base-gamma-7b
- Aratako/Ninja-v1-RP-expressive
- Aratako/ArrowPro-7B-RobinHood-toxic
- Elizezen/Phos-7B-RP
- DataPilot/ArrowPro-7B-KillerWhale
- umiyuki/Umievo-itr012-Gleipnir-7B
- DataPilot/ArrowPro-7B-KUJIRA
library_name: transformers
tags:
- mergekit
- merge
language:
- ja
pipeline_tag: text-generation
license: apache-2.0
---
# RP-7b-instruct

🚨 **This model is tuned for RP (roleplay), and its knowledge is likely unstable.**

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Output example

```
[INST] <<SYS>>
あなたは日本語を話す優秀なアシスタントです。回答には必ず日本語で答えてください。また考える過程も出力してください。
<</SYS>>

まどマギで一番好きなキャラは? [/INST]まどマギで一番好きなキャラクターは、鹿目まどかです。彼女は物語の中心人物であり、優しくて芯の強い性格が魅力的です。また、彼女が魔法少女としての運命を受け入れ、他の魔法少女たちを救うために奔走する姿には感動します。彼女の最期の選択も、多くの人々の心に残る名シーンとなっています。
```

## Test environment

This model was tested using [text-generation-webui](https://github.com/oobabooga/text-generation-webui/tree/main). I used the `min_p` preset and the `Null preset` with temperature=0.3 for generation.

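If you want to mirror those settings with the Transformers API instead of text-generation-webui, a minimal sketch is below. The `min_p=0.05` value is the webui default and an assumption on my part, and the `min_p` parameter requires a recent transformers release.

```python
from transformers import GenerationConfig

# "min_p" preset: min-p sampling (0.05 is the webui default; an assumption here)
min_p_config = GenerationConfig(do_sample=True, min_p=0.05)

# "Null preset" with temperature=0.3, as described above
null_preset_config = GenerationConfig(do_sample=True, temperature=0.3)
```

Either object can then be passed to `model.generate(..., generation_config=...)`.
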
## Usage

This format must be adhered to strictly, as deviations may result in less optimal outputs from the model.

The template used to construct a prompt for the Instruct model is specified as follows:

```
<s>[INST] <<SYS>>\n{SYSTEM_PROMPT}\n<</SYS>>\n\n{USER_MESSAGE_1} [/INST] {BOT_MESSAGE_1}</s>[INST] {USER_MESSAGE_2} [/INST]
```

Please be aware that `<s>` and `</s>` are special tokens used for the beginning of string (BOS) and end of string (EOS), respectively, while `[INST]` and `[/INST]` are regular strings.

For the `{SYSTEM_PROMPT}` part, we recommend using "あなたは日本語を話す優秀なアシスタントです。回答には必ず日本語で答えてください。また考える過程も出力してください。"

For the `{USER_MESSAGE_1}` part, we recommend using `{instruction}\n{input}`.

In other words, we recommend the following:

```
<s>[INST] <<SYS>>\nあなたは日本語を話す優秀なアシスタントです。回答には必ず日本語で答えてください。また考える過程も出力してください。\n<</SYS>>\n\n{instruction1}\n{input1} [/INST] {BOT_MESSAGE_1}</s>[INST] {instruction2}\n{input2} [/INST]
```

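If you assemble this prompt by hand rather than through the tokenizer's chat template (used in the next section), a minimal sketch follows; the `build_prompt` helper is illustrative, not part of the model's API.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nitky/RP-7b-instruct")

# [INST] / [/INST] are plain text, so they belong in the string; <s> (BOS) is a
# special token, so we let the tokenizer prepend it rather than typing it out.
def build_prompt(system_prompt: str, user_message: str) -> str:
    return f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_message} [/INST]"

prompt = build_prompt(
    "あなたは日本語を話す優秀なアシスタントです。回答には必ず日本語で答えてください。また考える過程も出力してください。",
    "まどマギで一番好きなキャラは?",
)
inputs = tokenizer(prompt, return_tensors="pt")  # add_special_tokens=True adds <s>
```
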
| | |
### Use the instruct model

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "nitky/RP-7b-instruct"
# device_map="auto" dispatches the model to the available device(s),
# so no explicit model.to(device) call is needed afterwards
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [
    {"role": "system", "content": "あなたは日本語を話す優秀なアシスタントです。回答には必ず日本語で答えてください。また考える過程も出力してください。"},
    {"role": "user", "content": "まどマギで一番好きなキャラは?"}
]

# apply_chat_template builds the [INST] prompt described above, including the BOS token
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(model.device)

generated_ids = model.generate(model_inputs, max_new_tokens=256, do_sample=True, temperature=0.3)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```

## Merge Details
### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [stabilityai/japanese-stablelm-base-gamma-7b](https://huggingface.co/stabilityai/japanese-stablelm-base-gamma-7b) as the base.

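For intuition: Model Stock averages the fine-tuned weights and then interpolates back toward the base model, with a ratio derived from the angle between the fine-tuned deltas. Below is a rough per-tensor sketch of that idea, a paraphrase of the paper with a simplified cos(theta) estimate, not mergekit's actual implementation.

```python
import torch
import torch.nn.functional as F

def model_stock(base: torch.Tensor, tuned: list[torch.Tensor]) -> torch.Tensor:
    # Average the k fine-tuned weights, then interpolate toward the base with
    # t = k*cos(theta) / (1 + (k-1)*cos(theta))  (ratio from arXiv:2403.19522)
    k = len(tuned)
    deltas = [w - base for w in tuned]
    # Simplification: estimate cos(theta) from the first pair of deltas only
    cos_theta = F.cosine_similarity(deltas[0].flatten(), deltas[1].flatten(), dim=0)
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    w_avg = torch.stack(tuned).mean(dim=0)
    return t * w_avg + (1 - t) * base
```
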
### Models Merged

The following models were included in the merge:
* [DataPilot/ArrowPro-7B-RobinHood](https://huggingface.co/DataPilot/ArrowPro-7B-RobinHood)
* [Aratako/AntlerStar-RP](https://huggingface.co/Aratako/AntlerStar-RP)
* [ohwi/japanese-stablelm-instruct-gamma-7b-dpo-uf-v1](https://huggingface.co/ohwi/japanese-stablelm-instruct-gamma-7b-dpo-uf-v1)
* [Aratako/Ninja-v1-RP-expressive](https://huggingface.co/Aratako/Ninja-v1-RP-expressive)
* [Aratako/ArrowPro-7B-RobinHood-toxic](https://huggingface.co/Aratako/ArrowPro-7B-RobinHood-toxic)
* [Elizezen/Phos-7B-RP](https://huggingface.co/Elizezen/Phos-7B-RP)
* [DataPilot/ArrowPro-7B-KillerWhale](https://huggingface.co/DataPilot/ArrowPro-7B-KillerWhale)
* [umiyuki/Umievo-itr012-Gleipnir-7B](https://huggingface.co/umiyuki/Umievo-itr012-Gleipnir-7B)
* [DataPilot/ArrowPro-7B-KUJIRA](https://huggingface.co/DataPilot/ArrowPro-7B-KUJIRA)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: model_stock
base_model: stabilityai/japanese-stablelm-base-gamma-7b
models:
- model: Aratako/AntlerStar-RP
- model: Aratako/ArrowPro-7B-RobinHood-toxic
- model: Aratako/Ninja-v1-RP-expressive
- model: DataPilot/ArrowPro-7B-KUJIRA
- model: DataPilot/ArrowPro-7B-RobinHood
- model: DataPilot/ArrowPro-7B-KillerWhale
- model: Elizezen/Phos-7B-RP
- model: ohwi/japanese-stablelm-instruct-gamma-7b-dpo-uf-v1
- model: umiyuki/Umievo-itr012-Gleipnir-7B
dtype: bfloat16
tokenizer_source: model:stabilityai/japanese-stablelm-base-gamma-7b
name: RP-7b-instruct
```

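To reproduce the merge, save the YAML above to a file and run it through mergekit's YAML runner; the file and output paths below are placeholders.

```sh
pip install mergekit
mergekit-yaml rp-7b-instruct.yaml ./RP-7b-instruct
```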