Update README.md
README.md
@@ -102,7 +102,7 @@ model-index:
       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Qwen2.5-7B-RRP-1M
       name: Open LLM Leaderboard
 ---
-LoRA trained on a thinking/reasoning and roleplaying dataset
+LoRA trained on a thinking/reasoning and roleplaying dataset and then merged with the Qwen2.5-7B-Instruct-1M model, which supports up to 1 million token context lengths.
 
 ## What this Model Can Do:
 
@@ -110,7 +110,6 @@ LoRA trained on a thinking/reasoning and roleplaying dataset
 - Reasoning: Tackle problems and answer your questions in a logical way (thanks to the LoRA layer).
 - Thinking: Use the <think> tag in your system prompts to activate the model's thinking abilities.
 
-## Merge Details
 ### Merge Method
 
 This model was merged using the Passthrough merge method using [Qwen/Qwen2.5-7B-Instruct-1M](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-1M) + [bunnycore/Qwen-2.5-7B-1M-RRP-v1-lora](https://huggingface.co/bunnycore/Qwen-2.5-7B-1M-RRP-v1-lora) as a base.
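A Passthrough merge of a base model plus a LoRA is typically expressed in a mergekit configuration along the following lines. This is a sketch reconstructed from the card's description, not the actual config from the repository; the `dtype` choice is an assumption.

```yaml
# Hypothetical mergekit config for this merge. The "+" syntax applies the
# LoRA to the base model before merging; dtype here is an assumption.
models:
  - model: Qwen/Qwen2.5-7B-Instruct-1M+bunnycore/Qwen-2.5-7B-1M-RRP-v1-lora
merge_method: passthrough
dtype: bfloat16
```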
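The `<think>`-tag usage described above can be sketched as follows. The card only says the tag belongs in the system prompt; the exact system-prompt wording and the `build_messages` helper here are illustrative assumptions, not taken from the card.

```python
# Sketch: building a chat that activates the model's "thinking" mode via a
# <think> tag in the system prompt, as the card describes. The specific
# system-prompt wording is an assumption.
def build_messages(user_prompt: str, thinking: bool = True) -> list[dict]:
    system = "You are a helpful assistant."
    if thinking:
        # Mentioning the <think> tag in the system prompt is what the card
        # says enables step-by-step reasoning before the final answer.
        system += " Reason step by step inside <think> tags before answering."
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("What is 17 * 24?")
```

In practice these messages would be passed to `tokenizer.apply_chat_template(...)` and `model.generate(...)` after loading `bunnycore/Qwen2.5-7B-RRP-1M` with `transformers`.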