---
base_model:
- Qwen/Qwen2.5-Coder-1.5B-Instruct
- Qwen/Qwen2.5-Math-1.5B-Instruct
- Qwen/Qwen2.5-1.5B
- agentica-org/DeepScaleR-1.5B-Preview
tags:
- merge
- mergekit
- lazymergekit
- Qwen/Qwen2.5-Coder-1.5B-Instruct
- Qwen/Qwen2.5-Math-1.5B-Instruct
- Qwen/Qwen2.5-1.5B
- agentica-org/DeepScaleR-1.5B-Preview
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
|

# DeepQwenScalerPlus

DeepQwenScalerPlus is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):

* [Qwen/Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct)
* [Qwen/Qwen2.5-Math-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B-Instruct)
* [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B)
* [agentica-org/DeepScaleR-1.5B-Preview](https://huggingface.co/agentica-org/DeepScaleR-1.5B-Preview)

|
## 🧩 Configuration

```yaml
models:
  - model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
    # no parameters necessary for base model
  - model: Qwen/Qwen2.5-Coder-1.5B-Instruct
    parameters:
      density: 0.5
      weight: 0.5
  - model: Qwen/Qwen2.5-Math-1.5B-Instruct
    parameters:
      density: 0.6
      weight: 0.5
  - model: Qwen/Qwen2.5-1.5B
    parameters:
      density: 0.6
      weight: 0.5
  - model: agentica-org/DeepScaleR-1.5B-Preview
    parameters:
      density: 0.4
      weight: 0.6
merge_method: ties
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
parameters:
  normalize: true
dtype: float16
```
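
To reproduce a merge like this locally (rather than through the LazyMergekit notebook), a configuration of this shape can be passed to the `mergekit-yaml` CLI. This is a sketch assuming [mergekit](https://github.com/arcee-ai/mergekit) is installed; the `config.yaml` filename and output directory are illustrative:

```shell
pip install mergekit

# Save the YAML above as config.yaml, then run the merge.
# --copy-tokenizer copies the base model's tokenizer into the output directory;
# --lazy-unpickle lowers peak memory use while loading the checkpoints.
mergekit-yaml config.yaml ./DeepQwenScalerPlus --copy-tokenizer --lazy-unpickle
```

The resulting directory can then be loaded with `transformers` or uploaded to the Hugging Face Hub.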
|

## 💻 Usage

```python
# pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "K00B404/DeepQwenScalerPlus"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build a chat-formatted prompt using the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```