Update README.md
This is a fine-tune of Qwen 3 4B 2507 Instruct, a lightweight but capable model that can outperform many larger models. It was fine-tuned with Unsloth LoRA on an extensive range of high-quality, diverse datasets; Dolphy 1.0 saw 1.5M examples across its fine-tuning pipeline. As a fine-tuned Qwen model, it still supports the extensive range of languages Qwen provides, now with more nuanced responses and more native understanding. Dolphy 1.0 was also trained on instruction-following and personality datasets to give it a human-like flair.
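LoRA, the technique applied here via Unsloth, freezes the base weights and learns only a small low-rank update on top of them. A minimal numpy sketch of the idea follows; the shapes, rank, and scaling are illustrative, not Dolphy's actual training configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

d, r, alpha = 8, 2, 4          # hidden size, LoRA rank, LoRA alpha (illustrative values)
W = rng.normal(size=(d, d))    # frozen base weight
A = rng.normal(size=(r, d))    # trainable, random init
B = np.zeros((d, r))           # trainable, zero init

def lora_forward(x):
    # Base projection plus the low-rank update, scaled by alpha / r.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(1, d))
# With B initialised to zero the adapter is a no-op,
# so training starts exactly from the base model's behaviour.
assert np.allclose(lora_forward(x), x @ W.T)
```

Only `A` and `B` (2·d·r parameters per adapted matrix, versus d² for the full weight) are updated during training, which is what makes LoRA fine-tuning cheap enough to run on a 4B model with modest hardware.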
**Compatibility**

As Dolphy 1.0 and the Qwen3 2507 Instruct models share the same base, Dolphy 1.0 is compatible with Qwen3's extensive tool-use, function-calling, and multilingual capabilities. The tokenizer is unchanged.
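Because the tokenizer and chat template are inherited from Qwen3, tool definitions are passed in the usual OpenAI-style JSON schema (e.g. via the `tools=` argument of `tokenizer.apply_chat_template`), and the model emits tool calls as JSON inside `<tool_call>` tags. A minimal sketch of that round trip, with a hypothetical `get_weather` tool and a hand-written example output standing in for a real generation:

```python
import json

# Hypothetical tool definition in the OpenAI-style schema that
# Qwen3's chat template accepts via `tools=`.
get_weather = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

messages = [{"role": "user", "content": "What's the weather in Oslo?"}]
# With the real tokenizer you would render the prompt with:
#   tokenizer.apply_chat_template(messages, tools=[get_weather],
#                                 add_generation_prompt=True, tokenize=False)

def parse_tool_call(text: str) -> dict:
    # Extract and decode the first <tool_call> JSON block from model output.
    start = text.index("<tool_call>") + len("<tool_call>")
    end = text.index("</tool_call>")
    return json.loads(text[start:end])

# Example output in the Qwen3 tool-call format (not a real generation).
example_output = (
    '<tool_call>\n'
    '{"name": "get_weather", "arguments": {"city": "Oslo"}}\n'
    '</tool_call>'
)
call = parse_tool_call(example_output)
print(call["name"], call["arguments"]["city"])  # get_weather Oslo
```

The parsed `name`/`arguments` pair can then be dispatched to your own function and the result returned to the model as a `"tool"` role message.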
You can also find this model in upcoming Dolphy AI releases.
## Available Model files: