---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
pipeline_tag: text-generation
tags:
- conversational
- merge
- LoRA
---

<div style="
  background: linear-gradient(135deg, #170e34 0%, #3a1c6e 30%, #2d1b69 70%, #170e34 100%);
  padding: 30px;
  border-radius: 16px;
  margin-top: 20px;
  color: #e2e2ff;
  box-shadow: inset 0 0 60px rgba(106, 13, 173, 0.2);
">

<h1 align="center">
<strong>FluffyTail</strong>
</h1>
## 🚀 Быстрый старт / Quick Start

Самый простой способ начать — использовать готовое решение через Ollama:

```bash
ollama run MarkProMaster229/FluffyTail
```

The easiest way to get started is to use the ready-to-use solution via Ollama:

```bash
ollama run MarkProMaster229/FluffyTail
```
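For programmatic use, Ollama also exposes a local REST API (by default at `http://localhost:11434`). The sketch below is a minimal, hypothetical helper around its `/api/generate` endpoint — it assumes the model has already been pulled with the command above and that the Ollama server is running; the prompt text is just an example.

```python
import json
import urllib.request

# Assumptions: a local Ollama server on the default port, and the
# model already pulled via `ollama run MarkProMaster229/FluffyTail`.
OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "MarkProMaster229/FluffyTail"


def build_payload(prompt: str) -> dict:
    """Build the JSON body for a single, non-streaming generation request."""
    return {"model": MODEL, "prompt": prompt, "stream": False}


def generate(prompt: str) -> str:
    """POST the prompt to the local Ollama server and return the response text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]


# Example call (requires a running server):
# print(generate("Привет! Кто ты?"))
```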
## 📖 Об обучении / Training Details
|
| 43 |
+
Модель была дообучена с использованием адаптера LoRA (Low-Rank Adaptation).
|
| 44 |
+
Количество обучаемых параметров: 9 232 384, что составляет ~0.59% от общего числа параметров базовой модели.
|
| 45 |
|
| 46 |
+
This model was fine-tuned using the LoRA (Low-Rank Adaptation) adapter.
|
| 47 |
+
Number of trainable parameters: 9,232,384, which is ~0.59% of the total parameters of the base model.
|
| 48 |
|
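As a back-of-the-envelope check of these figures: LoRA freezes the base weights and, for each adapted linear layer of shape `(d_in, d_out)`, trains two low-rank factors `A (d_in × r)` and `B (r × d_out)`, i.e. `r * (d_in + d_out)` extra parameters per layer. The rank and target layers below are purely illustrative, not this model's actual training config; only the 9,232,384 / ~0.59% totals come from this card.

```python
def lora_params(d_in: int, d_out: int, r: int) -> int:
    """Trainable parameters a rank-r LoRA adapter adds to one (d_in, d_out) linear layer."""
    # LoRA keeps W frozen and learns A (d_in x r) and B (r x d_out).
    return r * (d_in + d_out)


# Illustrative only: a rank-16 adapter on a single 1536x1536 projection
# (1536 is the hidden size commonly reported for Qwen2.5-1.5B).
per_layer = lora_params(1536, 1536, 16)  # 49,152 parameters

# Sanity check on the card's totals: 9,232,384 trainable parameters at
# ~0.59% implies a base model of roughly 1.56B parameters, which is
# consistent with a 1.5B-class base.
trainable = 9_232_384
implied_total = trainable / 0.0059
```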
---
This model is based on the following work:

- Qwen2.5-1.5B-Instruct by the Qwen Team.
- The original Apache 2.0 license for the base model applies.

If you use this model in your research, please consider citing the original Qwen2.5 work.

</div> <!-- closing tag for the shared background -->