---
library_name: transformers.js
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
---
[<img src="https://cdn-avatars.huggingface.co/v1/production/uploads/67b2f4e49edebc815a3a4739/R1g957j1aBbx8lhZbWmxw.jpeg" width="200"/>](https://huggingface.co/fjmgAI)
## Fine-Tuned Model
**`fjmgAI/b1-R1-1.5B-ONNX`**
## Base Model
**`deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B`**
## Fine-Tuning Method
Fine-tuning was performed using **[`unsloth`](https://github.com/unslothai/unsloth)**, an efficient fine-tuning framework optimized for low-resource environments, together with Hugging Face's TRL library.

The resulting model weights were then exported with ONNX Runtime to make the model compatible with Transformers.js.
## Dataset
**[`Kukedlc/dpo-orpo-spanish-15k`](https://huggingface.co/datasets/Kukedlc/dpo-orpo-spanish-15k)**
### Description
A Spanish-language dataset containing **15,000 examples**, designed for **Direct Preference Optimization (DPO)** or **Odds Ratio Preference Optimization (ORPO)**.
### Adaptation
The dataset was adapted to a reasoning-based format for GRPO (Group Relative Policy Optimization), enhancing its ability to guide preference-based decision-making during fine-tuning. This adaptation improves alignment with instruction-following tasks in Spanish.
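
For illustration, one way such an adaptation can look: a `{prompt, chosen, rejected}` preference record is rewritten as a chat-style prompt whose reference answer is the preferred completion. The field names, system prompt, and reasoning tags below are hypothetical, not the card author's actual preprocessing.

```python
# Hypothetical sketch of the adaptation step: mapping a DPO/ORPO-style
# preference record to a reasoning-style training example. Field names,
# system prompt, and tags are illustrative assumptions.

SYSTEM_PROMPT = (
    "Responde en español. Razona primero entre <think> y </think> "
    "y da la respuesta final después."
)

def to_reasoning_format(example: dict) -> dict:
    """Turn a {prompt, chosen, rejected} record into a chat-style prompt
    whose reference answer is the preferred ("chosen") completion."""
    return {
        "prompt": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": example["prompt"]},
        ],
        "answer": example["chosen"],
    }

record = {
    "prompt": "¿Cuál es la capital de Francia?",
    "chosen": "La capital de Francia es París.",
    "rejected": "No lo sé.",
}
adapted = to_reasoning_format(record)
```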
## Fine-Tuning Details
- The model was trained using the **GRPO algorithm**, leveraging structured preference data to refine its response generation.
- The focus was on retaining the model's **instructional abilities** while improving its **understanding and generation** of Spanish text.
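
The group-relative reward normalization that gives the algorithm its name can be sketched as a small pure function; this is a minimal illustration of the idea, not the card's actual training code.

```python
# Minimal illustration (not the card's training code) of the group-relative
# reward normalization at the core of GRPO: several completions are sampled
# per prompt, scored, and each score is normalized against its own group.
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-8):
    """Normalize a group of per-completion rewards to zero mean, unit scale."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Four completions sampled for one prompt, scored by a reward function:
advs = group_relative_advantages([1.0, 0.0, 0.5, 0.5])
```

Completions scored above the group mean get a positive advantage and are reinforced; those below the mean are penalized, with no separate value model required.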
## Usage (Transformers.js)
---
## Purpose
This fine-tuned model is intended for **Spanish-language applications** that require an efficient, instruction-following model with a **lightweight reasoning process**.
- **Developed by:** fjmgAI
- **License:** apache-2.0

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) [<img src="https://camo.githubusercontent.com/9585eb3e70c8138cbc0f73de7e970be4c668e957e45d16fc3ee6687fcc1da905/68747470733a2f2f68756767696e67666163652e636f2f64617461736574732f74726c2d6c69622f646f63756d656e746174696f6e2d696d616765732f7265736f6c76652f6d61696e2f74726c5f62616e6e65725f6461726b2e706e67" width="200"/>](https://github.com/huggingface/trl?tab=readme-ov-file)

Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).