license: apache-2.0
language:
- en
- es
datasets:
- Kukedlc/dpo-orpo-spanish-15k
library_name: transformers
---

[<img src="https://cdn-avatars.huggingface.co/v1/production/uploads/67b2f4e49edebc815a3a4739/R1g957j1aBbx8lhZbWmxw.jpeg" width="200"/>](https://huggingface.co/fjmgAI)

## Fine-Tuned Model

**`fjmgAI/b1-R1-Zero-3B-GGUF`**

## Base Model

**`unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit`**

## Fine-Tuning Method

Fine-tuning was performed with **[`unsloth`](https://github.com/unslothai/unsloth)**, an efficient fine-tuning framework optimized for low-resource environments, in combination with Hugging Face's TRL library.
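
The training script is not published in this repository; the setup can be sketched with `unsloth` and TRL roughly as below. The hyperparameters, the LoRA rank, and the toy reward function are illustrative assumptions, not the values used for this model (running it requires a GPU with the `unsloth`, `trl`, and `datasets` packages installed):

```python
# Illustrative unsloth + TRL wiring for GRPO fine-tuning of the 4-bit base
# model. This is a configuration sketch, not the exact script used here.
from unsloth import FastLanguageModel
from trl import GRPOConfig, GRPOTrainer
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit",
    max_seq_length=1024,
    load_in_4bit=True,          # keep the bnb-4bit quantization
)
model = FastLanguageModel.get_peft_model(model, r=16)  # LoRA adapters

dataset = load_dataset("Kukedlc/dpo-orpo-spanish-15k", split="train")

def reward_non_empty(completions, **kwargs):
    # Toy reward assuming plain-text completions: favor non-empty answers.
    # A real run would score reasoning structure and preference agreement.
    return [1.0 if c.strip() else 0.0 for c in completions]

trainer = GRPOTrainer(
    model=model,
    reward_funcs=[reward_non_empty],
    args=GRPOConfig(output_dir="outputs", num_generations=4, max_steps=100),
    train_dataset=dataset,
)
trainer.train()
```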

## Dataset

**[`Kukedlc/dpo-orpo-spanish-15k`](https://huggingface.co/datasets/Kukedlc/dpo-orpo-spanish-15k)**

### Description

A Spanish-language dataset containing **15,000 examples**, designed for **Direct Preference Optimization (DPO)** or **Odds Ratio Preference Optimization (ORPO)**.

### Adaptation

The dataset was adapted to a reasoning-based format for GRPO (Group Relative Policy Optimization), enhancing its ability to guide preference-based decision-making during fine-tuning. This adaptation ensures better alignment with instruction-following tasks in Spanish.
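
The exact adaptation script is not included; as an illustration, a DPO-style preference record could be mapped into a reasoning-format prompt along these lines. The field names (`prompt`, `chosen`) and the `<reasoning>`/`<answer>` tags are assumptions for the sketch, not the exact schema used:

```python
# Sketch: adapt a DPO-style record to a reasoning-format example for GRPO.
# Field names and tag conventions are illustrative assumptions.

SYSTEM_PROMPT = (
    "Responde en español. Razona paso a paso dentro de "
    "<reasoning>...</reasoning> y da la respuesta final dentro de "
    "<answer>...</answer>."
)

def to_grpo_example(record: dict) -> dict:
    """Map a preference record to a chat-style prompt plus reference answer."""
    return {
        "prompt": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": record["prompt"]},
        ],
        # The preferred completion is kept as a reference for reward scoring.
        "answer": record["chosen"],
    }

example = to_grpo_example(
    {"prompt": "¿Cuál es la capital de Francia?", "chosen": "París."}
)
print(example["prompt"][1]["content"])
```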

## Fine-Tuning Details

- The model was trained with the **GRPO algorithm**, leveraging structured preference data to refine its response generation.
- The model was fine-tuned to maintain its **4-bit quantization (`bnb-4bit`)** for memory efficiency while aligning its outputs with the characteristics of the Spanish dataset.
- The focus was on retaining the model's **instructional abilities** while improving its **understanding and generation** of Spanish text.
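
GRPO scores each sampled completion against the other completions drawn for the same prompt, rather than against a learned value baseline. A minimal sketch of this group-relative advantage (standard GRPO arithmetic, not code from this repository):

```python
# Minimal sketch of GRPO's group-relative advantage: each completion's reward
# is normalized against the other completions sampled for the same prompt.
from statistics import mean, pstdev

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """Normalize rewards within one group of sampled completions."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    if sigma == 0:  # all completions scored equally -> no learning signal
        return [0.0] * len(rewards)
    return [(r - mu) / sigma for r in rewards]

# Four completions sampled for one prompt, scored by a reward function.
advs = group_relative_advantages([1.0, 0.5, 0.5, 0.0])
print(advs)
```

Completions above the group mean get a positive advantage and are reinforced; those below are pushed down.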

## Purpose

This fine-tuned model is intended for **Spanish-language applications** that require an efficient, instruction-following model with a **lightweight reasoning process**.
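
Since the weights are distributed as GGUF, the model is typically served with a llama.cpp-based runtime, where the prompt should follow the ChatML-style template used by Qwen2.5-based models. A minimal sketch of building such a prompt (the system text is an example, not a required prompt):

```python
# Build a ChatML-style prompt as used by Qwen2.5-based models.
# The system message is illustrative; any Spanish instruction works.
def build_prompt(messages: list[dict]) -> str:
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    ]
    parts.append("<|im_start|>assistant\n")  # generation continues from here
    return "".join(parts)

prompt = build_prompt([
    {"role": "system", "content": "Eres un asistente útil que responde en español."},
    {"role": "user", "content": "Resume en una frase qué es la fotosíntesis."},
])
print(prompt)
```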
- **Developed by:** fjmgAI
- **License:** apache-2.0
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) [<img src="https://camo.githubusercontent.com/9585eb3e70c8138cbc0f73de7e970be4c668e957e45d16fc3ee6687fcc1da905/68747470733a2f2f68756767696e67666163652e636f2f64617461736574732f74726c2d6c69622f646f63756d656e746174696f6e2d696d616765732f7265736f6c76652f6d61696e2f74726c5f62616e6e65725f6461726b2e706e67" width="200"/>](https://github.com/huggingface/trl?tab=readme-ov-file)