Update README.md

README.md CHANGED
@@ -1,5 +1,6 @@
 ---
-base_model:
+base_model:
+- Qwen/Qwen2.5-7B
 tags:
 - text-generation-inference
 - transformers
@@ -14,33 +15,8 @@ metrics:
 - bleu
 - rouge
 - meteor
-
 model-index:
 - name: MASID-v3
-  description: |
-    **MASID-v3** is a fine-tuned version of **Qwen2.5-7B** trained specifically for **Filipino recipe generation**, with a focus on main dish preparation.
-
-    This model was trained on the **Filipino Recipes 2K V2 dataset**, a curated collection of ~2,000 authentic Filipino recipes.
-    Unlike earlier variants that explored multi-stage fine-tuning, **MASID-v3 was trained directly from Qwen2.5-7B** using this dataset to specialize the model toward Filipino culinary knowledge.
-
-    The goal of MASID-v3 is to generate structured and culturally accurate Filipino main dish recipes, covering a wide range of traditional cooking methods and ingredient combinations.
-
-    ### Model Details
-    - **Base Model**: [Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B)
-    - **Dataset**: Filipino Recipes 2K V2 (~2,000 samples)
-    - **Training Objective**: Recipe text generation (Filipino cuisine, main dishes)
-    - **Method**: Direct fine-tuning from Qwen2.5-7B
-
-    ### Intended Use
-    - Assisting in **recipe writing**
-    - Exploring **Filipino food culture**
-    - Generating **cooking instructions** in natural language
-
-    ### Limitations
-    - Trained on a relatively **small dataset (~2k samples)**
-    - May sometimes produce **hallucinated ingredients** or **inaccurate steps**
-    - Not suitable for **nutritional or food safety advice**
-    - Best used for **research, education, and creative purposes**
   results:
   - task:
       name: Text Generation
@@ -49,8 +25,6 @@ model-index:
       name: joackimagno/FILIPINO_RECIPES_2K_V2
       type: joackimagno/FILIPINO_RECIPES_2K_V2
       split: test
-      # (optional but recommended)
-      revision: <dataset_git_sha_or_tag>
     metrics:
     - name: BLEU-4
       type: bleu
@@ -65,12 +39,6 @@ model-index:
       config: rougeL
 ---
 
-# Uploaded finetuned model
-
-- **Developed by:** joackimagno
-- **License:** apache-2.0
-- **Finetuned from model :** unsloth/Qwen2.5-7B
-
 # MASID-v3
 
 **MASID-v3** is a fine-tuned version of **Qwen2.5-7B** trained specifically for **Filipino recipe generation**, with a focus on main dish preparation.
@@ -105,6 +73,21 @@ The goal of MASID-v3 is to generate structured and culturally accurate Filipino
 
 ---
 
+## Evaluation
+
+| Dataset                            | Split | BLEU-4 | METEOR | ROUGE-L (F1) |
+|------------------------------------|:-----:|:------:|:------:|:------------:|
+| joackimagno/FILIPINO_RECIPES_2K_V2 | test  |  0.07  |  0.35  |     0.32     |
+
+
+---
+
+---
+
+This Qwen2 model was trained **2× faster** with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face’s TRL library.
+
+[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+
 ## Example Usage
 
 ```python
@@ -115,7 +98,11 @@ from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
 # Load model and tokenizer
 model_name = "joackimagno/MASID-v3"
 tokenizer = AutoTokenizer.from_pretrained(model_name)
-model = AutoModelForCausalLM.from_pretrained(
+model = AutoModelForCausalLM.from_pretrained(
+    model_name,
+    torch_dtype=torch.float16,
+    device_map="auto",
+)
 
 # ==============================================================
 # Alpaca-style prompt
@@ -173,10 +160,4 @@ generated = tokenizer.decode(
     skip_special_tokens=True
 )
 
-print(generated.strip())
-
-
-
-This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
-
-[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+print(generated.strip())
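The example-usage code in the diff builds an Alpaca-style prompt before calling `model.generate`, but the template string itself falls outside the hunks shown. The sketch below assumes the standard Alpaca format; the field names, wording, and the sample instruction are illustrative assumptions, not taken from the repository:

```python
# Hedged sketch: the standard Alpaca template, assumed to match the
# "Alpaca-style prompt" referenced in the snippet above.
ALPACA_TEMPLATE = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Input:
{input}

### Response:
"""

def build_prompt(instruction: str, context: str = "") -> str:
    """Fill the Alpaca-style template before tokenization."""
    return ALPACA_TEMPLATE.format(instruction=instruction, input=context)

prompt = build_prompt(
    "Generate a Filipino main dish recipe.",  # illustrative instruction
    "Dish: chicken adobo",                    # illustrative input
)
print(prompt)
```

The formatted string would then be passed to `tokenizer(...)` as in the snippet; the model is expected to continue the text after `### Response:`.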
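The ROUGE-L figure added to the Evaluation table is the F-measure over the longest common subsequence between a generated recipe and its reference. A minimal pure-Python sketch of that computation (whitespace tokenization and the toy strings are simplifying assumptions; the diff does not show the actual evaluation pipeline):

```python
def rouge_l_f1(candidate: str, reference: str) -> float:
    """ROUGE-L F1: LCS-based overlap between candidate and reference tokens."""
    c, r = candidate.split(), reference.split()
    # Dynamic-programming table for the longest-common-subsequence length
    dp = [[0] * (len(r) + 1) for _ in range(len(c) + 1)]
    for i, ct in enumerate(c):
        for j, rt in enumerate(r):
            dp[i + 1][j + 1] = dp[i][j] + 1 if ct == rt else max(dp[i][j + 1], dp[i + 1][j])
    lcs = dp[len(c)][len(r)]
    if lcs == 0:
        return 0.0
    precision = lcs / len(c)  # fraction of candidate tokens in the LCS
    recall = lcs / len(r)     # fraction of reference tokens in the LCS
    return 2 * precision * recall / (precision + recall)

print(round(rouge_l_f1("saute garlic and onion", "saute the garlic and onion"), 3))  # → 0.889
```

In practice scores like those in the table are averaged over the test split; BLEU-4 and METEOR add n-gram precision with a brevity penalty and stemming/synonym matching, respectively, which this sketch does not cover.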