
---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- filipino
- recipes
- cooking
- meal-planning
- tagalog
- peft
- lora
language:
- en
- tl
library_name: peft
---

# HAIN - Filipino Recipe Model

A Mistral-7B-Instruct model fine-tuned to generate Filipino recipes as structured JSON.

## Model Details

- **Base Model:** mistralai/Mistral-7B-Instruct-v0.2
- **Fine-tuning Method:** QLoRA (4-bit quantization + LoRA)
- **Training Data:** 331 Filipino recipes from various regions
- **Language:** English, with Tagalog ingredient names
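The card does not publish the adapter's LoRA hyperparameters. For orientation only, a typical QLoRA configuration for Mistral-7B looks like the sketch below; the rank, alpha, dropout, and target modules are assumptions, not the values actually used for this adapter.

```python
from peft import LoraConfig

# Hypothetical values — the rank, alpha, and target modules used for
# this particular adapter are not stated on the model card.
lora_config = LoraConfig(
    r=16,                      # LoRA rank (assumed)
    lora_alpha=32,             # scaling factor (assumed)
    lora_dropout=0.05,         # assumed
    bias="none",
    task_type="CAUSAL_LM",
    # attention projections — a common target-module choice for Mistral
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```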

## Usage

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

# 4-bit NF4 quantization, matching the QLoRA training setup
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# Load the quantized base model, then attach the LoRA adapter
base_model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "alwayslate22/hain-recipe-model")
tokenizer = AutoTokenizer.from_pretrained("alwayslate22/hain-recipe-model")

# Generate a recipe using Mistral's [INST] prompt format
prompt = "<s>[INST] Give me the full JSON recipe for: Chicken Adobo (Filipino, Tagalog Dish). [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=500)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
                                                                                                                                                                              
## Output Format
                                                                                                                                                                              
The model returns structured JSON:                                                                                                                                            
                                                                                                                                                                              
```json
{
  "recipe_id": 1,
  "title": "Chicken Adobo",
  "cuisine": "Filipino",
  "region": "Tagalog Dish",
  "ingredients": [...],
  "instructions": [...],
  "cooking_time": "45 minutes",
  "servings": 4,
  "difficulty": "Easy"
}
```
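Because the decoded output still contains the `[INST]` prompt, you will usually want to pull the JSON object out of the generated text before using it. A minimal sketch of one way to do that (the brace counting assumes the recipe JSON contains no braces inside string values):

```python
import json

def extract_recipe(generated: str) -> dict:
    """Pull the first balanced JSON object out of decoded model output.

    Scans for the first '{' and tracks brace depth to locate its
    matching '}'. Note: braces inside JSON string values would throw
    off the depth count, which is unlikely for recipe fields.
    """
    start = generated.index("{")
    depth = 0
    for i, ch in enumerate(generated[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return json.loads(generated[start : i + 1])
    raise ValueError("no complete JSON object found in model output")
```

If the model occasionally emits truncated JSON (e.g. when `max_new_tokens` is too low), the `ValueError` or a `json.JSONDecodeError` signals that the recipe should be regenerated.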
                                                                                                                                                                              
## Training

- **Hardware:** Google Colab T4 GPU
- **Training Time:** ~1 hour
- **Epochs:** 3
- **Final Loss:** ~0.25
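Beyond the figures above, the trainer settings are not published. A `TrainingArguments` sketch consistent with them might look like the following; the epoch count and fp16 choice follow the card (a T4 does not support bf16), while the batch size, accumulation steps, and learning rate are assumptions for illustration.

```python
from transformers import TrainingArguments

# num_train_epochs and fp16 follow the card (3 epochs on a Colab T4);
# every other hyperparameter here is an assumption, not a published value.
training_args = TrainingArguments(
    output_dir="hain-recipe-model",
    num_train_epochs=3,
    per_device_train_batch_size=2,   # assumed — small batch to fit T4 memory
    gradient_accumulation_steps=8,   # assumed
    learning_rate=2e-4,              # assumed — a common QLoRA default
    fp16=True,                       # T4 lacks bf16 support
    logging_steps=10,
)
```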
                                                                                    