This model is a fine-tuned version of microsoft/Phi-3-mini-4k-instruct on the dvgodoy/yoda_sentences dataset.
How to use gilbaes/phi3-mini-yoda-v1 with PEFT:
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model, then attach the fine-tuned adapter on top of it
base_model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
model = PeftModel.from_pretrained(base_model, "gilbaes/phi3-mini-yoda-v1")
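If the adapter is LoRA-based, it can also be merged into the base weights so the result behaves like a plain transformers model. A minimal sketch, assuming a mergeable (LoRA-style) adapter:

# Optional: fold the adapter weights into the base model and save a standalone copy
merged = model.merge_and_unload()
merged.save_pretrained("phi3-mini-yoda-v1-merged")

For end-to-end generation, the repository can also be loaded directly through transformers, which resolves PEFT adapters automatically when peft is installed: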
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_id = "gilbaes/phi3-mini-yoda-v1"
# Load the fine-tuned model; transformers picks up the PEFT adapter if peft is installed
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
prompt = "Translate to Yoda speak: I am learning to use the Force."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
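Since the base model is instruction-tuned, results may improve if you wrap the request in Phi-3's chat template rather than passing a raw prompt. A sketch, assuming the fine-tune kept the base model's chat format:

# Format the request with the tokenizer's built-in chat template
messages = [{"role": "user", "content": "Translate to Yoda speak: I am learning to use the Force."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=100)
# Decode only the newly generated tokens, skipping the echoed prompt
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))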
This model was fine-tuned using PEFT on the dvgodoy/yoda_sentences dataset.
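The exact training configuration is not documented here; the following is a hypothetical sketch of a comparable LoRA fine-tuning setup with trl's SFTTrainer, where the dataset column names and all hyperparameters are assumptions made for illustration:

from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM
from trl import SFTConfig, SFTTrainer

base = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
dataset = load_dataset("dvgodoy/yoda_sentences", split="train")

# "sentence" and "translation" are assumed column names, for illustration only
def to_text(example):
    return {"text": f"Translate to Yoda speak: {example['sentence']}\n{example['translation']}"}

dataset = dataset.map(to_text)

peft_config = LoraConfig(
    task_type="CAUSAL_LM",
    r=8,                          # assumed rank; the actual value is unknown
    target_modules=["qkv_proj"],  # Phi-3's fused attention projection
)
trainer = SFTTrainer(
    model=base,
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(output_dir="phi3-mini-yoda-v1", dataset_text_field="text"),
)
trainer.train()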
This model inherits the limitations and biases from the base Phi-3 model and the training dataset. It's designed for educational purposes and may not be suitable for production use without further evaluation.
Citation:
@misc{phi3-mini-yoda-v1,
  author = {gilbaes},
  title = {phi3-mini-yoda-v1},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/gilbaes/phi3-mini-yoda-v1}
}