# Llama-3.3-70B-Instruct-LoRA-Rev3

## Usage
```python
from peft import PeftModel
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the base model, then attach the LoRA adapter on top of it.
base = "meta-llama/Llama-3.3-70B-Instruct"
tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, "davidcondrey/llama-3.3-70B-i-ft-rev3")
```