# Llama-70B-God-Tier-Adapter
LoRA adapter fine-tuned from meta-llama/Llama-3.3-70B-Instruct.
## Model Details
- Type: LoRA adapter (requires a compatible base model)
- Base model: meta-llama/Llama-3.3-70B-Instruct
- Dataset: tatsu-lab/alpaca
- Training samples: 1000
- Max sequence length: 512
- Packing: False
- LoRA: r=16, alpha=32, dropout=0.05, target=attn
## Intended Use
This repository contains adapter weights intended to be loaded on top of the base model.
## Usage (Transformers + PEFT)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.3-70B-Instruct"
adapter_id = "Daga2001/Llama-70B-God-Tier-Adapter"

# Load the tokenizer and base model, then attach the adapter weights.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
```
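Once the adapter is loaded, inference works like any other causal LM. A minimal generation sketch, continuing from the `tokenizer` and `model` objects created above (note that loading the 70B base requires substantial GPU memory; the prompt is illustrative only):

```python
# Continuing from the usage snippet above: tokenize a prompt,
# generate with the adapted model, and decode the result.
prompt = "Explain LoRA fine-tuning in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```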
## License / Attribution
Use is subject to the base model license and terms.