Llama-70B-God-Tier-Adapter

LoRA adapter fine-tuned from meta-llama/Llama-3.3-70B-Instruct.

Model Details

  • Type: LoRA adapter (requires a compatible base model)
  • Base model: meta-llama/Llama-3.3-70B-Instruct
  • Dataset: tatsu-lab/alpaca
  • Training samples: 1000
  • Max sequence length: 512
  • Packing: False
  • LoRA: r=16, alpha=32, dropout=0.05, target=attn

Intended Use

This repository contains only the LoRA adapter weights; it does not include the base model. The adapter must be loaded on top of meta-llama/Llama-3.3-70B-Instruct to be used.

Usage (Transformers + PEFT)

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.3-70B-Instruct"
adapter_id = "Daga2001/Llama-70B-God-Tier-Adapter"

# Load the tokenizer and base model, then attach the LoRA adapter.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    device_map="auto",           # shard the 70B model across available GPUs
    torch_dtype=torch.bfloat16,  # full-precision weights will not fit on most setups
)
model = PeftModel.from_pretrained(base, adapter_id)
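
Once the adapter is attached, generation works as with any instruct model. A minimal sketch, assuming the base model's chat template (the prompt below is illustrative only):

```python
# Format a single-turn prompt with the base model's chat template
# and generate a response. Requires the model/tokenizer loaded above.
messages = [{"role": "user", "content": "Give three tips for staying healthy."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```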

License / Attribution

Use of this adapter is subject to the base model's license and acceptable-use terms.
