# Llama-70B-God-Tier
A full merged Transformers checkpoint (sharded safetensors), created by merging a LoRA adapter into meta-llama/Llama-3.3-70B-Instruct.
## Model Details
- Type: merged fine-tune (full weights)
- Base model: meta-llama/Llama-3.3-70B-Instruct
- Adapter repo: https://huggingface.co/Daga2001/Llama-70B-God-Tier-Adapter
- Dataset: tatsu-lab/alpaca
- Training samples: 1000
- Max sequence length: 512
- Packing: False
- LoRA (merged): r=16, alpha=32, dropout=0.05, target=attn (see the merge sketch below)
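
For reference, a merge like this can be reproduced with PEFT's `merge_and_unload`. The sketch below is illustrative rather than the exact script used to produce this checkpoint; it assumes the base and adapter repos listed above and enough memory to hold the full 70B model.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-3.3-70B-Instruct"
adapter_id = "Daga2001/Llama-70B-God-Tier-Adapter"

# Load the base model, attach the LoRA adapter, then fold the deltas into the weights.
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
merged = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()

# Save as sharded safetensors, matching the format of this repo.
merged.save_pretrained("Llama-70B-God-Tier", safe_serialization=True)
AutoTokenizer.from_pretrained(base_id).save_pretrained("Llama-70B-God-Tier")
```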
## Intended Use

This repo hosts the full merged weights for standard Transformers inference; no PEFT or adapter loading is required.
## Usage (Transformers)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Daga2001/Llama-70B-God-Tier"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" (requires `accelerate`) shards the weights across available devices;
# torch_dtype="auto" loads the checkpoint's native precision instead of upcasting to fp32.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
```
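
A minimal generation sketch using the tokenizer's built-in Llama 3 chat template; the prompt is just an example:

```python
messages = [{"role": "user", "content": "Summarize what a LoRA merge does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Note that a 70B model needs on the order of 140 GB of accelerator memory in bf16; if that is out of reach, 4-bit loading via bitsandbytes (`BitsAndBytesConfig(load_in_4bit=True)`) is a common workaround.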
## LM Studio

LM Studio typically expects GGUF files rather than sharded safetensors; see the companion *-GGUF repo if one is provided.
## License / Attribution

Use is subject to the license and terms of the base model, meta-llama/Llama-3.3-70B-Instruct.