Tags: Text Generation · Transformers · Safetensors · English · llama · trl · sft · text-generation-inference · 4-bit precision · bitsandbytes
Use a pipeline as a high-level helper:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="uralstech/AFO-AutoAgent-v2")
```

Or load the model directly:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("uralstech/AFO-AutoAgent-v2")
model = AutoModelForCausalLM.from_pretrained("uralstech/AFO-AutoAgent-v2")
```
Model tree for uralstech/AFO-AutoAgent-v2
- Base model: meta-llama/Llama-3.1-8B
Gated model: the base model is gated, so log in with a Hugging Face token that has gated-access permission before downloading:

```shell
hf auth login
```