```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("appvoid/arco-reflection-v4")
model = AutoModelForCausalLM.from_pretrained("appvoid/arco-reflection-v4")
```
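A minimal generation sketch with the loaded model; the prompt and generation settings below are only illustrative, not a recommended configuration:

```python
# Tokenize an example prompt and generate a short completion (prompt is illustrative)
inputs = tokenizer("Reflect on this question: why is the sky blue?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```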
# Uploaded model

- **Developed by:** appvoid
- **License:** apache-2.0
- **Finetuned from model:** h2oai/h2o-danube3-500m-base
Super experimental; better to use arco-reflection instead if you don't get good results.
This llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="appvoid/arco-reflection-v4")
```
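A quick usage sketch for the pipeline; the prompt and `max_new_tokens` value are illustrative assumptions:

```python
# Run the pipeline on an example prompt and print the generated text
result = pipe("Reflect on this question: why is the sky blue?", max_new_tokens=64)
print(result[0]["generated_text"])
```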