# tiny-random-canarim

This is a tiny random Llama model derived from "dominguesm/canarim-7b". See `make_tiny_model.py` for how it was created.

It is useful for functional testing, not for quality generation, since its weights are random and its tokenizer has been shrunk to 3k items.

Thanks to Stas Bekman for the code.

Load the model and tokenizer directly:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("dominguesm/tiny-random-canarim")
model = AutoModelForCausalLM.from_pretrained("dominguesm/tiny-random-canarim")
```
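Because the weights are random, anything the model generates is meaningless; a short generation like the one below (the prompt and settings are arbitrary examples, not part of the card) only confirms that loading and inference work end to end.

```python
# Functional smoke test: the generated text is gibberish by design.
inputs = tokenizer("Olá, mundo!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```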
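For reference, the general recipe for a fixture like this is to pair a shrunk tokenizer with a Llama model built from a minimal config, so its weights come out randomly initialized. The sketch below only illustrates that idea with assumed dimensions; it is not the actual `make_tiny_model.py` script.

```python
from transformers import LlamaConfig, LlamaForCausalLM

# Illustrative dimensions only; the real script may use different values.
config = LlamaConfig(
    vocab_size=3000,          # sized to match a tokenizer shrunk to ~3k items
    hidden_size=16,
    intermediate_size=64,
    num_hidden_layers=2,
    num_attention_heads=4,
)
tiny = LlamaForCausalLM(config)   # weights are randomly initialized
tiny.save_pretrained("my-tiny-random-model")
```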
You can also use a pipeline as a high-level helper:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="dominguesm/tiny-random-canarim")
```
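Calling the pipeline on any prompt exercises the full tokenize, generate, and decode path; as above, the returned text is random, and the prompt here is just an example.

```python
# Only the wiring is being tested; the output is meaningless.
result = pipe("Olá, mundo!", max_new_tokens=8)
print(result[0]["generated_text"])
```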