## How to use from the Transformers library
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="dominguesm/tiny-random-canarim")
```
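A quick call on the pipeline might look like the sketch below (the prompt and `max_new_tokens` value are illustrative; the generated text will be gibberish since the weights are random):

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="dominguesm/tiny-random-canarim")

# Generate a handful of tokens; with random weights the output is
# meaningless, but the call exercises the full generation path.
result = pipe("Hello", max_new_tokens=5)
print(result[0]["generated_text"])
```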
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("dominguesm/tiny-random-canarim")
model = AutoModelForCausalLM.from_pretrained("dominguesm/tiny-random-canarim")
```
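With the tokenizer and model loaded directly, generation can be driven by hand, as in this sketch (prompt text and token budget are illustrative assumptions):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("dominguesm/tiny-random-canarim")
model = AutoModelForCausalLM.from_pretrained("dominguesm/tiny-random-canarim")

# Tokenize an illustrative prompt and greedily decode a few tokens.
# The decoded text will be gibberish because the weights are random.
inputs = tokenizer("Hello world", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=5, do_sample=False)
text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(text)
```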
## tiny-random-canarim

This is a tiny random Llama model derived from "dominguesm/canarim-7b".

See make_tiny_model.py for how this was done.

This is useful for functional testing, but not for quality generation: its weights are random and its tokenizer has been shrunk to 3k items.
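Because only shapes and interfaces matter for functional testing, a smoke test can check logits dimensions rather than output quality. A minimal sketch (the prompt is illustrative; the vocabulary size is read from the model config rather than hard-coded):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("dominguesm/tiny-random-canarim")
model = AutoModelForCausalLM.from_pretrained("dominguesm/tiny-random-canarim")

inputs = tokenizer("functional test", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Logits are (batch, sequence_length, vocab_size); the tiny model and
# shrunken tokenizer keep this fast enough to run on CPU in CI.
assert logits.shape[0] == 1
assert logits.shape[1] == inputs["input_ids"].shape[1]
assert logits.shape[2] == model.config.vocab_size
print(tuple(logits.shape))
```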

Thanks to Stas Bekman for the code.

Model size: 168k params · Tensor type: BF16 (Safetensors)