```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("stas/tiny-random-llama-2")
model = AutoModelForCausalLM.from_pretrained("stas/tiny-random-llama-2")
```
This is a tiny random Llama model derived from "meta-llama/Llama-2-7b-hf". See make_tiny_model.py for how it was made.

It is useful for functional testing, but not for quality generation: its weights are random and the tokenizer has been shrunk to 3k items.
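The shrunk tokenizer and random weights can be verified directly after loading. A minimal sketch (the size bounds below are loose sanity checks, not exact values from the model card):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("stas/tiny-random-llama-2")
model = AutoModelForCausalLM.from_pretrained("stas/tiny-random-llama-2")

# The tokenizer was shrunk to ~3k items (vs 32k for the original Llama-2 vocab)
vocab_size = len(tokenizer)
print(vocab_size)

# The model is tiny compared to the 7B-parameter original
n_params = model.num_parameters()
print(n_params)
```

Because everything is small, loading and running this model takes seconds, which is the point of using it in CI-style functional tests.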
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="stas/tiny-random-llama-2")
```
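The pipeline can serve as a quick end-to-end smoke test. A minimal sketch (the prompt and `max_new_tokens` value are arbitrary choices, not from the model card):

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="stas/tiny-random-llama-2")

# Generate a few tokens; the output text is gibberish since the weights are random,
# but a well-formed result confirms the load/tokenize/generate path works.
out = pipe("Hello", max_new_tokens=5)
print(out[0]["generated_text"])
```

The assertion-worthy property here is the output structure, not the text itself.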