How to use from the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="stas/tiny-random-llama-2")
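A minimal smoke test of the pipeline above (a sketch, assuming the `transformers` package is installed; the prompt and `max_new_tokens` value are arbitrary choices, and the output text is gibberish since the weights are random):

```python
from transformers import pipeline

# Load the tiny random model; outputs are meaningless by design.
pipe = pipeline("text-generation", model="stas/tiny-random-llama-2")

# Generate a few tokens from a short prompt.
out = pipe("Hello,", max_new_tokens=5)
print(out[0]["generated_text"])
```

The result is a list of dicts, each with a `generated_text` key containing the prompt plus the (random) continuation.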
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("stas/tiny-random-llama-2")
model = AutoModelForCausalLM.from_pretrained("stas/tiny-random-llama-2")
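Extending the direct-load snippet, a hedged sketch of running greedy generation by hand (the prompt and token count are illustrative assumptions):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("stas/tiny-random-llama-2")
model = AutoModelForCausalLM.from_pretrained("stas/tiny-random-llama-2")

# Tokenize a prompt and run greedy decoding for a handful of new tokens.
inputs = tokenizer("Hello world", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=8, do_sample=False)

# The returned sequence starts with the prompt tokens, followed by
# up to 8 new (random-weight, hence meaningless) tokens.
text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(text)
```

Because the weights are random, only the shapes and the absence of errors are meaningful here, which is exactly what this model is for.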
This is a tiny random Llama model derived from "meta-llama/Llama-2-7b-hf".

See make_tiny_model.py for how this was done.

This is useful for functional testing, but not for quality generation: its weights are random and the tokenizer has been shrunk to 3k items.
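To illustrate the functional-testing use case, here is a hypothetical pytest-style check (the test name and assertions are assumptions for illustration, not part of this repo); it verifies that the generation plumbing works, not output quality:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

def test_tiny_llama_generates():
    # Hypothetical functional test: exercises load + generate end to end.
    tok = AutoTokenizer.from_pretrained("stas/tiny-random-llama-2")
    model = AutoModelForCausalLM.from_pretrained("stas/tiny-random-llama-2")
    ids = tok("ping", return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=4, do_sample=False)
    # Output contains the prompt plus at most 4 new tokens
    # (generation may stop early if an EOS token is produced).
    assert out.shape[0] == 1
    assert ids.shape[1] <= out.shape[1] <= ids.shape[1] + 4

test_tiny_llama_generates()
```

Because the model is only ~104k parameters, such a test downloads and runs in seconds, making it suitable for CI.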

Downloads last month: 144,707
Model size: 104k params (Safetensors)
Tensor type: BF16

Model tree for stas/tiny-random-llama-2
Adapters: 1 model
Spaces using stas/tiny-random-llama-2: 3