## Use with the Transformers library
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="nickypro/tinyllama-110M")
```
```python
# Load the model and tokenizer directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("nickypro/tinyllama-110M")
model = AutoModelForCausalLM.from_pretrained("nickypro/tinyllama-110M")
```
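Once loaded, the model can generate text through the standard `generate` API. A minimal sketch follows; the prompt and sampling settings are illustrative assumptions, not part of the model card:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("nickypro/tinyllama-110M")
model = AutoModelForCausalLM.from_pretrained("nickypro/tinyllama-110M")

# TinyStories-style prompt (illustrative choice, not from the card)
prompt = "Once upon a time, there was a little"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a short continuation; temperature/max_new_tokens are arbitrary defaults
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.8)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text)
```

Because the model was trained on TinyStories, short story-style prompts like the one above tend to work better than instruction-style inputs.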

This is the 110M-parameter Llama 2 architecture model trained on the TinyStories dataset. The weights were converted from karpathy/tinyllamas; see the llama2.c project for more details.
