```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Locutusque/TinyMistral-248M-Instruct")
model = AutoModelForCausalLM.from_pretrained("Locutusque/TinyMistral-248M-Instruct")
```
This model is the base model Locutusque/TinyMistral-248M, fully fine-tuned on the Locutusque/InstructMix dataset. During validation, it achieved an average perplexity of 3.23 on Locutusque/InstructMix. It has so far been trained on approximately 608,000 examples; more training epochs are planned.
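As a point of reference, perplexity is the exponential of the mean cross-entropy loss, so the reported 3.23 corresponds to a mean validation loss of roughly 1.17 nats per token. A minimal sketch of that relationship (the loss value below is a hypothetical illustration chosen to match the reported perplexity, not a number from the model card):

```python
import math

# Perplexity is exp(mean cross-entropy loss, measured in nats).
# A hypothetical mean validation loss of ~1.1725 nats/token
# corresponds to the reported perplexity of 3.23.
mean_loss = 1.1725
perplexity = math.exp(mean_loss)
print(round(perplexity, 2))  # -> 3.23
```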
Base model: Locutusque/TinyMistral-248M
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Locutusque/TinyMistral-248M-Instruct")
```