# Mixtral Tiny - GPTQ
- Model creator: Hugging Face Internal Testing Organization
- Original model: Mixtral Tiny
## Description
This model exists only for testing the Mixtral architecture. It is not intended for normal use.
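Since this is a GPTQ checkpoint, its quantization settings can be inspected from the repo config before downloading any weights. A minimal sketch, assuming the repo follows the standard Transformers GPTQ layout (the `quantization_config` attribute may be absent otherwise):

```python
from transformers import AutoConfig

# Fetch only the config; no model weights are downloaded.
config = AutoConfig.from_pretrained("TheBlokeAI/Mixtral-tiny-GPTQ")

# GPTQ repos that follow the standard Transformers layout expose their
# quantization parameters (bits, group size, ...) here; None otherwise.
print(getattr(config, "quantization_config", None))
```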
## How to use

Load the model directly with the Transformers `Auto*` classes:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("TheBlokeAI/Mixtral-tiny-GPTQ")
model = AutoModelForCausalLM.from_pretrained("TheBlokeAI/Mixtral-tiny-GPTQ")
```

Or use a pipeline as a high-level helper:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="TheBlokeAI/Mixtral-tiny-GPTQ")
```
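As a quick smoke test, the pipeline can be called directly. A minimal sketch: the prompt and `max_new_tokens` value are illustrative assumptions, a GPTQ backend (e.g. auto-gptq via Optimum) typically needs to be installed, and the output is expected to be gibberish since the model exists only for architecture testing:

```python
from transformers import pipeline

# Build the text-generation pipeline for the tiny test checkpoint.
pipe = pipeline("text-generation", model="TheBlokeAI/Mixtral-tiny-GPTQ")

# Illustrative smoke test: one short generation. Expect nonsense output;
# this checkpoint only exercises the Mixtral architecture.
result = pipe("Hello, my name is", max_new_tokens=20)
print(result[0]["generated_text"])
```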