Tags: Text Generation · Transformers · Safetensors · mistral · 4-bit precision · AWQ · conversational · text-generation-inference
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

# The AWQ checkpoint carries its quantization config in the repo, so
# from_pretrained loads the 4-bit weights directly; this typically requires
# the autoawq package and a CUDA GPU.
tokenizer = AutoTokenizer.from_pretrained("solidrust/Mixtral_AI_MiniTron_Chat-AWQ")
model = AutoModelForCausalLM.from_pretrained("solidrust/Mixtral_AI_MiniTron_Chat-AWQ")

messages = [
    {"role": "user", "content": "Who are you?"},
]

# Apply the model's chat template and move the tensors to the model's device.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))

Quick Links
LeroyDyer/Mixtral_AI_MiniTron_Chat AWQ
- Model creator: LeroyDyer
- Original model: Mixtral_AI_MiniTron_Chat
Model Summary
These little ones are easy to train for tasks. They already have some training (not great), but they can take more, and, being Mistral-based, they can take LoRA modules.
Remember to add training on top of the LoRA you merge with: load the LoRA and train for a few cycles (e.g. 20 steps) on the same data that was used to produce it, check that it took hold, and then merge it. A minimal sketch of that loop follows.
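One way to express this with the PEFT library is sketched below; the adapter path, save path, and the elided training step are hypothetical placeholders, not part of this repo:

from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the (unquantized) base model the adapter was trained against.
base = AutoModelForCausalLM.from_pretrained("LeroyDyer/Mixtral_AI_MiniTron_Chat")

# Load the LoRA adapter in trainable mode so it can take a few more steps.
model = PeftModel.from_pretrained(base, "path/to/your-lora", is_trainable=True)

# ... train for a handful of steps (e.g. ~20) on the same data the adapter
# was originally tuned on, using your usual Trainer / TRL setup ...

# Once the extra training has taken hold, fold the adapter into the base
# weights and save the merged model.
merged = model.merge_and_unload()
merged.save_pretrained("Mixtral_AI_MiniTron_Chat-merged")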
- Developed by: LeroyDyer
- License: apache-2.0
- Finetuned from model: LeroyDyer/Mixtral_AI_MiniTron
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="solidrust/Mixtral_AI_MiniTron_Chat-AWQ")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)