```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Masterjp123/MythicalMax")
model = AutoModelForCausalLM.from_pretrained("Masterjp123/MythicalMax")
```
This model is a merge of MythosMax-L2-13B and Mythical-Destroyer-V2-L2-13B.

Its purpose was to combine Mythical-Destroyer, MythosMax, and the Kimiko LoRA so that the result keeps the best parts of each. NOTE: this model is very experimental.

Feel free to make a GPTQ or any other quantized version of this model, because I do not plan to.
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Masterjp123/MythicalMax")
```
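The base models in this merge (MythosMax and Mythical-Destroyer) are typically prompted with the Alpaca instruction format. The exact template for this merge is not documented, so the helper below is an assumption to verify against the base model cards before relying on it.

```python
def format_alpaca_prompt(instruction: str, response: str = "") -> str:
    """Build an Alpaca-style prompt (assumed format; check the base model cards)."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Response:\n{response}"
    )

# Example: feed the formatted prompt to the pipeline above
prompt = format_alpaca_prompt("Write a short story about a dragon.")
```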