Use with the Transformers library
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="OccultAI/Musecuilo-12B-Model_Stock")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("OccultAI/Musecuilo-12B-Model_Stock")
model = AutoModelForCausalLM.from_pretrained("OccultAI/Musecuilo-12B-Model_Stock")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
Quick Links

- 🐈 Musecuilo 12B Model_Stock
- Musecuilo
Note: Use the Mistral Tekken (recommended) or ChatML chat template for best results. The model retains some refusals, which can be worked around with jailbreaking or ablation as needed.
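The ChatML format mentioned above wraps each conversation turn in `<|im_start|>` / `<|im_end|>` markers. The tokenizer's built-in `apply_chat_template` is the usual way to apply a template, but as a minimal illustration, a hypothetical manual formatter might look like:

```python
def format_chatml(messages):
    # Wrap each message in ChatML turn delimiters: <|im_start|>role ... <|im_end|>
    out = ""
    for m in messages:
        out += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    # Open an assistant turn so the model continues from here (generation prompt)
    out += "<|im_start|>assistant\n"
    return out
```

This mirrors what `add_generation_prompt=True` does in the Transformers example above: the prompt ends with an opened assistant turn for the model to complete.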

This model was merged using the model_stock merge method.
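The model_stock method (from the Model Stock paper) interpolates between the averaged fine-tuned weights and the base weights, with a ratio derived from the angle between the fine-tuned checkpoints' task vectors. A simplified per-tensor sketch in NumPy, assuming the paper's ratio t = k·cosθ / (1 + (k−1)·cosθ); the real mergekit implementation handles more detail, such as the filter_wise option used in the configuration:

```python
import numpy as np

def model_stock_layer(base, finetuned, eps=1e-8):
    # base: weight tensor of the base model; finetuned: list of k tensors
    k = len(finetuned)
    deltas = [w - base for w in finetuned]  # task vectors
    # Average pairwise cosine similarity between task vectors
    cosines = []
    for i in range(k):
        for j in range(i + 1, k):
            a, b = deltas[i].ravel(), deltas[j].ravel()
            cosines.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))
    cos_theta = float(np.mean(cosines))
    # Interpolation ratio from the Model Stock paper
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    w_avg = np.mean(finetuned, axis=0)
    return t * w_avg + (1 - t) * base
```

Intuitively, when the fine-tuned models agree (small angle), the merge trusts their average; when they disagree (large angle), it pulls the result back toward the base model.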

Musecuilo is a merge of the following models using mergekit:

- mistralai/Mistral-Nemo-Instruct-2407 (base)
- allura-org/Tlacuilo-12B
- LatitudeGames/Muse-12B

🧩 Configuration

```yaml
architecture: MistralForCausalLM
base_model: B:/12B/mistralai--Mistral-Nemo-Instruct-2407
models:
  - model: B:/12B/allura-org--Tlacuilo-12B
  - model: B:/12B/LatitudeGames--Muse-12B
merge_method: model_stock
parameters:
  filter_wise: true
dtype: float32
out_dtype: bfloat16
tokenizer:
  source: B:/12B/LatitudeGames--Muse-12B
name: Musecuilo-12B-Model_Stock
```
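For reference, a config like this is typically run with mergekit's `mergekit-yaml` CLI. This is a sketch, assuming mergekit is installed and the YAML above is saved locally; the `B:/12B/...` paths refer to the author's local model copies and would need to exist (or be replaced with Hub IDs):

```shell
# Assumes: pip install mergekit, and the config above saved as musecuilo.yaml
mergekit-yaml musecuilo.yaml ./Musecuilo-12B-Model_Stock --cuda
```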