Use with the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Vortex5/Radiant-Shadow-12B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
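
The pipeline call returns the full conversation. In recent transformers versions, passing chat-style messages makes `generated_text` a list of message dicts, so the reply is the last entry (this return format is version-dependent; check your installed version):

result = pipe(messages, max_new_tokens=40)
# With chat-style input, generated_text holds the whole conversation;
# the last entry is the model's newly generated reply.
print(result[0]["generated_text"][-1]["content"])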
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Vortex5/Radiant-Shadow-12B")
model = AutoModelForCausalLM.from_pretrained("Vortex5/Radiant-Shadow-12B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
	messages,
	add_generation_prompt=True,
	tokenize=True,
	return_dict=True,
	return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
# Decode only the newly generated tokens, skipping the prompt portion
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
Radiant-Shadow-12B

This is a merge of pre-trained language models created using mergekit.

📒 Notes: I had some issues with the ChatML instruction template; the Mistral V7 template works well instead.
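
If the default template misbehaves, one workaround is to pass a custom template string to `apply_chat_template`, reusing the `tokenizer`, `model`, and `messages` from the snippet above. The Jinja template below is only a rough sketch of a Mistral V7-style layout, not the official specification; verify the exact control tokens against the tokenizer before relying on it.

# ASSUMPTION: approximate Mistral V7-style layout; the real control-token
# spelling and spacing may differ from the official template.
mistral_v7_template = (
    "{{ bos_token }}"
    "{% for message in messages %}"
    "{% if message['role'] == 'system' %}"
    "[SYSTEM_PROMPT]{{ message['content'] }}[/SYSTEM_PROMPT]"
    "{% elif message['role'] == 'user' %}"
    "[INST]{{ message['content'] }}[/INST]"
    "{% else %}"
    "{{ message['content'] }}{{ eos_token }}"
    "{% endif %}"
    "{% endfor %}"
)

inputs = tokenizer.apply_chat_template(
    messages,
    chat_template=mistral_v7_template,  # override the tokenizer's built-in template
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))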

Merge Details

Merge Method

This model was merged using the Passthrough merge method, which stacks layer slices from the source models directly rather than averaging their weights.

Models Merged

The following models were included in the merge:

- Vortex5/Lunar-Nexus-12B
- Retreatcost/KansenSakura-Radiance-RP-12b
- Vortex5/Shadow-Crystal-12B

Configuration

The following YAML configuration was used to produce this model:

slices:
- sources:
  - model: Vortex5/Lunar-Nexus-12B
    layer_range: [0, 17]

- sources:
  - model: Retreatcost/KansenSakura-Radiance-RP-12b
    layer_range: [17, 31]

- sources:
  - model: Vortex5/Shadow-Crystal-12B
    layer_range: [31, 40]
merge_method: passthrough
dtype: bfloat16
tokenizer:
  source: union
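
Note that mergekit `layer_range` bounds are Python-slice style (end-exclusive), so the three slices contribute 17 + 14 + 9 = 40 layers in total, the full depth of a Mistral Nemo-style 12B model. To reproduce the merge, save the YAML above to a file and pass it to mergekit's `mergekit-yaml` CLI.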