Aletheia-12B

Aletheia-12B is a merge of five 12B models using mergekit; the full list is in the Configuration section below.

This was an experiment in getting a more intelligent and creative 12B model for my own personal use, and I decided to put it out into the wild.

Feel free to merge it or go wild with it!

Recommended Settings

These are the settings I personally use; feel free to adjust them to your needs.

  • Instruction Template: ChatML
  • Temperature: 1.0
  • Min-P: 0.05
  • Repetition Penalty: 1.05
  • DRY Sampler: Multiplier 0.8, Base 1.75
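
The recommended settings above map directly onto common sampler parameters. As a minimal sketch (assuming a llama.cpp-style server, whose API exposes DRY via `dry_multiplier` and `dry_base`), they could be packed into a completion request payload like this:

```python
import json

# Recommended sampler settings from the list above, using
# llama.cpp-style server parameter names (an assumption; adjust
# the keys for your backend of choice).
settings = {
    "temperature": 1.0,
    "min_p": 0.05,
    "repeat_penalty": 1.05,
    "dry_multiplier": 0.8,
    "dry_base": 1.75,
}

def build_payload(prompt: str) -> str:
    """Combine a prompt with the recommended sampler settings."""
    return json.dumps({"prompt": prompt, **settings})

payload = build_payload(
    "<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n"
)
print(payload)
```

Frontends like text-generation-webui or SillyTavern expose the same knobs in their sampler panels, so no code is needed there.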

Prompt Template (ChatML)

```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{user_message}<|im_end|>
<|im_start|>assistant
```
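
If you are driving the model from code rather than a frontend, the template above can be filled in with a small helper (the function name is mine, not part of the model card):

```python
def chatml_prompt(system_message: str, user_message: str) -> str:
    """Build a ChatML prompt string matching the template above."""
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = chatml_prompt("You are a helpful assistant.", "Hello!")
print(prompt)
```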

Possible Bugs/Issues

  • May still produce occasional refusals.
  • Can be repetitive in my personal testing.
  • In my testing it tends to start speaking as {user} past roughly 10k context, but your experience may vary.

Examples

{TBA}

Credits & Acknowledgements

This model wouldn't exist without the incredible work of the open-source community:

  • Quantization: Huge thanks to mradermacher for providing the high-quality GGUF and iMatrix quants.
  • Base Models: Thanks to the creators of the constituent parts:
    • yamatazen (FusionEngine/EsotericSage)
    • ohyeah1 (Violet-Lyra)
    • redrix (AngelSlayer/Patricide)
  • Tools: Merged using LazyMergekit by Maxime Labonne, a fantastic tool!

Configuration

```yaml
models:
  - model: Dogoo3/MN-HyperNovaIrix-12B
  - model: ohyeah1/Violet-Lyra-Gutenberg-v2
  - model: redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS
  - model: yamatazen/FusionEngine-12B-Lorablated
  - model: redrix/patricide-12B-Unslop-Mell
merge_method: model_stock
base_model: Dogoo3/MN-HyperNovaIrix-12B
normalize: false
dtype: bfloat16
```
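
To reproduce the merge, the config is passed to mergekit's CLI (for example `mergekit-yaml aletheia.yaml ./Aletheia-12B`; the file and output names here are hypothetical). As a minimal sketch, the same config expressed as a Python dict, with a couple of sanity checks one might run before kicking off the merge:

```python
# The merge configuration above as a Python dict (mirrors the YAML;
# structure follows mergekit's config schema).
config = {
    "models": [
        {"model": "Dogoo3/MN-HyperNovaIrix-12B"},
        {"model": "ohyeah1/Violet-Lyra-Gutenberg-v2"},
        {"model": "redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS"},
        {"model": "yamatazen/FusionEngine-12B-Lorablated"},
        {"model": "redrix/patricide-12B-Unslop-Mell"},
    ],
    "merge_method": "model_stock",
    "base_model": "Dogoo3/MN-HyperNovaIrix-12B",
    "normalize": False,
    "dtype": "bfloat16",
}

# Basic sanity checks: every entry names a repo in "owner/name" form,
# and the required model_stock fields are present.
names = [m["model"] for m in config["models"]]
assert all("/" in n for n in names)
assert config["merge_method"] == "model_stock"
assert "base_model" in config
print(len(names))
```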