Bulldog

This model is a merge of two models. In personal testing, it performs very well on several tasks; I may perform more rigorous testing in the future. The model can squeeze out some tokens on 'lighter' hardware, but 48 GB of VRAM or more is recommended for fast results.

Usage

Utilises the Llama 3 prompt format.
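For reference, a minimal sketch of assembling a single-turn prompt in the standard Llama 3 chat format (the helper function name is illustrative; in practice the tokenizer's chat template does this for you):

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn Llama 3 chat prompt string
    using the standard Llama 3 special tokens."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # Generation continues from the open assistant header.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt("You are a helpful assistant.", "Hello!")
print(prompt)
```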

Config

base_model: meta-llama/Meta-Llama-3-8B
dtype: bfloat16
gate_mode: hidden
experts:
 - source_model: mlabonne/NeuralDaredevil-8B-abliterated
   positive_prompts:
   - "Chat"
   - "Assistant"
   - "Maths"
   - "Code"
 - source_model: Sao10K/L3-8B-Stheno-v3.2
   positive_prompts:
   - "Roleplay"
   - "Story"
   - "NSFW"
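The config above is a mergekit MoE config: two 8B experts whose routing is learned from the hidden states of the listed positive prompts (`gate_mode: hidden`). A sketch of how such a merge is typically produced, assuming the config is saved as `config.yaml` and that your mergekit install exposes the `mergekit-moe` entry point (the output directory name is illustrative):

```shell
# Install mergekit, then run the MoE merge described by config.yaml.
pip install mergekit
mergekit-moe config.yaml ./bulldog-out
```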
Model size: 14B params · Tensor type: BF16 (safetensors)