How to use from the Transformers library
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="CorticalStack/shadow-clown-7B-dare")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("CorticalStack/shadow-clown-7B-dare")
model = AutoModelForCausalLM.from_pretrained("CorticalStack/shadow-clown-7B-dare")
```

shadow-clown-7B-dare

shadow-clown-7B-dare is a DARE merge of the following models using mergekit:

- yam-peleg/Experiment26-7B
- CorticalStack/pastiche-crown-clown-7b-dare-dpo
- CultriX/NeuralTrix-7B-dpo
- CorticalStack/neurotic-crown-clown-7b-ties

See the paper Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch for more on the method.
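The core DARE operation the paper describes — randomly dropping delta parameters (finetuned minus base) with probability `1 - density` and rescaling the survivors by `1 / density` so the expected delta is unchanged — can be sketched in NumPy. This is a minimal illustration of the idea, not mergekit's implementation; `dare_delta` is a hypothetical helper name.

```python
import numpy as np

def dare_delta(base, finetuned, density, rng):
    """DARE sketch: drop delta entries with prob (1 - density),
    rescale the kept entries by 1/density to preserve the expected delta."""
    delta = finetuned - base
    keep = rng.random(delta.shape) < density  # Bernoulli(density) mask
    return np.where(keep, delta / density, 0.0)

rng = np.random.default_rng(0)
base = np.zeros(10_000)
finetuned = np.ones(10_000)

# With density 0.52 (as in the config below), surviving entries become
# 1/0.52 ≈ 1.923 while roughly 48% of entries are zeroed out.
sparse = dare_delta(base, finetuned, density=0.52, rng=rng)
```

In the merge config, each model's `density` controls this drop rate and `weight` scales its rescaled delta before the deltas are summed onto the base model.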

🧩 Configuration

```yaml
models:
  - model: yam-peleg/Experiment26-7B
  - model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
    parameters:
      density: 0.52
      weight: 0.4
  - model: CultriX/NeuralTrix-7B-dpo
    parameters:
      density: 0.52
      weight: 0.2
  - model: CorticalStack/neurotic-crown-clown-7b-ties
    parameters:
      density: 0.52
      weight: 0.3
merge_method: dare_ties
base_model: yam-peleg/Experiment26-7B
parameters:
  int8_mask: true
dtype: bfloat16
```
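With the configuration above saved to a file, the merge can be reproduced with mergekit's `mergekit-yaml` command-line entry point — a sketch assuming mergekit is installed; the config filename and output directory are illustrative.

```shell
pip install mergekit
# config.yml holds the YAML configuration above; ./shadow-clown-7B-dare is an
# arbitrary output directory. Note this downloads and merges several
# 7B-parameter models, so it needs substantial disk space and memory.
mergekit-yaml config.yml ./shadow-clown-7B-dare --cuda
```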
