They're a bit confused, but they got the spirit :)

Mostly wanted a capable version of Qwen3.5-27B at hand who wasn't too strait-laced. They'd probably benefit from a dash more training to make their weights cohere. Sometimes they toss medical reasoning traces into unrelated contexts. Sometimes they forget to produce a final answer after closing their reasoning, or tuck the answer inside the reasoning instead.

Seem neat, tho.

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the WAVE merge method, with Qwen/Qwen3.5-27B as the base.
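To reproduce or adapt the merge, the usual mergekit workflow is to save the configuration below to a file and pass it to the `mergekit-yaml` entry point. The filename and output directory here are placeholders, and the `wave` method must be available in whatever mergekit build you have installed:

```sh
pip install mergekit
mergekit-yaml enteles-v2.yml ./merged-output
```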

### Models Merged

The following models were included in the merge:

- ValiantLabs/Qwen3.5-27B-Guardpoint
- ConicCat/Qwen3.5-27B-Writer
- llmfan46/Qwen3.5-27B-heretic-v3

### Configuration

The following YAML configuration was used to produce this model:

```yaml
# Enteles v2: WAVE merge, 3 models (dropped Gliese LLM weights)
# Vision weights grafted from Gliese post-merge via graft_vision.py
# heretic-v3 (Arbitrary-Rank Ablation) replaces v2 — 83.3 eq_bench vs 64.4
#
# Post-merge:
#   python graft_vision.py --donor <Gliese-local-path> --output ./merged-output --vision-prefix model.visual
#   python pad_embeddings.py --model ./merged-output --target-vocab 248320
models:
  - model: ValiantLabs/Qwen3.5-27B-Guardpoint
    parameters:
      weight: 0.35
  - model: ConicCat/Qwen3.5-27B-Writer
    parameters:
      weight: 0.35
  - model: llmfan46/Qwen3.5-27B-heretic-v3
    parameters:
      weight: 0.3
merge_method: wave
base_model: Qwen/Qwen3.5-27B
parameters:
  synergy: 0.6
  entropy: 0.05
dtype: bfloat16
tokenizer_source: Qwen/Qwen3.5-27B
pad_to_multiple_of: 256
```
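
The post-merge scripts referenced in the comments aren't published with this card. As a rough illustration only, here is a minimal sketch of what they describe, assuming single-shard safetensors checkpoints; the real `graft_vision.py` and `pad_embeddings.py` may handle sharding, index files, and config updates that this ignores:

```python
# Hypothetical sketch of the post-merge steps named in the YAML comments.
# ASSUMPTION: single-shard .safetensors files; the actual scripts may differ.
from safetensors.torch import load_file, save_file
from transformers import AutoModelForCausalLM


def graft_vision(donor_file: str, merged_file: str,
                 vision_prefix: str = "model.visual") -> None:
    """Copy every vision tensor (matched by name prefix) from the donor
    checkpoint into the merged checkpoint, overwriting any existing entries."""
    donor = load_file(donor_file)
    merged = load_file(merged_file)
    for name, tensor in donor.items():
        if name.startswith(vision_prefix):
            merged[name] = tensor
    save_file(merged, merged_file)


def pad_embeddings(model_dir: str, target_vocab: int = 248320) -> None:
    """Grow the input/output embedding matrices to the target vocabulary size.
    248320 = 970 * 256, consistent with pad_to_multiple_of: 256 above.
    AutoModelForCausalLM is an assumption; a vision-language checkpoint may
    need a different Auto class."""
    model = AutoModelForCausalLM.from_pretrained(model_dir)
    model.resize_token_embeddings(target_vocab)
    model.save_pretrained(model_dir)
```

A real merged 27B checkpoint will be sharded, so an actual script would iterate over the shards listed in `model.safetensors.index.json` rather than loading a single file.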