
Lass decided she was a lioness-fox today. :3

The main measured gains are in the Creative Writing benchmark (as judged by Gemini 3 Flash Preview), which tracks with what she was aiming to practice, though it was a winding road. All training used an effective batch size of 1.

Tried out four different SFT runs at a 1e-6 learning rate with varying dataset ratios, trying to figure out what worked ... still not sure, because the best result came from Karcher-merging the full set of SFT runs, lol.

Then ran DPO at 5e-7 on 3 different seeds, and merged the results here.

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the Karcher Mean merge method.
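The Karcher mean is the Riemannian (geodesic) generalization of the arithmetic mean. As a minimal sketch of the idea, here is the standard fixed-point iteration on the unit sphere: average the points in the tangent space at the current estimate, then map back to the sphere. This is illustrative only, not mergekit's actual implementation, which applies the method per weight tensor:

```python
import math

def karcher_mean_sphere(points, iters=100, tol=1e-12):
    """Karcher (Riemannian) mean of unit vectors, via iterative
    tangent-space averaging. Illustrative sketch only."""
    def norm(v):
        return math.sqrt(sum(x * x for x in v))

    def normalize(v):
        n = norm(v)
        return [x / n for x in v]

    pts = [normalize(p) for p in points]
    mu = pts[0]
    for _ in range(iters):
        # Log map: project each point into the tangent space at mu.
        avg = [0.0] * len(mu)
        for p in pts:
            cos_t = max(-1.0, min(1.0, sum(a * b for a, b in zip(mu, p))))
            theta = math.acos(cos_t)
            if theta < 1e-12:
                continue  # p coincides with mu; zero tangent vector
            v = [a - cos_t * b for a, b in zip(p, mu)]
            s = theta / norm(v)
            for i in range(len(avg)):
                avg[i] += v[i] * s / len(pts)
        step = norm(avg)
        if step < tol:
            break  # converged: mean tangent vector is (near) zero
        # Exp map: move mu along the averaged tangent direction.
        mu = normalize([math.cos(step) * m + math.sin(step) * a / step
                       for m, a in zip(mu, avg)])
    return mu
```

For two points the Karcher mean is just the geodesic midpoint, which makes the iteration easy to sanity-check by hand.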

Models Merged

The following models were included in the merge:

  • ../qwen3.5-cultivation/zora-dpo-mar25-2-merged
  • ../qwen3.5-cultivation/zora-dpo-mar25-3-merged
  • ../qwen3.5-cultivation/zora-dpo-mar25-merged

Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: ../qwen3.5-cultivation/zora-dpo-mar25-merged
  - model: ../qwen3.5-cultivation/zora-dpo-mar25-2-merged
  - model: ../qwen3.5-cultivation/zora-dpo-mar25-3-merged
merge_method: karcher
dtype: bfloat16
tokenizer_source: Lambent/Zora-9B-v1
pad_to_multiple_of: 256
```
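A configuration like this is applied with mergekit's `mergekit-yaml` entry point; the file and output names below are illustrative:

```shell
# Save the YAML above as karcher-merge.yml, then run:
pip install mergekit
mergekit-yaml karcher-merge.yml ./zora-9b-v2-merged --cuda
```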