---
license: apache-2.0
base_model: BoltMonkey/SuperNeuralDreadDevil-8b
datasets:
  - Nitral-Archive/Cosmopedia-Instruct-60k-Distilled-R1-70B-ShareGPT-Reason-Tags
pipeline_tag: text-generation
tags:
  - qlora
  - 4bit
  - merged
---

# boltmonkey_shortreasoning-8b

A merged checkpoint: the QLoRA adapter weights have been folded into the base model, so no separate adapter is needed at inference time.
Fine-tuned for short-form chain-of-thought reasoning.

- Base model: [BoltMonkey/SuperNeuralDreadDevil-8b](https://huggingface.co/BoltMonkey/SuperNeuralDreadDevil-8b)
- Dataset: Cosmopedia-Instruct 60k (ShareGPT style)
- Context length: 1096 tokens
- Training: 4 epochs, LoRA r = 32, α = 16, dropout 0.05, fp16, 4-bit quantization
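For readers unfamiliar with what "merged QLoRA adapter plus base weights" means in practice: merging folds the low-rank update `(α/r)·B·A` into the frozen base weight, using the r = 32, α = 16 values listed above. A minimal NumPy sketch, with a hypothetical hidden size for illustration (the real model's dimensions differ):

```python
import numpy as np

# Hypothetical dimensions for illustration only.
d, r, alpha = 64, 32, 16          # rank and alpha from the training config above
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))   # frozen base weight
A = rng.standard_normal((r, d))   # LoRA down-projection
B = np.zeros((d, r))              # LoRA up-projection (zero-initialised before training)

# Merging folds the low-rank update into the base weight:
#   W_merged = W + (alpha / r) * B @ A
scaling = alpha / r               # 16 / 32 = 0.5
W_merged = W + scaling * (B @ A)

# With B still zero-initialised, the merge is a no-op; after training,
# B @ A carries the learned update.
assert np.allclose(W_merged, W)
```

Note that α/r = 0.5 here, so the learned update is down-weighted relative to the base weights at merge time.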

See train_args.json for the full Axolotl config.
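The dataset listed above is in ShareGPT style. As a sketch of what that typically looks like and how trainers map it onto chat-template roles, here is a hypothetical record; the field names follow the common ShareGPT convention (`conversations` with `from`/`value` keys), not necessarily this dataset's exact schema:

```python
# Hypothetical ShareGPT-style record, for illustration only.
record = {
    "conversations": [
        {"from": "human", "value": "What is 17 * 3?"},
        {"from": "gpt", "value": "17 * 3 = 51. The answer is 51."},
    ]
}

def to_chat(rec):
    """Map ShareGPT roles onto the role names most chat templates expect."""
    role_map = {"system": "system", "human": "user", "gpt": "assistant"}
    return [
        {"role": role_map[turn["from"]], "content": turn["value"]}
        for turn in rec["conversations"]
    ]

messages = to_chat(record)
# → [{"role": "user", ...}, {"role": "assistant", ...}]
```

The resulting `messages` list is the shape accepted by `tokenizer.apply_chat_template` in transformers.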