---
license: apache-2.0
language:
  - en
tags:
  - gguf
  - dualmind
  - knowledge-distillation
  - self-critique
  - convergent-intelligence
  - convergentintel
  - distillation
  - edge
base_model: reaperdoesntknow/DualMind
model_name: DualMind-GGUF
---

# DualMind-GGUF

GGUF quantizations of DualMind for local inference via llama.cpp, Ollama, LM Studio, and other GGUF-compatible runtimes.

Convergent Intelligence LLC: Research Division

## Available Quantizations

| File | Quant | Size | Use Case |
|------|-------|------|----------|
| DualMind-f16.gguf | F16 | ~3.4 GB | Full precision, reference quality |
| DualMind-Q8_0.gguf | Q8_0 | ~1.8 GB | Near-lossless, recommended for GPU |
| DualMind-Q5_K_M.gguf | Q5_K_M | ~1.3 GB | Balanced quality/size |
| DualMind-Q4_K_M.gguf | Q4_K_M | ~1.1 GB | Best for CPU/edge deployment |

## What Is DualMind?

DualMind is a 1.7B parameter model that implements a dual-cognition reasoning architecture:

```
<explore>  — unconstrained reasoning, derivation, speculation
<examine>  — adversarial self-critique, error detection
<response> — clean synthesis from the internal dialogue
```

The model learns to reason freely, critique its own reasoning, and then produce a final answer: a multi-model dialectic collapsed into a single set of shared weights.
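As a sketch of how a caller might separate the three stages, the tags described above can be pulled out of raw model output with a small parser. The helper below is illustrative only (its name and behavior are not part of the model's tooling), and it tolerates a missing closing tag, since sampled output can be truncated:

```python
import re

def parse_dualmind(output: str) -> dict:
    """Split DualMind output into its explore/examine/response stages.

    Returns an empty string for any stage the model omitted; an
    unclosed tag is read through to the end of the output.
    """
    stages = {}
    for tag in ("explore", "examine", "response"):
        match = re.search(rf"<{tag}>(.*?)(?:</{tag}>|$)", output, re.DOTALL)
        stages[tag] = match.group(1).strip() if match else ""
    return stages
```

In practice, an application would show or log the `explore` and `examine` stages for debugging and surface only the `response` stage to the user.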

Training lineage: Qwen3-1.7B → DistilQwen3 (uncensored) → Disctil (DISC-refined) → TKD from Qwen3-30B-A3B-Thinking → DualMind SFT on LogicInference_OA dataset.

## Quick Start

**Ollama:**

```bash
# Already published:
ollama run reaperdoesntknow/DualMinded-1.7B

# Or from a local GGUF:
ollama create dualmind -f Modelfile
```
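If building from a local GGUF, a minimal Modelfile might look like the following. This is a sketch: the `FROM` path is whichever quantization you downloaded, and the parameter values simply mirror the recommended settings in this card:

```
FROM ./DualMind-Q4_K_M.gguf
PARAMETER temperature 0.6
PARAMETER top_p 0.9
PARAMETER repeat_penalty 1.3
```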

**llama.cpp:**

```bash
./llama-cli -m DualMind-Q4_K_M.gguf \
  -p "##USER:\nProve that every convergent sequence is Cauchy.\n\n<explore>\n" \
  --temp 0.6 --top-p 0.9 --repeat-penalty 1.3 -n 512
```

**Recommended parameters:**

- `temperature`: 0.6
- `top_p`: 0.9
- `repeat_penalty`: 1.3 (important: prevents enumeration loops)
- `num_predict`: 512–1024
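The prompt framing shown in the llama.cpp example above can also be built programmatically. The helper below is an illustrative sketch: the `##USER:` prefix and the trailing open `<explore>` tag are taken from that example, not from a formal prompt specification:

```python
def build_prompt(question: str) -> str:
    """Wrap a user question in DualMind's prompt framing.

    The model is expected to continue from the open <explore> tag,
    then emit its <examine> and <response> stages on its own.
    """
    return f"##USER:\n{question}\n\n<explore>\n"

prompt = build_prompt("Prove that every convergent sequence is Cauchy.")
```

The same string can be passed to any GGUF runtime that accepts a raw prompt, with the sampling parameters listed above.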

## Related

### Mathematical Foundations

This is a GGUF-quantized variant. The mathematical foundations (Discrepancy Calculus, Topological Knowledge Distillation) are documented in the source model's card. The discrepancy operator $Df(x)$ and BV decomposition that inform the training pipeline are preserved through quantization — the structural boundaries detected by DISC during training are baked into the weights, not dependent on precision.

## Citation

```bibtex
@misc{colca2026dualmind,
  title={From Three Teachers to Dual Cognition},
  author={Colca, Roy S.},
  year={2026},
  publisher={HuggingFace},
  url={https://doi.org/10.57967/hf/8184}
}
```

Convergent Intelligence LLC: Research Division — Apache 2.0


## Convergent Intelligence Portfolio

*Part of the DualMind Series by Convergent Intelligence LLC: Research Division*

### DualMind Family

| Model | Format | Description |
|-------|--------|-------------|
| DualMind | BF16 | LogicInference-trained. Explore→Examine→Response loop. |
| DualMinded-Qwen3-1.7B | BF16 | Opus 4.6 reasoning traces. Higher-quality splits. |
| Dualmind-Qwen-1.7B-Thinking | BF16 | Thinking-teacher variant with extended deliberation. |
| DualMind-GGUF | GGUF | Quantized LogicInference variant. CPU/6 GB GPU. |
| DualMinded-Qwen3-1.7B-GGUF | GGUF | Quantized Opus variant. Ollama-ready. |

### Papers

| Paper | DOI |
|-------|-----|
| Structure Over Scale | 10.57967/hf/8165 |
| Three Teachers to Dual Cognition | 10.57967/hf/8184 |
| Discrepancy Calculus | 10.57967/hf/8194 |

Last updated: 2026-03-31 by Convergent Intelligence LLC: Research Division