DualMind-GGUF

GGUF quantizations of DualMind for local inference via llama.cpp, Ollama, LM Studio, and other GGUF-compatible runtimes.

Convergent Intelligence LLC: Research Division

Available Quantizations

| File | Quant | Size | Use Case |
|------|-------|------|----------|
| DualMind-f16.gguf | F16 | ~3.4 GB | Full precision, reference quality |
| DualMind-Q8_0.gguf | Q8_0 | ~1.8 GB | Near-lossless, recommended for GPU |
| DualMind-Q5_K_M.gguf | Q5_K_M | ~1.3 GB | Balanced quality/size |
| DualMind-Q4_K_M.gguf | Q4_K_M | ~1.1 GB | Best for CPU/edge deployment |

What Is DualMind?

DualMind is a 1.7B parameter model that implements a dual-cognition reasoning architecture:

<explore>  — unconstrained reasoning, derivation, speculation
<examine>  — adversarial self-critique, error detection
<response> — clean synthesis from the internal dialogue

The model learns to reason freely, critique its own reasoning, and then produce a final answer: a multi-model dialectic collapsed into a single set of shared weights.
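The three-phase output can be split programmatically. A minimal sketch in Python: the tag names come from the list above, but the regex-based splitter itself is an illustrative assumption, not an official API of the model.

```python
import re

def split_phases(completion: str) -> dict:
    """Split a DualMind completion into its three cognition phases.

    Tag names (<explore>, <examine>, <response>) come from the model card;
    this regex splitter is a sketch, not part of the model's tooling.
    """
    phases = {}
    for tag in ("explore", "examine", "response"):
        # Capture everything between <tag> and the next phase tag (or end of text).
        m = re.search(
            rf"<{tag}>(.*?)(?=<(?:explore|examine|response)>|$)",
            completion,
            flags=re.DOTALL,
        )
        phases[tag] = m.group(1).strip() if m else ""
    return phases

sample = (
    "<explore>\nTry induction.\n"
    "<examine>\nBase case missing.\n"
    "<response>\nUse induction; verify n=1 first."
)
print(split_phases(sample)["response"])  # prints: Use induction; verify n=1 first.
```

This lets a wrapper surface only the `<response>` block to end users while keeping the `<explore>`/`<examine>` trace available for inspection.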

Training lineage: Qwen3-1.7B → DistilQwen3 (uncensored) → Disctil (DISC-refined) → TKD from Qwen3-30B-A3B-Thinking → DualMind SFT on LogicInference_OA dataset.

Quick Start

Ollama:

# Already published:
ollama run reaperdoesntrun/DualMinded-1.7B

# Or from GGUF:
ollama create dualmind -f Modelfile
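
The `ollama create` path expects a Modelfile next to the GGUF. A minimal sketch, assuming `DualMind-Q4_K_M.gguf` sits in the current directory (filename and parameter values are taken from this card; swap in another quant as needed):

```
FROM ./DualMind-Q4_K_M.gguf

# Mirror the recommended sampling parameters from this card.
PARAMETER temperature 0.6
PARAMETER top_p 0.9
PARAMETER repeat_penalty 1.3
PARAMETER num_predict 512
```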

llama.cpp:

./llama-cli -m DualMind-Q4_K_M.gguf \
  -p "##USER:\nProve that every convergent sequence is Cauchy.\n\n<explore>\n" \
  --temp 0.6 --top-p 0.9 --repeat-penalty 1.3 -n 512

Recommended parameters:

  • temperature: 0.6
  • top_p: 0.9
  • repeat_penalty: 1.3 (important — prevents enumeration loops)
  • num_predict: 512–1024
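
The same settings carry over to Python via the llama-cpp-python bindings (using these bindings is an assumption of this sketch, not something the card prescribes). The prompt builder mirrors the `##USER:` format from the llama.cpp example above, and the model path is a placeholder:

```python
# Recommended sampling settings from this card, in llama-cpp-python's
# keyword-argument names (repeat_penalty, max_tokens).
SAMPLING = dict(temperature=0.6, top_p=0.9, repeat_penalty=1.3, max_tokens=512)

def build_prompt(question: str) -> str:
    """Mirror the ##USER: prompt format from the llama.cpp example,
    pre-opening the <explore> phase so the model begins its reasoning there."""
    return f"##USER:\n{question}\n\n<explore>\n"

def run(model_path: str, question: str) -> str:
    """Requires `pip install llama-cpp-python` and a local GGUF file;
    not executed in this sketch."""
    from llama_cpp import Llama
    llm = Llama(model_path=model_path)
    out = llm(build_prompt(question), **SAMPLING)
    return out["choices"][0]["text"]
```

Example usage: `run("DualMind-Q4_K_M.gguf", "Prove that every convergent sequence is Cauchy.")`.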

Citation

@misc{colca2026dualmind,
  title={From Three Teachers to Dual Cognition},
  author={Colca, Roy S.},
  year={2026},
  publisher={HuggingFace},
  url={https://doi.org/10.57967/hf/8184}
}

Convergent Intelligence LLC: Research Division — Apache 2.0
