---
license: apache-2.0
language:
- en
tags:
- gguf
- dualmind
- knowledge-distillation
- self-critique
- convergent-intelligence
- convergentintel
- distillation
- edge
base_model: reaperdoesntknow/DualMind
model_name: DualMind-GGUF
---

# DualMind-GGUF

GGUF quantizations of [DualMind](https://huggingface.co/reaperdoesntknow/DualMind) for local inference via [llama.cpp](https://github.com/ggml-org/llama.cpp), [Ollama](https://ollama.com), [LM Studio](https://lmstudio.ai), and other GGUF-compatible runtimes.

**Convergent Intelligence LLC: Research Division**

## Available Quantizations

| File | Quant | Size | Use Case |
|------|-------|------|----------|
| `DualMind-f16.gguf` | F16 | ~3.4 GB | Full precision, reference quality |
| `DualMind-Q8_0.gguf` | Q8_0 | ~1.8 GB | Near-lossless, recommended for GPU |
| `DualMind-Q5_K_M.gguf` | Q5_K_M | ~1.3 GB | Balanced quality/size |
| `DualMind-Q4_K_M.gguf` | Q4_K_M | ~1.1 GB | Best for CPU/edge deployment |

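Q8_0's "near-lossless" quality comes from its blockwise scheme: weights are stored as int8 values in 32-element blocks, with one scale per block. A minimal pure-Python sketch of that round-trip (illustrative only; the real GGUF kernels store fp16 scales and use optimized layouts):

```python
def q8_0_roundtrip(xs, block=32):
    """Quantize to int8 with one scale per 32-value block (Q8_0-style), then dequantize."""
    out = []
    for i in range(0, len(xs), block):
        blk = xs[i:i + block]
        scale = max(abs(v) for v in blk) / 127.0 or 1.0  # guard all-zero blocks
        out.extend(round(v / scale) * scale for v in blk)
    return out

weights = [0.5, -1.0, 0.25, 0.75] * 8           # one 32-value block
restored = q8_0_roundtrip(weights)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(f"max abs error: {max_err:.5f}")           # bounded by scale/2 per block
```

The per-weight error is at most half the block scale, which is why Q8_0 tracks F16 closely while halving the file size.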
## What Is DualMind?

DualMind is a 1.7B-parameter model that implements a dual-cognition reasoning architecture:

```
<explore> — unconstrained reasoning, derivation, speculation
<examine> — adversarial self-critique, error detection
<response> — clean synthesis from the internal dialogue
```

The model learns to reason freely, critique its own reasoning, and then produce a final answer: a multi-model dialectic collapsed into shared weights.

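Downstream code usually wants only the final answer. A minimal parser for the three-tag format (assuming the model emits matching closing tags; the helper name is illustrative):

```python
import re

def split_dual_cognition(text):
    """Split a DualMind completion into its <explore>/<examine>/<response> sections."""
    sections = {}
    for tag in ("explore", "examine", "response"):
        m = re.search(rf"<{tag}>(.*?)</{tag}>", text, re.DOTALL)
        sections[tag] = m.group(1).strip() if m else None
    return sections

completion = (
    "<explore>Try a direct epsilon-N argument.</explore>"
    "<examine>The triangle-inequality step needs N = max(N1, N2).</examine>"
    "<response>Every convergent sequence is Cauchy; see the epsilon/2 argument.</response>"
)
print(split_dual_cognition(completion)["response"])
```

Missing sections come back as `None`, so callers can fall back to the raw text when the model truncates mid-tag.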
**Training lineage:** Qwen3-1.7B → DistilQwen3 (uncensored) → Disctil (DISC-refined) → TKD from Qwen3-30B-A3B-Thinking → DualMind SFT on the LogicInference_OA dataset.

## Quick Start

**Ollama:**
```bash
# Already published:
ollama run reaperdoesntknow/DualMinded-1.7B

# Or from a local GGUF:
ollama create dualmind -f Modelfile
```

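A minimal `Modelfile` for the `ollama create` step, wiring in the recommended parameters (paths are an assumption; adjust to wherever the GGUF was downloaded, and the chat template is left to Ollama's defaults):

```
FROM ./DualMind-Q4_K_M.gguf
PARAMETER temperature 0.6
PARAMETER top_p 0.9
PARAMETER repeat_penalty 1.3
PARAMETER num_predict 512
```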
**llama.cpp:**
```bash
./llama-cli -m DualMind-Q4_K_M.gguf \
  -p "##USER:\nProve that every convergent sequence is Cauchy.\n\n<explore>\n" \
  --temp 0.6 --top-p 0.9 --repeat-penalty 1.3 -n 512
```

**Recommended parameters:**
- `temperature`: 0.6
- `top_p`: 0.9
- `repeat_penalty`: 1.3 (important — prevents enumeration loops)
- `num_predict`: 512–1024

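The same settings carry over to llama.cpp's HTTP server (`llama-server -m DualMind-Q4_K_M.gguf`), whose `/completion` endpoint accepts `temperature`, `top_p`, `repeat_penalty`, and `n_predict` in the JSON body. A small sketch that builds the request (the helper name is illustrative):

```python
import json

def completion_payload(prompt, n_predict=512):
    """JSON body for llama.cpp's /completion endpoint, using the card's settings."""
    return {
        "prompt": prompt,
        "temperature": 0.6,
        "top_p": 0.9,
        "repeat_penalty": 1.3,
        "n_predict": n_predict,
    }

body = completion_payload("##USER:\nProve that every convergent sequence is Cauchy.\n\n<explore>\n")
print(json.dumps(body)[:60])
```

POST the body to `http://localhost:8080/completion` with any HTTP client; the `content` field of the response holds the completion.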
## Related

- [DualMind](https://huggingface.co/reaperdoesntknow/DualMind) — source model (SafeTensors)
- [DualMinded-Qwen3-1.7B](https://huggingface.co/reaperdoesntknow/DualMinded-Qwen3-1.7B) — Opus-trained variant
- [DualMind_Methodolgy](https://huggingface.co/reaperdoesntknow/DualMind_Methodolgy) — methodology paper (DOI: [10.57967/hf/8184](https://doi.org/10.57967/hf/8184))
- [DualMind Collection](https://huggingface.co/collections/reaperdoesntknow/dualmind)
- [DistilQwen Collection](https://huggingface.co/collections/reaperdoesntknow/distilqwen) — the full distillation chain

## Mathematical Foundations

This is a GGUF-quantized variant; the mathematical foundations (Discrepancy Calculus, Topological Knowledge Distillation) are documented in the source model's card. The discrepancy operator $Df(x)$ and the BV decomposition that inform the training pipeline survive quantization: the structural boundaries detected by DISC during training are baked into the weights and do not depend on numerical precision.

## Citation

```bibtex
@misc{colca2026dualmind,
  title={From Three Teachers to Dual Cognition},
  author={Colca, Roy S.},
  year={2026},
  publisher={HuggingFace},
  url={https://doi.org/10.57967/hf/8184}
}
```

*Convergent Intelligence LLC: Research Division — Apache 2.0*
<!-- card-refresh: 2026-03-30 -->

---

## Convergent Intelligence Portfolio

*Part of the [DualMind Series](https://huggingface.co/collections/reaperdoesntknow/dualmind-69c93f888c6e79ecc69cf41e) by [Convergent Intelligence LLC: Research Division](https://huggingface.co/reaperdoesntknow)*

### DualMind Family

| Model | Format | Description |
|-------|--------|-------------|
| [DualMind](https://huggingface.co/reaperdoesntknow/DualMind) | BF16 | LogicInference-trained. Explore→Examine→Response loop. |
| [DualMinded-Qwen3-1.7B](https://huggingface.co/reaperdoesntknow/DualMinded-Qwen3-1.7B) | BF16 | Opus 4.6 reasoning traces. Higher quality splits. |
| [Dualmind-Qwen-1.7B-Thinking](https://huggingface.co/reaperdoesntknow/Dualmind-Qwen-1.7B-Thinking) | BF16 | Thinking-teacher variant with extended deliberation. |
| [DualMind-GGUF](https://huggingface.co/reaperdoesntknow/DualMind-GGUF) | GGUF | Quantized LogicInference variant. CPU/6GB GPU. |
| [DualMinded-Qwen3-1.7B-GGUF](https://huggingface.co/reaperdoesntknow/DualMinded-Qwen3-1.7B-GGUF) | GGUF | Quantized Opus variant. Ollama ready. |

|
### Papers

| Paper | DOI |
|-------|-----|
| [Structure Over Scale](https://huggingface.co/reaperdoesntknow/Structure-Over-Scale) | 10.57967/hf/8165 |
| [Three Teachers to Dual Cognition](https://huggingface.co/reaperdoesntknow/DualMind_Methodolgy) | 10.57967/hf/8184 |
| [Discrepancy Calculus](https://huggingface.co/reaperdoesntknow/Discrepancy_Calculus) | 10.57967/hf/8194 |

---

*Last updated: 2026-03-31 by Convergent Intelligence LLC: Research Division*
<!-- cix-keeper-ts:2026-04-11T16:08:55Z -->
|