# SymbioticLM-8B

**Model Type:** Hybrid Symbolic–Transformer
**Base Model:** Qwen-8B
**Framework:** PyTorch + Transformers-compatible
**Purpose:** Long-memory symbolic reasoning + high-fidelity language generation
## Overview
SymbioticLM-8B is a state-of-the-art hybrid transformer model with built-in symbolic cognition. It combines an 8B Qwen-based transformer with modular symbolic processors and a persistent memory buffer. The model supports both general conversation and deep symbolic tasks such as theorem generation, logical chaining, and structured reasoning with retained memory across turns.
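A minimal usage sketch follows. The repository id, the need for `trust_remote_code`, and the prompt format are assumptions for illustration, not documented behavior:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repository id; replace with the actual hub path.
# trust_remote_code=True is assumed because the symbolic modules are custom.
repo = "ConvergentIntelligence/SymbioticLM-8B"
tok = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float32, trust_remote_code=True
)

# <THM> is one of the added special tokens; its exact semantics are assumed.
prompt = "<THM> The sum of two even integers is even."
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=256)
print(tok.decode(out[0], skip_special_tokens=False))
```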
## Architecture Highlights
- Backbone: Qwen-8B rotary transformer
- Symbolic Dim: 4096
- Symbolic Modules:
  - ThoughtDynamicsLNN (multi-head LSTM attention)
  - CrystallineProcessor (DNAConv GNN)
  - LiquidThoughtProcessor (recurrent symbol folding)
  - HelicalDNAProcessor (helical linear projection)
- Memory: 2048 symbolic vectors (float32) with entropy-aware retrieval and contextual recall (see the sketch after this list)
- Dream Mode: self-generates symbolic cognition offline
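The card does not specify the retrieval rule, so the following is only one plausible reading of "entropy-aware retrieval": the entropy of the query–memory match distribution controls how many of the 2048 slots are recalled. Everything here (function name, `temperature`, `max_k`) is hypothetical.

```python
import math
import torch
import torch.nn.functional as F

def entropy_aware_retrieve(memory: torch.Tensor, query: torch.Tensor,
                           max_k: int = 8, temperature: float = 0.1):
    """Recall between 1 and max_k memory slots, widening with match entropy.

    memory: (2048, 4096) float32 buffer of symbolic vectors.
    query:  (4096,) query embedding.
    """
    # Cosine similarity between the query and every stored vector.
    sims = F.cosine_similarity(memory, query.unsqueeze(0), dim=-1)
    probs = torch.softmax(sims / temperature, dim=-1)

    # Shannon entropy of the match distribution: low entropy means a few
    # slots dominate (recall narrowly); high entropy spreads recall wider.
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum().item()
    k = max(1, round(max_k * entropy / math.log(memory.size(0))))

    top = torch.topk(sims, k)
    return memory[top.indices], top.values

# Example with random data standing in for a real snapshot.
buffer = torch.randn(2048, 4096)
recalled, scores = entropy_aware_retrieve(buffer, torch.randn(4096))
```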
## Files Included
| File | Description |
|---|---|
| `model.bin` | PyTorch weights (LFS tracked) |
| `model.safetensors` | Same weights in safetensors format (recommended) |
| `memory.pt` | Symbolic memory snapshot (entropic, pretrained) |
| `config.json` | Base model configuration |
| `generation_config.json` | Sampling and decoding config (temperature, top_p, etc.) |
| `tokenizer.json` | Tokenizer data with custom tags and structure |
| `added_tokens.json` | Extra tokens such as `<THM>`, `<PROOF>`, `<D_EPS>` |
| `special_tokens_map.json` | Maps for special tokens used during generation |
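A short sketch of working with these files locally. The memory snapshot's schema is undocumented, so the raw `torch.load` below assumes only that it is a plain PyTorch object:

```python
import torch
from transformers import AutoTokenizer, GenerationConfig

local_dir = "."  # directory containing the files above

# Decoding defaults (temperature, top_p, ...) ship in generation_config.json.
gen_config = GenerationConfig.from_pretrained(local_dir)

# Snapshot schema is not documented; a plain tensor (or dict of tensors)
# is assumed here.
memory = torch.load(f"{local_dir}/memory.pt", map_location="cpu")

# Confirm the added symbolic tags resolve to single token ids.
tok = AutoTokenizer.from_pretrained(local_dir)
print(tok.convert_tokens_to_ids(["<THM>", "<PROOF>", "<D_EPS>"]))
```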
## Intended Uses
- General symbolic reasoning and logical conversation
- Memory-aware tutoring and research assistants
- Code + math proof modeling
- Context-persistent dialogue systems
## Limitations
- Not instruction-tuned; chat-style inputs may require prompt engineering
- The 2048-slot memory buffer adds some CPU overhead during retrieval
- Symbolic inference is offline-evolved; memory must be actively seeded (a seeding sketch follows this list)
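Since memory must be seeded by hand, one hypothetical seeding flow is sketched below. The card exposes no seeding API; mean-pooled final-layer hidden states stand in for whatever encoding the symbolic buffer actually expects, and the repository id is again assumed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ConvergentIntelligence/SymbioticLM-8B"  # hypothetical hub path
tok = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)

memory = torch.load("memory.pt", map_location="cpu")  # (2048, 4096) assumed
seeds = [
    "<THM> Every finite group of prime order is cyclic.",
    "<D_EPS> Epsilon-delta definition of a limit.",
]

with torch.no_grad():
    for i, text in enumerate(seeds):
        ids = tok(text, return_tensors="pt")
        hidden = model(**ids, output_hidden_states=True).hidden_states[-1]
        # Mean-pool the final layer and overwrite buffer slot i; the real
        # write policy (which slots, what encoding) is an open assumption.
        memory[i] = hidden.mean(dim=1).squeeze(0).to(memory.dtype)

torch.save(memory, "memory.pt")
```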
## Citations
This model was designed and built using Discrepancy Analysis; a paper describing the method is forthcoming.
## Convergent Intelligence Portfolio
Part of the Symbiotic AI Series by Convergent Intelligence LLC: Research Division
## Related Models
| Model | Downloads | Format |
|---|---|---|
| Symbiotic-1B | 4 | HF |
| Symbiotic-14B | 3 | HF |
| Symbiotic-Beta | 3 | HF |
## Top Models from Our Lab
Total Portfolio: 41 models | 2,781 total downloads
Last updated: 2026-03-28 12:57 UTC
### From the Convergent Intelligence Portfolio
DistilQwen Collection — Our only BF16 series. Proof-weighted distillation from Qwen3-30B-A3B → 1.7B and 0.6B on H100. Three teacher variants (Instruct, Thinking, Coder), nine models, 2,788 combined downloads. The rest of the portfolio proves structure beats scale on CPU. This collection shows what happens when you give the methodology real hardware.
Top model: Qwen3-1.7B-Coder-Distilled-SFT — 508 downloads
Full methodology: Structure Over Scale (DOI: 10.57967/hf/8165)
Convergent Intelligence LLC: Research Division