---
title: Convergent Intelligence LLC
emoji: 🔬
colorFrom: blue
colorTo: indigo
license: apache-2.0
---

# Convergent Intelligence LLC
**AI Governance · Algorithmic Bias Detection · Intelligence Analysis**

*Where classical analysis fails to see, we begin.*

---

## Who We Are

Convergent Intelligence is a research-driven consultancy specializing in AI governance, algorithmic bias detection, and applied intelligence analysis. We build mathematical frameworks that work where standard methods break – and we publish the models, papers, and code to prove it.

Founded September 11, 2025. DUNS: 144950019. SAM.gov registered (UEI: HC76F13L4KS8). Federal contract-ready.

**Principal:** Roy S. Colca Jr. – B.S. Pure Mathematics (CCNY), M.S. Applied Intelligence (Mercyhurst University, 3.89 GPA). Background spanning pure mathematics, intelligence analysis, penetration testing, and field operations.

---
## Three Divisions

### Research Division

The mathematical and empirical engine. We develop Discrepancy Calculus (DISC) – a measure-theoretic framework that treats singularities as primary structure rather than pathology – and deploy it across model architectures, training methodologies, and intelligence analysis pipelines.

**Published Papers:**

- [Discrepancy Calculus: Foundations and Core Theory](https://doi.org/10.57967/hf/8194) – the eight axioms, the Mesh Fundamental Identity, the Meta-Discrepancy impossibility theorem. DOI: 10.57967/hf/8194
- [Structure Over Scale: CPU-Native Training of Sparse Cognitive Architectures at $1.60 Per Model](https://doi.org/10.57967/hf/8165) – seven methodological pillars, 15 models trained on CPU at FP32, total compute cost $24. DOI: 10.57967/hf/8165
- [From Three Teachers to Dual Cognition](https://doi.org/10.57967/hf/8184) – topology-aware multi-teacher distillation, role-conditioned self-critique at 1.7B scale. DOI: 10.57967/hf/8184

**Companion Monograph:** *On the Formal Analysis of Discrepancy Calculus* – 203 pages, 41 chapters, four parts (Analytical Foundations, Structures/Geometry/Time, Quantum Discrepancies, Theory of Other). The complete proof apparatus.
### Consulting Division

Algorithmic risk assessment, bias auditing, and AI governance for regulated industries. We don't just detect bias – we mathematically characterize *why* standard detection methods fail, using the Meta-Discrepancy Theorem to identify regimes where classical statistical testing is provably insufficient.

### Development Division

Infrastructure, tooling, and operational systems. JARVIS intelligence analysis platform, FRACTURE real-time threat assessment pipeline, OSINT fusion capabilities, and the cix-gateway edge computing stack on Cloudflare.

---

## The Portfolio

**50 models · 22,500+ downloads · 8 collections · 3 published papers**

### Architecture Families
| Family | Models | Downloads | Key Innovation |
|---|---|---|---|
| **DistilQwen** | 14 | 8,892 | Topology-aware knowledge distillation via BV decomposition |
| **MoA / DiscoverLM** | 7 | 3,171 | Metric-native attention with triangle inequality enforcement |
| **Qemma** | 5 | 2,204 | Cross-architecture fusion via Gap Envelope Integral |
| **SAGI / Swarm** | 3 | 1,347 | Swarm intelligence with discrepancy mechanics routing |
| **Symbiotic** | 3 | 1,304 | Hybrid symbolic-transformer with persistent memory |
| **DNA-AI** | 2 | 984 | Depth-native architectures |
| **DualMind** | 6 | 862 | Dual cognition – explore, examine, respond on shared weights |
### Collections

- **[DistilQwen](https://huggingface.co/collections/reaperdoesntknow/distilqwen-69bf40ec669117e3f069ef1c)** – BF16 proof-weighted distillation from Qwen3-30B-A3B → 1.7B/0.6B. Three teacher variants, nine models.
- **[DualMind](https://huggingface.co/collections/reaperdoesntknow/dualmind-67e6e07f4de0f45b0dca0dc4)** – Single architecture, dual cognition. Five models including the Opus 4.6 reasoning variant.
### Flagship Models

| Model | Downloads | What It Proves |
|---|---|---|
| [Qwen3-1.7B-Thinking-Distil](https://huggingface.co/reaperdoesntknow/Qwen3-1.7B-Thinking-Distil) | 1,188 | TKD preserves reasoning structure that standard KD destroys |
| [TopologicalQwen](https://huggingface.co/reaperdoesntknow/TopologicalQwen) | 1,134 | Full BV decomposition in the distillation pipeline |
| [DiStil-Qwen3-1.7B-uncensored](https://huggingface.co/reaperdoesntknow/DiStil-Qwen3-1.7B-uncensored) | 1,030 | Alignment removal preserves capability |
| [LFM2.5-1.2B-Distilled-SFT](https://huggingface.co/reaperdoesntknow/LFM2.5-1.2B-Distilled-SFT) | 1,024 | Cross-architecture TKD (LFM → Qwen) |
| [DiscoverLM-70M](https://huggingface.co/reaperdoesntknow/DiscoverLM-70M) | 784 | Metric attention with proper geometry beats dot-product attention at 1/1000th the parameter count |
---

## The Mathematics: Discrepancy Calculus (DISC)

Every model in this portfolio is built on Discrepancy Calculus – a measure-theoretic framework that quantifies the mismatch between integration and differentiation via the discrepancy operator:

$$Df(x) = \lim_{\varepsilon \downarrow 0} \frac{1}{\varepsilon} \int_x^{x+\varepsilon} \frac{|f(t) - f(x)|}{|t - x|}\,dt$$

For smooth $f$: $Df(x) = |f'(x)|$ (classical recovery). For rough $f$: $D$ localizes irregularity to null sets while preserving integral structure.
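The classical-recovery claim can be checked numerically by discretizing the one-sided average above at a small fixed $\varepsilon$. This is a minimal sketch, not the papers' implementation; the function name `disc_operator` and the midpoint-rule discretization are assumptions for illustration.

```python
def disc_operator(f, x, eps=1e-4, n=10_000):
    """Approximate Df(x) = (1/eps) * integral over [x, x+eps] of
    |f(t) - f(x)| / |t - x| dt, using a midpoint rule (midpoints
    avoid the removable point t = x)."""
    h = eps / n
    total = 0.0
    for k in range(n):
        t = x + (k + 0.5) * h                 # midpoint of the k-th subinterval
        total += abs(f(t) - f(x)) / abs(t - x)
    return total * h / eps

# Smooth case: f(t) = t**2, so Df(1) should recover |f'(1)| = 2.
print(disc_operator(lambda t: t * t, 1.0))    # close to 2.0

# Rough case: f(t) = sqrt(|t|) at x = 0 – the approximation grows without
# bound as eps shrinks, flagging the irregular point instead of averaging it away.
print(disc_operator(lambda t: abs(t) ** 0.5, 0.0, eps=1e-6))
```

Shrinking `eps` in the rough case (try `1e-8`) makes the second value grow like $2/\sqrt{\varepsilon}$, which is exactly the "irregularity localized to a point" behavior the operator is built to expose.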
**The Mesh Fundamental Identity** – the DISC replacement for the Fundamental Theorem of Calculus:

$$f(b) - f(a) = \underbrace{\int_a^b f'(x)\,dx}_{\text{smooth}} + \underbrace{\sum_{x \in J_f} \Delta f(x)}_{\text{jumps}} + \underbrace{D^c f(I)}_{\text{Cantor drift}}$$

Standard methods see only the first term. DISC preserves all three.
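The three-term split can be sanity-checked on a toy jump function. A minimal sketch, assuming unit slope plus a single unit jump and no singular-continuous (Cantor) part; the variable names are illustrative, not from the monograph.

```python
# Toy check of the Mesh Fundamental Identity for
# f(x) = x + 1_{x >= 0.5} on [0, 1]: unit slope, one unit jump, no Cantor part.
a, b = 0.0, 1.0
jump_at, jump_size = 0.5, 1.0

def f(x):
    return x + (jump_size if x >= jump_at else 0.0)

smooth_part = 1.0 * (b - a)   # integral of f'(x) = 1 a.e. over [a, b]
jump_part = jump_size         # sum of jumps over J_f = {0.5}
cantor_part = 0.0             # D^c f(I) vanishes for this piecewise-C^1 f

# Both sides of the identity equal 2.0.
assert smooth_part + jump_part + cantor_part == f(b) - f(a)
```

An FTC-only accounting (`f(b) - f(a) = smooth_part`) misses half the change here; that gap is what the jump and Cantor terms recover.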
**The Meta-Discrepancy Theorem** (Theorem 11.15) proves that when the gap measure and the discrepancy energy are both positive, the classical derivative/FTC/MVT package is *impossible* on a set of positive measure. This is why standard knowledge distillation, standard bias detection, and standard statistical testing fail at structural boundaries – and why DISC-informed methods work where classical ones cannot.
Full theory: [Discrepancy Calculus: Foundations and Core Theory (DOI: 10.57967/hf/8194)](https://doi.org/10.57967/hf/8194)
---

## Core Thesis

**Structure beats scale.** A 69M-parameter model with proper geometry outperforms architectures 100x its size on structural reasoning tasks. A $24 training budget on CPU at FP32 produces models with organic community adoption. The transformer's dot-product attention is a hardware-constrained design choice, not a mathematical optimality – and we have 50 models proving the alternative works.

**The research is the product is the marketing is the credibility.** Every model card documents the mathematics. Every paper links to the models. Every download validates the methodology. The portfolio compounds.

---

## Links

- **Website:** [convergentintel.com](https://convergentintel.com)
- **HuggingFace:** [huggingface.co/reaperdoesntknow](https://huggingface.co/reaperdoesntknow)
- **Papers:** [DOI: 10.57967/hf/8194](https://doi.org/10.57967/hf/8194) · [DOI: 10.57967/hf/8165](https://doi.org/10.57967/hf/8165) · [DOI: 10.57967/hf/8184](https://doi.org/10.57967/hf/8184)

---

*Convergent Intelligence LLC · Founded September 11, 2025 · Brooklyn, NY / Atlantic Highlands, NJ*
*DUNS: 144950019 · UEI: HC76F13L4KS8 · EIN: 39-4292406*