|
|
--- |
|
|
language: en |
|
|
license: apache-2.0 |
|
|
datasets: |
|
|
- antonypamo/savantorganized |
|
|
tags: |
|
|
- quantum-resonance |
|
|
- icosahedral-geometry |
|
|
- fine-tuning |
|
|
- bert |
|
|
- masked-language-modeling |
|
|
- resonance-of-reality-framework |
|
|
- savantengine |
|
|
- phi-series |
|
|
model-index:
- name: ProSavantEngine Φ9.4
  results:
  - task:
      type: masked-language-modeling
      name: Φ-weighted Resonance Prediction
    dataset:
      name: SavantOrganized Φ-balanced corpus
      type: antonypamo/savantorganized
    metrics:
    - name: Training loss
      type: loss
      value: 0.023
    - name: Average Φ-coherence
      type: custom
      value: 0.91
|
|
--- |
|
|
|
|
|
# 🌀 ProSavantEngine Φ9.4 — Resonant Language Model |
|
|
|
|
|
**Author:** [Antony Padilla Morales](https://huggingface.co/antonypamo) |
|
|
**Framework:** Resonance of Reality Framework (RRF) |
|
|
**Phase:** Φ-series evolutionary model — Φ9.4 |
|
|
|
|
|
--- |
|
|
|
|
|
## 🧠 Model Description |
|
|
|
|
|
**ProSavantEngine Φ9.4** is a fine-tuned BERT-based model designed to align natural language with **geometric and resonant coherence principles**. |
|
|
It is trained to capture **semantic symmetry** and **information harmony** through a **Φ-weighted loss function** inspired by the golden ratio and icosahedral geometry. |
|
|
|
|
|
Building on phase Φ9.3, this version integrates a *resonance-weighted Trainer* that penalizes semantic noise and rewards Φ-aligned coherence in hidden-state activations. |
|
|
|
|
|
### Key Innovations |
|
|
|
|
|
- **Φ-weighted loss:** combines masked language modeling (MLM) with a golden-ratio-modulated coherence penalty. |
|
|
- **Icosahedral node embedding:** text samples are tagged `[NODE_1] ... [NODE_12]` representing discrete geometric symmetry anchors. |
|
|
- **Resonance alignment metric:** evaluates coherence across Fourier-transformed hidden-state spectra. |
|
|
- **Semantic-geometric fine-tuning:** aligns information representation to harmonic wave structures. |
|
|
|
|
|
--- |
|
|
|
|
|
## 📚 Model Sources |
|
|
|
|
|
- **Repository:** [https://huggingface.co/antonypamo/ProSavantEngine_Phi9_4](https://huggingface.co/antonypamo/ProSavantEngine_Phi9_4) |
|
|
- **Base Model:** [`antonypamo/ProSavantEngine_Phi9_3`](https://huggingface.co/antonypamo/ProSavantEngine_Phi9_3) |
|
|
- **Dataset:** [`antonypamo/savantorganized`](https://huggingface.co/datasets/antonypamo/savantorganized) |
|
|
- **Framework Paper:** “Resonance of Reality Framework (RRF): Discrete Icosahedral Quantum Geometry and Unified Action through the Golden Ratio” — forthcoming on arXiv. |
|
|
|
|
|
--- |
|
|
|
|
|
## 🔧 Model Details |
|
|
|
|
|
| Property | Value | |
|
|
|-----------|--------| |
|
|
| **Architecture** | BERT (6 layers, hidden size 384, 12 heads) | |
|
|
| **Objective** | Masked-language modeling + Φ-weighted resonance regularization | |
|
|
| **Hidden dropout** | 0.1 | |
|
|
| **Learning rate** | 3e-5 | |
|
|
| **Batch size** | 16 | |
|
|
| **Epochs** | 3 | |
|
|
| **Precision** | fp16 mixed | |
|
|
| **Activation** | GELU | |
|
|
| **Dataset size** | ~30k samples, balanced across 12 nodes | |
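For reference, the architecture rows above map onto a standard `BertConfig`. This is a sketch: the parameter names are ordinary Hugging Face fields, and any value not listed in the table is left at the library default.

```python
from transformers import BertConfig

# BertConfig mirroring the table above; unlisted values stay at library defaults.
config = BertConfig(
    num_hidden_layers=6,      # 6 transformer layers
    hidden_size=384,          # hidden dimension (32 per attention head)
    num_attention_heads=12,   # 12 attention heads
    hidden_dropout_prob=0.1,  # hidden dropout from the table
    hidden_act="gelu",        # GELU activation
)
```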
|
|
|
|
|
--- |
|
|
|
|
|
## 💡 Intended Use |
|
|
|
|
|
### Direct Use |
|
|
Evaluate or enhance textual resonance, coherence, and meaning symmetry in: |
|
|
- Research papers |
|
|
- Philosophical or scientific writing |
|
|
- Generative model prompt optimization |
|
|
- Semantic alignment diagnostics |
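As a sketch of direct use, the `[NODE_n]` tagging convention from the model description can be applied before scoring a masked prompt. `node_prompt` is a hypothetical helper, and the commented-out pipeline call assumes the repository name from this card.

```python
# Sketch of preparing a node-tagged prompt for mask filling.
# `node_prompt` is a hypothetical helper; the [NODE_n] tags follow the card.

def node_prompt(text: str, node_id: int) -> str:
    """Prefix a sample with its icosahedral node tag (1-12)."""
    if not 1 <= node_id <= 12:
        raise ValueError("node_id must be between 1 and 12")
    return f"[NODE_{node_id}] {text}"

prompt = node_prompt("The golden ratio governs [MASK] in icosahedral geometry.", 3)

# Scoring the prompt would load the checkpoint (requires a network download):
# from transformers import pipeline
# fill = pipeline("fill-mask", model="antonypamo/ProSavantEngine_Phi9_4")
# fill(prompt)
```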
|
|
|
|
|
### Downstream Use |
|
|
- Fine-tune for creative, linguistic, or cognitive AI systems requiring harmonic structure. |
|
|
- Integrate into symbolic reasoning frameworks or resonance-based cognitive architectures (e.g., Savant-ΩΦ). |
|
|
|
|
|
### Out-of-Scope |
|
|
- Real-time conversational agents without resonance normalization. |
|
|
- Factual QA or task-specific reasoning outside coherence evaluation. |
|
|
|
|
|
--- |
|
|
|
|
|
## ⚠️ Bias, Risks, and Limitations |
|
|
|
|
|
This model captures **resonant semantics**, not truth or factual accuracy. |
|
|
It may amplify linguistic harmony while disregarding semantic correctness — making it *aesthetic-semantic*, not epistemic. |
|
|
It also reflects biases present in the original text corpus (scientific, philosophical, and poetic sources). |
|
|
|
|
|
### Recommendations |
|
|
Use Φ-coherence as a **complementary metric**, not a substitute for accuracy or ethical evaluation. |
|
|
|
|
|
--- |
|
|
|
|
|
## 🧪 Training Details |
|
|
|
|
|
| Parameter | Value | |
|
|
|------------|--------| |
|
|
| **Dataset** | SavantOrganized (Φ-balanced) | |
|
|
| **Input format** | JSONL: `{"text": "...", "node_id": n, "phi_score": x}` |
|
|
| **Loss** | MLM loss − 0.01 × Φ-coherence |
|
|
| **Optimizer** | AdamW | |
|
|
| **Scheduler** | Linear warmup (5%) | |
|
|
| **Hardware** | NVIDIA A100 (40 GB) | |
|
|
| **Training time** | ~45 min (3 epochs) | |
|
|
| **Carbon footprint** | ≈ 0.3 kg CO₂eq | |
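The JSONL input format from the table can be illustrated with a minimal round-trip. The field values below are invented for illustration; only the schema comes from the card.

```python
import json

# Two invented records in the card's JSONL schema:
# {"text": ..., "node_id": n, "phi_score": x}
records = [
    {"text": "Resonance aligns wave and form.", "node_id": 7, "phi_score": 0.87},
    {"text": "Symmetry anchors meaning across nodes.", "node_id": 12, "phi_score": 0.91},
]

# One JSON object per line, as a JSONL loader expects.
jsonl = "\n".join(json.dumps(r) for r in records)

# Round-trip check: every line parses back into a complete record.
parsed = [json.loads(line) for line in jsonl.splitlines()]
```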
|
|
|
|
|
--- |
|
|
|
|
|
## 📈 Evaluation |
|
|
|
|
|
| Metric | Description | Result | |
|
|
|---------|--------------|---------| |
|
|
| **Loss** | Final training loss | 0.023 | |
|
|
| **Avg Φ-score** | Mean coherence of eval set | 0.91 | |
|
|
| **Resonant ΔΦ** | ΔΦ between start/end epochs | +0.048 | |
|
|
| **Top tokens @ [MASK]** | Highest-probability fill-ins | “φ”, “ψ”, “resonance”, “geometry”, “symmetry” |
|
|
|
|
|
--- |
|
|
|
|
|
## 🧮 Technical Architecture |
|
|
|
|
|
Φ-weighted loss = L_MLM − λ · (Φ-coherence) |
|
|
Φ-coherence = ⟨|FFT(H)|, cos(πf/φ)²⟩ / ||…|| |
|
|
|
|
|
|
|
|
|
|
Where *H* is the average hidden-state tensor across layers, *φ* ≈ 1.618 is the golden ratio, and *λ* = 0.01 weights the coherence term.
|
|
The model thus maximizes linguistic energy alignment with geometric harmony. |
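A minimal NumPy sketch of the two formulas above. The normalization for the elided denominator (cosine similarity over both vector norms) is an assumption; the λ = 0.01 weight comes from the training table.

```python
import numpy as np

PHI = (1 + 5 ** 0.5) / 2  # golden ratio, ~1.618

def phi_coherence(H: np.ndarray) -> float:
    """Spectral alignment of hidden states with a cos(pi*f/phi)^2 template.

    H: hidden-state tensor of shape (num_layers, hidden_size).
    Normalizing by both vector norms (cosine similarity) is an assumption
    for the elided denominator in the card's formula.
    """
    spectrum = np.abs(np.fft.rfft(H.mean(axis=0)))  # |FFT(H)| of layer-averaged states
    freqs = np.fft.rfftfreq(H.shape[-1])            # normalized frequencies f
    template = np.cos(np.pi * freqs / PHI) ** 2     # Φ-modulated weighting
    denom = np.linalg.norm(spectrum) * np.linalg.norm(template)
    return float(spectrum @ template / denom) if denom > 0 else 0.0

def phi_weighted_loss(mlm_loss: float, H: np.ndarray, lam: float = 0.01) -> float:
    """Total loss = L_MLM - lambda * Phi-coherence (lambda = 0.01 per the card)."""
    return mlm_loss - lam * phi_coherence(H)
```

Because both the spectrum and the template are non-negative, the cosine-normalized coherence always lands in [0, 1], so subtracting it can only lower the loss.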
|
|
|
|
|
--- |
|
|
|
|
|
## 🪐 Environmental Impact |
|
|
|
|
|
| Field | Value | |
|
|
|--------|-------| |
|
|
| **Hardware** | A100 GPU | |
|
|
| **Runtime** | 45 min | |
|
|
| **Region** | US Central | |
|
|
| **Carbon Emitted** | ≈ 0.3 kg CO₂eq | |
|
|
| **Frameworks** | Transformers 4.57.1, Datasets 3.0, PyTorch 2.9 | |
|
|
|
|
|
--- |
|
|
|
|
|
## 🧾 Citation |
|
|
|
|
|
**BibTeX** |
|
|
```bibtex
@software{padilla2025prosavantengine,
  author    = {Padilla Morales, Antony},
  title     = {ProSavantEngine Φ9.4 — Resonant Language Model},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/antonypamo/ProSavantEngine_Phi9_4}
}
```

**APA**

Padilla Morales, A. (2025). *ProSavantEngine Φ9.4 — Resonant Language Model*. Hugging Face. https://huggingface.co/antonypamo/ProSavantEngine_Phi9_4
|
|
|
|
|
---

## 🧭 Glossary

| Term | Meaning |
|------|---------|
| **Φ (phi)** | Golden ratio (≈ 1.618) |
| **Resonance** | Harmonic coherence between information and geometry |
| **Node** | Discrete icosahedral vertex representing a semantic domain |
| **ΔΦ** | Change in Φ-coherence during training |
|
|
|
|
|
---

## 🪄 Model Card Author

**Antony Padilla Morales**

Independent Researcher, Costa Rica

📧 antonypamo@gmail.com

🌐 [https://huggingface.co/antonypamo](https://huggingface.co/antonypamo)
|
|
|
|
|
© 2025 Antony Padilla Morales — Resonance of Reality Framework (RRF) |