---
library_name: transformers
tags:
- symbolic-decoder
- aletheia
- pytorch
- onnx
- philosophical-agi
- gnai-creator
license: apache-2.0
datasets:
- custom
language:
- en
pipeline_tag: text-generation
---

# 🧠 Noesis Decoder (AletheiaEngine)

**Repository:** [gnai-creator/noesis-decoder](https://huggingface.co/gnai-creator/noesis-decoder)

**Author:** Felipe M. Muniz (`gnai-creator`)

**License:** Apache-2.0

---

## 🔍 Overview

**Noesis Decoder** is the purpose-built symbolic decoder of **AletheiaEngine**, a hybrid symbolic–neural system designed for *philosophical artificial general intelligence*. The repository is released under Apache-2.0.

Unlike conventional text generators, Noesis translates **symbolic embeddings (ψₛ)** into meaningful language based on *epistemic coherence* rather than statistical prediction.

---

## ⚙️ Model Architecture

* **Framework:** PyTorch → ONNX Runtime
* **Files:**
  * `model_infer.onnx` – optimized inference model
  * `noesis.pt` – PyTorch checkpoint (training artifact)
  * `inference.py` – custom ONNX handler
* **Input:** float32 symbolic vector, shape `[1, D]`
* **Output:** decoded float or token embeddings, depending on context

---

## 🧩 Example Usage

### 🔹 Python + ONNX Runtime

```python
from huggingface_hub import hf_hub_download
import onnxruntime as ort
import numpy as np

# Download the ONNX model from the Hub
onnx_path = hf_hub_download(
    repo_id="gnai-creator/noesis-decoder",
    filename="model_infer.onnx",
    repo_type="model",
)

# Load the runtime session (CPU)
sess = ort.InferenceSession(onnx_path, providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
output_name = sess.get_outputs()[0].name

# Example symbolic vector ψₛ (here D = 300)
x = np.random.randn(1, 300).astype("float32")

# Run inference
y = sess.run([output_name], {input_name: x})[0]
print("Output shape:", y.shape)
```
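A per-call latency figure like the one reported in the metrics section can be reproduced with a simple timing harness around `sess.run`. The sketch below is illustrative: it substitutes a hypothetical stand-in matrix multiply for the session so it runs without downloading the model; swap `fake_decode` for a closure over `sess.run` to measure the real decoder.

```python
import time
import numpy as np

def ms_per_call(fn, x, warmup=3, iters=50):
    """Average wall-clock time of fn(x) in milliseconds."""
    for _ in range(warmup):   # warm up caches / lazy initialization
        fn(x)
    start = time.perf_counter()
    for _ in range(iters):
        fn(x)
    return (time.perf_counter() - start) / iters * 1000.0

# Hypothetical stand-in for the ONNX session: a fixed 300x300 projection
w = np.random.randn(300, 300).astype("float32")
fake_decode = lambda x: x @ w

x = np.random.randn(1, 300).astype("float32")
print(f"~{ms_per_call(fake_decode, x):.3f} ms per call")
```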

---

## 💡 Training Data

Trained on **symbolic text pairs** generated from philosophical, logical, and reflective corpora within the AletheiaEngine ecosystem. The training goal is alignment between **symbolic intention (ψₛ)** and **natural language output**.

---

## 📊 Metrics (Indicative)

| Metric        | Value        | Description                                 |
| ------------- | ------------ | ------------------------------------------- |
| Cosine(Q)     | 0.83         | Symbolic alignment measure                  |
| Perplexity    | 2.41         | Statistical readability proxy               |
| Latency (CPU) | ~28 ms/token | Inference on Intel Sapphire Rapids (1 vCPU) |

---

## 🚀 Deployment

This model is compatible with **Hugging Face Inference Endpoints** using the `Custom` engine and the included `inference.py` handler.

Recommended hardware:

* **CPU:** Intel Sapphire Rapids (1 vCPU / 2 GB)
* **GPU:** NVIDIA T4 for larger-batch inference

---

## ⚠️ Limitations

* Not a conventional LLM; it requires symbolic vectors as input.
* Outputs are contextualized to Aletheia’s symbolic reasoning pipeline.
* Not suited for free-form text generation.

---

## 📜 License

This repository is distributed under the **Apache License 2.0**. See [LICENSE](./LICENSE) for details.

---

> *“Truth is not imposed; it emerges from alignment.”*
> — *Felipe M. Muniz (2025)*