# 🧠 Noesis Decoder (AletheiaEngine)
Repository: gnai-creator/noesis-decoder
Author: Felipe M. Muniz (gnai-creator)
License: Apache-2.0
## 📖 Overview
Noesis Decoder is the proprietary symbolic decoder of AletheiaEngine, a hybrid symbolic–neural system designed for philosophical artificial general intelligence.
Unlike conventional text generators, Noesis translates symbolic embeddings (Οₜ) into meaningful language based on epistemic coherence rather than statistical prediction.
## ⚙️ Model Architecture
Framework: PyTorch → ONNX Runtime

Files:

- `model_infer.onnx` – Inference model (optimized)
- `noesis.pt` – PyTorch checkpoint (training artifact)
- `inference.py` – Custom ONNX handler

Input: float32 symbolic vector, shape `[1, D]`
Output: decoded float or token embeddings (depending on context)
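Whatever produces the symbolic vector upstream, the decoder expects a single float32 row of dimension D. A minimal preparation sketch; the dimension 300 matches the usage example in this card but is an assumption, not part of the model contract, and the helper name is hypothetical:

```python
import numpy as np

def to_symbolic_input(vec, dim=300):
    """Coerce an arbitrary numeric sequence into the [1, D] float32
    layout the ONNX session expects. `dim` is illustrative; the real D
    is fixed by the exported model's input signature."""
    x = np.asarray(vec, dtype="float32").reshape(1, -1)
    if x.shape[1] != dim:
        raise ValueError(f"expected {dim} features, got {x.shape[1]}")
    return x

x = to_symbolic_input(np.random.randn(300))
print(x.shape, x.dtype)  # → (1, 300) float32
```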
## 🧩 Example Usage
### 🔹 Python + ONNX Runtime
```python
from huggingface_hub import hf_hub_download
import onnxruntime as ort
import numpy as np

# Download ONNX model
onnx_path = hf_hub_download(
    repo_id="gnai-creator/noesis-decoder",
    filename="model_infer.onnx",
    repo_type="model"
)

# Load runtime
sess = ort.InferenceSession(onnx_path, providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
output_name = sess.get_outputs()[0].name

# Example symbolic vector Οₜ
x = np.random.randn(1, 300).astype("float32")

# Run inference
y = sess.run([output_name], {input_name: x})[0]
print("Output shape:", y.shape)
```
## 💡 Training Data
Trained on symbolic text pairs generated from philosophical, logical, and reflective corpora within the AletheiaEngine ecosystem. Goal: alignment between symbolic intention (Οₜ) and natural language output.
## 📊 Metrics (Indicative)
| Metric | Value | Description |
|---|---|---|
| Cosine(Q) | 0.83 | Symbolic alignment measure |
| Perplexity | 2.41 | Statistical readability proxy |
| Latency (CPU) | ~28 ms/token | Inference on Intel Sapphire Rapids (1vCPU) |
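The card does not publish the metric code, but a cosine-style alignment score like Cosine(Q) is conventionally the cosine similarity between the symbolic intention vector and the decoded embedding. A sketch under that assumption (function and variable names here are hypothetical):

```python
import numpy as np

def cosine_alignment(intent, decoded):
    """Cosine similarity between a symbolic intention vector and the
    decoder's output embedding; 1.0 means perfect alignment, -1.0
    perfect opposition."""
    a = np.asarray(intent, dtype="float32").ravel()
    b = np.asarray(decoded, dtype="float32").ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A vector is perfectly aligned with itself:
v = np.random.randn(300)
print(round(cosine_alignment(v, v), 3))  # → 1.0
```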
## 🚀 Deployment
This model is compatible with Hugging Face Inference Endpoints using the Custom engine and the included `inference.py` handler.
Recommended hardware:
- CPU: Intel Sapphire Rapids (1vCPU / 2GB)
- GPU: NVIDIA T4 for larger batch inference
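The repository's actual `inference.py` is not reproduced here. As a sketch only, Hugging Face's custom-handler convention is an `EndpointHandler` class with `__init__(self, path)` and `__call__(self, data)`; the payload key names below are assumptions, not the model's documented interface:

```python
import numpy as np

class EndpointHandler:
    """Sketch of a custom Inference Endpoints handler wrapping the ONNX
    model. Session loading is deferred to __init__ so the module can be
    imported without onnxruntime installed."""

    def __init__(self, path=""):
        import onnxruntime as ort  # deferred: only needed at deploy time
        self.sess = ort.InferenceSession(
            f"{path}/model_infer.onnx", providers=["CPUExecutionProvider"]
        )
        self.input_name = self.sess.get_inputs()[0].name
        self.output_name = self.sess.get_outputs()[0].name

    @staticmethod
    def prepare(data):
        # Assumed payload shape: {"inputs": [[f0, f1, ...]]}
        return np.asarray(data["inputs"], dtype="float32").reshape(1, -1)

    def __call__(self, data):
        x = self.prepare(data)
        y = self.sess.run([self.output_name], {self.input_name: x})[0]
        return {"outputs": y.tolist()}
```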
## ⚠️ Limitations
- Not a conventional LLM: it requires symbolic vectors as input, not raw text.
- Outputs are contextualized to Aletheia's symbolic reasoning pipeline.
- Not suited for free-form text generation.
## 📜 License
This repository is distributed under the Apache License 2.0. See LICENSE for details.
> "Truth is not imposed; it emerges from alignment." – Felipe M. Muniz (2025)