🧠 Noesis Decoder (AletheiaEngine)

Repository: gnai-creator/noesis-decoder
Author: Felipe M. Muniz (gnai-creator)
License: Apache-2.0


πŸ” Overview

Noesis Decoder is the proprietary symbolic decoder of AletheiaEngine β€” a hybrid symbolic–neural system designed for philosophical artificial general intelligence.

Unlike conventional text generators, Noesis translates symbolic embeddings (Οˆβ‚›) into meaningful language based on epistemic coherence, rather than statistical prediction.


βš™οΈ Model Architecture

  • Framework: PyTorch β†’ ONNX Runtime
  • Files:
    • model_infer.onnx – optimized inference model
    • noesis.pt – PyTorch checkpoint (training artifact)
    • inference.py – custom ONNX handler
  • Input: float32 symbolic vector Οˆβ‚›, shape [1, D]
  • Output: decoded float values or token embeddings, depending on context
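The [1, D] float32 input contract can be enforced with a small validation helper before calling the runtime. This is only a sketch: the default of D = 300 below matches the usage example later in this card, but is an assumption, not a documented property of the model.

```python
import numpy as np

def as_psi(vec, dim=300):
    """Coerce a symbolic vector Οˆβ‚› into the expected [1, D] float32 layout.

    dim=300 is illustrative; read the real dimension from the model's
    input metadata.
    """
    x = np.asarray(vec, dtype=np.float32)
    if x.ndim == 1:
        x = x[np.newaxis, :]  # promote [D] -> [1, D]
    if x.shape != (1, dim):
        raise ValueError(f"expected shape (1, {dim}), got {x.shape}")
    return x
```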


🧩 Example Usage

πŸ”Ή Python + ONNX Runtime

from huggingface_hub import hf_hub_download
import onnxruntime as ort
import numpy as np

# Download ONNX model
onnx_path = hf_hub_download(
    repo_id="gnai-creator/noesis-decoder",
    filename="model_infer.onnx",
    repo_type="model"
)

# Load runtime
sess = ort.InferenceSession(onnx_path, providers=["CPUExecutionProvider"])
input_name  = sess.get_inputs()[0].name
output_name = sess.get_outputs()[0].name

# Example symbolic vector Οˆβ‚› (D = 300 here for illustration;
# match your model's actual input dimension)
x = np.random.randn(1, 300).astype("float32")

# Run inference
y = sess.run([output_name], {input_name: x})[0]
print("Output shape:", y.shape)
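Rather than hard-coding D = 300 as above, the feature dimension can be read from the session's input metadata via `sess.get_inputs()[0].shape`. A small sketch (note: ONNX Runtime reports dynamic axes as strings, so a fallback default is needed; the 300 fallback is an assumption):

```python
def resolve_input_dim(shape, default=300):
    """Return the feature dimension from an ONNX input shape.

    Static shapes look like [1, 300]; dynamic axes appear as strings,
    e.g. [1, 'D'], in which case the (assumed) default is used.
    """
    d = shape[-1]
    return d if isinstance(d, int) else default

# Usage with a live session:
# D = resolve_input_dim(sess.get_inputs()[0].shape)
```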

πŸ’‘ Training Data

Noesis was trained on symbolic–text pairs generated from philosophical, logical, and reflective corpora within the AletheiaEngine ecosystem. The goal is alignment between symbolic intention (Οˆβ‚›) and natural-language output.


πŸ“Š Metrics (Indicative)

| Metric | Value | Description |
|---|---|---|
| Cosine(Q) | 0.83 | Symbolic alignment measure |
| Perplexity | 2.41 | Statistical readability proxy |
| Latency (CPU) | ~28 ms/token | Inference on Intel Sapphire Rapids (1 vCPU) |
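The Cosine(Q) figure above is presumably the cosine similarity between a decoded embedding and its reference. How the repo actually computes it is not documented; the helper below is a generic sketch of that measure, with both argument names chosen here for illustration.

```python
import numpy as np

def cosine_alignment(y_pred, y_target):
    """Cosine similarity between decoded and reference embeddings (sketch)."""
    a = np.asarray(y_pred, dtype=np.float64).ravel()
    b = np.asarray(y_target, dtype=np.float64).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```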

πŸš€ Deployment

This model is compatible with Hugging Face Inference Endpoints using the Custom engine and the included inference.py handler.

Recommended hardware:

  • CPU: Intel Sapphire Rapids (1 vCPU / 2 GB)
  • GPU: NVIDIA T4 for larger-batch inference
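Hugging Face's Custom engine expects the handler file to expose an `EndpointHandler` class with an `__init__(path)` that loads the model and a `__call__(data)` that serves requests. The repo's actual inference.py may differ; this is a minimal sketch of that contract, with the `{"inputs": [...]}` payload shape and the `parse_inputs` helper being assumptions for illustration.

```python
import numpy as np

def parse_inputs(data):
    """Extract the Οˆβ‚› vector from a JSON payload like {"inputs": [...]} (assumed shape)."""
    vec = np.asarray(data["inputs"], dtype=np.float32)
    if vec.ndim == 1:
        vec = vec[np.newaxis, :]  # promote [D] -> [1, D]
    return vec

class EndpointHandler:
    """Sketch of a custom handler for HF Inference Endpoints."""

    def __init__(self, path=""):
        # Deferred import so the payload parsing above is usable without ORT installed.
        import onnxruntime as ort
        self.sess = ort.InferenceSession(
            f"{path}/model_infer.onnx", providers=["CPUExecutionProvider"]
        )
        self.input_name = self.sess.get_inputs()[0].name

    def __call__(self, data):
        x = parse_inputs(data)
        y = self.sess.run(None, {self.input_name: x})[0]
        return {"output": y.tolist()}
```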

⚠️ Limitations

  • Not a conventional LLM β€” requires symbolic vectors as input.
  • Outputs are contextualized to Aletheia’s symbolic reasoning pipeline.
  • Not suited for free-form text generation.

πŸ“œ License

This repository is distributed under the Apache License 2.0. See LICENSE for details.


β€œTruth is not imposed; it emerges from alignment.” β€” Felipe M. Muniz (2025)
