---

library_name: transformers
tags:
- symbolic-decoder
- aletheia
- pytorch
- onnx
- philosophical-agi
- gnai-creator
license: apache-2.0
datasets:
- custom
language:
- en
pipeline_tag: text-generation
---


# 🧠 Noesis Decoder (AletheiaEngine)

- **Repository:** [gnai-creator/noesis-decoder](https://huggingface.co/gnai-creator/noesis-decoder)
- **Author:** Felipe M. Muniz (`gnai-creator`)
- **License:** Apache-2.0

---

## 🔍 Overview

**Noesis Decoder** is the proprietary symbolic decoder of **AletheiaEngine** — a hybrid symbolic–neural system designed for *philosophical artificial general intelligence*.

Unlike conventional text generators, Noesis translates **symbolic embeddings (ψₛ)** into meaningful language based on *epistemic coherence*, rather than statistical prediction.

---

## ⚙️ Model Architecture

* **Framework:** PyTorch → ONNX Runtime
* **Files:**
  * `model_infer.onnx` – optimized inference model
  * `noesis.pt` – PyTorch training checkpoint (see the loading sketch below)
  * `inference.py` – custom ONNX inference handler
* **Input:** float32 symbolic vector of shape `[1, D]`
* **Output:** decoded float values or token embeddings, depending on context
---

## 🧩 Example Usage

### 🔹 Python + ONNX Runtime

```python
from huggingface_hub import hf_hub_download
import onnxruntime as ort
import numpy as np

# Download the ONNX model from the Hub
onnx_path = hf_hub_download(
    repo_id="gnai-creator/noesis-decoder",
    filename="model_infer.onnx",
    repo_type="model",
)

# Create an ONNX Runtime session on CPU
sess = ort.InferenceSession(onnx_path, providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
output_name = sess.get_outputs()[0].name

# Example symbolic vector ψₛ (D = 300 here)
x = np.random.randn(1, 300).astype("float32")

# Run inference
y = sess.run([output_name], {input_name: x})[0]
print("Output shape:", y.shape)
```

---

## 💡 Training Data

The model was trained on **symbolic text pairs** generated from philosophical, logical, and reflective corpora within the AletheiaEngine ecosystem. The training objective is alignment between **symbolic intention (ψₛ)** and **natural language output**.

---

## 📊 Metrics (Indicative)

| Metric        | Value        | Description                                |
| ------------- | ------------ | ------------------------------------------ |
| Cosine(Q)     | 0.83         | Symbolic alignment measure                 |
| Perplexity    | 2.41         | Statistical readability proxy              |
| Latency (CPU) | ~28 ms/token | Inference on Intel Sapphire Rapids (1 vCPU) |
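
For reference, the alignment score can be reproduced in spirit with a plain cosine similarity between the input vector ψₛ and the decoded embedding, assuming both live in comparable spaces of equal dimension. This is a minimal sketch; the exact Cosine(Q) computation behind the table above is internal to AletheiaEngine and may differ.

```python
import numpy as np

def cosine_alignment(psi_s: np.ndarray, decoded: np.ndarray) -> float:
    """Cosine similarity between two flattened vectors, in [-1, 1]."""
    a = psi_s.ravel().astype("float64")
    b = decoded.ravel().astype("float64")
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Example, reusing x and y from the usage snippet above
# (only valid when x and y have the same number of elements):
# score = cosine_alignment(x, y)
```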

---

## 🚀 Deployment

This model is compatible with **Hugging Face Inference Endpoints** using the `Custom` engine and the included `inference.py` handler.

Recommended hardware:

* **CPU:** Intel Sapphire Rapids (1 vCPU / 2 GB)
* **GPU:** NVIDIA T4 for larger batch inference
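
For orientation, custom handlers on Inference Endpoints typically expose an `EndpointHandler` class. The sketch below shows what such a handler could look like for this model; it is an illustrative assumption, and the `inference.py` shipped in the repository is the authoritative implementation.

```python
# Sketch of a custom Inference Endpoints handler (illustrative only;
# the repository's inference.py is the authoritative implementation).
import os
import numpy as np
import onnxruntime as ort

class EndpointHandler:
    def __init__(self, path: str = ""):
        # `path` points at the repository snapshot inside the endpoint container
        self.sess = ort.InferenceSession(
            os.path.join(path, "model_infer.onnx"),
            providers=["CPUExecutionProvider"],
        )
        self.input_name = self.sess.get_inputs()[0].name
        self.output_name = self.sess.get_outputs()[0].name

    def __call__(self, data: dict) -> list:
        # Expect a symbolic vector ψₛ under "inputs", shape [D] or [1, D]
        x = np.asarray(data["inputs"], dtype=np.float32)
        if x.ndim == 1:
            x = x[None, :]
        y = self.sess.run([self.output_name], {self.input_name: x})[0]
        return y.tolist()
```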

---

## ⚠️ Limitations

* Not a conventional LLM — requires symbolic vectors as input.
* Outputs are contextualized to Aletheia’s symbolic reasoning pipeline.
* Not suited for free-form text generation.

---

## 📜 License

This repository is distributed under the **Apache License 2.0**.
See [LICENSE](./LICENSE) for details.

---

> *“Truth is not imposed; it emerges from alignment.”*
> — *Felipe M. Muniz (2025)*