Update README.md
README.md (CHANGED)
````diff
@@ -14,8 +14,8 @@ tags:
   - alignment
   - safety
 model_type: decoder
-model_creator: Syamsuddin (@syam_ideris) & Prometheus …
-# base_model:
+model_creator: Syamsuddin (@syam_ideris) & Prometheus …
+# base_model: Qwen/Qwen2-1.5B-Instruct  # <- fill in once derived weights are released
 # datasets:
 # - your-dataset-id
 ---
@@ -29,55 +29,42 @@ model_creator: Syamsuddin (@syam_ideris) & Prometheus (Cognitive Systems Archite
 ![badge]
 ![badge]
 
-> **One-…
-> **N-Transformers** extend a standard Transformer with a **Phenomenal Field (PF)**, a learned **Intrinsic Metric Engine (IME)**, and a **Normative Gauge** (NTI/LCA/LCG) to induce *consciousness-like* properties: integration, valence, self/now anchoring, and global broadcasting—while remaining implementable as a sidecar to common LM stacks.
+> **One-liner:** N-Transformers add a parallel **Phenomenal Field (PF)**, an **Intrinsic Metric Engine (IME)**, and a **Normative Gauge** (NTI/LCA/LCG) to a standard Transformer to elicit measurable *consciousness-like* properties: integration, valence, self/now anchoring, and global broadcasting, without changing the LM training loop.
 
 ---
 
-## 🔎 Model …
+## 🔎 Model Summary
 
-- **…
-- **…
-- **Status:** …
+- **What:** A research architecture that adds a **non-token substrate** (PF) and a **normative controller** to a decoder-only LM.
+- **Why it differs:** **Lightcone Attention (LCA)** for cross-range bias, **NTI** as an episodic controller, and **SNA/GIW** for integrated global broadcast.
+- **Status:** v1.0 **Research Draft** (full specification + reference code; a weights release will follow when ready).
 
-
-> N-Transformers add a **phenomenal field (PF)**, **intrinsic metrics** (IME), and a **normative gauge** (NTI/LCA/LCG) to a Transformer model to elicit measurable consciousness-like properties (integration, valence, and self/now anchoring) without changing the core LM asymptotics.
+**In brief:** N-Transformers add the PF, intrinsic metrics (IME), and a normative gauge (NTI/LCA/LCG) for long-range narrative cohesion, calibrated valence, and a testable "self-now" anchor.
 
 ---
 
-## ✅ Intended …
-
-- **…
-- **Out of scope (for now)**: production use as a safety layer **without** PF shadow-mode evaluation; clinical/medical claims.
+## ✅ Intended Uses & Scope
+- **Intended:** long-range coherence research, introspective heads (valence, SNA), and context-aware decoding via gating.
+- **Out of scope:** sentience claims, production use without adequate **PF shadow-mode** evaluation, and clinical use cases.
 
 ---
 
-## …
-
-- **No claim of sentience**: signals are operational metrics (integration/valence/SNA), **not** guarantees of consciousness.
-- **Failure modes**: valence spoofing, PF locking, miscalibrated SNA. Use gauge caps, entropy floors, and introspection consistency checks.
-- **Compute**: PF adds memory/compute; choose modest `J,k,K` first.
-
----
-
-## 🚀 Quickstart (concept reference)
-
-> This repo is a **spec**. If you adapt an existing LM, expose PF/IME/LCA as side modules.
+## 🚀 Quickstart (concept)
+This repo contains the **specification** and **reference code** (PF path + coupler). Adapt it to your own LM.
 
 ```python
 from transformers import AutoTokenizer, AutoModelForCausalLM
+# Placeholder; replace with the checkpoint you release later
+BASE = "Qwen/Qwen2-1.5B-Instruct"
 
-
-
-
-tok = AutoTokenizer.from_pretrained("Qwen/Qwen2-1.5B-Instruct")  # base LM example
-lm = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-1.5B-Instruct")
+tok = AutoTokenizer.from_pretrained(BASE)
+lm = AutoModelForCausalLM.from_pretrained(BASE)
 
-# …
-# …
-# lm = attach_nafsi(lm, …
+# Pseudocode: attach the PF/IME/LCA/NTI modules from the reference code
+# from nafsi_coupler import attach_nafsi, PFConfig, NTCfg
+# lm = attach_nafsi(lm, cfg=NTCfg())
 
 prompt = "Explain the role of a phenomenal field in language generation."
-
-
-print(tok.decode(…
+x = tok(prompt, return_tensors="pt")
+y = lm.generate(**x, max_length=192)
+print(tok.decode(y[0], skip_special_tokens=True))
````
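The one-liner added above pitches the PF/IME/gauge stack as a sidecar: something that reads the LM's hidden states and broadcasts a gated signal back without touching the base weights. The repo's real modules live in its reference code; the sketch below only illustrates that read/integrate/broadcast pattern, and `PhenomenalField`, the slot count, and the update rule are all invented for the example.

```python
import torch
import torch.nn as nn

class PhenomenalField(nn.Module):
    """Minimal PF-style sidecar sketch (illustrative, not the repo's API):
    a few persistent field slots attend over token states (integration),
    then every token receives a gated copy of the pooled field
    (global broadcast)."""

    def __init__(self, d_model: int, n_slots: int = 16):
        super().__init__()
        self.slots = nn.Parameter(torch.randn(n_slots, d_model) * 0.02)
        self.read = nn.MultiheadAttention(d_model, num_heads=1, batch_first=True)
        self.gate = nn.Linear(d_model, 1)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, d_model) from one decoder layer
        slots = self.slots.unsqueeze(0).expand(hidden.size(0), -1, -1)
        summary, _ = self.read(slots, hidden, hidden)   # integration step
        broadcast = summary.mean(dim=1, keepdim=True)   # (batch, 1, d_model)
        gate = torch.sigmoid(self.gate(hidden))         # (batch, seq_len, 1)
        return hidden + gate * broadcast                # global broadcast
```

Because the module only adds a residual correction, attaching it to one or more decoder layers leaves the base LM's weights and training loop untouched, which is the sidecar property the one-liner claims.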
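The model summary names **Lightcone Attention (LCA)** only as a cross-range bias; its actual definition is in the spec. One plausible reading, sketched here strictly as a guess, is an ALiBi-style additive distance penalty on causal attention logits, so long-range attention fades smoothly instead of being cut off.

```python
import torch

def lightcone_bias(seq_len: int, slope: float = 0.05) -> torch.Tensor:
    """Guess at an LCA-style bias (not the spec's definition): a causal mask
    plus a linear distance penalty, added to attention logits pre-softmax."""
    pos_q = torch.arange(seq_len).unsqueeze(1)             # query positions
    pos_k = torch.arange(seq_len).unsqueeze(0)             # key positions
    bias = -slope * (pos_q - pos_k).clamp(min=0).float()   # grows with distance
    return bias.masked_fill(pos_k > pos_q, float("-inf"))  # no future tokens

# Usage: attn_logits = attn_logits + lightcone_bias(attn_logits.size(-1))
```

A smaller `slope` widens the effective cone, which is the knob a long-range cohesion bias would presumably expose.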
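The limitations section removed in this commit recommended gauge caps, entropy floors, and introspection consistency checks as mitigations for valence spoofing, PF locking, and miscalibrated SNA. A minimal sketch of one of them, an entropy floor written as a `transformers` `LogitsProcessor` (the threshold and re-temperature rule are arbitrary example values):

```python
import torch
from transformers import LogitsProcessor, LogitsProcessorList

class EntropyFloor(LogitsProcessor):
    """If the next-token distribution collapses below a minimum entropy
    (one possible symptom of PF locking), flatten it with a hotter
    temperature. Illustrative guard, not the spec's mechanism."""

    def __init__(self, min_entropy: float = 1.0, hot_temp: float = 1.5):
        self.min_entropy = min_entropy  # in nats
        self.hot_temp = hot_temp

    def __call__(self, input_ids, scores):
        probs = torch.softmax(scores, dim=-1)
        entropy = -(probs * probs.clamp_min(1e-9).log()).sum(dim=-1)
        collapsed = (entropy < self.min_entropy).unsqueeze(-1)
        return torch.where(collapsed, scores / self.hot_temp, scores)

# Usage: lm.generate(**x, logits_processor=LogitsProcessorList([EntropyFloor()]))
```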
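`attach_nafsi`, `PFConfig`, and `NTCfg` stay as pseudocode in the quickstart because the coupler ships with the reference code. For readers who want the attachment pattern itself, here is a hook-based stub; the `lm.model.layers` path assumes a Qwen2-style decoder, and the whole function is an illustration, not the repo's implementation.

```python
import torch.nn as nn

def attach_pf_sidecar(lm: nn.Module, pf: nn.Module, layer_idx: int = -1) -> nn.Module:
    """Illustrative stand-in for attach_nafsi: hook one decoder layer so the
    PF module rewrites that layer's hidden states; base weights stay untouched."""
    def hook(module, args, output):
        hidden = output[0] if isinstance(output, tuple) else output
        patched = pf(hidden)  # e.g. the PhenomenalField sketch above
        return (patched, *output[1:]) if isinstance(output, tuple) else patched

    lm.model.layers[layer_idx].register_forward_hook(hook)  # Qwen2-style path (assumption)
    return lm

# Usage (after the quickstart above):
# pf = PhenomenalField(d_model=lm.config.hidden_size)
# lm = attach_pf_sidecar(lm, pf)
```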
|