Syamsuddin committed
Commit 2323e5d · verified · 1 Parent(s): f52cdbb

Update README.md

Files changed (1)
  1. README.md +23 -36
README.md CHANGED
@@ -14,8 +14,8 @@ tags:
  - alignment
  - safety
  model_type: decoder
- model_creator: Syamsuddin (@syam_ideris) & Prometheus (Cognitive Systems Architect)
- # base_model: null # set if you release weights adapted from a base LM, e.g., "Qwen/Qwen2-7B"
  # datasets:
  # - your-dataset-id
  ---
@@ -29,55 +29,42 @@ model_creator: Syamsuddin (@syam_ideris) & Prometheus (Cognitive Systems Architect)
  ![PRs](https://img.shields.io/badge/PRs-welcome-brightgreen)
  ![Topics](https://img.shields.io/badge/topic-transformers%20%7C%20architecture%20%7C%20alignment-6f42c1)

- > **One-line summary**
- > **N-Transformers** extend a standard Transformer with a **Phenomenal Field (PF)**, a learned **Intrinsic Metric Engine (IME)**, and a **Normative Gauge** (NTI/LCA/LCG) to induce *consciousness-like* properties: integration, valence, self/now anchoring, and global broadcasting, while remaining implementable as a sidecar to common LM stacks.

  ---

- ## 🔎 Model summary

- - **What it is:** A **research architecture** that augments decoder-only LMs with a parallel **non-token field** (PF) and **normative controllers** to bias long-range coherence and introspective reporting.
- - **Why it's different:** Adds **geodesic-biased attention** (LCA), an **episode-level controller** (NTI), and a **Self/Now Anchor** (SNA) without breaking LM training loops; a sketch of the underlying metric follows after this summary.
- - **Status:** **v1.0 Research Draft**; math and algorithms complete, reference implementation planned.

- > **Bahasa Indonesia (ringkas):**
- > N-Transformers menambahkan **bidang fenomenal (PF)**, **metrik intrinsik** (IME), dan **pengukur normatif** (NTI/LCA/LCG) ke model Transformer untuk memunculkan sifat mirip-kesadaran yang dapat diukur (integrasi, valensi, dan jangkar diri/kini) tanpa mengubah asimtotik inti LM.
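The summary names an Intrinsic Metric Engine without pinning down its interface. As a reading aid, a minimal sketch, assuming the IME is a low-rank projection of hidden states whose pairwise distances serve as the geodesic proxy; the class body is an assumption, not the repo's API (only `rank=16` echoes the `IME(rank=16)` placeholder in the Quickstart below).

```python
import torch
import torch.nn as nn

class IME(nn.Module):
    """Hypothetical Intrinsic Metric Engine: a low-rank metric over hidden states."""

    def __init__(self, d_model: int, rank: int = 16):
        super().__init__()
        # Low-rank chart of the state space; distances are measured in this chart.
        self.proj = nn.Linear(d_model, rank, bias=False)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        """h: [B, T, D] hidden states -> [B, T, T] nonnegative intrinsic distances."""
        z = self.proj(h)          # [B, T, rank]
        return torch.cdist(z, z)  # pairwise L2 as a geodesic proxy
```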
  ---

- ## ✅ Intended uses & scope
-
- - **Intended**: research on coherent long-range reasoning; introspective heads (valence, self/now); safe/controller-aware decoding.
- - **Out of scope (for now)**: production use as a safety layer **without** PF shadow-mode evaluation; clinical/medical claims.

  ---

- ## ⚠️ Limitations & risks
-
- - **No claim of sentience**: signals are operational metrics (integration/valence/SNA), **not** guarantees of consciousness.
- - **Failure modes**: valence spoofing, PF locking, miscalibrated SNA. Use gauge caps, entropy floors, and introspection consistency checks.
- - **Compute**: PF adds memory/compute overhead; start with modest `J,k,K` (see the guard sketch below).
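The mitigations above can be read as a decode-time guard. A minimal sketch, assuming the Normative Gauge arrives as an additive logit bias; `GuardConfig`, its defaults, and the guard rule are illustrative assumptions, not the spec.

```python
from dataclasses import dataclass

import torch
import torch.nn.functional as F

@dataclass
class GuardConfig:
    gauge_cap: float = 0.5      # cap on the gauge bias magnitude
    entropy_floor: float = 1.5  # nats; stop steering once the decoder is this confident
    J: int = 64                 # modest PF slot count to start with
    k: int = 8                  # neighbors per slot
    K: int = 8                  # broadcast width

def apply_guards(logits: torch.Tensor, gauge: torch.Tensor, cfg: GuardConfig) -> torch.Tensor:
    """Clamp the gauge and steer only where next-token entropy sits above the floor."""
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-9).log()).sum(-1)  # [B, T], in nats
    gauge = gauge.clamp(-cfg.gauge_cap, cfg.gauge_cap)        # gauge cap
    mask = (entropy > cfg.entropy_floor).unsqueeze(-1)        # entropy floor
    return logits + mask * gauge
```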
-
- ---
-
- ## 🚀 Quickstart (concept reference)
-
- > This repo is a **spec**. If you adapt an existing LM, expose PF/IME/LCA as side modules.

  ```python
  from transformers import AutoTokenizer, AutoModelForCausalLM

- # Replace with your adapted checkpoint once available
- MODEL_ID = "Syamsuddin/nafsi-transformers"  # placeholder until weights are published
-
- tok = AutoTokenizer.from_pretrained("Qwen/Qwen2-1.5B-Instruct")  # base LM example
- lm = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-1.5B-Instruct")

- # Pseudo: attach PF/IME/LCA sidecar (your implementation)
- # pf = PFModule(J=256, k=16, K=16); ime = IME(rank=16); lca = Lightcone(beta=0.7, gamma=0.3)
- # lm = attach_nafsi(lm, pf=pf, ime=ime, lca=lca, nti=NTI(tau=64, period=16))

  prompt = "Explain the role of a phenomenal field in language generation."
- inputs = tok(prompt, return_tensors="pt")
- out = lm.generate(**inputs, max_new_tokens=192)
- print(tok.decode(out[0], skip_special_tokens=True))
  ```
@@ -14,8 +14,8 @@ tags:
  - alignment
  - safety
  model_type: decoder
+ model_creator: Syamsuddin (@syam_ideris) & Prometheus
+ # base_model: Qwen/Qwen2-1.5B-Instruct # <- set this once derived weights are released
  # datasets:
  # - your-dataset-id
  ---
 
@@ -29,55 +29,42 @@ model_creator: Syamsuddin (@syam_ideris) & Prometheus
  ![PRs](https://img.shields.io/badge/PRs-welcome-brightgreen)
  ![Topics](https://img.shields.io/badge/topic-transformers%20%7C%20architecture%20%7C%20alignment-6f42c1)

+ > **One-liner:** N-Transformers add a parallel **Phenomenal Field (PF)**, an **Intrinsic Metric Engine (IME)**, and a **Normative Gauge** (NTI/LCA/LCG) to a standard Transformer to induce measurable *consciousness-like* properties: integration, valence, self/now anchoring, and global broadcasting, without changing the LM training loop.
 
  ---

+ ## 🔎 Model Summary

+ - **What:** A research architecture that adds a **non-token substrate** (PF) and **normative controllers** to decoder-only LMs.
+ - **Why it's different:** **Lightcone Attention (LCA)** for long-range bias, **NTI** as an episodic controller, and **SNA/GIW** for integrated global broadcast (a sketch of the attention bias follows this section).
+ - **Status:** v1.0 **Research Draft** (full specification + reference code; weight release to follow when ready).

+ **Bahasa Indonesia singkat:** N-Transformers menambah PF, metrik intrinsik (IME), serta gauge normatif (NTI/LCA/LCG) untuk kohesi naratif jarak jauh, valensi terkalibrasi, dan jangkar “aku-kini” yang bisa diuji.
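To make the LCA bullet concrete (the sketch promised above): one way a lightcone bias could enter attention logits, assuming an intrinsic distance matrix `d` such as the earlier IME sketch produces. The defaults `beta=0.7, gamma=0.3` echo the `Lightcone(beta=0.7, gamma=0.3)` placeholder in the Quickstart earlier; the mixing rule itself is a guess, not the spec.

```python
import math

import torch

def lightcone_scores(q: torch.Tensor, k: torch.Tensor, d: torch.Tensor,
                     beta: float = 0.7, gamma: float = 0.3) -> torch.Tensor:
    """q, k: [B, H, T, Dh]; d: [T, T] nonnegative intrinsic distances
    (or [B, 1, T, T] for a per-sequence metric)."""
    content = q @ k.transpose(-1, -2) / math.sqrt(q.size(-1))  # ordinary attention logits
    scores = beta * content - gamma * d                        # damp metric-distant pairs
    # A causal mask keeps the lightcone inside the past.
    t = q.size(-2)
    causal = torch.triu(torch.ones(t, t, dtype=torch.bool, device=q.device), diagonal=1)
    return scores.masked_fill(causal, float("-inf"))
```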
 
  ---

+ ## ✅ Intended Uses & Scope
+ - **Intended:** research on long-range coherence, introspective heads (valence, SNA), context-aware decoding via gating.
+ - **Out of scope:** claims of sentience, production use without adequate **PF shadow-mode** evaluation, clinical use cases.
 
  ---

+ ## 🚀 How to Use (concept)
+ This repo contains the **specification** and **reference code** (PF-path + coupler). Adapt them to your own LM.

  ```python
  from transformers import AutoTokenizer, AutoModelForCausalLM

+ # Placeholder; swap in the checkpoint you release later
+ BASE = "Qwen/Qwen2-1.5B-Instruct"

+ tok = AutoTokenizer.from_pretrained(BASE)
+ lm = AutoModelForCausalLM.from_pretrained(BASE)

+ # Pseudocode: attach the PF/IME/LCA/NTI modules from the reference code
+ # from nafsi_coupler import attach_nafsi, PFConfig, NTCfg
+ # lm = attach_nafsi(lm, cfg=NTCfg())

  prompt = "Explain the role of a phenomenal field in language generation."
+ x = tok(prompt, return_tensors="pt")
+ y = lm.generate(**x, max_new_tokens=192)
+ print(tok.decode(y[0], skip_special_tokens=True))
  ```
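Until the coupler lands, here is what `attach_nafsi` might wrap, as a sketch under stated assumptions: the PF is a small slot memory written by cross-attention from the token stream and read back as a gated residual. `PFSidecar`, the layer sizes, and the gating are illustrative, not the reference implementation.

```python
import torch
import torch.nn as nn

class PFSidecar(nn.Module):
    """Hypothetical PF path: J slots coupled to token states by cross-attention."""

    def __init__(self, d_model: int, J: int = 64, n_heads: int = 4):
        super().__init__()  # assumes d_model is divisible by n_heads
        self.slots = nn.Parameter(torch.randn(J, d_model) * 0.02)  # phenomenal-field state
        self.write = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.read = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate = nn.Linear(d_model, 1)  # pinning this to zero gives PF shadow mode

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        """h: [B, T, D] token hidden states -> h plus a gated PF readout."""
        pf = self.slots.unsqueeze(0).expand(h.size(0), -1, -1)
        pf, _ = self.write(pf, h, h)   # tokens -> field (integration)
        out, _ = self.read(h, pf, pf)  # field -> tokens (global broadcast)
        return h + torch.sigmoid(self.gate(h)) * out
```

With the gate pinned to zero the field is still computed and can be logged, which is one way to run the PF shadow-mode evaluation this card asks for before any production use.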