---
license: cc-by-4.0
model_name: N-Transformer v1.0 (NAFSI-Transformer family)
language:
- en
- id
library_name: transformers
pipeline_tag: text-generation
tags:
- consciousness
- transformer
- research
- architecture
- alignment
- safety
model_type: decoder
model_creator: Syamsuddin (@syam_ideris)
---
# N-Transformer (NAFSI-Transformer) v1.0

[License: CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
> **One-liner:** N-Transformer adds a parallel **Phenomenal Field (PF)**, an **Intrinsic Metric Engine (IME)**, and a **Normative Gauge** (NTI/LCA/LCG) to a standard Transformer to elicit measurable *consciousness-like* properties: integration, valence, self/now anchoring, and global broadcasting, without changing the LM training loop.

---
## 🔎 Model Summary

- **What:** A research architecture that adds a **non-token substrate** (PF) and a **normative controller** to a decoder-only LM (see the sketch after this list).
- **Why it differs:** **Lightcone Attention (LCA)** for cross-range attention bias (sketched further below), **NTI** as an episodic controller, and **SNA/GIW** for integrated global broadcasting.
- **Status:** v1.0 **Research Draft** (full specification plus reference code; a weights release will follow when ready).
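
The specification is authoritative; purely as an illustration, here is a minimal sketch of the coupling idea, assuming the PF is a small recurrent non-token state that reads a pooled summary of the hidden states and gates a global broadcast back into the residual stream. The class name `PhenomenalField`, `pf_dim`, and the gating form are hypothetical, not the reference implementation.

```python
import torch
import torch.nn as nn

class PhenomenalField(nn.Module):
    """Hypothetical sketch of a non-token PF state coupled to one layer.

    Per layer: (1) read a pooled summary of the token hidden states,
    (2) update the recurrent PF state, (3) write a gated projection
    back into the residual stream as a global broadcast.
    """

    def __init__(self, d_model: int, pf_dim: int = 64):
        super().__init__()
        self.read = nn.Linear(d_model, pf_dim)    # token stream -> PF
        self.cell = nn.GRUCell(pf_dim, pf_dim)    # PF state update
        self.write = nn.Linear(pf_dim, d_model)   # PF -> token stream
        self.gate = nn.Linear(pf_dim, 1)          # scalar write gate

    def forward(self, hidden: torch.Tensor, pf: torch.Tensor):
        # hidden: (batch, seq, d_model); pf: (batch, pf_dim)
        summary = self.read(hidden.mean(dim=1))        # pooled read
        pf = self.cell(summary, pf)                    # recurrent update
        g = torch.sigmoid(self.gate(pf)).unsqueeze(1)  # (batch, 1, 1)
        broadcast = self.write(pf).unsqueeze(1)        # (batch, 1, d_model)
        return hidden + g * broadcast, pf              # gated write-back
```

Because the write gate is a learned scalar, forcing it to zero recovers the base LM exactly, which is what makes the shadow-mode testing mentioned below possible.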
**In brief:** N-Transformer adds a PF, intrinsic metrics (IME), and a normative gauge (NTI/LCA/LCG) for long-range narrative cohesion, calibrated valence, and a testable "self-now" anchor.
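
As with the PF sketch, the following is only one plausible reading of **Lightcone Attention**, assuming an additive attention-logit bias whose reachable window widens with layer depth; the `speed` growth rule and penalty value are assumptions, not the specification.

```python
import torch

def lightcone_bias(seq_len: int, layer: int, speed: int = 4,
                   penalty: float = -1e4) -> torch.Tensor:
    """Hypothetical additive attention bias for Lightcone Attention.

    Keys farther back than the layer's lightcone radius
    (speed * (layer + 1)) are penalized, so early layers attend
    locally while deeper layers see progressively longer ranges.
    The (seq_len, seq_len) result is added to attention logits.
    """
    idx = torch.arange(seq_len)
    dist = idx.unsqueeze(1) - idx.unsqueeze(0)  # query_pos - key_pos
    radius = speed * (layer + 1)                # cone widens with depth
    causal = dist >= 0                          # decoder-only masking
    inside = dist <= radius                     # within the lightcone
    zero = torch.zeros(seq_len, seq_len)
    return torch.where(causal & inside, zero,
                       torch.full_like(zero, penalty))
```

In such a scheme the bias would be added to each layer's attention logits before the softmax.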
---
## ✅ Intended Uses & Scope

- **Intended:** research on long-range coherence, introspective heads (valence, SNA), and context-aware decoding via gating.
- **Out of scope:** sentience claims, production deployment without adequate **PF shadow-mode** testing (a sketch follows this list), and clinical use cases.
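
As a concrete reading of shadow-mode, here is a hypothetical harness that runs the PF path from the earlier sketch on frozen hidden states and logs an intrinsic-metric proxy without modifying generation; `shadow_mode_report` and the logged fields are illustrative assumptions, not the reference protocol.

```python
import torch

@torch.no_grad()
def shadow_mode_report(lm, tok, prompts, pf_module):
    """Hypothetical PF shadow-mode pass: the PF state is computed and
    logged, but its write-back is discarded, so outputs are unchanged."""
    records = []
    for p in prompts:
        x = tok(p, return_tensors="pt")
        out = lm(**x, output_hidden_states=True)
        h = out.hidden_states[-1]                   # (1, seq, d_model)
        pf = torch.zeros(1, pf_module.cell.hidden_size)
        _, pf = pf_module(h, pf)                    # write-back ignored
        records.append({"prompt": p, "pf_norm": pf.norm().item()})
    return records
```

Any real shadow-mode evaluation should follow the reference code rather than this sketch.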
---
## 🚀 How to Use (concept)

This repo contains the **specification** and **reference code** (PF path + coupler). Adapt them to your own LM.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Placeholder; replace with the checkpoint released later
BASE = "Qwen/Qwen2-1.5B-Instruct"

tok = AutoTokenizer.from_pretrained(BASE)
lm = AutoModelForCausalLM.from_pretrained(BASE)

# Pseudocode: attach the PF/IME/LCA/NTI modules from the reference code
# from nafsi_coupler import attach_nafsi, PFConfig, NTCfg
# lm = attach_nafsi(lm, cfg=NTCfg())

prompt = "Explain the role of a phenomenal field in language generation."
x = tok(prompt, return_tensors="pt")
y = lm.generate(**x, max_new_tokens=192)
print(tok.decode(y[0], skip_special_tokens=True))
```