---
license: cc-by-4.0
model_name: N-Transformer v1.0 (NAFSI-Transformer family)
language:
- en
- id
library_name: transformers
pipeline_tag: text-generation
tags:
- consciousness
- transformer
- research
- architecture
- alignment
- safety
model_type: decoder
model_creator: Syamsuddin (@syam_ideris)
# base_model: Qwen/Qwen2-1.5B-Instruct # <- fill in once derived weights are released
# datasets:
# - your-dataset-id
---
# N-transformer (NAFSI-transformer) — v1.0
[![License: CC BY 4.0](https://img.shields.io/badge/License-CC%20BY%204.0-blue.svg)](https://creativecommons.org/licenses/by/4.0/)
![Status](https://img.shields.io/badge/Status-Research%20Draft-ffa500)
![transformers](https://img.shields.io/badge/transformers-%E2%89%A5%204.42-0f7)
![Python](https://img.shields.io/badge/Python-3.10%2B-informational)
![PRs](https://img.shields.io/badge/PRs-welcome-brightgreen)
![Topics](https://img.shields.io/badge/topic-transformer%20%7C%20architecture%20%7C%20alignment-6f42c1)
> **One-liner:** N-Transformer adds a parallel **Phenomenal Field (PF)**, an **Intrinsic Metric Engine (IME)**, and a **Normative Gauge** (NTI/LCA/LCG) to a standard Transformer to elicit measurable *consciousness-like* properties: integration, valence, self/now anchoring, and global broadcasting, without changing the LM training loop.
---
## 🔎 Model Summary
- **What:** A research architecture that adds a **non-token substrate** (PF) and a **normative controller** on top of a decoder-only LM.
- **Why it's different:** **Lightcone Attention (LCA)** for cross-range attention biasing, **NTI** as an episodic controller, and **SNA/GIW** for integrated global broadcasting.
- **Status:** v1.0 **Research Draft** (full specification + reference code; a weight release will follow when ready).

**In short:** N-Transformer adds the PF, intrinsic metrics (IME), and a normative gauge (NTI/LCA/LCG) for long-range narrative cohesion, calibrated valence, and a testable "self-now" anchor.
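The repo does not spell out Lightcone Attention in code at this point, so here is one rough reading of the idea, purely as a sketch: the function name, the linear decay, and the horizon cutoff are all our assumptions, not the reference design. A "lightcone" can be realized as an additive attention bias that decays with distance into the past and masks everything outside a causal horizon.

```python
def lightcone_bias(seq_len: int, horizon: int = 64, strength: float = 0.1):
    """Toy additive attention bias: position i may attend to position j
    only if 0 <= i - j <= horizon; within that cone, the bias decays
    linearly with distance into the past. Illustrative only."""
    neg_inf = float("-inf")
    bias = []
    for i in range(seq_len):
        row = []
        for j in range(seq_len):
            past = i - j  # how far j lies in i's past (negative = future)
            if past < 0 or past > horizon:
                row.append(neg_inf)          # mask future and beyond-horizon
            else:
                row.append(-strength * past)  # decay inside the lightcone
        bias.append(row)
    return bias
```

Such a matrix would be added to attention logits before the softmax; the real LCA presumably uses a learned or more elaborate schedule rather than this fixed linear decay.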
---
## ✅ Intended Uses & Scope
- **Intended:** research on long-range coherence, introspective heads (valence, SNA), and context-aware decoding via gating.
- **Out of scope:** claims of sentience, production use without adequate **PF shadow-mode** testing, clinical use cases.
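"Shadow mode" above means the PF path observes the model without altering its outputs. A minimal sketch of that contract follows; the class name, the stand-in metric, and the gate are all hypothetical, not the reference implementation.

```python
class ShadowPF:
    """Toy shadow-mode wrapper: the PF path observes activations and logs
    an intrinsic metric, but only modifies the output when gated on."""

    def __init__(self, gate_open: bool = False):
        self.gate_open = gate_open
        self.metrics = []  # logged intrinsic-metric values

    def __call__(self, hidden):
        # "Intrinsic metric": here just the mean activation, a stand-in
        # for whatever IME would actually compute.
        valence = sum(hidden) / len(hidden)
        self.metrics.append(valence)
        if not self.gate_open:
            return hidden  # shadow mode: LM output is untouched
        # Gated mode: apply a (toy) valence-dependent shift.
        return [h + 0.1 * valence for h in hidden]
```

The point of the pattern is that metrics accumulate identically in both modes, so shadow-mode logs can be validated before the gate is ever opened.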
---
## 🚀 How to Use (concept)
This repo contains the **specification** and **reference code** (PF path + coupler). Adapt it to your own LM.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Placeholder; replace with the checkpoint you release later
BASE = "Qwen/Qwen2-1.5B-Instruct"
tok = AutoTokenizer.from_pretrained(BASE)
lm = AutoModelForCausalLM.from_pretrained(BASE)

# Pseudocode: attach the PF/IME/LCA/NTI modules from the reference code
# from nafsi_coupler import attach_nafsi, PFConfig, NTCfg
# lm = attach_nafsi(lm, cfg=NTCfg())

prompt = "Explain the role of a phenomenal field in language generation."
x = tok(prompt, return_tensors="pt")
y = lm.generate(**x, max_new_tokens=192)
print(tok.decode(y[0], skip_special_tokens=True))