Update README.md
library_name: transformers
pipeline_tag: text-generation
---

# Emergent Semantics — Model_64_FLOAT (272M)

This repository provides **Model_64_FLOAT (272M)** — an **ablation model** from the papers:

[📚 Paper (Emergent Semantics Beyond Token Embeddings: Transformer LMs with Frozen Visual Unicode Representations)](https://huggingface.co/papers/2507.04886)

[📚 Paper (Growing Transformers: Modular Composition and Layer-wise Expansion on a Frozen Substrate)](https://huggingface.co/papers/2507.07129)

This checkpoint tests whether language modeling and semantic structure can emerge when the **entire input embedding layer is frozen** and contains **no semantic or glyph/visual information**.

Compared to **Model_64_BIT**, this model uses the same embedding dimensionality (`n_embed=64`) and the same “unique per token” construction, but the embedding vectors are **floating-point** (after a deterministic projection/normalization step) rather than raw binary components.

---

## Key idea (what this ablation tests)

- Each token is assigned a **frozen 64-dimensional float vector** (`n_embed=64`).
- The vectors originate from **random per-token patterns** and are constructed to guarantee a **unique ID per token** (**no collisions by design**).
- A deterministic post-processing step (e.g., PCA/projection + normalization) converts the raw patterns into **float embeddings** and standardizes their scale.
- The embedding layer is **frozen** throughout training (`requires_grad = False`).

To match the Transformer hidden size, the 64-dim embedding is expanded to 1024 via a **non-trainable repetition**:
`repeat_interleave(16)` → `64 * 16 = 1024`.

This keeps the Transformer backbone identical while isolating the role of embedding *trainability* and embedding *content*.
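
For concreteness, here is a minimal PyTorch sketch of the input path just described. It is illustrative only: the table below is filled with random values, whereas the released checkpoint ships its own fixed per-token codes.

```python
import torch
import torch.nn as nn

vocab_size, n_embed, hidden_size = 65_536, 64, 1024        # sizes stated on this card

# Frozen per-token float codes (random here purely for illustration).
embed = nn.Embedding(vocab_size, n_embed)
embed.weight.requires_grad = False                          # embedding layer frozen throughout training

token_ids = torch.tensor([[17, 42, 99]])                    # (batch, seq)
x = embed(token_ids)                                        # (1, 3, 64)
x = x.repeat_interleave(hidden_size // n_embed, dim=-1)     # non-trainable expansion: 64 * 16 = 1024
print(x.shape)                                              # torch.Size([1, 3, 1024])
```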

---

So the Transformer backbone is the same, but the **embedding table is much smaller**:

- **Positional encoding:** rotary embeddings
- **Activation:** GELU
- **Tokenizer / vocab size:** 65,536 (bvv241-2-3 compatible)
- **Input embeddings:** **frozen**, **float**, `n_embed=64`, expanded to 1024 by repetition (non-trainable)
- **Embedding initialization:** random per-token patterns → deterministic projection/normalization → float vectors (**unique per token**, no collisions); see the sketch after this list
- **Output head:** **not tied** to the input embeddings (trained separately)
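
The initialization can be pictured with the following sketch. It is an assumption-laden illustration rather than the repo's actual recipe: the seed, the choice of projection, and the normalization are placeholders for the deterministic post-processing described above, and the real table is the one stored in the checkpoint.

```python
import torch

torch.manual_seed(0)                         # fixed seed -> deterministic, reproducible table
vocab_size, n_embed = 65_536, 64

raw = torch.randn(vocab_size, n_embed)       # random per-token patterns
proj = torch.linalg.qr(torch.randn(n_embed, n_embed)).Q    # placeholder projection (stand-in for PCA)
codes = raw @ proj
codes = codes / codes.norm(dim=-1, keepdim=True)            # normalization standardizes the scale

# "Unique ID per token": no two rows should coincide.
print(torch.unique(codes, dim=0).shape[0] == vocab_size)    # True; collisions are vanishingly unlikely
```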

---

## Files in this repo (embedding reference)

For transparency and reproducibility, the explicit frozen embedding values are included in this repository.

- `embeddings.txt` (human-readable reference; token → 64-bit vector):
  `https://huggingface.co/Bochkov/emergent-semantics-model-64-float-272m/blob/main/embeddings.txt`

> Note: Embeddings are shipped in this model repo (even though the tokenizer exists separately) to keep the model+embedding mapping self-contained and unambiguous.

---

## Tokenizer

The intended tokenizer is **bvv241-2-3** (same vocab size and indexing). You may load the tokenizer either from this model repo (if included) or from the standalone tokenizer repo:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Bochkov/emergent-semantics-model-64-float-272m")
model = AutoModelForCausalLM.from_pretrained("Bochkov/emergent-semantics-model-64-float-272m", trust_remote_code=True).to('cuda')  # moved to the GPU to match the device of inputs below

inputs = torch.tensor([tokenizer.encode("Question: What is the capital of Japan?\nAnswer:")], dtype=torch.long, device='cuda')

# The generation call is not visible in this view of the card; the settings below are an illustrative default.
outputs = model.generate(inputs, max_new_tokens=32)

print(tokenizer.decode(outputs[0].tolist()))
```
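
If you probe or fine-tune this checkpoint, the ablation only stays meaningful while the input table remains frozen. The sketch below re-asserts that constraint and checks the untied head using standard `transformers` accessors; it assumes the custom modeling code exposes `get_input_embeddings` / `get_output_embeddings` as usual.

```python
# Keep the input embedding table frozen, as in the original training setup.
for p in model.get_input_embeddings().parameters():
    p.requires_grad = False

# The card states the output head is trained separately and not tied to the input
# embeddings, so the two weight tensors should be distinct objects.
tied = model.get_output_embeddings().weight is model.get_input_embeddings().weight
print("input/output weights tied:", tied)   # expected: False for this checkpoint
```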

This model is intended for **research only**, especially for:

- Comparisons vs **Model_UNI_GLYPH (glyph/PCA frozen embeddings)** and vs **trainable-embedding baselines**
- Ablations comparing **binary vs float** frozen identifier embeddings at the same `n_embed`
- Studying whether semantic structure emerges in Transformer blocks when the input embedding space is a **random-but-unique float code**

Not intended for production deployment (no instruction tuning, safety tuning, or factuality guarantees).