Update README.md
README.md
CHANGED
@@ -8,83 +8,62 @@ tags:
- i3-arhitecture
---

-#

-**i3Model** is a research-focused, ultra-efficient large language model prototype designed for exploring advanced hybrid architectures that balance **performance, scalability, and memory efficiency**. It integrates several experimental mechanisms for sequence modeling, low-rank parameterization, and quantization-aware training to achieve strong performance under resource constraints.

-This model was developed for experimentation in lightweight large language modeling, particularly for tasks such as:

-* Character- or token-level language modeling
-* Text generation and continuation
-* Research into efficient training and deployment techniques

-> **Note:** Architectural details are proprietary and are intentionally omitted.

---

-##

-The model is intended for:

-* Prototyping efficient LLMs for low-resource environments
-* Studying low-rank adaptation and quantization for model compression

---

-##

-* **Hybrid Recurrent–Sequence Modeling:** Combines sequence-mixing and dynamic state-space mechanisms for temporal reasoning.
-* **Low-Rank Parameterization:** Reduces parameter footprint while maintaining expressivity.
-* **Quantization-Aware Design:** Uses a 4-bit quantization scheme with FP32 master weights for training stability.
-* **Causal Autoregressive Training:** Enables next-token prediction and controlled text generation.
-* **Modular and Extensible:** Supports layer-wise experimentation and scalable configuration.
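As a generic illustration only (the i3Model architecture itself is proprietary and omitted): low-rank parameterization of the kind listed above factors a weight matrix into two thin matrices. All names and dimensions below are illustrative, not the i3Model design.

```python
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    """Linear layer whose weight is factored as B @ A with a small rank r."""

    def __init__(self, d_in: int, d_out: int, rank: int):
        super().__init__()
        # Parameter count drops from d_in * d_out to rank * (d_in + d_out).
        self.A = nn.Linear(d_in, rank, bias=False)
        self.B = nn.Linear(rank, d_out, bias=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.B(self.A(x))

# Rank 32 in place of a full 512x512 weight: ~33k parameters instead of ~262k.
layer = LowRankLinear(512, 512, rank=32)
out = layer(torch.randn(1, 512))
```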

-* **Learning Rate:** 3e-4
-* **Training Duration:** ~2000 iterations on a small dataset
-* **Batch Size:** 2

---

-##

-* **Generation coherence at small scale**
-* **Speed and memory performance metrics**

-While not benchmarked on large-scale NLP datasets, the model demonstrates promising early results in lightweight text generation with efficient runtime characteristics.

---

## Limitations

-*
-*
-*
-*

---

-##

-This model is intended solely for **research and educational use**. Users should:

-*
-*
-* Attribute the model appropriately if adapted or redistributed.

+# i3-tiny

+**i3-tiny** is a compact, efficient character-level language model designed for experimentation and exploration in text generation. Despite its small size, it packs a surprising punch for creative and research-oriented tasks, generating sequences that are quirky, unpredictable, and full of “human-like” character-level errors.

---

+## Model Overview

+i3-tiny is trained to predict the next character in a sequence, making it ideal for **character-level language modeling**, **creative text generation**, and **research on lightweight, efficient models**. Its small footprint allows rapid experimentation, even on modest hardware, and it provides a playground for studying how models learn patterns in sequences of characters.

+The model is **intentionally experimental**: it’s not aligned, fact-checked, or polished. Instead, it showcases how a compact architecture can capture patterns in text, learn from repetition, and generate outputs that are sometimes surprisingly coherent, sometimes hilariously garbled.
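As a rough sketch of the next-character objective described above: text is lowercased, each character is mapped to an integer id, and every training pair is a window of ids together with the same window shifted one position. This is a generic illustration with made-up names, not i3-tiny’s actual code.

```python
import torch

text = "the quick brown fox jumps over the lazy dog".lower()

# Character vocabulary (i3-tiny's spans 34 lowercase characters).
chars = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text], dtype=torch.long)

block_size = 8  # i3-tiny trains on windows of 128

# The target is the input shifted by one: at every position the model
# must predict the character that comes next.
x = data[:block_size]
y = data[1:block_size + 1]
```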

---

+## Training Details

+* **Dataset:** ~45,830 characters (a curated text corpus repeated to improve exposure)
+* **Vocabulary:** 34 characters (all lowercased)
+* **Sequence length:** 128
+* **Training iterations:** 2,000
+* **Batch size:** 2
+* **Optimizer:** AdamW, learning rate 3e-4
+* **Model parameters:** 711,106
+* **Performance notes:** Each iteration takes roughly 400–500 ms; 100 iterations take ~45 s on average. Loss steadily decreased from 3.53 to 2.15 over training.
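For orientation, a generic character-LM training step with these settings (AdamW at 3e-4, batch size 2, window 128) is sketched below; the stand-in model and random data are illustrative assumptions, not i3-tiny’s implementation.

```python
import torch
import torch.nn.functional as F

vocab_size, block_size, batch_size = 34, 128, 2

# Stand-in model: anything mapping (B, T) ids to (B, T, vocab_size) logits.
model = torch.nn.Sequential(
    torch.nn.Embedding(vocab_size, 64),
    torch.nn.Linear(64, vocab_size),
)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
data = torch.randint(0, vocab_size, (45_830,))  # stand-in for the encoded corpus

for _ in range(2_000):
    # Sample a batch of random (input, shifted target) windows.
    ix = torch.randint(0, len(data) - block_size - 1, (batch_size,))
    x = torch.stack([data[i:i + block_size] for i in ix])
    y = torch.stack([data[i + 1:i + 1 + block_size] for i in ix])

    loss = F.cross_entropy(model(x).view(-1, vocab_size), y.view(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```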

+**Example generation (iteration 1200):**

+```
+Prompt: "The quick"
+Generated: the quick efehn. dethe cans the fice the fpeens antary of eathetint, an thadat hitimes the and cow thig, and
+```

+These outputs capture the **chaotic creativity** of a character-level model: a mixture of readable words, invented forms, and surprising sequences.
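A sampling loop of the kind that could produce output like this is sketched here; the `generate` helper, temperature handling, and model interface are hypothetical, assuming only a model that returns per-position logits.

```python
import torch

@torch.no_grad()
def generate(model, idx: torch.Tensor, max_new: int, temperature: float = 1.0):
    """Autoregressively extend a (1, T) tensor of character ids."""
    for _ in range(max_new):
        logits = model(idx)[:, -1, :] / temperature  # logits at the last position
        probs = torch.softmax(logits, dim=-1)
        next_id = torch.multinomial(probs, num_samples=1)
        idx = torch.cat([idx, next_id], dim=1)  # append and feed back in
    return idx
```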

---

+## Intended Uses

+* **Character-level text generation experiments**
+* **Research and education**: studying lightweight language models, sequence learning, and text modeling
+* **Creative exploration**: generating quirky text or procedural content for games, demos, or artistic projects

+> ⚠️ i3-tiny is experimental and **not intended for production or high-stakes applications**. Text may be repetitive, nonsensical, or inconsistent.

---

## Limitations

+* Small vocabulary and character-level modeling limit natural language fluency
+* Outputs are **highly experimental** and not fact-checked
+* Generated sequences can be repetitive or unexpectedly garbled
+* Not aligned or safety-checked

---

+## Model Weights

+* Stored in `model.bin`
+* Compatible with PyTorch
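Assuming `model.bin` is a torch-saved state dict (the README does not specify the serialization format), a quick way to inspect it might be:

```python
import torch

# Assumption: model.bin holds a state_dict; the model class itself is not
# published here, so we only load and inspect the tensors.
state_dict = torch.load("model.bin", map_location="cpu")
print({name: tuple(t.shape) for name, t in state_dict.items()})
```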