Upload 6 files
- README.md +45 -0
- config.json +17 -0
- pytorch_model.bin +3 -0
- special_tokens_map.json +1 -0
- tokenizer.json +0 -0
- tokenizer_config.json +8 -0

README.md
CHANGED
@@ -0,0 +1,45 @@
+---
+language: en
+license: mit
+tags:
+- conversational
+- efficient
+- proprietary-architecture
+datasets:
+- starhopp3r/TinyChat
+---
+
+# i3 Model - Memory-Optimized Efficient Conversational Language Model
+
+## Model Description
+
+The **i3 Model** is a memory-optimized language model designed for conversational understanding. This version uses streaming tokenization to minimize RAM usage during training.
+
+**PROPRIETARY ARCHITECTURE**: The internal architecture and training methodologies are proprietary and confidential.
+
+## Model Statistics
+
+- **Vocabulary Size**: 4,466 (variable-length chunks)
+- **Hidden Dimension**: 512
+- **Number of Layers**: 24
+- **Max Sequence Length**: 256
+- **Total Parameters**: 22,640,626
+- **Tokenization**: Memory-efficient variable-length chunking (2-3 characters)
+
+### Key Features
+
+1. **Memory-Optimized**: Streaming tokenization reduces RAM usage significantly
+2. **Proprietary Hybrid Architecture**: Advanced sequence processing with linear complexity
+3. **Variable-Length Tokenization**: Smart chunking strategy for better compression
+4. **Conversational Focus**: Specialized for dialogue and emotional understanding
+
+## Training Details
+
+- **Dataset**: [TinyChat](https://huggingface.co/datasets/starhopp3r/TinyChat)
+- **Training Objective**: Next-token prediction with proprietary optimization
+- **Framework**: PyTorch
+- **Memory Optimization**: Streaming dataset processing
+
+## License
+
+**PROPRIETARY LICENSE** - All rights reserved.
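The model card does not disclose how the `variable_2_3` chunking works. Purely as an illustration of what variable-length 2-3 character chunking could mean (every name and rule here is an assumption, not the actual proprietary `ChunkTokenizer`), a greedy splitter might look like:

```python
# Hypothetical sketch of variable-length 2-3 character chunking.
# The real ChunkTokenizer is proprietary; this only illustrates the idea
# of splitting text into 2-3 character units against a known vocabulary.

def chunk_text(text: str, vocab: set[str]) -> list[str]:
    """Greedily take a 3-char chunk when it is in the vocab, else 2 chars."""
    chunks = []
    i = 0
    while i < len(text):
        if text[i:i + 3] in vocab and len(text) - i >= 3:
            chunks.append(text[i:i + 3])
            i += 3
        else:
            chunks.append(text[i:i + 2])
            i += 2
    return chunks

vocab = {"hel", "lo ", "the"}
print(chunk_text("hello there", vocab))  # ['hel', 'lo ', 'the', 're']
```

A greedy longest-match policy like this is one plausible way to get the "better compression" the card claims, since frequent trigrams consume three characters per token instead of two.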
config.json
ADDED
@@ -0,0 +1,17 @@
+{
+  "architectures": [
+    "i3Model"
+  ],
+  "model_type": "i3",
+  "vocab_size": 4466,
+  "d_model": 512,
+  "n_layers": 24,
+  "n_heads": 16,
+  "max_seq_len": 256,
+  "rank": 128,
+  "d_state": 64,
+  "tokenizer_type": "chunk",
+  "chunk_strategy": "variable_2_3",
+  "torch_dtype": "float32",
+  "transformers_version": "4.36.0"
+}
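Note that `i3` is not a stock `transformers` model type, so `AutoConfig`/`AutoModel` will not resolve it without custom model code; reading the hyperparameters directly from the JSON always works. A minimal sketch (the config is inlined here so the snippet is self-contained; in practice you would `json.load` the file from the repo):

```python
import json

# Minimal sketch: read the model hyperparameters straight from config.json.
# "i3" is not a registered transformers model_type, so the Auto* classes
# would need custom code to build the model; plain JSON parsing always works.
config_text = """
{
  "architectures": ["i3Model"],
  "model_type": "i3",
  "vocab_size": 4466,
  "d_model": 512,
  "n_layers": 24,
  "n_heads": 16,
  "max_seq_len": 256
}
"""
config = json.loads(config_text)  # or: json.load(open("config.json"))
head_dim = config["d_model"] // config["n_heads"]
print(config["model_type"], head_dim)  # head_dim = 512 / 16 = 32
```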
pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1ccf2db41965042284ce1f7024ab4e84b2194a3efdc6152f859f90234a26ef22
+size 90818463
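The 90,818,463-byte size in this git-lfs pointer is consistent with the README's parameter count: 22,640,626 float32 parameters occupy 22,640,626 × 4 = 90,562,504 bytes, and the remaining ~250 KB is `torch.save` pickle and metadata overhead. A quick check:

```python
# Sanity check: README parameter count vs. pytorch_model.bin size.
n_params = 22_640_626        # from the model card
bytes_per_param = 4          # float32, per torch_dtype in config.json
file_size = 90_818_463       # from the git-lfs pointer

raw_weights = n_params * bytes_per_param
overhead = file_size - raw_weights
print(raw_weights, overhead)  # 90562504 bytes of weights, 255959 bytes extra
```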
special_tokens_map.json
ADDED
@@ -0,0 +1 @@
+{}
tokenizer.json
ADDED
The diff for this file is too large to render.
tokenizer_config.json
ADDED
@@ -0,0 +1,8 @@
+{
+  "tokenizer_class": "ChunkTokenizer",
+  "model_max_length": 256,
+  "vocab_size": 4466,
+  "chunk_strategy": "variable_2_3",
+  "special_tokens": {},
+  "clean_up_tokenization_spaces": false
+}
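A downstream loader only needs a few of these fields. As a hedged sketch (the `ChunkTokenizer` class is not public, so the token ids below are placeholders), enforcing `model_max_length` when preparing inputs could look like:

```python
import json

# Sketch: enforce model_max_length from tokenizer_config.json when batching.
# The ChunkTokenizer itself is proprietary; token_ids here are placeholders.
tokenizer_config = json.loads("""
{
  "tokenizer_class": "ChunkTokenizer",
  "model_max_length": 256,
  "vocab_size": 4466,
  "special_tokens": {},
  "clean_up_tokenization_spaces": false
}
""")

def truncate(token_ids: list[int], cfg: dict) -> list[int]:
    """Clip a sequence to the model's maximum context length."""
    return token_ids[: cfg["model_max_length"]]

ids = list(range(300))                       # placeholder id sequence
print(len(truncate(ids, tokenizer_config)))  # 256
```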