Model save

Browse files
- README.md +49 -183
- config.json +6 -7
- model.safetensors +2 -2
- training_args.bin +1 -1
README.md CHANGED

````diff
@@ -1,192 +1,58 @@
 ---
 library_name: transformers
 tags:
-- neollm
-- hybrid-attention
-- fanformer
-- gated-delta-networks
-- polynomial-activations
-- fineweb-edu
-- ademamix
-- custom-scheduler
-- flash-attention
-- torch-compile
-pipeline_tag: text-generation
+- generated_from_trainer
 model-index:
 - name: NeoLLM
-  results:
-  - task:
-      type: text-generation
-      name: Text Generation
-    dataset:
-      type: multiple-choice
-      name: ARC-Easy
-    metrics:
-    - type: accuracy
-      value: 39.14
-  - task:
-      type: text-generation
-      name: Text Generation
-    dataset:
-      type: multiple-choice
-      name: HellaSwag
-    metrics:
-    - type: accuracy
-      value: 26.55
-  - task:
-      type: text-generation
-      name: Text Generation
-    dataset:
-      type: multiple-choice
-      name: MMLU
-    metrics:
-    - type: accuracy
-      value: 24.25
-  - task:
-      type: text-generation
-      name: Text Generation
-    dataset:
-      type: multiple-choice
-      name: ARC-Challenge
-    metrics:
-    - type: accuracy
-      value: 17.24
-license: apache-2.0
-datasets:
-- HuggingFaceFW/fineweb-edu
-language:
-- en
+  results: []
 ---
 
+<!-- This model card has been generated automatically according to the information the Trainer had access to. You
+should probably proofread and complete it, then remove this comment. -->
+
 # NeoLLM
 
-… [old lines 64-107, a model description and architecture overview, were not preserved in this capture] …
-- **Tokenizer**: Qwen3 Fast Tokenizer with weight tying enabled
-
-### Training Configuration
-- **Hardware**: NVIDIA RTX 5090
-- **Training Time**: 3 hours
-- **Loss Function**: Cut Your Losses (from "Cut Your Losses in Large-Vocabulary Language Models") - NOT standard Cross-Entropy
-- **Optimizer**: AdEMAMix with parameters:
-  - Betas: (0.9, 0.999, 0.999)
-  - Alpha: 5.0
-  - t_alpha: 5000, t_beta3: 5000
-  - Weight decay: 0.1
-- **Learning Rate Schedule**: Custom cosine with linear warmup
-  - Start LR: 3e-4
-  - Peak LR: 6e-4 (at 5000 warmup steps)
-  - Min LR: 6e-5
-- **Batch Size**: 64 per device
-- **Precision**: BF16 with torch.compile optimization
-- **Hardware Optimizations**: Flash Attention 2
-- **Epochs**: 1
-
````
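The removed card pinned the schedule down completely: linear warmup from 3e-4 to 6e-4 over the first 5,000 steps, then cosine decay to a floor of 6e-5. A minimal sketch of such a schedule (illustrative only; the function name and `total_steps` argument are assumptions, not the repo's code, and the AdEMAMix optimizer itself is not sketched here):

```python
import math

def neollm_lr(step: int, total_steps: int, start_lr: float = 3e-4,
              peak_lr: float = 6e-4, min_lr: float = 6e-5,
              warmup_steps: int = 5000) -> float:
    """Linear warmup from start_lr to peak_lr, then cosine decay to min_lr."""
    if step < warmup_steps:
        # Linear ramp: start_lr at step 0, reaching peak_lr at warmup_steps.
        return start_lr + (peak_lr - start_lr) * step / warmup_steps
    # Cosine decay from peak_lr down to min_lr over the remaining steps.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1.0 + math.cos(math.pi * progress))
```

To plug this into `torch.optim.lr_scheduler.LambdaLR`, return `neollm_lr(step, ...) / base_lr` so the value acts as a multiplier on the optimizer's base learning rate.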
````diff
-### Framework Versions
-- **PyTorch**: 2.8.0+cu129
-- **Transformers**: 4.57.0.dev0
-- **Flash Attention**: 2.x
-- **CUDA**: 12.9
-
-## Evaluation Results
-
-### Benchmark Performance (1-shot evaluation)
-
-| Task | Score |
-|------|-------|
-| ARC-Easy | 39.14% |
-| HellaSwag | 26.55% |
-| MMLU | 24.25% |
-| ARC-Challenge | 17.24% |
-
-*All evaluations performed in few-shot (1-shot) setting*
-
````
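The card does not say which harness produced these numbers. The four tasks and the 1-shot setting map naturally onto EleutherAI's lm-evaluation-harness; a hypothetical reproduction via its Python API (assumes harness v0.4+, a local checkpoint path, and the `acc,none` metric key, none of which is recorded in the card):

```python
import lm_eval  # EleutherAI lm-evaluation-harness (v0.4+)

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=./NeoLLM,trust_remote_code=True,dtype=bfloat16",
    tasks=["arc_easy", "hellaswag", "mmlu", "arc_challenge"],
    num_fewshot=1,
)
print({task: r.get("acc,none") for task, r in results["results"].items()})
```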
````diff
-## Model Architecture Components
-
-### Fourier Analysis Network (FANLayer)
-Based on "FANformer: Improving Large Language Models Through Effective Periodicity Modeling":
-```
-FANLayer'(X) = [cos(WpX)||sin(WpX)||(WpX + Bp)]
-```
-
````
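A minimal PyTorch sketch of the layer above, assuming `fan_ratio` (0.125 in config.json) is the fraction of output features given to each periodic branch; names and shapes are illustrative, and the repo's actual modeling code may differ:

```python
import torch
import torch.nn as nn

class FANLayer(nn.Module):
    """Sketch of FANLayer'(X) = [cos(WpX) || sin(WpX) || (WpX + Bp)]."""

    def __init__(self, d_in: int, d_out: int, fan_ratio: float = 0.125):
        super().__init__()
        d_p = int(d_out * fan_ratio)                 # width of each periodic branch
        self.w_p = nn.Linear(d_in, d_p, bias=False)  # shared by the cos and sin branches
        self.w_g = nn.Linear(d_in, d_out - 2 * d_p)  # plain linear branch, with bias

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        p = self.w_p(x)
        # Concatenate the periodic and linear branches along the feature dim.
        return torch.cat([torch.cos(p), torch.sin(p), self.w_g(x)], dim=-1)
```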
````diff
-### LayerNorm Scaling (LNS)
-Implements scaling factor 1/√ℓ as described in "The Curse of Depth in Large Language Models":
-```
-h^(ℓ) = LayerNorm(h^(ℓ)) × (1/√ℓ)
-```
-
````
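The formula fixes the implementation almost entirely; a sketch, assuming ℓ is the 1-based index of the layer in the stack:

```python
import torch
import torch.nn as nn

class ScaledLayerNorm(nn.Module):
    """LayerNorm whose output is damped by 1/sqrt(layer_idx), so deeper
    layers contribute smaller updates to the residual stream."""

    def __init__(self, hidden_size: int, layer_idx: int):
        super().__init__()
        self.norm = nn.LayerNorm(hidden_size)
        self.scale = 1.0 / (layer_idx ** 0.5)  # ℓ is 1-based

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.norm(h) * self.scale
```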
````diff
-### Gradient-Preserving Activation Scaling (GPAS)
-Scales activations without penalizing gradients using stop-gradient operations.
-
````
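The stop-gradient trick can be written in one line: the forward value is scaled, but the scaling lives inside a detached term, so the backward pass sees an identity gradient. A sketch with a fixed scale (the actual GPAS formulation may use a learned, per-layer gate):

```python
import torch

def gpas_scale(h: torch.Tensor, alpha: float) -> torch.Tensor:
    """Forward pass returns alpha * h; backward pass behaves like the identity."""
    # (alpha * h - h) is detached, so it contributes no gradient of its own.
    return h + (alpha * h - h).detach()
```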
````diff
-### Polynomial Composition Activations (PolyNorm)
-Custom activation function based on "Polynomial Composition Activations: Unleashing the Dynamics of Large Language Models".
-
````
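A sketch of a third-order PolyNorm in the spirit of the cited paper: a learned, weighted sum of RMS-normalized elementwise powers of the input. Order, initialization, and normalization details are assumptions here, not taken from the repo:

```python
import torch
import torch.nn as nn

class PolyNorm(nn.Module):
    """Weighted sum of normalized elementwise powers x, x^2, ..., x^order."""

    def __init__(self, order: int = 3, eps: float = 1e-6):
        super().__init__()
        self.weights = nn.Parameter(torch.full((order,), 1.0 / order))
        self.bias = nn.Parameter(torch.zeros(1))
        self.order = order
        self.eps = eps

    def _rms_norm(self, x: torch.Tensor) -> torch.Tensor:
        # Normalize each power over the feature dim to keep scales comparable.
        return x * torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.bias
        for i in range(self.order):
            out = out + self.weights[i] * self._rms_norm(x ** (i + 1))
        return out
```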
````diff
-### Gated Delta Networks
-Linear attention mechanism from "Gated Delta Networks: Improving Mamba2 with Delta Rule" for efficient sequence modeling.
-
````
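For intuition, one step of the gated delta rule in its naive per-token recurrent form; real implementations (including the paper's) use chunked, hardware-efficient scans, and the shapes and gating below are illustrative:

```python
import torch

def gated_delta_step(S: torch.Tensor, q: torch.Tensor, k: torch.Tensor,
                     v: torch.Tensor, alpha: float, beta: float):
    """One recurrence step over the fast-weight state S of shape (d_k, d_v).

    alpha gates how much of the old state survives; beta is the writing
    strength of the new (k -> v) association. k is assumed L2-normalized.
    """
    # Decay the state, erase the value currently bound to k, write the new one.
    S = alpha * (S - beta * torch.outer(k, k @ S)) + beta * torch.outer(k, v)
    o = q @ S  # read-out for this token, shape (d_v,)
    return S, o
```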
````diff
-## Intended Uses & Limitations
-
-### Intended Uses
-- Research into hybrid attention architectures
-- Educational purposes for understanding advanced LLM components
-- Small-scale language modeling experiments
-- Benchmarking novel architectural components
-
-### Limitations
-- Relatively small model size (110M parameters) limits capability compared to larger models
-- Training limited to 4M samples from single dataset
-- Performance below state-of-the-art models on standard benchmarks
-- Experimental architecture may have stability considerations in production
-
-### Recommendations
-- Best suited for research and educational applications
-- Consider fine-tuning for specific downstream tasks
-- Monitor performance carefully if adapting for production use
-
-## Training Infrastructure
-
-- **Mixed Precision**: BF16 for numerical stability
-- **Compilation**: torch.compile with max-autotune mode
````
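The two infrastructure bullets correspond to standard PyTorch calls; a self-contained sketch, where the linear module is a stand-in for the real model:

```python
import torch
import torch.nn as nn

model = nn.Linear(512, 512).cuda()                 # stand-in for the real model
model = torch.compile(model, mode="max-autotune")  # aggressive autotuning preset

x = torch.randn(8, 512, device="cuda")
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    y = model(x)                                   # BF16 mixed-precision forward pass
```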
````diff
+This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
+It achieves the following results on the evaluation set:
+- Loss: 3.8652
+
+## Model description
+
+More information needed
+
+## Intended uses & limitations
+
+More information needed
+
+## Training and evaluation data
+
+More information needed
+
+## Training procedure
+
+### Training hyperparameters
+
+The following hyperparameters were used during training:
+- learning_rate: 0.0006
+- train_batch_size: 64
+- eval_batch_size: 64
+- seed: 42
+- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
+- lr_scheduler_type: linear
+- lr_scheduler_warmup_ratio: 0.1
+- num_epochs: 1
+
+### Training results
+
+| Training Loss | Epoch | Step | Validation Loss |
+|:-------------:|:------:|:----:|:---------------:|
+| 4.2056 | 0.3840 | 3000 | 4.2055 |
+| 3.8841 | 0.7680 | 6000 | 3.8652 |
+
+
+### Framework versions
+
+- Transformers 4.57.0.dev0
+- Pytorch 2.8.0+cu129
+- Datasets 3.6.0
+- Tokenizers 0.22.1
````
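For reference, the recorded hyperparameters correspond roughly to the following `TrainingArguments`; this is a reconstruction, not the contents of training_args.bin, and note that it records a linear schedule with fused AdamW, whereas the removed card described AdEMAMix with a custom cosine schedule:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="neollm-out",          # placeholder
    learning_rate=6e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch_fused",
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=1,
    bf16=True,                        # BF16 training, per the removed card
)
```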
config.json CHANGED

````diff
@@ -2,15 +2,14 @@
   "architectures": [
     "NeoLLMForCausalLM"
   ],
-  "auto_map": {
-    "AutoConfig": "configuration_neollm.NeoLLMConfig",
-    "AutoModel": "modeling_neollm.NeoLLMModel",
-    "AutoModelForCausalLM": "modeling_neollm.NeoLLMForCausalLM"
-  },
   "attention_bias": false,
   "attention_dropout": 0.1,
+  "auto_map": {
+    "AutoConfig": "configuration_unified.UnifiedModelConfig",
+    "AutoModel": "modeling_unified.UnifiedModel",
+    "AutoModelForCausalLM": "modeling_unified.UnifiedModel"
+  },
   "dropout_rate": 0.1,
-
   "dtype": "bfloat16",
   "eos_token_id": 151645,
   "fan_ratio": 0.125,
@@ -18,7 +17,7 @@
   "hidden_act": "xielu",
   "hidden_size": 512,
   "initializer_range": 0.02,
-  "intermediate_size":
+  "intermediate_size": 1536,
   "layer_types": [
     "linear_attention",
     "linear_attention",
````
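Because `auto_map` now points at custom classes, loading the checkpoint requires `trust_remote_code=True` so that transformers imports configuration_unified.py / modeling_unified.py from the repo. A sketch with a placeholder repo id:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "your-username/NeoLLM"  # placeholder; substitute the actual repo id

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    trust_remote_code=True,       # resolves the auto_map entries to custom code
    torch_dtype=torch.bfloat16,   # matches "dtype": "bfloat16" in config.json
)
```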
model.safetensors CHANGED

````diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:673a2ad3e9fb95397d7c50a0d7023b13ddd589eb5b9205b3370e9da8be1d4991
+size 231636744
````
training_args.bin CHANGED

````diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:1a7ed46ac173cd670ec0cb96d3ba813baf4fad6c4f51be08a8e3127610528168
 size 5969
````