---
library_name: transformers
model_name: Asterisk
base_model: HuggingFaceTB/SmolLM2-135M-Instruct
tags:
- aspp
- hybrid-architecture
- graph-reasoning
- sft
- trl
license: apache-2.0
language:
- en
---
# Asterisk: Hybrid ASPP-Attention Architecture
**Asterisk** is a research implementation that augments **SmolLM2-135M** with the **ASPP (Adjacency-Structured Parallel Propagation)** operator. Each decoder layer fuses graph-based local reasoning (ASPP) with standard global attention through a learned gate, aiming at improved expressiveness on structured reasoning tasks.
## Model Description
- **Base Model**: [SmolLM2-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct)
- **Architecture**: Hybrid ASPP-Attention (30 hybrid layers)
- **Parameters**: 171.2M total (≈36M additional ASPP parameters)
- **Training**: Supervised Fine-Tuning on Capybara dataset
- **Framework**: Transformers 4.57.6, TRL 0.27.0
## Evaluation Results
Evaluated with [LM-Evaluation-Harness](https://github.com/EleutherAI/lm-evaluation-harness). **Note**: these are preliminary results obtained with sample limits; a full evaluation is pending.
| Task | Metric | Score | Stderr |
|------|--------|-------|--------|
| **HellaSwag** | acc_norm | **0.4430** | Β±0.0157 |
| **ARC-Easy** | acc_norm | **0.5450** | Β±0.0158 |
| **ARC-Challenge** | acc_norm | **0.2884** | Β±0.0132 |
| **PIQA** | acc_norm | **0.6770** | Β±0.0148 |
| **WinoGrande** | acc | **0.5210** | Β±0.0158 |
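The scores above can be reproduced with an invocation along the following lines (a sketch: the exact task variants, batch size, and the `--limit` used for the preliminary run are not recorded in this card):
```bash
# Hypothetical reproduction command; add --limit N to match the sample-limited run.
lm_eval --model hf \
    --model_args pretrained=NoesisLab/Asterisk,trust_remote_code=True,dtype=bfloat16 \
    --tasks hellaswag,arc_easy,arc_challenge,piqa,winogrande \
    --batch_size 8
```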
### Key Innovation: The Asterisk Operator (⋆-operator)
The **Asterisk Operator** performs local parallel state evolution through point-wise transformations:
```
h_i^(t+1) = h_i^(t) + α * φ(h_i^(t))    [K-step iterative evolution]
```
This is then gated and fused with standard Llama attention outputs:
```
output = gate * ASPP(x) + (1-gate) * Attention(x)
```
## Architecture
### 1. ASPPOperator (Point-wise Parallel Propagation)
```python
class ASPPOperator:
    """
    Forward pass:
      1. Optional dimensionality reduction: h_t = down_proj(hidden_states)
      2. K-step evolution: h_t = h_t + α * φ(h_t)  [K times]
      3. Layer normalization after each step
      4. Optional projection back: output = up_proj(h_t)

    Parameters:
      - hidden_size: 576 (model dimension)
      - aspp_hidden_dim: 256 (internal ASPP dimension)
      - aspp_num_steps: 8 (evolution iterations)
      - aspp_dropout: 0.2
    """
```
**Pseudocode:**
```
function ASPP(hidden_states):
    # Optional dimensionality reduction
    if use_projection:
        h_t ← down_proj(hidden_states)
        h_t ← dropout(h_t)
    else:
        h_t ← hidden_states

    # Learnable number of steps
    k_steps ← max(1, int(sigmoid(k_logit) * num_steps))

    # K-step point-wise evolution
    for t = 1 to k_steps:
        # Point-wise update: φ(h_t) = MLP(h_t)
        h_t_next ← update_net(h_t)
        # Scaled residual connection
        h_t ← h_t + residual_scale * h_t_next
        h_t ← layer_norm(h_t)

    # Project back to original dimension
    if use_projection:
        h_t ← up_proj(h_t)
        h_t ← dropout(h_t)
    return h_t
```
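A minimal runnable PyTorch sketch of this operator follows. The two-layer MLP for `update_net`, the SiLU activation, and the initial values of `k_logit` and `residual_scale` are assumptions; the authoritative implementation is `AsteriskForCausalLM.py` in this checkpoint.
```python
import torch
import torch.nn as nn

class ASPPOperatorSketch(nn.Module):
    """Sketch of the point-wise K-step evolution described above."""

    def __init__(self, hidden_size=576, aspp_hidden_dim=256,
                 aspp_num_steps=8, aspp_dropout=0.2):
        super().__init__()
        self.num_steps = aspp_num_steps
        self.down_proj = nn.Linear(hidden_size, aspp_hidden_dim)
        self.up_proj = nn.Linear(aspp_hidden_dim, hidden_size)
        self.dropout = nn.Dropout(aspp_dropout)
        # Point-wise update φ: a small MLP (assumed layout)
        self.update_net = nn.Sequential(
            nn.Linear(aspp_hidden_dim, aspp_hidden_dim),
            nn.SiLU(),
            nn.Linear(aspp_hidden_dim, aspp_hidden_dim),
        )
        self.layer_norm = nn.LayerNorm(aspp_hidden_dim)
        self.k_logit = nn.Parameter(torch.zeros(1))            # learnable step count
        self.residual_scale = nn.Parameter(torch.tensor(0.1))  # α in the formula above

    def forward(self, hidden_states):
        h_t = self.dropout(self.down_proj(hidden_states))
        # Learnable number of steps, clamped to [1, num_steps]
        k_steps = max(1, int(torch.sigmoid(self.k_logit).item() * self.num_steps))
        for _ in range(k_steps):
            h_t = self.layer_norm(h_t + self.residual_scale * self.update_net(h_t))
        return self.dropout(self.up_proj(h_t))
```
For a `(batch, seq, hidden)` input, e.g. `ASPPOperatorSketch()(torch.randn(2, 16, 576))`, the output keeps the same shape, so the branch can be fused element-wise with the attention output.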
### 2. HybridASPPAttentionLayer
```python
class HybridASPPAttentionLayer(LlamaDecoderLayer):
    """
    Extends LlamaDecoderLayer with a parallel ASPP branch.

    Architecture:
      1. Input LayerNorm
      2. Parallel branches:
         - ASPP operator for local structured reasoning
         - Standard LlamaAttention for global context
      3. Gated fusion: gate * ASPP + (1 - gate) * Attention
      4. Residual connection
      5. Feed-forward MLP
    """
```
**Pseudocode:**
```
function HybridLayer(hidden_states, attention_mask, ...):
    residual ← hidden_states
    hidden_states ← input_layernorm(hidden_states)

    # Parallel branches
    aspp_output ← aspp_operator(hidden_states)
    attn_output ← self_attention(hidden_states, attention_mask, ...)

    # Gated fusion
    fusion_input ← concat([aspp_output, attn_output])
    gate ← sigmoid(linear(dropout(fusion_input)))
    fused_output ← gate * aspp_output + (1 - gate) * attn_output

    # Residual connection
    hidden_states ← residual + fused_output

    # MLP block
    residual ← hidden_states
    hidden_states ← post_attention_layernorm(hidden_states)
    hidden_states ← mlp(hidden_states)
    hidden_states ← residual + hidden_states
    return hidden_states
```
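The gated fusion step in isolation, as a hedged sketch (computing a per-dimension gate from the concatenated branches is an assumption about the gate layout):
```python
import torch
import torch.nn as nn

class GatedFusionSketch(nn.Module):
    """Sketch of the gate that mixes the ASPP and attention branches."""

    def __init__(self, hidden_size=576, dropout=0.2):
        super().__init__()
        self.dropout = nn.Dropout(dropout)
        # Gate computed from the concatenated branch outputs
        self.gate_proj = nn.Linear(2 * hidden_size, hidden_size)

    def forward(self, aspp_output, attn_output):
        fusion_input = torch.cat([aspp_output, attn_output], dim=-1)
        gate = torch.sigmoid(self.gate_proj(self.dropout(fusion_input)))
        # gate near 1 favors the ASPP branch, gate near 0 favors attention
        return gate * aspp_output + (1 - gate) * attn_output
```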
### 3. AsteriskForCausalLM
```python
class AsteriskForCausalLM(LlamaForCausalLM):
    """
    Main model class with custom model_type "asterisk".

    Configuration:
      - hybrid_layer_indices: None (all 30 layers are hybrid)
      - aspp_hidden_dim: 256 (reduces overfitting)
      - aspp_num_steps: 8 (learnable; actual steps ≈ 6)
      - aspp_dropout: 0.2
    """
```
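Because the checkpoint uses the custom `model_type` `"asterisk"`, loading via `trust_remote_code=True` (see Quick Start below) is the simplest path. If you prefer explicit registration, a sketch follows; `AsteriskConfig` is an assumed class name not confirmed by this card:
```python
from transformers import AutoConfig, AutoModelForCausalLM

# Hypothetical import layout: both classes assumed to live in AsteriskForCausalLM.py
from AsteriskForCausalLM import AsteriskConfig, AsteriskForCausalLM

AutoConfig.register("asterisk", AsteriskConfig)
AutoModelForCausalLM.register(AsteriskConfig, AsteriskForCausalLM)

model = AutoModelForCausalLM.from_pretrained("path/to/Asterisk")
```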
## Quick Start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "path/to/Asterisk",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("path/to/Asterisk")

# Generate text
messages = [{"role": "user", "content": "Explain quantum computing in simple terms."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(
    inputs,
    max_new_tokens=256,
    temperature=0.7,
    do_sample=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Details
### Training Configuration
- **Dataset**: Capybara (conversational instruction-following)
- **Optimizer**: AdamW (lr=2e-5, weight_decay=0.01)
- **Batch Size**: 4 per device, gradient accumulation=4 (effective batch=16)
- **Epochs**: 2
- **Scheduler**: Cosine with warmup (100 steps)
- **Mixed Precision**: bfloat16
- **Gradient Checkpointing**: Enabled
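The same configuration expressed as a TRL script sketch (the dataset id `trl-lib/Capybara` and the exact trainer arguments are assumptions; the original training script is not part of this checkpoint):
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM
from trl import SFTConfig, SFTTrainer

# Assumed dataset id for the Capybara conversations
dataset = load_dataset("trl-lib/Capybara", split="train")

# Hybrid model to fine-tune (see "Model Creation from Base" below)
model = AutoModelForCausalLM.from_pretrained(
    "path/to/Asterisk", trust_remote_code=True, torch_dtype=torch.bfloat16
)

config = SFTConfig(
    output_dir="asterisk-sft",
    learning_rate=2e-5,
    weight_decay=0.01,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,  # effective batch size 16
    num_train_epochs=2,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    bf16=True,
    gradient_checkpointing=True,
)

trainer = SFTTrainer(model=model, args=config, train_dataset=dataset)
trainer.train()
```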
### ASPP Configuration
```python
aspp_hidden_dim = 256 # Internal dimension (vs 576 model hidden_size)
aspp_num_steps = 8 # Max evolution steps (learnable)
aspp_dropout = 0.2 # Regularization
hybrid_layer_indices = None # All 30 layers
```
## Model Creation from Base
```python
import torch

from AsteriskForCausalLM import AsteriskForCausalLM

# Create an Asterisk model from the SmolLM2 base
model, base_model = AsteriskForCausalLM.from_pretrained_base(
    "HuggingFaceTB/SmolLM2-135M-Instruct",
    hybrid_layer_indices=None,  # None = all layers become hybrid
    aspp_hidden_dim=256,        # Internal ASPP dimension
    aspp_num_steps=8,           # K-step evolution
    aspp_dropout=0.2,           # Dropout rate
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Base-model parameters are transferred; ASPP parameters remain randomly initialized
model.load_state_dict(base_model.state_dict(), strict=False)
```
## Theoretical Background
### Universality (Theorem 2.1)
ASPP can simulate any Message-Passing Neural Network (MPNN) function on finite graphs in D steps, where D is the graph diameter.
### Convergence (Theorem 2.2)
Exponential convergence to fixed points with rate c=0.76 under Lipschitz continuity.
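Stated in the usual Banach fixed-point form, the claimed bound reads as follows (a sketch of the standard contraction argument; the theorem's exact conditions are not reproduced here):
```latex
% K-step evolution converging to a fixed point h*, assuming the update map
% is a contraction with Lipschitz constant c < 1 (here c = 0.76):
\[
  \lVert h^{(t)} - h^{*} \rVert \;\le\; c^{\,t} \, \lVert h^{(0)} - h^{*} \rVert .
\]
```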
### Turing Completeness
Proven via simulation of a cyclic tag system: ASPP can compute any Turing-computable function given sufficient depth.
**Implementation Note**: This implementation simplifies the theoretical ASPP operator to point-wise evolution, reducing overfitting while retaining the benefits of iterative refinement.
## Files in Checkpoint
```
Asterisk/
βββ AsteriskForCausalLM.py # Model implementation (required for trust_remote_code)
βββ config.json # Model configuration with auto_map
βββ model.safetensors # Model weights
βββ tokenizer.json # Tokenizer
βββ generation_config.json # Generation settings
βββ README.md # This file
```
## Dependencies
```bash
pip install "torch>=2.0.0"
pip install "transformers>=4.40.0"
pip install "trl>=0.8.0"
pip install "datasets>=2.14.0"
pip install "accelerate>=0.25.0"
pip install bitsandbytes
```
## Citations
If you use this model, please cite:
```bibtex
@misc{asterisk2026,
title={Asterisk: Hybrid ASPP-Attention Architecture for Enhanced Language Modeling},
author={NoesisLab},
year={2026},
publisher={Hugging Face},
url={https://huggingface.co/NoesisLab/Asterisk}
}
```
```bibtex
@misc{vonwerra2022trl,
title={{TRL: Transformer Reinforcement Learning}},
author={Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year={2020},
journal={GitHub repository},
publisher={GitHub},
howpublished={\url{https://github.com/huggingface/trl}}
}
```
```bibtex
@misc{allal2024SmolLM2,
title={SmolLM2 - with great data, comes great performance},
author={Allal, Loubna Ben and Lozhkov, Anton and Penedo, Guilherme and Wolf, Thomas and von Werra, Leandro},
year={2024}
}
```
## License
This model inherits the Apache 2.0 license from SmolLM2-135M-Instruct.
## Framework Versions
- **TRL**: 0.27.0
- **Transformers**: 4.57.6
- **PyTorch**: 2.8.0+cu128
- **Datasets**: 4.5.0
- **Tokenizers**: 0.22.2
## Acknowledgments
Built on top of [SmolLM2-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct) by Hugging Face. Training framework powered by [TRL](https://github.com/huggingface/trl).