Update README.md
Browse files
README.md
CHANGED
|
@@ -1,3 +1,156 @@
|
|
| 1 |
-
---
|
| 2 |
-
license: mit
|
| 3 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
---
|
| 2 |
+
license: mit
|
| 3 |
+
datasets:
|
| 4 |
+
- vesteinn/babylm
|
| 5 |
+
---
|
| 6 |
+
# rootxhacker/arthemis-lm
|
| 7 |
+
|
| 8 |
+
Building capable language models shouldn't require massive corporate budgets. While the industry pushes toward increasingly large models, this project explores what's possible with neuromorphic architectures and limited resources.
|
| 9 |
+
|
| 10 |
+
I developed this 155.8M parameter Llama-SNN-LTC model with specific constraints:
|
| 11 |
+
|
| 12 |
+
- Budget limit: Under $50 using Google Colab Pro Plus
|
| 13 |
+
- From-scratch pretraining with fully open-source dataset
|
| 14 |
+
- No fine-tuning or synthetic data generation from existing LLMs
|
| 15 |
+
- Focus on architectural innovation over scale
|
| 16 |
+
|
| 17 |
+
## Model Details
|
| 18 |
+
|
| 19 |
+
This project is heavily inspired by **keeeeenw/MicroLlama**, which is an awesome open-source project aimed at pretraining a 300M Llama model on a budget.
|
| 20 |
+
|
| 21 |
+
This project incorporates **Spiking Neural Networks (SNNs)** and **Liquid Time Constants (LTCs)** into the Llama architecture, creating a neuromorphic language model. I spent under $50 on Google Colab Pro Plus and used the first 1M samples from the BabyLM challenge dataset, which contains approximately 100M tokens.
|
| 22 |
+
|
| 23 |
+
**Model Type**: Causal Language Model with Neuromorphic Enhancements
|
| 24 |
+
**Supported Languages**: English
|
| 25 |
+
**Number of Parameters**: 155.8M
|
| 26 |
+
**Context Length**: 1024 tokens
|
| 27 |
+
**Base Architecture**: Llama with SNN/LTC modifications
|
| 28 |
+
**Training Data**: BabyLM (vesteinn/babylm) - 1M samples (~100M tokens)
|
| 29 |
+
|
| 30 |
+
### Architecture Features
|
| 31 |
+
- **Spiking Neural Networks** in attention mechanisms for temporal processing
|
| 32 |
+
- **Liquid Time Constants** in feed-forward layers for adaptive dynamics
|
| 33 |
+
- **12-layer transformer backbone** with neuromorphic enhancements
|
| 34 |
+
- **RoPE positional encoding** for sequence understanding
|
| 35 |
+
- **Custom surrogate gradient training** for differentiable spike computation
|
| 36 |
+
|
| 37 |
+
Here are my major model configurations:
|
| 38 |
+
|
| 39 |
+
```
|
| 40 |
+
hidden_size = 768
|
| 41 |
+
intermediate_size = 2048
|
| 42 |
+
num_hidden_layers = 12
|
| 43 |
+
num_attention_heads = 12
|
| 44 |
+
num_key_value_heads = 12
|
| 45 |
+
max_position_embeddings = 1024
|
| 46 |
+
vocab_size = 50257
|
| 47 |
+
spiking_threshold = 1.0
|
| 48 |
+
ltc_hidden_size = 256
|
| 49 |
+
ltc_layers = 2
|
| 50 |
+
```
|
| 51 |
+
|
| 52 |
+
## Usage
|
| 53 |
+
|
| 54 |
+
### Install dependencies
|
| 55 |
+
```bash
|
| 56 |
+
pip install transformers torch numpy
|
| 57 |
+
```
|
| 58 |
+
|
| 59 |
+
## Evaluation
|
| 60 |
+
|
| 61 |
+
I performed evaluation using the standard lm-evaluation-harness setup. Following similar methodology to TinyLlama and MicroLlama, I used acc_norm for most datasets except for winogrande and boolq which used acc as the metrics.
|
| 62 |
+
|
| 63 |
+
### Results Comparison
|
| 64 |
+
|
| 65 |
+
| Model | Params | Budget | HellaSwag | OBQA | WinoGrande | ARC_e | ARC_c | BoolQ | Avg |
|
| 66 |
+
|-------|--------|--------|-----------|------|------------|-------|-------|-------|-----|
|
| 67 |
+
| **rootxhacker/arthemis-lm** | **155.8M** | **<$50** | **24.65** | **20.60** | **48.10** | **28.20** | **22.20** | **39.80** | **30.59** |
|
| 68 |
+
| google/bert-large-uncased | 336M | N/A | 24.53 | 26.20 | 49.80 | 25.08 | 25.68 | 40.86 | 32.03 |
|
| 69 |
+
|
| 70 |
+
## Observations
|
| 71 |
+
|
| 72 |
+
- **Budget Efficiency**: Our model achieves competitive performance with only ~$50 budget, demonstrating that meaningful language models can be built with limited resources.
|
| 73 |
+
- **Neuromorphic Advantages**: The SNN-LTC architecture shows particularly strong performance in WinoGrande (48.10%), suggesting enhanced reasoning capabilities from temporal dynamics.
|
| 74 |
+
- **Parameter Efficiency**: With 155.8M parameters, our model performs comparably to BERT-large-uncased (336M parameters) while being significantly smaller.
|
| 75 |
+
- **Room for Improvement**: More training data and compute would likely improve performance, but the current results validate the neuromorphic approach.
|
| 76 |
+
|
| 77 |
+
|
| 78 |
+
|
| 79 |
+
```
|
| 80 |
+
Architecture: Llama + Spiking Neural Networks + Liquid Time Constants
|
| 81 |
+
Hidden Size: 768
|
| 82 |
+
Intermediate Size: 2048
|
| 83 |
+
Attention Heads: 12
|
| 84 |
+
Layers: 12
|
| 85 |
+
Max Position Embeddings: 1024
|
| 86 |
+
Vocabulary Size: 50,257
|
| 87 |
+
Spiking Threshold: 1.0
|
| 88 |
+
LTC Hidden Size: 256
|
| 89 |
+
Training Precision: FP32
|
| 90 |
+
```
|
| 91 |
+
|
| 92 |
+
## Training Details
|
| 93 |
+
|
| 94 |
+
The model was pretrained from scratch using:
|
| 95 |
+
- **Dataset**: BabyLM (vesteinn/babylm) - First 1M samples (~100M tokens)
|
| 96 |
+
- **Hardware**: Google Colab Pro Plus (A100 GPU)
|
| 97 |
+
- **Training Steps**: 20,000 steps
|
| 98 |
+
- **Batch Size**: 8 with gradient accumulation
|
| 99 |
+
- **Learning Rate**: 3e-4 with linear warmup
|
| 100 |
+
- **Precision**: FP32 for stability with neuromorphic components
|
| 101 |
+
|
| 102 |
+
### Key Innovations
|
| 103 |
+
- **Custom SNN Implementation**: Leaky Integrate-and-Fire neurons with surrogate gradients
|
| 104 |
+
- **Liquid Time Constants**: Adaptive time dynamics in feed-forward layers
|
| 105 |
+
- **Budget-Conscious Training**: Optimized for maximum performance per dollar spent
|
| 106 |
+
- **Neuromorphic Language Modeling**: First known integration of SNNs and LTCs in causal LM
|
| 107 |
+
|
| 108 |
+
## Future Work
|
| 109 |
+
|
| 110 |
+
- Scale to larger datasets with increased compute budget
|
| 111 |
+
- Explore different spiking neuron models (e.g., Adaptive LIF, Izhikevich)
|
| 112 |
+
- Implement more sophisticated LTC architectures
|
| 113 |
+
- Fine-tune for specific downstream tasks
|
| 114 |
+
- Compare energy efficiency with standard transformers
|
| 115 |
+
|
| 116 |
+
## Model Sources
|
| 117 |
+
|
| 118 |
+
- **Repository**: [Coming Soon]
|
| 119 |
+
- **Paper**: [In Progress]
|
| 120 |
+
- **Hugging Face**: [rootxhacker/arthemis-lm](https://huggingface.co/rootxhacker/arthemis-lm)
|
| 121 |
+
|
| 122 |
+
## Uses
|
| 123 |
+
|
| 124 |
+
This model can be used for:
|
| 125 |
+
- Text generation and completion
|
| 126 |
+
- Few-shot learning tasks
|
| 127 |
+
- Research into neuromorphic language models
|
| 128 |
+
- Educational purposes for understanding SNN/LTC architectures
|
| 129 |
+
- Base model for fine-tuning on specific tasks
|
| 130 |
+
|
| 131 |
+
## Limitations
|
| 132 |
+
|
| 133 |
+
- **Training Data**: Limited to 100M tokens (much smaller than typical LLMs)
|
| 134 |
+
- **Context Length**: Maximum 1024 tokens
|
| 135 |
+
- **Domain**: Primarily trained on English text
|
| 136 |
+
- **Compute**: Training limited by budget constraints
|
| 137 |
+
- **Performance**: Lower than larger, more extensively trained models
|
| 138 |
+
|
| 139 |
+
## Acknowledgments
|
| 140 |
+
|
| 141 |
+
Special thanks to **keeeeenw** for the inspiration and open-source MicroLlama project, which demonstrated that impressive language models can be built on a budget. This work builds upon those principles while exploring neuromorphic computing approaches to language modeling.
|
| 142 |
+
|
| 143 |
+
## Citation
|
| 144 |
+
|
| 145 |
+
```bibtex
|
| 146 |
+
@misc{arthemis-lm-2024,
|
| 147 |
+
title={Arthemis-LM: A Neuromorphic Language Model with Spiking Neural Networks and Liquid Time Constants},
|
| 148 |
+
author={rootxhacker},
|
| 149 |
+
year={2024},
|
| 150 |
+
howpublished={\url{https://huggingface.co/rootxhacker/arthemis-lm}}
|
| 151 |
+
}
|
| 152 |
+
```
|
| 153 |
+
|
| 154 |
+
## License
|
| 155 |
+
|
| 156 |
+
Apache License 2.0
|