---
license: mit
tags:
- pytorch
- safetensors
- threshold-logic
- neuromorphic
- modular-arithmetic
---
# threshold-mod7
A multi-layer threshold network that computes the Hamming weight mod 7 of 8-bit inputs, using a thermometer encoding in the first hidden layer.
## Circuit
```
 x₀ x₁ x₂ x₃ x₄ x₅ x₆ x₇
  │  │  │  │  │  │  │  │
  └──┴──┴──┼──┴──┴──┴──┘
           ▼
   ┌───────────────┐
   │  Thermometer  │  Layer 1: 9 neurons
   └───────────────┘
           │
           ▼
   ┌───────────────┐
   │     MOD-7     │  Layer 2: 6 neurons
   │   Detection   │  Pattern (1,1,1,1,1,1,-6)
   └───────────────┘
           │
           ▼
   ┌───────────────┐
   │   Classify    │  Output: 7 classes
   └───────────────┘
           │
           ▼
   {0, 1, 2, 3, 4, 5, 6}
```
## Algebraic Insight
The weight pattern `(1, 1, 1, 1, 1, 1, -6)`, repeated across the thermometer code, makes the running sum cycle mod 7:
```
HW=0: sum=0 β 0 mod 7
...
HW=6: sum=6 β 6 mod 7
HW=7: sum=0 β 0 mod 7 (reset: 1+1+1+1+1+1-6=0)
HW=8: sum=1 β 1 mod 7
```
For 8-bit inputs, only one reset occurs (at HW=7).
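The cycle can be checked directly. A minimal sketch (not the shipped model code), assuming bit `i` of the thermometer code is on when HW ≥ i and each 7th weight is the −6 reset:

```python
def pattern_sum(hw):
    """Sum of the cyclic pattern (1,1,1,1,1,1,-6) over the first
    `hw` thermometer bits: every full group of seven sums to zero."""
    weights = [-6 if i % 7 == 0 else 1 for i in range(1, hw + 1)]
    return sum(weights)

# For 8-bit inputs HW ranges over 0..8, so only one reset fires (HW=7).
for hw in range(9):
    assert pattern_sum(hw) == hw % 7
```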
## Architecture
| Layer | Neurons | Function |
|-------|---------|----------|
| Input | 8 | Binary bits |
| Hidden 1 | 9 | Thermometer encoding |
| Hidden 2 | 6 | MOD-7 detection |
| Output | 7 | One-hot classification |
**Total: 22 neurons, 190 parameters**
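The totals in the table follow from counting each fully connected layer's weight matrix plus bias vector:

```python
# (inputs, neurons) for each trained layer: 8→9, 9→6, 6→7
layers = [(8, 9), (9, 6), (6, 7)]
neurons = sum(n for _, n in layers)            # hidden + output neurons
params = sum(i * n + n for i, n in layers)     # weights + biases
print(neurons, params)                         # 22 190
```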
## Usage
```python
from safetensors.torch import load_file
import torch

w = load_file('model.safetensors')

def forward(x):
    # x: (..., 8) float-convertible tensor of 0/1 bits
    x = x.float()
    x = (x @ w['layer1.weight'].T + w['layer1.bias'] >= 0).float()  # thermometer
    x = (x @ w['layer2.weight'].T + w['layer2.bias'] >= 0).float()  # mod-7 detection
    out = x @ w['output.weight'].T + w['output.bias']               # 7 class scores
    return out.argmax(dim=-1)                                       # class in 0..6
```
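Because the target function is simply `popcount(x) % 7`, the model can be validated exhaustively over all 256 inputs. A sketch, assuming least-significant-bit-first input ordering (the actual ordering is fixed by the shipped weights):

```python
import torch

def reference(n):
    """Ground-truth Hamming weight mod 7 for an 8-bit integer."""
    return bin(n).count('1') % 7

# All 256 inputs as bit vectors (LSB-first ordering assumed here)
bits = torch.tensor([[(n >> i) & 1 for i in range(8)] for n in range(256)])
expected = torch.tensor([reference(n) for n in range(256)])
# assert (forward(bits) == expected).all()  # requires model.safetensors
```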
## Files
```
threshold-mod7/
βββ model.safetensors
βββ model.py
βββ config.json
βββ README.md
```
## License
MIT