---
license: mit
tags:
- pytorch
- safetensors
- threshold-logic
- neuromorphic
- modular-arithmetic
---
# threshold-mod8
A single-layer threshold circuit that computes the Hamming weight of an 8-bit input, modulo 8, directly from the bits.
## Circuit
```
x₀ x₁ x₂ x₃ x₄ x₅ x₆ x₇
 │  │  │  │  │  │  │  │
 │  │  │  │  │  │  │  │
w: 1  1  1  1  1  1  1 -7
 └──┴──┴──┼──┴──┴──┴──┘
          ▼
     ┌─────────┐
     │  b: 0   │
     └─────────┘
          │
          ▼
      HW mod 8
```
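A concrete forward pass through the circuit (a minimal sketch in plain Python; the input bits are chosen arbitrarily for illustration):

```python
weights = [1, 1, 1, 1, 1, 1, 1, -7]  # per-input weights from the diagram
bias = 0

bits = [1, 1, 0, 1, 0, 0, 0, 0]  # Hamming weight 3, last bit clear
s = sum(w * b for w, b in zip(weights, bits)) + bias
print(s % 8)  # prints 3
```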
## Algebraic Insight
The last input, x₇, gets weight 1 − 8 = −7. Because −7 ≡ 1 (mod 8), the weighted sum is always congruent to the Hamming weight modulo 8:
- Inputs x₀–x₆: weight +1
- Input x₇: weight −7
```
HW=0: sum=0 → 0 mod 8
...
HW=7: sum=7 → 7 mod 8
HW=8: sum=0 → 0 mod 8 (reset: 1+1+1+1+1+1+1-7=0)
```
The only non-trivial case is the all-ones input (HW=8), where the −7 weight cancels the other seven ones and the sum resets to 0. When x₇ is set but HW < 8, the raw sum is negative (x₇ alone gives −7), yet reducing it mod 8 still yields the Hamming weight, again because −7 ≡ 1 (mod 8).
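The congruence can be checked exhaustively over all 256 possible inputs; a standalone sketch using the weights above:

```python
from itertools import product

weights = [1, 1, 1, 1, 1, 1, 1, -7]

# For every 8-bit input, the weighted sum is congruent to the Hamming
# weight mod 8, because the final weight -7 ≡ 1 (mod 8).
for bits in product([0, 1], repeat=8):
    s = sum(w * b for w, b in zip(weights, bits))
    assert s % 8 == sum(bits) % 8
```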
## Parameters
| Parameter | Value |
|---|---|
| Weights | [1, 1, 1, 1, 1, 1, 1, -7] |
| Bias | 0 |
| Total | 9 parameters |
## Usage
```python
from safetensors.torch import load_file
import torch

w = load_file('model.safetensors')

def mod8(bits):
    """Return the Hamming weight of `bits` (an iterable of 0/1) mod 8."""
    inputs = torch.tensor([float(b) for b in bits])
    # The raw sum is negative when the last bit is set (weight -7),
    # so reduce mod 8; the result is always the Hamming weight mod 8.
    return int((inputs * w['weight']).sum() + w['bias']) % 8
```
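The same arithmetic can be exercised without the checkpoint; a self-contained reference sketch, assuming the stored weights are `[1]*7 + [-7]` with zero bias (`mod8_ref` is a hypothetical helper, not part of this repo):

```python
weights = [1] * 7 + [-7]  # assumed contents of the weight tensor

def mod8_ref(bits):
    # Python's % always returns a non-negative result, so negative raw
    # sums (last bit set) still reduce to the correct residue.
    s = sum(w * int(b) for w, b in zip(weights, bits))
    return s % 8

print(mod8_ref("10110011"))  # five ones -> prints 5
print(mod8_ref("11111111"))  # all ones resets to 0
```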
## Files
```
threshold-mod8/
β”œβ”€β”€ model.safetensors
β”œβ”€β”€ model.py
β”œβ”€β”€ config.json
└── README.md
```
## License
MIT