---
license: mit
tags:
- pytorch
- safetensors
- threshold-logic
- neuromorphic
- modular-arithmetic
---

# threshold-mod12
Trivial case: computes the Hamming weight (HW) mod 12 of an 8-bit input. Since the maximum HW is 8 < 12, this is just HW.
## Circuit
```
x₁ x₂ x₃ x₄ x₅ x₆ x₇ x₈
│  │  │  │  │  │  │  │
w: 1  1  1  1  1  1  1  1
└──┴──┴──┴──┼──┴──┴──┴──┘
            ▼
       ┌─────────┐
       │  b: 0   │
       └─────────┘
            │
            ▼
    HW (= HW mod 12)
```
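Reading the diagram as a single linear threshold unit with unit weights and zero bias (as listed in the parameter table below), the output is just the unweighted sum of the inputs:

$$
y = \sum_{i=1}^{8} w_i x_i + b = \sum_{i=1}^{8} x_i = \mathrm{HW}(x)
$$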
## Why Trivial?
For mod m where m exceeds the number of inputs, no reset ever occurs (see the exhaustive check below):
- 8 inputs → max HW = 8
- 8 mod 12 = 8 (no wraparound)
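As a quick sanity check (a minimal sketch, not part of the shipped files), enumerating all 256 possible 8-bit inputs confirms that the Hamming weight never reaches 12, so the modulo is a no-op:

```python
# Exhaustive check: HW(x) mod 12 == HW(x) for every 8-bit input.
for n in range(256):
    hw = bin(n).count('1')
    assert hw % 12 == hw  # max HW is 8, so no wraparound ever occurs
print('verified: HW mod 12 == HW for all 8-bit inputs')
```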
## Parameters

| Parameter | Value |
|---|---|
| Weights | [1, 1, 1, 1, 1, 1, 1, 1] |
| Bias | 0 |
| Total | 9 parameters |
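For reference, a checkpoint with these parameters could be produced with a sketch like the one below. The tensor names `weight` and `bias` match the keys used in the usage code; the exact dtypes and shapes of the shipped file are assumptions.

```python
import torch
from safetensors.torch import save_file

# 8 unit weights plus a zero bias -> 9 parameters total.
params = {
    'weight': torch.ones(8),   # assumed float32, matching the table above
    'bias': torch.zeros(1),    # bias term b = 0
}
save_file(params, 'model.safetensors')
```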
## Usage
```python
from safetensors.torch import load_file
import torch

w = load_file('model.safetensors')

def mod12(bits):
    # Unit weights and zero bias: the weighted sum is just the Hamming weight,
    # which already equals HW mod 12 for 8-bit inputs.
    inputs = torch.tensor([float(b) for b in bits])
    return int((inputs * w['weight']).sum() + w['bias'])
```
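For example, with `model.safetensors` in the working directory, the input can be passed as an 8-character bit string:

```python
print(mod12('10110100'))  # Hamming weight is 4, so the output is 4
```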
## Files

```
threshold-mod12/
├── model.safetensors
├── model.py
├── config.json
└── README.md
```
## License
MIT