---
license: mit
tags:
- pytorch
- safetensors
- threshold-logic
- neuromorphic
- error-correction
---
# threshold-hamming1511-encoder

Hamming(15,11) encoder. Adds 4 parity bits to 11 data bits for single-error correction.
## Function

`encode(d1..d11) -> [p1, p2, d1, p4, d2, d3, d4, p8, d5, d6, d7, d8, d9, d10, d11]`
## Bit Positions
| Position | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Bit | p1 | p2 | d1 | p4 | d2 | d3 | d4 | p8 | d5 | d6 | d7 | d8 | d9 | d10 | d11 |
## Parity Equations
Each parity bit covers positions where its position number (in binary) has a 1 in that bit:
- p1 (bit 0): positions 1,3,5,7,9,11,13,15 -> XOR(d1,d2,d4,d5,d7,d9,d11)
- p2 (bit 1): positions 2,3,6,7,10,11,14,15 -> XOR(d1,d3,d4,d6,d7,d10,d11)
- p4 (bit 2): positions 4,5,6,7,12,13,14,15 -> XOR(d2,d3,d4,d8,d9,d10,d11)
- p8 (bit 3): positions 8,9,10,11,12,13,14,15 -> XOR(d5,d6,d7,d8,d9,d10,d11)
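The four parity equations can be checked against a plain-Python sketch of the encoder. This is an illustrative reimplementation, not the shipped threshold-logic network; the function name `hamming_15_11_encode` is made up here.

```python
def hamming_15_11_encode(data):
    """Encode 11 data bits into a 15-bit Hamming codeword (positions 1..15)."""
    assert len(data) == 11 and all(b in (0, 1) for b in data)
    code = [0] * 16  # 1-indexed; index 0 unused
    # Data bits occupy every position that is not a power of two
    data_positions = [3, 5, 6, 7, 9, 10, 11, 12, 13, 14, 15]
    for pos, bit in zip(data_positions, data):
        code[pos] = bit
    # Each parity bit p covers every position whose binary expansion has bit p set
    for p in (1, 2, 4, 8):
        code[p] = sum(code[i] for i in range(1, 16) if i & p) % 2
    return code[1:]
```

For example, setting only d1 (position 3, binary 0011) makes exactly p1 and p2 flip, matching the first two equations above.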
## Architecture
Each parity bit requires a 7-way XOR, implemented as a tree of 2-way XORs:
XOR7(a,b,c,d,e,f,g) = XOR(XOR4(a,b,c,d), XOR3(e,f,g))
XOR4(a,b,c,d) = XOR(XOR(a,b), XOR(c,d))
XOR3(e,f,g) = XOR(XOR(e,f), g)
Each XOR2 requires 3 neurons (OR, NAND, AND).
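The OR/NAND/AND decomposition of a 2-way XOR, and the XOR tree above, can be sketched with simple threshold neurons. The weights and thresholds below are illustrative choices, not the values stored in `model.safetensors`.

```python
def neuron(inputs, weights, threshold):
    # Fires (outputs 1) when the weighted sum reaches the threshold
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def xor2(a, b):
    or_out = neuron([a, b], [1, 1], 1)       # OR: fires if a + b >= 1
    nand_out = neuron([a, b], [-1, -1], -1)  # NAND: fires if a + b <= 1
    return neuron([or_out, nand_out], [1, 1], 2)  # AND of the two

def xor3(a, b, c):
    return xor2(xor2(a, b), c)

def xor4(a, b, c, d):
    return xor2(xor2(a, b), xor2(c, d))

def xor7(a, b, c, d, e, f, g):
    # Tree layout from the text: XOR(XOR4(a,b,c,d), XOR3(e,f,g))
    return xor2(xor4(a, b, c, d), xor3(e, f, g))
```

Each `xor2` call uses 3 neurons, so a 7-way XOR costs 6 × 3 = 18 neurons arranged in a tree whose depth sets the layer count.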
## Parameters

| Property | Value |
|---|---|
| Inputs | 11 |
| Outputs | 15 |
| Neurons | 86 |
| Layers | 6 |
| Parameters | 591 |
| Magnitude | 272 |
## Error Correction
To decode, the receiver recomputes each parity check over the bits it covers (including the parity bit itself). The four resulting syndrome bits, read as a binary number, directly give the position of any single-bit error (0 = no error).
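Receiver-side correction can be sketched compactly: XORing together the position numbers of all set bits yields the syndrome, which for a valid codeword is 0 and for a single flipped bit equals that bit's position. This decoder is illustrative only; it is not part of this model, which only encodes.

```python
def hamming_15_11_correct(received):
    """Correct up to one flipped bit in a 15-bit word; returns (word, syndrome)."""
    assert len(received) == 15
    syndrome = 0
    for pos in range(1, 16):
        if received[pos - 1]:
            syndrome ^= pos  # XOR of positions of all set bits
    if syndrome:  # nonzero syndrome points directly at the flipped bit
        corrected = list(received)
        corrected[syndrome - 1] ^= 1
        return corrected, syndrome
    return list(received), 0
```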
## Comparison to Hamming(7,4)
| Code | Data | Parity | Total | Efficiency |
|---|---|---|---|---|
| Hamming(7,4) | 4 | 3 | 7 | 57% |
| Hamming(15,11) | 11 | 4 | 15 | 73% |
Larger Hamming codes are more efficient but require more complex circuits.
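The efficiency trend follows from the general Hamming construction: r parity bits protect a block of 2^r − 1 total bits, so the code rate approaches 1 as r grows. A small helper (hypothetical name) makes this concrete:

```python
def hamming_rate(r):
    """Code rate of the Hamming(2^r - 1, 2^r - 1 - r) code."""
    n = 2**r - 1   # total bits per block
    k = n - r      # data bits per block
    return k / n

# r=3 -> 4/7, r=4 -> 11/15, r=5 -> 26/31, ...
```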
## Usage

```python
from safetensors.torch import load_file

w = load_file('model.safetensors')
# See model.py for the reference implementation
```
## License

MIT