Upload folder using huggingface_hub

- README.md +166 -0
- config.json +9 -0
- model.py +117 -0
- model.safetensors +3 -0
README.md
ADDED
@@ -0,0 +1,166 @@
---
license: mit
tags:
- pytorch
- safetensors
- threshold-logic
- neuromorphic
- error-correction
- hamming-code
---

# threshold-hamming74decoder

A Hamming(7,4) decoder with single-error correction, implemented as a threshold-logic circuit. It takes a 7-bit codeword, possibly corrupted by a single flipped bit, and outputs the corrected 4 data bits.

## Circuit Overview

```
 c1 c2 c3 c4 c5 c6 c7
  │  │  │  │  │  │  │
  ├──┴──┴──┴──┴──┴──┴──────────────┐
  ▼  ▼  ▼  ▼  ▼  ▼  ▼              │
┌───────────────────────┐          │
│   Syndrome Computer   │          │
│   s1 = c1⊕c3⊕c5⊕c7    │          │
│   s2 = c2⊕c3⊕c6⊕c7    │          │
│   s3 = c4⊕c5⊕c6⊕c7    │          │
└───────────────────────┘          │
           │ s1,s2,s3              │
           ▼                       │
┌───────────────────────┐          │
│     Error Locator     │          │
│   flip3 = s1∧s2∧¬s3   │  ┌───────┘
│   flip5 = s1∧¬s2∧s3   │  │ c3,c5,c6,c7
│   flip6 = ¬s1∧s2∧s3   │  │
│   flip7 = s1∧s2∧s3    │  │
└───────────────────────┘  │
           │               │
           ▼               ▼
┌───────────────────────────────┐
│           Corrector           │
│        d1 = c3 ⊕ flip3        │
│        d2 = c5 ⊕ flip5        │
│        d3 = c6 ⊕ flip6        │
│        d4 = c7 ⊕ flip7        │
└───────────────────────────────┘
                │
                ▼
          d1 d2 d3 d4
```

## Decoding Algorithm

**Step 1: Compute the syndrome.**

The syndrome is a 3-bit value that gives the error position directly:

| s3 | s2 | s1 | Decimal | Meaning |
|----|----|----|---------|---------|
| 0 | 0 | 0 | 0 | No error |
| 0 | 0 | 1 | 1 | Error in c1 (parity) |
| 0 | 1 | 0 | 2 | Error in c2 (parity) |
| 0 | 1 | 1 | 3 | Error in c3 (d1) |
| 1 | 0 | 0 | 4 | Error in c4 (parity) |
| 1 | 0 | 1 | 5 | Error in c5 (d2) |
| 1 | 1 | 0 | 6 | Error in c6 (d3) |
| 1 | 1 | 1 | 7 | Error in c7 (d4) |

**Step 2: Locate and correct.**

Only the data positions (3, 5, 6, 7) need flip signals: an error in a parity bit (position 1, 2, or 4) does not affect the extracted data, so those syndrome values are simply ignored.

**Step 3: Extract the data.**

The data bits sit at positions 3, 5, 6, and 7 of the codeword; each is XORed with its flip signal.

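The three steps can be sketched in plain bitwise Python, independent of the threshold weights. This is an illustrative reference for checking outputs, not the shipped circuit (the name `hamming74_decode_ref` is ours):

```python
def hamming74_decode_ref(c):
    """Reference Hamming(7,4) decoder: c = [c1..c7], returns [d1..d4]."""
    c1, c2, c3, c4, c5, c6, c7 = c
    # Step 1: syndrome bits
    s1 = c1 ^ c3 ^ c5 ^ c7
    s2 = c2 ^ c3 ^ c6 ^ c7
    s3 = c4 ^ c5 ^ c6 ^ c7
    pos = s1 + 2 * s2 + 4 * s3          # error position (1-based), 0 = no error
    # Step 2: flip signals for the data positions only
    flips = {3: 0, 5: 0, 6: 0, 7: 0}
    if pos in flips:
        flips[pos] = 1                  # parity-bit errors (1, 2, 4) are ignored
    # Step 3: extract corrected data bits
    return [c3 ^ flips[3], c5 ^ flips[5], c6 ^ flips[6], c7 ^ flips[7]]

# Error at position 3 (c3 flipped from 1 to 0)
print(hamming74_decode_ref([0, 1, 0, 0, 0, 1, 1]))  # [1, 0, 1, 1]
```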
## 4-Way XOR Implementation

Each syndrome bit requires a 4-input XOR, built as a balanced tree of 2-input XORs:

```
XOR(a,b,c,d) = XOR(XOR(a,b), XOR(c,d))

   a   b       c   d
   │   │       │   │
   └─┬─┘       └─┬─┘
     ▼           ▼
  ┌─────┐     ┌─────┐
  │ XOR │     │ XOR │    Layers 1-2
  └─────┘     └─────┘
     │           │
     └─────┬─────┘
           ▼
        ┌─────┐
        │ XOR │          Layers 3-4
        └─────┘
           │
           ▼
     XOR(a,b,c,d)
```

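Each 2-input XOR node is itself built from three threshold gates, as XOR(a,b) = AND(OR(a,b), NAND(a,b)) — the same decomposition model.py evaluates. A minimal sketch with hand-picked weights (illustrative only; the trained weights live in model.safetensors):

```python
def gate(inputs, weights, bias):
    """Threshold gate: fires (1) when the weighted sum plus bias is non-negative."""
    return int(sum(x * w for x, w in zip(inputs, weights)) + bias >= 0)

def xor2(a, b):
    or_out = gate([a, b], [1, 1], -0.5)             # OR:   a + b - 0.5 >= 0
    nand_out = gate([a, b], [-1, -1], 1.5)          # NAND: 1.5 - a - b >= 0
    return gate([or_out, nand_out], [1, 1], -1.5)   # AND of the two

def xor4(a, b, c, d):
    return xor2(xor2(a, b), xor2(c, d))             # balanced tree

print([xor2(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
print(xor4(1, 0, 1, 1))                              # 1
```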
## Architecture

| Stage | Component | Neurons | Layers |
|-------|-----------|---------|--------|
| Syndrome | 3 × 4-way XOR | 18 | 4 |
| Error Locator | 4 detectors | 4 | 1 |
| Corrector | 4 × 2-way XOR | 12 | 2 |
| **Total** | | **34** | **6** |

Note: Syndrome computation and the final XOR stages run in parallel where possible.

## Error Correction Examples

```
Original:  1011    → encode     → 0110011
Corrupted: 0110011 → flip bit 5 → 0110111
Syndrome:  s1=1, s2=0, s3=1     → position 5
Corrected: d1=1, d2=0, d3=1, d4=1 ✓

Original:  0000    → encode     → 0000000
Corrupted: 0000000 → flip bit 7 → 0000001
Syndrome:  s1=1, s2=1, s3=1     → position 7
Corrected: d1=0, d2=0, d3=0, d4=0 ✓
```

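These two cases generalize: every 4-bit data word with any single-bit corruption decodes correctly. A quick exhaustive check using the same bitwise reference logic (not the threshold weights; the `encode` layout matches model.py's):

```python
def encode(d1, d2, d3, d4):
    # Codeword layout [p1, p2, d1, p3, d2, d3, d4], matching model.py
    return [d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d1, d2 ^ d3 ^ d4, d2, d3, d4]

def decode(c):
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3      # 1-based error position, 0 = none
    out = list(c)
    if pos:
        out[pos - 1] ^= 1           # correct the flipped bit
    return [out[2], out[4], out[5], out[6]]

failures = 0
for d in range(16):
    bits = [(d >> i) & 1 for i in range(4)]
    cw = encode(*bits)
    for pos in range(-1, 7):        # -1 = no corruption
        corrupted = list(cw)
        if pos >= 0:
            corrupted[pos] ^= 1
        if decode(corrupted) != bits:
            failures += 1
print(failures)  # 0
```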
## Limitations

- Corrects **single-bit** errors only
- Double-bit errors produce a non-zero syndrome but are **miscorrected**: the decoder treats them as a single-bit error at the wrong position
- Cannot distinguish 2-bit errors from 1-bit errors

For stronger protection, extend Hamming(7,4) with an overall parity bit (SECDED: single-error correction, double-error detection).

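A minimal sketch of how the SECDED check would classify a received word, assuming an 8th overall-parity bit is appended to the codeword (a hypothetical layout for illustration; this model does not include it):

```python
def secded_classify(c8):
    """Classify an 8-bit SECDED word: 7 Hamming bits plus overall parity c8[7]."""
    s1 = c8[0] ^ c8[2] ^ c8[4] ^ c8[6]
    s2 = c8[1] ^ c8[2] ^ c8[5] ^ c8[6]
    s3 = c8[3] ^ c8[4] ^ c8[5] ^ c8[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    parity_ok = sum(c8) % 2 == 0        # overall parity covers all 8 bits
    if syndrome == 0 and parity_ok:
        return 'no error'
    if not parity_ok:
        # One flipped bit; includes an error in the parity bit itself
        return 'single error (correctable)'
    # Parity balances out but the syndrome is non-zero: two bits flipped
    return 'double error (detected, uncorrectable)'

print(secded_classify([1, 1, 0, 0, 0, 0, 0, 0]))  # double error (detected, uncorrectable)
```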
## Usage

```python
from model import load_model, hamming74_decode

w = load_model('model.safetensors')

# Received corrupted codeword (error at position 3)
received = [0, 1, 0, 0, 0, 1, 1]  # should be [0,1,1,0,0,1,1]
data = hamming74_decode(received, w)
# Returns [1, 0, 1, 1] (corrected)
```

## Files

```
threshold-hamming74decoder/
├── model.safetensors
├── model.py
├── config.json
└── README.md
```

## License

MIT
config.json
ADDED
@@ -0,0 +1,9 @@
{
  "name": "threshold-hamming74decoder",
  "description": "Hamming(7,4) decoder with single-error correction as threshold circuit",
  "inputs": 7,
  "outputs": 4,
  "neurons": 46,
  "layers": 6,
  "parameters": 178
}
model.py
ADDED
@@ -0,0 +1,117 @@
import torch
from safetensors.torch import load_file


def load_model(path='model.safetensors'):
    return load_file(path)


def threshold_xor(inp, w, key):
    """XOR via three threshold gates: AND(OR(inputs), NAND(inputs)).

    Each gate fires (outputs 1) when its weighted sum plus bias is >= 0.
    """
    or_out = float((inp * w[f'{key}.layer1.or.weight']).sum() + w[f'{key}.layer1.or.bias'] >= 0)
    nand_out = float((inp * w[f'{key}.layer1.nand.weight']).sum() + w[f'{key}.layer1.nand.bias'] >= 0)
    l1 = torch.tensor([or_out, nand_out])
    return int((l1 * w[f'{key}.layer2.weight']).sum() + w[f'{key}.layer2.bias'] >= 0)


def xor2(a, b, w, prefix):
    """2-input XOR using threshold gates."""
    return threshold_xor(torch.tensor([float(a), float(b)]), w, prefix)


def xor4(c, indices, w, prefix):
    """4-input XOR: XOR(a,b,c,d) = XOR(XOR(a,b), XOR(c,d))."""
    inp = torch.tensor([float(c[i]) for i in range(7)])
    i0, i1, i2, i3 = indices
    xor_ab = threshold_xor(inp, w, f'{prefix}.xor_{i0}{i1}')  # first pair
    xor_cd = threshold_xor(inp, w, f'{prefix}.xor_{i2}{i3}')  # second pair
    final_inp = torch.tensor([float(xor_ab), float(xor_cd)])
    return threshold_xor(final_inp, w, f'{prefix}.xor_final')


def hamming74_decode(c, w):
    """Hamming(7,4) decoder with single-error correction.

    c: list of 7 bits [c1,c2,c3,c4,c5,c6,c7]
    Returns: list of 4 corrected data bits [d1,d2,d3,d4]
    """
    # Compute syndrome bits
    s1 = xor4(c, [0, 2, 4, 6], w, 's1')  # c1 XOR c3 XOR c5 XOR c7
    s2 = xor4(c, [1, 2, 5, 6], w, 's2')  # c2 XOR c3 XOR c6 XOR c7
    s3 = xor4(c, [3, 4, 5, 6], w, 's3')  # c4 XOR c5 XOR c6 XOR c7

    syndrome = torch.tensor([float(s1), float(s2), float(s3)])

    # Flip signals: each detector fires on exactly one syndrome pattern
    flip3 = int((syndrome * w['flip3.weight']).sum() + w['flip3.bias'] >= 0)  # 011 -> d1
    flip5 = int((syndrome * w['flip5.weight']).sum() + w['flip5.bias'] >= 0)  # 101 -> d2
    flip6 = int((syndrome * w['flip6.weight']).sum() + w['flip6.bias'] >= 0)  # 110 -> d3
    flip7 = int((syndrome * w['flip7.weight']).sum() + w['flip7.bias'] >= 0)  # 111 -> d4

    # Correct data bits: di = ci XOR flip_i
    d1 = xor2(c[2], flip3, w, 'd1.xor')  # c3
    d2 = xor2(c[4], flip5, w, 'd2.xor')  # c5
    d3 = xor2(c[5], flip6, w, 'd3.xor')  # c6
    d4 = xor2(c[6], flip7, w, 'd4.xor')  # c7

    return [d1, d2, d3, d4]


if __name__ == '__main__':
    w = load_model()
    print('Hamming(7,4) Decoder with Single-Error Correction')

    # Reference encoder
    def encode(d1, d2, d3, d4):
        p1 = d1 ^ d2 ^ d4
        p2 = d1 ^ d3 ^ d4
        p3 = d2 ^ d3 ^ d4
        return [p1, p2, d1, p3, d2, d3, d4]

    errors = 0

    # Test all 16 data words with no errors
    print('\nNo errors:')
    for d in range(16):
        d1, d2, d3, d4 = (d >> 0) & 1, (d >> 1) & 1, (d >> 2) & 1, (d >> 3) & 1
        codeword = encode(d1, d2, d3, d4)
        decoded = hamming74_decode(codeword, w)
        expected = [d1, d2, d3, d4]
        status = 'OK' if decoded == expected else 'FAIL'
        if decoded != expected:
            errors += 1
        print(f'  {d1}{d2}{d3}{d4} -> {decoded} (expected {expected}) {status}')

    # Test single-bit errors at every position
    print('\nSingle-bit errors:')
    test_data = [0b1011, 0b0000, 0b1111, 0b0101]
    for d in test_data:
        d1, d2, d3, d4 = (d >> 0) & 1, (d >> 1) & 1, (d >> 2) & 1, (d >> 3) & 1
        codeword = encode(d1, d2, d3, d4)

        # Introduce an error at each position
        for pos in range(7):
            corrupted = codeword.copy()
            corrupted[pos] ^= 1
            decoded = hamming74_decode(corrupted, w)
            expected = [d1, d2, d3, d4]
            status = 'OK' if decoded == expected else 'FAIL'
            if decoded != expected:
                errors += 1
            print(f'  data={d1}{d2}{d3}{d4} err@{pos+1}: {decoded} (expected {expected}) {status}')

    print(f'\nTotal errors: {errors}')
    if errors == 0:
        print('All tests passed!')
model.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c9c3e6496312013794f02707587af87f84e7fc412ab15218045a547386ebf0a9
size 7556