---
license: mit
tags:
- pytorch
- safetensors
- threshold-logic
- neuromorphic
- decoder
---

# threshold-3to8decoder

A 3-to-8 binary decoder that converts a 3-bit binary input into a one-hot 8-bit output, implemented as a single-layer threshold circuit.

## Circuit

```
  a₂       a₁       a₀
   │        │        │
   ├────────┼────────┤   (each input fans out to all 8 neurons)
   ▼        ▼        ▼
┌────────┬────────┬────────┬─────┐
│ y₀     │ y₁     │ y₂     │ ... │
│-1,-1,-1│-1,-1,+1│-1,+1,-1│     │
│b: 0    │b: -1   │b: -1   │     │
└────────┴────────┴────────┴─────┘
    │        │        │        │
    ▼        ▼        ▼        ▼
    y₀       y₁       y₂  ...  y₇
```

## One-Hot Encoding

Each input value activates exactly one output:

| Input | a₂a₁a₀ | Output y₀y₁y₂y₃y₄y₅y₆y₇ |
|-------|--------|--------------------------|
| 0     | 000    | 10000000                 |
| 1     | 001    | 01000000                 |
| 2     | 010    | 00100000                 |
| 3     | 011    | 00010000                 |
| 4     | 100    | 00001000                 |
| 5     | 101    | 00000100                 |
| 6     | 110    | 00000010                 |
| 7     | 111    | 00000001                 |
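
The table can be reproduced in a few lines of plain Python, as a quick sanity check of the encoding:

```python
# Rebuild the one-hot encoding table row by row.
rows = []
for i in range(8):
    bits = format(i, '03b')  # a2 a1 a0
    onehot = ''.join('1' if j == i else '0' for j in range(8))
    rows.append(f"| {i} | {bits} | {onehot} |")
print('\n'.join(rows))
```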

## Mechanism

Each output yᵢ acts as a "pattern matcher" for input = i:

- **Weight +1** for bit positions that should be 1
- **Weight -1** for bit positions that should be 0
- **Bias** = -(number of 1 bits in i)

Example for y₅ (binary 101):

```
weights: [+1, -1, +1]   (match 1, reject 0, match 1)
bias:    -2             (need 2 matches to fire)
```

When input = 101: sum = 1·1 + (-1)·0 + 1·1 = 2, so sum + bias = 0 ≥ 0 and y₅ fires.
When input = 111: sum = 1·1 + (-1)·1 + 1·1 = 1, so sum + bias = -1 < 0 and y₅ stays off.
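
The arithmetic above can be checked directly; a minimal sketch using the stated y₅ weights and bias:

```python
# y5 is a threshold unit: it fires iff weighted sum + bias >= 0.
weights = [+1, -1, +1]
bias = -2

def fires(inp):
    s = sum(w * x for w, x in zip(weights, inp))
    return s + bias >= 0

print(fires([1, 0, 1]))  # input 101: sum = 2, 2 - 2 = 0  -> True
print(fires([1, 1, 1]))  # input 111: sum = 1, 1 - 2 = -1 -> False
```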

## The Matching Principle

The circuit computes "how well does the input match pattern i?"

- Perfect match: score = (number of 1s in i)
- One bit wrong: score = (number of 1s in i) - 1

Since the bias is -(number of 1s in i), only a perfect match reaches the threshold.
## Weight Patterns
|
|
|
|
|
|
| Output | Binary | Weights | Bias |
|
|
|
|--------|--------|---------|------|
|
|
|
| yβ | 000 | [-1, -1, -1] | 0 |
|
|
|
| yβ | 001 | [-1, -1, +1] | -1 |
|
|
|
| yβ | 010 | [-1, +1, -1] | -1 |
|
|
|
| yβ | 011 | [-1, +1, +1] | -2 |
|
|
|
| yβ | 100 | [+1, -1, -1] | -1 |
|
|
|
| yβ
| 101 | [+1, -1, +1] | -2 |
|
|
|
| yβ | 110 | [+1, +1, -1] | -2 |
|
|
|
| yβ | 111 | [+1, +1, +1] | -3 |
|
|
|
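
The whole table follows one rule: weight +1 where the corresponding bit of i is 1, otherwise -1, with bias equal to minus the popcount of i. A short sketch deriving the parameters from that rule and checking that every input decodes one-hot:

```python
# Derive weights and bias for neuron i from its bit pattern.
def unit(i):
    bits = [(i >> k) & 1 for k in (2, 1, 0)]      # a2, a1, a0 of pattern i
    weights = [+1 if b else -1 for b in bits]
    bias = -sum(bits)
    return weights, bias

# Exhaustively verify: input `value` activates exactly output `value`.
for value in range(8):
    inp = [(value >> k) & 1 for k in (2, 1, 0)]
    out = [int(sum(w * x for w, x in zip(unit(i)[0], inp)) + unit(i)[1] >= 0)
           for i in range(8)]
    assert out == [int(i == value) for i in range(8)]
print("all 8 inputs decode one-hot")
```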

## Single-Layer Elegance

Traditional logic decodes with combinations of AND, OR, and NOT gates; threshold logic can decode in a single layer, because each output neuron directly computes "does the input match me?"

## Architecture

**8 neurons, 32 parameters (8 × 3 weights + 8 biases), 1 layer**

All neurons run in parallel; there are no dependencies between them.
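
Because the layer has no internal dependencies, all eight neurons can be evaluated as one matrix-vector product. A sketch with the weight rows and biases built from the pattern rule above (not loaded from `model.safetensors`):

```python
import torch

# All 8 neurons at once: y = step(W @ a + b).
# Row i of W is +1 where bit of i is set, else -1; b[i] = -popcount(i).
W = torch.tensor([[+1.0 if (i >> k) & 1 else -1.0 for k in (2, 1, 0)]
                  for i in range(8)])
b = torch.tensor([-float(bin(i).count('1')) for i in range(8)])

a = torch.tensor([1.0, 0.0, 1.0])    # input 101 = 5
y = (W @ a + b >= 0).int()
print(y.tolist())  # [0, 0, 0, 0, 0, 1, 0, 0]
```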

## Usage

```python
from safetensors.torch import load_file
import torch

w = load_file('model.safetensors')

def decode(a2, a1, a0):
    inp = torch.tensor([float(a2), float(a1), float(a0)])
    return [int((inp * w[f'y{i}.weight']).sum() + w[f'y{i}.bias'] >= 0)
            for i in range(8)]

# Input 5 (binary 101) -> output 5 is hot
outputs = decode(1, 0, 1)
print(outputs)  # [0, 0, 0, 0, 0, 1, 0, 0]
```

## Files

```
threshold-3to8decoder/
├── model.safetensors
├── model.py
├── config.json
└── README.md
```

## License

MIT