Threshold Logic Circuits
Boolean gates, voting functions, modular arithmetic, and adders as threshold networks.
Weighted threshold function demonstrating non-uniform input weights.
y = 1 iff 4·x3 + 3·x2 + 2·x1 + 1·x0 >= 6
Each input has a different "voting power": x3 contributes 4, x2 contributes 3, x1 contributes 2, and x0 contributes 1.
The maximum weighted sum is 10 and the threshold is 6, so firing requires a weighted majority.
| x3 | x2 | x1 | x0 | w_sum | y |
|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 | 0 |
| 0 | 1 | 1 | 1 | 6 | 1 |
| 1 | 0 | 0 | 0 | 4 | 0 |
| 1 | 0 | 1 | 0 | 6 | 1 |
| 1 | 1 | 0 | 0 | 7 | 1 |
| 1 | 1 | 1 | 1 | 10 | 1 |
Note: x3 alone (weight 4) isn't enough, but x3 + x1 (weight 6) passes.
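The truth table above can be reproduced with a plain-Python sketch of the weighted threshold function (illustrative only, not the released model code):

```python
# Weighted threshold neuron: y = 1 iff 4*x3 + 3*x2 + 2*x1 + 1*x0 >= 6.
WEIGHTS = (4, 3, 2, 1)   # voting power of x3, x2, x1, x0
THETA = 6                # firing threshold

def weighted_threshold(x3, x2, x1, x0):
    w_sum = sum(w * x for w, x in zip(WEIGHTS, (x3, x2, x1, x0)))
    return int(w_sum >= THETA)

# Rows from the truth table above:
assert weighted_threshold(0, 1, 1, 1) == 1   # 3+2+1 = 6 >= 6
assert weighted_threshold(1, 0, 0, 0) == 0   # 4 < 6
assert weighted_threshold(1, 1, 0, 0) == 1   # 4+3 = 7 >= 6
```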
Single threshold neuron:
x3 ──(×4)──┐
x2 ──(×3)──┼──► Σ ──► (≥6?) ──► y
x1 ──(×2)──┤
x0 ──(×1)──┘
| Property | Value |
|---|---|
| Inputs | 4 |
| Outputs | 1 |
| Neurons | 1 |
| Layers | 1 |
| Parameters | 5 |
| Magnitude | 16 |
This is the fundamental building block of threshold logic. Any linearly separable Boolean function can be computed by a single weighted threshold neuron. Non-linearly-separable functions (like XOR) require multiple layers.
The general form: y = 1 iff Σ(wi·xi) >= θ
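To illustrate why XOR needs a second layer, here is a minimal sketch (the weights are hypothetical, chosen by hand): XOR can be written as "OR but not AND", where OR and AND are each a single threshold neuron, combined by one more threshold neuron on top.

```python
def tneuron(weights, theta, inputs):
    """Generic threshold neuron: 1 iff sum(w_i * x_i) >= theta."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= theta)

def xor(a, b):
    # Layer 1: two threshold neurons over the raw inputs.
    h_or  = tneuron((1, 1), 1, (a, b))   # OR:  a + b >= 1
    h_and = tneuron((1, 1), 2, (a, b))   # AND: a + b >= 2
    # Layer 2: fire for OR but not AND -> h_or - h_and >= 1.
    return tneuron((1, -1), 1, (h_or, h_and))

assert [xor(a, b) for a in (0, 1) for b in (0, 1)] == [0, 1, 1, 0]
```

No single choice of weights and threshold separates {01, 10} from {00, 11} with one hyperplane, which is why the second layer is unavoidable here.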
```python
from safetensors.torch import load_file
import torch

# Load the trained parameters: 4 input weights plus a bias.
w = load_file('model.safetensors')

def weighted(x3, x2, x1, x0):
    inp = torch.tensor([float(x3), float(x2), float(x1), float(x0)])
    # The bias folds the threshold into the sum, so comparing against 0
    # here is equivalent to testing w_sum >= 6.
    return int((inp @ w['y.weight'].T + w['y.bias'] >= 0).item())

# weighted(1, 0, 1, 0) = 1  # 4+2 = 6 >= 6
# weighted(1, 0, 0, 1) = 0  # 4+1 = 5 < 6
```
MIT