---
license: mit
tags:
- pytorch
- safetensors
- threshold-logic
- neuromorphic
- weighted-voting
---
# threshold-weighted

Weighted threshold function demonstrating non-uniform input weights.
## Function

```
y = 1 iff 4·x3 + 3·x2 + 2·x1 + 1·x0 >= 6
```
Each input has a different "voting power":
- x3: weight 4 (most influential)
- x2: weight 3
- x1: weight 2
- x0: weight 1 (least influential)
Maximum weighted sum = 10, threshold = 6 (weighted majority).
## Truth Table (selected rows)

| x3 | x2 | x1 | x0 | w_sum | y |
|----|----|----|----|-------|---|
| 0 | 0 | 0 | 0 | 0 | 0 |
| 0 | 1 | 1 | 1 | 6 | 1 |
| 1 | 0 | 0 | 0 | 4 | 0 |
| 1 | 0 | 1 | 0 | 6 | 1 |
| 1 | 1 | 0 | 0 | 7 | 1 |
| 1 | 1 | 1 | 1 | 10 | 1 |
Note: x3 alone (weight 4) isn't enough, but x3 + x1 (weight 6) passes.
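The full 16-row truth table can be checked with a short pure-Python sketch of the weighted sum (the `weighted_threshold` helper name is ours, not part of the shipped model):

```python
from itertools import product

# y = 1 iff 4*x3 + 3*x2 + 2*x1 + 1*x0 >= 6
WEIGHTS = (4, 3, 2, 1)
THRESHOLD = 6

def weighted_threshold(x3, x2, x1, x0):
    """Return 1 iff the weighted sum of the inputs reaches the threshold."""
    w_sum = sum(w * x for w, x in zip(WEIGHTS, (x3, x2, x1, x0)))
    return int(w_sum >= THRESHOLD)

# Enumerate all 16 input combinations; exactly 7 of them are positive.
positives = 0
for bits in product((0, 1), repeat=4):
    y = weighted_threshold(*bits)
    positives += y
    print(*bits, "->", y)
```

Note in particular that `weighted_threshold(1, 0, 0, 0)` is 0 while `weighted_threshold(1, 0, 1, 0)` is 1, matching the selected rows above.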
## Architecture

Single threshold neuron:

```
x3 ──(×4)──┐
x2 ──(×3)──┼──► Σ ──► (≥6?) ──► y
x1 ──(×2)──┤
x0 ──(×1)──┘
```
## Parameters

| Property   | Value |
|------------|-------|
| Inputs     | 4 |
| Outputs    | 1 |
| Neurons    | 1 |
| Layers     | 1 |
| Parameters | 5 |
| Magnitude  | 16 |
## Theory

This is the fundamental building block of threshold logic. Any linearly separable Boolean function can be computed by a single weighted threshold neuron. Non-linearly-separable functions (like XOR) require multiple layers.

The general form: y = 1 iff Σᵢ(wᵢ·xᵢ) ≥ θ
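As an illustration of both points (not a proof technique from this card), the sketch below implements the general form and then brute-forces small integer weights to show that no single neuron in that range realizes 2-input XOR:

```python
from itertools import product

def neuron(weights, theta, xs):
    """General threshold neuron: y = 1 iff sum(w_i * x_i) >= theta."""
    return int(sum(w * x for w, x in zip(weights, xs)) >= theta)

# XOR truth table: not linearly separable, so no (w0, w1, theta) should work.
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

found = False
for w0, w1, theta in product(range(-3, 4), repeat=3):
    if all(neuron((w0, w1), theta, xs) == y for xs, y in XOR.items()):
        found = True

print("single neuron computes XOR:", found)
```

The search range here is small by design; since XOR is not linearly separable, no weights of any magnitude would succeed.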
## Applications
- Weighted voting systems
- Credit scoring
- Risk assessment
- Neural network layers
## Usage

```python
from safetensors.torch import load_file
import torch

w = load_file('model.safetensors')

def weighted(x3, x2, x1, x0):
    inp = torch.tensor([float(x3), float(x2), float(x1), float(x0)])
    return int((inp @ w['y.weight'].T + w['y.bias'] >= 0).item())

# weighted(1, 0, 1, 0) = 1  # 4+2=6 >= 6
# weighted(1, 0, 0, 1) = 0  # 4+1=5 < 6
```
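If the `model.safetensors` checkpoint is not at hand, an equivalent neuron can be built directly in PyTorch. The weight and bias values below are the natural ones implied by the function (weights 4, 3, 2, 1, bias -6); the shipped checkpoint may differ, e.g. by a positive scaling factor:

```python
import torch

# Assumed values: weights 4,3,2,1 and bias -6 reproduce "w_sum >= 6".
w = {
    'y.weight': torch.tensor([[4.0, 3.0, 2.0, 1.0]]),
    'y.bias': torch.tensor([-6.0]),
}

def weighted(x3, x2, x1, x0):
    inp = torch.tensor([float(x3), float(x2), float(x1), float(x0)])
    # Fire iff the weighted sum plus bias is non-negative, i.e. w_sum >= 6.
    return int((inp @ w['y.weight'].T + w['y.bias'] >= 0).item())

print(weighted(1, 0, 1, 0))  # 4+2=6 >= 6 -> 1
print(weighted(1, 0, 0, 1))  # 4+1=5 < 6 -> 0
```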
## License

MIT