# Threshold Logic Circuits

Collection · 269 items · Boolean gates, voting functions, modular arithmetic, and adders as threshold networks.
## threshold-xor-mag7

Minimum-magnitude threshold circuit for XOR, where the magnitude of a circuit is the sum of the absolute values of all its weights and biases. Magnitude 7 is proven optimal by exhaustive Coq computation.
| Magnitude | Valid XOR Circuits |
|---|---|
| 0-6 | 0 (proven impossible) |
| 7 | 6 (all included here) |
All solutions use weights from {-1, 0, 1} only.
| Solution | Neuron 1 | Neuron 2 | Output | Structure |
|---|---|---|---|---|
| 1 | [-1,1] b=0 | [1,-1] b=0 | [-1,-1] b=1 | Symmetric opposites |
| 2 | [1,-1] b=0 | [-1,1] b=0 | [-1,-1] b=1 | Symmetric opposites |
| 3 | [-1,1] b=0 | [-1,1] b=-1 | [-1,1] b=0 | Same weights |
| 4 | [1,-1] b=0 | [1,-1] b=-1 | [-1,1] b=0 | Same weights |
| 5 | [-1,1] b=-1 | [-1,1] b=0 | [1,-1] b=0 | Same weights |
| 6 | [1,-1] b=-1 | [1,-1] b=0 | [1,-1] b=0 | Same weights |
Solutions 1-2 form one structural family (symmetric opposites in layer 1). Solutions 3-6 form another family (identical weights in layer 1, differing biases).
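As a sanity check, the six solutions in the table can be evaluated directly against the XOR truth table. This standalone sketch hardcodes the weights from the table, so no safetensors files are needed:

```python
from itertools import product

# Each solution: ((w1, b1), (w2, b2), (w_out, b_out)), copied from the table above.
SOLUTIONS = [
    (([-1, 1], 0), ([1, -1], 0), ([-1, -1], 1)),   # 1: symmetric opposites
    (([1, -1], 0), ([-1, 1], 0), ([-1, -1], 1)),   # 2: symmetric opposites
    (([-1, 1], 0), ([-1, 1], -1), ([-1, 1], 0)),   # 3: same weights
    (([1, -1], 0), ([1, -1], -1), ([-1, 1], 0)),   # 4: same weights
    (([-1, 1], -1), ([-1, 1], 0), ([1, -1], 0)),   # 5: same weights
    (([1, -1], -1), ([1, -1], 0), ([1, -1], 0)),   # 6: same weights
]

def step(w, b, x):
    # Threshold activation: 1 if w . x + b >= 0, else 0.
    return int(sum(wi * xi for wi, xi in zip(w, x)) + b >= 0)

def evaluate(sol, x1, x2):
    (w1, b1), (w2, b2), (wo, bo) = sol
    hidden = [step(w1, b1, (x1, x2)), step(w2, b2, (x1, x2))]
    return step(wo, bo, hidden)

for i, sol in enumerate(SOLUTIONS, 1):
    table = [evaluate(sol, a, b) for a, b in product((0, 1), repeat=2)]
    assert table == [0, 1, 1, 0], f"solution {i} failed: {table}"
print("all 6 solutions compute XOR")
```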
The textbook XOR circuit uses OR + NAND + AND with magnitude 10:

```
Original (magnitude 10):
  OR:   [1, 1],  b=-1  → magnitude 3
  NAND: [-1,-1], b=1   → magnitude 3
  AND:  [1, 1],  b=-2  → magnitude 4  (costly -2 bias)

Optimized (magnitude 7):
  n1:  [-1, 1], b=0  → magnitude 2  (zero bias)
  n2:  [1, -1], b=0  → magnitude 2  (zero bias)
  out: [-1,-1], b=1  → magnitude 3
```

A 30% magnitude reduction (10 → 7).
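The 10-versus-7 arithmetic is just the per-neuron sums of absolute weights and biases; a minimal sketch:

```python
def magnitude(neurons):
    # Circuit magnitude: sum of |weight| and |bias| over all neurons.
    return sum(sum(abs(w) for w in ws) + abs(b) for ws, b in neurons)

original  = [([1, 1], -1), ([-1, -1], 1), ([1, 1], -2)]   # OR, NAND, AND
optimized = [([-1, 1], 0), ([1, -1], 0), ([-1, -1], 1)]   # n1, n2, out

assert magnitude(original) == 10   # 3 + 3 + 4
assert magnitude(optimized) == 7   # 2 + 2 + 3
print(f"reduction: {1 - magnitude(optimized) / magnitude(original):.0%}")  # reduction: 30%
```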
`optimality_proof.v` contains a machine-verified proof that no 2-2-1 threshold circuit of magnitude ≤ 6 computes XOR, and that magnitude 7 suffices. The proof is by exhaustive computation via `native_compute`/`vm_compute`:

```coq
Theorem mag_6 : any_xor (configs_at_mag 6) = false.
Proof. native_compute. reflexivity. Qed.

Theorem mag_7 : any_xor (configs_at_mag 7) = true.
Proof. native_compute. reflexivity. Qed.
```
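The Coq development enumerates all configurations at each magnitude. A much weaker but fast Python cross-check restricts the nine parameters to {-1, 0, 1} (which, per the table above, covers every magnitude-7 solution) and reproduces the same boundary within that grid:

```python
from itertools import product

def computes_xor(p):
    # p = (w11, w12, b1, w21, w22, b2, v1, v2, bo) for a 2-2-1 threshold circuit.
    w11, w12, b1, w21, w22, b2, v1, v2, bo = p
    for x1, x2 in product((0, 1), repeat=2):
        h1 = int(w11 * x1 + w12 * x2 + b1 >= 0)
        h2 = int(w21 * x1 + w22 * x2 + b2 >= 0)
        out = int(v1 * h1 + v2 * h2 + bo >= 0)
        if out != (x1 ^ x2):
            return False
    return True

# Smallest magnitude among all 3^9 = 19683 {-1,0,1} configurations that compute XOR.
best = min(sum(map(abs, p)) for p in product((-1, 0, 1), repeat=9) if computes_xor(p))
print(best)  # 7
```

This only rules out small-weight circuits; the Coq proof covers all weights at each magnitude.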
```
threshold-xor-mag7/
├── solution1.safetensors   # Symmetric family
├── solution2.safetensors   # Symmetric family
├── solution3.safetensors   # Same-weights family
├── solution4.safetensors   # Same-weights family
├── solution5.safetensors   # Same-weights family
├── solution6.safetensors   # Same-weights family
├── optimality_proof.v      # Coq proof of optimality
├── model.py                # Python implementation
├── config.json             # Metadata
└── README.md
```
```python
from safetensors.torch import load_file
import torch

w = load_file('solution1.safetensors')

def xor_gate(x1, x2):
    # Layer 1: two threshold neurons (fires iff weighted sum + bias >= 0).
    inp = torch.tensor([float(x1), float(x2)])
    h1 = int((inp * w['layer1.neuron1.weight']).sum() + w['layer1.neuron1.bias'] >= 0)
    h2 = int((inp * w['layer1.neuron2.weight']).sum() + w['layer1.neuron2.bias'] >= 0)
    # Layer 2: single output threshold neuron over the hidden activations.
    hid = torch.tensor([float(h1), float(h2)])
    return int((hid * w['layer2.weight']).sum() + w['layer2.bias'] >= 0)

# Test against the XOR truth table
assert xor_gate(0, 0) == 0
assert xor_gate(0, 1) == 1
assert xor_gate(1, 0) == 1
assert xor_gate(1, 1) == 0
```
To check the optimality proof:

```shell
coqc optimality_proof.v
```
License: MIT