threshold-xor-mag7

Minimum-magnitude threshold circuit for XOR. Magnitude here means the sum of the absolute values of all weights and biases in the 2-2-1 threshold network; magnitude 7 is proven optimal via exhaustive Coq computation.

Key Results

Magnitude   Valid XOR Circuits
0-6         0 (proven impossible)
7           6 (all included here)

The 6 Solutions

All solutions use weights from {-1, 0, 1} only.

Solution   Neuron 1        Neuron 2        Output          Structure
1          [-1, 1] b=0     [ 1,-1] b=0     [-1,-1] b=1     Symmetric opposites
2          [ 1,-1] b=0     [-1, 1] b=0     [-1,-1] b=1     Symmetric opposites
3          [-1, 1] b=0     [-1, 1] b=-1    [-1, 1] b=0     Same weights
4          [ 1,-1] b=0     [ 1,-1] b=-1    [-1, 1] b=0     Same weights
5          [-1, 1] b=-1    [-1, 1] b=0     [ 1,-1] b=0     Same weights
6          [ 1,-1] b=-1    [ 1,-1] b=0     [ 1,-1] b=0     Same weights

Solutions 1-2 form one structural family (symmetric opposites in layer 1). Solutions 3-6 form another family (identical weights in layer 1, differing biases).
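All six circuits can be checked without the safetensors files. The sketch below hardcodes the parameters from the table and uses the same step convention as the rest of this card (a neuron fires when its pre-activation is >= 0):

```python
from itertools import product

# (neuron 1, neuron 2, output), each as ((w_x1, w_x2), bias), from the table above
SOLUTIONS = [
    (((-1, 1), 0),  ((1, -1), 0),  ((-1, -1), 1)),  # 1: symmetric opposites
    (((1, -1), 0),  ((-1, 1), 0),  ((-1, -1), 1)),  # 2: symmetric opposites
    (((-1, 1), 0),  ((-1, 1), -1), ((-1, 1), 0)),   # 3: same weights
    (((1, -1), 0),  ((1, -1), -1), ((-1, 1), 0)),   # 4: same weights
    (((-1, 1), -1), ((-1, 1), 0),  ((1, -1), 0)),   # 5: same weights
    (((1, -1), -1), ((1, -1), 0),  ((1, -1), 0)),   # 6: same weights
]

def step(z):
    # Threshold activation: fire when the pre-activation is >= 0
    return 1 if z >= 0 else 0

def run(circuit, x1, x2):
    (w1, b1), (w2, b2), (wo, bo) = circuit
    h1 = step(w1[0] * x1 + w1[1] * x2 + b1)
    h2 = step(w2[0] * x1 + w2[1] * x2 + b2)
    return step(wo[0] * h1 + wo[1] * h2 + bo)

def magnitude(circuit):
    # Sum of |weight| and |bias| over all three neurons
    return sum(abs(v) for (w, b) in circuit for v in (*w, b))

for c in SOLUTIONS:
    assert magnitude(c) == 7
    assert all(run(c, a, b) == (a ^ b) for a, b in product((0, 1), repeat=2))
print('all 6 solutions compute XOR at magnitude 7')
```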

Improvement Over Original

The original XOR circuit composes OR, NAND, and AND gates, for a total magnitude of 10:

Original (magnitude 10):
  OR:   [1, 1], b=-1   → magnitude 3
  NAND: [-1,-1], b=1   → magnitude 3
  AND:  [1, 1], b=-2   → magnitude 4  ← costly -2 bias

Optimized (magnitude 7):
  n1:   [-1, 1], b=0   → magnitude 2  ← zero bias
  n2:   [1, -1], b=0   → magnitude 2  ← zero bias
  out:  [-1,-1], b=1   → magnitude 3

30% magnitude reduction (10 → 7).
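The totals above are just sums of absolute values; a short check, with each neuron written as a (w1, w2, bias) triple:

```python
def magnitude(neurons):
    # Sum of |weight| and |bias| over every (w1, w2, bias) neuron
    return sum(abs(v) for neuron in neurons for v in neuron)

original  = [(1, 1, -1), (-1, -1, 1), (1, 1, -2)]  # OR, NAND, AND
optimized = [(-1, 1, 0), (1, -1, 0), (-1, -1, 1)]  # n1, n2, out
print(magnitude(original), magnitude(optimized))   # 10 7
```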

Coq Proof

optimality_proof.v contains a machine-verified proof that:

  1. No configuration with magnitude ≤ 6 computes XOR
  2. At least one configuration with magnitude 7 computes XOR

The proof uses exhaustive computation via native_compute/vm_compute.

Theorem mag_6 : any_xor (configs_at_mag 6) = false.
Proof. native_compute. reflexivity. Qed.

Theorem mag_7 : any_xor (configs_at_mag 7) = true.
Proof. native_compute. reflexivity. Qed.
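The Coq development is the authoritative argument. As an informal cross-check (added here for illustration, not part of the repository), a Python brute force over all 3^9 = 19,683 configurations with every weight and bias drawn from {-1, 0, 1} — a slice of the search space that includes all six circuits listed above — reproduces both theorems on that slice:

```python
from itertools import product

def step(z):
    # Threshold activation: fire when the pre-activation is >= 0
    return 1 if z >= 0 else 0

def computes_xor(p):
    # p = (w11, w12, b1, w21, w22, b2, v1, v2, c): a 2-2-1 threshold circuit
    w11, w12, b1, w21, w22, b2, v1, v2, c = p
    for x1, x2 in product((0, 1), repeat=2):
        h1 = step(w11 * x1 + w12 * x2 + b1)
        h2 = step(w21 * x1 + w22 * x2 + b2)
        if step(v1 * h1 + v2 * h2 + c) != (x1 ^ x2):
            return False
    return True

# Group every XOR-computing configuration by its magnitude
by_mag = {}
for p in product((-1, 0, 1), repeat=9):
    if computes_xor(p):
        by_mag.setdefault(sum(map(abs, p)), []).append(p)

print(min(by_mag))     # 7: nothing survives at magnitude <= 6
print(len(by_mag[7]))  # 6: exactly the six solutions above
```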

Benefits of Lower Magnitude

  • Fewer bits to represent weights
  • Less energy for multiply-accumulate
  • Sparser: two zero biases, versus none in the original
  • Simpler hardware implementation

Files

threshold-xor-mag7/
├── solution1.safetensors   # Symmetric family
├── solution2.safetensors   # Symmetric family
├── solution3.safetensors   # Same-weights family
├── solution4.safetensors   # Same-weights family
├── solution5.safetensors   # Same-weights family
├── solution6.safetensors   # Same-weights family
├── optimality_proof.v      # Coq proof of optimality
├── model.py                # Python implementation
├── config.json             # Metadata
└── README.md

Usage

from safetensors.torch import load_file
import torch

w = load_file('solution1.safetensors')

def xor_gate(x1, x2):
    inp = torch.tensor([float(x1), float(x2)])

    # Hidden layer: two threshold neurons, firing when pre-activation >= 0
    h1 = int((inp * w['layer1.neuron1.weight']).sum() + w['layer1.neuron1.bias'] >= 0)
    h2 = int((inp * w['layer1.neuron2.weight']).sum() + w['layer1.neuron2.bias'] >= 0)

    # Output layer: one threshold neuron over the two hidden activations
    hid = torch.tensor([float(h1), float(h2)])
    return int((hid * w['layer2.weight']).sum() + w['layer2.bias'] >= 0)

# Test
assert xor_gate(0, 0) == 0
assert xor_gate(0, 1) == 1
assert xor_gate(1, 0) == 1
assert xor_gate(1, 1) == 0

Compile the Proof

coqc optimality_proof.v

License

MIT
