---
license: mit
tags:
- formal-verification
- coq
- threshold-logic
- neuromorphic
- multi-layer
---
# tiny-XOR-verified
A formally verified XOR gate: a two-layer threshold network that computes exclusive OR with 100% accuracy on all four inputs.
## Architecture
| Component | Value |
|-----------|-------|
| Inputs | 2 |
| Outputs | 1 |
| Neurons | 3 (2 hidden, 1 output) |
| Layers | 2 |
| Parameters | 9 |
| **Layer 1, Neuron 1 (OR)** | |
| Weights | [1, 1] |
| Bias | -1 |
| **Layer 1, Neuron 2 (NAND)** | |
| Weights | [-1, -1] |
| Bias | 1 |
| **Layer 2 (AND)** | |
| Weights | [1, 1] |
| Bias | -2 |
| Activation | Heaviside step (all layers) |
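The table above can be turned into a dependency-free sketch, with the integer weights and biases hardcoded from the table rather than loaded from the safetensors file:

```python
# Sketch of the architecture table above; weights and biases are the
# integer values listed in the table, hardcoded for illustration.

def heaviside(s):
    # Step activation: fires (outputs 1) when the weighted sum is non-negative
    return 1 if s >= 0 else 0

def xor_net(x, y):
    or_out = heaviside(1 * x + 1 * y - 1)        # Layer 1, Neuron 1 (OR)
    nand_out = heaviside(-1 * x + -1 * y + 1)    # Layer 1, Neuron 2 (NAND)
    return heaviside(1 * or_out + 1 * nand_out - 2)  # Layer 2 (AND)

for x in (0, 1):
    for y in (0, 1):
        print(x, y, xor_net(x, y))
```

Running the loop prints the XOR truth table, matching the Circuit Operation table below.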
## Key Properties
- 100% accuracy (4/4 inputs correct)
- Coq-proven correctness
- Minimal 2-layer architecture (XOR is not linearly separable)
- Integer weights
- Commutative: XOR(x,y) = XOR(y,x)
- Associative: XOR(x,XOR(y,z)) = XOR(XOR(x,y),z)
- Self-inverse: XOR(x,XOR(x,y)) = y
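The three algebraic identities listed above can be checked exhaustively over {0, 1}; this sketch uses Python's bitwise `^` operator as a stand-in for the circuit:

```python
# Exhaustive check of the commutative, associative, and self-inverse
# identities over {0, 1}; `xor` here is Python's ^ on 0/1, standing in
# for the verified circuit.

def xor(a, b):
    return a ^ b

bits = (0, 1)
for x in bits:
    for y in bits:
        assert xor(x, y) == xor(y, x)              # commutative
        assert xor(x, xor(x, y)) == y              # self-inverse
        for z in bits:
            assert xor(x, xor(y, z)) == xor(xor(x, y), z)  # associative
print("all identities hold")
```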
## Why 2 Layers?
XOR is the classic example of a function that is **not linearly separable**: no single threshold neuron can compute it. This network uses the minimal architecture:
**Layer 1**: Compute OR and NAND in parallel
**Layer 2**: Compute AND of results
This implements: XOR(x,y) = AND(OR(x,y), NAND(x,y))
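The decomposition can be verified over all four boolean inputs with plain Python operators:

```python
# Checks the decomposition XOR(x, y) = AND(OR(x, y), NAND(x, y))
# exhaustively over {0, 1} using plain Python bitwise operators.

for x in (0, 1):
    for y in (0, 1):
        decomposed = (x | y) & (1 - (x & y))  # AND of OR and NAND
        assert decomposed == x ^ y
print("decomposition matches XOR on all four inputs")
```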
## Usage
```python
import torch
from safetensors.torch import load_file

weights = load_file('xor.safetensors')

def xor_gate(x, y):
    inputs = torch.tensor([float(x), float(y)])
    # Layer 1: OR and NAND computed in parallel
    or_sum = (inputs * weights['layer1.neuron1.weight']).sum() + weights['layer1.neuron1.bias']
    or_out = int(or_sum >= 0)
    nand_sum = (inputs * weights['layer1.neuron2.weight']).sum() + weights['layer1.neuron2.bias']
    nand_out = int(nand_sum >= 0)
    # Layer 2: AND of the two hidden outputs
    layer1_outs = torch.tensor([float(or_out), float(nand_out)])
    and_sum = (layer1_outs * weights['layer2.weight']).sum() + weights['layer2.bias']
    return int(and_sum >= 0)

# Test all four inputs
print(xor_gate(0, 0))  # 0
print(xor_gate(0, 1))  # 1
print(xor_gate(1, 0))  # 1
print(xor_gate(1, 1))  # 0
```
## Verification
**Coq Theorem**:
```coq
Theorem xor_correct : forall x y, xor_circuit x y = xorb x y.
```
The proof is axiom-free and establishes the following properties:
- Commutativity
- Associativity
- Identity (XOR with false)
- Complement (XOR with true gives NOT)
- Nilpotence (XOR with itself gives false)
- Involution (XOR is self-inverse)
Full proof: [coq-circuits/Boolean/XOR.v](https://github.com/CharlesCNorton/coq-circuits/blob/main/coq/Boolean/XOR.v)
## Circuit Operation
| Input (x,y) | OR | NAND | AND(OR,NAND) | XOR |
|-------------|-----|------|--------------|-----|
| (0,0) | 0 | 1 | 0 | 0 |
| (0,1) | 1 | 1 | 1 | 1 |
| (1,0) | 1 | 1 | 1 | 1 |
| (1,1) | 1 | 0 | 0 | 0 |
XOR outputs true when inputs differ.
## Citation
```bibtex
@software{tiny_xor_prover_2025,
title={tiny-XOR-verified: Formally Verified XOR Gate},
author={Norton, Charles},
url={https://huggingface.co/phanerozoic/tiny-XOR-verified},
year={2025}
}
```