---
license: mit
tags:
- formal-verification
- coq
- threshold-logic
- neuromorphic
- multi-layer
---

# tiny-XNOR-verified

A formally verified XNOR gate: a minimal two-layer threshold network that computes Boolean equivalence and is proven correct in Coq on all four input combinations.

## Architecture

| Component | Value |
|-----------|-------|
| Inputs | 2 |
| Outputs | 1 |
| Neurons | 3 (2 hidden, 1 output) |
| Layers | 2 |
| Parameters | 9 |
| **Layer 1, Neuron 1 (NOR)** | |
| Weights | [-1, -1] |
| Bias | 0 |
| **Layer 1, Neuron 2 (AND)** | |
| Weights | [1, 1] |
| Bias | -2 |
| **Layer 2 (OR)** | |
| Weights | [1, 1] |
| Bias | -1 |
| Activation | Heaviside step (all layers) |
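Because the weights are small integers, the forward pass can be written out directly in plain Python. This is a sketch using the values from the table above; the function names are illustrative, not part of the released files:

```python
def step(z):
    # Heaviside step activation: fire when the weighted sum is >= 0
    return 1 if z >= 0 else 0

def xnor_forward(x, y):
    # Layer 1, neuron 1 (NOR): weights [-1, -1], bias 0
    nor_out = step(-1 * x + -1 * y + 0)
    # Layer 1, neuron 2 (AND): weights [1, 1], bias -2
    and_out = step(1 * x + 1 * y - 2)
    # Layer 2 (OR): weights [1, 1], bias -1
    return step(1 * nor_out + 1 * and_out - 1)

# Full truth table: equal inputs -> 1, unequal inputs -> 0
assert [xnor_forward(x, y) for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [1, 0, 0, 1]
```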

## Key Properties

- 100% accuracy (4/4 inputs correct)
- Coq-proven correctness
- Minimal 2-layer architecture (XNOR is not linearly separable)
- Integer weights
- Commutative: XNOR(x, y) = XNOR(y, x)
- Reflexive: XNOR(x, x) = true
- Induces an equivalence relation on booleans: XNOR(x, y) = true exactly when x = y
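These algebraic properties can be checked exhaustively over the four Boolean inputs. A quick sketch using a reference XNOR (the helper name here is illustrative, not part of the released weights):

```python
def xnor(x, y):
    # Reference XNOR: true exactly when the inputs are equal
    return int(x == y)

bits = (0, 1)
for x in bits:
    for y in bits:
        assert xnor(x, y) == xnor(y, x)   # commutativity / symmetry
for x in bits:
    assert xnor(x, x) == 1                # reflexivity
```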

## Usage

```python
import torch
from safetensors.torch import load_file

weights = load_file('xnor.safetensors')

def xnor_gate(x, y):
    inputs = torch.tensor([float(x), float(y)])

    # Layer 1: NOR and AND
    nor_sum = (inputs * weights['layer1.neuron1.weight']).sum() + weights['layer1.neuron1.bias']
    nor_out = int(nor_sum >= 0)

    and_sum = (inputs * weights['layer1.neuron2.weight']).sum() + weights['layer1.neuron2.bias']
    and_out = int(and_sum >= 0)

    # Layer 2: OR
    layer1_outs = torch.tensor([float(nor_out), float(and_out)])
    or_sum = (layer1_outs * weights['layer2.weight']).sum() + weights['layer2.bias']
    return int(or_sum >= 0)

# Test
print(xnor_gate(0, 0))  # 1
print(xnor_gate(0, 1))  # 0
print(xnor_gate(1, 0))  # 0
print(xnor_gate(1, 1))  # 1
```

## Verification

**Coq Theorem**:
```coq
Theorem xnor_correct : forall x y, xnor_circuit x y = negb (xorb x y).
```

The proof is axiom-free and additionally establishes:
- Commutativity (symmetry): XNOR(x, y) = XNOR(y, x)
- Reflexivity: XNOR(x, x) = true
- The equivalence-relation properties of the induced relation

Full proof: [coq-circuits/Boolean/XNOR.v](https://github.com/CharlesCNorton/coq-circuits/blob/main/coq/Boolean/XNOR.v)

## Circuit Operation

XNOR outputs true when inputs are equal (both false or both true).

XNOR(x,y) = OR(NOR(x,y), AND(x,y))
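The decomposition above can be verified on the full truth table. A minimal sketch (the gate helpers are illustrative, not the model's API):

```python
def nor_gate(x, y):
    return int(not (x or y))

def and_gate(x, y):
    return int(x and y)

def or_gate(x, y):
    return int(x or y)

# OR(NOR(x, y), AND(x, y)) agrees with equality on all four inputs
for x in (0, 1):
    for y in (0, 1):
        assert or_gate(nor_gate(x, y), and_gate(x, y)) == int(x == y)
```

Intuitively, NOR fires only on (0, 0) and AND fires only on (1, 1), so their OR fires exactly on the two equal-input cases.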

## Citation

```bibtex
@software{tiny_xnor_prover_2025,
  title={tiny-XNOR-verified: Formally Verified XNOR Gate},
  author={Norton, Charles},
  url={https://huggingface.co/phanerozoic/tiny-XNOR-verified},
  year={2025}
}
```