---
license: mit
tags:
- pytorch
- safetensors
- threshold-logic
- neuromorphic
- popcount
- bit-counting
---

# threshold-popcount3

3-bit population count (Hamming weight). Counts the number of 1-bits in a 3-bit input, producing a 2-bit output (0-3).
## Circuit

```
 x0  x1  x2
  │   │   │
  └───┬───┘
      │
┌─────┴─────┐
│  Layer 1  │
│  atleast1 │  (sum >= 1)
│  atleast2 │  (sum >= 2)
│  atleast3 │  (sum >= 3)
└─────┬─────┘
      │
┌─────┴─────┐
│  Layer 2  │
│  pass     │  out1 = atleast2
│  XOR      │  out0 = atleast1 XOR atleast2 XOR atleast3
└─────┬─────┘
      │
      ▼
[out1, out0]
```
## Function

```
popcount3(x0, x1, x2) -> (out1, out0)
```

where output = 2*out1 + out0 = the number of 1-bits in the input.
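As a reference point, the function itself can be stated in plain Python, independent of any model weights (`popcount3_ref` is an illustrative name, not part of the shipped files):

```python
def popcount3_ref(x0, x1, x2):
    """Reference 3-bit popcount: returns (out1, out0) with count = 2*out1 + out0."""
    count = x0 + x1 + x2      # number of 1-bits, 0..3
    out1 = (count >> 1) & 1   # 2's place
    out0 = count & 1          # 1's place
    return out1, out0

# Exhaustive check over all eight inputs
for n in range(8):
    bits = ((n >> 2) & 1, (n >> 1) & 1, n & 1)
    out1, out0 = popcount3_ref(*bits)
    assert 2 * out1 + out0 == sum(bits)
```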
## Truth Table
| x0 | x1 | x2 | Count | out1 | out0 |
|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 | 0 |
| 0 | 0 | 1 | 1 | 0 | 1 |
| 0 | 1 | 0 | 1 | 0 | 1 |
| 0 | 1 | 1 | 2 | 1 | 0 |
| 1 | 0 | 0 | 1 | 0 | 1 |
| 1 | 0 | 1 | 2 | 1 | 0 |
| 1 | 1 | 0 | 2 | 1 | 0 |
| 1 | 1 | 1 | 3 | 1 | 1 |
## Mechanism

The circuit uses threshold gates to detect "at least k" conditions, then XOR logic to convert those detections into a binary count:
**Layer 1: Threshold Detection**
| Gate | Weights | Bias | Fires when |
|---|---|---|---|
| atleast1 | [1,1,1] | -1 | sum >= 1 |
| atleast2 | [1,1,1] | -2 | sum >= 2 |
| atleast3 | [1,1,1] | -3 | sum >= 3 |
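A minimal sketch of one such gate in plain Python, using the weights and biases from the table above (`atleast` is an illustrative helper, not a stored tensor name):

```python
def atleast(k, x0, x1, x2):
    # Threshold gate: fires (returns 1) when 1*x0 + 1*x1 + 1*x2 - k >= 0
    return int(x0 + x1 + x2 - k >= 0)

# Example input (1, 1, 0): sum = 2
assert atleast(1, 1, 1, 0) == 1  # sum >= 1
assert atleast(2, 1, 1, 0) == 1  # sum >= 2
assert atleast(3, 1, 1, 0) == 0  # sum < 3
```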
**Layer 2: Binary Encoding**

The key insight: because the threshold outputs are nested (atleast3 implies atleast2, which implies atleast1), the binary digits of the count fall out directly.

out1 (2's place) = atleast2
- True exactly when the count is 2 or 3, i.e. when bit 1 of the count is set. No XOR is needed; the threshold output is passed through.

out0 (1's place) = atleast1 XOR atleast2 XOR atleast3
- True exactly when the count is 1 or 3 (the odd counts), i.e. when bit 0 is set.
- The atleast3 term is essential: for input 111 the count (3) is odd, but atleast1 XOR atleast2 alone gives 1 XOR 1 = 0.
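The encoding `out1 = atleast2`, `out0 = atleast1 XOR atleast2 XOR atleast3` can be verified exhaustively in plain Python; note in particular that dropping the atleast3 term would fail for count 3 (`encode` is an illustrative helper):

```python
def encode(count):
    # Nested threshold detections for a 1-bit count in 0..3
    at1, at2, at3 = int(count >= 1), int(count >= 2), int(count >= 3)
    out1 = at2                # 2's place: true for counts 2 and 3
    out0 = at1 ^ at2 ^ at3    # 1's place: true for the odd counts 1 and 3
    return out1, out0

for count in range(4):
    out1, out0 = encode(count)
    assert 2 * out1 + out0 == count  # the encoding recovers the count exactly
```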
## Architecture

| Layer | Components | Neurons |
|---|---|---|
| 1 | atleast1, atleast2, atleast3 | 3 |
| 2-3 | 2-input XOR (or, nand, and) | 3 |

Total: 6 stored neurons.

- out1 = atleast2 (direct wire, no extra neurons)
- out0 = atleast1 XOR atleast2 XOR atleast3, computed by applying the stored 2-input XOR block twice: first to (atleast1, atleast2), then to that result and atleast3

Alternative simpler design: because the threshold outputs are nested, out0 reduces to a single threshold neuron with weights [1, -1, 1] and bias -1 over (atleast1, atleast2, atleast3). Using that direct threshold brings the design down to 4 neurons total.
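The 4-neuron variant can be sketched with its weights written out explicitly (illustrative plain Python, not the shipped model.safetensors; `dot_thresh` and `popcount3_minimal` are hypothetical names):

```python
def dot_thresh(w, x, b):
    # Threshold neuron: fires when w . x + b >= 0
    return int(sum(wi * xi for wi, xi in zip(w, x)) + b >= 0)

def popcount3_minimal(x0, x1, x2):
    x = (x0, x1, x2)
    # Layer 1: three at-least-k detectors, weights [1,1,1], biases -1/-2/-3
    at1 = dot_thresh((1, 1, 1), x, -1)
    at2 = dot_thresh((1, 1, 1), x, -2)
    at3 = dot_thresh((1, 1, 1), x, -3)
    # Layer 2: out1 is a direct wire; out0 exploits the nesting of the thresholds
    out1 = at2
    out0 = dot_thresh((1, -1, 1), (at1, at2, at3), -1)
    return out1, out0
```

The `[1, -1, 1]` neuron works only because the inputs are nested: at1 - at2 + at3 equals 1 precisely when the count is 1 or 3.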
## Parameters

| Property | Value |
|---|---|
| Inputs | 3 |
| Outputs | 2 |
| Neurons | 6 |
| Layers | 3 |
| Parameters | 21 |
| Magnitude | 22 |
## Usage

```python
from safetensors.torch import load_file
import torch

w = load_file('model.safetensors')

def gate(name, *inputs):
    # Evaluate one stored threshold gate: fires when w . x + b >= 0
    x = torch.tensor([float(i) for i in inputs])
    return int((x @ w[name + '.weight'].T + w[name + '.bias'] >= 0).item())

def xor2(a, b):
    # 2-input XOR = AND(OR(a, b), NAND(a, b)), built from the stored gates
    return gate('xor.and', gate('xor.or', a, b), gate('xor.nand', a, b))

def popcount3(x0, x1, x2):
    # Layer 1: threshold detection
    at1 = gate('atleast1', x0, x1, x2)
    at2 = gate('atleast2', x0, x1, x2)
    at3 = gate('atleast3', x0, x1, x2)
    # out1 = atleast2 (count >= 2 means bit 1 is set)
    out1 = at2
    # out0 = atleast1 XOR atleast2 XOR atleast3, computed as two cascaded
    # 2-input XORs (the stored XOR weights are reused for both stages)
    out0 = xor2(xor2(at1, at2), at3)
    return out1, out0

# Examples
print(popcount3(0, 0, 0))  # (0, 0) = 0
print(popcount3(1, 0, 0))  # (0, 1) = 1
print(popcount3(1, 1, 0))  # (1, 0) = 2
print(popcount3(1, 1, 1))  # (1, 1) = 3
```
## Applications
- Hamming distance calculation
- Set cardinality in bit vectors
- Error weight computation
- Branch prediction (counting set bits)
- Cryptographic operations
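For instance, the Hamming distance between two 3-bit vectors is just the popcount of their bitwise XOR (plain-Python illustration; `hamming3` is a hypothetical helper):

```python
def hamming3(a, b):
    # a, b: 3-bit tuples; distance = number of positions where they differ
    diffs = [ai ^ bi for ai, bi in zip(a, b)]
    count = sum(diffs)
    out1, out0 = (count >> 1) & 1, count & 1  # popcount3 as two output bits
    return 2 * out1 + out0

assert hamming3((1, 0, 1), (0, 0, 1)) == 1
assert hamming3((1, 1, 1), (0, 0, 0)) == 3
```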
## Files

```
threshold-popcount3/
├── model.safetensors
├── model.py
├── create_safetensors.py
├── config.json
└── README.md
```
## License
MIT