threshold-mux16

16:1 multiplexer. Selects one of 16 data inputs based on a 4-bit select signal.

Function

MUX16(d0..d15, s3,s2,s1,s0) = d[s], where s = 8*s3 + 4*s2 + 2*s1 + s0
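As a functional reference, the select decoding can be sketched in plain Python (an illustration of the function above, not the shipped model):

```python
def mux16_ref(data, s3, s2, s1, s0):
    # Decode the 4-bit select into an index s = 8*s3 + 4*s2 + 2*s1 + s0
    s = 8 * s3 + 4 * s2 + 2 * s1 + s0
    return data[s]

# Example: select d10 with s = 1010 (binary) = 10
data = [0] * 16
data[10] = 1
assert mux16_ref(data, 1, 0, 1, 0) == 1
```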

Architecture

d0..d15 (16 data)   s3 s2 s1 s0 (4 select)
    |                    |
    +--------------------+
    |
    v
[N0]  d0  AND (s=0000)  ----+
[N1]  d1  AND (s=0001)  ----|
[N2]  d2  AND (s=0010)  ----|
  ...                        +---> [OR] ---> output
[N14] d14 AND (s=1110)  ----|
[N15] d15 AND (s=1111)  ----+

Layer 1 Weights

Each neuron Ni fires (weighted sum plus bias >= 0) exactly when di=1 AND s=i:

  • Weight on di: +1
  • Weight on each select bit: +1 if that bit is 1 in i, else -1
  • Bias: -(1 + popcount(i))
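The rule above can be checked numerically. The sketch below builds one neuron's weights from the rule and applies a fire-at-zero threshold; the input ordering (d0..d15 followed by s3..s0) is an assumption matching the Usage section:

```python
def layer1_params(i):
    # Weights and bias for neuron Ni per the rule above.
    # Assumed input order: d0..d15, then s3, s2, s1, s0.
    w = [0.0] * 20
    w[i] = 1.0  # +1 on the selected data line di
    bits = [(i >> 3) & 1, (i >> 2) & 1, (i >> 1) & 1, i & 1]  # s3..s0
    for k, b in enumerate(bits):
        w[16 + k] = 1.0 if b else -1.0  # +1 where bit of i is 1, else -1
    bias = -(1 + sum(bits))  # -(1 + popcount(i))
    return w, bias

def fires(i, data, sel):
    w, b = layer1_params(i)
    x = [float(v) for v in data] + [float(s) for s in sel]
    return sum(wi * xi for wi, xi in zip(w, x)) + b >= 0

# N10 fires only when d10 = 1 and the select is exactly 1010
data = [1] * 16
assert fires(10, data, [1, 0, 1, 0])
assert not fires(10, data, [1, 0, 1, 1])   # wrong select
assert not fires(10, [0] * 16, [1, 0, 1, 0])  # d10 = 0
```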

Parameters

Inputs      20 (16 data + 4 select)
Outputs     1
Neurons     17
Layers      2
Parameters  373
Magnitude   145
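The magnitude figure is consistent with the weight rule above. A quick check, assuming the output OR neuron uses sixteen +1 weights and a bias of -1 (not stated explicitly in this card):

```python
# Layer 1: per neuron Ni, |+1| on di, four select weights of |1|,
# and |bias| = 1 + popcount(i)
l1 = sum(1 + 4 + (1 + bin(i).count("1")) for i in range(16))
# Layer 2 (assumed OR): sixteen +1 weights and |bias| = 1
l2 = 16 + 1
print(l1 + l2)  # 145, matching the table above
```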

Usage

from safetensors.torch import load_file
import torch

w = load_file('model.safetensors')

def mux16(data, s3, s2, s1, s0):
    # Input vector: 16 data bits followed by the select bits s3..s0
    inp = torch.tensor([float(d) for d in data] +
                       [float(s3), float(s2), float(s1), float(s0)])
    # Each layer is a thresholded linear map: fire when w.x + b >= 0
    l1 = (inp @ w['layer1.weight'].T + w['layer1.bias'] >= 0).float()
    out = (l1 @ w['layer2.weight'].T + w['layer2.bias'] >= 0).float()
    return int(out.item())

# Select d10 (s=1010)
data = [0]*16
data[10] = 1
print(mux16(data, 1, 0, 1, 0))  # 1

License

MIT
