threshold-atmost2outof8

At-most-2-out-of-8 detector. Fires when two or fewer inputs are active. The error-tolerance bound.

Circuit

  xβ‚€ x₁ xβ‚‚ x₃ xβ‚„ xβ‚… x₆ x₇
   β”‚  β”‚  β”‚  β”‚  β”‚  β”‚  β”‚  β”‚
   β””β”€β”€β”΄β”€β”€β”΄β”€β”€β”΄β”€β”€β”Όβ”€β”€β”΄β”€β”€β”΄β”€β”€β”΄β”€β”€β”˜
               β–Ό
          β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”
          β”‚ w: -1Γ—8 β”‚
          β”‚ b:  +2  β”‚
          β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
               β”‚
               β–Ό
           HW ≀ 2?

The Double-Error Budget

This circuit allows:

  • Zero active inputs (silence)
  • One active input (single event)
  • Two active inputs (pair/double event)

Three or more is considered "too many."

Mechanism

sum = -HW + 2

  HW    Sum    Output
  0     +2     1
  1     +1     1
  2      0     1
  3     -1     0
  4+    ≀ -2   0

The bias of +2 grants a budget of two active inputs.
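The mechanism can be sketched without the model file at all. The helper below (a hypothetical stand-in for the shipped model, not its actual code) hard-codes the stated parameters, weights of -1 and a bias of +2, and fires when the pre-activation sum is non-negative:

```python
# Minimal sketch of the AtMost2 mechanism: sum = -HW(x) + 2, fire iff sum >= 0.
def atmost2_manual(bits):
    hw = sum(bits)   # Hamming weight of the input
    s = -hw + 2      # all weights -1, bias +2
    return int(s >= 0)

# Walk the truth table row by row.
for hw in range(5):
    x = [1] * hw + [0] * (8 - hw)
    print(hw, -hw + 2, atmost2_manual(x))
```

The printed rows reproduce the table above: the output flips from 1 to 0 exactly when HW crosses from 2 to 3.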

Error Detection Context

In coding theory:

  • A code with minimum distance d can detect d-1 errors
  • AtMost2 accepts patterns with ≀2 bit flips from all-zeros
  • Useful for validating that corruption is within correctable limits
  Scenario           This circuit
  No errors          Pass
  Single-bit error   Pass
  Double-bit error   Pass
  Triple+ error      Fail
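The scenario table can be exercised directly. The sketch below (assumed helpers, not part of the released files) flips k random bits of the all-zeros word and feeds the result to an inlined reimplementation of the gate; the verdict depends only on k, not on which bits were flipped:

```python
import random

def atmost2(bits):
    # inlined gate: weights all -1, bias +2
    return int(-sum(bits) + 2 >= 0)

def corrupt(word, k):
    # flip k distinct bit positions of the codeword
    out = list(word)
    for i in random.sample(range(len(out)), k):
        out[i] ^= 1
    return out

zero = [0] * 8
for k in range(5):
    print(k, "Pass" if atmost2(corrupt(zero, k)) else "Fail")
```

Zero, single, and double flips pass; triple and beyond fail, matching the table.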

Dual of AtLeast6

  Circuit    Condition   Sparse/Dense
  AtLeast6   HW β‰₯ 6      Dense
  AtMost2    HW ≀ 2      Sparse

Bitwise NOT maps AtMost2 inputs to AtLeast6 inputs.
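The duality follows from HW(Β¬x) = 8 - HW(x): HW(x) ≀ 2 exactly when HW(Β¬x) β‰₯ 6. The check below verifies this over all 256 inputs, using inlined reimplementations of both gates (the AtLeast6 parameters, all weights +1 and bias -6, are assumed from its name, not read from that model's files):

```python
def atmost2(bits):
    return int(-sum(bits) + 2 >= 0)   # this card's gate

def atleast6(bits):
    return int(sum(bits) - 6 >= 0)    # assumed dual gate: weights +1, bias -6

# Exhaustively confirm: AtMost2(x) == AtLeast6(NOT x) for every 8-bit input.
for x in range(256):
    bits = [(x >> i) & 1 for i in range(8)]
    flipped = [1 - b for b in bits]
    assert atmost2(bits) == atleast6(flipped)
print("duality holds for all 256 inputs")
```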

Coverage

  HW    C(8,k)   AtMost2?
  0     1        Yes
  1     8        Yes
  2     28       Yes
  3-8   219      No

Fires on 1 + 8 + 28 = 37 of 256 inputs (14.5%).
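The coverage count can be confirmed by brute force over all 2^8 inputs, again with an inlined stand-in for the gate:

```python
from itertools import product

def atmost2(bits):
    return int(-sum(bits) + 2 >= 0)

# Enumerate every 8-bit input and count the firing patterns.
count = sum(atmost2(bits) for bits in product([0, 1], repeat=8))
print(count, f"{count / 256:.1%}")  # 37 14.5%
```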

Parameters

  Component   Value
  Weights     all -1
  Bias        +2
  Total       9 parameters

Usage

from safetensors.torch import load_file
import torch

w = load_file('model.safetensors')

def atmost2(bits):
    inp = torch.tensor([float(b) for b in bits])
    # weighted sum plus bias; fires (1) when HW <= 2
    return int((inp * w['weight']).sum() + w['bias'] >= 0)

# Double event: allowed
print(atmost2([1,0,0,0,1,0,0,0]))  # 1

# Triple event: rejected
print(atmost2([1,0,0,1,1,0,0,0]))  # 0

Files

threshold-atmost2outof8/
β”œβ”€β”€ model.safetensors
β”œβ”€β”€ model.py
β”œβ”€β”€ config.json
└── README.md

License

MIT
