threshold-winnertakeall

4-input Winner-Take-All network. A single active input "wins" and produces output. Ties produce no winner.

Circuit

   xβ‚€      x₁      xβ‚‚      x₃
    β”‚       β”‚       β”‚       β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€
    β”‚       β”‚       β”‚       β”‚
    β–Ό       β–Ό       β–Ό       β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”β”Œβ”€β”€β”€β”€β”€β”€β”β”Œβ”€β”€β”€β”€β”€β”€β”β”Œβ”€β”€β”€β”€β”€β”€β”
β”‚  yβ‚€  β”‚β”‚  y₁  β”‚β”‚  yβ‚‚  β”‚β”‚  y₃  β”‚
β”‚+1 -1 β”‚β”‚-1 +1 β”‚β”‚-1 -1 β”‚β”‚-1 -1 β”‚
β”‚-1 -1 β”‚β”‚-1 -1 β”‚β”‚+1 -1 β”‚β”‚-1 +1 β”‚
β”‚b: -1 β”‚β”‚b: -1 β”‚β”‚b: -1 β”‚β”‚b: -1 β”‚
β””β”€β”€β”€β”€β”€β”€β”˜β””β”€β”€β”€β”€β”€β”€β”˜β””β”€β”€β”€β”€β”€β”€β”˜β””β”€β”€β”€β”€β”€β”€β”˜
    β”‚       β”‚       β”‚       β”‚
    β–Ό       β–Ό       β–Ό       β–Ό
   yβ‚€      y₁      yβ‚‚      y₃

Competitive Dynamics

Each output neuron has:

  • +1 weight on its own input (excitation)
  • -1 weight on each other input (lateral inhibition)
  • -1 bias (so at least one excitatory input is required to fire)

This creates competition: each neuron tries to fire, but activity from others suppresses it.
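The dynamics above can be sketched in pure Python, without loading the shipped weights. The weight matrix `W` and the function `wta_step` below are illustrative reconstructions of the scheme described, not the packaged model code:

```python
# W[i][j] is the weight from input j to output neuron i:
# +1 on the diagonal (own input), -1 elsewhere (lateral inhibition).
W = [[1 if i == j else -1 for j in range(4)] for i in range(4)]
BIAS = -1

def wta_step(x):
    """Threshold each neuron: fire (1) when weighted sum + bias >= 0."""
    return [int(sum(w * xi for w, xi in zip(W[i], x)) + BIAS >= 0)
            for i in range(4)]

print(wta_step([0, 0, 1, 0]))  # [0, 0, 1, 0] - x2 wins
print(wta_step([1, 0, 1, 0]))  # [0, 0, 0, 0] - tie, mutual suppression
```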

Truth Table

Inputs  Outputs  Interpretation
0000    0000     No input, no winner
1000    1000     xβ‚€ wins
0100    0100     x₁ wins
0010    0010     xβ‚‚ wins
0001    0001     xβ‚‚ wins
1100    0000     Tie - no winner
1010    0000     Tie - no winner
1111    0000     All active - no winner
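The table lists representative rows; the full behavior can be verified exhaustively over all 16 input patterns. This sketch rebuilds the weights from the description above (the names are illustrative, not the packaged model code):

```python
from itertools import product

# Reconstructed weights: +1 for own input, -1 for the rest, bias -1.
W = [[1 if i == j else -1 for j in range(4)] for i in range(4)]

def wta_step(x):
    return [int(sum(w * xi for w, xi in zip(W[i], x)) - 1 >= 0)
            for i in range(4)]

for x in product([0, 1], repeat=4):
    y = wta_step(list(x))
    if sum(x) == 1:
        assert y == list(x)       # unique winner passes through
    else:
        assert y == [0, 0, 0, 0]  # silence or tie: no output
print("all 16 cases verified")
```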

The Inhibition Principle

For yβ‚€ to fire:

sum = (+1)Β·xβ‚€ + (-1)Β·x₁ + (-1)Β·xβ‚‚ + (-1)Β·x₃ - 1
    = xβ‚€ - x₁ - xβ‚‚ - x₃ - 1

This fires only when xβ‚€=1 and all others=0:

  • xβ‚€=1, others=0: sum = 1 - 0 - 0 - 0 - 1 = 0 β‰₯ 0 βœ“
  • xβ‚€=1, x₁=1: sum = 1 - 1 - 0 - 0 - 1 = -1 < 0 βœ—
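The two cases above can be checked with a line of arithmetic. The helper `y0_sum` below is illustrative, mirroring the formula for yβ‚€ only:

```python
def y0_sum(x0, x1, x2, x3):
    # Neuron y0: +1 on its own input, -1 on each rival, bias -1.
    return x0 - x1 - x2 - x3 - 1

assert y0_sum(1, 0, 0, 0) == 0    # 0 >= 0: fires
assert y0_sum(1, 1, 0, 0) == -1   # below threshold: suppressed
```

Every additional active rival lowers the sum by one, so a single competitor is already enough to silence the neuron.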

Exactly-One Detection

WTA is equivalent to "Exactly-1-of-4 with position encoding":

Active inputs (Hamming weight)  WTA output
0                               0000
1                               One-hot (winner)
2+                              0000 (tie)
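This equivalence can be stated in two forms and checked against each other: the linear-threshold form each neuron actually computes, and the behavioral "exactly one active" spec. Both functions below are illustrative sketches:

```python
from itertools import product

def wta_threshold(x):
    # Neuron i computes x[i] - sum(others) - 1 = 2*x[i] - sum(x) - 1.
    return [int(2 * x[i] - sum(x) - 1 >= 0) for i in range(4)]

def exactly_one(x):
    # Behavioral spec: pass the input through iff its Hamming weight is 1.
    return list(x) if sum(x) == 1 else [0, 0, 0, 0]

assert all(wta_threshold(list(x)) == exactly_one(list(x))
           for x in product([0, 1], repeat=4))
```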

Biological Analogy

This mimics lateral inhibition in neural circuits:

  • Retinal ganglion cells compete via inhibitory interneurons
  • The strongest signal suppresses neighbors
  • Sharpens contrast and selects dominant features

Architecture

4 neurons, 20 parameters (4Γ—4 weights + 4 biases), 1 layer

All neurons compute in parallel. No recurrence needed for binary inputs.

Usage

from safetensors.torch import load_file
import torch

w = load_file('model.safetensors')

def wta(inputs):
    # Threshold activation: neuron i fires when its weighted sum plus bias >= 0
    inp = torch.tensor([float(x) for x in inputs])
    return [int((inp * w[f'y{i}.weight']).sum() + w[f'y{i}.bias'] >= 0)
            for i in range(4)]

print(wta([0,1,0,0]))  # [0, 1, 0, 0] - x1 wins
print(wta([1,1,0,0]))  # [0, 0, 0, 0] - tie

Files

threshold-winnertakeall/
β”œβ”€β”€ model.safetensors
β”œβ”€β”€ model.py
β”œβ”€β”€ config.json
└── README.md

License

MIT
