---
license: mit
tags:
- pytorch
- safetensors
- threshold-logic
- neuromorphic
- sequential
- latch
---
# threshold-d-latch

D-latch (level-sensitive) next-state logic implemented as a threshold circuit.
## Circuit

```
E ────────┐
D ────────┼──► D-Latch ──┬──► Q
Q_prev ───┘              └──► Qn
```
## Modes

- E=1 (Transparent): Q follows D
- E=0 (Hold): Q holds its previous value
## Truth Table
| E | D | Q_prev | Q | Qn | Mode |
|---|---|---|---|---|---|
| 0 | X | 0 | 0 | 1 | Hold |
| 0 | X | 1 | 1 | 0 | Hold |
| 1 | 0 | X | 0 | 1 | Transparent |
| 1 | 1 | X | 1 | 0 | Transparent |
## Logic

- `Q = (E AND D) OR (NOT_E AND Q_prev)`
- `Qn = (E AND NOT_D) OR (NOT_E AND NOT_Q_prev)`
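The two equations can be checked directly against the truth table. A minimal boolean reference (the function name `latch_next` is illustrative, not part of the shipped model):

```python
def latch_next(e, d, q_prev):
    """Q = (E AND D) OR (NOT E AND Q_prev); Qn is its complement."""
    q = (e and d) or ((not e) and q_prev)
    qn = (e and (not d)) or ((not e) and (not q_prev))
    return int(q), int(qn)

# Hold rows (E=0): Q keeps Q_prev
assert latch_next(0, 0, 1) == (1, 0)
assert latch_next(0, 1, 0) == (0, 1)
# Transparent rows (E=1): Q follows D
assert latch_next(1, 0, 1) == (0, 1)
assert latch_next(1, 1, 0) == (1, 0)
```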
## Architecture
| Layer | Neurons |
|---|---|
| 1 | e_and_d, e_and_notd, note_and_qprev, note_and_notqprev |
| 2 | Q, Qn |
Total: 6 neurons, 26 parameters, 2 layers
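One way to realize this 2-layer, 6-neuron architecture is with small signed weights and a hard-threshold (Heaviside) activation: layer 1 computes the four product terms, layer 2 ORs them into Q and Qn. The weights below are an illustrative hand-derived set that matches the parameter count (4×4 + 2×5 = 26), not necessarily the tensors shipped in `model.safetensors`:

```python
import numpy as np

def step(x):
    # Heaviside threshold activation: 1 if x >= 0, else 0
    return (x >= 0).astype(float)

# Layer 1: product terms over inputs [E, D, Q_prev]
W1 = np.array([
    [ 1,  1,  0],   # e_and_d
    [ 1, -1,  0],   # e_and_notd
    [-1,  0,  1],   # note_and_qprev
    [-1,  0, -1],   # note_and_notqprev
], dtype=float)
b1 = np.array([-1.5, -0.5, -0.5, 0.5])

# Layer 2: OR over the relevant product terms
W2 = np.array([
    [1, 0, 1, 0],   # Q  = e_and_d OR note_and_qprev
    [0, 1, 0, 1],   # Qn = e_and_notd OR note_and_notqprev
], dtype=float)
b2 = np.array([-0.5, -0.5])

def latch(e, d, q_prev):
    x = np.array([e, d, q_prev], dtype=float)
    h = step(W1 @ x + b1)
    q, qn = step(W2 @ h + b2)
    return int(q), int(qn)

assert latch(1, 1, 0) == (1, 0)  # transparent: Q follows D
assert latch(0, 0, 1) == (1, 0)  # hold: Q keeps Q_prev
```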
## D-Latch vs D-Flip-Flop
- D-Latch: Level-sensitive. Q changes while E is high.
- D-Flip-Flop: Edge-triggered. Q changes only on clock edge.
D-latches are simpler but can cause timing issues (race conditions) if not carefully designed. Flip-flops are safer for synchronous designs.
## Parameters

| Property | Value |
|---|---|
| Inputs | 3 |
| Outputs | 2 |
| Neurons | 6 |
| Layers | 2 |
| Parameters | 26 |
| Magnitude | 18 |
## Usage

```python
from safetensors.torch import load_file

w = load_file('model.safetensors')

# Simulate latch behavior: E=1 steps pass D through, E=0 steps hold Q.
# `compute` is a user-supplied forward pass over the loaded weights.
q = 0
for e, d in [(1, 1), (1, 0), (0, 1), (0, 0)]:
    q_next = compute(e, d, q, w)
    q = q_next
```
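A sketch of the `compute` helper used above, assuming the checkpoint stores per-layer weights and biases under keys like `layers.0.weight`; the real key names may differ, so inspect `w.keys()` after loading:

```python
import torch

def compute(e, d, q_prev, w):
    """Forward pass through the 2-layer threshold network.

    Assumes keys 'layers.0.weight', 'layers.0.bias', 'layers.1.weight',
    'layers.1.bias' (hypothetical names -- check w.keys() for the real ones).
    Returns the next Q; the second output neuron is Qn.
    """
    x = torch.tensor([e, d, q_prev], dtype=torch.float32)
    for i in (0, 1):
        x = x @ w[f"layers.{i}.weight"].T + w[f"layers.{i}.bias"]
        x = (x >= 0).float()  # hard threshold activation
    q, qn = x.tolist()
    return int(q)
```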
## License

MIT