phanerozoic committed
Commit 8bc948a · verified · 1 Parent(s): 2896fb8

Rename from tiny-mod10-verified

Files changed (4)
  1. README.md +72 -0
  2. config.json +23 -0
  3. model.py +30 -0
  4. model.safetensors +3 -0
README.md ADDED
@@ -0,0 +1,72 @@
+ ---
+ license: mit
+ tags:
+ - pytorch
+ - safetensors
+ - threshold-logic
+ - neuromorphic
+ - modular-arithmetic
+ ---
+
+ # threshold-mod10
+
+ Trivial case: computes the Hamming weight mod 10 of an 8-bit input. Since the maximum Hamming weight is 8 < 10, the result is just the Hamming weight itself.
+
+ ## Circuit
+
+ ```
+  x₀ x₁ x₂ x₃ x₄ x₅ x₆ x₇
+  │  │  │  │  │  │  │  │
+  │  │  │  │  │  │  │  │
+ w: 1  1  1  1  1  1  1  1
+  └──┴──┴──┴──┼──┴──┴──┴──┘
+              ▼
+         ┌─────────┐
+         │  b: 0   │
+         └─────────┘
+              │
+              ▼
+      HW (= HW mod 10)
+ ```
+
+ ## Why Trivial?
+
+ For mod m where m exceeds the number of inputs, no reset (wraparound) ever occurs:
+
+ - 8 inputs → max HW = 8
+ - 8 mod 10 = 8 (no wraparound)
+
+ ## Parameters
+
+ | | |
+ |---|---|
+ | Weights | [1, 1, 1, 1, 1, 1, 1, 1] |
+ | Bias | 0 |
+ | Total | 9 parameters |
+
+ ## Usage
+
+ ```python
+ from safetensors.torch import load_file
+ import torch
+
+ w = load_file('model.safetensors')
+
+ def mod10(bits):  # actually just the Hamming weight, since max HW < 10
+     inputs = torch.tensor([float(b) for b in bits])
+     return int((inputs * w['weight']).sum() + w['bias'])
+ ```
+
+ ## Files
+
+ ```
+ threshold-mod10/
+ ├── model.safetensors
+ ├── model.py
+ ├── config.json
+ └── README.md
+ ```
+
+ ## License
+
+ MIT
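Since the README's Parameters table documents unit weights and a zero bias, the model reduces to a plain popcount; a dependency-free sketch of the same computation (no torch or safetensors required) would be:

```python
# Dependency-free equivalent of the documented model: with unit weights and
# zero bias (per the Parameters table), the output is the Hamming weight,
# and the % 10 is a no-op because sum(bits) <= 8.
def mod10(bits):
    assert len(bits) == 8, "model expects 8-bit inputs"
    return sum(bits) % 10

print(mod10([1, 0, 1, 1, 0, 0, 1, 0]))  # 4
```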
config.json ADDED
@@ -0,0 +1,23 @@
+ {
+   "model_type": "threshold_network",
+   "task": "mod10_classification",
+   "architecture": "8 -> 1",
+   "input_size": 8,
+   "output_size": 1,
+   "num_neurons": 1,
+   "num_parameters": 9,
+   "modulus": 10,
+   "activation": "heaviside",
+   "weight_constraints": "integer",
+   "weight_pattern": "[1, 1, 1, 1, 1, 1, 1, 1]",
+   "verification": {
+     "method": "coq_proof",
+     "exhaustive": true,
+     "inputs_tested": 256
+   },
+   "accuracy": {
+     "all_inputs": "256/256",
+     "percentage": 100.0
+   },
+   "github": "https://github.com/CharlesCNorton/coq-circuits"
+ }
model.py ADDED
@@ -0,0 +1,30 @@
+ """
+ Threshold Network for MOD-10 Circuit
+
+ For 8-bit inputs, HW ranges 0-8, all less than 10, so HW mod 10 = HW.
+ """
+
+ import torch
+ from safetensors.torch import load_file
+
+
+ class ThresholdMod10:
+     def __init__(self, weights_dict):
+         self.weight = weights_dict['weight']
+         self.bias = weights_dict['bias']
+
+     def __call__(self, bits):
+         inputs = torch.tensor([float(b) for b in bits])
+         return (inputs * self.weight).sum() + self.bias
+
+     @classmethod
+     def from_safetensors(cls, path="model.safetensors"):
+         return cls(load_file(path))
+
+
+ if __name__ == "__main__":
+     weights = load_file("model.safetensors")
+     model = ThresholdMod10(weights)
+     for hw in range(9):
+         bits = [1] * hw + [0] * (8 - hw)
+         print(f"HW={hw}: out={model(bits).item():.0f}, HW mod 10 = {hw}")
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0d9838dc724da91cd61df87cf34f8fc697e97d3f2b257ceceb971995deed4643
+ size 164
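The config.json above claims exhaustive verification over all 256 inputs ("inputs_tested": 256, "all_inputs": "256/256"). The same exhaustive check can be sketched in pure Python, assuming the unit weights and zero bias documented in the README (this mirrors, but does not replace, the repo's Coq proof):

```python
# Exhaustive check over all 256 8-bit inputs: with unit weights and zero
# bias, the linear layer's output equals the Hamming weight, which equals
# the Hamming weight mod 10 since it never exceeds 8.
weights = [1] * 8
bias = 0

def model(bits):
    return sum(w * b for w, b in zip(weights, bits)) + bias

for n in range(256):
    bits = [(n >> i) & 1 for i in range(8)]
    hw = bits.count(1)
    assert model(bits) == hw == hw % 10

print("256/256 inputs verified")
```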