phanerozoic committed on
Commit
fa53b57
·
verified ·
1 Parent(s): a7dacd5

Rename from tiny-mod12-verified

Files changed (4)
  1. README.md +72 -0
  2. config.json +23 -0
  3. model.py +22 -0
  4. model.safetensors +3 -0
README.md ADDED
@@ -0,0 +1,72 @@
+ ---
+ license: mit
+ tags:
+ - pytorch
+ - safetensors
+ - threshold-logic
+ - neuromorphic
+ - modular-arithmetic
+ ---
+
+ # threshold-mod12
+
+ A trivial case: computes the Hamming weight (HW) of an 8-bit input mod 12. Since the maximum HW is 8 < 12, the output is simply the HW itself.
+
+ ## Circuit
+
+ ```
+  x₀ x₁ x₂ x₃ x₄ x₅ x₆ x₇
+  │  │  │  │  │  │  │  │
+  │  │  │  │  │  │  │  │
+ w:1  1  1  1  1  1  1  1
+  └──┴──┴──┴──┼──┴──┴──┴──┘
+              ▼
+         ┌─────────┐
+         │  b: 0   │
+         └─────────┘
+              │
+              ▼
+      HW (= HW mod 12)
+ ```
+
+ ## Why Trivial?
+
+ For a modulus m greater than the number of inputs, the Hamming weight can never reach m, so no reset (wraparound) ever occurs:
+
+ - 8 inputs → max HW = 8
+ - 8 mod 12 = 8 (no wraparound)
+
+ ## Parameters
+
+ | | |
+ |---|---|
+ | Weights | [1, 1, 1, 1, 1, 1, 1, 1] |
+ | Bias | 0 |
+ | Total | 9 parameters |
+
+ ## Usage
+
+ ```python
+ from safetensors.torch import load_file
+ import torch
+
+ w = load_file('model.safetensors')
+
+ def mod12(bits):  # effectively just HW, since HW <= 8 < 12
+     inputs = torch.tensor([float(b) for b in bits])
+     return int((inputs * w['weight']).sum() + w['bias'])
+ ```
+
+ ## Files
+
+ ```
+ threshold-mod12/
+ ├── model.safetensors
+ ├── model.py
+ ├── config.json
+ └── README.md
+ ```
+
+ ## License
+
+ MIT
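The exhaustive "256/256 inputs" verification claimed for this model can be reproduced in plain PyTorch without the checkpoint; a minimal sketch, assuming the all-ones weights and zero bias documented above:

```python
import torch

# Reconstruct the circuit described in the README: all-ones weights, zero bias.
weight = torch.ones(8)
bias = torch.tensor(0.0)

def hw_mod12(bits):
    x = torch.tensor([float(b) for b in bits])
    return int((x * weight).sum() + bias) % 12

# Exhaustive check over all 256 8-bit inputs: the unit's output must equal
# the Hamming weight, since max HW = 8 < 12 means no wraparound.
for n in range(256):
    bits = [(n >> i) & 1 for i in range(8)]
    assert hw_mod12(bits) == bin(n).count("1")
print("256/256 inputs verified")
```

Note this checks the circuit as documented, not the shipped weights; the safetensors file would need to be loaded to confirm the two match.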
config.json ADDED
@@ -0,0 +1,23 @@
+ {
+   "model_type": "threshold_network",
+   "task": "mod12_classification",
+   "architecture": "8 -> 1",
+   "input_size": 8,
+   "output_size": 1,
+   "num_neurons": 1,
+   "num_parameters": 9,
+   "modulus": 12,
+   "activation": "heaviside",
+   "weight_constraints": "integer",
+   "weight_pattern": "[1, 1, 1, 1, 1, 1, 1, 1]",
+   "verification": {
+     "method": "coq_proof",
+     "exhaustive": true,
+     "inputs_tested": 256
+   },
+   "accuracy": {
+     "all_inputs": "256/256",
+     "percentage": 100.0
+   },
+   "github": "https://github.com/CharlesCNorton/coq-circuits"
+ }
model.py ADDED
@@ -0,0 +1,22 @@
+ """
+ Threshold Network for MOD-12 Circuit
+
+ For 8-bit inputs, HW ranges over 0-8, all less than 12, so HW mod 12 = HW.
+ """
+
+ import torch
+ from safetensors.torch import load_file
+
+
+ class ThresholdMod12:
+     def __init__(self, weights_dict):
+         self.weight = weights_dict['weight']
+         self.bias = weights_dict['bias']
+
+     def __call__(self, bits):
+         inputs = torch.tensor([float(b) for b in bits])
+         return (inputs * self.weight).sum() + self.bias
+
+     @classmethod
+     def from_safetensors(cls, path="model.safetensors"):
+         return cls(load_file(path))
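A quick smoke test of the class above, redeclared here so the snippet runs standalone, with an in-memory state dict standing in for `model.safetensors` (the keys `weight` and `bias` match what the loader expects):

```python
import torch

class ThresholdMod12:  # repeated from model.py so this snippet is self-contained
    def __init__(self, weights_dict):
        self.weight = weights_dict["weight"]
        self.bias = weights_dict["bias"]

    def __call__(self, bits):
        inputs = torch.tensor([float(b) for b in bits])
        return (inputs * self.weight).sum() + self.bias

# In-memory stand-in for load_file("model.safetensors"): unit weights, zero bias.
state = {"weight": torch.ones(8), "bias": torch.tensor(0.0)}
model = ThresholdMod12(state)
print(int(model([1, 1, 0, 0, 1, 0, 0, 0])))  # Hamming weight of the input: 3
```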
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0d9838dc724da91cd61df87cf34f8fc697e97d3f2b257ceceb971995deed4643
+ size 164