phanerozoic committed
Commit 26e7af6 · verified · 1 Parent(s): b64bbbf

Upload folder using huggingface_hub

Files changed (5):
  1. README.md +153 -0
  2. config.json +9 -0
  3. create_safetensors.py +83 -0
  4. model.py +33 -0
  5. model.safetensors +3 -0
README.md ADDED
@@ -0,0 +1,153 @@
---
license: mit
tags:
- pytorch
- safetensors
- threshold-logic
- neuromorphic
- bit-manipulation
- ffs
---

# threshold-ffs8

8-bit find first set (FFS). Returns the 1-indexed position of the least significant set bit. Returns 0 if no bits are set.

## Circuit

```
x7 x6 x5 x4 x3 x2 x1 x0
 │  │  │  │  │  │  │  │
 └──┴──┴──┴──┴──┴──┴──┘
           │
 ┌─────────────────────┐
 │ CTZ + 1 (if set)    │
 │ or 0 (if all zero)  │
 └─────────────────────┘
           │
   [f3, f2, f1, f0]
    (position 0-8)
```

## Function

```
ffs8(x7..x0) -> (f3, f2, f1, f0)

position = 8*f3 + 4*f2 + 2*f1 + f0

if input = 0:  position = 0
if input != 0: position = CTZ(input) + 1
```

FFS is 1-indexed: the first bit (x0) is position 1, not 0.

## Truth Table (selected)

| Input (hex) | Binary   | FFS | f3 f2 f1 f0 | Meaning        |
|-------------|----------|:---:|-------------|----------------|
| 0x00        | 00000000 | 0   | 0 0 0 0     | No bits set    |
| 0x01        | 00000001 | 1   | 0 0 0 1     | Bit 0 is first |
| 0x02        | 00000010 | 2   | 0 0 1 0     | Bit 1 is first |
| 0x04        | 00000100 | 3   | 0 0 1 1     | Bit 2 is first |
| 0x08        | 00001000 | 4   | 0 1 0 0     | Bit 3 is first |
| 0x10        | 00010000 | 5   | 0 1 0 1     | Bit 4 is first |
| 0x20        | 00100000 | 6   | 0 1 1 0     | Bit 5 is first |
| 0x40        | 01000000 | 7   | 0 1 1 1     | Bit 6 is first |
| 0x80        | 10000000 | 8   | 1 0 0 0     | Bit 7 is first |
| 0xFF        | 11111111 | 1   | 0 0 0 1     | Bit 0 is first |

## Relationship to CTZ

```
if (input == 0):
    FFS = 0
else:
    FFS = CTZ + 1
```

FFS and CTZ are closely related:
- CTZ returns 0-7 for positions, 8 for zero input
- FFS returns 1-8 for positions, 0 for zero input
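This correspondence can be exercised exhaustively. As an illustrative aside (not part of the model), Python's lowest-set-bit trick `(x & -x).bit_length()` computes FFS for every input, including 0:

```python
def ctz8(x: int) -> int:
    """Count trailing zeros of an 8-bit value; 8 when x == 0."""
    n = 0
    while n < 8 and ((x >> n) & 1) == 0:
        n += 1
    return n

for x in range(256):
    ffs = (x & -x).bit_length()  # x & -x isolates the lowest set bit
    assert ffs == (0 if x == 0 else ctz8(x) + 1)
print("FFS = CTZ + 1 verified for all 256 inputs")
```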

## Mechanism

**Position detectors:** the same as in CTZ; each detects where the first 1 appears, scanning from the LSB.

| Signal | Fires when          |
|--------|---------------------|
| p0     | x0 = 1              |
| p1     | x0 = 0, x1 = 1      |
| p2     | x0 = x1 = 0, x2 = 1 |
| ...    | ...                 |
| p7     | x0..x6 = 0, x7 = 1  |

**Output encoding:** direct binary encoding of position + 1

- f0 = p0 OR p2 OR p4 OR p6 (positions 0,2,4,6 → FFS 1,3,5,7)
- f1 = p1 OR p2 OR p5 OR p6 (positions 1,2,5,6 → FFS 2,3,6,7)
- f2 = p3 OR p4 OR p5 OR p6 (positions 3,4,5,6 → FFS 4,5,6,7)
- f3 = p7 (position 7 → FFS 8)
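Both layers are easy to sketch in plain Python: each detector pi is a threshold gate (weight +1 on x_i, -1 on every lower bit, threshold 1), and the f outputs are ORs of detectors. A minimal sketch, independent of the saved weights:

```python
def position_detectors(bits):
    """bits = [x0..x7], LSB first. pi fires iff x_i = 1 and all lower bits are 0."""
    return [int(bits[i] - sum(bits[:i]) >= 1) for i in range(8)]

def encode(p):
    """OR-combine detectors into the 4-bit FFS code (f3, f2, f1, f0)."""
    f0 = int(p[0] or p[2] or p[4] or p[6])
    f1 = int(p[1] or p[2] or p[5] or p[6])
    f2 = int(p[3] or p[4] or p[5] or p[6])
    return p[7], f2, f1, f0

# Exhaustive check against the FFS specification
for x in range(256):
    bits = [(x >> j) & 1 for j in range(8)]
    f3, f2, f1, f0 = encode(position_detectors(bits))
    assert 8*f3 + 4*f2 + 2*f1 + f0 == (x & -x).bit_length()
```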

## Parameters

|            |    |
|------------|----|
| Inputs     | 8  |
| Outputs    | 4  |
| Neurons    | 12 |
| Layers     | 2  |
| Parameters | 76 |
| Magnitude  | 48 |

## Usage

```python
from model import load_model, ffs8

w = load_model()  # reads model.safetensors

def ffs_value(bits):
    # bits = [x0, x1, ..., x7] (LSB first)
    f3, f2, f1, f0 = ffs8(bits, w)
    return 8*f3 + 4*f2 + 2*f1 + f0

# Examples
print(ffs_value([1,0,0,0,0,0,0,0]))  # 1 (first set bit is position 0)
print(ffs_value([0,0,1,0,0,0,0,0]))  # 3 (first set bit is position 2)
print(ffs_value([0,0,0,0,0,0,0,0]))  # 0 (no bits set)
```

## Applications

- POSIX ffs() function implementation
- Finding available slots in bitmaps
- Scheduler ready queue processing
- Memory allocator free list management
- Interrupt priority handling

## Comparison: FFS vs CTZ vs CLZ

| Function | Zero input | Non-zero input | Index base         |
|----------|------------|----------------|--------------------|
| FFS      | 0          | 1 to 8         | 1-indexed          |
| CTZ      | 8          | 0 to 7         | 0-indexed          |
| CLZ      | 8          | 0 to 7         | 0-indexed from MSB |
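The three conventions in the table can be reproduced with short plain-Python sketches (illustrative only, not the repo's threshold networks):

```python
def ffs(x):  # 0 for x == 0, else 1..8
    return (x & -x).bit_length()

def ctz(x):  # 8 for x == 0, else 0..7
    return 8 if x == 0 else ffs(x) - 1

def clz(x):  # 8 for x == 0, else leading zeros counted from the MSB
    return 8 - x.bit_length()

for x in (0x00, 0x01, 0x06, 0x80):
    print(f"{x:#04x}: ffs={ffs(x)} ctz={ctz(x)} clz={clz(x)}")
```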

## Files

```
threshold-ffs8/
├── model.safetensors
├── model.py
├── create_safetensors.py
├── config.json
└── README.md
```

## License

MIT
config.json ADDED
@@ -0,0 +1,9 @@
{
  "name": "threshold-ffs8",
  "description": "8-bit find first set (1-indexed position of LSB)",
  "inputs": 8,
  "outputs": 4,
  "neurons": 12,
  "layers": 2,
  "parameters": 76
}
create_safetensors.py ADDED
@@ -0,0 +1,83 @@
import torch
from safetensors.torch import save_file

weights = {}

# 8-bit Find First Set (FFS)
# Input: x0..x7 (x0 is LSB)
# Output: f3,f2,f1,f0 where position = 8*f3 + 4*f2 + 2*f1 + f0
# Returns 0 if no bits set, else 1-indexed position of first set bit

# Position detectors (same as CTZ)
for i in range(8):
    w = [0.0] * 8
    w[i] = 1.0
    for j in range(i):
        w[j] = -1.0
    weights[f'p{i}.weight'] = torch.tensor([w], dtype=torch.float32)
    weights[f'p{i}.bias'] = torch.tensor([-1.0], dtype=torch.float32)

# Output encoding for FFS (1-indexed):
# Position 0 (x0 first) → FFS = 1 = 0001
# Position 1 (x1 first) → FFS = 2 = 0010
# Position 2 (x2 first) → FFS = 3 = 0011
# Position 3 (x3 first) → FFS = 4 = 0100
# Position 4 (x4 first) → FFS = 5 = 0101
# Position 5 (x5 first) → FFS = 6 = 0110
# Position 6 (x6 first) → FFS = 7 = 0111
# Position 7 (x7 first) → FFS = 8 = 1000
# No bits set           → FFS = 0 = 0000

# f0 = p0 OR p2 OR p4 OR p6 (FFS values 1,3,5,7 have bit 0 set)
# f1 = p1 OR p2 OR p5 OR p6 (FFS values 2,3,6,7 have bit 1 set)
# f2 = p3 OR p4 OR p5 OR p6 (FFS values 4,5,6,7 have bit 2 set)
# f3 = p7 (FFS value 8 has bit 3 set)

save_file(weights, 'model.safetensors')

def ffs8(bits):
    """Find first set bit (1-indexed). Returns 0 if no bits set."""
    inp = torch.tensor([float(b) for b in bits])

    p = []
    for i in range(8):
        pi = int((inp @ weights[f'p{i}.weight'].T + weights[f'p{i}.bias'] >= 0).item())
        p.append(pi)

    # Combine for 1-indexed output
    f0 = 1 if (p[0] or p[2] or p[4] or p[6]) else 0
    f1 = 1 if (p[1] or p[2] or p[5] or p[6]) else 0
    f2 = 1 if (p[3] or p[4] or p[5] or p[6]) else 0
    f3 = p[7]

    return f3, f2, f1, f0

print("Verifying ffs8...")
errors = 0
for i in range(256):
    bits = [(i >> j) & 1 for j in range(8)]
    f3, f2, f1, f0 = ffs8(bits)
    result = 8*f3 + 4*f2 + 2*f1 + f0

    # Compute expected FFS
    if i == 0:
        expected = 0
    else:
        expected = 1
        temp = i
        while (temp & 1) == 0:
            expected += 1
            temp >>= 1

    if result != expected:
        errors += 1
        if errors <= 5:
            print(f"ERROR: ffs8({i:08b}) = {result}, expected {expected}")

if errors == 0:
    print("All 256 test cases passed!")
else:
    print(f"FAILED: {errors} errors")

print(f"Magnitude: {sum(t.abs().sum().item() for t in weights.values()):.0f}")
print(f"Parameters: {sum(t.numel() for t in weights.values())}")
model.py ADDED
@@ -0,0 +1,33 @@
import torch
from safetensors.torch import load_file

def load_model(path='model.safetensors'):
    return load_file(path)

def ffs8(bits, w):
    """Find first set bit (1-indexed). Returns 0 if no bits set."""
    inp = torch.tensor([float(b) for b in bits])

    p = []
    for i in range(8):
        pi = int((inp @ w[f'p{i}.weight'].T + w[f'p{i}.bias'] >= 0).item())
        p.append(pi)

    f0 = 1 if (p[0] or p[2] or p[4] or p[6]) else 0
    f1 = 1 if (p[1] or p[2] or p[5] or p[6]) else 0
    f2 = 1 if (p[3] or p[4] or p[5] or p[6]) else 0
    f3 = p[7]

    return f3, f2, f1, f0

if __name__ == '__main__':
    w = load_model()
    print('FFS8 selected tests:')
    print('Input    | FFS | f3f2f1f0')
    print('---------+-----+---------')
    test_vals = [0x00, 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x06, 0xF0, 0xFF]
    for val in test_vals:
        bits = [(val >> j) & 1 for j in range(8)]
        f3, f2, f1, f0 = ffs8(bits, w)
        pos = 8*f3 + 4*f2 + 2*f1 + f0
        print(f'{val:08b} | {pos} | {f3}{f2}{f1}{f0}')
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:151d4a3c0c7b1f4588febe905fb8be5a61d131467ed333d4f3ef7b5927839355
size 1328