WCNegentropy committed
Commit ecee73e · verified · 1 Parent(s): 09435ed

Remove NEW_CODEX_TASK.md - cleanup for OS launch

Files changed (1)
  1. NEW_CODEX_TASK.md +0 -85
NEW_CODEX_TASK.md DELETED
@@ -1,85 +0,0 @@
- # DEPRECATED
-
- All tasks in this file have been implemented (Stages 1–5). The document remains for historical reference only.
-
- Stage 1: Compression Algorithm Implementation
-
- Task 1: Choose Compression Method
-
- Prompt:
-
- Codex: Provide a concise PyTorch-compatible implementation of lossless binary compression and decompression (e.g., RLE, Huffman, or LZ-based) suitable for binary input sequences represented as tensors of bits.
-
- Task 2: Implement Compression Functions
-
- Prompt:
-
- Codex: Implement PyTorch functions compress_bits(input_tensor) and decompress_bits(compressed_tensor) that accept and return PyTorch tensors (dtype=torch.bool or torch.uint8). Ensure the compress → decompress cycle perfectly reconstructs the original data, and include simple unit tests.
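A minimal sketch of the codec Tasks 1–2 ask for, using run-length encoding as the example method; the flat (value, run_length) uint8 pair layout and the 255-run cap are illustrative assumptions, not something the prompt specifies:

```python
import torch

def compress_bits(input_tensor: torch.Tensor) -> torch.Tensor:
    """RLE-compress a 1-D bit tensor into flat (value, run_length) uint8 pairs."""
    bits = input_tensor.to(torch.uint8).flatten().tolist()
    pairs = []
    i = 0
    while i < len(bits):
        run = 1
        # Cap runs at 255 so each run length fits in a single uint8.
        while i + run < len(bits) and bits[i + run] == bits[i] and run < 255:
            run += 1
        pairs.extend((bits[i], run))
        i += run
    return torch.tensor(pairs, dtype=torch.uint8)

def decompress_bits(compressed_tensor: torch.Tensor) -> torch.Tensor:
    """Invert compress_bits, reconstructing the original bits exactly."""
    out = []
    for value, run in compressed_tensor.view(-1, 2).tolist():
        out.extend([value] * run)
    return torch.tensor(out, dtype=torch.uint8)

# Simple unit test: the compress -> decompress cycle must be lossless.
original = torch.randint(0, 2, (1024,), dtype=torch.uint8)
assert torch.equal(decompress_bits(compress_bits(original)), original)
```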
-
-
-
- Stage 2: Encoder/Decoder Integration
-
- Task 3: Add Compression to Encoder Input
-
- Prompt:
-
- Codex: Modify BitTransformerLM’s input pipeline by wrapping the existing model forward pass with a forward_compressed(bits_tensor) method. This method should decompress incoming compressed bit tensors before embedding. Ensure it returns outputs identical to those for the equivalent uncompressed inputs, for verification.
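A hedged sketch of the Task 3 wrapper, written as a free function because BitTransformerLM's actual forward signature isn't shown here; it reuses the codec from the Task 2 sketch:

```python
def forward_compressed(model, compressed_bits: torch.Tensor):
    """Decompress the incoming bit tensor, then run the normal forward pass."""
    raw_bits = decompress_bits(compressed_bits)  # codec from the Task 2 sketch
    return model(raw_bits)

# Verification per the prompt: both paths should agree exactly.
# assert torch.equal(forward_compressed(model, compress_bits(x)), model(x))
```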
-
- Task 4: Add Decompression to Decoder Output
-
- Prompt:
-
- Codex: Implement a PyTorch-compatible function model_output_decompress(output_bits_tensor) to decompress bit sequences output by BitTransformerLM. Integrate this function as an optional post-processing step after the model’s bitstream generation.
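The output side mirrors the input side. In this sketch, generate_bits is a hypothetical stand-in for whatever generation entry point the model actually exposes:

```python
def model_output_decompress(output_bits_tensor: torch.Tensor) -> torch.Tensor:
    """Expand a compressed generated bitstream back to raw bits."""
    return decompress_bits(output_bits_tensor)

def generate_bits(model, prompt_bits: torch.Tensor, decompress_output: bool = False):
    out = model(prompt_bits)  # placeholder for the real generation call
    return model_output_decompress(out) if decompress_output else out
```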
-
-
-
- Stage 3: Training and Evaluation Enhancements
-
- Task 5: Toggle Compression During Training
-
- Prompt:
-
- Codex: Modify the existing training loop to randomly compress input bit sequences with a configurable probability (compress_prob=0.5). Ensure that when compression is on, inputs are compressed and decompressed transparently, and when off, inputs bypass compression.
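A sketch of the Task 5 gating; the model call and loss target are placeholders, and only the compress_prob logic is the point:

```python
import random

def training_step(model, bits: torch.Tensor, loss_fn, compress_prob: float = 0.5):
    if random.random() < compress_prob:
        # Transparent round-trip: the model still sees raw bits, but the
        # codec path is exercised on every compressed step.
        bits = decompress_bits(compress_bits(bits))
    return loss_fn(model(bits), bits)
```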
-
- Task 6: Evaluate Compressed vs Raw Performance
-
- Prompt:
-
- Codex: Extend the current training evaluation metrics to separately track loss, accuracy, and compression ratio for both compressed and raw sequences. Log these metrics clearly in the training output.
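One way to keep the two streams' statistics separate; the key names are illustrative:

```python
from collections import defaultdict

history = defaultdict(list)

def log_eval(loss, accuracy, raw_len, compressed_len, used_compression: bool):
    tag = "compressed" if used_compression else "raw"
    history[f"{tag}/loss"].append(float(loss))
    history[f"{tag}/accuracy"].append(float(accuracy))
    if used_compression:
        history["compressed/ratio"].append(compressed_len / raw_len)
    print({key: values[-1] for key, values in history.items()})  # per-step log line
```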
-
-
-
- Stage 4: Advanced Integration (Optional)
-
- Task 7: Multi-task Training for Compression Learning
-
- Prompt:
-
- Codex: Implement an optional multi-task training mode where the model occasionally sees compressed inputs directly, without decompression. Add a separate loss calculation to monitor its performance on these compressed inputs, tracked and logged separately from the normal next-bit prediction loss.
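A sketch of the optional multi-task mode; direct_prob and the loss-dict keys are assumed names:

```python
def multitask_step(model, bits: torch.Tensor, loss_fn, direct_prob: float = 0.25):
    if random.random() < direct_prob:
        # The model sees the compressed stream itself, with no decompression.
        compressed = compress_bits(bits)
        return {"compressed_input_loss": loss_fn(model(compressed), compressed)}
    return {"next_bit_loss": loss_fn(model(bits), bits)}
```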
-
- Task 8: Compression-aware Safety Telemetry
-
- Prompt:
-
- Codex: Adjust the existing BitTransformerLM telemetry (the K, C, and S metrics) to handle compressed sequences appropriately. Modify the telemetry calculations to optionally apply the metrics to decompressed outputs instead of the raw bitstream when compression is enabled.
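A sketch of the telemetry switch; k_metric, c_metric, and s_metric are hypothetical placeholders for the project's real K/C/S implementations, which are not shown in this document:

```python
def telemetry(stream: torch.Tensor, compression_enabled: bool = False) -> dict:
    if compression_enabled:
        # Score the decompressed content rather than the codec's artifact bits.
        stream = decompress_bits(stream)
    # k_metric / c_metric / s_metric are stand-ins, not the real functions.
    return {"K": k_metric(stream), "C": c_metric(stream), "S": s_metric(stream)}
```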
-
-
-
- Stage 5: Dashboard and Runtime Integration
-
- Task 9: Dashboard Compression UI Toggle
-
- Prompt:
-
- Codex: Add a simple UI toggle labeled “Enable Compression” to the existing BitTransformerLM dashboard, controlling whether inputs and outputs are automatically compressed and decompressed. Display compression ratio metrics when enabled.
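A framework-agnostic sketch of the toggle's state and the ratio readout; nothing about the real dashboard's stack is assumed:

```python
dashboard_state = {"compression_enabled": False}

def on_toggle_compression(enabled: bool) -> None:
    dashboard_state["compression_enabled"] = enabled

def compression_ratio_label(raw_len: int, compressed_len: int) -> str:
    return f"Compression ratio: {compressed_len / raw_len:.2f}"
```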
-
- Task 10: Error Handling and User Feedback
-
- Prompt:
-
- Codex: Implement graceful error handling in the dashboard for compression and decompression failures. Provide clear user-facing feedback in the UI if decompression fails, along with suggestions or fallbacks.
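A dashboard-agnostic guard for Task 10; the message wording and the raw-bitstream fallback are illustrative choices:

```python
def safe_decompress(compressed: torch.Tensor):
    try:
        return decompress_bits(compressed), None
    except Exception as exc:
        hint = (f"Decompression failed: {exc}. Check that compression was "
                "enabled for this stream, or fall back to the raw bitstream.")
        return compressed, hint  # unexpanded payload plus user-facing message
```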
-
-
-
- These ten tasks enable incremental, testable integration of binary compression/decompression into BitTransformerLM without fundamentally altering the core transformer model itself.