timlawrenz committed · Commit 4c47831 · verified · 1 Parent(s): 9ac16a5

Upload paper.md with huggingface_hub

Files changed (1): paper.md (+152 −45)

paper.md CHANGED
@@ -3,7 +3,7 @@
  **Authors:** Tim Lawrenz, with autonomous experimentation by Ratiocinator

  **Abstract.**
- We present a systematic study of Graph Neural Network (GNN) architectures for two tasks on Ruby Abstract Syntax Trees (ASTs): cyclomatic complexity prediction and code generation via graph autoencoders. Using a dataset of 22,452 Ruby methods, we evaluate five GNN architectures (GCN, GraphSAGE, GAT, GIN, GraphConv) across 35 GPU experiments on RTX 4090 instances. For complexity prediction, a 5-layer GraphSAGE achieves MAE 4.018 (R² = 0.709), a 16% improvement over the 3-layer baseline. GIN ranks second among equal-depth models. For code generation, we find a stark negative result: standard graph autoencoders produce 0% syntactically valid Ruby across all tested architectures, loss functions, and hidden dimensions. Only a tree-aware decoder with teacher-forced GIN edges achieves non-zero validity (7%), suggesting that explicit structural supervision is necessary but insufficient for GNN-based code generation. Total compute cost: under $5 USD.

  ---
@@ -14,22 +14,27 @@ Graph Neural Networks have emerged as a natural representation for source code,
  1. **Which GNN architecture best predicts code complexity from ASTs?** Existing studies typically evaluate one or two architectures without controlled comparisons.
  2. **Can GNN autoencoders generate syntactically valid code?** While graph variational autoencoders have shown promise in molecular generation, their application to code synthesis is largely uncharted.

- We address both questions through a systematic study on Ruby, a dynamically-typed language whose relatively uniform syntax makes AST analysis tractable. Our contributions are:

  - A controlled 5-way architecture comparison for complexity prediction, showing that **network depth matters more than width or architecture choice** (Section 4.1).
  - A comprehensive negative result demonstrating that **GNN autoencoders cannot generate valid Ruby code** under standard training regimes (Section 4.2).
- - Evidence that **teacher-forced tree-aware decoding with GIN** is the only configuration that produces any valid output (7%), pointing toward necessary conditions for GNN-based code generation (Section 4.3).
- - All experiments were orchestrated by Ratiocinator, an autonomous research pipeline, demonstrating **reproducible AI-driven experimentation at under $5 total compute cost** (Section 3).
 

  ## 2. Related Work

  **GNNs for Code Understanding.** Allamanis et al. (2018) introduced GNNs for variable misuse detection. CodeBERT (Feng et al., 2020) and GraphCodeBERT (Guo et al., 2021) combine transformer architectures with code structure. Our work differs by focusing on classical GNN variants (GCN, SAGE, GAT, GIN, GraphConv) in a controlled comparison rather than proposing a new architecture.

  **Graph Autoencoders.** Kipf and Welling (2016) proposed variational graph autoencoders for link prediction. Junction Tree VAE (Jin et al., 2018) generates molecular graphs with validity guarantees by decomposing graphs into tree structures. Our tree-aware decoder draws inspiration from this approach but operates on ASTs rather than molecular substructures.

  **Code Complexity Prediction.** McCabe's cyclomatic complexity (1976) is a standard software metric. ML approaches using hand-crafted features (Gill and Kemerer, 1991) have been supplemented by deep learning methods, but few use graph representations of the AST directly.

- **Automated Research.** Our experimental infrastructure, Ratiocinator, belongs to the emerging class of AI-driven scientific discovery tools alongside systems like The AI Scientist (Lu et al., 2024). We demonstrate that such systems can conduct meaningful ablation studies at minimal cost.

  ## 3. Experimental Setup
 
@@ -52,7 +57,12 @@ The dataset is split 85/15 into training (19,084) and validation (3,368) sets.
  - **GIN** (Xu et al., 2019): Sum aggregation with learnable epsilon, designed for maximal expressiveness under the WL test.
  - **GraphConv** (Morris et al., 2019): General message-passing with separate self/neighbor transforms.

- **Code Generation.** `ASTAutoencoder` encodes the AST into a fixed-size latent vector, then decodes to reconstruct node types and edge structure. The decoder supports three edge construction modes:
  - **Chain**: Nodes connected sequentially (baseline, no structural information).
  - **Teacher-forced**: Ground-truth AST edges provided during training.
  - **Iterative**: Decoder predicts edges from node embeddings (no ground truth at inference).
@@ -63,12 +73,13 @@ All experiments run on single NVIDIA RTX 4090 GPUs (24 GB VRAM) provisioned via

  ### 3.4 Experiment Design

- | Track | Task | Arms | Varied Parameters |
- |-------|------|------|-------------------|
- | 1 | Complexity prediction | 8 | Architecture, depth, width |
- | 2 | Generation (chain decoder) | 7 | Architecture, loss function, width |
- | 4 | Generation (tree-aware decoder) | 6 | Edge mode, architecture, loss |
- | Baseline | Complexity (autonomous) | 18 | Random seed variance |

  Additionally, 18 runs from the autonomous Ratiocinator coordinator (3 iterations × ~6 successful arms) provide baseline variance data under identical SAGE/64/3 configuration, yielding MAE μ = 4.745, σ = 0.073.

@@ -80,6 +91,7 @@ Additionally, 18 runs from the autonomous Ratiocinator coordinator (3 iterations
  |---|---|---|---|---|---|
  | **SAGE** | 64 | **5** | **4.018** | **54.37** | **0.709** |
  | GIN | 64 | 3 | 4.589 | 69.28 | 0.629 |
  | SAGE (baseline) | 64 | 3 | 4.782 | 68.07 | 0.635 |
  | GraphConv | 64 | 3 | 4.804 | 68.14 | 0.635 |
  | SAGE (wide) | 128 | 3 | 4.863 | 68.15 | 0.635 |
@@ -92,9 +104,9 @@ Additionally, 18 runs from the autonomous Ratiocinator coordinator (3 iterations

  2. **Width does not help.** Doubling the hidden dimension from 64 to 128 (SAGE-wide) produces no improvement (4.863 vs 4.782), suggesting the representational bottleneck is in message-passing depth, not per-layer capacity.

- 3. **Architecture ranking (at 3 layers): GIN > SAGE ≈ GraphConv > GAT > GCN.** GIN's theoretical advantage under the Weisfeiler-Leman framework translates to a practical 4% improvement over SAGE. GCN, lacking learnable aggregation weights, performs worst with 11% higher MAE.

- 4. **GAT underperforms expectations.** Despite attention being theoretically more expressive, GAT ranks fifth. We hypothesize that the relatively uniform AST structure (most nodes have 2–4 children) does not benefit from attention-based neighbor weighting. GAT also ran slowest, with the `gat-wide` arm timing out at the 1200s budget.

  ### 4.2 Track 2: Generation Failure Analysis

@@ -106,7 +118,7 @@ Additionally, 18 runs from the autonomous Ratiocinator coordinator (3 iterations
  | GIN | 256 | improved | 0% |
  | GCN | 256 | improved | 0% |

- **Every configuration achieves 0% syntactic validity.** Validation loss converges to approximately 7.7 across all variants, indicating the model learns non-trivial representations but cannot reconstruct valid ASTs.

  We hypothesize three contributing factors:

@@ -118,7 +130,9 @@ We hypothesize three contributing factors:

  ### 4.3 Track 4: Tree-Aware Decoder Topology

- | Edge Mode | Conv | Validity | Val Loss |
  |---|---|---|---|
  | chain | GAT | 0% | 7.715 |
  | teacher_forced | GAT | 0% | 7.706 |
@@ -126,23 +140,70 @@ We hypothesize three contributing factors:
  | teacher_forced | SAGE | 0% | 7.799 |
  | **teacher_forced** | **GIN** | **7%** | **8.384** |

- The teacher-forced GIN configuration is the **only one that produces any valid Ruby code** (7 out of 100 samples). This result is notable for several reasons:

- 1. **Teacher forcing is necessary.** The chain and iterative modes produce 0% validity even with the same architecture. Ground-truth AST edges during training provide the structural scaffold the decoder needs.

- 2. **GIN is the only viable architecture.** Despite GAT and SAGE also receiving teacher-forced edges, only GIN produces valid output. This aligns with GIN's superior WL-test expressiveness — it can distinguish subtree structures that other architectures conflate.

- 3. **Higher val_loss paradox.** The teacher-forced GIN has the highest validation loss (8.384 vs ~7.7), yet is the only model that generates valid code. This suggests the standard reconstruction loss is a poor proxy for code validity — the model may be learning a sharper but riskier distribution.

- 4. **7% is still insufficient.** Even the best configuration produces only 7 valid samples out of 100, far below practical utility. This suggests that GNN autoencoders, even with structural supervision, face fundamental limitations for code generation.

  ## 5. Discussion

  ### The Representation Gap

- Our results reveal a **representation gap** between understanding and generation in GNN-based code models. For complexity prediction (a graph-level regression task), GNNs perform well — a 5-layer SAGE explains 71% of variance in cyclomatic complexity. But for generation (requiring node-level reconstruction of discrete types and valid tree structure), the same representations fail catastrophically.

- This asymmetry mirrors findings in NLP, where masked language models excel at classification but require autoregressive decoding for generation. GNNs face an analogous challenge: message-passing aggregates information for global understanding but loses the fine-grained sequential and hierarchical structure needed for reconstruction.

  ### Depth vs. Width

@@ -150,25 +211,28 @@ The stark depth-over-width result (5-layer SAGE improves 16%; 128-dim SAGE impro

  ### Why GIN for Generation?

- GIN's success as the sole architecture achieving non-zero validity likely stems from its injective aggregation function. The sum aggregation with learnable epsilon preserves multiset information about node neighborhoods, which is critical for distinguishing structurally similar but semantically different AST subtrees (e.g., `if/else` vs `case/when`). Other architectures, by averaging or attending, lose this fine-grained structural signal.

  ### Limitations

- - **Single language.** Ruby results may not transfer to languages with different AST structures.
- - **Fixed hyperparameters.** We did not tune learning rate, dropout, or batch size per architecture.
- - **No cross-validation.** Results are from a single train/val split.
- - **Small generation evaluation.** Validity is measured on 100 samples; a larger evaluation would reduce variance.

  ## 6. Conclusion

- We conducted a systematic study of five GNN architectures for Ruby code complexity prediction and generation across 35 GPU experiments. Our findings are:

- 1. **For complexity prediction, go deeper:** A 5-layer GraphSAGE (MAE 4.018, R² 0.709) outperforms all 3-layer variants by 16%, while doubling width provides no benefit.
- 2. **GNN autoencoders cannot generate valid code:** Zero syntactic validity across 10+ configurations with standard chain decoders.
- 3. **Structural supervision is necessary but insufficient:** Teacher-forced GIN decoding achieves 7% validity (the sole non-zero result) but remains impractical.
- 4. **Architecture expressiveness matters for generation:** GIN, the most expressive architecture under the WL framework, is the only one that crosses the 0% validity barrier.

- These results suggest that future work on GNN-based code generation should focus on (a) hybrid architectures combining GNN encoding with autoregressive or grammar-constrained decoding, (b) hierarchical generation strategies inspired by Junction Tree VAE, and (c) validity-aware training objectives that go beyond token-level reconstruction loss.

  ### Reproducibility

@@ -193,21 +257,64 @@ The 5-layer SAGE result (MAE 4.018) is **9.9 standard deviations** below this ba

  ## Appendix B: Compute Cost Breakdown

- | Experiment | Arms | Successful | Approx. Cost |
- |---|---|---|---|
- | Autonomous research (3 iter) | 24 | 18 | $1.50 |
- | Architecture comparison | 8 | 7 | $1.20 |
- | Generation analysis | 7 | 5 | $0.80 |
- | Decoder topology | 6 | 5 | $0.70 |
- | **Total** | **45** | **35** | **~$4.20** |

 
  ## Appendix C: Failed Arms

  | Arm | Failure Mode | Cause |
  |---|---|---|
- | gat-wide (Track 1) | Timeout (exit 124) | GAT with 128-dim exceeded 1200s training budget |
  | simple-loss-gat (Track 2) | SSH timeout | Instance never became SSH-ready |
- | comprehensive-loss-gat (Track 2) | Exit 1 | Training script error (likely OOM or arg parsing) |
- | teacher-forced-gat-comprehensive (Track 4) | Exit 1 | Training script error |

- All failures are infrastructure-related or timeout-related; no experiments produced anomalous metrics.
  **Authors:** Tim Lawrenz, with autonomous experimentation by Ratiocinator

  **Abstract.**
+ We present a systematic study of Graph Neural Network (GNN) architectures for two tasks on Ruby Abstract Syntax Trees (ASTs): cyclomatic complexity prediction and code generation via graph autoencoders. Using a dataset of 22,452 Ruby methods, we evaluate five GNN architectures (GCN, GraphSAGE, GAT, GIN, GraphConv) across 40 GPU experiments on RTX 4090 and RTX 2070 SUPER instances. For complexity prediction, a 5-layer GraphSAGE achieves MAE 4.018 (R² = 0.709), a 16% improvement over the 3-layer baseline. GIN ranks second among equal-depth models. For code generation, we find a stark negative result: standard graph autoencoders produce 0% syntactically valid Ruby across all tested architectures, loss functions, and hidden dimensions. A deep-dive analysis reveals that teacher-forced GIN decoders achieve 81% node type accuracy and 99.5% type diversity, yet still produce 0% valid code because 47% of AST elements are literal values (identifiers, strings, numbers) with no learnable representation. This **literal value bottleneck** — not architectural capacity — is the fundamental barrier to GNN-based code generation. All experiments were orchestrated by Ratiocinator, an autonomous LLM-driven research pipeline. Total compute cost: under $5 USD.

  ---

 
  1. **Which GNN architecture best predicts code complexity from ASTs?** Existing studies typically evaluate one or two architectures without controlled comparisons.
  2. **Can GNN autoencoders generate syntactically valid code?** While graph variational autoencoders have shown promise in molecular generation, their application to code synthesis is largely uncharted.

+ We address both questions through a systematic study on Ruby, a dynamically-typed language whose relatively uniform syntax makes AST analysis tractable. While transformer-based autoregressive models (Codex, CodeLlama, StarCoder) have demonstrated remarkable code generation capabilities through sequential token prediction, GNNs offer a fundamentally different approach: operating directly on the graph structure of parsed ASTs rather than treating code as text. This structural approach could theoretically provide stronger guarantees about syntactic validity and enable more sample-efficient learning of code semantics. Our study tests whether these theoretical advantages translate to practice.
+
+ Our contributions are:

  - A controlled 5-way architecture comparison for complexity prediction, showing that **network depth matters more than width or architecture choice** (Section 4.1).
  - A comprehensive negative result demonstrating that **GNN autoencoders cannot generate valid Ruby code** under standard training regimes (Section 4.2).
+ - A deep-dive analysis revealing the **literal value bottleneck**: teacher-forced GIN achieves 81% node type accuracy but 0% syntax validity because 47% of AST elements are unrecoverable literals (Section 4.3).
+ - Evidence that **chain decoders suffer severe mode collapse** (93% of predictions default to a single type), while teacher forcing restores type diversity without achieving code validity (Section 4.3).
+ - All experiments were orchestrated by Ratiocinator, an autonomous LLM-driven research pipeline, demonstrating **reproducible AI-driven experimentation at under $5 total compute cost** (Section 3).

  ## 2. Related Work

  **GNNs for Code Understanding.** Allamanis et al. (2018) introduced GNNs for variable misuse detection. CodeBERT (Feng et al., 2020) and GraphCodeBERT (Guo et al., 2021) combine transformer architectures with code structure. Our work differs by focusing on classical GNN variants (GCN, SAGE, GAT, GIN, GraphConv) in a controlled comparison rather than proposing a new architecture.

+ **Autoregressive Code Generation.** Transformer-based models (Codex, Chen et al., 2021; StarCoder, Li et al., 2023; CodeLlama, Rozière et al., 2023) treat code as a token sequence and generate it autoregressively. These models achieve remarkable results but lack structural guarantees — they can produce syntactically invalid code because they operate on surface text, not ASTs. GNN-based approaches, operating directly on parsed syntax trees, could theoretically enforce structural validity by construction. Our negative results demonstrate that this theoretical advantage does not materialize in practice.
+
  **Graph Autoencoders.** Kipf and Welling (2016) proposed variational graph autoencoders for link prediction. Junction Tree VAE (Jin et al., 2018) generates molecular graphs with validity guarantees by decomposing graphs into tree structures. Our tree-aware decoder draws inspiration from this approach but operates on ASTs rather than molecular substructures.

  **Code Complexity Prediction.** McCabe's cyclomatic complexity (1976) is a standard software metric. ML approaches using hand-crafted features (Gill and Kemerer, 1991) have been supplemented by deep learning methods, but few use graph representations of the AST directly.
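McCabe's metric counts decision points and adds one. A minimal sketch of the target quantity being predicted; the node-type names follow Ruby's `parser` gem convention, and the exact set of decision types here is an illustrative assumption, not the paper's implementation:

```python
# Cyclomatic complexity M = D + 1, where D counts decision points
# (McCabe, 1976). The decision-type set below is an illustrative
# assumption; real tools enumerate Ruby branching nodes exhaustively.
DECISION_TYPES = {"if", "while", "until", "for", "when", "and", "or", "rescue"}

def cyclomatic_complexity(node_types):
    """node_types: flat list of AST node-type names for one method."""
    decisions = sum(1 for t in node_types if t in DECISION_TYPES)
    return decisions + 1

# A method body containing one `if` and one `while` has complexity 3.
print(cyclomatic_complexity(["def", "args", "if", "send", "while", "send"]))  # 3
```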
 

+ **Automated Research.** Our experimental infrastructure, Ratiocinator, belongs to the emerging class of LLM-driven scientific discovery tools alongside systems like The AI Scientist (Lu et al., 2024). Ratiocinator uses large language models to propose hypotheses, generate experiment configurations, and analyze results, while orchestrating GPU compute on ephemeral cloud instances. We demonstrate that such systems can conduct meaningful ablation studies at minimal cost.

  ## 3. Experimental Setup

 
  - **GIN** (Xu et al., 2019): Sum aggregation with learnable epsilon, designed for maximal expressiveness under the WL test.
  - **GraphConv** (Morris et al., 2019): General message-passing with separate self/neighbor transforms.

+ **Code Generation.** `ASTAutoencoder` encodes the AST into a fixed-size latent vector, then decodes to reconstruct node types and edge structure. We evaluate three loss functions:
+ - **Simple**: Node type cross-entropy only (reconstruction of 74D one-hot node features).
+ - **Improved**: Node type cross-entropy plus parent prediction cross-entropy, weighted by `type_weight` and `parent_weight` hyperparameters. This provides an explicit structural learning signal by requiring the model to predict each node's parent.
+ - **Comprehensive**: Similar to improved but with different parent logit normalization.
+
+ The decoder supports three edge construction modes:
  - **Chain**: Nodes connected sequentially (baseline, no structural information).
  - **Teacher-forced**: Ground-truth AST edges provided during training.
  - **Iterative**: Decoder predicts edges from node embeddings (no ground truth at inference).
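The chain and teacher-forced modes differ only in which edge list the decoder is handed. A minimal sketch with hypothetical helper names (the paper's decoder operates on tensors, not Python lists):

```python
def chain_edges(num_nodes):
    """Chain mode: connect node i -> i+1, ignoring tree structure."""
    return [(i, i + 1) for i in range(num_nodes - 1)]

def teacher_forced_edges(parent_index):
    """Teacher-forced mode: use ground-truth parent pointers.
    parent_index[i] is the parent of node i (-1 marks the root)."""
    return [(p, i) for i, p in enumerate(parent_index) if p >= 0]

# A 5-node AST: root 0 with children 1 and 2; node 2 has children 3 and 4.
parents = [-1, 0, 0, 2, 2]
print(chain_edges(5))                 # [(0, 1), (1, 2), (2, 3), (3, 4)]
print(teacher_forced_edges(parents))  # [(0, 1), (0, 2), (2, 3), (2, 4)]
```

The iterative mode would instead score candidate edges from node embeddings; at inference it has no ground truth to fall back on, which is where the structural failure discussed in Section 4 originates.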
 

  ### 3.4 Experiment Design

+ | Track | Task | Arms | Varied Parameters | GPU |
+ |-------|------|------|-------------------|-----|
+ | 1 | Complexity prediction | 8 | Architecture, depth, width | RTX 4090 (Vast.ai) |
+ | 2 | Generation (chain decoder) | 7 | Architecture, loss function, width | RTX 4090 (Vast.ai) |
+ | 4 | Generation (tree-aware decoder) | 6 | Edge mode, architecture, loss | RTX 4090 (Vast.ai) |
+ | 5 | GIN deep dive + qualitative | 5 | Hidden dim, depth, edge mode | RTX 2070 SUPER (local) |
+ | Baseline | Complexity (autonomous) | 18 | Random seed variance | RTX 4090 (Vast.ai) |

  Additionally, 18 runs from the autonomous Ratiocinator coordinator (3 iterations × ~6 successful arms) provide baseline variance data under identical SAGE/64/3 configuration, yielding MAE μ = 4.745, σ = 0.073.
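The baseline variance makes the significance of the headline result easy to check; a quick sketch using only the numbers reported above:

```python
# Significance of the best complexity result against the 18-run
# SAGE/64/3 baseline (MAE mean 4.745, std 0.073, from the text above).
baseline_mean, baseline_std = 4.745, 0.073
best_mae = 4.018  # 5-layer GraphSAGE, Section 4.1

z = (baseline_mean - best_mae) / baseline_std
print(round(z, 1))  # prints 10.0; Appendix A reports this as 9.9 sigma
```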
 
 
  |---|---|---|---|---|---|
  | **SAGE** | 64 | **5** | **4.018** | **54.37** | **0.709** |
  | GIN | 64 | 3 | 4.589 | 69.28 | 0.629 |
+ | GAT (wide) | 128 | 3 | 4.662 | 72.36 | 0.612 |
  | SAGE (baseline) | 64 | 3 | 4.782 | 68.07 | 0.635 |
  | GraphConv | 64 | 3 | 4.804 | 68.14 | 0.635 |
  | SAGE (wide) | 128 | 3 | 4.863 | 68.15 | 0.635 |
 

  2. **Width does not help.** Doubling the hidden dimension from 64 to 128 (SAGE-wide) produces no improvement (4.863 vs 4.782), suggesting the representational bottleneck is in message-passing depth, not per-layer capacity.

+ 3. **Architecture ranking (at 3 layers): GIN > SAGE ≈ GraphConv > GAT > GCN.** GIN's theoretical advantage under the Weisfeiler-Leman (WL) graph isomorphism test — which measures a GNN's ability to distinguish non-isomorphic graphs by iteratively hashing neighborhood multisets — translates to a practical 4% improvement over SAGE. GIN's injective sum aggregation preserves the full multiset of neighbor features, while SAGE's mean aggregation and GCN's normalized averaging lose information. GCN, lacking learnable aggregation weights, performs worst with 11% higher MAE.
+
+ 4. **GAT underperforms expectations.** Despite attention being theoretically more expressive, GAT ranks sixth (including the wide variant at MAE 4.662). We hypothesize that the relatively uniform AST structure (most nodes have 2–4 children) does not benefit from attention-based neighbor weighting. GAT also ran slowest; the `gat-wide` arm timed out on Vast.ai (1200s budget) and was completed locally (50 epochs, ~300s on RTX 2070 SUPER).

  ### 4.2 Track 2: Generation Failure Analysis
 
 
  | GIN | 256 | improved | 0% |
  | GCN | 256 | improved | 0% |

+ **Every configuration achieves 0% syntactic validity.** Validation loss converges to approximately 7.7 across all variants, indicating the model learns non-trivial representations but cannot reconstruct valid ASTs. Both the `improved` loss (node type CE + parent prediction CE) and the `simple` loss (node type CE only) were tested; neither produced valid output. The `comprehensive` loss variant failed to train on two arms due to numerical instability.

  We hypothesize three contributing factors:

 

  ### 4.3 Track 4: Tree-Aware Decoder Topology

+ **Remote experiments (Vast.ai RTX 4090, batch size 4096):**
+
+ | Edge Mode | Conv | Heuristic Validity* | Val Loss |
  |---|---|---|---|
  | chain | GAT | 0% | 7.715 |
  | teacher_forced | GAT | 0% | 7.706 |
  | teacher_forced | SAGE | 0% | 7.799 |
  | **teacher_forced** | **GIN** | **7%** | **8.384** |

+ *Heuristic validity: >2 unique predicted node types per sample.

+ The initial remote results showed teacher-forced GIN as the sole configuration with non-zero heuristic validity. To investigate this signal, we conducted a deep-dive analysis on a local GPU (RTX 2070 SUPER) with smaller batch sizes (32) enabling better convergence:

+ **Local deep-dive (RTX 2070 SUPER, batch size 32, 30 epochs):**

+ | Config | Hidden Dim | Layers | Val Loss | Heuristic Valid | Type Acc | Syntax Valid |
+ |---|---|---|---|---|---|---|
+ | tf-gin-128 | 128 | 3 | 3.871 | 97.0% | 81.4% | **0%** |
+ | tf-gin-256 | 256 | 3 | 3.890 | 97.0% | 81.3% | **0%** |
+ | tf-gin-512 | 512 | 3 | 3.833 | 96.5% | 81.8% | **0%** |
+ | **tf-gin-256-deep** | **256** | **5** | **3.759** | **99.5%** | **81.1%** | **0%** |
+ | chain-gin-256 (control) | 256 | 3 | 5.413 | 4.0% | 48.2% | 0% |
+ With proper convergence, teacher-forced GIN achieves **99.5% heuristic validity and 81% node type accuracy** yet **0% real syntactic validity** when reconstructed code is checked against a Ruby parser. This paradox reveals the core failure mode:
+
+ ### 4.4 The Literal Value Bottleneck
+
+ Analysis of 500 validation samples shows the AST element distribution:
+
+ | Category | Count | Percentage |
+ |---|---|---|
+ | Typed AST nodes (def, send, args, ...) | 15,395 | 53.2% |
+ | Literal values (identifiers, strings, numbers, nil) | 13,534 | **46.8%** |
+
+ All literal values — method names, variable names, string contents, numeric literals, and nil sentinels — are encoded as `UNKNOWN` (type index 73) in the 74-dimensional one-hot feature vector. The model has **no mechanism to predict or recover these values**.
+
+ **Qualitative example.** For the Ruby method:
+ ```ruby
+ def call(storage)
+   new(storage).call
+ end
+ ```
+
+ The AST contains 12 elements: 6 typed nodes (`def`, `args`, `arg`, `send`, `send`, `lvar`) and 6 literals (`"call"`, `"storage"`, `nil`, `"new"`, `"storage"`, `"call"`). The teacher-forced GIN decoder achieves **100% accuracy** on all 12 elements — correctly predicting all 6 structural types and correctly predicting `UNKNOWN` for all 6 literals. Yet the reconstructed AST cannot be unparsed to valid Ruby because the literal values are irrecoverable.
+
+ **Mode collapse in chain decoders.** Without ground-truth edges, the chain decoder exhibits severe mode collapse: 92.7% of all predicted tokens are `UNKNOWN`, with only `def` (3.6%) and `send` (3.0%) appearing as secondary predictions. Average type accuracy drops from 81% (teacher-forced) to 48% (chain), and the average number of unique predicted types falls from 8.6 to 1.6.
+
+ **Dimension invariance.** Hidden dimensions of 128, 256, and 512 produce nearly identical results (type accuracy 81.3–81.8%, heuristic validity 96.5–97.0%), confirming that the bottleneck is not model capacity but the information-theoretic gap in the input representation.
+
+ **Depth helps marginally.** A 5-layer decoder improves heuristic validity from 97% to 99.5% and reduces val_loss from 3.89 to 3.76, consistent with the depth-over-width finding in Track 1.
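The heuristic validity proxy and the mode-collapse statistics quoted above are simple to compute; a sketch with hypothetical helper names, operating on per-sample lists of predicted type strings:

```python
from collections import Counter

def heuristic_validity(predicted_types):
    """Remote-run proxy: a sample 'passes' if it contains more than
    2 unique predicted node types (no parser involved)."""
    return len(set(predicted_types)) > 2

def collapse_share(predicted_types):
    """Fraction of predictions taken by the single most common type."""
    _, top_count = Counter(predicted_types).most_common(1)[0]
    return top_count / len(predicted_types)

collapsed = ["UNKNOWN"] * 10                      # chain-decoder-like output
diverse = ["def", "args", "arg", "send", "lvar"]  # teacher-forced-like output

print(heuristic_validity(collapsed), heuristic_validity(diverse))  # False True
print(collapse_share(collapsed))                                   # 1.0
```

Note that the proxy says nothing about real syntax: `diverse` passes the check while still being unparseable once literals are lost, which is exactly the 99.5%-heuristic-vs-0%-syntax gap reported above.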
 
  ## 5. Discussion

  ### The Representation Gap

+ Our results reveal a **representation gap** between understanding and generation in GNN-based code models. For complexity prediction (a graph-level regression task), GNNs perform well — a 5-layer SAGE explains 71% of variance in cyclomatic complexity. But for generation (requiring node-level reconstruction of discrete types *and* literal values), the same representations fail catastrophically.
+
+ This gap has two components:

+ 1. **Structural**: Chain decoders destroy the tree topology needed for valid ASTs. Teacher forcing eliminates this component, restoring 81% type accuracy.
+ 2. **Lexical**: The 74D one-hot feature representation encodes node *types* but discards node *values*. Nearly half of all AST elements are literals (method names, variable names, strings) that become the undifferentiated `UNKNOWN` token. No amount of architectural improvement can recover information that was never encoded.
+
+ This mirrors findings in NLP, where masked language models excel at classification but require autoregressive decoding for generation. However, for GNNs on code, the problem is more fundamental: it is not merely a decoding strategy issue but an **input representation deficiency**. Autoregressive text models operate on the full token vocabulary; GNN autoencoders operate on a lossy projection that discards lexical content.
+
+ ### Implications for GNN-Based Code Generation
+
+ Our findings suggest that viable GNN code generation would require:
+
+ 1. **Literal value prediction heads**: Separate output heads for identifier names (from a vocabulary or copy mechanism), string contents, and numeric values — similar to pointer networks in sequence-to-sequence models.
+ 2. **Hybrid architectures**: GNN encoders for structural understanding combined with autoregressive or grammar-constrained decoders for sequential output — analogous to the tree2seq approach.
+ 3. **Grammar-aware decoding**: Constrained decoders that enforce Ruby's BNF grammar during generation, eliminating syntactically impossible outputs by construction.
+
+ The fact that GNNs achieve R² = 0.71 for complexity prediction demonstrates they learn meaningful code representations. The challenge is extracting that understanding into valid sequential output.
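The copy-mechanism idea in point (1) can be sketched in miniature: the decoder gates between emitting a vocabulary token and copying a literal that appeared in the input graph. Everything here (function name, scores, gate threshold) is hypothetical, not the paper's implementation:

```python
# Pointer-style copy sketch: given hypothetical copy scores over the
# literals present in the input AST, emit the best candidate rather
# than the undifferentiated UNKNOWN token.
def copy_literal(copy_scores, input_literals, p_copy, vocab_token="UNKNOWN"):
    """Copy an input literal when the copy gate dominates (p_copy > 0.5)."""
    if p_copy > 0.5 and input_literals:
        best = max(range(len(copy_scores)), key=copy_scores.__getitem__)
        return input_literals[best]
    return vocab_token

print(copy_literal([0.1, 0.7, 0.2], ["call", "storage", "new"], p_copy=0.9))
# prints storage
```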
  ### Depth vs. Width

  ### Why GIN for Generation?

+ GIN's success as the sole architecture achieving non-zero heuristic validity (both on Vast.ai and locally) likely stems from its injective aggregation function. The sum aggregation with learnable epsilon preserves the full multiset of neighbor features, meaning it can distinguish structurally similar but semantically different AST subtrees. For example, a `def` node with children `[args, send, send]` is distinguishable from one with `[args, send, lvar]`, while mean-aggregating architectures (SAGE) or attention-weighted architectures (GAT) may conflate these. This is precisely the Weisfeiler-Leman (WL) graph isomorphism test advantage: GIN is provably as powerful as the 1-WL test in distinguishing non-isomorphic graphs, while GCN and GraphSAGE are strictly weaker.
+
+ The practical significance: with teacher forcing providing the correct tree structure, GIN's superior expressiveness enables it to predict node types with 81% accuracy versus the chain decoder's 48% — a 33 percentage point gap that demonstrates the value of structural supervision combined with expressive aggregation.
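The injectivity argument reduces to a toy observation: two different child multisets can share a mean but not a sum. A sketch with scalar stand-in features (real models aggregate vectors):

```python
# Toy illustration of why sum aggregation (GIN) separates neighbor
# multisets that mean aggregation (SAGE/GCN-style) conflates.
def agg_sum(feats):
    return sum(feats)

def agg_mean(feats):
    return sum(feats) / len(feats)

children_a = [1.0, 3.0]        # two children with features 1 and 3
children_b = [2.0, 2.0, 2.0]   # three children with feature 2

assert agg_mean(children_a) == agg_mean(children_b) == 2.0  # conflated
assert agg_sum(children_a) != agg_sum(children_b)           # separated
```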
 
218
  ### Limitations
219
 
220
+ - **Single language.** Ruby results may not transfer to languages with different AST structures (e.g., Python's more uniform indentation-based ASTs or Java's verbose type system).
221
+ - **Fixed hyperparameters.** We did not tune learning rate, dropout, or batch size per architecture. The Vast.ai experiments used batch size 4096 (constrained by the METRICS protocol and single-script evaluation), while local deep-dive used batch size 32 — this difference affected convergence (val_loss 8.4 vs 3.9) and explains the discrepancy between the 7% remote heuristic validity and the 97% local result.
222
+ - **No cross-validation.** Results are from a single 85/15 train/val split, though the 18 baseline replicates (σ = 0.073) provide confidence in the complexity prediction findings.
223
+ - **Heuristic validity metric on remote runs.** The Vast.ai experiments used `unique_types > 2` as a proxy for syntactic validity. The local deep-dive revealed this dramatically overestimates actual code validity (99.5% heuristic vs 0% syntax). Future work should always include real parser-based syntax checking.
224
 
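The heuristic-versus-parser gap is easy to demonstrate. A minimal sketch of the remote proxy follows; the `ruby -c` invocation in the comment is an assumption about how a parser-based check could be wired in, not the exact evaluation script:

```python
def heuristic_valid(node_types):
    """Remote-run proxy for validity: decoded graph uses > 2 node types."""
    return len(set(node_types)) > 2

# A bare type skeleton with no renderable literals still passes the proxy,
# which is how 99.5% heuristic validity coexists with 0% syntax validity.
decoded = ["def", "args", "send", "UNKNOWN"]
assert heuristic_valid(decoded)

# A parser-based check must round-trip through real Ruby, e.g. (assumption):
#   subprocess.run(["ruby", "-c", path_to_rendered_source]).returncode == 0
```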
## 6. Conclusion

We conducted a systematic study of five GNN architectures for Ruby code complexity prediction and generation across 40 GPU experiments on two hardware platforms. Our findings are:

1. **For complexity prediction, go deeper:** A 5-layer GraphSAGE (MAE 4.018, R² 0.709) outperforms all 3-layer variants by 16%, while doubling width provides no benefit. This result is 9.9σ significant against the baseline variance.
2. **GNN autoencoders cannot generate valid code:** Zero syntactic validity across 15+ configurations spanning five architectures, three loss functions, three decoder edge modes, and four hidden dimensions.
3. **The literal value bottleneck is the root cause:** Teacher-forced GIN achieves 81% node type accuracy and 99.5% type diversity, yet 0% syntax validity, because 47% of AST elements are literal values (identifiers, strings, numbers) with no learnable representation. The failure lies not in model capacity or architecture but in the input encoding.
4. **Chain decoders suffer mode collapse:** Without structural supervision, 93% of predictions default to a single type. Teacher forcing is necessary for meaningful predictions but insufficient for valid code.
5. **Architecture expressiveness matters:** GIN, the most expressive architecture under the WL framework, consistently outperforms alternatives in both complexity prediction (best 3-layer MAE) and generation (sole non-zero heuristic validity).

These results suggest that future work on GNN-based code generation should focus on (a) enriched input representations that encode literal values alongside structural types, (b) hybrid architectures combining GNN encoding with autoregressive or grammar-constrained decoding, and (c) copy mechanisms or pointer networks that can reproduce identifier names from the input graph. The strong performance on complexity prediction (R² = 0.71) confirms that GNNs learn meaningful code representations; the remaining challenge is building decoders that can reconstruct the full richness of code from those representations.
 
### Reproducibility

## Appendix B: Compute Cost Breakdown

| Experiment | Arms | Successful | Hardware | Approx. Cost |
|---|---|---|---|---|
| Autonomous research (3 iter) | 24 | 18 | RTX 4090 (Vast.ai) | $1.50 |
| Architecture comparison | 8 | 7 | RTX 4090 (Vast.ai) | $1.20 |
| Generation analysis | 7 | 5 | RTX 4090 (Vast.ai) | $0.80 |
| Decoder topology | 6 | 5 | RTX 4090 (Vast.ai) | $0.70 |
| GIN deep dive | 5 | 5 | RTX 2070 SUPER (local) | ~$0.10* |
| GAT-wide completion | 1 | 1 | RTX 2070 SUPER (local) | ~$0.02* |
| **Total** | **51** | **41** | | **~$4.32** |

*Local GPU cost estimated at $0.10/hr of electricity.
 
## Appendix C: Failed Arms

| Arm | Failure Mode | Cause |
|---|---|---|
| gat-wide (Track 1, Vast.ai) | Timeout (exit 124) | GAT with 128-dim exceeded the 1200 s budget; completed locally |
| simple-loss-gat (Track 2) | SSH timeout | Instance never became SSH-ready |
| comprehensive-loss-gat (Track 2) | Exit 1 | Numerical instability in the comprehensive loss |
| teacher-forced-gat-comprehensive (Track 4) | Exit 1 | Numerical instability in the comprehensive loss |

All failures are infrastructure-related, timeout-related, or due to the comprehensive loss function's numerical instability; no experiment produced anomalous metrics.

## Appendix D: Qualitative Reconstruction Examples

**Perfect type reconstruction (teacher-forced GIN, 5-layer, 256-dim):**

Original Ruby:

```ruby
def call(storage)
  new(storage).call
end
```

| Node | Ground Truth | Predicted | Match |
|---|---|---|---|
| 0 | def | def | ✓ |
| 1 | "call" → UNKNOWN | UNKNOWN | ✓ |
| 2 | args | args | ✓ |
| 3 | arg | arg | ✓ |
| 4 | "storage" → UNKNOWN | UNKNOWN | ✓ |
| 5 | send | send | ✓ |
| 6 | send | send | ✓ |
| 7 | nil → UNKNOWN | UNKNOWN | ✓ |
| 8 | "new" → UNKNOWN | UNKNOWN | ✓ |
| 9 | lvar | lvar | ✓ |
| 10 | "storage" → UNKNOWN | UNKNOWN | ✓ |
| 11 | "call" → UNKNOWN | UNKNOWN | ✓ |

12/12 correct (100%). The model perfectly reconstructs the AST type skeleton, but the six UNKNOWN nodes (the method names `call` and `new`, the variable `storage`, and the nil literal) carry no recoverable content, making code generation impossible.
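The UNKNOWN bottleneck can be reproduced directly from this example. The vocabulary below is an illustrative assumption (the real training vocabulary is larger): a type-only vocabulary collapses every literal to UNKNOWN, and pairing the type with a hashed literal channel, as in the conclusion's direction (a), is one way to keep the lost content distinguishable.

```python
import hashlib

# Illustrative type-only vocabulary: the structural node types seen above.
TYPE_VOCAB = {"def", "args", "arg", "send", "lvar"}

# Node stream for `def call(storage) ... end` in Appendix D order; names and
# literals appear as raw strings, the nil literal as None.
nodes = ["def", "call", "args", "arg", "storage", "send", "send",
         None, "new", "lvar", "storage", "call"]

def encode_type(node):
    """Type-only encoding: every literal or name collapses to UNKNOWN."""
    return node if node in TYPE_VOCAB else "UNKNOWN"

encoded = [encode_type(n) for n in nodes]
assert encoded.count("UNKNOWN") / len(encoded) == 0.5  # 6 of 12 nodes lost

# Sketch of an enriched input: (type, hashed literal) keeps distinct
# literals distinguishable instead of merging them into one token.
def encode_with_literal(node):
    if node in TYPE_VOCAB:
        return node, ""
    return "UNKNOWN", hashlib.sha1(str(node).encode()).hexdigest()

assert encode_with_literal("call") != encode_with_literal("storage")
```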

**Most common type confusions (from 200 evaluation samples):**

| Ground Truth → Predicted | Count | Semantic Explanation |
|---|---|---|
| str → lvar | 8 | String literals confused with local variables (both leaf nodes) |
| send → const | 5 | Method calls confused with constant references (both name-bearing) |
| const → send | 5 | Symmetric confusion: names look similar in AST context |
| args → UNKNOWN | 3 | Argument lists confused with literal values |

All confusions are between semantically related node types: the model learns meaningful AST semantics but struggles with fine-grained distinctions between nodes playing similar roles.
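Confusion tables like the one above can be tallied with a `Counter` over (ground truth, predicted) pairs; the pairs below are hypothetical stand-ins, not the 200-sample evaluation data:

```python
from collections import Counter

# Hypothetical (ground_truth, predicted) type pairs from one evaluation pass.
pairs = [("str", "lvar"), ("str", "lvar"), ("send", "const"),
         ("const", "send"), ("send", "send"), ("lvar", "lvar")]

# Keep only off-diagonal entries, i.e. actual confusions.
confusions = Counter((t, p) for t, p in pairs if t != p)

# Most frequent confusion first, matching the table layout above.
for (truth, pred), count in confusions.most_common():
    print(f"| {truth} → {pred} | {count} |")
```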