Datasets:
Graph Neural Networks for Ruby Code Complexity Prediction and Generation: A Systematic Architecture Study
Authors: Tim Lawrenz, with autonomous experimentation by Ratiocinator
Abstract. We present a systematic study of Graph Neural Network (GNN) architectures for two tasks on Ruby Abstract Syntax Trees (ASTs): cyclomatic complexity prediction and code generation via graph autoencoders. Using a dataset of 22,452 Ruby methods, we evaluate five GNN architectures (GCN, GraphSAGE, GAT, GIN, GraphConv) across 40 GPU experiments on RTX 4090 and RTX 2070 SUPER instances. For complexity prediction, a 5-layer GraphSAGE achieves MAE 4.018 (R² = 0.709), a 16% improvement over the 3-layer baseline. GIN ranks second among equal-depth models. For code generation, we find a stark negative result: standard graph autoencoders produce 0% syntactically valid Ruby across all tested architectures, loss functions, and hidden dimensions. A deep-dive analysis reveals that teacher-forced GIN decoders achieve 81% node type accuracy and 99.5% type diversity, yet still produce 0% valid code because 47% of AST elements are literal values (identifiers, strings, numbers) with no learnable representation. This literal value bottleneck — not architectural capacity — is the fundamental barrier to GNN-based code generation. All experiments were orchestrated by Ratiocinator, an autonomous LLM-driven research pipeline. Total compute cost: under $5 USD.
1. Introduction
Graph Neural Networks have emerged as a natural representation for source code, where Abstract Syntax Trees provide a structured graph encoding of program syntax and semantics. Prior work has applied GNNs to tasks including vulnerability detection (Devign; Zhou et al., 2019), code clone detection (ASTNN; Zhang et al., 2019), and type inference. However, two questions remain underexplored:
- Which GNN architecture best predicts code complexity from ASTs? Existing studies typically evaluate one or two architectures without controlled comparisons.
- Can GNN autoencoders generate syntactically valid code? While graph variational autoencoders have shown promise in molecular generation, their application to code synthesis is largely uncharted.
We address both questions through a systematic study on Ruby, a dynamically-typed language whose relatively uniform syntax makes AST analysis tractable. While transformer-based autoregressive models (Codex, CodeLlama, StarCoder) have demonstrated remarkable code generation capabilities through sequential token prediction, GNNs offer a fundamentally different approach: operating directly on the graph structure of parsed ASTs rather than treating code as text. This structural approach could theoretically provide stronger guarantees about syntactic validity and enable more sample-efficient learning of code semantics. Our study tests whether these theoretical advantages translate to practice.
Our contributions are:
- A controlled 5-way architecture comparison for complexity prediction, showing that network depth matters more than width or architecture choice (Section 4.1).
- A comprehensive negative result demonstrating that GNN autoencoders cannot generate valid Ruby code under standard training regimes (Section 4.2).
- A deep-dive analysis revealing the literal value bottleneck: teacher-forced GIN achieves 81% node type accuracy but 0% syntax validity because 47% of AST elements are unrecoverable literals (Section 4.3).
- Evidence that chain decoders suffer severe mode collapse (93% of predictions default to a single type), while teacher forcing restores type diversity without achieving code validity (Section 4.3).
- All experiments were orchestrated by Ratiocinator, an autonomous LLM-driven research pipeline, demonstrating reproducible AI-driven experimentation at under $5 total compute cost (Section 3).
2. Related Work
GNNs for Code Understanding. Allamanis et al. (2018) introduced GNNs for variable misuse detection. CodeBERT (Feng et al., 2020) and GraphCodeBERT (Guo et al., 2021) combine transformer architectures with code structure. Our work differs by focusing on classical GNN variants (GCN, SAGE, GAT, GIN, GraphConv) in a controlled comparison rather than proposing a new architecture.
Autoregressive Code Generation. Transformer-based models (Codex, Chen et al., 2021; StarCoder, Li et al., 2023; CodeLlama, Rozière et al., 2023) treat code as a token sequence and generate it autoregressively. These models achieve remarkable results but lack structural guarantees — they can produce syntactically invalid code because they operate on surface text, not ASTs. GNN-based approaches, operating directly on parsed syntax trees, could theoretically enforce structural validity by construction. Our negative results demonstrate that this theoretical advantage does not materialize in practice.
Graph Autoencoders. Kipf and Welling (2016) proposed variational graph autoencoders for link prediction. Junction Tree VAE (Jin et al., 2018) generates molecular graphs with validity guarantees by decomposing graphs into tree structures. Our tree-aware decoder draws inspiration from this approach but operates on ASTs rather than molecular substructures.
Code Complexity Prediction. McCabe's cyclomatic complexity (1976) is a standard software metric. ML approaches using hand-crafted features (Gill and Kemerer, 1991) have been supplemented by deep learning methods, but few use graph representations of the AST directly.
Automated Research. Our experimental infrastructure, Ratiocinator, belongs to the emerging class of LLM-driven scientific discovery tools alongside systems like The AI Scientist (Lu et al., 2024). Ratiocinator uses large language models to propose hypotheses, generate experiment configurations, and analyze results, while orchestrating GPU compute on ephemeral cloud instances. We demonstrate that such systems can conduct meaningful ablation studies at minimal cost.
3. Experimental Setup
3.1 Dataset
We use 22,452 Ruby methods extracted from open-source repositories, each parsed into an AST with:
- 74-dimensional node features encoding node type (one-hot over 73 known AST types plus one
UNKNOWNtoken). Lexical literals — identifiers, string contents, numeric values — are stripped of their content and mapped to this singleUNKNOWNtoken to bound the feature space. As we show in Section 4.4, this design choice has profound consequences for generation. - Edge attributes (3D): edge type encoding, relative depth, and child index.
- Positional encodings (2D): tree depth and sibling position.
- Labels: McCabe cyclomatic complexity (integer, range 1–200+).
The dataset is split 85/15 into training (19,084) and validation (3,368) sets.
3.2 Models
Complexity Prediction. RubyComplexityGNN applies $L$ message-passing layers followed by global mean pooling and an MLP regressor. We evaluate five convolution operators:
- GCN (Kipf and Welling, 2017): Symmetric normalized adjacency.
- GraphSAGE (Hamilton et al., 2017): Mean aggregation with concatenation.
- GAT (Veličković et al., 2018): Multi-head attention (4 heads).
- GIN (Xu et al., 2019): Sum aggregation with learnable epsilon, designed for maximal expressiveness under the WL test.
- GraphConv (Morris et al., 2019): General message-passing with separate self/neighbor transforms.
Code Generation. ASTAutoencoder encodes the AST into a fixed-size latent vector, then decodes to reconstruct node types and edge structure. We evaluate three loss functions:
- Simple: Node type cross-entropy only (reconstruction of 74D one-hot node features).
- Improved: Node type cross-entropy plus parent prediction cross-entropy, weighted by
type_weightandparent_weighthyperparameters. This provides an explicit structural learning signal by requiring the model to predict each node's parent. - Comprehensive: Similar to improved but with different parent logit normalization.
The decoder supports three edge construction modes:
- Chain: Nodes connected sequentially (baseline, no structural information).
- Teacher-forced: Ground-truth AST edges provided during training.
- Iterative: Decoder predicts edges from node embeddings (no ground truth at inference).
3.3 Infrastructure
All experiments run on single NVIDIA RTX 4090 GPUs (24 GB VRAM) provisioned via Vast.ai. Training uses Adam optimizer (lr=0.001), batch size 32, and 50 epochs (complexity) or 30 epochs (generation). The Ratiocinator pipeline handles instance provisioning, code deployment via Git, dependency installation, training execution, metric collection, and instance cleanup.
3.4 Experiment Design
| Track | Task | Arms | Varied Parameters | GPU |
|---|---|---|---|---|
| 1 | Complexity prediction | 8 | Architecture, depth, width | RTX 4090 (Vast.ai) |
| 2 | Generation (chain decoder) | 7 | Architecture, loss function, width | RTX 4090 (Vast.ai) |
| 4 | Generation (tree-aware decoder) | 6 | Edge mode, architecture, loss | RTX 4090 (Vast.ai) |
| 5 | GIN deep dive + qualitative | 5 | Hidden dim, depth, edge mode | RTX 2070 SUPER (local) |
| Baseline | Complexity (autonomous) | 18 | Random seed variance | RTX 4090 (Vast.ai) |
Additionally, 18 runs from the autonomous Ratiocinator coordinator (3 iterations × ~6 successful arms) provide baseline variance data under identical SAGE/64/3 configuration, yielding MAE μ = 4.745, σ = 0.073.
4. Results
4.1 Track 1: Architecture Comparison for Complexity Prediction
| Architecture | Hidden Dim | Layers | Val MAE ↓ | Val MSE | R² |
|---|---|---|---|---|---|
| SAGE | 64 | 5 | 4.018 | 54.37 | 0.709 |
| GIN | 64 | 3 | 4.589 | 69.28 | 0.629 |
| GAT (wide) | 128 | 3 | 4.662 | 72.36 | 0.612 |
| SAGE (baseline) | 64 | 3 | 4.782 | 68.07 | 0.635 |
| GraphConv | 64 | 3 | 4.804 | 68.14 | 0.635 |
| SAGE (wide) | 128 | 3 | 4.863 | 68.15 | 0.635 |
| GAT | 64 | 3 | 4.952 | 73.19 | 0.608 |
| GCN | 64 | 3 | 5.321 | 81.61 | 0.563 |
Key findings:
Depth dominates. The 5-layer SAGE achieves MAE 4.018, a 16.0% relative improvement over the 3-layer baseline (4.782). This is far outside the baseline variance band (σ = 0.073 from 18 replicate runs).
Width does not help. Doubling the hidden dimension from 64 to 128 (SAGE-wide) produces no improvement (4.863 vs 4.782), suggesting the representational bottleneck is in message-passing depth, not per-layer capacity.
Architecture ranking (at 3 layers): GIN > SAGE ≈ GraphConv > GAT > GCN. GIN's theoretical advantage under the Weisfeiler-Leman (WL) graph isomorphism test — which measures a GNN's ability to distinguish non-isomorphic graphs by iteratively hashing neighborhood multisets — translates to a practical 4% improvement over SAGE. GIN's injective sum aggregation preserves the full multiset of neighbor features, while SAGE's mean aggregation and GCN's normalized averaging lose information. GCN, lacking learnable aggregation weights, performs worst with 11% higher MAE.
GAT underperforms expectations. Despite attention being theoretically more expressive, GAT ranks sixth (including the wide variant at MAE 4.662). We hypothesize that the relatively uniform AST structure (most nodes have 2–4 children) does not benefit from attention-based neighbor weighting. GAT also ran slowest; the
gat-widearm timed out on Vast.ai (1200s budget) and was completed locally (50 epochs, ~300s on RTX 2070 SUPER).
4.2 Track 2: Generation Failure Analysis
| Decoder Conv | Hidden Dim | Loss Function | Syntactic Validity |
|---|---|---|---|
| GAT | 256 | improved | 0% |
| GAT | 512 | improved | 0% |
| SAGE | 256 | improved | 0% |
| GIN | 256 | improved | 0% |
| GCN | 256 | improved | 0% |
Every configuration achieves 0% syntactic validity. Validation loss converges to approximately 7.7 across all variants, indicating the model learns non-trivial representations but cannot reconstruct valid ASTs. Both the improved loss (node type CE + parent prediction CE) and the simple loss (node type CE only) were tested; neither produced valid output. The comprehensive loss variant failed to train on two arms due to numerical instability.
We hypothesize three contributing factors:
Token vocabulary size. Ruby ASTs use ~200 node types. With 74-dimensional features, the decoder must recover discrete types from continuous embeddings — a fundamentally lossy reconstruction.
Structural coherence. Valid Ruby requires nested structures (begin/end blocks, do/end, def/end) that span arbitrary distances in the graph. The chain decoder connects nodes sequentially, destroying the tree topology.
Loss function mismatch. Cross-entropy on node types does not penalize structural invalidity. A syntactically-aware loss would need to evaluate the generated AST against Ruby's grammar, which is combinatorially expensive.
4.3 Track 4: Tree-Aware Decoder Topology
Remote experiments (Vast.ai RTX 4090, batch size 4096):
| Edge Mode | Conv | Heuristic Validity* | Val Loss |
|---|---|---|---|
| chain | GAT | 0% | 7.715 |
| teacher_forced | GAT | 0% | 7.706 |
| iterative | GAT | 0% | 7.765 |
| teacher_forced | SAGE | 0% | 7.799 |
| teacher_forced | GIN | 7% | 8.384 |
*Heuristic validity: >2 unique predicted node types per sample.
The initial remote results showed teacher-forced GIN as the sole configuration with non-zero heuristic validity. To investigate this signal, we conducted a deep-dive analysis on local GPU (RTX 2070 SUPER) with smaller batch sizes (32) enabling better convergence:
Local deep-dive (RTX 2070 SUPER, batch size 32, 30 epochs):
| Config | Hidden Dim | Layers | Val Loss | Heuristic Valid | Type Acc | Syntax Valid |
|---|---|---|---|---|---|---|
| tf-gin-128 | 128 | 3 | 3.871 | 97.0% | 81.4% | 0% |
| tf-gin-256 | 256 | 3 | 3.890 | 97.0% | 81.3% | 0% |
| tf-gin-512 | 512 | 3 | 3.833 | 96.5% | 81.8% | 0% |
| tf-gin-256-deep | 256 | 5 | 3.759 | 99.5% | 81.1% | 0% |
| chain-gin-256 (control) | 256 | 3 | 5.413 | 4.0% | 48.2% | 0% |
With proper convergence, teacher-forced GIN achieves 99.5% heuristic validity and 81% node type accuracy — yet 0% real syntactic validity when reconstructed code is checked against a Ruby parser. This paradox reveals the core failure mode:
4.4 The Literal Value Bottleneck
Analysis of 500 validation samples shows the AST element distribution:
| Category | Count | Percentage |
|---|---|---|
| Typed AST nodes (def, send, args, ...) | 15,395 | 53.2% |
| Literal values (identifiers, strings, numbers, nil) | 13,534 | 46.8% |
All literal values — method names, variable names, string contents, numeric literals, and nil sentinels — are encoded as UNKNOWN (type index 73) in the 74-dimensional one-hot feature vector. The model has no mechanism to predict or recover these values.
Qualitative example. For the Ruby method:
def call(storage)
new(storage).call
end
The AST contains 12 elements: 6 typed nodes (def, args, arg, send, send, lvar) and 6 literals ("call", "storage", nil, "new", "storage", "call"). The teacher-forced GIN decoder achieves 100% accuracy on all 12 elements — correctly predicting all 6 structural types and correctly predicting UNKNOWN for all 6 literals. Yet the reconstructed AST cannot be unparsed to valid Ruby because the literal values are irrecoverable.
Mode collapse in chain decoders. Without ground-truth edges, the chain decoder exhibits severe mode collapse: 92.7% of all predicted tokens are UNKNOWN, with only def (3.6%) and send (3.0%) appearing as secondary predictions. Average type accuracy drops from 81% (teacher-forced) to 48% (chain), and the average number of unique predicted types falls from 8.6 to 1.6.
Dimension invariance. Hidden dimensions of 128, 256, and 512 produce nearly identical results (type accuracy 81.3–81.8%, heuristic validity 96.5–97.0%), confirming that the bottleneck is not model capacity but the information-theoretic gap in the input representation.
Depth helps marginally. A 5-layer decoder improves heuristic validity from 97% to 99.5% and reduces val_loss from 3.89 to 3.76, consistent with the depth-over-width finding in Track 1.
5. Discussion
The Representation Gap
Our results reveal a representation gap between understanding and generation in GNN-based code models. For complexity prediction (a graph-level regression task), GNNs perform well — a 5-layer SAGE explains 71% of variance in cyclomatic complexity. But for generation (requiring node-level reconstruction of discrete types and literal values), the same representations fail catastrophically.
This gap has two components:
- Structural: Chain decoders destroy the tree topology needed for valid ASTs. Teacher forcing eliminates this component, restoring 81% type accuracy.
- Lexical: The 74D one-hot feature representation encodes node types but discards node values. Nearly half of all AST elements are literals (method names, variable names, strings) that become the undifferentiated
UNKNOWNtoken. No amount of architectural improvement can recover information that was never encoded.
This mirrors findings in NLP, where masked language models excel at classification but require autoregressive decoding for generation. However, for GNNs on code, the problem is more fundamental: it is not merely a decoding strategy issue but an input representation deficiency. Autoregressive text models operate on the full token vocabulary; GNN autoencoders operate on a lossy projection that discards lexical content.
Implications for GNN-Based Code Generation
Our findings suggest that viable GNN code generation would require:
- Literal value prediction heads: Separate output heads for identifier names (from a vocabulary or copy mechanism), string contents, and numeric values — similar to pointer networks in sequence-to-sequence models.
- Hybrid architectures: GNN encoders for structural understanding combined with autoregressive or grammar-constrained decoders for sequential output — analogous to the tree2seq approach.
- Grammar-aware decoding: Constrained decoders that enforce Ruby's BNF grammar during generation, eliminating syntactically impossible outputs by construction.
The fact that GNNs achieve R² = 0.71 for complexity prediction demonstrates they learn meaningful code representations. The challenge is extracting that understanding into valid sequential output.
Depth vs. Width
The stark depth-over-width result (5-layer SAGE improves 16%; 128-dim SAGE improves 0%) has practical implications. For code ASTs with depths of 10–30, a 3-layer GNN can only aggregate information from a 3-hop neighborhood. Deeper networks capture cross-branch dependencies (e.g., a variable defined in one branch and used in another) that directly relate to complexity.
Why GIN for Generation?
GIN's success as the sole architecture achieving non-zero heuristic validity (both on Vast.ai and locally) likely stems from its injective aggregation function. The sum aggregation with learnable epsilon preserves the full multiset of neighbor features — meaning it can distinguish structurally similar but semantically different AST subtrees. For example, a def node with children [args, send, send] is distinguishable from one with [args, send, lvar], while mean-aggregating architectures (SAGE) or attention-weighted architectures (GAT) may conflate these. This is precisely the Weisfeiler-Leman (WL) graph isomorphism test advantage: GIN is provably as powerful as the 1-WL test in distinguishing non-isomorphic graphs, while GCN and GraphSAGE are strictly weaker.
The practical significance: with teacher forcing providing the correct tree structure, GIN's superior expressiveness enables it to predict node types with 81% accuracy versus the chain decoder's 48% — a 33 percentage point gap that demonstrates the value of structural supervision combined with expressive aggregation.
Limitations
- Single language. Ruby results may not transfer to languages with different AST structures (e.g., Python's more uniform indentation-based ASTs or Java's verbose type system).
- Fixed hyperparameters. We did not tune learning rate, dropout, or batch size per architecture. The Vast.ai experiments used batch size 4096 (constrained by the METRICS protocol and single-script evaluation), while local deep-dive used batch size 32 — this difference affected convergence (val_loss 8.4 vs 3.9) and explains the discrepancy between the 7% remote heuristic validity and the 97% local result.
- No cross-validation. Results are from a single 85/15 train/val split, though the 18 baseline replicates (σ = 0.073) provide confidence in the complexity prediction findings.
- Heuristic validity metric on remote runs. The Vast.ai experiments used
unique_types > 2as a proxy for syntactic validity. The local deep-dive revealed this dramatically overestimates actual code validity (99.5% heuristic vs 0% syntax). Future work should always include real parser-based syntax checking.
6. Conclusion
We conducted a systematic study of five GNN architectures for Ruby code complexity prediction and generation across 40 GPU experiments on two hardware platforms. Our findings are:
- For complexity prediction, go deeper: A 5-layer GraphSAGE (MAE 4.018, R² 0.709) outperforms all 3-layer variants by 16%, while doubling width provides no benefit. This result is 9.9σ significant against the baseline variance.
- GNN autoencoders cannot generate valid code: Zero syntactic validity across 15+ configurations spanning five architectures, three loss functions, three decoder edge modes, and four hidden dimensions.
- The literal value bottleneck is the root cause: Teacher-forced GIN achieves 81% node type accuracy and 99.5% type diversity, but 0% syntax validity because 47% of AST elements are literal values (identifiers, strings, numbers) with no learnable representation. The failure is not in model capacity or architecture but in the input encoding.
- Chain decoders suffer mode collapse: Without structural supervision, 93% of predictions default to a single type. Teacher forcing is necessary for meaningful predictions but insufficient for valid code.
- Architecture expressiveness matters: GIN, the most expressive architecture under the WL framework, consistently outperforms alternatives in both complexity prediction (best 3-layer MAE) and generation (sole non-zero heuristic validity).
These results suggest that future work on GNN-based code generation should focus on (a) enriched input representations that encode literal values alongside structural types, (b) hybrid architectures combining GNN encoding with autoregressive or grammar-constrained decoding, and (c) copy mechanisms or pointer networks that can reproduce identifier names from the input graph. The strong performance on complexity prediction (R² = 0.71) confirms that GNNs learn meaningful code representations — the challenge is building decoders that can reconstruct the full richness of code from these representations.
Reproducibility
All code is available at github.com/timlawrenz/jubilant-palm-tree (branch: experiment/ratiocinator-gnn-study). Experiments were orchestrated by Ratiocinator (github.com/timlawrenz/ratiocinator) using declarative YAML specifications. Total compute cost: approximately $5 USD on Vast.ai RTX 4090 instances.
Appendix A: Baseline Variance Analysis
The autonomous Ratiocinator coordinator ran 3 iterations of 8 arms each under identical SAGE/64/3 configuration (due to an environment variable propagation bug that was subsequently fixed). This unintentional ablation provides 18 independent replicates:
| Statistic | Value |
|---|---|
| Successful runs | 18 |
| MAE mean | 4.745 |
| MAE std | 0.073 |
| MAE range | 4.622 – 4.962 |
| R² mean | 0.635 |
| R² range | 0.627 – 0.638 |
The 5-layer SAGE result (MAE 4.018) is 9.9 standard deviations below this baseline mean, confirming its statistical significance.
Appendix B: Compute Cost Breakdown
| Experiment | Arms | Successful | Hardware | Approx. Cost |
|---|---|---|---|---|
| Autonomous research (3 iter) | 24 | 18 | RTX 4090 (Vast.ai) | $1.50 |
| Architecture comparison | 8 | 7 | RTX 4090 (Vast.ai) | $1.20 |
| Generation analysis | 7 | 5 | RTX 4090 (Vast.ai) | $0.80 |
| Decoder topology | 6 | 5 | RTX 4090 (Vast.ai) | $0.70 |
| GIN deep dive | 5 | 5 | RTX 2070 SUPER (local) | ~$0.10* |
| GAT-wide completion | 1 | 1 | RTX 2070 SUPER (local) | ~$0.02* |
| Total | 51 | 41 | ~$4.32 |
*Local GPU cost estimated at $0.10/hr electricity.
Appendix C: Failed Arms
| Arm | Failure Mode | Cause |
|---|---|---|
| gat-wide (Track 1, Vast.ai) | Timeout (exit 124) | GAT with 128-dim exceeded 1200s budget; completed locally |
| simple-loss-gat (Track 2) | SSH timeout | Instance never became SSH-ready |
| comprehensive-loss-gat (Track 2) | Exit 1 | Numerical instability in comprehensive loss |
| teacher-forced-gat-comprehensive (Track 4) | Exit 1 | Numerical instability in comprehensive loss |
All failures are infrastructure-related, timeout-related, or due to the comprehensive loss function's numerical instability. No experiments produced anomalous metrics.
Appendix D: Qualitative Reconstruction Examples
Perfect type reconstruction (teacher-forced GIN, 5-layer, 256-dim):
Original Ruby:
def call(storage)
new(storage).call
end
| Node | Ground Truth | Predicted | Match |
|---|---|---|---|
| 0 | def | def | ✓ |
| 1 | "call" → UNKNOWN | UNKNOWN | ✓ |
| 2 | args | args | ✓ |
| 3 | arg | arg | ✓ |
| 4 | "storage" → UNKNOWN | UNKNOWN | ✓ |
| 5 | send | send | ✓ |
| 6 | send | send | ✓ |
| 7 | nil → UNKNOWN | UNKNOWN | ✓ |
| 8 | "new" → UNKNOWN | UNKNOWN | ✓ |
| 9 | lvar | lvar | ✓ |
| 10 | "storage" → UNKNOWN | UNKNOWN | ✓ |
| 11 | "call" → UNKNOWN | UNKNOWN | ✓ |
12/12 correct (100%). The model perfectly reconstructs the AST type skeleton, but the 6 UNKNOWN nodes (method names call, new; variable storage; nil literal) carry no recoverable content, making code generation impossible.
Most common type confusions (from 200 evaluation samples):
| Ground Truth → Predicted | Count | Semantic Explanation |
|---|---|---|
| str → lvar | 8 | String literals confused with local variables (both leaf nodes) |
| send → const | 5 | Method calls confused with constant references (both name-bearing) |
| const → send | 5 | Symmetric confusion: names look similar in AST context |
| args → UNKNOWN | 3 | Argument lists confused with literal values |
All confusions are between semantically related node types — the model learns meaningful AST semantics but struggles with fine-grained distinctions between similar roles.