Datasets:
Fix empty nodes with ID < 1 (e.g., 0.1) not being reconstructed
Empty nodes (enhanced dependencies) with decimal IDs less than 1, such as
0.1 or 0.2, were not being included in the reconstructed CoNLL-U output.
These nodes appear before token 1 and represent zero elements like pro-drop
subjects.
The reconstruction code was only handling empty nodes that come after their
parent token (e.g., 22.1 after token 22), but missed nodes with parent ID 0,
which must be inserted before token 1.
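
The grouping step described above can be sketched as follows (a minimal illustration with a hypothetical `group_empty_nodes` helper, not the project's actual code): empty-node IDs are keyed by their integer part, so `0.1` lands under key `0`, which a loop over tokens 1..n never visits — hence the missing nodes.

```python
def group_empty_nodes(empty_ids):
    """Group decimal empty-node IDs (as strings) by their integer parent token."""
    positions = {}
    for node_id in empty_ids:
        # '0.1' -> parent 0, '22.1' -> parent 22
        parent = int(node_id.split('.')[0])
        positions.setdefault(parent, []).append(node_id)
    return positions

groups = group_empty_nodes(['0.1', '0.2', '22.1'])
# Nodes under key 0 must be emitted before token 1; key 22 after token 22.
```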
Fixes:
- Template reconstruction: Insert 0.x empty nodes before token loop
- Validation reconstruction: Same fix for consistency
- Documentation: Added section 7 explaining the issue and fix
Affected treebank:
- ca_ancora: 445 sentences with 0.x empty nodes now validate perfectly
Validation results:
- ca_ancora: 16,678/16,678 sentences (100% success)
- fr_gsd, en_ewt, it_isdt: Still passing (47,131 sentences)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
```diff
@@ -104,6 +104,23 @@ def example_to_conllu(example: Dict[str, Any], upos_names: List[str] = None) ->
             empty_node_positions[int(parent)] = []
         empty_node_positions[int(parent)].append(empty_node)
 
+    # Insert empty nodes that come before token 1 (e.g., 0.1, 0.2)
+    if 0 in empty_node_positions:
+        for empty_node in empty_node_positions[0]:
+            en_fields = [
+                empty_node.get('id', '_'),
+                empty_node.get('form', '_'),
+                empty_node.get('lemma', '_'),
+                empty_node.get('upos', '_'),
+                empty_node.get('xpos') or '_',
+                empty_node.get('feats') or '_',
+                empty_node.get('head', '_'),
+                empty_node.get('deprel', '_'),
+                empty_node.get('deps') or '_',
+                empty_node.get('misc') or '_',
+            ]
+            lines.append('\t'.join(en_fields))
+
     # Build token lines with MWTs and empty nodes
     token_idx = 1
     for i in range(len(example['tokens'])):
@@ -136,7 +153,7 @@ def example_to_conllu(example: Dict[str, Any], upos_names: List[str] = None) ->
             ]
             lines.append('\t'.join(fields))
 
-        # Insert empty nodes after token if needed
+        # Insert empty nodes after token if needed (e.g., 22.1 after token 22)
         if token_idx in empty_node_positions:
            for empty_node in empty_node_positions[token_idx]:
                en_fields = [
```
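To make the fixed ordering concrete, here is a toy sketch of the emission order after the patch (illustrative names, not the repository's API): nodes grouped under key 0 are flushed before the token loop starts, and `k.x` nodes are emitted immediately after token `k`.

```python
def emit_order(tokens, empty_node_positions):
    """Return the order of ID lines emitted for a sentence."""
    lines = []
    for node_id in empty_node_positions.get(0, []):
        lines.append(node_id)              # 0.x lines go before token 1
    for token_idx, _form in enumerate(tokens, start=1):
        lines.append(str(token_idx))       # the token line itself
        for node_id in empty_node_positions.get(token_idx, []):
            lines.append(node_id)          # k.x lines go right after token k
    return lines

order = emit_order(['No', 'crec'], {0: ['0.1'], 2: ['2.1']})
# → ['0.1', '1', '2', '2.1']
```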
````diff
@@ -125,6 +125,32 @@ The encoding line is not part of any sentence's metadata.
 
 **Status**: ⚠️ Accepted loss - encoding declarations are informational only and don't affect linguistic data.
 
+## 7. Empty Nodes Before Token 1
+
+**Issue**: Empty nodes (enhanced dependencies) with decimal IDs less than 1 (like `0.1`, `0.2`) must be inserted before the first token, not after.
+
+**Example**:
+```conllu
+# sent_id = CESS-CAT-A-19981201-124-s7B
+# text = No crec que la nostra vida corri riscos...
+0.1	_	_	PRON	p	_	_	_	2:nsubj	ArgTem=arg0:agt|Entity=(...)
+1	No	no	ADV	rn	Polarity=Neg	2	advmod	2:advmod	_
+```
+
+The empty node `0.1` comes BEFORE token 1, not after token 0 (which doesn't exist).
+
+**Affected Treebanks**:
+- `ca_ancora` (Catalan-AnCora): 445 sentences with empty nodes at position 0.x
+- Any treebank using empty nodes for pro-drop subjects or other zero elements
+
+**Workaround**: Special handling in reconstruction code to insert empty nodes with ID < 1 before the token loop starts.
+
+**Implementation**:
+- Template: Lines 128-144 in `templates/universal_dependencies.tmpl`
+- Validation: Lines 107-122 in `05_validate_parquet.py`
+
+**Status**: ✅ Fixed - empty nodes with any ID (including < 1) are now correctly reconstructed
+
 ## Summary of Limitations
 
 ### ✅ Fully Fixed
@@ -132,6 +158,7 @@ The encoding line is not part of any sentence's metadata.
 2. Duplicate metadata keys
 3. Metadata keys without values
 4. Empty metadata values
+5. Empty nodes with ID < 1 (0.x positions)
 
 ### ⚠️ Known Acceptable Limitations
 1. File-level comments (encoding declarations) - not sentence-level data
````
```diff
@@ -125,6 +125,24 @@ def example_to_conllu(example: Dict, upos_names: Optional[list] = None) -> str:
         except (ValueError, KeyError):
             pass
 
+    # Insert empty nodes that come before token 1 (e.g., 0.1, 0.2)
+    for node_id in sorted(empty_nodes_dict.keys()):
+        if node_id < 1:
+            for empty_node in empty_nodes_dict[node_id]:
+                en_fields = [
+                    empty_node.get('id', '_'),
+                    empty_node.get('form', '_'),
+                    empty_node.get('lemma', '_'),
+                    empty_node.get('upos', '_'),
+                    empty_node.get('xpos') or '_',
+                    empty_node.get('feats') or '_',
+                    empty_node.get('head', '_'),
+                    empty_node.get('deprel', '_'),
+                    empty_node.get('deps') or '_',
+                    empty_node.get('misc') or '_',
+                ]
+                lines.append('\t'.join(en_fields))
+
     # Build token lines
     token_idx = 1
     for i in range(len(example['tokens'])):
@@ -155,7 +173,7 @@ def example_to_conllu(example: Dict, upos_names: Optional[list] = None) -> str:
             ]
             lines.append('\t'.join(fields))
 
-        # Insert empty nodes after this token if needed
+        # Insert empty nodes after this token if needed (e.g., 22.1 after token 22)
         for node_id in sorted(empty_nodes_dict.keys()):
             if int(node_id) == token_idx:
                 for empty_node in empty_nodes_dict[node_id]:
```
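A possible sanity check for reconstructed output (an assumed helper, not part of `05_validate_parquet.py`) is to compare ID-column entries as (integer part, decimal part) pairs, under which `0.1 < 1 < 22 < 22.1`; MWT range IDs like `1-2` are not handled in this sketch.

```python
def id_key(node_id):
    """Sort key for CoNLL-U IDs like '7' or '0.1' (MWT ranges not handled)."""
    major, _, minor = node_id.partition('.')
    return (int(major), int(minor or 0))

def ids_in_order(id_column):
    """True if IDs are strictly increasing in CoNLL-U order."""
    keys = [id_key(i) for i in id_column]
    return all(a < b for a, b in zip(keys, keys[1:]))

# With the fix, 0.1 precedes token 1 and 22.1 follows token 22.
ok = ids_in_order(['0.1', '1', '2', '22', '22.1'])
```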