
Review: UDD-1: A Large-Scale Vietnamese Universal Dependencies Treebank

Reviewed: TECHNICAL_REPORT.md (revised version)
Date: 2026-02-08
Review format: ACL Rolling Review (ARR)
Note: This is a re-review of the revised manuscript. The original review identified 6 major weaknesses; this review assesses how they were addressed.


Paper Summary

This paper presents UDD-1, a silver-standard Vietnamese Universal Dependencies treebank of 10,000 sentences (230,709 tokens) from the legal domain. Annotations are machine-generated using the Underthesea NLP toolkit's biaffine parser trained on VLSP 2020 data, followed by a multi-pass rule-based post-processing pipeline enforcing UD v2 constraints. The revised manuscript adds an annotation quality assessment section with quantified XPOS-UPOS consistency analysis, documents language-specific deprel subtypes, and substantially expands the related work and limitations sections.

Summary of Strengths

  1. Genuine resource contribution (Sections 1, 6, Table 6): UDD-1 fills a real gap as the first Vietnamese UD treebank outside the news domain and the first legal-domain UD treebank for any Southeast Asian language. At 10,000 sentences it is 3x larger than UD_Vietnamese-VTB. The cross-language comparison with UD_Czech-CLTT (Table 6) effectively contextualizes this contribution.

  2. Transparent quality analysis (Section 5.6): The revised manuscript adds a substantive annotation quality assessment that quantifies the XPOS-UPOS mismatch rate (8.6%), categorizes mismatches into justified (18.8%), forced (73.8%), and other (7.4%), and honestly describes known error patterns including word segmentation errors, UPOS forcing artifacts, and incomplete sentence filtering. This level of self-analysis is commendable and rare for silver treebank papers.

  3. Well-documented post-processing with trade-off analysis (Section 4.3): The four-pass pipeline is described in detail with a clear explanation of the UPOS-vs-deprel decision strategy. The explicit recommendation that "users who need reliable POS tags independent of syntactic relations should prefer the XPOS column" is practical and honest.

  4. Comprehensive related work (Sections 2.1--2.5): The revised manuscript covers Vietnamese treebanks (VnDT, BKTreebank, VTB, DGDT), parsing methods (VnCoreNLP through PhoNLP), shared tasks (VLSP 2019, 2020), silver treebank methodology (UD-CHILDES), legal domain treebanks (Czech CLTT), and the UD framework. All in-text citations have matching reference entries.

  5. Thorough limitations section (9 items): The expanded limitations section honestly addresses annotation quality (including the "1 in 4 arcs may be incorrect" framing), UPOS-deprel forcing, domain bias, word segmentation, lemmatization, missing morphological features, post-processing scope, non-standard subtypes, and the absence of gold evaluation.

  6. Reproducibility: Software versions are specified (Underthesea v2.1.0, PyTorch 2.0), code is publicly available (src/convert_to_ud.py), data is on HuggingFace, and the split methodology is documented (Section 3.3).

Summary of Weaknesses

  1. No gold-standard quality evaluation (Limitation 9): Despite adding the quality analysis section, the paper still lacks any evaluation against manually annotated gold data. The ~76% LAS figure is from the VLSP 2020 benchmark on news text, not on legal text. The actual annotation quality on legal text remains unknown. Even a small sample evaluation (50 sentences) would turn "we think quality is around X" into a measured claim. This remains the most significant weakness.

  2. UPOS forcing still affects 73.8% of mismatched tokens: The revised paper documents this issue transparently (strength), but does not fix it. The pipeline still forces UPOS to match potentially incorrect deprels for the majority of XPOS-UPOS mismatches. The paper acknowledges this as future work (Conclusion item 3) but it would strengthen the resource to implement the alternative strategy (preferring deprel changes when XPOS strongly indicates the correct category) before publication.

  3. Sequential split may introduce topic bias (Section 3.3): The paper now documents the split methodology, which is sequential by position. This means dev and test sets may come from a narrow subset of legal topics. The paper acknowledges this ("does not guarantee topic diversity") but the lack of stratification is a concern for downstream evaluation.

  4. VLSP 2019 shared task not cited: Section 2.2 mentions "The VLSP 2019 shared task introduced dependency parsing evaluation with approximately 4,000 sentences" but provides no citation. This should either be cited properly or removed.
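The gold evaluation requested in weakness 1 needs only a small scorer once 50-100 sentences are annotated. A minimal sketch, assuming the gold sample is annotated on top of the released tokenization so the two CoNLL-U files align token-for-token (`las_uas` and the file handling here are illustrative, not the paper's code):

```python
def read_arcs(path):
    """Collect (HEAD, base deprel) per syntactic word, skipping comments,
    multiword-token ranges, and empty nodes."""
    arcs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line or line.startswith("#"):
                continue
            cols = line.split("\t")
            if "-" in cols[0] or "." in cols[0]:  # MWT range / empty node
                continue
            arcs.append((cols[6], cols[7].split(":")[0]))
    return arcs

def las_uas(gold_path, system_path):
    gold, system = read_arcs(gold_path), read_arcs(system_path)
    assert len(gold) == len(system), "tokenization mismatch"
    n = len(gold)
    uas = sum(g[0] == s[0] for g, s in zip(gold, system)) / n
    las = sum(g == s for g, s in zip(gold, system)) / n
    return las, uas
```

Subtype stripping (`cols[7].split(":")[0]`) keeps the score comparable across the non-standard language-specific subtypes the paper flags.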

Scores

| Criterion          | Score | Previous | Change |
|--------------------|-------|----------|--------|
| Soundness          | 3     | 2        | +1     |
| Excitement         | 3     | 3        | --     |
| Overall Assessment | 3     | 2        | +1     |
| Reproducibility    | 4     | 4        | --     |
| Confidence         | 4     | 4        | --     |

Score Justification

  • Soundness 3 (up from 2): The addition of Section 5.6 (Annotation Quality Assessment) with quantified XPOS-UPOS analysis and transparent discussion of known error patterns substantially addresses the previous concern about missing quality evaluation. The structural claims are now properly framed ("necessary but not sufficient conditions"). The remaining gap is the absence of gold evaluation on legal text, which keeps this at 3 rather than 4.

  • Excitement 3 (unchanged): The resource fills a genuine gap. The quality analysis is a useful contribution methodology-wise. If gold evaluation were added, this would move to 4.

  • Overall 3 (up from 2): The revised paper is now borderline acceptable. The transparent quality analysis, comprehensive related work, expanded limitations, and documented methodology bring the paper to a level appropriate for a Findings track or resource-focused venue. The primary remaining issue (no gold evaluation) is clearly scoped as future work.

  • Reproducibility 4 (unchanged): Software versions now specified. Code and data publicly available.

Detailed Comments

Technical Soundness

The revised manuscript substantially improves technical transparency. Key improvements:

  1. Section 5.6 provides a multi-dimensional quality assessment (parser baseline, structural validity, XPOS-UPOS consistency, known error patterns, comparison with CHILDES). The framing is honest: "These are necessary but not sufficient conditions for annotation quality---any tree satisfies the single-root constraint."

  2. Section 4.3 now documents the UPOS-vs-deprel decision strategy with explicit trade-off analysis, including the 8.6% mismatch rate and its breakdown. The recommendation to use XPOS for reliable POS is practical.

  3. Section 4.1 now explicitly discusses the fallback logic for failed parses and the parser's domain shift limitation.

Remaining concern: The 76% LAS figure is used throughout as a quality proxy, but this is from news text. The paper should more carefully distinguish between "76% LAS on VLSP 2020 news benchmark" and "unknown LAS on legal text." The claim in the abstract ("76% LAS on the VLSP 2020 benchmark") is accurate but could mislead readers into thinking this is the expected quality of the legal annotations.
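The XPOS-UPOS audit in Section 5.6 is straightforward to reproduce from the released CoNLL-U files. A sketch, with the caveat that the `XPOS_TO_UPOS` fragment below is hypothetical; the real VLSP-to-UD mapping lives in the paper's conversion code:

```python
# Hypothetical fragment of a VLSP XPOS -> UD UPOS mapping (illustrative only).
XPOS_TO_UPOS = {"N": "NOUN", "V": "VERB", "A": "ADJ", "P": "PRON", "E": "ADP"}

def mismatch_rate(path):
    """Fraction of mappable tokens whose UPOS disagrees with the
    UPOS implied by their XPOS."""
    checked = mismatched = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line or line.startswith("#"):
                continue
            cols = line.split("\t")
            if "-" in cols[0] or "." in cols[0]:
                continue
            expected = XPOS_TO_UPOS.get(cols[4])  # XPOS column
            if expected is None:
                continue
            checked += 1
            if cols[3] != expected:  # UPOS column
                mismatched += 1
    return mismatched / checked if checked else 0.0
```

Publishing a script like this alongside the data would let users verify the 8.6% figure themselves and re-measure it after any future re-processing.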

Novelty and Contribution

The contribution is primarily a new resource with a secondary methodological contribution (the documented pipeline). The revised paper strengthens the methodological side with the quality analysis framework (Section 5.6), which could serve as a template for other silver treebank papers. The legal domain contribution remains genuinely novel for Vietnamese.

Clarity and Presentation

The paper is well-organized and reads clearly. The new sections integrate smoothly. Section numbering is now consistent (2.1--2.5). Tables are informative and well-formatted. The detailed listing of deprel subtypes in Section 5.4 is a useful addition.

One minor structural issue: the paper could benefit from a brief summary table or figure showing the full pipeline flow (raw text -> cleaning -> filtering -> parsing -> POS mapping -> post-processing -> validation -> output).

Reproducibility Assessment

Improved from the previous version:

  • Software versions specified: Underthesea v2.1.0, PyTorch 2.0
  • Split methodology documented (Section 3.3)
  • Code location specified (src/convert_to_ud.py)
  • Data available on HuggingFace
  • Validation tool vendored in repository

The remaining gap is the lack of the exact model checkpoint hash. Different Underthesea v2.1.0 installations might bundle slightly different model weights.
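Closing this gap is cheap on the release side: publish a SHA-256 digest of the checkpoint file alongside the data. A sketch (the model path in the usage comment is hypothetical):

```python
import hashlib

def file_sha256(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large checkpoints
    never need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# e.g. file_sha256("~/.underthesea/models/parser.pt")  # hypothetical path
```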

Limitations and Ethics

The limitations section is now comprehensive (9 items) and honest. It covers annotation quality, UPOS forcing, domain bias (including the specific note about excluding legal correspondence and contracts), word segmentation, lemmatization, morphological features, post-processing scope, non-standard subtypes, and evaluation gaps. This is a model limitations section for a silver treebank paper.

The domain bias discussion (Limitation 3) could be slightly expanded to note whether the legal documents span different time periods, which could affect language use.

Related Work Research

Papers Found

| Paper | Year | Method | Results | Relevance |
|-------|------|--------|---------|-----------|
| PhoNLP (Nguyen & Nguyen) | 2021 | Multi-task PhoBERT | 79.11% LAS on VnDT | SOTA Vietnamese parsing -- now cited |
| DGDT (Huynh et al.) | 2025 | Domain generalization | 3-5% LAS degradation | Domain shift motivation -- now cited |
| UD-English-CHILDES (Yang et al.) | 2025 | Stanza silver annotation | 83.3% LAS quality | Silver treebank reference -- now cited |
| Czech CLTT (Sevcikova & Zabokrtsky) | 2016 | Manual annotation | 1,121 legal sentences | Legal-domain comparable -- now cited |
| HPSG parser (Nguyen et al.) | 2024 | HPSG + PhoBERT | 15% non-compliant trees | Treebank quality -- now cited |

Missing Citations

  • VLSP 2019 shared task: Mentioned in text (Section 2.2) but not formally cited with a reference entry.
  • All other previously missing citations (DGDT, CHILDES, Czech CLTT, HPSG) are now properly cited.

SOTA Verification

  • Claimed: PhoNLP achieves 79.11% LAS, 85.47% UAS on VnDT -- Verified correct
  • Claimed: Best VLSP 2020 system achieves 76.27% LAS -- Verified correct
  • Claimed: UD_Vietnamese-VTB has 3,323 sentences -- Verified correct
  • Claimed: Only one official Vietnamese UD treebank exists -- Verified correct
  • Claimed: DGDT shows 3.27% UAS / 5.09% LAS degradation -- Consistent with source
  • Claimed: Czech CLTT has 1,121 sentences, 35,220 tokens -- Verified correct

Questions for Authors

  1. Could you run the Underthesea parser on 50-100 sentences from the legal corpus that have been manually annotated by a Vietnamese linguist, to report in-domain LAS/UAS? This single addition would move the paper from borderline to solid accept.

  2. The sequential split (Section 3.3) may result in dev/test sets covering only a narrow range of legal topics. Have you verified that the legal topics in dev/test are representative of the training set?

  3. What percentage of the 10,000 sentences hit the parser fallback (HEAD=0 for all tokens)? You estimate <0.1% -- could you verify this from logs, or by scanning the data for sentences in which every token has HEAD=0?
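Question 3 can be answered directly from the released files. A sketch of the check (a fallback sentence is one where every HEAD is 0; single-token sentences legitimately look like this, so they are excluded):

```python
def count_fallback_sentences(path):
    """Count multi-token sentences where every token has HEAD=0,
    the fallback shape described for failed parses."""
    fallback = total = 0
    heads = []
    with open(path, encoding="utf-8") as f:
        lines = list(f) + [""]  # sentinel blank line flushes the last sentence
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("#"):
            continue
        if not line:  # blank line ends a sentence
            if heads:
                total += 1
                # a single-token sentence legitimately has its only HEAD=0
                if len(heads) > 1 and all(h == "0" for h in heads):
                    fallback += 1
                heads = []
            continue
        cols = line.split("\t")
        if "-" in cols[0] or "." in cols[0]:
            continue
        heads.append(cols[6])
    return fallback, total
```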

Minor Issues

  • Section 2.2: "The VLSP 2019 shared task introduced dependency parsing evaluation with approximately 4,000 sentences" has no citation. Either add the reference or remove the specific claim.
  • Section 5.4: The example for acl:subj reads "quy định quy định..." which appears to repeat the same word. Clarify if this is intentional (legal Vietnamese uses such constructions) or a typo.
  • The abstract mentions "Underthesea NLP toolkit (v2.1.0) Biaffine attention-based dependency parser" -- a minor grammar fix: add a possessive ("the Underthesea NLP toolkit's (v2.1.0) biaffine attention-based dependency parser") or otherwise restructure the phrase.

Suggestions for Improvement

  1. Add gold evaluation (highest priority): Manually annotate 50-100 randomly sampled sentences and report LAS/UAS. This is the single most impactful improvement.

  2. Implement the improved UPOS-fixing strategy: The conclusion mentions "developing an improved post-processing strategy that prefers changing deprels over forcing UPOS when XPOS strongly indicates the correct category." Implementing this before publication would reduce the 8.6% XPOS-UPOS mismatch rate and improve the resource quality.

  3. Add a pipeline diagram: A figure showing the full annotation pipeline flow would improve readability.

  4. Cite or remove the VLSP 2019 reference: Either find the proper citation or rephrase to avoid the uncited claim.

  5. Consider a stratified re-split: A random or stratified split (preserving document boundaries) would produce more representative dev/test sets than the current sequential split.
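Suggestion 5 is cheap to implement. A sketch of a document-boundary-preserving random split, assuming sentences can be grouped by a document id (e.g. from `# newdoc id` comments); the function and parameter names are illustrative:

```python
import random

def document_split(doc_to_sents, ratios=(0.8, 0.1, 0.1), seed=42):
    """Shuffle whole documents, then assign each to train/dev/test
    by sentence budget, so no document straddles a split boundary."""
    docs = list(doc_to_sents.items())
    random.Random(seed).shuffle(docs)
    total = sum(len(sents) for _, sents in docs)
    budgets = [r * total for r in ratios]
    splits, counts = ([], [], []), [0, 0, 0]
    i = 0
    for _, sents in docs:
        # advance to the next split once the current budget is filled
        while i < 2 and counts[i] >= budgets[i]:
            i += 1
        splits[i].extend(sents)
        counts[i] += len(sents)
    return splits  # train, dev, test
```

Because assignment is by whole document, split sizes only approximate the ratios, which is the usual trade-off when preserving document boundaries.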

Assessment of Revisions (vs. Previous Review)

| Previous Weakness | Status | Notes |
|-------------------|--------|-------|
| 1. No quality evaluation | Partially addressed | Section 5.6 added with XPOS-UPOS analysis. No gold evaluation yet. |
| 2. UPOS corruption in data | Addressed (documented) | Trade-off quantified (8.6%, 73.8% forced). XPOS recommended for reliable POS. |
| 3. UPOS-forcing systematic bias | Addressed (documented) | Decision strategy explained. Acknowledged as future work to improve. |
| 4. 77 deprel types undocumented | Addressed | 32 universal + 45 subtypes classified. 7 key subtypes listed and described. Non-standard subtypes flagged. |
| 5. Split methodology missing | Addressed | Section 3.3 documents sequential split with rationale. |
| 6. Incomplete related work | Addressed | DGDT, Czech CLTT, UD-CHILDES, HPSG parser, VLSP 2019/2020 all covered. 15 references, all verified. |

Evaluation Checklist

Methodology

  • Research questions clearly stated
  • Methods appropriate for research questions
  • Baselines appropriate and fairly compared
  • Statistical significance properly addressed (N/A for resource paper)
  • Limitations of approach acknowledged

Experiments

  • Datasets properly described (source, size, splits, preprocessing)
  • Evaluation metrics appropriate for the task (quality analysis in 5.6)
  • Training details sufficient for reproduction
  • Ablation studies or analysis provided (XPOS-UPOS breakdown)
  • Results support the claims made

Presentation

  • Abstract accurately summarizes contributions
  • Introduction motivates the problem
  • Related work comprehensive and fair
  • Figures/tables readable and informative
  • Conclusion matches actual contributions

Related Work Verification

  • Key prior work on same task is cited
  • Baseline comparisons use current methods
  • SOTA claims are accurate and up-to-date
  • No significant missing references (minor: VLSP 2019)
  • Fair characterization of competing approaches

Responsible NLP

  • Limitations section present and substantive (9 items)
  • Potential negative impacts discussed
  • Data collection ethics addressed
  • Bias considerations mentioned (Limitation 3)