Albin Thörn Cleland committed on
Commit 95ce394 · 1 Parent(s): 0b4690a

final model with gold test added to train

Files changed (46)
  1. logs/latest.txt +1 -1
  2. logs/log_conll17.pt_sv_20251203_232257.txt +719 -0
  3. logs/log_conll17.pt_sv_diachron_20251203_223822.txt +731 -0
  4. logs/log_conll17.pt_sv_diachron_20251212_145854.txt +161 -0
  5. logs/log_conll17.pt_sv_diachron_20251212_150001.txt +0 -0
  6. logs/log_conll17.pt_sv_diachron_is_20251203_221228.txt +591 -0
  7. logs/log_conll17.pt_sv_is_20251203_234401.txt +87 -0
  8. logs/log_diachronic.pt_sv_is_20251203_234442.txt +738 -0
  9. logs/log_sv_diachron_20251212_145741.txt +169 -0
  10. saved_models/depparse/conll17_baseline_sv_only/sv_diachronic_charlm_parser.pt +3 -0
  11. saved_models/depparse/conll17_baseline_sv_only/sv_diachronic_charlm_parser_checkpoint.pt +3 -0
  12. saved_models/depparse/conll17_is-modern/sv_diachronic_charlm_parser.pt +3 -0
  13. saved_models/depparse/conll17_is-modern/sv_diachronic_charlm_parser_checkpoint.pt +3 -0
  14. saved_models/depparse/conll17_sv_diachron/sv_diachronic_charlm_parser.pt +3 -0
  15. saved_models/depparse/conll17_sv_diachron/sv_diachronic_charlm_parser_checkpoint.pt +3 -0
  16. saved_models/depparse/final-conll17-sv_diachron_test/sv_diachronic_charlm_parser.pt +3 -0
  17. saved_models/depparse/final-conll17-sv_diachron_test/sv_diachronic_charlm_parser_checkpoint.pt +3 -0
  18. saved_models/depparse/kubhist2-sv-is-NO-DIACHRON/sv_diachronic_charlm_parser.pt +3 -0
  19. saved_models/depparse/kubhist2-sv-is-NO-DIACHRON/sv_diachronic_charlm_parser_checkpoint.pt +3 -0
  20. ud-treebanks-is/is_icepahc-ud-dev.conllu +3 -0
  21. ud-treebanks-is/is_icepahc-ud-test.conllu +3 -0
  22. ud-treebanks-is/is_icepahc-ud-train.conllu +3 -0
  23. ud-treebanks-is/{is_modern-ud-dev.conllu → modern/is_modern-ud-dev.conllu} +0 -0
  24. ud-treebanks-is/{is_modern-ud-test.conllu → modern/is_modern-ud-test.conllu} +0 -0
  25. ud-treebanks-is/{is_modern-ud-train.conllu → modern/is_modern-ud-train.conllu} +0 -0
  26. ud-treebanks-sv/{ucxn_ud_swedish-talbanken.conllu → svediakorp-letter141673-Stalhammar.conllu} +2 -2
  27. ud-treebanks-sv/svediakorp-sec1033-spf190.conllu +3 -0
  28. ud-treebanks-sv/svediakorp-sec1063-spf220.conllu +3 -0
  29. ud-treebanks-sv/svediakorp-sec1102-spf259.conllu +3 -0
  30. ud-treebanks-sv/svediakorp-sec208-Anonym_DetGrasligaMordet.conllu +3 -0
  31. ud-treebanks-sv/svediakorp-sec25-Runius.conllu +3 -0
  32. ud-treebanks-sv/svediakorp-sec252-BremerF_Teckningar1.conllu +3 -0
  33. ud-treebanks-sv/svediakorp-sec254-CederborghF_BerattelseOmJohnHall.conllu +3 -0
  34. ud-treebanks-sv/svediakorp-sec268-DulciU_VitterhetsNojen3.conllu +3 -0
  35. ud-treebanks-sv/svediakorp-sec277-EnbomPU_MedborgeligtSkalde.conllu +3 -0
  36. ud-treebanks-sv/svediakorp-sec324-GranbergPA_Enslighetsalskaren.conllu +3 -0
  37. ud-treebanks-sv/svediakorp-sec330-GyllenborgC_SwenskaSpratthoken.conllu +3 -0
  38. ud-treebanks-sv/svediakorp-sec397-AngeredStrandbergH_UnderSodernsSol.conllu +3 -0
  39. ud-treebanks-sv/svediakorp-sec452-NyblomH_FantasierFyra.conllu +3 -0
  40. ud-treebanks-sv/svediakorp-sec486-SchwartzMS_BellmansSkor.conllu +3 -0
  41. ud-treebanks-sv/svediakorp-sec613-EngstromA_StrindbergOchJag.conllu +3 -0
  42. ud-treebanks-sv/svediakorp-sec631-HasselskogN_HallaHallaGronkoping.conllu +3 -0
  43. ud-treebanks-sv/svediakorp-sec639-HeidenstamV_Proletarfilosofiens.conllu +3 -0
  44. ud-treebanks-sv/svediakorp-sec987-spf144.conllu +3 -0
  45. ud-treebanks-sv/svediakorp-sec988-spf145.conllu +3 -0
  46. ud-treebanks-sv/svediakorp-sec991-spf148.conllu +3 -0
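The log below shows `prepare-train-val-test.py` dropping sentences with missing annotations (e.g. `[REMOVED] sent_id=33 ERRORS=['Token 15: Missing deprel']`). A minimal sketch of such a CoNLL-U validity check, assuming the standard 10-column format; the function name and exact checks are illustrative, not the repository's actual script:

```python
# Hypothetical sketch of a CoNLL-U cleaning pass; not the repo's
# prepare-train-val-test.py, whose source is not part of this diff.

def validate_sentence(lines):
    """Return error strings for one CoNLL-U sentence (empty list = keep it)."""
    errors = []
    for line in lines:
        if not line or line.startswith("#"):
            continue  # skip comment lines such as "# sent_id = 33"
        cols = line.split("\t")
        if len(cols) != 10:
            errors.append(f"Token {cols[0]}: expected 10 columns")
            continue
        tok_id, head, deprel = cols[0], cols[6], cols[7]
        if "-" in tok_id or "." in tok_id:
            continue  # multiword-token / empty-node lines carry no HEAD or DEPREL
        if head == "_":
            errors.append(f"Token {tok_id}: Missing head")
        if deprel == "_":
            errors.append(f"Token {tok_id}: Missing deprel")
    return errors

good = ["# sent_id = 1",
        "1\tHej\thej\tINTJ\t_\t_\t0\troot\t_\t_"]
bad = ["1\tHej\thej\tINTJ\t_\t_\t0\t_\t_\t_"]
print(validate_sentence(good))  # → []
print(validate_sentence(bad))   # → ['Token 1: Missing deprel']
```

Sentences whose error list is non-empty would be removed, matching the per-sentence `[REMOVED]` lines and the "valid sentences" counts reported when the splits are written.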
logs/latest.txt CHANGED
@@ -1 +1 @@
- log_diachronic.pt_sv_diachron_is_20251203_214751.txt
+ log_conll17.pt_sv_diachron_20251212_150001.txt
logs/log_conll17.pt_sv_20251203_232257.txt ADDED
@@ -0,0 +1,719 @@
+ === LOGFILE: logs/log_conll17.pt_sv_20251203_232257.txt ===
+ Language codes: sv
+ Using pretrained model: conll17.pt
+
+ Running: python prepare-train-val-test.py sv
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_lines-ud-dev.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_swell-ud-test.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_pud-ud-test.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_talbanken-ud-test.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_swell-ud-test-trg.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_talbanken-ud-dev.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/ucxn_ud_swedish-talbanken.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_talbanken-ud-train.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_old-ud-test.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_lines-ud-train.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_lines-ud-test.conllu
+ Skipping DigPhil MACHINE (diachron not requested).
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec330-GyllenborgC_SwenskaSpratthoken.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec254-CederborghF_BerattelseOmJohnHall.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec277-EnbomPU_MedborgeligtSkalde.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec268-DulciU_VitterhetsNojen3.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec1063-spf220.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec397-AngeredStrandbergH_UnderSodernsSol.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec324-GranbergPA_Enslighetsalskaren.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec252-BremerF_Teckningar1.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec988-spf145.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec987-spf144.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec631-HasselskogN_HallaHallaGronkoping.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-letter141673-Stalhammar.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec1033-spf190.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec25-Runius.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec486-SchwartzMS_BellmansSkor.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec452-NyblomH_FantasierFyra.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec613-EngstromA_StrindbergOchJag.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec208-Anonym_DetGrasligaMordet.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec639-HeidenstamV_Proletarfilosofiens.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec1102-spf259.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec991-spf148.conllu
+ Cleaning TRAIN...
+ Cleaning DEV...
+ [REMOVED] sent_id=33 ERRORS=['Token 15: Missing deprel']
+ Cleaning TEST...
+ Writing TRAIN → /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud/UD_Swedish-diachronic/sv_diachronic-ud-train.conllu (19820 valid sentences)
+ Writing DEV → /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud/UD_Swedish-diachronic/sv_diachronic-ud-dev.conllu (9 valid sentences)
+ Writing TEST → /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud/UD_Swedish-diachronic/sv_diachronic-ud-test.conllu (99 valid sentences)
+ Done.
+ Sourcing scripts/config_alvis.sh
+ Running stanza dataset preparation…
+ 2025-12-03 23:23:04 INFO: Datasets program called with:
+ /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/stanza/utils/datasets/prepare_depparse_treebank.py UD_Swedish-diachronic --wordvec_pretrain_file /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt
+ 2025-12-03 23:23:04 DEBUG: Downloading resource file from https://raw.githubusercontent.com/stanfordnlp/stanza-resources/main/resources_1.11.0.json
+
+ 2025-12-03 23:23:04 INFO: Downloaded file to /cephyr/users/cleland/Alvis/stanza_resources/resources.json
+ 2025-12-03 23:23:04 DEBUG: Processing parameter "processors"...
+ 2025-12-03 23:23:04 WARNING: Can not find pos: diachronic from official model list. Ignoring it.
+ 2025-12-03 23:23:04 INFO: Downloading these customized packages for language: sv (Swedish)...
+ =======================
+ | Processor | Package |
+ -----------------------
+ =======================
+
+ 2025-12-03 23:23:04 INFO: Finished downloading models and saved to /cephyr/users/cleland/Alvis/stanza_resources
+ 2025-12-03 23:23:04 INFO: Using tagger model in /cephyr/users/cleland/Alvis/stanza_resources/sv/pos/diachronic.pt for sv_diachronic
+ 2025-12-03 23:23:04 INFO: Using model /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt for forward charlm
+ 2025-12-03 23:23:04 INFO: Using model /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt for backward charlm
+ Augmented 56 quotes: Counter({'„”': 9, '″″': 9, '""': 8, '「」': 8, '””': 5, '““': 4, '《》': 4, '»«': 3, '„“': 3, '«»': 3})
+ 2025-12-03 23:23:05 INFO: Running tagger to retag /local/tmp.5441282/tmp4sg9id1k/sv_diachronic.train.gold.conllu to /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.train.in.conllu
+ Args: ['--wordvec_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain', '--lang', 'sv', '--shorthand', 'sv_diachronic', '--mode', 'predict', '--save_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pos', '--save_name', 'diachronic.pt', '--wordvec_pretrain_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt', '--charlm', '--charlm_shorthand', 'sv_conll17', '--charlm_forward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt', '--charlm_backward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt', '--eval_file', '/local/tmp.5441282/tmp4sg9id1k/sv_diachronic.train.gold.conllu', '--output_file', '/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.train.in.conllu']
+ 2025-12-03 23:23:05 INFO: Running tagger in predict mode
+ 2025-12-03 23:23:05 INFO: Loading model from: /cephyr/users/cleland/Alvis/stanza_resources/sv/pos/diachronic.pt
+ 2025-12-03 23:23:07 DEBUG: Loaded pretrain from /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt
+ 2025-12-03 23:23:07 DEBUG: POS model loading charmodels: /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt and /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-03 23:23:07 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt
+ 2025-12-03 23:23:07 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-03 23:23:07 DEBUG: Building Adam with lr=0.003000, betas=(0.9, 0.95), eps=0.000001
+ 2025-12-03 23:23:10 INFO: Loading data with batch size 250...
+ 2025-12-03 23:23:24 INFO: Start evaluation...
+ 2025-12-03 23:24:31 INFO: UPOS XPOS UFeats AllTags
+ 2025-12-03 23:24:31 INFO: 98.01 62.93 94.04 60.49
+ 2025-12-03 23:24:31 INFO: POS Tagger score: sv_diachronic 60.49
+ 2025-12-03 23:24:32 INFO: Running tagger to retag /local/tmp.5441282/tmp4sg9id1k/sv_diachronic.dev.gold.conllu to /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.dev.in.conllu
+ Args: ['--wordvec_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain', '--lang', 'sv', '--shorthand', 'sv_diachronic', '--mode', 'predict', '--save_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pos', '--save_name', 'diachronic.pt', '--wordvec_pretrain_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt', '--charlm', '--charlm_shorthand', 'sv_conll17', '--charlm_forward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt', '--charlm_backward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt', '--eval_file', '/local/tmp.5441282/tmp4sg9id1k/sv_diachronic.dev.gold.conllu', '--output_file', '/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.dev.in.conllu']
+ 2025-12-03 23:24:32 INFO: Running tagger in predict mode
+ 2025-12-03 23:24:32 INFO: Loading model from: /cephyr/users/cleland/Alvis/stanza_resources/sv/pos/diachronic.pt
+ 2025-12-03 23:24:33 DEBUG: Loaded pretrain from /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt
+ 2025-12-03 23:24:33 DEBUG: POS model loading charmodels: /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt and /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-03 23:24:33 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt
+ 2025-12-03 23:24:33 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-03 23:24:34 DEBUG: Building Adam with lr=0.003000, betas=(0.9, 0.95), eps=0.000001
+ 2025-12-03 23:24:34 INFO: Loading data with batch size 250...
+ 2025-12-03 23:24:34 INFO: Start evaluation...
+ 2025-12-03 23:24:34 INFO: UPOS XPOS UFeats AllTags
+ 2025-12-03 23:24:34 INFO: 93.32 90.84 93.32 85.64
+ 2025-12-03 23:24:34 INFO: POS Tagger score: sv_diachronic 85.64
+ 2025-12-03 23:24:34 INFO: Running tagger to retag /local/tmp.5441282/tmp4sg9id1k/sv_diachronic.test.gold.conllu to /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.test.in.conllu
+ Args: ['--wordvec_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain', '--lang', 'sv', '--shorthand', 'sv_diachronic', '--mode', 'predict', '--save_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pos', '--save_name', 'diachronic.pt', '--wordvec_pretrain_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt', '--charlm', '--charlm_shorthand', 'sv_conll17', '--charlm_forward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt', '--charlm_backward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt', '--eval_file', '/local/tmp.5441282/tmp4sg9id1k/sv_diachronic.test.gold.conllu', '--output_file', '/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.test.in.conllu']
+ 2025-12-03 23:24:34 INFO: Running tagger in predict mode
+ 2025-12-03 23:24:34 INFO: Loading model from: /cephyr/users/cleland/Alvis/stanza_resources/sv/pos/diachronic.pt
+ 2025-12-03 23:24:36 DEBUG: Loaded pretrain from /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt
+ 2025-12-03 23:24:36 DEBUG: POS model loading charmodels: /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt and /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-03 23:24:36 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt
+ 2025-12-03 23:24:36 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-03 23:24:36 DEBUG: Building Adam with lr=0.003000, betas=(0.9, 0.95), eps=0.000001
+ 2025-12-03 23:24:36 INFO: Loading data with batch size 250...
+ 2025-12-03 23:24:36 INFO: Start evaluation...
+ 2025-12-03 23:24:37 INFO: UPOS XPOS UFeats AllTags
+ 2025-12-03 23:24:37 INFO: 93.14 96.78 95.32 90.28
+ 2025-12-03 23:24:37 INFO: POS Tagger score: sv_diachronic 90.28
+ Preparing data for UD_Swedish-diachronic: sv_diachronic, sv
+ Reading from /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud/UD_Swedish-diachronic/sv_diachronic-ud-train.conllu and writing to /local/tmp.5441282/tmp4sg9id1k/sv_diachronic.train.gold.conllu
+ Swapped 'w1, w2' for 'w1 ,w2' 122 times
+ Added 100 new sentences with asdf, zzzz -> asdf,zzzz
+ Reading from /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud/UD_Swedish-diachronic/sv_diachronic-ud-dev.conllu and writing to /local/tmp.5441282/tmp4sg9id1k/sv_diachronic.dev.gold.conllu
+ Reading from /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud/UD_Swedish-diachronic/sv_diachronic-ud-test.conllu and writing to /local/tmp.5441282/tmp4sg9id1k/sv_diachronic.test.gold.conllu
+ Running stanza dependency parser training…
+ 2025-12-03 23:24:46 INFO: Training program called with:
+ /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/stanza/utils/training/run_depparse.py UD_Swedish-diachronic --wordvec_pretrain_file /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt --batch_size 32 --dropout 0.33
+ 2025-12-03 23:24:46 DEBUG: UD_Swedish-diachronic: sv_diachronic
+ 2025-12-03 23:24:46 INFO: Using model /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt for forward charlm
+ 2025-12-03 23:24:46 INFO: Using model /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt for backward charlm
+ 2025-12-03 23:24:46 INFO: UD_Swedish-diachronic: saved_models/depparse/sv_diachronic_charlm_parser.pt does not exist, training new model
+ 2025-12-03 23:24:46 INFO: Using model /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt for forward charlm
+ 2025-12-03 23:24:46 INFO: Using model /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt for backward charlm
+ 2025-12-03 23:24:46 INFO: Running train depparse for UD_Swedish-diachronic with args ['--wordvec_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain', '--train_file', '/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.train.in.conllu', '--eval_file', '/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.dev.in.conllu', '--batch_size', '5000', '--lang', 'sv', '--shorthand', 'sv_diachronic', '--mode', 'train', '--charlm', '--charlm_shorthand', 'sv_conll17', '--charlm_forward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt', '--charlm_backward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt', '--wordvec_pretrain_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt', '--batch_size', '32', '--dropout', '0.33']
+ 2025-12-03 23:24:46 INFO: Running parser in train mode
+ 2025-12-03 23:24:46 INFO: Using pretrained contextualized char embedding
+ 2025-12-03 23:24:46 INFO: Loading data with batch size 32...
+ 2025-12-03 23:24:49 INFO: Train File /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.train.in.conllu, Data Size: 19920
+ 2025-12-03 23:24:49 INFO: Original data size: 19920
+ 2025-12-03 23:24:49 INFO: Augmented data size: 20844
+ 2025-12-03 23:24:57 WARNING: sv_diachronic is not a known dataset. Examining the data to choose which xpos vocab to use
+ 2025-12-03 23:24:57 INFO: Original length = 20844
+ 2025-12-03 23:24:57 INFO: Filtered length = 20844
+ 2025-12-03 23:25:02 WARNING: Chose XPOSDescription(xpos_type=<XPOSType.XPOS: 1>, sep='|') for the xpos factory for sv_diachronic
+ 2025-12-03 23:25:06 DEBUG: Loaded pretrain from /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt
+ 2025-12-03 23:25:11 DEBUG: 13986 batches created.
+ 2025-12-03 23:25:11 DEBUG: 9 batches created.
+ 2025-12-03 23:25:11 INFO: Training parser...
+ 2025-12-03 23:25:11 DEBUG: Depparse model loading charmodels: /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt and /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-03 23:25:11 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt
+ 2025-12-03 23:25:11 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-03 23:25:11 DEBUG: Building Adam with lr=0.003000, betas=(0.9, 0.95), eps=0.000001
+ 2025-12-03 23:25:14 INFO: Finished STEP 20/50000, loss = 3.699069 (0.033 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:15 INFO: Finished STEP 40/50000, loss = 3.841336 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:16 INFO: Finished STEP 60/50000, loss = 2.719965 (0.034 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:16 INFO: Finished STEP 80/50000, loss = 3.832992 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:17 INFO: Finished STEP 100/50000, loss = 3.050168 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:17 INFO: Evaluating on dev set...
+ 2025-12-03 23:25:18 INFO: LAS MLAS BLEX
+ 2025-12-03 23:25:18 INFO: 3.22 0.82 1.64
+ 2025-12-03 23:25:18 INFO: step 100: train_loss = 1815.927894, dev_score = 0.0322
+ 2025-12-03 23:25:18 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 23:25:18 INFO: new best model saved.
+ 2025-12-03 23:25:19 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:25:19 INFO: new model checkpoint saved.
+ 2025-12-03 23:25:19 INFO: Finished STEP 120/50000, loss = 3.082294 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:20 INFO: Finished STEP 140/50000, loss = 2.829382 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:21 INFO: Finished STEP 160/50000, loss = 2.907810 (0.036 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:22 INFO: Finished STEP 180/50000, loss = 3.411438 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:22 INFO: Finished STEP 200/50000, loss = 2.634063 (0.037 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:22 INFO: Evaluating on dev set...
+ 2025-12-03 23:25:23 INFO: LAS MLAS BLEX
+ 2025-12-03 23:25:23 INFO: 7.92 6.14 7.51
+ 2025-12-03 23:25:23 INFO: step 200: train_loss = 2.962158, dev_score = 0.0792
+ 2025-12-03 23:25:23 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 23:25:23 INFO: new best model saved.
+ 2025-12-03 23:25:24 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:25:24 INFO: new model checkpoint saved.
+ 2025-12-03 23:25:25 INFO: Finished STEP 220/50000, loss = 2.243211 (0.036 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:25 INFO: Finished STEP 240/50000, loss = 3.037997 (0.034 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:26 INFO: Finished STEP 260/50000, loss = 2.535078 (0.034 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:27 INFO: Finished STEP 280/50000, loss = 3.026233 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:27 INFO: Finished STEP 300/50000, loss = 3.630493 (0.034 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:27 INFO: Evaluating on dev set...
+ 2025-12-03 23:25:28 INFO: LAS MLAS BLEX
+ 2025-12-03 23:25:28 INFO: 13.61 10.54 12.24
+ 2025-12-03 23:25:28 INFO: step 300: train_loss = 3.006871, dev_score = 0.1361
+ 2025-12-03 23:25:28 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 23:25:28 INFO: new best model saved.
+ 2025-12-03 23:25:29 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:25:29 INFO: new model checkpoint saved.
+ 2025-12-03 23:25:30 INFO: Finished STEP 320/50000, loss = 2.784619 (0.034 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:30 INFO: Finished STEP 340/50000, loss = 3.004477 (0.038 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:31 INFO: Finished STEP 360/50000, loss = 2.542431 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:32 INFO: Finished STEP 380/50000, loss = 3.084781 (0.036 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:32 INFO: Finished STEP 400/50000, loss = 2.454229 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:32 INFO: Evaluating on dev set...
+ 2025-12-03 23:25:33 INFO: LAS MLAS BLEX
+ 2025-12-03 23:25:33 INFO: 18.81 6.64 10.55
+ 2025-12-03 23:25:33 INFO: step 400: train_loss = 3.048551, dev_score = 0.1881
+ 2025-12-03 23:25:33 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 23:25:33 INFO: new best model saved.
+ 2025-12-03 23:25:34 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:25:34 INFO: new model checkpoint saved.
+ 2025-12-03 23:25:35 INFO: Finished STEP 420/50000, loss = 2.214305 (0.034 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:35 INFO: Finished STEP 440/50000, loss = 2.953549 (0.034 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:36 INFO: Finished STEP 460/50000, loss = 2.711811 (0.037 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:37 INFO: Finished STEP 480/50000, loss = 2.823795 (0.036 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:38 INFO: Finished STEP 500/50000, loss = 4.227708 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:38 INFO: Evaluating on dev set...
+ 2025-12-03 23:25:38 INFO: LAS MLAS BLEX
+ 2025-12-03 23:25:38 INFO: 18.07 7.92 9.43
+ 2025-12-03 23:25:38 INFO: step 500: train_loss = 3.085661, dev_score = 0.1807
+ 2025-12-03 23:25:39 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:25:39 INFO: new model checkpoint saved.
+ 2025-12-03 23:25:39 INFO: Finished STEP 520/50000, loss = 3.152939 (0.039 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:40 INFO: Finished STEP 540/50000, loss = 2.722816 (0.037 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:41 INFO: Finished STEP 560/50000, loss = 1.797972 (0.034 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:42 INFO: Finished STEP 580/50000, loss = 1.902476 (0.037 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:42 INFO: Finished STEP 600/50000, loss = 2.532953 (0.037 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:42 INFO: Evaluating on dev set...
+ 2025-12-03 23:25:43 INFO: LAS MLAS BLEX
+ 2025-12-03 23:25:43 INFO: 24.75 12.64 15.33
+ 2025-12-03 23:25:43 INFO: step 600: train_loss = 3.006814, dev_score = 0.2475
+ 2025-12-03 23:25:43 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 23:25:43 INFO: new best model saved.
+ 2025-12-03 23:25:44 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:25:44 INFO: new model checkpoint saved.
+ 2025-12-03 23:25:44 INFO: Finished STEP 620/50000, loss = 3.911072 (0.037 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:45 INFO: Finished STEP 640/50000, loss = 3.704555 (0.039 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:46 INFO: Finished STEP 660/50000, loss = 2.692690 (0.037 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:47 INFO: Finished STEP 680/50000, loss = 2.771069 (0.038 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:47 INFO: Finished STEP 700/50000, loss = 4.281591 (0.036 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:47 INFO: Evaluating on dev set...
+ 2025-12-03 23:25:48 INFO: LAS MLAS BLEX
+ 2025-12-03 23:25:48 INFO: 34.65 21.99 25.31
+ 2025-12-03 23:25:48 INFO: step 700: train_loss = 3.216450, dev_score = 0.3465
+ 2025-12-03 23:25:48 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 23:25:48 INFO: new best model saved.
+ 2025-12-03 23:25:49 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:25:49 INFO: new model checkpoint saved.
+ 2025-12-03 23:25:50 INFO: Finished STEP 720/50000, loss = 3.350647 (0.040 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:50 INFO: Finished STEP 740/50000, loss = 2.873540 (0.038 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:51 INFO: Finished STEP 760/50000, loss = 3.564713 (0.038 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:52 INFO: Finished STEP 780/50000, loss = 3.640228 (0.041 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:53 INFO: Finished STEP 800/50000, loss = 2.930106 (0.036 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:53 INFO: Evaluating on dev set...
+ 2025-12-03 23:25:53 INFO: LAS MLAS BLEX
+ 2025-12-03 23:25:53 INFO: 24.75 8.60 12.90
+ 2025-12-03 23:25:53 INFO: step 800: train_loss = 3.300487, dev_score = 0.2475
+ 2025-12-03 23:25:54 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:25:54 INFO: new model checkpoint saved.
+ 2025-12-03 23:25:54 INFO: Finished STEP 820/50000, loss = 2.968561 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:55 INFO: Finished STEP 840/50000, loss = 2.491823 (0.037 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:56 INFO: Finished STEP 860/50000, loss = 3.991972 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:57 INFO: Finished STEP 880/50000, loss = 2.541115 (0.037 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:58 INFO: Finished STEP 900/50000, loss = 3.015432 (0.038 sec/batch), lr: 0.003000
+ 2025-12-03 23:25:58 INFO: Evaluating on dev set...
+ 2025-12-03 23:25:58 INFO: LAS MLAS BLEX
+ 2025-12-03 23:25:58 INFO: 25.50 11.48 13.52
+ 2025-12-03 23:25:58 INFO: step 900: train_loss = 3.187365, dev_score = 0.2550
+ 2025-12-03 23:25:59 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:25:59 INFO: new model checkpoint saved.
+ 2025-12-03 23:25:59 INFO: Finished STEP 920/50000, loss = 2.031316 (0.038 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:00 INFO: Finished STEP 940/50000, loss = 2.938839 (0.036 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:01 INFO: Finished STEP 960/50000, loss = 3.606135 (0.036 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:02 INFO: Finished STEP 980/50000, loss = 5.427132 (0.038 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:02 INFO: Finished STEP 1000/50000, loss = 2.710342 (0.039 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:02 INFO: Evaluating on dev set...
+ 2025-12-03 23:26:03 INFO: LAS MLAS BLEX
+ 2025-12-03 23:26:03 INFO: 37.87 21.71 24.22
+ 2025-12-03 23:26:03 INFO: step 1000: train_loss = 3.299336, dev_score = 0.3787
+ 2025-12-03 23:26:03 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 23:26:03 INFO: new best model saved.
+ 2025-12-03 23:26:04 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:26:04 INFO: new model checkpoint saved.
+ 2025-12-03 23:26:05 INFO: Finished STEP 1020/50000, loss = 3.274420 (0.040 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:05 INFO: Finished STEP 1040/50000, loss = 3.172289 (0.037 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:06 INFO: Finished STEP 1060/50000, loss = 2.884028 (0.040 sec/batch), lr: 0.003000
270
+ 2025-12-03 23:26:07 INFO: Finished STEP 1080/50000, loss = 4.205043 (0.038 sec/batch), lr: 0.003000
271
+ 2025-12-03 23:26:08 INFO: Finished STEP 1100/50000, loss = 3.608851 (0.039 sec/batch), lr: 0.003000
272
+ 2025-12-03 23:26:08 INFO: Evaluating on dev set...
273
+ 2025-12-03 23:26:08 INFO: LAS MLAS BLEX
274
+ 2025-12-03 23:26:08 INFO: 38.61 24.19 26.78
275
+ 2025-12-03 23:26:08 INFO: step 1100: train_loss = 3.198060, dev_score = 0.3861
276
+ 2025-12-03 23:26:09 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
277
+ 2025-12-03 23:26:09 INFO: new best model saved.
278
+ 2025-12-03 23:26:09 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
279
+ 2025-12-03 23:26:09 INFO: new model checkpoint saved.
280
+ 2025-12-03 23:26:10 INFO: Finished STEP 1120/50000, loss = 2.438182 (0.039 sec/batch), lr: 0.003000
281
+ 2025-12-03 23:26:11 INFO: Finished STEP 1140/50000, loss = 4.315670 (0.038 sec/batch), lr: 0.003000
282
+ 2025-12-03 23:26:11 INFO: Finished STEP 1160/50000, loss = 3.651882 (0.038 sec/batch), lr: 0.003000
283
+ 2025-12-03 23:26:12 INFO: Finished STEP 1180/50000, loss = 3.139420 (0.037 sec/batch), lr: 0.003000
284
+ 2025-12-03 23:26:13 INFO: Finished STEP 1200/50000, loss = 2.040530 (0.038 sec/batch), lr: 0.003000
285
+ 2025-12-03 23:26:13 INFO: Evaluating on dev set...
286
+ 2025-12-03 23:26:14 INFO: LAS MLAS BLEX
287
+ 2025-12-03 23:26:14 INFO: 39.85 23.98 26.98
288
+ 2025-12-03 23:26:14 INFO: step 1200: train_loss = 3.227944, dev_score = 0.3985
289
+ 2025-12-03 23:26:14 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
290
+ 2025-12-03 23:26:14 INFO: new best model saved.
291
+ 2025-12-03 23:26:14 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
292
+ 2025-12-03 23:26:14 INFO: new model checkpoint saved.
293
+ 2025-12-03 23:26:15 INFO: Finished STEP 1220/50000, loss = 2.697804 (0.041 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:16 INFO: Finished STEP 1240/50000, loss = 2.911831 (0.038 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:17 INFO: Finished STEP 1260/50000, loss = 1.945185 (0.040 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:18 INFO: Finished STEP 1280/50000, loss = 3.025609 (0.038 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:18 INFO: Finished STEP 1300/50000, loss = 4.230402 (0.037 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:18 INFO: Evaluating on dev set...
+ 2025-12-03 23:26:19 INFO: LAS MLAS BLEX
+ 2025-12-03 23:26:19 INFO: 45.30 30.97 33.55
+ 2025-12-03 23:26:19 INFO: step 1300: train_loss = 3.111315, dev_score = 0.4530
+ 2025-12-03 23:26:19 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 23:26:19 INFO: new best model saved.
+ 2025-12-03 23:26:20 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:26:20 INFO: new model checkpoint saved.
+ 2025-12-03 23:26:21 INFO: Finished STEP 1320/50000, loss = 1.966291 (0.038 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:21 INFO: Finished STEP 1340/50000, loss = 3.216881 (0.040 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:22 INFO: Finished STEP 1360/50000, loss = 2.379959 (0.038 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:23 INFO: Finished STEP 1380/50000, loss = 4.992296 (0.037 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:24 INFO: Finished STEP 1400/50000, loss = 3.349003 (0.042 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:24 INFO: Evaluating on dev set...
+ 2025-12-03 23:26:24 INFO: LAS MLAS BLEX
+ 2025-12-03 23:26:24 INFO: 36.88 23.21 24.89
+ 2025-12-03 23:26:24 INFO: step 1400: train_loss = 3.364135, dev_score = 0.3688
+ 2025-12-03 23:26:25 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:26:25 INFO: new model checkpoint saved.
+ 2025-12-03 23:26:26 INFO: Finished STEP 1420/50000, loss = 4.139513 (0.036 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:26 INFO: Finished STEP 1440/50000, loss = 2.905265 (0.037 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:27 INFO: Finished STEP 1460/50000, loss = 3.610150 (0.038 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:28 INFO: Finished STEP 1480/50000, loss = 4.759534 (0.038 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:29 INFO: Finished STEP 1500/50000, loss = 2.334270 (0.040 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:29 INFO: Evaluating on dev set...
+ 2025-12-03 23:26:29 INFO: LAS MLAS BLEX
+ 2025-12-03 23:26:29 INFO: 33.66 20.92 32.22
+ 2025-12-03 23:26:29 INFO: step 1500: train_loss = 3.377781, dev_score = 0.3366
+ 2025-12-03 23:26:30 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:26:30 INFO: new model checkpoint saved.
+ 2025-12-03 23:26:31 INFO: Finished STEP 1520/50000, loss = 2.824460 (0.037 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:32 INFO: Finished STEP 1540/50000, loss = 4.636815 (0.039 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:32 INFO: Finished STEP 1560/50000, loss = 3.666232 (0.039 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:33 INFO: Finished STEP 1580/50000, loss = 2.960546 (0.039 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:34 INFO: Finished STEP 1600/50000, loss = 2.503325 (0.039 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:34 INFO: Evaluating on dev set...
+ 2025-12-03 23:26:34 INFO: LAS MLAS BLEX
+ 2025-12-03 23:26:34 INFO: 42.57 27.91 31.29
+ 2025-12-03 23:26:34 INFO: step 1600: train_loss = 3.303970, dev_score = 0.4257
+ 2025-12-03 23:26:35 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:26:35 INFO: new model checkpoint saved.
+ 2025-12-03 23:26:36 INFO: Finished STEP 1620/50000, loss = 3.455626 (0.042 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:37 INFO: Finished STEP 1640/50000, loss = 3.333741 (0.038 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:37 INFO: Finished STEP 1660/50000, loss = 3.761724 (0.041 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:38 INFO: Finished STEP 1680/50000, loss = 3.171466 (0.039 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:39 INFO: Finished STEP 1700/50000, loss = 3.169466 (0.040 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:39 INFO: Evaluating on dev set...
+ 2025-12-03 23:26:39 INFO: LAS MLAS BLEX
+ 2025-12-03 23:26:39 INFO: 45.05 33.96 39.41
+ 2025-12-03 23:26:39 INFO: step 1700: train_loss = 3.350673, dev_score = 0.4505
+ 2025-12-03 23:26:40 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:26:40 INFO: new model checkpoint saved.
+ 2025-12-03 23:26:41 INFO: Finished STEP 1720/50000, loss = 2.898433 (0.037 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:42 INFO: Finished STEP 1740/50000, loss = 4.455222 (0.039 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:42 INFO: Finished STEP 1760/50000, loss = 2.290793 (0.041 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:43 INFO: Finished STEP 1780/50000, loss = 3.614108 (0.041 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:44 INFO: Finished STEP 1800/50000, loss = 2.709010 (0.039 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:44 INFO: Evaluating on dev set...
+ 2025-12-03 23:26:45 INFO: LAS MLAS BLEX
+ 2025-12-03 23:26:45 INFO: 46.04 29.96 33.76
+ 2025-12-03 23:26:45 INFO: step 1800: train_loss = 3.388469, dev_score = 0.4604
+ 2025-12-03 23:26:45 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 23:26:45 INFO: new best model saved.
+ 2025-12-03 23:26:45 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:26:45 INFO: new model checkpoint saved.
+ 2025-12-03 23:26:46 INFO: Finished STEP 1820/50000, loss = 3.027663 (0.039 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:47 INFO: Finished STEP 1840/50000, loss = 4.950170 (0.041 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:48 INFO: Finished STEP 1860/50000, loss = 3.627461 (0.037 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:49 INFO: Finished STEP 1880/50000, loss = 3.900440 (0.041 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:49 INFO: Finished STEP 1900/50000, loss = 3.206836 (0.037 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:49 INFO: Evaluating on dev set...
+ 2025-12-03 23:26:50 INFO: LAS MLAS BLEX
+ 2025-12-03 23:26:50 INFO: 41.09 28.63 34.11
+ 2025-12-03 23:26:50 INFO: step 1900: train_loss = 3.372791, dev_score = 0.4109
+ 2025-12-03 23:26:51 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:26:51 INFO: new model checkpoint saved.
+ 2025-12-03 23:26:51 INFO: Finished STEP 1920/50000, loss = 4.990382 (0.038 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:52 INFO: Finished STEP 1940/50000, loss = 3.309713 (0.041 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:53 INFO: Finished STEP 1960/50000, loss = 3.610244 (0.039 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:54 INFO: Finished STEP 1980/50000, loss = 3.427943 (0.039 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:55 INFO: Finished STEP 2000/50000, loss = 3.286536 (0.039 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:55 INFO: Evaluating on dev set...
+ 2025-12-03 23:26:55 INFO: LAS MLAS BLEX
+ 2025-12-03 23:26:55 INFO: 46.04 34.82 37.79
+ 2025-12-03 23:26:55 INFO: step 2000: train_loss = 3.315330, dev_score = 0.4604
+ 2025-12-03 23:26:55 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 23:26:55 INFO: new best model saved.
+ 2025-12-03 23:26:56 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:26:56 INFO: new model checkpoint saved.
+ 2025-12-03 23:26:57 INFO: Finished STEP 2020/50000, loss = 2.214467 (0.041 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:58 INFO: Finished STEP 2040/50000, loss = 3.128998 (0.041 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:58 INFO: Finished STEP 2060/50000, loss = 3.400111 (0.041 sec/batch), lr: 0.003000
+ 2025-12-03 23:26:59 INFO: Finished STEP 2080/50000, loss = 5.836899 (0.042 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:00 INFO: Finished STEP 2100/50000, loss = 2.544196 (0.042 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:00 INFO: Evaluating on dev set...
+ 2025-12-03 23:27:01 INFO: LAS MLAS BLEX
+ 2025-12-03 23:27:01 INFO: 41.34 27.60 36.09
+ 2025-12-03 23:27:01 INFO: step 2100: train_loss = 3.418321, dev_score = 0.4134
+ 2025-12-03 23:27:01 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:27:01 INFO: new model checkpoint saved.
+ 2025-12-03 23:27:02 INFO: Finished STEP 2120/50000, loss = 4.017645 (0.041 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:03 INFO: Finished STEP 2140/50000, loss = 4.332951 (0.040 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:04 INFO: Finished STEP 2160/50000, loss = 2.522452 (0.038 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:04 INFO: Finished STEP 2180/50000, loss = 3.793148 (0.040 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:05 INFO: Finished STEP 2200/50000, loss = 2.889349 (0.041 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:05 INFO: Evaluating on dev set...
+ 2025-12-03 23:27:06 INFO: LAS MLAS BLEX
+ 2025-12-03 23:27:06 INFO: 39.60 26.64 35.10
+ 2025-12-03 23:27:06 INFO: step 2200: train_loss = 3.486956, dev_score = 0.3960
+ 2025-12-03 23:27:06 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:27:06 INFO: new model checkpoint saved.
+ 2025-12-03 23:27:07 INFO: Finished STEP 2220/50000, loss = 3.812532 (0.039 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:08 INFO: Finished STEP 2240/50000, loss = 4.048740 (0.039 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:09 INFO: Finished STEP 2260/50000, loss = 3.259020 (0.038 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:10 INFO: Finished STEP 2280/50000, loss = 3.957003 (0.038 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:10 INFO: Finished STEP 2300/50000, loss = 3.192188 (0.044 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:10 INFO: Evaluating on dev set...
+ 2025-12-03 23:27:11 INFO: LAS MLAS BLEX
+ 2025-12-03 23:27:11 INFO: 43.81 27.85 33.33
+ 2025-12-03 23:27:11 INFO: step 2300: train_loss = 3.390584, dev_score = 0.4381
+ 2025-12-03 23:27:12 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:27:12 INFO: new model checkpoint saved.
+ 2025-12-03 23:27:12 INFO: Finished STEP 2320/50000, loss = 4.818910 (0.040 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:13 INFO: Finished STEP 2340/50000, loss = 2.406329 (0.042 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:14 INFO: Finished STEP 2360/50000, loss = 2.298516 (0.040 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:15 INFO: Finished STEP 2380/50000, loss = 2.806631 (0.041 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:16 INFO: Finished STEP 2400/50000, loss = 3.427664 (0.040 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:16 INFO: Evaluating on dev set...
+ 2025-12-03 23:27:16 INFO: LAS MLAS BLEX
+ 2025-12-03 23:27:16 INFO: 43.07 31.09 37.39
+ 2025-12-03 23:27:16 INFO: step 2400: train_loss = 3.488748, dev_score = 0.4307
+ 2025-12-03 23:27:17 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:27:17 INFO: new model checkpoint saved.
+ 2025-12-03 23:27:18 INFO: Finished STEP 2420/50000, loss = 2.539723 (0.041 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:19 INFO: Finished STEP 2440/50000, loss = 4.533563 (0.043 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:19 INFO: Finished STEP 2460/50000, loss = 4.236969 (0.043 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:20 INFO: Finished STEP 2480/50000, loss = 2.358161 (0.044 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:21 INFO: Finished STEP 2500/50000, loss = 3.165836 (0.040 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:21 INFO: Evaluating on dev set...
+ 2025-12-03 23:27:22 INFO: LAS MLAS BLEX
+ 2025-12-03 23:27:22 INFO: 46.53 30.44 35.52
+ 2025-12-03 23:27:22 INFO: step 2500: train_loss = 3.440427, dev_score = 0.4653
+ 2025-12-03 23:27:22 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 23:27:22 INFO: new best model saved.
+ 2025-12-03 23:27:22 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:27:22 INFO: new model checkpoint saved.
+ 2025-12-03 23:27:23 INFO: Finished STEP 2520/50000, loss = 3.732689 (0.039 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:24 INFO: Finished STEP 2540/50000, loss = 5.190745 (0.040 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:25 INFO: Finished STEP 2560/50000, loss = 4.239511 (0.041 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:26 INFO: Finished STEP 2580/50000, loss = 4.053186 (0.040 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:27 INFO: Finished STEP 2600/50000, loss = 2.809570 (0.042 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:27 INFO: Evaluating on dev set...
+ 2025-12-03 23:27:27 INFO: LAS MLAS BLEX
+ 2025-12-03 23:27:27 INFO: 49.50 37.55 41.77
+ 2025-12-03 23:27:27 INFO: step 2600: train_loss = 3.507095, dev_score = 0.4950
+ 2025-12-03 23:27:27 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 23:27:27 INFO: new best model saved.
+ 2025-12-03 23:27:28 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:27:28 INFO: new model checkpoint saved.
+ 2025-12-03 23:27:29 INFO: Finished STEP 2620/50000, loss = 4.161043 (0.040 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:30 INFO: Finished STEP 2640/50000, loss = 3.440454 (0.041 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:31 INFO: Finished STEP 2660/50000, loss = 3.371374 (0.039 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:31 INFO: Finished STEP 2680/50000, loss = 2.921980 (0.039 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:32 INFO: Finished STEP 2700/50000, loss = 2.089964 (0.040 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:32 INFO: Evaluating on dev set...
+ 2025-12-03 23:27:33 INFO: LAS MLAS BLEX
+ 2025-12-03 23:27:33 INFO: 45.54 33.12 36.94
+ 2025-12-03 23:27:33 INFO: step 2700: train_loss = 3.381599, dev_score = 0.4554
+ 2025-12-03 23:27:33 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:27:33 INFO: new model checkpoint saved.
+ 2025-12-03 23:27:34 INFO: Finished STEP 2720/50000, loss = 5.878531 (0.041 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:35 INFO: Finished STEP 2740/50000, loss = 3.237665 (0.040 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:36 INFO: Finished STEP 2760/50000, loss = 2.492691 (0.042 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:37 INFO: Finished STEP 2780/50000, loss = 4.720194 (0.039 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:38 INFO: Finished STEP 2800/50000, loss = 3.760880 (0.042 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:38 INFO: Evaluating on dev set...
+ 2025-12-03 23:27:38 INFO: LAS MLAS BLEX
+ 2025-12-03 23:27:38 INFO: 51.24 38.24 43.28
+ 2025-12-03 23:27:38 INFO: step 2800: train_loss = 3.545646, dev_score = 0.5124
+ 2025-12-03 23:27:38 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 23:27:38 INFO: new best model saved.
+ 2025-12-03 23:27:39 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:27:39 INFO: new model checkpoint saved.
+ 2025-12-03 23:27:40 INFO: Finished STEP 2820/50000, loss = 3.466887 (0.040 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:41 INFO: Finished STEP 2840/50000, loss = 2.830301 (0.040 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:42 INFO: Finished STEP 2860/50000, loss = 3.183891 (0.043 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:43 INFO: Finished STEP 2880/50000, loss = 3.444857 (0.043 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:43 INFO: Finished STEP 2900/50000, loss = 3.642260 (0.044 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:43 INFO: Evaluating on dev set...
+ 2025-12-03 23:27:44 INFO: LAS MLAS BLEX
+ 2025-12-03 23:27:44 INFO: 53.71 40.59 44.82
+ 2025-12-03 23:27:44 INFO: step 2900: train_loss = 3.663115, dev_score = 0.5371
+ 2025-12-03 23:27:44 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 23:27:44 INFO: new best model saved.
+ 2025-12-03 23:27:45 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:27:45 INFO: new model checkpoint saved.
+ 2025-12-03 23:27:46 INFO: Finished STEP 2920/50000, loss = 2.488231 (0.043 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:46 INFO: Finished STEP 2940/50000, loss = 3.432896 (0.044 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:47 INFO: Finished STEP 2960/50000, loss = 2.781652 (0.041 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:48 INFO: Finished STEP 2980/50000, loss = 2.840132 (0.042 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:49 INFO: Finished STEP 3000/50000, loss = 4.171174 (0.039 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:49 INFO: Evaluating on dev set...
+ 2025-12-03 23:27:49 INFO: LAS MLAS BLEX
+ 2025-12-03 23:27:49 INFO: 51.98 39.50 45.80
+ 2025-12-03 23:27:49 INFO: step 3000: train_loss = 3.476293, dev_score = 0.5198
+ 2025-12-03 23:27:50 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:27:50 INFO: new model checkpoint saved.
+ 2025-12-03 23:27:51 INFO: Finished STEP 3020/50000, loss = 3.225040 (0.043 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:52 INFO: Finished STEP 3040/50000, loss = 3.064417 (0.042 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:53 INFO: Finished STEP 3060/50000, loss = 3.886524 (0.041 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:53 INFO: Finished STEP 3080/50000, loss = 3.704923 (0.041 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:54 INFO: Finished STEP 3100/50000, loss = 4.191244 (0.042 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:54 INFO: Evaluating on dev set...
+ 2025-12-03 23:27:55 INFO: LAS MLAS BLEX
+ 2025-12-03 23:27:55 INFO: 49.75 37.32 40.67
+ 2025-12-03 23:27:55 INFO: step 3100: train_loss = 3.466918, dev_score = 0.4975
+ 2025-12-03 23:27:55 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:27:55 INFO: new model checkpoint saved.
+ 2025-12-03 23:27:56 INFO: Finished STEP 3120/50000, loss = 3.950969 (0.042 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:57 INFO: Finished STEP 3140/50000, loss = 2.399348 (0.039 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:58 INFO: Finished STEP 3160/50000, loss = 3.456824 (0.042 sec/batch), lr: 0.003000
+ 2025-12-03 23:27:59 INFO: Finished STEP 3180/50000, loss = 3.288764 (0.041 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:00 INFO: Finished STEP 3200/50000, loss = 3.057923 (0.041 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:00 INFO: Evaluating on dev set...
+ 2025-12-03 23:28:00 INFO: LAS MLAS BLEX
+ 2025-12-03 23:28:00 INFO: 47.77 36.55 40.76
+ 2025-12-03 23:28:00 INFO: step 3200: train_loss = 3.563295, dev_score = 0.4777
+ 2025-12-03 23:28:01 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:28:01 INFO: new model checkpoint saved.
+ 2025-12-03 23:28:02 INFO: Finished STEP 3220/50000, loss = 4.334808 (0.042 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:03 INFO: Finished STEP 3240/50000, loss = 2.773743 (0.040 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:03 INFO: Finished STEP 3260/50000, loss = 3.285001 (0.042 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:04 INFO: Finished STEP 3280/50000, loss = 3.142590 (0.046 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:05 INFO: Finished STEP 3300/50000, loss = 2.988616 (0.044 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:05 INFO: Evaluating on dev set...
+ 2025-12-03 23:28:06 INFO: LAS MLAS BLEX
+ 2025-12-03 23:28:06 INFO: 51.98 39.08 41.60
+ 2025-12-03 23:28:06 INFO: step 3300: train_loss = 3.589203, dev_score = 0.5198
+ 2025-12-03 23:28:06 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:28:06 INFO: new model checkpoint saved.
+ 2025-12-03 23:28:07 INFO: Finished STEP 3320/50000, loss = 5.763182 (0.045 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:08 INFO: Finished STEP 3340/50000, loss = 3.421094 (0.042 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:09 INFO: Finished STEP 3360/50000, loss = 5.406409 (0.043 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:10 INFO: Finished STEP 3380/50000, loss = 3.032209 (0.045 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:11 INFO: Finished STEP 3400/50000, loss = 3.139112 (0.042 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:11 INFO: Evaluating on dev set...
+ 2025-12-03 23:28:11 INFO: LAS MLAS BLEX
+ 2025-12-03 23:28:11 INFO: 49.75 36.29 40.08
+ 2025-12-03 23:28:11 INFO: step 3400: train_loss = 3.722110, dev_score = 0.4975
+ 2025-12-03 23:28:12 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:28:12 INFO: new model checkpoint saved.
+ 2025-12-03 23:28:13 INFO: Finished STEP 3420/50000, loss = 3.515473 (0.044 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:14 INFO: Finished STEP 3440/50000, loss = 4.545405 (0.042 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:15 INFO: Finished STEP 3460/50000, loss = 3.489767 (0.043 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:15 INFO: Finished STEP 3480/50000, loss = 4.931797 (0.042 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:16 INFO: Finished STEP 3500/50000, loss = 3.305768 (0.042 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:16 INFO: Evaluating on dev set...
+ 2025-12-03 23:28:17 INFO: LAS MLAS BLEX
+ 2025-12-03 23:28:17 INFO: 48.76 37.05 41.68
+ 2025-12-03 23:28:17 INFO: step 3500: train_loss = 3.662503, dev_score = 0.4876
+ 2025-12-03 23:28:17 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:28:17 INFO: new model checkpoint saved.
+ 2025-12-03 23:28:18 INFO: Finished STEP 3520/50000, loss = 4.402299 (0.043 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:19 INFO: Finished STEP 3540/50000, loss = 2.635880 (0.042 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:20 INFO: Finished STEP 3560/50000, loss = 2.857255 (0.043 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:21 INFO: Finished STEP 3580/50000, loss = 3.507267 (0.043 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:22 INFO: Finished STEP 3600/50000, loss = 5.196735 (0.042 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:22 INFO: Evaluating on dev set...
+ 2025-12-03 23:28:22 INFO: LAS MLAS BLEX
+ 2025-12-03 23:28:22 INFO: 51.24 40.84 44.21
+ 2025-12-03 23:28:22 INFO: step 3600: train_loss = 3.681615, dev_score = 0.5124
+ 2025-12-03 23:28:23 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:28:23 INFO: new model checkpoint saved.
+ 2025-12-03 23:28:24 INFO: Finished STEP 3620/50000, loss = 4.539114 (0.042 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:25 INFO: Finished STEP 3640/50000, loss = 6.413163 (0.042 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:25 INFO: Finished STEP 3660/50000, loss = 4.375307 (0.041 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:26 INFO: Finished STEP 3680/50000, loss = 5.120213 (0.041 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:27 INFO: Finished STEP 3700/50000, loss = 3.109243 (0.044 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:27 INFO: Evaluating on dev set...
+ 2025-12-03 23:28:28 INFO: LAS MLAS BLEX
+ 2025-12-03 23:28:28 INFO: 49.26 38.99 44.03
+ 2025-12-03 23:28:28 INFO: step 3700: train_loss = 3.778901, dev_score = 0.4926
+ 2025-12-03 23:28:28 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:28:28 INFO: new model checkpoint saved.
+ 2025-12-03 23:28:29 INFO: Finished STEP 3720/50000, loss = 3.468835 (0.045 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:30 INFO: Finished STEP 3740/50000, loss = 3.420460 (0.044 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:31 INFO: Finished STEP 3760/50000, loss = 2.856275 (0.042 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:32 INFO: Finished STEP 3780/50000, loss = 2.668692 (0.041 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:33 INFO: Finished STEP 3800/50000, loss = 4.511623 (0.044 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:33 INFO: Evaluating on dev set...
+ 2025-12-03 23:28:33 INFO: LAS MLAS BLEX
+ 2025-12-03 23:28:33 INFO: 45.30 35.79 39.16
+ 2025-12-03 23:28:33 INFO: step 3800: train_loss = 3.920723, dev_score = 0.4530
+ 2025-12-03 23:28:34 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:28:34 INFO: new model checkpoint saved.
+ 2025-12-03 23:28:35 INFO: Finished STEP 3820/50000, loss = 4.449274 (0.042 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:36 INFO: Finished STEP 3840/50000, loss = 2.533618 (0.042 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:36 INFO: Finished STEP 3860/50000, loss = 3.834488 (0.043 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:37 INFO: Finished STEP 3880/50000, loss = 3.449773 (0.043 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:38 INFO: Finished STEP 3900/50000, loss = 3.354862 (0.047 sec/batch), lr: 0.003000
+ 2025-12-03 23:28:38 INFO: Evaluating on dev set...
+ 2025-12-03 23:28:39 INFO: LAS MLAS BLEX
+ 2025-12-03 23:28:39 INFO: 52.48 39.58 44.63
+ 2025-12-03 23:28:39 INFO: step 3900: train_loss = 3.766196, dev_score = 0.5248
+ 2025-12-03 23:28:39 INFO: Training ended with 3900 steps.
+ 2025-12-03 23:28:39 INFO: Best dev F1 = 53.71, at iteration = 2900
+ 2025-12-03 23:28:39 INFO: Running dev depparse for UD_Swedish-diachronic with args ['--wordvec_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain', '--eval_file', '/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.dev.in.conllu', '--lang', 'sv', '--shorthand', 'sv_diachronic', '--mode', 'predict', '--charlm', '--charlm_shorthand', 'sv_conll17', '--charlm_forward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt', '--charlm_backward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt', '--wordvec_pretrain_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt', '--batch_size', '32', '--dropout', '0.33']
+ 2025-12-03 23:28:39 INFO: Running parser in predict mode
+ 2025-12-03 23:28:39 INFO: Loading model from: saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 23:28:41 DEBUG: Loaded pretrain from /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt
+ 2025-12-03 23:28:41 DEBUG: Depparse model loading charmodels: /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt and /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-03 23:28:41 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt
+ 2025-12-03 23:28:41 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-03 23:28:41 DEBUG: Building Adam with lr=0.003000, betas=(0.9, 0.95), eps=0.000001
+ 2025-12-03 23:28:41 INFO: Loading data with batch size 32...
+ 2025-12-03 23:28:41 DEBUG: 9 batches created.
+ 2025-12-03 23:28:42 INFO: F1 scores for each dependency:
+ Note that unlabeled attachment errors hurt the labeled attachment scores
+ acl: p 0.0000 r 0.0000 f1 0.0000 (3 actual)
+ acl:relcl: p 0.1667 r 0.2857 f1 0.2105 (7 actual)
+ advcl: p 0.0526 r 0.2000 f1 0.0833 (5 actual)
+ advmod: p 0.4839 r 0.6000 f1 0.5357 (25 actual)
+ amod: p 0.8148 r 0.7097 f1 0.7586 (31 actual)
+ appos: p 0.0000 r 0.0000 f1 0.0000 (4 actual)
+ aux: p 0.8889 r 0.7273 f1 0.8000 (11 actual)
+ case: p 0.7544 r 0.7679 f1 0.7611 (56 actual)
+ cc: p 0.7692 r 0.7692 f1 0.7692 (13 actual)
+ ccomp: p 0.0000 r 0.0000 f1 0.0000 (2 actual)
+ conj: p 0.5000 r 0.0833 f1 0.1429 (12 actual)
+ cop: p 0.2500 r 0.3333 f1 0.2857 (3 actual)
+ csubj: p 0.0000 r 0.0000 f1 0.0000 (2 actual)
+ det: p 0.8696 r 0.9091 f1 0.8889 (22 actual)
+ expl: p 0.0000 r 0.0000 f1 0.0000 (1 actual)
+ iobj: p 0.0000 r 0.0000 f1 0.0000 (2 actual)
+ mark: p 0.5000 r 0.3333 f1 0.4000 (12 actual)
+ nmod: p 0.2609 r 0.4000 f1 0.3158 (15 actual)
+ nmod:poss: p 1.0000 r 0.8947 f1 0.9444 (19 actual)
+ nsubj: p 0.3421 r 0.7647 f1 0.4727 (17 actual)
+ nsubj:pass: p 0.0000 r 0.0000 f1 0.0000 (5 actual)
+ obj: p 0.6667 r 0.2727 f1 0.3871 (22 actual)
+ obl: p 0.3333 r 0.4146 f1 0.3696 (41 actual)
+ obl:agent: p 0.0000 r 0.0000 f1 0.0000 (1 actual)
+ orphan: p 0.0000 r 0.0000 f1 0.0000 (1 actual)
+ parataxis: p 0.0000 r 0.0000 f1 0.0000 (3 actual)
+ punct: p 0.4808 r 0.4808 f1 0.4808 (52 actual)
+ root: p 0.4444 r 0.4444 f1 0.4444 (9 actual)
+ xcomp: p 0.0000 r 0.0000 f1 0.0000 (8 actual)
+ 2025-12-03 23:28:42 INFO: LAS MLAS BLEX
+ 2025-12-03 23:28:42 INFO: 53.71 40.59 44.82
+ 2025-12-03 23:28:42 INFO: Parser score:
+ 2025-12-03 23:28:42 INFO: sv_diachronic 53.71
+ 2025-12-03 23:28:42 INFO: Finished running dev set on
+ UD_Swedish-diachronic
+ UAS LAS CLAS MLAS BLEX
+ 66.58 53.71 44.82 40.59 44.82
+ 2025-12-03 23:28:42 INFO: Running test depparse for UD_Swedish-diachronic with args ['--wordvec_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain', '--eval_file', '/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.test.in.conllu', '--lang', 'sv', '--shorthand', 'sv_diachronic', '--mode', 'predict', '--charlm', '--charlm_shorthand', 'sv_conll17', '--charlm_forward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt', '--charlm_backward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt', '--wordvec_pretrain_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt', '--batch_size', '32', '--dropout', '0.33']
+ 2025-12-03 23:28:42 INFO: Running parser in predict mode
+ 2025-12-03 23:28:42 INFO: Loading model from: saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 23:28:44 DEBUG: Loaded pretrain from /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt
+ 2025-12-03 23:28:44 DEBUG: Depparse model loading charmodels: /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt and /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-03 23:28:44 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt
+ 2025-12-03 23:28:44 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-03 23:28:44 DEBUG: Building Adam with lr=0.003000, betas=(0.9, 0.95), eps=0.000001
+ 2025-12-03 23:28:44 INFO: Loading data with batch size 32...
+ 2025-12-03 23:28:44 DEBUG: 93 batches created.
+ 2025-12-03 23:28:49 INFO: F1 scores for each dependency:
+ Note that unlabeled attachment errors hurt the labeled attachment scores
+ acl: p 0.0000 r 0.0000 f1 0.0000 (32 actual)
+ acl:cleft: p 0.0000 r 0.0000 f1 0.0000 (2 actual)
+ acl:relcl: p 0.1167 r 0.0933 f1 0.1037 (75 actual)
+ advcl: p 0.0544 r 0.2667 f1 0.0904 (60 actual)
+ advcl:relcl: p 0.0000 r 0.0000 f1 0.0000 (2 actual)
+ advmod: p 0.5083 r 0.5746 f1 0.5394 (268 actual)
+ amod: p 0.6795 r 0.6913 f1 0.6853 (230 actual)
+ appos: p 0.0000 r 0.0000 f1 0.0000 (13 actual)
+ aux: p 0.7556 r 0.8095 f1 0.7816 (84 actual)
+ aux:pass: p 0.0000 r 0.0000 f1 0.0000 (2 actual)
+ case: p 0.7245 r 0.7051 f1 0.7147 (373 actual)
+ cc: p 0.5759 r 0.5871 f1 0.5815 (155 actual)
+ ccomp: p 0.0000 r 0.0000 f1 0.0000 (35 actual)
+ compound:prt: p 0.0000 r 0.0000 f1 0.0000 (21 actual)
+ conj: p 0.1053 r 0.0253 f1 0.0408 (158 actual)
+ cop: p 0.7619 r 0.3478 f1 0.4776 (46 actual)
+ csubj: p 0.0000 r 0.0000 f1 0.0000 (4 actual)
+ dep: p 0.0000 r 0.0000 f1 0.0000 (1 actual)
+ det: p 0.7833 r 0.7644 f1 0.7737 (208 actual)
+ discourse: p 0.0000 r 0.0000 f1 0.0000 (7 actual)
+ dislocated: p 0.0000 r 0.0000 f1 0.0000 (1 actual)
+ expl: p 0.0000 r 0.0000 f1 0.0000 (11 actual)
+ expl:pv: p 0.0000 r 0.0000 f1 0.0000 (1 actual)
+ fixed: p 0.0000 r 0.0000 f1 0.0000 (8 actual)
+ flat: p 0.0000 r 0.0000 f1 0.0000 (4 actual)
+ flat:name: p 0.0000 r 0.0000 f1 0.0000 (12 actual)
+ goeswith: p 0.0000 r 0.0000 f1 0.0000 (2 actual)
+ iobj: p 0.0000 r 0.0000 f1 0.0000 (14 actual)
+ mark: p 0.6569 r 0.5882 f1 0.6207 (153 actual)
+ nmod: p 0.2320 r 0.2843 f1 0.2555 (102 actual)
+ nmod:poss: p 0.8102 r 0.7817 f1 0.7957 (142 actual)
+ nsubj: p 0.3947 r 0.6429 f1 0.4891 (280 actual)
+ nsubj:pass: p 0.0000 r 0.0000 f1 0.0000 (25 actual)
+ nummod: p 0.0000 r 0.0000 f1 0.0000 (10 actual)
+ obj: p 0.6111 r 0.1803 f1 0.2785 (183 actual)
+ obl: p 0.2623 r 0.4029 f1 0.3177 (278 actual)
+ obl:agent: p 0.0000 r 0.0000 f1 0.0000 (4 actual)
+ orphan: p 0.0000 r 0.0000 f1 0.0000 (1 actual)
+ parataxis: p 0.0000 r 0.0000 f1 0.0000 (18 actual)
+ punct: p 0.3852 r 0.3906 f1 0.3879 (425 actual)
+ reparandum: p 0.0000 r 0.0000 f1 0.0000 (1 actual)
+ root: p 0.5253 r 0.5253 f1 0.5253 (99 actual)
+ vocative: p 0.0000 r 0.0000 f1 0.0000 (5 actual)
+ xcomp: p 0.0000 r 0.0000 f1 0.0000 (75 actual)
+ 2025-12-03 23:28:49 INFO: LAS MLAS BLEX
+ 2025-12-03 23:28:49 INFO: 47.60 36.14 39.67
+ 2025-12-03 23:28:49 INFO: Parser score:
+ 2025-12-03 23:28:49 INFO: sv_diachronic 47.60
+ 2025-12-03 23:28:49 INFO: Finished running test set on
+ UD_Swedish-diachronic
+ UAS LAS CLAS MLAS BLEX
+ 62.40 47.60 39.67 36.14 39.67
+ DONE.
+ Full log saved to: logs/log_conll17.pt_sv_20251203_232257.txt
+ Symlink updated: logs/latest.txt → log_conll17.pt_sv_20251203_232257.txt
logs/log_conll17.pt_sv_diachron_20251203_223822.txt ADDED
@@ -0,0 +1,731 @@
+ === LOGFILE: logs/log_conll17.pt_sv_diachron_20251203_223822.txt ===
+ Language codes: sv diachron
+ Using pretrained model: conll17.pt
+
+ Running: python prepare-train-val-test.py sv diachron
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_lines-ud-dev.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_swell-ud-test.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_pud-ud-test.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_talbanken-ud-test.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_swell-ud-test-trg.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_talbanken-ud-dev.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/ucxn_ud_swedish-talbanken.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_talbanken-ud-train.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_old-ud-test.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_lines-ud-train.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_lines-ud-test.conllu
+ Including DigPhil MACHINE in TRAIN (minus gold)…
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec330-GyllenborgC_SwenskaSpratthoken.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec254-CederborghF_BerattelseOmJohnHall.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec277-EnbomPU_MedborgeligtSkalde.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec268-DulciU_VitterhetsNojen3.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec1063-spf220.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec397-AngeredStrandbergH_UnderSodernsSol.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec324-GranbergPA_Enslighetsalskaren.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec252-BremerF_Teckningar1.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec988-spf145.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec987-spf144.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec631-HasselskogN_HallaHallaGronkoping.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-letter141673-Stalhammar.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec1033-spf190.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec25-Runius.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec486-SchwartzMS_BellmansSkor.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec452-NyblomH_FantasierFyra.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec613-EngstromA_StrindbergOchJag.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec208-Anonym_DetGrasligaMordet.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec639-HeidenstamV_Proletarfilosofiens.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec1102-spf259.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec991-spf148.conllu
+ Cleaning TRAIN...
+ [REMOVED] sent_id=6 ERRORS=['Line 24: Invalid token ID or head', 'Line 25: Invalid token ID or head', 'Line 29: Invalid token ID or head', 'Token 30 has invalid head 24']
+ [REMOVED] sent_id=7_8 ERRORS=['Multiple roots found: [5, 10]']
+ [REMOVED] sent_id=30_31 ERRORS=['Multiple roots found: [3, 18]']
+ [REMOVED] sent_id=35 ERRORS=['Line 36: Invalid token ID or head']
+ [REMOVED] sent_id=2_3 ERRORS=['Multiple roots found: [1, 5]']
+ [REMOVED] sent_id=2_3 ERRORS=['Multiple roots found: [7, 20]']
+ [REMOVED] sent_id=8_9 ERRORS=['Multiple roots found: [24, 57]']
+ [REMOVED] sent_id=12_13 ERRORS=['Multiple roots found: [11, 16]']
+ [REMOVED] sent_id=124_split2 ERRORS=['Line 4: Invalid token ID or head', 'No root found', 'Token 1 has invalid head 4', 'Token 2 has invalid head 4', 'Token 3 has invalid head 4', 'Token 6 has invalid head 4', 'Token 11 has invalid head 4', 'Token 15 has invalid head 4']
+ [REMOVED] sent_id=396 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=416 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=589 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=909 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=912 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=3_split1 ERRORS=['Multiple roots found: [4, 15, 17]']
+ [REMOVED] sent_id=3_split2 ERRORS=['Line 1: Invalid token ID or head', 'Line 8: Invalid token ID or head', 'Line 15: Invalid token ID or head', 'No root found', 'Token 2 has invalid head 1', 'Token 3 has invalid head 8', 'Token 4 has invalid head 8', 'Token 5 has invalid head 8', 'Token 7 has invalid head 8', 'Token 10 has invalid head 8', 'Token 13 has invalid head 8', 'Token 14 has invalid head 8']
+ [REMOVED] sent_id=3_4 ERRORS=['Multiple roots found: [1, 5]']
+ [REMOVED] sent_id=5_6 ERRORS=['Multiple roots found: [3, 24]']
+ [REMOVED] sent_id=11_12_13 ERRORS=['Multiple roots found: [5, 17, 25]']
+ [REMOVED] sent_id=119 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=179 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=188 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=223 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=268 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=325 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=388 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=399 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=475 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=505 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=520 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=562 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=669 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=711 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=731 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=867 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=884 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=923 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=939 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=1086 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=1179 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=1251 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=1345 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=1459 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=1656 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=1669 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=87_88 ERRORS=['Multiple roots found: [3, 6]']
+ [REMOVED] sent_id=65_split2_66_split2 ERRORS=['Line 4: Invalid token ID or head', 'Token 2 has invalid head 4', 'Token 3 has invalid head 4', 'Token 5 has invalid head 4']
+ [REMOVED] sent_id=25 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=136 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=208 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=230 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=245 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=276 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=320 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=366 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=519 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=569 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=50_split2 ERRORS=['Line 1: Invalid token ID or head', 'Line 6: Invalid token ID or head', 'No root found', 'Token 2 has invalid head 1']
+ [REMOVED] sent_id=53_54 ERRORS=['Multiple roots found: [27, 91]']
+ [REMOVED] sent_id=55_56_57 ERRORS=['Multiple roots found: [2, 4, 13]']
+ [REMOVED] sent_id=17_split1 ERRORS=['Multiple roots found: [2, 14, 17]']
+ [REMOVED] sent_id=17_split2 ERRORS=['Line 8: Invalid token ID or head', 'Line 25: Invalid token ID or head', 'Line 38: Invalid token ID or head', 'No root found', 'Token 3 has invalid head 8', 'Token 7 has invalid head 8', 'Token 9 has invalid head 8', 'Token 10 has invalid head 8', 'Token 17 has invalid head 8', 'Token 22 has invalid head 25', 'Token 23 has invalid head 25', 'Token 24 has invalid head 25', 'Token 26 has invalid head 25', 'Token 27 has invalid head 25', 'Token 28 has invalid head 25']
+ [REMOVED] sent_id=19_split1 ERRORS=['Multiple roots found: [3, 31]']
+ Cleaning DEV...
+ [REMOVED] sent_id=33 ERRORS=['Token 15: Missing deprel']
+ Cleaning TEST...
+ Writing TRAIN → /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud/UD_Swedish-diachronic/sv_diachronic-ud-train.conllu (66252 valid sentences)
+ Writing DEV → /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud/UD_Swedish-diachronic/sv_diachronic-ud-dev.conllu (9 valid sentences)
+ Writing TEST → /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud/UD_Swedish-diachronic/sv_diachronic-ud-test.conllu (99 valid sentences)
+ Done.
+ Sourcing scripts/config_alvis.sh
+ Running stanza dataset preparation…
+ 2025-12-03 22:38:31 INFO: Datasets program called with:
+ /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/stanza/utils/datasets/prepare_depparse_treebank.py UD_Swedish-diachronic --wordvec_pretrain_file /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt
+ 2025-12-03 22:38:31 DEBUG: Downloading resource file from https://raw.githubusercontent.com/stanfordnlp/stanza-resources/main/resources_1.11.0.json
+
+ 2025-12-03 22:38:31 INFO: Downloaded file to /cephyr/users/cleland/Alvis/stanza_resources/resources.json
+ 2025-12-03 22:38:31 DEBUG: Processing parameter "processors"...
+ 2025-12-03 22:38:31 WARNING: Can not find pos: diachronic from official model list. Ignoring it.
+ 2025-12-03 22:38:31 INFO: Downloading these customized packages for language: sv (Swedish)...
+ =======================
+ | Processor | Package |
+ -----------------------
+ =======================
+
+ 2025-12-03 22:38:31 INFO: Finished downloading models and saved to /cephyr/users/cleland/Alvis/stanza_resources
+ 2025-12-03 22:38:31 INFO: Using tagger model in /cephyr/users/cleland/Alvis/stanza_resources/sv/pos/diachronic.pt for sv_diachronic
+ 2025-12-03 22:38:31 INFO: Using model /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt for forward charlm
+ 2025-12-03 22:38:31 INFO: Using model /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt for backward charlm
+ Augmented 192 quotes: Counter({'„”': 28, '""': 27, '「」': 20, '″″': 20, '»«': 18, '«»': 18, '《》': 17, '””': 17, '„“': 15, '““': 12})
+ 2025-12-03 22:38:34 INFO: Running tagger to retag /local/tmp.5441282/tmpi3ipyceb/sv_diachronic.train.gold.conllu to /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.train.in.conllu
+ Args: ['--wordvec_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain', '--lang', 'sv', '--shorthand', 'sv_diachronic', '--mode', 'predict', '--save_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pos', '--save_name', 'diachronic.pt', '--wordvec_pretrain_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt', '--charlm', '--charlm_shorthand', 'sv_conll17', '--charlm_forward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt', '--charlm_backward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt', '--eval_file', '/local/tmp.5441282/tmpi3ipyceb/sv_diachronic.train.gold.conllu', '--output_file', '/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.train.in.conllu']
+ 2025-12-03 22:38:34 INFO: Running tagger in predict mode
+ 2025-12-03 22:38:34 INFO: Loading model from: /cephyr/users/cleland/Alvis/stanza_resources/sv/pos/diachronic.pt
+ 2025-12-03 22:38:36 DEBUG: Loaded pretrain from /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt
+ 2025-12-03 22:38:36 DEBUG: POS model loading charmodels: /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt and /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-03 22:38:36 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt
+ 2025-12-03 22:38:36 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-03 22:38:37 DEBUG: Building Adam with lr=0.003000, betas=(0.9, 0.95), eps=0.000001
+ 2025-12-03 22:38:39 INFO: Loading data with batch size 250...
+ 2025-12-03 22:39:18 INFO: Start evaluation...
+ 2025-12-03 22:42:53 INFO: UPOS XPOS UFeats AllTags
+ 2025-12-03 22:42:53 INFO: 99.37 88.39 98.13 87.63
+ 2025-12-03 22:42:53 INFO: POS Tagger score: sv_diachronic 87.63
+ 2025-12-03 22:42:53 INFO: Running tagger to retag /local/tmp.5441282/tmpi3ipyceb/sv_diachronic.dev.gold.conllu to /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.dev.in.conllu
+ Args: ['--wordvec_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain', '--lang', 'sv', '--shorthand', 'sv_diachronic', '--mode', 'predict', '--save_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pos', '--save_name', 'diachronic.pt', '--wordvec_pretrain_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt', '--charlm', '--charlm_shorthand', 'sv_conll17', '--charlm_forward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt', '--charlm_backward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt', '--eval_file', '/local/tmp.5441282/tmpi3ipyceb/sv_diachronic.dev.gold.conllu', '--output_file', '/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.dev.in.conllu']
+ 2025-12-03 22:42:53 INFO: Running tagger in predict mode
+ 2025-12-03 22:42:53 INFO: Loading model from: /cephyr/users/cleland/Alvis/stanza_resources/sv/pos/diachronic.pt
+ 2025-12-03 22:42:55 DEBUG: Loaded pretrain from /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt
+ 2025-12-03 22:42:55 DEBUG: POS model loading charmodels: /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt and /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-03 22:42:55 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt
+ 2025-12-03 22:42:55 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-03 22:42:55 DEBUG: Building Adam with lr=0.003000, betas=(0.9, 0.95), eps=0.000001
+ 2025-12-03 22:42:55 INFO: Loading data with batch size 250...
+ 2025-12-03 22:42:55 INFO: Start evaluation...
+ 2025-12-03 22:42:56 INFO: UPOS XPOS UFeats AllTags
+ 2025-12-03 22:42:56 INFO: 93.32 90.84 93.32 85.64
+ 2025-12-03 22:42:56 INFO: POS Tagger score: sv_diachronic 85.64
+ 2025-12-03 22:42:56 INFO: Running tagger to retag /local/tmp.5441282/tmpi3ipyceb/sv_diachronic.test.gold.conllu to /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.test.in.conllu
+ Args: ['--wordvec_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain', '--lang', 'sv', '--shorthand', 'sv_diachronic', '--mode', 'predict', '--save_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pos', '--save_name', 'diachronic.pt', '--wordvec_pretrain_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt', '--charlm', '--charlm_shorthand', 'sv_conll17', '--charlm_forward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt', '--charlm_backward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt', '--eval_file', '/local/tmp.5441282/tmpi3ipyceb/sv_diachronic.test.gold.conllu', '--output_file', '/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.test.in.conllu']
+ 2025-12-03 22:42:56 INFO: Running tagger in predict mode
+ 2025-12-03 22:42:56 INFO: Loading model from: /cephyr/users/cleland/Alvis/stanza_resources/sv/pos/diachronic.pt
+ 2025-12-03 22:42:57 DEBUG: Loaded pretrain from /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt
+ 2025-12-03 22:42:57 DEBUG: POS model loading charmodels: /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt and /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-03 22:42:57 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt
+ 2025-12-03 22:42:57 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-03 22:42:58 DEBUG: Building Adam with lr=0.003000, betas=(0.9, 0.95), eps=0.000001
+ 2025-12-03 22:42:58 INFO: Loading data with batch size 250...
+ 2025-12-03 22:42:58 INFO: Start evaluation...
+ 2025-12-03 22:42:58 INFO: UPOS XPOS UFeats AllTags
+ 2025-12-03 22:42:58 INFO: 93.14 96.78 95.32 90.28
+ 2025-12-03 22:42:58 INFO: POS Tagger score: sv_diachronic 90.28
172
+ Preparing data for UD_Swedish-diachronic: sv_diachronic, sv
173
+ Reading from /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud/UD_Swedish-diachronic/sv_diachronic-ud-train.conllu and writing to /local/tmp.5441282/tmpi3ipyceb/sv_diachronic.train.gold.conllu
174
+ Swapped 'w1, w2' for 'w1 ,w2' 132 times
175
+ Added 506 new sentences with asdf, zzzz -> asdf,zzzz
176
+ Reading from /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud/UD_Swedish-diachronic/sv_diachronic-ud-dev.conllu and writing to /local/tmp.5441282/tmpi3ipyceb/sv_diachronic.dev.gold.conllu
177
+ Reading from /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud/UD_Swedish-diachronic/sv_diachronic-ud-test.conllu and writing to /local/tmp.5441282/tmpi3ipyceb/sv_diachronic.test.gold.conllu
178
+ Running stanza dependency parser training…
179
+ 2025-12-03 22:43:12 INFO: Training program called with:
180
+ /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/stanza/utils/training/run_depparse.py UD_Swedish-diachronic --wordvec_pretrain_file /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt --batch_size 32 --dropout 0.33
181
+ 2025-12-03 22:43:12 DEBUG: UD_Swedish-diachronic: sv_diachronic
182
+ 2025-12-03 22:43:12 INFO: Using model /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt for forward charlm
183
+ 2025-12-03 22:43:12 INFO: Using model /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt for backward charlm
184
+ 2025-12-03 22:43:12 INFO: UD_Swedish-diachronic: saved_models/depparse/sv_diachronic_charlm_parser.pt does not exist, training new model
185
+ 2025-12-03 22:43:12 INFO: Using model /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt for forward charlm
186
+ 2025-12-03 22:43:12 INFO: Using model /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt for backward charlm
187
+ 2025-12-03 22:43:12 INFO: Running train depparse for UD_Swedish-diachronic with args ['--wordvec_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain', '--train_file', '/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.train.in.conllu', '--eval_file', '/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.dev.in.conllu', '--batch_size', '5000', '--lang', 'sv', '--shorthand', 'sv_diachronic', '--mode', 'train', '--charlm', '--charlm_shorthand', 'sv_conll17', '--charlm_forward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt', '--charlm_backward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt', '--wordvec_pretrain_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt', '--batch_size', '32', '--dropout', '0.33']
188
+ 2025-12-03 22:43:12 INFO: Running parser in train mode
189
+ 2025-12-03 22:43:12 INFO: Using pretrained contextualized char embedding
190
+ 2025-12-03 22:43:12 INFO: Loading data with batch size 32...
191
+ 2025-12-03 22:43:20 INFO: Train File /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.train.in.conllu, Data Size: 66758
192
+ 2025-12-03 22:43:20 INFO: Original data size: 66758
193
+ 2025-12-03 22:43:21 INFO: Augmented data size: 66875
194
+ 2025-12-03 22:43:38 WARNING: sv_diachronic is not a known dataset. Examining the data to choose which xpos vocab to use
195
+ 2025-12-03 22:43:38 INFO: Original length = 66875
196
+ 2025-12-03 22:43:38 INFO: Filtered length = 66875
197
+ 2025-12-03 22:43:55 WARNING: Chose XPOSDescription(xpos_type=<XPOSType.XPOS: 1>, sep='|') for the xpos factory for sv_diachronic
198
+ 2025-12-03 22:44:01 DEBUG: Loaded pretrain from /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt
199
+ 2025-12-03 22:44:15 DEBUG: 38839 batches created.
200
+ 2025-12-03 22:44:15 DEBUG: 9 batches created.
201
+ 2025-12-03 22:44:15 INFO: Training parser...
202
+ 2025-12-03 22:44:15 DEBUG: Depparse model loading charmodels: /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt and /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
203
+ 2025-12-03 22:44:15 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt
204
+ 2025-12-03 22:44:15 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
205
+ 2025-12-03 22:44:16 DEBUG: Building Adam with lr=0.003000, betas=(0.9, 0.95), eps=0.000001
206
+ 2025-12-03 22:44:24 INFO: Finished STEP 20/50000, loss = 6.296603 (0.201 sec/batch), lr: 0.003000
207
+ 2025-12-03 22:44:27 INFO: Finished STEP 40/50000, loss = 5.081994 (0.181 sec/batch), lr: 0.003000
208
+ 2025-12-03 22:44:31 INFO: Finished STEP 60/50000, loss = 4.866196 (0.144 sec/batch), lr: 0.003000
+ 2025-12-03 22:44:34 INFO: Finished STEP 80/50000, loss = 4.271401 (0.138 sec/batch), lr: 0.003000
+ 2025-12-03 22:44:36 INFO: Finished STEP 100/50000, loss = 4.687382 (0.143 sec/batch), lr: 0.003000
+ 2025-12-03 22:44:36 INFO: Evaluating on dev set...
+ 2025-12-03 22:44:37 INFO: LAS MLAS BLEX
+ 2025-12-03 22:44:37 INFO: 43.56 33.47 35.15
+ 2025-12-03 22:44:37 INFO: step 100: train_loss = 7.396650, dev_score = 0.4356
+ 2025-12-03 22:44:37 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 22:44:37 INFO: new best model saved.
+ 2025-12-03 22:44:38 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:44:38 INFO: new model checkpoint saved.
+ 2025-12-03 22:44:41 INFO: Finished STEP 120/50000, loss = 3.784610 (0.125 sec/batch), lr: 0.003000
+ 2025-12-03 22:44:43 INFO: Finished STEP 140/50000, loss = 4.674234 (0.131 sec/batch), lr: 0.003000
+ 2025-12-03 22:44:46 INFO: Finished STEP 160/50000, loss = 4.012815 (0.137 sec/batch), lr: 0.003000
+ 2025-12-03 22:44:48 INFO: Finished STEP 180/50000, loss = 4.966728 (0.122 sec/batch), lr: 0.003000
+ 2025-12-03 22:44:51 INFO: Finished STEP 200/50000, loss = 4.140534 (0.121 sec/batch), lr: 0.003000
+ 2025-12-03 22:44:51 INFO: Evaluating on dev set...
+ 2025-12-03 22:44:51 INFO: LAS MLAS BLEX
+ 2025-12-03 22:44:51 INFO: 51.98 38.30 43.83
+ 2025-12-03 22:44:51 INFO: step 200: train_loss = 4.683159, dev_score = 0.5198
+ 2025-12-03 22:44:51 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 22:44:51 INFO: new best model saved.
+ 2025-12-03 22:44:52 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:44:52 INFO: new model checkpoint saved.
+ 2025-12-03 22:44:55 INFO: Finished STEP 220/50000, loss = 3.998342 (0.126 sec/batch), lr: 0.003000
+ 2025-12-03 22:44:57 INFO: Finished STEP 240/50000, loss = 5.531576 (0.124 sec/batch), lr: 0.003000
+ 2025-12-03 22:44:59 INFO: Finished STEP 260/50000, loss = 4.806288 (0.116 sec/batch), lr: 0.003000
+ 2025-12-03 22:45:02 INFO: Finished STEP 280/50000, loss = 4.965835 (0.111 sec/batch), lr: 0.003000
+ 2025-12-03 22:45:04 INFO: Finished STEP 300/50000, loss = 3.812420 (0.115 sec/batch), lr: 0.003000
+ 2025-12-03 22:45:04 INFO: Evaluating on dev set...
+ 2025-12-03 22:45:04 INFO: LAS MLAS BLEX
+ 2025-12-03 22:45:04 INFO: 55.45 44.40 49.47
+ 2025-12-03 22:45:04 INFO: step 300: train_loss = 4.276576, dev_score = 0.5545
+ 2025-12-03 22:45:05 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 22:45:05 INFO: new best model saved.
+ 2025-12-03 22:45:05 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:45:05 INFO: new model checkpoint saved.
+ 2025-12-03 22:45:07 INFO: Finished STEP 320/50000, loss = 3.429461 (0.111 sec/batch), lr: 0.003000
+ 2025-12-03 22:45:10 INFO: Finished STEP 340/50000, loss = 3.642277 (0.118 sec/batch), lr: 0.003000
+ 2025-12-03 22:45:12 INFO: Finished STEP 360/50000, loss = 3.760803 (0.116 sec/batch), lr: 0.003000
+ 2025-12-03 22:45:14 INFO: Finished STEP 380/50000, loss = 4.786219 (0.105 sec/batch), lr: 0.003000
+ 2025-12-03 22:45:16 INFO: Finished STEP 400/50000, loss = 2.972147 (0.108 sec/batch), lr: 0.003000
+ 2025-12-03 22:45:16 INFO: Evaluating on dev set...
+ 2025-12-03 22:45:17 INFO: LAS MLAS BLEX
+ 2025-12-03 22:45:17 INFO: 57.43 46.61 50.85
+ 2025-12-03 22:45:17 INFO: step 400: train_loss = 3.942294, dev_score = 0.5743
+ 2025-12-03 22:45:17 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 22:45:17 INFO: new best model saved.
+ 2025-12-03 22:45:18 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:45:18 INFO: new model checkpoint saved.
+ 2025-12-03 22:45:20 INFO: Finished STEP 420/50000, loss = 4.565236 (0.105 sec/batch), lr: 0.003000
+ 2025-12-03 22:45:22 INFO: Finished STEP 440/50000, loss = 4.531408 (0.098 sec/batch), lr: 0.003000
+ 2025-12-03 22:45:24 INFO: Finished STEP 460/50000, loss = 5.082150 (0.090 sec/batch), lr: 0.003000
+ 2025-12-03 22:45:26 INFO: Finished STEP 480/50000, loss = 2.923217 (0.110 sec/batch), lr: 0.003000
+ 2025-12-03 22:45:28 INFO: Finished STEP 500/50000, loss = 3.408716 (0.100 sec/batch), lr: 0.003000
+ 2025-12-03 22:45:28 INFO: Evaluating on dev set...
+ 2025-12-03 22:45:28 INFO: LAS MLAS BLEX
+ 2025-12-03 22:45:28 INFO: 60.89 51.48 55.27
+ 2025-12-03 22:45:28 INFO: step 500: train_loss = 4.009553, dev_score = 0.6089
+ 2025-12-03 22:45:29 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 22:45:29 INFO: new best model saved.
+ 2025-12-03 22:45:29 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:45:29 INFO: new model checkpoint saved.
+ 2025-12-03 22:45:31 INFO: Finished STEP 520/50000, loss = 5.306584 (0.101 sec/batch), lr: 0.003000
+ 2025-12-03 22:45:33 INFO: Finished STEP 540/50000, loss = 3.657916 (0.100 sec/batch), lr: 0.003000
+ 2025-12-03 22:45:35 INFO: Finished STEP 560/50000, loss = 4.001961 (0.086 sec/batch), lr: 0.003000
+ 2025-12-03 22:45:37 INFO: Finished STEP 580/50000, loss = 4.052956 (0.085 sec/batch), lr: 0.003000
+ 2025-12-03 22:45:39 INFO: Finished STEP 600/50000, loss = 3.852856 (0.099 sec/batch), lr: 0.003000
+ 2025-12-03 22:45:39 INFO: Evaluating on dev set...
+ 2025-12-03 22:45:40 INFO: LAS MLAS BLEX
+ 2025-12-03 22:45:40 INFO: 59.90 50.00 55.51
+ 2025-12-03 22:45:40 INFO: step 600: train_loss = 3.724004, dev_score = 0.5990
+ 2025-12-03 22:45:40 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:45:40 INFO: new model checkpoint saved.
+ 2025-12-03 22:45:42 INFO: Finished STEP 620/50000, loss = 3.484413 (0.101 sec/batch), lr: 0.003000
+ 2025-12-03 22:45:44 INFO: Finished STEP 640/50000, loss = 3.220206 (0.102 sec/batch), lr: 0.003000
+ 2025-12-03 22:45:46 INFO: Finished STEP 660/50000, loss = 3.800846 (0.094 sec/batch), lr: 0.003000
+ 2025-12-03 22:45:48 INFO: Finished STEP 680/50000, loss = 3.769488 (0.093 sec/batch), lr: 0.003000
+ 2025-12-03 22:45:50 INFO: Finished STEP 700/50000, loss = 3.285978 (0.097 sec/batch), lr: 0.003000
+ 2025-12-03 22:45:50 INFO: Evaluating on dev set...
+ 2025-12-03 22:45:50 INFO: LAS MLAS BLEX
+ 2025-12-03 22:45:50 INFO: 57.92 46.93 53.70
+ 2025-12-03 22:45:50 INFO: step 700: train_loss = 3.829402, dev_score = 0.5792
+ 2025-12-03 22:45:51 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:45:51 INFO: new model checkpoint saved.
+ 2025-12-03 22:45:53 INFO: Finished STEP 720/50000, loss = 4.905636 (0.100 sec/batch), lr: 0.003000
+ 2025-12-03 22:45:55 INFO: Finished STEP 740/50000, loss = 3.337056 (0.102 sec/batch), lr: 0.003000
+ 2025-12-03 22:45:57 INFO: Finished STEP 760/50000, loss = 5.074053 (0.092 sec/batch), lr: 0.003000
+ 2025-12-03 22:45:59 INFO: Finished STEP 780/50000, loss = 4.024767 (0.089 sec/batch), lr: 0.003000
+ 2025-12-03 22:46:00 INFO: Finished STEP 800/50000, loss = 4.552472 (0.091 sec/batch), lr: 0.003000
+ 2025-12-03 22:46:00 INFO: Evaluating on dev set...
+ 2025-12-03 22:46:01 INFO: LAS MLAS BLEX
+ 2025-12-03 22:46:01 INFO: 61.39 49.05 53.70
+ 2025-12-03 22:46:01 INFO: step 800: train_loss = 3.612613, dev_score = 0.6139
+ 2025-12-03 22:46:01 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 22:46:01 INFO: new best model saved.
+ 2025-12-03 22:46:02 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:46:02 INFO: new model checkpoint saved.
+ 2025-12-03 22:46:04 INFO: Finished STEP 820/50000, loss = 6.260066 (0.090 sec/batch), lr: 0.003000
+ 2025-12-03 22:46:05 INFO: Finished STEP 840/50000, loss = 4.288107 (0.092 sec/batch), lr: 0.003000
+ 2025-12-03 22:46:07 INFO: Finished STEP 860/50000, loss = 3.849105 (0.097 sec/batch), lr: 0.003000
+ 2025-12-03 22:46:09 INFO: Finished STEP 880/50000, loss = 2.492743 (0.088 sec/batch), lr: 0.003000
+ 2025-12-03 22:46:11 INFO: Finished STEP 900/50000, loss = 4.095746 (0.090 sec/batch), lr: 0.003000
+ 2025-12-03 22:46:11 INFO: Evaluating on dev set...
+ 2025-12-03 22:46:11 INFO: LAS MLAS BLEX
+ 2025-12-03 22:46:11 INFO: 60.40 51.68 55.04
+ 2025-12-03 22:46:11 INFO: step 900: train_loss = 3.946219, dev_score = 0.6040
+ 2025-12-03 22:46:12 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:46:12 INFO: new model checkpoint saved.
+ 2025-12-03 22:46:14 INFO: Finished STEP 920/50000, loss = 2.770604 (0.084 sec/batch), lr: 0.003000
+ 2025-12-03 22:46:16 INFO: Finished STEP 940/50000, loss = 3.612128 (0.100 sec/batch), lr: 0.003000
+ 2025-12-03 22:46:17 INFO: Finished STEP 960/50000, loss = 4.249920 (0.089 sec/batch), lr: 0.003000
+ 2025-12-03 22:46:19 INFO: Finished STEP 980/50000, loss = 2.418294 (0.091 sec/batch), lr: 0.003000
+ 2025-12-03 22:46:21 INFO: Finished STEP 1000/50000, loss = 4.842584 (0.094 sec/batch), lr: 0.003000
+ 2025-12-03 22:46:21 INFO: Evaluating on dev set...
+ 2025-12-03 22:46:21 INFO: LAS MLAS BLEX
+ 2025-12-03 22:46:21 INFO: 61.39 53.70 56.66
+ 2025-12-03 22:46:21 INFO: step 1000: train_loss = 3.696227, dev_score = 0.6139
+ 2025-12-03 22:46:22 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 22:46:22 INFO: new best model saved.
+ 2025-12-03 22:46:22 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:46:22 INFO: new model checkpoint saved.
+ 2025-12-03 22:46:24 INFO: Finished STEP 1020/50000, loss = 3.346077 (0.088 sec/batch), lr: 0.003000
+ 2025-12-03 22:46:26 INFO: Finished STEP 1040/50000, loss = 3.296835 (0.087 sec/batch), lr: 0.003000
+ 2025-12-03 22:46:28 INFO: Finished STEP 1060/50000, loss = 2.515590 (0.088 sec/batch), lr: 0.003000
+ 2025-12-03 22:46:29 INFO: Finished STEP 1080/50000, loss = 3.913376 (0.075 sec/batch), lr: 0.003000
+ 2025-12-03 22:46:31 INFO: Finished STEP 1100/50000, loss = 5.241524 (0.077 sec/batch), lr: 0.003000
+ 2025-12-03 22:46:31 INFO: Evaluating on dev set...
+ 2025-12-03 22:46:31 INFO: LAS MLAS BLEX
+ 2025-12-03 22:46:31 INFO: 59.65 49.47 53.28
+ 2025-12-03 22:46:31 INFO: step 1100: train_loss = 3.721023, dev_score = 0.5965
+ 2025-12-03 22:46:32 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:46:32 INFO: new model checkpoint saved.
+ 2025-12-03 22:46:34 INFO: Finished STEP 1120/50000, loss = 2.731287 (0.086 sec/batch), lr: 0.003000
+ 2025-12-03 22:46:36 INFO: Finished STEP 1140/50000, loss = 3.536034 (0.080 sec/batch), lr: 0.003000
+ 2025-12-03 22:46:37 INFO: Finished STEP 1160/50000, loss = 3.398331 (0.079 sec/batch), lr: 0.003000
+ 2025-12-03 22:46:39 INFO: Finished STEP 1180/50000, loss = 5.028436 (0.078 sec/batch), lr: 0.003000
+ 2025-12-03 22:46:41 INFO: Finished STEP 1200/50000, loss = 3.061586 (0.077 sec/batch), lr: 0.003000
+ 2025-12-03 22:46:41 INFO: Evaluating on dev set...
+ 2025-12-03 22:46:41 INFO: LAS MLAS BLEX
+ 2025-12-03 22:46:41 INFO: 61.14 53.28 56.66
+ 2025-12-03 22:46:41 INFO: step 1200: train_loss = 3.522418, dev_score = 0.6114
+ 2025-12-03 22:46:42 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:46:42 INFO: new model checkpoint saved.
+ 2025-12-03 22:46:43 INFO: Finished STEP 1220/50000, loss = 2.513215 (0.089 sec/batch), lr: 0.003000
+ 2025-12-03 22:46:45 INFO: Finished STEP 1240/50000, loss = 4.864305 (0.085 sec/batch), lr: 0.003000
+ 2025-12-03 22:46:47 INFO: Finished STEP 1260/50000, loss = 7.466803 (0.082 sec/batch), lr: 0.003000
+ 2025-12-03 22:46:49 INFO: Finished STEP 1280/50000, loss = 3.586653 (0.086 sec/batch), lr: 0.003000
+ 2025-12-03 22:46:50 INFO: Finished STEP 1300/50000, loss = 3.791464 (0.078 sec/batch), lr: 0.003000
+ 2025-12-03 22:46:50 INFO: Evaluating on dev set...
+ 2025-12-03 22:46:51 INFO: LAS MLAS BLEX
+ 2025-12-03 22:46:51 INFO: 61.63 52.01 55.81
+ 2025-12-03 22:46:51 INFO: step 1300: train_loss = 3.743181, dev_score = 0.6163
+ 2025-12-03 22:46:51 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 22:46:51 INFO: new best model saved.
+ 2025-12-03 22:46:51 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:46:51 INFO: new model checkpoint saved.
+ 2025-12-03 22:46:53 INFO: Finished STEP 1320/50000, loss = 2.740164 (0.083 sec/batch), lr: 0.003000
+ 2025-12-03 22:46:55 INFO: Finished STEP 1340/50000, loss = 4.084477 (0.088 sec/batch), lr: 0.003000
+ 2025-12-03 22:46:56 INFO: Finished STEP 1360/50000, loss = 4.818987 (0.070 sec/batch), lr: 0.003000
+ 2025-12-03 22:46:58 INFO: Finished STEP 1380/50000, loss = 2.790164 (0.076 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:00 INFO: Finished STEP 1400/50000, loss = 4.602959 (0.081 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:00 INFO: Evaluating on dev set...
+ 2025-12-03 22:47:00 INFO: LAS MLAS BLEX
+ 2025-12-03 22:47:00 INFO: 62.13 55.08 58.05
+ 2025-12-03 22:47:00 INFO: step 1400: train_loss = 3.724900, dev_score = 0.6213
+ 2025-12-03 22:47:00 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 22:47:00 INFO: new best model saved.
+ 2025-12-03 22:47:01 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:47:01 INFO: new model checkpoint saved.
+ 2025-12-03 22:47:03 INFO: Finished STEP 1420/50000, loss = 4.015442 (0.079 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:04 INFO: Finished STEP 1440/50000, loss = 3.268615 (0.082 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:06 INFO: Finished STEP 1460/50000, loss = 3.735999 (0.080 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:07 INFO: Finished STEP 1480/50000, loss = 3.804015 (0.073 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:09 INFO: Finished STEP 1500/50000, loss = 4.517982 (0.087 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:09 INFO: Evaluating on dev set...
+ 2025-12-03 22:47:10 INFO: LAS MLAS BLEX
+ 2025-12-03 22:47:10 INFO: 61.63 52.54 56.36
+ 2025-12-03 22:47:10 INFO: step 1500: train_loss = 3.783271, dev_score = 0.6163
+ 2025-12-03 22:47:10 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:47:10 INFO: new model checkpoint saved.
+ 2025-12-03 22:47:12 INFO: Finished STEP 1520/50000, loss = 3.094217 (0.085 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:13 INFO: Finished STEP 1540/50000, loss = 3.673062 (0.073 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:15 INFO: Finished STEP 1560/50000, loss = 3.584330 (0.078 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:16 INFO: Finished STEP 1580/50000, loss = 3.714322 (0.078 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:18 INFO: Finished STEP 1600/50000, loss = 3.680001 (0.070 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:18 INFO: Evaluating on dev set...
+ 2025-12-03 22:47:18 INFO: LAS MLAS BLEX
+ 2025-12-03 22:47:18 INFO: 60.89 49.47 54.55
+ 2025-12-03 22:47:18 INFO: step 1600: train_loss = 3.629141, dev_score = 0.6089
+ 2025-12-03 22:47:19 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:47:19 INFO: new model checkpoint saved.
+ 2025-12-03 22:47:21 INFO: Finished STEP 1620/50000, loss = 4.405625 (0.076 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:22 INFO: Finished STEP 1640/50000, loss = 6.583941 (0.084 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:24 INFO: Finished STEP 1660/50000, loss = 2.624713 (0.082 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:25 INFO: Finished STEP 1680/50000, loss = 3.746646 (0.081 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:27 INFO: Finished STEP 1700/50000, loss = 3.647429 (0.078 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:27 INFO: Evaluating on dev set...
+ 2025-12-03 22:47:27 INFO: LAS MLAS BLEX
+ 2025-12-03 22:47:27 INFO: 62.38 53.59 56.96
+ 2025-12-03 22:47:27 INFO: step 1700: train_loss = 3.711031, dev_score = 0.6238
+ 2025-12-03 22:47:28 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 22:47:28 INFO: new best model saved.
+ 2025-12-03 22:47:28 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:47:28 INFO: new model checkpoint saved.
+ 2025-12-03 22:47:30 INFO: Finished STEP 1720/50000, loss = 3.788727 (0.082 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:31 INFO: Finished STEP 1740/50000, loss = 4.010241 (0.075 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:33 INFO: Finished STEP 1760/50000, loss = 3.686716 (0.081 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:35 INFO: Finished STEP 1780/50000, loss = 2.884626 (0.079 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:36 INFO: Finished STEP 1800/50000, loss = 3.530944 (0.073 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:36 INFO: Evaluating on dev set...
+ 2025-12-03 22:47:37 INFO: LAS MLAS BLEX
+ 2025-12-03 22:47:37 INFO: 63.37 53.81 58.47
+ 2025-12-03 22:47:37 INFO: step 1800: train_loss = 3.627865, dev_score = 0.6337
+ 2025-12-03 22:47:37 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 22:47:37 INFO: new best model saved.
+ 2025-12-03 22:47:37 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:47:37 INFO: new model checkpoint saved.
+ 2025-12-03 22:47:39 INFO: Finished STEP 1820/50000, loss = 3.387048 (0.078 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:40 INFO: Finished STEP 1840/50000, loss = 3.075635 (0.081 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:42 INFO: Finished STEP 1860/50000, loss = 3.544627 (0.074 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:44 INFO: Finished STEP 1880/50000, loss = 2.697122 (0.073 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:45 INFO: Finished STEP 1900/50000, loss = 4.582170 (0.074 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:45 INFO: Evaluating on dev set...
+ 2025-12-03 22:47:46 INFO: LAS MLAS BLEX
+ 2025-12-03 22:47:46 INFO: 63.12 55.81 57.93
+ 2025-12-03 22:47:46 INFO: step 1900: train_loss = 3.833838, dev_score = 0.6312
+ 2025-12-03 22:47:46 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:47:46 INFO: new model checkpoint saved.
+ 2025-12-03 22:47:48 INFO: Finished STEP 1920/50000, loss = 4.797754 (0.071 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:49 INFO: Finished STEP 1940/50000, loss = 3.999227 (0.076 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:51 INFO: Finished STEP 1960/50000, loss = 2.863396 (0.081 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:52 INFO: Finished STEP 1980/50000, loss = 3.603798 (0.074 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:54 INFO: Finished STEP 2000/50000, loss = 2.377973 (0.075 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:54 INFO: Evaluating on dev set...
+ 2025-12-03 22:47:54 INFO: LAS MLAS BLEX
+ 2025-12-03 22:47:54 INFO: 58.91 50.63 54.85
+ 2025-12-03 22:47:54 INFO: step 2000: train_loss = 3.578752, dev_score = 0.5891
+ 2025-12-03 22:47:55 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:47:55 INFO: new model checkpoint saved.
+ 2025-12-03 22:47:56 INFO: Finished STEP 2020/50000, loss = 3.455811 (0.069 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:58 INFO: Finished STEP 2040/50000, loss = 4.113457 (0.067 sec/batch), lr: 0.003000
+ 2025-12-03 22:47:59 INFO: Finished STEP 2060/50000, loss = 3.953318 (0.070 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:01 INFO: Finished STEP 2080/50000, loss = 2.824056 (0.075 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:02 INFO: Finished STEP 2100/50000, loss = 4.579782 (0.079 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:02 INFO: Evaluating on dev set...
+ 2025-12-03 22:48:03 INFO: LAS MLAS BLEX
+ 2025-12-03 22:48:03 INFO: 61.63 52.85 57.51
+ 2025-12-03 22:48:03 INFO: step 2100: train_loss = 3.729740, dev_score = 0.6163
+ 2025-12-03 22:48:03 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:48:03 INFO: new model checkpoint saved.
+ 2025-12-03 22:48:05 INFO: Finished STEP 2120/50000, loss = 3.769734 (0.077 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:06 INFO: Finished STEP 2140/50000, loss = 3.904819 (0.071 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:08 INFO: Finished STEP 2160/50000, loss = 3.850474 (0.068 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:09 INFO: Finished STEP 2180/50000, loss = 3.001653 (0.072 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:11 INFO: Finished STEP 2200/50000, loss = 1.957051 (0.074 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:11 INFO: Evaluating on dev set...
+ 2025-12-03 22:48:11 INFO: LAS MLAS BLEX
+ 2025-12-03 22:48:11 INFO: 60.64 51.16 55.39
+ 2025-12-03 22:48:11 INFO: step 2200: train_loss = 3.531071, dev_score = 0.6064
+ 2025-12-03 22:48:12 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:48:12 INFO: new model checkpoint saved.
+ 2025-12-03 22:48:13 INFO: Finished STEP 2220/50000, loss = 3.429658 (0.077 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:15 INFO: Finished STEP 2240/50000, loss = 3.791333 (0.073 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:16 INFO: Finished STEP 2260/50000, loss = 2.263699 (0.072 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:18 INFO: Finished STEP 2280/50000, loss = 3.650295 (0.067 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:19 INFO: Finished STEP 2300/50000, loss = 4.082314 (0.071 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:19 INFO: Evaluating on dev set...
+ 2025-12-03 22:48:20 INFO: LAS MLAS BLEX
+ 2025-12-03 22:48:20 INFO: 62.13 54.55 57.93
+ 2025-12-03 22:48:20 INFO: step 2300: train_loss = 3.777273, dev_score = 0.6213
+ 2025-12-03 22:48:20 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:48:20 INFO: new model checkpoint saved.
+ 2025-12-03 22:48:22 INFO: Finished STEP 2320/50000, loss = 3.704285 (0.071 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:23 INFO: Finished STEP 2340/50000, loss = 2.753342 (0.069 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:25 INFO: Finished STEP 2360/50000, loss = 3.819938 (0.072 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:26 INFO: Finished STEP 2380/50000, loss = 4.015243 (0.075 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:27 INFO: Finished STEP 2400/50000, loss = 4.291789 (0.077 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:27 INFO: Evaluating on dev set...
+ 2025-12-03 22:48:28 INFO: LAS MLAS BLEX
+ 2025-12-03 22:48:28 INFO: 60.64 53.28 57.08
+ 2025-12-03 22:48:28 INFO: step 2400: train_loss = 3.659402, dev_score = 0.6064
+ 2025-12-03 22:48:28 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:48:28 INFO: new model checkpoint saved.
+ 2025-12-03 22:48:30 INFO: Finished STEP 2420/50000, loss = 3.808647 (0.070 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:31 INFO: Finished STEP 2440/50000, loss = 3.675776 (0.070 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:33 INFO: Finished STEP 2460/50000, loss = 4.045568 (0.071 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:34 INFO: Finished STEP 2480/50000, loss = 2.708247 (0.071 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:36 INFO: Finished STEP 2500/50000, loss = 3.840647 (0.072 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:36 INFO: Evaluating on dev set...
+ 2025-12-03 22:48:36 INFO: LAS MLAS BLEX
+ 2025-12-03 22:48:36 INFO: 63.61 54.97 57.93
+ 2025-12-03 22:48:36 INFO: step 2500: train_loss = 3.616523, dev_score = 0.6361
+ 2025-12-03 22:48:36 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 22:48:36 INFO: new best model saved.
+ 2025-12-03 22:48:37 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:48:37 INFO: new model checkpoint saved.
+ 2025-12-03 22:48:38 INFO: Finished STEP 2520/50000, loss = 3.329048 (0.075 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:40 INFO: Finished STEP 2540/50000, loss = 3.042371 (0.066 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:41 INFO: Finished STEP 2560/50000, loss = 3.319040 (0.070 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:43 INFO: Finished STEP 2580/50000, loss = 4.341519 (0.056 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:44 INFO: Finished STEP 2600/50000, loss = 5.309865 (0.070 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:44 INFO: Evaluating on dev set...
+ 2025-12-03 22:48:44 INFO: LAS MLAS BLEX
+ 2025-12-03 22:48:44 INFO: 61.14 52.01 56.24
+ 2025-12-03 22:48:44 INFO: step 2600: train_loss = 3.623874, dev_score = 0.6114
+ 2025-12-03 22:48:45 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:48:45 INFO: new model checkpoint saved.
+ 2025-12-03 22:48:46 INFO: Finished STEP 2620/50000, loss = 2.560167 (0.066 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:48 INFO: Finished STEP 2640/50000, loss = 3.262659 (0.066 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:49 INFO: Finished STEP 2660/50000, loss = 2.611564 (0.072 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:51 INFO: Finished STEP 2680/50000, loss = 3.953672 (0.070 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:52 INFO: Finished STEP 2700/50000, loss = 3.726388 (0.072 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:52 INFO: Evaluating on dev set...
+ 2025-12-03 22:48:53 INFO: LAS MLAS BLEX
+ 2025-12-03 22:48:53 INFO: 61.88 53.39 57.20
+ 2025-12-03 22:48:53 INFO: step 2700: train_loss = 3.740555, dev_score = 0.6188
+ 2025-12-03 22:48:53 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:48:53 INFO: new model checkpoint saved.
+ 2025-12-03 22:48:54 INFO: Finished STEP 2720/50000, loss = 3.536426 (0.066 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:56 INFO: Finished STEP 2740/50000, loss = 4.492881 (0.072 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:57 INFO: Finished STEP 2760/50000, loss = 3.437390 (0.073 sec/batch), lr: 0.003000
+ 2025-12-03 22:48:59 INFO: Finished STEP 2780/50000, loss = 3.811538 (0.065 sec/batch), lr: 0.003000
+ 2025-12-03 22:49:00 INFO: Finished STEP 2800/50000, loss = 2.615445 (0.069 sec/batch), lr: 0.003000
+ 2025-12-03 22:49:00 INFO: Evaluating on dev set...
+ 2025-12-03 22:49:01 INFO: LAS MLAS BLEX
+ 2025-12-03 22:49:01 INFO: 61.14 51.59 56.66
+ 2025-12-03 22:49:01 INFO: step 2800: train_loss = 3.751917, dev_score = 0.6114
+ 2025-12-03 22:49:01 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:49:01 INFO: new model checkpoint saved.
+ 2025-12-03 22:49:03 INFO: Finished STEP 2820/50000, loss = 3.525189 (0.076 sec/batch), lr: 0.003000
+ 2025-12-03 22:49:04 INFO: Finished STEP 2840/50000, loss = 3.113450 (0.066 sec/batch), lr: 0.003000
+ 2025-12-03 22:49:05 INFO: Finished STEP 2860/50000, loss = 2.488067 (0.066 sec/batch), lr: 0.003000
+ 2025-12-03 22:49:07 INFO: Finished STEP 2880/50000, loss = 2.702721 (0.071 sec/batch), lr: 0.003000
+ 2025-12-03 22:49:08 INFO: Finished STEP 2900/50000, loss = 3.248651 (0.068 sec/batch), lr: 0.003000
+ 2025-12-03 22:49:08 INFO: Evaluating on dev set...
+ 2025-12-03 22:49:09 INFO: LAS MLAS BLEX
+ 2025-12-03 22:49:09 INFO: 62.13 52.43 56.24
+ 2025-12-03 22:49:09 INFO: step 2900: train_loss = 3.761651, dev_score = 0.6213
+ 2025-12-03 22:49:09 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:49:09 INFO: new model checkpoint saved.
+ 2025-12-03 22:49:11 INFO: Finished STEP 2920/50000, loss = 3.129019 (0.072 sec/batch), lr: 0.003000
+ 2025-12-03 22:49:12 INFO: Finished STEP 2940/50000, loss = 3.579517 (0.067 sec/batch), lr: 0.003000
+ 2025-12-03 22:49:13 INFO: Finished STEP 2960/50000, loss = 3.505895 (0.067 sec/batch), lr: 0.003000
+ 2025-12-03 22:49:15 INFO: Finished STEP 2980/50000, loss = 2.193599 (0.073 sec/batch), lr: 0.003000
+ 2025-12-03 22:49:16 INFO: Finished STEP 3000/50000, loss = 1.882619 (0.064 sec/batch), lr: 0.003000
+ 2025-12-03 22:49:16 INFO: Evaluating on dev set...
+ 2025-12-03 22:49:17 INFO: LAS MLAS BLEX
+ 2025-12-03 22:49:17 INFO: 61.63 51.16 55.81
+ 2025-12-03 22:49:17 INFO: step 3000: train_loss = 4.014063, dev_score = 0.6163
+ 2025-12-03 22:49:17 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:49:17 INFO: new model checkpoint saved.
+ 2025-12-03 22:49:19 INFO: Finished STEP 3020/50000, loss = 5.977424 (0.058 sec/batch), lr: 0.003000
+ 2025-12-03 22:49:20 INFO: Finished STEP 3040/50000, loss = 3.319470 (0.063 sec/batch), lr: 0.003000
+ 2025-12-03 22:49:21 INFO: Finished STEP 3060/50000, loss = 3.211635 (0.069 sec/batch), lr: 0.003000
+ 2025-12-03 22:49:23 INFO: Finished STEP 3080/50000, loss = 3.183325 (0.065 sec/batch), lr: 0.003000
+ 2025-12-03 22:49:24 INFO: Finished STEP 3100/50000, loss = 2.788619 (0.069 sec/batch), lr: 0.003000
+ 2025-12-03 22:49:24 INFO: Evaluating on dev set...
+ 2025-12-03 22:49:25 INFO: LAS MLAS BLEX
+ 2025-12-03 22:49:25 INFO: 60.89 51.05 56.54
+ 2025-12-03 22:49:25 INFO: step 3100: train_loss = 3.763229, dev_score = 0.6089
+ 2025-12-03 22:49:25 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:49:25 INFO: new model checkpoint saved.
+ 2025-12-03 22:49:27 INFO: Finished STEP 3120/50000, loss = 4.927651 (0.063 sec/batch), lr: 0.003000
+ 2025-12-03 22:49:28 INFO: Finished STEP 3140/50000, loss = 3.578332 (0.068 sec/batch), lr: 0.003000
+ 2025-12-03 22:49:29 INFO: Finished STEP 3160/50000, loss = 3.173575 (0.076 sec/batch), lr: 0.003000
+ 2025-12-03 22:49:31 INFO: Finished STEP 3180/50000, loss = 4.860454 (0.075 sec/batch), lr: 0.003000
+ 2025-12-03 22:49:32 INFO: Finished STEP 3200/50000, loss = 1.778136 (0.065 sec/batch), lr: 0.003000
+ 2025-12-03 22:49:32 INFO: Evaluating on dev set...
+ 2025-12-03 22:49:33 INFO: LAS MLAS BLEX
+ 2025-12-03 22:49:33 INFO: 60.15 50.74 54.12
+ 2025-12-03 22:49:33 INFO: step 3200: train_loss = 3.791774, dev_score = 0.6015
+ 2025-12-03 22:49:33 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:49:33 INFO: new model checkpoint saved.
+ 2025-12-03 22:49:35 INFO: Finished STEP 3220/50000, loss = 8.492014 (0.070 sec/batch), lr: 0.003000
+ 2025-12-03 22:49:36 INFO: Finished STEP 3240/50000, loss = 3.045755 (0.064 sec/batch), lr: 0.003000
+ 2025-12-03 22:49:37 INFO: Finished STEP 3260/50000, loss = 4.122291 (0.076 sec/batch), lr: 0.003000
+ 2025-12-03 22:49:39 INFO: Finished STEP 3280/50000, loss = 3.458145 (0.063 sec/batch), lr: 0.003000
586
+ 2025-12-03 22:49:40 INFO: Finished STEP 3300/50000, loss = 2.681028 (0.064 sec/batch), lr: 0.003000
587
+ 2025-12-03 22:49:40 INFO: Evaluating on dev set...
588
+ 2025-12-03 22:49:41 INFO: LAS MLAS BLEX
589
+ 2025-12-03 22:49:41 INFO: 61.88 51.59 55.39
590
+ 2025-12-03 22:49:41 INFO: step 3300: train_loss = 3.750064, dev_score = 0.6188
591
+ 2025-12-03 22:49:41 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
592
+ 2025-12-03 22:49:41 INFO: new model checkpoint saved.
593
+ 2025-12-03 22:49:42 INFO: Finished STEP 3320/50000, loss = 3.499715 (0.066 sec/batch), lr: 0.003000
594
+ 2025-12-03 22:49:44 INFO: Finished STEP 3340/50000, loss = 3.765631 (0.064 sec/batch), lr: 0.003000
595
+ 2025-12-03 22:49:45 INFO: Finished STEP 3360/50000, loss = 2.908885 (0.064 sec/batch), lr: 0.003000
596
+ 2025-12-03 22:49:47 INFO: Finished STEP 3380/50000, loss = 4.147782 (0.067 sec/batch), lr: 0.003000
597
+ 2025-12-03 22:49:48 INFO: Finished STEP 3400/50000, loss = 4.854831 (0.073 sec/batch), lr: 0.003000
598
+ 2025-12-03 22:49:48 INFO: Evaluating on dev set...
599
+ 2025-12-03 22:49:48 INFO: LAS MLAS BLEX
600
+ 2025-12-03 22:49:48 INFO: 61.39 51.59 54.55
601
+ 2025-12-03 22:49:48 INFO: step 3400: train_loss = 3.827870, dev_score = 0.6139
602
+ 2025-12-03 22:49:49 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
603
+ 2025-12-03 22:49:49 INFO: new model checkpoint saved.
604
+ 2025-12-03 22:49:50 INFO: Finished STEP 3420/50000, loss = 3.672020 (0.066 sec/batch), lr: 0.003000
605
+ 2025-12-03 22:49:52 INFO: Finished STEP 3440/50000, loss = 3.315593 (0.064 sec/batch), lr: 0.003000
606
+ 2025-12-03 22:49:53 INFO: Finished STEP 3460/50000, loss = 2.334443 (0.064 sec/batch), lr: 0.003000
607
+ 2025-12-03 22:49:54 INFO: Finished STEP 3480/50000, loss = 4.692723 (0.061 sec/batch), lr: 0.003000
608
+ 2025-12-03 22:49:56 INFO: Finished STEP 3500/50000, loss = 2.920178 (0.064 sec/batch), lr: 0.003000
609
+ 2025-12-03 22:49:56 INFO: Evaluating on dev set...
610
+ 2025-12-03 22:49:56 INFO: LAS MLAS BLEX
611
+ 2025-12-03 22:49:56 INFO: 60.40 50.74 53.70
612
+ 2025-12-03 22:49:56 INFO: step 3500: train_loss = 3.750002, dev_score = 0.6040
613
+ 2025-12-03 22:49:56 INFO: Training ended with 3500 steps.
614
+ 2025-12-03 22:49:56 INFO: Best dev F1 = 63.61, at iteration = 2500
615
+ 2025-12-03 22:49:57 INFO: Running dev depparse for UD_Swedish-diachronic with args ['--wordvec_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain', '--eval_file', '/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.dev.in.conllu', '--lang', 'sv', '--shorthand', 'sv_diachronic', '--mode', 'predict', '--charlm', '--charlm_shorthand', 'sv_conll17', '--charlm_forward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt', '--charlm_backward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt', '--wordvec_pretrain_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt', '--batch_size', '32', '--dropout', '0.33']
616
+ 2025-12-03 22:49:57 INFO: Running parser in predict mode
617
+ 2025-12-03 22:49:57 INFO: Loading model from: saved_models/depparse/sv_diachronic_charlm_parser.pt
618
+ 2025-12-03 22:49:59 DEBUG: Loaded pretrain from /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt
619
+ 2025-12-03 22:49:59 DEBUG: Depparse model loading charmodels: /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt and /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
620
+ 2025-12-03 22:49:59 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt
621
+ 2025-12-03 22:49:59 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
622
+ 2025-12-03 22:50:00 DEBUG: Building Adam with lr=0.003000, betas=(0.9, 0.95), eps=0.000001
623
+ 2025-12-03 22:50:00 INFO: Loading data with batch size 32...
624
+ 2025-12-03 22:50:00 DEBUG: 9 batches created.
625
+ 2025-12-03 22:50:00 INFO: F1 scores for each dependency:
626
+ Note that unlabeled attachment errors hurt the labeled attachment scores
627
+ acl: p 0.0000 r 0.0000 f1 0.0000 (3 actual)
628
+ acl:relcl: p 0.2222 r 0.2857 f1 0.2500 (7 actual)
629
+ advcl: p 0.1667 r 0.2000 f1 0.1818 (5 actual)
630
+ advmod: p 0.4828 r 0.5600 f1 0.5185 (25 actual)
631
+ amod: p 0.8889 r 0.7742 f1 0.8276 (31 actual)
632
+ appos: p 1.0000 r 0.5000 f1 0.6667 (4 actual)
633
+ aux: p 0.8182 r 0.8182 f1 0.8182 (11 actual)
634
+ case: p 0.9444 r 0.9107 f1 0.9273 (56 actual)
635
+ cc: p 0.6923 r 0.6923 f1 0.6923 (13 actual)
636
+ ccomp: p 0.0000 r 0.0000 f1 0.0000 (2 actual)
637
+ compound:prt: p 0.0000 r 0.0000 f1 0.0000 (0 actual)
638
+ conj: p 0.3750 r 0.5000 f1 0.4286 (12 actual)
639
+ cop: p 0.5000 r 0.3333 f1 0.4000 (3 actual)
640
+ csubj: p 0.0000 r 0.0000 f1 0.0000 (2 actual)
641
+ det: p 0.8696 r 0.9091 f1 0.8889 (22 actual)
642
+ expl: p 0.0000 r 0.0000 f1 0.0000 (1 actual)
643
+ iobj: p 0.3333 r 0.5000 f1 0.4000 (2 actual)
644
+ mark: p 0.6364 r 0.5833 f1 0.6087 (12 actual)
645
+ nmod: p 0.4118 r 0.4667 f1 0.4375 (15 actual)
646
+ nmod:poss: p 1.0000 r 0.8947 f1 0.9444 (19 actual)
647
+ nsubj: p 0.5200 r 0.7647 f1 0.6190 (17 actual)
648
+ nsubj:pass: p 0.0000 r 0.0000 f1 0.0000 (5 actual)
649
+ obj: p 0.5769 r 0.6818 f1 0.6250 (22 actual)
650
+ obl: p 0.5714 r 0.5854 f1 0.5783 (41 actual)
651
+ obl:agent: p 0.0000 r 0.0000 f1 0.0000 (1 actual)
652
+ orphan: p 0.0000 r 0.0000 f1 0.0000 (1 actual)
653
+ parataxis: p 0.0000 r 0.0000 f1 0.0000 (3 actual)
654
+ punct: p 0.4423 r 0.4423 f1 0.4423 (52 actual)
655
+ root: p 0.5556 r 0.5556 f1 0.5556 (9 actual)
656
+ xcomp: p 0.6667 r 0.2500 f1 0.3636 (8 actual)
657
+ 2025-12-03 22:50:00 INFO: LAS MLAS BLEX
658
+ 2025-12-03 22:50:00 INFO: 63.61 54.97 57.93
659
+ 2025-12-03 22:50:00 INFO: Parser score:
660
+ 2025-12-03 22:50:00 INFO: sv_diachronic 63.61
661
+ 2025-12-03 22:50:00 INFO: Finished running dev set on
662
+ UD_Swedish-diachronic
663
+ UAS LAS CLAS MLAS BLEX
664
+ 70.54 63.61 57.93 54.97 57.93
665
+ 2025-12-03 22:50:00 INFO: Running test depparse for UD_Swedish-diachronic with args ['--wordvec_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain', '--eval_file', '/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.test.in.conllu', '--lang', 'sv', '--shorthand', 'sv_diachronic', '--mode', 'predict', '--charlm', '--charlm_shorthand', 'sv_conll17', '--charlm_forward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt', '--charlm_backward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt', '--wordvec_pretrain_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt', '--batch_size', '32', '--dropout', '0.33']
666
+ 2025-12-03 22:50:00 INFO: Running parser in predict mode
667
+ 2025-12-03 22:50:00 INFO: Loading model from: saved_models/depparse/sv_diachronic_charlm_parser.pt
668
+ 2025-12-03 22:50:02 DEBUG: Loaded pretrain from /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt
669
+ 2025-12-03 22:50:02 DEBUG: Depparse model loading charmodels: /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt and /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
670
+ 2025-12-03 22:50:02 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt
671
+ 2025-12-03 22:50:02 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
672
+ 2025-12-03 22:50:02 DEBUG: Building Adam with lr=0.003000, betas=(0.9, 0.95), eps=0.000001
673
+ 2025-12-03 22:50:02 INFO: Loading data with batch size 32...
674
+ 2025-12-03 22:50:02 DEBUG: 93 batches created.
675
+ 2025-12-03 22:50:07 INFO: F1 scores for each dependency:
676
+ Note that unlabeled attachment errors hurt the labeled attachment scores
677
+ acl: p 0.3333 r 0.0312 f1 0.0571 (32 actual)
678
+ acl:cleft: p 0.0000 r 0.0000 f1 0.0000 (2 actual)
679
+ acl:relcl: p 0.3061 r 0.2000 f1 0.2419 (75 actual)
680
+ advcl: p 0.0893 r 0.1667 f1 0.1163 (60 actual)
681
+ advcl:relcl: p 0.0000 r 0.0000 f1 0.0000 (2 actual)
682
+ advmod: p 0.5745 r 0.5896 f1 0.5820 (268 actual)
683
+ amod: p 0.8139 r 0.8174 f1 0.8156 (230 actual)
684
+ appos: p 0.0000 r 0.0000 f1 0.0000 (13 actual)
685
+ aux: p 0.8554 r 0.8452 f1 0.8503 (84 actual)
686
+ aux:pass: p 0.0000 r 0.0000 f1 0.0000 (2 actual)
687
+ case: p 0.8661 r 0.8150 f1 0.8398 (373 actual)
688
+ cc: p 0.6474 r 0.6516 f1 0.6495 (155 actual)
689
+ ccomp: p 0.0000 r 0.0000 f1 0.0000 (35 actual)
690
+ compound:prt: p 0.6800 r 0.8095 f1 0.7391 (21 actual)
691
+ conj: p 0.2938 r 0.2975 f1 0.2956 (158 actual)
692
+ cop: p 0.7188 r 0.5000 f1 0.5897 (46 actual)
693
+ csubj: p 0.0000 r 0.0000 f1 0.0000 (4 actual)
694
+ dep: p 0.0000 r 0.0000 f1 0.0000 (1 actual)
695
+ det: p 0.8308 r 0.8029 f1 0.8166 (208 actual)
696
+ discourse: p 0.0000 r 0.0000 f1 0.0000 (7 actual)
697
+ dislocated: p 0.0000 r 0.0000 f1 0.0000 (1 actual)
698
+ expl: p 0.5000 r 0.0909 f1 0.1538 (11 actual)
699
+ expl:pv: p 0.0000 r 0.0000 f1 0.0000 (1 actual)
700
+ fixed: p 0.0000 r 0.0000 f1 0.0000 (8 actual)
701
+ flat: p 0.0000 r 0.0000 f1 0.0000 (4 actual)
702
+ flat:name: p 0.0000 r 0.0000 f1 0.0000 (12 actual)
703
+ goeswith: p 0.0000 r 0.0000 f1 0.0000 (2 actual)
704
+ iobj: p 0.1935 r 0.4286 f1 0.2667 (14 actual)
705
+ mark: p 0.6624 r 0.6797 f1 0.6710 (153 actual)
706
+ nmod: p 0.2661 r 0.2843 f1 0.2749 (102 actual)
707
+ nmod:poss: p 0.8865 r 0.8803 f1 0.8834 (142 actual)
708
+ nsubj: p 0.5385 r 0.6750 f1 0.5990 (280 actual)
709
+ nsubj:pass: p 0.0000 r 0.0000 f1 0.0000 (25 actual)
710
+ nummod: p 0.8000 r 0.8000 f1 0.8000 (10 actual)
711
+ obj: p 0.5556 r 0.6011 f1 0.5774 (183 actual)
712
+ obl: p 0.5030 r 0.5935 f1 0.5446 (278 actual)
713
+ obl:agent: p 0.0000 r 0.0000 f1 0.0000 (4 actual)
714
+ orphan: p 0.0000 r 0.0000 f1 0.0000 (1 actual)
715
+ parataxis: p 0.1000 r 0.2222 f1 0.1379 (18 actual)
716
+ punct: p 0.4685 r 0.4729 f1 0.4707 (425 actual)
717
+ reparandum: p 0.0000 r 0.0000 f1 0.0000 (1 actual)
718
+ root: p 0.5657 r 0.5657 f1 0.5657 (99 actual)
719
+ vocative: p 0.0000 r 0.0000 f1 0.0000 (5 actual)
720
+ xcomp: p 0.5000 r 0.3467 f1 0.4094 (75 actual)
721
+ 2025-12-03 22:50:07 INFO: LAS MLAS BLEX
722
+ 2025-12-03 22:50:07 INFO: 59.06 49.85 53.26
723
+ 2025-12-03 22:50:07 INFO: Parser score:
724
+ 2025-12-03 22:50:07 INFO: sv_diachronic 59.06
725
+ 2025-12-03 22:50:07 INFO: Finished running test set on
726
+ UD_Swedish-diachronic
727
+ UAS LAS CLAS MLAS BLEX
728
+ 68.07 59.06 53.26 49.85 53.26
729
+ DONE.
730
+ Full log saved to: logs/log_conll17.pt_sv_diachron_20251203_223822.txt
731
+ Symlink updated: logs/latest.txt → log_conll17.pt_sv_diachron_20251203_223822.txt
logs/log_conll17.pt_sv_diachron_20251212_145854.txt ADDED
@@ -0,0 +1,161 @@
1
+ === LOGFILE: logs/log_conll17.pt_sv_diachron_20251212_145854.txt ===
2
+ Language codes: sv diachron
3
+ Using pretrained model: conll17.pt
4
+
5
+ Running: python prepare-train-val-test.py sv diachron
6
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/svediakorp-sec991-spf148.conllu
7
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/svediakorp-sec988-spf145.conllu
8
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_lines-ud-dev.conllu
9
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_swell-ud-test.conllu
10
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/svediakorp-sec324-GranbergPA_Enslighetsalskaren.conllu
11
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/svediakorp-sec252-BremerF_Teckningar1.conllu
12
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/svediakorp-sec208-Anonym_DetGrasligaMordet.conllu
13
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_pud-ud-test.conllu
14
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_talbanken-ud-test.conllu
15
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/svediakorp-sec397-AngeredStrandbergH_UnderSodernsSol.conllu
16
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/svediakorp-sec452-NyblomH_FantasierFyra.conllu
17
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/svediakorp-sec1102-spf259.conllu
18
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/svediakorp-letter141673-Stalhammar.conllu
19
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/svediakorp-sec1033-spf190.conllu
20
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/svediakorp-sec268-DulciU_VitterhetsNojen3.conllu
21
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/svediakorp-sec254-CederborghF_BerattelseOmJohnHall.conllu
22
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_swell-ud-test-trg.conllu
23
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/svediakorp-sec277-EnbomPU_MedborgeligtSkalde.conllu
24
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_talbanken-ud-dev.conllu
25
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_talbanken-ud-train.conllu
26
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/svediakorp-sec330-GyllenborgC_SwenskaSpratthoken.conllu
27
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/svediakorp-sec486-SchwartzMS_BellmansSkor.conllu
28
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/svediakorp-sec631-HasselskogN_HallaHallaGronkoping.conllu
29
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_old-ud-test.conllu
30
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_lines-ud-train.conllu
31
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/svediakorp-sec25-Runius.conllu
32
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_lines-ud-test.conllu
33
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/svediakorp-sec639-HeidenstamV_Proletarfilosofiens.conllu
34
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/svediakorp-sec987-spf144.conllu
35
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/svediakorp-sec1063-spf220.conllu
36
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/svediakorp-sec613-EngstromA_StrindbergOchJag.conllu
37
+ Including DigPhil MACHINE in TRAIN (minus gold)…
38
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec330-GyllenborgC_SwenskaSpratthoken.conllu
39
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec254-CederborghF_BerattelseOmJohnHall.conllu
40
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec277-EnbomPU_MedborgeligtSkalde.conllu
41
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec268-DulciU_VitterhetsNojen3.conllu
42
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec1063-spf220.conllu
43
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec397-AngeredStrandbergH_UnderSodernsSol.conllu
44
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec324-GranbergPA_Enslighetsalskaren.conllu
45
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec252-BremerF_Teckningar1.conllu
46
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec988-spf145.conllu
47
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec987-spf144.conllu
48
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec631-HasselskogN_HallaHallaGronkoping.conllu
49
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-letter141673-Stalhammar.conllu
50
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec1033-spf190.conllu
51
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec25-Runius.conllu
52
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec486-SchwartzMS_BellmansSkor.conllu
53
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec452-NyblomH_FantasierFyra.conllu
54
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec613-EngstromA_StrindbergOchJag.conllu
55
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec208-Anonym_DetGrasligaMordet.conllu
56
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec639-HeidenstamV_Proletarfilosofiens.conllu
57
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec1102-spf259.conllu
58
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec991-spf148.conllu
59
+ Cleaning TRAIN...
60
+ [REMOVED] sent_id=33 ERRORS=['Token 15: Missing deprel']
61
+ [REMOVED] sent_id=6 ERRORS=['Line 24: Invalid token ID or head', 'Line 25: Invalid token ID or head', 'Line 29: Invalid token ID or head', 'Token 30 has invalid head 24']
62
+ [REMOVED] sent_id=7_8 ERRORS=['Multiple roots found: [5, 10]']
63
+ [REMOVED] sent_id=30_31 ERRORS=['Multiple roots found: [3, 18]']
64
+ [REMOVED] sent_id=35 ERRORS=['Line 36: Invalid token ID or head']
65
+ [REMOVED] sent_id=2_3 ERRORS=['Multiple roots found: [1, 5]']
66
+ [REMOVED] sent_id=2_3 ERRORS=['Multiple roots found: [7, 20]']
67
+ [REMOVED] sent_id=8_9 ERRORS=['Multiple roots found: [24, 57]']
68
+ [REMOVED] sent_id=12_13 ERRORS=['Multiple roots found: [11, 16]']
69
+ [REMOVED] sent_id=124_split2 ERRORS=['Line 4: Invalid token ID or head', 'No root found', 'Token 1 has invalid head 4', 'Token 2 has invalid head 4', 'Token 3 has invalid head 4', 'Token 6 has invalid head 4', 'Token 11 has invalid head 4', 'Token 15 has invalid head 4']
70
+ [REMOVED] sent_id=396 ERRORS=['Token 2: Missing form']
71
+ [REMOVED] sent_id=416 ERRORS=['Token 2: Missing form']
72
+ [REMOVED] sent_id=589 ERRORS=['Token 2: Missing form']
73
+ [REMOVED] sent_id=909 ERRORS=['Token 2: Missing form']
74
+ [REMOVED] sent_id=912 ERRORS=['Token 2: Missing form']
75
+ [REMOVED] sent_id=3_split1 ERRORS=['Multiple roots found: [4, 15, 17]']
76
+ [REMOVED] sent_id=3_split2 ERRORS=['Line 1: Invalid token ID or head', 'Line 8: Invalid token ID or head', 'Line 15: Invalid token ID or head', 'No root found', 'Token 2 has invalid head 1', 'Token 3 has invalid head 8', 'Token 4 has invalid head 8', 'Token 5 has invalid head 8', 'Token 7 has invalid head 8', 'Token 10 has invalid head 8', 'Token 13 has invalid head 8', 'Token 14 has invalid head 8']
77
+ [REMOVED] sent_id=3_4 ERRORS=['Multiple roots found: [1, 5]']
78
+ [REMOVED] sent_id=5_6 ERRORS=['Multiple roots found: [3, 24]']
79
+ [REMOVED] sent_id=11_12_13 ERRORS=['Multiple roots found: [5, 17, 25]']
80
+ [REMOVED] sent_id=119 ERRORS=['Token 2: Missing form']
81
+ [REMOVED] sent_id=179 ERRORS=['Token 2: Missing form']
82
+ [REMOVED] sent_id=188 ERRORS=['Token 2: Missing form']
83
+ [REMOVED] sent_id=223 ERRORS=['Token 2: Missing form']
84
+ [REMOVED] sent_id=268 ERRORS=['Token 2: Missing form']
85
+ [REMOVED] sent_id=325 ERRORS=['Token 2: Missing form']
86
+ [REMOVED] sent_id=388 ERRORS=['Token 2: Missing form']
87
+ [REMOVED] sent_id=399 ERRORS=['Token 2: Missing form']
88
+ [REMOVED] sent_id=475 ERRORS=['Token 2: Missing form']
89
+ [REMOVED] sent_id=505 ERRORS=['Token 2: Missing form']
90
+ [REMOVED] sent_id=520 ERRORS=['Token 2: Missing form']
91
+ [REMOVED] sent_id=562 ERRORS=['Token 2: Missing form']
92
+ [REMOVED] sent_id=669 ERRORS=['Token 2: Missing form']
93
+ [REMOVED] sent_id=711 ERRORS=['Token 2: Missing form']
94
+ [REMOVED] sent_id=731 ERRORS=['Token 2: Missing form']
95
+ [REMOVED] sent_id=867 ERRORS=['Token 2: Missing form']
96
+ [REMOVED] sent_id=884 ERRORS=['Token 2: Missing form']
97
+ [REMOVED] sent_id=923 ERRORS=['Token 2: Missing form']
98
+ [REMOVED] sent_id=939 ERRORS=['Token 2: Missing form']
99
+ [REMOVED] sent_id=1086 ERRORS=['Token 2: Missing form']
100
+ [REMOVED] sent_id=1179 ERRORS=['Token 2: Missing form']
101
+ [REMOVED] sent_id=1251 ERRORS=['Token 2: Missing form']
102
+ [REMOVED] sent_id=1345 ERRORS=['Token 2: Missing form']
103
+ [REMOVED] sent_id=1459 ERRORS=['Token 2: Missing form']
104
+ [REMOVED] sent_id=1656 ERRORS=['Token 2: Missing form']
105
+ [REMOVED] sent_id=1669 ERRORS=['Token 2: Missing form']
106
+ [REMOVED] sent_id=87_88 ERRORS=['Multiple roots found: [3, 6]']
107
+ [REMOVED] sent_id=65_split2_66_split2 ERRORS=['Line 4: Invalid token ID or head', 'Token 2 has invalid head 4', 'Token 3 has invalid head 4', 'Token 5 has invalid head 4']
108
+ [REMOVED] sent_id=25 ERRORS=['Token 2: Missing form']
109
+ [REMOVED] sent_id=136 ERRORS=['Token 2: Missing form']
110
+ [REMOVED] sent_id=208 ERRORS=['Token 2: Missing form']
111
+ [REMOVED] sent_id=230 ERRORS=['Token 2: Missing form']
112
+ [REMOVED] sent_id=245 ERRORS=['Token 2: Missing form']
113
+ [REMOVED] sent_id=276 ERRORS=['Token 2: Missing form']
114
+ [REMOVED] sent_id=320 ERRORS=['Token 2: Missing form']
115
+ [REMOVED] sent_id=366 ERRORS=['Token 2: Missing form']
116
+ [REMOVED] sent_id=519 ERRORS=['Token 2: Missing form']
117
+ [REMOVED] sent_id=569 ERRORS=['Token 2: Missing form']
118
+ [REMOVED] sent_id=50_split2 ERRORS=['Line 1: Invalid token ID or head', 'Line 6: Invalid token ID or head', 'No root found', 'Token 2 has invalid head 1']
119
+ [REMOVED] sent_id=53_54 ERRORS=['Multiple roots found: [27, 91]']
120
+ [REMOVED] sent_id=55_56_57 ERRORS=['Multiple roots found: [2, 4, 13]']
121
+ [REMOVED] sent_id=17_split1 ERRORS=['Multiple roots found: [2, 14, 17]']
122
+ [REMOVED] sent_id=17_split2 ERRORS=['Line 8: Invalid token ID or head', 'Line 25: Invalid token ID or head', 'Line 38: Invalid token ID or head', 'No root found', 'Token 3 has invalid head 8', 'Token 7 has invalid head 8', 'Token 9 has invalid head 8', 'Token 10 has invalid head 8', 'Token 17 has invalid head 8', 'Token 22 has invalid head 25', 'Token 23 has invalid head 25', 'Token 24 has invalid head 25', 'Token 26 has invalid head 25', 'Token 27 has invalid head 25', 'Token 28 has invalid head 25']
123
+ [REMOVED] sent_id=19_split1 ERRORS=['Multiple roots found: [3, 31]']
124
+ Cleaning DEV...
125
+ [REMOVED] sent_id=33 ERRORS=['Token 15: Missing deprel']
126
+ Cleaning TEST...
127
+ Writing TRAIN → /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud/UD_Swedish-diachronic/sv_diachronic-ud-train.conllu (60334 valid sentences)
128
+ Writing DEV → /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud/UD_Swedish-diachronic/sv_diachronic-ud-dev.conllu (9 valid sentences)
129
+ Writing TEST → /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud/UD_Swedish-diachronic/sv_diachronic-ud-test.conllu (99 valid sentences)
130
+ Done.
131
+ Sourcing scripts/config_alvis.sh
132
+ Running stanza dataset preparation…
133
+ 2025-12-12 14:59:02 INFO: Datasets program called with:
134
+ /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/stanza/utils/datasets/prepare_depparse_treebank.py UD_Swedish-diachronic --wordvec_pretrain_file /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt
135
+ 2025-12-12 14:59:02 DEBUG: Downloading resource file from https://raw.githubusercontent.com/stanfordnlp/stanza-resources/main/resources_1.11.0.json
136
+
137
+ 2025-12-12 14:59:02 INFO: Downloaded file to /cephyr/users/cleland/Alvis/stanza_resources/resources.json
138
+ 2025-12-12 14:59:02 DEBUG: Processing parameter "processors"...
139
+ 2025-12-12 14:59:02 WARNING: Can not find pos: diachronic from official model list. Ignoring it.
140
+ 2025-12-12 14:59:02 INFO: Downloading these customized packages for language: sv (Swedish)...
141
+ =======================
142
+ | Processor | Package |
143
+ -----------------------
144
+ =======================
145
+
146
+ 2025-12-12 14:59:02 INFO: Finished downloading models and saved to /cephyr/users/cleland/Alvis/stanza_resources
147
+ 2025-12-12 14:59:02 INFO: Using tagger model in /cephyr/users/cleland/Alvis/stanza_resources/sv/pos/diachronic.pt for sv_diachronic
148
+ 2025-12-12 14:59:02 INFO: Using model /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt for forward charlm
149
+ 2025-12-12 14:59:02 INFO: Using model /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt for backward charlm
150
+ Augmented 189 quotes: Counter({'""': 26, '「」': 23, '„”': 23, '»«': 22, '″″': 18, '””': 18, '““': 17, '《》': 17, '„“': 13, '«»': 12})
151
+ 2025-12-12 14:59:05 INFO: Running tagger to retag /local/tmp.5491708/tmpc2soyxjt/sv_diachronic.train.gold.conllu to /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.train.in.conllu
152
+ Args: ['--wordvec_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain', '--lang', 'sv', '--shorthand', 'sv_diachronic', '--mode', 'predict', '--save_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pos', '--save_name', 'diachronic.pt', '--wordvec_pretrain_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt', '--charlm', '--charlm_shorthand', 'sv_conll17', '--charlm_forward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt', '--charlm_backward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt', '--eval_file', '/local/tmp.5491708/tmpc2soyxjt/sv_diachronic.train.gold.conllu', '--output_file', '/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.train.in.conllu']
153
+ 2025-12-12 14:59:05 INFO: Running tagger in predict mode
154
+ 2025-12-12 14:59:05 INFO: Loading model from: /cephyr/users/cleland/Alvis/stanza_resources/sv/pos/diachronic.pt
155
+ 2025-12-12 14:59:07 DEBUG: Loaded pretrain from /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt
156
+ 2025-12-12 14:59:07 DEBUG: POS model loading charmodels: /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt and /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
157
+ 2025-12-12 14:59:07 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt
158
+ 2025-12-12 14:59:07 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
159
+ 2025-12-12 14:59:08 DEBUG: Building Adam with lr=0.003000, betas=(0.9, 0.95), eps=0.000001
160
+ 2025-12-12 14:59:11 INFO: Loading data with batch size 250...
161
+ ./make_new_model.sh: line 58: 3492354 Terminated python -m stanza.utils.datasets.prepare_depparse_treebank UD_Swedish-diachronic --wordvec_pretrain_file "/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/${PRETRAINED_MODEL}"
logs/log_conll17.pt_sv_diachron_20251212_150001.txt ADDED
The diff for this file is too large to render. See raw diff
 
logs/log_conll17.pt_sv_diachron_is_20251203_221228.txt CHANGED
@@ -145,3 +145,594 @@ Augmented 191 quotes: Counter({'„”': 29, '»«': 25, '「」': 25, '″″':
145
  2025-12-03 22:12:44 DEBUG: Building Adam with lr=0.003000, betas=(0.9, 0.95), eps=0.000001
146
  2025-12-03 22:12:46 INFO: Loading data with batch size 250...
147
  2025-12-03 22:13:33 INFO: Start evaluation...
148
+ 2025-12-03 22:17:46 INFO: UPOS XPOS UFeats AllTags
149
+ 2025-12-03 22:17:46 INFO: 89.06 74.56 85.47 73.91
150
+ 2025-12-03 22:17:46 INFO: POS Tagger score: sv_diachronic 73.91
151
+ 2025-12-03 22:17:47 INFO: Running tagger to retag /local/tmp.5441282/tmp2whacqzu/sv_diachronic.dev.gold.conllu to /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.dev.in.conllu
152
+ Args: ['--wordvec_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain', '--lang', 'sv', '--shorthand', 'sv_diachronic', '--mode', 'predict', '--save_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pos', '--save_name', 'diachronic.pt', '--wordvec_pretrain_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt', '--charlm', '--charlm_shorthand', 'sv_conll17', '--charlm_forward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt', '--charlm_backward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt', '--eval_file', '/local/tmp.5441282/tmp2whacqzu/sv_diachronic.dev.gold.conllu', '--output_file', '/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.dev.in.conllu']
153
+ 2025-12-03 22:17:47 INFO: Running tagger in predict mode
154
+ 2025-12-03 22:17:47 INFO: Loading model from: /cephyr/users/cleland/Alvis/stanza_resources/sv/pos/diachronic.pt
155
+ 2025-12-03 22:17:49 DEBUG: Loaded pretrain from /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt
156
+ 2025-12-03 22:17:49 DEBUG: POS model loading charmodels: /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt and /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
157
+ 2025-12-03 22:17:49 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt
158
+ 2025-12-03 22:17:49 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
159
+ 2025-12-03 22:17:49 DEBUG: Building Adam with lr=0.003000, betas=(0.9, 0.95), eps=0.000001
160
+ 2025-12-03 22:17:49 INFO: Loading data with batch size 250...
161
+ 2025-12-03 22:17:49 INFO: Start evaluation...
162
+ 2025-12-03 22:17:49 INFO: UPOS XPOS UFeats AllTags
163
+ 2025-12-03 22:17:49 INFO: 93.32 90.84 93.32 85.64
164
+ 2025-12-03 22:17:49 INFO: POS Tagger score: sv_diachronic 85.64
165
+ 2025-12-03 22:17:49 INFO: Running tagger to retag /local/tmp.5441282/tmp2whacqzu/sv_diachronic.test.gold.conllu to /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.test.in.conllu
166
+ Args: ['--wordvec_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain', '--lang', 'sv', '--shorthand', 'sv_diachronic', '--mode', 'predict', '--save_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pos', '--save_name', 'diachronic.pt', '--wordvec_pretrain_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt', '--charlm', '--charlm_shorthand', 'sv_conll17', '--charlm_forward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt', '--charlm_backward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt', '--eval_file', '/local/tmp.5441282/tmp2whacqzu/sv_diachronic.test.gold.conllu', '--output_file', '/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.test.in.conllu']
167
+ 2025-12-03 22:17:49 INFO: Running tagger in predict mode
168
+ 2025-12-03 22:17:49 INFO: Loading model from: /cephyr/users/cleland/Alvis/stanza_resources/sv/pos/diachronic.pt
169
+ 2025-12-03 22:17:51 DEBUG: Loaded pretrain from /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt
170
+ 2025-12-03 22:17:51 DEBUG: POS model loading charmodels: /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt and /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
171
+ 2025-12-03 22:17:51 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt
172
+ 2025-12-03 22:17:51 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
173
+ 2025-12-03 22:17:51 DEBUG: Building Adam with lr=0.003000, betas=(0.9, 0.95), eps=0.000001
174
+ 2025-12-03 22:17:51 INFO: Loading data with batch size 250...
175
+ 2025-12-03 22:17:51 INFO: Start evaluation...
176
+ 2025-12-03 22:17:52 INFO: UPOS XPOS UFeats AllTags
177
+ 2025-12-03 22:17:52 INFO: 93.14 96.78 95.32 90.28
178
+ 2025-12-03 22:17:52 INFO: POS Tagger score: sv_diachronic 90.28
+ Preparing data for UD_Swedish-diachronic: sv_diachronic, sv
+ Reading from /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud/UD_Swedish-diachronic/sv_diachronic-ud-train.conllu and writing to /local/tmp.5441282/tmp2whacqzu/sv_diachronic.train.gold.conllu
+ Swapped 'w1, w2' for 'w1 ,w2' 216 times
+ Added 579 new sentences with asdf, zzzz -> asdf,zzzz
+ Reading from /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud/UD_Swedish-diachronic/sv_diachronic-ud-dev.conllu and writing to /local/tmp.5441282/tmp2whacqzu/sv_diachronic.dev.gold.conllu
+ Reading from /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud/UD_Swedish-diachronic/sv_diachronic-ud-test.conllu and writing to /local/tmp.5441282/tmp2whacqzu/sv_diachronic.test.gold.conllu
+ Running stanza dependency parser training…
+ 2025-12-03 22:18:07 INFO: Training program called with:
+ /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/stanza/utils/training/run_depparse.py UD_Swedish-diachronic --wordvec_pretrain_file /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt --batch_size 32 --dropout 0.33
+ 2025-12-03 22:18:07 DEBUG: UD_Swedish-diachronic: sv_diachronic
+ 2025-12-03 22:18:07 INFO: Using model /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt for forward charlm
+ 2025-12-03 22:18:07 INFO: Using model /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt for backward charlm
+ 2025-12-03 22:18:07 INFO: UD_Swedish-diachronic: saved_models/depparse/sv_diachronic_charlm_parser.pt does not exist, training new model
+ 2025-12-03 22:18:07 INFO: Using model /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt for forward charlm
+ 2025-12-03 22:18:07 INFO: Using model /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt for backward charlm
+ 2025-12-03 22:18:07 INFO: Running train depparse for UD_Swedish-diachronic with args ['--wordvec_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain', '--train_file', '/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.train.in.conllu', '--eval_file', '/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.dev.in.conllu', '--batch_size', '5000', '--lang', 'sv', '--shorthand', 'sv_diachronic', '--mode', 'train', '--charlm', '--charlm_shorthand', 'sv_conll17', '--charlm_forward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt', '--charlm_backward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt', '--wordvec_pretrain_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt', '--batch_size', '32', '--dropout', '0.33']
+ 2025-12-03 22:18:07 INFO: Running parser in train mode
+ 2025-12-03 22:18:07 INFO: Using pretrained contextualized char embedding
+ 2025-12-03 22:18:07 INFO: Loading data with batch size 32...
+ 2025-12-03 22:18:16 INFO: Train File /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.train.in.conllu, Data Size: 76366
+ 2025-12-03 22:18:16 INFO: Original data size: 76366
+ 2025-12-03 22:18:17 INFO: Augmented data size: 77263
+ 2025-12-03 22:18:40 WARNING: sv_diachronic is not a known dataset. Examining the data to choose which xpos vocab to use
+ 2025-12-03 22:18:40 INFO: Original length = 77263
+ 2025-12-03 22:18:40 INFO: Filtered length = 77263
+ 2025-12-03 22:18:58 WARNING: Chose XPOSDescription(xpos_type=<XPOSType.XPOS: 1>, sep='|') for the xpos factory for sv_diachronic
+ 2025-12-03 22:19:07 DEBUG: Loaded pretrain from /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt
+ 2025-12-03 22:19:23 DEBUG: 46876 batches created.
+ 2025-12-03 22:19:23 DEBUG: 9 batches created.
+ 2025-12-03 22:19:23 INFO: Training parser...
+ 2025-12-03 22:19:23 DEBUG: Depparse model loading charmodels: /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt and /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-03 22:19:23 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt
+ 2025-12-03 22:19:23 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-03 22:19:24 DEBUG: Building Adam with lr=0.003000, betas=(0.9, 0.95), eps=0.000001
+ 2025-12-03 22:19:27 INFO: Finished STEP 20/50000, loss = 0.000000 (0.034 sec/batch), lr: 0.003000
+ 2025-12-03 22:19:28 INFO: Finished STEP 40/50000, loss = 0.000000 (0.033 sec/batch), lr: 0.003000
+ 2025-12-03 22:19:29 INFO: Finished STEP 60/50000, loss = 0.000000 (0.034 sec/batch), lr: 0.003000
+ 2025-12-03 22:19:30 INFO: Finished STEP 80/50000, loss = 2712.291016 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 22:19:30 INFO: Finished STEP 100/50000, loss = 1.744704 (0.032 sec/batch), lr: 0.003000
+ 2025-12-03 22:19:30 INFO: Evaluating on dev set...
+ 2025-12-03 22:19:31 INFO: LAS MLAS BLEX
+ 2025-12-03 22:19:31 INFO: 2.23 0.82 1.64
+ 2025-12-03 22:19:31 INFO: step 100: train_loss = 245.595099, dev_score = 0.0223
+ 2025-12-03 22:19:31 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 22:19:31 INFO: new best model saved.
+ 2025-12-03 22:19:32 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:19:32 INFO: new model checkpoint saved.
+ 2025-12-03 22:19:32 INFO: Finished STEP 120/50000, loss = 1.622719 (0.032 sec/batch), lr: 0.003000
+ 2025-12-03 22:19:33 INFO: Finished STEP 140/50000, loss = 0.865345 (0.033 sec/batch), lr: 0.003000
+ 2025-12-03 22:19:34 INFO: Finished STEP 160/50000, loss = 0.985934 (0.032 sec/batch), lr: 0.003000
+ 2025-12-03 22:19:34 INFO: Finished STEP 180/50000, loss = 0.975155 (0.032 sec/batch), lr: 0.003000
+ 2025-12-03 22:19:35 INFO: Finished STEP 200/50000, loss = 0.881175 (0.032 sec/batch), lr: 0.003000
+ 2025-12-03 22:19:35 INFO: Evaluating on dev set...
+ 2025-12-03 22:19:36 INFO: LAS MLAS BLEX
+ 2025-12-03 22:19:36 INFO: 3.22 3.44 3.93
+ 2025-12-03 22:19:36 INFO: step 200: train_loss = 1.118413, dev_score = 0.0322
+ 2025-12-03 22:19:36 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 22:19:36 INFO: new best model saved.
+ 2025-12-03 22:19:36 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:19:36 INFO: new model checkpoint saved.
+ 2025-12-03 22:19:37 INFO: Finished STEP 220/50000, loss = 1.017378 (0.033 sec/batch), lr: 0.003000
+ 2025-12-03 22:19:38 INFO: Finished STEP 240/50000, loss = 0.768964 (0.032 sec/batch), lr: 0.003000
+ 2025-12-03 22:19:38 INFO: Finished STEP 260/50000, loss = 3.528730 (0.033 sec/batch), lr: 0.003000
+ 2025-12-03 22:19:39 INFO: Finished STEP 280/50000, loss = 1.277798 (0.033 sec/batch), lr: 0.003000
+ 2025-12-03 22:19:40 INFO: Finished STEP 300/50000, loss = 1.865093 (0.033 sec/batch), lr: 0.003000
+ 2025-12-03 22:19:40 INFO: Evaluating on dev set...
+ 2025-12-03 22:19:41 INFO: LAS MLAS BLEX
+ 2025-12-03 22:19:41 INFO: 6.93 4.93 6.03
+ 2025-12-03 22:19:41 INFO: step 300: train_loss = 1.475053, dev_score = 0.0693
+ 2025-12-03 22:19:41 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 22:19:41 INFO: new best model saved.
+ 2025-12-03 22:19:42 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:19:42 INFO: new model checkpoint saved.
+ 2025-12-03 22:19:42 INFO: Finished STEP 320/50000, loss = 1.403440 (0.033 sec/batch), lr: 0.003000
+ 2025-12-03 22:19:43 INFO: Finished STEP 340/50000, loss = 2.338756 (0.033 sec/batch), lr: 0.003000
+ 2025-12-03 22:19:44 INFO: Finished STEP 360/50000, loss = 1.520796 (0.036 sec/batch), lr: 0.003000
+ 2025-12-03 22:19:44 INFO: Finished STEP 380/50000, loss = 1.272455 (0.033 sec/batch), lr: 0.003000
+ 2025-12-03 22:19:45 INFO: Finished STEP 400/50000, loss = 2.057795 (0.033 sec/batch), lr: 0.003000
+ 2025-12-03 22:19:45 INFO: Evaluating on dev set...
+ 2025-12-03 22:19:46 INFO: LAS MLAS BLEX
+ 2025-12-03 22:19:46 INFO: 9.41 8.43 9.20
+ 2025-12-03 22:19:46 INFO: step 400: train_loss = 1.514085, dev_score = 0.0941
+ 2025-12-03 22:19:46 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 22:19:46 INFO: new best model saved.
+ 2025-12-03 22:19:47 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:19:47 INFO: new model checkpoint saved.
+ 2025-12-03 22:19:48 INFO: Finished STEP 420/50000, loss = 1.397767 (0.033 sec/batch), lr: 0.003000
+ 2025-12-03 22:19:48 INFO: Finished STEP 440/50000, loss = 1.919402 (0.034 sec/batch), lr: 0.003000
+ 2025-12-03 22:19:49 INFO: Finished STEP 460/50000, loss = 0.787392 (0.038 sec/batch), lr: 0.003000
+ 2025-12-03 22:19:50 INFO: Finished STEP 480/50000, loss = 1.199158 (0.034 sec/batch), lr: 0.003000
+ 2025-12-03 22:19:50 INFO: Finished STEP 500/50000, loss = 1.164281 (0.033 sec/batch), lr: 0.003000
+ 2025-12-03 22:19:50 INFO: Evaluating on dev set...
+ 2025-12-03 22:19:51 INFO: LAS MLAS BLEX
+ 2025-12-03 22:19:51 INFO: 7.18 3.15 4.49
+ 2025-12-03 22:19:51 INFO: step 500: train_loss = 1.317972, dev_score = 0.0718
+ 2025-12-03 22:19:52 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:19:52 INFO: new model checkpoint saved.
+ 2025-12-03 22:19:52 INFO: Finished STEP 520/50000, loss = 1.349550 (0.033 sec/batch), lr: 0.003000
+ 2025-12-03 22:19:53 INFO: Finished STEP 540/50000, loss = 1.134644 (0.033 sec/batch), lr: 0.003000
+ 2025-12-03 22:19:54 INFO: Finished STEP 560/50000, loss = 3.609558 (0.033 sec/batch), lr: 0.003000
+ 2025-12-03 22:19:55 INFO: Finished STEP 580/50000, loss = 1.594248 (0.034 sec/batch), lr: 0.003000
+ 2025-12-03 22:19:55 INFO: Finished STEP 600/50000, loss = 2.013339 (0.034 sec/batch), lr: 0.003000
+ 2025-12-03 22:19:55 INFO: Evaluating on dev set...
+ 2025-12-03 22:19:56 INFO: LAS MLAS BLEX
+ 2025-12-03 22:19:56 INFO: 19.55 5.87 9.64
+ 2025-12-03 22:19:56 INFO: step 600: train_loss = 1.737639, dev_score = 0.1955
+ 2025-12-03 22:19:56 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 22:19:56 INFO: new best model saved.
+ 2025-12-03 22:19:57 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:19:57 INFO: new model checkpoint saved.
+ 2025-12-03 22:19:57 INFO: Finished STEP 620/50000, loss = 1.571207 (0.034 sec/batch), lr: 0.003000
+ 2025-12-03 22:19:58 INFO: Finished STEP 640/50000, loss = 2.199408 (0.036 sec/batch), lr: 0.003000
+ 2025-12-03 22:19:59 INFO: Finished STEP 660/50000, loss = 1.839754 (0.034 sec/batch), lr: 0.003000
+ 2025-12-03 22:20:00 INFO: Finished STEP 680/50000, loss = 2.188510 (0.034 sec/batch), lr: 0.003000
+ 2025-12-03 22:20:00 INFO: Finished STEP 700/50000, loss = 2.559276 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 22:20:00 INFO: Evaluating on dev set...
+ 2025-12-03 22:20:01 INFO: LAS MLAS BLEX
+ 2025-12-03 22:20:01 INFO: 15.59 5.87 10.06
+ 2025-12-03 22:20:01 INFO: step 700: train_loss = 1.983614, dev_score = 0.1559
+ 2025-12-03 22:20:02 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:20:02 INFO: new model checkpoint saved.
+ 2025-12-03 22:20:02 INFO: Finished STEP 720/50000, loss = 2.502826 (0.036 sec/batch), lr: 0.003000
+ 2025-12-03 22:20:03 INFO: Finished STEP 740/50000, loss = 2.022189 (0.034 sec/batch), lr: 0.003000
+ 2025-12-03 22:20:04 INFO: Finished STEP 760/50000, loss = 1.974559 (0.034 sec/batch), lr: 0.003000
+ 2025-12-03 22:20:05 INFO: Finished STEP 780/50000, loss = 1.880402 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 22:20:05 INFO: Finished STEP 800/50000, loss = 2.110544 (0.034 sec/batch), lr: 0.003000
+ 2025-12-03 22:20:05 INFO: Evaluating on dev set...
+ 2025-12-03 22:20:06 INFO: LAS MLAS BLEX
+ 2025-12-03 22:20:06 INFO: 21.29 14.49 15.69
+ 2025-12-03 22:20:06 INFO: step 800: train_loss = 2.055266, dev_score = 0.2129
+ 2025-12-03 22:20:06 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 22:20:06 INFO: new best model saved.
+ 2025-12-03 22:20:07 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:20:07 INFO: new model checkpoint saved.
+ 2025-12-03 22:20:08 INFO: Finished STEP 820/50000, loss = 1.546061 (0.036 sec/batch), lr: 0.003000
+ 2025-12-03 22:20:08 INFO: Finished STEP 840/50000, loss = 1.595675 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 22:20:09 INFO: Finished STEP 860/50000, loss = 2.485874 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 22:20:10 INFO: Finished STEP 880/50000, loss = 1.312481 (0.033 sec/batch), lr: 0.003000
+ 2025-12-03 22:20:10 INFO: Finished STEP 900/50000, loss = 1.980712 (0.034 sec/batch), lr: 0.003000
+ 2025-12-03 22:20:10 INFO: Evaluating on dev set...
+ 2025-12-03 22:20:11 INFO: LAS MLAS BLEX
+ 2025-12-03 22:20:11 INFO: 20.05 10.87 13.68
+ 2025-12-03 22:20:11 INFO: step 900: train_loss = 1.865527, dev_score = 0.2005
+ 2025-12-03 22:20:12 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:20:12 INFO: new model checkpoint saved.
+ 2025-12-03 22:20:12 INFO: Finished STEP 920/50000, loss = 1.275847 (0.033 sec/batch), lr: 0.003000
+ 2025-12-03 22:20:13 INFO: Finished STEP 940/50000, loss = 1.930356 (0.033 sec/batch), lr: 0.003000
+ 2025-12-03 22:20:14 INFO: Finished STEP 960/50000, loss = 1.657969 (0.034 sec/batch), lr: 0.003000
+ 2025-12-03 22:20:14 INFO: Finished STEP 980/50000, loss = 2.620776 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 22:20:15 INFO: Finished STEP 1000/50000, loss = 1.873940 (0.036 sec/batch), lr: 0.003000
+ 2025-12-03 22:20:15 INFO: Evaluating on dev set...
+ 2025-12-03 22:20:16 INFO: LAS MLAS BLEX
+ 2025-12-03 22:20:16 INFO: 29.70 13.28 15.29
+ 2025-12-03 22:20:16 INFO: step 1000: train_loss = 1.973620, dev_score = 0.2970
+ 2025-12-03 22:20:16 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 22:20:16 INFO: new best model saved.
+ 2025-12-03 22:20:17 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:20:17 INFO: new model checkpoint saved.
+ 2025-12-03 22:20:17 INFO: Finished STEP 1020/50000, loss = 1.684948 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 22:20:18 INFO: Finished STEP 1040/50000, loss = 2.617945 (0.034 sec/batch), lr: 0.003000
+ 2025-12-03 22:20:19 INFO: Finished STEP 1060/50000, loss = 2.131221 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 22:20:19 INFO: Finished STEP 1080/50000, loss = 2.448305 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 22:20:20 INFO: Finished STEP 1100/50000, loss = 2.325217 (0.036 sec/batch), lr: 0.003000
+ 2025-12-03 22:20:20 INFO: Evaluating on dev set...
+ 2025-12-03 22:20:21 INFO: LAS MLAS BLEX
+ 2025-12-03 22:20:21 INFO: 27.97 16.88 19.91
+ 2025-12-03 22:20:21 INFO: step 1100: train_loss = 2.404084, dev_score = 0.2797
+ 2025-12-03 22:20:21 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:20:21 INFO: new model checkpoint saved.
+ 2025-12-03 22:20:22 INFO: Finished STEP 1120/50000, loss = 2.092524 (0.036 sec/batch), lr: 0.003000
+ 2025-12-03 22:20:23 INFO: Finished STEP 1140/50000, loss = 1.981141 (0.034 sec/batch), lr: 0.003000
+ 2025-12-03 22:20:24 INFO: Finished STEP 1160/50000, loss = 1.987068 (0.039 sec/batch), lr: 0.003000
+ 2025-12-03 22:20:24 INFO: Finished STEP 1180/50000, loss = 1.792558 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 22:20:25 INFO: Finished STEP 1200/50000, loss = 2.950092 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 22:20:25 INFO: Evaluating on dev set...
+ 2025-12-03 22:20:25 INFO: LAS MLAS BLEX
+ 2025-12-03 22:20:25 INFO: 30.20 15.89 19.14
+ 2025-12-03 22:20:25 INFO: step 1200: train_loss = 2.341607, dev_score = 0.3020
+ 2025-12-03 22:20:26 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 22:20:26 INFO: new best model saved.
+ 2025-12-03 22:20:26 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:20:26 INFO: new model checkpoint saved.
361
+ 2025-12-03 22:20:27 INFO: Finished STEP 1220/50000, loss = 2.524709 (0.036 sec/batch), lr: 0.003000
362
+ 2025-12-03 22:20:28 INFO: Finished STEP 1240/50000, loss = 1.693646 (0.035 sec/batch), lr: 0.003000
363
+ 2025-12-03 22:20:28 INFO: Finished STEP 1260/50000, loss = 3.541253 (0.036 sec/batch), lr: 0.003000
364
+ 2025-12-03 22:20:29 INFO: Finished STEP 1280/50000, loss = 2.380494 (0.038 sec/batch), lr: 0.003000
365
+ 2025-12-03 22:20:30 INFO: Finished STEP 1300/50000, loss = 2.219758 (0.035 sec/batch), lr: 0.003000
366
+ 2025-12-03 22:20:30 INFO: Evaluating on dev set...
367
+ 2025-12-03 22:20:30 INFO: LAS MLAS BLEX
368
+ 2025-12-03 22:20:30 INFO: 19.80 5.44 7.53
369
+ 2025-12-03 22:20:30 INFO: step 1300: train_loss = 2.223584, dev_score = 0.1980
370
+ 2025-12-03 22:20:31 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
371
+ 2025-12-03 22:20:31 INFO: new model checkpoint saved.
372
+ 2025-12-03 22:20:32 INFO: Finished STEP 1320/50000, loss = 1.357148 (0.035 sec/batch), lr: 0.003000
373
+ 2025-12-03 22:20:32 INFO: Finished STEP 1340/50000, loss = 3.326379 (0.034 sec/batch), lr: 0.003000
374
+ 2025-12-03 22:20:33 INFO: Finished STEP 1360/50000, loss = 2.658162 (0.035 sec/batch), lr: 0.003000
375
+ 2025-12-03 22:20:34 INFO: Finished STEP 1380/50000, loss = 1.538915 (0.034 sec/batch), lr: 0.003000
376
+ 2025-12-03 22:20:35 INFO: Finished STEP 1400/50000, loss = 2.163197 (0.039 sec/batch), lr: 0.003000
377
+ 2025-12-03 22:20:35 INFO: Evaluating on dev set...
378
+ 2025-12-03 22:20:35 INFO: LAS MLAS BLEX
379
+ 2025-12-03 22:20:35 INFO: 32.43 18.33 20.00
380
+ 2025-12-03 22:20:35 INFO: step 1400: train_loss = 2.172803, dev_score = 0.3243
381
+ 2025-12-03 22:20:35 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
382
+ 2025-12-03 22:20:35 INFO: new best model saved.
383
+ 2025-12-03 22:20:36 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
384
+ 2025-12-03 22:20:36 INFO: new model checkpoint saved.
385
+ 2025-12-03 22:20:37 INFO: Finished STEP 1420/50000, loss = 2.540121 (0.036 sec/batch), lr: 0.003000
386
+ 2025-12-03 22:20:37 INFO: Finished STEP 1440/50000, loss = 1.781801 (0.035 sec/batch), lr: 0.003000
387
+ 2025-12-03 22:20:38 INFO: Finished STEP 1460/50000, loss = 2.100438 (0.034 sec/batch), lr: 0.003000
388
+ 2025-12-03 22:20:39 INFO: Finished STEP 1480/50000, loss = 2.049929 (0.037 sec/batch), lr: 0.003000
389
+ 2025-12-03 22:20:40 INFO: Finished STEP 1500/50000, loss = 1.730072 (0.036 sec/batch), lr: 0.003000
390
+ 2025-12-03 22:20:40 INFO: Evaluating on dev set...
391
+ 2025-12-03 22:20:40 INFO: LAS MLAS BLEX
392
+ 2025-12-03 22:20:40 INFO: 44.31 30.54 33.05
393
+ 2025-12-03 22:20:40 INFO: step 1500: train_loss = 2.157021, dev_score = 0.4431
394
+ 2025-12-03 22:20:40 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
395
+ 2025-12-03 22:20:40 INFO: new best model saved.
396
+ 2025-12-03 22:20:41 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
397
+ 2025-12-03 22:20:41 INFO: new model checkpoint saved.
398
+ 2025-12-03 22:20:42 INFO: Finished STEP 1520/50000, loss = 2.264807 (0.035 sec/batch), lr: 0.003000
399
+ 2025-12-03 22:20:42 INFO: Finished STEP 1540/50000, loss = 2.165263 (0.036 sec/batch), lr: 0.003000
400
+ 2025-12-03 22:20:43 INFO: Finished STEP 1560/50000, loss = 2.489748 (0.035 sec/batch), lr: 0.003000
401
+ 2025-12-03 22:20:44 INFO: Finished STEP 1580/50000, loss = 2.107423 (0.035 sec/batch), lr: 0.003000
402
+ 2025-12-03 22:20:45 INFO: Finished STEP 1600/50000, loss = 1.877839 (0.036 sec/batch), lr: 0.003000
403
+ 2025-12-03 22:20:45 INFO: Evaluating on dev set...
404
+ 2025-12-03 22:20:45 INFO: LAS MLAS BLEX
405
+ 2025-12-03 22:20:45 INFO: 39.85 27.27 28.10
406
+ 2025-12-03 22:20:45 INFO: step 1600: train_loss = 2.212696, dev_score = 0.3985
407
+ 2025-12-03 22:20:46 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
408
+ 2025-12-03 22:20:46 INFO: new model checkpoint saved.
409
+ 2025-12-03 22:20:47 INFO: Finished STEP 1620/50000, loss = 3.706948 (0.034 sec/batch), lr: 0.003000
410
+ 2025-12-03 22:20:47 INFO: Finished STEP 1640/50000, loss = 3.773534 (0.035 sec/batch), lr: 0.003000
411
+ 2025-12-03 22:20:48 INFO: Finished STEP 1660/50000, loss = 2.317595 (0.036 sec/batch), lr: 0.003000
412
+ 2025-12-03 22:20:49 INFO: Finished STEP 1680/50000, loss = 2.911879 (0.037 sec/batch), lr: 0.003000
413
+ 2025-12-03 22:20:49 INFO: Finished STEP 1700/50000, loss = 2.189356 (0.035 sec/batch), lr: 0.003000
414
+ 2025-12-03 22:20:49 INFO: Evaluating on dev set...
415
+ 2025-12-03 22:20:50 INFO: LAS MLAS BLEX
416
+ 2025-12-03 22:20:50 INFO: 42.57 28.81 30.90
417
+ 2025-12-03 22:20:50 INFO: step 1700: train_loss = 2.541046, dev_score = 0.4257
418
+ 2025-12-03 22:20:51 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
419
+ 2025-12-03 22:20:51 INFO: new model checkpoint saved.
420
+ 2025-12-03 22:20:51 INFO: Finished STEP 1720/50000, loss = 2.593841 (0.036 sec/batch), lr: 0.003000
421
+ 2025-12-03 22:20:52 INFO: Finished STEP 1740/50000, loss = 2.513894 (0.035 sec/batch), lr: 0.003000
422
+ 2025-12-03 22:20:53 INFO: Finished STEP 1760/50000, loss = 1.373182 (0.034 sec/batch), lr: 0.003000
423
+ 2025-12-03 22:20:53 INFO: Finished STEP 1780/50000, loss = 1.841678 (0.035 sec/batch), lr: 0.003000
424
+ 2025-12-03 22:20:54 INFO: Finished STEP 1800/50000, loss = 2.291480 (0.036 sec/batch), lr: 0.003000
425
+ 2025-12-03 22:20:54 INFO: Evaluating on dev set...
426
+ 2025-12-03 22:20:55 INFO: LAS MLAS BLEX
427
+ 2025-12-03 22:20:55 INFO: 42.08 29.11 33.68
428
+ 2025-12-03 22:20:55 INFO: step 1800: train_loss = 2.314730, dev_score = 0.4208
429
+ 2025-12-03 22:20:56 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
430
+ 2025-12-03 22:20:56 INFO: new model checkpoint saved.
431
+ 2025-12-03 22:20:56 INFO: Finished STEP 1820/50000, loss = 1.414010 (0.036 sec/batch), lr: 0.003000
432
+ 2025-12-03 22:20:57 INFO: Finished STEP 1840/50000, loss = 2.184860 (0.036 sec/batch), lr: 0.003000
433
+ 2025-12-03 22:20:58 INFO: Finished STEP 1860/50000, loss = 1.638621 (0.038 sec/batch), lr: 0.003000
434
+ 2025-12-03 22:20:58 INFO: Finished STEP 1880/50000, loss = 1.797415 (0.035 sec/batch), lr: 0.003000
435
+ 2025-12-03 22:20:59 INFO: Finished STEP 1900/50000, loss = 1.410933 (0.036 sec/batch), lr: 0.003000
436
+ 2025-12-03 22:20:59 INFO: Evaluating on dev set...
+ 2025-12-03 22:21:00 INFO: LAS MLAS BLEX
+ 2025-12-03 22:21:00 INFO: 35.89 25.32 28.69
+ 2025-12-03 22:21:00 INFO: step 1900: train_loss = 2.409112, dev_score = 0.3589
+ 2025-12-03 22:21:00 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:21:00 INFO: new model checkpoint saved.
+ 2025-12-03 22:21:01 INFO: Finished STEP 1920/50000, loss = 1.152543 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:02 INFO: Finished STEP 1940/50000, loss = 2.077556 (0.036 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:02 INFO: Finished STEP 1960/50000, loss = 2.977457 (0.036 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:03 INFO: Finished STEP 1980/50000, loss = 4.101305 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:04 INFO: Finished STEP 2000/50000, loss = 3.153137 (0.036 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:04 INFO: Evaluating on dev set...
+ 2025-12-03 22:21:04 INFO: LAS MLAS BLEX
+ 2025-12-03 22:21:04 INFO: 42.33 30.48 32.99
+ 2025-12-03 22:21:04 INFO: step 2000: train_loss = 2.343207, dev_score = 0.4233
+ 2025-12-03 22:21:05 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:21:05 INFO: new model checkpoint saved.
+ 2025-12-03 22:21:06 INFO: Finished STEP 2020/50000, loss = 1.897645 (0.040 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:07 INFO: Finished STEP 2040/50000, loss = 1.459832 (0.036 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:07 INFO: Finished STEP 2060/50000, loss = 3.348090 (0.036 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:08 INFO: Finished STEP 2080/50000, loss = 1.801581 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:09 INFO: Finished STEP 2100/50000, loss = 3.107420 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:09 INFO: Evaluating on dev set...
+ 2025-12-03 22:21:09 INFO: LAS MLAS BLEX
+ 2025-12-03 22:21:09 INFO: 44.31 29.11 33.76
+ 2025-12-03 22:21:09 INFO: step 2100: train_loss = 2.388609, dev_score = 0.4431
+ 2025-12-03 22:21:10 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 22:21:10 INFO: new best model saved.
+ 2025-12-03 22:21:10 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:21:10 INFO: new model checkpoint saved.
+ 2025-12-03 22:21:11 INFO: Finished STEP 2120/50000, loss = 3.930416 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:12 INFO: Finished STEP 2140/50000, loss = 2.165164 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:13 INFO: Finished STEP 2160/50000, loss = 2.158297 (0.036 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:13 INFO: Finished STEP 2180/50000, loss = 2.724458 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:14 INFO: Finished STEP 2200/50000, loss = 3.169353 (0.036 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:14 INFO: Evaluating on dev set...
+ 2025-12-03 22:21:14 INFO: LAS MLAS BLEX
+ 2025-12-03 22:21:14 INFO: 40.59 27.25 29.35
+ 2025-12-03 22:21:14 INFO: step 2200: train_loss = 2.482198, dev_score = 0.4059
+ 2025-12-03 22:21:15 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:21:15 INFO: new model checkpoint saved.
+ 2025-12-03 22:21:16 INFO: Finished STEP 2220/50000, loss = 2.776895 (0.036 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:17 INFO: Finished STEP 2240/50000, loss = 1.790497 (0.036 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:17 INFO: Finished STEP 2260/50000, loss = 2.498086 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:18 INFO: Finished STEP 2280/50000, loss = 1.828641 (0.036 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:19 INFO: Finished STEP 2300/50000, loss = 2.395666 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:19 INFO: Evaluating on dev set...
+ 2025-12-03 22:21:19 INFO: LAS MLAS BLEX
+ 2025-12-03 22:21:19 INFO: 40.35 25.16 29.77
+ 2025-12-03 22:21:19 INFO: step 2300: train_loss = 2.424874, dev_score = 0.4035
+ 2025-12-03 22:21:20 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:21:20 INFO: new model checkpoint saved.
+ 2025-12-03 22:21:21 INFO: Finished STEP 2320/50000, loss = 2.519269 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:21 INFO: Finished STEP 2340/50000, loss = 2.637892 (0.034 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:22 INFO: Finished STEP 2360/50000, loss = 3.442990 (0.037 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:23 INFO: Finished STEP 2380/50000, loss = 2.406678 (0.034 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:24 INFO: Finished STEP 2400/50000, loss = 2.744663 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:24 INFO: Evaluating on dev set...
+ 2025-12-03 22:21:24 INFO: LAS MLAS BLEX
+ 2025-12-03 22:21:24 INFO: 41.58 28.16 31.47
+ 2025-12-03 22:21:24 INFO: step 2400: train_loss = 2.414172, dev_score = 0.4158
+ 2025-12-03 22:21:25 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:21:25 INFO: new model checkpoint saved.
+ 2025-12-03 22:21:26 INFO: Finished STEP 2420/50000, loss = 2.022905 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:26 INFO: Finished STEP 2440/50000, loss = 2.230075 (0.037 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:27 INFO: Finished STEP 2460/50000, loss = 2.250048 (0.038 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:28 INFO: Finished STEP 2480/50000, loss = 1.855565 (0.036 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:28 INFO: Finished STEP 2500/50000, loss = 1.887844 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:28 INFO: Evaluating on dev set...
+ 2025-12-03 22:21:29 INFO: LAS MLAS BLEX
+ 2025-12-03 22:21:29 INFO: 50.00 34.38 38.16
+ 2025-12-03 22:21:29 INFO: step 2500: train_loss = 2.319558, dev_score = 0.5000
+ 2025-12-03 22:21:29 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 22:21:29 INFO: new best model saved.
+ 2025-12-03 22:21:30 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:21:30 INFO: new model checkpoint saved.
+ 2025-12-03 22:21:31 INFO: Finished STEP 2520/50000, loss = 2.576492 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:31 INFO: Finished STEP 2540/50000, loss = 1.877021 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:32 INFO: Finished STEP 2560/50000, loss = 2.449260 (0.039 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:33 INFO: Finished STEP 2580/50000, loss = 1.826830 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:34 INFO: Finished STEP 2600/50000, loss = 2.538510 (0.035 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:34 INFO: Evaluating on dev set...
+ 2025-12-03 22:21:34 INFO: LAS MLAS BLEX
+ 2025-12-03 22:21:34 INFO: 44.06 27.91 32.14
+ 2025-12-03 22:21:34 INFO: step 2600: train_loss = 2.278852, dev_score = 0.4406
+ 2025-12-03 22:21:35 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 22:21:35 INFO: new model checkpoint saved.
+ 2025-12-03 22:21:35 INFO: Finished STEP 2620/50000, loss = 3.521389 (0.037 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:36 INFO: Finished STEP 2640/50000, loss = 2.663636 (0.036 sec/batch), lr: 0.003000
+ 2025-12-03 22:21:37 INFO: Finished STEP 2660/50000, loss = 2.443088 (0.036 sec/batch), lr: 0.003000
526
+ 2025-12-03 22:21:38 INFO: Finished STEP 2680/50000, loss = 3.625439 (0.037 sec/batch), lr: 0.003000
527
+ 2025-12-03 22:21:38 INFO: Finished STEP 2700/50000, loss = 1.004484 (0.038 sec/batch), lr: 0.003000
528
+ 2025-12-03 22:21:38 INFO: Evaluating on dev set...
529
+ 2025-12-03 22:21:39 INFO: LAS MLAS BLEX
530
+ 2025-12-03 22:21:39 INFO: 48.02 30.44 36.36
531
+ 2025-12-03 22:21:39 INFO: step 2700: train_loss = 2.747435, dev_score = 0.4802
532
+ 2025-12-03 22:21:40 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
533
+ 2025-12-03 22:21:40 INFO: new model checkpoint saved.
534
+ 2025-12-03 22:21:40 INFO: Finished STEP 2720/50000, loss = 3.552655 (0.036 sec/batch), lr: 0.003000
535
+ 2025-12-03 22:21:41 INFO: Finished STEP 2740/50000, loss = 3.005398 (0.035 sec/batch), lr: 0.003000
536
+ 2025-12-03 22:21:42 INFO: Finished STEP 2760/50000, loss = 3.090632 (0.036 sec/batch), lr: 0.003000
537
+ 2025-12-03 22:21:43 INFO: Finished STEP 2780/50000, loss = 3.169990 (0.037 sec/batch), lr: 0.003000
538
+ 2025-12-03 22:21:43 INFO: Finished STEP 2800/50000, loss = 2.445925 (0.036 sec/batch), lr: 0.003000
539
+ 2025-12-03 22:21:43 INFO: Evaluating on dev set...
540
+ 2025-12-03 22:21:44 INFO: LAS MLAS BLEX
541
+ 2025-12-03 22:21:44 INFO: 44.31 29.35 33.54
542
+ 2025-12-03 22:21:44 INFO: step 2800: train_loss = 2.594440, dev_score = 0.4431
543
+ 2025-12-03 22:21:44 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
544
+ 2025-12-03 22:21:44 INFO: new model checkpoint saved.
545
+ 2025-12-03 22:21:45 INFO: Finished STEP 2820/50000, loss = 2.367159 (0.037 sec/batch), lr: 0.003000
546
+ 2025-12-03 22:21:46 INFO: Finished STEP 2840/50000, loss = 2.213182 (0.037 sec/batch), lr: 0.003000
547
+ 2025-12-03 22:21:47 INFO: Finished STEP 2860/50000, loss = 1.802136 (0.037 sec/batch), lr: 0.003000
548
+ 2025-12-03 22:21:47 INFO: Finished STEP 2880/50000, loss = 2.307342 (0.037 sec/batch), lr: 0.003000
549
+ 2025-12-03 22:21:48 INFO: Finished STEP 2900/50000, loss = 2.596610 (0.035 sec/batch), lr: 0.003000
550
+ 2025-12-03 22:21:48 INFO: Evaluating on dev set...
551
+ 2025-12-03 22:21:49 INFO: LAS MLAS BLEX
552
+ 2025-12-03 22:21:49 INFO: 47.03 31.45 36.06
553
+ 2025-12-03 22:21:49 INFO: step 2900: train_loss = 2.611014, dev_score = 0.4703
554
+ 2025-12-03 22:21:49 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
555
+ 2025-12-03 22:21:49 INFO: new model checkpoint saved.
556
+ 2025-12-03 22:21:50 INFO: Finished STEP 2920/50000, loss = 2.927231 (0.036 sec/batch), lr: 0.003000
557
+ 2025-12-03 22:21:51 INFO: Finished STEP 2940/50000, loss = 3.114241 (0.036 sec/batch), lr: 0.003000
558
+ 2025-12-03 22:21:52 INFO: Finished STEP 2960/50000, loss = 2.844905 (0.036 sec/batch), lr: 0.003000
559
+ 2025-12-03 22:21:52 INFO: Finished STEP 2980/50000, loss = 2.520785 (0.038 sec/batch), lr: 0.003000
560
+ 2025-12-03 22:21:53 INFO: Finished STEP 3000/50000, loss = 2.595266 (0.036 sec/batch), lr: 0.003000
561
+ 2025-12-03 22:21:53 INFO: Evaluating on dev set...
562
+ 2025-12-03 22:21:54 INFO: LAS MLAS BLEX
563
+ 2025-12-03 22:21:54 INFO: 44.55 31.09 34.87
564
+ 2025-12-03 22:21:54 INFO: step 3000: train_loss = 2.604358, dev_score = 0.4455
565
+ 2025-12-03 22:21:54 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
566
+ 2025-12-03 22:21:54 INFO: new model checkpoint saved.
567
+ 2025-12-03 22:21:55 INFO: Finished STEP 3020/50000, loss = 2.251695 (0.036 sec/batch), lr: 0.003000
568
+ 2025-12-03 22:21:56 INFO: Finished STEP 3040/50000, loss = 2.632687 (0.036 sec/batch), lr: 0.003000
569
+ 2025-12-03 22:21:57 INFO: Finished STEP 3060/50000, loss = 2.573472 (0.035 sec/batch), lr: 0.003000
570
+ 2025-12-03 22:21:57 INFO: Finished STEP 3080/50000, loss = 2.548043 (0.036 sec/batch), lr: 0.003000
571
+ 2025-12-03 22:21:58 INFO: Finished STEP 3100/50000, loss = 2.941990 (0.036 sec/batch), lr: 0.003000
572
+ 2025-12-03 22:21:58 INFO: Evaluating on dev set...
573
+ 2025-12-03 22:21:58 INFO: LAS MLAS BLEX
574
+ 2025-12-03 22:21:58 INFO: 45.54 31.29 36.79
575
+ 2025-12-03 22:21:58 INFO: step 3100: train_loss = 2.610970, dev_score = 0.4554
576
+ 2025-12-03 22:21:59 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
577
+ 2025-12-03 22:21:59 INFO: new model checkpoint saved.
578
+ 2025-12-03 22:22:00 INFO: Finished STEP 3120/50000, loss = 2.302535 (0.037 sec/batch), lr: 0.003000
579
+ 2025-12-03 22:22:01 INFO: Finished STEP 3140/50000, loss = 2.652006 (0.036 sec/batch), lr: 0.003000
580
+ 2025-12-03 22:22:01 INFO: Finished STEP 3160/50000, loss = 3.137692 (0.035 sec/batch), lr: 0.003000
581
+ 2025-12-03 22:22:02 INFO: Finished STEP 3180/50000, loss = 2.632149 (0.037 sec/batch), lr: 0.003000
582
+ 2025-12-03 22:22:03 INFO: Finished STEP 3200/50000, loss = 1.484895 (0.037 sec/batch), lr: 0.003000
583
+ 2025-12-03 22:22:03 INFO: Evaluating on dev set...
584
+ 2025-12-03 22:22:03 INFO: LAS MLAS BLEX
585
+ 2025-12-03 22:22:03 INFO: 44.55 32.84 38.32
586
+ 2025-12-03 22:22:03 INFO: step 3200: train_loss = 2.512681, dev_score = 0.4455
587
+ 2025-12-03 22:22:04 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
588
+ 2025-12-03 22:22:04 INFO: new model checkpoint saved.
589
+ 2025-12-03 22:22:05 INFO: Finished STEP 3220/50000, loss = 2.150381 (0.035 sec/batch), lr: 0.003000
590
+ 2025-12-03 22:22:06 INFO: Finished STEP 3240/50000, loss = 1.300762 (0.038 sec/batch), lr: 0.003000
591
+ 2025-12-03 22:22:06 INFO: Finished STEP 3260/50000, loss = 2.482401 (0.035 sec/batch), lr: 0.003000
592
+ 2025-12-03 22:22:07 INFO: Finished STEP 3280/50000, loss = 2.695213 (0.035 sec/batch), lr: 0.003000
593
+ 2025-12-03 22:22:08 INFO: Finished STEP 3300/50000, loss = 1.333264 (0.042 sec/batch), lr: 0.003000
594
+ 2025-12-03 22:22:08 INFO: Evaluating on dev set...
595
+ 2025-12-03 22:22:08 INFO: LAS MLAS BLEX
596
+ 2025-12-03 22:22:08 INFO: 49.50 34.03 39.50
597
+ 2025-12-03 22:22:08 INFO: step 3300: train_loss = 2.585611, dev_score = 0.4950
598
+ 2025-12-03 22:22:09 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
599
+ 2025-12-03 22:22:09 INFO: new model checkpoint saved.
600
+ 2025-12-03 22:22:10 INFO: Finished STEP 3320/50000, loss = 2.705882 (0.036 sec/batch), lr: 0.003000
601
+ 2025-12-03 22:22:10 INFO: Finished STEP 3340/50000, loss = 1.668782 (0.038 sec/batch), lr: 0.003000
602
+ 2025-12-03 22:22:11 INFO: Finished STEP 3360/50000, loss = 2.618518 (0.037 sec/batch), lr: 0.003000
603
+ 2025-12-03 22:22:12 INFO: Finished STEP 3380/50000, loss = 2.959959 (0.038 sec/batch), lr: 0.003000
604
+ 2025-12-03 22:22:13 INFO: Finished STEP 3400/50000, loss = 2.689266 (0.037 sec/batch), lr: 0.003000
605
+ 2025-12-03 22:22:13 INFO: Evaluating on dev set...
606
+ 2025-12-03 22:22:13 INFO: LAS MLAS BLEX
607
+ 2025-12-03 22:22:13 INFO: 49.01 34.11 40.42
608
+ 2025-12-03 22:22:13 INFO: step 3400: train_loss = 2.699152, dev_score = 0.4901
609
+ 2025-12-03 22:22:14 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
610
+ 2025-12-03 22:22:14 INFO: new model checkpoint saved.
611
+ 2025-12-03 22:22:15 INFO: Finished STEP 3420/50000, loss = 2.238733 (0.035 sec/batch), lr: 0.003000
612
+ 2025-12-03 22:22:15 INFO: Finished STEP 3440/50000, loss = 2.567868 (0.037 sec/batch), lr: 0.003000
613
+ 2025-12-03 22:22:16 INFO: Finished STEP 3460/50000, loss = 2.507038 (0.037 sec/batch), lr: 0.003000
614
+ 2025-12-03 22:22:17 INFO: Finished STEP 3480/50000, loss = 3.521281 (0.037 sec/batch), lr: 0.003000
615
+ 2025-12-03 22:22:18 INFO: Finished STEP 3500/50000, loss = 2.573985 (0.036 sec/batch), lr: 0.003000
616
+ 2025-12-03 22:22:18 INFO: Evaluating on dev set...
617
+ 2025-12-03 22:22:18 INFO: LAS MLAS BLEX
618
+ 2025-12-03 22:22:18 INFO: 42.82 27.00 33.76
619
+ 2025-12-03 22:22:18 INFO: step 3500: train_loss = 2.601057, dev_score = 0.4282
620
+ 2025-12-03 22:22:18 INFO: Training ended with 3500 steps.
621
+ 2025-12-03 22:22:18 INFO: Best dev F1 = 50.00, at iteration = 2500
622
+ 2025-12-03 22:22:19 INFO: Running dev depparse for UD_Swedish-diachronic with args ['--wordvec_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain', '--eval_file', '/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.dev.in.conllu', '--lang', 'sv', '--shorthand', 'sv_diachronic', '--mode', 'predict', '--charlm', '--charlm_shorthand', 'sv_conll17', '--charlm_forward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt', '--charlm_backward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt', '--wordvec_pretrain_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt', '--batch_size', '32', '--dropout', '0.33']
623
+ 2025-12-03 22:22:19 INFO: Running parser in predict mode
624
+ 2025-12-03 22:22:19 INFO: Loading model from: saved_models/depparse/sv_diachronic_charlm_parser.pt
625
+ 2025-12-03 22:22:22 DEBUG: Loaded pretrain from /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt
626
+ 2025-12-03 22:22:22 DEBUG: Depparse model loading charmodels: /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt and /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
627
+ 2025-12-03 22:22:22 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt
628
+ 2025-12-03 22:22:22 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
629
+ 2025-12-03 22:22:22 DEBUG: Building Adam with lr=0.003000, betas=(0.9, 0.95), eps=0.000001
630
+ 2025-12-03 22:22:22 INFO: Loading data with batch size 32...
631
+ 2025-12-03 22:22:22 DEBUG: 9 batches created.
632
+ 2025-12-03 22:22:22 INFO: F1 scores for each dependency:
633
+ Note that unlabeled attachment errors hurt the labeled attachment scores
634
+ acl: p 0.0000 r 0.0000 f1 0.0000 (3 actual)
635
+ acl:relcl: p 0.0000 r 0.0000 f1 0.0000 (7 actual)
636
+ advcl: p 0.0000 r 0.0000 f1 0.0000 (5 actual)
637
+ advmod: p 0.4848 r 0.6400 f1 0.5517 (25 actual)
638
+ amod: p 0.8571 r 0.7742 f1 0.8136 (31 actual)
639
+ appos: p 0.0000 r 0.0000 f1 0.0000 (4 actual)
640
+ aux: p 0.6923 r 0.8182 f1 0.7500 (11 actual)
641
+ case: p 0.8727 r 0.8571 f1 0.8649 (56 actual)
642
+ cc: p 0.6667 r 0.6154 f1 0.6400 (13 actual)
643
+ ccomp: p 0.0000 r 0.0000 f1 0.0000 (2 actual)
644
+ compound:prt: p 0.0000 r 0.0000 f1 0.0000 (0 actual)
645
+ conj: p 0.0000 r 0.0000 f1 0.0000 (12 actual)
646
+ cop: p 0.0000 r 0.0000 f1 0.0000 (3 actual)
647
+ csubj: p 0.0000 r 0.0000 f1 0.0000 (2 actual)
648
+ det: p 0.8696 r 0.9091 f1 0.8889 (22 actual)
649
+ expl: p 0.0000 r 0.0000 f1 0.0000 (1 actual)
650
+ iobj: p 0.0000 r 0.0000 f1 0.0000 (2 actual)
651
+ mark: p 0.4286 r 0.2500 f1 0.3158 (12 actual)
652
+ nmod: p 0.0000 r 0.0000 f1 0.0000 (15 actual)
653
+ nmod:poss: p 1.0000 r 0.8947 f1 0.9444 (19 actual)
654
+ nsubj: p 0.1429 r 0.8235 f1 0.2435 (17 actual)
655
+ nsubj:pass: p 0.0000 r 0.0000 f1 0.0000 (5 actual)
656
+ obj: p 0.1000 r 0.0455 f1 0.0625 (22 actual)
657
+ obl: p 0.4500 r 0.2195 f1 0.2951 (41 actual)
658
+ obl:agent: p 0.0000 r 0.0000 f1 0.0000 (1 actual)
659
+ orphan: p 0.0000 r 0.0000 f1 0.0000 (1 actual)
660
+ parataxis: p 0.0000 r 0.0000 f1 0.0000 (3 actual)
661
+ punct: p 0.4423 r 0.4423 f1 0.4423 (52 actual)
662
+ root: p 0.4444 r 0.4444 f1 0.4444 (9 actual)
663
+ xcomp: p 0.0769 r 0.2500 f1 0.1176 (8 actual)
664
+ 2025-12-03 22:22:22 INFO: LAS MLAS BLEX
665
+ 2025-12-03 22:22:22 INFO: 50.00 34.38 38.16
666
+ 2025-12-03 22:22:22 INFO: Parser score:
667
+ 2025-12-03 22:22:22 INFO: sv_diachronic 50.00
668
+ 2025-12-03 22:22:22 INFO: Finished running dev set on
669
+ UD_Swedish-diachronic
670
+ UAS LAS CLAS MLAS BLEX
671
+ 63.86 50.00 38.16 34.38 38.16
672
+ 2025-12-03 22:22:22 INFO: Running test depparse for UD_Swedish-diachronic with args ['--wordvec_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain', '--eval_file', '/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.test.in.conllu', '--lang', 'sv', '--shorthand', 'sv_diachronic', '--mode', 'predict', '--charlm', '--charlm_shorthand', 'sv_conll17', '--charlm_forward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt', '--charlm_backward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt', '--wordvec_pretrain_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt', '--batch_size', '32', '--dropout', '0.33']
673
+ 2025-12-03 22:22:22 INFO: Running parser in predict mode
674
+ 2025-12-03 22:22:22 INFO: Loading model from: saved_models/depparse/sv_diachronic_charlm_parser.pt
675
+ 2025-12-03 22:22:24 DEBUG: Loaded pretrain from /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt
676
+ 2025-12-03 22:22:24 DEBUG: Depparse model loading charmodels: /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt and /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
677
+ 2025-12-03 22:22:24 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt
678
+ 2025-12-03 22:22:25 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
679
+ 2025-12-03 22:22:25 DEBUG: Building Adam with lr=0.003000, betas=(0.9, 0.95), eps=0.000001
680
+ 2025-12-03 22:22:25 INFO: Loading data with batch size 32...
681
+ 2025-12-03 22:22:25 DEBUG: 93 batches created.
682
+ 2025-12-03 22:22:30 INFO: F1 scores for each dependency:
683
+ Note that unlabeled attachment errors hurt the labeled attachment scores
684
+ acl: p 0.0000 r 0.0000 f1 0.0000 (32 actual)
685
+ acl:cleft: p 0.0000 r 0.0000 f1 0.0000 (2 actual)
686
+ acl:relcl: p 0.0000 r 0.0000 f1 0.0000 (75 actual)
687
+ advcl: p 0.0000 r 0.0000 f1 0.0000 (60 actual)
688
+ advcl:relcl: p 0.0000 r 0.0000 f1 0.0000 (2 actual)
689
+ advmod: p 0.4321 r 0.5821 f1 0.4960 (268 actual)
690
+ amod: p 0.7956 r 0.7783 f1 0.7868 (230 actual)
691
+ appos: p 0.0000 r 0.0000 f1 0.0000 (13 actual)
692
+ aux: p 0.6053 r 0.8214 f1 0.6970 (84 actual)
693
+ aux:pass: p 0.0000 r 0.0000 f1 0.0000 (2 actual)
694
+ case: p 0.8408 r 0.8070 f1 0.8235 (373 actual)
695
+ cc: p 0.3922 r 0.3871 f1 0.3896 (155 actual)
696
+ ccomp: p 0.0000 r 0.0000 f1 0.0000 (35 actual)
697
+ compound:prt: p 0.3333 r 0.1905 f1 0.2424 (21 actual)
698
+ conj: p 0.2500 r 0.0063 f1 0.0123 (158 actual)
699
+ cop: p 0.0000 r 0.0000 f1 0.0000 (46 actual)
700
+ csubj: p 0.0000 r 0.0000 f1 0.0000 (4 actual)
701
+ dep: p 0.0000 r 0.0000 f1 0.0000 (1 actual)
702
+ det: p 0.8267 r 0.8029 f1 0.8146 (208 actual)
703
+ discourse: p 0.0000 r 0.0000 f1 0.0000 (7 actual)
704
+ dislocated: p 0.0000 r 0.0000 f1 0.0000 (1 actual)
705
+ expl: p 0.0000 r 0.0000 f1 0.0000 (11 actual)
706
+ expl:pv: p 0.0000 r 0.0000 f1 0.0000 (1 actual)
707
+ fixed: p 0.0000 r 0.0000 f1 0.0000 (8 actual)
708
+ flat: p 0.0000 r 0.0000 f1 0.0000 (4 actual)
709
+ flat:name: p 0.0000 r 0.0000 f1 0.0000 (12 actual)
710
+ goeswith: p 0.0000 r 0.0000 f1 0.0000 (2 actual)
711
+ iobj: p 0.0000 r 0.0000 f1 0.0000 (14 actual)
712
+ mark: p 0.5238 r 0.2876 f1 0.3713 (153 actual)
713
+ nmod: p 0.0000 r 0.0000 f1 0.0000 (102 actual)
714
+ nmod:poss: p 0.8267 r 0.8732 f1 0.8493 (142 actual)
715
+ nsubj: p 0.2097 r 0.6786 f1 0.3204 (280 actual)
716
+ nsubj:pass: p 0.0000 r 0.0000 f1 0.0000 (25 actual)
717
+ nummod: p 0.0000 r 0.0000 f1 0.0000 (10 actual)
718
+ obj: p 0.1111 r 0.0656 f1 0.0825 (183 actual)
719
+ obl: p 0.2797 r 0.1439 f1 0.1900 (278 actual)
720
+ obl:agent: p 0.0000 r 0.0000 f1 0.0000 (4 actual)
721
+ orphan: p 0.0000 r 0.0000 f1 0.0000 (1 actual)
722
+ parataxis: p 0.0000 r 0.0000 f1 0.0000 (18 actual)
723
+ punct: p 0.3765 r 0.3765 f1 0.3765 (425 actual)
724
+ reparandum: p 0.0000 r 0.0000 f1 0.0000 (1 actual)
725
+ root: p 0.4949 r 0.4949 f1 0.4949 (99 actual)
726
+ vocative: p 0.0000 r 0.0000 f1 0.0000 (5 actual)
727
+ xcomp: p 0.1189 r 0.4533 f1 0.1884 (75 actual)
728
+ 2025-12-03 22:22:30 INFO: LAS MLAS BLEX
729
+ 2025-12-03 22:22:30 INFO: 44.21 33.10 35.91
730
+ 2025-12-03 22:22:30 INFO: Parser score:
731
+ 2025-12-03 22:22:30 INFO: sv_diachronic 44.21
732
+ 2025-12-03 22:22:30 INFO: Finished running test set on
733
+ UD_Swedish-diachronic
734
+ UAS LAS CLAS MLAS BLEX
735
+ 59.53 44.21 35.91 33.10 35.91
736
+ DONE.
737
+ Full log saved to: logs/log_conll17.pt_sv_diachron_is_20251203_221228.txt
738
+ Symlink updated: logs/latest.txt → log_conll17.pt_sv_diachron_is_20251203_221228.txt
logs/log_conll17.pt_sv_is_20251203_234401.txt ADDED
@@ -0,0 +1,87 @@
+ === LOGFILE: logs/log_conll17.pt_sv_is_20251203_234401.txt ===
+ Language codes: sv is
+ Using pretrained model: conll17.pt
+
+ Running: python prepare-train-val-test.py sv is
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_lines-ud-dev.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_swell-ud-test.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_pud-ud-test.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_talbanken-ud-test.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_swell-ud-test-trg.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_talbanken-ud-dev.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/ucxn_ud_swedish-talbanken.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_talbanken-ud-train.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_old-ud-test.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_lines-ud-train.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_lines-ud-test.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-is/is_gc-ud-dev.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-is/is_pud-ud-test.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-is/is_icepahc-ud-train.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-is/is_gc-ud-train.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-is/is_gc-ud-test.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-is/is_icepahc-ud-test.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-is/is_icepahc-ud-dev.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-is/modern/is_modern-ud-test.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-is/modern/is_modern-ud-dev.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-is/modern/is_modern-ud-train.conllu
+ Skipping DigPhil MACHINE (diachron not requested).
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec330-GyllenborgC_SwenskaSpratthoken.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec254-CederborghF_BerattelseOmJohnHall.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec277-EnbomPU_MedborgeligtSkalde.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec268-DulciU_VitterhetsNojen3.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec1063-spf220.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec397-AngeredStrandbergH_UnderSodernsSol.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec324-GranbergPA_Enslighetsalskaren.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec252-BremerF_Teckningar1.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec988-spf145.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec987-spf144.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec631-HasselskogN_HallaHallaGronkoping.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-letter141673-Stalhammar.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec1033-spf190.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec25-Runius.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec486-SchwartzMS_BellmansSkor.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec452-NyblomH_FantasierFyra.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec613-EngstromA_StrindbergOchJag.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec208-Anonym_DetGrasligaMordet.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec639-HeidenstamV_Proletarfilosofiens.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec1102-spf259.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec991-spf148.conllu
+ Cleaning TRAIN...
+ Cleaning DEV...
+ [REMOVED] sent_id=33 ERRORS=['Token 15: Missing deprel']
+ Cleaning TEST...
+ Writing TRAIN → /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud/UD_Swedish-diachronic/sv_diachronic-ud-train.conllu (73384 valid sentences)
+ Writing DEV → /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud/UD_Swedish-diachronic/sv_diachronic-ud-dev.conllu (9 valid sentences)
+ Writing TEST → /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud/UD_Swedish-diachronic/sv_diachronic-ud-test.conllu (99 valid sentences)
+ Done.
+ Sourcing scripts/config_alvis.sh
+ Running stanza dataset preparation…
+ 2025-12-03 23:44:11 INFO: Datasets program called with:
+ /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/stanza/utils/datasets/prepare_depparse_treebank.py UD_Swedish-diachronic --wordvec_pretrain_file /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt
+ 2025-12-03 23:44:11 DEBUG: Downloading resource file from https://raw.githubusercontent.com/stanfordnlp/stanza-resources/main/resources_1.11.0.json
+
+ 2025-12-03 23:44:11 INFO: Downloaded file to /cephyr/users/cleland/Alvis/stanza_resources/resources.json
+ 2025-12-03 23:44:11 DEBUG: Processing parameter "processors"...
+ 2025-12-03 23:44:11 WARNING: Can not find pos: diachronic from official model list. Ignoring it.
+ 2025-12-03 23:44:11 INFO: Downloading these customized packages for language: sv (Swedish)...
+ =======================
+ | Processor | Package |
+ -----------------------
+ =======================
+
+ 2025-12-03 23:44:11 INFO: Finished downloading models and saved to /cephyr/users/cleland/Alvis/stanza_resources
+ 2025-12-03 23:44:11 INFO: Using tagger model in /cephyr/users/cleland/Alvis/stanza_resources/sv/pos/diachronic.pt for sv_diachronic
+ 2025-12-03 23:44:11 INFO: Using model /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt for forward charlm
+ 2025-12-03 23:44:11 INFO: Using model /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt for backward charlm
+ Augmented 502 quotes: Counter({'""': 57, '«»': 55, '「」': 53, '″″': 52, '《》': 51, '””': 50, '»«': 49, '„”': 48, '““': 45, '„“': 42})
+ 2025-12-03 23:44:16 INFO: Running tagger to retag /local/tmp.5441282/tmpeqgzmft2/sv_diachronic.train.gold.conllu to /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.train.in.conllu
+ Args: ['--wordvec_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain', '--lang', 'sv', '--shorthand', 'sv_diachronic', '--mode', 'predict', '--save_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pos', '--save_name', 'diachronic.pt', '--wordvec_pretrain_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt', '--charlm', '--charlm_shorthand', 'sv_conll17', '--charlm_forward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt', '--charlm_backward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt', '--eval_file', '/local/tmp.5441282/tmpeqgzmft2/sv_diachronic.train.gold.conllu', '--output_file', '/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.train.in.conllu']
+ 2025-12-03 23:44:16 INFO: Running tagger in predict mode
+ 2025-12-03 23:44:16 INFO: Loading model from: /cephyr/users/cleland/Alvis/stanza_resources/sv/pos/diachronic.pt
+ 2025-12-03 23:44:18 DEBUG: Loaded pretrain from /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/conll17.pt
+ 2025-12-03 23:44:18 DEBUG: POS model loading charmodels: /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt and /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-03 23:44:18 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt
+ 2025-12-03 23:44:18 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-03 23:44:19 DEBUG: Building Adam with lr=0.003000, betas=(0.9, 0.95), eps=0.000001
+ 2025-12-03 23:44:21 INFO: Loading data with batch size 250...
+ ./make_new_model.sh: line 58: 1401181 Terminated python -m stanza.utils.datasets.prepare_depparse_treebank UD_Swedish-diachronic --wordvec_pretrain_file "/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/${PRETRAINED_MODEL}"
logs/log_diachronic.pt_sv_is_20251203_234442.txt ADDED
@@ -0,0 +1,738 @@
+ === LOGFILE: logs/log_diachronic.pt_sv_is_20251203_234442.txt ===
+ Language codes: sv is
+ Using pretrained model: diachronic.pt
+
+ Running: python prepare-train-val-test.py sv is
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_lines-ud-dev.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_swell-ud-test.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_pud-ud-test.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_talbanken-ud-test.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_swell-ud-test-trg.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_talbanken-ud-dev.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/ucxn_ud_swedish-talbanken.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_talbanken-ud-train.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_old-ud-test.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_lines-ud-train.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-sv/sv_lines-ud-test.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-is/is_gc-ud-dev.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-is/is_pud-ud-test.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-is/is_icepahc-ud-train.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-is/is_gc-ud-train.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-is/is_gc-ud-test.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-is/is_icepahc-ud-test.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-is/is_icepahc-ud-dev.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-is/modern/is_modern-ud-test.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-is/modern/is_modern-ud-dev.conllu
+ Reading: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud-treebanks-is/modern/is_modern-ud-train.conllu
+ Skipping DigPhil MACHINE (diachron not requested).
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec330-GyllenborgC_SwenskaSpratthoken.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec254-CederborghF_BerattelseOmJohnHall.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec277-EnbomPU_MedborgeligtSkalde.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec268-DulciU_VitterhetsNojen3.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec1063-spf220.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec397-AngeredStrandbergH_UnderSodernsSol.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec324-GranbergPA_Enslighetsalskaren.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec252-BremerF_Teckningar1.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec988-spf145.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec987-spf144.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec631-HasselskogN_HallaHallaGronkoping.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-letter141673-Stalhammar.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec1033-spf190.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec25-Runius.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec486-SchwartzMS_BellmansSkor.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec452-NyblomH_FantasierFyra.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec613-EngstromA_StrindbergOchJag.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec208-Anonym_DetGrasligaMordet.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec639-HeidenstamV_Proletarfilosofiens.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec1102-spf259.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec991-spf148.conllu
+ Cleaning TRAIN...
+ Cleaning DEV...
+ [REMOVED] sent_id=33 ERRORS=['Token 15: Missing deprel']
+ Cleaning TEST...
+ Writing TRAIN → /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud/UD_Swedish-diachronic/sv_diachronic-ud-train.conllu (73384 valid sentences)
+ Writing DEV → /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud/UD_Swedish-diachronic/sv_diachronic-ud-dev.conllu (9 valid sentences)
+ Writing TEST → /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud/UD_Swedish-diachronic/sv_diachronic-ud-test.conllu (99 valid sentences)
+ Done.
+ Sourcing scripts/config_alvis.sh
+ Running stanza dataset preparation…
+ 2025-12-03 23:44:51 INFO: Datasets program called with:
+ /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/stanza/utils/datasets/prepare_depparse_treebank.py UD_Swedish-diachronic --wordvec_pretrain_file /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/diachronic.pt
+ 2025-12-03 23:44:51 DEBUG: Downloading resource file from https://raw.githubusercontent.com/stanfordnlp/stanza-resources/main/resources_1.11.0.json
+
+ 2025-12-03 23:44:51 INFO: Downloaded file to /cephyr/users/cleland/Alvis/stanza_resources/resources.json
+ 2025-12-03 23:44:51 DEBUG: Processing parameter "processors"...
+ 2025-12-03 23:44:51 WARNING: Can not find pos: diachronic from official model list. Ignoring it.
+ 2025-12-03 23:44:51 INFO: Downloading these customized packages for language: sv (Swedish)...
+ =======================
+ | Processor | Package |
+ -----------------------
+ =======================
+
+ 2025-12-03 23:44:51 INFO: Finished downloading models and saved to /cephyr/users/cleland/Alvis/stanza_resources
+ 2025-12-03 23:44:51 INFO: Using tagger model in /cephyr/users/cleland/Alvis/stanza_resources/sv/pos/diachronic.pt for sv_diachronic
+ 2025-12-03 23:44:51 INFO: Using model /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt for forward charlm
+ 2025-12-03 23:44:51 INFO: Using model /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt for backward charlm
+ Augmented 502 quotes: Counter({'""': 57, '«»': 55, '「」': 53, '″″': 52, '《》': 51, '””': 50, '»«': 49, '„”': 48, '““': 45, '„“': 42})
+ 2025-12-03 23:44:56 INFO: Running tagger to retag /local/tmp.5441282/tmpnockqs29/sv_diachronic.train.gold.conllu to /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.train.in.conllu
+ Args: ['--wordvec_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain', '--lang', 'sv', '--shorthand', 'sv_diachronic', '--mode', 'predict', '--save_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pos', '--save_name', 'diachronic.pt', '--wordvec_pretrain_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/diachronic.pt', '--charlm', '--charlm_shorthand', 'sv_conll17', '--charlm_forward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt', '--charlm_backward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt', '--eval_file', '/local/tmp.5441282/tmpnockqs29/sv_diachronic.train.gold.conllu', '--output_file', '/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.train.in.conllu']
+ 2025-12-03 23:44:56 INFO: Running tagger in predict mode
+ 2025-12-03 23:44:56 INFO: Loading model from: /cephyr/users/cleland/Alvis/stanza_resources/sv/pos/diachronic.pt
+ 2025-12-03 23:44:59 DEBUG: Loaded pretrain from /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/diachronic.pt
+ 2025-12-03 23:44:59 DEBUG: POS model loading charmodels: /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt and /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-03 23:44:59 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt
+ 2025-12-03 23:44:59 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-03 23:44:59 DEBUG: Building Adam with lr=0.003000, betas=(0.9, 0.95), eps=0.000001
+ 2025-12-03 23:45:01 INFO: Loading data with batch size 250...
+ 2025-12-03 23:46:03 INFO: Start evaluation...
+ 2025-12-03 23:50:45 INFO: UPOS XPOS UFeats AllTags
+ 2025-12-03 23:50:45 INFO: 47.07 13.54 39.24 12.99
+ 2025-12-03 23:50:45 INFO: POS Tagger score: sv_diachronic 12.99
+ 2025-12-03 23:50:46 INFO: Running tagger to retag /local/tmp.5441282/tmpnockqs29/sv_diachronic.dev.gold.conllu to /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.dev.in.conllu
+ Args: ['--wordvec_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain', '--lang', 'sv', '--shorthand', 'sv_diachronic', '--mode', 'predict', '--save_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pos', '--save_name', 'diachronic.pt', '--wordvec_pretrain_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/diachronic.pt', '--charlm', '--charlm_shorthand', 'sv_conll17', '--charlm_forward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt', '--charlm_backward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt', '--eval_file', '/local/tmp.5441282/tmpnockqs29/sv_diachronic.dev.gold.conllu', '--output_file', '/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.dev.in.conllu']
+ 2025-12-03 23:50:46 INFO: Running tagger in predict mode
+ 2025-12-03 23:50:46 INFO: Loading model from: /cephyr/users/cleland/Alvis/stanza_resources/sv/pos/diachronic.pt
+ 2025-12-03 23:50:49 DEBUG: Loaded pretrain from /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/diachronic.pt
+ 2025-12-03 23:50:49 DEBUG: POS model loading charmodels: /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt and /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-03 23:50:49 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt
+ 2025-12-03 23:50:49 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-03 23:50:49 DEBUG: Building Adam with lr=0.003000, betas=(0.9, 0.95), eps=0.000001
+ 2025-12-03 23:50:49 INFO: Loading data with batch size 250...
+ 2025-12-03 23:50:49 INFO: Start evaluation...
+ 2025-12-03 23:50:49 INFO: UPOS XPOS UFeats AllTags
+ 2025-12-03 23:50:49 INFO: 88.61 79.46 83.42 74.75
+ 2025-12-03 23:50:49 INFO: POS Tagger score: sv_diachronic 74.75
+ 2025-12-03 23:50:49 INFO: Running tagger to retag /local/tmp.5441282/tmpnockqs29/sv_diachronic.test.gold.conllu to /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.test.in.conllu
+ Args: ['--wordvec_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain', '--lang', 'sv', '--shorthand', 'sv_diachronic', '--mode', 'predict', '--save_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pos', '--save_name', 'diachronic.pt', '--wordvec_pretrain_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/diachronic.pt', '--charlm', '--charlm_shorthand', 'sv_conll17', '--charlm_forward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt', '--charlm_backward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt', '--eval_file', '/local/tmp.5441282/tmpnockqs29/sv_diachronic.test.gold.conllu', '--output_file', '/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.test.in.conllu']
+ 2025-12-03 23:50:49 INFO: Running tagger in predict mode
+ 2025-12-03 23:50:49 INFO: Loading model from: /cephyr/users/cleland/Alvis/stanza_resources/sv/pos/diachronic.pt
+ 2025-12-03 23:50:52 DEBUG: Loaded pretrain from /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/diachronic.pt
+ 2025-12-03 23:50:52 DEBUG: POS model loading charmodels: /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt and /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-03 23:50:52 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt
+ 2025-12-03 23:50:52 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-03 23:50:52 DEBUG: Building Adam with lr=0.003000, betas=(0.9, 0.95), eps=0.000001
+ 2025-12-03 23:50:52 INFO: Loading data with batch size 250...
+ 2025-12-03 23:50:52 INFO: Start evaluation...
+ 2025-12-03 23:50:53 INFO: UPOS XPOS UFeats AllTags
+ 2025-12-03 23:50:53 INFO: 89.61 86.64 86.69 81.18
+ 2025-12-03 23:50:53 INFO: POS Tagger score: sv_diachronic 81.18
+ Preparing data for UD_Swedish-diachronic: sv_diachronic, sv
+ Reading from /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud/UD_Swedish-diachronic/sv_diachronic-ud-train.conllu and writing to /local/tmp.5441282/tmpnockqs29/sv_diachronic.train.gold.conllu
+ Swapped 'w1, w2' for 'w1 ,w2' 611 times
+ Added 364 new sentences with asdf, zzzz -> asdf,zzzz
+ Reading from /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud/UD_Swedish-diachronic/sv_diachronic-ud-dev.conllu and writing to /local/tmp.5441282/tmpnockqs29/sv_diachronic.dev.gold.conllu
+ Reading from /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud/UD_Swedish-diachronic/sv_diachronic-ud-test.conllu and writing to /local/tmp.5441282/tmpnockqs29/sv_diachronic.test.gold.conllu
+ Running stanza dependency parser training…
+ 2025-12-03 23:51:12 INFO: Training program called with:
+ /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/stanza/utils/training/run_depparse.py UD_Swedish-diachronic --wordvec_pretrain_file /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/diachronic.pt --batch_size 32 --dropout 0.33
+ 2025-12-03 23:51:12 DEBUG: UD_Swedish-diachronic: sv_diachronic
+ 2025-12-03 23:51:12 INFO: Using model /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt for forward charlm
+ 2025-12-03 23:51:12 INFO: Using model /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt for backward charlm
+ 2025-12-03 23:51:12 INFO: UD_Swedish-diachronic: saved_models/depparse/sv_diachronic_charlm_parser.pt does not exist, training new model
+ 2025-12-03 23:51:12 INFO: Using model /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt for forward charlm
+ 2025-12-03 23:51:12 INFO: Using model /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt for backward charlm
+ 2025-12-03 23:51:12 INFO: Running train depparse for UD_Swedish-diachronic with args ['--wordvec_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain', '--train_file', '/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.train.in.conllu', '--eval_file', '/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.dev.in.conllu', '--batch_size', '5000', '--lang', 'sv', '--shorthand', 'sv_diachronic', '--mode', 'train', '--charlm', '--charlm_shorthand', 'sv_conll17', '--charlm_forward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt', '--charlm_backward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt', '--wordvec_pretrain_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/diachronic.pt', '--batch_size', '32', '--dropout', '0.33']
+ 2025-12-03 23:51:12 INFO: Running parser in train mode
+ 2025-12-03 23:51:12 INFO: Using pretrained contextualized char embedding
+ 2025-12-03 23:51:12 INFO: Loading data with batch size 32...
+ 2025-12-03 23:51:23 INFO: Train File /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.train.in.conllu, Data Size: 73748
+ 2025-12-03 23:51:23 INFO: Original data size: 73748
+ 2025-12-03 23:51:24 INFO: Augmented data size: 79876
+ 2025-12-03 23:52:00 WARNING: sv_diachronic is not a known dataset. Examining the data to choose which xpos vocab to use
+ 2025-12-03 23:52:00 INFO: Original length = 79876
+ 2025-12-03 23:52:00 INFO: Filtered length = 79876
+ 2025-12-03 23:52:22 WARNING: Chose XPOSDescription(xpos_type=<XPOSType.XPOS: 1>, sep='|') for the xpos factory for sv_diachronic
+ 2025-12-03 23:52:33 DEBUG: Loaded pretrain from /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/diachronic.pt
+ 2025-12-03 23:52:54 DEBUG: 57790 batches created.
+ 2025-12-03 23:52:54 DEBUG: 9 batches created.
+ 2025-12-03 23:52:54 INFO: Training parser...
+ 2025-12-03 23:52:54 DEBUG: Depparse model loading charmodels: /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt and /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-03 23:52:54 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt
+ 2025-12-03 23:52:54 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-03 23:52:54 DEBUG: Building Adam with lr=0.003000, betas=(0.9, 0.95), eps=0.000001
+ 2025-12-03 23:53:02 INFO: Finished STEP 20/50000, loss = 7.141889 (0.215 sec/batch), lr: 0.003000
+ 2025-12-03 23:53:06 INFO: Finished STEP 40/50000, loss = 6.678511 (0.184 sec/batch), lr: 0.003000
+ 2025-12-03 23:53:10 INFO: Finished STEP 60/50000, loss = 6.466716 (0.157 sec/batch), lr: 0.003000
+ 2025-12-03 23:53:13 INFO: Finished STEP 80/50000, loss = 6.603405 (0.170 sec/batch), lr: 0.003000
+ 2025-12-03 23:53:16 INFO: Finished STEP 100/50000, loss = 6.213842 (0.163 sec/batch), lr: 0.003000
+ 2025-12-03 23:53:16 INFO: Evaluating on dev set...
+ 2025-12-03 23:53:17 INFO: LAS MLAS BLEX
+ 2025-12-03 23:53:17 INFO: 6.19 0.80 2.81
+ 2025-12-03 23:53:17 INFO: step 100: train_loss = 8.986515, dev_score = 0.0619
+ 2025-12-03 23:53:17 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 23:53:17 INFO: new best model saved.
+ 2025-12-03 23:53:18 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:53:18 INFO: new model checkpoint saved.
+ 2025-12-03 23:53:21 INFO: Finished STEP 120/50000, loss = 5.419530 (0.153 sec/batch), lr: 0.003000
+ 2025-12-03 23:53:24 INFO: Finished STEP 140/50000, loss = 5.962956 (0.143 sec/batch), lr: 0.003000
+ 2025-12-03 23:53:27 INFO: Finished STEP 160/50000, loss = 5.762636 (0.141 sec/batch), lr: 0.003000
+ 2025-12-03 23:53:30 INFO: Finished STEP 180/50000, loss = 5.969774 (0.143 sec/batch), lr: 0.003000
+ 2025-12-03 23:53:33 INFO: Finished STEP 200/50000, loss = 6.623581 (0.137 sec/batch), lr: 0.003000
+ 2025-12-03 23:53:33 INFO: Evaluating on dev set...
+ 2025-12-03 23:53:33 INFO: LAS MLAS BLEX
+ 2025-12-03 23:53:33 INFO: 12.87 5.59 9.46
+ 2025-12-03 23:53:33 INFO: step 200: train_loss = 6.091040, dev_score = 0.1287
+ 2025-12-03 23:53:33 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 23:53:33 INFO: new best model saved.
+ 2025-12-03 23:53:34 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:53:34 INFO: new model checkpoint saved.
+ 2025-12-03 23:53:37 INFO: Finished STEP 220/50000, loss = 5.729638 (0.123 sec/batch), lr: 0.003000
+ 2025-12-03 23:53:40 INFO: Finished STEP 240/50000, loss = 5.471228 (0.131 sec/batch), lr: 0.003000
+ 2025-12-03 23:53:42 INFO: Finished STEP 260/50000, loss = 5.682319 (0.132 sec/batch), lr: 0.003000
+ 2025-12-03 23:53:45 INFO: Finished STEP 280/50000, loss = 5.628954 (0.121 sec/batch), lr: 0.003000
+ 2025-12-03 23:53:47 INFO: Finished STEP 300/50000, loss = 6.819226 (0.129 sec/batch), lr: 0.003000
+ 2025-12-03 23:53:47 INFO: Evaluating on dev set...
+ 2025-12-03 23:53:48 INFO: LAS MLAS BLEX
+ 2025-12-03 23:53:48 INFO: 19.06 8.15 11.81
+ 2025-12-03 23:53:48 INFO: step 300: train_loss = 6.002339, dev_score = 0.1906
+ 2025-12-03 23:53:48 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 23:53:48 INFO: new best model saved.
+ 2025-12-03 23:53:49 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:53:49 INFO: new model checkpoint saved.
+ 2025-12-03 23:53:51 INFO: Finished STEP 320/50000, loss = 5.481004 (0.126 sec/batch), lr: 0.003000
+ 2025-12-03 23:53:54 INFO: Finished STEP 340/50000, loss = 5.271991 (0.122 sec/batch), lr: 0.003000
+ 2025-12-03 23:53:56 INFO: Finished STEP 360/50000, loss = 5.577344 (0.122 sec/batch), lr: 0.003000
+ 2025-12-03 23:53:59 INFO: Finished STEP 380/50000, loss = 5.370859 (0.123 sec/batch), lr: 0.003000
+ 2025-12-03 23:54:01 INFO: Finished STEP 400/50000, loss = 5.943788 (0.131 sec/batch), lr: 0.003000
+ 2025-12-03 23:54:01 INFO: Evaluating on dev set...
+ 2025-12-03 23:54:02 INFO: LAS MLAS BLEX
+ 2025-12-03 23:54:02 INFO: 16.34 8.48 11.56
+ 2025-12-03 23:54:02 INFO: step 400: train_loss = 5.517460, dev_score = 0.1634
+ 2025-12-03 23:54:02 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:54:02 INFO: new model checkpoint saved.
+ 2025-12-03 23:54:05 INFO: Finished STEP 420/50000, loss = 5.580643 (0.114 sec/batch), lr: 0.003000
+ 2025-12-03 23:54:07 INFO: Finished STEP 440/50000, loss = 5.062556 (0.120 sec/batch), lr: 0.003000
+ 2025-12-03 23:54:10 INFO: Finished STEP 460/50000, loss = 5.934627 (0.100 sec/batch), lr: 0.003000
+ 2025-12-03 23:54:12 INFO: Finished STEP 480/50000, loss = 5.522203 (0.109 sec/batch), lr: 0.003000
+ 2025-12-03 23:54:14 INFO: Finished STEP 500/50000, loss = 4.767919 (0.112 sec/batch), lr: 0.003000
+ 2025-12-03 23:54:14 INFO: Evaluating on dev set...
+ 2025-12-03 23:54:15 INFO: LAS MLAS BLEX
+ 2025-12-03 23:54:15 INFO: 21.53 12.70 15.87
+ 2025-12-03 23:54:15 INFO: step 500: train_loss = 5.277778, dev_score = 0.2153
+ 2025-12-03 23:54:15 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 23:54:15 INFO: new best model saved.
+ 2025-12-03 23:54:16 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:54:16 INFO: new model checkpoint saved.
216
+ 2025-12-03 23:54:18 INFO: Finished STEP 520/50000, loss = 4.886576 (0.104 sec/batch), lr: 0.003000
217
+ 2025-12-03 23:54:20 INFO: Finished STEP 540/50000, loss = 5.273784 (0.113 sec/batch), lr: 0.003000
218
+ 2025-12-03 23:54:22 INFO: Finished STEP 560/50000, loss = 6.024814 (0.112 sec/batch), lr: 0.003000
219
+ 2025-12-03 23:54:25 INFO: Finished STEP 580/50000, loss = 5.645607 (0.121 sec/batch), lr: 0.003000
220
+ 2025-12-03 23:54:27 INFO: Finished STEP 600/50000, loss = 4.885401 (0.116 sec/batch), lr: 0.003000
221
+ 2025-12-03 23:54:27 INFO: Evaluating on dev set...
222
+ 2025-12-03 23:54:27 INFO: LAS MLAS BLEX
223
+ 2025-12-03 23:54:27 INFO: 21.78 11.04 15.13
224
+ 2025-12-03 23:54:27 INFO: step 600: train_loss = 5.160117, dev_score = 0.2178
225
+ 2025-12-03 23:54:28 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
226
+ 2025-12-03 23:54:28 INFO: new best model saved.
227
+ 2025-12-03 23:54:28 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
228
+ 2025-12-03 23:54:28 INFO: new model checkpoint saved.
229
+ 2025-12-03 23:54:30 INFO: Finished STEP 620/50000, loss = 5.036123 (0.105 sec/batch), lr: 0.003000
230
+ 2025-12-03 23:54:33 INFO: Finished STEP 640/50000, loss = 4.977450 (0.105 sec/batch), lr: 0.003000
231
+ 2025-12-03 23:54:35 INFO: Finished STEP 660/50000, loss = 6.069982 (0.108 sec/batch), lr: 0.003000
232
+ 2025-12-03 23:54:37 INFO: Finished STEP 680/50000, loss = 4.577070 (0.106 sec/batch), lr: 0.003000
233
+ 2025-12-03 23:54:39 INFO: Finished STEP 700/50000, loss = 5.212182 (0.105 sec/batch), lr: 0.003000
234
+ 2025-12-03 23:54:39 INFO: Evaluating on dev set...
235
+ 2025-12-03 23:54:40 INFO: LAS MLAS BLEX
236
+ 2025-12-03 23:54:40 INFO: 22.77 11.76 15.29
237
+ 2025-12-03 23:54:40 INFO: step 700: train_loss = 5.049553, dev_score = 0.2277
238
+ 2025-12-03 23:54:40 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
239
+ 2025-12-03 23:54:40 INFO: new best model saved.
240
+ 2025-12-03 23:54:41 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
241
+ 2025-12-03 23:54:41 INFO: new model checkpoint saved.
242
+ 2025-12-03 23:54:43 INFO: Finished STEP 720/50000, loss = 4.331203 (0.103 sec/batch), lr: 0.003000
243
+ 2025-12-03 23:54:45 INFO: Finished STEP 740/50000, loss = 5.526166 (0.110 sec/batch), lr: 0.003000
244
+ 2025-12-03 23:54:47 INFO: Finished STEP 760/50000, loss = 4.207514 (0.107 sec/batch), lr: 0.003000
245
+ 2025-12-03 23:54:49 INFO: Finished STEP 780/50000, loss = 5.748611 (0.106 sec/batch), lr: 0.003000
246
+ 2025-12-03 23:54:51 INFO: Finished STEP 800/50000, loss = 5.367404 (0.113 sec/batch), lr: 0.003000
247
+ 2025-12-03 23:54:51 INFO: Evaluating on dev set...
248
+ 2025-12-03 23:54:52 INFO: LAS MLAS BLEX
249
+ 2025-12-03 23:54:52 INFO: 25.99 13.03 18.07
250
+ 2025-12-03 23:54:52 INFO: step 800: train_loss = 4.956062, dev_score = 0.2599
251
+ 2025-12-03 23:54:52 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
252
+ 2025-12-03 23:54:52 INFO: new best model saved.
253
+ 2025-12-03 23:54:53 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
254
+ 2025-12-03 23:54:53 INFO: new model checkpoint saved.
255
+ 2025-12-03 23:54:55 INFO: Finished STEP 820/50000, loss = 5.313302 (0.100 sec/batch), lr: 0.003000
256
+ 2025-12-03 23:54:57 INFO: Finished STEP 840/50000, loss = 3.976813 (0.105 sec/batch), lr: 0.003000
257
+ 2025-12-03 23:54:59 INFO: Finished STEP 860/50000, loss = 4.790815 (0.107 sec/batch), lr: 0.003000
258
+ 2025-12-03 23:55:01 INFO: Finished STEP 880/50000, loss = 4.651226 (0.109 sec/batch), lr: 0.003000
259
+ 2025-12-03 23:55:03 INFO: Finished STEP 900/50000, loss = 6.161462 (0.108 sec/batch), lr: 0.003000
260
+ 2025-12-03 23:55:03 INFO: Evaluating on dev set...
261
+ 2025-12-03 23:55:03 INFO: LAS MLAS BLEX
262
+ 2025-12-03 23:55:03 INFO: 30.94 18.95 22.74
263
+ 2025-12-03 23:55:03 INFO: step 900: train_loss = 4.923603, dev_score = 0.3094
264
+ 2025-12-03 23:55:04 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
265
+ 2025-12-03 23:55:04 INFO: new best model saved.
266
+ 2025-12-03 23:55:04 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
267
+ 2025-12-03 23:55:04 INFO: new model checkpoint saved.
268
+ 2025-12-03 23:55:06 INFO: Finished STEP 920/50000, loss = 6.818205 (0.100 sec/batch), lr: 0.003000
269
+ 2025-12-03 23:55:08 INFO: Finished STEP 940/50000, loss = 5.440798 (0.099 sec/batch), lr: 0.003000
270
+ 2025-12-03 23:55:10 INFO: Finished STEP 960/50000, loss = 5.401172 (0.098 sec/batch), lr: 0.003000
271
+ 2025-12-03 23:55:12 INFO: Finished STEP 980/50000, loss = 5.490879 (0.097 sec/batch), lr: 0.003000
272
+ 2025-12-03 23:55:14 INFO: Finished STEP 1000/50000, loss = 4.422812 (0.097 sec/batch), lr: 0.003000
273
+ 2025-12-03 23:55:14 INFO: Evaluating on dev set...
274
+ 2025-12-03 23:55:15 INFO: LAS MLAS BLEX
275
+ 2025-12-03 23:55:15 INFO: 27.72 15.07 22.40
276
+ 2025-12-03 23:55:15 INFO: step 1000: train_loss = 4.838060, dev_score = 0.2772
277
+ 2025-12-03 23:55:15 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
278
+ 2025-12-03 23:55:15 INFO: new model checkpoint saved.
279
+ 2025-12-03 23:55:17 INFO: Finished STEP 1020/50000, loss = 5.364956 (0.100 sec/batch), lr: 0.003000
280
+ 2025-12-03 23:55:19 INFO: Finished STEP 1040/50000, loss = 4.670531 (0.088 sec/batch), lr: 0.003000
281
+ 2025-12-03 23:55:21 INFO: Finished STEP 1060/50000, loss = 5.501184 (0.093 sec/batch), lr: 0.003000
282
+ 2025-12-03 23:55:23 INFO: Finished STEP 1080/50000, loss = 5.111253 (0.095 sec/batch), lr: 0.003000
283
+ 2025-12-03 23:55:25 INFO: Finished STEP 1100/50000, loss = 4.579232 (0.095 sec/batch), lr: 0.003000
284
+ 2025-12-03 23:55:25 INFO: Evaluating on dev set...
285
+ 2025-12-03 23:55:26 INFO: LAS MLAS BLEX
286
+ 2025-12-03 23:55:26 INFO: 29.70 16.81 26.05
287
+ 2025-12-03 23:55:26 INFO: step 1100: train_loss = 4.974654, dev_score = 0.2970
288
+ 2025-12-03 23:55:26 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
289
+ 2025-12-03 23:55:26 INFO: new model checkpoint saved.
290
+ 2025-12-03 23:55:28 INFO: Finished STEP 1120/50000, loss = 4.386758 (0.093 sec/batch), lr: 0.003000
291
+ 2025-12-03 23:55:30 INFO: Finished STEP 1140/50000, loss = 3.762003 (0.098 sec/batch), lr: 0.003000
292
+ 2025-12-03 23:55:32 INFO: Finished STEP 1160/50000, loss = 5.152778 (0.097 sec/batch), lr: 0.003000
293
+ 2025-12-03 23:55:34 INFO: Finished STEP 1180/50000, loss = 5.184041 (0.099 sec/batch), lr: 0.003000
294
+ 2025-12-03 23:55:36 INFO: Finished STEP 1200/50000, loss = 4.176614 (0.090 sec/batch), lr: 0.003000
+ 2025-12-03 23:55:36 INFO: Evaluating on dev set...
+ 2025-12-03 23:55:37 INFO: LAS MLAS BLEX
+ 2025-12-03 23:55:37 INFO: 29.46 17.84 20.75
+ 2025-12-03 23:55:37 INFO: step 1200: train_loss = 4.793580, dev_score = 0.2946
+ 2025-12-03 23:55:37 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:55:37 INFO: new model checkpoint saved.
+ 2025-12-03 23:55:39 INFO: Finished STEP 1220/50000, loss = 4.165412 (0.097 sec/batch), lr: 0.003000
+ 2025-12-03 23:55:41 INFO: Finished STEP 1240/50000, loss = 6.280207 (0.094 sec/batch), lr: 0.003000
+ 2025-12-03 23:55:43 INFO: Finished STEP 1260/50000, loss = 5.302584 (0.094 sec/batch), lr: 0.003000
+ 2025-12-03 23:55:45 INFO: Finished STEP 1280/50000, loss = 5.041810 (0.086 sec/batch), lr: 0.003000
+ 2025-12-03 23:55:47 INFO: Finished STEP 1300/50000, loss = 4.568531 (0.086 sec/batch), lr: 0.003000
+ 2025-12-03 23:55:47 INFO: Evaluating on dev set...
+ 2025-12-03 23:55:47 INFO: LAS MLAS BLEX
+ 2025-12-03 23:55:47 INFO: 33.42 21.82 27.47
+ 2025-12-03 23:55:47 INFO: step 1300: train_loss = 4.806624, dev_score = 0.3342
+ 2025-12-03 23:55:47 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 23:55:47 INFO: new best model saved.
+ 2025-12-03 23:55:48 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:55:48 INFO: new model checkpoint saved.
+ 2025-12-03 23:55:50 INFO: Finished STEP 1320/50000, loss = 5.249327 (0.096 sec/batch), lr: 0.003000
+ 2025-12-03 23:55:52 INFO: Finished STEP 1340/50000, loss = 4.607012 (0.090 sec/batch), lr: 0.003000
+ 2025-12-03 23:55:54 INFO: Finished STEP 1360/50000, loss = 4.846408 (0.092 sec/batch), lr: 0.003000
+ 2025-12-03 23:55:56 INFO: Finished STEP 1380/50000, loss = 4.257821 (0.091 sec/batch), lr: 0.003000
+ 2025-12-03 23:55:57 INFO: Finished STEP 1400/50000, loss = 4.532840 (0.091 sec/batch), lr: 0.003000
+ 2025-12-03 23:55:57 INFO: Evaluating on dev set...
+ 2025-12-03 23:55:58 INFO: LAS MLAS BLEX
+ 2025-12-03 23:55:58 INFO: 36.39 24.68 31.06
+ 2025-12-03 23:55:58 INFO: step 1400: train_loss = 4.772193, dev_score = 0.3639
+ 2025-12-03 23:55:58 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 23:55:58 INFO: new best model saved.
+ 2025-12-03 23:55:59 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:55:59 INFO: new model checkpoint saved.
+ 2025-12-03 23:56:01 INFO: Finished STEP 1420/50000, loss = 4.807161 (0.093 sec/batch), lr: 0.003000
+ 2025-12-03 23:56:03 INFO: Finished STEP 1440/50000, loss = 4.291670 (0.090 sec/batch), lr: 0.003000
+ 2025-12-03 23:56:04 INFO: Finished STEP 1460/50000, loss = 4.771211 (0.095 sec/batch), lr: 0.003000
+ 2025-12-03 23:56:06 INFO: Finished STEP 1480/50000, loss = 5.477196 (0.093 sec/batch), lr: 0.003000
+ 2025-12-03 23:56:08 INFO: Finished STEP 1500/50000, loss = 4.622495 (0.087 sec/batch), lr: 0.003000
+ 2025-12-03 23:56:08 INFO: Evaluating on dev set...
+ 2025-12-03 23:56:09 INFO: LAS MLAS BLEX
+ 2025-12-03 23:56:09 INFO: 35.15 23.19 30.23
+ 2025-12-03 23:56:09 INFO: step 1500: train_loss = 4.886981, dev_score = 0.3515
+ 2025-12-03 23:56:09 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:56:09 INFO: new model checkpoint saved.
+ 2025-12-03 23:56:11 INFO: Finished STEP 1520/50000, loss = 3.840725 (0.089 sec/batch), lr: 0.003000
+ 2025-12-03 23:56:13 INFO: Finished STEP 1540/50000, loss = 4.632377 (0.083 sec/batch), lr: 0.003000
+ 2025-12-03 23:56:15 INFO: Finished STEP 1560/50000, loss = 4.408291 (0.096 sec/batch), lr: 0.003000
+ 2025-12-03 23:56:16 INFO: Finished STEP 1580/50000, loss = 4.363316 (0.093 sec/batch), lr: 0.003000
+ 2025-12-03 23:56:18 INFO: Finished STEP 1600/50000, loss = 4.781141 (0.093 sec/batch), lr: 0.003000
+ 2025-12-03 23:56:18 INFO: Evaluating on dev set...
+ 2025-12-03 23:56:19 INFO: LAS MLAS BLEX
+ 2025-12-03 23:56:19 INFO: 36.14 23.58 29.89
+ 2025-12-03 23:56:19 INFO: step 1600: train_loss = 4.545301, dev_score = 0.3614
+ 2025-12-03 23:56:19 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:56:19 INFO: new model checkpoint saved.
+ 2025-12-03 23:56:21 INFO: Finished STEP 1620/50000, loss = 5.111926 (0.088 sec/batch), lr: 0.003000
+ 2025-12-03 23:56:23 INFO: Finished STEP 1640/50000, loss = 4.994601 (0.088 sec/batch), lr: 0.003000
+ 2025-12-03 23:56:25 INFO: Finished STEP 1660/50000, loss = 5.298572 (0.093 sec/batch), lr: 0.003000
+ 2025-12-03 23:56:26 INFO: Finished STEP 1680/50000, loss = 3.126743 (0.086 sec/batch), lr: 0.003000
+ 2025-12-03 23:56:28 INFO: Finished STEP 1700/50000, loss = 4.490625 (0.088 sec/batch), lr: 0.003000
+ 2025-12-03 23:56:28 INFO: Evaluating on dev set...
+ 2025-12-03 23:56:29 INFO: LAS MLAS BLEX
+ 2025-12-03 23:56:29 INFO: 34.41 23.08 30.77
+ 2025-12-03 23:56:29 INFO: step 1700: train_loss = 4.464878, dev_score = 0.3441
+ 2025-12-03 23:56:29 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:56:29 INFO: new model checkpoint saved.
+ 2025-12-03 23:56:31 INFO: Finished STEP 1720/50000, loss = 3.677958 (0.091 sec/batch), lr: 0.003000
+ 2025-12-03 23:56:33 INFO: Finished STEP 1740/50000, loss = 4.914848 (0.088 sec/batch), lr: 0.003000
+ 2025-12-03 23:56:35 INFO: Finished STEP 1760/50000, loss = 4.272021 (0.082 sec/batch), lr: 0.003000
+ 2025-12-03 23:56:36 INFO: Finished STEP 1780/50000, loss = 4.306624 (0.090 sec/batch), lr: 0.003000
+ 2025-12-03 23:56:38 INFO: Finished STEP 1800/50000, loss = 5.146128 (0.085 sec/batch), lr: 0.003000
+ 2025-12-03 23:56:38 INFO: Evaluating on dev set...
+ 2025-12-03 23:56:39 INFO: LAS MLAS BLEX
+ 2025-12-03 23:56:39 INFO: 40.10 28.21 35.79
+ 2025-12-03 23:56:39 INFO: step 1800: train_loss = 4.611931, dev_score = 0.4010
+ 2025-12-03 23:56:39 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 23:56:39 INFO: new best model saved.
+ 2025-12-03 23:56:40 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:56:40 INFO: new model checkpoint saved.
+ 2025-12-03 23:56:41 INFO: Finished STEP 1820/50000, loss = 4.201222 (0.088 sec/batch), lr: 0.003000
+ 2025-12-03 23:56:43 INFO: Finished STEP 1840/50000, loss = 4.578183 (0.081 sec/batch), lr: 0.003000
+ 2025-12-03 23:56:45 INFO: Finished STEP 1860/50000, loss = 3.769521 (0.078 sec/batch), lr: 0.003000
+ 2025-12-03 23:56:47 INFO: Finished STEP 1880/50000, loss = 5.085626 (0.087 sec/batch), lr: 0.003000
+ 2025-12-03 23:56:48 INFO: Finished STEP 1900/50000, loss = 4.567149 (0.081 sec/batch), lr: 0.003000
+ 2025-12-03 23:56:48 INFO: Evaluating on dev set...
+ 2025-12-03 23:56:49 INFO: LAS MLAS BLEX
+ 2025-12-03 23:56:49 INFO: 30.94 17.05 25.78
+ 2025-12-03 23:56:49 INFO: step 1900: train_loss = 4.625774, dev_score = 0.3094
+ 2025-12-03 23:56:49 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:56:49 INFO: new model checkpoint saved.
+ 2025-12-03 23:56:51 INFO: Finished STEP 1920/50000, loss = 4.870181 (0.084 sec/batch), lr: 0.003000
+ 2025-12-03 23:56:53 INFO: Finished STEP 1940/50000, loss = 4.911637 (0.085 sec/batch), lr: 0.003000
+ 2025-12-03 23:56:55 INFO: Finished STEP 1960/50000, loss = 3.122515 (0.083 sec/batch), lr: 0.003000
+ 2025-12-03 23:56:56 INFO: Finished STEP 1980/50000, loss = 4.698964 (0.085 sec/batch), lr: 0.003000
+ 2025-12-03 23:56:58 INFO: Finished STEP 2000/50000, loss = 4.137403 (0.075 sec/batch), lr: 0.003000
+ 2025-12-03 23:56:58 INFO: Evaluating on dev set...
+ 2025-12-03 23:56:59 INFO: LAS MLAS BLEX
+ 2025-12-03 23:56:59 INFO: 42.33 33.54 36.06
+ 2025-12-03 23:56:59 INFO: step 2000: train_loss = 4.470704, dev_score = 0.4233
+ 2025-12-03 23:56:59 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 23:56:59 INFO: new best model saved.
+ 2025-12-03 23:56:59 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:56:59 INFO: new model checkpoint saved.
+ 2025-12-03 23:57:01 INFO: Finished STEP 2020/50000, loss = 4.445052 (0.082 sec/batch), lr: 0.003000
+ 2025-12-03 23:57:03 INFO: Finished STEP 2040/50000, loss = 4.791672 (0.082 sec/batch), lr: 0.003000
+ 2025-12-03 23:57:04 INFO: Finished STEP 2060/50000, loss = 4.461417 (0.079 sec/batch), lr: 0.003000
+ 2025-12-03 23:57:06 INFO: Finished STEP 2080/50000, loss = 4.476893 (0.086 sec/batch), lr: 0.003000
+ 2025-12-03 23:57:08 INFO: Finished STEP 2100/50000, loss = 4.298782 (0.078 sec/batch), lr: 0.003000
+ 2025-12-03 23:57:08 INFO: Evaluating on dev set...
+ 2025-12-03 23:57:08 INFO: LAS MLAS BLEX
+ 2025-12-03 23:57:08 INFO: 36.39 25.94 30.54
+ 2025-12-03 23:57:08 INFO: step 2100: train_loss = 4.424171, dev_score = 0.3639
+ 2025-12-03 23:57:09 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:57:09 INFO: new model checkpoint saved.
+ 2025-12-03 23:57:11 INFO: Finished STEP 2120/50000, loss = 4.935596 (0.085 sec/batch), lr: 0.003000
+ 2025-12-03 23:57:12 INFO: Finished STEP 2140/50000, loss = 4.870115 (0.090 sec/batch), lr: 0.003000
+ 2025-12-03 23:57:14 INFO: Finished STEP 2160/50000, loss = 6.628759 (0.090 sec/batch), lr: 0.003000
+ 2025-12-03 23:57:16 INFO: Finished STEP 2180/50000, loss = 4.541629 (0.079 sec/batch), lr: 0.003000
+ 2025-12-03 23:57:17 INFO: Finished STEP 2200/50000, loss = 5.319767 (0.084 sec/batch), lr: 0.003000
+ 2025-12-03 23:57:17 INFO: Evaluating on dev set...
+ 2025-12-03 23:57:18 INFO: LAS MLAS BLEX
+ 2025-12-03 23:57:18 INFO: 42.82 31.03 37.32
+ 2025-12-03 23:57:18 INFO: step 2200: train_loss = 4.508313, dev_score = 0.4282
+ 2025-12-03 23:57:18 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 23:57:18 INFO: new best model saved.
+ 2025-12-03 23:57:19 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:57:19 INFO: new model checkpoint saved.
+ 2025-12-03 23:57:21 INFO: Finished STEP 2220/50000, loss = 4.577855 (0.081 sec/batch), lr: 0.003000
+ 2025-12-03 23:57:22 INFO: Finished STEP 2240/50000, loss = 4.928855 (0.084 sec/batch), lr: 0.003000
+ 2025-12-03 23:57:24 INFO: Finished STEP 2260/50000, loss = 4.655763 (0.079 sec/batch), lr: 0.003000
+ 2025-12-03 23:57:26 INFO: Finished STEP 2280/50000, loss = 3.626117 (0.078 sec/batch), lr: 0.003000
+ 2025-12-03 23:57:27 INFO: Finished STEP 2300/50000, loss = 3.293444 (0.081 sec/batch), lr: 0.003000
+ 2025-12-03 23:57:27 INFO: Evaluating on dev set...
+ 2025-12-03 23:57:28 INFO: LAS MLAS BLEX
+ 2025-12-03 23:57:28 INFO: 42.08 29.55 33.40
+ 2025-12-03 23:57:28 INFO: step 2300: train_loss = 4.557782, dev_score = 0.4208
+ 2025-12-03 23:57:28 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:57:28 INFO: new model checkpoint saved.
+ 2025-12-03 23:57:30 INFO: Finished STEP 2320/50000, loss = 3.842817 (0.077 sec/batch), lr: 0.003000
+ 2025-12-03 23:57:32 INFO: Finished STEP 2340/50000, loss = 3.087567 (0.075 sec/batch), lr: 0.003000
+ 2025-12-03 23:57:33 INFO: Finished STEP 2360/50000, loss = 4.356225 (0.083 sec/batch), lr: 0.003000
+ 2025-12-03 23:57:35 INFO: Finished STEP 2380/50000, loss = 3.980781 (0.085 sec/batch), lr: 0.003000
+ 2025-12-03 23:57:36 INFO: Finished STEP 2400/50000, loss = 4.691700 (0.085 sec/batch), lr: 0.003000
+ 2025-12-03 23:57:36 INFO: Evaluating on dev set...
+ 2025-12-03 23:57:37 INFO: LAS MLAS BLEX
+ 2025-12-03 23:57:37 INFO: 41.09 30.83 34.58
+ 2025-12-03 23:57:37 INFO: step 2400: train_loss = 4.658476, dev_score = 0.4109
+ 2025-12-03 23:57:38 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:57:38 INFO: new model checkpoint saved.
+ 2025-12-03 23:57:39 INFO: Finished STEP 2420/50000, loss = 3.771997 (0.076 sec/batch), lr: 0.003000
+ 2025-12-03 23:57:41 INFO: Finished STEP 2440/50000, loss = 4.486198 (0.079 sec/batch), lr: 0.003000
+ 2025-12-03 23:57:42 INFO: Finished STEP 2460/50000, loss = 3.902313 (0.076 sec/batch), lr: 0.003000
+ 2025-12-03 23:57:44 INFO: Finished STEP 2480/50000, loss = 3.251201 (0.076 sec/batch), lr: 0.003000
+ 2025-12-03 23:57:46 INFO: Finished STEP 2500/50000, loss = 3.793236 (0.075 sec/batch), lr: 0.003000
+ 2025-12-03 23:57:46 INFO: Evaluating on dev set...
+ 2025-12-03 23:57:46 INFO: LAS MLAS BLEX
+ 2025-12-03 23:57:46 INFO: 41.34 29.83 34.87
+ 2025-12-03 23:57:46 INFO: step 2500: train_loss = 4.539210, dev_score = 0.4134
+ 2025-12-03 23:57:47 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:57:47 INFO: new model checkpoint saved.
+ 2025-12-03 23:57:48 INFO: Finished STEP 2520/50000, loss = 2.911145 (0.076 sec/batch), lr: 0.003000
+ 2025-12-03 23:57:50 INFO: Finished STEP 2540/50000, loss = 4.427355 (0.075 sec/batch), lr: 0.003000
+ 2025-12-03 23:57:52 INFO: Finished STEP 2560/50000, loss = 5.599472 (0.082 sec/batch), lr: 0.003000
+ 2025-12-03 23:57:53 INFO: Finished STEP 2580/50000, loss = 3.617108 (0.078 sec/batch), lr: 0.003000
+ 2025-12-03 23:57:55 INFO: Finished STEP 2600/50000, loss = 5.222749 (0.075 sec/batch), lr: 0.003000
+ 2025-12-03 23:57:55 INFO: Evaluating on dev set...
+ 2025-12-03 23:57:55 INFO: LAS MLAS BLEX
+ 2025-12-03 23:57:55 INFO: 40.59 27.80 31.95
+ 2025-12-03 23:57:55 INFO: step 2600: train_loss = 4.557820, dev_score = 0.4059
+ 2025-12-03 23:57:56 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:57:56 INFO: new model checkpoint saved.
+ 2025-12-03 23:57:58 INFO: Finished STEP 2620/50000, loss = 3.744372 (0.081 sec/batch), lr: 0.003000
+ 2025-12-03 23:57:59 INFO: Finished STEP 2640/50000, loss = 3.877670 (0.074 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:01 INFO: Finished STEP 2660/50000, loss = 4.861045 (0.078 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:02 INFO: Finished STEP 2680/50000, loss = 4.102747 (0.084 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:04 INFO: Finished STEP 2700/50000, loss = 4.350719 (0.075 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:04 INFO: Evaluating on dev set...
+ 2025-12-03 23:58:05 INFO: LAS MLAS BLEX
+ 2025-12-03 23:58:05 INFO: 41.83 28.45 32.99
+ 2025-12-03 23:58:05 INFO: step 2700: train_loss = 4.458846, dev_score = 0.4183
+ 2025-12-03 23:58:05 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:58:05 INFO: new model checkpoint saved.
+ 2025-12-03 23:58:07 INFO: Finished STEP 2720/50000, loss = 3.473779 (0.077 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:08 INFO: Finished STEP 2740/50000, loss = 3.842570 (0.075 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:10 INFO: Finished STEP 2760/50000, loss = 3.377702 (0.069 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:11 INFO: Finished STEP 2780/50000, loss = 5.296472 (0.076 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:13 INFO: Finished STEP 2800/50000, loss = 4.328273 (0.075 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:13 INFO: Evaluating on dev set...
+ 2025-12-03 23:58:14 INFO: LAS MLAS BLEX
+ 2025-12-03 23:58:14 INFO: 41.34 29.52 33.26
+ 2025-12-03 23:58:14 INFO: step 2800: train_loss = 4.382064, dev_score = 0.4134
+ 2025-12-03 23:58:14 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:58:14 INFO: new model checkpoint saved.
+ 2025-12-03 23:58:16 INFO: Finished STEP 2820/50000, loss = 4.742715 (0.085 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:17 INFO: Finished STEP 2840/50000, loss = 3.637206 (0.072 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:19 INFO: Finished STEP 2860/50000, loss = 5.079699 (0.082 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:21 INFO: Finished STEP 2880/50000, loss = 4.569402 (0.073 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:22 INFO: Finished STEP 2900/50000, loss = 6.735947 (0.079 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:22 INFO: Evaluating on dev set...
+ 2025-12-03 23:58:23 INFO: LAS MLAS BLEX
+ 2025-12-03 23:58:23 INFO: 44.06 32.99 37.58
+ 2025-12-03 23:58:23 INFO: step 2900: train_loss = 4.429761, dev_score = 0.4406
+ 2025-12-03 23:58:23 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 23:58:23 INFO: new best model saved.
+ 2025-12-03 23:58:23 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:58:23 INFO: new model checkpoint saved.
+ 2025-12-03 23:58:25 INFO: Finished STEP 2920/50000, loss = 4.928506 (0.077 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:27 INFO: Finished STEP 2940/50000, loss = 3.095824 (0.079 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:28 INFO: Finished STEP 2960/50000, loss = 5.189299 (0.075 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:30 INFO: Finished STEP 2980/50000, loss = 4.598363 (0.076 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:31 INFO: Finished STEP 3000/50000, loss = 4.077869 (0.074 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:31 INFO: Evaluating on dev set...
+ 2025-12-03 23:58:32 INFO: LAS MLAS BLEX
+ 2025-12-03 23:58:32 INFO: 49.01 35.07 39.67
+ 2025-12-03 23:58:32 INFO: step 3000: train_loss = 4.472208, dev_score = 0.4901
+ 2025-12-03 23:58:32 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-03 23:58:32 INFO: new best model saved.
+ 2025-12-03 23:58:33 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:58:33 INFO: new model checkpoint saved.
+ 2025-12-03 23:58:34 INFO: Finished STEP 3020/50000, loss = 5.035297 (0.075 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:36 INFO: Finished STEP 3040/50000, loss = 4.391068 (0.076 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:37 INFO: Finished STEP 3060/50000, loss = 3.966053 (0.071 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:39 INFO: Finished STEP 3080/50000, loss = 4.046230 (0.076 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:40 INFO: Finished STEP 3100/50000, loss = 4.489533 (0.080 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:40 INFO: Evaluating on dev set...
+ 2025-12-03 23:58:41 INFO: LAS MLAS BLEX
+ 2025-12-03 23:58:41 INFO: 46.29 36.82 40.17
+ 2025-12-03 23:58:41 INFO: step 3100: train_loss = 4.470613, dev_score = 0.4629
+ 2025-12-03 23:58:42 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:58:42 INFO: new model checkpoint saved.
+ 2025-12-03 23:58:43 INFO: Finished STEP 3120/50000, loss = 4.524646 (0.082 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:45 INFO: Finished STEP 3140/50000, loss = 3.848670 (0.070 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:46 INFO: Finished STEP 3160/50000, loss = 3.646326 (0.070 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:48 INFO: Finished STEP 3180/50000, loss = 4.994428 (0.080 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:49 INFO: Finished STEP 3200/50000, loss = 4.490797 (0.073 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:49 INFO: Evaluating on dev set...
+ 2025-12-03 23:58:50 INFO: LAS MLAS BLEX
+ 2025-12-03 23:58:50 INFO: 45.05 34.10 37.84
+ 2025-12-03 23:58:50 INFO: step 3200: train_loss = 4.441081, dev_score = 0.4505
+ 2025-12-03 23:58:50 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:58:50 INFO: new model checkpoint saved.
+ 2025-12-03 23:58:52 INFO: Finished STEP 3220/50000, loss = 4.917915 (0.083 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:54 INFO: Finished STEP 3240/50000, loss = 5.068405 (0.081 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:55 INFO: Finished STEP 3260/50000, loss = 5.995227 (0.078 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:57 INFO: Finished STEP 3280/50000, loss = 5.134910 (0.074 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:58 INFO: Finished STEP 3300/50000, loss = 6.417435 (0.080 sec/batch), lr: 0.003000
+ 2025-12-03 23:58:58 INFO: Evaluating on dev set...
+ 2025-12-03 23:58:59 INFO: LAS MLAS BLEX
+ 2025-12-03 23:58:59 INFO: 46.04 34.78 39.34
+ 2025-12-03 23:58:59 INFO: step 3300: train_loss = 4.556138, dev_score = 0.4604
+ 2025-12-03 23:58:59 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:58:59 INFO: new model checkpoint saved.
+ 2025-12-03 23:59:01 INFO: Finished STEP 3320/50000, loss = 3.302945 (0.074 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:02 INFO: Finished STEP 3340/50000, loss = 3.905273 (0.072 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:04 INFO: Finished STEP 3360/50000, loss = 4.912685 (0.073 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:05 INFO: Finished STEP 3380/50000, loss = 5.646633 (0.077 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:07 INFO: Finished STEP 3400/50000, loss = 6.704240 (0.077 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:07 INFO: Evaluating on dev set...
+ 2025-12-03 23:59:07 INFO: LAS MLAS BLEX
+ 2025-12-03 23:59:07 INFO: 47.03 35.46 39.59
+ 2025-12-03 23:59:07 INFO: step 3400: train_loss = 4.482560, dev_score = 0.4703
+ 2025-12-03 23:59:08 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:59:08 INFO: new model checkpoint saved.
+ 2025-12-03 23:59:10 INFO: Finished STEP 3420/50000, loss = 4.840897 (0.080 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:11 INFO: Finished STEP 3440/50000, loss = 3.560481 (0.071 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:13 INFO: Finished STEP 3460/50000, loss = 6.708435 (0.075 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:14 INFO: Finished STEP 3480/50000, loss = 4.363883 (0.075 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:16 INFO: Finished STEP 3500/50000, loss = 4.516262 (0.064 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:16 INFO: Evaluating on dev set...
+ 2025-12-03 23:59:16 INFO: LAS MLAS BLEX
+ 2025-12-03 23:59:16 INFO: 48.02 35.00 39.58
+ 2025-12-03 23:59:16 INFO: step 3500: train_loss = 4.333270, dev_score = 0.4802
+ 2025-12-03 23:59:17 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:59:17 INFO: new model checkpoint saved.
+ 2025-12-03 23:59:18 INFO: Finished STEP 3520/50000, loss = 5.945315 (0.076 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:20 INFO: Finished STEP 3540/50000, loss = 3.693509 (0.077 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:21 INFO: Finished STEP 3560/50000, loss = 4.317142 (0.083 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:23 INFO: Finished STEP 3580/50000, loss = 4.603652 (0.070 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:24 INFO: Finished STEP 3600/50000, loss = 4.323731 (0.075 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:24 INFO: Evaluating on dev set...
+ 2025-12-03 23:59:25 INFO: LAS MLAS BLEX
+ 2025-12-03 23:59:25 INFO: 44.80 33.26 38.32
+ 2025-12-03 23:59:25 INFO: step 3600: train_loss = 4.493827, dev_score = 0.4480
+ 2025-12-03 23:59:25 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:59:25 INFO: new model checkpoint saved.
+ 2025-12-03 23:59:27 INFO: Finished STEP 3620/50000, loss = 3.987468 (0.072 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:28 INFO: Finished STEP 3640/50000, loss = 3.186597 (0.074 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:30 INFO: Finished STEP 3660/50000, loss = 4.076442 (0.070 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:31 INFO: Finished STEP 3680/50000, loss = 4.194356 (0.072 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:33 INFO: Finished STEP 3700/50000, loss = 4.002431 (0.080 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:33 INFO: Evaluating on dev set...
+ 2025-12-03 23:59:33 INFO: LAS MLAS BLEX
+ 2025-12-03 23:59:33 INFO: 45.54 33.47 37.60
+ 2025-12-03 23:59:33 INFO: step 3700: train_loss = 4.316014, dev_score = 0.4554
+ 2025-12-03 23:59:34 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:59:34 INFO: new model checkpoint saved.
+ 2025-12-03 23:59:36 INFO: Finished STEP 3720/50000, loss = 3.018656 (0.073 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:37 INFO: Finished STEP 3740/50000, loss = 5.110373 (0.076 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:38 INFO: Finished STEP 3760/50000, loss = 4.163863 (0.069 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:40 INFO: Finished STEP 3780/50000, loss = 5.162381 (0.079 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:41 INFO: Finished STEP 3800/50000, loss = 4.207483 (0.071 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:41 INFO: Evaluating on dev set...
+ 2025-12-03 23:59:42 INFO: LAS MLAS BLEX
+ 2025-12-03 23:59:42 INFO: 46.78 33.20 37.76
+ 2025-12-03 23:59:42 INFO: step 3800: train_loss = 4.330287, dev_score = 0.4678
+ 2025-12-03 23:59:43 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:59:43 INFO: new model checkpoint saved.
+ 2025-12-03 23:59:44 INFO: Finished STEP 3820/50000, loss = 4.129490 (0.072 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:46 INFO: Finished STEP 3840/50000, loss = 4.065084 (0.073 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:47 INFO: Finished STEP 3860/50000, loss = 4.672451 (0.080 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:48 INFO: Finished STEP 3880/50000, loss = 4.187882 (0.071 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:50 INFO: Finished STEP 3900/50000, loss = 4.528747 (0.069 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:50 INFO: Evaluating on dev set...
+ 2025-12-03 23:59:50 INFO: LAS MLAS BLEX
+ 2025-12-03 23:59:50 INFO: 41.58 29.94 35.34
+ 2025-12-03 23:59:50 INFO: step 3900: train_loss = 4.380822, dev_score = 0.4158
+ 2025-12-03 23:59:51 INFO: Model saved to saved_models/depparse/sv_diachronic_charlm_parser_checkpoint.pt
+ 2025-12-03 23:59:51 INFO: new model checkpoint saved.
+ 2025-12-03 23:59:53 INFO: Finished STEP 3920/50000, loss = 3.935084 (0.073 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:54 INFO: Finished STEP 3940/50000, loss = 4.979820 (0.074 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:55 INFO: Finished STEP 3960/50000, loss = 5.611000 (0.068 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:57 INFO: Finished STEP 3980/50000, loss = 7.296205 (0.086 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:58 INFO: Finished STEP 4000/50000, loss = 4.311894 (0.069 sec/batch), lr: 0.003000
+ 2025-12-03 23:59:58 INFO: Evaluating on dev set...
+ 2025-12-03 23:59:59 INFO: LAS MLAS BLEX
+ 2025-12-03 23:59:59 INFO: 46.04 34.03 38.66
+ 2025-12-03 23:59:59 INFO: step 4000: train_loss = 4.541904, dev_score = 0.4604
+ 2025-12-03 23:59:59 INFO: Training ended with 4000 steps.
+ 2025-12-03 23:59:59 INFO: Best dev F1 = 49.01, at iteration = 3000
+ 2025-12-04 00:00:00 INFO: Running dev depparse for UD_Swedish-diachronic with args ['--wordvec_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain', '--eval_file', '/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.dev.in.conllu', '--lang', 'sv', '--shorthand', 'sv_diachronic', '--mode', 'predict', '--charlm', '--charlm_shorthand', 'sv_conll17', '--charlm_forward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt', '--charlm_backward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt', '--wordvec_pretrain_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/diachronic.pt', '--batch_size', '32', '--dropout', '0.33']
+ 2025-12-04 00:00:00 INFO: Running parser in predict mode
+ 2025-12-04 00:00:00 INFO: Loading model from: saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-04 00:00:04 DEBUG: Loaded pretrain from /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/diachronic.pt
+ 2025-12-04 00:00:04 DEBUG: Depparse model loading charmodels: /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt and /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-04 00:00:04 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt
+ 2025-12-04 00:00:04 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-04 00:00:04 DEBUG: Building Adam with lr=0.003000, betas=(0.9, 0.95), eps=0.000001
+ 2025-12-04 00:00:04 INFO: Loading data with batch size 32...
+ 2025-12-04 00:00:04 DEBUG: 9 batches created.
+ 2025-12-04 00:00:04 INFO: F1 scores for each dependency:
+ Note that unlabeled attachment errors hurt the labeled attachment scores
+ acl: p 0.0000 r 0.0000 f1 0.0000 (3 actual)
+ acl:relcl: p 0.0000 r 0.0000 f1 0.0000 (7 actual)
+ advcl: p 0.0000 r 0.0000 f1 0.0000 (5 actual)
+ advmod: p 0.4118 r 0.5600 f1 0.4746 (25 actual)
+ amod: p 0.7037 r 0.6129 f1 0.6552 (31 actual)
+ appos: p 0.0000 r 0.0000 f1 0.0000 (4 actual)
+ aux: p 0.8000 r 0.7273 f1 0.7619 (11 actual)
+ case: p 0.8235 r 0.7500 f1 0.7850 (56 actual)
+ cc: p 0.6667 r 0.6154 f1 0.6400 (13 actual)
+ ccomp: p 0.0000 r 0.0000 f1 0.0000 (2 actual)
+ conj: p 0.1000 r 0.3333 f1 0.1538 (12 actual)
+ cop: p 0.5000 r 0.3333 f1 0.4000 (3 actual)
+ csubj: p 0.0000 r 0.0000 f1 0.0000 (2 actual)
+ det: p 0.8182 r 0.8182 f1 0.8182 (22 actual)
+ expl: p 0.0000 r 0.0000 f1 0.0000 (1 actual)
+ iobj: p 0.0000 r 0.0000 f1 0.0000 (2 actual)
+ mark: p 0.3636 r 0.3333 f1 0.3478 (12 actual)
+ nmod: p 0.0000 r 0.0000 f1 0.0000 (15 actual)
+ nmod:poss: p 1.0000 r 0.5263 f1 0.6897 (19 actual)
+ nsubj: p 0.2326 r 0.5882 f1 0.3333 (17 actual)
+ nsubj:pass: p 0.0000 r 0.0000 f1 0.0000 (5 actual)
+ obj: p 0.6818 r 0.6818 f1 0.6818 (22 actual)
+ obl: p 0.2807 r 0.3902 f1 0.3265 (41 actual)
+ obl:agent: p 0.0000 r 0.0000 f1 0.0000 (1 actual)
+ orphan: p 0.0000 r 0.0000 f1 0.0000 (1 actual)
+ parataxis: p 0.0000 r 0.0000 f1 0.0000 (3 actual)
+ punct: p 0.4231 r 0.4231 f1 0.4231 (52 actual)
+ root: p 0.4444 r 0.4444 f1 0.4444 (9 actual)
+ xcomp: p 0.0000 r 0.0000 f1 0.0000 (8 actual)
+ 2025-12-04 00:00:04 INFO: LAS MLAS BLEX
+ 2025-12-04 00:00:04 INFO: 49.01 35.07 39.67
+ 2025-12-04 00:00:04 INFO: Parser score:
+ 2025-12-04 00:00:04 INFO: sv_diachronic 49.01
+ 2025-12-04 00:00:04 INFO: Finished running dev set on
+ UD_Swedish-diachronic
+ UAS LAS CLAS MLAS BLEX
+ 60.40 49.01 39.67 35.07 39.67
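A note on reading these logs: the `dev_score` reported at each evaluation is simply the dev LAS divided by 100 (e.g. LAS 49.01 at step 3000 is logged as `dev_score = 0.4901`). A minimal sketch for pulling these values out of a log file; the regex and variable names here are ours for illustration, not part of Stanza:

```python
import re

# One of the evaluation lines from the log above.
line = "2025-12-03 23:58:32 INFO: step 3000: train_loss = 4.472208, dev_score = 0.4901"

# dev_score is the dev LAS / 100, so multiplying back recovers the LAS.
m = re.search(r"step (\d+): train_loss = ([\d.]+), dev_score = ([\d.]+)", line)
step = int(m.group(1))
train_loss = float(m.group(2))
dev_score = float(m.group(3))
print(step, dev_score * 100)  # recovers the LAS reported for that step
```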
+ 2025-12-04 00:00:04 INFO: Running test depparse for UD_Swedish-diachronic with args ['--wordvec_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain', '--eval_file', '/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.test.in.conllu', '--lang', 'sv', '--shorthand', 'sv_diachronic', '--mode', 'predict', '--charlm', '--charlm_shorthand', 'sv_conll17', '--charlm_forward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt', '--charlm_backward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt', '--wordvec_pretrain_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/diachronic.pt', '--batch_size', '32', '--dropout', '0.33']
+ 2025-12-04 00:00:04 INFO: Running parser in predict mode
+ 2025-12-04 00:00:04 INFO: Loading model from: saved_models/depparse/sv_diachronic_charlm_parser.pt
+ 2025-12-04 00:00:07 DEBUG: Loaded pretrain from /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/diachronic.pt
+ 2025-12-04 00:00:07 DEBUG: Depparse model loading charmodels: /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt and /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-04 00:00:07 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt
+ 2025-12-04 00:00:07 DEBUG: Loading charlm from /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt
+ 2025-12-04 00:00:08 DEBUG: Building Adam with lr=0.003000, betas=(0.9, 0.95), eps=0.000001
+ 2025-12-04 00:00:08 INFO: Loading data with batch size 32...
+ 2025-12-04 00:00:08 DEBUG: 93 batches created.
+ 2025-12-04 00:00:12 INFO: F1 scores for each dependency:
+ Note that unlabeled attachment errors hurt the labeled attachment scores
+ acl: p 0.0000 r 0.0000 f1 0.0000 (32 actual)
+ acl:cleft: p 0.0000 r 0.0000 f1 0.0000 (2 actual)
+ acl:relcl: p 0.0625 r 0.0133 f1 0.0220 (75 actual)
+ advcl: p 0.1892 r 0.1167 f1 0.1443 (60 actual)
+ advcl:relcl: p 0.0000 r 0.0000 f1 0.0000 (2 actual)
+ advmod: p 0.4143 r 0.5410 f1 0.4693 (268 actual)
+ amod: p 0.6667 r 0.6174 f1 0.6411 (230 actual)
+ appos: p 0.0000 r 0.0000 f1 0.0000 (13 actual)
+ aux: p 0.7241 r 0.7500 f1 0.7368 (84 actual)
+ aux:pass: p 0.0000 r 0.0000 f1 0.0000 (2 actual)
+ case: p 0.8092 r 0.7051 f1 0.7536 (373 actual)
+ cc: p 0.6475 r 0.5097 f1 0.5704 (155 actual)
+ ccomp: p 0.1667 r 0.0286 f1 0.0488 (35 actual)
+ compound:prt: p 0.5000 r 0.2857 f1 0.3636 (21 actual)
+ conj: p 0.0997 r 0.1835 f1 0.1292 (158 actual)
+ cop: p 0.7200 r 0.3913 f1 0.5070 (46 actual)
+ csubj: p 0.0000 r 0.0000 f1 0.0000 (4 actual)
+ dep: p 0.0000 r 0.0000 f1 0.0000 (1 actual)
+ det: p 0.7767 r 0.7692 f1 0.7729 (208 actual)
+ discourse: p 0.0000 r 0.0000 f1 0.0000 (7 actual)
+ dislocated: p 0.0000 r 0.0000 f1 0.0000 (1 actual)
+ expl: p 0.0000 r 0.0000 f1 0.0000 (11 actual)
+ expl:pv: p 0.0000 r 0.0000 f1 0.0000 (1 actual)
+ fixed: p 0.0000 r 0.0000 f1 0.0000 (8 actual)
+ flat: p 0.0000 r 0.0000 f1 0.0000 (4 actual)
+ flat:name: p 0.0000 r 0.0000 f1 0.0000 (12 actual)
+ goeswith: p 0.0000 r 0.0000 f1 0.0000 (2 actual)
+ iobj: p 0.0000 r 0.0000 f1 0.0000 (14 actual)
+ mark: p 0.5882 r 0.5882 f1 0.5882 (153 actual)
+ nmod: p 0.0000 r 0.0000 f1 0.0000 (102 actual)
+ nmod:poss: p 0.7500 r 0.4014 f1 0.5229 (142 actual)
+ nsubj: p 0.3670 r 0.5964 f1 0.4544 (280 actual)
+ nsubj:pass: p 0.0000 r 0.0000 f1 0.0000 (25 actual)
+ nummod: p 0.0000 r 0.0000 f1 0.0000 (10 actual)
+ obj: p 0.4266 r 0.5082 f1 0.4638 (183 actual)
+ obl: p 0.2955 r 0.5252 f1 0.3782 (278 actual)
+ obl:agent: p 0.0000 r 0.0000 f1 0.0000 (4 actual)
+ orphan: p 0.0000 r 0.0000 f1 0.0000 (1 actual)
+ parataxis: p 0.0000 r 0.0000 f1 0.0000 (18 actual)
+ punct: p 0.2948 r 0.2941 f1 0.2945 (425 actual)
+ reparandum: p 0.0000 r 0.0000 f1 0.0000 (1 actual)
+ root: p 0.4747 r 0.4747 f1 0.4747 (99 actual)
+ vocative: p 0.0000 r 0.0000 f1 0.0000 (5 actual)
+ xcomp: p 0.0000 r 0.0000 f1 0.0000 (75 actual)
+ 2025-12-04 00:00:12 INFO: LAS MLAS BLEX
729
+ 2025-12-04 00:00:12 INFO: 45.54 34.84 38.19
730
+ 2025-12-04 00:00:12 INFO: Parser score:
731
+ 2025-12-04 00:00:12 INFO: sv_diachronic 45.54
732
+ 2025-12-04 00:00:12 INFO: Finished running test set on
733
+ UD_Swedish-diachronic
734
+ UAS LAS CLAS MLAS BLEX
735
+ 58.46 45.54 38.19 34.84 38.19
736
+ DONE.
737
+ Full log saved to: logs/log_diachronic.pt_sv_is_20251203_234442.txt
738
+ Symlink updated: logs/latest.txt → log_diachronic.pt_sv_is_20251203_234442.txt
logs/log_sv_diachron_20251212_145741.txt ADDED
@@ -0,0 +1,169 @@
+ === LOGFILE: logs/log_sv_diachron_20251212_145741.txt ===
+ Language codes: diachron
+ Using pretrained model: sv
+
+ Running: python prepare-train-val-test.py diachron
+ Including DigPhil MACHINE in TRAIN (minus gold)…
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec330-GyllenborgC_SwenskaSpratthoken.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec254-CederborghF_BerattelseOmJohnHall.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec277-EnbomPU_MedborgeligtSkalde.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec268-DulciU_VitterhetsNojen3.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec1063-spf220.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec397-AngeredStrandbergH_UnderSodernsSol.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec324-GranbergPA_Enslighetsalskaren.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec252-BremerF_Teckningar1.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec988-spf145.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec987-spf144.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec631-HasselskogN_HallaHallaGronkoping.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-letter141673-Stalhammar.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec1033-spf190.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec25-Runius.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec486-SchwartzMS_BellmansSkor.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec452-NyblomH_FantasierFyra.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec613-EngstromA_StrindbergOchJag.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec208-Anonym_DetGrasligaMordet.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec639-HeidenstamV_Proletarfilosofiens.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec1102-spf259.conllu
+ Reading GOLD: /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/alanev_raw_files/diachron-validated/svediakorp-sec991-spf148.conllu
+ Cleaning TRAIN...
+ [REMOVED] sent_id=6 ERRORS=['Line 24: Invalid token ID or head', 'Line 25: Invalid token ID or head', 'Line 29: Invalid token ID or head', 'Token 30 has invalid head 24']
+ [REMOVED] sent_id=7_8 ERRORS=['Multiple roots found: [5, 10]']
+ [REMOVED] sent_id=30_31 ERRORS=['Multiple roots found: [3, 18]']
+ [REMOVED] sent_id=35 ERRORS=['Line 36: Invalid token ID or head']
+ [REMOVED] sent_id=2_3 ERRORS=['Multiple roots found: [1, 5]']
+ [REMOVED] sent_id=2_3 ERRORS=['Multiple roots found: [7, 20]']
+ [REMOVED] sent_id=8_9 ERRORS=['Multiple roots found: [24, 57]']
+ [REMOVED] sent_id=12_13 ERRORS=['Multiple roots found: [11, 16]']
+ [REMOVED] sent_id=124_split2 ERRORS=['Line 4: Invalid token ID or head', 'No root found', 'Token 1 has invalid head 4', 'Token 2 has invalid head 4', 'Token 3 has invalid head 4', 'Token 6 has invalid head 4', 'Token 11 has invalid head 4', 'Token 15 has invalid head 4']
+ [REMOVED] sent_id=396 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=416 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=589 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=909 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=912 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=3_split1 ERRORS=['Multiple roots found: [4, 15, 17]']
+ [REMOVED] sent_id=3_split2 ERRORS=['Line 1: Invalid token ID or head', 'Line 8: Invalid token ID or head', 'Line 15: Invalid token ID or head', 'No root found', 'Token 2 has invalid head 1', 'Token 3 has invalid head 8', 'Token 4 has invalid head 8', 'Token 5 has invalid head 8', 'Token 7 has invalid head 8', 'Token 10 has invalid head 8', 'Token 13 has invalid head 8', 'Token 14 has invalid head 8']
+ [REMOVED] sent_id=3_4 ERRORS=['Multiple roots found: [1, 5]']
+ [REMOVED] sent_id=5_6 ERRORS=['Multiple roots found: [3, 24]']
+ [REMOVED] sent_id=11_12_13 ERRORS=['Multiple roots found: [5, 17, 25]']
+ [REMOVED] sent_id=119 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=179 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=188 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=223 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=268 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=325 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=388 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=399 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=475 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=505 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=520 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=562 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=669 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=711 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=731 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=867 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=884 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=923 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=939 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=1086 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=1179 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=1251 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=1345 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=1459 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=1656 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=1669 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=87_88 ERRORS=['Multiple roots found: [3, 6]']
+ [REMOVED] sent_id=65_split2_66_split2 ERRORS=['Line 4: Invalid token ID or head', 'Token 2 has invalid head 4', 'Token 3 has invalid head 4', 'Token 5 has invalid head 4']
+ [REMOVED] sent_id=25 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=136 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=208 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=230 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=245 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=276 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=320 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=366 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=519 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=569 ERRORS=['Token 2: Missing form']
+ [REMOVED] sent_id=50_split2 ERRORS=['Line 1: Invalid token ID or head', 'Line 6: Invalid token ID or head', 'No root found', 'Token 2 has invalid head 1']
+ [REMOVED] sent_id=53_54 ERRORS=['Multiple roots found: [27, 91]']
+ [REMOVED] sent_id=55_56_57 ERRORS=['Multiple roots found: [2, 4, 13]']
+ [REMOVED] sent_id=17_split1 ERRORS=['Multiple roots found: [2, 14, 17]']
+ [REMOVED] sent_id=17_split2 ERRORS=['Line 8: Invalid token ID or head', 'Line 25: Invalid token ID or head', 'Line 38: Invalid token ID or head', 'No root found', 'Token 3 has invalid head 8', 'Token 7 has invalid head 8', 'Token 9 has invalid head 8', 'Token 10 has invalid head 8', 'Token 17 has invalid head 8', 'Token 22 has invalid head 25', 'Token 23 has invalid head 25', 'Token 24 has invalid head 25', 'Token 26 has invalid head 25', 'Token 27 has invalid head 25', 'Token 28 has invalid head 25']
+ [REMOVED] sent_id=19_split1 ERRORS=['Multiple roots found: [3, 31]']
+ Cleaning DEV...
+ [REMOVED] sent_id=33 ERRORS=['Token 15: Missing deprel']
+ Cleaning TEST...
+ Writing TRAIN → /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud/UD_Swedish-diachronic/sv_diachronic-ud-train.conllu (46432 valid sentences)
+ Writing DEV → /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud/UD_Swedish-diachronic/sv_diachronic-ud-dev.conllu (9 valid sentences)
+ Writing TEST → /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud/UD_Swedish-diachronic/sv_diachronic-ud-test.conllu (99 valid sentences)
+ Done.
+ Sourcing scripts/config_alvis.sh
+ Running stanza dataset preparation…
+ 2025-12-12 14:57:51 INFO: Datasets program called with:
+ /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/stanza/utils/datasets/prepare_depparse_treebank.py UD_Swedish-diachronic --wordvec_pretrain_file /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/sv
+ 2025-12-12 14:57:51 DEBUG: Downloading resource file from https://raw.githubusercontent.com/stanfordnlp/stanza-resources/main/resources_1.11.0.json
+
+ 2025-12-12 14:57:51 INFO: Downloaded file to /cephyr/users/cleland/Alvis/stanza_resources/resources.json
+ 2025-12-12 14:57:51 DEBUG: Processing parameter "processors"...
+ 2025-12-12 14:57:51 WARNING: Can not find pos: diachronic from official model list. Ignoring it.
+ 2025-12-12 14:57:51 INFO: Downloading these customized packages for language: sv (Swedish)...
+ =======================
+ | Processor | Package |
+ -----------------------
+ =======================
+
+ 2025-12-12 14:57:51 INFO: Finished downloading models and saved to /cephyr/users/cleland/Alvis/stanza_resources
+ 2025-12-12 14:57:51 INFO: Using tagger model in /cephyr/users/cleland/Alvis/stanza_resources/sv/pos/diachronic.pt for sv_diachronic
+ 2025-12-12 14:57:51 INFO: Using model /cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt for forward charlm
+ 2025-12-12 14:57:51 INFO: Using model /cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt for backward charlm
+ Augmented 107 quotes: Counter({'""': 16, '„”': 15, '″″': 14, '””': 11, '»«': 10, '《》': 10, '„“': 9, '«»': 9, '「」': 8, '““': 5})
+ 2025-12-12 14:57:53 INFO: Running tagger to retag /local/tmp.5491708/tmpd4ypnj14/sv_diachronic.train.gold.conllu to /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.train.in.conllu
+ Args: ['--wordvec_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain', '--lang', 'sv', '--shorthand', 'sv_diachronic', '--mode', 'predict', '--save_dir', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pos', '--save_name', 'diachronic.pt', '--wordvec_pretrain_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/sv', '--charlm', '--charlm_shorthand', 'sv_conll17', '--charlm_forward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/forward_charlm/conll17.pt', '--charlm_backward_file', '/cephyr/users/cleland/Alvis/stanza_resources/sv/backward_charlm/conll17.pt', '--eval_file', '/local/tmp.5491708/tmpd4ypnj14/sv_diachronic.train.gold.conllu', '--output_file', '/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/data/depparse/sv_diachronic.train.in.conllu']
+ 2025-12-12 14:57:53 INFO: Running tagger in predict mode
+ 2025-12-12 14:57:53 INFO: Loading model from: /cephyr/users/cleland/Alvis/stanza_resources/sv/pos/diachronic.pt
+ 2025-12-12 14:57:53 INFO: Pretrained filename /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/sv specified, but file does not exist. Attempting to load from text file
+ 2025-12-12 14:57:53 INFO: Reading pretrained vectors from /cephyr/users/cleland/Alvis/stanza_resources/sv/pretrain/fasttext/Swedish/sv.vectors.xz ...
+ Traceback (most recent call last):
+ File "<frozen runpy>", line 198, in _run_module_as_main
+ File "<frozen runpy>", line 88, in _run_code
+ File "/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/stanza/utils/datasets/prepare_depparse_treebank.py", line 143, in <module>
+ main()
+ File "/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/stanza/utils/datasets/prepare_depparse_treebank.py", line 139, in main
+ common.main(process_treebank, common.ModelType.DEPPARSE, add_specific_args)
+ File "/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/stanza/utils/datasets/common.py", line 297, in main
+ process_treebank(treebank, model_type, paths, args)
+ File "/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/stanza/utils/datasets/prepare_depparse_treebank.py", line 132, in process_treebank
+ prepare_tokenizer_treebank.copy_conllu_treebank(treebank, model_type, paths, paths["DEPPARSE_DATA_DIR"], retag_dataset)
+ File "/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/stanza/utils/datasets/prepare_tokenizer_treebank.py", line 83, in copy_conllu_treebank
+ postprocess(tokenizer_dir, "train.gold", dest_dir, "train.in", short_name)
+ File "/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/stanza/utils/datasets/prepare_depparse_treebank.py", line 130, in retag_dataset
+ tagger.main(tagger_args)
+ File "/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/stanza/models/tagger.py", line 149, in main
+ return evaluate(args)
+ ^^^^^^^^^^^^^^
+ File "/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/stanza/models/tagger.py", line 426, in evaluate
+ trainer = Trainer(pretrain=pretrain, model_file=model_file, device=args['device'], args=load_args)
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ File "/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/stanza/models/pos/trainer.py", line 34, in __init__
+ self.load(model_file, pretrain, args=args, foundation_cache=foundation_cache)
+ File "/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/stanza/models/pos/trainer.py", line 174, in load
+ emb_matrix = pretrain.emb
+ ^^^^^^^^^^^^
+ File "/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/stanza/models/common/pretrain.py", line 53, in emb
+ self.load()
+ File "/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/stanza/models/common/pretrain.py", line 88, in load
+ vocab, emb = self.read_pretrain()
+ ^^^^^^^^^^^^^^^^^^^^
+ File "/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/stanza/models/common/pretrain.py", line 132, in read_pretrain
+ words, emb, failed = self.read_from_file(self._vec_filename, self._max_vocab)
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ File "/mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/stanza/models/common/pretrain.py", line 227, in read_from_file
+ emb[i+len(VOCAB_PREFIX)] = torch.tensor([float(x) for x in line[-cols:]], dtype=torch.float32)
+ ^^^^^^^^
+ ValueError: could not convert string to float: 'FastText'
+ Preparing data for UD_Swedish-diachronic: sv_diachronic, sv
+ Reading from /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud/UD_Swedish-diachronic/sv_diachronic-ud-train.conllu and writing to /local/tmp.5491708/tmpd4ypnj14/sv_diachronic.train.gold.conllu
+ Swapped 'w1, w2' for 'w1 ,w2' 0 times
+ Added 454 new sentences with asdf, zzzz -> asdf,zzzz
+ Added 49 sentences with parens replaced with square brackets
+ Reading from /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud/UD_Swedish-diachronic/sv_diachronic-ud-dev.conllu and writing to /local/tmp.5491708/tmpd4ypnj14/sv_diachronic.dev.gold.conllu
+ Reading from /mimer/NOBACKUP/groups/dionysus/cleland/stanza-digphil/ud/UD_Swedish-diachronic/sv_diachronic-ud-test.conllu and writing to /local/tmp.5491708/tmpd4ypnj14/sv_diachronic.test.gold.conllu
saved_models/depparse/conll17_baseline_sv_only/sv_diachronic_charlm_parser.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:50aee8f0eab8b4b0e28958c0e428f1aad29cf118e4320332862b8701fc6dd6e3
+ size 141291582
saved_models/depparse/conll17_baseline_sv_only/sv_diachronic_charlm_parser_checkpoint.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a462d11c3d049e0f3caedb15f83bd8f8bddfe5b016df87f0767162681ee40ebd
+ size 423367390
saved_models/depparse/conll17_is-modern/sv_diachronic_charlm_parser.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e02dffc9ae0d419373d1ba597f4d7e0459d70d6909e18211d40c33472e5a6d46
+ size 148251198
saved_models/depparse/conll17_is-modern/sv_diachronic_charlm_parser_checkpoint.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:63dfb50f487373f5f48372ed12b5ffa4cacc144aa99f04fe7a4f90c4c9b626d1
+ size 443375018
saved_models/depparse/conll17_sv_diachron/sv_diachronic_charlm_parser.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a67914c27dcc9cdc33034cdbebed9eec4c55b10ab2aeffc5f5d6053a96c5107a
+ size 145283635
saved_models/depparse/conll17_sv_diachron/sv_diachronic_charlm_parser_checkpoint.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:07052dcb1c03f0e5233a14d4c02c1bceae06350191c28858c6dbbcf3da7967c3
+ size 434734043
saved_models/depparse/final-conll17-sv_diachron_test/sv_diachronic_charlm_parser.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:adc6ab773899ef484ce15bf4dfb836ef846bf5e7fdbda03bf93618edb28e69a1
+ size 145346905
saved_models/depparse/final-conll17-sv_diachron_test/sv_diachronic_charlm_parser_checkpoint.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:67eecc2dac065a01b0eaf309e3a37d6929ff7cbee3398098bce7d970504bc5d9
+ size 435019321
saved_models/depparse/kubhist2-sv-is-NO-DIACHRON/sv_diachronic_charlm_parser.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a942c33cc80ad182175a82f0e6775374aedb7787025f9e508e0ebfe5f53c18a2
+ size 148724780
saved_models/depparse/kubhist2-sv-is-NO-DIACHRON/sv_diachronic_charlm_parser_checkpoint.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c7fb7d46a821ec688d77d2a2be2b9903126ac42d05b4586c22cce9b386bffc6c
+ size 444703405
ud-treebanks-is/is_icepahc-ud-dev.conllu ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1ea62344c94791c91f974bc243fab0b08a2c20febafa0b2a5146c9ad342ef68d
+ size 11860801
ud-treebanks-is/is_icepahc-ud-test.conllu ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2b30b3d504f344ffc96b3e9433020b38b8305c301312a3aaf1710a5d88186b1f
+ size 11900516
ud-treebanks-is/is_icepahc-ud-train.conllu ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8fd17bd56309bdd2d3ecb3945fb59fe1961ddc4826baebef2c24f1b81212911f
+ size 61400737
ud-treebanks-is/{is_modern-ud-dev.conllu → modern/is_modern-ud-dev.conllu} RENAMED
File without changes
ud-treebanks-is/{is_modern-ud-test.conllu → modern/is_modern-ud-test.conllu} RENAMED
File without changes
ud-treebanks-is/{is_modern-ud-train.conllu → modern/is_modern-ud-train.conllu} RENAMED
File without changes
ud-treebanks-sv/{ucxn_ud_swedish-talbanken.conllu → svediakorp-letter141673-Stalhammar.conllu} RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:297dfac82a181d08f67541dad9ec4f9b504ce510564f5224b60779969fd3d5d4
- size 8550848
+ oid sha256:37485776d3428e1ffe45c7b585bfca27e46295a87e45e343f85cdd27d4f220cc
+ size 12608
ud-treebanks-sv/svediakorp-sec1033-spf190.conllu ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:62ddd86f5e471385843ba1a802999a1279a2e5f09f36170c9073732123cafdfd
+ size 6012
ud-treebanks-sv/svediakorp-sec1063-spf220.conllu ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:372978b0b249a0649cdae6167f2c31e87b17e1e127225ab27288a7b91ce0acfb
+ size 11392
ud-treebanks-sv/svediakorp-sec1102-spf259.conllu ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:100193d6416e1f671166ddcb8337f9e9f28f8345956cf23bfec6123b6918bc7f
+ size 13395
ud-treebanks-sv/svediakorp-sec208-Anonym_DetGrasligaMordet.conllu ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:44aa2f06c44f2813c39855b9b7fdc0c0f4977864e4c5559acfdb89bf98778551
+ size 7519
ud-treebanks-sv/svediakorp-sec25-Runius.conllu ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:abc152cf406ad2e5829305b93fe02e151986317063c2fc85be27bca9a29fbe29
+ size 3770
ud-treebanks-sv/svediakorp-sec252-BremerF_Teckningar1.conllu ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6bbd84e9064f8aa0dcdc16e2a806ca70b1a59a42ed4fe82f0fa35ee1ae3099a6
+ size 4034
ud-treebanks-sv/svediakorp-sec254-CederborghF_BerattelseOmJohnHall.conllu ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:169be155e89579384f8c6f5c5fca67a7037d2b5e70310cc19fe68225746701a5
+ size 18053
ud-treebanks-sv/svediakorp-sec268-DulciU_VitterhetsNojen3.conllu ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cab4a8846ad1167ccb4639ce4c0dde4cb87e32149665f014eccf16e868b64f99
+ size 47496
ud-treebanks-sv/svediakorp-sec277-EnbomPU_MedborgeligtSkalde.conllu ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a9df76225fd4dd314acb0ff2ceb1443f9b409f1605c68f5659c426900308462e
+ size 17712
ud-treebanks-sv/svediakorp-sec324-GranbergPA_Enslighetsalskaren.conllu ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a2243cd9c02e3dceb1c1f45b64774377b55350d2963b10f67e308bc87abb9444
+ size 10258
ud-treebanks-sv/svediakorp-sec330-GyllenborgC_SwenskaSpratthoken.conllu ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a4e8b6d092d9fecdd32589e7acc2e6cf51b52a57eb20bf84d8d56a8091a7b5dc
+ size 24520
ud-treebanks-sv/svediakorp-sec397-AngeredStrandbergH_UnderSodernsSol.conllu ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:12a09967ac7cd25999ffb73bf1e5c626b767cc3cbb891d0d9ec734b2c9069b30
+ size 14440
ud-treebanks-sv/svediakorp-sec452-NyblomH_FantasierFyra.conllu ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bb23f174ee6bca8dbd6752ff59e9819e51ed339c61d1d3cbd34e05d90b9ebb28
+ size 13639
ud-treebanks-sv/svediakorp-sec486-SchwartzMS_BellmansSkor.conllu ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0b76dafbea7a70ea546d2602f240890288f5b4dbd80593082a775d0adc925799
+ size 8752
ud-treebanks-sv/svediakorp-sec613-EngstromA_StrindbergOchJag.conllu ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c43a5699dd2297d3947a3e1057b7ce75d69cb17231a294548b5f930ab4d46281
+ size 10375
ud-treebanks-sv/svediakorp-sec631-HasselskogN_HallaHallaGronkoping.conllu ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:26a4282861e4d9c299bbdb9fcb82dfe44f6609f3f557d681c77d30a02232718d
+ size 27150
ud-treebanks-sv/svediakorp-sec639-HeidenstamV_Proletarfilosofiens.conllu ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8ef841c33bce2dda48378623f2a6ecae17550681c14b6b1aeba0743e3c95bee5
+ size 11325
ud-treebanks-sv/svediakorp-sec987-spf144.conllu ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1cbc4a4f91709e409d1d3133f7fb4f29b1fa14e9ae6bd87b175e0bc82b73d31f
+ size 6144
ud-treebanks-sv/svediakorp-sec988-spf145.conllu ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:58920084e2d42b12262668312112e598dda15a2415985508016e158b6ef2e566
+ size 12934
ud-treebanks-sv/svediakorp-sec991-spf148.conllu ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b6b24574ee863263d9e34a187067792a3d3a9016f565e6b7934bbae4e23d35c5
+ size 19692