paper_id: string
claim_id: string
claim: string
label: string
caption: string
evi_type: string
evi_path: string
context: string
domain: string
use_context: string
operation: string
paper_path: string
detail_others: string
license_name: string
license_url: string
claim_id_pair: string
evi_path_original: string
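The 17-field schema above can be mirrored as a small Python dataclass. This is only an illustrative sketch, not a representation the dump itself prescribes; the example instance is populated from the first record below (its `detail_others` field is empty in the dump).

```python
from dataclasses import dataclass

@dataclass
class ClaimRecord:
    paper_id: str           # ACL Anthology ID, e.g. "2023.emnlp-main.39"
    claim_id: str           # e.g. "val_tab_0321"
    claim: str              # claim sentence taken from the paper
    label: str              # "Supported" or "Refuted"
    caption: str            # caption of the evidence table
    evi_type: str           # e.g. "table"
    evi_path: str           # rendered evidence image (PNG)
    context: str            # surrounding text from the paper; may be empty
    domain: str             # e.g. "nlp"
    use_context: str        # "yes", "no", or "other sources"
    operation: str          # table edit, e.g. "Swap rows or columns"
    paper_path: str         # JSON file with the full paper
    detail_others: str      # empty in this dump
    license_name: str       # e.g. "CC BY 4.0"
    license_url: str
    claim_id_pair: str      # shared by a Supported/Refuted claim pair
    evi_path_original: str  # LaTeX source of the evidence table

# Example populated verbatim from the first record of the dump:
record = ClaimRecord(
    paper_id="2023.emnlp-main.39",
    claim_id="val_tab_0321",
    claim=("Table 4 shows that the evaluation on MNLI and QQP is more "
           "robust to different settings, and the variance is more "
           "significant on CoLA."),
    label="Supported",
    caption="Table 4: Ablation studies of different calibration sizes.",
    evi_type="table",
    evi_path="tables_png/dev/val_tab_0321.png",
    context=(r"In this section, we first compare the influence of different "
             r"calibration sizes on FPQ . We vary the calibration size in "
             r"\{32,64,128,256\} and test on MNLI, QQP, and CoLA."),
    domain="nlp",
    use_context="no",
    operation="Swap rows or columns",
    paper_path="papers/dev/nlp_2023.emnlp-main.39.json",
    detail_others="",
    license_name="CC BY 4.0",
    license_url="https://creativecommons.org/licenses/by/4.0/",
    claim_id_pair="0182",
    evi_path_original="tables/dev/val_tab_0321.tex",
)
```

The `claim_id_pair` value is what links a Supported claim to its Refuted counterpart built from the same table.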
paper_id: 2023.emnlp-main.39
claim_id: val_tab_0321
claim: Table 4 shows that the evaluation on MNLI and QQP is more robust to different settings, and the variance is more significant on CoLA.
label: Supported
caption: Table 4: Ablation studies of different calibration sizes.
evi_type: table
evi_path: tables_png/dev/val_tab_0321.png
context: In this section, we first compare the influence of different calibration sizes on FPQ . We vary the calibration size in \{32,64,128,256\} and test on MNLI, QQP, and CoLA.
domain: nlp
use_context: no
operation: Swap rows or columns
paper_path: papers/dev/nlp_2023.emnlp-main.39.json
license_name: CC BY 4.0
license_url: https://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0182
evi_path_original: tables/dev/val_tab_0321.tex

paper_id: 2023.emnlp-main.39
claim_id: val_tab_0323
claim: However, we also find that it remains robust and maintains competitive accuracy even with limited access to calibration data, such as when using as few as 32 data points.
label: Supported
caption: Table 4: Ablation studies of different calibration sizes.
evi_type: table
evi_path: tables_png/dev/val_tab_0323.png
context: In this section, we first compare the influence of different calibration sizes on FPQ . We vary the calibration size in \{32,64,128,256\} and test on MNLI, QQP, and CoLA. Table 4 shows that the evaluation on MNLI and QQP is more robust to different settings, and the variance is more significant on CoLA. We observe that...
domain: nlp
use_context: yes
operation: Change the cell values
paper_path: papers/dev/nlp_2023.emnlp-main.39.json
license_name: CC BY 4.0
license_url: https://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0183
evi_path_original: tables/dev/val_tab_0323.tex

paper_id: 2023.emnlp-main.39
claim_id: val_tab_0324
claim: However, we also find that it remains robust and maintains competitive accuracy even with limited access to calibration data, such as when using as few as 32 data points.
label: Refuted
caption: Table 4: Ablation studies of different calibration sizes.
evi_type: table
evi_path: tables_png/dev/val_tab_0324.png
context: In this section, we first compare the influence of different calibration sizes on FPQ . We vary the calibration size in \{32,64,128,256\} and test on MNLI, QQP, and CoLA. Table 4 shows that the evaluation on MNLI and QQP is more robust to different settings, and the variance is more significant on CoLA. We observe that...
domain: nlp
use_context: yes
operation: Change the cell values
paper_path: papers/dev/nlp_2023.emnlp-main.39.json
license_name: CC BY 4.0
license_url: https://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0183
evi_path_original: tables/dev/val_tab_0324.tex
paper_id: 2023.ijcnlp-main.9
claim_id: val_tab_0327
claim: We can see in the Table 6 that in all of the cases we get strong agreement ( >0.70 ) among the annotators for the style strength and appropriateness evaluation.
label: Supported
caption: Table 6: Inter-annotator agreement scores for the three human evaluation tasks. Standard deviations over all data points are shown in brackets for the style strength and appropriateness evaluation tasks. The detailed procedure for calculating the agreement scores can be found in Appendix D .
evi_type: table
evi_path: tables_png/dev/val_tab_0327.png
context: The inter-annotator agreement in all of the tasks are shown in Table 6 . Note that, for calculating agreement in the semantic correctness evaluation task, all of the data points are aggregated to measure the agreement score as they represent categorical evaluation measures. On the other hand, that is not possible in ca...
domain: nlp
use_context: no
operation: Change the cell values
paper_path: papers/dev/nlp_2023.ijcnlp-main.9.json
license_name: CC BY 4.0
license_url: https://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0184
evi_path_original: tables/dev/val_tab_0327.tex

paper_id: 2023.ijcnlp-main.9
claim_id: val_tab_0328
claim: We can see in the Table 6 that in all of the cases we get strong agreement ( >0.70 ) among the annotators for the style strength and appropriateness evaluation.
label: Refuted
caption: Table 6: Inter-annotator agreement scores for the three human evaluation tasks. Standard deviations over all data points are shown in brackets for the style strength and appropriateness evaluation tasks. The detailed procedure for calculating the agreement scores can be found in Appendix D .
evi_type: table
evi_path: tables_png/dev/val_tab_0328.png
context: The inter-annotator agreement in all of the tasks are shown in Table 6 . Note that, for calculating agreement in the semantic correctness evaluation task, all of the data points are aggregated to measure the agreement score as they represent categorical evaluation measures. On the other hand, that is not possible in ca...
domain: nlp
use_context: no
operation: Change the cell values
paper_path: papers/dev/nlp_2023.ijcnlp-main.9.json
license_name: CC BY 4.0
license_url: https://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0184
evi_path_original: tables/dev/val_tab_0328.tex
paper_id: 2023.emnlp-main.36
claim_id: val_tab_0329
claim: As evident from the results in Table 5 , GPT-NeoX with a history length of 100 shows comparable performance to supervised learning approaches, without any fine-tuning on any TKG training dataset.
label: Supported
caption: Table 5: Performance (Hits@K) comparison between supervised models and ICL for single-step (top) and multi-step (bottom) prediction. The first group in each table consists of supervised models, whereas the second group consists of ICL models, i.e., GPT-NeoX with a history length of 100. The best model for each dataset ...
evi_type: table
evi_path: tables_png/dev/val_tab_0329.png
context: We present a comparative analysis of the top-performing ICL model against established supervised learning methodologies for TKG reasoning, which are mostly based on graph representation learning.
domain: nlp
use_context: no
operation: Change the cell values
paper_path: papers/dev/nlp_2023.emnlp-main.36.json
license_name: CC BY 4.0
license_url: https://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0185
evi_path_original: tables/dev/val_tab_0329.tex

paper_id: 2023.emnlp-main.36
claim_id: val_tab_0330
claim: The experimental results presented in Table 6 demonstrate that ICL exhibits superior performance to rule-based baselines.
label: Supported
caption: Table 6: Performance (Hits@K) with rule-based predictions. The best model for each dataset is shown in bold .
evi_type: table
evi_path: tables_png/dev/val_tab_0330.png
context: To determine the extent to which LLMs engage in pattern analysis, beyond simply relying on frequency and recency biases, we run a comparative analysis between GPT-NeoX and heuristic-rules ( i.e., frequency & recency ) on the ICEWS14 dataset, with history length set to 100. frequency identifies the target that appears m...
domain: nlp
use_context: yes
operation: Swap rows or columns
paper_path: papers/dev/nlp_2023.emnlp-main.36.json
license_name: CC BY 4.0
license_url: https://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0186
evi_path_original: tables/dev/val_tab_0330.tex

paper_id: 2023.emnlp-main.36
claim_id: val_tab_0331
claim: The experimental results presented in Table 6 demonstrate that ICL exhibits superior performance to rule-based baselines.
label: Refuted
caption: Table 6: Performance (Hits@K) with rule-based predictions. The best model for each dataset is shown in bold .
evi_type: table
evi_path: tables_png/dev/val_tab_0331.png
context: To determine the extent to which LLMs engage in pattern analysis, beyond simply relying on frequency and recency biases, we run a comparative analysis between GPT-NeoX and heuristic-rules ( i.e., frequency & recency ) on the ICEWS14 dataset, with history length set to 100. frequency identifies the target that appears m...
domain: nlp
use_context: yes
operation: Swap rows or columns
paper_path: papers/dev/nlp_2023.emnlp-main.36.json
license_name: CC BY 4.0
license_url: https://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0186
evi_path_original: tables/dev/val_tab_0331.tex
paper_id: 2023.ijcnlp-main.4
claim_id: val_tab_0332
claim: It can be seen that in both configurations, KL-divergence yields lower correlations than other distances, and on average total variation slightly outperforms Hellinger and one-best distances.
label: Supported
caption: Table 3: Comparison of Statistical Distances using MQAG-Sum without answerability.
evi_type: table
evi_path: tables_png/dev/val_tab_0332.png
context: In Table 3 , our results compare statistical distances.
domain: nlp
use_context: other sources
operation: Swap rows or columns
paper_path: papers/dev/nlp_2023.ijcnlp-main.4.json
license_name: CC BY 4.0
license_url: http://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0187
evi_path_original: tables/dev/val_tab_0332.tex

paper_id: 2023.ijcnlp-main.4
claim_id: val_tab_0333
claim: It can be seen that in both configurations, KL-divergence yields lower correlations than other distances, and on average total variation slightly outperforms Hellinger and one-best distances.
label: Refuted
caption: Table 3: Comparison of Statistical Distances using MQAG-Sum without answerability.
evi_type: table
evi_path: tables_png/dev/val_tab_0333.png
context: In Table 3 , our results compare statistical distances.
domain: nlp
use_context: other sources
operation: Swap rows or columns
paper_path: papers/dev/nlp_2023.ijcnlp-main.4.json
license_name: CC BY 4.0
license_url: http://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0187
evi_path_original: tables/dev/val_tab_0333.tex

paper_id: 2023.ijcnlp-main.4
claim_id: val_tab_0335
claim: The observation is that MQAG achieves a higher correlation than the best SpanQAG on 5 out of 6 tasks.
label: Supported
caption: Table 5: Pearson Correlation Coefficient (PCC) between the scores of summary evaluation methods and human judgements. PCCs are computed at the summary level on QAG and XSum-H, and at the system level on Podcast and SummEval. PCCs on Podcast are computed on 15 abstractive systems. Our best performing MQAG configuration ...
evi_type: table
evi_path: tables_png/dev/val_tab_0335.png
context: The baseline and MQAG results are shown in Table 5 .
domain: nlp
use_context: no
operation: Change the cell values
paper_path: papers/dev/nlp_2023.ijcnlp-main.4.json
license_name: CC BY 4.0
license_url: http://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0188
evi_path_original: tables/dev/val_tab_0335.tex

paper_id: 2023.ijcnlp-main.4
claim_id: val_tab_0336
claim: The observation is that MQAG achieves a higher correlation than the best SpanQAG on 5 out of 6 tasks.
label: Refuted
caption: Table 5: Pearson Correlation Coefficient (PCC) between the scores of summary evaluation methods and human judgements. PCCs are computed at the summary level on QAG and XSum-H, and at the system level on Podcast and SummEval. PCCs on Podcast are computed on 15 abstractive systems. Our best performing MQAG configuration ...
evi_type: table
evi_path: tables_png/dev/val_tab_0336.png
context: The baseline and MQAG results are shown in Table 5 .
domain: nlp
use_context: no
operation: Change the cell values
paper_path: papers/dev/nlp_2023.ijcnlp-main.4.json
license_name: CC BY 4.0
license_url: http://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0188
evi_path_original: tables/dev/val_tab_0336.tex
paper_id: 2023.emnlp-main.35
claim_id: val_tab_0341
claim: Results can be found in Table 3 . Unsurprisingly, more samples seems to improve calibration, though it seems on the unambiguous Natural Questions slice, sampling diversity with a small number of samples works relatively well for the cost.
label: Supported
caption: Table 3: Calibration by number of samples (N). EM is excluded, because it is solely based on the greedy answer and does not depend on the number of samples.
evi_type: table
evi_path: tables_png/dev/val_tab_0341.png
context: As sampling-based approaches require a linear increase in compute for the number of samples, we also examine how calibration scales with the number. In particular, we test three, five, and eight samples and compare that to the original results containing ten samples.
domain: nlp
use_context: no
operation: Swap rows or columns
paper_path: papers/dev/nlp_2023.emnlp-main.35.json
license_name: CC BY 4.0
license_url: https://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0189
evi_path_original: tables/dev/val_tab_0341.tex

paper_id: 2023.emnlp-main.35
claim_id: val_tab_0342
claim: Results can be found in Table 3 . Unsurprisingly, more samples seems to improve calibration, though it seems on the unambiguous Natural Questions slice, sampling diversity with a small number of samples works relatively well for the cost.
label: Refuted
caption: Table 3: Calibration by number of samples (N). EM is excluded, because it is solely based on the greedy answer and does not depend on the number of samples.
evi_type: table
evi_path: tables_png/dev/val_tab_0342.png
context: As sampling-based approaches require a linear increase in compute for the number of samples, we also examine how calibration scales with the number. In particular, we test three, five, and eight samples and compare that to the original results containing ten samples.
domain: nlp
use_context: no
operation: Swap rows or columns
paper_path: papers/dev/nlp_2023.emnlp-main.35.json
license_name: CC BY 4.0
license_url: https://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0189
evi_path_original: tables/dev/val_tab_0342.tex
paper_id: 2025.naacl-long.4
claim_id: val_tab_0343
claim: Overall, the cognitive abilities of OPT, Llama-2-chat-70B, GPT-3.5-Turbo, and GPT-4 models successively increase, and the performance of each model gradually declines with the increase of stage, consistent with humans.
label: Supported
caption: Table 3: Calibrated accuracy (%) of largest model in evaluating series. Acc and Age refer to calibrated accuracy and the age of equivalent human performance. The value of Age is calculated according to Equation 2 . Bold indicates the best performance.
evi_type: table
evi_path: tables_png/dev/val_tab_0343.png
context: As shown in Table 3 , We run the model with the largest number of parameters in each series on CogLM , and report the adult human performance for comparison.
domain: nlp
use_context: no
operation: Change the cell values
paper_path: papers/dev/nlp_2025.naacl-long.4.json
license_name: CC BY 4.0
license_url: https://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0190
evi_path_original: tables/dev/val_tab_0343.tex

paper_id: 2025.naacl-long.4
claim_id: val_tab_0344
claim: Specifically, the latest state-of-the-art model, GPT-4, has demonstrated remarkable cognitive capabilities, achieving a level comparable to that of a 20-year-old human.
label: Supported
caption: Table 3: Calibrated accuracy (%) of largest model in evaluating series. Acc and Age refer to calibrated accuracy and the age of equivalent human performance. The value of Age is calculated according to Equation 2 . Bold indicates the best performance.
evi_type: table
evi_path: tables_png/dev/val_tab_0344.png
context: As shown in Table 3 , We run the model with the largest number of parameters in each series on CogLM , and report the adult human performance for comparison. Overall, the cognitive abilities of OPT, Llama-2-chat-70B, GPT-3.5-Turbo, and GPT-4 models successively increase, and the performance of each model gradually decl...
domain: nlp
use_context: no
operation: Change the cell values
paper_path: papers/dev/nlp_2025.naacl-long.4.json
license_name: CC BY 4.0
license_url: https://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0191
evi_path_original: tables/dev/val_tab_0344.tex

paper_id: 2025.naacl-long.4
claim_id: val_tab_0345
claim: Specifically, the latest state-of-the-art model, GPT-4, has demonstrated remarkable cognitive capabilities, achieving a level comparable to that of a 20-year-old human.
label: Refuted
caption: Table 3: Calibrated accuracy (%) of largest model in evaluating series. Acc and Age refer to calibrated accuracy and the age of equivalent human performance. The value of Age is calculated according to Equation 2 . Bold indicates the best performance.
evi_type: table
evi_path: tables_png/dev/val_tab_0345.png
context: As shown in Table 3 , We run the model with the largest number of parameters in each series on CogLM , and report the adult human performance for comparison. Overall, the cognitive abilities of OPT, Llama-2-chat-70B, GPT-3.5-Turbo, and GPT-4 models successively increase, and the performance of each model gradually decl...
domain: nlp
use_context: no
operation: Change the cell values
paper_path: papers/dev/nlp_2025.naacl-long.4.json
license_name: CC BY 4.0
license_url: https://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0191
evi_path_original: tables/dev/val_tab_0345.tex

paper_id: 2025.naacl-long.4
claim_id: val_tab_0347
claim: Despite its superior performance, GPT-4’s performance on plan ability (59.4) is still barely satisfactory, far behind that of humans (95.6), which is consistent with the conclusion of Valmeekam et al. ( 2022 ) .
label: Supported
caption: Table 3: Calibrated accuracy (%) of largest model in evaluating series. Acc and Age refer to calibrated accuracy and the age of equivalent human performance. The value of Age is calculated according to Equation 2 . Bold indicates the best performance.
evi_type: table
evi_path: tables_png/dev/val_tab_0347.png
context: As shown in Table 3 , We run the model with the largest number of parameters in each series on CogLM , and report the adult human performance for comparison. Overall, the cognitive abilities of OPT, Llama-2-chat-70B, GPT-3.5-Turbo, and GPT-4 models successively increase, and the performance of each model gradually decl...
domain: nlp
use_context: no
operation: Change the cell values
paper_path: papers/dev/nlp_2025.naacl-long.4.json
license_name: CC BY 4.0
license_url: https://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0192
evi_path_original: tables/dev/val_tab_0347.tex

paper_id: 2025.naacl-long.4
claim_id: val_tab_0348
claim: Despite its superior performance, GPT-4’s performance on plan ability (59.4) is still barely satisfactory, far behind that of humans (95.6), which is consistent with the conclusion of Valmeekam et al. ( 2022 ) .
label: Refuted
caption: Table 3: Calibrated accuracy (%) of largest model in evaluating series. Acc and Age refer to calibrated accuracy and the age of equivalent human performance. The value of Age is calculated according to Equation 2 . Bold indicates the best performance.
evi_type: table
evi_path: tables_png/dev/val_tab_0348.png
context: As shown in Table 3 , We run the model with the largest number of parameters in each series on CogLM , and report the adult human performance for comparison. Overall, the cognitive abilities of OPT, Llama-2-chat-70B, GPT-3.5-Turbo, and GPT-4 models successively increase, and the performance of each model gradually decl...
domain: nlp
use_context: no
operation: Change the cell values
paper_path: papers/dev/nlp_2025.naacl-long.4.json
license_name: CC BY 4.0
license_url: https://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0192
evi_path_original: tables/dev/val_tab_0348.tex
paper_id: 2025.naacl-long.4
claim_id: val_tab_0349
claim: As shown in Table 6 , in most cognitive abilities, COC can significantly improve the performance of Llama-2-chat-7B.
label: Supported
caption: Table 6: Calibrated accuracy of Llama-2-chat-7B on CogLM with and without Chain-of-Cognition from GPT-3.5-Turbo as input.
evi_type: table
evi_path: tables_png/dev/val_tab_0349.png
context: Although there is still room for improvement, the cognitive abilities of advanced LLMs have approached levels close to that of adult humans as discussed in Section 3.2 . A natural question is, what are the potential applications for advanced LLMs’ cognitive abilities? When humans address cognitive questions, they deduc...
domain: nlp
use_context: no
operation: Change the cell values
paper_path: papers/dev/nlp_2025.naacl-long.4.json
license_name: CC BY 4.0
license_url: https://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0193
evi_path_original: tables/dev/val_tab_0349.tex

paper_id: 2025.naacl-long.4
claim_id: val_tab_0350
claim: As shown in Table 6 , in most cognitive abilities, COC can significantly improve the performance of Llama-2-chat-7B.
label: Refuted
caption: Table 6: Calibrated accuracy of Llama-2-chat-7B on CogLM with and without Chain-of-Cognition from GPT-3.5-Turbo as input.
evi_type: table
evi_path: tables_png/dev/val_tab_0350.png
context: Although there is still room for improvement, the cognitive abilities of advanced LLMs have approached levels close to that of adult humans as discussed in Section 3.2 . A natural question is, what are the potential applications for advanced LLMs’ cognitive abilities? When humans address cognitive questions, they deduc...
domain: nlp
use_context: no
operation: Change the cell values
paper_path: papers/dev/nlp_2025.naacl-long.4.json
license_name: CC BY 4.0
license_url: https://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0193
evi_path_original: tables/dev/val_tab_0350.tex
paper_id: 2025.naacl-long.9
claim_id: val_tab_0351
claim: Notably, ALTER demonstrates the best performance in single-round reasoning among all other methods that utilize result ensemble techniques in the LLM era.
label: Supported
caption: Table 1: Results of different methods on WikiTQ and TabFact. 1 1 1 For the Dater method, we report the results of using the LLM-based method as backbone (We use underline to denote the second-best performance, bold to denote the best performance for each region: Pre-LLM era, LLM era with result ensemble and without ens...
evi_type: table
evi_path: tables_png/dev/val_tab_0351.png
context: We present the results on the WikiTQ and TabFact datasets. The experimental outcomes are summarized in Table 1 . From the results, we observe that our ALTER method achieves comparatively outstanding outcomes. Specifically, on the WikiTQ dataset, while the Mix SC method do marginally outperforms our results by aggregati...
domain: nlp
use_context: no
operation: Change the cell values
paper_path: papers/dev/nlp_2025.naacl-long.9.json
license_name: CC BY 4.0
license_url: https://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0194
evi_path_original: tables/dev/val_tab_0351.tex

paper_id: 2025.naacl-long.9
claim_id: val_tab_0353
claim: Furthermore, on the TabFact datasets, both augmentation methods have a much larger impact on hard questions than on simple questions.
label: Supported
caption: Table 2: Ablation results of query augmentor on the test sets of WikiTQ and TabFact.
evi_type: table
evi_path: tables_png/dev/val_tab_0353.png
context: To analyze the impact of two query augmentation methods in the query augmentor. We conducted experiments on the WikiTQ and TabFact datasets by discarding the step-back augmentation module (denoted as w/o step-back) and the sub-query augmentation module (denoted as w/o sub-query). For each dataset, we further categorize...
domain: nlp
use_context: no
operation: Change the cell values
paper_path: papers/dev/nlp_2025.naacl-long.9.json
license_name: CC BY 4.0
license_url: https://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0195
evi_path_original: tables/dev/val_tab_0353.tex

paper_id: 2025.naacl-long.9
claim_id: val_tab_0354
claim: Furthermore, on the TabFact datasets, both augmentation methods have a much larger impact on hard questions than on simple questions.
label: Refuted
caption: Table 2: Ablation results of query augmentor on the test sets of WikiTQ and TabFact.
evi_type: table
evi_path: tables_png/dev/val_tab_0354.png
context: To analyze the impact of two query augmentation methods in the query augmentor. We conducted experiments on the WikiTQ and TabFact datasets by discarding the step-back augmentation module (denoted as w/o step-back) and the sub-query augmentation module (denoted as w/o sub-query). For each dataset, we further categorize...
domain: nlp
use_context: no
operation: Change the cell values
paper_path: papers/dev/nlp_2025.naacl-long.9.json
license_name: CC BY 4.0
license_url: https://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0195
evi_path_original: tables/dev/val_tab_0354.tex

paper_id: 2025.naacl-long.9
claim_id: val_tab_0355
claim: The performance improvement is particularly noteworthy when dealing with large tables.
label: Supported
caption: Table 4: Comparison of methods in the LLM era with tables divided by token count on WikiTQ. ( underline denotes the second-best performance; bold denotes the best performance)
evi_type: table
evi_path: tables_png/dev/val_tab_0355.png
context: Figure 3 shows the comparison results of ALTER and methods following the pre-LLM era, including CABINET and OMNITAB, partitioning tables in the WikiTQ dataset by the number of cells. In Table 4 , we present the results based on different table sizes divided by the token count in the WikiTQ dataset, comparing our method...
domain: nlp
use_context: yes
operation: Change the cell values
paper_path: papers/dev/nlp_2025.naacl-long.9.json
license_name: CC BY 4.0
license_url: https://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0196
evi_path_original: tables/dev/val_tab_0355.tex

paper_id: 2025.naacl-long.9
claim_id: val_tab_0356
claim: The performance improvement is particularly noteworthy when dealing with large tables.
label: Refuted
caption: Table 4: Comparison of methods in the LLM era with tables divided by token count on WikiTQ. ( underline denotes the second-best performance; bold denotes the best performance)
evi_type: table
evi_path: tables_png/dev/val_tab_0356.png
context: Figure 3 shows the comparison results of ALTER and methods following the pre-LLM era, including CABINET and OMNITAB, partitioning tables in the WikiTQ dataset by the number of cells. In Table 4 , we present the results based on different table sizes divided by the token count in the WikiTQ dataset, comparing our method...
domain: nlp
use_context: yes
operation: Change the cell values
paper_path: papers/dev/nlp_2025.naacl-long.9.json
license_name: CC BY 4.0
license_url: https://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0196
evi_path_original: tables/dev/val_tab_0356.tex
paper_id: 2025.finnlp-1.9
claim_id: val_tab_0357
claim: We report labeled F1-score averaged on 100 fine-tuning runs in Table 2 . Precision and recall are reported in Appendix B . The Base model (using the full 12 layers) produces similar results on DOCILE no matter if pre-training on IIT-CDIP or DOCILE .
label: Supported
caption: Table 2: F1 scores for named-entity recognition using different pre-training and fine-tuning datasets. Results are averaged on 100 runs with different seeds.
evi_type: table
evi_path: tables_png/dev/val_tab_0357.png
domain: nlp
use_context: no
operation: Change the cell values
paper_path: papers/dev/nlp_2025.finnlp-1.9.json
license_name: CC BY 4.0
license_url: https://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0197
evi_path_original: tables/dev/val_tab_0357.tex

paper_id: 2025.finnlp-1.9
claim_id: val_tab_0358
claim: We report labeled F1-score averaged on 100 fine-tuning runs in Table 2 . Precision and recall are reported in Appendix B . The Base model (using the full 12 layers) produces similar results on DOCILE no matter if pre-training on IIT-CDIP or DOCILE .
label: Refuted
caption: Table 2: F1 scores for named-entity recognition using different pre-training and fine-tuning datasets. Results are averaged on 100 runs with different seeds.
evi_type: table
evi_path: tables_png/dev/val_tab_0358.png
domain: nlp
use_context: no
operation: Change the cell values
paper_path: papers/dev/nlp_2025.finnlp-1.9.json
license_name: CC BY 4.0
license_url: https://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0197
evi_path_original: tables/dev/val_tab_0358.tex

paper_id: 2025.finnlp-1.9
claim_id: val_tab_0359
claim: However, on our internal Payslips datasets, our model pre-trained on DOCILE outperforms the original one.
label: Supported
caption: Table 2: F1 scores for named-entity recognition using different pre-training and fine-tuning datasets. Results are averaged on 100 runs with different seeds.
evi_type: table
evi_path: tables_png/dev/val_tab_0359.png
context: We report labeled F1-score averaged on 100 fine-tuning runs in Table 2 . Precision and recall are reported in Appendix B . The Base model (using the full 12 layers) produces similar results on DOCILE no matter if pre-training on IIT-CDIP or DOCILE .
domain: nlp
use_context: no
operation: Change the cell values
paper_path: papers/dev/nlp_2025.finnlp-1.9.json
license_name: CC BY 4.0
license_url: https://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0198
evi_path_original: tables/dev/val_tab_0359.tex

paper_id: 2025.finnlp-1.9
claim_id: val_tab_0360
claim: Moreover, we observe that our pre-trained model exhibits a way lower variance between fine-tuning runs.
label: Supported
caption: Table 2: F1 scores for named-entity recognition using different pre-training and fine-tuning datasets. Results are averaged on 100 runs with different seeds.
evi_type: table
evi_path: tables_png/dev/val_tab_0360.png
context: We report labeled F1-score averaged on 100 fine-tuning runs in Table 2 . Precision and recall are reported in Appendix B . The Base model (using the full 12 layers) produces similar results on DOCILE no matter if pre-training on IIT-CDIP or DOCILE . However, on our internal Payslips datasets, our model pre-trained on D...
domain: nlp
use_context: no
operation: Change the cell values
paper_path: papers/dev/nlp_2025.finnlp-1.9.json
license_name: CC BY 4.0
license_url: https://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0199
evi_path_original: tables/dev/val_tab_0360.tex

paper_id: 2025.finnlp-1.9
claim_id: val_tab_0361
claim: Moreover, we observe that our pre-trained model exhibits a way lower variance between fine-tuning runs.
label: Refuted
caption: Table 2: F1 scores for named-entity recognition using different pre-training and fine-tuning datasets. Results are averaged on 100 runs with different seeds.
evi_type: table
evi_path: tables_png/dev/val_tab_0361.png
context: We report labeled F1-score averaged on 100 fine-tuning runs in Table 2 . Precision and recall are reported in Appendix B . The Base model (using the full 12 layers) produces similar results on DOCILE no matter if pre-training on IIT-CDIP or DOCILE . However, on our internal Payslips datasets, our model pre-trained on D...
domain: nlp
use_context: no
operation: Change the cell values
paper_path: papers/dev/nlp_2025.finnlp-1.9.json
license_name: CC BY 4.0
license_url: https://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0199
evi_path_original: tables/dev/val_tab_0361.tex
paper_id: 2025.naacl-long.1
claim_id: val_tab_0362
claim: Results are shown in Table 4 . Human performance is quite strong, almost reaching 90 F1@0 score overall.
label: Supported
caption: Table 4: Human baseline results (F1@0) by phenomenon and source dataset.
evi_type: table
evi_path: tables_png/dev/val_tab_0362.png
context: To find out how humans perform on the task, we hire two expert annotators with formal education in linguistics. We present them with 10 example instances and then ask them to complete 99 randomly sampled test set instances. We also evaluate our best model (see Table 3 ) on the same set.
domain: nlp
use_context: no
operation: Change the cell values
paper_path: papers/dev/nlp_2025.naacl-long.1.json
license_name: CC BY 4.0
license_url: http://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0200
evi_path_original: tables/dev/val_tab_0362.tex

paper_id: 2025.naacl-long.1
claim_id: val_tab_0363
claim: Results are shown in Table 4 . Human performance is quite strong, almost reaching 90 F1@0 score overall.
label: Refuted
caption: Table 4: Human baseline results (F1@0) by phenomenon and source dataset.
evi_type: table
evi_path: tables_png/dev/val_tab_0363.png
context: To find out how humans perform on the task, we hire two expert annotators with formal education in linguistics. We present them with 10 example instances and then ask them to complete 99 randomly sampled test set instances. We also evaluate our best model (see Table 3 ) on the same set.
domain: nlp
use_context: no
operation: Change the cell values
paper_path: papers/dev/nlp_2025.naacl-long.1.json
license_name: CC BY 4.0
license_url: http://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0200
evi_path_original: tables/dev/val_tab_0363.tex

paper_id: 2025.naacl-long.1
claim_id: val_tab_0365
claim: Humans also perform noticeably better on the NYCartoons dataset and on the idiom subset of the task.
label: Supported
caption: Table 4: Human baseline results (F1@0) by phenomenon and source dataset.
evi_type: table
evi_path: tables_png/dev/val_tab_0365.png
context: To find out how humans perform on the task, we hire two expert annotators with formal education in linguistics. We present them with 10 example instances and then ask them to complete 99 randomly sampled test set instances. We also evaluate our best model (see Table 3 ) on the same set. Results are shown in Table 4 . H...
domain: nlp
use_context: no
operation: Change the cell values
paper_path: papers/dev/nlp_2025.naacl-long.1.json
license_name: CC BY 4.0
license_url: http://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0201
evi_path_original: tables/dev/val_tab_0365.tex

paper_id: 2025.naacl-long.1
claim_id: val_tab_0366
claim: Humans also perform noticeably better on the NYCartoons dataset and on the idiom subset of the task.
label: Refuted
caption: Table 4: Human baseline results (F1@0) by phenomenon and source dataset.
evi_type: table
evi_path: tables_png/dev/val_tab_0366.png
context: To find out how humans perform on the task, we hire two expert annotators with formal education in linguistics. We present them with 10 example instances and then ask them to complete 99 randomly sampled test set instances. We also evaluate our best model (see Table 3 ) on the same set. Results are shown in Table 4 . H...
domain: nlp
use_context: no
operation: Change the cell values
paper_path: papers/dev/nlp_2025.naacl-long.1.json
license_name: CC BY 4.0
license_url: http://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0201
evi_path_original: tables/dev/val_tab_0366.tex

paper_id: 2025.naacl-long.1
claim_id: val_tab_0367
claim: The model has a slight edge in performance on the sarcasm and visual metaphor subsets of the task, perhaps due to difficulty of these subsets and any potential spurious correlations during fine-tuning.
label: Supported
caption: Table 4: Human baseline results (F1@0) by phenomenon and source dataset.
evi_type: table
evi_path: tables_png/dev/val_tab_0367.png
context: To find out how humans perform on the task, we hire two expert annotators with formal education in linguistics. We present them with 10 example instances and then ask them to complete 99 randomly sampled test set instances. We also evaluate our best model (see Table 3 ) on the same set. Results are shown in Table 4 . H...
domain: nlp
use_context: other sources
operation: Change the cell values
paper_path: papers/dev/nlp_2025.naacl-long.1.json
license_name: CC BY 4.0
license_url: http://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0202
evi_path_original: tables/dev/val_tab_0367.tex
paper_id: 2025.finnlp-1.15
claim_id: val_tab_0368
claim: (1) Fine-tuned language models consistently outperform generic LLMs, the performance gap can be narrowed through prompt design, few-shot learning, and model size.
label: Supported
caption: Table 1: Performance of different fine-tuned language models and LLMs under different prompts on FiNER-ORD task.
evi_type: table
evi_path: tables_png/dev/val_tab_0368.png
domain: nlp
use_context: no
operation: Change the cell values
paper_path: papers/dev/nlp_2025.finnlp-1.15.json
license_name: CC BY 4.0
license_url: https://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0203
evi_path_original: tables/dev/val_tab_0368.tex

paper_id: 2025.finnlp-1.15
claim_id: val_tab_0369
claim: Additionally, few-shot learning and larger LLMs demonstrate notable advantages over their smaller counterparts.
label: Supported
caption: Table 1: Performance of different fine-tuned language models and LLMs under different prompts on FiNER-ORD task.
evi_type: table
evi_path: tables_png/dev/val_tab_0369.png
context: (1) Fine-tuned language models consistently outperform generic LLMs, the performance gap can be narrowed through prompt design, few-shot learning, and model size. Table 1 demonstrates that fine-tuned language models surpass generic LLMs in zero-shot direct prompting. However, the performance of generic LLMs improves si...
domain: nlp
use_context: no
operation: Change the cell values
paper_path: papers/dev/nlp_2025.finnlp-1.15.json
license_name: CC BY 4.0
license_url: https://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0204
evi_path_original: tables/dev/val_tab_0369.tex

paper_id: 2025.finnlp-1.15
claim_id: val_tab_0370
claim: CoT prompting only improves the performance of the GPT-4o-mini model, whereas it significantly degrades the performance of the LLaMA 3.1 series.
label: Supported
caption: Table 1: Performance of different fine-tuned language models and LLMs under different prompts on FiNER-ORD task.
evi_type: table
evi_path: tables_png/dev/val_tab_0370.png
context: (2) Chain-of-Thought prompting has limited effect on LLMs performance and can sometimes reduce effectiveness. While few-shot learning generally enhances generic LLMs’ performance, Table 1 shows that the difference between prompting styles is marginal.
domain: nlp
use_context: no
operation: Change the cell values
paper_path: papers/dev/nlp_2025.finnlp-1.15.json
license_name: CC BY 4.0
license_url: https://creativecommons.org/licenses/by/4.0/
claim_id_pair: 0205
evi_path_original: tables/dev/val_tab_0370.tex
2025.naacl-short.14
val_tab_0371
In contrast, translations into Marathi show the lowest scores, ranging from 26.5 to 29.8, likely due to the complexity of translating between less common language pairs.
Supported
Table 3: COMET scores . Top-scoring translation for each source manuscript is in bold text. Second top-scoring translation is in italics.
table
tables_png/dev/val_tab_0371.png
Translation Quality : Table 3 shows translation scores from the Hebrew Old Testament, Greek Old Testament, and Greek New Testament into five target languages. English and Turkish consistently achieve the highest scores across all manuscripts, with English translations ranging from 61.2 to 72.6, and Turkish from 65.4 to...
nlp
no
Change the cell values
papers/dev/nlp_2025.naacl-short.14.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0206
tables/dev/val_tab_0371.tex
2025.naacl-short.14
val_tab_0372
In contrast, translations into Marathi show the lowest scores, ranging from 26.5 to 29.8, likely due to the complexity of translating between less common language pairs.
Refuted
Table 3: COMET scores . Top-scoring translation for each source manuscript is in bold text. Second top-scoring translation is in italics.
table
tables_png/dev/val_tab_0372.png
Translation Quality : Table 3 shows translation scores from the Hebrew Old Testament, Greek Old Testament, and Greek New Testament into five target languages. English and Turkish consistently achieve the highest scores across all manuscripts, with English translations ranging from 61.2 to 72.6, and Turkish from 65.4 to...
nlp
no
Change the cell values
papers/dev/nlp_2025.naacl-short.14.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0206
tables/dev/val_tab_0372.tex
2025.naacl-short.14
val_tab_0373
As suggested by McGovern et al. ( 2024 ) , we indeed see that human translations over- or underemphasize the intertextuality present in the text, whereas machine translations provide a neutral baseline, based on these results.
Supported
Table 5: Overemphasized Intertextuality by Human Translation. The intertextuality from Hebrews 8:12 to Isaiah 43:25 is amplified by the human translator’s decision to render different words as "sin". The machine translation abstains from this and restores the original distance, but loses coherence.
table
tables_png/dev/val_tab_0373.png
nlp
no
Change the cell values
papers/dev/nlp_2025.naacl-short.14.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0207
tables/dev/val_tab_0373.tex
2025.naacl-short.14
val_tab_0374
As suggested by McGovern et al. ( 2024 ) , we indeed see that human translations over- or underemphasize the intertextuality present in the text, whereas machine translations provide a neutral baseline, based on these results.
Refuted
Table 5: Overemphasized Intertextuality by Human Translation. The intertextuality from Hebrews 8:12 to Isaiah 43:25 is amplified by the human translator’s decision to render different words as "sin". The machine translation abstains from this and restores the original distance, but loses coherence.
table
tables_png/dev/val_tab_0374.png
nlp
no
Change the cell values
papers/dev/nlp_2025.naacl-short.14.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0207
tables/dev/val_tab_0374.tex
2024.eacl-short.15
val_tab_0376
When the LLM’s answers contain calculations, the accuracy drops significantly for almost all models except for PaLM.
Supported
Table 1: The redundancy ( Red. ) and accuracy of LLMs’ responses. We report the average accuracy ( Avg. ) on all questions (second column), the accuracy for answers without calculation (Cal. ✗, third column) and with calculation (Cal. ✓, fourth column). \dagger : The accuracy of GPT-4 is 100% by construction since we u...
table
tables_png/dev/val_tab_0376.png
After discussing redundancy and accuracy independently, we want to know if redundant calculation co-occurs more often with wrong answers. We separate the model outputs into two groups: one that contains calculations and another that does not have calculations, and we calculate the accuracy for the two groups. The resul...
nlp
no
Swap rows or columns
papers/dev/nlp_2024.eacl-short.15.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0208
tables/dev/val_tab_0376.tex
2024.eacl-short.15
val_tab_0377
When the LLM’s answers contain calculations, the accuracy drops significantly for almost all models except for PaLM.
Refuted
Table 1: The redundancy ( Red. ) and accuracy of LLMs’ responses. We report the average accuracy ( Avg. ) on all questions (second column), the accuracy for answers without calculation (Cal. ✗, third column) and with calculation (Cal. ✓, fourth column). \dagger : The accuracy of GPT-4 is 100% by construction since we u...
table
tables_png/dev/val_tab_0377.png
After discussing redundancy and accuracy independently, we want to know if redundant calculation co-occurs more often with wrong answers. We separate the model outputs into two groups: one that contains calculations and another that does not have calculations, and we calculate the accuracy for the two groups. The resul...
nlp
no
Swap rows or columns
papers/dev/nlp_2024.eacl-short.15.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0208
tables/dev/val_tab_0377.tex
2025.clpsych-1.1
val_tab_0379
The average pairwise RMSE (APRMSE) values, macro-averaged over all appraisal dimensions, were very close to each other, ranging from 0.6 to 0.63 over the five runs, with a mean of 0.61 (see Table 1).
Supported
Table 1: Average Pairwise RMSE (APRMSE) and Spearman correlation coefficients ( \rho ) of 108 random samples of five GPT-4 runs. All Spearman correlations were statistically significant at the level of p<0.001 .
table
tables_png/dev/val_tab_0379.png
nlp
no
Change the cell values
papers/dev/nlp_2025.clpsych-1.1.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0209
tables/dev/val_tab_0379.tex
2025.clpsych-1.1
val_tab_0380
The average pairwise RMSE (APRMSE) values, macro-averaged over all appraisal dimensions, were very close to each other, ranging from 0.6 to 0.63 over the five runs, with a mean of 0.61 (see Table 1).
Refuted
Table 1: Average Pairwise RMSE (APRMSE) and Spearman correlation coefficients ( \rho ) of 108 random samples of five GPT-4 runs. All Spearman correlations were statistically significant at the level of p<0.001 .
table
tables_png/dev/val_tab_0380.png
nlp
no
Change the cell values
papers/dev/nlp_2025.clpsych-1.1.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0209
tables/dev/val_tab_0380.tex
2025.clpsych-1.1
val_tab_0381
The application of the majority voting algorithm improved the RMSE on average by about 30% from 1.61 to 1.12 (columns GPT-4 {}_{\text{avg}} and GPT-4 {}_{\text{maj}} in Table 2 ).
Supported
Table 2: RMSE results for research questions Q2, Q3, and Q4. GPT-4 stands for the GPT-4 annotator, Human for the human reader-annotator, avg : average of five GPT-4 completions/human guesses, maj : majority vote of five GPT-4 completions/human guesses, conf : majority vote of five GPT-4 completions/human guesses with c...
table
tables_png/dev/val_tab_0381.png
nlp
no
Change the cell values
papers/dev/nlp_2025.clpsych-1.1.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0210
tables/dev/val_tab_0381.tex
2025.clpsych-1.1
val_tab_0382
The mean RMSE for GPT-4 increased from 1.12 to 1.15 (columns GPT-4 {}_{\text{maj}} and GPT-4 {}_{\text{conf}} in Table 2 ), and the mean RMSE for human reader-annotators increased from 0.99 to 1.06 (columns Human {}_{\text{maj}} and Human {}_{\text{conf}} in Table 2 ).
Supported
Table 2: RMSE results for research questions Q2, Q3, and Q4. GPT-4 stands for the GPT-4 annotator, Human for the human reader-annotator, avg : average of five GPT-4 completions/human guesses, maj : majority vote of five GPT-4 completions/human guesses, conf : majority vote of five GPT-4 completions/human guesses with c...
table
tables_png/dev/val_tab_0382.png
The results of this experiment are shown in Table 2, columns GPT-4 {}_{\text{conf}} and Human {}_{\text{conf}}. We can see that using the confidence rating to break the ties did not improve the RMSE for either the model or the human reader-annotators compared to the random choice.
nlp
no
Change the cell values
papers/dev/nlp_2025.clpsych-1.1.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0211
tables/dev/val_tab_0382.tex
2025.clpsych-1.1
val_tab_0383
When looking at individual appraisal dimensions, we can see that the RMSE’s are in most cases lower in the random tie-breaking setting compared to using the confidence rating, and that applies to both the model and humans.
Supported
Table 2: RMSE results for research questions Q2, Q3, and Q4. GPT-4 stands for the GPT-4 annotator, Human for the human reader-annotator, avg : average of five GPT-4 completions/human guesses, maj : majority vote of five GPT-4 completions/human guesses, conf : majority vote of five GPT-4 completions/human guesses with c...
table
tables_png/dev/val_tab_0383.png
The results of this experiment are shown in Table 2, columns GPT-4 {}_{\text{conf}} and Human {}_{\text{conf}}. We can see that using the confidence rating to break the ties did not improve the RMSE for either the model or the human reader-annotators compared to the random choice. The mean RMSE for GPT-4 from 1.1...
nlp
no
Change the cell values
papers/dev/nlp_2025.clpsych-1.1.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0212
tables/dev/val_tab_0383.tex
2025.clpsych-1.1
val_tab_0384
When looking at individual appraisal dimensions, we can see that the RMSE’s are in most cases lower in the random tie-breaking setting compared to using the confidence rating, and that applies to both the model and humans.
Refuted
Table 2: RMSE results for research questions Q2, Q3, and Q4. GPT-4 stands for the GPT-4 annotator, Human for the human reader-annotator, avg : average of five GPT-4 completions/human guesses, maj : majority vote of five GPT-4 completions/human guesses, conf : majority vote of five GPT-4 completions/human guesses with c...
table
tables_png/dev/val_tab_0384.png
The results of this experiment are shown in Table 2, columns GPT-4 {}_{\text{conf}} and Human {}_{\text{conf}}. We can see that using the confidence rating to break the ties did not improve the RMSE for either the model or the human reader-annotators compared to the random choice. The mean RMSE for GPT-4 from 1.1...
nlp
no
Change the cell values
papers/dev/nlp_2025.clpsych-1.1.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0212
tables/dev/val_tab_0384.tex
2025.clpsych-1.1
val_tab_0385
The average RMSE score for five completions applying the majority voting algorithm was 1.22 (see column GPT-4 {}_{\text{emo}} ), which is worse than the same task without the emotion label prediction studied in relation to research question Q3.
Supported
Table 2: RMSE results for research questions Q2, Q3, and Q4. GPT-4 stands for the GPT-4 annotator, Human for the human reader-annotator, avg : average of five GPT-4 completions/human guesses, maj : majority vote of five GPT-4 completions/human guesses, conf : majority vote of five GPT-4 completions/human guesses with c...
table
tables_png/dev/val_tab_0385.png
The results of adding emotion to a prompt are shown in the last column of Table 2 .
nlp
other sources
Change the cell values
papers/dev/nlp_2025.clpsych-1.1.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0213
tables/dev/val_tab_0385.tex
2025.clpsych-1.1
val_tab_0386
The average RMSE score for five completions applying the majority voting algorithm was 1.22 (see column GPT-4 {}_{\text{emo}} ), which is worse than the same task without the emotion label prediction studied in relation to research question Q3.
Refuted
Table 2: RMSE results for research questions Q2, Q3, and Q4. GPT-4 stands for the GPT-4 annotator, Human for the human reader-annotator, avg : average of five GPT-4 completions/human guesses, maj : majority vote of five GPT-4 completions/human guesses, conf : majority vote of five GPT-4 completions/human guesses with c...
table
tables_png/dev/val_tab_0386.png
The results of adding emotion to a prompt are shown in the last column of Table 2 .
nlp
other sources
Change the cell values
papers/dev/nlp_2025.clpsych-1.1.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0213
tables/dev/val_tab_0386.tex
2025.clpsych-1.1
val_tab_0387
Appraisals consistently showing better accuracy across both GPT-4 models and human annotators include Pleasantness, Unpleasantness, Goal Support, and External Norms (marked as bold in Table 2 ), suggesting that these are the easiest to infer based on text.
Supported
Table 2: RMSE results for research questions Q2, Q3, and Q4. GPT-4 stands for the GPT-4 annotator, Human for the human reader-annotator, avg : average of five GPT-4 completions/human guesses, maj : majority vote of five GPT-4 completions/human guesses, conf : majority vote of five GPT-4 completions/human guesses with c...
table
tables_png/dev/val_tab_0387.png
In analyzing appraisal dimensions across all experiments, we looked for patterns by comparing the RMSE of individual appraisal dimensions to the macro-averaged RMSE.
nlp
no
Swap rows or columns
papers/dev/nlp_2025.clpsych-1.1.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0214
tables/dev/val_tab_0387.tex
2025.clpsych-1.1
val_tab_0388
Own Responsibility, Own Control, Not Consider, and Effort also generally performed well, although Own Responsibility is predicted remarkably worse in the GPT-4 {}_{\text{emo}} setting.
Supported
Table 2: RMSE results for research questions Q2, Q3, and Q4. GPT-4 stands for the GPT-4 annotator, Human for the human reader-annotator, avg : average of five GPT-4 completions/human guesses, maj : majority vote of five GPT-4 completions/human guesses, conf : majority vote of five GPT-4 completions/human guesses with c...
table
tables_png/dev/val_tab_0388.png
In analyzing appraisal dimensions across all experiments, we looked for patterns by comparing the RMSE of individual appraisal dimensions to the macro-averaged RMSE. Appraisals consistently showing better accuracy across both GPT-4 models and human annotators include Pleasantness, Unpleasantness, Goal Support, and Exte...
nlp
no
Swap rows or columns
papers/dev/nlp_2025.clpsych-1.1.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0215
tables/dev/val_tab_0388.tex
2025.clpsych-1.1
val_tab_0389
Own Responsibility, Own Control, Not Consider, and Effort also generally performed well, although Own Responsibility is predicted remarkably worse in the GPT-4 {}_{\text{emo}} setting.
Refuted
Table 2: RMSE results for research questions Q2, Q3, and Q4. GPT-4 stands for the GPT-4 annotator, Human for the human reader-annotator, avg : average of five GPT-4 completions/human guesses, maj : majority vote of five GPT-4 completions/human guesses, conf : majority vote of five GPT-4 completions/human guesses with c...
table
tables_png/dev/val_tab_0389.png
In analyzing appraisal dimensions across all experiments, we looked for patterns by comparing the RMSE of individual appraisal dimensions to the macro-averaged RMSE. Appraisals consistently showing better accuracy across both GPT-4 models and human annotators include Pleasantness, Unpleasantness, Goal Support, and Exte...
nlp
no
Swap rows or columns
papers/dev/nlp_2025.clpsych-1.1.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0215
tables/dev/val_tab_0389.tex
2025.clpsych-1.2
val_tab_0391
Overall, the CDD values reported in Table 2 are relatively low (all below 0.006), as indicated by prior studies Wachter et al. ( 2021 ); Koumeri et al. ( 2023 ); Wachter et al. ( 2020 ) , suggesting minimal gender-based disparity across the different classes.
Supported
Table 2: One-vs.-rest CDD values for each class across four conflicts.
table
tables_png/dev/val_tab_0391.png
nlp
no
Change the cell values
papers/dev/nlp_2025.clpsych-1.2.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0216
tables/dev/val_tab_0391.tex
2025.clpsych-1.2
val_tab_0392
Overall, the CDD values reported in Table 2 are relatively low (all below 0.006), as indicated by prior studies Wachter et al. ( 2021 ); Koumeri et al. ( 2023 ); Wachter et al. ( 2020 ) , suggesting minimal gender-based disparity across the different classes.
Refuted
Table 2: One-vs.-rest CDD values for each class across four conflicts.
table
tables_png/dev/val_tab_0392.png
nlp
no
Change the cell values
papers/dev/nlp_2025.clpsych-1.2.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0216
tables/dev/val_tab_0392.tex
2024.eacl-short.18
val_tab_0393
As seen in Table 1 , five of the seven relations are parsed in accordance with each one's desired relation more than 82\% of the time, four greater than or equal to 95\% of the time, and one is parsed to the desired relation for all tested prompts.
Supported
Table 1: The automatic-evaluation statistics for each relation, where None is generation with the language model alone.
table
tables_png/dev/val_tab_0393.png
The input text alongside its completion is automatically parsed using the DMRST parser.
nlp
yes
Change the cell values
papers/dev/nlp_2024.eacl-short.18.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0217
tables/dev/val_tab_0393.tex
2024.eacl-short.18
val_tab_0394
As seen in Table 1 , five of the seven relations are parsed in accordance with each one's desired relation more than 82\% of the time, four greater than or equal to 95\% of the time, and one is parsed to the desired relation for all tested prompts.
Refuted
Table 1: The automatic-evaluation statistics for each relation, where None is generation with the language model alone.
table
tables_png/dev/val_tab_0394.png
The input text alongside its completion is automatically parsed using the DMRST parser.
nlp
yes
Change the cell values
papers/dev/nlp_2024.eacl-short.18.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0217
tables/dev/val_tab_0394.tex
2024.eacl-short.18
val_tab_0396
Evaluation_NS is unique in being poor, receiving an average of 2.47 .
Supported
Table 2: The human-evaluation statistics for each relation, where None is generation with the language model alone. The metrics are ( Rel[ation-fit]), ( Flu[ency]), and ( Rea[sonableness]).
table
tables_png/dev/val_tab_0396.png
The average annotator rating of relation-fit for generation with each of the relations is presented in Table 2 . The overall average, 3.49 , is well within the positive range.
nlp
yes
Change the cell values
papers/dev/nlp_2024.eacl-short.18.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0218
tables/dev/val_tab_0396.tex
2025.clpsych-1.3
val_tab_0399
Conversely, RoBERTa leads in precision at 0.72, but with lower recall at 0.84.
Supported
Table 4: Classification results for different methods of comparing embeddings to detect risk of suicide. The higher the score (a.k.a. the greener), the better.
table
tables_png/dev/val_tab_0399.png
The results show RACLETTE’s Combined method achieving the highest recall of 0.95, indicating superior ability to identify relevant cases, though this comes with a trade-off in precision at 0.63.
nlp
no
Swap rows or columns
papers/dev/nlp_2025.clpsych-1.3.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0219
tables/dev/val_tab_0399.tex
2025.clpsych-1.3
val_tab_0400
Conversely, RoBERTa leads in precision at 0.72, but with lower recall at 0.84.
Refuted
Table 4: Classification results for different methods of comparing embeddings to detect risk of suicide. The higher the score (a.k.a. the greener), the better.
table
tables_png/dev/val_tab_0400.png
The results show RACLETTE’s Combined method achieving the highest recall of 0.95, indicating superior ability to identify relevant cases, though this comes with a trade-off in precision at 0.63.
nlp
no
Swap rows or columns
papers/dev/nlp_2025.clpsych-1.3.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0219
tables/dev/val_tab_0400.tex
2025.clpsych-1.3
val_tab_0402
Both JS Divergence and Cosine Similarity methods show similar patterns, with high recall (0.93) but lower precision.
Supported
Table 4: Classification results for different methods of comparing embeddings to detect risk of suicide. The higher the score (a.k.a. the greener), the better.
table
tables_png/dev/val_tab_0402.png
The results show RACLETTE’s Combined method achieving the highest recall of 0.95, indicating superior ability to identify relevant cases, though this comes with a trade-off in precision at 0.63. Conversely, RoBERTa leads in precision at 0.72, but with lower recall at 0.84. The KL Divergence variant of RACLETTE stands o...
nlp
no
Change the cell values
papers/dev/nlp_2025.clpsych-1.3.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0220
tables/dev/val_tab_0402.tex
2025.clpsych-1.3
val_tab_0403
Both JS Divergence and Cosine Similarity methods show similar patterns, with high recall (0.93) but lower precision.
Refuted
Table 4: Classification results for different methods of comparing embeddings to detect risk of suicide. The higher the score (a.k.a. the greener), the better.
table
tables_png/dev/val_tab_0403.png
The results show RACLETTE’s Combined method achieving the highest recall of 0.95, indicating superior ability to identify relevant cases, though this comes with a trade-off in precision at 0.63. Conversely, RoBERTa leads in precision at 0.72, but with lower recall at 0.84. The KL Divergence variant of RACLETTE stands o...
nlp
no
Change the cell values
papers/dev/nlp_2025.clpsych-1.3.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0220
tables/dev/val_tab_0403.tex
2025.clpsych-1.3
val_tab_0404
The color intensity (green shading) in Table 4 indicates better performance, visually highlighting that RACLETTE’s approaches generally outperform the benchmark models.
Supported
Table 4: Classification results for different methods of comparing embeddings to detect risk of suicide. The higher the score (a.k.a. the greener), the better.
table
tables_png/dev/val_tab_0404.png
The results show RACLETTE’s Combined method achieving the highest recall of 0.95, indicating superior ability to identify relevant cases, though this comes with a trade-off in precision at 0.63. Conversely, RoBERTa leads in precision at 0.72, but with lower recall at 0.84. The KL Divergence variant of RACLETTE stands o...
nlp
no
Swap rows or columns
papers/dev/nlp_2025.clpsych-1.3.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0221
tables/dev/val_tab_0404.tex
2023.ijcnlp-main.34
val_tab_0406
In 6 of the 7 datasets with natural claims, the best generalization score is from a model trained on artificial claims.
Supported
Table 3: F1 of binary fact verification on the evaluation set for all datasets in a zero-shot generalization setup. Rows correspond to the training dataset and columns to the evaluated dataset.
table
tables_png/dev/val_tab_0406.png
Many works Jiang et al. ( 2020 ); Saakyan et al. ( 2021 ) do not consider NEI claims due to their ambiguity. To explore whether our previous observations also hold for the task of binary fact verification , we evaluate the generalization results for all 11 datasets using only the supports and refutes claims for trainin...
nlp
other sources
Change the cell values
papers/dev/nlp_2023.ijcnlp-main.34.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0222
tables/dev/val_tab_0406.tex
2025.naacl-long.11
val_tab_0407
From the results, we notice that the correlations shown by the non-specialized embeddings are extremely weak.
Supported
Table 2 : Gender diversity. Average Spearman correlations between proportion of female voices and diversity scores induced by speech representations.
table
tables_png/dev/val_tab_0407.png
In Table 2 we report correlation scores for the gender diversity. Again, we split into two groups: male-voice- and female-voice-dominant.
nlp
other sources
Swap rows or columns
papers/dev/nlp_2025.naacl-long.11.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0223
tables/dev/val_tab_0407.tex
2025.naacl-long.11
val_tab_0408
Here, again, we notice that SpeechSim has higher correlations than other non-specialized embedding models.
Supported
Table 4 : Accent diversity. Average Spearman correlations between the accent class entropy and diversity scores induced by speech representations.
table
tables_png/dev/val_tab_0408.png
We report average correlations in Table 4 .
nlp
other sources
Swap rows or columns
papers/dev/nlp_2025.naacl-long.11.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0224
tables/dev/val_tab_0408.tex
2025.naacl-long.11
val_tab_0409
Here, again, we notice that SpeechSim has higher correlations than other non-specialized embedding models.
Refuted
Table 4 : Accent diversity. Average Spearman correlations between the accent class entropy and diversity scores induced by speech representations.
table
tables_png/dev/val_tab_0409.png
We report average correlations in Table 4 .
nlp
other sources
Swap rows or columns
papers/dev/nlp_2025.naacl-long.11.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0224
tables/dev/val_tab_0409.tex
2025.naacl-long.1
val_tab_0412
We observe that the teacher model is leading in terms of the adequacy of the explanations and preference rate, as expected from a larger system equipped with higher-quality reasoning and generation capabilities.
Supported
Table 6: Adequacy and Preference rates for generated explanations.
table
tables_png/dev/val_tab_0412.png
In Table 6 , we show adequacy and preference rates for explanations from the 3 systems, where an explanation is deemed adequate if both annotators agreed it is, and inadequate if both agreed it is not. The preference percentage is also taken among instances where the annotators agreed that the model’s explanation is pr...
nlp
other sources
Change the cell values
papers/dev/nlp_2025.naacl-long.1.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0225
tables/dev/val_tab_0412.tex
2025.naacl-long.1
val_tab_0413
We observe that the teacher model is leading in terms of the adequacy of the explanations and preference rate, as expected from a larger system equipped with higher-quality reasoning and generation capabilities.
Refuted
Table 6: Adequacy and Preference rates for generated explanations.
table
tables_png/dev/val_tab_0413.png
In Table 6 , we show adequacy and preference rates for explanations from the 3 systems, where an explanation is deemed adequate if both annotators agreed it is, and inadequate if both agreed it is not. The preference percentage is also taken among instances where the annotators agreed that the model’s explanation is pr...
nlp
other sources
Change the cell values
papers/dev/nlp_2025.naacl-long.1.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0225
tables/dev/val_tab_0413.tex
2025.naacl-short.1
val_fig_0117
From Fig. 3 , we observed that on in-distribution data, model performance improves with an increase in training tokens, but at a diminishing rate.
Supported
Figure 3: Left: Best Move Accuracy of ChessLLM training with short round data. The accuracy of the best move increases with the number of training tokens. Right: Legal Move Accuracy of ChessLLM training with short round data. The accuracy of the legal move increases with the number of training tokens.
figure
figures/dev/val_fig_0117.png
We evaluated in-distribution data to analyze our model’s performance on the evaluation set under varying computing power.
nlp
yes
Graph Flip
papers/dev/nlp_2025.naacl-short.1.json
CC BY-NC-SA 4.0
http://creativecommons.org/licenses/by-nc-sa/4.0/
0226
null
2025.naacl-short.1
val_fig_0118
From Fig. 3 , we observed that on in-distribution data, model performance improves with an increase in training tokens, but at a diminishing rate.
Refuted
Figure 3: Left: Best Move Accuracy of ChessLLM training with short round data. The accuracy of the best move increases with the number of training tokens. Right: Legal Move Accuracy of ChessLLM training with short round data. The accuracy of the legal move increases with the number of training tokens.
figure
figures/dev/val_fig_0118.png
We evaluated in-distribution data to analyze our model’s performance on the evaluation set under varying computing power.
nlp
yes
Graph Flip
papers/dev/nlp_2025.naacl-short.1.json
CC BY-NC-SA 4.0
http://creativecommons.org/licenses/by-nc-sa/4.0/
0226
null
2025.naacl-long.14
val_fig_0120
In contrast, models such as Qwen-2 or Gemma-1.1 tend to under-cite in their responses.
Supported
Figure 3: Lollipop plots denoting the average heuristic-based feature scores achieved by baselines in \mirage . The x-axis denotes the languages in \mirage, whereas the y-axis plots every heuristic feature value. Models in the same family are represented as the same color in a lollipop (as multiple circles). Figure 9 provide...
figure
figures/dev/val_fig_0120.png
Figure 3 shows lollipop plots indicating the average heuristic-feature value ( y -axis) distribution across all languages ( x -axis). In English detection, smaller LLMs such as Gemma-1.1 (2B) do not generate output in the required target language, but rather rely on English. Next, for citation quality and support evalu...
nlp
yes
Legend Swap
papers/dev/nlp_2025.naacl-long.14.json
same as above
CC BY-SA 4.0
http://creativecommons.org/licenses/by-sa/4.0/
0227
null
2025.naacl-long.14
val_fig_0121
We observe that proprietary models such as GPT-4o and GPT-4, and larger models such as LLAMA-3 (70B) and Mixtral (8x22B), are slightly better than other baselines on \mirage .
Supported
Figure 4: \mirage arena-based leaderboards: (left) Bradley-Terry model coefficients with rankings with GPT-4o as a judge pairwise judgments on a subset of 100 sampled queries. (right) Synthetic rankings using heuristic-based features and a learning to rank model. Each highlighted value is the rank of the model on the l...
figure
figures/dev/val_fig_0121.png
Figure 4 (left) shows the arena-based leaderboard using bootstrapping and Bradley-Terry modeling after conducting 200 tournaments and sampling 100 matches per tournament on a subset of 100 queries using GPT-4o pairwise comparisons.
nlp
no
Legend Swap
papers/dev/nlp_2025.naacl-long.14.json
CC BY-SA 4.0
http://creativecommons.org/licenses/by-sa/4.0/
0228
null
2025.naacl-long.14
val_fig_0122
We observe that proprietary models such as GPT-4o and GPT-4, and larger models such as LLAMA-3 (70B) and Mixtral (8x22B), are slightly better than other baselines on \mirage .
Refuted
Figure 4: \mirage arena-based leaderboards: (left) Bradley-Terry model coefficients with rankings with GPT-4o as a judge pairwise judgments on a subset of 100 sampled queries. (right) Synthetic rankings using heuristic-based features and a learning to rank model. Each highlighted value is the rank of the model on the l...
figure
figures/dev/val_fig_0122.png
Figure 4 (left) shows the arena-based leaderboard using bootstrapping and Bradley-Terry modeling after conducting 200 tournaments and sampling 100 matches per tournament on a subset of 100 queries using GPT-4o pairwise comparisons.
nlp
no
Legend Swap
papers/dev/nlp_2025.naacl-long.14.json
CC BY-SA 4.0
http://creativecommons.org/licenses/by-sa/4.0/
0228
null
2025.naacl-long.14
val_fig_0123
Baseline rankings across languages are usually stable, with a few notable exceptions such as Gemma-1.1 (7B), which achieves a rank of 4 in Telugu.
Supported
Figure 4: \mirage arena-based leaderboards: (left) Bradley-Terry model coefficients with rankings with GPT-4o as a judge pairwise judgments on a subset of 100 sampled queries. (right) Synthetic rankings using heuristic-based features and a learning to rank model. Each highlighted value is the rank of the model on the l...
figure
figures/dev/val_fig_0123.png
Figure 4 (left) shows the arena-based leaderboard using bootstrapping and Bradley-Terry modeling after conducting 200 tournaments and sampling 100 matches per tournament on a subset of 100 queries using GPT-4o pairwise comparisons. We observe that proprietary models such as GPT-4o and GPT-4, and larger models such as L...
nlp
other sources
Legend Swap
papers/dev/nlp_2025.naacl-long.14.json
same as above
CC BY-SA 4.0
http://creativecommons.org/licenses/by-sa/4.0/
0229
null
2025.naacl-long.14
val_fig_0124
Command R (35B) performs poorly in low-resource languages such as Bengali (rank 13) or Swahili (rank 14).
Supported
Figure 4: \mirage arena-based leaderboards: (left) Bradley-Terry model coefficients with rankings with GPT-4o as a judge pairwise judgments on a subset of 100 sampled queries. (right) Synthetic rankings using heuristic-based features and a learning to rank model. Each highlighted value is the rank of the model on the l...
figure
figures/dev/val_fig_0124.png
Figure 4 (left) shows the arena-based leaderboard using bootstrapping and Bradley-Terry modeling after conducting 200 tournaments and sampling 100 matches per tournament on a subset of 100 queries using GPT-4o pairwise comparisons. We observe that proprietary models such as GPT-4o and GPT-4, and larger models such as L...
nlp
no
Legend Swap
papers/dev/nlp_2025.naacl-long.14.json
same as above
CC BY-SA 4.0
http://creativecommons.org/licenses/by-sa/4.0/
0230
null
2025.naacl-long.14
val_fig_0125
Command R (35B) performs poorly in low-resource languages such as Bengali (rank 13) or Swahili (rank 14).
Refuted
Figure 4: \mirage arena-based leaderboards: (left) Bradley-Terry model coefficients with rankings with GPT-4o as a judge pairwise judgments on a subset of 100 sampled queries. (right) Synthetic rankings using heuristic-based features and a learning to rank model. Each highlighted value is the rank of the model on the l...
figure
figures/dev/val_fig_0125.png
Figure 4 (left) shows the arena-based leaderboard using bootstrapping and Bradley-Terry modeling after conducting 200 tournaments and sampling 100 matches per tournament on a subset of 100 queries using GPT-4o pairwise comparisons. We observe that proprietary models such as GPT-4o and GPT-4, and larger models such as L...
nlp
no
Legend Swap
papers/dev/nlp_2025.naacl-long.14.json
same as above
CC BY-SA 4.0
http://creativecommons.org/licenses/by-sa/4.0/
0230
null
2025.naacl-long.14
val_fig_0126
From Figure 7 , we observe that GPT-4o is a strong teacher: Mistral-v0.2 (7B) fine-tuned on GPT-4o distilled training data achieves rank 2, outperforming the Llama-3 (70B) model.
Supported
Figure 7: Approximate rankings using heuristic features after fine-tuning Llama-3 (8B) and Mistral-v0.2 (7B) on \mirage dataset across four configurations.
figure
figures/dev/val_fig_0126.png
Does fine-tuning on \mirage training data help? We evaluate three variants of the \mirage training dataset using two backbones: Mistral-v0.2 (7B) and Llama-3 (8B). We fine-tune the \mirage training datasets using (i) both on GPT-4o, (ii) Llama-3 (8B) on Llama-3 (70B), and (iii) Mistral-v0.2 (7B) on Mixtral (8x22B).
nlp
no
Legend Swap
papers/dev/nlp_2025.naacl-long.14.json
CC BY-SA 4.0
http://creativecommons.org/licenses/by-sa/4.0/
0231
null
2025.naacl-long.15
val_fig_0127
From Fig. 5 -right, extracting 4 categories in the Hard task shows the largest performance gap between mapping formats.
Supported
Figure 5: Average estimated true F1 scores ( Section 3.1 ) across models (left) and benchmarks (right) showing performance bias of LLMs across 2 widely used mapping formats.
figure
figures/dev/val_fig_0127.png
nlp
other sources
Legend Swap
papers/dev/nlp_2025.naacl-long.15.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0232
null
2025.naacl-long.15
val_fig_0128
From Fig. 5 -right, extracting 4 categories in the Hard task shows the largest performance gap between mapping formats.
Refuted
Figure 5: Average estimated true F1 scores ( Section 3.1 ) across models (left) and benchmarks (right) showing performance bias of LLMs across 2 widely used mapping formats.
figure
figures/dev/val_fig_0128.png
nlp
other sources
Legend Swap
papers/dev/nlp_2025.naacl-long.15.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0232
null
2023.emnlp-main.8
val_fig_0129
In contrast, BERT exhibits no preference for label orders and consistently demonstrates a uniform distribution in its predicted label indices.
Supported
(a) TACRED; (b) TACREV; (c) Re-TACRED; (d) Banking77; The distribution of predicted indices of the test instances with label shuffling before every prediction.
figure
figures/dev/val_fig_0129.png
The empirical results in Section 3.2 indicate that ChatGPT’s predictions are affected by label order. To deeper delve into the effects of label orders on ChatGPT, we analyze the distribution of predicted label indices (e.g., if the prediction is the first label, the label index is 1 ), as introduced in Section 2.2 . We...
nlp
yes
Supported_claim_only
papers/dev/nlp_2023.emnlp-main.8.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
no pair
null
2024.eacl-long.17
val_fig_0130
We note that while DISTANCE and LABEL show similar performance, LabDist in general is the most performant and consistent classifier.
Supported
Figure 2: First row: performance for all LaGoNN configurations and balance regimes for the Hate Speech Offensive dataset. Second row: LaGoNN performance for one to five neighbors for all balance regimes on a collapsed version of the LIAR dataset. We use the LaGoNN lite fine-tuning strategy (see Section 5.1 ).
figure
figures/dev/val_fig_0130.png
We perform extensive experiments over the different LaGoNN configurations.
nlp
no
Legend Swap
papers/dev/nlp_2024.eacl-long.17.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0233
null
2024.eacl-long.17
val_fig_0131
We note that while DISTANCE and LABEL show similar performance, LabDist in general is the most performant and consistent classifier.
Refuted
Figure 2: First row: performance for all LaGoNN configurations and balance regimes for the Hate Speech Offensive dataset. Second row: LaGoNN performance for one to five neighbors for all balance regimes on a collapsed version of the LIAR dataset. We use the LaGoNN lite fine-tuning strategy (see Section 5.1 ).
figure
figures/dev/val_fig_0131.png
We perform extensive experiments over the different LaGoNN configurations.
nlp
no
Legend Swap
papers/dev/nlp_2024.eacl-long.17.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0233
null
2024.eacl-long.16
val_fig_0137
While the performance of the Decider model almost overlaps with the Q Drawer asking questions in random dialogue turns, using model uncertainty significantly improves accuracy, even when a few questions are asked.
Supported
Figure 4: Effect of the average number of questions per dialogue (considering all dialogues in the test set) on the size accuracy. We compare the uncertainty-guided Q Drawer with a version that asks questions in random turns and one that asks questions in the turns selected by an external Decider model.
figure
figures/dev/val_fig_0137.png
We compare the uncertainty-driven Q Drawer against (1) a version that asks questions in random dialogue turns with different average number of questions per dialogue and (2) the Q Drawer paired with the Decider module as described in Section 6.2 . For the latter, the maximum possible number of questions per dialogue (a...
nlp
yes
Legend Swap
papers/dev/nlp_2024.eacl-long.16.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0234
null
2024.eacl-long.16
val_fig_0138
While the performance of the Decider model almost overlaps with the Q Drawer asking questions in random dialogue turns, using model uncertainty significantly improves accuracy, even when a few questions are asked.
Refuted
Figure 4: Effect of the average number of questions per dialogue (considering all dialogues in the test set) on the size accuracy. We compare the uncertainty-guided Q Drawer with a version that asks questions in random turns and one that asks questions in the turns selected by an external Decider model.
figure
figures/dev/val_fig_0138.png
We compare the uncertainty-driven Q Drawer against (1) a version that asks questions in random dialogue turns with different average number of questions per dialogue and (2) the Q Drawer paired with the Decider module as described in Section 6.2 . For the latter, the maximum possible number of questions per dialogue (a...
nlp
yes
Legend Swap
papers/dev/nlp_2024.eacl-long.16.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0234
null
2024.eacl-short.7
val_fig_0139
While Flan-T5 introduces more errors than GPT-3.5 overall, the trends are analogous.
Supported
(a) Prevalence of factual errors in each of domains; (b) Distribution of error categories across domains; Distribution of errors and error categories across domains
figure
figures/dev/val_fig_0139.png
Figure 1(a) shows the average proportion of sentences marked as inconsistent (with respect to the corresponding input) in summaries generated by GPT-3.5 Brown et al. ( 2020 ) and Flan-T5 XL Chung et al. ( 2022 ) for three domains: News, medical, and legal. Perhaps surprisingly, we observe a higher prevalence of inconsi...
nlp
no
Legend Swap
papers/dev/nlp_2024.eacl-short.7.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0235
null
2024.eacl-short.7
val_fig_0140
The news domain has a higher frequency of such cases.
Supported
(a) Prevalence of factual errors in each of domains; (b) Distribution of error categories across domains; Distribution of errors and error categories across domains
figure
figures/dev/val_fig_0140.png
We next characterize the distribution of error categories in factually inconsistent summaries generated by models across the domains considered. Figure 1(b) reports the distribution of error categories for both models. 3 3 3 Model-specific distributions are in Appendix A.6 There are more extrinsic errors introduced in...
nlp
yes
Legend Swap
papers/dev/nlp_2024.eacl-short.7.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0236
null
2024.acl-short.9
val_fig_0141
The first source “segment” \boldsymbol{x} (leftmost panel) has relatively high probability p(\boldsymbol{x}) , and no tradeoff is observed in this case.
Supported
(a) Simulation with one-dimensional \boldsymbol{x} and \boldsymbol{y} . The three panels correspond to three different source “segments” \boldsymbol{x} of decreasing probability p(\boldsymbol{x}) , and the points in each panel are candidate translations \boldsymbol{y} . Brighter colors indicate translations with larger...
figure
figures/dev/val_fig_0141.png
We initially assume that both \boldsymbol{x} and \boldsymbol{y} are one-dimensional vectors. Figure 2(a) shows the relationship between p(\boldsymbol{y}) and p(\boldsymbol{x|y}) for 3 samples \boldsymbol{x} . Each point in each panel corresponds to a candidate translation \boldsymbol{y} , and candidates with highest p(...
nlp
no
Graph Swap
papers/dev/nlp_2024.acl-short.9.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0237
null
2024.acl-short.9
val_fig_0142
The tradeoff emerges, however, and becomes increasingly strong as \boldsymbol{x} moves away from the mode of the distribution p(\boldsymbol{x}) .
Supported
(a) Simulation with one-dimensional \boldsymbol{x} and \boldsymbol{y} . The three panels correspond to three different source “segments” \boldsymbol{x} of decreasing probability p(\boldsymbol{x}) , and the points in each panel are candidate translations \boldsymbol{y} . Brighter colors indicate translations with larger...
figure
figures/dev/val_fig_0142.png
We initially assume that both \boldsymbol{x} and \boldsymbol{y} are one-dimensional vectors. Figure 2(a) shows the relationship between p(\boldsymbol{y}) and p(\boldsymbol{x|y}) for 3 samples \boldsymbol{x} . Each point in each panel corresponds to a candidate translation \boldsymbol{y} , and candidates with highest p(...
nlp
yes
Supported_claim_only
papers/dev/nlp_2024.acl-short.9.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
no pair
null
2024.bionlp-1.6
val_fig_0144
Larger distances appear less frequently and hence the range of distance is smaller in the first bars, while the last bars have larger ranges.
Supported
Figure 3: Barplot where each bar represents a range of distances between events in the gold pairs. The y axis shows the F1 score of the predictions for the pairs in each bar.
figure
figures/dev/val_fig_0144.png
Since clinical notes contain long texts (see Section 4.1 ), we perform an analysis based on the distance of event pairs for the best-performing LLM (Llama CoT). First, we calculate the distance in terms of characters between the events for all the gold pairs. Then, we sort the pairs by their distances and split them to...
nlp
no
Supported_claim_only
papers/dev/nlp_2024.bionlp-1.6.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
no pair
null
2025.naacl-short.1
val_fig_0145
As data volume increases, performance improves, demonstrating the model’s scalability and potential for further enhancement.
Supported
Figure 3: Left: Best Move Accuracy of ChessLLM training with short round data. The accuracy of the best move increases with the number of training tokens. Right: Legal Move Accuracy of ChessLLM training with short round data. The accuracy of the legal move increases with the number of training tokens.
figure
figures/dev/val_fig_0145.png
Fig. 3 Left shows that with only 0.5B tokens, our model achieves a legal move accuracy of 99.11% on in-distribution boards, indicating its impressive preliminary chess playing ability.
nlp
no
Graph Flip
papers/dev/nlp_2025.naacl-short.1.json
CC BY-NC-SA 4.0
http://creativecommons.org/licenses/by-nc-sa/4.0/
0238
null
2025.naacl-short.1
val_fig_0146
With 2.75B tokens, the model achieved a Best Move accuracy of 40.11%.
Supported
Figure 3: Left: Best Move Accuracy of ChessLLM training with short round data. The accuracy of the best move increases with the number of training tokens. Right: Legal Move Accuracy of ChessLLM training with short round data. The accuracy of the legal move increases with the number of training tokens.
figure
figures/dev/val_fig_0146.png
Fig. 3 Left shows that with only 0.5B tokens, our model achieves a legal move accuracy of 99.11% on in-distribution boards, indicating its impressive preliminary chess playing ability. As data volume increases, performance improves, demonstrating the model’s scalability and potential for further enhancement. The high a...
nlp
no
Graph Flip
papers/dev/nlp_2025.naacl-short.1.json
CC BY-NC-SA 4.0
http://creativecommons.org/licenses/by-nc-sa/4.0/
0239
null
2025.naacl-short.1
val_fig_0147
Figure 4 shows that within the evaluation set, an increase in Best Move accuracy correlates with Elo rating gains.
Supported
Figure 4: Left: Correlation between ChessLLM’s best move accuracy and its Elo rating. Right: Correlation between ChessLLM’s legal move accuracy and its Elo rating.
figure
figures/dev/val_fig_0147.png
nlp
no
Graph Flip
papers/dev/nlp_2025.naacl-short.1.json
CC BY-NC-SA 4.0
http://creativecommons.org/licenses/by-nc-sa/4.0/
0240
null
2025.naacl-short.1
val_fig_0148
Figure 4 shows that within the evaluation set, an increase in Best Move accuracy correlates with Elo rating gains.
Refuted
Figure 4: Left: Correlation between ChessLLM’s best move accuracy and its Elo rating. Right: Correlation between ChessLLM’s legal move accuracy and its Elo rating.
figure
figures/dev/val_fig_0148.png
nlp
no
Graph Flip
papers/dev/nlp_2025.naacl-short.1.json
CC BY-NC-SA 4.0
http://creativecommons.org/licenses/by-nc-sa/4.0/
0240
null
2025.naacl-long.15
val_fig_0154
The high bias in the SciDocsRR task is because Mistral and Gemma mostly failed to perform this task following the “Bullet” and “Special character” list formats while excelling in solving it following the other formats.
Supported
(a); (b); Average EstTrueF1 (SemEval2017) and EstTrueMAP (SciDocsRR) ( Section 3.1 ) across models (left) and benchmarks (right) showing performance difference of LLMs across 4 widely used list formats.
figure
figures/dev/val_fig_0154.png
The performance bias is regardless of the task as plotted in Fig. 4 -right, with the highest BiasF_{o} value of 67.07\%^{2} on the order list generation task SciDocsRR, and significantly lower ( 27.58\%^{2} ) on SemEval2017 task.
nlp
no
Legend Swap
papers/dev/nlp_2025.naacl-long.15.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0241
null
2025.naacl-long.15
val_fig_0155
The high bias in the SciDocsRR task is because Mistral and Gemma mostly failed to perform this task following the “Bullet” and “Special character” list formats while excelling in solving it following the other formats.
Refuted
(a); (b); Average EstTrueF1 (SemEval2017) and EstTrueMAP (SciDocsRR) ( Section 3.1 ) across models (left) and benchmarks (right) showing performance difference of LLMs across 4 widely used list formats.
figure
figures/dev/val_fig_0155.png
The performance bias is regardless of the task as plotted in Fig. 4 -right, with the highest BiasF_{o} value of 67.07\%^{2} on the order list generation task SciDocsRR, and significantly lower ( 27.58\%^{2} ) on SemEval2017 task.
nlp
no
Legend Swap
papers/dev/nlp_2025.naacl-long.15.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0241
null
2025.naacl-long.15
val_fig_0156
Surprisingly, the Medium task displays the least bias, likely because models perform best in this task.
Supported
Figure 5: Average estimated true F1 scores ( Section 3.1 ) across models (left) and benchmarks (right) showing performance bias of LLMs across 2 widely used mapping formats.
figure
figures/dev/val_fig_0156.png
From Fig. 5 -right, extracting 4 categories in the Hard task shows the largest performance gap between mapping formats.
nlp
no
Graph Swap
papers/dev/nlp_2025.naacl-long.15.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0242
null