Columns (all string): paper_id, claim_id, claim, label, caption, evi_type, evi_path, context, domain, use_context, operation, paper_path, detail_others, license_name, license_url, claim_id_pair, evi_path_original
2024.eacl-long.20
val_tab_0174
In contrast, for the pretrained NLLB, there is no clear distinction between the multilingual systems and the rest.
Refuted
Table 4: Zero-shot formality control results. Best and second best results under the same data condition are marked.
table
tables_png/dev/val_tab_0174.png
In Table 4, controllers trained on multiple translation directions (multi) are compared to those trained on single directions (en \rightarrow es or de). On Transformer-base, multi consistently outperforms its single-direction counterparts, regardless of whether the controller is finetuning- or CG-based.
nlp
other sources
Change the cell values
papers/dev/nlp_2024.eacl-long.20.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0115
tables/dev/val_tab_0174.tex
2024.eacl-long.20
val_tab_0177
Second, pairwise comparisons with the baseline show both CG and finetuning are effective in formality control, where CG has a slightly higher win ratio than FT against the baseline.
Supported
Table 6: Human evaluation on Bengali, with quality on a 5-point scale (\uparrow) and formality on a 3-point scale (\uparrow: formal), with standard deviations. Last two columns show pairwise comparison of formality scores to baseline NLLB-200 given the same source sentences (winning: scoring more in the direction of the desired formality).
table
tables_png/dev/val_tab_0177.png
The results are in Table 6. First, adding attribute control does not appear to impact translation quality.
nlp
no
Change the cell values
papers/dev/nlp_2024.eacl-long.20.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0116
tables/dev/val_tab_0177.tex
2024.eacl-long.20
val_tab_0178
Third, the impact on formality scores is more prominent when steering towards informal translation.
Supported
Table 6: Human evaluation on Bengali, with quality on a 5-point scale (\uparrow) and formality on a 3-point scale (\uparrow: formal), with standard deviations. Last two columns show pairwise comparison of formality scores to baseline NLLB-200 given the same source sentences (winning: scoring more in the direction of the desired formality).
table
tables_png/dev/val_tab_0178.png
The results are in Table 6. First, adding attribute control does not appear to impact translation quality. Second, pairwise comparisons with the baseline show both CG and finetuning are effective in formality control, where CG has a slightly higher win ratio than FT against the baseline.
nlp
no
Change the cell values
papers/dev/nlp_2024.eacl-long.20.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0117
tables/dev/val_tab_0178.tex
2024.eacl-long.20
val_tab_0179
Third, the impact on formality scores is more prominent when steering towards informal translation.
Refuted
Table 6: Human evaluation on Bengali, with quality on a 5-point scale (\uparrow) and formality on a 3-point scale (\uparrow: formal), with standard deviations. Last two columns show pairwise comparison of formality scores to baseline NLLB-200 given the same source sentences (winning: scoring more in the direction of the desired formality).
table
tables_png/dev/val_tab_0179.png
The results are in Table 6. First, adding attribute control does not appear to impact translation quality. Second, pairwise comparisons with the baseline show both CG and finetuning are effective in formality control, where CG has a slightly higher win ratio than FT against the baseline.
nlp
no
Change the cell values
papers/dev/nlp_2024.eacl-long.20.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0117
tables/dev/val_tab_0179.tex
2024.eacl-short.4
val_tab_0181
When focusing on oscillatory hallucinations according to TNG in Table 4, the improvement is even more pronounced, with a reduction from 9.3% to 0.7% (-92%) for M2M-100, and from 5.9% to 1.5% (-75%) for SMaLL-100.
Supported
Table 4: Proportion of translations with oscillatory hallucinations according to TNG.
table
tables_png/dev/val_tab_0181.png
The proportion of translations with chrF2 below 10 is shown in Table 3. We observe large reductions in the number of defect translations, with a reduction from 7.3% to 1.2% (-83%) for M2M-100, and from 5.6% to 1.8% (-67%) for SMaLL-100.
nlp
no
Swap rows or columns
papers/dev/nlp_2024.eacl-short.4.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0118
tables/dev/val_tab_0181.tex
2023.ijcnlp-main.35
val_tab_0182
We find that all QA models suffer a large performance drop (\sim 20% in EM) in MisinfoQA-noisy compared to MisinfoQA-clean, showing that the models are largely distracted by the fake contexts rather than by the presence of additional contexts.
Supported
Table 4: QA performance under the reading comprehension settings with clean and noisy contexts.
table
tables_png/dev/val_tab_0182.png
Table 4 reports the EM and F1 for both human and different QA models.
nlp
no
Swap rows or columns
papers/dev/nlp_2023.ijcnlp-main.35.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0119
tables/dev/val_tab_0182.tex
2023.ijcnlp-main.35
val_tab_0183
We find that all QA models suffer a large performance drop (\sim 20% in EM) in MisinfoQA-noisy compared to MisinfoQA-clean, showing that the models are largely distracted by the fake contexts rather than by the presence of additional contexts.
Refuted
Table 4: QA performance under the reading comprehension settings with clean and noisy contexts.
table
tables_png/dev/val_tab_0183.png
Table 4 reports the EM and F1 for both human and different QA models.
nlp
no
Swap rows or columns
papers/dev/nlp_2023.ijcnlp-main.35.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0119
tables/dev/val_tab_0183.tex
2023.ijcnlp-main.35
val_tab_0184
Humans obtained an EM of 69.13 in MisinfoQA-noisy, which, though higher than most QA models’ performance, also shows a significant drop when compared to the MisinfoQA-clean setting (86.57 EM).
Supported
Table 4: QA performance under the reading comprehension settings with clean and noisy contexts.
table
tables_png/dev/val_tab_0184.png
Table 4 reports the EM and F1 for both human and different QA models. We find that all QA models suffer a large performance drop (\sim 20% in EM) in MisinfoQA-noisy compared to MisinfoQA-clean, showing that the models are largely distracted by the fake contexts rather than by the presence of additional contexts.
nlp
no
Change the cell values
papers/dev/nlp_2023.ijcnlp-main.35.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0120
tables/dev/val_tab_0184.tex
2023.starsem-1.1
val_tab_0185
Also here we can see that our proposed model outperforms the baseline models, showing a BLEU-4 score of 5.76 on the test set.
Supported
Table 4: Back translation results obtained from the generative models when using manual features and facial landmarks and AUs. Our proposed model has the highest scores in all metrics compared to the models using only gloss or text.
table
tables_png/dev/val_tab_0185.png
Table 4 presents the results of including facial landmarks as well as facial AUs with body and hands skeleton joints as input.
nlp
no
Change the cell values
papers/dev/nlp_2023.starsem-1.1.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0121
tables/dev/val_tab_0185.tex
2023.ijcnlp-main.36
val_tab_0188
For gender+ethnicity, QAGNN has an equivalent amount or more than BioLinkBert, though for gender+sexual orientation, BioLinkBert has more than double the amount for homosexuals, with a massive percentage of 23.
Supported
Table 2: Percentage of questions with changed answers as compared to a question with no demographic information about the patient. M=male; F=female; W=White; B=Black; A-A=African-American; H=Hispanic; As=Asian; SOr=sexual orientation; Random=Random change as described in Section 3.5.
table
tables_png/dev/val_tab_0188.png
The first column of Table 2, “Random”, shows the result of our random change (Sec. 3.5). While the other values in the table are larger, and while the words “patient” and “person” may have different connotations for each model based on its training data, this suggests that, to some extent, random noise plays a role in the amount of change each model exhibits. Notably, for gender, ethnicity, and sexual orientation, both models change around the same number of answers, except that BioLinkBert has a much higher number for Asian. Additionally, both models have almost double the amount of changed answers for homosexual than bisexual or heterosexual.
nlp
no
Swap rows or columns
papers/dev/nlp_2023.ijcnlp-main.36.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0122
tables/dev/val_tab_0188.tex
2023.ijcnlp-main.36
val_tab_0189
For gender+ethnicity, QAGNN has an equivalent amount or more than BioLinkBert, though for gender+sexual orientation, BioLinkBert has more than double the amount for homosexuals, with a massive percentage of 23.
Refuted
Table 2: Percentage of questions with changed answers as compared to a question with no demographic information about the patient. M=male; F=female; W=White; B=Black; A-A=African-American; H=Hispanic; As=Asian; SOr=sexual orientation; Random=Random change as described in Section 3.5.
table
tables_png/dev/val_tab_0189.png
The first column of Table 2, “Random”, shows the result of our random change (Sec. 3.5). While the other values in the table are larger, and while the words “patient” and “person” may have different connotations for each model based on its training data, this suggests that, to some extent, random noise plays a role in the amount of change each model exhibits. Notably, for gender, ethnicity, and sexual orientation, both models change around the same number of answers, except that BioLinkBert has a much higher number for Asian. Additionally, both models have almost double the amount of changed answers for homosexual than bisexual or heterosexual.
nlp
no
Swap rows or columns
papers/dev/nlp_2023.ijcnlp-main.36.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0122
tables/dev/val_tab_0189.tex
2023.ijcnlp-main.36
val_tab_0191
While the reported accuracy on the original test dataset is 38% for QAGNN and 40% for BioLinkBert, the accuracy on our 100 randomly selected demographic-independent questions used to construct the vignettes is 40% for QAGNN and 39% for BioLinkBert.
Supported
Table 4: Accuracy (in percentages) of the two models on our demographically enhanced datasets. M=male; F=female; W=White; B=Black; A-A=African-American; H=Hispanic; As=Asian; SOr=sexual orientation; O*=original test dataset; O=the original, unmodified 100 vignettes; D=No demographic information; Gen=Gender; 1=QAGNN; 2=BioLinkBERT.
table
tables_png/dev/val_tab_0191.png
nlp
no
Change the cell values
papers/dev/nlp_2023.ijcnlp-main.36.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0123
tables/dev/val_tab_0191.tex
2023.ijcnlp-main.36
val_tab_0193
From Table 8 it is clear that BioLinkBert significantly outperforms its generic LM variation.
Supported
Table 8: Accuracy (in percentages) of the biomedical and generic models on our demographically enhanced datasets. M=male; F=female; W=White; B=Black; A-A=African-American; H=Hispanic; As=Asian; SOr=sexual orientation; O*=original test dataset; O=the original, unmodified 100 questions; D=No demographic information; 1=Generic; 2=Biomedical.
table
tables_png/dev/val_tab_0193.png
Similar to our analysis between QAGNN and BioLinkBert above, our analysis between the biomedical and generic models can be split into the amount of answers and accuracy that changes when the dimensions change. From Table 7 it is visible that the generic transformer has more than double the amount of answers change for each gender. It also has an equivalent amount or more for almost any ethnicity, except for Asians. Notably, for sexual orientation, the generic transformer has almost double the amount of answers change for bisexuals, while the biomedical transformer has more for homosexuals. The generic transformer has significantly larger values than the biomedical transformer in any gender+ethnicity combination, while for gender+sexual orientation, the biomedical system has significantly larger values for homosexuals.
nlp
no
Swap rows or columns
papers/dev/nlp_2023.ijcnlp-main.36.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0124
tables/dev/val_tab_0193.tex
2023.ijcnlp-main.34
val_tab_0194
Compared with the in-domain performance (training and testing on the same dataset), the best zero-shot generalization performance shows a large drop of 20.80% on average.
Supported
Table 2: Macro-F1 of 3-class fact verification on the evaluation set for all datasets in a zero-shot generalization setup. Rows correspond to the training dataset and columns to the evaluated dataset. The row SELF corresponds to the in-domain performance (training and testing on the same target dataset).
table
tables_png/dev/val_tab_0194.png
Table 2 shows the zero-shot generalization results in macro-averaged F1 for the 3-class fact verification task on all the datasets that have supports/refutes/NEI labels, where we partition by dataset group: Group I, top (datasets with artificial claims); Group II, bottom (datasets with natural claims). In general, the RoBERTa model generalizes poorly in this zero-shot setup.
nlp
no
Change the cell values
papers/dev/nlp_2023.ijcnlp-main.34.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0125
tables/dev/val_tab_0194.tex
2023.ijcnlp-main.34
val_tab_0195
Compared with the in-domain performance (training and testing on the same dataset), the best zero-shot generalization performance shows a large drop of 20.80% on average.
Refuted
Table 2: Macro-F1 of 3-class fact verification on the evaluation set for all datasets in a zero-shot generalization setup. Rows correspond to the training dataset and columns to the evaluated dataset. The row SELF corresponds to the in-domain performance (training and testing on the same target dataset).
table
tables_png/dev/val_tab_0195.png
Table 2 shows the zero-shot generalization results in macro-averaged F1 for the 3-class fact verification task on all the datasets that have supports/refutes/NEI labels, where we partition by dataset group: Group I, top (datasets with artificial claims); Group II, bottom (datasets with natural claims). In general, the RoBERTa model generalizes poorly in this zero-shot setup.
nlp
no
Change the cell values
papers/dev/nlp_2023.ijcnlp-main.34.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0125
tables/dev/val_tab_0195.tex
2023.ijcnlp-main.34
val_tab_0196
The bottom left of Table 2 shows that the model trained on natural claims generalizes badly to datasets with artificial claims, with an average F1 drop of 72% relative to the in-domain performance on the three artificial datasets.
Supported
Table 2: Macro-F1 of 3-class fact verification on the evaluation set for all datasets in a zero-shot generalization setup. Rows correspond to the training dataset and columns to the evaluated dataset. The row SELF corresponds to the in-domain performance (training and testing on the same target dataset).
table
tables_png/dev/val_tab_0196.png
nlp
no
Change the cell values
papers/dev/nlp_2023.ijcnlp-main.34.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0126
tables/dev/val_tab_0196.tex
2023.ijcnlp-main.34
val_tab_0197
The bottom left of Table 2 shows that the model trained on natural claims generalizes badly to datasets with artificial claims, with an average F1 drop of 72% relative to the in-domain performance on the three artificial datasets.
Refuted
Table 2: Macro-F1 of 3-class fact verification on the evaluation set for all datasets in a zero-shot generalization setup. Rows correspond to the training dataset and columns to the evaluated dataset. The row SELF corresponds to the in-domain performance (training and testing on the same target dataset).
table
tables_png/dev/val_tab_0197.png
nlp
no
Change the cell values
papers/dev/nlp_2023.ijcnlp-main.34.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0126
tables/dev/val_tab_0197.tex
2023.ijcnlp-main.34
val_tab_0199
The in-domain results are in line with the empirical observation of Jiang et al. (2020) that it is often ambiguous to differentiate between refutes and NEI claims even for trained human annotators.
Supported
Table 5: Class-wise F1 of 3-class fact verification for the zero-shot generalization setup (left) and the in-domain training setup (right). S: supports; R: refutes; N: NEI.
table
tables_png/dev/val_tab_0199.png
Table 5 shows the breakdown of the class-wise F1 score. For each dataset, we show the average class-wise F1 when training the model on other datasets (zero-shot) and the class-wise F1 for training on the same dataset (in-domain). The results show that the refutes class has the worst prediction score (in bold) for almost all datasets, in both the zero-shot and the in-domain setting.
nlp
other sources
Swap rows or columns
papers/dev/nlp_2023.ijcnlp-main.34.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0127
tables/dev/val_tab_0199.tex
2023.ijcnlp-main.34
val_tab_0200
The in-domain results are in line with the empirical observation of Jiang et al. (2020) that it is often ambiguous to differentiate between refutes and NEI claims even for trained human annotators.
Refuted
Table 5: Class-wise F1 of 3-class fact verification for the zero-shot generalization setup (left) and the in-domain training setup (right). S: supports; R: refutes; N: NEI.
table
tables_png/dev/val_tab_0200.png
Table 5 shows the breakdown of the class-wise F1 score. For each dataset, we show the average class-wise F1 when training the model on other datasets (zero-shot) and the class-wise F1 for training on the same dataset (in-domain). The results show that the refutes class has the worst prediction score (in bold) for almost all datasets, in both the zero-shot and the in-domain setting.
nlp
other sources
Swap rows or columns
papers/dev/nlp_2023.ijcnlp-main.34.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0127
tables/dev/val_tab_0200.tex
2023.ijcnlp-main.34
val_tab_0201
From Table 2 , we find that fact verification in a dataset with document-level evidence is more difficult than in the same dataset with sentence-level evidence (an average of 13.29% drop of in-domain F1).
Supported
Table 2: Macro-F1 of 3-class fact verification on the evaluation set for all datasets in a zero-shot generalization setup. Rows correspond to the training dataset and columns to the evaluated dataset. The row SELF corresponds to the in-domain performance (training and testing on the same target dataset).
table
tables_png/dev/val_tab_0201.png
nlp
no
Change the cell values
papers/dev/nlp_2023.ijcnlp-main.34.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0128
tables/dev/val_tab_0201.tex
2023.starsem-1.9
val_tab_0202
However, the use of co-reference resolution will significantly increase the processing time, as shown in Table 5.
Supported
Table 5: Average processing time (in seconds) per instance in QAGS-CNN/DM. SRLScore uses ROUGE similarity. BARTScore is run with a batch size of 4.
table
tables_png/dev/val_tab_0202.png
Results in Table 1 reveal that the co-reference system is not always improving scores, particularly on the CNN/DailyMail-derived datasets.
nlp
no
Change the cell values
papers/dev/nlp_2023.starsem-1.9.json
CC BY-SA 4.0
http://creativecommons.org/licenses/by-sa/4.0/
0129
tables/dev/val_tab_0202.tex
2023.starsem-1.9
val_tab_0203
We further compare the runtime against BARTScore, which only requires a single forward-pass through a neural net and can be batched easily, resulting in a 10x speed-up.
Supported
Table 5: Average processing time (in seconds) per instance in QAGS-CNN/DM. SRLScore uses ROUGE similarity. BARTScore is run with a batch size of 4.
table
tables_png/dev/val_tab_0203.png
Results in Table 1 reveal that the co-reference system is not always improving scores, particularly on the CNN/DailyMail-derived datasets. However, the use of co-reference resolution will significantly increase the processing time, as shown in Table 5. This is expected, given that there are now more fact tuples due to the tuple expansion, since the presented scoring method requires the comparison of each fact tuple in the summary against all input text tuples.
nlp
no
Change the cell values
papers/dev/nlp_2023.starsem-1.9.json
CC BY-SA 4.0
http://creativecommons.org/licenses/by-sa/4.0/
0130
tables/dev/val_tab_0203.tex
2023.starsem-1.9
val_tab_0204
We further compare the runtime against BARTScore, which only requires a single forward-pass through a neural net and can be batched easily, resulting in a 10x speed-up.
Refuted
Table 5: Average processing time (in seconds) per instance in QAGS-CNN/DM. SRLScore uses ROUGE similarity. BARTScore is run with a batch size of 4.
table
tables_png/dev/val_tab_0204.png
Results in Table 1 reveal that the co-reference system is not always improving scores, particularly on the CNN/DailyMail-derived datasets. However, the use of co-reference resolution will significantly increase the processing time, as shown in Table 5. This is expected, given that there are now more fact tuples due to the tuple expansion, since the presented scoring method requires the comparison of each fact tuple in the summary against all input text tuples.
nlp
no
Change the cell values
papers/dev/nlp_2023.starsem-1.9.json
CC BY-SA 4.0
http://creativecommons.org/licenses/by-sa/4.0/
0130
tables/dev/val_tab_0204.tex
2023.ijcnlp-main.40
val_tab_0205
Although a few of the source-only models have satisfactory performance on the target, using our ConDA framework, we achieve performance gains over the source-only model in almost all tasks (rows with positive \Delta F1 values).
Supported
Table 2: Performance of ConDA on unlabeled target domains. Source-only model for each task S \rightarrow T refers to zero-shot evaluation of a model trained on S and evaluated on the test set of T. \Delta F1 is the increase (or decrease, in a few cases) in F1 scores of the ConDA model over the source-only model. Avg. scores in bold indicate where ConDA outperforms the source-only model.
table
tables_png/dev/val_tab_0205.png
To evaluate the performance of ConDA on each of the target domains, i.e. generators, we first look at how our model improves over a source-only model. Table 2 shows the results for this experiment, grouped by target domain. We report F1 scores for the ConDA framework and a source-only model, along with scores averaged over sources, for each target. The source-only model is a pre-trained RoBERTa (roberta-base) fine-tuned only on the source domain S. The source-only scores provide an estimate of how well a model trained just on the source transfers to the target domain.
nlp
no
Change the cell values
papers/dev/nlp_2023.ijcnlp-main.40.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0131
tables/dev/val_tab_0205.tex
2023.ijcnlp-main.40
val_tab_0206
Although a few of the source-only models have satisfactory performance on the target, using our ConDA framework, we achieve performance gains over the source-only model in almost all tasks (rows with positive \Delta F1 values).
Refuted
Table 2: Performance of ConDA on unlabeled target domains. Source-only model for each task S \rightarrow T refers to zero-shot evaluation of a model trained on S and evaluated on the test set of T. \Delta F1 is the increase (or decrease, in a few cases) in F1 scores of the ConDA model over the source-only model. Avg. scores in bold indicate where ConDA outperforms the source-only model.
table
tables_png/dev/val_tab_0206.png
To evaluate the performance of ConDA on each of the target domains, i.e. generators, we first look at how our model improves over a source-only model. Table 2 shows the results for this experiment, grouped by target domain. We report F1 scores for the ConDA framework and a source-only model, along with scores averaged over sources, for each target. The source-only model is a pre-trained RoBERTa (roberta-base) fine-tuned only on the source domain S. The source-only scores provide an estimate of how well a model trained just on the source transfers to the target domain.
nlp
no
Change the cell values
papers/dev/nlp_2023.ijcnlp-main.40.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0131
tables/dev/val_tab_0206.tex
2023.ijcnlp-main.40
val_tab_0208
For targets GROVER_mega and GPT-2_xl, ConDA performs within 3 and 6 F1 points of the fully-supervised model.
Supported
Table 3: Performance of our ConDA model on each of the target domains, with each of the other domains as source. Numbers in bold are the best performing ConDA models for each target domain, i.e. closest to fully supervised performance.
table
tables_png/dev/val_tab_0208.png
Next, we compare the performance of our model with a fully-supervised detector trained on the target domain in Table 3. For ConDA, we show the test performance for all target-source pairs. For the supervised model, we use a pre-trained RoBERTa (roberta-base) fine-tuned on the target data. We then evaluate the model on the test set of the same target domain; essentially, this is our upper bound performance. ConDA achieves test performance comparable to fully-supervised models. In particular, for targets CTRL and XLM, ConDA (with GROVER_mega as source) achieves upper bound performance.
nlp
other sources
Change the cell values
papers/dev/nlp_2023.ijcnlp-main.40.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0132
tables/dev/val_tab_0208.tex
2023.ijcnlp-main.40
val_tab_0209
For targets GROVER_mega and GPT-2_xl, ConDA performs within 3 and 6 F1 points of the fully-supervised model.
Refuted
Table 3: Performance of our ConDA model on each of the target domains, with each of the other domains as source. Numbers in bold are the best performing ConDA models for each target domain, i.e. closest to fully supervised performance.
table
tables_png/dev/val_tab_0209.png
Next, we compare the performance of our model with a fully-supervised detector trained on the target domain in Table 3. For ConDA, we show the test performance for all target-source pairs. For the supervised model, we use a pre-trained RoBERTa (roberta-base) fine-tuned on the target data. We then evaluate the model on the test set of the same target domain; essentially, this is our upper bound performance. ConDA achieves test performance comparable to fully-supervised models. In particular, for targets CTRL and XLM, ConDA (with GROVER_mega as source) achieves upper bound performance.
nlp
other sources
Change the cell values
papers/dev/nlp_2023.ijcnlp-main.40.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0132
tables/dev/val_tab_0209.tex
2023.ijcnlp-main.40
val_tab_0210
Surprisingly, the OpenAI GPT-2 Detector performs poorly on the GPT-2_xl data from TuringBench, although it can be considered supervised for this particular target.
Supported
Table 4: Performance of ConDA in comparison to unsupervised baselines, as AUROC. For ConDA, we report the average AUROC over all sources (for each target) and also the maximum AUROC (across all sources), along with the corresponding source in parentheses. Bold shows superior performance across each target.
table
tables_png/dev/val_tab_0210.png
We compare our ConDA framework with relevant unsupervised baselines and report results in Table 4. Out of the four GLTR measures (\log p(x), Rank, Log Rank, and Entropy), the first three fare quite well for detecting CTRL-generated text, but performance on other generators is quite poor. DetectGPT, which is the most recent method we evaluate, performs poorly on almost all generators, with some satisfactory performance on CTRL and XLM.
nlp
other sources
Change the cell values
papers/dev/nlp_2023.ijcnlp-main.40.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0133
tables/dev/val_tab_0210.tex
2023.ijcnlp-main.40
val_tab_0211
Surprisingly, the OpenAI GPT-2 Detector performs poorly on the GPT-2_xl data from TuringBench, although it can be considered supervised for this particular target.
Refuted
Table 4: Performance of ConDA in comparison to unsupervised baselines, as AUROC. For ConDA, we report the average AUROC over all sources (for each target) and also the maximum AUROC (across all sources), along with the corresponding source in parentheses. Bold shows superior performance across each target.
table
tables_png/dev/val_tab_0211.png
We compare our ConDA framework with relevant unsupervised baselines and report results in Table 4. Out of the four GLTR measures (\log p(x), Rank, Log Rank, and Entropy), the first three fare quite well for detecting CTRL-generated text, but performance on other generators is quite poor. DetectGPT, which is the most recent method we evaluate, performs poorly on almost all generators, with some satisfactory performance on CTRL and XLM.
nlp
other sources
Change the cell values
papers/dev/nlp_2023.ijcnlp-main.40.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0133
tables/dev/val_tab_0211.tex
2023.ijcnlp-main.40
val_tab_0212
Finally, we see ConDA outperforms all the baselines in terms of maximum AUROC, and all but one in terms of average AUROC.
Supported
Table 4: Performance of ConDA in comparison to unsupervised baselines, as AUROC. For ConDA, we report the average AUROC over all sources (for each target) and also the maximum AUROC (across all sources), along with the corresponding source in parentheses. Bold shows superior performance across each target.
table
tables_png/dev/val_tab_0212.png
We compare our ConDA framework with relevant unsupervised baselines and report results in Table 4. Out of the four GLTR measures (\log p(x), Rank, Log Rank, and Entropy), the first three fare quite well for detecting CTRL-generated text, but performance on other generators is quite poor. DetectGPT, which is the most recent method we evaluate, performs poorly on almost all generators, with some satisfactory performance on CTRL and XLM. Surprisingly, the OpenAI GPT-2 Detector performs poorly on the GPT-2_xl data from TuringBench, although it can be considered supervised for this particular target.
nlp
no
Swap rows or columns
papers/dev/nlp_2023.ijcnlp-main.40.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0134
tables/dev/val_tab_0212.tex
2023.ijcnlp-main.40
val_tab_0213
We see that the full model outperforms all the variants, implying that all three types of components are essential for detection performance in this problem setting.
Supported
Table 5: Comparison of different model variants; bold shows best performance. We randomly chose 3 target domains to show in this table due to space constraints.
table
tables_png/dev/val_tab_0213.png
We evaluate variants of the ConDA model by removing one component at a time and compare these in Table 5. ConDA \CEs removes the two cross-entropy losses, i.e. no supervision even for the source. ConDA \contrast removes the contrastive loss components for both source and target. ConDA \MMD removes the MMD loss between source and target. Hence the only component that makes use of the unlabeled target domain data is the target contrastive loss. Finally, ConDA is the full model.
nlp
yes
Swap rows or columns
papers/dev/nlp_2023.ijcnlp-main.40.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0135
tables/dev/val_tab_0213.tex
2023.ijcnlp-main.40
val_tab_0214
We see that the full model outperforms all the variants, implying that all three types of components are essential for detection performance in this problem setting.
Refuted
Table 5: Comparison of different model variants; bold shows best performance. We randomly chose 3 target domains to show in this table due to space constraints.
table
tables_png/dev/val_tab_0214.png
We evaluate variants of the ConDA model, by removing one component at a time and compare these in Table 5 . ConDA \CEs removes the two cross-entropy losses, i.e. no supervision even for the source. ConDA \contrast removes the contrastive loss components for both source and target. ConDA \MMD removes the MMD loss between source and target. Hence the only component that makes use of the unlabeled target domain data is the target contrastive loss. Finally, ConDA is the full model.
nlp
yes
Swap rows or columns
papers/dev/nlp_2023.ijcnlp-main.40.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0135
tables/dev/val_tab_0214.tex
2023.ijcnlp-main.40
val_tab_0215
Although we see satisfactory performance across most methods, our ConDA framework with FAIR_wmt19 and GPT2_xl as source achieves the best and second-best performance, respectively.
Supported
Table 6: Results on our ChatGPT News dataset using unsupervised baselines (upper row) and ConDA (lower row). Scores are AUROC. Bold shows best and underline shows second best performance.
table
tables_png/dev/val_tab_0215.png
Given recent concerns surrounding OpenAI’s ChatGPT and GPT-4 OpenAI ( 2023 ) , it is important to create detectors for text generated by these conversational language models. With the incredible fluency and writing quality these language models possess, not only can such text easily fool humans Else ( 2023 ) but it can also be extremely difficult for detectors to identify. Even OpenAI’s detector struggles to detect AI-generated text reliably (https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text). Hence in this case study, we are interested in evaluating our ConDA framework on ChatGPT-generated news articles, in an unsupervised manner. Since there is no existing dataset of ChatGPT-generated vs. human-written text or news, we create our own dataset as explained in Section 4.1 . We assign ChatGPT as the unlabeled target domain and assume that we have labeled data from the 6 other generators (Table 1 ). Therefore we emulate a real-world scenario where labeled data from older generators may be available, but it might be hard to find labeled samples for newer LLMs. We sample 4k articles from our ChatGPT News dataset and evaluate the same 3 unsupervised models as in Section 4.2 (upper row block in Table 6 ), and our ConDA framework over 6 source generators (lower row block in Table 6 ) on this data. For GLTR, we report the average over the 4 statistical measures.
nlp
other sources
Change the cell values
papers/dev/nlp_2023.ijcnlp-main.40.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0136
tables/dev/val_tab_0215.tex
2023.ijcnlp-main.40
val_tab_0216
Although we see satisfactory performance across most methods, our ConDA framework with FAIR_wmt19 and GPT2_xl as source achieves the best and second-best performance, respectively.
Refuted
Table 6: Results on our ChatGPT News dataset using unsupervised baselines (upper row) and ConDA (lower row). Scores are AUROC. Bold shows best and underline shows second best performance.
table
tables_png/dev/val_tab_0216.png
Given recent concerns surrounding OpenAI’s ChatGPT and GPT-4 OpenAI ( 2023 ) , it is important to create detectors for text generated by these conversational language models. With the incredible fluency and writing quality these language models possess, not only can such text easily fool humans Else ( 2023 ) but it can also be extremely difficult for detectors to identify. Even OpenAI’s detector struggles to detect AI-generated text reliably (https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text). Hence in this case study, we are interested in evaluating our ConDA framework on ChatGPT-generated news articles, in an unsupervised manner. Since there is no existing dataset of ChatGPT-generated vs. human-written text or news, we create our own dataset as explained in Section 4.1 . We assign ChatGPT as the unlabeled target domain and assume that we have labeled data from the 6 other generators (Table 1 ). Therefore we emulate a real-world scenario where labeled data from older generators may be available, but it might be hard to find labeled samples for newer LLMs. We sample 4k articles from our ChatGPT News dataset and evaluate the same 3 unsupervised models as in Section 4.2 (upper row block in Table 6 ), and our ConDA framework over 6 source generators (lower row block in Table 6 ) on this data. For GLTR, we report the average over the 4 statistical measures.
nlp
other sources
Change the cell values
papers/dev/nlp_2023.ijcnlp-main.40.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0136
tables/dev/val_tab_0216.tex
2024.eacl-long.10
val_tab_0217
In comparison, the LM predictions come close with especially impressive predictions for global salience.
Supported
Table 7: Pearson’s r between model prediction and human annotations (A1) of entity salience.
table
tables_png/dev/val_tab_0217.png
To first holistically evaluate the modelling of salience, we report pairwise Pearson’s correlation coefficients between each set of labels above and the annotations of human A1. In Table 7 , we report a “macro correlation”, namely the mean of correlation of salience scores in each procedure. (To avoid NaN due to a constant input array, a 0 is appended to each array as smoothing.) First, the correlation between the two annotators is high but imperfect, implying subjectivity in the annotation of entity salience.
nlp
yes
Swap rows or columns
papers/dev/nlp_2024.eacl-long.10.json
CC BY-NC-SA 4.0
http://creativecommons.org/licenses/by-nc-sa/4.0/
0137
tables/dev/val_tab_0217.tex
2024.eacl-long.10
val_tab_0218
In comparison, the LM predictions come close with especially impressive predictions for global salience.
Refuted
Table 7: Pearson’s r between model prediction and human annotations (A1) of entity salience.
table
tables_png/dev/val_tab_0218.png
To first holistically evaluate the modelling of salience, we report pairwise Pearson’s correlation coefficients between each set of labels above and the annotations of human A1. In Table 7 , we report a “macro correlation”, namely the mean of correlation of salience scores in each procedure. (To avoid NaN due to a constant input array, a 0 is appended to each array as smoothing.) First, the correlation between the two annotators is high but imperfect, implying subjectivity in the annotation of entity salience.
nlp
yes
Swap rows or columns
papers/dev/nlp_2024.eacl-long.10.json
CC BY-NC-SA 4.0
http://creativecommons.org/licenses/by-nc-sa/4.0/
0137
tables/dev/val_tab_0218.tex
2024.eacl-long.9
val_tab_0220
ReDial and INSPIRED contain 6,637 and 1,546 unique items in total ( Table 1 ) and 1,872 and 264 items in the test set, respectively.
Supported
Table 1: Statistics on ReDial and INSPIRED datasets, combined over train, dev and test sets.
table
tables_png/dev/val_tab_0220.png
We follow the common practices Yang et al. ( 2022 ); Wang et al. ( 2022c ) to evaluate PECRS on both recommendation performance and response generation quality. For recommendation subtask, we measure recall with Recall@K (R@K) metric, taking K\in\{1,10,50\} . In order to assess the recommendation coverage, we also report the number of different items predicted by the model over the test set, denoted as Unique .
nlp
no
Swap rows or columns
papers/dev/nlp_2024.eacl-long.9.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0138
tables/dev/val_tab_0220.tex
2024.eacl-long.9
val_tab_0221
ReDial and INSPIRED contain 6,637 and 1,546 unique items in total ( Table 1 ) and 1,872 and 264 items in the test set, respectively.
Refuted
Table 1: Statistics on ReDial and INSPIRED datasets, combined over train, dev and test sets.
table
tables_png/dev/val_tab_0221.png
We follow the common practices Yang et al. ( 2022 ); Wang et al. ( 2022c ) to evaluate PECRS on both recommendation performance and response generation quality. For recommendation subtask, we measure recall with Recall@K (R@K) metric, taking K\in\{1,10,50\} . In order to assess the recommendation coverage, we also report the number of different items predicted by the model over the test set, denoted as Unique .
nlp
no
Swap rows or columns
papers/dev/nlp_2024.eacl-long.9.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0138
tables/dev/val_tab_0221.tex
2024.eacl-long.9
val_tab_0224
Both PECRS-small and -medium surpass all baselines over Dist@3 and Dist@4.
Supported
Table 3: Results of conversation task compared with the state-of-the-art on ReDial.
table
tables_png/dev/val_tab_0224.png
Table 3 summarizes the results on conversation task, where PECRS achieves promising performance on both types of metrics.
nlp
no
Swap rows or columns
papers/dev/nlp_2024.eacl-long.9.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0139
tables/dev/val_tab_0224.tex
2024.eacl-long.9
val_tab_0225
Comparing PECRS-small and -medium shows that Dist@K improvements can be achieved by scaling up the backbone model.
Supported
Table 3: Results of conversation task compared with the state-of-the-art on ReDial.
table
tables_png/dev/val_tab_0225.png
Table 3 summarizes the results on conversation task, where PECRS achieves promising performance on both types of metrics. Both PECRS-small and -medium surpass all baselines over Dist@3 and Dist@4.
nlp
no
Swap rows or columns
papers/dev/nlp_2024.eacl-long.9.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0140
tables/dev/val_tab_0225.tex
2024.eacl-long.9
val_tab_0228
Moreover, PECRS without training for generation can achieve 11.907 on Dist@2 metric (see w/o Generation loss in Table 5 ), but merely 7.76 on RG-1 metric.
Supported
Table 5: Models comparison with different modules and optimization strategies on ReDial with PECRS-small.
table
tables_png/dev/val_tab_0228.png
We first study the effects of different LM decoding strategies on conversational performance over the Dist@K metric. Specifically, we analyze the greedy decoding, beam search, diverse beam search Vijayakumar et al. ( 2018 ) , top-k sampling Fan et al. ( 2018 ) and nucleus sampling Holtzman et al. ( 2020 ) strategies on PECRS-small. As reported in Table 8 , reference-based metrics (RG-K) show much less variance across decoding strategies compared to the reference-free metrics (Dist@K). Meanwhile, the correlation between reference-based and reference-free metrics is weak under different decoding strategies.
nlp
no
Change the cell values
papers/dev/nlp_2024.eacl-long.9.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0141
tables/dev/val_tab_0228.tex
2024.eacl-long.9
val_tab_0229
Moreover, PECRS without training for generation can achieve 11.907 on Dist@2 metric (see w/o Generation loss in Table 5 ), but merely 7.76 on RG-1 metric.
Refuted
Table 5: Models comparison with different modules and optimization strategies on ReDial with PECRS-small.
table
tables_png/dev/val_tab_0229.png
We first study the effects of different LM decoding strategies on conversational performance over the Dist@K metric. Specifically, we analyze the greedy decoding, beam search, diverse beam search Vijayakumar et al. ( 2018 ) , top-k sampling Fan et al. ( 2018 ) and nucleus sampling Holtzman et al. ( 2020 ) strategies on PECRS-small. As reported in Table 8 , reference-based metrics (RG-K) show much less variance across decoding strategies compared to the reference-free metrics (Dist@K). Meanwhile, the correlation between reference-based and reference-free metrics is weak under different decoding strategies.
nlp
no
Change the cell values
papers/dev/nlp_2024.eacl-long.9.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0141
tables/dev/val_tab_0229.tex
2024.eacl-short.10
val_tab_0230
On the Wiki-SQL dataset the base model is unable to infer the correct answer almost 80% of the time.
Supported
Table 1: The data statistics and experimental results (Exact Match) of various benchmarks and models. The best results are in bold . GPT-3.5 results are based on 5-shots. Sota is based on previously published results.
table
tables_png/dev/val_tab_0230.png
The results are summarised in Table 1 . Compared to the base model, Raven significantly improves the results on the PhraseBank dataset by an absolute 25.9%.
nlp
no
Change the cell values
papers/dev/nlp_2024.eacl-short.10.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0142
tables/dev/val_tab_0230.tex
2024.eacl-short.10
val_tab_0231
This figure is inverted for Raven which obtains a 4-fold improvement over the base model inferring the correct answer more than 85% of the time.
Supported
Table 1: The data statistics and experimental results (Exact Match) of various benchmarks and models. The best results are in bold . GPT-3.5 results are based on 5-shots. Sota is based on previously published results.
table
tables_png/dev/val_tab_0231.png
The results are summarised in Table 1 . Compared to the base model, Raven significantly improves the results on the PhraseBank dataset by an absolute 25.9%. On the Wiki-SQL dataset the base model is unable to infer the correct answer almost 80% of the time.
nlp
yes
Change the cell values
papers/dev/nlp_2024.eacl-short.10.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0143
tables/dev/val_tab_0231.tex
2024.eacl-long.8
val_tab_0232
Table 2 shows that comparative assessment yields highly impressive performance for long-spoken summarization, out-competing all other baselines.
Supported
Table 2: Spearman correlation coefficient for Podcast .
table
tables_png/dev/val_tab_0232.png
Podcast Assessment : When considering podcast summarization with long inputs of over 5k tokens on average, only Llama2 models (which have a limit of 4k tokens) were used (as FlanT5 has a limit of 1k tokens).
nlp
no
Swap rows or columns
papers/dev/nlp_2024.eacl-long.8.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0144
tables/dev/val_tab_0232.tex
2024.eacl-long.8
val_tab_0235
However, we observe considerably high bias, with some set-ups even selecting the first option 80% of the time.
Supported
Table 5: Positional bias P(A) for both prompt templates, for various systems in the comparative setup on SummEval.
table
tables_png/dev/val_tab_0235.png
We investigate whether the comparative prompts have any implicit positional bias, and whether systems prefer the first/second position. Table 5 shows the fraction of comparisons that selected the candidate in the first position for SummEval. Since all comparisons in both permutations are considered, this fraction should be 0.50 for an unbiased system.
nlp
yes
Change the cell values
papers/dev/nlp_2024.eacl-long.8.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0145
tables/dev/val_tab_0235.tex
2024.eacl-long.8
val_tab_0236
We observe accuracies between 60-80% across all tasks and observe that debiasing can substantially increase accuracy.
Supported
Table 7: Accuracy of the comparative systems, at a comparison level, for SummEval.
table
tables_png/dev/val_tab_0236.png
One can also measure the accuracy of the comparative system at a comparison level. Table 7 shows the pairwise comparison accuracy for SummEval, over all candidate pairs where the true score of the candidate response varies.
nlp
no
Change the cell values
papers/dev/nlp_2024.eacl-long.8.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0146
tables/dev/val_tab_0236.tex
2024.eacl-long.8
val_tab_0237
Table 8 illustrates the self-consistency measured by the accuracy when comparing pairs, and demonstrates that even when using few outputs, the model is very consistent to the final rankings that would be achieved by using many more examples.
Supported
Table 8: Accuracy when using fewer systems with respect to final rankings (using all 16 systems) and the ground truth labels. Results shown for Summeval COH using FlanT5-xl.
table
tables_png/dev/val_tab_0237.png
SummEval has 16 summaries per context which leads to 240 possible comparisons. If one were to instead randomly sample N outputs and consider all N\!\cdot\!(N\!-\!1) comparisons, how consistent would the rankings with the subset of systems be with respect to the final predicted rankings?
nlp
no
Change the cell values
papers/dev/nlp_2024.eacl-long.8.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0147
tables/dev/val_tab_0237.tex
2025.clpsych-1.9
val_tab_0238
The BERT multi-task classifier improves these numbers considerably (e.g., obtaining 59.69 and 59.60 micro F1 for CF and Skill, respectively), indicating the benefit of richer contextual representations.
Supported
Table 1: Three-fold cross-validation micro and macro F1 scores.
table
tables_png/dev/val_tab_0238.png
Table 1 reports the micro and macro F1 scores for various models on the CF, IC, and skill classification. We compare several baselines, including traditional TF-IDF and BERT multi-task classifiers, graph-based models without BERT, and our proposed CFiCS variants that integrate clinical BERT features. The TF-IDF multi-task Random Forest baseline achieves relatively modest performance, with micro F1 scores of 52.50, 74.59, and 53.02 for CF, IC, and Skill, respectively, and corresponding macro F1 scores of 20.43, 38.04, and 49.77.
nlp
no
Change the cell values
papers/dev/nlp_2025.clpsych-1.9.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0148
tables/dev/val_tab_0238.tex
2025.clpsych-1.8
val_tab_0239
As shown in Table 1 , all correlations between maximum sliding window PPL and TALD scores were statistically significant (p-value < 0.01) across all model and sliding window sizes.
Supported
Table 1: The AVH dataset Spearman’s \rho between the maximum sliding window PPL and TALD across model size. Bold indicates the highest \rho for a model.
table
tables_png/dev/val_tab_0239.png
nlp
no
Change the cell values
papers/dev/nlp_2025.clpsych-1.8.json
CC BY-NC-SA 4.0
http://creativecommons.org/licenses/by-nc-sa/4.0/
0149
tables/dev/val_tab_0239.tex
2025.clpsych-1.8
val_tab_0241
Interestingly, the 4-bit quantized LLaMA 405b model did not outperform the Pythia suite, attaining a lower Spearman \rho of 0.457 on the 64-token sliding window.
Supported
Table 1: The AVH dataset Spearman’s \rho between the maximum sliding window PPL and TALD across model size. Bold indicates the highest \rho for a model.
table
tables_png/dev/val_tab_0241.png
As shown in Table 1 , all correlations between maximum sliding window PPL and TALD scores were statistically significant (p-value < 0.01) across all model and sliding window sizes. The strongest correlations consistently occurred with a 64-token sliding window, with coefficients peaking at 0.486 for the 1.4b model, and remaining moderately correlated with TALD scores for all model variants.
nlp
other sources
Swap rows or columns
papers/dev/nlp_2025.clpsych-1.8.json
CC BY-NC-SA 4.0
http://creativecommons.org/licenses/by-nc-sa/4.0/
0150
tables/dev/val_tab_0241.tex
2025.clpsych-1.8
val_tab_0243
In contrast, there are more variable patterns with the clinical interview results shown in Table 2 . Smaller models (70m-410m) tended to achieve peak correlations at smaller window sizes, while larger models show more distributed peaks across different window sizes.
Supported
Table 2: The clinical interview dataset Spearman’s \rho between the maximum sliding window PPL and modified PANSS across model size. Bold indicates the highest \rho for a model.
table
tables_png/dev/val_tab_0243.png
As can be observed in Table 1 , which shows the correlations between maximum PPL for a transcript and the TALD on the AVH dataset, all models exhibit a consistent pattern where correlation coefficients generally increase with a sliding window of 8 and 16, peak at a sliding window of 64, and then decrease with a sliding window of 128. There is a similar trend in Table A.3 , but with more moderate increases and decreases.
nlp
no
Change the cell values
papers/dev/nlp_2025.clpsych-1.8.json
CC BY-NC-SA 4.0
http://creativecommons.org/licenses/by-nc-sa/4.0/
0151
tables/dev/val_tab_0243.tex
2025.clpsych-1.8
val_tab_0244
In contrast, there are more variable patterns with the clinical interview results shown in Table 2 . Smaller models (70m-410m) tended to achieve peak correlations at smaller window sizes, while larger models show more distributed peaks across different window sizes.
Refuted
Table 2: The clinical interview dataset Spearman’s \rho between the maximum sliding window PPL and modified PANSS across model size. Bold indicates the highest \rho for a model.
table
tables_png/dev/val_tab_0244.png
As can be observed in Table 1 , which shows the correlations between maximum PPL for a transcript and the TALD on the AVH dataset, all models exhibit a consistent pattern where correlation coefficients generally increase with a sliding window of 8 and 16, peak at a sliding window of 64, and then decrease with a sliding window of 128. There is a similar trend in Table A.3 , but with more moderate increases and decreases.
nlp
no
Change the cell values
papers/dev/nlp_2025.clpsych-1.8.json
CC BY-NC-SA 4.0
http://creativecommons.org/licenses/by-nc-sa/4.0/
0151
tables/dev/val_tab_0244.tex
2024.eacl-long.6
val_tab_0245
GPT-4+DIN-SQL obtains an EX score of 6.73% on the Archer test set, while it achieves 85.3% test-suite execution performance on the Spider test set Pourreza and Rafiei ( 2024 ) .
Supported
Table 2: Baseline performance on Archer. GPT-4+DIN-SQL was tested only on the English set due to cost and its English-specific design. We only report the fine-tuned model’s performance on the test set.
table
tables_png/dev/val_tab_0245.png
nlp
no
Change the cell values
papers/dev/nlp_2024.eacl-long.6.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0152
tables/dev/val_tab_0245.tex
2024.eacl-long.6
val_tab_0246
Among the three kinds of prompts, CT-3 achieves the highest EX scores on both English data (EX: 13.34%) and Chinese data (EX: 12.86%).
Supported
Table 2: Baseline performance on Archer. GPT-4+DIN-SQL was tested only on the English set due to cost and its English-specific design. We only report the fine-tuned model’s performance on the test set.
table
tables_png/dev/val_tab_0246.png
GPT-4+DIN-SQL obtains an EX score of 6.73% on the Archer test set, while it achieves 85.3% test-suite execution performance on the Spider test set Pourreza and Rafiei ( 2024 ) . To evaluate the overall difficulty of Archer, we test the zero-shot performance of GPT-3.5 with API Doc, CT-3, CT-3+COT prompts on the full Archer data.
nlp
no
Change the cell values
papers/dev/nlp_2024.eacl-long.6.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0153
tables/dev/val_tab_0246.tex
2024.eacl-long.6
val_tab_0247
Among the three kinds of prompts, CT-3 achieves the highest EX scores on both English data (EX: 13.34%) and Chinese data (EX: 12.86%).
Refuted
Table 2: Baseline performance on Archer. GPT-4+DIN-SQL was tested only on the English set due to cost and its English-specific design. We only report the fine-tuned model’s performance on the test set.
table
tables_png/dev/val_tab_0247.png
GPT-4+DIN-SQL obtains an EX score of 6.73% on the Archer test set, while it achieves 85.3% test-suite execution performance on the Spider test set Pourreza and Rafiei ( 2024 ) . To evaluate the overall difficulty of Archer, we test the zero-shot performance of GPT-3.5 with API Doc, CT-3, CT-3+COT prompts on the full Archer data.
nlp
no
Change the cell values
papers/dev/nlp_2024.eacl-long.6.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0153
tables/dev/val_tab_0247.tex
2024.eacl-long.6
val_tab_0248
Specifically, the T5-3B model trained on the augmented training set achieved an EX score of 4.81% on the English test (matching the performance of GPT-3.5+CT-3+COT) set and 1.92% on the Chinese test set (matching the performance of GPT-3.5+CT-3).
Supported
Table 2: Baseline performance on Archer. GPT-4+DIN-SQL was tested only on the English set due to cost and its English-specific design. We only report the fine-tuned model’s performance on the test set.
table
tables_png/dev/val_tab_0248.png
From Table 2, we observe that T5 models from base to 3B (XL) scale trained on the Archer training set achieve 0.00% EX scores. This outcome could be attributed to the small-scale nature of Archer combined with its high complexity. However, when the Archer training set was augmented with the Spider/CSpider training set, the VA scores of the T5 models exhibited a substantial improvement.
nlp
no
Change the cell values
papers/dev/nlp_2024.eacl-long.6.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0154
tables/dev/val_tab_0248.tex
2024.eacl-long.6
val_tab_0249
Specifically, the T5-3B model trained on the augmented training set achieved an EX score of 4.81% on the English test (matching the performance of GPT-3.5+CT-3+COT) set and 1.92% on the Chinese test set (matching the performance of GPT-3.5+CT-3).
Refuted
Table 2: Baseline performance on Archer. GPT-4+DIN-SQL was tested only on the English set due to cost and its English-specific design. We only report the fine-tuned model’s performance on the test set.
table
tables_png/dev/val_tab_0249.png
From Table 2, we observe that T5 models from base to 3B (XL) scale trained on the Archer training set achieve 0.00% EX scores. This outcome could be attributed to the small-scale nature of Archer combined with its high complexity. However, when the Archer training set was augmented with the Spider/CSpider training set, the VA scores of the T5 models exhibited a substantial improvement.
nlp
no
Change the cell values
papers/dev/nlp_2024.eacl-long.6.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0154
tables/dev/val_tab_0249.tex
2024.eacl-long.12
val_tab_0250
Development sets are 5k instances sampled from each set.
Supported
Table 1: Dataset statistics for synthetic data generated in this work. We omit the average length of answers for fact verification as it is a classification task. SQ=Single-Query. TQ=Two-Queries.
table
tables_png/dev/val_tab_0250.png
We synthesize approximately 1.5 million multi-hop questions and 1.9 million claims. We use nucleus sampling (Holtzman et al., 2020 ) with a top-p probability of 0.9 for decoding when generating the data.
nlp
no
Change the cell values
papers/dev/nlp_2024.eacl-long.12.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0155
tables/dev/val_tab_0250.tex
2025.naacl-short.11
val_tab_0252
STRUX outperforms strong baselines in accuracy and F-scores for stock investment decisions.
Supported
Table 1: Our STRUX system outperforms strong benchmarks in making stock investment decisions. We present macro-averaged precision, recall, F-scores, accuracy for the test set. LLMs evaluated are: Llama3-8b-Instruct and gpt-4o-mini-2024-07-18 .
table
tables_png/dev/val_tab_0252.png
System Comparisons. Table 1 shows the macro-averaged precision, recall, F-scores, and accuracy for the test set.
nlp
no
Swap rows or columns
papers/dev/nlp_2025.naacl-short.11.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0156
tables/dev/val_tab_0252.tex
2025.naacl-short.11
val_tab_0253
STRUX outperforms strong baselines in accuracy and F-scores for stock investment decisions.
Refuted
Table 1: Our STRUX system outperforms strong benchmarks in making stock investment decisions. We present macro-averaged precision, recall, F-scores, accuracy for the test set. LLMs evaluated are: Llama3-8b-Instruct and gpt-4o-mini-2024-07-18 .
table
tables_png/dev/val_tab_0253.png
System Comparisons. Table 1 shows the macro-averaged precision, recall, F-scores, and accuracy for the test set.
nlp
no
Swap rows or columns
papers/dev/nlp_2025.naacl-short.11.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0156
tables/dev/val_tab_0253.tex
2025.naacl-short.11
val_tab_0255
Statistics are presented in Table 3 . Each transcript is distilled into a table of about 40 facts, from which the model selects 9.
Supported
Table 3: Statistics of supporting facts.
table
tables_png/dev/val_tab_0255.png
Supporting Facts. We analyzed the supporting facts identified by the model in cases of correct decisions after reflections.
nlp
yes
Change the cell values
papers/dev/nlp_2025.naacl-short.11.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0157
tables/dev/val_tab_0255.tex
2025.naacl-long.13
val_tab_0256
The generated dataset consists of 1,802 ambiguous and unanswerable questions spanning various categories.
Supported
Table 3: Dataset statistics and human annotation accuracy on 20 samples per question type. "#Ex" column shows the number of examples generated for each category. "Acc" column shows average binary classification accuracy from human expert.
table
tables_png/dev/val_tab_0256.png
Table 3 shows the statistics of the dataset generated using the Spider dev set with Claude 3 sonnet. Note that the employed methodology can be seamlessly adapted to other text-to-SQL datasets like BIRD, WikiSQL, or any other synthetically generated answerable text-to-SQL corpora combined with any LLM (e.g., Llama3.1 or mixtral).
nlp
no
Change the cell values
papers/dev/nlp_2025.naacl-long.13.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0158
tables/dev/val_tab_0256.tex
2025.naacl-long.13
val_tab_0257
Additionally, we collected 1,034 answerable queries from the Spider dev dataset and augmented them with natural language explanations derived from their execution results.
Supported
Table 3: Dataset statistics and human annotation accuracy on 20 samples per question type. "#Ex" column shows the number of examples generated for each category. "Acc" column shows average binary classification accuracy from human expert.
table
tables_png/dev/val_tab_0257.png
Table 3 shows the statistics of the dataset generated using the Spider dev set with Claude 3 sonnet. Note that the employed methodology can be seamlessly adapted to other text-to-SQL datasets like BIRD, WikiSQL, or any other synthetically generated answerable text-to-SQL corpora combined with any LLM (e.g., Llama3.1 or mixtral). The generated dataset consists of 1,802 ambiguous and unanswerable questions spanning various categories.
nlp
no
Change the cell values
papers/dev/nlp_2025.naacl-long.13.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0159
tables/dev/val_tab_0257.tex
2025.naacl-long.13
val_tab_0258
Additionally, we collected 1,034 answerable queries from the Spider dev dataset and augmented them with natural language explanations derived from their execution results.
Refuted
Table 3: Dataset statistics and human annotation accuracy on 20 samples per question type. "#Ex" column shows the number of examples generated for each category. "Acc" column shows average binary classification accuracy from human expert.
table
tables_png/dev/val_tab_0258.png
Table 3 shows the statistics of the dataset generated using the Spider dev set with Claude 3 sonnet. Note that the employed methodology can be seamlessly adapted to other text-to-SQL datasets like BIRD, WikiSQL, or any other synthetically generated answerable text-to-SQL corpora combined with any LLM (e.g., Llama3.1 or mixtral). The generated dataset consists of 1,802 ambiguous and unanswerable questions spanning various categories.
nlp
no
Change the cell values
papers/dev/nlp_2025.naacl-long.13.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0159
tables/dev/val_tab_0258.tex
2025.naacl-long.13
val_tab_0260
Sonnet achieve the highest average accuracy of 71.95% and 72.15% on the ambiguous/unanswerable questions.
Supported
Table 5: Execution accuracy of SQLs predicted with DIN-SQL using different LLMs on various categories of ambiguous, unanswerable, and answerable questions. The "All" column shows the overall average accuracy across all categories, while the "Avg. Excluding Answerable" column shows the average accuracy excluding the answerable questions from the Spider dataset.
table
tables_png/dev/val_tab_0260.png
Table 5 shows our baseline method’s (DIN-SQL) performance on SQL prediction of various LLMs given the interaction between the user and the assistant. Overall, Mixtral-large-v2 and Claude 3.5
nlp
no
Change the cell values
papers/dev/nlp_2025.naacl-long.13.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0160
tables/dev/val_tab_0260.tex
2025.naacl-long.13
val_tab_0261
Sonnet achieve the highest average accuracy of 71.95% and 72.15% on the ambiguous/unanswerable questions.
Refuted
Table 5: Execution accuracy of SQLs predicted with DIN-SQL using different LLMs on various categories of ambiguous, unanswerable, and answerable questions. The "All" column shows the overall average accuracy across all categories, while the "Avg. Excluding Answerable" column shows the average accuracy excluding the answerable questions from the Spider dataset.
table
tables_png/dev/val_tab_0261.png
Table 5 shows our baseline method’s (DIN-SQL) performance on SQL prediction of various LLMs given the interaction between the user and the assistant. Overall, Mixtral-large-v2 and Claude 3.5
nlp
no
Change the cell values
papers/dev/nlp_2025.naacl-long.13.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0160
tables/dev/val_tab_0261.tex
2025.naacl-long.13
val_tab_0262
Claude 3.5 sonnet achieves the highest performance of 79.21% on the answerable questions (original Spider dev set).
Supported
Table 5: Execution accuracy of SQLs predicted with DIN-SQL using different LLMs on various categories of ambiguous, unanswerable, and answerable questions. The "All" column shows the overall average accuracy across all categories, while the "Avg. Excluding Answerable" column shows the average accuracy excluding the answerable questions from the Spider dataset.
table
tables_png/dev/val_tab_0262.png
Table 5 shows our baseline method’s (DIN-SQL) performance on SQL prediction of various LLMs given the interaction between the user and the assistant. Overall, Mixtral-large-v2 and Claude 3.5 Sonnet achieve the highest average accuracy of 71.95% and 72.15% on the ambiguous/unanswerable questions.
nlp
no
Change the cell values
papers/dev/nlp_2025.naacl-long.13.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0161
tables/dev/val_tab_0262.tex
2025.naacl-long.13
val_tab_0263
Claude 3.5 sonnet achieves the highest performance of 79.21% on the answerable questions (original Spider dev set).
Refuted
Table 5: Execution accuracy of SQLs predicted with DIN-SQL using different LLMs on various categories of ambiguous, unanswerable, and answerable questions. The "All" column shows the overall average accuracy across all categories, while the "Avg. Excluding Answerable" column shows the average accuracy excluding the answerable questions from the Spider dataset.
table
tables_png/dev/val_tab_0263.png
Table 5 shows our baseline method’s (DIN-SQL) performance on SQL prediction of various LLMs given the interaction between the user and the assistant. Overall, Mixtral-large-v2 and Claude 3.5 Sonnet achieve the highest average accuracy of 71.95% and 72.15% on the ambiguous/unanswerable questions.
nlp
no
Change the cell values
papers/dev/nlp_2025.naacl-long.13.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0161
tables/dev/val_tab_0263.tex
2025.naacl-long.13
val_tab_0264
The open-source model Llama-3.1 70B performs competitively on the answerable questions achieving 76.31% accuracy, only 2.9% lower than Claude 3.5 sonnet.
Supported
Table 5: Execution accuracy of SQLs predicted with DIN-SQL using different LLMs on various categories of ambiguous, unanswerable, and answerable questions. The "All" column shows the overall average accuracy across all categories, while the "Avg. Excluding Answerable" column shows the average accuracy excluding the answerable questions from the Spider dataset.
table
tables_png/dev/val_tab_0264.png
Table 5 shows our baseline method’s (DIN-SQL) performance on SQL prediction of various LLMs given the interaction between the user and the assistant. Overall, Mixtral-large-v2 and Claude 3.5 Sonnet achieve the highest average accuracy of 71.95% and 72.15% on the ambiguous/unanswerable questions. Claude 3.5 sonnet achieves the highest performance of 79.21% on the answerable questions (original Spider dev set).
nlp
no
Change the cell values
papers/dev/nlp_2025.naacl-long.13.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0162
tables/dev/val_tab_0264.tex
2025.naacl-long.13
val_tab_0265
The open-source model Llama-3.1 70B performs competitively on the answerable questions achieving 76.31% accuracy, only 2.9% lower than Claude 3.5 sonnet.
Refuted
Table 5: Execution accuracy of SQLs predicted with DIN-SQL using different LLMs on various categories of ambiguous, unanswerable, and answerable questions. The "All" column shows the overall average accuracy across all categories, while the "Avg. Excluding Answerable" column shows the average accuracy excluding the answerable questions from the Spider dataset.
table
tables_png/dev/val_tab_0265.png
Table 5 shows our baseline method’s (DIN-SQL) performance on SQL prediction of various LLMs given the interaction between the user and the assistant. Overall, Mixtral-large-v2 and Claude 3.5 Sonnet achieve the highest average accuracy of 71.95% and 72.15% on the ambiguous/unanswerable questions. Claude 3.5 sonnet achieves the highest performance of 79.21% on the answerable questions (original Spider dev set).
nlp
no
Change the cell values
papers/dev/nlp_2025.naacl-long.13.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0162
tables/dev/val_tab_0265.tex
2025.naacl-long.13
val_tab_0266
However, it performs only at 67.58% accuracy on ambiguous/unanswerable questions, lagging 3.7% behind Claude 3.5 sonnet.
Supported
Table 5: Execution accuracy of SQLs predicted with DIN-SQL using different LLMs on various categories of ambiguous, unanswerable, and answerable questions. The "All" column shows the overall average accuracy across all categories, while the "Avg. Excluding Answerable" column shows the average accuracy excluding the answerable questions from the Spider dataset.
table
tables_png/dev/val_tab_0266.png
Table 5 shows our baseline method’s (DIN-SQL) performance on SQL prediction of various LLMs given the interaction between the user and the assistant. Overall, Mixtral-large-v2 and Claude 3.5 Sonnet achieve the highest average accuracy of 71.95% and 72.15% on the ambiguous/unanswerable questions. Claude 3.5 sonnet achieves the highest performance of 79.21% on the answerable questions (original Spider dev set). The open-source model Llama-3.1 70B performs competitively on the answerable questions achieving 76.31% accuracy, only 2.9% lower than Claude 3.5 sonnet.
nlp
yes
Change the cell values
papers/dev/nlp_2025.naacl-long.13.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0163
tables/dev/val_tab_0266.tex
2025.naacl-long.13
val_tab_0267
However, it performs only at 67.58% accuracy on ambiguous/unanswerable questions, lagging 3.7% behind Claude 3.5 sonnet.
Refuted
Table 5: Execution accuracy of SQLs predicted with DIN-SQL using different LLMs on various categories of ambiguous, unanswerable, and answerable questions. The "All" column shows the overall average accuracy across all categories, while the "Avg. Excluding Answerable" column shows the average accuracy excluding the answerable questions from the Spider dataset.
table
tables_png/dev/val_tab_0267.png
Table 5 shows our baseline method’s (DIN-SQL) performance on SQL prediction of various LLMs given the interaction between the user and the assistant. Overall, Mixtral-large-v2 and Claude 3.5 Sonnet achieve the highest average accuracy of 71.95% and 72.15% on the ambiguous/unanswerable questions. Claude 3.5 sonnet achieves the highest performance of 79.21% on the answerable questions (original Spider dev set). The open-source model Llama-3.1 70B performs competitively on the answerable questions achieving 76.31% accuracy, only 2.9% lower than Claude 3.5 sonnet.
nlp
yes
Change the cell values
papers/dev/nlp_2025.naacl-long.13.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0163
tables/dev/val_tab_0267.tex
2025.naacl-long.13
val_tab_0270
For Factuality, the annotators demonstrate moderate agreement (Krippendorff’s Alpha = 0.68), suggesting that the conversations are consistently viewed as highly factual, which implies that the SQL queries in our dataset are of high quality.
Supported
Table 4: Summary of Human Annotation Scores for Naturalness, Factuality, and Helpfulness.
table
tables_png/dev/val_tab_0270.png
Table 4 shows the mean, standard deviation, and Krippendorff’s Alpha for inter-annotator agreement. The high mean scores close to 1 (1.15-1.5) and substantial agreement (Alpha 0.68-0.82) indicate high-quality, natural conversations with factual and helpful responses. For Naturalness, we observe that annotators have a substantial agreement (Krippendorff’s Alpha = 0.82), indicating that the conversations are generally perceived as natural and fluent.
nlp
no
Change the cell values
papers/dev/nlp_2025.naacl-long.13.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0164
tables/dev/val_tab_0270.tex
2025.naacl-long.11
val_tab_0272
Finally, on comparing average cosine vs. Vendi Score aggregation, we observe that often Vendi Score shows higher correlations.
Supported
Table 1 : Voice diversity. Average Spearman correlations ( \pm standard error) between number of distinct speakers and diversity scores induced by speech representations. Male (Female) refer to samples with male-only (female-only) voices. Best results are in bold, second best results are underlined.
table
tables_png/dev/val_tab_0272.png
From Table 1 we first notice that, across all columns, SpeechSim shows higher correlation scores than off-the-shelf embedding models and the models trained from scratch. This supports our decision to rely on SpeechSim as a basis for per-facet models. In turn, further voice specialization of SpeechSim (SpeechSim/Voice) results in similar scores.
nlp
no
Swap rows or columns
papers/dev/nlp_2025.naacl-long.11.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0165
tables/dev/val_tab_0272.tex
2025.naacl-long.11
val_tab_0273
Finally, on comparing average cosine vs. Vendi Score aggregation, we observe that often Vendi Score shows higher correlations.
Refuted
Table 1 : Voice diversity. Average Spearman correlations ( \pm standard error) between number of distinct speakers and diversity scores induced by speech representations. Male (Female) refer to samples with male-only (female-only) voices. Best results are in bold, second best results are underlined.
table
tables_png/dev/val_tab_0273.png
From Table 1 we first notice that, across all columns, SpeechSim shows higher correlation scores than off-the-shelf embedding models and the models trained from scratch. This supports our decision to rely on SpeechSim as a basis for per-facet models. In turn, further voice specialization of SpeechSim (SpeechSim/Voice) results in similar scores.
nlp
no
Swap rows or columns
papers/dev/nlp_2025.naacl-long.11.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0165
tables/dev/val_tab_0273.tex
2025.naacl-long.11
val_tab_0274
In this setup, Vendi Score underperforms w.r.t. the average cosine dissimilarity.
Supported
Table 2 : Gender diversity. Average Spearman correlations between proportion of female voices and diversity scores induced by speech representations.
table
tables_png/dev/val_tab_0274.png
In Table 2 we report correlation scores for gender diversity. Again, we split into two groups: male-voice-dominant and female-voice-dominant. From the results, we notice that the correlations shown by the non-specialized embeddings are extremely weak. However, both specialized models reach very high correlations (e.g., up to 0.946 for SpeechSim/Gender). All in all, the gender-specific projector gets the highest correlation score.
nlp
no
Swap rows or columns
papers/dev/nlp_2025.naacl-long.11.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0166
tables/dev/val_tab_0274.tex
2025.naacl-long.11
val_tab_0275
In this setup, Vendi Score underperforms w.r.t. the average cosine dissimilarity.
Refuted
Table 2 : Gender diversity. Average Spearman correlations between proportion of female voices and diversity scores induced by speech representations.
table
tables_png/dev/val_tab_0275.png
In Table 2 we report correlation scores for gender diversity. Again, we split into two groups: male-voice-dominant and female-voice-dominant. From the results, we notice that the correlations shown by the non-specialized embeddings are extremely weak. However, both specialized models reach very high correlations (e.g., up to 0.946 for SpeechSim/Gender). All in all, the gender-specific projector gets the highest correlation score.
nlp
no
Swap rows or columns
papers/dev/nlp_2025.naacl-long.11.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0166
tables/dev/val_tab_0275.tex
2025.naacl-long.11
val_tab_0276
However, SpeechSim finetuned on EmoV gets a Spearman correlation equal to 0.321.
Supported
Table 3 : Emotion diversity. Average Spearman correlations between the classes entropy in EmoV and Expresso and diversity scores induced by the speech representations. SpeechSim/Emotion-Expresso (resp. Speech/Emotion-EmoV) refers to SpeechSim emotion head trained on Expresso (resp. EmoV).
table
tables_png/dev/val_tab_0276.png
To make sure that our models can measure diversity of emotions beyond what is covered by the (limited) label sets of EmoV and Expresso, we run an experiment where the projection model is trained on one dataset and tested on another (the only common label is “neutral”). Table 3 shows that SpeechSim finetuned on Expresso does transfer to EmoV classes, as it obtains a high Spearman correlation.
nlp
no
Change the cell values
papers/dev/nlp_2025.naacl-long.11.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0167
tables/dev/val_tab_0276.tex
2025.naacl-long.11
val_tab_0277
However, SpeechSim finetuned on EmoV gets a Spearman correlation equal to 0.321.
Refuted
Table 3 : Emotion diversity. Average Spearman correlations between the classes entropy in EmoV and Expresso and diversity scores induced by the speech representations. SpeechSim/Emotion-Expresso (resp. Speech/Emotion-EmoV) refers to SpeechSim emotion head trained on Expresso (resp. EmoV).
table
tables_png/dev/val_tab_0277.png
To make sure that our models can measure diversity of emotions beyond what is covered by the (limited) label sets of EmoV and Expresso, we run an experiment where the projection model is trained on one dataset and tested on another (the only common label is “neutral”). Table 3 shows that SpeechSim finetuned on Expresso does transfer to EmoV classes, as it obtains a high Spearman correlation.
nlp
no
Change the cell values
papers/dev/nlp_2025.naacl-long.11.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0167
tables/dev/val_tab_0277.tex
2025.naacl-long.11
val_tab_0278
SpeechSim/Noise combined with Vendi Score obtains the second highest correlation, following HuBERT.
Supported
Table 5 : Background noise diversity. Average Spearman correlations between the number of different classes of noise and diversity scores induced by speech representations.
table
tables_png/dev/val_tab_0278.png
We report our results in Table 5 . We see that generally all representations have somewhat low levels of correlation.
nlp
no
Swap rows or columns
papers/dev/nlp_2025.naacl-long.11.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0168
tables/dev/val_tab_0278.tex
2025.naacl-long.11
val_tab_0279
SpeechSim/Noise combined with Vendi Score obtains the second highest correlation, following HuBERT.
Refuted
Table 5 : Background noise diversity. Average Spearman correlations between the number of different classes of noise and diversity scores induced by speech representations.
table
tables_png/dev/val_tab_0279.png
We report our results in Table 5 . We see that generally all representations have somewhat low levels of correlation.
nlp
no
Swap rows or columns
papers/dev/nlp_2025.naacl-long.11.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0168
tables/dev/val_tab_0279.tex
2025.naacl-long.10
val_tab_0284
Table 1 displays the number of documents for each target identity in both datasets, and Appendix A shows examples from the two datasets for each functionality.
Supported
Table 1: Number of examples for each target identity in HateCheck and GPT-HateCheck . We omit the functionalities without targeting identity, such as abusing objects or non-protected groups.
table
tables_png/dev/val_tab_0284.png
We use the HateCheck (Röttger et al., 2021 ) and GPT-HateCheck (Jin et al., 2024 ) datasets to conduct our analyses, as these datasets provide additional diagnostic insights. Both datasets cover the same seven target identities and 24 functionalities ( GPT-HateCheck omitted the five functionalities related to spelling variations in HateCheck ).
nlp
no
Change the cell values
papers/dev/nlp_2025.naacl-long.10.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0169
tables/dev/val_tab_0284.tex
2025.naacl-long.10
val_tab_0286
We can observe a clear push-back pattern: The higher the “coldness” or “incompetence” scores are for hateful stereotypes towards a target identity, the stronger the counter-stereotypes are in the opposite directions; consider, for illustration, the “warmth” dimension for gays and the “competence” dimension for women.
Supported
Table 5: The mean “warmth” and “competence” scores for hateful (H) and non-hateful (N/H) examples. We highlight the scores with the highest magnitude in bold .
table
tables_png/dev/val_tab_0286.png
The mean “warmth” and “competence” scores for each target identity are presented in Table 5 .
nlp
other sources
Change the cell values
papers/dev/nlp_2025.naacl-long.10.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0170
tables/dev/val_tab_0286.tex
2025.naacl-short.13
val_tab_0288
A total of 1,896 chiastic structures were identified at the half-verse level, with an average length of 5.93 textual units ( \pm 1.34) and an average score of 0.32 ( \pm 0.1).
Supported
Table 2: Summary of detected chiasmi. 2700+ chiasmi were detected at the verse and half-verse level. The highest number of chiasmi was found in the Book of Genesis and Book of Numbers. Both the precision and the inter-annotator agreement increase for the verse-level chiasmi.
table
tables_png/dev/val_tab_0288.png
Table 2 presents an overview of the system’s output for chiastic structures at the half-verse and verse levels.
nlp
no
Change the cell values
papers/dev/nlp_2025.naacl-short.13.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0171
tables/dev/val_tab_0288.tex
2025.naacl-short.13
val_tab_0289
A total of 1,896 chiastic structures were identified at the half-verse level, with an average length of 5.93 textual units ( \pm 1.34) and an average score of 0.32 ( \pm 0.1).
Refuted
Table 2: Summary of detected chiasmi. 2700+ chiasmi were detected at the verse and half-verse level. The highest number of chiasmi was found in the Book of Genesis and Book of Numbers. Both the precision and the inter-annotator agreement increase for the verse-level chiasmi.
table
tables_png/dev/val_tab_0289.png
Table 2 presents an overview of the system’s output for chiastic structures at the half-verse and verse levels.
nlp
no
Change the cell values
papers/dev/nlp_2025.naacl-short.13.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0171
tables/dev/val_tab_0289.tex
2025.naacl-short.8
val_tab_0290
Average reward model accuracy ("Avg") in Table 1 shows that English RMs surpass target language RMs in general.
Supported
Table 1: Multilingual RewardBench evaluation results on the target language ("Target") and English ("English") RMs. " \Delta " denotes the accuracy gain of English RMs compared to the target language RMs. English RMs show higher average scores in the lingual axis than target language RMs. Also, English RMs excel target language RMs in reasoning ("Reason") tasks with diverse evaluation sub-categories.
table
tables_png/dev/val_tab_0290.png
nlp
no
Swap rows or columns
papers/dev/nlp_2025.naacl-short.8.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0172
tables/dev/val_tab_0290.tex
2023.emnlp-main.15
val_tab_0291
Observation of the results shows that social causal reasoning tasks scored higher on BERTScore while physical causal reasoning tasks scored higher on SentenceBERT.
Supported
Table 5: GPT-4 performance for videos partitioned by the physical (changes due to outside, real forces) and social (changes related to social cues) on the Infilling-1 task with structured, FAMOuS descriptions. Significant differences between partitions are underlined .
table
tables_png/dev/val_tab_0291.png
We examine the difference in performance between physical and social causal reasoning ( Table 5 ). A task necessitates physical causal reasoning when the video changes stem from external, tangible forces, like a wave crashing on a beach. Conversely, a task involves social causal reasoning when video changes result from social cues, such as a character becoming joyful during a surprise party.
nlp
no
Swap rows or columns
papers/dev/nlp_2023.emnlp-main.15.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0173
tables/dev/val_tab_0291.tex
2023.emnlp-main.15
val_tab_0292
Observation of the results shows that social causal reasoning tasks scored higher on BERTScore while physical causal reasoning tasks scored higher on SentenceBERT.
Refuted
Table 5: GPT-4 performance for videos partitioned by the physical (changes due to outside, real forces) and social (changes related to social cues) on the Infilling-1 task with structured, FAMOuS descriptions. Significant differences between partitions are underlined .
table
tables_png/dev/val_tab_0292.png
We examine the difference in performance between physical and social causal reasoning ( Table 5 ). A task necessitates physical causal reasoning when the video changes stem from external, tangible forces, like a wave crashing on a beach. Conversely, a task involves social causal reasoning when video changes result from social cues, such as a character becoming joyful during a surprise party.
nlp
no
Swap rows or columns
papers/dev/nlp_2023.emnlp-main.15.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0173
tables/dev/val_tab_0292.tex
2023.emnlp-main.13
val_tab_0295
In the most challenging second-order ToM inference tasks, where agents estimate others’ beliefs about their own mental states, GPT-4 + Belief agents correctly respond in nearly 70% of cases.
Supported
Table 2: LLM-based agents’ performance in ToM inference tasks. Natural language answers are annotated by experimenters and compared with the ground truth based on global interaction history. Percentages represent the inference accuracy.
table
tables_png/dev/val_tab_0295.png
A critical aspect of teamwork is inferring teammates’ mental states, including beliefs, desires, and intentions. We assess LLM-based agents by asking them to conduct Theory of Mind inferences during the mission. As seen in Table 2 , LLM-based agents can estimate their own and their teammates’ mental states.
nlp
no
Change the cell values
papers/dev/nlp_2023.emnlp-main.13.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0174
tables/dev/val_tab_0295.tex
2023.emnlp-main.20
val_tab_0297
The evaluation results are shown in Table 6 . InstructBLIP performs the best under all settings, highlighting the importance of instruction-tuning on large visual instruction corpora.
Supported
Table 6: Evaluation results of LVLMs on POPE and VQA. For VQA tasks, we report the VQA score on A-OKVQA and Accuracy on GQA. For POPE, we copy the result under the random setting from Table 11 .
table
tables_png/dev/val_tab_0297.png
nlp
no
Swap rows or columns
papers/dev/nlp_2023.emnlp-main.20.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0175
tables/dev/val_tab_0297.tex
2023.emnlp-main.20
val_tab_0298
The evaluation results are shown in Table 6 . InstructBLIP performs the best under all settings, highlighting the importance of instruction-tuning on large visual instruction corpora.
Refuted
Table 6: Evaluation results of LVLMs on POPE and VQA. For VQA tasks, we report the VQA score on A-OKVQA and Accuracy on GQA. For POPE, we copy the result under the random setting from Table 11 .
table
tables_png/dev/val_tab_0298.png
nlp
no
Swap rows or columns
papers/dev/nlp_2023.emnlp-main.20.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0175
tables/dev/val_tab_0298.tex
2023.emnlp-main.4
val_tab_0299
The evaluation results are presented in Table 1 . While the state-of-the-art models demonstrate competitive performance, SocialSense outperforms all other models across all evaluation metrics consistently.
Supported
Table 1: Response forecasting results. We report the Spearman and Pearson correlations for the forecasting of sentiment intensity, as well as Micro F1 and Macro F1 scores for the sentiment polarity prediction. The best overall performance is in bold. Our framework outperforms the baselines consistently.
table
tables_png/dev/val_tab_0299.png
We conduct an evaluation of the proposed SocialSense model and the baseline models introduced in Section 4.2 for the supervised response forecasting task.
nlp
no
Change the cell values
papers/dev/nlp_2023.emnlp-main.4.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0176
tables/dev/val_tab_0299.tex
2023.emnlp-main.4
val_tab_0300
The evaluation results are presented in Table 1 . While the state-of-the-art models demonstrate competitive performance, SocialSense outperforms all other models across all evaluation metrics consistently.
Refuted
Table 1: Response forecasting results. We report the Spearman and Pearson correlations for the forecasting of sentiment intensity, as well as Micro F1 and Macro F1 scores for the sentiment polarity prediction. The best overall performance is in bold. Our framework outperforms the baselines consistently.
table
tables_png/dev/val_tab_0300.png
We conduct an evaluation of the proposed SocialSense model and the baseline models introduced in Section 4.2 for the supervised response forecasting task.
nlp
no
Change the cell values
papers/dev/nlp_2023.emnlp-main.4.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0176
tables/dev/val_tab_0300.tex
2023.emnlp-main.4
val_tab_0301
Furthermore, our model, SocialSense_{\text{Zero}}, achieves the highest scores consistently across all metrics.
Supported
Table 2: The above Zero-Shot Response forecasting results highlight that the Social Prompt from Section 3.4 consistently offers an advantage.
table
tables_png/dev/val_tab_0301.png
In addition to supervised response forecasting, we also evaluate our framework under the zero-shot setting (Section 3.4 ). The results are presented in Table 2 . Based on the higher scores attained by ChatGPT L , it is evident that the inclusion of latent structured persona information indeed aids the model in comprehending the user more effectively.
nlp
no
Change the cell values
papers/dev/nlp_2023.emnlp-main.4.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0177
tables/dev/val_tab_0301.tex
2023.emnlp-main.9
val_tab_0302
The number of direct and contextual rationales is the largest among all types, and it increases further when we look at the error cases of InstructGPT.
Supported
Table 4: Annotation results of rationale types on 100 examples randomly sampled from all subquestions (left) and from the error examples by InstructGPT (right).
table
tables_png/dev/val_tab_0302.png
We report our annotation results in Table 4 .
nlp
no
Swap rows or columns
papers/dev/nlp_2023.emnlp-main.9.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0178
tables/dev/val_tab_0302.tex
2023.emnlp-main.9
val_tab_0303
We observe an improvement when the selective rationale is added; however, degradation occurs when we add the eliminative rationale, even if it is provided with the selective rationale.
Supported
Table 5: MainQ accuracy of InstructGPT that uses the selective or eliminative rationales in the input.
table
tables_png/dev/val_tab_0303.png
Because the collected rationales are expected to support the decision of selecting and eliminating answer options, we investigate whether adding the rationales to the main questions improves the performance in the five-shot InstructGPT. We append the rationale to the context, main question, and four options with the Rationale: label. The results are shown in Table 5 .
nlp
no
Swap rows or columns
papers/dev/nlp_2023.emnlp-main.9.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0179
tables/dev/val_tab_0303.tex
2023.emnlp-main.9
val_tab_0304
We observe an improvement when the selective rationale is added; however, degradation occurs when we add the eliminative rationale, even if it is provided with the selective rationale.
Refuted
Table 5: MainQ accuracy of InstructGPT that uses the selective or eliminative rationales in the input.
table
tables_png/dev/val_tab_0304.png
Because the collected rationales are expected to support the decision of selecting and eliminating answer options, we investigate whether adding the rationales to the main questions improves the performance in the five-shot InstructGPT. We append the rationale to the context, main question, and four options with the Rationale: label. The results are shown in Table 5 .
nlp
no
Swap rows or columns
papers/dev/nlp_2023.emnlp-main.9.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0179
tables/dev/val_tab_0304.tex
2025.naacl-long.14
val_tab_0305
From Table 4 , we observe that removing low-correlated heuristic RAG features actually helps the learning to rank model learn better, leading to the conclusion that not all features are necessarily important.
Supported
Table 4: Kendall Tau ( \tau ) scores using different features for training the random forest regression model.
table
tables_png/dev/val_tab_0305.png
Are all heuristic features necessary? We experiment with the set of features used for learning to rank model training as a surrogate judge. We evaluate four training configurations: (i) all features (ii) without LLM-measured features (iii) without language detection and support, i.e., the low-correlation features observed in Figure 5 , and (iv) including only LLM-measured features.
nlp
no
Swap rows or columns
papers/dev/nlp_2025.naacl-long.14.json
CC BY-SA 4.0
http://creativecommons.org/licenses/by-sa/4.0/
0180
tables/dev/val_tab_0305.tex
2025.naacl-long.14
val_tab_0306
Next, either removing the LLM-measured features completely or keeping only them decreases the Kendall-Tau ( \tau ) correlation score in MIRAGE.
Supported
Table 4: Kendall Tau ( \tau ) scores using different features for training the random forest regression model.
table
tables_png/dev/val_tab_0306.png
Are all heuristic features necessary? We experiment with the set of features used for learning to rank model training as a surrogate judge. We evaluate four training configurations: (i) all features (ii) without LLM-measured features (iii) without language detection and support, i.e., the low-correlation features observed in Figure 5 , and (iv) including only LLM-measured features. From Table 4 , we observe that removing low-correlated heuristic RAG features actually helps the learning to rank model learn better, leading to the conclusion that not all features are necessarily important.
nlp
no
Swap rows or columns
papers/dev/nlp_2025.naacl-long.14.json
CC BY-SA 4.0
http://creativecommons.org/licenses/by-sa/4.0/
0181
tables/dev/val_tab_0306.tex