paper_id: string
claim_id: string
claim: string
label: string
caption: string
evi_type: string
evi_path: string
context: string
domain: string
use_context: string
operation: string
paper_path: string
detail_others: string
license_name: string
license_url: string
claim_id_pair: string
evi_path_original: string
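The records below list one field value per line in the schema order above, with empty fields omitted in some records. A minimal parsing sketch, assuming values arrive in schema order and that any missing trailing fields should default to the empty string (the field list is taken from the schema; `parse_record` and its padding behavior are illustrative assumptions, not part of the dataset's official tooling):

```python
# Field names copied from the schema block above.
FIELDS = [
    "paper_id", "claim_id", "claim", "label", "caption", "evi_type",
    "evi_path", "context", "domain", "use_context", "operation",
    "paper_path", "detail_others", "license_name", "license_url",
    "claim_id_pair", "evi_path_original",
]

def parse_record(values):
    """Zip a flat list of field values against the schema.

    Short records are padded with empty strings; note that in the raw dump
    some records omit fields in the middle (e.g. an empty `context`), which
    this simple positional scheme cannot recover on its own.
    """
    padded = list(values) + [""] * (len(FIELDS) - len(values))
    return dict(zip(FIELDS, padded))

# Hypothetical truncated record for illustration.
example = parse_record(["17284", "val_tab_1573", "claim text", "Refuted"])
```

With a full 17-value row this yields one key per schema field; with the truncated example above, `example["label"]` is `"Refuted"` and the remaining fields are empty strings.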
17284
val_tab_1573
Pregnant women who were advised to terminate their pregnancies or carefully consider whether to continue gestation were 41.113 times more likely to undergo labor induction compared to those who were advised to continue the pregnancy (OR = 41.113, 95% CI [11.028–153.267]), ( Table 2 ).
Refuted
Table 2: Analysis of influencing factors of pregnancy outcome in pregnant women with fetal abnormalities.
table
tables_png/dev/val_tab_1573.png
Pregnant women with multiple fetal abnormalities were 3.774 times more likely to undergo labor induction than those with a single fetal abnormality (OR = 3.774, 95% CI [1.640–8.683]).
peerj
no
Change the cell values
papers/dev/peerj_17284.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0286
tables/dev/val_tab_1573.html
2025.naacl-long.10
val_tab_1574
We can observe a clear push-back pattern: The higher the “coldness” or “incompetence” scores are for hateful stereotypes towards a target identity, the stronger the counter-stereotypes are in the opposite directions; consider, for illustration, the “warmth” dimension for gays and the “competence” dimension for women.
Refuted
Table 5: The mean “warmth” and “competence” scores for hateful (H) and non-hateful (N/H) examples. We highlight the scores with the highest magnitude in bold .
table
tables_png/dev/val_tab_1574.png
The mean “warmth” and “competence” scores for each target identity are presented in Table 5 .
nlp
other sources
Change the cell values
papers/dev/nlp_2025.naacl-long.10.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0170
tables/dev/val_tab_1574.tex
18472
val_tab_1575
Fifty healthy students from University Europea de Madrid volunteered for the study Table 1 .
Refuted
Table 1: Descriptive data.
table
tables_png/dev/val_tab_1575.png
peerj
no
Change the cell values
papers/dev/peerj_18472.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0267
tables/dev/val_tab_1575.html
18312
val_fig_1544
In clinical samples, except Veillonella , Leptotrichia and Streptococcus , other bacteria occupied the most of bacterial community.
Refuted
Figure 3: Bacterial community structure analysis. (A) The distributions of the predominant bacteria at genus level. (B) The relative abundance of differential predominant bacteria among aerobic culture, anaerobic culture and clinical groups analyzed at genus level by ANOVA.  
figure
figures/dev/val_fig_1544.png
To display the composition of dominant bacteria, the stacked column plot showed the percentage of each high-abundance bacteria.
peerj
no
Legend Swap
papers/dev/peerj_18312.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0341
null
16935
val_fig_1545
Among CNVs, IGFBP7 had the highest frequency of acquired CNVs (CNV_gain), whereas SHISA5 had the highest frequency of deletion mutations (CNV_loss) ( Figure 4C ).
Refuted
Figure 4: Analysis of cellular senescence-related prognostic genes. (A) Forest plot of prognosis-related genes; (B) waterfall plot of SNVs in the seven prognosis-related genes; (C) Percentage plot of CNVs in the seven prognosis-related genes; (D) The expression of the seven prognosis-related genes expressed in tumor and normal tissues; (E) CDF curve of TCGA cohort; (F) CDF-delta area curves for TCGA cohort. The Delta area curve for consensus clustering indicates the relative change in the area under the CDF curve for each category number, k, relative to k-1. The horizontal axis represents the category number, k, and the vertical axis represents the relative change in area under the CDF curve; (G) Heat map of sample clustering at consensus k = 3; (H) KM curve of the relationship between the prognoses of the three subtypes in TCGA; (I) Heat map of the expression of seven prognosis-related genes among three subtypes in TCGA dataset.  
figure
figures/dev/val_fig_1545.png
The above pathways comprised 253 genes, of which 186 were present in TCGA dataset. Subsequently, a univariate analysis of these 186 genes yielded seven prognosis-related genes, including ETS2, SERPINE1, FOS, SHISA5, IL1A, TP53AIP1, and IGFBP7 ( Figure 4A ). Mutations in these seven genes in TNBC were further examined. Among SNVs, ETS2 showed the highest mutation frequency (primarily mutation type: missense) ( Figure 4B ).
peerj
no
Category Swap
papers/dev/peerj_16935.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0324
null
2023.emnlp-main.13
val_tab_1576
In the most challenging second-order ToM inference tasks, where agents estimate others’ beliefs about their own mental states, GPT-4 + Belief agents correctly respond in nearly 70% of cases.
Refuted
Table 2: LLM-based agents’ performance in ToM inference tasks. Natural language answers are annotated by experimenters and compared with the ground truth based on global interaction history. Percentages represent the inference accuracy.
table
tables_png/dev/val_tab_1576.png
A critical aspect of teamwork is inferring teammates’ mental states, including beliefs, desires, and intentions. We assess LLM-based agents by asking them to conduct Theory of Mind inferences during the mission. As seen in Table 2 , LLM-based agents can estimate their own and their teammates’ mental states.
nlp
no
Change the cell values
papers/dev/nlp_2023.emnlp-main.13.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0174
tables/dev/val_tab_1576.tex
2209.12362
val_tab_1577
In terms of performance on ActivityNet, we observe that both training methods achieve good results, which might be because ActivityNet classes are highly overlapped with Kinetics-400 (65 out of 200).
Refuted
Table 3: Ablation experiments. We investigate the effectiveness of each component of our method as well as compare to vanilla multi-dataset training method. “Vanilla” means using cross entropy (CE) loss in training. “w/o informative los” means using CE and projection loss. The numbers are top-1/top-5 accuracy, respectively. Training data: (e) Kinetics-400; (f) SSv2; (g) MiT; (h) ActivityNet.
table
tables_png/dev/val_tab_1577.png
Does our proposed robust loss help? We compare our model training with vanilla multi-dataset training, where multiple classification heads are attached to the same backbone and the model is trained simply with cross-entropy loss. The vanilla model is trained from a K400 checkpoint as ours. As shown in Table 3 , we try training the vanilla model with both the same training schedule as ours and a 4x longer schedule. As we see, there is a significant gap between the overall performance of the vanilla model and ours, validating the efficacy of our proposed method. Also, longer training schedule does not lead to better performance on some datasets, including SSv2, suggesting vanilla multi-dataset training is unstable.
ml
no
Change the cell values
papers/dev/ml_2209.12362.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0009
tables/dev/val_tab_1577.tex
2025.naacl-long.15
val_fig_1546
Surprisingly, the Medium task displays the least bias, likely because models perform best in this task.
Refuted
Figure 5: Average estimated true F1 scores ( Section 3.1 ) across models (left) and benchmarks (right) showing performance bias of LLMs across 2 widely used mapping formats.
figure
figures/dev/val_fig_1546.png
From Fig. 5 -right, extracting 4 categories in the Hard task shows the largest performance gap between mapping formats.
nlp
no
Graph Swap
papers/dev/nlp_2025.naacl-long.15.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0242
null
2025.naacl-long.1
val_tab_1578
Yet still only half of its explanations are considered adequate.
Refuted
Table 6: Adequacy and Preference rates for generated explanations.
table
tables_png/dev/val_tab_1578.png
In Table 6 , we show adequacy and preference rates for explanations from the 3 systems, where an explanation is deemed adequate if both annotators agreed it is, and inadequate if both agreed it is not. The preference percentage is also taken among instances where the annotators agreed that the model’s explanation is preferred among all the adequate explanations. The average IAA using Cohen’s \kappa is 0.47, indicating moderate agreement Cohen ( 1960 ) . We observe that the teacher model is leading in terms of the adequacy of the explanations and preference rate, as expected from a larger system equipped for higher quality reasoning and generation capabilities.
nlp
yes
Change the cell values
papers/dev/nlp_2025.naacl-long.1.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0099
tables/dev/val_tab_1578.tex
2024.eacl-short.18
val_tab_1579
Evaluation_NS is unique in being poor, receiving an average of 2.47 .
Refuted
Table 2: The human-evaluation statistics for each relation, where None is generation with the language model alone. The metrics are ( Rel[ation-fit]), ( Flu[ency]), and ( Rea[sonableness]).
table
tables_png/dev/val_tab_1579.png
The average annotator rating of relation-fit for generation with each of the relations is presented in Table 2 . The overall average, 3.49 , is well within the positive range.
nlp
yes
Change the cell values
papers/dev/nlp_2024.eacl-short.18.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0218
tables/dev/val_tab_1579.tex
2024.acl-short.16
val_fig_1547
As we can see in Fig. 2 , the model with r=4 , yields poorer performance, highlighting the need for high rank for the frozen tensors.
Refuted
Figure 2: Performance of ELoRA with two different ranks of the frozen projection matrices.
figure
figures/dev/val_fig_1547.png
To understand the high rank requirement for the frozen projection metrices in ELoRA, we conduct two sets of fine-tuning on SST-2 and MRPC, with ELoRA having rank ( r ) of 1024 and 4, respectively.
nlp
no
Legend Swap
papers/dev/nlp_2024.acl-short.16.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0254
null
2023.ijcnlp-main.34
val_tab_1580
The reason is that the model struggles in distinguishing between refutes and NEI claims in these datasets, as reflected by Table 5 .
Refuted
Table 5: Class-wise F1 of 3-class fact verification for the zero-shot generalization setup (left) and the in-domain training setup (right). S: supports; R: refutes; N: NEI.
table
tables_png/dev/val_tab_1580.png
Many works Jiang et al. ( 2020 ); Saakyan et al. ( 2021 ) do not consider NEI claims due to their ambiguity. To explore whether our previous observations also hold for the task of binary fact verification , we evaluate the generalization results for all 11 datasets using only the supports and refutes claims for training and evaluation, shown in Table 3 . In this setting, artificial claims also generalize well to natural claims in other domains. In 6 of the 7 datasets with natural claims, the best generalization score is from a model trained on artificial claims. This also holds for the evidence length: datasets with sentence-level evidence tend to generalize better than document-level datasets. Finally, compared with the three-class result in Table 2 , generalization improves a lot on Climate-FEVER, SciFact, and PubHealth.
nlp
yes
Swap rows or columns
papers/dev/nlp_2023.ijcnlp-main.34.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0078
tables/dev/val_tab_1580.tex
2023.starsem-1.1
val_tab_1581
Also here we can see that our proposed model outperforms the baseline models showing BLEU-4 score of 5.76 in test set.
Refuted
Table 4: Back translation results obtained from the generative models when using manual features and facial landmarks and AUs. Our proposed model has the highest scores in all metrics compared to the models using only gloss or text.
table
tables_png/dev/val_tab_1581.png
Table 4 presents the results of including facial landmarks as well as facial AUs with body and hands skeleton joints as input.
nlp
no
Change the cell values
papers/dev/nlp_2023.starsem-1.1.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0121
tables/dev/val_tab_1581.tex
2023.ijcnlp-main.35
val_tab_1582
Humans obtained an EM of 69.13 in MisinfoQA-noisy , which, though higher than most QA models’ performance, also shows a significant drop when compared to the MisinfoQA-clean setting (86.57 EM).
Refuted
Table 4: QA performance under the reading comprehension settings with clean and noisy contexts.
table
tables_png/dev/val_tab_1582.png
Table 4 reports the EM and F1 for both human and different QA models. We find that all QA models suffer a large performance drop ( \sim 20% in EM) in MisinfoQA-noisy compared to MisinfoQA-clean , showing that the models are largely distracted by the fake contexts rather than by the presence of additional contexts.
nlp
no
Change the cell values
papers/dev/nlp_2023.ijcnlp-main.35.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0120
tables/dev/val_tab_1582.tex
19471
val_tab_1583
Tukey’s post-hoc analysis revealed that those in C2 had the youngest age (44.80 ± 11.30 years, p < 0.001 vs. all other clusters) and highest BMI (25.59 ± 2.42, p < 0.001 vs. all other clusters).
Refuted
Table 2: Comparisons of features between the clusters classified by Louvain clustering unsupervised machine learning.
table
tables_png/dev/val_tab_1583.png
To validate the quality and distinctiveness of the Louvain clustering solution, we calculated widely-used cluster validity indices. The Davies–Bouldin index was 2.09, and the Calinski–Harabasz index was 59.26. While the Davies–Bouldin index was somewhat high (lower values indicate better separation), the Calinski–Harabasz index supported the presence of distinct groupings in our data. The Louvain clustering algorithm identified four distinct clusters (C1–C4) with significant differences in all features (all p < 0.001, except for AHD-SU ( p = 0.017) and portion of EOA ( p = 0.019)) ( Table 2 ).
peerj
no
Change the cell values
papers/dev/peerj_19471.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0270
tables/dev/val_tab_1583.html
18000
val_tab_1584
Our results indicate significant correlations between the peak angular momentums of the attack arm, non-attack arm, non-attack leg, forearm, and hand with the arm swing speed at BI ( Table 1 ).
Refuted
Table 1: Peak angular momentum variables and correlations with the arm swing speed at ball impact (BI).
table
tables_png/dev/val_tab_1584.png
This study investigated the relationships between angular momentum variables and swing hand speed during jump serve aerial spiking, utilising correlation and regression analyses.
peerj
no
Change the cell values
papers/dev/peerj_18000.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0294
tables/dev/val_tab_1584.html
2025.naacl-long.11
val_tab_1585
Equally, the specialized projection of SpeechSim for the emotion facet, SpeechSim/Emotion performs best across all representations.
Refuted
Table 3 : Emotion diversity. Average Spearman correlations between the classes entropy in EmoV and Expresso and diversity scores induced by the speech representations. SpeechSim/Emotion-Expresso (resp. Speech/Emotion-EmoV) refers to SpeechSim emotion head trained on Expresso (resp. EmoV).
table
tables_png/dev/val_tab_1585.png
In Table 3 we report average Spearman correlations for the tested representation models, when tested on EmoV and Expresso separately. From these results, we see that, generally, SpeechSim performs better than other general-purpose representations across all configurations.
nlp
yes
Swap rows or columns
papers/dev/nlp_2025.naacl-long.11.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0087
tables/dev/val_tab_1585.tex
16850
val_fig_1548
Kaplan-Meier analysis implicated that patients in the cluster two group had worse PFS outcomes compared with those in the cluster 1 group, in which the log-rank test P value was 0.013 ( Figure 5K ).
Refuted
Figure 5: External validation of ARGs signature and clustering analysis across PRAD samples. (A) The ROC analysis of ARGs exhibiting its predictive efficiency of 3-, 5- and 7-year in predicting PFS. (B) Kaplan-Meier curves showing the differential survival outcomes in AGRs-high and ARGs-low samples in GSE116918. (C and D) The ROC analysis and Kaplan-Meier curves of ARGs in GSE70769. (E and F) The ROC analysis and Kaplan-Meier curves of ARGs in MSKCC-PRAD cohort. (G) Color‐coded heatmap corresponding to the consensus matrix for k = 2 obtained by consensus clustering. The color gradients from 0 to 1 represent the degree of consensus, with white corresponding to 0 and dark blue corresponding to 1. (H) Consensus among clusters for each category number k. (I) Delta area curve of consensus clustering indicating the relative change in area under the cumulative distribution function (CDF) curve for each category number k compared to k − 1. (J) PCA analysis indicated the identified two groups based on ARGs signature. (K) Kaplan-Meier analysis indicated the differential survival outcomes of two groups. (L) Gene set enrichment analysis (GSEA) exhibiting the enriched crosstalk between Cluster1 and Cluster2.  
figure
figures/dev/val_fig_1548.png
Utilizing the unsupervised clustering algorithm, we conducted consensus clustering analysis to distinguish PRAD patients in the training cohort into subgroups based on the expression of ARGs. The K = 2 was identified with the optimal clustering stability ( Figs. 5G – 5I ). We further conducted the PCA analysis and could classify the patients into two distinct groups with individual features (cluster 1 & cluster 2) in Figure 5J .
peerj
no
Legend Swap
papers/dev/peerj_16850.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0322
null
19374
val_fig_1549
Lactobacillus was positively correlated with xylitol and showed a negative correlation with 5-aminovaleric acid.
Refuted
Figure 10: Correlation analysis of characteristic metabolites and bacteria. Corr represents correlation. Red and blue separately denote positive and negative correlations. The darker the color, the stronger the correlation. *P < 0.05, **P < 0.01, and ***P < 0.001.
figure
figures/dev/val_fig_1549.png
To further explore correlations of metabolites in human fecal matter and intestinal flora after SNP treatment, thermography was used to analyze contents of metabolites showing significant differences with 30 most abundant genera. Figure 10 shows the results. Bifidobacterium was positively associated with glycerol, CO 2 , 4-aminobutanoic acid, L-cysteine, and D-gluconic acid, and could be negatively correlated with DL-phenylalanine. Faecalibacterium was positively associated with pentanoic acid and 4-aminobutanoic acid, while were negatively correlated with hydracrylic acid.
peerj
no
Category Swap
papers/dev/peerj_19374.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0328
null
2024.eacl-long.18
val_tab_1586
Table 6 shows the average F1 scores on each task, demonstrating that lexicon-based pretraining boosts the performance of vanilla mBERT {}_{\text{Base}} .
Refuted
Table 6: Lexicon-based pretraining performance (macro-F1) over stance detection, hate speech detection, and emotion classification. The results are based on the limited training data scenario.
table
tables_png/dev/val_tab_1586.png
For each task, we take two datasets and perform experiments in few-shot training using mBERT {}_{\text{Base}} , following the setup described in Section 4.2 . Instead of using the multilingual lexicon, we use the English NRC-VAD lexicon since all the data is in English. For detailed information about the datasets and results, see the Appendix.
nlp
no
Change the cell values
papers/dev/nlp_2024.eacl-long.18.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0111
tables/dev/val_tab_1586.tex
2025.naacl-long.4
val_tab_1587
Overall, the cognitive abilities of OPT, Llama-2-chat-70B, GPT-3.5-Turbo, and GPT-4 models successively increase, and the performance of each model gradually declines with the increase of stage, consistent with humans.
Refuted
Table 3: Calibrated accuracy (%) of largest model in evaluating series. Acc and Age refer to calibrated accuracy and the age of equivalent human performance. The value of Age is calculated according to Equation 2 . Bold indicates the best performance.
table
tables_png/dev/val_tab_1587.png
As shown in Table 3 , We run the model with the largest number of parameters in each series on CogLM , and report the adult human performance for comparison.
nlp
no
Change the cell values
papers/dev/nlp_2025.naacl-long.4.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0190
tables/dev/val_tab_1587.tex
2025.clpsych-1.9
val_tab_1588
The BERT multi-task classifier improves these numbers considerably (e.g., obtaining 59.69 and 59.60 micro F1 for CF and Skill, respectively), indicating the benefit of richer contextual representations.
Refuted
Table 1: Three-fold cross-validation micro and macro F1 scores.
table
tables_png/dev/val_tab_1588.png
Table 1 reports the micro and macro F1 scores for various models on the CF, IC, and skill classification. We compare several baselines, including traditional TF-IDF and BERT multi-task classifiers, graph-based models without BERT, and our proposed CFiCS variants that integrate clinical BERT features. The TF-IDF multi-task Random Forest baseline achieves relatively modest performance, with micro F1 scores of 52.50, 74.59, and 53.02 for CF, IC, and Skill, respectively, and corresponding macro F1 scores of 20.43, 38.04, and 49.77.
nlp
no
Change the cell values
papers/dev/nlp_2025.clpsych-1.9.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0148
tables/dev/val_tab_1588.tex
18713
val_fig_1550
It follows that 12 out of 189 rarely and sometimes (6.3%) had the urge to smoke, respectively.
Refuted
Figure 3: Impact of pictorial warnings on the initiation to smoking by non-smokers (n = 189).  
figure
figures/dev/val_fig_1550.png
Figure 3 shows the impact of pictorial warnings on the initiation of smoking by non-smokers. Most non-smokers ( n = 161, 85.2%) never had the urge to smoke upon the sight of pictorial warnings on the cigarette packaging.
peerj
yes
Legend Swap
papers/dev/peerj_18713.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0335
null
18312
val_fig_1551
Additionally, those in NEMM displayed an increased relative abundance of protein functions linked to defense metabolism, inorganic ion transport and metabolism under anaerobic conditions.
Refuted
Figure 8: Function prediction by PICRUSt2. The differential analysis of COG function (A) and KEGG pathway (B) show the abundance ratio of different functions between groups, whose middle figures show the difference ratio of functional abundance within the 95% confidence interval.  
figure
figures/dev/val_fig_1551.png
To assess the effects of the two media on the predicted gene categories (COGs), we compared the predicted COGs between aNEMM and aSHI, nNEMM and nSHI ( Figure 8A ). NEMM showed more ability in carbohydrate transport and metabolism than SHI medium under both anaerobic conditions and normoxia conditions.
peerj
yes
Legend Swap
papers/dev/peerj_18312.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0318
null
2023.ijcnlp-main.9
val_tab_1589
The only exception is the case of style strength evaluation task in the direction of H_{1} \rightarrow H_{2} , using the GPT-NeoX model.
Refuted
Table 6: Inter-annotator agreement scores for the three human evaluation tasks. Standard deviations over all data points are shown in brackets for the style strength and appropriateness evaluation tasks. The detailed procedure for calculating the agreement scores can be found in Appendix D .
table
tables_png/dev/val_tab_1589.png
The inter-annotator agreement in all of the tasks are shown in Table 6 . Note that, for calculating agreement in the semantic correctness evaluation task, all of the data points are aggregated to measure the agreement score as they represent categorical evaluation measures. On the other hand, that is not possible in case of ranking based evaluations for style strength and appropriateness. So, we measure the agreement for each data point and take the average agreement over all data points. We can see in the Table 6 that in all of the cases we get strong agreement ( >0.70 ) among the annotators for the style strength and appropriateness evaluation.
nlp
yes
Change the cell values
papers/dev/nlp_2023.ijcnlp-main.9.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0098
tables/dev/val_tab_1589.tex
18098
val_fig_1552
Among tumor region samples, the model correctly identified 86% as tumor, while misclassifying 14% as non-tumor.
Refuted
Figure 2: Performance of the segmentation network. (A) Receiver operating characteristic (ROC) curve comparing the performance of the segmentation network with three pathologists (senior, medium, and junior) in identifying tumor areas. (B) Confusion matrix for the segmentation network. (C) Original WSIs and annotation maps identified by the segmentation network. Left: original WSI images; right: corresponding annotation maps.  
figure
figures/dev/val_fig_1552.png
A segmentation network was developed to accurately distinguish tumor regions from WSIs. This model demonstrated high efficacy, achieving an AUC of 0.960 (95% CI [0.959–0.961]). The accuracy, sensitivity, and specificity were 0.888 (95% CI [0.887–0.890]), 0.859 (95% CI [0.856–0.861]), and 0.908 (95% CI [0.906–0.909]), respectively. Compared with three pathologists of varying experience (junior, medium, and senior), the model’s performance was almost equivalent to that of a medium-level pathologist ( Figure 2A ). The confusion matrix shows specific performance metrics of the segmentation model.
peerj
no
Category Swap
papers/dev/peerj_18098.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0337
null
2025.finnlp-1.15
val_tab_1590
CoT prompting only improves the performance of the GPT-4o-mini model, whereas it significantly degrades the performance of the LLaMA 3.1 series.
Refuted
Table 1: Performance of different fine-tuned language models and LLMs under different prompts on FiNER-ORD task.
table
tables_png/dev/val_tab_1590.png
(2) Chain-of-Thought prompting has limited effect on LLMs performance and can sometimes reduce effectiveness. While few-shot learning generally enhances generic LLMs’ performance, Table 1 shows that the difference between prompting styles is marginal.
nlp
no
Change the cell values
papers/dev/nlp_2025.finnlp-1.15.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0205
tables/dev/val_tab_1590.tex
2025.naacl-long.14
val_tab_1591
Next, removing the LLM-measured features completely or only keeping them decreases the Kendall-Tau ( \tau ) correlation score in \mirage .
Refuted
Table 4: Kendall Tau ( \tau ) scores using different features for training the random forest regression model.
table
tables_png/dev/val_tab_1591.png
Are all heuristic features necessary? We experiment with the set of features used for learning to rank model training as a surrogate judge. We evaluate four training configurations: (i) all features (ii) without LLM-measured features (iii) without language detection and support, i.e., the low-correlation features observed in Figure 5 , and (iv) including only LLM-measured features. From Table 4 , we observe that removing low-correlated heuristic RAG features, actually helps the learning to rank model to learn better leading to a conclusion that not necessarily all features are important.
nlp
no
Swap rows or columns
papers/dev/nlp_2025.naacl-long.14.json
CC BY-SA 4.0
http://creativecommons.org/licenses/by-sa/4.0/
0181
tables/dev/val_tab_1591.tex
2025.naacl-long.10
val_tab_1592
Table 1 displays the number of documents for each target identity in both datasets, and Appendix A shows examples from the two datasets for each functionality.
Refuted
Table 1: Number of examples for each target identity in HateCheck and GPT-HateCheck . We omit the functionalities without targeting identity, such as abusing objects or non-protected groups.
table
tables_png/dev/val_tab_1592.png
We use the HateCheck (Röttger et al., 2021 ) and GPT-HateCheck (Jin et al., 2024 ) datasets to conduct our analyses, as these datasets provide additional diagnostic insights. Both datasets cover the same seven target identities and 24 functionalities ( GPT-HateCheck omitted the five functionalities related to spelling variations in HateCheck ).
nlp
no
Change the cell values
papers/dev/nlp_2025.naacl-long.10.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0169
tables/dev/val_tab_1592.tex
2025.clpsych-1.1
val_tab_1593
Appraisals consistently showing better accuracy across both GPT-4 models and human annotators include Pleasantness, Unpleasantness, Goal Support, and External Norms (marked as bold in Table 2 ), suggesting that these are the easiest to infer based on text.
Refuted
Table 2: RMSE results for research questions Q2, Q3, and Q4. GPT-4 stands for the GPT-4 annotator, Human for the human reader-annotator, avg : average of five GPT-4 completions/human guesses, maj : majority vote of five GPT-4 completions/human guesses, conf : majority vote of five GPT-4 completions/human guesses with confidence rating as a tiebreaker, emo : majority vote of five GPT-4 completions with the emotion prediction task in a prompt. Bold/underline marks the appraisal dimensions that are consistently predicted better/worse than the macro average.
table
tables_png/dev/val_tab_1593.png
In analyzing appraisal dimensions across all experiments, we looked for patterns by comparing the RMSE of individual appraisal dimensions to the macro-averaged RMSE.
nlp
no
Swap rows or columns
papers/dev/nlp_2025.clpsych-1.1.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0214
tables/dev/val_tab_1593.tex
18312
val_fig_1553
NEMM showed more ability in carbohydrate transport and metabolism than SHI medium under both anaerobic conditions and normoxia conditions.
Refuted
Figure 8: Function prediction by PICRUSt2. The differential analysis of COG function (A) and KEGG pathway (B) show the abundance ratio of different functions between groups, whose middle figures show the difference ratio of functional abundance within the 95% confidence interval.  
figure
figures/dev/val_fig_1553.png
To assess the effects of the two media on the predicted gene categories (COGs), we compared the predicted COGs between aNEMM and aSHI, nNEMM and nSHI ( Figure 8A ).
peerj
other sources
Legend Swap
papers/dev/peerj_18312.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0352
null
18596
val_tab_1594
Significant negative correlations were observed between age and the IIEF total score, as well as between the duration of hypertension and the IIEF total score ( p < 0.05).
Refuted
Table 3: Correlation between age, duration of hypertension, duration of smoking and IIEF total.
table
tables_png/dev/val_tab_1594.png
Table 3 displays the correlation between age, duration of hypertension, duration of smoking, and the IIEF total score.
peerj
no
Change the cell values
papers/dev/peerj_18596.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0301
tables/dev/val_tab_1594.html
2023.emnlp-main.39
val_tab_1595
Table 4 shows that the evaluation on MNLI and QQP is more robust to different settings, and the variance is more significant on CoLA.
Refuted
Table 4: Ablation studies of different calibration sizes.
table
tables_png/dev/val_tab_1595.png
In this section, we first compare the influence of different calibration sizes on FPQ . We vary the calibration size in \{32,64,128,256\} and test on MNLI, QQP, and CoLA.
nlp
no
Swap rows or columns
papers/dev/nlp_2023.emnlp-main.39.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0182
tables/dev/val_tab_1595.tex
2025.naacl-short.10
val_tab_1596
In this case, eleven different metrics are extracted.
Refuted
Table 1: This table presents the tasks implemented in this paper. The first column specifies the different tasks. The second details the metrics used (ROUGE includes ROUGE1, ROUGE2 and ROUGEL, and Perplexity includes Bits per Byte, Byte Perplexity, and Word Perplexity). The third column outlines the benchmarks used for each task.
table
tables_png/dev/val_tab_1596.png
This study considers four different close-ended healthcare tasks, which include nine different datasets (e.g., MedQA). These are all assessed using the accuracy metric. At the same time, six open-ended tasks are studied, based on nine distinct datasets (e.g., MedText).
nlp
yes
Change the cell values
papers/dev/nlp_2025.naacl-short.10.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0081
tables/dev/val_tab_1596.tex
2025.naacl-long.14
val_tab_1597
From Table 4, we observe that removing low-correlated heuristic RAG features actually helps the learning-to-rank model learn better, leading to the conclusion that not all features are necessarily important.
Refuted
Table 4: Kendall Tau ( \tau ) scores using different features for training the random forest regression model.
table
tables_png/dev/val_tab_1597.png
Are all heuristic features necessary? We experiment with the set of features used for learning to rank model training as a surrogate judge. We evaluate four training configurations: (i) all features, (ii) without LLM-measured features, (iii) without language detection and support, i.e., the low-correlation features observed in Figure 5, and (iv) including only LLM-measured features.
nlp
no
Swap rows or columns
papers/dev/nlp_2025.naacl-long.14.json
CC BY-SA 4.0
http://creativecommons.org/licenses/by-sa/4.0/
0180
tables/dev/val_tab_1597.tex
2403.19863
val_fig_1555
An intriguing observation is that the decodability of both attributes decreases as the depth of the neural network increases, which is also observed in [10].
Refuted
(a); (b); Exploring the Effect of depth modulation: (a) illustrates how the linear decodability of features decreases as neural network depth increases, while (b) dives into the training dynamics of MLPs with varying depths under ERM.
figure
figures/dev/val_fig_1555.png
We employ the concept of feature decodability to assess the extent to which the specific features of a given dataset can be reliably decoded from the models with varying depths. Hermann et al. [10] demonstrated that the visual features can be decoded from the higher layers of untrained models. Additionally, they observed that the feature decodability from an untrained model has a significant impact in determining which features are emphasized and suppressed during the model training. Following their approach, we specifically focus on assessing the decodability of bias and core attributes from the penultimate layer of untrained models. In order to evaluate the decodability of an attribute in a dataset, we train a decoder to map the activations from the penultimate layer of a frozen, untrained model to attribute labels. The decoder comprises a single linear layer followed by a softmax activation function. The decoder is trained using an unbiased validation set associated with the dataset, where each instance is labeled according to the attribute under consideration. Subsequently, the linear decodability of the attribute, measured in accuracy, is reported on the unbiased test set. We investigate the decodability of digit and color attributes in the CMNIST dataset from MLP models with varying depths, including 3, 4, and 5 layers, and the results are depicted in Fig. 2(a).
ml
no
Graph Flip
papers/dev/ml_2403.19863.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0045
null
17284
val_tab_1598
Chi-square test analysis showed significant differences in the gestational age at diagnosis of fetal abnormalities, the number of fetal abnormalities, diagnostic gestational age, and treatment recommendations of doctors among pregnant women ( P < 0.05).
Refuted
Table 1: Comparison of pregnancy outcomes in pregnant women with different characteristics of fetal abnormalities.
table
tables_png/dev/val_tab_1598.png
peerj
no
Change the cell values
papers/dev/peerj_17284.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0283
tables/dev/val_tab_1598.html
2025.naacl-long.9
val_tab_1599
Notably, ALTER demonstrates the best performance in single-round reasoning among all other methods that utilize result ensemble techniques in the LLM era.
Refuted
Table 1: Results of different methods on WikiTQ and TabFact. For the Dater method, we report the results of using the LLM-based method as backbone (we use underline to denote the second-best performance, bold to denote the best performance for each region: Pre-LLM era, LLM era with result ensemble and without ensemble).
table
tables_png/dev/val_tab_1599.png
We present the results on the WikiTQ and TabFact datasets. The experimental outcomes are summarized in Table 1. From the results, we observe that our ALTER method achieves comparatively outstanding outcomes. Specifically, on the WikiTQ dataset, while the Mix SC method does marginally outperform our results by aggregating multiple reasoning paths (with 10 sampling times), ALTER still managed to exceed the performance of all other methods under comparison.
nlp
no
Change the cell values
papers/dev/nlp_2025.naacl-long.9.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0194
tables/dev/val_tab_1599.tex
2209.12362
val_tab_1600
As shown in Table 3 , the performance on MiT and SSv2 suffers by a large margin, indicating that the projection design helps boost training by better utilizing multi-dataset information.
Refuted
Table 3: Ablation experiments. We investigate the effectiveness of each component of our method as well as compare to the vanilla multi-dataset training method. “Vanilla” means using cross entropy (CE) loss in training. “w/o informative loss” means using CE and projection loss. The numbers are top-1/top-5 accuracy, respectively. Training data: (e) Kinetics-400; (f) SSv2; (g) MiT; (h) ActivityNet.
table
tables_png/dev/val_tab_1600.png
How important is the projection loss? We then experiment with removing the projection heads (Section 3.2 ) during multi-dataset training. The model is trained with the original cross-entropy loss and the informative loss.
ml
no
Change the cell values
papers/dev/ml_2209.12362.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0010
tables/dev/val_tab_1600.tex
2023.starsem-1.10
val_tab_1603
The remaining prompt types (Discourse, Mask) are comparable to Corpus.
Refuted
Table 3: Mean perplexity (PPL) calculated using the GPT-J-6B model. Only the strings enclosed in square brackets are considered during calculation in order to provide a fair comparison with similar token length. For Corpus, PPL is calculated using the provided gold completion.
table
tables_png/dev/val_tab_1603.png
To further analyze the outputs, we calculate the perplexity (PPL) of the generated predictions to determine their plausibility (Wilcox et al., 2020). Here, we choose the model with the best \textit{WHR}_{5} score on the MKR-NQ benchmark, and calculate the mean perplexity over all queries for each prompt type (5 completions for each query). PPL is calculated as the exponentiated average negative log-likelihood of a sequence, with exponent base e. As a point of reference, we calculated the average perplexity of the provided completion of the original non-negated dataset (denoted Corpus). From the reported perplexities (Table 3), we can see that Default outputs are the most plausible (with PPL markedly lower than Corpus), while Contrasting is the least natural.
nlp
yes
Swap rows or columns
papers/dev/nlp_2023.starsem-1.10.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0076
tables/dev/val_tab_1603.tex
2205.11361
val_tab_1604
Lastly, we note that naively using larger values of \gamma does not guarantee better test performance: one has to fine-tune the parameters \beta, \mu, \sigma, \eta appropriately to achieve a favorable trade-off between training stability and test performance.
Refuted
Table 1: Shallow neural nets trained on the airfoil data set. The results in parenthesis are achieved with the variant ( 11 ). All the results are averaged over 5 models trained with different seed values.
table
tables_png/dev/val_tab_1604.png
For the training, we use a fully connected shallow neural network of width 16 with ReLU activation and train for 3000 epochs with the learning rate \eta=0.1, using mean square error (MSE) as the loss and choosing \beta=0.5. Table 1 reports the average root MSE (RMSE) and the RMSE gap (defined as test RMSE - train RMSE) evaluated for models that are trained with 5 different seed values for this task. We can see that MPGD leads to both lower test RMSE and RMSE gap when compared with vanilla GD (baseline) and GD with uncorrelated Gaussian perturbations (see the results not in parenthesis in Table 1; here \mu=0.01, \sigma=0.02). Using the form of the perturbations in (11) instead can also give lower RMSE gap (see the results in parenthesis in Table 1; here \sigma=\mu=0.01). Overall these results support our generalization theory for MPGD.
ml
no
Change the cell values
papers/dev/ml_2205.11361.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0005
tables/dev/val_tab_1604.tex
2202.02363
val_fig_1556
Ant-dir: We trained models for performing the task on an episode of 200 timesteps and found that MetODS can adapt in a few timesteps, similar to memory-based models such as RL^{2} (Figure 3-f), thanks to its continual adaptation mechanism.
Refuted
Figure 3: a-b) Schemas of the Harlow and Mujoco Ant-directional locomotion task. c-d) Evolution of accumulated reward over training. In the Harlow task, we conduct an ablation study by either reducing the number of recursive iterations (S=1) or removing the trainable plasticity weights \bm{\alpha}, resulting in a sub-optimal policy. In Ant-dir we compare our agent's training profile against MAML and RL^{2}. e) We can interpret the learned policy in terms of a Hopfield energy adapting with experience. We show horizontally two reward profiles of different episodes and the energy E_{\bm{W}_{t}}(v_{1},v_{2})=-v^{T}_{1}\bm{W}_{t}v_{2} along two principal components of the vector trajectory \bm{v_{t}}. In the first episode, the error in the first presentation (red square) transforms the energy landscape, which changes the agent policy, while in the other episode, the model belief does not change over time. Note the two modes for every energy map, which allows the model to handle the potential position permutation of the presented values. f) Average rewards per timestep during a single episode of the Ant-dir task.
figure
figures/dev/val_fig_1556.png
\diamond MuJoCo
ml
no
Legend Swap
papers/dev/ml_2202.02363.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0068
null
19471
val_tab_1605
C1 exhibited negative mean values for the knee and lower leg displacements during SD, significantly different from other clusters ( p < 0.001).
Refuted
Table 2: Comparisons of features between the clusters classified by Louvain clustering unsupervised machine learning.
table
tables_png/dev/val_tab_1605.png
To validate the quality and distinctiveness of the Louvain clustering solution, we calculated widely-used cluster validity indices. The Davies–Bouldin index was 2.09, and the Calinski–Harabasz index was 59.26. While the Davies–Bouldin index was somewhat high (lower values indicate better separation), the Calinski–Harabasz index supported the presence of distinct groupings in our data. The Louvain clustering algorithm identified four distinct clusters (C1–C4) with significant differences in all features (all p < 0.001, except for AHD-SU ( p = 0.017) and portion of EOA ( p = 0.019)) ( Table 2 ). Tukey’s post-hoc analysis revealed that those in C2 had the youngest age (44.80 ± 11.30 years, p < 0.001 vs. all other clusters) and highest BMI (25.59 ± 2.42, p < 0.001 vs. all other clusters). C2 included exclusively males, whereas the other clusters had a mixed sex composition. For SU kinematics, C4 exhibited the largest horizontal displacements for the pelvis, femur, knee, and lower leg, all significantly higher than other clusters ( p < 0.001). During SD, C3 showed the largest pelvis and femur displacements, while C4 had significantly higher ankle displacement ( p < 0.001 vs. all other clusters).
peerj
other sources
Change the cell values
papers/dev/peerj_19471.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0306
tables/dev/val_tab_1605.html
2025.naacl-long.12
val_tab_1606
Additional synthetic datasets and high-quality retrieval datasets further improve the embedding model quality despite the tasks these datasets solve already being well represented in the basic datasets.
Refuted
Table 2: Different data sources impact. Model performance is measured on ruMTEB. Avg. stands for the average score and is computed as the mean of the category scores. The best score is put in bold, the second best is underlined.
table
tables_png/dev/val_tab_1606.png
Results presented in Table 2 indicate that the embedding model gets better results when trained on data in Russian and English simultaneously.
nlp
yes
Swap rows or columns
papers/dev/nlp_2025.naacl-long.12.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0092
tables/dev/val_tab_1606.tex
2023.emnlp-main.19
val_fig_1557
Furthermore, selecting datapoints based solely on high predictive uncertainty without considering diversity (HighLoss) is an ineffective strategy, having the second lowest proportion of wins (Fig. 5, right).
Refuted
Figure 5: Summary of subset selection strategies' performances. Left: percentage of times each strategy gets the best performance out of 35 settings (across each of the 7 languages and 5 \hat{D}^{Syn}_{train} sizes). Right: bootstrapped confidence intervals for the percentages on the left.
figure
figures/dev/val_fig_1557.png
nlp
no
Category Swap
papers/dev/nlp_2023.emnlp-main.19.json
CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
0248
null
2023.ijcnlp-main.34
val_tab_1607
In 6 of the 7 datasets with natural claims, the best generalization score is from a model trained on artificial claims.
Refuted
Table 3: F1 of binary fact verification on the evaluation set for all datasets in a zero-shot generalization setup. Rows correspond to the training dataset and columns to the evaluated dataset.
table
tables_png/dev/val_tab_1607.png
Many works Jiang et al. (2020); Saakyan et al. (2021) do not consider NEI claims due to their ambiguity. To explore whether our previous observations also hold for the task of binary fact verification, we evaluate the generalization results for all 11 datasets using only the supports and refutes claims for training and evaluation, shown in Table 3. In this setting, artificial claims also generalize well to natural claims in other domains.
nlp
other sources
Change the cell values
papers/dev/nlp_2023.ijcnlp-main.34.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0222
tables/dev/val_tab_1607.tex
2025.naacl-short.13
val_tab_1608
The book of Genesis contains the highest number of half-verse chiasmi, while Numbers contains the most verse-level chiasmi.
Refuted
Table 2: Summary of detected chiasmi. 2700+ chiasmi were detected at the verse and half-verse level. The highest number of chiasmi was found in the Book of Genesis and Book of Numbers. Both the precision and the inter-annotator agreement increase for the verse-level chiasmi.
table
tables_png/dev/val_tab_1608.png
Table 2 presents an overview of the system’s output for chiastic structures at the half-verse and verse levels. A total of 1,896 chiastic structures were identified at the half-verse level, with an average length of 5.93 textual units (\pm 1.34) and an average score of 0.32 (\pm 0.1). For verse-level groupings, 879 chiastic structures were found, with an average length of 6.01 lines (\pm 1.38) and an average score of 0.29 (\pm 0.08).
nlp
yes
Swap rows or columns
papers/dev/nlp_2025.naacl-short.13.json
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
0094
tables/dev/val_tab_1608.tex