paper_id string | claim_id string | claim string | label string | caption string | evi_type string | evi_path string | context string | domain string | use_context string | operation string | paper_path string | detail_others string | license_name string | license_url string | claim_id_pair string | evi_path_original string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2025.clpsych-1.1 | val_tab_1507 | The application of the majority voting algorithm improved the RMSE on average by about 30% from 1.61 to 1.12 (columns GPT-4 {}_{\text{avg}} and GPT-4 {}_{\text{maj}} in Table 2 ). | Refuted | Table 2: RMSE results for research questions Q2, Q3, and Q4. GPT-4 stands for the GPT-4 annotator, Human for the human reader-annotator, avg : average of five GPT-4 completions/human guesses, maj : majority vote of five GPT-4 completions/human guesses, conf : majority vote of five GPT-4 completions/human guesses with c... | table | tables_png/dev/val_tab_1507.png | nlp | no | Change the cell values | papers/dev/nlp_2025.clpsych-1.1.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0210 | tables/dev/val_tab_1507.tex | ||
2024.eacl-long.20 | val_tab_1508 | Second, pairwise comparisons with the baseline show both CG and finetuning are effective in formality control, where CG has a slightly higher win ratio than FT against the baseline. | Refuted | Table 6: Human evaluation on Bengali, with quality on a 5-point scale \uparrow and formality on a 3-point scale ( \uparrow : formal) with standard deviations. Last two columns show pairwise comparison of formality scores to baseline NLLB-200 given the same source sentences (winning: scoring more in the direction of th... | table | tables_png/dev/val_tab_1508.png | The results are in Table 6. First, adding attribute control does not appear to impact translation quality. | nlp | no | Change the cell values | papers/dev/nlp_2024.eacl-long.20.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0116 | tables/dev/val_tab_1508.tex | |
2024.eacl-long.12 | val_tab_1509 | Development sets are 5k instances sampled from each set. | Refuted | Table 1: Dataset statistics for synthetic data generated in this work. We omit the average length of answers for fact verification as it is a classification task. SQ=Single-Query. TQ=Two-Queries. | table | tables_png/dev/val_tab_1509.png | We synthesize approximately 1.5 million multi-hop questions and 1.9 million claims. We use nucleus sampling (Holtzman et al., 2020) with a top-p probability of 0.9 for decoding when generating the data. | nlp | no | Change the cell values | papers/dev/nlp_2024.eacl-long.12.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0155 | tables/dev/val_tab_1509.tex | |
2410.17599 | val_fig_1506 | The underfitting delta model had a win rate of 54 when \alpha was 1.0, which increased to 64 when \alpha was 2.0. | Refuted | (a) Impact on instruction tuning; (b) Impact on unlearning; Impact of strength coefficient \alpha on performance | figure | figures/dev/val_fig_1506.png | For instruction tuning, we tested values within the range of [0.5, 2] and evaluated them on the first 50 data points of AlpacaEval. As shown in Figure 4(a) , we found that the performance was optimal when the \alpha value was 1.0, and increasing or decreasing \alpha resulted in decreased performance. This is similar to... | ml | yes | Legend Swap | papers/dev/ml_2410.17599.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0039 | null | |
2024.eacl-long.14 | val_fig_1508 | We observe that for all languages that are included in the pre-training corpus of XLM-R, the cross-lingual transfer performs similarly to fully supervised methods. | Refuted | Figure 5: Comparison of Various Scenarios. We group languages by whether they and their scripts are seen in the pre-training corpus of XLM-R. Languages are ordered by the XLM-R fully-supervised performance in every group. | figure | figures/dev/val_fig_1508.png | Here, we compare the performance between cross-lingual transfer and fully-supervised methods. | nlp | no | Legend Swap | papers/dev/nlp_2024.eacl-long.14.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0253 | null |
2023.ijcnlp-main.36 | val_tab_1510 | While the reported accuracy on the original test dataset is 38% for QAGNN and 40% for BioLinkBert, the accuracy on our 100 randomly selected demographic-independent questions used to construct the vignettes is 40% for QAGNN and 39% for BioLinkBert. | Refuted | Table 4: Accuracy (in percentages) of the two models on our demographically enhanced datasets. M =male; F =female; W =White; B =Black; A-A =African-American; H =Hispanic; As =Asian; SOr =sexual orientation; O*=original test dataset; O=the original, unmodified 100 vignettes; D=No demographic information; Gen=Gender; 1=Q... | table | tables_png/dev/val_tab_1510.png | nlp | no | Change the cell values | papers/dev/nlp_2023.ijcnlp-main.36.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0123 | tables/dev/val_tab_1510.tex | ||
2025.naacl-long.13 | val_tab_1511 | For Factuality, the annotators demonstrate moderate agreement (Krippendorff’s Alpha = 0.68), suggesting that the conversations are consistently viewed as highly factual, which implies that the SQL queries in our dataset are of high quality. | Refuted | Table 4: Summary of Human Annotation Scores for Naturalness, Factuality, and Helpfulness. | table | tables_png/dev/val_tab_1511.png | Table 4 shows the mean, standard deviation, and Krippendorff’s Alpha for inter-annotator agreement. The high mean scores close to 1 (1.15-1.5) and substantial agreement (Alpha 0.68-0.82) indicate high-quality, natural conversations with factual and helpful responses. For Naturalness, we observe that annotators have a s... | nlp | no | Change the cell values | papers/dev/nlp_2025.naacl-long.13.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0164 | tables/dev/val_tab_1511.tex | |
2024.acl-short.16 | val_fig_1509 | Compared to ELoRA, we can yield up to 1.86\times and 2.96\times runtime and FLOPs improvement while remaining comparable with LoRA in these two metrics. | Refuted | Figure 3: A comparison of various system performance between LoRA, ELoRA, and AFLoRA. | figure | figures/dev/val_fig_1509.png | Runtime & FLOPs Comparison. Fig. 3 shows the comparison of the normalized average training runtime, normalized FLOPs, and normalized trainable parameters. For AFLoRA, we average the training time, FLOPs, and trainable parameters over six GLUE datasets (except the MNLI and QQP datasets). Note, for LoRA and ELoRA, the tr... | nlp | other sources | Legend Swap | papers/dev/nlp_2024.acl-short.16.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0260 | null |
2023.emnlp-main.1 | val_tab_1512 | As shown in Table 1 , for CSQA2.0, higher scores are reported for retrieval only than for induction only, while the result is contrary for StrategyQA. | Refuted | Table 1: Performance on two ODQA tasks. The first two columns report scores on CSQA2.0 dev set and StreategyQA test set respectively. The last two columns compare IAG with ChatGPT on a randomly held-out subset containing 50 examples for each task. | table | tables_png/dev/val_tab_1512.png | Besides, the results on different setups of IAG-GPT suggest that, the relative contributions of the retrieval and the inductive knowledge can be different, depending on the tasks. | nlp | yes | Change the cell values | papers/dev/nlp_2023.emnlp-main.1.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0097 | tables/dev/val_tab_1512.tex | |
2025.naacl-long.15 | val_fig_1510 | In contrast, ChatGPT and Gemma show much lower bias, with values of 7.08\%^{2} and 1.32\%^{2} , respectively. | Refuted | (a); (b); Average EstTrueF1 (SemEval2017) and EstTrueMAP (SciDocsRR) ( Section 3.1 ) across models (left) and benchmarks (right) showing performance difference of LLMs across 4 widely used list formats. | figure | figures/dev/val_fig_1510.png | Fig. 4 displays the key findings of our evaluation across models and datasets with numerical results in Appx.- Tab. 10 . From Fig. 4 -left, we notice that Mistral exhibits the most bias, with the BiasF_{o} value ( Eq. 8 ) of 353.80\%^{2} . | nlp | other sources | Legend Swap | papers/dev/nlp_2025.naacl-long.15.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0259 | null | |
18669 | val_fig_1511 | Total protein foods were consumed more only during the 6 months after the intervention period (P < 0.001). | Refuted | Figure 1: Comparison of component scores of HEI-2015 measured during pre-intervention, 6 weeks after intervention and 6 months after intervention. In HEI-2015, a higher score indicates a higher diet quality. The Friedman test indicated significant differences in the component scores of HEI-2015 between pre-intervention... | figure | figures/dev/val_fig_1511.png | Figure 1 shows the comparison of component scores of HEI-2015 recorded during the pre-intervention, 6 weeks after intervention and 6 months after intervention. Accordingly, participants consumed significantly more total fruit (P < 0.001) and whole fruit (P < 0.001) after the intervention period. Although scores were ... | peerj | no | Graph Swap | papers/dev/peerj_18669.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0350 | null |
2023.ijcnlp-main.34 | val_tab_1513 | From Table 2 , we find that fact verification in a dataset with document-level evidence is more difficult than in the same dataset with sentence-level evidence (an average of 13.29% drop of in-domain F1). | Refuted | Table 2: Macro-F1 of 3-class fact verification on the evaluation set for all datasets in a zero-shot generalization setup. Rows correspond to the training dataset and columns to the evaluated dataset. The row SELF corresponds to the in-domain performance (training and testing on the same target dataset). | table | tables_png/dev/val_tab_1513.png | nlp | no | Change the cell values | papers/dev/nlp_2023.ijcnlp-main.34.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0128 | tables/dev/val_tab_1513.tex | ||
2024.eacl-long.8 | val_tab_1514 | Table 8 illustrates the self-consistency measured by the accuracy when comparing pairs, and demonstrates that even when using few outputs, the model is very consistent with the final rankings that would be achieved by using many more examples. | Refuted | Table 8: Accuracy when using fewer systems with respect to final rankings (using all 16 systems) and the ground truth labels. Results shown for Summeval COH using FlanT5-xl. | table | tables_png/dev/val_tab_1514.png | SummEval has 16 summaries per context which leads to 240 possible comparisons. If one were to instead randomly sample N outputs and consider all N\!\cdot\!(N\!-\!1) comparisons, how consistent would the rankings with the subset of systems be with respect to the final predicted rankings? | nlp | no | Change the cell values | papers/dev/nlp_2024.eacl-long.8.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0147 | tables/dev/val_tab_1514.tex | |
2211.16499 | val_fig_1512 | For Base-22k and Large, the difference becomes large after the scale multiplier is under 0.6. | Refuted | Figure 5: Counterfactual study of all sizes of ConvNext and Swin networks for object scale. | figure | figures/dev/val_fig_1512.png | In order to compare the robustness of ConvNext and Swin networks to scale, we first generate images of all main object models with unit object scale. Although objects have different volume in this state (e.g. microwaves are larger than spatulas), they take a fair amount of space in the frame and don’t present much of a... | ml | yes | Graph Swap | papers/dev/ml_2211.16499.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0036 | null | |
2403.19863 | val_fig_1513 | Hence, the disparity in decodability between color and digit attributes becomes significantly more pronounced in a 5-layer MLP in comparison to a 3-layer MLP. | Refuted | (a); (b); Exploring the Effect of depth modulation: (a) illustrates how the linear decodability of features decreases as neural network depth increases, while (b) dives into the training dynamics of MLPs with varying depths under ERM. | figure | figures/dev/val_fig_1513.png | As observed in Fig. 2(b) , the initial phases of training for both networks emphasize color attribute (since bias is easy to learn), resulting in a notable enhancement in the decodability of color in both models. Also, as the training progresses, the decodability of the digit is higher in a 3-layer model when compared ... | ml | no | Graph Flip | papers/dev/ml_2403.19863.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0046 | null | |
2501.08508 | val_tab_1515 | We note that the sampling time of FuncMol is an order of magnitude better than that of the baselines. | Refuted | Table 1: QM9 results w.r.t. test set for 10000 samples per model. \uparrow / \downarrow indicate that higher/lower numbers are better. The row data are randomly sampled molecules from the validation set. We report 1-sigma error bars over 3 sampling runs. | table | tables_png/dev/val_tab_1515.png | Table 1 reports the metrics described in Section 5.1. We see that FuncMol slightly improves VoxMol and both models perform worse compared to the equivariant point-cloud based baselines. | ml | other sources | Change the cell values | papers/dev/ml_2501.08508.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0033 | tables/dev/val_tab_1515.tex | |
19459 | val_tab_1516 | Participants with longer sleep latency had a higher percentage of suboptimal SRH, evening-type chronotype, and slept less than 6 h per night compared to those with sleep latency of less than 10 min ( P < 0.05). | Refuted | Table 1: Characteristics of study participants. | table | tables_png/dev/val_tab_1516.png | The study included 1,396 participants, with a mean age of 19.6 ±1.2 years, and 58.6% female, of whom 599 (42.9%) reported suboptimal SRH and 390 (27.9%) reported prolonged sleep latency (≥30 min). Table 1 presents the basic characteristics of the participants. | peerj | no | Change the cell values | papers/dev/peerj_19459.json | CC BY-NC 4.0 | https://creativecommons.org/licenses/by-nc/4.0/ | 0272 | tables/dev/val_tab_1516.html | |
2025.finnlp-1.15 | val_tab_1517 | (1) Fine-tuned language models consistently outperform generic LLMs; the performance gap can be narrowed through prompt design, few-shot learning, and model size. | Refuted | Table 1: Performance of different fine-tuned language models and LLMs under different prompts on the FiNER-ORD task. | table | tables_png/dev/val_tab_1517.png | nlp | no | Change the cell values | papers/dev/nlp_2025.finnlp-1.15.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0203 | tables/dev/val_tab_1517.tex | ||
2209.12590 | val_tab_1518 | We also show that adversarial training learns a useful generative model with meaningful latent space by interpolating between sentences (Table 3 ). | Refuted | Table 3 : Sentence interpolation (Yelp dataset). Representations of two sentences (top, bottom) are obtained by feeding them through an adversarially trained VAE encoder. Three linearly interpolated representations are passed to the VAE decoder and sentences generated by greedy sampling (middle) . | table | tables_png/dev/val_tab_1518.png | We obtain further insight into what the adversary learns by analyzing the word dropout scores for different sentences. Table 2 shows that the adversary applies lower dropout probabilities to less informative words such as ‘unknown’ tokens that replace all out-of-dictionary words, and so offer little information about t... | ml | no | Change the cell values | papers/dev/ml_2209.12590.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0004 | tables/dev/val_tab_1518.tex | |
2024.eacl-long.9 | val_tab_1519 | Comparing PECRS-small and -medium shows that Dist@K improvements can be achieved by scaling up the backbone model. | Refuted | Table 3: Results of conversation task compared with the state-of-the-art on ReDial. | table | tables_png/dev/val_tab_1519.png | Table 3 summarizes the results on conversation task, where PECRS achieves promising performance on both types of metrics. Both PECRS-small and -medium surpass all baselines over Dist@3 and Dist@4. | nlp | no | Swap rows or columns | papers/dev/nlp_2024.eacl-long.9.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0140 | tables/dev/val_tab_1519.tex | |
2201.11817 | val_fig_1514 | Matching the main result of the experimental study, we find that strategic directed exploration increases with description length, while strategic random exploration remains unaffected. | Refuted | Figure 4: Illustration of strategic directed and random exploration in the horizon task. (a) Human data from Somerville et al. ( 2017 ). During adolescence, people start to engage more in strategic directed exploration, whereas strategic random exploration remains constant over time. (b) Data simulated from RR-RL 2 wit... | figure | figures/dev/val_fig_1514.png | Results: We trained RR-RL 2 with a targeted description length of \{100,200,\ldots,10000\} nats on the same distribution used in the original experimental study. Figure 4 (b) visualizes how strategic directed and random exploration change as the description length of RR-RL 2 increases. | ml | no | Graph Flip | papers/dev/ml_2201.11817.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0067 | null | |
2025.naacl-short.11 | val_tab_1520 | Statistics are presented in Table 3 . Each transcript is distilled into a table of about 40 facts, from which the model selects 9. | Refuted | Table 3: Statistics of supporting facts. | table | tables_png/dev/val_tab_1520.png | Supporting Facts. We analyzed the supporting facts identified by the model in cases of correct decisions after reflections. | nlp | yes | Change the cell values | papers/dev/nlp_2025.naacl-short.11.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0157 | tables/dev/val_tab_1520.tex | |
2023.ijcnlp-main.40 | val_tab_1521 | Finally, we see ConDA outperforms all the baselines in terms of maximum AUROC, and all but one in terms of average AUROC. | Refuted | Table 4: Performance of ConDA in comparison to unsupervised baselines, as AUROC. For ConDA, we report the average AUROC over all sources (for each target) and also the maximum AUROC (across all sources), along with the corresponding source in parentheses. Bold shows superior performance across each target. | table | tables_png/dev/val_tab_1521.png | We compare our ConDA framework with relevant unsupervised baselines and report results in Table 4 . Out of the four GLTR measures ( \log p(x) , Rank, Log Rank, and Entropy), the first three fare quite well for detecting CTRL-generated text, but performance on other generators is quite poor. DetectGPT, which is the most... | nlp | no | Swap rows or columns | papers/dev/nlp_2023.ijcnlp-main.40.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0134 | tables/dev/val_tab_1521.tex | |
2025.naacl-short.1 | val_fig_1515 | With 2.75B tokens, the model achieved a Best Move accuracy of 40.11%. | Refuted | Figure 3: Left: Best Move Accuracy of ChessLLM training with short round data. The accuracy of the best move increases with the number of training tokens. Right: Legal Move Accuracy of ChessLLM training with short round data. The accuracy of the legal move increases with the number of training tokens. | figure | figures/dev/val_fig_1515.png | Fig. 3 Left shows that with only 0.5B tokens, our model achieves a legal move accuracy of 99.11% on in-distribution boards, indicating its impressive preliminary chess playing ability. As data volume increases, performance improves, demonstrating the model’s scalability and potential for further enhancement. The high a... | nlp | no | Graph Flip | papers/dev/nlp_2025.naacl-short.1.json | CC BY-NC-SA 4.0 | http://creativecommons.org/licenses/by-nc-sa/4.0/ | 0239 | null | |
2205.11361 | val_tab_1522 | Table 2 shows that MPGD can lead to better test performance and lower accuracy gap when compared to the baseline (full batch GD) and Gaussian noise-perturbed GD. | Refuted | Table 2: ResNet-18 trained on CIFAR-10 for 1000 epochs. Here, accuracy gap = training accuracy - validation accuracy. The results in parenthesis are achieved with the variant of MPGD ( 11 ). All the results are averaged over 5 models trained with different seed values. | table | tables_png/dev/val_tab_1522.png | ml | no | Change the cell values | papers/dev/ml_2205.11361.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0006 | tables/dev/val_tab_1522.tex | ||
2201.10328 | val_fig_1516 | With the increase of data size, the performance increases. | Refuted | Figure 6: The Cumulative Reward of Anonymous in Validation Dataset. The performance of DAgger models is related to the number of iteration rounds. | figure | figures/dev/val_fig_1516.png | We show the performance of models with different settings in the validation set in Figure 6 . With the same data size, the model using DAgger performs better. | ml | other sources | Graph Flip | papers/dev/ml_2201.10328.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0072 | null | |
2023.emnlp-main.9 | val_tab_1523 | The number of direct and contextual rationales is the largest among all types, which further increases when we look at the error cases of InstructGPT. | Refuted | Table 4: Annotation results of rationale types on 100 examples randomly sampled from all subquestions (left) and from the error examples by InstructGPT (right). | table | tables_png/dev/val_tab_1523.png | We report our annotation results in Table 4. | nlp | no | Swap rows or columns | papers/dev/nlp_2023.emnlp-main.9.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0178 | tables/dev/val_tab_1523.tex | |
2025.naacl-long.12 | val_tab_1524 | Merging the encoder layers after language adaptation with the original model improves the model quality while using RetroMAE leads to decreased results. | Refuted | Table 3: Results of the model, method, and data variation. † The reference results for the training objective and training examples sections is model based on ru-en-RoBERTa. Each experiment changes a single component (e.g., use AnglE similarity instead of cosine). Model performance is evaluated on ruMTEB. The best scor... | table | tables_png/dev/val_tab_1524.png | We perform contrastive fine-tuning for each model and then evaluate them on ruMTEB. Results (see Table 3 ) show that ru-en-RoBERTa outperforms both baselines by a significant margin. Additionally, the fact that XLM-R slightly outperforms ruRoBERTa may indicate that XLM-R copes better with knowledge transfer from basic ... | nlp | yes | Swap rows or columns | papers/dev/nlp_2025.naacl-long.12.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0093 | tables/dev/val_tab_1524.tex | |
2025.naacl-short.1 | val_fig_1517 | As data volume increases, performance improves, demonstrating the model’s scalability and potential for further enhancement. | Refuted | Figure 3: Left: Best Move Accuracy of ChessLLM training with short round data. The accuracy of the best move increases with the number of training tokens. Right: Legal Move Accuracy of ChessLLM training with short round data. The accuracy of the legal move increases with the number of training tokens. | figure | figures/dev/val_fig_1517.png | Fig. 3 Left shows that with only 0.5B tokens, our model achieves a legal move accuracy of 99.11% on in-distribution boards, indicating its impressive preliminary chess playing ability. | nlp | no | Graph Flip | papers/dev/nlp_2025.naacl-short.1.json | CC BY-NC-SA 4.0 | http://creativecommons.org/licenses/by-nc-sa/4.0/ | 0238 | null | |
17403 | val_tab_1525 | A day effect was only found for the Nine Hole Peg test, revealing that the time for completing the task during the assessment-2 was less than during the assessment-1 (MD: 0.39 s; 95% CI [0.19–0.60]; P < 0.001). | Refuted | Table 2: Clinical characteristics of participants in the different test for the wrist extensor muscles. | table | tables_png/dev/val_tab_1525.png | No side * day interaction was found in the RM-ANOVA for any of the variables ( P > 0.114), indicating a similar pattern across assessments of the dominant and non-dominant sides. | peerj | no | Change the cell values | papers/dev/peerj_17403.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0287 | tables/dev/val_tab_1525.html | |
2025.naacl-long.1 | val_tab_1526 | The model has a slight edge in performance on the sarcasm and visual metaphor subsets of the task, perhaps due to the difficulty of these subsets and any potential spurious correlations during fine-tuning. | Refuted | Table 4: Human baseline results (F1@0) by phenomenon and source dataset. | table | tables_png/dev/val_tab_1526.png | To find out how humans perform on the task, we hire two expert annotators with formal education in linguistics. We present them with 10 example instances and then ask them to complete 99 randomly sampled test set instances. We also evaluate our best model (see Table 3 ) on the same set. Results are shown in Table 4 . H... | nlp | other sources | Change the cell values | papers/dev/nlp_2025.naacl-long.1.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0202 | tables/dev/val_tab_1526.tex | |
2024.eacl-short.10 | val_tab_1527 | This figure is inverted for Raven, which obtains a 4-fold improvement over the base model, inferring the correct answer more than 85% of the time. | Refuted | Table 1: The data statistics and experimental results (Exact Match) of various benchmarks and models. The best results are in bold. GPT-3.5 results are based on 5-shots. Sota is based on previously published results. | table | tables_png/dev/val_tab_1527.png | The results are summarised in Table 1. Compared to the base model, Raven significantly improves the results on the PhraseBank dataset by an absolute 25.9%. On the Wiki-SQL dataset the base model is unable to infer the correct answer almost 80% of the time. | nlp | yes | Change the cell values | papers/dev/nlp_2024.eacl-short.10.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0143 | tables/dev/val_tab_1527.tex | |
2024.acl-short.9 | val_fig_1518 | As expected, p(\boldsymbol{x|y}) shows a higher correlation with accuracy than fluency, and p(\boldsymbol{y}) shows the opposite pattern. | Refuted | Figure 5: \text{accuracy}_{M} and \text{fluency}_{M} predict human accuracy and fluency ratings for RLTC and WMT submissions to the general translation task in 2022 and 2023. zhen and ende refer to Chinese-English and English-German language pairs. All correlations reported are significant ( p<.001 ). | figure | figures/dev/val_fig_1518.png | A similar conclusion is suggested by Figure 5 , which shows Pearson correlations of translation probability ( p(\boldsymbol{y|x}) ), \text{accuracy}_{M} ( p(\boldsymbol{x|y}) ) and \text{fluency}_{M} ( p(\boldsymbol{y}) ) with human ratings of accuracy and fluency for RLTC and MTMQM. 9 9 9 Values are ranked by percenti... | nlp | no | Legend Swap | papers/dev/nlp_2024.acl-short.9.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0256 | null | |
16988 | val_tab_1528 | A total of 54 COPD patients (25 in stages I–II and 29 in stages III–IV) and 24 healthy individuals participated in the study from September 9, 2021, to May 1, 2022. | Refuted | Table 1: Participants’ clinical characteristics and demographics. | table | tables_png/dev/val_tab_1528.png | peerj | no | Change the cell values | papers/dev/peerj_16988.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0281 | tables/dev/val_tab_1528.html | ||
2024.eacl-short.10 | val_tab_1529 | On the Wiki-SQL dataset the base model is unable to infer the correct answer almost 80% of the time. | Refuted | Table 1: The data statistics and experimental results (Exact Match) of various benchmarks and models. The best results are in bold. GPT-3.5 results are based on 5-shots. Sota is based on previously published results. | table | tables_png/dev/val_tab_1529.png | The results are summarised in Table 1. Compared to the base model, Raven significantly improves the results on the PhraseBank dataset by an absolute 25.9%. | nlp | no | Change the cell values | papers/dev/nlp_2024.eacl-short.10.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0142 | tables/dev/val_tab_1529.tex | |
2025.naacl-long.17 | val_fig_1519 | Our findings show that: (1) As expected, using the most closely related language as the auxiliary example bank yields the best performance, as it provides relevant guidance to the LLM, and (2) Selecting a random or unrelated language results in little to no improvement, with performance remaining close to the relevance... | Refuted | Figure 1: Token-F1 evaluation on the cross-lingual QA task in Manipuri, with retrievers trained using different auxiliary high-resource example banks: (1) Closely related language, (2) Random language, and (3) Unrelated language. | figure | figures/dev/val_fig_1519.png | We evaluate three setups: (1) Our method (Alg. 2 ) of selecting the most closely related language as the auxiliary example bank, (2) selecting a random language, and (3) selecting the most unrelated language. We show the few-shot performance on cross-lingual QA task in Figure 1 , with retrievers trained under each setu... | nlp | no | Legend Swap | papers/dev/nlp_2025.naacl-long.17.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0246 | null | |
2202.10720 | val_fig_1521 | Figure 3 shows that across different noise levels, EIGNN consistently outperforms IGNN. | Refuted | Figure 3: Accuracy on datasets with different feature noise. | figure | figures/dev/val_fig_1521.png | On synthetic chain datasets, we add random perturbations to node features as in Wu et al. [31]. Specifically, we add uniform noise \epsilon\sim\mathcal{U}(-\alpha,\alpha) to each node’s features for constructing noisy datasets. | ml | no | Legend Swap | papers/dev/ml_2202.10720.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0065 | null |
2024.eacl-long.16 | val_tab_1530 | As reported in Table 2 , in Q Drawer the value of the entropy threshold \theta has, by design, a direct impact on the number of questions asked: i.e., a model that only asks for clarification in the face of high uncertainty ( \theta=1.1 ) gives rise to fewer dialogues with questions and to a lower average number of que... | Refuted | Table 2: The effect of different \theta values on the number of questions asked by Q Drawer. We report the percentage of dialogues with at least one clarification question and, within this subset, the average number of clarification questions per dialogue. | table | tables_png/dev/val_tab_1530.png | Asking clarification questions carries a cost, both from the perspective of the agent asking the question and the agent processing it (Clark, 1996 ; Purver, 2004 ) . We take this partially into account by adopting a simple approach: we evaluate Q Drawer by using different entropy threshold values \theta to control for ... | nlp | no | Change the cell values | papers/dev/nlp_2024.eacl-long.16.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0112 | tables/dev/val_tab_1530.tex | |
17773 | val_tab_1531 | Regarding methodological quality assessment tools, 12 studies utilized the Cochrane Collaboration Network RoB risk assessment tool, four employed the PEDro scale, one used the Cochrane Collaboration Network RoB and PEDro, and one applied the EPHPP for risk assessment. | Refuted | Table 1: Basic characteristics of the literature included in the study. | table | tables_png/dev/val_tab_1531.png | The included meta-analyses were published between 2018 and 2023, and all the original studies were randomized and controlled. The largest study included 2,526 participants and the smallest had 182 participants, involving a total of 18,461 individuals. The interventions in the trial groups consisted of aerobic, physical... | peerj | no | Change the cell values | papers/dev/peerj_17773.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0302 | tables/dev/val_tab_1531.html | |
2024.eacl-short.7 | val_fig_1522 | While Flan-T5 introduces more errors than GPT-3.5 overall, the trends are analogous. | Refuted | (a) Prevalence of factual errors in each of the domains; (b) Distribution of error categories across domains; Distribution of errors and error categories across domains | figure | figures/dev/val_fig_1522.png | Figure 1(a) shows the average proportion of sentences marked as inconsistent (with respect to the corresponding input) in summaries generated by GPT-3.5 Brown et al. (2020) and Flan-T5 XL Chung et al. (2022) for three domains: News, medical, and legal. Perhaps surprisingly, we observe a higher prevalence of inconsi... | nlp | no | Legend Swap | papers/dev/nlp_2024.eacl-short.7.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0235 | null |
16727 | val_tab_1532 | These variables, including age (p = 0.002), having children (p = 0.002), being related to the health sector (p = 0.001), perception of the efficacy and protective effect of the booster dose (p < 0.001), as well as fear of adverse effects (p < 0.001), were significantly associated with the IBV (Table 2). | Refuted | Table 2: Bivariate analysis of study characteristics according to intention to be vaccinated with the booster dose COVID-19 in Peru, July 2022 (n= 924). | table | tables_png/dev/val_tab_1532.png | Several variables were found to be significantly associated with the intention to be vaccinated (IBV), as detailed in Table 2. | peerj | yes | Change the cell values | papers/dev/peerj_16727.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0263 | tables/dev/val_tab_1532.html | |
2024.eacl-long.8 | val_tab_1533 | Table 2 shows that comparative assessment yields highly impressive performance for long-spoken summarization, with comparative assessment out-competing all other baselines. | Refuted | Table 2: Spearman correlation coefficient for Podcast . | table | tables_png/dev/val_tab_1533.png | Podcast Assessment : When considering podcast summarization with long inputs of over 5k tokens on average, only Llama2 models (which have a limit of 4k tokens) were used (as FlanT5 has a limit of 1k tokens). | nlp | no | Swap rows or columns | papers/dev/nlp_2024.eacl-long.8.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0144 | tables/dev/val_tab_1533.tex | |
18000 | val_tab_1534 | Among the nine independent variables, five correlated with arm swing speed at BI ( Table 1 ). | Refuted | Table 1: Peak angular momentum variables and correlations with the arm swing speed at ball impact (BI). | table | tables_png/dev/val_tab_1534.png | peerj | no | Change the cell values | papers/dev/peerj_18000.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0293 | tables/dev/val_tab_1534.html | ||
2025.naacl-long.11 | val_tab_1535 | From the results, we notice that the correlations shown by the non-specialized embeddings are extremely weak. | Refuted | Table 2: Gender diversity. Average Spearman correlations between proportion of female voices and diversity scores induced by speech representations. | table | tables_png/dev/val_tab_1535.png | In Table 2 we report correlation scores for the gender diversity. Again, we split into two groups: male-voice- and female-voice-dominant. | nlp | other sources | Swap rows or columns | papers/dev/nlp_2025.naacl-long.11.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0223 | tables/dev/val_tab_1535.tex | |
2405.14507 | val_fig_1523 | However, in our exploratory experiment on Mixtral 8x7B [ 17 ] , we find simply raising the number of activated experts (blue lines in Figure 1 ) does not lead to stable improvements and may even hurt performance on different tasks. | Refuted | Figure 1 : Performance comparison between increasing the value of top- k ( i.e. , ensemble routing) and SCMoE. SCMoE surpasses the performance of ensemble routing across various benchmarks. | figure | figures/dev/val_fig_1523.png | In this paper, we investigate the impact of unchosen experts 1 1 1 Unchosen experts refer to the experts not selected by default routing ( e.g. , top-2 routing in Mixtral 8x7B). on the performance of MoE models and explore their suitable usage. A direct hypothesis is that incorporating more experts improves MoE models ... | ml | no | Graph Flip | papers/dev/ml_2405.14507.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0053 | null | |
2023.emnlp-main.36 | val_tab_1536 | As evident from the results in Table 5, GPT-NeoX with a history length of 100 shows comparable performance to supervised learning approaches, without any fine-tuning on any TKG training dataset. | Refuted | Table 5: Performance (Hits@K) comparison between supervised models and ICL for single-step (top) and multi-step (bottom) prediction. The first group in each table consists of supervised models, whereas the second group consists of ICL models, i.e., GPT-NeoX with a history length of 100. The best model for each dataset ... | table | tables_png/dev/val_tab_1536.png | We present a comparative analysis of the top-performing ICL model against established supervised learning methodologies for TKG reasoning, which are mostly based on graph representation learning. | nlp | no | Change the cell values | papers/dev/nlp_2023.emnlp-main.36.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0185 | tables/dev/val_tab_1536.tex | |
16887 | val_fig_1524 | Conversely, low-frequency adaptation resulted in a 4% increase in numerosity (linear regression slope of 1.03, compared to the baseline slope of 0.99), and the line shifted towards the upper side compared to the baseline (no adaptation), indicating an overestimation of perceived numerosity (Figure 2B). | Refuted | Figure 2: Results. (A) Adaptation index under grouping and no-grouping conditions; (B) The perceived numerosity, averaged across trials and subjects, varies as a function of physical numerosity in the three adaptation conditions. The analysis includes best-fitting linear regressions (R2 > 0.98 in all conditions). The re... | figure | figures/dev/val_fig_1524.png | The adaptation effect is measured by the adaptation index. We calculated the adaptation index separately for the grouping and no-grouping conditions. The results are presented in Figure 2A, which illustrates the adaptation index under both grouping and no-grouping conditions. Notably, the grouping condition exhibits a mor... | peerj | no | Legend Swap | papers/dev/peerj_16887.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0325 | null |
2024.acl-short.16 | val_fig_1525 | Specifically, we illustrate the number of iterations required before freezing each component in Fig. 5. Interestingly, as can be seen from the figure, analysis reveals that the down-projection matrix and the intermediate linear layer require a longer training duration prior to being frozen, as compared to t... | Refuted | Figure 5: Visualization of freezing iterations for each layer. ‘out’ and ‘inter’ refer to the second and the first MLP layer of the FFN, respectively. ‘A’ and ‘B’ represent the down-projection and up-projection matrix, respectively. The darker the color, the more iterations the matrix has to go through before freezing. | figure | figures/dev/val_fig_1525.png | Discussion on Freezing Trend. We use the RTE dataset as a case study, to understand the freezing trend of the PMs across different layers. | nlp | other sources | Graph Flip | papers/dev/nlp_2024.acl-short.16.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0262 | null |
17403 | val_tab_1537 | For the active range of motion, reliability was ‘excellent’ for extension, flexion, and total range (ICC: 0.94–0.97). | Refuted | Table 3: Reliability indicators of myotonometry, manual dexterity, pressure pain thresholds, active range of motion, and maximal isometric strength for the wrist extensor muscles. | table | tables_png/dev/val_tab_1537.png | Table 3 presents ICC3,1, SEM, MDC90, and MDC95 for each variable. For the muscle mechanical properties of the dominant and non-dominant sides, reliability was ‘excellent’ for the frequency and stiffness parameters (ICC: 0.91–0.96), and ‘good’ to ‘excellent’ for the decrement (ICC: 0.86–0.91). For the manual dexter... | peerj | no | Change the cell values | papers/dev/peerj_17403.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0288 | tables/dev/val_tab_1537.html | |
2210.05883 | val_tab_1538 | From Table 6 , we can see that RoBERTa with AD-Drop achieves better generalization, where AD-Drop boosts the performance by 0.66 on HANS and 3.35 on PAWS-X, illustrating that the model trained with AD-Drop generalizes better to OOD data. | Refuted | Table 6: Testing AD-Drop on OOD datasets. | table | tables_png/dev/val_tab_1538.png | To further demonstrate AD-Drop is beneficial to reducing overfitting, we test AD-Drop with RoBERTa base on two out-of-distribution (OOD) datasets, i.e., HANS and PAWS-X. For HANS, we use the checkpoints trained on MNLI and test their performance on the validation set (the test set is not supplied). For PAWS-X, we use t... | ml | no | Change the cell values | papers/dev/ml_2210.05883.json | CC BY-NC-SA 4.0 | http://creativecommons.org/licenses/by-nc-sa/4.0/ | 0017 | tables/dev/val_tab_1538.tex | |
19471 | val_tab_1539 | For SU kinematics, C4 exhibited the largest horizontal displacements for the pelvis, femur, knee, and lower leg, all significantly higher than other clusters ( p < 0.001). | Refuted | Table 2: Comparisons of features between the clusters classified by Louvain clustering unsupervised machine learning. | table | tables_png/dev/val_tab_1539.png | To validate the quality and distinctiveness of the Louvain clustering solution, we calculated widely-used cluster validity indices. The Davies–Bouldin index was 2.09, and the Calinski–Harabasz index was 59.26. While the Davies–Bouldin index was somewhat high (lower values indicate better separation), the Calinski–Harab... | peerj | other sources | Change the cell values | papers/dev/peerj_19471.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0305 | tables/dev/val_tab_1539.html | |
2023.ijcnlp-main.36 | val_tab_1540 | It also has an equivalent amount or more for almost any ethnicity, except for Asians. | Refuted | Table 7: Percentage of questions with changed answers between the biomedical and generic model as compared to a question with no demographic information about the patient. M =male; F =female; W =White; B =Black; A-A =African-American; H =Hispanic; As =Asian; SOr =sexual orientation. | table | tables_png/dev/val_tab_1540.png | Similar to our analysis between QAGNN and BioLinkBert above, our analysis between the biomedical and generic models can be split into the amount of answers and accuracy that changes when the dimensions change. From Table 7 it is visible that the generic transformer has more than double the amount of answers change for ... | nlp | yes | Change the cell values | papers/dev/nlp_2023.ijcnlp-main.36.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0077 | tables/dev/val_tab_1540.tex | |
16901 | val_tab_1542 | The greatest mean maximum load was obtained for the medially positioned plate ( Table 2 ), but the difference between this group and the dorsal LP lag screw and the medial LP lag screw groups was not significant ( p = 0.70 and p = 0.52, respectively). | Refuted | Table 2: Comparison of mean values of flexural stiffness, initial flexural stiffness, shear stiffness and maximum load, and their standard deviations (SDs) for different fixation methods. | table | tables_png/dev/val_tab_1542.png | The addition of an interfragmentary lag screw to the medially positioned plate increased the initial flexural stiffness by 153% and the shear stiffness by 352%, see Table 2 . The dorsal plate with the lag screw was the stiffest construct ( Figs. 5A , 5B ). Significant differences in stiffness were observed ( p < 0.05, ... | peerj | other sources | Change the cell values | papers/dev/peerj_16901.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0312 | tables/dev/val_tab_1542.html | |
18312 | val_fig_1526 | The results showed that the microbiota developed in NEMM were predicted to have stronger anaerobic ability, whether under anaerobic or normoxic conditions. | Refuted | Figure 7: BugBase analysis among different groups. Note: X-axis, group name; Y-axis, relative abundance in percentage. The three lines from bottom to top indicate the lower quartile, average, and upper quartile. | figure | figures/dev/val_fig_1526.png | BugBase was used to predict the function of the bacterial community in NEMM and SHI under anaerobic and normoxic culture conditions. | peerj | no | Graph Swap | papers/dev/peerj_18312.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0347 | null |
17090 | val_tab_1543 | IS was negatively correlated with SS (Index = −0.179, P < 0.001). | Refuted | Table 3: Regression analysis of appearance anxiety, interpersonal sensitivity, social support, and depression. | table | tables_png/dev/val_tab_1543.png | A multiple mediation analysis was conducted to explore the mediation effects of IS and SS in a college student population. Control variables included gender, age, BMI, GPA, being an only child or not, and home address. AA and depression were entered as independent and dependent variables respectively. The proposed medi... | peerj | other sources | Change the cell values | papers/dev/peerj_17090.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0278 | tables/dev/val_tab_1543.html | |
2205.13790 | val_tab_1544 | As shown in Table 4 , when we directly evaluate detectors trained without robustness augmentation, BEVFusion shows higher accuracy than the LiDAR-only stream and vanilla LiDAR-camera fusion approach in TransFusion. | Refuted | Table 4: Results on robustness setting of object failure cases. Here, we report the results of baseline and our method that trained on the nuScenes dataset with and without the proposed robustness augmentation (Aug.). All settings are the same as in Table 3 . | table | tables_png/dev/val_tab_1544.png | LiDAR fails to receive object reflection points. Here exist common scenarios when LiDAR fails to receive points from the object. For example, on rainy days, the reflection rate of some common objects is below the threshold of LiDAR hence causing the issue of object failure AnonymousBenchmark . To simulate such a scenar... | ml | no | Change the cell values | papers/dev/ml_2205.13790.json | Public Domain | http://creativecommons.org/publicdomain/zero/1.0/ | 0027 | tables/dev/val_tab_1544.tex | |
2025.clpsych-1.8 | val_tab_1545 | Interestingly, the 4-bit quantized LLaMA 405b model did not outperform the Pythia suite, attaining a lower Spearman \rho of 0.457 on the 64-token sliding window. | Refuted | Table 1: The AVH dataset Spearman’s \rho between the maximum sliding window PPL and TALD across model size. Bold indicates the highest \rho for a model. | table | tables_png/dev/val_tab_1545.png | As shown in Table 1 , all correlations between maximum sliding window PPL and TALD scores were statistically significant (p-value < 0.01) across all model and sliding window sizes. The strongest correlations consistently occurred with a 64-token sliding window, with coefficients peaking at 0.486 for the 1.4b model, and... | nlp | other sources | Swap rows or columns | papers/dev/nlp_2025.clpsych-1.8.json | CC BY-NC-SA 4.0 | http://creativecommons.org/licenses/by-nc-sa/4.0/ | 0150 | tables/dev/val_tab_1545.tex | |
17254 | val_fig_1527 | As shown in Figure 8, the integrated model achieved an AUC of 0.838 compared with 0.654, 0.768, and 0.893 for clinicians D1, D2, and D3, respectively, suggesting differences between the model and the expert evaluation. | Refuted | Figure 8: The ROC curves of the integrated model and diagnostic results of each radiologist in the test cohort. The integrated model reached a high AUC of 0.838; doctor1 achieved an AUC of 0.654, doctor2 an AUC of 0.768, and doctor3 the highest at 0.893. | figure | figures/dev/val_fig_1527.png | The performance of the integrated model was compared with that of clinical experts. Three clinicians with varying levels of experience in oral diseases, i.e., less than 5 years (D1), less than 10 years but longer than D1 (D2), and more than 15 years (D3), were asked to interpret the imaging and clinical data. The deci... | peerj | no | Legend Swap | papers/dev/peerj_17254.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0340 | null |
2025.clpsych-1.1 | val_tab_1546 | At the same time, the RMSE obtained using the majority voting is also considerably lower than the results shown in the previous subsection from just one run (1.12 vs 1.45; columns GPT-4 {}_{\text{maj}} and GPT-4 in Table 2 ). | Refuted | Table 2: RMSE results for research questions Q2, Q3, and Q4. GPT-4 stands for the GPT-4 annotator, Human for the human reader-annotator, avg : average of five GPT-4 completions/human guesses, maj : majority vote of five GPT-4 completions/human guesses, conf : majority vote of five GPT-4 completions/human guesses with c... | table | tables_png/dev/val_tab_1546.png | The application of the majority voting algorithm improved the RMSE on average by about 30% from 1.61 to 1.12 (columns GPT-4 {}_{\text{avg}} and GPT-4 {}_{\text{maj}} in Table 2 ). However, compared to the results shown in previous Section 4.2 (also shown in Table 2 ), generating five runs and taking the average increas... | nlp | yes | Change the cell values | papers/dev/nlp_2025.clpsych-1.1.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0102 | tables/dev/val_tab_1546.tex | |
2025.naacl-long.14 | val_fig_1528 | Baseline rankings across languages are usually stable; with a few notable exceptions such as Gemma-1.1 (7B) which achieves a rank of 4 in Telugu. | Refuted | Figure 4: \mirage arena-based leaderboards: (left) Bradley-Terry model coefficients with rankings with GPT-4o as a judge pairwise judgments on a subset of 100 sampled queries. (right) Synthetic rankings using heuristic-based features and a learning to rank model. Each highlighted value is the rank of the model on the l... | figure | figures/dev/val_fig_1528.png | Figure 4 (left) shows the arena-based leaderboard using bootstrapping and Bradley-Terry modeling after conducting 200 tournaments and sampling 100 matches per tournament on a subset of 100 queries using GPT-4o pairwise comparisons. We observe that proprietary models such as GPT-4o and GPT-4, and larger models such as L... | nlp | other sources | Legend Swap | papers/dev/nlp_2025.naacl-long.14.json | same as above | CC BY-SA 4.0 | http://creativecommons.org/licenses/by-sa/4.0/ | 0229 | null |
2023.emnlp-main.19 | val_fig_1529 | Loss method outperformed all others in approximately one-third of the 35 settings examined, as indicated in Figure 5 (left). | Refuted | Figure 5: Summary of subset selection strategies performances from. Left: percentage of times each strategy gets the best performance out of 35 settings (across each of the 7 languages and 5 \hat{D}^{Syn}_{train} sizes). Right: bootstrapped confidence intervals for the percentages on the left. | figure | figures/dev/val_fig_1529.png | Effective subsets have high diversity and predictive uncertainty . Our analysis reveals statistically significant differences between the subset selection strategies, highlighting the effectiveness of the hybrid approaches ( UMT/EMT+ Loss ) that consider both diversity and predictive uncertainty. Among the strategies t... | nlp | no | Category Swap | papers/dev/nlp_2023.emnlp-main.19.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0247 | null | |
2025.naacl-short.8 | val_tab_1547 | Average reward model accuracy ("Avg") in Table 1 shows that English RMs surpass target language RMs in general. | Refuted | Table 1: Multilingual RewardBench evaluation results on the target language ("Target") and English ("English") RMs. " \Delta " denotes the accuracy gain of English RMs compared to the target language RMs. English RMs show higher average scores in the lingual axis than target language RMs. Also, English RMs excel target... | table | tables_png/dev/val_tab_1547.png | nlp | no | Swap rows or columns | papers/dev/nlp_2025.naacl-short.8.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0172 | tables/dev/val_tab_1547.tex | ||
2023.emnlp-main.4 | val_tab_1548 | Furthermore, our model, SocialSense {}_{\text{Zero}} , achieves the highest scores consistently across all metrics. | Refuted | Table 2: The above Zero-Shot Response forecasting results highlight that the Social Prompt from Section 3.4 consistently offers an advantage. | table | tables_png/dev/val_tab_1548.png | In addition to supervised response forecasting, we also evaluate our framework under the zero-shot setting (Section 3.4 ). The results are presented in Table 2 . Based on the higher scores attained by ChatGPT L , it is evident that the inclusion of latent structured persona information indeed aids the model in comprehe... | nlp | no | Change the cell values | papers/dev/nlp_2023.emnlp-main.4.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0177 | tables/dev/val_tab_1548.tex | |
2025.finnlp-1.15 | val_tab_1549 | Additionally, few-shot learning and larger LLMs demonstrate notable advantages over their smaller counterparts. | Refuted | Table 1: Performance of different fine-tuned language models and LLMs under different prompts on the FiNER-ORD task. | table | tables_png/dev/val_tab_1549.png | (1) Fine-tuned language models consistently outperform generic LLMs; the performance gap can be narrowed through prompt design, few-shot learning, and model size. Table 1 demonstrates that fine-tuned language models surpass generic LLMs in zero-shot direct prompting. However, the performance of generic LLMs improves si... | nlp | no | Change the cell values | papers/dev/nlp_2025.finnlp-1.15.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0204 | tables/dev/val_tab_1549.tex | |
17407 | val_tab_1550 | The most encountered category was fibers, followed by metal, glass, wood, paint chips and balloons ( Table 2 ). | Refuted | Table 2: Composition of plastic and non-plastic debris found in 1,447 analyzed pellets of the neotropic cormorant,N. brasilianus, on the Circuito de Playas Costa Verde (CPCV), Lima, Perú, during both the pre-pandemic and pandemic phases. | table | tables_png/dev/val_tab_1550.png | In addition to plastic, various other types of anthropogenic materials found in 23 out of 1,447 pellets were identified and classified as non-plastic debris. | peerj | yes | Change the cell values | papers/dev/peerj_17407.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0292 | tables/dev/val_tab_1550.html | |
2024.eacl-long.18 | val_tab_1551 | Although this setting works reasonably well for XLM-R, it yields poor performance for mT5. | Refuted | Table 5: Regression vs. classification in lexicon-based pretraining for zero-shot sentiment analysis. | table | tables_png/dev/val_tab_1551.png | In Table 5 we present the average performance across the four language groups to compare the effectiveness of lexicon-based pretraining in regression and classification tasks for both binary and 3-way classification. Our findings indicate that regression performs better for binary classification, while classification l... | nlp | yes | Change the cell values | papers/dev/nlp_2024.eacl-long.18.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0110 | tables/dev/val_tab_1551.tex | |
16850 | val_fig_1530 | Positive correlations between AR and ORC1 were confirmed in the TCGA-PRAD samples ( Figure 7D ). | Refuted | Figure 7: AR activates ORC1 to sustain tumor progression and enzalutamide resistance in PRAD. (A) The anchorage-independent growth of C4-2B and 22RV-1 cells in soft agar (scale bars = 200 µm, left). Quantification of the soft agar colony formation assay results (right). (B) Sphere formation assays revealing the self-re... | figure | figures/dev/val_fig_1530.png | To elucidate the function of ORC1 in PRAD, we established stable ORC1-overexpressing PRAD cells (22RV-1 and C4-2B). The up-regulation of ORC1 significantly enhanced PRAD cell soft agar colony formation efficiency ( Figure 7A ). Besides, the self-renewal potentiality of PRAD cells was also increased when cells were tran... | peerj | no | Graph Flip | papers/dev/peerj_16850.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0323 | null | |
2408.16862 | val_tab_1552 | Moreover, this leads to improved estimation of latent dynamics, a switching rate that agrees with the true system, and improved multistep inference performance as shown in Table 1 . | Refuted | Table 1: Metrics for synthetic dynamical systems. Bold means best performance. (\uparrow) indicates higher score is better while (\downarrow) indicates that lower is better. ✗ indicates that value diverged towards -\infty . Switch events for decomposed models are defined as times where the active set of DOs change from... | table | tables_png/dev/val_tab_1552.png | In figure 2 D, we see that rSLDS does not distinguish between the different speeds along the outer and inner sections of the attractor. Instead, the discrete states obscure the continuum of speeds by incorrectly grouping all activity in each lobe into a single regime. Furthermore, we observe that dLDS is limited withou... | ml | other sources | Change the cell values | papers/dev/ml_2408.16862.json | CC BY-SA 4.0 | http://creativecommons.org/licenses/by-sa/4.0/ | 0031 | tables/dev/val_tab_1552.tex | |
2025.clpsych-1.1 | val_tab_1553 | The mean RMSE for GPT-4 increased from 1.12 to 1.15 (columns GPT-4 {}_{\text{maj}} and GPT-4 {}_{\text{conf}} in Table 2 ), and the mean RMSE for human reader-annotators increased from 0.99 to 1.06 (columns Human {}_{\text{maj}} and Human {}_{\text{conf}} in Table 2 ). | Refuted | Table 2: RMSE results for research questions Q2, Q3, and Q4. GPT-4 stands for the GPT-4 annotator, Human for the human reader-annotator, avg : average of five GPT-4 completions/human guesses, maj : majority vote of five GPT-4 completions/human guesses, conf : majority vote of five GPT-4 completions/human guesses with c... | table | tables_png/dev/val_tab_1553.png | The results of this experiment are shown in Table 2 columns GPT-4 {}_{\text{conf}} and Human {}_{\text{conf}} . We can see that using the confidence rating to break the ties did not improve the RMSE either for the model or for the human reader-annotators compared to the random choice. | nlp | no | Change the cell values | papers/dev/nlp_2025.clpsych-1.1.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0211 | tables/dev/val_tab_1553.tex | 
18312 | val_fig_1531 | Of the 19 most predominant bacterial genera found in NEMM and SHI media, Ligilactobacillus , Limosilactobacillus , Lactobacillus , and Desemzia showed the highest abundance levels. | Refuted | Figure 6: The relative abundance of differential predominant bacteria between NEMM and SHI analyzed at genus level by ANOVA. | figure | figures/dev/val_fig_1531.png | Differential predominant bacteria between NEMM and SHI were analyzed in the relative abundance of bacteria at the genus level by ANOVA. | peerj | no | Category Swap | papers/dev/peerj_18312.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0346 | null | 
19374 | val_fig_1533 | Lachnospiraceae were positively associated with 1, 3-propanediol and L-proline, and were negatively correlated with hydracrylic acid and CH 4 . | Refuted | Figure 10: Correlation analysis of characteristic metabolites and bacteria. Corr represents correlation. Red and blue separately denote positive and negative correlations. The darker the color, the stronger the correlation. *P < 0.05, **P < 0.01, and ***P < 0.001. | figure | figures/dev/val_fig_1533.png | To further explore correlations of metabolites in human fecal matter and intestinal flora after SNP treatment, thermography was used to analyze contents of metabolites showing significant differences with 30 most abundant genera. Figure 10 shows the results. Bifidobacterium was positively associated with glycerol, CO 2... | peerj | no | Category Swap | papers/dev/peerj_19374.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0331 | null | 
2210.05883 | val_tab_1554 | Third, AD-Drop improves the original BERT base and RoBERTa base with any of the masking strategies, demonstrating the robustness of AD-Drop to overfitting when fine-tuning these models. | Refuted | Table 2: Results of ablation studies, in which r/w means “replace with” and w/o means “without”. | table | tables_png/dev/val_tab_1554.png | AD-Drop can be implemented with different attribution methods to generate the mask matrix in Eq. ( 1 ), such as integrated gradient attribution (IGA) introduced Eq. ( 3 ), attention weights for attribution (AA), and randomly generating the discard region (RD) in Eq. ( 6 ). We replace the gradient attribution (GA) in Eq... | ml | no | Change the cell values | papers/dev/ml_2210.05883.json | CC BY-NC-SA 4.0 | http://creativecommons.org/licenses/by-nc-sa/4.0/ | 0015 | tables/dev/val_tab_1554.tex | |
2024.eacl-short.4 | val_tab_1555 | When focusing on oscillatory hallucinations according to TNG in Table 4 , the improvement is even more pronounced, with a reduction from 9.3% to 0.7% (-92%) for M2M-100, and from 5.9% to 1.5% (-75%) for SMaLL-100. | Refuted | Table 4: Proportion of translations with oscillatory hallucinations according to TNG. | table | tables_png/dev/val_tab_1555.png | The proportion of translations with chrF2 below 10 is shown in Table 3 . We observe large reductions in the number of defect translations, with a reduction from 7.3% to 1.2% (-83%) for M2M-100, and from 5.6% to 1.8% (-67%) for SMaLL-100. | nlp | no | Swap rows or columns | papers/dev/nlp_2024.eacl-short.4.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0118 | tables/dev/val_tab_1555.tex | 
2024.eacl-long.9 | val_tab_1556 | Both PECRS-small and -medium surpass all baselines over Dist@3 and Dist@4. | Refuted | Table 3: Results of conversation task compared with the state-of-the-art on ReDial. | table | tables_png/dev/val_tab_1556.png | Table 3 summarizes the results on conversation task, where PECRS achieves promising performance on both types of metrics. | nlp | no | Swap rows or columns | papers/dev/nlp_2024.eacl-long.9.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0139 | tables/dev/val_tab_1556.tex | |
2205.14612 | val_tab_1557 | A first observation is that one can transfer these weights to deeper ResNets without significantly affecting the test accuracy of the model: it remains above 94.5\% on CIFAR-10 and 72\% on ImageNet. | Refuted | Table 2: Test accuracy (ResNet) | table | tables_png/dev/val_tab_1557.png | In addition, we also want our pretrained model to verify assumption 2 so we consider the following setup. On CIFAR (resp. ImageNet) we train a ResNet with 4 (resp. 8) blocks in each layer, where weights are tied within each layer. | ml | yes | Change the cell values | papers/dev/ml_2205.14612.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0028 | tables/dev/val_tab_1557.tex | |
2025.naacl-long.14 | val_fig_1534 | In contrast, models such as Qwen-2 or Gemma-1.1 tend to under-cite in their response. | Refuted | Figure 3: Lollipop plots denoting the average heuristic-based feature scores achieved by baselines in \mirage . x -axis denotes the languages in \mirage. whereas y -axis plots every heuristic feature value. models in the same family are represented as the same color in a lollipop (as multiple circles). Figure 9 provide... | figure | figures/dev/val_fig_1534.png | Figure 3 shows lollipop plots indicating the average heuristic-feature value ( y -axis) distribution across all languages ( x -axis). In English detection, smaller LLMs such as Gemma-1.1 (2B) do not generate output in the required target language, but rather rely on English. Next, for citation quality and support evalu... | nlp | yes | Legend Swap | papers/dev/nlp_2025.naacl-long.14.json | same as above | CC BY-SA 4.0 | http://creativecommons.org/licenses/by-sa/4.0/ | 0227 | null |
2024.eacl-short.7 | val_fig_1535 | The news domain has a higher frequency of such cases. | Refuted | (a) Prevalence of factual errors in each of domains; (b) Distribution of error categories across domains; Distribution of errors and error categories across domains | figure | figures/dev/val_fig_1535.png | We next characterize the distribution of error categories in factually inconsistent summaries generated by models across the domains considered. Figure 1(b) reports the distribution of error categories for both models. Model-specific distributions are in Appendix A.6. There are more extrinsic errors introduced in... | nlp | yes | Legend Swap | papers/dev/nlp_2024.eacl-short.7.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0236 | null | 
2025.finnlp-1.9 | val_tab_1558 | However, on our internal Payslips datasets, our model pre-trained on DOCILE outperforms the original one. | Refuted | Table 2: F1 scores for named-entity recognition using different pre-training and fine-tuning datasets. Results are averaged on 100 runs with different seeds. | table | tables_png/dev/val_tab_1558.png | We report labeled F1-score averaged on 100 fine-tuning runs in Table 2 . Precision and recall are reported in Appendix B . The Base model (using the full 12 layers) produces similar results on DOCILE no matter if pre-training on IIT-CDIP or DOCILE . | nlp | no | Change the cell values | papers/dev/nlp_2025.finnlp-1.9.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0198 | tables/dev/val_tab_1558.tex | 
2209.12362 | val_tab_1559 | We also show that our performance boost does not come from the additional training dataset of ActivityNet in Table 3 . | Refuted | Table 3: Ablation experiments. We investigate the effectiveness of each component of our method as well as compare to vanilla multi-dataset training method. “Vanilla” means using cross entropy (CE) loss in training. “w/o informative loss” means using CE and projection loss. The numbers are top-1/top-5 accuracy, respecti... | table | tables_png/dev/val_tab_1559.png | We then compare our method with state-of-the-art on these datasets. We train a higher resolution model with larger spatial inputs (312p) and achieves better performance compared to recent multi-dataset training methods, CoVER [ 59 ] and PolyVit [ 38 ] , on Kinetics-400, and significantly better on MiT and SSv2, as show... | ml | no | Change the cell values | papers/dev/ml_2209.12362.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0007 | tables/dev/val_tab_1559.tex | 
2501.09163 | val_fig_1536 | We can observe that classification errors rise with increasing noise levels and region sizes. | Refuted | Figure 3 : TTA classification errors under different shift severity levels and scopes. | figure | figures/dev/val_fig_1536.png | To investigate the trade-off between the shift scope (dense vs. sparse) and severity, we simulate different levels of corruption severity and corrupted region sizes and evaluate a classical TTA method TENT [ 15 ] on these configurations. Following [ 45 ] , we inject impulse noise to the CIFAR10 dataset, with noise leve... | ml | no | Graph Flip | papers/dev/ml_2501.09163.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0069 | null | 
2205.14612 | val_fig_1537 | The true backpropagation gives the same curves for the ResNet and the HeunNet. | Refuted | Figure 4: Comparison of the best test errors as a function of depth when using Euler or Heun’s discretization method with or without the adjoint method. | figure | figures/dev/val_fig_1537.png | We then apply a batch norm, a ReLU and iterate relation ( 1 ) where f is a pre-activation basic block (He et al., 2016b ) . We consider the zero residual initialisation: the last batch norm of each basic block is initialized to zero. We consider different values for the depth N and notice that in this setup, the deeper... | ml | no | Legend Swap | papers/dev/ml_2205.14612.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0063 | null | |
2405.14800 | val_tab_1560 | The best value of ASR and AUC of baseline methods decreases to around 65%, and the best value of TPR@1%FPR decreases to around 5%, indicating insufficient effectiveness of previous membership inference methods in real-world training scenarios of text-to-image diffusion models. | Refuted | Results under Real-world training setting. We also highlight key results according to Tab. 1 . | table | tables_png/dev/val_tab_1560.png | Real-world training scenario. In Tab. 2 , we adjust the training steps simulating real-world training scenario [ 19 ] and utilize default data augmentation [ 20 ] . | ml | other sources | Change the cell values | papers/dev/ml_2405.14800.json | CC BY-NC-SA 4.0 | http://creativecommons.org/licenses/by-nc-sa/4.0/ | 0026 | tables/dev/val_tab_1560.tex | 
2023.ijcnlp-main.36 | val_tab_1561 | From Table 8 it is clear that BioLinkBert significantly outperforms its generic LM variation. | Refuted | Table 8: Accuracy (in percentages) of the biomedical and generic models on our demographically enhanced datasets. M =male; F =female; W =White; B =Black; A-A =African-American; H =Hispanic; As =Asian; SOr =sexual orientation; O*=original test dataset; O=the original, unmodified 100 questions; D=No demographic informati... | table | tables_png/dev/val_tab_1561.png | Similar to our analysis between QAGNN and BioLinkBert above, our analysis between the biomedical and generic models can be split into the amount of answers and accuracy that changes when the dimensions change. From Table 7 it is visible that the generic transformer has more than double the amount of answers change for ... | nlp | no | Swap rows or columns | papers/dev/nlp_2023.ijcnlp-main.36.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0124 | tables/dev/val_tab_1561.tex | |
17417 | val_fig_1538 | Functional enrichment analysis of the upregulated DEGs revealed that the top five biological processes were skeletal system development, sister chromatid cohesion, ossification, mitotic nuclear division, and extracellular matrix organization ( Figure 3A ). | Refuted | Figure 3: GO and KEGG pathway enrichment analyses of upregulated genes. GO functional classification of DEGs. The x-axis represents the number of DEGs, with individual GO terms plotted on the y-axis. The graph displays only significantly enriched GO terms (P < 0.05). All GO terms were grouped into three categories: (A)... | figure | figures/dev/val_fig_1538.png | GO annotation and KEGG pathway enrichment analyses were performed to better understand the functional significance of the DEGs. | peerj | no | Category Swap | papers/dev/peerj_17417.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0320 | null | 
2025.naacl-short.5 | val_fig_1539 | Number+Text answers have lower Dolma token counts when RQA fails (Fig 4 ), so LLMs struggle to recall long-tail numerical facts Kandpal et al. ( 2023 ) . | Refuted | Figure 4: Answer token count and question difficulty of when RQA succeeds/fails, averaged over all LLMs. | figure | figures/dev/val_fig_1539.png | nlp | no | Legend Swap | papers/dev/nlp_2025.naacl-short.5.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0245 | null | 
17284 | val_tab_1562 | Pregnant women with multiple fetal abnormalities were 3.774 times more likely to undergo labor induction than those with a single fetal abnormality (OR = 3.774, 95% CI [1.640–8.683]). | Refuted | Table 2: Analysis of influencing factors of pregnancy outcome in pregnant women with fetal abnormalities. | table | tables_png/dev/val_tab_1562.png | peerj | no | Change the cell values | papers/dev/peerj_17284.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0285 | tables/dev/val_tab_1562.html | ||
2024.eacl-long.8 | val_tab_1563 | We observe accuracies between 60-80% across all tasks and observe that debiasing can substantially increase accuracy. | Refuted | Table 7: Accuracy of the comparative systems, at a comparison level, for SummEval. | table | tables_png/dev/val_tab_1563.png | One can also measure the accuracy of the comparative system at a comparison level. Table 7 shows the pairwise comparison accuracy for Summeval, over all candidate pairs where the true score of the candidate response varies. | nlp | no | Change the cell values | papers/dev/nlp_2024.eacl-long.8.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0146 | tables/dev/val_tab_1563.tex | |
2025.naacl-long.14 | val_fig_1540 | From Figure 7 , we observe that GPT-4o is a strong teacher: Mistral-v0.2 (7B) fine-tuned on GPT-4o distilled training data achieves rank 2, outperforming the Llama-3 (70B) model. | Refuted | Figure 7: Approximate rankings using heuristic features after fine-tuning Llama-3 (8B) and Mistral-v0.2 (7B) on \mirage dataset across four configurations. | figure | figures/dev/val_fig_1540.png | Does fine-tuning on \mirage training data help? We evaluate three variants of the \mirage training dataset using two backbones: Mistral-v0.2 (7B) and Llama-3 (8B). We fine-tune the \mirage training datasets using (i) both on GPT-4o, (ii) Llama-3 (8B) on Llama-3 (70B), and (iii) Mistral-v0.2 (7B) on Mixtral (8x22B). | nlp | no | Legend Swap | papers/dev/nlp_2025.naacl-long.14.json | CC BY-SA 4.0 | http://creativecommons.org/licenses/by-sa/4.0/ | 0231 | null | 
2025.naacl-long.13 | val_tab_1564 | The generated dataset consists of 1,802 ambiguous and unanswerable questions spanning various categories. | Refuted | Table 3: Dataset statistics and human annotation accuracy on 20 samples per question type. "#Ex" column shows the number of examples generated for each category. "Acc" column shows average binary classification accuracy from human expert. | table | tables_png/dev/val_tab_1564.png | Table 3 shows the statistics of the dataset generated using the Spider dev set with Claude 3 sonnet. Note that the employed methodology can be seamlessly adapted to other text-to-SQL datasets like BIRD, WikiSQL, or any other synthetically generated answerable text-to-SQL corpora combined with any LLM (e.g., Llama3.1 or... | nlp | no | Change the cell values | papers/dev/nlp_2025.naacl-long.13.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0158 | tables/dev/val_tab_1564.tex | |
18596 | val_tab_1565 | The mean duration of hypertension was 7.92 ± 4.06 years, and the most commonly used medication among participants (27.8%) was ACE Inhibitors. | Refuted | Table 1: Participants’ personal data, IIEF total score (n= 223). | table | tables_png/dev/val_tab_1565.png | Table 1 presents the personal data of the participants along with the total score of IIEF. The participants had a mean age of 63.26 ± 7.62 years, with 97.3% being married. Additionally, 46.6% of participants were primary school graduates, 9.4% reported alcohol use, and 13% were smokers, with an average duration of smok... | peerj | no | Change the cell values | papers/dev/peerj_18596.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0299 | tables/dev/val_tab_1565.html | |
2024.eacl-long.8 | val_tab_1566 | However, we observe considerably high bias, with some set-ups even selecting the first option 80% of the time. | Refuted | Table 5: Positional bias P(A) for both prompt templates, for various systems in the comparative setup on SummEval. | table | tables_png/dev/val_tab_1566.png | We investigate whether the comparative prompts have any implicit positional bias, and whether systems prefer the first/second position. Table 5 shows the fraction of comparisons that selected the candidate in the first position for SummEval. Since all comparisons in both permutations are considered, this fraction shoul... | nlp | yes | Change the cell values | papers/dev/nlp_2024.eacl-long.8.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0145 | tables/dev/val_tab_1566.tex | |
2203.01212 | val_tab_1567 | If we compare the two-layer network results from GeoLIP and Sampling in Table 1 , which is a lower bound of the true Lipschitz constant, the ratio is within 1.783 . | Refuted | Table 1: \ell_{\infty} -FGL estimation of various methods: DGeoLIP and NGeoLIP induce the same values on two layer networks. DGeoLIP always produces tighter estimations than LiPopt and MP do. | table | tables_png/dev/val_tab_1567.png | We have also shown that the two-layer network \ell_{\infty} -FGL estimation from GeoLIP has a theoretical guarantee with the approximation factor K_{G}<1.783 ( Theorem 3.3 ). | ml | no | Change the cell values | papers/dev/ml_2203.01212.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0002 | tables/dev/val_tab_1567.tex | 
2210.05883 | val_tab_1568 | As shown in Table 7 , although IGA achieves more favorable performance on one of the datasets, it requires higher computational costs than its counterparts, especially when applied in all the layers. | Refuted | Table 7: Results of performance and computational cost of AD-Drop with different masking strategies (GA, IGA, AA, and RD) relative to the original fine-tuning. The symbol \ddagger means AD-Drop is only applied in the first layer. BERT is chosen as the base model. | table | tables_png/dev/val_tab_1568.png | To analyze the computational efficiency, we quantitatively study the computational cost of AD-Drop with different dropping strategies (GA, IGA, AA, and RD) relative to the original fine-tuning on CoLA, STS-B, MRPC, and RTE. BERT is chosen as the base model for this experiment. | ml | no | Change the cell values | papers/dev/ml_2210.05883.json | CC BY-NC-SA 4.0 | http://creativecommons.org/licenses/by-nc-sa/4.0/ | 0018 | tables/dev/val_tab_1568.tex | |
2209.15246 | val_tab_1569 | Note that resistance against the attack to both in- and out-sets is much harder than other cases since the perturbation budget has effectively been doubled. | Refuted | Table 1: OOD detection AUROC under attack with \epsilon=\frac{8}{255} for various methods trained with CIFAR-10 or CIFAR-100 as the closed set. A clean evaluation is one where no attack is made on the data, whereas an in/out evaluation means that the corresponding data is attacked. The best and second-best results are ... | table | tables_png/dev/val_tab_1569.png | OOD detection under adversarial attack: To perform a comprehensive study, AUROC is computed in four different settings for each method. First, the standard OOD detection without any attack is conducted (Clean). Next, either in- or out-datasets are attacked (In/Out). Finally, both the in- and out-sets are attacked (In a... | ml | no | Change the cell values | papers/dev/ml_2209.15246.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0019 | tables/dev/val_tab_1569.tex | |
2023.emnlp-main.1 | val_fig_1541 | As shown in Figure 4 , IAG-Student achieves the best performance with the statement number between 5 and 7. | Refuted | Figure 4: Scores of IAG-Student on StrategyQA dev set with different numbers of knowledge statements. | figure | figures/dev/val_fig_1541.png | Our implementation of IAG samples 5 knowledge statements to feed into the generator. To justify this design choice, we evaluate the performance of IAG-Student with varying statement numbers. | nlp | no | Graph Flip | papers/dev/nlp_2023.emnlp-main.1.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0250 | null | 
17090 | val_tab_1570 | Results of the analysis ( Table 3 ) showed that AA was positively correlated with depression (Index = 0.168, P < 0.001). | Refuted | Table 3: Regression analysis of appearance anxiety, interpersonal sensitivity, social support, and depression. | table | tables_png/dev/val_tab_1570.png | A multiple mediation analysis was conducted to explore the mediation effects of IS and SS in a college student population. Control variables included gender, age, BMI, GPA, being an only child or not, and home address. AA and depression were entered as independent and dependent variables respectively. The proposed medi... | peerj | other sources | Change the cell values | papers/dev/peerj_17090.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0276 | tables/dev/val_tab_1570.html | |
19471 | val_tab_1571 | In addition, significant differences were observed in all SU and SD kinematics except for KHD during SD. | Refuted | Table 1: Participants characteristics. | table | tables_png/dev/val_tab_1571.png | A total of 43 recreational table tennis players were included in this study, with 44 and 42 legs in the non-EOA and EOA groups, respectively ( Table 1 ). There were no significant differences between the groups in terms of sex distribution ( p = 0.22), age ( p = 0.46), height ( p = 0.07), weight ( p = 0.78), BMI ( p = ... | peerj | yes | Change the cell values | papers/dev/peerj_19471.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0269 | tables/dev/val_tab_1571.html | |
2024.eacl-long.6 | val_tab_1572 | GPT-4+DIN-SQL obtains an EX score of 6.73% on the Archer test set, while it achieves 85.3% test-suite execution performance on the Spider test set Pourreza and Rafiei ( 2024 ) . | Refuted | Table 2: Baseline performance on Archer. GPT-4+DIN-SQL was tested only on the English set due to cost and its English-specific design. We only report the fine-tuned model’s performance on the test set. | table | tables_png/dev/val_tab_1572.png | nlp | no | Change the cell values | papers/dev/nlp_2024.eacl-long.6.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0152 | tables/dev/val_tab_1572.tex | 
19374 | val_fig_1542 | Faecalibacterium was positively associated with pentanoic acid and 4-aminobutanoic acid, while negatively correlated with hydracrylic acid. | Refuted | Figure 10: Correlation analysis of characteristic metabolites and bacteria. Corr represents correlation. Red and blue separately denote positive and negative correlations. The darker the color, the stronger the correlation. *P < 0.05, **P < 0.01, and ***P < 0.001. | figure | figures/dev/val_fig_1542.png | To further explore correlations of metabolites in human fecal matter and intestinal flora after SNP treatment, thermography was used to analyze contents of metabolites showing significant differences with 30 most abundant genera. Figure 10 shows the results. Bifidobacterium was positively associated with glycerol, CO 2... | peerj | no | Category Swap | papers/dev/peerj_19374.json | CC BY 4.0 | https://creativecommons.org/licenses/by/4.0/ | 0327 | null | 
2403.19137 | val_fig_1543 | In Fig. 5(a) , the accuracy is poorer in range [1,10] , grows in range [10,20] , and saturates thereafter. | Refuted | (a) Accuracy-runtime tradeoff; (b) Parameter count comparison; Ablations on CIFAR100 showing: (a) performance trade-off with the number of MC samples M , (b) the number of trainable parameters in different finetuning methods. | figure | figures/dev/val_fig_1543.png | We vary the number of MC samples M from 1 to 50. | ml | no | Graph Flip | papers/dev/ml_2403.19137.json | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 0044 | null |
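Since every record above shares the same pipe-delimited layout, a few lines of Python are enough to split a row back into its fields. The sketch below is illustrative only: the function name, the one-record-per-line assumption, and the field names it assigns are inferred from the rows above, not taken from any official loader for this dataset.

```python
# Minimal sketch for pulling apart one pipe-delimited record of this dump.
# Assumptions (not guaranteed by the dump itself): every record sits on a
# single physical line, field contents never contain a literal "|", and the
# leading fields are ordered paper_id, claim_id, claim, label (as inferred
# from the rows above); later fields are kept positionally.

def parse_row(line: str) -> dict:
    # Drop the trailing table pipe, split on the delimiter, trim whitespace.
    fields = [f.strip() for f in line.rstrip().rstrip("|").split("|")]
    return {
        "paper_id": fields[0],
        "claim_id": fields[1],
        "claim": fields[2],
        "label": fields[3],
        # Caption, evidence type/paths, context, domain, license, etc.
        "rest": fields[4:],
    }

if __name__ == "__main__":
    row = parse_row(
        "2205.14612 | val_tab_1557 | A first observation is that ... | Refuted | "
        "Table 2: Test accuracy (ResNet) | table | tables_png/dev/val_tab_1557.png | "
    )
    print(row["claim_id"], row["label"])  # -> val_tab_1557 Refuted
```

Note that this positional split only works once each record occupies a single line; captions or contexts that wrap across several physical lines must be re-joined first.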