| prompts (string, length 81–413) | metrics_response (string, length 0–371) |
|---|---|
What metrics were used to measure the Top-down (BERT) model in the A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing paper on the Instructional-DT (Instr-DT) dataset? | Standard Parseval (Nuclearity), Standard Parseval (Span), Standard Parseval (Full), Standard Parseval (Relation) |
What metrics were used to measure the Guz et al. (2020) model in the Unleashing the Power of Neural Discourse Parsers -- A Context and Structure Aware Approach Using Large Scale Pretraining paper on the Instructional-DT (Instr-DT) dataset? | Standard Parseval (Nuclearity), Standard Parseval (Span), Standard Parseval (Full), Standard Parseval (Relation) |
What metrics were used to measure the End-to-end Top-down (XLNet) model in the RST Parsing from Scratch paper on the RST-DT dataset? | RST-Parseval (Span), RST-Parseval (Nuclearity), RST-Parseval (Relation), RST-Parseval (Full), Standard Parseval (Nuclearity), Standard Parseval (Full), Standard Parseval (Span), Standard Parseval (Relation) |
What metrics were used to measure the Top-down Span-based Parser with Silver Agreement Subtrees (ensemble) model in the Improving Neural RST Parsing Model with Silver Agreement Subtrees paper on the RST-DT dataset? | RST-Parseval (Span), RST-Parseval (Nuclearity), RST-Parseval (Relation), RST-Parseval (Full), Standard Parseval (Nuclearity), Standard Parseval (Full), Standard Parseval (Span), Standard Parseval (Relation) |
What metrics were used to measure the Top-down Span-based Parser model in the Top-Down RST Parsing Utilizing Granularity Levels in Documents paper on the RST-DT dataset? | RST-Parseval (Span), RST-Parseval (Nuclearity), RST-Parseval (Relation), RST-Parseval (Full), Standard Parseval (Nuclearity), Standard Parseval (Full), Standard Parseval (Span), Standard Parseval (Relation) |
What metrics were used to measure the Top-down Span-based Parser with Silver Agreement Subtrees model in the Improving Neural RST Parsing Model with Silver Agreement Subtrees paper on the RST-DT dataset? | RST-Parseval (Span), RST-Parseval (Nuclearity), RST-Parseval (Relation), RST-Parseval (Full), Standard Parseval (Nuclearity), Standard Parseval (Full), Standard Parseval (Span), Standard Parseval (Relation) |
What metrics were used to measure the Two-stage Parser model in the A Two-Stage Parsing Method for Text-Level Discourse Analysis paper on the RST-DT dataset? | RST-Parseval (Span), RST-Parseval (Nuclearity), RST-Parseval (Relation), RST-Parseval (Full), Standard Parseval (Nuclearity), Standard Parseval (Full), Standard Parseval (Span), Standard Parseval (Relation) |
What metrics were used to measure the Bottom-up Linear-chain CRF-based Parser model in the A Linear-Time Bottom-Up Discourse Parser with Constraints and Post-Editing paper on the RST-DT dataset? | RST-Parseval (Span), RST-Parseval (Nuclearity), RST-Parseval (Relation), RST-Parseval (Full), Standard Parseval (Nuclearity), Standard Parseval (Full), Standard Parseval (Span), Standard Parseval (Relation) |
What metrics were used to measure the Transition-based Parser with Implicit Syntax Features model in the Transition-based Neural RST Parsing with Implicit Syntax Features paper on the RST-DT dataset? | RST-Parseval (Span), RST-Parseval (Nuclearity), RST-Parseval (Relation), RST-Parseval (Full), Standard Parseval (Nuclearity), Standard Parseval (Full), Standard Parseval (Span), Standard Parseval (Relation) |
What metrics were used to measure the Two-stage Discourse Parser with a Sliding Window model in the CODRA: A Novel Discriminative Framework for Rhetorical Analysis paper on the RST-DT dataset? | RST-Parseval (Span), RST-Parseval (Nuclearity), RST-Parseval (Relation), RST-Parseval (Full), Standard Parseval (Nuclearity), Standard Parseval (Full), Standard Parseval (Span), Standard Parseval (Relation) |
What metrics were used to measure the HILDA Parser model in the A Novel Discourse Parser Based on Support Vector Machine Classification paper on the RST-DT dataset? | RST-Parseval (Span), RST-Parseval (Nuclearity), RST-Parseval (Relation), RST-Parseval (Full), Standard Parseval (Nuclearity), Standard Parseval (Full), Standard Parseval (Span), Standard Parseval (Relation) |
What metrics were used to measure the Bottom-up (DeBERTa) model in the A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing paper on the RST-DT dataset? | RST-Parseval (Span), RST-Parseval (Nuclearity), RST-Parseval (Relation), RST-Parseval (Full), Standard Parseval (Nuclearity), Standard Parseval (Full), Standard Parseval (Span), Standard Parseval (Relation) |
What metrics were used to measure the Top-down (DeBERTa) model in the A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing paper on the RST-DT dataset? | RST-Parseval (Span), RST-Parseval (Nuclearity), RST-Parseval (Relation), RST-Parseval (Full), Standard Parseval (Nuclearity), Standard Parseval (Full), Standard Parseval (Span), Standard Parseval (Relation) |
What metrics were used to measure the Top-down (XLNet) model in the A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing paper on the RST-DT dataset? | RST-Parseval (Span), RST-Parseval (Nuclearity), RST-Parseval (Relation), RST-Parseval (Full), Standard Parseval (Nuclearity), Standard Parseval (Full), Standard Parseval (Span), Standard Parseval (Relation) |
What metrics were used to measure the Top-down (RoBERTa) model in the A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing paper on the RST-DT dataset? | RST-Parseval (Span), RST-Parseval (Nuclearity), RST-Parseval (Relation), RST-Parseval (Full), Standard Parseval (Nuclearity), Standard Parseval (Full), Standard Parseval (Span), Standard Parseval (Relation) |
What metrics were used to measure the Bottom-up (RoBERTa) model in the A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing paper on the RST-DT dataset? | RST-Parseval (Span), RST-Parseval (Nuclearity), RST-Parseval (Relation), RST-Parseval (Full), Standard Parseval (Nuclearity), Standard Parseval (Full), Standard Parseval (Span), Standard Parseval (Relation) |
What metrics were used to measure the Bottom-up (XLNet) model in the A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing paper on the RST-DT dataset? | RST-Parseval (Span), RST-Parseval (Nuclearity), RST-Parseval (Relation), RST-Parseval (Full), Standard Parseval (Nuclearity), Standard Parseval (Full), Standard Parseval (Span), Standard Parseval (Relation) |
What metrics were used to measure the Top-down (SpanBERT) model in the A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing paper on the RST-DT dataset? | RST-Parseval (Span), RST-Parseval (Nuclearity), RST-Parseval (Relation), RST-Parseval (Full), Standard Parseval (Nuclearity), Standard Parseval (Full), Standard Parseval (Span), Standard Parseval (Relation) |
What metrics were used to measure the Bottom-up (SpanBERT) model in the A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing paper on the RST-DT dataset? | RST-Parseval (Span), RST-Parseval (Nuclearity), RST-Parseval (Relation), RST-Parseval (Full), Standard Parseval (Nuclearity), Standard Parseval (Full), Standard Parseval (Span), Standard Parseval (Relation) |
What metrics were used to measure the LSTM Dynamic model in the Top-down Discourse Parsing via Sequence Labelling paper on the RST-DT dataset? | RST-Parseval (Span), RST-Parseval (Nuclearity), RST-Parseval (Relation), RST-Parseval (Full), Standard Parseval (Nuclearity), Standard Parseval (Full), Standard Parseval (Span), Standard Parseval (Relation) |
What metrics were used to measure the Guz et al. (2020) (pretrained) model in the Unleashing the Power of Neural Discourse Parsers -- A Context and Structure Aware Approach Using Large Scale Pretraining paper on the RST-DT dataset? | RST-Parseval (Span), RST-Parseval (Nuclearity), RST-Parseval (Relation), RST-Parseval (Full), Standard Parseval (Nuclearity), Standard Parseval (Full), Standard Parseval (Span), Standard Parseval (Relation) |
What metrics were used to measure the LSTM Static model in the Top-down Discourse Parsing via Sequence Labelling paper on the RST-DT dataset? | RST-Parseval (Span), RST-Parseval (Nuclearity), RST-Parseval (Relation), RST-Parseval (Full), Standard Parseval (Nuclearity), Standard Parseval (Full), Standard Parseval (Span), Standard Parseval (Relation) |
What metrics were used to measure the Guz et al. (2020) model in the Unleashing the Power of Neural Discourse Parsers -- A Context and Structure Aware Approach Using Large Scale Pretraining paper on the RST-DT dataset? | RST-Parseval (Span), RST-Parseval (Nuclearity), RST-Parseval (Relation), RST-Parseval (Full), Standard Parseval (Nuclearity), Standard Parseval (Full), Standard Parseval (Span), Standard Parseval (Relation) |
What metrics were used to measure the Transformer (dynamic) model in the Top-down Discourse Parsing via Sequence Labelling paper on the RST-DT dataset? | RST-Parseval (Span), RST-Parseval (Nuclearity), RST-Parseval (Relation), RST-Parseval (Full), Standard Parseval (Nuclearity), Standard Parseval (Full), Standard Parseval (Span), Standard Parseval (Relation) |
What metrics were used to measure the Transformer (static) model in the Top-down Discourse Parsing via Sequence Labelling paper on the RST-DT dataset? | RST-Parseval (Span), RST-Parseval (Nuclearity), RST-Parseval (Relation), RST-Parseval (Full), Standard Parseval (Nuclearity), Standard Parseval (Full), Standard Parseval (Span), Standard Parseval (Relation) |
What metrics were used to measure the End-to-end Top-down (Glove) model in the RST Parsing from Scratch paper on the RST-DT dataset? | RST-Parseval (Span), RST-Parseval (Nuclearity), RST-Parseval (Relation), RST-Parseval (Full), Standard Parseval (Nuclearity), Standard Parseval (Full), Standard Parseval (Span), Standard Parseval (Relation) |
What metrics were used to measure the Top-down (BERT) model in the A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing paper on the RST-DT dataset? | RST-Parseval (Span), RST-Parseval (Nuclearity), RST-Parseval (Relation), RST-Parseval (Full), Standard Parseval (Nuclearity), Standard Parseval (Full), Standard Parseval (Span), Standard Parseval (Relation) |
What metrics were used to measure the Bottom-up (BERT) model in the A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing paper on the RST-DT dataset? | RST-Parseval (Span), RST-Parseval (Nuclearity), RST-Parseval (Relation), RST-Parseval (Full), Standard Parseval (Nuclearity), Standard Parseval (Full), Standard Parseval (Span), Standard Parseval (Relation) |
What metrics were used to measure the Guz et al. (2020) model in the Unleashing the Power of Neural Discourse Parsers - A Context and Structure Aware Approach Using Large Scale Pretraining paper on the RST-DT dataset? | RST-Parseval (Span), RST-Parseval (Nuclearity), RST-Parseval (Relation), RST-Parseval (Full), Standard Parseval (Nuclearity), Standard Parseval (Full), Standard Parseval (Span), Standard Parseval (Relation) |
What metrics were used to measure the Greedy Bottom-up Parser with Syntactic Features model in the Two Practical Rhetorical Structure Theory Parsers paper on the RST-DT dataset? | RST-Parseval (Span), RST-Parseval (Nuclearity), RST-Parseval (Relation), RST-Parseval (Full), Standard Parseval (Nuclearity), Standard Parseval (Full), Standard Parseval (Span), Standard Parseval (Relation) |
What metrics were used to measure the Re-implemented HILDA RST parser model in the Empirical comparison of dependency conversions for RST discourse trees paper on the RST-DT dataset? | RST-Parseval (Span), RST-Parseval (Nuclearity), RST-Parseval (Relation), RST-Parseval (Full), Standard Parseval (Nuclearity), Standard Parseval (Full), Standard Parseval (Span), Standard Parseval (Relation) |
What metrics were used to measure the Discourse Parser with Hierarchical Attention model in the Discourse Parsing with Attention-based Hierarchical Neural Networks paper on the RST-DT dataset? | RST-Parseval (Span), RST-Parseval (Nuclearity), RST-Parseval (Relation), RST-Parseval (Full), Standard Parseval (Nuclearity), Standard Parseval (Full), Standard Parseval (Span), Standard Parseval (Relation) |
What metrics were used to measure the Discourse Parsing from Linear Projection model in the Representation Learning for Text-level Discourse Parsing paper on the RST-DT dataset? | RST-Parseval (Span), RST-Parseval (Nuclearity), RST-Parseval (Relation), RST-Parseval (Full), Standard Parseval (Nuclearity), Standard Parseval (Full), Standard Parseval (Span), Standard Parseval (Relation) |
What metrics were used to measure the Transition-Based Parser Trained on Cross-Lingual Corpus model in the Cross-lingual RST Discourse Parsing paper on the RST-DT dataset? | RST-Parseval (Span), RST-Parseval (Nuclearity), RST-Parseval (Relation), RST-Parseval (Full), Standard Parseval (Nuclearity), Standard Parseval (Full), Standard Parseval (Span), Standard Parseval (Relation) |
What metrics were used to measure the LSTM Sequential Discourse Parser (Braud et al., 2016) model in the Multi-view and multi-task training of RST discourse parsers paper on the RST-DT dataset? | RST-Parseval (Span), RST-Parseval (Nuclearity), RST-Parseval (Relation), RST-Parseval (Full), Standard Parseval (Nuclearity), Standard Parseval (Full), Standard Parseval (Span), Standard Parseval (Relation) |
What metrics were used to measure the Structured model in the Structured Dialogue Discourse Parsing paper on the STAC dataset? | Link & Rel F1, Link F1 |
What metrics were used to measure the SSP-BERT + SCIJE model in the Speaker-Aware Discourse Parsing on Multi-Party Dialogues paper on the STAC dataset? | Link & Rel F1, Link F1 |
What metrics were used to measure the Struct-Aware model in the A Structure Self-Aware Model for Discourse Parsing on Multi-Party Dialogues paper on the STAC dataset? | Link & Rel F1, Link F1 |
What metrics were used to measure the Hierarchical model in the Improving Multi-Party Dialogue Discourse Parsing via Domain Integration paper on the STAC dataset? | Link & Rel F1, Link F1 |
What metrics were used to measure the DiscProReco model in the A Joint Model for Dropped Pronoun Recovery and Conversational Discourse Parsing in Chinese Conversational Speech paper on the STAC dataset? | Link & Rel F1, Link F1 |
What metrics were used to measure the Deep Sequential model in the A Deep Sequential Model for Discourse Parsing on Multi-Party Dialogues paper on the STAC dataset? | Link & Rel F1, Link F1 |
What metrics were used to measure the HG-MDP model in the paper on the STAC dataset? | Link & Rel F1, Link F1 |
What metrics were used to measure the SigNet-F (SVM) model in the SigNet: Convolutional Siamese Network for Writer Independent Offline Signature Verification paper on the CEDAR Signature dataset? | FAR |
What metrics were used to measure the Siamese_MultiHeadCrossAttention_SoftAttention (Siamese_MHCA_SA) model in the Attention based Writer Independent Handwriting Verification paper on the CEDAR Signature dataset? | FAR |
What metrics were used to measure the Siamese_MHCA_SA model in the Attention based Writer Independent Handwriting Verification paper on the AND Dataset dataset? | Average F1 |
What metrics were used to measure the k-PCA + HDBSCAN model in the A Hybrid Architecture for Out of Domain Intent Detection and Intent Discovery paper on the SNIPS dataset? | ARI |
What metrics were used to measure the k-PCA + HDBSCAN model in the A Hybrid Architecture for Out of Domain Intent Detection and Intent Discovery paper on the Persian-ATIS dataset? | ARI |
What metrics were used to measure the k-PCA + HDBSCAN model in the A Hybrid Architecture for Out of Domain Intent Detection and Intent Discovery paper on the ATIS dataset? | ARI |
What metrics were used to measure the AcrE model in the Knowledge Graph Embedding with Atrous Convolution and Residual Learning paper on the FB15k dataset? | MRR |
What metrics were used to measure the Pi-net-linear model in the Π-nets: Deep Polynomial Neural Networks paper on the COMA dataset? | Error (mm) |
What metrics were used to measure the Lin et al. 2021 model in the Robust High-Resolution Video Matting with Temporal Guidance paper on the ImageNet dataset? | Alpha - Conn, Alpha - Grad, Alpha - MAD, Alpha - MSE, Alpha - dtSSD |
What metrics were used to measure the MODNet model in the Robust High-Resolution Video Matting with Temporal Guidance paper on the ImageNet dataset? | Alpha - Conn, Alpha - Grad, Alpha - MAD, Alpha - MSE, Alpha - dtSSD |
What metrics were used to measure the GAL 120B model in the Galactica: A Large Language Model for Science paper on the UniProtSeq dataset? | Validation perplexity |
What metrics were used to measure the GAL 30B model in the Galactica: A Large Language Model for Science paper on the UniProtSeq dataset? | Validation perplexity |
What metrics were used to measure the GAL 6.7B model in the Galactica: A Large Language Model for Science paper on the UniProtSeq dataset? | Validation perplexity |
What metrics were used to measure the GAL 1.3B model in the Galactica: A Large Language Model for Science paper on the UniProtSeq dataset? | Validation perplexity |
What metrics were used to measure the GAL 125M model in the Galactica: A Large Language Model for Science paper on the UniProtSeq dataset? | Validation perplexity |
What metrics were used to measure the GAL 120B model in the Galactica: A Large Language Model for Science paper on the CASPSeq dataset? | Validation perplexity |
What metrics were used to measure the GAL 30B model in the Galactica: A Large Language Model for Science paper on the CASPSeq dataset? | Validation perplexity |
What metrics were used to measure the GAL 6.7B model in the Galactica: A Large Language Model for Science paper on the CASPSeq dataset? | Validation perplexity |
What metrics were used to measure the GAL 1.3B model in the Galactica: A Large Language Model for Science paper on the CASPSeq dataset? | Validation perplexity |
What metrics were used to measure the GAL 125M model in the Galactica: A Large Language Model for Science paper on the CASPSeq dataset? | Validation perplexity |
What metrics were used to measure the GAL 120B model in the Galactica: A Large Language Model for Science paper on the CASPSimSeq dataset? | Validation perplexity |
What metrics were used to measure the GAL 30B model in the Galactica: A Large Language Model for Science paper on the CASPSimSeq dataset? | Validation perplexity |
What metrics were used to measure the GAL 6.7B model in the Galactica: A Large Language Model for Science paper on the CASPSimSeq dataset? | Validation perplexity |
What metrics were used to measure the GAL 1.3B model in the Galactica: A Large Language Model for Science paper on the CASPSimSeq dataset? | Validation perplexity |
What metrics were used to measure the GAL 125M model in the Galactica: A Large Language Model for Science paper on the CASPSimSeq dataset? | Validation perplexity |
What metrics were used to measure the GAL 120B model in the Galactica: A Large Language Model for Science paper on the PaenSeq dataset? | Validation perplexity |
What metrics were used to measure the GAL 30B model in the Galactica: A Large Language Model for Science paper on the PaenSeq dataset? | Validation perplexity |
What metrics were used to measure the GAL 6.7B model in the Galactica: A Large Language Model for Science paper on the PaenSeq dataset? | Validation perplexity |
What metrics were used to measure the GAL 1.3B model in the Galactica: A Large Language Model for Science paper on the PaenSeq dataset? | Validation perplexity |
What metrics were used to measure the GAL 125M model in the Galactica: A Large Language Model for Science paper on the PaenSeq dataset? | Validation perplexity |
What metrics were used to measure the CNN model in the Mockingbird: Defending Against Deep-Learning-Based Website Fingerprinting Attacks with Adversarial Traces paper on the Website Traffic Data on Tor dataset? | Accuracy (%) |
What metrics were used to measure the SwinIA model in the SwinIA: Self-Supervised Blind-Spot Image Denoising with Zero Convolutions paper on the FMD Two-Photon Mice dataset? | PSNR, SSIM |
What metrics were used to measure the SwinIA model in the SwinIA: Self-Supervised Blind-Spot Image Denoising with Zero Convolutions paper on the FMD Confocal Mice dataset? | PSNR, SSIM |
What metrics were used to measure the Deep Dynamic Residual Attention Network model in the Learning Medical Image Denoising with Deep Dynamic Residual Attention Network paper on the Dermatologist level dermoscopy skin cancer classification using different deep learning convolutional neural networks algorithms dataset? | SSIM, Average PSNR |
What metrics were used to measure the SwinIA model in the SwinIA: Self-Supervised Blind-Spot Image Denoising with Zero Convolutions paper on the FMD Confocal Fish dataset? | PSNR, SSIM |
What metrics were used to measure the Deep Dynamic Residual Attention Network model in the Learning Medical Image Denoising with Deep Dynamic Residual Attention Network paper on the LGG Segmentation Dataset dataset? | Average PSNR, SSIM |
What metrics were used to measure the Deep Dynamic Residual Attention Network model in the Learning Medical Image Denoising with Deep Dynamic Residual Attention Network paper on the Human Protein Atlas Image dataset? | Average PSNR, SSIM |
What metrics were used to measure the DanFEVER XLM-RoBERTa Large model in the DanFEVER: claim verification dataset for Danish paper on the DanFEVER dataset? | F1 |
What metrics were used to measure the Re2G model in the Re2G: Retrieve, Rerank, Generate paper on the KILT: FEVER dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the intersect model in the paper on the KILT: FEVER dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the Wikipedia model in the paper on the KILT: FEVER dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the KGI model in the paper on the KILT: FEVER dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the Multitask DPR + BART model in the paper on the KILT: FEVER dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the BERT + DPR model in the paper on the KILT: FEVER dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the RAG model in the KILT: a Benchmark for Knowledge Intensive Language Tasks paper on the KILT: FEVER dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the BART + DPR model in the paper on the KILT: FEVER dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the NSMN model in the paper on the KILT: FEVER dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the TABi model in the paper on the KILT: FEVER dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the chriskuei model in the paper on the KILT: FEVER dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the GENRE model in the paper on the KILT: FEVER dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the Multi-task DPR model in the paper on the KILT: FEVER dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the Sphere model in the paper on the KILT: FEVER dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the aa_evalai model in the paper on the KILT: FEVER dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the BART model in the paper on the KILT: FEVER dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the T5-base model in the KILT: a Benchmark for Knowledge Intensive Language Tasks paper on the KILT: FEVER dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the GENRE+roBERTa finetuning model in the paper on the KILT: FEVER dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the SVM with rbf kernel model in the paper on the KILT: FEVER dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the ElefPav model in the paper on the KILT: FEVER dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
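Every prompt in this dataset follows the same fixed template, so the model name, paper title, and dataset name can be recovered programmatically. Below is a minimal parsing sketch (the `PROMPT_RE` pattern and field names are illustrative, not part of the dataset itself); note that some rows leave the paper title empty ("… in the paper on the …"), which the non-greedy paper group tolerates:

```python
import re

# Template: "What metrics were used to measure the <model> model
# in the <paper> paper on the <dataset> dataset?"
PROMPT_RE = re.compile(
    r"What metrics were used to measure the (?P<model>.+) model "
    r"in the (?P<paper>.*?) ?paper on the (?P<dataset>.+) dataset\?"
)

prompt = (
    "What metrics were used to measure the Structured model in the "
    "Structured Dialogue Discourse Parsing paper on the STAC dataset?"
)
m = PROMPT_RE.match(prompt)
# m.group("model")   -> "Structured"
# m.group("paper")   -> "Structured Dialogue Discourse Parsing"
# m.group("dataset") -> "STAC"
```

The corresponding `metrics_response` column is a comma-separated list of metric names, so `response.split(", ")` is usually enough to recover the individual metrics.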