Columns: prompts (string, length 81–413) · metrics_response (string, length 0–371)
What metrics were used to measure the PanoDepth model in the PanoDepth: A Two-Stage Approach for Monocular Omnidirectional Depth Estimation paper on the Stanford2D3D Panoramic dataset?
RMSE, absolute relative error
What metrics were used to measure the HoHoNet (ResNet-101) model in the HoHoNet: 360 Indoor Holistic Understanding with Latent Horizontal Features paper on the Stanford2D3D Panoramic dataset?
RMSE, absolute relative error
What metrics were used to measure the BiFuse with fusion model in the BiFuse: Monocular 360 Depth Estimation via Bi-Projection Fusion paper on the Stanford2D3D Panoramic dataset?
RMSE, absolute relative error
What metrics were used to measure the Jin et al. model in the Geometric Structure Based and Regularized Depth Estimation From 360 Indoor Imagery paper on the Stanford2D3D Panoramic dataset?
RMSE, absolute relative error
What metrics were used to measure the SphereDepth model in the SphereDepth: Panorama Depth Estimation from Spherical Domain paper on the Stanford2D3D Panoramic dataset?
RMSE, absolute relative error
What metrics were used to measure the OmniDepth model in the OmniDepth: Dense Depth Estimation for Indoors Spherical Panoramas paper on the Stanford2D3D Panoramic dataset?
RMSE, absolute relative error
What metrics were used to measure the Bhattacharjee et al. model in the Estimating Image Depth in the Comics Domain paper on the DCM dataset?
Abs Rel, RMSE, RMSE log, Sq Rel
What metrics were used to measure the MIDAS model in the Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer paper on the DCM dataset?
Abs Rel, RMSE, RMSE log, Sq Rel
What metrics were used to measure the T2Net model in the T2Net: Synthetic-to-Realistic Translation for Solving Single-Image Depth Estimation Tasks paper on the DCM dataset?
Abs Rel, RMSE, RMSE log, Sq Rel
What metrics were used to measure the Atlas (plain) model in the Atlas: End-to-End 3D Scene Reconstruction from Posed Images paper on the ScanNet dataset?
RMSE, absolute relative error
What metrics were used to measure the Atlas (finetuned) model in the Atlas: End-to-End 3D Scene Reconstruction from Posed Images paper on the ScanNet dataset?
RMSE, absolute relative error
What metrics were used to measure the X-TC (Cross-Task Consistency) model in the Robust Learning Through Cross-Task Consistency paper on the Taskonomy dataset?
L1 error
What metrics were used to measure the DINOv2 (ViT-g/14 frozen, w/ DPT decoder) model in the DINOv2: Learning Robust Visual Features without Supervision paper on the NYU-Depth V2 dataset?
RMS, RMSE, mAP
What metrics were used to measure the SwinV2-L 1K-MIM model in the Revealing the Dark Secrets of Masked Image Modeling paper on the NYU-Depth V2 dataset?
RMS, RMSE, mAP
What metrics were used to measure the Semantic-aware NN model in the 3D Ken Burns Effect from a Single Image paper on the NYU-Depth V2 dataset?
RMS, RMSE, mAP
What metrics were used to measure the SwinV2-B 1K-MIM model in the Revealing the Dark Secrets of Masked Image Modeling paper on the NYU-Depth V2 dataset?
RMS, RMSE, mAP
What metrics were used to measure the P3Depth model in the P3Depth: Monocular Depth Estimation with a Piecewise Planarity Prior paper on the NYU-Depth V2 dataset?
RMS, RMSE, mAP
What metrics were used to measure the AdaBins model in the AdaBins: Depth Estimation using Adaptive Bins paper on the NYU-Depth V2 dataset?
RMS, RMSE, mAP
What metrics were used to measure the TransDepth (AGD+ ViT) model in the Transformer-Based Attention Networks for Continuous Pixel-Wise Prediction paper on the NYU-Depth V2 dataset?
RMS, RMSE, mAP
What metrics were used to measure the BTS model in the From Big to Small: Multi-Scale Local Planar Guidance for Monocular Depth Estimation paper on the NYU-Depth V2 dataset?
RMS, RMSE, mAP
What metrics were used to measure the VNL model in the Enforcing geometric constraints of virtual normal for depth prediction paper on the NYU-Depth V2 dataset?
RMS, RMSE, mAP
What metrics were used to measure the Optimized, freeform model in the Deep Optics for Monocular Depth Estimation and 3D Object Detection paper on the NYU-Depth V2 dataset?
RMS, RMSE, mAP
What metrics were used to measure the Freeform model in the Deep Optics for Monocular Depth Estimation and 3D Object Detection paper on the NYU-Depth V2 dataset?
RMS, RMSE, mAP
What metrics were used to measure the DORN model in the Deep Ordinal Regression Network for Monocular Depth Estimation paper on the NYU-Depth V2 dataset?
RMS, RMSE, mAP
What metrics were used to measure the MS-CRF model in the Multi-Scale Continuous CRFs as Sequential Deep Networks for Monocular Depth Estimation paper on the NYU-Depth V2 dataset?
RMS, RMSE, mAP
What metrics were used to measure the PAD-Net model in the PAD-Net: Multi-Tasks Guided Prediction-and-Distillation Network for Simultaneous Depth Estimation and Scene Parsing paper on the NYU-Depth V2 dataset?
RMS, RMSE, mAP
What metrics were used to measure the Defocus/DepthNet (Normalized) model in the Focus on defocus: bridging the synthetic to real domain gap for depth estimation paper on the NYU-Depth V2 dataset?
RMS, RMSE, mAP
What metrics were used to measure the A2J model in the A2J: Anchor-to-Joint Regression Network for 3D Articulated Pose Estimation from a Single Depth Image paper on the NYU-Depth V2 dataset?
RMS, RMSE, mAP
What metrics were used to measure the UniFuse model in the UniFuse: Unidirectional Fusion for 360° Panorama Depth Estimation paper on the Matterport3D dataset?
Abs Rel
What metrics were used to measure the LightDepth model in the LightDepth: A Resource Efficient Depth Estimation Approach for Dealing with Ground Truth Sparsity via Curriculum Learning paper on the KITTI Eigen split dataset?
Number of parameters (M)
What metrics were used to measure the DELTAS model in the DELTAS: Depth Estimation by Learning Triangulation And densification of Sparse points paper on the ScanNetV2 dataset?
Average mean absolute error, absolute relative error
What metrics were used to measure the LeReS model in the Learning to Recover 3D Scene Shape from a Single Image paper on the ScanNetV2 dataset?
Average mean absolute error, absolute relative error
What metrics were used to measure the XBNET model in the XBNet : An Extremely Boosted Neural Network paper on the Diabetes dataset?
Accuracy
What metrics were used to measure the CAML model in the Mimic-IV-ICD: A new benchmark for eXtreme MultiLabel Classification paper on the MIMIC-IV-ICD-10-full dataset?
Macro-AUC, Micro-AUC, Macro-F1, Micro-F1, Precision@8
What metrics were used to measure the PLM model in the Mimic-IV-ICD: A new benchmark for eXtreme MultiLabel Classification paper on the MIMIC-IV-ICD-10-full dataset?
Macro-AUC, Micro-AUC, Macro-F1, Micro-F1, Precision@8
What metrics were used to measure the LAAT model in the Mimic-IV-ICD: A new benchmark for eXtreme MultiLabel Classification paper on the MIMIC-IV-ICD-10-full dataset?
Macro-AUC, Micro-AUC, Macro-F1, Micro-F1, Precision@8
What metrics were used to measure the Joint LAAT model in the Mimic-IV-ICD: A new benchmark for eXtreme MultiLabel Classification paper on the MIMIC-IV-ICD-10-full dataset?
Macro-AUC, Micro-AUC, Macro-F1, Micro-F1, Precision@8
What metrics were used to measure the MSMN model in the Mimic-IV-ICD: A new benchmark for eXtreme MultiLabel Classification paper on the MIMIC-IV-ICD-10-full dataset?
Macro-AUC, Micro-AUC, Macro-F1, Micro-F1, Precision@8
What metrics were used to measure the PLM-ICD model in the Automated Medical Coding on MIMIC-III and MIMIC-IV: A Critical Review and Replicability Study paper on the MIMIC-IV ICD-9 dataset?
AUC Macro, AUC Micro, Exact Match Ratio, F1 Macro, F1 Micro, Precision@15, Precision@8, R-Prec, mAP
What metrics were used to measure the LAAT model in the Automated Medical Coding on MIMIC-III and MIMIC-IV: A Critical Review and Replicability Study paper on the MIMIC-IV ICD-9 dataset?
AUC Macro, AUC Micro, Exact Match Ratio, F1 Macro, F1 Micro, Precision@15, Precision@8, R-Prec, mAP
What metrics were used to measure the MultiResCNN model in the Automated Medical Coding on MIMIC-III and MIMIC-IV: A Critical Review and Replicability Study paper on the MIMIC-IV ICD-9 dataset?
AUC Macro, AUC Micro, Exact Match Ratio, F1 Macro, F1 Micro, Precision@15, Precision@8, R-Prec, mAP
What metrics were used to measure the Bi-GRU model in the Automated Medical Coding on MIMIC-III and MIMIC-IV: A Critical Review and Replicability Study paper on the MIMIC-IV ICD-9 dataset?
AUC Macro, AUC Micro, Exact Match Ratio, F1 Macro, F1 Micro, Precision@15, Precision@8, R-Prec, mAP
What metrics were used to measure the CAML model in the Automated Medical Coding on MIMIC-III and MIMIC-IV: A Critical Review and Replicability Study paper on the MIMIC-IV ICD-9 dataset?
AUC Macro, AUC Micro, Exact Match Ratio, F1 Macro, F1 Micro, Precision@15, Precision@8, R-Prec, mAP
What metrics were used to measure the CNN model in the Automated Medical Coding on MIMIC-III and MIMIC-IV: A Critical Review and Replicability Study paper on the MIMIC-IV ICD-9 dataset?
AUC Macro, AUC Micro, Exact Match Ratio, F1 Macro, F1 Micro, Precision@15, Precision@8, R-Prec, mAP
What metrics were used to measure the MSMN model in the Mimic-IV-ICD: A new benchmark for eXtreme MultiLabel Classification paper on the MIMIC-IV-ICD9-full dataset?
Macro AUC, Micro AUC, F1 Macro, F1 Micro, Precision@8
What metrics were used to measure the PLM-ICD model in the Mimic-IV-ICD: A new benchmark for eXtreme MultiLabel Classification paper on the MIMIC-IV-ICD9-full dataset?
Macro AUC, Micro AUC, F1 Macro, F1 Micro, Precision@8
What metrics were used to measure the Joint LAAT model in the Mimic-IV-ICD: A new benchmark for eXtreme MultiLabel Classification paper on the MIMIC-IV-ICD9-full dataset?
Macro AUC, Micro AUC, F1 Macro, F1 Micro, Precision@8
What metrics were used to measure the LAAT model in the Mimic-IV-ICD: A new benchmark for eXtreme MultiLabel Classification paper on the MIMIC-IV-ICD9-full dataset?
Macro AUC, Micro AUC, F1 Macro, F1 Micro, Precision@8
What metrics were used to measure the CAML model in the Mimic-IV-ICD: A new benchmark for eXtreme MultiLabel Classification paper on the MIMIC-IV-ICD9-full dataset?
Macro AUC, Micro AUC, F1 Macro, F1 Micro, Precision@8
What metrics were used to measure the MSMN model in the Mimic-IV-ICD: A new benchmark for eXtreme MultiLabel Classification paper on the MIMIC-IV-ICD10-top50 dataset?
F1 (micro), F1 (macro), AUC (Micro), AUC (Macro), Precision@5
What metrics were used to measure the PLM-ICD model in the Mimic-IV-ICD: A new benchmark for eXtreme MultiLabel Classification paper on the MIMIC-IV-ICD10-top50 dataset?
F1 (micro), F1 (macro), AUC (Micro), AUC (Macro), Precision@5
What metrics were used to measure the Joint LAAT model in the Mimic-IV-ICD: A new benchmark for eXtreme MultiLabel Classification paper on the MIMIC-IV-ICD10-top50 dataset?
F1 (micro), F1 (macro), AUC (Micro), AUC (Macro), Precision@5
What metrics were used to measure the LAAT model in the Mimic-IV-ICD: A new benchmark for eXtreme MultiLabel Classification paper on the MIMIC-IV-ICD10-top50 dataset?
F1 (micro), F1 (macro), AUC (Micro), AUC (Macro), Precision@5
What metrics were used to measure the CAML model in the Mimic-IV-ICD: A new benchmark for eXtreme MultiLabel Classification paper on the MIMIC-IV-ICD10-top50 dataset?
F1 (micro), F1 (macro), AUC (Micro), AUC (Macro), Precision@5
What metrics were used to measure the PLM-ICD model in the Automated Medical Coding on MIMIC-III and MIMIC-IV: A Critical Review and Replicability Study paper on the MIMIC-IV ICD-10 dataset?
Precision@8, F1 Macro, F1 Micro, Precision@15, R-Prec, mAP, Exact Match Ratio, AUC Macro, AUC Micro
What metrics were used to measure the LAAT model in the Automated Medical Coding on MIMIC-III and MIMIC-IV: A Critical Review and Replicability Study paper on the MIMIC-IV ICD-10 dataset?
Precision@8, F1 Macro, F1 Micro, Precision@15, R-Prec, mAP, Exact Match Ratio, AUC Macro, AUC Micro
What metrics were used to measure the MultiResCNN model in the Automated Medical Coding on MIMIC-III and MIMIC-IV: A Critical Review and Replicability Study paper on the MIMIC-IV ICD-10 dataset?
Precision@8, F1 Macro, F1 Micro, Precision@15, R-Prec, mAP, Exact Match Ratio, AUC Macro, AUC Micro
What metrics were used to measure the CAML model in the Automated Medical Coding on MIMIC-III and MIMIC-IV: A Critical Review and Replicability Study paper on the MIMIC-IV ICD-10 dataset?
Precision@8, F1 Macro, F1 Micro, Precision@15, R-Prec, mAP, Exact Match Ratio, AUC Macro, AUC Micro
What metrics were used to measure the Bi-GRU model in the Automated Medical Coding on MIMIC-III and MIMIC-IV: A Critical Review and Replicability Study paper on the MIMIC-IV ICD-10 dataset?
Precision@8, F1 Macro, F1 Micro, Precision@15, R-Prec, mAP, Exact Match Ratio, AUC Macro, AUC Micro
What metrics were used to measure the CNN model in the Automated Medical Coding on MIMIC-III and MIMIC-IV: A Critical Review and Replicability Study paper on the MIMIC-IV ICD-10 dataset?
Precision@8, F1 Macro, F1 Micro, Precision@15, R-Prec, mAP, Exact Match Ratio, AUC Macro, AUC Micro
What metrics were used to measure the MSMN+KEPTLongformer model in the Knowledge Injected Prompt Based Fine-tuning for Multi-label Few-shot ICD Coding paper on the MIMIC-III dataset?
Micro-F1, Macro-F1, Micro-AUC, Macro-AUC, Precision@5, Precision@8, Precision@15
What metrics were used to measure the EffectiveCAN model in the Effective Convolutional Attention Network for Multi-label Clinical Document Classification paper on the MIMIC-III dataset?
Micro-F1, Macro-F1, Micro-AUC, Macro-AUC, Precision@5, Precision@8, Precision@15
What metrics were used to measure the Discnet+RE model in the Automatic ICD Coding Exploiting Discourse Structure and Reconciled Code Embeddings paper on the MIMIC-III dataset?
Micro-F1, Macro-F1, Micro-AUC, Macro-AUC, Precision@5, Precision@8, Precision@15
What metrics were used to measure the RAC model in the Read, Attend, and Code: Pushing the Limits of Medical Codes Prediction from Clinical Notes by Machines paper on the MIMIC-III dataset?
Micro-F1, Macro-F1, Micro-AUC, Macro-AUC, Precision@5, Precision@8, Precision@15
What metrics were used to measure the MSMN model in the Code Synonyms Do Matter: Multiple Synonyms Matching Network for Automatic ICD Coding paper on the MIMIC-III dataset?
Micro-F1, Macro-F1, Micro-AUC, Macro-AUC, Precision@5, Precision@8, Precision@15
What metrics were used to measure the JointLAAT model in the A Label Attention Model for ICD Coding from Clinical Text paper on the MIMIC-III dataset?
Micro-F1, Macro-F1, Micro-AUC, Macro-AUC, Precision@5, Precision@8, Precision@15
What metrics were used to measure the LAAT model in the A Label Attention Model for ICD Coding from Clinical Text paper on the MIMIC-III dataset?
Micro-F1, Macro-F1, Micro-AUC, Macro-AUC, Precision@5, Precision@8, Precision@15
What metrics were used to measure the MSATT-KG model in the paper on the MIMIC-III dataset?
Micro-F1, Macro-F1, Micro-AUC, Macro-AUC, Precision@5, Precision@8, Precision@15
What metrics were used to measure the MultiResCNN model in the ICD Coding from Clinical Text Using Multi-Filter Residual Convolutional Neural Network paper on the MIMIC-III dataset?
Micro-F1, Macro-F1, Micro-AUC, Macro-AUC, Precision@5, Precision@8, Precision@15
What metrics were used to measure the CAML model in the Explainable Prediction of Medical Codes from Clinical Text paper on the MIMIC-III dataset?
Micro-F1, Macro-F1, Micro-AUC, Macro-AUC, Precision@5, Precision@8, Precision@15
What metrics were used to measure the DR-CAML model in the Explainable Prediction of Medical Codes from Clinical Text paper on the MIMIC-III dataset?
Micro-F1, Macro-F1, Micro-AUC, Macro-AUC, Precision@5, Precision@8, Precision@15
What metrics were used to measure the SVM model in the Explainable Prediction of Medical Codes from Clinical Text paper on the MIMIC-III dataset?
Micro-F1, Macro-F1, Micro-AUC, Macro-AUC, Precision@5, Precision@8, Precision@15
What metrics were used to measure the CNN model in the Explainable Prediction of Medical Codes from Clinical Text paper on the MIMIC-III dataset?
Micro-F1, Macro-F1, Micro-AUC, Macro-AUC, Precision@5, Precision@8, Precision@15
What metrics were used to measure the Bi-GRU model in the Explainable Prediction of Medical Codes from Clinical Text paper on the MIMIC-III dataset?
Micro-F1, Macro-F1, Micro-AUC, Macro-AUC, Precision@5, Precision@8, Precision@15
What metrics were used to measure the HAN model in the Explainable Automated Coding of Clinical Notes using Hierarchical Label-wise Attention Networks and Label Embedding Initialisation paper on the MIMIC-III dataset?
Micro-F1, Macro-F1, Micro-AUC, Macro-AUC, Precision@5, Precision@8, Precision@15
What metrics were used to measure the Logistic Regression model in the Explainable Prediction of Medical Codes from Clinical Text paper on the MIMIC-III dataset?
Micro-F1, Macro-F1, Micro-AUC, Macro-AUC, Precision@5, Precision@8, Precision@15
What metrics were used to measure the MSMN model in the Mimic-IV-ICD: A new benchmark for eXtreme MultiLabel Classification paper on the MIMIC-IV-ICD9-top50 dataset?
AUC Macro, AUC Micro, F1 Macro, F1 Micro, Precision@5
What metrics were used to measure the PLM-ICD model in the Mimic-IV-ICD: A new benchmark for eXtreme MultiLabel Classification paper on the MIMIC-IV-ICD9-top50 dataset?
AUC Macro, AUC Micro, F1 Macro, F1 Micro, Precision@5
What metrics were used to measure the Joint LAAT model in the Mimic-IV-ICD: A new benchmark for eXtreme MultiLabel Classification paper on the MIMIC-IV-ICD9-top50 dataset?
AUC Macro, AUC Micro, F1 Macro, F1 Micro, Precision@5
What metrics were used to measure the LAAT model in the Mimic-IV-ICD: A new benchmark for eXtreme MultiLabel Classification paper on the MIMIC-IV-ICD9-top50 dataset?
AUC Macro, AUC Micro, F1 Macro, F1 Micro, Precision@5
What metrics were used to measure the CAML model in the Mimic-IV-ICD: A new benchmark for eXtreme MultiLabel Classification paper on the MIMIC-IV-ICD9-top50 dataset?
AUC Macro, AUC Micro, F1 Macro, F1 Micro, Precision@5
What metrics were used to measure the MAN-SF model in the Deep Attentive Learning for Stock Movement Prediction From Social Media Text and Company Correlations paper on the stocknet dataset?
F1
What metrics were used to measure the StockNet model in the Stock Movement Prediction from Tweets and Historical Prices paper on the stocknet dataset?
F1
What metrics were used to measure the Adversarial LSTM model in the paper on the stocknet dataset?
F1
What metrics were used to measure the HATS model in the paper on the stocknet dataset?
F1
What metrics were used to measure the LSTM model in the Forecasting directional movements of stock prices for intraday trading using LSTM and random forests paper on the S&P 500 dataset?
Average daily returns
What metrics were used to measure the SRLP model in the Astock: A New Dataset and Automated Stock Trading based on Stock-specific News Analyzing Model paper on the Astock dataset?
1:1 Accuracy
What metrics were used to measure the ComplEx-N3-RP model in the Relation Prediction as an Auxiliary Training Objective for Improving Multi-Relational Graph Representations paper on the CoDEx Medium dataset?
MRR, Hits@1, Hits@3, Hits@10
What metrics were used to measure the ComplEx model in the CoDEx: A Comprehensive Knowledge Graph Completion Benchmark paper on the CoDEx Medium dataset?
MRR, Hits@1, Hits@3, Hits@10
What metrics were used to measure the TuckER model in the CoDEx: A Comprehensive Knowledge Graph Completion Benchmark paper on the CoDEx Medium dataset?
MRR, Hits@1, Hits@3, Hits@10
What metrics were used to measure the ConvE model in the CoDEx: A Comprehensive Knowledge Graph Completion Benchmark paper on the CoDEx Medium dataset?
MRR, Hits@1, Hits@3, Hits@10
What metrics were used to measure the RESCAL model in the CoDEx: A Comprehensive Knowledge Graph Completion Benchmark paper on the CoDEx Medium dataset?
MRR, Hits@1, Hits@3, Hits@10
What metrics were used to measure the TransE model in the CoDEx: A Comprehensive Knowledge Graph Completion Benchmark paper on the CoDEx Medium dataset?
MRR, Hits@1, Hits@3, Hits@10
What metrics were used to measure the GATNE-I model in the Representation Learning for Attributed Multiplex Heterogeneous Network paper on the Alibaba dataset?
F1-Score, PR AUC, ROC AUC
What metrics were used to measure the GraphStar (double weight on positive examples) model in the Graph Star Net for Generalized Multi-Task Learning paper on the Pubmed (biased evaluation) dataset?
AP, AUC, Accuracy
What metrics were used to measure the SCAT (half of negative examples with 0 features) model in the Encoding Robust Representation for Graph Generation paper on the Pubmed (biased evaluation) dataset?
AP, AUC, Accuracy
What metrics were used to measure the Asymmetric Transitivity Preservation model in the ATP: Directed Graph Embedding with Asymmetric Transitivity Preservation paper on the Cit-HepPH dataset?
AUC
What metrics were used to measure the Prob-CBR model in the Probabilistic Case-based Reasoning for Open-World Knowledge Graph Completion paper on the NELL-995 dataset?
Hits@1, Hits@10, MRR, Mean AP, Hits@3
What metrics were used to measure the CoPER-ConvE model in the Contextual Parameter Generation for Knowledge Graph Link Prediction paper on the NELL-995 dataset?
Hits@1, Hits@10, MRR, Mean AP, Hits@3
What metrics were used to measure the Meta-KGR (ConvE) model in the Adapting Meta Knowledge Graph Information for Multi-Hop Reasoning over Few-Shot Relations paper on the NELL-995 dataset?
Hits@1, Hits@10, MRR, Mean AP, Hits@3