| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
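Each row below is one record with the eight fields in the header; the `code_links` cell holds a JSON list of repository URLs. A minimal parsing sketch (the helper name and the sample row are illustrative, not part of the dataset's tooling; it assumes no cell contains a literal `|`):

```python
import json

COLUMNS = ["dataset", "model_name", "paper_title", "paper_date",
           "paper_url", "code_links", "prompts", "answer"]

def parse_row(line: str) -> dict:
    """Split one pipe-delimited table row into a record keyed by column name."""
    cells = [c.strip() for c in line.strip().strip("|").split("|")]
    record = dict(zip(COLUMNS, cells))
    # code_links is stored as a JSON-encoded list of repo URLs
    record["code_links"] = json.loads(record["code_links"])
    return record

row = ('| GSM8K | ToRA-Code 7B | ToRA: A Tool-Integrated Reasoning Agent for '
       'Mathematical Problem Solving | 2023-09-29T00:00:00 | '
       'https://arxiv.org/abs/2309.17452v4 | ["https://github.com/microsoft/tora"] | '
       'In the paper ..., what Accuracy score did the ToRA-Code 7B model get on the '
       'GSM8K dataset? | 72.6 |')
print(parse_row(row)["answer"])  # → 72.6
```

Splitting on `|` is safe here only because none of the titles, model names, or prompts in this dump contain a pipe character.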
| ACMPS | CFEN | Revising deep learning methods in parking lot occupancy detection | 2023-06-07T00:00:00 | https://arxiv.org/abs/2306.04288v3 | ["https://github.com/eighonet/parking-research"] | In the paper 'Revising deep learning methods in parking lot occupancy detection', what F1-score did the CFEN model get on the ACMPS dataset? | 0.9789 |
| PASCAL VOC 2012 test | SynCo (ResNet-50) 200ep | SynCo: Synthetic Hard Negatives in Contrastive Learning for Better Unsupervised Visual Representations | 2024-10-03T00:00:00 | https://arxiv.org/abs/2410.02401v5 | ["https://github.com/giakoumoglou/synco"] | In the paper 'SynCo: Synthetic Hard Negatives in Contrastive Learning for Better Unsupervised Visual Representations', what Bounding Box AP score did the SynCo (ResNet-50) 200ep model get on the PASCAL VOC 2012 test dataset? | 57.2 |
| XSum | SRformer-BART | Segmented Recurrent Transformer: An Efficient Sequence-to-Sequence Model | 2023-05-24T00:00:00 | https://arxiv.org/abs/2305.16340v3 | ["https://github.com/yinghanlong/SRtransformer"] | In the paper 'Segmented Recurrent Transformer: An Efficient Sequence-to-Sequence Model', what ROUGE-1 score did the SRformer-BART model get on the XSum dataset? | 39.03 |
| DanceTrack | AED | Associate Everything Detected: Facilitating Tracking-by-Detection to the Unknown | 2024-09-14T00:00:00 | https://arxiv.org/abs/2409.09293v1 | ["https://github.com/balabooooo/aed"] | In the paper 'Associate Everything Detected: Facilitating Tracking-by-Detection to the Unknown', what HOTA score did the AED model get on the DanceTrack dataset? | 66.6 |
| MMBench | LLaVA-LLaMA3-8B-ViT + MoSLoRA | Mixture-of-Subspaces in Low-Rank Adaptation | 2024-06-16T00:00:00 | https://arxiv.org/abs/2406.11909v3 | ["https://github.com/wutaiqiang/moslora"] | In the paper 'Mixture-of-Subspaces in Low-Rank Adaptation', what GPT-3.5 score did the LLaVA-LLaMA3-8B-ViT + MoSLoRA model get on the MMBench dataset? | 73.0 |
| Elliptic Dataset | GAT | Network Analytics for Anti-Money Laundering -- A Systematic Literature Review and Experimental Evaluation | 2024-05-29T00:00:00 | https://arxiv.org/abs/2405.19383v2 | ["https://github.com/B-Deprez/AML_Network"] | In the paper 'Network Analytics for Anti-Money Laundering -- A Systematic Literature Review and Experimental Evaluation', what AUPRC score did the GAT model get on the Elliptic Dataset? | 0.5886 |
| MAWPS | GPT-J | Math Word Problem Solving by Generating Linguistic Variants of Problem Statements | 2023-06-24T00:00:00 | https://arxiv.org/abs/2306.13899v1 | ["https://github.com/starscream-11813/variational-mathematical-reasoning"] | In the paper 'Math Word Problem Solving by Generating Linguistic Variants of Problem Statements', what Accuracy (%) score did the GPT-J model get on the MAWPS dataset? | 9.9 |
| GSM8K | WizardMath-70B-V1.0 | WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09583v1 | ["https://github.com/nlpxucan/wizardlm"] | In the paper 'WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct', what Accuracy score did the WizardMath-70B-V1.0 model get on the GSM8K dataset? | 81.6 |
| Mini-Imagenet 5-way (1-shot) | MSENet | Enhancing Few-Shot Image Classification through Learnable Multi-Scale Embedding and Attention Mechanisms | 2024-09-12T00:00:00 | https://arxiv.org/abs/2409.07989v1 | ["https://github.com/FatemehAskari/MSENet"] | In the paper 'Enhancing Few-Shot Image Classification through Learnable Multi-Scale Embedding and Attention Mechanisms', what Accuracy score did the MSENet model get on the Mini-Imagenet 5-way (1-shot) dataset? | 66.57 |
| CVC-ColonDB | EMCAD | EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation | 2024-05-11T00:00:00 | https://arxiv.org/abs/2405.06880v1 | ["https://github.com/sldgroup/emcad"] | In the paper 'EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation', what mean Dice score did the EMCAD model get on the CVC-ColonDB dataset? | 0.9231 |
| BEA-2019 (test) | GEC-DI (LM+GED) | Improving Seq2Seq Grammatical Error Correction via Decoding Interventions | 2023-10-23T00:00:00 | https://arxiv.org/abs/2310.14534v1 | ["https://github.com/Jacob-Zhou/gecdi"] | In the paper 'Improving Seq2Seq Grammatical Error Correction via Decoding Interventions', what F0.5 score did the GEC-DI (LM+GED) model get on the BEA-2019 (test) dataset? | 73.1 |
| STS12 | PromptEOL+CSE+OPT-2.7B | Scaling Sentence Embeddings with Large Language Models | 2023-07-31T00:00:00 | https://arxiv.org/abs/2307.16645v1 | ["https://github.com/kongds/scaling_sentemb"] | In the paper 'Scaling Sentence Embeddings with Large Language Models', what Spearman Correlation score did the PromptEOL+CSE+OPT-2.7B model get on the STS12 dataset? | 0.7949 |
| CIFAR-100-LT (ρ=50) | GML (ResNet-32) | Long-Tailed Recognition by Mutual Information Maximization between Latent Features and Ground-Truth Labels | 2023-05-02T00:00:00 | https://arxiv.org/abs/2305.01160v3 | ["https://github.com/bluecdm/Long-tailed-recognition"] | In the paper 'Long-Tailed Recognition by Mutual Information Maximization between Latent Features and Ground-Truth Labels', what Error Rate score did the GML (ResNet-32) model get on the CIFAR-100-LT (ρ=50) dataset? | 41.9 |
| NAS-Bench-201, CIFAR-100 | DiNAS | Multi-conditioned Graph Diffusion for Neural Architecture Search | 2024-03-09T00:00:00 | https://arxiv.org/abs/2403.06020v2 | ["https://github.com/rohanasthana/dinas"] | In the paper 'Multi-conditioned Graph Diffusion for Neural Architecture Search', what Accuracy (Test) score did the DiNAS model get on the NAS-Bench-201, CIFAR-100 dataset? | 73.51 |
| CIRR | CoVR-BLIP | CoVR-2: Automatic Data Construction for Composed Video Retrieval | 2023-08-28T00:00:00 | https://arxiv.org/abs/2308.14746v4 | ["https://github.com/lucas-ventura/CoVR"] | In the paper 'CoVR-2: Automatic Data Construction for Composed Video Retrieval', what (Recall@5+Recall_subset@1)/2 score did the CoVR-BLIP model get on the CIRR dataset? | 76.81 |
| TerraIncognita | SPG (CLIP, ViT-B/16) | Soft Prompt Generation for Domain Generalization | 2024-04-30T00:00:00 | https://arxiv.org/abs/2404.19286v2 | ["https://github.com/renytek13/soft-prompt-generation-with-cgan"] | In the paper 'Soft Prompt Generation for Domain Generalization', what Average Accuracy score did the SPG (CLIP, ViT-B/16) model get on the TerraIncognita dataset? | 50.2 |
| RST-DT | Top-down Llama 2 (13B) | Can we obtain significant success in RST discourse parsing by using Large Language Models? | 2024-03-08T00:00:00 | https://arxiv.org/abs/2403.05065v1 | ["https://github.com/nttcslab-nlp/rstparser_eacl24"] | In the paper 'Can we obtain significant success in RST discourse parsing by using Large Language Models?', what Standard Parseval (Span) score did the Top-down Llama 2 (13B) model get on the RST-DT dataset? | 78.6 |
| ASQP | ChatGPT (gpt-3.5-turbo, few-shot) | MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction | 2023-05-22T00:00:00 | https://arxiv.org/abs/2305.12627v1 | ["https://github.com/ZubinGou/multi-view-prompting"] | In the paper 'MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction', what F1 (R15) score did the ChatGPT (gpt-3.5-turbo, few-shot) model get on the ASQP dataset? | 34.27 |
| ETTm2 (192) Multivariate | DiPE-Linear | Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting | 2024-11-26T00:00:00 | https://arxiv.org/abs/2411.17257v1 | ["https://github.com/wintertee/dipe-linear"] | In the paper 'Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting', what MSE score did the DiPE-Linear model get on the ETTm2 (192) Multivariate dataset? | 0.216 |
| Cornell | HiGNN | Learn from Heterophily: Heterophilous Information-enhanced Graph Neural Network | 2024-03-26T00:00:00 | https://arxiv.org/abs/2403.17351v2 | ["https://github.com/zylMozart/HiGNN"] | In the paper 'Learn from Heterophily: Heterophilous Information-enhanced Graph Neural Network', what Accuracy score did the HiGNN model get on the Cornell dataset? | 80.00 ± 4.26 |
| SVAMP | OpenMath-CodeLlama-70B (w/ code) | OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | 2024-02-15T00:00:00 | https://arxiv.org/abs/2402.10176v2 | ["https://github.com/kipok/nemo-skills"] | In the paper 'OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset', what Execution Accuracy score did the OpenMath-CodeLlama-70B (w/ code) model get on the SVAMP dataset? | 87.8 |
| ImageNet | TinySaver(Swin_large, 1.0 Acc drop) | Tiny Models are the Computational Saver for Large Models | 2024-03-26T00:00:00 | https://arxiv.org/abs/2403.17726v3 | ["https://github.com/QingyuanWang/tinysaver"] | In the paper 'Tiny Models are the Computational Saver for Large Models', what Top 1 Accuracy score did the TinySaver(Swin_large, 1.0 Acc drop) model get on the ImageNet dataset? | 85.24 |
| DSIFN-CD | HANet | HANet: A Hierarchical Attention Network for Change Detection With Bitemporal Very-High-Resolution Remote Sensing Images | 2024-04-14T00:00:00 | https://arxiv.org/abs/2404.09178v1 | ["https://github.com/chengxihan/hanet-cd"] | In the paper 'HANet: A Hierarchical Attention Network for Change Detection With Bitemporal Very-High-Resolution Remote Sensing Images', what F1 score did the HANet model get on the DSIFN-CD dataset? | 62.67 |
| Atari 2600 River Raid | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07T00:00:00 | https://arxiv.org/abs/2305.04180v3 | ["https://github.com/xinjinghao/color"] | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score did the ASL DDQN model get on the Atari 2600 River Raid dataset? | 24445 |
| MPI-INF-3DHP | RTPCA | Refined Temporal Pyramidal Compression-and-Amplification Transformer for 3D Human Pose Estimation | 2023-09-04T00:00:00 | https://arxiv.org/abs/2309.01365v3 | ["https://github.com/hbing-l/rtpca"] | In the paper 'Refined Temporal Pyramidal Compression-and-Amplification Transformer for 3D Human Pose Estimation', what AUC score did the RTPCA model get on the MPI-INF-3DHP dataset? | 74.2 |
| TCGA | Snuffy (SimCLR Exhaustive) | Snuffy: Efficient Whole Slide Image Classifier | 2024-08-15T00:00:00 | https://arxiv.org/abs/2408.08258v2 | ["https://github.com/jafarinia/snuffy"] | In the paper 'Snuffy: Efficient Whole Slide Image Classifier', what AUC score did the Snuffy (SimCLR Exhaustive) model get on the TCGA dataset? | 0.972 |
| Manga109 - 3x upscaling | HMA† | HMANet: Hybrid Multi-Axis Aggregation Network for Image Super-Resolution | 2024-05-08T00:00:00 | https://arxiv.org/abs/2405.05001v1 | ["https://github.com/korouuuuu/hma"] | In the paper 'HMANet: Hybrid Multi-Axis Aggregation Network for Image Super-Resolution', what PSNR score did the HMA† model get on the Manga109 - 3x upscaling dataset? | 36.10 |
| MVTec AD | CRAD | Continuous Memory Representation for Anomaly Detection | 2024-02-28T00:00:00 | https://arxiv.org/abs/2402.18293v3 | ["https://github.com/tae-mo/GRAD"] | In the paper 'Continuous Memory Representation for Anomaly Detection', what Detection AUROC score did the CRAD model get on the MVTec AD dataset? | 99.4 |
| HCP Aging | NeuroPath | NeuroPath: A Neural Pathway Transformer for Joining the Dots of Human Connectomes | 2024-09-26T00:00:00 | https://arxiv.org/abs/2409.17510v3 | ["https://github.com/Chrisa142857/neuro_detour"] | In the paper 'NeuroPath: A Neural Pathway Transformer for Joining the Dots of Human Connectomes', what Accuracy score did the NeuroPath model get on the HCP Aging dataset? | 98.23 |
| DuoRC | RecallM | RecallM: An Adaptable Memory Mechanism with Temporal Understanding for Large Language Models | 2023-07-06T00:00:00 | https://arxiv.org/abs/2307.02738v3 | ["https://github.com/cisco-open/DeepVision/tree/main/recallm"] | In the paper 'RecallM: An Adaptable Memory Mechanism with Temporal Understanding for Large Language Models', what Accuracy score did the RecallM model get on the DuoRC dataset? | 48.13 |
| QVHighlights | R^2-Tuning | $R^2$-Tuning: Efficient Image-to-Video Transfer Learning for Video Temporal Grounding | 2024-03-31T00:00:00 | https://arxiv.org/abs/2404.00801v2 | ["https://github.com/yeliudev/R2-Tuning"] | In the paper '$R^2$-Tuning: Efficient Image-to-Video Transfer Learning for Video Temporal Grounding', what mAP score did the R^2-Tuning model get on the QVHighlights dataset? | 46.17 |
| ColonINST-v1 (Seen) | MobileVLM-1.7B (w/ LoRA, w/ extra data) | MobileVLM : A Fast, Strong and Open Vision Language Assistant for Mobile Devices | 2023-12-28T00:00:00 | https://arxiv.org/abs/2312.16886v2 | ["https://github.com/meituan-automl/mobilevlm"] | In the paper 'MobileVLM : A Fast, Strong and Open Vision Language Assistant for Mobile Devices', what Accuracy score did the MobileVLM-1.7B (w/ LoRA, w/ extra data) model get on the ColonINST-v1 (Seen) dataset? | 97.87 |
| MLO-Cn2 | Minute Climatology | Effective Benchmarks for Optical Turbulence Modeling | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03573v1 | ["https://github.com/cdjellen/otbench"] | In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the Minute Climatology model get on the MLO-Cn2 dataset? | 0.551 |
| Clotho | PaSST-RoBERTa & Estimated Audio–Caption Correspondences | Estimated Audio-Caption Correspondences Improve Language-Based Audio Retrieval | 2024-08-21T00:00:00 | https://arxiv.org/abs/2408.11641v1 | ["https://github.com/optimusprimus/salsa"] | In the paper 'Estimated Audio-Caption Correspondences Improve Language-Based Audio Retrieval', what R@1 score did the PaSST-RoBERTa & Estimated Audio–Caption Correspondences model get on the Clotho dataset? | 27.69 |
| ETTm2 (720) Multivariate | TSMixer | TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting | 2023-06-14T00:00:00 | https://arxiv.org/abs/2306.09364v4 | ["https://github.com/ibm/tsfm"] | In the paper 'TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting', what MSE score did the TSMixer model get on the ETTm2 (720) Multivariate dataset? | 0.358 |
| Referring Expressions for DAVIS 2016 & 2017 | MUTR | Referred by Multi-Modality: A Unified Temporal Transformer for Video Object Segmentation | 2023-05-25T00:00:00 | https://arxiv.org/abs/2305.16318v2 | ["https://github.com/opengvlab/mutr"] | In the paper 'Referred by Multi-Modality: A Unified Temporal Transformer for Video Object Segmentation', what J&F 1st frame score did the MUTR model get on the Referring Expressions for DAVIS 2016 & 2017 dataset? | 68.0 |
| Kinetics-600 12 frames, 64x64 | OmniTokenizer-AR | OmniTokenizer: A Joint Image-Video Tokenizer for Visual Generation | 2024-06-13T00:00:00 | https://arxiv.org/abs/2406.09399v1 | ["https://github.com/foundationvision/omnitokenizer"] | In the paper 'OmniTokenizer: A Joint Image-Video Tokenizer for Visual Generation', what FVD score did the OmniTokenizer-AR model get on the Kinetics-600 12 frames, 64x64 dataset? | 32.9 |
| ImageNet-1k vs Places | ODIN+UMAP (ResNet-50) | Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability | 2023-06-06T00:00:00 | https://arxiv.org/abs/2306.03715v1 | ["https://github.com/tmlr-group/unleashing-mask"] | In the paper 'Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability', what FPR95 score did the ODIN+UMAP (ResNet-50) model get on the ImageNet-1k vs Places dataset? | 50.06 |
| RMAS | MAS-SAM | MAS-SAM: Segment Any Marine Animal with Aggregated Features | 2024-04-24T00:00:00 | https://arxiv.org/abs/2404.15700v2 | ["https://github.com/drchip61/mas-sam"] | In the paper 'MAS-SAM: Segment Any Marine Animal with Aggregated Features', what S-measure score did the MAS-SAM model get on the RMAS dataset? | 0.865 |
| ImageNet | DAVI | Diffusion Prior-Based Amortized Variational Inference for Noisy Inverse Problems | 2024-07-23T00:00:00 | https://arxiv.org/abs/2407.16125v1 | ["https://github.com/mlvlab/davi"] | In the paper 'Diffusion Prior-Based Amortized Variational Inference for Noisy Inverse Problems', what PSNR score did the DAVI model get on the ImageNet dataset? | 26.58 |
| cifar10 | ResNet18 | Guarding Barlow Twins Against Overfitting with Mixed Samples | 2023-12-04T00:00:00 | https://arxiv.org/abs/2312.02151v1 | ["https://github.com/wgcban/mix-bt"] | In the paper 'Guarding Barlow Twins Against Overfitting with Mixed Samples', what average top-1 classification accuracy score did the ResNet18 model get on the cifar10 dataset? | 92.58 |
| COD | ZoomNeXt-PVTv2-B5 | ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection | 2023-10-31T00:00:00 | https://arxiv.org/abs/2310.20208v4 | ["https://github.com/lartpang/zoomnext"] | In the paper 'ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection', what MAE score did the ZoomNeXt-PVTv2-B5 model get on the COD dataset? | 0.018 |
| ETTm2 (336) Multivariate | PRformer | PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting | 2024-08-20T00:00:00 | https://arxiv.org/abs/2408.10483v1 | ["https://github.com/usualheart/prformer"] | In the paper 'PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting', what MSE score did the PRformer model get on the ETTm2 (336) Multivariate dataset? | 0.272 |
| Set14 - 2x upscaling | HMA† | HMANet: Hybrid Multi-Axis Aggregation Network for Image Super-Resolution | 2024-05-08T00:00:00 | https://arxiv.org/abs/2405.05001v1 | ["https://github.com/korouuuuu/hma"] | In the paper 'HMANet: Hybrid Multi-Axis Aggregation Network for Image Super-Resolution', what PSNR score did the HMA† model get on the Set14 - 2x upscaling dataset? | 35.33 |
| OK-VQA | RA-VQA-v2 (BLIP 2) | Fine-grained Late-interaction Multi-modal Retrieval for Retrieval Augmented Visual Question Answering | 2023-09-29T00:00:00 | https://arxiv.org/abs/2309.17133v2 | ["https://github.com/linweizhedragon/retrieval-augmented-visual-question-answering"] | In the paper 'Fine-grained Late-interaction Multi-modal Retrieval for Retrieval Augmented Visual Question Answering', what Accuracy score did the RA-VQA-v2 (BLIP 2) model get on the OK-VQA dataset? | 62.08 |
| GSM8K | ToRA-Code 34B | ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving | 2023-09-29T00:00:00 | https://arxiv.org/abs/2309.17452v4 | ["https://github.com/microsoft/tora"] | In the paper 'ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving', what Accuracy score did the ToRA-Code 34B model get on the GSM8K dataset? | 80.7 |
| ModelNet40 | Point-JEPA (no voting) | Point-JEPA: A Joint Embedding Predictive Architecture for Self-Supervised Learning on Point Cloud | 2024-04-25T00:00:00 | https://arxiv.org/abs/2404.16432v4 | ["https://github.com/Ayumu-J-S/Point-JEPA"] | In the paper 'Point-JEPA: A Joint Embedding Predictive Architecture for Self-Supervised Learning on Point Cloud', what Overall Accuracy score did the Point-JEPA (no voting) model get on the ModelNet40 dataset? | 93.8±0.2 |
| Mapillary val | SelaVPR | Towards Seamless Adaptation of Pre-trained Models for Visual Place Recognition | 2024-02-22T00:00:00 | https://arxiv.org/abs/2402.14505v3 | ["https://github.com/Lu-Feng/SelaVPR"] | In the paper 'Towards Seamless Adaptation of Pre-trained Models for Visual Place Recognition', what Recall@1 score did the SelaVPR model get on the Mapillary val dataset? | 90.8 |
| FLIR | RGB-X Scene Adaptive CBAM | RGB-X Object Detection via Scene-Specific Fusion Modules | 2023-10-30T00:00:00 | https://arxiv.org/abs/2310.19372v1 | ["https://github.com/dsriaditya999/rgbxfusion"] | In the paper 'RGB-X Object Detection via Scene-Specific Fusion Modules', what mAP50 score did the RGB-X Scene Adaptive CBAM model get on the FLIR dataset? | 86.16% |
| miniF2F-test | Decomposing the Enigma | Decomposing the Enigma: Subgoal-based Demonstration Learning for Formal Theorem Proving | 2023-05-25T00:00:00 | https://arxiv.org/abs/2305.16366v1 | ["https://github.com/hkunlp/subgoal-theorem-prover"] | In the paper 'Decomposing the Enigma: Subgoal-based Demonstration Learning for Formal Theorem Proving', what Pass@100 score did the Decomposing the Enigma model get on the miniF2F-test dataset? | 45.5 |
| MNIST-to-USPS | FACT | FACT: Federated Adversarial Cross Training | 2023-06-01T00:00:00 | https://arxiv.org/abs/2306.00607v2 | ["https://github.com/jonas-lippl/fact"] | In the paper 'FACT: Federated Adversarial Cross Training', what Accuracy score did the FACT model get on the MNIST-to-USPS dataset? | 98.8 |
| TVBench | ST-LLM | ST-LLM: Large Language Models Are Effective Temporal Learners | 2024-03-30T00:00:00 | https://arxiv.org/abs/2404.00308v1 | ["https://github.com/TencentARC/ST-LLM"] | In the paper 'ST-LLM: Large Language Models Are Effective Temporal Learners', what Average Accuracy score did the ST-LLM model get on the TVBench dataset? | 36.1 |
| AIDA-CoNLL | SpEL-large (2023) | SpEL: Structured Prediction for Entity Linking | 2023-10-23T00:00:00 | https://arxiv.org/abs/2310.14684v1 | ["https://github.com/shavarani/spel"] | In the paper 'SpEL: Structured Prediction for Entity Linking', what Micro-F1 strong score did the SpEL-large (2023) model get on the AIDA-CoNLL dataset? | 88.6 |
| Flickr30k | DSMD | Dynamic Self-adaptive Multiscale Distillation from Pre-trained Multimodal Large Model for Efficient Cross-modal Representation Learning | 2024-04-16T00:00:00 | https://arxiv.org/abs/2404.10838v1 | ["https://github.com/chrisx599/dsmd"] | In the paper 'Dynamic Self-adaptive Multiscale Distillation from Pre-trained Multimodal Large Model for Efficient Cross-modal Representation Learning', what Image-to-text R@1 score did the DSMD model get on the Flickr30k dataset? | 82.5 |
| ETTh1 (720) Multivariate | PRformer | PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting | 2024-08-20T00:00:00 | https://arxiv.org/abs/2408.10483v1 | ["https://github.com/usualheart/prformer"] | In the paper 'PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting', what MSE score did the PRformer model get on the ETTh1 (720) Multivariate dataset? | 0.489 |
| DAVIS 2017 (val) | Cutie (base) | Putting the Object Back into Video Object Segmentation | 2023-10-19T00:00:00 | https://arxiv.org/abs/2310.12982v2 | ["https://github.com/hkchengrex/Cutie"] | In the paper 'Putting the Object Back into Video Object Segmentation', what Jaccard (Mean) score did the Cutie (base) model get on the DAVIS 2017 (val) dataset? | 84.6 |
| UCR Anomaly Archive | Deep SVDD | Explainable Time Series Anomaly Detection using Masked Latent Generative Modeling | 2023-11-21T00:00:00 | https://arxiv.org/abs/2311.12550v5 | ["https://github.com/ml4its/timevqvae-anomalydetection"] | In the paper 'Explainable Time Series Anomaly Detection using Masked Latent Generative Modeling', what accuracy score did the Deep SVDD model get on the UCR Anomaly Archive dataset? | 0.076 |
| Laurel Caverns | AnyLoc-VLAD-DINOv2 | AnyLoc: Towards Universal Visual Place Recognition | 2023-08-01T00:00:00 | https://arxiv.org/abs/2308.00688v2 | ["https://github.com/AnyLoc/AnyLoc"] | In the paper 'AnyLoc: Towards Universal Visual Place Recognition', what Recall@1 score did the AnyLoc-VLAD-DINOv2 model get on the Laurel Caverns dataset? | 61.61 |
| MAWPS | GPT-3 text-babbage-001 (6.7B) | Math Word Problem Solving by Generating Linguistic Variants of Problem Statements | 2023-06-24T00:00:00 | https://arxiv.org/abs/2306.13899v1 | ["https://github.com/starscream-11813/variational-mathematical-reasoning"] | In the paper 'Math Word Problem Solving by Generating Linguistic Variants of Problem Statements', what Accuracy (%) score did the GPT-3 text-babbage-001 (6.7B) model get on the MAWPS dataset? | 2.76 |
| InfoSeek | PreFLMR | PreFLMR: Scaling Up Fine-Grained Late-Interaction Multi-modal Retrievers | 2024-02-13T00:00:00 | https://arxiv.org/abs/2402.08327v2 | ["https://github.com/linweizhedragon/retrieval-augmented-visual-question-answering"] | In the paper 'PreFLMR: Scaling Up Fine-Grained Late-Interaction Multi-modal Retrievers', what Recall@5 score did the PreFLMR model get on the InfoSeek dataset? | 62.1 |
| GSM8K | ToRA-Code 7B | ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving | 2023-09-29T00:00:00 | https://arxiv.org/abs/2309.17452v4 | ["https://github.com/microsoft/tora"] | In the paper 'ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving', what Accuracy score did the ToRA-Code 7B model get on the GSM8K dataset? | 72.6 |
| Long Video Dataset | READMem-QDMN (sr=10) | READMem: Robust Embedding Association for a Diverse Memory in Unconstrained Video Object Segmentation | 2023-05-22T00:00:00 | https://arxiv.org/abs/2305.12823v2 | ["https://github.com/Vujas-Eteph/READMem"] | In the paper 'READMem: Robust Embedding Association for a Diverse Memory in Unconstrained Video Object Segmentation', what J&F score did the READMem-QDMN (sr=10) model get on the Long Video Dataset? | 84.0 |
| ChEBI-20 | InstructMol-G | InstructMol: Multi-Modal Integration for Building a Versatile and Reliable Molecular Assistant in Drug Discovery | 2023-11-27T00:00:00 | https://arxiv.org/abs/2311.16208v1 | ["https://github.com/idea-xl/instructmol"] | In the paper 'InstructMol: Multi-Modal Integration for Building a Versatile and Reliable Molecular Assistant in Drug Discovery', what BLEU-2 score did the InstructMol-G model get on the ChEBI-20 dataset? | 46.6 |
| ActivityNet-QA | MovieChat | MovieChat: From Dense Token to Sparse Memory for Long Video Understanding | 2023-07-31T00:00:00 | https://arxiv.org/abs/2307.16449v4 | ["https://github.com/rese1f/MovieChat"] | In the paper 'MovieChat: From Dense Token to Sparse Memory for Long Video Understanding', what Accuracy score did the MovieChat model get on the ActivityNet-QA dataset? | 45.7 |
| ETTh2 (720) Multivariate | TSMixer | TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting | 2023-06-14T00:00:00 | https://arxiv.org/abs/2306.09364v4 | ["https://github.com/ibm/tsfm"] | In the paper 'TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting', what MSE score did the TSMixer model get on the ETTh2 (720) Multivariate dataset? | 0.395 |
| Wisconsin | DJ-GNN | Diffusion-Jump GNNs: Homophiliation via Learnable Metric Filters | 2023-06-29T00:00:00 | https://arxiv.org/abs/2306.16976v1 | ["https://github.com/AhmedBegggaUA/TFM"] | In the paper 'Diffusion-Jump GNNs: Homophiliation via Learnable Metric Filters', what Accuracy score did the DJ-GNN model get on the Wisconsin dataset? | 92.54±3.70 |
| COCO test-dev | LeYOLO-Small@480 | LeYOLO, New Scalable and Efficient CNN Architecture for Object Detection | 2024-06-20T00:00:00 | https://arxiv.org/abs/2406.14239v1 | ["https://github.com/LilianHollard/LeYOLO"] | In the paper 'LeYOLO, New Scalable and Efficient CNN Architecture for Object Detection', what box mAP score did the LeYOLO-Small@480 model get on the COCO test-dev dataset? | 35.2 |
| AMZ Photo | HH-GraphSAGE | Half-Hop: A graph upsampling approach for slowing down message passing | 2023-08-17T00:00:00 | https://arxiv.org/abs/2308.09198v1 | ["https://github.com/nerdslab/halfhop"] | In the paper 'Half-Hop: A graph upsampling approach for slowing down message passing', what Accuracy score did the HH-GraphSAGE model get on the AMZ Photo dataset? | 94.55% |
| ImageNet | RPO | Read-only Prompt Optimization for Vision-Language Few-shot Learning | 2023-08-29T00:00:00 | https://arxiv.org/abs/2308.14960v2 | ["https://github.com/mlvlab/rpo"] | In the paper 'Read-only Prompt Optimization for Vision-Language Few-shot Learning', what Harmonic mean score did the RPO model get on the ImageNet dataset? | 74.00 |
| SPair-71k | SD+DINO + CleanDIFT (Zero-Shot) | CleanDIFT: Diffusion Features without Noise | 2024-12-04T00:00:00 | https://arxiv.org/abs/2412.03439v1 | ["https://github.com/CompVis/cleandift"] | In the paper 'CleanDIFT: Diffusion Features without Noise', what PCK score did the SD+DINO + CleanDIFT (Zero-Shot) model get on the SPair-71k dataset? | 64.8 |
| Amazon Computers | CGT | Mitigating Degree Biases in Message Passing Mechanism by Utilizing Community Structures | 2023-12-28T00:00:00 | https://arxiv.org/abs/2312.16788v1 | ["https://github.com/nslab-cuk/community-aware-graph-transformer"] | In the paper 'Mitigating Degree Biases in Message Passing Mechanism by Utilizing Community Structures', what Accuracy score did the CGT model get on the Amazon Computers dataset? | 91.45±0.58 |
| Nighttime Driving | CoDA | CoDA: Instructive Chain-of-Domain Adaptation with Severity-Aware Visual Prompt Tuning | 2024-03-26T00:00:00 | https://arxiv.org/abs/2403.17369v3 | ["https://github.com/Cuzyoung/CoDA"] | In the paper 'CoDA: Instructive Chain-of-Domain Adaptation with Severity-Aware Visual Prompt Tuning', what mIoU score did the CoDA model get on the Nighttime Driving dataset? | 59.2 |
| COCO-20i (1-shot) | MSDNet (ResNet-50) | MSDNet: Multi-Scale Decoder for Few-Shot Semantic Segmentation via Transformer-Guided Prototyping | 2024-09-17T00:00:00 | https://arxiv.org/abs/2409.11316v1 | ["https://github.com/amirrezafateh/msdnet"] | In the paper 'MSDNet: Multi-Scale Decoder for Few-Shot Semantic Segmentation via Transformer-Guided Prototyping', what Mean IoU score did the MSDNet (ResNet-50) model get on the COCO-20i (1-shot) dataset? | 46.5 |
| Casia V1+ | Late Fusion | MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization | 2023-12-04T00:00:00 | https://arxiv.org/abs/2312.01790v2 | ["https://github.com/idt-iti/mmfusion-iml"] | In the paper 'MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization', what Average Pixel F1 (fixed threshold) score did the Late Fusion model get on the Casia V1+ dataset? | 0.775 |
| SOTS Outdoor | CasDyF-Net | CasDyF-Net: Image Dehazing via Cascaded Dynamic Filters | 2024-09-13T00:00:00 | https://arxiv.org/abs/2409.08510v1 | ["https://github.com/dauing/casdyf-net"] | In the paper 'CasDyF-Net: Image Dehazing via Cascaded Dynamic Filters', what PSNR score did the CasDyF-Net model get on the SOTS Outdoor dataset? | 38.86 |
| ReCoRD | PaLM 2-M (one-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what F1 score did the PaLM 2-M (one-shot) model get on the ReCoRD dataset? | 92.4 |
| HIDE (trained on GOPRO) | ID-Blau (Stripformer) | ID-Blau: Image Deblurring by Implicit Diffusion-based reBLurring AUgmentation | 2023-12-18T00:00:00 | https://arxiv.org/abs/2312.10998v2 | ["https://github.com/plusgood-steven/id-blau"] | In the paper 'ID-Blau: Image Deblurring by Implicit Diffusion-based reBLurring AUgmentation', what PSNR (sRGB) score did the ID-Blau (Stripformer) model get on the HIDE (trained on GOPRO) dataset? | 31.50 |
| SAR-AIRcraft-1.0 | PGD-YOLOv8 | Physics-Guided Detector for SAR Airplanes | 2024-11-19T00:00:00 | https://arxiv.org/abs/2411.12301v1 | ["https://github.com/xai4sar/pgd"] | In the paper 'Physics-Guided Detector for SAR Airplanes', what Average mAP score did the PGD-YOLOv8 model get on the SAR-AIRcraft-1.0 dataset? | 90.7% |
| MAPS | hFT-Transformer | Automatic Piano Transcription with Hierarchical Frequency-Time Transformer | 2023-07-10T00:00:00 | https://arxiv.org/abs/2307.04305v1 | ["https://github.com/sony/hft-transformer"] | In the paper 'Automatic Piano Transcription with Hierarchical Frequency-Time Transformer', what Onset F1 score did the hFT-Transformer model get on the MAPS dataset? | 85.14 |
| iNaturalist 2018 | DirMixE | Harnessing Hierarchical Label Distribution Variations in Test Agnostic Long-tail Recognition | 2024-05-13T00:00:00 | https://arxiv.org/abs/2405.07780v1 | ["https://github.com/scongl/dirmixe"] | In the paper 'Harnessing Hierarchical Label Distribution Variations in Test Agnostic Long-tail Recognition', what Top-1 Accuracy score did the DirMixE model get on the iNaturalist 2018 dataset? | 73.21% |
| EconLogicQA | Yi-6B-Chat | EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning | 2024-05-13T00:00:00 | https://arxiv.org/abs/2405.07938v2 | ["https://github.com/yinzhu-quan/lm-evaluation-harness"] | In the paper 'EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning', what Accuracy score did the Yi-6B-Chat model get on the EconLogicQA dataset? | 0.2077 |
| Coauthor CS | GraphSAGE | Half-Hop: A graph upsampling approach for slowing down message passing | 2023-08-17T00:00:00 | https://arxiv.org/abs/2308.09198v1 | ["https://github.com/nerdslab/halfhop"] | In the paper 'Half-Hop: A graph upsampling approach for slowing down message passing', what Accuracy score did the GraphSAGE model get on the Coauthor CS dataset? | 95.11% |
| LEVIR-CD | CGNet | Change Guiding Network: Incorporating Change Prior to Guide Change Detection in Remote Sensing Imagery | 2024-04-14T00:00:00 | https://arxiv.org/abs/2404.09179v1 | ["https://github.com/chengxihan/cgnet-cd"] | In the paper 'Change Guiding Network: Incorporating Change Prior to Guide Change Detection in Remote Sensing Imagery', what F1 score did the CGNet model get on the LEVIR-CD dataset? | 92.01 |
Atari 2600 Ice Hockey | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07T00:00:00 | https://arxiv.org/abs/2305.04180v3 | [
"https://github.com/xinjinghao/color"
] | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score score did the ASL DDQN model get on the Atari 2600 Ice Hockey dataset
| -3.6 |
ModelNet40 | PointGST | Parameter-Efficient Fine-Tuning in Spectral Domain for Point Cloud Learning | 2024-10-10T00:00:00 | https://arxiv.org/abs/2410.08114v1 | [
"https://github.com/jerryfeng2003/pointgst"
] | In the paper 'Parameter-Efficient Fine-Tuning in Spectral Domain for Point Cloud Learning', what Overall Accuracy score did the PointGST model get on the ModelNet40 dataset
| 95.3 |
LLFF | CE3D | Chat-Edit-3D: Interactive 3D Scene Editing via Text Prompts | 2024-07-09T00:00:00 | https://arxiv.org/abs/2407.06842v2 | [
"https://github.com/Fangkang515/CE3D"
] | In the paper 'Chat-Edit-3D: Interactive 3D Scene Editing via Text Prompts', what CLIP score did the CE3D model get on the LLFF dataset
| 0.99 |
PASTIS | Exchanger+Mask2Former | Revisiting the Encoding of Satellite Image Time Series | 2023-05-03T00:00:00 | https://arxiv.org/abs/2305.02086v2 | [
"https://github.com/TotalVariation/Exchanger4SITS"
] | In the paper 'Revisiting the Encoding of Satellite Image Time Series', what SQ score did the Exchanger+Mask2Former model get on the PASTIS dataset
| 84.6 |
RST-DT | Top-down Llama 2 (7B) | Can we obtain significant success in RST discourse parsing by using Large Language Models? | 2024-03-08T00:00:00 | https://arxiv.org/abs/2403.05065v1 | [
"https://github.com/nttcslab-nlp/rstparser_eacl24"
] | In the paper 'Can we obtain significant success in RST discourse parsing by using Large Language Models?', what Standard Parseval (Span) score did the Top-down Llama 2 (7B) model get on the RST-DT dataset
| 76.3 |
Places2 | SEELE | Repositioning the Subject within Image | 2024-01-30T00:00:00 | https://arxiv.org/abs/2401.16861v3 | [
"https://github.com/yikai-wang/res"
] | In the paper 'Repositioning the Subject within Image', what FID score did the SEELE model get on the Places2 dataset
| 1.477 |
HKU-IS | M3Net-S | M$^3$Net: Multilevel, Mixed and Multistage Attention Network for Salient Object Detection | 2023-09-15T00:00:00 | https://arxiv.org/abs/2309.08365v1 | [
"https://github.com/I2-Multimedia-Lab/M3Net"
] | In the paper 'M$^3$Net: Multilevel, Mixed and Multistage Attention Network for Salient Object Detection', what MAE score did the M3Net-S model get on the HKU-IS dataset
| 0.019 |
TVBench | PLLaVA-7B | PLLaVA : Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning | 2024-04-25T00:00:00 | https://arxiv.org/abs/2404.16994v2 | [
"https://github.com/magic-research/PLLaVA"
] | In the paper 'PLLaVA : Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning', what Average Accuracy score did the PLLaVA-7B model get on the TVBench dataset
| 34.6 |
ETTh1 (96) | TSMixer | TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting | 2023-06-14T00:00:00 | https://arxiv.org/abs/2306.09364v4 | [
"https://github.com/ibm/tsfm"
] | In the paper 'TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting', what MSE score did the TSMixer model get on the ETTh1 (96) dataset
| 0.368 |
WikiTableQuestions | SynTQA (GPT) | SynTQA: Synergistic Table-based Question Answering via Mixture of Text-to-SQL and E2E TQA | 2024-09-25T00:00:00 | https://arxiv.org/abs/2409.16682v2 | [
"https://github.com/siyue-zhang/SynTableQA"
] | In the paper 'SynTQA: Synergistic Table-based Question Answering via Mixture of Text-to-SQL and E2E TQA', what Accuracy (Test) score did the SynTQA (GPT) model get on the WikiTableQuestions dataset
| 74.4 |
MM-Vet | InternVL2.5-26B | Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling | 2024-12-06T00:00:00 | https://arxiv.org/abs/2412.05271v1 | [
"https://github.com/opengvlab/internvl"
] | In the paper 'Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling', what GPT-4 score score did the InternVL2.5-26B model get on the MM-Vet dataset
| 65.0 |
OASIS | NeuroPath | NeuroPath: A Neural Pathway Transformer for Joining the Dots of Human Connectomes | 2024-09-26T00:00:00 | https://arxiv.org/abs/2409.17510v3 | [
"https://github.com/Chrisa142857/neuro_detour"
] | In the paper 'NeuroPath: A Neural Pathway Transformer for Joining the Dots of Human Connectomes', what Accuracy score did the NeuroPath model get on the OASIS dataset
| 90.01 |
WNUT 2017 | GoLLIE | GoLLIE: Annotation Guidelines improve Zero-Shot Information-Extraction | 2023-10-05T00:00:00 | https://arxiv.org/abs/2310.03668v5 | [
"https://github.com/hitz-zentroa/gollie"
] | In the paper 'GoLLIE: Annotation Guidelines improve Zero-Shot Information-Extraction', what F1 score did the GoLLIE model get on the WNUT 2017 dataset
| 54.3 |
AgeDB | ResNet-50-DLDL-v2 | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10T00:00:00 | https://arxiv.org/abs/2307.04570v3 | [
"https://github.com/paplhjak/facial-age-estimation-benchmark"
] | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-DLDL-v2 model get on the AgeDB dataset
| 5.80 |
MSVD-QA | VIOLET+ | Open-vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09363v1 | [
"https://github.com/mlvlab/ovqa"
] | In the paper 'Open-vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models', what Accuracy score did the VIOLET+ model get on the MSVD-QA dataset
| 0.495 |
UCF101 | OTI(ViT-L/14) | Orthogonal Temporal Interpolation for Zero-Shot Video Recognition | 2023-08-14T00:00:00 | https://arxiv.org/abs/2308.06897v1 | [
"https://github.com/sweetorangezhuyan/mm2023_oti"
] | In the paper 'Orthogonal Temporal Interpolation for Zero-Shot Video Recognition', what Top-1 Accuracy score did the OTI(ViT-L/14) model get on the UCF101 dataset
| 92.8 |
CIRR | CoVR-BLIP | CoVR-2: Automatic Data Construction for Composed Video Retrieval | 2023-08-28T00:00:00 | https://arxiv.org/abs/2308.14746v4 | [
"https://github.com/lucas-ventura/CoVR"
] | In the paper 'CoVR-2: Automatic Data Construction for Composed Video Retrieval', what (Recall@5+Recall_subset@1)/2 score did the CoVR-BLIP model get on the CIRR dataset
| 76.81 |