| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
| COCO-MLT | LMPT(ResNet-50) | LMPT: Prompt Tuning with Class-Specific Embedding Loss for Long-tailed Multi-Label Visual Recognition | 2023-05-08 | https://arxiv.org/abs/2305.04536v2 | ["https://github.com/richard-peng-xia/LMPT"] | In the paper 'LMPT: Prompt Tuning with Class-Specific Embedding Loss for Long-tailed Multi-Label Visual Recognition', what Average mAP score did the LMPT(ResNet-50) model get on the COCO-MLT dataset | 58.97 |
| PIQA | phi-1.5-web (1.3B) | Textbooks Are All You Need II: phi-1.5 technical report | 2023-09-11 | https://arxiv.org/abs/2309.05463v1 | ["https://github.com/knowlab/bi-weekly-paper-presentation"] | In the paper 'Textbooks Are All You Need II: phi-1.5 technical report', what Accuracy score did the phi-1.5-web (1.3B) model get on the PIQA dataset | 77 |
| PubLayNet val | GLAM | A Graphical Approach to Document Layout Analysis | 2023-08-03 | https://arxiv.org/abs/2308.02051v1 | ["https://github.com/ivanstepanovftw/glam"] | In the paper 'A Graphical Approach to Document Layout Analysis', what Text score did the GLAM model get on the PubLayNet val dataset | 0.878 |
| MATH | WizardMath-7B-V1.0 | WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct | 2023-08-18 | https://arxiv.org/abs/2308.09583v1 | ["https://github.com/nlpxucan/wizardlm"] | In the paper 'WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct', what Accuracy score did the WizardMath-7B-V1.0 model get on the MATH dataset | 10.7 |
| THUMOS' 14 | MSQNet | Actor-agnostic Multi-label Action Recognition with Multi-modal Query | 2023-07-20 | https://arxiv.org/abs/2307.10763v3 | ["https://github.com/mondalanindya/msqnet"] | In the paper 'Actor-agnostic Multi-label Action Recognition with Multi-modal Query', what Accuracy score did the MSQNet model get on the THUMOS' 14 dataset | 75.33 |
| Charades-STA | UnLoc-L | UnLoc: A Unified Framework for Video Localization Tasks | 2023-08-21 | https://arxiv.org/abs/2308.11062v1 | ["https://github.com/google-research/scenic"] | In the paper 'UnLoc: A Unified Framework for Video Localization Tasks', what R@1 IoU=0.5 score did the UnLoc-L model get on the Charades-STA dataset | 60.8 |
| COCO test-dev | GLEE-Plus | General Object Foundation Model for Images and Videos at Scale | 2023-12-14 | https://arxiv.org/abs/2312.09158v1 | ["https://github.com/FoundationVision/GLEE"] | In the paper 'General Object Foundation Model for Images and Videos at Scale', what mask AP score did the GLEE-Plus model get on the COCO test-dev dataset | 53.3 |
| ScanObjectNN | PointMLP∗ + JM3D | Beyond First Impressions: Integrating Joint Multi-modal Cues for Comprehensive 3D Representation | 2023-08-06 | https://arxiv.org/abs/2308.02982v2 | ["https://github.com/mr-neko/jm3d"] | In the paper 'Beyond First Impressions: Integrating Joint Multi-modal Cues for Comprehensive 3D Representation', what Overall Accuracy score did the PointMLP∗ + JM3D model get on the ScanObjectNN dataset | 89.5 |
| VLCS | GMDG (RegNetY-16GF, SWAD) | Rethinking Multi-domain Generalization with A General Learning Objective | 2024-02-29 | https://arxiv.org/abs/2402.18853v1 | ["https://github.com/zhaorui-tan/GMDG_cvpr2024"] | In the paper 'Rethinking Multi-domain Generalization with A General Learning Objective', what Average Accuracy score did the GMDG (RegNetY-16GF, SWAD) model get on the VLCS dataset | 82.2 |
| ParaMAWPS | GPT-3.5 Turbo (175B) | Math Word Problem Solving by Generating Linguistic Variants of Problem Statements | 2023-06-24 | https://arxiv.org/abs/2306.13899v1 | ["https://github.com/starscream-11813/variational-mathematical-reasoning"] | In the paper 'Math Word Problem Solving by Generating Linguistic Variants of Problem Statements', what Accuracy (%) score did the GPT-3.5 Turbo (175B) model get on the ParaMAWPS dataset | 73.0 |
| DVD | BSSTNet | Blur-aware Spatio-temporal Sparse Transformer for Video Deblurring | 2024-06-11 | https://arxiv.org/abs/2406.07551v1 | ["https://github.com/huicongzhang/bsstnet"] | In the paper 'Blur-aware Spatio-temporal Sparse Transformer for Video Deblurring', what PSNR score did the BSSTNet model get on the DVD dataset | 34.95 |
| SVTP | CPPD | Context Perception Parallel Decoder for Scene Text Recognition | 2023-07-23 | https://arxiv.org/abs/2307.12270v2 | ["https://github.com/PaddlePaddle/PaddleOCR"] | In the paper 'Context Perception Parallel Decoder for Scene Text Recognition', what Accuracy score did the CPPD model get on the SVTP dataset | 96.7 |
| Winoground | InstructBLIP-CCoT | Compositional Chain-of-Thought Prompting for Large Multimodal Models | 2023-11-27 | https://arxiv.org/abs/2311.17076v3 | ["https://github.com/chancharikmitra/ccot"] | In the paper 'Compositional Chain-of-Thought Prompting for Large Multimodal Models', what Text Score score did the InstructBLIP-CCoT model get on the Winoground dataset | 21.0 |
| roman-empire | GraphHyperConv | HyperAggregation: Aggregating over Graph Edges with Hypernetworks | 2024-07-16 | https://arxiv.org/abs/2407.11596v1 | ["https://github.com/foisunt/hyperaggregation"] | In the paper 'HyperAggregation: Aggregating over Graph Edges with Hypernetworks', what Accuracy (% ) score did the GraphHyperConv model get on the roman-empire dataset | 92.27±0.57 |
| PASCAL-5i (5-Shot) | QCLNet (VGG-16) | Quaternion-valued Correlation Learning for Few-Shot Semantic Segmentation | 2023-05-12 | https://arxiv.org/abs/2305.07283v3 | ["https://github.com/zwzheng98/qclnet"] | In the paper 'Quaternion-valued Correlation Learning for Few-Shot Semantic Segmentation', what Mean IoU score did the QCLNet (VGG-16) model get on the PASCAL-5i (5-Shot) dataset | 64.2 |
| YouTube-VIS 2021 | GRAtt-VIS (Swin-L) | GRAtt-VIS: Gated Residual Attention for Auto Rectifying Video Instance Segmentation | 2023-05-26 | https://arxiv.org/abs/2305.17096v1 | ["https://github.com/tanveer81/grattvis"] | In the paper 'GRAtt-VIS: Gated Residual Attention for Auto Rectifying Video Instance Segmentation', what mask AP score did the GRAtt-VIS (Swin-L) model get on the YouTube-VIS 2021 dataset | 60.3 |
| Traffic (96) | PRformer | PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting | 2024-08-20 | https://arxiv.org/abs/2408.10483v1 | ["https://github.com/usualheart/prformer"] | In the paper 'PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting', what MSE score did the PRformer model get on the Traffic (96) dataset | 0.353 |
| RealBlur-J (trained on GoPro) | ALGNet | Learning Enriched Features via Selective State Spaces Model for Efficient Image Deblurring | 2024-03-29 | https://arxiv.org/abs/2403.20106v2 | ["https://github.com/Tombs98/ALGNet"] | In the paper 'Learning Enriched Features via Selective State Spaces Model for Efficient Image Deblurring', what PSNR (sRGB) score did the ALGNet model get on the RealBlur-J (trained on GoPro) dataset | 29.12 |
| Clothing1M | LRA-diffusion (CC) | Label-Retrieval-Augmented Diffusion Models for Learning from Noisy Labels | 2023-05-31 | https://arxiv.org/abs/2305.19518v2 | ["https://github.com/puar-playground/lra-diffusion"] | In the paper 'Label-Retrieval-Augmented Diffusion Models for Learning from Noisy Labels', what Accuracy score did the LRA-diffusion (CC) model get on the Clothing1M dataset | 75.7% |
| ogbl-citation2 | GraphGPT(d1n30) | GraphGPT: Graph Learning with Generative Pre-trained Transformers | 2023-12-31 | https://arxiv.org/abs/2401.00529v1 | ["https://github.com/alibaba/graph-gpt"] | In the paper 'GraphGPT: Graph Learning with Generative Pre-trained Transformers', what Test MRR score did the GraphGPT(d1n30) model get on the ogbl-citation2 dataset | 0.9305 ± 0.0020 |
| St Lucia | CLIP | AnyLoc: Towards Universal Visual Place Recognition | 2023-08-01 | https://arxiv.org/abs/2308.00688v2 | ["https://github.com/AnyLoc/AnyLoc"] | In the paper 'AnyLoc: Towards Universal Visual Place Recognition', what Recall@1 score did the CLIP model get on the St Lucia dataset | 62.7 |
| ImageNet | CaiT-S24 | Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers | 2023-08-18 | https://arxiv.org/abs/2308.09372v3 | ["https://github.com/tobna/whattransformertofavor"] | In the paper 'Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers', what Top 1 Accuracy score did the CaiT-S24 model get on the ImageNet dataset | 84.91% |
| ASTE | ChatGPT (gpt-3.5-turbo, zero-shot) | MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction | 2023-05-22 | https://arxiv.org/abs/2305.12627v1 | ["https://github.com/ZubinGou/multi-view-prompting"] | In the paper 'MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction', what F1 (L14) score did the ChatGPT (gpt-3.5-turbo, zero-shot) model get on the ASTE dataset | 36.05 |
| DiDeMo | DMAE (ViT-B/32) | Dual-Modal Attention-Enhanced Text-Video Retrieval with Triplet Partial Margin Contrastive Learning | 2023-09-20 | https://arxiv.org/abs/2309.11082v3 | ["https://github.com/alipay/Ant-Multi-Modal-Framework"] | In the paper 'Dual-Modal Attention-Enhanced Text-Video Retrieval with Triplet Partial Margin Contrastive Learning', what text-to-video R@1 score did the DMAE (ViT-B/32) model get on the DiDeMo dataset | 52.7 |
| LAMBADA | PaLM 2-M (one-shot) | PaLM 2 Technical Report | 2023-05-17 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-M (one-shot) model get on the LAMBADA dataset | 83.7 |
| MedConceptsQA | epfl-llm/meditron-7b | MEDITRON-70B: Scaling Medical Pretraining for Large Language Models | 2023-11-27 | https://arxiv.org/abs/2311.16079v1 | ["https://github.com/epfllm/meditron"] | In the paper 'MEDITRON-70B: Scaling Medical Pretraining for Large Language Models', what Accuracy score did the epfl-llm/meditron-7b model get on the MedConceptsQA dataset | 25.751 |
| CACD | ResNet-50-OR-CNN | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10 | https://arxiv.org/abs/2307.04570v3 | ["https://github.com/paplhjak/facial-age-estimation-benchmark"] | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-OR-CNN model get on the CACD dataset | 4.01 |
| HERA RFI Detection | Nearest Latent Neighbours | RFI Detection with Spiking Neural Networks | 2023-11-24 | https://arxiv.org/abs/2311.14303v2 | ["https://github.com/pritchardn/snn-nln"] | In the paper 'RFI Detection with Spiking Neural Networks', what AUROC score did the Nearest Latent Neighbours model get on the HERA RFI Detection dataset | 0.983 |
| TACO-Code | CodeLlama-7B-Python | TACO: Topics in Algorithmic COde generation dataset | 2023-12-22 | https://arxiv.org/abs/2312.14852v3 | ["https://github.com/flagopen/taco"] | In the paper 'TACO: Topics in Algorithmic COde generation dataset', what easy pass@1 score did the CodeLlama-7B-Python model get on the TACO-Code dataset | 9.32% |
| ImageNet 512x512 | TiTok-L-64 | An Image is Worth 32 Tokens for Reconstruction and Generation | 2024-06-11 | https://arxiv.org/abs/2406.07550v1 | ["https://github.com/bytedance/1d-tokenizer"] | In the paper 'An Image is Worth 32 Tokens for Reconstruction and Generation', what FID score did the TiTok-L-64 model get on the ImageNet 512x512 dataset | 2.49 |
| ISIC2018 | RFS+MLP | Improving Cross-domain Few-shot Classification with Multilayer Perceptron | 2023-12-15 | https://arxiv.org/abs/2312.09589v1 | ["https://github.com/BaiShuanghao/CDFSC-MLP"] | In the paper 'Improving Cross-domain Few-shot Classification with Multilayer Perceptron', what 5 shot score did the RFS+MLP model get on the ISIC2018 dataset | 46.33 |
| AMZ Photo | HH-GCN | Half-Hop: A graph upsampling approach for slowing down message passing | 2023-08-17 | https://arxiv.org/abs/2308.09198v1 | ["https://github.com/nerdslab/halfhop"] | In the paper 'Half-Hop: A graph upsampling approach for slowing down message passing', what Accuracy score did the HH-GCN model get on the AMZ Photo dataset | 94.52% |
| AGQA 2.0 balanced | GF (uns) - S3D | Glance and Focus: Memory Prompting for Multi-Event Video Question Answering | 2024-01-03 | https://arxiv.org/abs/2401.01529v1 | ["https://github.com/byz0e/glance-focus"] | In the paper 'Glance and Focus: Memory Prompting for Multi-Event Video Question Answering', what Average Accuracy score did the GF (uns) - S3D model get on the AGQA 2.0 balanced dataset | 53.33 |
| Cora: fixed 20 node per class | ScaleNet | Scale Invariance of Graph Neural Networks | 2024-11-28 | https://arxiv.org/abs/2411.19392v2 | ["https://github.com/qin87/scalenet"] | In the paper 'Scale Invariance of Graph Neural Networks', what Accuracy score did the ScaleNet model get on the Cora: fixed 20 node per class dataset | 82.3±1.1 |
| CIFAR-10 | ZLaP* | Label Propagation for Zero-shot Classification with Vision-Language Models | 2024-04-05 | https://arxiv.org/abs/2404.04072v1 | ["https://github.com/vladan-stojnic/zlap"] | In the paper 'Label Propagation for Zero-shot Classification with Vision-Language Models', what Accuracy score did the ZLaP* model get on the CIFAR-10 dataset | 93.6 |
| COCO-20i (2-way 1-shot) | Label Anything (ViT-B/16-MAE) | Label Anything: Multi-Class Few-Shot Semantic Segmentation with Visual Prompts | 2024-07-02 | https://arxiv.org/abs/2407.02075v1 | ["https://github.com/pasqualedem/LabelAnything"] | In the paper 'Label Anything: Multi-Class Few-Shot Semantic Segmentation with Visual Prompts', what mIoU score did the Label Anything (ViT-B/16-MAE) model get on the COCO-20i (2-way 1-shot) dataset | 31.9 |
| AMZ Comp | HH-GCN | Half-Hop: A graph upsampling approach for slowing down message passing | 2023-08-17 | https://arxiv.org/abs/2308.09198v1 | ["https://github.com/nerdslab/halfhop"] | In the paper 'Half-Hop: A graph upsampling approach for slowing down message passing', what Accuracy score did the HH-GCN model get on the AMZ Comp dataset | 90.92% |
| MM-Vet | Janus | Janus: Decoupling Visual Encoding for Unified Multimodal Understanding and Generation | 2024-10-17 | https://arxiv.org/abs/2410.13848v1 | ["https://github.com/deepseek-ai/janus"] | In the paper 'Janus: Decoupling Visual Encoding for Unified Multimodal Understanding and Generation', what GPT-4 score score did the Janus model get on the MM-Vet dataset | 34.3 |
| SFCHD | VFNet | Large, Complex, and Realistic Safety Clothing and Helmet Detection: Dataset and Method | 2023-06-03 | https://arxiv.org/abs/2306.02098v2 | ["https://github.com/lijfrank-open/SFCHD-SCALE"] | In the paper 'Large, Complex, and Realistic Safety Clothing and Helmet Detection: Dataset and Method', what mAP@0.50 score did the VFNet model get on the SFCHD dataset | 76.4 |
| Fashion IQ | SPN4CIR | Improving Composed Image Retrieval via Contrastive Learning with Scaling Positives and Negatives | 2024-04-17 | https://arxiv.org/abs/2404.11317v2 | ["https://github.com/BUAADreamer/SPN4CIR"] | In the paper 'Improving Composed Image Retrieval via Contrastive Learning with Scaling Positives and Negatives', what (Recall@10+Recall@50)/2 score did the SPN4CIR model get on the Fashion IQ dataset | 66.41 |
| CUTE80 | CLIP4STR-B | CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model | 2023-05-23 | https://arxiv.org/abs/2305.14014v3 | ["https://github.com/VamosC/CLIP4STR"] | In the paper 'CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model', what Accuracy score did the CLIP4STR-B model get on the CUTE80 dataset | 99.3 |
| AIDA/testc | SpEL-large (2023) | SpEL: Structured Prediction for Entity Linking | 2023-10-23 | https://arxiv.org/abs/2310.14684v1 | ["https://github.com/shavarani/spel"] | In the paper 'SpEL: Structured Prediction for Entity Linking', what Micro-F1 strong score did the SpEL-large (2023) model get on the AIDA/testc dataset | 77.5 |
| WHAMR! | TD-Confomer (S) | On Time Domain Conformer Models for Monaural Speech Separation in Noisy Reverberant Acoustic Environments | 2023-10-09 | https://arxiv.org/abs/2310.06125v1 | ["https://github.com/jwr1995/pubsep"] | In the paper 'On Time Domain Conformer Models for Monaural Speech Separation in Noisy Reverberant Acoustic Environments', what SI-SDRi score did the TD-Confomer (S) model get on the WHAMR! dataset | 10.5 |
| BSD100 - 2x upscaling | DRCT | DRCT: Saving Image Super-resolution away from Information Bottleneck | 2024-03-31 | https://arxiv.org/abs/2404.00722v5 | ["https://github.com/ming053l/drct"] | In the paper 'DRCT: Saving Image Super-resolution away from Information Bottleneck', what PSNR score did the DRCT model get on the BSD100 - 2x upscaling dataset | 32.75 |
| CIFAR-10 | MuLAN | Diffusion Models With Learned Adaptive Noise | 2023-12-20 | https://arxiv.org/abs/2312.13236v3 | ["https://github.com/s-sahoo/mulan"] | In the paper 'Diffusion Models With Learned Adaptive Noise', what bits/dimension score did the MuLAN model get on the CIFAR-10 dataset | 2.55 |
| Something-Something V2 | TAdaConvNeXtV2-B | Temporally-Adaptive Models for Efficient Video Understanding | 2023-08-10 | https://arxiv.org/abs/2308.05787v1 | ["https://github.com/alibaba-mmai-research/TAdaConv"] | In the paper 'Temporally-Adaptive Models for Efficient Video Understanding', what Top-1 Accuracy score did the TAdaConvNeXtV2-B model get on the Something-Something V2 dataset | 71.1 |
| iNaturalist | AIMv2-H | Multimodal Autoregressive Pre-training of Large Vision Encoders | 2024-11-21 | https://arxiv.org/abs/2411.14402v1 | ["https://github.com/apple/ml-aim"] | In the paper 'Multimodal Autoregressive Pre-training of Large Vision Encoders', what Top 1 Accuracy score did the AIMv2-H model get on the iNaturalist dataset | 77.9 |
| MATH | ToRA 7B (w/ code) | ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving | 2023-09-29 | https://arxiv.org/abs/2309.17452v4 | ["https://github.com/microsoft/tora"] | In the paper 'ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving', what Accuracy score did the ToRA 7B (w/ code) model get on the MATH dataset | 40.1 |
| SICE-Mix | CIDNet | You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement | 2024-02-08 | https://arxiv.org/abs/2402.05809v3 | ["https://github.com/fediory/hvi-cidnet"] | In the paper 'You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement', what Average PSNR score did the CIDNet model get on the SICE-Mix dataset | 13.425 |
| MCubeS | StitchFusion (RGB-A) | StitchFusion: Weaving Any Visual Modalities to Enhance Multimodal Semantic Segmentation | 2024-08-02 | https://arxiv.org/abs/2408.01343v1 | ["https://github.com/libingyu01/stitchfusion-stitchfusion-weaving-any-visual-modalities-to-enhance-multimodal-semantic-segmentation"] | In the paper 'StitchFusion: Weaving Any Visual Modalities to Enhance Multimodal Semantic Segmentation', what mIoU score did the StitchFusion (RGB-A) model get on the MCubeS dataset | 52.68 |
| MM-Vet | StableLLaVA | StableLLaVA: Enhanced Visual Instruction Tuning with Synthesized Image-Dialogue Data | 2023-08-20 | https://arxiv.org/abs/2308.10253v2 | ["https://github.com/icoz69/stablellava"] | In the paper 'StableLLaVA: Enhanced Visual Instruction Tuning with Synthesized Image-Dialogue Data', what GPT-4 score score did the StableLLaVA model get on the MM-Vet dataset | 36.1 |
| RepCount | ESCounts | Every Shot Counts: Using Exemplars for Repetition Counting in Videos | 2024-03-26 | https://arxiv.org/abs/2403.18074v2 | ["https://github.com/sinhasaptarshi/EveryShotCounts"] | In the paper 'Every Shot Counts: Using Exemplars for Repetition Counting in Videos', what MAE score did the ESCounts model get on the RepCount dataset | 0.213 |
| ImageNet | GFNet-S | Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers | 2023-08-18 | https://arxiv.org/abs/2308.09372v3 | ["https://github.com/tobna/whattransformertofavor"] | In the paper 'Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers', what Top 1 Accuracy score did the GFNet-S model get on the ImageNet dataset | 81.33% |
| CARLA | TransFuser++ WP (TF++WP) | Hidden Biases of End-to-End Driving Models | 2023-06-13 | https://arxiv.org/abs/2306.07957v2 | ["https://github.com/autonomousvision/carla_garage"] | In the paper 'Hidden Biases of End-to-End Driving Models', what Driving Score score did the TransFuser++ WP (TF++WP) model get on the CARLA dataset | 73 |
| WiGesture | CSI-BERT | Finding the Missing Data: A BERT-inspired Approach Against Package Loss in Wireless Sensing | 2024-03-19 | https://arxiv.org/abs/2403.12400v1 | ["https://github.com/rs2002/csi-bert"] | In the paper 'Finding the Missing Data: A BERT-inspired Approach Against Package Loss in Wireless Sensing', what Accuracy (% ) score did the CSI-BERT model get on the WiGesture dataset | 93.94 |
| LRS2 | TDFNet-large | TDFNet: An Efficient Audio-Visual Speech Separation Model with Top-down Fusion | 2024-01-25 | https://arxiv.org/abs/2401.14185v1 | ["https://github.com/spkgyk/TDFNet"] | In the paper 'TDFNet: An Efficient Audio-Visual Speech Separation Model with Top-down Fusion', what SI-SNRi score did the TDFNet-large model get on the LRS2 dataset | 15.8 |
| Wisconsin | HiGNN | Learn from Heterophily: Heterophilous Information-enhanced Graph Neural Network | 2024-03-26 | https://arxiv.org/abs/2403.17351v2 | ["https://github.com/zylMozart/HiGNN"] | In the paper 'Learn from Heterophily: Heterophilous Information-enhanced Graph Neural Network', what Accuracy score did the HiGNN model get on the Wisconsin dataset | 85.88 ± 3.18 |
| REBUS | Gemini Pro | REBUS: A Robust Evaluation Benchmark of Understanding Symbols | 2024-01-11 | https://arxiv.org/abs/2401.05604v2 | ["https://github.com/cvndsh/rebus"] | In the paper 'REBUS: A Robust Evaluation Benchmark of Understanding Symbols', what Accuracy score did the Gemini Pro model get on the REBUS dataset | 13.2 |
| MATH | WizardMath-70B-V1.0 | WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct | 2023-08-18 | https://arxiv.org/abs/2308.09583v1 | ["https://github.com/nlpxucan/wizardlm"] | In the paper 'WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct', what Accuracy score did the WizardMath-70B-V1.0 model get on the MATH dataset | 22.7 |
| Filosax | Beat This! | Beat this! Accurate beat tracking without DBN postprocessing | 2024-07-31 | https://arxiv.org/abs/2407.21658v1 | ["https://github.com/CPJKU/beat_this"] | In the paper 'Beat this! Accurate beat tracking without DBN postprocessing', what F1 score did the Beat This! model get on the Filosax dataset | 98.5 |
| PeMS04 | STD-MAE | Spatial-Temporal-Decoupled Masked Pre-training for Spatiotemporal Forecasting | 2023-12-01 | https://arxiv.org/abs/2312.00516v3 | ["https://github.com/jimmy-7664/std-mae"] | In the paper 'Spatial-Temporal-Decoupled Masked Pre-training for Spatiotemporal Forecasting', what 12 Steps MAE score did the STD-MAE model get on the PeMS04 dataset | 17.80 |
| CamVid | DSNet | DSNet: A Novel Way to Use Atrous Convolutions in Semantic Segmentation | 2024-06-06 | https://arxiv.org/abs/2406.03702v1 | ["https://github.com/takaniwa/dsnet"] | In the paper 'DSNet: A Novel Way to Use Atrous Convolutions in Semantic Segmentation', what Mean IoU score did the DSNet model get on the CamVid dataset | 83.32 |
| ADE20K | DAT-S++ | DAT++: Spatially Dynamic Vision Transformer with Deformable Attention | 2023-09-04 | https://arxiv.org/abs/2309.01430v1 | ["https://github.com/leaplabthu/dat"] | In the paper 'DAT++: Spatially Dynamic Vision Transformer with Deformable Attention', what Validation mIoU score did the DAT-S++ model get on the ADE20K dataset | 51.2 |
| DukeMTMC-reID | PLIP-RN50-MGN | PLIP: Language-Image Pre-training for Person Representation Learning | 2023-05-15 | https://arxiv.org/abs/2305.08386v2 | ["https://github.com/zplusdragon/plip"] | In the paper 'PLIP: Language-Image Pre-training for Person Representation Learning', what mAP score did the PLIP-RN50-MGN model get on the DukeMTMC-reID dataset | 81.7 |
| Atari 2600 Kangaroo | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07 | https://arxiv.org/abs/2305.04180v3 | ["https://github.com/xinjinghao/color"] | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score score did the ASL DDQN model get on the Atari 2600 Kangaroo dataset | 13027 |
| S3DIS Area5 | DeLA | Decoupled Local Aggregation for Point Cloud Learning | 2023-08-31 | https://arxiv.org/abs/2308.16532v1 | ["https://github.com/matrix-asc/dela"] | In the paper 'Decoupled Local Aggregation for Point Cloud Learning', what mIoU score did the DeLA model get on the S3DIS Area5 dataset | 74.1 |
| WI-LOCNESS | RedPenNet | RedPenNet for Grammatical Error Correction: Outputs to Tokens, Attentions to Spans | 2023-09-19 | https://arxiv.org/abs/2309.10898v1 | ["https://github.com/webspellchecker/unlp-2023-shared-task"] | In the paper 'RedPenNet for Grammatical Error Correction: Outputs to Tokens, Attentions to Spans', what F0.5 score did the RedPenNet model get on the WI-LOCNESS dataset | 77.60 |
| HELOC | Binary Diffusion | Tabular Data Generation using Binary Diffusion | 2024-09-20 | https://arxiv.org/abs/2409.13882v2 | ["https://github.com/vkinakh/binary-diffusion-tabular"] | In the paper 'Tabular Data Generation using Binary Diffusion', what LR Accuracy score did the Binary Diffusion model get on the HELOC dataset | 71.76 |
| UruDendro | CS-TRD | CS-TRD: a Cross Sections Tree Ring Detection method | 2023-05-18 | https://arxiv.org/abs/2305.10809v2 | ["https://github.com/hmarichal93/cstrd_ipol"] | In the paper 'CS-TRD: a Cross Sections Tree Ring Detection method', what FScore score did the CS-TRD model get on the UruDendro dataset | 0.91 |
| UCR Anomaly Archive | MDI | Explainable Time Series Anomaly Detection using Masked Latent Generative Modeling | 2023-11-21 | https://arxiv.org/abs/2311.12550v5 | ["https://github.com/ml4its/timevqvae-anomalydetection"] | In the paper 'Explainable Time Series Anomaly Detection using Masked Latent Generative Modeling', what accuracy score did the MDI model get on the UCR Anomaly Archive dataset | 0.47 |
| ETTm2 (336) Multivariate | RLinear | Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping | 2023-05-18 | https://arxiv.org/abs/2305.10721v1 | ["https://github.com/plumprc/rtsf"] | In the paper 'Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping', what MSE score did the RLinear model get on the ETTm2 (336) Multivariate dataset | 0.273 |
| SNU-FILM (medium) | VFIMamba | VFIMamba: Video Frame Interpolation with State Space Models | 2024-07-02 | https://arxiv.org/abs/2407.02315v2 | ["https://github.com/mcg-nju/vfimamba"] | In the paper 'VFIMamba: Video Frame Interpolation with State Space Models', what PSNR score did the VFIMamba model get on the SNU-FILM (medium) dataset | 36.40 |
| ECSSD | M3Net-R | M$^3$Net: Multilevel, Mixed and Multistage Attention Network for Salient Object Detection | 2023-09-15 | https://arxiv.org/abs/2309.08365v1 | ["https://github.com/I2-Multimedia-Lab/M3Net"] | In the paper 'M$^3$Net: Multilevel, Mixed and Multistage Attention Network for Salient Object Detection', what MAE score did the M3Net-R model get on the ECSSD dataset | 0.029 |
| COCO test-dev | GLEE-Plus | General Object Foundation Model for Images and Videos at Scale | 2023-12-14 | https://arxiv.org/abs/2312.09158v1 | ["https://github.com/FoundationVision/GLEE"] | In the paper 'General Object Foundation Model for Images and Videos at Scale', what box mAP score did the GLEE-Plus model get on the COCO test-dev dataset | 60.6 |
| VisA | VCP-CLIP | VCP-CLIP: A visual context prompting model for zero-shot anomaly segmentation | 2024-07-17 | https://arxiv.org/abs/2407.12276v1 | ["https://github.com/xiaozhen228/vcp-clip"] | In the paper 'VCP-CLIP: A visual context prompting model for zero-shot anomaly segmentation', what Segmentation AUROC score did the VCP-CLIP model get on the VisA dataset | 95.7 |
| ETTh1 (336) Multivariate | DeformTime | DeformTime: Capturing Variable Dependencies with Deformable Attention for Time Series Forecasting | 2024-06-11 | https://arxiv.org/abs/2406.07438v2 | ["https://github.com/ClaudiaShu/DeformTime"] | In the paper 'DeformTime: Capturing Variable Dependencies with Deformable Attention for Time Series Forecasting', what MAE score did the DeformTime model get on the ETTh1 (336) Multivariate dataset | 0.2158 |
| Winoground | LLaVA-7B (BERTScore) | An Examination of the Compositionality of Large Generative Vision-Language Models | 2023-08-21 | https://arxiv.org/abs/2308.10509v2 | ["https://github.com/teleema/sade"] | In the paper 'An Examination of the Compositionality of Large Generative Vision-Language Models', what Text Score score did the LLaVA-7B (BERTScore) model get on the Winoground dataset | 13.50 |
| KITTI Test (Online Methods) | C-TWiX | Learning Data Association for Multi-Object Tracking using Only Coordinates | 2024-03-12 | https://arxiv.org/abs/2403.08018v1 | ["https://github.com/Guepardow/TWiX"] | In the paper 'Learning Data Association for Multi-Object Tracking using Only Coordinates', what HOTA score did the C-TWiX model get on the KITTI Test (Online Methods) dataset | 77.58 |
| RefCOCOg-val | GLEE-Pro | General Object Foundation Model for Images and Videos at Scale | 2023-12-14 | https://arxiv.org/abs/2312.09158v1 | ["https://github.com/FoundationVision/GLEE"] | In the paper 'General Object Foundation Model for Images and Videos at Scale', what Overall IoU score did the GLEE-Pro model get on the RefCOCOg-val dataset | 72.9 |
| ImageNet | Customized Ensemble | Beyond Sole Strength: Customized Ensembles for Generalized Vision-Language Models | 2023-11-28 | https://arxiv.org/abs/2311.17091v2 | ["https://github.com/zhihelu/ensemble_vlm"] | In the paper 'Beyond Sole Strength: Customized Ensembles for Generalized Vision-Language Models', what Harmonic mean score did the Customized Ensemble model get on the ImageNet dataset | 75.49 |
| Urban100 - 2x upscaling | DRCT-L | DRCT: Saving Image Super-resolution away from Information Bottleneck | 2024-03-31 | https://arxiv.org/abs/2404.00722v5 | ["https://github.com/ming053l/drct"] | In the paper 'DRCT: Saving Image Super-resolution away from Information Bottleneck', what PSNR score did the DRCT-L model get on the Urban100 - 2x upscaling dataset | 35.17 |
| ImageNet-1k vs Places | SCALE (ResNet50) | Scaling for Training Time and Post-hoc Out-of-distribution Detection Enhancement | 2023-09-30 | https://arxiv.org/abs/2310.00227v1 | ["https://github.com/kai422/scale"] | In the paper 'Scaling for Training Time and Post-hoc Out-of-distribution Detection Enhancement', what FPR95 score did the SCALE (ResNet50) model get on the ImageNet-1k vs Places dataset | 34.51 |
| S3DIS Area5 | ConDaFormer | ConDaFormer: Disassembled Transformer with Local Structure Enhancement for 3D Point Cloud Understanding | 2023-12-18 | https://arxiv.org/abs/2312.11112v1 | ["https://github.com/lhduan/condaformer"] | In the paper 'ConDaFormer: Disassembled Transformer with Local Structure Enhancement for 3D Point Cloud Understanding', what mIoU score did the ConDaFormer model get on the S3DIS Area5 dataset | 73.5 |
| YouCook2 | MA-LMM | MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding | 2024-04-08 | https://arxiv.org/abs/2404.05726v2 | ["https://github.com/boheumd/MA-LMM"] | In the paper 'MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding', what METEOR score did the MA-LMM model get on the YouCook2 dataset | 17.6 |
| CIFAR-100-LT (ρ=10) | SURE(ResNet-32) | SURE: SUrvey REcipes for building reliable and robust deep networks | 2024-03-01 | https://arxiv.org/abs/2403.00543v1 | ["https://github.com/YutingLi0606/SURE"] | In the paper 'SURE: SUrvey REcipes for building reliable and robust deep networks', what Error Rate score did the SURE(ResNet-32) model get on the CIFAR-100-LT (ρ=10) dataset | 26.76 |
| NCBI Disease | UniNER-7B | UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition | 2023-08-07 | https://arxiv.org/abs/2308.03279v2 | ["https://github.com/emma1066/retrieval-augmented-it-openner"] | In the paper 'UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition', what F1 score did the UniNER-7B model get on the NCBI Disease dataset | 86.96 |
| VP-Air | AnyLoc-VLAD-DINOv2 | AnyLoc: Towards Universal Visual Place Recognition | 2023-08-01 | https://arxiv.org/abs/2308.00688v2 | ["https://github.com/AnyLoc/AnyLoc"] | In the paper 'AnyLoc: Towards Universal Visual Place Recognition', what Recall@1 score did the AnyLoc-VLAD-DINOv2 model get on the VP-Air dataset | 66.74 |
| Wikipeople | HAHE | HAHE: Hierarchical Attention for Hyper-Relational Knowledge Graphs in Global and Local Level | 2023-05-11 | https://arxiv.org/abs/2305.06588v2 | ["https://github.com/lhrlab/hahe"] | In the paper 'HAHE: Hierarchical Attention for Hyper-Relational Knowledge Graphs in Global and Local Level', what MRR score did the HAHE model get on the Wikipeople dataset | 0.509 |
| AudioCaps | AutoCap | Taming Data and Transformers for Audio Generation | 2024-06-27 | https://arxiv.org/abs/2406.19388v2 | ["https://github.com/snap-research/GenAU"] | In the paper 'Taming Data and Transformers for Audio Generation', what CIDEr score did the AutoCap model get on the AudioCaps dataset | 0.832 |
| InOutDoor | MMPedestron | When Pedestrian Detection Meets Multi-Modal Learning: Generalist Model and Benchmark Dataset | 2024-07-14 | https://arxiv.org/abs/2407.10125v1 | ["https://github.com/BubblyYi/MMPedestron"] | In the paper 'When Pedestrian Detection Meets Multi-Modal Learning: Generalist Model and Benchmark Dataset', what AP score did the MMPedestron model get on the InOutDoor dataset | 65.7 |
| Digits-five | Crafting-Shifts(LeNet) | Crafting Distribution Shifts for Validation and Training in Single Source Domain Generalization | 2024-09-29 | https://arxiv.org/abs/2409.19774v1 | ["https://github.com/nikosefth/crafting-shifts"] | In the paper 'Crafting Distribution Shifts for Validation and Training in Single Source Domain Generalization', what Accuracy score did the Crafting-Shifts(LeNet) model get on the Digits-five dataset | 82.61 |
| SQA3D | Situation3D | Situational Awareness Matters in 3D Vision Language Reasoning | 2024-06-11 | https://arxiv.org/abs/2406.07544v2 | ["https://github.com/YunzeMan/Situation3D"] | In the paper 'Situational Awareness Matters in 3D Vision Language Reasoning', what AnswerExactMatch (Question Answering) score did the Situation3D model get on the SQA3D dataset | 52.6 |
| COCO test-dev | MaskConver (ResNet50, single-scale) | MaskConver: Revisiting Pure Convolution Model for Panoptic Segmentation | 2023-12-11 | https://arxiv.org/abs/2312.06052v1 | ["https://github.com/tensorflow/models"] | In the paper 'MaskConver: Revisiting Pure Convolution Model for Panoptic Segmentation', what PQ score did the MaskConver (ResNet50, single-scale) model get on the COCO test-dev dataset | 53.6 |
| CK+ | PAtt-Lite | PAtt-Lite: Lightweight Patch and Attention MobileNet for Challenging Facial Expression Recognition | 2023-06-16 | https://arxiv.org/abs/2306.09626v2 | ["https://github.com/jlrex/patt-lite"] | In the paper 'PAtt-Lite: Lightweight Patch and Attention MobileNet for Challenging Facial Expression Recognition', what Accuracy (7 emotion) score did the PAtt-Lite model get on the CK+ dataset | 100.00 |
| SMAC MMM2_7m2M1M_vs_9m3M1M | QPLEX | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04 | https://arxiv.org/abs/2306.02430v1 | ["https://github.com/j3soon/dfac-extended"] | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the QPLEX model get on the SMAC MMM2_7m2M1M_vs_9m3M1M dataset | 90.62 |
| VOC-MLT | LMPT(ViT-B/16) | LMPT: Prompt Tuning with Class-Specific Embedding Loss for Long-tailed Multi-Label Visual Recognition | 2023-05-08 | https://arxiv.org/abs/2305.04536v2 | ["https://github.com/richard-peng-xia/LMPT"] | In the paper 'LMPT: Prompt Tuning with Class-Specific Embedding Loss for Long-tailed Multi-Label Visual Recognition', what Average mAP score did the LMPT(ViT-B/16) model get on the VOC-MLT dataset | 87.88 |
| VideoInstruct | VideoGPT+ | VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding | 2024-06-13 | https://arxiv.org/abs/2406.09418v1 | ["https://github.com/mbzuai-oryx/videogpt-plus"] | In the paper 'VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding', what Correctness of Information score did the VideoGPT+ model get on the VideoInstruct dataset | 3.27 |
| ChEBI-20 | MolReGPT (GPT-3.5-turbo) | Empowering Molecule Discovery for Molecule-Caption Translation with Large Language Models: A ChatGPT Perspective | 2023-06-11 | https://arxiv.org/abs/2306.06615v2 | ["https://github.com/phenixace/molregpt"] | In the paper 'Empowering Molecule Discovery for Molecule-Caption Translation with Large Language Models: A ChatGPT Perspective', what BLEU-2 score did the MolReGPT (GPT-3.5-turbo) model get on the ChEBI-20 dataset | 56.5 |
| Squirrel | TE-GCNN | Transfer Entropy in Graph Convolutional Neural Networks | 2024-06-08 | https://arxiv.org/abs/2406.06632v1 | ["https://github.com/avmoldovan/Heterophily_and_oversmoothing-forked"] | In the paper 'Transfer Entropy in Graph Convolutional Neural Networks', what Accuracy score did the TE-GCNN model get on the Squirrel dataset | 55.04±1.64 |
| 17 Places | AnyLoc-VLAD-DINOv2 | AnyLoc: Towards Universal Visual Place Recognition | 2023-08-01 | https://arxiv.org/abs/2308.00688v2 | ["https://github.com/AnyLoc/AnyLoc"] | In the paper 'AnyLoc: Towards Universal Visual Place Recognition', what Recall@1 score did the AnyLoc-VLAD-DINOv2 model get on the 17 Places dataset | 65.02 |
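
Each row pairs a question about a metric reported in a paper with its ground-truth answer string. As a minimal sketch, rows with this schema could be loaded and inspected via the Hugging Face `datasets` library; note the dataset ID below is a placeholder, not the real identifier:

```python
# Minimal sketch: load and inspect rows with this schema using the
# Hugging Face `datasets` library. The dataset ID is hypothetical.
from datasets import load_dataset

ds = load_dataset("your-org/paper-results-qa", split="train")  # placeholder ID

row = ds[0]
print(row["paper_title"])  # paper the result is reported in
print(row["prompts"])      # question about the reported metric
print(row["answer"])       # the score as a string, e.g. "58.97"
print(row["code_links"])   # one-element list holding the official repo URL
```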