| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
Winoground | LLaVA-7B (GPTScore) | An Examination of the Compositionality of Large Generative Vision-Language Models | 2023-08-21T00:00:00 | https://arxiv.org/abs/2308.10509v2 | [
"https://github.com/teleema/sade"
] | In the paper 'An Examination of the Compositionality of Large Generative Vision-Language Models', what Text Score did the LLaVA-7B (GPTScore) model get on the Winoground dataset
| 25.50 |
CIFAR-10 (40 Labels, ImageNet-100 Unlabeled) | UnMixMatch | Scaling Up Semi-supervised Learning with Unconstrained Unlabelled Data | 2023-06-02T00:00:00 | https://arxiv.org/abs/2306.01222v2 | [
"https://github.com/shuvenduroy/unmixmatch"
] | In the paper 'Scaling Up Semi-supervised Learning with Unconstrained Unlabelled Data', what Accuracy score did the UnMixMatch model get on the CIFAR-10 (40 Labels, ImageNet-100 Unlabeled) dataset
| 52.07 |
Cityscapes val | CSFNet-1 | CSFNet: A Cosine Similarity Fusion Network for Real-Time RGB-X Semantic Segmentation of Driving Scenes | 2024-07-01T00:00:00 | https://arxiv.org/abs/2407.01328v1 | [
"https://github.com/Danial-Qashqai/CSFNet"
] | In the paper 'CSFNet: A Cosine Similarity Fusion Network for Real-Time RGB-X Semantic Segmentation of Driving Scenes', what mIoU score did the CSFNet-1 model get on the Cityscapes val dataset
| 74.73 |
DeLiVER | StitchFusion (RGB-Depth) | StitchFusion: Weaving Any Visual Modalities to Enhance Multimodal Semantic Segmentation | 2024-08-02T00:00:00 | https://arxiv.org/abs/2408.01343v1 | [
"https://github.com/libingyu01/stitchfusion-stitchfusion-weaving-any-visual-modalities-to-enhance-multimodal-semantic-segmentation"
] | In the paper 'StitchFusion: Weaving Any Visual Modalities to Enhance Multimodal Semantic Segmentation', what mIoU score did the StitchFusion (RGB-Depth) model get on the DeLiVER dataset
| 65.75 |
VNHSGE-Civic | ChatGPT | VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models | 2023-05-20T00:00:00 | https://arxiv.org/abs/2305.12199v1 | [
"https://github.com/xdao85/vnhsge"
] | In the paper 'VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models', what Accuracy score did the ChatGPT model get on the VNHSGE-Civic dataset
| 70.5 |
Office-Home | RCL | Empowering Source-Free Domain Adaptation with MLLM-driven Curriculum Learning | 2024-05-28T00:00:00 | https://arxiv.org/abs/2405.18376v1 | [
"https://github.com/Dong-Jie-Chen/RCL"
] | In the paper 'Empowering Source-Free Domain Adaptation with MLLM-driven Curriculum Learning', what Accuracy score did the RCL model get on the Office-Home dataset
| 90.0 |
VLCS | VL2V-SD (CLIP, ViT-B/16) | Leveraging Vision-Language Models for Improving Domain Generalization in Image Classification | 2023-10-12T00:00:00 | https://arxiv.org/abs/2310.08255v2 | [
"https://github.com/val-iisc/VL2V-ADiP"
] | In the paper 'Leveraging Vision-Language Models for Improving Domain Generalization in Image Classification', what Average Accuracy score did the VL2V-SD (CLIP, ViT-B/16) model get on the VLCS dataset
| 83.25 |
. | Gear-NeRF | Gear-NeRF: Free-Viewpoint Rendering and Tracking with Motion-aware Spatio-Temporal Sampling | 2024-06-06T00:00:00 | https://arxiv.org/abs/2406.03723v1 | [
"https://github.com/merlresearch/Gear-NeRF"
] | In the paper 'Gear-NeRF: Free-Viewpoint Rendering and Tracking with Motion-aware Spatio-Temporal Sampling', what PSNR score did the Gear-NeRF model get on the . dataset
| 32.21 |
LibriTTS | EVA-GAN-big | EVA-GAN: Enhanced Various Audio Generation via Scalable Generative Adversarial Networks | 2024-01-31T00:00:00 | https://arxiv.org/abs/2402.00892v1 | [
"https://github.com/fishaudio/vocoder"
] | In the paper 'EVA-GAN: Enhanced Various Audio Generation via Scalable Generative Adversarial Networks', what PESQ score did the EVA-GAN-big model get on the LibriTTS dataset
| 4.3536 |
ETTh1 (336) Multivariate | TimeMachine | TimeMachine: A Time Series is Worth 4 Mambas for Long-term Forecasting | 2024-03-14T00:00:00 | https://arxiv.org/abs/2403.09898v2 | [
"https://github.com/atik-ahamed/timemachine"
] | In the paper 'TimeMachine: A Time Series is Worth 4 Mambas for Long-term Forecasting', what MSE score did the TimeMachine model get on the ETTh1 (336) Multivariate dataset
| 0.429 |
VoxCeleb2 | RTFS-Net-6 | RTFS-Net: Recurrent Time-Frequency Modelling for Efficient Audio-Visual Speech Separation | 2023-09-29T00:00:00 | https://arxiv.org/abs/2309.17189v4 | [
"https://github.com/spkgyk/RTFS-Net"
] | In the paper 'RTFS-Net: Recurrent Time-Frequency Modelling for Efficient Audio-Visual Speech Separation', what SI-SNRi score did the RTFS-Net-6 model get on the VoxCeleb2 dataset
| 11.8 |
Candombe | Beat This! | Beat this! Accurate beat tracking without DBN postprocessing | 2024-07-31T00:00:00 | https://arxiv.org/abs/2407.21658v1 | [
"https://github.com/CPJKU/beat_this"
] | In the paper 'Beat this! Accurate beat tracking without DBN postprocessing', what F1 score did the Beat This! model get on the Candombe dataset
| 99.7 |
ImageNet | ViT-L @224 (DeiT-III + AugSub) | Masking Augmentation for Supervised Learning | 2023-06-20T00:00:00 | https://arxiv.org/abs/2306.11339v2 | [
"https://github.com/naver-ai/augsub"
] | In the paper 'Masking Augmentation for Supervised Learning', what Top 1 Accuracy score did the ViT-L @224 (DeiT-III + AugSub) model get on the ImageNet dataset
| 85.3% |
Set5 - 3x upscaling | DRCT-L | DRCT: Saving Image Super-resolution away from Information Bottleneck | 2024-03-31T00:00:00 | https://arxiv.org/abs/2404.00722v5 | [
"https://github.com/ming053l/drct"
] | In the paper 'DRCT: Saving Image Super-resolution away from Information Bottleneck', what PSNR score did the DRCT-L model get on the Set5 - 3x upscaling dataset
| 35.32 |
ETTh1 (336) Multivariate | MoLE-DLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06786v3 | [
"https://github.com/rogerni/mole"
] | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the ETTh1 (336) Multivariate dataset
| 0.469 |
ARC (Easy) | PaLM 2-S (1-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | [
"https://github.com/eternityyw/tram-benchmark"
] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-S (1-shot) model get on the ARC (Easy) dataset
| 85.6 |
Winoground | LLaVA-1.5 | Compositional Chain-of-Thought Prompting for Large Multimodal Models | 2023-11-27T00:00:00 | https://arxiv.org/abs/2311.17076v3 | [
"https://github.com/chancharikmitra/ccot"
] | In the paper 'Compositional Chain-of-Thought Prompting for Large Multimodal Models', what Text Score did the LLaVA-1.5 model get on the Winoground dataset
| 36.0 |
24/7 Tokyo | HED-N-GAN | Dark Side Augmentation: Generating Diverse Night Examples for Metric Learning | 2023-09-28T00:00:00 | https://arxiv.org/abs/2309.16351v2 | [
"https://github.com/mohwald/gandtr"
] | In the paper 'Dark Side Augmentation: Generating Diverse Night Examples for Metric Learning', what mAP score did the HED-N-GAN model get on the 24/7 Tokyo dataset
| 92.2 |
Math23K | ATHENA (roberta-base) | ATHENA: Mathematical Reasoning with Thought Expansion | 2023-11-02T00:00:00 | https://arxiv.org/abs/2311.01036v1 | [
"https://github.com/the-jb/athena-math"
] | In the paper 'ATHENA: Mathematical Reasoning with Thought Expansion', what Accuracy (training-test) score did the ATHENA (roberta-base) model get on the Math23K dataset
| 84.4 |
pokec | NeuralWalker | Learning Long Range Dependencies on Graphs via Random Walks | 2024-06-05T00:00:00 | https://arxiv.org/abs/2406.03386v2 | [
"https://github.com/borgwardtlab/neuralwalker"
] | In the paper 'Learning Long Range Dependencies on Graphs via Random Walks', what Accuracy score did the NeuralWalker model get on the pokec dataset
| 86.46 ± 0.09 |
CHAMELEON | ZoomNeXt-ResNet-50 | ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection | 2023-10-31T00:00:00 | https://arxiv.org/abs/2310.20208v4 | [
"https://github.com/lartpang/zoomnext"
] | In the paper 'ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection', what S-measure score did the ZoomNeXt-ResNet-50 model get on the CHAMELEON dataset
| 0.908 |
VQA v2 test-dev | LocVLM-L | Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs | 2024-04-11T00:00:00 | https://arxiv.org/abs/2404.07449v1 | [
"https://github.com/kahnchana/locvlm"
] | In the paper 'Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs', what Accuracy score did the LocVLM-L model get on the VQA v2 test-dev dataset
| 56.2 |
MSVD-QA | JustAsk+ | Open-vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09363v1 | [
"https://github.com/mlvlab/ovqa"
] | In the paper 'Open-vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models', what Accuracy score did the JustAsk+ model get on the MSVD-QA dataset
| 0.477 |
ARC (Easy) | phi-1.5-web 1.3B (0-shot) | Textbooks Are All You Need II: phi-1.5 technical report | 2023-09-11T00:00:00 | https://arxiv.org/abs/2309.05463v1 | [
"https://github.com/knowlab/bi-weekly-paper-presentation"
] | In the paper 'Textbooks Are All You Need II: phi-1.5 technical report', what Accuracy score did the phi-1.5-web 1.3B (0-shot) model get on the ARC (Easy) dataset
| 76.1 |
BoolQ | LLaMA3+MoSLoRA | Mixture-of-Subspaces in Low-Rank Adaptation | 2024-06-16T00:00:00 | https://arxiv.org/abs/2406.11909v3 | [
"https://github.com/wutaiqiang/moslora"
] | In the paper 'Mixture-of-Subspaces in Low-Rank Adaptation', what Accuracy score did the LLaMA3+MoSLoRA model get on the BoolQ dataset
| 74.6 |
HumanEval | LDB (GPT4o) | Debug like a Human: A Large Language Model Debugger via Verifying Runtime Execution Step-by-step | 2024-02-25T00:00:00 | https://arxiv.org/abs/2402.16906v6 | [
"https://github.com/floridsleeves/llmdebugger"
] | In the paper 'Debug like a Human: A Large Language Model Debugger via Verifying Runtime Execution Step-by-step', what Pass@1 score did the LDB (GPT4o) model get on the HumanEval dataset
| 98.2 |
ImageNet-Sketch | Discrete Adversarial Distillation (ViT-B, 224) | Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models | 2023-11-02T00:00:00 | https://arxiv.org/abs/2311.01441v2 | [
"https://github.com/lapisrocks/DiscreteAdversarialDistillation"
] | In the paper 'Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models', what Top-1 accuracy score did the Discrete Adversarial Distillation (ViT-B, 224) model get on the ImageNet-Sketch dataset
| 46.1 |
SIMAC | Beat This! | Beat this! Accurate beat tracking without DBN postprocessing | 2024-07-31T00:00:00 | https://arxiv.org/abs/2407.21658v1 | [
"https://github.com/CPJKU/beat_this"
] | In the paper 'Beat this! Accurate beat tracking without DBN postprocessing', what F1 score did the Beat This! model get on the SIMAC dataset
| 77.9 |
MM-Vet | Uni-MoE | Uni-MoE: Scaling Unified Multimodal LLMs with Mixture of Experts | 2024-05-18T00:00:00 | https://arxiv.org/abs/2405.11273v1 | [
"https://github.com/hitsz-tmg/umoe-scaling-unified-multimodal-llms"
] | In the paper 'Uni-MoE: Scaling Unified Multimodal LLMs with Mixture of Experts', what GPT-4 score did the Uni-MoE model get on the MM-Vet dataset
| 32.8 |
H2O (2 Hands and Objects) | ISTA-Net | Interactive Spatiotemporal Token Attention Network for Skeleton-based General Interactive Action Recognition | 2023-07-14T00:00:00 | https://arxiv.org/abs/2307.07469v1 | [
"https://github.com/Necolizer/ISTA-Net"
] | In the paper 'Interactive Spatiotemporal Token Attention Network for Skeleton-based General Interactive Action Recognition', what Actions Top-1 score did the ISTA-Net model get on the H2O (2 Hands and Objects) dataset
| 89.09 |
MixATIS | BiSLU | Joint Multiple Intent Detection and Slot Filling with Supervised Contrastive Learning and Self-Distillation | 2023-08-28T00:00:00 | https://arxiv.org/abs/2308.14654v1 | [
"https://github.com/anhtunguyen98/bislu"
] | In the paper 'Joint Multiple Intent Detection and Slot Filling with Supervised Contrastive Learning and Self-Distillation', what Micro F1 score did the BiSLU model get on the MixATIS dataset
| 89.4 |
ImageNet-C | FAN-L-Hybrid+STL | Fully Attentional Networks with Self-emerging Token Labeling | 2024-01-08T00:00:00 | https://arxiv.org/abs/2401.03844v1 | [
"https://github.com/NVlabs/STL"
] | In the paper 'Fully Attentional Networks with Self-emerging Token Labeling', what mean Corruption Error (mCE) score did the FAN-L-Hybrid+STL model get on the ImageNet-C dataset
| 42.1 |
VideoInstruct | VLM-RLAIF | Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback | 2024-02-06T00:00:00 | https://arxiv.org/abs/2402.03746v3 | [
"https://github.com/yonseivnl/vlm-rlaif"
] | In the paper 'Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback', what Correctness of Information score did the VLM-RLAIF model get on the VideoInstruct dataset
| 3.63 |
VibraVox (forehead accelerometer) | ECAPA2 | Vibravox: A Dataset of French Speech Captured with Body-conduction Audio Sensors | 2024-07-16T00:00:00 | https://arxiv.org/abs/2407.11828v2 | [
"https://github.com/jhauret/vibravox"
] | In the paper 'Vibravox: A Dataset of French Speech Captured with Body-conduction Audio Sensors', what Test EER score did the ECAPA2 model get on the VibraVox (forehead accelerometer) dataset
| 0.009 |
MM-Vet | LLaVA-1.5-7B + TeamLoRA | TeamLoRA: Boosting Low-Rank Adaptation with Expert Collaboration and Competition | 2024-08-19T00:00:00 | https://arxiv.org/abs/2408.09856v1 | [
"https://github.com/lin-tianwei/teamlora"
] | In the paper 'TeamLoRA: Boosting Low-Rank Adaptation with Expert Collaboration and Competition', what GPT-4 score did the LLaVA-1.5-7B + TeamLoRA model get on the MM-Vet dataset
| 31.2 |
CIFAR10 100k | TIGT | Topology-Informed Graph Transformer | 2024-02-03T00:00:00 | https://arxiv.org/abs/2402.02005v1 | [
"https://github.com/leemingo/tigt"
] | In the paper 'Topology-Informed Graph Transformer', what Accuracy (%) score did the TIGT model get on the CIFAR10 100k dataset
| 73.955 |
BanglaBook | SVM (word 2-gram + word 3-gram) | BanglaBook: A Large-scale Bangla Dataset for Sentiment Analysis from Book Reviews | 2023-05-11T00:00:00 | https://arxiv.org/abs/2305.06595v3 | [
"https://github.com/mohsinulkabir14/banglabook"
] | In the paper 'BanglaBook: A Large-scale Bangla Dataset for Sentiment Analysis from Book Reviews', what Weighted Average F1-score score did the SVM (word 2-gram + word 3-gram) model get on the BanglaBook dataset
| 0.9053 |
Potsdam-3 | EAGLE (DINO, ViT-B/8) | EAGLE: Eigen Aggregation Learning for Object-Centric Unsupervised Semantic Segmentation | 2024-03-03T00:00:00 | https://arxiv.org/abs/2403.01482v4 | [
"https://github.com/MICV-yonsei/EAGLE"
] | In the paper 'EAGLE: Eigen Aggregation Learning for Object-Centric Unsupervised Semantic Segmentation', what Accuracy score did the EAGLE (DINO, ViT-B/8) model get on the Potsdam-3 dataset
| 83.3 |
N-Caltech 101 | S-TLLR | S-TLLR: STDP-inspired Temporal Local Learning Rule for Spiking Neural Networks | 2023-06-27T00:00:00 | https://arxiv.org/abs/2306.15220v4 | [
"https://github.com/mapolinario94/s-tllr"
] | In the paper 'S-TLLR: STDP-inspired Temporal Local Learning Rule for Spiking Neural Networks', what Accuracy score did the S-TLLR model get on the N-Caltech 101 dataset
| 66.05 |
iris | Best Model | Machine Learning in the Quantum Age: Quantum vs. Classical Support Vector Machines | 2023-10-17T00:00:00 | https://arxiv.org/abs/2310.10910v1 | [
"https://github.com/detasar/quantum_computing_notebooks/blob/main/SVC_VS_gridSearchQSVC.ipynb"
] | In the paper 'Machine Learning in the Quantum Age: Quantum vs. Classical Support Vector Machines', what Average F1 score did the Best Model model get on the iris dataset
| 1 |
GigaSpeech DEV | Zipformer+pruned transducer (no external language model) | CR-CTC: Consistency regularization on CTC for improved speech recognition | 2024-10-07T00:00:00 | https://arxiv.org/abs/2410.05101v3 | [
"https://github.com/k2-fsa/icefall"
] | In the paper 'CR-CTC: Consistency regularization on CTC for improved speech recognition', what Word Error Rate (WER) score did the Zipformer+pruned transducer (no external language model) model get on the GigaSpeech DEV dataset
| 10.09 |
UCR Anomaly Archive | RCF | Explainable Time Series Anomaly Detection using Masked Latent Generative Modeling | 2023-11-21T00:00:00 | https://arxiv.org/abs/2311.12550v5 | [
"https://github.com/ml4its/timevqvae-anomalydetection"
] | In the paper 'Explainable Time Series Anomaly Detection using Masked Latent Generative Modeling', what accuracy score did the RCF model get on the UCR Anomaly Archive dataset
| 0.387 |
MVTec AD | TransFusion | TransFusion -- A Transparency-Based Diffusion Model for Anomaly Detection | 2023-11-16T00:00:00 | https://arxiv.org/abs/2311.09999v2 | [
"https://github.com/maticfuc/eccv_transfusion"
] | In the paper 'TransFusion -- A Transparency-Based Diffusion Model for Anomaly Detection', what Detection AUROC score did the TransFusion model get on the MVTec AD dataset
| 99.4 |
ImageNet - 10% labeled data | SequenceMatch (ResNet-50) | SequenceMatch: Revisiting the design of weak-strong augmentations for Semi-supervised learning | 2023-10-24T00:00:00 | https://arxiv.org/abs/2310.15787v1 | [
"https://github.com/beandkay/sequencematch"
] | In the paper 'SequenceMatch: Revisiting the design of weak-strong augmentations for Semi-supervised learning', what Top 5 Accuracy score did the SequenceMatch (ResNet-50) model get on the ImageNet - 10% labeled data dataset
| 91.9 |
ICFG-PEDES | RaSa | RaSa: Relation and Sensitivity Aware Representation Learning for Text-based Person Search | 2023-05-23T00:00:00 | https://arxiv.org/abs/2305.13653v1 | [
"https://github.com/flame-chasers/rasa"
] | In the paper 'RaSa: Relation and Sensitivity Aware Representation Learning for Text-based Person Search', what mAP score did the RaSa model get on the ICFG-PEDES dataset
| 41.29 |
SOTS Indoor | CasDyF-Net | CasDyF-Net: Image Dehazing via Cascaded Dynamic Filters | 2024-09-13T00:00:00 | https://arxiv.org/abs/2409.08510v1 | [
"https://github.com/dauing/casdyf-net"
] | In the paper 'CasDyF-Net: Image Dehazing via Cascaded Dynamic Filters', what PSNR score did the CasDyF-Net model get on the SOTS Indoor dataset
| 43.21 |
ETTh1 (336) Multivariate | Simba | Mamba-360: Survey of State Space Models as Transformer Alternative for Long Sequence Modelling: Methods, Applications, and Challenges | 2024-04-24T00:00:00 | https://arxiv.org/abs/2404.16112v1 | [
"https://github.com/badripatro/mamba360"
] | In the paper 'Mamba-360: Survey of State Space Models as Transformer Alternative for Long Sequence Modelling: Methods, Applications, and Challenges', what MSE score did the Simba model get on the ETTh1 (336) Multivariate dataset
| 0.473 |
CIFAR-100-LT (ρ=50) | LIFT (ViT-B/16, ImageNet-21K pre-training) | Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts | 2023-09-18T00:00:00 | https://arxiv.org/abs/2309.10019v3 | [
"https://github.com/shijxcs/lift"
] | In the paper 'Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts', what Error Rate score did the LIFT (ViT-B/16, ImageNet-21K pre-training) model get on the CIFAR-100-LT (ρ=50) dataset
| 9.8 |
Atari 2600 Freeway | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07T00:00:00 | https://arxiv.org/abs/2305.04180v3 | [
"https://github.com/xinjinghao/color"
] | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score score did the ASL DDQN model get on the Atari 2600 Freeway dataset
| 33.9 |
IllusionVQA | GPT4-Vision | IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models | 2024-03-23T00:00:00 | https://arxiv.org/abs/2403.15952v3 | [
"https://github.com/csebuetnlp/illusionvqa"
] | In the paper 'IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models', what Accuracy score did the GPT4-Vision model get on the IllusionVQA dataset
| 58.85 |
BDD100K val | TwinLiteNet | TwinLiteNet: An Efficient and Lightweight Model for Driveable Area and Lane Segmentation in Self-Driving Cars | 2023-07-20T00:00:00 | https://arxiv.org/abs/2307.10705v5 | [
"https://github.com/chequanghuy/TwinLiteNet"
] | In the paper 'TwinLiteNet: An Efficient and Lightweight Model for Driveable Area and Lane Segmentation in Self-Driving Cars', what mIoU score did the TwinLiteNet model get on the BDD100K val dataset
| 91.3 |
NYU Depth v2 | PGT (Swin-T) | Prompt Guided Transformer for Multi-Task Dense Prediction | 2023-07-28T00:00:00 | https://arxiv.org/abs/2307.15362v1 | [
"https://github.com/innovator-zero/MTDP_Lib"
] | In the paper 'Prompt Guided Transformer for Multi-Task Dense Prediction', what Mean IoU score did the PGT (Swin-T) model get on the NYU Depth v2 dataset
| 41.61 |
MSCOCO | DE-ViT | Detect Everything with Few Examples | 2023-09-22T00:00:00 | https://arxiv.org/abs/2309.12969v4 | [
"https://github.com/mlzxy/devit"
] | In the paper 'Detect Everything with Few Examples', what AP 0.5 score did the DE-ViT model get on the MSCOCO dataset
| 50 |
LAGENDA | MiVOLO-D1 | MiVOLO: Multi-input Transformer for Age and Gender Estimation | 2023-07-10T00:00:00 | https://arxiv.org/abs/2307.04616v2 | [
"https://github.com/wildchlamydia/mivolo"
] | In the paper 'MiVOLO: Multi-input Transformer for Age and Gender Estimation', what MAE score did the MiVOLO-D1 model get on the LAGENDA dataset
| 3.99 |
HumanML3D | BAD (CBS) | BAD: Bidirectional Auto-regressive Diffusion for Text-to-Motion Generation | 2024-09-17T00:00:00 | https://arxiv.org/abs/2409.10847v1 | [
"https://github.com/rohollahhs/bad"
] | In the paper 'BAD: Bidirectional Auto-regressive Diffusion for Text-to-Motion Generation', what FID score did the BAD (CBS) model get on the HumanML3D dataset
| 0.049 |
WIDER | DAEFR | Dual Associated Encoder for Face Restoration | 2023-08-14T00:00:00 | https://arxiv.org/abs/2308.07314v2 | [
"https://github.com/LIAGM/DAEFR"
] | In the paper 'Dual Associated Encoder for Face Restoration', what FID score did the DAEFR model get on the WIDER dataset
| 36.72 |
ActivityNet | BLIP-2 T5 | Open-ended VQA benchmarking of Vision-Language models by exploiting Classification datasets and their semantic hierarchy | 2024-02-11T00:00:00 | https://arxiv.org/abs/2402.07270v2 | [
"https://github.com/lmb-freiburg/ovqa"
] | In the paper 'Open-ended VQA benchmarking of Vision-Language models by exploiting Classification datasets and their semantic hierarchy', what ClipMatch@1 score did the BLIP-2 T5 model get on the ActivityNet dataset
| 53.39 |
HICO-DET | DiffHOI | Boosting Human-Object Interaction Detection with Text-to-Image Diffusion Model | 2023-05-20T00:00:00 | https://arxiv.org/abs/2305.12252v1 | [
"https://github.com/IDEA-Research/DiffHOI"
] | In the paper 'Boosting Human-Object Interaction Detection with Text-to-Image Diffusion Model', what mAP score did the DiffHOI model get on the HICO-DET dataset
| 41.50 |
QVHighlights | SG-DETR | Saliency-Guided DETR for Moment Retrieval and Highlight Detection | 2024-10-02T00:00:00 | https://arxiv.org/abs/2410.01615v1 | [
"https://github.com/ai-forever/sg-detr"
] | In the paper 'Saliency-Guided DETR for Moment Retrieval and Highlight Detection', what mAP score did the SG-DETR model get on the QVHighlights dataset
| 54.10 |
RealBlur-J | ID-Blau (Restormer) | ID-Blau: Image Deblurring by Implicit Diffusion-based reBLurring AUgmentation | 2023-12-18T00:00:00 | https://arxiv.org/abs/2312.10998v2 | [
"https://github.com/plusgood-steven/id-blau"
] | In the paper 'ID-Blau: Image Deblurring by Implicit Diffusion-based reBLurring AUgmentation', what SSIM (sRGB) score did the ID-Blau (Restormer) model get on the RealBlur-J dataset
| 0.937 |
BIG-bench (SNARKS) | PaLM 2 (few-shot, k=3, Direct) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | [
"https://github.com/eternityyw/tram-benchmark"
] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2 (few-shot, k=3, Direct) model get on the BIG-bench (SNARKS) dataset
| 78.7 |
SYSU-CD | CGNet | Change Guiding Network: Incorporating Change Prior to Guide Change Detection in Remote Sensing Imagery | 2024-04-14T00:00:00 | https://arxiv.org/abs/2404.09179v1 | [
"https://github.com/chengxihan/cgnet-cd"
] | In the paper 'Change Guiding Network: Incorporating Change Prior to Guide Change Detection in Remote Sensing Imagery', what F1 score did the CGNet model get on the SYSU-CD dataset
| 79.92 |
SportsMOT | MeMOTR (Deformable-DETR) | MeMOTR: Long-Term Memory-Augmented Transformer for Multi-Object Tracking | 2023-07-28T00:00:00 | https://arxiv.org/abs/2307.15700v3 | [
"https://github.com/mcg-nju/memotr"
] | In the paper 'MeMOTR: Long-Term Memory-Augmented Transformer for Multi-Object Tracking', what HOTA score did the MeMOTR (Deformable-DETR) model get on the SportsMOT dataset
| 68.8 |
ADE20K | MAFT+ | Collaborative Vision-Text Representation Optimizing for Open-Vocabulary Segmentation | 2024-08-01T00:00:00 | https://arxiv.org/abs/2408.00744v2 | [
"https://github.com/jiaosiyu1999/MAFT-Plus"
] | In the paper 'Collaborative Vision-Text Representation Optimizing for Open-Vocabulary Segmentation', what PQ score did the MAFT+ model get on the ADE20K dataset
| 27.1 |
PascalVOC-20 | EBSeg-L | Open-Vocabulary Semantic Segmentation with Image Embedding Balancing | 2024-06-14T00:00:00 | https://arxiv.org/abs/2406.09829v1 | [
"https://github.com/slonetime/ebseg"
] | In the paper 'Open-Vocabulary Semantic Segmentation with Image Embedding Balancing', what mIoU score did the EBSeg-L model get on the PascalVOC-20 dataset
| 96.4 |
Perception Test | Static Baseline | Perception Test: A Diagnostic Benchmark for Multimodal Video Models | 2023-05-23T00:00:00 | https://arxiv.org/abs/2305.13786v2 | [
"https://github.com/deepmind/perception_test"
] | In the paper 'Perception Test: A Diagnostic Benchmark for Multimodal Video Models', what Average Jaccard score did the Static Baseline model get on the Perception Test dataset
| 0.36 |
COLLAB | GIN + PANDA | PANDA: Expanded Width-Aware Message Passing Beyond Rewiring | 2024-06-06T00:00:00 | https://arxiv.org/abs/2406.03671v2 | [
"https://github.com/jeongwhanchoi/panda"
] | In the paper 'PANDA: Expanded Width-Aware Message Passing Beyond Rewiring', what Accuracy score did the GIN + PANDA model get on the COLLAB dataset
| 75.11% |
LibriSpeech test-clean | Zipformer+pruned transducer (no external language model) | Zipformer: A faster and better encoder for automatic speech recognition | 2023-10-17T00:00:00 | https://arxiv.org/abs/2310.11230v4 | [
"https://github.com/k2-fsa/icefall"
] | In the paper 'Zipformer: A faster and better encoder for automatic speech recognition', what Word Error Rate (WER) score did the Zipformer+pruned transducer (no external language model) model get on the LibriSpeech test-clean dataset
| 2.00 |
Refer-YouTube-VOS (2021 public validation) | MUTR | Referred by Multi-Modality: A Unified Temporal Transformer for Video Object Segmentation | 2023-05-25T00:00:00 | https://arxiv.org/abs/2305.16318v2 | [
"https://github.com/opengvlab/mutr"
] | In the paper 'Referred by Multi-Modality: A Unified Temporal Transformer for Video Object Segmentation', what J&F score did the MUTR model get on the Refer-YouTube-VOS (2021 public validation) dataset
| 68.4 |
ETTh1 (336) Multivariate | S-Mamba | Is Mamba Effective for Time Series Forecasting? | 2024-03-17T00:00:00 | https://arxiv.org/abs/2403.11144v3 | [
"https://github.com/wzhwzhwzh0921/s-d-mamba"
] | In the paper 'Is Mamba Effective for Time Series Forecasting?', what MSE score did the S-Mamba model get on the ETTh1 (336) Multivariate dataset
| 0.489 |
Amazon Men | CARCA Learnt + Con | Positional encoding is not the same as context: A study on positional encoding for Sequential recommendation | 2024-05-16T00:00:00 | https://arxiv.org/abs/2405.10436v1 | [
"https://github.com/researcher1741/position_encoding_srs"
] | In the paper 'Positional encoding is not the same as context: A study on positional encoding for Sequential recommendation', what Hit@10 score did the CARCA Learnt + Con model get on the Amazon Men dataset
| 0.7386 |
RefCOCO testB | VATEX | Vision-Aware Text Features in Referring Image Segmentation: From Object Understanding to Context Understanding | 2024-04-12T00:00:00 | https://arxiv.org/abs/2404.08590v2 | [
"https://github.com/nero1342/VATEX_RIS"
] | In the paper 'Vision-Aware Text Features in Referring Image Segmentation: From Object Understanding to Context Understanding', what mIoU score did the VATEX model get on the RefCOCO testB dataset
| 75.64 |
MNIST | CKGCN | CKGConv: General Graph Convolution with Continuous Kernels | 2024-04-21T00:00:00 | https://arxiv.org/abs/2404.13604v2 | [
"https://github.com/networkslab/ckgconv"
] | In the paper 'CKGConv: General Graph Convolution with Continuous Kernels', what Accuracy score did the CKGCN model get on the MNIST dataset
| 98.423 |
TrecQA | TANDA DeBERTa-V3-Large + ALL | Structural Self-Supervised Objectives for Transformers | 2023-09-15T00:00:00 | https://arxiv.org/abs/2309.08272v1 | [
"https://github.com/lucadiliello/transformers-framework"
] | In the paper 'Structural Self-Supervised Objectives for Transformers', what MAP score did the TANDA DeBERTa-V3-Large + ALL model get on the TrecQA dataset
| 0.954 |
iNaturalist 2018 | LIFT (ViT-L/14) | Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts | 2023-09-18T00:00:00 | https://arxiv.org/abs/2309.10019v3 | [
"https://github.com/shijxcs/lift"
] | In the paper 'Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts', what Top-1 Accuracy score did the LIFT (ViT-L/14) model get on the iNaturalist 2018 dataset
| 85.2% |
Nighttime Driving | TADP | Text-image Alignment for Diffusion-based Perception | 2023-09-29T00:00:00 | https://arxiv.org/abs/2310.00031v3 | [
"https://github.com/damaggu/tadp"
] | In the paper 'Text-image Alignment for Diffusion-based Perception', what mIoU score did the TADP model get on the Nighttime Driving dataset
| 60.8 |
LRS2 | IIANet | IIANet: An Intra- and Inter-Modality Attention Network for Audio-Visual Speech Separation | 2023-08-16T00:00:00 | https://arxiv.org/abs/2308.08143v3 | [
"https://github.com/JusperLee/IIANet"
] | In the paper 'IIANet: An Intra- and Inter-Modality Attention Network for Audio-Visual Speech Separation', what SI-SNRi score did the IIANet model get on the LRS2 dataset
| 16.4 |
CUHK-PEDES | APTM | Towards Unified Text-based Person Retrieval: A Large-scale Multi-Attribute and Language Search Benchmark | 2023-06-05T00:00:00 | https://arxiv.org/abs/2306.02898v4 | [
"https://github.com/Shuyu-XJTU/APTM"
] | In the paper 'Towards Unified Text-based Person Retrieval: A Large-scale Multi-Attribute and Language Search Benchmark', what R@1 score did the APTM model get on the CUHK-PEDES dataset
| 76.53 |
VideoInstruct | MovieChat | MovieChat: From Dense Token to Sparse Memory for Long Video Understanding | 2023-07-31T00:00:00 | https://arxiv.org/abs/2307.16449v4 | [
"https://github.com/rese1f/MovieChat"
] | In the paper 'MovieChat: From Dense Token to Sparse Memory for Long Video Understanding', what gpt-score did the MovieChat model get on the VideoInstruct dataset
| 2.76 |
MOSE | Cutie (base, MEGA) | Putting the Object Back into Video Object Segmentation | 2023-10-19T00:00:00 | https://arxiv.org/abs/2310.12982v2 | [
"https://github.com/hkchengrex/Cutie"
] | In the paper 'Putting the Object Back into Video Object Segmentation', what J&F score did the Cutie (base, MEGA) model get on the MOSE dataset
| 69.9 |
SportsMOT | MeMOTR (Deformable-DETR) | MeMOTR: Long-Term Memory-Augmented Transformer for Multi-Object Tracking | 2023-07-28T00:00:00 | https://arxiv.org/abs/2307.15700v3 | [
"https://github.com/mcg-nju/memotr"
] | In the paper 'MeMOTR: Long-Term Memory-Augmented Transformer for Multi-Object Tracking', what HOTA score did the MeMOTR (Deformable-DETR) model get on the SportsMOT dataset
| 68.8 |
CIFAR-10 | SPICE-BPA | The Balanced-Pairwise-Affinities Feature Transform | 2024-06-25T00:00:00 | https://arxiv.org/abs/2407.01467v1 | [
"https://github.com/danielshalam/bpa"
] | In the paper 'The Balanced-Pairwise-Affinities Feature Transform', what Accuracy score did the SPICE-BPA model get on the CIFAR-10 dataset
| 0.933 |
CALVIN | GR-MG | GR-MG: Leveraging Partially Annotated Data via Multi-Modal Goal Conditioned Policy | 2024-08-26T00:00:00 | https://arxiv.org/abs/2408.14368v1 | [
"https://github.com/bytedance/GR-MG"
] | In the paper 'GR-MG: Leveraging Partially Annotated Data via Multi-Modal Goal Conditioned Policy', what Avg. sequence length score did the GR-MG model get on the CALVIN dataset
| 4.04 |
DAVIS-S | BiRefNet (DUTS, HRSOD) | Bilateral Reference for High-Resolution Dichotomous Image Segmentation | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03407v6 | [
"https://github.com/zhengpeng7/birefnet"
] | In the paper 'Bilateral Reference for High-Resolution Dichotomous Image Segmentation', what S-measure score did the BiRefNet (DUTS, HRSOD) model get on the DAVIS-S dataset
| 0.973 |
UDED | TEED | Tiny and Efficient Model for the Edge Detection Generalization | 2023-08-12T00:00:00 | https://arxiv.org/abs/2308.06468v1 | [
"https://github.com/xavysp/teed"
] | In the paper 'Tiny and Efficient Model for the Edge Detection Generalization', what ODS score did the TEED model get on the UDED dataset
| 0.828 |
MM-Vet | LLaVA-InternLM2-7B-ViT + MoSLoRA | Mixture-of-Subspaces in Low-Rank Adaptation | 2024-06-16T00:00:00 | https://arxiv.org/abs/2406.11909v3 | [
"https://github.com/wutaiqiang/moslora"
] | In the paper 'Mixture-of-Subspaces in Low-Rank Adaptation', what GPT-4 score score did the LLaVA-InternLM2-7B-ViT + MoSLoRA model get on the MM-Vet dataset
| 35.2 |
IllusionVQA | InstructBLIP-13B | IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models | 2024-03-23T00:00:00 | https://arxiv.org/abs/2403.15952v3 | [
"https://github.com/csebuetnlp/illusionvqa"
] | In the paper 'IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models', what Accuracy score did the InstructBLIP-13B model get on the IllusionVQA dataset
| 24.3 |
QVHighlights | UniVTG (w/ PT) | UniVTG: Towards Unified Video-Language Temporal Grounding | 2023-07-31T00:00:00 | https://arxiv.org/abs/2307.16715v2 | [
"https://github.com/showlab/univtg"
] | In the paper 'UniVTG: Towards Unified Video-Language Temporal Grounding', what mAP score did the UniVTG (w/ PT) model get on the QVHighlights dataset
| 40.54 |
CDD Dataset (season-varying) | HANet | HANet: A Hierarchical Attention Network for Change Detection With Bitemporal Very-High-Resolution Remote Sensing Images | 2024-04-14T00:00:00 | https://arxiv.org/abs/2404.09178v1 | [
"https://github.com/chengxihan/hanet-cd"
] | In the paper 'HANet: A Hierarchical Attention Network for Change Detection With Bitemporal Very-High-Resolution Remote Sensing Images', what F1-Score score did the HANet model get on the CDD Dataset (season-varying) dataset
| 89.23 |
Wiki-40B | OutEffHop-Bert_base | Outlier-Efficient Hopfield Layers for Large Transformer-Based Models | 2024-04-04T00:00:00 | https://arxiv.org/abs/2404.03828v2 | [
"https://github.com/magics-lab/outeffhop"
] | In the paper 'Outlier-Efficient Hopfield Layers for Large Transformer-Based Models', what Perplexity score did the OutEffHop-Bert_base model get on the Wiki-40B dataset
| 6.209 |
PGPS9K | GOLD | GOLD: Geometry Problem Solver with Natural Language Description | 2024-05-01T00:00:00 | https://arxiv.org/abs/2405.00494v1 | [
"https://github.com/neurasearch/geometry-diagram-description"
] | In the paper 'GOLD: Geometry Problem Solver with Natural Language Description', what Completion accuracy score did the GOLD model get on the PGPS9K dataset
| 65.8 |
RefCOCO+ test B | HyperSeg | HyperSeg: Towards Universal Visual Segmentation with Large Language Model | 2024-11-26T00:00:00 | https://arxiv.org/abs/2411.17606v2 | [
"https://github.com/congvvc/HyperSeg"
] | In the paper 'HyperSeg: Towards Universal Visual Segmentation with Large Language Model', what Overall IoU score did the HyperSeg model get on the RefCOCO+ test B dataset
| 75.2 |
Weather2K1786 (336) | MoLE-DLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06786v3 | [
"https://github.com/rogerni/mole"
] | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the Weather2K1786 (336) dataset
| 0.603 |
ENZYMES | R-GIN + PANDA | PANDA: Expanded Width-Aware Message Passing Beyond Rewiring | 2024-06-06T00:00:00 | https://arxiv.org/abs/2406.03671v2 | [
"https://github.com/jeongwhanchoi/panda"
] | In the paper 'PANDA: Expanded Width-Aware Message Passing Beyond Rewiring', what Accuracy score did the R-GIN + PANDA model get on the ENZYMES dataset
| 53.1 |
ImageNet | KD++(T:resnet-152 S:resnet18) | Improving Knowledge Distillation via Regularizing Feature Norm and Direction | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.17007v1 | [
"https://github.com/wangyz1608/knowledge-distillation-via-nd"
] | In the paper 'Improving Knowledge Distillation via Regularizing Feature Norm and Direction', what Top-1 accuracy % score did the KD++(T:resnet-152 S:resnet18) model get on the ImageNet dataset
| 72.54 |
BanglaBook | Logistic Regression (char 2-gram + char 3-gram) | BanglaBook: A Large-scale Bangla Dataset for Sentiment Analysis from Book Reviews | 2023-05-11T00:00:00 | https://arxiv.org/abs/2305.06595v3 | [
"https://github.com/mohsinulkabir14/banglabook"
] | In the paper 'BanglaBook: A Large-scale Bangla Dataset for Sentiment Analysis from Book Reviews', what Weighted Average F1-score score did the Logistic Regression (char 2-gram + char 3-gram) model get on the BanglaBook dataset
| 0.8978 |
Total-Text | MixNet | MixNet: Toward Accurate Detection of Challenging Scene Text in the Wild | 2023-08-23T00:00:00 | https://arxiv.org/abs/2308.12817v2 | [
"https://github.com/D641593/MixNet"
] | In the paper 'MixNet: Toward Accurate Detection of Challenging Scene Text in the Wild', what F-Measure score did the MixNet model get on the Total-Text dataset
| 90.5% |
nuScenes | PointBeV (static) | PointBeV: A Sparse Approach to BeV Predictions | 2023-12-01T00:00:00 | https://arxiv.org/abs/2312.00703v2 | [
"https://github.com/valeoai/pointbev"
] | In the paper 'PointBeV: A Sparse Approach to BeV Predictions', what IoU veh - 224x480 - No vis filter - 100x100 at 0.5 score did the PointBeV (static) model get on the nuScenes dataset
| 38.7 |
CVC-ClinicDB | RaBiT | RaBiT: An Efficient Transformer using Bidirectional Feature Pyramid Network with Reverse Attention for Colon Polyp Segmentation | 2023-07-12T00:00:00 | https://arxiv.org/abs/2307.06420v1 | [
"https://github.com/nguyenhoangthuan99/RaBiT"
] | In the paper 'RaBiT: An Efficient Transformer using Bidirectional Feature Pyramid Network with Reverse Attention for Colon Polyp Segmentation', what mean Dice score did the RaBiT model get on the CVC-ClinicDB dataset
| 0.951 |
PanNuke | LKCell | LKCell: Efficient Cell Nuclei Instance Segmentation with Large Convolution Kernels | 2024-07-25T00:00:00 | https://arxiv.org/abs/2407.18054v1 | [
"https://github.com/hustvl/lkcell"
] | In the paper 'LKCell: Efficient Cell Nuclei Instance Segmentation with Large Convolution Kernels', what PQ score did the LKCell model get on the PanNuke dataset
| 50.80 |