dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
MBPP | MapCoder (GPT-4) | MapCoder: Multi-Agent Code Generation for Competitive Problem Solving | 2024-05-18T00:00:00 | https://arxiv.org/abs/2405.11403v1 | [
"https://github.com/md-ashraful-pramanik/mapcoder"
] | In the paper 'MapCoder: Multi-Agent Code Generation for Competitive Problem Solving', what Accuracy score did the MapCoder (GPT-4) model get on the MBPP dataset
| 83.1 |
MVTec AD | CPR-fast(TensorRT) | Target before Shooting: Accurate Anomaly Detection and Localization under One Millisecond via Cascade Patch Retrieval | 2023-08-13T00:00:00 | https://arxiv.org/abs/2308.06748v1 | [
"https://github.com/flyinghu123/cpr"
] | In the paper 'Target before Shooting: Accurate Anomaly Detection and Localization under One Millisecond via Cascade Patch Retrieval', what FPS score did the CPR-fast(TensorRT) model get on the MVTec AD dataset
| 362 |
AE-110k | GPT-4-json-val-10-dem | ExtractGPT: Exploring the Potential of Large Language Models for Product Attribute Value Extraction | 2023-10-19T00:00:00 | https://arxiv.org/abs/2310.12537v5 | [
"https://github.com/wbsg-uni-mannheim/extractgpt"
] | In the paper 'ExtractGPT: Exploring the Potential of Large Language Models for Product Attribute Value Extraction', what F1-score score did the GPT-4-json-val-10-dem model get on the AE-110k dataset
| 87.5 |
NYU Depth v2 | Marigold + E2E FT(zero-shot) | Fine-Tuning Image-Conditional Diffusion Models is Easier than You Think | 2024-09-17T00:00:00 | https://arxiv.org/abs/2409.11355v1 | [
"https://github.com/VisualComputingInstitute/diffusion-e2e-ft"
] | In the paper 'Fine-Tuning Image-Conditional Diffusion Models is Easier than You Think', what % < 11.25 score did the Marigold + E2E FT(zero-shot) model get on the NYU Depth v2 dataset
| 61.4 |
PAD Dataset | SplatPose | SplatPose & Detect: Pose-Agnostic 3D Anomaly Detection | 2024-04-10T00:00:00 | https://arxiv.org/abs/2404.06832v1 | [
"https://github.com/m-kruse98/splatpose"
] | In the paper 'SplatPose & Detect: Pose-Agnostic 3D Anomaly Detection', what Detection AUROC score did the SplatPose model get on the PAD Dataset
| 93.9 |
17 Places | SegVLAD-FineT (M) | Revisit Anything: Visual Place Recognition via Image Segment Retrieval | 2024-09-26T00:00:00 | https://arxiv.org/abs/2409.18049v1 | [
"https://github.com/anyloc/revisit-anything"
] | In the paper 'Revisit Anything: Visual Place Recognition via Image Segment Retrieval', what Recall@1 score did the SegVLAD-FineT (M) model get on the 17 Places dataset
| 95.3 |
ColonINST-v1 (Seen) | MobileVLM-1.7B (w/o LoRA, w/ extra data) | MobileVLM : A Fast, Strong and Open Vision Language Assistant for Mobile Devices | 2023-12-28T00:00:00 | https://arxiv.org/abs/2312.16886v2 | [
"https://github.com/meituan-automl/mobilevlm"
] | In the paper 'MobileVLM : A Fast, Strong and Open Vision Language Assistant for Mobile Devices', what Accuracy score did the MobileVLM-1.7B (w/o LoRA, w/ extra data) model get on the ColonINST-v1 (Seen) dataset
| 97.78 |
CIFAR-10-LT (ρ=100) | VS + ADRW + TLA | A Unified Generalization Analysis of Re-Weighting and Logit-Adjustment for Imbalanced Learning | 2023-10-07T00:00:00 | https://arxiv.org/abs/2310.04752 | [
"https://github.com/wang22ti/DDC"
] | In the paper 'A Unified Generalization Analysis of Re-Weighting and Logit-Adjustment for Imbalanced Learning', what Error Rate score did the VS + ADRW + TLA model get on the CIFAR-10-LT (ρ=100) dataset
| 13.58 |
Replica | OpenIns3D (with rgbd) | OpenIns3D: Snap and Lookup for 3D Open-vocabulary Instance Segmentation | 2023-09-01T00:00:00 | https://arxiv.org/abs/2309.00616v5 | [
"https://github.com/Pointcept/OpenIns3D"
] | In the paper 'OpenIns3D: Snap and Lookup for 3D Open-vocabulary Instance Segmentation', what mAP score did the OpenIns3D (with rgbd) model get on the Replica dataset
| 21.1 |
MUV | G-Tuning | Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns | 2023-12-21T00:00:00 | https://arxiv.org/abs/2312.13583v1 | [
"https://github.com/zjunet/G-Tuning"
] | In the paper 'Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns', what ROC-AUC score did the G-Tuning model get on the MUV dataset
| 75.84 |
no extra data | FDQN | FDQN: A Flexible Deep Q-Network Framework for Game Automation | 2024-05-29T00:00:00 | https://arxiv.org/abs/2405.18761v1 | [
"https://github.com/prabhath-r/FDQN_RL"
] | In the paper 'FDQN: A Flexible Deep Q-Network Framework for Game Automation', what Average Reward score did the FDQN model get on the no extra data dataset
| 728 |
Cityscapes val | HALO | Hyperbolic Active Learning for Semantic Segmentation under Domain Shift | 2023-06-19T00:00:00 | https://arxiv.org/abs/2306.11180v5 | [
"https://github.com/paolomandica/HALO"
] | In the paper 'Hyperbolic Active Learning for Semantic Segmentation under Domain Shift', what mIoU score did the HALO model get on the Cityscapes val dataset
| 77.8 |
Cornell (60%/20%/20% random splits) | HH-GCN | Half-Hop: A graph upsampling approach for slowing down message passing | 2023-08-17T00:00:00 | https://arxiv.org/abs/2308.09198v1 | [
"https://github.com/nerdslab/halfhop"
] | In the paper 'Half-Hop: A graph upsampling approach for slowing down message passing', what 1:1 Accuracy score did the HH-GCN model get on the Cornell (60%/20%/20% random splits) dataset
| 63.24 ± 5.43 |
LaSOT | ARTrackV2-L | ARTrackV2: Prompting Autoregressive Tracker Where to Look and How to Describe | 2023-12-28T00:00:00 | https://arxiv.org/abs/2312.17133v3 | [
"https://github.com/miv-xjtu/artrack"
] | In the paper 'ARTrackV2: Prompting Autoregressive Tracker Where to Look and How to Describe', what AUC score did the ARTrackV2-L model get on the LaSOT dataset
| 73.6 |
GTA-to-Avg(Cityscapes,BDD,Mapillary) | VLTSeg | Strong but simple: A Baseline for Domain Generalized Dense Perception by CLIP-based Transfer Learning | 2023-12-04T00:00:00 | https://arxiv.org/abs/2312.02021v4 | [
"https://github.com/VLTSeg/VLTSeg"
] | In the paper 'Strong but simple: A Baseline for Domain Generalized Dense Perception by CLIP-based Transfer Learning', what mIoU score did the VLTSeg model get on the GTA-to-Avg(Cityscapes,BDD,Mapillary) dataset
| 63.5 |
TriMouse-161 | BUCTD-CoAM-W48 (DLCRNet) | Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity | 2023-06-13T00:00:00 | https://arxiv.org/abs/2306.07879v2 | [
"https://github.com/amathislab/BUCTD"
] | In the paper 'Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity', what mAP score did the BUCTD-CoAM-W48 (DLCRNet) model get on the TriMouse-161 dataset
| 99.1 |
PTC | CIN++ | CIN++: Enhancing Topological Message Passing | 2023-06-06T00:00:00 | https://arxiv.org/abs/2306.03561v1 | [
"https://github.com/twitter-research/cwn"
] | In the paper 'CIN++: Enhancing Topological Message Passing', what Accuracy score did the CIN++ model get on the PTC dataset
| 73.2% |
UCR Anomaly Archive | Convolutional AE | Explainable Time Series Anomaly Detection using Masked Latent Generative Modeling | 2023-11-21T00:00:00 | https://arxiv.org/abs/2311.12550v5 | [
"https://github.com/ml4its/timevqvae-anomalydetection"
] | In the paper 'Explainable Time Series Anomaly Detection using Masked Latent Generative Modeling', what accuracy score did the Convolutional AE model get on the UCR Anomaly Archive dataset
| 0.352 |
WikiText-2 | Ensemble of All | Advancing State of the Art in Language Modeling | 2023-11-28T00:00:00 | https://arxiv.org/abs/2312.03735v1 | [
"https://github.com/davidherel/sota_lm"
] | In the paper 'Advancing State of the Art in Language Modeling', what Validation perplexity score did the Ensemble of All model get on the WikiText-2 dataset
| 55.4 |
PASCAL Context-59 | LaVG | In Defense of Lazy Visual Grounding for Open-Vocabulary Semantic Segmentation | 2024-08-09T00:00:00 | https://arxiv.org/abs/2408.04961v1 | [
"https://github.com/dahyun-kang/lazygrounding"
] | In the paper 'In Defense of Lazy Visual Grounding for Open-Vocabulary Semantic Segmentation', what mIoU score did the LaVG model get on the PASCAL Context-59 dataset
| 34.7 |
Traffic (720) | PRformer | PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting | 2024-08-20T00:00:00 | https://arxiv.org/abs/2408.10483v1 | [
"https://github.com/usualheart/prformer"
] | In the paper 'PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting', what MSE score did the PRformer model get on the Traffic (720) dataset
| 0.421 |
Youtube-VIS 2022 Validation | DVIS++(VIT-L) | DVIS++: Improved Decoupled Framework for Universal Video Segmentation | 2023-12-20T00:00:00 | https://arxiv.org/abs/2312.13305v1 | [
"https://github.com/zhang-tao-whu/DVIS_Plus"
] | In the paper 'DVIS++: Improved Decoupled Framework for Universal Video Segmentation', what mAP_L score did the DVIS++(VIT-L) model get on the Youtube-VIS 2022 Validation dataset
| 50.9 |
KADID-10k | ARNIQA | ARNIQA: Learning Distortion Manifold for Image Quality Assessment | 2023-10-20T00:00:00 | https://arxiv.org/abs/2310.14918v2 | [
"https://github.com/miccunifi/arniqa"
] | In the paper 'ARNIQA: Learning Distortion Manifold for Image Quality Assessment', what SRCC score did the ARNIQA model get on the KADID-10k dataset
| 0.908 |
MSVD | PAU | Prototype-based Aleatoric Uncertainty Quantification for Cross-modal Retrieval | 2023-09-29T00:00:00 | https://arxiv.org/abs/2309.17093v3 | [
"https://github.com/leolee99/pau"
] | In the paper 'Prototype-based Aleatoric Uncertainty Quantification for Cross-modal Retrieval', what text-to-video R@1 score did the PAU model get on the MSVD dataset
| 47.3 |
MUSDB18-HQ | TFC-TDF-UNet (v3) | Sound Demixing Challenge 2023 Music Demixing Track Technical Report: TFC-TDF-UNet v3 | 2023-06-15T00:00:00 | https://arxiv.org/abs/2306.09382v3 | [
"https://github.com/kuielab/sdx23"
] | In the paper 'Sound Demixing Challenge 2023 Music Demixing Track Technical Report: TFC-TDF-UNet v3', what SDR (drums) score did the TFC-TDF-UNet (v3) model get on the MUSDB18-HQ dataset
| 8.44 |
VoxCeleb | ReDimNet-B6-SF2-LM (15.0M) | Reshape Dimensions Network for Speaker Recognition | 2024-07-25T00:00:00 | https://arxiv.org/abs/2407.18223v2 | [
"https://github.com/IDRnD/ReDimNet"
] | In the paper 'Reshape Dimensions Network for Speaker Recognition', what EER score did the ReDimNet-B6-SF2-LM (15.0M) model get on the VoxCeleb dataset
| 0.4 |
IIIT5k | CLIP4STR-L (DataComp-1B) | CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model | 2023-05-23T00:00:00 | https://arxiv.org/abs/2305.14014v3 | [
"https://github.com/VamosC/CLIP4STR"
] | In the paper 'CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model', what Accuracy score did the CLIP4STR-L (DataComp-1B) model get on the IIIT5k dataset
| 99.6 |
SFCHD | YOLOv8+SCALE | Large, Complex, and Realistic Safety Clothing and Helmet Detection: Dataset and Method | 2023-06-03T00:00:00 | https://arxiv.org/abs/2306.02098v2 | [
"https://github.com/lijfrank-open/SFCHD-SCALE"
] | In the paper 'Large, Complex, and Realistic Safety Clothing and Helmet Detection: Dataset and Method', what mAP@0.50 score did the YOLOv8+SCALE model get on the SFCHD dataset
| 78.6 |
UniGeo | GOLD | GOLD: Geometry Problem Solver with Natural Language Description | 2024-05-01T00:00:00 | https://arxiv.org/abs/2405.00494v1 | [
"https://github.com/neurasearch/geometry-diagram-description"
] | In the paper 'GOLD: Geometry Problem Solver with Natural Language Description', what Accuracy (%) score did the GOLD model get on the UniGeo dataset
| 98.5 |
ADE20K | TADP | Text-image Alignment for Diffusion-based Perception | 2023-09-29T00:00:00 | https://arxiv.org/abs/2310.00031v3 | [
"https://github.com/damaggu/tadp"
] | In the paper 'Text-image Alignment for Diffusion-based Perception', what Validation mIoU score did the TADP model get on the ADE20K dataset
| 55.9 |
BGS | BoP | From Primes to Paths: Enabling Fast Multi-Relational Graph Analysis | 2024-11-17T00:00:00 | https://arxiv.org/abs/2411.11149v1 | [
"https://github.com/kbogas/PAM_BoP"
] | In the paper 'From Primes to Paths: Enabling Fast Multi-Relational Graph Analysis', what Accuracy score did the BoP model get on the BGS dataset
| 90.34 |
Atari 2600 Crazy Climber | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07T00:00:00 | https://arxiv.org/abs/2305.04180v3 | [
"https://github.com/xinjinghao/color"
] | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score score did the ASL DDQN model get on the Atari 2600 Crazy Climber dataset
| 166019 |
PASCAL VOC 2012 val | CAUSE (ViT-B/8) | Causal Unsupervised Semantic Segmentation | 2023-10-11T00:00:00 | https://arxiv.org/abs/2310.07379v1 | [
"https://github.com/ByungKwanLee/Causal-Unsupervised-Segmentation"
] | In the paper 'Causal Unsupervised Semantic Segmentation', what Clustering [mIoU] score did the CAUSE (ViT-B/8) model get on the PASCAL VOC 2012 val dataset
| 53.3 |
IMDb | Bert+ Wilson-Cowan model RNN | Learning in Wilson-Cowan model for metapopulation | 2024-06-24T00:00:00 | https://arxiv.org/abs/2406.16453v2 | [
"https://github.com/raffaelemarino/learning_in_wilsoncowan"
] | In the paper 'Learning in Wilson-Cowan model for metapopulation', what Accuracy score did the Bert+ Wilson-Cowan model RNN model get on the IMDb dataset
| 87.46 |
PASCAL VOC to Watercolor2k | CDDMSL | Semi-Supervised Domain Generalization for Object Detection via Language-Guided Feature Alignment | 2023-09-24T00:00:00 | https://arxiv.org/abs/2309.13525v1 | [
"https://github.com/sinamalakouti/CDDMSL"
] | In the paper 'Semi-Supervised Domain Generalization for Object Detection via Language-Guided Feature Alignment', what mAP score did the CDDMSL model get on the PASCAL VOC to Watercolor2k dataset
| 49.7 |
CLUSTER | CKGCN | CKGConv: General Graph Convolution with Continuous Kernels | 2024-04-21T00:00:00 | https://arxiv.org/abs/2404.13604v2 | [
"https://github.com/networkslab/ckgconv"
] | In the paper 'CKGConv: General Graph Convolution with Continuous Kernels', what Accuracy score did the CKGCN model get on the CLUSTER dataset
| 79.003 |
THuman2.0 Dataset | SIFU | SIFU: Side-view Conditioned Implicit Function for Real-world Usable Clothed Human Reconstruction | 2023-12-10T00:00:00 | https://arxiv.org/abs/2312.06704v3 | [
"https://github.com/River-Zhang/SIFU"
] | In the paper 'SIFU: Side-view Conditioned Implicit Function for Real-world Usable Clothed Human Reconstruction', what CLIP Similarity score did the SIFU model get on the THuman2.0 Dataset
| 0.8663 |
ImageNet-LT | GML (ResNeXt-50) | Long-Tailed Recognition by Mutual Information Maximization between Latent Features and Ground-Truth Labels | 2023-05-02T00:00:00 | https://arxiv.org/abs/2305.01160v3 | [
"https://github.com/bluecdm/Long-tailed-recognition"
] | In the paper 'Long-Tailed Recognition by Mutual Information Maximization between Latent Features and Ground-Truth Labels', what Top-1 Accuracy score did the GML (ResNeXt-50) model get on the ImageNet-LT dataset
| 58.8 |
MM-Vet | LLaVA-OneVision-72B | LLaVA-OneVision: Easy Visual Task Transfer | 2024-08-06T00:00:00 | https://arxiv.org/abs/2408.03326v3 | [
"https://github.com/evolvinglmms-lab/lmms-eval"
] | In the paper 'LLaVA-OneVision: Easy Visual Task Transfer', what GPT-4 score did the LLaVA-OneVision-72B model get on the MM-Vet dataset
| 63.7 |
VoxCeleb | ReDimNet-B2-SF2-LM (4.7M) | Reshape Dimensions Network for Speaker Recognition | 2024-07-25T00:00:00 | https://arxiv.org/abs/2407.18223v2 | [
"https://github.com/IDRnD/ReDimNet"
] | In the paper 'Reshape Dimensions Network for Speaker Recognition', what EER score did the ReDimNet-B2-SF2-LM (4.7M) model get on the VoxCeleb dataset
| 0.57 |
PASCAL-5i (1-Shot) | SCCAN (ResNet-101) | Self-Calibrated Cross Attention Network for Few-Shot Segmentation | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09294v1 | [
"https://github.com/sam1224/sccan"
] | In the paper 'Self-Calibrated Cross Attention Network for Few-Shot Segmentation', what Mean IoU score did the SCCAN (ResNet-101) model get on the PASCAL-5i (1-Shot) dataset
| 68.3 |
HumanEval | MGDebugger (DeepSeek-Coder-V2-Lite) | From Code to Correctness: Closing the Last Mile of Code Generation with Hierarchical Debugging | 2024-10-02T00:00:00 | https://arxiv.org/abs/2410.01215v2 | [
"https://github.com/YerbaPage/MGDebugger"
] | In the paper 'From Code to Correctness: Closing the Last Mile of Code Generation with Hierarchical Debugging', what Pass@1 score did the MGDebugger (DeepSeek-Coder-V2-Lite) model get on the HumanEval dataset
| 94.5 |
MORPH Album2 (SE) | ResNet-50-Mean-Variance | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10T00:00:00 | https://arxiv.org/abs/2307.04570v3 | [
"https://github.com/paplhjak/facial-age-estimation-benchmark"
] | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-Mean-Variance model get on the MORPH Album2 (SE) dataset
| 2.83 |
Places-LT | LIFT (ViT-B/16) | Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts | 2023-09-18T00:00:00 | https://arxiv.org/abs/2309.10019v3 | [
"https://github.com/shijxcs/lift"
] | In the paper 'Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts', what Top-1 Accuracy score did the LIFT (ViT-B/16) model get on the Places-LT dataset
| 52.2 |
GlaS | MDM | Masked Diffusion as Self-supervised Representation Learner | 2023-08-10T00:00:00 | https://arxiv.org/abs/2308.05695v4 | [
"https://github.com/zx-pan/mdm"
] | In the paper 'Masked Diffusion as Self-supervised Representation Learner', what F1 score did the MDM model get on the GlaS dataset
| 91.95 |
LRS3-TED | SyncVSR | SyncVSR: Data-Efficient Visual Speech Recognition with End-to-End Crossmodal Audio Token Synchronization | 2024-06-18T00:00:00 | https://arxiv.org/abs/2406.12233v1 | [
"https://github.com/KAIST-AILab/SyncVSR"
] | In the paper 'SyncVSR: Data-Efficient Visual Speech Recognition with End-to-End Crossmodal Audio Token Synchronization', what Word Error Rate (WER) score did the SyncVSR model get on the LRS3-TED dataset
| 21.5 |
Weather (336) | RLinear-CI | Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping | 2023-05-18T00:00:00 | https://arxiv.org/abs/2305.10721v1 | [
"https://github.com/plumprc/rtsf"
] | In the paper 'Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping', what MSE score did the RLinear-CI model get on the Weather (336) dataset
| 0.241 |
CIFAR-FS 5-way (5-shot) | PT+MAP+SF+BPA (transductive) | The Balanced-Pairwise-Affinities Feature Transform | 2024-06-25T00:00:00 | https://arxiv.org/abs/2407.01467v1 | [
"https://github.com/danielshalam/bpa"
] | In the paper 'The Balanced-Pairwise-Affinities Feature Transform', what Accuracy score did the PT+MAP+SF+BPA (transductive) model get on the CIFAR-FS 5-way (5-shot) dataset
| 92.83 |
ActivityNet-1.3 | CASE | Revisiting Foreground and Background Separation in Weakly-supervised Temporal Action Localization: A Clustering-based Approach | 2023-12-21T00:00:00 | https://arxiv.org/abs/2312.14138v1 | [
"https://github.com/qinying-liu/case"
] | In the paper 'Revisiting Foreground and Background Separation in Weakly-supervised Temporal Action Localization: A Clustering-based Approach', what mAP@0.5 score did the CASE model get on the ActivityNet-1.3 dataset
| 43.2 |
Astock | SRL&SDPG&Factors | FinReport: Explainable Stock Earnings Forecasting via News Factor Analyzing Model | 2024-03-05T00:00:00 | https://arxiv.org/abs/2403.02647v1 | [
"https://github.com/frinkleko/finreport"
] | In the paper 'FinReport: Explainable Stock Earnings Forecasting via News Factor Analyzing Model', what Accuracy score did the SRL&SDPG&Factors model get on the Astock dataset
| 75.40 |
LingOly | Claude Opus | LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low-Resource and Extinct Languages | 2024-06-10T00:00:00 | https://arxiv.org/abs/2406.06196v3 | [
"https://github.com/am-bean/lingOly"
] | In the paper 'LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low-Resource and Extinct Languages', what Exact Match Accuracy score did the Claude Opus model get on the LingOly dataset
| 46.3% |
RefCOCO+ test B | VATEX | Vision-Aware Text Features in Referring Image Segmentation: From Object Understanding to Context Understanding | 2024-04-12T00:00:00 | https://arxiv.org/abs/2404.08590v2 | [
"https://github.com/nero1342/VATEX_RIS"
] | In the paper 'Vision-Aware Text Features in Referring Image Segmentation: From Object Understanding to Context Understanding', what mIoU score did the VATEX model get on the RefCOCO+ test B dataset
| 62.52 |
SUN397 | ProMetaR | Prompt Learning via Meta-Regularization | 2024-04-01T00:00:00 | https://arxiv.org/abs/2404.00851v1 | [
"https://github.com/mlvlab/prometar"
] | In the paper 'Prompt Learning via Meta-Regularization', what Harmonic mean score did the ProMetaR model get on the SUN397 dataset
| 80.82 |
MOT17 | BoostTrack++ | BoostTrack++: using tracklet information to detect more objects in multiple object tracking | 2024-08-23T00:00:00 | https://arxiv.org/abs/2408.13003v1 | [
"https://github.com/vukasin-stanojevic/BoostTrack"
] | In the paper 'BoostTrack++: using tracklet information to detect more objects in multiple object tracking', what MOTA score did the BoostTrack++ model get on the MOT17 dataset
| 80.7 |
MSR-VTT | Video-LaVIT | Video-LaVIT: Unified Video-Language Pre-training with Decoupled Visual-Motional Tokenization | 2024-02-05T00:00:00 | https://arxiv.org/abs/2402.03161v3 | [
"https://github.com/jy0205/lavit"
] | In the paper 'Video-LaVIT: Unified Video-Language Pre-training with Decoupled Visual-Motional Tokenization', what FID score did the Video-LaVIT model get on the MSR-VTT dataset
| 11.27 |
SportsMOT | AED | Associate Everything Detected: Facilitating Tracking-by-Detection to the Unknown | 2024-09-14T00:00:00 | https://arxiv.org/abs/2409.09293v1 | [
"https://github.com/balabooooo/aed"
] | In the paper 'Associate Everything Detected: Facilitating Tracking-by-Detection to the Unknown', what HOTA score did the AED model get on the SportsMOT dataset
| 79.1 |
MMBench | LLaVA-InternLM2-ViT + MoSLoRA | Mixture-of-Subspaces in Low-Rank Adaptation | 2024-06-16T00:00:00 | https://arxiv.org/abs/2406.11909v3 | [
"https://github.com/wutaiqiang/moslora"
] | In the paper 'Mixture-of-Subspaces in Low-Rank Adaptation', what GPT-3.5 score did the LLaVA-InternLM2-ViT + MoSLoRA model get on the MMBench dataset
| 73.8 |
UCF-Crime | MULDE-frame-centric-micro-one-class-classification | MULDE: Multiscale Log-Density Estimation via Denoising Score Matching for Video Anomaly Detection | 2024-03-21T00:00:00 | https://arxiv.org/abs/2403.14497v1 | [
"https://github.com/jakubmicorek/MULDE-Multiscale-Log-Density-Estimation-via-Denoising-Score-Matching-for-Video-Anomaly-Detection"
] | In the paper 'MULDE: Multiscale Log-Density Estimation via Denoising Score Matching for Video Anomaly Detection', what AUC score did the MULDE-frame-centric-micro-one-class-classification model get on the UCF-Crime dataset
| 78.5% |
TASD | MvP | MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction | 2023-05-22T00:00:00 | https://arxiv.org/abs/2305.12627v1 | [
"https://github.com/ZubinGou/multi-view-prompting"
] | In the paper 'MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction', what F1 (R15) score did the MvP model get on the TASD dataset
| 64.53 |
MATH | DART-Math-Llama3-70B-Prop2Diff (0-shot CoT, w/o code) | DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving | 2024-06-18T00:00:00 | https://arxiv.org/abs/2407.13690v1 | [
"https://github.com/hkust-nlp/dart-math"
] | In the paper 'DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving', what Accuracy score did the DART-Math-Llama3-70B-Prop2Diff (0-shot CoT, w/o code) model get on the MATH dataset
| 56.1 |
STS13 | PromptEOL+CSE+LLaMA-30B | Scaling Sentence Embeddings with Large Language Models | 2023-07-31T00:00:00 | https://arxiv.org/abs/2307.16645v1 | [
"https://github.com/kongds/scaling_sentemb"
] | In the paper 'Scaling Sentence Embeddings with Large Language Models', what Spearman Correlation score did the PromptEOL+CSE+LLaMA-30B model get on the STS13 dataset
| 0.9025 |
DAVIS 2017 (test-dev) | DEVA | Tracking Anything with Decoupled Video Segmentation | 2023-09-07T00:00:00 | https://arxiv.org/abs/2309.03903v1 | [
"https://github.com/hkchengrex/Tracking-Anything-with-DEVA"
] | In the paper 'Tracking Anything with Decoupled Video Segmentation', what J&F score did the DEVA model get on the DAVIS 2017 (test-dev) dataset
| 83.2 |
ETTh2 (336) Multivariate | RLinear | Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping | 2023-05-18T00:00:00 | https://arxiv.org/abs/2305.10721v1 | [
"https://github.com/plumprc/rtsf"
] | In the paper 'Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping', what MSE score did the RLinear model get on the ETTh2 (336) Multivariate dataset
| 0.325 |
SAFIM | gpt-4-1106-preview | Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks | 2024-03-07T00:00:00 | https://arxiv.org/abs/2403.04814v3 | [
"https://github.com/gonglinyuan/safim"
] | In the paper 'Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks', what Algorithmic score did the gpt-4-1106-preview model get on the SAFIM dataset
| 42.11 |
CIFAKE: Real and AI-Generated Synthetic Images | FasterThanLies | Faster Than Lies: Real-time Deepfake Detection using Binary Neural Networks | 2024-06-07T00:00:00 | https://arxiv.org/abs/2406.04932v1 | [
"https://github.com/fedeloper/binary_deepfake_detection"
] | In the paper 'Faster Than Lies: Real-time Deepfake Detection using Binary Neural Networks', what Validation Accuracy score did the FasterThanLies model get on the CIFAKE: Real and AI-Generated Synthetic Images dataset
| 97.29 |
ScanObjectNN | GPSFormer | GPSFormer: A Global Perception and Local Structure Fitting-based Transformer for Point Cloud Understanding | 2024-07-18T00:00:00 | https://arxiv.org/abs/2407.13519v2 | [
"https://github.com/changshuowang/GPSFormer"
] | In the paper 'GPSFormer: A Global Perception and Local Structure Fitting-based Transformer for Point Cloud Understanding', what Overall Accuracy score did the GPSFormer model get on the ScanObjectNN dataset
| 95.4 |
ImageNet 64x64 | DisCo-Diff | DisCo-Diff: Enhancing Continuous Diffusion Models with Discrete Latents | 2024-07-03T00:00:00 | https://arxiv.org/abs/2407.03300v1 | [
"https://github.com/gcorso/disco-diffdock"
] | In the paper 'DisCo-Diff: Enhancing Continuous Diffusion Models with Discrete Latents', what FID score did the DisCo-Diff model get on the ImageNet 64x64 dataset
| 1.22 |
SBU / SBU-Refine | SDDNet (MM 2023) (512x512) | SDDNet: Style-guided Dual-layer Disentanglement Network for Shadow Detection | 2023-08-17T00:00:00 | https://arxiv.org/abs/2308.08935v2 | [
"https://github.com/rmcong/sddnet_acmmm23"
] | In the paper 'SDDNet: Style-guided Dual-layer Disentanglement Network for Shadow Detection', what BER score did the SDDNet (MM 2023) (512x512) model get on the SBU / SBU-Refine dataset
| 4.86 |
Cityscapes to ACDC | CMFormer | Learning Content-enhanced Mask Transformer for Domain Generalized Urban-Scene Segmentation | 2023-07-01T00:00:00 | https://arxiv.org/abs/2307.00371v5 | [
"https://github.com/BiQiWHU/CMFormer"
] | In the paper 'Learning Content-enhanced Mask Transformer for Domain Generalized Urban-Scene Segmentation', what mIoU score did the CMFormer model get on the Cityscapes to ACDC dataset
| 60.1 |
MVSEC | HyperE2VID | HyperE2VID: Improving Event-Based Video Reconstruction via Hypernetworks | 2023-05-10T00:00:00 | https://arxiv.org/abs/2305.06382v2 | [
"https://github.com/ercanburak/HyperE2VID"
] | In the paper 'HyperE2VID: Improving Event-Based Video Reconstruction via Hypernetworks', what Mean Squared Error score did the HyperE2VID model get on the MVSEC dataset
| 0.076 |
SAMSum | InstructDS | Instructive Dialogue Summarization with Query Aggregations | 2023-10-17T00:00:00 | https://arxiv.org/abs/2310.10981v3 | [
"https://github.com/BinWang28/InstructDS"
] | In the paper 'Instructive Dialogue Summarization with Query Aggregations', what ROUGE-1 score did the InstructDS model get on the SAMSum dataset
| 55.3 |
Charades | MSQNet | Actor-agnostic Multi-label Action Recognition with Multi-modal Query | 2023-07-20T00:00:00 | https://arxiv.org/abs/2307.10763v3 | [
"https://github.com/mondalanindya/msqnet"
] | In the paper 'Actor-agnostic Multi-label Action Recognition with Multi-modal Query', what MAP score did the MSQNet model get on the Charades dataset
| 47.57 |
ISTD+ | RASM (WRONG COMPARISON) | Regional Attention for Shadow Removal | 2024-11-21T00:00:00 | https://arxiv.org/abs/2411.14201v1 | [
"https://github.com/CalcuLuUus/RASM"
] | In the paper 'Regional Attention for Shadow Removal', what RMSE score did the RASM (WRONG COMPARISON) model get on the ISTD+ dataset
| 2.53 (WRONG COMPARISON) |
QM9 | PAMNet | A Universal Framework for Accurate and Efficient Geometric Deep Learning of Molecular Systems | 2023-11-19T00:00:00 | https://arxiv.org/abs/2311.11228v1 | [
"https://github.com/XieResearchGroup/Physics-aware-Multiplex-GNN"
] | In the paper 'A Universal Framework for Accurate and Efficient Geometric Deep Learning of Molecular Systems', what Error ratio score did the PAMNet model get on the QM9 dataset
| 0.363 |
PASCAL-5i (1-Shot) | MIANet (VGG-16) | MIANet: Aggregating Unbiased Instance and General Information for Few-Shot Semantic Segmentation | 2023-05-23T00:00:00 | https://arxiv.org/abs/2305.13864v1 | [
"https://github.com/aldrich2y/mianet"
] | In the paper 'MIANet: Aggregating Unbiased Instance and General Information for Few-Shot Semantic Segmentation', what Mean IoU score did the MIANet (VGG-16) model get on the PASCAL-5i (1-Shot) dataset
| 67.10 |
PascalVOC-20 | FC-CLIP | Convolutions Die Hard: Open-Vocabulary Segmentation with Single Frozen Convolutional CLIP | 2023-08-04T00:00:00 | https://arxiv.org/abs/2308.02487v2 | [
"https://github.com/bytedance/fc-clip"
] | In the paper 'Convolutions Die Hard: Open-Vocabulary Segmentation with Single Frozen Convolutional CLIP', what mIoU score did the FC-CLIP model get on the PascalVOC-20 dataset
| 95.4 |
ImageNet 64x64 | 2-rectified flow++ (NFE=1) | Improving the Training of Rectified Flows | 2024-05-30T00:00:00 | https://arxiv.org/abs/2405.20320v2 | [
"https://github.com/sangyun884/rfpp"
] | In the paper 'Improving the Training of Rectified Flows', what FID score did the 2-rectified flow++ (NFE=1) model get on the ImageNet 64x64 dataset
| 4.31 |
SMAC corridor_2z_vs_24zg | QMIX | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04T00:00:00 | https://arxiv.org/abs/2306.02430v1 | [
"https://github.com/j3soon/dfac-extended"
] | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Average Score did the QMIX model get on the SMAC corridor_2z_vs_24zg dataset
| 4.80 |
VidHOI | ST-GAZE | Human-Object Interaction Prediction in Videos through Gaze Following | 2023-06-06T00:00:00 | https://arxiv.org/abs/2306.03597v1 | [
"https://github.com/nizhf/hoi-prediction-gaze-transformer"
] | In the paper 'Human-Object Interaction Prediction in Videos through Gaze Following', what Oracle: Full (mAP@0.5) score did the ST-GAZE model get on the VidHOI dataset
| 38.61 |
EQ-Bench | meta-llama/Llama-2-7b-chat-hf | EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06281v2 | [
"https://github.com/eq-bench/eq-bench"
] | In the paper 'EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models', what EQ-Bench Score score did the meta-llama/Llama-2-7b-chat-hf model get on the EQ-Bench dataset
| 25.43 |
SMAC MMM2 | DPLEX | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04T00:00:00 | https://arxiv.org/abs/2306.02430v1 | [
"https://github.com/j3soon/dfac-extended"
] | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the DPLEX model get on the SMAC MMM2 dataset
| 96.88 |
EQ-Bench | OpenAI text-davinci-002 | EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06281v2 | [
"https://github.com/eq-bench/eq-bench"
] | In the paper 'EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models', what EQ-Bench Score score did the OpenAI text-davinci-002 model get on the EQ-Bench dataset
| 39.44 |
IIIT5k | CLIP4STR-L | CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model | 2023-05-23T00:00:00 | https://arxiv.org/abs/2305.14014v3 | [
"https://github.com/VamosC/CLIP4STR"
] | In the paper 'CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model', what Accuracy score did the CLIP4STR-L model get on the IIIT5k dataset
| 99.5 |
DTD | HPT | Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06323v1 | [
"https://github.com/vill-lab/2024-aaai-hpt"
] | In the paper 'Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models', what Harmonic mean score did the HPT model get on the DTD dataset
| 72.16 |
COCO-Stuff | OTSeg | OTSeg: Multi-prompt Sinkhorn Attention for Zero-Shot Semantic Segmentation | 2024-03-21T00:00:00 | https://arxiv.org/abs/2403.14183v2 | [
"https://github.com/cubeyoung/OTSeg"
] | In the paper 'OTSeg: Multi-prompt Sinkhorn Attention for Zero-Shot Semantic Segmentation', what Transductive Setting hIoU score did the OTSeg model get on the COCO-Stuff dataset
| 49.5 |
Human3.6M | ARTS (Resnet50 L=16) | ARTS: Semi-Analytical Regressor using Disentangled Skeletal Representations for Human Mesh Recovery from Videos | 2024-10-21T00:00:00 | https://arxiv.org/abs/2410.15582v1 | [
"https://github.com/tangtao-pku/arts"
] | In the paper 'ARTS: Semi-Analytical Regressor using Disentangled Skeletal Representations for Human Mesh Recovery from Videos', what Average MPJPE (mm) score did the ARTS (Resnet50 L=16) model get on the Human3.6M dataset
| 51.6 |
MSR-VTT | VideoAssembler (Zero-Shot, 256x256, class-conditional) | MagDiff: Multi-Alignment Diffusion for High-Fidelity Video Generation and Editing | 2023-11-29T00:00:00 | https://arxiv.org/abs/2311.17338v3 | [
"https://github.com/gulucaptain/videoassembler"
] | In the paper 'MagDiff: Multi-Alignment Diffusion for High-Fidelity Video Generation and Editing', what Inception score score did the VideoAssembler (Zero-Shot, 256x256, class-conditional) model get on the MSR-VTT dataset
| 15.79 |
Abt-Buy | Meta-Llama-3.1-8B-Instruct | Fine-tuning Large Language Models for Entity Matching | 2024-09-12T00:00:00 | https://arxiv.org/abs/2409.08185v1 | [
"https://github.com/wbsg-uni-mannheim/tailormatch"
] | In the paper 'Fine-tuning Large Language Models for Entity Matching', what F1 (%) score did the Meta-Llama-3.1-8B-Instruct model get on the Abt-Buy dataset
| 56.57 |
MVTec LOCO AD | SINBAD Ens | Set Features for Anomaly Detection | 2023-11-24T00:00:00 | https://arxiv.org/abs/2311.14773v3 | [
"https://github.com/NivC/SINBAD"
] | In the paper 'Set Features for Anomaly Detection', what Avg. Detection AUROC score did the SINBAD Ens model get on the MVTec LOCO AD dataset
| 88.3 |
THUMOS 2014 | HR-Pro | HR-Pro: Point-supervised Temporal Action Localization via Hierarchical Reliability Propagation | 2023-08-24T00:00:00 | https://arxiv.org/abs/2308.12608v3 | [
"https://github.com/pipixin321/hr-pro"
] | In the paper 'HR-Pro: Point-supervised Temporal Action Localization via Hierarchical Reliability Propagation', what mAP@0.5 score did the HR-Pro model get on the THUMOS 2014 dataset
| 52.2 |
MOT17 | SFSORT | SFSORT: Scene Features-based Simple Online Real-Time Tracker | 2024-04-11T00:00:00 | https://arxiv.org/abs/2404.07553v1 | [
"https://github.com/gitmehrdad/sfsort"
] | In the paper 'SFSORT: Scene Features-based Simple Online Real-Time Tracker', what MOTA score did the SFSORT model get on the MOT17 dataset
| 78.8 |
Synapse multi-organ CT | ParaTransCNN | ParaTransCNN: Parallelized TransCNN Encoder for Medical Image Segmentation | 2024-01-27T00:00:00 | https://arxiv.org/abs/2401.15307v1 | [
"https://github.com/hongkunsun/paratranscnn"
] | In the paper 'ParaTransCNN: Parallelized TransCNN Encoder for Medical Image Segmentation', what Avg DSC score did the ParaTransCNN model get on the Synapse multi-organ CT dataset
| 83.86 |
ICDAR2013 | CLIP4STR-L (DataComp-1B) | CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model | 2023-05-23T00:00:00 | https://arxiv.org/abs/2305.14014v3 | [
"https://github.com/VamosC/CLIP4STR"
] | In the paper 'CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model', what Accuracy score did the CLIP4STR-L (DataComp-1B) model get on the ICDAR2013 dataset
| 99.0 |
RealBlur-R | ID-Blau (Restormer) | ID-Blau: Image Deblurring by Implicit Diffusion-based reBLurring AUgmentation | 2023-12-18T00:00:00 | https://arxiv.org/abs/2312.10998v2 | [
"https://github.com/plusgood-steven/id-blau"
] | In the paper 'ID-Blau: Image Deblurring by Implicit Diffusion-based reBLurring AUgmentation', what PSNR (sRGB) score did the ID-Blau (Restormer) model get on the RealBlur-R dataset
| 40.31 |
HInt: Hand Interactions in the wild | HaMeR | Reconstructing Hands in 3D with Transformers | 2023-12-08T00:00:00 | https://arxiv.org/abs/2312.05251v1 | [
"https://github.com/geopavlakos/hamer"
] | In the paper 'Reconstructing Hands in 3D with Transformers', what PCK@0.05 (New Days) All score did the HaMeR model get on the HInt: Hand Interactions in the wild dataset
| 48.0 |
SMAC MMM2_7m2M1M_vs_8m4M1M | QPLEX | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04T00:00:00 | https://arxiv.org/abs/2306.02430v1 | [
"https://github.com/j3soon/dfac-extended"
] | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the QPLEX model get on the SMAC MMM2_7m2M1M_vs_8m4M1M dataset
| 46.88 |
NYU Depth v2 | DPLNet | Efficient Multimodal Semantic Segmentation via Dual-Prompt Learning | 2023-12-01T00:00:00 | https://arxiv.org/abs/2312.00360v2 | [
"https://github.com/shaohuadong2021/dplnet"
] | In the paper 'Efficient Multimodal Semantic Segmentation via Dual-Prompt Learning', what Mean IoU score did the DPLNet model get on the NYU Depth v2 dataset
| 59.3 |
Winoground | BLIP (ITM) | Revisiting the Role of Language Priors in Vision-Language Models | 2023-06-02T00:00:00 | https://arxiv.org/abs/2306.01879v4 | [
"https://github.com/linzhiqiu/visual_gpt_score"
] | In the paper 'Revisiting the Role of Language Priors in Vision-Language Models', what Text Score score did the BLIP (ITM) model get on the Winoground dataset
| 35.8 |
Panoptic SYNTHIA-to-Cityscapes | MC-PanDA | MC-PanDA: Mask Confidence for Panoptic Domain Adaptation | 2024-07-19T00:00:00 | https://arxiv.org/abs/2407.14110v1 | [
"https://github.com/helen1c/mc-panda"
] | In the paper 'MC-PanDA: Mask Confidence for Panoptic Domain Adaptation', what mPQ score did the MC-PanDA model get on the Panoptic SYNTHIA-to-Cityscapes dataset
| 47.4 |
Atari 2600 Boxing | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07T00:00:00 | https://arxiv.org/abs/2305.04180v3 | [
"https://github.com/xinjinghao/color"
] | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score score did the ASL DDQN model get on the Atari 2600 Boxing dataset
| 99.6 |