| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
Astock | SDPG&Factors | FinReport: Explainable Stock Earnings Forecasting via News Factor Analyzing Model | 2024-03-05T00:00:00 | https://arxiv.org/abs/2403.02647v1 | [
"https://github.com/frinkleko/finreport"
] | In the paper 'FinReport: Explainable Stock Earnings Forecasting via News Factor Analyzing Model', what Accuracy score did the SDPG&Factors model get on the Astock dataset
| 73.12 |
PECC | Phi-3-mini-128k-instruct | PECC: Problem Extraction and Coding Challenges | 2024-04-29T00:00:00 | https://arxiv.org/abs/2404.18766v1 | [
"https://github.com/hallerpatrick/pecc"
] | In the paper 'PECC: Problem Extraction and Coding Challenges', what Pass@3 score did the Phi-3-mini-128k-instruct model get on the PECC dataset
| 7.18 |
PeMSD7 | STAEformer | STAEformer: Spatio-Temporal Adaptive Embedding Makes Vanilla Transformer SOTA for Traffic Forecasting | 2023-08-21T00:00:00 | https://arxiv.org/abs/2308.10425v5 | [
"https://github.com/xdzhelheim/staeformer"
] | In the paper 'STAEformer: Spatio-Temporal Adaptive Embedding Makes Vanilla Transformer SOTA for Traffic Forecasting', what 12 steps MAE score did the STAEformer model get on the PeMSD7 dataset
| 19.14 |
Balanced Audio Set | EAT | EAT: Self-Supervised Pre-Training with Efficient Audio Transformer | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03497v1 | [
"https://github.com/cwx-worst-one/eat"
] | In the paper 'EAT: Self-Supervised Pre-Training with Efficient Audio Transformer', what Mean AP score did the EAT model get on the Balanced Audio Set dataset
| 40.3 |
ODinW-13 | MQ-GLIP-T | Multi-modal Queried Object Detection in the Wild | 2023-05-30T00:00:00 | https://arxiv.org/abs/2305.18980v2 | [
"https://github.com/yifanxu74/mq-det"
] | In the paper 'Multi-modal Queried Object Detection in the Wild', what Average Score score did the MQ-GLIP-T model get on the ODinW-13 dataset
| 57 |
EventPed | MMPedestron | When Pedestrian Detection Meets Multi-Modal Learning: Generalist Model and Benchmark Dataset | 2024-07-14T00:00:00 | https://arxiv.org/abs/2407.10125v1 | [
"https://github.com/BubblyYi/MMPedestron"
] | In the paper 'When Pedestrian Detection Meets Multi-Modal Learning: Generalist Model and Benchmark Dataset', what AP score did the MMPedestron model get on the EventPed dataset
| 79.0 |
Places2 | ASUKA | Towards Context-Stable and Visual-Consistent Image Inpainting | 2023-12-08T00:00:00 | https://arxiv.org/abs/2312.04831v2 | [
"https://github.com/yikai-wang/asuka-misato"
] | In the paper 'Towards Context-Stable and Visual-Consistent Image Inpainting', what FID score did the ASUKA model get on the Places2 dataset
| 1.230 |
Vinoground | LLaVA-OneVision-Qwen2-72B | LLaVA-OneVision: Easy Visual Task Transfer | 2024-08-06T00:00:00 | https://arxiv.org/abs/2408.03326v3 | [
"https://github.com/evolvinglmms-lab/lmms-eval"
] | In the paper 'LLaVA-OneVision: Easy Visual Task Transfer', what Text Score score did the LLaVA-OneVision-Qwen2-72B model get on the Vinoground dataset
| 48.4 |
Lyft Level 5 | PointBeV (EfficientNet-b4) | PointBeV: A Sparse Approach to BeV Predictions | 2023-12-01T00:00:00 | https://arxiv.org/abs/2312.00703v2 | [
"https://github.com/valeoai/pointbev"
] | In the paper 'PointBeV: A Sparse Approach to BeV Predictions', what IoU vehicle - 224x480 - Long score did the PointBeV (EfficientNet-b4) model get on the Lyft Level 5 dataset
| 45.4 |
DAVIS 2017 (test-dev) | Cutie+ (base) | Putting the Object Back into Video Object Segmentation | 2023-10-19T00:00:00 | https://arxiv.org/abs/2310.12982v2 | [
"https://github.com/hkchengrex/Cutie"
] | In the paper 'Putting the Object Back into Video Object Segmentation', what J&F score did the Cutie+ (base) model get on the DAVIS 2017 (test-dev) dataset
| 85.9 |
Atari 2600 Montezuma's Revenge | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07T00:00:00 | https://arxiv.org/abs/2305.04180v3 | [
"https://github.com/xinjinghao/color"
] | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score score did the ASL DDQN model get on the Atari 2600 Montezuma's Revenge dataset
| 0 |
CelebAMask-HQ | SCDM | Stochastic Conditional Diffusion Models for Robust Semantic Image Synthesis | 2024-02-26T00:00:00 | https://arxiv.org/abs/2402.16506v3 | [
"https://github.com/mlvlab/scdm"
] | In the paper 'Stochastic Conditional Diffusion Models for Robust Semantic Image Synthesis', what FID score did the SCDM model get on the CelebAMask-HQ dataset
| 17.4 |
Amazon-Book | NESCL | Neighborhood-Enhanced Supervised Contrastive Learning for Collaborative Filtering | 2024-02-18T00:00:00 | https://arxiv.org/abs/2402.11523v1 | [
"https://github.com/PeiJieSun/NESCL"
] | In the paper 'Neighborhood-Enhanced Supervised Contrastive Learning for Collaborative Filtering', what Recall@20 score did the NESCL model get on the Amazon-Book dataset
| 0.0624 |
Office-31 | MLNet | MLNet: Mutual Learning Network with Neighborhood Invariance for Universal Domain Adaptation | 2023-12-13T00:00:00 | https://arxiv.org/abs/2312.07871v4 | [
"https://github.com/YanzuoLu/MLNet"
] | In the paper 'MLNet: Mutual Learning Network with Neighborhood Invariance for Universal Domain Adaptation', what H-score score did the MLNet model get on the Office-31 dataset
| 92.8 |
MedQA | LLAMA-2 (70B SC CoT) | MEDITRON-70B: Scaling Medical Pretraining for Large Language Models | 2023-11-27T00:00:00 | https://arxiv.org/abs/2311.16079v1 | [
"https://github.com/epfllm/meditron"
] | In the paper 'MEDITRON-70B: Scaling Medical Pretraining for Large Language Models', what Accuracy score did the LLAMA-2 (70B SC CoT) model get on the MedQA dataset
| 61.5 |
MSCOCO | RALF | Retrieval-Augmented Open-Vocabulary Object Detection | 2024-04-08T00:00:00 | https://arxiv.org/abs/2404.05687v1 | [
"https://github.com/mlvlab/RALF"
] | In the paper 'Retrieval-Augmented Open-Vocabulary Object Detection', what AP 0.5 score did the RALF model get on the MSCOCO dataset
| 41.3 |
IndustReal | YoloV8 | IndustReal: A Dataset for Procedure Step Recognition Handling Execution Errors in Egocentric Videos in an Industrial-Like Setting | 2023-10-26T00:00:00 | https://arxiv.org/abs/2310.17323v1 | [
"https://github.com/timschoonbeek/industreal"
] | In the paper 'IndustReal: A Dataset for Procedure Step Recognition Handling Execution Errors in Egocentric Videos in an Industrial-Like Setting', what mAP score did the YoloV8 model get on the IndustReal dataset
| 64.1 |
Youtube-VIS 2022 Validation | CAVIS (VIT-L) | Context-Aware Video Instance Segmentation | 2024-07-03T00:00:00 | https://arxiv.org/abs/2407.03010v1 | [
"https://github.com/Seung-Hun-Lee/CAVIS"
] | In the paper 'Context-Aware Video Instance Segmentation', what mAP_L score did the CAVIS (VIT-L) model get on the Youtube-VIS 2022 Validation dataset
| 48.6 |
MVTec AD | ReConPatch WRN-50 (+RefineNet) | ReConPatch : Contrastive Patch Representation Learning for Industrial Anomaly Detection | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.16713v3 | [
"https://github.com/travishsu/ReConPatch-TF"
] | In the paper 'ReConPatch : Contrastive Patch Representation Learning for Industrial Anomaly Detection', what Detection AUROC score did the ReConPatch WRN-50 (+RefineNet) model get on the MVTec AD dataset
| 99.71 |
EQ-Bench | OpenAI ADA | EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06281v2 | [
"https://github.com/eq-bench/eq-bench"
] | In the paper 'EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models', what EQ-Bench Score score did the OpenAI ADA model get on the EQ-Bench dataset
| 2.25 |
MM-Vet | InternVL2-26B (SGP, token ratio 9%) | A Stitch in Time Saves Nine: Small VLM is a Precise Guidance for Accelerating Large VLMs | 2024-12-04T00:00:00 | https://arxiv.org/abs/2412.03324v2 | [
"https://github.com/NUS-HPC-AI-Lab/SGL"
] | In the paper 'A Stitch in Time Saves Nine: Small VLM is a Precise Guidance for Accelerating Large VLMs', what GPT-4 score score did the InternVL2-26B (SGP, token ratio 9%) model get on the MM-Vet dataset
| 52.10 |
DWIE | REXEL | REXEL: An End-to-end Model for Document-Level Relation Extraction and Entity Linking | 2024-04-19T00:00:00 | https://arxiv.org/abs/2404.12788v1 | [
"https://github.com/amazon-science/e2e-docie"
] | In the paper 'REXEL: An End-to-end Model for Document-Level Relation Extraction and Entity Linking', what F1-Hard score did the REXEL model get on the DWIE dataset
| 65.8 |
GSM8K | DART-Math-Mistral-7B-Prop2Diff (0-shot CoT, w/o code) | DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving | 2024-06-18T00:00:00 | https://arxiv.org/abs/2407.13690v1 | [
"https://github.com/hkust-nlp/dart-math"
] | In the paper 'DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving', what Accuracy score did the DART-Math-Mistral-7B-Prop2Diff (0-shot CoT, w/o code) model get on the GSM8K dataset
| 81.1 |
HO-3D v2 | HaMeR | Reconstructing Hands in 3D with Transformers | 2023-12-08T00:00:00 | https://arxiv.org/abs/2312.05251v1 | [
"https://github.com/geopavlakos/hamer"
] | In the paper 'Reconstructing Hands in 3D with Transformers', what PA-MPJPE (mm) score did the HaMeR model get on the HO-3D v2 dataset
| 7.7 |
VibraVox (soft in-ear microphone) | ECAPA2 | Vibravox: A Dataset of French Speech Captured with Body-conduction Audio Sensors | 2024-07-16T00:00:00 | https://arxiv.org/abs/2407.11828v2 | [
"https://github.com/jhauret/vibravox"
] | In the paper 'Vibravox: A Dataset of French Speech Captured with Body-conduction Audio Sensors', what Test EER score did the ECAPA2 model get on the VibraVox (soft in-ear microphone) dataset
| 0.0172 |
cb | OPT-125M | Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization | 2024-05-24T00:00:00 | https://arxiv.org/abs/2405.15861v3 | [
"https://github.com/ZidongLiu/DeComFL"
] | In the paper 'Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization', what Test Accuracy score did the OPT-125M model get on the cb dataset
| 75% |
MSRVTT-QA | All-in-one+ | Open-vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09363v1 | [
"https://github.com/mlvlab/ovqa"
] | In the paper 'Open-vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models', what Accuracy score did the All-in-one+ model get on the MSRVTT-QA dataset
| 0.395 |
PIQA | PaLM 2-L (1-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | [
"https://github.com/eternityyw/tram-benchmark"
] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-L (1-shot) model get on the PIQA dataset
| 85.0 |
Cora | CDNMF | Contrastive Deep Nonnegative Matrix Factorization for Community Detection | 2023-11-04T00:00:00 | https://arxiv.org/abs/2311.02357v2 | [
"https://github.com/6lyc/cdnmf"
] | In the paper 'Contrastive Deep Nonnegative Matrix Factorization for Community Detection', what NMI score did the CDNMF model get on the Cora dataset
| 0.4006 |
MedNLI | BiomedGPT-B | BiomedGPT: A Generalist Vision-Language Foundation Model for Diverse Biomedical Tasks | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.17100v4 | [
"https://github.com/taokz/biomedgpt"
] | In the paper 'BiomedGPT: A Generalist Vision-Language Foundation Model for Diverse Biomedical Tasks', what Accuracy score did the BiomedGPT-B model get on the MedNLI dataset
| 83.83 |
SQuAD | GOLD (T5-base) | GOLD: Generalized Knowledge Distillation via Out-of-Distribution-Guided Language Data Generation | 2024-03-28T00:00:00 | https://arxiv.org/abs/2403.19754v1 | [
"https://github.com/mgholamikn/GOLD"
] | In the paper 'GOLD: Generalized Knowledge Distillation via Out-of-Distribution-Guided Language Data Generation', what Exact Match score did the GOLD (T5-base) model get on the SQuAD dataset
| 75.2 |
NASA C-MAPSS | SRLA | Hierarchical Framework for Interpretable and Probabilistic Model-Based Safe Reinforcement Learning | 2023-10-28T00:00:00 | https://arxiv.org/abs/2310.18811v1 | [
"https://github.com/ammar-n-abbas/Predictive-Maintenance-BC-IOHMM-DRL"
] | In the paper 'Hierarchical Framework for Interpretable and Probabilistic Model-Based Safe Reinforcement Learning', what Average Remaining Cycles score did the SRLA model get on the NASA C-MAPSS dataset
| 6.4 |
NYU Depth v2 | GeminiFusion (MiT-B3) | GeminiFusion: Efficient Pixel-wise Multimodal Fusion for Vision Transformer | 2024-06-03T00:00:00 | https://arxiv.org/abs/2406.01210v2 | [
"https://github.com/jiadingcn/geminifusion"
] | In the paper 'GeminiFusion: Efficient Pixel-wise Multimodal Fusion for Vision Transformer', what Mean IoU score did the GeminiFusion (MiT-B3) model get on the NYU Depth v2 dataset
| 56.8 |
DocRED-IE | REXEL | REXEL: An End-to-end Model for Document-Level Relation Extraction and Entity Linking | 2024-04-19T00:00:00 | https://arxiv.org/abs/2404.12788v1 | [
"https://github.com/amazon-science/e2e-docie"
] | In the paper 'REXEL: An End-to-end Model for Document-Level Relation Extraction and Entity Linking', what Relation F1 score did the REXEL model get on the DocRED-IE dataset
| 60.10 |
MSU SR-QA Dataset | TOPIQ
trained on PIPAL | TOPIQ: A Top-down Approach from Semantics to Distortions for Image Quality Assessment | 2023-08-06T00:00:00 | https://arxiv.org/abs/2308.03060v1 | [
"https://github.com/chaofengc/iqa-pytorch"
] | In the paper 'TOPIQ: A Top-down Approach from Semantics to Distortions for Image Quality Assessment', what SROCC score did the TOPIQ
trained on PIPAL model get on the MSU SR-QA Dataset dataset
| 0.55568 |
EQ-Bench | OpenAI gpt-4-0613 | EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06281v2 | [
"https://github.com/eq-bench/eq-bench"
] | In the paper 'EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models', what EQ-Bench Score score did the OpenAI gpt-4-0613 model get on the EQ-Bench dataset
| 62.52 |
ENTIRe-ID | TransReID (Strong Baseline) | ENTIRe-ID: An Extensive and Diverse Dataset for Person Re-Identification | 2024-05-30T00:00:00 | https://arxiv.org/abs/2405.20465v1 | [
"https://github.com/serdaryildiz/ENTIRe-ID"
] | In the paper 'ENTIRe-ID: An Extensive and Diverse Dataset for Person Re-Identification', what mAP score did the TransReID (Strong Baseline) model get on the ENTIRe-ID dataset
| 38.4 |
Query-Focused Video Summarization Dataset | EgoVLPv2 | EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone | 2023-07-11T00:00:00 | https://arxiv.org/abs/2307.05463v2 | [
"https://github.com/facebookresearch/EgoVLPv2"
] | In the paper 'EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone', what F1 (avg) score did the EgoVLPv2 model get on the Query-Focused Video Summarization Dataset dataset
| 52.08 |
RSITMD | HarMA (w/ GeoRSCLIP) | Efficient Remote Sensing with Harmonized Transfer Learning and Modality Alignment | 2024-04-28T00:00:00 | https://arxiv.org/abs/2404.18253v5 | [
"https://github.com/seekerhuang/harma"
] | In the paper 'Efficient Remote Sensing with Harmonized Transfer Learning and Modality Alignment', what Mean Recall score did the HarMA (w/ GeoRSCLIP) model get on the RSITMD dataset
| 52.27% |
ZINC | Graph-JEPA | Graph-level Representation Learning with Joint-Embedding Predictive Architectures | 2023-09-27T00:00:00 | https://arxiv.org/abs/2309.16014v2 | [
"https://github.com/geriskenderi/graph-jepa"
] | In the paper 'Graph-level Representation Learning with Joint-Embedding Predictive Architectures', what MAE score did the Graph-JEPA model get on the ZINC dataset
| 0.434 |
Human3.6M | 3D-LFM | 3D-LFM: Lifting Foundation Model | 2023-12-19T00:00:00 | https://arxiv.org/abs/2312.11894v2 | [
"https://github.com/mosamdabhi/3dlfm"
] | In the paper '3D-LFM: Lifting Foundation Model', what Average MPJPE (mm) score did the 3D-LFM model get on the Human3.6M dataset
| 31.89 |
UCF-101 | Video-LaVIT | Video-LaVIT: Unified Video-Language Pre-training with Decoupled Visual-Motional Tokenization | 2024-02-05T00:00:00 | https://arxiv.org/abs/2402.03161v3 | [
"https://github.com/jy0205/lavit"
] | In the paper 'Video-LaVIT: Unified Video-Language Pre-training with Decoupled Visual-Motional Tokenization', what Inception Score score did the Video-LaVIT model get on the UCF-101 dataset
| 44.26 |
CUFED5 - 4x upscaling | Extracter-rec | EXTRACTER: Efficient Texture Matching with Attention and Gradient Enhancing for Large Scale Image Super Resolution | 2023-10-02T00:00:00 | https://arxiv.org/abs/2310.01379v1 | [
"https://github.com/esteban-rs/extracter"
] | In the paper 'EXTRACTER: Efficient Texture Matching with Attention and Gradient Enhancing for Large Scale Image Super Resolution', what PSNR score did the Extracter-rec model get on the CUFED5 - 4x upscaling dataset
| 27.29 |
Winoground | MiniGPT-4-7B (VisualGPTScore) | An Examination of the Compositionality of Large Generative Vision-Language Models | 2023-08-21T00:00:00 | https://arxiv.org/abs/2308.10509v2 | [
"https://github.com/teleema/sade"
] | In the paper 'An Examination of the Compositionality of Large Generative Vision-Language Models', what Text Score score did the MiniGPT-4-7B (VisualGPTScore) model get on the Winoground dataset
| 23.25 |
S3DIS | Superpoint Transformer | Efficient 3D Semantic Segmentation with Superpoint Transformer | 2023-06-13T00:00:00 | https://arxiv.org/abs/2306.08045v2 | [
"https://github.com/drprojects/superpoint_transformer"
] | In the paper 'Efficient 3D Semantic Segmentation with Superpoint Transformer', what mIoU (6-Fold) score did the Superpoint Transformer model get on the S3DIS dataset
| 76.0 |
Unpaired-abdomen-CT | CLIP+SVD+ResNet | Spatially Covariant Image Registration with Text Prompts | 2023-11-27T00:00:00 | https://arxiv.org/abs/2311.15607v2 | [
"https://github.com/tinymilky/NeRD"
] | In the paper 'Spatially Covariant Image Registration with Text Prompts', what DSC score did the CLIP+SVD+ResNet model get on the Unpaired-abdomen-CT dataset
| 0.6001 |
RSTPReid | RDE | Noisy-Correspondence Learning for Text-to-Image Person Re-identification | 2023-08-19T00:00:00 | https://arxiv.org/abs/2308.09911v3 | [
"https://github.com/QinYang79/RDE"
] | In the paper 'Noisy-Correspondence Learning for Text-to-Image Person Re-identification', what R@1 score did the RDE model get on the RSTPReid dataset
| 65.35 |
ColonINST-v1 (Seen) | LLaVA-Med-v1.5
(w/ LoRA, w/o extra data) | LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day | 2023-06-01T00:00:00 | https://arxiv.org/abs/2306.00890v1 | [
"https://github.com/microsoft/LLaVA-Med"
] | In the paper 'LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day', what Accuracy score did the LLaVA-Med-v1.5
(w/ LoRA, w/o extra data) model get on the ColonINST-v1 (Seen) dataset
| 99.3 |
MM-Vet | InternVL2.5-38B | Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling | 2024-12-06T00:00:00 | https://arxiv.org/abs/2412.05271v1 | [
"https://github.com/opengvlab/internvl"
] | In the paper 'Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling', what GPT-4 score score did the InternVL2.5-38B model get on the MM-Vet dataset
| 68.8 |
ETTm1 (96) Multivariate | SCNN | Disentangling Structured Components: Towards Adaptive, Interpretable and Scalable Time Series Forecasting | 2023-05-22T00:00:00 | https://arxiv.org/abs/2305.13036v3 | [
"https://github.com/JLDeng/SCNN"
] | In the paper 'Disentangling Structured Components: Towards Adaptive, Interpretable and Scalable Time Series Forecasting', what MSE score did the SCNN model get on the ETTm1 (96) Multivariate dataset
| 0.287 |
Weather (336) | RLinear | Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping | 2023-05-18T00:00:00 | https://arxiv.org/abs/2305.10721v1 | [
"https://github.com/plumprc/rtsf"
] | In the paper 'Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping', what MSE score did the RLinear model get on the Weather (336) dataset
| 0.265 |
DND | DualDn | DualDn: Dual-domain Denoising via Differentiable ISP | 2024-09-27T00:00:00 | https://arxiv.org/abs/2409.18783v2 | [
"https://github.com/OpenImagingLab/DualDn"
] | In the paper 'DualDn: Dual-domain Denoising via Differentiable ISP', what PSNR (sRGB) score did the DualDn model get on the DND dataset
| 40.594 |
COIN | UnLoc-L | UnLoc: A Unified Framework for Video Localization Tasks | 2023-08-21T00:00:00 | https://arxiv.org/abs/2308.11062v1 | [
"https://github.com/google-research/scenic"
] | In the paper 'UnLoc: A Unified Framework for Video Localization Tasks', what Frame accuracy score did the UnLoc-L model get on the COIN dataset
| 72.8 |
Tanks and Temples | Self-Organizing Gaussians | Compact 3D Scene Representation via Self-Organizing Gaussian Grids | 2023-12-19T00:00:00 | https://arxiv.org/abs/2312.13299v2 | [
"https://github.com/fraunhoferhhi/Self-Organizing-Gaussians"
] | In the paper 'Compact 3D Scene Representation via Self-Organizing Gaussian Grids', what PSNR score did the Self-Organizing Gaussians model get on the Tanks and Temples dataset
| 25.63 |
Charades-STA | Mr. BLIP | The Surprising Effectiveness of Multimodal Large Language Models for Video Moment Retrieval | 2024-06-26T00:00:00 | https://arxiv.org/abs/2406.18113v3 | [
"https://github.com/sudo-Boris/mr-Blip"
] | In the paper 'The Surprising Effectiveness of Multimodal Large Language Models for Video Moment Retrieval', what R@1 IoU=0.5 score did the Mr. BLIP model get on the Charades-STA dataset
| 69.31 |
ARMBench | RISE (R50) | Robot Instance Segmentation with Few Annotations for Grasping | 2024-07-01T00:00:00 | https://arxiv.org/abs/2407.01302v1 | [
"https://github.com/mkimhi/RISE"
] | In the paper 'Robot Instance Segmentation with Few Annotations for Grasping', what AP50 score did the RISE (R50) model get on the ARMBench dataset
| 83.53 |
S2RDA-49 | PGA | Enhancing Domain Adaptation through Prompt Gradient Alignment | 2024-06-13T00:00:00 | https://arxiv.org/abs/2406.09353v2 | [
"https://github.com/viethoang1512/pga"
] | In the paper 'Enhancing Domain Adaptation through Prompt Gradient Alignment', what Accuracy score did the PGA model get on the S2RDA-49 dataset
| 74.1 |
Telegram (Directed Graph label rate 60%) | ScaleNet | Scale Invariance of Graph Neural Networks | 2024-11-28T00:00:00 | https://arxiv.org/abs/2411.19392v2 | [
"https://github.com/qin87/scalenet"
] | In the paper 'Scale Invariance of Graph Neural Networks', what Accuracy score did the ScaleNet model get on the Telegram (Directed Graph label rate 60%) dataset
| 96.8±2.1 |
COCO minival | GLEE-Lite | General Object Foundation Model for Images and Videos at Scale | 2023-12-14T00:00:00 | https://arxiv.org/abs/2312.09158v1 | [
"https://github.com/FoundationVision/GLEE"
] | In the paper 'General Object Foundation Model for Images and Videos at Scale', what mask AP score did the GLEE-Lite model get on the COCO minival dataset
| 48.4 |
COCO-20i (5-shot) | QCLNet (ResNet-101) | Quaternion-valued Correlation Learning for Few-Shot Semantic Segmentation | 2023-05-12T00:00:00 | https://arxiv.org/abs/2305.07283v3 | [
"https://github.com/zwzheng98/qclnet"
] | In the paper 'Quaternion-valued Correlation Learning for Few-Shot Semantic Segmentation', what Mean IoU score did the QCLNet (ResNet-101) model get on the COCO-20i (5-shot) dataset
| 51.9 |
MVBench | VideoGPT+ | VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding | 2024-06-13T00:00:00 | https://arxiv.org/abs/2406.09418v1 | [
"https://github.com/mbzuai-oryx/videogpt-plus"
] | In the paper 'VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding', what Avg. score did the VideoGPT+ model get on the MVBench dataset
| 58.7 |
LeukemiaAttri | ConfMix [23] L_100x_C2 | A Large-scale Multi Domain Leukemia Dataset for the White Blood Cells Detection with Morphological Attributes for Explainability | 2024-05-17T00:00:00 | https://arxiv.org/abs/2405.10803v1 | [
"https://github.com/intelligentMachines-ITU/Blood-Cancer-Dataset-Lukemia-Attri-MICCAI-2024"
] | In the paper 'A Large-scale Multi Domain Leukemia Dataset for the White Blood Cells Detection with Morphological Attributes for Explainability', what mAP score did the ConfMix [23] L_100x_C2 model get on the LeukemiaAttri dataset
| 33.5 |
ImageNet | AIMv2-2B | Multimodal Autoregressive Pre-training of Large Vision Encoders | 2024-11-21T00:00:00 | https://arxiv.org/abs/2411.14402v1 | [
"https://github.com/apple/ml-aim"
] | In the paper 'Multimodal Autoregressive Pre-training of Large Vision Encoders', what Number of params score did the AIMv2-2B model get on the ImageNet dataset
| 2700M |
CelebA 64x64 | Blackout Diffusion | Blackout Diffusion: Generative Diffusion Models in Discrete-State Spaces | 2023-05-18T00:00:00 | https://arxiv.org/abs/2305.11089v1 | [
"https://github.com/lanl/blackout-diffusion"
] | In the paper 'Blackout Diffusion: Generative Diffusion Models in Discrete-State Spaces', what FID score did the Blackout Diffusion model get on the CelebA 64x64 dataset
| 3.22 |
UCF101 | EZ-CLIP | EZ-CLIP: Efficient Zeroshot Video Action Recognition | 2023-12-13T00:00:00 | https://arxiv.org/abs/2312.08010v2 | [
"https://github.com/shahzadnit/ez-clip"
] | In the paper 'EZ-CLIP: Efficient Zeroshot Video Action Recognition', what Top-1 Accuracy score did the EZ-CLIP model get on the UCF101 dataset
| 79.1 |
Cornell | DJ-GNN | Diffusion-Jump GNNs: Homophiliation via Learnable Metric Filters | 2023-06-29T00:00:00 | https://arxiv.org/abs/2306.16976v1 | [
"https://github.com/AhmedBegggaUA/TFM"
] | In the paper 'Diffusion-Jump GNNs: Homophiliation via Learnable Metric Filters', what Accuracy score did the DJ-GNN model get on the Cornell dataset
| 87.03±1.62 |
GoPro | ALGNet-B | Learning Enriched Features via Selective State Spaces Model for Efficient Image Deblurring | 2024-03-29T00:00:00 | https://arxiv.org/abs/2403.20106v2 | [
"https://github.com/Tombs98/ALGNet"
] | In the paper 'Learning Enriched Features via Selective State Spaces Model for Efficient Image Deblurring', what PSNR score did the ALGNet-B model get on the GoPro dataset
| 34.05 |
Shot2Story20K | SUM-shot | Shot2Story20K: A New Benchmark for Comprehensive Understanding of Multi-shot Videos | 2023-12-16T00:00:00 | https://arxiv.org/abs/2312.10300v2 | [
"https://github.com/bytedance/Shot2Story"
] | In the paper 'Shot2Story20K: A New Benchmark for Comprehensive Understanding of Multi-shot Videos', what CIDEr score did the SUM-shot model get on the Shot2Story20K dataset
| 8.6 |
Amazon Computers | GAT | Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification | 2024-06-13T00:00:00 | https://arxiv.org/abs/2406.08993v2 | [
"https://github.com/LUOyk1999/tunedGNN"
] | In the paper 'Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification', what Accuracy score did the GAT model get on the Amazon Computers dataset
| 94.09±0.37 |
ImageNet 32x32 | PaGoDA | PaGoDA: Progressive Growing of a One-Step Generator from a Low-Resolution Diffusion Teacher | 2024-05-23T00:00:00 | https://arxiv.org/abs/2405.14822v2 | [
"https://github.com/sony/pagoda"
] | In the paper 'PaGoDA: Progressive Growing of a One-Step Generator from a Low-Resolution Diffusion Teacher', what FID score did the PaGoDA model get on the ImageNet 32x32 dataset
| 0.79 |
PASCAL-5i (5-Shot) | QCLNet (ResNet-101) | Quaternion-valued Correlation Learning for Few-Shot Semantic Segmentation | 2023-05-12T00:00:00 | https://arxiv.org/abs/2305.07283v3 | [
"https://github.com/zwzheng98/qclnet"
] | In the paper 'Quaternion-valued Correlation Learning for Few-Shot Semantic Segmentation', what Mean IoU score did the QCLNet (ResNet-101) model get on the PASCAL-5i (5-Shot) dataset
| 71.2 |
SPOT-10 | ResNet101V2 Distiller | SPOTS-10: Animal Pattern Benchmark Dataset for Machine Learning Algorithms | 2024-10-28T00:00:00 | https://arxiv.org/abs/2410.21044v1 | [
"https://github.com/amotica/spots-10"
] | In the paper 'SPOTS-10: Animal Pattern Benchmark Dataset for Machine Learning Algorithms', what Accuracy score did the ResNet101V2 Distiller model get on the SPOT-10 dataset
| 80.29 |
SMAC MMM2_7m2M1M_vs_9m3M1M | DPLEX | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04T00:00:00 | https://arxiv.org/abs/2306.02430v1 | [
"https://github.com/j3soon/dfac-extended"
] | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the DPLEX model get on the SMAC MMM2_7m2M1M_vs_9m3M1M dataset
| 90.62 |
St Lucia | AnyLoc-VLAD-DINOv2 | AnyLoc: Towards Universal Visual Place Recognition | 2023-08-01T00:00:00 | https://arxiv.org/abs/2308.00688v2 | [
"https://github.com/AnyLoc/AnyLoc"
] | In the paper 'AnyLoc: Towards Universal Visual Place Recognition', what Recall@1 score did the AnyLoc-VLAD-DINOv2 model get on the St Lucia dataset
| 96.17 |
ImageNet | CoaT-Ti | Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09372v3 | [
"https://github.com/tobna/whattransformertofavor"
] | In the paper 'Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers', what Top 1 Accuracy score did the CoaT-Ti model get on the ImageNet dataset
| 78.42% |
ImageNet 64x64 | LEGO | Learning Stackable and Skippable LEGO Bricks for Efficient, Reconfigurable, and Variable-Resolution Diffusion Modeling | 2023-10-10T00:00:00 | https://arxiv.org/abs/2310.06389v3 | [
"https://github.com/JegZheng/LEGODiffusion"
] | In the paper 'Learning Stackable and Skippable LEGO Bricks for Efficient, Reconfigurable, and Variable-Resolution Diffusion Modeling', what Inception Score score did the LEGO model get on the ImageNet 64x64 dataset
| 78.7 |
ScanNet++ | PonderV2 + SparseUNet | PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm | 2023-10-12T00:00:00 | https://arxiv.org/abs/2310.08586v3 | [
"https://github.com/OpenGVLab/PonderV2"
] | In the paper 'PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm', what Top-1 IoU score did the PonderV2 + SparseUNet model get on the ScanNet++ dataset
| 0.386 |
RARE | Rematch | Rematch: Robust and Efficient Matching of Local Knowledge Graphs to Improve Structural and Semantic Similarity | 2024-04-02T00:00:00 | https://arxiv.org/abs/2404.02126v1 | [
"https://github.com/osome-iu/Rematch-RARE"
] | In the paper 'Rematch: Robust and Efficient Matching of Local Knowledge Graphs to Improve Structural and Semantic Similarity', what Spearman Correlation score did the Rematch model get on the RARE dataset
| 95.32 |
Split M-NIST | Model with negotiation paradigm | Negotiated Representations to Prevent Forgetting in Machine Learning Applications | 2023-11-30T00:00:00 | https://arxiv.org/abs/2312.00237v1 | [
"https://github.com/nurikorhan/negotiated-representations-for-continual-learning"
] | In the paper 'Negotiated Representations to Prevent Forgetting in Machine Learning Applications', what Percentage Average accuracy - 5 tasks score did the Model with negotiation paradigm model get on the Split M-NIST dataset
| 82.3 |
DAVIS 2017 (val) | VATEX | Vision-Aware Text Features in Referring Image Segmentation: From Object Understanding to Context Understanding | 2024-04-12T00:00:00 | https://arxiv.org/abs/2404.08590v2 | [
"https://github.com/nero1342/VATEX_RIS"
] | In the paper 'Vision-Aware Text Features in Referring Image Segmentation: From Object Understanding to Context Understanding', what J&F score score did the VATEX model get on the DAVIS 2017 (val) dataset
| 65.4 |
SPair-71k | GeoAware-SC + CleanDIFT (Zero-Shot) | CleanDIFT: Diffusion Features without Noise | 2024-12-04T00:00:00 | https://arxiv.org/abs/2412.03439v1 | [
"https://github.com/CompVis/cleandift"
] | In the paper 'CleanDIFT: Diffusion Features without Noise', what PCK score did the GeoAware-SC + CleanDIFT (Zero-Shot) model get on the SPair-71k dataset
| 70.0 |
RefCOCOg-test | VATEX | Vision-Aware Text Features in Referring Image Segmentation: From Object Understanding to Context Understanding | 2024-04-12T00:00:00 | https://arxiv.org/abs/2404.08590v2 | [
"https://github.com/nero1342/VATEX_RIS"
] | In the paper 'Vision-Aware Text Features in Referring Image Segmentation: From Object Understanding to Context Understanding', what mIoU score did the VATEX model get on the RefCOCOg-test dataset
| 70.58 |
3DPW | CycleAdapt (w/o 2D GT) | Cyclic Test-Time Adaptation on Monocular Video for 3D Human Mesh Reconstruction | 2023-08-12T00:00:00 | https://arxiv.org/abs/2308.06554v1 | [
"https://github.com/hygenie1228/cycleadapt_release"
] | In the paper 'Cyclic Test-Time Adaptation on Monocular Video for 3D Human Mesh Reconstruction', what PA-MPJPE score did the CycleAdapt (w/o 2D GT) model get on the 3DPW dataset
| 51.1 |
MixATIS | MISCA | MISCA: A Joint Model for Multiple Intent Detection and Slot Filling with Intent-Slot Co-Attention | 2023-12-10T00:00:00 | https://arxiv.org/abs/2312.05741v1 | [
"https://github.com/vinairesearch/misca"
] | In the paper 'MISCA: A Joint Model for Multiple Intent Detection and Slot Filling with Intent-Slot Co-Attention', what Accuracy score did the MISCA model get on the MixATIS dataset
| 76.7 |
ScanObjectNN | Mamba3D (no voting) | Mamba3D: Enhancing Local Features for 3D Point Cloud Analysis via State Space Model | 2024-04-23T00:00:00 | https://arxiv.org/abs/2404.14966v2 | [
"https://github.com/xhanxu/Mamba3D"
] | In the paper 'Mamba3D: Enhancing Local Features for 3D Point Cloud Analysis via State Space Model', what Overall Accuracy score did the Mamba3D (no voting) model get on the ScanObjectNN dataset
| 91.81 |
Condensed Movies | TESTA (ViT-B/16) | TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding | 2023-10-29T00:00:00 | https://arxiv.org/abs/2310.19060v1 | [
"https://github.com/renshuhuai-andy/testa"
] | In the paper 'TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding', what text-to-video R@1 score did the TESTA (ViT-B/16) model get on the Condensed Movies dataset
| 24.9 |
GSM8K | OpenMath-Mistral-7B (w/ code) | OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | 2024-02-15T00:00:00 | https://arxiv.org/abs/2402.10176v2 | [
"https://github.com/kipok/nemo-skills"
] | In the paper 'OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset', what Accuracy score did the OpenMath-Mistral-7B (w/ code) model get on the GSM8K dataset
| 80.2 |
DocRED-IE | REXEL | REXEL: An End-to-end Model for Document-Level Relation Extraction and Entity Linking | 2024-04-19T00:00:00 | https://arxiv.org/abs/2404.12788v1 | [
"https://github.com/amazon-science/e2e-docie"
] | In the paper 'REXEL: An End-to-end Model for Document-Level Relation Extraction and Entity Linking', what Avg F1 score did the REXEL model get on the DocRED-IE dataset
| 96.01 |
EuroSAT-SAR | MAE (ViT-S/16) | Feature Guided Masked Autoencoder for Self-supervised Learning in Remote Sensing | 2023-10-28T00:00:00 | https://arxiv.org/abs/2310.18653v1 | [
"https://github.com/zhu-xlab/fgmae"
] | In the paper 'Feature Guided Masked Autoencoder for Self-supervised Learning in Remote Sensing', what Overall Accuracy score did the MAE (ViT-S/16) model get on the EuroSAT-SAR dataset
| 81.0 |
DIV2K val - 4x upscaling | GOUB | Image Restoration Through Generalized Ornstein-Uhlenbeck Bridge | 2023-12-16T00:00:00 | https://arxiv.org/abs/2312.10299v2 | [
"https://github.com/Hammour-steak/GOUB"
] | In the paper 'Image Restoration Through Generalized Ornstein-Uhlenbeck Bridge', what PSNR score did the GOUB model get on the DIV2K val - 4x upscaling dataset
| 26.89 |
GRAZPEDWRI-DX | YOLOv8+GC | Global Context Modeling in YOLOv8 for Pediatric Wrist Fracture Detection | 2024-07-03T00:00:00 | https://arxiv.org/abs/2407.03163v1 | [
"https://github.com/ruiyangju/yolov8_global_context_fracture_detection"
] | In the paper 'Global Context Modeling in YOLOv8 for Pediatric Wrist Fracture Detection', what mAP score did the YOLOv8+GC model get on the GRAZPEDWRI-DX dataset
| 66.32 |
ImageNet-1k vs NINCO | EffNetb7 Relative Cosine Sim | In or Out? Fixing ImageNet Out-of-Distribution Detection Evaluation | 2023-06-01T00:00:00 | https://arxiv.org/abs/2306.00826v1 | [
"https://github.com/j-cb/ninco"
] | In the paper 'In or Out? Fixing ImageNet Out-of-Distribution Detection Evaluation', what AUROC score did the EffNetb7 Relative Cosine Sim model get on the ImageNet-1k vs NINCO dataset
| 87.9 |
THuman2.0 Dataset | Ultraman | Ultraman: Single Image 3D Human Reconstruction with Ultra Speed and Detail | 2024-03-18T00:00:00 | https://arxiv.org/abs/2403.12028v1 | [
"https://github.com/tomorrow1238/Ultraman"
] | In the paper 'Ultraman: Single Image 3D Human Reconstruction with Ultra Speed and Detail', what CLIP Similarity score did the Ultraman model get on the THuman2.0 Dataset dataset
| 0.9131 |
GazeFollow | ViTGaze | ViTGaze: Gaze Following with Interaction Features in Vision Transformers | 2024-03-19T00:00:00 | https://arxiv.org/abs/2403.12778v2 | [
"https://github.com/hustvl/vitgaze"
] | In the paper 'ViTGaze: Gaze Following with Interaction Features in Vision Transformers', what AUC score did the ViTGaze model get on the GazeFollow dataset
| 0.949 |
iNaturalist | AIMv2-1B | Multimodal Autoregressive Pre-training of Large Vision Encoders | 2024-11-21T00:00:00 | https://arxiv.org/abs/2411.14402v1 | [
"https://github.com/apple/ml-aim"
] | In the paper 'Multimodal Autoregressive Pre-training of Large Vision Encoders', what Top 1 Accuracy score did the AIMv2-1B model get on the iNaturalist dataset
| 79.7 |
ScanObjectNN | ExpPoint-MAE | ExpPoint-MAE: Better interpretability and performance for self-supervised point cloud transformers | 2023-06-19T00:00:00 | https://arxiv.org/abs/2306.10798v3 | [
"https://github.com/vvrpanda/exppoint-mae"
] | In the paper 'ExpPoint-MAE: Better interpretability and performance for self-supervised point cloud transformers', what OBJ-BG (OA) score did the ExpPoint-MAE model get on the ScanObjectNN dataset
| 90.88 |
RESIDE-6K | MixDehazeNet | MixDehazeNet : Mix Structure Block For Image Dehazing Network | 2023-05-28T00:00:00 | https://arxiv.org/abs/2305.17654v1 | [
"https://github.com/ameryxiong/mixdehazenet"
] | In the paper 'MixDehazeNet : Mix Structure Block For Image Dehazing Network', what PSNR score did the MixDehazeNet model get on the RESIDE-6K dataset
| 30.18 |
JAAH | Beat This! | Beat this! Accurate beat tracking without DBN postprocessing | 2024-07-31T00:00:00 | https://arxiv.org/abs/2407.21658v1 | [
"https://github.com/CPJKU/beat_this"
] | In the paper 'Beat this! Accurate beat tracking without DBN postprocessing', what F1 score did the Beat This! model get on the JAAH dataset
| 95.1 |
MATH | MathCoder-L-7B | MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning | 2023-10-05T00:00:00 | https://arxiv.org/abs/2310.03731v1 | [
"https://github.com/mathllm/mathcoder"
] | In the paper 'MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning', what Accuracy score did the MathCoder-L-7B model get on the MATH dataset
| 23.3 |
CIFAR-10, 50 Labels (OpenSet, 6/4) | UnMixMatch | Scaling Up Semi-supervised Learning with Unconstrained Unlabelled Data | 2023-06-02T00:00:00 | https://arxiv.org/abs/2306.01222v2 | [
"https://github.com/shuvenduroy/unmixmatch"
] | In the paper 'Scaling Up Semi-supervised Learning with Unconstrained Unlabelled Data', what Accuracy score did the UnMixMatch model get on the CIFAR-10, 50 Labels (OpenSet, 6/4) dataset
| 95.7 |
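
Each record follows the schema in the header row. Below is a minimal sketch of loading and querying rows with the Hugging Face `datasets` library; the repository ID `org/paper-results-qa` is a placeholder, since the actual dataset name is not shown on this page.

```python
# Minimal sketch, assuming this table is published as a Hugging Face dataset.
# "org/paper-results-qa" is a placeholder ID; the real repository name is not shown here.
from datasets import load_dataset

ds = load_dataset("org/paper-results-qa", split="train")

# Columns match the header row: dataset, model_name, paper_title, paper_date,
# paper_url, code_links, prompts, answer.
gsm8k_rows = ds.filter(lambda row: row["dataset"] == "GSM8K")
for row in gsm8k_rows:
    print(f'{row["model_name"]}: {row["prompts"]} -> {row["answer"]}')
```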