| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
| MoNuSeg | MDM | Masked Diffusion as Self-supervised Representation Learner | 2023-08-10 | https://arxiv.org/abs/2308.05695v4 | https://github.com/zx-pan/mdm | In the paper 'Masked Diffusion as Self-supervised Representation Learner', what F1 score did the MDM model get on the MoNuSeg dataset? | 81.01 |
| ColonINST-v1 (Unseen) | MobileVLM-1.7B (w/o LoRA, w/ extra data) | MobileVLM: A Fast, Strong and Open Vision Language Assistant for Mobile Devices | 2023-12-28 | https://arxiv.org/abs/2312.16886v2 | https://github.com/meituan-automl/mobilevlm | In the paper 'MobileVLM: A Fast, Strong and Open Vision Language Assistant for Mobile Devices', what Accuracy score did the MobileVLM-1.7B (w/o LoRA, w/ extra data) model get on the ColonINST-v1 (Unseen) dataset? | 78.75 |
| MATH | DART-Math-DSMath-7B-Prop2Diff (0-shot CoT, w/o code) | DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving | 2024-06-18 | https://arxiv.org/abs/2407.13690v1 | https://github.com/hkust-nlp/dart-math | In the paper 'DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving', what Accuracy score did the DART-Math-DSMath-7B-Prop2Diff (0-shot CoT, w/o code) model get on the MATH dataset? | 53.6 |
| STL-10 | SeCu | Stable Cluster Discrimination for Deep Clustering | 2023-11-24 | https://arxiv.org/abs/2311.14310v1 | https://github.com/idstcv/secu | In the paper 'Stable Cluster Discrimination for Deep Clustering', what Accuracy score did the SeCu model get on the STL-10 dataset? | 0.836 |
| SQA3D | Lexicon3D | Lexicon3D: Probing Visual Foundation Models for Complex 3D Scene Understanding | 2024-09-05 | https://arxiv.org/abs/2409.03757v2 | https://github.com/yunzeman/lexicon3d | In the paper 'Lexicon3D: Probing Visual Foundation Models for Complex 3D Scene Understanding', what AnswerExactMatch (Question Answering) score did the Lexicon3D model get on the SQA3D dataset? | 50.7 |
| PeMSD7(M) | STD-MAE | Spatial-Temporal-Decoupled Masked Pre-training for Spatiotemporal Forecasting | 2023-12-01 | https://arxiv.org/abs/2312.00516v3 | https://github.com/jimmy-7664/std-mae | In the paper 'Spatial-Temporal-Decoupled Masked Pre-training for Spatiotemporal Forecasting', what 12-step MAE score did the STD-MAE model get on the PeMSD7(M) dataset? | 2.52 |
| STS12 | PromptEOL+CSE+LLaMA-30B | Scaling Sentence Embeddings with Large Language Models | 2023-07-31 | https://arxiv.org/abs/2307.16645v1 | https://github.com/kongds/scaling_sentemb | In the paper 'Scaling Sentence Embeddings with Large Language Models', what Spearman Correlation score did the PromptEOL+CSE+LLaMA-30B model get on the STS12 dataset? | 0.7972 |
| MVTec LOCO AD | PSAD | Few Shot Part Segmentation Reveals Compositional Logic for Industrial Anomaly Detection | 2023-12-21 | https://arxiv.org/abs/2312.13783v2 | https://github.com/oopil/PSAD_logical_anomaly_detection | In the paper 'Few Shot Part Segmentation Reveals Compositional Logic for Industrial Anomaly Detection', what Avg. Detection AUROC score did the PSAD model get on the MVTec LOCO AD dataset? | 94.9 |
| COD | ZoomNeXt-ResNet-50 | ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection | 2023-10-31 | https://arxiv.org/abs/2310.20208v4 | https://github.com/lartpang/zoomnext | In the paper 'ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection', what MAE score did the ZoomNeXt-ResNet-50 model get on the COD dataset? | 0.026 |
| PeMSD8 | STD-MAE | Spatial-Temporal-Decoupled Masked Pre-training for Spatiotemporal Forecasting | 2023-12-01 | https://arxiv.org/abs/2312.00516v3 | https://github.com/jimmy-7664/std-mae | In the paper 'Spatial-Temporal-Decoupled Masked Pre-training for Spatiotemporal Forecasting', what 12-step MAE score did the STD-MAE model get on the PeMSD8 dataset? | 13.44 |
| MS-COCO | GKGNet (resolution 576) | GKGNet: Group K-Nearest Neighbor based Graph Convolutional Network for Multi-Label Image Recognition | 2023-08-28 | https://arxiv.org/abs/2308.14378v3 | https://github.com/jin-s13/gkgnet | In the paper 'GKGNet: Group K-Nearest Neighbor based Graph Convolutional Network for Multi-Label Image Recognition', what mAP score did the GKGNet (resolution 576) model get on the MS-COCO dataset? | 87.7 |
| SROIE | RORE (GeoLayoutLM) | Modeling Layout Reading Order as Ordering Relations for Visually-rich Document Understanding | 2024-09-29 | https://arxiv.org/abs/2409.19672v1 | https://github.com/chongzhangFDU/ROOR | In the paper 'Modeling Layout Reading Order as Ordering Relations for Visually-rich Document Understanding', what F1 score did the RORE (GeoLayoutLM) model get on the SROIE dataset? | 96.97 |
| MM-Vet | InternVL 1.5 | How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites | 2024-04-25 | https://arxiv.org/abs/2404.16821v2 | https://github.com/opengvlab/internvl | In the paper 'How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites', what GPT-4 score did the InternVL 1.5 model get on the MM-Vet dataset? | 62.8 |
| CoNLL 2003 (English) | PromptNER [RoBERTa-large] | PromptNER: Prompt Locating and Typing for Named Entity Recognition | 2023-05-26 | https://arxiv.org/abs/2305.17104v1 | https://github.com/tricktreat/promptner | In the paper 'PromptNER: Prompt Locating and Typing for Named Entity Recognition', what F1 score did the PromptNER [RoBERTa-large] model get on the CoNLL 2003 (English) dataset? | 93.08 |
| BIG-bench (Reasoning About Colored Objects) | PaLM 2 (few-shot, k=3, Direct) | PaLM 2 Technical Report | 2023-05-17 | https://arxiv.org/abs/2305.10403v3 | https://github.com/eternityyw/tram-benchmark | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2 (few-shot, k=3, Direct) model get on the BIG-bench (Reasoning About Colored Objects) dataset? | 61.2 |
| BIRD (BIg Bench for LaRge-scale Database Grounded Text-to-SQL Evaluation) | CHESS | CHESS: Contextual Harnessing for Efficient SQL Synthesis | 2024-05-27 | https://arxiv.org/abs/2405.16755v3 | https://github.com/shayantalaei/chess | In the paper 'CHESS: Contextual Harnessing for Efficient SQL Synthesis', what Execution Accuracy % (Test) score did the CHESS model get on the BIRD (BIg Bench for LaRge-scale Database Grounded Text-to-SQL Evaluation) dataset? | 66.69 |
| SMAC 6h_vs_9z | QPLEX | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04 | https://arxiv.org/abs/2306.02430v1 | https://github.com/j3soon/dfac-extended | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Average Score did the QPLEX model get on the SMAC 6h_vs_9z dataset? | 13.86 |
| SALMon | TWIST 350M | Textually Pretrained Speech Language Models | 2023-05-22 | https://arxiv.org/abs/2305.13009v3 | https://github.com/slp-rl/spokenstorycloze | In the paper 'Textually Pretrained Speech Language Models', what Speaker Consistency score did the TWIST 350M model get on the SALMon dataset? | 69.5 |
| EconLogicQA | Llama-3-8B | EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning | 2024-05-13 | https://arxiv.org/abs/2405.07938v2 | https://github.com/yinzhu-quan/lm-evaluation-harness | In the paper 'EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning', what Accuracy score did the Llama-3-8B model get on the EconLogicQA dataset? | 0.2385 |
| ColonINST-v1 (Seen) | LLaVA-Med-v1.5 (w/ LoRA, w/ extra data) | LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day | 2023-06-01 | https://arxiv.org/abs/2306.00890v1 | https://github.com/microsoft/LLaVA-Med | In the paper 'LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day', what Accuracy score did the LLaVA-Med-v1.5 (w/ LoRA, w/ extra data) model get on the ColonINST-v1 (Seen) dataset? | 99.4 |
| ColonINST-v1 (Unseen) | LLaVA-Med-v1.5 (w/ LoRA, w/ extra data) | LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day | 2023-06-01 | https://arxiv.org/abs/2306.00890v1 | https://github.com/microsoft/LLaVA-Med | In the paper 'LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day', what Accuracy score did the LLaVA-Med-v1.5 (w/ LoRA, w/ extra data) model get on the ColonINST-v1 (Unseen) dataset? | 66.51 |
| IC19-Art | CLIP4STR-L (DataComp-1B) | CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model | 2023-05-23 | https://arxiv.org/abs/2305.14014v3 | https://github.com/VamosC/CLIP4STR | In the paper 'CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model', what Accuracy (%) score did the CLIP4STR-L (DataComp-1B) model get on the IC19-Art dataset? | 86.4 |
| SPair-71k | SD+DINO (Zero-shot) | A Tale of Two Features: Stable Diffusion Complements DINO for Zero-Shot Semantic Correspondence | 2023-05-24 | https://arxiv.org/abs/2305.15347v2 | https://github.com/Junyi42/sd-dino | In the paper 'A Tale of Two Features: Stable Diffusion Complements DINO for Zero-Shot Semantic Correspondence', what PCK score did the SD+DINO (Zero-shot) model get on the SPair-71k dataset? | 64.0 |
| UruDendro | INBD | A Brief Analysis of the Iterative Next Boundary Detection Network for Tree Rings Delineation in Images of Pinus taeda | 2024-08-26 | https://arxiv.org/abs/2408.14343v1 | https://github.com/hmarichal93/mlbrief_inbd | In the paper 'A Brief Analysis of the Iterative Next Boundary Detection Network for Tree Rings Delineation in Images of Pinus taeda', what FScore did the INBD model get on the UruDendro dataset? | 0.79 |
| Atari 2600 Venture | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07 | https://arxiv.org/abs/2305.04180v3 | https://github.com/xinjinghao/color | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what score did the ASL DDQN model get on the Atari 2600 Venture dataset? | 291 |
| FP-T-H | GeoTransformer | GeoTransformer: Fast and Robust Point Cloud Registration with Geometric Transformer | 2023-07-25 | https://arxiv.org/abs/2308.03768v1 | https://github.com/qinzheng93/geotransformer | In the paper 'GeoTransformer: Fast and Robust Point Cloud Registration with Geometric Transformer', what Recall (3cm, 10 degrees) score did the GeoTransformer model get on the FP-T-H dataset? | 64.18 |
| NC4K | ZoomNeXt-PVTv2-B4 | ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection | 2023-10-31 | https://arxiv.org/abs/2310.20208v4 | https://github.com/lartpang/zoomnext | In the paper 'ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection', what S-measure score did the ZoomNeXt-PVTv2-B4 model get on the NC4K dataset? | 0.900 |
| Atari 2600 Video Pinball | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07 | https://arxiv.org/abs/2305.04180v3 | https://github.com/xinjinghao/color | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what score did the ASL DDQN model get on the Atari 2600 Video Pinball dataset? | 626794 |
| UCI HEPMASS | PaddingFlow | PaddingFlow: Improving Normalizing Flows with Padding-Dimensional Noise | 2024-03-13 | https://arxiv.org/abs/2403.08216v2 | https://github.com/adamqlmeng/paddingflow | In the paper 'PaddingFlow: Improving Normalizing Flows with Padding-Dimensional Noise', what CD score did the PaddingFlow model get on the UCI HEPMASS dataset? | 13.8 |
| MORPH Album2 (SE) | ResNet-50-DLDL | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10 | https://arxiv.org/abs/2307.04570v3 | https://github.com/paplhjak/facial-age-estimation-benchmark | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-DLDL model get on the MORPH Album2 (SE) dataset? | 2.81 |
| MVTec LOCO AD | CSAD | CSAD: Unsupervised Component Segmentation for Logical Anomaly Detection | 2024-08-28 | https://arxiv.org/abs/2408.15628v2 | https://github.com/Tokichan/CSAD | In the paper 'CSAD: Unsupervised Component Segmentation for Logical Anomaly Detection', what Avg. Detection AUROC score did the CSAD model get on the MVTec LOCO AD dataset? | 95.3 |
| FUNSD | RORE (GeoLayoutLM) | Modeling Layout Reading Order as Ordering Relations for Visually-rich Document Understanding | 2024-09-29 | https://arxiv.org/abs/2409.19672v1 | https://github.com/chongzhangFDU/ROOR | In the paper 'Modeling Layout Reading Order as Ordering Relations for Visually-rich Document Understanding', what F1 score did the RORE (GeoLayoutLM) model get on the FUNSD dataset? | 91.84 |
| PHEVA | STG-NF | PHEVA: A Privacy-preserving Human-centric Video Anomaly Detection Dataset | 2024-08-26 | https://arxiv.org/abs/2408.14329v1 | https://github.com/tecsar-uncc/pheva | In the paper 'PHEVA: A Privacy-preserving Human-centric Video Anomaly Detection Dataset', what AUC-ROC score did the STG-NF model get on the PHEVA dataset? | 57.57 |
| spider | PET-SQL | PET-SQL: A Prompt-Enhanced Two-Round Refinement of Text-to-SQL with Cross-consistency | 2024-03-13 | https://arxiv.org/abs/2403.09732v4 | https://github.com/zhshlii/petsql | In the paper 'PET-SQL: A Prompt-Enhanced Two-Round Refinement of Text-to-SQL with Cross-consistency', what Exact Match Accuracy (Test) score did the PET-SQL model get on the spider dataset? | 66.6 |
| COCO-20i (1-shot) | BAM (DifFSS, ResNet-50) | DifFSS: Diffusion Model for Few-Shot Semantic Segmentation | 2023-07-03 | https://arxiv.org/abs/2307.00773v3 | https://github.com/TrinitialChan/DifFSS | In the paper 'DifFSS: Diffusion Model for Few-Shot Semantic Segmentation', what Mean IoU score did the BAM (DifFSS, ResNet-50) model get on the COCO-20i (1-shot) dataset? | 43.6 |
| Pittsburgh-250k-test | BoQ (ResNet-50) | BoQ: A Place is Worth a Bag of Learnable Queries | 2024-05-12 | https://arxiv.org/abs/2405.07364v3 | https://github.com/amaralibey/bag-of-queries | In the paper 'BoQ: A Place is Worth a Bag of Learnable Queries', what Recall@1 score did the BoQ (ResNet-50) model get on the Pittsburgh-250k-test dataset? | 95 |
| Weather (336) | PRformer | PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting | 2024-08-20 | https://arxiv.org/abs/2408.10483v1 | https://github.com/usualheart/prformer | In the paper 'PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting', what MSE score did the PRformer model get on the Weather (336) dataset? | 0.241 |
| ETTm1 (192) Multivariate | SCNN | Disentangling Structured Components: Towards Adaptive, Interpretable and Scalable Time Series Forecasting | 2023-05-22 | https://arxiv.org/abs/2305.13036v3 | https://github.com/JLDeng/SCNN | In the paper 'Disentangling Structured Components: Towards Adaptive, Interpretable and Scalable Time Series Forecasting', what MSE score did the SCNN model get on the ETTm1 (192) Multivariate dataset? | 0.327 |
| CoNLL03 | UniNER-7B | UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition | 2023-08-07 | https://arxiv.org/abs/2308.03279v2 | https://github.com/emma1066/retrieval-augmented-it-openner | In the paper 'UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition', what F1 score did the UniNER-7B model get on the CoNLL03 dataset? | 93.3 |
| ETTh1 (96) Multivariate | MoLE-RLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11 | https://arxiv.org/abs/2312.06786v3 | https://github.com/rogerni/mole | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-RLinear model get on the ETTh1 (96) Multivariate dataset? | 0.375 |
| ETTh2 (336) Multivariate | PRformer | PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting | 2024-08-20 | https://arxiv.org/abs/2408.10483v1 | https://github.com/usualheart/prformer | In the paper 'PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting', what MSE score did the PRformer model get on the ETTh2 (336) Multivariate dataset? | 0.361 |
| Cora with Public Split: fixed 20 nodes per class | GGCM | From Cluster Assumption to Graph Convolution: Graph-based Semi-Supervised Learning Revisited | 2023-09-24 | https://arxiv.org/abs/2309.13599v2 | https://github.com/zhengwang100/ogc_ggcm | In the paper 'From Cluster Assumption to Graph Convolution: Graph-based Semi-Supervised Learning Revisited', what Accuracy score did the GGCM model get on the Cora with Public Split: fixed 20 nodes per class dataset? | 83.6% |
| STAR Benchmark | LLaMA-VQA | Large Language Models are Temporal and Causal Reasoners for Video Question Answering | 2023-10-24 | https://arxiv.org/abs/2310.15747v2 | https://github.com/mlvlab/Flipped-VQA | In the paper 'Large Language Models are Temporal and Causal Reasoners for Video Question Answering', what Average Accuracy score did the LLaMA-VQA model get on the STAR Benchmark dataset? | 65.4 |
| COCO Captions | LaDiC | LaDiC: Are Diffusion Models Really Inferior to Autoregressive Counterparts for Image-to-Text Generation? | 2024-04-16 | https://arxiv.org/abs/2404.10763v1 | https://github.com/wangyuchi369/ladic | In the paper 'LaDiC: Are Diffusion Models Really Inferior to Autoregressive Counterparts for Image-to-Text Generation?', what ROUGE-L score did the LaDiC model get on the COCO Captions dataset? | 58.7 |
| SVAMP | ATHENA (roberta-base) | ATHENA: Mathematical Reasoning with Thought Expansion | 2023-11-02 | https://arxiv.org/abs/2311.01036v1 | https://github.com/the-jb/athena-math | In the paper 'ATHENA: Mathematical Reasoning with Thought Expansion', what Execution Accuracy score did the ATHENA (roberta-base) model get on the SVAMP dataset? | 45.6 |
| CNN/Daily Mail | Claude Instant + SigExt | Salient Information Prompting to Steer Content in Prompt-based Abstractive Summarization | 2024-10-03 | https://arxiv.org/abs/2410.02741v2 | https://github.com/amazon-science/SigExt | In the paper 'Salient Information Prompting to Steer Content in Prompt-based Abstractive Summarization', what ROUGE-1 score did the Claude Instant + SigExt model get on the CNN/Daily Mail dataset? | 42 |
| ZJU-RGB-P | CSFNet-1 | CSFNet: A Cosine Similarity Fusion Network for Real-Time RGB-X Semantic Segmentation of Driving Scenes | 2024-07-01 | https://arxiv.org/abs/2407.01328v1 | https://github.com/Danial-Qashqai/CSFNet | In the paper 'CSFNet: A Cosine Similarity Fusion Network for Real-Time RGB-X Semantic Segmentation of Driving Scenes', what mIoU score did the CSFNet-1 model get on the ZJU-RGB-P dataset? | 90.85 |
| FGVC Aircraft | SaSPA + CAL | Advancing Fine-Grained Classification by Structure and Subject Preserving Augmentation | 2024-06-20 | https://arxiv.org/abs/2406.14551v2 | https://github.com/eyalmichaeli/saspa-aug | In the paper 'Advancing Fine-Grained Classification by Structure and Subject Preserving Augmentation', what Harmonic mean score did the SaSPA + CAL model get on the FGVC Aircraft dataset? | 52.2 |
| CIFAR-100-LT (ρ=100) | MDCS | MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition | 2023-08-19 | https://arxiv.org/abs/2308.09922v2 | https://github.com/fistyee/mdcs | In the paper 'MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition', what Error Rate score did the MDCS model get on the CIFAR-100-LT (ρ=100) dataset? | 43.9 |
| UBody | SMPLer-X | SMPLer-X: Scaling Up Expressive Human Pose and Shape Estimation | 2023-09-29 | https://arxiv.org/abs/2309.17448v3 | https://github.com/caizhongang/SMPLer-X | In the paper 'SMPLer-X: Scaling Up Expressive Human Pose and Shape Estimation', what PVE-All score did the SMPLer-X model get on the UBody dataset? | 57.5 |
| FMB Dataset | MMSFormer (RGB) | MMSFormer: Multimodal Transformer for Material and Semantic Segmentation | 2023-09-07 | https://arxiv.org/abs/2309.04001v4 | https://github.com/csiplab/mmsformer | In the paper 'MMSFormer: Multimodal Transformer for Material and Semantic Segmentation', what mIoU score did the MMSFormer (RGB) model get on the FMB Dataset? | 57.20 |
| Mini-Imagenet 5-way (1-shot) | DiffKendall (Meta-Baseline, ResNet-12) | DiffKendall: A Novel Approach for Few-Shot Learning with Differentiable Kendall's Rank Correlation | 2023-07-28 | https://arxiv.org/abs/2307.15317v2 | https://github.com/kaipengm2/DiffKendall | In the paper 'DiffKendall: A Novel Approach for Few-Shot Learning with Differentiable Kendall's Rank Correlation', what Accuracy score did the DiffKendall (Meta-Baseline, ResNet-12) model get on the Mini-Imagenet 5-way (1-shot) dataset? | 65.56 |
| RefCOCO | EVP | EVP: Enhanced Visual Perception using Inverse Multi-Attentive Feature Refinement and Regularized Image-Text Alignment | 2023-12-13 | https://arxiv.org/abs/2312.08548v1 | https://github.com/lavreniuk/evp | In the paper 'EVP: Enhanced Visual Perception using Inverse Multi-Attentive Feature Refinement and Regularized Image-Text Alignment', what IoU score did the EVP model get on the RefCOCO dataset? | 77.61 |
| ImageNet | GTP-LV-ViT-M/P8 | GTP-ViT: Efficient Vision Transformers via Graph-based Token Propagation | 2023-11-06 | https://arxiv.org/abs/2311.03035v2 | https://github.com/ackesnal/gtp-vit | In the paper 'GTP-ViT: Efficient Vision Transformers via Graph-based Token Propagation', what Top 1 Accuracy score did the GTP-LV-ViT-M/P8 model get on the ImageNet dataset? | 82.8% |
| ScanObjectNN | ULIP-2 + Point-BERT | ULIP-2: Towards Scalable Multimodal Pre-training for 3D Understanding | 2023-05-14 | https://arxiv.org/abs/2305.08275v4 | https://github.com/salesforce/ulip | In the paper 'ULIP-2: Towards Scalable Multimodal Pre-training for 3D Understanding', what Overall Accuracy score did the ULIP-2 + Point-BERT model get on the ScanObjectNN dataset? | 89.0 |
| COCO 2017 val | ReviewKD++ (T: Faster R-CNN (ResNet-101), S: Faster R-CNN (MobileNet-v2)) | Improving Knowledge Distillation via Regularizing Feature Norm and Direction | 2023-05-26 | https://arxiv.org/abs/2305.17007v1 | https://github.com/wangyz1608/knowledge-distillation-via-nd | In the paper 'Improving Knowledge Distillation via Regularizing Feature Norm and Direction', what mAP score did the ReviewKD++ (T: Faster R-CNN (ResNet-101), S: Faster R-CNN (MobileNet-v2)) model get on the COCO 2017 val dataset? | 34.51 |
| ImageNet | DAT-T++ | DAT++: Spatially Dynamic Vision Transformer with Deformable Attention | 2023-09-04 | https://arxiv.org/abs/2309.01430v1 | https://github.com/leaplabthu/dat | In the paper 'DAT++: Spatially Dynamic Vision Transformer with Deformable Attention', what Top 1 Accuracy score did the DAT-T++ model get on the ImageNet dataset? | 83.9% |
| Weather2K1786 (96) | MoLE-DLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11 | https://arxiv.org/abs/2312.06786v3 | https://github.com/rogerni/mole | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the Weather2K1786 (96) dataset? | 0.535 |
| QVHighlights | UniVTG (w/ PT) | UniVTG: Towards Unified Video-Language Temporal Grounding | 2023-07-31 | https://arxiv.org/abs/2307.16715v2 | https://github.com/showlab/univtg | In the paper 'UniVTG: Towards Unified Video-Language Temporal Grounding', what mAP score did the UniVTG (w/ PT) model get on the QVHighlights dataset? | 43.63 |
| Insectwingbeat | ConvTran | Improving Position Encoding of Transformers for Multivariate Time Series Classification | 2023-05-26 | https://arxiv.org/abs/2305.16642v1 | https://github.com/navidfoumani/convtran | In the paper 'Improving Position Encoding of Transformers for Multivariate Time Series Classification', what Accuracy score did the ConvTran model get on the Insectwingbeat dataset? | 0.7132 |
| CFC-DAOD | PT (ResNet50-FPN) | Align and Distill: Unifying and Improving Domain Adaptive Object Detection | 2024-03-18 | https://arxiv.org/abs/2403.12029v2 | https://github.com/justinkay/aldi | In the paper 'Align and Distill: Unifying and Improving Domain Adaptive Object Detection', what AP@0.5 score did the PT (ResNet50-FPN) model get on the CFC-DAOD dataset? | 69.0 |
| VideoInstruct | SlowFast-LLaVA-34B | SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models | 2024-07-22 | https://arxiv.org/abs/2407.15841v2 | https://github.com/apple/ml-slowfast-llava | In the paper 'SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models', what gpt-score did the SlowFast-LLaVA-34B model get on the VideoInstruct dataset? | 3.48 |
| MM-Vet | Imp-3B | Imp: Highly Capable Large Multimodal Models for Mobile Devices | 2024-05-20 | https://arxiv.org/abs/2405.12107v2 | https://github.com/milvlg/imp | In the paper 'Imp: Highly Capable Large Multimodal Models for Mobile Devices', what GPT-4 score did the Imp-3B model get on the MM-Vet dataset? | 43.3 |
| Mid-Atlantic Ridge | CLIP | AnyLoc: Towards Universal Visual Place Recognition | 2023-08-01 | https://arxiv.org/abs/2308.00688v2 | https://github.com/AnyLoc/AnyLoc | In the paper 'AnyLoc: Towards Universal Visual Place Recognition', what Recall@1 score did the CLIP model get on the Mid-Atlantic Ridge dataset? | 25.74 |
| InfiMM-Eval | SPHINX v2 | SPHINX: The Joint Mixing of Weights, Tasks, and Visual Embeddings for Multi-modal Large Language Models | 2023-11-13 | https://arxiv.org/abs/2311.07575v1 | https://github.com/alpha-vllm/llama2-accessory | In the paper 'SPHINX: The Joint Mixing of Weights, Tasks, and Visual Embeddings for Multi-modal Large Language Models', what Overall score did the SPHINX v2 model get on the InfiMM-Eval dataset? | 39.48 |
| USPTO-50k | NAG2G (reaction class unknown) | Node-Aligned Graph-to-Graph (NAG2G): Elevating Template-Free Deep Learning Approaches in Single-Step Retrosynthesis | 2023-09-27 | https://arxiv.org/abs/2309.15798v2 | https://github.com/dptech-corp/nag2g | In the paper 'Node-Aligned Graph-to-Graph (NAG2G): Elevating Template-Free Deep Learning Approaches in Single-Step Retrosynthesis', what Top-1 accuracy score did the NAG2G (reaction class unknown) model get on the USPTO-50k dataset? | 55.1 |
| Food-101N | SURE (ResNet-50) | SURE: SUrvey REcipes for building reliable and robust deep networks | 2024-03-01 | https://arxiv.org/abs/2403.00543v1 | https://github.com/YutingLi0606/SURE | In the paper 'SURE: SUrvey REcipes for building reliable and robust deep networks', what Accuracy score did the SURE (ResNet-50) model get on the Food-101N dataset? | 88.0 |
| MS-COCO (30-shot) | CD-ViTO | Cross-Domain Few-Shot Object Detection via Enhanced Open-Set Object Detector | 2024-02-05 | https://arxiv.org/abs/2402.03094v4 | https://github.com/lovelyqian/CDFSOD-benchmark | In the paper 'Cross-Domain Few-Shot Object Detection via Enhanced Open-Set Object Detector', what AP score did the CD-ViTO model get on the MS-COCO (30-shot) dataset? | 35.9 |
| Visual Genome | ADTrans | Panoptic Scene Graph Generation with Semantics-Prototype Learning | 2023-07-28 | https://arxiv.org/abs/2307.15567v3 | https://github.com/lili0415/psg-biased-annotation | In the paper 'Panoptic Scene Graph Generation with Semantics-Prototype Learning', what Recall@50 score did the ADTrans model get on the Visual Genome dataset? | 23.0 |
| MLT17 | MRM | MRN: Multiplexed Routing Network for Incremental Multilingual Text Recognition | 2023-05-24 | https://arxiv.org/abs/2305.14758v3 | https://github.com/simplify23/MRN | In the paper 'MRN: Multiplexed Routing Network for Incremental Multilingual Text Recognition', what Acc score did the MRM model get on the MLT17 dataset? | 78.4 |
| FP-O-H | GeoTransformer | GeoTransformer: Fast and Robust Point Cloud Registration with Geometric Transformer | 2023-07-25 | https://arxiv.org/abs/2308.03768v1 | https://github.com/qinzheng93/geotransformer | In the paper 'GeoTransformer: Fast and Robust Point Cloud Registration with Geometric Transformer', what Recall (3cm, 10 degrees) score did the GeoTransformer model get on the FP-O-H dataset? | 2.64 |
| CVC-ColonDB | PVT-GCASCADE | G-CASCADE: Efficient Cascaded Graph Convolutional Decoding for 2D Medical Image Segmentation | 2023-10-24 | https://arxiv.org/abs/2310.16175v1 | https://github.com/SLDGroup/G-CASCADE | In the paper 'G-CASCADE: Efficient Cascaded Graph Convolutional Decoding for 2D Medical Image Segmentation', what mean Dice score did the PVT-GCASCADE model get on the CVC-ColonDB dataset? | 0.8261 |
| COCO test-dev | LeYOLO-Nano@480 | LeYOLO, New Scalable and Efficient CNN Architecture for Object Detection | 2024-06-20 | https://arxiv.org/abs/2406.14239v1 | https://github.com/LilianHollard/LeYOLO | In the paper 'LeYOLO, New Scalable and Efficient CNN Architecture for Object Detection', what box mAP score did the LeYOLO-Nano@480 model get on the COCO test-dev dataset? | 31.3 |
| DAVIS 2017 (val) | UniVS (Swin-L) | UniVS: Unified and Universal Video Segmentation with Prompts as Queries | 2024-02-28 | https://arxiv.org/abs/2402.18115v2 | https://github.com/minghanli/univs | In the paper 'UniVS: Unified and Universal Video Segmentation with Prompts as Queries', what Mean Jaccard & F-Measure score did the UniVS (Swin-L) model get on the DAVIS 2017 (val) dataset? | 76.2 |
| FSC147 | SemAug-CounTR | Semantic Generative Augmentations for Few-Shot Counting | 2023-10-26 | https://arxiv.org/abs/2311.16122v1 | https://github.com/perladoubinsky/SemAug | In the paper 'Semantic Generative Augmentations for Few-Shot Counting', what MAE (val) score did the SemAug-CounTR model get on the FSC147 dataset? | 12.31 |
| ChestX-ray14 | CoAtNet | SynthEnsemble: A Fusion of CNN, Vision Transformer, and Hybrid Models for Multi-Label Chest X-Ray Classification | 2023-11-13 | https://arxiv.org/abs/2311.07750v3 | https://github.com/syednabilashraf/SynthEnsemble | In the paper 'SynthEnsemble: A Fusion of CNN, Vision Transformer, and Hybrid Models for Multi-Label Chest X-Ray Classification', what Average AUC (14 labels) score did the CoAtNet model get on the ChestX-ray14 dataset? | 84.239 |
| Oxford-IIIT Pets | ZLaP* | Label Propagation for Zero-shot Classification with Vision-Language Models | 2024-04-05 | https://arxiv.org/abs/2404.04072v1 | https://github.com/vladan-stojnic/zlap | In the paper 'Label Propagation for Zero-shot Classification with Vision-Language Models', what Accuracy score did the ZLaP* model get on the Oxford-IIIT Pets dataset? | 89 |
| MSU SR-QA Dataset | Q-Align (VQA) | Q-Align: Teaching LMMs for Visual Scoring via Discrete Text-Defined Levels | 2023-12-28 | https://arxiv.org/abs/2312.17090v1 | https://github.com/q-future/q-align | In the paper 'Q-Align: Teaching LMMs for Visual Scoring via Discrete Text-Defined Levels', what SROCC score did the Q-Align (VQA) model get on the MSU SR-QA Dataset? | 0.71812 |
| LLRGBD-synthetic | SMMCL (ResNet-101) | Understanding Dark Scenes by Contrasting Multi-Modal Observations | 2023-08-23 | https://arxiv.org/abs/2308.12320v2 | https://github.com/palmdong/smmcl | In the paper 'Understanding Dark Scenes by Contrasting Multi-Modal Observations', what mIoU score did the SMMCL (ResNet-101) model get on the LLRGBD-synthetic dataset? | 64.40 |
| BSD100 - 2x upscaling | WaveMixSR | WaveMixSR: A Resource-efficient Neural Network for Image Super-resolution | 2023-07-01 | https://arxiv.org/abs/2307.00430v1 | https://github.com/pranavphoenix/WaveMixSR | In the paper 'WaveMixSR: A Resource-efficient Neural Network for Image Super-resolution', what PSNR score did the WaveMixSR model get on the BSD100 - 2x upscaling dataset? | 33.08 |
| Nardo-Air R | AnyLoc-VLAD-DINOv2 | AnyLoc: Towards Universal Visual Place Recognition | 2023-08-01 | https://arxiv.org/abs/2308.00688v2 | https://github.com/AnyLoc/AnyLoc | In the paper 'AnyLoc: Towards Universal Visual Place Recognition', what Recall@1 score did the AnyLoc-VLAD-DINOv2 model get on the Nardo-Air R dataset? | 85.92 |
FMB Dataset | MMSFormer (RGB-Infrared) | MMSFormer: Multimodal Transformer for Material and Semantic Segmentation | 2023-09-07T00:00:00 | https://arxiv.org/abs/2309.04001v4 | [
"https://github.com/csiplab/mmsformer"
] | In the paper 'MMSFormer: Multimodal Transformer for Material and Semantic Segmentation', what mIoU score did the MMSFormer (RGB-Infrared) model get on the FMB Dataset dataset
| 61.70 |
AGORA | NIKI (Twist-and-Swing) | NIKI: Neural Inverse Kinematics with Invertible Neural Networks for 3D Human Pose and Shape Estimation | 2023-05-15T00:00:00 | https://arxiv.org/abs/2305.08590v1 | [
"https://github.com/jeff-sjtu/niki"
] | In the paper 'NIKI: Neural Inverse Kinematics with Invertible Neural Networks for 3D Human Pose and Shape Estimation', what B-NMVE score did the NIKI (Twist-and-Swing) model get on the AGORA dataset
| 70.2 |
ETTh1 (192) Multivariate | MoLE-DLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06786v3 | [
"https://github.com/rogerni/mole"
] | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the ETTh1 (192) Multivariate dataset
| 0.453 |
MS-COCO (30-shot) | DE-ViT | Detect Everything with Few Examples | 2023-09-22T00:00:00 | https://arxiv.org/abs/2309.12969v4 | [
"https://github.com/mlzxy/devit"
] | In the paper 'Detect Everything with Few Examples', what AP score did the DE-ViT model get on the MS-COCO (30-shot) dataset
| 34 |
COCO 2017 | DeBiFormer-S (IN1k pretrain, Retina) | DeBiFormer: Vision Transformer with Deformable Agent Bi-level Routing Attention | 2024-10-11T00:00:00 | https://arxiv.org/abs/2410.08582v1 | [
"https://github.com/maclong01/DeBiFormer"
] | In the paper 'DeBiFormer: Vision Transformer with Deformable Agent Bi-level Routing Attention', what mAP score did the DeBiFormer-S (IN1k pretrain, Retina) model get on the COCO 2017 dataset
| 45.6 |
CATT | Command R+ | CATT: Character-based Arabic Tashkeel Transformer | 2024-07-03T00:00:00 | https://arxiv.org/abs/2407.03236v3 | [
"https://github.com/abjadai/catt"
] | In the paper 'CATT: Character-based Arabic Tashkeel Transformer', what DER(%) score did the Command R+ model get on the CATT dataset
| 13.169 |
EQ-Bench | OpenAI text-davinci-003 | EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06281v2 | [
"https://github.com/eq-bench/eq-bench"
] | In the paper 'EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models', what EQ-Bench score did the OpenAI text-davinci-003 model get on the EQ-Bench dataset
| 43.73 |
KITTI360pose | MambaPlace | MambaPlace:Text-to-Point-Cloud Cross-Modal Place Recognition with Attention Mamba Mechanisms | 2024-08-28T00:00:00 | https://arxiv.org/abs/2408.15740v1 | [
"https://github.com/CV4RA/MambaPlace"
] | In the paper 'MambaPlace:Text-to-Point-Cloud Cross-Modal Place Recognition with Attention Mamba Mechanisms', what Localization Recall@1 score did the MambaPlace model get on the KITTI360pose dataset
| 0.45 |
AudioSet | DyMN-L (Audio-Only, Single) | Dynamic Convolutional Neural Networks as Efficient Pre-trained Audio Models | 2023-10-24T00:00:00 | https://arxiv.org/abs/2310.15648v1 | [
"https://github.com/fschmid56/efficientat"
] | In the paper 'Dynamic Convolutional Neural Networks as Efficient Pre-trained Audio Models', what Test mAP score did the DyMN-L (Audio-Only, Single) model get on the AudioSet dataset
| 0.490 |
Beam-Splitter Deblurring (BSD) | Turtle | Learning Truncated Causal History Model for Video Restoration | 2024-10-04T00:00:00 | https://arxiv.org/abs/2410.03936v2 | [
"https://github.com/Ascend-Research/Turtle"
] | In the paper 'Learning Truncated Causal History Model for Video Restoration', what PSNR score did the Turtle model get on the Beam-Splitter Deblurring (BSD) dataset
| 33.58 |
Ballroom | Beat This! | Beat this! Accurate beat tracking without DBN postprocessing | 2024-07-31T00:00:00 | https://arxiv.org/abs/2407.21658v1 | [
"https://github.com/CPJKU/beat_this"
] | In the paper 'Beat this! Accurate beat tracking without DBN postprocessing', what F1 score did the Beat This! model get on the Ballroom dataset
| 95.3 |
LEVIR-CD | SGSLN/128 | Exchanging Dual Encoder-Decoder: A New Strategy for Change Detection with Semantic Guidance and Spatial Localization | 2023-11-19T00:00:00 | https://arxiv.org/abs/2311.11302v1 | [
"https://github.com/walking-shadow/Semantic-guidance-and-spatial-localization-network"
] | In the paper 'Exchanging Dual Encoder-Decoder: A New Strategy for Change Detection with Semantic Guidance and Spatial Localization', what F1-score did the SGSLN/128 model get on the LEVIR-CD dataset
| 0.91 |
VDD | Mask2Former(Swin-T) | VDD: Varied Drone Dataset for Semantic Segmentation | 2023-05-23T00:00:00 | https://arxiv.org/abs/2305.13608v3 | [
"https://github.com/RussRobin/VDD"
] | In the paper 'VDD: Varied Drone Dataset for Semantic Segmentation', what mIoU score did the Mask2Former(Swin-T) model get on the VDD dataset
| 77.85 |
MSRVTT-QA | vid-TLDR (UMT-L) | vid-TLDR: Training Free Token merging for Light-weight Video Transformer | 2024-03-20T00:00:00 | https://arxiv.org/abs/2403.13347v2 | [
"https://github.com/mlvlab/vid-tldr"
] | In the paper 'vid-TLDR: Training Free Token merging for Light-weight Video Transformer', what Accuracy score did the vid-TLDR (UMT-L) model get on the MSRVTT-QA dataset
| 0.470 |
ScanObjectNN | ReCon+PPT | Positional Prompt Tuning for Efficient 3D Representation Learning | 2024-08-21T00:00:00 | https://arxiv.org/abs/2408.11567v1 | [
"https://github.com/zsc000722/ppt"
] | In the paper 'Positional Prompt Tuning for Efficient 3D Representation Learning', what Overall Accuracy score did the ReCon+PPT model get on the ScanObjectNN dataset
| 89.52 |
MATH | WizardMath-7B-V1.1 | WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09583v1 | [
"https://github.com/nlpxucan/wizardlm"
] | In the paper 'WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct', what Accuracy score did the WizardMath-7B-V1.1 model get on the MATH dataset
| 33.0 |
STAR Benchmark | SeViLA (0-shot) | Self-Chained Image-Language Model for Video Localization and Question Answering | 2023-05-11T00:00:00 | https://arxiv.org/abs/2305.06988v2 | [
"https://github.com/yui010206/sevila"
] | In the paper 'Self-Chained Image-Language Model for Video Localization and Question Answering', what Average Accuracy score did the SeViLA (0-shot) model get on the STAR Benchmark dataset
| 44.6 |
ScanNet | OneFormer3D | OneFormer3D: One Transformer for Unified Point Cloud Segmentation | 2023-11-24T00:00:00 | https://arxiv.org/abs/2311.14405v1 | [
"https://github.com/oneformer3d/oneformer3d"
] | In the paper 'OneFormer3D: One Transformer for Unified Point Cloud Segmentation', what val mIoU score did the OneFormer3D model get on the ScanNet dataset
| 76.6 |
NTU RGB+D | DVANet (RGB only) | DVANet: Disentangling View and Action Features for Multi-View Action Recognition | 2023-12-10T00:00:00 | https://arxiv.org/abs/2312.05719v1 | [
"https://github.com/NyleSiddiqui/MultiView_Actions"
] | In the paper 'DVANet: Disentangling View and Action Features for Multi-View Action Recognition', what Accuracy (CS) score did the DVANet (RGB only) model get on the NTU RGB+D dataset
| 93.4 |