| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
EC-FUNSD | LayoutLMv3 (large) | Rethinking the Evaluation of Pre-trained Text-and-Layout Models from an Entity-Centric Perspective | 2024-02-04T00:00:00 | https://arxiv.org/abs/2402.02379v1 | [
"https://github.com/chongzhangFDU/ROOR"
] | In the paper 'Rethinking the Evaluation of Pre-trained Text-and-Layout Models from an Entity-Centric Perspective', what F1 score did the LayoutLMv3 (large) model get on the EC-FUNSD dataset
| 78.14 |
Nordland | DINOv2 SALAD (1-frame thr.) | Optimal Transport Aggregation for Visual Place Recognition | 2023-11-27T00:00:00 | https://arxiv.org/abs/2311.15937v2 | [
"https://github.com/serizba/salad"
] | In the paper 'Optimal Transport Aggregation for Visual Place Recognition', what Recall@1 score did the DINOv2 SALAD (1-frame thr.) model get on the Nordland dataset
| 85.2 |
MovieLens 1M | HSTU+MoL | Retrieval with Learned Similarities | 2024-07-22T00:00:00 | https://arxiv.org/abs/2407.15462v3 | [
"https://github.com/bailuding/rails"
] | In the paper 'Retrieval with Learned Similarities', what HR@10 (full corpus) score did the HSTU+MoL model get on the MovieLens 1M dataset
| .3412 |
PASCAL-5i (5-Shot) | SCCAN (ResNet-50) | Self-Calibrated Cross Attention Network for Few-Shot Segmentation | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09294v1 | [
"https://github.com/sam1224/sccan"
] | In the paper 'Self-Calibrated Cross Attention Network for Few-Shot Segmentation', what Mean IoU score did the SCCAN (ResNet-50) model get on the PASCAL-5i (5-Shot) dataset
| 70.3 |
ogbn-arxiv | GCN | Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification | 2024-06-13T00:00:00 | https://arxiv.org/abs/2406.08993v2 | [
"https://github.com/LUOyk1999/tunedGNN"
] | In the paper 'Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification', what Test Accuracy score did the GCN model get on the ogbn-arxiv dataset
| 0.7360 ± 0.0018 |
PeMS08 | Cy2Mixer | Enhancing Topological Dependencies in Spatio-Temporal Graphs with Cycle Message Passing Blocks | 2024-01-29T00:00:00 | https://arxiv.org/abs/2401.15894v2 | [
"https://github.com/leemingo/cy2mixer"
] | In the paper 'Enhancing Topological Dependencies in Spatio-Temporal Graphs with Cycle Message Passing Blocks', what MAE@1h score did the Cy2Mixer model get on the PeMS08 dataset
| 13.53 |
PF-WILLOW | LDMCorrespondences | Unsupervised Semantic Correspondence Using Stable Diffusion | 2023-05-24T00:00:00 | https://arxiv.org/abs/2305.15581v2 | [
"https://github.com/ubc-vision/LDM_correspondences"
] | In the paper 'Unsupervised Semantic Correspondence Using Stable Diffusion', what PCK score did the LDMCorrespondences model get on the PF-WILLOW dataset
| 84.3 |
Weather (720) | TSMixer | TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting | 2023-06-14T00:00:00 | https://arxiv.org/abs/2306.09364v4 | [
"https://github.com/ibm/tsfm"
] | In the paper 'TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting', what MSE score did the TSMixer model get on the Weather (720) dataset
| 0.316 |
KITTI-360 | Superpoint Transformer | Efficient 3D Semantic Segmentation with Superpoint Transformer | 2023-06-13T00:00:00 | https://arxiv.org/abs/2306.08045v2 | [
"https://github.com/drprojects/superpoint_transformer"
] | In the paper 'Efficient 3D Semantic Segmentation with Superpoint Transformer', what miou Val score did the Superpoint Transformer model get on the KITTI-360 dataset
| 63.5 |
SARDet-100K | MSFA (F-RCNN+ConvNext-T) | SARDet-100K: Towards Open-Source Benchmark and ToolKit for Large-Scale SAR Object Detection | 2024-03-11T00:00:00 | https://arxiv.org/abs/2403.06534v2 | [
"https://github.com/zcablii/sardet_100k"
] | In the paper 'SARDet-100K: Towards Open-Source Benchmark and ToolKit for Large-Scale SAR Object Detection', what box mAP score did the MSFA (F-RCNN+ConvNext-T) model get on the SARDet-100K dataset
| 54.8 |
ImageNet | ToMe-ViT-S | Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09372v3 | [
"https://github.com/tobna/whattransformertofavor"
] | In the paper 'Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers', what Top 1 Accuracy score did the ToMe-ViT-S model get on the ImageNet dataset
| 82.11% |
PECC | Claude 3 Haiku | PECC: Problem Extraction and Coding Challenges | 2024-04-29T00:00:00 | https://arxiv.org/abs/2404.18766v1 | [
"https://github.com/hallerpatrick/pecc"
] | In the paper 'PECC: Problem Extraction and Coding Challenges', what Pass@3 score did the Claude 3 Haiku model get on the PECC dataset
| 27.67 |
RefCOCO testB | HyperSeg | HyperSeg: Towards Universal Visual Segmentation with Large Language Model | 2024-11-26T00:00:00 | https://arxiv.org/abs/2411.17606v2 | [
"https://github.com/congvvc/HyperSeg"
] | In the paper 'HyperSeg: Towards Universal Visual Segmentation with Large Language Model', what Overall IoU score did the HyperSeg model get on the RefCOCO testB dataset
| 83.4 |
Musk v2 | Snuffy | Snuffy: Efficient Whole Slide Image Classifier | 2024-08-15T00:00:00 | https://arxiv.org/abs/2408.08258v2 | [
"https://github.com/jafarinia/snuffy"
] | In the paper 'Snuffy: Efficient Whole Slide Image Classifier', what AUC score did the Snuffy model get on the Musk v2 dataset
| 0.985 |
Natural Questions | PaLM 2-M (one-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | [
"https://github.com/eternityyw/tram-benchmark"
] | In the paper 'PaLM 2 Technical Report', what EM score did the PaLM 2-M (one-shot) model get on the Natural Questions dataset
| 32.0 |
FRMT (Chinese - Mainland) | PaLM | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | [
"https://github.com/eternityyw/tram-benchmark"
] | In the paper 'PaLM 2 Technical Report', what BLEURT score did the PaLM model get on the FRMT (Chinese - Mainland) dataset
| 70.3 |
SIDER | GIT-Mol(G+S) | GIT-Mol: A Multi-modal Large Language Model for Molecular Science with Graph, Image, and Text | 2023-08-14T00:00:00 | https://arxiv.org/abs/2308.06911v3 | [
"https://github.com/ai-hpc-research-team/git-mol"
] | In the paper 'GIT-Mol: A Multi-modal Large Language Model for Molecular Science with Graph, Image, and Text', what AUC score did the GIT-Mol(G+S) model get on the SIDER dataset
| 0.634 |
RWTH-PHOENIX-Weather 2014 | MSKA-SLR | Multi-Stream Keypoint Attention Network for Sign Language Recognition and Translation | 2024-05-09T00:00:00 | https://arxiv.org/abs/2405.05672v1 | [
"https://github.com/sutwangyan/MSKA"
] | In the paper 'Multi-Stream Keypoint Attention Network for Sign Language Recognition and Translation', what Word Error Rate (WER) score did the MSKA-SLR model get on the RWTH-PHOENIX-Weather 2014 dataset
| 22.1 |
ADE20K-150 | LaVG | In Defense of Lazy Visual Grounding for Open-Vocabulary Semantic Segmentation | 2024-08-09T00:00:00 | https://arxiv.org/abs/2408.04961v1 | [
"https://github.com/dahyun-kang/lazygrounding"
] | In the paper 'In Defense of Lazy Visual Grounding for Open-Vocabulary Semantic Segmentation', what mIoU score did the LaVG model get on the ADE20K-150 dataset
| 15.8 |
CIFAR-10 | Easy Consistency Tuning (ECT) | Consistency Models Made Easy | 2024-06-20T00:00:00 | https://arxiv.org/abs/2406.14548v2 | [
"https://github.com/locuslab/ect"
] | In the paper 'Consistency Models Made Easy', what FID score did the Easy Consistency Tuning (ECT) model get on the CIFAR-10 dataset
| 1.94 |
PHEVA | MPED-RNN | PHEVA: A Privacy-preserving Human-centric Video Anomaly Detection Dataset | 2024-08-26T00:00:00 | https://arxiv.org/abs/2408.14329v1 | [
"https://github.com/tecsar-uncc/pheva"
] | In the paper 'PHEVA: A Privacy-preserving Human-centric Video Anomaly Detection Dataset', what AUC-ROC score did the MPED-RNN model get on the PHEVA dataset
| 76.05 |
DukeMTMC-reID | CLIP-ReID Baseline+UFFM+AMC | Enhancing Person Re-Identification via Uncertainty Feature Fusion and Auto-weighted Measure Combination | 2024-05-02T00:00:00 | https://arxiv.org/abs/2405.01101v4 | [
"https://github.com/chequanghuy/Enhancing-Person-Re-Identification-via-UFFM-and-AMC"
] | In the paper 'Enhancing Person Re-Identification via Uncertainty Feature Fusion and Auto-weighted Measure Combination', what Rank-1 score did the CLIP-ReID Baseline+UFFM+AMC model get on the DukeMTMC-reID dataset
| 91.3 |
YouTube-VOS 2018 | Cutie+ (base, MEGA) | Putting the Object Back into Video Object Segmentation | 2023-10-19T00:00:00 | https://arxiv.org/abs/2310.12982v2 | [
"https://github.com/hkchengrex/Cutie"
] | In the paper 'Putting the Object Back into Video Object Segmentation', what F-Measure (Seen) score did the Cutie+ (base, MEGA) model get on the YouTube-VOS 2018 dataset
| 91.0 |
VideoInstruct | ST-LLM-7B | ST-LLM: Large Language Models Are Effective Temporal Learners | 2024-03-30T00:00:00 | https://arxiv.org/abs/2404.00308v1 | [
"https://github.com/TencentARC/ST-LLM"
] | In the paper 'ST-LLM: Large Language Models Are Effective Temporal Learners', what Correctness of Information score did the ST-LLM-7B model get on the VideoInstruct dataset
| 3.23 |
Chameleon | 2-HiGCN | Higher-order Graph Convolutional Network with Flower-Petals Laplacians on Simplicial Complexes | 2023-09-22T00:00:00 | https://arxiv.org/abs/2309.12971v2 | [
"https://github.com/yiminghh/higcn"
] | In the paper 'Higher-order Graph Convolutional Network with Flower-Petals Laplacians on Simplicial Complexes', what Accuracy score did the 2-HiGCN model get on the Chameleon dataset
| 68.47±0.45 |
SUN397 | ZLaP | Label Propagation for Zero-shot Classification with Vision-Language Models | 2024-04-05T00:00:00 | https://arxiv.org/abs/2404.04072v1 | [
"https://github.com/vladan-stojnic/zlap"
] | In the paper 'Label Propagation for Zero-shot Classification with Vision-Language Models', what Accuracy score did the ZLaP model get on the SUN397 dataset
| 71 |
CIFAR-100-LT (ρ=100) | VS + ADRW + TLA | A Unified Generalization Analysis of Re-Weighting and Logit-Adjustment for Imbalanced Learning. paper with code | 2023-10-07T00:00:00 | https://arxiv.org/abs/2310.04752 | [
"https://github.com/wang22ti/DDC"
] | In the paper 'A Unified Generalization Analysis of Re-Weighting and Logit-Adjustment for Imbalanced Learning. paper with code', what Error Rate score did the VS + ADRW + TLA model get on the CIFAR-100-LT (ρ=100) dataset
| 46.95 |
Road Anomaly | Mask2Anomaly | Unmasking Anomalies in Road-Scene Segmentation | 2023-07-25T00:00:00 | https://arxiv.org/abs/2307.13316v1 | [
"https://github.com/shyam671/mask2anomaly-unmasking-anomalies-in-road-scene-segmentation"
] | In the paper 'Unmasking Anomalies in Road-Scene Segmentation', what AP score did the Mask2Anomaly model get on the Road Anomaly dataset
| 79.70 |
AGQA 2.0 balanced | GF (sup) - Faster RCNN | Glance and Focus: Memory Prompting for Multi-Event Video Question Answering | 2024-01-03T00:00:00 | https://arxiv.org/abs/2401.01529v1 | [
"https://github.com/byz0e/glance-focus"
] | In the paper 'Glance and Focus: Memory Prompting for Multi-Event Video Question Answering', what Average Accuracy score did the GF (sup) - Faster RCNN model get on the AGQA 2.0 balanced dataset
| 55.08 |
GSM8K | Gemini Pro (maj1@32) | Gemini: A Family of Highly Capable Multimodal Models | 2023-12-19T00:00:00 | https://arxiv.org/abs/2312.11805v4 | [
"https://github.com/valdecy/pybibx"
] | In the paper 'Gemini: A Family of Highly Capable Multimodal Models', what Accuracy score did the Gemini Pro (maj1@32) model get on the GSM8K dataset
| 86.5 |
Pittsburgh-250k-test | ProGEO | ProGEO: Generating Prompts through Image-Text Contrastive Learning for Visual Geo-localization | 2024-06-04T00:00:00 | https://arxiv.org/abs/2406.01906v1 | [
"https://github.com/chain-mao/progeo"
] | In the paper 'ProGEO: Generating Prompts through Image-Text Contrastive Learning for Visual Geo-localization', what Recall@1 score did the ProGEO model get on the Pittsburgh-250k-test dataset
| 92.2 |
Wildtrack | ReST | ReST: A Reconfigurable Spatial-Temporal Graph Model for Multi-Camera Multi-Object Tracking | 2023-08-25T00:00:00 | https://arxiv.org/abs/2308.13229v1 | [
"https://github.com/chengche6230/rest"
] | In the paper 'ReST: A Reconfigurable Spatial-Temporal Graph Model for Multi-Camera Multi-Object Tracking', what IDF1 score did the ReST model get on the Wildtrack dataset
| 86.7 |
BreakHis | WaveMix | Which Backbone to Use: A Resource-efficient Domain Specific Comparison for Computer Vision | 2024-06-09T00:00:00 | https://arxiv.org/abs/2406.05612v2 | [
"https://github.com/pranavphoenix/Backbones"
] | In the paper 'Which Backbone to Use: A Resource-efficient Domain Specific Comparison for Computer Vision', what Accuracy (%) score did the WaveMix model get on the BreakHis dataset
| 99.39 |
MVTec LOCO AD | MuSc (zero-shot) | MuSc: Zero-Shot Industrial Anomaly Classification and Segmentation with Mutual Scoring of the Unlabeled Images | 2024-01-30T00:00:00 | https://arxiv.org/abs/2401.16753v1 | [
"https://github.com/xrli-U/MuSc"
] | In the paper 'MuSc: Zero-Shot Industrial Anomaly Classification and Segmentation with Mutual Scoring of the Unlabeled Images', what Avg. Detection AUROC score did the MuSc (zero-shot) model get on the MVTec LOCO AD dataset
| 75.9 |
ImageNet | TokenLearner-ViT-8 | Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09372v3 | [
"https://github.com/tobna/whattransformertofavor"
] | In the paper 'Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers', what Top 1 Accuracy score did the TokenLearner-ViT-8 model get on the ImageNet dataset
| 80.66% |
MM-Vet | LLaVA-Plus-7B (All Tools) | LLaVA-Plus: Learning to Use Tools for Creating Multimodal Agents | 2023-11-09T00:00:00 | https://arxiv.org/abs/2311.05437v1 | [
"https://github.com/LLaVA-VL/LLaVA-Plus-Codebase"
] | In the paper 'LLaVA-Plus: Learning to Use Tools for Creating Multimodal Agents', what GPT-4 score score did the LLaVA-Plus-7B (All Tools) model get on the MM-Vet dataset
| 27.5±0.3 |
WHAMR! | TD-Conformer (XL) + DM | On Time Domain Conformer Models for Monaural Speech Separation in Noisy Reverberant Acoustic Environments | 2023-10-09T00:00:00 | https://arxiv.org/abs/2310.06125v1 | [
"https://github.com/jwr1995/pubsep"
] | In the paper 'On Time Domain Conformer Models for Monaural Speech Separation in Noisy Reverberant Acoustic Environments', what SI-SDRi score did the TD-Conformer (XL) + DM model get on the WHAMR! dataset
| 14.6 |
Krapivin | PromptRank | PromptRank: Unsupervised Keyphrase Extraction Using Prompt | 2023-05-08T00:00:00 | https://arxiv.org/abs/2305.04490v2 | [
"https://github.com/hlt-nlp/promptrank"
] | In the paper 'PromptRank: Unsupervised Keyphrase Extraction Using Prompt', what F1@10 score did the PromptRank model get on the Krapivin dataset
| 16.71 |
MSR-VTT | COSA | COSA: Concatenated Sample Pretrained Vision-Language Foundation Model | 2023-06-15T00:00:00 | https://arxiv.org/abs/2306.09085v1 | [
"https://github.com/txh-mercury/cosa"
] | In the paper 'COSA: Concatenated Sample Pretrained Vision-Language Foundation Model', what CIDEr score did the COSA model get on the MSR-VTT dataset
| 74.7 |
EQ-Bench | OpenAI ADA | EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06281v2 | [
"https://github.com/eq-bench/eq-bench"
] | In the paper 'EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models', what EQ-Bench Score score did the OpenAI ADA model get on the EQ-Bench dataset
| 2.25 |
MCubeS | StitchFusion (RGB-D) | StitchFusion: Weaving Any Visual Modalities to Enhance Multimodal Semantic Segmentation | 2024-08-02T00:00:00 | https://arxiv.org/abs/2408.01343v1 | [
"https://github.com/libingyu01/stitchfusion-stitchfusion-weaving-any-visual-modalities-to-enhance-multimodal-semantic-segmentation"
] | In the paper 'StitchFusion: Weaving Any Visual Modalities to Enhance Multimodal Semantic Segmentation', what mIoU score did the StitchFusion (RGB-D) model get on the MCubeS dataset
| 52.72 |
Cora | TransGNN | Strong Transitivity Relations and Graph Neural Networks | 2024-01-01T00:00:00 | https://arxiv.org/abs/2401.01384v1 | [
"https://github.com/yassinmihemedi/strong-transitivity-relations-and-graph-neural-network"
] | In the paper 'Strong Transitivity Relations and Graph Neural Networks', what 1:1 Accuracy score did the TransGNN model get on the Cora dataset
| 85.1 |
ActivityNet-1.3 | UnLoc-L | UnLoc: A Unified Framework for Video Localization Tasks | 2023-08-21T00:00:00 | https://arxiv.org/abs/2308.11062v1 | [
"https://github.com/google-research/scenic"
] | In the paper 'UnLoc: A Unified Framework for Video Localization Tasks', what mAP IOU@0.5 score did the UnLoc-L model get on the ActivityNet-1.3 dataset
| 59.3 |
DDD17-SEG | EventSAM | Segment Any Events via Weighted Adaptation of Pivotal Tokens | 2023-12-24T00:00:00 | https://arxiv.org/abs/2312.16222v1 | [
"https://github.com/happychenpipi/eventsam"
] | In the paper 'Segment Any Events via Weighted Adaptation of Pivotal Tokens', what mIoU score did the EventSAM model get on the DDD17-SEG dataset
| 0.37 |
LLVIP | MMPedestron | When Pedestrian Detection Meets Multi-Modal Learning: Generalist Model and Benchmark Dataset | 2024-07-14T00:00:00 | https://arxiv.org/abs/2407.10125v1 | [
"https://github.com/BubblyYi/MMPedestron"
] | In the paper 'When Pedestrian Detection Meets Multi-Modal Learning: Generalist Model and Benchmark Dataset', what AP score did the MMPedestron model get on the LLVIP dataset
| 0.726 |
DUTS-TE | SAM2-UNet | SAM2-UNet: Segment Anything 2 Makes Strong Encoder for Natural and Medical Image Segmentation | 2024-08-16T00:00:00 | https://arxiv.org/abs/2408.08870v1 | [
"https://github.com/wzh0120/sam2-unet"
] | In the paper 'SAM2-UNet: Segment Anything 2 Makes Strong Encoder for Natural and Medical Image Segmentation', what MAE score did the SAM2-UNet model get on the DUTS-TE dataset
| 0.020 |
TvSum | UVCOM (train from scratch) | Bridging the Gap: A Unified Video Comprehension Framework for Moment Retrieval and Highlight Detection | 2023-11-28T00:00:00 | https://arxiv.org/abs/2311.16464v1 | [
"https://github.com/easonxiao-888/uvcom"
] | In the paper 'Bridging the Gap: A Unified Video Comprehension Framework for Moment Retrieval and Highlight Detection', what mAP score did the UVCOM (train from scratch) model get on the TvSum dataset
| 86.3 |
BoolQ | PaLM 2-S (1-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | [
"https://github.com/eternityyw/tram-benchmark"
] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-S (1-shot) model get on the BoolQ dataset
| 88.1 |
Chameleon (60%/20%/20% random splits) | HH-GCN | Half-Hop: A graph upsampling approach for slowing down message passing | 2023-08-17T00:00:00 | https://arxiv.org/abs/2308.09198v1 | [
"https://github.com/nerdslab/halfhop"
] | In the paper 'Half-Hop: A graph upsampling approach for slowing down message passing', what 1:1 Accuracy score did the HH-GCN model get on the Chameleon (60%/20%/20% random splits) dataset
| 60.24 ± 1.93 |
WHU-CD | ChangeMamba | ChangeMamba: Remote Sensing Change Detection With Spatiotemporal State Space Model | 2024-04-04T00:00:00 | https://arxiv.org/abs/2404.03425v6 | [
"https://github.com/chenhongruixuan/mambacd"
] | In the paper 'ChangeMamba: Remote Sensing Change Detection With Spatiotemporal State Space Model', what F1 score did the ChangeMamba model get on the WHU-CD dataset
| 94.19 |
LAVIB | EMA-VFI | LAVIB: A Large-scale Video Interpolation Benchmark | 2024-06-14T00:00:00 | https://arxiv.org/abs/2406.09754v2 | [
"https://github.com/alexandrosstergiou/lavib"
] | In the paper 'LAVIB: A Large-scale Video Interpolation Benchmark', what PSNR score did the EMA-VFI model get on the LAVIB dataset
| 33.14 |
ECLAIR | Res16UNet14C | ECLAIR: A High-Fidelity Aerial LiDAR Dataset for Semantic Segmentation | 2024-04-16T00:00:00 | https://arxiv.org/abs/2404.10699v1 | [
"https://github.com/sharpershape/eclair-dataset"
] | In the paper 'ECLAIR: A High-Fidelity Aerial LiDAR Dataset for Semantic Segmentation', what F1 score did the Res16UNet14C model get on the ECLAIR dataset
| 0.845 |
TAO | GLEE-Lite | General Object Foundation Model for Images and Videos at Scale | 2023-12-14T00:00:00 | https://arxiv.org/abs/2312.09158v1 | [
"https://github.com/FoundationVision/GLEE"
] | In the paper 'General Object Foundation Model for Images and Videos at Scale', what TETA score did the GLEE-Lite model get on the TAO dataset
| 40.1 |
Oxford 102 Flower | PromptKD | PromptKD: Unsupervised Prompt Distillation for Vision-Language Models | 2024-03-05T00:00:00 | https://arxiv.org/abs/2403.02781v5 | [
"https://github.com/zhengli97/promptkd"
] | In the paper 'PromptKD: Unsupervised Prompt Distillation for Vision-Language Models', what Harmonic mean score did the PromptKD model get on the Oxford 102 Flower dataset
| 90.24 |
ACE 2005 | PromptNER [BERT-large] | PromptNER: Prompt Locating and Typing for Named Entity Recognition | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.17104v1 | [
"https://github.com/tricktreat/promptner"
] | In the paper 'PromptNER: Prompt Locating and Typing for Named Entity Recognition', what F1 score did the PromptNER [BERT-large] model get on the ACE 2005 dataset
| 87.21 |
DUT-OMRON | SAM2-UNet | SAM2-UNet: Segment Anything 2 Makes Strong Encoder for Natural and Medical Image Segmentation | 2024-08-16T00:00:00 | https://arxiv.org/abs/2408.08870v1 | [
"https://github.com/wzh0120/sam2-unet"
] | In the paper 'SAM2-UNet: Segment Anything 2 Makes Strong Encoder for Natural and Medical Image Segmentation', what MAE score did the SAM2-UNet model get on the DUT-OMRON dataset
| 0.039 |
DEplain-web-doc | long-mBART (trained on DEplain-web-doc) | DEPLAIN: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification | 2023-05-30T00:00:00 | https://arxiv.org/abs/2305.18939v1 | [
"https://github.com/rstodden/deplain"
] | In the paper 'DEPLAIN: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification', what SARI (EASSE>=0.2.1) score did the long-mBART (trained on DEplain-web-doc) model get on the DEplain-web-doc dataset
| 49.584 |
HERA RFI Detection | Spiking Nerest Latent Neighbours | RFI Detection with Spiking Neural Networks | 2023-11-24T00:00:00 | https://arxiv.org/abs/2311.14303v2 | [
"https://github.com/pritchardn/snn-nln"
] | In the paper 'RFI Detection with Spiking Neural Networks', what AUROC score did the Spiking Nerest Latent Neighbours model get on the HERA RFI Detection dataset
| 0.944 |
ImageNet 128x128 | DisCo-Diff | DisCo-Diff: Enhancing Continuous Diffusion Models with Discrete Latents | 2024-07-03T00:00:00 | https://arxiv.org/abs/2407.03300v1 | [
"https://github.com/gcorso/disco-diffdock"
] | In the paper 'DisCo-Diff: Enhancing Continuous Diffusion Models with Discrete Latents', what FID score did the DisCo-Diff model get on the ImageNet 128x128 dataset
| 1.73 |
VLCS | GMDG (RegNetY-16GF) | Rethinking Multi-domain Generalization with A General Learning Objective | 2024-02-29T00:00:00 | https://arxiv.org/abs/2402.18853v1 | [
"https://github.com/zhaorui-tan/GMDG_cvpr2024"
] | In the paper 'Rethinking Multi-domain Generalization with A General Learning Objective', what Average Accuracy score did the GMDG (RegNetY-16GF) model get on the VLCS dataset
| 82.4 |
ImageNet | KD++(T:resnet-152 S:resnet-50) | Improving Knowledge Distillation via Regularizing Feature Norm and Direction | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.17007v1 | [
"https://github.com/wangyz1608/knowledge-distillation-via-nd"
] | In the paper 'Improving Knowledge Distillation via Regularizing Feature Norm and Direction', what Top-1 accuracy % score did the KD++(T:resnet-152 S:resnet-50) model get on the ImageNet dataset
| 77.48 |
MATH | CR (GPT-4 model, w/o code) | Cumulative Reasoning with Large Language Models | 2023-08-08T00:00:00 | https://arxiv.org/abs/2308.04371v6 | [
"https://github.com/iiis-ai/cumulative-reasoning"
] | In the paper 'Cumulative Reasoning with Large Language Models', what Accuracy score did the CR (GPT-4 model, w/o code) model get on the MATH dataset
| 58.0 |
CIFAR-100 | SparseSwin | SparseSwin: Swin Transformer with Sparse Transformer Block | 2023-09-11T00:00:00 | https://arxiv.org/abs/2309.05224v1 | [
"https://github.com/krisnapinasthika/sparseswin"
] | In the paper 'SparseSwin: Swin Transformer with Sparse Transformer Block', what Percentage correct score did the SparseSwin model get on the CIFAR-100 dataset
| 85.35 |
NYU Depth v2 | PGT (Swin-S) | Prompt Guided Transformer for Multi-Task Dense Prediction | 2023-07-28T00:00:00 | https://arxiv.org/abs/2307.15362v1 | [
"https://github.com/innovator-zero/MTDP_Lib"
] | In the paper 'Prompt Guided Transformer for Multi-Task Dense Prediction', what Mean IoU score did the PGT (Swin-S) model get on the NYU Depth v2 dataset
| 46.43 |
MVTec AD | FAIR | FAIR: Frequency-aware Image Restoration for Industrial Visual Anomaly Detection | 2023-09-13T00:00:00 | https://arxiv.org/abs/2309.07068v1 | [
"https://github.com/liutongkun/fair"
] | In the paper 'FAIR: Frequency-aware Image Restoration for Industrial Visual Anomaly Detection', what Detection AUROC score did the FAIR model get on the MVTec AD dataset
| 98.6 |
BSD100 - 4x upscaling | WaveMixSR-V2 | WaveMixSR-V2: Enhancing Super-resolution with Higher Efficiency | 2024-09-16T00:00:00 | https://arxiv.org/abs/2409.10582v3 | [
"https://github.com/pranavphoenix/WaveMixSR"
] | In the paper 'WaveMixSR-V2: Enhancing Super-resolution with Higher Efficiency', what PSNR score did the WaveMixSR-V2 model get on the BSD100 - 4x upscaling dataset
| 27.87 |
Electricity (720) | PRformer | PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting | 2024-08-20T00:00:00 | https://arxiv.org/abs/2408.10483v1 | [
"https://github.com/usualheart/prformer"
] | In the paper 'PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting', what MSE score did the PRformer model get on the Electricity (720) dataset
| 0.185 |
BDD100K val | TwinLiteNet | TwinLiteNet: An Efficient and Lightweight Model for Driveable Area and Lane Segmentation in Self-Driving Cars | 2023-07-20T00:00:00 | https://arxiv.org/abs/2307.10705v5 | [
"https://github.com/chequanghuy/TwinLiteNet"
] | In the paper 'TwinLiteNet: An Efficient and Lightweight Model for Driveable Area and Lane Segmentation in Self-Driving Cars', what Params (M) score did the TwinLiteNet model get on the BDD100K val dataset
| 0.43 |
ImageNet | BLIP-2 OPT | Open-ended VQA benchmarking of Vision-Language models by exploiting Classification datasets and their semantic hierarchy | 2024-02-11T00:00:00 | https://arxiv.org/abs/2402.07270v2 | [
"https://github.com/lmb-freiburg/ovqa"
] | In the paper 'Open-ended VQA benchmarking of Vision-Language models by exploiting Classification datasets and their semantic hierarchy', what Contains score did the BLIP-2 OPT model get on the ImageNet dataset
| 35.49 |
SportsMOT | MOTIP (Deformable DETR) | Multiple Object Tracking as ID Prediction | 2024-03-25T00:00:00 | https://arxiv.org/abs/2403.16848v1 | [
"https://github.com/MCG-NJU/MOTIP"
] | In the paper 'Multiple Object Tracking as ID Prediction', what HOTA score did the MOTIP (Deformable DETR) model get on the SportsMOT dataset
| 71.9 |
TAP-Vid-Kinetics-First | CoTracker | CoTracker: It is Better to Track Together | 2023-07-14T00:00:00 | https://arxiv.org/abs/2307.07635v3 | [
"https://github.com/facebookresearch/co-tracker"
] | In the paper 'CoTracker: It is Better to Track Together', what Average Jaccard score did the CoTracker model get on the TAP-Vid-Kinetics-First dataset
| 48.8 |
ActivityNet-QA | VIOLET+ | Open-vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09363v1 | [
"https://github.com/mlvlab/ovqa"
] | In the paper 'Open-vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models', what Accuracy score did the VIOLET+ model get on the ActivityNet-QA dataset
| 39.7 |
FaceDetection | ConvTran | Improving Position Encoding of Transformers for Multivariate Time Series Classification | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.16642v1 | [
"https://github.com/navidfoumani/convtran"
] | In the paper 'Improving Position Encoding of Transformers for Multivariate Time Series Classification', what Accuracy score did the ConvTran model get on the FaceDetection dataset
| 0.6722 |
LibriSpeech test-other | Zipformer+pruned transducer
(no external language model) | Zipformer: A faster and better encoder for automatic speech recognition | 2023-10-17T00:00:00 | https://arxiv.org/abs/2310.11230v4 | [
"https://github.com/k2-fsa/icefall"
] | In the paper 'Zipformer: A faster and better encoder for automatic speech recognition', what Word Error Rate (WER) score did the Zipformer+pruned transducer
(no external language model) model get on the LibriSpeech test-other dataset
| 4.38 |
VC-Clothes | CAL+DLCR | DLCR: A Generative Data Expansion Framework via Diffusion for Clothes-Changing Person Re-ID | 2024-11-11T00:00:00 | https://arxiv.org/abs/2411.07205v2 | [
"https://github.com/croitorualin/dlcr"
] | In the paper 'DLCR: A Generative Data Expansion Framework via Diffusion for Clothes-Changing Person Re-ID', what Rank-1 score did the CAL+DLCR model get on the VC-Clothes dataset
| 87.1 |
A2D Sentences | SOC (Video-Swin-B) | SOC: Semantic-Assisted Object Cluster for Referring Video Object Segmentation | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.17011v1 | [
"https://github.com/RobertLuo1/NeurIPS2023_SOC"
] | In the paper 'SOC: Semantic-Assisted Object Cluster for Referring Video Object Segmentation', what Precision@0.5 score did the SOC (Video-Swin-B) model get on the A2D Sentences dataset
| 0.851 |
Aria Synthetic Environments | 3DETR | EFM3D: A Benchmark for Measuring Progress Towards 3D Egocentric Foundation Models | 2024-06-14T00:00:00 | https://arxiv.org/abs/2406.10224v1 | [
"https://github.com/facebookresearch/efm3d"
] | In the paper 'EFM3D: A Benchmark for Measuring Progress Towards 3D Egocentric Foundation Models', what MAP score did the 3DETR model get on the Aria Synthetic Environments dataset
| 33 |
GTA5 to Cityscapes | HALO | Hyperbolic Active Learning for Semantic Segmentation under Domain Shift | 2023-06-19T00:00:00 | https://arxiv.org/abs/2306.11180v5 | [
"https://github.com/paolomandica/HALO"
] | In the paper 'Hyperbolic Active Learning for Semantic Segmentation under Domain Shift', what mIoU score did the HALO model get on the GTA5 to Cityscapes dataset
| 77.8 |
nuscenes Camera-Radar | RCM-Fusion | RCM-Fusion: Radar-Camera Multi-Level Fusion for 3D Object Detection | 2023-07-17T00:00:00 | https://arxiv.org/abs/2307.10249v5 | [
"https://github.com/mjseong0414/RCM-Fusion"
] | In the paper 'RCM-Fusion: Radar-Camera Multi-Level Fusion for 3D Object Detection', what NDS score did the RCM-Fusion model get on the nuscenes Camera-Radar dataset
| 58.7 |
TGIF-QA | LocVLM-Vid-B | Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs | 2024-04-11T00:00:00 | https://arxiv.org/abs/2404.07449v1 | [
"https://github.com/kahnchana/locvlm"
] | In the paper 'Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs', what Accuracy score did the LocVLM-Vid-B model get on the TGIF-QA dataset
| 51.8 |
GigaSpeech DEV | Zipformer+ CR-CTC/AED (no external language model) | CR-CTC: Consistency regularization on CTC for improved speech recognition | 2024-10-07T00:00:00 | https://arxiv.org/abs/2410.05101v3 | [
"https://github.com/k2-fsa/icefall"
] | In the paper 'CR-CTC: Consistency regularization on CTC for improved speech recognition', what Word Error Rate (WER) score did the Zipformer+ CR-CTC/AED (no external language model) model get on the GigaSpeech DEV dataset
| 9.92 |
CATT | Alkhalil | CATT: Character-based Arabic Tashkeel Transformer | 2024-07-03T00:00:00 | https://arxiv.org/abs/2407.03236v3 | [
"https://github.com/abjadai/catt"
] | In the paper 'CATT: Character-based Arabic Tashkeel Transformer', what DER(%) score did the Alkhalil model get on the CATT dataset
| 14.232 |
ImageNet | EfficientFormer-V2-S0 | Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09372v3 | [
"https://github.com/tobna/whattransformertofavor"
] | In the paper 'Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers', what Top 1 Accuracy score did the EfficientFormer-V2-S0 model get on the ImageNet dataset
| 71.53% |
LFW | DAEFR | Dual Associated Encoder for Face Restoration | 2023-08-14T00:00:00 | https://arxiv.org/abs/2308.07314v2 | [
"https://github.com/LIAGM/DAEFR"
] | In the paper 'Dual Associated Encoder for Face Restoration', what FID score did the DAEFR model get on the LFW dataset
| 47.532 |
ImageNet 64x64 | 2-rectified flow++ (NFE=2) | Improving the Training of Rectified Flows | 2024-05-30T00:00:00 | https://arxiv.org/abs/2405.20320v2 | [
"https://github.com/sangyun884/rfpp"
] | In the paper 'Improving the Training of Rectified Flows', what FID score did the 2-rectified flow++ (NFE=2) model get on the ImageNet 64x64 dataset
| 3.64 |
MixSNIPS | BiSLU | Joint Multiple Intent Detection and Slot Filling with Supervised Contrastive Learning and Self-Distillation | 2023-08-28T00:00:00 | https://arxiv.org/abs/2308.14654v1 | [
"https://github.com/anhtunguyen98/bislu"
] | In the paper 'Joint Multiple Intent Detection and Slot Filling with Supervised Contrastive Learning and Self-Distillation', what Accuracy score did the BiSLU model get on the MixSNIPS dataset
| 97.8 |
ImageNet - ResNet 50 - 90% sparsity | Feather | Feather: An Elegant Solution to Effective DNN Sparsification | 2023-10-03T00:00:00 | https://arxiv.org/abs/2310.02448v1 | [
"https://github.com/athglentis/feather"
] | In the paper 'Feather: An Elegant Solution to Effective DNN Sparsification', what Top-1 Accuracy score did the Feather model get on the ImageNet - ResNet 50 - 90% sparsity dataset
| 76.93 |
Mini-Imagenet 5-way (5-shot) | SemFew-Trans | Simple Semantic-Aided Few-Shot Learning | 2023-11-30T00:00:00 | https://arxiv.org/abs/2311.18649v3 | [
"https://github.com/zhangdoudou123/semfew"
] | In the paper 'Simple Semantic-Aided Few-Shot Learning', what Accuracy score did the SemFew-Trans model get on the Mini-Imagenet 5-way (5-shot) dataset
| 86.49 |
SportsMOT | DeepEIoU + GTA | GTA: Global Tracklet Association for Multi-Object Tracking in Sports | 2024-11-12T00:00:00 | https://arxiv.org/abs/2411.08216v1 | [
"https://github.com/sjc042/gta-link"
] | In the paper 'GTA: Global Tracklet Association for Multi-Object Tracking in Sports', what HOTA score did the DeepEIoU + GTA model get on the SportsMOT dataset
| 81.0 |
RefCOCO testA | VATEX | Vision-Aware Text Features in Referring Image Segmentation: From Object Understanding to Context Understanding | 2024-04-12T00:00:00 | https://arxiv.org/abs/2404.08590v2 | [
"https://github.com/nero1342/VATEX_RIS"
] | In the paper 'Vision-Aware Text Features in Referring Image Segmentation: From Object Understanding to Context Understanding', what mIoU score did the VATEX model get on the RefCOCO testA dataset
| 79.64 |
ImageNet 64x64 | GDD-I | Diffusion Models Are Innate One-Step Generators | 2024-05-31T00:00:00 | https://arxiv.org/abs/2405.20750v2 | [
"https://github.com/Zyriix/GDD"
] | In the paper 'Diffusion Models Are Innate One-Step Generators', what FID score did the GDD-I model get on the ImageNet 64x64 dataset
| 1.16 |
Kinetics-600 12 frames, 64x64 | LARP | LARP: Tokenizing Videos with a Learned Autoregressive Generative Prior | 2024-10-28T00:00:00 | https://arxiv.org/abs/2410.21264v1 | [
"https://github.com/hywang66/LARP"
] | In the paper 'LARP: Tokenizing Videos with a Learned Autoregressive Generative Prior', what FVD score did the LARP model get on the Kinetics-600 12 frames, 64x64 dataset
| 5.1 |
Coastal Inundation Maps with Floodwater Depth Values | CASPIAN | Deep Vision-Based Framework for Coastal Flood Prediction Under Climate Change Impacts and Shoreline Adaptations | 2024-06-06T00:00:00 | https://arxiv.org/abs/2406.15451v1 | [
"https://github.com/Arnukk/CASPIAN"
] | In the paper 'Deep Vision-Based Framework for Coastal Flood Prediction Under Climate Change Impacts and Shoreline Adaptations', what Average MAE score did the CASPIAN model get on the Coastal Inundation Maps with Floodwater Depth Values dataset
| 0.06 |
USNA-Cn2 (short-duration) | Offshore Macro Meteorological | Effective Benchmarks for Optical Turbulence Modeling | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03573v1 | [
"https://github.com/cdjellen/otbench"
] | In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the Offshore Macro Meteorological model get on the USNA-Cn2 (short-duration) dataset
| 0.178 |
ModelNet40 | DeLA | Decoupled Local Aggregation for Point Cloud Learning | 2023-08-31T00:00:00 | https://arxiv.org/abs/2308.16532v1 | [
"https://github.com/matrix-asc/dela"
] | In the paper 'Decoupled Local Aggregation for Point Cloud Learning', what Overall Accuracy score did the DeLA model get on the ModelNet40 dataset
| 94.0 |
Mol-Instruction | SLM4CRP | A Self-feedback Knowledge Elicitation Approach for Chemical Reaction Predictions | 2024-04-15T00:00:00 | https://arxiv.org/abs/2404.09606v1 | [
"https://github.com/ai-hpc-research-team/slm4crp"
] | In the paper 'A Self-feedback Knowledge Elicitation Approach for Chemical Reaction Predictions', what METEOR score did the SLM4CRP model get on the Mol-Instruction dataset
| 0.901 |
DTD | RPO | Read-only Prompt Optimization for Vision-Language Few-shot Learning | 2023-08-29T00:00:00 | https://arxiv.org/abs/2308.14960v2 | [
"https://github.com/mlvlab/rpo"
] | In the paper 'Read-only Prompt Optimization for Vision-Language Few-shot Learning', what Harmonic mean score did the RPO model get on the DTD dataset
| 68.61 |
ADE20K | MetaPrompt-SD | Harnessing Diffusion Models for Visual Perception with Meta Prompts | 2023-12-22T00:00:00 | https://arxiv.org/abs/2312.14733v1 | [
"https://github.com/fudan-zvg/meta-prompts"
] | In the paper 'Harnessing Diffusion Models for Visual Perception with Meta Prompts', what Validation mIoU score did the MetaPrompt-SD model get on the ADE20K dataset
| 56.8 |
BACE | ChemBFN | A Bayesian Flow Network Framework for Chemistry Tasks | 2024-07-28T00:00:00 | https://arxiv.org/abs/2407.20294v1 | [
"https://github.com/Augus1999/bayesian-flow-network-for-chemistry"
] | In the paper 'A Bayesian Flow Network Framework for Chemistry Tasks', what ROC-AUC score did the ChemBFN model get on the BACE dataset
| 73.56 |
MVTec AD | AdaCLIP | AdaCLIP: Adapting CLIP with Hybrid Learnable Prompts for Zero-Shot Anomaly Detection | 2024-07-22T00:00:00 | https://arxiv.org/abs/2407.15795v1 | [
"https://github.com/caoyunkang/adaclip"
] | In the paper 'AdaCLIP: Adapting CLIP with Hybrid Learnable Prompts for Zero-Shot Anomaly Detection', what Detection AUROC score did the AdaCLIP model get on the MVTec AD dataset
| 89.2 |