dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
DanceTrack | MeMOTR (Deformable DETR) | MeMOTR: Long-Term Memory-Augmented Transformer for Multi-Object Tracking | 2023-07-28T00:00:00 | https://arxiv.org/abs/2307.15700v3 | [
"https://github.com/mcg-nju/memotr"
] | In the paper 'MeMOTR: Long-Term Memory-Augmented Transformer for Multi-Object Tracking', what HOTA score did the MeMOTR (Deformable DETR) model get on the DanceTrack dataset
| 63.4 |
QVHighlights | VideoLights-B-pt | VideoLights: Feature Refinement and Cross-Task Alignment Transformer for Joint Video Highlight Detection and Moment Retrieval | 2024-12-02T00:00:00 | https://arxiv.org/abs/2412.01558v1 | [
"https://github.com/dpaul06/VideoLights"
] | In the paper 'VideoLights: Feature Refinement and Cross-Task Alignment Transformer for Joint Video Highlight Detection and Moment Retrieval', what mAP score did the VideoLights-B-pt model get on the QVHighlights dataset
| 42.84 |
ImageNet 64x64 | CAF | Constant Acceleration Flow | 2024-11-01T00:00:00 | https://arxiv.org/abs/2411.00322v1 | [
"https://github.com/mlvlab/CAF"
] | In the paper 'Constant Acceleration Flow', what Inception Score did the CAF model get on the ImageNet 64x64 dataset
| 62.03 |
OTB-2015 | ODTrack-L | ODTrack: Online Dense Temporal Token Learning for Visual Tracking | 2024-01-03T00:00:00 | https://arxiv.org/abs/2401.01686v1 | [
"https://github.com/gxnu-zhonglab/odtrack"
] | In the paper 'ODTrack: Online Dense Temporal Token Learning for Visual Tracking', what AUC score did the ODTrack-L model get on the OTB-2015 dataset
| 0.724 |
ogbn-arxiv | 3-HiGCN | Higher-order Graph Convolutional Network with Flower-Petals Laplacians on Simplicial Complexes | 2023-09-22T00:00:00 | https://arxiv.org/abs/2309.12971v2 | [
"https://github.com/yiminghh/higcn"
] | In the paper 'Higher-order Graph Convolutional Network with Flower-Petals Laplacians on Simplicial Complexes', what Validation Accuracy score did the 3-HiGCN model get on the ogbn-arxiv dataset
| 0.7641±0.0053 |
Vid4 - 4x upscaling | MIA-VSR | Video Super-Resolution Transformer with Masked Inter&Intra-Frame Attention | 2024-01-12T00:00:00 | https://arxiv.org/abs/2401.06312v4 | [
"https://github.com/labshuhanggu/mia-vsr"
] | In the paper 'Video Super-Resolution Transformer with Masked Inter&Intra-Frame Attention', what PSNR score did the MIA-VSR model get on the Vid4 - 4x upscaling dataset
| 28.20 |
CVC-ClinicDB | PVT-GCASCADE | G-CASCADE: Efficient Cascaded Graph Convolutional Decoding for 2D Medical Image Segmentation | 2023-10-24T00:00:00 | https://arxiv.org/abs/2310.16175v1 | [
"https://github.com/SLDGroup/G-CASCADE"
] | In the paper 'G-CASCADE: Efficient Cascaded Graph Convolutional Decoding for 2D Medical Image Segmentation', what mean Dice score did the PVT-GCASCADE model get on the CVC-ClinicDB dataset
| 0.9468 |
STL-10 | DPAC | Deep Online Probability Aggregation Clustering | 2024-07-07T00:00:00 | https://arxiv.org/abs/2407.05246v2 | [
"https://github.com/aomandechenai/deep-probability-aggregation-clustering"
] | In the paper 'Deep Online Probability Aggregation Clustering', what Accuracy score did the DPAC model get on the STL-10 dataset
| 0.934 |
MM-Vet | LLaVA-OneVision-0.5B | LLaVA-OneVision: Easy Visual Task Transfer | 2024-08-06T00:00:00 | https://arxiv.org/abs/2408.03326v3 | [
"https://github.com/evolvinglmms-lab/lmms-eval"
] | In the paper 'LLaVA-OneVision: Easy Visual Task Transfer', what GPT-4 score did the LLaVA-OneVision-0.5B model get on the MM-Vet dataset
| 29.1 |
WildDESED | CRNN (WildDESED) | WildDESED: An LLM-Powered Dataset for Wild Domestic Environment Sound Event Detection System | 2024-07-04T00:00:00 | https://arxiv.org/abs/2407.03656v3 | [
"https://github.com/swagshaw/wilddesed"
] | In the paper 'WildDESED: An LLM-Powered Dataset for Wild Domestic Environment Sound Event Detection System', what PSDS1 (-5dB) score did the CRNN (WildDESED) model get on the WildDESED dataset
| 0.048 |
ChEBI-20 | MolCA, Galac1.3B | MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter | 2023-10-19T00:00:00 | https://arxiv.org/abs/2310.12798v4 | [
"https://github.com/acharkq/molca"
] | In the paper 'MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter', what BLEU-2 score did the MolCA, Galac1.3B model get on the ChEBI-20 dataset
| 62.0 |
COCO-Text | CLIP4STR-B | CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model | 2023-05-23T00:00:00 | https://arxiv.org/abs/2305.14014v3 | [
"https://github.com/VamosC/CLIP4STR"
] | In the paper 'CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model', what 1:1 Accuracy score did the CLIP4STR-B model get on the COCO-Text dataset
| 81.1 |
SIR^2(Postcard) | RDNet | Reversible Decoupling Network for Single Image Reflection Removal | 2024-10-10T00:00:00 | https://arxiv.org/abs/2410.08063v1 | [
"https://github.com/lime-j/RDNet"
] | In the paper 'Reversible Decoupling Network for Single Image Reflection Removal', what PSNR score did the RDNet model get on the SIR^2(Postcard) dataset
| 26.33 |
MBPP | GPT-3.5 Turbo + INTERVENOR | INTERVENOR: Prompting the Coding Ability of Large Language Models with the Interactive Chain of Repair | 2023-11-16T00:00:00 | https://arxiv.org/abs/2311.09868v5 | [
"https://github.com/neuir/intervenor"
] | In the paper 'INTERVENOR: Prompting the Coding Ability of Large Language Models with the Interactive Chain of Repair', what Accuracy score did the GPT-3.5 Turbo + INTERVENOR model get on the MBPP dataset
| 69.8 |
MM-Vet | OtterHD-8B | OtterHD: A High-Resolution Multi-modality Model | 2023-11-07T00:00:00 | https://arxiv.org/abs/2311.04219v1 | [
"https://github.com/luodian/otter"
] | In the paper 'OtterHD: A High-Resolution Multi-modality Model', what GPT-4 score did the OtterHD-8B model get on the MM-Vet dataset
| 26.3 |
LEVIR+ | ChangeMamba | ChangeMamba: Remote Sensing Change Detection With Spatiotemporal State Space Model | 2024-04-04T00:00:00 | https://arxiv.org/abs/2404.03425v6 | [
"https://github.com/chenhongruixuan/mambacd"
] | In the paper 'ChangeMamba: Remote Sensing Change Detection With Spatiotemporal State Space Model', what F1 score did the ChangeMamba model get on the LEVIR+ dataset
| 88.39 |
ModelNet40 | Point-GN | Point-GN: A Non-Parametric Network Using Gaussian Positional Encoding for Point Cloud Classification | 2024-12-04T00:00:00 | https://arxiv.org/abs/2412.03056v2 | [
"https://github.com/asalarpour/Point_GN"
] | In the paper 'Point-GN: A Non-Parametric Network Using Gaussian Positional Encoding for Point Cloud Classification', what Accuracy (%) score did the Point-GN model get on the ModelNet40 dataset
| 85.3 |
COCOFake | FasterThanLies | Faster Than Lies: Real-time Deepfake Detection using Binary Neural Networks | 2024-06-07T00:00:00 | https://arxiv.org/abs/2406.04932v1 | [
"https://github.com/fedeloper/binary_deepfake_detection"
] | In the paper 'Faster Than Lies: Real-time Deepfake Detection using Binary Neural Networks', what Accuracy score did the FasterThanLies model get on the COCOFake dataset
| 99.25 |
VisDA-2017 | TransAdapter | TransAdapter: Vision Transformer for Feature-Centric Unsupervised Domain Adaptation | 2024-12-05T00:00:00 | https://arxiv.org/abs/2412.04073v1 | [
"https://github.com/enesdoruk/TransAdapter"
] | In the paper 'TransAdapter: Vision Transformer for Feature-Centric Unsupervised Domain Adaptation', what Accuracy score did the TransAdapter model get on the VisDA-2017 dataset
| 91.2 |
arXiv-year | Dir-GNN | Edge Directionality Improves Learning on Heterophilic Graphs | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10498v3 | [
"https://github.com/emalgorithm/directed-graph-neural-network"
] | In the paper 'Edge Directionality Improves Learning on Heterophilic Graphs', what Accuracy score did the Dir-GNN model get on the arXiv-year dataset
| 64.08±0.26 |
PATTERN | NeuralWalker | Learning Long Range Dependencies on Graphs via Random Walks | 2024-06-05T00:00:00 | https://arxiv.org/abs/2406.03386v2 | [
"https://github.com/borgwardtlab/neuralwalker"
] | In the paper 'Learning Long Range Dependencies on Graphs via Random Walks', what Accuracy score did the NeuralWalker model get on the PATTERN dataset
| 86.977 ± 0.012 |
MM-Vet | CaMML-13B | CaMML: Context-Aware Multimodal Learner for Large Models | 2024-01-06T00:00:00 | https://arxiv.org/abs/2401.03149v3 | [
"https://github.com/amazon-science/camml"
] | In the paper 'CaMML: Context-Aware Multimodal Learner for Large Models', what GPT-4 score did the CaMML-13B model get on the MM-Vet dataset
| 36.4 |
APPS | WizardCoder-15b | CodeChain: Towards Modular Code Generation Through Chain of Self-revisions with Representative Sub-modules | 2023-10-13T00:00:00 | https://arxiv.org/abs/2310.08992v3 | [
"https://github.com/SalesforceAIResearch/CodeChain"
] | In the paper 'CodeChain: Towards Modular Code Generation Through Chain of Self-revisions with Representative Sub-modules', what Introductory Pass@1 score did the WizardCoder-15b model get on the APPS dataset
| 26.04 |
ColonINST-v1 (Seen) | LLaVA-Med-v1.0 (w/o LoRA, w/o extra data) | LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day | 2023-06-01T00:00:00 | https://arxiv.org/abs/2306.00890v1 | [
"https://github.com/microsoft/LLaVA-Med"
] | In the paper 'LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day', what Accuracy score did the LLaVA-Med-v1.0 (w/o LoRA, w/o extra data) model get on the ColonINST-v1 (Seen) dataset
| 97.74 |
Cityscapes val | SwinMTL | SwinMTL: A Shared Architecture for Simultaneous Depth Estimation and Semantic Segmentation from Monocular Camera Images | 2024-03-15T00:00:00 | https://arxiv.org/abs/2403.10662v1 | [
"https://github.com/pardistaghavi/swinmtl"
] | In the paper 'SwinMTL: A Shared Architecture for Simultaneous Depth Estimation and Semantic Segmentation from Monocular Camera Images', what mIoU score did the SwinMTL model get on the Cityscapes val dataset
| 76.41 |
Turbulence | CodeLlama:13B-4bit-quantised | Turbulence: Systematically and Automatically Testing Instruction-Tuned Large Language Models for Code | 2023-12-22T00:00:00 | https://arxiv.org/abs/2312.14856v2 | [
"https://github.com/shahinhonarvar/turbulence-benchmark"
] | In the paper 'Turbulence: Systematically and Automatically Testing Instruction-Tuned Large Language Models for Code', what CorrSc score did the CodeLlama:13B-4bit-quantised model get on the Turbulence dataset
| 0.327 |
NAS-Bench-201, CIFAR-10 | DiNAS | Multi-conditioned Graph Diffusion for Neural Architecture Search | 2024-03-09T00:00:00 | https://arxiv.org/abs/2403.06020v2 | [
"https://github.com/rohanasthana/dinas"
] | In the paper 'Multi-conditioned Graph Diffusion for Neural Architecture Search', what Accuracy (Test) score did the DiNAS model get on the NAS-Bench-201, CIFAR-10 dataset
| 94.37 |
SIQA | phi-1.5 1.3B (zero-shot) | Textbooks Are All You Need II: phi-1.5 technical report | 2023-09-11T00:00:00 | https://arxiv.org/abs/2309.05463v1 | [
"https://github.com/knowlab/bi-weekly-paper-presentation"
] | In the paper 'Textbooks Are All You Need II: phi-1.5 technical report', what Accuracy score did the phi-1.5 1.3B (zero-shot) model get on the SIQA dataset
| 52.6 |
WikiQA | TANDA-DeBERTa-V3-Large + ALL | Structural Self-Supervised Objectives for Transformers | 2023-09-15T00:00:00 | https://arxiv.org/abs/2309.08272v1 | [
"https://github.com/lucadiliello/transformers-framework"
] | In the paper 'Structural Self-Supervised Objectives for Transformers', what MAP score did the TANDA-DeBERTa-V3-Large + ALL model get on the WikiQA dataset
| 0.927 |
Matterport3D | SFSS-MMSI (RGB Only) | Single Frame Semantic Segmentation Using Multi-Modal Spherical Images | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09369v1 | [
"https://github.com/sguttikon/SFSS-MMSI"
] | In the paper 'Single Frame Semantic Segmentation Using Multi-Modal Spherical Images', what Validation mIoU score did the SFSS-MMSI (RGB Only) model get on the Matterport3D dataset
| 35.15 |
PMD | SAM2-UNet | SAM2-UNet: Segment Anything 2 Makes Strong Encoder for Natural and Medical Image Segmentation | 2024-08-16T00:00:00 | https://arxiv.org/abs/2408.08870v1 | [
"https://github.com/wzh0120/sam2-unet"
] | In the paper 'SAM2-UNet: Segment Anything 2 Makes Strong Encoder for Natural and Medical Image Segmentation', what MAE score did the SAM2-UNet model get on the PMD dataset
| 0.027 |
Squirrel | HiGNN | Learn from Heterophily: Heterophilous Information-enhanced Graph Neural Network | 2024-03-26T00:00:00 | https://arxiv.org/abs/2403.17351v2 | [
"https://github.com/zylMozart/HiGNN"
] | In the paper 'Learn from Heterophily: Heterophilous Information-enhanced Graph Neural Network', what Accuracy score did the HiGNN model get on the Squirrel dataset
| 54.78 ± 1.58 |
Astock | SRL&Factors | FinReport: Explainable Stock Earnings Forecasting via News Factor Analyzing Model | 2024-03-05T00:00:00 | https://arxiv.org/abs/2403.02647v1 | [
"https://github.com/frinkleko/finreport"
] | In the paper 'FinReport: Explainable Stock Earnings Forecasting via News Factor Analyzing Model', what Accuracy score did the SRL&Factors model get on the Astock dataset
| 69.48 |
ADE20K-150 | EBSeg-L | Open-Vocabulary Semantic Segmentation with Image Embedding Balancing | 2024-06-14T00:00:00 | https://arxiv.org/abs/2406.09829v1 | [
"https://github.com/slonetime/ebseg"
] | In the paper 'Open-Vocabulary Semantic Segmentation with Image Embedding Balancing', what mIoU score did the EBSeg-L model get on the ADE20K-150 dataset
| 32.8 |
COCO 2014 | SADCL | Semantic-Aware Dual Contrastive Learning for Multi-label Image Classification | 2023-07-19T00:00:00 | https://arxiv.org/abs/2307.09715v4 | [
"https://github.com/yu-gi-oh-leilei/sadcl"
] | In the paper 'Semantic-Aware Dual Contrastive Learning for Multi-label Image Classification', what mAP score did the SADCL model get on the COCO 2014 dataset
| 85.6 |
EconLogicQA | Llama-3-8B-Instruct | EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning | 2024-05-13T00:00:00 | https://arxiv.org/abs/2405.07938v2 | [
"https://github.com/yinzhu-quan/lm-evaluation-harness"
] | In the paper 'EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning', what Accuracy score did the Llama-3-8B-Instruct model get on the EconLogicQA dataset
| 0.3462 |
CIFAR-100, 400 Labels | SemiReward | SemiReward: A General Reward Model for Semi-supervised Learning | 2023-10-04T00:00:00 | https://arxiv.org/abs/2310.03013v2 | [
"https://github.com/Westlake-AI/SemiReward"
] | In the paper 'SemiReward: A General Reward Model for Semi-supervised Learning', what Percentage error score did the SemiReward model get on the CIFAR-100, 400 Labels dataset
| 15.62 |
LingOly | Mixtral 8x7B | LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low-Resource and Extinct Languages | 2024-06-10T00:00:00 | https://arxiv.org/abs/2406.06196v3 | [
"https://github.com/am-bean/lingOly"
] | In the paper 'LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low-Resource and Extinct Languages', what Exact Match Accuracy score did the Mixtral 8x7B model get on the LingOly dataset
| 14.2% |
Winoground | MiniGPT-4-7B (BERTScore) | An Examination of the Compositionality of Large Generative Vision-Language Models | 2023-08-21T00:00:00 | https://arxiv.org/abs/2308.10509v2 | [
"https://github.com/teleema/sade"
] | In the paper 'An Examination of the Compositionality of Large Generative Vision-Language Models', what Text Score did the MiniGPT-4-7B (BERTScore) model get on the Winoground dataset
| 14.00 |
VeRi-776 | MBR4B (without re-ranking) | Strength in Diversity: Multi-Branch Representation Learning for Vehicle Re-Identification | 2023-10-02T00:00:00 | https://arxiv.org/abs/2310.01129v1 | [
"https://github.com/videturfortuna/vehicle_reid_itsc2023"
] | In the paper 'Strength in Diversity: Multi-Branch Representation Learning for Vehicle Re-Identification', what mAP score did the MBR4B (without re-ranking) model get on the VeRi-776 dataset
| 84.72 |
LSA64 | HWGAT | Hierarchical Windowed Graph Attention Network and a Large Scale Dataset for Isolated Indian Sign Language Recognition | 2024-07-19T00:00:00 | https://arxiv.org/abs/2407.14224v2 | [
"https://github.com/suvajit-patra/sl-hwgat"
] | In the paper 'Hierarchical Windowed Graph Attention Network and a Large Scale Dataset for Isolated Indian Sign Language Recognition', what Accuracy (%) score did the HWGAT model get on the LSA64 dataset
| 98.59 |
RefCOCOg-test | MagNet | Mask Grounding for Referring Image Segmentation | 2023-12-19T00:00:00 | https://arxiv.org/abs/2312.12198v2 | [
"https://github.com/yxchng/mask-grounding"
] | In the paper 'Mask Grounding for Referring Image Segmentation', what Overall IoU score did the MagNet model get on the RefCOCOg-test dataset
| 66.03 |
CIFAR-10 | DDIM+CS | Compensation Sampling for Improved Convergence in Diffusion Models | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06285v1 | [
"https://github.com/hotfinda/Compensation-sampling"
] | In the paper 'Compensation Sampling for Improved Convergence in Diffusion Models', what FID score did the DDIM+CS model get on the CIFAR-10 dataset
| 2.01 |
nuScenes Camera Only | SA-BEV | SA-BEV: Generating Semantic-Aware Bird's-Eye-View Feature for Multi-view 3D Object Detection | 2023-07-21T00:00:00 | https://arxiv.org/abs/2307.11477v1 | [
"https://github.com/mengtan00/sa-bev"
] | In the paper 'SA-BEV: Generating Semantic-Aware Bird's-Eye-View Feature for Multi-view 3D Object Detection', what NDS score did the SA-BEV model get on the nuScenes Camera Only dataset
| 62.4 |
USNA-Cn2 (short-duration) | Climatology | Effective Benchmarks for Optical Turbulence Modeling | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03573v1 | [
"https://github.com/cdjellen/otbench"
] | In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the Climatology model get on the USNA-Cn2 (short-duration) dataset
| 0.480 |
CelebA 64x64 | LEGO | Learning Stackable and Skippable LEGO Bricks for Efficient, Reconfigurable, and Variable-Resolution Diffusion Modeling | 2023-10-10T00:00:00 | https://arxiv.org/abs/2310.06389v3 | [
"https://github.com/JegZheng/LEGODiffusion"
] | In the paper 'Learning Stackable and Skippable LEGO Bricks for Efficient, Reconfigurable, and Variable-Resolution Diffusion Modeling', what FID score did the LEGO model get on the CelebA 64x64 dataset
| 2.09 |
Distinctions-646 | DiffMatte | Diffusion for Natural Image Matting | 2023-12-10T00:00:00 | https://arxiv.org/abs/2312.05915v2 | [
"https://github.com/yihanhu-2022/diffmatte"
] | In the paper 'Diffusion for Natural Image Matting', what SAD score did the DiffMatte model get on the Distinctions-646 dataset
| 15.50 |
SAFIM | deepseek-coder-1.3b-base | Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks | 2024-03-07T00:00:00 | https://arxiv.org/abs/2403.04814v3 | [
"https://github.com/gonglinyuan/safim"
] | In the paper 'Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks', what Algorithmic score did the deepseek-coder-1.3b-base model get on the SAFIM dataset
| 41.20 |
PROTEINS | R-GIN + PANDA | PANDA: Expanded Width-Aware Message Passing Beyond Rewiring | 2024-06-06T00:00:00 | https://arxiv.org/abs/2406.03671v2 | [
"https://github.com/jeongwhanchoi/panda"
] | In the paper 'PANDA: Expanded Width-Aware Message Passing Beyond Rewiring', what Accuracy score did the R-GIN + PANDA model get on the PROTEINS dataset
| 76.17 |
Cityscapes to ACDC | CMFormer | Learning Content-enhanced Mask Transformer for Domain Generalized Urban-Scene Segmentation | 2023-07-01T00:00:00 | https://arxiv.org/abs/2307.00371v5 | [
"https://github.com/BiQiWHU/CMFormer"
] | In the paper 'Learning Content-enhanced Mask Transformer for Domain Generalized Urban-Scene Segmentation', what mIoU score did the CMFormer model get on the Cityscapes to ACDC dataset
| 60.1 |
QVHighlights | UVCOM (w/ PT ASR Captions) | Bridging the Gap: A Unified Video Comprehension Framework for Moment Retrieval and Highlight Detection | 2023-11-28T00:00:00 | https://arxiv.org/abs/2311.16464v1 | [
"https://github.com/easonxiao-888/uvcom"
] | In the paper 'Bridging the Gap: A Unified Video Comprehension Framework for Moment Retrieval and Highlight Detection', what mAP score did the UVCOM (w/ PT ASR Captions) model get on the QVHighlights dataset
| 43.8 |
UCF101 | TURTLE (CLIP + DINOv2) | Let Go of Your Labels with Unsupervised Transfer | 2024-06-11T00:00:00 | https://arxiv.org/abs/2406.07236v1 | [
"https://github.com/mlbio-epfl/turtle"
] | In the paper 'Let Go of Your Labels with Unsupervised Transfer', what Accuracy score did the TURTLE (CLIP + DINOv2) model get on the UCF101 dataset
| 82.3 |
MVTec AD | GLASS | A Unified Anomaly Synthesis Strategy with Gradient Ascent for Industrial Anomaly Detection and Localization | 2024-07-12T00:00:00 | https://arxiv.org/abs/2407.09359v1 | [
"https://github.com/cqylunlun/glass"
] | In the paper 'A Unified Anomaly Synthesis Strategy with Gradient Ascent for Industrial Anomaly Detection and Localization', what Detection AUROC score did the GLASS model get on the MVTec AD dataset
| 99.9 |
Charades-STA | BM-DETR | Background-aware Moment Detection for Video Moment Retrieval | 2023-06-05T00:00:00 | https://arxiv.org/abs/2306.02728v3 | [
"https://github.com/minjoong507/bm-detr"
] | In the paper 'Background-aware Moment Detection for Video Moment Retrieval', what R@1 IoU=0.5 score did the BM-DETR model get on the Charades-STA dataset
| 59.48 |
DomainNet | MoA (OpenCLIP, ViT-B/16) | Domain Generalization Using Large Pretrained Models with Mixture-of-Adapters | 2023-10-17T00:00:00 | https://arxiv.org/abs/2310.11031v2 | [
"https://github.com/KU-CVLAB/MoA"
] | In the paper 'Domain Generalization Using Large Pretrained Models with Mixture-of-Adapters', what Average Accuracy score did the MoA (OpenCLIP, ViT-B/16) model get on the DomainNet dataset
| 62.7 |
pendigits | ConvTran | Improving Position Encoding of Transformers for Multivariate Time Series Classification | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.16642v1 | [
"https://github.com/navidfoumani/convtran"
] | In the paper 'Improving Position Encoding of Transformers for Multivariate Time Series Classification', what Accuracy score did the ConvTran model get on the pendigits dataset
| 0.9871 |
WOST | CLIP4STR-L (DataComp-1B) | CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model | 2023-05-23T00:00:00 | https://arxiv.org/abs/2305.14014v3 | [
"https://github.com/VamosC/CLIP4STR"
] | In the paper 'CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model', what 1:1 Accuracy score did the CLIP4STR-L (DataComp-1B) model get on the WOST dataset
| 90.6 |
LingOly | GPT-4o | LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low-Resource and Extinct Languages | 2024-06-10T00:00:00 | https://arxiv.org/abs/2406.06196v3 | [
"https://github.com/am-bean/lingOly"
] | In the paper 'LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low-Resource and Extinct Languages', what Exact Match Accuracy score did the GPT-4o model get on the LingOly dataset
| 37.6% |
MM-Vet | Imp-2B | Imp: Highly Capable Large Multimodal Models for Mobile Devices | 2024-05-20T00:00:00 | https://arxiv.org/abs/2405.12107v2 | [
"https://github.com/milvlg/imp"
] | In the paper 'Imp: Highly Capable Large Multimodal Models for Mobile Devices', what GPT-4 score did the Imp-2B model get on the MM-Vet dataset
| 33.5 |
TapCorrect | Beat This! | Beat this! Accurate beat tracking without DBN postprocessing | 2024-07-31T00:00:00 | https://arxiv.org/abs/2407.21658v1 | [
"https://github.com/CPJKU/beat_this"
] | In the paper 'Beat this! Accurate beat tracking without DBN postprocessing', what F1 score did the Beat This! model get on the TapCorrect dataset
| 86.4 |
MedMCQA Dev | MedMobile (3.8B) | MedMobile: A mobile-sized language model with expert-level clinical capabilities | 2024-10-11T00:00:00 | https://arxiv.org/abs/2410.09019v1 | [
"https://github.com/nyuolab/MedMobile"
] | In the paper 'MedMobile: A mobile-sized language model with expert-level clinical capabilities', what Accuracy score did the MedMobile (3.8B) model get on the MedMCQA Dev dataset
| 63.2 |
ImageNet | AIMv2-1B | Multimodal Autoregressive Pre-training of Large Vision Encoders | 2024-11-21T00:00:00 | https://arxiv.org/abs/2411.14402v1 | [
"https://github.com/apple/ml-aim"
] | In the paper 'Multimodal Autoregressive Pre-training of Large Vision Encoders', what Top 1 Accuracy score did the AIMv2-1B model get on the ImageNet dataset
| 88.1% |
ImageNet | TinySaver(Swin_large, 0.5 Acc drop) | Tiny Models are the Computational Saver for Large Models | 2024-03-26T00:00:00 | https://arxiv.org/abs/2403.17726v3 | [
"https://github.com/QingyuanWang/tinysaver"
] | In the paper 'Tiny Models are the Computational Saver for Large Models', what Top 1 Accuracy score did the TinySaver(Swin_large, 0.5 Acc drop) model get on the ImageNet dataset
| 85.74 |
LaSOT | LoRAT-L-378 | Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance | 2024-03-08T00:00:00 | https://arxiv.org/abs/2403.05231v2 | [
"https://github.com/litinglin/lorat"
] | In the paper 'Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance', what AUC score did the LoRAT-L-378 model get on the LaSOT dataset
| 75.1 |
SMAC MMM2_7m2M1M_vs_8m4M1M | DDN | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04T00:00:00 | https://arxiv.org/abs/2306.02430v1 | [
"https://github.com/j3soon/dfac-extended"
] | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the DDN model get on the SMAC MMM2_7m2M1M_vs_8m4M1M dataset
| 56.82 |
RefCOCO testA | MaskRIS (Swin-B, combined DB) | MaskRIS: Semantic Distortion-aware Data Augmentation for Referring Image Segmentation | 2024-11-28T00:00:00 | https://arxiv.org/abs/2411.19067v1 | [
"https://github.com/naver-ai/maskris"
] | In the paper 'MaskRIS: Semantic Distortion-aware Data Augmentation for Referring Image Segmentation', what Overall IoU score did the MaskRIS (Swin-B, combined DB) model get on the RefCOCO testA dataset
| 80.64 |
MATH | GPT-4-code model (w/o code) | Solving Challenging Math Word Problems Using GPT-4 Code Interpreter with Code-based Self-Verification | 2023-08-15T00:00:00 | https://arxiv.org/abs/2308.07921v1 | [
"https://github.com/kipok/nemo-skills"
] | In the paper 'Solving Challenging Math Word Problems Using GPT-4 Code Interpreter with Code-based Self-Verification', what Accuracy score did the GPT-4-code model (w/o code) model get on the MATH dataset
| 60.8 |
SA-1B | unSAM+ (Semi-supervised) | Segment Anything without Supervision | 2024-06-28T00:00:00 | https://arxiv.org/abs/2406.20081v1 | [
"https://github.com/frank-xwang/unsam"
] | In the paper 'Segment Anything without Supervision', what Average Precision score did the unSAM+ (Semi-supervised) model get on the SA-1B dataset
| 42.8 |
SOD4SB Public Test | GFL + Test Time Augmentation | BandRe: Rethinking Band-Pass Filters for Scale-Wise Object Detection Evaluation | 2023-07-21T00:00:00 | https://arxiv.org/abs/2307.11748v1 | [
"https://github.com/shinya7y/UniverseNet"
] | In the paper 'BandRe: Rethinking Band-Pass Filters for Scale-Wise Object Detection Evaluation', what AP50 score did the GFL + Test Time Augmentation model get on the SOD4SB Public Test dataset
| 73.1 |
ImageNet 64x64 | ECM-XL | Consistency Models Made Easy | 2024-06-20T00:00:00 | https://arxiv.org/abs/2406.14548v2 | [
"https://github.com/locuslab/ect"
] | In the paper 'Consistency Models Made Easy', what FID score did the ECM-XL model get on the ImageNet 64x64 dataset
| 1.67 |
Human3.6M | FinePOSE | FinePOSE: Fine-Grained Prompt-Driven 3D Human Pose Estimation via Diffusion Models | 2024-05-08T00:00:00 | https://arxiv.org/abs/2405.05216v1 | [
"https://github.com/PKU-ICST-MIPL/FinePOSE_CVPR2024"
] | In the paper 'FinePOSE: Fine-Grained Prompt-Driven 3D Human Pose Estimation via Diffusion Models', what Average MPJPE (mm) score did the FinePOSE model get on the Human3.6M dataset
| 16.7 |
MP-100 | CapeX | CapeX: Category-Agnostic Pose Estimation from Textual Point Explanation | 2024-06-01T00:00:00 | https://arxiv.org/abs/2406.00384v1 | [
"https://github.com/matanr/capex"
] | In the paper 'CapeX: Category-Agnostic Pose Estimation from Textual Point Explanation', what Mean PCK@0.2 - 1shot score did the CapeX model get on the MP-100 dataset
| 91.5 |
ImageNet-LT | DirMixE(ResNeXt-50) | Harnessing Hierarchical Label Distribution Variations in Test Agnostic Long-tail Recognition | 2024-05-13T00:00:00 | https://arxiv.org/abs/2405.07780v1 | [
"https://github.com/scongl/dirmixe"
] | In the paper 'Harnessing Hierarchical Label Distribution Variations in Test Agnostic Long-tail Recognition', what Top-1 Accuracy score did the DirMixE(ResNeXt-50) model get on the ImageNet-LT dataset
| 58.61 |
ImageNet | DeiT-B | Kolmogorov-Arnold Transformer | 2024-09-16T00:00:00 | https://arxiv.org/abs/2409.10594v1 | [
"https://github.com/Adamdad/kat"
] | In the paper 'Kolmogorov-Arnold Transformer', what Top 1 Accuracy score did the DeiT-B model get on the ImageNet dataset
| 81.8 |
FoodSeg103 | FoodSAM | FoodSAM: Any Food Segmentation | 2023-08-11T00:00:00 | https://arxiv.org/abs/2308.05938v1 | [
"https://github.com/jamesjg/foodsam"
] | In the paper 'FoodSAM: Any Food Segmentation', what mIoU score did the FoodSAM model get on the FoodSeg103 dataset
| 46.4 |
VLCS | PromptStyler (CLIP, ResNet-50) | PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization | 2023-07-27T00:00:00 | https://arxiv.org/abs/2307.15199v2 | [
"https://github.com/zhanghr2001/promptta"
] | In the paper 'PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization', what Average Accuracy score did the PromptStyler (CLIP, ResNet-50) model get on the VLCS dataset
| 82.3 |
OpenBookQA | PaLM 2-S (1-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | [
"https://github.com/eternityyw/tram-benchmark"
] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-S (1-shot) model get on the OpenBookQA dataset
| 57.4 |
ogbl-ppa | MPLP | Pure Message Passing Can Estimate Common Neighbor for Link Prediction | 2023-09-02T00:00:00 | https://arxiv.org/abs/2309.00976v4 | [
"https://github.com/Barcavin/efficient-node-labelling"
] | In the paper 'Pure Message Passing Can Estimate Common Neighbor for Link Prediction', what Test Hits@100 score did the MPLP model get on the ogbl-ppa dataset
| 0.6524 ± 0.0150 |
SumMe | CSTA | CSTA: CNN-based Spatiotemporal Attention for Video Summarization | 2024-05-20T00:00:00 | https://arxiv.org/abs/2405.11905v2 | [
"https://github.com/thswodnjs3/CSTA"
] | In the paper 'CSTA: CNN-based Spatiotemporal Attention for Video Summarization', what Kendall's Tau score did the CSTA model get on the SumMe dataset
| 0.246 |
ColonINST-v1 (Seen) | LLaVA-Med-v1.5 (w/ LoRA, w/o extra data) | LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day | 2023-06-01T00:00:00 | https://arxiv.org/abs/2306.00890v1 | [
"https://github.com/microsoft/LLaVA-Med"
] | In the paper 'LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day', what Accuracy score did the LLaVA-Med-v1.5 (w/ LoRA, w/o extra data) model get on the ColonINST-v1 (Seen) dataset
| 93.62 |
VTAB-1k(Structured<8>) | GateVPT(ViT-B/16_MoCo_v3_pretrained_ImageNet-1K) | Improving Visual Prompt Tuning for Self-supervised Vision Transformers | 2023-06-08T00:00:00 | https://arxiv.org/abs/2306.05067v1 | [
"https://github.com/ryongithub/gatedprompttuning"
] | In the paper 'Improving Visual Prompt Tuning for Self-supervised Vision Transformers', what Mean Accuracy score did the GateVPT(ViT-B/16_MoCo_v3_pretrained_ImageNet-1K) model get on the VTAB-1k(Structured<8>) dataset
| 49.10 |
3DPW | W-HMR | W-HMR: Monocular Human Mesh Recovery in World Space with Weak-Supervised Calibration | 2023-11-29T00:00:00 | https://arxiv.org/abs/2311.17460v6 | [
"https://github.com/yw0208/W-HMR"
] | In the paper 'W-HMR: Monocular Human Mesh Recovery in World Space with Weak-Supervised Calibration', what PA-MPJPE score did the W-HMR model get on the 3DPW dataset
| 40.5 |
COVID-19 Image Data Collection | MSTP | Efficient and Accurate Pneumonia Detection Using a Novel Multi-Scale Transformer Approach | 2024-08-08T00:00:00 | https://arxiv.org/abs/2408.04290v2 | [
"https://github.com/amirrezafateh/multi-scale-transformer-pneumonia"
] | In the paper 'Efficient and Accurate Pneumonia Detection Using a Novel Multi-Scale Transformer Approach', what Accuracy score did the MSTP model get on the COVID-19 Image Data Collection dataset
| 95.11 |
FGVC-Aircraft | DePT | DePT: Decoupled Prompt Tuning | 2023-09-14T00:00:00 | https://arxiv.org/abs/2309.07439v2 | [
"https://github.com/koorye/dept"
] | In the paper 'DePT: Decoupled Prompt Tuning', what Harmonic mean score did the DePT model get on the FGVC-Aircraft dataset
| 40.73 |
ISIC 2018 | EMCAD | EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation | 2024-05-11T00:00:00 | https://arxiv.org/abs/2405.06880v1 | [
"https://github.com/sldgroup/emcad"
] | In the paper 'EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation', what DSC score did the EMCAD model get on the ISIC 2018 dataset
| 90.96 |
UTKFace | ResNet-50-Regression | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10T00:00:00 | https://arxiv.org/abs/2307.04570v3 | [
"https://github.com/paplhjak/facial-age-estimation-benchmark"
] | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-Regression model get on the UTKFace dataset
| 4.72 |
ScanObjectNN | Point-RAE (no voting) | Regress Before Construct: Regress Autoencoder for Point Cloud Self-supervised Learning | 2023-09-25T00:00:00 | https://arxiv.org/abs/2310.03670v1 | [
"https://github.com/liuyyy111/point-rae"
] | In the paper 'Regress Before Construct: Regress Autoencoder for Point Cloud Self-supervised Learning', what Overall Accuracy score did the Point-RAE (no voting) model get on the ScanObjectNN dataset
| 90.28 |
ogbl-citation2 | MPLP | Pure Message Passing Can Estimate Common Neighbor for Link Prediction | 2023-09-02T00:00:00 | https://arxiv.org/abs/2309.00976v4 | [
"https://github.com/Barcavin/efficient-node-labelling"
] | In the paper 'Pure Message Passing Can Estimate Common Neighbor for Link Prediction', what Test MRR score did the MPLP model get on the ogbl-citation2 dataset
| 0.9072 ± 0.0012 |
PASCAL-S | M3Net-R | M$^3$Net: Multilevel, Mixed and Multistage Attention Network for Salient Object Detection | 2023-09-15T00:00:00 | https://arxiv.org/abs/2309.08365v1 | [
"https://github.com/I2-Multimedia-Lab/M3Net"
] | In the paper 'M$^3$Net: Multilevel, Mixed and Multistage Attention Network for Salient Object Detection', what MAE score did the M3Net-R model get on the PASCAL-S dataset
| 0.06 |
ArabicDigits | ConvTran | Improving Position Encoding of Transformers for Multivariate Time Series Classification | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.16642v1 | [
"https://github.com/navidfoumani/convtran"
] | In the paper 'Improving Position Encoding of Transformers for Multivariate Time Series Classification', what Accuracy score did the ConvTran model get on the ArabicDigits dataset
| 0.9945 |
Set14 - 4x upscaling | SPAN | Swift Parameter-free Attention Network for Efficient Super-Resolution | 2023-11-21T00:00:00 | https://arxiv.org/abs/2311.12770v3 | [
"https://github.com/hongyuanyu/span"
] | In the paper 'Swift Parameter-free Attention Network for Efficient Super-Resolution', what PSNR score did the SPAN model get on the Set14 - 4x upscaling dataset
| 28.66 |
Tanks and Temples | Compact3D | CompGS: Smaller and Faster Gaussian Splatting with Vector Quantization | 2023-11-30T00:00:00 | https://arxiv.org/abs/2311.18159v3 | [
"https://github.com/ucdvision/compact3d"
] | In the paper 'CompGS: Smaller and Faster Gaussian Splatting with Vector Quantization', what PSNR score did the Compact3D model get on the Tanks and Temples dataset
| 23.47 |
BC2GM | UniNER-7B | UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition | 2023-08-07T00:00:00 | https://arxiv.org/abs/2308.03279v2 | [
"https://github.com/emma1066/retrieval-augmented-it-openner"
] | In the paper 'UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition', what F1 score did the UniNER-7B model get on the BC2GM dataset
| 82.42 |
CDD Dataset (season-varying) | SGSLN/128 | Exchanging Dual Encoder-Decoder: A New Strategy for Change Detection with Semantic Guidance and Spatial Localization | 2023-11-19T00:00:00 | https://arxiv.org/abs/2311.11302v1 | [
"https://github.com/walking-shadow/Semantic-guidance-and-spatial-localization-network"
] | In the paper 'Exchanging Dual Encoder-Decoder: A New Strategy for Change Detection with Semantic Guidance and Spatial Localization', what F1-Score score did the SGSLN/128 model get on the CDD Dataset (season-varying) dataset
| 93.76 |
2018 Data Science Bowl | EMCAD | EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation | 2024-05-11T00:00:00 | https://arxiv.org/abs/2405.06880v1 | [
"https://github.com/sldgroup/emcad"
] | In the paper 'EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation', what Dice score did the EMCAD model get on the 2018 Data Science Bowl dataset
| 0.9274 |
SOTS Indoor | CL2S | Rethinking the Elementary Function Fusion for Single-Image Dehazing | 2024-05-23T00:00:00 | https://arxiv.org/abs/2405.15817v1 | [
"https://github.com/YesianRohn/CL2S"
] | In the paper 'Rethinking the Elementary Function Fusion for Single-Image Dehazing', what PSNR score did the CL2S model get on the SOTS Indoor dataset
| 35.36 |
GSM8K | OVM-Mistral-7B (verify20@1) | OVM, Outcome-supervised Value Models for Planning in Mathematical Reasoning | 2023-11-16T00:00:00 | https://arxiv.org/abs/2311.09724v2 | [
"https://github.com/freedomintelligence/ovm"
] | In the paper 'OVM, Outcome-supervised Value Models for Planning in Mathematical Reasoning', what Accuracy score did the OVM-Mistral-7B (verify20@1) model get on the GSM8K dataset
| 82.6 |
MUTAG | GCN + PANDA | PANDA: Expanded Width-Aware Message Passing Beyond Rewiring | 2024-06-06T00:00:00 | https://arxiv.org/abs/2406.03671v2 | [
"https://github.com/jeongwhanchoi/panda"
] | In the paper 'PANDA: Expanded Width-Aware Message Passing Beyond Rewiring', what Accuracy score did the GCN + PANDA model get on the MUTAG dataset
| 85.75% |
CHAMELEON | ZoomNeXt-PVTv2-B4 | ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection | 2023-10-31T00:00:00 | https://arxiv.org/abs/2310.20208v4 | [
"https://github.com/lartpang/zoomnext"
] | In the paper 'ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection', what S-measure score did the ZoomNeXt-PVTv2-B4 model get on the CHAMELEON dataset
| 0.925 |
Occ3D-nuScenes | CTF-Occ | FB-OCC: 3D Occupancy Prediction based on Forward-Backward View Transformation | 2023-07-04T00:00:00 | https://arxiv.org/abs/2307.01492v1 | [
"https://github.com/nvlabs/fb-bev"
] | In the paper 'FB-OCC: 3D Occupancy Prediction based on Forward-Backward View Transformation', what mIoU score did the CTF-Occ model get on the Occ3D-nuScenes dataset
| 28.53 |