| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
| GoogleGZ-CD | C2FNet | C2F-SemiCD: A Coarse-to-Fine Semi-Supervised Change Detection Method Based on Consistency Regularization in High-Resolution Remote Sensing Images | 2024-04-22 | https://arxiv.org/abs/2404.13838v1 | https://github.com/chengxihan/c2f-semicd-and-c2f-cdnet | In the paper 'C2F-SemiCD: A Coarse-to-Fine Semi-Supervised Change Detection Method Based on Consistency Regularization in High-Resolution Remote Sensing Images', what F1 score did the C2FNet model get on the GoogleGZ-CD dataset? | 86.86 |
| AVisT | UVLTrack-L | Unifying Visual and Vision-Language Tracking via Contrastive Learning | 2024-01-20 | https://arxiv.org/abs/2401.11228v1 | https://github.com/openspaceai/uvltrack | In the paper 'Unifying Visual and Vision-Language Tracking via Contrastive Learning', what Success Rate score did the UVLTrack-L model get on the AVisT dataset? | 57.8 |
| Stanford Cars | PromptKD | PromptKD: Unsupervised Prompt Distillation for Vision-Language Models | 2024-03-05 | https://arxiv.org/abs/2403.02781v5 | https://github.com/zhengli97/promptkd | In the paper 'PromptKD: Unsupervised Prompt Distillation for Vision-Language Models', what Harmonic mean score did the PromptKD model get on the Stanford Cars dataset? | 83.13 |
| Human3.6M | KTPFormer | KTPFormer: Kinematics and Trajectory Prior Knowledge-Enhanced Transformer for 3D Human Pose Estimation | 2024-03-31 | https://arxiv.org/abs/2404.00658v2 | https://github.com/JihuaPeng/KTPFormer | In the paper 'KTPFormer: Kinematics and Trajectory Prior Knowledge-Enhanced Transformer for 3D Human Pose Estimation', what Average MPJPE (mm) score did the KTPFormer model get on the Human3.6M dataset? | 18.1 |
| Cityscapes to Foggy Cityscapes | ALDI++ (ResNet50-FPN) | Align and Distill: Unifying and Improving Domain Adaptive Object Detection | 2024-03-18 | https://arxiv.org/abs/2403.12029v2 | https://github.com/justinkay/aldi | In the paper 'Align and Distill: Unifying and Improving Domain Adaptive Object Detection', what mAP@0.5 score did the ALDI++ (ResNet50-FPN) model get on the Cityscapes to Foggy Cityscapes dataset? | 66.8 |
| CROHME 2014 | TAMER | TAMER: Tree-Aware Transformer for Handwritten Mathematical Expression Recognition | 2024-08-16 | https://arxiv.org/abs/2408.08578v2 | https://github.com/qingzhenduyu/tamer | In the paper 'TAMER: Tree-Aware Transformer for Handwritten Mathematical Expression Recognition', what ExpRate score did the TAMER model get on the CROHME 2014 dataset? | 61.23 |
| CIFAR-100-LT (ρ=100) | GCL | Long-tailed Visual Recognition via Gaussian Clouded Logit Adjustment | 2023-05-19 | https://arxiv.org/abs/2305.11733v1 | https://github.com/keke921/gclloss | In the paper 'Long-tailed Visual Recognition via Gaussian Clouded Logit Adjustment', what Error Rate score did the GCL model get on the CIFAR-100-LT (ρ=100) dataset? | 51.29 |
| RefCoCo val | VATEX | Vision-Aware Text Features in Referring Image Segmentation: From Object Understanding to Context Understanding | 2024-04-12 | https://arxiv.org/abs/2404.08590v2 | https://github.com/nero1342/VATEX_RIS | In the paper 'Vision-Aware Text Features in Referring Image Segmentation: From Object Understanding to Context Understanding', what mIoU score did the VATEX model get on the RefCoCo val dataset? | 78.16 |
| OVIS validation | CTVIS (Swin-L) | CTVIS: Consistent Training for Online Video Instance Segmentation | 2023-07-24 | https://arxiv.org/abs/2307.12616v1 | https://github.com/kainingying/ctvis | In the paper 'CTVIS: Consistent Training for Online Video Instance Segmentation', what mask AP score did the CTVIS (Swin-L) model get on the OVIS validation dataset? | 46.9 |
| PASCAL Context-459 | EBSeg-L | Open-Vocabulary Semantic Segmentation with Image Embedding Balancing | 2024-06-14 | https://arxiv.org/abs/2406.09829v1 | https://github.com/slonetime/ebseg | In the paper 'Open-Vocabulary Semantic Segmentation with Image Embedding Balancing', what mIoU score did the EBSeg-L model get on the PASCAL Context-459 dataset? | 21.0 |
| EQ-Bench | lmsys/vicuna-7b-v1.1 | EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models | 2023-12-11 | https://arxiv.org/abs/2312.06281v2 | https://github.com/eq-bench/eq-bench | In the paper 'EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models', what EQ-Bench Score did the lmsys/vicuna-7b-v1.1 model get on the EQ-Bench dataset? | 22.24 |
| ActivityNet-1.2 | CASE | Revisiting Foreground and Background Separation in Weakly-supervised Temporal Action Localization: A Clustering-based Approach | 2023-12-21 | https://arxiv.org/abs/2312.14138v1 | https://github.com/qinying-liu/case | In the paper 'Revisiting Foreground and Background Separation in Weakly-supervised Temporal Action Localization: A Clustering-based Approach', what mAP@0.5 score did the CASE model get on the ActivityNet-1.2 dataset? | 43.8 |
| ColonINST-v1 (Unseen) | MobileVLM-1.7B (w/o LoRA, w/ extra data) | MobileVLM : A Fast, Strong and Open Vision Language Assistant for Mobile Devices | 2023-12-28 | https://arxiv.org/abs/2312.16886v2 | https://github.com/meituan-automl/mobilevlm | In the paper 'MobileVLM : A Fast, Strong and Open Vision Language Assistant for Mobile Devices', what Accuracy score did the MobileVLM-1.7B (w/o LoRA, w/ extra data) model get on the ColonINST-v1 (Unseen) dataset? | 73.14 |
| APPS | deepseek-ai/deepseek-coder-6.7b-instruct | DeepSeek-Coder: When the Large Language Model Meets Programming -- The Rise of Code Intelligence | 2024-01-25 | https://arxiv.org/abs/2401.14196v2 | https://github.com/deepseek-ai/DeepSeek-Coder | In the paper 'DeepSeek-Coder: When the Large Language Model Meets Programming -- The Rise of Code Intelligence', what Introductory Pass@1 score did the deepseek-ai/deepseek-coder-6.7b-instruct model get on the APPS dataset? | 31.92 |
| Occ3D-nuScenes | FB-OCC-K | FB-OCC: 3D Occupancy Prediction based on Forward-Backward View Transformation | 2023-07-04 | https://arxiv.org/abs/2307.01492v1 | https://github.com/nvlabs/fb-bev | In the paper 'FB-OCC: 3D Occupancy Prediction based on Forward-Backward View Transformation', what mIoU score did the FB-OCC-K model get on the Occ3D-nuScenes dataset? | 52.79 |
| PACS | SPG (CLIP, ResNet-50) | Soft Prompt Generation for Domain Generalization | 2024-04-30 | https://arxiv.org/abs/2404.19286v2 | https://github.com/renytek13/soft-prompt-generation-with-cgan | In the paper 'Soft Prompt Generation for Domain Generalization', what Average Accuracy score did the SPG (CLIP, ResNet-50) model get on the PACS dataset? | 92.8 |
| StreetHazards | Mask2Anomaly | Unmasking Anomalies in Road-Scene Segmentation | 2023-07-25 | https://arxiv.org/abs/2307.13316v1 | https://github.com/shyam671/mask2anomaly-unmasking-anomalies-in-road-scene-segmentation | In the paper 'Unmasking Anomalies in Road-Scene Segmentation', what Open-mIoU score did the Mask2Anomaly model get on the StreetHazards dataset? | 59.8 |
| CIRR | CaLa | CaLa: Complementary Association Learning for Augmenting Composed Image Retrieval | 2024-05-29 | https://arxiv.org/abs/2405.19149v2 | https://github.com/chiangsonw/cala | In the paper 'CaLa: Complementary Association Learning for Augmenting Composed Image Retrieval', what (Recall@5+Recall_subset@1)/2 score did the CaLa model get on the CIRR dataset? | 78.74 |
| ColonINST-v1 (Unseen) | Bunny-v1.0-3B (w/ LoRA, w/o extra data) | Efficient Multimodal Learning from Data-centric Perspective | 2024-02-18 | https://arxiv.org/abs/2402.11530v3 | https://github.com/baai-dcai/bunny | In the paper 'Efficient Multimodal Learning from Data-centric Perspective', what Accuracy score did the Bunny-v1.0-3B (w/ LoRA, w/o extra data) model get on the ColonINST-v1 (Unseen) dataset? | 69.45 |
| WHU-CD | RSM-CD | RS-Mamba for Large Remote Sensing Image Dense Prediction | 2024-04-03 | https://arxiv.org/abs/2404.02668v2 | https://github.com/walking-shadow/Official_Remote_Sensing_Mamba | In the paper 'RS-Mamba for Large Remote Sensing Image Dense Prediction', what F1 score did the RSM-CD model get on the WHU-CD dataset? | 91.87 |
| RefCOCO+ testA | HyperSeg | HyperSeg: Towards Universal Visual Segmentation with Large Language Model | 2024-11-26 | https://arxiv.org/abs/2411.17606v2 | https://github.com/congvvc/HyperSeg | In the paper 'HyperSeg: Towards Universal Visual Segmentation with Large Language Model', what Overall IoU score did the HyperSeg model get on the RefCOCO+ testA dataset? | 83.5 |
| VSPW | DVIS++(VIT-L) | DVIS++: Improved Decoupled Framework for Universal Video Segmentation | 2023-12-20 | https://arxiv.org/abs/2312.13305v1 | https://github.com/zhang-tao-whu/DVIS_Plus | In the paper 'DVIS++: Improved Decoupled Framework for Universal Video Segmentation', what mIoU score did the DVIS++(VIT-L) model get on the VSPW dataset? | 63.8 |
| COPA | PaLM 2-M (1-shot) | PaLM 2 Technical Report | 2023-05-17 | https://arxiv.org/abs/2305.10403v3 | https://github.com/eternityyw/tram-benchmark | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-M (1-shot) model get on the COPA dataset? | 90.0 |
| YouTube-VOS 2018 | UniVS(Swin-L) | UniVS: Unified and Universal Video Segmentation with Prompts as Queries | 2024-02-28 | https://arxiv.org/abs/2402.18115v2 | https://github.com/minghanli/univs | In the paper 'UniVS: Unified and Universal Video Segmentation with Prompts as Queries', what Mean Jaccard & F-Measure score did the UniVS(Swin-L) model get on the YouTube-VOS 2018 dataset? | 71.5 |
| PASCAL Context-459 | MAFT+ | Collaborative Vision-Text Representation Optimizing for Open-Vocabulary Segmentation | 2024-08-01 | https://arxiv.org/abs/2408.00744v2 | https://github.com/jiaosiyu1999/MAFT-Plus | In the paper 'Collaborative Vision-Text Representation Optimizing for Open-Vocabulary Segmentation', what mIoU score did the MAFT+ model get on the PASCAL Context-459 dataset? | 21.6 |
| HICO-DET | PViC-SwinL | Exploring Predicate Visual Context in Detecting Human-Object Interactions | 2023-08-11 | https://arxiv.org/abs/2308.06202v2 | https://github.com/fredzzhang/pvic | In the paper 'Exploring Predicate Visual Context in Detecting Human-Object Interactions', what mAP score did the PViC-SwinL model get on the HICO-DET dataset? | 44.32 |
| BSD100 - 2x upscaling | HMA† | HMANet: Hybrid Multi-Axis Aggregation Network for Image Super-Resolution | 2024-05-08 | https://arxiv.org/abs/2405.05001v1 | https://github.com/korouuuuu/hma | In the paper 'HMANet: Hybrid Multi-Axis Aggregation Network for Image Super-Resolution', what PSNR score did the HMA† model get on the BSD100 - 2x upscaling dataset? | 32.79 |
| MATH | Branch-Train-MiX 4x7B (sampling top-2 experts) | Branch-Train-MiX: Mixing Expert LLMs into a Mixture-of-Experts LLM | 2024-03-12 | https://arxiv.org/abs/2403.07816v1 | https://github.com/Leeroo-AI/mergoo | In the paper 'Branch-Train-MiX: Mixing Expert LLMs into a Mixture-of-Experts LLM', what Accuracy score did the Branch-Train-MiX 4x7B (sampling top-2 experts) model get on the MATH dataset? | 17.8 |
| FGVC | GateVPT(ViT-B/16_MAE_pretrained_ImageNet-1K) | Improving Visual Prompt Tuning for Self-supervised Vision Transformers | 2023-06-08 | https://arxiv.org/abs/2306.05067v1 | https://github.com/ryongithub/gatedprompttuning | In the paper 'Improving Visual Prompt Tuning for Self-supervised Vision Transformers', what Mean Accuracy score did the GateVPT(ViT-B/16_MAE_pretrained_ImageNet-1K) model get on the FGVC dataset? | 73.39 |
| Atari 2600 Up and Down | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07 | https://arxiv.org/abs/2305.04180v3 | https://github.com/xinjinghao/color | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score did the ASL DDQN model get on the Atari 2600 Up and Down dataset? | 25127.4 |
| TNL2K | LoRAT-g-378 | Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance | 2024-03-08 | https://arxiv.org/abs/2403.05231v2 | https://github.com/litinglin/lorat | In the paper 'Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance', what precision score did the LoRAT-g-378 model get on the TNL2K dataset? | 67.8 |
| COCO-WholeBody | DWPose | Effective Whole-body Pose Estimation with Two-stages Distillation | 2023-07-29 | https://arxiv.org/abs/2307.15880v2 | https://github.com/idea-research/dwpose | In the paper 'Effective Whole-body Pose Estimation with Two-stages Distillation', what WB score did the DWPose model get on the COCO-WholeBody dataset? | 66.4 |
| SAFIM | incoder-1B | Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks | 2024-03-07 | https://arxiv.org/abs/2403.04814v3 | https://github.com/gonglinyuan/safim | In the paper 'Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks', what Algorithmic score did the incoder-1B model get on the SAFIM dataset? | 21.06 |
| SST-5 Fine-grained classification | LM-CPPF RoBERTa-base | LM-CPPF: Paraphrasing-Guided Data Augmentation for Contrastive Prompt-Based Few-Shot Fine-Tuning | 2023-05-29 | https://arxiv.org/abs/2305.18169v3 | https://github.com/amirabaskohi/lm-cppf | In the paper 'LM-CPPF: Paraphrasing-Guided Data Augmentation for Contrastive Prompt-Based Few-Shot Fine-Tuning', what Accuracy score did the LM-CPPF RoBERTa-base model get on the SST-5 Fine-grained classification dataset? | 54.9 |
| Food-101 | RPO | Read-only Prompt Optimization for Vision-Language Few-shot Learning | 2023-08-29 | https://arxiv.org/abs/2308.14960v2 | https://github.com/mlvlab/rpo | In the paper 'Read-only Prompt Optimization for Vision-Language Few-shot Learning', what Harmonic mean score did the RPO model get on the Food-101 dataset? | 90.58 |
| COVERAGE | Early Fusion | MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization | 2023-12-04 | https://arxiv.org/abs/2312.01790v2 | https://github.com/idt-iti/mmfusion-iml | In the paper 'MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization', what AUC score did the Early Fusion model get on the COVERAGE dataset? | 0.839 |
| ETTh2 (192) Multivariate | PRformer | PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting | 2024-08-20 | https://arxiv.org/abs/2408.10483v1 | https://github.com/usualheart/prformer | In the paper 'PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting', what MSE score did the PRformer model get on the ETTh2 (192) Multivariate dataset? | 0.332 |
| Peptides-func | CIN++-500k | CIN++: Enhancing Topological Message Passing | 2023-06-06 | https://arxiv.org/abs/2306.03561v1 | https://github.com/twitter-research/cwn | In the paper 'CIN++: Enhancing Topological Message Passing', what AP score did the CIN++-500k model get on the Peptides-func dataset? | 0.6569±0.0117 |
| HIV | G-Tuning | Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns | 2023-12-21 | https://arxiv.org/abs/2312.13583v1 | https://github.com/zjunet/G-Tuning | In the paper 'Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns', what ROC-AUC score did the G-Tuning model get on the HIV dataset? | 77.33 |
| WN18RR | MetaSD | Self-Distillation with Meta Learning for Knowledge Graph Completion | 2023-05-20 | https://arxiv.org/abs/2305.12209v1 | https://github.com/pldlgb/MetaSD | In the paper 'Self-Distillation with Meta Learning for Knowledge Graph Completion', what MRR score did the MetaSD model get on the WN18RR dataset? | 0.491 |
| DIV2K val - 4x upscaling | AESOP | Auto-Encoded Supervision for Perceptual Image Super-Resolution | 2024-11-28 | https://arxiv.org/abs/2412.00124v1 | https://github.com/2minkyulee/aesop-auto-encoded-supervision-for-perceptual-image-super-resolution | In the paper 'Auto-Encoded Supervision for Perceptual Image Super-Resolution', what PSNR score did the AESOP model get on the DIV2K val - 4x upscaling dataset? | 29.137 |
| SemanticKITTI | OA-CNNs | OA-CNNs: Omni-Adaptive Sparse CNNs for 3D Semantic Segmentation | 2024-03-21 | https://arxiv.org/abs/2403.14418v1 | https://github.com/Pointcept/Pointcept | In the paper 'OA-CNNs: Omni-Adaptive Sparse CNNs for 3D Semantic Segmentation', what val mIoU score did the OA-CNNs model get on the SemanticKITTI dataset? | 70.6% |
| COCO-20i (5-shot) | GF-SAM | Bridge the Points: Graph-based Few-shot Segment Anything Semantically | 2024-10-09 | https://arxiv.org/abs/2410.06964v2 | https://github.com/ANDYZAQ/GF-SAM | In the paper 'Bridge the Points: Graph-based Few-shot Segment Anything Semantically', what Mean IoU score did the GF-SAM model get on the COCO-20i (5-shot) dataset? | 66.8 |
| ActivityNet-1.2 | P-MIL | Proposal-Based Multiple Instance Learning for Weakly-Supervised Temporal Action Localization | 2023-05-29 | https://arxiv.org/abs/2305.17861v1 | https://github.com/RenHuan1999/CVPR2023_P-MIL | In the paper 'Proposal-Based Multiple Instance Learning for Weakly-Supervised Temporal Action Localization', what mAP@0.5 score did the P-MIL model get on the ActivityNet-1.2 dataset? | 44.2 |
| MATH | MathCoder-CL-13B | MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning | 2023-10-05 | https://arxiv.org/abs/2310.03731v1 | https://github.com/mathllm/mathcoder | In the paper 'MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning', what Accuracy score did the MathCoder-CL-13B model get on the MATH dataset? | 35.9 |
| ImageNet | DAT-B++ (224x224) | DAT++: Spatially Dynamic Vision Transformer with Deformable Attention | 2023-09-04 | https://arxiv.org/abs/2309.01430v1 | https://github.com/leaplabthu/dat | In the paper 'DAT++: Spatially Dynamic Vision Transformer with Deformable Attention', what Top 1 Accuracy score did the DAT-B++ (224x224) model get on the ImageNet dataset? | 84.9% |
| OVIS validation | CAVIS(VIT-L, Offline) | Context-Aware Video Instance Segmentation | 2024-07-03 | https://arxiv.org/abs/2407.03010v1 | https://github.com/Seung-Hun-Lee/CAVIS | In the paper 'Context-Aware Video Instance Segmentation', what mask AP score did the CAVIS(VIT-L, Offline) model get on the OVIS validation dataset? | 57.1 |
| OMNIGLOT | PaddingFlow | PaddingFlow: Improving Normalizing Flows with Padding-Dimensional Noise | 2024-03-13 | https://arxiv.org/abs/2403.08216v2 | https://github.com/adamqlmeng/paddingflow | In the paper 'PaddingFlow: Improving Normalizing Flows with Padding-Dimensional Noise', what MMD-L2 score did the PaddingFlow model get on the OMNIGLOT dataset? | 20.3 |
| Human3.6M | FFINet | Fast Fourier Inception Networks for Occluded Video Prediction | 2023-06-17 | https://arxiv.org/abs/2306.10346v1 | https://github.com/mlvccn/research | In the paper 'Fast Fourier Inception Networks for Occluded Video Prediction', what SSIM score did the FFINet model get on the Human3.6M dataset? | 0.912 |
| CIFAR-100 | ABNet-2G-R3-Combined | ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities | 2024-11-28 | https://arxiv.org/abs/2411.19213v1 | https://github.com/dvssajay/New_World | In the paper 'ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities', what Percentage correct score did the ABNet-2G-R3-Combined model get on the CIFAR-100 dataset? | 82.784 |
| RefCOCO+ testA | MagNet | Mask Grounding for Referring Image Segmentation | 2023-12-19 | https://arxiv.org/abs/2312.12198v2 | https://github.com/yxchng/mask-grounding | In the paper 'Mask Grounding for Referring Image Segmentation', what Overall IoU score did the MagNet model get on the RefCOCO+ testA dataset? | 71.32 |
| PeMSD4 | STD-MAE | Spatial-Temporal-Decoupled Masked Pre-training for Spatiotemporal Forecasting | 2023-12-01 | https://arxiv.org/abs/2312.00516v3 | https://github.com/jimmy-7664/std-mae | In the paper 'Spatial-Temporal-Decoupled Masked Pre-training for Spatiotemporal Forecasting', what 12 steps MAE score did the STD-MAE model get on the PeMSD4 dataset? | 17.80 |
| SVOX-Rain | BoQ (ResNet-50) | BoQ: A Place is Worth a Bag of Learnable Queries | 2024-05-12 | https://arxiv.org/abs/2405.07364v3 | https://github.com/amaralibey/bag-of-queries | In the paper 'BoQ: A Place is Worth a Bag of Learnable Queries', what Recall@1 score did the BoQ (ResNet-50) model get on the SVOX-Rain dataset? | 96.2 |
| GTA5-to-Cityscapes | tqdm (EVA02-CLIP-L) | Textual Query-Driven Mask Transformer for Domain Generalized Segmentation | 2024-07-12 | https://arxiv.org/abs/2407.09033v1 | https://github.com/ByeongHyunPak/tqdm | In the paper 'Textual Query-Driven Mask Transformer for Domain Generalized Segmentation', what mIoU score did the tqdm (EVA02-CLIP-L) model get on the GTA5-to-Cityscapes dataset? | 68.88 |
| TNL2K | RTracker-L | RTracker: Recoverable Tracking via PN Tree Structured Memory | 2024-03-28 | https://arxiv.org/abs/2403.19242v1 | https://github.com/norahgreen/rtracker | In the paper 'RTracker: Recoverable Tracking via PN Tree Structured Memory', what precision score did the RTracker-L model get on the TNL2K dataset? | 63.7 |
| VLCS | MoA (OpenCLIP, ViT-B/16) | Domain Generalization Using Large Pretrained Models with Mixture-of-Adapters | 2023-10-17 | https://arxiv.org/abs/2310.11031v2 | https://github.com/KU-CVLAB/MoA | In the paper 'Domain Generalization Using Large Pretrained Models with Mixture-of-Adapters', what Average Accuracy score did the MoA (OpenCLIP, ViT-B/16) model get on the VLCS dataset? | 83.1 |
| NTU RGB+D 120 | π-ViT (RGB only) | Just Add $π$! Pose Induced Video Transformers for Understanding Activities of Daily Living | 2023-11-30 | https://arxiv.org/abs/2311.18840v1 | https://github.com/dominickrei/pi-vit | In the paper 'Just Add $π$! Pose Induced Video Transformers for Understanding Activities of Daily Living', what Accuracy (Cross-Subject) score did the π-ViT (RGB only) model get on the NTU RGB+D 120 dataset? | 92.9 |
| HIV dataset | CIN++-small | CIN++: Enhancing Topological Message Passing | 2023-06-06 | https://arxiv.org/abs/2306.03561v1 | https://github.com/twitter-research/cwn | In the paper 'CIN++: Enhancing Topological Message Passing', what ROC-AUC score did the CIN++-small model get on the HIV dataset? | 80.26 |
| Adult Census Income | Binary Diffusion | Tabular Data Generation using Binary Diffusion | 2024-09-20 | https://arxiv.org/abs/2409.13882v2 | https://github.com/vkinakh/binary-diffusion-tabular | In the paper 'Tabular Data Generation using Binary Diffusion', what LR Accuracy score did the Binary Diffusion model get on the Adult Census Income dataset? | 85.45 |
| ClinTox | G-Tuning | Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns | 2023-12-21 | https://arxiv.org/abs/2312.13583v1 | https://github.com/zjunet/G-Tuning | In the paper 'Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns', what ROC-AUC score did the G-Tuning model get on the ClinTox dataset? | 74.64 |
| VoxCeleb | ReDimNet-B1-LM (2.2M) | Reshape Dimensions Network for Speaker Recognition | 2024-07-25 | https://arxiv.org/abs/2407.18223v2 | https://github.com/IDRnD/ReDimNet | In the paper 'Reshape Dimensions Network for Speaker Recognition', what EER score did the ReDimNet-B1-LM (2.2M) model get on the VoxCeleb dataset? | 0.85 |
| ModelNet40 | PointMAE+PPT | Positional Prompt Tuning for Efficient 3D Representation Learning | 2024-08-21 | https://arxiv.org/abs/2408.11567v1 | https://github.com/zsc000722/ppt | In the paper 'Positional Prompt Tuning for Efficient 3D Representation Learning', what Overall Accuracy score did the PointMAE+PPT model get on the ModelNet40 dataset? | 93.88 |
| AUTSL | HWGAT | Hierarchical Windowed Graph Attention Network and a Large Scale Dataset for Isolated Indian Sign Language Recognition | 2024-07-19 | https://arxiv.org/abs/2407.14224v2 | https://github.com/suvajit-patra/sl-hwgat | In the paper 'Hierarchical Windowed Graph Attention Network and a Large Scale Dataset for Isolated Indian Sign Language Recognition', what Rank-1 Recognition Rate score did the HWGAT model get on the AUTSL dataset? | 0.9580 |
| SemanticKITTI | UniSeg | UniSeg: A Unified Multi-Modal LiDAR Segmentation Network and the OpenPCSeg Codebase | 2023-09-11 | https://arxiv.org/abs/2309.05573v1 | https://github.com/pjlab-adg/pcseg | In the paper 'UniSeg: A Unified Multi-Modal LiDAR Segmentation Network and the OpenPCSeg Codebase', what test mIoU score did the UniSeg model get on the SemanticKITTI dataset? | 75.2% |
| PROTEINS | CIN++ | CIN++: Enhancing Topological Message Passing | 2023-06-06 | https://arxiv.org/abs/2306.03561v1 | https://github.com/twitter-research/cwn | In the paper 'CIN++: Enhancing Topological Message Passing', what Accuracy score did the CIN++ model get on the PROTEINS dataset? | 80.5 |
| Pittsburgh-30k-test | CLIP | AnyLoc: Towards Universal Visual Place Recognition | 2023-08-01 | https://arxiv.org/abs/2308.00688v2 | https://github.com/AnyLoc/AnyLoc | In the paper 'AnyLoc: Towards Universal Visual Place Recognition', what Recall@1 score did the CLIP model get on the Pittsburgh-30k-test dataset? | 54.97 |
| Stanford2D3D Panoramic | SFSS-MMSI (RGB+Normal) | Single Frame Semantic Segmentation Using Multi-Modal Spherical Images | 2023-08-18 | https://arxiv.org/abs/2308.09369v1 | https://github.com/sguttikon/SFSS-MMSI | In the paper 'Single Frame Semantic Segmentation Using Multi-Modal Spherical Images', what mIoU score did the SFSS-MMSI (RGB+Normal) model get on the Stanford2D3D Panoramic dataset? | 58.24% |
| dbp15k ja-en | UMAEA (w/o surf) | Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment | 2023-07-30 | https://arxiv.org/abs/2307.16210v2 | https://github.com/zjukg/umaea | In the paper 'Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment', what Hits@1 score did the UMAEA (w/o surf) model get on the dbp15k ja-en dataset? | 0.857 |
| GTA-to-Avg(Cityscapes,BDD,Mapillary) | ReVT | A Re-Parameterized Vision Transformer (ReVT) for Domain-Generalized Semantic Segmentation | 2023-08-25 | https://arxiv.org/abs/2308.13331v1 | https://github.com/ifnspaml/revt | In the paper 'A Re-Parameterized Vision Transformer (ReVT) for Domain-Generalized Semantic Segmentation', what mIoU score did the ReVT model get on the GTA-to-Avg(Cityscapes,BDD,Mapillary) dataset? | 50.2 |
| ImageNet 256x256 | RCG-L (w/o guidance) | Return of Unconditional Generation: A Self-supervised Representation Generation Method | 2023-12-06 | https://arxiv.org/abs/2312.03701v4 | https://github.com/LTH14/rcg | In the paper 'Return of Unconditional Generation: A Self-supervised Representation Generation Method', what FID score did the RCG-L (w/o guidance) model get on the ImageNet 256x256 dataset? | 3.49 |
| ImageNet V2 | HPT | Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models | 2023-12-11 | https://arxiv.org/abs/2312.06323v1 | https://github.com/vill-lab/2024-aaai-hpt | In the paper 'Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models', what Top-1 accuracy % score did the HPT model get on the ImageNet V2 dataset? | 65.25 |
| InfographicVQA | PaLI-3 (w/ OCR) | PaLI-3 Vision Language Models: Smaller, Faster, Stronger | 2023-10-13 | https://arxiv.org/abs/2310.09199v2 | https://github.com/kyegomez/PALI3 | In the paper 'PaLI-3 Vision Language Models: Smaller, Faster, Stronger', what ANLS score did the PaLI-3 (w/ OCR) model get on the InfographicVQA dataset? | 62.4 |
| DomainNet | Transadapter | TransAdapter: Vision Transformer for Feature-Centric Unsupervised Domain Adaptation | 2024-12-05 | https://arxiv.org/abs/2412.04073v1 | https://github.com/enesdoruk/TransAdapter | In the paper 'TransAdapter: Vision Transformer for Feature-Centric Unsupervised Domain Adaptation', what Accuracy score did the Transadapter model get on the DomainNet dataset? | 53.7 |
| Pendulum-v1 | TLA | Optimizing Attention and Cognitive Control Costs Using Temporally-Layered Architectures | 2023-05-30 | https://arxiv.org/abs/2305.18701v3 | https://github.com/dee0512/Temporally-Layered-Architecture | In the paper 'Optimizing Attention and Cognitive Control Costs Using Temporally-Layered Architectures', what Action Repetition score did the TLA model get on the Pendulum-v1 dataset? | 0.7032 |
| ImageNet | CoKe | Stable Cluster Discrimination for Deep Clustering | 2023-11-24 | https://arxiv.org/abs/2311.14310v1 | https://github.com/idstcv/secu | In the paper 'Stable Cluster Discrimination for Deep Clustering', what NMI score did the CoKe model get on the ImageNet dataset? | 76.2 |
| MBPP | MGDebugger (CodeQwen1.5) | From Code to Correctness: Closing the Last Mile of Code Generation with Hierarchical Debugging | 2024-10-02 | https://arxiv.org/abs/2410.01215v2 | https://github.com/YerbaPage/MGDebugger | In the paper 'From Code to Correctness: Closing the Last Mile of Code Generation with Hierarchical Debugging', what Accuracy score did the MGDebugger (CodeQwen1.5) model get on the MBPP dataset? | 80.8 |
| Citeseer | Graph-MLP + SWA | The Split Matters: Flat Minima Methods for Improving the Performance of GNNs | 2023-06-15 | https://arxiv.org/abs/2306.09121v1 | https://github.com/foisunt/fmms-in-gnns | In the paper 'The Split Matters: Flat Minima Methods for Improving the Performance of GNNs', what Accuracy score did the Graph-MLP + SWA model get on the Citeseer dataset? | 77.99 ± 1.57% |
| Manga109 - 4x upscaling | AESOP | Auto-Encoded Supervision for Perceptual Image Super-Resolution | 2024-11-28 | https://arxiv.org/abs/2412.00124v1 | https://github.com/2minkyulee/aesop-auto-encoded-supervision-for-perceptual-image-super-resolution | In the paper 'Auto-Encoded Supervision for Perceptual Image Super-Resolution', what PSNR score did the AESOP model get on the Manga109 - 4x upscaling dataset? | 30.061 |
| SICK | PromptEOL+CSE+OPT-2.7B | Scaling Sentence Embeddings with Large Language Models | 2023-07-31 | https://arxiv.org/abs/2307.16645v1 | https://github.com/kongds/scaling_sentemb | In the paper 'Scaling Sentence Embeddings with Large Language Models', what Spearman Correlation score did the PromptEOL+CSE+OPT-2.7B model get on the SICK dataset? | 0.8129 |
| Refer-YouTube-VOS (2021 public validation) | LoSh-R | LoSh: Long-Short Text Joint Prediction Network for Referring Video Object Segmentation | 2023-06-14 | https://arxiv.org/abs/2306.08736v3 | https://github.com/linfengyuan1997/losh | In the paper 'LoSh: Long-Short Text Joint Prediction Network for Referring Video Object Segmentation', what J&F score did the LoSh-R model get on the Refer-YouTube-VOS (2021 public validation) dataset? | 64.2 |
| COCO-Stuff-81 | CAUSE-MLP (ViT-S/8) | Causal Unsupervised Semantic Segmentation | 2023-10-11 | https://arxiv.org/abs/2310.07379v1 | https://github.com/ByungKwanLee/Causal-Unsupervised-Segmentation | In the paper 'Causal Unsupervised Semantic Segmentation', what mIoU score did the CAUSE-MLP (ViT-S/8) model get on the COCO-Stuff-81 dataset? | 19.1 |
| Ego4D | EgoVLPv2 | EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone | 2023-07-11 | https://arxiv.org/abs/2307.05463v2 | https://github.com/facebookresearch/EgoVLPv2 | In the paper 'EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone', what R@1 IoU=0.3 score did the EgoVLPv2 model get on the Ego4D dataset? | 12.95 |
| MVTec AD | AnomalyDINO-S (2-shot) | AnomalyDINO: Boosting Patch-based Few-shot Anomaly Detection with DINOv2 | 2024-05-23 | https://arxiv.org/abs/2405.14529v2 | https://github.com/dammsi/AnomalyDINO | In the paper 'AnomalyDINO: Boosting Patch-based Few-shot Anomaly Detection with DINOv2', what Detection AUROC score did the AnomalyDINO-S (2-shot) model get on the MVTec AD dataset? | 96.9 |
| EC-FUNSD | LayoutLMv3 (base) | Rethinking the Evaluation of Pre-trained Text-and-Layout Models from an Entity-Centric Perspective | 2024-02-04 | https://arxiv.org/abs/2402.02379v1 | https://github.com/chongzhangFDU/ROOR | In the paper 'Rethinking the Evaluation of Pre-trained Text-and-Layout Models from an Entity-Centric Perspective', what F1 score did the LayoutLMv3 (base) model get on the EC-FUNSD dataset? | 67.47 |
| EuRoC MAV | CIVO | Brain-Inspired Visual Odometry: Balancing Speed and Interpretability through a System of Systems Approach | 2023-12-20 | https://arxiv.org/abs/2312.13162v1 | https://github.com/habib-Boloorchi/CIVO-Visual-Odometry- | In the paper 'Brain-Inspired Visual Odometry: Balancing Speed and Interpretability through a System of Systems Approach', what Relative Position Error Translation [cm] score did the CIVO model get on the EuRoC MAV dataset? | 1.3574 |
GMOT-40 | iGDINO MAC-SORT | TP-GMOT: Tracking Generic Multiple Object by Textual Prompt with Motion-Appearance Cost (MAC) SORT | 2024-09-04T00:00:00 | https://arxiv.org/abs/2409.02490v1 | [
"https://github.com/Fsoft-AIC/TP-GMOT"
] | In the paper 'TP-GMOT: Tracking Generic Multiple Object by Textual Prompt with Motion-Appearance Cost (MAC) SORT', what mAP@0.5 score did the iGDINO MAC-SORT model get on the GMOT-40 dataset
| 72.7 |
UMVM-oea-en-fr | UMAEA (w/o surf) | Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment | 2023-07-30T00:00:00 | https://arxiv.org/abs/2307.16210v2 | [
"https://github.com/zjukg/umaea"
] | In the paper 'Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment', what Hits@1 score did the UMAEA (w/o surf) model get on the UMVM-oea-en-fr dataset
| 0.895 |
MORPH Album2 (SE) | ResNet-50-Unimodal-Concentrated | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10T00:00:00 | https://arxiv.org/abs/2307.04570v3 | [
"https://github.com/paplhjak/facial-age-estimation-benchmark"
] | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-Unimodal-Concentrated model get on the MORPH Album2 (SE) dataset
| 2.78 |
genius | GESN | Addressing Heterophily in Node Classification with Graph Echo State Networks | 2023-05-14T00:00:00 | https://arxiv.org/abs/2305.08233v2 | [
"https://github.com/dtortorella/addressing-heterophily-gesn"
] | In the paper 'Addressing Heterophily in Node Classification with Graph Echo State Networks', what 1:1 Accuracy score did the GESN model get on the genius dataset
| 91.72 ± 0.08 |
Oxford-IIIT Pet Dataset | DePT | DePT: Decoupled Prompt Tuning | 2023-09-14T00:00:00 | https://arxiv.org/abs/2309.07439v2 | [
"https://github.com/koorye/dept"
] | In the paper 'DePT: Decoupled Prompt Tuning', what Harmonic mean score did the DePT model get on the Oxford-IIIT Pet Dataset dataset
| 96.37 |
Office-Home | WAKD (Resnet-18) | Weight Averaging Improves Knowledge Distillation under Domain Shift | 2023-09-20T00:00:00 | https://arxiv.org/abs/2309.11446v1 | [
"https://github.com/vorobeevich/distillation-in-dg"
] | In the paper 'Weight Averaging Improves Knowledge Distillation under Domain Shift', what Average Accuracy score did the WAKD (Resnet-18) model get on the Office-Home dataset
| 66.7 |
FreiHAND | Zhou et al. | A Simple Baseline for Efficient Hand Mesh Reconstruction | 2024-03-04T00:00:00 | https://arxiv.org/abs/2403.01813v1 | [
"https://github.com/patiencefromzhou/simplehand"
] | In the paper 'A Simple Baseline for Efficient Hand Mesh Reconstruction', what PA-MPVPE score did the Zhou et al. model get on the FreiHAND dataset
| 6.0 |
Filosax | Beat This! | Beat this! Accurate beat tracking without DBN postprocessing | 2024-07-31T00:00:00 | https://arxiv.org/abs/2407.21658v1 | [
"https://github.com/CPJKU/beat_this"
] | In the paper 'Beat this! Accurate beat tracking without DBN postprocessing', what F1 score did the Beat This! model get on the Filosax dataset
| 99.5 |
LLFF | Chat-Edit-3D | Chat-Edit-3D: Interactive 3D Scene Editing via Text Prompts | 2024-07-09T00:00:00 | https://arxiv.org/abs/2407.06842v2 | [
"https://github.com/Fangkang515/CE3D"
] | In the paper 'Chat-Edit-3D: Interactive 3D Scene Editing via Text Prompts', what CLIP score did the Chat-Edit-3D model get on the LLFF dataset
| 0.9 |
Waymo Open Dataset | LION | LION: Linear Group RNN for 3D Object Detection in Point Clouds | 2024-07-25T00:00:00 | https://arxiv.org/abs/2407.18232v1 | [
"https://github.com/happinesslz/LION"
] | In the paper 'LION: Linear Group RNN for 3D Object Detection in Point Clouds', what mAPH/L2 score did the LION model get on the Waymo Open Dataset dataset
| 74.0 |
VQA v2 test-dev | CuMo-7B | CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts | 2024-05-09T00:00:00 | https://arxiv.org/abs/2405.05949v1 | [
"https://github.com/shi-labs/cumo"
] | In the paper 'CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts', what Accuracy score did the CuMo-7B model get on the VQA v2 test-dev dataset
| 82.2 |
General-100 - 4x upscaling | AESOP | Auto-Encoded Supervision for Perceptual Image Super-Resolution | 2024-11-28T00:00:00 | https://arxiv.org/abs/2412.00124v1 | [
"https://github.com/2minkyulee/aesop-auto-encoded-supervision-for-perceptual-image-super-resolution"
] | In the paper 'Auto-Encoded Supervision for Perceptual Image Super-Resolution', what LPIPS score did the AESOP model get on the General-100 - 4x upscaling dataset
| 0.071 |
SMAC 26m_vs_30m | QMIX | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04T00:00:00 | https://arxiv.org/abs/2306.02430v1 | [
"https://github.com/j3soon/dfac-extended"
] | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the QMIX model get on the SMAC 26m_vs_30m dataset
| 62.78 |
CSL-Daily | TCNet | TCNet: Continuous Sign Language Recognition from Trajectories and Correlated Regions | 2024-03-18T00:00:00 | https://arxiv.org/abs/2403.11818v1 | [
"https://github.com/hotfinda/tcnet"
] | In the paper 'TCNet: Continuous Sign Language Recognition from Trajectories and Correlated Regions', what Word Error Rate (WER) score did the TCNet model get on the CSL-Daily dataset
| 29.3 |
VoxCeleb1 | ReDimNet-B2-SF2-LM-ASNorm (4.7M) | Reshape Dimensions Network for Speaker Recognition | 2024-07-25T00:00:00 | https://arxiv.org/abs/2407.18223v2 | [
"https://github.com/IDRnD/ReDimNet"
] | In the paper 'Reshape Dimensions Network for Speaker Recognition', what EER score did the ReDimNet-B2-SF2-LM-ASNorm (4.7M) model get on the VoxCeleb1 dataset
| 0.52 |