| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
| TrackingNet | SAMURAI-L | SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory | 2024-11-18 | https://arxiv.org/abs/2411.11922v2 | https://github.com/yangchris11/samurai | In the paper 'SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory', what Accuracy score did the SAMURAI-L model get on the TrackingNet dataset? | 85.3 |
| EconLogicQA | Zephyr-7B-Alpha | EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning | 2024-05-13 | https://arxiv.org/abs/2405.07938v2 | https://github.com/yinzhu-quan/lm-evaluation-harness | In the paper 'EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning', what Accuracy score did the Zephyr-7B-Alpha model get on the EconLogicQA dataset? | 0.2308 |
| SMAC MMM2_7m2M1M_vs_8m4M1M | DPLEX | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04 | https://arxiv.org/abs/2306.02430v1 | https://github.com/j3soon/dfac-extended | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the DPLEX model get on the SMAC MMM2_7m2M1M_vs_8m4M1M dataset? | 50.00 |
| COCO | InstructBLIP Vicuna | Open-ended VQA benchmarking of Vision-Language models by exploiting Classification datasets and their semantic hierarchy | 2024-02-11 | https://arxiv.org/abs/2402.07270v2 | https://github.com/lmb-freiburg/ovqa | In the paper 'Open-ended VQA benchmarking of Vision-Language models by exploiting Classification datasets and their semantic hierarchy', what ClipMatch@1 score did the InstructBLIP Vicuna model get on the COCO dataset? | 59.58 |
| CUTE80 | CLIP4STR-L (DataComp-1B) | CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model | 2023-05-23 | https://arxiv.org/abs/2305.14014v3 | https://github.com/VamosC/CLIP4STR | In the paper 'CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model', what Accuracy score did the CLIP4STR-L (DataComp-1B) model get on the CUTE80 dataset? | 99.7 |
| Texas (60%/20%/20% random splits) | HH-GraphSAGE | Half-Hop: A graph upsampling approach for slowing down message passing | 2023-08-17 | https://arxiv.org/abs/2308.09198v1 | https://github.com/nerdslab/halfhop | In the paper 'Half-Hop: A graph upsampling approach for slowing down message passing', what 1:1 Accuracy score did the HH-GraphSAGE model get on the Texas (60%/20%/20% random splits) dataset? | 85.95 ± 6.42 |
| EMDB | TRAM | TRAM: Global Trajectory and Motion of 3D Humans from in-the-wild Videos | 2024-03-26 | https://arxiv.org/abs/2403.17346v2 | https://github.com/yufu-wang/tram | In the paper 'TRAM: Global Trajectory and Motion of 3D Humans from in-the-wild Videos', what Average MPJPE (mm) score did the TRAM model get on the EMDB dataset? | 74.4 |
| NTU RGB+D | CHASE(CTR-GCN) | CHASE: Learning Convex Hull Adaptive Shift for Skeleton-based Multi-Entity Action Recognition | 2024-10-09 | https://arxiv.org/abs/2410.07153v1 | https://github.com/Necolizer/CHASE | In the paper 'CHASE: Learning Convex Hull Adaptive Shift for Skeleton-based Multi-Entity Action Recognition', what Accuracy (Cross-Subject) score did the CHASE(CTR-GCN) model get on the NTU RGB+D dataset? | 96.5 |
| ImageNet | KD++(T:resnet152 S:resnet34) | Improving Knowledge Distillation via Regularizing Feature Norm and Direction | 2023-05-26 | https://arxiv.org/abs/2305.17007v1 | https://github.com/wangyz1608/knowledge-distillation-via-nd | In the paper 'Improving Knowledge Distillation via Regularizing Feature Norm and Direction', what Top-1 accuracy % score did the KD++(T:resnet152 S:resnet34) model get on the ImageNet dataset? | 75.53 |
| PACS | EoQ (ResNet-50) | QT-DoG: Quantization-aware Training for Domain Generalization | 2024-10-08 | https://arxiv.org/abs/2410.06020v1 | https://github.com/saqibjaved1/QT-DoG | In the paper 'QT-DoG: Quantization-aware Training for Domain Generalization', what Average Accuracy score did the EoQ (ResNet-50) model get on the PACS dataset? | 90.7 |
| TACRED-Revisited | RAG4RE | Retrieval-Augmented Generation-based Relation Extraction | 2024-04-20 | https://arxiv.org/abs/2404.13397v1 | https://github.com/sefeoglu/rag4re | In the paper 'Retrieval-Augmented Generation-based Relation Extraction', what F1 score did the RAG4RE model get on the TACRED-Revisited dataset? | 88.3 |
| ETTm2 (96) Multivariate | DiPE-Linear | Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting | 2024-11-26 | https://arxiv.org/abs/2411.17257v1 | https://github.com/wintertee/dipe-linear | In the paper 'Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting', what MSE score did the DiPE-Linear model get on the ETTm2 (96) Multivariate dataset? | 0.162 |
| MM-Vet | InternVL2.5-1B | Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling | 2024-12-06 | https://arxiv.org/abs/2412.05271v1 | https://github.com/opengvlab/internvl | In the paper 'Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling', what GPT-4 score did the InternVL2.5-1B model get on the MM-Vet dataset? | 48.8 |
| nuscenes Camera-Radar | HyDRa | Unleashing HyDRa: Hybrid Fusion, Depth Consistency and Radar for Unified 3D Perception | 2024-03-12 | https://arxiv.org/abs/2403.07746v2 | https://github.com/phi-wol/hydra | In the paper 'Unleashing HyDRa: Hybrid Fusion, Depth Consistency and Radar for Unified 3D Perception', what NDS score did the HyDRa model get on the nuscenes Camera-Radar dataset? | 64.2 |
| ExpW | ResEmoteNet | ResEmoteNet: Bridging Accuracy and Loss Reduction in Facial Emotion Recognition | 2024-09-01 | https://arxiv.org/abs/2409.10545v2 | https://github.com/ArnabKumarRoy02/ResEmoteNet | In the paper 'ResEmoteNet: Bridging Accuracy and Loss Reduction in Facial Emotion Recognition', what Accuracy score did the ResEmoteNet model get on the ExpW dataset? | 75.67 |
| Peptides-func | TIGT | Topology-Informed Graph Transformer | 2024-02-03 | https://arxiv.org/abs/2402.02005v1 | https://github.com/leemingo/tigt | In the paper 'Topology-Informed Graph Transformer', what AP score did the TIGT model get on the Peptides-func dataset? | 0.6679 |
| Sim10k | MILA | MILA: Memory-Based Instance-Level Adaptation for Cross-Domain Object Detection | 2023-11-20 | https://arxiv.org/abs/2309.01086v1 | https://github.com/hitachi-rd-cv/MILA | In the paper 'MILA: Memory-Based Instance-Level Adaptation for Cross-Domain Object Detection', what mAP score did the MILA model get on the Sim10k dataset? | 57.4 |
| PIQA | LLaMA3 8B+MoSLoRA | Mixture-of-Subspaces in Low-Rank Adaptation | 2024-06-16 | https://arxiv.org/abs/2406.11909v3 | https://github.com/wutaiqiang/moslora | In the paper 'Mixture-of-Subspaces in Low-Rank Adaptation', what Accuracy score did the LLaMA3 8B+MoSLoRA model get on the PIQA dataset? | 89.7 |
| 3DPW | NIKI (Twist-and-Swing) | NIKI: Neural Inverse Kinematics with Invertible Neural Networks for 3D Human Pose and Shape Estimation | 2023-05-15 | https://arxiv.org/abs/2305.08590v1 | https://github.com/jeff-sjtu/niki | In the paper 'NIKI: Neural Inverse Kinematics with Invertible Neural Networks for 3D Human Pose and Shape Estimation', what PA-MPJPE score did the NIKI (Twist-and-Swing) model get on the 3DPW dataset? | 40.6 |
| LaSOT-ext | ODTrack-B | ODTrack: Online Dense Temporal Token Learning for Visual Tracking | 2024-01-03 | https://arxiv.org/abs/2401.01686v1 | https://github.com/gxnu-zhonglab/odtrack | In the paper 'ODTrack: Online Dense Temporal Token Learning for Visual Tracking', what AUC score did the ODTrack-B model get on the LaSOT-ext dataset? | 52.4 |
| CUHK Avenue | MULDE-object-centric-micro | MULDE: Multiscale Log-Density Estimation via Denoising Score Matching for Video Anomaly Detection | 2024-03-21 | https://arxiv.org/abs/2403.14497v1 | https://github.com/jakubmicorek/MULDE-Multiscale-Log-Density-Estimation-via-Denoising-Score-Matching-for-Video-Anomaly-Detection | In the paper 'MULDE: Multiscale Log-Density Estimation via Denoising Score Matching for Video Anomaly Detection', what AUC score did the MULDE-object-centric-micro model get on the CUHK Avenue dataset? | 94.3% |
| CIFAR-100 | GAC-SNN | Gated Attention Coding for Training High-performance and Efficient Spiking Neural Networks | 2023-08-12 | https://arxiv.org/abs/2308.06582v2 | https://github.com/bollossom/GAC | In the paper 'Gated Attention Coding for Training High-performance and Efficient Spiking Neural Networks', what Percentage correct score did the GAC-SNN model get on the CIFAR-100 dataset? | 80.45 |
| AVeriTeC | HerO | HerO at AVeriTeC: The Herd of Open Large Language Models for Verifying Real-World Claims | 2024-10-16 | https://arxiv.org/abs/2410.12377v2 | https://github.com/ssu-humane/hero | In the paper 'HerO at AVeriTeC: The Herd of Open Large Language Models for Verifying Real-World Claims', what Question Only score did the HerO model get on the AVeriTeC dataset? | 0.48 |
| SUN-RGBD | DFormer-B | AsymFormer: Asymmetrical Cross-Modal Representation Learning for Mobile Platform Real-Time RGB-D Semantic Segmentation | 2023-09-25 | https://arxiv.org/abs/2309.14065v7 | https://github.com/Fourier7754/AsymFormer | In the paper 'AsymFormer: Asymmetrical Cross-Modal Representation Learning for Mobile Platform Real-Time RGB-D Semantic Segmentation', what Mean IoU score did the DFormer-B model get on the SUN-RGBD dataset? | 49.1% |
| MassSpecGym | MIST | MassSpecGym: A benchmark for the discovery and identification of molecules | 2024-10-30 | https://arxiv.org/abs/2410.23326v1 | https://github.com/pluskal-lab/massspecgym | In the paper 'MassSpecGym: A benchmark for the discovery and identification of molecules', what Hit rate @ 1 score did the MIST model get on the MassSpecGym dataset? | 14.64 |
| MS-COCO | GKGNet(resolution 224) | GKGNet: Group K-Nearest Neighbor based Graph Convolutional Network for Multi-Label Image Recognition | 2023-08-28 | https://arxiv.org/abs/2308.14378v3 | https://github.com/jin-s13/gkgnet | In the paper 'GKGNet: Group K-Nearest Neighbor based Graph Convolutional Network for Multi-Label Image Recognition', what mAP score did the GKGNet(resolution 224) model get on the MS-COCO dataset? | 82 |
| Stanford2D3D Panoramic | SFSS-MMSI (RGB Only) | Single Frame Semantic Segmentation Using Multi-Modal Spherical Images | 2023-08-18 | https://arxiv.org/abs/2308.09369v1 | https://github.com/sguttikon/SFSS-MMSI | In the paper 'Single Frame Semantic Segmentation Using Multi-Modal Spherical Images', what mIoU score did the SFSS-MMSI (RGB Only) model get on the Stanford2D3D Panoramic dataset? | 52.87% |
| MMPD-Dataset | MMPedestron | When Pedestrian Detection Meets Multi-Modal Learning: Generalist Model and Benchmark Dataset | 2024-07-14 | https://arxiv.org/abs/2407.10125v1 | https://github.com/BubblyYi/MMPedestron | In the paper 'When Pedestrian Detection Meets Multi-Modal Learning: Generalist Model and Benchmark Dataset', what box mAP score did the MMPedestron model get on the MMPD-Dataset dataset? | 79.0 |
| ETTh2 (96) Multivariate | MoLE-RLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11 | https://arxiv.org/abs/2312.06786v3 | https://github.com/rogerni/mole | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-RLinear model get on the ETTh2 (96) Multivariate dataset? | 0.273 |
| CropHarvest multicrop - Global | Feature Gated Fusion | Impact Assessment of Missing Data in Model Predictions for Earth Observation Applications | 2024-03-21 | https://arxiv.org/abs/2403.14297v2 | https://github.com/fmenat/missingviews-study-eo | In the paper 'Impact Assessment of Missing Data in Model Predictions for Earth Observation Applications', what Average Accuracy score did the Feature Gated Fusion model get on the CropHarvest multicrop - Global dataset? | 0.734 |
| ImageNet 512x512 | EDM2-S Autoguidance (XS, T/16) | Guiding a Diffusion Model with a Bad Version of Itself | 2024-06-04 | https://arxiv.org/abs/2406.02507v2 | https://github.com/nvlabs/edm2 | In the paper 'Guiding a Diffusion Model with a Bad Version of Itself', what FID score did the EDM2-S Autoguidance (XS, T/16) model get on the ImageNet 512x512 dataset? | 1.34 |
| NYU Depth v2 | SwinMTL | SwinMTL: A Shared Architecture for Simultaneous Depth Estimation and Semantic Segmentation from Monocular Camera Images | 2024-03-15 | https://arxiv.org/abs/2403.10662v1 | https://github.com/pardistaghavi/swinmtl | In the paper 'SwinMTL: A Shared Architecture for Simultaneous Depth Estimation and Semantic Segmentation from Monocular Camera Images', what Mean IoU score did the SwinMTL model get on the NYU Depth v2 dataset? | 58.14% |
| MM-Vet | LLaVA-VT (Vicuna-7B) | Beyond Embeddings: The Promise of Visual Table in Visual Reasoning | 2024-03-27 | https://arxiv.org/abs/2403.18252v2 | https://github.com/lavi-lab/visual-table | In the paper 'Beyond Embeddings: The Promise of Visual Table in Visual Reasoning', what GPT-4 score did the LLaVA-VT (Vicuna-7B) model get on the MM-Vet dataset? | 31.8 |
| Food-101 | TURTLE (CLIP + DINOv2) | Let Go of Your Labels with Unsupervised Transfer | 2024-06-11 | https://arxiv.org/abs/2406.07236v1 | https://github.com/mlbio-epfl/turtle | In the paper 'Let Go of Your Labels with Unsupervised Transfer', what Accuracy score did the TURTLE (CLIP + DINOv2) model get on the Food-101 dataset? | 92.2 |
| SVHN (250 Labels, ImageNet-100 Unlabeled) | UnMixMatch | Scaling Up Semi-supervised Learning with Unconstrained Unlabelled Data | 2023-06-02 | https://arxiv.org/abs/2306.01222v2 | https://github.com/shuvenduroy/unmixmatch | In the paper 'Scaling Up Semi-supervised Learning with Unconstrained Unlabelled Data', what Accuracy score did the UnMixMatch model get on the SVHN (250 Labels, ImageNet-100 Unlabeled) dataset? | 80.78 |
| NYU Depth v2 | Metric3Dv2(L, FT) | Metric3Dv2: A Versatile Monocular Geometric Foundation Model for Zero-shot Metric Depth and Surface Normal Estimation | 2024-03-22 | https://arxiv.org/abs/2404.15506v3 | https://github.com/yvanyin/metric3d | In the paper 'Metric3Dv2: A Versatile Monocular Geometric Foundation Model for Zero-shot Metric Depth and Surface Normal Estimation', what % < 11.25 score did the Metric3Dv2(L, FT) model get on the NYU Depth v2 dataset? | 68.8 |
| PASCAL VOC to Comic2k | CDDMSL | Semi-Supervised Domain Generalization for Object Detection via Language-Guided Feature Alignment | 2023-09-24 | https://arxiv.org/abs/2309.13525v1 | https://github.com/sinamalakouti/CDDMSL | In the paper 'Semi-Supervised Domain Generalization for Object Detection via Language-Guided Feature Alignment', what mAP score did the CDDMSL model get on the PASCAL VOC to Comic2k dataset? | 46.3 |
| VisDA2017 | SFDA2 | SF(DA)$^2$: Source-free Domain Adaptation Through the Lens of Data Augmentation | 2024-03-16 | https://arxiv.org/abs/2403.10834v1 | https://github.com/shinyflight/sfda2 | In the paper 'SF(DA)$^2$: Source-free Domain Adaptation Through the Lens of Data Augmentation', what Accuracy score did the SFDA2 model get on the VisDA2017 dataset? | 88.1 |
| BTAD | RealNet | RealNet: A Feature Selection Network with Realistic Synthetic Anomaly for Anomaly Detection | 2024-03-09 | https://arxiv.org/abs/2403.05897v1 | https://github.com/cnulab/realnet | In the paper 'RealNet: A Feature Selection Network with Realistic Synthetic Anomaly for Anomaly Detection', what Segmentation AUROC score did the RealNet model get on the BTAD dataset? | 97.9 |
| VGGFace2 | SMDL-Attribution (ICLR version) | Less is More: Fewer Interpretable Region via Submodular Subset Selection | 2024-02-14 | https://arxiv.org/abs/2402.09164v3 | https://github.com/ruoyuchen10/smdl-attribution | In the paper 'Less is More: Fewer Interpretable Region via Submodular Subset Selection', what Insertion AUC (ArcFace ResNet-101) score did the SMDL-Attribution (ICLR version) model get on the VGGFace2 dataset? | 0.6705 |
| HO-3D v2 | Hamba | Hamba: Single-view 3D Hand Reconstruction with Graph-guided Bi-Scanning Mamba | 2024-07-12 | https://arxiv.org/abs/2407.09646v2 | https://github.com/humansensinglab/Hamba | In the paper 'Hamba: Single-view 3D Hand Reconstruction with Graph-guided Bi-Scanning Mamba', what PA-MPJPE (mm) score did the Hamba model get on the HO-3D v2 dataset? | 7.5 |
| Fish-100 | BUCTD-preNet-W48 (CID-W32) | Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity | 2023-06-13 | https://arxiv.org/abs/2306.07879v2 | https://github.com/amathislab/BUCTD | In the paper 'Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity', what mAP score did the BUCTD-preNet-W48 (CID-W32) model get on the Fish-100 dataset? | 88.0 |
| MORPH Album2 (SE) | FaRL+MLP | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10 | https://arxiv.org/abs/2307.04570v3 | https://github.com/paplhjak/facial-age-estimation-benchmark | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the FaRL+MLP model get on the MORPH Album2 (SE) dataset? | 3.04 |
| ImageNet | HVT Huge | HVT: A Comprehensive Vision Framework for Learning in Non-Euclidean Space | 2024-09-25 | https://arxiv.org/abs/2409.16897v2 | https://github.com/hyperbolicvit/hyperbolicvit | In the paper 'HVT: A Comprehensive Vision Framework for Learning in Non-Euclidean Space', what Top 1 Accuracy score did the HVT Huge model get on the ImageNet dataset? | 87.4% |
| KIT Motion-Language | GUESS | GUESS:GradUally Enriching SyntheSis for Text-Driven Human Motion Generation | 2024-01-04 | https://arxiv.org/abs/2401.02142v2 | https://github.com/xuehao-gao/guess | In the paper 'GUESS:GradUally Enriching SyntheSis for Text-Driven Human Motion Generation', what FID score did the GUESS model get on the KIT Motion-Language dataset? | 0.371 |
| IMDb | OCaTS (kNN & GPT-3.5-turbo) | Cache me if you Can: an Online Cost-aware Teacher-Student framework to Reduce the Calls to Large Language Models | 2023-10-20 | https://arxiv.org/abs/2310.13395v1 | https://github.com/stoyian/OCaTS | In the paper 'Cache me if you Can: an Online Cost-aware Teacher-Student framework to Reduce the Calls to Large Language Models', what Accuracy score did the OCaTS (kNN & GPT-3.5-turbo) model get on the IMDb dataset? | 93.06 |
| ImageNet-LT | ProCo (ResNet50) | Probabilistic Contrastive Learning for Long-Tailed Visual Recognition | 2024-03-11 | https://arxiv.org/abs/2403.06726v2 | https://github.com/leaplabthu/proco | In the paper 'Probabilistic Contrastive Learning for Long-Tailed Visual Recognition', what Top-1 Accuracy score did the ProCo (ResNet50) model get on the ImageNet-LT dataset? | 60.2 |
| ICDAR2015 | CLIP4STR-L (DataComp-1B) | CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model | 2023-05-23 | https://arxiv.org/abs/2305.14014v3 | https://github.com/VamosC/CLIP4STR | In the paper 'CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model', what Accuracy score did the CLIP4STR-L (DataComp-1B) model get on the ICDAR2015 dataset? | 91.4 |
| ELD SonyA7S2 x100 | LRD | Towards General Low-Light Raw Noise Synthesis and Modeling | 2023-07-31 | https://arxiv.org/abs/2307.16508v2 | https://github.com/fengzhang427/LRD | In the paper 'Towards General Low-Light Raw Noise Synthesis and Modeling', what PSNR (Raw) score did the LRD model get on the ELD SonyA7S2 x100 dataset? | 44.95 |
| ImageNet | DeiT-S-24 + GFSA | Graph Convolutions Enrich the Self-Attention in Transformers! | 2023-12-07 | https://arxiv.org/abs/2312.04234v5 | https://github.com/jeongwhanchoi/gfsa | In the paper 'Graph Convolutions Enrich the Self-Attention in Transformers!', what Top 1 Accuracy score did the DeiT-S-24 + GFSA model get on the ImageNet dataset? | 81.5% |
| SVHN-to-MNIST | FACT | FACT: Federated Adversarial Cross Training | 2023-06-01 | https://arxiv.org/abs/2306.00607v2 | https://github.com/jonas-lippl/fact | In the paper 'FACT: Federated Adversarial Cross Training', what Accuracy score did the FACT model get on the SVHN-to-MNIST dataset? | 90.6 |
| ETTm1 (720) Multivariate | PRformer | PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting | 2024-08-20 | https://arxiv.org/abs/2408.10483v1 | https://github.com/usualheart/prformer | In the paper 'PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting', what MSE score did the PRformer model get on the ETTm1 (720) Multivariate dataset? | 0.426 |
| Lipophilicity | SMA | Self-Guided Masked Autoencoders for Domain-Agnostic Self-Supervised Learning | 2024-02-22 | https://arxiv.org/abs/2402.14789v1 | https://github.com/johnathan-xie/sma | In the paper 'Self-Guided Masked Autoencoders for Domain-Agnostic Self-Supervised Learning', what RMSE score did the SMA model get on the Lipophilicity dataset? | 0.609 |
| IndustReal | YoloV8 (synthetic data only) | IndustReal: A Dataset for Procedure Step Recognition Handling Execution Errors in Egocentric Videos in an Industrial-Like Setting | 2023-10-26 | https://arxiv.org/abs/2310.17323v1 | https://github.com/timschoonbeek/industreal | In the paper 'IndustReal: A Dataset for Procedure Step Recognition Handling Execution Errors in Egocentric Videos in an Industrial-Like Setting', what mAP score did the YoloV8 (synthetic data only) model get on the IndustReal dataset? | 57.5 |
| Set5 - 4x upscaling | HMA† | HMANet: Hybrid Multi-Axis Aggregation Network for Image Super-Resolution | 2024-05-08 | https://arxiv.org/abs/2405.05001v1 | https://github.com/korouuuuu/hma | In the paper 'HMANet: Hybrid Multi-Axis Aggregation Network for Image Super-Resolution', what PSNR score did the HMA† model get on the Set5 - 4x upscaling dataset? | 33.38 |
| ZINC-500k | GraphGPS + HDSE | Enhancing Graph Transformers with Hierarchical Distance Structural Encoding | 2023-08-22 | https://arxiv.org/abs/2308.11129v4 | https://github.com/luoyk1999/hdse | In the paper 'Enhancing Graph Transformers with Hierarchical Distance Structural Encoding', what MAE score did the GraphGPS + HDSE model get on the ZINC-500k dataset? | 0.062 |
| SPair-71k | GeoAware-SC (Supervised, AP-10K P.T.) | Telling Left from Right: Identifying Geometry-Aware Semantic Correspondence | 2023-11-28 | https://arxiv.org/abs/2311.17034v2 | https://github.com/Junyi42/geoaware-sc | In the paper 'Telling Left from Right: Identifying Geometry-Aware Semantic Correspondence', what PCK score did the GeoAware-SC (Supervised, AP-10K P.T.) model get on the SPair-71k dataset? | 85.6 |
| MeQSum | BiomedGPT | BiomedGPT: A Generalist Vision-Language Foundation Model for Diverse Biomedical Tasks | 2023-05-26 | https://arxiv.org/abs/2305.17100v4 | https://github.com/taokz/biomedgpt | In the paper 'BiomedGPT: A Generalist Vision-Language Foundation Model for Diverse Biomedical Tasks', what RougeL score did the BiomedGPT model get on the MeQSum dataset? | 52.3 |
| ImageNet - 10% labeled data | Meta Co-Training | Meta Co-Training: Two Views are Better than One | 2023-11-29 | https://arxiv.org/abs/2311.18083v4 | https://github.com/jayrothenberger/meta-co-training | In the paper 'Meta Co-Training: Two Views are Better than One', what Top 1 Accuracy score did the Meta Co-Training model get on the ImageNet - 10% labeled data dataset? | 85.8% |
| CAMELYON16 | CAMIL (CAMIL-L) | CAMIL: Context-Aware Multiple Instance Learning for Cancer Detection and Subtyping in Whole Slide Images | 2023-05-09 | https://arxiv.org/abs/2305.05314v3 | https://github.com/olgarithmics/ICLR_CAMIL | In the paper 'CAMIL: Context-Aware Multiple Instance Learning for Cancer Detection and Subtyping in Whole Slide Images', what AUC score did the CAMIL (CAMIL-L) model get on the CAMELYON16 dataset? | 0.953 |
| ActivityNet Adverbs | ReGaDa | Video-adverb retrieval with compositional adverb-action embeddings | 2023-09-26 | https://arxiv.org/abs/2309.15086v1 | https://github.com/ExplainableML/ReGaDa | In the paper 'Video-adverb retrieval with compositional adverb-action embeddings', what mAP W score did the ReGaDa model get on the ActivityNet Adverbs dataset? | 0.239 |
| SCUT-CTW1500 | DeepSolo (ResNet-50) | DeepSolo++: Let Transformer Decoder with Explicit Points Solo for Multilingual Text Spotting | 2023-05-31 | https://arxiv.org/abs/2305.19957v2 | https://github.com/vitae-transformer/deepsolo | In the paper 'DeepSolo++: Let Transformer Decoder with Explicit Points Solo for Multilingual Text Spotting', what F-measure (%) - No Lexicon score did the DeepSolo (ResNet-50) model get on the SCUT-CTW1500 dataset? | 64.2 |
| PEMS-BAY | TITAN | A Time Series is Worth Five Experts: Heterogeneous Mixture of Experts for Traffic Flow Prediction | 2024-09-26 | https://arxiv.org/abs/2409.17440v1 | https://github.com/sqlcow/TITAN | In the paper 'A Time Series is Worth Five Experts: Heterogeneous Mixture of Experts for Traffic Flow Prediction', what MAE @ 12 step score did the TITAN model get on the PEMS-BAY dataset? | 1.69 |
| CoIR | Voyage-code-002 | CoIR: A Comprehensive Benchmark for Code Information Retrieval Models | 2024-07-03 | https://arxiv.org/abs/2407.02883v1 | https://github.com/coir-team/coir | In the paper 'CoIR: A Comprehensive Benchmark for Code Information Retrieval Models', what nDCG@10 score did the Voyage-code-002 model get on the CoIR dataset? | 56.26 |
| MSCOCO | CLIM (RN50) | CLIM: Contrastive Language-Image Mosaic for Region Representation | 2023-12-18 | https://arxiv.org/abs/2312.11376v2 | https://github.com/wusize/clim | In the paper 'CLIM: Contrastive Language-Image Mosaic for Region Representation', what AP 0.5 score did the CLIM (RN50) model get on the MSCOCO dataset? | 36.9 |
| EMDB | WHAM (ViT) | WHAM: Reconstructing World-grounded Humans with Accurate 3D Motion | 2023-12-12 | https://arxiv.org/abs/2312.07531v2 | https://github.com/yohanshin/WHAM | In the paper 'WHAM: Reconstructing World-grounded Humans with Accurate 3D Motion', what Average MPJPE (mm) score did the WHAM (ViT) model get on the EMDB dataset? | 79.7 |
| TriMouse-161 | CID-W32 | Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity | 2023-06-13 | https://arxiv.org/abs/2306.07879v2 | https://github.com/amathislab/BUCTD | In the paper 'Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity', what mAP score did the CID-W32 model get on the TriMouse-161 dataset? | 86.8 |
| Clotho | PaSST–RoBERTa & GPT-augment | Advancing Natural-Language Based Audio Retrieval with PaSST and Large Audio-Caption Data Sets | 2023-08-08 | https://arxiv.org/abs/2308.04258v1 | https://github.com/optimusprimus/dcase2023_task6b | In the paper 'Advancing Natural-Language Based Audio Retrieval with PaSST and Large Audio-Caption Data Sets', what R@1 score did the PaSST–RoBERTa & GPT-augment model get on the Clotho dataset? | 26.07 |
| VRDS | Turtle | Learning Truncated Causal History Model for Video Restoration | 2024-10-04 | https://arxiv.org/abs/2410.03936v2 | https://github.com/Ascend-Research/Turtle | In the paper 'Learning Truncated Causal History Model for Video Restoration', what SSIM score did the Turtle model get on the VRDS dataset? | 0.9590 |
| MSRVTT-QA | FrozenBiLM+ | Open-vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models | 2023-08-18 | https://arxiv.org/abs/2308.09363v1 | https://github.com/mlvlab/ovqa | In the paper 'Open-vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models', what Accuracy score did the FrozenBiLM+ model get on the MSRVTT-QA dataset? | 0.470 |
| PeMS08 | PM-DMNet(R) | Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction | 2024-08-12 | https://arxiv.org/abs/2408.07100v1 | https://github.com/wengwenchao123/PM-DMNet | In the paper 'Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction', what MAE@1h score did the PM-DMNet(R) model get on the PeMS08 dataset? | 13.40 |
| MICCAI 2015 Multi-Atlas Abdomen Labeling Challenge | EMCAD | EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation | 2024-05-11 | https://arxiv.org/abs/2405.06880v1 | https://github.com/sldgroup/emcad | In the paper 'EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation', what Avg DSC score did the EMCAD model get on the MICCAI 2015 Multi-Atlas Abdomen Labeling Challenge dataset? | 83.63 |
| CIFAR-100 ResNet-18 - 300 Epochs | IBM | Towards Redundancy-Free Sub-networks in Continual Learning | 2023-12-01 | https://arxiv.org/abs/2312.00840v2 | https://github.com/zackschen/IBM-Net | In the paper 'Towards Redundancy-Free Sub-networks in Continual Learning', what Accuracy score did the IBM model get on the CIFAR-100 ResNet-18 - 300 Epochs dataset? | 88.15 |
| RefCOCO+ val | VATEX | Vision-Aware Text Features in Referring Image Segmentation: From Object Understanding to Context Understanding | 2024-04-12 | https://arxiv.org/abs/2404.08590v2 | https://github.com/nero1342/VATEX_RIS | In the paper 'Vision-Aware Text Features in Referring Image Segmentation: From Object Understanding to Context Understanding', what Mean IoU score did the VATEX model get on the RefCOCO+ val dataset? | 70.02 |
| NeedForSpeed | LoRAT-g-378 | Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance | 2024-03-08 | https://arxiv.org/abs/2403.05231v2 | https://github.com/litinglin/lorat | In the paper 'Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance', what AUC score did the LoRAT-g-378 model get on the NeedForSpeed dataset? | 0.681 |
| DomainNet | GMDG (RegNetY-16GF, SWAD) | Rethinking Multi-domain Generalization with A General Learning Objective | 2024-02-29 | https://arxiv.org/abs/2402.18853v1 | https://github.com/zhaorui-tan/GMDG_cvpr2024 | In the paper 'Rethinking Multi-domain Generalization with A General Learning Objective', what Average Accuracy score did the GMDG (RegNetY-16GF, SWAD) model get on the DomainNet dataset? | 61.3 |
| IndustReal | B3 | IndustReal: A Dataset for Procedure Step Recognition Handling Execution Errors in Egocentric Videos in an Industrial-Like Setting | 2023-10-26 | https://arxiv.org/abs/2310.17323v1 | https://github.com/timschoonbeek/industreal | In the paper 'IndustReal: A Dataset for Procedure Step Recognition Handling Execution Errors in Egocentric Videos in an Industrial-Like Setting', what F1 score did the B3 model get on the IndustReal dataset? | 0.883 |
| CIFAR-10 (partial ratio 0.1) | ILL | Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations | 2023-05-22 | https://arxiv.org/abs/2305.12715v4 | https://github.com/hhhhhhao/general-framework-weak-supervision | In the paper 'Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations', what Accuracy score did the ILL model get on the CIFAR-10 (partial ratio 0.1) dataset? | 96.37 |
| COCO minival | GLEE-Plus | General Object Foundation Model for Images and Videos at Scale | 2023-12-14 | https://arxiv.org/abs/2312.09158v1 | https://github.com/FoundationVision/GLEE | In the paper 'General Object Foundation Model for Images and Videos at Scale', what mask AP score did the GLEE-Plus model get on the COCO minival dataset? | 53.0 |
| QVHighlights | SG-DETR | Saliency-Guided DETR for Moment Retrieval and Highlight Detection | 2024-10-02 | https://arxiv.org/abs/2410.01615v1 | https://github.com/ai-forever/sg-detr | In the paper 'Saliency-Guided DETR for Moment Retrieval and Highlight Detection', what mAP score did the SG-DETR model get on the QVHighlights dataset? | 43.76 |
| CIFAR-100 | DKD++(T:resnet50, S:mobilenetv2) | Improving Knowledge Distillation via Regularizing Feature Norm and Direction | 2023-05-26 | https://arxiv.org/abs/2305.17007v1 | https://github.com/wangyz1608/knowledge-distillation-via-nd | In the paper 'Improving Knowledge Distillation via Regularizing Feature Norm and Direction', what Top-1 Accuracy (%) score did the DKD++(T:resnet50, S:mobilenetv2) model get on the CIFAR-100 dataset? | 70.82 |
PASCAL VOC | OneNeted,4 | OneNet: A Channel-Wise 1D Convolutional U-Net | 2024-11-14T00:00:00 | https://arxiv.org/abs/2411.09838v1 | [
"https://github.com/shbyun080/onenet"
] | In the paper 'OneNet: A Channel-Wise 1D Convolutional U-Net', what mIoU score did the OneNeted,4 model get on the PASCAL VOC dataset
| 14.9 |
SUN-RGBD | GeminiFusion (Swin-Large) | GeminiFusion: Efficient Pixel-wise Multimodal Fusion for Vision Transformer | 2024-06-03T00:00:00 | https://arxiv.org/abs/2406.01210v2 | [
"https://github.com/jiadingcn/geminifusion"
] | In the paper 'GeminiFusion: Efficient Pixel-wise Multimodal Fusion for Vision Transformer', what Mean IoU score did the GeminiFusion (Swin-Large) model get on the SUN-RGBD dataset
| 54.6 |
Pittsburgh-30k-test | ProGEO | ProGEO: Generating Prompts through Image-Text Contrastive Learning for Visual Geo-localization | 2024-06-04T00:00:00 | https://arxiv.org/abs/2406.01906v1 | [
"https://github.com/chain-mao/progeo"
] | In the paper 'ProGEO: Generating Prompts through Image-Text Contrastive Learning for Visual Geo-localization', what Recall@1 score did the ProGEO model get on the Pittsburgh-30k-test dataset
| 93.0 |
MSP-IMPROV | emoDARTS | emoDARTS: Joint Optimisation of CNN & Sequential Neural Network Architectures for Superior Speech Emotion Recognition | 2024-03-21T00:00:00 | https://arxiv.org/abs/2403.14083v1 | [
"https://github.com/jayaneetha/emoDARTS"
] | In the paper 'emoDARTS: Joint Optimisation of CNN & Sequential Neural Network Architectures for Superior Speech Emotion Recognition', what UA score did the emoDARTS model get on the MSP-IMPROV dataset
| 0.6563 |
GraspNet-1Billion | HGGD-CD | Efficient Heatmap-Guided 6-Dof Grasp Detection in Cluttered Scenes | 2024-03-27T00:00:00 | https://arxiv.org/abs/2403.18546v2 | [
"https://github.com/THU-VCLab/HGGD"
] | In the paper 'Efficient Heatmap-Guided 6-Dof Grasp Detection in Cluttered Scenes', what AP_similar score did the HGGD-CD model get on the GraspNet-1Billion dataset
| 53.59 |
SVHN, 250 Labels | ShrinkMatch | Shrinking Class Space for Enhanced Certainty in Semi-Supervised Learning | 2023-08-13T00:00:00 | https://arxiv.org/abs/2308.06777v1 | [
"https://github.com/LiheYoung/ShrinkMatch"
] | In the paper 'Shrinking Class Space for Enhanced Certainty in Semi-Supervised Learning', what Accuracy score did the ShrinkMatch model get on the SVHN, 250 Labels dataset
| 98.04 |
E2E NLG Challenge | Self-memory | Self-training from Self-memory in Data-to-text Generation | 2024-01-19T00:00:00 | https://arxiv.org/abs/2401.10567v1 | [
"https://github.com/hoangthangta/stsm"
] | In the paper 'Self-training from Self-memory in Data-to-text Generation', what BLEU score did the Self-memory model get on the E2E NLG Challenge dataset
| 65.11 |
COCO test-dev | LeYOLO-Nano | LeYOLO, New Scalable and Efficient CNN Architecture for Object Detection | 2024-06-20T00:00:00 | https://arxiv.org/abs/2406.14239v1 | [
"https://github.com/LilianHollard/LeYOLO"
] | In the paper 'LeYOLO, New Scalable and Efficient CNN Architecture for Object Detection', what Params (M) score did the LeYOLO-Nano model get on the COCO test-dev dataset
| 1.1 |
QVHighlights | BAM-DETR | BAM-DETR: Boundary-Aligned Moment Detection Transformer for Temporal Sentence Grounding in Videos | 2023-11-30T00:00:00 | https://arxiv.org/abs/2312.00083v2 | [
"https://github.com/Pilhyeon/BAM-DETR"
] | In the paper 'BAM-DETR: Boundary-Aligned Moment Detection Transformer for Temporal Sentence Grounding in Videos', what mAP score did the BAM-DETR model get on the QVHighlights dataset
| 45.36 |
MultiNLI | LM-CPPF RoBERTa-base | LM-CPPF: Paraphrasing-Guided Data Augmentation for Contrastive Prompt-Based Few-Shot Fine-Tuning | 2023-05-29T00:00:00 | https://arxiv.org/abs/2305.18169v3 | [
"https://github.com/amirabaskohi/lm-cppf"
] | In the paper 'LM-CPPF: Paraphrasing-Guided Data Augmentation for Contrastive Prompt-Based Few-Shot Fine-Tuning', what Accuracy score did the LM-CPPF RoBERTa-base model get on the MultiNLI dataset
| 68.4 |
SHD | SNN with Dilated Convolution with Learnable Spacings | Learning Delays in Spiking Neural Networks using Dilated Convolutions with Learnable Spacings | 2023-06-30T00:00:00 | https://arxiv.org/abs/2306.17670v3 | [
"https://github.com/thvnvtos/snn-delays"
] | In the paper 'Learning Delays in Spiking Neural Networks using Dilated Convolutions with Learnable Spacings', what Percentage correct score did the SNN with Dilated Convolution with Learnable Spacings model get on the SHD dataset
| 95.1 |
BIG-bench (Causal Judgment) | PaLM 2 (few-shot, k=3, CoT) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | [
"https://github.com/eternityyw/tram-benchmark"
] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2 (few-shot, k=3, CoT) model get on the BIG-bench (Causal Judgment) dataset
| 58.8 |
ReDial | KERL | Knowledge Graphs and Pre-trained Language Models enhanced Representation Learning for Conversational Recommender Systems | 2023-12-18T00:00:00 | https://arxiv.org/abs/2312.10967v3 | [
"https://github.com/icedpanda/KERL"
] | In the paper 'Knowledge Graphs and Pre-trained Language Models enhanced Representation Learning for Conversational Recommender Systems', what Recall@1 score did the KERL model get on the ReDial dataset
| 0.056 |
MBPP | Branch-Train-MiX 4x7B (sampling top-2 experts) | Branch-Train-MiX: Mixing Expert LLMs into a Mixture-of-Experts LLM | 2024-03-12T00:00:00 | https://arxiv.org/abs/2403.07816v1 | [
"https://github.com/Leeroo-AI/mergoo"
] | In the paper 'Branch-Train-MiX: Mixing Expert LLMs into a Mixture-of-Experts LLM', what Accuracy score did the Branch-Train-MiX 4x7B (sampling top-2 experts) model get on the MBPP dataset
| 39.4 |
PhotoChat | PaCE | PaCE: Unified Multi-modal Dialogue Pre-training with Progressive and Compositional Experts | 2023-05-24T00:00:00 | https://arxiv.org/abs/2305.14839v2 | [
"https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/pace"
] | In the paper 'PaCE: Unified Multi-modal Dialogue Pre-training with Progressive and Compositional Experts', what R1 score did the PaCE model get on the PhotoChat dataset
| 15.2 |
Tanks and Temples | TensoRF + NeRFLiX++ | From NeRFLiX to NeRFLiX++: A General NeRF-Agnostic Restorer Paradigm | 2023-06-10T00:00:00 | https://arxiv.org/abs/2306.06388v3 | [
"https://github.com/redrock303/NeRFLiX_CPVR2023"
] | In the paper 'From NeRFLiX to NeRFLiX++: A General NeRF-Agnostic Restorer Paradigm', what PSNR score did the TensoRF + NeRFLiX++ model get on the Tanks and Temples dataset
| 29.24 |
GSM8K | DART-Math-Llama3-8B-Uniform (0-shot CoT, w/o code) | DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving | 2024-06-18T00:00:00 | https://arxiv.org/abs/2407.13690v1 | [
"https://github.com/hkust-nlp/dart-math"
] | In the paper 'DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving', what Accuracy score did the DART-Math-Llama3-8B-Uniform (0-shot CoT, w/o code) model get on the GSM8K dataset
| 82.5 |
UCR Anomaly Archive | SR-CNN | Explainable Time Series Anomaly Detection using Masked Latent Generative Modeling | 2023-11-21T00:00:00 | https://arxiv.org/abs/2311.12550v5 | [
"https://github.com/ml4its/timevqvae-anomalydetection"
] | In the paper 'Explainable Time Series Anomaly Detection using Masked Latent Generative Modeling', what accuracy score did the SR-CNN model get on the UCR Anomaly Archive dataset
| 0.3 |
S3DIS | OneFormer3D | OneFormer3D: One Transformer for Unified Point Cloud Segmentation | 2023-11-24T00:00:00 | https://arxiv.org/abs/2311.14405v1 | [
"https://github.com/oneformer3d/oneformer3d"
] | In the paper 'OneFormer3D: One Transformer for Unified Point Cloud Segmentation', what mIoU (6-Fold) score did the OneFormer3D model get on the S3DIS dataset
| 75.0 |