| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
| MM-Vet | LLaVA-S^2 + DenseFusion-1M (Vicuna-7B) | DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception | 2024-07-11 | https://arxiv.org/abs/2407.08303v2 | https://github.com/baaivision/densefusion | In the paper 'DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception', what GPT-4 score did the LLaVA-S^2 + DenseFusion-1M (Vicuna-7B) model get on the MM-Vet dataset? | 37.5 |
| COIN | Norton | Multi-granularity Correspondence Learning from Long-term Noisy Videos | 2024-01-30 | https://arxiv.org/abs/2401.16702v1 | https://github.com/XLearning-SCU/2024-ICLR-Norton | In the paper 'Multi-granularity Correspondence Learning from Long-term Noisy Videos', what Frame accuracy score did the Norton model get on the COIN dataset? | 69.8 |
| QVHighlights | Mr. BLIP | The Surprising Effectiveness of Multimodal Large Language Models for Video Moment Retrieval | 2024-06-26 | https://arxiv.org/abs/2406.18113v3 | https://github.com/sudo-Boris/mr-Blip | In the paper 'The Surprising Effectiveness of Multimodal Large Language Models for Video Moment Retrieval', what mAP score did the Mr. BLIP model get on the QVHighlights dataset? | 51.37 |
| RES-Q | QurrentOS-coder + Claude 3 Opus | RES-Q: Evaluating Code-Editing Large Language Model Systems at the Repository Scale | 2024-06-24 | https://arxiv.org/abs/2406.16801v2 | https://github.com/qurrent-ai/res-q | In the paper 'RES-Q: Evaluating Code-Editing Large Language Model Systems at the Repository Scale', what pass@1 score did the QurrentOS-coder + Claude 3 Opus model get on the RES-Q dataset? | 36.0 |
| LVIS v1.0 val | DiverGen (Swin-L) | DiverGen: Improving Instance Segmentation by Learning Wider Data Distribution with More Diverse Generative Data | 2024-05-16 | https://arxiv.org/abs/2405.10185v1 | https://github.com/aim-uofa/DiverGen | In the paper 'DiverGen: Improving Instance Segmentation by Learning Wider Data Distribution with More Diverse Generative Data', what box AP score did the DiverGen (Swin-L) model get on the LVIS v1.0 val dataset? | 51.2 |
| UAV123 | HIPTrack | HIPTrack: Visual Tracking with Historical Prompts | 2023-11-03 | https://arxiv.org/abs/2311.02072v2 | https://github.com/wenruicai/hiptrack | In the paper 'HIPTrack: Visual Tracking with Historical Prompts', what AUC score did the HIPTrack model get on the UAV123 dataset? | 0.705 |
| ZINC | N2-GNN | Extending the Design Space of Graph Neural Networks by Rethinking Folklore Weisfeiler-Lehman | 2023-06-05 | https://arxiv.org/abs/2306.03266v3 | https://github.com/jiaruifeng/n2gnn | In the paper 'Extending the Design Space of Graph Neural Networks by Rethinking Folklore Weisfeiler-Lehman', what MAE score did the N2-GNN model get on the ZINC dataset? | 0.059 |
| PROTEINS | Graph-JEPA | Graph-level Representation Learning with Joint-Embedding Predictive Architectures | 2023-09-27 | https://arxiv.org/abs/2309.16014v2 | https://github.com/geriskenderi/graph-jepa | In the paper 'Graph-level Representation Learning with Joint-Embedding Predictive Architectures', what Accuracy score did the Graph-JEPA model get on the PROTEINS dataset? | 75.67% |
| CUB | RAT-Diffusion | Data Extrapolation for Text-to-image Generation on Small Datasets | 2024-10-02 | https://arxiv.org/abs/2410.01638v1 | https://github.com/senmaoy/RAT-Diffusion | In the paper 'Data Extrapolation for Text-to-image Generation on Small Datasets', what FID score did the RAT-Diffusion model get on the CUB dataset? | 6.36 |
| Cora | GAT + SWA | The Split Matters: Flat Minima Methods for Improving the Performance of GNNs | 2023-06-15 | https://arxiv.org/abs/2306.09121v1 | https://github.com/foisunt/fmms-in-gnns | In the paper 'The Split Matters: Flat Minima Methods for Improving the Performance of GNNs', what Accuracy score did the GAT + SWA model get on the Cora dataset? | 88.66 ± 1.38% |
| nuScenes | Fast-Poly | Fast-Poly: A Fast Polyhedral Framework For 3D Multi-Object Tracking | 2024-03-20 | https://arxiv.org/abs/2403.13443v2 | https://github.com/lixiaoyu2000/fastpoly | In the paper 'Fast-Poly: A Fast Polyhedral Framework For 3D Multi-Object Tracking', what AMOTA score did the Fast-Poly model get on the nuScenes dataset? | 0.758 |
| UZLF | LUNet | LUNet: Deep Learning for the Segmentation of Arterioles and Venules in High Resolution Fundus Images | 2023-09-11 | https://arxiv.org/abs/2309.05780v1 | https://github.com/aim-lab/LUNet | In the paper 'LUNet: Deep Learning for the Segmentation of Arterioles and Venules in High Resolution Fundus Images', what Average Dice (0.5*Dice_a + 0.5*Dice_v) score did the LUNet model get on the UZLF dataset? | 83.2 |
| RESISC45 | SAG-ViT | SAG-ViT: A Scale-Aware, High-Fidelity Patching Approach with Graph Attention for Vision Transformers | 2024-11-14 | https://arxiv.org/abs/2411.09420v2 | https://github.com/shravan-18/SAG-ViT | In the paper 'SAG-ViT: A Scale-Aware, High-Fidelity Patching Approach with Graph Attention for Vision Transformers', what F1 score did the SAG-ViT model get on the RESISC45 dataset? | 95.49 |
| MNIST | TIGT | Topology-Informed Graph Transformer | 2024-02-03 | https://arxiv.org/abs/2402.02005v1 | https://github.com/leemingo/tigt | In the paper 'Topology-Informed Graph Transformer', what Accuracy score did the TIGT model get on the MNIST dataset? | 98.230±0.133 |
| AfriSenti | AfroXLMR | UCAS-IIE-NLP at SemEval-2023 Task 12: Enhancing Generalization of Multilingual BERT for Low-resource Sentiment Analysis | 2023-06-01 | https://arxiv.org/abs/2306.01093v1 | https://github.com/zerohd4869/sacl | In the paper 'UCAS-IIE-NLP at SemEval-2023 Task 12: Enhancing Generalization of Multilingual BERT for Low-resource Sentiment Analysis', what weighted-F1 score did the AfroXLMR model get on the AfriSenti dataset? | 0.561 |
| UMVM-oea-en-fr | UMAEA (w/o surf & iter) | Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment | 2023-07-30 | https://arxiv.org/abs/2307.16210v2 | https://github.com/zjukg/umaea | In the paper 'Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment', what Hits@1 score did the UMAEA (w/o surf & iter) model get on the UMVM-oea-en-fr dataset? | 0.848 |
| BBBP | GIT-Mol(G+S) | GIT-Mol: A Multi-modal Large Language Model for Molecular Science with Graph, Image, and Text | 2023-08-14 | https://arxiv.org/abs/2308.06911v3 | https://github.com/ai-hpc-research-team/git-mol | In the paper 'GIT-Mol: A Multi-modal Large Language Model for Molecular Science with Graph, Image, and Text', what AUC score did the GIT-Mol(G+S) model get on the BBBP dataset? | 0.739 |
| FP-R-M | GeoTransformer | GeoTransformer: Fast and Robust Point Cloud Registration with Geometric Transformer | 2023-07-25 | https://arxiv.org/abs/2308.03768v1 | https://github.com/qinzheng93/geotransformer | In the paper 'GeoTransformer: Fast and Robust Point Cloud Registration with Geometric Transformer', what Recall (3cm, 10 degrees) score did the GeoTransformer model get on the FP-R-M dataset? | 55.93 |
| PASCAL-S | M3Net-S | M$^3$Net: Multilevel, Mixed and Multistage Attention Network for Salient Object Detection | 2023-09-15 | https://arxiv.org/abs/2309.08365v1 | https://github.com/I2-Multimedia-Lab/M3Net | In the paper 'M$^3$Net: Multilevel, Mixed and Multistage Attention Network for Salient Object Detection', what MAE score did the M3Net-S model get on the PASCAL-S dataset? | 0.047 |
| LaSOT-ext | RTracker-L | RTracker: Recoverable Tracking via PN Tree Structured Memory | 2024-03-28 | https://arxiv.org/abs/2403.19242v1 | https://github.com/norahgreen/rtracker | In the paper 'RTracker: Recoverable Tracking via PN Tree Structured Memory', what AUC score did the RTracker-L model get on the LaSOT-ext dataset? | 54.9 |
| DAIR-V2X-I | CoBEV | CoBEV: Elevating Roadside 3D Object Detection with Depth and Height Complementarity | 2023-10-04 | https://arxiv.org/abs/2310.02815v3 | https://github.com/MasterHow/CoBEV | In the paper 'CoBEV: Elevating Roadside 3D Object Detection with Depth and Height Complementarity', what AP\|R40(moderate) score did the CoBEV model get on the DAIR-V2X-I dataset? | 69.6 |
| ImageNet - 10% labeled data | CoMatch + EPASS (ResNet-50) | Debiasing, calibrating, and improving Semi-supervised Learning performance via simple Ensemble Projector | 2023-10-24 | https://arxiv.org/abs/2310.15764v1 | https://github.com/beandkay/epass | In the paper 'Debiasing, calibrating, and improving Semi-supervised Learning performance via simple Ensemble Projector', what Top 5 Accuracy score did the CoMatch + EPASS (ResNet-50) model get on the ImageNet - 10% labeled data dataset? | 91.5 |
| Winoground | InstructBLIP-ZS-CoT | Compositional Chain-of-Thought Prompting for Large Multimodal Models | 2023-11-27 | https://arxiv.org/abs/2311.17076v3 | https://github.com/chancharikmitra/ccot | In the paper 'Compositional Chain-of-Thought Prompting for Large Multimodal Models', what Text Score did the InstructBLIP-ZS-CoT model get on the Winoground dataset? | 9.3 |
| waymo pedestrian | PillarNeXt | PillarNeXt: Rethinking Network Designs for 3D Object Detection in LiDAR Point Clouds | 2023-05-08 | https://arxiv.org/abs/2305.04925v1 | https://github.com/qcraftai/pillarnext | In the paper 'PillarNeXt: Rethinking Network Designs for 3D Object Detection in LiDAR Point Clouds', what APH/L2 score did the PillarNeXt model get on the waymo pedestrian dataset? | 75.98 |
| SVAMP | MMOS-CODE-34B(0-shot) | An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning | 2024-02-23 | https://arxiv.org/abs/2403.00799v1 | https://github.com/cyzhh/MMOS | In the paper 'An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning', what Execution Accuracy score did the MMOS-CODE-34B(0-shot) model get on the SVAMP dataset? | 80.6 |
| MedQA | BioMedGPT-10B | BioMedGPT: Open Multimodal Generative Pre-trained Transformer for BioMedicine | 2023-08-18 | https://arxiv.org/abs/2308.09442v2 | https://github.com/pharmolix/openbiomed | In the paper 'BioMedGPT: Open Multimodal Generative Pre-trained Transformer for BioMedicine', what Accuracy score did the BioMedGPT-10B model get on the MedQA dataset? | 50.4 |
| View-of-Delft (val) | RCBEVDet | RCBEVDet: Radar-camera Fusion in Bird's Eye View for 3D Object Detection | 2024-03-25 | https://arxiv.org/abs/2403.16440v1 | https://github.com/vdigpku/rcbevdet | In the paper 'RCBEVDet: Radar-camera Fusion in Bird's Eye View for 3D Object Detection', what mAP score did the RCBEVDet model get on the View-of-Delft (val) dataset? | 49.99 |
| VNHSGE Mathematics | Bing Chat | VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models | 2023-05-20 | https://arxiv.org/abs/2305.12199v1 | https://github.com/xdao85/vnhsge | In the paper 'VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models', what Accuracy score did the Bing Chat model get on the VNHSGE Mathematics dataset? | 60 |
| IMDb Movie Reviews | XLNet | Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs | 2024-01-30 | https://arxiv.org/abs/2401.16638v1 | https://github.com/stepantita/space-model | In the paper 'Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs', what Accuracy (2 classes) score did the XLNet model get on the IMDb Movie Reviews dataset? | 0.9387 |
| USNA-Cn2 (long-term) | RNN | Effective Benchmarks for Optical Turbulence Modeling | 2024-01-07 | https://arxiv.org/abs/2401.03573v1 | https://github.com/cdjellen/otbench | In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the RNN model get on the USNA-Cn2 (long-term) dataset? | 0.530 |
| BEDLAM | Multi-HMR | Multi-HMR: Multi-Person Whole-Body Human Mesh Recovery in a Single Shot | 2024-02-22 | https://arxiv.org/abs/2402.14654v2 | https://github.com/naver/multi-hmr | In the paper 'Multi-HMR: Multi-Person Whole-Body Human Mesh Recovery in a Single Shot', what PVE-All score did the Multi-HMR model get on the BEDLAM dataset? | 76.80 |
| LAGENDA age | MiVOLO-V2 | Beyond Specialization: Assessing the Capabilities of MLLMs in Age and Gender Estimation | 2024-03-04 | https://arxiv.org/abs/2403.02302v3 | https://github.com/wildchlamydia/mivolo | In the paper 'Beyond Specialization: Assessing the Capabilities of MLLMs in Age and Gender Estimation', what MAE score did the MiVOLO-V2 model get on the LAGENDA age dataset? | 3.65 |
| AudioSet | CLAPSep | CLAPSep: Leveraging Contrastive Pre-trained Model for Multi-Modal Query-Conditioned Target Sound Extraction | 2024-02-27 | https://arxiv.org/abs/2402.17455v4 | https://github.com/aisaka0v0/clapsep | In the paper 'CLAPSep: Leveraging Contrastive Pre-trained Model for Multi-Modal Query-Conditioned Target Sound Extraction', what SI-SDRi score did the CLAPSep model get on the AudioSet dataset? | 8.44 |
| ImageNet-LT | APA (SE-ResNext-50) | Adaptive Parametric Activation | 2024-07-11 | https://arxiv.org/abs/2407.08567v2 | https://github.com/kostas1515/aglu | In the paper 'Adaptive Parametric Activation', what Top-1 Accuracy score did the APA (SE-ResNext-50) model get on the ImageNet-LT dataset? | 59.1 |
| Places-LT | APA (SE-ResNet-50) | Adaptive Parametric Activation | 2024-07-11 | https://arxiv.org/abs/2407.08567v2 | https://github.com/kostas1515/aglu | In the paper 'Adaptive Parametric Activation', what Top-1 Accuracy score did the APA (SE-ResNet-50) model get on the Places-LT dataset? | 42.0 |
| RLBench | PolarNet | PolarNet: 3D Point Clouds for Language-Guided Robotic Manipulation | 2023-09-27 | https://arxiv.org/abs/2309.15596v1 | https://github.com/vlc-robot/polarnet | In the paper 'PolarNet: 3D Point Clouds for Language-Guided Robotic Manipulation', what Succ. Rate (18 tasks, 100 demo/task) score did the PolarNet model get on the RLBench dataset? | 46.4 |
| CommitmentBank | PaLM 2-L (one-shot) | PaLM 2 Technical Report | 2023-05-17 | https://arxiv.org/abs/2305.10403v3 | https://github.com/eternityyw/tram-benchmark | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-L (one-shot) model get on the CommitmentBank dataset? | 87.5 |
| PascalVOC-SP | GatedGCN-HSG | Next Level Message-Passing with Hierarchical Support Graphs | 2024-06-22 | https://arxiv.org/abs/2406.15852v2 | https://github.com/carlosinator/support-graphs | In the paper 'Next Level Message-Passing with Hierarchical Support Graphs', what macro F1 score did the GatedGCN-HSG model get on the PascalVOC-SP dataset? | 0.4604±0.0059 |
| MATH | ToRA 70B (w/ code) | ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving | 2023-09-29 | https://arxiv.org/abs/2309.17452v4 | https://github.com/microsoft/tora | In the paper 'ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving', what Accuracy score did the ToRA 70B (w/ code) model get on the MATH dataset? | 49.7 |
| ETTm1 (336) Multivariate | DiPE-Linear | Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting | 2024-11-26 | https://arxiv.org/abs/2411.17257v1 | https://github.com/wintertee/dipe-linear | In the paper 'Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting', what MSE score did the DiPE-Linear model get on the ETTm1 (336) Multivariate dataset? | 0.367 |
| MM-Vet | Emu2-Chat | Generative Multimodal Models are In-Context Learners | 2023-12-20 | https://arxiv.org/abs/2312.13286v2 | https://github.com/baaivision/emu | In the paper 'Generative Multimodal Models are In-Context Learners', what GPT-4 score did the Emu2-Chat model get on the MM-Vet dataset? | 48.5 |
| PixelRec | SASRec | An Image Dataset for Benchmarking Recommender Systems with Raw Pixels | 2023-09-13 | https://arxiv.org/abs/2309.06789v2 | https://github.com/westlake-repl/pixelrec | In the paper 'An Image Dataset for Benchmarking Recommender Systems with Raw Pixels', what Hit@10 score did the SASRec model get on the PixelRec dataset? | 0.025 |
| LaSOT | LoRAT-g-378 | Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance | 2024-03-08 | https://arxiv.org/abs/2403.05231v2 | https://github.com/litinglin/lorat | In the paper 'Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance', what AUC score did the LoRAT-g-378 model get on the LaSOT dataset? | 76.2 |
| SMAC 3s5z_vs_4s6z | DDN | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04 | https://arxiv.org/abs/2306.02430v1 | https://github.com/j3soon/dfac-extended | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the DDN model get on the SMAC 3s5z_vs_4s6z dataset? | 89.77 |
| IIIT5k | CLIP4STR-B (DataComp-1B) | CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model | 2023-05-23 | https://arxiv.org/abs/2305.14014v3 | https://github.com/VamosC/CLIP4STR | In the paper 'CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model', what Accuracy score did the CLIP4STR-B (DataComp-1B) model get on the IIIT5k dataset? | 99.5 |
| Electricity (96) | PRformer | PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting | 2024-08-20 | https://arxiv.org/abs/2408.10483v1 | https://github.com/usualheart/prformer | In the paper 'PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting', what MSE score did the PRformer model get on the Electricity (96) dataset? | 0.127 |
| Manga109 - 4x upscaling | DRCT | DRCT: Saving Image Super-resolution away from Information Bottleneck | 2024-03-31 | https://arxiv.org/abs/2404.00722v5 | https://github.com/ming053l/drct | In the paper 'DRCT: Saving Image Super-resolution away from Information Bottleneck', what PSNR score did the DRCT model get on the Manga109 - 4x upscaling dataset? | 32.96 |
| ECSSD | SAM2-UNet | SAM2-UNet: Segment Anything 2 Makes Strong Encoder for Natural and Medical Image Segmentation | 2024-08-16 | https://arxiv.org/abs/2408.08870v1 | https://github.com/wzh0120/sam2-unet | In the paper 'SAM2-UNet: Segment Anything 2 Makes Strong Encoder for Natural and Medical Image Segmentation', what MAE score did the SAM2-UNet model get on the ECSSD dataset? | 0.020 |
| Stanford Cars | ZLaP | Label Propagation for Zero-shot Classification with Vision-Language Models | 2024-04-05 | https://arxiv.org/abs/2404.04072v1 | https://github.com/vladan-stojnic/zlap | In the paper 'Label Propagation for Zero-shot Classification with Vision-Language Models', what Accuracy score did the ZLaP model get on the Stanford Cars dataset? | 71.2 |
| CelebA-Test | PMRF | Posterior-Mean Rectified Flow: Towards Minimum MSE Photo-Realistic Image Restoration | 2024-10-01 | https://arxiv.org/abs/2410.00418v1 | https://github.com/ohayonguy/PMRF | In the paper 'Posterior-Mean Rectified Flow: Towards Minimum MSE Photo-Realistic Image Restoration', what FID score did the PMRF model get on the CelebA-Test dataset? | 37.46 |
| MedQA | Meditron-70B (CoT + SC) | MEDITRON-70B: Scaling Medical Pretraining for Large Language Models | 2023-11-27 | https://arxiv.org/abs/2311.16079v1 | https://github.com/epfllm/meditron | In the paper 'MEDITRON-70B: Scaling Medical Pretraining for Large Language Models', what Accuracy score did the Meditron-70B (CoT + SC) model get on the MedQA dataset? | 70.2 |
| AMZ Photo | GCN | Half-Hop: A graph upsampling approach for slowing down message passing | 2023-08-17 | https://arxiv.org/abs/2308.09198v1 | https://github.com/nerdslab/halfhop | In the paper 'Half-Hop: A graph upsampling approach for slowing down message passing', what Accuracy score did the GCN model get on the AMZ Photo dataset? | 93.59% |
| DESED | ATST-SED | Fine-tune the pretrained ATST model for sound event detection | 2023-09-15 | https://arxiv.org/abs/2309.08153v2 | https://github.com/Audio-WestlakeU/ATST-SED | In the paper 'Fine-tune the pretrained ATST model for sound event detection', what event-based F1 score did the ATST-SED model get on the DESED dataset? | 63.4 |
| PROTEINS | rLap (unsupervised) | Randomized Schur Complement Views for Graph Contrastive Learning | 2023-06-06 | https://arxiv.org/abs/2306.04004v1 | https://github.com/kvignesh1420/rlap | In the paper 'Randomized Schur Complement Views for Graph Contrastive Learning', what Accuracy score did the rLap (unsupervised) model get on the PROTEINS dataset? | 84.3 |
| WN18RR | KERMIT | KERMIT: Knowledge Graph Completion of Enhanced Relation Modeling with Inverse Transformation | 2023-09-26 | https://arxiv.org/abs/2309.14770v2 | https://github.com/lirt1231/kermit | In the paper 'KERMIT: Knowledge Graph Completion of Enhanced Relation Modeling with Inverse Transformation', what MRR score did the KERMIT model get on the WN18RR dataset? | 0.700 |
| BIOSCAN_1M_Insect Dataset | BIOSCAN_1M_family_classifier | A Step Towards Worldwide Biodiversity Assessment: The BIOSCAN-1M Insect Dataset | 2023-07-19 | https://arxiv.org/abs/2307.10455v3 | https://github.com/zahrag/BIOSCAN-1M | In the paper 'A Step Towards Worldwide Biodiversity Assessment: The BIOSCAN-1M Insect Dataset', what Macro F1 score did the BIOSCAN_1M_family_classifier model get on the BIOSCAN_1M_Insect Dataset? | 91.45 |
| DAVIS | DMT | Deficiency-Aware Masked Transformer for Video Inpainting | 2023-07-17 | https://arxiv.org/abs/2307.08629v1 | https://github.com/yeates/dmt | In the paper 'Deficiency-Aware Masked Transformer for Video Inpainting', what PSNR score did the DMT model get on the DAVIS dataset? | 33.82 |
| MMBench | DreamLLM-7B | DreamLLM: Synergistic Multimodal Comprehension and Creation | 2023-09-20 | https://arxiv.org/abs/2309.11499v2 | https://github.com/RunpeiDong/DreamLLM | In the paper 'DreamLLM: Synergistic Multimodal Comprehension and Creation', what GPT-3.5 score did the DreamLLM-7B model get on the MMBench dataset? | 49.9 |
| SPOT-10 | MobileNet Distiller | SPOTS-10: Animal Pattern Benchmark Dataset for Machine Learning Algorithms | 2024-10-28 | https://arxiv.org/abs/2410.21044v1 | https://github.com/amotica/spots-10 | In the paper 'SPOTS-10: Animal Pattern Benchmark Dataset for Machine Learning Algorithms', what Accuracy score did the MobileNet Distiller model get on the SPOT-10 dataset? | 78.26 |
| CARLA | DriveAdapter+TCP | DriveAdapter: Breaking the Coupling Barrier of Perception and Planning in End-to-End Autonomous Driving | 2023-08-01 | https://arxiv.org/abs/2308.00398v2 | https://github.com/opendrivelab/driveadapter | In the paper 'DriveAdapter: Breaking the Coupling Barrier of Perception and Planning in End-to-End Autonomous Driving', what Driving Score did the DriveAdapter+TCP model get on the CARLA dataset? | 71 |
| PSNR | Analyzing Noise Models and Advanced Filtering Algorithms for Image Enhancement | Analyzing Noise Models and Advanced Filtering Algorithms for Image Enhancement | 2024-10-29 | https://arxiv.org/abs/2410.21946v2 | https://github.com/SahilAliAkbar/Image_Noise_Analysis | In the paper 'Analyzing Noise Models and Advanced Filtering Algorithms for Image Enhancement', what PSNR score did the Analyzing Noise Models and Advanced Filtering Algorithms for Image Enhancement model get on the PSNR dataset? | PSNR Values |
| NTU RGB+D | π-ViT (RGB only) | Just Add $π$! Pose Induced Video Transformers for Understanding Activities of Daily Living | 2023-11-30 | https://arxiv.org/abs/2311.18840v1 | https://github.com/dominickrei/pi-vit | In the paper 'Just Add $π$! Pose Induced Video Transformers for Understanding Activities of Daily Living', what Accuracy (CS) score did the π-ViT (RGB only) model get on the NTU RGB+D dataset? | 94.0 |
| ImageNet 32x32 | MuLAN | Diffusion Models With Learned Adaptive Noise | 2023-12-20 | https://arxiv.org/abs/2312.13236v3 | https://github.com/s-sahoo/mulan | In the paper 'Diffusion Models With Learned Adaptive Noise', what NLL (bits/dim) score did the MuLAN model get on the ImageNet 32x32 dataset? | 3.67 |
| Wiki-CS | ScaleNet | Scale Invariance of Graph Neural Networks | 2024-11-28 | https://arxiv.org/abs/2411.19392v2 | https://github.com/qin87/scalenet | In the paper 'Scale Invariance of Graph Neural Networks', what Accuracy score did the ScaleNet model get on the Wiki-CS dataset? | 79.3±0.6 |
| Human3.6M | KTPFormer (T=243) | KTPFormer: Kinematics and Trajectory Prior Knowledge-Enhanced Transformer for 3D Human Pose Estimation | 2024-03-31 | https://arxiv.org/abs/2404.00658v2 | https://github.com/JihuaPeng/KTPFormer | In the paper 'KTPFormer: Kinematics and Trajectory Prior Knowledge-Enhanced Transformer for 3D Human Pose Estimation', what Average MPJPE (mm) score did the KTPFormer (T=243) model get on the Human3.6M dataset? | 33.0 |
| ChaLearn 2016 | FaRL+MLP | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10 | https://arxiv.org/abs/2307.04570v3 | https://github.com/paplhjak/facial-age-estimation-benchmark | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the FaRL+MLP model get on the ChaLearn 2016 dataset? | 3.38 |
| DRIVE | PVT-GCASCADE | G-CASCADE: Efficient Cascaded Graph Convolutional Decoding for 2D Medical Image Segmentation | 2023-10-24 | https://arxiv.org/abs/2310.16175v1 | https://github.com/SLDGroup/G-CASCADE | In the paper 'G-CASCADE: Efficient Cascaded Graph Convolutional Decoding for 2D Medical Image Segmentation', what F1 score did the PVT-GCASCADE model get on the DRIVE dataset? | 0.8210 |
| UTKFace | ResNet-50-Mean-Variance | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10 | https://arxiv.org/abs/2307.04570v3 | https://github.com/paplhjak/facial-age-estimation-benchmark | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-Mean-Variance model get on the UTKFace dataset? | 4.42 |
| Cornell | MGNN + Hetero-S (4 layers) | The Heterophilic Snowflake Hypothesis: Training and Empowering GNNs for Heterophilic Graphs | 2024-06-18 | https://arxiv.org/abs/2406.12539v1 | https://github.com/bingreeky/heterosnoh | In the paper 'The Heterophilic Snowflake Hypothesis: Training and Empowering GNNs for Heterophilic Graphs', what Accuracy score did the MGNN + Hetero-S (4 layers) model get on the Cornell dataset? | 68.18 |
| One-class CIFAR-10 | GeneralAD | GeneralAD: Anomaly Detection Across Domains by Attending to Distorted Features | 2024-07-17 | https://arxiv.org/abs/2407.12427v1 | https://github.com/LucStrater/GeneralAD | In the paper 'GeneralAD: Anomaly Detection Across Domains by Attending to Distorted Features', what AUROC score did the GeneralAD model get on the One-class CIFAR-10 dataset? | 99.3 |
| MOSE | Cutie (small, with mose) | Putting the Object Back into Video Object Segmentation | 2023-10-19 | https://arxiv.org/abs/2310.12982v2 | https://github.com/hkchengrex/Cutie | In the paper 'Putting the Object Back into Video Object Segmentation', what J&F score did the Cutie (small, with mose) model get on the MOSE dataset? | 67.4 |
| AISHELL-1 | Zipformer+CR-CTC (no external language model) | CR-CTC: Consistency regularization on CTC for improved speech recognition | 2024-10-07 | https://arxiv.org/abs/2410.05101v3 | https://github.com/k2-fsa/icefall | In the paper 'CR-CTC: Consistency regularization on CTC for improved speech recognition', what Word Error Rate (WER) score did the Zipformer+CR-CTC (no external language model) model get on the AISHELL-1 dataset? | 4.02 |
| TID2013 | ARNIQA | ARNIQA: Learning Distortion Manifold for Image Quality Assessment | 2023-10-20 | https://arxiv.org/abs/2310.14918v2 | https://github.com/miccunifi/arniqa | In the paper 'ARNIQA: Learning Distortion Manifold for Image Quality Assessment', what SRCC score did the ARNIQA model get on the TID2013 dataset? | 0.880 |
| UMVM-oea-d-w-v2 | UMAEA (w/o surf) | Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment | 2023-07-30 | https://arxiv.org/abs/2307.16210v2 | https://github.com/zjukg/umaea | In the paper 'Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment', what Hits@1 score did the UMAEA (w/o surf) model get on the UMVM-oea-d-w-v2 dataset? | 0.973 |
| Nordland | BoQ (ResNet-50) | BoQ: A Place is Worth a Bag of Learnable Queries | 2024-05-12 | https://arxiv.org/abs/2405.07364v3 | https://github.com/amaralibey/bag-of-queries | In the paper 'BoQ: A Place is Worth a Bag of Learnable Queries', what Recall@1 score did the BoQ (ResNet-50) model get on the Nordland dataset? | 83.1 |
| SWaT | CARLA | CARLA: Self-supervised Contrastive Representation Learning for Time Series Anomaly Detection | 2023-08-18 | https://arxiv.org/abs/2308.09296v4 | https://github.com/zamanzadeh/CARLA | In the paper 'CARLA: Self-supervised Contrastive Representation Learning for Time Series Anomaly Detection', what precision score did the CARLA model get on the SWaT dataset? | 0.9886 |
| Motion-X | HumanTOMATO | HumanTOMATO: Text-aligned Whole-body Motion Generation | 2023-10-19 | https://arxiv.org/abs/2310.12978v1 | https://github.com/IDEA-Research/HumanTOMATO | In the paper 'HumanTOMATO: Text-aligned Whole-body Motion Generation', what FID score did the HumanTOMATO model get on the Motion-X dataset? | 1.174 |
| KIT Motion-Language | FineMoGen | FineMoGen: Fine-Grained Spatio-Temporal Motion Generation and Editing | 2023-12-22 | https://arxiv.org/abs/2312.15004v1 | https://github.com/mingyuan-zhang/FineMoGen | In the paper 'FineMoGen: Fine-Grained Spatio-Temporal Motion Generation and Editing', what FID score did the FineMoGen model get on the KIT Motion-Language dataset? | 0.178 |
| BN-AuthProf | Multinomial Naive Bayes (MNB) | BN-AuthProf: Benchmarking Machine Learning for Bangla Author Profiling on Social Media Texts | 2024-12-03 | https://arxiv.org/abs/2412.02058v1 | https://github.com/crusnic-corp/BN-AuthProf | In the paper 'BN-AuthProf: Benchmarking Machine Learning for Bangla Author Profiling on Social Media Texts', what F1 score did the Multinomial Naive Bayes (MNB) model get on the BN-AuthProf dataset? | 0.905 |
| Weather (192) | RLinear-CI | Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping | 2023-05-18 | https://arxiv.org/abs/2305.10721v1 | https://github.com/plumprc/rtsf | In the paper 'Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping', what MSE score did the RLinear-CI model get on the Weather (192) dataset? | 0.189 |
MM-Vet | TextBind | TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild | 2023-09-14T00:00:00 | https://arxiv.org/abs/2309.08637v5 | [
"https://github.com/sihengli99/textbind"
] | In the paper 'TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild', what GPT-4 score did the TextBind model get on the MM-Vet dataset
| 19.4 |
Mol-Instruction | BioT5+ | BioT5+: Towards Generalized Biological Understanding with IUPAC Integration and Multi-task Tuning | 2024-02-27T00:00:00 | https://arxiv.org/abs/2402.17810v2 | [
"https://github.com/QizhiPei/BioT5"
] | In the paper 'BioT5+: Towards Generalized Biological Understanding with IUPAC Integration and Multi-task Tuning', what Exact score did the BioT5+ model get on the Mol-Instruction dataset
| 0.642 |
MM-Vet v2 | Emu2-Chat | Generative Multimodal Models are In-Context Learners | 2023-12-20T00:00:00 | https://arxiv.org/abs/2312.13286v2 | [
"https://github.com/baaivision/emu"
] | In the paper 'Generative Multimodal Models are In-Context Learners', what GPT-4 score did the Emu2-Chat model get on the MM-Vet v2 dataset
| 38.0±0.1 |
GSM8K | OVM-Mistral-7B (verify100@1) | OVM, Outcome-supervised Value Models for Planning in Mathematical Reasoning | 2023-11-16T00:00:00 | https://arxiv.org/abs/2311.09724v2 | [
"https://github.com/freedomintelligence/ovm"
] | In the paper 'OVM, Outcome-supervised Value Models for Planning in Mathematical Reasoning', what Accuracy score did the OVM-Mistral-7B (verify100@1) model get on the GSM8K dataset
| 84.7 |
BURST-val | Cutie (base, with mose, 600 pixels) | Putting the Object Back into Video Object Segmentation | 2023-10-19T00:00:00 | https://arxiv.org/abs/2310.12982v2 | [
"https://github.com/hkchengrex/Cutie"
] | In the paper 'Putting the Object Back into Video Object Segmentation', what HOTA (all) score did the Cutie (base, with mose, 600 pixels) model get on the BURST-val dataset
| 58.4 |
UTKFace | ResNet-50-OR-CNN | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10T00:00:00 | https://arxiv.org/abs/2307.04570v3 | [
"https://github.com/paplhjak/facial-age-estimation-benchmark"
] | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-OR-CNN model get on the UTKFace dataset
| 4.40 |
CHILI-100K | Random | CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning | 2024-02-20T00:00:00 | https://arxiv.org/abs/2402.13221v2 | [
"https://github.com/UlrikFriisJensen/CHILI"
] | In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what F1-score (Weighted) score did the Random model get on the CHILI-100K dataset
| 0.015 +/- 0.000 |
LibriSpeech test-clean | kNN-VC (prematched HiFiGAN) | Voice Conversion With Just Nearest Neighbors | 2023-05-30T00:00:00 | https://arxiv.org/abs/2305.18975v1 | [
"https://github.com/bshall/knn-vc"
] | In the paper 'Voice Conversion With Just Nearest Neighbors', what Word Error Rate (WER) score did the kNN-VC (prematched HiFiGAN) model get on the LibriSpeech test-clean dataset
| 7.36 |
Binarized MNIST | BFN | Bayesian Flow Networks | 2023-08-14T00:00:00 | https://arxiv.org/abs/2308.07037v5 | [
"https://github.com/nnaisense/bayesian-flow-networks"
] | In the paper 'Bayesian Flow Networks', what nats score did the BFN model get on the Binarized MNIST dataset
| 77.87 |
CelebA | SMDL-Attribution (ICLR version) | Less is More: Fewer Interpretable Region via Submodular Subset Selection | 2024-02-14T00:00:00 | https://arxiv.org/abs/2402.09164v3 | [
"https://github.com/ruoyuchen10/smdl-attribution"
] | In the paper 'Less is More: Fewer Interpretable Region via Submodular Subset Selection', what Insertion AUC (ArcFace ResNet-101) score did the SMDL-Attribution (ICLR version) model get on the CelebA dataset
| 0.5752 |
Refer-YouTube-VOS (2021 public validation) | DEVA (ReferFormer) | Tracking Anything with Decoupled Video Segmentation | 2023-09-07T00:00:00 | https://arxiv.org/abs/2309.03903v1 | [
"https://github.com/hkchengrex/Tracking-Anything-with-DEVA"
] | In the paper 'Tracking Anything with Decoupled Video Segmentation', what J&F score did the DEVA (ReferFormer) model get on the Refer-YouTube-VOS (2021 public validation) dataset
| 66.0 |
Office-Home | TransAdapter-B (ours) | TransAdapter: Vision Transformer for Feature-Centric Unsupervised Domain Adaptation | 2024-12-05T00:00:00 | https://arxiv.org/abs/2412.04073v1 | [
"https://github.com/enesdoruk/TransAdapter"
] | In the paper 'TransAdapter: Vision Transformer for Feature-Centric Unsupervised Domain Adaptation', what Accuracy score did the TransAdapter-B (ours) model get on the Office-Home dataset
| 89.4 |
BigEarthNet (official test set) | MAE (ViT-S/16) | Feature Guided Masked Autoencoder for Self-supervised Learning in Remote Sensing | 2023-10-28T00:00:00 | https://arxiv.org/abs/2310.18653v1 | [
"https://github.com/zhu-xlab/fgmae"
] | In the paper 'Feature Guided Masked Autoencoder for Self-supervised Learning in Remote Sensing', what mAP (micro) score did the MAE (ViT-S/16) model get on the BigEarthNet (official test set) dataset
| 88.6 |
ImageNet | WavePaint | WavePaint: Resource-efficient Token-mixer for Self-supervised Inpainting | 2023-07-01T00:00:00 | https://arxiv.org/abs/2307.00407v1 | [
"https://github.com/pranavphoenix/WavePaint"
] | In the paper 'WavePaint: Resource-efficient Token-mixer for Self-supervised Inpainting', what FID score did the WavePaint model get on the ImageNet dataset
| 3.21 |
3DPW | WHAM (ViT) | WHAM: Reconstructing World-grounded Humans with Accurate 3D Motion | 2023-12-12T00:00:00 | https://arxiv.org/abs/2312.07531v2 | [
"https://github.com/yohanshin/WHAM"
] | In the paper 'WHAM: Reconstructing World-grounded Humans with Accurate 3D Motion', what PA-MPJPE score did the WHAM (ViT) model get on the 3DPW dataset
| 35.9 |
USNA-Cn2 (short-duration) | RNN | Effective Benchmarks for Optical Turbulence Modeling | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03573v1 | [
"https://github.com/cdjellen/otbench"
] | In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the RNN model get on the USNA-Cn2 (short-duration) dataset
| 0.375 |
ImageNet | PromptKD | PromptKD: Unsupervised Prompt Distillation for Vision-Language Models | 2024-03-05T00:00:00 | https://arxiv.org/abs/2403.02781v5 | [
"https://github.com/zhengli97/promptkd"
] | In the paper 'PromptKD: Unsupervised Prompt Distillation for Vision-Language Models', what Harmonic mean score did the PromptKD model get on the ImageNet dataset
| 77.62 |
FER2013 | TURTLE (CLIP + DINOv2) | Let Go of Your Labels with Unsupervised Transfer | 2024-06-11T00:00:00 | https://arxiv.org/abs/2406.07236v1 | [
"https://github.com/mlbio-epfl/turtle"
] | In the paper 'Let Go of Your Labels with Unsupervised Transfer', what Accuracy score did the TURTLE (CLIP + DINOv2) model get on the FER2013 dataset
| 36.2 |
PF-PASCAL | GeoAware-SC (Supervised) | Telling Left from Right: Identifying Geometry-Aware Semantic Correspondence | 2023-11-28T00:00:00 | https://arxiv.org/abs/2311.17034v2 | [
"https://github.com/Junyi42/geoaware-sc"
] | In the paper 'Telling Left from Right: Identifying Geometry-Aware Semantic Correspondence', what PCK score did the GeoAware-SC (Supervised) model get on the PF-PASCAL dataset
| 95.1 |
COPA | PaLM 2-L (1-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | [
"https://github.com/eternityyw/tram-benchmark"
] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-L (1-shot) model get on the COPA dataset
| 96.0 |