| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
| MM-Vet | OneLLM-7B | OneLLM: One Framework to Align All Modalities with Language | 2023-12-06 | https://arxiv.org/abs/2312.03700v1 | https://github.com/csuhan/onellm | In the paper 'OneLLM: One Framework to Align All Modalities with Language', what GPT-4 score did the OneLLM-7B model get on the MM-Vet dataset | 29.1 |
| ARKitScenes | UniDet3D | UniDet3D: Multi-dataset Indoor 3D Object Detection | 2024-09-06 | https://arxiv.org/abs/2409.04234v1 | https://github.com/filapro/unidet3d | In the paper 'UniDet3D: Multi-dataset Indoor 3D Object Detection', what mAP@0.25 score did the UniDet3D model get on the ARKitScenes dataset | 61.3 |
| Saarbruecken Voice Database (males) | SVM | Reproducible Machine Learning-based Voice Pathology Detection: Introducing the Pitch Difference Feature | 2024-10-14 | https://arxiv.org/abs/2410.10537v1 | https://github.com/aailab-uct/automated-robust-and-reproducible-voice-pathology-detection | In the paper 'Reproducible Machine Learning-based Voice Pathology Detection: Introducing the Pitch Difference Feature', what UAR score did the SVM model get on the Saarbruecken Voice Database (males) dataset | 84.10% |
| UTKFace | ResNet-50-DLDL | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10 | https://arxiv.org/abs/2307.04570v3 | https://github.com/paplhjak/facial-age-estimation-benchmark | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-DLDL model get on the UTKFace dataset | 4.39 |
| Casia V1+ | Early Fusion | MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization | 2023-12-04 | https://arxiv.org/abs/2312.01790v2 | https://github.com/idt-iti/mmfusion-iml | In the paper 'MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization', what Average Pixel F1 (Fixed threshold) score did the Early Fusion model get on the Casia V1+ dataset | 0.784 |
| Office-Home | SPG (CLIP, ResNet-50) | Soft Prompt Generation for Domain Generalization | 2024-04-30 | https://arxiv.org/abs/2404.19286v2 | https://github.com/renytek13/soft-prompt-generation-with-cgan | In the paper 'Soft Prompt Generation for Domain Generalization', what Average Accuracy score did the SPG (CLIP, ResNet-50) model get on the Office-Home dataset | 73.8 |
| PRCC | CAL+DLCR | DLCR: A Generative Data Expansion Framework via Diffusion for Clothes-Changing Person Re-ID | 2024-11-11 | https://arxiv.org/abs/2411.07205v2 | https://github.com/croitorualin/dlcr | In the paper 'DLCR: A Generative Data Expansion Framework via Diffusion for Clothes-Changing Person Re-ID', what Rank-1 score did the CAL+DLCR model get on the PRCC dataset | 66.5 |
| PascalVOC-20 | TagAlign (trained with image-text pairs) | TagAlign: Improving Vision-Language Alignment with Multi-Tag Classification | 2023-12-21 | https://arxiv.org/abs/2312.14149v4 | https://github.com/Qinying-Liu/TagAlign | In the paper 'TagAlign: Improving Vision-Language Alignment with Multi-Tag Classification', what mIoU score did the TagAlign (trained with image-text pairs) model get on the PascalVOC-20 dataset | 87.9 |
| ImageNet | HPT | Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models | 2023-12-11 | https://arxiv.org/abs/2312.06323v1 | https://github.com/vill-lab/2024-aaai-hpt | In the paper 'Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models', what Harmonic mean score did the HPT model get on the ImageNet dataset | 74.17 |
| Set14 - 2x upscaling | DRCT-L | DRCT: Saving Image Super-resolution away from Information Bottleneck | 2024-03-31 | https://arxiv.org/abs/2404.00722v5 | https://github.com/ming053l/drct | In the paper 'DRCT: Saving Image Super-resolution away from Information Bottleneck', what PSNR score did the DRCT-L model get on the Set14 - 2x upscaling dataset | 35.36 |
| MUSES: MUlti-SEnsor Semantic perception dataset | Mask2Former (R50) | MUSES: The Multi-Sensor Semantic Perception Dataset for Driving under Uncertainty | 2024-01-23 | https://arxiv.org/abs/2401.12761v4 | https://github.com/timbroed/MUSES | In the paper 'MUSES: The Multi-Sensor Semantic Perception Dataset for Driving under Uncertainty', what AP score did the Mask2Former (R50) model get on the MUSES: MUlti-SEnsor Semantic perception dataset | 28.14 |
| ImageNet | KAT-B* | Kolmogorov-Arnold Transformer | 2024-09-16 | https://arxiv.org/abs/2409.10594v1 | https://github.com/Adamdad/kat | In the paper 'Kolmogorov-Arnold Transformer', what Top 1 Accuracy score did the KAT-B* model get on the ImageNet dataset | 82.8 |
| CHILI-100K | Mean | CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning | 2024-02-20 | https://arxiv.org/abs/2402.13221v2 | https://github.com/UlrikFriisJensen/CHILI | In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what MSE score did the Mean model get on the CHILI-100K dataset | 0.307 |
| Natural Questions | Blended RAG | Blended RAG: Improving RAG (Retriever-Augmented Generation) Accuracy with Semantic Search and Hybrid Query-Based Retrievers | 2024-03-22 | https://arxiv.org/abs/2404.07220v2 | https://github.com/ibm-ecosystem-engineering/blended-rag | In the paper 'Blended RAG: Improving RAG (Retriever-Augmented Generation) Accuracy with Semantic Search and Hybrid Query-Based Retrievers', what EM score did the Blended RAG model get on the Natural Questions dataset | 42.63 |
| Cityscapes test | SwinMTL | SwinMTL: A Shared Architecture for Simultaneous Depth Estimation and Semantic Segmentation from Monocular Camera Images | 2024-03-15 | https://arxiv.org/abs/2403.10662v1 | https://github.com/pardistaghavi/swinmtl | In the paper 'SwinMTL: A Shared Architecture for Simultaneous Depth Estimation and Semantic Segmentation from Monocular Camera Images', what RMSE score did the SwinMTL model get on the Cityscapes test dataset | 6.352 |
| RefCOCO testB | EVP | EVP: Enhanced Visual Perception using Inverse Multi-Attentive Feature Refinement and Regularized Image-Text Alignment | 2023-12-13 | https://arxiv.org/abs/2312.08548v1 | https://github.com/lavreniuk/evp | In the paper 'EVP: Enhanced Visual Perception using Inverse Multi-Attentive Feature Refinement and Regularized Image-Text Alignment', what Overall IoU score did the EVP model get on the RefCOCO testB dataset | 72.94 |
| REDDIT-B | Graph-JEPA | Graph-level Representation Learning with Joint-Embedding Predictive Architectures | 2023-09-27 | https://arxiv.org/abs/2309.16014v2 | https://github.com/geriskenderi/graph-jepa | In the paper 'Graph-level Representation Learning with Joint-Embedding Predictive Architectures', what Accuracy score did the Graph-JEPA model get on the REDDIT-B dataset | 56.73 |
| MNIST | CNN + Wilson-Cowan model RNN | Learning in Wilson-Cowan model for metapopulation | 2024-06-24 | https://arxiv.org/abs/2406.16453v2 | https://github.com/raffaelemarino/learning_in_wilsoncowan | In the paper 'Learning in Wilson-Cowan model for metapopulation', what Accuracy score did the CNN + Wilson-Cowan model RNN model get on the MNIST dataset | 99.31 |
| MAWPS | ATHENA (roberta-large) | ATHENA: Mathematical Reasoning with Thought Expansion | 2023-11-02 | https://arxiv.org/abs/2311.01036v1 | https://github.com/the-jb/athena-math | In the paper 'ATHENA: Mathematical Reasoning with Thought Expansion', what Accuracy (%) score did the ATHENA (roberta-large) model get on the MAWPS dataset | 93 |
| EC-FUNSD | RORE (LayoutLMv3-large) | Modeling Layout Reading Order as Ordering Relations for Visually-rich Document Understanding | 2024-09-29 | https://arxiv.org/abs/2409.19672v1 | https://github.com/chongzhangFDU/ROOR | In the paper 'Modeling Layout Reading Order as Ordering Relations for Visually-rich Document Understanding', what F1 score did the RORE (LayoutLMv3-large) model get on the EC-FUNSD dataset | 84.53 |
| Clotho | Audio Flamingo (Pengi trainset) | Audio Flamingo: A Novel Audio Language Model with Few-Shot Learning and Dialogue Abilities | 2024-02-02 | https://arxiv.org/abs/2402.01831v3 | https://github.com/NVIDIA/audio-flamingo | In the paper 'Audio Flamingo: A Novel Audio Language Model with Few-Shot Learning and Dialogue Abilities', what CIDEr score did the Audio Flamingo (Pengi trainset) model get on the Clotho dataset | 0.489 |
| ETTm2 (96) Multivariate | RLinear | Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping | 2023-05-18 | https://arxiv.org/abs/2305.10721v1 | https://github.com/plumprc/rtsf | In the paper 'Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping', what MSE score did the RLinear model get on the ETTm2 (96) Multivariate dataset | 0.164 |
| Wiki-CS | HH-GraphSAGE | Half-Hop: A graph upsampling approach for slowing down message passing | 2023-08-17 | https://arxiv.org/abs/2308.09198v1 | https://github.com/nerdslab/halfhop | In the paper 'Half-Hop: A graph upsampling approach for slowing down message passing', what Accuracy score did the HH-GraphSAGE model get on the Wiki-CS dataset | 82.81 |
| CACD | ResNet-50-Mean-Variance | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10 | https://arxiv.org/abs/2307.04570v3 | https://github.com/paplhjak/facial-age-estimation-benchmark | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-Mean-Variance model get on the CACD dataset | 4.07 |
| UVG | DiQP on HEVC with QP 51 | Reversing the Damage: A QP-Aware Transformer-Diffusion Approach for 8K Video Restoration under Codec Compression | 2024-12-12 | https://arxiv.org/abs/2412.08912v1 | https://github.com/alimd94/DiQP | In the paper 'Reversing the Damage: A QP-Aware Transformer-Diffusion Approach for 8K Video Restoration under Codec Compression', what Average PSNR (dB) score did the DiQP on HEVC with QP 51 model get on the UVG dataset | 31.965 |
| IAM (line-level) | HTR-VT | HTR-VT: Handwritten Text Recognition with Vision Transformer | 2024-09-13 | https://arxiv.org/abs/2409.08573v1 | https://github.com/yutingli0606/htr-vt | In the paper 'HTR-VT: Handwritten Text Recognition with Vision Transformer', what Test CER score did the HTR-VT model get on the IAM (line-level) dataset | 4.7 |
| ACDC | AgileFormer | AgileFormer: Spatially Agile Transformer UNet for Medical Image Segmentation | 2024-03-29 | https://arxiv.org/abs/2404.00122v2 | https://github.com/sotiraslab/AgileFormer | In the paper 'AgileFormer: Spatially Agile Transformer UNet for Medical Image Segmentation', what Dice Score did the AgileFormer model get on the ACDC dataset | 0.9255 |
| CIFAR10 100k | NeuralWalker | Learning Long Range Dependencies on Graphs via Random Walks | 2024-06-05 | https://arxiv.org/abs/2406.03386v2 | https://github.com/borgwardtlab/neuralwalker | In the paper 'Learning Long Range Dependencies on Graphs via Random Walks', what Accuracy (%) score did the NeuralWalker model get on the CIFAR10 100k dataset | 80.027 ± 0.185 |
| nuScenes | MCTrack | MCTrack: A Unified 3D Multi-Object Tracking Framework for Autonomous Driving | 2024-09-23 | https://arxiv.org/abs/2409.16149v2 | https://github.com/megvii-research/mctrack | In the paper 'MCTrack: A Unified 3D Multi-Object Tracking Framework for Autonomous Driving', what AMOTA score did the MCTrack model get on the nuScenes dataset | 0.763 |
| Words in Context | PaLM 2-L (one-shot) | PaLM 2 Technical Report | 2023-05-17 | https://arxiv.org/abs/2305.10403v3 | https://github.com/eternityyw/tram-benchmark | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-L (one-shot) model get on the Words in Context dataset | 66.8 |
| PlantVillage | SAG-ViT | SAG-ViT: A Scale-Aware, High-Fidelity Patching Approach with Graph Attention for Vision Transformers | 2024-11-14 | https://arxiv.org/abs/2411.09420v2 | https://github.com/shravan-18/SAG-ViT | In the paper 'SAG-ViT: A Scale-Aware, High-Fidelity Patching Approach with Graph Attention for Vision Transformers', what F1 score did the SAG-ViT model get on the PlantVillage dataset | 97.72 |
| KMNIST | Spiking-Diffusion | Spiking-Diffusion: Vector Quantized Discrete Diffusion Model with Spiking Neural Networks | 2023-08-20 | https://arxiv.org/abs/2308.10187v4 | https://github.com/Arktis2022/Spiking-Diffusion | In the paper 'Spiking-Diffusion: Vector Quantized Discrete Diffusion Model with Spiking Neural Networks', what FID score did the Spiking-Diffusion model get on the KMNIST dataset | 59.23 |
| UHRSD | BiRefNet (DUTS) | Bilateral Reference for High-Resolution Dichotomous Image Segmentation | 2024-01-07 | https://arxiv.org/abs/2401.03407v6 | https://github.com/zhengpeng7/birefnet | In the paper 'Bilateral Reference for High-Resolution Dichotomous Image Segmentation', what S-Measure score did the BiRefNet (DUTS) model get on the UHRSD dataset | 0.931 |
| DanceTrack | C-TWiX | Learning Data Association for Multi-Object Tracking using Only Coordinates | 2024-03-12 | https://arxiv.org/abs/2403.08018v1 | https://github.com/Guepardow/TWiX | In the paper 'Learning Data Association for Multi-Object Tracking using Only Coordinates', what HOTA score did the C-TWiX model get on the DanceTrack dataset | 62.1 |
| InfographicVQA | DocFormerv2-large | DocFormerv2: Local Features for Document Understanding | 2023-06-02 | https://arxiv.org/abs/2306.01733v1 | https://github.com/uakarsh/docformerv2 | In the paper 'DocFormerv2: Local Features for Document Understanding', what ANLS score did the DocFormerv2-large model get on the InfographicVQA dataset | 48.8 |
| AgeDB | ResNet-50-Cross-Entropy | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10 | https://arxiv.org/abs/2307.04570v3 | https://github.com/paplhjak/facial-age-estimation-benchmark | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-Cross-Entropy model get on the AgeDB dataset | 5.81 |
| Oxford RobotCar Dataset | CLIP | AnyLoc: Towards Universal Visual Place Recognition | 2023-08-01 | https://arxiv.org/abs/2308.00688v2 | https://github.com/AnyLoc/AnyLoc | In the paper 'AnyLoc: Towards Universal Visual Place Recognition', what Recall@1 score did the CLIP model get on the Oxford RobotCar Dataset | 34.55 |
| UMVM-oea-en-de | UMAEA (w/o surf) | Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment | 2023-07-30 | https://arxiv.org/abs/2307.16210v2 | https://github.com/zjukg/umaea | In the paper 'Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment', what Hits@1 score did the UMAEA (w/o surf) model get on the UMVM-oea-en-de dataset | 0.974 |
| V*bench | IVM-Enhanced GPT4-V | Instruction-Guided Visual Masking | 2024-05-30 | https://arxiv.org/abs/2405.19783v2 | https://github.com/2toinf/ivm | In the paper 'Instruction-Guided Visual Masking', what Accuracy score did the IVM-Enhanced GPT4-V model get on the V*bench dataset | 81.2 |
| Atari 2600 Breakout | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07 | https://arxiv.org/abs/2305.04180v3 | https://github.com/xinjinghao/color | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score did the ASL DDQN model get on the Atari 2600 Breakout dataset | 621.7 |
| CHILI-100K | GAT | CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning | 2024-02-20 | https://arxiv.org/abs/2402.13221v2 | https://github.com/UlrikFriisJensen/CHILI | In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what F1-score (Weighted) score did the GAT model get on the CHILI-100K dataset | 0.192 ± 0.000 |
| GRAZPEDWRI-DX | YOLOv8+ResGAM | YOLOv8-AM: YOLOv8 Based on Effective Attention Mechanisms for Pediatric Wrist Fracture Detection | 2024-02-14 | https://arxiv.org/abs/2402.09329v5 | https://github.com/ruiyangju/fracture_detection_improved_yolov8 | In the paper 'YOLOv8-AM: YOLOv8 Based on Effective Attention Mechanisms for Pediatric Wrist Fracture Detection', what mAP score did the YOLOv8+ResGAM model get on the GRAZPEDWRI-DX dataset | 65.0 |
| Urban100 - 4x upscaling | RCAN (DUKD) | Data Upcycling Knowledge Distillation for Image Super-Resolution | 2023-09-25 | https://arxiv.org/abs/2309.14162v4 | https://github.com/yun224/dukd | In the paper 'Data Upcycling Knowledge Distillation for Image Super-Resolution', what PSNR score did the RCAN (DUKD) model get on the Urban100 - 4x upscaling dataset | 26.62 |
| CUTE80 | CLIP4STR-B* | An Empirical Study of Scaling Law for OCR | 2023-12-29 | https://arxiv.org/abs/2401.00028v3 | https://github.com/large-ocr-model/large-ocr-model.github.io | In the paper 'An Empirical Study of Scaling Law for OCR', what Accuracy score did the CLIP4STR-B* model get on the CUTE80 dataset | 99.65 |
| RSTPReid | MARS | MARS: Paying more attention to visual attributes for text-based person search | 2024-07-05 | https://arxiv.org/abs/2407.04287v1 | https://github.com/ergastialex/mars | In the paper 'MARS: Paying more attention to visual attributes for text-based person search', what R@1 score did the MARS model get on the RSTPReid dataset | 67.55 |
| TerraIncognita | UniDG + CORAL + ConvNeXt-B | Towards Unified and Effective Domain Generalization | 2023-10-16 | https://arxiv.org/abs/2310.10008v1 | https://github.com/invictus717/UniDG | In the paper 'Towards Unified and Effective Domain Generalization', what Average Accuracy score did the UniDG + CORAL + ConvNeXt-B model get on the TerraIncognita dataset | 69.6 |
| PROTEINS | R-GCN + PANDA | PANDA: Expanded Width-Aware Message Passing Beyond Rewiring | 2024-06-06 | https://arxiv.org/abs/2406.03671v2 | https://github.com/jeongwhanchoi/panda | In the paper 'PANDA: Expanded Width-Aware Message Passing Beyond Rewiring', what Accuracy score did the R-GCN + PANDA model get on the PROTEINS dataset | 76 |
| WildDESED | CRNN (with BEATs + Separation) | Leveraging LLM and Text-Queried Separation for Noise-Robust Sound Event Detection | 2024-11-02 | https://arxiv.org/abs/2411.01174v1 | https://github.com/apple-yinhan/noise-robust-sed | In the paper 'Leveraging LLM and Text-Queried Separation for Noise-Robust Sound Event Detection', what PSDS1 (-5dB) score did the CRNN (with BEATs + Separation) model get on the WildDESED dataset | 0.134 |
| STL-10, 40 Labels | RelationMatch | RelationMatch: Matching In-batch Relationships for Semi-supervised Learning | 2023-05-17 | https://arxiv.org/abs/2305.10397v2 | https://github.com/yifanzhang-pro/relationmatch | In the paper 'RelationMatch: Matching In-batch Relationships for Semi-supervised Learning', what Accuracy score did the RelationMatch model get on the STL-10, 40 Labels dataset | 86.06 |
| ADE20K-847 | CLIPSelf | CLIPSelf: Vision Transformer Distills Itself for Open-Vocabulary Dense Prediction | 2023-10-02 | https://arxiv.org/abs/2310.01403v2 | https://github.com/wusize/clipself | In the paper 'CLIPSelf: Vision Transformer Distills Itself for Open-Vocabulary Dense Prediction', what mIoU score did the CLIPSelf model get on the ADE20K-847 dataset | 12.4 |
| UCR Anomaly Archive | USAD | Explainable Time Series Anomaly Detection using Masked Latent Generative Modeling | 2023-11-21 | https://arxiv.org/abs/2311.12550v5 | https://github.com/ml4its/timevqvae-anomalydetection | In the paper 'Explainable Time Series Anomaly Detection using Masked Latent Generative Modeling', what accuracy score did the USAD model get on the UCR Anomaly Archive dataset | 0.276 |
| Cityscapes test | PriMaPs-EM (DINO ViT-S/8) | Boosting Unsupervised Semantic Segmentation with Principal Mask Proposals | 2024-04-25 | https://arxiv.org/abs/2404.16818v2 | https://github.com/visinf/primaps | In the paper 'Boosting Unsupervised Semantic Segmentation with Principal Mask Proposals', what mIoU score did the PriMaPs-EM (DINO ViT-S/8) model get on the Cityscapes test dataset | 19.4 |
| ANLI test | PaLM 2-L (one-shot) | PaLM 2 Technical Report | 2023-05-17 | https://arxiv.org/abs/2305.10403v3 | https://github.com/eternityyw/tram-benchmark | In the paper 'PaLM 2 Technical Report', what A1 score did the PaLM 2-L (one-shot) model get on the ANLI test dataset | 73.1 |
| GSM8K | ToRA-70B (SC, k=50) | ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving | 2023-09-29 | https://arxiv.org/abs/2309.17452v4 | https://github.com/microsoft/tora | In the paper 'ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving', what Accuracy score did the ToRA-70B (SC, k=50) model get on the GSM8K dataset | 88.3 |
| CIFAR-100-LT (ρ=100) | GML (ResNet-32) | Long-Tailed Recognition by Mutual Information Maximization between Latent Features and Ground-Truth Labels | 2023-05-02 | https://arxiv.org/abs/2305.01160v3 | https://github.com/bluecdm/Long-tailed-recognition | In the paper 'Long-Tailed Recognition by Mutual Information Maximization between Latent Features and Ground-Truth Labels', what Error Rate score did the GML (ResNet-32) model get on the CIFAR-100-LT (ρ=100) dataset | 46.0 |
| VoxCeleb | ReDimNet-B0-LM (1.0M) | Reshape Dimensions Network for Speaker Recognition | 2024-07-25 | https://arxiv.org/abs/2407.18223v2 | https://github.com/IDRnD/ReDimNet | In the paper 'Reshape Dimensions Network for Speaker Recognition', what EER score did the ReDimNet-B0-LM (1.0M) model get on the VoxCeleb dataset | 1.16 |
| ADE20K-150 | CLIPSelf | CLIPSelf: Vision Transformer Distills Itself for Open-Vocabulary Dense Prediction | 2023-10-02 | https://arxiv.org/abs/2310.01403v2 | https://github.com/wusize/clipself | In the paper 'CLIPSelf: Vision Transformer Distills Itself for Open-Vocabulary Dense Prediction', what mIoU score did the CLIPSelf model get on the ADE20K-150 dataset | 34.5 |
| Mini-Imagenet 5-way (5-shot) | MSENet | Enhancing Few-Shot Image Classification through Learnable Multi-Scale Embedding and Attention Mechanisms | 2024-09-12 | https://arxiv.org/abs/2409.07989v1 | https://github.com/FatemehAskari/MSENet | In the paper 'Enhancing Few-Shot Image Classification through Learnable Multi-Scale Embedding and Attention Mechanisms', what Accuracy score did the MSENet model get on the Mini-Imagenet 5-way (5-shot) dataset | 84.42 |
| GoogleEarth | GaussianCity | GaussianCity: Generative Gaussian Splatting for Unbounded 3D City Generation | 2024-06-10 | https://arxiv.org/abs/2406.06526v2 | https://github.com/hzxie/GaussianCity | In the paper 'GaussianCity: Generative Gaussian Splatting for Unbounded 3D City Generation', what KID score did the GaussianCity model get on the GoogleEarth dataset | 0.09 |
| S2RDA-MS-39 | PGA | Enhancing Domain Adaptation through Prompt Gradient Alignment | 2024-06-13 | https://arxiv.org/abs/2406.09353v2 | https://github.com/viethoang1512/pga | In the paper 'Enhancing Domain Adaptation through Prompt Gradient Alignment', what Accuracy score did the PGA model get on the S2RDA-MS-39 dataset | 38 |
| VP Air | SegVLAD-PreT (M) | Revisit Anything: Visual Place Recognition via Image Segment Retrieval | 2024-09-26 | https://arxiv.org/abs/2409.18049v1 | https://github.com/anyloc/revisit-anything | In the paper 'Revisit Anything: Visual Place Recognition via Image Segment Retrieval', what Recall@1 score did the SegVLAD-PreT (M) model get on the VP Air dataset | 67.2 |
| RITE | RRWNet | RRWNet: Recursive Refinement Network for Effective Retinal Artery/Vein Segmentation and Classification | 2024-02-05 | https://arxiv.org/abs/2402.03166v4 | https://github.com/j-morano/rrwnet | In the paper 'RRWNet: Recursive Refinement Network for Effective Retinal Artery/Vein Segmentation and Classification', what Accuracy score did the RRWNet model get on the RITE dataset | 0.9666 |
| BanglaBook | Multinomial NB (word 2-gram + word 3-gram) | BanglaBook: A Large-scale Bangla Dataset for Sentiment Analysis from Book Reviews | 2023-05-11 | https://arxiv.org/abs/2305.06595v3 | https://github.com/mohsinulkabir14/banglabook | In the paper 'BanglaBook: A Large-scale Bangla Dataset for Sentiment Analysis from Book Reviews', what Weighted Average F1-score did the Multinomial NB (word 2-gram + word 3-gram) model get on the BanglaBook dataset | 0.8663 |
| ETTm2 (336) Multivariate | DiPE-Linear | Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting | 2024-11-26 | https://arxiv.org/abs/2411.17257v1 | https://github.com/wintertee/dipe-linear | In the paper 'Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting', what MSE score did the DiPE-Linear model get on the ETTm2 (336) Multivariate dataset | 0.268 |
| Perception Test | Oryx (34B) | Oryx MLLM: On-Demand Spatial-Temporal Understanding at Arbitrary Resolution | 2024-09-19 | https://arxiv.org/abs/2409.12961v2 | https://github.com/oryx-mllm/oryx | In the paper 'Oryx MLLM: On-Demand Spatial-Temporal Understanding at Arbitrary Resolution', what Accuracy (Top-1) score did the Oryx (34B) model get on the Perception Test dataset | 71.4 |
| ImageNet | AIMv2-3B (448 res) | Multimodal Autoregressive Pre-training of Large Vision Encoders | 2024-11-21 | https://arxiv.org/abs/2411.14402v1 | https://github.com/apple/ml-aim | In the paper 'Multimodal Autoregressive Pre-training of Large Vision Encoders', what Top 1 Accuracy score did the AIMv2-3B (448 res) model get on the ImageNet dataset | 89.5% |
| Far-OOD | SCALE (ResNet50) | Scaling for Training Time and Post-hoc Out-of-distribution Detection Enhancement | 2023-09-30 | https://arxiv.org/abs/2310.00227v1 | https://github.com/kai422/scale | In the paper 'Scaling for Training Time and Post-hoc Out-of-distribution Detection Enhancement', what FPR@95 score did the SCALE (ResNet50) model get on the Far-OOD dataset | 16.53 |
| Action-Camera Parking | ViT | Revising deep learning methods in parking lot occupancy detection | 2023-06-07 | https://arxiv.org/abs/2306.04288v3 | https://github.com/eighonet/parking-research | In the paper 'Revising deep learning methods in parking lot occupancy detection', what F1 score did the ViT model get on the Action-Camera Parking dataset | 0.8152 |
| READ 2016 | HTR-VT (line-level) | HTR-VT: Handwritten Text Recognition with Vision Transformer | 2024-09-13 | https://arxiv.org/abs/2409.08573v1 | https://github.com/yutingli0606/htr-vt | In the paper 'HTR-VT: Handwritten Text Recognition with Vision Transformer', what CER (%) score did the HTR-VT (line-level) model get on the READ 2016 dataset | 3.9 |
| PASCAL Context-59 | HyperSeg | HyperSeg: Towards Universal Visual Segmentation with Large Language Model | 2024-11-26 | https://arxiv.org/abs/2411.17606v2 | https://github.com/congvvc/HyperSeg | In the paper 'HyperSeg: Towards Universal Visual Segmentation with Large Language Model', what mIoU score did the HyperSeg model get on the PASCAL Context-59 dataset | 64.6 |
| MATH | ToRA-Code 7B (w/ code) | ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving | 2023-09-29 | https://arxiv.org/abs/2309.17452v4 | https://github.com/microsoft/tora | In the paper 'ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving', what Accuracy score did the ToRA-Code 7B (w/ code) model get on the MATH dataset | 44.6 |
| JF17K | HAHE | HAHE: Hierarchical Attention for Hyper-Relational Knowledge Graphs in Global and Local Level | 2023-05-11 | https://arxiv.org/abs/2305.06588v2 | https://github.com/lhrlab/hahe | In the paper 'HAHE: Hierarchical Attention for Hyper-Relational Knowledge Graphs in Global and Local Level', what MRR score did the HAHE model get on the JF17K dataset | 0.623 |
| Potsdam-3 | PriMaPs-EM+HP (DINO ViT-B/8) | Boosting Unsupervised Semantic Segmentation with Principal Mask Proposals | 2024-04-25 | https://arxiv.org/abs/2404.16818v2 | https://github.com/visinf/primaps | In the paper 'Boosting Unsupervised Semantic Segmentation with Principal Mask Proposals', what Accuracy score did the PriMaPs-EM+HP (DINO ViT-B/8) model get on the Potsdam-3 dataset | 83.3 |
| CIFAR-100 | DPAC | Deep Online Probability Aggregation Clustering | 2024-07-07 | https://arxiv.org/abs/2407.05246v2 | https://github.com/aomandechenai/deep-probability-aggregation-clustering | In the paper 'Deep Online Probability Aggregation Clustering', what Accuracy score did the DPAC model get on the CIFAR-100 dataset | 0.555 |
| DocVQA test | DocFormerv2-large | DocFormerv2: Local Features for Document Understanding | 2023-06-02 | https://arxiv.org/abs/2306.01733v1 | https://github.com/uakarsh/docformerv2 | In the paper 'DocFormerv2: Local Features for Document Understanding', what ANLS score did the DocFormerv2-large model get on the DocVQA test dataset | 0.8784 |
| REBUS | GPT-4V | REBUS: A Robust Evaluation Benchmark of Understanding Symbols | 2024-01-11 | https://arxiv.org/abs/2401.05604v2 | https://github.com/cvndsh/rebus | In the paper 'REBUS: A Robust Evaluation Benchmark of Understanding Symbols', what Accuracy score did the GPT-4V model get on the REBUS dataset | 24.0 |
| GSM8K | DART-Math-Mistral-7B-Uniform (0-shot CoT, w/o code) | DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving | 2024-06-18 | https://arxiv.org/abs/2407.13690v1 | https://github.com/hkust-nlp/dart-math | In the paper 'DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving', what Accuracy score did the DART-Math-Mistral-7B-Uniform (0-shot CoT, w/o code) model get on the GSM8K dataset | 82.6 |
| AFHQ Cat | BOSS | Bellman Optimal Stepsize Straightening of Flow-Matching Models | 2023-12-27 | https://arxiv.org/abs/2312.16414v3 | https://github.com/nguyenngocbaocmt02/boss | In the paper 'Bellman Optimal Stepsize Straightening of Flow-Matching Models', what clean-FID score did the BOSS model get on the AFHQ Cat dataset | 22.2 |
| RefCOCO testB | MaskRIS (Swin-B) | MaskRIS: Semantic Distortion-aware Data Augmentation for Referring Image Segmentation | 2024-11-28 | https://arxiv.org/abs/2411.19067v1 | https://github.com/naver-ai/maskris | In the paper 'MaskRIS: Semantic Distortion-aware Data Augmentation for Referring Image Segmentation', what Overall IoU score did the MaskRIS (Swin-B) model get on the RefCOCO testB dataset | 73.96 |
| CACD | ResNet-50-SORD | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10 | https://arxiv.org/abs/2307.04570v3 | https://github.com/paplhjak/facial-age-estimation-benchmark | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-SORD model get on the CACD dataset | 3.96 |
| CAMELYON16 | Snuffy (MAE Adapter) | Snuffy: Efficient Whole Slide Image Classifier | 2024-08-15 | https://arxiv.org/abs/2408.08258v2 | https://github.com/jafarinia/snuffy | In the paper 'Snuffy: Efficient Whole Slide Image Classifier', what AUC score did the Snuffy (MAE Adapter) model get on the CAMELYON16 dataset | 0.910 |
Assembly101 | ISTA-Net | Interactive Spatiotemporal Token Attention Network for Skeleton-based General Interactive Action Recognition | 2023-07-14T00:00:00 | https://arxiv.org/abs/2307.07469v1 | [
"https://github.com/Necolizer/ISTA-Net"
] | In the paper 'Interactive Spatiotemporal Token Attention Network for Skeleton-based General Interactive Action Recognition', what Actions Top-1 score did the ISTA-Net model get on the Assembly101 dataset
| 28.07 |
ImageNet-A | Discrete Adversarial Distillation (ResNet-50) | Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models | 2023-11-02T00:00:00 | https://arxiv.org/abs/2311.01441v2 | [
"https://github.com/lapisrocks/DiscreteAdversarialDistillation"
] | In the paper 'Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models', what Top-1 accuracy % score did the Discrete Adversarial Distillation (ResNet-50) model get on the ImageNet-A dataset
| 7.7 |
CHILI-3K | GraphSAGE | CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning | 2024-02-20T00:00:00 | https://arxiv.org/abs/2402.13221v2 | [
"https://github.com/UlrikFriisJensen/CHILI"
] | In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what F1-score (Weighted) score did the GraphSAGE model get on the CHILI-3K dataset
| 0.491 +/- 0.004 |
CiteSeer with Public Split: fixed 20 nodes per class | OGC | From Cluster Assumption to Graph Convolution: Graph-based Semi-Supervised Learning Revisited | 2023-09-24T00:00:00 | https://arxiv.org/abs/2309.13599v2 | [
"https://github.com/zhengwang100/ogc_ggcm"
] | In the paper 'From Cluster Assumption to Graph Convolution: Graph-based Semi-Supervised Learning Revisited', what Accuracy score did the OGC model get on the CiteSeer with Public Split: fixed 20 nodes per class dataset
| 77.5 |
PASCAL-5i (1-Shot) | MSDNet (ResNet-50) | MSDNet: Multi-Scale Decoder for Few-Shot Semantic Segmentation via Transformer-Guided Prototyping | 2024-09-17T00:00:00 | https://arxiv.org/abs/2409.11316v1 | [
"https://github.com/amirrezafateh/msdnet"
] | In the paper 'MSDNet: Multi-Scale Decoder for Few-Shot Semantic Segmentation via Transformer-Guided Prototyping', what Mean IoU score did the MSDNet (ResNet-50) model get on the PASCAL-5i (1-Shot) dataset
| 64.3 |
PASCAL-5i (1-Shot) | GF-SAM | Bridge the Points: Graph-based Few-shot Segment Anything Semantically | 2024-10-09T00:00:00 | https://arxiv.org/abs/2410.06964v2 | [
"https://github.com/ANDYZAQ/GF-SAM"
] | In the paper 'Bridge the Points: Graph-based Few-shot Segment Anything Semantically', what Mean IoU score did the GF-SAM model get on the PASCAL-5i (1-Shot) dataset
| 72.1 |
MPI-INF-3DHP | STAF | STAF: 3D Human Mesh Recovery from Video with Spatio-Temporal Alignment Fusion | 2024-01-03T00:00:00 | https://arxiv.org/abs/2401.01730v1 | [
"https://github.com/yw0208/STAF"
] | In the paper 'STAF: 3D Human Mesh Recovery from Video with Spatio-Temporal Alignment Fusion', what MPJPE score did the STAF model get on the MPI-INF-3DHP dataset
| 92.4 |
Common Objects in 3D | NU-MCC | NU-MCC: Multiview Compressive Coding with Neighborhood Decoder and Repulsive UDF | 2023-07-18T00:00:00 | https://arxiv.org/abs/2307.09112v2 | [
"https://github.com/sail-sg/numcc"
] | In the paper 'NU-MCC: Multiview Compressive Coding with Neighborhood Decoder and Repulsive UDF', what Avg. F1 score did the NU-MCC model get on the Common Objects in 3D dataset
| 83.8 |
PeMS08 | STAEformer | STAEformer: Spatio-Temporal Adaptive Embedding Makes Vanilla Transformer SOTA for Traffic Forecasting | 2023-08-21T00:00:00 | https://arxiv.org/abs/2308.10425v5 | [
"https://github.com/xdzhelheim/staeformer"
] | In the paper 'STAEformer: Spatio-Temporal Adaptive Embedding Makes Vanilla Transformer SOTA for Traffic Forecasting', what MAE@1h score did the STAEformer model get on the PeMS08 dataset
| 13.46 |
WHU-CD | HANet | HANet: A Hierarchical Attention Network for Change Detection With Bitemporal Very-High-Resolution Remote Sensing Images | 2024-04-14T00:00:00 | https://arxiv.org/abs/2404.09178v1 | [
"https://github.com/chengxihan/hanet-cd"
] | In the paper 'HANet: A Hierarchical Attention Network for Change Detection With Bitemporal Very-High-Resolution Remote Sensing Images', what F1 score did the HANet model get on the WHU-CD dataset
| 88.16 |
BIG-bench (Penguins In A Table) | PaLM 2 (few-shot, k=3, CoT) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | [
"https://github.com/eternityyw/tram-benchmark"
] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2 (few-shot, k=3, CoT) model get on the BIG-bench (Penguins In A Table) dataset
| 84.9 |
Flowers-102 | TURTLE (CLIP + DINOv2) | Let Go of Your Labels with Unsupervised Transfer | 2024-06-11T00:00:00 | https://arxiv.org/abs/2406.07236v1 | [
"https://github.com/mlbio-epfl/turtle"
] | In the paper 'Let Go of Your Labels with Unsupervised Transfer', what Accuracy score did the TURTLE (CLIP + DINOv2) model get on the Flowers-102 dataset
| 99.6 |
SMAC 6h_vs_9z | QMIX | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04T00:00:00 | https://arxiv.org/abs/2306.02430v1 | [
"https://github.com/j3soon/dfac-extended"
] | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the QMIX model get on the SMAC 6h_vs_9z dataset
| 1.14 |
CropHarvest - Global | Input Fusion | Impact Assessment of Missing Data in Model Predictions for Earth Observation Applications | 2024-03-21T00:00:00 | https://arxiv.org/abs/2403.14297v2 | [
"https://github.com/fmenat/missingviews-study-eo"
] | In the paper 'Impact Assessment of Missing Data in Model Predictions for Earth Observation Applications', what Average Accuracy score did the Input Fusion model get on the CropHarvest - Global dataset
| 0.847 |
Mapillary val | Resnet50 | MRFP: Learning Generalizable Semantic Segmentation from Sim-2-Real with Multi-Resolution Feature Perturbation | 2023-11-30T00:00:00 | https://arxiv.org/abs/2311.18331v2 | [
"https://github.com/airl-iisc/MRFP"
] | In the paper 'MRFP: Learning Generalizable Semantic Segmentation from Sim-2-Real with Multi-Resolution Feature Perturbation', what mIoU score did the Resnet50 model get on the Mapillary val dataset
| 32.93 |
RefCoCo val | EVF-SAM | EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model | 2024-06-28T00:00:00 | https://arxiv.org/abs/2406.20076v4 | [
"https://github.com/hustvl/evf-sam"
] | In the paper 'EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model', what Overall IoU score did the EVF-SAM model get on the RefCoCo val dataset
| 82.1 |
QVHighlights | UniVTG | UniVTG: Towards Unified Video-Language Temporal Grounding | 2023-07-31T00:00:00 | https://arxiv.org/abs/2307.16715v2 | [
"https://github.com/showlab/univtg"
] | In the paper 'UniVTG: Towards Unified Video-Language Temporal Grounding', what mAP score did the UniVTG model get on the QVHighlights dataset
| 35.47 |
Stanford Cars | ResNet-50 | PCNN: Probable-Class Nearest-Neighbor Explanations Improve Fine-Grained Image Classification Accuracy for AIs and Humans | 2023-08-25T00:00:00 | https://arxiv.org/abs/2308.13651v5 | [
"https://github.com/giangnguyen2412/PCNN-src-code-TMRL2024"
] | In the paper 'PCNN: Probable-Class Nearest-Neighbor Explanations Improve Fine-Grained Image Classification Accuracy for AIs and Humans', what Accuracy score did the ResNet-50 model get on the Stanford Cars dataset
| 91.06% |
H3WB | 3D-LFM | 3D-LFM: Lifting Foundation Model | 2023-12-19T00:00:00 | https://arxiv.org/abs/2312.11894v2 | [
"https://github.com/mosamdabhi/3dlfm"
] | In the paper '3D-LFM: Lifting Foundation Model', what Average MPJPE (mm) score did the 3D-LFM model get on the H3WB dataset
| 28.22 |