| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
| PEMS-BAY | STAEformer | STAEformer: Spatio-Temporal Adaptive Embedding Makes Vanilla Transformer SOTA for Traffic Forecasting | 2023-08-21T00:00:00 | https://arxiv.org/abs/2308.10425v5 | ["https://github.com/xdzhelheim/staeformer"] | In the paper 'STAEformer: Spatio-Temporal Adaptive Embedding Makes Vanilla Transformer SOTA for Traffic Forecasting', what MAE @ 12 step score did the STAEformer model get on the PEMS-BAY dataset | 1.91 |
| MSCOCO | SIA-OVD (RN50) | SIA-OVD: Shape-Invariant Adapter for Bridging the Image-Region Gap in Open-Vocabulary Detection | 2024-10-08T00:00:00 | https://arxiv.org/abs/2410.05650v1 | ["https://github.com/pku-icst-mipl/sia-ovd_acmmm2024"] | In the paper 'SIA-OVD: Shape-Invariant Adapter for Bridging the Image-Region Gap in Open-Vocabulary Detection', what AP 0.5 score did the SIA-OVD (RN50) model get on the MSCOCO dataset | 35.5 |
| ImageNet 256x256 | MaskBit | MaskBit: Embedding-free Image Generation via Bit Tokens | 2024-09-24T00:00:00 | https://arxiv.org/abs/2409.16211v2 | ["https://github.com/markweberdev/maskbit"] | In the paper 'MaskBit: Embedding-free Image Generation via Bit Tokens', what FID score did the MaskBit model get on the ImageNet 256x256 dataset | 1.52 |
| COIN | MA-LMM | MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding | 2024-04-08T00:00:00 | https://arxiv.org/abs/2404.05726v2 | ["https://github.com/boheumd/MA-LMM"] | In the paper 'MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding', what Accuracy (%) score did the MA-LMM model get on the COIN dataset | 93.2 |
| Nardo-Air R | AnyLoc-VLAD-DINO | AnyLoc: Towards Universal Visual Place Recognition | 2023-08-01T00:00:00 | https://arxiv.org/abs/2308.00688v2 | ["https://github.com/AnyLoc/AnyLoc"] | In the paper 'AnyLoc: Towards Universal Visual Place Recognition', what Recall@1 score did the AnyLoc-VLAD-DINO model get on the Nardo-Air R dataset | 94.37 |
| BoolQ | OPT-1.3B | Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization | 2024-05-24T00:00:00 | https://arxiv.org/abs/2405.15861v3 | ["https://github.com/ZidongLiu/DeComFL"] | In the paper 'Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization', what Test Accuracy score did the OPT-1.3B model get on the BoolQ dataset | 62.5% |
| FGVC-Aircraft | HPT | Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06323v1 | ["https://github.com/vill-lab/2024-aaai-hpt"] | In the paper 'Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models', what Harmonic mean score did the HPT model get on the FGVC-Aircraft dataset | 40.28 |
| TAO | GLEE-Pro | General Object Foundation Model for Images and Videos at Scale | 2023-12-14T00:00:00 | https://arxiv.org/abs/2312.09158v1 | ["https://github.com/FoundationVision/GLEE"] | In the paper 'General Object Foundation Model for Images and Videos at Scale', what TETA score did the GLEE-Pro model get on the TAO dataset | 47.2 |
| Peptides-struct | BoP | From Primes to Paths: Enabling Fast Multi-Relational Graph Analysis | 2024-11-17T00:00:00 | https://arxiv.org/abs/2411.11149v1 | ["https://github.com/kbogas/PAM_BoP"] | In the paper 'From Primes to Paths: Enabling Fast Multi-Relational Graph Analysis', what MAE score did the BoP model get on the Peptides-struct dataset | 0.25 |
| ASDiv-A | MMOS-DeepSeekMath-7B(0-shot) | An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning | 2024-02-23T00:00:00 | https://arxiv.org/abs/2403.00799v1 | ["https://github.com/cyzhh/MMOS"] | In the paper 'An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning', what Execution Accuracy score did the MMOS-DeepSeekMath-7B(0-shot) model get on the ASDiv-A dataset | 87.6 |
| PACS | WAKD (DeiT-Ti) | Weight Averaging Improves Knowledge Distillation under Domain Shift | 2023-09-20T00:00:00 | https://arxiv.org/abs/2309.11446v1 | ["https://github.com/vorobeevich/distillation-in-dg"] | In the paper 'Weight Averaging Improves Knowledge Distillation under Domain Shift', what Average Accuracy score did the WAKD (DeiT-Ti) model get on the PACS dataset | 87.6 |
| ImageNet - 10% labeled data | SimMatch + EPASS (ResNet-50) | Debiasing, calibrating, and improving Semi-supervised Learning performance via simple Ensemble Projector | 2023-10-24T00:00:00 | https://arxiv.org/abs/2310.15764v1 | ["https://github.com/beandkay/epass"] | In the paper 'Debiasing, calibrating, and improving Semi-supervised Learning performance via simple Ensemble Projector', what Top 5 Accuracy score did the SimMatch + EPASS (ResNet-50) model get on the ImageNet - 10% labeled data dataset | 92.6 |
| Sky Time-lapse | DDMI | DDMI: Domain-Agnostic Latent Diffusion Models for Synthesizing High-Quality Implicit Neural Representations | 2024-01-23T00:00:00 | https://arxiv.org/abs/2401.12517v2 | ["https://github.com/mlvlab/DDMI"] | In the paper 'DDMI: Domain-Agnostic Latent Diffusion Models for Synthesizing High-Quality Implicit Neural Representations', what FVD 16 score did the DDMI model get on the Sky Time-lapse dataset | 66.25 |
| NYU Depth v2 | GeminiFusion (MiT-B5) | GeminiFusion: Efficient Pixel-wise Multimodal Fusion for Vision Transformer | 2024-06-03T00:00:00 | https://arxiv.org/abs/2406.01210v2 | ["https://github.com/jiadingcn/geminifusion"] | In the paper 'GeminiFusion: Efficient Pixel-wise Multimodal Fusion for Vision Transformer', what Mean IoU score did the GeminiFusion (MiT-B5) model get on the NYU Depth v2 dataset | 57.7 |
| IC19-Art | CLIP4STR-B | CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model | 2023-05-23T00:00:00 | https://arxiv.org/abs/2305.14014v3 | ["https://github.com/VamosC/CLIP4STR"] | In the paper 'CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model', what Accuracy (%) score did the CLIP4STR-B model get on the IC19-Art dataset | 85.8 |
| MVTec LOCO AD | PUAD-M | PUAD: Frustratingly Simple Method for Robust Anomaly Detection | 2024-02-23T00:00:00 | https://arxiv.org/abs/2402.15143v1 | ["https://github.com/LeapMind/PUAD"] | In the paper 'PUAD: Frustratingly Simple Method for Robust Anomaly Detection', what Avg. Detection AUROC score did the PUAD-M model get on the MVTec LOCO AD dataset | 94.4 |
| DAVIS-585 | ViT-B+MST+CL | MST: Adaptive Multi-Scale Tokens Guided Interactive Segmentation | 2024-01-09T00:00:00 | https://arxiv.org/abs/2401.04403v2 | ["https://github.com/hahamyt/mst"] | In the paper 'MST: Adaptive Multi-Scale Tokens Guided Interactive Segmentation', what NoC@90 score did the ViT-B+MST+CL model get on the DAVIS-585 dataset | 2.29 |
| HateXplain | Space-XLNet | Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs | 2024-01-30T00:00:00 | https://arxiv.org/abs/2401.16638v1 | ["https://github.com/stepantita/space-model"] | In the paper 'Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs', what Accuracy (2 classes) score did the Space-XLNet model get on the HateXplain dataset | 0.8798 |
| ImageNet-100 | SparseSwin with L2 | SparseSwin: Swin Transformer with Sparse Transformer Block | 2023-09-11T00:00:00 | https://arxiv.org/abs/2309.05224v1 | ["https://github.com/krisnapinasthika/sparseswin"] | In the paper 'SparseSwin: Swin Transformer with Sparse Transformer Block', what Percentage correct score did the SparseSwin with L2 model get on the ImageNet-100 dataset | 86.96 |
| CIRR | SPRC | Sentence-level Prompts Benefit Composed Image Retrieval | 2023-10-09T00:00:00 | https://arxiv.org/abs/2310.05473v1 | ["https://github.com/chunmeifeng/sprc"] | In the paper 'Sentence-level Prompts Benefit Composed Image Retrieval', what (Recall@5+Recall_subset@1)/2 score did the SPRC model get on the CIRR dataset | 82.6 |
| GQA test-dev | HYDRA | HYDRA: A Hyper Agent for Dynamic Compositional Visual Reasoning | 2024-03-19T00:00:00 | https://arxiv.org/abs/2403.12884v2 | ["https://github.com/ControlNet/HYDRA"] | In the paper 'HYDRA: A Hyper Agent for Dynamic Compositional Visual Reasoning', what Accuracy score did the HYDRA model get on the GQA test-dev dataset | 47.9 |
| SUN-RGBD val | Point-GCC+TR3D | Point-GCC: Universal Self-supervised 3D Scene Pre-training via Geometry-Color Contrast | 2023-05-31T00:00:00 | https://arxiv.org/abs/2305.19623v2 | ["https://github.com/asterisci/point-gcc"] | In the paper 'Point-GCC: Universal Self-supervised 3D Scene Pre-training via Geometry-Color Contrast', what mAP@0.25 score did the Point-GCC+TR3D model get on the SUN-RGBD val dataset | 67.7 |
| Urban100 - 2x upscaling | HMA† | HMANet: Hybrid Multi-Axis Aggregation Network for Image Super-Resolution | 2024-05-08T00:00:00 | https://arxiv.org/abs/2405.05001v1 | ["https://github.com/korouuuuu/hma"] | In the paper 'HMANet: Hybrid Multi-Axis Aggregation Network for Image Super-Resolution', what PSNR score did the HMA† model get on the Urban100 - 2x upscaling dataset | 35.24 |
| ADE20K-150 | MAFT+ | Collaborative Vision-Text Representation Optimizing for Open-Vocabulary Segmentation | 2024-08-01T00:00:00 | https://arxiv.org/abs/2408.00744v2 | ["https://github.com/jiaosiyu1999/MAFT-Plus"] | In the paper 'Collaborative Vision-Text Representation Optimizing for Open-Vocabulary Segmentation', what mIoU score did the MAFT+ model get on the ADE20K-150 dataset | 36.1 |
| ChestX-ray14 | SynthEnsemble | SynthEnsemble: A Fusion of CNN, Vision Transformer, and Hybrid Models for Multi-Label Chest X-Ray Classification | 2023-11-13T00:00:00 | https://arxiv.org/abs/2311.07750v3 | ["https://github.com/syednabilashraf/SynthEnsemble"] | In the paper 'SynthEnsemble: A Fusion of CNN, Vision Transformer, and Hybrid Models for Multi-Label Chest X-Ray Classification', what Average AUC on 14 label score did the SynthEnsemble model get on the ChestX-ray14 dataset | 85.433 |
| Deformable Plate | HCMT | Learning Flexible Body Collision Dynamics with Hierarchical Contact Mesh Transformer | 2023-12-19T00:00:00 | https://arxiv.org/abs/2312.12467v3 | ["https://github.com/yuyudeep/hcmt"] | In the paper 'Learning Flexible Body Collision Dynamics with Hierarchical Contact Mesh Transformer', what Rollout RMSE-all [1e3] Position score did the HCMT model get on the Deformable Plate dataset | 7.67±0.42 |
| UTKFace | ResNet-50-Cross-Entropy | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10T00:00:00 | https://arxiv.org/abs/2307.04570v3 | ["https://github.com/paplhjak/facial-age-estimation-benchmark"] | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-Cross-Entropy model get on the UTKFace dataset | 4.38 |
| SMAC corridor | DPLEX | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04T00:00:00 | https://arxiv.org/abs/2306.02430v1 | ["https://github.com/j3soon/dfac-extended"] | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the DPLEX model get on the SMAC corridor dataset | 81.25 |
| Clipart1k | CDDMSL | Semi-Supervised Domain Generalization for Object Detection via Language-Guided Feature Alignment | 2023-09-24T00:00:00 | https://arxiv.org/abs/2309.13525v1 | ["https://github.com/sinamalakouti/CDDMSL"] | In the paper 'Semi-Supervised Domain Generalization for Object Detection via Language-Guided Feature Alignment', what MAP score did the CDDMSL model get on the Clipart1k dataset | 39.8 |
| LaSOT-ext | LoRAT-L-378 | Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance | 2024-03-08T00:00:00 | https://arxiv.org/abs/2403.05231v2 | ["https://github.com/litinglin/lorat"] | In the paper 'Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance', what AUC score did the LoRAT-L-378 model get on the LaSOT-ext dataset | 56.6 |
| Replica | LabelMaker | LABELMAKER: Automatic Semantic Label Generation from RGB-D Trajectories | 2023-11-20T00:00:00 | https://arxiv.org/abs/2311.12174v1 | ["https://github.com/cvg/labelmaker"] | In the paper 'LABELMAKER: Automatic Semantic Label Generation from RGB-D Trajectories', what mIoU score did the LabelMaker model get on the Replica dataset | 42.1 |
| IIIT5k | CPPD | Context Perception Parallel Decoder for Scene Text Recognition | 2023-07-23T00:00:00 | https://arxiv.org/abs/2307.12270v2 | ["https://github.com/PaddlePaddle/PaddleOCR"] | In the paper 'Context Perception Parallel Decoder for Scene Text Recognition', what Accuracy score did the CPPD model get on the IIIT5k dataset | 99.3 |
| Charades-Ego | EgoVLPv2 | EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone | 2023-07-11T00:00:00 | https://arxiv.org/abs/2307.05463v2 | ["https://github.com/facebookresearch/EgoVLPv2"] | In the paper 'EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone', what mAP score did the EgoVLPv2 model get on the Charades-Ego dataset | 34.1 |
| PROTEINS | GIN + PANDA | PANDA: Expanded Width-Aware Message Passing Beyond Rewiring | 2024-06-06T00:00:00 | https://arxiv.org/abs/2406.03671v2 | ["https://github.com/jeongwhanchoi/panda"] | In the paper 'PANDA: Expanded Width-Aware Message Passing Beyond Rewiring', what Accuracy score did the GIN + PANDA model get on the PROTEINS dataset | 75.759 |
| Oracle-MNIST | ResNet-18 + Vision Eagle Attention | Vision Eagle Attention: a new lens for advancing image classification | 2024-11-15T00:00:00 | https://arxiv.org/abs/2411.10564v2 | ["https://github.com/MahmudulHasan11085/Vision-Eagle-Attention"] | In the paper 'Vision Eagle Attention: a new lens for advancing image classification', what Accuracy score did the ResNet-18 + Vision Eagle Attention model get on the Oracle-MNIST dataset | 97.20 |
| The Pile | Phi-3 7B | Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs | 2024-10-10T00:00:00 | https://arxiv.org/abs/2410.08020v2 | ["https://github.com/jonhue/activeft"] | In the paper 'Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs', what Bits per byte score did the Phi-3 7B model get on the The Pile dataset | 0.678 |
| Electricity (96) | DiPE-Linear | Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting | 2024-11-26T00:00:00 | https://arxiv.org/abs/2411.17257v1 | ["https://github.com/wintertee/dipe-linear"] | In the paper 'Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting', what MSE score did the DiPE-Linear model get on the Electricity (96) dataset | 0.132 |
| Electricity (336) | TSMixer | TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting | 2023-06-14T00:00:00 | https://arxiv.org/abs/2306.09364v4 | ["https://github.com/ibm/tsfm"] | In the paper 'TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting', what MSE score did the TSMixer model get on the Electricity (336) dataset | 0.158 |
| Vinoground | InternLM-XC-2.5 (CoT) | InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output | 2024-07-03T00:00:00 | https://arxiv.org/abs/2407.03320v1 | ["https://github.com/internlm/internlm-xcomposer"] | In the paper 'InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output', what Text Score score did the InternLM-XC-2.5 (CoT) model get on the Vinoground dataset | 30.8 |
| PATTERN | GPTrans-Nano | Graph Propagation Transformer for Graph Representation Learning | 2023-05-19T00:00:00 | https://arxiv.org/abs/2305.11424v3 | ["https://github.com/czczup/gptrans"] | In the paper 'Graph Propagation Transformer for Graph Representation Learning', what Accuracy score did the GPTrans-Nano model get on the PATTERN dataset | 86.734±0.008 |
| LVIS v1.0 val | DiverGen (Swin-L) | DiverGen: Improving Instance Segmentation by Learning Wider Data Distribution with More Diverse Generative Data | 2024-05-16T00:00:00 | https://arxiv.org/abs/2405.10185v1 | ["https://github.com/aim-uofa/DiverGen"] | In the paper 'DiverGen: Improving Instance Segmentation by Learning Wider Data Distribution with More Diverse Generative Data', what mask AP score did the DiverGen (Swin-L) model get on the LVIS v1.0 val dataset | 45.5 |
| NYU Depth v2 | GeminiFusion (Swin-Large) | GeminiFusion: Efficient Pixel-wise Multimodal Fusion for Vision Transformer | 2024-06-03T00:00:00 | https://arxiv.org/abs/2406.01210v2 | ["https://github.com/jiadingcn/geminifusion"] | In the paper 'GeminiFusion: Efficient Pixel-wise Multimodal Fusion for Vision Transformer', what Mean IoU score did the GeminiFusion (Swin-Large) model get on the NYU Depth v2 dataset | 60.2 |
| QM9 | BayesAgg-MTL | Bayesian Uncertainty for Gradient Aggregation in Multi-Task Learning | 2024-02-06T00:00:00 | https://arxiv.org/abs/2402.04005v2 | ["https://github.com/ssi-research/bayesagg_mtl"] | In the paper 'Bayesian Uncertainty for Gradient Aggregation in Multi-Task Learning', what ∆m% score did the BayesAgg-MTL model get on the QM9 dataset | 53.7 |
| NeRF | Self-Organizing Gaussians | Compact 3D Scene Representation via Self-Organizing Gaussian Grids | 2023-12-19T00:00:00 | https://arxiv.org/abs/2312.13299v2 | ["https://github.com/fraunhoferhhi/Self-Organizing-Gaussians"] | In the paper 'Compact 3D Scene Representation via Self-Organizing Gaussian Grids', what PSNR score did the Self-Organizing Gaussians model get on the NeRF dataset | 33.7 |
| CACD | ResNet-50-Regression | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10T00:00:00 | https://arxiv.org/abs/2307.04570v3 | ["https://github.com/paplhjak/facial-age-estimation-benchmark"] | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-Regression model get on the CACD dataset | 4.06 |
| YouTube-VIS validation | GLEE-Pro | General Object Foundation Model for Images and Videos at Scale | 2023-12-14T00:00:00 | https://arxiv.org/abs/2312.09158v1 | ["https://github.com/FoundationVision/GLEE"] | In the paper 'General Object Foundation Model for Images and Videos at Scale', what mask AP score did the GLEE-Pro model get on the YouTube-VIS validation dataset | 67.4 |
| VDD | UperNet(Swin-L) | VDD: Varied Drone Dataset for Semantic Segmentation | 2023-05-23T00:00:00 | https://arxiv.org/abs/2305.13608v3 | ["https://github.com/RussRobin/VDD"] | In the paper 'VDD: Varied Drone Dataset for Semantic Segmentation', what mIoU score did the UperNet(Swin-L) model get on the VDD dataset | 85.63 |
| EgoExoLearn | RAAN+TL | EgoExoLearn: A Dataset for Bridging Asynchronous Ego- and Exo-centric View of Procedural Activities in Real World | 2024-03-24T00:00:00 | https://arxiv.org/abs/2403.16182v2 | ["https://github.com/opengvlab/egoexolearn"] | In the paper 'EgoExoLearn: A Dataset for Bridging Asynchronous Ego- and Exo-centric View of Procedural Activities in Real World', what Accuracy score did the RAAN+TL model get on the EgoExoLearn dataset | 79.875 |
| GSM8K | OpenMath-Llama2-70B (w/ code) | OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | 2024-02-15T00:00:00 | https://arxiv.org/abs/2402.10176v2 | ["https://github.com/kipok/nemo-skills"] | In the paper 'OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset', what Accuracy score did the OpenMath-Llama2-70B (w/ code) model get on the GSM8K dataset | 84.7 |
| ImageNet | STViT-Swin-Ti | Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09372v3 | ["https://github.com/tobna/whattransformertofavor"] | In the paper 'Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers', what Top 1 Accuracy score did the STViT-Swin-Ti model get on the ImageNet dataset | 82.22% |
| BIG-bench (Disambiguation QA) | PaLM 2 (few-shot, k=3, Direct) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2 (few-shot, k=3, Direct) model get on the BIG-bench (Disambiguation QA) dataset | 78.8 |
| GTA5 to Cityscapes | HALO | Hyperbolic Active Learning for Semantic Segmentation under Domain Shift | 2023-06-19T00:00:00 | https://arxiv.org/abs/2306.11180v5 | ["https://github.com/paolomandica/HALO"] | In the paper 'Hyperbolic Active Learning for Semantic Segmentation under Domain Shift', what mIoU score did the HALO model get on the GTA5 to Cityscapes dataset | 73.3 |
| CoNLL 2003 (English) | PromptNER [BERT-large] | PromptNER: Prompt Locating and Typing for Named Entity Recognition | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.17104v1 | ["https://github.com/tricktreat/promptner"] | In the paper 'PromptNER: Prompt Locating and Typing for Named Entity Recognition', what F1 score did the PromptNER [BERT-large] model get on the CoNLL 2003 (English) dataset | 92.41 |
| MS COCO | RAPHAEL (zero-shot) | RAPHAEL: Text-to-Image Generation via Large Mixture of Diffusion Paths | 2023-05-29T00:00:00 | https://arxiv.org/abs/2305.18295v5 | ["https://github.com/lucidrains/soft-moe-pytorch"] | In the paper 'RAPHAEL: Text-to-Image Generation via Large Mixture of Diffusion Paths', what FID score did the RAPHAEL (zero-shot) model get on the MS COCO dataset | 6.61 |
| RefCOCO+ val | Florence-2-large-ft | Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks | 2023-11-10T00:00:00 | https://arxiv.org/abs/2311.06242v1 | ["https://github.com/retkowsky/florence-2"] | In the paper 'Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks', what Accuracy (%) score did the Florence-2-large-ft model get on the RefCOCO+ val dataset | 93.4 |
| IMDb-M | Graph-JEPA | Graph-level Representation Learning with Joint-Embedding Predictive Architectures | 2023-09-27T00:00:00 | https://arxiv.org/abs/2309.16014v2 | ["https://github.com/geriskenderi/graph-jepa"] | In the paper 'Graph-level Representation Learning with Joint-Embedding Predictive Architectures', what Accuracy score did the Graph-JEPA model get on the IMDb-M dataset | 50.69% |
| ETTh2 (96) Multivariate | MoLE-DLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06786v3 | ["https://github.com/rogerni/mole"] | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the ETTh2 (96) Multivariate dataset | 0.287 |
| ARC (Challenge) | PaLM 2-L (1-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-L (1-shot) model get on the ARC (Challenge) dataset | 69.2 |
| ActivityNet | COSA | COSA: Concatenated Sample Pretrained Vision-Language Foundation Model | 2023-06-15T00:00:00 | https://arxiv.org/abs/2306.09085v1 | ["https://github.com/txh-mercury/cosa"] | In the paper 'COSA: Concatenated Sample Pretrained Vision-Language Foundation Model', what text-to-video R@1 score did the COSA model get on the ActivityNet dataset | 67.3 |
| DAVIS 2017 (val) | HTR | Temporally Consistent Referring Video Object Segmentation with Hybrid Memory | 2024-03-28T00:00:00 | https://arxiv.org/abs/2403.19407v2 | ["https://github.com/bo-miao/HTR"] | In the paper 'Temporally Consistent Referring Video Object Segmentation with Hybrid Memory', what J&F 1st frame score did the HTR model get on the DAVIS 2017 (val) dataset | 65.6 |
| RST-DT | Bottom-up Llama 2 (70B) | Can we obtain significant success in RST discourse parsing by using Large Language Models? | 2024-03-08T00:00:00 | https://arxiv.org/abs/2403.05065v1 | ["https://github.com/nttcslab-nlp/rstparser_eacl24"] | In the paper 'Can we obtain significant success in RST discourse parsing by using Large Language Models?', what Standard Parseval (Span) score did the Bottom-up Llama 2 (70B) model get on the RST-DT dataset | 79.8 |
| PoseTrack2018 | 4DHumans + ViTDet | Humans in 4D: Reconstructing and Tracking Humans with Transformers | 2023-05-31T00:00:00 | https://arxiv.org/abs/2305.20091v3 | ["https://github.com/shubham-goel/4D-Humans"] | In the paper 'Humans in 4D: Reconstructing and Tracking Humans with Transformers', what MOTA score did the 4DHumans + ViTDet model get on the PoseTrack2018 dataset | 61.9 |
| ADE20K-847 | EBSeg-L | Open-Vocabulary Semantic Segmentation with Image Embedding Balancing | 2024-06-14T00:00:00 | https://arxiv.org/abs/2406.09829v1 | ["https://github.com/slonetime/ebseg"] | In the paper 'Open-Vocabulary Semantic Segmentation with Image Embedding Balancing', what mIoU score did the EBSeg-L model get on the ADE20K-847 dataset | 13.7 |
| HellaSwag | PaLM 2-M (1-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-M (1-shot) model get on the HellaSwag dataset | 86.7 |
| Weather (336) | MoLE-DLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06786v3 | ["https://github.com/rogerni/mole"] | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the Weather (336) dataset | 0.238 |
| TriviaQA | PaLM 2-M (one-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what EM score did the PaLM 2-M (one-shot) model get on the TriviaQA dataset | 81.7 |
| ImageNet | GTP-DeiT-S/P8 | GTP-ViT: Efficient Vision Transformers via Graph-based Token Propagation | 2023-11-06T00:00:00 | https://arxiv.org/abs/2311.03035v2 | ["https://github.com/ackesnal/gtp-vit"] | In the paper 'GTP-ViT: Efficient Vision Transformers via Graph-based Token Propagation', what Top 1 Accuracy score did the GTP-DeiT-S/P8 model get on the ImageNet dataset | 79.5% |
| BanglaBook | SVM (word 1-gram) | BanglaBook: A Large-scale Bangla Dataset for Sentiment Analysis from Book Reviews | 2023-05-11T00:00:00 | https://arxiv.org/abs/2305.06595v3 | ["https://github.com/mohsinulkabir14/banglabook"] | In the paper 'BanglaBook: A Large-scale Bangla Dataset for Sentiment Analysis from Book Reviews', what Weighted Average F1-score score did the SVM (word 1-gram) model get on the BanglaBook dataset | 0.8519 |
| CHILI-100K | GAT | CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning | 2024-02-20T00:00:00 | https://arxiv.org/abs/2402.13221v2 | ["https://github.com/UlrikFriisJensen/CHILI"] | In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what MSE score did the GAT model get on the CHILI-100K dataset | 0.252 +/- 0.003 |
| NTU RGB+D | IPP-Net (Parsing + Pose) | Integrating Human Parsing and Pose Network for Human Action Recognition | 2023-07-16T00:00:00 | https://arxiv.org/abs/2307.07977v1 | ["https://github.com/liujf69/ipp-net-parsing"] | In the paper 'Integrating Human Parsing and Pose Network for Human Action Recognition', what Accuracy (CS) score did the IPP-Net (Parsing + Pose) model get on the NTU RGB+D dataset | 93.8 |
| CALVIN | RoboUniView(Ours) | RoboUniView: Visual-Language Model with Unified View Representation for Robotic Manipulation | 2024-06-27T00:00:00 | https://arxiv.org/abs/2406.18977v3 | ["https://github.com/liufanfanlff/robouniview"] | In the paper 'RoboUniView: Visual-Language Model with Unified View Representation for Robotic Manipulation', what avg. sequence length (D to D) score did the RoboUniView(Ours) model get on the CALVIN dataset | 3.855 |
| COCO-20i (1-shot) | HDMNet (DifFSS, ResNet-50) | DifFSS: Diffusion Model for Few-Shot Semantic Segmentation | 2023-07-03T00:00:00 | https://arxiv.org/abs/2307.00773v3 | ["https://github.com/TrinitialChan/DifFSS"] | In the paper 'DifFSS: Diffusion Model for Few-Shot Semantic Segmentation', what Mean IoU score did the HDMNet (DifFSS, ResNet-50) model get on the COCO-20i (1-shot) dataset | 46.7 |
| imagenet-1k | BinaryViT | BinaryViT: Pushing Binary Vision Transformers Towards Convolutional Models | 2023-06-29T00:00:00 | https://arxiv.org/abs/2306.16678v1 | ["https://github.com/phuoc-hoan-le/binaryvit"] | In the paper 'BinaryViT: Pushing Binary Vision Transformers Towards Convolutional Models', what Top 1 Accuracy score did the BinaryViT model get on the imagenet-1k dataset | 70.6 |
| CAMO | BiRefNet | Bilateral Reference for High-Resolution Dichotomous Image Segmentation | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03407v6 | ["https://github.com/zhengpeng7/birefnet"] | In the paper 'Bilateral Reference for High-Resolution Dichotomous Image Segmentation', what MAE score did the BiRefNet model get on the CAMO dataset | 0.030 |
| Far-OOD | ISH (ResNet50) | Scaling for Training Time and Post-hoc Out-of-distribution Detection Enhancement | 2023-09-30T00:00:00 | https://arxiv.org/abs/2310.00227v1 | ["https://github.com/kai422/scale"] | In the paper 'Scaling for Training Time and Post-hoc Out-of-distribution Detection Enhancement', what FPR@95 score did the ISH (ResNet50) model get on the Far-OOD dataset | 15.62 |
| Caltech-101 | DePT | DePT: Decoupled Prompt Tuning | 2023-09-14T00:00:00 | https://arxiv.org/abs/2309.07439v2 | ["https://github.com/koorye/dept"] | In the paper 'DePT: Decoupled Prompt Tuning', what Harmonic mean score did the DePT model get on the Caltech-101 dataset | 96.28 |
| NCT-CRC-HE-100K | SAG-ViT | SAG-ViT: A Scale-Aware, High-Fidelity Patching Approach with Graph Attention for Vision Transformers | 2024-11-14T00:00:00 | https://arxiv.org/abs/2411.09420v2 | ["https://github.com/shravan-18/SAG-ViT"] | In the paper 'SAG-ViT: A Scale-Aware, High-Fidelity Patching Approach with Graph Attention for Vision Transformers', what F1 score did the SAG-ViT model get on the NCT-CRC-HE-100K dataset | 98.61 |
| GSM8K | MuggleMATH 70B | MuggleMath: Assessing the Impact of Query and Response Augmentation on Math Reasoning | 2023-10-09T00:00:00 | https://arxiv.org/abs/2310.05506v3 | ["https://github.com/ofa-sys/gsm8k-screl"] | In the paper 'MuggleMath: Assessing the Impact of Query and Response Augmentation on Math Reasoning', what Accuracy score did the MuggleMATH 70B model get on the GSM8K dataset | 82.3 |
| MM-Vet | LLaVA-v1.6 (7B, w/ STIC) | Enhancing Large Vision Language Models with Self-Training on Image Comprehension | 2024-05-30T00:00:00 | https://arxiv.org/abs/2405.19716v2 | ["https://github.com/yihedeng9/stic"] | In the paper 'Enhancing Large Vision Language Models with Self-Training on Image Comprehension', what GPT-4 score score did the LLaVA-v1.6 (7B, w/ STIC) model get on the MM-Vet dataset | 45.0 |
| Set5 - 2x upscaling | HMA† | HMANet: Hybrid Multi-Axis Aggregation Network for Image Super-Resolution | 2024-05-08T00:00:00 | https://arxiv.org/abs/2405.05001v1 | ["https://github.com/korouuuuu/hma"] | In the paper 'HMANet: Hybrid Multi-Axis Aggregation Network for Image Super-Resolution', what PSNR score did the HMA† model get on the Set5 - 2x upscaling dataset | 38.95 |
| HalfCheetah-v2 | TLA | Optimizing Attention and Cognitive Control Costs Using Temporally-Layered Architectures | 2023-05-30T00:00:00 | https://arxiv.org/abs/2305.18701v3 | ["https://github.com/dee0512/Temporally-Layered-Architecture"] | In the paper 'Optimizing Attention and Cognitive Control Costs Using Temporally-Layered Architectures', what Mean Reward score did the TLA model get on the HalfCheetah-v2 dataset | 9571.99 |
| USNA-Cn2 (short-duration) | Mean Window Forecast | Effective Benchmarks for Optical Turbulence Modeling | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03573v1 | ["https://github.com/cdjellen/otbench"] | In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the Mean Window Forecast model get on the USNA-Cn2 (short-duration) dataset | 0.182 |
| Rendered SST2 | TURTLE (CLIP + DINOv2) | Let Go of Your Labels with Unsupervised Transfer | 2024-06-11T00:00:00 | https://arxiv.org/abs/2406.07236v1 | ["https://github.com/mlbio-epfl/turtle"] | In the paper 'Let Go of Your Labels with Unsupervised Transfer', what Accuracy score did the TURTLE (CLIP + DINOv2) model get on the Rendered SST2 dataset | 51.6 |
| PASCAL Context-59 | EBSeg-L | Open-Vocabulary Semantic Segmentation with Image Embedding Balancing | 2024-06-14T00:00:00 | https://arxiv.org/abs/2406.09829v1 | ["https://github.com/slonetime/ebseg"] | In the paper 'Open-Vocabulary Semantic Segmentation with Image Embedding Balancing', what mIoU score did the EBSeg-L model get on the PASCAL Context-59 dataset | 60.2 |
| QVHighlights | BAM-DETR (w/ PT ASR Captions) | BAM-DETR: Boundary-Aligned Moment Detection Transformer for Temporal Sentence Grounding in Videos | 2023-11-30T00:00:00 | https://arxiv.org/abs/2312.00083v2 | ["https://github.com/Pilhyeon/BAM-DETR"] | In the paper 'BAM-DETR: Boundary-Aligned Moment Detection Transformer for Temporal Sentence Grounding in Videos', what mAP score did the BAM-DETR (w/ PT ASR Captions) model get on the QVHighlights dataset | |
| 46.67 |
Atari 2600 Chopper Command | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07T00:00:00 | https://arxiv.org/abs/2305.04180v3 | [
"https://github.com/xinjinghao/color"
] | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score did the ASL DDQN model get on the Atari 2600 Chopper Command dataset
| 15071 |
Set14 - 4x upscaling | DAT | Dual Aggregation Transformer for Image Super-Resolution | 2023-08-07T00:00:00 | https://arxiv.org/abs/2308.03364v2 | [
"https://github.com/zhengchen1999/dat"
] | In the paper 'Dual Aggregation Transformer for Image Super-Resolution', what PSNR score did the DAT model get on the Set14 - 4x upscaling dataset
| 29.23 |
ScanObjectNN | Mamba3D | Mamba3D: Enhancing Local Features for 3D Point Cloud Analysis via State Space Model | 2024-04-23T00:00:00 | https://arxiv.org/abs/2404.14966v2 | [
"https://github.com/xhanxu/Mamba3D"
] | In the paper 'Mamba3D: Enhancing Local Features for 3D Point Cloud Analysis via State Space Model', what Overall Accuracy score did the Mamba3D model get on the ScanObjectNN dataset
| 92.64 |
NTU RGB+D 120 | EPP-Net (Parsing + Pose) | Explore Human Parsing Modality for Action Recognition | 2024-01-04T00:00:00 | https://arxiv.org/abs/2401.02138v1 | [
"https://github.com/liujf69/EPP-Net-Action"
] | In the paper 'Explore Human Parsing Modality for Action Recognition', what Accuracy (Cross-Subject) score did the EPP-Net (Parsing + Pose) model get on the NTU RGB+D 120 dataset
| 91.1 |
Heartbeat | ConvTran | Improving Position Encoding of Transformers for Multivariate Time Series Classification | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.16642v1 | [
"https://github.com/navidfoumani/convtran"
] | In the paper 'Improving Position Encoding of Transformers for Multivariate Time Series Classification', what Accuracy score did the ConvTran model get on the Heartbeat dataset
| 0.7853 |
VoxCeleb | ReDimNet-B5-SF2-LM (9.2M) | Reshape Dimensions Network for Speaker Recognition | 2024-07-25T00:00:00 | https://arxiv.org/abs/2407.18223v2 | [
"https://github.com/IDRnD/ReDimNet"
] | In the paper 'Reshape Dimensions Network for Speaker Recognition', what EER score did the ReDimNet-B5-SF2-LM (9.2M) model get on the VoxCeleb dataset
| 0.43 |
RefCOCOg-val | MagNet | Mask Grounding for Referring Image Segmentation | 2023-12-19T00:00:00 | https://arxiv.org/abs/2312.12198v2 | [
"https://github.com/yxchng/mask-grounding"
] | In the paper 'Mask Grounding for Referring Image Segmentation', what Overall IoU score did the MagNet model get on the RefCOCOg-val dataset
| 65.36 |
SemEval 2014 Task 4 Subtask 1+2 | gpt-3.5 finetuned | Large language models for aspect-based sentiment analysis | 2023-10-27T00:00:00 | https://arxiv.org/abs/2310.18025v1 | [
"https://github.com/qagentur/absa_llm"
] | In the paper 'Large language models for aspect-based sentiment analysis', what F1 score did the gpt-3.5 finetuned model get on the SemEval 2014 Task 4 Subtask 1+2 dataset
| 83.76 |
ETTh1 (336) Multivariate | MOIRAISmall | Unified Training of Universal Time Series Forecasting Transformers | 2024-02-04T00:00:00 | https://arxiv.org/abs/2402.02592v2 | [
"https://github.com/SalesforceAIResearch/uni2ts"
] | In the paper 'Unified Training of Universal Time Series Forecasting Transformers', what MSE score did the MOIRAISmall model get on the ETTh1 (336) Multivariate dataset
| 0.412 |
MATH | MathCoder-CL-7B | MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning | 2023-10-05T00:00:00 | https://arxiv.org/abs/2310.03731v1 | [
"https://github.com/mathllm/mathcoder"
] | In the paper 'MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning', what Accuracy score did the MathCoder-CL-7B model get on the MATH dataset
| 30.2 |
DAVIS-S | BiRefNet (HRSOD, UHRSD) | Bilateral Reference for High-Resolution Dichotomous Image Segmentation | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03407v6 | [
"https://github.com/zhengpeng7/birefnet"
] | In the paper 'Bilateral Reference for High-Resolution Dichotomous Image Segmentation', what S-measure score did the BiRefNet (HRSOD, UHRSD) model get on the DAVIS-S dataset
| 0.976 |
CiteSeer with Public Split: fixed 20 nodes per class | GGCM | From Cluster Assumption to Graph Convolution: Graph-based Semi-Supervised Learning Revisited | 2023-09-24T00:00:00 | https://arxiv.org/abs/2309.13599v2 | [
"https://github.com/zhengwang100/ogc_ggcm"
] | In the paper 'From Cluster Assumption to Graph Convolution: Graph-based Semi-Supervised Learning Revisited', what Accuracy score did the GGCM model get on the CiteSeer with Public Split: fixed 20 nodes per class dataset
| 74.2 |
RealBlur-R | ALGNet | Learning Enriched Features via Selective State Spaces Model for Efficient Image Deblurring | 2024-03-29T00:00:00 | https://arxiv.org/abs/2403.20106v2 | [
"https://github.com/Tombs98/ALGNet"
] | In the paper 'Learning Enriched Features via Selective State Spaces Model for Efficient Image Deblurring', what PSNR (sRGB) score did the ALGNet model get on the RealBlur-R dataset
| 41.16 |
TAP-Vid-DAVIS-First | CoTracker | CoTracker: It is Better to Track Together | 2023-07-14T00:00:00 | https://arxiv.org/abs/2307.07635v3 | [
"https://github.com/facebookresearch/co-tracker"
] | In the paper 'CoTracker: It is Better to Track Together', what Average Jaccard score did the CoTracker model get on the TAP-Vid-DAVIS-First dataset
| 62.2 |
SYSU-CD | CDMaskFormer | Rethinking Remote Sensing Change Detection With A Mask View | 2024-06-21T00:00:00 | https://arxiv.org/abs/2406.15320v1 | [
"https://github.com/xwmaxwma/rschange"
] | In the paper 'Rethinking Remote Sensing Change Detection With A Mask View', what F1 score did the CDMaskFormer model get on the SYSU-CD dataset
| 82.84 |