| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
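The rows below follow the eight-column schema above, with `code_links` serialized as a JSON-style list. As a minimal sketch (the `parse_row` helper and its input string are illustrative, not part of the dataset's tooling), one row can be parsed back into a dict like this, assuming no field itself contains a `|` character:

```python
import json

COLUMNS = ["dataset", "model_name", "paper_title", "paper_date",
           "paper_url", "code_links", "prompts", "answer"]

def parse_row(raw: str) -> dict:
    """Split one pipe-delimited row into the eight schema fields."""
    fields = [f.strip() for f in raw.strip().strip("|").split("|")]
    row = dict(zip(COLUMNS, fields))
    # code_links is stored as a JSON array, e.g. '["https://github.com/..."]'
    row["code_links"] = json.loads(row["code_links"])
    return row

row = parse_row(
    "CUTE80 | CPPD | Context Perception Parallel Decoder for Scene Text Recognition | "
    "2023-07-23T00:00:00 | https://arxiv.org/abs/2307.12270v2 | "
    '["https://github.com/PaddlePaddle/PaddleOCR"] | '
    "In the paper 'Context Perception Parallel Decoder for Scene Text Recognition', "
    "what Accuracy score did the CPPD model get on the CUTE80 dataset | 99.7"
)
```

Note the split assumes exactly eight pipe-delimited fields; rows whose prompts or titles contain a literal `|` would need a real CSV/TSV parser instead.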
CUTE80 | CPPD | Context Perception Parallel Decoder for Scene Text Recognition | 2023-07-23T00:00:00 | https://arxiv.org/abs/2307.12270v2 | [
"https://github.com/PaddlePaddle/PaddleOCR"
] | In the paper 'Context Perception Parallel Decoder for Scene Text Recognition', what Accuracy score did the CPPD model get on the CUTE80 dataset
| 99.7 |
miniF2F-test | COPRA + GPT-4-turbo | An In-Context Learning Agent for Formal Theorem-Proving | 2023-10-06T00:00:00 | https://arxiv.org/abs/2310.04353v5 | [
"https://github.com/trishullab/copra"
] | In the paper 'An In-Context Learning Agent for Formal Theorem-Proving', what Pass@1 score did the COPRA + GPT-4-turbo model get on the miniF2F-test dataset
| 30.7 |
QVHighlights | BM-DETR | Background-aware Moment Detection for Video Moment Retrieval | 2023-06-05T00:00:00 | https://arxiv.org/abs/2306.02728v3 | [
"https://github.com/minjoong507/bm-detr"
] | In the paper 'Background-aware Moment Detection for Video Moment Retrieval', what mAP score did the BM-DETR model get on the QVHighlights dataset
| 40.08 |
COCO 2017 val | ReviewKD++(T: faster rcnn(resnet101), S:faster rcnn(resnet50)) | Improving Knowledge Distillation via Regularizing Feature Norm and Direction | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.17007v1 | [
"https://github.com/wangyz1608/knowledge-distillation-via-nd"
] | In the paper 'Improving Knowledge Distillation via Regularizing Feature Norm and Direction', what mAP score did the ReviewKD++(T: faster rcnn(resnet101), S:faster rcnn(resnet50)) model get on the COCO 2017 val dataset
| 41.03 |
HumanML3D | DiverseMotion (s=1) | DiverseMotion: Towards Diverse Human Motion Generation via Discrete Diffusion | 2023-09-04T00:00:00 | https://arxiv.org/abs/2309.01372v1 | [
"https://github.com/axdfhj/mdd"
] | In the paper 'DiverseMotion: Towards Diverse Human Motion Generation via Discrete Diffusion', what FID score did the DiverseMotion (s=1) model get on the HumanML3D dataset
| 0.070 |
Youtube-VIS 2022 Validation | CTVIS (Swin-L) | CTVIS: Consistent Training for Online Video Instance Segmentation | 2023-07-24T00:00:00 | https://arxiv.org/abs/2307.12616v1 | [
"https://github.com/kainingying/ctvis"
] | In the paper 'CTVIS: Consistent Training for Online Video Instance Segmentation', what mAP_L score did the CTVIS (Swin-L) model get on the Youtube-VIS 2022 Validation dataset
| 46.4 |
ChartQA | PaLI-3 (w/ OCR) | PaLI-3 Vision Language Models: Smaller, Faster, Stronger | 2023-10-13T00:00:00 | https://arxiv.org/abs/2310.09199v2 | [
"https://github.com/kyegomez/PALI3"
] | In the paper 'PaLI-3 Vision Language Models: Smaller, Faster, Stronger', what 1:1 Accuracy score did the PaLI-3 (w/ OCR) model get on the ChartQA dataset
| 69.5 |
Set14 - 4x upscaling | DUKD | Data Upcycling Knowledge Distillation for Image Super-Resolution | 2023-09-25T00:00:00 | https://arxiv.org/abs/2309.14162v4 | [
"https://github.com/yun224/dukd"
] | In the paper 'Data Upcycling Knowledge Distillation for Image Super-Resolution', what PSNR score did the DUKD model get on the Set14 - 4x upscaling dataset
| 28.80 |
QVHighlights | VideoLights-B-pt | VideoLights: Feature Refinement and Cross-Task Alignment Transformer for Joint Video Highlight Detection and Moment Retrieval | 2024-12-02T00:00:00 | https://arxiv.org/abs/2412.01558v1 | [
"https://github.com/dpaul06/VideoLights"
] | In the paper 'VideoLights: Feature Refinement and Cross-Task Alignment Transformer for Joint Video Highlight Detection and Moment Retrieval', what mAP score did the VideoLights-B-pt model get on the QVHighlights dataset
| 47.94 |
ScanNet200 | PonderV2 + SparseUNet | PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm | 2023-10-12T00:00:00 | https://arxiv.org/abs/2310.08586v3 | [
"https://github.com/OpenGVLab/PonderV2"
] | In the paper 'PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm', what val mIoU score did the PonderV2 + SparseUNet model get on the ScanNet200 dataset
| 32.3 |
COD | ZoomNeXt-PVTv2-B4 | ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection | 2023-10-31T00:00:00 | https://arxiv.org/abs/2310.20208v4 | [
"https://github.com/lartpang/zoomnext"
] | In the paper 'ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection', what MAE score did the ZoomNeXt-PVTv2-B4 model get on the COD dataset
| 0.017 |
BookSum | Echoes-Extractive-Abstractive | Echoes from Alexandria: A Large Resource for Multilingual Book Summarization | 2023-06-07T00:00:00 | https://arxiv.org/abs/2306.04334v1 | [
"https://github.com/babelscape/echoes-from-alexandria"
] | In the paper 'Echoes from Alexandria: A Large Resource for Multilingual Book Summarization', what ROUGE-2 score did the Echoes-Extractive-Abstractive model get on the BookSum dataset
| 10.53 |
PeMS07 | STAEformer | STAEformer: Spatio-Temporal Adaptive Embedding Makes Vanilla Transformer SOTA for Traffic Forecasting | 2023-08-21T00:00:00 | https://arxiv.org/abs/2308.10425v5 | [
"https://github.com/xdzhelheim/staeformer"
] | In the paper 'STAEformer: Spatio-Temporal Adaptive Embedding Makes Vanilla Transformer SOTA for Traffic Forecasting', what MAE@1h score did the STAEformer model get on the PeMS07 dataset
| 19.14 |
COCO-Stuff-171 | LaVG | In Defense of Lazy Visual Grounding for Open-Vocabulary Semantic Segmentation | 2024-08-09T00:00:00 | https://arxiv.org/abs/2408.04961v1 | [
"https://github.com/dahyun-kang/lazygrounding"
] | In the paper 'In Defense of Lazy Visual Grounding for Open-Vocabulary Semantic Segmentation', what mIoU score did the LaVG model get on the COCO-Stuff-171 dataset
| 23.2 |
AgNews | vONTSS | vONTSS: vMF based semi-supervised neural topic modeling with optimal transport | 2023-07-03T00:00:00 | https://arxiv.org/abs/2307.01226v2 | [
"https://github.com/xuweijieshuai/vONTSS"
] | In the paper 'vONTSS: vMF based semi-supervised neural topic modeling with optimal transport', what C_v score did the vONTSS model get on the AgNews dataset
| 0.49 |
Refer-YouTube-VOS (2021 public validation) | GLEE-Pro | General Object Foundation Model for Images and Videos at Scale | 2023-12-14T00:00:00 | https://arxiv.org/abs/2312.09158v1 | [
"https://github.com/FoundationVision/GLEE"
] | In the paper 'General Object Foundation Model for Images and Videos at Scale', what J&F score did the GLEE-Pro model get on the Refer-YouTube-VOS (2021 public validation) dataset
| 70.6 |
ImageNet | KD++(T:resnet101 S:resnet18) | Improving Knowledge Distillation via Regularizing Feature Norm and Direction | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.17007v1 | [
"https://github.com/wangyz1608/knowledge-distillation-via-nd"
] | In the paper 'Improving Knowledge Distillation via Regularizing Feature Norm and Direction', what Top-1 accuracy % score did the KD++(T:resnet101 S:resnet18) model get on the ImageNet dataset
| 72.54 |
Matterport3D | SFSS-MMSI (RGB+Depth) | Single Frame Semantic Segmentation Using Multi-Modal Spherical Images | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09369v1 | [
"https://github.com/sguttikon/SFSS-MMSI"
] | In the paper 'Single Frame Semantic Segmentation Using Multi-Modal Spherical Images', what Validation mIoU score did the SFSS-MMSI (RGB+Depth) model get on the Matterport3D dataset
| 39.19 |
Weather2K1786 (192) | MoLE-RLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06786v3 | [
"https://github.com/rogerni/mole"
] | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-RLinear model get on the Weather2K1786 (192) dataset
| 0.581 |
HumanML3D | BAD (OAAS) | BAD: Bidirectional Auto-regressive Diffusion for Text-to-Motion Generation | 2024-09-17T00:00:00 | https://arxiv.org/abs/2409.10847v1 | [
"https://github.com/rohollahhs/bad"
] | In the paper 'BAD: Bidirectional Auto-regressive Diffusion for Text-to-Motion Generation', what FID score did the BAD (OAAS) model get on the HumanML3D dataset
| 0.065 |
Amazon-Electronics | HetroFair | Heterophily-Aware Fair Recommendation using Graph Convolutional Networks | 2024-01-31T00:00:00 | https://arxiv.org/abs/2402.03365v2 | [
"https://github.com/nematgh/hetrofair"
] | In the paper 'Heterophily-Aware Fair Recommendation using Graph Convolutional Networks', what NDCG@20 score did the HetroFair model get on the Amazon-Electronics dataset
| 0.0525 |
ImageNet_CN | $M^2$-Encoder | M2-Encoder: Advancing Bilingual Image-Text Understanding by Large-scale Efficient Pretraining | 2024-01-29T00:00:00 | https://arxiv.org/abs/2401.15896v2 | [
"https://github.com/alipay/Ant-Multi-Modal-Framework/tree/main/prj/M2_Encoder"
] | In the paper 'M2-Encoder: Advancing Bilingual Image-Text Understanding by Large-scale Efficient Pretraining', what Accuracy score did the $M^2$-Encoder model get on the ImageNet_CN dataset
| 80.7 |
2000 HUB5 English | MMLU | Spirit LM: Interleaved Spoken and Written Language Model | 2024-02-08T00:00:00 | https://arxiv.org/abs/2402.05755v2 | [
"https://github.com/facebookresearch/spiritlm"
] | In the paper 'Spirit LM: Interleaved Spoken and Written Language Model', what 10-stage average accuracy score did the MMLU model get on the 2000 HUB5 English dataset
| 10 |
Office-Home | PromptStyler (CLIP, ResNet-50) | PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization | 2023-07-27T00:00:00 | https://arxiv.org/abs/2307.15199v2 | [
"https://github.com/zhanghr2001/promptta"
] | In the paper 'PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization', what Average Accuracy score did the PromptStyler (CLIP, ResNet-50) model get on the Office-Home dataset
| 73.6 |
ImageNet | VkD (T:RegNety 160 S:DeiT-S) | $V_kD:$ Improving Knowledge Distillation using Orthogonal Projections | 2024-03-10T00:00:00 | https://arxiv.org/abs/2403.06213v1 | [
"https://github.com/roymiles/vkd"
] | In the paper '$V_kD:$ Improving Knowledge Distillation using Orthogonal Projections', what Top-1 accuracy % score did the VkD (T:RegNety 160 S:DeiT-S) model get on the ImageNet dataset
| 82.9 |
Segmentation in the Wild | HIPIE | Hierarchical Open-vocabulary Universal Image Segmentation | 2023-07-03T00:00:00 | https://arxiv.org/abs/2307.00764v2 | [
"https://github.com/berkeley-hipie/hipie"
] | In the paper 'Hierarchical Open-vocabulary Universal Image Segmentation', what Mean AP score did the HIPIE model get on the Segmentation in the Wild dataset
| 41.6 |
OVIS validation | DVIS(Swin-L, Offline) | DVIS: Decoupled Video Instance Segmentation Framework | 2023-06-06T00:00:00 | https://arxiv.org/abs/2306.03413v3 | [
"https://github.com/zhang-tao-whu/DVIS"
] | In the paper 'DVIS: Decoupled Video Instance Segmentation Framework', what mask AP score did the DVIS(Swin-L, Offline) model get on the OVIS validation dataset
| 49.9 |
COLLAB | R-GIN + PANDA | PANDA: Expanded Width-Aware Message Passing Beyond Rewiring | 2024-06-06T00:00:00 | https://arxiv.org/abs/2406.03671v2 | [
"https://github.com/jeongwhanchoi/panda"
] | In the paper 'PANDA: Expanded Width-Aware Message Passing Beyond Rewiring', what Accuracy score did the R-GIN + PANDA model get on the COLLAB dataset
| 77.8% |
MedConceptsQA | BioMistral/BioMistral-7B-DARE | BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains | 2024-02-15T00:00:00 | https://arxiv.org/abs/2402.10373v3 | [
"https://github.com/biomistral/biomistral"
] | In the paper 'BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains', what Accuracy score did the BioMistral/BioMistral-7B-DARE model get on the MedConceptsQA dataset
| 24.569 |
SVAMP (1:N) | ATHENA (roberta-base) | ATHENA: Mathematical Reasoning with Thought Expansion | 2023-11-02T00:00:00 | https://arxiv.org/abs/2311.01036v1 | [
"https://github.com/the-jb/athena-math"
] | In the paper 'ATHENA: Mathematical Reasoning with Thought Expansion', what Execution Accuracy score did the ATHENA (roberta-base) model get on the SVAMP (1:N) dataset
| 52.5 |
CUB-200-2011 | ResNet-50 | PCNN: Probable-Class Nearest-Neighbor Explanations Improve Fine-Grained Image Classification Accuracy for AIs and Humans | 2023-08-25T00:00:00 | https://arxiv.org/abs/2308.13651v5 | [
"https://github.com/giangnguyen2412/PCNN-src-code-TMRL2024"
] | In the paper 'PCNN: Probable-Class Nearest-Neighbor Explanations Improve Fine-Grained Image Classification Accuracy for AIs and Humans', what Accuracy score did the ResNet-50 model get on the CUB-200-2011 dataset
| 88.59% |
VoiceBank + DEMAND | MP-SENet | Explicit Estimation of Magnitude and Phase Spectra in Parallel for High-Quality Speech Enhancement | 2023-08-17T00:00:00 | https://arxiv.org/abs/2308.08926v2 | [
"https://github.com/yxlu-0102/MP-SENet"
] | In the paper 'Explicit Estimation of Magnitude and Phase Spectra in Parallel for High-Quality Speech Enhancement', what PESQ score did the MP-SENet model get on the VoiceBank + DEMAND dataset
| 3.60 |
CIFAR-10 | SparseSwin | SparseSwin: Swin Transformer with Sparse Transformer Block | 2023-09-11T00:00:00 | https://arxiv.org/abs/2309.05224v1 | [
"https://github.com/krisnapinasthika/sparseswin"
] | In the paper 'SparseSwin: Swin Transformer with Sparse Transformer Block', what Percentage correct score did the SparseSwin model get on the CIFAR-10 dataset
| 97.43 |
ImageNet | KD++(T: regnety-16GF S:ViT-B) | Improving Knowledge Distillation via Regularizing Feature Norm and Direction | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.17007v1 | [
"https://github.com/wangyz1608/knowledge-distillation-via-nd"
] | In the paper 'Improving Knowledge Distillation via Regularizing Feature Norm and Direction', what Top-1 accuracy % score did the KD++(T: regnety-16GF S:ViT-B) model get on the ImageNet dataset
| 83.60 |
Atari 2600 Road Runner | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07T00:00:00 | https://arxiv.org/abs/2305.04180v3 | [
"https://github.com/xinjinghao/color"
] | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score score did the ASL DDQN model get on the Atari 2600 Road Runner dataset
| 56520 |
VeRi-Wild Small | MBR-4B (without RK) | Strength in Diversity: Multi-Branch Representation Learning for Vehicle Re-Identification | 2023-10-02T00:00:00 | https://arxiv.org/abs/2310.01129v1 | [
"https://github.com/videturfortuna/vehicle_reid_itsc2023"
] | In the paper 'Strength in Diversity: Multi-Branch Representation Learning for Vehicle Re-Identification', what mAP score did the MBR-4B (without RK) model get on the VeRi-Wild Small dataset
| 88.9 |
RefCOCO+ val | HyperSeg | HyperSeg: Towards Universal Visual Segmentation with Large Language Model | 2024-11-26T00:00:00 | https://arxiv.org/abs/2411.17606v2 | [
"https://github.com/congvvc/HyperSeg"
] | In the paper 'HyperSeg: Towards Universal Visual Segmentation with Large Language Model', what Overall IoU score did the HyperSeg model get on the RefCOCO+ val dataset
| 79.0 |
IMDb | RoBERTa-large with LlamBERT | LlamBERT: Large-scale low-cost data annotation in NLP | 2024-03-23T00:00:00 | https://arxiv.org/abs/2403.15938v1 | [
"https://github.com/aielte-research/llambert"
] | In the paper 'LlamBERT: Large-scale low-cost data annotation in NLP', what Accuracy score did the RoBERTa-large with LlamBERT model get on the IMDb dataset
| 96.68 |
LIVE-FB LSVQ | OneAlign + FAST-VQA | Q-Align: Teaching LMMs for Visual Scoring via Discrete Text-Defined Levels | 2023-12-28T00:00:00 | https://arxiv.org/abs/2312.17090v1 | [
"https://github.com/q-future/q-align"
] | In the paper 'Q-Align: Teaching LMMs for Visual Scoring via Discrete Text-Defined Levels', what PLCC score did the OneAlign + FAST-VQA model get on the LIVE-FB LSVQ dataset
| 0.900 |
MassSpecGym | SELFIES Transformer | MassSpecGym: A benchmark for the discovery and identification of molecules | 2024-10-30T00:00:00 | https://arxiv.org/abs/2410.23326v1 | [
"https://github.com/pluskal-lab/massspecgym"
] | In the paper 'MassSpecGym: A benchmark for the discovery and identification of molecules', what Top-1 Accuracy score did the SELFIES Transformer model get on the MassSpecGym dataset
| 0.00 |
MuCGEC | GEC-DI (LM+GED) | Improving Seq2Seq Grammatical Error Correction via Decoding Interventions | 2023-10-23T00:00:00 | https://arxiv.org/abs/2310.14534v1 | [
"https://github.com/Jacob-Zhou/gecdi"
] | In the paper 'Improving Seq2Seq Grammatical Error Correction via Decoding Interventions', what F0.5 score did the GEC-DI (LM+GED) model get on the MuCGEC dataset
| 48.61 |
COCO minival | HIPIE (ViT-H, single-scale) | Hierarchical Open-vocabulary Universal Image Segmentation | 2023-07-03T00:00:00 | https://arxiv.org/abs/2307.00764v2 | [
"https://github.com/berkeley-hipie/hipie"
] | In the paper 'Hierarchical Open-vocabulary Universal Image Segmentation', what PQ score did the HIPIE (ViT-H, single-scale) model get on the COCO minival dataset
| 58.1 |
MedConceptsQA | johnsnowlabs/JSL-MedMNX-7B | MedConceptsQA: Open Source Medical Concepts QA Benchmark | 2024-05-12T00:00:00 | https://arxiv.org/abs/2405.07348v2 | [
"https://github.com/nadavlab/MedConceptsQA"
] | In the paper 'MedConceptsQA: Open Source Medical Concepts QA Benchmark', what Accuracy score did the johnsnowlabs/JSL-MedMNX-7B model get on the MedConceptsQA dataset
| 25.627 |
MVTec AD | VCP-CLIP | VCP-CLIP: A visual context prompting model for zero-shot anomaly segmentation | 2024-07-17T00:00:00 | https://arxiv.org/abs/2407.12276v1 | [
"https://github.com/xiaozhen228/vcp-clip"
] | In the paper 'VCP-CLIP: A visual context prompting model for zero-shot anomaly segmentation', what Segmentation AUROC score did the VCP-CLIP model get on the MVTec AD dataset
| 92.0 |
3RScan | UniDet3D | UniDet3D: Multi-dataset Indoor 3D Object Detection | 2024-09-06T00:00:00 | https://arxiv.org/abs/2409.04234v1 | [
"https://github.com/filapro/unidet3d"
] | In the paper 'UniDet3D: Multi-dataset Indoor 3D Object Detection', what mAP@0.25 score did the UniDet3D model get on the 3RScan dataset
| 64.7 |
RefCOCOg-val | MaskRIS (Swin-B, combined DB) | MaskRIS: Semantic Distortion-aware Data Augmentation for Referring Image Segmentation | 2024-11-28T00:00:00 | https://arxiv.org/abs/2411.19067v1 | [
"https://github.com/naver-ai/maskris"
] | In the paper 'MaskRIS: Semantic Distortion-aware Data Augmentation for Referring Image Segmentation', what Overall IoU score did the MaskRIS (Swin-B, combined DB) model get on the RefCOCOg-val dataset
| 69.12 |
GoPro | BSSTNet | Blur-aware Spatio-temporal Sparse Transformer for Video Deblurring | 2024-06-11T00:00:00 | https://arxiv.org/abs/2406.07551v1 | [
"https://github.com/huicongzhang/bsstnet"
] | In the paper 'Blur-aware Spatio-temporal Sparse Transformer for Video Deblurring', what PSNR score did the BSSTNet model get on the GoPro dataset
| 35.98 |
ETTh2 (192) Multivariate | MoLE-DLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06786v3 | [
"https://github.com/rogerni/mole"
] | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the ETTh2 (192) Multivariate dataset
| 0.362 |
MM-Vet | SeVa-7B | Self-Supervised Visual Preference Alignment | 2024-04-16T00:00:00 | https://arxiv.org/abs/2404.10501v2 | [
"https://github.com/Kevinz-code/SeVa"
] | In the paper 'Self-Supervised Visual Preference Alignment', what GPT-4 score score did the SeVa-7B model get on the MM-Vet dataset
| 37.2 |
CIFAR-100 AlexNet - 300 Epoch | IBM | Towards Redundancy-Free Sub-networks in Continual Learning | 2023-12-01T00:00:00 | https://arxiv.org/abs/2312.00840v2 | [
"https://github.com/zackschen/IBM-Net"
] | In the paper 'Towards Redundancy-Free Sub-networks in Continual Learning', what Accuracy score did the IBM model get on the CIFAR-100 AlexNet - 300 Epoch dataset
| 82.69 |
MOT20 | HopTrack[Embedded GPU] | HopTrack: A Real-time Multi-Object Tracking System for Embedded Devices | 2024-11-01T00:00:00 | https://arxiv.org/abs/2411.00608v1 | [
"https://github.com/Mrxiangli/HopTrack"
] | In the paper 'HopTrack: A Real-time Multi-Object Tracking System for Embedded Devices', what MOTA score did the HopTrack[Embedded GPU] model get on the MOT20 dataset
| 45.6 |
VisDA2017 | PDA (CLIP, ViT-B/16) | Prompt-based Distribution Alignment for Unsupervised Domain Adaptation | 2023-12-15T00:00:00 | https://arxiv.org/abs/2312.09553v2 | [
"https://github.com/baishuanghao/prompt-based-distribution-alignment"
] | In the paper 'Prompt-based Distribution Alignment for Unsupervised Domain Adaptation', what Accuracy score did the PDA (CLIP, ViT-B/16) model get on the VisDA2017 dataset
| 89.7 |
ESC-50 | DyMN-L | Dynamic Convolutional Neural Networks as Efficient Pre-trained Audio Models | 2023-10-24T00:00:00 | https://arxiv.org/abs/2310.15648v1 | [
"https://github.com/fschmid56/efficientat"
] | In the paper 'Dynamic Convolutional Neural Networks as Efficient Pre-trained Audio Models', what Top-1 Accuracy score did the DyMN-L model get on the ESC-50 dataset
| 97.4 |
WDC Products-80%cc-seen-medium | Llama3.1_70B | Fine-tuning Large Language Models for Entity Matching | 2024-09-12T00:00:00 | https://arxiv.org/abs/2409.08185v1 | [
"https://github.com/wbsg-uni-mannheim/tailormatch"
] | In the paper 'Fine-tuning Large Language Models for Entity Matching', what F1 (%) score did the Llama3.1_70B model get on the WDC Products-80%cc-seen-medium dataset
| 75.20 |
ScanObjectNN | PointGST | Parameter-Efficient Fine-Tuning in Spectral Domain for Point Cloud Learning | 2024-10-10T00:00:00 | https://arxiv.org/abs/2410.08114v1 | [
"https://github.com/jerryfeng2003/pointgst"
] | In the paper 'Parameter-Efficient Fine-Tuning in Spectral Domain for Point Cloud Learning', what Overall Accuracy score did the PointGST model get on the ScanObjectNN dataset
| 96.18 |
Beatles | Beat This! | Beat this! Accurate beat tracking without DBN postprocessing | 2024-07-31T00:00:00 | https://arxiv.org/abs/2407.21658v1 | [
"https://github.com/CPJKU/beat_this"
] | In the paper 'Beat this! Accurate beat tracking without DBN postprocessing', what F1 score did the Beat This! model get on the Beatles dataset
| 94.5 |
EarthVQA | SOBA | EarthVQA: Towards Queryable Earth via Relational Reasoning-Based Remote Sensing Visual Question Answering | 2023-12-19T00:00:00 | https://arxiv.org/abs/2312.12222v1 | [
"https://github.com/Junjue-Wang/EarthVQA"
] | In the paper 'EarthVQA: Towards Queryable Earth via Relational Reasoning-Based Remote Sensing Visual Question Answering', what Overall Accuracy score did the SOBA model get on the EarthVQA dataset
| 78.14 |
CAMELYON16 | Snuffy (SimCLR Exhaustive) | Snuffy: Efficient Whole Slide Image Classifier | 2024-08-15T00:00:00 | https://arxiv.org/abs/2408.08258v2 | [
"https://github.com/jafarinia/snuffy"
] | In the paper 'Snuffy: Efficient Whole Slide Image Classifier', what AUC score did the Snuffy (SimCLR Exhaustive) model get on the CAMELYON16 dataset
| 0.970 |
MM-Vet | ASMv2 | The All-Seeing Project V2: Towards General Relation Comprehension of the Open World | 2024-02-29T00:00:00 | https://arxiv.org/abs/2402.19474v4 | [
"https://github.com/opengvlab/all-seeing"
] | In the paper 'The All-Seeing Project V2: Towards General Relation Comprehension of the Open World', what GPT-4 score score did the ASMv2 model get on the MM-Vet dataset
| 41.3 |
VisDA-2017 | SFDA2++ | SF(DA)$^2$: Source-free Domain Adaptation Through the Lens of Data Augmentation | 2024-03-16T00:00:00 | https://arxiv.org/abs/2403.10834v1 | [
"https://github.com/shinyflight/sfda2"
] | In the paper 'SF(DA)$^2$: Source-free Domain Adaptation Through the Lens of Data Augmentation', what Accuracy score did the SFDA2++ model get on the VisDA-2017 dataset
| 89.6 |
Coauthor CS | GCN | Half-Hop: A graph upsampling approach for slowing down message passing | 2023-08-17T00:00:00 | https://arxiv.org/abs/2308.09198v1 | [
"https://github.com/nerdslab/halfhop"
] | In the paper 'Half-Hop: A graph upsampling approach for slowing down message passing', what Accuracy score did the GCN model get on the Coauthor CS dataset
| 94.06% |
SOTS Indoor | MixDehazeNet | MixDehazeNet : Mix Structure Block For Image Dehazing Network | 2023-05-28T00:00:00 | https://arxiv.org/abs/2305.17654v1 | [
"https://github.com/ameryxiong/mixdehazenet"
] | In the paper 'MixDehazeNet : Mix Structure Block For Image Dehazing Network', what PSNR score did the MixDehazeNet model get on the SOTS Indoor dataset
| 42.62 |
Deep Blending | Self-Organizing Gaussians | Compact 3D Scene Representation via Self-Organizing Gaussian Grids | 2023-12-19T00:00:00 | https://arxiv.org/abs/2312.13299v2 | [
"https://github.com/fraunhoferhhi/Self-Organizing-Gaussians"
] | In the paper 'Compact 3D Scene Representation via Self-Organizing Gaussian Grids', what PSNR score did the Self-Organizing Gaussians model get on the Deep Blending dataset
| 30.35 |
MUTAG | R-GCN + PANDA | PANDA: Expanded Width-Aware Message Passing Beyond Rewiring | 2024-06-06T00:00:00 | https://arxiv.org/abs/2406.03671v2 | [
"https://github.com/jeongwhanchoi/panda"
] | In the paper 'PANDA: Expanded Width-Aware Message Passing Beyond Rewiring', what Accuracy score did the R-GCN + PANDA model get on the MUTAG dataset
| 90.05% |
YouTube-VIS 2021 | GRAtt-VIS (ResNet-50) | GRAtt-VIS: Gated Residual Attention for Auto Rectifying Video Instance Segmentation | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.17096v1 | [
"https://github.com/tanveer81/grattvis"
] | In the paper 'GRAtt-VIS: Gated Residual Attention for Auto Rectifying Video Instance Segmentation', what mask AP score did the GRAtt-VIS (ResNet-50) model get on the YouTube-VIS 2021 dataset
| 48.9 |
CIFAR-10 | ABNet-2G-R1 | ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities | 2024-11-28T00:00:00 | https://arxiv.org/abs/2411.19213v1 | [
"https://github.com/dvssajay/New_World"
] | In the paper 'ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities', what Percentage correct score did the ABNet-2G-R1 model get on the CIFAR-10 dataset
| 95.536 |
Ant-v2 | TLA | Optimizing Attention and Cognitive Control Costs Using Temporally-Layered Architectures | 2023-05-30T00:00:00 | https://arxiv.org/abs/2305.18701v3 | [
"https://github.com/dee0512/Temporally-Layered-Architecture"
] | In the paper 'Optimizing Attention and Cognitive Control Costs Using Temporally-Layered Architectures', what Mean Reward score did the TLA model get on the Ant-v2 dataset
| 5163.54 |
LRS3-TED | SyncVSR | SyncVSR: Data-Efficient Visual Speech Recognition with End-to-End Crossmodal Audio Token Synchronization | 2024-06-18T00:00:00 | https://arxiv.org/abs/2406.12233v1 | [
"https://github.com/KAIST-AILab/SyncVSR"
] | In the paper 'SyncVSR: Data-Efficient Visual Speech Recognition with End-to-End Crossmodal Audio Token Synchronization', what Word Error Rate (WER) score did the SyncVSR model get on the LRS3-TED dataset
| 31.2 |
shape bias | Imagen | Intriguing properties of generative classifiers | 2023-09-28T00:00:00 | https://arxiv.org/abs/2309.16779v2 | [
"https://github.com/SamsungSAILMontreal/ForestDiffusion"
] | In the paper 'Intriguing properties of generative classifiers', what shape bias score did the Imagen model get on the shape bias dataset
| 98.7 |
nuScenes | MEFormer | Robust Multimodal 3D Object Detection via Modality-Agnostic Decoding and Proximity-based Modality Ensemble | 2024-07-27T00:00:00 | https://arxiv.org/abs/2407.19156v2 | [
"https://github.com/hanchaa/meformer"
] | In the paper 'Robust Multimodal 3D Object Detection via Modality-Agnostic Decoding and Proximity-based Modality Ensemble', what NDS score did the MEFormer model get on the nuScenes dataset
| 0.74 |
USPS-to-MNIST | FACT | FACT: Federated Adversarial Cross Training | 2023-06-01T00:00:00 | https://arxiv.org/abs/2306.00607v2 | [
"https://github.com/jonas-lippl/fact"
] | In the paper 'FACT: Federated Adversarial Cross Training', what Accuracy score did the FACT model get on the USPS-to-MNIST dataset
| 98.6 |
CropHarvest - Brazil | Feature fusion with LSTM | In the Search for Optimal Multi-view Learning Models for Crop Classification with Global Remote Sensing Data | 2024-03-25T00:00:00 | https://arxiv.org/abs/2403.16582v2 | [
"https://github.com/fmenat/optimal-multiview-crop-classifier"
] | In the paper 'In the Search for Optimal Multi-view Learning Models for Crop Classification with Global Remote Sensing Data', what Average Accuracy score did the Feature fusion with LSTM model get on the CropHarvest - Brazil dataset
| 0.975 |
VoxCeleb1 | SSAMBA | SSAMBA: Self-Supervised Audio Representation Learning with Mamba State Space Model | 2024-05-20T00:00:00 | https://arxiv.org/abs/2405.11831v1 | [
"https://github.com/siavashshams/ssamba"
] | In the paper 'SSAMBA: Self-Supervised Audio Representation Learning with Mamba State Space Model', what Top-1 (%) score did the SSAMBA model get on the VoxCeleb1 dataset
| 70.1 |
Candombe | Beat This! | Beat this! Accurate beat tracking without DBN postprocessing | 2024-07-31T00:00:00 | https://arxiv.org/abs/2407.21658v1 | [
"https://github.com/CPJKU/beat_this"
] | In the paper 'Beat this! Accurate beat tracking without DBN postprocessing', what F1 score did the Beat This! model get on the Candombe dataset
| 99.7 |
ImageNet | GAC-SNN MS-ResNet-34 | Gated Attention Coding for Training High-performance and Efficient Spiking Neural Networks | 2023-08-12T00:00:00 | https://arxiv.org/abs/2308.06582v2 | [
"https://github.com/bollossom/GAC"
] | In the paper 'Gated Attention Coding for Training High-performance and Efficient Spiking Neural Networks', what Top 1 Accuracy score did the GAC-SNN MS-ResNet-34 model get on the ImageNet dataset
| 70.42 |
MM-Vet | Gemini 1.5 Pro (gemini-1.5-pro) | Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context | 2024-03-08T00:00:00 | https://arxiv.org/abs/2403.05530v4 | [
"https://github.com/dlvuldet/primevul"
] | In the paper 'Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context', what GPT-4 score score did the Gemini 1.5 Pro (gemini-1.5-pro) model get on the MM-Vet dataset
| 65.8±0.1 |
GSM8K | Vicuna (SYRELM) | Frugal LMs Trained to Invoke Symbolic Solvers Achieve Parameter-Efficient Arithmetic Reasoning | 2023-12-09T00:00:00 | https://arxiv.org/abs/2312.05571v2 | [
"https://github.com/joykirat18/syrelm"
] | In the paper 'Frugal LMs Trained to Invoke Symbolic Solvers Achieve Parameter-Efficient Arithmetic Reasoning', what Accuracy score did the Vicuna (SYRELM) model get on the GSM8K dataset
| 35.2 |
CROHME 2014 | PosFormer | PosFormer: Recognizing Complex Handwritten Mathematical Expression with Position Forest Transformer | 2024-07-10T00:00:00 | https://arxiv.org/abs/2407.07764v1 | [
"https://github.com/sjtu-deepvisionlab/posformer"
] | In the paper 'PosFormer: Recognizing Complex Handwritten Mathematical Expression with Position Forest Transformer', what ExpRate score did the PosFormer model get on the CROHME 2014 dataset
| 60.45 |
PeMSD8 | PM-DMNet(R) | Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction | 2024-08-12T00:00:00 | https://arxiv.org/abs/2408.07100v1 | [
"https://github.com/wengwenchao123/PM-DMNet"
] | In the paper 'Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction', what 12 steps MAE score did the PM-DMNet(R) model get on the PeMSD8 dataset
| 13.40 |
BIG-bench (Formal Fallacies Syllogisms Negation) | PaLM 2 (few-shot, k=3, Direct) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | [
"https://github.com/eternityyw/tram-benchmark"
] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2 (few-shot, k=3, Direct) model get on the BIG-bench (Formal Fallacies Syllogisms Negation) dataset
| 64.8 |
MMNeedle | Gemini Pro 1.0 | Gemini: A Family of Highly Capable Multimodal Models | 2023-12-19T00:00:00 | https://arxiv.org/abs/2312.11805v4 | [
"https://github.com/valdecy/pybibx"
] | In the paper 'Gemini: A Family of Highly Capable Multimodal Models', what 1 Image, 2*2 Stitching, Exact Accuracy score did the Gemini Pro 1.0 model get on the MMNeedle dataset
| 29.53 |
MoB | ConvLSTM | Malicious or Benign? Towards Effective Content Moderation for Children's Videos | 2023-05-24T00:00:00 | https://arxiv.org/abs/2305.15551v1 | [
"https://github.com/syedhammadahmed/mob"
] | In the paper 'Malicious or Benign? Towards Effective Content Moderation for Children's Videos', what Accuracy score did the ConvLSTM model get on the MoB dataset
| 69.71 |
LEVIR+ | HANet | HANet: A Hierarchical Attention Network for Change Detection With Bitemporal Very-High-Resolution Remote Sensing Images | 2024-04-14T00:00:00 | https://arxiv.org/abs/2404.09178v1 | [
"https://github.com/chengxihan/hanet-cd"
] | In the paper 'HANet: A Hierarchical Attention Network for Change Detection With Bitemporal Very-High-Resolution Remote Sensing Images', what F1 score did the HANet model get on the LEVIR+ dataset
| 77.56 |
COCO-20i (5-shot) | Matcher(DINOv2) | Matcher: Segment Anything with One Shot Using All-Purpose Feature Matching | 2023-05-22T00:00:00 | https://arxiv.org/abs/2305.13310v2 | [
"https://github.com/aim-uofa/matcher"
] | In the paper 'Matcher: Segment Anything with One Shot Using All-Purpose Feature Matching', what Mean IoU score did the Matcher(DINOv2) model get on the COCO-20i (5-shot) dataset
| 60.7 |
ActivityNet | TESTA (ViT-B/16) | TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding | 2023-10-29T00:00:00 | https://arxiv.org/abs/2310.19060v1 | [
"https://github.com/renshuhuai-andy/testa"
] | In the paper 'TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding', what text-to-video R@1 score did the TESTA (ViT-B/16) model get on the ActivityNet dataset
| 54.8 |
MVTec LOCO AD | ComAD | Component-aware anomaly detection framework for adjustable and logical industrial visual inspection | 2023-05-15T00:00:00 | https://arxiv.org/abs/2305.08509v1 | [
"https://github.com/liutongkun/comad"
] | In the paper 'Component-aware anomaly detection framework for adjustable and logical industrial visual inspection', what Avg. Detection AUROC score did the ComAD model get on the MVTec LOCO AD dataset
| 81.2 |
CIFAR-10 | SAG-ViT | SAG-ViT: A Scale-Aware, High-Fidelity Patching Approach with Graph Attention for Vision Transformers | 2024-11-14T00:00:00 | https://arxiv.org/abs/2411.09420v2 | [
"https://github.com/shravan-18/SAG-ViT"
] | In the paper 'SAG-ViT: A Scale-Aware, High-Fidelity Patching Approach with Graph Attention for Vision Transformers', what F1 score did the SAG-ViT model get on the CIFAR-10 dataset
| 95.74 |
DVS128 Gesture | TENNs-PLEIADES | TENNs-PLEIADES: Building Temporal Kernels with Orthogonal Polynomials | 2024-05-20T00:00:00 | https://arxiv.org/abs/2405.12179v3 | [
"https://github.com/peabrane/pleiades"
] | In the paper 'TENNs-PLEIADES: Building Temporal Kernels with Orthogonal Polynomials', what Accuracy (%) score did the TENNs-PLEIADES model get on the DVS128 Gesture dataset
| 100.00 |
SUN-RGBD val | SPGroup3D(Geo only) | SPGroup3D: Superpoint Grouping Network for Indoor 3D Object Detection | 2023-12-21T00:00:00 | https://arxiv.org/abs/2312.13641v1 | [
"https://github.com/zyrant/spgroup3d"
] | In the paper 'SPGroup3D: Superpoint Grouping Network for Indoor 3D Object Detection', what mAP@0.25 score did the SPGroup3D(Geo only) model get on the SUN-RGBD val dataset
| 65.4 |
CHILI-3K | GCN | CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning | 2024-02-20T00:00:00 | https://arxiv.org/abs/2402.13221v2 | [
"https://github.com/UlrikFriisJensen/CHILI"
] | In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what MSE score did the GCN model get on the CHILI-3K dataset
| 0.056±0.006 |
MSR-VTT | COSA | COSA: Concatenated Sample Pretrained Vision-Language Foundation Model | 2023-06-15T00:00:00 | https://arxiv.org/abs/2306.09085v1 | [
"https://github.com/txh-mercury/cosa"
] | In the paper 'COSA: Concatenated Sample Pretrained Vision-Language Foundation Model', what text-to-video R@1 score did the COSA model get on the MSR-VTT dataset
| 57.9 |
MedConceptsQA | HuggingFaceH4/zephyr-7b-beta | Zephyr: Direct Distillation of LM Alignment | 2023-10-25T00:00:00 | https://arxiv.org/abs/2310.16944v1 | [
"https://github.com/huggingface/alignment-handbook"
] | In the paper 'Zephyr: Direct Distillation of LM Alignment', what Accuracy score did the HuggingFaceH4/zephyr-7b-beta model get on the MedConceptsQA dataset
| 25.538 |
CIFAR-100 | ABNet-2G-R1 | ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities | 2024-11-28T00:00:00 | https://arxiv.org/abs/2411.19213v1 | [
"https://github.com/dvssajay/New_World"
] | In the paper 'ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities', what Percentage correct score did the ABNet-2G-R1 model get on the CIFAR-100 dataset
| 78.792 |
CUHK Avenue | VideoPatchCore | VideoPatchCore: An Effective Method to Memorize Normality for Video Anomaly Detection | 2024-09-24T00:00:00 | https://arxiv.org/abs/2409.16225v5 | [
"https://github.com/SkiddieAhn/Paper-VideoPatchCore"
] | In the paper 'VideoPatchCore: An Effective Method to Memorize Normality for Video Anomaly Detection', what AUC score did the VideoPatchCore model get on the CUHK Avenue dataset
| 92.8% |
Citeseer | CGT | Mitigating Degree Biases in Message Passing Mechanism by Utilizing Community Structures | 2023-12-28T00:00:00 | https://arxiv.org/abs/2312.16788v1 | [
"https://github.com/nslab-cuk/community-aware-graph-transformer"
] | In the paper 'Mitigating Degree Biases in Message Passing Mechanism by Utilizing Community Structures', what Accuracy score did the CGT model get on the Citeseer dataset
| 76.59±0.98 |
Speech Commands | EAT | EAT: Self-Supervised Pre-Training with Efficient Audio Transformer | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03497v1 | [
"https://github.com/cwx-worst-one/eat"
] | In the paper 'EAT: Self-Supervised Pre-Training with Efficient Audio Transformer', what Accuracy score did the EAT model get on the Speech Commands dataset
| 98.3±0.04 |
MBPP | GPT-3.5 Turbo (few-shot) | INTERVENOR: Prompting the Coding Ability of Large Language Models with the Interactive Chain of Repair | 2023-11-16T00:00:00 | https://arxiv.org/abs/2311.09868v5 | [
"https://github.com/neuir/intervenor"
] | In the paper 'INTERVENOR: Prompting the Coding Ability of Large Language Models with the Interactive Chain of Repair', what Accuracy score did the GPT-3.5 Turbo (few-shot) model get on the MBPP dataset
| 45.4 |
IMDb Movie Reviews | Space-XLNet | Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs | 2024-01-30T00:00:00 | https://arxiv.org/abs/2401.16638v1 | [
"https://github.com/stepantita/space-model"
] | In the paper 'Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs', what F1 Macro score did the Space-XLNet model get on the IMDb Movie Reviews dataset
| 0.9487 |
REDDIT-BINARY | R-GIN + PANDA | PANDA: Expanded Width-Aware Message Passing Beyond Rewiring | 2024-06-06T00:00:00 | https://arxiv.org/abs/2406.03671v2 | [
"https://github.com/jeongwhanchoi/panda"
] | In the paper 'PANDA: Expanded Width-Aware Message Passing Beyond Rewiring', what Accuracy score did the R-GIN + PANDA model get on the REDDIT-BINARY dataset
| 91.36 |
Story Cloze | PaLM 2-L (one-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | [
"https://github.com/eternityyw/tram-benchmark"
] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-L (one-shot) model get on the Story Cloze dataset
| 87.4 |