| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
| Food-101N | LRA-diffusion (CLIP ViT) | Label-Retrieval-Augmented Diffusion Models for Learning from Noisy Labels | 2023-05-31 | https://arxiv.org/abs/2305.19518v2 | https://github.com/puar-playground/lra-diffusion | In the paper 'Label-Retrieval-Augmented Diffusion Models for Learning from Noisy Labels', what Accuracy score did the LRA-diffusion (CLIP ViT) model get on the Food-101N dataset? | 93.42 |
| MUSES: MUlti-SEnsor Semantic perception dataset | MUSES (Mask2Former /w 4xSwin-T) | MUSES: The Multi-Sensor Semantic Perception Dataset for Driving under Uncertainty | 2024-01-23 | https://arxiv.org/abs/2401.12761v4 | https://github.com/timbroed/MUSES | In the paper 'MUSES: The Multi-Sensor Semantic Perception Dataset for Driving under Uncertainty', what PQ score did the MUSES (Mask2Former /w 4xSwin-T) model get on the MUSES: MUlti-SEnsor Semantic perception dataset dataset? | 53.6 |
| CelebA-HQ 256x256 | DDMI | DDMI: Domain-Agnostic Latent Diffusion Models for Synthesizing High-Quality Implicit Neural Representations | 2024-01-23 | https://arxiv.org/abs/2401.12517v2 | https://github.com/mlvlab/DDMI | In the paper 'DDMI: Domain-Agnostic Latent Diffusion Models for Synthesizing High-Quality Implicit Neural Representations', what FID score did the DDMI model get on the CelebA-HQ 256x256 dataset? | 8.73 |
| CDD Dataset (season-varying) | C2FNet | C2F-SemiCD: A Coarse-to-Fine Semi-Supervised Change Detection Method Based on Consistency Regularization in High-Resolution Remote Sensing Images | 2024-04-22 | https://arxiv.org/abs/2404.13838v1 | https://github.com/chengxihan/c2f-semicd-and-c2f-cdnet | In the paper 'C2F-SemiCD: A Coarse-to-Fine Semi-Supervised Change Detection Method Based on Consistency Regularization in High-Resolution Remote Sensing Images', what F1-Score did the C2FNet model get on the CDD Dataset (season-varying) dataset? | 95.93 |
| ETTh2 (720) Multivariate | PRformer | PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting | 2024-08-20 | https://arxiv.org/abs/2408.10483v1 | https://github.com/usualheart/prformer | In the paper 'PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting', what MSE score did the PRformer model get on the ETTh2 (720) Multivariate dataset? | 0.396 |
| AudioCaps | CLAPSep | CLAPSep: Leveraging Contrastive Pre-trained Model for Multi-Modal Query-Conditioned Target Sound Extraction | 2024-02-27 | https://arxiv.org/abs/2402.17455v4 | https://github.com/aisaka0v0/clapsep | In the paper 'CLAPSep: Leveraging Contrastive Pre-trained Model for Multi-Modal Query-Conditioned Target Sound Extraction', what SI-SDRi score did the CLAPSep model get on the AudioCaps dataset? | 9.40 |
| Nordland | SelaVPR | Towards Seamless Adaptation of Pre-trained Models for Visual Place Recognition | 2024-02-22 | https://arxiv.org/abs/2402.14505v3 | https://github.com/Lu-Feng/SelaVPR | In the paper 'Towards Seamless Adaptation of Pre-trained Models for Visual Place Recognition', what Recall@1 score did the SelaVPR model get on the Nordland dataset? | 86.6 |
| ETTm2 (720) Multivariate | RLinear | Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping | 2023-05-18 | https://arxiv.org/abs/2305.10721v1 | https://github.com/plumprc/rtsf | In the paper 'Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping', what MSE score did the RLinear model get on the ETTm2 (720) Multivariate dataset? | 0.366 |
| MM-Vet | CoLLaVO | CoLLaVO: Crayon Large Language and Vision mOdel | 2024-02-17 | https://arxiv.org/abs/2402.11248v4 | https://github.com/ByungKwanLee/CoLLaVO | In the paper 'CoLLaVO: Crayon Large Language and Vision mOdel', what GPT-4 score did the CoLLaVO model get on the MM-Vet dataset? | 40.3 |
| EconLogicQA | Gemma-2B-IT | EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning | 2024-05-13 | https://arxiv.org/abs/2405.07938v2 | https://github.com/yinzhu-quan/lm-evaluation-harness | In the paper 'EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning', what Accuracy score did the Gemma-2B-IT model get on the EconLogicQA dataset? | 0.0846 |
| FSC147 | SemAug-SAFECount | Semantic Generative Augmentations for Few-Shot Counting | 2023-10-26 | https://arxiv.org/abs/2311.16122v1 | https://github.com/perladoubinsky/SemAug | In the paper 'Semantic Generative Augmentations for Few-Shot Counting', what MAE(val) score did the SemAug-SAFECount model get on the FSC147 dataset? | 12.59 |
| personachat | P5 | P5: Plug-and-Play Persona Prompting for Personalized Response Selection | 2023-10-10 | https://arxiv.org/abs/2310.06390v1 | https://github.com/rungjoo/plug-and-play-prompt-persona | In the paper 'P5: Plug-and-Play Persona Prompting for Personalized Response Selection', what R20@1 score did the P5 model get on the personachat dataset? | 87.45 |
| NTU RGB+D | π-ViT (RGB + Pose) | Just Add $π$! Pose Induced Video Transformers for Understanding Activities of Daily Living | 2023-11-30 | https://arxiv.org/abs/2311.18840v1 | https://github.com/dominickrei/pi-vit | In the paper 'Just Add $π$! Pose Induced Video Transformers for Understanding Activities of Daily Living', what Accuracy (CS) score did the π-ViT (RGB + Pose) model get on the NTU RGB+D dataset? | 96.3 |
| VNHSGE-Geography | ChatGPT | VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models | 2023-05-20 | https://arxiv.org/abs/2305.12199v1 | https://github.com/xdao85/vnhsge | In the paper 'VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models', what Accuracy score did the ChatGPT model get on the VNHSGE-Geography dataset? | 61.5 |
| Slovo: Russian Sign Language Dataset | mVITv2-S | Slovo: Russian Sign Language Dataset | 2023-05-23 | https://arxiv.org/abs/2305.14527v3 | https://github.com/hukenovs/slovo | In the paper 'Slovo: Russian Sign Language Dataset', what Mean Accuracy score did the mVITv2-S model get on the Slovo: Russian Sign Language Dataset dataset? | 64.09 |
| ogbn-arxiv | GraphSAGE | Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification | 2024-06-13 | https://arxiv.org/abs/2406.08993v2 | https://github.com/LUOyk1999/tunedGNN | In the paper 'Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification', what Test Accuracy score did the GraphSAGE model get on the ogbn-arxiv dataset? | 0.7295 ± 0.0031 |
| Abt-Buy | gpt-4o-mini-2024-07-18_fine_tuned | Fine-tuning Large Language Models for Entity Matching | 2024-09-12 | https://arxiv.org/abs/2409.08185v1 | https://github.com/wbsg-uni-mannheim/tailormatch | In the paper 'Fine-tuning Large Language Models for Entity Matching', what F1 (%) score did the gpt-4o-mini-2024-07-18_fine_tuned model get on the Abt-Buy dataset? | 94.09 |
| MM-Vet | VisionZip (Retain 192 Tokens) | VisionZip: Longer is Better but Not Necessary in Vision Language Models | 2024-12-05 | https://arxiv.org/abs/2412.04467v1 | https://github.com/dvlab-research/visionzip | In the paper 'VisionZip: Longer is Better but Not Necessary in Vision Language Models', what GPT-4 score did the VisionZip (Retain 192 Tokens) model get on the MM-Vet dataset? | 31.7 |
| Automatic Cardiac Diagnosis Challenge (ACDC) | MERIT-GCASCADE | G-CASCADE: Efficient Cascaded Graph Convolutional Decoding for 2D Medical Image Segmentation | 2023-10-24 | https://arxiv.org/abs/2310.16175v1 | https://github.com/SLDGroup/G-CASCADE | In the paper 'G-CASCADE: Efficient Cascaded Graph Convolutional Decoding for 2D Medical Image Segmentation', what Avg DSC score did the MERIT-GCASCADE model get on the Automatic Cardiac Diagnosis Challenge (ACDC) dataset? | 92.23 |
| SST-2 | OPT-125M | Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization | 2024-05-24 | https://arxiv.org/abs/2405.15861v3 | https://github.com/ZidongLiu/DeComFL | In the paper 'Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization', what Test Accuracy score did the OPT-125M model get on the SST-2 dataset? | 85.08% |
| UCF-101 | ACDiT | ACDiT: Interpolating Autoregressive Conditional Modeling and Diffusion Transformer | 2024-12-10 | https://arxiv.org/abs/2412.07720v1 | https://github.com/thunlp/acdit | In the paper 'ACDiT: Interpolating Autoregressive Conditional Modeling and Diffusion Transformer', what FVD16 score did the ACDiT model get on the UCF-101 dataset? | 90 |
| BBBP | G-Tuning | Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns | 2023-12-21 | https://arxiv.org/abs/2312.13583v1 | https://github.com/zjunet/G-Tuning | In the paper 'Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns', what ROC-AUC score did the G-Tuning model get on the BBBP dataset? | 72.59 |
| IMDB-BINARY | GIN + PANDA | PANDA: Expanded Width-Aware Message Passing Beyond Rewiring | 2024-06-06 | https://arxiv.org/abs/2406.03671v2 | https://github.com/jeongwhanchoi/panda | In the paper 'PANDA: Expanded Width-Aware Message Passing Beyond Rewiring', what Accuracy score did the GIN + PANDA model get on the IMDB-BINARY dataset? | 72.56 |
| Aria Everyday Objects | 3DETR | EFM3D: A Benchmark for Measuring Progress Towards 3D Egocentric Foundation Models | 2024-06-14 | https://arxiv.org/abs/2406.10224v1 | https://github.com/facebookresearch/efm3d | In the paper 'EFM3D: A Benchmark for Measuring Progress Towards 3D Egocentric Foundation Models', what mAP score did the 3DETR model get on the Aria Everyday Objects dataset? | 16 |
| GTEA | HR-Pro | HR-Pro: Point-supervised Temporal Action Localization via Hierarchical Reliability Propagation | 2023-08-24 | https://arxiv.org/abs/2308.12608v3 | https://github.com/pipixin321/hr-pro | In the paper 'HR-Pro: Point-supervised Temporal Action Localization via Hierarchical Reliability Propagation', what mAP@0.1:0.7 score did the HR-Pro model get on the GTEA dataset? | 47.3 |
| WikiText-103 | Transformer+SSA | The Information Pathways Hypothesis: Transformers are Dynamic Self-Ensembles | 2023-06-02 | https://arxiv.org/abs/2306.01705v1 | https://github.com/shamim-hussain/ssa | In the paper 'The Information Pathways Hypothesis: Transformers are Dynamic Self-Ensembles', what Validation perplexity score did the Transformer+SSA model get on the WikiText-103 dataset? | 16.91 |
| SVT | CLIP4STR-L | CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model | 2023-05-23 | https://arxiv.org/abs/2305.14014v3 | https://github.com/VamosC/CLIP4STR | In the paper 'CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model', what Accuracy score did the CLIP4STR-L model get on the SVT dataset? | 98.5 |
| GoPro | CGNet | CascadedGaze: Efficiency in Global Context Extraction for Image Restoration | 2024-01-26 | https://arxiv.org/abs/2401.15235v2 | https://github.com/Ascend-Research/CascadedGaze | In the paper 'CascadedGaze: Efficiency in Global Context Extraction for Image Restoration', what PSNR score did the CGNet model get on the GoPro dataset? | 33.77 |
| PCQM4Mv2-LSC | EGT+SSA | The Information Pathways Hypothesis: Transformers are Dynamic Self-Ensembles | 2023-06-02 | https://arxiv.org/abs/2306.01705v1 | https://github.com/shamim-hussain/ssa | In the paper 'The Information Pathways Hypothesis: Transformers are Dynamic Self-Ensembles', what Validation MAE score did the EGT+SSA model get on the PCQM4Mv2-LSC dataset? | 0.0876 |
| EuroSAT | HPT | Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models | 2023-12-11 | https://arxiv.org/abs/2312.06323v1 | https://github.com/vill-lab/2024-aaai-hpt | In the paper 'Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models', what Harmonic mean score did the HPT model get on the EuroSAT dataset? | 84.82 |
| MSP-Podcast (Dominance) | wav2small-Teacher | Wav2Small: Distilling Wav2Vec2 to 72K parameters for Low-Resource Speech emotion recognition | 2024-08-25 | https://arxiv.org/abs/2408.13920v4 | https://github.com/dkounadis/wav2small | In the paper 'Wav2Small: Distilling Wav2Vec2 to 72K parameters for Low-Resource Speech emotion recognition', what CCC score did the wav2small-Teacher model get on the MSP-Podcast (Dominance) dataset? | 0.6840044 |
| RAF-DB | ResEmoteNet | ResEmoteNet: Bridging Accuracy and Loss Reduction in Facial Emotion Recognition | 2024-09-01 | https://arxiv.org/abs/2409.10545v2 | https://github.com/ArnabKumarRoy02/ResEmoteNet | In the paper 'ResEmoteNet: Bridging Accuracy and Loss Reduction in Facial Emotion Recognition', what Overall Accuracy score did the ResEmoteNet model get on the RAF-DB dataset? | 94.76 |
| GSM8K | MathCoder-CL-7B | MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning | 2023-10-05 | https://arxiv.org/abs/2310.03731v1 | https://github.com/mathllm/mathcoder | In the paper 'MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning', what Accuracy score did the MathCoder-CL-7B model get on the GSM8K dataset? | 67.8 |
| NExT-QA (Open-ended VideoQA) | MovieChat+ | MovieChat+: Question-aware Sparse Memory for Long Video Question Answering | 2024-04-26 | https://arxiv.org/abs/2404.17176v1 | https://github.com/rese1f/MovieChat | In the paper 'MovieChat+: Question-aware Sparse Memory for Long Video Question Answering', what Accuracy score did the MovieChat+ model get on the NExT-QA (Open-ended VideoQA) dataset? | 54.8 |
| CIFAR10 100k | GraphGPS + HDSE | Enhancing Graph Transformers with Hierarchical Distance Structural Encoding | 2023-08-22 | https://arxiv.org/abs/2308.11129v4 | https://github.com/luoyk1999/hdse | In the paper 'Enhancing Graph Transformers with Hierarchical Distance Structural Encoding', what Accuracy (%) score did the GraphGPS + HDSE model get on the CIFAR10 100k dataset? | 76.180±0.277 |
| cifar100 | ResNet50 | Guarding Barlow Twins Against Overfitting with Mixed Samples | 2023-12-04 | https://arxiv.org/abs/2312.02151v1 | https://github.com/wgcban/mix-bt | In the paper 'Guarding Barlow Twins Against Overfitting with Mixed Samples', what average top-1 classification accuracy score did the ResNet50 model get on the cifar100 dataset? | 72.51 |
| WDC-PAVE | MAVEQA | Using LLMs for the Extraction and Normalization of Product Attribute Values | 2024-03-04 | https://arxiv.org/abs/2403.02130v4 | https://github.com/wbsg-uni-mannheim/wdc-pave | In the paper 'Using LLMs for the Extraction and Normalization of Product Attribute Values', what F1-Score did the MAVEQA model get on the WDC-PAVE dataset? | 65.10 |
| ETTh1 (336) Multivariate | GridTST | Leveraging 2D Information for Long-term Time Series Forecasting with Vanilla Transformers | 2024-05-22 | https://arxiv.org/abs/2405.13810v1 | https://github.com/Hannibal046/GridTST | In the paper 'Leveraging 2D Information for Long-term Time Series Forecasting with Vanilla Transformers', what MSE score did the GridTST model get on the ETTh1 (336) Multivariate dataset? | 0.436 |
| RES-Q | QurrentOS-coder + GPT-4 | RES-Q: Evaluating Code-Editing Large Language Model Systems at the Repository Scale | 2024-06-24 | https://arxiv.org/abs/2406.16801v2 | https://github.com/qurrent-ai/res-q | In the paper 'RES-Q: Evaluating Code-Editing Large Language Model Systems at the Repository Scale', what pass@1 score did the QurrentOS-coder + GPT-4 model get on the RES-Q dataset? | 30.0 |
| UMVM-dbp-zh-en | UMAEA (w/o surf) | Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment | 2023-07-30 | https://arxiv.org/abs/2307.16210v2 | https://github.com/zjukg/umaea | In the paper 'Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment', what Hits@1 score did the UMAEA (w/o surf) model get on the UMVM-dbp-zh-en dataset? | 0.856 |
| Cityscapes Panoptic Parts | TAPPS (Swin-B, COCO pre-training) | Task-aligned Part-aware Panoptic Segmentation through Joint Object-Part Representations | 2024-06-14 | https://arxiv.org/abs/2406.10114v1 | https://github.com/tue-mps/tapps | In the paper 'Task-aligned Part-aware Panoptic Segmentation through Joint Object-Part Representations', what PartPQ score did the TAPPS (Swin-B, COCO pre-training) model get on the Cityscapes Panoptic Parts dataset? | 64.8 |
| Cholec80 | MSN | Self-Supervised Learning for Endoscopic Video Analysis | 2023-08-23 | https://arxiv.org/abs/2308.12394v1 | https://github.com/royhirsch/endossl | In the paper 'Self-Supervised Learning for Endoscopic Video Analysis', what F1 score did the MSN model get on the Cholec80 dataset? | 89.6 |
| MedQA | MedMobile (3.8B) | MedMobile: A mobile-sized language model with expert-level clinical capabilities | 2024-10-11 | https://arxiv.org/abs/2410.09019v1 | https://github.com/nyuolab/MedMobile | In the paper 'MedMobile: A mobile-sized language model with expert-level clinical capabilities', what Accuracy score did the MedMobile (3.8B) model get on the MedQA dataset? | 75.7 |
| FRMT (Chinese - Taiwan) | PaLM 2 | PaLM 2 Technical Report | 2023-05-17 | https://arxiv.org/abs/2305.10403v3 | https://github.com/eternityyw/tram-benchmark | In the paper 'PaLM 2 Technical Report', what BLEURT score did the PaLM 2 model get on the FRMT (Chinese - Taiwan) dataset? | 72.0 |
| Weather2K114 (96) | MoLE-DLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11 | https://arxiv.org/abs/2312.06786v3 | https://github.com/rogerni/mole | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the Weather2K114 (96) dataset? | 0.391 |
| CIFAR-100-LT (ρ=100) | DirMixE | Harnessing Hierarchical Label Distribution Variations in Test Agnostic Long-tail Recognition | 2024-05-13 | https://arxiv.org/abs/2405.07780v1 | https://github.com/scongl/dirmixe | In the paper 'Harnessing Hierarchical Label Distribution Variations in Test Agnostic Long-tail Recognition', what Error Rate score did the DirMixE model get on the CIFAR-100-LT (ρ=100) dataset? | 51.62 |
| ImageNet-S | HPT | Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models | 2023-12-11 | https://arxiv.org/abs/2312.06323v1 | https://github.com/vill-lab/2024-aaai-hpt | In the paper 'Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models', what Top-1 accuracy % score did the HPT model get on the ImageNet-S dataset? | 49.36 |
| Nardo-Air | AnyLoc-VLAD-DINOv2 | AnyLoc: Towards Universal Visual Place Recognition | 2023-08-01 | https://arxiv.org/abs/2308.00688v2 | https://github.com/AnyLoc/AnyLoc | In the paper 'AnyLoc: Towards Universal Visual Place Recognition', what Recall@1 score did the AnyLoc-VLAD-DINOv2 model get on the Nardo-Air dataset? | 76.06 |
| waymo cyclist | PillarNeXt | PillarNeXt: Rethinking Network Designs for 3D Object Detection in LiDAR Point Clouds | 2023-05-08 | https://arxiv.org/abs/2305.04925v1 | https://github.com/qcraftai/pillarnext | In the paper 'PillarNeXt: Rethinking Network Designs for 3D Object Detection in LiDAR Point Clouds', what APH/L2 score did the PillarNeXt model get on the waymo cyclist dataset? | 70.55 |
| MATH | ToRA 13B (w/ code) | ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving | 2023-09-29 | https://arxiv.org/abs/2309.17452v4 | https://github.com/microsoft/tora | In the paper 'ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving', what Accuracy score did the ToRA 13B (w/ code) model get on the MATH dataset? | 43.0 |
| Kvasir-SEG | EffiSegNet-B4 | EffiSegNet: Gastrointestinal Polyp Segmentation through a Pre-Trained EfficientNet-based Network with a Simplified Decoder | 2024-07-23 | https://arxiv.org/abs/2407.16298v1 | https://github.com/ivezakis/effisegnet | In the paper 'EffiSegNet: Gastrointestinal Polyp Segmentation through a Pre-Trained EfficientNet-based Network with a Simplified Decoder', what mean Dice score did the EffiSegNet-B4 model get on the Kvasir-SEG dataset? | 0.9483 |
| ETTh1 (336) Multivariate | Aaren | Attention as an RNN | 2024-05-22 | https://arxiv.org/abs/2405.13956v2 | https://github.com/claCase/Attention-as-RNN | In the paper 'Attention as an RNN', what MSE score did the Aaren model get on the ETTh1 (336) Multivariate dataset? | 0.65 |
| UTKFace | MiVOLO-D1 | MiVOLO: Multi-input Transformer for Age and Gender Estimation | 2023-07-10 | https://arxiv.org/abs/2307.04616v2 | https://github.com/wildchlamydia/mivolo | In the paper 'MiVOLO: Multi-input Transformer for Age and Gender Estimation', what MAE score did the MiVOLO-D1 model get on the UTKFace dataset? | 3.7 |
| UMVM-oea-en-de | UMAEA (w/o surf & iter) | Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment | 2023-07-30 | https://arxiv.org/abs/2307.16210v2 | https://github.com/zjukg/umaea | In the paper 'Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment', what Hits@1 score did the UMAEA (w/o surf & iter) model get on the UMVM-oea-en-de dataset? | 0.956 |
| OoDIS | UGainS | UGainS: Uncertainty Guided Anomaly Instance Segmentation | 2023-08-03 | https://arxiv.org/abs/2308.02046v1 | https://github.com/kumuji/ugains | In the paper 'UGainS: Uncertainty Guided Anomaly Instance Segmentation', what AP score did the UGainS model get on the OoDIS dataset? | 11.14 |
| DanceTrack | UCMCTrack | UCMCTrack: Multi-Object Tracking with Uniform Camera Motion Compensation | 2023-12-14 | https://arxiv.org/abs/2312.08952v2 | https://github.com/corfyi/ucmctrack | In the paper 'UCMCTrack: Multi-Object Tracking with Uniform Camera Motion Compensation', what HOTA score did the UCMCTrack model get on the DanceTrack dataset? | 63.6 |
| CIFAR10 100k | GRED | Recurrent Distance Filtering for Graph Representation Learning | 2023-12-03 | https://arxiv.org/abs/2312.01538v3 | https://github.com/skeletondyh/gred | In the paper 'Recurrent Distance Filtering for Graph Representation Learning', what Accuracy (%) score did the GRED model get on the CIFAR10 100k dataset? | 76.853±0.185 |
| MAWPS | GPT-3 text-curie-001 (13B) | Math Word Problem Solving by Generating Linguistic Variants of Problem Statements | 2023-06-24 | https://arxiv.org/abs/2306.13899v1 | https://github.com/starscream-11813/variational-mathematical-reasoning | In the paper 'Math Word Problem Solving by Generating Linguistic Variants of Problem Statements', what Accuracy (%) score did the GPT-3 text-curie-001 (13B) model get on the MAWPS dataset? | 4.09 |
| UCSD Ped2 | AnomalyRuler | Follow the Rules: Reasoning for Video Anomaly Detection with Large Language Models | 2024-07-14 | https://arxiv.org/abs/2407.10299v2 | https://github.com/Yuchen413/AnomalyRuler | In the paper 'Follow the Rules: Reasoning for Video Anomaly Detection with Large Language Models', what AUC score did the AnomalyRuler model get on the UCSD Ped2 dataset? | 97.9% |
| MLO-Cn2 | Persistence | Effective Benchmarks for Optical Turbulence Modeling | 2024-01-07 | https://arxiv.org/abs/2401.03573v1 | https://github.com/cdjellen/otbench | In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the Persistence model get on the MLO-Cn2 dataset? | 1.227 |
| LAMBADA | PaLM 2-S (one-shot) | PaLM 2 Technical Report | 2023-05-17 | https://arxiv.org/abs/2305.10403v3 | https://github.com/eternityyw/tram-benchmark | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-S (one-shot) model get on the LAMBADA dataset? | 80.7 |
| RWTH-PHOENIX-Weather 2014 T | TCNet | TCNet: Continuous Sign Language Recognition from Trajectories and Correlated Regions | 2024-03-18 | https://arxiv.org/abs/2403.11818v1 | https://github.com/hotfinda/tcnet | In the paper 'TCNet: Continuous Sign Language Recognition from Trajectories and Correlated Regions', what Word Error Rate (WER) score did the TCNet model get on the RWTH-PHOENIX-Weather 2014 T dataset? | 19.4 |
| ColonINST-v1 (Seen) | ColonGPT (w/ LoRA, w/o extra data) | Frontiers in Intelligent Colonoscopy | 2024-10-22 | https://arxiv.org/abs/2410.17241v1 | https://github.com/ai4colonoscopy/intelliscope | In the paper 'Frontiers in Intelligent Colonoscopy', what Accuracy score did the ColonGPT (w/ LoRA, w/o extra data) model get on the ColonINST-v1 (Seen) dataset? | 99.02 |
| Bongard-HOI | SVM-Mimic + PMF (fine-tuned CLIP RN-50) | Support-Set Context Matters for Bongard Problems | 2023-09-07 | https://arxiv.org/abs/2309.03468v2 | https://github.com/nraghuraman/bongard-context | In the paper 'Support-Set Context Matters for Bongard Problems', what Avg. Accuracy score did the SVM-Mimic + PMF (fine-tuned CLIP RN-50) model get on the Bongard-HOI dataset? | 76.41 |
| Squirrel | FaberNet | HoloNets: Spectral Convolutions do extend to Directed Graphs | 2023-10-03 | https://arxiv.org/abs/2310.02232v2 | https://github.com/ChristianKoke/HoloNets | In the paper 'HoloNets: Spectral Convolutions do extend to Directed Graphs', what Accuracy score did the FaberNet model get on the Squirrel dataset? | 76.71±1.92 |
| STS13 | PromptEOL+CSE+OPT-13B | Scaling Sentence Embeddings with Large Language Models | 2023-07-31 | https://arxiv.org/abs/2307.16645v1 | https://github.com/kongds/scaling_sentemb | In the paper 'Scaling Sentence Embeddings with Large Language Models', what Spearman Correlation score did the PromptEOL+CSE+OPT-13B model get on the STS13 dataset? | 0.9024 |
| IC13 | ABINet-LV+TPS++ | TPS++: Attention-Enhanced Thin-Plate Spline for Scene Text Recognition | 2023-05-09 | https://arxiv.org/abs/2305.05322v1 | https://github.com/simplify23/tps_pp | In the paper 'TPS++: Attention-Enhanced Thin-Plate Spline for Scene Text Recognition', what Accuracy score did the ABINet-LV+TPS++ model get on the IC13 dataset? | 97.8 |
| VoiceBank + DEMAND | SEMamba (+PCS) | An Investigation of Incorporating Mamba for Speech Enhancement | 2024-05-10 | https://arxiv.org/abs/2405.06573v1 | https://github.com/roychao19477/semamba | In the paper 'An Investigation of Incorporating Mamba for Speech Enhancement', what PESQ score did the SEMamba (+PCS) model get on the VoiceBank + DEMAND dataset? | 3.69 |
| ASTE | MvP | MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction | 2023-05-22 | https://arxiv.org/abs/2305.12627v1 | https://github.com/ZubinGou/multi-view-prompting | In the paper 'MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction', what F1 (L14) score did the MvP model get on the ASTE dataset? | 63.33 |
| Weather (720) | DiPE-Linear | Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting | 2024-11-26 | https://arxiv.org/abs/2411.17257v1 | https://github.com/wintertee/dipe-linear | In the paper 'Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting', what MSE score did the DiPE-Linear model get on the Weather (720) dataset? | 0.306 |
| SoundingEarth | GeoCLAP | Learning Tri-modal Embeddings for Zero-Shot Soundscape Mapping | 2023-09-19 | https://arxiv.org/abs/2309.10667v1 | https://github.com/mvrl/geoclap | In the paper 'Learning Tri-modal Embeddings for Zero-Shot Soundscape Mapping', what Median Rank score did the GeoCLAP model get on the SoundingEarth dataset? | 159 |
| FineDiving | RICA^2 (Deterministic) | RICA2: Rubric-Informed, Calibrated Assessment of Actions | 2024-08-04 | https://arxiv.org/abs/2408.02138v2 | https://github.com/abrarmajeedi/rica2_aqa | In the paper 'RICA2: Rubric-Informed, Calibrated Assessment of Actions', what Spearman Correlation score did the RICA^2 (Deterministic) model get on the FineDiving dataset? | 0.9421 |
| MixSNIPS | MISCA | MISCA: A Joint Model for Multiple Intent Detection and Slot Filling with Intent-Slot Co-Attention | 2023-12-10 | https://arxiv.org/abs/2312.05741v1 | https://github.com/vinairesearch/misca | In the paper 'MISCA: A Joint Model for Multiple Intent Detection and Slot Filling with Intent-Slot Co-Attention', what Accuracy score did the MISCA model get on the MixSNIPS dataset? | 97.3 |
| WikiText-103 | Transformer+SSA+Self-ensemble | The Information Pathways Hypothesis: Transformers are Dynamic Self-Ensembles | 2023-06-02 | https://arxiv.org/abs/2306.01705v1 | https://github.com/shamim-hussain/ssa | In the paper 'The Information Pathways Hypothesis: Transformers are Dynamic Self-Ensembles', what Validation perplexity score did the Transformer+SSA+Self-ensemble model get on the WikiText-103 dataset? | 16.54 |
| PCQM4Mv2-LSC | GPTrans-T | Graph Propagation Transformer for Graph Representation Learning | 2023-05-19 | https://arxiv.org/abs/2305.11424v3 | https://github.com/czczup/gptrans | In the paper 'Graph Propagation Transformer for Graph Representation Learning', what Validation MAE score did the GPTrans-T model get on the PCQM4Mv2-LSC dataset? | 0.0833 |
| PubMed with Public Split: fixed 20 nodes per class | GGCM | From Cluster Assumption to Graph Convolution: Graph-based Semi-Supervised Learning Revisited | 2023-09-24 | https://arxiv.org/abs/2309.13599v2 | https://github.com/zhengwang100/ogc_ggcm | In the paper 'From Cluster Assumption to Graph Convolution: Graph-based Semi-Supervised Learning Revisited', what Accuracy score did the GGCM model get on the PubMed with Public Split: fixed 20 nodes per class dataset? | 80.8% |
| HO-3D v2 | HandBooster | HandBooster: Boosting 3D Hand-Mesh Reconstruction by Conditional Synthesis and Sampling of Hand-Object Interactions | 2024-03-27 | https://arxiv.org/abs/2403.18575v1 | https://github.com/hxwork/handbooster_pytorch | In the paper 'HandBooster: Boosting 3D Hand-Mesh Reconstruction by Conditional Synthesis and Sampling of Hand-Object Interactions', what PA-MPJPE (mm) score did the HandBooster model get on the HO-3D v2 dataset? | 8.2 |
| Intel Image Classification | ResNet-18 + Vision Eagle Attention | Vision Eagle Attention: a new lens for advancing image classification | 2024-11-15 | https://arxiv.org/abs/2411.10564v2 | https://github.com/MahmudulHasan11085/Vision-Eagle-Attention | In the paper 'Vision Eagle Attention: a new lens for advancing image classification', what Accuracy score did the ResNet-18 + Vision Eagle Attention model get on the Intel Image Classification dataset? | 92.43 |
| SIR^2(Objects) | DSRNet | Single Image Reflection Separation via Component Synergy | 2023-08-19 | https://arxiv.org/abs/2308.10027v1 | https://github.com/mingcv/dsrnet | In the paper 'Single Image Reflection Separation via Component Synergy', what PSNR score did the DSRNet model get on the SIR^2(Objects) dataset? | 26.28 |
| COLLAB | GCN + PANDA | PANDA: Expanded Width-Aware Message Passing Beyond Rewiring | 2024-06-06 | https://arxiv.org/abs/2406.03671v2 | https://github.com/jeongwhanchoi/panda | In the paper 'PANDA: Expanded Width-Aware Message Passing Beyond Rewiring', what Accuracy score did the GCN + PANDA model get on the COLLAB dataset? | 68.4% |
| CATT | CBHG | CATT: Character-based Arabic Tashkeel Transformer | 2024-07-03 | https://arxiv.org/abs/2407.03236v3 | https://github.com/abjadai/catt | In the paper 'CATT: Character-based Arabic Tashkeel Transformer', what DER(%) score did the CBHG model get on the CATT dataset? | 10.808 |
THUMOS’14 | ActionMamba(InternVideo2-6B) | Video Mamba Suite: State Space Model as a Versatile Alternative for Video Understanding | 2024-03-14T00:00:00 | https://arxiv.org/abs/2403.09626v1 | [
"https://github.com/opengvlab/video-mamba-suite"
] | In the paper 'Video Mamba Suite: State Space Model as a Versatile Alternative for Video Understanding', what mAP IOU@0.5 score did the ActionMamba(InternVideo2-6B) model get on the THUMOS’14 dataset
| 76.90 |
ADE20K-150 | TTD (MaskCLIP) | TTD: Text-Tag Self-Distillation Enhancing Image-Text Alignment in CLIP to Alleviate Single Tag Bias | 2024-03-30T00:00:00 | https://arxiv.org/abs/2404.00384v2 | [
"https://github.com/shjo-april/TTD"
] | In the paper 'TTD: Text-Tag Self-Distillation Enhancing Image-Text Alignment in CLIP to Alleviate Single Tag Bias', what mIoU score did the TTD (MaskCLIP) model get on the ADE20K-150 dataset
| 12.7 |
VTAB-1k(Specialized<4>) | GateVPT(ViT-B/16_MAE_pretrained_ImageNet-1K) | Improving Visual Prompt Tuning for Self-supervised Vision Transformers | 2023-06-08T00:00:00 | https://arxiv.org/abs/2306.05067v1 | [
"https://github.com/ryongithub/gatedprompttuning"
] | In the paper 'Improving Visual Prompt Tuning for Self-supervised Vision Transformers', what Mean Accuracy score did the GateVPT(ViT-B/16_MAE_pretrained_ImageNet-1K) model get on the VTAB-1k(Specialized<4>) dataset
| 76.86 |
CIFAR-10-LT (ρ=10) | SURE(ResNet-32) | SURE: SUrvey REcipes for building reliable and robust deep networks | 2024-03-01T00:00:00 | https://arxiv.org/abs/2403.00543v1 | [
"https://github.com/YutingLi0606/SURE"
] | In the paper 'SURE: SUrvey REcipes for building reliable and robust deep networks', what Error Rate score did the SURE(ResNet-32) model get on the CIFAR-10-LT (ρ=10) dataset
| 5.04 |
ImageNet 512x512 | EDM2-S w/ guidance interval | Applying Guidance in a Limited Interval Improves Sample and Distribution Quality in Diffusion Models | 2024-04-11T00:00:00 | https://arxiv.org/abs/2404.07724v2 | [
"https://github.com/kynkaat/guidance-interval"
] | In the paper 'Applying Guidance in a Limited Interval Improves Sample and Distribution Quality in Diffusion Models', what FID score did the EDM2-S w/ guidance interval model get on the ImageNet 512x512 dataset
| 1.68 |
HRSOD | BiRefNet (DUTS, UHRSD) | Bilateral Reference for High-Resolution Dichotomous Image Segmentation | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03407v6 | [
"https://github.com/zhengpeng7/birefnet"
] | In the paper 'Bilateral Reference for High-Resolution Dichotomous Image Segmentation', what S-Measure score did the BiRefNet (DUTS, UHRSD) model get on the HRSOD dataset
| 0.959 |
The Pile | Test-Time Fine-Tuning with SIFT + Llama-3.2 (1B) | Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs | 2024-10-10T00:00:00 | https://arxiv.org/abs/2410.08020v2 | [
"https://github.com/jonhue/activeft"
] | In the paper 'Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs', what Bits per byte score did the Test-Time Fine-Tuning with SIFT + Llama-3.2 (1B) model get on the The Pile dataset
| 0.606 |
YouTube-UGC | ReLaX-VQA | ReLaX-VQA: Residual Fragment and Layer Stack Extraction for Enhancing Video Quality Assessment | 2024-07-16T00:00:00 | https://arxiv.org/abs/2407.11496v1 | [
"https://github.com/xinyiw915/relax-vqa"
] | In the paper 'ReLaX-VQA: Residual Fragment and Layer Stack Extraction for Enhancing Video Quality Assessment', what PLCC score did the ReLaX-VQA model get on the YouTube-UGC dataset
| 0.8204 |
ChEBI-20 | BioT5 | BioT5: Enriching Cross-modal Integration in Biology with Chemical Knowledge and Natural Language Associations | 2023-10-11T00:00:00 | https://arxiv.org/abs/2310.07276v3 | [
"https://github.com/QizhiPei/BioT5"
] | In the paper 'BioT5: Enriching Cross-modal Integration in Biology with Chemical Knowledge and Natural Language Associations', what BLEU-2 score did the BioT5 model get on the ChEBI-20 dataset
| 63.5 |
METR-LA | STAEformer | STAEformer: Spatio-Temporal Adaptive Embedding Makes Vanilla Transformer SOTA for Traffic Forecasting | 2023-08-21T00:00:00 | https://arxiv.org/abs/2308.10425v5 | [
"https://github.com/xdzhelheim/staeformer"
] | In the paper 'STAEformer: Spatio-Temporal Adaptive Embedding Makes Vanilla Transformer SOTA for Traffic Forecasting', what MAE @ 12 step score did the STAEformer model get on the METR-LA dataset
| 3.34 |
COCO-SP | NeuralWalker | Learning Long Range Dependencies on Graphs via Random Walks | 2024-06-05T00:00:00 | https://arxiv.org/abs/2406.03386v2 | [
"https://github.com/borgwardtlab/neuralwalker"
] | In the paper 'Learning Long Range Dependencies on Graphs via Random Walks', what macro F1 score did the NeuralWalker model get on the COCO-SP dataset
| 0.4398 ± 0.0033 |
SMAC corridor_2z_vs_24zg | DMIX | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04T00:00:00 | https://arxiv.org/abs/2306.02430v1 | [
"https://github.com/j3soon/dfac-extended"
] | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Average Score score did the DMIX model get on the SMAC corridor_2z_vs_24zg dataset
| 7.41 |
ActivityNet-QA | MA-LMM | MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding | 2024-04-08T00:00:00 | https://arxiv.org/abs/2404.05726v2 | [
"https://github.com/boheumd/MA-LMM"
] | In the paper 'MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding', what Accuracy score did the MA-LMM model get on the ActivityNet-QA dataset
| 49.8 |
Weather (192) | PRformer | PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting | 2024-08-20T00:00:00 | https://arxiv.org/abs/2408.10483v1 | [
"https://github.com/usualheart/prformer"
] | In the paper 'PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting', what MSE score did the PRformer model get on the Weather (192) dataset
| 0.188 |
ScanNet | AVS-Net | AVS-Net: Point Sampling with Adaptive Voxel Size for 3D Scene Understanding | 2024-02-27T00:00:00 | https://arxiv.org/abs/2402.17521v3 | [
"https://github.com/yhc2021/avs-net"
] | In the paper 'AVS-Net: Point Sampling with Adaptive Voxel Size for 3D Scene Understanding', what val mIoU score did the AVS-Net model get on the ScanNet dataset
| 76.1 |
Human3.6M | DC-GCT(T=1,GT) | Double-chain Constraints for 3D Human Pose Estimation in Images and Videos | 2023-08-10T00:00:00 | https://arxiv.org/abs/2308.05298v1 | [
"https://github.com/KHB1698/DC-GCT"
] | In the paper 'Double-chain Constraints for 3D Human Pose Estimation in Images and Videos', what Average MPJPE (mm) score did the DC-GCT(T=1,GT) model get on the Human3.6M dataset
| 32.4 |
MSVD-Indonesian | X-CLIP (Cross-Lingual) | MSVD-Indonesian: A Benchmark for Multimodal Video-Text Tasks in Indonesian | 2023-06-20T00:00:00 | https://arxiv.org/abs/2306.11341v1 | [
"https://github.com/willyfh/msvd-indonesian"
] | In the paper 'MSVD-Indonesian: A Benchmark for Multimodal Video-Text Tasks in Indonesian', what R@1 score did the X-CLIP (Cross-Lingual) model get on the MSVD-Indonesian dataset
| 32.3 |
Electricity (336) | PRformer | PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting | 2024-08-20T00:00:00 | https://arxiv.org/abs/2408.10483v1 | [
"https://github.com/usualheart/prformer"
] | In the paper 'PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting', what MSE score did the PRformer model get on the Electricity (336) dataset
| 0.161 |
HO-3D v2 | HOISDF | HOISDF: Constraining 3D Hand-Object Pose Estimation with Global Signed Distance Fields | 2024-02-26T00:00:00 | https://arxiv.org/abs/2402.17062v1 | [
"https://github.com/amathislab/hoisdf"
] | In the paper 'HOISDF: Constraining 3D Hand-Object Pose Estimation with Global Signed Distance Fields', what PA-MPJPE (mm) score did the HOISDF model get on the HO-3D v2 dataset
| 9.2 |