| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
| EC-FUNSD | RORE (GeoLayoutLM) | Modeling Layout Reading Order as Ordering Relations for Visually-rich Document Understanding | 2024-09-29T00:00:00 | https://arxiv.org/abs/2409.19672v1 | ["https://github.com/chongzhangFDU/ROOR"] | In the paper 'Modeling Layout Reading Order as Ordering Relations for Visually-rich Document Understanding', what F1 score did the RORE (GeoLayoutLM) model get on the EC-FUNSD dataset | 87.42 |
| MassSpecGym | SMILES Transformer | MassSpecGym: A benchmark for the discovery and identification of molecules | 2024-10-30T00:00:00 | https://arxiv.org/abs/2410.23326v1 | ["https://github.com/pluskal-lab/massspecgym"] | In the paper 'MassSpecGym: A benchmark for the discovery and identification of molecules', what Top-1 Accuracy score did the SMILES Transformer model get on the MassSpecGym dataset | 0.00 |
| BreakHis | WaveMix | Which Backbone to Use: A Resource-efficient Domain Specific Comparison for Computer Vision | 2024-06-09T00:00:00 | https://arxiv.org/abs/2406.05612v2 | ["https://github.com/pranavphoenix/Backbones"] | In the paper 'Which Backbone to Use: A Resource-efficient Domain Specific Comparison for Computer Vision', what Average Test Accuracy over all magnifications score did the WaveMix model get on the BreakHis dataset | 99.39 |
| nuscenes Camera-Radar | HyDRa | Unleashing HyDRa: Hybrid Fusion, Depth Consistency and Radar for Unified 3D Perception | 2024-03-12T00:00:00 | https://arxiv.org/abs/2403.07746v2 | ["https://github.com/phi-wol/hydra"] | In the paper 'Unleashing HyDRa: Hybrid Fusion, Depth Consistency and Radar for Unified 3D Perception', what AMOTA score did the HyDRa model get on the nuscenes Camera-Radar dataset | 0.584 |
| Wikidata5M | KGT5-context + Description | Friendly Neighbors: Contextualized Sequence-to-Sequence Link Prediction | 2023-05-22T00:00:00 | https://arxiv.org/abs/2305.13059v2 | ["https://github.com/uma-pi1/kgt5-context"] | In the paper 'Friendly Neighbors: Contextualized Sequence-to-Sequence Link Prediction', what MRR score did the KGT5-context + Description model get on the Wikidata5M dataset | 0.426 |
| Atari 2600 Demon Attack | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07T00:00:00 | https://arxiv.org/abs/2305.04180v3 | ["https://github.com/xinjinghao/color"] | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score score did the ASL DDQN model get on the Atari 2600 Demon Attack dataset | 119773.9 |
| VLCS | PromptStyler (CLIP, ViT-B/16) | PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization | 2023-07-27T00:00:00 | https://arxiv.org/abs/2307.15199v2 | ["https://github.com/zhanghr2001/promptta"] | In the paper 'PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization', what Average Accuracy score did the PromptStyler (CLIP, ViT-B/16) model get on the VLCS dataset | 82.9 |
| CAT2000 | SUM | SUM: Saliency Unification through Mamba for Visual Attention Modeling | 2024-06-25T00:00:00 | https://arxiv.org/abs/2406.17815v2 | ["https://github.com/Arhosseini77/SUM"] | In the paper 'SUM: Saliency Unification through Mamba for Visual Attention Modeling', what KL score did the SUM model get on the CAT2000 dataset | 0.27 |
| EconLogicQA | Llama-2-13B-Chat | EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning | 2024-05-13T00:00:00 | https://arxiv.org/abs/2405.07938v2 | ["https://github.com/yinzhu-quan/lm-evaluation-harness"] | In the paper 'EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning', what Accuracy score did the Llama-2-13B-Chat model get on the EconLogicQA dataset | 0.1462 |
| RefCOCO testA | MaskRIS (Swin-B) | MaskRIS: Semantic Distortion-aware Data Augmentation for Referring Image Segmentation | 2024-11-28T00:00:00 | https://arxiv.org/abs/2411.19067v1 | ["https://github.com/naver-ai/maskris"] | In the paper 'MaskRIS: Semantic Distortion-aware Data Augmentation for Referring Image Segmentation', what Overall IoU score did the MaskRIS (Swin-B) model get on the RefCOCO testA dataset | 78.96 |
| ETTh2 (96) Multivariate | RLinear | Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping | 2023-05-18T00:00:00 | https://arxiv.org/abs/2305.10721v1 | ["https://github.com/plumprc/rtsf"] | In the paper 'Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping', what MSE score did the RLinear model get on the ETTh2 (96) Multivariate dataset | 0.262 |
| Weather (192) | RLinear | Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping | 2023-05-18T00:00:00 | https://arxiv.org/abs/2305.10721v1 | ["https://github.com/plumprc/rtsf"] | In the paper 'Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping', what MSE score did the RLinear model get on the Weather (192) dataset | 0.218 |
| WHU Building Dataset | SGSLN/512 | Exchanging Dual Encoder-Decoder: A New Strategy for Change Detection with Semantic Guidance and Spatial Localization | 2023-11-19T00:00:00 | https://arxiv.org/abs/2311.11302v1 | ["https://github.com/walking-shadow/Semantic-guidance-and-spatial-localization-network"] | In the paper 'Exchanging Dual Encoder-Decoder: A New Strategy for Change Detection with Semantic Guidance and Spatial Localization', what F1-score score did the SGSLN/512 model get on the WHU Building Dataset dataset | 0.9486 |
| NC4K | BiRefNet | Bilateral Reference for High-Resolution Dichotomous Image Segmentation | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03407v6 | ["https://github.com/zhengpeng7/birefnet"] | In the paper 'Bilateral Reference for High-Resolution Dichotomous Image Segmentation', what S-measure score did the BiRefNet model get on the NC4K dataset | 0.914 |
| ADE20K | FC-CLIP | Convolutions Die Hard: Open-Vocabulary Segmentation with Single Frozen Convolutional CLIP | 2023-08-04T00:00:00 | https://arxiv.org/abs/2308.02487v2 | ["https://github.com/bytedance/fc-clip"] | In the paper 'Convolutions Die Hard: Open-Vocabulary Segmentation with Single Frozen Convolutional CLIP', what PQ score did the FC-CLIP model get on the ADE20K dataset | 26.8 |
| Synapse multi-organ CT | MIST | MIST: Medical Image Segmentation Transformer with Convolutional Attention Mixing (CAM) Decoder | 2023-10-30T00:00:00 | https://arxiv.org/abs/2310.19898v1 | ["https://github.com/rahman-motiur/mist"] | In the paper 'MIST: Medical Image Segmentation Transformer with Convolutional Attention Mixing (CAM) Decoder', what Avg DSC score did the MIST model get on the Synapse multi-organ CT dataset | 86.92 |
| KIT Motion-Language | MMM (gt length) | MMM: Generative Masked Motion Model | 2023-12-06T00:00:00 | https://arxiv.org/abs/2312.03596v2 | ["https://github.com/exitudio/MMM"] | In the paper 'MMM: Generative Masked Motion Model', what FID score did the MMM (gt length) model get on the KIT Motion-Language dataset | 0.316 |
| MLO-Cn2 | GBRT | Effective Benchmarks for Optical Turbulence Modeling | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03573v1 | ["https://github.com/cdjellen/otbench"] | In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the GBRT model get on the MLO-Cn2 dataset | 0.428 |
| ImageNet-1k vs Curated OODs (avg.) | SCALE (ResNet50) | Scaling for Training Time and Post-hoc Out-of-distribution Detection Enhancement | 2023-09-30T00:00:00 | https://arxiv.org/abs/2310.00227v1 | ["https://github.com/kai422/scale"] | In the paper 'Scaling for Training Time and Post-hoc Out-of-distribution Detection Enhancement', what AUROC score did the SCALE (ResNet50) model get on the ImageNet-1k vs Curated OODs (avg.) dataset | 95.71 |
| Cityscapes | TTD (MaskCLIP) | TTD: Text-Tag Self-Distillation Enhancing Image-Text Alignment in CLIP to Alleviate Single Tag Bias | 2024-03-30T00:00:00 | https://arxiv.org/abs/2404.00384v2 | ["https://github.com/shjo-april/TTD"] | In the paper 'TTD: Text-Tag Self-Distillation Enhancing Image-Text Alignment in CLIP to Alleviate Single Tag Bias', what mIoU score did the TTD (MaskCLIP) model get on the Cityscapes dataset | 27.0 |
| Cora with Public Split: fixed 20 nodes per class | OGC | From Cluster Assumption to Graph Convolution: Graph-based Semi-Supervised Learning Revisited | 2023-09-24T00:00:00 | https://arxiv.org/abs/2309.13599v2 | ["https://github.com/zhengwang100/ogc_ggcm"] | In the paper 'From Cluster Assumption to Graph Convolution: Graph-based Semi-Supervised Learning Revisited', what Accuracy score did the OGC model get on the Cora with Public Split: fixed 20 nodes per class dataset | 86.9% |
| MOSE | Cutie (base) | Putting the Object Back into Video Object Segmentation | 2023-10-19T00:00:00 | https://arxiv.org/abs/2310.12982v2 | ["https://github.com/hkchengrex/Cutie"] | In the paper 'Putting the Object Back into Video Object Segmentation', what J&F score did the Cutie (base) model get on the MOSE dataset | 64.0 |
| CHASE_DB1 | PVT-GCASCADE | G-CASCADE: Efficient Cascaded Graph Convolutional Decoding for 2D Medical Image Segmentation | 2023-10-24T00:00:00 | https://arxiv.org/abs/2310.16175v1 | ["https://github.com/SLDGroup/G-CASCADE"] | In the paper 'G-CASCADE: Efficient Cascaded Graph Convolutional Decoding for 2D Medical Image Segmentation', what DSC score did the PVT-GCASCADE model get on the CHASE_DB1 dataset | 0.8251 |
| VideoInstruct | VTimeLLM | VTimeLLM: Empower LLM to Grasp Video Moments | 2023-11-30T00:00:00 | https://arxiv.org/abs/2311.18445v1 | ["https://github.com/huangb23/vtimellm"] | In the paper 'VTimeLLM: Empower LLM to Grasp Video Moments', what mean score did the VTimeLLM model get on the VideoInstruct dataset | 2.17 |
| InsPLAD | AttentDifferNet (SENet-AlexNet) | Attention Modules Improve Image-Level Anomaly Detection for Industrial Inspection: A DifferNet Case Study | 2023-11-05T00:00:00 | https://arxiv.org/abs/2311.02747v2 | ["https://github.com/andreluizbvs/insplad"] | In the paper 'Attention Modules Improve Image-Level Anomaly Detection for Industrial Inspection: A DifferNet Case Study', what Detection AUROC score did the AttentDifferNet (SENet-AlexNet) model get on the InsPLAD dataset | 94.34 |
| SARDet-100K | MSFA (F-RCNN+R50) | SARDet-100K: Towards Open-Source Benchmark and ToolKit for Large-Scale SAR Object Detection | 2024-03-11T00:00:00 | https://arxiv.org/abs/2403.06534v2 | ["https://github.com/zcablii/sardet_100k"] | In the paper 'SARDet-100K: Towards Open-Source Benchmark and ToolKit for Large-Scale SAR Object Detection', what box mAP score did the MSFA (F-RCNN+R50) model get on the SARDet-100K dataset | 51.1 |
| APPS | MoTCoder-15b | MoTCoder: Elevating Large Language Models with Modular of Thought for Challenging Programming Tasks | 2023-12-26T00:00:00 | https://arxiv.org/abs/2312.15960v3 | ["https://github.com/dvlab-research/motcoder"] | In the paper 'MoTCoder: Elevating Large Language Models with Modular of Thought for Challenging Programming Tasks', what Introductory Pass@1 score did the MoTCoder-15b model get on the APPS dataset | 33.80 |
| Mapillary val | BoQ | BoQ: A Place is Worth a Bag of Learnable Queries | 2024-05-12T00:00:00 | https://arxiv.org/abs/2405.07364v3 | ["https://github.com/amaralibey/bag-of-queries"] | In the paper 'BoQ: A Place is Worth a Bag of Learnable Queries', what Recall@1 score did the BoQ model get on the Mapillary val dataset | 93.8 |
| Wikidata5M | KGT5-context | Friendly Neighbors: Contextualized Sequence-to-Sequence Link Prediction | 2023-05-22T00:00:00 | https://arxiv.org/abs/2305.13059v2 | ["https://github.com/uma-pi1/kgt5-context"] | In the paper 'Friendly Neighbors: Contextualized Sequence-to-Sequence Link Prediction', what MRR score did the KGT5-context model get on the Wikidata5M dataset | 0.378 |
| VoxCeleb1 | ReDimNet-B5-SF2-LM-ASNorm (9.2M) | Reshape Dimensions Network for Speaker Recognition | 2024-07-25T00:00:00 | https://arxiv.org/abs/2407.18223v2 | ["https://github.com/IDRnD/ReDimNet"] | In the paper 'Reshape Dimensions Network for Speaker Recognition', what EER score did the ReDimNet-B5-SF2-LM-ASNorm (9.2M) model get on the VoxCeleb1 dataset | 0.39 |
| Slakh2100 | MT3 (colab) | YourMT3+: Multi-instrument Music Transcription with Enhanced Transformer Architectures and Cross-dataset Stem Augmentation | 2024-07-05T00:00:00 | https://arxiv.org/abs/2407.04822v3 | ["https://github.com/mimbres/yourmt3"] | In the paper 'YourMT3+: Multi-instrument Music Transcription with Enhanced Transformer Architectures and Cross-dataset Stem Augmentation', what note-level F-measure-no-offset (Fno) score did the MT3 (colab) model get on the Slakh2100 dataset | 0.752 |
| MedQA | Med-PaLM 2 | Towards Expert-Level Medical Question Answering with Large Language Models | 2023-05-16T00:00:00 | https://arxiv.org/abs/2305.09617v1 | ["https://github.com/m42-health/med42"] | In the paper 'Towards Expert-Level Medical Question Answering with Large Language Models', what Accuracy score did the Med-PaLM 2 model get on the MedQA dataset | 85.4 |
| KITTI (Distant PCR) | Predator+APR(a) | APR: Online Distant Point Cloud Registration Through Aggregated Point Cloud Reconstruction | 2023-05-04T00:00:00 | https://arxiv.org/abs/2305.02893v2 | ["https://github.com/liuquan98/apr"] | In the paper 'APR: Online Distant Point Cloud Registration Through Aggregated Point Cloud Reconstruction', what mRR @ Normal Criterion (1.5°&0.3m) score did the Predator+APR(a) model get on the KITTI (Distant PCR) dataset | 78.2 |
| Geo-Tagged NUS-WIDE (GPS + Visual) | GeoCLIP | GeoCLIP: Clip-Inspired Alignment between Locations and Images for Effective Worldwide Geo-localization | 2023-09-27T00:00:00 | https://arxiv.org/abs/2309.16020v2 | ["https://github.com/VicenteVivan/geo-clip"] | In the paper 'GeoCLIP: Clip-Inspired Alignment between Locations and Images for Effective Worldwide Geo-localization', what mAP score did the GeoCLIP model get on the Geo-Tagged NUS-WIDE (GPS + Visual) dataset | 0.362 |
| ETTh1 (336) Multivariate | PRformer | PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting | 2024-08-20T00:00:00 | https://arxiv.org/abs/2408.10483v1 | ["https://github.com/usualheart/prformer"] | In the paper 'PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting', what MSE score did the PRformer model get on the ETTh1 (336) Multivariate dataset | 0.427 |
| CHILI-100K | GraphUNet | CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning | 2024-02-20T00:00:00 | https://arxiv.org/abs/2402.13221v2 | ["https://github.com/UlrikFriisJensen/CHILI"] | In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what MSE score did the GraphUNet model get on the CHILI-100K dataset | 0.085 +/- 0.002 |
| Elliptic Dataset | GIN | Network Analytics for Anti-Money Laundering -- A Systematic Literature Review and Experimental Evaluation | 2024-05-29T00:00:00 | https://arxiv.org/abs/2405.19383v2 | ["https://github.com/B-Deprez/AML_Network"] | In the paper 'Network Analytics for Anti-Money Laundering -- A Systematic Literature Review and Experimental Evaluation', what AUPRC score did the GIN model get on the Elliptic Dataset dataset | 0.5517 |
| Meta-Dataset | SMAT (DINO-VIT-Base-16-224) | Unleashing the Power of Meta-tuning for Few-shot Generalization Through Sparse Interpolated Experts | 2024-03-13T00:00:00 | https://arxiv.org/abs/2403.08477v3 | ["https://github.com/szc12153/sparse_meta_tuning"] | In the paper 'Unleashing the Power of Meta-tuning for Few-shot Generalization Through Sparse Interpolated Experts', what Accuracy score did the SMAT (DINO-VIT-Base-16-224) model get on the Meta-Dataset dataset | 85.27 |
| COCO-Stuff Labels-to-Photos | USIS-Wavelet | Wavelet-based Unsupervised Label-to-Image Translation | 2023-05-16T00:00:00 | https://arxiv.org/abs/2305.09647v1 | ["https://github.com/GeorgeEskandar/USIS-Unsupervised-Semantic-Image-Synthesis"] | In the paper 'Wavelet-based Unsupervised Label-to-Image Translation', what mIoU score did the USIS-Wavelet model get on the COCO-Stuff Labels-to-Photos dataset | 13.4 |
| LaSOT | ODTrack-B | ODTrack: Online Dense Temporal Token Learning for Visual Tracking | 2024-01-03T00:00:00 | https://arxiv.org/abs/2401.01686v1 | ["https://github.com/gxnu-zhonglab/odtrack"] | In the paper 'ODTrack: Online Dense Temporal Token Learning for Visual Tracking', what AUC score did the ODTrack-B model get on the LaSOT dataset | 73.2 |
| BTAD | ReConPatch WRN-50 | ReConPatch : Contrastive Patch Representation Learning for Industrial Anomaly Detection | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.16713v3 | ["https://github.com/travishsu/ReConPatch-TF"] | In the paper 'ReConPatch : Contrastive Patch Representation Learning for Industrial Anomaly Detection', what Detection AUROC score did the ReConPatch WRN-50 model get on the BTAD dataset | 95.8 |
| Marmoset-8K | BUCTD-CoAM-W48 (DLCRNet) | Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity | 2023-06-13T00:00:00 | https://arxiv.org/abs/2306.07879v2 | ["https://github.com/amathislab/BUCTD"] | In the paper 'Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity', what mAP score did the BUCTD-CoAM-W48 (DLCRNet) model get on the Marmoset-8K dataset | 91.6 |
| MATH | OpenMath-CodeLlama-7B (w/ code) | OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | 2024-02-15T00:00:00 | https://arxiv.org/abs/2402.10176v2 | ["https://github.com/kipok/nemo-skills"] | In the paper 'OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset', what Accuracy score did the OpenMath-CodeLlama-7B (w/ code) model get on the MATH dataset | 43.6 |
| S3DIS Area5 | KPConvX-L | KPConvX: Modernizing Kernel Point Convolution with Kernel Attention | 2024-05-21T00:00:00 | https://arxiv.org/abs/2405.13194v1 | ["https://github.com/apple/ml-kpconvx"] | In the paper 'KPConvX: Modernizing Kernel Point Convolution with Kernel Attention', what mIoU score did the KPConvX-L model get on the S3DIS Area5 dataset | 73.5 |
| ImageNet-A | Discrete Adversarial Distillation (ViT-B/224) | Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models | 2023-11-02T00:00:00 | https://arxiv.org/abs/2311.01441v2 | ["https://github.com/lapisrocks/DiscreteAdversarialDistillation"] | In the paper 'Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models', what Top-1 accuracy % score did the Discrete Adversarial Distillation (ViT-B/224) model get on the ImageNet-A dataset | 31.8 |
| STL-10 | ResNet50 | Guarding Barlow Twins Against Overfitting with Mixed Samples | 2023-12-04T00:00:00 | https://arxiv.org/abs/2312.02151v1 | ["https://github.com/wgcban/mix-bt"] | In the paper 'Guarding Barlow Twins Against Overfitting with Mixed Samples', what Accuracy score did the ResNet50 model get on the STL-10 dataset | 91.70 |
| virtual KITTI to KITTI (MDE) | CoReg | Consistency Regularisation for Unsupervised Domain Adaptation in Monocular Depth Estimation | 2024-05-27T00:00:00 | https://arxiv.org/abs/2405.17704v1 | ["https://github.com/amirmael/semisupmde"] | In the paper 'Consistency Regularisation for Unsupervised Domain Adaptation in Monocular Depth Estimation', what RMSE score did the CoReg model get on the virtual KITTI to KITTI (MDE) dataset | 4.449 |
| Atari 2600 Seaquest | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07T00:00:00 | https://arxiv.org/abs/2305.04180v3 | ["https://github.com/xinjinghao/color"] | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score score did the ASL DDQN model get on the Atari 2600 Seaquest dataset | 29278.6 |
| MassSpecGym | DeepSets | MassSpecGym: A benchmark for the discovery and identification of molecules | 2024-10-30T00:00:00 | https://arxiv.org/abs/2410.23326v1 | ["https://github.com/pluskal-lab/massspecgym"] | In the paper 'MassSpecGym: A benchmark for the discovery and identification of molecules', what Hit rate @ 1 score did the DeepSets model get on the MassSpecGym dataset | 1.47 |
| NYCTaxi | ADCSD | Online Test-Time Adaptation of Spatial-Temporal Traffic Flow Forecasting | 2024-01-08T00:00:00 | https://arxiv.org/abs/2401.04148v1 | ["https://github.com/pengxin-guo/adcsd"] | In the paper 'Online Test-Time Adaptation of Spatial-Temporal Traffic Flow Forecasting', what MAE @ in score did the ADCSD model get on the NYCTaxi dataset | 16.987 |
| YouTube Highlights | SG-DETR (w/ PT) | Saliency-Guided DETR for Moment Retrieval and Highlight Detection | 2024-10-02T00:00:00 | https://arxiv.org/abs/2410.01615v1 | ["https://github.com/ai-forever/sg-detr"] | In the paper 'Saliency-Guided DETR for Moment Retrieval and Highlight Detection', what mAP score did the SG-DETR (w/ PT) model get on the YouTube Highlights dataset | 78.0 |
| UA-GEC | RedPenNet | RedPenNet for Grammatical Error Correction: Outputs to Tokens, Attentions to Spans | 2023-09-19T00:00:00 | https://arxiv.org/abs/2309.10898v1 | ["https://github.com/webspellchecker/unlp-2023-shared-task"] | In the paper 'RedPenNet for Grammatical Error Correction: Outputs to Tokens, Attentions to Spans', what F0.5 score did the RedPenNet model get on the UA-GEC dataset | 67.71 |
| NExT-QA (Open-ended VideoQA) | MovieChat | MovieChat: From Dense Token to Sparse Memory for Long Video Understanding | 2023-07-31T00:00:00 | https://arxiv.org/abs/2307.16449v4 | ["https://github.com/rese1f/MovieChat"] | In the paper 'MovieChat: From Dense Token to Sparse Memory for Long Video Understanding', what Accuracy score did the MovieChat model get on the NExT-QA (Open-ended VideoQA) dataset | 49.9 |
| RetVQA | MI-BART | Answer Mining from a Pool of Images: Towards Retrieval-Based Visual Question Answering | 2023-06-29T00:00:00 | https://arxiv.org/abs/2306.16713v1 | ["https://github.com/Abhiram4572/mi_bart"] | In the paper 'Answer Mining from a Pool of Images: Towards Retrieval-Based Visual Question Answering', what Accuarcy score did the MI-BART model get on the RetVQA dataset | 76.5 |
| ITOP front-view | SPiKE | SPiKE: 3D Human Pose from Point Cloud Sequences | 2024-09-03T00:00:00 | https://arxiv.org/abs/2409.01879v1 | ["https://github.com/iballester/SPiKE"] | In the paper 'SPiKE: 3D Human Pose from Point Cloud Sequences', what Mean mAP score did the SPiKE model get on the ITOP front-view dataset | 89.19 |
| ASTE | ChatGPT (gpt-3.5-turbo, few-shot) | MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction | 2023-05-22T00:00:00 | https://arxiv.org/abs/2305.12627v1 | ["https://github.com/ZubinGou/multi-view-prompting"] | In the paper 'MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction', what F1 (L14) score did the ChatGPT (gpt-3.5-turbo, few-shot) model get on the ASTE dataset | 38.12 |
| FineDance | Lodge (DDIM) | Lodge: A Coarse to Fine Diffusion Network for Long Dance Generation Guided by the Characteristic Dance Primitives | 2024-03-15T00:00:00 | https://arxiv.org/abs/2403.10518v3 | ["https://github.com/li-ronghui/LODGE"] | In the paper 'Lodge: A Coarse to Fine Diffusion Network for Long Dance Generation Guided by the Characteristic Dance Primitives', what fid_k score did the Lodge (DDIM) model get on the FineDance dataset | 50.00 |
| MLO-Cn2 | Climatology | Effective Benchmarks for Optical Turbulence Modeling | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03573v1 | ["https://github.com/cdjellen/otbench"] | In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the Climatology model get on the MLO-Cn2 dataset | 0.661 |
| ETTh1 (336) Multivariate | MOIRAIBase | Unified Training of Universal Time Series Forecasting Transformers | 2024-02-04T00:00:00 | https://arxiv.org/abs/2402.02592v2 | ["https://github.com/SalesforceAIResearch/uni2ts"] | In the paper 'Unified Training of Universal Time Series Forecasting Transformers', what MSE score did the MOIRAIBase model get on the ETTh1 (336) Multivariate dataset | 0.456 |
| PubMedQA | Med-PaLM 2 (5-shot) | Towards Expert-Level Medical Question Answering with Large Language Models | 2023-05-16T00:00:00 | https://arxiv.org/abs/2305.09617v1 | ["https://github.com/m42-health/med42"] | In the paper 'Towards Expert-Level Medical Question Answering with Large Language Models', what Accuracy score did the Med-PaLM 2 (5-shot) model get on the PubMedQA dataset | 79.2 |
| HICO-DET | HOIGen | Unseen No More: Unlocking the Potential of CLIP for Generative Zero-shot HOI Detection | 2024-08-12T00:00:00 | https://arxiv.org/abs/2408.05974v1 | ["https://github.com/soberguo/hoigen"] | In the paper 'Unseen No More: Unlocking the Potential of CLIP for Generative Zero-shot HOI Detection', what mAP score did the HOIGen model get on the HICO-DET dataset | 34.84 |
| DuoRC | Vector Database (ChromaDB) | RecallM: An Adaptable Memory Mechanism with Temporal Understanding for Large Language Models | 2023-07-06T00:00:00 | https://arxiv.org/abs/2307.02738v3 | ["https://github.com/cisco-open/DeepVision/tree/main/recallm"] | In the paper 'RecallM: An Adaptable Memory Mechanism with Temporal Understanding for Large Language Models', what Accuracy score did the Vector Database (ChromaDB) model get on the DuoRC dataset | 55.71 |
| TrackingNet | LoRAT-L-378 | Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance | 2024-03-08T00:00:00 | https://arxiv.org/abs/2403.05231v2 | ["https://github.com/litinglin/lorat"] | In the paper 'Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance', what Precision score did the LoRAT-L-378 model get on the TrackingNet dataset | 85.4 |
| FLoRes-200 | GenTranslate-7B | GenTranslate: Large Language Models are Generative Multilingual Speech and Machine Translators | 2024-02-10T00:00:00 | https://arxiv.org/abs/2402.06894v2 | ["https://github.com/yuchen005/gentranslate"] | In the paper 'GenTranslate: Large Language Models are Generative Multilingual Speech and Machine Translators', what BLEU score did the GenTranslate-7B model get on the FLoRes-200 dataset | 38.5 |
| VoxCeleb | ReDimNet-B4-LM-ASNorm (6.3M) | Reshape Dimensions Network for Speaker Recognition | 2024-07-25T00:00:00 | https://arxiv.org/abs/2407.18223v2 | ["https://github.com/IDRnD/ReDimNet"] | In the paper 'Reshape Dimensions Network for Speaker Recognition', what EER score did the ReDimNet-B4-LM-ASNorm (6.3M) model get on the VoxCeleb dataset | 0.44 |
| Wiki-CS | CGT | Mitigating Degree Biases in Message Passing Mechanism by Utilizing Community Structures | 2023-12-28T00:00:00 | https://arxiv.org/abs/2312.16788v1 | ["https://github.com/nslab-cuk/community-aware-graph-transformer"] | In the paper 'Mitigating Degree Biases in Message Passing Mechanism by Utilizing Community Structures', what Accuracy score did the CGT model get on the Wiki-CS dataset | 84.61±0.53 |
| VNHSGE-History | ChatGPT | VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models | 2023-05-20T00:00:00 | https://arxiv.org/abs/2305.12199v1 | ["https://github.com/xdao85/vnhsge"] | In the paper 'VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models', what Accuracy score did the ChatGPT model get on the VNHSGE-History dataset | 56.5 |
| MVTec AD | GLAD | GLAD: Towards Better Reconstruction with Global and Local Adaptive Diffusion Models for Unsupervised Anomaly Detection | 2024-06-11T00:00:00 | https://arxiv.org/abs/2406.07487v3 | ["https://github.com/hyao1/glad"] | In the paper 'GLAD: Towards Better Reconstruction with Global and Local Adaptive Diffusion Models for Unsupervised Anomaly Detection', what Detection AUROC score did the GLAD model get on the MVTec AD dataset | 99.3 |
| SRD | ShadowMaskFormer (arXiv 2024) (512x512) | ShadowMaskFormer: Mask Augmented Patch Embeddings for Shadow Removal | 2024-04-29T00:00:00 | https://arxiv.org/abs/2404.18433v2 | ["https://github.com/lizhh268/shadowmaskformer"] | In the paper 'ShadowMaskFormer: Mask Augmented Patch Embeddings for Shadow Removal', what RMSE score did the ShadowMaskFormer (arXiv 2024) (512x512) model get on the SRD dataset | 4.15 |
| COCO 10% labeled data | Guided Distillation (ResNet50) | Guided Distillation for Semi-Supervised Instance Segmentation | 2023-08-03T00:00:00 | https://arxiv.org/abs/2308.02668v2 | ["https://github.com/facebookresearch/guideddistillation"] | In the paper 'Guided Distillation for Semi-Supervised Instance Segmentation', what mask AP score did the Guided Distillation (ResNet50) model get on the COCO 10% labeled data dataset | 35.0 |
| FIRE | LKRetina | Reverse Knowledge Distillation: Training a Large Model using a Small One for Retinal Image Matching on Limited Data | 2023-07-20T00:00:00 | https://arxiv.org/abs/2307.10698v2 | ["https://github.com/SaharAlmahfouzNasser/MeDAL-Retina"] | In the paper 'Reverse Knowledge Distillation: Training a Large Model using a Small One for Retinal Image Matching on Limited Data', what mAUC score did the LKRetina model get on the FIRE dataset | 0.761 |
| ImageNet-1k vs OpenImage-O | NAC-UE (ResNet-50) | Neuron Activation Coverage: Rethinking Out-of-distribution Detection and Generalization | 2023-06-05T00:00:00 | https://arxiv.org/abs/2306.02879v3 | ["https://github.com/bierone/ood_coverage"] | In the paper 'Neuron Activation Coverage: Rethinking Out-of-distribution Detection and Generalization', what AUROC score did the NAC-UE (ResNet-50) model get on the ImageNet-1k vs OpenImage-O dataset | 91.45 |
| Caltech-101 | HPT | Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06323v1 | ["https://github.com/vill-lab/2024-aaai-hpt"] | In the paper 'Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models', what Harmonic mean score did the HPT model get on the Caltech-101 dataset | 96.65 |
| PubLayNet val | ResNext-101-32×8d | Vision Grid Transformer for Document Layout Analysis | 2023-08-29T00:00:00 | https://arxiv.org/abs/2308.14978v1 | ["https://github.com/alibabaresearch/advancedliteratemachinery"] | In the paper 'Vision Grid Transformer for Document Layout Analysis', what Text score did the ResNext-101-32×8d model get on the PubLayNet val dataset | 0.930 |
| Market-1501 | CA-Jaccard | CA-Jaccard: Camera-aware Jaccard Distance for Person Re-identification | 2023-11-17T00:00:00 | https://arxiv.org/abs/2311.10605v2 | ["https://github.com/chen960/ca-jaccard"] | In the paper 'CA-Jaccard: Camera-aware Jaccard Distance for Person Re-identification', what Rank-1 score did the CA-Jaccard model get on the Market-1501 dataset | 96.2 |
| HotpotQA | Chain-of-Skills | Chain-of-Skills: A Configurable Model for Open-domain Question Answering | 2023-05-04T00:00:00 | https://arxiv.org/abs/2305.03130v2 | ["https://github.com/mayer123/udt-qa"] | In the paper 'Chain-of-Skills: A Configurable Model for Open-domain Question Answering', what ANS-EM score did the Chain-of-Skills model get on the HotpotQA dataset | 0.674 |
| CoNLL-2014 Shared Task | Ensembles of best 7 models + GRECO + GTP-rerank | Pillars of Grammatical Error Correction: Comprehensive Inspection Of Contemporary Approaches In The Era of Large Language Models | 2024-04-23T00:00:00 | https://arxiv.org/abs/2404.14914v1 | ["https://github.com/grammarly/pillars-of-gec"] | In the paper 'Pillars of Grammatical Error Correction: Comprehensive Inspection Of Contemporary Approaches In The Era of Large Language Models', what F0.5 score did the Ensembles of best 7 models + GRECO + GTP-rerank model get on the CoNLL-2014 Shared Task dataset | 72.8 |
| USNA-Cn2 (short-duration) | Persistence | Effective Benchmarks for Optical Turbulence Modeling | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03573v1 | ["https://github.com/cdjellen/otbench"] | In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the Persistence model get on the USNA-Cn2 (short-duration) dataset | 0.758 |
MVTec AD | Dinomaly ViT-B (model-unified multi-class) | Dinomaly: The Less Is More Philosophy in Multi-Class Unsupervised Anomaly Detection | 2024-05-23T00:00:00 | https://arxiv.org/abs/2405.14325v4 | [
"https://github.com/guojiajeremy/dinomaly"
] | In the paper 'Dinomaly: The Less Is More Philosophy in Multi-Class Unsupervised Anomaly Detection', what Detection AUROC score did the Dinomaly ViT-B (model-unified multi-class) model get on the MVTec AD dataset
| 99.60 |
SMAC 3s5z_vs_3s6z | QPLEX | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04T00:00:00 | https://arxiv.org/abs/2306.02430v1 | [
"https://github.com/j3soon/dfac-extended"
] | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the QPLEX model get on the SMAC 3s5z_vs_3s6z dataset
| 84.38 |
USNA-Cn2 (short-duration) | Linear Forecast | Effective Benchmarks for Optical Turbulence Modeling | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03573v1 | [
"https://github.com/cdjellen/otbench"
] | In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the Linear Forecast model get on the USNA-Cn2 (short-duration) dataset
| 0.358 |
BTAD | CPR | Target before Shooting: Accurate Anomaly Detection and Localization under One Millisecond via Cascade Patch Retrieval | 2023-08-13T00:00:00 | https://arxiv.org/abs/2308.06748v1 | [
"https://github.com/flyinghu123/cpr"
] | In the paper 'Target before Shooting: Accurate Anomaly Detection and Localization under One Millisecond via Cascade Patch Retrieval', what Segmentation AUROC score did the CPR model get on the BTAD dataset
| 98.4 |
ScanNet | ODIN | ODIN: A Single Model for 2D and 3D Segmentation | 2024-01-04T00:00:00 | https://arxiv.org/abs/2401.02416v3 | [
"https://github.com/ayushjain1144/odin"
] | In the paper 'ODIN: A Single Model for 2D and 3D Segmentation', what test mIoU score did the ODIN model get on the ScanNet dataset
| 74.4 |
HO-3D v2 | WiLoR | WiLoR: End-to-end 3D Hand Localization and Reconstruction in-the-wild | 2024-09-18T00:00:00 | https://arxiv.org/abs/2409.12259v1 | [
"https://github.com/rolpotamias/WiLoR"
] | In the paper 'WiLoR: End-to-end 3D Hand Localization and Reconstruction in-the-wild', what PA-MPJPE (mm) score did the WiLoR model get on the HO-3D v2 dataset
| 7.5 |
LibriTTS | EVA-GAN-base | EVA-GAN: Enhanced Various Audio Generation via Scalable Generative Adversarial Networks | 2024-01-31T00:00:00 | https://arxiv.org/abs/2402.00892v1 | [
"https://github.com/fishaudio/vocoder"
] | In the paper 'EVA-GAN: Enhanced Various Audio Generation via Scalable Generative Adversarial Networks', what PESQ score did the EVA-GAN-base model get on the LibriTTS dataset
| 4.0330 |
CIFAR-10 | ZLaP | Label Propagation for Zero-shot Classification with Vision-Language Models | 2024-04-05T00:00:00 | https://arxiv.org/abs/2404.04072v1 | [
"https://github.com/vladan-stojnic/zlap"
] | In the paper 'Label Propagation for Zero-shot Classification with Vision-Language Models', what Accuracy score did the ZLaP model get on the CIFAR-10 dataset
| 93.4 |
CAMO | ZoomNeXt-PVTv2-B5 | ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection | 2023-10-31T00:00:00 | https://arxiv.org/abs/2310.20208v4 | [
"https://github.com/lartpang/zoomnext"
] | In the paper 'ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection', what MAE score did the ZoomNeXt-PVTv2-B5 model get on the CAMO dataset
| 0.041 |
BJTaxi | SimVP+SVQ (Learnable) | SVQ: Sparse Vector Quantization for Spatiotemporal Forecasting | 2023-12-06T00:00:00 | https://arxiv.org/abs/2312.03406v3 | [
"https://github.com/Pachark/SVQ-Forecasting"
] | In the paper 'SVQ: Sparse Vector Quantization for Spatiotemporal Forecasting', what MAE @ in score did the SimVP+SVQ (Learnable) model get on the BJTaxi dataset
| 14.64 |
ShanghaiTech Campus | TSGAD | An Exploratory Study on Human-Centric Video Anomaly Detection through Variational Autoencoders and Trajectory Prediction | 2024-04-29T00:00:00 | https://arxiv.org/abs/2406.15395v1 | [
"https://github.com/tecsar-uncc/tsgad"
] | In the paper 'An Exploratory Study on Human-Centric Video Anomaly Detection through Variational Autoencoders and Trajectory Prediction', what AUC-ROC score did the TSGAD model get on the ShanghaiTech Campus dataset
| 80.67 |
PubMed with Public Split: fixed 20 nodes per class | GCN | Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification | 2024-06-13T00:00:00 | https://arxiv.org/abs/2406.08993v2 | [
"https://github.com/LUOyk1999/tunedGNN"
] | In the paper 'Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification', what Accuracy score did the GCN model get on the PubMed with Public Split: fixed 20 nodes per class dataset
| 81.12 ± 0.52 |
ScanNet | PPT + SparseUNet | Towards Large-scale 3D Representation Learning with Multi-dataset Point Prompt Training | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09718v2 | [
"https://github.com/Pointcept/Pointcept"
] | In the paper 'Towards Large-scale 3D Representation Learning with Multi-dataset Point Prompt Training', what test mIoU score did the PPT + SparseUNet model get on the ScanNet dataset
| 76.6 |
Hainsworth | Beat This! | Beat this! Accurate beat tracking without DBN postprocessing | 2024-07-31T00:00:00 | https://arxiv.org/abs/2407.21658v1 | [
"https://github.com/CPJKU/beat_this"
] | In the paper 'Beat this! Accurate beat tracking without DBN postprocessing', what F1 score did the Beat This! model get on the Hainsworth dataset
| 80.0 |
EgoTaskQA | GF(sup) | Glance and Focus: Memory Prompting for Multi-Event Video Question Answering | 2024-01-03T00:00:00 | https://arxiv.org/abs/2401.01529v1 | [
"https://github.com/byz0e/glance-focus"
] | In the paper 'Glance and Focus: Memory Prompting for Multi-Event Video Question Answering', what Direct score did the GF(sup) model get on the EgoTaskQA dataset
| 44.27 |
SAFIM | deepseek-coder-6.7b-base | Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks | 2024-03-07T00:00:00 | https://arxiv.org/abs/2403.04814v3 | [
"https://github.com/gonglinyuan/safim"
] | In the paper 'Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks', what Algorithmic score did the deepseek-coder-6.7b-base model get on the SAFIM dataset
| 54.74 |
ETTm1 (720) Multivariate | MoLE-DLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06786v3 | [
"https://github.com/rogerni/mole"
] | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the ETTm1 (720) Multivariate dataset
| 0.447 |
ImageNet | WTTM (T:resnet50, S:mobilenet-v1) | Knowledge Distillation Based on Transformed Teacher Matching | 2024-02-17T00:00:00 | https://arxiv.org/abs/2402.11148v2 | [
"https://github.com/zkxufo/TTM"
] | In the paper 'Knowledge Distillation Based on Transformed Teacher Matching', what Top-1 accuracy % score did the WTTM (T:resnet50, S:mobilenet-v1) model get on the ImageNet dataset
| 73.09 |
MM-Vet | LLaVA-1.5-7B (DC-S) | ProVision: Programmatically Scaling Vision-centric Instruction Data for Multimodal Language Models | 2024-12-09T00:00:00 | https://arxiv.org/abs/2412.07012v2 | [
"https://github.com/jieyuz2/provision"
] | In the paper 'ProVision: Programmatically Scaling Vision-centric Instruction Data for Multimodal Language Models', what GPT-4 score score did the LLaVA-1.5-7B (DC-S) model get on the MM-Vet dataset
| 38.5 |
DRIVE | PVT-GCASCADE | G-CASCADE: Efficient Cascaded Graph Convolutional Decoding for 2D Medical Image Segmentation | 2023-10-24T00:00:00 | https://arxiv.org/abs/2310.16175v1 | [
"https://github.com/SLDGroup/G-CASCADE"
] | In the paper 'G-CASCADE: Efficient Cascaded Graph Convolutional Decoding for 2D Medical Image Segmentation', what F1 score score did the PVT-GCASCADE model get on the DRIVE dataset
| 0.8210 |
MeetingBank | Claude Instant + SigExt | Salient Information Prompting to Steer Content in Prompt-based Abstractive Summarization | 2024-10-03T00:00:00 | https://arxiv.org/abs/2410.02741v2 | [
"https://github.com/amazon-science/SigExt"
] | In the paper 'Salient Information Prompting to Steer Content in Prompt-based Abstractive Summarization', what Rouge-1 score did the Claude Instant + SigExt model get on the MeetingBank dataset
| 42.3 |
Few-NERD (SUP) | NuNER | NuNER: Entity Recognition Encoder Pre-training via LLM-Annotated Data | 2024-02-23T00:00:00 | https://arxiv.org/abs/2402.15343v1 | [
"https://github.com/Serega6678/NuNER"
] | In the paper 'NuNER: Entity Recognition Encoder Pre-training via LLM-Annotated Data', what Precision score did the NuNER model get on the Few-NERD (SUP) dataset
| 67.8 |