| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
| Mini-Imagenet 5-way (1-shot) | CAML [Laion-2b] | Context-Aware Meta-Learning | 2023-10-17T00:00:00 | https://arxiv.org/abs/2310.10971v2 | ["https://github.com/cfifty/CAML"] | In the paper 'Context-Aware Meta-Learning', what Accuracy score did the CAML [Laion-2b] model get on the Mini-Imagenet 5-way (1-shot) dataset | 96.2 |
| ChEBI-20 | MolReGPT (GPT-3.5-turbo) | Empowering Molecule Discovery for Molecule-Caption Translation with Large Language Models: A ChatGPT Perspective | 2023-06-11T00:00:00 | https://arxiv.org/abs/2306.06615v2 | ["https://github.com/phenixace/molregpt"] | In the paper 'Empowering Molecule Discovery for Molecule-Caption Translation with Large Language Models: A ChatGPT Perspective', what Text2Mol score did the MolReGPT (GPT-3.5-turbo) model get on the ChEBI-20 dataset | 57.1 |
| EuroSAT | DePT | DePT: Decoupled Prompt Tuning | 2023-09-14T00:00:00 | https://arxiv.org/abs/2309.07439v2 | ["https://github.com/koorye/dept"] | In the paper 'DePT: Decoupled Prompt Tuning', what Harmonic mean score did the DePT model get on the EuroSAT dataset | 84.88 |
| JapaneseVowels | ConvTran | Improving Position Encoding of Transformers for Multivariate Time Series Classification | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.16642v1 | ["https://github.com/navidfoumani/convtran"] | In the paper 'Improving Position Encoding of Transformers for Multivariate Time Series Classification', what Accuracy score did the ConvTran model get on the JapaneseVowels dataset | 0.9891 |
| Benchmarking Chinese Text Recognition: Datasets, Baselines, and an Empirical Study | DTrOCR | DTrOCR: Decoder-only Transformer for Optical Character Recognition | 2023-08-30T00:00:00 | https://arxiv.org/abs/2308.15996v1 | ["https://github.com/arvindrajan92/DTrOCR"] | In the paper 'DTrOCR: Decoder-only Transformer for Optical Character Recognition', what Accuracy (%) score did the DTrOCR model get on the Benchmarking Chinese Text Recognition: Datasets, Baselines, and an Empirical Study dataset | 89.6 |
| ParaMAWPS | GPT-3 text-curie-001 (13B) | Math Word Problem Solving by Generating Linguistic Variants of Problem Statements | 2023-06-24T00:00:00 | https://arxiv.org/abs/2306.13899v1 | ["https://github.com/starscream-11813/variational-mathematical-reasoning"] | In the paper 'Math Word Problem Solving by Generating Linguistic Variants of Problem Statements', what Accuracy (%) score did the GPT-3 text-curie-001 (13B) model get on the ParaMAWPS dataset | 4.20 |
| COCO 1% labeled data | MixPL | Mixed Pseudo Labels for Semi-Supervised Object Detection | 2023-12-12T00:00:00 | https://arxiv.org/abs/2312.07006v1 | ["https://github.com/czm369/mixpl"] | In the paper 'Mixed Pseudo Labels for Semi-Supervised Object Detection', what mAP score did the MixPL model get on the COCO 1% labeled data dataset | 31.7 |
| ETTm1 (96) Multivariate | TSMixer | TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting | 2023-06-14T00:00:00 | https://arxiv.org/abs/2306.09364v4 | ["https://github.com/ibm/tsfm"] | In the paper 'TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting', what MSE score did the TSMixer model get on the ETTm1 (96) Multivariate dataset | 0.291 |
| PACS | GMDG (ResNet-50) | Rethinking Multi-domain Generalization with A General Learning Objective | 2024-02-29T00:00:00 | https://arxiv.org/abs/2402.18853v1 | ["https://github.com/zhaorui-tan/GMDG_cvpr2024"] | In the paper 'Rethinking Multi-domain Generalization with A General Learning Objective', what Average Accuracy score did the GMDG (ResNet-50) model get on the PACS dataset | 85.6 |
| Chameleon (60%/20%/20% random splits) | HH-GraphSAGE | Half-Hop: A graph upsampling approach for slowing down message passing | 2023-08-17T00:00:00 | https://arxiv.org/abs/2308.09198v1 | ["https://github.com/nerdslab/halfhop"] | In the paper 'Half-Hop: A graph upsampling approach for slowing down message passing', what 1:1 Accuracy score did the HH-GraphSAGE model get on the Chameleon (60%/20%/20% random splits) dataset | 62.98 ± 3.35 |
| RefCOCO+ testA | MaskRIS (Swin-B) | MaskRIS: Semantic Distortion-aware Data Augmentation for Referring Image Segmentation | 2024-11-28T00:00:00 | https://arxiv.org/abs/2411.19067v1 | ["https://github.com/naver-ai/maskris"] | In the paper 'MaskRIS: Semantic Distortion-aware Data Augmentation for Referring Image Segmentation', what Overall IoU score did the MaskRIS (Swin-B) model get on the RefCOCO+ testA dataset | 74.46 |
| SHS100K-TEST | CoverHunter-128 | CoverHunter: Cover Song Identification with Refined Attention and Alignments | 2023-06-15T00:00:00 | https://arxiv.org/abs/2306.09025v1 | ["https://github.com/Liu-Feng-deeplearning/CoverHunter"] | In the paper 'CoverHunter: Cover Song Identification with Refined Attention and Alignments', what mAP score did the CoverHunter-128 model get on the SHS100K-TEST dataset | 0.858 |
| EC-FUNSD | LayoutLMv3 (base) | Rethinking the Evaluation of Pre-trained Text-and-Layout Models from an Entity-Centric Perspective | 2024-02-04T00:00:00 | https://arxiv.org/abs/2402.02379v1 | ["https://github.com/chongzhangFDU/ROOR"] | In the paper 'Rethinking the Evaluation of Pre-trained Text-and-Layout Models from an Entity-Centric Perspective', what F1 score did the LayoutLMv3 (base) model get on the EC-FUNSD dataset | 82.30 |
| FineDance | Lodge (DDPM) | Lodge: A Coarse to Fine Diffusion Network for Long Dance Generation Guided by the Characteristic Dance Primitives | 2024-03-15T00:00:00 | https://arxiv.org/abs/2403.10518v3 | ["https://github.com/li-ronghui/LODGE"] | In the paper 'Lodge: A Coarse to Fine Diffusion Network for Long Dance Generation Guided by the Characteristic Dance Primitives', what fid_k score did the Lodge (DDPM) model get on the FineDance dataset | 45.56 |
| BTAD | MuSc (zero-shot) | MuSc: Zero-Shot Industrial Anomaly Classification and Segmentation with Mutual Scoring of the Unlabeled Images | 2024-01-30T00:00:00 | https://arxiv.org/abs/2401.16753v1 | ["https://github.com/xrli-U/MuSc"] | In the paper 'MuSc: Zero-Shot Industrial Anomaly Classification and Segmentation with Mutual Scoring of the Unlabeled Images', what Segmentation AUROC score did the MuSc (zero-shot) model get on the BTAD dataset | 97.35 |
| HumanML3D | Motion Mamba | Motion Mamba: Efficient and Long Sequence Motion Generation | 2024-03-12T00:00:00 | https://arxiv.org/abs/2403.07487v4 | ["https://github.com/steve-zeyu-zhang/MotionMamba"] | In the paper 'Motion Mamba: Efficient and Long Sequence Motion Generation', what FID score did the Motion Mamba model get on the HumanML3D dataset | 0.281 |
| Pubmed | CGT | Mitigating Degree Biases in Message Passing Mechanism by Utilizing Community Structures | 2023-12-28T00:00:00 | https://arxiv.org/abs/2312.16788v1 | ["https://github.com/nslab-cuk/community-aware-graph-transformer"] | In the paper 'Mitigating Degree Biases in Message Passing Mechanism by Utilizing Community Structures', what Accuracy score did the CGT model get on the Pubmed dataset | 86.86±0.12 |
| LVIS v1.0 val | SE-R101-FPN-MaskRCNN-APA | Adaptive Parametric Activation | 2024-07-11T00:00:00 | https://arxiv.org/abs/2407.08567v2 | ["https://github.com/kostas1515/aglu"] | In the paper 'Adaptive Parametric Activation', what mask AP score did the SE-R101-FPN-MaskRCNN-APA model get on the LVIS v1.0 val dataset | 30.7 |
| RefCoCo val | MaskRIS (Swin-B, combined DB) | MaskRIS: Semantic Distortion-aware Data Augmentation for Referring Image Segmentation | 2024-11-28T00:00:00 | https://arxiv.org/abs/2411.19067v1 | ["https://github.com/naver-ai/maskris"] | In the paper 'MaskRIS: Semantic Distortion-aware Data Augmentation for Referring Image Segmentation', what Overall IoU score did the MaskRIS (Swin-B, combined DB) model get on the RefCoCo val dataset | 78.71 |
| SPOT-10 | DenseNet121 Distiller | SPOTS-10: Animal Pattern Benchmark Dataset for Machine Learning Algorithms | 2024-10-28T00:00:00 | https://arxiv.org/abs/2410.21044v1 | ["https://github.com/amotica/spots-10"] | In the paper 'SPOTS-10: Animal Pattern Benchmark Dataset for Machine Learning Algorithms', what Accuracy score did the DenseNet121 Distiller model get on the SPOT-10 dataset | 81.84 |
| DTU | ET-MVSNet | When Epipolar Constraint Meets Non-local Operators in Multi-View Stereo | 2023-09-29T00:00:00 | https://arxiv.org/abs/2309.17218v1 | ["https://github.com/tqtqliu/et-mvsnet"] | In the paper 'When Epipolar Constraint Meets Non-local Operators in Multi-View Stereo', what Acc score did the ET-MVSNet model get on the DTU dataset | 0.329 |
| QVHighlights | UVCOM | Bridging the Gap: A Unified Video Comprehension Framework for Moment Retrieval and Highlight Detection | 2023-11-28T00:00:00 | https://arxiv.org/abs/2311.16464v1 | ["https://github.com/easonxiao-888/uvcom"] | In the paper 'Bridging the Gap: A Unified Video Comprehension Framework for Moment Retrieval and Highlight Detection', what mAP score did the UVCOM model get on the QVHighlights dataset | 43.18 |
| Oxford 102 Flower | ProMetaR | Prompt Learning via Meta-Regularization | 2024-04-01T00:00:00 | https://arxiv.org/abs/2404.00851v1 | ["https://github.com/mlvlab/prometar"] | In the paper 'Prompt Learning via Meta-Regularization', what Harmonic mean score did the ProMetaR model get on the Oxford 102 Flower dataset | 86.70 |
| ImageNet | DeBiFormer-B | DeBiFormer: Vision Transformer with Deformable Agent Bi-level Routing Attention | 2024-10-11T00:00:00 | https://arxiv.org/abs/2410.08582v1 | ["https://github.com/maclong01/DeBiFormer"] | In the paper 'DeBiFormer: Vision Transformer with Deformable Agent Bi-level Routing Attention', what Top 1 Accuracy score did the DeBiFormer-B model get on the ImageNet dataset | 84.4% |
| Argoverse 2 | ZeroFlow 5x XL | ZeroFlow: Scalable Scene Flow via Distillation | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10424v8 | ["https://github.com/kylevedder/zeroflow"] | In the paper 'ZeroFlow: Scalable Scene Flow via Distillation', what EPE 3-Way score did the ZeroFlow 5x XL model get on the Argoverse 2 dataset | 0.049392 |
| Coauthor CS | HH-GCN | Half-Hop: A graph upsampling approach for slowing down message passing | 2023-08-17T00:00:00 | https://arxiv.org/abs/2308.09198v1 | ["https://github.com/nerdslab/halfhop"] | In the paper 'Half-Hop: A graph upsampling approach for slowing down message passing', what Accuracy score did the HH-GCN model get on the Coauthor CS dataset | 94.71% |
| READ2016(line-level) | HTR-VT | HTR-VT: Handwritten Text Recognition with Vision Transformer | 2024-09-13T00:00:00 | https://arxiv.org/abs/2409.08573v1 | ["https://github.com/yutingli0606/htr-vt"] | In the paper 'HTR-VT: Handwritten Text Recognition with Vision Transformer', what Test CER score did the HTR-VT model get on the READ2016(line-level) dataset | 3.9 |
| SVT | CLIP4STR-B* | An Empirical Study of Scaling Law for OCR | 2023-12-29T00:00:00 | https://arxiv.org/abs/2401.00028v3 | ["https://github.com/large-ocr-model/large-ocr-model.github.io"] | In the paper 'An Empirical Study of Scaling Law for OCR', what Accuracy score did the CLIP4STR-B* model get on the SVT dataset | 98.76 |
| CIFAR-100-LT (ρ=10) | VS + ADRW + TLA | A Unified Generalization Analysis of Re-Weighting and Logit-Adjustment for Imbalanced Learning | 2023-10-07T00:00:00 | https://arxiv.org/abs/2310.04752 | ["https://github.com/wang22ti/DDC"] | In the paper 'A Unified Generalization Analysis of Re-Weighting and Logit-Adjustment for Imbalanced Learning', what Error Rate score did the VS + ADRW + TLA model get on the CIFAR-100-LT (ρ=10) dataset | 34.41 |
| SPair-71k | GeoAware-SC (Supervised) | Telling Left from Right: Identifying Geometry-Aware Semantic Correspondence | 2023-11-28T00:00:00 | https://arxiv.org/abs/2311.17034v2 | ["https://github.com/Junyi42/geoaware-sc"] | In the paper 'Telling Left from Right: Identifying Geometry-Aware Semantic Correspondence', what PCK score did the GeoAware-SC (Supervised) model get on the SPair-71k dataset | 82.9 |
| Mapillary test | SelaVPR | Towards Seamless Adaptation of Pre-trained Models for Visual Place Recognition | 2024-02-22T00:00:00 | https://arxiv.org/abs/2402.14505v3 | ["https://github.com/Lu-Feng/SelaVPR"] | In the paper 'Towards Seamless Adaptation of Pre-trained Models for Visual Place Recognition', what Recall@1 score did the SelaVPR model get on the Mapillary test dataset | 73.5 |
| COCO-Stuff-171 | TTD (MaskCLIP) | TTD: Text-Tag Self-Distillation Enhancing Image-Text Alignment in CLIP to Alleviate Single Tag Bias | 2024-03-30T00:00:00 | https://arxiv.org/abs/2404.00384v2 | ["https://github.com/shjo-april/TTD"] | In the paper 'TTD: Text-Tag Self-Distillation Enhancing Image-Text Alignment in CLIP to Alleviate Single Tag Bias', what mIoU score did the TTD (MaskCLIP) model get on the COCO-Stuff-171 dataset | 19.4 |
| PACS | POEM | POEM: Polarization of Embeddings for Domain-Invariant Representations | 2023-05-22T00:00:00 | https://arxiv.org/abs/2305.13046v1 | ["https://github.com/josangyoung/official-poem"] | In the paper 'POEM: Polarization of Embeddings for Domain-Invariant Representations', what Average Accuracy score did the POEM model get on the PACS dataset | 86.7 |
| VNHSGE-Chemistry | ChatGPT | VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models | 2023-05-20T00:00:00 | https://arxiv.org/abs/2305.12199v1 | ["https://github.com/xdao85/vnhsge"] | In the paper 'VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models', what Accuracy score did the ChatGPT model get on the VNHSGE-Chemistry dataset | 48 |
| STL-10 | SPICE-BPA | The Balanced-Pairwise-Affinities Feature Transform | 2024-06-25T00:00:00 | https://arxiv.org/abs/2407.01467v1 | ["https://github.com/danielshalam/bpa"] | In the paper 'The Balanced-Pairwise-Affinities Feature Transform', what Accuracy score did the SPICE-BPA model get on the STL-10 dataset | 0.943 |
| VidHOI | ST-GAZE | Human-Object Interaction Prediction in Videos through Gaze Following | 2023-06-06T00:00:00 | https://arxiv.org/abs/2306.03597v1 | ["https://github.com/nizhf/hoi-prediction-gaze-transformer"] | In the paper 'Human-Object Interaction Prediction in Videos through Gaze Following', what Person-wise Top5: t=1(mAP@0.5) score did the ST-GAZE model get on the VidHOI dataset | 37.59 |
| Story Cloze | PaLM 2-S (one-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-S (one-shot) model get on the Story Cloze dataset | 85.6 |
| SYNTHIA-to-Cityscapes | MIC + Guidance Training | Improve Cross-domain Mixed Sampling with Guidance Training for Adaptive Segmentation | 2024-03-22T00:00:00 | https://arxiv.org/abs/2403.14995v1 | ["https://github.com/wenlve-zhou/guidance-training"] | In the paper 'Improve Cross-domain Mixed Sampling with Guidance Training for Adaptive Segmentation', what mIoU score did the MIC + Guidance Training model get on the SYNTHIA-to-Cityscapes dataset | 63.8 |
| BSD100 - 4x upscaling | AESOP | Auto-Encoded Supervision for Perceptual Image Super-Resolution | 2024-11-28T00:00:00 | https://arxiv.org/abs/2412.00124v1 | ["https://github.com/2minkyulee/aesop-auto-encoded-supervision-for-perceptual-image-super-resolution"] | In the paper 'Auto-Encoded Supervision for Perceptual Image Super-Resolution', what PSNR score did the AESOP model get on the BSD100 - 4x upscaling dataset | 25.93 |
| YouTube-VIS 2021 | CAVIS(VIT-L, Offline) | Context-Aware Video Instance Segmentation | 2024-07-03T00:00:00 | https://arxiv.org/abs/2407.03010v1 | ["https://github.com/Seung-Hun-Lee/CAVIS"] | In the paper 'Context-Aware Video Instance Segmentation', what mask AP score did the CAVIS(VIT-L, Offline) model get on the YouTube-VIS 2021 dataset | 65.3 |
| CUB-200-2011 | Q-SENN | Q-SENN: Quantized Self-Explaining Neural Networks | 2023-12-21T00:00:00 | https://arxiv.org/abs/2312.13839v2 | ["https://github.com/thomasnorr/q-senn"] | In the paper 'Q-SENN: Quantized Self-Explaining Neural Networks', what Top 1 Accuracy score did the Q-SENN model get on the CUB-200-2011 dataset | 85.9 |
| MATH | OpenMath-CodeLlama-13B (w/ code, SC, k=50) | OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | 2024-02-15T00:00:00 | https://arxiv.org/abs/2402.10176v2 | ["https://github.com/kipok/nemo-skills"] | In the paper 'OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset', what Accuracy score did the OpenMath-CodeLlama-13B (w/ code, SC, k=50) model get on the MATH dataset | 57.6 |
| MM-Vet | LLaVA-HR-X | Feast Your Eyes: Mixture-of-Resolution Adaptation for Multimodal Large Language Models | 2024-03-05T00:00:00 | https://arxiv.org/abs/2403.03003v1 | ["https://github.com/luogen1996/llava-hr"] | In the paper 'Feast Your Eyes: Mixture-of-Resolution Adaptation for Multimodal Large Language Models', what GPT-4 score score did the LLaVA-HR-X model get on the MM-Vet dataset | 35.5 |
| Yahoo A1 | CARLA | CARLA: Self-supervised Contrastive Representation Learning for Time Series Anomaly Detection | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09296v4 | ["https://github.com/zamanzadeh/CARLA"] | In the paper 'CARLA: Self-supervised Contrastive Representation Learning for Time Series Anomaly Detection', what precision score did the CARLA model get on the Yahoo A1 dataset | 0.9755 |
| GOT-10k | RTracker-L | RTracker: Recoverable Tracking via PN Tree Structured Memory | 2024-03-28T00:00:00 | https://arxiv.org/abs/2403.19242v1 | ["https://github.com/norahgreen/rtracker"] | In the paper 'RTracker: Recoverable Tracking via PN Tree Structured Memory', what Average Overlap score did the RTracker-L model get on the GOT-10k dataset | 77.9 |
| CIFAR-10 | SCT | Stable Consistency Tuning: Understanding and Improving Consistency Models | 2024-10-24T00:00:00 | https://arxiv.org/abs/2410.18958v3 | ["https://github.com/G-U-N/Stable-Consistency-Tuning"] | In the paper 'Stable Consistency Tuning: Understanding and Improving Consistency Models', what FID score did the SCT model get on the CIFAR-10 dataset | 1.84 |
| imSitu | ClipSitu | ClipSitu: Effectively Leveraging CLIP for Conditional Predictions in Situation Recognition | 2023-07-02T00:00:00 | https://arxiv.org/abs/2307.00586v3 | ["https://github.com/LUNAProject22/CLIPSitu"] | In the paper 'ClipSitu: Effectively Leveraging CLIP for Conditional Predictions in Situation Recognition', what Top-1 Verb score did the ClipSitu model get on the imSitu dataset | 47.23 |
| FRMT (Chinese - Mainland) | Google Translate | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what BLEURT score did the Google Translate model get on the FRMT (Chinese - Mainland) dataset | 72.3 |
| Winoground | VQ2 | What You See is What You Read? Improving Text-Image Alignment Evaluation | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10400v4 | ["https://github.com/yonatanbitton/wysiwyr"] | In the paper 'What You See is What You Read? Improving Text-Image Alignment Evaluation', what Text Score score did the VQ2 model get on the Winoground dataset | 47 |
| AMZ Photo | CGT | Mitigating Degree Biases in Message Passing Mechanism by Utilizing Community Structures | 2023-12-28T00:00:00 | https://arxiv.org/abs/2312.16788v1 | ["https://github.com/nslab-cuk/community-aware-graph-transformer"] | In the paper 'Mitigating Degree Biases in Message Passing Mechanism by Utilizing Community Structures', what Accuracy score did the CGT model get on the AMZ Photo dataset | 95.73±0.84 |
| Squirrel | CATv3-sup | CAT: A Causally Graph Attention Network for Trimming Heterophilic Graph | 2023-12-14T00:00:00 | https://arxiv.org/abs/2312.08672v3 | ["https://github.com/geox-lab/cat"] | In the paper 'CAT: A Causally Graph Attention Network for Trimming Heterophilic Graph', what Accuracy score did the CATv3-sup model get on the Squirrel dataset | 59.3±1.8 |
| OVIS validation | DVIS(Swin-L, Online) | DVIS: Decoupled Video Instance Segmentation Framework | 2023-06-06T00:00:00 | https://arxiv.org/abs/2306.03413v3 | ["https://github.com/zhang-tao-whu/DVIS"] | In the paper 'DVIS: Decoupled Video Instance Segmentation Framework', what mask AP score did the DVIS(Swin-L, Online) model get on the OVIS validation dataset | 47.1 |
| Social media attributions of YouTube comments | Space-BERT | Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs | 2024-01-30T00:00:00 | https://arxiv.org/abs/2401.16638v1 | ["https://github.com/stepantita/space-model"] | In the paper 'Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs', what Accuracy (2 classes) score did the Space-BERT model get on the Social media attributions of YouTube comments dataset | 0.8309 |
| DESED | ABC + MDFD-CRNN | Pushing the Limit of Sound Event Detection with Multi-Dilated Frequency Dynamic Convolution | 2024-06-19T00:00:00 | https://arxiv.org/abs/2406.13312v3 | ["https://github.com/frednam93/MDFD-SED"] | In the paper 'Pushing the Limit of Sound Event Detection with Multi-Dilated Frequency Dynamic Convolution', what PSDS1 score did the ABC + MDFD-CRNN model get on the DESED dataset | 0.577 |
| ETTh1 (720) Multivariate | SparseTSF | SparseTSF: Modeling Long-term Time Series Forecasting with 1k Parameters | 2024-05-02T00:00:00 | https://arxiv.org/abs/2405.00946v2 | ["https://github.com/lss-1138/SparseTSF"] | In the paper 'SparseTSF: Modeling Long-term Time Series Forecasting with 1k Parameters', what MSE score did the SparseTSF model get on the ETTh1 (720) Multivariate dataset | 0.426 |
| ZINC-500k | CKGCN | CKGConv: General Graph Convolution with Continuous Kernels | 2024-04-21T00:00:00 | https://arxiv.org/abs/2404.13604v2 | ["https://github.com/networkslab/ckgconv"] | In the paper 'CKGConv: General Graph Convolution with Continuous Kernels', what MAE score did the CKGCN model get on the ZINC-500k dataset | 5.9 |
| FinSen | LSTM | Enhancing Financial Market Predictions: Causality-Driven Feature Selection | 2024-08-02T00:00:00 | https://arxiv.org/abs/2408.01005v1 | ["https://github.com/EagleAdelaide/FinSen_Dataset"] | In the paper 'Enhancing Financial Market Predictions: Causality-Driven Feature Selection', what Mean MSE score did the LSTM model get on the FinSen dataset | 0.01 |
| LaSOT | PiVOT-L | Improving Visual Object Tracking through Visual Prompting | 2024-09-27T00:00:00 | https://arxiv.org/abs/2409.18901v1 | ["https://github.com/chenshihfang/GOT"] | In the paper 'Improving Visual Object Tracking through Visual Prompting', what AUC score did the PiVOT-L model get on the LaSOT dataset | 73.4 |
| Full-body Parkinson’s disease dataset | MotionAGFormer | MotionAGFormer: Enhancing 3D Human Pose Estimation with a Transformer-GCNFormer Network | 2023-10-25T00:00:00 | https://arxiv.org/abs/2310.16288v1 | ["https://github.com/taatiteam/motionagformer"] | In the paper 'MotionAGFormer: Enhancing 3D Human Pose Estimation with a Transformer-GCNFormer Network', what F1-score (weighted) score did the MotionAGFormer model get on the Full-body Parkinson’s disease dataset | 0.42 |
| FSD50K | MN | Dynamic Convolutional Neural Networks as Efficient Pre-trained Audio Models | 2023-10-24T00:00:00 | https://arxiv.org/abs/2310.15648v1 | ["https://github.com/fschmid56/efficientat"] | In the paper 'Dynamic Convolutional Neural Networks as Efficient Pre-trained Audio Models', what mAP score did the MN model get on the FSD50K dataset | 65.6 |
| PECC | WizardLM-2-7B | PECC: Problem Extraction and Coding Challenges | 2024-04-29T00:00:00 | https://arxiv.org/abs/2404.18766v1 | ["https://github.com/hallerpatrick/pecc"] | In the paper 'PECC: Problem Extraction and Coding Challenges', what Pass@3 score did the WizardLM-2-7B model get on the PECC dataset | 3.72 |
| SPOT-10 | MobileNetV3Small Distiller | SPOTS-10: Animal Pattern Benchmark Dataset for Machine Learning Algorithms | 2024-10-28T00:00:00 | https://arxiv.org/abs/2410.21044v1 | ["https://github.com/amotica/spots-10"] | In the paper 'SPOTS-10: Animal Pattern Benchmark Dataset for Machine Learning Algorithms', what Accuracy score did the MobileNetV3Small Distiller model get on the SPOT-10 dataset | 78.04 |
| ImageNet 256x256 | LEGO-XL | Learning Stackable and Skippable LEGO Bricks for Efficient, Reconfigurable, and Variable-Resolution Diffusion Modeling | 2023-10-10T00:00:00 | https://arxiv.org/abs/2310.06389v3 | ["https://github.com/JegZheng/LEGODiffusion"] | In the paper 'Learning Stackable and Skippable LEGO Bricks for Efficient, Reconfigurable, and Variable-Resolution Diffusion Modeling', what FID score did the LEGO-XL model get on the ImageNet 256x256 dataset | 2.05 |
| LRS2 | TDFNet-small | TDFNet: An Efficient Audio-Visual Speech Separation Model with Top-down Fusion | 2024-01-25T00:00:00 | https://arxiv.org/abs/2401.14185v1 | ["https://github.com/spkgyk/TDFNet"] | In the paper 'TDFNet: An Efficient Audio-Visual Speech Separation Model with Top-down Fusion', what SI-SNRi score did the TDFNet-small model get on the LRS2 dataset | 13.6 |
| Market-1501 | PCL-CLIP (L_pcl) | Prototypical Contrastive Learning-based CLIP Fine-tuning for Object Re-identification | 2023-10-26T00:00:00 | https://arxiv.org/abs/2310.17218v1 | ["https://github.com/RikoLi/PCL-CLIP"] | In the paper 'Prototypical Contrastive Learning-based CLIP Fine-tuning for Object Re-identification', what Rank-1 score did the PCL-CLIP (L_pcl) model get on the Market-1501 dataset | 96.1 |
| VisA | URD | Unlocking the Potential of Reverse Distillation for Anomaly Detection | 2024-12-10T00:00:00 | https://arxiv.org/abs/2412.07579v1 | ["https://github.com/hito2448/urd"] | In the paper 'Unlocking the Potential of Reverse Distillation for Anomaly Detection', what Detection AUROC score did the URD model get on the VisA dataset | 96.5 |
| LibriSpeech test-other | Branchformer + GFSA | Graph Convolutions Enrich the Self-Attention in Transformers! | 2023-12-07T00:00:00 | https://arxiv.org/abs/2312.04234v5 | ["https://github.com/jeongwhanchoi/gfsa"] | In the paper 'Graph Convolutions Enrich the Self-Attention in Transformers!', what Word Error Rate (WER) score did the Branchformer + GFSA model get on the LibriSpeech test-other dataset | 4.94 |
| Id Pattern Dataset | Gemini 1.5 Pro | Identification of Stone Deterioration Patterns with Large Multimodal Models | 2024-06-05T00:00:00 | https://arxiv.org/abs/2406.03207v1 | ["https://github.com/dcorradetti/redai_id_pattern"] | In the paper 'Identification of Stone Deterioration Patterns with Large Multimodal Models', what Percentage correct score did the Gemini 1.5 Pro model get on the Id Pattern Dataset | 39% |
| LEVIR-CD | SGSLN/256 | Exchanging Dual Encoder-Decoder: A New Strategy for Change Detection with Semantic Guidance and Spatial Localization | 2023-11-19T00:00:00 | https://arxiv.org/abs/2311.11302v1 | ["https://github.com/walking-shadow/Semantic-guidance-and-spatial-localization-network"] | In the paper 'Exchanging Dual Encoder-Decoder: A New Strategy for Change Detection with Semantic Guidance and Spatial Localization', what F1-score score did the SGSLN/256 model get on the LEVIR-CD dataset | 0.9193 |
| iNaturalist 2018 | LIFT (ViT-B/16) | Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts | 2023-09-18T00:00:00 | https://arxiv.org/abs/2309.10019v3 | ["https://github.com/shijxcs/lift"] | In the paper 'Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts', what Top-1 Accuracy score did the LIFT (ViT-B/16) model get on the iNaturalist 2018 dataset | 80.4% |
| Peptides-struct | CKGCN | CKGConv: General Graph Convolution with Continuous Kernels | 2024-04-21T00:00:00 | https://arxiv.org/abs/2404.13604v2 | ["https://github.com/networkslab/ckgconv"] | In the paper 'CKGConv: General Graph Convolution with Continuous Kernels', what MAE score did the CKGCN model get on the Peptides-struct dataset | 0.2477 |
| GTEA | BaFormer | Efficient Temporal Action Segmentation via Boundary-aware Query Voting | 2024-05-25T00:00:00 | https://arxiv.org/abs/2405.15995v1 | ["https://github.com/peiyao-w/baformer"] | In the paper 'Efficient Temporal Action Segmentation via Boundary-aware Query Voting', what F1@10% score did the BaFormer model get on the GTEA dataset | 92.0 |
| HRF | RRWNet | RRWNet: Recursive Refinement Network for Effective Retinal Artery/Vein Segmentation and Classification | 2024-02-05T00:00:00 | https://arxiv.org/abs/2402.03166v4 | ["https://github.com/j-morano/rrwnet"] | In the paper 'RRWNet: Recursive Refinement Network for Effective Retinal Artery/Vein Segmentation and Classification', what Accuracy score did the RRWNet model get on the HRF dataset | 0.9783 |
| WSJ0-2mix | SPGM + DM | SPGM: Prioritizing Local Features for enhanced speech separation performance | 2023-09-22T00:00:00 | https://arxiv.org/abs/2309.12608v2 | ["https://huggingface.co/yipjiaqi/spgm"] | In the paper 'SPGM: Prioritizing Local Features for enhanced speech separation performance', what SI-SDRi score did the SPGM + DM model get on the WSJ0-2mix dataset | 22.7 |
| MNIST | Wilson-Cowan model RNN | Learning in Wilson-Cowan model for metapopulation | 2024-06-24T00:00:00 | https://arxiv.org/abs/2406.16453v2 | ["https://github.com/raffaelemarino/learning_in_wilsoncowan"] | In the paper 'Learning in Wilson-Cowan model for metapopulation', what Accuracy score did the Wilson-Cowan model RNN model get on the MNIST dataset | 98.13 |
| Atari 2600 Beam Rider | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07T00:00:00 | https://arxiv.org/abs/2305.04180v3 | ["https://github.com/xinjinghao/color"] | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score score did the ASL DDQN model get on the Atari 2600 Beam Rider dataset | 26841.6 |
| REBUS | LLaVa-1.5-7B | REBUS: A Robust Evaluation Benchmark of Understanding Symbols | 2024-01-11T00:00:00 | https://arxiv.org/abs/2401.05604v2 | ["https://github.com/cvndsh/rebus"] | In the paper 'REBUS: A Robust Evaluation Benchmark of Understanding Symbols', what Accuracy score did the LLaVa-1.5-7B model get on the REBUS dataset | 1.5 |
| MathQA | Exp-Tree | An Expression Tree Decoding Strategy for Mathematical Equation Generation | 2023-10-14T00:00:00 | https://arxiv.org/abs/2310.09619v3 | ["https://github.com/zwq2018/multi-view-consistency-for-mwp"] | In the paper 'An Expression Tree Decoding Strategy for Mathematical Equation Generation', what Answer Accuracy score did the Exp-Tree model get on the MathQA dataset | 81.5 |
| Objaverse | MiniGPT-3D | MiniGPT-3D: Efficiently Aligning 3D Point Clouds with Large Language Models using 2D Priors | 2024-05-02T00:00:00 | https://arxiv.org/abs/2405.01413v1 | ["https://github.com/tangyuan96/minigpt-3d"] | In the paper 'MiniGPT-3D: Efficiently Aligning 3D Point Clouds with Large Language Models using 2D Priors', what GPT-4 score did the MiniGPT-3D model get on the Objaverse dataset | 57.06 |
| PECC | codechat-bison | PECC: Problem Extraction and Coding Challenges | 2024-04-29T00:00:00 | https://arxiv.org/abs/2404.18766v1 | ["https://github.com/hallerpatrick/pecc"] | In the paper 'PECC: Problem Extraction and Coding Challenges', what Pass@3 score did the codechat-bison model get on the PECC dataset | 11.39 |
| MLO-Cn2 | Minute Climatology | Effective Benchmarks for Optical Turbulence Modeling | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03573v1 | ["https://github.com/cdjellen/otbench"] | In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the Minute Climatology model get on the MLO-Cn2 dataset | 0.504 |
| SUN-RGBD | FSFNet | DFormer: Rethinking RGBD Representation Learning for Semantic Segmentation | 2023-09-18T00:00:00 | https://arxiv.org/abs/2309.09668v2 | ["https://github.com/VCIP-RGBD/DFormer"] | In the paper 'DFormer: Rethinking RGBD Representation Learning for Semantic Segmentation', what Mean IoU score did the FSFNet model get on the SUN-RGBD dataset | 48.8% |
| TinyImageNet ResNet-18 - 300 Epochs | IBM | Towards Redundancy-Free Sub-networks in Continual Learning | 2023-12-01T00:00:00 | https://arxiv.org/abs/2312.00840v2 | ["https://github.com/zackschen/IBM-Net"] | In the paper 'Towards Redundancy-Free Sub-networks in Continual Learning', what Accuracy score did the IBM model get on the TinyImageNet ResNet-18 - 300 Epochs dataset | 52.38 |
| Kvasir-SEG | EffiSegNet-B5 | EffiSegNet: Gastrointestinal Polyp Segmentation through a Pre-Trained EfficientNet-based Network with a Simplified Decoder | 2024-07-23T00:00:00 | https://arxiv.org/abs/2407.16298v1 | ["https://github.com/ivezakis/effisegnet"] | In the paper 'EffiSegNet: Gastrointestinal Polyp Segmentation through a Pre-Trained EfficientNet-based Network with a Simplified Decoder', what mean Dice score did the EffiSegNet-B5 model get on the Kvasir-SEG dataset | 0.9488 |
| COCO-20i (2-way 1-shot) | Label Anything (Vit-B/16-SAM) | Label Anything: Multi-Class Few-Shot Semantic Segmentation with Visual Prompts | 2024-07-02T00:00:00 | https://arxiv.org/abs/2407.02075v1 | ["https://github.com/pasqualedem/LabelAnything"] | In the paper 'Label Anything: Multi-Class Few-Shot Semantic Segmentation with Visual Prompts', what mIoU score did the Label Anything (Vit-B/16-SAM) model get on the COCO-20i (2-way 1-shot) dataset | 34.6 |
MOSE | Cutie (base, with mose) | Putting the Object Back into Video Object Segmentation | 2023-10-19T00:00:00 | https://arxiv.org/abs/2310.12982v2 | [
"https://github.com/hkchengrex/Cutie"
] | In the paper 'Putting the Object Back into Video Object Segmentation', what J&F score did the Cutie (base, with mose) model get on the MOSE dataset
| 68.3 |
CIFAR-10, 40 Labels | RelationMatch | RelationMatch: Matching In-batch Relationships for Semi-supervised Learning | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10397v2 | [
"https://github.com/yifanzhang-pro/relationmatch"
] | In the paper 'RelationMatch: Matching In-batch Relationships for Semi-supervised Learning', what Percentage error score did the RelationMatch model get on the CIFAR-10, 40 Labels dataset
| 4.96 |
FUNSD | RORE (GeoLayoutLM) | Modeling Layout Reading Order as Ordering Relations for Visually-rich Document Understanding | 2024-09-29T00:00:00 | https://arxiv.org/abs/2409.19672v1 | [
"https://github.com/chongzhangFDU/ROOR"
] | In the paper 'Modeling Layout Reading Order as Ordering Relations for Visually-rich Document Understanding', what F1 score did the RORE (GeoLayoutLM) model get on the FUNSD dataset
| 88.46 |
ZJU-RGB-P | ShareCMP (B4 RGB-FP) | ShareCMP: Polarization-Aware RGB-P Semantic Segmentation | 2023-12-06T00:00:00 | https://arxiv.org/abs/2312.03430v2 | [
"https://github.com/lefteyex/sharecmp"
] | In the paper 'ShareCMP: Polarization-Aware RGB-P Semantic Segmentation', what mIoU score did the ShareCMP (B4 RGB-FP) model get on the ZJU-RGB-P dataset
| 92.7 |
VideoInstruct | VTimeLLM | VTimeLLM: Empower LLM to Grasp Video Moments | 2023-11-30T00:00:00 | https://arxiv.org/abs/2311.18445v1 | [
"https://github.com/huangb23/vtimellm"
] | In the paper 'VTimeLLM: Empower LLM to Grasp Video Moments', what Correctness of Information score did the VTimeLLM model get on the VideoInstruct dataset
| 2.78 |
NCBI-disease | NuNER Zero Span | NuNER: Entity Recognition Encoder Pre-training via LLM-Annotated Data | 2024-02-23T00:00:00 | https://arxiv.org/abs/2402.15343v1 | [
"https://github.com/Serega6678/NuNER"
] | In the paper 'NuNER: Entity Recognition Encoder Pre-training via LLM-Annotated Data', what F1 score did the NuNER Zero Span model get on the NCBI-disease dataset
| 61.1 |
Atari 2600 Surround | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07T00:00:00 | https://arxiv.org/abs/2305.04180v3 | [
"https://github.com/xinjinghao/color"
] | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score score did the ASL DDQN model get on the Atari 2600 Surround dataset
| 2.5 |
cifar100 | ResNet18 | Guarding Barlow Twins Against Overfitting with Mixed Samples | 2023-12-04T00:00:00 | https://arxiv.org/abs/2312.02151v1 | [
"https://github.com/wgcban/mix-bt"
] | In the paper 'Guarding Barlow Twins Against Overfitting with Mixed Samples', what average top-1 classification accuracy score did the ResNet18 model get on the cifar100 dataset
| 69.31 |
STAR Benchmark | TraveLER (0-shot) | TraveLER: A Modular Multi-LMM Agent Framework for Video Question-Answering | 2024-04-01T00:00:00 | https://arxiv.org/abs/2404.01476v2 | [
"https://github.com/traveler-framework/traveler"
] | In the paper 'TraveLER: A Modular Multi-LMM Agent Framework for Video Question-Answering', what Average Accuracy score did the TraveLER (0-shot) model get on the STAR Benchmark dataset
| 44.9 |
TACO-Code | Starcoder-15.5B | TACO: Topics in Algorithmic COde generation dataset | 2023-12-22T00:00:00 | https://arxiv.org/abs/2312.14852v3 | [
"https://github.com/flagopen/taco"
] | In the paper 'TACO: Topics in Algorithmic COde generation dataset', what easy pass@1 score did the Starcoder-15.5B model get on the TACO-Code dataset
| 11.6% |
Columbia | Late Fusion | MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization | 2023-12-04T00:00:00 | https://arxiv.org/abs/2312.01790v2 | [
"https://github.com/idt-iti/mmfusion-iml"
] | In the paper 'MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization', what AUC score did the Late Fusion model get on the Columbia dataset
| 0.977 |
VideoInstruct | BT-Adapter (zero-shot) | BT-Adapter: Video Conversation is Feasible Without Video Instruction Tuning | 2023-09-27T00:00:00 | https://arxiv.org/abs/2309.15785v2 | [
"https://github.com/farewellthree/BT-Adapter"
] | In the paper 'BT-Adapter: Video Conversation is Feasible Without Video Instruction Tuning', what gpt-score score did the BT-Adapter (zero-shot) model get on the VideoInstruct dataset
| 2.16 |
ImageNet 256x256 | RAR-L, autoregressive | Randomized Autoregressive Visual Generation | 2024-11-01T00:00:00 | https://arxiv.org/abs/2411.00776v1 | [
"https://github.com/bytedance/1d-tokenizer"
] | In the paper 'Randomized Autoregressive Visual Generation', what FID score did the RAR-L, autoregressive model get on the ImageNet 256x256 dataset
| 1.70 |
Structured3D | SFSS-MMSI (RGB+Depth+Normal) | Single Frame Semantic Segmentation Using Multi-Modal Spherical Images | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09369v1 | [
"https://github.com/sguttikon/SFSS-MMSI"
] | In the paper 'Single Frame Semantic Segmentation Using Multi-Modal Spherical Images', what Validation mIoU score did the SFSS-MMSI (RGB+Depth+Normal) model get on the Structured3D dataset
| 75.86 |
ETTh1 (336) Multivariate | RLinear | Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping | 2023-05-18T00:00:00 | https://arxiv.org/abs/2305.10721v1 | [
"https://github.com/plumprc/rtsf"
] | In the paper 'Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping', what MSE score did the RLinear model get on the ETTh1 (336) Multivariate dataset
| 0.42 |