| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
| MAWPS | MsAT-DeductReasoner | Learning Multi-Step Reasoning by Solving Arithmetic Tasks | 2023-06-02 | https://arxiv.org/abs/2306.01707v3 | https://github.com/TianduoWang/MsAT | In the paper 'Learning Multi-Step Reasoning by Solving Arithmetic Tasks', what Accuracy (%) score did the MsAT-DeductReasoner model get on the MAWPS dataset | 94.3 |
| Food-101 | DePT | DePT: Decoupled Prompt Tuning | 2023-09-14 | https://arxiv.org/abs/2309.07439v2 | https://github.com/koorye/dept | In the paper 'DePT: Decoupled Prompt Tuning', what Harmonic mean score did the DePT model get on the Food-101 dataset | 91.22 |
| IllusionVQA | GPT4-Vision 4-shot | IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models | 2024-03-23 | https://arxiv.org/abs/2403.15952v3 | https://github.com/csebuetnlp/illusionvqa | In the paper 'IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models', what Accuracy score did the GPT4-Vision 4-shot model get on the IllusionVQA dataset | 62.99 |
| INRIA Aerial Image Labeling | WSDNet | Ultra-High Resolution Segmentation with Ultra-Rich Context: A Novel Benchmark | 2023-05-18 | https://arxiv.org/abs/2305.10899v1 | https://github.com/jankyee/urur | In the paper 'Ultra-High Resolution Segmentation with Ultra-Rich Context: A Novel Benchmark', what mIOU score did the WSDNet model get on the INRIA Aerial Image Labeling dataset | 0.752 |
| Shot2Story20K | Shotluck-Holmes (3.1B) | Shotluck Holmes: A Family of Efficient Small-Scale Large Language Vision Models For Video Captioning and Summarization | 2024-05-31 | https://arxiv.org/abs/2405.20648v2 | https://github.com/Skyline-9/Shotluck-Holmes | In the paper 'Shotluck Holmes: A Family of Efficient Small-Scale Large Language Vision Models For Video Captioning and Summarization', what CIDEr score did the Shotluck-Holmes (3.1B) model get on the Shot2Story20K dataset | 152.3 |
| Stanford Cars | TResnet-L + PMD | Progressive Multi-task Anti-Noise Learning and Distilling Frameworks for Fine-grained Vehicle Recognition | 2024-01-25 | https://arxiv.org/abs/2401.14336v1 | https://github.com/dichao-liu/anti-noise_fgvr | In the paper 'Progressive Multi-task Anti-Noise Learning and Distilling Frameworks for Fine-grained Vehicle Recognition', what Accuracy score did the TResnet-L + PMD model get on the Stanford Cars dataset | 97.3% |
| Set5 - 2x upscaling | DRCT-L | DRCT: Saving Image Super-resolution away from Information Bottleneck | 2024-03-31 | https://arxiv.org/abs/2404.00722v5 | https://github.com/ming053l/drct | In the paper 'DRCT: Saving Image Super-resolution away from Information Bottleneck', what PSNR score did the DRCT-L model get on the Set5 - 2x upscaling dataset | 39.14 |
| CIFAR-10 (partial ratio 0.5) | ILL | Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations | 2023-05-22 | https://arxiv.org/abs/2305.12715v4 | https://github.com/hhhhhhao/general-framework-weak-supervision | In the paper 'Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations', what Accuracy score did the ILL model get on the CIFAR-10 (partial ratio 0.5) dataset | 95.91 |
| TNL2K | ARTrackV2-L | ARTrackV2: Prompting Autoregressive Tracker Where to Look and How to Describe | 2023-12-28 | https://arxiv.org/abs/2312.17133v3 | https://github.com/miv-xjtu/artrack | In the paper 'ARTrackV2: Prompting Autoregressive Tracker Where to Look and How to Describe', what AUC score did the ARTrackV2-L model get on the TNL2K dataset | 61.6 |
| split CIFAR-100 | Model with negotiation paradigm | Negotiated Representations to Prevent Forgetting in Machine Learning Applications | 2023-11-30 | https://arxiv.org/abs/2312.00237v1 | https://github.com/nurikorhan/negotiated-representations-for-continual-learning | In the paper 'Negotiated Representations to Prevent Forgetting in Machine Learning Applications', what Percentage Average accuracy - 5 tasks score did the Model with negotiation paradigm model get on the split CIFAR-100 dataset | 34.9 |
| AffectNet | CAGE | CAGE: Circumplex Affect Guided Expression Inference | 2024-04-23 | https://arxiv.org/abs/2404.14975v1 | https://github.com/wagner-niklas/cage_expression_inference | In the paper 'CAGE: Circumplex Affect Guided Expression Inference', what Accuracy (7 emotion) score did the CAGE model get on the AffectNet dataset | 66.6 |
| DiDeMo | COSA | COSA: Concatenated Sample Pretrained Vision-Language Foundation Model | 2023-06-15 | https://arxiv.org/abs/2306.09085v1 | https://github.com/txh-mercury/cosa | In the paper 'COSA: Concatenated Sample Pretrained Vision-Language Foundation Model', what text-to-video R@1 score did the COSA model get on the DiDeMo dataset | 70.5 |
| CNN / Daily Mail | SRformer-BART | Segmented Recurrent Transformer: An Efficient Sequence-to-Sequence Model | 2023-05-24 | https://arxiv.org/abs/2305.16340v3 | https://github.com/yinghanlong/SRtransformer | In the paper 'Segmented Recurrent Transformer: An Efficient Sequence-to-Sequence Model', what ROUGE-1 score did the SRformer-BART model get on the CNN / Daily Mail dataset | 43.19 |
| DanceTrack | MeMOTR | MeMOTR: Long-Term Memory-Augmented Transformer for Multi-Object Tracking | 2023-07-28 | https://arxiv.org/abs/2307.15700v3 | https://github.com/mcg-nju/memotr | In the paper 'MeMOTR: Long-Term Memory-Augmented Transformer for Multi-Object Tracking', what HOTA score did the MeMOTR model get on the DanceTrack dataset | 68.5 |
| GTA5 to Cityscapes | MIC + Guidance Training | Improve Cross-domain Mixed Sampling with Guidance Training for Adaptive Segmentation | 2024-03-22 | https://arxiv.org/abs/2403.14995v1 | https://github.com/wenlve-zhou/guidance-training | In the paper 'Improve Cross-domain Mixed Sampling with Guidance Training for Adaptive Segmentation', what mIoU score did the MIC + Guidance Training model get on the GTA5 to Cityscapes dataset | 67.0 |
| MCubeS | MMSFormer (RGB) | MMSFormer: Multimodal Transformer for Material and Semantic Segmentation | 2023-09-07 | https://arxiv.org/abs/2309.04001v4 | https://github.com/csiplab/mmsformer | In the paper 'MMSFormer: Multimodal Transformer for Material and Semantic Segmentation', what mIoU score did the MMSFormer (RGB) model get on the MCubeS dataset | 50.44% |
| FGVC Aircraft | SaSPA + CAL | Advancing Fine-Grained Classification by Structure and Subject Preserving Augmentation | 2024-06-20 | https://arxiv.org/abs/2406.14551v2 | https://github.com/eyalmichaeli/saspa-aug | In the paper 'Advancing Fine-Grained Classification by Structure and Subject Preserving Augmentation', what Accuracy score did the SaSPA + CAL model get on the FGVC Aircraft dataset | 94.5 |
| ETTh1 (336) Multivariate | Minusformer-96 | Minusformer: Improving Time Series Forecasting by Progressively Learning Residuals | 2024-02-04 | https://arxiv.org/abs/2402.02332v3 | https://github.com/anoise/minusformer | In the paper 'Minusformer: Improving Time Series Forecasting by Progressively Learning Residuals', what MSE score did the Minusformer-96 model get on the ETTh1 (336) Multivariate dataset | 0.465 |
| Kvasir-SEG | RaBiT | RaBiT: An Efficient Transformer using Bidirectional Feature Pyramid Network with Reverse Attention for Colon Polyp Segmentation | 2023-07-12 | https://arxiv.org/abs/2307.06420v1 | https://github.com/nguyenhoangthuan99/RaBiT | In the paper 'RaBiT: An Efficient Transformer using Bidirectional Feature Pyramid Network with Reverse Attention for Colon Polyp Segmentation', what mean Dice score did the RaBiT model get on the Kvasir-SEG dataset | 0.927 |
| RealBlur-R (trained on GoPro) | DeblurDiNAT-L | DeblurDiNAT: A Generalizable Transformer for Perceptual Image Deblurring | 2024-03-19 | https://arxiv.org/abs/2403.13163v4 | https://github.com/hanzhouliu/deblurdinat | In the paper 'DeblurDiNAT: A Generalizable Transformer for Perceptual Image Deblurring', what PSNR (sRGB) score did the DeblurDiNAT-L model get on the RealBlur-R (trained on GoPro) dataset | 36.09 |
| GoogleEarth | CityDreamer | CityDreamer: Compositional Generative Model of Unbounded 3D Cities | 2023-09-01 | https://arxiv.org/abs/2309.00610v3 | https://github.com/hzxie/CityDreamer | In the paper 'CityDreamer: Compositional Generative Model of Unbounded 3D Cities', what KID score did the CityDreamer model get on the GoogleEarth dataset | 0.096 |
| Clotho | SLAM-AAC | SLAM-AAC: Enhancing Audio Captioning with Paraphrasing Augmentation and CLAP-Refine through LLMs | 2024-10-12 | https://arxiv.org/abs/2410.09503v1 | https://github.com/X-LANCE/SLAM-LLM | In the paper 'SLAM-AAC: Enhancing Audio Captioning with Paraphrasing Augmentation and CLAP-Refine through LLMs', what CIDEr score did the SLAM-AAC model get on the Clotho dataset | 0.515 |
| ScanNetV2 | SuperCluster | Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering | 2024-01-12 | https://arxiv.org/abs/2401.06704v2 | https://github.com/drprojects/superpoint_transformer | In the paper 'Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering', what PQ score did the SuperCluster model get on the ScanNetV2 dataset | 58.7 |
| MoB | VTN | Malicious or Benign? Towards Effective Content Moderation for Children's Videos | 2023-05-24 | https://arxiv.org/abs/2305.15551v1 | https://github.com/syedhammadahmed/mob | In the paper 'Malicious or Benign? Towards Effective Content Moderation for Children's Videos', what Accuracy score did the VTN model get on the MoB dataset | 77.85 |
| CATT | CATT EO | CATT: Character-based Arabic Tashkeel Transformer | 2024-07-03 | https://arxiv.org/abs/2407.03236v3 | https://github.com/abjadai/catt | In the paper 'CATT: Character-based Arabic Tashkeel Transformer', what DER(%) score did the CATT EO model get on the CATT dataset | 8.762 |
| SIM10K to Cityscapes | MIC (ResNet50-FPN) | Align and Distill: Unifying and Improving Domain Adaptive Object Detection | 2024-03-18 | https://arxiv.org/abs/2403.12029v2 | https://github.com/justinkay/aldi | In the paper 'Align and Distill: Unifying and Improving Domain Adaptive Object Detection', what mAP@0.5 score did the MIC (ResNet50-FPN) model get on the SIM10K to Cityscapes dataset | 73.1 |
| ScreenSpot | UGround-7B | Navigating the Digital World as Humans Do: Universal Visual Grounding for GUI Agents | 2024-10-07 | https://arxiv.org/abs/2410.05243v1 | https://github.com/OSU-NLP-Group/UGround | In the paper 'Navigating the Digital World as Humans Do: Universal Visual Grounding for GUI Agents', what Accuracy (%) score did the UGround-7B model get on the ScreenSpot dataset | 73.3 |
| VietMed | XLSR-53 | VietMed: A Dataset and Benchmark for Automatic Speech Recognition of Vietnamese in the Medical Domain | 2024-04-08 | https://arxiv.org/abs/2404.05659v2 | https://github.com/leduckhai/multimed | In the paper 'VietMed: A Dataset and Benchmark for Automatic Speech Recognition of Vietnamese in the Medical Domain', what Dev WER score did the XLSR-53 model get on the VietMed dataset | 45.2 |
| ogbl-citation2 | GCN + Heuristic Encoding | Can GNNs Learn Link Heuristics? A Concise Review and Evaluation of Link Prediction Methods | 2024-11-22 | https://arxiv.org/abs/2411.14711v1 | https://github.com/astroming/GNNHE | In the paper 'Can GNNs Learn Link Heuristics? A Concise Review and Evaluation of Link Prediction Methods', what Test MRR score did the GCN + Heuristic Encoding model get on the ogbl-citation2 dataset | 0.8891 ± 0.0005 |
| MS COCO | RAT-Diffusion | Data Extrapolation for Text-to-image Generation on Small Datasets | 2024-10-02 | https://arxiv.org/abs/2410.01638v1 | https://github.com/senmaoy/RAT-Diffusion | In the paper 'Data Extrapolation for Text-to-image Generation on Small Datasets', what FID score did the RAT-Diffusion model get on the MS COCO dataset | 5.00 |
| ColonINST-v1 (Unseen) | Bunny-v1.0-3B (w/ LoRA, w/ extra data) | Efficient Multimodal Learning from Data-centric Perspective | 2024-02-18 | https://arxiv.org/abs/2402.11530v3 | https://github.com/baai-dcai/bunny | In the paper 'Efficient Multimodal Learning from Data-centric Perspective', what Accuracy score did the Bunny-v1.0-3B (w/ LoRA, w/ extra data) model get on the ColonINST-v1 (Unseen) dataset | 79.50 |
| GSM8K | AlphaLLM (with MCTS) | Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing | 2024-04-18 | https://arxiv.org/abs/2404.12253v2 | https://github.com/yetianjhu/alphallm | In the paper 'Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing', what Accuracy score did the AlphaLLM (with MCTS) model get on the GSM8K dataset | 92 |
| NTU RGB+D 120 | SkateFormer | SkateFormer: Skeletal-Temporal Transformer for Human Action Recognition | 2024-03-14 | https://arxiv.org/abs/2403.09508v3 | https://github.com/KAIST-VICLab/SkateFormer | In the paper 'SkateFormer: Skeletal-Temporal Transformer for Human Action Recognition', what Accuracy (Cross-Subject) score did the SkateFormer model get on the NTU RGB+D 120 dataset | 92.3 |
| Cityscapes test | SwinMTL | SwinMTL: A Shared Architecture for Simultaneous Depth Estimation and Semantic Segmentation from Monocular Camera Images | 2024-03-15 | https://arxiv.org/abs/2403.10662v1 | https://github.com/pardistaghavi/swinmtl | In the paper 'SwinMTL: A Shared Architecture for Simultaneous Depth Estimation and Semantic Segmentation from Monocular Camera Images', what Mean IoU (class) score did the SwinMTL model get on the Cityscapes test dataset | 76.41% |
| ScanObjectNN | ULIP-2 + PointNeXt | ULIP-2: Towards Scalable Multimodal Pre-training for 3D Understanding | 2023-05-14 | https://arxiv.org/abs/2305.08275v4 | https://github.com/salesforce/ulip | In the paper 'ULIP-2: Towards Scalable Multimodal Pre-training for 3D Understanding', what Overall Accuracy score did the ULIP-2 + PointNeXt model get on the ScanObjectNN dataset | 91.5 |
| PASCAL Context-59 | SED | SED: A Simple Encoder-Decoder for Open-Vocabulary Semantic Segmentation | 2023-11-27 | https://arxiv.org/abs/2311.15537v2 | https://github.com/xb534/sed | In the paper 'SED: A Simple Encoder-Decoder for Open-Vocabulary Semantic Segmentation', what mIoU score did the SED model get on the PASCAL Context-59 dataset | 60.6 |
| ZINC | TIGT | Topology-Informed Graph Transformer | 2024-02-03 | https://arxiv.org/abs/2402.02005v1 | https://github.com/leemingo/tigt | In the paper 'Topology-Informed Graph Transformer', what MAE score did the TIGT model get on the ZINC dataset | 0.057 |
| COCO-Stuff-171 | TTD (TCL) | TTD: Text-Tag Self-Distillation Enhancing Image-Text Alignment in CLIP to Alleviate Single Tag Bias | 2024-03-30 | https://arxiv.org/abs/2404.00384v2 | https://github.com/shjo-april/TTD | In the paper 'TTD: Text-Tag Self-Distillation Enhancing Image-Text Alignment in CLIP to Alleviate Single Tag Bias', what mIoU score did the TTD (TCL) model get on the COCO-Stuff-171 dataset | 23.7 |
| HumanEval | MapCoder (GPT-4) | MapCoder: Multi-Agent Code Generation for Competitive Problem Solving | 2024-05-18 | https://arxiv.org/abs/2405.11403v1 | https://github.com/md-ashraful-pramanik/mapcoder | In the paper 'MapCoder: Multi-Agent Code Generation for Competitive Problem Solving', what Pass@1 score did the MapCoder (GPT-4) model get on the HumanEval dataset | 93.9 |
| Fish-100 | BUCTD-preNet-W48 (DLCRNet) | Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity | 2023-06-13 | https://arxiv.org/abs/2306.07879v2 | https://github.com/amathislab/BUCTD | In the paper 'Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity', what mAP score did the BUCTD-preNet-W48 (DLCRNet) model get on the Fish-100 dataset | 88.7 |
| RSITMD | GeoRSCLIP-FT | RS5M and GeoRSCLIP: A Large Scale Vision-Language Dataset and A Large Vision-Language Model for Remote Sensing | 2023-06-20 | https://arxiv.org/abs/2306.11300v5 | https://github.com/om-ai-lab/rs5m | In the paper 'RS5M and GeoRSCLIP: A Large Scale Vision-Language Dataset and A Large Vision-Language Model for Remote Sensing', what Mean Recall score did the GeoRSCLIP-FT model get on the RSITMD dataset | 51.81% |
| Office-Home | POEM | POEM: Polarization of Embeddings for Domain-Invariant Representations | 2023-05-22 | https://arxiv.org/abs/2305.13046v1 | https://github.com/josangyoung/official-poem | In the paper 'POEM: Polarization of Embeddings for Domain-Invariant Representations', what Average Accuracy score did the POEM model get on the Office-Home dataset | 68.0 |
| EconLogicQA | Yi-6B | EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning | 2024-05-13 | https://arxiv.org/abs/2405.07938v2 | https://github.com/yinzhu-quan/lm-evaluation-harness | In the paper 'EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning', what Accuracy score did the Yi-6B model get on the EconLogicQA dataset | 0.0385 |
| UTKFace | ResNet-50-Unimodal-Concentrated | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10 | https://arxiv.org/abs/2307.04570v3 | https://github.com/paplhjak/facial-age-estimation-benchmark | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-Unimodal-Concentrated model get on the UTKFace dataset | 4.47 |
| EQ-Bench | migtissera/SynthIA-70B-v1.5 | EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models | 2023-12-11 | https://arxiv.org/abs/2312.06281v2 | https://github.com/eq-bench/eq-bench | In the paper 'EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models', what EQ-Bench Score did the migtissera/SynthIA-70B-v1.5 model get on the EQ-Bench dataset | 54.83 |
| LibriSpeech test-other | Zipformer+CR-CTC (no external language model) | CR-CTC: Consistency regularization on CTC for improved speech recognition | 2024-10-07 | https://arxiv.org/abs/2410.05101v3 | https://github.com/k2-fsa/icefall | In the paper 'CR-CTC: Consistency regularization on CTC for improved speech recognition', what Word Error Rate (WER) score did the Zipformer+CR-CTC (no external language model) model get on the LibriSpeech test-other dataset | 4.35 |
| CIFAR-10 | RDGAN | A High-Quality Robust Diffusion Framework for Corrupted Dataset | 2023-11-28 | https://arxiv.org/abs/2311.17101v2 | https://github.com/VinAIResearch/RDUOT | In the paper 'A High-Quality Robust Diffusion Framework for Corrupted Dataset', what FID score did the RDGAN model get on the CIFAR-10 dataset | 3.53 |
| PeMSD8 | PM-DMNet(P) | Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction | 2024-08-12 | https://arxiv.org/abs/2408.07100v1 | https://github.com/wengwenchao123/PM-DMNet | In the paper 'Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction', what 12 steps MAE score did the PM-DMNet(P) model get on the PeMSD8 dataset | 13.55 |
| VoxCeleb | ReDimNet-B5-SF2-LM-ASNorm (9.2M) | Reshape Dimensions Network for Speaker Recognition | 2024-07-25 | https://arxiv.org/abs/2407.18223v2 | https://github.com/IDRnD/ReDimNet | In the paper 'Reshape Dimensions Network for Speaker Recognition', what EER score did the ReDimNet-B5-SF2-LM-ASNorm (9.2M) model get on the VoxCeleb dataset | 0.39 |
| WHAMR! | TD-Conformer (M) + DM | On Time Domain Conformer Models for Monaural Speech Separation in Noisy Reverberant Acoustic Environments | 2023-10-09 | https://arxiv.org/abs/2310.06125v1 | https://github.com/jwr1995/pubsep | In the paper 'On Time Domain Conformer Models for Monaural Speech Separation in Noisy Reverberant Acoustic Environments', what SI-SDRi score did the TD-Conformer (M) + DM model get on the WHAMR! dataset | 12 |
| Atari 2600 Space Invaders | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07 | https://arxiv.org/abs/2305.04180v3 | https://github.com/xinjinghao/color | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score did the ASL DDQN model get on the Atari 2600 Space Invaders dataset | 21602 |
| MSL | CARLA | CARLA: Self-supervised Contrastive Representation Learning for Time Series Anomaly Detection | 2023-08-18 | https://arxiv.org/abs/2308.09296v4 | https://github.com/zamanzadeh/CARLA | In the paper 'CARLA: Self-supervised Contrastive Representation Learning for Time Series Anomaly Detection', what precision score did the CARLA model get on the MSL dataset | 0.3891 |
| MM-Vet | LinVT | LinVT: Empower Your Image-level Large Language Model to Understand Videos | 2024-12-06 | https://arxiv.org/abs/2412.05185v2 | https://github.com/gls0425/linvt | In the paper 'LinVT: Empower Your Image-level Large Language Model to Understand Videos', what GPT-4 score did the LinVT model get on the MM-Vet dataset | 23.5 |
| Office-Home | GSDE | Gradual Source Domain Expansion for Unsupervised Domain Adaptation | 2023-11-16 | https://arxiv.org/abs/2311.09599v1 | https://github.com/ThomasWestfechtel/GSDE | In the paper 'Gradual Source Domain Expansion for Unsupervised Domain Adaptation', what Accuracy score did the GSDE model get on the Office-Home dataset | 73.6 |
| ImageNet 256x256 | TiTok-B-32 | An Image is Worth 32 Tokens for Reconstruction and Generation | 2024-06-11 | https://arxiv.org/abs/2406.07550v1 | https://github.com/bytedance/1d-tokenizer | In the paper 'An Image is Worth 32 Tokens for Reconstruction and Generation', what FID score did the TiTok-B-32 model get on the ImageNet 256x256 dataset | 2.77 |
| ImageNet 128x128 | PaGoDA | PaGoDA: Progressive Growing of a One-Step Generator from a Low-Resolution Diffusion Teacher | 2024-05-23 | https://arxiv.org/abs/2405.14822v2 | https://github.com/sony/pagoda | In the paper 'PaGoDA: Progressive Growing of a One-Step Generator from a Low-Resolution Diffusion Teacher', what FID score did the PaGoDA model get on the ImageNet 128x128 dataset | 1.48 |
| TVBench | PLLaVA-34B | PLLaVA : Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning | 2024-04-25 | https://arxiv.org/abs/2404.16994v2 | https://github.com/magic-research/PLLaVA | In the paper 'PLLaVA : Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning', what Average Accuracy score did the PLLaVA-34B model get on the TVBench dataset | 41.9 |
| Set14 - 2x upscaling | DRCT | DRCT: Saving Image Super-resolution away from Information Bottleneck | 2024-03-31 | https://arxiv.org/abs/2404.00722v5 | https://github.com/ming053l/drct | In the paper 'DRCT: Saving Image Super-resolution away from Information Bottleneck', what PSNR score did the DRCT model get on the Set14 - 2x upscaling dataset | 34.96 |
| CUB-200-2011 | SMDL-Attribution (ICLR version) | Less is More: Fewer Interpretable Region via Submodular Subset Selection | 2024-02-14 | https://arxiv.org/abs/2402.09164v3 | https://github.com/ruoyuchen10/smdl-attribution | In the paper 'Less is More: Fewer Interpretable Region via Submodular Subset Selection', what Insertion AUC (ResNet-101) score did the SMDL-Attribution (ICLR version) model get on the CUB-200-2011 dataset | 0.7262 |
| DomainNet | GMDG (ResNet-50) | Rethinking Multi-domain Generalization with A General Learning Objective | 2024-02-29 | https://arxiv.org/abs/2402.18853v1 | https://github.com/zhaorui-tan/GMDG_cvpr2024 | In the paper 'Rethinking Multi-domain Generalization with A General Learning Objective', what Average Accuracy score did the GMDG (ResNet-50) model get on the DomainNet dataset | 44.6 |
| ASVspoof 2019 - LA | GREEN-SSL-SVM | Exploring Green AI for Audio Deepfake Detection | 2024-03-21 | https://arxiv.org/abs/2403.14290v1 | https://github.com/sahasubhajit/Speech-Spoofing- | In the paper 'Exploring Green AI for Audio Deepfake Detection', what EER score did the GREEN-SSL-SVM model get on the ASVspoof 2019 - LA dataset | 0.90 |
| WHU Building Dataset | RSM-CD | RS-Mamba for Large Remote Sensing Image Dense Prediction | 2024-04-03 | https://arxiv.org/abs/2404.02668v2 | https://github.com/walking-shadow/Official_Remote_Sensing_Mamba | In the paper 'RS-Mamba for Large Remote Sensing Image Dense Prediction', what F1-score did the RSM-CD model get on the WHU Building Dataset | 0.9187 |
| Stanford Cars | ZLaP* | Label Propagation for Zero-shot Classification with Vision-Language Models | 2024-04-05 | https://arxiv.org/abs/2404.04072v1 | https://github.com/vladan-stojnic/zlap | In the paper 'Label Propagation for Zero-shot Classification with Vision-Language Models', what Accuracy score did the ZLaP* model get on the Stanford Cars dataset | 71.8 |
| Visual Genome | SpeaQ (without reweighting) | Groupwise Query Specialization and Quality-Aware Multi-Assignment for Transformer-based Visual Relationship Detection | 2024-03-26 | https://arxiv.org/abs/2403.17709v1 | https://github.com/mlvlab/speaq | In the paper 'Groupwise Query Specialization and Quality-Aware Multi-Assignment for Transformer-based Visual Relationship Detection', what Recall@50 score did the SpeaQ (without reweighting) model get on the Visual Genome dataset | 32.9 |
| Weather2K79 (192) | MoLE-DLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11 | https://arxiv.org/abs/2312.06786v3 | https://github.com/rogerni/mole | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the Weather2K79 (192) dataset | 0.566 |
| VisA | MuSc (zero-shot) | MuSc: Zero-Shot Industrial Anomaly Classification and Segmentation with Mutual Scoring of the Unlabeled Images | 2024-01-30 | https://arxiv.org/abs/2401.16753v1 | https://github.com/xrli-U/MuSc | In the paper 'MuSc: Zero-Shot Industrial Anomaly Classification and Segmentation with Mutual Scoring of the Unlabeled Images', what Detection AUROC score did the MuSc (zero-shot) model get on the VisA dataset | 92.8 |
| Fashion-MNIST | Spiking-Diffusion | Spiking-Diffusion: Vector Quantized Discrete Diffusion Model with Spiking Neural Networks | 2023-08-20 | https://arxiv.org/abs/2308.10187v4 | https://github.com/Arktis2022/Spiking-Diffusion | In the paper 'Spiking-Diffusion: Vector Quantized Discrete Diffusion Model with Spiking Neural Networks', what FID score did the Spiking-Diffusion model get on the Fashion-MNIST dataset | 91.98 |
| UCF101 | ZLaP* | Label Propagation for Zero-shot Classification with Vision-Language Models | 2024-04-05 | https://arxiv.org/abs/2404.04072v1 | https://github.com/vladan-stojnic/zlap | In the paper 'Label Propagation for Zero-shot Classification with Vision-Language Models', what Accuracy score did the ZLaP* model get on the UCF101 dataset | 76.3 |
| Breakfast | MA-LMM | MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding | 2024-04-08 | https://arxiv.org/abs/2404.05726v2 | https://github.com/boheumd/MA-LMM | In the paper 'MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding', what Accuracy (%) score did the MA-LMM model get on the Breakfast dataset | 93.0 |
| ActivityNet-1.3 | UniMD+Sync. | UniMD: Towards Unifying Moment Retrieval and Temporal Action Detection | 2024-04-07 | https://arxiv.org/abs/2404.04933v2 | https://github.com/yingsen1/unimd | In the paper 'UniMD: Towards Unifying Moment Retrieval and Temporal Action Detection', what mAP IOU@0.5 score did the UniMD+Sync. model get on the ActivityNet-1.3 dataset | 60.29 |
Assembly101 | HandFormer-B/21 | On the Utility of 3D Hand Poses for Action Recognition | 2024-03-14T00:00:00 | https://arxiv.org/abs/2403.09805v2 | [
"https://github.com/s-shamil/HandFormer"
] | In the paper 'On the Utility of 3D Hand Poses for Action Recognition', what Actions Top-1 score did the HandFormer-B/21 model get on the Assembly101 dataset
| 41.06 |
WHAMR! | SepReformer-L + DM | Separate and Reconstruct: Asymmetric Encoder-Decoder for Speech Separation | 2024-06-10T00:00:00 | https://arxiv.org/abs/2406.05983v3 | [
"https://github.com/dmlguq456/SepReformer"
] | In the paper 'Separate and Reconstruct: Asymmetric Encoder-Decoder for Speech Separation', what SI-SDRi score did the SepReformer-L + DM model get on the WHAMR! dataset
| 17.1 |
MVTec AD | CPR-faster | Target before Shooting: Accurate Anomaly Detection and Localization under One Millisecond via Cascade Patch Retrieval | 2023-08-13T00:00:00 | https://arxiv.org/abs/2308.06748v1 | [
"https://github.com/flyinghu123/cpr"
] | In the paper 'Target before Shooting: Accurate Anomaly Detection and Localization under One Millisecond via Cascade Patch Retrieval', what Detection AUROC score did the CPR-faster model get on the MVTec AD dataset
| 99.4 |
FreiHAND | HaMeR | Reconstructing Hands in 3D with Transformers | 2023-12-08T00:00:00 | https://arxiv.org/abs/2312.05251v1 | [
"https://github.com/geopavlakos/hamer"
] | In the paper 'Reconstructing Hands in 3D with Transformers', what PA-MPVPE score did the HaMeR model get on the FreiHAND dataset
| 5.7 |
LEVIR-CD | CDMaskFormer | Rethinking Remote Sensing Change Detection With A Mask View | 2024-06-21T00:00:00 | https://arxiv.org/abs/2406.15320v1 | [
"https://github.com/xwmaxwma/rschange"
] | In the paper 'Rethinking Remote Sensing Change Detection With A Mask View', what F1 score did the CDMaskFormer model get on the LEVIR-CD dataset
| 90.66 |
BigEarthNet (official test set) | FG-MAE (ViT-S/16) | Feature Guided Masked Autoencoder for Self-supervised Learning in Remote Sensing | 2023-10-28T00:00:00 | https://arxiv.org/abs/2310.18653v1 | [
"https://github.com/zhu-xlab/fgmae"
] | In the paper 'Feature Guided Masked Autoencoder for Self-supervised Learning in Remote Sensing', what mAP (micro) score did the FG-MAE (ViT-S/16) model get on the BigEarthNet (official test set) dataset
| 89.3 |
VTAB-1k(Natural<7>) | GateVPT(ViT-B/16_MoCo_v3_pretrained_ImageNet-1K) | Improving Visual Prompt Tuning for Self-supervised Vision Transformers | 2023-06-08T00:00:00 | https://arxiv.org/abs/2306.05067v1 | [
"https://github.com/ryongithub/gatedprompttuning"
] | In the paper 'Improving Visual Prompt Tuning for Self-supervised Vision Transformers', what Mean Accuracy score did the GateVPT(ViT-B/16_MoCo_v3_pretrained_ImageNet-1K) model get on the VTAB-1k(Natural<7>) dataset
| 74.84 |
HIV | ChemBFN | A Bayesian Flow Network Framework for Chemistry Tasks | 2024-07-28T00:00:00 | https://arxiv.org/abs/2407.20294v1 | [
"https://github.com/Augus1999/bayesian-flow-network-for-chemistry"
] | In the paper 'A Bayesian Flow Network Framework for Chemistry Tasks', what ROC-AUC score did the ChemBFN model get on the HIV dataset
| 79.37 |
MLO-Cn2 | Persistence | Effective Benchmarks for Optical Turbulence Modeling | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03573v1 | [
"https://github.com/cdjellen/otbench"
] | In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the Persistence model get on the MLO-Cn2 dataset
| 1.209 |
CNRPark+EXT | ResNet50 | Revising deep learning methods in parking lot occupancy detection | 2023-06-07T00:00:00 | https://arxiv.org/abs/2306.04288v3 | [
"https://github.com/eighonet/parking-research"
] | In the paper 'Revising deep learning methods in parking lot occupancy detection', what F1-score score did the ResNet50 model get on the CNRPark+EXT dataset
| 0.938 |
ColonINST-v1 (Unseen) | MobileVLM-1.7B (w/ LoRA, w/ extra data) | MobileVLM : A Fast, Strong and Open Vision Language Assistant for Mobile Devices | 2023-12-28T00:00:00 | https://arxiv.org/abs/2312.16886v2 | [
"https://github.com/meituan-automl/mobilevlm"
] | In the paper 'MobileVLM : A Fast, Strong and Open Vision Language Assistant for Mobile Devices', what Accuracy score did the MobileVLM-1.7B (w/ LoRA, w/ extra data) model get on the ColonINST-v1 (Unseen) dataset
| 80.44 |
ImageNet-1k vs iNaturalist | SCALE (ResNet50) | Scaling for Training Time and Post-hoc Out-of-distribution Detection Enhancement | 2023-09-30T00:00:00 | https://arxiv.org/abs/2310.00227v1 | [
"https://github.com/kai422/scale"
] | In the paper 'Scaling for Training Time and Post-hoc Out-of-distribution Detection Enhancement', what FPR95 score did the SCALE (ResNet50) model get on the ImageNet-1k vs iNaturalist dataset
| 9.5 |
Urban100 - 4x upscaling | DRCT-L | DRCT: Saving Image Super-resolution away from Information Bottleneck | 2024-03-31T00:00:00 | https://arxiv.org/abs/2404.00722v5 | [
"https://github.com/ming053l/drct"
] | In the paper 'DRCT: Saving Image Super-resolution away from Information Bottleneck', what PSNR score did the DRCT-L model get on the Urban100 - 4x upscaling dataset
| 28.70 |
Atari 2600 Zaxxon | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07T00:00:00 | https://arxiv.org/abs/2305.04180v3 | [
"https://github.com/xinjinghao/color"
] | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score score did the ASL DDQN model get on the Atari 2600 Zaxxon dataset
| 16420 |
Story Cloze | PaLM 2-M (one-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | [
"https://github.com/eternityyw/tram-benchmark"
] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-M (one-shot) model get on the Story Cloze dataset
| 86.7 |
ADE20K-150 | FC-CLIP | Convolutions Die Hard: Open-Vocabulary Segmentation with Single Frozen Convolutional CLIP | 2023-08-04T00:00:00 | https://arxiv.org/abs/2308.02487v2 | [
"https://github.com/bytedance/fc-clip"
] | In the paper 'Convolutions Die Hard: Open-Vocabulary Segmentation with Single Frozen Convolutional CLIP', what mIoU score did the FC-CLIP model get on the ADE20K-150 dataset
| 34.1 |
Rambo Benchmark | Rambo | RAMBO: Enhancing RAG-based Repository-Level Method Body Completion | 2024-09-23T00:00:00 | https://arxiv.org/abs/2409.15204v2 | [
"https://github.com/ise-uet-vnu/rambo"
] | In the paper 'RAMBO: Enhancing RAG-based Repository-Level Method Body Completion', what Compilation Rate score did the Rambo model get on the Rambo Benchmark dataset
| 58.94 |
PCQM4Mv2-LSC | GPTrans-L | Graph Propagation Transformer for Graph Representation Learning | 2023-05-19T00:00:00 | https://arxiv.org/abs/2305.11424v3 | [
"https://github.com/czczup/gptrans"
] | In the paper 'Graph Propagation Transformer for Graph Representation Learning', what Validation MAE score did the GPTrans-L model get on the PCQM4Mv2-LSC dataset
| 0.0809 |
CIFAR-10, 100 Labels (OpenSet, 6/4) | UnMixMatch | Scaling Up Semi-supervised Learning with Unconstrained Unlabelled Data | 2023-06-02T00:00:00 | https://arxiv.org/abs/2306.01222v2 | [
"https://github.com/shuvenduroy/unmixmatch"
] | In the paper 'Scaling Up Semi-supervised Learning with Unconstrained Unlabelled Data', what Accuracy score did the UnMixMatch model get on the CIFAR-10, 100 Labels (OpenSet, 6/4) dataset
| 96.8 |
MM-Vet | LLaVA-VT (Vicuna-13B) | Beyond Embeddings: The Promise of Visual Table in Visual Reasoning | 2024-03-27T00:00:00 | https://arxiv.org/abs/2403.18252v2 | [
"https://github.com/lavi-lab/visual-table"
] | In the paper 'Beyond Embeddings: The Promise of Visual Table in Visual Reasoning', what GPT-4 score score did the LLaVA-VT (Vicuna-13B) model get on the MM-Vet dataset
| 39.8 |
Criteo | WMLFF | Weighted Multi-Level Feature Factorization for App ads CTR and installation prediction | 2023-08-03T00:00:00 | https://arxiv.org/abs/2308.02568v1 | [
"https://github.com/knife982000/recsys2023challenge"
] | In the paper 'Weighted Multi-Level Feature Factorization for App ads CTR and installation prediction', what AUC score did the WMLFF model get on the Criteo dataset
| 0.804 |
SSv2-template retrieval | vid-TLDR (UMT-L) | vid-TLDR: Training Free Token merging for Light-weight Video Transformer | 2024-03-20T00:00:00 | https://arxiv.org/abs/2403.13347v2 | [
"https://github.com/mlvlab/vid-tldr"
] | In the paper 'vid-TLDR: Training Free Token merging for Light-weight Video Transformer', what text-to-video R@1 score did the vid-TLDR (UMT-L) model get on the SSv2-template retrieval dataset
| 90.2 |
MS-COCO | GKGNet(resolution 448) | GKGNet: Group K-Nearest Neighbor based Graph Convolutional Network for Multi-Label Image Recognition | 2023-08-28T00:00:00 | https://arxiv.org/abs/2308.14378v3 | [
"https://github.com/jin-s13/gkgnet"
] | In the paper 'GKGNet: Group K-Nearest Neighbor based Graph Convolutional Network for Multi-Label Image Recognition', what mAP score did the GKGNet(resolution 448) model get on the MS-COCO dataset
| 86.7 |
Atari 2600 Krull | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07T00:00:00 | https://arxiv.org/abs/2305.04180v3 | [
"https://github.com/xinjinghao/color"
] | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score score did the ASL DDQN model get on the Atari 2600 Krull dataset
| 10422.5 |
Adult | MambaNet | MambaTab: A Plug-and-Play Model for Learning Tabular Data | 2024-01-16T00:00:00 | https://arxiv.org/abs/2401.08867v2 | [
"https://github.com/atik-ahamed/mambatab"
] | In the paper 'MambaTab: A Plug-and-Play Model for Learning Tabular Data', what AUROC score did the MambaNet model get on the Adult dataset
| 0.914 |
RefCOCO+ testA | MaskRIS (Swin-B, combined DB) | MaskRIS: Semantic Distortion-aware Data Augmentation for Referring Image Segmentation | 2024-11-28T00:00:00 | https://arxiv.org/abs/2411.19067v1 | [
"https://github.com/naver-ai/maskris"
] | In the paper 'MaskRIS: Semantic Distortion-aware Data Augmentation for Referring Image Segmentation', what Overall IoU score did the MaskRIS (Swin-B, combined DB) model get on the RefCOCO+ testA dataset
| 75.15 |
Atari 2600 Pitfall! | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07T00:00:00 | https://arxiv.org/abs/2305.04180v3 | [
"https://github.com/xinjinghao/color"
] | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score score did the ASL DDQN model get on the Atari 2600 Pitfall! dataset
| 0 |
TS50 | SPIN | Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement | 2023-05-20T00:00:00 | https://arxiv.org/abs/2305.15151v4 | [
"https://github.com/A4Bio/OpenCPD"
] | In the paper 'Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement', what Sequence Recovery %(All) score did the SPIN model get on the TS50 dataset
| 30.3 |
Atari 2600 Asteroids | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07T00:00:00 | https://arxiv.org/abs/2305.04180v3 | [
"https://github.com/xinjinghao/color"
] | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score score did the ASL DDQN model get on the Atari 2600 Asteroids dataset
| 1984.5 |
BBBP | SMA | Self-Guided Masked Autoencoders for Domain-Agnostic Self-Supervised Learning | 2024-02-22T00:00:00 | https://arxiv.org/abs/2402.14789v1 | [
"https://github.com/johnathan-xie/sma"
] | In the paper 'Self-Guided Masked Autoencoders for Domain-Agnostic Self-Supervised Learning', what ROC-AUC score did the SMA model get on the BBBP dataset
| 75.0 |