| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
| DUTS-TE | BiRefNet (DUTS, HRSOD, UHRSD) | Bilateral Reference for High-Resolution Dichotomous Image Segmentation | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03407v6 | ["https://github.com/zhengpeng7/birefnet"] | In the paper 'Bilateral Reference for High-Resolution Dichotomous Image Segmentation', what MAE score did the BiRefNet (DUTS, HRSOD, UHRSD) model get on the DUTS-TE dataset | 0.018 |
| THUMOS’14 | RDFA-S6 (InternVideo2-6B) | Enhancing Temporal Action Localization: Advanced S6 Modeling with Recurrent Mechanism | 2024-07-18T00:00:00 | https://arxiv.org/abs/2407.13078v1 | ["https://github.com/lsy0882/RDFA-S6"] | In the paper 'Enhancing Temporal Action Localization: Advanced S6 Modeling with Recurrent Mechanism', what mAP IOU@0.5 score did the RDFA-S6 (InternVideo2-6B) model get on the THUMOS’14 dataset | 78.2 |
| MOT17 | MOTIP (Deformable-DETR) | Multiple Object Tracking as ID Prediction | 2024-03-25T00:00:00 | https://arxiv.org/abs/2403.16848v1 | ["https://github.com/MCG-NJU/MOTIP"] | In the paper 'Multiple Object Tracking as ID Prediction', what HOTA score did the MOTIP (Deformable-DETR) model get on the MOT17 dataset | 59.2 |
| fake | LightGBM + OpenAI embedding | PyTorch Frame: A Modular Framework for Multi-Modal Tabular Learning | 2024-03-31T00:00:00 | https://arxiv.org/abs/2404.00776v1 | ["https://github.com/pyg-team/pytorch-frame"] | In the paper 'PyTorch Frame: A Modular Framework for Multi-Modal Tabular Learning', what AUROC score did the LightGBM + OpenAI embedding model get on the fake dataset | 0.966 |
| SQuAD | Blended RAG | Blended RAG: Improving RAG (Retriever-Augmented Generation) Accuracy with Semantic Search and Hybrid Query-Based Retrievers | 2024-03-22T00:00:00 | https://arxiv.org/abs/2404.07220v2 | ["https://github.com/ibm-ecosystem-engineering/blended-rag"] | In the paper 'Blended RAG: Improving RAG (Retriever-Augmented Generation) Accuracy with Semantic Search and Hybrid Query-Based Retrievers', what Exact Match score did the Blended RAG model get on the SQuAD dataset | 57.63 |
| MaleVis | Levit-MC | Accelerating Malware Classification: A Vision Transformer Solution | 2024-09-28T00:00:00 | https://arxiv.org/abs/2409.19461v1 | ["https://github.com/Shrey-55/MalwareClassification"] | In the paper 'Accelerating Malware Classification: A Vision Transformer Solution', what Accuracy score did the Levit-MC model get on the MaleVis dataset | 96.6 |
| REDDIT-12K | G-Tuning | Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns | 2023-12-21T00:00:00 | https://arxiv.org/abs/2312.13583v1 | ["https://github.com/zjunet/G-Tuning"] | In the paper 'Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns', what Accuracy (10 fold) score did the G-Tuning model get on the REDDIT-12K dataset | 42.80 |
| ShanghaiTech | VideoPatchCore | VideoPatchCore: An Effective Method to Memorize Normality for Video Anomaly Detection | 2024-09-24T00:00:00 | https://arxiv.org/abs/2409.16225v5 | ["https://github.com/SkiddieAhn/Paper-VideoPatchCore"] | In the paper 'VideoPatchCore: An Effective Method to Memorize Normality for Video Anomaly Detection', what AUC score did the VideoPatchCore model get on the ShanghaiTech dataset | 85.1% |
| IllusionVQA | GPT4-Vision 4-shot+CoT | IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models | 2024-03-23T00:00:00 | https://arxiv.org/abs/2403.15952v3 | ["https://github.com/csebuetnlp/illusionvqa"] | In the paper 'IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models', what Accuracy score did the GPT4-Vision 4-shot+CoT model get on the IllusionVQA dataset | 49.7 |
| SOD4SB Private Test | E2 method (Normalized Gaussian Wasserstein Distance + Switch Hard Augmentation + Multi scale train + Weight Moving Average + CenterNet + VarifocalNet) | MVA2023 Small Object Detection Challenge for Spotting Birds: Dataset, Methods, and Results | 2023-07-18T00:00:00 | https://arxiv.org/abs/2307.09143v1 | ["https://github.com/iim-ttij/mva2023smallobjectdetection4spottingbirds"] | In the paper 'MVA2023 Small Object Detection Challenge for Spotting Birds: Dataset, Methods, and Results', what AP50 score did the E2 method (Normalized Gaussian Wasserstein Distance + Switch Hard Augmentation + Multi scale train + Weight Moving Average + CenterNet + VarifocalNet) model get on the SOD4SB Private Test dataset | 22.1 |
| Ant + Maze | STAR | Reconciling Spatial and Temporal Abstractions for Goal Representation | 2024-01-18T00:00:00 | https://arxiv.org/abs/2401.09870v2 | ["https://github.com/cosynus-lix/STAR"] | In the paper 'Reconciling Spatial and Temporal Abstractions for Goal Representation', what Return score did the STAR model get on the Ant + Maze dataset | 0.85 |
| CIFAR-100 | DKD++(T:resnet-32x4, S:resnet-8x4) | Improving Knowledge Distillation via Regularizing Feature Norm and Direction | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.17007v1 | ["https://github.com/wangyz1608/knowledge-distillation-via-nd"] | In the paper 'Improving Knowledge Distillation via Regularizing Feature Norm and Direction', what Top-1 Accuracy (%) score did the DKD++(T:resnet-32x4, S:resnet-8x4) model get on the CIFAR-100 dataset | 76.28 |
| PASCAL-5i (1-Shot) | QCLNet (ResNet-101) | Quaternion-valued Correlation Learning for Few-Shot Semantic Segmentation | 2023-05-12T00:00:00 | https://arxiv.org/abs/2305.07283v3 | ["https://github.com/zwzheng98/qclnet"] | In the paper 'Quaternion-valued Correlation Learning for Few-Shot Semantic Segmentation', what Mean IoU score did the QCLNet (ResNet-101) model get on the PASCAL-5i (1-Shot) dataset | 67 |
| VeRi-776 | MBR4B-LAI (w/ RK) | Strength in Diversity: Multi-Branch Representation Learning for Vehicle Re-Identification | 2023-10-02T00:00:00 | https://arxiv.org/abs/2310.01129v1 | ["https://github.com/videturfortuna/vehicle_reid_itsc2023"] | In the paper 'Strength in Diversity: Multi-Branch Representation Learning for Vehicle Re-Identification', what mAP score did the MBR4B-LAI (w/ RK) model get on the VeRi-776 dataset | 92.1 |
| KIT Motion-Language | BAD (OAAS) | BAD: Bidirectional Auto-regressive Diffusion for Text-to-Motion Generation | 2024-09-17T00:00:00 | https://arxiv.org/abs/2409.10847v1 | ["https://github.com/rohollahhs/bad"] | In the paper 'BAD: Bidirectional Auto-regressive Diffusion for Text-to-Motion Generation', what FID score did the BAD (OAAS) model get on the KIT Motion-Language dataset | 0.221 |
| Weather2K850 (96) | MoLE-DLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06786v3 | ["https://github.com/rogerni/mole"] | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the Weather2K850 (96) dataset | 0.474 |
| AffectNet | FMAE | Representation Learning and Identity Adversarial Training for Facial Behavior Understanding | 2024-07-15T00:00:00 | https://arxiv.org/abs/2407.11243v1 | ["https://github.com/forever208/fmae-iat"] | In the paper 'Representation Learning and Identity Adversarial Training for Facial Behavior Understanding', what Accuracy (8 emotion) score did the FMAE model get on the AffectNet dataset | 65.00 |
| Watercolor2k | CDDMSL | Semi-Supervised Domain Generalization for Object Detection via Language-Guided Feature Alignment | 2023-09-24T00:00:00 | https://arxiv.org/abs/2309.13525v1 | ["https://github.com/sinamalakouti/CDDMSL"] | In the paper 'Semi-Supervised Domain Generalization for Object Detection via Language-Guided Feature Alignment', what MAP score did the CDDMSL model get on the Watercolor2k dataset | 49.8 |
| PASCAL-5i (1-Shot) | QCLNet (VGG-16) | Quaternion-valued Correlation Learning for Few-Shot Semantic Segmentation | 2023-05-12T00:00:00 | https://arxiv.org/abs/2305.07283v3 | ["https://github.com/zwzheng98/qclnet"] | In the paper 'Quaternion-valued Correlation Learning for Few-Shot Semantic Segmentation', what Mean IoU score did the QCLNet (VGG-16) model get on the PASCAL-5i (1-Shot) dataset | 60.6 |
| ImageNet 256x256 | DiffiT | DiffiT: Diffusion Vision Transformers for Image Generation | 2023-12-04T00:00:00 | https://arxiv.org/abs/2312.02139v3 | ["https://github.com/nvlabs/diffit"] | In the paper 'DiffiT: Diffusion Vision Transformers for Image Generation', what FID score did the DiffiT model get on the ImageNet 256x256 dataset | 1.73 |
| Leaderboard | LLaVA-Plus (13B) | LLaVA-Plus: Learning to Use Tools for Creating Multimodal Agents | 2023-11-09T00:00:00 | https://arxiv.org/abs/2311.05437v1 | ["https://github.com/LLaVA-VL/LLaVA-Plus-Codebase"] | In the paper 'LLaVA-Plus: Learning to Use Tools for Creating Multimodal Agents', what ELO Rating score did the LLaVA-Plus (13B) model get on the Leaderboard dataset | 1203 |
| ETTh2 (192) Multivariate | RLinear | Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping | 2023-05-18T00:00:00 | https://arxiv.org/abs/2305.10721v1 | ["https://github.com/plumprc/rtsf"] | In the paper 'Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping', what MSE score did the RLinear model get on the ETTh2 (192) Multivariate dataset | 0.319 |
| ImageNet 512x512 | DiffiT | DiffiT: Diffusion Vision Transformers for Image Generation | 2023-12-04T00:00:00 | https://arxiv.org/abs/2312.02139v3 | ["https://github.com/nvlabs/diffit"] | In the paper 'DiffiT: Diffusion Vision Transformers for Image Generation', what FID score did the DiffiT model get on the ImageNet 512x512 dataset | 2.67 |
| DeLiVER | StitchFusion (RGB-D-Event) | StitchFusion: Weaving Any Visual Modalities to Enhance Multimodal Semantic Segmentation | 2024-08-02T00:00:00 | https://arxiv.org/abs/2408.01343v1 | ["https://github.com/libingyu01/stitchfusion-stitchfusion-weaving-any-visual-modalities-to-enhance-multimodal-semantic-segmentation"] | In the paper 'StitchFusion: Weaving Any Visual Modalities to Enhance Multimodal Semantic Segmentation', what mIoU score did the StitchFusion (RGB-D-Event) model get on the DeLiVER dataset | 66.03 |
| MagnaTagATune (clean) | EAsT-Final + PaSST | Audio Embeddings as Teachers for Music Classification | 2023-06-30T00:00:00 | https://arxiv.org/abs/2306.17424v1 | ["https://github.com/suncerock/EAsT-music-classification"] | In the paper 'Audio Embeddings as Teachers for Music Classification', what ROC-AUC score did the EAsT-Final + PaSST model get on the MagnaTagATune (clean) dataset | 91.2 |
| MATH | GPT-4 Turbo (MACM, w/code, voting) | MACM: Utilizing a Multi-Agent System for Condition Mining in Solving Complex Mathematical Problems | 2024-04-06T00:00:00 | https://arxiv.org/abs/2404.04735v2 | ["https://github.com/bin123apple/macm"] | In the paper 'MACM: Utilizing a Multi-Agent System for Condition Mining in Solving Complex Mathematical Problems', what Accuracy score did the GPT-4 Turbo (MACM, w/code, voting) model get on the MATH dataset | 87.920 |
| LLAMAS | FENetV2 | FENet: Focusing Enhanced Network for Lane Detection | 2023-12-28T00:00:00 | https://arxiv.org/abs/2312.17163v6 | ["https://github.com/hanyangzhong/fenet"] | In the paper 'FENet: Focusing Enhanced Network for Lane Detection', what mF1 score did the FENetV2 model get on the LLAMAS dataset | 71.85 |
| CROHME 2019 | TAMER | TAMER: Tree-Aware Transformer for Handwritten Mathematical Expression Recognition | 2024-08-16T00:00:00 | https://arxiv.org/abs/2408.08578v2 | ["https://github.com/qingzhenduyu/tamer"] | In the paper 'TAMER: Tree-Aware Transformer for Handwritten Mathematical Expression Recognition', what ExpRate score did the TAMER model get on the CROHME 2019 dataset | 61.97 |
| ETTm2 (192) Multivariate | TSMixer | TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting | 2023-06-14T00:00:00 | https://arxiv.org/abs/2306.09364v4 | ["https://github.com/ibm/tsfm"] | In the paper 'TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting', what MSE score did the TSMixer model get on the ETTm2 (192) Multivariate dataset | 0.219 |
| LAMBADA | PaLM 2-L (one-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-L (one-shot) model get on the LAMBADA dataset | 86.9 |
| TVQA | LLaMA-VQA | Large Language Models are Temporal and Causal Reasoners for Video Question Answering | 2023-10-24T00:00:00 | https://arxiv.org/abs/2310.15747v2 | ["https://github.com/mlvlab/Flipped-VQA"] | In the paper 'Large Language Models are Temporal and Causal Reasoners for Video Question Answering', what Accuracy score did the LLaMA-VQA model get on the TVQA dataset | 82.2 |
| CUHK-PEDES | TBPS-CLIP (ViT-B/16) | An Empirical Study of CLIP for Text-based Person Search | 2023-08-19T00:00:00 | https://arxiv.org/abs/2308.10045v2 | ["https://github.com/flame-chasers/tbps-clip"] | In the paper 'An Empirical Study of CLIP for Text-based Person Search', what R@1 score did the TBPS-CLIP (ViT-B/16) model get on the CUHK-PEDES dataset | 73.54 |
| EconLogicQA | Mistral-7B-v0.2 | EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning | 2024-05-13T00:00:00 | https://arxiv.org/abs/2405.07938v2 | ["https://github.com/yinzhu-quan/lm-evaluation-harness"] | In the paper 'EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning', what Accuracy score did the Mistral-7B-v0.2 model get on the EconLogicQA dataset | 0.2615 |
| UBnormal | MoCoDAD | Multimodal Motion Conditioned Diffusion Model for Skeleton-based Video Anomaly Detection | 2023-07-14T00:00:00 | https://arxiv.org/abs/2307.07205v3 | ["https://github.com/aleflabo/MoCoDAD"] | In the paper 'Multimodal Motion Conditioned Diffusion Model for Skeleton-based Video Anomaly Detection', what AUC score did the MoCoDAD model get on the UBnormal dataset | 68.3% |
| RefCOCOg-test | MaskRIS (Swin-B, combined DB) | MaskRIS: Semantic Distortion-aware Data Augmentation for Referring Image Segmentation | 2024-11-28T00:00:00 | https://arxiv.org/abs/2411.19067v1 | ["https://github.com/naver-ai/maskris"] | In the paper 'MaskRIS: Semantic Distortion-aware Data Augmentation for Referring Image Segmentation', what Overall IoU score did the MaskRIS (Swin-B, combined DB) model get on the RefCOCOg-test dataset | 71.09 |
| MM-Vet | ShareGPT4V-7B | ShareGPT4V: Improving Large Multi-Modal Models with Better Captions | 2023-11-21T00:00:00 | https://arxiv.org/abs/2311.12793v2 | ["https://github.com/InternLM/InternLM-XComposer/tree/main/projects/ShareGPT4V"] | In the paper 'ShareGPT4V: Improving Large Multi-Modal Models with Better Captions', what GPT-4 score did the ShareGPT4V-7B model get on the MM-Vet dataset | 37.6 |
| Human3.6M | GLA-GCN (T=243, GT) | GLA-GCN: Global-local Adaptive Graph Convolutional Network for 3D Human Pose Estimation from Monocular Video | 2023-07-12T00:00:00 | https://arxiv.org/abs/2307.05853v2 | ["https://github.com/bruceyo/GLA-GCN"] | In the paper 'GLA-GCN: Global-local Adaptive Graph Convolutional Network for 3D Human Pose Estimation from Monocular Video', what Average MPJPE (mm) score did the GLA-GCN (T=243, GT) model get on the Human3.6M dataset | 21.0 |
| Yelp | SplitEE-S | SplitEE: Early Exit in Deep Neural Networks with Split Computing | 2023-09-17T00:00:00 | https://arxiv.org/abs/2309.09195v1 | ["https://github.com/Div290/SplitEE/blob/main/README.md"] | In the paper 'SplitEE: Early Exit in Deep Neural Networks with Split Computing', what Accuracy score did the SplitEE-S model get on the Yelp dataset | 76.7 |
| NC4K | ZoomNeXt-PVTv2-B5 | ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection | 2023-10-31T00:00:00 | https://arxiv.org/abs/2310.20208v4 | ["https://github.com/lartpang/zoomnext"] | In the paper 'ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection', what S-measure score did the ZoomNeXt-PVTv2-B5 model get on the NC4K dataset | 0.903 |
| | Voyage-code-002 | CoIR: A Comprehensive Benchmark for Code Information Retrieval Models | 2024-07-03T00:00:00 | https://arxiv.org/abs/2407.02883v1 | ["https://github.com/coir-team/coir"] | In the paper 'CoIR: A Comprehensive Benchmark for Code Information Retrieval Models', what nDCG@10 score did the Voyage-code-002 model get on the dataset | 56.26 |
| LTCC | FIRe2 | Exploring Fine-Grained Representation and Recomposition for Cloth-Changing Person Re-Identification | 2023-08-21T00:00:00 | https://arxiv.org/abs/2308.10692v2 | ["https://github.com/qizaowang/fire-ccreid"] | In the paper 'Exploring Fine-Grained Representation and Recomposition for Cloth-Changing Person Re-Identification', what Rank-1 score did the FIRe2 model get on the LTCC dataset | 44.6 |
| RefCoCo val | MagNet | Mask Grounding for Referring Image Segmentation | 2023-12-19T00:00:00 | https://arxiv.org/abs/2312.12198v2 | ["https://github.com/yxchng/mask-grounding"] | In the paper 'Mask Grounding for Referring Image Segmentation', what Overall IoU score did the MagNet model get on the RefCoCo val dataset | 75.24 |
| Stanford Cars | Real-Guidance + CAL | Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation? | 2023-05-22T00:00:00 | https://arxiv.org/abs/2305.12954v1 | ["https://github.com/zhengli97/dm-kd"] | In the paper 'Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation?', what 8-shot Accuracy score did the Real-Guidance + CAL model get on the Stanford Cars dataset | 73.1 |
| TAO | AED (Co-DETR) | Associate Everything Detected: Facilitating Tracking-by-Detection to the Unknown | 2024-09-14T00:00:00 | https://arxiv.org/abs/2409.09293v1 | ["https://github.com/balabooooo/aed"] | In the paper 'Associate Everything Detected: Facilitating Tracking-by-Detection to the Unknown', what TETA score did the AED (Co-DETR) model get on the TAO dataset | 55.3 |
| CausalGym | Difference-in-means | CausalGym: Benchmarking causal interpretability methods on linguistic tasks | 2024-02-19T00:00:00 | https://arxiv.org/abs/2402.12560v1 | ["https://github.com/aryamanarora/causalgym"] | In the paper 'CausalGym: Benchmarking causal interpretability methods on linguistic tasks', what Log odds-ratio (pythia-6.9b) score did the Difference-in-means model get on the CausalGym dataset | 2.91 |
| CoNLL-2014 Shared Task | Majority-voting ensemble on best 7 models | Pillars of Grammatical Error Correction: Comprehensive Inspection Of Contemporary Approaches In The Era of Large Language Models | 2024-04-23T00:00:00 | https://arxiv.org/abs/2404.14914v1 | ["https://github.com/grammarly/pillars-of-gec"] | In the paper 'Pillars of Grammatical Error Correction: Comprehensive Inspection Of Contemporary Approaches In The Era of Large Language Models', what F0.5 score did the Majority-voting ensemble on best 7 models model get on the CoNLL-2014 Shared Task dataset | 71.8 |
| WHU-CD | CGNet | Change Guiding Network: Incorporating Change Prior to Guide Change Detection in Remote Sensing Imagery | 2024-04-14T00:00:00 | https://arxiv.org/abs/2404.09179v1 | ["https://github.com/chengxihan/cgnet-cd"] | In the paper 'Change Guiding Network: Incorporating Change Prior to Guide Change Detection in Remote Sensing Imagery', what F1 score did the CGNet model get on the WHU-CD dataset | 92.59 |
| V2XSet | V2X-AHD | V2X-AHD: Vehicle-to-Everything Cooperation Perception via Asymmetric Heterogenous Distillation Network | 2023-10-10T00:00:00 | https://arxiv.org/abs/2310.06603v1 | ["https://github.com/feeling0414-lab/V2X-AHD"] | In the paper 'V2X-AHD: Vehicle-to-Everything Cooperation Perception via Asymmetric Heterogenous Distillation Network', what AP0.5 (Perfect) score did the V2X-AHD model get on the V2XSet dataset | 0.855 |
| WinoGrande | LLaMA2-7b | GIFT-SW: Gaussian noise Injected Fine-Tuning of Salient Weights for LLMs | 2024-08-27T00:00:00 | https://arxiv.org/abs/2408.15300v1 | ["https://github.com/On-Point-RND/GIFT_SW"] | In the paper 'GIFT-SW: Gaussian noise Injected Fine-Tuning of Salient Weights for LLMs', what Accuracy (%) score did the LLaMA2-7b model get on the WinoGrande dataset | 70.80 |
| MATH | OpenMath-Mistral-7B (w/ code, SC, k=50) | OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | 2024-02-15T00:00:00 | https://arxiv.org/abs/2402.10176v2 | ["https://github.com/kipok/nemo-skills"] | In the paper 'OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset', what Accuracy score did the OpenMath-Mistral-7B (w/ code, SC, k=50) model get on the MATH dataset | 57.2 |
| ImageNet-100 (Class-IL, 5T) | MoCo + CaSSLe | Regularizing with Pseudo-Negatives for Continual Self-Supervised Learning | 2023-06-08T00:00:00 | https://arxiv.org/abs/2306.05101v2 | ["https://github.com/csm9493/PNR"] | In the paper 'Regularizing with Pseudo-Negatives for Continual Self-Supervised Learning', what Top 1 Accuracy score did the MoCo + CaSSLe model get on the ImageNet-100 (Class-IL, 5T) dataset | 63.49 |
| miniF2F-test | MMOS-DeepSeekMath-7B | An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning | 2024-02-23T00:00:00 | https://arxiv.org/abs/2403.00799v1 | ["https://github.com/cyzhh/MMOS"] | In the paper 'An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning', what Pass@1 score did the MMOS-DeepSeekMath-7B model get on the miniF2F-test dataset | 28.3 |
| CIRR | SPN4CIR | Improving Composed Image Retrieval via Contrastive Learning with Scaling Positives and Negatives | 2024-04-17T00:00:00 | https://arxiv.org/abs/2404.11317v2 | ["https://github.com/BUAADreamer/SPN4CIR"] | In the paper 'Improving Composed Image Retrieval via Contrastive Learning with Scaling Positives and Negatives', what (Recall@5+Recall_subset@1)/2 score did the SPN4CIR model get on the CIRR dataset | 82.69 |
| Squirrel | Dir-GNN | Edge Directionality Improves Learning on Heterophilic Graphs | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10498v3 | ["https://github.com/emalgorithm/directed-graph-neural-network"] | In the paper 'Edge Directionality Improves Learning on Heterophilic Graphs', what Accuracy score did the Dir-GNN model get on the Squirrel dataset | 75.31±1.92 |
| MAWPS | Exp-Tree | An Expression Tree Decoding Strategy for Mathematical Equation Generation | 2023-10-14T00:00:00 | https://arxiv.org/abs/2310.09619v3 | ["https://github.com/zwq2018/multi-view-consistency-for-mwp"] | In the paper 'An Expression Tree Decoding Strategy for Mathematical Equation Generation', what Accuracy (%) score did the Exp-Tree model get on the MAWPS dataset | 92.3 |
| AFAD | ResNet-50-DLDL | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10T00:00:00 | https://arxiv.org/abs/2307.04570v3 | ["https://github.com/paplhjak/facial-age-estimation-benchmark"] | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-DLDL model get on the AFAD dataset | 3.14 |
| MATH | MMOS-DeepSeekMath-7B(0-shot,k=50) | An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning | 2024-02-23T00:00:00 | https://arxiv.org/abs/2403.00799v1 | ["https://github.com/cyzhh/MMOS"] | In the paper 'An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning', what Accuracy score did the MMOS-DeepSeekMath-7B(0-shot,k=50) model get on the MATH dataset | 63.7 |
| CBSD68 sigm75 | MeD | Multi-view Self-supervised Disentanglement for General Image Denoising | 2023-09-10T00:00:00 | https://arxiv.org/abs/2309.05049v1 | ["https://github.com/chqwer2/multi-view-self-supervised-disentanglement-denoising"] | In the paper 'Multi-view Self-supervised Disentanglement for General Image Denoising', what PSNR/SSIM score did the MeD model get on the CBSD68 sigm75 dataset | 25.40/0.6645 |
| EMNIST-Letters | Spiking-Diffusion | Spiking-Diffusion: Vector Quantized Discrete Diffusion Model with Spiking Neural Networks | 2023-08-20T00:00:00 | https://arxiv.org/abs/2308.10187v4 | ["https://github.com/Arktis2022/Spiking-Diffusion"] | In the paper 'Spiking-Diffusion: Vector Quantized Discrete Diffusion Model with Spiking Neural Networks', what FID score did the Spiking-Diffusion model get on the EMNIST-Letters dataset | 67.41 |
| ChartQA | Gemini Ultra | Gemini: A Family of Highly Capable Multimodal Models | 2023-12-19T00:00:00 | https://arxiv.org/abs/2312.11805v4 | ["https://github.com/valdecy/pybibx"] | In the paper 'Gemini: A Family of Highly Capable Multimodal Models', what 1:1 Accuracy score did the Gemini Ultra model get on the ChartQA dataset | 80.8 |
| COCO | CM3Leon-7B | Scaling Autoregressive Multi-Modal Models: Pretraining and Instruction Tuning | 2023-09-05T00:00:00 | https://arxiv.org/abs/2309.02591v1 | ["https://github.com/kyegomez/CM3Leon"] | In the paper 'Scaling Autoregressive Multi-Modal Models: Pretraining and Instruction Tuning', what FID score did the CM3Leon-7B model get on the COCO dataset | 4.88 |
| MM-Vet | MoAI | MoAI: Mixture of All Intelligence for Large Language and Vision Models | 2024-03-12T00:00:00 | https://arxiv.org/abs/2403.07508v3 | ["https://github.com/ByungKwanLee/MoAI"] | In the paper 'MoAI: Mixture of All Intelligence for Large Language and Vision Models', what GPT-4 score did the MoAI model get on the MM-Vet dataset | 43.7 |
| AudioSet | DASS-Medium (Audio-only, single) | DASS: Distilled Audio State Space Models Are Stronger and More Duration-Scalable Learners | 2024-07-04T00:00:00 | https://arxiv.org/abs/2407.04082v1 | ["https://github.com/Saurabhbhati/DASS"] | In the paper 'DASS: Distilled Audio State Space Models Are Stronger and More Duration-Scalable Learners', what Test mAP score did the DASS-Medium (Audio-only, single) model get on the AudioSet dataset | 0.476 |
| EconLogicQA | Llama-2-7B | EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning | 2024-05-13T00:00:00 | https://arxiv.org/abs/2405.07938v2 | ["https://github.com/yinzhu-quan/lm-evaluation-harness"] | In the paper 'EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning', what Accuracy score did the Llama-2-7B model get on the EconLogicQA dataset | 0.0077 |
| IndustReal | MViT-V2 | IndustReal: A Dataset for Procedure Step Recognition Handling Execution Errors in Egocentric Videos in an Industrial-Like Setting | 2023-10-26T00:00:00 | https://arxiv.org/abs/2310.17323v1 | ["https://github.com/timschoonbeek/industreal"] | In the paper 'IndustReal: A Dataset for Procedure Step Recognition Handling Execution Errors in Egocentric Videos in an Industrial-Like Setting', what Top-1 score did the MViT-V2 model get on the IndustReal dataset | 65.25 |
| MVTec LOCO AD | SINBAD+EfficientAD | Set Features for Anomaly Detection | 2023-11-24T00:00:00 | https://arxiv.org/abs/2311.14773v3 | ["https://github.com/NivC/SINBAD"] | In the paper 'Set Features for Anomaly Detection', what Avg. Detection AUROC score did the SINBAD+EfficientAD model get on the MVTec LOCO AD dataset | 94.2 |
| Wiki-CS | GraphSAGE | Half-Hop: A graph upsampling approach for slowing down message passing | 2023-08-17T00:00:00 | https://arxiv.org/abs/2308.09198v1 | ["https://github.com/nerdslab/halfhop"] | In the paper 'Half-Hop: A graph upsampling approach for slowing down message passing', what Accuracy score did the GraphSAGE model get on the Wiki-CS dataset | 83.67 |
| NYU Depth v2 | SMMCL (ResNet-101) | Understanding Dark Scenes by Contrasting Multi-Modal Observations | 2023-08-23T00:00:00 | https://arxiv.org/abs/2308.12320v2 | ["https://github.com/palmdong/smmcl"] | In the paper 'Understanding Dark Scenes by Contrasting Multi-Modal Observations', what Mean IoU score did the SMMCL (ResNet-101) model get on the NYU Depth v2 dataset | 52.5% |
| spider | T5-3B+NatSQL+Token Preprocessing | Improving Generalization in Language Model-Based Text-to-SQL Semantic Parsing: Two Simple Semantic Boundary-Based Techniques | 2023-05-27T00:00:00 | https://arxiv.org/abs/2305.17378v1 | ["https://github.com/dakingrai/ood-generalization-semantic-boundary-techniques"] | In the paper 'Improving Generalization in Language Model-Based Text-to-SQL Semantic Parsing: Two Simple Semantic Boundary-Based Techniques', what Exact Match Accuracy (Dev) score did the T5-3B+NatSQL+Token Preprocessing model get on the spider dataset | 69.4 |
| Office-Home | PGA (ViT-B/16) | Enhancing Domain Adaptation through Prompt Gradient Alignment | 2024-06-13T00:00:00 | https://arxiv.org/abs/2406.09353v2 | ["https://github.com/viethoang1512/pga"] | In the paper 'Enhancing Domain Adaptation through Prompt Gradient Alignment', what Accuracy score did the PGA (ViT-B/16) model get on the Office-Home dataset | 85.1 |
| ImageNet 256x256 | SiT-XL/2 + REPA (with the guidance interval) | Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think | 2024-10-09T00:00:00 | https://arxiv.org/abs/2410.06940v2 | ["https://github.com/sihyun-yu/REPA"] | In the paper 'Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think', what FID score did the SiT-XL/2 + REPA (with the guidance interval) model get on the ImageNet 256x256 dataset | 1.42 |
| ARMBench | RISE (R101) | Robot Instance Segmentation with Few Annotations for Grasping | 2024-07-01T00:00:00 | https://arxiv.org/abs/2407.01302v1 | ["https://github.com/mkimhi/RISE"] | In the paper 'Robot Instance Segmentation with Few Annotations for Grasping', what AP50 score did the RISE (R101) model get on the ARMBench dataset | 84.74 |
| COCO-Stuff-27 | U2Seg | Unsupervised Universal Image Segmentation | 2023-12-28T00:00:00 | https://arxiv.org/abs/2312.17243v1 | ["https://github.com/u2seg/u2seg"] | In the paper 'Unsupervised Universal Image Segmentation', what Accuracy score did the U2Seg model get on the COCO-Stuff-27 dataset | 63.9 |
| ColonINST-v1 (Unseen) | LLaVA-Med-v1.0 (w/o LoRA, w/o extra data) | LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day | 2023-06-01T00:00:00 | https://arxiv.org/abs/2306.00890v1 | ["https://github.com/microsoft/LLaVA-Med"] | In the paper 'LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day', what Accuracy score did the LLaVA-Med-v1.0 (w/o LoRA, w/o extra data) model get on the ColonINST-v1 (Unseen) dataset | 78.04 |
| DALES | SuperCluster | Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering | 2024-01-12T00:00:00 | https://arxiv.org/abs/2401.06704v2 | ["https://github.com/drprojects/superpoint_transformer"] | In the paper 'Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering', what PQ score did the SuperCluster model get on the DALES dataset | 61.2 |
| Amazon Photo | GraphSAGE | Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification | 2024-06-13T00:00:00 | https://arxiv.org/abs/2406.08993v2 | ["https://github.com/LUOyk1999/tunedGNN"] | In the paper 'Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification', what Accuracy score did the GraphSAGE model get on the Amazon Photo dataset | 96.78 ± 0.23 |
| INRIA Aerial Image Labeling | UANet(Re2sNet50) | Building Extraction from Remote Sensing Images via an Uncertainty-Aware Network | 2023-07-23T00:00:00 | https://arxiv.org/abs/2307.12309v1 | ["https://github.com/henryjiepanli/uncertainty-aware-network"] | In the paper 'Building Extraction from Remote Sensing Images via an Uncertainty-Aware Network', what IoU score did the UANet(Re2sNet50) model get on the INRIA Aerial Image Labeling dataset | 83.17 |
| CIFAR-10 (250 Labels, ImageNet-100 Unlabeled) | UnMixMatch | Scaling Up Semi-supervised Learning with Unconstrained Unlabelled Data | 2023-06-02T00:00:00 | https://arxiv.org/abs/2306.01222v2 | ["https://github.com/shuvenduroy/unmixmatch"] | In the paper 'Scaling Up Semi-supervised Learning with Unconstrained Unlabelled Data', what Accuracy score did the UnMixMatch model get on the CIFAR-10 (250 Labels, ImageNet-100 Unlabeled) dataset | 68.72 |
| ImageNet 256x256 | ACDiT | ACDiT: Interpolating Autoregressive Conditional Modeling and Diffusion Transformer | 2024-12-10T00:00:00 | https://arxiv.org/abs/2412.07720v1 | ["https://github.com/thunlp/acdit"] | In the paper 'ACDiT: Interpolating Autoregressive Conditional Modeling and Diffusion Transformer', what FID score did the ACDiT model get on the ImageNet 256x256 dataset | 2.37 |
| BigEarthNet-S1 (official test set) | ViT-S/16 | Feature Guided Masked Autoencoder for Self-supervised Learning in Remote Sensing | 2023-10-28T00:00:00 | https://arxiv.org/abs/2310.18653v1 | ["https://github.com/zhu-xlab/fgmae"] | In the paper 'Feature Guided Masked Autoencoder for Self-supervised Learning in Remote Sensing', what mAP (micro) score did the ViT-S/16 model get on the BigEarthNet-S1 (official test set) dataset | 79.5 |
| KIT Motion-Language | DiverseMotion | DiverseMotion: Towards Diverse Human Motion Generation via Discrete Diffusion | 2023-09-04T00:00:00 | https://arxiv.org/abs/2309.01372v1 | ["https://github.com/axdfhj/mdd"] | In the paper 'DiverseMotion: Towards Diverse Human Motion Generation via Discrete Diffusion', what FID score did the DiverseMotion model get on the KIT Motion-Language dataset | 0.468 |
OVIS validation | DVIS++(VIT-L,Offline) | DVIS++: Improved Decoupled Framework for Universal Video Segmentation | 2023-12-20T00:00:00 | https://arxiv.org/abs/2312.13305v1 | [
"https://github.com/zhang-tao-whu/DVIS_Plus"
] | In the paper 'DVIS++: Improved Decoupled Framework for Universal Video Segmentation', what mask AP score did the DVIS++(VIT-L,Offline) model get on the OVIS validation dataset
| 53.4 |
HistGen WSI-Report Dataset | HistGen | HistGen: Histopathology Report Generation via Local-Global Feature Encoding and Cross-modal Context Interaction | 2024-03-08T00:00:00 | https://arxiv.org/abs/2403.05396v2 | [
"https://github.com/dddavid4real/HistGen"
] | In the paper 'HistGen: Histopathology Report Generation via Local-Global Feature Encoding and Cross-modal Context Interaction', what BLEU-4 score did the HistGen model get on the HistGen WSI-Report Dataset dataset
| 0.184 |
ModelNet40 | Point-JEPA (voting) | Point-JEPA: A Joint Embedding Predictive Architecture for Self-Supervised Learning on Point Cloud | 2024-04-25T00:00:00 | https://arxiv.org/abs/2404.16432v4 | [
"https://github.com/Ayumu-J-S/Point-JEPA"
] | In the paper 'Point-JEPA: A Joint Embedding Predictive Architecture for Self-Supervised Learning on Point Cloud', what Overall Accuracy score did the Point-JEPA (voting) model get on the ModelNet40 dataset
| 94.1±0.1 |
CC3M-TagMask | TTD (w/o fine-tuning) | TTD: Text-Tag Self-Distillation Enhancing Image-Text Alignment in CLIP to Alleviate Single Tag Bias | 2024-03-30T00:00:00 | https://arxiv.org/abs/2404.00384v2 | [
"https://github.com/shjo-april/TTD"
] | In the paper 'TTD: Text-Tag Self-Distillation Enhancing Image-Text Alignment in CLIP to Alleviate Single Tag Bias', what F1 score did the TTD (w/o fine-tuning) model get on the CC3M-TagMask dataset
| 78.5 |
VATEX | CoCap (ViT/L14) | Accurate and Fast Compressed Video Captioning | 2023-09-22T00:00:00 | https://arxiv.org/abs/2309.12867v2 | [
"https://github.com/acherstyx/CoCap"
] | In the paper 'Accurate and Fast Compressed Video Captioning', what BLEU-4 score did the CoCap (ViT/L14) model get on the VATEX dataset
| 35.8 |
ActivityNet-QA | Video Chat | VideoChat: Chat-Centric Video Understanding | 2023-05-10T00:00:00 | https://arxiv.org/abs/2305.06355v2 | [
"https://github.com/opengvlab/ask-anything"
] | In the paper 'VideoChat: Chat-Centric Video Understanding', what Accuracy score did the Video Chat model get on the ActivityNet-QA dataset
| 26.5 |
GSM8K | OpenMath-CodeLlama-34B (w/ code, SC, k=50) | OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | 2024-02-15T00:00:00 | https://arxiv.org/abs/2402.10176v2 | [
"https://github.com/kipok/nemo-skills"
] | In the paper 'OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset', what Accuracy score did the OpenMath-CodeLlama-34B (w/ code, SC, k=50) model get on the GSM8K dataset
| 88.0 |
CVC-ColonDB | ProMISe | ProMISe: Promptable Medical Image Segmentation using SAM | 2024-03-07T00:00:00 | https://arxiv.org/abs/2403.04164v3 | [
"https://github.com/xinkunwang111/promise"
] | In the paper 'ProMISe: Promptable Medical Image Segmentation using SAM', what mean Dice score did the ProMISe model get on the CVC-ColonDB dataset
| 0.874 |
Yelp2018 | NESCL | Neighborhood-Enhanced Supervised Contrastive Learning for Collaborative Filtering | 2024-02-18T00:00:00 | https://arxiv.org/abs/2402.11523v1 | [
"https://github.com/PeiJieSun/NESCL"
] | In the paper 'Neighborhood-Enhanced Supervised Contrastive Learning for Collaborative Filtering', what NDCG@20 score did the NESCL model get on the Yelp2018 dataset
| 0.0611 |
AGQA 2.0 balanced | AIO - ViT | Glance and Focus: Memory Prompting for Multi-Event Video Question Answering | 2024-01-03T00:00:00 | https://arxiv.org/abs/2401.01529v1 | [
"https://github.com/byz0e/glance-focus"
] | In the paper 'Glance and Focus: Memory Prompting for Multi-Event Video Question Answering', what Average Accuracy score did the AIO - ViT model get on the AGQA 2.0 balanced dataset
| 48.59 |
MP-100 | EdgeCape | Edge Weight Prediction For Category-Agnostic Pose Estimation | 2024-11-25T00:00:00 | https://arxiv.org/abs/2411.16665v1 | [
"https://github.com/orhir/EdgeCape"
] | In the paper 'Edge Weight Prediction For Category-Agnostic Pose Estimation', what Mean PCK@0.2 - 1shot score did the EdgeCape model get on the MP-100 dataset
| 89.01 |
Freyfaces | PaddingFlow | PaddingFlow: Improving Normalizing Flows with Padding-Dimensional Noise | 2024-03-13T00:00:00 | https://arxiv.org/abs/2403.08216v2 | [
"https://github.com/adamqlmeng/paddingflow"
] | In the paper 'PaddingFlow: Improving Normalizing Flows with Padding-Dimensional Noise', what MMD-L2 score did the PaddingFlow model get on the Freyfaces dataset
| 0.621 |
WikiText-103 | Skip Cross-Head Transformer-XL | Memory-efficient Stochastic methods for Memory-based Transformers | 2023-11-14T00:00:00 | https://arxiv.org/abs/2311.08123v1 | [
"https://github.com/vishwajit-vishnu/memory-efficient-stochastic-methods-for-memory-based-transformers"
] | In the paper 'Memory-efficient Stochastic methods for Memory-based Transformers', what Validation perplexity score did the Skip Cross-Head Transformer-XL model get on the WikiText-103 dataset
| 21.87 |
IAM | HTR-VT(line-level) | HTR-VT: Handwritten Text Recognition with Vision Transformer | 2024-09-13T00:00:00 | https://arxiv.org/abs/2409.08573v1 | [
"https://github.com/yutingli0606/htr-vt"
] | In the paper 'HTR-VT: Handwritten Text Recognition with Vision Transformer', what CER score did the HTR-VT(line-level) model get on the IAM dataset
| 4.7 |
X-Sum | PaLM 2-M (one-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | [
"https://github.com/eternityyw/tram-benchmark"
] | In the paper 'PaLM 2 Technical Report', what ROUGE-2 score did the PaLM 2-M (one-shot) model get on the X-Sum dataset
| 17.2 |
Bongard-OpenWorld | InstructBLIP + GPT-4 | Bongard-OpenWorld: Few-Shot Reasoning for Free-form Visual Concepts in the Real World | 2023-10-16T00:00:00 | https://arxiv.org/abs/2310.10207v5 | [
"https://github.com/joyjayng/Bongard-OpenWorld"
] | In the paper 'Bongard-OpenWorld: Few-Shot Reasoning for Free-form Visual Concepts in the Real World', what 2-Class Accuracy score did the InstructBLIP + GPT-4 model get on the Bongard-OpenWorld dataset
| 63.8 |
WHAM! | SepReformer-L + DM | Separate and Reconstruct: Asymmetric Encoder-Decoder for Speech Separation | 2024-06-10T00:00:00 | https://arxiv.org/abs/2406.05983v3 | [
"https://github.com/dmlguq456/SepReformer"
] | In the paper 'Separate and Reconstruct: Asymmetric Encoder-Decoder for Speech Separation', what SI-SDRi score did the SepReformer-L + DM model get on the WHAM! dataset
| 18.4 |
FSS-1000 (1-shot) | GF-SAM | Bridge the Points: Graph-based Few-shot Segment Anything Semantically | 2024-10-09T00:00:00 | https://arxiv.org/abs/2410.06964v2 | [
"https://github.com/ANDYZAQ/GF-SAM"
] | In the paper 'Bridge the Points: Graph-based Few-shot Segment Anything Semantically', what Mean IoU score did the GF-SAM model get on the FSS-1000 (1-shot) dataset
| 88 |
EQ-Bench | Qwen/Qwen-72B-Chat | EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06281v2 | [
"https://github.com/eq-bench/eq-bench"
] | In the paper 'EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models', what EQ-Bench Score score did the Qwen/Qwen-72B-Chat model get on the EQ-Bench dataset
| 52.44 |