| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
| Gowalla | NESCL | Neighborhood-Enhanced Supervised Contrastive Learning for Collaborative Filtering | 2024-02-18 | https://arxiv.org/abs/2402.11523v1 | https://github.com/PeiJieSun/NESCL | In the paper 'Neighborhood-Enhanced Supervised Contrastive Learning for Collaborative Filtering', what Recall@20 score did the NESCL model get on the Gowalla dataset? | 0.1917 |
| ETTh1 (336) Multivariate | GPHT* | Generative Pretrained Hierarchical Transformer for Time Series Forecasting | 2024-02-26 | https://arxiv.org/abs/2402.16516v2 | https://github.com/icantnamemyself/gpht | In the paper 'Generative Pretrained Hierarchical Transformer for Time Series Forecasting', what MSE score did the GPHT* model get on the ETTh1 (336) Multivariate dataset? | 0.456 |
| Atari 2600 Star Gunner | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07 | https://arxiv.org/abs/2305.04180v3 | https://github.com/xinjinghao/color | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what score did the ASL DDQN model get on the Atari 2600 Star Gunner dataset? | 129140 |
| COCO val2017 | U2Seg | Unsupervised Universal Image Segmentation | 2023-12-28 | https://arxiv.org/abs/2312.17243v1 | https://github.com/u2seg/u2seg | In the paper 'Unsupervised Universal Image Segmentation', what PQ score did the U2Seg model get on the COCO val2017 dataset? | 16.1 |
| COCO-SP | GatedGCN-HSG | Next Level Message-Passing with Hierarchical Support Graphs | 2024-06-22 | https://arxiv.org/abs/2406.15852v2 | https://github.com/carlosinator/support-graphs | In the paper 'Next Level Message-Passing with Hierarchical Support Graphs', what macro F1 score did the GatedGCN-HSG model get on the COCO-SP dataset? | 0.3535±0.0032 |
| Peptides-struct | PathNN | Path Neural Networks: Expressive and Accurate Graph Neural Networks | 2023-06-09 | https://arxiv.org/abs/2306.05955v1 | https://github.com/gasmichel/pathnns_expressive | In the paper 'Path Neural Networks: Expressive and Accurate Graph Neural Networks', what MAE score did the PathNN model get on the Peptides-struct dataset? | 0.2545±0.0032 |
| VLCS | SPG (CLIP, ViT-B/16) | Soft Prompt Generation for Domain Generalization | 2024-04-30 | https://arxiv.org/abs/2404.19286v2 | https://github.com/renytek13/soft-prompt-generation-with-cgan | In the paper 'Soft Prompt Generation for Domain Generalization', what Average Accuracy score did the SPG (CLIP, ViT-B/16) model get on the VLCS dataset? | 82.4 |
| ACE 2005 | GoLLIE | GoLLIE: Annotation Guidelines improve Zero-Shot Information-Extraction | 2023-10-05 | https://arxiv.org/abs/2310.03668v5 | https://github.com/hitz-zentroa/gollie | In the paper 'GoLLIE: Annotation Guidelines improve Zero-Shot Information-Extraction', what F1 score did the GoLLIE model get on the ACE 2005 dataset? | 89.6 |
| MVTec LOCO AD | ComAD+PatchCore | Component-aware anomaly detection framework for adjustable and logical industrial visual inspection | 2023-05-15 | https://arxiv.org/abs/2305.08509v1 | https://github.com/liutongkun/comad | In the paper 'Component-aware anomaly detection framework for adjustable and logical industrial visual inspection', what Avg. Detection AUROC score did the ComAD+PatchCore model get on the MVTec LOCO AD dataset? | 90.1 |
| PKLot | VGG-19 | Revising deep learning methods in parking lot occupancy detection | 2023-06-07 | https://arxiv.org/abs/2306.04288v3 | https://github.com/eighonet/parking-research | In the paper 'Revising deep learning methods in parking lot occupancy detection', what F1-score did the VGG-19 model get on the PKLot dataset? | 0.9988 |
| Pascal Panoptic Parts | HIPIE (ViT-H) | Hierarchical Open-vocabulary Universal Image Segmentation | 2023-07-03 | https://arxiv.org/abs/2307.00764v2 | https://github.com/berkeley-hipie/hipie | In the paper 'Hierarchical Open-vocabulary Universal Image Segmentation', what mIoUPartS score did the HIPIE (ViT-H) model get on the Pascal Panoptic Parts dataset? | 63.8 |
| Wisconsin | UniG-Encoder | UniG-Encoder: A Universal Feature Encoder for Graph and Hypergraph Node Classification | 2023-08-03 | https://arxiv.org/abs/2308.01650v1 | https://github.com/minhzou/unig-encoder | In the paper 'UniG-Encoder: A Universal Feature Encoder for Graph and Hypergraph Node Classification', what Accuracy score did the UniG-Encoder model get on the Wisconsin dataset? | 88.03±4.42 |
| VOC-MLT | LMPT(ResNet-50) | LMPT: Prompt Tuning with Class-Specific Embedding Loss for Long-tailed Multi-Label Visual Recognition | 2023-05-08 | https://arxiv.org/abs/2305.04536v2 | https://github.com/richard-peng-xia/LMPT | In the paper 'LMPT: Prompt Tuning with Class-Specific Embedding Loss for Long-tailed Multi-Label Visual Recognition', what Average mAP score did the LMPT(ResNet-50) model get on the VOC-MLT dataset? | 85.44 |
| MPI-INF-3DHP | MotionAGFormer-XS (T=27) | MotionAGFormer: Enhancing 3D Human Pose Estimation with a Transformer-GCNFormer Network | 2023-10-25 | https://arxiv.org/abs/2310.16288v1 | https://github.com/taatiteam/motionagformer | In the paper 'MotionAGFormer: Enhancing 3D Human Pose Estimation with a Transformer-GCNFormer Network', what AUC score did the MotionAGFormer-XS (T=27) model get on the MPI-INF-3DHP dataset? | 83.5 |
| CVC-ClinicDB | MADGNet | Modality-agnostic Domain Generalizable Medical Image Segmentation by Multi-Frequency in Multi-Scale Attention | 2024-05-10 | https://arxiv.org/abs/2405.06284v1 | https://github.com/Inha-CVAI/MADGNet | In the paper 'Modality-agnostic Domain Generalizable Medical Image Segmentation by Multi-Frequency in Multi-Scale Attention', what mean Dice score did the MADGNet model get on the CVC-ClinicDB dataset? | 0.9390 |
| ImageNet 256x256 | RAR-B, autoregressive | Randomized Autoregressive Visual Generation | 2024-11-01 | https://arxiv.org/abs/2411.00776v1 | https://github.com/bytedance/1d-tokenizer | In the paper 'Randomized Autoregressive Visual Generation', what FID score did the RAR-B, autoregressive model get on the ImageNet 256x256 dataset? | 1.95 |
| ImageNet 256x256 | LFM | Flow Matching in Latent Space | 2023-07-17 | https://arxiv.org/abs/2307.08698v1 | https://github.com/vinairesearch/lfm | In the paper 'Flow Matching in Latent Space', what FID score did the LFM model get on the ImageNet 256x256 dataset? | 4.46 |
| UZLF | Junior Ophtalmologist | LUNet: Deep Learning for the Segmentation of Arterioles and Venules in High Resolution Fundus Images | 2023-09-11 | https://arxiv.org/abs/2309.05780v1 | https://github.com/aim-lab/LUNet | In the paper 'LUNet: Deep Learning for the Segmentation of Arterioles and Venules in High Resolution Fundus Images', what Average Dice (0.5*Dice_a + 0.5*Dice_v) score did the Junior Ophtalmologist model get on the UZLF dataset? | 82.6 |
| GSM8K | RFT 7B | Scaling Relationship on Learning Mathematical Reasoning with Large Language Models | 2023-08-03 | https://arxiv.org/abs/2308.01825v2 | https://github.com/ofa-sys/gsm8k-screl | In the paper 'Scaling Relationship on Learning Mathematical Reasoning with Large Language Models', what Accuracy score did the RFT 7B model get on the GSM8K dataset? | 51.2 |
| ImageNet | Swin-S + GFSA | Graph Convolutions Enrich the Self-Attention in Transformers! | 2023-12-07 | https://arxiv.org/abs/2312.04234v5 | https://github.com/jeongwhanchoi/gfsa | In the paper 'Graph Convolutions Enrich the Self-Attention in Transformers!', what Top 1 Accuracy score did the Swin-S + GFSA model get on the ImageNet dataset? | 83% |
| ETTh1 (96) Multivariate | PRformer | PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting | 2024-08-20 | https://arxiv.org/abs/2408.10483v1 | https://github.com/usualheart/prformer | In the paper 'PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting', what MSE score did the PRformer model get on the ETTh1 (96) Multivariate dataset? | 0.354 |
| VoxCeleb1 | ReDimNet-B2-SF2-LM (4.7M) | Reshape Dimensions Network for Speaker Recognition | 2024-07-25 | https://arxiv.org/abs/2407.18223v2 | https://github.com/IDRnD/ReDimNet | In the paper 'Reshape Dimensions Network for Speaker Recognition', what EER score did the ReDimNet-B2-SF2-LM (4.7M) model get on the VoxCeleb1 dataset? | 0.57 |
| CIFAR-10 | Net2 (2) | Efficacy of Neural Prediction-Based Zero-Shot NAS | 2023-08-31 | https://arxiv.org/abs/2308.16775v3 | https://github.com/minh1409/dft-npzs-nas | In the paper 'Efficacy of Neural Prediction-Based Zero-Shot NAS', what Top-1 Error Rate score did the Net2 (2) model get on the CIFAR-10 dataset? | 3.3% |
| Mip-NeRF 360 | MVGS | MVGS: Multi-view-regulated Gaussian Splatting for Novel View Synthesis | 2024-10-02 | https://arxiv.org/abs/2410.02103v1 | https://github.com/xiaobiaodu/MVGS | In the paper 'MVGS: Multi-view-regulated Gaussian Splatting for Novel View Synthesis', what PSNR score did the MVGS model get on the Mip-NeRF 360 dataset? | 29.82 |
| IllusionVQA | InstructBLIP-13B | IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models | 2024-03-23 | https://arxiv.org/abs/2403.15952v3 | https://github.com/csebuetnlp/illusionvqa | In the paper 'IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models', what Accuracy score did the InstructBLIP-13B model get on the IllusionVQA dataset? | 34.25 |
| MSP-Podcast (Valence) | wav2small-Teacher | Wav2Small: Distilling Wav2Vec2 to 72K parameters for Low-Resource Speech emotion recognition | 2024-08-25 | https://arxiv.org/abs/2408.13920v4 | https://github.com/dkounadis/wav2small | In the paper 'Wav2Small: Distilling Wav2Vec2 to 72K parameters for Low-Resource Speech emotion recognition', what CCC score did the wav2small-Teacher model get on the MSP-Podcast (Valence) dataset? | 0.676 |
| Stanford Cars | SaSPA + CAL | Advancing Fine-Grained Classification by Structure and Subject Preserving Augmentation | 2024-06-20 | https://arxiv.org/abs/2406.14551v2 | https://github.com/eyalmichaeli/saspa-aug | In the paper 'Advancing Fine-Grained Classification by Structure and Subject Preserving Augmentation', what Accuracy score did the SaSPA + CAL model get on the Stanford Cars dataset? | 95.72 |
| OxfordPets | OneNete,4-C | OneNet: A Channel-Wise 1D Convolutional U-Net | 2024-11-14 | https://arxiv.org/abs/2411.09838v1 | https://github.com/shbyun080/onenet | In the paper 'OneNet: A Channel-Wise 1D Convolutional U-Net', what Dice Score did the OneNete,4-C model get on the OxfordPets dataset? | 0.967 |
| BACE | SMA | Self-Guided Masked Autoencoders for Domain-Agnostic Self-Supervised Learning | 2024-02-22 | https://arxiv.org/abs/2402.14789v1 | https://github.com/johnathan-xie/sma | In the paper 'Self-Guided Masked Autoencoders for Domain-Agnostic Self-Supervised Learning', what ROC-AUC score did the SMA model get on the BACE dataset? | 84.3 |
| PeMSD7(L) | PM-DMNet(P) | Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction | 2024-08-12 | https://arxiv.org/abs/2408.07100v1 | https://github.com/wengwenchao123/PM-DMNet | In the paper 'Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction', what 12 steps MAE score did the PM-DMNet(P) model get on the PeMSD7(L) dataset? | 2.81 |
| OTT-QA | DoTTeR | Denoising Table-Text Retrieval for Open-Domain Question Answering | 2024-03-26 | https://arxiv.org/abs/2403.17611v1 | https://github.com/deokhk/dotter | In the paper 'Denoising Table-Text Retrieval for Open-Domain Question Answering', what ANS-EM score did the DoTTeR model get on the OTT-QA dataset? | 35.9 |
| SUN397 | DePT | DePT: Decoupled Prompt Tuning | 2023-09-14 | https://arxiv.org/abs/2309.07439v2 | https://github.com/koorye/dept | In the paper 'DePT: Decoupled Prompt Tuning', what Harmonic mean score did the DePT model get on the SUN397 dataset? | 81.06 |
| MSR-VTT | LocVLM-Vid-B | Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs | 2024-04-11 | https://arxiv.org/abs/2404.07449v1 | https://github.com/kahnchana/locvlm | In the paper 'Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs', what Accuracy score did the LocVLM-Vid-B model get on the MSR-VTT dataset? | 51.2 |
| MVTec AD | ReConPatch WRN-50 | ReConPatch : Contrastive Patch Representation Learning for Industrial Anomaly Detection | 2023-05-26 | https://arxiv.org/abs/2305.16713v3 | https://github.com/travishsu/ReConPatch-TF | In the paper 'ReConPatch : Contrastive Patch Representation Learning for Industrial Anomaly Detection', what Detection AUROC score did the ReConPatch WRN-50 model get on the MVTec AD dataset? | 99.56 |
| BanglaBook | Random Forest (word 2-gram + word 3-gram) | BanglaBook: A Large-scale Bangla Dataset for Sentiment Analysis from Book Reviews | 2023-05-11 | https://arxiv.org/abs/2305.06595v3 | https://github.com/mohsinulkabir14/banglabook | In the paper 'BanglaBook: A Large-scale Bangla Dataset for Sentiment Analysis from Book Reviews', what Weighted Average F1-score did the Random Forest (word 2-gram + word 3-gram) model get on the BanglaBook dataset? | 0.9106 |
| Set5 - 3x upscaling | HMA† | HMANet: Hybrid Multi-Axis Aggregation Network for Image Super-Resolution | 2024-05-08 | https://arxiv.org/abs/2405.05001v1 | https://github.com/korouuuuu/hma | In the paper 'HMANet: Hybrid Multi-Axis Aggregation Network for Image Super-Resolution', what PSNR score did the HMA† model get on the Set5 - 3x upscaling dataset? | 35.35 |
| Mid-Atlantic Ridge | AnyLoc-VLAD-DINOv2 | AnyLoc: Towards Universal Visual Place Recognition | 2023-08-01 | https://arxiv.org/abs/2308.00688v2 | https://github.com/AnyLoc/AnyLoc | In the paper 'AnyLoc: Towards Universal Visual Place Recognition', what Recall@1 score did the AnyLoc-VLAD-DINOv2 model get on the Mid-Atlantic Ridge dataset? | 34.65 |
| CUTE80 | CLIP4STR-L | CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model | 2023-05-23 | https://arxiv.org/abs/2305.14014v3 | https://github.com/VamosC/CLIP4STR | In the paper 'CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model', what Accuracy score did the CLIP4STR-L model get on the CUTE80 dataset? | 99.0 |
| NTU RGB+D 120 | DVANet (RGB only) | DVANet: Disentangling View and Action Features for Multi-View Action Recognition | 2023-12-10 | https://arxiv.org/abs/2312.05719v1 | https://github.com/NyleSiddiqui/MultiView_Actions | In the paper 'DVANet: Disentangling View and Action Features for Multi-View Action Recognition', what Accuracy (Cross-Subject) score did the DVANet (RGB only) model get on the NTU RGB+D 120 dataset? | 91.6 |
| ScanNetV2 | SPGroup3D | SPGroup3D: Superpoint Grouping Network for Indoor 3D Object Detection | 2023-12-21 | https://arxiv.org/abs/2312.13641v1 | https://github.com/zyrant/spgroup3d | In the paper 'SPGroup3D: Superpoint Grouping Network for Indoor 3D Object Detection', what mAP@0.25 score did the SPGroup3D model get on the ScanNetV2 dataset? | 74.3 |
| MVTec AD | RealNet | RealNet: A Feature Selection Network with Realistic Synthetic Anomaly for Anomaly Detection | 2024-03-09 | https://arxiv.org/abs/2403.05897v1 | https://github.com/cnulab/realnet | In the paper 'RealNet: A Feature Selection Network with Realistic Synthetic Anomaly for Anomaly Detection', what Detection AUROC score did the RealNet model get on the MVTec AD dataset? | 99.6 |
| MM-Vet | MiCo-Chat-7B | Explore the Limits of Omni-modal Pretraining at Scale | 2024-06-13 | https://arxiv.org/abs/2406.09412v1 | https://github.com/invictus717/MiCo | In the paper 'Explore the Limits of Omni-modal Pretraining at Scale', what GPT-4 score did the MiCo-Chat-7B model get on the MM-Vet dataset? | 31.4 |
| PASCAL-5i (5-Shot) | MSDNet (ResNet-50) | MSDNet: Multi-Scale Decoder for Few-Shot Semantic Segmentation via Transformer-Guided Prototyping | 2024-09-17 | https://arxiv.org/abs/2409.11316v1 | https://github.com/amirrezafateh/msdnet | In the paper 'MSDNet: Multi-Scale Decoder for Few-Shot Semantic Segmentation via Transformer-Guided Prototyping', what Mean IoU score did the MSDNet (ResNet-50) model get on the PASCAL-5i (5-Shot) dataset? | 68.7 |
| MATH | GPT-4-code model (CSV, w/ code, SC, k=16) | Solving Challenging Math Word Problems Using GPT-4 Code Interpreter with Code-based Self-Verification | 2023-08-15 | https://arxiv.org/abs/2308.07921v1 | https://github.com/kipok/nemo-skills | In the paper 'Solving Challenging Math Word Problems Using GPT-4 Code Interpreter with Code-based Self-Verification', what Accuracy score did the GPT-4-code model (CSV, w/ code, SC, k=16) model get on the MATH dataset? | 84.3 |
| COCO-MLT | LMPT(ViT-B/16) | LMPT: Prompt Tuning with Class-Specific Embedding Loss for Long-tailed Multi-Label Visual Recognition | 2023-05-08 | https://arxiv.org/abs/2305.04536v2 | https://github.com/richard-peng-xia/LMPT | In the paper 'LMPT: Prompt Tuning with Class-Specific Embedding Loss for Long-tailed Multi-Label Visual Recognition', what Average mAP score did the LMPT(ViT-B/16) model get on the COCO-MLT dataset? | 66.19 |
| RealBlur-J | MLWNet | Efficient Multi-scale Network with Learnable Discrete Wavelet Transform for Blind Motion Deblurring | 2023-12-29 | https://arxiv.org/abs/2401.00027v2 | https://github.com/thqiu0419/mlwnet | In the paper 'Efficient Multi-scale Network with Learnable Discrete Wavelet Transform for Blind Motion Deblurring', what SSIM (sRGB) score did the MLWNet model get on the RealBlur-J dataset? | 0.941 |
| BanglaBook | Bangla-BERT (base-uncased) | BanglaBook: A Large-scale Bangla Dataset for Sentiment Analysis from Book Reviews | 2023-05-11 | https://arxiv.org/abs/2305.06595v3 | https://github.com/mohsinulkabir14/banglabook | In the paper 'BanglaBook: A Large-scale Bangla Dataset for Sentiment Analysis from Book Reviews', what Weighted Average F1-score did the Bangla-BERT (base-uncased) model get on the BanglaBook dataset? | 0.9064 |
| UTKFace | VOLO-D1 age&gender | MiVOLO: Multi-input Transformer for Age and Gender Estimation | 2023-07-10 | https://arxiv.org/abs/2307.04616v2 | https://github.com/wildchlamydia/mivolo | In the paper 'MiVOLO: Multi-input Transformer for Age and Gender Estimation', what MAE score did the VOLO-D1 age&gender model get on the UTKFace dataset? | 4.23 |
| ActivityNet-QA | All-in-one+ | Open-vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models | 2023-08-18 | https://arxiv.org/abs/2308.09363v1 | https://github.com/mlvlab/ovqa | In the paper 'Open-vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models', what Accuracy score did the All-in-one+ model get on the ActivityNet-QA dataset? | 40.0 |
| VideoAttentionTarget | ViTGaze | ViTGaze: Gaze Following with Interaction Features in Vision Transformers | 2024-03-19 | https://arxiv.org/abs/2403.12778v2 | https://github.com/hustvl/vitgaze | In the paper 'ViTGaze: Gaze Following with Interaction Features in Vision Transformers', what AUC score did the ViTGaze model get on the VideoAttentionTarget dataset? | 0.938 |
| Weather (192) | MoLE-DLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11 | https://arxiv.org/abs/2312.06786v3 | https://github.com/rogerni/mole | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the Weather (192) dataset? | 0.203 |
| MM-Vet | LLaVA-TokenPacker (Vicuna-7B) | TokenPacker: Efficient Visual Projector for Multimodal LLM | 2024-07-02 | https://arxiv.org/abs/2407.02392v4 | https://github.com/circleradon/tokenpacker | In the paper 'TokenPacker: Efficient Visual Projector for Multimodal LLM', what GPT-4 score did the LLaVA-TokenPacker (Vicuna-7B) model get on the MM-Vet dataset? | 29.6 |
| Kinetics-700 | TURTLE (CLIP + DINOv2) | Let Go of Your Labels with Unsupervised Transfer | 2024-06-11 | https://arxiv.org/abs/2406.07236v1 | https://github.com/mlbio-epfl/turtle | In the paper 'Let Go of Your Labels with Unsupervised Transfer', what Accuracy score did the TURTLE (CLIP + DINOv2) model get on the Kinetics-700 dataset? | 43.0 |
| ImageNet | AIMv2-L | Multimodal Autoregressive Pre-training of Large Vision Encoders | 2024-11-21 | https://arxiv.org/abs/2411.14402v1 | https://github.com/apple/ml-aim | In the paper 'Multimodal Autoregressive Pre-training of Large Vision Encoders', what Top 1 Accuracy score did the AIMv2-L model get on the ImageNet dataset? | 86.6% |
| UCR Anomaly Archive | LSTM-VAE | Explainable Time Series Anomaly Detection using Masked Latent Generative Modeling | 2023-11-21 | https://arxiv.org/abs/2311.12550v5 | https://github.com/ml4its/timevqvae-anomalydetection | In the paper 'Explainable Time Series Anomaly Detection using Masked Latent Generative Modeling', what accuracy score did the LSTM-VAE model get on the UCR Anomaly Archive dataset? | 0.198 |
| VisDrone - 10% labeled data | SSOD + Crop (L + U) | Density Crop-guided Semi-supervised Object Detection in Aerial Images | 2023-08-09 | https://arxiv.org/abs/2308.05032v1 | https://github.com/akhilpm/dronessod | In the paper 'Density Crop-guided Semi-supervised Object Detection in Aerial Images', what COCO-style AP score did the SSOD + Crop (L + U) model get on the VisDrone - 10% labeled data dataset? | 27.46 |
| AudioCaps | EnCLAP-base | EnCLAP: Combining Neural Audio Codec and Audio-Text Joint Embedding for Automated Audio Captioning | 2024-01-31 | https://arxiv.org/abs/2401.17690v1 | https://github.com/jaeyeonkim99/enclap | In the paper 'EnCLAP: Combining Neural Audio Codec and Audio-Text Joint Embedding for Automated Audio Captioning', what CIDEr score did the EnCLAP-base model get on the AudioCaps dataset? | 0.7795 |
| ICDAR2013 | DTrOCR 105M | DTrOCR: Decoder-only Transformer for Optical Character Recognition | 2023-08-30 | https://arxiv.org/abs/2308.15996v1 | https://github.com/arvindrajan92/DTrOCR | In the paper 'DTrOCR: Decoder-only Transformer for Optical Character Recognition', what Accuracy score did the DTrOCR 105M model get on the ICDAR2013 dataset? | 99.4 |
| ETTh1 (96) Multivariate | DiPE-Linear | Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting | 2024-11-26 | https://arxiv.org/abs/2411.17257v1 | https://github.com/wintertee/dipe-linear | In the paper 'Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting', what MSE score did the DiPE-Linear model get on the ETTh1 (96) Multivariate dataset? | 0.369 |
| MM-Vet | LLaVA-1.5-HACL | Hallucination Augmented Contrastive Learning for Multimodal Large Language Model | 2023-12-12 | https://arxiv.org/abs/2312.06968v4 | https://github.com/x-plug/mplug-halowl | In the paper 'Hallucination Augmented Contrastive Learning for Multimodal Large Language Model', what GPT-4 score did the LLaVA-1.5-HACL model get on the MM-Vet dataset? | 30.4 |
| ogbn-products | LD+GIANT+SAGN+SCR | Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias | 2023-09-26 | https://arxiv.org/abs/2309.14907v1 | https://github.com/MIRALab-USTC/LD | In the paper 'Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias', what Test Accuracy score did the LD+GIANT+SAGN+SCR model get on the ogbn-products dataset? | 0.8718 ± 0.0004 |
| ImageNet-1k vs Textures | SCALE (ResNet50) | Scaling for Training Time and Post-hoc Out-of-distribution Detection Enhancement | 2023-09-30 | https://arxiv.org/abs/2310.00227v1 | https://github.com/kai422/scale | In the paper 'Scaling for Training Time and Post-hoc Out-of-distribution Detection Enhancement', what FPR95 score did the SCALE (ResNet50) model get on the ImageNet-1k vs Textures dataset? | 12.93 |
| LSUN Churches 256 x 256 | LFM | Flow Matching in Latent Space | 2023-07-17 | https://arxiv.org/abs/2307.08698v1 | https://github.com/vinairesearch/lfm | In the paper 'Flow Matching in Latent Space', what FID score did the LFM model get on the LSUN Churches 256 x 256 dataset? | 5.54 |
| EC-FUNSD | RORE (LayoutLMv3-base) | Modeling Layout Reading Order as Ordering Relations for Visually-rich Document Understanding | 2024-09-29 | https://arxiv.org/abs/2409.19672v1 | https://github.com/chongzhangFDU/ROOR | In the paper 'Modeling Layout Reading Order as Ordering Relations for Visually-rich Document Understanding', what F1 score did the RORE (LayoutLMv3-base) model get on the EC-FUNSD dataset? | 82.80 |
| VoxCeleb | ReDimNet-B0-LM-ASNorm (1.0M) | Reshape Dimensions Network for Speaker Recognition | 2024-07-25 | https://arxiv.org/abs/2407.18223v2 | https://github.com/IDRnD/ReDimNet | In the paper 'Reshape Dimensions Network for Speaker Recognition', what EER score did the ReDimNet-B0-LM-ASNorm (1.0M) model get on the VoxCeleb dataset? | 1.07 |
| COCO 2% labeled data | MixPL | Mixed Pseudo Labels for Semi-Supervised Object Detection | 2023-12-12 | https://arxiv.org/abs/2312.07006v1 | https://github.com/czm369/mixpl | In the paper 'Mixed Pseudo Labels for Semi-Supervised Object Detection', what mAP score did the MixPL model get on the COCO 2% labeled data dataset? | 34.7 |
| ImageNet | MIRL (ViT-B-48) | Masked Image Residual Learning for Scaling Deeper Vision Transformers | 2023-09-25 | https://arxiv.org/abs/2309.14136v3 | https://github.com/russellllaputa/MIRL | In the paper 'Masked Image Residual Learning for Scaling Deeper Vision Transformers', what Top 1 Accuracy score did the MIRL (ViT-B-48) model get on the ImageNet dataset? | 86.2% |
| GenWiki | T5-large | Ontology-Free General-Domain Knowledge Graph-to-Text Generation Dataset Synthesis using Large Language Model | 2024-09-11 | https://arxiv.org/abs/2409.07088v1 | https://github.com/daehuikim/WikiOFGraph | In the paper 'Ontology-Free General-Domain Knowledge Graph-to-Text Generation Dataset Synthesis using Large Language Model', what BLEU score did the T5-large model get on the GenWiki dataset? | 45.85 |
| X-Sum | PaLM 2-S (one-shot) | PaLM 2 Technical Report | 2023-05-17 | https://arxiv.org/abs/2305.10403v3 | https://github.com/eternityyw/tram-benchmark | In the paper 'PaLM 2 Technical Report', what ROUGE-2 score did the PaLM 2-S (one-shot) model get on the X-Sum dataset? | 16.9 |
| MPI-INF-3DHP | GLA-GCN (T=81) | GLA-GCN: Global-local Adaptive Graph Convolutional Network for 3D Human Pose Estimation from Monocular Video | 2023-07-12 | https://arxiv.org/abs/2307.05853v2 | https://github.com/bruceyo/GLA-GCN | In the paper 'GLA-GCN: Global-local Adaptive Graph Convolutional Network for 3D Human Pose Estimation from Monocular Video', what AUC score did the GLA-GCN (T=81) model get on the MPI-INF-3DHP dataset? | 79.12 |
| EC-FUNSD | RORE (LayoutLMv3-base) | Modeling Layout Reading Order as Ordering Relations for Visually-rich Document Understanding | 2024-09-29 | https://arxiv.org/abs/2409.19672v1 | https://github.com/chongzhangFDU/ROOR | In the paper 'Modeling Layout Reading Order as Ordering Relations for Visually-rich Document Understanding', what F1 score did the RORE (LayoutLMv3-base) model get on the EC-FUNSD dataset? | 73.64 |
| COCO-20i (1-shot) | MIANet (ResNet-50) | MIANet: Aggregating Unbiased Instance and General Information for Few-Shot Semantic Segmentation | 2023-05-23 | https://arxiv.org/abs/2305.13864v1 | https://github.com/aldrich2y/mianet | In the paper 'MIANet: Aggregating Unbiased Instance and General Information for Few-Shot Semantic Segmentation', what Mean IoU score did the MIANet (ResNet-50) model get on the COCO-20i (1-shot) dataset? | 47.66 |
| FFHQ 256 x 256 | PDM+CS | Compensation Sampling for Improved Convergence in Diffusion Models | 2023-12-11 | https://arxiv.org/abs/2312.06285v1 | https://github.com/hotfinda/Compensation-sampling | In the paper 'Compensation Sampling for Improved Convergence in Diffusion Models', what FID score did the PDM+CS model get on the FFHQ 256 x 256 dataset? | 2.57 |
| CIFAR-10 | ResNet18 (TRADES-ANCRA/PGD-40) | Enhancing Robust Representation in Adversarial Training: Alignment and Exclusion Criteria | 2023-10-05 | https://arxiv.org/abs/2310.03358v2 | https://github.com/changzhang777/ancra | In the paper 'Enhancing Robust Representation in Adversarial Training: Alignment and Exclusion Criteria', what Accuracy score did the ResNet18 (TRADES-ANCRA/PGD-40) model get on the CIFAR-10 dataset? | 81.70 |
| EconLogicQA | GPT-4 | EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning | 2024-05-13 | https://arxiv.org/abs/2405.07938v2 | https://github.com/yinzhu-quan/lm-evaluation-harness | In the paper 'EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning', what Accuracy score did the GPT-4 model get on the EconLogicQA dataset? | 0.5538 |
| Stanford2D3D Panoramic | SGAT4PASS(RGB only, 3 Fold AVG) | SGAT4PASS: Spherical Geometry-Aware Transformer for PAnoramic Semantic Segmentation | 2023-06-06 | https://arxiv.org/abs/2306.03403v2 | https://github.com/tencentarc/sgat4pass | In the paper 'SGAT4PASS: Spherical Geometry-Aware Transformer for PAnoramic Semantic Segmentation', what mIoU score did the SGAT4PASS(RGB only, 3 Fold AVG) model get on the Stanford2D3D Panoramic dataset? | 55.3% |
| Amazon-Google | gpt-4o-mini-2024-07-18_fine_tuned | Fine-tuning Large Language Models for Entity Matching | 2024-09-12 | https://arxiv.org/abs/2409.08185v1 | https://github.com/wbsg-uni-mannheim/tailormatch | In the paper 'Fine-tuning Large Language Models for Entity Matching', what F1 (%) score did the gpt-4o-mini-2024-07-18_fine_tuned model get on the Amazon-Google dataset? | 80.25 |
| ROxford (Medium) | HED-N-GAN | Dark Side Augmentation: Generating Diverse Night Examples for Metric Learning | 2023-09-28 | https://arxiv.org/abs/2309.16351v2 | https://github.com/mohwald/gandtr | In the paper 'Dark Side Augmentation: Generating Diverse Night Examples for Metric Learning', what mAP score did the HED-N-GAN model get on the ROxford (Medium) dataset? | 66.3 |
| CHILI-3K | PMLP | CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning | 2024-02-20 | https://arxiv.org/abs/2402.13221v2 | https://github.com/UlrikFriisJensen/CHILI | In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what F1-score (Weighted) score did the PMLP model get on the CHILI-3K dataset? | 0.461 +/- 0.000 |
| BTAD | D3AD | Dynamic Addition of Noise in a Diffusion Model for Anomaly Detection | 2024-01-09 | https://arxiv.org/abs/2401.04463v2 | https://github.com/JustinTebbe/D3AD | In the paper 'Dynamic Addition of Noise in a Diffusion Model for Anomaly Detection', what Detection AUROC score did the D3AD model get on the BTAD dataset? | 95.2 |
| Id Pattern Dataset | GPT-4omni | Identification of Stone Deterioration Patterns with Large Multimodal Models | 2024-06-05 | https://arxiv.org/abs/2406.03207v1 | https://github.com/dcorradetti/redai_id_pattern | In the paper 'Identification of Stone Deterioration Patterns with Large Multimodal Models', what Percentage correct score did the GPT-4omni model get on the Id Pattern Dataset? | 42.1% |
| WebApp1K-React | claude-3.5-sonnet | Insights from Benchmarking Frontier Language Models on Web App Code Generation | 2024-09-08 | https://arxiv.org/abs/2409.05177v1 | https://github.com/onekq/webapp1k | In the paper 'Insights from Benchmarking Frontier Language Models on Web App Code Generation', what pass@1 score did the claude-3.5-sonnet model get on the WebApp1K-React dataset? | 0.8808 |
| SID SonyA7S2 x250 | LRD | Towards General Low-Light Raw Noise Synthesis and Modeling | 2023-07-31 | https://arxiv.org/abs/2307.16508v2 | https://github.com/fengzhang427/LRD | In the paper 'Towards General Low-Light Raw Noise Synthesis and Modeling', what PSNR (Raw) score did the LRD model get on the SID SonyA7S2 x250 dataset? | 39.25 |
| FRMT (Portuguese - Portugal) | PaLM | PaLM 2 Technical Report | 2023-05-17 | https://arxiv.org/abs/2305.10403v3 | https://github.com/eternityyw/tram-benchmark | In the paper 'PaLM 2 Technical Report', what BLEURT score did the PaLM model get on the FRMT (Portuguese - Portugal) dataset? | 76.1 |
ICAT LLM bias | BAD | BAD: BiAs Detection for Large Language Models in the context of candidate screening | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10407v1 | [
"https://github.com/namhkoh/bad-bias-detection-in-llms"
] | In the paper 'BAD: BiAs Detection for Large Language Models in the context of candidate screening', what ICAT Score score did the BAD model get on the ICAT LLM bias dataset
| 23.44 |
FLEURS X-eng | GenTranslateV1 | GenTranslate: Large Language Models are Generative Multilingual Speech and Machine Translators | 2024-02-10T00:00:00 | https://arxiv.org/abs/2402.06894v2 | [
"https://github.com/yuchen005/gentranslate"
] | In the paper 'GenTranslate: Large Language Models are Generative Multilingual Speech and Machine Translators', what ASR-BLEU score did the GenTranslateV1 model get on the FLEURS X-eng dataset
| 30.1 |
EgoExoLearn | cross-view association baseline (gaze, val) | EgoExoLearn: A Dataset for Bridging Asynchronous Ego- and Exo-centric View of Procedural Activities in Real World | 2024-03-24T00:00:00 | https://arxiv.org/abs/2403.16182v2 | [
"https://github.com/opengvlab/egoexolearn"
] | In the paper 'EgoExoLearn: A Dataset for Bridging Asynchronous Ego- and Exo-centric View of Procedural Activities in Real World', what Accuracy score did the cross-view association baseline (gaze, val) model get on the EgoExoLearn dataset
| 48.35 |
MM-Vet | LLaVA-Plus-13B (All Tools, V1.3, 336px) | LLaVA-Plus: Learning to Use Tools for Creating Multimodal Agents | 2023-11-09T00:00:00 | https://arxiv.org/abs/2311.05437v1 | [
"https://github.com/LLaVA-VL/LLaVA-Plus-Codebase"
] | In the paper 'LLaVA-Plus: Learning to Use Tools for Creating Multimodal Agents', what GPT-4 score score did the LLaVA-Plus-13B (All Tools, V1.3, 336px) model get on the MM-Vet dataset
| 35.0±0.0 |
Synapse multi-organ CT | PAG-TransYnet | Rethinking Attention Gated with Hybrid Dual Pyramid Transformer-CNN for Generalized Segmentation in Medical Imaging | 2024-04-28T00:00:00 | https://arxiv.org/abs/2404.18199v1 | [
"https://github.com/faresbougourzi/pagtransynet"
] | In the paper 'Rethinking Attention Gated with Hybrid Dual Pyramid Transformer-CNN for Generalized Segmentation in Medical Imaging', what Avg DSC score did the PAG-TransYnet model get on the Synapse multi-organ CT dataset
| 83.43 |
VNHSGE-History | Bing Chat | VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models | 2023-05-20T00:00:00 | https://arxiv.org/abs/2305.12199v1 | [
"https://github.com/xdao85/vnhsge"
] | In the paper 'VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models', what Accuracy score did the Bing Chat model get on the VNHSGE-History dataset
| 88.5 |
UMVM-dbp-fr-en | UMAEA (w/o surf) | Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment | 2023-07-30T00:00:00 | https://arxiv.org/abs/2307.16210v2 | [
"https://github.com/zjukg/umaea"
] | In the paper 'Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment', what Hits@1 score did the UMAEA (w/o surf) model get on the UMVM-dbp-fr-en dataset
| 0.873 |
IMDb | Space-XLNet | Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs | 2024-01-30T00:00:00 | https://arxiv.org/abs/2401.16638v1 | [
"https://github.com/stepantita/space-model"
] | In the paper 'Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs', what Accuracy score did the Space-XLNet model get on the IMDb dataset
| 94.88 |
RefCOCO testB | EVF-SAM | EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model | 2024-06-28T00:00:00 | https://arxiv.org/abs/2406.20076v4 | [
"https://github.com/hustvl/evf-sam"
] | In the paper 'EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model', what Overall IoU score did the EVF-SAM model get on the RefCOCO testB dataset
| 80 |
Atari 2600 Bank Heist | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07T00:00:00 | https://arxiv.org/abs/2305.04180v3 | [
"https://github.com/xinjinghao/color"
] | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score score did the ASL DDQN model get on the Atari 2600 Bank Heist dataset
| 1340.9 |
Cityscapes test | CAUSE (ViT-B/8) | Causal Unsupervised Semantic Segmentation | 2023-10-11T00:00:00 | https://arxiv.org/abs/2310.07379v1 | [
"https://github.com/ByungKwanLee/Causal-Unsupervised-Segmentation"
] | In the paper 'Causal Unsupervised Semantic Segmentation', what mIoU score did the CAUSE (ViT-B/8) model get on the Cityscapes test dataset
| 28.0 |
TrackingNet | ODTrack-L | ODTrack: Online Dense Temporal Token Learning for Visual Tracking | 2024-01-03T00:00:00 | https://arxiv.org/abs/2401.01686v1 | [
"https://github.com/gxnu-zhonglab/odtrack"
] | In the paper 'ODTrack: Online Dense Temporal Token Learning for Visual Tracking', what Accuracy score did the ODTrack-L model get on the TrackingNet dataset
| 86.1 |
XCOPA | PaLM 2 (few-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | [
"https://github.com/eternityyw/tram-benchmark"
] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2 (few-shot) model get on the XCOPA dataset
| 94.4 |
Peptides-func | DRew-GCN+LapPE | DRew: Dynamically Rewired Message Passing with Delay | 2023-05-13T00:00:00 | https://arxiv.org/abs/2305.08018v2 | [
"https://github.com/bengutteridge/drew"
] | In the paper 'DRew: Dynamically Rewired Message Passing with Delay', what AP score did the DRew-GCN+LapPE model get on the Peptides-func dataset
| 0.7150±0.0044 |
Peptides-struct | NeuralWalker | Learning Long Range Dependencies on Graphs via Random Walks | 2024-06-05T00:00:00 | https://arxiv.org/abs/2406.03386v2 | [
"https://github.com/borgwardtlab/neuralwalker"
] | In the paper 'Learning Long Range Dependencies on Graphs via Random Walks', what MAE score did the NeuralWalker model get on the Peptides-struct dataset
| 0.2463 ± 0.0005 |
CACD | FaRL+MLP | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10T00:00:00 | https://arxiv.org/abs/2307.04570v3 | [
"https://github.com/paplhjak/facial-age-estimation-benchmark"
] | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the FaRL+MLP model get on the CACD dataset
| 3.96 |