dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
ETTh1 (192) Multivariate | TSMixer | TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting | 2023-06-14T00:00:00 | https://arxiv.org/abs/2306.09364v4 | ["https://github.com/ibm/tsfm"] | In the paper 'TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting', what MSE score did the TSMixer model get on the ETTh1 (192) Multivariate dataset | 0.399 |
Wiki-40B | OutEffHop-Bert_base | Outlier-Efficient Hopfield Layers for Large Transformer-Based Models | 2024-04-04T00:00:00 | https://arxiv.org/abs/2404.03828v2 | ["https://github.com/magics-lab/outeffhop"] | In the paper 'Outlier-Efficient Hopfield Layers for Large Transformer-Based Models', what Perplexity score did the OutEffHop-Bert_base model get on the Wiki-40B dataset | 6.295 |
PASCAL VOC | OneNete,4-C | OneNet: A Channel-Wise 1D Convolutional U-Net | 2024-11-14T00:00:00 | https://arxiv.org/abs/2411.09838v1 | ["https://github.com/shbyun080/onenet"] | In the paper 'OneNet: A Channel-Wise 1D Convolutional U-Net', what mIoU score did the OneNete,4-C model get on the PASCAL VOC dataset | 63.6 |
MATH | DART-Math-DSMath-7B-Uniform (0-shot CoT, w/o code) | DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving | 2024-06-18T00:00:00 | https://arxiv.org/abs/2407.13690v1 | ["https://github.com/hkust-nlp/dart-math"] | In the paper 'DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving', what Accuracy score did the DART-Math-DSMath-7B-Uniform (0-shot CoT, w/o code) model get on the MATH dataset | 52.9 |
MM-Vet | InfMLLM-7B-Chat | InfMLLM: A Unified Framework for Visual-Language Tasks | 2023-11-12T00:00:00 | https://arxiv.org/abs/2311.06791v2 | ["https://github.com/mightyzau/infmllm"] | In the paper 'InfMLLM: A Unified Framework for Visual-Language Tasks', what GPT-4 score did the InfMLLM-7B-Chat model get on the MM-Vet dataset | 33.4 |
MVTec LOCO AD | ComAD+DRAEM | Component-aware anomaly detection framework for adjustable and logical industrial visual inspection | 2023-05-15T00:00:00 | https://arxiv.org/abs/2305.08509v1 | ["https://github.com/liutongkun/comad"] | In the paper 'Component-aware anomaly detection framework for adjustable and logical industrial visual inspection', what Avg. Detection AUROC score did the ComAD+DRAEM model get on the MVTec LOCO AD dataset | 87.9 |
GSM8K | OpenMath-CodeLlama-7B (w/ code) | OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | 2024-02-15T00:00:00 | https://arxiv.org/abs/2402.10176v2 | ["https://github.com/kipok/nemo-skills"] | In the paper 'OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset', what Accuracy score did the OpenMath-CodeLlama-7B (w/ code) model get on the GSM8K dataset | 75.9 |
VibraVox (throat microphone) | ECAPA2 | Vibravox: A Dataset of French Speech Captured with Body-conduction Audio Sensors | 2024-07-16T00:00:00 | https://arxiv.org/abs/2407.11828v2 | ["https://github.com/jhauret/vibravox"] | In the paper 'Vibravox: A Dataset of French Speech Captured with Body-conduction Audio Sensors', what Test EER score did the ECAPA2 model get on the VibraVox (throat microphone) dataset | 0.0353 |
UniProtQA | BioMedGPT-10B | BioMedGPT: Open Multimodal Generative Pre-trained Transformer for BioMedicine | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09442v2 | ["https://github.com/pharmolix/openbiomed"] | In the paper 'BioMedGPT: Open Multimodal Generative Pre-trained Transformer for BioMedicine', what BLEU-2 score did the BioMedGPT-10B model get on the UniProtQA dataset | 0.571 |
Office-Home | PDA (CLIP, ViT-B/16) | Prompt-based Distribution Alignment for Unsupervised Domain Adaptation | 2023-12-15T00:00:00 | https://arxiv.org/abs/2312.09553v2 | ["https://github.com/baishuanghao/prompt-based-distribution-alignment"] | In the paper 'Prompt-based Distribution Alignment for Unsupervised Domain Adaptation', what Accuracy score did the PDA (CLIP, ViT-B/16) model get on the Office-Home dataset | 85.7 |
TrackingNet | ARTrackV2-L | ARTrackV2: Prompting Autoregressive Tracker Where to Look and How to Describe | 2023-12-28T00:00:00 | https://arxiv.org/abs/2312.17133v3 | ["https://github.com/miv-xjtu/artrack"] | In the paper 'ARTrackV2: Prompting Autoregressive Tracker Where to Look and How to Describe', what Precision score did the ARTrackV2-L model get on the TrackingNet dataset | 86.2 |
RST-DT | Top-down Llama 2 (70B) | Can we obtain significant success in RST discourse parsing by using Large Language Models? | 2024-03-08T00:00:00 | https://arxiv.org/abs/2403.05065v1 | ["https://github.com/nttcslab-nlp/rstparser_eacl24"] | In the paper 'Can we obtain significant success in RST discourse parsing by using Large Language Models?', what Standard Parseval (Span) score did the Top-down Llama 2 (70B) model get on the RST-DT dataset | 78.8 |
RSBlur | MLWNet | Efficient Multi-scale Network with Learnable Discrete Wavelet Transform for Blind Motion Deblurring | 2023-12-29T00:00:00 | https://arxiv.org/abs/2401.00027v2 | ["https://github.com/thqiu0419/mlwnet"] | In the paper 'Efficient Multi-scale Network with Learnable Discrete Wavelet Transform for Blind Motion Deblurring', what Average PSNR score did the MLWNet model get on the RSBlur dataset | 34.94 |
SVHN (1000 Labels, ImageNet-100 Unlabeled) | UnMixMatch | Scaling Up Semi-supervised Learning with Unconstrained Unlabelled Data | 2023-06-02T00:00:00 | https://arxiv.org/abs/2306.01222v2 | ["https://github.com/shuvenduroy/unmixmatch"] | In the paper 'Scaling Up Semi-supervised Learning with Unconstrained Unlabelled Data', what Accuracy score did the UnMixMatch model get on the SVHN (1000 Labels, ImageNet-100 Unlabeled) dataset | 91.03 |
HateXplain | Space-BERT | Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs | 2024-01-30T00:00:00 | https://arxiv.org/abs/2401.16638v1 | ["https://github.com/stepantita/space-model"] | In the paper 'Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs', what Accuracy (2 classes) score did the Space-BERT model get on the HateXplain dataset | 0.8110 |
ICBHI Respiratory Sound Database | DAT (AST) | Stethoscope-guided Supervised Contrastive Learning for Cross-domain Adaptation on Respiratory Sound Classification | 2023-12-15T00:00:00 | https://arxiv.org/abs/2312.09603v1 | ["https://github.com/kaen2891/stethoscope-guided_supervised_contrastive_learning"] | In the paper 'Stethoscope-guided Supervised Contrastive Learning for Cross-domain Adaptation on Respiratory Sound Classification', what ICBHI Score did the DAT (AST) model get on the ICBHI Respiratory Sound Database dataset | 59.81 |
GoPro | DeblurDiNAT-L | DeblurDiNAT: A Generalizable Transformer for Perceptual Image Deblurring | 2024-03-19T00:00:00 | https://arxiv.org/abs/2403.13163v4 | ["https://github.com/hanzhouliu/deblurdinat"] | In the paper 'DeblurDiNAT: A Generalizable Transformer for Perceptual Image Deblurring', what PSNR score did the DeblurDiNAT-L model get on the GoPro dataset | 33.63 |
St Lucia | BoQ | BoQ: A Place is Worth a Bag of Learnable Queries | 2024-05-12T00:00:00 | https://arxiv.org/abs/2405.07364v3 | ["https://github.com/amaralibey/bag-of-queries"] | In the paper 'BoQ: A Place is Worth a Bag of Learnable Queries', what Recall@5 score did the BoQ model get on the St Lucia dataset | 100 |
ETTh1 (336) Multivariate | WinNet | WinNet: Make Only One Convolutional Layer Effective for Time Series Forecasting | 2023-11-01T00:00:00 | https://arxiv.org/abs/2311.00214v2 | ["https://github.com/ouwen18/WinNet"] | In the paper 'WinNet: Make Only One Convolutional Layer Effective for Time Series Forecasting', what MSE score did the WinNet model get on the ETTh1 (336) Multivariate dataset | 0.419 |
Oxford-IIIT Pets | ZLaP | Label Propagation for Zero-shot Classification with Vision-Language Models | 2024-04-05T00:00:00 | https://arxiv.org/abs/2404.04072v1 | ["https://github.com/vladan-stojnic/zlap"] | In the paper 'Label Propagation for Zero-shot Classification with Vision-Language Models', what Accuracy score did the ZLaP model get on the Oxford-IIIT Pets dataset | 90 |
PubMedQA | Meditron-70B (CoT + SC) | MEDITRON-70B: Scaling Medical Pretraining for Large Language Models | 2023-11-27T00:00:00 | https://arxiv.org/abs/2311.16079v1 | ["https://github.com/epfllm/meditron"] | In the paper 'MEDITRON-70B: Scaling Medical Pretraining for Large Language Models', what Accuracy score did the Meditron-70B (CoT + SC) model get on the PubMedQA dataset | 81.6 |
VisA | Dinomaly ViT-L (model-unified multi-class) | Dinomaly: The Less Is More Philosophy in Multi-Class Unsupervised Anomaly Detection | 2024-05-23T00:00:00 | https://arxiv.org/abs/2405.14325v4 | ["https://github.com/guojiajeremy/dinomaly"] | In the paper 'Dinomaly: The Less Is More Philosophy in Multi-Class Unsupervised Anomaly Detection', what Detection AUROC score did the Dinomaly ViT-L (model-unified multi-class) model get on the VisA dataset | 98.9 |
X-Sum | PaLM 2-L (one-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what ROUGE-2 score did the PaLM 2-L (one-shot) model get on the X-Sum dataset | 23.2 |
SFCHD | Faster RCNN | Large, Complex, and Realistic Safety Clothing and Helmet Detection: Dataset and Method | 2023-06-03T00:00:00 | https://arxiv.org/abs/2306.02098v2 | ["https://github.com/lijfrank-open/SFCHD-SCALE"] | In the paper 'Large, Complex, and Realistic Safety Clothing and Helmet Detection: Dataset and Method', what mAP@0.50 score did the Faster RCNN model get on the SFCHD dataset | 76.4 |
CausalGym | Linear probe | CausalGym: Benchmarking causal interpretability methods on linguistic tasks | 2024-02-19T00:00:00 | https://arxiv.org/abs/2402.12560v1 | ["https://github.com/aryamanarora/causalgym"] | In the paper 'CausalGym: Benchmarking causal interpretability methods on linguistic tasks', what Log odds-ratio (pythia-6.9b) score did the Linear probe model get on the CausalGym dataset | 3.42 |
The Pile | Test-Time Fine-Tuning with SIFT + GPT-2 (124M) | Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs | 2024-10-10T00:00:00 | https://arxiv.org/abs/2410.08020v2 | ["https://github.com/jonhue/activeft"] | In the paper 'Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs', what Bits per byte score did the Test-Time Fine-Tuning with SIFT + GPT-2 (124M) model get on The Pile dataset | 0.862 |
Tanks and Temples | C3DGS | Compact 3D Gaussian Representation for Radiance Field | 2023-11-22T00:00:00 | https://arxiv.org/abs/2311.13681v2 | ["https://github.com/maincold2/Compact-3DGS"] | In the paper 'Compact 3D Gaussian Representation for Radiance Field', what PSNR score did the C3DGS model get on the Tanks and Temples dataset | 0.2332 |
MVBench | mPLUG-Owl3(7B) | mPLUG-Owl3: Towards Long Image-Sequence Understanding in Multi-Modal Large Language Models | 2024-08-09T00:00:00 | https://arxiv.org/abs/2408.04840v2 | ["https://github.com/x-plug/mplug-owl"] | In the paper 'mPLUG-Owl3: Towards Long Image-Sequence Understanding in Multi-Modal Large Language Models', what Avg. score did the mPLUG-Owl3(7B) model get on the MVBench dataset | 59.5 |
Inspec | PromptRank | PromptRank: Unsupervised Keyphrase Extraction Using Prompt | 2023-05-08T00:00:00 | https://arxiv.org/abs/2305.04490v2 | ["https://github.com/hlt-nlp/promptrank"] | In the paper 'PromptRank: Unsupervised Keyphrase Extraction Using Prompt', what F1@10 score did the PromptRank model get on the Inspec dataset | 37.88 |
CALVIN | RoboUniView | RoboUniView: Visual-Language Model with Unified View Representation for Robotic Manipulation | 2024-06-27T00:00:00 | https://arxiv.org/abs/2406.18977v3 | ["https://github.com/liufanfanlff/robouniview"] | In the paper 'RoboUniView: Visual-Language Model with Unified View Representation for Robotic Manipulation', what Avg. sequence length score did the RoboUniView model get on the CALVIN dataset | 3.647 |
SBU / SBU-Refine | SDDNet (MM 2023) (256x256) | SDDNet: Style-guided Dual-layer Disentanglement Network for Shadow Detection | 2023-08-17T00:00:00 | https://arxiv.org/abs/2308.08935v2 | ["https://github.com/rmcong/sddnet_acmmm23"] | In the paper 'SDDNet: Style-guided Dual-layer Disentanglement Network for Shadow Detection', what BER score did the SDDNet (MM 2023) (256x256) model get on the SBU / SBU-Refine dataset | 5.39 |
HIDE (trained on GOPRO) | ID-Blau (Restormer) | ID-Blau: Image Deblurring by Implicit Diffusion-based reBLurring AUgmentation | 2023-12-18T00:00:00 | https://arxiv.org/abs/2312.10998v2 | ["https://github.com/plusgood-steven/id-blau"] | In the paper 'ID-Blau: Image Deblurring by Implicit Diffusion-based reBLurring AUgmentation', what PSNR (sRGB) score did the ID-Blau (Restormer) model get on the HIDE (trained on GOPRO) dataset | 31.66 |
PKU-DDD17-Car | CAFR | Embracing Events and Frames with Hierarchical Feature Refinement Network for Object Detection | 2024-07-17T00:00:00 | https://arxiv.org/abs/2407.12582v2 | ["https://github.com/hucaofighting/frn"] | In the paper 'Embracing Events and Frames with Hierarchical Feature Refinement Network for Object Detection', what mAP50 score did the CAFR model get on the PKU-DDD17-Car dataset | 86.7 |
SIDD | DRANet | Dual Residual Attention Network for Image Denoising | 2023-05-07T00:00:00 | https://arxiv.org/abs/2305.04269v1 | ["https://github.com/WenCongWu/DRANet"] | In the paper 'Dual Residual Attention Network for Image Denoising', what SSIM (sRGB) score did the DRANet model get on the SIDD dataset | 0.957 |
Telegram (Directed Graph label rate 60%) | 1iG | Scale Invariance of Graph Neural Networks | 2024-11-28T00:00:00 | https://arxiv.org/abs/2411.19392v2 | ["https://github.com/qin87/scalenet"] | In the paper 'Scale Invariance of Graph Neural Networks', what Accuracy score did the 1iG model get on the Telegram (Directed Graph label rate 60%) dataset | 95.8±3.5 |
SALECI | ECT-SAL | Brand Visibility in Packaging: A Deep Learning Approach for Logo Detection, Saliency-Map Prediction, and Logo Placement Analysis | 2024-03-04T00:00:00 | https://arxiv.org/abs/2403.02336v1 | ["https://github.com/Arhosseini77/Brand_Attention"] | In the paper 'Brand Visibility in Packaging: A Deep Learning Approach for Logo Detection, Saliency-Map Prediction, and Logo Placement Analysis', what KL score did the ECT-SAL model get on the SALECI dataset | 0.578 |
ImageNet-1k vs SUN | ODIN+UMAP (ResNet-50) | Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability | 2023-06-06T00:00:00 | https://arxiv.org/abs/2306.03715v1 | ["https://github.com/tmlr-group/unleashing-mask"] | In the paper 'Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability', what FPR95 score did the ODIN+UMAP (ResNet-50) model get on the ImageNet-1k vs SUN dataset | 49.69 |
ImageNet-A | HPT | Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06323v1 | ["https://github.com/vill-lab/2024-aaai-hpt"] | In the paper 'Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models', what Top-1 accuracy % score did the HPT model get on the ImageNet-A dataset | 50.85 |
WSJ eval92 | RobustGER | It's Never Too Late: Fusing Acoustic Information into Large Language Models for Automatic Speech Recognition | 2024-02-08T00:00:00 | https://arxiv.org/abs/2402.05457v1 | ["https://github.com/Hypotheses-Paradise/UADF"] | In the paper 'It's Never Too Late: Fusing Acoustic Information into Large Language Models for Automatic Speech Recognition', what Word Error Rate (WER) score did the RobustGER model get on the WSJ eval92 dataset | 2.2 |
MM-Vet | Phantom-7B | Phantom of Latent for Large Language and Vision Models | 2024-09-23T00:00:00 | https://arxiv.org/abs/2409.14713v1 | ["https://github.com/byungkwanlee/phantom"] | In the paper 'Phantom of Latent for Large Language and Vision Models', what GPT-4 score did the Phantom-7B model get on the MM-Vet dataset | 70.8 |
CIFAR-10-LT (ρ=50) | MDCS | MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition | 2023-08-19T00:00:00 | https://arxiv.org/abs/2308.09922v2 | ["https://github.com/fistyee/mdcs"] | In the paper 'MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition', what Error Rate score did the MDCS model get on the CIFAR-10-LT (ρ=50) dataset | 11.7 |
PACS | UniDG + CORAL + ConvNeXt-B | Towards Unified and Effective Domain Generalization | 2023-10-16T00:00:00 | https://arxiv.org/abs/2310.10008v1 | ["https://github.com/invictus717/UniDG"] | In the paper 'Towards Unified and Effective Domain Generalization', what Average Accuracy score did the UniDG + CORAL + ConvNeXt-B model get on the PACS dataset | 95.6 |
HJDB | Beat This! | Beat this! Accurate beat tracking without DBN postprocessing | 2024-07-31T00:00:00 | https://arxiv.org/abs/2407.21658v1 | ["https://github.com/CPJKU/beat_this"] | In the paper 'Beat this! Accurate beat tracking without DBN postprocessing', what F1 score did the Beat This! model get on the HJDB dataset | 96.6 |
CARLA | Coaching a Teachable Student (CaT) | Coaching a Teachable Student | 2023-06-16T00:00:00 | https://arxiv.org/abs/2306.10014v1 | ["https://github.com/h2xlab/CaT"] | In the paper 'Coaching a Teachable Student', what Driving Score did the Coaching a Teachable Student (CaT) model get on the CARLA dataset | 58 |
SALICON | MDS-ViTNet | MDS-ViTNet: Improving saliency prediction for Eye-Tracking with Vision Transformer | 2024-05-29T00:00:00 | https://arxiv.org/abs/2405.19501v1 | ["https://github.com/ignatpolezhaev/mds-vitnet"] | In the paper 'MDS-ViTNet: Improving saliency prediction for Eye-Tracking with Vision Transformer', what CC score did the MDS-ViTNet model get on the SALICON dataset | 0.8980 |
STS Benchmark | Rematch | Rematch: Robust and Efficient Matching of Local Knowledge Graphs to Improve Structural and Semantic Similarity | 2024-04-02T00:00:00 | https://arxiv.org/abs/2404.02126v1 | ["https://github.com/osome-iu/Rematch-RARE"] | In the paper 'Rematch: Robust and Efficient Matching of Local Knowledge Graphs to Improve Structural and Semantic Similarity', what Spearman Correlation score did the Rematch model get on the STS Benchmark dataset | 0.6652 |
LVIS v1.0 | RTGen | RTGen: Generating Region-Text Pairs for Open-Vocabulary Object Detection | 2024-05-30T00:00:00 | https://arxiv.org/abs/2405.19854v1 | ["https://github.com/seermer/RTGen"] | In the paper 'RTGen: Generating Region-Text Pairs for Open-Vocabulary Object Detection', what AP novel-LVIS base training score did the RTGen model get on the LVIS v1.0 dataset | 30.2 |
Weather2K114 (720) | MoLE-DLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06786v3 | ["https://github.com/rogerni/mole"] | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the Weather2K114 (720) dataset | 0.425 |
Ontonotes v5 (English) | NuNER | NuNER: Entity Recognition Encoder Pre-training via LLM-Annotated Data | 2024-02-23T00:00:00 | https://arxiv.org/abs/2402.15343v1 | ["https://github.com/Serega6678/NuNER"] | In the paper 'NuNER: Entity Recognition Encoder Pre-training via LLM-Annotated Data', what F1 score did the NuNER model get on the Ontonotes v5 (English) dataset | 89.1 |
In-Shop | EfficientDML-VPTSP-G/512 | Learning Semantic Proxies from Visual Prompts for Parameter-Efficient Fine-Tuning in Deep Metric Learning | 2024-02-04T00:00:00 | https://arxiv.org/abs/2402.02340v2 | ["https://github.com/noahsark/parameterefficient-dml"] | In the paper 'Learning Semantic Proxies from Visual Prompts for Parameter-Efficient Fine-Tuning in Deep Metric Learning', what R@1 score did the EfficientDML-VPTSP-G/512 model get on the In-Shop dataset | 92.1 |
NYU-Depth V2 | PGT (Swin-S) | Prompt Guided Transformer for Multi-Task Dense Prediction | 2023-07-28T00:00:00 | https://arxiv.org/abs/2307.15362v1 | ["https://github.com/innovator-zero/MTDP_Lib"] | In the paper 'Prompt Guided Transformer for Multi-Task Dense Prediction', what odsF score did the PGT (Swin-S) model get on the NYU-Depth V2 dataset | 78.04 |
Human3.6M | Regular Splitting Graph Network | Regular Splitting Graph Network for 3D Human Pose Estimation | 2023-05-09T00:00:00 | https://arxiv.org/abs/2305.05785v1 | ["https://github.com/nies14/rs-net"] | In the paper 'Regular Splitting Graph Network for 3D Human Pose Estimation', what Average MPJPE (mm) score did the Regular Splitting Graph Network model get on the Human3.6M dataset | 47 |
DAVIS 2017 (val) | DEVA | Tracking Anything with Decoupled Video Segmentation | 2023-09-07T00:00:00 | https://arxiv.org/abs/2309.03903v1 | ["https://github.com/hkchengrex/Tracking-Anything-with-DEVA"] | In the paper 'Tracking Anything with Decoupled Video Segmentation', what Jaccard (Mean) score did the DEVA model get on the DAVIS 2017 (val) dataset | 84.2 |
Human3.6M | RTPCA | Refined Temporal Pyramidal Compression-and-Amplification Transformer for 3D Human Pose Estimation | 2023-09-04T00:00:00 | https://arxiv.org/abs/2309.01365v3 | ["https://github.com/hbing-l/rtpca"] | In the paper 'Refined Temporal Pyramidal Compression-and-Amplification Transformer for 3D Human Pose Estimation', what Average MPJPE (mm) score did the RTPCA model get on the Human3.6M dataset | 40.1 |
SMAC 6h_vs_8z | QPLEX | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04T00:00:00 | https://arxiv.org/abs/2306.02430v1 | ["https://github.com/j3soon/dfac-extended"] | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Average Score did the QPLEX model get on the SMAC 6h_vs_8z dataset | 15.95 |
ImageNet | ProMetaR | Prompt Learning via Meta-Regularization | 2024-04-01T00:00:00 | https://arxiv.org/abs/2404.00851v1 | ["https://github.com/mlvlab/prometar"] | In the paper 'Prompt Learning via Meta-Regularization', what Harmonic mean score did the ProMetaR model get on the ImageNet dataset | 74.09 |
Amazon-Movies | HetroFair | Heterophily-Aware Fair Recommendation using Graph Convolutional Networks | 2024-01-31T00:00:00 | https://arxiv.org/abs/2402.03365v2 | ["https://github.com/nematgh/hetrofair"] | In the paper 'Heterophily-Aware Fair Recommendation using Graph Convolutional Networks', what NDCG@20 score did the HetroFair model get on the Amazon-Movies dataset | 0.0777 |
DTD | ProMetaR | Prompt Learning via Meta-Regularization | 2024-04-01T00:00:00 | https://arxiv.org/abs/2404.00851v1 | ["https://github.com/mlvlab/prometar"] | In the paper 'Prompt Learning via Meta-Regularization', what Harmonic mean score did the ProMetaR model get on the DTD dataset | 72.31 |
ETTm1 (96) Multivariate | RLinear | Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping | 2023-05-18T00:00:00 | https://arxiv.org/abs/2305.10721v1 | ["https://github.com/plumprc/rtsf"] | In the paper 'Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping', what MSE score did the RLinear model get on the ETTm1 (96) Multivariate dataset | 0.301 |
miniF2F-test | Lyra + GPT-4 | Lyra: Orchestrating Dual Correction in Automated Theorem Proving | 2023-09-27T00:00:00 | https://arxiv.org/abs/2309.15806v4 | ["https://github.com/chuanyang-zheng/lyra-theorem-prover"] | In the paper 'Lyra: Orchestrating Dual Correction in Automated Theorem Proving', what Pass@100 score did the Lyra + GPT-4 model get on the miniF2F-test dataset | 47.1 |
Manga109 - 4x upscaling | DAT+ | Dual Aggregation Transformer for Image Super-Resolution | 2023-08-07T00:00:00 | https://arxiv.org/abs/2308.03364v2 | ["https://github.com/zhengchen1999/dat"] | In the paper 'Dual Aggregation Transformer for Image Super-Resolution', what PSNR score did the DAT+ model get on the Manga109 - 4x upscaling dataset | 32.67 |
CIFAR-10 | TM Composites Toolbox | An Optimized Toolbox for Advanced Image Processing with Tsetlin Machine Composites | 2024-06-02T00:00:00 | https://arxiv.org/abs/2406.00704v1 | ["https://github.com/cair/An-Optimized-Toolbox-for-Advanced-Image-Processing-with-Tsetlin-Machine-Composites"] | In the paper 'An Optimized Toolbox for Advanced Image Processing with Tsetlin Machine Composites', what Percentage correct score did the TM Composites Toolbox model get on the CIFAR-10 dataset | 82.8 |
CIFAR-10 | ABNet-2G-R0 | ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities | 2024-11-28T00:00:00 | https://arxiv.org/abs/2411.19213v1 | ["https://github.com/dvssajay/New_World"] | In the paper 'ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities', what Percentage correct score did the ABNet-2G-R0 model get on the CIFAR-10 dataset | 94.118 |
SOD4SB Private Test | DL method (YOLOv8 + Ensamble) | MVA2023 Small Object Detection Challenge for Spotting Birds: Dataset, Methods, and Results | 2023-07-18T00:00:00 | https://arxiv.org/abs/2307.09143v1 | ["https://github.com/iim-ttij/mva2023smallobjectdetection4spottingbirds"] | In the paper 'MVA2023 Small Object Detection Challenge for Spotting Birds: Dataset, Methods, and Results', what AP50 score did the DL method (YOLOv8 + Ensamble) model get on the SOD4SB Private Test dataset | 22.9 |
HRSOD | BiRefNet (DUTS, HRSOD, UHRSD) | Bilateral Reference for High-Resolution Dichotomous Image Segmentation | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03407v6 | ["https://github.com/zhengpeng7/birefnet"] | In the paper 'Bilateral Reference for High-Resolution Dichotomous Image Segmentation', what S-Measure score did the BiRefNet (DUTS, HRSOD, UHRSD) model get on the HRSOD dataset | 0.962 |
Gowalla | NESCL | Neighborhood-Enhanced Supervised Contrastive Learning for Collaborative Filtering | 2024-02-18T00:00:00 | https://arxiv.org/abs/2402.11523v1 | ["https://github.com/PeiJieSun/NESCL"] | In the paper 'Neighborhood-Enhanced Supervised Contrastive Learning for Collaborative Filtering', what Recall@20 score did the NESCL model get on the Gowalla dataset | 0.1917 |
GSM8K | PaLM 2 (few-shot, k=8, CoT) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2 (few-shot, k=8, CoT) model get on the GSM8K dataset | 80.7 |
MBPP | GPT-3.5 Turbo (0-shot) | INTERVENOR: Prompting the Coding Ability of Large Language Models with the Interactive Chain of Repair | 2023-11-16T00:00:00 | https://arxiv.org/abs/2311.09868v5 | ["https://github.com/neuir/intervenor"] | In the paper 'INTERVENOR: Prompting the Coding Ability of Large Language Models with the Interactive Chain of Repair', what Accuracy score did the GPT-3.5 Turbo (0-shot) model get on the MBPP dataset | 39.8 |
PASCAL VOC | OTSeg | OTSeg: Multi-prompt Sinkhorn Attention for Zero-Shot Semantic Segmentation | 2024-03-21T00:00:00 | https://arxiv.org/abs/2403.14183v2 | ["https://github.com/cubeyoung/OTSeg"] | In the paper 'OTSeg: Multi-prompt Sinkhorn Attention for Zero-Shot Semantic Segmentation', what Transductive Setting hIoU score did the OTSeg model get on the PASCAL VOC dataset | 94.2 |
InsPLAD | RD++ (CBAM-ResNet-18) | Attention Modules Improve Image-Level Anomaly Detection for Industrial Inspection: A DifferNet Case Study | 2023-11-05T00:00:00 | https://arxiv.org/abs/2311.02747v2 | ["https://github.com/andreluizbvs/insplad"] | In the paper 'Attention Modules Improve Image-Level Anomaly Detection for Industrial Inspection: A DifferNet Case Study', what Detection AUROC score did the RD++ (CBAM-ResNet-18) model get on the InsPLAD dataset | 90.75 |
KITTI Test (Online Methods) | UCMCTrack | UCMCTrack: Multi-Object Tracking with Uniform Camera Motion Compensation | 2023-12-14T00:00:00 | https://arxiv.org/abs/2312.08952v2 | [
"https://github.com/corfyi/ucmctrack"
] | In the paper 'UCMCTrack: Multi-Object Tracking with Uniform Camera Motion Compensation', what HOTA score did the UCMCTrack model get on the KITTI Test (Online Methods) dataset
| 77.1 |
EgoExoLearn | Action anticipation baseline (co-training, no gaze) | EgoExoLearn: A Dataset for Bridging Asynchronous Ego- and Exo-centric View of Procedural Activities in Real World | 2024-03-24T00:00:00 | https://arxiv.org/abs/2403.16182v2 | [
"https://github.com/opengvlab/egoexolearn"
] | In the paper 'EgoExoLearn: A Dataset for Bridging Asynchronous Ego- and Exo-centric View of Procedural Activities in Real World', what Accuracy score did the Action anticipation baseline (co-training, no gaze) model get on the EgoExoLearn dataset
| 38.7 |
MS-COCO (10-shot) | DE-ViT | Detect Everything with Few Examples | 2023-09-22T00:00:00 | https://arxiv.org/abs/2309.12969v4 | [
"https://github.com/mlzxy/devit"
] | In the paper 'Detect Everything with Few Examples', what AP score did the DE-ViT model get on the MS-COCO (10-shot) dataset
| 34.0 |
MBPP | AFlow(GPT-4o-mini) | AFlow: Automating Agentic Workflow Generation | 2024-10-14T00:00:00 | https://arxiv.org/abs/2410.10762v1 | [
"https://github.com/geekan/metagpt"
] | In the paper 'AFlow: Automating Agentic Workflow Generation', what Accuracy score did the AFlow(GPT-4o-mini) model get on the MBPP dataset
| 83.4 |
SIR^2(Objects) | RDNet | Reversible Decoupling Network for Single Image Reflection Removal | 2024-10-10T00:00:00 | https://arxiv.org/abs/2410.08063v1 | [
"https://github.com/lime-j/RDNet"
] | In the paper 'Reversible Decoupling Network for Single Image Reflection Removal', what PSNR score did the RDNet model get on the SIR^2(Objects) dataset
| 26.78 |
SHHS (single-channel) | NeuroNet (C4-A1 only) | NeuroNet: A Novel Hybrid Self-Supervised Learning Framework for Sleep Stage Classification Using Single-Channel EEG | 2024-04-10T00:00:00 | https://arxiv.org/abs/2404.17585v2 | [
"https://github.com/dlcjfgmlnasa/NeuroNet"
] | In the paper 'NeuroNet: A Novel Hybrid Self-Supervised Learning Framework for Sleep Stage Classification Using Single-Channel EEG', what Accuracy score did the NeuroNet (C4-A1 only) model get on the SHHS (single-channel) dataset
| 86.88% |
QVHighlights | SG-DETR (w/ PT) | Saliency-Guided DETR for Moment Retrieval and Highlight Detection | 2024-10-02T00:00:00 | https://arxiv.org/abs/2410.01615v1 | [
"https://github.com/ai-forever/sg-detr"
] | In the paper 'Saliency-Guided DETR for Moment Retrieval and Highlight Detection', what mAP score did the SG-DETR (w/ PT) model get on the QVHighlights dataset
| 44.70 |
ScanNet200 | ODIN | ODIN: A Single Model for 2D and 3D Segmentation | 2024-01-04T00:00:00 | https://arxiv.org/abs/2401.02416v3 | [
"https://github.com/ayushjain1144/odin"
] | In the paper 'ODIN: A Single Model for 2D and 3D Segmentation', what val mIoU score did the ODIN model get on the ScanNet200 dataset
| 40.5 |
DAVIS 2017 (test-dev) | Cutie+ (base, MEGA) | Putting the Object Back into Video Object Segmentation | 2023-10-19T00:00:00 | https://arxiv.org/abs/2310.12982v2 | [
"https://github.com/hkchengrex/Cutie"
] | In the paper 'Putting the Object Back into Video Object Segmentation', what J&F score did the Cutie+ (base, MEGA) model get on the DAVIS 2017 (test-dev) dataset
| 88.1 |
The Pile | Test-Time Fine-Tuning with SIFT + Phi-3 (3.8B) | Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs | 2024-10-10T00:00:00 | https://arxiv.org/abs/2410.08020v2 | [
"https://github.com/jonhue/activeft"
] | In the paper 'Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs', what Bits per byte score did the Test-Time Fine-Tuning with SIFT + Phi-3 (3.8B) model get on The Pile dataset
| 0.595 |
Weather (720) | PRformer | PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting | 2024-08-20T00:00:00 | https://arxiv.org/abs/2408.10483v1 | [
"https://github.com/usualheart/prformer"
] | In the paper 'PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting', what MSE score did the PRformer model get on the Weather (720) dataset
| 0.326 |
BanglaBook | LSTM (GloVe) | BanglaBook: A Large-scale Bangla Dataset for Sentiment Analysis from Book Reviews | 2023-05-11T00:00:00 | https://arxiv.org/abs/2305.06595v3 | [
"https://github.com/mohsinulkabir14/banglabook"
] | In the paper 'BanglaBook: A Large-scale Bangla Dataset for Sentiment Analysis from Book Reviews', what Weighted Average F1-score score did the LSTM (GloVe) model get on the BanglaBook dataset
| 0.0991 |
Charades-STA | VideoLights-B-pt | VideoLights: Feature Refinement and Cross-Task Alignment Transformer for Joint Video Highlight Detection and Moment Retrieval | 2024-12-02T00:00:00 | https://arxiv.org/abs/2412.01558v1 | [
"https://github.com/dpaul06/VideoLights"
] | In the paper 'VideoLights: Feature Refinement and Cross-Task Alignment Transformer for Joint Video Highlight Detection and Moment Retrieval', what R@1 IoU=0.5 score did the VideoLights-B-pt model get on the Charades-STA dataset
| 61.96 |
QVHighlights | R^2-Tuning | $R^2$-Tuning: Efficient Image-to-Video Transfer Learning for Video Temporal Grounding | 2024-03-31T00:00:00 | https://arxiv.org/abs/2404.00801v2 | [
"https://github.com/yeliudev/R2-Tuning"
] | In the paper '$R^2$-Tuning: Efficient Image-to-Video Transfer Learning for Video Temporal Grounding', what mAP score did the R^2-Tuning model get on the QVHighlights dataset
| 40.75 |
RefCOCOg-test | EVF-SAM | EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model | 2024-06-28T00:00:00 | https://arxiv.org/abs/2406.20076v4 | [
"https://github.com/hustvl/evf-sam"
] | In the paper 'EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model', what Overall IoU score did the EVF-SAM model get on the RefCOCOg-test dataset
| 77.4 |
Amazon Men | ProxyRCA | Proxy-based Item Representation for Attribute and Context-aware Recommendation | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06145v1 | [
"https://github.com/theeluwin/ProxyRCA"
] | In the paper 'Proxy-based Item Representation for Attribute and Context-aware Recommendation', what Hit@10 score did the ProxyRCA model get on the Amazon Men dataset
| 0.617 |
Assembly101 | Goal Consistency | Action Anticipation with Goal Consistency | 2023-06-26T00:00:00 | https://arxiv.org/abs/2306.15045v1 | [
"https://github.com/olga-zats/goal_consistency"
] | In the paper 'Action Anticipation with Goal Consistency', what Actions Recall@5 score did the Goal Consistency model get on the Assembly101 dataset
| 12.07 |
Caltech-101 | PromptKD | PromptKD: Unsupervised Prompt Distillation for Vision-Language Models | 2024-03-05T00:00:00 | https://arxiv.org/abs/2403.02781v5 | [
"https://github.com/zhengli97/promptkd"
] | In the paper 'PromptKD: Unsupervised Prompt Distillation for Vision-Language Models', what Harmonic mean score did the PromptKD model get on the Caltech-101 dataset
| 97.77 |
ChEBI-20 | DSOKR | Deep Sketched Output Kernel Regression for Structured Prediction | 2024-06-13T00:00:00 | https://arxiv.org/abs/2406.09253v1 | [
"https://github.com/tamim-el/dsokr"
] | In the paper 'Deep Sketched Output Kernel Regression for Structured Prediction', what Mean Rank score did the DSOKR model get on the ChEBI-20 dataset
| 76.43 |
rt-inod-bias | GPT-4 | Benchmarking Llama2, Mistral, Gemma and GPT for Factuality, Toxicity, Bias and Propensity for Hallucinations | 2024-04-15T00:00:00 | https://arxiv.org/abs/2404.09785v1 | [
"https://github.com/innodatalabs/innodata-llm-safety"
] | In the paper 'Benchmarking Llama2, Mistral, Gemma and GPT for Factuality, Toxicity, Bias and Propensity for Hallucinations', what Best-of score did the GPT-4 model get on the rt-inod-bias dataset
| 0.5 |
Charades-STA | D3G (Semi-weak, MViT-K400-Pretrain-feature, evaluated by AdaFocus) | D3G: Exploring Gaussian Prior for Temporal Sentence Grounding with Glance Annotation | 2023-08-08T00:00:00 | https://arxiv.org/abs/2308.04197v1 | [
"https://github.com/solicucu/d3g"
] | In the paper 'D3G: Exploring Gaussian Prior for Temporal Sentence Grounding with Glance Annotation', what R1@0.5 score did the D3G (Semi-weak, MViT-K400-Pretrain-feature, evaluated by AdaFocus) model get on the Charades-STA dataset
| 46.0 |
MPI-INF-3DHP | DC-GCT | Double-chain Constraints for 3D Human Pose Estimation in Images and Videos | 2023-08-10T00:00:00 | https://arxiv.org/abs/2308.05298v1 | [
"https://github.com/KHB1698/DC-GCT"
] | In the paper 'Double-chain Constraints for 3D Human Pose Estimation in Images and Videos', what AUC score did the DC-GCT model get on the MPI-INF-3DHP dataset
| 55.9 |
Winoground | TIFA | What You See is What You Read? Improving Text-Image Alignment Evaluation | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10400v4 | [
"https://github.com/yonatanbitton/wysiwyr"
] | In the paper 'What You See is What You Read? Improving Text-Image Alignment Evaluation', what Text Score score did the TIFA model get on the Winoground dataset
| 19.00 |
MATH | LogicNet (with code interpreter) | Solving Challenging Math Word Problems Using GPT-4 Code Interpreter with Code-based Self-Verification | 2023-08-15T00:00:00 | https://arxiv.org/abs/2308.07921v1 | [
"https://github.com/kipok/nemo-skills"
] | In the paper 'Solving Challenging Math Word Problems Using GPT-4 Code Interpreter with Code-based Self-Verification', what Accuracy score did the LogicNet (with code interpreter) model get on the MATH dataset
| 71.2 |
PeMSD7(L) | PM-DMNet(R) | Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction | 2024-08-12T00:00:00 | https://arxiv.org/abs/2408.07100v1 | [
"https://github.com/wengwenchao123/PM-DMNet"
] | In the paper 'Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction', what 12 steps MAE score did the PM-DMNet(R) model get on the PeMSD7(L) dataset
| 2.79 |
SemanticKITTI | TALoS | TALoS: Enhancing Semantic Scene Completion via Test-time Adaptation on the Line of Sight | 2024-10-21T00:00:00 | https://arxiv.org/abs/2410.15674v2 | [
"https://github.com/blue-531/talos"
] | In the paper 'TALoS: Enhancing Semantic Scene Completion via Test-time Adaptation on the Line of Sight', what mIoU score did the TALoS model get on the SemanticKITTI dataset
| 37.9 |
Google Speech Commands V2 35 | SSAMBA | SSAMBA: Self-Supervised Audio Representation Learning with Mamba State Space Model | 2024-05-20T00:00:00 | https://arxiv.org/abs/2405.11831v1 | [
"https://github.com/siavashshams/ssamba"
] | In the paper 'SSAMBA: Self-Supervised Audio Representation Learning with Mamba State Space Model', what Accuracy (10-fold) score did the SSAMBA model get on the Google Speech Commands V2 35 dataset
| 97.4 |
SALICON | SUM | SUM: Saliency Unification through Mamba for Visual Attention Modeling | 2024-06-25T00:00:00 | https://arxiv.org/abs/2406.17815v2 | [
"https://github.com/Arhosseini77/SUM"
] | In the paper 'SUM: Saliency Unification through Mamba for Visual Attention Modeling', what CC score did the SUM model get on the SALICON dataset
| 0.909 |
ADNI | NeuroPath | NeuroPath: A Neural Pathway Transformer for Joining the Dots of Human Connectomes | 2024-09-26T00:00:00 | https://arxiv.org/abs/2409.17510v3 | [
"https://github.com/Chrisa142857/neuro_detour"
] | In the paper 'NeuroPath: A Neural Pathway Transformer for Joining the Dots of Human Connectomes', what Accuracy score did the NeuroPath model get on the ADNI dataset
| 85.56 |
Casia V1+ | Late Fusion | MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization | 2023-12-04T00:00:00 | https://arxiv.org/abs/2312.01790v2 | [
"https://github.com/idt-iti/mmfusion-iml"
] | In the paper 'MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization', what AUC score did the Late Fusion model get on the Casia V1+ dataset
| 0.930 |