dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
ETTm2 (336) Multivariate | MoLE-DLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11 | https://arxiv.org/abs/2312.06786v3 | ["https://github.com/rogerni/mole"] | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the ETTm2 (336) Multivariate dataset | 0.289 |
SPKL | MobileNetV2 | Revising deep learning methods in parking lot occupancy detection | 2023-06-07 | https://arxiv.org/abs/2306.04288v3 | ["https://github.com/eighonet/parking-research"] | In the paper 'Revising deep learning methods in parking lot occupancy detection', what F1-score score did the MobileNetV2 model get on the SPKL dataset | 0.6937 |
ScanNet | PointHR | PointHR: Exploring High-Resolution Architectures for 3D Point Cloud Segmentation | 2023-10-11 | https://arxiv.org/abs/2310.07743v1 | ["https://github.com/haibo-qiu/PointHR"] | In the paper 'PointHR: Exploring High-Resolution Architectures for 3D Point Cloud Segmentation', what test mIoU score did the PointHR model get on the ScanNet dataset | 76.6 |
MultiRC | PaLM 2-L (one-shot) | PaLM 2 Technical Report | 2023-05-17 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what F1 score did the PaLM 2-L (one-shot) model get on the MultiRC dataset | 88.2 |
DTU | GoMVS | GoMVS: Geometrically Consistent Cost Aggregation for Multi-View Stereo | 2024-04-11 | https://arxiv.org/abs/2404.07992v1 | ["https://github.com/wuuu3511/gomvs"] | In the paper 'GoMVS: Geometrically Consistent Cost Aggregation for Multi-View Stereo', what Acc score did the GoMVS model get on the DTU dataset | 0.347 |
DAVIS 2017 (val) | Cutie+ (base) | Putting the Object Back into Video Object Segmentation | 2023-10-19 | https://arxiv.org/abs/2310.12982v2 | ["https://github.com/hkchengrex/Cutie"] | In the paper 'Putting the Object Back into Video Object Segmentation', what Jaccard (Mean) score did the Cutie+ (base) model get on the DAVIS 2017 (val) dataset | 87.5 |
MusicCaps | MeLFusion (image-conditioned) | MeLFusion: Synthesizing Music from Image and Language Cues using Diffusion Models | 2024-06-07 | https://arxiv.org/abs/2406.04673v1 | ["https://github.com/schowdhury671/melfusion"] | In the paper 'MeLFusion: Synthesizing Music from Image and Language Cues using Diffusion Models', what FAD VGG score did the MeLFusion (image-conditioned) model get on the MusicCaps dataset | 1.12 |
IIIT5k | CLIP4STR-B | CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model | 2023-05-23 | https://arxiv.org/abs/2305.14014v3 | ["https://github.com/VamosC/CLIP4STR"] | In the paper 'CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model', what Accuracy score did the CLIP4STR-B model get on the IIIT5k dataset | 99.2 |
DUTS-TE | M3Net-R | M$^3$Net: Multilevel, Mixed and Multistage Attention Network for Salient Object Detection | 2023-09-15 | https://arxiv.org/abs/2309.08365v1 | ["https://github.com/I2-Multimedia-Lab/M3Net"] | In the paper 'M$^3$Net: Multilevel, Mixed and Multistage Attention Network for Salient Object Detection', what MAE score did the M3Net-R model get on the DUTS-TE dataset | 0.036 |
LVIS v1.0 val | SE-R50-FPN-MaskRCNN-APA | Adaptive Parametric Activation | 2024-07-11 | https://arxiv.org/abs/2407.08567v2 | ["https://github.com/kostas1515/aglu"] | In the paper 'Adaptive Parametric Activation', what mask AP score did the SE-R50-FPN-MaskRCNN-APA model get on the LVIS v1.0 val dataset | 29.1 |
PASCAL-5i (1-Shot) | CyCTR (DifFSS, ResNet-50) | DifFSS: Diffusion Model for Few-Shot Semantic Segmentation | 2023-07-03 | https://arxiv.org/abs/2307.00773v3 | ["https://github.com/TrinitialChan/DifFSS"] | In the paper 'DifFSS: Diffusion Model for Few-Shot Semantic Segmentation', what Mean IoU score did the CyCTR (DifFSS, ResNet-50) model get on the PASCAL-5i (1-Shot) dataset | 66.2 |
ImageNet-1k vs iNaturalist | ODIN+UMAP (ResNet-50) | Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability | 2023-06-06 | https://arxiv.org/abs/2306.03715v1 | ["https://github.com/tmlr-group/unleashing-mask"] | In the paper 'Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability', what FPR95 score did the ODIN+UMAP (ResNet-50) model get on the ImageNet-1k vs iNaturalist dataset | 21.97 |
IFEval | AutoIF (Llama3 70B) | Self-play with Execution Feedback: Improving Instruction-following Capabilities of Large Language Models | 2024-06-19 | https://arxiv.org/abs/2406.13542v3 | ["https://github.com/QwenLM/AutoIF"] | In the paper 'Self-play with Execution Feedback: Improving Instruction-following Capabilities of Large Language Models', what Prompt-level strict-accuracy score did the AutoIF (Llama3 70B) model get on the IFEval dataset | 80.2 |
WikiLargeFR | mT5 (fine-tuned on MULTI-SIM) | Revisiting non-English Text Simplification: A Unified Multilingual Benchmark | 2023-05-25 | https://arxiv.org/abs/2305.15678v1 | ["https://github.com/xenonmolecule/multisim"] | In the paper 'Revisiting non-English Text Simplification: A Unified Multilingual Benchmark', what SARI score did the mT5 (fine-tuned on MULTI-SIM) model get on the WikiLargeFR dataset | 39.23 |
OpenBookQA | PaLM 2-L (1-shot) | PaLM 2 Technical Report | 2023-05-17 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-L (1-shot) model get on the OpenBookQA dataset | 58.5 |
MUTAG | CIN++ | CIN++: Enhancing Topological Message Passing | 2023-06-06 | https://arxiv.org/abs/2306.03561v1 | ["https://github.com/twitter-research/cwn"] | In the paper 'CIN++: Enhancing Topological Message Passing', what Accuracy score did the CIN++ model get on the MUTAG dataset | 94.4% |
Unpaired-abdomen-CT | SAME++ | SAME++: A Self-supervised Anatomical eMbeddings Enhanced medical image registration framework using stable sampling and regularized transformation | 2023-11-25 | https://arxiv.org/abs/2311.14986v2 | ["https://github.com/alibaba-damo-academy/same"] | In the paper 'SAME++: A Self-supervised Anatomical eMbeddings Enhanced medical image registration framework using stable sampling and regularized transformation', what DSC score did the SAME++ model get on the Unpaired-abdomen-CT dataset | 0.4927 |
CelebA-HQ | DDPM | Normalizing Flow-Based Metric for Image Generation | 2024-10-02 | https://arxiv.org/abs/2410.02004v2 | ["https://github.com/pranavphoenix/FLD"] | In the paper 'Normalizing Flow-Based Metric for Image Generation', what FLD score did the DDPM model get on the CelebA-HQ dataset | 1.4 |
VNHSGE-English | Bing Chat | VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models | 2023-05-20 | https://arxiv.org/abs/2305.12199v1 | ["https://github.com/xdao85/vnhsge"] | In the paper 'VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models', what Accuracy score did the Bing Chat model get on the VNHSGE-English dataset | 92.4 |
rt-inod-bias | Mistral | Benchmarking Llama2, Mistral, Gemma and GPT for Factuality, Toxicity, Bias and Propensity for Hallucinations | 2024-04-15 | https://arxiv.org/abs/2404.09785v1 | ["https://github.com/innodatalabs/innodata-llm-safety"] | In the paper 'Benchmarking Llama2, Mistral, Gemma and GPT for Factuality, Toxicity, Bias and Propensity for Hallucinations', what Best-of score did the Mistral model get on the rt-inod-bias dataset | 0.36 |
ImageNet 256x256 | RAR-XL, autoregressive | Randomized Autoregressive Visual Generation | 2024-11-01 | https://arxiv.org/abs/2411.00776v1 | ["https://github.com/bytedance/1d-tokenizer"] | In the paper 'Randomized Autoregressive Visual Generation', what FID score did the RAR-XL, autoregressive model get on the ImageNet 256x256 dataset | 1.50 |
Manga109 - 2x upscaling | DRCT | DRCT: Saving Image Super-resolution away from Information Bottleneck | 2024-03-31 | https://arxiv.org/abs/2404.00722v5 | ["https://github.com/ming053l/drct"] | In the paper 'DRCT: Saving Image Super-resolution away from Information Bottleneck', what PSNR score did the DRCT model get on the Manga109 - 2x upscaling dataset | 40.41 |
ACOS | ChatGPT (gpt-3.5-turbo, few-shot) | MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction | 2023-05-22 | https://arxiv.org/abs/2305.12627v1 | ["https://github.com/ZubinGou/multi-view-prompting"] | In the paper 'MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction', what F1 (Restaurant) score did the ChatGPT (gpt-3.5-turbo, few-shot) model get on the ACOS dataset | 37.71 |
WebApp1k-Duo-React | mistral-large-2 | A Case Study of Web App Coding with OpenAI Reasoning Models | 2024-09-19 | https://arxiv.org/abs/2409.13773v1 | ["https://github.com/onekq/webapp1k"] | In the paper 'A Case Study of Web App Coding with OpenAI Reasoning Models', what pass@1 score did the mistral-large-2 model get on the WebApp1k-Duo-React dataset | 0.449 |
Mol-Instruction | SLM4CRP | A Self-feedback Knowledge Elicitation Approach for Chemical Reaction Predictions | 2024-04-15 | https://arxiv.org/abs/2404.09606v1 | ["https://github.com/ai-hpc-research-team/slm4crp"] | In the paper 'A Self-feedback Knowledge Elicitation Approach for Chemical Reaction Predictions', what METEOR score did the SLM4CRP model get on the Mol-Instruction dataset | 0.744 |
Weather (192) | MoLE-RMLP | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11 | https://arxiv.org/abs/2312.06786v3 | ["https://github.com/rogerni/mole"] | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-RMLP model get on the Weather (192) dataset | 0.19 |
CIFAR-10 | ABNet-2G-R3 | ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities | 2024-11-28 | https://arxiv.org/abs/2411.19213v1 | ["https://github.com/dvssajay/New_World"] | In the paper 'ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities', what Percentage correct score did the ABNet-2G-R3 model get on the CIFAR-10 dataset | 96.088 |
LRS3-TED | VSP-LLM | Where Visual Speech Meets Language: VSP-LLM Framework for Efficient and Context-Aware Visual Speech Processing | 2024-02-23 | https://arxiv.org/abs/2402.15151v2 | ["https://github.com/sally-sh/vsp-llm"] | In the paper 'Where Visual Speech Meets Language: VSP-LLM Framework for Efficient and Context-Aware Visual Speech Processing', what Word Error Rate (WER) score did the VSP-LLM model get on the LRS3-TED dataset | 25.4 |
Real20 | DSRNet | Single Image Reflection Separation via Component Synergy | 2023-08-19 | https://arxiv.org/abs/2308.10027v1 | ["https://github.com/mingcv/dsrnet"] | In the paper 'Single Image Reflection Separation via Component Synergy', what PSNR score did the DSRNet model get on the Real20 dataset | 24.23 |
Urban100 - 4x upscaling | DRCT | DRCT: Saving Image Super-resolution away from Information Bottleneck | 2024-03-31 | https://arxiv.org/abs/2404.00722v5 | ["https://github.com/ming053l/drct"] | In the paper 'DRCT: Saving Image Super-resolution away from Information Bottleneck', what PSNR score did the DRCT model get on the Urban100 - 4x upscaling dataset | 28.40 |
CR | LM-CPPF RoBERTa-base | LM-CPPF: Paraphrasing-Guided Data Augmentation for Contrastive Prompt-Based Few-Shot Fine-Tuning | 2023-05-29 | https://arxiv.org/abs/2305.18169v3 | ["https://github.com/amirabaskohi/lm-cppf"] | In the paper 'LM-CPPF: Paraphrasing-Guided Data Augmentation for Contrastive Prompt-Based Few-Shot Fine-Tuning', what Accuracy score did the LM-CPPF RoBERTa-base model get on the CR dataset | 93.3 |
Winoground | KeyComp* (GPT-3.5) | Prompting Large Vision-Language Models for Compositional Reasoning | 2024-01-20 | https://arxiv.org/abs/2401.11337v1 | ["https://github.com/tossowski/keycomp"] | In the paper 'Prompting Large Vision-Language Models for Compositional Reasoning', what Text Score score did the KeyComp* (GPT-3.5) model get on the Winoground dataset | 42.7 |
VisDrone - 5% labeled data | SSOD + Crop (L + U) | Density Crop-guided Semi-supervised Object Detection in Aerial Images | 2023-08-09 | https://arxiv.org/abs/2308.05032v1 | ["https://github.com/akhilpm/dronessod"] | In the paper 'Density Crop-guided Semi-supervised Object Detection in Aerial Images', what COCO-style AP score did the SSOD + Crop (L + U) model get on the VisDrone - 5% labeled data dataset | 23.57 |
MCubeS (P) | ShareCMP(B2 RGB-A) | ShareCMP: Polarization-Aware RGB-P Semantic Segmentation | 2023-12-06 | https://arxiv.org/abs/2312.03430v2 | ["https://github.com/lefteyex/sharecmp"] | In the paper 'ShareCMP: Polarization-Aware RGB-P Semantic Segmentation', what mIoU score did the ShareCMP(B2 RGB-A) model get on the MCubeS (P) dataset | 50.34 |
UCR Anomaly Archive | TS-TCC-AD | Explainable Time Series Anomaly Detection using Masked Latent Generative Modeling | 2023-11-21 | https://arxiv.org/abs/2311.12550v5 | ["https://github.com/ml4its/timevqvae-anomalydetection"] | In the paper 'Explainable Time Series Anomaly Detection using Masked Latent Generative Modeling', what accuracy score did the TS-TCC-AD model get on the UCR Anomaly Archive dataset | 0.006 |
MSRC-21 (per-class) | G-Tuning | Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns | 2023-12-21 | https://arxiv.org/abs/2312.13583v1 | ["https://github.com/zjunet/G-Tuning"] | In the paper 'Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns', what Accuracy (10 fold) score did the G-Tuning model get on the MSRC-21 (per-class) dataset | 11.01 |
MATH | Qwen2-72B-Instruct-Step-DPO (0-shot CoT, w/o code) | Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs | 2024-06-26 | https://arxiv.org/abs/2406.18629v1 | ["https://github.com/dvlab-research/step-dpo"] | In the paper 'Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs', what Accuracy score did the Qwen2-72B-Instruct-Step-DPO (0-shot CoT, w/o code) model get on the MATH dataset | 70.8 |
COCO-Stuff-27 | EAGLE (DINO, ViT-S/8) | EAGLE: Eigen Aggregation Learning for Object-Centric Unsupervised Semantic Segmentation | 2024-03-03 | https://arxiv.org/abs/2403.01482v4 | ["https://github.com/MICV-yonsei/EAGLE"] | In the paper 'EAGLE: Eigen Aggregation Learning for Object-Centric Unsupervised Semantic Segmentation', what Accuracy score did the EAGLE (DINO, ViT-S/8) model get on the COCO-Stuff-27 dataset | 64.2 |
Marmoset-8K | BUCTD-preNet-W48 (CID-W32) | Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity | 2023-06-13 | https://arxiv.org/abs/2306.07879v2 | ["https://github.com/amathislab/BUCTD"] | In the paper 'Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity', what mAP score did the BUCTD-preNet-W48 (CID-W32) model get on the Marmoset-8K dataset | 93.3 |
Amazon Computers | GCN | Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification | 2024-06-13 | https://arxiv.org/abs/2406.08993v2 | ["https://github.com/LUOyk1999/tunedGNN"] | In the paper 'Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification', what Accuracy score did the GCN model get on the Amazon Computers dataset | 93.99±0.12 |
Oxford-IIIT Pet Dataset | PromptKD | PromptKD: Unsupervised Prompt Distillation for Vision-Language Models | 2024-03-05 | https://arxiv.org/abs/2403.02781v5 | ["https://github.com/zhengli97/promptkd"] | In the paper 'PromptKD: Unsupervised Prompt Distillation for Vision-Language Models', what Harmonic mean score did the PromptKD model get on the Oxford-IIIT Pet Dataset dataset | 97.15 |
MM-Vet | InternVL2.5-78B | Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling | 2024-12-06 | https://arxiv.org/abs/2412.05271v1 | ["https://github.com/opengvlab/internvl"] | In the paper 'Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling', what GPT-4 score score did the InternVL2.5-78B model get on the MM-Vet dataset | 72.3 |
Clothing1M (using clean data) | EMLC (k=1) | Enhanced Meta Label Correction for Coping with Label Corruption | 2023-05-22 | https://arxiv.org/abs/2305.12961v2 | ["https://github.com/MitchellKT/Enhanced-Meta-Label-Correction"] | In the paper 'Enhanced Meta Label Correction for Coping with Label Corruption', what Accuracy score did the EMLC (k=1) model get on the Clothing1M (using clean data) dataset | 79.35% |
RefCoCo val | ETRIS | Bridging Vision and Language Encoders: Parameter-Efficient Tuning for Referring Image Segmentation | 2023-07-21 | https://arxiv.org/abs/2307.11545v1 | ["https://github.com/kkakkkka/etris"] | In the paper 'Bridging Vision and Language Encoders: Parameter-Efficient Tuning for Referring Image Segmentation', what Overall IoU score did the ETRIS model get on the RefCoCo val dataset | 71.06 |
Unpaired-abdomen-CT | Chatgpt+SVD | Spatially Covariant Image Registration with Text Prompts | 2023-11-27 | https://arxiv.org/abs/2311.15607v2 | ["https://github.com/tinymilky/NeRD"] | In the paper 'Spatially Covariant Image Registration with Text Prompts', what DSC score did the Chatgpt+SVD model get on the Unpaired-abdomen-CT dataset | 0.5763 |
MuPoTS-3D | Multi-HMR | Multi-HMR: Multi-Person Whole-Body Human Mesh Recovery in a Single Shot | 2024-02-22 | https://arxiv.org/abs/2402.14654v2 | ["https://github.com/naver/multi-hmr"] | In the paper 'Multi-HMR: Multi-Person Whole-Body Human Mesh Recovery in a Single Shot', what 3DPCK score did the Multi-HMR model get on the MuPoTS-3D dataset | 89.5 |
Penn Treebank | Hashing + Bert | To be Continuous, or to be Discrete, Those are Bits of Questions | 2024-06-12 | https://arxiv.org/abs/2406.07812v1 | ["https://github.com/speedcell4/parserker"] | In the paper 'To be Continuous, or to be Discrete, Those are Bits of Questions', what F1 score score did the Hashing + Bert model get on the Penn Treebank dataset | 96.03 |
OK-VQA | RA-VQA-v2 (T5-large) | Fine-grained Late-interaction Multi-modal Retrieval for Retrieval Augmented Visual Question Answering | 2023-09-29 | https://arxiv.org/abs/2309.17133v2 | ["https://github.com/linweizhedragon/retrieval-augmented-visual-question-answering"] | In the paper 'Fine-grained Late-interaction Multi-modal Retrieval for Retrieval Augmented Visual Question Answering', what Accuracy score did the RA-VQA-v2 (T5-large) model get on the OK-VQA dataset | 54.85 |
ISTD+ | ShadowMaskFormer (arXiv 2024) (512x512) | ShadowMaskFormer: Mask Augmented Patch Embeddings for Shadow Removal | 2024-04-29 | https://arxiv.org/abs/2404.18433v2 | ["https://github.com/lizhh268/shadowmaskformer"] | In the paper 'ShadowMaskFormer: Mask Augmented Patch Embeddings for Shadow Removal', what RMSE score did the ShadowMaskFormer (arXiv 2024) (512x512) model get on the ISTD+ dataset | 2.95 |
Aria Synthetic Environments | Cube R-CNN | EFM3D: A Benchmark for Measuring Progress Towards 3D Egocentric Foundation Models | 2024-06-14 | https://arxiv.org/abs/2406.10224v1 | ["https://github.com/facebookresearch/efm3d"] | In the paper 'EFM3D: A Benchmark for Measuring Progress Towards 3D Egocentric Foundation Models', what MAP score did the Cube R-CNN model get on the Aria Synthetic Environments dataset | 36 |
BoolQ | PaLM 2-M (1-shot) | PaLM 2 Technical Report | 2023-05-17 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-M (1-shot) model get on the BoolQ dataset | 88.6 |
FindVehicle | UniNER-7B | UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition | 2023-08-07 | https://arxiv.org/abs/2308.03279v2 | ["https://github.com/emma1066/retrieval-augmented-it-openner"] | In the paper 'UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition', what F1 score did the UniNER-7B model get on the FindVehicle dataset | 98.3 |
Pascal Panoptic Parts | HIPIE (ResNet-50) | Hierarchical Open-vocabulary Universal Image Segmentation | 2023-07-03 | https://arxiv.org/abs/2307.00764v2 | ["https://github.com/berkeley-hipie/hipie"] | In the paper 'Hierarchical Open-vocabulary Universal Image Segmentation', what mIoUPartS score did the HIPIE (ResNet-50) model get on the Pascal Panoptic Parts dataset | 57.2 |
ETTh1 (720) Multivariate | DiPE-Linear | Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting | 2024-11-26 | https://arxiv.org/abs/2411.17257v1 | ["https://github.com/wintertee/dipe-linear"] | In the paper 'Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting', what MSE score did the DiPE-Linear model get on the ETTh1 (720) Multivariate dataset | 0.409 |
UCI MINIBOONE | PaddingFlow | PaddingFlow: Improving Normalizing Flows with Padding-Dimensional Noise | 2024-03-13 | https://arxiv.org/abs/2403.08216v2 | ["https://github.com/adamqlmeng/paddingflow"] | In the paper 'PaddingFlow: Improving Normalizing Flows with Padding-Dimensional Noise', what CD score did the PaddingFlow model get on the UCI MINIBOONE dataset | 24.5 |
ARMBench | Mask2Former | Robot Instance Segmentation with Few Annotations for Grasping | 2024-07-01 | https://arxiv.org/abs/2407.01302v1 | ["https://github.com/mkimhi/RISE"] | In the paper 'Robot Instance Segmentation with Few Annotations for Grasping', what AP50 score did the Mask2Former model get on the ARMBench dataset | 81.2 |
MassSpecGym | SMILES Transformer | MassSpecGym: A benchmark for the discovery and identification of molecules | 2024-10-30 | https://arxiv.org/abs/2410.23326v1 | ["https://github.com/pluskal-lab/massspecgym"] | In the paper 'MassSpecGym: A benchmark for the discovery and identification of molecules', what Top-1 MCES score did the SMILES Transformer model get on the MassSpecGym dataset | 53.80 |
Office-Home | GMDG (ResNet-50, SWAD) | Rethinking Multi-domain Generalization with A General Learning Objective | 2024-02-29 | https://arxiv.org/abs/2402.18853v1 | ["https://github.com/zhaorui-tan/GMDG_cvpr2024"] | In the paper 'Rethinking Multi-domain Generalization with A General Learning Objective', what Average Accuracy score did the GMDG (ResNet-50, SWAD) model get on the Office-Home dataset | 72.5 |
SALICON | SalNAS-XL + Self-KD | SalNAS: Efficient Saliency-prediction Neural Architecture Search with self-knowledge distillation | 2024-07-29 | https://arxiv.org/abs/2407.20062v1 | ["https://github.com/chakkritte/SalNAS"] | In the paper 'SalNAS: Efficient Saliency-prediction Neural Architecture Search with self-knowledge distillation', what CC score did the SalNAS-XL + Self-KD model get on the SALICON dataset | 0.909 |
shape bias | Parti | Intriguing properties of generative classifiers | 2023-09-28 | https://arxiv.org/abs/2309.16779v2 | ["https://github.com/SamsungSAILMontreal/ForestDiffusion"] | In the paper 'Intriguing properties of generative classifiers', what shape bias score did the Parti model get on the shape bias dataset | 91.7 |
Aria Digital Twin Dataset | EVL | EFM3D: A Benchmark for Measuring Progress Towards 3D Egocentric Foundation Models | 2024-06-14 | https://arxiv.org/abs/2406.10224v1 | ["https://github.com/facebookresearch/efm3d"] | In the paper 'EFM3D: A Benchmark for Measuring Progress Towards 3D Egocentric Foundation Models', what Accuracy score did the EVL model get on the Aria Digital Twin Dataset dataset | 18.2 |
Persian Font Recognition (PFR) | presis | Persis: A Persian Font Recognition Pipeline Using Convolutional Neural Networks | 2023-10-08 | https://arxiv.org/abs/2310.05255v2 | ["https://github.com/mehrdad-dev/persis"] | In the paper 'Persis: A Persian Font Recognition Pipeline Using Convolutional Neural Networks', what Top 5 Accuracy score did the presis model get on the Persian Font Recognition (PFR) dataset | 96.7 |
Baidu Mall | SegVLAD-PreT (M) | Revisit Anything: Visual Place Recognition via Image Segment Retrieval | 2024-09-26 | https://arxiv.org/abs/2409.18049v1 | ["https://github.com/anyloc/revisit-anything"] | In the paper 'Revisit Anything: Visual Place Recognition via Image Segment Retrieval', what Recall@1 score did the SegVLAD-PreT (M) model get on the Baidu Mall dataset | 80.4 |
Action-Camera Parking | MobileNetV2 | Revising deep learning methods in parking lot occupancy detection | 2023-06-07 | https://arxiv.org/abs/2306.04288v3 | ["https://github.com/eighonet/parking-research"] | In the paper 'Revising deep learning methods in parking lot occupancy detection', what F1-score score did the MobileNetV2 model get on the Action-Camera Parking dataset | 0.9343 |
ScanNet200 | OA-CNNs | OA-CNNs: Omni-Adaptive Sparse CNNs for 3D Semantic Segmentation | 2024-03-21 | https://arxiv.org/abs/2403.14418v1 | ["https://github.com/Pointcept/Pointcept"] | In the paper 'OA-CNNs: Omni-Adaptive Sparse CNNs for 3D Semantic Segmentation', what val mIoU score did the OA-CNNs model get on the ScanNet200 dataset | 33.3 |
SYSU-CD | C2FNet | C2F-SemiCD: A Coarse-to-Fine Semi-Supervised Change Detection Method Based on Consistency Regularization in High-Resolution Remote Sensing Images | 2024-04-22 | https://arxiv.org/abs/2404.13838v1 | ["https://github.com/chengxihan/c2f-semicd-and-c2f-cdnet"] | In the paper 'C2F-SemiCD: A Coarse-to-Fine Semi-Supervised Change Detection Method Based on Consistency Regularization in High-Resolution Remote Sensing Images', what F1 score did the C2FNet model get on the SYSU-CD dataset | 77.97 |
VTAB-1k(Specialized<4>) | GateVPT(ViT-B/16_MoCo_v3_pretrained_ImageNet-1K) | Improving Visual Prompt Tuning for Self-supervised Vision Transformers | 2023-06-08 | https://arxiv.org/abs/2306.05067v1 | ["https://github.com/ryongithub/gatedprompttuning"] | In the paper 'Improving Visual Prompt Tuning for Self-supervised Vision Transformers', what Mean Accuracy score did the GateVPT(ViT-B/16_MoCo_v3_pretrained_ImageNet-1K) model get on the VTAB-1k(Specialized<4>) dataset | 83.38 |
17 Places | CLIP | AnyLoc: Towards Universal Visual Place Recognition | 2023-08-01 | https://arxiv.org/abs/2308.00688v2 | ["https://github.com/AnyLoc/AnyLoc"] | In the paper 'AnyLoc: Towards Universal Visual Place Recognition', what Recall@1 score did the CLIP model get on the 17 Places dataset | 59.36 |
Math23K | GPT-4 (Teaching-Inspired) | Teaching-Inspired Integrated Prompting Framework: A Novel Approach for Enhancing Reasoning in Large Language Models | 2024-10-10 | https://arxiv.org/abs/2410.08068v1 | ["https://github.com/sallytan13/teaching-inspired-prompting"] | In the paper 'Teaching-Inspired Integrated Prompting Framework: A Novel Approach for Enhancing Reasoning in Large Language Models', what Accuracy (5-fold) score did the GPT-4 (Teaching-Inspired) model get on the Math23K dataset | 94.3 |
WinoGrande | PaLM 2-S (1-shot) | PaLM 2 Technical Report | 2023-05-17 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-S (1-shot) model get on the WinoGrande dataset | 77.9 |
KoNViD-1k | ReLaX-VQA | ReLaX-VQA: Residual Fragment and Layer Stack Extraction for Enhancing Video Quality Assessment | 2024-07-16 | https://arxiv.org/abs/2407.11496v1 | ["https://github.com/xinyiw915/relax-vqa"] | In the paper 'ReLaX-VQA: Residual Fragment and Layer Stack Extraction for Enhancing Video Quality Assessment', what PLCC score did the ReLaX-VQA model get on the KoNViD-1k dataset | 0.8473 |
DAVIS 2017 (val) | SgMg | Spectrum-guided Multi-granularity Referring Video Object Segmentation | 2023-07-25 | https://arxiv.org/abs/2307.13537v1 | ["https://github.com/bo-miao/sgmg"] | In the paper 'Spectrum-guided Multi-granularity Referring Video Object Segmentation', what J&F 1st frame score did the SgMg model get on the DAVIS 2017 (val) dataset | 63.3 |
Biwi 3D Audiovisual Corpus of Affective Communication - B3D(AC)^2 | SelfTalk | SelfTalk: A Self-Supervised Commutative Training Diagram to Comprehend 3D Talking Faces | 2023-06-19 | https://arxiv.org/abs/2306.10799v2 | ["https://github.com/psyai-net/SelfTalk_release"] | In the paper 'SelfTalk: A Self-Supervised Commutative Training Diagram to Comprehend 3D Talking Faces', what Lip Vertex Error score did the SelfTalk model get on the Biwi 3D Audiovisual Corpus of Affective Communication - B3D(AC)^2 dataset | 4.2485 |
Molweni | Structured | Structured Dialogue Discourse Parsing | 2023-06-26 | https://arxiv.org/abs/2306.15103v1 | ["https://github.com/chijames/structured_dialogue_discourse_parsing"] | In the paper 'Structured Dialogue Discourse Parsing', what Link F1 score did the Structured model get on the Molweni dataset | 83.5 |
BDD100K val | TwinLiteNetPlus-Large | TwinLiteNet: An Efficient and Lightweight Model for Driveable Area and Lane Segmentation in Self-Driving Cars | 2023-07-20 | https://arxiv.org/abs/2307.10705v5 | ["https://github.com/chequanghuy/TwinLiteNet"] | In the paper 'TwinLiteNet: An Efficient and Lightweight Model for Driveable Area and Lane Segmentation in Self-Driving Cars', what mIoU score did the TwinLiteNetPlus-Large model get on the BDD100K val dataset | 92.9 |
VoxCeleb2 | RTFS-Net-12 | RTFS-Net: Recurrent Time-Frequency Modelling for Efficient Audio-Visual Speech Separation | 2023-09-29 | https://arxiv.org/abs/2309.17189v4 | ["https://github.com/spkgyk/RTFS-Net"] | In the paper 'RTFS-Net: Recurrent Time-Frequency Modelling for Efficient Audio-Visual Speech Separation', what SI-SNRi score did the RTFS-Net-12 model get on the VoxCeleb2 dataset | 12.4 |
CIFAR-10 | MuLAN | Diffusion Models With Learned Adaptive Noise | 2023-12-20 | https://arxiv.org/abs/2312.13236v3 | ["https://github.com/s-sahoo/mulan"] | In the paper 'Diffusion Models With Learned Adaptive Noise', what NLL (bits/dim) score did the MuLAN model get on the CIFAR-10 dataset | 2.55 |
SQuAD | Prompt2Model (T5-base) | Prompt2Model: Generating Deployable Models from Natural Language Instructions | 2023-08-23 | https://arxiv.org/abs/2308.12261v1 | ["https://github.com/neulab/prompt2model"] | In the paper 'Prompt2Model: Generating Deployable Models from Natural Language Instructions', what Exact Match score did the Prompt2Model (T5-base) model get on the SQuAD dataset | 74.4 |
YouCook2 | CM² | Do You Remember? Dense Video Captioning with Cross-Modal Memory Retrieval | 2024-04-11 | https://arxiv.org/abs/2404.07610v1 | ["https://github.com/ailab-kyunghee/cm2_dvc"] | In the paper 'Do You Remember? Dense Video Captioning with Cross-Modal Memory Retrieval', what METEOR score did the CM² model get on the YouCook2 dataset | 6.08 |
COCO 2017 | DeBiFormer-B (IN1k pretrain, Retina) | DeBiFormer: Vision Transformer with Deformable Agent Bi-level Routing Attention | 2024-10-11 | https://arxiv.org/abs/2410.08582v1 | ["https://github.com/maclong01/DeBiFormer"] | In the paper 'DeBiFormer: Vision Transformer with Deformable Agent Bi-level Routing Attention', what mAP score did the DeBiFormer-B (IN1k pretrain, Retina) model get on the COCO 2017 dataset | 47.1 |
CUTE80 | NRTR+TPS++ | TPS++: Attention-Enhanced Thin-Plate Spline for Scene Text Recognition | 2023-05-09 | https://arxiv.org/abs/2305.05322v1 | ["https://github.com/simplify23/tps_pp"] | In the paper 'TPS++: Attention-Enhanced Thin-Plate Spline for Scene Text Recognition', what Accuracy score did the NRTR+TPS++ model get on the CUTE80 dataset | 92.4 |
AFAD | ResNet-50-DLDL-v2 | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10 | https://arxiv.org/abs/2307.04570v3 | ["https://github.com/paplhjak/facial-age-estimation-benchmark"] | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-DLDL-v2 model get on the AFAD dataset | 3.15 |
Actor | MGNN + Hetero-S (4 layers) | The Heterophilic Snowflake Hypothesis: Training and Empowering GNNs for Heterophilic Graphs | 2024-06-18 | https://arxiv.org/abs/2406.12539v1 | ["https://github.com/bingreeky/heterosnoh"] | In the paper 'The Heterophilic Snowflake Hypothesis: Training and Empowering GNNs for Heterophilic Graphs', what Accuracy score did the MGNN + Hetero-S (4 layers) model get on the Actor dataset | 35.99 |
ETTh1 (96) Multivariate | TTM | Tiny Time Mixers (TTMs): Fast Pre-trained Models for Enhanced Zero/Few-Shot Forecasting of Multivariate Time Series | 2024-01-08 | https://arxiv.org/abs/2401.03955v8 | ["https://github.com/ibm-granite/granite-tsfm"] | In the paper 'Tiny Time Mixers (TTMs): Fast Pre-trained Models for Enhanced Zero/Few-Shot Forecasting of Multivariate Time Series', what MSE score did the TTM model get on the ETTh1 (96) Multivariate dataset |
| 0.36 |
ETTh2 (720) Multivariate | MoLE-RLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06786v3 | [
"https://github.com/rogerni/mole"
] | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-RLinear model get on the ETTh2 (720) Multivariate dataset
| 0.409 |
GOT-10k | LoRAT-g-378 | Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance | 2024-03-08T00:00:00 | https://arxiv.org/abs/2403.05231v2 | [
"https://github.com/litinglin/lorat"
] | In the paper 'Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance', what Average Overlap score did the LoRAT-g-378 model get on the GOT-10k dataset
| 78.9 |
PASCAL VOC 2007 | YOLOv7+Inner-IoU | Inner-IoU: More Effective Intersection over Union Loss with Auxiliary Bounding Box | 2023-11-06T00:00:00 | https://arxiv.org/abs/2311.02877v4 | [
"https://github.com/malagoutou/Inner-IoU"
] | In the paper 'Inner-IoU: More Effective Intersection over Union Loss with Auxiliary Bounding Box', what mAP@50 score did the YOLOv7+Inner-IoU model get on the PASCAL VOC 2007 dataset
| 64.44 |
TACRED | RAG4RE | Retrieval-Augmented Generation-based Relation Extraction | 2024-04-20T00:00:00 | https://arxiv.org/abs/2404.13397v1 | [
"https://github.com/sefeoglu/rag4re"
] | In the paper 'Retrieval-Augmented Generation-based Relation Extraction', what F1 score did the RAG4RE model get on the TACRED dataset
| 86.6 |
WebApp1k-Duo-React | o1-preview | A Case Study of Web App Coding with OpenAI Reasoning Models | 2024-09-19T00:00:00 | https://arxiv.org/abs/2409.13773v1 | [
"https://github.com/onekq/webapp1k"
] | In the paper 'A Case Study of Web App Coding with OpenAI Reasoning Models', what pass@1 score did the o1-preview model get on the WebApp1k-Duo-React dataset
| 0.652 |
AISHELL-2 | Paraformer-large | FunASR: A Fundamental End-to-End Speech Recognition Toolkit | 2023-05-18T00:00:00 | https://arxiv.org/abs/2305.11013v1 | [
"https://github.com/alibaba-damo-academy/FunASR"
] | In the paper 'FunASR: A Fundamental End-to-End Speech Recognition Toolkit', what Word Error Rate (WER) score did the Paraformer-large model get on the AISHELL-2 dataset
| 2.85 |
DSIFN-CD | C2FNet | C2F-SemiCD: A Coarse-to-Fine Semi-Supervised Change Detection Method Based on Consistency Regularization in High-Resolution Remote Sensing Images | 2024-04-22T00:00:00 | https://arxiv.org/abs/2404.13838v1 | [
"https://github.com/chengxihan/c2f-semicd-and-c2f-cdnet"
] | In the paper 'C2F-SemiCD: A Coarse-to-Fine Semi-Supervised Change Detection Method Based on Consistency Regularization in High-Resolution Remote Sensing Images', what F1 score did the C2FNet model get on the DSIFN-CD dataset
| 64.03 |
NYU Depth v2 | GeminiFusion (Swin-Large) | GeminiFusion: Efficient Pixel-wise Multimodal Fusion for Vision Transformer | 2024-06-03T00:00:00 | https://arxiv.org/abs/2406.01210v2 | [
"https://github.com/jiadingcn/geminifusion"
] | In the paper 'GeminiFusion: Efficient Pixel-wise Multimodal Fusion for Vision Transformer', what Mean IoU score did the GeminiFusion (Swin-Large) model get on the NYU Depth v2 dataset
| 60.9 |
GSM8K | WizardMath-7B-V1.1 | WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09583v1 | [
"https://github.com/nlpxucan/wizardlm"
] | In the paper 'WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct', what Accuracy score did the WizardMath-7B-V1.1 model get on the GSM8K dataset
| 83.2 |
BDD100K val | A-YOLOM(s) | You Only Look at Once for Real-time and Generic Multi-Task | 2023-10-02T00:00:00 | https://arxiv.org/abs/2310.01641v4 | [
"https://github.com/jiayuanwang-jw/yolov8-multi-task"
] | In the paper 'You Only Look at Once for Real-time and Generic Multi-Task', what Accuracy score did the A-YOLOM(s) model get on the BDD100K val dataset
| 84.9 |
SMAC 6h_vs_9z | VDN | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04T00:00:00 | https://arxiv.org/abs/2306.02430v1 | [
"https://github.com/j3soon/dfac-extended"
] | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Average Score score did the VDN model get on the SMAC 6h_vs_9z dataset
| 13.57 |
USNA-Cn2 (short-duration) | RNN | Effective Benchmarks for Optical Turbulence Modeling | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03573v1 | [
"https://github.com/cdjellen/otbench"
] | In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the RNN model get on the USNA-Cn2 (short-duration) dataset
| 0.187 |
spider | DAIL-SQL + GPT-4 + Self-Consistency | Text-to-SQL Empowered by Large Language Models: A Benchmark Evaluation | 2023-08-29T00:00:00 | https://arxiv.org/abs/2308.15363v4 | [
"https://github.com/beachwang/dail-sql"
] | In the paper 'Text-to-SQL Empowered by Large Language Models: A Benchmark Evaluation', what Exact Match Accuracy (Dev) score did the DAIL-SQL + GPT-4 + Self-Consistency model get on the spider dataset
| 74.4 |
TVC | COSA | COSA: Concatenated Sample Pretrained Vision-Language Foundation Model | 2023-06-15T00:00:00 | https://arxiv.org/abs/2306.09085v1 | [
"https://github.com/txh-mercury/cosa"
] | In the paper 'COSA: Concatenated Sample Pretrained Vision-Language Foundation Model', what BLEU-4 score did the COSA model get on the TVC dataset
| 18.8 |
UrduDoc | EAST | UTRNet: High-Resolution Urdu Text Recognition In Printed Documents | 2023-06-27T00:00:00 | https://arxiv.org/abs/2306.15782v3 | [
"https://github.com/abdur75648/UTRNet-High-Resolution-Urdu-Text-Recognition"
] | In the paper 'UTRNet: High-Resolution Urdu Text Recognition In Printed Documents', what Precision score did the EAST model get on the UrduDoc dataset
| 70.43 |
WLASL-2000 | HWGAT | Hierarchical Windowed Graph Attention Network and a Large Scale Dataset for Isolated Indian Sign Language Recognition | 2024-07-19T00:00:00 | https://arxiv.org/abs/2407.14224v2 | [
"https://github.com/suvajit-patra/sl-hwgat"
] | In the paper 'Hierarchical Windowed Graph Attention Network and a Large Scale Dataset for Isolated Indian Sign Language Recognition', what Top-1 Accuracy score did the HWGAT model get on the WLASL-2000 dataset
| 48.49 |