| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
USNA-Cn2 (long-term) | Climatology | Effective Benchmarks for Optical Turbulence Modeling | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03573v1 | [
"https://github.com/cdjellen/otbench"
] | In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the Climatology model get on the USNA-Cn2 (long-term) dataset
| 0.632 |
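The row above illustrates the table's schema; as a minimal sketch (the `BenchmarkQARecord` class name and field types are assumptions, inferred from the header, not part of the dataset itself), one record can be modeled like this:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class BenchmarkQARecord:
    """One row of the table: a paper, its code link(s), a score question, and the answer."""
    dataset: str
    model_name: str
    paper_title: str
    paper_date: datetime          # header declares timestamp[ns]; datetime is a simplification
    paper_url: str
    code_links: list[str]         # listlengths 1 1 in the header: always exactly one link
    prompt: str
    answer: str                   # kept as a string: answers mix plain numbers, "%", "±", and "/"

# First record of the table, reproduced verbatim.
row = BenchmarkQARecord(
    dataset="USNA-Cn2 (long-term)",
    model_name="Climatology",
    paper_title="Effective Benchmarks for Optical Turbulence Modeling",
    paper_date=datetime.fromisoformat("2024-01-07T00:00:00"),
    paper_url="https://arxiv.org/abs/2401.03573v1",
    code_links=["https://github.com/cdjellen/otbench"],
    prompt=("In the paper 'Effective Benchmarks for Optical Turbulence Modeling', "
            "what RMSE score did the Climatology model get on the "
            "USNA-Cn2 (long-term) dataset"),
    answer="0.632",
)

print(row.answer)  # -> 0.632
```

Keeping `answer` as a raw string is deliberate: later rows contain values such as "88.0%", "74.51 ± 2.14", and "/", which do not parse as floats.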
Dense-Haze | CasDyF-Net | CasDyF-Net: Image Dehazing via Cascaded Dynamic Filters | 2024-09-13T00:00:00 | https://arxiv.org/abs/2409.08510v1 | [
"https://github.com/dauing/casdyf-net"
] | In the paper 'CasDyF-Net: Image Dehazing via Cascaded Dynamic Filters', what SSIM score did the CasDyF-Net model get on the Dense-Haze dataset
| 0.658 |
RWTH-PHOENIX-Weather 2014 T | MSKA-SLT | Multi-Stream Keypoint Attention Network for Sign Language Recognition and Translation | 2024-05-09T00:00:00 | https://arxiv.org/abs/2405.05672v1 | [
"https://github.com/sutwangyan/MSKA"
] | In the paper 'Multi-Stream Keypoint Attention Network for Sign Language Recognition and Translation', what BLEU-4 score did the MSKA-SLT model get on the RWTH-PHOENIX-Weather 2014 T dataset
| 29.03 |
CIFAR-100 (250 Labels, ImageNet-100 Unlabeled) | UnMixMatch | Scaling Up Semi-supervised Learning with Unconstrained Unlabelled Data | 2023-06-02T00:00:00 | https://arxiv.org/abs/2306.01222v2 | [
"https://github.com/shuvenduroy/unmixmatch"
] | In the paper 'Scaling Up Semi-supervised Learning with Unconstrained Unlabelled Data', what Accuracy score did the UnMixMatch model get on the CIFAR-100 (250 Labels, ImageNet-100 Unlabeled) dataset
| 54.18 |
ColonINST-v1 (Seen) | LLaVA-Med-v1.0 (w/o LoRA, w/ extra data) | LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day | 2023-06-01T00:00:00 | https://arxiv.org/abs/2306.00890v1 | [
"https://github.com/microsoft/LLaVA-Med"
] | In the paper 'LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day', what Accuracy score did the LLaVA-Med-v1.0 (w/o LoRA, w/ extra data) model get on the ColonINST-v1 (Seen) dataset
| 93.84 |
SMAC corridor_2z_vs_24zg | VDN | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04T00:00:00 | https://arxiv.org/abs/2306.02430v1 | [
"https://github.com/j3soon/dfac-extended"
] | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the VDN model get on the SMAC corridor_2z_vs_24zg dataset
| 0.00 |
NCI109 | CIN++ | CIN++: Enhancing Topological Message Passing | 2023-06-06T00:00:00 | https://arxiv.org/abs/2306.03561v1 | [
"https://github.com/twitter-research/cwn"
] | In the paper 'CIN++: Enhancing Topological Message Passing', what Accuracy score did the CIN++ model get on the NCI109 dataset
| 84.5 |
VATEX Adverbs | ReGaDa | Video-adverb retrieval with compositional adverb-action embeddings | 2023-09-26T00:00:00 | https://arxiv.org/abs/2309.15086v1 | [
"https://github.com/ExplainableML/ReGaDa"
] | In the paper 'Video-adverb retrieval with compositional adverb-action embeddings', what mAP W score did the ReGaDa model get on the VATEX Adverbs dataset
| 0.29 |
PubLayNet val | VGT | Vision Grid Transformer for Document Layout Analysis | 2023-08-29T00:00:00 | https://arxiv.org/abs/2308.14978v1 | [
"https://github.com/alibabaresearch/advancedliteratemachinery"
] | In the paper 'Vision Grid Transformer for Document Layout Analysis', what Text score did the VGT model get on the PubLayNet val dataset
| 0.950 |
MATH | MathCoder-L-34B | MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning | 2023-10-05T00:00:00 | https://arxiv.org/abs/2310.03731v1 | [
"https://github.com/mathllm/mathcoder"
] | In the paper 'MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning', what Accuracy score did the MathCoder-L-34B model get on the MATH dataset
| 45.1 |
iNaturalist | EfficientDML-VPTSP-G/512 | Learning Semantic Proxies from Visual Prompts for Parameter-Efficient Fine-Tuning in Deep Metric Learning | 2024-02-04T00:00:00 | https://arxiv.org/abs/2402.02340v2 | [
"https://github.com/noahsark/parameterefficient-dml"
] | In the paper 'Learning Semantic Proxies from Visual Prompts for Parameter-Efficient Fine-Tuning in Deep Metric Learning', what R@1 score did the EfficientDML-VPTSP-G/512 model get on the iNaturalist dataset
| 84.5 |
MSR-VTT-1kA | UCoFiA | Unified Coarse-to-Fine Alignment for Video-Text Retrieval | 2023-09-18T00:00:00 | https://arxiv.org/abs/2309.10091v1 | [
"https://github.com/ziyang412/ucofia"
] | In the paper 'Unified Coarse-to-Fine Alignment for Video-Text Retrieval', what text-to-video R@1 score did the UCoFiA model get on the MSR-VTT-1kA dataset
| 49.4 |
ImageNet-1k vs Curated OODs (avg.) | ODIN+UMAP (ResNet-50) | Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability | 2023-06-06T00:00:00 | https://arxiv.org/abs/2306.03715v1 | [
"https://github.com/tmlr-group/unleashing-mask"
] | In the paper 'Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability', what AUROC score did the ODIN+UMAP (ResNet-50) model get on the ImageNet-1k vs Curated OODs (avg.) dataset
| 89.24 |
OpenBookQA | LLaMA-3 8B+MoSLoRA | Mixture-of-Subspaces in Low-Rank Adaptation | 2024-06-16T00:00:00 | https://arxiv.org/abs/2406.11909v3 | [
"https://github.com/wutaiqiang/moslora"
] | In the paper 'Mixture-of-Subspaces in Low-Rank Adaptation', what Accuracy score did the LLaMA-3 8B+MoSLoRA model get on the OpenBookQA dataset
| 86.8 |
FP-R-H | GeoTransformer | GeoTransformer: Fast and Robust Point Cloud Registration with Geometric Transformer | 2023-07-25T00:00:00 | https://arxiv.org/abs/2308.03768v1 | [
"https://github.com/qinzheng93/geotransformer"
] | In the paper 'GeoTransformer: Fast and Robust Point Cloud Registration with Geometric Transformer', what Recall (3cm, 10 degrees) score did the GeoTransformer model get on the FP-R-H dataset
| 47.75 |
MPDD | GLASS | A Unified Anomaly Synthesis Strategy with Gradient Ascent for Industrial Anomaly Detection and Localization | 2024-07-12T00:00:00 | https://arxiv.org/abs/2407.09359v1 | [
"https://github.com/cqylunlun/glass"
] | In the paper 'A Unified Anomaly Synthesis Strategy with Gradient Ascent for Industrial Anomaly Detection and Localization', what Detection AUROC score did the GLASS model get on the MPDD dataset
| 99.6 |
DART | self-mem + new data | Self-training from Self-memory in Data-to-text Generation | 2024-01-19T00:00:00 | https://arxiv.org/abs/2401.10567v1 | [
"https://github.com/hoangthangta/stsm"
] | In the paper 'Self-training from Self-memory in Data-to-text Generation', what BLEU score did the self-mem + new data model get on the DART dataset
| 47.76 |
SMAC 26m_vs_30m | DPLEX | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04T00:00:00 | https://arxiv.org/abs/2306.02430v1 | [
"https://github.com/j3soon/dfac-extended"
] | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the DPLEX model get on the SMAC 26m_vs_30m dataset
| 59.38 |
Bongard-OpenWorld | Human | Bongard-OpenWorld: Few-Shot Reasoning for Free-form Visual Concepts in the Real World | 2023-10-16T00:00:00 | https://arxiv.org/abs/2310.10207v5 | [
"https://github.com/joyjayng/Bongard-OpenWorld"
] | In the paper 'Bongard-OpenWorld: Few-Shot Reasoning for Free-form Visual Concepts in the Real World', what 2-Class Accuracy score did the Human model get on the Bongard-OpenWorld dataset
| 91.0 |
YouTube Highlights | LLMEPET | Prior Knowledge Integration via LLM Encoding and Pseudo Event Regulation for Video Moment Retrieval | 2024-07-21T00:00:00 | https://arxiv.org/abs/2407.15051v3 | [
"https://github.com/fletcherjiang/llmepet"
] | In the paper 'Prior Knowledge Integration via LLM Encoding and Pseudo Event Regulation for Video Moment Retrieval', what mAP score did the LLMEPET model get on the YouTube Highlights dataset
| 75.3 |
CUB-200-2011 | ZLaP* | Label Propagation for Zero-shot Classification with Vision-Language Models | 2024-04-05T00:00:00 | https://arxiv.org/abs/2404.04072v1 | [
"https://github.com/vladan-stojnic/zlap"
] | In the paper 'Label Propagation for Zero-shot Classification with Vision-Language Models', what Accuracy score did the ZLaP* model get on the CUB-200-2011 dataset
| 64.2 |
ImageNet - 10% labeled data | SynCo (ResNet-50) 800ep | SynCo: Synthetic Hard Negatives in Contrastive Learning for Better Unsupervised Visual Representations | 2024-10-03T00:00:00 | https://arxiv.org/abs/2410.02401v5 | [
"https://github.com/giakoumoglou/synco"
] | In the paper 'SynCo: Synthetic Hard Negatives in Contrastive Learning for Better Unsupervised Visual Representations', what Top 5 Accuracy score did the SynCo (ResNet-50) 800ep model get on the ImageNet - 10% labeled data dataset
| 88.0% |
MassSpecGym | GNN | MassSpecGym: A benchmark for the discovery and identification of molecules | 2024-10-30T00:00:00 | https://arxiv.org/abs/2410.23326v1 | [
"https://github.com/pluskal-lab/massspecgym"
] | In the paper 'MassSpecGym: A benchmark for the discovery and identification of molecules', what Hit Rate @ 1 score did the GNN model get on the MassSpecGym dataset
| 3.63 |
Plant village | ViTaL | ViTaL: An Advanced Framework for Automated Plant Disease Identification in Leaf Images Using Vision Transformers and Linear Projection For Feature Reduction | 2024-02-27T00:00:00 | https://arxiv.org/abs/2402.17424v2 | [
"https://github.com/abby1712/ViTaL"
] | In the paper 'ViTaL: An Advanced Framework for Automated Plant Disease Identification in Leaf Images Using Vision Transformers and Linear Projection For Feature Reduction', what Hamming Loss score did the ViTaL model get on the Plant village dataset
| 0.054 |
LingOly | GPT-3.5 | LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low-Resource and Extinct Languages | 2024-06-10T00:00:00 | https://arxiv.org/abs/2406.06196v3 | [
"https://github.com/am-bean/lingOly"
] | In the paper 'LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low-Resource and Extinct Languages', what Exact Match Accuracy score did the GPT-3.5 model get on the LingOly dataset
| 21.2% |
VNHSGE Mathematics | ChatGPT | VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models | 2023-05-20T00:00:00 | https://arxiv.org/abs/2305.12199v1 | [
"https://github.com/xdao85/vnhsge"
] | In the paper 'VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models', what Accuracy score did the ChatGPT model get on the VNHSGE Mathematics dataset
| 58.8 |
VEDAI | ICAFusion | ICAFusion: Iterative Cross-Attention Guided Feature Fusion for Multispectral Object Detection | 2023-08-15T00:00:00 | https://arxiv.org/abs/2308.07504v1 | [
"https://github.com/chanchanchan97/icafusion"
] | In the paper 'ICAFusion: Iterative Cross-Attention Guided Feature Fusion for Multispectral Object Detection', what mAP50 score did the ICAFusion model get on the VEDAI dataset
| 76.62 |
Mini-Imagenet 5-way (1-shot) | SemFew-Trans | Simple Semantic-Aided Few-Shot Learning | 2023-11-30T00:00:00 | https://arxiv.org/abs/2311.18649v3 | [
"https://github.com/zhangdoudou123/semfew"
] | In the paper 'Simple Semantic-Aided Few-Shot Learning', what Accuracy score did the SemFew-Trans model get on the Mini-Imagenet 5-way (1-shot) dataset
| 78.94 |
Citeseer (48%/32%/20% fixed splits) | GESN | Addressing Heterophily in Node Classification with Graph Echo State Networks | 2023-05-14T00:00:00 | https://arxiv.org/abs/2305.08233v2 | [
"https://github.com/dtortorella/addressing-heterophily-gesn"
] | In the paper 'Addressing Heterophily in Node Classification with Graph Echo State Networks', what 1:1 Accuracy score did the GESN model get on the Citeseer (48%/32%/20% fixed splits) dataset
| 74.51 ± 2.14 |
DUT-OMRON | BiRefNet (DUTS, HRSOD, UHRSD) | Bilateral Reference for High-Resolution Dichotomous Image Segmentation | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03407v6 | [
"https://github.com/zhengpeng7/birefnet"
] | In the paper 'Bilateral Reference for High-Resolution Dichotomous Image Segmentation', what MAE score did the BiRefNet (DUTS, HRSOD, UHRSD) model get on the DUT-OMRON dataset
| 0.038 |
CIFAR-10 | RDUOT | A High-Quality Robust Diffusion Framework for Corrupted Dataset | 2023-11-28T00:00:00 | https://arxiv.org/abs/2311.17101v2 | [
"https://github.com/VinAIResearch/RDUOT"
] | In the paper 'A High-Quality Robust Diffusion Framework for Corrupted Dataset', what FID score did the RDUOT model get on the CIFAR-10 dataset
| 2.95 |
TXL-PBC: a freely accessible labeled peripheral blood cell dataset | yolov5n | TXL-PBC: a freely accessible labeled peripheral blood cell dataset | 2024-07-18T00:00:00 | https://arxiv.org/abs/2407.13214v1 | [
"https://github.com/lugan113/TXL-PBC_Dataset"
] | In the paper 'TXL-PBC: a freely accessible labeled peripheral blood cell dataset', what mAP50 score did the yolov5n model get on the TXL-PBC: a freely accessible labeled peripheral blood cell dataset dataset
| 0.958 |
CIFAR-10, 400 Labels (OpenSet, 6/4) | UnMixMatch | Scaling Up Semi-supervised Learning with Unconstrained Unlabelled Data | 2023-06-02T00:00:00 | https://arxiv.org/abs/2306.01222v2 | [
"https://github.com/shuvenduroy/unmixmatch"
] | In the paper 'Scaling Up Semi-supervised Learning with Unconstrained Unlabelled Data', what Accuracy score did the UnMixMatch model get on the CIFAR-10, 400 Labels (OpenSet, 6/4) dataset
| 97.2 |
LTCC | CAL+DLCR | DLCR: A Generative Data Expansion Framework via Diffusion for Clothes-Changing Person Re-ID | 2024-11-11T00:00:00 | https://arxiv.org/abs/2411.07205v2 | [
"https://github.com/croitorualin/dlcr"
] | In the paper 'DLCR: A Generative Data Expansion Framework via Diffusion for Clothes-Changing Person Re-ID', what Rank-1 score did the CAL+DLCR model get on the LTCC dataset
| 41.3 |
minesweeper | GAT | Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification | 2024-06-13T00:00:00 | https://arxiv.org/abs/2406.08993v2 | [
"https://github.com/LUOyk1999/tunedGNN"
] | In the paper 'Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification', what AUCROC score did the GAT model get on the minesweeper dataset
| 97.73 ± 0.73 |
ogbl-ppa | GraphGPT(SMTP) | GraphGPT: Graph Learning with Generative Pre-trained Transformers | 2023-12-31T00:00:00 | https://arxiv.org/abs/2401.00529v1 | [
"https://github.com/alibaba/graph-gpt"
] | In the paper 'GraphGPT: Graph Learning with Generative Pre-trained Transformers', what Test Hits@100 score did the GraphGPT(SMTP) model get on the ogbl-ppa dataset
| 0.6876 ± 0.0067 |
X4K1000FPS-2K | VFIMamba | VFIMamba: Video Frame Interpolation with State Space Models | 2024-07-02T00:00:00 | https://arxiv.org/abs/2407.02315v2 | [
"https://github.com/mcg-nju/vfimamba"
] | In the paper 'VFIMamba: Video Frame Interpolation with State Space Models', what PSNR score did the VFIMamba model get on the X4K1000FPS-2K dataset
| 33.33 |
MNIST | Spiking-Diffusion | Spiking-Diffusion: Vector Quantized Discrete Diffusion Model with Spiking Neural Networks | 2023-08-20T00:00:00 | https://arxiv.org/abs/2308.10187v4 | [
"https://github.com/Arktis2022/Spiking-Diffusion"
] | In the paper 'Spiking-Diffusion: Vector Quantized Discrete Diffusion Model with Spiking Neural Networks', what FID score did the Spiking-Diffusion model get on the MNIST dataset
| 27.61 |
LibriSpeech test-clean | Zipformer+CR-CTC (no external language model) | CR-CTC: Consistency regularization on CTC for improved speech recognition | 2024-10-07T00:00:00 | https://arxiv.org/abs/2410.05101v3 | [
"https://github.com/k2-fsa/icefall"
] | In the paper 'CR-CTC: Consistency regularization on CTC for improved speech recognition', what Word Error Rate (WER) score did the Zipformer+CR-CTC (no external language model) model get on the LibriSpeech test-clean dataset
| 2.02 |
RefCOCO+ val | EVF-SAM | EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model | 2024-06-28T00:00:00 | https://arxiv.org/abs/2406.20076v4 | [
"https://github.com/hustvl/evf-sam"
] | In the paper 'EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model', what Overall IoU score did the EVF-SAM model get on the RefCOCO+ val dataset
| 75.2 |
TCGA | MSI-H Transformer | From Whole-slide Image to Biomarker Prediction: A Protocol for End-to-End Deep Learning in Computational Pathology | 2023-12-18T00:00:00 | https://arxiv.org/abs/2312.10944v1 | [
"https://github.com/KatherLab/STAMP"
] | In the paper 'From Whole-slide Image to Biomarker Prediction: A Protocol for End-to-End Deep Learning in Computational Pathology', what AUROC score did the MSI-H Transformer model get on the TCGA dataset
| 0.84 |
DWIE | REXEL | REXEL: An End-to-end Model for Document-Level Relation Extraction and Entity Linking | 2024-04-19T00:00:00 | https://arxiv.org/abs/2404.12788v1 | [
"https://github.com/amazon-science/e2e-docie"
] | In the paper 'REXEL: An End-to-end Model for Document-Level Relation Extraction and Entity Linking', what F1-Hard score did the REXEL model get on the DWIE dataset
| 90.59 |
CARLA | TransFuser++ (TF++) | Hidden Biases of End-to-End Driving Models | 2023-06-13T00:00:00 | https://arxiv.org/abs/2306.07957v2 | [
"https://github.com/autonomousvision/carla_garage"
] | In the paper 'Hidden Biases of End-to-End Driving Models', what Driving Score score did the TransFuser++ (TF++) model get on the CARLA dataset
| 69 |
Flowers-102 | ZLaP | Label Propagation for Zero-shot Classification with Vision-Language Models | 2024-04-05T00:00:00 | https://arxiv.org/abs/2404.04072v1 | [
"https://github.com/vladan-stojnic/zlap"
] | In the paper 'Label Propagation for Zero-shot Classification with Vision-Language Models', what Accuracy score did the ZLaP model get on the Flowers-102 dataset
| 75.9 |
FC100 5-way (1-shot) | MSENet | Enhancing Few-Shot Image Classification through Learnable Multi-Scale Embedding and Attention Mechanisms | 2024-09-12T00:00:00 | https://arxiv.org/abs/2409.07989v1 | [
"https://github.com/FatemehAskari/MSENet"
] | In the paper 'Enhancing Few-Shot Image Classification through Learnable Multi-Scale Embedding and Attention Mechanisms', what Accuracy score did the MSENet model get on the FC100 5-way (1-shot) dataset
| 44.78 |
FGVC | GateVPT(ViT-B/16_MoCo_v3_pretrained_ImageNet-1K) | Improving Visual Prompt Tuning for Self-supervised Vision Transformers | 2023-06-08T00:00:00 | https://arxiv.org/abs/2306.05067v1 | [
"https://github.com/ryongithub/gatedprompttuning"
] | In the paper 'Improving Visual Prompt Tuning for Self-supervised Vision Transformers', what Mean Accuracy score did the GateVPT(ViT-B/16_MoCo_v3_pretrained_ImageNet-1K) model get on the FGVC dataset
| 83.00 |
IllusionVQA | Gemini-Pro | IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models | 2024-03-23T00:00:00 | https://arxiv.org/abs/2403.15952v3 | [
"https://github.com/csebuetnlp/illusionvqa"
] | In the paper 'IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models', what Accuracy score did the Gemini-Pro model get on the IllusionVQA dataset
| 43.5 |
UZLF | Little W-Net | VascX Models: Model Ensembles for Retinal Vascular Analysis from Color Fundus Images | 2024-09-24T00:00:00 | https://arxiv.org/abs/2409.16016v2 | [
"https://github.com/eyened/rtnls_vascx_models"
] | In the paper 'VascX Models: Model Ensembles for Retinal Vascular Analysis from Color Fundus Images', what Average Dice (0.5*Dice_a + 0.5*Dice_v) score did the Little W-Net model get on the UZLF dataset
| 60.9 |
WenetSpeech | Zipformer+pruned transducer (no external language model) | Zipformer: A faster and better encoder for automatic speech recognition | 2023-10-17T00:00:00 | https://arxiv.org/abs/2310.11230v4 | [
"https://github.com/k2-fsa/icefall"
] | In the paper 'Zipformer: A faster and better encoder for automatic speech recognition', what Character Error Rate (CER) score did the Zipformer+pruned transducer (no external language model) model get on the WenetSpeech dataset
| 7.29 |
WikiTableQuestions | CABINET | CABINET: Content Relevance based Noise Reduction for Table Question Answering | 2024-02-02T00:00:00 | https://arxiv.org/abs/2402.01155v3 | [
"https://github.com/sohanpatnaik106/cabinet_qa"
] | In the paper 'CABINET: Content Relevance based Noise Reduction for Table Question Answering', what Accuracy (Dev) score did the CABINET model get on the WikiTableQuestions dataset
| / |
enwik8 | Skip Cross-Head Transformer-XL | Memory-efficient Stochastic methods for Memory-based Transformers | 2023-11-14T00:00:00 | https://arxiv.org/abs/2311.08123v1 | [
"https://github.com/vishwajit-vishnu/memory-efficient-stochastic-methods-for-memory-based-transformers"
] | In the paper 'Memory-efficient Stochastic methods for Memory-based Transformers', what Bit per Character (BPC) score did the Skip Cross-Head Transformer-XL model get on the enwik8 dataset
| 1.033 |
GSM8K | OVM-Llama2-7B (verify100@1) | OVM, Outcome-supervised Value Models for Planning in Mathematical Reasoning | 2023-11-16T00:00:00 | https://arxiv.org/abs/2311.09724v2 | [
"https://github.com/freedomintelligence/ovm"
] | In the paper 'OVM, Outcome-supervised Value Models for Planning in Mathematical Reasoning', what Accuracy score did the OVM-Llama2-7B (verify100@1) model get on the GSM8K dataset
| 73.7 |
Exposure-Errors | CSEC | Color Shift Estimation-and-Correction for Image Enhancement | 2024-05-28T00:00:00 | https://arxiv.org/abs/2405.17725v2 | [
"https://github.com/yiyulics/csec"
] | In the paper 'Color Shift Estimation-and-Correction for Image Enhancement', what PSNR score did the CSEC model get on the Exposure-Errors dataset
| 22.728 |
Oracle-MNIST | ResNet-18 | Vision Eagle Attention: a new lens for advancing image classification | 2024-11-15T00:00:00 | https://arxiv.org/abs/2411.10564v2 | [
"https://github.com/MahmudulHasan11085/Vision-Eagle-Attention"
] | In the paper 'Vision Eagle Attention: a new lens for advancing image classification', what Accuracy score did the ResNet-18 model get on the Oracle-MNIST dataset
| 96.77 |
Tokyo247 | ProGEO | ProGEO: Generating Prompts through Image-Text Contrastive Learning for Visual Geo-localization | 2024-06-04T00:00:00 | https://arxiv.org/abs/2406.01906v1 | [
"https://github.com/chain-mao/progeo"
] | In the paper 'ProGEO: Generating Prompts through Image-Text Contrastive Learning for Visual Geo-localization', what Recall@1 score did the ProGEO model get on the Tokyo247 dataset
| 88.6 |
ISIC2018 | MobileUNETR | MobileUNETR: A Lightweight End-To-End Hybrid Vision Transformer For Efficient Medical Image Segmentation | 2024-09-04T00:00:00 | https://arxiv.org/abs/2409.03062v1 | [
"https://github.com/osupcvlab/mobileunetr"
] | In the paper 'MobileUNETR: A Lightweight End-To-End Hybrid Vision Transformer For Efficient Medical Image Segmentation', what mean Dice score did the MobileUNETR model get on the ISIC2018 dataset
| 90.74 |
OVIS validation | CTVIS (ResNet-50) | CTVIS: Consistent Training for Online Video Instance Segmentation | 2023-07-24T00:00:00 | https://arxiv.org/abs/2307.12616v1 | [
"https://github.com/kainingying/ctvis"
] | In the paper 'CTVIS: Consistent Training for Online Video Instance Segmentation', what mask AP score did the CTVIS (ResNet-50) model get on the OVIS validation dataset
| 35.5 |
MCubeS | ShareCMP(B2 RGB-A) | ShareCMP: Polarization-Aware RGB-P Semantic Segmentation | 2023-12-06T00:00:00 | https://arxiv.org/abs/2312.03430v2 | [
"https://github.com/lefteyex/sharecmp"
] | In the paper 'ShareCMP: Polarization-Aware RGB-P Semantic Segmentation', what mIoU score did the ShareCMP(B2 RGB-A) model get on the MCubeS dataset
| 50.34 |
SAFIM | codegen-2B-multi | Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks | 2024-03-07T00:00:00 | https://arxiv.org/abs/2403.04814v3 | [
"https://github.com/gonglinyuan/safim"
] | In the paper 'Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks', what Algorithmic score did the codegen-2B-multi model get on the SAFIM dataset
| 23.49 |
UAV123 | ARTrackV2-L | ARTrackV2: Prompting Autoregressive Tracker Where to Look and How to Describe | 2023-12-28T00:00:00 | https://arxiv.org/abs/2312.17133v3 | [
"https://github.com/miv-xjtu/artrack"
] | In the paper 'ARTrackV2: Prompting Autoregressive Tracker Where to Look and How to Describe', what AUC score did the ARTrackV2-L model get on the UAV123 dataset
| 0.717 |
Cityscapes-to-FoggyZurich | CoDA | CoDA: Instructive Chain-of-Domain Adaptation with Severity-Aware Visual Prompt Tuning | 2024-03-26T00:00:00 | https://arxiv.org/abs/2403.17369v3 | [
"https://github.com/Cuzyoung/CoDA"
] | In the paper 'CoDA: Instructive Chain-of-Domain Adaptation with Severity-Aware Visual Prompt Tuning', what mIoU score did the CoDA model get on the Cityscapes-to-FoggyZurich dataset
| 60.9 |
EgoTaskQA | EgoVLPv2 | EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone | 2023-07-11T00:00:00 | https://arxiv.org/abs/2307.05463v2 | [
"https://github.com/facebookresearch/EgoVLPv2"
] | In the paper 'EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone', what Direct score did the EgoVLPv2 model get on the EgoTaskQA dataset
| 46.26 |
ScanNet++ | UniDet3D | UniDet3D: Multi-dataset Indoor 3D Object Detection | 2024-09-06T00:00:00 | https://arxiv.org/abs/2409.04234v1 | [
"https://github.com/filapro/unidet3d"
] | In the paper 'UniDet3D: Multi-dataset Indoor 3D Object Detection', what mAP@0.25 score did the UniDet3D model get on the ScanNet++ dataset
| 26.4 |
CUHK Avenue | SD-MAE | Self-Distilled Masked Auto-Encoders are Efficient Video Anomaly Detectors | 2023-06-21T00:00:00 | https://arxiv.org/abs/2306.12041v2 | [
"https://github.com/ristea/aed-mae"
] | In the paper 'Self-Distilled Masked Auto-Encoders are Efficient Video Anomaly Detectors', what AUC score did the SD-MAE model get on the CUHK Avenue dataset
| 91.3% |
ARC (Challenge) | phi-1.5-web 1.3B (zero-shot) | Textbooks Are All You Need II: phi-1.5 technical report | 2023-09-11T00:00:00 | https://arxiv.org/abs/2309.05463v1 | [
"https://github.com/knowlab/bi-weekly-paper-presentation"
] | In the paper 'Textbooks Are All You Need II: phi-1.5 technical report', what Accuracy score did the phi-1.5-web 1.3B (zero-shot) model get on the ARC (Challenge) dataset
| 44.9 |
DomainNet | SPG (CLIP, ResNet-50) | Soft Prompt Generation for Domain Generalization | 2024-04-30T00:00:00 | https://arxiv.org/abs/2404.19286v2 | [
"https://github.com/renytek13/soft-prompt-generation-with-cgan"
] | In the paper 'Soft Prompt Generation for Domain Generalization', what Average Accuracy score did the SPG (CLIP, ResNet-50) model get on the DomainNet dataset
| 50.1 |
HumanEval | MetaGPT (GPT-4) | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | 2023-08-01T00:00:00 | https://arxiv.org/abs/2308.00352v7 | [
"https://github.com/geekan/metagpt"
] | In the paper 'MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework', what Pass@1 score did the MetaGPT (GPT-4) model get on the HumanEval dataset
| 85.9 |
RefCOCOg-test | MaskRIS (Swin-B) | MaskRIS: Semantic Distortion-aware Data Augmentation for Referring Image Segmentation | 2024-11-28T00:00:00 | https://arxiv.org/abs/2411.19067v1 | [
"https://github.com/naver-ai/maskris"
] | In the paper 'MaskRIS: Semantic Distortion-aware Data Augmentation for Referring Image Segmentation', what Overall IoU score did the MaskRIS (Swin-B) model get on the RefCOCOg-test dataset
| 66.5 |
ICFG-PEDES | TBPS-CLIP (ViT-B/16) | An Empirical Study of CLIP for Text-based Person Search | 2023-08-19T00:00:00 | https://arxiv.org/abs/2308.10045v2 | [
"https://github.com/flame-chasers/tbps-clip"
] | In the paper 'An Empirical Study of CLIP for Text-based Person Search', what mAP score did the TBPS-CLIP (ViT-B/16) model get on the ICFG-PEDES dataset
| 39.83 |
WDC-PAVE | GPT-4_10_example_values_&_10_demonstrations | Using LLMs for the Extraction and Normalization of Product Attribute Values | 2024-03-04T00:00:00 | https://arxiv.org/abs/2403.02130v4 | [
"https://github.com/wbsg-uni-mannheim/wdc-pave"
] | In the paper 'Using LLMs for the Extraction and Normalization of Product Attribute Values', what F1-Score score did the GPT-4_10_example_values_&_10_demonstrations model get on the WDC-PAVE dataset
| 90.54 |
ParaMAWPS | GPT-J (6B) | Math Word Problem Solving by Generating Linguistic Variants of Problem Statements | 2023-06-24T00:00:00 | https://arxiv.org/abs/2306.13899v1 | [
"https://github.com/starscream-11813/variational-mathematical-reasoning"
] | In the paper 'Math Word Problem Solving by Generating Linguistic Variants of Problem Statements', what Accuracy (%) score did the GPT-J (6B) model get on the ParaMAWPS dataset
| 5.9 |
PASTIS | Exchanger+Mask2Former | Revisiting the Encoding of Satellite Image Time Series | 2023-05-03T00:00:00 | https://arxiv.org/abs/2305.02086v2 | [
"https://github.com/TotalVariation/Exchanger4SITS"
] | In the paper 'Revisiting the Encoding of Satellite Image Time Series', what Mean IoU (test) score did the Exchanger+Mask2Former model get on the PASTIS dataset
| 67.9 |
Peir Gross | BiomedGPT | BiomedGPT: A Generalist Vision-Language Foundation Model for Diverse Biomedical Tasks | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.17100v4 | [
"https://github.com/taokz/biomedgpt"
] | In the paper 'BiomedGPT: A Generalist Vision-Language Foundation Model for Diverse Biomedical Tasks', what CIDEr score did the BiomedGPT model get on the Peir Gross dataset
| 122.7 |
ETTh2 (336) Multivariate | MoLE-DLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06786v3 | [
"https://github.com/rogerni/mole"
] | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the ETTh2 (336) Multivariate dataset
| 0.419 |
Pascal Panoptic Parts | TAPPS (Swin-B, COCO pre-training) | Task-aligned Part-aware Panoptic Segmentation through Joint Object-Part Representations | 2024-06-14T00:00:00 | https://arxiv.org/abs/2406.10114v1 | [
"https://github.com/tue-mps/tapps"
] | In the paper 'Task-aligned Part-aware Panoptic Segmentation through Joint Object-Part Representations', what PartPQ score did the TAPPS (Swin-B, COCO pre-training) model get on the Pascal Panoptic Parts dataset
| 60.4 |
MAPS | YourMT3+ (YPTF.MoE+M, unseen) noPS | YourMT3+: Multi-instrument Music Transcription with Enhanced Transformer Architectures and Cross-dataset Stem Augmentation | 2024-07-05T00:00:00 | https://arxiv.org/abs/2407.04822v3 | [
"https://github.com/mimbres/yourmt3"
] | In the paper 'YourMT3+: Multi-instrument Music Transcription with Enhanced Transformer Architectures and Cross-dataset Stem Augmentation', what Onset F1 score did the YourMT3+ (YPTF.MoE+M, unseen) noPS model get on the MAPS dataset
| 88.73 |
EPIC-KITCHENS-100 | Avion (ViT-L) | Training a Large Video Model on a Single Machine in a Day | 2023-09-28T00:00:00 | https://arxiv.org/abs/2309.16669v1 | [
"https://github.com/zhaoyue-zephyrus/avion"
] | In the paper 'Training a Large Video Model on a Single Machine in a Day', what Action@1 score did the Avion (ViT-L) model get on the EPIC-KITCHENS-100 dataset
| 54.4 |
PASCAL VOC 2007 | TinyissimoYOLO-v8 | Ultra-Efficient On-Device Object Detection on AI-Integrated Smart Glasses with TinyissimoYOLO | 2023-11-02T00:00:00 | https://arxiv.org/abs/2311.01057v2 | [
"https://github.com/eth-pbl/tinyissimoyolo"
] | In the paper 'Ultra-Efficient On-Device Object Detection on AI-Integrated Smart Glasses with TinyissimoYOLO', what MAP score did the TinyissimoYOLO-v8 model get on the PASCAL VOC 2007 dataset
| 42.3% |
RefCOCOg-test | HyperSeg | HyperSeg: Towards Universal Visual Segmentation with Large Language Model | 2024-11-26T00:00:00 | https://arxiv.org/abs/2411.17606v2 | [
"https://github.com/congvvc/HyperSeg"
] | In the paper 'HyperSeg: Towards Universal Visual Segmentation with Large Language Model', what Overall IoU score did the HyperSeg model get on the RefCOCOg-test dataset
| 78.9 |
SVOX-Sun | BoQ (ResNet-50) | BoQ: A Place is Worth a Bag of Learnable Queries | 2024-05-12T00:00:00 | https://arxiv.org/abs/2405.07364v3 | [
"https://github.com/amaralibey/bag-of-queries"
] | In the paper 'BoQ: A Place is Worth a Bag of Learnable Queries', what Recall@1 score did the BoQ (ResNet-50) model get on the SVOX-Sun dataset
| 95.9 |
ActivityNet | vid-TLDR (UMT-L) | vid-TLDR: Training Free Token merging for Light-weight Video Transformer | 2024-03-20T00:00:00 | https://arxiv.org/abs/2403.13347v2 | [
"https://github.com/mlvlab/vid-tldr"
] | In the paper 'vid-TLDR: Training Free Token merging for Light-weight Video Transformer', what text-to-video R@1 score did the vid-TLDR (UMT-L) model get on the ActivityNet dataset
| 66.7 |
SF-XL test v1 | ProGEO | ProGEO: Generating Prompts through Image-Text Contrastive Learning for Visual Geo-localization | 2024-06-04T00:00:00 | https://arxiv.org/abs/2406.01906v1 | [
"https://github.com/chain-mao/progeo"
] | In the paper 'ProGEO: Generating Prompts through Image-Text Contrastive Learning for Visual Geo-localization', what Recall@1 score did the ProGEO model get on the SF-XL test v1 dataset
| 84.7 |
Food.com | LLaVA-Chef | LLaVA-Chef: A Multi-modal Generative Model for Food Recipes | 2024-08-29T00:00:00 | https://arxiv.org/abs/2408.16889v1 | [
"https://github.com/mohbattharani/LLaVA-Chef"
] | In the paper 'LLaVA-Chef: A Multi-modal Generative Model for Food Recipes', what BLEU-1 score did the LLaVA-Chef model get on the Food.com dataset
| 29 |
RefCOCO testA | EVP | EVP: Enhanced Visual Perception using Inverse Multi-Attentive Feature Refinement and Regularized Image-Text Alignment | 2023-12-13T00:00:00 | https://arxiv.org/abs/2312.08548v1 | [
"https://github.com/lavreniuk/evp"
] | In the paper 'EVP: Enhanced Visual Perception using Inverse Multi-Attentive Feature Refinement and Regularized Image-Text Alignment', what Overall IoU score did the EVP model get on the RefCOCO testA dataset
| 78.75 |
MIT300 | SUM | SUM: Saliency Unification through Mamba for Visual Attention Modeling | 2024-06-25T00:00:00 | https://arxiv.org/abs/2406.17815v2 | [
"https://github.com/Arhosseini77/SUM"
] | In the paper 'SUM: Saliency Unification through Mamba for Visual Attention Modeling', what CC score did the SUM model get on the MIT300 dataset
| 0.768 |
VietMed | GMM-HMM SAT+VTLN | VietMed: A Dataset and Benchmark for Automatic Speech Recognition of Vietnamese in the Medical Domain | 2024-04-08T00:00:00 | https://arxiv.org/abs/2404.05659v2 | [
"https://github.com/leduckhai/multimed"
] | In the paper 'VietMed: A Dataset and Benchmark for Automatic Speech Recognition of Vietnamese in the Medical Domain', what Dev WER score did the GMM-HMM SAT+VTLN model get on the VietMed dataset
| 52.2 |
DAVIS 2017 (val) | Cutie+ (base, MEGA) | Putting the Object Back into Video Object Segmentation | 2023-10-19T00:00:00 | https://arxiv.org/abs/2310.12982v2 | [
"https://github.com/hkchengrex/Cutie"
] | In the paper 'Putting the Object Back into Video Object Segmentation', what Jaccard (Mean) score did the Cutie+ (base, MEGA) model get on the DAVIS 2017 (val) dataset
| 85.5 |
DeepCAD | A | 19 Parameters Is All You Need: Tiny Neural Networks for Particle Physics | 2023-10-24T00:00:00 | https://arxiv.org/abs/2310.16121v3 | [
"https://github.com/abogatskiy/pelican-nano"
] | In the paper '19 Parameters Is All You Need: Tiny Neural Networks for Particle Physics', what 1-1 score did the A model get on the DeepCAD dataset
| mm |
CHASE_DB1 | MERIT-GCASCADE | G-CASCADE: Efficient Cascaded Graph Convolutional Decoding for 2D Medical Image Segmentation | 2023-10-24T00:00:00 | https://arxiv.org/abs/2310.16175v1 | [
"https://github.com/SLDGroup/G-CASCADE"
] | In the paper 'G-CASCADE: Efficient Cascaded Graph Convolutional Decoding for 2D Medical Image Segmentation', what DSC score did the MERIT-GCASCADE model get on the CHASE_DB1 dataset
| 0.8267 |
kickstarter | ResNet + RoBERTa finetune | PyTorch Frame: A Modular Framework for Multi-Modal Tabular Learning | 2024-03-31T00:00:00 | https://arxiv.org/abs/2404.00776v1 | [
"https://github.com/pyg-team/pytorch-frame"
] | In the paper 'PyTorch Frame: A Modular Framework for Multi-Modal Tabular Learning', what AUROC score did the ResNet + RoBERTa finetune model get on the kickstarter dataset
| 0.786 |
MSMT17 | BoT+UFFM+AMC | Enhancing Person Re-Identification via Uncertainty Feature Fusion and Auto-weighted Measure Combination | 2024-05-02T00:00:00 | https://arxiv.org/abs/2405.01101v4 | [
"https://github.com/chequanghuy/Enhancing-Person-Re-Identification-via-UFFM-and-AMC"
] | In the paper 'Enhancing Person Re-Identification via Uncertainty Feature Fusion and Auto-weighted Measure Combination', what Rank-1 score did the BoT+UFFM+AMC model get on the MSMT17 dataset
| 82.0 |
CIFAR-100-LT (ρ=50) | MDCS | MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition | 2023-08-19T00:00:00 | https://arxiv.org/abs/2308.09922v2 | [
"https://github.com/fistyee/mdcs"
] | In the paper 'MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition', what Error Rate score did the MDCS model get on the CIFAR-100-LT (ρ=50) dataset
| 39.9 |
IMDb Movie Reviews | Space-DistilBERT | Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs | 2024-01-30T00:00:00 | https://arxiv.org/abs/2401.16638v1 | [
"https://github.com/stepantita/space-model"
] | In the paper 'Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs', what Accuracy (2 classes) score did the Space-DistilBERT model get on the IMDb Movie Reviews dataset
| 0.8322 |
SPED | BoQ (ResNet-50) | BoQ: A Place is Worth a Bag of Learnable Queries | 2024-05-12T00:00:00 | https://arxiv.org/abs/2405.07364v3 | [
"https://github.com/amaralibey/bag-of-queries"
] | In the paper 'BoQ: A Place is Worth a Bag of Learnable Queries', what Recall@1 score did the BoQ (ResNet-50) model get on the SPED dataset
| 86.5 |
Actor | CATv3-sup | CAT: A Causally Graph Attention Network for Trimming Heterophilic Graph | 2023-12-14T00:00:00 | https://arxiv.org/abs/2312.08672v3 | [
"https://github.com/geox-lab/cat"
] | In the paper 'CAT: A Causally Graph Attention Network for Trimming Heterophilic Graph', what Accuracy score did the CATv3-sup model get on the Actor dataset
| 38.5±1.2 |
Hawkins | AnyLoc-VLAD-DINOv2 | AnyLoc: Towards Universal Visual Place Recognition | 2023-08-01T00:00:00 | https://arxiv.org/abs/2308.00688v2 | [
"https://github.com/AnyLoc/AnyLoc"
] | In the paper 'AnyLoc: Towards Universal Visual Place Recognition', what Recall@1 score did the AnyLoc-VLAD-DINOv2 model get on the Hawkins dataset
| 65.25 |
CATT | Multilevel Diacritizer | CATT: Character-based Arabic Tashkeel Transformer | 2024-07-03T00:00:00 | https://arxiv.org/abs/2407.03236v3 | [
"https://github.com/abjadai/catt"
] | In the paper 'CATT: Character-based Arabic Tashkeel Transformer', what DER(%) score did the Multilevel Diacritizer model get on the CATT dataset
| 16.482 |
Amazon-Book | HSTU+MoL | Retrieval with Learned Similarities | 2024-07-22T00:00:00 | https://arxiv.org/abs/2407.15462v3 | [
"https://github.com/bailuding/rails"
] | In the paper 'Retrieval with Learned Similarities', what HR@10 score did the HSTU+MoL model get on the Amazon-Book dataset
| 0.0613 |
SYSU-CD | ChangeMamba | ChangeMamba: Remote Sensing Change Detection With Spatiotemporal State Space Model | 2024-04-04T00:00:00 | https://arxiv.org/abs/2404.03425v6 | [
"https://github.com/chenhongruixuan/mambacd"
] | In the paper 'ChangeMamba: Remote Sensing Change Detection With Spatiotemporal State Space Model', what F1 score did the ChangeMamba model get on the SYSU-CD dataset
| 83.11 |
LingOly | Command R+ | LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low-Resource and Extinct Languages | 2024-06-10T00:00:00 | https://arxiv.org/abs/2406.06196v3 | [
"https://github.com/am-bean/lingOly"
] | In the paper 'LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low-Resource and Extinct Languages', what Exact Match Accuracy score did the Command R+ model get on the LingOly dataset
| 21.5% |