| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
Columbia | Late Fusion | MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization | 2023-12-04T00:00:00 | https://arxiv.org/abs/2312.01790v2 | [
"https://github.com/idt-iti/mmfusion-iml"
] | In the paper 'MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization', what Average Pixel F1(Fixed threshold) score did the Late Fusion model get on the Columbia dataset
| .864 |
UCF-101 | VideoAssembler (Zero-shot, 256x256, class-conditional) | MagDiff: Multi-Alignment Diffusion for High-Fidelity Video Generation and Editing | 2023-11-29T00:00:00 | https://arxiv.org/abs/2311.17338v3 | [
"https://github.com/gulucaptain/videoassembler"
] | In the paper 'MagDiff: Multi-Alignment Diffusion for High-Fidelity Video Generation and Editing', what Inception Score score did the VideoAssembler (Zero-shot, 256x256, class-conditional) model get on the UCF-101 dataset
| 48.01 |
ScanNet | DeLA | Decoupled Local Aggregation for Point Cloud Learning | 2023-08-31T00:00:00 | https://arxiv.org/abs/2308.16532v1 | [
"https://github.com/matrix-asc/dela"
] | In the paper 'Decoupled Local Aggregation for Point Cloud Learning', what val mIoU score did the DeLA model get on the ScanNet dataset
| 75.9 |
MVTec AD | SuperSimpleNet | SuperSimpleNet: Unifying Unsupervised and Supervised Learning for Fast and Reliable Surface Defect Detection | 2024-08-06T00:00:00 | https://arxiv.org/abs/2408.03143v2 | [
"https://github.com/blaz-r/supersimplenet"
] | In the paper 'SuperSimpleNet: Unifying Unsupervised and Supervised Learning for Fast and Reliable Surface Defect Detection', what Detection AUROC score did the SuperSimpleNet model get on the MVTec AD dataset
| 98.4 |
ETTh1 (336) Multivariate | Basisformer | BasisFormer: Attention-based Time Series Forecasting with Learnable and Interpretable Basis | 2023-10-31T00:00:00 | https://arxiv.org/abs/2310.20496v2 | [
"https://github.com/nzl5116190/basisformer"
] | In the paper 'BasisFormer: Attention-based Time Series Forecasting with Learnable and Interpretable Basis', what MSE score did the Basisformer model get on the ETTh1 (336) Multivariate dataset
| 0.473 |
LVIS v1.0 | CLIPSelf | CLIPSelf: Vision Transformer Distills Itself for Open-Vocabulary Dense Prediction | 2023-10-02T00:00:00 | https://arxiv.org/abs/2310.01403v2 | [
"https://github.com/wusize/clipself"
] | In the paper 'CLIPSelf: Vision Transformer Distills Itself for Open-Vocabulary Dense Prediction', what AP novel-LVIS base training score did the CLIPSelf model get on the LVIS v1.0 dataset
| 34.9 |
ModelNet40 | PCP-MAE | PCP-MAE: Learning to Predict Centers for Point Masked Autoencoders | 2024-08-16T00:00:00 | https://arxiv.org/abs/2408.08753v2 | [
"https://github.com/aHapBean/PCP-MAE"
] | In the paper 'PCP-MAE: Learning to Predict Centers for Point Masked Autoencoders', what Overall Accuracy score did the PCP-MAE model get on the ModelNet40 dataset
| 94.2 |
Manga109 - 2x upscaling | HMA† | HMANet: Hybrid Multi-Axis Aggregation Network for Image Super-Resolution | 2024-05-08T00:00:00 | https://arxiv.org/abs/2405.05001v1 | [
"https://github.com/korouuuuu/hma"
] | In the paper 'HMANet: Hybrid Multi-Axis Aggregation Network for Image Super-Resolution', what PSNR score did the HMA† model get on the Manga109 - 2x upscaling dataset
| 41.13 |
VietMed | GMM-HMM VTLN | VietMed: A Dataset and Benchmark for Automatic Speech Recognition of Vietnamese in the Medical Domain | 2024-04-08T00:00:00 | https://arxiv.org/abs/2404.05659v2 | [
"https://github.com/leduckhai/multimed"
] | In the paper 'VietMed: A Dataset and Benchmark for Automatic Speech Recognition of Vietnamese in the Medical Domain', what Dev WER score did the GMM-HMM VTLN model get on the VietMed dataset
| 61.3 |
Cornell (60%/20%/20% random splits) | HH-GraphSAGE | Half-Hop: A graph upsampling approach for slowing down message passing | 2023-08-17T00:00:00 | https://arxiv.org/abs/2308.09198v1 | [
"https://github.com/nerdslab/halfhop"
] | In the paper 'Half-Hop: A graph upsampling approach for slowing down message passing', what 1:1 Accuracy score did the HH-GraphSAGE model get on the Cornell (60%/20%/20% random splits) dataset
| 74.6 ± 6.06 |
IMDB-BINARY | R-GCN + PANDA | PANDA: Expanded Width-Aware Message Passing Beyond Rewiring | 2024-06-06T00:00:00 | https://arxiv.org/abs/2406.03671v2 | [
"https://github.com/jeongwhanchoi/panda"
] | In the paper 'PANDA: Expanded Width-Aware Message Passing Beyond Rewiring', what Accuracy score did the R-GCN + PANDA model get on the IMDB-BINARY dataset
| 66.79 |
iNaturalist | AIMv2-3B (448 res) | Multimodal Autoregressive Pre-training of Large Vision Encoders | 2024-11-21T00:00:00 | https://arxiv.org/abs/2411.14402v1 | [
"https://github.com/apple/ml-aim"
] | In the paper 'Multimodal Autoregressive Pre-training of Large Vision Encoders', what Top 1 Accuracy score did the AIMv2-3B (448 res) model get on the iNaturalist dataset
| 85.9 |
VeRi-776 | MBR4B-LAI (without re-ranking) | Strength in Diversity: Multi-Branch Representation Learning for Vehicle Re-Identification | 2023-10-02T00:00:00 | https://arxiv.org/abs/2310.01129v1 | [
"https://github.com/videturfortuna/vehicle_reid_itsc2023"
] | In the paper 'Strength in Diversity: Multi-Branch Representation Learning for Vehicle Re-Identification', what mAP score did the MBR4B-LAI (without re-ranking) model get on the VeRi-776 dataset
| 86.0 |
MVTec 3D-AD (RGB) | CPR | Target before Shooting: Accurate Anomaly Detection and Localization under One Millisecond via Cascade Patch Retrieval | 2023-08-13T00:00:00 | https://arxiv.org/abs/2308.06748v1 | [
"https://github.com/flyinghu123/cpr"
] | In the paper 'Target before Shooting: Accurate Anomaly Detection and Localization under One Millisecond via Cascade Patch Retrieval', what Segmentation AP score did the CPR model get on the MVTec 3D-AD (RGB) dataset
| 57.8 |
CIFAR-10 | WRN-28-10 | Language Guided Adversarial Purification | 2023-09-19T00:00:00 | https://arxiv.org/abs/2309.10348v1 | [
"https://github.com/Visual-Conception-Group/LGAP"
] | In the paper 'Language Guided Adversarial Purification', what Accuracy score did the WRN-28-10 model get on the CIFAR-10 dataset
| 90.03 |
FRMT (Portuguese - Brazil) | PaLM | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | [
"https://github.com/eternityyw/tram-benchmark"
] | In the paper 'PaLM 2 Technical Report', what BLEURT score did the PaLM model get on the FRMT (Portuguese - Brazil) dataset
| 78.5 |
fake | ResNet + RoBERTa embedding | PyTorch Frame: A Modular Framework for Multi-Modal Tabular Learning | 2024-03-31T00:00:00 | https://arxiv.org/abs/2404.00776v1 | [
"https://github.com/pyg-team/pytorch-frame"
] | In the paper 'PyTorch Frame: A Modular Framework for Multi-Modal Tabular Learning', what AUROC score did the ResNet + RoBERTa embedding model get on the fake dataset
| 0.934 |
Chameleon (60%/20%/20% random splits) | HH-GAT | Half-Hop: A graph upsampling approach for slowing down message passing | 2023-08-17T00:00:00 | https://arxiv.org/abs/2308.09198v1 | [
"https://github.com/nerdslab/halfhop"
] | In the paper 'Half-Hop: A graph upsampling approach for slowing down message passing', what 1:1 Accuracy score did the HH-GAT model get on the Chameleon (60%/20%/20% random splits) dataset
| 61.12 ± 1.83 |
iNaturalist 2018 | GML (ViT-B-16) | Long-Tailed Recognition by Mutual Information Maximization between Latent Features and Ground-Truth Labels | 2023-05-02T00:00:00 | https://arxiv.org/abs/2305.01160v3 | [
"https://github.com/bluecdm/Long-tailed-recognition"
] | In the paper 'Long-Tailed Recognition by Mutual Information Maximization between Latent Features and Ground-Truth Labels', what Top-1 Accuracy score did the GML (ViT-B-16) model get on the iNaturalist 2018 dataset
| 82.1% |
CFC-DAOD | MIC (ResNet50-FPN) | Align and Distill: Unifying and Improving Domain Adaptive Object Detection | 2024-03-18T00:00:00 | https://arxiv.org/abs/2403.12029v2 | [
"https://github.com/justinkay/aldi"
] | In the paper 'Align and Distill: Unifying and Improving Domain Adaptive Object Detection', what AP@0.5 score did the MIC (ResNet50-FPN) model get on the CFC-DAOD dataset
| 74.1 |
IndustReal | B3 - Synthetic Only | IndustReal: A Dataset for Procedure Step Recognition Handling Execution Errors in Egocentric Videos in an Industrial-Like Setting | 2023-10-26T00:00:00 | https://arxiv.org/abs/2310.17323v1 | [
"https://github.com/timschoonbeek/industreal"
] | In the paper 'IndustReal: A Dataset for Procedure Step Recognition Handling Execution Errors in Egocentric Videos in an Industrial-Like Setting', what F1 score did the B3 - Synthetic Only model get on the IndustReal dataset
| 0.597 |
MM-Vet | Mini-Gemini (+MoCa) | Deciphering Cross-Modal Alignment in Large Vision-Language Models with Modality Integration Rate | 2024-10-09T00:00:00 | https://arxiv.org/abs/2410.07167v2 | [
"https://github.com/shikiw/modality-integration-rate"
] | In the paper 'Deciphering Cross-Modal Alignment in Large Vision-Language Models with Modality Integration Rate', what GPT-4 score did the Mini-Gemini (+MoCa) model get on the MM-Vet dataset
| 42.9 |
Atari 2600 James Bond | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07T00:00:00 | https://arxiv.org/abs/2305.04180v3 | [
"https://github.com/xinjinghao/color"
] | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score did the ASL DDQN model get on the Atari 2600 James Bond dataset
| 2237 |
AG-ReID.v2 | V2E | AG-ReID.v2: Bridging Aerial and Ground Views for Person Re-identification | 2024-01-05T00:00:00 | https://arxiv.org/abs/2401.02634v2 | [
"https://github.com/huynguyen792/ag-reid.v2"
] | In the paper 'AG-ReID.v2: Bridging Aerial and Ground Views for Person Re-identification', what Average mAP score did the V2E model get on the AG-ReID.v2 dataset
| 80.72 |
PAD Dataset | OmniposeAD | PAD: A Dataset and Benchmark for Pose-agnostic Anomaly Detection | 2023-10-11T00:00:00 | https://arxiv.org/abs/2310.07716v1 | [
"https://github.com/ericlee0224/pad"
] | In the paper 'PAD: A Dataset and Benchmark for Pose-agnostic Anomaly Detection', what Detection AUROC score did the OmniposeAD model get on the PAD Dataset dataset
| 90.9 |
Astock | Factos Only | FinReport: Explainable Stock Earnings Forecasting via News Factor Analyzing Model | 2024-03-05T00:00:00 | https://arxiv.org/abs/2403.02647v1 | [
"https://github.com/frinkleko/finreport"
] | In the paper 'FinReport: Explainable Stock Earnings Forecasting via News Factor Analyzing Model', what Accuracy score did the Factos Only model get on the Astock dataset
| 63.74 |
CochlScene | Audio Flamingo | Audio Flamingo: A Novel Audio Language Model with Few-Shot Learning and Dialogue Abilities | 2024-02-02T00:00:00 | https://arxiv.org/abs/2402.01831v3 | [
"https://github.com/NVIDIA/audio-flamingo"
] | In the paper 'Audio Flamingo: A Novel Audio Language Model with Few-Shot Learning and Dialogue Abilities', what 1:1 Accuracy score did the Audio Flamingo model get on the CochlScene dataset
| 0.830 |
EconLogicQA | Mistral-7B-v0.1 | EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning | 2024-05-13T00:00:00 | https://arxiv.org/abs/2405.07938v2 | [
"https://github.com/yinzhu-quan/lm-evaluation-harness"
] | In the paper 'EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning', what Accuracy score did the Mistral-7B-v0.1 model get on the EconLogicQA dataset
| 0.2615 |
CoNLL 2003 (English) | GoLLIE | GoLLIE: Annotation Guidelines improve Zero-Shot Information-Extraction | 2023-10-05T00:00:00 | https://arxiv.org/abs/2310.03668v5 | [
"https://github.com/hitz-zentroa/gollie"
] | In the paper 'GoLLIE: Annotation Guidelines improve Zero-Shot Information-Extraction', what F1 score did the GoLLIE model get on the CoNLL 2003 (English) dataset
| 93.1 |
PRCC | FIRe2 | Exploring Fine-Grained Representation and Recomposition for Cloth-Changing Person Re-Identification | 2023-08-21T00:00:00 | https://arxiv.org/abs/2308.10692v2 | [
"https://github.com/qizaowang/fire-ccreid"
] | In the paper 'Exploring Fine-Grained Representation and Recomposition for Cloth-Changing Person Re-Identification', what mAP score did the FIRe2 model get on the PRCC dataset
| 63.1 |
SOTS Outdoor | Instruct-IPT | Instruct-IPT: All-in-One Image Processing Transformer via Weight Modulation | 2024-06-30T00:00:00 | https://arxiv.org/abs/2407.00676v1 | [
"https://github.com/huawei-noah/Pretrained-IPT"
] | In the paper 'Instruct-IPT: All-in-One Image Processing Transformer via Weight Modulation', what PSNR score did the Instruct-IPT model get on the SOTS Outdoor dataset
| 39.95 |
DTD | DePT | DePT: Decoupled Prompt Tuning | 2023-09-14T00:00:00 | https://arxiv.org/abs/2309.07439v2 | [
"https://github.com/koorye/dept"
] | In the paper 'DePT: Decoupled Prompt Tuning', what Harmonic mean score did the DePT model get on the DTD dataset
| 71.09 |
COCO minival | ViT-B+MST+CL | MST: Adaptive Multi-Scale Tokens Guided Interactive Segmentation | 2024-01-09T00:00:00 | https://arxiv.org/abs/2401.04403v2 | [
"https://github.com/hahamyt/mst"
] | In the paper 'MST: Adaptive Multi-Scale Tokens Guided Interactive Segmentation', what NoC@85 score did the ViT-B+MST+CL model get on the COCO minival dataset
| 2.08 |
MBPP | GPT-4 (few-shot) | DeepSeek-Coder: When the Large Language Model Meets Programming -- The Rise of Code Intelligence | 2024-01-25T00:00:00 | https://arxiv.org/abs/2401.14196v2 | [
"https://github.com/deepseek-ai/DeepSeek-Coder"
] | In the paper 'DeepSeek-Coder: When the Large Language Model Meets Programming -- The Rise of Code Intelligence', what Accuracy score did the GPT-4 (few-shot) model get on the MBPP dataset
| 80 |
Extended Task10_Colon Medical Decathlon | nnUNet | Expanding the Medical Decathlon dataset: segmentation of colon and colorectal cancer from computed tomography images | 2024-07-31T00:00:00 | https://arxiv.org/abs/2407.21516v1 | [
"https://github.com/blacky-i/MDE_colon_segmentation"
] | In the paper 'Expanding the Medical Decathlon dataset: segmentation of colon and colorectal cancer from computed tomography images', what Average Dice score did the nnUNet model get on the Extended Task10_Colon Medical Decathlon dataset
| 0.6988 |
ApolloScape | deeplabv3 | VREM-FL: Mobility-Aware Computation-Scheduling Co-Design for Vehicular Federated Learning | 2023-11-30T00:00:00 | https://arxiv.org/abs/2311.18741v3 | [
"https://github.com/lucaballotta/vrem-fl"
] | In the paper 'VREM-FL: Mobility-Aware Computation-Scheduling Co-Design for Vehicular Federated Learning', what mIoU score did the deeplabv3 model get on the ApolloScape dataset
| 0.43 |
SPair-71k | GeoAware-SC (Zero-Shot) | Telling Left from Right: Identifying Geometry-Aware Semantic Correspondence | 2023-11-28T00:00:00 | https://arxiv.org/abs/2311.17034v2 | [
"https://github.com/Junyi42/geoaware-sc"
] | In the paper 'Telling Left from Right: Identifying Geometry-Aware Semantic Correspondence', what PCK score did the GeoAware-SC (Zero-Shot) model get on the SPair-71k dataset
| 68.5 |
SHD | Event-SSM | Scalable Event-by-event Processing of Neuromorphic Sensory Signals With Deep State-Space Models | 2024-04-29T00:00:00 | https://arxiv.org/abs/2404.18508v3 | [
"https://github.com/Efficient-Scalable-Machine-Learning/event-ssm"
] | In the paper 'Scalable Event-by-event Processing of Neuromorphic Sensory Signals With Deep State-Space Models', what Percentage correct score did the Event-SSM model get on the SHD dataset
| 95.9 |
LibriSpeech test-other | FAdam | FAdam: Adam is a natural gradient optimizer using diagonal empirical Fisher information | 2024-05-21T00:00:00 | https://arxiv.org/abs/2405.12807v10 | [
"https://github.com/lessw2020/fadam_pytorch"
] | In the paper 'FAdam: Adam is a natural gradient optimizer using diagonal empirical Fisher information', what Word Error Rate (WER) score did the FAdam model get on the LibriSpeech test-other dataset
| 2.49 |
COCO test-dev | LeYOLO-Small | LeYOLO, New Scalable and Efficient CNN Architecture for Object Detection | 2024-06-20T00:00:00 | https://arxiv.org/abs/2406.14239v1 | [
"https://github.com/LilianHollard/LeYOLO"
] | In the paper 'LeYOLO, New Scalable and Efficient CNN Architecture for Object Detection', what box mAP score did the LeYOLO-Small model get on the COCO test-dev dataset
| 38.2 |
ColonINST-v1 (Unseen) | LLaVA-Med-v1.5 (w/ LoRA, w/ extra data) | LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day | 2023-06-01T00:00:00 | https://arxiv.org/abs/2306.00890v1 | [
"https://github.com/microsoft/LLaVA-Med"
] | In the paper 'LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day', what Accuracy score did the LLaVA-Med-v1.5 (w/ LoRA, w/ extra data) model get on the ColonINST-v1 (Unseen) dataset
| 70.00 |
IllusionVQA | GPT4-Vision | IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models | 2024-03-23T00:00:00 | https://arxiv.org/abs/2403.15952v3 | [
"https://github.com/csebuetnlp/illusionvqa"
] | In the paper 'IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models', what Accuracy score did the GPT4-Vision model get on the IllusionVQA dataset
| 40 |
MATH | MMOS-CODE-34B(0-shot) | An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning | 2024-02-23T00:00:00 | https://arxiv.org/abs/2403.00799v1 | [
"https://github.com/cyzhh/MMOS"
] | In the paper 'An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning', what Accuracy score did the MMOS-CODE-34B(0-shot) model get on the MATH dataset
| 49.5 |
MCubeS (P) | MMSFormer (RGB) | MMSFormer: Multimodal Transformer for Material and Semantic Segmentation | 2023-09-07T00:00:00 | https://arxiv.org/abs/2309.04001v4 | [
"https://github.com/csiplab/mmsformer"
] | In the paper 'MMSFormer: Multimodal Transformer for Material and Semantic Segmentation', what mIoU score did the MMSFormer (RGB) model get on the MCubeS (P) dataset
| 50.44 |
Kinetics | OTI(ViT-L/14) | Orthogonal Temporal Interpolation for Zero-Shot Video Recognition | 2023-08-14T00:00:00 | https://arxiv.org/abs/2308.06897v1 | [
"https://github.com/sweetorangezhuyan/mm2023_oti"
] | In the paper 'Orthogonal Temporal Interpolation for Zero-Shot Video Recognition', what Top-1 Accuracy score did the OTI(ViT-L/14) model get on the Kinetics dataset
| 70.6 |
VDD | Segformer-B5 | VDD: Varied Drone Dataset for Semantic Segmentation | 2023-05-23T00:00:00 | https://arxiv.org/abs/2305.13608v3 | [
"https://github.com/RussRobin/VDD"
] | In the paper 'VDD: Varied Drone Dataset for Semantic Segmentation', what mIoU score did the Segformer-B5 model get on the VDD dataset
| 82.11 |
Charades-STA | D3G (Semi-weak, I3D-K400-Pretrain-feature, evaluated by AdaFocus) | D3G: Exploring Gaussian Prior for Temporal Sentence Grounding with Glance Annotation | 2023-08-08T00:00:00 | https://arxiv.org/abs/2308.04197v1 | [
"https://github.com/solicucu/d3g"
] | In the paper 'D3G: Exploring Gaussian Prior for Temporal Sentence Grounding with Glance Annotation', what R1@0.5 score did the D3G (Semi-weak, I3D-K400-Pretrain-feature, evaluated by AdaFocus) model get on the Charades-STA dataset
| 41.7 |
MIMIC-CXR, MIMIC-IV | MedPromptX | MedPromptX: Grounded Multimodal Prompting for Chest X-ray Diagnosis | 2024-03-22T00:00:00 | https://arxiv.org/abs/2403.15585v3 | [
"https://github.com/biomedia-mbzuai/medpromptx"
] | In the paper 'MedPromptX: Grounded Multimodal Prompting for Chest X-ray Diagnosis', what F1-score score did the MedPromptX model get on the MIMIC-CXR, MIMIC-IV dataset
| 0.69 |
LIVE-VQC | ReLaX-VQA (finetuned on LIVE-VQC) | ReLaX-VQA: Residual Fragment and Layer Stack Extraction for Enhancing Video Quality Assessment | 2024-07-16T00:00:00 | https://arxiv.org/abs/2407.11496v1 | [
"https://github.com/xinyiw915/relax-vqa"
] | In the paper 'ReLaX-VQA: Residual Fragment and Layer Stack Extraction for Enhancing Video Quality Assessment', what PLCC score did the ReLaX-VQA (finetuned on LIVE-VQC) model get on the LIVE-VQC dataset
| 0.8876 |
KonIQ-10k | OneAlign | Q-Align: Teaching LMMs for Visual Scoring via Discrete Text-Defined Levels | 2023-12-28T00:00:00 | https://arxiv.org/abs/2312.17090v1 | [
"https://github.com/q-future/q-align"
] | In the paper 'Q-Align: Teaching LMMs for Visual Scoring via Discrete Text-Defined Levels', what SRCC score did the OneAlign model get on the KonIQ-10k dataset
| 0.941 |
AgeDB | ResNet-50-DLDL | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10T00:00:00 | https://arxiv.org/abs/2307.04570v3 | [
"https://github.com/paplhjak/facial-age-estimation-benchmark"
] | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-DLDL model get on the AgeDB dataset
| 5.80 |
AudioCaps | LOAE | Enhancing Automated Audio Captioning via Large Language Models with Optimized Audio Encoding | 2024-06-19T00:00:00 | https://arxiv.org/abs/2406.13275v2 | [
"https://github.com/frankenliu/LOAE"
] | In the paper 'Enhancing Automated Audio Captioning via Large Language Models with Optimized Audio Encoding', what CIDEr score did the LOAE model get on the AudioCaps dataset
| 0.816 |
miniF2F-test | COPRA + GPT-3.5 | An In-Context Learning Agent for Formal Theorem-Proving | 2023-10-06T00:00:00 | https://arxiv.org/abs/2310.04353v5 | [
"https://github.com/trishullab/copra"
] | In the paper 'An In-Context Learning Agent for Formal Theorem-Proving', what Pass@1 score did the COPRA + GPT-3.5 model get on the miniF2F-test dataset
| 11.9 |
NYU Depth v2 | DFormer-B | DFormer: Rethinking RGBD Representation Learning for Semantic Segmentation | 2023-09-18T00:00:00 | https://arxiv.org/abs/2309.09668v2 | [
"https://github.com/VCIP-RGBD/DFormer"
] | In the paper 'DFormer: Rethinking RGBD Representation Learning for Semantic Segmentation', what Mean IoU score did the DFormer-B model get on the NYU Depth v2 dataset
| 55.6% |
MVBench | Oryx(34B) | Oryx MLLM: On-Demand Spatial-Temporal Understanding at Arbitrary Resolution | 2024-09-19T00:00:00 | https://arxiv.org/abs/2409.12961v2 | [
"https://github.com/oryx-mllm/oryx"
] | In the paper 'Oryx MLLM: On-Demand Spatial-Temporal Understanding at Arbitrary Resolution', what Avg. score did the Oryx(34B) model get on the MVBench dataset
| 64.7 |
CUTE80 | DTrOCR 105M | DTrOCR: Decoder-only Transformer for Optical Character Recognition | 2023-08-30T00:00:00 | https://arxiv.org/abs/2308.15996v1 | [
"https://github.com/arvindrajan92/DTrOCR"
] | In the paper 'DTrOCR: Decoder-only Transformer for Optical Character Recognition', what Accuracy score did the DTrOCR 105M model get on the CUTE80 dataset
| 99.1 |
MCubeS | ShareCMP(B2 RGB-D) | ShareCMP: Polarization-Aware RGB-P Semantic Segmentation | 2023-12-06T00:00:00 | https://arxiv.org/abs/2312.03430v2 | [
"https://github.com/lefteyex/sharecmp"
] | In the paper 'ShareCMP: Polarization-Aware RGB-P Semantic Segmentation', what mIoU score did the ShareCMP(B2 RGB-D) model get on the MCubeS dataset
| 50.55 |
CIFAR-100 | shufflenet-v2(T:resnet-32x4, S:shufflenet-v2) | Logit Standardization in Knowledge Distillation | 2024-03-03T00:00:00 | https://arxiv.org/abs/2403.01427v1 | [
"https://github.com/sunshangquan/logit-standardardization-kd"
] | In the paper 'Logit Standardization in Knowledge Distillation', what Top-1 Accuracy (%) score did the shufflenet-v2(T:resnet-32x4, S:shufflenet-v2) model get on the CIFAR-100 dataset
| 78.76 |
Atari 2600 Asterix | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07T00:00:00 | https://arxiv.org/abs/2305.04180v3 | [
"https://github.com/xinjinghao/color"
] | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score did the ASL DDQN model get on the Atari 2600 Asterix dataset
| 567640 |
MATH | MuggleMATH 7B | MuggleMath: Assessing the Impact of Query and Response Augmentation on Math Reasoning | 2023-10-09T00:00:00 | https://arxiv.org/abs/2310.05506v3 | [
"https://github.com/ofa-sys/gsm8k-screl"
] | In the paper 'MuggleMath: Assessing the Impact of Query and Response Augmentation on Math Reasoning', what Accuracy score did the MuggleMATH 7B model get on the MATH dataset
| 25.8 |
TvSum | SG-DETR | Saliency-Guided DETR for Moment Retrieval and Highlight Detection | 2024-10-02T00:00:00 | https://arxiv.org/abs/2410.01615v1 | [
"https://github.com/ai-forever/sg-detr"
] | In the paper 'Saliency-Guided DETR for Moment Retrieval and Highlight Detection', what mAP score did the SG-DETR model get on the TvSum dataset
| 87.1 |
OpenBookQA | PaLM 2-M (1-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | [
"https://github.com/eternityyw/tram-benchmark"
] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-M (1-shot) model get on the OpenBookQA dataset
| 56.2 |
MSVD-QA | FrozenBiLM+ | Open-vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09363v1 | [
"https://github.com/mlvlab/ovqa"
] | In the paper 'Open-vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models', what Accuracy score did the FrozenBiLM+ model get on the MSVD-QA dataset
| 0.558 |
LargeST | HL | LargeST: A Benchmark Dataset for Large-Scale Traffic Forecasting | 2023-06-14T00:00:00 | https://arxiv.org/abs/2306.08259v2 | [
"https://github.com/liuxu77/largest"
] | In the paper 'LargeST: A Benchmark Dataset for Large-Scale Traffic Forecasting', what 12 steps MAPE score did the HL model get on the LargeST dataset
| 101.74 |
GSM8K | ToRA 70B | ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving | 2023-09-29T00:00:00 | https://arxiv.org/abs/2309.17452v4 | [
"https://github.com/microsoft/tora"
] | In the paper 'ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving', what Accuracy score did the ToRA 70B model get on the GSM8K dataset
| 84.3 |
Winoground | LLaVA-1.5-ZS-CoT | Compositional Chain-of-Thought Prompting for Large Multimodal Models | 2023-11-27T00:00:00 | https://arxiv.org/abs/2311.17076v3 | [
"https://github.com/chancharikmitra/ccot"
] | In the paper 'Compositional Chain-of-Thought Prompting for Large Multimodal Models', what Text Score score did the LLaVA-1.5-ZS-CoT model get on the Winoground dataset
| 28.0 |
Breakfast | BaFormer | Efficient Temporal Action Segmentation via Boundary-aware Query Voting | 2024-05-25T00:00:00 | https://arxiv.org/abs/2405.15995v1 | [
"https://github.com/peiyao-w/baformer"
] | In the paper 'Efficient Temporal Action Segmentation via Boundary-aware Query Voting', what F1@10% score did the BaFormer model get on the Breakfast dataset
| 79.2 |
CHILI-3K | GIN | CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning | 2024-02-20T00:00:00 | https://arxiv.org/abs/2402.13221v2 | [
"https://github.com/UlrikFriisJensen/CHILI"
] | In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what MSE score did the GIN model get on the CHILI-3K dataset
| 0.464 +/- 0.005 |
DiDeMo | TESTA (ViT-B/16) | TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding | 2023-10-29T00:00:00 | https://arxiv.org/abs/2310.19060v1 | [
"https://github.com/renshuhuai-andy/testa"
] | In the paper 'TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding', what text-to-video R@1 score did the TESTA (ViT-B/16) model get on the DiDeMo dataset
| 61.2 |
AFAD | ResNet-50-Mean-Variance | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10T00:00:00 | https://arxiv.org/abs/2307.04570v3 | [
"https://github.com/paplhjak/facial-age-estimation-benchmark"
] | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-Mean-Variance model get on the AFAD dataset
| 3.16 |
DSIFN-CD | CGNet | Change Guiding Network: Incorporating Change Prior to Guide Change Detection in Remote Sensing Imagery | 2024-04-14T00:00:00 | https://arxiv.org/abs/2404.09179v1 | [
"https://github.com/chengxihan/cgnet-cd"
] | In the paper 'Change Guiding Network: Incorporating Change Prior to Guide Change Detection in Remote Sensing Imagery', what F1 score did the CGNet model get on the DSIFN-CD dataset
| 60.19 |
Mip-NeRF 360 | Compressed 3D Gaussian Splatting | Compressed 3D Gaussian Splatting for Accelerated Novel View Synthesis | 2023-11-17T00:00:00 | https://arxiv.org/abs/2401.02436v2 | [
"https://github.com/KeKsBoTer/c3dgs"
] | In the paper 'Compressed 3D Gaussian Splatting for Accelerated Novel View Synthesis', what PSNR score did the Compressed 3D Gaussian Splatting model get on the Mip-NeRF 360 dataset
| 26.98 |
BC5CDR | UniNER-7B | UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition | 2023-08-07T00:00:00 | https://arxiv.org/abs/2308.03279v2 | [
"https://github.com/emma1066/retrieval-augmented-it-openner"
] | In the paper 'UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition', what F1 score did the UniNER-7B model get on the BC5CDR dataset
| 89.34 |
Winoground | BLIP (ITC) | Revisiting the Role of Language Priors in Vision-Language Models | 2023-06-02T00:00:00 | https://arxiv.org/abs/2306.01879v4 | [
"https://github.com/linzhiqiu/visual_gpt_score"
] | In the paper 'Revisiting the Role of Language Priors in Vision-Language Models', what Text Score score did the BLIP (ITC) model get on the Winoground dataset
| 28.0 |
Groove | Beat This! | Beat this! Accurate beat tracking without DBN postprocessing | 2024-07-31T00:00:00 | https://arxiv.org/abs/2407.21658v1 | [
"https://github.com/CPJKU/beat_this"
] | In the paper 'Beat this! Accurate beat tracking without DBN postprocessing', what F1 score did the Beat This! model get on the Groove dataset
| 82.1 |
PHEVA | GEPC | PHEVA: A Privacy-preserving Human-centric Video Anomaly Detection Dataset | 2024-08-26T00:00:00 | https://arxiv.org/abs/2408.14329v1 | [
"https://github.com/tecsar-uncc/pheva"
] | In the paper 'PHEVA: A Privacy-preserving Human-centric Video Anomaly Detection Dataset', what AUC-ROC score did the GEPC model get on the PHEVA dataset
| 62.25 |
Urban100 - 4x upscaling | HMA† | HMANet: Hybrid Multi-Axis Aggregation Network for Image Super-Resolution | 2024-05-08T00:00:00 | https://arxiv.org/abs/2405.05001v1 | [
"https://github.com/korouuuuu/hma"
] | In the paper 'HMANet: Hybrid Multi-Axis Aggregation Network for Image Super-Resolution', what PSNR score did the HMA† model get on the Urban100 - 4x upscaling dataset
| 28.69 |
STAR Benchmark | VLAP (4 frames) | ViLA: Efficient Video-Language Alignment for Video Question Answering | 2023-12-13T00:00:00 | https://arxiv.org/abs/2312.08367v4 | [
"https://github.com/xijun-cs/vila"
] | In the paper 'ViLA: Efficient Video-Language Alignment for Video Question Answering', what Average Accuracy score did the VLAP (4 frames) model get on the STAR Benchmark dataset
| 67.1 |
ScanNet | OA-CNNs | OA-CNNs: Omni-Adaptive Sparse CNNs for 3D Semantic Segmentation | 2024-03-21T00:00:00 | https://arxiv.org/abs/2403.14418v1 | [
"https://github.com/Pointcept/Pointcept"
] | In the paper 'OA-CNNs: Omni-Adaptive Sparse CNNs for 3D Semantic Segmentation', what test mIoU score did the OA-CNNs model get on the ScanNet dataset
| 75.6 |
Wiki-CS | HH-GCN | Half-Hop: A graph upsampling approach for slowing down message passing | 2023-08-17T00:00:00 | https://arxiv.org/abs/2308.09198v1 | [
"https://github.com/nerdslab/halfhop"
] | In the paper 'Half-Hop: A graph upsampling approach for slowing down message passing', what Accuracy score did the HH-GCN model get on the Wiki-CS dataset
| 82.57 |
DTD | ZLaP* | Label Propagation for Zero-shot Classification with Vision-Language Models | 2024-04-05T00:00:00 | https://arxiv.org/abs/2404.04072v1 | [
"https://github.com/vladan-stojnic/zlap"
] | In the paper 'Label Propagation for Zero-shot Classification with Vision-Language Models', what Accuracy score did the ZLaP* model get on the DTD dataset
| 51 |
MSR-VTT-1kA | DMAE (ViT-B/16) | Dual-Modal Attention-Enhanced Text-Video Retrieval with Triplet Partial Margin Contrastive Learning | 2023-09-20T00:00:00 | https://arxiv.org/abs/2309.11082v3 | [
"https://github.com/alipay/Ant-Multi-Modal-Framework"
] | In the paper 'Dual-Modal Attention-Enhanced Text-Video Retrieval with Triplet Partial Margin Contrastive Learning', what text-to-video Mean Rank score did the DMAE (ViT-B/16) model get on the MSR-VTT-1kA dataset
| 10.0 |
VietMed | Hybrid 4-gram VietMed-Train | VietMed: A Dataset and Benchmark for Automatic Speech Recognition of Vietnamese in the Medical Domain | 2024-04-08T00:00:00 | https://arxiv.org/abs/2404.05659v2 | [
"https://github.com/leduckhai/multimed"
] | In the paper 'VietMed: A Dataset and Benchmark for Automatic Speech Recognition of Vietnamese in the Medical Domain', what PPL score did the Hybrid 4-gram VietMed-Train model get on the VietMed dataset
| 210 |
Vinoground | VTimeLLM | VTimeLLM: Empower LLM to Grasp Video Moments | 2023-11-30T00:00:00 | https://arxiv.org/abs/2311.18445v1 | [
"https://github.com/huangb23/vtimellm"
] | In the paper 'VTimeLLM: Empower LLM to Grasp Video Moments', what Text Score score did the VTimeLLM model get on the Vinoground dataset
| 19.4 |
Fashion-MNIST | Tsetlin Machine Composites | TMComposites: Plug-and-Play Collaboration Between Specialized Tsetlin Machines | 2023-09-09T00:00:00 | https://arxiv.org/abs/2309.04801v2 | [
"https://github.com/cair/plug-and-play-collaboration-between-specialized-tsetlin-machines"
] | In the paper 'TMComposites: Plug-and-Play Collaboration Between Specialized Tsetlin Machines', what Accuracy score did the Tsetlin Machine Composites model get on the Fashion-MNIST dataset
| 93.0 |
YouTube-VIS 2021 | DVIS++(VIT-L, Offline) | DVIS++: Improved Decoupled Framework for Universal Video Segmentation | 2023-12-20T00:00:00 | https://arxiv.org/abs/2312.13305v1 | [
"https://github.com/zhang-tao-whu/DVIS_Plus"
] | In the paper 'DVIS++: Improved Decoupled Framework for Universal Video Segmentation', what mask AP score did the DVIS++(VIT-L, Offline) model get on the YouTube-VIS 2021 dataset
| 63.9 |
QVHighlights | BAM-DETR (w/ audio) | BAM-DETR: Boundary-Aligned Moment Detection Transformer for Temporal Sentence Grounding in Videos | 2023-11-30T00:00:00 | https://arxiv.org/abs/2312.00083v2 | [
"https://github.com/Pilhyeon/BAM-DETR"
] | In the paper 'BAM-DETR: Boundary-Aligned Moment Detection Transformer for Temporal Sentence Grounding in Videos', what mAP score did the BAM-DETR (w/ audio) model get on the QVHighlights dataset
| 46.91 |
AlpacaEval | Yi 34B Chat | Yi: Open Foundation Models by 01.AI | 2024-03-07T00:00:00 | https://arxiv.org/abs/2403.04652v1 | [
"https://github.com/01-ai/yi"
] | In the paper 'Yi: Open Foundation Models by 01.AI', what Average win rate score did the Yi 34B Chat model get on the AlpacaEval dataset
| 27.2 |
Stanford Cars | SaSPA + CAL | Advancing Fine-Grained Classification by Structure and Subject Preserving Augmentation | 2024-06-20T00:00:00 | https://arxiv.org/abs/2406.14551v2 | [
"https://github.com/eyalmichaeli/saspa-aug"
] | In the paper 'Advancing Fine-Grained Classification by Structure and Subject Preserving Augmentation', what 8-shot Accuracy score did the SaSPA + CAL model get on the Stanford Cars dataset
| 82.6 |
Panoptic SYNTHIA-to-Mapillary | MC-PanDA | MC-PanDA: Mask Confidence for Panoptic Domain Adaptation | 2024-07-19T00:00:00 | https://arxiv.org/abs/2407.14110v1 | [
"https://github.com/helen1c/mc-panda"
] | In the paper 'MC-PanDA: Mask Confidence for Panoptic Domain Adaptation', what mPQ score did the MC-PanDA model get on the Panoptic SYNTHIA-to-Mapillary dataset
| 38.7 |
DVS128 Gesture | Event-SSM | Scalable Event-by-event Processing of Neuromorphic Sensory Signals With Deep State-Space Models | 2024-04-29T00:00:00 | https://arxiv.org/abs/2404.18508v3 | [
"https://github.com/Efficient-Scalable-Machine-Learning/event-ssm"
] | In the paper 'Scalable Event-by-event Processing of Neuromorphic Sensory Signals With Deep State-Space Models', what Accuracy (%) score did the Event-SSM model get on the DVS128 Gesture dataset
| 97.7 |
Fashion-MNIST | Wilson-Cowan model RNN | Learning in Wilson-Cowan model for metapopulation | 2024-06-24T00:00:00 | https://arxiv.org/abs/2406.16453v2 | [
"https://github.com/raffaelemarino/learning_in_wilsoncowan"
] | In the paper 'Learning in Wilson-Cowan model for metapopulation', what Accuracy score did the Wilson-Cowan model RNN model get on the Fashion-MNIST dataset
| 88.39 |
COCO-20i -> Pascal VOC (1-shot) | MSDNet (ResNet-50) | MSDNet: Multi-Scale Decoder for Few-Shot Semantic Segmentation via Transformer-Guided Prototyping | 2024-09-17T00:00:00 | https://arxiv.org/abs/2409.11316v1 | [
"https://github.com/amirrezafateh/msdnet"
] | In the paper 'MSDNet: Multi-Scale Decoder for Few-Shot Semantic Segmentation via Transformer-Guided Prototyping', what Mean IoU score did the MSDNet (ResNet-50) model get on the COCO-20i -> Pascal VOC (1-shot) dataset
| 72.1 |
Squirrel (60%/20%/20% random splits) | HH-GAT | Half-Hop: A graph upsampling approach for slowing down message passing | 2023-08-17T00:00:00 | https://arxiv.org/abs/2308.09198v1 | [
"https://github.com/nerdslab/halfhop"
] | In the paper 'Half-Hop: A graph upsampling approach for slowing down message passing', what 1:1 Accuracy score did the HH-GAT model get on the Squirrel (60%/20%/20% random splits) dataset
| 46.35 ± 1.86 |
AudioCaps | TANGO-AF&AC-FT-AC | Improving Text-To-Audio Models with Synthetic Captions | 2024-06-18T00:00:00 | https://arxiv.org/abs/2406.15487v2 | [
"https://github.com/declare-lab/tango"
] | In the paper 'Improving Text-To-Audio Models with Synthetic Captions', what FD score did the TANGO-AF&AC-FT-AC model get on the AudioCaps dataset
| 17.19 |
SNU-FILM (hard) | VFIMamba | VFIMamba: Video Frame Interpolation with State Space Models | 2024-07-02T00:00:00 | https://arxiv.org/abs/2407.02315v2 | [
"https://github.com/mcg-nju/vfimamba"
] | In the paper 'VFIMamba: Video Frame Interpolation with State Space Models', what PSNR score did the VFIMamba model get on the SNU-FILM (hard) dataset
| 30.99 |
VideoInstruct | Video Chat | VideoChat: Chat-Centric Video Understanding | 2023-05-10T00:00:00 | https://arxiv.org/abs/2305.06355v2 | [
"https://github.com/opengvlab/ask-anything"
] | In the paper 'VideoChat: Chat-Centric Video Understanding', what Correctness of Information score did the Video Chat model get on the VideoInstruct dataset
| 2.23 |
COCO test-dev | LeYOLO-Nano@320 | LeYOLO, New Scalable and Efficient CNN Architecture for Object Detection | 2024-06-20T00:00:00 | https://arxiv.org/abs/2406.14239v1 | [
"https://github.com/LilianHollard/LeYOLO"
] | In the paper 'LeYOLO, New Scalable and Efficient CNN Architecture for Object Detection', what box mAP score did the LeYOLO-Nano@320 model get on the COCO test-dev dataset
| 25.2 |
iSAID | AerialFormer-S | AerialFormer: Multi-resolution Transformer for Aerial Image Segmentation | 2023-06-12T00:00:00 | https://arxiv.org/abs/2306.06842v2 | [
"https://github.com/UARK-AICV/AerialFormer"
] | In the paper 'AerialFormer: Multi-resolution Transformer for Aerial Image Segmentation', what mIoU score did the AerialFormer-S model get on the iSAID dataset
| 68.4 |
ImageNet | ViT-B @224 (DeiT-III + AugSub) | Masking Augmentation for Supervised Learning | 2023-06-20T00:00:00 | https://arxiv.org/abs/2306.11339v2 | [
"https://github.com/naver-ai/augsub"
] | In the paper 'Masking Augmentation for Supervised Learning', what Top 1 Accuracy score did the ViT-B @224 (DeiT-III + AugSub) model get on the ImageNet dataset
| 84.2% |