dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
Tokyo247 | SelaVPR | Towards Seamless Adaptation of Pre-trained Models for Visual Place Recognition | 2024-02-22T00:00:00 | https://arxiv.org/abs/2402.14505v3 | ["https://github.com/Lu-Feng/SelaVPR"] | In the paper 'Towards Seamless Adaptation of Pre-trained Models for Visual Place Recognition', what Recall@1 score did the SelaVPR model get on the Tokyo247 dataset | 94.0 |
CropHarvest - Togo | Ensemble aggregation with GRU | In the Search for Optimal Multi-view Learning Models for Crop Classification with Global Remote Sensing Data | 2024-03-25T00:00:00 | https://arxiv.org/abs/2403.16582v2 | ["https://github.com/fmenat/optimal-multiview-crop-classifier"] | In the paper 'In the Search for Optimal Multi-view Learning Models for Crop Classification with Global Remote Sensing Data', what Average Accuracy score did the Ensemble aggregation with GRU model get on the CropHarvest - Togo dataset | 0.842 |
GSM8K | GPT-4 (Teaching-Inspired) | Teaching-Inspired Integrated Prompting Framework: A Novel Approach for Enhancing Reasoning in Large Language Models | 2024-10-10T00:00:00 | https://arxiv.org/abs/2410.08068v1 | ["https://github.com/sallytan13/teaching-inspired-prompting"] | In the paper 'Teaching-Inspired Integrated Prompting Framework: A Novel Approach for Enhancing Reasoning in Large Language Models', what Accuracy score did the GPT-4 (Teaching-Inspired) model get on the GSM8K dataset | 94.8 |
CMU-MOSEI | ConCluGen | Multi-Task Multi-Modal Self-Supervised Learning for Facial Expression Recognition | 2024-04-16T00:00:00 | https://arxiv.org/abs/2404.10904v2 | ["https://github.com/tub-cv-group/conclugen"] | In the paper 'Multi-Task Multi-Modal Self-Supervised Learning for Facial Expression Recognition', what Weighted Accuracy score did the ConCluGen model get on the CMU-MOSEI dataset | 66.48 |
UCF101 | ProMetaR | Prompt Learning via Meta-Regularization | 2024-04-01T00:00:00 | https://arxiv.org/abs/2404.00851v1 | ["https://github.com/mlvlab/prometar"] | In the paper 'Prompt Learning via Meta-Regularization', what Harmonic mean score did the ProMetaR model get on the UCF101 dataset | 83.25 |
STL-10 | ResNet18 | Guarding Barlow Twins Against Overfitting with Mixed Samples | 2023-12-04T00:00:00 | https://arxiv.org/abs/2312.02151v1 | ["https://github.com/wgcban/mix-bt"] | In the paper 'Guarding Barlow Twins Against Overfitting with Mixed Samples', what Accuracy score did the ResNet18 model get on the STL-10 dataset | 91.02 |
Abt-Buy | Meta-Llama-3.1-70B-Instruct | Fine-tuning Large Language Models for Entity Matching | 2024-09-12T00:00:00 | https://arxiv.org/abs/2409.08185v1 | ["https://github.com/wbsg-uni-mannheim/tailormatch"] | In the paper 'Fine-tuning Large Language Models for Entity Matching', what F1 (%) score did the Meta-Llama-3.1-70B-Instruct model get on the Abt-Buy dataset | 79.12 |
CIFAR-100 (10000 Labels, ImageNet-100 Unlabeled) | UnMixMatch | Scaling Up Semi-supervised Learning with Unconstrained Unlabelled Data | 2023-06-02T00:00:00 | https://arxiv.org/abs/2306.01222v2 | ["https://github.com/shuvenduroy/unmixmatch"] | In the paper 'Scaling Up Semi-supervised Learning with Unconstrained Unlabelled Data', what Accuracy score did the UnMixMatch model get on the CIFAR-100 (10000 Labels, ImageNet-100 Unlabeled) dataset | 71.73 |
LaSOT | MITS | Integrating Boxes and Masks: A Multi-Object Framework for Unified Visual Tracking and Segmentation | 2023-08-25T00:00:00 | https://arxiv.org/abs/2308.13266v3 | ["https://github.com/yoxu515/mits"] | In the paper 'Integrating Boxes and Masks: A Multi-Object Framework for Unified Visual Tracking and Segmentation', what AUC score did the MITS model get on the LaSOT dataset | 72.0 |
MATH | ToRA 70B (w/ code, SC, k=50) | ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving | 2023-09-29T00:00:00 | https://arxiv.org/abs/2309.17452v4 | ["https://github.com/microsoft/tora"] | In the paper 'ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving', what Accuracy score did the ToRA 70B (w/ code, SC, k=50) model get on the MATH dataset | 56.9 |
COPA | PaLM 2-S (1-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-S (1-shot) model get on the COPA dataset | 89.0 |
LLVIP | MiPa | MiPa: Mixed Patch Infrared-Visible Modality Agnostic Object Detection | 2024-04-29T00:00:00 | https://arxiv.org/abs/2404.18849v2 | ["https://github.com/heitorrapela/mipa"] | In the paper 'MiPa: Mixed Patch Infrared-Visible Modality Agnostic Object Detection', what AP score did the MiPa model get on the LLVIP dataset | 0.665 |
MTL-AQA | RICA^2 (Deterministic) | RICA2: Rubric-Informed, Calibrated Assessment of Actions | 2024-08-04T00:00:00 | https://arxiv.org/abs/2408.02138v2 | ["https://github.com/abrarmajeedi/rica2_aqa"] | In the paper 'RICA2: Rubric-Informed, Calibrated Assessment of Actions', what Spearman Correlation score did the RICA^2 (Deterministic) model get on the MTL-AQA dataset | 96.20 |
Food-101 | ZLaP | Label Propagation for Zero-shot Classification with Vision-Language Models | 2024-04-05T00:00:00 | https://arxiv.org/abs/2404.04072v1 | ["https://github.com/vladan-stojnic/zlap"] | In the paper 'Label Propagation for Zero-shot Classification with Vision-Language Models', what Accuracy score did the ZLaP model get on the Food-101 dataset | 87.8 |
PPI | GAT + PGN | The Split Matters: Flat Minima Methods for Improving the Performance of GNNs | 2023-06-15T00:00:00 | https://arxiv.org/abs/2306.09121v1 | ["https://github.com/foisunt/fmms-in-gnns"] | In the paper 'The Split Matters: Flat Minima Methods for Improving the Performance of GNNs', what F1 score did the GAT + PGN model get on the PPI dataset | 99.34 ± 0.02% |
DanceTrack | IMM-JHSE | One Homography is All You Need: IMM-based Joint Homography and Multiple Object State Estimation | 2024-09-04T00:00:00 | https://arxiv.org/abs/2409.02562v2 | ["https://github.com/Paulkie99/imm-jhse"] | In the paper 'One Homography is All You Need: IMM-based Joint Homography and Multiple Object State Estimation', what HOTA score did the IMM-JHSE model get on the DanceTrack dataset | 66.24 |
MedSecId | GPT-4 | LLM-Based Section Identifiers Excel on Open Source but Stumble in Real World Applications | 2024-04-25T00:00:00 | https://arxiv.org/abs/2404.16294v1 | ["https://github.com/inqbator-evicore/llm_section_identifiers"] | In the paper 'LLM-Based Section Identifiers Excel on Open Source but Stumble in Real World Applications', what 1 shot Micro-F1 score did the GPT-4 model get on the MedSecId dataset | 96.86 |
WSC | OPT-1.3B | Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization | 2024-05-24T00:00:00 | https://arxiv.org/abs/2405.15861v3 | ["https://github.com/ZidongLiu/DeComFL"] | In the paper 'Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization', what Test Accuracy score did the OPT-1.3B model get on the WSC dataset | 64.16% |
PeMSD3 | STD-MAE | Spatial-Temporal-Decoupled Masked Pre-training for Spatiotemporal Forecasting | 2023-12-01T00:00:00 | https://arxiv.org/abs/2312.00516v3 | ["https://github.com/jimmy-7664/std-mae"] | In the paper 'Spatial-Temporal-Decoupled Masked Pre-training for Spatiotemporal Forecasting', what 12 steps MAE score did the STD-MAE model get on the PeMSD3 dataset | 13.80 |
FLIR | MMPedestron | When Pedestrian Detection Meets Multi-Modal Learning: Generalist Model and Benchmark Dataset | 2024-07-14T00:00:00 | https://arxiv.org/abs/2407.10125v1 | ["https://github.com/BubblyYi/MMPedestron"] | In the paper 'When Pedestrian Detection Meets Multi-Modal Learning: Generalist Model and Benchmark Dataset', what mAP50 score did the MMPedestron model get on the FLIR dataset | 86.4 |
MM-Vet | LLaVA-1.5-13B (+CSR) | Calibrated Self-Rewarding Vision Language Models | 2024-05-23T00:00:00 | https://arxiv.org/abs/2405.14622v4 | ["https://github.com/yiyangzhou/csr"] | In the paper 'Calibrated Self-Rewarding Vision Language Models', what GPT-4 score score did the LLaVA-1.5-13B (+CSR) model get on the MM-Vet dataset | 37.8 |
DanceTrack | MOTIP (DAB-Deformable DETR) | Multiple Object Tracking as ID Prediction | 2024-03-25T00:00:00 | https://arxiv.org/abs/2403.16848v1 | ["https://github.com/MCG-NJU/MOTIP"] | In the paper 'Multiple Object Tracking as ID Prediction', what HOTA score did the MOTIP (DAB-Deformable DETR) model get on the DanceTrack dataset | 70.0 |
OntoNotes | caw-coref + RoBERTa | CAW-coref: Conjunction-Aware Word-level Coreference Resolution | 2023-10-09T00:00:00 | https://arxiv.org/abs/2310.06165v2 | ["https://github.com/kareldo/wl-coref"] | In the paper 'CAW-coref: Conjunction-Aware Word-level Coreference Resolution', what F1 score did the caw-coref + RoBERTa model get on the OntoNotes dataset | 81.6 |
IBims-1 | Metric3Dv2(g2, ZS) | Metric3Dv2: A Versatile Monocular Geometric Foundation Model for Zero-shot Metric Depth and Surface Normal Estimation | 2024-03-22T00:00:00 | https://arxiv.org/abs/2404.15506v3 | ["https://github.com/yvanyin/metric3d"] | In the paper 'Metric3Dv2: A Versatile Monocular Geometric Foundation Model for Zero-shot Metric Depth and Surface Normal Estimation', what % < 11.25 score did the Metric3Dv2(g2, ZS) model get on the IBims-1 dataset | 69.7 |
ADE20K-847 | PosSAM | PosSAM: Panoptic Open-vocabulary Segment Anything | 2024-03-14T00:00:00 | https://arxiv.org/abs/2403.09620v1 | ["https://github.com/Vibashan/PosSAM"] | In the paper 'PosSAM: Panoptic Open-vocabulary Segment Anything', what mIoU score did the PosSAM model get on the ADE20K-847 dataset | 14.9 |
SECOND | ChangeMamba | ChangeMamba: Remote Sensing Change Detection With Spatiotemporal State Space Model | 2024-04-04T00:00:00 | https://arxiv.org/abs/2404.03425v6 | ["https://github.com/chenhongruixuan/mambacd"] | In the paper 'ChangeMamba: Remote Sensing Change Detection With Spatiotemporal State Space Model', what SeK score did the ChangeMamba model get on the SECOND dataset | 24.11 |
ModelNet40 | Mamba3D + Point-MAE | Mamba3D: Enhancing Local Features for 3D Point Cloud Analysis via State Space Model | 2024-04-23T00:00:00 | https://arxiv.org/abs/2404.14966v2 | ["https://github.com/xhanxu/Mamba3D"] | In the paper 'Mamba3D: Enhancing Local Features for 3D Point Cloud Analysis via State Space Model', what Overall Accuracy score did the Mamba3D + Point-MAE model get on the ModelNet40 dataset | 95.1 |
The Pile | Phi-3 14B | Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs | 2024-10-10T00:00:00 | https://arxiv.org/abs/2410.08020v2 | ["https://github.com/jonhue/activeft"] | In the paper 'Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs', what Bits per byte score did the Phi-3 14B model get on The Pile dataset | 0.651 |
VoxCeleb1 | ReDimNet-B6-SF2-LM-ASNorm (15.0M) | Reshape Dimensions Network for Speaker Recognition | 2024-07-25T00:00:00 | https://arxiv.org/abs/2407.18223v2 | ["https://github.com/IDRnD/ReDimNet"] | In the paper 'Reshape Dimensions Network for Speaker Recognition', what EER score did the ReDimNet-B6-SF2-LM-ASNorm (15.0M) model get on the VoxCeleb1 dataset | 0.37 |
Peptides-func | GRED | Recurrent Distance Filtering for Graph Representation Learning | 2023-12-03T00:00:00 | https://arxiv.org/abs/2312.01538v3 | ["https://github.com/skeletondyh/gred"] | In the paper 'Recurrent Distance Filtering for Graph Representation Learning', what AP score did the GRED model get on the Peptides-func dataset | 0.7085±0.0027 |
CTB5 | Hashing + Bert | To be Continuous, or to be Discrete, Those are Bits of Questions | 2024-06-12T00:00:00 | https://arxiv.org/abs/2406.07812v1 | ["https://github.com/speedcell4/parserker"] | In the paper 'To be Continuous, or to be Discrete, Those are Bits of Questions', what F1 score score did the Hashing + Bert model get on the CTB5 dataset | 92.33 |
CocoGlide | Early Fusion | MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization | 2023-12-04T00:00:00 | https://arxiv.org/abs/2312.01790v2 | ["https://github.com/idt-iti/mmfusion-iml"] | In the paper 'MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization', what AUC score did the Early Fusion model get on the CocoGlide dataset | 0.755 |
TruthfulQA | LLaMa-2-7B-Chat + TruthX | TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space | 2024-02-27T00:00:00 | https://arxiv.org/abs/2402.17811v2 | ["https://github.com/ictnlp/truthx"] | In the paper 'TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space', what MC1 score did the LLaMa-2-7B-Chat + TruthX model get on the TruthfulQA dataset | 0.54 |
Pittsburgh-250k-test | BoQ | BoQ: A Place is Worth a Bag of Learnable Queries | 2024-05-12T00:00:00 | https://arxiv.org/abs/2405.07364v3 | ["https://github.com/amaralibey/bag-of-queries"] | In the paper 'BoQ: A Place is Worth a Bag of Learnable Queries', what Recall@1 score did the BoQ model get on the Pittsburgh-250k-test dataset | 96.6 |
BSD100 - 2x upscaling | DRCT-L | DRCT: Saving Image Super-resolution away from Information Bottleneck | 2024-03-31T00:00:00 | https://arxiv.org/abs/2404.00722v5 | ["https://github.com/ming053l/drct"] | In the paper 'DRCT: Saving Image Super-resolution away from Information Bottleneck', what PSNR score did the DRCT-L model get on the BSD100 - 2x upscaling dataset | 32.90 |
Replica | OVIR-3D | OVIR-3D: Open-Vocabulary 3D Instance Retrieval Without Training on 3D Data | 2023-11-06T00:00:00 | https://arxiv.org/abs/2311.02873v1 | ["https://github.com/shiyoung77/ovir-3d"] | In the paper 'OVIR-3D: Open-Vocabulary 3D Instance Retrieval Without Training on 3D Data', what mAP score did the OVIR-3D model get on the Replica dataset | 11.1 |
Electricity (192) | PRformer | PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting | 2024-08-20T00:00:00 | https://arxiv.org/abs/2408.10483v1 | ["https://github.com/usualheart/prformer"] | In the paper 'PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting', what MSE score did the PRformer model get on the Electricity (192) dataset | 0.148 |
ImageNet 512x512 | MAR-L, Diff Loss | Autoregressive Image Generation without Vector Quantization | 2024-06-17T00:00:00 | https://arxiv.org/abs/2406.11838v3 | ["https://github.com/lth14/mar"] | In the paper 'Autoregressive Image Generation without Vector Quantization', what FID score did the MAR-L, Diff Loss model get on the ImageNet 512x512 dataset | 1.73 |
WNUT 2017 | RoBERTa-BiLSTM-context | Supplementary Features of BiLSTM for Enhanced Sequence Labeling | 2023-05-31T00:00:00 | https://arxiv.org/abs/2305.19928v4 | ["https://github.com/conglei2xu/global-context-mechanism"] | In the paper 'Supplementary Features of BiLSTM for Enhanced Sequence Labeling', what F1 score did the RoBERTa-BiLSTM-context model get on the WNUT 2017 dataset | 59.20 |
OOD-CV | UGT | A Bayesian Approach to OOD Robustness in Image Classification | 2024-03-12T00:00:00 | https://arxiv.org/abs/2403.07277v1 | ["https://github.com/toshi2k2/CompnetDA"] | In the paper 'A Bayesian Approach to OOD Robustness in Image Classification', what Accuracy (Top-1) score did the UGT model get on the OOD-CV dataset | 85 |
Groove | Beat This! | Beat this! Accurate beat tracking without DBN postprocessing | 2024-07-31T00:00:00 | https://arxiv.org/abs/2407.21658v1 | ["https://github.com/CPJKU/beat_this"] | In the paper 'Beat this! Accurate beat tracking without DBN postprocessing', what F1 score did the Beat This! model get on the Groove dataset | 93.7 |
MSVD | DMAE (ViT-B/32) | Dual-Modal Attention-Enhanced Text-Video Retrieval with Triplet Partial Margin Contrastive Learning | 2023-09-20T00:00:00 | https://arxiv.org/abs/2309.11082v3 | ["https://github.com/alipay/Ant-Multi-Modal-Framework"] | In the paper 'Dual-Modal Attention-Enhanced Text-Video Retrieval with Triplet Partial Margin Contrastive Learning', what text-to-video R@1 score did the DMAE (ViT-B/32) model get on the MSVD dataset | 48.7 |
AudioCaps | Auffusion | Auffusion: Leveraging the Power of Diffusion and Large Language Models for Text-to-Audio Generation | 2024-01-02T00:00:00 | https://arxiv.org/abs/2401.01044v1 | ["https://github.com/happylittlecat2333/Auffusion"] | In the paper 'Auffusion: Leveraging the Power of Diffusion and Large Language Models for Text-to-Audio Generation', what FAD score did the Auffusion model get on the AudioCaps dataset | 1.63 |
THUMOS14 | MSQNet | Actor-agnostic Multi-label Action Recognition with Multi-modal Query | 2023-07-20T00:00:00 | https://arxiv.org/abs/2307.10763v3 | ["https://github.com/mondalanindya/msqnet"] | In the paper 'Actor-agnostic Multi-label Action Recognition with Multi-modal Query', what Accuracy score did the MSQNet model get on the THUMOS14 dataset | 83.16 |
OK-VQA | FLMR | Fine-grained Late-interaction Multi-modal Retrieval for Retrieval Augmented Visual Question Answering | 2023-09-29T00:00:00 | https://arxiv.org/abs/2309.17133v2 | ["https://github.com/linweizhedragon/retrieval-augmented-visual-question-answering"] | In the paper 'Fine-grained Late-interaction Multi-modal Retrieval for Retrieval Augmented Visual Question Answering', what Recall@5 score did the FLMR model get on the OK-VQA dataset | 89.32 |
InvertedDoublePendulum-v2 | TLA | Optimizing Attention and Cognitive Control Costs Using Temporally-Layered Architectures | 2023-05-30T00:00:00 | https://arxiv.org/abs/2305.18701v3 | ["https://github.com/dee0512/Temporally-Layered-Architecture"] | In the paper 'Optimizing Attention and Cognitive Control Costs Using Temporally-Layered Architectures', what Mean Reward score did the TLA model get on the InvertedDoublePendulum-v2 dataset | 9356.67 |
COCO-20i (5-shot) | MSDNet (ResNet-50) | MSDNet: Multi-Scale Decoder for Few-Shot Semantic Segmentation via Transformer-Guided Prototyping | 2024-09-17T00:00:00 | https://arxiv.org/abs/2409.11316v1 | ["https://github.com/amirrezafateh/msdnet"] | In the paper 'MSDNet: Multi-Scale Decoder for Few-Shot Semantic Segmentation via Transformer-Guided Prototyping', what Mean IoU score did the MSDNet (ResNet-50) model get on the COCO-20i (5-shot) dataset | 54.5 |
DomainNet | SPG (CLIP, ViT-B/16) | Soft Prompt Generation for Domain Generalization | 2024-04-30T00:00:00 | https://arxiv.org/abs/2404.19286v2 | ["https://github.com/renytek13/soft-prompt-generation-with-cgan"] | In the paper 'Soft Prompt Generation for Domain Generalization', what Average Accuracy score did the SPG (CLIP, ViT-B/16) model get on the DomainNet dataset | 60.1 |
AgeDB | ResNet-50-Unimodal-Concentrated | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10T00:00:00 | https://arxiv.org/abs/2307.04570v3 | ["https://github.com/paplhjak/facial-age-estimation-benchmark"] | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-Unimodal-Concentrated model get on the AgeDB dataset | 5.90 |
S3DIS Area5 | Superpoint Transformer | Efficient 3D Semantic Segmentation with Superpoint Transformer | 2023-06-13T00:00:00 | https://arxiv.org/abs/2306.08045v2 | ["https://github.com/drprojects/superpoint_transformer"] | In the paper 'Efficient 3D Semantic Segmentation with Superpoint Transformer', what mIoU score did the Superpoint Transformer model get on the S3DIS Area5 dataset | 68.9 |
MRR-Benchmark | Monkey-Chat-7B | Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models | 2023-11-11T00:00:00 | https://arxiv.org/abs/2311.06607v4 | ["https://github.com/yuliang-liu/monkey"] | In the paper 'Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models', what Total Column Score score did the Monkey-Chat-7B model get on the MRR-Benchmark dataset | 214 |
MM-Vet | VW-LMM | Multi-modal Auto-regressive Modeling via Visual Words | 2024-03-12T00:00:00 | https://arxiv.org/abs/2403.07720v2 | ["https://github.com/pengts/vw-lmm"] | In the paper 'Multi-modal Auto-regressive Modeling via Visual Words', what GPT-4 score score did the VW-LMM model get on the MM-Vet dataset | 44.0 |
MM-Vet | LayoutLMv3+ConvNeXt+CLIP | MouSi: Poly-Visual-Expert Vision-Language Models | 2024-01-30T00:00:00 | https://arxiv.org/abs/2401.17221v1 | ["https://github.com/fudannlplab/mousi"] | In the paper 'MouSi: Poly-Visual-Expert Vision-Language Models', what GPT-4 score score did the LayoutLMv3+ConvNeXt+CLIP model get on the MM-Vet dataset | 38.4 |
MATH | OpenMath-CodeLlama-7B (w/ code, SC, k=50) | OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | 2024-02-15T00:00:00 | https://arxiv.org/abs/2402.10176v2 | ["https://github.com/kipok/nemo-skills"] | In the paper 'OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset', what Accuracy score did the OpenMath-CodeLlama-7B (w/ code, SC, k=50) model get on the MATH dataset | 55.6 |
MM-Vet | LLaVA-OneVision-7B | LLaVA-OneVision: Easy Visual Task Transfer | 2024-08-06T00:00:00 | https://arxiv.org/abs/2408.03326v3 | ["https://github.com/evolvinglmms-lab/lmms-eval"] | In the paper 'LLaVA-OneVision: Easy Visual Task Transfer', what GPT-4 score score did the LLaVA-OneVision-7B model get on the MM-Vet dataset | 57.5 |
AFHQ Cat | DDMI | DDMI: Domain-Agnostic Latent Diffusion Models for Synthesizing High-Quality Implicit Neural Representations | 2024-01-23T00:00:00 | https://arxiv.org/abs/2401.12517v2 | ["https://github.com/mlvlab/DDMI"] | In the paper 'DDMI: Domain-Agnostic Latent Diffusion Models for Synthesizing High-Quality Implicit Neural Representations', what FID score did the DDMI model get on the AFHQ Cat dataset | 4.27 |
ELD SonyA7S2 x200 | LRD | Towards General Low-Light Raw Noise Synthesis and Modeling | 2023-07-31T00:00:00 | https://arxiv.org/abs/2307.16508v2 | ["https://github.com/fengzhang427/LRD"] | In the paper 'Towards General Low-Light Raw Noise Synthesis and Modeling', what PSNR (Raw) score did the LRD model get on the ELD SonyA7S2 x200 dataset | 43.32 |
VisDA-2017 | RCL | Empowering Source-Free Domain Adaptation with MLLM-driven Curriculum Learning | 2024-05-28T00:00:00 | https://arxiv.org/abs/2405.18376v1 | ["https://github.com/Dong-Jie-Chen/RCL"] | In the paper 'Empowering Source-Free Domain Adaptation with MLLM-driven Curriculum Learning', what Accuracy score did the RCL model get on the VisDA-2017 dataset | 93.2 |
LOFAR RFI Detection | Nearest Latent Neighbours | RFI Detection with Spiking Neural Networks | 2023-11-24T00:00:00 | https://arxiv.org/abs/2311.14303v2 | ["https://github.com/pritchardn/snn-nln"] | In the paper 'RFI Detection with Spiking Neural Networks', what AUROC score did the Nearest Latent Neighbours model get on the LOFAR RFI Detection dataset | 0.818 |
SemanticKITTI | TALoS | TALoS: Enhancing Semantic Scene Completion via Test-time Adaptation on the Line of Sight | 2024-10-21T00:00:00 | https://arxiv.org/abs/2410.15674v2 | ["https://github.com/blue-531/talos"] | In the paper 'TALoS: Enhancing Semantic Scene Completion via Test-time Adaptation on the Line of Sight', what mIoU score did the TALoS model get on the SemanticKITTI dataset | 39.29 |
MSR-VTT | Show-1 | Show-1: Marrying Pixel and Latent Diffusion Models for Text-to-Video Generation | 2023-09-27T00:00:00 | https://arxiv.org/abs/2309.15818v2 | ["https://github.com/showlab/show-1"] | In the paper 'Show-1: Marrying Pixel and Latent Diffusion Models for Text-to-Video Generation', what FID score did the Show-1 model get on the MSR-VTT dataset | 13.08 |
SMAC 6h_vs_9z | DDN | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04T00:00:00 | https://arxiv.org/abs/2306.02430v1 | ["https://github.com/j3soon/dfac-extended"] | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the DDN model get on the SMAC 6h_vs_9z dataset | 0.28 |
SUN397 | ZLaP* | Label Propagation for Zero-shot Classification with Vision-Language Models | 2024-04-05T00:00:00 | https://arxiv.org/abs/2404.04072v1 | ["https://github.com/vladan-stojnic/zlap"] | In the paper 'Label Propagation for Zero-shot Classification with Vision-Language Models', what Accuracy score did the ZLaP* model get on the SUN397 dataset | 71.4 |
LoveDA | LOGCAN++ | LOGCAN++: Adaptive Local-global class-aware network for semantic segmentation of remote sensing imagery | 2024-06-24T00:00:00 | https://arxiv.org/abs/2406.16502v2 | ["https://github.com/xwmaxwma/rssegmentation"] | In the paper 'LOGCAN++: Adaptive Local-global class-aware network for semantic segmentation of remote sensing imagery', what Category mIoU score did the LOGCAN++ model get on the LoveDA dataset | 53.35 |
MVBench | SPHINX-Plus | SPHINX-X: Scaling Data and Parameters for a Family of Multi-modal Large Language Models | 2024-02-08T00:00:00 | https://arxiv.org/abs/2402.05935v2 | ["https://github.com/alpha-vllm/llama2-accessory"] | In the paper 'SPHINX-X: Scaling Data and Parameters for a Family of Multi-modal Large Language Models', what Avg. score did the SPHINX-Plus model get on the MVBench dataset | 39.7 |
MSU SR-QA Dataset | Q-Align (IQA) | Q-Align: Teaching LMMs for Visual Scoring via Discrete Text-Defined Levels | 2023-12-28T00:00:00 | https://arxiv.org/abs/2312.17090v1 | ["https://github.com/q-future/q-align"] | In the paper 'Q-Align: Teaching LMMs for Visual Scoring via Discrete Text-Defined Levels', what SROCC score did the Q-Align (IQA) model get on the MSU SR-QA Dataset dataset | 0.75088 |
VNHSGE-Biology | Bing Chat | VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models | 2023-05-20T00:00:00 | https://arxiv.org/abs/2305.12199v1 | ["https://github.com/xdao85/vnhsge"] | In the paper 'VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models', what Accuracy score did the Bing Chat model get on the VNHSGE-Biology dataset | 69 |
MUTAG | GIN + PANDA | PANDA: Expanded Width-Aware Message Passing Beyond Rewiring | 2024-06-06T00:00:00 | https://arxiv.org/abs/2406.03671v2 | ["https://github.com/jeongwhanchoi/panda"] | In the paper 'PANDA: Expanded Width-Aware Message Passing Beyond Rewiring', what Accuracy score did the GIN + PANDA model get on the MUTAG dataset | 88.75% |
Abt-Buy | Meta-Llama-3.1-8B-Instruct_fine_tuned | Fine-tuning Large Language Models for Entity Matching | 2024-09-12T00:00:00 | https://arxiv.org/abs/2409.08185v1 | ["https://github.com/wbsg-uni-mannheim/tailormatch"] | In the paper 'Fine-tuning Large Language Models for Entity Matching', what F1 (%) score did the Meta-Llama-3.1-8B-Instruct_fine_tuned model get on the Abt-Buy dataset | 87.34 |
MSRVTT-QA | MA-LMM | MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding | 2024-04-08T00:00:00 | https://arxiv.org/abs/2404.05726v2 | ["https://github.com/boheumd/MA-LMM"] | In the paper 'MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding', what Accuracy score did the MA-LMM model get on the MSRVTT-QA dataset | 48.5 |
StyleBench | StyleID | Style Injection in Diffusion: A Training-free Approach for Adapting Large-scale Diffusion Models for Style Transfer | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.09008v2 | ["https://github.com/jiwoogit/StyleID"] | In the paper 'Style Injection in Diffusion: A Training-free Approach for Adapting Large-scale Diffusion Models for Style Transfer', what CLIP Score score did the StyleID model get on the StyleBench dataset | 0.604 |
CropHarvest - Global | Feature Gated Fusion | Impact Assessment of Missing Data in Model Predictions for Earth Observation Applications | 2024-03-21T00:00:00 | https://arxiv.org/abs/2403.14297v2 | ["https://github.com/fmenat/missingviews-study-eo"] | In the paper 'Impact Assessment of Missing Data in Model Predictions for Earth Observation Applications', what Average Accuracy score did the Feature Gated Fusion model get on the CropHarvest - Global dataset | 0.849 |
ImageNet | CaiT-S + GFSA | Graph Convolutions Enrich the Self-Attention in Transformers! | 2023-12-07T00:00:00 | https://arxiv.org/abs/2312.04234v5 | ["https://github.com/jeongwhanchoi/gfsa"] | In the paper 'Graph Convolutions Enrich the Self-Attention in Transformers!', what Top 1 Accuracy score did the CaiT-S + GFSA model get on the ImageNet dataset | 82.8% |
ELI5 | Fourier Transformer | Fourier Transformer: Fast Long Range Modeling by Removing Sequence Redundancy with FFT Operator | 2023-05-24T00:00:00 | https://arxiv.org/abs/2305.15099v1 | ["https://github.com/lumia-group/fouriertransformer"] | In the paper 'Fourier Transformer: Fast Long Range Modeling by Removing Sequence Redundancy with FFT Operator', what Rouge-L score did the Fourier Transformer model get on the ELI5 dataset | 26.9 |
Dhoroni | BanglaBERT-Dhoroni | Dhoroni: Exploring Bengali Climate Change and Environmental Views with a Multi-Perspective News Dataset and Natural Language Processing | 2024-10-22T00:00:00 | https://arxiv.org/abs/2410.17225v2 | ["https://github.com/ciol-researchlab/Dhoroni"] | In the paper 'Dhoroni: Exploring Bengali Climate Change and Environmental Views with a Multi-Perspective News Dataset and Natural Language Processing', what Accuracy score did the BanglaBERT-Dhoroni model get on the Dhoroni dataset | 0.635 |
Winoground | PaLI (ft SNLI-VE) | What You See is What You Read? Improving Text-Image Alignment Evaluation | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10400v4 | ["https://github.com/yonatanbitton/wysiwyr"] | In the paper 'What You See is What You Read? Improving Text-Image Alignment Evaluation', what Text Score score did the PaLI (ft SNLI-VE) model get on the Winoground dataset | 45.00 |
EQ-Bench | lmsys/vicuna-33b-v1.3 | EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06281v2 | ["https://github.com/eq-bench/eq-bench"] | In the paper 'EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models', what EQ-Bench Score score did the lmsys/vicuna-33b-v1.3 model get on the EQ-Bench dataset | 36.52 |
RSITMD | PE-RSITR (MRS-Adapter) | Parameter-Efficient Transfer Learning for Remote Sensing Image-Text Retrieval | 2023-08-24T00:00:00 | https://arxiv.org/abs/2308.12509v1 | ["https://github.com/ZhanYang-nwpu/PE-RSITR"] | In the paper 'Parameter-Efficient Transfer Learning for Remote Sensing Image-Text Retrieval', what Mean Recall score did the PE-RSITR (MRS-Adapter) model get on the RSITMD dataset | 44.47% |
Long Video Dataset | READMem-MiVOS (sr=10) | READMem: Robust Embedding Association for a Diverse Memory in Unconstrained Video Object Segmentation | 2023-05-22T00:00:00 | https://arxiv.org/abs/2305.12823v2 | ["https://github.com/Vujas-Eteph/READMem"] | In the paper 'READMem: Robust Embedding Association for a Diverse Memory in Unconstrained Video Object Segmentation', what J&F score did the READMem-MiVOS (sr=10) model get on the Long Video Dataset dataset | 86.0 |
Atari 2600 Battle Zone | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07T00:00:00 | https://arxiv.org/abs/2305.04180v3 | ["https://github.com/xinjinghao/color"] | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score score did the ASL DDQN model get on the Atari 2600 Battle Zone dataset | 38986 |
EQ-Bench | OpenAI gpt-3.5-0613 | EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06281v2 | ["https://github.com/eq-bench/eq-bench"] | In the paper 'EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models', what EQ-Bench Score score did the OpenAI gpt-3.5-0613 model get on the EQ-Bench dataset | 49.17 |
SUTD-TrafficQA | Tem-adapter | Tem-adapter: Adapting Image-Text Pretraining for Video Question Answer | 2023-08-16T00:00:00 | https://arxiv.org/abs/2308.08414v1 | ["https://github.com/xliu443/tem-adapter"] | In the paper 'Tem-adapter: Adapting Image-Text Pretraining for Video Question Answer', what 1/4 score did the Tem-adapter model get on the SUTD-TrafficQA dataset | 46.0 |
UMVM-dbp-fr-en | UMAEA (w/o surf & iter) | Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment | 2023-07-30T00:00:00 | https://arxiv.org/abs/2307.16210v2 | [
"https://github.com/zjukg/umaea"
] | In the paper 'Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment', what Hits@1 score did the UMAEA (w/o surf & iter) model get on the UMVM-dbp-fr-en dataset
| 0.818 |
Pubmed | CDNMF | Contrastive Deep Nonnegative Matrix Factorization for Community Detection | 2023-11-04T00:00:00 | https://arxiv.org/abs/2311.02357v2 | [
"https://github.com/6lyc/cdnmf"
] | In the paper 'Contrastive Deep Nonnegative Matrix Factorization for Community Detection', what ACC score did the CDNMF model get on the Pubmed dataset
| 0.6653 |
EconLogicQA | Llama-2-7B-Chat | EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning | 2024-05-13T00:00:00 | https://arxiv.org/abs/2405.07938v2 | [
"https://github.com/yinzhu-quan/lm-evaluation-harness"
] | In the paper 'EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning', what Accuracy score did the Llama-2-7B-Chat model get on the EconLogicQA dataset
| 0.0923 |
MS-COCO (1-shot) | RISF | Re-Scoring Using Image-Language Similarity for Few-Shot Object Detection | 2023-11-01T00:00:00 | https://arxiv.org/abs/2311.00278v1 | [
"https://github.com/INFINIQ-AI1/RISF"
] | In the paper 'Re-Scoring Using Image-Language Similarity for Few-Shot Object Detection', what AP score did the RISF model get on the MS-COCO (1-shot) dataset
| 11.7 |
S2Looking | HANet | HANet: A Hierarchical Attention Network for Change Detection With Bitemporal Very-High-Resolution Remote Sensing Images | 2024-04-14T00:00:00 | https://arxiv.org/abs/2404.09178v1 | [
"https://github.com/chengxihan/hanet-cd"
] | In the paper 'HANet: A Hierarchical Attention Network for Change Detection With Bitemporal Very-High-Resolution Remote Sensing Images', what F1-Score score did the HANet model get on the S2Looking dataset
| 58.54 |
Chameleon | HiGNN | Learn from Heterophily: Heterophilous Information-enhanced Graph Neural Network | 2024-03-26T00:00:00 | https://arxiv.org/abs/2403.17351v2 | [
"https://github.com/zylMozart/HiGNN"
] | In the paper 'Learn from Heterophily: Heterophilous Information-enhanced Graph Neural Network', what Accuracy score did the HiGNN model get on the Chameleon dataset
| 68.86 ± 1.45 |
Potsdam-3 | PriMaPs-EM (DINO ViT-B/8) | Boosting Unsupervised Semantic Segmentation with Principal Mask Proposals | 2024-04-25T00:00:00 | https://arxiv.org/abs/2404.16818v2 | [
"https://github.com/visinf/primaps"
] | In the paper 'Boosting Unsupervised Semantic Segmentation with Principal Mask Proposals', what Accuracy score did the PriMaPs-EM (DINO ViT-B/8) model get on the Potsdam-3 dataset
| 80.5 |
J-HMDB | SOC (Video-Swin-T) | SOC: Semantic-Assisted Object Cluster for Referring Video Object Segmentation | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.17011v1 | [
"https://github.com/RobertLuo1/NeurIPS2023_SOC"
] | In the paper 'SOC: Semantic-Assisted Object Cluster for Referring Video Object Segmentation', what Precision@0.5 score did the SOC (Video-Swin-T) model get on the J-HMDB dataset
| 0.947 |
SIR^2(Wild) | DSRNet | Single Image Reflection Separation via Component Synergy | 2023-08-19T00:00:00 | https://arxiv.org/abs/2308.10027v1 | [
"https://github.com/mingcv/dsrnet"
] | In the paper 'Single Image Reflection Separation via Component Synergy', what PSNR score did the DSRNet model get on the SIR^2(Wild) dataset
| 25.68 |
HarmfulQA | GPT-4 | Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09662v3 | [
"https://github.com/declare-lab/red-instruct"
] | In the paper 'Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment', what ASR score did the GPT-4 model get on the HarmfulQA dataset
| 65.1 |
VoiceBank + DEMAND | DeepFilterNet3 | DeepFilterNet: Perceptually Motivated Real-Time Speech Enhancement | 2023-05-14T00:00:00 | https://arxiv.org/abs/2305.08227v1 | [
"https://github.com/rikorose/deepfilternet"
] | In the paper 'DeepFilterNet: Perceptually Motivated Real-Time Speech Enhancement', what PESQ score did the DeepFilterNet3 model get on the VoiceBank + DEMAND dataset
| 3.17 |
ADE20K | SegViT-v2 (BEiT-v2-Large) | SegViTv2: Exploring Efficient and Continual Semantic Segmentation with Plain Vision Transformers | 2023-06-09T00:00:00 | https://arxiv.org/abs/2306.06289v2 | [
"https://github.com/zbwxp/SegVit"
] | In the paper 'SegViTv2: Exploring Efficient and Continual Semantic Segmentation with Plain Vision Transformers', what Validation mIoU score did the SegViT-v2 (BEiT-v2-Large) model get on the ADE20K dataset
| 58.2 |
WikiCoref | Maverick_mes | Maverick: Efficient and Accurate Coreference Resolution Defying Recent Trends | 2024-07-31T00:00:00 | https://arxiv.org/abs/2407.21489v1 | [
"https://github.com/sapienzanlp/maverick-coref"
] | In the paper 'Maverick: Efficient and Accurate Coreference Resolution Defying Recent Trends', what F1 score did the Maverick_mes model get on the WikiCoref dataset
| 66.8 |
VideoInstruct | PLLaVA-34B | PLLaVA : Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning | 2024-04-25T00:00:00 | https://arxiv.org/abs/2404.16994v2 | [
"https://github.com/magic-research/PLLaVA"
] | In the paper 'PLLaVA : Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning', what gpt-score score did the PLLaVA-34B model get on the VideoInstruct dataset
| 3.60 |
CIFAR-10 | DPAC | Deep Online Probability Aggregation Clustering | 2024-07-07T00:00:00 | https://arxiv.org/abs/2407.05246v2 | [
"https://github.com/aomandechenai/deep-probability-aggregation-clustering"
] | In the paper 'Deep Online Probability Aggregation Clustering', what Accuracy score did the DPAC model get on the CIFAR-10 dataset
| 0.934 |
Quora Question Pairs Dev | BERT + SCH attn | Memory-efficient Stochastic methods for Memory-based Transformers | 2023-11-14T00:00:00 | https://arxiv.org/abs/2311.08123v1 | [
"https://github.com/vishwajit-vishnu/memory-efficient-stochastic-methods-for-memory-based-transformers"
] | In the paper 'Memory-efficient Stochastic methods for Memory-based Transformers', what Val F1 Score score did the BERT + SCH attn model get on the Quora Question Pairs Dev dataset
| 88.436 |
Set14 | ATD | Transcending the Limit of Local Window: Advanced Super-Resolution Transformer with Adaptive Token Dictionary | 2024-01-16T00:00:00 | https://arxiv.org/abs/2401.08209v2 | [
"https://github.com/labshuhanggu/adaptive-token-dictionary"
] | In the paper 'Transcending the Limit of Local Window: Advanced Super-Resolution Transformer with Adaptive Token Dictionary', what PSNR score did the ATD model get on the Set14 dataset
| 29.24 |
Oxford-IIIT Pet Dataset | HPT | Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06323v1 | [
"https://github.com/vill-lab/2024-aaai-hpt"
] | In the paper 'Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models', what Harmonic mean score did the HPT model get on the Oxford-IIIT Pet Dataset dataset
| 96.71 |