| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
| MBPP | phi-1.5-web 1.3B | Textbooks Are All You Need II: phi-1.5 technical report | 2023-09-11 | https://arxiv.org/abs/2309.05463v1 | https://github.com/knowlab/bi-weekly-paper-presentation | In the paper 'Textbooks Are All You Need II: phi-1.5 technical report', what Accuracy score did the phi-1.5-web 1.3B model get on the MBPP dataset? | 43.5 |
| TvSum | CSTA | CSTA: CNN-based Spatiotemporal Attention for Video Summarization | 2024-05-20 | https://arxiv.org/abs/2405.11905v2 | https://github.com/thswodnjs3/CSTA | In the paper 'CSTA: CNN-based Spatiotemporal Attention for Video Summarization', what Kendall's Tau score did the CSTA model get on the TvSum dataset? | 0.194 |
| BEHAVE | CONTHO | Joint Reconstruction of 3D Human and Object via Contact-Based Refinement Transformer | 2024-04-07 | https://arxiv.org/abs/2404.04819v1 | https://github.com/dqj5182/contho_release | In the paper 'Joint Reconstruction of 3D Human and Object via Contact-Based Refinement Transformer', what Chamfer Distance score did the CONTHO model get on the BEHAVE dataset? | 8.42 |
| CROHME 2014 | ICAL | ICAL: Implicit Character-Aided Learning for Enhanced Handwritten Mathematical Expression Recognition | 2024-05-15 | https://arxiv.org/abs/2405.09032v4 | https://github.com/qingzhenduyu/ical | In the paper 'ICAL: Implicit Character-Aided Learning for Enhanced Handwritten Mathematical Expression Recognition', what ExpRate score did the ICAL model get on the CROHME 2014 dataset? | 60.63 |
| Mini-Imagenet 5-way (5-shot) | CAML [Laion-2b] | Context-Aware Meta-Learning | 2023-10-17 | https://arxiv.org/abs/2310.10971v2 | https://github.com/cfifty/CAML | In the paper 'Context-Aware Meta-Learning', what Accuracy score did the CAML [Laion-2b] model get on the Mini-Imagenet 5-way (5-shot) dataset? | 98.6 |
| Cornell (60%/20%/20% random splits) | HH-GAT | Half-Hop: A graph upsampling approach for slowing down message passing | 2023-08-17 | https://arxiv.org/abs/2308.09198v1 | https://github.com/nerdslab/halfhop | In the paper 'Half-Hop: A graph upsampling approach for slowing down message passing', what 1:1 Accuracy score did the HH-GAT model get on the Cornell (60%/20%/20% random splits) dataset? | 72.7 ± 4.26 |
| PerSeg | PerSAM | Personalize Segment Anything Model with One Shot | 2023-05-04 | https://arxiv.org/abs/2305.03048v2 | https://github.com/zrrskywalker/personalize-sam | In the paper 'Personalize Segment Anything Model with One Shot', what mIoU score did the PerSAM model get on the PerSeg dataset? | 89.32 |
| Baidu Mall | CLIP | AnyLoc: Towards Universal Visual Place Recognition | 2023-08-01 | https://arxiv.org/abs/2308.00688v2 | https://github.com/AnyLoc/AnyLoc | In the paper 'AnyLoc: Towards Universal Visual Place Recognition', what Recall@1 score did the CLIP model get on the Baidu Mall dataset? | 56.02 |
| DSO-1 | Late Fusion | MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization | 2023-12-04 | https://arxiv.org/abs/2312.01790v2 | https://github.com/idt-iti/mmfusion-iml | In the paper 'MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization', what Average Pixel F1 (Fixed threshold) score did the Late Fusion model get on the DSO-1 dataset? | 0.899 |
| VideoInstruct | TS-LLaVA-34B | TS-LLaVA: Constructing Visual Tokens through Thumbnail-and-Sampling for Training-Free Video Large Language Models | 2024-11-17 | https://arxiv.org/abs/2411.11066v1 | https://github.com/tingyu215/ts-llava | In the paper 'TS-LLaVA: Constructing Visual Tokens through Thumbnail-and-Sampling for Training-Free Video Large Language Models', what gpt-score score did the TS-LLaVA-34B model get on the VideoInstruct dataset? | 3.55 |
| CCTSDB-AUG | CCSPNet-Joint | CCSPNet-Joint: Efficient Joint Training Method for Traffic Sign Detection Under Extreme Conditions | 2023-09-13 | https://arxiv.org/abs/2309.06902v4 | https://github.com/haoqinhong/ccspnet-joint | In the paper 'CCSPNet-Joint: Efficient Joint Training Method for Traffic Sign Detection Under Extreme Conditions', what Averaged Precision score did the CCSPNet-Joint model get on the CCTSDB-AUG dataset? | 0.951 |
| ColonINST-v1 (Unseen) | ColonGPT (w/ LoRA, w/o extra data) | Frontiers in Intelligent Colonoscopy | 2024-10-22 | https://arxiv.org/abs/2410.17241v1 | https://github.com/ai4colonoscopy/intelliscope | In the paper 'Frontiers in Intelligent Colonoscopy', what Accuracy score did the ColonGPT (w/ LoRA, w/o extra data) model get on the ColonINST-v1 (Unseen) dataset? | 83.42 |
| MVTec AD | CPR-fast | Target before Shooting: Accurate Anomaly Detection and Localization under One Millisecond via Cascade Patch Retrieval | 2023-08-13 | https://arxiv.org/abs/2308.06748v1 | https://github.com/flyinghu123/cpr | In the paper 'Target before Shooting: Accurate Anomaly Detection and Localization under One Millisecond via Cascade Patch Retrieval', what Detection AUROC score did the CPR-fast model get on the MVTec AD dataset? | 99.7 |
| CHILI-3K | EdgeCNN | CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning | 2024-02-20 | https://arxiv.org/abs/2402.13221v2 | https://github.com/UlrikFriisJensen/CHILI | In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what MSE score did the EdgeCNN model get on the CHILI-3K dataset? | 0.015 ± 0.001 |
| FP-T-E | GeoTransformer | GeoTransformer: Fast and Robust Point Cloud Registration with Geometric Transformer | 2023-07-25 | https://arxiv.org/abs/2308.03768v1 | https://github.com/qinzheng93/geotransformer | In the paper 'GeoTransformer: Fast and Robust Point Cloud Registration with Geometric Transformer', what Recall (3cm, 10 degrees) score did the GeoTransformer model get on the FP-T-E dataset? | 66.25 |
| SBD | ViT-B+MST+CL | MST: Adaptive Multi-Scale Tokens Guided Interactive Segmentation | 2024-01-09 | https://arxiv.org/abs/2401.04403v2 | https://github.com/hahamyt/mst | In the paper 'MST: Adaptive Multi-Scale Tokens Guided Interactive Segmentation', what NoC@85 score did the ViT-B+MST+CL model get on the SBD dataset? | 3.03 |
| LRS2 | SyncVSR | SyncVSR: Data-Efficient Visual Speech Recognition with End-to-End Crossmodal Audio Token Synchronization | 2024-06-18 | https://arxiv.org/abs/2406.12233v1 | https://github.com/KAIST-AILab/SyncVSR | In the paper 'SyncVSR: Data-Efficient Visual Speech Recognition with End-to-End Crossmodal Audio Token Synchronization', what Word Error Rate (WER) score did the SyncVSR model get on the LRS2 dataset? | 16.5 |
| MVBench | VideoChat | VideoChat: Chat-Centric Video Understanding | 2023-05-10 | https://arxiv.org/abs/2305.06355v2 | https://github.com/opengvlab/ask-anything | In the paper 'VideoChat: Chat-Centric Video Understanding', what Avg. score did the VideoChat model get on the MVBench dataset? | 35.5 |
| ActivityNet-QA | FrozenBiLM+ | Open-vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models | 2023-08-18 | https://arxiv.org/abs/2308.09363v1 | https://github.com/mlvlab/ovqa | In the paper 'Open-vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models', what Accuracy score did the FrozenBiLM+ model get on the ActivityNet-QA dataset? | 44.8 |
| SVHN, 40 Labels | ShrinkMatch | Shrinking Class Space for Enhanced Certainty in Semi-Supervised Learning | 2023-08-13 | https://arxiv.org/abs/2308.06777v1 | https://github.com/LiheYoung/ShrinkMatch | In the paper 'Shrinking Class Space for Enhanced Certainty in Semi-Supervised Learning', what Percentage error score did the ShrinkMatch model get on the SVHN, 40 Labels dataset? | 2.51 |
| Cityscapes to Foggy Cityscapes | PT (ResNet50-FPN) | Align and Distill: Unifying and Improving Domain Adaptive Object Detection | 2024-03-18 | https://arxiv.org/abs/2403.12029v2 | https://github.com/justinkay/aldi | In the paper 'Align and Distill: Unifying and Improving Domain Adaptive Object Detection', what mAP@0.5 score did the PT (ResNet50-FPN) model get on the Cityscapes to Foggy Cityscapes dataset? | 59.2 |
| Human3.6M | DC-GCT(T=1) | Double-chain Constraints for 3D Human Pose Estimation in Images and Videos | 2023-08-10 | https://arxiv.org/abs/2308.05298v1 | https://github.com/KHB1698/DC-GCT | In the paper 'Double-chain Constraints for 3D Human Pose Estimation in Images and Videos', what Average MPJPE (mm) score did the DC-GCT(T=1) model get on the Human3.6M dataset? | 46.1 |
| V2X-SIM | QUEST | QUEST: Query Stream for Practical Cooperative Perception | 2023-08-03 | https://arxiv.org/abs/2308.01804v3 | https://github.com/leofansq/QUEST | In the paper 'QUEST: Query Stream for Practical Cooperative Perception', what mAP score did the QUEST model get on the V2X-SIM dataset? | 23.9 |
| PCQM-Contact | DRew-GCN | DRew: Dynamically Rewired Message Passing with Delay | 2023-05-13 | https://arxiv.org/abs/2305.08018v2 | https://github.com/bengutteridge/drew | In the paper 'DRew: Dynamically Rewired Message Passing with Delay', what MRR score did the DRew-GCN model get on the PCQM-Contact dataset? | 0.3444 ± 0.0017 |
| Vulnerability Java Dataset | WizardCoder | Finetuning Large Language Models for Vulnerability Detection | 2024-01-30 | https://arxiv.org/abs/2401.17010v5 | https://github.com/rmusab/vul-llm-finetune | In the paper 'Finetuning Large Language Models for Vulnerability Detection', what AUC score did the WizardCoder model get on the Vulnerability Java Dataset? | 0.86 |
| DBP15k zh-en | UMAEA (w/o surf & iter) | Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment | 2023-07-30 | https://arxiv.org/abs/2307.16210v2 | https://github.com/zjukg/umaea | In the paper 'Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment', what Hits@1 score did the UMAEA (w/o surf & iter) model get on the DBP15k zh-en dataset? | 0.800 |
| Cityscapes test | MetaPrompt-SD | Harnessing Diffusion Models for Visual Perception with Meta Prompts | 2023-12-22 | https://arxiv.org/abs/2312.14733v1 | https://github.com/fudan-zvg/meta-prompts | In the paper 'Harnessing Diffusion Models for Visual Perception with Meta Prompts', what Mean IoU (class) score did the MetaPrompt-SD model get on the Cityscapes test dataset? | 86.2 |
| ModelNet40 | MiniGPT-3D | MiniGPT-3D: Efficiently Aligning 3D Point Clouds with Large Language Models using 2D Priors | 2024-05-02 | https://arxiv.org/abs/2405.01413v1 | https://github.com/tangyuan96/minigpt-3d | In the paper 'MiniGPT-3D: Efficiently Aligning 3D Point Clouds with Large Language Models using 2D Priors', what ModelNet40 (Average) score did the MiniGPT-3D model get on the ModelNet40 dataset? | 60.86 |
| STS14 | PromptEOL+CSE+OPT-2.7B | Scaling Sentence Embeddings with Large Language Models | 2023-07-31 | https://arxiv.org/abs/2307.16645v1 | https://github.com/kongds/scaling_sentemb | In the paper 'Scaling Sentence Embeddings with Large Language Models', what Spearman Correlation score did the PromptEOL+CSE+OPT-2.7B model get on the STS14 dataset? | 0.8480 |
| Shot2Story20K | Ours | Shot2Story20K: A New Benchmark for Comprehensive Understanding of Multi-shot Videos | 2023-12-16 | https://arxiv.org/abs/2312.10300v2 | https://github.com/bytedance/Shot2Story | In the paper 'Shot2Story20K: A New Benchmark for Comprehensive Understanding of Multi-shot Videos', what METEOR score did the Ours model get on the Shot2Story20K dataset? | 24.8 |
| KITTI (Distant PCR) | FCGF+APR(s) | APR: Online Distant Point Cloud Registration Through Aggregated Point Cloud Reconstruction | 2023-05-04 | https://arxiv.org/abs/2305.02893v2 | https://github.com/liuquan98/apr | In the paper 'APR: Online Distant Point Cloud Registration Through Aggregated Point Cloud Reconstruction', what mRR @ Normal Criterion (1.5°&0.3m) score did the FCGF+APR(s) model get on the KITTI (Distant PCR) dataset? | 62.9 |
| SVT | CLIP4STR-L (DataComp-1B) | CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model | 2023-05-23 | https://arxiv.org/abs/2305.14014v3 | https://github.com/VamosC/CLIP4STR | In the paper 'CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model', what Accuracy score did the CLIP4STR-L (DataComp-1B) model get on the SVT dataset? | 98.6 |
| ETTh2 (96) Multivariate | PRformer | PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting | 2024-08-20 | https://arxiv.org/abs/2408.10483v1 | https://github.com/usualheart/prformer | In the paper 'PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting', what MSE score did the PRformer model get on the ETTh2 (96) Multivariate dataset? | 0.268 |
| MORPH Album2 (SE) | ResNet-50-Cross-Entropy | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10 | https://arxiv.org/abs/2307.04570v3 | https://github.com/paplhjak/facial-age-estimation-benchmark | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-Cross-Entropy model get on the MORPH Album2 (SE) dataset? | 2.81 |
| Country211 | TURTLE (CLIP + DINOv2) | Let Go of Your Labels with Unsupervised Transfer | 2024-06-11 | https://arxiv.org/abs/2406.07236v1 | https://github.com/mlbio-epfl/turtle | In the paper 'Let Go of Your Labels with Unsupervised Transfer', what Accuracy score did the TURTLE (CLIP + DINOv2) model get on the Country211 dataset? | 11.1 |
| SemanticKITTI | HTCL-S | Hierarchical Temporal Context Learning for Camera-based Semantic Scene Completion | 2024-07-02 | https://arxiv.org/abs/2407.02077v4 | https://github.com/arlo0o/htcl | In the paper 'Hierarchical Temporal Context Learning for Camera-based Semantic Scene Completion', what mIoU score did the HTCL-S model get on the SemanticKITTI dataset? | 17.09 |
| CATH 4.2 | ProteinMPNN | Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement | 2023-05-20 | https://arxiv.org/abs/2305.15151v4 | https://github.com/A4Bio/OpenCPD | In the paper 'Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement', what Sequence Recovery % (All) score did the ProteinMPNN model get on the CATH 4.2 dataset? | 45.96 |
| Atari 2600 Private Eye | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07 | https://arxiv.org/abs/2305.04180v3 | https://github.com/xinjinghao/color | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score score did the ASL DDQN model get on the Atari 2600 Private Eye dataset? | 349.7 |
| CIFAR-10-LT (ρ=100) | DirMixE | Harnessing Hierarchical Label Distribution Variations in Test Agnostic Long-tail Recognition | 2024-05-13 | https://arxiv.org/abs/2405.07780v1 | https://github.com/scongl/dirmixe | In the paper 'Harnessing Hierarchical Label Distribution Variations in Test Agnostic Long-tail Recognition', what Error Rate score did the DirMixE model get on the CIFAR-10-LT (ρ=100) dataset? | 16.74 |
| NCBI-disease | GoLLIE | GoLLIE: Annotation Guidelines improve Zero-Shot Information-Extraction | 2023-10-05 | https://arxiv.org/abs/2310.03668v5 | https://github.com/hitz-zentroa/gollie | In the paper 'GoLLIE: Annotation Guidelines improve Zero-Shot Information-Extraction', what F1 score did the GoLLIE model get on the NCBI-disease dataset? | 86.5 |
| GSM8K | DART-Math-DSMath-7B-Uniform (0-shot CoT, w/o code) | DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving | 2024-06-18 | https://arxiv.org/abs/2407.13690v1 | https://github.com/hkust-nlp/dart-math | In the paper 'DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving', what Accuracy score did the DART-Math-DSMath-7B-Uniform (0-shot CoT, w/o code) model get on the GSM8K dataset? | 88.2 |
| LaSOT-ext | LoRAT-g-378 | Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance | 2024-03-08 | https://arxiv.org/abs/2403.05231v2 | https://github.com/litinglin/lorat | In the paper 'Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance', what AUC score did the LoRAT-g-378 model get on the LaSOT-ext dataset? | 56.5 |
| ImageNet 512x512 | PaGoDA | PaGoDA: Progressive Growing of a One-Step Generator from a Low-Resolution Diffusion Teacher | 2024-05-23 | https://arxiv.org/abs/2405.14822v2 | https://github.com/sony/pagoda | In the paper 'PaGoDA: Progressive Growing of a One-Step Generator from a Low-Resolution Diffusion Teacher', what FID score did the PaGoDA model get on the ImageNet 512x512 dataset? | 1.80 |
| SMAC 26m_vs_30m | VDN | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04 | https://arxiv.org/abs/2306.02430v1 | https://github.com/j3soon/dfac-extended | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the VDN model get on the SMAC 26m_vs_30m dataset? | 23.01 |
| CROHME 2016 | ICAL | ICAL: Implicit Character-Aided Learning for Enhanced Handwritten Mathematical Expression Recognition | 2024-05-15 | https://arxiv.org/abs/2405.09032v4 | https://github.com/qingzhenduyu/ical | In the paper 'ICAL: Implicit Character-Aided Learning for Enhanced Handwritten Mathematical Expression Recognition', what ExpRate score did the ICAL model get on the CROHME 2016 dataset? | 58.79 |
| Office-Home | WAKD (DeiT-Ti) | Weight Averaging Improves Knowledge Distillation under Domain Shift | 2023-09-20 | https://arxiv.org/abs/2309.11446v1 | https://github.com/vorobeevich/distillation-in-dg | In the paper 'Weight Averaging Improves Knowledge Distillation under Domain Shift', what Average Accuracy score did the WAKD (DeiT-Ti) model get on the Office-Home dataset? | 70.5 |
| Office-Home | UniDG + CORAL + ConvNeXt-B | Towards Unified and Effective Domain Generalization | 2023-10-16 | https://arxiv.org/abs/2310.10008v1 | https://github.com/invictus717/UniDG | In the paper 'Towards Unified and Effective Domain Generalization', what Average Accuracy score did the UniDG + CORAL + ConvNeXt-B model get on the Office-Home dataset? | 88.9 |
| IllusionVQA | CogVLM | IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models | 2024-03-23 | https://arxiv.org/abs/2403.15952v3 | https://github.com/csebuetnlp/illusionvqa | In the paper 'IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models', what Accuracy score did the CogVLM model get on the IllusionVQA dataset? | 28 |
| ARC (Easy) | LLaMA 3 8B+MoSLoRA (fine-tuned) | Mixture-of-Subspaces in Low-Rank Adaptation | 2024-06-16 | https://arxiv.org/abs/2406.11909v3 | https://github.com/wutaiqiang/moslora | In the paper 'Mixture-of-Subspaces in Low-Rank Adaptation', what Accuracy score did the LLaMA 3 8B+MoSLoRA (fine-tuned) model get on the ARC (Easy) dataset? | 90.5 |
| ADE20K | DAT-T++ | DAT++: Spatially Dynamic Vision Transformer with Deformable Attention | 2023-09-04 | https://arxiv.org/abs/2309.01430v1 | https://github.com/leaplabthu/dat | In the paper 'DAT++: Spatially Dynamic Vision Transformer with Deformable Attention', what Validation mIoU score did the DAT-T++ model get on the ADE20K dataset? | 50.3 |
| MSR-VTT Adverbs | ReGaDa | Video-adverb retrieval with compositional adverb-action embeddings | 2023-09-26 | https://arxiv.org/abs/2309.15086v1 | https://github.com/ExplainableML/ReGaDa | In the paper 'Video-adverb retrieval with compositional adverb-action embeddings', what mAP W score did the ReGaDa model get on the MSR-VTT Adverbs dataset? | 0.378 |
| GSM8K | RFT 13B | Scaling Relationship on Learning Mathematical Reasoning with Large Language Models | 2023-08-03 | https://arxiv.org/abs/2308.01825v2 | https://github.com/ofa-sys/gsm8k-screl | In the paper 'Scaling Relationship on Learning Mathematical Reasoning with Large Language Models', what Accuracy score did the RFT 13B model get on the GSM8K dataset? | 55.3 |
| NExT-QA | ViLA (3B, 4 frames) | ViLA: Efficient Video-Language Alignment for Video Question Answering | 2023-12-13 | https://arxiv.org/abs/2312.08367v4 | https://github.com/xijun-cs/vila | In the paper 'ViLA: Efficient Video-Language Alignment for Video Question Answering', what Accuracy score did the ViLA (3B, 4 frames) model get on the NExT-QA dataset? | 74.4 |
| ICBHI Respiratory Sound Database | SG-SCL (AST) | Stethoscope-guided Supervised Contrastive Learning for Cross-domain Adaptation on Respiratory Sound Classification | 2023-12-15 | https://arxiv.org/abs/2312.09603v1 | https://github.com/kaen2891/stethoscope-guided_supervised_contrastive_learning | In the paper 'Stethoscope-guided Supervised Contrastive Learning for Cross-domain Adaptation on Respiratory Sound Classification', what ICBHI Score score did the SG-SCL (AST) model get on the ICBHI Respiratory Sound Database dataset? | 61.71 |
| CropHarvest - Kenya | Radar TS with TempCNN | In the Search for Optimal Multi-view Learning Models for Crop Classification with Global Remote Sensing Data | 2024-03-25 | https://arxiv.org/abs/2403.16582v2 | https://github.com/fmenat/optimal-multiview-crop-classifier | In the paper 'In the Search for Optimal Multi-view Learning Models for Crop Classification with Global Remote Sensing Data', what Average Accuracy score did the Radar TS with TempCNN model get on the CropHarvest - Kenya dataset? | 0.676 |
| YouTube-UGC | ReLaX-VQA (finetuned on YouTube-UGC) | ReLaX-VQA: Residual Fragment and Layer Stack Extraction for Enhancing Video Quality Assessment | 2024-07-16 | https://arxiv.org/abs/2407.11496v1 | https://github.com/xinyiw915/relax-vqa | In the paper 'ReLaX-VQA: Residual Fragment and Layer Stack Extraction for Enhancing Video Quality Assessment', what PLCC score did the ReLaX-VQA (finetuned on YouTube-UGC) model get on the YouTube-UGC dataset? | 0.8652 |
| LEVIR-CD | SGSLN/512 | Exchanging Dual Encoder-Decoder: A New Strategy for Change Detection with Semantic Guidance and Spatial Localization | 2023-11-19 | https://arxiv.org/abs/2311.11302v1 | https://github.com/walking-shadow/Semantic-guidance-and-spatial-localization-network | In the paper 'Exchanging Dual Encoder-Decoder: A New Strategy for Change Detection with Semantic Guidance and Spatial Localization', what F1-score score did the SGSLN/512 model get on the LEVIR-CD dataset? | 0.9233 |
| Electricity (192) | MoLE-DLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11 | https://arxiv.org/abs/2312.06786v3 | https://github.com/rogerni/mole | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the Electricity (192) dataset? | 0.147 |
| Manga109 - 2x upscaling | DRCT-L | DRCT: Saving Image Super-resolution away from Information Bottleneck | 2024-03-31 | https://arxiv.org/abs/2404.00722v5 | https://github.com/ming053l/drct | In the paper 'DRCT: Saving Image Super-resolution away from Information Bottleneck', what PSNR score did the DRCT-L model get on the Manga109 - 2x upscaling dataset? | 41.14 |
| VisDA2017 | SWG | Combining inherent knowledge of vision-language models with unsupervised domain adaptation through strong-weak guidance | 2023-12-07 | https://arxiv.org/abs/2312.04066v4 | https://github.com/ThomasWestfechtel/SWG | In the paper 'Combining inherent knowledge of vision-language models with unsupervised domain adaptation through strong-weak guidance', what Accuracy score did the SWG model get on the VisDA2017 dataset? | 92.7 |
| Human3.6M | W-HMR | W-HMR: Monocular Human Mesh Recovery in World Space with Weak-Supervised Calibration | 2023-11-29 | https://arxiv.org/abs/2311.17460v6 | https://github.com/yw0208/W-HMR | In the paper 'W-HMR: Monocular Human Mesh Recovery in World Space with Weak-Supervised Calibration', what Average MPJPE (mm) score did the W-HMR model get on the Human3.6M dataset? | 45.5 |
| BDD100K | CDDMSL | Semi-Supervised Domain Generalization for Object Detection via Language-Guided Feature Alignment | 2023-09-24 | https://arxiv.org/abs/2309.13525v1 | https://github.com/sinamalakouti/CDDMSL | In the paper 'Semi-Supervised Domain Generalization for Object Detection via Language-Guided Feature Alignment', what MAP score did the CDDMSL model get on the BDD100K dataset? | 27.1 |
| MMNeedle | LLaVA-Llama-3 | LLaVA-UHD: an LMM Perceiving Any Aspect Ratio and High-Resolution Images | 2024-03-18 | https://arxiv.org/abs/2403.11703v1 | https://github.com/thunlp/llava-uhd | In the paper 'LLaVA-UHD: an LMM Perceiving Any Aspect Ratio and High-Resolution Images', what 1 Image, 2*2 Stitching, Exact Accuracy score did the LLaVA-Llama-3 model get on the MMNeedle dataset? | 43.8 |
| CUB 200 5-way 1-shot | CAML [Laion-2b] | Context-Aware Meta-Learning | 2023-10-17 | https://arxiv.org/abs/2310.10971v2 | https://github.com/cfifty/CAML | In the paper 'Context-Aware Meta-Learning', what Accuracy score did the CAML [Laion-2b] model get on the CUB 200 5-way 1-shot dataset? | 95.8 |
| MSU SR-QA Dataset | TOPIQ | TOPIQ: A Top-down Approach from Semantics to Distortions for Image Quality Assessment | 2023-08-06 | https://arxiv.org/abs/2308.03060v1 | https://github.com/chaofengc/iqa-pytorch | In the paper 'TOPIQ: A Top-down Approach from Semantics to Distortions for Image Quality Assessment', what SROCC score did the TOPIQ model get on the MSU SR-QA Dataset? | 0.62715 |
| Countix | ESCounts | Every Shot Counts: Using Exemplars for Repetition Counting in Videos | 2024-03-26 | https://arxiv.org/abs/2403.18074v2 | https://github.com/sinhasaptarshi/EveryShotCounts | In the paper 'Every Shot Counts: Using Exemplars for Repetition Counting in Videos', what OBO score did the ESCounts model get on the Countix dataset? | 0.673 |
| RAVDESS | MultiMAE-DER | MultiMAE-DER: Multimodal Masked Autoencoder for Dynamic Emotion Recognition | 2024-04-28 | https://arxiv.org/abs/2404.18327v2 | https://github.com/Peihao-Xiang/MultiMAE-DFER | In the paper 'MultiMAE-DER: Multimodal Masked Autoencoder for Dynamic Emotion Recognition', what WAR score did the MultiMAE-DER model get on the RAVDESS dataset? | 83.61 |
| STS13 | PromptEOL+CSE+OPT-2.7B | Scaling Sentence Embeddings with Large Language Models | 2023-07-31 | https://arxiv.org/abs/2307.16645v1 | https://github.com/kongds/scaling_sentemb | In the paper 'Scaling Sentence Embeddings with Large Language Models', what Spearman Correlation score did the PromptEOL+CSE+OPT-2.7B model get on the STS13 dataset? | 0.8964 |
| COCO-20i -> Pascal VOC (1-shot) | MSDNet (ResNet-101) | MSDNet: Multi-Scale Decoder for Few-Shot Semantic Segmentation via Transformer-Guided Prototyping | 2024-09-17 | https://arxiv.org/abs/2409.11316v1 | https://github.com/amirrezafateh/msdnet | In the paper 'MSDNet: Multi-Scale Decoder for Few-Shot Semantic Segmentation via Transformer-Guided Prototyping', what Mean IoU score did the MSDNet (ResNet-101) model get on the COCO-20i -> Pascal VOC (1-shot) dataset? | 73.9 |
| 3DPW | STAF | STAF: 3D Human Mesh Recovery from Video with Spatio-Temporal Alignment Fusion | 2024-01-03 | https://arxiv.org/abs/2401.01730v1 | https://github.com/yw0208/STAF | In the paper 'STAF: 3D Human Mesh Recovery from Video with Spatio-Temporal Alignment Fusion', what PA-MPJPE score did the STAF model get on the 3DPW dataset? | 48.0 |
| MassSpecGym | Fingerprint FFN | MassSpecGym: A benchmark for the discovery and identification of molecules | 2024-10-30 | https://arxiv.org/abs/2410.23326v1 | https://github.com/pluskal-lab/massspecgym | In the paper 'MassSpecGym: A benchmark for the discovery and identification of molecules', what Hit rate @ 1 score did the Fingerprint FFN model get on the MassSpecGym dataset? | 2.54 |
| COCO 2017 | DeBiFormer-B (IN1k pretrain, MaskRCNN 12ep) | DeBiFormer: Vision Transformer with Deformable Agent Bi-level Routing Attention | 2024-10-11 | https://arxiv.org/abs/2410.08582v1 | https://github.com/maclong01/DeBiFormer | In the paper 'DeBiFormer: Vision Transformer with Deformable Agent Bi-level Routing Attention', what mAP score did the DeBiFormer-B (IN1k pretrain, MaskRCNN 12ep) model get on the COCO 2017 dataset? | 48.5 |
| NExT-QA | LLaMA-VQA (33B) | Large Language Models are Temporal and Causal Reasoners for Video Question Answering | 2023-10-24 | https://arxiv.org/abs/2310.15747v2 | https://github.com/mlvlab/Flipped-VQA | In the paper 'Large Language Models are Temporal and Causal Reasoners for Video Question Answering', what Accuracy score did the LLaMA-VQA (33B) model get on the NExT-QA dataset? | 75.5 |
| EuroSAT-SAR | ViT-S/16 | Feature Guided Masked Autoencoder for Self-supervised Learning in Remote Sensing | 2023-10-28 | https://arxiv.org/abs/2310.18653v1 | https://github.com/zhu-xlab/fgmae | In the paper 'Feature Guided Masked Autoencoder for Self-supervised Learning in Remote Sensing', what Overall Accuracy score did the ViT-S/16 model get on the EuroSAT-SAR dataset? | 78.4 |
| MS-COCO (30-shot) | RISF (SWIN-Large) | Re-Scoring Using Image-Language Similarity for Few-Shot Object Detection | 2023-11-01 | https://arxiv.org/abs/2311.00278v1 | https://github.com/INFINIQ-AI1/RISF | In the paper 'Re-Scoring Using Image-Language Similarity for Few-Shot Object Detection', what AP score did the RISF (SWIN-Large) model get on the MS-COCO (30-shot) dataset? | 31.9 |
| Wisconsin | 5-HiGCN | Higher-order Graph Convolutional Network with Flower-Petals Laplacians on Simplicial Complexes | 2023-09-22 | https://arxiv.org/abs/2309.12971v2 | https://github.com/yiminghh/higcn | In the paper 'Higher-order Graph Convolutional Network with Flower-Petals Laplacians on Simplicial Complexes', what Accuracy score did the 5-HiGCN model get on the Wisconsin dataset? | 94.99 ± 0.65 |
| RealBlur-R (trained on GoPro) | ALGNet | Learning Enriched Features via Selective State Spaces Model for Efficient Image Deblurring | 2024-03-29 | https://arxiv.org/abs/2403.20106v2 | https://github.com/Tombs98/ALGNet | In the paper 'Learning Enriched Features via Selective State Spaces Model for Efficient Image Deblurring', what PSNR (sRGB) score did the ALGNet model get on the RealBlur-R (trained on GoPro) dataset? | 36.35 |
| MultiOFF | RGCL | Improving Hateful Meme Detection through Retrieval-Guided Contrastive Learning | 2023-11-14 | https://arxiv.org/abs/2311.08110v3 | https://github.com/JingbiaoMei/RGCL | In the paper 'Improving Hateful Meme Detection through Retrieval-Guided Contrastive Learning', what Accuracy score did the RGCL model get on the MultiOFF dataset? | 67.1 |
| Harmonix | Beat This! | Beat this! Accurate beat tracking without DBN postprocessing | 2024-07-31 | https://arxiv.org/abs/2407.21658v1 | https://github.com/CPJKU/beat_this | In the paper 'Beat this! Accurate beat tracking without DBN postprocessing', what F1 score did the Beat This! model get on the Harmonix dataset? | 95.8 |
| SICKLE | U-TAE | SICKLE: A Multi-Sensor Satellite Imagery Dataset Annotated with Multiple Key Cropping Parameters | 2023-11-29 | https://arxiv.org/abs/2312.00069v1 | https://github.com/Depanshu-Sani/SICKLE | In the paper 'SICKLE: A Multi-Sensor Satellite Imagery Dataset Annotated with Multiple Key Cropping Parameters', what MAPE (%) score did the U-TAE model get on the SICKLE dataset? | 49.63 |
| OntoNotes | Maverick_mes | Maverick: Efficient and Accurate Coreference Resolution Defying Recent Trends | 2024-07-31 | https://arxiv.org/abs/2407.21489v1 | https://github.com/sapienzanlp/maverick-coref | In the paper 'Maverick: Efficient and Accurate Coreference Resolution Defying Recent Trends', what F1 score did the Maverick_mes model get on the OntoNotes dataset? | 83.6 |
| VisA | SuperSimpleNet | SuperSimpleNet: Unifying Unsupervised and Supervised Learning for Fast and Reliable Surface Defect Detection | 2024-08-06 | https://arxiv.org/abs/2408.03143v2 | https://github.com/blaz-r/supersimplenet | In the paper 'SuperSimpleNet: Unifying Unsupervised and Supervised Learning for Fast and Reliable Surface Defect Detection', what Detection AUROC score did the SuperSimpleNet model get on the VisA dataset? | 93.4 |
Long Video Dataset | READMem-STCN (sr=10) | READMem: Robust Embedding Association for a Diverse Memory in Unconstrained Video Object Segmentation | 2023-05-22T00:00:00 | https://arxiv.org/abs/2305.12823v2 | [
"https://github.com/Vujas-Eteph/READMem"
] | In the paper 'READMem: Robust Embedding Association for a Diverse Memory in Unconstrained Video Object Segmentation', what J&F score did the READMem-STCN (sr=10) model get on the Long Video Dataset dataset
| 81.8 |
AfriSenti | XLM-R | UCAS-IIE-NLP at SemEval-2023 Task 12: Enhancing Generalization of Multilingual BERT for Low-resource Sentiment Analysis | 2023-06-01T00:00:00 | https://arxiv.org/abs/2306.01093v1 | [
"https://github.com/zerohd4869/sacl"
] | In the paper 'UCAS-IIE-NLP at SemEval-2023 Task 12: Enhancing Generalization of Multilingual BERT for Low-resource Sentiment Analysis', what weighted-F1 score score did the XLM-R model get on the AfriSenti dataset
| 0.399 |
VisA | GLASS | A Unified Anomaly Synthesis Strategy with Gradient Ascent for Industrial Anomaly Detection and Localization | 2024-07-12T00:00:00 | https://arxiv.org/abs/2407.09359v1 | [
"https://github.com/cqylunlun/glass"
] | In the paper 'A Unified Anomaly Synthesis Strategy with Gradient Ascent for Industrial Anomaly Detection and Localization', what Detection AUROC score did the GLASS model get on the VisA dataset
| 98.8 |
RSICD | GeoRSCLIP-FT | RS5M and GeoRSCLIP: A Large Scale Vision-Language Dataset and A Large Vision-Language Model for Remote Sensing | 2023-06-20T00:00:00 | https://arxiv.org/abs/2306.11300v5 | [
"https://github.com/om-ai-lab/rs5m"
] | In the paper 'RS5M and GeoRSCLIP: A Large Scale Vision-Language Dataset and A Large Vision-Language Model for Remote Sensing', what Mean Recall score did the GeoRSCLIP-FT model get on the RSICD dataset
| 38.87% |
WN18RR | LERP | Logical Entity Representation in Knowledge-Graphs for Differentiable Rule Learning | 2023-05-22T00:00:00 | https://arxiv.org/abs/2305.12738v1 | [
"https://github.com/glaciohound/lerp"
] | In the paper 'Logical Entity Representation in Knowledge-Graphs for Differentiable Rule Learning', what MRR score did the LERP model get on the WN18RR dataset
| 0.622 |
Mapillary test | DINOv2 SALAD | Optimal Transport Aggregation for Visual Place Recognition | 2023-11-27T00:00:00 | https://arxiv.org/abs/2311.15937v2 | [
"https://github.com/serizba/salad"
] | In the paper 'Optimal Transport Aggregation for Visual Place Recognition', what Recall@1 score did the DINOv2 SALAD model get on the Mapillary test dataset
| 75 |
BURST-test | Cutie (base, MEGA, 600 pixels) | Putting the Object Back into Video Object Segmentation | 2023-10-19T00:00:00 | https://arxiv.org/abs/2310.12982v2 | [
"https://github.com/hkchengrex/Cutie"
] | In the paper 'Putting the Object Back into Video Object Segmentation', what HOTA (all) score did the Cutie (base, MEGA, 600 pixels) model get on the BURST-test dataset
| 66.0 |
Food-101 | ProMetaR | Prompt Learning via Meta-Regularization | 2024-04-01T00:00:00 | https://arxiv.org/abs/2404.00851v1 | [
"https://github.com/mlvlab/prometar"
] | In the paper 'Prompt Learning via Meta-Regularization', what Harmonic mean score did the ProMetaR model get on the Food-101 dataset
| 91.34 |
Peptides-struct | GCN + PANDA | PANDA: Expanded Width-Aware Message Passing Beyond Rewiring | 2024-06-06T00:00:00 | https://arxiv.org/abs/2406.03671v2 | [
"https://github.com/jeongwhanchoi/panda"
] | In the paper 'PANDA: Expanded Width-Aware Message Passing Beyond Rewiring', what MAE score did the GCN + PANDA model get on the Peptides-struct dataset
| 0.3272±0.0001 |
ImageNet | DePT | DePT: Decoupled Prompt Tuning | 2023-09-14T00:00:00 | https://arxiv.org/abs/2309.07439v2 | [
"https://github.com/koorye/dept"
] | In the paper 'DePT: Decoupled Prompt Tuning', what Harmonic mean score did the DePT model get on the ImageNet dataset
| 74.02 |
ETTh1 (720) Multivariate | TSMixer | TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting | 2023-06-14T00:00:00 | https://arxiv.org/abs/2306.09364v4 | [
"https://github.com/ibm/tsfm"
] | In the paper 'TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting', what MSE score did the TSMixer model get on the ETTh1 (720) Multivariate dataset
| 0.444 |
JIGSAWS | RICA^2 | RICA2: Rubric-Informed, Calibrated Assessment of Actions | 2024-08-04T00:00:00 | https://arxiv.org/abs/2408.02138v2 | [
"https://github.com/abrarmajeedi/rica2_aqa"
] | In the paper 'RICA2: Rubric-Informed, Calibrated Assessment of Actions', what Spearman Correlation score did the RICA^2 model get on the JIGSAWS dataset
| 0.92 |
CIFAR-10 | DiffiT | DiffiT: Diffusion Vision Transformers for Image Generation | 2023-12-04T00:00:00 | https://arxiv.org/abs/2312.02139v3 | [
"https://github.com/nvlabs/diffit"
] | In the paper 'DiffiT: Diffusion Vision Transformers for Image Generation', what FID score did the DiffiT model get on the CIFAR-10 dataset
| 1.95 |
map2seq | FLAME | FLAME: Learning to Navigate with Multimodal LLM in Urban Environments | 2024-08-20T00:00:00 | https://arxiv.org/abs/2408.11051v1 | [
"https://github.com/xyz9911/FLAME"
] | In the paper 'FLAME: Learning to Navigate with Multimodal LLM in Urban Environments', what Task Completion (TC) score did the FLAME model get on the map2seq dataset
| 52.44 |
ogbl-biokg | UniBi | Prior Bilinear Based Models for Knowledge Graph Completion | 2023-09-25T00:00:00 | https://arxiv.org/abs/2309.13834v1 | [
"https://github.com/lrlbbzl/unibi_ogb"
] | In the paper 'Prior Bilinear Based Models for Knowledge Graph Completion', what Test MRR score did the UniBi model get on the ogbl-biokg dataset
| 0.8550 ± 0.0003 |
SPKL | EfficientNet-P | Revising deep learning methods in parking lot occupancy detection | 2023-06-07T00:00:00 | https://arxiv.org/abs/2306.04288v3 | [
"https://github.com/eighonet/parking-research"
] | In the paper 'Revising deep learning methods in parking lot occupancy detection', what F1-score score did the EfficientNet-P model get on the SPKL dataset
| 0.7393 |
GoPro | Turtle | Learning Truncated Causal History Model for Video Restoration | 2024-10-04T00:00:00 | https://arxiv.org/abs/2410.03936v2 | [
"https://github.com/Ascend-Research/Turtle"
] | In the paper 'Learning Truncated Causal History Model for Video Restoration', what PSNR score did the Turtle model get on the GoPro dataset
| 34.5 |
Cityscapes to ACDC | CoDA | CoDA: Instructive Chain-of-Domain Adaptation with Severity-Aware Visual Prompt Tuning | 2024-03-26T00:00:00 | https://arxiv.org/abs/2403.17369v3 | [
"https://github.com/Cuzyoung/CoDA"
] | In the paper 'CoDA: Instructive Chain-of-Domain Adaptation with Severity-Aware Visual Prompt Tuning', what mIoU score did the CoDA model get on the Cityscapes to ACDC dataset
| 72.6 |