| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
| Market-1501 | SOLIDER +UFFM+AMC | Enhancing Person Re-Identification via Uncertainty Feature Fusion and Auto-weighted Measure Combination | 2024-05-02 | https://arxiv.org/abs/2405.01101v4 | https://github.com/chequanghuy/Enhancing-Person-Re-Identification-via-UFFM-and-AMC | In the paper 'Enhancing Person Re-Identification via Uncertainty Feature Fusion and Auto-weighted Measure Combination', what Rank-1 score did the SOLIDER +UFFM+AMC model get on the Market-1501 dataset? | 97 |
| Geo-Tagged NUS-WIDE (GPS Only) | GeoCLIP | GeoCLIP: Clip-Inspired Alignment between Locations and Images for Effective Worldwide Geo-localization | 2023-09-27 | https://arxiv.org/abs/2309.16020v2 | https://github.com/VicenteVivan/geo-clip | In the paper 'GeoCLIP: Clip-Inspired Alignment between Locations and Images for Effective Worldwide Geo-localization', what mAP score did the GeoCLIP model get on the Geo-Tagged NUS-WIDE (GPS Only) dataset? | 0.249 |
| CACD | ResNet-50-Unimodal-Concentrated | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10 | https://arxiv.org/abs/2307.04570v3 | https://github.com/paplhjak/facial-age-estimation-benchmark | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-Unimodal-Concentrated model get on the CACD dataset? | 4.10 |
| DAVIS-DTA | PGraphDTA | PGraphDTA: Improving Drug Target Interaction Prediction using Protein Language Models and Contact Maps | 2023-10-06 | https://arxiv.org/abs/2310.04017v3 | https://github.com/yijia-xiao/pgraphdta | In the paper 'PGraphDTA: Improving Drug Target Interaction Prediction using Protein Language Models and Contact Maps', what MSE score did the PGraphDTA model get on the DAVIS-DTA dataset? | 0.221 |
| LaSOT | RTracker-L | RTracker: Recoverable Tracking via PN Tree Structured Memory | 2024-03-28 | https://arxiv.org/abs/2403.19242v1 | https://github.com/norahgreen/rtracker | In the paper 'RTracker: Recoverable Tracking via PN Tree Structured Memory', what AUC score did the RTracker-L model get on the LaSOT dataset? | 74.7 |
| VLCS | SPG (CLIP, ResNet-50) | Soft Prompt Generation for Domain Generalization | 2024-04-30 | https://arxiv.org/abs/2404.19286v2 | https://github.com/renytek13/soft-prompt-generation-with-cgan | In the paper 'Soft Prompt Generation for Domain Generalization', what Average Accuracy score did the SPG (CLIP, ResNet-50) model get on the VLCS dataset? | 84.0 |
| CATH 4.3 | GVP-large | Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement | 2023-05-20 | https://arxiv.org/abs/2305.15151v4 | https://github.com/A4Bio/OpenCPD | In the paper 'Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement', what Sequence Recovery % (All) score did the GVP-large model get on the CATH 4.3 dataset? | 39.2 |
| DocVQA test | MLCD-Embodied-7B | Multi-label Cluster Discrimination for Visual Representation Learning | 2024-07-24 | https://arxiv.org/abs/2407.17331v2 | https://github.com/deepglint/unicom | In the paper 'Multi-label Cluster Discrimination for Visual Representation Learning', what ANLS score did the MLCD-Embodied-7B model get on the DocVQA test dataset? | 0.916 |
| SHHS | NeuroNet (C4-A1 only) | NeuroNet: A Novel Hybrid Self-Supervised Learning Framework for Sleep Stage Classification Using Single-Channel EEG | 2024-04-10 | https://arxiv.org/abs/2404.17585v2 | https://github.com/dlcjfgmlnasa/NeuroNet | In the paper 'NeuroNet: A Novel Hybrid Self-Supervised Learning Framework for Sleep Stage Classification Using Single-Channel EEG', what Accuracy score did the NeuroNet (C4-A1 only) model get on the SHHS dataset? | 86.88% |
| WiC | OPT-125M | Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization | 2024-05-24 | https://arxiv.org/abs/2405.15861v3 | https://github.com/ZidongLiu/DeComFL | In the paper 'Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization', what Test Accuracy score did the OPT-125M model get on the WiC dataset? | 53.38% |
| MATH | OpenMath-CodeLlama-34B (w/ code, SC, k=50) | OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | 2024-02-15 | https://arxiv.org/abs/2402.10176v2 | https://github.com/kipok/nemo-skills | In the paper 'OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset', what Accuracy score did the OpenMath-CodeLlama-34B (w/ code, SC, k=50) model get on the MATH dataset? | 60.2 |
| Mip-NeRF 360 | Self-Organizing Gaussians | Compact 3D Scene Representation via Self-Organizing Gaussian Grids | 2023-12-19 | https://arxiv.org/abs/2312.13299v2 | https://github.com/fraunhoferhhi/Self-Organizing-Gaussians | In the paper 'Compact 3D Scene Representation via Self-Organizing Gaussian Grids', what PSNR score did the Self-Organizing Gaussians model get on the Mip-NeRF 360 dataset? | 27.64 |
| EQ-Bench | 01-ai/Yi-34B-Chat | EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models | 2023-12-11 | https://arxiv.org/abs/2312.06281v2 | https://github.com/eq-bench/eq-bench | In the paper 'EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models', what EQ-Bench Score did the 01-ai/Yi-34B-Chat model get on the EQ-Bench dataset? | 51.03 |
| UCI POWER | PaddingFlow | PaddingFlow: Improving Normalizing Flows with Padding-Dimensional Noise | 2024-03-13 | https://arxiv.org/abs/2403.08216v2 | https://github.com/adamqlmeng/paddingflow | In the paper 'PaddingFlow: Improving Normalizing Flows with Padding-Dimensional Noise', what CD score did the PaddingFlow model get on the UCI POWER dataset? | 0.142 |
| ImageNet | ZLaP* | Label Propagation for Zero-shot Classification with Vision-Language Models | 2024-04-05 | https://arxiv.org/abs/2404.04072v1 | https://github.com/vladan-stojnic/zlap | In the paper 'Label Propagation for Zero-shot Classification with Vision-Language Models', what Top 1 Accuracy score did the ZLaP* model get on the ImageNet dataset? | 72.1 |
| Winoground | CLIP RN50x64 | What You See is What You Read? Improving Text-Image Alignment Evaluation | 2023-05-17 | https://arxiv.org/abs/2305.10400v4 | https://github.com/yonatanbitton/wysiwyr | In the paper 'What You See is What You Read? Improving Text-Image Alignment Evaluation', what Text Score did the CLIP RN50x64 model get on the Winoground dataset? | 26.50 |
| ImageNet | HVT Base | HVT: A Comprehensive Vision Framework for Learning in Non-Euclidean Space | 2024-09-25 | https://arxiv.org/abs/2409.16897v2 | https://github.com/hyperbolicvit/hyperbolicvit | In the paper 'HVT: A Comprehensive Vision Framework for Learning in Non-Euclidean Space', what Top 1 Accuracy score did the HVT Base model get on the ImageNet dataset? | 80.1% |
| COCO 1% labeled data | Guided Distillation (ResNet50) | Guided Distillation for Semi-Supervised Instance Segmentation | 2023-08-03 | https://arxiv.org/abs/2308.02668v2 | https://github.com/facebookresearch/guideddistillation | In the paper 'Guided Distillation for Semi-Supervised Instance Segmentation', what mask AP score did the Guided Distillation (ResNet50) model get on the COCO 1% labeled data dataset? | 21.5 |
| ARMBench | RoboLLM (VIT-B) | RoboLLM: Robotic Vision Tasks Grounded on Multimodal Large Language Models | 2023-10-16 | https://arxiv.org/abs/2310.10221v2 | https://github.com/longkukuhi/armbench | In the paper 'RoboLLM: Robotic Vision Tasks Grounded on Multimodal Large Language Models', what AP50 score did the RoboLLM (VIT-B) model get on the ARMBench dataset? | 82.0 |
| GSM8K | MathCoder-L-13B | MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning | 2023-10-05 | https://arxiv.org/abs/2310.03731v1 | https://github.com/mathllm/mathcoder | In the paper 'MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning', what Accuracy score did the MathCoder-L-13B model get on the GSM8K dataset? | 72.6 |
| SMAC 3s5z_vs_4s6z | QPLEX | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04 | https://arxiv.org/abs/2306.02430v1 | https://github.com/j3soon/dfac-extended | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Average Score did the QPLEX model get on the SMAC 3s5z_vs_4s6z dataset? | 13.60 |
| HumanML3D | MotionLCM (4-step) | MotionLCM: Real-time Controllable Motion Generation via Latent Consistency Model | 2024-04-30 | https://arxiv.org/abs/2404.19759v2 | https://github.com/Dai-Wenxun/MotionLCM | In the paper 'MotionLCM: Real-time Controllable Motion Generation via Latent Consistency Model', what FID score did the MotionLCM (4-step) model get on the HumanML3D dataset? | 0.304 |
| FineAction | RDFA-S6 (InternVideo2-6B) | Enhancing Temporal Action Localization: Advanced S6 Modeling with Recurrent Mechanism | 2024-07-18 | https://arxiv.org/abs/2407.13078v1 | https://github.com/lsy0882/RDFA-S6 | In the paper 'Enhancing Temporal Action Localization: Advanced S6 Modeling with Recurrent Mechanism', what mAP score did the RDFA-S6 (InternVideo2-6B) model get on the FineAction dataset? | 29.6 |
| WebApp1K-React | o1-preview | A Case Study of Web App Coding with OpenAI Reasoning Models | 2024-09-19 | https://arxiv.org/abs/2409.13773v1 | https://github.com/onekq/webapp1k | In the paper 'A Case Study of Web App Coding with OpenAI Reasoning Models', what pass@1 score did the o1-preview model get on the WebApp1K-React dataset? | 0.952 |
| SportsMOT | Deep-EIoU | Iterative Scale-Up ExpansionIoU and Deep Features Association for Multi-Object Tracking in Sports | 2023-06-22 | https://arxiv.org/abs/2306.13074v5 | https://github.com/hsiangwei0903/Deep-EIoU | In the paper 'Iterative Scale-Up ExpansionIoU and Deep Features Association for Multi-Object Tracking in Sports', what HOTA score did the Deep-EIoU model get on the SportsMOT dataset? | 77.2 |
| MATH | MetaMath 70B | MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models | 2023-09-21 | https://arxiv.org/abs/2309.12284v4 | https://github.com/meta-math/MetaMath | In the paper 'MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models', what Accuracy score did the MetaMath 70B model get on the MATH dataset? | 26.0 |
| PATTERN | TIGT | Topology-Informed Graph Transformer | 2024-02-03 | https://arxiv.org/abs/2402.02005v1 | https://github.com/leemingo/tigt | In the paper 'Topology-Informed Graph Transformer', what Accuracy score did the TIGT model get on the PATTERN dataset? | 86.680 |
| NExT-QA (Efficient) | SeViLA (4 frames) | Self-Chained Image-Language Model for Video Localization and Question Answering | 2023-05-11 | https://arxiv.org/abs/2305.06988v2 | https://github.com/yui010206/sevila | In the paper 'Self-Chained Image-Language Model for Video Localization and Question Answering', what 1:1 Accuracy score did the SeViLA (4 frames) model get on the NExT-QA (Efficient) dataset? | 73.8 |
| AgeDB | ResNet-50-Mean-Variance | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10 | https://arxiv.org/abs/2307.04570v3 | https://github.com/paplhjak/facial-age-estimation-benchmark | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-Mean-Variance model get on the AgeDB dataset? | 5.85 |
| SPOT-10 | NASNetMobile Distiller | SPOTS-10: Animal Pattern Benchmark Dataset for Machine Learning Algorithms | 2024-10-28 | https://arxiv.org/abs/2410.21044v1 | https://github.com/amotica/spots-10 | In the paper 'SPOTS-10: Animal Pattern Benchmark Dataset for Machine Learning Algorithms', what Accuracy score did the NASNetMobile Distiller model get on the SPOT-10 dataset? | 77.75 |
| Vid4 - 4x upscaling | EvTexture | EvTexture: Event-driven Texture Enhancement for Video Super-Resolution | 2024-06-19 | https://arxiv.org/abs/2406.13457v1 | https://github.com/dachunkai/evtexture | In the paper 'EvTexture: Event-driven Texture Enhancement for Video Super-Resolution', what PSNR score did the EvTexture model get on the Vid4 - 4x upscaling dataset? | 29.51 |
| CIFAR-100 | TURTLE (CLIP + DINOv2) | Let Go of Your Labels with Unsupervised Transfer | 2024-06-11 | https://arxiv.org/abs/2406.07236v1 | https://github.com/mlbio-epfl/turtle | In the paper 'Let Go of Your Labels with Unsupervised Transfer', what Accuracy score did the TURTLE (CLIP + DINOv2) model get on the CIFAR-100 dataset? | 0.899 |
| ICDAR2015 | CPPD | Context Perception Parallel Decoder for Scene Text Recognition | 2023-07-23 | https://arxiv.org/abs/2307.12270v2 | https://github.com/PaddlePaddle/PaddleOCR | In the paper 'Context Perception Parallel Decoder for Scene Text Recognition', what Accuracy score did the CPPD model get on the ICDAR2015 dataset? | 91.7 |
| Structured3D | SFSS-MMSI (RGB+Normal) | Single Frame Semantic Segmentation Using Multi-Modal Spherical Images | 2023-08-18 | https://arxiv.org/abs/2308.09369v1 | https://github.com/sguttikon/SFSS-MMSI | In the paper 'Single Frame Semantic Segmentation Using Multi-Modal Spherical Images', what Validation mIoU score did the SFSS-MMSI (RGB+Normal) model get on the Structured3D dataset? | 74.38 |
| LaSOT-ext | ODTrack-L | ODTrack: Online Dense Temporal Token Learning for Visual Tracking | 2024-01-03 | https://arxiv.org/abs/2401.01686v1 | https://github.com/gxnu-zhonglab/odtrack | In the paper 'ODTrack: Online Dense Temporal Token Learning for Visual Tracking', what AUC score did the ODTrack-L model get on the LaSOT-ext dataset? | 53.9 |
| ADNI | AXIAL | AXIAL: Attention-based eXplainability for Interpretable Alzheimer's Localized Diagnosis using 2D CNNs on 3D MRI brain scans | 2024-07-02 | https://arxiv.org/abs/2407.02418v2 | https://github.com/GabrieleLozupone/AXIAL | In the paper 'AXIAL: Attention-based eXplainability for Interpretable Alzheimer's Localized Diagnosis using 2D CNNs on 3D MRI brain scans', which AD-related brain areas did the AXIAL model identify on the ADNI dataset? | hippocampus, amygdala, parahippocampal, inferior lateral ventricles |
| CDD Dataset (season-varying) | SGSLN/512 | Exchanging Dual Encoder-Decoder: A New Strategy for Change Detection with Semantic Guidance and Spatial Localization | 2023-11-19 | https://arxiv.org/abs/2311.11302v1 | https://github.com/walking-shadow/Semantic-guidance-and-spatial-localization-network | In the paper 'Exchanging Dual Encoder-Decoder: A New Strategy for Change Detection with Semantic Guidance and Spatial Localization', what F1-Score did the SGSLN/512 model get on the CDD Dataset (season-varying) dataset? | 97.77 |
| 3DPW | Multi-HMR | Multi-HMR: Multi-Person Whole-Body Human Mesh Recovery in a Single Shot | 2024-02-22 | https://arxiv.org/abs/2402.14654v2 | https://github.com/naver/multi-hmr | In the paper 'Multi-HMR: Multi-Person Whole-Body Human Mesh Recovery in a Single Shot', what PA-MPJPE score did the Multi-HMR model get on the 3DPW dataset? | 41.7 |
| KADID-10k | UNIQA | You Only Train Once: A Unified Framework for Both Full-Reference and No-Reference Image Quality Assessment | 2023-10-14 | https://arxiv.org/abs/2310.09560v2 | https://github.com/barcodereader/yoto | In the paper 'You Only Train Once: A Unified Framework for Both Full-Reference and No-Reference Image Quality Assessment', what SRCC score did the UNIQA model get on the KADID-10k dataset? | 0.944 |
| Human3.6M | SoloPose (H36M+HeatPose+H71M) | SoloPose: One-Shot Kinematic 3D Human Pose Estimation with Video Data Augmentation | 2023-12-15 | https://arxiv.org/abs/2312.10195v1 | https://github.com/Santa-Clara-Media-Lab/SoloPose | In the paper 'SoloPose: One-Shot Kinematic 3D Human Pose Estimation with Video Data Augmentation', what Average MPJPE (mm) score did the SoloPose (H36M+HeatPose+H71M) model get on the Human3.6M dataset? | 26.0 |
| SYN-UDTIRI | CAINet | Context-Aware Interaction Network for RGB-T Semantic Segmentation | 2024-01-03 | https://arxiv.org/abs/2401.01624v1 | https://github.com/yinglv1106/cainet | In the paper 'Context-Aware Interaction Network for RGB-T Semantic Segmentation', what IoU score did the CAINet model get on the SYN-UDTIRI dataset? | 91.77 |
| ESOL | SMA | Self-Guided Masked Autoencoders for Domain-Agnostic Self-Supervised Learning | 2024-02-22 | https://arxiv.org/abs/2402.14789v1 | https://github.com/johnathan-xie/sma | In the paper 'Self-Guided Masked Autoencoders for Domain-Agnostic Self-Supervised Learning', what RMSE score did the SMA model get on the ESOL dataset? | 0.623 |
| UCF-101 | OmniTokenizer-AR | OmniTokenizer: A Joint Image-Video Tokenizer for Visual Generation | 2024-06-13 | https://arxiv.org/abs/2406.09399v1 | https://github.com/foundationvision/omnitokenizer | In the paper 'OmniTokenizer: A Joint Image-Video Tokenizer for Visual Generation', what FVD16 score did the OmniTokenizer-AR model get on the UCF-101 dataset? | 191 |
| VoiceBank + DEMAND | ROSE | ROSE: A Recognition-Oriented Speech Enhancement Framework in Air Traffic Control Using Multi-Objective Learning | 2023-12-11 | https://arxiv.org/abs/2312.06118v2 | https://github.com/xcyu-0903/rose | In the paper 'ROSE: A Recognition-Oriented Speech Enhancement Framework in Air Traffic Control Using Multi-Objective Learning', what PESQ score did the ROSE model get on the VoiceBank + DEMAND dataset? | 3.01 |
| Xiph-2K | VFIMamba | VFIMamba: Video Frame Interpolation with State Space Models | 2024-07-02 | https://arxiv.org/abs/2407.02315v2 | https://github.com/mcg-nju/vfimamba | In the paper 'VFIMamba: Video Frame Interpolation with State Space Models', what PSNR score did the VFIMamba model get on the Xiph-2K dataset? | 37.13 |
| TACO-Code | GPT-4 | TACO: Topics in Algorithmic COde generation dataset | 2023-12-22 | https://arxiv.org/abs/2312.14852v3 | https://github.com/flagopen/taco | In the paper 'TACO: Topics in Algorithmic COde generation dataset', what easy pass@1 score did the GPT-4 model get on the TACO-Code dataset? | 31.50% |
| SVAMP | DeBERTa | Math Word Problem Solving by Generating Linguistic Variants of Problem Statements | 2023-06-24 | https://arxiv.org/abs/2306.13899v1 | https://github.com/starscream-11813/variational-mathematical-reasoning | In the paper 'Math Word Problem Solving by Generating Linguistic Variants of Problem Statements', what Execution Accuracy score did the DeBERTa model get on the SVAMP dataset? | 63.5 |
| Text8 | BFN | Bayesian Flow Networks | 2023-08-14 | https://arxiv.org/abs/2308.07037v5 | https://github.com/nnaisense/bayesian-flow-networks | In the paper 'Bayesian Flow Networks', what Bit per Character (BPC) score did the BFN model get on the Text8 dataset? | 1.41 |
| CodeContests | MoTCoder-15B | MoTCoder: Elevating Large Language Models with Modular of Thought for Challenging Programming Tasks | 2023-12-26 | https://arxiv.org/abs/2312.15960v3 | https://github.com/dvlab-research/motcoder | In the paper 'MoTCoder: Elevating Large Language Models with Modular of Thought for Challenging Programming Tasks', what Test Set pass@1 score did the MoTCoder-15B model get on the CodeContests dataset? | 2.39 |
| HMDB51 | MSQNet | Actor-agnostic Multi-label Action Recognition with Multi-modal Query | 2023-07-20 | https://arxiv.org/abs/2307.10763v3 | https://github.com/mondalanindya/msqnet | In the paper 'Actor-agnostic Multi-label Action Recognition with Multi-modal Query', what Accuracy score did the MSQNet model get on the HMDB51 dataset? | 69.43 |
| IllusionVQA | Gemini-Pro 4-shot | IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models | 2024-03-23 | https://arxiv.org/abs/2403.15952v3 | https://github.com/csebuetnlp/illusionvqa | In the paper 'IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models', what Accuracy score did the Gemini-Pro 4-shot model get on the IllusionVQA dataset? | 41.8 |
| SYSU-CD | HANet | HANet: A Hierarchical Attention Network for Change Detection With Bitemporal Very-High-Resolution Remote Sensing Images | 2024-04-14 | https://arxiv.org/abs/2404.09178v1 | https://github.com/chengxihan/hanet-cd | In the paper 'HANet: A Hierarchical Attention Network for Change Detection With Bitemporal Very-High-Resolution Remote Sensing Images', what F1 score did the HANet model get on the SYSU-CD dataset? | 77.41 |
| RefCOCO+ testA | HYDRA | HYDRA: A Hyper Agent for Dynamic Compositional Visual Reasoning | 2024-03-19 | https://arxiv.org/abs/2403.12884v2 | https://github.com/ControlNet/HYDRA | In the paper 'HYDRA: A Hyper Agent for Dynamic Compositional Visual Reasoning', what IoU score did the HYDRA model get on the RefCOCO+ testA dataset? | 61.1 |
| PubChemQA | BioMedGPT-10B | BioMedGPT: Open Multimodal Generative Pre-trained Transformer for BioMedicine | 2023-08-18 | https://arxiv.org/abs/2308.09442v2 | https://github.com/pharmolix/openbiomed | In the paper 'BioMedGPT: Open Multimodal Generative Pre-trained Transformer for BioMedicine', what BLEU-2 score did the BioMedGPT-10B model get on the PubChemQA dataset? | 0.234 |
| MATH | DART-Math-Mistral-7B-Prop2Diff (0-shot CoT, w/o code) | DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving | 2024-06-18 | https://arxiv.org/abs/2407.13690v1 | https://github.com/hkust-nlp/dart-math | In the paper 'DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving', what Accuracy score did the DART-Math-Mistral-7B-Prop2Diff (0-shot CoT, w/o code) model get on the MATH dataset? | 45.5 |
| GSO | Wonder3D | Wonder3D: Single Image to 3D using Cross-Domain Diffusion | 2023-10-23 | https://arxiv.org/abs/2310.15008v3 | https://github.com/xxlong0/wonder3d | In the paper 'Wonder3D: Single Image to 3D using Cross-Domain Diffusion', what Chamfer Distance score did the Wonder3D model get on the GSO dataset? | 0.0199 |
| MATH | CR (GPT-4-turbo model, w/ code) | Cumulative Reasoning with Large Language Models | 2023-08-08 | https://arxiv.org/abs/2308.04371v6 | https://github.com/iiis-ai/cumulative-reasoning | In the paper 'Cumulative Reasoning with Large Language Models', what Accuracy score did the CR (GPT-4-turbo model, w/ code) model get on the MATH dataset? | 72.2 |
| SheetCopilot | SheetCopilot (NIPS2023) | SheetCopilot: Bringing Software Productivity to the Next Level through Large Language Models | 2023-05-30 | https://arxiv.org/abs/2305.19308v2 | https://github.com/bravegroup/sheetcopilot | In the paper 'SheetCopilot: Bringing Software Productivity to the Next Level through Large Language Models', what Pass@1 score did the SheetCopilot (NIPS2023) model get on the SheetCopilot dataset? | 44.3% |
| SVTP | CLIP4STR-B | CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model | 2023-05-23 | https://arxiv.org/abs/2305.14014v3 | https://github.com/VamosC/CLIP4STR | In the paper 'CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model', what Accuracy score did the CLIP4STR-B model get on the SVTP dataset? | 97.2 |
| CATH 4.3 | ESM-IF | Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement | 2023-05-20 | https://arxiv.org/abs/2305.15151v4 | https://github.com/A4Bio/OpenCPD | In the paper 'Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement', what Sequence Recovery % (All) score did the ESM-IF model get on the CATH 4.3 dataset? | 38.3 |
| DomainNet | GMDG (ResNet-50, SWAD) | Rethinking Multi-domain Generalization with A General Learning Objective | 2024-02-29 | https://arxiv.org/abs/2402.18853v1 | https://github.com/zhaorui-tan/GMDG_cvpr2024 | In the paper 'Rethinking Multi-domain Generalization with A General Learning Objective', what Average Accuracy score did the GMDG (ResNet-50, SWAD) model get on the DomainNet dataset? | 47.3 |
| Perception Test | Flamingo | Perception Test: A Diagnostic Benchmark for Multimodal Video Models | 2023-05-23 | https://arxiv.org/abs/2305.13786v2 | https://github.com/deepmind/perception_test | In the paper 'Perception Test: A Diagnostic Benchmark for Multimodal Video Models', what Accuracy (Top-1) score did the Flamingo model get on the Perception Test dataset? | 0.46 |
| RWTH-PHOENIX-Weather 2014 | TCNet | TCNet: Continuous Sign Language Recognition from Trajectories and Correlated Regions | 2024-03-18 | https://arxiv.org/abs/2403.11818v1 | https://github.com/hotfinda/tcnet | In the paper 'TCNet: Continuous Sign Language Recognition from Trajectories and Correlated Regions', what Word Error Rate (WER) score did the TCNet model get on the RWTH-PHOENIX-Weather 2014 dataset? | 18.9 |
| SVTP | CLIP4STR-L* | An Empirical Study of Scaling Law for OCR | 2023-12-29 | https://arxiv.org/abs/2401.00028v3 | https://github.com/large-ocr-model/large-ocr-model.github.io | In the paper 'An Empirical Study of Scaling Law for OCR', what Accuracy score did the CLIP4STR-L* model get on the SVTP dataset? | 98.13 |
| ogbn-products | GCN | Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification | 2024-06-13 | https://arxiv.org/abs/2406.08993v2 | https://github.com/LUOyk1999/tunedGNN | In the paper 'Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification', what Test Accuracy score did the GCN model get on the ogbn-products dataset? | 0.8233 ± 0.0019 |
| MATH | PaLM 2 (few-shot, k=4, SC) | PaLM 2 Technical Report | 2023-05-17 | https://arxiv.org/abs/2305.10403v3 | https://github.com/eternityyw/tram-benchmark | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2 (few-shot, k=4, SC) model get on the MATH dataset? | 48.8 |
| Cityscapes test | EAGLE (DINO, ViT-B/8) | EAGLE: Eigen Aggregation Learning for Object-Centric Unsupervised Semantic Segmentation | 2024-03-03 | https://arxiv.org/abs/2403.01482v4 | https://github.com/MICV-yonsei/EAGLE | In the paper 'EAGLE: Eigen Aggregation Learning for Object-Centric Unsupervised Semantic Segmentation', what mIoU score did the EAGLE (DINO, ViT-B/8) model get on the Cityscapes test dataset? | 22.1 |
| MM-Vet | Gemini 1.5 Pro (gemini-1.5-pro-002) | Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context | 2024-03-08 | https://arxiv.org/abs/2403.05530v4 | https://github.com/dlvuldet/primevul | In the paper 'Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context', what GPT-4 score did the Gemini 1.5 Pro (gemini-1.5-pro-002) model get on the MM-Vet dataset? | 76.9 ± 0.1 |
| Google Speech Commands | SSAMBA | SSAMBA: Self-Supervised Audio Representation Learning with Mamba State Space Model | 2024-05-20 | https://arxiv.org/abs/2405.11831v1 | https://github.com/siavashshams/ssamba | In the paper 'SSAMBA: Self-Supervised Audio Representation Learning with Mamba State Space Model', what Google Speech Commands V1 12 score did the SSAMBA model get on the Google Speech Commands dataset? | 96.9 |
| ARC (Easy) | PaLM 2-M (1-shot) | PaLM 2 Technical Report | 2023-05-17 | https://arxiv.org/abs/2305.10403v3 | https://github.com/eternityyw/tram-benchmark | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-M (1-shot) model get on the ARC (Easy) dataset? | 88.0 |
| CIFAR-10 | CKGCN | CKGConv: General Graph Convolution with Continuous Kernels | 2024-04-21 | https://arxiv.org/abs/2404.13604v2 | https://github.com/networkslab/ckgconv | In the paper 'CKGConv: General Graph Convolution with Continuous Kernels', what Accuracy score did the CKGCN model get on the CIFAR-10 dataset? | 72.785 |
| MATH | Gemini Ultra (4-shot) | Gemini: A Family of Highly Capable Multimodal Models | 2023-12-19 | https://arxiv.org/abs/2312.11805v4 | https://github.com/valdecy/pybibx | In the paper 'Gemini: A Family of Highly Capable Multimodal Models', what Accuracy score did the Gemini Ultra (4-shot) model get on the MATH dataset? | 53.2 |
| SRD | RASM | Regional Attention for Shadow Removal | 2024-11-21 | https://arxiv.org/abs/2411.14201v1 | https://github.com/CalcuLuUus/RASM | In the paper 'Regional Attention for Shadow Removal', what RMSE score did the RASM model get on the SRD dataset? | 3.37 |
| BBBP | elEmBERT-V1 | Structure to Property: Chemical Element Embeddings and a Deep Learning Approach for Accurate Prediction of Chemical Properties | 2023-09-17 | https://arxiv.org/abs/2309.09355v3 | https://github.com/dmamur/elembert | In the paper 'Structure to Property: Chemical Element Embeddings and a Deep Learning Approach for Accurate Prediction of Chemical Properties', what AUC score did the elEmBERT-V1 model get on the BBBP dataset? | 0.905 |
| NUS | PromptRank | PromptRank: Unsupervised Keyphrase Extraction Using Prompt | 2023-05-08 | https://arxiv.org/abs/2305.04490v2 | https://github.com/hlt-nlp/promptrank | In the paper 'PromptRank: Unsupervised Keyphrase Extraction Using Prompt', what F1@10 score did the PromptRank model get on the NUS dataset? | 20.13 |
| ETTh1 (96) Multivariate | MoLE-DLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11 | https://arxiv.org/abs/2312.06786v3 | https://github.com/rogerni/mole | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the ETTh1 (96) Multivariate dataset? | 0.377 |
| CUHK-PEDES | MARS | MARS: Paying more attention to visual attributes for text-based person search | 2024-07-05 | https://arxiv.org/abs/2407.04287v1 | https://github.com/ergastialex/mars | In the paper 'MARS: Paying more attention to visual attributes for text-based person search', what R@1 score did the MARS model get on the CUHK-PEDES dataset? | 77.62 |
| RSICD | GeoRSCLIP-FT | RS5M and GeoRSCLIP: A Large Scale Vision-Language Dataset and A Large Vision-Language Model for Remote Sensing | 2023-06-20 | https://arxiv.org/abs/2306.11300v5 | https://github.com/om-ai-lab/rs5m | In the paper 'RS5M and GeoRSCLIP: A Large Scale Vision-Language Dataset and A Large Vision-Language Model for Remote Sensing', what Image to Text Recall@1 score did the GeoRSCLIP-FT model get on the RSICD dataset? | 22.14% |
| SMC | Beat This! | Beat this! Accurate beat tracking without DBN postprocessing | 2024-07-31 | https://arxiv.org/abs/2407.21658v1 | https://github.com/CPJKU/beat_this | In the paper 'Beat this! Accurate beat tracking without DBN postprocessing', what F1 score did the Beat This! model get on the SMC dataset? | 62.7 |
| MNIST | TURTLE (CLIP + DINOv2) | Let Go of Your Labels with Unsupervised Transfer | 2024-06-11 | https://arxiv.org/abs/2406.07236v1 | https://github.com/mlbio-epfl/turtle | In the paper 'Let Go of Your Labels with Unsupervised Transfer', what Accuracy score did the TURTLE (CLIP + DINOv2) model get on the MNIST dataset? | 97.8 |
COCO 2% labeled data | Guided Distillation (ResNet50) | Guided Distillation for Semi-Supervised Instance Segmentation | 2023-08-03T00:00:00 | https://arxiv.org/abs/2308.02668v2 | [
"https://github.com/facebookresearch/guideddistillation"
] | In the paper 'Guided Distillation for Semi-Supervised Instance Segmentation', what mask AP score did the Guided Distillation (ResNet50) model get on the COCO 2% labeled data dataset
| 25.3 |
CALVIN | 3D Diffuser Actor | 3D Diffuser Actor: Policy Diffusion with 3D Scene Representations | 2024-02-18T00:00:00 | https://arxiv.org/abs/2402.10885 | [
"https://github.com/nickgkan/3d_diffuser_actor"
] | In the paper '3D Diffuser Actor: Policy Diffusion with 3D Scene Representations', what Avg. sequence length score did the 3D Diffuser Actor model get on the CALVIN dataset
| 3.27 |
MultiviewX | TrackTacular (Bilinear Sampling) | Lifting Multi-View Detection and Tracking to the Bird's Eye View | 2024-03-19T00:00:00 | https://arxiv.org/abs/2403.12573v1 | [
"https://github.com/tteepe/tracktacular"
] | In the paper 'Lifting Multi-View Detection and Tracking to the Bird's Eye View', what IDF1 score did the TrackTacular (Bilinear Sampling) model get on the MultiviewX dataset
| 85.6 |
Refer-YouTube-VOS (2021 public validation) | SOC (Video-Swin-T) | SOC: Semantic-Assisted Object Cluster for Referring Video Object Segmentation | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.17011v1 | [
"https://github.com/RobertLuo1/NeurIPS2023_SOC"
] | In the paper 'SOC: Semantic-Assisted Object Cluster for Referring Video Object Segmentation', what J&F score did the SOC (Video-Swin-T) model get on the Refer-YouTube-VOS (2021 public validation) dataset
| 59.2 |
SNU-FILM (easy) | VFIMamba | VFIMamba: Video Frame Interpolation with State Space Models | 2024-07-02T00:00:00 | https://arxiv.org/abs/2407.02315v2 | [
"https://github.com/mcg-nju/vfimamba"
] | In the paper 'VFIMamba: Video Frame Interpolation with State Space Models', what PSNR score did the VFIMamba model get on the SNU-FILM (easy) dataset
| 40.51 |
ImageNet-1k vs Textures | ODIN+UMAP (ResNet-50) | Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability | 2023-06-06T00:00:00 | https://arxiv.org/abs/2306.03715v1 | [
"https://github.com/tmlr-group/unleashing-mask"
] | In the paper 'Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability', what FPR95 score did the ODIN+UMAP (ResNet-50) model get on the ImageNet-1k vs Textures dataset
| 42.02 |
HME100K | ICAL | ICAL: Implicit Character-Aided Learning for Enhanced Handwritten Mathematical Expression Recognition | 2024-05-15T00:00:00 | https://arxiv.org/abs/2405.09032v4 | [
"https://github.com/qingzhenduyu/ical"
] | In the paper 'ICAL: Implicit Character-Aided Learning for Enhanced Handwritten Mathematical Expression Recognition', what ExpRate score did the ICAL model get on the HME100K dataset
| 69.06 |
PROTEINS | G-Tuning | Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns | 2023-12-21T00:00:00 | https://arxiv.org/abs/2312.13583v1 | [
"https://github.com/zjunet/G-Tuning"
] | In the paper 'Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns', what Accuracy (10 fold) score did the G-Tuning model get on the PROTEINS dataset
| 72.05 |
RefCOCO testB | MagNet | Mask Grounding for Referring Image Segmentation | 2023-12-19T00:00:00 | https://arxiv.org/abs/2312.12198v2 | [
"https://github.com/yxchng/mask-grounding"
] | In the paper 'Mask Grounding for Referring Image Segmentation', what Overall IoU score did the MagNet model get on the RefCOCO testB dataset
| 71.05 |
APPS | MapCoder APPS-150-cherrypicked (GPT-4) | MapCoder: Multi-Agent Code Generation for Competitive Problem Solving | 2024-05-18T00:00:00 | https://arxiv.org/abs/2405.11403v1 | [
"https://github.com/md-ashraful-pramanik/mapcoder"
] | In the paper 'MapCoder: Multi-Agent Code Generation for Competitive Problem Solving', what Competition Pass@any score did the MapCoder APPS-150-cherrypicked (GPT-4) model get on the APPS dataset
| 22.0 |
Electricity (720) | DiPE-Linear | Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting | 2024-11-26T00:00:00 | https://arxiv.org/abs/2411.17257v1 | [
"https://github.com/wintertee/dipe-linear"
] | In the paper 'Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting', what MSE score did the DiPE-Linear model get on the Electricity (720) dataset
| 0.198 |
Texas | M2M-GNN | Sign is Not a Remedy: Multiset-to-Multiset Message Passing for Learning on Heterophilic Graphs | 2024-05-31T00:00:00 | https://arxiv.org/abs/2405.20652v1 | [
"https://github.com/Jinx-byebye/m2mgnn"
] | In the paper 'Sign is Not a Remedy: Multiset-to-Multiset Message Passing for Learning on Heterophilic Graphs', what Accuracy score did the M2M-GNN model get on the Texas dataset
| 89.19 ± 4.5 |
LibriTTS | PeriodWave + FreeU | PeriodWave: Multi-Period Flow Matching for High-Fidelity Waveform Generation | 2024-08-14T00:00:00 | https://arxiv.org/abs/2408.07547v1 | [
"https://github.com/sh-lee-prml/periodwave"
] | In the paper 'PeriodWave: Multi-Period Flow Matching for High-Fidelity Waveform Generation', what PESQ score did the PeriodWave + FreeU model get on the LibriTTS dataset
| 4.248 |
GoPro | ID-Blau (FFTformer) | ID-Blau: Image Deblurring by Implicit Diffusion-based reBLurring AUgmentation | 2023-12-18T00:00:00 | https://arxiv.org/abs/2312.10998v2 | [
"https://github.com/plusgood-steven/id-blau"
] | In the paper 'ID-Blau: Image Deblurring by Implicit Diffusion-based reBLurring AUgmentation', what PSNR score did the ID-Blau (FFTformer) model get on the GoPro dataset
| 34.36 |
GigaSpeech TEST | Zipformer+CR-CTC/AED (no external language model) | CR-CTC: Consistency regularization on CTC for improved speech recognition | 2024-10-07T00:00:00 | https://arxiv.org/abs/2410.05101v3 | [
"https://github.com/k2-fsa/icefall"
] | In the paper 'CR-CTC: Consistency regularization on CTC for improved speech recognition', what Word Error Rate (WER) score did the Zipformer+CR-CTC/AED (no external language model) model get on the GigaSpeech TEST dataset
| 10.07 |
NYU-Depth V2 | EVP | EVP: Enhanced Visual Perception using Inverse Multi-Attentive Feature Refinement and Regularized Image-Text Alignment | 2023-12-13T00:00:00 | https://arxiv.org/abs/2312.08548v1 | [
"https://github.com/lavreniuk/evp"
] | In the paper 'EVP: Enhanced Visual Perception using Inverse Multi-Attentive Feature Refinement and Regularized Image-Text Alignment', what RMS score did the EVP model get on the NYU-Depth V2 dataset
| 0.224 |
FFHQ 256 x 256 | LFM | Flow Matching in Latent Space | 2023-07-17T00:00:00 | https://arxiv.org/abs/2307.08698v1 | [
"https://github.com/vinairesearch/lfm"
] | In the paper 'Flow Matching in Latent Space', what FID score did the LFM model get on the FFHQ 256 x 256 dataset
| 4.55 |
ScanNetV2 | OneFormer3D | OneFormer3D: One Transformer for Unified Point Cloud Segmentation | 2023-11-24T00:00:00 | https://arxiv.org/abs/2311.14405v1 | [
"https://github.com/oneformer3d/oneformer3d"
] | In the paper 'OneFormer3D: One Transformer for Unified Point Cloud Segmentation', what mAP@0.25 score did the OneFormer3D model get on the ScanNetV2 dataset
| 76.9 |
MATH | DART-Math-Llama3-8B-Prop2Diff (0-shot CoT, w/o code) | DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving | 2024-06-18T00:00:00 | https://arxiv.org/abs/2407.13690v1 | [
"https://github.com/hkust-nlp/dart-math"
] | In the paper 'DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving', what Accuracy score did the DART-Math-Llama3-8B-Prop2Diff (0-shot CoT, w/o code) model get on the MATH dataset
| 46.6 |
YouTube-VIS 2021 | DVIS++(VIT-L, Online) | DVIS++: Improved Decoupled Framework for Universal Video Segmentation | 2023-12-20T00:00:00 | https://arxiv.org/abs/2312.13305v1 | [
"https://github.com/zhang-tao-whu/DVIS_Plus"
] | In the paper 'DVIS++: Improved Decoupled Framework for Universal Video Segmentation', what mask AP score did the DVIS++(VIT-L, Online) model get on the YouTube-VIS 2021 dataset
| 62.3 |