| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
| LRS2 | Whisper-LLaMA | Whispering LLaMA: A Cross-Modal Generative Error Correction Framework for Speech Recognition | 2023-10-10 | https://arxiv.org/abs/2310.06434v2 | https://github.com/srijith-rkr/whispering-llama | In the paper 'Whispering LLaMA: A Cross-Modal Generative Error Correction Framework for Speech Recognition', what Test WER score did the Whisper-LLaMA model get on the LRS2 dataset? | 6.6 |
| GRAZPEDWRI-DX | YOLOv10-X | Pediatric Wrist Fracture Detection in X-rays via YOLOv10 Algorithm and Dual Label Assignment System | 2024-07-22 | https://arxiv.org/abs/2407.15689v2 | https://github.com/ammarlodhi255/YOLOv10-Fracture-Detection | In the paper 'Pediatric Wrist Fracture Detection in X-rays via YOLOv10 Algorithm and Dual Label Assignment System', what mAP score did the YOLOv10-X model get on the GRAZPEDWRI-DX dataset? | 76.2 |
| Coauthor Physics | GCN | Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification | 2024-06-13 | https://arxiv.org/abs/2406.08993v2 | https://github.com/LUOyk1999/tunedGNN | In the paper 'Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification', what Accuracy score did the GCN model get on the Coauthor Physics dataset? | 97.46 ± 0.10 |
| H3WB | 3D-LFM | 3D-LFM: Lifting Foundation Model | 2023-12-19 | https://arxiv.org/abs/2312.11894v2 | https://github.com/mosamdabhi/3dlfm | In the paper '3D-LFM: Lifting Foundation Model', what MPJPE score did the 3D-LFM model get on the H3WB dataset? | 60.83 |
| ETTh1 (96) Multivariate | TSMixer | TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting | 2023-06-14 | https://arxiv.org/abs/2306.09364v4 | https://github.com/ibm/tsfm | In the paper 'TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting', what MSE score did the TSMixer model get on the ETTh1 (96) Multivariate dataset? | 0.368 |
| MOT20 | SFSORT | SFSORT: Scene Features-based Simple Online Real-Time Tracker | 2024-04-11 | https://arxiv.org/abs/2404.07553v1 | https://github.com/gitmehrdad/sfsort | In the paper 'SFSORT: Scene Features-based Simple Online Real-Time Tracker', what MOTA score did the SFSORT model get on the MOT20 dataset? | 75 |
| EgoTaskQA | GF(uns) | Glance and Focus: Memory Prompting for Multi-Event Video Question Answering | 2024-01-03 | https://arxiv.org/abs/2401.01529v1 | https://github.com/byz0e/glance-focus | In the paper 'Glance and Focus: Memory Prompting for Multi-Event Video Question Answering', what Direct score did the GF(uns) model get on the EgoTaskQA dataset? | 43.06 |
| St Lucia | ProGEO | ProGEO: Generating Prompts through Image-Text Contrastive Learning for Visual Geo-localization | 2024-06-04 | https://arxiv.org/abs/2406.01906v1 | https://github.com/chain-mao/progeo | In the paper 'ProGEO: Generating Prompts through Image-Text Contrastive Learning for Visual Geo-localization', what Recall@1 score did the ProGEO model get on the St Lucia dataset? | 99.7 |
| ColonINST-v1 (Unseen) | LLaVA-Med-v1.0 (w/o LoRA, w/o extra data) | LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day | 2023-06-01 | https://arxiv.org/abs/2306.00890v1 | https://github.com/microsoft/LLaVA-Med | In the paper 'LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day', what Accuracy score did the LLaVA-Med-v1.0 (w/o LoRA, w/o extra data) model get on the ColonINST-v1 (Unseen) dataset? | 75.07 |
| ChEBI-20 | BioT5+ | BioT5+: Towards Generalized Biological Understanding with IUPAC Integration and Multi-task Tuning | 2024-02-27 | https://arxiv.org/abs/2402.17810v2 | https://github.com/QizhiPei/BioT5 | In the paper 'BioT5+: Towards Generalized Biological Understanding with IUPAC Integration and Multi-task Tuning', what Text2Mol score did the BioT5+ model get on the ChEBI-20 dataset? | 57.9 |
| S3DIS Area5 | SuperCluster | Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering | 2024-01-12 | https://arxiv.org/abs/2401.06704v2 | https://github.com/drprojects/superpoint_transformer | In the paper 'Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering', what mIoU score did the SuperCluster model get on the S3DIS Area5 dataset? | 68.1 |
| COCO-20i (5-shot) | MIANet (ResNet-50) | MIANet: Aggregating Unbiased Instance and General Information for Few-Shot Semantic Segmentation | 2023-05-23 | https://arxiv.org/abs/2305.13864v1 | https://github.com/aldrich2y/mianet | In the paper 'MIANet: Aggregating Unbiased Instance and General Information for Few-Shot Semantic Segmentation', what Mean IoU score did the MIANet (ResNet-50) model get on the COCO-20i (5-shot) dataset? | 51.65 |
| EconLogicQA | Gemma-7B-IT | EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning | 2024-05-13 | https://arxiv.org/abs/2405.07938v2 | https://github.com/yinzhu-quan/lm-evaluation-harness | In the paper 'EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning', what Accuracy score did the Gemma-7B-IT model get on the EconLogicQA dataset? | 0.0231 |
| Office-31 | SFDA2 | SF(DA)$^2$: Source-free Domain Adaptation Through the Lens of Data Augmentation | 2024-03-16 | https://arxiv.org/abs/2403.10834v1 | https://github.com/shinyflight/sfda2 | In the paper 'SF(DA)$^2$: Source-free Domain Adaptation Through the Lens of Data Augmentation', what Average Accuracy score did the SFDA2 model get on the Office-31 dataset? | 89.9 |
| ChestX-ray14 | BayesAgg-MTL | Bayesian Uncertainty for Gradient Aggregation in Multi-Task Learning | 2024-02-06 | https://arxiv.org/abs/2402.04005v2 | https://github.com/ssi-research/bayesagg_mtl | In the paper 'Bayesian Uncertainty for Gradient Aggregation in Multi-Task Learning', what delta_m score did the BayesAgg-MTL model get on the ChestX-ray14 dataset? | −14.96 |
| Fishyscapes | Mask2Anomaly | Unmasking Anomalies in Road-Scene Segmentation | 2023-07-25 | https://arxiv.org/abs/2307.13316v1 | https://github.com/shyam671/mask2anomaly-unmasking-anomalies-in-road-scene-segmentation | In the paper 'Unmasking Anomalies in Road-Scene Segmentation', what AP score did the Mask2Anomaly model get on the Fishyscapes dataset? | 95.20 |
| OVIS validation | DVIS++(VIT-L, Online) | DVIS++: Improved Decoupled Framework for Universal Video Segmentation | 2023-12-20 | https://arxiv.org/abs/2312.13305v1 | https://github.com/zhang-tao-whu/DVIS_Plus | In the paper 'DVIS++: Improved Decoupled Framework for Universal Video Segmentation', what mask AP score did the DVIS++(VIT-L, Online) model get on the OVIS validation dataset? | 49.6 |
| Office-Home | PGA (RN50) | Enhancing Domain Adaptation through Prompt Gradient Alignment | 2024-06-13 | https://arxiv.org/abs/2406.09353v2 | https://github.com/viethoang1512/pga | In the paper 'Enhancing Domain Adaptation through Prompt Gradient Alignment', what Accuracy score did the PGA (RN50) model get on the Office-Home dataset? | 75.8 |
| Tox21 | G-Tuning | Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns | 2023-12-21 | https://arxiv.org/abs/2312.13583v1 | https://github.com/zjunet/G-Tuning | In the paper 'Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns', what ROC-AUC score did the G-Tuning model get on the Tox21 dataset? | 75.80 |
| NTU RGB+D | EPP-Net (Parsing + Pose) | Explore Human Parsing Modality for Action Recognition | 2024-01-04 | https://arxiv.org/abs/2401.02138v1 | https://github.com/liujf69/EPP-Net-Action | In the paper 'Explore Human Parsing Modality for Action Recognition', what Accuracy (CS) score did the EPP-Net (Parsing + Pose) model get on the NTU RGB+D dataset? | 94.7 |
| UCR Anomaly Archive | AE | Explainable Time Series Anomaly Detection using Masked Latent Generative Modeling | 2023-11-21 | https://arxiv.org/abs/2311.12550v5 | https://github.com/ml4its/timevqvae-anomalydetection | In the paper 'Explainable Time Series Anomaly Detection using Masked Latent Generative Modeling', what accuracy score did the AE model get on the UCR Anomaly Archive dataset? | 0.236 |
| MM-Vet | Gemini 1.0 Pro Vision (gemini-pro-vision) | Gemini: A Family of Highly Capable Multimodal Models | 2023-12-19 | https://arxiv.org/abs/2312.11805v4 | https://github.com/valdecy/pybibx | In the paper 'Gemini: A Family of Highly Capable Multimodal Models', what GPT-4 score did the Gemini 1.0 Pro Vision (gemini-pro-vision) model get on the MM-Vet dataset? | 64.3±0.4 |
| SIM10K to Cityscapes | AT (ResNet50-FPN) | Align and Distill: Unifying and Improving Domain Adaptive Object Detection | 2024-03-18 | https://arxiv.org/abs/2403.12029v2 | https://github.com/justinkay/aldi | In the paper 'Align and Distill: Unifying and Improving Domain Adaptive Object Detection', what mAP@0.5 score did the AT (ResNet50-FPN) model get on the SIM10K to Cityscapes dataset? | 72.0 |
| ENZYMES | G-Tuning | Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns | 2023-12-21 | https://arxiv.org/abs/2312.13583v1 | https://github.com/zjunet/G-Tuning | In the paper 'Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns', what Accuracy (10 fold) score did the G-Tuning model get on the ENZYMES dataset? | 26.70 |
| SQA3D | CREMA | CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion | 2024-02-08 | https://arxiv.org/abs/2402.05889v3 | https://github.com/Yui010206/CREMA | In the paper 'CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion', what AnswerExactMatch (Question Answering) score did the CREMA model get on the SQA3D dataset? | 54.6 |
| Charades | MSQNet | Actor-agnostic Multi-label Action Recognition with Multi-modal Query | 2023-07-20 | https://arxiv.org/abs/2307.10763v3 | https://github.com/mondalanindya/msqnet | In the paper 'Actor-agnostic Multi-label Action Recognition with Multi-modal Query', what mAP score did the MSQNet model get on the Charades dataset? | 35.59 |
| MOSE | Cutie (small) | Putting the Object Back into Video Object Segmentation | 2023-10-19 | https://arxiv.org/abs/2310.12982v2 | https://github.com/hkchengrex/Cutie | In the paper 'Putting the Object Back into Video Object Segmentation', what J&F score did the Cutie (small) model get on the MOSE dataset? | 62.2 |
| CFC-DAOD | SADA (ResNet50-FPN) | Align and Distill: Unifying and Improving Domain Adaptive Object Detection | 2024-03-18 | https://arxiv.org/abs/2403.12029v2 | https://github.com/justinkay/aldi | In the paper 'Align and Distill: Unifying and Improving Domain Adaptive Object Detection', what AP@0.5 score did the SADA (ResNet50-FPN) model get on the CFC-DAOD dataset? | 58.9 |
| Urban100 - 4x upscaling | AESOP | Auto-Encoded Supervision for Perceptual Image Super-Resolution | 2024-11-28 | https://arxiv.org/abs/2412.00124v1 | https://github.com/2minkyulee/aesop-auto-encoded-supervision-for-perceptual-image-super-resolution | In the paper 'Auto-Encoded Supervision for Perceptual Image Super-Resolution', what PSNR score did the AESOP model get on the Urban100 - 4x upscaling dataset? | 26.148 |
| GoogleGZ-CD | HANet | HANet: A Hierarchical Attention Network for Change Detection With Bitemporal Very-High-Resolution Remote Sensing Images | 2024-04-14 | https://arxiv.org/abs/2404.09178v1 | https://github.com/chengxihan/hanet-cd | In the paper 'HANet: A Hierarchical Attention Network for Change Detection With Bitemporal Very-High-Resolution Remote Sensing Images', what F1 score did the HANet model get on the GoogleGZ-CD dataset? | 75.28 |
| LLRGBD-synthetic | SMMCL (SegNeXt-B) | Understanding Dark Scenes by Contrasting Multi-Modal Observations | 2023-08-23 | https://arxiv.org/abs/2308.12320v2 | https://github.com/palmdong/smmcl | In the paper 'Understanding Dark Scenes by Contrasting Multi-Modal Observations', what mIoU score did the SMMCL (SegNeXt-B) model get on the LLRGBD-synthetic dataset? | 68.76 |
| CARPK | CounTX (uses arbitrary text input to specify object to count, used "the cars" for CARPK) | Open-world Text-specified Object Counting | 2023-06-02 | https://arxiv.org/abs/2306.01851v2 | https://github.com/niki-amini-naieni/countx | In the paper 'Open-world Text-specified Object Counting', what MAE score did the CounTX (uses arbitrary text input to specify object to count, used "the cars" for CARPK) model get on the CARPK dataset? | 8.13 |
| nuScenes (Distant PCR) | Predator+APR(a) | APR: Online Distant Point Cloud Registration Through Aggregated Point Cloud Reconstruction | 2023-05-04 | https://arxiv.org/abs/2305.02893v2 | https://github.com/liuquan98/apr | In the paper 'APR: Online Distant Point Cloud Registration Through Aggregated Point Cloud Reconstruction', what mRR @ Normal Criterion (1.5°&0.3m) score did the Predator+APR(a) model get on the nuScenes (Distant PCR) dataset? | 52.2 |
| Human3.6M | VoxelKeypointFusion (transfer) | VoxelKeypointFusion: Generalizable Multi-View Multi-Person Pose Estimation | 2024-10-24 | https://arxiv.org/abs/2410.18723v2 | https://gitlab.com/Percipiote/VoxelKeypointFusion | In the paper 'VoxelKeypointFusion: Generalizable Multi-View Multi-Person Pose Estimation', what Average MPJPE (mm) score did the VoxelKeypointFusion (transfer) model get on the Human3.6M dataset? | 64.3 |
| ETTm1 (336) Multivariate | PRformer | PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting | 2024-08-20 | https://arxiv.org/abs/2408.10483v1 | https://github.com/usualheart/prformer | In the paper 'PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting', what MSE score did the PRformer model get on the ETTm1 (336) Multivariate dataset? | 0.362 |
| IAM | DTrOCR 105M | DTrOCR: Decoder-only Transformer for Optical Character Recognition | 2023-08-30 | https://arxiv.org/abs/2308.15996v1 | https://github.com/arvindrajan92/DTrOCR | In the paper 'DTrOCR: Decoder-only Transformer for Optical Character Recognition', what CER score did the DTrOCR 105M model get on the IAM dataset? | 2.38 |
| EPIC-KITCHENS-100 | TAdaConvNeXtV2-S | Temporally-Adaptive Models for Efficient Video Understanding | 2023-08-10 | https://arxiv.org/abs/2308.05787v1 | https://github.com/alibaba-mmai-research/TAdaConv | In the paper 'Temporally-Adaptive Models for Efficient Video Understanding', what Action@1 score did the TAdaConvNeXtV2-S model get on the EPIC-KITCHENS-100 dataset? | 48.9 |
| UCF-101 | LARP | LARP: Tokenizing Videos with a Learned Autoregressive Generative Prior | 2024-10-28 | https://arxiv.org/abs/2410.21264v1 | https://github.com/hywang66/LARP | In the paper 'LARP: Tokenizing Videos with a Learned Autoregressive Generative Prior', what FVD16 score did the LARP model get on the UCF-101 dataset? | 57 |
| GSM8K | RFT 70B | Scaling Relationship on Learning Mathematical Reasoning with Large Language Models | 2023-08-03 | https://arxiv.org/abs/2308.01825v2 | https://github.com/ofa-sys/gsm8k-screl | In the paper 'Scaling Relationship on Learning Mathematical Reasoning with Large Language Models', what Accuracy score did the RFT 70B model get on the GSM8K dataset? | 64.8 |
| ETTh1 (192) Multivariate | MoLE-RLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11 | https://arxiv.org/abs/2312.06786v3 | https://github.com/rogerni/mole | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-RLinear model get on the ETTh1 (192) Multivariate dataset? | 0.403 |
| ETTh1 (336) Multivariate | VCformer | VCformer: Variable Correlation Transformer with Inherent Lagged Correlation for Multivariate Time Series Forecasting | 2024-05-19 | https://arxiv.org/abs/2405.11470v1 | https://github.com/csyyn/vcformer | In the paper 'VCformer: Variable Correlation Transformer with Inherent Lagged Correlation for Multivariate Time Series Forecasting', what MSE score did the VCformer model get on the ETTh1 (336) Multivariate dataset? | 0.473 |
| nuscenes Camera-Radar | RCBEVDet | RCBEVDet: Radar-camera Fusion in Bird's Eye View for 3D Object Detection | 2024-03-25 | https://arxiv.org/abs/2403.16440v1 | https://github.com/vdigpku/rcbevdet | In the paper 'RCBEVDet: Radar-camera Fusion in Bird's Eye View for 3D Object Detection', what NDS score did the RCBEVDet model get on the nuscenes Camera-Radar dataset? | 63.9 |
| Cora with Public Split: fixed 20 nodes per class | GEM | Graph Entropy Minimization for Semi-supervised Node Classification | 2023-05-31 | https://arxiv.org/abs/2305.19502v1 | https://github.com/cf020031308/gem | In the paper 'Graph Entropy Minimization for Semi-supervised Node Classification', what Accuracy score did the GEM model get on the Cora with Public Split: fixed 20 nodes per class dataset? | 83.05% |
| IEMOCAP | MultiMAE-DER (V + A) | MultiMAE-DER: Multimodal Masked Autoencoder for Dynamic Emotion Recognition | 2024-04-28 | https://arxiv.org/abs/2404.18327v2 | https://github.com/Peihao-Xiang/MultiMAE-DFER | In the paper 'MultiMAE-DER: Multimodal Masked Autoencoder for Dynamic Emotion Recognition', what WAR score did the MultiMAE-DER (V + A) model get on the IEMOCAP dataset? | 63.73 |
| HRSOD | BiRefNet (DUTS) | Bilateral Reference for High-Resolution Dichotomous Image Segmentation | 2024-01-07 | https://arxiv.org/abs/2401.03407v6 | https://github.com/zhengpeng7/birefnet | In the paper 'Bilateral Reference for High-Resolution Dichotomous Image Segmentation', what S-Measure score did the BiRefNet (DUTS) model get on the HRSOD dataset? | 0.957 |
| WebApp1K-React | deepseek-coder-v2-instruct | Insights from Benchmarking Frontier Language Models on Web App Code Generation | 2024-09-08 | https://arxiv.org/abs/2409.05177v1 | https://github.com/onekq/webapp1k | In the paper 'Insights from Benchmarking Frontier Language Models on Web App Code Generation', what pass@1 score did the deepseek-coder-v2-instruct model get on the WebApp1K-React dataset? | 0.7002 |
| COCO Captions | LaDiC (ours, 30 steps) | LaDiC: Are Diffusion Models Really Inferior to Autoregressive Counterparts for Image-to-Text Generation? | 2024-04-16 | https://arxiv.org/abs/2404.10763v1 | https://github.com/wangyuchi369/ladic | In the paper 'LaDiC: Are Diffusion Models Really Inferior to Autoregressive Counterparts for Image-to-Text Generation?', what BLEU-4 score did the LaDiC (ours, 30 steps) model get on the COCO Captions dataset? | 0.382 |
| EQ-Bench | meta-llama/Llama-2-13b-chat-hf | EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models | 2023-12-11 | https://arxiv.org/abs/2312.06281v2 | https://github.com/eq-bench/eq-bench | In the paper 'EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models', what EQ-Bench Score did the meta-llama/Llama-2-13b-chat-hf model get on the EQ-Bench dataset? | 33.02 |
| UrduDoc | DRRG [72] | UTRNet: High-Resolution Urdu Text Recognition In Printed Documents | 2023-06-27 | https://arxiv.org/abs/2306.15782v3 | https://github.com/abdur75648/UTRNet-High-Resolution-Urdu-Text-Recognition | In the paper 'UTRNet: High-Resolution Urdu Text Recognition In Printed Documents', what Precision score did the DRRG [72] model get on the UrduDoc dataset? | 83.87 |
| CIFAR-100 | resnet18 | Enhancing Robust Representation in Adversarial Training: Alignment and Exclusion Criteria | 2023-10-05 | https://arxiv.org/abs/2310.03358v2 | https://github.com/changzhang777/ancra | In the paper 'Enhancing Robust Representation in Adversarial Training: Alignment and Exclusion Criteria', what autoattack score did the resnet18 model get on the CIFAR-100 dataset? | 60.10/35.05 |
| Coauthor CS | HH-GraphSAGE | Half-Hop: A graph upsampling approach for slowing down message passing | 2023-08-17 | https://arxiv.org/abs/2308.09198v1 | https://github.com/nerdslab/halfhop | In the paper 'Half-Hop: A graph upsampling approach for slowing down message passing', what Accuracy score did the HH-GraphSAGE model get on the Coauthor CS dataset? | 95.13% |
| Weather2K114 (336) | MoLE-DLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11 | https://arxiv.org/abs/2312.06786v3 | https://github.com/rogerni/mole | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the Weather2K114 (336) dataset? | 0.415 |
| Tiered ImageNet 5-way (5-shot) | DiffKendall (Meta-Baseline, ResNet-12) | DiffKendall: A Novel Approach for Few-Shot Learning with Differentiable Kendall's Rank Correlation | 2023-07-28 | https://arxiv.org/abs/2307.15317v2 | https://github.com/kaipengm2/DiffKendall | In the paper 'DiffKendall: A Novel Approach for Few-Shot Learning with Differentiable Kendall's Rank Correlation', what Accuracy score did the DiffKendall (Meta-Baseline, ResNet-12) model get on the Tiered ImageNet 5-way (5-shot) dataset? | 85.31 |
| PKU-MMD | DVANet (RGB only) | DVANet: Disentangling View and Action Features for Multi-View Action Recognition | 2023-12-10 | https://arxiv.org/abs/2312.05719v1 | https://github.com/NyleSiddiqui/MultiView_Actions | In the paper 'DVANet: Disentangling View and Action Features for Multi-View Action Recognition', what X-Sub score did the DVANet (RGB only) model get on the PKU-MMD dataset? | 95.8 |
| ADE20K-847 | SED | SED: A Simple Encoder-Decoder for Open-Vocabulary Semantic Segmentation | 2023-11-27 | https://arxiv.org/abs/2311.15537v2 | https://github.com/xb534/sed | In the paper 'SED: A Simple Encoder-Decoder for Open-Vocabulary Semantic Segmentation', what mIoU score did the SED model get on the ADE20K-847 dataset? | 13.9 |
| DTD | PromptKD | PromptKD: Unsupervised Prompt Distillation for Vision-Language Models | 2024-03-05 | https://arxiv.org/abs/2403.02781v5 | https://github.com/zhengli97/promptkd | In the paper 'PromptKD: Unsupervised Prompt Distillation for Vision-Language Models', what Harmonic mean score did the PromptKD model get on the DTD dataset? | 77.94 |
| YouTube-VOS 2019 | Cutie+ (base, MEGA) | Putting the Object Back into Video Object Segmentation | 2023-10-19 | https://arxiv.org/abs/2310.12982v2 | https://github.com/hkchengrex/Cutie | In the paper 'Putting the Object Back into Video Object Segmentation', what Overall score did the Cutie+ (base, MEGA) model get on the YouTube-VOS 2019 dataset? | 87.5 |
| HumanML3D | DiverseMotion (s=2) | DiverseMotion: Towards Diverse Human Motion Generation via Discrete Diffusion | 2023-09-04 | https://arxiv.org/abs/2309.01372v1 | https://github.com/axdfhj/mdd | In the paper 'DiverseMotion: Towards Diverse Human Motion Generation via Discrete Diffusion', what FID score did the DiverseMotion (s=2) model get on the HumanML3D dataset? | 0.072 |
| SPair-71k | LDMCorrespondences | Unsupervised Semantic Correspondence Using Stable Diffusion | 2023-05-24 | https://arxiv.org/abs/2305.15581v2 | https://github.com/ubc-vision/LDM_correspondences | In the paper 'Unsupervised Semantic Correspondence Using Stable Diffusion', what PCK score did the LDMCorrespondences model get on the SPair-71k dataset? | 45.4 |
| CropHarvest - Togo | Ensemble aggregation | A Comparative Assessment of Multi-view fusion learning for Crop Classification | 2023-08-10 | https://arxiv.org/abs/2308.05407v1 | https://github.com/fmenat/multiviewcropclassification | In the paper 'A Comparative Assessment of Multi-view fusion learning for Crop Classification', what Average Accuracy score did the Ensemble aggregation model get on the CropHarvest - Togo dataset? | 0.840 |
| NYU Depth v2 | SMMCL (SegFormer-B2) | Understanding Dark Scenes by Contrasting Multi-Modal Observations | 2023-08-23 | https://arxiv.org/abs/2308.12320v2 | https://github.com/palmdong/smmcl | In the paper 'Understanding Dark Scenes by Contrasting Multi-Modal Observations', what Mean IoU score did the SMMCL (SegFormer-B2) model get on the NYU Depth v2 dataset? | 53.7% |
| Epinions | HetroFair | Heterophily-Aware Fair Recommendation using Graph Convolutional Networks | 2024-01-31 | https://arxiv.org/abs/2402.03365v2 | https://github.com/nematgh/hetrofair | In the paper 'Heterophily-Aware Fair Recommendation using Graph Convolutional Networks', what NDCG@20 score did the HetroFair model get on the Epinions dataset? | 0.0895 |
| iWildCam2020-WILDS | COSMO | Reviving the Context: Camera Trap Species Classification as Link Prediction on Multimodal Knowledge Graphs | 2023-12-31 | https://arxiv.org/abs/2401.00608v5 | https://github.com/osu-nlp-group/cosmo | In the paper 'Reviving the Context: Camera Trap Species Classification as Link Prediction on Multimodal Knowledge Graphs', what Accuracy (Top-1) score did the COSMO model get on the iWildCam2020-WILDS dataset? | 74.5 |
| RSICD | PE-RSITR (MRS-Adapter) | Parameter-Efficient Transfer Learning for Remote Sensing Image-Text Retrieval | 2023-08-24 | https://arxiv.org/abs/2308.12509v1 | https://github.com/ZhanYang-nwpu/PE-RSITR | In the paper 'Parameter-Efficient Transfer Learning for Remote Sensing Image-Text Retrieval', what Mean Recall score did the PE-RSITR (MRS-Adapter) model get on the RSICD dataset? | 31.12% |
| MS-COCO (5-shot) | UniFS | UniFS: Universal Few-shot Instance Perception with Point Representations | 2024-04-30 | https://arxiv.org/abs/2404.19401v3 | https://github.com/jin-s13/unifs | In the paper 'UniFS: Universal Few-shot Instance Perception with Point Representations', what AP score did the UniFS model get on the MS-COCO (5-shot) dataset? | 18.2 |
| FineDiving | FineParser | FineParser: A Fine-grained Spatio-temporal Action Parser for Human-centric Action Quality Assessment | 2024-05-11 | https://arxiv.org/abs/2405.06887v1 | https://github.com/pku-icst-mipl/fineparser_cvpr2024 | In the paper 'FineParser: A Fine-grained Spatio-temporal Action Parser for Human-centric Action Quality Assessment', what Spearman Correlation score did the FineParser model get on the FineDiving dataset? | 0.9435 |
| COCO-N Medium | Mask R-CNN ResNet-50 FPN | Benchmarking Label Noise in Instance Segmentation: Spatial Noise Matters | 2024-06-16 | https://arxiv.org/abs/2406.10891v2 | https://github.com/eden500/Noisy-Labels-Instance-Segmentation | In the paper 'Benchmarking Label Noise in Instance Segmentation: Spatial Noise Matters', what mIOU score did the Mask R-CNN ResNet-50 FPN model get on the COCO-N Medium dataset? | 30.3 |
| LVIS v1.0 | CoDet (EVA02-L) | CoDet: Co-Occurrence Guided Region-Word Alignment for Open-Vocabulary Object Detection | 2023-10-25 | https://arxiv.org/abs/2310.16667v1 | https://github.com/cvmi-lab/codet | In the paper 'CoDet: Co-Occurrence Guided Region-Word Alignment for Open-Vocabulary Object Detection', what AP novel-LVIS base training score did the CoDet (EVA02-L) model get on the LVIS v1.0 dataset? | 37.0 |
| ImageNet-R | Discrete Adversarial Distillation (ViT-B,224) | Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models | 2023-11-02 | https://arxiv.org/abs/2311.01441v2 | https://github.com/lapisrocks/DiscreteAdversarialDistillation | In the paper 'Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models', what Top-1 Error Rate score did the Discrete Adversarial Distillation (ViT-B,224) model get on the ImageNet-R dataset? | 34.9 |
| CelebA-HQ 256x256 | LFM | Flow Matching in Latent Space | 2023-07-17 | https://arxiv.org/abs/2307.08698v1 | https://github.com/vinairesearch/lfm | In the paper 'Flow Matching in Latent Space', what FID score did the LFM model get on the CelebA-HQ 256x256 dataset? | 5.26 |
| Domain-independent anomalies datasets | Spatial Embedding MLP (ViT-B/8) | Domain-independent detection of known anomalies | 2024-07-03 | https://arxiv.org/abs/2407.02910v1 | https://github.com/Jonas1302/anomalib | In the paper 'Domain-independent detection of known anomalies', what Detection AUROC score did the Spatial Embedding MLP (ViT-B/8) model get on the Domain-independent anomalies datasets dataset? | 86.7 |
| MemeTracker | HP-CDE | Hawkes Process Based on Controlled Differential Equations | 2023-05-09 | https://arxiv.org/abs/2305.07031v2 | https://github.com/kookseungji/Hawkes-Process-Based-on-Controlled-Differential-Equations | In the paper 'Hawkes Process Based on Controlled Differential Equations', what Accuracy score did the HP-CDE model get on the MemeTracker dataset? | 0.151±0.005 |
| KITTI | GaussianCity | GaussianCity: Generative Gaussian Splatting for Unbounded 3D City Generation | 2024-06-10 | https://arxiv.org/abs/2406.06526v2 | https://github.com/hzxie/GaussianCity | In the paper 'GaussianCity: Generative Gaussian Splatting for Unbounded 3D City Generation', what FID score did the GaussianCity model get on the KITTI dataset? | 29.5 |
| ToxCast | G-Tuning | Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns | 2023-12-21 | https://arxiv.org/abs/2312.13583v1 | https://github.com/zjunet/G-Tuning | In the paper 'Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns', what ROC-AUC score did the G-Tuning model get on the ToxCast dataset? | 64.25 |
| Set14 - 4x upscaling | DRCT | DRCT: Saving Image Super-resolution away from Information Bottleneck | 2024-03-31 | https://arxiv.org/abs/2404.00722v5 | https://github.com/ming053l/drct | In the paper 'DRCT: Saving Image Super-resolution away from Information Bottleneck', what PSNR score did the DRCT model get on the Set14 - 4x upscaling dataset? | 29.40 |
| ImageNet | DAT-S++ | DAT++: Spatially Dynamic Vision Transformer with Deformable Attention | 2023-09-04 | https://arxiv.org/abs/2309.01430v1 | https://github.com/leaplabthu/dat | In the paper 'DAT++: Spatially Dynamic Vision Transformer with Deformable Attention', what Top 1 Accuracy score did the DAT-S++ model get on the ImageNet dataset? | 84.6% |
| REBUS | QWEN | REBUS: A Robust Evaluation Benchmark of Understanding Symbols | 2024-01-11 | https://arxiv.org/abs/2401.05604v2 | https://github.com/cvndsh/rebus | In the paper 'REBUS: A Robust Evaluation Benchmark of Understanding Symbols', what Accuracy score did the QWEN model get on the REBUS dataset? | 0.9 |
| PascalVOC-SP | DRew-GatedGCN+LapPE | DRew: Dynamically Rewired Message Passing with Delay | 2023-05-13 | https://arxiv.org/abs/2305.08018v2 | https://github.com/bengutteridge/drew | In the paper 'DRew: Dynamically Rewired Message Passing with Delay', what macro F1 score did the DRew-GatedGCN+LapPE model get on the PascalVOC-SP dataset? | 0.3314±0.0024 |
| CIFAR-10 | CNN+ Wilson-Cowan model RNN | Learning in Wilson-Cowan model for metapopulation | 2024-06-24 | https://arxiv.org/abs/2406.16453v2 | https://github.com/raffaelemarino/learning_in_wilsoncowan | In the paper 'Learning in Wilson-Cowan model for metapopulation', what Percentage correct score did the CNN+ Wilson-Cowan model RNN model get on the CIFAR-10 dataset? | 86.59 |
| UA-GEC | Llama + 1M BT + gold | To Err Is Human, but Llamas Can Learn It Too | 2024-03-08 | https://arxiv.org/abs/2403.05493v2 | https://github.com/TartuNLP/gec-llm | In the paper 'To Err Is Human, but Llamas Can Learn It Too', what F0.5 score did the Llama + 1M BT + gold model get on the UA-GEC dataset? | 74.09 |
| AffectNet | ResEmoteNet | ResEmoteNet: Bridging Accuracy and Loss Reduction in Facial Emotion Recognition | 2024-09-01 | https://arxiv.org/abs/2409.10545v2 | https://github.com/ArnabKumarRoy02/ResEmoteNet | In the paper 'ResEmoteNet: Bridging Accuracy and Loss Reduction in Facial Emotion Recognition', what Accuracy (7 emotion) score did the ResEmoteNet model get on the AffectNet dataset? | 72.93 |
| OA-Mine - annotations | GPT-4-json-val-10-dem | ExtractGPT: Exploring the Potential of Large Language Models for Product Attribute Value Extraction | 2023-10-19 | https://arxiv.org/abs/2310.12537v5 | https://github.com/wbsg-uni-mannheim/extractgpt | In the paper 'ExtractGPT: Exploring the Potential of Large Language Models for Product Attribute Value Extraction', what F1-score did the GPT-4-json-val-10-dem model get on the OA-Mine - annotations dataset? | 82.2 |
| ImageNet | Discrete Adversarial Distillation (ViT-B, 224) | Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models | 2023-11-02 | https://arxiv.org/abs/2311.01441v2 | https://github.com/lapisrocks/DiscreteAdversarialDistillation | In the paper 'Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models', what Top 1 Accuracy score did the Discrete Adversarial Distillation (ViT-B, 224) model get on the ImageNet dataset? | 81.9% |
| AISHELL-1 | Lightweight Transducer | Lightweight Transducer Based on Frame-Level Criterion | 2024-09-05 | https://arxiv.org/abs/2409.13698v2 | https://github.com/wangmengzhi/Lightweight-Transducer | In the paper 'Lightweight Transducer Based on Frame-Level Criterion', what Word Error Rate (WER) score did the Lightweight Transducer model get on the AISHELL-1 dataset? | 4.31 |
| MM-Vet | LLaVA-1.5+MMInstruct (Vicuna-13B) | MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Diversity | 2024-07-22 | https://arxiv.org/abs/2407.15838v2 | https://github.com/yuecao0119/mminstruct | In the paper 'MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Diversity', what GPT-4 score did the LLaVA-1.5+MMInstruct (Vicuna-13B) model get on the MM-Vet dataset? | 37.9 |
| FishEye8K | Yolov8x (640x640) | FishEye8K: A Benchmark and Dataset for Fisheye Camera Object Detection | 2023-05-27 | https://arxiv.org/abs/2305.17449v2 | https://github.com/moyog/fisheye8k | In the paper 'FishEye8K: A Benchmark and Dataset for Fisheye Camera Object Detection', what mAP score did the Yolov8x (640x640) model get on the FishEye8K dataset? | 61.4 |
| TruthfulQA | Mistral-7B-Instruct-v0.2 + TruthX | TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space | 2024-02-27 | https://arxiv.org/abs/2402.17811v2 | https://github.com/ictnlp/truthx | In the paper 'TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space', what MC1 score did the Mistral-7B-Instruct-v0.2 + TruthX model get on the TruthfulQA dataset? | 0.56 |
| Weather (720) | RLinear | Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping | 2023-05-18 | https://arxiv.org/abs/2305.10721v1 | https://github.com/plumprc/rtsf | In the paper 'Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping', what MSE score did the RLinear model get on the Weather (720) dataset? | 0.329 |
| LJSpeech | Matcha-TTS | Matcha-TTS: A fast TTS architecture with conditional flow matching | 2023-09-06 | https://arxiv.org/abs/2309.03199v2 | https://github.com/shivammehta25/Matcha-TTS | In the paper 'Matcha-TTS: A fast TTS architecture with conditional flow matching', what MOS score did the Matcha-TTS model get on the LJSpeech dataset? | 3.84 |
| SIR^2(Postcard) | DSRNet | Single Image Reflection Separation via Component Synergy | 2023-08-19 | https://arxiv.org/abs/2308.10027v1 | https://github.com/mingcv/dsrnet | In the paper 'Single Image Reflection Separation via Component Synergy', what PSNR score did the DSRNet model get on the SIR^2(Postcard) dataset? | 24.56 |
| GOT-10k | LoRAT-L-378 | Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance | 2024-03-08 | https://arxiv.org/abs/2403.05231v2 | https://github.com/litinglin/lorat | In the paper 'Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance', what Average Overlap score did the LoRAT-L-378 model get on the GOT-10k dataset? | 77.5 |
| PACS | RISE (ResNet-50) | A Sentence Speaks a Thousand Images: Domain Generalization through Distilling CLIP with Language Guidance | 2023-09-21 | https://arxiv.org/abs/2309.12530v1 | https://github.com/oodbag/rise | In the paper 'A Sentence Speaks a Thousand Images: Domain Generalization through Distilling CLIP with Language Guidance', what Average Accuracy score did the RISE (ResNet-50) model get on the PACS dataset? | 90.2 |
| CUB 200 5-way 5-shot | CAML [Laion-2b] | Context-Aware Meta-Learning | 2023-10-17 | https://arxiv.org/abs/2310.10971v2 | https://github.com/cfifty/CAML | In the paper 'Context-Aware Meta-Learning', what Accuracy score did the CAML [Laion-2b] model get on the CUB 200 5-way 5-shot dataset? | 98.7 |
| Nardo-Air | CLIP | AnyLoc: Towards Universal Visual Place Recognition | 2023-08-01 | https://arxiv.org/abs/2308.00688v2 | https://github.com/AnyLoc/AnyLoc | In the paper 'AnyLoc: Towards Universal Visual Place Recognition', what Recall@1 score did the CLIP model get on the Nardo-Air dataset? | 42.25 |
| CropHarvest - Togo | Decision fusion with GRU | In the Search for Optimal Multi-view Learning Models for Crop Classification with Global Remote Sensing Data | 2024-03-25 | https://arxiv.org/abs/2403.16582v2 | https://github.com/fmenat/optimal-multiview-crop-classifier | In the paper 'In the Search for Optimal Multi-view Learning Models for Crop Classification with Global Remote Sensing Data', what Average Accuracy score did the Decision fusion with GRU model get on the CropHarvest - Togo dataset? | 0.825 |
| ColonINST-v1 (Unseen) | LLaVA-Med-v1.0 (w/o LoRA, w/ extra data) | LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day | 2023-06-01 | https://arxiv.org/abs/2306.00890v1 | https://github.com/microsoft/LLaVA-Med | In the paper 'LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day', what Accuracy score did the LLaVA-Med-v1.0 (w/o LoRA, w/ extra data) model get on the ColonINST-v1 (Unseen) dataset? | 77.38 |
| O-Haze | CL2S | Rethinking the Elementary Function Fusion for Single-Image Dehazing | 2024-05-23 | https://arxiv.org/abs/2405.15817v1 | https://github.com/YesianRohn/CL2S | In the paper 'Rethinking the Elementary Function Fusion for Single-Image Dehazing', what PSNR score did the CL2S model get on the O-Haze dataset? | 24.58 |
| ChartQA | PaLI-3 | PaLI-3 Vision Language Models: Smaller, Faster, Stronger | 2023-10-13 | https://arxiv.org/abs/2310.09199v2 | https://github.com/kyegomez/PALI3 | In the paper 'PaLI-3 Vision Language Models: Smaller, Faster, Stronger', what 1:1 Accuracy score did the PaLI-3 model get on the ChartQA dataset? | 70 |
| ECSSD | M3Net-S | M$^3$Net: Multilevel, Mixed and Multistage Attention Network for Salient Object Detection | 2023-09-15 | https://arxiv.org/abs/2309.08365v1 | https://github.com/I2-Multimedia-Lab/M3Net | In the paper 'M$^3$Net: Multilevel, Mixed and Multistage Attention Network for Salient Object Detection', what MAE score did the M3Net-S model get on the ECSSD dataset? | 0.021 |
| EC-FUNSD | GeoLayoutLM | Rethinking the Evaluation of Pre-trained Text-and-Layout Models from an Entity-Centric Perspective | 2024-02-04 | https://arxiv.org/abs/2402.02379v1 | https://github.com/chongzhangFDU/ROOR | In the paper 'Rethinking the Evaluation of Pre-trained Text-and-Layout Models from an Entity-Centric Perspective', what F1 score did the GeoLayoutLM model get on the EC-FUNSD dataset? | 86.18 |
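For reference, a minimal sketch of how rows with this schema might be consumed, assuming the table is exported as a CSV file (the file name `rows.csv` and the lookup helper below are illustrative, not part of the dataset's documented tooling):

```python
import csv

# Load the benchmark-QA rows exported from the table above.
# Assumes a hypothetical "rows.csv" export whose header matches the
# eight columns shown at the top: dataset, model_name, paper_title,
# paper_date, paper_url, code_links, prompts, answer.
with open("rows.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Index the recorded answers by (dataset, model_name) for quick lookup.
answers = {(r["dataset"], r["model_name"]): r["answer"] for r in rows}

# Example: retrieve the recorded LRS2 result for Whisper-LLaMA.
print(answers.get(("LRS2", "Whisper-LLaMA")))  # expected: "6.6" per the first row
```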