dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
MICCAI 2015 Multi-Atlas Abdomen Labeling Challenge | PVT-GCASCADE | G-CASCADE: Efficient Cascaded Graph Convolutional Decoding for 2D Medical Image Segmentation | 2023-10-24T00:00:00 | https://arxiv.org/abs/2310.16175v1 | [
"https://github.com/SLDGroup/G-CASCADE"
] | In the paper 'G-CASCADE: Efficient Cascaded Graph Convolutional Decoding for 2D Medical Image Segmentation', what Avg DSC score did the PVT-GCASCADE model get on the MICCAI 2015 Multi-Atlas Abdomen Labeling Challenge dataset
| 83.28 |
Actor | M2M-GNN | Sign is Not a Remedy: Multiset-to-Multiset Message Passing for Learning on Heterophilic Graphs | 2024-05-31T00:00:00 | https://arxiv.org/abs/2405.20652v1 | [
"https://github.com/Jinx-byebye/m2mgnn"
] | In the paper 'Sign is Not a Remedy: Multiset-to-Multiset Message Passing for Learning on Heterophilic Graphs', what Accuracy score did the M2M-GNN model get on the Actor dataset
| 36.72 ± 1.6 |
BigEarthNet (official test set) | ViT-S/16 | Feature Guided Masked Autoencoder for Self-supervised Learning in Remote Sensing | 2023-10-28T00:00:00 | https://arxiv.org/abs/2310.18653v1 | [
"https://github.com/zhu-xlab/fgmae"
] | In the paper 'Feature Guided Masked Autoencoder for Self-supervised Learning in Remote Sensing', what mAP (micro) score did the ViT-S/16 model get on the BigEarthNet (official test set) dataset
| 87.8 |
KITTI | TIE-KD (T: Adabins S: MobileNetV2) | TIE-KD: Teacher-Independent and Explainable Knowledge Distillation for Monocular Depth Estimation | 2024-02-22T00:00:00 | https://arxiv.org/abs/2402.14340v1 | [
"https://github.com/hpc-lab-koreatech/tie-kd"
] | In the paper 'TIE-KD: Teacher-Independent and Explainable Knowledge Distillation for Monocular Depth Estimation', what RMSE score did the TIE-KD (T: Adabins S: MobileNetV2) model get on the KITTI dataset
| 2.4315 |
Uber-Text | CLIP4STR-B | CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model | 2023-05-23T00:00:00 | https://arxiv.org/abs/2305.14014v3 | [
"https://github.com/VamosC/CLIP4STR"
] | In the paper 'CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model', what Accuracy (%) score did the CLIP4STR-B model get on the Uber-Text dataset
| 86.8 |
Texas | DJ-GNN | Diffusion-Jump GNNs: Homophiliation via Learnable Metric Filters | 2023-06-29T00:00:00 | https://arxiv.org/abs/2306.16976v1 | [
"https://github.com/AhmedBegggaUA/TFM"
] | In the paper 'Diffusion-Jump GNNs: Homophiliation via Learnable Metric Filters', what Accuracy score did the DJ-GNN model get on the Texas dataset
| 92.43±3.15 |
HOST | CLIP4STR-B | CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model | 2023-05-23T00:00:00 | https://arxiv.org/abs/2305.14014v3 | [
"https://github.com/VamosC/CLIP4STR"
] | In the paper 'CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model', what 1:1 Accuracy score did the CLIP4STR-B model get on the HOST dataset
| 79.8 |
MM-Vet | LLaVA-TokenPacker (Vicuna-13B) | TokenPacker: Efficient Visual Projector for Multimodal LLM | 2024-07-02T00:00:00 | https://arxiv.org/abs/2407.02392v4 | [
"https://github.com/circleradon/tokenpacker"
] | In the paper 'TokenPacker: Efficient Visual Projector for Multimodal LLM', what GPT-4 score did the LLaVA-TokenPacker (Vicuna-13B) model get on the MM-Vet dataset
| 34.1 |
Actor | ChebNet+Bregman | Bregman Graph Neural Network | 2023-09-12T00:00:00 | https://arxiv.org/abs/2309.06645v1 | [
"https://github.com/jiayuzhai1207/bregmangnn"
] | In the paper 'Bregman Graph Neural Network', what Accuracy score did the ChebNet+Bregman model get on the Actor dataset
| 35.92 ± 0.84 |
MCubeS (P) | ShareCMP (B2 RGB-A-D) | ShareCMP: Polarization-Aware RGB-P Semantic Segmentation | 2023-12-06T00:00:00 | https://arxiv.org/abs/2312.03430v2 | [
"https://github.com/lefteyex/sharecmp"
] | In the paper 'ShareCMP: Polarization-Aware RGB-P Semantic Segmentation', what mIoU score did the ShareCMP (B2 RGB-A-D) model get on the MCubeS (P) dataset
| 50.99 |
GMOT-40 | MAC-SORT | TP-GMOT: Tracking Generic Multiple Object by Textual Prompt with Motion-Appearance Cost (MAC) SORT | 2024-09-04T00:00:00 | https://arxiv.org/abs/2409.02490v1 | [
"https://github.com/Fsoft-AIC/TP-GMOT"
] | In the paper 'TP-GMOT: Tracking Generic Multiple Object by Textual Prompt with Motion-Appearance Cost (MAC) SORT', what HOTA score did the MAC-SORT model get on the GMOT-40 dataset
| 58.58 |
CIFAR-100 | ResNet18/MART-ANCRA | Enhancing Robust Representation in Adversarial Training: Alignment and Exclusion Criteria | 2023-10-05T00:00:00 | https://arxiv.org/abs/2310.03358v2 | [
"https://github.com/changzhang777/ancra"
] | In the paper 'Enhancing Robust Representation in Adversarial Training: Alignment and Exclusion Criteria', what Clean Accuracy score did the ResNet18/MART-ANCRA model get on the CIFAR-100 dataset
| 60.10 |
Cityscapes val | MetaPrompt-SD | Harnessing Diffusion Models for Visual Perception with Meta Prompts | 2023-12-22T00:00:00 | https://arxiv.org/abs/2312.14733v1 | [
"https://github.com/fudan-zvg/meta-prompts"
] | In the paper 'Harnessing Diffusion Models for Visual Perception with Meta Prompts', what mIoU score did the MetaPrompt-SD model get on the Cityscapes val dataset
| 87.1 |
BenchLMM | Otter-7B | Otter: A Multi-Modal Model with In-Context Instruction Tuning | 2023-05-05T00:00:00 | https://arxiv.org/abs/2305.03726v1 | [
"https://github.com/luodian/otter"
] | In the paper 'Otter: A Multi-Modal Model with In-Context Instruction Tuning', what GPT-3.5 score did the Otter-7B model get on the BenchLMM dataset
| 39.13 |
SYN-UDTIRI | DFormer | DFormer: Rethinking RGBD Representation Learning for Semantic Segmentation | 2023-09-18T00:00:00 | https://arxiv.org/abs/2309.09668v2 | [
"https://github.com/VCIP-RGBD/DFormer"
] | In the paper 'DFormer: Rethinking RGBD Representation Learning for Semantic Segmentation', what IoU score did the DFormer model get on the SYN-UDTIRI dataset
| 90.88 |
STS15 | PromptEOL+CSE+OPT-13B | Scaling Sentence Embeddings with Large Language Models | 2023-07-31T00:00:00 | https://arxiv.org/abs/2307.16645v1 | [
"https://github.com/kongds/scaling_sentemb"
] | In the paper 'Scaling Sentence Embeddings with Large Language Models', what Spearman Correlation score did the PromptEOL+CSE+OPT-13B model get on the STS15 dataset
| 0.8952 |
ColonINST-v1 (Seen) | LLaVA-Med-v1.5 (w/ LoRA, w/ extra data) | LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day | 2023-06-01T00:00:00 | https://arxiv.org/abs/2306.00890v1 | [
"https://github.com/microsoft/LLaVA-Med"
] | In the paper 'LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day', what Accuracy score did the LLaVA-Med-v1.5 (w/ LoRA, w/ extra data) model get on the ColonINST-v1 (Seen) dataset
| 87.22 |
CIFAR-10 | Balanced Mixture | Balanced Mixture of SuperNets for Learning the CNN Pooling Architecture | 2023-06-21T00:00:00 | https://arxiv.org/abs/2306.11982v1 | [
"https://github.com/mehravehj/Balanced-Mixture-of-SuperNets"
] | In the paper 'Balanced Mixture of SuperNets for Learning the CNN Pooling Architecture', what Accuracy (%) score did the Balanced Mixture model get on the CIFAR-10 dataset
| 91.55 |
Waymo Open Dataset | AVS-Net | AVS-Net: Point Sampling with Adaptive Voxel Size for 3D Scene Understanding | 2024-02-27T00:00:00 | https://arxiv.org/abs/2402.17521v3 | [
"https://github.com/yhc2021/avs-net"
] | In the paper 'AVS-Net: Point Sampling with Adaptive Voxel Size for 3D Scene Understanding', what mAPH/L2 score did the AVS-Net model get on the Waymo Open Dataset dataset
| 72.4 |
IllusionVQA | LLaVA-1.5-13B | IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models | 2024-03-23T00:00:00 | https://arxiv.org/abs/2403.15952v3 | [
"https://github.com/csebuetnlp/illusionvqa"
] | In the paper 'IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models', what Accuracy score did the LLaVA-1.5-13B model get on the IllusionVQA dataset
| 24.8 |
Persian Text Image Segmentation (PTI SEG) | Persis | Persis: A Persian Font Recognition Pipeline Using Convolutional Neural Networks | 2023-10-08T00:00:00 | https://arxiv.org/abs/2310.05255v2 | [
"https://github.com/mehrdad-dev/persis"
] | In the paper 'Persis: A Persian Font Recognition Pipeline Using Convolutional Neural Networks', what IOU50 score did the Persis model get on the Persian Text Image Segmentation (PTI SEG) dataset
| 88.7 |
SPOT-10 | MobileNetV3Large Distiller | SPOTS-10: Animal Pattern Benchmark Dataset for Machine Learning Algorithms | 2024-10-28T00:00:00 | https://arxiv.org/abs/2410.21044v1 | [
"https://github.com/amotica/spots-10"
] | In the paper 'SPOTS-10: Animal Pattern Benchmark Dataset for Machine Learning Algorithms', what Accuracy score did the MobileNetV3Large Distiller model get on the SPOT-10 dataset
| 77.88 |
A-OKVQA | HYDRA | HYDRA: A Hyper Agent for Dynamic Compositional Visual Reasoning | 2024-03-19T00:00:00 | https://arxiv.org/abs/2403.12884v2 | [
"https://github.com/ControlNet/HYDRA"
] | In the paper 'HYDRA: A Hyper Agent for Dynamic Compositional Visual Reasoning', what MC Accuracy score did the HYDRA model get on the A-OKVQA dataset
| 56.35 |
ImageNet-1k vs NINCO | EffNetv2-M Relative Mahalanobis | In or Out? Fixing ImageNet Out-of-Distribution Detection Evaluation | 2023-06-01T00:00:00 | https://arxiv.org/abs/2306.00826v1 | [
"https://github.com/j-cb/ninco"
] | In the paper 'In or Out? Fixing ImageNet Out-of-Distribution Detection Evaluation', what AUROC score did the EffNetv2-M Relative Mahalanobis model get on the ImageNet-1k vs NINCO dataset
| 88.9 |
COCO 5% labeled data | Guided Distillation (ResNet50) | Guided Distillation for Semi-Supervised Instance Segmentation | 2023-08-03T00:00:00 | https://arxiv.org/abs/2308.02668v2 | [
"https://github.com/facebookresearch/guideddistillation"
] | In the paper 'Guided Distillation for Semi-Supervised Instance Segmentation', what mask AP score did the Guided Distillation (ResNet50) model get on the COCO 5% labeled data dataset
| 29.9 |
BigEarthNet-S1 (official test set) | FG-MAE (ViT-S/16) | Feature Guided Masked Autoencoder for Self-supervised Learning in Remote Sensing | 2023-10-28T00:00:00 | https://arxiv.org/abs/2310.18653v1 | [
"https://github.com/zhu-xlab/fgmae"
] | In the paper 'Feature Guided Masked Autoencoder for Self-supervised Learning in Remote Sensing', what mAP (micro) score did the FG-MAE (ViT-S/16) model get on the BigEarthNet-S1 (official test set) dataset
| 82.7 |
CATH 4.2 | GCA | Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement | 2023-05-20T00:00:00 | https://arxiv.org/abs/2305.15151v4 | [
"https://github.com/A4Bio/OpenCPD"
] | In the paper 'Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement', what Sequence Recovery %(All) score did the GCA model get on the CATH 4.2 dataset
| 37.64 |
ARC (Challenge) | PaLM 2-M (1-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | [
"https://github.com/eternityyw/tram-benchmark"
] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-M (1-shot) model get on the ARC (Challenge) dataset
| 64.9 |
LoveDA | SelectiveMAE+ViT-L | Scaling Efficient Masked Image Modeling on Large Remote Sensing Dataset | 2024-06-17T00:00:00 | https://arxiv.org/abs/2406.11933v4 | [
"https://github.com/Fengxiang23/SelectiveMAE"
] | In the paper 'Scaling Efficient Masked Image Modeling on Large Remote Sensing Dataset', what Category mIoU score did the SelectiveMAE+ViT-L model get on the LoveDA dataset
| 54.31 |
MM-Vet | InternVL2.5-2B | Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling | 2024-12-06T00:00:00 | https://arxiv.org/abs/2412.05271v1 | [
"https://github.com/opengvlab/internvl"
] | In the paper 'Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling', what GPT-4 score did the InternVL2.5-2B model get on the MM-Vet dataset
| 60.8 |
ITOP front-view | SPiKE | SPiKE: 3D Human Pose from Point Cloud Sequences | 2024-09-03T00:00:00 | https://arxiv.org/abs/2409.01879v1 | [
"https://github.com/iballester/SPiKE"
] | In the paper 'SPiKE: 3D Human Pose from Point Cloud Sequences', what Mean mAP score did the SPiKE model get on the ITOP front-view dataset
| 89.19 |
ToolBench | GPT4-TOPGUN | SwissNYF: Tool Grounded LLM Agents for Black Box Setting | 2024-02-15T00:00:00 | https://arxiv.org/abs/2402.10051v1 | [
"https://github.com/iclr-dummy-user/swissnyf"
] | In the paper 'SwissNYF: Tool Grounded LLM Agents for Black Box Setting', what Win rate score did the GPT4-TOPGUN model get on the ToolBench dataset
| 86.54 |
ADE20K-847 | MAFT+ | Collaborative Vision-Text Representation Optimizing for Open-Vocabulary Segmentation | 2024-08-01T00:00:00 | https://arxiv.org/abs/2408.00744v2 | [
"https://github.com/jiaosiyu1999/MAFT-Plus"
] | In the paper 'Collaborative Vision-Text Representation Optimizing for Open-Vocabulary Segmentation', what mIoU score did the MAFT+ model get on the ADE20K-847 dataset
| 15.1 |
OpenMIC-2018 | DyMN-L | Dynamic Convolutional Neural Networks as Efficient Pre-trained Audio Models | 2023-10-24T00:00:00 | https://arxiv.org/abs/2310.15648v1 | [
"https://github.com/fschmid56/efficientat"
] | In the paper 'Dynamic Convolutional Neural Networks as Efficient Pre-trained Audio Models', what mean average precision score did the DyMN-L model get on the OpenMIC-2018 dataset
| 0.855 |
ETTm2 (96) Multivariate | SCNN | Disentangling Structured Components: Towards Adaptive, Interpretable and Scalable Time Series Forecasting | 2023-05-22T00:00:00 | https://arxiv.org/abs/2305.13036v3 | [
"https://github.com/JLDeng/SCNN"
] | In the paper 'Disentangling Structured Components: Towards Adaptive, Interpretable and Scalable Time Series Forecasting', what MSE score did the SCNN model get on the ETTm2 (96) Multivariate dataset
| 0.163 |
Frappe | TF4CTR | TF4CTR: Twin Focus Framework for CTR Prediction via Adaptive Sample Differentiation | 2024-05-06T00:00:00 | https://arxiv.org/abs/2405.03167v2 | [
"https://github.com/salmon1802/tf4ctr"
] | In the paper 'TF4CTR: Twin Focus Framework for CTR Prediction via Adaptive Sample Differentiation', what AUC score did the TF4CTR model get on the Frappe dataset
| 0.9872 |
ImageNet | DeBiFormer-T | DeBiFormer: Vision Transformer with Deformable Agent Bi-level Routing Attention | 2024-10-11T00:00:00 | https://arxiv.org/abs/2410.08582v1 | [
"https://github.com/maclong01/DeBiFormer"
] | In the paper 'DeBiFormer: Vision Transformer with Deformable Agent Bi-level Routing Attention', what Top 1 Accuracy score did the DeBiFormer-T model get on the ImageNet dataset
| 81.9% |
SVAMP | ATHENA (roberta-large) | ATHENA: Mathematical Reasoning with Thought Expansion | 2023-11-02T00:00:00 | https://arxiv.org/abs/2311.01036v1 | [
"https://github.com/the-jb/athena-math"
] | In the paper 'ATHENA: Mathematical Reasoning with Thought Expansion', what Execution Accuracy score did the ATHENA (roberta-large) model get on the SVAMP dataset
| 54.8 |
RSTPReid | TBPS-CLIP (ViT-B/16) | An Empirical Study of CLIP for Text-based Person Search | 2023-08-19T00:00:00 | https://arxiv.org/abs/2308.10045v2 | [
"https://github.com/flame-chasers/tbps-clip"
] | In the paper 'An Empirical Study of CLIP for Text-based Person Search', what R@1 score did the TBPS-CLIP (ViT-B/16) model get on the RSTPReid dataset
| 61.95 |
ADE20K-150 | TTD (TCL) | TTD: Text-Tag Self-Distillation Enhancing Image-Text Alignment in CLIP to Alleviate Single Tag Bias | 2024-03-30T00:00:00 | https://arxiv.org/abs/2404.00384v2 | [
"https://github.com/shjo-april/TTD"
] | In the paper 'TTD: Text-Tag Self-Distillation Enhancing Image-Text Alignment in CLIP to Alleviate Single Tag Bias', what mIoU score did the TTD (TCL) model get on the ADE20K-150 dataset
| 17.0 |
DexYCB | SimpleHand | A Simple Baseline for Efficient Hand Mesh Reconstruction | 2024-03-04T00:00:00 | https://arxiv.org/abs/2403.01813v1 | [
"https://github.com/patiencefromzhou/simplehand"
] | In the paper 'A Simple Baseline for Efficient Hand Mesh Reconstruction', what Average MPJPE (mm) score did the SimpleHand model get on the DexYCB dataset
| 12.4 |
CIFAR-10 (4000 Labels, ImageNet-100 Unlabeled) | UnMixMatch | Scaling Up Semi-supervised Learning with Unconstrained Unlabelled Data | 2023-06-02T00:00:00 | https://arxiv.org/abs/2306.01222v2 | [
"https://github.com/shuvenduroy/unmixmatch"
] | In the paper 'Scaling Up Semi-supervised Learning with Unconstrained Unlabelled Data', what Accuracy score did the UnMixMatch model get on the CIFAR-10 (4000 Labels, ImageNet-100 Unlabeled) dataset
| 89.58 |
MM-Vet v2 | Gemini Pro Vision | Gemini: A Family of Highly Capable Multimodal Models | 2023-12-19T00:00:00 | https://arxiv.org/abs/2312.11805v4 | [
"https://github.com/valdecy/pybibx"
] | In the paper 'Gemini: A Family of Highly Capable Multimodal Models', what GPT-4 score did the Gemini Pro Vision model get on the MM-Vet v2 dataset
| 57.2±0.2 |
DUTS-TE | M3Net-S | M$^3$Net: Multilevel, Mixed and Multistage Attention Network for Salient Object Detection | 2023-09-15T00:00:00 | https://arxiv.org/abs/2309.08365v1 | [
"https://github.com/I2-Multimedia-Lab/M3Net"
] | In the paper 'M$^3$Net: Multilevel, Mixed and Multistage Attention Network for Salient Object Detection', what MAE score did the M3Net-S model get on the DUTS-TE dataset
| 0.024 |
Cornell | UniG-Encoder | UniG-Encoder: A Universal Feature Encoder for Graph and Hypergraph Node Classification | 2023-08-03T00:00:00 | https://arxiv.org/abs/2308.01650v1 | [
"https://github.com/minhzou/unig-encoder"
] | In the paper 'UniG-Encoder: A Universal Feature Encoder for Graph and Hypergraph Node Classification', what Accuracy score did the UniG-Encoder model get on the Cornell dataset
| 86.75±6.56 |
Market-1501 | CLIP-ReID Baseline +UFFM+AMC | Enhancing Person Re-Identification via Uncertainty Feature Fusion and Auto-weighted Measure Combination | 2024-05-02T00:00:00 | https://arxiv.org/abs/2405.01101v4 | [
"https://github.com/chequanghuy/Enhancing-Person-Re-Identification-via-UFFM-and-AMC"
] | In the paper 'Enhancing Person Re-Identification via Uncertainty Feature Fusion and Auto-weighted Measure Combination', what Rank-1 score did the CLIP-ReID Baseline +UFFM+AMC model get on the Market-1501 dataset
| 96.1 |
Walker2d-v4 | MEow | Maximum Entropy Reinforcement Learning via Energy-Based Normalizing Flow | 2024-05-22T00:00:00 | https://arxiv.org/abs/2405.13629v2 | [
"https://github.com/ChienFeng-hub/meow"
] | In the paper 'Maximum Entropy Reinforcement Learning via Energy-Based Normalizing Flow', what Average Return score did the MEow model get on the Walker2d-v4 dataset
| 5526.66 |
COCO 2014 | DSMD | Dynamic Self-adaptive Multiscale Distillation from Pre-trained Multimodal Large Model for Efficient Cross-modal Representation Learning | 2024-04-16T00:00:00 | https://arxiv.org/abs/2404.10838v1 | [
"https://github.com/chrisx599/dsmd"
] | In the paper 'Dynamic Self-adaptive Multiscale Distillation from Pre-trained Multimodal Large Model for Efficient Cross-modal Representation Learning', what Image-to-text R@1 score did the DSMD model get on the COCO 2014 dataset
| 48.0 |
LDC2017T10 | LeakDistill (base) | Incorporating Graph Information in Transformer-based AMR Parsing | 2023-06-23T00:00:00 | https://arxiv.org/abs/2306.13467v1 | [
"https://github.com/sapienzanlp/leakdistill"
] | In the paper 'Incorporating Graph Information in Transformer-based AMR Parsing', what Smatch score did the LeakDistill (base) model get on the LDC2017T10 dataset
| 84.7 |
nuScenes | OA-CNNs | OA-CNNs: Omni-Adaptive Sparse CNNs for 3D Semantic Segmentation | 2024-03-21T00:00:00 | https://arxiv.org/abs/2403.14418v1 | [
"https://github.com/Pointcept/Pointcept"
] | In the paper 'OA-CNNs: Omni-Adaptive Sparse CNNs for 3D Semantic Segmentation', what val mIoU score did the OA-CNNs model get on the nuScenes dataset
| 0.789 |
CiteSeer with Public Split: fixed 20 nodes per class | GEM | Graph Entropy Minimization for Semi-supervised Node Classification | 2023-05-31T00:00:00 | https://arxiv.org/abs/2305.19502v1 | [
"https://github.com/cf020031308/gem"
] | In the paper 'Graph Entropy Minimization for Semi-supervised Node Classification', what Accuracy score did the GEM model get on the CiteSeer with Public Split: fixed 20 nodes per class dataset
| 74.2 |
GTA5-to-Cityscapes | VLTSeg (EVA02-CLIP-L) | Strong but simple: A Baseline for Domain Generalized Dense Perception by CLIP-based Transfer Learning | 2023-12-04T00:00:00 | https://arxiv.org/abs/2312.02021v4 | [
"https://github.com/VLTSeg/VLTSeg"
] | In the paper 'Strong but simple: A Baseline for Domain Generalized Dense Perception by CLIP-based Transfer Learning', what mIoU score did the VLTSeg (EVA02-CLIP-L) model get on the GTA5-to-Cityscapes dataset
| 65.6 |
MSU SR-QA Dataset | TOPIQ trained on SPAQ (NR) | TOPIQ: A Top-down Approach from Semantics to Distortions for Image Quality Assessment | 2023-08-06T00:00:00 | https://arxiv.org/abs/2308.03060v1 | [
"https://github.com/chaofengc/iqa-pytorch"
] | In the paper 'TOPIQ: A Top-down Approach from Semantics to Distortions for Image Quality Assessment', what SROCC score did the TOPIQ trained on SPAQ (NR) model get on the MSU SR-QA Dataset dataset
| 0.64923 |
SpaceNet 1 | SelectiveMAE+ViT-B | Scaling Efficient Masked Image Modeling on Large Remote Sensing Dataset | 2024-06-17T00:00:00 | https://arxiv.org/abs/2406.11933v4 | [
"https://github.com/Fengxiang23/SelectiveMAE"
] | In the paper 'Scaling Efficient Masked Image Modeling on Large Remote Sensing Dataset', what Mean IoU score did the SelectiveMAE+ViT-B model get on the SpaceNet 1 dataset
| 79.50 |
ETTh2 (336) Multivariate | TSMixer | TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting | 2023-06-14T00:00:00 | https://arxiv.org/abs/2306.09364v4 | [
"https://github.com/ibm/tsfm"
] | In the paper 'TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting', what MSE score did the TSMixer model get on the ETTh2 (336) Multivariate dataset
| 0.357 |
pokec | GCN | Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification | 2024-06-13T00:00:00 | https://arxiv.org/abs/2406.08993v2 | [
"https://github.com/LUOyk1999/tunedGNN"
] | In the paper 'Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification', what Accuracy score did the GCN model get on the pokec dataset
| 86.33 ± 0.17 |
USNA-Cn2 (long-term) | Hybrid Air-Water Temperature Difference | Effective Benchmarks for Optical Turbulence Modeling | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03573v1 | [
"https://github.com/cdjellen/otbench"
] | In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the Hybrid Air-Water Temperature Difference model get on the USNA-Cn2 (long-term) dataset
| 0.458 |
Peptides-func | PathNN | Path Neural Networks: Expressive and Accurate Graph Neural Networks | 2023-06-09T00:00:00 | https://arxiv.org/abs/2306.05955v1 | [
"https://github.com/gasmichel/pathnns_expressive"
] | In the paper 'Path Neural Networks: Expressive and Accurate Graph Neural Networks', what AP score did the PathNN model get on the Peptides-func dataset
| 0.6816±0.0026 |
Ant-v4 | MEow | Maximum Entropy Reinforcement Learning via Energy-Based Normalizing Flow | 2024-05-22T00:00:00 | https://arxiv.org/abs/2405.13629v2 | [
"https://github.com/ChienFeng-hub/meow"
] | In the paper 'Maximum Entropy Reinforcement Learning via Energy-Based Normalizing Flow', what Average Return score did the MEow model get on the Ant-v4 dataset
| 6586.33 |
MUTAG | G-Tuning | Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns | 2023-12-21T00:00:00 | https://arxiv.org/abs/2312.13583v1 | [
"https://github.com/zjunet/G-Tuning"
] | In the paper 'Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns', what Accuracy (10 fold) score did the G-Tuning model get on the MUTAG dataset
| 86.14 |
ETTm1 (720) Multivariate | DiPE-Linear | Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting | 2024-11-26T00:00:00 | https://arxiv.org/abs/2411.17257v1 | [
"https://github.com/wintertee/dipe-linear"
] | In the paper 'Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting', what MSE score did the DiPE-Linear model get on the ETTm1 (720) Multivariate dataset
| 0.416 |
MOSE | Cutie (small, MEGA) | Putting the Object Back into Video Object Segmentation | 2023-10-19T00:00:00 | https://arxiv.org/abs/2310.12982v2 | [
"https://github.com/hkchengrex/Cutie"
] | In the paper 'Putting the Object Back into Video Object Segmentation', what J&F score did the Cutie (small, MEGA) model get on the MOSE dataset
| 68.6 |
Human3.6M | HMR 2.0a | Humans in 4D: Reconstructing and Tracking Humans with Transformers | 2023-05-31T00:00:00 | https://arxiv.org/abs/2305.20091v3 | [
"https://github.com/shubham-goel/4D-Humans"
] | In the paper 'Humans in 4D: Reconstructing and Tracking Humans with Transformers', what Average MPJPE (mm) score did the HMR 2.0a model get on the Human3.6M dataset
| 44.8 |
WSJ0-2mix | SepReformer-L | Separate and Reconstruct: Asymmetric Encoder-Decoder for Speech Separation | 2024-06-10T00:00:00 | https://arxiv.org/abs/2406.05983v3 | [
"https://github.com/dmlguq456/SepReformer"
] | In the paper 'Separate and Reconstruct: Asymmetric Encoder-Decoder for Speech Separation', what SI-SDRi score did the SepReformer-L model get on the WSJ0-2mix dataset
| 25.1 |
Cityscapes test | SwinMTL | SwinMTL: A Shared Architecture for Simultaneous Depth Estimation and Semantic Segmentation from Monocular Camera Images | 2024-03-15T00:00:00 | https://arxiv.org/abs/2403.10662v1 | [
"https://github.com/pardistaghavi/swinmtl"
] | In the paper 'SwinMTL: A Shared Architecture for Simultaneous Depth Estimation and Semantic Segmentation from Monocular Camera Images', what mIoU score did the SwinMTL model get on the Cityscapes test dataset
| 76.41 |
DanceTrack | MOTIP (Deformable DETR) | Multiple Object Tracking as ID Prediction | 2024-03-25T00:00:00 | https://arxiv.org/abs/2403.16848v1 | [
"https://github.com/MCG-NJU/MOTIP"
] | In the paper 'Multiple Object Tracking as ID Prediction', what HOTA score did the MOTIP (Deformable DETR) model get on the DanceTrack dataset
| 67.5 |
CIFAR-100 | KD++(T:resnet56, S:resnet20) | Improving Knowledge Distillation via Regularizing Feature Norm and Direction | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.17007v1 | [
"https://github.com/wangyz1608/knowledge-distillation-via-nd"
] | In the paper 'Improving Knowledge Distillation via Regularizing Feature Norm and Direction', what Top-1 Accuracy (%) score did the KD++(T:resnet56, S:resnet20) model get on the CIFAR-100 dataset
| 72.53 |
CLEVR Counts | TURTLE (CLIP + DINOv2) | Let Go of Your Labels with Unsupervised Transfer | 2024-06-11T00:00:00 | https://arxiv.org/abs/2406.07236v1 | [
"https://github.com/mlbio-epfl/turtle"
] | In the paper 'Let Go of Your Labels with Unsupervised Transfer', what Accuracy score did the TURTLE (CLIP + DINOv2) model get on the CLEVR Counts dataset
| 24.0 |
Weather2K1786 (96) | MoLE-RLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06786v3 | [
"https://github.com/rogerni/mole"
] | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-RLinear model get on the Weather2K1786 (96) dataset
| 0.535 |
NYU Depth v2 | ComPtr (Swin-T) | ComPtr: Towards Diverse Bi-source Dense Prediction Tasks via A Simple yet General Complementary Transformer | 2023-07-23T00:00:00 | https://arxiv.org/abs/2307.12349v1 | [
"https://github.com/lartpang/comptr"
] | In the paper 'ComPtr: Towards Diverse Bi-source Dense Prediction Tasks via A Simple yet General Complementary Transformer', what Mean IoU score did the ComPtr (Swin-T) model get on the NYU Depth v2 dataset
| 49.2% |
LingOly | Llama 3 8B | LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low-Resource and Extinct Languages | 2024-06-10T00:00:00 | https://arxiv.org/abs/2406.06196v3 | [
"https://github.com/am-bean/lingOly"
] | In the paper 'LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low-Resource and Extinct Languages', what Exact Match Accuracy score did the Llama 3 8B model get on the LingOly dataset
| 11.4% |
Office-Home | MLNet | MLNet: Mutual Learning Network with Neighborhood Invariance for Universal Domain Adaptation | 2023-12-13T00:00:00 | https://arxiv.org/abs/2312.07871v4 | [
"https://github.com/YanzuoLu/MLNet"
] | In the paper 'MLNet: Mutual Learning Network with Neighborhood Invariance for Universal Domain Adaptation', what H-Score score did the MLNet model get on the Office-Home dataset
| 77.4 |
Actor | 2-HiGCN | Higher-order Graph Convolutional Network with Flower-Petals Laplacians on Simplicial Complexes | 2023-09-22T00:00:00 | https://arxiv.org/abs/2309.12971v2 | [
"https://github.com/yiminghh/higcn"
] | In the paper 'Higher-order Graph Convolutional Network with Flower-Petals Laplacians on Simplicial Complexes', what Accuracy score did the 2-HiGCN model get on the Actor dataset
| 41.81±0.52 |
CIFAR-100, 2500 Labels | ShrinkMatch | Shrinking Class Space for Enhanced Certainty in Semi-Supervised Learning | 2023-08-13T00:00:00 | https://arxiv.org/abs/2308.06777v1 | [
"https://github.com/LiheYoung/ShrinkMatch"
] | In the paper 'Shrinking Class Space for Enhanced Certainty in Semi-Supervised Learning', what Percentage error score did the ShrinkMatch model get on the CIFAR-100, 2500 Labels dataset
| 25.17 |
FreiHAND | Hamba | Hamba: Single-view 3D Hand Reconstruction with Graph-guided Bi-Scanning Mamba | 2024-07-12T00:00:00 | https://arxiv.org/abs/2407.09646v2 | [
"https://github.com/humansensinglab/Hamba"
] | In the paper 'Hamba: Single-view 3D Hand Reconstruction with Graph-guided Bi-Scanning Mamba', what PA-MPVPE score did the Hamba model get on the FreiHAND dataset
| 5.3 |
ACMPS | CarNet | Revising deep learning methods in parking lot occupancy detection | 2023-06-07T00:00:00 | https://arxiv.org/abs/2306.04288v3 | [
"https://github.com/eighonet/parking-research"
] | In the paper 'Revising deep learning methods in parking lot occupancy detection', what F1-score score did the CarNet model get on the ACMPS dataset
| 0.9877 |
SUN397 | RPO | Read-only Prompt Optimization for Vision-Language Few-shot Learning | 2023-08-29T00:00:00 | https://arxiv.org/abs/2308.14960v2 | [
"https://github.com/mlvlab/rpo"
] | In the paper 'Read-only Prompt Optimization for Vision-Language Few-shot Learning', what Harmonic mean score did the RPO model get on the SUN397 dataset
| 79.18 |
VoxCeleb1 | ReDimNet-B4-LM-ASNorm (6.3M) | Reshape Dimensions Network for Speaker Recognition | 2024-07-25T00:00:00 | https://arxiv.org/abs/2407.18223v2 | [
"https://github.com/IDRnD/ReDimNet"
] | In the paper 'Reshape Dimensions Network for Speaker Recognition', what EER score did the ReDimNet-B4-LM-ASNorm (6.3M) model get on the VoxCeleb1 dataset
| 0.44 |
SYNTHIA-to-Cityscapes | HALO | Hyperbolic Active Learning for Semantic Segmentation under Domain Shift | 2023-06-19T00:00:00 | https://arxiv.org/abs/2306.11180v5 | [
"https://github.com/paolomandica/HALO"
] | In the paper 'Hyperbolic Active Learning for Semantic Segmentation under Domain Shift', what mIoU score did the HALO model get on the SYNTHIA-to-Cityscapes dataset
| 78.1 |
ChEBI-20 | GIT-Mol | GIT-Mol: A Multi-modal Large Language Model for Molecular Science with Graph, Image, and Text | 2023-08-14T00:00:00 | https://arxiv.org/abs/2308.06911v3 | [
"https://github.com/ai-hpc-research-team/git-mol"
] | In the paper 'GIT-Mol: A Multi-modal Large Language Model for Molecular Science with Graph, Image, and Text', what BLEU score did the GIT-Mol model get on the ChEBI-20 dataset
| 0.924 |
Turbulence | Command | Turbulence: Systematically and Automatically Testing Instruction-Tuned Large Language Models for Code | 2023-12-22T00:00:00 | https://arxiv.org/abs/2312.14856v2 | [
"https://github.com/shahinhonarvar/turbulence-benchmark"
] | In the paper 'Turbulence: Systematically and Automatically Testing Instruction-Tuned Large Language Models for Code', what CorrSc score did the Command model get on the Turbulence dataset
| 0.063 |
PACS | GMDG (ResNet-50, SWAD) | Rethinking Multi-domain Generalization with A General Learning Objective | 2024-02-29T00:00:00 | https://arxiv.org/abs/2402.18853v1 | [
"https://github.com/zhaorui-tan/GMDG_cvpr2024"
] | In the paper 'Rethinking Multi-domain Generalization with A General Learning Objective', what Average Accuracy score did the GMDG (ResNet-50, SWAD) model get on the PACS dataset
| 88.4 |
S3DIS Area5 | SPG(PTv2) | Subspace Prototype Guidance for Mitigating Class Imbalance in Point Cloud Semantic Segmentation | 2024-08-20T00:00:00 | https://arxiv.org/abs/2408.10537v2 | [
"https://github.com/Javion11/PointLiBR"
] | In the paper 'Subspace Prototype Guidance for Mitigating Class Imbalance in Point Cloud Semantic Segmentation', what mIoU score did the SPG(PTv2) model get on the S3DIS Area5 dataset
| 73.3 |
S3DIS | SuperCluster | Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering | 2024-01-12T00:00:00 | https://arxiv.org/abs/2401.06704v2 | [
"https://github.com/drprojects/superpoint_transformer"
] | In the paper 'Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering', what PQ score did the SuperCluster model get on the S3DIS dataset
| 55.9 |
TerraIncognita | POEM | POEM: Polarization of Embeddings for Domain-Invariant Representations | 2023-05-22T00:00:00 | https://arxiv.org/abs/2305.13046v1 | [
"https://github.com/josangyoung/official-poem"
] | In the paper 'POEM: Polarization of Embeddings for Domain-Invariant Representations', what Average Accuracy score did the POEM model get on the TerraIncognita dataset
| 49.5 |
MOT16 | HopTrack[Embedded GPU] | HopTrack: A Real-time Multi-Object Tracking System for Embedded Devices | 2024-11-01T00:00:00 | https://arxiv.org/abs/2411.00608v1 | [
"https://github.com/Mrxiangli/HopTrack"
] | In the paper 'HopTrack: A Real-time Multi-Object Tracking System for Embedded Devices', what MOTA score did the HopTrack[Embedded GPU] model get on the MOT16 dataset
| 63.12 |
MVTec AD | AnomalyDINO-S (full-shot) | AnomalyDINO: Boosting Patch-based Few-shot Anomaly Detection with DINOv2 | 2024-05-23T00:00:00 | https://arxiv.org/abs/2405.14529v2 | [
"https://github.com/dammsi/AnomalyDINO"
] | In the paper 'AnomalyDINO: Boosting Patch-based Few-shot Anomaly Detection with DINOv2', what Detection AUROC score did the AnomalyDINO-S (full-shot) model get on the MVTec AD dataset
| 99.5 |
Ego4D | GANOv2 | Guided Attention for Next Active Object @ EGO4D STA Challenge | 2023-05-25T00:00:00 | https://arxiv.org/abs/2305.16066v3 | [
"https://github.com/sanketsans/ganov2"
] | In the paper 'Guided Attention for Next Active Object @ EGO4D STA Challenge', what Overall (Top5 mAP) score did the GANOv2 model get on the Ego4D dataset
| 3.99 |
Deep Noise Suppression (DNS) Challenge | aTENNuate | Real-time Speech Enhancement on Raw Signals with Deep State-space Modeling | 2024-09-05T00:00:00 | https://arxiv.org/abs/2409.03377v2 | [
"https://github.com/Brainchip-Inc/aTENNuate"
] | In the paper 'Real-time Speech Enhancement on Raw Signals with Deep State-space Modeling', what PESQ-WB score did the aTENNuate model get on the Deep Noise Suppression (DNS) Challenge dataset
| 2.98 |
GENIA | UniNER-7B | UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition | 2023-08-07T00:00:00 | https://arxiv.org/abs/2308.03279v2 | [
"https://github.com/emma1066/retrieval-augmented-it-openner"
] | In the paper 'UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition', what F1 score did the UniNER-7B model get on the GENIA dataset
| 77.54 |
ImageNet 512x512 | DiT-XL/2 with SA-Solver | SA-Solver: Stochastic Adams Solver for Fast Sampling of Diffusion Models | 2023-09-10T00:00:00 | https://arxiv.org/abs/2309.05019v2 | [
"https://github.com/scxue/SA-Solver"
] | In the paper 'SA-Solver: Stochastic Adams Solver for Fast Sampling of Diffusion Models', what FID score did the DiT-XL/2 with SA-Solver model get on the ImageNet 512x512 dataset
| 2.80 |
WDC-PAVE | AVEQA | Using LLMs for the Extraction and Normalization of Product Attribute Values | 2024-03-04T00:00:00 | https://arxiv.org/abs/2403.02130v4 | [
"https://github.com/wbsg-uni-mannheim/wdc-pave"
] | In the paper 'Using LLMs for the Extraction and Normalization of Product Attribute Values', what F1-Score score did the AVEQA model get on the WDC-PAVE dataset
| 80.83 |
Visual Genome | SpeaQ (with reweighting) | Groupwise Query Specialization and Quality-Aware Multi-Assignment for Transformer-based Visual Relationship Detection | 2024-03-26T00:00:00 | https://arxiv.org/abs/2403.17709v1 | [
"https://github.com/mlvlab/speaq"
] | In the paper 'Groupwise Query Specialization and Quality-Aware Multi-Assignment for Transformer-based Visual Relationship Detection', what Recall@50 score did the SpeaQ (with reweighting) model get on the Visual Genome dataset
| 32.1 |
MNIST | PaddingFlow | PaddingFlow: Improving Normalizing Flows with Padding-Dimensional Noise | 2024-03-13T00:00:00 | https://arxiv.org/abs/2403.08216v2 | [
"https://github.com/adamqlmeng/paddingflow"
] | In the paper 'PaddingFlow: Improving Normalizing Flows with Padding-Dimensional Noise', what MMD-L2 score did the PaddingFlow model get on the MNIST dataset
| 11.0 |
TerraIncognita | GMDG (RegNetY-16GF, SWAD) | Rethinking Multi-domain Generalization with A General Learning Objective | 2024-02-29T00:00:00 | https://arxiv.org/abs/2402.18853v1 | [
"https://github.com/zhaorui-tan/GMDG_cvpr2024"
] | In the paper 'Rethinking Multi-domain Generalization with A General Learning Objective', what Average Accuracy score did the GMDG (RegNetY-16GF, SWAD) model get on the TerraIncognita dataset
| 65 |
BURST-val | Cutie (base, MEGA, 600 pixels) | Putting the Object Back into Video Object Segmentation | 2023-10-19T00:00:00 | https://arxiv.org/abs/2310.12982v2 | [
"https://github.com/hkchengrex/Cutie"
] | In the paper 'Putting the Object Back into Video Object Segmentation', what HOTA (all) score did the Cutie (base, MEGA, 600 pixels) model get on the BURST-val dataset
| 61.2 |
ImageNet | ViT-H @224 (DeiT-III + AugSub) | Masking Augmentation for Supervised Learning | 2023-06-20T00:00:00 | https://arxiv.org/abs/2306.11339v2 | [
"https://github.com/naver-ai/augsub"
] | In the paper 'Masking Augmentation for Supervised Learning', what Top 1 Accuracy score did the ViT-H @224 (DeiT-III + AugSub) model get on the ImageNet dataset
| 85.7% |
Objaverse | MiniGPT-3D | MiniGPT-3D: Efficiently Aligning 3D Point Clouds with Large Language Models using 2D Priors | 2024-05-02T00:00:00 | https://arxiv.org/abs/2405.01413v1 | [
"https://github.com/tangyuan96/minigpt-3d"
] | In the paper 'MiniGPT-3D: Efficiently Aligning 3D Point Clouds with Large Language Models using 2D Priors', what Objaverse (I) score did the MiniGPT-3D model get on the Objaverse dataset
| 60.00 |
MM-Vet | VisionZip (Retain 192 Tokens, fine-tuning) | VisionZip: Longer is Better but Not Necessary in Vision Language Models | 2024-12-05T00:00:00 | https://arxiv.org/abs/2412.04467v1 | [
"https://github.com/dvlab-research/visionzip"
] | In the paper 'VisionZip: Longer is Better but Not Necessary in Vision Language Models', what GPT-4 score score did the VisionZip (Retain 192 Tokens, fine-tuning) model get on the MM-Vet dataset
| 32.6 |
Peptides-func | ViT-PS | Learning Probabilistic Symmetrization for Architecture Agnostic Equivariance | 2023-06-05T00:00:00 | https://arxiv.org/abs/2306.02866v3 | [
"https://github.com/jw9730/lps"
] | In the paper 'Learning Probabilistic Symmetrization for Architecture Agnostic Equivariance', what AP score did the ViT-PS model get on the Peptides-func dataset
| 0.6575 |