| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
| Cityscapes test | VLTSeg | Strong but simple: A Baseline for Domain Generalized Dense Perception by CLIP-based Transfer Learning | 2023-12-04 | https://arxiv.org/abs/2312.02021v4 | https://github.com/VLTSeg/VLTSeg | In the paper 'Strong but simple: A Baseline for Domain Generalized Dense Perception by CLIP-based Transfer Learning', what Mean IoU (class) score did the VLTSeg model get on the Cityscapes test dataset | 86.4 |
| DTU | GC-MVSNet | GC-MVSNet: Multi-View, Multi-Scale, Geometrically-Consistent Multi-View Stereo | 2023-10-30 | https://arxiv.org/abs/2310.19583v3 | https://github.com/vkvats/GC-MVSNet | In the paper 'GC-MVSNet: Multi-View, Multi-Scale, Geometrically-Consistent Multi-View Stereo', what Acc score did the GC-MVSNet model get on the DTU dataset | 0.330 |
| ImageNet - 1% labeled data | CoMatch + EPASS (ResNet-50) | Debiasing, calibrating, and improving Semi-supervised Learning performance via simple Ensemble Projector | 2023-10-24 | https://arxiv.org/abs/2310.15764v1 | https://github.com/beandkay/epass | In the paper 'Debiasing, calibrating, and improving Semi-supervised Learning performance via simple Ensemble Projector', what Top 5 Accuracy score did the CoMatch + EPASS (ResNet-50) model get on the ImageNet - 1% labeled data dataset | 87.3 |
| Synapse multi-organ CT | AgileFormer | AgileFormer: Spatially Agile Transformer UNet for Medical Image Segmentation | 2024-03-29 | https://arxiv.org/abs/2404.00122v2 | https://github.com/sotiraslab/AgileFormer | In the paper 'AgileFormer: Spatially Agile Transformer UNet for Medical Image Segmentation', what Avg DSC score did the AgileFormer model get on the Synapse multi-organ CT dataset | 86.11 |
| STS15 | PromptEOL+CSE+OPT-2.7B | Scaling Sentence Embeddings with Large Language Models | 2023-07-31 | https://arxiv.org/abs/2307.16645v1 | https://github.com/kongds/scaling_sentemb | In the paper 'Scaling Sentence Embeddings with Large Language Models', what Spearman Correlation score did the PromptEOL+CSE+OPT-2.7B model get on the STS15 dataset | 0.8951 |
| PASCAL VOC 2012 val | TADP | Text-image Alignment for Diffusion-based Perception | 2023-09-29 | https://arxiv.org/abs/2310.00031v3 | https://github.com/damaggu/tadp | In the paper 'Text-image Alignment for Diffusion-based Perception', what mIoU score did the TADP model get on the PASCAL VOC 2012 val dataset | 87.11% |
| NYU Depth v2 | SMMCL (SegNeXt-B) | Understanding Dark Scenes by Contrasting Multi-Modal Observations | 2023-08-23 | https://arxiv.org/abs/2308.12320v2 | https://github.com/palmdong/smmcl | In the paper 'Understanding Dark Scenes by Contrasting Multi-Modal Observations', what Mean IoU score did the SMMCL (SegNeXt-B) model get on the NYU Depth v2 dataset | 55.8% |
| Atari 2600 Ms. Pacman | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07 | https://arxiv.org/abs/2305.04180v3 | https://github.com/xinjinghao/color | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score score did the ASL DDQN model get on the Atari 2600 Ms. Pacman dataset | 4416 |
| STAR Benchmark | AnyMAL-70B (0-shot) | AnyMAL: An Efficient and Scalable Any-Modality Augmented Language Model | 2023-09-27 | https://arxiv.org/abs/2309.16058v1 | https://github.com/nokia-bell-labs/papagei-foundation-model | In the paper 'AnyMAL: An Efficient and Scalable Any-Modality Augmented Language Model', what Average Accuracy score did the AnyMAL-70B (0-shot) model get on the STAR Benchmark dataset | 48.2 |
| BDD100K val | MRFP+(Ours) Resnet50 | MRFP: Learning Generalizable Semantic Segmentation from Sim-2-Real with Multi-Resolution Feature Perturbation | 2023-11-30 | https://arxiv.org/abs/2311.18331v2 | https://github.com/airl-iisc/MRFP | In the paper 'MRFP: Learning Generalizable Semantic Segmentation from Sim-2-Real with Multi-Resolution Feature Perturbation', what mIoU score did the MRFP+(Ours) Resnet50 model get on the BDD100K val dataset | 39.55 |
| Chameleon | FaberNet | HoloNets: Spectral Convolutions do extend to Directed Graphs | 2023-10-03 | https://arxiv.org/abs/2310.02232v2 | https://github.com/ChristianKoke/HoloNets | In the paper 'HoloNets: Spectral Convolutions do extend to Directed Graphs', what Accuracy score did the FaberNet model get on the Chameleon dataset | 80.33±1.19 |
| Atari 2600 Tutankham | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07 | https://arxiv.org/abs/2305.04180v3 | https://github.com/xinjinghao/color | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score score did the ASL DDQN model get on the Atari 2600 Tutankham dataset | 252.9 |
| X-Sum | Selfmem | Lift Yourself Up: Retrieval-augmented Text Generation with Self Memory | 2023-05-03 | https://arxiv.org/abs/2305.02437v3 | https://github.com/hannibal046/selfmemory | In the paper 'Lift Yourself Up: Retrieval-augmented Text Generation with Self Memory', what ROUGE-1 score did the Selfmem model get on the X-Sum dataset | 50.30 |
| Automatic Cardiac Diagnosis Challenge (ACDC) | SegFormer3D | SegFormer3D: an Efficient Transformer for 3D Medical Image Segmentation | 2024-04-15 | https://arxiv.org/abs/2404.10156v2 | https://github.com/osupcvlab/segformer3d | In the paper 'SegFormer3D: an Efficient Transformer for 3D Medical Image Segmentation', what Avg DSC score did the SegFormer3D model get on the Automatic Cardiac Diagnosis Challenge (ACDC) dataset | 90.96 |
| SMAC 6h_vs_8z | DPLEX | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04 | https://arxiv.org/abs/2306.02430v1 | https://github.com/j3soon/dfac-extended | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the DPLEX model get on the SMAC 6h_vs_8z dataset | 43.75 |
| CC3M-TagMask | TTD (w/ fine-tuning) | TTD: Text-Tag Self-Distillation Enhancing Image-Text Alignment in CLIP to Alleviate Single Tag Bias | 2024-03-30 | https://arxiv.org/abs/2404.00384v2 | https://github.com/shjo-april/TTD | In the paper 'TTD: Text-Tag Self-Distillation Enhancing Image-Text Alignment in CLIP to Alleviate Single Tag Bias', what F1 score did the TTD (w/ fine-tuning) model get on the CC3M-TagMask dataset | 82.8 |
| PascalVOC-SP | NeuralWalker | Learning Long Range Dependencies on Graphs via Random Walks | 2024-06-05 | https://arxiv.org/abs/2406.03386v2 | https://github.com/borgwardtlab/neuralwalker | In the paper 'Learning Long Range Dependencies on Graphs via Random Walks', what macro F1 score did the NeuralWalker model get on the PascalVOC-SP dataset | 0.4912 ± 0.0042 |
| BanglaBook | Bangla-BERT (large) | BanglaBook: A Large-scale Bangla Dataset for Sentiment Analysis from Book Reviews | 2023-05-11 | https://arxiv.org/abs/2305.06595v3 | https://github.com/mohsinulkabir14/banglabook | In the paper 'BanglaBook: A Large-scale Bangla Dataset for Sentiment Analysis from Book Reviews', what Weighted Average F1-score score did the Bangla-BERT (large) model get on the BanglaBook dataset | 0.9331 |
| NExT-QA | SeViLA | Self-Chained Image-Language Model for Video Localization and Question Answering | 2023-05-11 | https://arxiv.org/abs/2305.06988v2 | https://github.com/yui010206/sevila | In the paper 'Self-Chained Image-Language Model for Video Localization and Question Answering', what Accuracy score did the SeViLA model get on the NExT-QA dataset | 73.8 |
| ETTh1 (192) Multivariate | PRformer | PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting | 2024-08-20 | https://arxiv.org/abs/2408.10483v1 | https://github.com/usualheart/prformer | In the paper 'PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting', what MSE score did the PRformer model get on the ETTh1 (192) Multivariate dataset | 0.397 |
| RefCOCO testA | EVF-SAM | EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model | 2024-06-28 | https://arxiv.org/abs/2406.20076v4 | https://github.com/hustvl/evf-sam | In the paper 'EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model', what Overall IoU score did the EVF-SAM model get on the RefCOCO testA dataset | 83.7 |
| MedQA | LLAMA-2 (70B) | MEDITRON-70B: Scaling Medical Pretraining for Large Language Models | 2023-11-27 | https://arxiv.org/abs/2311.16079v1 | https://github.com/epfllm/meditron | In the paper 'MEDITRON-70B: Scaling Medical Pretraining for Large Language Models', what Accuracy score did the LLAMA-2 (70B) model get on the MedQA dataset | 59.2 |
| Shot2Story20K | Shotluck-Holmes (3.1B) | Shotluck Holmes: A Family of Efficient Small-Scale Large Language Vision Models For Video Captioning and Summarization | 2024-05-31 | https://arxiv.org/abs/2405.20648v2 | https://github.com/Skyline-9/Shotluck-Holmes | In the paper 'Shotluck Holmes: A Family of Efficient Small-Scale Large Language Vision Models For Video Captioning and Summarization', what CIDEr score did the Shotluck-Holmes (3.1B) model get on the Shot2Story20K dataset | 63.2 |
| COCO-20i (5-shot) | SCCAN (ResNet-101) | Self-Calibrated Cross Attention Network for Few-Shot Segmentation | 2023-08-18 | https://arxiv.org/abs/2308.09294v1 | https://github.com/sam1224/sccan | In the paper 'Self-Calibrated Cross Attention Network for Few-Shot Segmentation', what Mean IoU score did the SCCAN (ResNet-101) model get on the COCO-20i (5-shot) dataset | 57 |
| TerraIncognita | EOQ (ResNet-50) | QT-DoG: Quantization-aware Training for Domain Generalization | 2024-10-08 | https://arxiv.org/abs/2410.06020v1 | https://github.com/saqibjaved1/QT-DoG | In the paper 'QT-DoG: Quantization-aware Training for Domain Generalization', what Average Accuracy score did the EOQ (ResNet-50) model get on the TerraIncognita dataset | 53.2 |
| MSVD | COSA | COSA: Concatenated Sample Pretrained Vision-Language Foundation Model | 2023-06-15 | https://arxiv.org/abs/2306.09085v1 | https://github.com/txh-mercury/cosa | In the paper 'COSA: Concatenated Sample Pretrained Vision-Language Foundation Model', what CIDEr score did the COSA model get on the MSVD dataset | 178.5 |
| BoolQ | Gemma-7B | Hierarchical Prompting Taxonomy: A Universal Evaluation Framework for Large Language Models Aligned with Human Cognitive Principles | 2024-06-18 | https://arxiv.org/abs/2406.12644v4 | https://github.com/devichand579/HPT | In the paper 'Hierarchical Prompting Taxonomy: A Universal Evaluation Framework for Large Language Models Aligned with Human Cognitive Principles', what Accuracy score did the Gemma-7B model get on the BoolQ dataset | 99.419 |
| CIFAR-100 | ReviewKD++(T:resnet-32x4, S:shufflenet-v2) | Improving Knowledge Distillation via Regularizing Feature Norm and Direction | 2023-05-26 | https://arxiv.org/abs/2305.17007v1 | https://github.com/wangyz1608/knowledge-distillation-via-nd | In the paper 'Improving Knowledge Distillation via Regularizing Feature Norm and Direction', what Top-1 Accuracy (%) score did the ReviewKD++(T:resnet-32x4, S:shufflenet-v2) model get on the CIFAR-100 dataset | 77.93 |
| USNA-Cn2 (long-term) | Persistence | Effective Benchmarks for Optical Turbulence Modeling | 2024-01-07 | https://arxiv.org/abs/2401.03573v1 | https://github.com/cdjellen/otbench | In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the Persistence model get on the USNA-Cn2 (long-term) dataset | 1.208 |
| UHRSD | BiRefNet (DUTS, HRSOD, UHRSD) | Bilateral Reference for High-Resolution Dichotomous Image Segmentation | 2024-01-07 | https://arxiv.org/abs/2401.03407v6 | https://github.com/zhengpeng7/birefnet | In the paper 'Bilateral Reference for High-Resolution Dichotomous Image Segmentation', what S-Measure score did the BiRefNet (DUTS, HRSOD, UHRSD) model get on the UHRSD dataset | 0.957 |
| STS Benchmark | PromptEOL+CSE+LLaMA-30B | Scaling Sentence Embeddings with Large Language Models | 2023-07-31 | https://arxiv.org/abs/2307.16645v1 | https://github.com/kongds/scaling_sentemb | In the paper 'Scaling Sentence Embeddings with Large Language Models', what Spearman Correlation score did the PromptEOL+CSE+LLaMA-30B model get on the STS Benchmark dataset | 0.8914 |
| Market-1501 | PLIP-RN50-ABDNet | PLIP: Language-Image Pre-training for Person Representation Learning | 2023-05-15 | https://arxiv.org/abs/2305.08386v2 | https://github.com/zplusdragon/plip | In the paper 'PLIP: Language-Image Pre-training for Person Representation Learning', what mAP score did the PLIP-RN50-ABDNet model get on the Market-1501 dataset | 91.2 |
| ETTh1 (336) Multivariate | UniTS-ST | UniTS: A Unified Multi-Task Time Series Model | 2024-02-29 | https://arxiv.org/abs/2403.00131v3 | https://github.com/mims-harvard/UniTS | In the paper 'UniTS: A Unified Multi-Task Time Series Model', what MSE score did the UniTS-ST model get on the ETTh1 (336) Multivariate dataset | 0.405 |
| NQ (BEIR) | Blended RAG | Blended RAG: Improving RAG (Retriever-Augmented Generation) Accuracy with Semantic Search and Hybrid Query-Based Retrievers | 2024-03-22 | https://arxiv.org/abs/2404.07220v2 | https://github.com/ibm-ecosystem-engineering/blended-rag | In the paper 'Blended RAG: Improving RAG (Retriever-Augmented Generation) Accuracy with Semantic Search and Hybrid Query-Based Retrievers', what nDCG@10 score did the Blended RAG model get on the NQ (BEIR) dataset | 0.67 |
| FMB Dataset | StitchFusion+FFMs (RGB-Infrared) | StitchFusion: Weaving Any Visual Modalities to Enhance Multimodal Semantic Segmentation | 2024-08-02 | https://arxiv.org/abs/2408.01343v1 | https://github.com/libingyu01/stitchfusion-stitchfusion-weaving-any-visual-modalities-to-enhance-multimodal-semantic-segmentation | In the paper 'StitchFusion: Weaving Any Visual Modalities to Enhance Multimodal Semantic Segmentation', what mIoU score did the StitchFusion+FFMs (RGB-Infrared) model get on the FMB Dataset dataset | 64.32 |
| PubMed with Public Split: fixed 20 nodes per class | Graph-MLP | Graph Entropy Minimization for Semi-supervised Node Classification | 2023-05-31 | https://arxiv.org/abs/2305.19502v1 | https://github.com/cf020031308/gem | In the paper 'Graph Entropy Minimization for Semi-supervised Node Classification', what Accuracy score did the Graph-MLP model get on the PubMed with Public Split: fixed 20 nodes per class dataset | 79.91 |
| ASQP | MvP | MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction | 2023-05-22 | https://arxiv.org/abs/2305.12627v1 | https://github.com/ZubinGou/multi-view-prompting | In the paper 'MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction', what F1 (R15) score did the MvP model get on the ASQP dataset | 51.04 |
| RSTPReid | RDE | Noisy-Correspondence Learning for Text-to-Image Person Re-identification | 2023-08-19 | https://arxiv.org/abs/2308.09911v3 | https://github.com/QinYang79/RDE | In the paper 'Noisy-Correspondence Learning for Text-to-Image Person Re-identification', what Rank 1 score did the RDE model get on the RSTPReid dataset | 64.45 |
| GRAZPEDWRI-DX | YOLOv8+ECA | YOLOv8-AM: YOLOv8 Based on Effective Attention Mechanisms for Pediatric Wrist Fracture Detection | 2024-02-14 | https://arxiv.org/abs/2402.09329v5 | https://github.com/ruiyangju/fracture_detection_improved_yolov8 | In the paper 'YOLOv8-AM: YOLOv8 Based on Effective Attention Mechanisms for Pediatric Wrist Fracture Detection', what mAP score did the YOLOv8+ECA model get on the GRAZPEDWRI-DX dataset | 64.2 |
| TerraIncognita | GMDG (ResNet-50) | Rethinking Multi-domain Generalization with A General Learning Objective | 2024-02-29 | https://arxiv.org/abs/2402.18853v1 | https://github.com/zhaorui-tan/GMDG_cvpr2024 | In the paper 'Rethinking Multi-domain Generalization with A General Learning Objective', what Average Accuracy score did the GMDG (ResNet-50) model get on the TerraIncognita dataset | 51.1 |
| DSIFN-CD | CDMaskFormer | Rethinking Remote Sensing Change Detection With A Mask View | 2024-06-21 | https://arxiv.org/abs/2406.15320v1 | https://github.com/xwmaxwma/rschange | In the paper 'Rethinking Remote Sensing Change Detection With A Mask View', what F1 score did the CDMaskFormer model get on the DSIFN-CD dataset | 74.75 |
| MORPH Album2 (SE) | ResNet-50-DLDL-v2 | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10 | https://arxiv.org/abs/2307.04570v3 | https://github.com/paplhjak/facial-age-estimation-benchmark | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-DLDL-v2 model get on the MORPH Album2 (SE) dataset | 2.82 |
| CIFAR-100-LT (ρ=100) | SURE(ResNet-32) | SURE: SUrvey REcipes for building reliable and robust deep networks | 2024-03-01 | https://arxiv.org/abs/2403.00543v1 | https://github.com/YutingLi0606/SURE | In the paper 'SURE: SUrvey REcipes for building reliable and robust deep networks', what Error Rate score did the SURE(ResNet-32) model get on the CIFAR-100-LT (ρ=100) dataset | 43.66 |
| DeLiVER | StitchFusion (RGB-LiDAR) | StitchFusion: Weaving Any Visual Modalities to Enhance Multimodal Semantic Segmentation | 2024-08-02 | https://arxiv.org/abs/2408.01343v1 | https://github.com/libingyu01/stitchfusion-stitchfusion-weaving-any-visual-modalities-to-enhance-multimodal-semantic-segmentation | In the paper 'StitchFusion: Weaving Any Visual Modalities to Enhance Multimodal Semantic Segmentation', what mIoU score did the StitchFusion (RGB-LiDAR) model get on the DeLiVER dataset | 58.03 |
| SNU-FILM (extreme) | VFIMamba | VFIMamba: Video Frame Interpolation with State Space Models | 2024-07-02 | https://arxiv.org/abs/2407.02315v2 | https://github.com/mcg-nju/vfimamba | In the paper 'VFIMamba: Video Frame Interpolation with State Space Models', what PSNR score did the VFIMamba model get on the SNU-FILM (extreme) dataset | 25.79 |
| NExT-QA | CREMA | CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion | 2024-02-08 | https://arxiv.org/abs/2402.05889v3 | https://github.com/Yui010206/CREMA | In the paper 'CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion', what Accuracy score did the CREMA model get on the NExT-QA dataset | 73.9 |
| ImageNet | DAT-B++ (384x384) | DAT++: Spatially Dynamic Vision Transformer with Deformable Attention | 2023-09-04 | https://arxiv.org/abs/2309.01430v1 | https://github.com/leaplabthu/dat | In the paper 'DAT++: Spatially Dynamic Vision Transformer with Deformable Attention', what Top 1 Accuracy score did the DAT-B++ (384x384) model get on the ImageNet dataset | 85.9% |
| COCO-Stuff-171 | CAUSE-TR (ViT-S/8) | Causal Unsupervised Semantic Segmentation | 2023-10-11 | https://arxiv.org/abs/2310.07379v1 | https://github.com/ByungKwanLee/Causal-Unsupervised-Segmentation | In the paper 'Causal Unsupervised Semantic Segmentation', what mIoU score did the CAUSE-TR (ViT-S/8) model get on the COCO-Stuff-171 dataset | 15.2 |
| EQ-Bench | Open-Orca/Mistral-7B-OpenOrca | EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models | 2023-12-11 | https://arxiv.org/abs/2312.06281v2 | https://github.com/eq-bench/eq-bench | In the paper 'EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models', what EQ-Bench Score score did the Open-Orca/Mistral-7B-OpenOrca model get on the EQ-Bench dataset | 44.40 |
| THUMOS14 | P-MIL | Proposal-Based Multiple Instance Learning for Weakly-Supervised Temporal Action Localization | 2023-05-29 | https://arxiv.org/abs/2305.17861v1 | https://github.com/RenHuan1999/CVPR2023_P-MIL | In the paper 'Proposal-Based Multiple Instance Learning for Weakly-Supervised Temporal Action Localization', what avg-mAP (0.1-0.5) score did the P-MIL model get on the THUMOS14 dataset | 57.4 |
| ImageNet-A | FAN-L-Hybrid+STL | Fully Attentional Networks with Self-emerging Token Labeling | 2024-01-08 | https://arxiv.org/abs/2401.03844v1 | https://github.com/NVlabs/STL | In the paper 'Fully Attentional Networks with Self-emerging Token Labeling', what Top-1 accuracy % score did the FAN-L-Hybrid+STL model get on the ImageNet-A dataset | 46.1 |
| PubMedQA | BioMedGPT-10B | BioMedGPT: Open Multimodal Generative Pre-trained Transformer for BioMedicine | 2023-08-18 | https://arxiv.org/abs/2308.09442v2 | https://github.com/pharmolix/openbiomed | In the paper 'BioMedGPT: Open Multimodal Generative Pre-trained Transformer for BioMedicine', what Accuracy score did the BioMedGPT-10B model get on the PubMedQA dataset | 76.1 |
| Citeseer | CDNMF | Contrastive Deep Nonnegative Matrix Factorization for Community Detection | 2023-11-04 | https://arxiv.org/abs/2311.02357v2 | https://github.com/6lyc/cdnmf | In the paper 'Contrastive Deep Nonnegative Matrix Factorization for Community Detection', what ACC score did the CDNMF model get on the Citeseer dataset | 0.4756 |
| Pascal VOC to Clipart1K | MILA | MILA: Memory-Based Instance-Level Adaptation for Cross-Domain Object Detection | 2023-11-20 | https://arxiv.org/abs/2309.01086v1 | https://github.com/hitachi-rd-cv/MILA | In the paper 'MILA: Memory-Based Instance-Level Adaptation for Cross-Domain Object Detection', what mAP score did the MILA model get on the Pascal VOC to Clipart1K dataset | 49.9 |
| TerraIncognita | MoA (OpenCLIP, ViT-B/16) | Domain Generalization Using Large Pretrained Models with Mixture-of-Adapters | 2023-10-17 | https://arxiv.org/abs/2310.11031v2 | https://github.com/KU-CVLAB/MoA | In the paper 'Domain Generalization Using Large Pretrained Models with Mixture-of-Adapters', what Average Accuracy score did the MoA (OpenCLIP, ViT-B/16) model get on the TerraIncognita dataset | 52.8 |
| ImageNet | ZLaP | Label Propagation for Zero-shot Classification with Vision-Language Models | 2024-04-05 | https://arxiv.org/abs/2404.04072v1 | https://github.com/vladan-stojnic/zlap | In the paper 'Label Propagation for Zero-shot Classification with Vision-Language Models', what Top 1 Accuracy score did the ZLaP model get on the ImageNet dataset | 72.1 |
| RefCOCO+ test B | MagNet | Mask Grounding for Referring Image Segmentation | 2023-12-19 | https://arxiv.org/abs/2312.12198v2 | https://github.com/yxchng/mask-grounding | In the paper 'Mask Grounding for Referring Image Segmentation', what Overall IoU score did the MagNet model get on the RefCOCO+ test B dataset | 58.14 |
| OVIS validation | DVIS++(R50, Online) | DVIS++: Improved Decoupled Framework for Universal Video Segmentation | 2023-12-20 | https://arxiv.org/abs/2312.13305v1 | https://github.com/zhang-tao-whu/DVIS_Plus | In the paper 'DVIS++: Improved Decoupled Framework for Universal Video Segmentation', what mask AP score did the DVIS++(R50, Online) model get on the OVIS validation dataset | 37.2 |
| Saarbruecken Voice Database (females) | SVM | Reproducible Machine Learning-based Voice Pathology Detection: Introducing the Pitch Difference Feature | 2024-10-14 | https://arxiv.org/abs/2410.10537v1 | https://github.com/aailab-uct/automated-robust-and-reproducible-voice-pathology-detection | In the paper 'Reproducible Machine Learning-based Voice Pathology Detection: Introducing the Pitch Difference Feature', what UAR score did the SVM model get on the Saarbruecken Voice Database (females) dataset | 85.44% |
| FRMT (Chinese - Mainland) | PaLM 2 | PaLM 2 Technical Report | 2023-05-17 | https://arxiv.org/abs/2305.10403v3 | https://github.com/eternityyw/tram-benchmark | In the paper 'PaLM 2 Technical Report', what BLEURT score did the PaLM 2 model get on the FRMT (Chinese - Mainland) dataset | 74.4 |
| Office-Home | RCL | Empowering Source-Free Domain Adaptation with MLLM-driven Curriculum Learning | 2024-05-28 | https://arxiv.org/abs/2405.18376v1 | https://github.com/Dong-Jie-Chen/RCL | In the paper 'Empowering Source-Free Domain Adaptation with MLLM-driven Curriculum Learning', what Accuracy score did the RCL model get on the Office-Home dataset | 90.0 |
| NeedForSpeed | SAMURAI-L | SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory | 2024-11-18 | https://arxiv.org/abs/2411.11922v2 | https://github.com/yangchris11/samurai | In the paper 'SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory', what AUC score did the SAMURAI-L model get on the NeedForSpeed dataset | 0.692 |
| Social media attributions of YouTube comments | BERT-base | Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs | 2024-01-30 | https://arxiv.org/abs/2401.16638v1 | https://github.com/stepantita/space-model | In the paper 'Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs', what Accuracy (2 classes) score did the BERT-base model get on the Social media attributions of YouTube comments dataset | 0.8220 |
| CIFAR-10 | GAC-SNN | Gated Attention Coding for Training High-performance and Efficient Spiking Neural Networks | 2023-08-12 | https://arxiv.org/abs/2308.06582v2 | https://github.com/bollossom/GAC | In the paper 'Gated Attention Coding for Training High-performance and Efficient Spiking Neural Networks', what Percentage correct score did the GAC-SNN model get on the CIFAR-10 dataset | 96.46 |
| Bongard-OpenWorld | Otter | Bongard-OpenWorld: Few-Shot Reasoning for Free-form Visual Concepts in the Real World | 2023-10-16 | https://arxiv.org/abs/2310.10207v5 | https://github.com/joyjayng/Bongard-OpenWorld | In the paper 'Bongard-OpenWorld: Few-Shot Reasoning for Free-form Visual Concepts in the Real World', what 2-Class Accuracy score did the Otter model get on the Bongard-OpenWorld dataset | 49.3 |
| MedQA | Med-PaLM 2 (CoT + SC) | Towards Expert-Level Medical Question Answering with Large Language Models | 2023-05-16 | https://arxiv.org/abs/2305.09617v1 | https://github.com/m42-health/med42 | In the paper 'Towards Expert-Level Medical Question Answering with Large Language Models', what Accuracy score did the Med-PaLM 2 (CoT + SC) model get on the MedQA dataset | 83.7 |
| MM-Vet | CuMo-7B | CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts | 2024-05-09 | https://arxiv.org/abs/2405.05949v1 | https://github.com/shi-labs/cumo | In the paper 'CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts', what GPT-4 score score did the CuMo-7B model get on the MM-Vet dataset | 51.0 |
| CFC-DAOD | ALDI++ (ResNet50-FPN) | Align and Distill: Unifying and Improving Domain Adaptive Object Detection | 2024-03-18 | https://arxiv.org/abs/2403.12029v2 | https://github.com/justinkay/aldi | In the paper 'Align and Distill: Unifying and Improving Domain Adaptive Object Detection', what AP@0.5 score did the ALDI++ (ResNet50-FPN) model get on the CFC-DAOD dataset | 76.1 |
| ETTh2 (96) Multivariate | TSMixer | TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting | 2023-06-14 | https://arxiv.org/abs/2306.09364v4 | https://github.com/ibm/tsfm | In the paper 'TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting', what MSE score did the TSMixer model get on the ETTh2 (96) Multivariate dataset | 0.276 |
| Cityscapes to Foggy Cityscapes | CDDMSL | Semi-Supervised Domain Generalization for Object Detection via Language-Guided Feature Alignment | 2023-09-24 | https://arxiv.org/abs/2309.13525v1 | https://github.com/sinamalakouti/CDDMSL | In the paper 'Semi-Supervised Domain Generalization for Object Detection via Language-Guided Feature Alignment', what mAP score did the CDDMSL model get on the Cityscapes to Foggy Cityscapes dataset | 54.3 |
HIDE (trained on GOPRO) | DeblurDiNAT-L | DeblurDiNAT: A Generalizable Transformer for Perceptual Image Deblurring | 2024-03-19T00:00:00 | https://arxiv.org/abs/2403.13163v4 | [
"https://github.com/hanzhouliu/deblurdinat"
] | In the paper 'DeblurDiNAT: A Generalizable Transformer for Perceptual Image Deblurring', what PSNR (sRGB) score did the DeblurDiNAT-L model get on the HIDE (trained on GOPRO) dataset
| 31.47 |
GRAZPEDWRI-DX | YOLOv8+GAM | YOLOv8-AM: YOLOv8 Based on Effective Attention Mechanisms for Pediatric Wrist Fracture Detection | 2024-02-14T00:00:00 | https://arxiv.org/abs/2402.09329v5 | [
"https://github.com/ruiyangju/fracture_detection_improved_yolov8"
] | In the paper 'YOLOv8-AM: YOLOv8 Based on Effective Attention Mechanisms for Pediatric Wrist Fracture Detection', what mAP score did the YOLOv8+GAM model get on the GRAZPEDWRI-DX dataset
| 64.2 |
MM-Vet | ShareGPT4V-13B | ShareGPT4V: Improving Large Multi-Modal Models with Better Captions | 2023-11-21T00:00:00 | https://arxiv.org/abs/2311.12793v2 | [
"https://github.com/InternLM/InternLM-XComposer/tree/main/projects/ShareGPT4V"
] | In the paper 'ShareGPT4V: Improving Large Multi-Modal Models with Better Captions', what GPT-4 score score did the ShareGPT4V-13B model get on the MM-Vet dataset
| 43.1 |
CrowdPose | BUCTD-W48 (w/cond. input from PETR) | Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity | 2023-06-13T00:00:00 | https://arxiv.org/abs/2306.07879v2 | [
"https://github.com/amathislab/BUCTD"
] | In the paper 'Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity', what AP score did the BUCTD-W48 (w/cond. input from PETR) model get on the CrowdPose dataset
| 76.7 |
MNIST | GECCO | A Single Graph Convolution Is All You Need: Efficient Grayscale Image Classification | 2024-02-01T00:00:00 | https://arxiv.org/abs/2402.00564v6 | [
"https://github.com/geccoproject/gecco"
] | In the paper 'A Single Graph Convolution Is All You Need: Efficient Grayscale Image Classification', what Percentage error score did the GECCO model get on the MNIST dataset
| 1.96 |
WaterScenes | Achelous-MV-GDF-S2 | Achelous: A Fast Unified Water-surface Panoptic Perception Framework based on Fusion of Monocular Camera and 4D mmWave Radar | 2023-07-14T00:00:00 | https://arxiv.org/abs/2307.07102v1 | [
"https://github.com/GuanRunwei/Achelous"
] | In the paper 'Achelous: A Fast Unified Water-surface Panoptic Perception Framework based on Fusion of Monocular Camera and 4D mmWave Radar', what mAP@50-95 score did the Achelous-MV-GDF-S2 model get on the WaterScenes dataset
| 56.0 |
WDC-PAVE | GPT-3.5_10_example_values_&_10_demonstrations | Using LLMs for the Extraction and Normalization of Product Attribute Values | 2024-03-04T00:00:00 | https://arxiv.org/abs/2403.02130v4 | [
"https://github.com/wbsg-uni-mannheim/wdc-pave"
] | In the paper 'Using LLMs for the Extraction and Normalization of Product Attribute Values', what F1-Score did the GPT-3.5_10_example_values_&_10_demonstrations model get on the WDC-PAVE dataset
| 88.02 |
ELD SonyA7S2 x200 | ExposureDiffusion (UNet+ELD) | ExposureDiffusion: Learning to Expose for Low-light Image Enhancement | 2023-07-15T00:00:00 | https://arxiv.org/abs/2307.07710v2 | [
"https://github.com/wyf0912/ExposureDiffusion"
] | In the paper 'ExposureDiffusion: Learning to Expose for Low-light Image Enhancement', what PSNR (Raw) score did the ExposureDiffusion (UNet+ELD) model get on the ELD SonyA7S2 x200 dataset
| 40.39 |
Re-TACRED | LLM-QA4RE (XXLarge) | Aligning Instruction Tasks Unlocks Large Language Models as Zero-Shot Relation Extractors | 2023-05-18T00:00:00 | https://arxiv.org/abs/2305.11159v1 | [
"https://github.com/osu-nlp-group/qa4re"
] | In the paper 'Aligning Instruction Tasks Unlocks Large Language Models as Zero-Shot Relation Extractors', what F1 score did the LLM-QA4RE (XXLarge) model get on the Re-TACRED dataset
| 66.5 |
CNRPark+EXT | CarNet | Revising deep learning methods in parking lot occupancy detection | 2023-06-07T00:00:00 | https://arxiv.org/abs/2306.04288v3 | [
"https://github.com/eighonet/parking-research"
] | In the paper 'Revising deep learning methods in parking lot occupancy detection', what F1-score did the CarNet model get on the CNRPark+EXT dataset
| 0.9332 |
LVIS v1.0 val | RichSem (Focal-H + ImageNet as weakly-supervised extra data) | Learning from Rich Semantics and Coarse Locations for Long-tailed Object Detection | 2023-10-18T00:00:00 | https://arxiv.org/abs/2310.12152v1 | [
"https://github.com/MengLcool/RichSem"
] | In the paper 'Learning from Rich Semantics and Coarse Locations for Long-tailed Object Detection', what box AP score did the RichSem (Focal-H + ImageNet as weakly-supervised extra data) model get on the LVIS v1.0 val dataset
| 61.2 |
MNIST | NeuralWalker | Learning Long Range Dependencies on Graphs via Random Walks | 2024-06-05T00:00:00 | https://arxiv.org/abs/2406.03386v2 | [
"https://github.com/borgwardtlab/neuralwalker"
] | In the paper 'Learning Long Range Dependencies on Graphs via Random Walks', what Accuracy score did the NeuralWalker model get on the MNIST dataset
| 98.760 ± 0.079 |
GraspNet-1Billion | HGGD | Efficient Heatmap-Guided 6-Dof Grasp Detection in Cluttered Scenes | 2024-03-27T00:00:00 | https://arxiv.org/abs/2403.18546v2 | [
"https://github.com/THU-VCLab/HGGD"
] | In the paper 'Efficient Heatmap-Guided 6-Dof Grasp Detection in Cluttered Scenes', what AP_similar score did the HGGD model get on the GraspNet-1Billion dataset
| 51.20 |
ETTh1 (336) Multivariate | UniTime | UniTime: A Language-Empowered Unified Model for Cross-Domain Time Series Forecasting | 2023-10-15T00:00:00 | https://arxiv.org/abs/2310.09751v3 | [
"https://github.com/liuxu77/unitime"
] | In the paper 'UniTime: A Language-Empowered Unified Model for Cross-Domain Time Series Forecasting', what MSE score did the UniTime model get on the ETTh1 (336) Multivariate dataset
| 0.398 |
ETTh1 (720) Multivariate | MoLE-RLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06786v3 | [
"https://github.com/rogerni/mole"
] | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-RLinear model get on the ETTh1 (720) Multivariate dataset
| 0.449 |
UMVM-dbp-ja-en | UMAEA (w/o surf) | Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment | 2023-07-30T00:00:00 | https://arxiv.org/abs/2307.16210v2 | [
"https://github.com/zjukg/umaea"
] | In the paper 'Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment', what Hits@1 score did the UMAEA (w/o surf) model get on the UMVM-dbp-ja-en dataset
| 0.857 |
ETTm2 (96) Multivariate | MoLE-DLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06786v3 | [
"https://github.com/rogerni/mole"
] | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the ETTm2 (96) Multivariate dataset
| 0.168 |
Argoverse2 | LION | LION: Linear Group RNN for 3D Object Detection in Point Clouds | 2024-07-25T00:00:00 | https://arxiv.org/abs/2407.18232v1 | [
"https://github.com/happinesslz/LION"
] | In the paper 'LION: Linear Group RNN for 3D Object Detection in Point Clouds', what mAP score did the LION model get on the Argoverse2 dataset
| 41.5 |
ULS labeled data | SOUL | Semantic segmentation of sparse irregular point clouds for leaf/wood discrimination | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.16963v3 | [
"https://github.com/na1an/phd_mission"
] | In the paper 'Semantic segmentation of sparse irregular point clouds for leaf/wood discrimination', what G-mean score did the SOUL model get on the ULS labeled data dataset
| 0.744 |
ScanObjectNN | PointMLP+TAP | Take-A-Photo: 3D-to-2D Generative Pre-training of Point Cloud Models | 2023-07-27T00:00:00 | https://arxiv.org/abs/2307.14971v2 | [
"https://github.com/wangzy22/tap"
] | In the paper 'Take-A-Photo: 3D-to-2D Generative Pre-training of Point Cloud Models', what Overall Accuracy score did the PointMLP+TAP model get on the ScanObjectNN dataset
| 88.5 |
LDC2020T02 | LeakDistill | Incorporating Graph Information in Transformer-based AMR Parsing | 2023-06-23T00:00:00 | https://arxiv.org/abs/2306.13467v1 | [
"https://github.com/sapienzanlp/leakdistill"
] | In the paper 'Incorporating Graph Information in Transformer-based AMR Parsing', what Smatch score did the LeakDistill model get on the LDC2020T02 dataset
| 84.6 |
SFCHD | VFNet+SCALE | Large, Complex, and Realistic Safety Clothing and Helmet Detection: Dataset and Method | 2023-06-03T00:00:00 | https://arxiv.org/abs/2306.02098v2 | [
"https://github.com/lijfrank-open/SFCHD-SCALE"
] | In the paper 'Large, Complex, and Realistic Safety Clothing and Helmet Detection: Dataset and Method', what mAP@0.50 score did the VFNet+SCALE model get on the SFCHD dataset
| 76.6 |
EconLogicQA | Zephyr-7B-Beta | EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning | 2024-05-13T00:00:00 | https://arxiv.org/abs/2405.07938v2 | [
"https://github.com/yinzhu-quan/lm-evaluation-harness"
] | In the paper 'EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning', what Accuracy score did the Zephyr-7B-Beta model get on the EconLogicQA dataset
| 0.1769 |
ClinTox | GIT-Mol(G+S) | GIT-Mol: A Multi-modal Large Language Model for Molecular Science with Graph, Image, and Text | 2023-08-14T00:00:00 | https://arxiv.org/abs/2308.06911v3 | [
"https://github.com/ai-hpc-research-team/git-mol"
] | In the paper 'GIT-Mol: A Multi-modal Large Language Model for Molecular Science with Graph, Image, and Text', what AUC score did the GIT-Mol(G+S) model get on the ClinTox dataset
| 0.883 |
3DPW | HMR 2.0 | Humans in 4D: Reconstructing and Tracking Humans with Transformers | 2023-05-31T00:00:00 | https://arxiv.org/abs/2305.20091v3 | [
"https://github.com/shubham-goel/4D-Humans"
] | In the paper 'Humans in 4D: Reconstructing and Tracking Humans with Transformers', what PA-MPJPE score did the HMR 2.0 model get on the 3DPW dataset
| 44.4 |
Amazon Beauty | CARCA-Rotatory | Positional encoding is not the same as context: A study on positional encoding for Sequential recommendation | 2024-05-16T00:00:00 | https://arxiv.org/abs/2405.10436v1 | [
"https://github.com/researcher1741/position_encoding_srs"
] | In the paper 'Positional encoding is not the same as context: A study on positional encoding for Sequential recommendation', what Hit@10 score did the CARCA-Rotatory model get on the Amazon Beauty dataset
| 0.6187 |
CHAMELEON | ZoomNeXt-PVTv2-B5 | ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection | 2023-10-31T00:00:00 | https://arxiv.org/abs/2310.20208v4 | [
"https://github.com/lartpang/zoomnext"
] | In the paper 'ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection', what S-measure score did the ZoomNeXt-PVTv2-B5 model get on the CHAMELEON dataset
| 0.924 |
KIT Motion-Language | EMDM | EMDM: Efficient Motion Diffusion Model for Fast and High-Quality Motion Generation | 2023-12-04T00:00:00 | https://arxiv.org/abs/2312.02256v3 | [
"https://github.com/frank-zy-dou/emdm"
] | In the paper 'EMDM: Efficient Motion Diffusion Model for Fast and High-Quality Motion Generation', what FID score did the EMDM model get on the KIT Motion-Language dataset
| 0.261 |
MSVD | vid-TLDR (UMT-L) | vid-TLDR: Training Free Token merging for Light-weight Video Transformer | 2024-03-20T00:00:00 | https://arxiv.org/abs/2403.13347v2 | [
"https://github.com/mlvlab/vid-tldr"
] | In the paper 'vid-TLDR: Training Free Token merging for Light-weight Video Transformer', what text-to-video R@1 score did the vid-TLDR (UMT-L) model get on the MSVD dataset
| 57.9 |
Mapillary test | BoQ | BoQ: A Place is Worth a Bag of Learnable Queries | 2024-05-12T00:00:00 | https://arxiv.org/abs/2405.07364v3 | [
"https://github.com/amaralibey/bag-of-queries"
] | In the paper 'BoQ: A Place is Worth a Bag of Learnable Queries', what Recall@1 score did the BoQ model get on the Mapillary test dataset
| 79 |