| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
VideoInstruct | VideoGPT+ | VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding | 2024-06-13T00:00:00 | https://arxiv.org/abs/2406.09418v1 | [
"https://github.com/mbzuai-oryx/videogpt-plus"
] | In the paper 'VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding', what gpt-score did the VideoGPT+ model get on the VideoInstruct dataset
| 3.27 |
ImageNet 256x256 | MAR-L, Diff Loss | Autoregressive Image Generation without Vector Quantization | 2024-06-17T00:00:00 | https://arxiv.org/abs/2406.11838v3 | [
"https://github.com/lth14/mar"
] | In the paper 'Autoregressive Image Generation without Vector Quantization', what FID score did the MAR-L, Diff Loss model get on the ImageNet 256x256 dataset
| 1.78 |
PeMS08 | PM-DMNet(P) | Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction | 2024-08-12T00:00:00 | https://arxiv.org/abs/2408.07100v1 | [
"https://github.com/wengwenchao123/PM-DMNet"
] | In the paper 'Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction', what MAE@1h score did the PM-DMNet(P) model get on the PeMS08 dataset
| 13.55 |
OTB-2015 | HIPTrack | HIPTrack: Visual Tracking with Historical Prompts | 2023-11-03T00:00:00 | https://arxiv.org/abs/2311.02072v2 | [
"https://github.com/wenruicai/hiptrack"
] | In the paper 'HIPTrack: Visual Tracking with Historical Prompts', what AUC score did the HIPTrack model get on the OTB-2015 dataset
| 0.71 |
FER+ | PAtt-Lite | PAtt-Lite: Lightweight Patch and Attention MobileNet for Challenging Facial Expression Recognition | 2023-06-16T00:00:00 | https://arxiv.org/abs/2306.09626v2 | [
"https://github.com/jlrex/patt-lite"
] | In the paper 'PAtt-Lite: Lightweight Patch and Attention MobileNet for Challenging Facial Expression Recognition', what Accuracy score did the PAtt-Lite model get on the FER+ dataset
| 95.55 |
Stanford Cars | HPT | Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06323v1 | [
"https://github.com/vill-lab/2024-aaai-hpt"
] | In the paper 'Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models', what Harmonic mean score did the HPT model get on the Stanford Cars dataset
| 75.57 |
IMDb | RoBERTa-large | LlamBERT: Large-scale low-cost data annotation in NLP | 2024-03-23T00:00:00 | https://arxiv.org/abs/2403.15938v1 | [
"https://github.com/aielte-research/llambert"
] | In the paper 'LlamBERT: Large-scale low-cost data annotation in NLP', what Accuracy score did the RoBERTa-large model get on the IMDb dataset
| 96.54 |
PF-PASCAL | GeoAware-SC (Zero-Shot) | Telling Left from Right: Identifying Geometry-Aware Semantic Correspondence | 2023-11-28T00:00:00 | https://arxiv.org/abs/2311.17034v2 | [
"https://github.com/Junyi42/geoaware-sc"
] | In the paper 'Telling Left from Right: Identifying Geometry-Aware Semantic Correspondence', what PCK score did the GeoAware-SC (Zero-Shot) model get on the PF-PASCAL dataset
| 82.6 |
PASCAL VOC 2012 val | CAUSE (iBOT, ViT-B/16) | Causal Unsupervised Semantic Segmentation | 2023-10-11T00:00:00 | https://arxiv.org/abs/2310.07379v1 | [
"https://github.com/ByungKwanLee/Causal-Unsupervised-Segmentation"
] | In the paper 'Causal Unsupervised Semantic Segmentation', what Clustering [mIoU] score did the CAUSE (iBOT, ViT-B/16) model get on the PASCAL VOC 2012 val dataset
| 53.4 |
Columbia | Early Fusion | MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization | 2023-12-04T00:00:00 | https://arxiv.org/abs/2312.01790v2 | [
"https://github.com/idt-iti/mmfusion-iml"
] | In the paper 'MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization', what AUC score did the Early Fusion model get on the Columbia dataset
| 0.996 |
MPI-INF-3DHP | ZeDO (S=50) | Back to Optimization: Diffusion-based Zero-Shot 3D Human Pose Estimation | 2023-07-07T00:00:00 | https://arxiv.org/abs/2307.03833v3 | [
"https://github.com/ipl-uw/ZeDO-Release"
] | In the paper 'Back to Optimization: Diffusion-based Zero-Shot 3D Human Pose Estimation', what AUC score did the ZeDO (S=50) model get on the MPI-INF-3DHP dataset
| 65.6 |
CNRPark+EXT | MobileNetV2 | Revising deep learning methods in parking lot occupancy detection | 2023-06-07T00:00:00 | https://arxiv.org/abs/2306.04288v3 | [
"https://github.com/eighonet/parking-research"
] | In the paper 'Revising deep learning methods in parking lot occupancy detection', what F1-score did the MobileNetV2 model get on the CNRPark+EXT dataset
| 0.9663 |
roman-empire | FaberNet | HoloNets: Spectral Convolutions do extend to Directed Graphs | 2023-10-03T00:00:00 | https://arxiv.org/abs/2310.02232v2 | [
"https://github.com/ChristianKoke/HoloNets"
] | In the paper 'HoloNets: Spectral Convolutions do extend to Directed Graphs', what Accuracy (%) score did the FaberNet model get on the roman-empire dataset
| 92.24±0.43 |
EconLogicQA | Mistral-7B-Instruct-v0.1 | EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning | 2024-05-13T00:00:00 | https://arxiv.org/abs/2405.07938v2 | [
"https://github.com/yinzhu-quan/lm-evaluation-harness"
] | In the paper 'EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning', what Accuracy score did the Mistral-7B-Instruct-v0.1 model get on the EconLogicQA dataset
| 0.1538 |
IMDB-BINARY | GCN + PANDA | PANDA: Expanded Width-Aware Message Passing Beyond Rewiring | 2024-06-06T00:00:00 | https://arxiv.org/abs/2406.03671v2 | [
"https://github.com/jeongwhanchoi/panda"
] | In the paper 'PANDA: Expanded Width-Aware Message Passing Beyond Rewiring', what Accuracy score did the GCN + PANDA model get on the IMDB-BINARY dataset
| 63.76 |
NYU Depth v2 | PolyMaX(ConvNeXt-L) | PolyMaX: General Dense Prediction with Mask Transformer | 2023-11-09T00:00:00 | https://arxiv.org/abs/2311.05770v1 | [
"https://github.com/google-research/deeplab2"
] | In the paper 'PolyMaX: General Dense Prediction with Mask Transformer', what % < 11.25 score did the PolyMaX(ConvNeXt-L) model get on the NYU Depth v2 dataset
| 65.66 |
iNaturalist 2018 | MDCS(Resnet50) | MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition | 2023-08-19T00:00:00 | https://arxiv.org/abs/2308.09922v2 | [
"https://github.com/fistyee/mdcs"
] | In the paper 'MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition', what Top-1 Accuracy score did the MDCS(Resnet50) model get on the iNaturalist 2018 dataset
| 75.6% |
Charades-STA | SG-DETR | Saliency-Guided DETR for Moment Retrieval and Highlight Detection | 2024-10-02T00:00:00 | https://arxiv.org/abs/2410.01615v1 | [
"https://github.com/ai-forever/sg-detr"
] | In the paper 'Saliency-Guided DETR for Moment Retrieval and Highlight Detection', what R@1 IoU=0.5 score did the SG-DETR model get on the Charades-STA dataset
| 70.20 |
UHRSD | BiRefNet (HRSOD, UHRSD) | Bilateral Reference for High-Resolution Dichotomous Image Segmentation | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03407v6 | [
"https://github.com/zhengpeng7/birefnet"
] | In the paper 'Bilateral Reference for High-Resolution Dichotomous Image Segmentation', what S-Measure score did the BiRefNet (HRSOD, UHRSD) model get on the UHRSD dataset
| 0.952 |
CIFAR-10, 40 Labels | ShrinkMatch | Shrinking Class Space for Enhanced Certainty in Semi-Supervised Learning | 2023-08-13T00:00:00 | https://arxiv.org/abs/2308.06777v1 | [
"https://github.com/LiheYoung/ShrinkMatch"
] | In the paper 'Shrinking Class Space for Enhanced Certainty in Semi-Supervised Learning', what Percentage error score did the ShrinkMatch model get on the CIFAR-10, 40 Labels dataset
| 5.08 |
Squirrel (60%/20%/20% random splits) | HH-GraphSAGE | Half-Hop: A graph upsampling approach for slowing down message passing | 2023-08-17T00:00:00 | https://arxiv.org/abs/2308.09198v1 | [
"https://github.com/nerdslab/halfhop"
] | In the paper 'Half-Hop: A graph upsampling approach for slowing down message passing', what 1:1 Accuracy score did the HH-GraphSAGE model get on the Squirrel (60%/20%/20% random splits) dataset
| 45.25 ± 1.52 |
MM-Vet | InternVL2.5-4B | Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling | 2024-12-06T00:00:00 | https://arxiv.org/abs/2412.05271v1 | [
"https://github.com/opengvlab/internvl"
] | In the paper 'Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling', what GPT-4 score did the InternVL2.5-4B model get on the MM-Vet dataset
| 60.6 |
PACS | PromptStyler (CLIP, ResNet-50) | PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization | 2023-07-27T00:00:00 | https://arxiv.org/abs/2307.15199v2 | [
"https://github.com/zhanghr2001/promptta"
] | In the paper 'PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization', what Average Accuracy score did the PromptStyler (CLIP, ResNet-50) model get on the PACS dataset
| 93.2 |
RefCOCO+ test B | EVF-SAM | EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model | 2024-06-28T00:00:00 | https://arxiv.org/abs/2406.20076v4 | [
"https://github.com/hustvl/evf-sam"
] | In the paper 'EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model', what Overall IoU score did the EVF-SAM model get on the RefCOCO+ test B dataset
| 70.1 |
BEA-2019 (test) | RedPenNet | RedPenNet for Grammatical Error Correction: Outputs to Tokens, Attentions to Spans | 2023-09-19T00:00:00 | https://arxiv.org/abs/2309.10898v1 | [
"https://github.com/webspellchecker/unlp-2023-shared-task"
] | In the paper 'RedPenNet for Grammatical Error Correction: Outputs to Tokens, Attentions to Spans', what F0.5 score did the RedPenNet model get on the BEA-2019 (test) dataset
| 77.60 |
Electricity (336) | DiPE-Linear | Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting | 2024-11-26T00:00:00 | https://arxiv.org/abs/2411.17257v1 | [
"https://github.com/wintertee/dipe-linear"
] | In the paper 'Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting', what MSE score did the DiPE-Linear model get on the Electricity (336) dataset
| 0.162 |
PCQM4M-LSC | GPTrans-L | Graph Propagation Transformer for Graph Representation Learning | 2023-05-19T00:00:00 | https://arxiv.org/abs/2305.11424v3 | [
"https://github.com/czczup/gptrans"
] | In the paper 'Graph Propagation Transformer for Graph Representation Learning', what Validation MAE score did the GPTrans-L model get on the PCQM4M-LSC dataset
| 0.1151 |
MLO-Cn2 | Mean Window Forecast | Effective Benchmarks for Optical Turbulence Modeling | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03573v1 | [
"https://github.com/cdjellen/otbench"
] | In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the Mean Window Forecast model get on the MLO-Cn2 dataset
| 0.481 |
SVT | CLIP4STR-B | CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model | 2023-05-23T00:00:00 | https://arxiv.org/abs/2305.14014v3 | [
"https://github.com/VamosC/CLIP4STR"
] | In the paper 'CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model', what Accuracy score did the CLIP4STR-B model get on the SVT dataset
| 98.3 |
Fashion-MNIST | ResNet-18 + Vision Eagle Attention | Vision Eagle Attention: a new lens for advancing image classification | 2024-11-15T00:00:00 | https://arxiv.org/abs/2411.10564v2 | [
"https://github.com/MahmudulHasan11085/Vision-Eagle-Attention"
] | In the paper 'Vision Eagle Attention: a new lens for advancing image classification', what Percentage error score did the ResNet-18 + Vision Eagle Attention model get on the Fashion-MNIST dataset
| 6.70 |
CHILI-3K | GAT | CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning | 2024-02-20T00:00:00 | https://arxiv.org/abs/2402.13221v2 | [
"https://github.com/UlrikFriisJensen/CHILI"
] | In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what MSE score did the GAT model get on the CHILI-3K dataset
| 0.342 +/- 0.117 |
Texas | CATv3-sup | CAT: A Causally Graph Attention Network for Trimming Heterophilic Graph | 2023-12-14T00:00:00 | https://arxiv.org/abs/2312.08672v3 | [
"https://github.com/geox-lab/cat"
] | In the paper 'CAT: A Causally Graph Attention Network for Trimming Heterophilic Graph', what Accuracy score did the CATv3-sup model get on the Texas dataset
| 83.0±2.5 |
STS16 | PromptEOL+CSE+LLaMA-30B | Scaling Sentence Embeddings with Large Language Models | 2023-07-31T00:00:00 | https://arxiv.org/abs/2307.16645v1 | [
"https://github.com/kongds/scaling_sentemb"
] | In the paper 'Scaling Sentence Embeddings with Large Language Models', what Spearman Correlation score did the PromptEOL+CSE+LLaMA-30B model get on the STS16 dataset
| 0.8627 |
3DPW | TokenHMR (SD + ITW + BL) | TokenHMR: Advancing Human Mesh Recovery with a Tokenized Pose Representation | 2024-04-25T00:00:00 | https://arxiv.org/abs/2404.16752v1 | [
"https://github.com/saidwivedi/TokenHMR"
] | In the paper 'TokenHMR: Advancing Human Mesh Recovery with a Tokenized Pose Representation', what PA-MPJPE score did the TokenHMR (SD + ITW + BL) model get on the 3DPW dataset
| 44.3 |
MVBench | Tarsier (34B) | Tarsier: Recipes for Training and Evaluating Large Video Description Models | 2024-06-30T00:00:00 | https://arxiv.org/abs/2407.00634v2 | [
"https://github.com/bytedance/tarsier"
] | In the paper 'Tarsier: Recipes for Training and Evaluating Large Video Description Models', what Avg. score did the Tarsier (34B) model get on the MVBench dataset
| 67.6 |
PASCAL Context-59 | TTD (TCL) | TTD: Text-Tag Self-Distillation Enhancing Image-Text Alignment in CLIP to Alleviate Single Tag Bias | 2024-03-30T00:00:00 | https://arxiv.org/abs/2404.00384v2 | [
"https://github.com/shjo-april/TTD"
] | In the paper 'TTD: Text-Tag Self-Distillation Enhancing Image-Text Alignment in CLIP to Alleviate Single Tag Bias', what mIoU score did the TTD (TCL) model get on the PASCAL Context-59 dataset
| 37.4 |
ImageNet 256x256 | TiTok-S-128 | An Image is Worth 32 Tokens for Reconstruction and Generation | 2024-06-11T00:00:00 | https://arxiv.org/abs/2406.07550v1 | [
"https://github.com/bytedance/1d-tokenizer"
] | In the paper 'An Image is Worth 32 Tokens for Reconstruction and Generation', what FID score did the TiTok-S-128 model get on the ImageNet 256x256 dataset
| 1.97 |
GRAZPEDWRI-DX | YOLOv6m | Enhancing Wrist Fracture Detection with YOLO | 2024-07-17T00:00:00 | https://arxiv.org/abs/2407.12597v2 | [
"https://github.com/ammarlodhi255/pediatric_wrist_abnormality_detection-end-to-end-implementation"
] | In the paper 'Enhancing Wrist Fracture Detection with YOLO', what mAP score did the YOLOv6m model get on the GRAZPEDWRI-DX dataset
| 64.00 |
TpuGraphs Layout mean | TGraph | Graph neural networks with configuration cross-attention for tensor compilers | 2024-05-26T00:00:00 | https://arxiv.org/abs/2405.16623v2 | [
"https://github.com/thanhhau097/google_fast_or_slow"
] | In the paper 'Graph neural networks with configuration cross-attention for tensor compilers', what Kendall's Tau score did the TGraph model get on the TpuGraphs Layout mean dataset
| 0.674 |
PubMedQA | Med-PaLM 2 (CoT + SC) | Towards Expert-Level Medical Question Answering with Large Language Models | 2023-05-16T00:00:00 | https://arxiv.org/abs/2305.09617v1 | [
"https://github.com/m42-health/med42"
] | In the paper 'Towards Expert-Level Medical Question Answering with Large Language Models', what Accuracy score did the Med-PaLM 2 (CoT + SC) model get on the PubMedQA dataset
| 74.0 |
NExT-QA (Efficient) | ViLA (3B, 4 frames) | ViLA: Efficient Video-Language Alignment for Video Question Answering | 2023-12-13T00:00:00 | https://arxiv.org/abs/2312.08367v4 | [
"https://github.com/xijun-cs/vila"
] | In the paper 'ViLA: Efficient Video-Language Alignment for Video Question Answering', what 1:1 Accuracy score did the ViLA (3B, 4 frames) model get on the NExT-QA (Efficient) dataset
| 74.4 |
ASTE | MvP (multi-task) | MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction | 2023-05-22T00:00:00 | https://arxiv.org/abs/2305.12627v1 | [
"https://github.com/ZubinGou/multi-view-prompting"
] | In the paper 'MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction', what F1 (L14) score did the MvP (multi-task) model get on the ASTE dataset
| 65.30 |
Action-Camera Parking | ResNet50 | Revising deep learning methods in parking lot occupancy detection | 2023-06-07T00:00:00 | https://arxiv.org/abs/2306.04288v3 | [
"https://github.com/eighonet/parking-research"
] | In the paper 'Revising deep learning methods in parking lot occupancy detection', what F1-score did the ResNet50 model get on the Action-Camera Parking dataset
| 0.8377 |
VideoInstruct | VideoGPT+ | VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding | 2024-06-13T00:00:00 | https://arxiv.org/abs/2406.09418v1 | [
"https://github.com/mbzuai-oryx/videogpt-plus"
] | In the paper 'VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding', what mean score did the VideoGPT+ model get on the VideoInstruct dataset
| 2.47 |
MATH | Gemini Pro (4-shot) | Gemini: A Family of Highly Capable Multimodal Models | 2023-12-19T00:00:00 | https://arxiv.org/abs/2312.11805v4 | [
"https://github.com/valdecy/pybibx"
] | In the paper 'Gemini: A Family of Highly Capable Multimodal Models', what Accuracy score did the Gemini Pro (4-shot) model get on the MATH dataset
| 32.6 |
ETTh1 (720) Univariate | AutoCon | Self-Supervised Contrastive Learning for Long-term Forecasting | 2024-02-03T00:00:00 | https://arxiv.org/abs/2402.02023v2 | [
"https://github.com/junwoopark92/self-supervised-contrastive-forecsating"
] | In the paper 'Self-Supervised Contrastive Learning for Long-term Forecasting', what MSE score did the AutoCon model get on the ETTh1 (720) Univariate dataset
| 0.078 |
GoPro | Instruct-IPT | Instruct-IPT: All-in-One Image Processing Transformer via Weight Modulation | 2024-06-30T00:00:00 | https://arxiv.org/abs/2407.00676v1 | [
"https://github.com/huawei-noah/Pretrained-IPT"
] | In the paper 'Instruct-IPT: All-in-One Image Processing Transformer via Weight Modulation', what PSNR score did the Instruct-IPT model get on the GoPro dataset
| 33.86 |
ICDAR2015 | CLIP4STR-L | CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model | 2023-05-23T00:00:00 | https://arxiv.org/abs/2305.14014v3 | [
"https://github.com/VamosC/CLIP4STR"
] | In the paper 'CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model', what Accuracy score did the CLIP4STR-L model get on the ICDAR2015 dataset
| 90.8 |
MSU SR-QA Dataset | TOPIQ + Res50 (IAA) | TOPIQ: A Top-down Approach from Semantics to Distortions for Image Quality Assessment | 2023-08-06T00:00:00 | https://arxiv.org/abs/2308.03060v1 | [
"https://github.com/chaofengc/iqa-pytorch"
] | In the paper 'TOPIQ: A Top-down Approach from Semantics to Distortions for Image Quality Assessment', what SROCC score did the TOPIQ + Res50 (IAA) model get on the MSU SR-QA dataset
| 0.36204 |
ImageNet-R | HPT | Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06323v1 | [
"https://github.com/vill-lab/2024-aaai-hpt"
] | In the paper 'Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models', what Top-1 accuracy % score did the HPT model get on the ImageNet-R dataset
| 77.38 |
COCO-Stuff-27 | DiffSeg (512) | Diffuse, Attend, and Segment: Unsupervised Zero-Shot Segmentation using Stable Diffusion | 2023-08-23T00:00:00 | https://arxiv.org/abs/2308.12469v3 | [
"https://github.com/google/diffseg"
] | In the paper 'Diffuse, Attend, and Segment: Unsupervised Zero-Shot Segmentation using Stable Diffusion', what mIoU score did the DiffSeg (512) model get on the COCO-Stuff-27 dataset
| 43.6 |
RES-Q | QurrentOS-coder + Qwen-72B-Instruct | RES-Q: Evaluating Code-Editing Large Language Model Systems at the Repository Scale | 2024-06-24T00:00:00 | https://arxiv.org/abs/2406.16801v2 | [
"https://github.com/qurrent-ai/res-q"
] | In the paper 'RES-Q: Evaluating Code-Editing Large Language Model Systems at the Repository Scale', what pass@1 score did the QurrentOS-coder + Qwen-72B-Instruct model get on the RES-Q dataset
| 18.0 |
MM-Vet | InternVL2.5-8B | Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling | 2024-12-06T00:00:00 | https://arxiv.org/abs/2412.05271v1 | [
"https://github.com/opengvlab/internvl"
] | In the paper 'Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling', what GPT-4 score did the InternVL2.5-8B model get on the MM-Vet dataset
| 62.8 |
EgoExoLearn | RAAN+TL+Gaze | EgoExoLearn: A Dataset for Bridging Asynchronous Ego- and Exo-centric View of Procedural Activities in Real World | 2024-03-24T00:00:00 | https://arxiv.org/abs/2403.16182v2 | [
"https://github.com/opengvlab/egoexolearn"
] | In the paper 'EgoExoLearn: A Dataset for Bridging Asynchronous Ego- and Exo-centric View of Procedural Activities in Real World', what Accuracy score did the RAAN+TL+Gaze model get on the EgoExoLearn dataset
| 81.27 |
nuScenes | VAD-Base [jiang2023vad] | Rethinking the Open-Loop Evaluation of End-to-End Autonomous Driving in nuScenes | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10430v2 | [
"https://github.com/E2E-AD/AD-MLP"
] | In the paper 'Rethinking the Open-Loop Evaluation of End-to-End Autonomous Driving in nuScenes', what Collision-3s score did the VAD-Base [jiang2023vad] model get on the nuScenes dataset
| 0.24 |
YouTube-VIS validation | DVIS++(VIT-L, Offline) | DVIS++: Improved Decoupled Framework for Universal Video Segmentation | 2023-12-20T00:00:00 | https://arxiv.org/abs/2312.13305v1 | [
"https://github.com/zhang-tao-whu/DVIS_Plus"
] | In the paper 'DVIS++: Improved Decoupled Framework for Universal Video Segmentation', what mask AP score did the DVIS++(VIT-L, Offline) model get on the YouTube-VIS validation dataset
| 68.3 |
Cora with Public Split: fixed 20 nodes per class | GCN-TV | Re-Think and Re-Design Graph Neural Networks in Spaces of Continuous Graph Diffusion Functionals | 2023-07-01T00:00:00 | https://arxiv.org/abs/2307.00222v1 | [
"https://github.com/Dandy5721/GNN-PDE-COV"
] | In the paper 'Re-Think and Re-Design Graph Neural Networks in Spaces of Continuous Graph Diffusion Functionals', what Accuracy score did the GCN-TV model get on the Cora with Public Split: fixed 20 nodes per class dataset
| 86.3% |
WildDESED | CRNN (with BEATs) | Leveraging LLM and Text-Queried Separation for Noise-Robust Sound Event Detection | 2024-11-02T00:00:00 | https://arxiv.org/abs/2411.01174v1 | [
"https://github.com/apple-yinhan/noise-robust-sed"
] | In the paper 'Leveraging LLM and Text-Queried Separation for Noise-Robust Sound Event Detection', what PSDS1 (-5dB) score did the CRNN (with BEATs) model get on the WildDESED dataset
| 0.065 |
Electricity (336) | MoLE-DLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06786v3 | [
"https://github.com/rogerni/mole"
] | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the Electricity (336) dataset
| 0.162 |
LAGENDA | MiVOLO-V2 | Beyond Specialization: Assessing the Capabilities of MLLMs in Age and Gender Estimation | 2024-03-04T00:00:00 | https://arxiv.org/abs/2403.02302v3 | [
"https://github.com/wildchlamydia/mivolo"
] | In the paper 'Beyond Specialization: Assessing the Capabilities of MLLMs in Age and Gender Estimation', what MAE score did the MiVOLO-V2 model get on the LAGENDA dataset
| 3.65 |
BEOID | HR-Pro | HR-Pro: Point-supervised Temporal Action Localization via Hierarchical Reliability Propagation | 2023-08-24T00:00:00 | https://arxiv.org/abs/2308.12608v3 | [
"https://github.com/pipixin321/hr-pro"
] | In the paper 'HR-Pro: Point-supervised Temporal Action Localization via Hierarchical Reliability Propagation', what mAP@0.1:0.7 score did the HR-Pro model get on the BEOID dataset
| 59.4 |
ImageNet | TinySaver(ConvNeXtV2_h, 0.01 Acc drop) | Tiny Models are the Computational Saver for Large Models | 2024-03-26T00:00:00 | https://arxiv.org/abs/2403.17726v3 | [
"https://github.com/QingyuanWang/tinysaver"
] | In the paper 'Tiny Models are the Computational Saver for Large Models', what Top 1 Accuracy score did the TinySaver(ConvNeXtV2_h, 0.01 Acc drop) model get on the ImageNet dataset
| 86.24 |
ZINC | CIN++ | CIN++: Enhancing Topological Message Passing | 2023-06-06T00:00:00 | https://arxiv.org/abs/2306.03561v1 | [
"https://github.com/twitter-research/cwn"
] | In the paper 'CIN++: Enhancing Topological Message Passing', what MAE score did the CIN++ model get on the ZINC dataset
| 0.074 |
Comic2k | MILA | MILA: Memory-Based Instance-Level Adaptation for Cross-Domain Object Detection | 2023-11-20T00:00:00 | https://arxiv.org/abs/2309.01086v1 | [
"https://github.com/hitachi-rd-cv/MILA"
] | In the paper 'MILA: Memory-Based Instance-Level Adaptation for Cross-Domain Object Detection', what mAP score did the MILA model get on the Comic2k dataset
| 44.6 |
nuScenes | PPT+SparseUNet | Towards Large-scale 3D Representation Learning with Multi-dataset Point Prompt Training | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09718v2 | [
"https://github.com/Pointcept/Pointcept"
] | In the paper 'Towards Large-scale 3D Representation Learning with Multi-dataset Point Prompt Training', what val mIoU score did the PPT+SparseUNet model get on the nuScenes dataset
| 0.786 |
ICDAR2013 | CLIP4STR-L* | An Empirical Study of Scaling Law for OCR | 2023-12-29T00:00:00 | https://arxiv.org/abs/2401.00028v3 | [
"https://github.com/large-ocr-model/large-ocr-model.github.io"
] | In the paper 'An Empirical Study of Scaling Law for OCR', what Accuracy score did the CLIP4STR-L* model get on the ICDAR2013 dataset
| 99.42 |
Sintel-clean | Ef-RAFT | Rethinking RAFT for Efficient Optical Flow | 2024-01-01T00:00:00 | https://arxiv.org/abs/2401.00833v1 | [
"https://github.com/n3slami/Ef-RAFT"
] | In the paper 'Rethinking RAFT for Efficient Optical Flow', what Average End-Point Error score did the Ef-RAFT model get on the Sintel-clean dataset
| 1.27 |
The Pile | Gemma-2 27B | Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs | 2024-10-10T00:00:00 | https://arxiv.org/abs/2410.08020v2 | [
"https://github.com/jonhue/activeft"
] | In the paper 'Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs', what Bits per byte score did the Gemma-2 27B model get on The Pile dataset
| 0.629 |
HumanML3D | MoMask | MoMask: Generative Masked Modeling of 3D Human Motions | 2023-11-29T00:00:00 | https://arxiv.org/abs/2312.00063v1 | [
"https://github.com/EricGuo5513/momask-codes"
] | In the paper 'MoMask: Generative Masked Modeling of 3D Human Motions', what FID score did the MoMask model get on the HumanML3D dataset
| 0.045 |
Weather2K1786 (720) | MoLE-DLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06786v3 | [
"https://github.com/rogerni/mole"
] | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the Weather2K1786 (720) dataset
| 0.66 |
The Pile | Llama-3.2-Instruct 1B | Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs | 2024-10-10T00:00:00 | https://arxiv.org/abs/2410.08020v2 | [
"https://github.com/jonhue/activeft"
] | In the paper 'Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs', what Bits per byte score did the Llama-3.2-Instruct 1B model get on The Pile dataset
| 0.807 |
Squirrel (60%/20%/20% random splits) | HH-GCN | Half-Hop: A graph upsampling approach for slowing down message passing | 2023-08-17T00:00:00 | https://arxiv.org/abs/2308.09198v1 | [
"https://github.com/nerdslab/halfhop"
] | In the paper 'Half-Hop: A graph upsampling approach for slowing down message passing', what 1:1 Accuracy score did the HH-GCN model get on the Squirrel (60%/20%/20% random splits) dataset
| 47.19 ± 1.21 |
3DPW | ARTS (Resnet50 L=16) | ARTS: Semi-Analytical Regressor using Disentangled Skeletal Representations for Human Mesh Recovery from Videos | 2024-10-21T00:00:00 | https://arxiv.org/abs/2410.15582v1 | [
"https://github.com/tangtao-pku/arts"
] | In the paper 'ARTS: Semi-Analytical Regressor using Disentangled Skeletal Representations for Human Mesh Recovery from Videos', what PA-MPJPE score did the ARTS (Resnet50 L=16) model get on the 3DPW dataset
| 46.5 |
THUMOS' 14 | MAT (ours) | Memory-and-Anticipation Transformer for Online Action Understanding | 2023-08-15T00:00:00 | https://arxiv.org/abs/2308.07893v1 | [
"https://github.com/echo0125/memory-and-anticipation-transformer"
] | In the paper 'Memory-and-Anticipation Transformer for Online Action Understanding', what mAP score did the MAT (ours) model get on the THUMOS' 14 dataset
| 58.2 |
BSD100 - 3x upscaling | HMA† | HMANet: Hybrid Multi-Axis Aggregation Network for Image Super-Resolution | 2024-05-08T00:00:00 | https://arxiv.org/abs/2405.05001v1 | [
"https://github.com/korouuuuu/hma"
] | In the paper 'HMANet: Hybrid Multi-Axis Aggregation Network for Image Super-Resolution', what PSNR score did the HMA† model get on the BSD100 - 3x upscaling dataset
| 29.66 |
VideoInstruct | BT-Adapter (zero-shot) | BT-Adapter: Video Conversation is Feasible Without Video Instruction Tuning | 2023-09-27T00:00:00 | https://arxiv.org/abs/2309.15785v2 | [
"https://github.com/farewellthree/BT-Adapter"
] | In the paper 'BT-Adapter: Video Conversation is Feasible Without Video Instruction Tuning', what Correctness of Information score did the BT-Adapter (zero-shot) model get on the VideoInstruct dataset
| 2.16 |
ImageNet 512x512 | GIVT-Causal-L+A | GIVT: Generative Infinite-Vocabulary Transformers | 2023-12-04T00:00:00 | https://arxiv.org/abs/2312.02116v4 | [
"https://github.com/google-research/big_vision"
] | In the paper 'GIVT: Generative Infinite-Vocabulary Transformers', what FID score did the GIVT-Causal-L+A model get on the ImageNet 512x512 dataset
| 2.92 |
Vulnerability Java Dataset | ContraBERT | Finetuning Large Language Models for Vulnerability Detection | 2024-01-30T00:00:00 | https://arxiv.org/abs/2401.17010v5 | [
"https://github.com/rmusab/vul-llm-finetune"
] | In the paper 'Finetuning Large Language Models for Vulnerability Detection', what AUC score did the ContraBERT model get on the Vulnerability Java Dataset dataset
| 0.85 |
LAVIB | RIFE | LAVIB: A Large-scale Video Interpolation Benchmark | 2024-06-14T00:00:00 | https://arxiv.org/abs/2406.09754v2 | [
"https://github.com/alexandrosstergiou/lavib"
] | In the paper 'LAVIB: A Large-scale Video Interpolation Benchmark', what PSNR score did the RIFE model get on the LAVIB dataset
| 27.88 |
AudioCaps | Make-An-Audio 2 | Make-An-Audio 2: Temporal-Enhanced Text-to-Audio Generation | 2023-05-29T00:00:00 | https://arxiv.org/abs/2305.18474v1 | [
"https://github.com/bytedance/make-an-audio-2"
] | In the paper 'Make-An-Audio 2: Temporal-Enhanced Text-to-Audio Generation', what FAD score did the Make-An-Audio 2 model get on the AudioCaps dataset
| 1.80 |
ScreenSpot | SeeClick | SeeClick: Harnessing GUI Grounding for Advanced Visual GUI Agents | 2024-01-17T00:00:00 | https://arxiv.org/abs/2401.10935v2 | [
"https://github.com/njucckevin/seeclick"
] | In the paper 'SeeClick: Harnessing GUI Grounding for Advanced Visual GUI Agents', what Accuracy (%) score did the SeeClick model get on the ScreenSpot dataset
| 53.4 |
STL-10 | RDGAN | A High-Quality Robust Diffusion Framework for Corrupted Dataset | 2023-11-28T00:00:00 | https://arxiv.org/abs/2311.17101v2 | [
"https://github.com/VinAIResearch/RDUOT"
] | In the paper 'A High-Quality Robust Diffusion Framework for Corrupted Dataset', what FID score did the RDGAN model get on the STL-10 dataset
| 13.07 |
THUMOS 2014 | P-MIL | Proposal-Based Multiple Instance Learning for Weakly-Supervised Temporal Action Localization | 2023-05-29T00:00:00 | https://arxiv.org/abs/2305.17861v1 | [
"https://github.com/RenHuan1999/CVPR2023_P-MIL"
] | In the paper 'Proposal-Based Multiple Instance Learning for Weakly-Supervised Temporal Action Localization', what mAP@0.5 score did the P-MIL model get on the THUMOS 2014 dataset
| 40.0 |
ColonINST-v1 (Seen) | Bunny-v1.0-3B (w/ LoRA, w/ extra data) | Efficient Multimodal Learning from Data-centric Perspective | 2024-02-18T00:00:00 | https://arxiv.org/abs/2402.11530v3 | [
"https://github.com/baai-dcai/bunny"
] | In the paper 'Efficient Multimodal Learning from Data-centric Perspective', what Accuracy score did the Bunny-v1.0-3B (w/ LoRA, w/ extra data) model get on the ColonINST-v1 (Seen) dataset
| 96.02 |
nuScenes | Offline Tracking with Object Permanence | Offline Tracking with Object Permanence | 2023-10-02T00:00:00 | https://arxiv.org/abs/2310.01288v4 | [
"https://github.com/tudelft-iv-students/offline-tracking-with-object-permanence"
] | In the paper 'Offline Tracking with Object Permanence', what AMOTA score did the Offline Tracking with Object Permanence model get on the nuScenes dataset
| 0.671 |
ImageNet | EViT (delete) | Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09372v3 | [
"https://github.com/tobna/whattransformertofavor"
] | In the paper 'Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers', what Top 1 Accuracy score did the EViT (delete) model get on the ImageNet dataset
| 82.29% |
COCO 2017 | DeBiFormer-S (IN1k pretrain, MaskRCNN 12ep) | DeBiFormer: Vision Transformer with Deformable Agent Bi-level Routing Attention | 2024-10-11T00:00:00 | https://arxiv.org/abs/2410.08582v1 | [
"https://github.com/maclong01/DeBiFormer"
] | In the paper 'DeBiFormer: Vision Transformer with Deformable Agent Bi-level Routing Attention', what mAP score did the DeBiFormer-S (IN1k pretrain, MaskRCNN 12ep) model get on the COCO 2017 dataset
| 47.5 |
Peptides-func | GRED+LapPE | Recurrent Distance Filtering for Graph Representation Learning | 2023-12-03T00:00:00 | https://arxiv.org/abs/2312.01538v3 | [
"https://github.com/skeletondyh/gred"
] | In the paper 'Recurrent Distance Filtering for Graph Representation Learning', what AP score did the GRED+LapPE model get on the Peptides-func dataset
| 0.7133±0.0011 |
ETTh1 (336) Multivariate | SOFTS | SOFTS: Efficient Multivariate Time Series Forecasting with Series-Core Fusion | 2024-04-22T00:00:00 | https://arxiv.org/abs/2404.14197v3 | [
"https://github.com/secilia-cxy/softs"
] | In the paper 'SOFTS: Efficient Multivariate Time Series Forecasting with Series-Core Fusion', what MSE score did the SOFTS model get on the ETTh1 (336) Multivariate dataset
| 0.480 |
OVIS validation | GLEE-Pro | General Object Foundation Model for Images and Videos at Scale | 2023-12-14T00:00:00 | https://arxiv.org/abs/2312.09158v1 | [
"https://github.com/FoundationVision/GLEE"
] | In the paper 'General Object Foundation Model for Images and Videos at Scale', what mask AP score did the GLEE-Pro model get on the OVIS validation dataset
| 50.4 |
PIPA | SocialGPT | SocialGPT: Prompting LLMs for Social Relation Reasoning via Greedy Segment Optimization | 2024-10-28T00:00:00 | https://arxiv.org/abs/2410.21411v1 | [
"https://github.com/mengzibin/socialgpt"
] | In the paper 'SocialGPT: Prompting LLMs for Social Relation Reasoning via Greedy Segment Optimization', what Accuracy score did the SocialGPT model get on the PIPA dataset
| 66.7 |
ZINC-full | TIGT | Topology-Informed Graph Transformer | 2024-02-03T00:00:00 | https://arxiv.org/abs/2402.02005v1 | [
"https://github.com/leemingo/tigt"
] | In the paper 'Topology-Informed Graph Transformer', what MAE score did the TIGT model get on the ZINC-full dataset
| 0.014 |
KIT Motion-Language | MMM (predict length) | MMM: Generative Masked Motion Model | 2023-12-06T00:00:00 | https://arxiv.org/abs/2312.03596v2 | [
"https://github.com/exitudio/MMM"
] | In the paper 'MMM: Generative Masked Motion Model', what FID score did the MMM (predict length) model get on the KIT Motion-Language dataset
| 0.429 |
MSRVTT-MC | Norton | Multi-granularity Correspondence Learning from Long-term Noisy Videos | 2024-01-30T00:00:00 | https://arxiv.org/abs/2401.16702v1 | [
"https://github.com/XLearning-SCU/2024-ICLR-Norton"
] | In the paper 'Multi-granularity Correspondence Learning from Long-term Noisy Videos', what Accuracy score did the Norton model get on the MSRVTT-MC dataset
| 92.7 |
OoDIS | Mask2Anomaly | Unmasking Anomalies in Road-Scene Segmentation | 2023-07-25T00:00:00 | https://arxiv.org/abs/2307.13316v1 | [
"https://github.com/shyam671/mask2anomaly-unmasking-anomalies-in-road-scene-segmentation"
] | In the paper 'Unmasking Anomalies in Road-Scene Segmentation', what AP score did the Mask2Anomaly model get on the OoDIS dataset
| 13.73 |
Vinoground | LLaVA-OneVision-Qwen2-7B | LLaVA-OneVision: Easy Visual Task Transfer | 2024-08-06T00:00:00 | https://arxiv.org/abs/2408.03326v3 | [
"https://github.com/evolvinglmms-lab/lmms-eval"
] | In the paper 'LLaVA-OneVision: Easy Visual Task Transfer', what Text Score score did the LLaVA-OneVision-Qwen2-7B model get on the Vinoground dataset
| 41.6 |
WDC Products | gpt-4o-2024-08-06_fine_tuned_wdc_small | Fine-tuning Large Language Models for Entity Matching | 2024-09-12T00:00:00 | https://arxiv.org/abs/2409.08185v1 | [
"https://github.com/wbsg-uni-mannheim/tailormatch"
] | In the paper 'Fine-tuning Large Language Models for Entity Matching', what F1 (%) score did the gpt-4o-2024-08-06_fine_tuned_wdc_small model get on the WDC Products dataset
| 87.07 |
LIVE-VQC | ReLaX-VQA (trained on LSVQ only) | ReLaX-VQA: Residual Fragment and Layer Stack Extraction for Enhancing Video Quality Assessment | 2024-07-16T00:00:00 | https://arxiv.org/abs/2407.11496v1 | [
"https://github.com/xinyiw915/relax-vqa"
] | In the paper 'ReLaX-VQA: Residual Fragment and Layer Stack Extraction for Enhancing Video Quality Assessment', what PLCC score did the ReLaX-VQA (trained on LSVQ only) model get on the LIVE-VQC dataset
| 0.8242 |
ogbn-products | LD+GAMLP | Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias | 2023-09-26T00:00:00 | https://arxiv.org/abs/2309.14907v1 | [
"https://github.com/MIRALab-USTC/LD"
] | In the paper 'Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias', what Test Accuracy score did the LD+GAMLP model get on the ogbn-products dataset
| 0.8645 ± 0.0012 |
Lipophilicity | ChemBFN | A Bayesian Flow Network Framework for Chemistry Tasks | 2024-07-28T00:00:00 | https://arxiv.org/abs/2407.20294v1 | [
"https://github.com/Augus1999/bayesian-flow-network-for-chemistry"
] | In the paper 'A Bayesian Flow Network Framework for Chemistry Tasks', what RMSE score did the ChemBFN model get on the Lipophilicity dataset
| 0.746 |