| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
| Abt-Buy | gpt4-0613_zeroshot | Entity Matching using Large Language Models | 2023-10-17 | https://arxiv.org/abs/2310.11244v4 | https://github.com/wbsg-uni-mannheim/matchgpt | In the paper 'Entity Matching using Large Language Models', what F1 (%) score did the gpt4-0613_zeroshot model get on the Abt-Buy dataset? | 95.78 |
| PeMSD4 | PM-DMNet(R) | Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction | 2024-08-12 | https://arxiv.org/abs/2408.07100v1 | https://github.com/wengwenchao123/PM-DMNet | In the paper 'Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction', what 12 steps MAE score did the PM-DMNet(R) model get on the PeMSD4 dataset? | 18.37 |
| SAFIM | codegen-16B-multi | Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks | 2024-03-07 | https://arxiv.org/abs/2403.04814v3 | https://github.com/gonglinyuan/safim | In the paper 'Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks', what Algorithmic score did the codegen-16B-multi model get on the SAFIM dataset? | 25.94 |
| PASCAL Context-459 | SED | SED: A Simple Encoder-Decoder for Open-Vocabulary Semantic Segmentation | 2023-11-27 | https://arxiv.org/abs/2311.15537v2 | https://github.com/xb534/sed | In the paper 'SED: A Simple Encoder-Decoder for Open-Vocabulary Semantic Segmentation', what mIoU score did the SED model get on the PASCAL Context-459 dataset? | 22.6 |
| ScanNet | OneFormer3D | OneFormer3D: One Transformer for Unified Point Cloud Segmentation | 2023-11-24 | https://arxiv.org/abs/2311.14405v1 | https://github.com/oneformer3d/oneformer3d | In the paper 'OneFormer3D: One Transformer for Unified Point Cloud Segmentation', what PQ score did the OneFormer3D model get on the ScanNet dataset? | 71.2 |
| ogbl-biokg | ComplEx^2 | How to Turn Your Knowledge Graph Embeddings into Generative Models | 2023-05-25 | https://arxiv.org/abs/2305.15944v3 | https://github.com/april-tools/gekcs | In the paper 'How to Turn Your Knowledge Graph Embeddings into Generative Models', what Test MRR score did the ComplEx^2 model get on the ogbl-biokg dataset? | 0.8583 ± 0.0005 |
| VDD | Segformer-B0 | VDD: Varied Drone Dataset for Semantic Segmentation | 2023-05-23 | https://arxiv.org/abs/2305.13608v3 | https://github.com/RussRobin/VDD | In the paper 'VDD: Varied Drone Dataset for Semantic Segmentation', what mIoU score did the Segformer-B0 model get on the VDD dataset? | 75.37 |
| Actor | TE-GCNN | Transfer Entropy in Graph Convolutional Neural Networks | 2024-06-08 | https://arxiv.org/abs/2406.06632v1 | https://github.com/avmoldovan/Heterophily_and_oversmoothing-forked | In the paper 'Transfer Entropy in Graph Convolutional Neural Networks', what Accuracy score did the TE-GCNN model get on the Actor dataset? | 37.50±1.57 |
| Benchmarking Chinese Text Recognition: Datasets, Baselines, and an Empirical Study | DTrOCR 105M | DTrOCR: Decoder-only Transformer for Optical Character Recognition | 2023-08-30 | https://arxiv.org/abs/2308.15996v1 | https://github.com/arvindrajan92/DTrOCR | In the paper 'DTrOCR: Decoder-only Transformer for Optical Character Recognition', what Accuracy (%) score did the DTrOCR 105M model get on the Benchmarking Chinese Text Recognition: Datasets, Baselines, and an Empirical Study dataset? | 89.6 |
| COCO-20i (1-shot) | GF-SAM | Bridge the Points: Graph-based Few-shot Segment Anything Semantically | 2024-10-09 | https://arxiv.org/abs/2410.06964v2 | https://github.com/ANDYZAQ/GF-SAM | In the paper 'Bridge the Points: Graph-based Few-shot Segment Anything Semantically', what Mean IoU score did the GF-SAM model get on the COCO-20i (1-shot) dataset? | 58.7 |
| Atari 2600 Phoenix | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07 | https://arxiv.org/abs/2305.04180v3 | https://github.com/xinjinghao/color | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score did the ASL DDQN model get on the Atari 2600 Phoenix dataset? | 71752.6 |
| MOT20 | BoostTrack++ | BoostTrack++: using tracklet information to detect more objects in multiple object tracking | 2024-08-23 | https://arxiv.org/abs/2408.13003v1 | https://github.com/vukasin-stanojevic/BoostTrack | In the paper 'BoostTrack++: using tracklet information to detect more objects in multiple object tracking', what MOTA score did the BoostTrack++ model get on the MOT20 dataset? | 77.7 |
| AVeriTeC | CTU AIC | AIC CTU system at AVeriTeC: Re-framing automated fact-checking as a simple RAG task | 2024-10-15 | https://arxiv.org/abs/2410.11446v1 | https://github.com/aic-factcheck/aic_averitec | In the paper 'AIC CTU system at AVeriTeC: Re-framing automated fact-checking as a simple RAG task', what Question Only score did the CTU AIC model get on the AVeriTeC dataset? | 0.46 |
| VietMed | w2v2-Viet | VietMed: A Dataset and Benchmark for Automatic Speech Recognition of Vietnamese in the Medical Domain | 2024-04-08 | https://arxiv.org/abs/2404.05659v2 | https://github.com/leduckhai/multimed | In the paper 'VietMed: A Dataset and Benchmark for Automatic Speech Recognition of Vietnamese in the Medical Domain', what Dev WER score did the w2v2-Viet model get on the VietMed dataset? | 45.3 |
| ImageNet 256x256 | RAR-XXL, autoregressive | Randomized Autoregressive Visual Generation | 2024-11-01 | https://arxiv.org/abs/2411.00776v1 | https://github.com/bytedance/1d-tokenizer | In the paper 'Randomized Autoregressive Visual Generation', what FID score did the RAR-XXL, autoregressive model get on the ImageNet 256x256 dataset? | 1.48 |
| SportsMOT | MOTIP (Deformable DETR, with SportsMOT val) | Multiple Object Tracking as ID Prediction | 2024-03-25 | https://arxiv.org/abs/2403.16848v1 | https://github.com/MCG-NJU/MOTIP | In the paper 'Multiple Object Tracking as ID Prediction', what HOTA score did the MOTIP (Deformable DETR, with SportsMOT val) model get on the SportsMOT dataset? | 75.2 |
| DUT-OMRON | M3Net-S | M$^3$Net: Multilevel, Mixed and Multistage Attention Network for Salient Object Detection | 2023-09-15 | https://arxiv.org/abs/2309.08365v1 | https://github.com/I2-Multimedia-Lab/M3Net | In the paper 'M$^3$Net: Multilevel, Mixed and Multistage Attention Network for Salient Object Detection', what MAE score did the M3Net-S model get on the DUT-OMRON dataset? | 0.045 |
| Weather2K850 (336) | MoLE-DLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11 | https://arxiv.org/abs/2312.06786v3 | https://github.com/rogerni/mole | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the Weather2K850 (336) dataset? | 0.474 |
| SMAC corridor_2z_vs_24zg | QPLEX | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04 | https://arxiv.org/abs/2306.02430v1 | https://github.com/j3soon/dfac-extended | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Average Score did the QPLEX model get on the SMAC corridor_2z_vs_24zg dataset? | 6.44 |
| ARMBench | Deformable DETR | Robot Instance Segmentation with Few Annotations for Grasping | 2024-07-01 | https://arxiv.org/abs/2407.01302v1 | https://github.com/mkimhi/RISE | In the paper 'Robot Instance Segmentation with Few Annotations for Grasping', what AP50 score did the Deformable DETR model get on the ARMBench dataset? | 77.03 |
| AG-ReID | VDT | View-decoupled Transformer for Person Re-identification under Aerial-ground Camera Network | 2024-03-21 | https://arxiv.org/abs/2403.14513v1 | https://github.com/linlyac/vdt-agpreid | In the paper 'View-decoupled Transformer for Person Re-identification under Aerial-ground Camera Network', what Averaged rank-1 acc(%) score did the VDT model get on the AG-ReID dataset? | 82.91 |
| ChEBI-20 | MolReGPT (GPT-4-0314) | Empowering Molecule Discovery for Molecule-Caption Translation with Large Language Models: A ChatGPT Perspective | 2023-06-11 | https://arxiv.org/abs/2306.06615v2 | https://github.com/phenixace/molregpt | In the paper 'Empowering Molecule Discovery for Molecule-Caption Translation with Large Language Models: A ChatGPT Perspective', what BLEU-2 score did the MolReGPT (GPT-4-0314) model get on the ChEBI-20 dataset? | 60.7 |
| LaSOT | HIPTrack | HIPTrack: Visual Tracking with Historical Prompts | 2023-11-03 | https://arxiv.org/abs/2311.02072v2 | https://github.com/wenruicai/hiptrack | In the paper 'HIPTrack: Visual Tracking with Historical Prompts', what AUC score did the HIPTrack model get on the LaSOT dataset? | 72.7 |
| PASCAL-5i (1-Shot) | MIANet (ResNet-101) | MIANet: Aggregating Unbiased Instance and General Information for Few-Shot Semantic Segmentation | 2023-05-23 | https://arxiv.org/abs/2305.13864v1 | https://github.com/aldrich2y/mianet | In the paper 'MIANet: Aggregating Unbiased Instance and General Information for Few-Shot Semantic Segmentation', what Mean IoU score did the MIANet (ResNet-101) model get on the PASCAL-5i (1-Shot) dataset? | 67.63 |
| MSR-VTT-1kA | PAU | Prototype-based Aleatoric Uncertainty Quantification for Cross-modal Retrieval | 2023-09-29 | https://arxiv.org/abs/2309.17093v3 | https://github.com/leolee99/pau | In the paper 'Prototype-based Aleatoric Uncertainty Quantification for Cross-modal Retrieval', what text-to-video Mean Rank score did the PAU model get on the MSR-VTT-1kA dataset? | 14.0 |
| Clotho | LOAE | Enhancing Automated Audio Captioning via Large Language Models with Optimized Audio Encoding | 2024-06-19 | https://arxiv.org/abs/2406.13275v2 | https://github.com/frankenliu/LOAE | In the paper 'Enhancing Automated Audio Captioning via Large Language Models with Optimized Audio Encoding', what CIDEr score did the LOAE model get on the Clotho dataset? | 0.513 |
| VietMed | Hybrid 4-gram VietMed-Train + ExtraText | VietMed: A Dataset and Benchmark for Automatic Speech Recognition of Vietnamese in the Medical Domain | 2024-04-08 | https://arxiv.org/abs/2404.05659v2 | https://github.com/leduckhai/multimed | In the paper 'VietMed: A Dataset and Benchmark for Automatic Speech Recognition of Vietnamese in the Medical Domain', what PPL score did the Hybrid 4-gram VietMed-Train + ExtraText model get on the VietMed dataset? | 84 |
| Cityscapes to Foggy Cityscapes | SADA (ResNet50-FPN) | Align and Distill: Unifying and Improving Domain Adaptive Object Detection | 2024-03-18 | https://arxiv.org/abs/2403.12029v2 | https://github.com/justinkay/aldi | In the paper 'Align and Distill: Unifying and Improving Domain Adaptive Object Detection', what mAP@0.5 score did the SADA (ResNet50-FPN) model get on the Cityscapes to Foggy Cityscapes dataset? | 54.2 |
| DomainNet | GMDG (RegNetY-16GF) | Rethinking Multi-domain Generalization with A General Learning Objective | 2024-02-29 | https://arxiv.org/abs/2402.18853v1 | https://github.com/zhaorui-tan/GMDG_cvpr2024 | In the paper 'Rethinking Multi-domain Generalization with A General Learning Objective', what Average Accuracy score did the GMDG (RegNetY-16GF) model get on the DomainNet dataset? | 54.6 |
| NTU RGB+D 120 | π-ViT (RGB + Pose) | Just Add $π$! Pose Induced Video Transformers for Understanding Activities of Daily Living | 2023-11-30 | https://arxiv.org/abs/2311.18840v1 | https://github.com/dominickrei/pi-vit | In the paper 'Just Add $π$! Pose Induced Video Transformers for Understanding Activities of Daily Living', what Accuracy (Cross-Subject) score did the π-ViT (RGB + Pose) model get on the NTU RGB+D 120 dataset? | 95.1 |
| VisA | AnomalyDINO-S (2-shot) | AnomalyDINO: Boosting Patch-based Few-shot Anomaly Detection with DINOv2 | 2024-05-23 | https://arxiv.org/abs/2405.14529v2 | https://github.com/dammsi/AnomalyDINO | In the paper 'AnomalyDINO: Boosting Patch-based Few-shot Anomaly Detection with DINOv2', what Detection AUROC score did the AnomalyDINO-S (2-shot) model get on the VisA dataset? | 89.7 |
| Office-Home | SWG | Combining inherent knowledge of vision-language models with unsupervised domain adaptation through strong-weak guidance | 2023-12-07 | https://arxiv.org/abs/2312.04066v4 | https://github.com/ThomasWestfechtel/SWG | In the paper 'Combining inherent knowledge of vision-language models with unsupervised domain adaptation through strong-weak guidance', what Accuracy score did the SWG model get on the Office-Home dataset? | 92.3 |
| AudioSet | EquiAV | EquiAV: Leveraging Equivariance for Audio-Visual Contrastive Learning | 2024-03-14 | https://arxiv.org/abs/2403.09502v2 | https://github.com/jongsuk1/equiav | In the paper 'EquiAV: Leveraging Equivariance for Audio-Visual Contrastive Learning', what Test mAP score did the EquiAV model get on the AudioSet dataset? | 0.546 |
| ImageNet | SwinV2-Ti | Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers | 2023-08-18 | https://arxiv.org/abs/2308.09372v3 | https://github.com/tobna/whattransformertofavor | In the paper 'Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers', what Top 1 Accuracy score did the SwinV2-Ti model get on the ImageNet dataset? | 83.09% |
| OCHuman | MaskPose-b | Detection, Pose Estimation and Segmentation for Multiple Bodies: Closing the Virtuous Circle | 2024-12-02 | https://arxiv.org/abs/2412.01562v1 | https://github.com/MiraPurkrabek/BBoxMaskPose | In the paper 'Detection, Pose Estimation and Segmentation for Multiple Bodies: Closing the Virtuous Circle', what Test AP score did the MaskPose-b model get on the OCHuman dataset? | 45.0 |
| Weather2K1786 (720) | MoLE-RLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11 | https://arxiv.org/abs/2312.06786v3 | https://github.com/rogerni/mole | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-RLinear model get on the Weather2K1786 (720) dataset? | 0.628 |
| DexYCB | HOISDF | HOISDF: Constraining 3D Hand-Object Pose Estimation with Global Signed Distance Fields | 2024-02-26 | https://arxiv.org/abs/2402.17062v1 | https://github.com/amathislab/hoisdf | In the paper 'HOISDF: Constraining 3D Hand-Object Pose Estimation with Global Signed Distance Fields', what Average MPJPE (mm) score did the HOISDF model get on the DexYCB dataset? | 10.1 |
| GSM8K | MuggleMATH 13B | MuggleMath: Assessing the Impact of Query and Response Augmentation on Math Reasoning | 2023-10-09 | https://arxiv.org/abs/2310.05506v3 | https://github.com/ofa-sys/gsm8k-screl | In the paper 'MuggleMath: Assessing the Impact of Query and Response Augmentation on Math Reasoning', what Accuracy score did the MuggleMATH 13B model get on the GSM8K dataset? | 74 |
| SALMon | Spirit-LM (Expr.) | Spirit LM: Interleaved Spoken and Written Language Model | 2024-02-08 | https://arxiv.org/abs/2402.05755v2 | https://github.com/facebookresearch/spiritlm | In the paper 'Spirit LM: Interleaved Spoken and Written Language Model', what Speaker Consistency score did the Spirit-LM (Expr.) model get on the SALMon dataset? | 81.0 |
| fake | ResNet + OpenAI embedding | PyTorch Frame: A Modular Framework for Multi-Modal Tabular Learning | 2024-03-31 | https://arxiv.org/abs/2404.00776v1 | https://github.com/pyg-team/pytorch-frame | In the paper 'PyTorch Frame: A Modular Framework for Multi-Modal Tabular Learning', what AUROC score did the ResNet + OpenAI embedding model get on the fake dataset? | 0.923 |
| QVHighlights | UniVTG | UniVTG: Towards Unified Video-Language Temporal Grounding | 2023-07-31 | https://arxiv.org/abs/2307.16715v2 | https://github.com/showlab/univtg | In the paper 'UniVTG: Towards Unified Video-Language Temporal Grounding', what mAP score did the UniVTG model get on the QVHighlights dataset? | 38.20 |
| ColonINST-v1 (Unseen) | LLaVA-Med-v1.0 (w/o LoRA, w/ extra data) | LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day | 2023-06-01 | https://arxiv.org/abs/2306.00890v1 | https://github.com/microsoft/LLaVA-Med | In the paper 'LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day', what Accuracy score did the LLaVA-Med-v1.0 (w/o LoRA, w/ extra data) model get on the ColonINST-v1 (Unseen) dataset? | 75.25 |
| SFCHD | TOOD+SCALE | Large, Complex, and Realistic Safety Clothing and Helmet Detection: Dataset and Method | 2023-06-03 | https://arxiv.org/abs/2306.02098v2 | https://github.com/lijfrank-open/SFCHD-SCALE | In the paper 'Large, Complex, and Realistic Safety Clothing and Helmet Detection: Dataset and Method', what mAP@0.50 score did the TOOD+SCALE model get on the SFCHD dataset? | 79.3 |
| Composition-1K | DiffMatte | Diffusion for Natural Image Matting | 2023-12-10 | https://arxiv.org/abs/2312.05915v2 | https://github.com/yihanhu-2022/diffmatte | In the paper 'Diffusion for Natural Image Matting', what MSE score did the DiffMatte model get on the Composition-1K dataset? | 2.26 |
| xBD | MambaBDA-Base | ChangeMamba: Remote Sensing Change Detection With Spatiotemporal State Space Model | 2024-04-04 | https://arxiv.org/abs/2404.03425v6 | https://github.com/chenhongruixuan/mambacd | In the paper 'ChangeMamba: Remote Sensing Change Detection With Spatiotemporal State Space Model', what Weighted Average F1-score did the MambaBDA-Base model get on the xBD dataset? | 0.8141 |
| MPI-INF-3DHP | FinePOSE | FinePOSE: Fine-Grained Prompt-Driven 3D Human Pose Estimation via Diffusion Models | 2024-05-08 | https://arxiv.org/abs/2405.05216v1 | https://github.com/PKU-ICST-MIPL/FinePOSE_CVPR2024 | In the paper 'FinePOSE: Fine-Grained Prompt-Driven 3D Human Pose Estimation via Diffusion Models', what MPJPE score did the FinePOSE model get on the MPI-INF-3DHP dataset? | 26.2 |
| dacl10k v1 testdev | SegFormer mit-b1 | dacl10k: Benchmark for Semantic Bridge Damage Segmentation | 2023-09-01 | https://arxiv.org/abs/2309.00460v1 | https://github.com/phiyodr/dacl10k-toolkit | In the paper 'dacl10k: Benchmark for Semantic Bridge Damage Segmentation', what mIoU score did the SegFormer mit-b1 model get on the dacl10k v1 testdev dataset? | 0.40 |
| iSAID | AerialFormer-B | AerialFormer: Multi-resolution Transformer for Aerial Image Segmentation | 2023-06-12 | https://arxiv.org/abs/2306.06842v2 | https://github.com/UARK-AICV/AerialFormer | In the paper 'AerialFormer: Multi-resolution Transformer for Aerial Image Segmentation', what mIoU score did the AerialFormer-B model get on the iSAID dataset? | 69.3 |
| Math23K | ATHENA (roberta-large) | ATHENA: Mathematical Reasoning with Thought Expansion | 2023-11-02 | https://arxiv.org/abs/2311.01036v1 | https://github.com/the-jb/athena-math | In the paper 'ATHENA: Mathematical Reasoning with Thought Expansion', what Accuracy (training-test) score did the ATHENA (roberta-large) model get on the Math23K dataset? | 86.5 |
| CIFAR-FS 5-way (5-shot) | CAML [Laion-2b] | Context-Aware Meta-Learning | 2023-10-17 | https://arxiv.org/abs/2310.10971v2 | https://github.com/cfifty/CAML | In the paper 'Context-Aware Meta-Learning', what Accuracy score did the CAML [Laion-2b] model get on the CIFAR-FS 5-way (5-shot) dataset? | 93.5 |
| O-Haze | CasDyF-Net | CasDyF-Net: Image Dehazing via Cascaded Dynamic Filters | 2024-09-13 | https://arxiv.org/abs/2409.08510v1 | https://github.com/dauing/casdyf-net | In the paper 'CasDyF-Net: Image Dehazing via Cascaded Dynamic Filters', what PSNR score did the CasDyF-Net model get on the O-Haze dataset? | 25.44 |
| RAF-DB | FMAE | Representation Learning and Identity Adversarial Training for Facial Behavior Understanding | 2024-07-15 | https://arxiv.org/abs/2407.11243v1 | https://github.com/forever208/fmae-iat | In the paper 'Representation Learning and Identity Adversarial Training for Facial Behavior Understanding', what Overall Accuracy score did the FMAE model get on the RAF-DB dataset? | 93.09 |
| ETTh1 (336) Multivariate | TSMixer | TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting | 2023-06-14 | https://arxiv.org/abs/2306.09364v4 | https://github.com/ibm/tsfm | In the paper 'TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting', what MSE score did the TSMixer model get on the ETTh1 (336) Multivariate dataset? | 0.421 |
| BanglaBook | XGBoost (word 2-gram + word 3-gram) | BanglaBook: A Large-scale Bangla Dataset for Sentiment Analysis from Book Reviews | 2023-05-11 | https://arxiv.org/abs/2305.06595v3 | https://github.com/mohsinulkabir14/banglabook | In the paper 'BanglaBook: A Large-scale Bangla Dataset for Sentiment Analysis from Book Reviews', what Weighted Average F1-score did the XGBoost (word 2-gram + word 3-gram) model get on the BanglaBook dataset? | 0.8651 |
| MSU SR-QA Dataset | TOPIQ (IAA) | TOPIQ: A Top-down Approach from Semantics to Distortions for Image Quality Assessment | 2023-08-06 | https://arxiv.org/abs/2308.03060v1 | https://github.com/chaofengc/iqa-pytorch | In the paper 'TOPIQ: A Top-down Approach from Semantics to Distortions for Image Quality Assessment', what SROCC score did the TOPIQ (IAA) model get on the MSU SR-QA Dataset? | 0.51687 |
| PeMS04 | STAEformer | STAEformer: Spatio-Temporal Adaptive Embedding Makes Vanilla Transformer SOTA for Traffic Forecasting | 2023-08-21 | https://arxiv.org/abs/2308.10425v5 | https://github.com/xdzhelheim/staeformer | In the paper 'STAEformer: Spatio-Temporal Adaptive Embedding Makes Vanilla Transformer SOTA for Traffic Forecasting', what 12 Steps MAE score did the STAEformer model get on the PeMS04 dataset? | 18.22 |
| RLBench | 3D Diffuser Actor | 3D Diffuser Actor: Policy Diffusion with 3D Scene Representations | 2024-02-18 | https://arxiv.org/abs/2402.10885 | https://github.com/nickgkan/3d_diffuser_actor | In the paper '3D Diffuser Actor: Policy Diffusion with 3D Scene Representations', what Succ. Rate (18 tasks, 100 demo/task) score did the 3D Diffuser Actor model get on the RLBench dataset? | 81.3 |
| ImageNet 32x32 | i-DODE | Improved Techniques for Maximum Likelihood Estimation for Diffusion ODEs | 2023-05-06 | https://arxiv.org/abs/2305.03935v4 | https://github.com/thu-ml/i-dode | In the paper 'Improved Techniques for Maximum Likelihood Estimation for Diffusion ODEs', what bpd score did the i-DODE model get on the ImageNet 32x32 dataset? | 3.43 |
| YouTube-VIS validation | DVIS(Swin-L) | DVIS: Decoupled Video Instance Segmentation Framework | 2023-06-06 | https://arxiv.org/abs/2306.03413v3 | https://github.com/zhang-tao-whu/DVIS | In the paper 'DVIS: Decoupled Video Instance Segmentation Framework', what mask AP score did the DVIS(Swin-L) model get on the YouTube-VIS validation dataset? | 64.9 |
| CNRPark+EXT | CFEN | Revising deep learning methods in parking lot occupancy detection | 2023-06-07 | https://arxiv.org/abs/2306.04288v3 | https://github.com/eighonet/parking-research | In the paper 'Revising deep learning methods in parking lot occupancy detection', what F1-score did the CFEN model get on the CNRPark+EXT dataset? | 0.8482 |
| COCO test-dev | LeYOLO-Large | LeYOLO, New Scalable and Efficient CNN Architecture for Object Detection | 2024-06-20 | https://arxiv.org/abs/2406.14239v1 | https://github.com/LilianHollard/LeYOLO | In the paper 'LeYOLO, New Scalable and Efficient CNN Architecture for Object Detection', what box mAP score did the LeYOLO-Large model get on the COCO test-dev dataset? | 41 |
| MM-Vet | LLaVA-v1.5 (+MoCa) | Deciphering Cross-Modal Alignment in Large Vision-Language Models with Modality Integration Rate | 2024-10-09 | https://arxiv.org/abs/2410.07167v2 | https://github.com/shikiw/modality-integration-rate | In the paper 'Deciphering Cross-Modal Alignment in Large Vision-Language Models with Modality Integration Rate', what GPT-4 score did the LLaVA-v1.5 (+MoCa) model get on the MM-Vet dataset? | 32.2 |
| dacl10k v1 testfinal | FPN EfficientNet-B4 | dacl10k: Benchmark for Semantic Bridge Damage Segmentation | 2023-09-01 | https://arxiv.org/abs/2309.00460v1 | https://github.com/phiyodr/dacl10k-toolkit | In the paper 'dacl10k: Benchmark for Semantic Bridge Damage Segmentation', what mIoU score did the FPN EfficientNet-B4 model get on the dacl10k v1 testfinal dataset? | 42.4 |
| ETTh2 (192) Multivariate | DiPE-Linear | Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting | 2024-11-26 | https://arxiv.org/abs/2411.17257v1 | https://github.com/wintertee/dipe-linear | In the paper 'Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting', what MSE score did the DiPE-Linear model get on the ETTh2 (192) Multivariate dataset? | 0.325 |
| ETTh1 (336) Multivariate | SparseTSF | SparseTSF: Modeling Long-term Time Series Forecasting with 1k Parameters | 2024-05-02 | https://arxiv.org/abs/2405.00946v2 | https://github.com/lss-1138/SparseTSF | In the paper 'SparseTSF: Modeling Long-term Time Series Forecasting with 1k Parameters', what MSE score did the SparseTSF model get on the ETTh1 (336) Multivariate dataset? | 0.434 |
| Math23K | Exp-Tree | An Expression Tree Decoding Strategy for Mathematical Equation Generation | 2023-10-14 | https://arxiv.org/abs/2310.09619v3 | https://github.com/zwq2018/multi-view-consistency-for-mwp | In the paper 'An Expression Tree Decoding Strategy for Mathematical Equation Generation', what Accuracy (5-fold) score did the Exp-Tree model get on the Math23K dataset? | 84.1 |
| arXiv-year | FaberNet | HoloNets: Spectral Convolutions do extend to Directed Graphs | 2023-10-03 | https://arxiv.org/abs/2310.02232v2 | https://github.com/ChristianKoke/HoloNets | In the paper 'HoloNets: Spectral Convolutions do extend to Directed Graphs', what Accuracy score did the FaberNet model get on the arXiv-year dataset? | 64.62±1.01 |
| A-OKVQA | MC-CoT | Boosting the Power of Small Multimodal Reasoning Models to Match Larger Models with Self-Consistency Training | 2023-11-23 | https://arxiv.org/abs/2311.14109v2 | https://github.com/chengtan9907/mc-cot | In the paper 'Boosting the Power of Small Multimodal Reasoning Models to Match Larger Models with Self-Consistency Training', what MC Accuracy score did the MC-CoT model get on the A-OKVQA dataset? | 71 |
| AgeDB | ResNet-50-Regression | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10 | https://arxiv.org/abs/2307.04570v3 | https://github.com/paplhjak/facial-age-estimation-benchmark | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-Regression model get on the AgeDB dataset? | 6.23 |
| LingOly | Llama 3 70B | LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low-Resource and Extinct Languages | 2024-06-10 | https://arxiv.org/abs/2406.06196v3 | https://github.com/am-bean/lingOly | In the paper 'LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low-Resource and Extinct Languages', what Exact Match Accuracy score did the Llama 3 70B model get on the LingOly dataset? | 10.3% |
| RESIDE-6K | DA-CLIP | Controlling Vision-Language Models for Multi-Task Image Restoration | 2023-10-02 | https://arxiv.org/abs/2310.01018v2 | https://github.com/algolzw/daclip-uir | In the paper 'Controlling Vision-Language Models for Multi-Task Image Restoration', what PSNR score did the DA-CLIP model get on the RESIDE-6K dataset? | 30.16 |
| EstGEC-L2 | Llama + 1M BT + gold | To Err Is Human, but Llamas Can Learn It Too | 2024-03-08 | https://arxiv.org/abs/2403.05493v2 | https://github.com/TartuNLP/gec-llm | In the paper 'To Err Is Human, but Llamas Can Learn It Too', what F0.5 score did the Llama + 1M BT + gold model get on the EstGEC-L2 dataset? | 69.97 |
| SEPE 8K | DiQP on HEVC with QP 51 | Reversing the Damage: A QP-Aware Transformer-Diffusion Approach for 8K Video Restoration under Codec Compression | 2024-12-12 | https://arxiv.org/abs/2412.08912v1 | https://github.com/alimd94/DiQP | In the paper 'Reversing the Damage: A QP-Aware Transformer-Diffusion Approach for 8K Video Restoration under Codec Compression', what Average PSNR (dB) score did the DiQP on HEVC with QP 51 model get on the SEPE 8K dataset? | 34.197 |
| TVBench | Tarsier-7B | Tarsier: Recipes for Training and Evaluating Large Video Description Models | 2024-06-30 | https://arxiv.org/abs/2407.00634v2 | https://github.com/bytedance/tarsier | In the paper 'Tarsier: Recipes for Training and Evaluating Large Video Description Models', what Average Accuracy score did the Tarsier-7B model get on the TVBench dataset? | 46.4 |
| Ego4D | UniMD+Sync. | UniMD: Towards Unifying Moment Retrieval and Temporal Action Detection | 2024-04-07 | https://arxiv.org/abs/2404.04933v2 | https://github.com/yingsen1/unimd | In the paper 'UniMD: Towards Unifying Moment Retrieval and Temporal Action Detection', what R@1 IoU=0.3 score did the UniMD+Sync. model get on the Ego4D dataset? | 14.16 |
| ZJU-RGB-P | CSFNet-2 | CSFNet: A Cosine Similarity Fusion Network for Real-Time RGB-X Semantic Segmentation of Driving Scenes | 2024-07-01 | https://arxiv.org/abs/2407.01328v1 | https://github.com/Danial-Qashqai/CSFNet | In the paper 'CSFNet: A Cosine Similarity Fusion Network for Real-Time RGB-X Semantic Segmentation of Driving Scenes', what mIoU score did the CSFNet-2 model get on the ZJU-RGB-P dataset? | 91.40 |
| AISHELL-1 | UMA | Unimodal Aggregation for CTC-based Speech Recognition | 2023-09-15 | https://arxiv.org/abs/2309.08150v2 | https://github.com/Audio-WestlakeU/UMA-ASR | In the paper 'Unimodal Aggregation for CTC-based Speech Recognition', what Word Error Rate (WER) score did the UMA model get on the AISHELL-1 dataset? | 4.7 |
| PASCAL Context-59 | FC-CLIP | Convolutions Die Hard: Open-Vocabulary Segmentation with Single Frozen Convolutional CLIP | 2023-08-04 | https://arxiv.org/abs/2308.02487v2 | https://github.com/bytedance/fc-clip | In the paper 'Convolutions Die Hard: Open-Vocabulary Segmentation with Single Frozen Convolutional CLIP', what mIoU score did the FC-CLIP model get on the PASCAL Context-59 dataset? | 58.4 |
| Lipogram-e | GPT-2-with-filter | Most Language Models can be Poets too: An AI Writing Assistant and Constrained Text Generation Studio | 2023-06-28 | https://arxiv.org/abs/2306.15926v1 | https://github.com/hellisotherpeople/constrained-text-generation-studio | In the paper 'Most Language Models can be Poets too: An AI Writing Assistant and Constrained Text Generation Studio', what Ignored Constraint Error Rate score did the GPT-2-with-filter model get on the Lipogram-e dataset? | 0% |
| LRS3-TED | Whisper | Whisper-Flamingo: Integrating Visual Features into Whisper for Audio-Visual Speech Recognition and Translation | 2024-06-14 | https://arxiv.org/abs/2406.10082v3 | https://github.com/roudimit/whisper-flamingo | In the paper 'Whisper-Flamingo: Integrating Visual Features into Whisper for Audio-Visual Speech Recognition and Translation', what Word Error Rate (WER) score did the Whisper model get on the LRS3-TED dataset? | 0.68 |
| SSv2-label retrieval | vid-TLDR (UMT-L) | vid-TLDR: Training Free Token merging for Light-weight Video Transformer | 2024-03-20 | https://arxiv.org/abs/2403.13347v2 | https://github.com/mlvlab/vid-tldr | In the paper 'vid-TLDR: Training Free Token merging for Light-weight Video Transformer', what text-to-video R@1 score did the vid-TLDR (UMT-L) model get on the SSv2-label retrieval dataset? | 73.1 |
| iNaturalist | AIMv2-L | Multimodal Autoregressive Pre-training of Large Vision Encoders | 2024-11-21 | https://arxiv.org/abs/2411.14402v1 | https://github.com/apple/ml-aim | In the paper 'Multimodal Autoregressive Pre-training of Large Vision Encoders', what Top 1 Accuracy score did the AIMv2-L model get on the iNaturalist dataset? | 76 |
| ETTh1 (96) Multivariate | RLinear | Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping | 2023-05-18 | https://arxiv.org/abs/2305.10721v1 | https://github.com/plumprc/rtsf | In the paper 'Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping', what MSE score did the RLinear model get on the ETTh1 (96) Multivariate dataset? | 0.366 |
| Caltech-UCSD Birds 200 (partial ratio 0.05) | ILL | Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations | 2023-05-22 | https://arxiv.org/abs/2305.12715v4 | https://github.com/hhhhhhao/general-framework-weak-supervision | In the paper 'Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations', what Accuracy score did the ILL model get on the Caltech-UCSD Birds 200 (partial ratio 0.05) dataset? | |
| 70.77 |
Cornell | CATv3-sup | CAT: A Causally Graph Attention Network for Trimming Heterophilic Graph | 2023-12-14T00:00:00 | https://arxiv.org/abs/2312.08672v3 | [
"https://github.com/geox-lab/cat"
] | In the paper 'CAT: A Causally Graph Attention Network for Trimming Heterophilic Graph', what Accuracy score did the CATv3-sup model get on the Cornell dataset
| 88.8±2.1 |
STL-10 (1000 Labels, ImageNet-100 Unlabeled) | UnMixMatch | Scaling Up Semi-supervised Learning with Unconstrained Unlabelled Data | 2023-06-02T00:00:00 | https://arxiv.org/abs/2306.01222v2 | [
"https://github.com/shuvenduroy/unmixmatch"
] | In the paper 'Scaling Up Semi-supervised Learning with Unconstrained Unlabelled Data', what Accuracy score did the UnMixMatch model get on the STL-10 (1000 Labels, ImageNet-100 Unlabeled) dataset
| 84.73 |
AIST++ | Lodge (DDPM) | Lodge: A Coarse to Fine Diffusion Network for Long Dance Generation Guided by the Characteristic Dance Primitives | 2024-03-15T00:00:00 | https://arxiv.org/abs/2403.10518v3 | [
"https://github.com/li-ronghui/LODGE"
] | In the paper 'Lodge: A Coarse to Fine Diffusion Network for Long Dance Generation Guided by the Characteristic Dance Primitives', what Beat alignment score score did the Lodge (DDPM) model get on the AIST++ dataset
| 0.24 |
CIFAR-100, 400 Labels | ShrinkMatch | Shrinking Class Space for Enhanced Certainty in Semi-Supervised Learning | 2023-08-13T00:00:00 | https://arxiv.org/abs/2308.06777v1 | [
"https://github.com/LiheYoung/ShrinkMatch"
] | In the paper 'Shrinking Class Space for Enhanced Certainty in Semi-Supervised Learning', what Percentage error score did the ShrinkMatch model get on the CIFAR-100, 400 Labels dataset
| 35.36 |
EC-FUNSD | RORE (LayoutLMv3-large) | Modeling Layout Reading Order as Ordering Relations for Visually-rich Document Understanding | 2024-09-29T00:00:00 | https://arxiv.org/abs/2409.19672v1 | [
"https://github.com/chongzhangFDU/ROOR"
] | In the paper 'Modeling Layout Reading Order as Ordering Relations for Visually-rich Document Understanding', what F1 score did the RORE (LayoutLMv3-large) model get on the EC-FUNSD dataset
| 79.33 |
A2D Sentences | SOC (Video-Swin-T) | SOC: Semantic-Assisted Object Cluster for Referring Video Object Segmentation | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.17011v1 | [
"https://github.com/RobertLuo1/NeurIPS2023_SOC"
] | In the paper 'SOC: Semantic-Assisted Object Cluster for Referring Video Object Segmentation', what Precision@0.5 score did the SOC (Video-Swin-T) model get on the A2D Sentences dataset
| 0.79 |
IllusionVQA | Gemini-Pro 4-shot | IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models | 2024-03-23T00:00:00 | https://arxiv.org/abs/2403.15952v3 | [
"https://github.com/csebuetnlp/illusionvqa"
] | In the paper 'IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models', what Accuracy score did the Gemini-Pro 4-shot model get on the IllusionVQA dataset
| 52.87 |
GRAZPEDWRI-DX | YOLOv8+GCT | Pediatric Wrist Fracture Detection Using Feature Context Excitation Modules in X-ray Images | 2024-10-01T00:00:00 | https://arxiv.org/abs/2410.01031v2 | [
"https://github.com/ruiyangju/fce-yolov8"
] | In the paper 'Pediatric Wrist Fracture Detection Using Feature Context Excitation Modules in X-ray Images', what mAP score did the YOLOv8+GCT model get on the GRAZPEDWRI-DX dataset
| 65.67 |
Atari 2600 Skiing | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07T00:00:00 | https://arxiv.org/abs/2305.04180v3 | [
"https://github.com/xinjinghao/color"
] | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score score did the ASL DDQN model get on the Atari 2600 Skiing dataset
| -8295.4 |
ETTh1 (336) Multivariate | AMD | Adaptive Multi-Scale Decomposition Framework for Time Series Forecasting | 2024-06-06T00:00:00 | https://arxiv.org/abs/2406.03751v1 | [
"https://github.com/troubadour000/amd"
] | In the paper 'Adaptive Multi-Scale Decomposition Framework for Time Series Forecasting', what MSE score did the AMD model get on the ETTh1 (336) Multivariate dataset
| 0.418 |
ESC-50 | EAT | EAT: Self-Supervised Pre-Training with Efficient Audio Transformer | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03497v1 | [
"https://github.com/cwx-worst-one/eat"
] | In the paper 'EAT: Self-Supervised Pre-Training with Efficient Audio Transformer', what Top-1 Accuracy score did the EAT model get on the ESC-50 dataset
| 96.0 |
S3DIS | PonderV2 + SparseUNet | PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm | 2023-10-12T00:00:00 | https://arxiv.org/abs/2310.08586v3 | [
"https://github.com/OpenGVLab/PonderV2"
] | In the paper 'PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm', what Mean IoU score did the PonderV2 + SparseUNet model get on the S3DIS dataset
| 79.9 |
GoPro | NIRE | Neural Image Re-Exposure | 2023-05-23T00:00:00 | https://arxiv.org/abs/2305.13593v1 | [
"https://github.com/zhangxydlut/Neural-Image-Re-Exposure"
] | In the paper 'Neural Image Re-Exposure', what PSNR score did the NIRE model get on the GoPro dataset
| 35.03 |
Ego4D | RGNet | RGNet: A Unified Clip Retrieval and Grounding Network for Long Videos | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06729v3 | [
"https://github.com/tanveer81/rgnet"
] | In the paper 'RGNet: A Unified Clip Retrieval and Grounding Network for Long Videos', what R@1 IoU=0.3 score did the RGNet model get on the Ego4D dataset
| 20.63 |
Fashion IQ | CoVR-BLIP | CoVR-2: Automatic Data Construction for Composed Video Retrieval | 2023-08-28T00:00:00 | https://arxiv.org/abs/2308.14746v4 | [
"https://github.com/lucas-ventura/CoVR"
] | In the paper 'CoVR-2: Automatic Data Construction for Composed Video Retrieval', what (Recall@10+Recall@50)/2 score did the CoVR-BLIP model get on the Fashion IQ dataset
| 59.39 |
VLEP | LLaMA-VQA | Large Language Models are Temporal and Causal Reasoners for Video Question Answering | 2023-10-24T00:00:00 | https://arxiv.org/abs/2310.15747v2 | [
"https://github.com/mlvlab/Flipped-VQA"
] | In the paper 'Large Language Models are Temporal and Causal Reasoners for Video Question Answering', what Accuracy score did the LLaMA-VQA model get on the VLEP dataset
| 71.0 |