| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
| DUO | GCC-Net | A Gated Cross-domain Collaborative Network for Underwater Object Detection | 2023-06-25T00:00:00 | https://arxiv.org/abs/2306.14141v1 | ["https://github.com/ixiaohuihuihui/gcc-net"] | In the paper 'A Gated Cross-domain Collaborative Network for Underwater Object Detection', what All mAP score did the GCC-Net model get on the DUO dataset | 69.1 |
| QVHighlights | UnLoc-B | UnLoc: A Unified Framework for Video Localization Tasks | 2023-08-21T00:00:00 | https://arxiv.org/abs/2308.11062v1 | ["https://github.com/google-research/scenic"] | In the paper 'UnLoc: A Unified Framework for Video Localization Tasks', what R@1 IoU=0.5 score did the UnLoc-B model get on the QVHighlights dataset | 64.5 |
| Citeseer | TransGNN | Strong Transitivity Relations and Graph Neural Networks | 2024-01-01T00:00:00 | https://arxiv.org/abs/2401.01384v1 | ["https://github.com/yassinmihemedi/strong-transitivity-relations-and-graph-neural-network"] | In the paper 'Strong Transitivity Relations and Graph Neural Networks', what 1:1 Accuracy score did the TransGNN model get on the Citeseer dataset | 75.0 |
| PACS | ABA (ResNet18) | Adversarial Bayesian Augmentation for Single-Source Domain Generalization | 2023-07-18T00:00:00 | https://arxiv.org/abs/2307.09520v2 | ["https://github.com/shengcheng/aba"] | In the paper 'Adversarial Bayesian Augmentation for Single-Source Domain Generalization', what Accuracy score did the ABA (ResNet18) model get on the PACS dataset | 66.36 |
| 3RScan | SG-PGM | SG-PGM: Partial Graph Matching Network with Semantic Geometric Fusion for 3D Scene Graph Alignment and Its Downstream Tasks | 2024-03-28T00:00:00 | https://arxiv.org/abs/2403.19474v1 | ["https://github.com/dfki-av/sg-pgm"] | In the paper 'SG-PGM: Partial Graph Matching Network with Semantic Geometric Fusion for 3D Scene Graph Alignment and Its Downstream Tasks', what CD score did the SG-PGM model get on the 3RScan dataset | 0.0083 |
| IEMOCAP | CNN - DARTS | Enhancing Speech Emotion Recognition Through Differentiable Architecture Search | 2023-05-23T00:00:00 | https://arxiv.org/abs/2305.14402v3 | ["https://github.com/jayaneetha/emoDARTS"] | In the paper 'Enhancing Speech Emotion Recognition Through Differentiable Architecture Search', what UA score did the CNN - DARTS model get on the IEMOCAP dataset | 0.696 |
| NASA Li-ion Dataset | SambaMixer | SambaMixer: State of Health Prediction of Li-ion Batteries using Mamba State Space Models | 2024-10-31T00:00:00 | https://arxiv.org/abs/2411.00233v1 | ["https://github.com/sascha-kirch/samba-mixer"] | In the paper 'SambaMixer: State of Health Prediction of Li-ion Batteries using Mamba State Space Models', what mean absolute error score did the SambaMixer model get on the NASA Li-ion Dataset dataset | 1.072 |
| ImageNet | XCiT-S | Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09372v3 | ["https://github.com/tobna/whattransformertofavor"] | In the paper 'Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers', what Top 1 Accuracy score did the XCiT-S model get on the ImageNet dataset | 83.65% |
| UZLF | Automorph | VascX Models: Model Ensembles for Retinal Vascular Analysis from Color Fundus Images | 2024-09-24T00:00:00 | https://arxiv.org/abs/2409.16016v2 | ["https://github.com/eyened/rtnls_vascx_models"] | In the paper 'VascX Models: Model Ensembles for Retinal Vascular Analysis from Color Fundus Images', what Average Dice (0.5*Dice_a + 0.5*Dice_v) score did the Automorph model get on the UZLF dataset | 74.0 |
| PeMS07 | Cy2Mixer | Enhancing Topological Dependencies in Spatio-Temporal Graphs with Cycle Message Passing Blocks | 2024-01-29T00:00:00 | https://arxiv.org/abs/2401.15894v2 | ["https://github.com/leemingo/cy2mixer"] | In the paper 'Enhancing Topological Dependencies in Spatio-Temporal Graphs with Cycle Message Passing Blocks', what MAE@1h score did the Cy2Mixer model get on the PeMS07 dataset | 19.45 |
| COCO test-dev | LeYOLO-Medium@480 | LeYOLO, New Scalable and Efficient CNN Architecture for Object Detection | 2024-06-20T00:00:00 | https://arxiv.org/abs/2406.14239v1 | ["https://github.com/LilianHollard/LeYOLO"] | In the paper 'LeYOLO, New Scalable and Efficient CNN Architecture for Object Detection', what box mAP score did the LeYOLO-Medium@480 model get on the COCO test-dev dataset | 36.4 |
| CausalGym | k-means | CausalGym: Benchmarking causal interpretability methods on linguistic tasks | 2024-02-19T00:00:00 | https://arxiv.org/abs/2402.12560v1 | ["https://github.com/aryamanarora/causalgym"] | In the paper 'CausalGym: Benchmarking causal interpretability methods on linguistic tasks', what Log odds-ratio (pythia-6.9b) score did the k-means model get on the CausalGym dataset | 1.87 |
| CommitmentBank | PaLM 2-M (one-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-M (one-shot) model get on the CommitmentBank dataset | 80.4 |
| DROP Test | PaLM 2 (few-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what F1 score did the PaLM 2 (few-shot) model get on the DROP Test dataset | 85.0 |
| SVAMP | GPT-4 (Model Selection) | Automatic Model Selection with Large Language Models for Reasoning | 2023-05-23T00:00:00 | https://arxiv.org/abs/2305.14333v2 | ["https://github.com/xuzhao0/model-selection-reasoning"] | In the paper 'Automatic Model Selection with Large Language Models for Reasoning', what Execution Accuracy score did the GPT-4 (Model Selection) model get on the SVAMP dataset | 93.7 |
| SVT | DTrOCR 105M | DTrOCR: Decoder-only Transformer for Optical Character Recognition | 2023-08-30T00:00:00 | https://arxiv.org/abs/2308.15996v1 | ["https://github.com/arvindrajan92/DTrOCR"] | In the paper 'DTrOCR: Decoder-only Transformer for Optical Character Recognition', what Accuracy score did the DTrOCR 105M model get on the SVT dataset | 98.9 |
| GTAV-to-Cityscapes Labels | DIDA | Dual-level Interaction for Domain Adaptive Semantic Segmentation | 2023-07-16T00:00:00 | https://arxiv.org/abs/2307.07972v2 | ["https://github.com/rainjamesy/dida"] | In the paper 'Dual-level Interaction for Domain Adaptive Semantic Segmentation', what mIoU score did the DIDA model get on the GTAV-to-Cityscapes Labels dataset | 71.0 |
| AFLW2000-3D | DSFNet-is | DSFNet: Dual Space Fusion Network for Occlusion-Robust 3D Dense Face Alignment | 2023-05-19T00:00:00 | https://arxiv.org/abs/2305.11522v1 | ["https://github.com/lhyfst/dsfnet"] | In the paper 'DSFNet: Dual Space Fusion Network for Occlusion-Robust 3D Dense Face Alignment', what Mean NME score did the DSFNet-is model get on the AFLW2000-3D dataset | 3.16 |
| ImageNet | HVT Large | HVT: A Comprehensive Vision Framework for Learning in Non-Euclidean Space | 2024-09-25T00:00:00 | https://arxiv.org/abs/2409.16897v2 | ["https://github.com/hyperbolicvit/hyperbolicvit"] | In the paper 'HVT: A Comprehensive Vision Framework for Learning in Non-Euclidean Space', what Top 1 Accuracy score did the HVT Large model get on the ImageNet dataset | 85% |
| AFAD | ResNet-50-SORD | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10T00:00:00 | https://arxiv.org/abs/2307.04570v3 | ["https://github.com/paplhjak/facial-age-estimation-benchmark"] | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-SORD model get on the AFAD dataset | 3.14 |
| R2R | VLN-PETL | VLN-PETL: Parameter-Efficient Transfer Learning for Vision-and-Language Navigation | 2023-08-20T00:00:00 | https://arxiv.org/abs/2308.10172v1 | ["https://github.com/yanyuanqiao/vln-petl"] | In the paper 'VLN-PETL: Parameter-Efficient Transfer Learning for Vision-and-Language Navigation', what spl score did the VLN-PETL model get on the R2R dataset | 0.58 |
| CommonsenseQA | PaLM 2 (few-shot, CoT, SC) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2 (few-shot, CoT, SC) model get on the CommonsenseQA dataset | 90.4 |
| UHD-IQA | ARNIQA | ARNIQA: Learning Distortion Manifold for Image Quality Assessment | 2023-10-20T00:00:00 | https://arxiv.org/abs/2310.14918v2 | ["https://github.com/miccunifi/arniqa"] | In the paper 'ARNIQA: Learning Distortion Manifold for Image Quality Assessment', what SRCC score did the ARNIQA model get on the UHD-IQA dataset | 0.739 |
| nuScenes | UniTraj (MTR) | UniTraj: A Unified Framework for Scalable Vehicle Trajectory Prediction | 2024-03-22T00:00:00 | https://arxiv.org/abs/2403.15098v3 | ["https://github.com/vita-epfl/unitraj"] | In the paper 'UniTraj: A Unified Framework for Scalable Vehicle Trajectory Prediction', what MinADE_5 score did the UniTraj (MTR) model get on the nuScenes dataset | 0.96 |
| MLO-Cn2 | GBRT | Effective Benchmarks for Optical Turbulence Modeling | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03573v1 | ["https://github.com/cdjellen/otbench"] | In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the GBRT model get on the MLO-Cn2 dataset | 0.212 |
| Inverse-Text | DeepSolo (ResNet-50, TextOCR) | DeepSolo++: Let Transformer Decoder with Explicit Points Solo for Multilingual Text Spotting | 2023-05-31T00:00:00 | https://arxiv.org/abs/2305.19957v2 | ["https://github.com/vitae-transformer/deepsolo"] | In the paper 'DeepSolo++: Let Transformer Decoder with Explicit Points Solo for Multilingual Text Spotting', what F-measure (%) - No Lexicon score did the DeepSolo (ResNet-50, TextOCR) model get on the Inverse-Text dataset | 64.6 |
| LibriTTS | PeriodWave-Turbo-L | Accelerating High-Fidelity Waveform Generation via Adversarial Flow Matching Optimization | 2024-08-15T00:00:00 | https://arxiv.org/abs/2408.08019v1 | ["https://github.com/sh-lee-prml/periodwave"] | In the paper 'Accelerating High-Fidelity Waveform Generation via Adversarial Flow Matching Optimization', what PESQ score did the PeriodWave-Turbo-L model get on the LibriTTS dataset | 4.454 |
| MMBench | CuMo-7B | CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts | 2024-05-09T00:00:00 | https://arxiv.org/abs/2405.05949v1 | ["https://github.com/shi-labs/cumo"] | In the paper 'CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts', what GPT-3.5 score score did the CuMo-7B model get on the MMBench dataset | 73.0 |
| Places-LT | LIFT (ViT-L/14) | Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts | 2023-09-18T00:00:00 | https://arxiv.org/abs/2309.10019v3 | ["https://github.com/shijxcs/lift"] | In the paper 'Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts', what Top-1 Accuracy score did the LIFT (ViT-L/14) model get on the Places-LT dataset | 53.7 |
| Wisconsin | H2GCN + UniGAP | UniGAP: A Universal and Adaptive Graph Upsampling Approach to Mitigate Over-Smoothing in Node Classification Tasks | 2024-07-28T00:00:00 | https://arxiv.org/abs/2407.19420v1 | ["https://github.com/wangxiaotang0906/unigap"] | In the paper 'UniGAP: A Universal and Adaptive Graph Upsampling Approach to Mitigate Over-Smoothing in Node Classification Tasks', what Accuracy score did the H2GCN + UniGAP model get on the Wisconsin dataset | 87.73 ± 4.8 |
| Training and validation dataset of capsule vision 2024 challenge. | BiomedCLIP+PubmedBERT | A Multimodal Approach For Endoscopic VCE Image Classification Using BiomedCLIP-PubMedBERT | 2024-10-25T00:00:00 | https://arxiv.org/abs/2410.19944v2 | ["https://github.com/Satyajithchary/MedInfoLab_Capsule_Vision_2024_Challenge"] | In the paper 'A Multimodal Approach For Endoscopic VCE Image Classification Using BiomedCLIP-PubMedBERT', what Total Accuracy score did the BiomedCLIP+PubmedBERT model get on the Training and validation dataset of capsule vision 2024 challenge. dataset | 97.75 |
| CIFAR-10 | XU-Net | Attention Masks Help Adversarial Attacks to Bypass Safety Detectors | 2024-11-07T00:00:00 | https://arxiv.org/abs/2411.04772v1 | ["https://github.com/FrankShi9/Attention-Mask-Attack"] | In the paper 'Attention Masks Help Adversarial Attacks to Bypass Safety Detectors', what Robust Accuracy score did the XU-Net model get on the CIFAR-10 dataset | 1% |
| WHOOPS! | VLIS (Lynx) | VLIS: Unimodal Language Models Guide Multimodal Language Generation | 2023-10-15T00:00:00 | https://arxiv.org/abs/2310.09767v2 | ["https://github.com/jiwanchung/vlis"] | In the paper 'VLIS: Unimodal Language Models Guide Multimodal Language Generation', what Accuracy score did the VLIS (Lynx) model get on the WHOOPS! dataset | 80 |
| NYU Depth v2 | DFormer-T | DFormer: Rethinking RGBD Representation Learning for Semantic Segmentation | 2023-09-18T00:00:00 | https://arxiv.org/abs/2309.09668v2 | ["https://github.com/VCIP-RGBD/DFormer"] | In the paper 'DFormer: Rethinking RGBD Representation Learning for Semantic Segmentation', what Mean IoU score did the DFormer-T model get on the NYU Depth v2 dataset | 51.8% |
| HACS | DyFADet(VideoMAEv2) | DyFADet: Dynamic Feature Aggregation for Temporal Action Detection | 2024-07-03T00:00:00 | https://arxiv.org/abs/2407.03197v1 | ["https://github.com/yangle15/DyFADet-pytorch"] | In the paper 'DyFADet: Dynamic Feature Aggregation for Temporal Action Detection', what Average-mAP score did the DyFADet(VideoMAEv2) model get on the HACS dataset | 44.3 |
| Manga109 - 4x upscaling | HMA† | HMANet: Hybrid Multi-Axis Aggregation Network for Image Super-Resolution | 2024-05-08T00:00:00 | https://arxiv.org/abs/2405.05001v1 | ["https://github.com/korouuuuu/hma"] | In the paper 'HMANet: Hybrid Multi-Axis Aggregation Network for Image Super-Resolution', what PSNR score did the HMA† model get on the Manga109 - 4x upscaling dataset | 33.19 |
| Marmoset-8K | CID-W32 | Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity | 2023-06-13T00:00:00 | https://arxiv.org/abs/2306.07879v2 | ["https://github.com/amathislab/BUCTD"] | In the paper 'Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity', what mAP score did the CID-W32 model get on the Marmoset-8K dataset | 92.5 |
| Elliptic Dataset | GCN | Network Analytics for Anti-Money Laundering -- A Systematic Literature Review and Experimental Evaluation | 2024-05-29T00:00:00 | https://arxiv.org/abs/2405.19383v2 | ["https://github.com/B-Deprez/AML_Network"] | In the paper 'Network Analytics for Anti-Money Laundering -- A Systematic Literature Review and Experimental Evaluation', what AUPRC score did the GCN model get on the Elliptic Dataset dataset | 0.5946 |
| CROHME 2016 | PosFormer | PosFormer: Recognizing Complex Handwritten Mathematical Expression with Position Forest Transformer | 2024-07-10T00:00:00 | https://arxiv.org/abs/2407.07764v1 | ["https://github.com/sjtu-deepvisionlab/posformer"] | In the paper 'PosFormer: Recognizing Complex Handwritten Mathematical Expression with Position Forest Transformer', what ExpRate score did the PosFormer model get on the CROHME 2016 dataset | 60.94 |
| HME100K | PosFormer | PosFormer: Recognizing Complex Handwritten Mathematical Expression with Position Forest Transformer | 2024-07-10T00:00:00 | https://arxiv.org/abs/2407.07764v1 | ["https://github.com/sjtu-deepvisionlab/posformer"] | In the paper 'PosFormer: Recognizing Complex Handwritten Mathematical Expression with Position Forest Transformer', what ExpRate score did the PosFormer model get on the HME100K dataset | 69.51 |
| InfiMM-Eval | Otter | Otter: A Multi-Modal Model with In-Context Instruction Tuning | 2023-05-05T00:00:00 | https://arxiv.org/abs/2305.03726v1 | ["https://github.com/luodian/otter"] | In the paper 'Otter: A Multi-Modal Model with In-Context Instruction Tuning', what Overall score score did the Otter model get on the InfiMM-Eval dataset | 22.69 |
| SAFIM | incoder-6B | Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks | 2024-03-07T00:00:00 | https://arxiv.org/abs/2403.04814v3 | ["https://github.com/gonglinyuan/safim"] | In the paper 'Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks', what Algorithmic score did the incoder-6B model get on the SAFIM dataset | 25.16 |
| ICDAR2015 | DTrOCR 105M | DTrOCR: Decoder-only Transformer for Optical Character Recognition | 2023-08-30T00:00:00 | https://arxiv.org/abs/2308.15996v1 | ["https://github.com/arvindrajan92/DTrOCR"] | In the paper 'DTrOCR: Decoder-only Transformer for Optical Character Recognition', what Accuracy score did the DTrOCR 105M model get on the ICDAR2015 dataset | 93.5 |
| spider | T5-SR | T5-SR: A Unified Seq-to-Seq Decoding Strategy for Semantic Parsing | 2023-06-14T00:00:00 | https://arxiv.org/abs/2306.08368v1 | ["https://github.com/JuruoMP/T5-SR"] | In the paper 'T5-SR: A Unified Seq-to-Seq Decoding Strategy for Semantic Parsing', what Exact Match Accuracy (Dev) score did the T5-SR model get on the spider dataset | 77.2 |
| EPIC-KITCHENS-100 | TAdaFormer-L/14 | Temporally-Adaptive Models for Efficient Video Understanding | 2023-08-10T00:00:00 | https://arxiv.org/abs/2308.05787v1 | ["https://github.com/alibaba-mmai-research/TAdaConv"] | In the paper 'Temporally-Adaptive Models for Efficient Video Understanding', what Action@1 score did the TAdaFormer-L/14 model get on the EPIC-KITCHENS-100 dataset | 51.8 |
| FreeSolv | SMA | Self-Guided Masked Autoencoders for Domain-Agnostic Self-Supervised Learning | 2024-02-22T00:00:00 | https://arxiv.org/abs/2402.14789v1 | ["https://github.com/johnathan-xie/sma"] | In the paper 'Self-Guided Masked Autoencoders for Domain-Agnostic Self-Supervised Learning', what RMSE score did the SMA model get on the FreeSolv dataset | 1.09 |
| COD | BiRefNet | Bilateral Reference for High-Resolution Dichotomous Image Segmentation | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03407v6 | ["https://github.com/zhengpeng7/birefnet"] | In the paper 'Bilateral Reference for High-Resolution Dichotomous Image Segmentation', what MAE score did the BiRefNet model get on the COD dataset | 0.014 |
| Geometry3K | GOLD | GOLD: Geometry Problem Solver with Natural Language Description | 2024-05-01T00:00:00 | https://arxiv.org/abs/2405.00494v1 | ["https://github.com/neurasearch/geometry-diagram-description"] | In the paper 'GOLD: Geometry Problem Solver with Natural Language Description', what Accuracy (%) score did the GOLD model get on the Geometry3K dataset | 69.1 |
| CIFAR-100 | ReviewKD++(T:WRN-40-2, S:WRN-40-1) | Improving Knowledge Distillation via Regularizing Feature Norm and Direction | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.17007v1 | ["https://github.com/wangyz1608/knowledge-distillation-via-nd"] | In the paper 'Improving Knowledge Distillation via Regularizing Feature Norm and Direction', what Top-1 Accuracy (%) score did the ReviewKD++(T:WRN-40-2, S:WRN-40-1) model get on the CIFAR-100 dataset | 75.66 |
| MixSNIPS | MISCA | MISCA: A Joint Model for Multiple Intent Detection and Slot Filling with Intent-Slot Co-Attention | 2023-12-10T00:00:00 | https://arxiv.org/abs/2312.05741v1 | ["https://github.com/vinairesearch/misca"] | In the paper 'MISCA: A Joint Model for Multiple Intent Detection and Slot Filling with Intent-Slot Co-Attention', what Micro F1 score did the MISCA model get on the MixSNIPS dataset | 95.2 |
| RMAS | SAM2-UNet | SAM2-UNet: Segment Anything 2 Makes Strong Encoder for Natural and Medical Image Segmentation | 2024-08-16T00:00:00 | https://arxiv.org/abs/2408.08870v1 | ["https://github.com/wzh0120/sam2-unet"] | In the paper 'SAM2-UNet: Segment Anything 2 Makes Strong Encoder for Natural and Medical Image Segmentation', what S-measure score did the SAM2-UNet model get on the RMAS dataset | 0.874 |
| CATT | CATT ED | CATT: Character-based Arabic Tashkeel Transformer | 2024-07-03T00:00:00 | https://arxiv.org/abs/2407.03236v3 | ["https://github.com/abjadai/catt"] | In the paper 'CATT: Character-based Arabic Tashkeel Transformer', what DER(%) score did the CATT ED model get on the CATT dataset | 8.624 |
| CHILI-100K | GCN | CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning | 2024-02-20T00:00:00 | https://arxiv.org/abs/2402.13221v2 | ["https://github.com/UlrikFriisJensen/CHILI"] | In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what MSE score did the GCN model get on the CHILI-100K dataset | 0.090 +/- 0.002 |
| UCR Anomaly Archive | Matrix Profile STUMPY | Explainable Time Series Anomaly Detection using Masked Latent Generative Modeling | 2023-11-21T00:00:00 | https://arxiv.org/abs/2311.12550v5 | ["https://github.com/ml4its/timevqvae-anomalydetection"] | In the paper 'Explainable Time Series Anomaly Detection using Masked Latent Generative Modeling', what accuracy score did the Matrix Profile STUMPY model get on the UCR Anomaly Archive dataset | 0.512 |
| MediaSum | SRformer-BART | Segmented Recurrent Transformer: An Efficient Sequence-to-Sequence Model | 2023-05-24T00:00:00 | https://arxiv.org/abs/2305.16340v3 | ["https://github.com/yinghanlong/SRtransformer"] | In the paper 'Segmented Recurrent Transformer: An Efficient Sequence-to-Sequence Model', what ROUGE-1 score did the SRformer-BART model get on the MediaSum dataset | 32.36 |
| OVIS validation | GRAtt-VIS (ResNet-50) | GRAtt-VIS: Gated Residual Attention for Auto Rectifying Video Instance Segmentation | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.17096v1 | ["https://github.com/tanveer81/grattvis"] | In the paper 'GRAtt-VIS: Gated Residual Attention for Auto Rectifying Video Instance Segmentation', what mask AP score did the GRAtt-VIS (ResNet-50) model get on the OVIS validation dataset | 36.2 |
| Long Video Dataset | READMem-MiVOS (s=1) | READMem: Robust Embedding Association for a Diverse Memory in Unconstrained Video Object Segmentation | 2023-05-22T00:00:00 | https://arxiv.org/abs/2305.12823v2 | ["https://github.com/Vujas-Eteph/READMem"] | In the paper 'READMem: Robust Embedding Association for a Diverse Memory in Unconstrained Video Object Segmentation', what J&F score did the READMem-MiVOS (s=1) model get on the Long Video Dataset dataset | 83.6 |
| CIFAR-100-LT (ρ=100) | FBL (Resnet-32) | Feature-Balanced Loss for Long-Tailed Visual Recognition | 2023-05-18T00:00:00 | https://arxiv.org/abs/2305.10772v1 | ["https://github.com/juyongjiang/fbl"] | In the paper 'Feature-Balanced Loss for Long-Tailed Visual Recognition', what Error Rate score did the FBL (Resnet-32) model get on the CIFAR-100-LT (ρ=100) dataset | 54.78 |
| EuroSAT | RFS+MLP | Improving Cross-domain Few-shot Classification with Multilayer Perceptron | 2023-12-15T00:00:00 | https://arxiv.org/abs/2312.09589v1 | ["https://github.com/BaiShuanghao/CDFSC-MLP"] | In the paper 'Improving Cross-domain Few-shot Classification with Multilayer Perceptron', what 5 shot score did the RFS+MLP model get on the EuroSAT dataset | 78.13 |
| VoxCeleb | ReDimNet-B1-LM-ASNorm (2.2M) | Reshape Dimensions Network for Speaker Recognition | 2024-07-25T00:00:00 | https://arxiv.org/abs/2407.18223v2 | ["https://github.com/IDRnD/ReDimNet"] | In the paper 'Reshape Dimensions Network for Speaker Recognition', what EER score did the ReDimNet-B1-LM-ASNorm (2.2M) model get on the VoxCeleb dataset | 0.73 |
| Caltech-101 | TURTLE (CLIP + DINOv2) | Let Go of Your Labels with Unsupervised Transfer | 2024-06-11T00:00:00 | https://arxiv.org/abs/2406.07236v1 | ["https://github.com/mlbio-epfl/turtle"] | In the paper 'Let Go of Your Labels with Unsupervised Transfer', what Accuracy score did the TURTLE (CLIP + DINOv2) model get on the Caltech-101 dataset | 89.8 |
| COCO 5% labeled data | MixPL | Mixed Pseudo Labels for Semi-Supervised Object Detection | 2023-12-12T00:00:00 | https://arxiv.org/abs/2312.07006v1 | ["https://github.com/czm369/mixpl"] | In the paper 'Mixed Pseudo Labels for Semi-Supervised Object Detection', what mAP score did the MixPL model get on the COCO 5% labeled data dataset | 40.1 |
| UCI GAS | PaddingFlow | PaddingFlow: Improving Normalizing Flows with Padding-Dimensional Noise | 2024-03-13T00:00:00 | https://arxiv.org/abs/2403.08216v2 | ["https://github.com/adamqlmeng/paddingflow"] | In the paper 'PaddingFlow: Improving Normalizing Flows with Padding-Dimensional Noise', what CD score did the PaddingFlow model get on the UCI GAS dataset | 0.89 |
| ImageNet 256x256 | ELM | Elucidating the design space of language models for image generation | 2024-10-21T00:00:00 | https://arxiv.org/abs/2410.16257v1 | ["https://github.com/Pepper-lll/LMforImageGeneration"] | In the paper 'Elucidating the design space of language models for image generation', what FID score did the ELM model get on the ImageNet 256x256 dataset | 1.54 |
| MSU SR-QA Dataset | Q-Align (IAA) | Q-Align: Teaching LMMs for Visual Scoring via Discrete Text-Defined Levels | 2023-12-28T00:00:00 | https://arxiv.org/abs/2312.17090v1 | ["https://github.com/q-future/q-align"] | In the paper 'Q-Align: Teaching LMMs for Visual Scoring via Discrete Text-Defined Levels', what SROCC score did the Q-Align (IAA) model get on the MSU SR-QA Dataset dataset | 0.51521 |
| VideoInstruct | VTimeLLM | VTimeLLM: Empower LLM to Grasp Video Moments | 2023-11-30T00:00:00 | https://arxiv.org/abs/2311.18445v1 | ["https://github.com/huangb23/vtimellm"] | In the paper 'VTimeLLM: Empower LLM to Grasp Video Moments', what gpt-score score did the VTimeLLM model get on the VideoInstruct dataset | 2.78 |
| AudioCaps | EnCLAP++-large | EnCLAP++: Analyzing the EnCLAP Framework for Optimizing Automated Audio Captioning Performance | 2024-09-02T00:00:00 | https://arxiv.org/abs/2409.01201v1 | ["https://github.com/jaeyeonkim99/enclap"] | In the paper 'EnCLAP++: Analyzing the EnCLAP Framework for Optimizing Automated Audio Captioning Performance', what CIDEr score did the EnCLAP++-large model get on the AudioCaps dataset | 0.823 |
| MSVD-QA | COSA | COSA: Concatenated Sample Pretrained Vision-Language Foundation Model | 2023-06-15T00:00:00 | https://arxiv.org/abs/2306.09085v1 | ["https://github.com/txh-mercury/cosa"] | In the paper 'COSA: Concatenated Sample Pretrained Vision-Language Foundation Model', what Accuracy score did the COSA model get on the MSVD-QA dataset | 0.60 |
| MM-Vet | INF-LLaVA | INF-LLaVA: Dual-perspective Perception for High-Resolution Multimodal Large Language Model | 2024-07-23T00:00:00 | https://arxiv.org/abs/2407.16198v1 | ["https://github.com/weihuanglin/inf-llava"] | In the paper 'INF-LLaVA: Dual-perspective Perception for High-Resolution Multimodal Large Language Model', what GPT-4 score score did the INF-LLaVA model get on the MM-Vet dataset | 34.5 |
| Pendulum-v1 | TLA with Hierarchical Reward Functions | Creating Hierarchical Dispositions of Needs in an Agent | 2024-11-23T00:00:00 | https://arxiv.org/abs/2412.00044v1 | ["https://github.com/TofaraMoyo/Heirachical-Reward-Functions"] | In the paper 'Creating Hierarchical Dispositions of Needs in an Agent', what Action Repetition score did the TLA with Hierarchical Reward Functions model get on the Pendulum-v1 dataset | .8073 |
| Squirrel | JKNet + Hetero-S (8 layers) | The Heterophilic Snowflake Hypothesis: Training and Empowering GNNs for Heterophilic Graphs | 2024-06-18T00:00:00 | https://arxiv.org/abs/2406.12539v1 | ["https://github.com/bingreeky/heterosnoh"] | In the paper 'The Heterophilic Snowflake Hypothesis: Training and Empowering GNNs for Heterophilic Graphs', what Accuracy score did the JKNet + Hetero-S (8 layers) model get on the Squirrel dataset | 57.83 |
| MBPP | GPT-3.5 Turbo (few-shot) | DeepSeek-Coder: When the Large Language Model Meets Programming -- The Rise of Code Intelligence | 2024-01-25T00:00:00 | https://arxiv.org/abs/2401.14196v2 | ["https://github.com/deepseek-ai/DeepSeek-Coder"] | In the paper 'DeepSeek-Coder: When the Large Language Model Meets Programming -- The Rise of Code Intelligence', what Accuracy score did the GPT-3.5 Turbo (few-shot) model get on the MBPP dataset | 70.8 |
| MM-Vet | Meteor | Meteor: Mamba-based Traversal of Rationale for Large Language and Vision Models | 2024-05-24T00:00:00 | https://arxiv.org/abs/2405.15574v4 | ["https://github.com/byungkwanlee/meteor"] | In the paper 'Meteor: Mamba-based Traversal of Rationale for Large Language and Vision Models', what GPT-4 score score did the Meteor model get on the MM-Vet dataset | 57.3 |
| COCO val2017 | SynCo (ResNet-50) 200ep | SynCo: Synthetic Hard Negatives in Contrastive Learning for Better Unsupervised Visual Representations | 2024-10-03T00:00:00 | https://arxiv.org/abs/2410.02401v5 | ["https://github.com/giakoumoglou/synco"] | In the paper 'SynCo: Synthetic Hard Negatives in Contrastive Learning for Better Unsupervised Visual Representations', what Bounding Box AP score did the SynCo (ResNet-50) 200ep model get on the COCO val2017 dataset | 40.4 |
| Automatic Cardiac Diagnosis Challenge (ACDC) | LHU-Net | LHU-Net: A Light Hybrid U-Net for Cost-Efficient, High-Performance Volumetric Medical Image Segmentation | 2024-04-07T00:00:00 | https://arxiv.org/abs/2404.05102v2 | ["https://github.com/xmindflow/lhunet"] | In the paper 'LHU-Net: A Light Hybrid U-Net for Cost-Efficient, High-Performance Volumetric Medical Image Segmentation', what Avg DSC score did the LHU-Net model get on the Automatic Cardiac Diagnosis Challenge (ACDC) dataset | 92.65 |
| ImageNet | VkD (T:RegNety 160 S:DeiT-Ti) | $V_kD:$ Improving Knowledge Distillation using Orthogonal Projections | 2024-03-10T00:00:00 | https://arxiv.org/abs/2403.06213v1 | ["https://github.com/roymiles/vkd"] | In the paper '$V_kD:$ Improving Knowledge Distillation using Orthogonal Projections', what Top-1 accuracy % score did the VkD (T:RegNety 160 S:DeiT-Ti) model get on the ImageNet dataset | 79.2 |
| ActivityNet-QA | BT-Adapter (zero-shot) | BT-Adapter: Video Conversation is Feasible Without Video Instruction Tuning | 2023-09-27T00:00:00 | https://arxiv.org/abs/2309.15785v2 | ["https://github.com/farewellthree/BT-Adapter"] | In the paper 'BT-Adapter: Video Conversation is Feasible Without Video Instruction Tuning', what Accuracy score did the BT-Adapter (zero-shot) model get on the ActivityNet-QA dataset | 46.1 |
| NAS-Bench-101 | DiNAS | Multi-conditioned Graph Diffusion for Neural Architecture Search | 2024-03-09T00:00:00 | https://arxiv.org/abs/2403.06020v2 | ["https://github.com/rohanasthana/dinas"] | In the paper 'Multi-conditioned Graph Diffusion for Neural Architecture Search', what Accuracy (%) score did the DiNAS model get on the NAS-Bench-101 dataset | 94.98% |
| DanceTrack | MOTIP (Deformable DETR, with CrowdHuman) | Multiple Object Tracking as ID Prediction | 2024-03-25T00:00:00 | https://arxiv.org/abs/2403.16848v1 | ["https://github.com/MCG-NJU/MOTIP"] | In the paper 'Multiple Object Tracking as ID Prediction', what HOTA score did the MOTIP (Deformable DETR, with CrowdHuman) model get on the DanceTrack dataset | 71.4 |
| ogbg-molhiv | GPTrans-B | Graph Propagation Transformer for Graph Representation Learning | 2023-05-19T00:00:00 | https://arxiv.org/abs/2305.11424v3 | ["https://github.com/czczup/gptrans"] | In the paper 'Graph Propagation Transformer for Graph Representation Learning', what Test ROC-AUC score did the GPTrans-B model get on the ogbg-molhiv dataset | 0.8126 ± 0.0032 |
CAMO | ZoomNeXt-PVTv2-B4 | ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection | 2023-10-31T00:00:00 | https://arxiv.org/abs/2310.20208v4 | [
"https://github.com/lartpang/zoomnext"
] | In the paper 'ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection', what MAE score did the ZoomNeXt-PVTv2-B4 model get on the CAMO dataset
| 0.04 |
PRONTO | DyEdgeGAT | DyEdgeGAT: Dynamic Edge via Graph Attention for Early Fault Detection in IIoT Systems | 2023-07-07T00:00:00 | https://arxiv.org/abs/2307.03761v3 | [
"https://github.com/mengjiezhao/dyedgegat"
] | In the paper 'DyEdgeGAT: Dynamic Edge via Graph Attention for Early Fault Detection in IIoT Systems', what AUC score did the DyEdgeGAT model get on the PRONTO dataset
| 0.8 |
RefCOCO+ val | MaskRIS (Swin-B) | MaskRIS: Semantic Distortion-aware Data Augmentation for Referring Image Segmentation | 2024-11-28T00:00:00 | https://arxiv.org/abs/2411.19067v1 | [
"https://github.com/naver-ai/maskris"
] | In the paper 'MaskRIS: Semantic Distortion-aware Data Augmentation for Referring Image Segmentation', what Overall IoU score did the MaskRIS (Swin-B) model get on the RefCOCO+ val dataset
| 67.54 |
GSM8K | MetaMath-Mistral-7B | MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models | 2023-09-21T00:00:00 | https://arxiv.org/abs/2309.12284v4 | [
"https://github.com/meta-math/MetaMath"
] | In the paper 'MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models', what Accuracy score did the MetaMath-Mistral-7B model get on the GSM8K dataset
| 77.7 |
Abt-Buy | gpt-4o-mini-2024-07-18 | Fine-tuning Large Language Models for Entity Matching | 2024-09-12T00:00:00 | https://arxiv.org/abs/2409.08185v1 | [
"https://github.com/wbsg-uni-mannheim/tailormatch"
] | In the paper 'Fine-tuning Large Language Models for Entity Matching', what F1 (%) score did the gpt-4o-mini-2024-07-18 model get on the Abt-Buy dataset
| 87.68 |
Action-Camera Parking | VGG-19 | Revising deep learning methods in parking lot occupancy detection | 2023-06-07T00:00:00 | https://arxiv.org/abs/2306.04288v3 | [
"https://github.com/eighonet/parking-research"
] | In the paper 'Revising deep learning methods in parking lot occupancy detection', what F1-score score did the VGG-19 model get on the Action-Camera Parking dataset
| 0.9152 |
iMiGUE | Joint Skeletal and Semantic Embedding Loss for Micro-gesture Classification | 2023-07-20T00:00:00 | https://arxiv.org/abs/2307.10624v1 | [
"https://github.com/VUT-HFUT/MiGA2023_Track1"
] | In the paper 'Joint Skeletal and Semantic Embedding Loss for Micro-gesture Classification', what Top 1 Accuracy score did the model get on the iMiGUE dataset
| 64.12 |
Animal Kingdom | MSQNet | Actor-agnostic Multi-label Action Recognition with Multi-modal Query | 2023-07-20T00:00:00 | https://arxiv.org/abs/2307.10763v3 | [
"https://github.com/mondalanindya/msqnet"
] | In the paper 'Actor-agnostic Multi-label Action Recognition with Multi-modal Query', what mAP score did the MSQNet model get on the Animal Kingdom dataset
| 73.1 |
MSVD-Indonesian | VNS-GRU (Cross-Lingual) | MSVD-Indonesian: A Benchmark for Multimodal Video-Text Tasks in Indonesian | 2023-06-20T00:00:00 | https://arxiv.org/abs/2306.11341v1 | [
"https://github.com/willyfh/msvd-indonesian"
] | In the paper 'MSVD-Indonesian: A Benchmark for Multimodal Video-Text Tasks in Indonesian', what BLEU-4 score did the VNS-GRU (Cross-Lingual) model get on the MSVD-Indonesian dataset
| 58.68 |
UrduDoc | EAST [75] | UTRNet: High-Resolution Urdu Text Recognition In Printed Documents | 2023-06-27T00:00:00 | https://arxiv.org/abs/2306.15782v3 | [
"https://github.com/abdur75648/UTRNet-High-Resolution-Urdu-Text-Recognition"
] | In the paper 'UTRNet: High-Resolution Urdu Text Recognition In Printed Documents', what Precision score did the EAST [75] model get on the UrduDoc dataset
| 71.48 |
TerraIncognita | QT-DoG (ResNet-50) | QT-DoG: Quantization-aware Training for Domain Generalization | 2024-10-08T00:00:00 | https://arxiv.org/abs/2410.06020v1 | [
"https://github.com/saqibjaved1/QT-DoG"
] | In the paper 'QT-DoG: Quantization-aware Training for Domain Generalization', what Average Accuracy score did the QT-DoG (ResNet-50) model get on the TerraIncognita dataset
| 50.8 |
SID SonyA7S2 x250 | LED | Make Explicit Calibration Implicit: Calibrate Denoiser Instead of the Noise Model | 2023-08-07T00:00:00 | https://arxiv.org/abs/2308.03448v2 | [
"https://github.com/srameo/led"
] | In the paper 'Make Explicit Calibration Implicit: Calibrate Denoiser Instead of the Noise Model', what PSNR (Raw) score did the LED model get on the SID SonyA7S2 x250 dataset
| 39.34 |
DTD | Linear FT(ViT-L/14) | Task Arithmetic in the Tangent Space: Improved Editing of Pre-Trained Models | 2023-05-22T00:00:00 | https://arxiv.org/abs/2305.12827v3 | [
"https://github.com/gortizji/tangent_task_arithmetic"
] | In the paper 'Task Arithmetic in the Tangent Space: Improved Editing of Pre-Trained Models', what Accuracy score did the Linear FT(ViT-L/14) model get on the DTD dataset
| 90.0 |
N-UCLA | DVANet | DVANet: Disentangling View and Action Features for Multi-View Action Recognition | 2023-12-10T00:00:00 | https://arxiv.org/abs/2312.05719v1 | [
"https://github.com/NyleSiddiqui/MultiView_Actions"
] | In the paper 'DVANet: Disentangling View and Action Features for Multi-View Action Recognition', what Accuracy (Cross-Subject) score did the DVANet model get on the N-UCLA dataset
| 94.4 |
ColonINST-v1 (Unseen) | ColonGPT (w/ LoRA, w/o extra data) | Frontiers in Intelligent Colonoscopy | 2024-10-22T00:00:00 | https://arxiv.org/abs/2410.17241v1 | [
"https://github.com/ai4colonoscopy/intelliscope"
] | In the paper 'Frontiers in Intelligent Colonoscopy', what Accuracy score did the ColonGPT (w/ LoRA, w/o extra data) model get on the ColonINST-v1 (Unseen) dataset
| 85.81 |
HumanML3D | MLP+GRU | Motion2Language, unsupervised learning of synchronized semantic motion segmentation | 2023-10-16T00:00:00 | https://arxiv.org/abs/2310.10594v2 | [
"https://github.com/rd20karim/M2T-Segmentation"
] | In the paper 'Motion2Language, unsupervised learning of synchronized semantic motion segmentation', what BLEU-4 score did the MLP+GRU model get on the HumanML3D dataset
| 23.4 |
CIFAR-100-LT (ρ=10) | LIFT (ViT-B/16, CLIP) | Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts | 2023-09-18T00:00:00 | https://arxiv.org/abs/2309.10019v3 | [
"https://github.com/shijxcs/lift"
] | In the paper 'Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts', what Error Rate score did the LIFT (ViT-B/16, CLIP) model get on the CIFAR-100-LT (ρ=10) dataset
| 15.1 |
GeoQA | GOLD | GOLD: Geometry Problem Solver with Natural Language Description | 2024-05-01T00:00:00 | https://arxiv.org/abs/2405.00494v1 | [
"https://github.com/neurasearch/geometry-diagram-description"
] | In the paper 'GOLD: Geometry Problem Solver with Natural Language Description', what Accuracy (%) score did the GOLD model get on the GeoQA dataset
| 75.2 |
DRIVE | MERIT-GCASCADE | G-CASCADE: Efficient Cascaded Graph Convolutional Decoding for 2D Medical Image Segmentation | 2023-10-24T00:00:00 | https://arxiv.org/abs/2310.16175v1 | [
"https://github.com/SLDGroup/G-CASCADE"
] | In the paper 'G-CASCADE: Efficient Cascaded Graph Convolutional Decoding for 2D Medical Image Segmentation', what F1 score did the MERIT-GCASCADE model get on the DRIVE dataset
| 0.8290 |
METR-LA | TITAN | A Time Series is Worth Five Experts: Heterogeneous Mixture of Experts for Traffic Flow Prediction | 2024-09-26T00:00:00 | https://arxiv.org/abs/2409.17440v1 | [
"https://github.com/sqlcow/TITAN"
] | In the paper 'A Time Series is Worth Five Experts: Heterogeneous Mixture of Experts for Traffic Flow Prediction', what MAE @ 12 step score did the TITAN model get on the METR-LA dataset
| 3.08 |