| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
| Aria Synthetic Environments | EVL | EFM3D: A Benchmark for Measuring Progress Towards 3D Egocentric Foundation Models | 2024-06-14 | https://arxiv.org/abs/2406.10224v1 | https://github.com/facebookresearch/efm3d | In the paper 'EFM3D: A Benchmark for Measuring Progress Towards 3D Egocentric Foundation Models', what Accuracy score did the EVL model get on the Aria Synthetic Environments dataset? | 5.7 |
| RGBE-SEG | EventSAM | Segment Any Events via Weighted Adaptation of Pivotal Tokens | 2023-12-24 | https://arxiv.org/abs/2312.16222v1 | https://github.com/happychenpipi/eventsam | In the paper 'Segment Any Events via Weighted Adaptation of Pivotal Tokens', what mIoU score did the EventSAM model get on the RGBE-SEG dataset? | 0.41 |
| MVTec AD | ProbabilisticPatchCore | A Probabilistic Transformation of Distance-Based Outliers | 2023-05-16 | https://arxiv.org/abs/2305.09446v2 | https://github.com/davnn/probabilistic-distance | In the paper 'A Probabilistic Transformation of Distance-Based Outliers', what Detection AUROC score did the ProbabilisticPatchCore model get on the MVTec AD dataset? | 98.2 |
| MassSpecGym | DeepSets | MassSpecGym: A benchmark for the discovery and identification of molecules | 2024-10-30 | https://arxiv.org/abs/2410.23326v1 | https://github.com/pluskal-lab/massspecgym | In the paper 'MassSpecGym: A benchmark for the discovery and identification of molecules', what Hit rate @ 1 score did the DeepSets model get on the MassSpecGym dataset? | 4.42 |
| amazon-ratings | GraphSAGE | Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification | 2024-06-13 | https://arxiv.org/abs/2406.08993v2 | https://github.com/LUOyk1999/tunedGNN | In the paper 'Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification', what Accuracy (%) score did the GraphSAGE model get on the amazon-ratings dataset? | 55.40 ± 0.21 |
| SPOT-10 | MobileNetV2 Distiller | SPOTS-10: Animal Pattern Benchmark Dataset for Machine Learning Algorithms | 2024-10-28 | https://arxiv.org/abs/2410.21044v1 | https://github.com/amotica/spots-10 | In the paper 'SPOTS-10: Animal Pattern Benchmark Dataset for Machine Learning Algorithms', what Accuracy score did the MobileNetV2 Distiller model get on the SPOT-10 dataset? | 77.53 |
| RefCOCO+ testA | VATEX | Vision-Aware Text Features in Referring Image Segmentation: From Object Understanding to Context Understanding | 2024-04-12 | https://arxiv.org/abs/2404.08590v2 | https://github.com/nero1342/VATEX_RIS | In the paper 'Vision-Aware Text Features in Referring Image Segmentation: From Object Understanding to Context Understanding', what mIoU score did the VATEX model get on the RefCOCO+ testA dataset? | 74.41 |
| ImageNet-10 | DPAC | Deep Online Probability Aggregation Clustering | 2024-07-07 | https://arxiv.org/abs/2407.05246v2 | https://github.com/aomandechenai/deep-probability-aggregation-clustering | In the paper 'Deep Online Probability Aggregation Clustering', what Accuracy score did the DPAC model get on the ImageNet-10 dataset? | 0.97 |
| TriMouse-161 | DLCRNet | Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity | 2023-06-13 | https://arxiv.org/abs/2306.07879v2 | https://github.com/amathislab/BUCTD | In the paper 'Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity', what mAP score did the DLCRNet model get on the TriMouse-161 dataset? | 95.8 |
| Winoground | OFA large (ft SNLI-VE) | What You See is What You Read? Improving Text-Image Alignment Evaluation | 2023-05-17 | https://arxiv.org/abs/2305.10400v4 | https://github.com/yonatanbitton/wysiwyr | In the paper 'What You See is What You Read? Improving Text-Image Alignment Evaluation', what Text Score did the OFA large (ft SNLI-VE) model get on the Winoground dataset? | 27.70 |
| dbp15k fr-en | UMAEA (w/o surf & iter) | Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment | 2023-07-30 | https://arxiv.org/abs/2307.16210v2 | https://github.com/zjukg/umaea | In the paper 'Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment', what Hits@1 score did the UMAEA (w/o surf & iter) model get on the dbp15k fr-en dataset? | 0.818 |
| CoNLL-2014 Shared Task | GEC-DI (LM+GED) | Improving Seq2Seq Grammatical Error Correction via Decoding Interventions | 2023-10-23 | https://arxiv.org/abs/2310.14534v1 | https://github.com/Jacob-Zhou/gecdi | In the paper 'Improving Seq2Seq Grammatical Error Correction via Decoding Interventions', what F0.5 score did the GEC-DI (LM+GED) model get on the CoNLL-2014 Shared Task dataset? | 69.6 |
| EgoExoLearn | cross-view association baseline (no gaze, val) | EgoExoLearn: A Dataset for Bridging Asynchronous Ego- and Exo-centric View of Procedural Activities in Real World | 2024-03-24 | https://arxiv.org/abs/2403.16182v2 | https://github.com/opengvlab/egoexolearn | In the paper 'EgoExoLearn: A Dataset for Bridging Asynchronous Ego- and Exo-centric View of Procedural Activities in Real World', what Accuracy score did the cross-view association baseline (no gaze, val) model get on the EgoExoLearn dataset? | 44.15 |
| MCubeS | StitchFusion (RGB-A-D) | StitchFusion: Weaving Any Visual Modalities to Enhance Multimodal Semantic Segmentation | 2024-08-02 | https://arxiv.org/abs/2408.01343v1 | https://github.com/libingyu01/stitchfusion-stitchfusion-weaving-any-visual-modalities-to-enhance-multimodal-semantic-segmentation | In the paper 'StitchFusion: Weaving Any Visual Modalities to Enhance Multimodal Semantic Segmentation', what mIoU score did the StitchFusion (RGB-A-D) model get on the MCubeS dataset? | 53.26 |
| AgeDB | FaRL+MLP | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10 | https://arxiv.org/abs/2307.04570v3 | https://github.com/paplhjak/facial-age-estimation-benchmark | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the FaRL+MLP model get on the AgeDB dataset? | 5.64 |
| BACE | GIT-Mol(G+S) | GIT-Mol: A Multi-modal Large Language Model for Molecular Science with Graph, Image, and Text | 2023-08-14 | https://arxiv.org/abs/2308.06911v3 | https://github.com/ai-hpc-research-team/git-mol | In the paper 'GIT-Mol: A Multi-modal Large Language Model for Molecular Science with Graph, Image, and Text', what AUC score did the GIT-Mol(G+S) model get on the BACE dataset? | 0.8108 |
| Amazon Men | RMHA-4 | Positional encoding is not the same as context: A study on positional encoding for Sequential recommendation | 2024-05-16 | https://arxiv.org/abs/2405.10436v1 | https://github.com/researcher1741/position_encoding_srs | In the paper 'Positional encoding is not the same as context: A study on positional encoding for Sequential recommendation', what Hit@10 score did the RMHA-4 model get on the Amazon Men dataset? | 0.7013 |
| CIFAR-100 (partial ratio 0.1) | ILL | Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations | 2023-05-22 | https://arxiv.org/abs/2305.12715v4 | https://github.com/hhhhhhao/general-framework-weak-supervision | In the paper 'Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations', what Accuracy score did the ILL model get on the CIFAR-100 (partial ratio 0.1) dataset? | 74 |
| ImageNet | DeiT-S-12 + GFSA | Graph Convolutions Enrich the Self-Attention in Transformers! | 2023-12-07 | https://arxiv.org/abs/2312.04234v5 | https://github.com/jeongwhanchoi/gfsa | In the paper 'Graph Convolutions Enrich the Self-Attention in Transformers!', what Top 1 Accuracy score did the DeiT-S-12 + GFSA model get on the ImageNet dataset? | 81.1% |
| BDD100K val | A-YOLOM(s) | You Only Look at Once for Real-time and Generic Multi-Task | 2023-10-02 | https://arxiv.org/abs/2310.01641v4 | https://github.com/jiayuanwang-jw/yolov8-multi-task | In the paper 'You Only Look at Once for Real-time and Generic Multi-Task', what mIoU score did the A-YOLOM(s) model get on the BDD100K val dataset? | 91 |
| IMDb | SplitEE-S | SplitEE: Early Exit in Deep Neural Networks with Split Computing | 2023-09-17 | https://arxiv.org/abs/2309.09195v1 | https://github.com/Div290/SplitEE/blob/main/README.md | In the paper 'SplitEE: Early Exit in Deep Neural Networks with Split Computing', what Accuracy score did the SplitEE-S model get on the IMDb dataset? | 82.2 |
| PECC | GPT-3.5 Turbo | PECC: Problem Extraction and Coding Challenges | 2024-04-29 | https://arxiv.org/abs/2404.18766v1 | https://github.com/hallerpatrick/pecc | In the paper 'PECC: Problem Extraction and Coding Challenges', what Pass@3 score did the GPT-3.5 Turbo model get on the PECC dataset? | 23.75 |
| SMAC 26m_vs_30m | DDN | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04 | https://arxiv.org/abs/2306.02430v1 | https://github.com/j3soon/dfac-extended | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the DDN model get on the SMAC 26m_vs_30m dataset? | 67.90 |
| MVTec AD | CPR | Target before Shooting: Accurate Anomaly Detection and Localization under One Millisecond via Cascade Patch Retrieval | 2023-08-13 | https://arxiv.org/abs/2308.06748v1 | https://github.com/flyinghu123/cpr | In the paper 'Target before Shooting: Accurate Anomaly Detection and Localization under One Millisecond via Cascade Patch Retrieval', what Detection AUROC score did the CPR model get on the MVTec AD dataset? | 99.7 |
| ImageNet-LT | LIFT (ViT-L/14) | Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts | 2023-09-18 | https://arxiv.org/abs/2309.10019v3 | https://github.com/shijxcs/lift | In the paper 'Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts', what Top-1 Accuracy score did the LIFT (ViT-L/14) model get on the ImageNet-LT dataset? | 82.9 |
| SMAC MMM2_7m2M1M_vs_8m4M1M | QMIX | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04 | https://arxiv.org/abs/2306.02430v1 | https://github.com/j3soon/dfac-extended | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the QMIX model get on the SMAC MMM2_7m2M1M_vs_8m4M1M dataset? | 29.55 |
| BIG-bench (Temporal Sequences) | PaLM 2 (few-shot, k=3, Direct) | PaLM 2 Technical Report | 2023-05-17 | https://arxiv.org/abs/2305.10403v3 | https://github.com/eternityyw/tram-benchmark | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2 (few-shot, k=3, Direct) model get on the BIG-bench (Temporal Sequences) dataset? | 96.4 |
| Breakfast | HERMES | HERMES: temporal-coHERent long-forM understanding with Episodes and Semantics | 2024-08-30 | https://arxiv.org/abs/2408.17443v3 | https://github.com/joslefaure/HERMES | In the paper 'HERMES: temporal-coHERent long-forM understanding with Episodes and Semantics', what Accuracy (%) score did the HERMES model get on the Breakfast dataset? | 95.2 |
| Texas | 2-HiGCN | Higher-order Graph Convolutional Network with Flower-Petals Laplacians on Simplicial Complexes | 2023-09-22 | https://arxiv.org/abs/2309.12971v2 | https://github.com/yiminghh/higcn | In the paper 'Higher-order Graph Convolutional Network with Flower-Petals Laplacians on Simplicial Complexes', what Accuracy score did the 2-HiGCN model get on the Texas dataset? | 92.45 ± 0.73 |
| VisA | AnomalyDINO-S (full-shot) | AnomalyDINO: Boosting Patch-based Few-shot Anomaly Detection with DINOv2 | 2024-05-23 | https://arxiv.org/abs/2405.14529v2 | https://github.com/dammsi/AnomalyDINO | In the paper 'AnomalyDINO: Boosting Patch-based Few-shot Anomaly Detection with DINOv2', what Detection AUROC score did the AnomalyDINO-S (full-shot) model get on the VisA dataset? | 97.6 |
| RefCOCO+ val | MaskRIS (Swin-B, combined DB) | MaskRIS: Semantic Distortion-aware Data Augmentation for Referring Image Segmentation | 2024-11-28 | https://arxiv.org/abs/2411.19067v1 | https://github.com/naver-ai/maskris | In the paper 'MaskRIS: Semantic Distortion-aware Data Augmentation for Referring Image Segmentation', what Overall IoU score did the MaskRIS (Swin-B, combined DB) model get on the RefCOCO+ val dataset? | 70.26 |
| TNL2K | ODTrack-L | ODTrack: Online Dense Temporal Token Learning for Visual Tracking | 2024-01-03 | https://arxiv.org/abs/2401.01686v1 | https://github.com/gxnu-zhonglab/odtrack | In the paper 'ODTrack: Online Dense Temporal Token Learning for Visual Tracking', what AUC score did the ODTrack-L model get on the TNL2K dataset? | 61.7 |
| Something-Something V2 | TAdaFormer-L/14 | Temporally-Adaptive Models for Efficient Video Understanding | 2023-08-10 | https://arxiv.org/abs/2308.05787v1 | https://github.com/alibaba-mmai-research/TAdaConv | In the paper 'Temporally-Adaptive Models for Efficient Video Understanding', what Top-1 Accuracy score did the TAdaFormer-L/14 model get on the Something-Something V2 dataset? | 73.6 |
| Synthetic Dynamic Networks | Time-cohort Dynamic Features + Static Features | Learning the mechanisms of network growth | 2024-03-31 | https://arxiv.org/abs/2404.00793v3 | https://github.com/LourensT/DynamicNetworkSimulation | In the paper 'Learning the mechanisms of network growth', what Accuracy score did the Time-cohort Dynamic Features + Static Features model get on the Synthetic Dynamic Networks dataset? | 98.4 |
| AISHELL-1 | Lightweight Transducer With LM | Lightweight Transducer Based on Frame-Level Criterion | 2024-09-05 | https://arxiv.org/abs/2409.13698v2 | https://github.com/wangmengzhi/Lightweight-Transducer | In the paper 'Lightweight Transducer Based on Frame-Level Criterion', what Word Error Rate (WER) score did the Lightweight Transducer With LM model get on the AISHELL-1 dataset? | 4.03 |
| CHILI-3K | GraphUNet | CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning | 2024-02-20 | https://arxiv.org/abs/2402.13221v2 | https://github.com/UlrikFriisJensen/CHILI | In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what F1-score (Weighted) score did the GraphUNet model get on the CHILI-3K dataset? | 0.552 ± 0.079 |
| RefCOCO testA | HYDRA | HYDRA: A Hyper Agent for Dynamic Compositional Visual Reasoning | 2024-03-19 | https://arxiv.org/abs/2403.12884v2 | https://github.com/ControlNet/HYDRA | In the paper 'HYDRA: A Hyper Agent for Dynamic Compositional Visual Reasoning', what IoU score did the HYDRA model get on the RefCOCO testA dataset? | 61.7 |
| Fish-100 | HRNet-W48 + Faster R-CNN | Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity | 2023-06-13 | https://arxiv.org/abs/2306.07879v2 | https://github.com/amathislab/BUCTD | In the paper 'Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity', what mAP score did the HRNet-W48 + Faster R-CNN model get on the Fish-100 dataset? | 89.1 |
| DomainNet | PromptStyler (CLIP, ResNet-50) | PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization | 2023-07-27 | https://arxiv.org/abs/2307.15199v2 | https://github.com/zhanghr2001/promptta | In the paper 'PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization', what Average Accuracy score did the PromptStyler (CLIP, ResNet-50) model get on the DomainNet dataset? | 49.5 |
| MM-Vet | InternVL 1.2 | How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites | 2024-04-25 | https://arxiv.org/abs/2404.16821v2 | https://github.com/opengvlab/internvl | In the paper 'How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites', what GPT-4 score did the InternVL 1.2 model get on the MM-Vet dataset? | 48.9 |
| ActivityNet-1.3 | P-MIL | Proposal-Based Multiple Instance Learning for Weakly-Supervised Temporal Action Localization | 2023-05-29 | https://arxiv.org/abs/2305.17861v1 | https://github.com/RenHuan1999/CVPR2023_P-MIL | In the paper 'Proposal-Based Multiple Instance Learning for Weakly-Supervised Temporal Action Localization', what mAP@0.5 score did the P-MIL model get on the ActivityNet-1.3 dataset? | 41.8 |
| Gaze360 | MCGaze | End-to-end Video Gaze Estimation via Capturing Head-face-eye Spatial-temporal Interaction Context | 2023-10-27 | https://arxiv.org/abs/2310.18131v3 | https://github.com/zgchen33/mcgaze | In the paper 'End-to-end Video Gaze Estimation via Capturing Head-face-eye Spatial-temporal Interaction Context', what Angular Error score did the MCGaze model get on the Gaze360 dataset? | 10.02 |
| Peptides-func | GatedGCN-HSG | Next Level Message-Passing with Hierarchical Support Graphs | 2024-06-22 | https://arxiv.org/abs/2406.15852v2 | https://github.com/carlosinator/support-graphs | In the paper 'Next Level Message-Passing with Hierarchical Support Graphs', what AP score did the GatedGCN-HSG model get on the Peptides-func dataset? | 0.6866 ± 0.0038 |
| Texas | HiGNN | Learn from Heterophily: Heterophilous Information-enhanced Graph Neural Network | 2024-03-26 | https://arxiv.org/abs/2403.17351v2 | https://github.com/zylMozart/HiGNN | In the paper 'Learn from Heterophily: Heterophilous Information-enhanced Graph Neural Network', what Accuracy score did the HiGNN model get on the Texas dataset? | 86.22 ± 4.67 |
| LaSOT-ext | HIPTrack | HIPTrack: Visual Tracking with Historical Prompts | 2023-11-03 | https://arxiv.org/abs/2311.02072v2 | https://github.com/wenruicai/hiptrack | In the paper 'HIPTrack: Visual Tracking with Historical Prompts', what AUC score did the HIPTrack model get on the LaSOT-ext dataset? | 53.0 |
| Stanford Dogs | ResNet-50 | PCNN: Probable-Class Nearest-Neighbor Explanations Improve Fine-Grained Image Classification Accuracy for AIs and Humans | 2023-08-25 | https://arxiv.org/abs/2308.13651v5 | https://github.com/giangnguyen2412/PCNN-src-code-TMRL2024 | In the paper 'PCNN: Probable-Class Nearest-Neighbor Explanations Improve Fine-Grained Image Classification Accuracy for AIs and Humans', what Accuracy score did the ResNet-50 model get on the Stanford Dogs dataset? | 86.31% |
| UTKFace | ResNet-50-SORD | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10 | https://arxiv.org/abs/2307.04570v3 | https://github.com/paplhjak/facial-age-estimation-benchmark | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-SORD model get on the UTKFace dataset? | 4.36 |
| GerMS-AT | GBERT-large-SVM | Detecting Sexism in German Online Newspaper Comments with Open-Source Text Embeddings (Team GDA, GermEval2024 Shared Task 1: GerMS-Detect, Subtasks 1 and 2, Closed Track) | 2024-09-16 | https://arxiv.org/abs/2409.10341v2 | https://github.com/dslaborg/germeval2024 | In the paper 'Detecting Sexism in German Online Newspaper Comments with Open-Source Text Embeddings (Team GDA, GermEval2024 Shared Task 1: GerMS-Detect, Subtasks 1 and 2, Closed Track)', what Jensen-Shannon distance score did the GBERT-large-SVM model get on the GerMS-AT dataset? | 0.301 |
| Cornell | M2M-GNN | Sign is Not a Remedy: Multiset-to-Multiset Message Passing for Learning on Heterophilic Graphs | 2024-05-31 | https://arxiv.org/abs/2405.20652v1 | https://github.com/Jinx-byebye/m2mgnn | In the paper 'Sign is Not a Remedy: Multiset-to-Multiset Message Passing for Learning on Heterophilic Graphs', what Accuracy score did the M2M-GNN model get on the Cornell dataset? | 86.48 ± 6.1 |
| Cityscapes-to-FoggyDriving | CoDA | CoDA: Instructive Chain-of-Domain Adaptation with Severity-Aware Visual Prompt Tuning | 2024-03-26 | https://arxiv.org/abs/2403.17369v3 | https://github.com/Cuzyoung/CoDA | In the paper 'CoDA: Instructive Chain-of-Domain Adaptation with Severity-Aware Visual Prompt Tuning', what mIoU score did the CoDA model get on the Cityscapes-to-FoggyDriving dataset? | 61.0 |
| IWSLT2014 English-German | DRDA | Deterministic Reversible Data Augmentation for Neural Machine Translation | 2024-06-04 | https://arxiv.org/abs/2406.02517v1 | https://github.com/BITHLP/DRDA | In the paper 'Deterministic Reversible Data Augmentation for Neural Machine Translation', what BLEU score did the DRDA model get on the IWSLT2014 English-German dataset? | 30.92 |
| VideoInstruct | PPLLaVA-7B | PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance | 2024-11-04 | https://arxiv.org/abs/2411.02327v2 | https://github.com/farewellthree/ppllava | In the paper 'PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance', what Correctness of Information score did the PPLLaVA-7B model get on the VideoInstruct dataset? | 3.32 |
| MassSpecGym | FraGNNet | MassSpecGym: A benchmark for the discovery and identification of molecules | 2024-10-30 | https://arxiv.org/abs/2410.23326v1 | https://github.com/pluskal-lab/massspecgym | In the paper 'MassSpecGym: A benchmark for the discovery and identification of molecules', what Hit Rate @ 1 score did the FraGNNet model get on the MassSpecGym dataset? | 31.93 |
| AFAD | FaRL+MLP | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10 | https://arxiv.org/abs/2307.04570v3 | https://github.com/paplhjak/facial-age-estimation-benchmark | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the FaRL+MLP model get on the AFAD dataset? | 3.12 |
| MM-Vet | Imp-4B | Imp: Highly Capable Large Multimodal Models for Mobile Devices | 2024-05-20 | https://arxiv.org/abs/2405.12107v2 | https://github.com/milvlg/imp | In the paper 'Imp: Highly Capable Large Multimodal Models for Mobile Devices', what GPT-4 score did the Imp-4B model get on the MM-Vet dataset? | 44.6 |
| S3DIS | SPGroup3D | SPGroup3D: Superpoint Grouping Network for Indoor 3D Object Detection | 2023-12-21 | https://arxiv.org/abs/2312.13641v1 | https://github.com/zyrant/spgroup3d | In the paper 'SPGroup3D: Superpoint Grouping Network for Indoor 3D Object Detection', what mAP@0.5 score did the SPGroup3D model get on the S3DIS dataset? | 47.2 |
| The Pile | Gemma-2 2B | Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs | 2024-10-10 | https://arxiv.org/abs/2410.08020v2 | https://github.com/jonhue/activeft | In the paper 'Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs', what Bits per byte score did the Gemma-2 2B model get on The Pile dataset? | 0.721 |
| HowTo100M Adverbs | ReGaDa | Video-adverb retrieval with compositional adverb-action embeddings | 2023-09-26 | https://arxiv.org/abs/2309.15086v1 | https://github.com/ExplainableML/ReGaDa | In the paper 'Video-adverb retrieval with compositional adverb-action embeddings', what mAP W score did the ReGaDa model get on the HowTo100M Adverbs dataset? | 0.567 |
| MNIST | rKAN | rKAN: Rational Kolmogorov-Arnold Networks | 2024-06-20 | https://arxiv.org/abs/2406.14495v1 | https://github.com/alirezaafzalaghaei/rkan | In the paper 'rKAN: Rational Kolmogorov-Arnold Networks', what Accuracy score did the rKAN model get on the MNIST dataset? | 99.293 |
| ISTD | ShadowRefiner | ShadowRefiner: Towards Mask-free Shadow Removal via Fast Fourier Transformer | 2024-04-18 | https://arxiv.org/abs/2406.02559v2 | https://github.com/movingforward100/shadow_r | In the paper 'ShadowRefiner: Towards Mask-free Shadow Removal via Fast Fourier Transformer', what MAE score did the ShadowRefiner model get on the ISTD dataset? | 5.45 |
| nuScenes Camera Only | SparseBEV (V2-99) | SparseBEV: High-Performance Sparse 3D Object Detection from Multi-Camera Videos | 2023-08-18 | https://arxiv.org/abs/2308.09244v2 | https://github.com/mcg-nju/sparsebev | In the paper 'SparseBEV: High-Performance Sparse 3D Object Detection from Multi-Camera Videos', what NDS score did the SparseBEV (V2-99) model get on the nuScenes Camera Only dataset? | 67.5 |
| iNaturalist 2018 | APA (SE-ResNet-50) | Adaptive Parametric Activation | 2024-07-11 | https://arxiv.org/abs/2407.08567v2 | https://github.com/kostas1515/aglu | In the paper 'Adaptive Parametric Activation', what Top-1 Accuracy score did the APA (SE-ResNet-50) model get on the iNaturalist 2018 dataset? | 74.8 |
| DIV2K val - 4x upscaling | RCOT | Residual-Conditioned Optimal Transport: Towards Structure-Preserving Unpaired and Paired Image Restoration | 2024-05-05 | https://arxiv.org/abs/2405.02843v2 | https://github.com/xl-tang3/RCOT | In the paper 'Residual-Conditioned Optimal Transport: Towards Structure-Preserving Unpaired and Paired Image Restoration', what PSNR score did the RCOT model get on the DIV2K val - 4x upscaling dataset? | 28.41 |
| GRAZPEDWRI-DX | YOLOv9-C | YOLOv9 for Fracture Detection in Pediatric Wrist Trauma X-ray Images | 2024-03-17 | https://arxiv.org/abs/2403.11249v2 | https://github.com/ruiyangju/yolov9-fracture-detection | In the paper 'YOLOv9 for Fracture Detection in Pediatric Wrist Trauma X-ray Images', what mAP score did the YOLOv9-C model get on the GRAZPEDWRI-DX dataset? | 65.57 |
| USNA-Cn2 (long-term) | GBRT | Effective Benchmarks for Optical Turbulence Modeling | 2024-01-07 | https://arxiv.org/abs/2401.03573v1 | https://github.com/cdjellen/otbench | In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the GBRT model get on the USNA-Cn2 (long-term) dataset? | 1.340 |
| RefCOCO testA | HyperSeg | HyperSeg: Towards Universal Visual Segmentation with Large Language Model | 2024-11-26 | https://arxiv.org/abs/2411.17606v2 | https://github.com/congvvc/HyperSeg | In the paper 'HyperSeg: Towards Universal Visual Segmentation with Large Language Model', what Overall IoU score did the HyperSeg model get on the RefCOCO testA dataset? | 85.7 |
| OpenMIC-2018 | EAsT-Final + PaSST | Audio Embeddings as Teachers for Music Classification | 2023-06-30 | https://arxiv.org/abs/2306.17424v1 | https://github.com/suncerock/EAsT-music-classification | In the paper 'Audio Embeddings as Teachers for Music Classification', what mean average precision score did the EAsT-Final + PaSST model get on the OpenMIC-2018 dataset? | 0.847 |
| VRDS | RainMamba | RainMamba: Enhanced Locality Learning with State Space Models for Video Deraining | 2024-07-31 | https://arxiv.org/abs/2407.21773v2 | https://github.com/TonyHongtaoWu/RainMamba | In the paper 'RainMamba: Enhanced Locality Learning with State Space Models for Video Deraining', what SSIM score did the RainMamba model get on the VRDS dataset? | 0.9366 |
| ACMPS | ResNet50 | Revising deep learning methods in parking lot occupancy detection | 2023-06-07 | https://arxiv.org/abs/2306.04288v3 | https://github.com/eighonet/parking-research | In the paper 'Revising deep learning methods in parking lot occupancy detection', what F1-score did the ResNet50 model get on the ACMPS dataset? | 0.9379 |
| Atari 2600 Wizard of Wor | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07 | https://arxiv.org/abs/2305.04180v3 | https://github.com/xinjinghao/color | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score did the ASL DDQN model get on the Atari 2600 Wizard of Wor dataset? | 21049 |
| GSM8K | PaLM 2 (few-shot, k=8, SC) | PaLM 2 Technical Report | 2023-05-17 | https://arxiv.org/abs/2305.10403v3 | https://github.com/eternityyw/tram-benchmark | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2 (few-shot, k=8, SC) model get on the GSM8K dataset? | 91.0 |
| ChEBI-20 | BioT5+ | BioT5+: Towards Generalized Biological Understanding with IUPAC Integration and Multi-task Tuning | 2024-02-27 | https://arxiv.org/abs/2402.17810v2 | https://github.com/QizhiPei/BioT5 | In the paper 'BioT5+: Towards Generalized Biological Understanding with IUPAC Integration and Multi-task Tuning', what BLEU-2 score did the BioT5+ model get on the ChEBI-20 dataset? | 66.6 |
| MM-Vet | DreamLLM-7B | DreamLLM: Synergistic Multimodal Comprehension and Creation | 2023-09-20 | https://arxiv.org/abs/2309.11499v2 | https://github.com/RunpeiDong/DreamLLM | In the paper 'DreamLLM: Synergistic Multimodal Comprehension and Creation', what GPT-4 score did the DreamLLM-7B model get on the MM-Vet dataset? | 35.9 |
| SID SonyA7S2 x100 | LRD | Towards General Low-Light Raw Noise Synthesis and Modeling | 2023-07-31 | https://arxiv.org/abs/2307.16508v2 | https://github.com/fengzhang427/LRD | In the paper 'Towards General Low-Light Raw Noise Synthesis and Modeling', what PSNR (Raw) score did the LRD model get on the SID SonyA7S2 x100 dataset? | 41.95 |
| Re-TACRED | RAG4RE | Retrieval-Augmented Generation-based Relation Extraction | 2024-04-20 | https://arxiv.org/abs/2404.13397v1 | https://github.com/sefeoglu/rag4re | In the paper 'Retrieval-Augmented Generation-based Relation Extraction', what F1 score did the RAG4RE model get on the Re-TACRED dataset? | 73.3 |
| Charades | UniMD+Sync. (RGB+Flow) | UniMD: Towards Unifying Moment Retrieval and Temporal Action Detection | 2024-04-07 | https://arxiv.org/abs/2404.04933v2 | https://github.com/yingsen1/unimd | In the paper 'UniMD: Towards Unifying Moment Retrieval and Temporal Action Detection', what mAP score did the UniMD+Sync. (RGB+Flow) model get on the Charades dataset? | 26.53 |
| Peptides-func | NeuralWalker | Learning Long Range Dependencies on Graphs via Random Walks | 2024-06-05 | https://arxiv.org/abs/2406.03386v2 | https://github.com/borgwardtlab/neuralwalker | In the paper 'Learning Long Range Dependencies on Graphs via Random Walks', what AP score did the NeuralWalker model get on the Peptides-func dataset? | 0.7096 ± 0.0078 |
| MBPP | GPT-4 + AgentCoder | AgentCoder: Multi-Agent-based Code Generation with Iterative Testing and Optimisation | 2023-12-20 | https://arxiv.org/abs/2312.13010v3 | https://github.com/huangd1999/AgentCoder | In the paper 'AgentCoder: Multi-Agent-based Code Generation with Iterative Testing and Optimisation', what Accuracy score did the GPT-4 + AgentCoder model get on the MBPP dataset? | 91.8 |
| ETTm1 (192) Multivariate | RLinear | Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping | 2023-05-18 | https://arxiv.org/abs/2305.10721v1 | https://github.com/plumprc/rtsf | In the paper 'Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping', what MSE score did the RLinear model get on the ETTm1 (192) Multivariate dataset? | 0.335 |
| Refer-YouTube-VOS (2021 public validation) | SgMg (Pre-training) | Spectrum-guided Multi-granularity Referring Video Object Segmentation | 2023-07-25 | https://arxiv.org/abs/2307.13537v1 | https://github.com/bo-miao/sgmg | In the paper 'Spectrum-guided Multi-granularity Referring Video Object Segmentation', what J&F score did the SgMg (Pre-training) model get on the Refer-YouTube-VOS (2021 public validation) dataset? | 65.7 |
| Waymo Open Dataset | HEDNet | HEDNet: A Hierarchical Encoder-Decoder Network for 3D Object Detection in Point Clouds | 2023-10-31 | https://arxiv.org/abs/2310.20234v1 | https://github.com/zhanggang001/hednet | In the paper 'HEDNet: A Hierarchical Encoder-Decoder Network for 3D Object Detection in Point Clouds', what mAPH/L2 score did the HEDNet model get on the Waymo Open Dataset? | 73.4 |
| ScanNetV2 | UniDet3D | UniDet3D: Multi-dataset Indoor 3D Object Detection | 2024-09-06 | https://arxiv.org/abs/2409.04234v1 | https://github.com/filapro/unidet3d | In the paper 'UniDet3D: Multi-dataset Indoor 3D Object Detection', what mAP@0.25 score did the UniDet3D model get on the ScanNetV2 dataset? | 77.5 |
| VibraVox (rigid in-ear microphone) | ECAPA2 | Vibravox: A Dataset of French Speech Captured with Body-conduction Audio Sensors | 2024-07-16 | https://arxiv.org/abs/2407.11828v2 | https://github.com/jhauret/vibravox | In the paper 'Vibravox: A Dataset of French Speech Captured with Body-conduction Audio Sensors', what Test EER score did the ECAPA2 model get on the VibraVox (rigid in-ear microphone) dataset? | 0.0316 |
Laurel Caverns | CLIP | AnyLoc: Towards Universal Visual Place Recognition | 2023-08-01T00:00:00 | https://arxiv.org/abs/2308.00688v2 | [
"https://github.com/AnyLoc/AnyLoc"
] | In the paper 'AnyLoc: Towards Universal Visual Place Recognition', what Recall@1 score did the CLIP model get on the Laurel Caverns dataset
| 36.61 |
Unpaired-abdomen-CT | CLIP+SVD+ViT | Spatially Covariant Image Registration with Text Prompts | 2023-11-27T00:00:00 | https://arxiv.org/abs/2311.15607v2 | [
"https://github.com/tinymilky/NeRD"
] | In the paper 'Spatially Covariant Image Registration with Text Prompts', what DSC score did the CLIP+SVD+ViT model get on the Unpaired-abdomen-CT dataset
| 0.6075 |
ADE20K training-free zero-shot segmentation | GEM (MetaCLIP) | Grounding Everything: Emerging Localization Properties in Vision-Language Transformers | 2023-12-01T00:00:00 | https://arxiv.org/abs/2312.00878v3 | [
"https://github.com/walbouss/gem"
] | In the paper 'Grounding Everything: Emerging Localization Properties in Vision-Language Transformers', what mIoU score did the GEM (MetaCLIP) model get on the ADE20K training-free zero-shot segmentation dataset
| 17.1 |
ETTh1 (336) Multivariate | D-PAD | D-PAD: Deep-Shallow Multi-Frequency Patterns Disentangling for Time Series Forecasting | 2024-03-26T00:00:00 | https://arxiv.org/abs/2403.17814v1 | [
"https://github.com/xybbo5/d-pad"
] | In the paper 'D-PAD: Deep-Shallow Multi-Frequency Patterns Disentangling for Time Series Forecasting', what MSE score did the D-PAD model get on the ETTh1 (336) Multivariate dataset
| 0.374 |
MS-COCO (10-shot) | RISF (Resnet-101) | Re-Scoring Using Image-Language Similarity for Few-Shot Object Detection | 2023-11-01T00:00:00 | https://arxiv.org/abs/2311.00278v1 | [
"https://github.com/INFINIQ-AI1/RISF"
] | In the paper 'Re-Scoring Using Image-Language Similarity for Few-Shot Object Detection', what AP score did the RISF (Resnet-101) model get on the MS-COCO (10-shot) dataset
| 21.9 |
Xiph-4k | VFIMamba | VFIMamba: Video Frame Interpolation with State Space Models | 2024-07-02T00:00:00 | https://arxiv.org/abs/2407.02315v2 | [
"https://github.com/mcg-nju/vfimamba"
] | In the paper 'VFIMamba: Video Frame Interpolation with State Space Models', what PSNR score did the VFIMamba model get on the Xiph-4k dataset
| 34.62 |
LEVIR-CD | HANet | HANet: A Hierarchical Attention Network for Change Detection With Bitemporal Very-High-Resolution Remote Sensing Images | 2024-04-14T00:00:00 | https://arxiv.org/abs/2404.09178v1 | [
"https://github.com/chengxihan/hanet-cd"
] | In the paper 'HANet: A Hierarchical Attention Network for Change Detection With Bitemporal Very-High-Resolution Remote Sensing Images', what F1 score did the HANet model get on the LEVIR-CD dataset
| 90.28 |
Weather (96) | RLinear-CI | Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping | 2023-05-18T00:00:00 | https://arxiv.org/abs/2305.10721v1 | [
"https://github.com/plumprc/rtsf"
] | In the paper 'Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping', what MSE score did the RLinear-CI model get on the Weather (96) dataset
| 0.146 |
KoNViD-1k | ReLaX-VQA (trained on LSVQ only) | ReLaX-VQA: Residual Fragment and Layer Stack Extraction for Enhancing Video Quality Assessment | 2024-07-16T00:00:00 | https://arxiv.org/abs/2407.11496v1 | [
"https://github.com/xinyiw915/relax-vqa"
] | In the paper 'ReLaX-VQA: Residual Fragment and Layer Stack Extraction for Enhancing Video Quality Assessment', what PLCC score did the ReLaX-VQA (trained on LSVQ only) model get on the KoNViD-1k dataset
| 0.8427 |
NExT-QA | LLaVA-OV(7B) | LLaVA-OneVision: Easy Visual Task Transfer | 2024-08-06T00:00:00 | https://arxiv.org/abs/2408.03326v3 | [
"https://github.com/evolvinglmms-lab/lmms-eval"
] | In the paper 'LLaVA-OneVision: Easy Visual Task Transfer', what Accuracy score did the LLaVA-OV(7B) model get on the NExT-QA dataset
| 79.4 |
PeMSD7(M) | PM-DMNet(R) | Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction | 2024-08-12T00:00:00 | https://arxiv.org/abs/2408.07100v1 | [
"https://github.com/wengwenchao123/PM-DMNet"
] | In the paper 'Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction', what 12 steps MAE score did the PM-DMNet(R) model get on the PeMSD7(M) dataset
| 2.60 |
RefCOCO+ val | MagNet | Mask Grounding for Referring Image Segmentation | 2023-12-19T00:00:00 | https://arxiv.org/abs/2312.12198v2 | [
"https://github.com/yxchng/mask-grounding"
] | In the paper 'Mask Grounding for Referring Image Segmentation', what Overall IoU score did the MagNet model get on the RefCOCO+ val dataset
| 66.16 |
CIFAR-100-LT (ρ=50) | GCL | Long-tailed Visual Recognition via Gaussian Clouded Logit Adjustment | 2023-05-19T00:00:00 | https://arxiv.org/abs/2305.11733v1 | [
"https://github.com/keke921/gclloss"
] | In the paper 'Long-tailed Visual Recognition via Gaussian Clouded Logit Adjustment', what Error Rate score did the GCL model get on the CIFAR-100-LT (ρ=50) dataset
| 46.4 |
Pubmed | Graph-MLP + SAF | The Split Matters: Flat Minima Methods for Improving the Performance of GNNs | 2023-06-15T00:00:00 | https://arxiv.org/abs/2306.09121v1 | [
"https://github.com/foisunt/fmms-in-gnns"
] | In the paper 'The Split Matters: Flat Minima Methods for Improving the Performance of GNNs', what Accuracy score did the Graph-MLP + SAF model get on the Pubmed dataset
| 90.64 ± 0.46% |
PeMS07 | ADCSD | Online Test-Time Adaptation of Spatial-Temporal Traffic Flow Forecasting | 2024-01-08T00:00:00 | https://arxiv.org/abs/2401.04148v1 | [
"https://github.com/pengxin-guo/adcsd"
] | In the paper 'Online Test-Time Adaptation of Spatial-Temporal Traffic Flow Forecasting', what MAE@1h score did the ADCSD model get on the PeMS07 dataset
| 19.62 |
TerraIncognita | GMDG (RegNetY-16GF) | Rethinking Multi-domain Generalization with A General Learning Objective | 2024-02-29T00:00:00 | https://arxiv.org/abs/2402.18853v1 | [
"https://github.com/zhaorui-tan/GMDG_cvpr2024"
] | In the paper 'Rethinking Multi-domain Generalization with A General Learning Objective', what Average Accuracy score did the GMDG (RegNetY-16GF) model get on the TerraIncognita dataset
| 60.7 |
kickstarter | Trompt + OpenAI embedding | PyTorch Frame: A Modular Framework for Multi-Modal Tabular Learning | 2024-03-31T00:00:00 | https://arxiv.org/abs/2404.00776v1 | [
"https://github.com/pyg-team/pytorch-frame"
] | In the paper 'PyTorch Frame: A Modular Framework for Multi-Modal Tabular Learning', what AUROC score did the Trompt + OpenAI embedding model get on the kickstarter dataset
| 0.81 |