| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
| Caltech-101 | ZLaP | Label Propagation for Zero-shot Classification with Vision-Language Models | 2024-04-05 | https://arxiv.org/abs/2404.04072v1 | https://github.com/vladan-stojnic/zlap | In the paper 'Label Propagation for Zero-shot Classification with Vision-Language Models', what Accuracy score did the ZLaP model get on the Caltech-101 dataset | 84 |
| MVTec AD | URD | Unlocking the Potential of Reverse Distillation for Anomaly Detection | 2024-12-10 | https://arxiv.org/abs/2412.07579v1 | https://github.com/hito2448/urd | In the paper 'Unlocking the Potential of Reverse Distillation for Anomaly Detection', what Detection AUROC score did the URD model get on the MVTec AD dataset | 99.2 |
| Amazon-CDs | HetroFair | Heterophily-Aware Fair Recommendation using Graph Convolutional Networks | 2024-01-31 | https://arxiv.org/abs/2402.03365v2 | https://github.com/nematgh/hetrofair | In the paper 'Heterophily-Aware Fair Recommendation using Graph Convolutional Networks', what NDCG@20 score did the HetroFair model get on the Amazon-CDs dataset | 0.1449 |
| MountainCarContinuous-v0 | TLA | Optimizing Attention and Cognitive Control Costs Using Temporally-Layered Architectures | 2023-05-30 | https://arxiv.org/abs/2305.18701v3 | https://github.com/dee0512/Temporally-Layered-Architecture | In the paper 'Optimizing Attention and Cognitive Control Costs Using Temporally-Layered Architectures', what Action Repetition score did the TLA model get on the MountainCarContinuous-v0 dataset | .914 |
| MultiRC | PaLM 2-M (one-shot) | PaLM 2 Technical Report | 2023-05-17 | https://arxiv.org/abs/2305.10403v3 | https://github.com/eternityyw/tram-benchmark | In the paper 'PaLM 2 Technical Report', what F1 score did the PaLM 2-M (one-shot) model get on the MultiRC dataset | 84.1 |
| Wisconsin | CATv3-sup | CAT: A Causally Graph Attention Network for Trimming Heterophilic Graph | 2023-12-14 | https://arxiv.org/abs/2312.08672v3 | https://github.com/geox-lab/cat | In the paper 'CAT: A Causally Graph Attention Network for Trimming Heterophilic Graph', what Accuracy score did the CATv3-sup model get on the Wisconsin dataset | 85.6±2.1 |
| MM-Vet | Mipha-3B | Mipha: A Comprehensive Overhaul of Multimodal Assistant with Small Language Models | 2024-03-10 | https://arxiv.org/abs/2403.06199v4 | https://github.com/zhuyiche/llava-phi | In the paper 'Mipha: A Comprehensive Overhaul of Multimodal Assistant with Small Language Models', what GPT-4 score score did the Mipha-3B model get on the MM-Vet dataset | 32.1 |
| Shot2Story20K | Shot2Story | Shot2Story20K: A New Benchmark for Comprehensive Understanding of Multi-shot Videos | 2023-12-16 | https://arxiv.org/abs/2312.10300v2 | https://github.com/bytedance/Shot2Story | In the paper 'Shot2Story20K: A New Benchmark for Comprehensive Understanding of Multi-shot Videos', what CIDEr score did the Shot2Story model get on the Shot2Story20K dataset | 37.4 |
| BanglaBook | Random Forest (word 1-gram) | BanglaBook: A Large-scale Bangla Dataset for Sentiment Analysis from Book Reviews | 2023-05-11 | https://arxiv.org/abs/2305.06595v3 | https://github.com/mohsinulkabir14/banglabook | In the paper 'BanglaBook: A Large-scale Bangla Dataset for Sentiment Analysis from Book Reviews', what Weighted Average F1-score score did the Random Forest (word 1-gram) model get on the BanglaBook dataset | 0.9043 |
| MM-Vet | HyperLLaVA | HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models | 2024-03-20 | https://arxiv.org/abs/2403.13447v1 | https://github.com/dcdmllm/hyperllava | In the paper 'HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models', what GPT-4 score score did the HyperLLaVA model get on the MM-Vet dataset | 31.0 |
| Cornell | TE-GCNN | Transfer Entropy in Graph Convolutional Neural Networks | 2024-06-08 | https://arxiv.org/abs/2406.06632v1 | https://github.com/avmoldovan/Heterophily_and_oversmoothing-forked | In the paper 'Transfer Entropy in Graph Convolutional Neural Networks', what Accuracy score did the TE-GCNN model get on the Cornell dataset | 85.68 ± 6.63 |
| UCR Anomaly Archive | IF | Explainable Time Series Anomaly Detection using Masked Latent Generative Modeling | 2023-11-21 | https://arxiv.org/abs/2311.12550v5 | https://github.com/ml4its/timevqvae-anomalydetection | In the paper 'Explainable Time Series Anomaly Detection using Masked Latent Generative Modeling', what accuracy score did the IF model get on the UCR Anomaly Archive dataset | 0.376 |
| SST-2 Binary classification | LM-CPPF RoBERTa-base | LM-CPPF: Paraphrasing-Guided Data Augmentation for Contrastive Prompt-Based Few-Shot Fine-Tuning | 2023-05-29 | https://arxiv.org/abs/2305.18169v3 | https://github.com/amirabaskohi/lm-cppf | In the paper 'LM-CPPF: Paraphrasing-Guided Data Augmentation for Contrastive Prompt-Based Few-Shot Fine-Tuning', what Accuracy score did the LM-CPPF RoBERTa-base model get on the SST-2 Binary classification dataset | 93.2 |
| VoxCeleb1 | ReDimNet-B1-LM-ASNorm (2.2M) | Reshape Dimensions Network for Speaker Recognition | 2024-07-25 | https://arxiv.org/abs/2407.18223v2 | https://github.com/IDRnD/ReDimNet | In the paper 'Reshape Dimensions Network for Speaker Recognition', what EER score did the ReDimNet-B1-LM-ASNorm (2.2M) model get on the VoxCeleb1 dataset | 0.73 |
| AFHQ Dog | DDMI | DDMI: Domain-Agnostic Latent Diffusion Models for Synthesizing High-Quality Implicit Neural Representations | 2024-01-23 | https://arxiv.org/abs/2401.12517v2 | https://github.com/mlvlab/DDMI | In the paper 'DDMI: Domain-Agnostic Latent Diffusion Models for Synthesizing High-Quality Implicit Neural Representations', what FID score did the DDMI model get on the AFHQ Dog dataset | 8.54 |
| Cora with Public Split: fixed 20 nodes per class | GCN | Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification | 2024-06-13 | https://arxiv.org/abs/2406.08993v2 | https://github.com/LUOyk1999/tunedGNN | In the paper 'Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification', what Accuracy score did the GCN model get on the Cora with Public Split: fixed 20 nodes per class dataset | 85.1 ± 0.7 |
| ICBHI Respiratory Sound Database | AST (fine-tuning) | Patch-Mix Contrastive Learning with Audio Spectrogram Transformer on Respiratory Sound Classification | 2023-05-23 | https://arxiv.org/abs/2305.14032v4 | https://github.com/raymin0223/patch-mix_contrastive_learning | In the paper 'Patch-Mix Contrastive Learning with Audio Spectrogram Transformer on Respiratory Sound Classification', what ICBHI Score score did the AST (fine-tuning) model get on the ICBHI Respiratory Sound Database dataset | 59.55 |
| Amazon Beauty | CARCA Abs + Con | Positional encoding is not the same as context: A study on positional encoding for Sequential recommendation | 2024-05-16 | https://arxiv.org/abs/2405.10436v1 | https://github.com/researcher1741/position_encoding_srs | In the paper 'Positional encoding is not the same as context: A study on positional encoding for Sequential recommendation', what Hit@10 score did the CARCA Abs + Con model get on the Amazon Beauty dataset | 0.6793 |
| Tanks and Temples | ET-MVSNet | When Epipolar Constraint Meets Non-local Operators in Multi-View Stereo | 2023-09-29 | https://arxiv.org/abs/2309.17218v1 | https://github.com/tqtqliu/et-mvsnet | In the paper 'When Epipolar Constraint Meets Non-local Operators in Multi-View Stereo', what Mean F1 (Intermediate) score did the ET-MVSNet model get on the Tanks and Temples dataset | 65.49 |
| Cityscapes val | MRFP+(Ours) Resnet50 | MRFP: Learning Generalizable Semantic Segmentation from Sim-2-Real with Multi-Resolution Feature Perturbation | 2023-11-30 | https://arxiv.org/abs/2311.18331v2 | https://github.com/airl-iisc/MRFP | In the paper 'MRFP: Learning Generalizable Semantic Segmentation from Sim-2-Real with Multi-Resolution Feature Perturbation', what mIoU score did the MRFP+(Ours) Resnet50 model get on the Cityscapes val dataset | 42.4 |
| YouTube-VIS 2021 | DVIS(Swin-L) | DVIS: Decoupled Video Instance Segmentation Framework | 2023-06-06 | https://arxiv.org/abs/2306.03413v3 | https://github.com/zhang-tao-whu/DVIS | In the paper 'DVIS: Decoupled Video Instance Segmentation Framework', what mask AP score did the DVIS(Swin-L) model get on the YouTube-VIS 2021 dataset | 60.1 |
| CHILI-100K | GIN | CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning | 2024-02-20 | https://arxiv.org/abs/2402.13221v2 | https://github.com/UlrikFriisJensen/CHILI | In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what MSE score did the GIN model get on the CHILI-100K dataset | 0.491 +/- 0.038 |
| Wisconsin | M2M-GNN | Sign is Not a Remedy: Multiset-to-Multiset Message Passing for Learning on Heterophilic Graphs | 2024-05-31 | https://arxiv.org/abs/2405.20652v1 | https://github.com/Jinx-byebye/m2mgnn | In the paper 'Sign is Not a Remedy: Multiset-to-Multiset Message Passing for Learning on Heterophilic Graphs', what Accuracy score did the M2M-GNN model get on the Wisconsin dataset | 89.01 ± 4.1 |
| Chameleon | JKNet + Hetero-S (8 layers) | The Heterophilic Snowflake Hypothesis: Training and Empowering GNNs for Heterophilic Graphs | 2024-06-18 | https://arxiv.org/abs/2406.12539v1 | https://github.com/bingreeky/heterosnoh | In the paper 'The Heterophilic Snowflake Hypothesis: Training and Empowering GNNs for Heterophilic Graphs', what Accuracy score did the JKNet + Hetero-S (8 layers) model get on the Chameleon dataset | 70.18 |
| DocRED-IE | REXEL | REXEL: An End-to-end Model for Document-Level Relation Extraction and Entity Linking | 2024-04-19 | https://arxiv.org/abs/2404.12788v1 | https://github.com/amazon-science/e2e-docie | In the paper 'REXEL: An End-to-end Model for Document-Level Relation Extraction and Entity Linking', what Avg F1 score did the REXEL model get on the DocRED-IE dataset | 90.93 |
| STS14 | PromptEOL+CSE+LLaMA-30B | Scaling Sentence Embeddings with Large Language Models | 2023-07-31 | https://arxiv.org/abs/2307.16645v1 | https://github.com/kongds/scaling_sentemb | In the paper 'Scaling Sentence Embeddings with Large Language Models', what Spearman Correlation score did the PromptEOL+CSE+LLaMA-30B model get on the STS14 dataset | 0.8585 |
| EQ-Bench | lmsys/vicuna-13b-v1.1 | EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models | 2023-12-11 | https://arxiv.org/abs/2312.06281v2 | https://github.com/eq-bench/eq-bench | In the paper 'EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models', what EQ-Bench Score score did the lmsys/vicuna-13b-v1.1 model get on the EQ-Bench dataset | 32.85 |
| Vinoground | MA-LMM-Vicuna-7B | MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding | 2024-04-08 | https://arxiv.org/abs/2404.05726v2 | https://github.com/boheumd/MA-LMM | In the paper 'MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding', what Text Score score did the MA-LMM-Vicuna-7B model get on the Vinoground dataset | 23.8 |
| spider | C3 + ChatGPT + Zero-Shot | C3: Zero-shot Text-to-SQL with ChatGPT | 2023-07-14 | https://arxiv.org/abs/2307.07306v1 | https://github.com/bigbigwatermalon/c3sql | In the paper 'C3: Zero-shot Text-to-SQL with ChatGPT', what Execution Accuracy (Dev) score did the C3 + ChatGPT + Zero-Shot model get on the spider dataset | 81.8 |
| VisDA2017 | SFDA2 | SF(DA)$^2$: Source-free Domain Adaptation Through the Lens of Data Augmentation | 2024-03-16 | https://arxiv.org/abs/2403.10834v1 | https://github.com/shinyflight/sfda2 | In the paper 'SF(DA)$^2$: Source-free Domain Adaptation Through the Lens of Data Augmentation', what Accuracy score did the SFDA2 model get on the VisDA2017 dataset | 88.1 |
| Humanoid-v4 | MEow | Maximum Entropy Reinforcement Learning via Energy-Based Normalizing Flow | 2024-05-22 | https://arxiv.org/abs/2405.13629v2 | https://github.com/ChienFeng-hub/meow | In the paper 'Maximum Entropy Reinforcement Learning via Energy-Based Normalizing Flow', what Average Return score did the MEow model get on the Humanoid-v4 dataset | 6923.22 |
| EuroSAT | WaveMix | Which Backbone to Use: A Resource-efficient Domain Specific Comparison for Computer Vision | 2024-06-09 | https://arxiv.org/abs/2406.05612v2 | https://github.com/pranavphoenix/Backbones | In the paper 'Which Backbone to Use: A Resource-efficient Domain Specific Comparison for Computer Vision', what Accuracy (%) score did the WaveMix model get on the EuroSAT dataset | 98.96 |
| Caltech-101 | PaddingFlow | PaddingFlow: Improving Normalizing Flows with Padding-Dimensional Noise | 2024-03-13 | https://arxiv.org/abs/2403.08216v2 | https://github.com/adamqlmeng/paddingflow | In the paper 'PaddingFlow: Improving Normalizing Flows with Padding-Dimensional Noise', what MMD-L2 score did the PaddingFlow model get on the Caltech-101 dataset | 17.9 |
| AudioCaps | EnCLAP++-base | EnCLAP++: Analyzing the EnCLAP Framework for Optimizing Automated Audio Captioning Performance | 2024-09-02 | https://arxiv.org/abs/2409.01201v1 | https://github.com/jaeyeonkim99/enclap | In the paper 'EnCLAP++: Analyzing the EnCLAP Framework for Optimizing Automated Audio Captioning Performance', what CIDEr score did the EnCLAP++-base model get on the AudioCaps dataset | 0.815 |
| https://www.kaggle.com/datasets/saurabhshahane/classification-of-malwares | Quantum Neural Network | Software Supply Chain Vulnerabilities Detection in Source Code: Performance Comparison between Traditional and Quantum Machine Learning Algorithms | 2023-05-31 | https://arxiv.org/abs/2306.08060v1 | https://github.com/jannatulshapna/QML-Software-Supply-Chain | In the paper 'Software Supply Chain Vulnerabilities Detection in Source Code: Performance Comparison between Traditional and Quantum Machine Learning Algorithms', what F1 score score did the Quantum Neural Network model get on the https://www.kaggle.com/datasets/saurabhshahane/classification-of-malwares dataset | 0.85 |
| RefCoCo val | GLEE-Pro | General Object Foundation Model for Images and Videos at Scale | 2023-12-14 | https://arxiv.org/abs/2312.09158v1 | https://github.com/FoundationVision/GLEE | In the paper 'General Object Foundation Model for Images and Videos at Scale', what Overall IoU score did the GLEE-Pro model get on the RefCoCo val dataset | 80.0 |
| The Pile | Llama-3.2 3B | Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs | 2024-10-10 | https://arxiv.org/abs/2410.08020v2 | https://github.com/jonhue/activeft | In the paper 'Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs', what Bits per byte score did the Llama-3.2 3B model get on the The Pile dataset | 0.640 |
| SVOX-Night | BoQ (ResNet-50) | BoQ: A Place is Worth a Bag of Learnable Queries | 2024-05-12 | https://arxiv.org/abs/2405.07364v3 | https://github.com/amaralibey/bag-of-queries | In the paper 'BoQ: A Place is Worth a Bag of Learnable Queries', what Recall@1 score did the BoQ (ResNet-50) model get on the SVOX-Night dataset | 87.1 |
| SMAC MMM2_7m2M1M_vs_9m3M1M | QMIX | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04 | https://arxiv.org/abs/2306.02430v1 | https://github.com/j3soon/dfac-extended | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the QMIX model get on the SMAC MMM2_7m2M1M_vs_9m3M1M dataset | 88.64 |
| Refer-YouTube-VOS (2021 public validation) | DsHmp (Video-Swin-Base) | Decoupling Static and Hierarchical Motion Perception for Referring Video Segmentation | 2024-04-04 | https://arxiv.org/abs/2404.03645v1 | https://github.com/heshuting555/dshmp | In the paper 'Decoupling Static and Hierarchical Motion Perception for Referring Video Segmentation', what J&F score did the DsHmp (Video-Swin-Base) model get on the Refer-YouTube-VOS (2021 public validation) dataset | 67.1 |
| RESISC45 | TURTLE (CLIP + DINOv2) | Let Go of Your Labels with Unsupervised Transfer | 2024-06-11 | https://arxiv.org/abs/2406.07236v1 | https://github.com/mlbio-epfl/turtle | In the paper 'Let Go of Your Labels with Unsupervised Transfer', what Accuracy score did the TURTLE (CLIP + DINOv2) model get on the RESISC45 dataset | 89.6 |
| MATH | AlphaMath-7B-SBS@3 | AlphaMath Almost Zero: Process Supervision without Process | 2024-05-06 | https://arxiv.org/abs/2405.03553v3 | https://github.com/MARIO-Math-Reasoning/Super_MARIO | In the paper 'AlphaMath Almost Zero: Process Supervision without Process', what Accuracy score did the AlphaMath-7B-SBS@3 model get on the MATH dataset | 66.3 |
| ONCE | LION | LION: Linear Group RNN for 3D Object Detection in Point Clouds | 2024-07-25 | https://arxiv.org/abs/2407.18232v1 | https://github.com/happinesslz/LION | In the paper 'LION: Linear Group RNN for 3D Object Detection in Point Clouds', what mAP score did the LION model get on the ONCE dataset | 66.6 |
| NYU-Depth V2 | PGT (Swin-T) | Prompt Guided Transformer for Multi-Task Dense Prediction | 2023-07-28 | https://arxiv.org/abs/2307.15362v1 | https://github.com/innovator-zero/MTDP_Lib | In the paper 'Prompt Guided Transformer for Multi-Task Dense Prediction', what odsF score did the PGT (Swin-T) model get on the NYU-Depth V2 dataset | 77.05 |
| FER2013 | ResEmoteNet | ResEmoteNet: Bridging Accuracy and Loss Reduction in Facial Emotion Recognition | 2024-09-01 | https://arxiv.org/abs/2409.10545v2 | https://github.com/ArnabKumarRoy02/ResEmoteNet | In the paper 'ResEmoteNet: Bridging Accuracy and Loss Reduction in Facial Emotion Recognition', what Accuracy score did the ResEmoteNet model get on the FER2013 dataset | 79.79 |
| FakeAVCeleb | FACTOR | Detecting Deepfakes Without Seeing Any | 2023-11-02 | https://arxiv.org/abs/2311.01458v1 | https://github.com/talreiss/factor | In the paper 'Detecting Deepfakes Without Seeing Any', what ROC AUC score did the FACTOR model get on the FakeAVCeleb dataset | 97.4 |
| HIDE | M3SNet | A Mountain-Shaped Single-Stage Network for Accurate Image Restoration | 2023-05-09 | https://arxiv.org/abs/2305.05146v1 | https://github.com/Tombs98/M3SNet | In the paper 'A Mountain-Shaped Single-Stage Network for Accurate Image Restoration', what PSNR score did the M3SNet model get on the HIDE dataset | 31.49 |
| YouTube-VIS validation | CAVIS(VIT-L, Offline) | Context-Aware Video Instance Segmentation | 2024-07-03 | https://arxiv.org/abs/2407.03010v1 | https://github.com/Seung-Hun-Lee/CAVIS | In the paper 'Context-Aware Video Instance Segmentation', what mask AP score did the CAVIS(VIT-L, Offline) model get on the YouTube-VIS validation dataset | 69.4 |
| CUB-200-2011 | SMDL-Attribution (ICLR version) | Less is More: Fewer Interpretable Region via Submodular Subset Selection | 2024-02-14 | https://arxiv.org/abs/2402.09164v3 | https://github.com/ruoyuchen10/smdl-attribution | In the paper 'Less is More: Fewer Interpretable Region via Submodular Subset Selection', what Average highest confidence (ResNet-101) score did the SMDL-Attribution (ICLR version) model get on the CUB-200-2011 dataset | 0.4513 |
| CIFAR-100 | ReviewKD++(T:resnet-32x4, S:shufflenet-v1) | Improving Knowledge Distillation via Regularizing Feature Norm and Direction | 2023-05-26 | https://arxiv.org/abs/2305.17007v1 | https://github.com/wangyz1608/knowledge-distillation-via-nd | In the paper 'Improving Knowledge Distillation via Regularizing Feature Norm and Direction', what Top-1 Accuracy (%) score did the ReviewKD++(T:resnet-32x4, S:shufflenet-v1) model get on the CIFAR-100 dataset | 77.68 |
| RES-Q | QurrentOS-coder + Claude 3.5 Sonnet | RES-Q: Evaluating Code-Editing Large Language Model Systems at the Repository Scale | 2024-06-24 | https://arxiv.org/abs/2406.16801v2 | https://github.com/qurrent-ai/res-q | In the paper 'RES-Q: Evaluating Code-Editing Large Language Model Systems at the Repository Scale', what pass@1 score did the QurrentOS-coder + Claude 3.5 Sonnet model get on the RES-Q dataset | 58.0 |
| MVTec AD | CPR(TensorRT) | Target before Shooting: Accurate Anomaly Detection and Localization under One Millisecond via Cascade Patch Retrieval | 2023-08-13 | https://arxiv.org/abs/2308.06748v1 | https://github.com/flyinghu123/cpr | In the paper 'Target before Shooting: Accurate Anomaly Detection and Localization under One Millisecond via Cascade Patch Retrieval', what FPS score did the CPR(TensorRT) model get on the MVTec AD dataset | 130 |
| Assembly101 | LTContext | How Much Temporal Long-Term Context is Needed for Action Segmentation? | 2023-08-22 | https://arxiv.org/abs/2308.11358v2 | https://github.com/ltcontext/ltcontext | In the paper 'How Much Temporal Long-Term Context is Needed for Action Segmentation?', what MoF score did the LTContext model get on the Assembly101 dataset | 41.2 |
| Office-Home | GMDG (RegNetY-16GF, SWAD) | Rethinking Multi-domain Generalization with A General Learning Objective | 2024-02-29 | https://arxiv.org/abs/2402.18853v1 | https://github.com/zhaorui-tan/GMDG_cvpr2024 | In the paper 'Rethinking Multi-domain Generalization with A General Learning Objective', what Average Accuracy score did the GMDG (RegNetY-16GF, SWAD) model get on the Office-Home dataset | 84.7 |
| ManyTypes4TypeScript | CodeTIDAL5 | Learning Type Inference for Enhanced Dataflow Analysis | 2023-10-01 | https://arxiv.org/abs/2310.00673v2 | https://github.com/joernio/joernti-codetidal5 | In the paper 'Learning Type Inference for Enhanced Dataflow Analysis', what Average Accuracy score did the CodeTIDAL5 model get on the ManyTypes4TypeScript dataset | 71.27 |
| CHILI-3K | Random | CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning | 2024-02-20 | https://arxiv.org/abs/2402.13221v2 | https://github.com/UlrikFriisJensen/CHILI | In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what F1-score (Weighted) score did the Random model get on the CHILI-3K dataset | 0.016 +/- 0.000 |
| MassSpecGym | Random | MassSpecGym: A benchmark for the discovery and identification of molecules | 2024-10-30 | https://arxiv.org/abs/2410.23326v1 | https://github.com/pluskal-lab/massspecgym | In the paper 'MassSpecGym: A benchmark for the discovery and identification of molecules', what Hit rate @ 1 score did the Random model get on the MassSpecGym dataset | 3.06 |
| GTA-to-Avg(Cityscapes,BDD,Mapillary) | DIDEX | Generalization by Adaptation: Diffusion-Based Domain Extension for Domain-Generalized Semantic Segmentation | 2023-12-04 | https://arxiv.org/abs/2312.01850v1 | https://github.com/jniemeijer/didex | In the paper 'Generalization by Adaptation: Diffusion-Based Domain Extension for Domain-Generalized Semantic Segmentation', what mIoU score did the DIDEX model get on the GTA-to-Avg(Cityscapes,BDD,Mapillary) dataset | 59.7 |
| MedConceptsQA | PharMolix/BioMedGPT-LM-7B | BioMedGPT: Open Multimodal Generative Pre-trained Transformer for BioMedicine | 2023-08-18 | https://arxiv.org/abs/2308.09442v2 | https://github.com/pharmolix/openbiomed | In the paper 'BioMedGPT: Open Multimodal Generative Pre-trained Transformer for BioMedicine', what Accuracy score did the PharMolix/BioMedGPT-LM-7B model get on the MedConceptsQA dataset | 24.747 |
| HKU-IS | SAM2-UNet | SAM2-UNet: Segment Anything 2 Makes Strong Encoder for Natural and Medical Image Segmentation | 2024-08-16 | https://arxiv.org/abs/2408.08870v1 | https://github.com/wzh0120/sam2-unet | In the paper 'SAM2-UNet: Segment Anything 2 Makes Strong Encoder for Natural and Medical Image Segmentation', what MAE score did the SAM2-UNet model get on the HKU-IS dataset | 0.019 |
| MM-Vet | InternLM-XC2 + MMDU-45k | MMDU: A Multi-Turn Multi-Image Dialog Understanding Benchmark and Instruction-Tuning Dataset for LVLMs | 2024-06-17 | https://arxiv.org/abs/2406.11833v2 | https://github.com/liuziyu77/mmdu | In the paper 'MMDU: A Multi-Turn Multi-Image Dialog Understanding Benchmark and Instruction-Tuning Dataset for LVLMs', what GPT-4 score score did the InternLM-XC2 + MMDU-45k model get on the MM-Vet dataset | 38.8 |
| ACE 2005 | PromptNER [RoBERTa-large] | PromptNER: Prompt Locating and Typing for Named Entity Recognition | 2023-05-26 | https://arxiv.org/abs/2305.17104v1 | https://github.com/tricktreat/promptner | In the paper 'PromptNER: Prompt Locating and Typing for Named Entity Recognition', what F1 score did the PromptNER [RoBERTa-large] model get on the ACE 2005 dataset | 88.26 |
| WebApp1K-React | gpt-4o-2024-08-06 | Insights from Benchmarking Frontier Language Models on Web App Code Generation | 2024-09-08 | https://arxiv.org/abs/2409.05177v1 | https://github.com/onekq/webapp1k | In the paper 'Insights from Benchmarking Frontier Language Models on Web App Code Generation', what pass@1 score did the gpt-4o-2024-08-06 model get on the WebApp1K-React dataset | 0.885 |
| LEVIR-CD | SRC-Net | SRC-Net: Bi-Temporal Spatial Relationship Concerned Network for Change Detection | 2024-06-09 | https://arxiv.org/abs/2406.05668v2 | https://github.com/Chnja/SRCNet | In the paper 'SRC-Net: Bi-Temporal Spatial Relationship Concerned Network for Change Detection', what F1 score did the SRC-Net model get on the LEVIR-CD dataset | 92.24 |
| SFCHD | SSD | Large, Complex, and Realistic Safety Clothing and Helmet Detection: Dataset and Method | 2023-06-03 | https://arxiv.org/abs/2306.02098v2 | https://github.com/lijfrank-open/SFCHD-SCALE | In the paper 'Large, Complex, and Realistic Safety Clothing and Helmet Detection: Dataset and Method', what mAP@0.50 score did the SSD model get on the SFCHD dataset | 72.8 |
| Refer-YouTube-VOS (2021 public validation) | OnlineRefer (Swin-L, online) | OnlineRefer: A Simple Online Baseline for Referring Video Object Segmentation | 2023-07-18 | https://arxiv.org/abs/2307.09356v1 | https://github.com/wudongming97/onlinerefer | In the paper 'OnlineRefer: A Simple Online Baseline for Referring Video Object Segmentation', what J&F score did the OnlineRefer (Swin-L, online) model get on the Refer-YouTube-VOS (2021 public validation) dataset | 63.5 |
| CIFAR-10 | TRADES-ANCRA/ResNet18 | Enhancing Robust Representation in Adversarial Training: Alignment and Exclusion Criteria | 2023-10-05 | https://arxiv.org/abs/2310.03358v2 | https://github.com/changzhang777/ancra | In the paper 'Enhancing Robust Representation in Adversarial Training: Alignment and Exclusion Criteria', what Attack: AutoAttack score did the TRADES-ANCRA/ResNet18 model get on the CIFAR-10 dataset | 59.70 |
| ADE20K | PosSAM | PosSAM: Panoptic Open-vocabulary Segment Anything | 2024-03-14 | https://arxiv.org/abs/2403.09620v1 | https://github.com/Vibashan/PosSAM | In the paper 'PosSAM: Panoptic Open-vocabulary Segment Anything', what PQ score did the PosSAM model get on the ADE20K dataset | 29.2 |
| RTE | OPT-125M | Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization | 2024-05-24 | https://arxiv.org/abs/2405.15861v3 | https://github.com/ZidongLiu/DeComFL | In the paper 'Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization', what Test Accuracy score did the OPT-125M model get on the RTE dataset | 57.05% |
| DUT-OMRON | M3Net-R | M$^3$Net: Multilevel, Mixed and Multistage Attention Network for Salient Object Detection | 2023-09-15 | https://arxiv.org/abs/2309.08365v1 | https://github.com/I2-Multimedia-Lab/M3Net | In the paper 'M$^3$Net: Multilevel, Mixed and Multistage Attention Network for Salient Object Detection', what MAE score did the M3Net-R model get on the DUT-OMRON dataset | 0.061 |
| GSM8K | DART-Math-Llama3-8B-Prop2Diff (0-shot CoT, w/o code) | DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving | 2024-06-18 | https://arxiv.org/abs/2407.13690v1 | https://github.com/hkust-nlp/dart-math | In the paper 'DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving', what Accuracy score did the DART-Math-Llama3-8B-Prop2Diff (0-shot CoT, w/o code) model get on the GSM8K dataset | 81.1 |
| Tiered ImageNet 5-way (1-shot) | DiffKendall (Meta-Baseline, ResNet-12) | DiffKendall: A Novel Approach for Few-Shot Learning with Differentiable Kendall's Rank Correlation | 2023-07-28 | https://arxiv.org/abs/2307.15317v2 | https://github.com/kaipengm2/DiffKendall | In the paper 'DiffKendall: A Novel Approach for Few-Shot Learning with Differentiable Kendall's Rank Correlation', what Accuracy score did the DiffKendall (Meta-Baseline, ResNet-12) model get on the Tiered ImageNet 5-way (1-shot) dataset | 70.76 |
| BSD100 - 4x upscaling | HMA† | HMANet: Hybrid Multi-Axis Aggregation Network for Image Super-Resolution | 2024-05-08 | https://arxiv.org/abs/2405.05001v1 | https://github.com/korouuuuu/hma | In the paper 'HMANet: Hybrid Multi-Axis Aggregation Network for Image Super-Resolution', what PSNR score did the HMA† model get on the BSD100 - 4x upscaling dataset | 28.13 |
| Words in Context | PaLM 2-S (one-shot) | PaLM 2 Technical Report | 2023-05-17 | https://arxiv.org/abs/2305.10403v3 | https://github.com/eternityyw/tram-benchmark | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-S (one-shot) model get on the Words in Context dataset | 50.6 |
| SMAC corridor_2z_vs_24zg | DDN | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04 | https://arxiv.org/abs/2306.02430v1 | https://github.com/j3soon/dfac-extended | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the DDN model get on the SMAC corridor_2z_vs_24zg dataset | 41.19 |
| HumanML3D | FineMoGen | FineMoGen: Fine-Grained Spatio-Temporal Motion Generation and Editing | 2023-12-22 | https://arxiv.org/abs/2312.15004v1 | https://github.com/mingyuan-zhang/FineMoGen | In the paper 'FineMoGen: Fine-Grained Spatio-Temporal Motion Generation and Editing', what FID score did the FineMoGen model get on the HumanML3D dataset | 0.151 |
| PASCAL-S | SAM2-UNet | SAM2-UNet: Segment Anything 2 Makes Strong Encoder for Natural and Medical Image Segmentation | 2024-08-16 | https://arxiv.org/abs/2408.08870v1 | https://github.com/wzh0120/sam2-unet | In the paper 'SAM2-UNet: Segment Anything 2 Makes Strong Encoder for Natural and Medical Image Segmentation', what MAE score did the SAM2-UNet model get on the PASCAL-S dataset | 0.043 |
| Slakh2100 | PerceiverTF | YourMT3+: Multi-instrument Music Transcription with Enhanced Transformer Architectures and Cross-dataset Stem Augmentation | 2024-07-05 | https://arxiv.org/abs/2407.04822v3 | https://github.com/mimbres/yourmt3 | In the paper 'YourMT3+: Multi-instrument Music Transcription with Enhanced Transformer Architectures and Cross-dataset Stem Augmentation', what note-level F-measure-no-offset (Fno) score did the PerceiverTF model get on the Slakh2100 dataset | 0.819 |
| OUMVLP | HSTL | Hierarchical Spatio-Temporal Representation Learning for Gait Recognition | 2023-07-19 | https://arxiv.org/abs/2307.09856v1 | https://github.com/gudaochangsheng/HSTL | In the paper 'Hierarchical Spatio-Temporal Representation Learning for Gait Recognition', what Averaged rank-1 acc(%) score did the HSTL model get on the OUMVLP dataset | 92.4 |
| TNL2K | ODTrack-B | ODTrack: Online Dense Temporal Token Learning for Visual Tracking | 2024-01-03 | https://arxiv.org/abs/2401.01686v1 | https://github.com/gxnu-zhonglab/odtrack | In the paper 'ODTrack: Online Dense Temporal Token Learning for Visual Tracking', what AUC score did the ODTrack-B model get on the TNL2K dataset | 60.9 |
| BSD100 - 4x upscaling | DRCT | DRCT: Saving Image Super-resolution away from Information Bottleneck | 2024-03-31 | https://arxiv.org/abs/2404.00722v5 | https://github.com/ming053l/drct | In the paper 'DRCT: Saving Image Super-resolution away from Information Bottleneck', what PSNR score did the DRCT model get on the BSD100 - 4x upscaling dataset | 28.06 |
| SPair-71k | SD+DINO (Supervised) | A Tale of Two Features: Stable Diffusion Complements DINO for Zero-Shot Semantic Correspondence | 2023-05-24 | https://arxiv.org/abs/2305.15347v2 | https://github.com/Junyi42/sd-dino | In the paper 'A Tale of Two Features: Stable Diffusion Complements DINO for Zero-Shot Semantic Correspondence', what PCK score did the SD+DINO (Supervised) model get on the SPair-71k dataset | 74.6 |
| FER+ | ARBEx | ARBEx: Attentive Feature Extraction with Reliability Balancing for Robust Facial Expression Learning | 2023-05-02 | https://arxiv.org/abs/2305.01486v5 | … |
"https://github.com/takihasan/arbex"
] | In the paper 'ARBEx: Attentive Feature Extraction with Reliability Balancing for Robust Facial Expression Learning', what Accuracy score did the ARBEx model get on the FER+ dataset
| 93.09 |
Chameleon | CATv3-sup | CAT: A Causally Graph Attention Network for Trimming Heterophilic Graph | 2023-12-14T00:00:00 | https://arxiv.org/abs/2312.08672v3 | [
"https://github.com/geox-lab/cat"
] | In the paper 'CAT: A Causally Graph Attention Network for Trimming Heterophilic Graph', what Accuracy score did the CATv3-sup model get on the Chameleon dataset
| 69.9±1.0 |
CFC-DAOD | AT (ResNet50-FPN) | Align and Distill: Unifying and Improving Domain Adaptive Object Detection | 2024-03-18T00:00:00 | https://arxiv.org/abs/2403.12029v2 | [
"https://github.com/justinkay/aldi"
] | In the paper 'Align and Distill: Unifying and Improving Domain Adaptive Object Detection', what AP@0.5 score did the AT (ResNet50-FPN) model get on the CFC-DAOD dataset
| 69.1 |
VideoInstruct | ST-LLM | ST-LLM: Large Language Models Are Effective Temporal Learners | 2024-03-30T00:00:00 | https://arxiv.org/abs/2404.00308v1 | [
"https://github.com/TencentARC/ST-LLM"
] | In the paper 'ST-LLM: Large Language Models Are Effective Temporal Learners', what gpt-score score did the ST-LLM model get on the VideoInstruct dataset
| 3.23 |
questions | GCN | Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification | 2024-06-13T00:00:00 | https://arxiv.org/abs/2406.08993v2 | [
"https://github.com/LUOyk1999/tunedGNN"
] | In the paper 'Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification', what AUCROC score did the GCN model get on the questions dataset
| 79.02±0.60 |
NCI1 | CIN++ | CIN++: Enhancing Topological Message Passing | 2023-06-06T00:00:00 | https://arxiv.org/abs/2306.03561v1 | [
"https://github.com/twitter-research/cwn"
] | In the paper 'CIN++: Enhancing Topological Message Passing', what Accuracy score did the CIN++ model get on the NCI1 dataset
| 85.3% |
S3DIS | UniDet3D | UniDet3D: Multi-dataset Indoor 3D Object Detection | 2024-09-06T00:00:00 | https://arxiv.org/abs/2409.04234v1 | [
"https://github.com/filapro/unidet3d"
] | In the paper 'UniDet3D: Multi-dataset Indoor 3D Object Detection', what mAP@0.5 score did the UniDet3D model get on the S3DIS dataset
| 60.8 |
BURST-test | Cutie (base, with mose, 600 pixels) | Putting the Object Back into Video Object Segmentation | 2023-10-19T00:00:00 | https://arxiv.org/abs/2310.12982v2 | [
"https://github.com/hkchengrex/Cutie"
] | In the paper 'Putting the Object Back into Video Object Segmentation', what HOTA (all) score did the Cutie (base, with mose, 600 pixels) model get on the BURST-test dataset
| 62.6 |
YouTube-UGC | ReLaX-VQA (trained on LSVQ only) | ReLaX-VQA: Residual Fragment and Layer Stack Extraction for Enhancing Video Quality Assessment | 2024-07-16T00:00:00 | https://arxiv.org/abs/2407.11496v1 | [
"https://github.com/xinyiw915/relax-vqa"
] | In the paper 'ReLaX-VQA: Residual Fragment and Layer Stack Extraction for Enhancing Video Quality Assessment', what PLCC score did the ReLaX-VQA (trained on LSVQ only) model get on the YouTube-UGC dataset
| 0.8354 |
ShapeNet Airplane | DiT-3D | DiT-3D: Exploring Plain Diffusion Transformers for 3D Shape Generation | 2023-07-04T00:00:00 | https://arxiv.org/abs/2307.01831v1 | [
"https://github.com/DiT-3D/DiT-3D"
] | In the paper 'DiT-3D: Exploring Plain Diffusion Transformers for 3D Shape Generation', what 1-NNA-CD score did the DiT-3D model get on the ShapeNet Airplane dataset
| 62.35 |
Adience Age | MiVOLO-D1 | MiVOLO: Multi-input Transformer for Age and Gender Estimation | 2023-07-10T00:00:00 | https://arxiv.org/abs/2307.04616v2 | [
"https://github.com/wildchlamydia/mivolo"
] | In the paper 'MiVOLO: Multi-input Transformer for Age and Gender Estimation', what Accuracy (5-fold) score did the MiVOLO-D1 model get on the Adience Age dataset
| 68.69 |
WebApp1k-Duo-React | gpt-4o-2024-08-06 | A Case Study of Web App Coding with OpenAI Reasoning Models | 2024-09-19T00:00:00 | https://arxiv.org/abs/2409.13773v1 | [
"https://github.com/onekq/webapp1k"
] | In the paper 'A Case Study of Web App Coding with OpenAI Reasoning Models', what pass@1 score did the gpt-4o-2024-08-06 model get on the WebApp1k-Duo-React dataset
| 0.531 |
UCR Anomaly Archive | TranAD | Explainable Time Series Anomaly Detection using Masked Latent Generative Modeling | 2023-11-21T00:00:00 | https://arxiv.org/abs/2311.12550v5 | [
"https://github.com/ml4its/timevqvae-anomalydetection"
] | In the paper 'Explainable Time Series Anomaly Detection using Masked Latent Generative Modeling', what accuracy score did the TranAD model get on the UCR Anomaly Archive dataset
| 0.19 |
PECC | chat-bison | PECC: Problem Extraction and Coding Challenges | 2024-04-29T00:00:00 | https://arxiv.org/abs/2404.18766v1 | [
"https://github.com/hallerpatrick/pecc"
] | In the paper 'PECC: Problem Extraction and Coding Challenges', what Pass@3 score did the chat-bison model get on the PECC dataset
| 8.48 |
MM-Vet | LLaVA-Phi | LLaVA-Phi: Efficient Multi-Modal Assistant with Small Language Model | 2024-01-04T00:00:00 | https://arxiv.org/abs/2401.02330v4 | [
"https://github.com/zhuyiche/llava-phi"
] | In the paper 'LLaVA-Phi: Efficient Multi-Modal Assistant with Small Language Model', what GPT-4 score score did the LLaVA-Phi model get on the MM-Vet dataset
| 28.9 |
Cifar100-B0(20 tasks)-no-exemplars | SEED | Divide and not forget: Ensemble of selectively trained experts in Continual Learning | 2024-01-18T00:00:00 | https://arxiv.org/abs/2401.10191v3 | [
"https://github.com/grypesc/seed"
] | In the paper 'Divide and not forget: Ensemble of selectively trained experts in Continual Learning', what Average Incremental Accuracy score did the SEED model get on the Cifar100-B0(20 tasks)-no-exemplars dataset
| 56.2 |
H2O (2 Hands and Objects) | HandFormer-B/21x8 | On the Utility of 3D Hand Poses for Action Recognition | 2024-03-14T00:00:00 | https://arxiv.org/abs/2403.09805v2 | [
"https://github.com/s-shamil/HandFormer"
] | In the paper 'On the Utility of 3D Hand Poses for Action Recognition', what Actions Top-1 score did the HandFormer-B/21x8 model get on the H2O (2 Hands and Objects) dataset
| 93.39 |
Weather (96) | RLinear | Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping | 2023-05-18T00:00:00 | https://arxiv.org/abs/2305.10721v1 | [
"https://github.com/plumprc/rtsf"
] | In the paper 'Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping', what MSE score did the RLinear model get on the Weather (96) dataset
| 0.175 |