dataset stringlengths 0 82 | model_name stringlengths 0 150 | paper_title stringlengths 19 175 | paper_date timestamp[ns] | paper_url stringlengths 32 35 | code_links listlengths 1 1 | prompts stringlengths 105 331 | answer stringlengths 1 67 |
|---|---|---|---|---|---|---|---|
MSRVTT-QA | COSA | COSA: Concatenated Sample Pretrained Vision-Language Foundation Model | 2023-06-15T00:00:00 | https://arxiv.org/abs/2306.09085v1 | [
"https://github.com/txh-mercury/cosa"
] | In the paper 'COSA: Concatenated Sample Pretrained Vision-Language Foundation Model', what Accuracy score did the COSA model get on the MSRVTT-QA dataset
| 49.2 |
Defects4J | Rambo | RAMBO: Enhancing RAG-based Repository-Level Method Body Completion | 2024-09-23T00:00:00 | https://arxiv.org/abs/2409.15204v2 | [
"https://github.com/ise-uet-vnu/rambo"
] | In the paper 'RAMBO: Enhancing RAG-based Repository-Level Method Body Completion', what Compilation Rate score did the Rambo model get on the Defects4J dataset
| 76.47 |
MassSpecGym | DeepSets + Fourier features | MassSpecGym: A benchmark for the discovery and identification of molecules | 2024-10-30T00:00:00 | https://arxiv.org/abs/2410.23326v1 | [
"https://github.com/pluskal-lab/massspecgym"
] | In the paper 'MassSpecGym: A benchmark for the discovery and identification of molecules', what Hit rate @ 1 score did the DeepSets + Fourier features model get on the MassSpecGym dataset
| 5.24 |
WHU Building Dataset | SGSLN/256 | Exchanging Dual Encoder-Decoder: A New Strategy for Change Detection with Semantic Guidance and Spatial Localization | 2023-11-19T00:00:00 | https://arxiv.org/abs/2311.11302v1 | [
"https://github.com/walking-shadow/Semantic-guidance-and-spatial-localization-network"
] | In the paper 'Exchanging Dual Encoder-Decoder: A New Strategy for Change Detection with Semantic Guidance and Spatial Localization', what F1-score score did the SGSLN/256 model get on the WHU Building Dataset dataset
| 0.9467 |
Penn Treebank (Word Level) | Ensemble of All | Advancing State of the Art in Language Modeling | 2023-11-28T00:00:00 | https://arxiv.org/abs/2312.03735v1 | [
"https://github.com/davidherel/sota_lm"
] | In the paper 'Advancing State of the Art in Language Modeling', what Validation perplexity score did the Ensemble of All model get on the Penn Treebank (Word Level) dataset
| 48.92 |
KITTI-360 | SuperCluster | Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering | 2024-01-12T00:00:00 | https://arxiv.org/abs/2401.06704v2 | [
"https://github.com/drprojects/superpoint_transformer"
] | In the paper 'Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering', what miou Val score did the SuperCluster model get on the KITTI-360 dataset
| 62.1 |
ColonINST-v1 (Unseen) | LLaVA-Med-v1.5
(w/ LoRA, w/o extra data) | LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day | 2023-06-01T00:00:00 | https://arxiv.org/abs/2306.00890v1 | [
"https://github.com/microsoft/LLaVA-Med"
] | In the paper 'LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day', what Accuray score did the LLaVA-Med-v1.5
(w/ LoRA, w/o extra data) model get on the ColonINST-v1 (Unseen) dataset
| 79.24 |
ModelNet40 | Point-JEPA | Point-JEPA: A Joint Embedding Predictive Architecture for Self-Supervised Learning on Point Cloud | 2024-04-25T00:00:00 | https://arxiv.org/abs/2404.16432v4 | [
"https://github.com/Ayumu-J-S/Point-JEPA"
] | In the paper 'Point-JEPA: A Joint Embedding Predictive Architecture for Self-Supervised Learning on Point Cloud', what Overall Accuracy score did the Point-JEPA model get on the ModelNet40 dataset
| 93.7±0.2 |
LDC2020T02 | LeakDistill (base) | Incorporating Graph Information in Transformer-based AMR Parsing | 2023-06-23T00:00:00 | https://arxiv.org/abs/2306.13467v1 | [
"https://github.com/sapienzanlp/leakdistill"
] | In the paper 'Incorporating Graph Information in Transformer-based AMR Parsing', what Smatch score did the LeakDistill (base) model get on the LDC2020T02 dataset
| 83.5 |
WebApp1k-Duo-React | deepseek-v2.5 | A Case Study of Web App Coding with OpenAI Reasoning Models | 2024-09-19T00:00:00 | https://arxiv.org/abs/2409.13773v1 | [
"https://github.com/onekq/webapp1k"
] | In the paper 'A Case Study of Web App Coding with OpenAI Reasoning Models', what pass@1 score did the deepseek-v2.5 model get on the WebApp1k-Duo-React dataset
| 0.49 |
Flowers-102 | ZLaP* | Label Propagation for Zero-shot Classification with Vision-Language Models | 2024-04-05T00:00:00 | https://arxiv.org/abs/2404.04072v1 | [
"https://github.com/vladan-stojnic/zlap"
] | In the paper 'Label Propagation for Zero-shot Classification with Vision-Language Models', what Accuracy score did the ZLaP* model get on the Flowers-102 dataset
| 75.5 |
KITTI | TURTLE (CLIP + DINOv2) | Let Go of Your Labels with Unsupervised Transfer | 2024-06-11T00:00:00 | https://arxiv.org/abs/2406.07236v1 | [
"https://github.com/mlbio-epfl/turtle"
] | In the paper 'Let Go of Your Labels with Unsupervised Transfer', what Accuracy score did the TURTLE (CLIP + DINOv2) model get on the KITTI dataset
| 39.4 |
Automatic Cardiac Diagnosis Challenge (ACDC) | PVT-GCASCADE | G-CASCADE: Efficient Cascaded Graph Convolutional Decoding for 2D Medical Image Segmentation | 2023-10-24T00:00:00 | https://arxiv.org/abs/2310.16175v1 | [
"https://github.com/SLDGroup/G-CASCADE"
] | In the paper 'G-CASCADE: Efficient Cascaded Graph Convolutional Decoding for 2D Medical Image Segmentation', what Avg DSC score did the PVT-GCASCADE model get on the Automatic Cardiac Diagnosis Challenge (ACDC) dataset
| 91.95 |
Actor | GGCN + UniGAP | UniGAP: A Universal and Adaptive Graph Upsampling Approach to Mitigate Over-Smoothing in Node Classification Tasks | 2024-07-28T00:00:00 | https://arxiv.org/abs/2407.19420v1 | [
"https://github.com/wangxiaotang0906/unigap"
] | In the paper 'UniGAP: A Universal and Adaptive Graph Upsampling Approach to Mitigate Over-Smoothing in Node Classification Tasks', what Accuracy score did the GGCN + UniGAP model get on the Actor dataset
| 37.69 ± 1.2 |
AudioSet | DyMN-L (Audio-Only, Single) | Dynamic Convolutional Neural Networks as Efficient Pre-trained Audio Models | 2023-10-24T00:00:00 | https://arxiv.org/abs/2310.15648v1 | [
"https://github.com/fschmid56/efficientat"
] | In the paper 'Dynamic Convolutional Neural Networks as Efficient Pre-trained Audio Models', what mean average precision score did the DyMN-L (Audio-Only, Single) model get on the AudioSet dataset
| 0.490 |
Stanford2D3D Panoramic | SGAT4PASS(RGB only, Fold 1) | SGAT4PASS: Spherical Geometry-Aware Transformer for PAnoramic Semantic Segmentation | 2023-06-06T00:00:00 | https://arxiv.org/abs/2306.03403v2 | [
"https://github.com/tencentarc/sgat4pass"
] | In the paper 'SGAT4PASS: Spherical Geometry-Aware Transformer for PAnoramic Semantic Segmentation', what mIoU score did the SGAT4PASS(RGB only, Fold 1) model get on the Stanford2D3D Panoramic dataset
| 56.4% |
CelebA-HQ 256x256 | RDUOT | A High-Quality Robust Diffusion Framework for Corrupted Dataset | 2023-11-28T00:00:00 | https://arxiv.org/abs/2311.17101v2 | [
"https://github.com/VinAIResearch/RDUOT"
] | In the paper 'A High-Quality Robust Diffusion Framework for Corrupted Dataset', what FID score did the RDUOT model get on the CelebA-HQ 256x256 dataset
| 5.6 |
PerSeg | PerSAM-F | Personalize Segment Anything Model with One Shot | 2023-05-04T00:00:00 | https://arxiv.org/abs/2305.03048v2 | [
"https://github.com/zrrskywalker/personalize-sam"
] | In the paper 'Personalize Segment Anything Model with One Shot', what mIoU score did the PerSAM-F model get on the PerSeg dataset
| 95.33 |
MoB | I3D | Malicious or Benign? Towards Effective Content Moderation for Children's Videos | 2023-05-24T00:00:00 | https://arxiv.org/abs/2305.15551v1 | [
"https://github.com/syedhammadahmed/mob"
] | In the paper 'Malicious or Benign? Towards Effective Content Moderation for Children's Videos', what Accuracy score did the I3D model get on the MoB dataset
| 72.11 |
Manga109 - 4x upscaling | CFSR | Transforming Image Super-Resolution: A ConvFormer-based Efficient Approach | 2024-01-11T00:00:00 | https://arxiv.org/abs/2401.05633v2 | [
"https://github.com/aitical/cfsr"
] | In the paper 'Transforming Image Super-Resolution: A ConvFormer-based Efficient Approach', what PSNR score did the CFSR model get on the Manga109 - 4x upscaling dataset
| 30.72 |
Chameleon | DJ-GNN | Diffusion-Jump GNNs: Homophiliation via Learnable Metric Filters | 2023-06-29T00:00:00 | https://arxiv.org/abs/2306.16976v1 | [
"https://github.com/AhmedBegggaUA/TFM"
] | In the paper 'Diffusion-Jump GNNs: Homophiliation via Learnable Metric Filters', what Accuracy score did the DJ-GNN model get on the Chameleon dataset
| 80.48±1.46 |
MS COCO | MetaPrompt-SD | Harnessing Diffusion Models for Visual Perception with Meta Prompts | 2023-12-22T00:00:00 | https://arxiv.org/abs/2312.14733v1 | [
"https://github.com/fudan-zvg/meta-prompts"
] | In the paper 'Harnessing Diffusion Models for Visual Perception with Meta Prompts', what AP score did the MetaPrompt-SD model get on the MS COCO dataset
| 79.0 |
ClinTox | BiLSTM | Accelerating Drug Safety Assessment using Bidirectional-LSTM for SMILES Data | 2024-07-08T00:00:00 | https://arxiv.org/abs/2407.18919v1 | [
"https://github.com/kvrsid/toxic"
] | In the paper 'Accelerating Drug Safety Assessment using Bidirectional-LSTM for SMILES Data', what AUC score did the BiLSTM model get on the ClinTox dataset
| 0.97 |
AMZ Comp | GraphSAGE | Half-Hop: A graph upsampling approach for slowing down message passing | 2023-08-17T00:00:00 | https://arxiv.org/abs/2308.09198v1 | [
"https://github.com/nerdslab/halfhop"
] | In the paper 'Half-Hop: A graph upsampling approach for slowing down message passing', what Accuracy score did the GraphSAGE model get on the AMZ Comp dataset
| 84.79% |
DIV2K val - 4x upscaling | SRFlow-LP | Boosting Flow-based Generative Super-Resolution Models via Learned Prior | 2024-03-16T00:00:00 | https://arxiv.org/abs/2403.10988v3 | [
"https://github.com/liyuantsao/BFSR"
] | In the paper 'Boosting Flow-based Generative Super-Resolution Models via Learned Prior', what PSNR score did the SRFlow-LP model get on the DIV2K val - 4x upscaling dataset
| 27.51 |
HumanML3D | EMDM | EMDM: Efficient Motion Diffusion Model for Fast and High-Quality Motion Generation | 2023-12-04T00:00:00 | https://arxiv.org/abs/2312.02256v3 | [
"https://github.com/frank-zy-dou/emdm"
] | In the paper 'EMDM: Efficient Motion Diffusion Model for Fast and High-Quality Motion Generation', what FID score did the EMDM model get on the HumanML3D dataset
| 0.112 |
KITTI Cars Moderate | VoxelNet With Eloss | Entropy Loss: An Interpretability Amplifier of 3D Object Detection Network for Intelligent Driving | 2024-09-01T00:00:00 | https://arxiv.org/abs/2409.00839v1 | [
"https://github.com/yhbcode000/Eloss-Interpretability"
] | In the paper 'Entropy Loss: An Interpretability Amplifier of 3D Object Detection Network for Intelligent Driving', what AP score did the VoxelNet With Eloss model get on the KITTI Cars Moderate dataset
| 73.67% |
Amazon Photo | GCN | Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification | 2024-06-13T00:00:00 | https://arxiv.org/abs/2406.08993v2 | [
"https://github.com/LUOyk1999/tunedGNN"
] | In the paper 'Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification', what Accuracy score did the GCN model get on the Amazon Photo dataset
| 96.10 ± 0.46 |
WSC | OPT-125M | Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization | 2024-05-24T00:00:00 | https://arxiv.org/abs/2405.15861v3 | [
"https://github.com/ZidongLiu/DeComFL"
] | In the paper 'Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization', what Test Accuracy score did the OPT-125M model get on the WSC dataset
| 59.59% |
CATH 4.2 | PiFold | Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement | 2023-05-20T00:00:00 | https://arxiv.org/abs/2305.15151v4 | [
"https://github.com/A4Bio/OpenCPD"
] | In the paper 'Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement', what Sequence Recovery %(All) score did the PiFold model get on the CATH 4.2 dataset
| 51.66 |
Stanford Cars | TURTLE (CLIP + DINOv2) | Let Go of Your Labels with Unsupervised Transfer | 2024-06-11T00:00:00 | https://arxiv.org/abs/2406.07236v1 | [
"https://github.com/mlbio-epfl/turtle"
] | In the paper 'Let Go of Your Labels with Unsupervised Transfer', what Accuracy score did the TURTLE (CLIP + DINOv2) model get on the Stanford Cars dataset
| 0.646 |
nuScenes | AD-MLP | Rethinking the Open-Loop Evaluation of End-to-End Autonomous Driving in nuScenes | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10430v2 | [
"https://github.com/E2E-AD/AD-MLP"
] | In the paper 'Rethinking the Open-Loop Evaluation of End-to-End Autonomous Driving in nuScenes', what Collision-3s score did the AD-MLP model get on the nuScenes dataset
| 0.24 |
PubMed with Public Split: fixed 20 nodes per class | Graph-MLP + ASAM | The Split Matters: Flat Minima Methods for Improving the Performance of GNNs | 2023-06-15T00:00:00 | https://arxiv.org/abs/2306.09121v1 | [
"https://github.com/foisunt/fmms-in-gnns"
] | In the paper 'The Split Matters: Flat Minima Methods for Improving the Performance of GNNs', what Accuracy score did the Graph-MLP + ASAM model get on the PubMed with Public Split: fixed 20 nodes per class dataset
| 82.60 ± 0.80% |
Weather (336) | DiPE-Linear | Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting | 2024-11-26T00:00:00 | https://arxiv.org/abs/2411.17257v1 | [
"https://github.com/wintertee/dipe-linear"
] | In the paper 'Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting', what MSE score did the DiPE-Linear model get on the Weather (336) dataset
| 0.234 |
PASCAL VOC 2012 | MBS | Mitigating Background Shift in Class-Incremental Semantic Segmentation | 2024-07-16T00:00:00 | https://arxiv.org/abs/2407.11859v1 | [
"https://github.com/roadonep/eccv2024_mbs"
] | In the paper 'Mitigating Background Shift in Class-Incremental Semantic Segmentation', what mIoU score did the MBS model get on the PASCAL VOC 2012 dataset
| 82.8 |
Food-101 | Balanced Mixture | Balanced Mixture of SuperNets for Learning the CNN Pooling Architecture | 2023-06-21T00:00:00 | https://arxiv.org/abs/2306.11982v1 | [
"https://github.com/mehravehj/Balanced-Mixture-of-SuperNets"
] | In the paper 'Balanced Mixture of SuperNets for Learning the CNN Pooling Architecture', what Accuracy (% ) score did the Balanced Mixture model get on the Food-101 dataset
| 84.73 |
THUMOS14 | HR-Pro | HR-Pro: Point-supervised Temporal Action Localization via Hierarchical Reliability Propagation | 2023-08-24T00:00:00 | https://arxiv.org/abs/2308.12608v3 | [
"https://github.com/pipixin321/hr-pro"
] | In the paper 'HR-Pro: Point-supervised Temporal Action Localization via Hierarchical Reliability Propagation', what avg-mAP (0.1-0.5) score did the HR-Pro model get on the THUMOS14 dataset
| 71.6 |
SYNTHIA-to-Cityscapes | DIDA | Dual-level Interaction for Domain Adaptive Semantic Segmentation | 2023-07-16T00:00:00 | https://arxiv.org/abs/2307.07972v2 | [
"https://github.com/rainjamesy/dida"
] | In the paper 'Dual-level Interaction for Domain Adaptive Semantic Segmentation', what mIoU (13 classes) score did the DIDA model get on the SYNTHIA-to-Cityscapes dataset
| 70.1 |
Chameleon | Dir-GNN | Edge Directionality Improves Learning on Heterophilic Graphs | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10498v3 | [
"https://github.com/emalgorithm/directed-graph-neural-network"
] | In the paper 'Edge Directionality Improves Learning on Heterophilic Graphs', what Accuracy score did the Dir-GNN model get on the Chameleon dataset
| 79.71±1.26 |
RefCOCO+ test B | MaskRIS (Swin-B) | MaskRIS: Semantic Distortion-aware Data Augmentation for Referring Image Segmentation | 2024-11-28T00:00:00 | https://arxiv.org/abs/2411.19067v1 | [
"https://github.com/naver-ai/maskris"
] | In the paper 'MaskRIS: Semantic Distortion-aware Data Augmentation for Referring Image Segmentation', what Overall IoU score did the MaskRIS (Swin-B) model get on the RefCOCO+ test B dataset
| 59.39 |
CIFAR-10-LT (ρ=10) | GCLLoss | Long-tailed Visual Recognition via Gaussian Clouded Logit Adjustment | 2023-05-19T00:00:00 | https://arxiv.org/abs/2305.11733v1 | [
"https://github.com/keke921/gclloss"
] | In the paper 'Long-tailed Visual Recognition via Gaussian Clouded Logit Adjustment', what Error Rate score did the GCLLoss model get on the CIFAR-10-LT (ρ=10) dataset
| 10.77 |
MATH | DART-Math-Llama3-8B-Uniform (0-shot CoT, w/o code) | DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving | 2024-06-18T00:00:00 | https://arxiv.org/abs/2407.13690v1 | [
"https://github.com/hkust-nlp/dart-math"
] | In the paper 'DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving', what Accuracy score did the DART-Math-Llama3-8B-Uniform (0-shot CoT, w/o code) model get on the MATH dataset
| 45.3 |
WFDD | GLASS | A Unified Anomaly Synthesis Strategy with Gradient Ascent for Industrial Anomaly Detection and Localization | 2024-07-12T00:00:00 | https://arxiv.org/abs/2407.09359v1 | [
"https://github.com/cqylunlun/glass"
] | In the paper 'A Unified Anomaly Synthesis Strategy with Gradient Ascent for Industrial Anomaly Detection and Localization', what Detection AUROC score did the GLASS model get on the WFDD dataset
| 100 |
FineAction | ActionMamba(InternVideo2-6B) | Video Mamba Suite: State Space Model as a Versatile Alternative for Video Understanding | 2024-03-14T00:00:00 | https://arxiv.org/abs/2403.09626v1 | [
"https://github.com/opengvlab/video-mamba-suite"
] | In the paper 'Video Mamba Suite: State Space Model as a Versatile Alternative for Video Understanding', what mAP score did the ActionMamba(InternVideo2-6B) model get on the FineAction dataset
| 29.04 |
IEMOCAP | emoDARTS | emoDARTS: Joint Optimisation of CNN & Sequential Neural Network Architectures for Superior Speech Emotion Recognition | 2024-03-21T00:00:00 | https://arxiv.org/abs/2403.14083v1 | [
"https://github.com/jayaneetha/emoDARTS"
] | In the paper 'emoDARTS: Joint Optimisation of CNN & Sequential Neural Network Architectures for Superior Speech Emotion Recognition', what UA CV score did the emoDARTS model get on the IEMOCAP dataset
| 0.7655 |
RefCOCO | ETRIS | Bridging Vision and Language Encoders: Parameter-Efficient Tuning for Referring Image Segmentation | 2023-07-21T00:00:00 | https://arxiv.org/abs/2307.11545v1 | [
"https://github.com/kkakkkka/etris"
] | In the paper 'Bridging Vision and Language Encoders: Parameter-Efficient Tuning for Referring Image Segmentation', what IoU score did the ETRIS model get on the RefCOCO dataset
| 71.06 |
kodak | MLIC++ | MLIC++: Linear Complexity Multi-Reference Entropy Modeling for Learned Image Compression | 2023-07-28T00:00:00 | https://arxiv.org/abs/2307.15421v9 | [
"https://github.com/jiangweibeta/mlic"
] | In the paper 'MLIC++: Linear Complexity Multi-Reference Entropy Modeling for Learned Image Compression', what BD-Rate over VTM-17.0 score did the MLIC++ model get on the kodak dataset
| -13.39 |
EuroSAT | PromptKD | PromptKD: Unsupervised Prompt Distillation for Vision-Language Models | 2024-03-05T00:00:00 | https://arxiv.org/abs/2403.02781v5 | [
"https://github.com/zhengli97/promptkd"
] | In the paper 'PromptKD: Unsupervised Prompt Distillation for Vision-Language Models', what Harmonic mean score did the PromptKD model get on the EuroSAT dataset
| 89.14 |
UHRSD | BiRefNet (DUTS, UHRSD) | Bilateral Reference for High-Resolution Dichotomous Image Segmentation | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03407v6 | [
"https://github.com/zhengpeng7/birefnet"
] | In the paper 'Bilateral Reference for High-Resolution Dichotomous Image Segmentation', what S-Measure score did the BiRefNet (DUTS, UHRSD) model get on the UHRSD dataset
| 0.952 |
Set14 - 4x upscaling | DSRNet | Image super-resolution via dynamic network | 2023-10-16T00:00:00 | https://arxiv.org/abs/2310.10413v2 | [
"https://github.com/hellloxiaotian/dsrnet"
] | In the paper 'Image super-resolution via dynamic network', what PSNR score did the DSRNet model get on the Set14 - 4x upscaling dataset
| 28.38 |
delete | Understanding the Robustness of Randomized Feature Defense Against Query-Based Adversarial Attacks | 2023-10-01T00:00:00 | https://arxiv.org/abs/2310.00567v1 | [
"https://github.com/mail-research/randomized_defenses"
] | In the paper 'Understanding the Robustness of Randomized Feature Defense Against Query-Based Adversarial Attacks', what Top 1 Accuracy score did the model get on the delete dataset
| 36.2 | |
UMVM-oea-d-w-v1 | UMAEA (w/o surf) | Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment | 2023-07-30T00:00:00 | https://arxiv.org/abs/2307.16210v2 | [
"https://github.com/zjukg/umaea"
] | In the paper 'Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment', what Hits@1 score did the UMAEA (w/o surf) model get on the UMVM-oea-d-w-v1 dataset
| 0.945 |
VNHSGE-Physics | Bing Chat | VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models | 2023-05-20T00:00:00 | https://arxiv.org/abs/2305.12199v1 | [
"https://github.com/xdao85/vnhsge"
] | In the paper 'VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models', what Accuracy score did the Bing Chat model get on the VNHSGE-Physics dataset
| 66 |
Visual Genome | ControlCap | ControlCap: Controllable Region-level Captioning | 2024-01-31T00:00:00 | https://arxiv.org/abs/2401.17910v3 | [
"https://github.com/callsys/controlcap"
] | In the paper 'ControlCap: Controllable Region-level Captioning', what mAP score did the ControlCap model get on the Visual Genome dataset
| 18.2 |
DomainNet | VDPG (CLIP, ViT-L/14) | Adapting to Distribution Shift by Visual Domain Prompt Generation | 2024-05-05T00:00:00 | https://arxiv.org/abs/2405.02797v1 | [
"https://github.com/guliisgreat/vdpg"
] | In the paper 'Adapting to Distribution Shift by Visual Domain Prompt Generation', what Average Accuracy score did the VDPG (CLIP, ViT-L/14) model get on the DomainNet dataset
| 65.2 |
ICFG-PEDES | RDE | Noisy-Correspondence Learning for Text-to-Image Person Re-identification | 2023-08-19T00:00:00 | https://arxiv.org/abs/2308.09911v3 | [
"https://github.com/QinYang79/RDE"
] | In the paper 'Noisy-Correspondence Learning for Text-to-Image Person Re-identification', what mAP score did the RDE model get on the ICFG-PEDES dataset
| 40.06 |
Weather (96) | TSMixer | TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting | 2023-06-14T00:00:00 | https://arxiv.org/abs/2306.09364v4 | [
"https://github.com/ibm/tsfm"
] | In the paper 'TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting', what MSE score did the TSMixer model get on the Weather (96) dataset
| 0.146 |
GSM8K | MetaMath 13B | MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models | 2023-09-21T00:00:00 | https://arxiv.org/abs/2309.12284v4 | [
"https://github.com/meta-math/MetaMath"
] | In the paper 'MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models', what Accuracy score did the MetaMath 13B model get on the GSM8K dataset
| 71.0 |
SUN-RGBD | GeminiFusion (MiT-B3) | GeminiFusion: Efficient Pixel-wise Multimodal Fusion for Vision Transformer | 2024-06-03T00:00:00 | https://arxiv.org/abs/2406.01210v2 | [
"https://github.com/jiadingcn/geminifusion"
] | In the paper 'GeminiFusion: Efficient Pixel-wise Multimodal Fusion for Vision Transformer', what Mean IoU score did the GeminiFusion (MiT-B3) model get on the SUN-RGBD dataset
| 52.7 |
Office-Home | PromptStyler (CLIP, ViT-B/16) | PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization | 2023-07-27T00:00:00 | https://arxiv.org/abs/2307.15199v2 | [
"https://github.com/zhanghr2001/promptta"
] | In the paper 'PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization', what Average Accuracy score did the PromptStyler (CLIP, ViT-B/16) model get on the Office-Home dataset
| 83.6 |
ColonINST-v1 (Seen) | Bunny-v1.0-3B (w/ LoRA, w/o extra data) | Efficient Multimodal Learning from Data-centric Perspective | 2024-02-18T00:00:00 | https://arxiv.org/abs/2402.11530v3 | [
"https://github.com/baai-dcai/bunny"
] | In the paper 'Efficient Multimodal Learning from Data-centric Perspective', what Accuray score did the Bunny-v1.0-3B (w/ LoRA, w/o extra data) model get on the ColonINST-v1 (Seen) dataset
| 91.16 |
OCHuman | BUCTD (CID-W32) | Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity | 2023-06-13T00:00:00 | https://arxiv.org/abs/2306.07879v2 | [
"https://github.com/amathislab/BUCTD"
] | In the paper 'Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity', what Test AP score did the BUCTD (CID-W32) model get on the OCHuman dataset
| 47.2 |
The Pile | Test-Time Fine-Tuning with SIFT + GPT-2 (774M) | Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs | 2024-10-10T00:00:00 | https://arxiv.org/abs/2410.08020v2 | [
"https://github.com/jonhue/activeft"
] | In the paper 'Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs', what Bits per byte score did the Test-Time Fine-Tuning with SIFT + GPT-2 (774M) model get on the The Pile dataset
| 0.762 |
CHILI-100K | GraphUNet | CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning | 2024-02-20T00:00:00 | https://arxiv.org/abs/2402.13221v2 | [
"https://github.com/UlrikFriisJensen/CHILI"
] | In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what F1-score (Weighted) score did the GraphUNet model get on the CHILI-100K dataset
| 0.287 +/- 0.004 |
RealBlur-J (trained on GoPro) | DeblurDiNAT-L | DeblurDiNAT: A Generalizable Transformer for Perceptual Image Deblurring | 2024-03-19T00:00:00 | https://arxiv.org/abs/2403.13163v4 | [
"https://github.com/hanzhouliu/deblurdinat"
] | In the paper 'DeblurDiNAT: A Generalizable Transformer for Perceptual Image Deblurring', what PSNR (sRGB) score did the DeblurDiNAT-L model get on the RealBlur-J (trained on GoPro) dataset
| 28.98 |
Pascal VOC to Clipart1K | CDDMSL | Semi-Supervised Domain Generalization for Object Detection via Language-Guided Feature Alignment | 2023-09-24T00:00:00 | https://arxiv.org/abs/2309.13525v1 | [
"https://github.com/sinamalakouti/CDDMSL"
] | In the paper 'Semi-Supervised Domain Generalization for Object Detection via Language-Guided Feature Alignment', what mAP score did the CDDMSL model get on the Pascal VOC to Clipart1K dataset
| 40.4 |
SUN397 | TURTLE (CLIP + DINOv2) | Let Go of Your Labels with Unsupervised Transfer | 2024-06-11T00:00:00 | https://arxiv.org/abs/2406.07236v1 | [
"https://github.com/mlbio-epfl/turtle"
] | In the paper 'Let Go of Your Labels with Unsupervised Transfer', what Accuracy score did the TURTLE (CLIP + DINOv2) model get on the SUN397 dataset
| 67.9 |
LOFAR RFI Detection | Spiking Nerest Latent Neighbours | RFI Detection with Spiking Neural Networks | 2023-11-24T00:00:00 | https://arxiv.org/abs/2311.14303v2 | [
"https://github.com/pritchardn/snn-nln"
] | In the paper 'RFI Detection with Spiking Neural Networks', what AUROC score did the Spiking Nerest Latent Neighbours model get on the LOFAR RFI Detection dataset
| 0.609 |
Lip Reading in the Wild | SyncVSR (Word Boundary) | SyncVSR: Data-Efficient Visual Speech Recognition with End-to-End Crossmodal Audio Token Synchronization | 2024-06-18T00:00:00 | https://arxiv.org/abs/2406.12233v1 | [
"https://github.com/KAIST-AILab/SyncVSR"
] | In the paper 'SyncVSR: Data-Efficient Visual Speech Recognition with End-to-End Crossmodal Audio Token Synchronization', what Top-1 Accuracy score did the SyncVSR (Word Boundary) model get on the Lip Reading in the Wild dataset
| 95.0 |
CARPK | VLCounter | VLCounter: Text-aware Visual Representation for Zero-Shot Object Counting | 2023-12-27T00:00:00 | https://arxiv.org/abs/2312.16580v2 | [
"https://github.com/seunggu0305/vlcounter"
] | In the paper 'VLCounter: Text-aware Visual Representation for Zero-Shot Object Counting', what MAE score did the VLCounter model get on the CARPK dataset
| 6.46 |
Quora Question Pairs Dev | BERT + SCH attm | Memory-efficient Stochastic methods for Memory-based Transformers | 2023-11-14T00:00:00 | https://arxiv.org/abs/2311.08123v1 | [
"https://github.com/vishwajit-vishnu/memory-efficient-stochastic-methods-for-memory-based-transformers"
] | In the paper 'Memory-efficient Stochastic methods for Memory-based Transformers', what Val Accuracy score did the BERT + SCH attm model get on the Quora Question Pairs Dev dataset
| 91.422 |
SFCHD | YOLOv5 | Large, Complex, and Realistic Safety Clothing and Helmet Detection: Dataset and Method | 2023-06-03T00:00:00 | https://arxiv.org/abs/2306.02098v2 | [
"https://github.com/lijfrank-open/SFCHD-SCALE"
] | In the paper 'Large, Complex, and Realistic Safety Clothing and Helmet Detection: Dataset and Method', what mAP@0.50 score did the YOLOv5 model get on the SFCHD dataset
| 74.1 |
MassSpecGym | Random | MassSpecGym: A benchmark for the discovery and identification of molecules | 2024-10-30T00:00:00 | https://arxiv.org/abs/2410.23326v1 | [
"https://github.com/pluskal-lab/massspecgym"
] | In the paper 'MassSpecGym: A benchmark for the discovery and identification of molecules', what Hit rate @ 1 score did the Random model get on the MassSpecGym dataset
| 0.37 |
Elliptic Dataset | GraphSAGE | Network Analytics for Anti-Money Laundering -- A Systematic Literature Review and Experimental Evaluation | 2024-05-29T00:00:00 | https://arxiv.org/abs/2405.19383v2 | [
"https://github.com/B-Deprez/AML_Network"
] | In the paper 'Network Analytics for Anti-Money Laundering -- A Systematic Literature Review and Experimental Evaluation', what AUPRC score did the GraphSAGE model get on the Elliptic Dataset dataset
| 0.6312 |
Urban100 - 4x upscaling | EDSR (DUKD) | Data Upcycling Knowledge Distillation for Image Super-Resolution | 2023-09-25T00:00:00 | https://arxiv.org/abs/2309.14162v4 | [
"https://github.com/yun224/dukd"
] | In the paper 'Data Upcycling Knowledge Distillation for Image Super-Resolution', what PSNR score did the EDSR (DUKD) model get on the Urban100 - 4x upscaling dataset
| 26.45 |
PeMSD7 | PM-DMNet(P) | Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction | 2024-08-12T00:00:00 | https://arxiv.org/abs/2408.07100v1 | [
"https://github.com/wengwenchao123/PM-DMNet"
] | In the paper 'Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction', what 12 steps MAE score did the PM-DMNet(P) model get on the PeMSD7 dataset
| 19.35 |
Atari 2600 Enduro | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07T00:00:00 | https://arxiv.org/abs/2305.04180v3 | [
"https://github.com/xinjinghao/color"
] | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score score did the ASL DDQN model get on the Atari 2600 Enduro dataset
| 2103.1 |
VideoInstruct | PLLaVA-34B | PLLaVA : Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning | 2024-04-25T00:00:00 | https://arxiv.org/abs/2404.16994v2 | [
"https://github.com/magic-research/PLLaVA"
] | In the paper 'PLLaVA : Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning', what Correctness of Information score did the PLLaVA-34B model get on the VideoInstruct dataset
| 3.60 |
VideoInstruct | Video Chat | VideoChat: Chat-Centric Video Understanding | 2023-05-10T00:00:00 | https://arxiv.org/abs/2305.06355v2 | [
"https://github.com/opengvlab/ask-anything"
] | In the paper 'VideoChat: Chat-Centric Video Understanding', what gpt-score score did the Video Chat model get on the VideoInstruct dataset
| 2.32 |
GEN1 Detection | HMNet-L3 | Hierarchical Neural Memory Network for Low Latency Event Processing | 2023-05-29T00:00:00 | https://arxiv.org/abs/2305.17852v1 | [
"https://github.com/hamarh/HMNet_pth"
] | In the paper 'Hierarchical Neural Memory Network for Low Latency Event Processing', what mAP score did the HMNet-L3 model get on the GEN1 Detection dataset
| 47.1 |
Atari 2600 Double Dunk | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07T00:00:00 | https://arxiv.org/abs/2305.04180v3 | [
"https://github.com/xinjinghao/color"
] | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score score did the ASL DDQN model get on the Atari 2600 Double Dunk dataset
| 0.1 |
SMAC 6h_vs_9z | DMIX | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04T00:00:00 | https://arxiv.org/abs/2306.02430v1 | [
"https://github.com/j3soon/dfac-extended"
] | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Average Score score did the DMIX model get on the SMAC 6h_vs_9z dataset
| 13.73 |
ETTh1 (336) Multivariate | MOIRAILarge | Unified Training of Universal Time Series Forecasting Transformers | 2024-02-04T00:00:00 | https://arxiv.org/abs/2402.02592v2 | [
"https://github.com/SalesforceAIResearch/uni2ts"
] | In the paper 'Unified Training of Universal Time Series Forecasting Transformers', what MSE score did the MOIRAILarge model get on the ETTh1 (336) Multivariate dataset
| 0.514 |
H2O (2 Hands and Objects) | EffHandEgoNet | In My Perspective, In My Hands: Accurate Egocentric 2D Hand Pose and Action Recognition | 2024-04-14T00:00:00 | https://arxiv.org/abs/2404.09308v2 | [
"https://github.com/wiktormucha/effhandegonet"
] | In the paper 'In My Perspective, In My Hands: Accurate Egocentric 2D Hand Pose and Action Recognition', what Actions Top-1 score did the EffHandEgoNet model get on the H2O (2 Hands and Objects) dataset
| 91.32 |
VDD | Mask2Former(ResNet-50) | VDD: Varied Drone Dataset for Semantic Segmentation | 2023-05-23T00:00:00 | https://arxiv.org/abs/2305.13608v3 | [
"https://github.com/RussRobin/VDD"
] | In the paper 'VDD: Varied Drone Dataset for Semantic Segmentation', what mIoU score did the Mask2Former(ResNet-50) model get on the VDD dataset
| 83.21 |
LRS2 | Whisper | Whisper-Flamingo: Integrating Visual Features into Whisper for Audio-Visual Speech Recognition and Translation | 2024-06-14T00:00:00 | https://arxiv.org/abs/2406.10082v3 | [
"https://github.com/roudimit/whisper-flamingo"
] | In the paper 'Whisper-Flamingo: Integrating Visual Features into Whisper for Audio-Visual Speech Recognition and Translation', what Test WER score did the Whisper model get on the LRS2 dataset
| 1.3 |
HME100K | TAMER | TAMER: Tree-Aware Transformer for Handwritten Mathematical Expression Recognition | 2024-08-16T00:00:00 | https://arxiv.org/abs/2408.08578v2 | [
"https://github.com/qingzhenduyu/tamer"
] | In the paper 'TAMER: Tree-Aware Transformer for Handwritten Mathematical Expression Recognition', what ExpRate score did the TAMER model get on the HME100K dataset
| 68.52 |
HateXplain | XLNet | Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs | 2024-01-30T00:00:00 | https://arxiv.org/abs/2401.16638v1 | [
"https://github.com/stepantita/space-model"
] | In the paper 'Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs', what Accuracy (2 classes) score did the XLNet model get on the HateXplain dataset
| 0.8160 |
DEplain-APA-doc | long-mBART (trained on DEplain-APA-doc & DEplain-web-doc) | DEPLAIN: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification | 2023-05-30T00:00:00 | https://arxiv.org/abs/2305.18939v1 | [
"https://github.com/rstodden/deplain"
] | In the paper 'DEPLAIN: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification', what SARI (EASSE>=0.2.1) score did the long-mBART (trained on DEplain-APA-doc & DEplain-web-doc) model get on the DEplain-APA-doc dataset
| 42.862 |
Synapse multi-organ CT | SelfReg-UNet: SwinUNet | SelfReg-UNet: Self-Regularized UNet for Medical Image Segmentation | 2024-06-21T00:00:00 | https://arxiv.org/abs/2406.14896v1 | [
"https://github.com/chongqingnosubway/selfreg-unet"
] | In the paper 'SelfReg-UNet: Self-Regularized UNet for Medical Image Segmentation', what Avg DSC score did the SelfReg-UNet: SwinUNet model get on the Synapse multi-organ CT dataset
| 80.54 |
Mip-NeRF 360 | LightGaussian | LightGaussian: Unbounded 3D Gaussian Compression with 15x Reduction and 200+ FPS | 2023-11-28T00:00:00 | https://arxiv.org/abs/2311.17245v6 | [
"https://github.com/VITA-Group/LightGaussian"
] | In the paper 'LightGaussian: Unbounded 3D Gaussian Compression with 15x Reduction and 200+ FPS', what PSNR score did the LightGaussian model get on the Mip-NeRF 360 dataset
| 28.45 |
CIFAR-100-LT (ρ=50) | SURE(ResNet-32) | SURE: SUrvey REcipes for building reliable and robust deep networks | 2024-03-01T00:00:00 | https://arxiv.org/abs/2403.00543v1 | [
"https://github.com/YutingLi0606/SURE"
] | In the paper 'SURE: SUrvey REcipes for building reliable and robust deep networks', what Error Rate score did the SURE(ResNet-32) model get on the CIFAR-100-LT (ρ=50) dataset
| 36.87 |
TerraIncognita | VL2V-SD (CLIP, ViT-B/16) | Leveraging Vision-Language Models for Improving Domain Generalization in Image Classification | 2023-10-12T00:00:00 | https://arxiv.org/abs/2310.08255v2 | [
"https://github.com/val-iisc/VL2V-ADiP"
] | In the paper 'Leveraging Vision-Language Models for Improving Domain Generalization in Image Classification', what Average Accuracy score did the VL2V-SD (CLIP, ViT-B/16) model get on the TerraIncognita dataset
| 58.54 |
COCO test-dev | GLEE-Pro | General Object Foundation Model for Images and Videos at Scale | 2023-12-14T00:00:00 | https://arxiv.org/abs/2312.09158v1 | [
"https://github.com/FoundationVision/GLEE"
] | In the paper 'General Object Foundation Model for Images and Videos at Scale', what mask AP score did the GLEE-Pro model get on the COCO test-dev dataset
| 54.5 |
ViP-Bench | Shikra-7B (Coordinates) | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | 2023-06-27T00:00:00 | https://arxiv.org/abs/2306.15195v2 | [
"https://github.com/shikras/shikra"
] | In the paper 'Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic', what GPT-4 score (bbox) score did the Shikra-7B (Coordinates) model get on the ViP-Bench dataset
| 33.7 |
Sleep-EDFx | NeuroNet (Fpz-Cz only) | NeuroNet: A Novel Hybrid Self-Supervised Learning Framework for Sleep Stage Classification Using Single-Channel EEG | 2024-04-10T00:00:00 | https://arxiv.org/abs/2404.17585v2 | [
"https://github.com/dlcjfgmlnasa/NeuroNet"
] | In the paper 'NeuroNet: A Novel Hybrid Self-Supervised Learning Framework for Sleep Stage Classification Using Single-Channel EEG', what Accuracy score did the NeuroNet (Fpz-Cz only) model get on the Sleep-EDFx dataset
| 85.24% |
Actor | DJ-GNN | Diffusion-Jump GNNs: Homophiliation via Learnable Metric Filters | 2023-06-29T00:00:00 | https://arxiv.org/abs/2306.16976v1 | [
"https://github.com/AhmedBegggaUA/TFM"
] | In the paper 'Diffusion-Jump GNNs: Homophiliation via Learnable Metric Filters', what Accuracy score did the DJ-GNN model get on the Actor dataset
| 36.93 ± 0.84 |
Matterport3D | SFSS-MMSI (RGB+Normal) | Single Frame Semantic Segmentation Using Multi-Modal Spherical Images | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09369v1 | [
"https://github.com/sguttikon/SFSS-MMSI"
] | In the paper 'Single Frame Semantic Segmentation Using Multi-Modal Spherical Images', what Validation mIoU score did the SFSS-MMSI (RGB+Normal) model get on the Matterport3D dataset
| 38.91 |
VisDA-2017 | SFDA2 | SF(DA)$^2$: Source-free Domain Adaptation Through the Lens of Data Augmentation | 2024-03-16T00:00:00 | https://arxiv.org/abs/2403.10834v1 | [
"https://github.com/shinyflight/sfda2"
] | In the paper 'SF(DA)$^2$: Source-free Domain Adaptation Through the Lens of Data Augmentation', what Accuracy score did the SFDA2 model get on the VisDA-2017 dataset
| 88.1 |
SemEval-2010 Task 8 | RAG4RE | Retrieval-Augmented Generation-based Relation Extraction | 2024-04-20T00:00:00 | https://arxiv.org/abs/2404.13397v1 | [
"https://github.com/sefeoglu/rag4re"
] | In the paper 'Retrieval-Augmented Generation-based Relation Extraction', what F1 score did the RAG4RE model get on the SemEval-2010 Task 8 dataset
| 23.41 |