dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
VoxCeleb2 | RTFS-Net-4 | RTFS-Net: Recurrent Time-Frequency Modelling for Efficient Audio-Visual Speech Separation | 2023-09-29T00:00:00 | https://arxiv.org/abs/2309.17189v4 | ["https://github.com/spkgyk/RTFS-Net"] | In the paper 'RTFS-Net: Recurrent Time-Frequency Modelling for Efficient Audio-Visual Speech Separation', what SI-SNRi score did the RTFS-Net-4 model get on the VoxCeleb2 dataset | 11.5 |
Weather (720) | MoLE-DLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06786v3 | ["https://github.com/rogerni/mole"] | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the Weather (720) dataset | 0.314 |
TASD | MvP (multi-task) | MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction | 2023-05-22T00:00:00 | https://arxiv.org/abs/2305.12627v1 | ["https://github.com/ZubinGou/multi-view-prompting"] | In the paper 'MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction', what F1 (R15) score did the MvP (multi-task) model get on the TASD dataset | 64.74 |
VGGSound | EquiAV | EquiAV: Leveraging Equivariance for Audio-Visual Contrastive Learning | 2024-03-14T00:00:00 | https://arxiv.org/abs/2403.09502v2 | ["https://github.com/jongsuk1/equiav"] | In the paper 'EquiAV: Leveraging Equivariance for Audio-Visual Contrastive Learning', what Top 1 Accuracy score did the EquiAV model get on the VGGSound dataset | 67.1 |
YouTube-VOS 2018 | DMT | Deficiency-Aware Masked Transformer for Video Inpainting | 2023-07-17T00:00:00 | https://arxiv.org/abs/2307.08629v1 | ["https://github.com/yeates/dmt"] | In the paper 'Deficiency-Aware Masked Transformer for Video Inpainting', what PSNR score did the DMT model get on the YouTube-VOS 2018 dataset | 34.27 |
CHILI-3K | GAT | CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning | 2024-02-20T00:00:00 | https://arxiv.org/abs/2402.13221v2 | ["https://github.com/UlrikFriisJensen/CHILI"] | In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what F1-score (Weighted) score did the GAT model get on the CHILI-3K dataset | 0.461 +/- 0.000 |
ARC (Challenge) | LLaMA 3 8B + MoSLoRA (fine-tuned) | Mixture-of-Subspaces in Low-Rank Adaptation | 2024-06-16T00:00:00 | https://arxiv.org/abs/2406.11909v3 | ["https://github.com/wutaiqiang/moslora"] | In the paper 'Mixture-of-Subspaces in Low-Rank Adaptation', what Accuracy score did the LLaMA 3 8B + MoSLoRA (fine-tuned) model get on the ARC (Challenge) dataset | 81.5 |
LibriTTS | RFWave | RFWave: Multi-band Rectified Flow for Audio Waveform Reconstruction | 2024-03-08T00:00:00 | https://arxiv.org/abs/2403.05010v3 | ["https://github.com/bfs18/rfwave"] | In the paper 'RFWave: Multi-band Rectified Flow for Audio Waveform Reconstruction', what PESQ score did the RFWave model get on the LibriTTS dataset | 4.228 |
MAWPS | OpenMath-CodeLlama-70B (w/ code) | OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | 2024-02-15T00:00:00 | https://arxiv.org/abs/2402.10176v2 | ["https://github.com/kipok/nemo-skills"] | In the paper 'OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset', what Accuracy (%) score did the OpenMath-CodeLlama-70B (w/ code) model get on the MAWPS dataset | 95.7 |
CIFAR-10-LT (ρ=100) | FBL (ResNet-32) | Feature-Balanced Loss for Long-Tailed Visual Recognition | 2023-05-18T00:00:00 | https://arxiv.org/abs/2305.10772v1 | ["https://github.com/juyongjiang/fbl"] | In the paper 'Feature-Balanced Loss for Long-Tailed Visual Recognition', what Error Rate score did the FBL (ResNet-32) model get on the CIFAR-10-LT (ρ=100) dataset | 17.54 |
VisA | FAIRnoDTD | FAIR: Frequency-aware Image Restoration for Industrial Visual Anomaly Detection | 2023-09-13T00:00:00 | https://arxiv.org/abs/2309.07068v1 | ["https://github.com/liutongkun/fair"] | In the paper 'FAIR: Frequency-aware Image Restoration for Industrial Visual Anomaly Detection', what Detection AUROC score did the FAIRnoDTD model get on the VisA dataset | 97.1 |
J-HMDB | SgMg (Video-Swin-B) | Spectrum-guided Multi-granularity Referring Video Object Segmentation | 2023-07-25T00:00:00 | https://arxiv.org/abs/2307.13537v1 | ["https://github.com/bo-miao/sgmg"] | In the paper 'Spectrum-guided Multi-granularity Referring Video Object Segmentation', what Precision@0.5 score did the SgMg (Video-Swin-B) model get on the J-HMDB dataset | 0.972 |
Cora with Public Split: fixed 20 nodes per class | GAT+PGN | The Split Matters: Flat Minima Methods for Improving the Performance of GNNs | 2023-06-15T00:00:00 | https://arxiv.org/abs/2306.09121v1 | ["https://github.com/foisunt/fmms-in-gnns"] | In the paper 'The Split Matters: Flat Minima Methods for Improving the Performance of GNNs', what Accuracy score did the GAT+PGN model get on the Cora with Public Split: fixed 20 nodes per class dataset | 83.26 ± 0.69% |
VisA | TransFusion | TransFusion -- A Transparency-Based Diffusion Model for Anomaly Detection | 2023-11-16T00:00:00 | https://arxiv.org/abs/2311.09999v2 | ["https://github.com/maticfuc/eccv_transfusion"] | In the paper 'TransFusion -- A Transparency-Based Diffusion Model for Anomaly Detection', what Detection AUROC score did the TransFusion model get on the VisA dataset | 98.7 |
OoDIS | Mask2Anomaly | Unmasking Anomalies in Road-Scene Segmentation | 2023-07-25T00:00:00 | https://arxiv.org/abs/2307.13316v1 | ["https://github.com/shyam671/mask2anomaly-unmasking-anomalies-in-road-scene-segmentation"] | In the paper 'Unmasking Anomalies in Road-Scene Segmentation', what AP score did the Mask2Anomaly model get on the OoDIS dataset | 1.24 |
MVTec AD | MSFlow | MSFlow: Multi-Scale Flow-based Framework for Unsupervised Anomaly Detection | 2023-08-29T00:00:00 | https://arxiv.org/abs/2308.15300v1 | ["https://github.com/cool-xuan/msflow"] | In the paper 'MSFlow: Multi-Scale Flow-based Framework for Unsupervised Anomaly Detection', what Detection AUROC score did the MSFlow model get on the MVTec AD dataset | 99.7 |
KITTI (trained on 3DMatch) | GeoTransformer | GeoTransformer: Fast and Robust Point Cloud Registration with Geometric Transformer | 2023-07-25T00:00:00 | https://arxiv.org/abs/2308.03768v1 | ["https://github.com/qinzheng93/geotransformer"] | In the paper 'GeoTransformer: Fast and Robust Point Cloud Registration with Geometric Transformer', what Success Rate score did the GeoTransformer model get on the KITTI (trained on 3DMatch) dataset | 67.93 |
MTL-AQA | FineParser | FineParser: A Fine-grained Spatio-temporal Action Parser for Human-centric Action Quality Assessment | 2024-05-11T00:00:00 | https://arxiv.org/abs/2405.06887v1 | ["https://github.com/pku-icst-mipl/fineparser_cvpr2024"] | In the paper 'FineParser: A Fine-grained Spatio-temporal Action Parser for Human-centric Action Quality Assessment', what Spearman Correlation score did the FineParser model get on the MTL-AQA dataset | 95.85 |
SMAC 3s5z_vs_4s6z | DMIX | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04T00:00:00 | https://arxiv.org/abs/2306.02430v1 | ["https://github.com/j3soon/dfac-extended"] | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the DMIX model get on the SMAC 3s5z_vs_4s6z dataset | 83.52 |
ICDAR2015 | CLIP4STR-B | CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model | 2023-05-23T00:00:00 | https://arxiv.org/abs/2305.14014v3 | ["https://github.com/VamosC/CLIP4STR"] | In the paper 'CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model', what Accuracy score did the CLIP4STR-B model get on the ICDAR2015 dataset | 90.6 |
Elliptic Dataset | Deepwalk | Network Analytics for Anti-Money Laundering -- A Systematic Literature Review and Experimental Evaluation | 2024-05-29T00:00:00 | https://arxiv.org/abs/2405.19383v2 | ["https://github.com/B-Deprez/AML_Network"] | In the paper 'Network Analytics for Anti-Money Laundering -- A Systematic Literature Review and Experimental Evaluation', what AUPRC score did the Deepwalk model get on the Elliptic Dataset dataset | 0.0488 |
RST-DT | Bottom-up Llama 2 (13B) | Can we obtain significant success in RST discourse parsing by using Large Language Models? | 2024-03-08T00:00:00 | https://arxiv.org/abs/2403.05065v1 | ["https://github.com/nttcslab-nlp/rstparser_eacl24"] | In the paper 'Can we obtain significant success in RST discourse parsing by using Large Language Models?', what Standard Parseval (Span) score did the Bottom-up Llama 2 (13B) model get on the RST-DT dataset | 78.3 |
MassSpecGym | Precursor m/z | MassSpecGym: A benchmark for the discovery and identification of molecules | 2024-10-30T00:00:00 | https://arxiv.org/abs/2410.23326v1 | ["https://github.com/pluskal-lab/massspecgym"] | In the paper 'MassSpecGym: A benchmark for the discovery and identification of molecules', what Hit Rate @ 1 score did the Precursor m/z model get on the MassSpecGym dataset | 2.09 |
WebQuestions | PaLM 2-M (one-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what EM score did the PaLM 2-M (one-shot) model get on the WebQuestions dataset | 26.9 |
VLCS | GMDG (ResNet-50) | Rethinking Multi-domain Generalization with A General Learning Objective | 2024-02-29T00:00:00 | https://arxiv.org/abs/2402.18853v1 | ["https://github.com/zhaorui-tan/GMDG_cvpr2024"] | In the paper 'Rethinking Multi-domain Generalization with A General Learning Objective', what Average Accuracy score did the GMDG (ResNet-50) model get on the VLCS dataset | 79.2 |
ImageNet | Poly-SA-ViT-S | Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09372v3 | ["https://github.com/tobna/whattransformertofavor"] | In the paper 'Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers', what Top 1 Accuracy score did the Poly-SA-ViT-S model get on the ImageNet dataset | 78.34% |
FRMT (Chinese - Taiwan) | Google Translate | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what BLEURT score did the Google Translate model get on the FRMT (Chinese - Taiwan) dataset | 68.5 |
CelebA-HQ 256x256 | BOSS | Bellman Optimal Stepsize Straightening of Flow-Matching Models | 2023-12-27T00:00:00 | https://arxiv.org/abs/2312.16414v3 | ["https://github.com/nguyenngocbaocmt02/boss"] | In the paper 'Bellman Optimal Stepsize Straightening of Flow-Matching Models', what clean-FID score did the BOSS model get on the CelebA-HQ 256x256 dataset | 20.13 |
MM-Vet | SoM-LLaVA-1.5-T | List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs | 2024-04-25T00:00:00 | https://arxiv.org/abs/2404.16375v1 | ["https://github.com/zzxslp/som-llava"] | In the paper 'List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs', what GPT-4 score score did the SoM-LLaVA-1.5-T model get on the MM-Vet dataset | 37.2 |
MATH | Shepherd+Mistral-7B (SFT on MetaMATH + PRM RL+ PRM rerank, k=256) | Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations | 2023-12-14T00:00:00 | https://arxiv.org/abs/2312.08935v3 | ["https://huggingface.co/datasets/peiyi9979/Math-Shepherd"] | In the paper 'Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations', what Accuracy score did the Shepherd+Mistral-7B (SFT on MetaMATH + PRM RL+ PRM rerank, k=256) model get on the MATH dataset | 43.5 |
ETTm2 (720) Multivariate | DiPE-Linear | Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting | 2024-11-26T00:00:00 | https://arxiv.org/abs/2411.17257v1 | ["https://github.com/wintertee/dipe-linear"] | In the paper 'Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting', what MSE score did the DiPE-Linear model get on the ETTm2 (720) Multivariate dataset | 0.353 |
BIG-bench (Formal Fallacies Syllogisms Negation) | PaLM 2 (few-shot, k=3, CoT) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2 (few-shot, k=3, CoT) model get on the BIG-bench (Formal Fallacies Syllogisms Negation) dataset | 57.2 |
FRMT (Portuguese - Brazil) | PaLM 2 | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what BLEURT score did the PaLM 2 model get on the FRMT (Portuguese - Brazil) dataset | 81.1 |
Baidu Mall | AnyLoc-VLAD-DINOv2 | AnyLoc: Towards Universal Visual Place Recognition | 2023-08-01T00:00:00 | https://arxiv.org/abs/2308.00688v2 | ["https://github.com/AnyLoc/AnyLoc"] | In the paper 'AnyLoc: Towards Universal Visual Place Recognition', what Recall@1 score did the AnyLoc-VLAD-DINOv2 model get on the Baidu Mall dataset | 75.22 |
IU X-Ray | BiomedGPT | BiomedGPT: A Generalist Vision-Language Foundation Model for Diverse Biomedical Tasks | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.17100v4 | ["https://github.com/taokz/biomedgpt"] | In the paper 'BiomedGPT: A Generalist Vision-Language Foundation Model for Diverse Biomedical Tasks', what CIDEr score did the BiomedGPT model get on the IU X-Ray dataset | 36.0 |
PECC | Mixtral-8x7B-Instruct | PECC: Problem Extraction and Coding Challenges | 2024-04-29T00:00:00 | https://arxiv.org/abs/2404.18766v1 | ["https://github.com/hallerpatrick/pecc"] | In the paper 'PECC: Problem Extraction and Coding Challenges', what Pass@3 score did the Mixtral-8x7B-Instruct model get on the PECC dataset | 8.35 |
Fashion IQ | CaLa | CaLa: Complementary Association Learning for Augmenting Composed Image Retrieval | 2024-05-29T00:00:00 | https://arxiv.org/abs/2405.19149v2 | ["https://github.com/chiangsonw/cala"] | In the paper 'CaLa: Complementary Association Learning for Augmenting Composed Image Retrieval', what (Recall@10+Recall@50)/2 score did the CaLa model get on the Fashion IQ dataset | 57.96 |
ADE20K-150 | SED | SED: A Simple Encoder-Decoder for Open-Vocabulary Semantic Segmentation | 2023-11-27T00:00:00 | https://arxiv.org/abs/2311.15537v2 | ["https://github.com/xb534/sed"] | In the paper 'SED: A Simple Encoder-Decoder for Open-Vocabulary Semantic Segmentation', what mIoU score did the SED model get on the ADE20K-150 dataset | 35.2 |
AudioCaps | SLAM-AAC | SLAM-AAC: Enhancing Audio Captioning with Paraphrasing Augmentation and CLAP-Refine through LLMs | 2024-10-12T00:00:00 | https://arxiv.org/abs/2410.09503v1 | ["https://github.com/X-LANCE/SLAM-LLM"] | In the paper 'SLAM-AAC: Enhancing Audio Captioning with Paraphrasing Augmentation and CLAP-Refine through LLMs', what CIDEr score did the SLAM-AAC model get on the AudioCaps dataset | 0.841 |
GTZAN | Beat This! | Beat this! Accurate beat tracking without DBN postprocessing | 2024-07-31T00:00:00 | https://arxiv.org/abs/2407.21658v1 | ["https://github.com/CPJKU/beat_this"] | In the paper 'Beat this! Accurate beat tracking without DBN postprocessing', what F1 score did the Beat This! model get on the GTZAN dataset | 89.1 |
Urban100 - 2x upscaling | DRCT | DRCT: Saving Image Super-resolution away from Information Bottleneck | 2024-03-31T00:00:00 | https://arxiv.org/abs/2404.00722v5 | ["https://github.com/ming053l/drct"] | In the paper 'DRCT: Saving Image Super-resolution away from Information Bottleneck', what PSNR score did the DRCT model get on the Urban100 - 2x upscaling dataset | 34.54 |
TAP-Vid-Kinetics | LocoTrack-B | Local All-Pair Correspondence for Point Tracking | 2024-07-22T00:00:00 | https://arxiv.org/abs/2407.15420v1 | ["https://github.com/ku-cvlab/locotrack"] | In the paper 'Local All-Pair Correspondence for Point Tracking', what Average Jaccard score did the LocoTrack-B model get on the TAP-Vid-Kinetics dataset | 59.1 |
MM-Vet | DynMOE-LLaVA | Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models | 2024-05-23T00:00:00 | https://arxiv.org/abs/2405.14297v3 | ["https://github.com/lins-lab/dynmoe"] | In the paper 'Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models', what GPT-4 score score did the DynMOE-LLaVA model get on the MM-Vet dataset | 33.6 |
MLO-Cn2 | Climatology | Effective Benchmarks for Optical Turbulence Modeling | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03573v1 | ["https://github.com/cdjellen/otbench"] | In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the Climatology model get on the MLO-Cn2 dataset | 0.658 |
Stanford Drone | PPT | Progressive Pretext Task Learning for Human Trajectory Prediction | 2024-07-16T00:00:00 | https://arxiv.org/abs/2407.11588v1 | ["https://github.com/isee-laboratory/ppt"] | In the paper 'Progressive Pretext Task Learning for Human Trajectory Prediction', what ADE-8/12 @K = 20 score did the PPT model get on the Stanford Drone dataset | 7.03 |
CHASE_DB1 | MERIT-GCASCADE | G-CASCADE: Efficient Cascaded Graph Convolutional Decoding for 2D Medical Image Segmentation | 2023-10-24T00:00:00 | https://arxiv.org/abs/2310.16175v1 | ["https://github.com/SLDGroup/G-CASCADE"] | In the paper 'G-CASCADE: Efficient Cascaded Graph Convolutional Decoding for 2D Medical Image Segmentation', what F1 score score did the MERIT-GCASCADE model get on the CHASE_DB1 dataset | 0.8267 |
PASCAL Context-459 | FC-CLIP | Convolutions Die Hard: Open-Vocabulary Segmentation with Single Frozen Convolutional CLIP | 2023-08-04T00:00:00 | https://arxiv.org/abs/2308.02487v2 | ["https://github.com/bytedance/fc-clip"] | In the paper 'Convolutions Die Hard: Open-Vocabulary Segmentation with Single Frozen Convolutional CLIP', what mIoU score did the FC-CLIP model get on the PASCAL Context-459 dataset | 18.2 |
Charades-STA | UniMD+Sync. | UniMD: Towards Unifying Moment Retrieval and Temporal Action Detection | 2024-04-07T00:00:00 | https://arxiv.org/abs/2404.04933v2 | ["https://github.com/yingsen1/unimd"] | In the paper 'UniMD: Towards Unifying Moment Retrieval and Temporal Action Detection', what R@1 IoU=0.5 score did the UniMD+Sync. model get on the Charades-STA dataset | 63.98 |
LRS2 | Whisper-Flamingo | Whisper-Flamingo: Integrating Visual Features into Whisper for Audio-Visual Speech Recognition and Translation | 2024-06-14T00:00:00 | https://arxiv.org/abs/2406.10082v3 | ["https://github.com/roudimit/whisper-flamingo"] | In the paper 'Whisper-Flamingo: Integrating Visual Features into Whisper for Audio-Visual Speech Recognition and Translation', what Test WER score did the Whisper-Flamingo model get on the LRS2 dataset | 1.4 |
ShanghaiTech | AnomalyRuler | Follow the Rules: Reasoning for Video Anomaly Detection with Large Language Models | 2024-07-14T00:00:00 | https://arxiv.org/abs/2407.10299v2 | ["https://github.com/Yuchen413/AnomalyRuler"] | In the paper 'Follow the Rules: Reasoning for Video Anomaly Detection with Large Language Models', what AUC score did the AnomalyRuler model get on the ShanghaiTech dataset | 85.2% |
CUB-200-2011 | ResNet-50 | PCNN: Probable-Class Nearest-Neighbor Explanations Improve Fine-Grained Image Classification Accuracy for AIs and Humans | 2023-08-25T00:00:00 | https://arxiv.org/abs/2308.13651v5 | ["https://github.com/giangnguyen2412/PCNN-src-code-TMRL2024"] | In the paper 'PCNN: Probable-Class Nearest-Neighbor Explanations Improve Fine-Grained Image Classification Accuracy for AIs and Humans', what Accuracy score did the ResNet-50 model get on the CUB-200-2011 dataset | 88.59 |
CATH 4.2 | AlphaDesign | Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement | 2023-05-20T00:00:00 | https://arxiv.org/abs/2305.15151v4 | ["https://github.com/A4Bio/OpenCPD"] | In the paper 'Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement', what Sequence Recovery %(All) score did the AlphaDesign model get on the CATH 4.2 dataset | 41.31 |
OTB-2015 | ODTrack-B | ODTrack: Online Dense Temporal Token Learning for Visual Tracking | 2024-01-03T00:00:00 | https://arxiv.org/abs/2401.01686v1 | ["https://github.com/gxnu-zhonglab/odtrack"] | In the paper 'ODTrack: Online Dense Temporal Token Learning for Visual Tracking', what AUC score did the ODTrack-B model get on the OTB-2015 dataset | 0.723 |
MPI-INF-3DHP | MotionAGFormer-S (T=81) | MotionAGFormer: Enhancing 3D Human Pose Estimation with a Transformer-GCNFormer Network | 2023-10-25T00:00:00 | https://arxiv.org/abs/2310.16288v1 | ["https://github.com/taatiteam/motionagformer"] | In the paper 'MotionAGFormer: Enhancing 3D Human Pose Estimation with a Transformer-GCNFormer Network', what AUC score did the MotionAGFormer-S (T=81) model get on the MPI-INF-3DHP dataset | 84.5 |
Massachusetts Roads Dataset | RSM-SS | RS-Mamba for Large Remote Sensing Image Dense Prediction | 2024-04-03T00:00:00 | https://arxiv.org/abs/2404.02668v2 | ["https://github.com/walking-shadow/Official_Remote_Sensing_Mamba"] | In the paper 'RS-Mamba for Large Remote Sensing Image Dense Prediction', what IoU score did the RSM-SS model get on the Massachusetts Roads Dataset dataset | 67.35 |
FSD50K | DyMN-L | Dynamic Convolutional Neural Networks as Efficient Pre-trained Audio Models | 2023-10-24T00:00:00 | https://arxiv.org/abs/2310.15648v1 | ["https://github.com/fschmid56/efficientat"] | In the paper 'Dynamic Convolutional Neural Networks as Efficient Pre-trained Audio Models', what mAP score did the DyMN-L model get on the FSD50K dataset | 65.5 |
COCO 2017 val | ReviewKD++(T: faster rcnn(resnet101), S:faster rcnn(resnet18)) | Improving Knowledge Distillation via Regularizing Feature Norm and Direction | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.17007v1 | ["https://github.com/wangyz1608/knowledge-distillation-via-nd"] | In the paper 'Improving Knowledge Distillation via Regularizing Feature Norm and Direction', what mAP score did the ReviewKD++(T: faster rcnn(resnet101), S:faster rcnn(resnet18)) model get on the COCO 2017 val dataset | 37.43 |
THUMOS 2014 | CASE + Zhou et al. | Revisiting Foreground and Background Separation in Weakly-supervised Temporal Action Localization: A Clustering-based Approach | 2023-12-21T00:00:00 | https://arxiv.org/abs/2312.14138v1 | ["https://github.com/qinying-liu/case"] | In the paper 'Revisiting Foreground and Background Separation in Weakly-supervised Temporal Action Localization: A Clustering-based Approach', what mAP@0.1:0.7 score did the CASE + Zhou et al. model get on the THUMOS 2014 dataset | 49.2 |
SICK | Binary Diffusion | Tabular Data Generation using Binary Diffusion | 2024-09-20T00:00:00 | https://arxiv.org/abs/2409.13882v2 | ["https://github.com/vkinakh/binary-diffusion-tabular"] | In the paper 'Tabular Data Generation using Binary Diffusion', what LR Accuracy score did the Binary Diffusion model get on the SICK dataset | 96.14 |
CIFAR-10 | GDD | Diffusion Models Are Innate One-Step Generators | 2024-05-31T00:00:00 | https://arxiv.org/abs/2405.20750v2 | ["https://github.com/Zyriix/GDD"] | In the paper 'Diffusion Models Are Innate One-Step Generators', what Inception score score did the GDD model get on the CIFAR-10 dataset | 10.11 |
BBBP | ChemBFN | A Bayesian Flow Network Framework for Chemistry Tasks | 2024-07-28T00:00:00 | https://arxiv.org/abs/2407.20294v1 | ["https://github.com/Augus1999/bayesian-flow-network-for-chemistry"] | In the paper 'A Bayesian Flow Network Framework for Chemistry Tasks', what ROC-AUC score did the ChemBFN model get on the BBBP dataset | 95.74 |
Adience Gender | MiVOLO-D1 | MiVOLO: Multi-input Transformer for Age and Gender Estimation | 2023-07-10T00:00:00 | https://arxiv.org/abs/2307.04616v2 | ["https://github.com/wildchlamydia/mivolo"] | In the paper 'MiVOLO: Multi-input Transformer for Age and Gender Estimation', what Accuracy (5-fold) score did the MiVOLO-D1 model get on the Adience Gender dataset | 96.51 |
Office-Home | SPG (CLIP, ViT-B/16) | Soft Prompt Generation for Domain Generalization | 2024-04-30T00:00:00 | https://arxiv.org/abs/2404.19286v2 | ["https://github.com/renytek13/soft-prompt-generation-with-cgan"] | In the paper 'Soft Prompt Generation for Domain Generalization', what Average Accuracy score did the SPG (CLIP, ViT-B/16) model get on the Office-Home dataset | 83.6 |
ZJU-RGB-P | ShareCMP (B2 RGB-FP) | ShareCMP: Polarization-Aware RGB-P Semantic Segmentation | 2023-12-06T00:00:00 | https://arxiv.org/abs/2312.03430v2 | ["https://github.com/lefteyex/sharecmp"] | In the paper 'ShareCMP: Polarization-Aware RGB-P Semantic Segmentation', what mIoU score did the ShareCMP (B2 RGB-FP) model get on the ZJU-RGB-P dataset | 92.4 |
STS15 | PromptEOL+CSE+LLaMA-30B | Scaling Sentence Embeddings with Large Language Models | 2023-07-31T00:00:00 | https://arxiv.org/abs/2307.16645v1 | ["https://github.com/kongds/scaling_sentemb"] | In the paper 'Scaling Sentence Embeddings with Large Language Models', what Spearman Correlation score did the PromptEOL+CSE+LLaMA-30B model get on the STS15 dataset | 0.9004 |
ImageNet 256x256 | DiGIT-0.7B | Stabilize the Latent Space for Image Autoregressive Modeling: A Unified Perspective | 2024-10-16T00:00:00 | https://arxiv.org/abs/2410.12490v2 | ["https://github.com/DAMO-NLP-SG/DiGIT"] | In the paper 'Stabilize the Latent Space for Image Autoregressive Modeling: A Unified Perspective', what FID score did the DiGIT-0.7B model get on the ImageNet 256x256 dataset | 3.39 |
Weather2K79 (336) | MoLE-DLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06786v3 | ["https://github.com/rogerni/mole"] | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the Weather2K79 (336) dataset | 0.546 |
AFAD | ResNet-50-Regression | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10T00:00:00 | https://arxiv.org/abs/2307.04570v3 | ["https://github.com/paplhjak/facial-age-estimation-benchmark"] | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-Regression model get on the AFAD dataset | 3.17 |
DotPrompts | SantaCoder-MGD | Guiding Language Models of Code with Global Context using Monitors | 2023-06-19T00:00:00 | https://arxiv.org/abs/2306.10763v2 | ["https://github.com/microsoft/monitors4codegen"] | In the paper 'Guiding Language Models of Code with Global Context using Monitors', what Compilation Rate score did the SantaCoder-MGD model get on the DotPrompts dataset | 73.03 |
CHILI-3K | GIN | CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning | 2024-02-20T00:00:00 | https://arxiv.org/abs/2402.13221v2 | ["https://github.com/UlrikFriisJensen/CHILI"] | In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what F1-score (Weighted) score did the GIN model get on the CHILI-3K dataset | 0.587 +/- 0.002 |
GSM8K | MMOS-CODE-7B(0-shot) | An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning | 2024-02-23T00:00:00 | https://arxiv.org/abs/2403.00799v1 | ["https://github.com/cyzhh/MMOS"] | In the paper 'An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning', what Accuracy score did the MMOS-CODE-7B(0-shot) model get on the GSM8K dataset | 73.9 |
NeedForSpeed | LoRAT-L-378 | Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance | 2024-03-08T00:00:00 | https://arxiv.org/abs/2403.05231v2 | ["https://github.com/litinglin/lorat"] | In the paper 'Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance', what AUC score did the LoRAT-L-378 model get on the NeedForSpeed dataset | 0.667 |
TinyImageNet | ResNet18 | Guarding Barlow Twins Against Overfitting with Mixed Samples | 2023-12-04T00:00:00 | https://arxiv.org/abs/2312.02151v1 | ["https://github.com/wgcban/mix-bt"] | In the paper 'Guarding Barlow Twins Against Overfitting with Mixed Samples', what average top-1 classification accuracy score did the ResNet18 model get on the TinyImageNet dataset | 51.67 |
Tanks and Temples | GC-MVSNet | GC-MVSNet: Multi-View, Multi-Scale, Geometrically-Consistent Multi-View Stereo | 2023-10-30T00:00:00 | https://arxiv.org/abs/2310.19583v3 | ["https://github.com/vkvats/GC-MVSNet"] | In the paper 'GC-MVSNet: Multi-View, Multi-Scale, Geometrically-Consistent Multi-View Stereo', what Mean F1 (Intermediate) score did the GC-MVSNet model get on the Tanks and Temples dataset | 62.74 |
VisA | DDAD | Anomaly Detection with Conditioned Denoising Diffusion Models | 2023-05-25T00:00:00 | https://arxiv.org/abs/2305.15956v2 | ["https://github.com/arimousa/DDAD"] | In the paper 'Anomaly Detection with Conditioned Denoising Diffusion Models', what Detection AUROC score did the DDAD model get on the VisA dataset | 98.9 |
MM-Vet | TroL-7B | TroL: Traversal of Layers for Large Language and Vision Models | 2024-06-18T00:00:00 | https://arxiv.org/abs/2406.12246v3 | ["https://github.com/byungkwanlee/trol"] | In the paper 'TroL: Traversal of Layers for Large Language and Vision Models', what GPT-4 score score did the TroL-7B model get on the MM-Vet dataset | 54.7 |
ImageNet 128x128 | EluCD_DDPM | Elucidating The Design Space of Classifier-Guided Diffusion Generation | 2023-10-17T00:00:00 | https://arxiv.org/abs/2310.11311v1 | ["https://github.com/alexmaols/elucd"] | In the paper 'Elucidating The Design Space of Classifier-Guided Diffusion Generation', what FID score did the EluCD_DDPM model get on the ImageNet 128x128 dataset | 2.19 |
BEAT2 | EMAGE | EMAGE: Towards Unified Holistic Co-Speech Gesture Generation via Expressive Masked Audio Gesture Modeling | 2023-12-31T00:00:00 | https://arxiv.org/abs/2401.00374v5 | ["https://github.com/PantoMatrix/PantoMatrix"] | In the paper 'EMAGE: Towards Unified Holistic Co-Speech Gesture Generation via Expressive Masked Audio Gesture Modeling', what MSE score did the EMAGE model get on the BEAT2 dataset | 7.680 |
N-Caltech 101 | Event Trojan | Event Trojan: Asynchronous Event-based Backdoor Attacks | 2024-07-09T00:00:00 | https://arxiv.org/abs/2407.06838v2 | ["https://github.com/rfww/eventtrojan"] | In the paper 'Event Trojan: Asynchronous Event-based Backdoor Attacks', what Accuracy (% ) score did the Event Trojan model get on the N-Caltech 101 dataset | 85.61 |
ARC (Easy) | PaLM 2-L (1-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-L (1-shot) model get on the ARC (Easy) dataset | 89.7 |
ViP-Bench | ViP-LLaVA-13B (Visual Prompt) | Making Large Language Models Better Data Creators | 2023-10-31T00:00:00 | https://arxiv.org/abs/2310.20111v1 | ["https://github.com/microsoft/llm-data-creation"] | In the paper 'Making Large Language Models Better Data Creators', what GPT-4 score (bbox) score did the ViP-LLaVA-13B (Visual Prompt) model get on the ViP-Bench dataset | 48.3 |
Chameleon | CoED | Improving Graph Neural Networks by Learning Continuous Edge Directions | 2024-10-18T00:00:00 | https://arxiv.org/abs/2410.14109v1 | ["https://github.com/hormoz-lab/coed-gnn"] | In the paper 'Improving Graph Neural Networks by Learning Continuous Edge Directions', what Accuracy score did the CoED model get on the Chameleon dataset | 79.69±1.35 |
Cityscapes val | FAN-L-Hybrid+STL | Fully Attentional Networks with Self-emerging Token Labeling | 2024-01-08T00:00:00 | https://arxiv.org/abs/2401.03844v1 | ["https://github.com/NVlabs/STL"] | In the paper 'Fully Attentional Networks with Self-emerging Token Labeling', what mIoU score did the FAN-L-Hybrid+STL model get on the Cityscapes val dataset | 82.8 |
MiniImageNet ResNet-18 - 300 Epochs | IBM | Towards Redundancy-Free Sub-networks in Continual Learning | 2023-12-01T00:00:00 | https://arxiv.org/abs/2312.00840v2 | ["https://github.com/zackschen/IBM-Net"] | In the paper 'Towards Redundancy-Free Sub-networks in Continual Learning', what Accuracy score did the IBM model get on the MiniImageNet ResNet-18 - 300 Epochs dataset | 53.9 |
HICO-DET | PViC-R50 | Exploring Predicate Visual Context in Detecting Human-Object Interactions | 2023-08-11T00:00:00 | https://arxiv.org/abs/2308.06202v2 | ["https://github.com/fredzzhang/pvic"] | In the paper 'Exploring Predicate Visual Context in Detecting Human-Object Interactions', what mAP score did the PViC-R50 model get on the HICO-DET dataset | 34.69 |
ScanNet200 | Open3DIS | Open3DIS: Open-Vocabulary 3D Instance Segmentation with 2D Mask Guidance | 2023-12-17T00:00:00 | https://arxiv.org/abs/2312.10671v3 | ["https://github.com/VinAIResearch/Open3DIS"] | In the paper 'Open3DIS: Open-Vocabulary 3D Instance Segmentation with 2D Mask Guidance', what mAP score did the Open3DIS model get on the ScanNet200 dataset | 23.7 |
FSS-1000 (1-shot) | Annotation-free FSS (With Annotation,ResNet-50) | Self-supervised Few-shot Learning for Semantic Segmentation: An Annotation-free Approach | 2023-07-26T00:00:00 | https://arxiv.org/abs/2307.14446v1 | [
"https://github.com/mindflow-institue/annotation_free_fewshot"
] | In the paper 'Self-supervised Few-shot Learning for Semantic Segmentation: An Annotation-free Approach', what Mean IoU score did the Annotation-free FSS (With Annotation,ResNet-50) model get on the FSS-1000 (1-shot) dataset
| 85.7 |
ETTh1 (336) Multivariate | Fredformer | Fredformer: Frequency Debiased Transformer for Time Series Forecasting | 2024-06-13T00:00:00 | https://arxiv.org/abs/2406.09009v4 | [
"https://github.com/chenzrg/fredformer"
] | In the paper 'Fredformer: Frequency Debiased Transformer for Time Series Forecasting', what MSE score did the Fredformer model get on the ETTh1 (336) Multivariate dataset
| 0.395 |
Oxford-IIIT Pet Dataset | RPO | Read-only Prompt Optimization for Vision-Language Few-shot Learning | 2023-08-29T00:00:00 | https://arxiv.org/abs/2308.14960v2 | [
"https://github.com/mlvlab/rpo"
] | In the paper 'Read-only Prompt Optimization for Vision-Language Few-shot Learning', what Harmonic mean score did the RPO model get on the Oxford-IIIT Pet Dataset
| 96.05 |
Hateful Memes | TURTLE (CLIP + DINOv2) | Let Go of Your Labels with Unsupervised Transfer | 2024-06-11T00:00:00 | https://arxiv.org/abs/2406.07236v1 | [
"https://github.com/mlbio-epfl/turtle"
] | In the paper 'Let Go of Your Labels with Unsupervised Transfer', what Accuracy score did the TURTLE (CLIP + DINOv2) model get on the Hateful Memes dataset
| 54.2 |
WikiText-103 | Primal.+Trans. | Primal-Attention: Self-attention through Asymmetric Kernel SVD in Primal Representation | 2023-05-31T00:00:00 | https://arxiv.org/abs/2305.19798v2 | [
"https://github.com/yingyichen-cyy/PrimalAttention"
] | In the paper 'Primal-Attention: Self-attention through Asymmetric Kernel SVD in Primal Representation', what Test perplexity score did the Primal.+Trans. model get on the WikiText-103 dataset
| 31.0 |
AmsterTime | BoQ (ResNet-50) | BoQ: A Place is Worth a Bag of Learnable Queries | 2024-05-12T00:00:00 | https://arxiv.org/abs/2405.07364v3 | [
"https://github.com/amaralibey/bag-of-queries"
] | In the paper 'BoQ: A Place is Worth a Bag of Learnable Queries', what Recall@1 score did the BoQ (ResNet-50) model get on the AmsterTime dataset
| 52.2 |
LLRGBD-synthetic | SMMCL (SegFormer-B2) | Understanding Dark Scenes by Contrasting Multi-Modal Observations | 2023-08-23T00:00:00 | https://arxiv.org/abs/2308.12320v2 | [
"https://github.com/palmdong/smmcl"
] | In the paper 'Understanding Dark Scenes by Contrasting Multi-Modal Observations', what mIoU score did the SMMCL (SegFormer-B2) model get on the LLRGBD-synthetic dataset
| 67.77 |
MORPH Album2 (SE) | ResNet-50-SORD | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10T00:00:00 | https://arxiv.org/abs/2307.04570v3 | [
"https://github.com/paplhjak/facial-age-estimation-benchmark"
] | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-SORD model get on the MORPH Album2 (SE) dataset
| 2.81 |
VQA v2 val | LocVLM-L | Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs | 2024-04-11T00:00:00 | https://arxiv.org/abs/2404.07449v1 | [
"https://github.com/kahnchana/locvlm"
] | In the paper 'Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs', what Accuracy score did the LocVLM-L model get on the VQA v2 val dataset
| 55.9 |
ScanObjectNN | GPSFormer-elite | GPSFormer: A Global Perception and Local Structure Fitting-based Transformer for Point Cloud Understanding | 2024-07-18T00:00:00 | https://arxiv.org/abs/2407.13519v2 | [
"https://github.com/changshuowang/GPSFormer"
] | In the paper 'GPSFormer: A Global Perception and Local Structure Fitting-based Transformer for Point Cloud Understanding', what Overall Accuracy score did the GPSFormer-elite model get on the ScanObjectNN dataset
| 93.30 |
ColonINST-v1 (Seen) | Bunny-v1.0-3B (w/ LoRA, w/o extra data) | Efficient Multimodal Learning from Data-centric Perspective | 2024-02-18T00:00:00 | https://arxiv.org/abs/2402.11530v3 | [
"https://github.com/baai-dcai/bunny"
] | In the paper 'Efficient Multimodal Learning from Data-centric Perspective', what Accuracy score did the Bunny-v1.0-3B (w/ LoRA, w/o extra data) model get on the ColonINST-v1 (Seen) dataset
| 96.61 |
CCVID | CAL+DLCR | DLCR: A Generative Data Expansion Framework via Diffusion for Clothes-Changing Person Re-ID | 2024-11-11T00:00:00 | https://arxiv.org/abs/2411.07205v2 | [
"https://github.com/croitorualin/dlcr"
] | In the paper 'DLCR: A Generative Data Expansion Framework via Diffusion for Clothes-Changing Person Re-ID', what Rank-1 score did the CAL+DLCR model get on the CCVID dataset
| 88.0 |
CIFAR-10 | GDD-I | Diffusion Models Are Innate One-Step Generators | 2024-05-31T00:00:00 | https://arxiv.org/abs/2405.20750v2 | [
"https://github.com/Zyriix/GDD"
] | In the paper 'Diffusion Models Are Innate One-Step Generators', what Inception score did the GDD-I model get on the CIFAR-10 dataset
| 10.10 |
MATH | Shepherd + DeepSeek-67B (SFT on MetaMATH + PRM rerank, k=256) | Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations | 2023-12-14T00:00:00 | https://arxiv.org/abs/2312.08935v3 | [
"https://huggingface.co/datasets/peiyi9979/Math-Shepherd"
] | In the paper 'Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations', what Accuracy score did the Shepherd + DeepSeek-67B (SFT on MetaMATH + PRM rerank, k=256) model get on the MATH dataset
| 48.1 |