| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
| UMVM-oea-d-w-v1 | UMAEA (w/o surf & iter ) | Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment | 2023-07-30T00:00:00 | https://arxiv.org/abs/2307.16210v2 | ["https://github.com/zjukg/umaea"] | In the paper 'Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment', what Hits@1 score did the UMAEA (w/o surf & iter ) model get on the UMVM-oea-d-w-v1 dataset | 0.904 |
| VLCS | PromptStyler (CLIP, ViT-L/14) | PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization | 2023-07-27T00:00:00 | https://arxiv.org/abs/2307.15199v2 | ["https://github.com/zhanghr2001/promptta"] | In the paper 'PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization', what Average Accuracy score did the PromptStyler (CLIP, ViT-L/14) model get on the VLCS dataset | 82.4 |
| AISHELL-1 | Paraformer-large | FunASR: A Fundamental End-to-End Speech Recognition Toolkit | 2023-05-18T00:00:00 | https://arxiv.org/abs/2305.11013v1 | ["https://github.com/alibaba-damo-academy/FunASR"] | In the paper 'FunASR: A Fundamental End-to-End Speech Recognition Toolkit', what Word Error Rate (WER) score did the Paraformer-large model get on the AISHELL-1 dataset | 1.95 |
| PASCAL-5i (5-Shot) | QCLNet (ResNet-50) | Quaternion-valued Correlation Learning for Few-Shot Semantic Segmentation | 2023-05-12T00:00:00 | https://arxiv.org/abs/2305.07283v3 | ["https://github.com/zwzheng98/qclnet"] | In the paper 'Quaternion-valued Correlation Learning for Few-Shot Semantic Segmentation', what Mean IoU score did the QCLNet (ResNet-50) model get on the PASCAL-5i (5-Shot) dataset | 69.5 |
| ODinW-35 | MQ-GLIP-T | Multi-modal Queried Object Detection in the Wild | 2023-05-30T00:00:00 | https://arxiv.org/abs/2305.18980v2 | ["https://github.com/yifanxu74/mq-det"] | In the paper 'Multi-modal Queried Object Detection in the Wild', what Average Score score did the MQ-GLIP-T model get on the ODinW-35 dataset | 43 |
| Mip-NeRF 360 | C3DGS | Compact 3D Gaussian Representation for Radiance Field | 2023-11-22T00:00:00 | https://arxiv.org/abs/2311.13681v2 | ["https://github.com/maincold2/Compact-3DGS"] | In the paper 'Compact 3D Gaussian Representation for Radiance Field', what PSNR score did the C3DGS model get on the Mip-NeRF 360 dataset | 27.08 |
| SAFIM | CodeLlama-7b-hf | Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks | 2024-03-07T00:00:00 | https://arxiv.org/abs/2403.04814v3 | ["https://github.com/gonglinyuan/safim"] | In the paper 'Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks', what Algorithmic score did the CodeLlama-7b-hf model get on the SAFIM dataset | 34.68 |
| S3DIS | Open3DIS | Open3DIS: Open-Vocabulary 3D Instance Segmentation with 2D Mask Guidance | 2023-12-17T00:00:00 | https://arxiv.org/abs/2312.10671v3 | ["https://github.com/VinAIResearch/Open3DIS"] | In the paper 'Open3DIS: Open-Vocabulary 3D Instance Segmentation with 2D Mask Guidance', what AP50 Base B8/N4 score did the Open3DIS model get on the S3DIS dataset | 60.8 |
| MATH | MMOS-CODE-7B(0-shot) | An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning | 2024-02-23T00:00:00 | https://arxiv.org/abs/2403.00799v1 | ["https://github.com/cyzhh/MMOS"] | In the paper 'An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning', what Accuracy score did the MMOS-CODE-7B(0-shot) model get on the MATH dataset | 44.3 |
| Middlebury 2014 | MoCha-V2 | MoCha-Stereo: Motif Channel Attention Network for Stereo Matching | 2024-04-10T00:00:00 | https://arxiv.org/abs/2404.06842v3 | ["https://github.com/zyangchen/mocha-stereo"] | In the paper 'MoCha-Stereo: Motif Channel Attention Network for Stereo Matching', what D1 Error (2px) score did the MoCha-V2 model get on the Middlebury 2014 dataset | 3.51 |
| ISIC 2018 | EMCAD | EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation | 2024-05-11T00:00:00 | https://arxiv.org/abs/2405.06880v1 | ["https://github.com/sldgroup/emcad"] | In the paper 'EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation', what DSC score did the EMCAD model get on the ISIC 2018 dataset | 90.96 |
| WikiText-103 | Ensemble of All | Advancing State of the Art in Language Modeling | 2023-11-28T00:00:00 | https://arxiv.org/abs/2312.03735v1 | ["https://github.com/davidherel/sota_lm"] | In the paper 'Advancing State of the Art in Language Modeling', what Validation perplexity score did the Ensemble of All model get on the WikiText-103 dataset | 13.11 |
| DreamBooth | BLIP-Diffusion SD v1.5 | BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing | 2023-05-24T00:00:00 | https://arxiv.org/abs/2305.14720v2 | ["https://github.com/salesforce/lavis"] | In the paper 'BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing', what Concept Preservation (CP) score did the BLIP-Diffusion SD v1.5 model get on the DreamBooth dataset | 0.547 |
| MassSpecGym | Fingerprint FFN | MassSpecGym: A benchmark for the discovery and identification of molecules | 2024-10-30T00:00:00 | https://arxiv.org/abs/2410.23326v1 | ["https://github.com/pluskal-lab/massspecgym"] | In the paper 'MassSpecGym: A benchmark for the discovery and identification of molecules', what Hit rate @ 1 score did the Fingerprint FFN model get on the MassSpecGym dataset | 5.09 |
| Touchdown Dataset | FLAME | FLAME: Learning to Navigate with Multimodal LLM in Urban Environments | 2024-08-20T00:00:00 | https://arxiv.org/abs/2408.11051v1 | ["https://github.com/xyz9911/FLAME"] | In the paper 'FLAME: Learning to Navigate with Multimodal LLM in Urban Environments', what Task Completion (TC) score did the FLAME model get on the Touchdown Dataset dataset | 40.20 |
| Set14 - 3x upscaling | HMA† | HMANet: Hybrid Multi-Axis Aggregation Network for Image Super-Resolution | 2024-05-08T00:00:00 | https://arxiv.org/abs/2405.05001v1 | ["https://github.com/korouuuuu/hma"] | In the paper 'HMANet: Hybrid Multi-Axis Aggregation Network for Image Super-Resolution', what PSNR score did the HMA† model get on the Set14 - 3x upscaling dataset | 31.47 |
| RefCOCO+ val | GLEE-Pro | General Object Foundation Model for Images and Videos at Scale | 2023-12-14T00:00:00 | https://arxiv.org/abs/2312.09158v1 | ["https://github.com/FoundationVision/GLEE"] | In the paper 'General Object Foundation Model for Images and Videos at Scale', what Overall IoU score did the GLEE-Pro model get on the RefCOCO+ val dataset | 69.6 |
| Stanford2D3D Panoramic | SFSS-MMSI (RGB+Depth) | Single Frame Semantic Segmentation Using Multi-Modal Spherical Images | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09369v1 | ["https://github.com/sguttikon/SFSS-MMSI"] | In the paper 'Single Frame Semantic Segmentation Using Multi-Modal Spherical Images', what mIoU score did the SFSS-MMSI (RGB+Depth) model get on the Stanford2D3D Panoramic dataset | 55.49% |
| MSVD | HowToCaption | HowToCaption: Prompting LLMs to Transform Video Annotations at Scale | 2023-10-07T00:00:00 | https://arxiv.org/abs/2310.04900v2 | ["https://github.com/ninatu/howtocaption"] | In the paper 'HowToCaption: Prompting LLMs to Transform Video Annotations at Scale', what CIDEr score did the HowToCaption model get on the MSVD dataset | 154.2 |
| Wisconsin (60%/20%/20% random splits) | HH-GCN | Half-Hop: A graph upsampling approach for slowing down message passing | 2023-08-17T00:00:00 | https://arxiv.org/abs/2308.09198v1 | ["https://github.com/nerdslab/halfhop"] | In the paper 'Half-Hop: A graph upsampling approach for slowing down message passing', what 1:1 Accuracy score did the HH-GCN model get on the Wisconsin (60%/20%/20% random splits) dataset | 79.8 ± 4.30 |
| USNA-Cn2 (long-term) | Minute Climatology | Effective Benchmarks for Optical Turbulence Modeling | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03573v1 | ["https://github.com/cdjellen/otbench"] | In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the Minute Climatology model get on the USNA-Cn2 (long-term) dataset | 0.625 |
| PASCAL-5i (5-Shot) | GF-SAM | Bridge the Points: Graph-based Few-shot Segment Anything Semantically | 2024-10-09T00:00:00 | https://arxiv.org/abs/2410.06964v2 | ["https://github.com/ANDYZAQ/GF-SAM"] | In the paper 'Bridge the Points: Graph-based Few-shot Segment Anything Semantically', what Mean IoU score did the GF-SAM model get on the PASCAL-5i (5-Shot) dataset | 82.6 |
| Texas | UniG-Encoder | UniG-Encoder: A Universal Feature Encoder for Graph and Hypergraph Node Classification | 2023-08-03T00:00:00 | https://arxiv.org/abs/2308.01650v1 | ["https://github.com/minhzou/unig-encoder"] | In the paper 'UniG-Encoder: A Universal Feature Encoder for Graph and Hypergraph Node Classification', what Accuracy score did the UniG-Encoder model get on the Texas dataset | 85.40±5.3 |
| COCO test-dev | GLEE-Lite | General Object Foundation Model for Images and Videos at Scale | 2023-12-14T00:00:00 | https://arxiv.org/abs/2312.09158v1 | ["https://github.com/FoundationVision/GLEE"] | In the paper 'General Object Foundation Model for Images and Videos at Scale', what mask AP score did the GLEE-Lite model get on the COCO test-dev dataset | 48.3 |
| DIV2K val - 4x upscaling | LINF-LP | Boosting Flow-based Generative Super-Resolution Models via Learned Prior | 2024-03-16T00:00:00 | https://arxiv.org/abs/2403.10988v3 | ["https://github.com/liyuantsao/BFSR"] | In the paper 'Boosting Flow-based Generative Super-Resolution Models via Learned Prior', what PSNR score did the LINF-LP model get on the DIV2K val - 4x upscaling dataset | 28.00 |
| SUN397 | PromptKD | PromptKD: Unsupervised Prompt Distillation for Vision-Language Models | 2024-03-05T00:00:00 | https://arxiv.org/abs/2403.02781v5 | ["https://github.com/zhengli97/promptkd"] | In the paper 'PromptKD: Unsupervised Prompt Distillation for Vision-Language Models', what Harmonic mean score did the PromptKD model get on the SUN397 dataset | 82.60 |
| FineDiving | RICA^2 | RICA2: Rubric-Informed, Calibrated Assessment of Actions | 2024-08-04T00:00:00 | https://arxiv.org/abs/2408.02138v2 | ["https://github.com/abrarmajeedi/rica2_aqa"] | In the paper 'RICA2: Rubric-Informed, Calibrated Assessment of Actions', what Spearman Correlation score did the RICA^2 model get on the FineDiving dataset | 0.9402 |
| E2E | self-mem + new data (random) | Self-training from Self-memory in Data-to-text Generation | 2024-01-19T00:00:00 | https://arxiv.org/abs/2401.10567v1 | ["https://github.com/hoangthangta/stsm"] | In the paper 'Self-training from Self-memory in Data-to-text Generation', what METEOR score did the self-mem + new data (random) model get on the E2E dataset | 46.11 |
| GSM8K | Branch-Train-MiX 4x7B (sampling top-2 experts) | Branch-Train-MiX: Mixing Expert LLMs into a Mixture-of-Experts LLM | 2024-03-12T00:00:00 | https://arxiv.org/abs/2403.07816v1 | ["https://github.com/Leeroo-AI/mergoo"] | In the paper 'Branch-Train-MiX: Mixing Expert LLMs into a Mixture-of-Experts LLM', what Accuracy score did the Branch-Train-MiX 4x7B (sampling top-2 experts) model get on the GSM8K dataset | 37.1 |
| KITTI 2015 (train) | Ef-RAFT | Rethinking RAFT for Efficient Optical Flow | 2024-01-01T00:00:00 | https://arxiv.org/abs/2401.00833v1 | ["https://github.com/n3slami/Ef-RAFT"] | In the paper 'Rethinking RAFT for Efficient Optical Flow', what F1-all score did the Ef-RAFT model get on the KITTI 2015 (train) dataset | 16.45 |
| ETTm2 (192) Multivariate | MoLE-DLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06786v3 | ["https://github.com/rogerni/mole"] | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the ETTm2 (192) Multivariate dataset | 0.233 |
| SVAMP | MathCoder-L-70B | MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning | 2023-10-05T00:00:00 | https://arxiv.org/abs/2310.03731v1 | ["https://github.com/mathllm/mathcoder"] | In the paper 'MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning', what Execution Accuracy score did the MathCoder-L-70B model get on the SVAMP dataset | 84.9 |
| GSM8K | OpenChat-3.5 7B | OpenChat: Advancing Open-source Language Models with Mixed-Quality Data | 2023-09-20T00:00:00 | https://arxiv.org/abs/2309.11235v2 | ["https://github.com/imoneoi/openchat"] | In the paper 'OpenChat: Advancing Open-source Language Models with Mixed-Quality Data', what Accuracy score did the OpenChat-3.5 7B model get on the GSM8K dataset | 77.3 |
| Synapse multi-organ CT | SelfReg-UNet: Vanilla UNet | SelfReg-UNet: Self-Regularized UNet for Medical Image Segmentation | 2024-06-21T00:00:00 | https://arxiv.org/abs/2406.14896v1 | ["https://github.com/chongqingnosubway/selfreg-unet"] | In the paper 'SelfReg-UNet: Self-Regularized UNet for Medical Image Segmentation', what Avg DSC score did the SelfReg-UNet: Vanilla UNet model get on the Synapse multi-organ CT dataset | 80.34 |
| ETDII Dataset | SCAResNet | SCAResNet: A ResNet Variant Optimized for Tiny Object Detection in Transmission and Distribution Towers | 2024-04-05T00:00:00 | https://arxiv.org/abs/2404.04179v1 | ["https://github.com/lisavilalee/scaresnet_mmdet"] | In the paper 'SCAResNet: A ResNet Variant Optimized for Tiny Object Detection in Transmission and Distribution Towers', what mAP@0.5 score did the SCAResNet model get on the ETDII Dataset dataset | 62.6 |
| LRS3 | IIANet | IIANet: An Intra- and Inter-Modality Attention Network for Audio-Visual Speech Separation | 2023-08-16T00:00:00 | https://arxiv.org/abs/2308.08143v3 | ["https://github.com/JusperLee/IIANet"] | In the paper 'IIANet: An Intra- and Inter-Modality Attention Network for Audio-Visual Speech Separation', what SI-SNRi score did the IIANet model get on the LRS3 dataset | 18.3 |
| Office-Home | GMDG (RegNetY-16GF) | Rethinking Multi-domain Generalization with A General Learning Objective | 2024-02-29T00:00:00 | https://arxiv.org/abs/2402.18853v1 | ["https://github.com/zhaorui-tan/GMDG_cvpr2024"] | In the paper 'Rethinking Multi-domain Generalization with A General Learning Objective', what Average Accuracy score did the GMDG (RegNetY-16GF) model get on the Office-Home dataset | 80.8 |
| Berkeley | ViT-B+MST+CL | MST: Adaptive Multi-Scale Tokens Guided Interactive Segmentation | 2024-01-09T00:00:00 | https://arxiv.org/abs/2401.04403v2 | ["https://github.com/hahamyt/mst"] | In the paper 'MST: Adaptive Multi-Scale Tokens Guided Interactive Segmentation', what NoC@90 score did the ViT-B+MST+CL model get on the Berkeley dataset | 1.50 |
| DEplain-APA-doc | long-mBART (trained on DEplain-APA-doc) | DEPLAIN: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification | 2023-05-30T00:00:00 | https://arxiv.org/abs/2305.18939v1 | ["https://github.com/rstodden/deplain"] | In the paper 'DEPLAIN: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification', what SARI (EASSE>=0.2.1) score did the long-mBART (trained on DEplain-APA-doc) model get on the DEplain-APA-doc dataset | 44.56 |
| Kvasir-SEG | EMCAD | EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation | 2024-05-11T00:00:00 | https://arxiv.org/abs/2405.06880v1 | ["https://github.com/sldgroup/emcad"] | In the paper 'EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation', what mean Dice score did the EMCAD model get on the Kvasir-SEG dataset | 0.928 |
| RefCOCOg-val | MaskRIS (Swin-B) | MaskRIS: Semantic Distortion-aware Data Augmentation for Referring Image Segmentation | 2024-11-28T00:00:00 | https://arxiv.org/abs/2411.19067v1 | ["https://github.com/naver-ai/maskris"] | In the paper 'MaskRIS: Semantic Distortion-aware Data Augmentation for Referring Image Segmentation', what Overall IoU score did the MaskRIS (Swin-B) model get on the RefCOCOg-val dataset | 65.55 |
| CSIQ | ARNIQA | ARNIQA: Learning Distortion Manifold for Image Quality Assessment | 2023-10-20T00:00:00 | https://arxiv.org/abs/2310.14918v2 | ["https://github.com/miccunifi/arniqa"] | In the paper 'ARNIQA: Learning Distortion Manifold for Image Quality Assessment', what SRCC score did the ARNIQA model get on the CSIQ dataset | 0.962 |
| RotKITTI Registration Benchmark | UMERegRobust | UMERegRobust -- Universal Manifold Embedding Compatible Features for Robust Point Cloud Registration | 2024-08-22T00:00:00 | https://arxiv.org/abs/2408.12380v2 | ["https://github.com/yuvalh9/umeregrobust"] | In the paper 'UMERegRobust -- Universal Manifold Embedding Compatible Features for Robust Point Cloud Registration', what RR@(1.5,0.3) score did the UMERegRobust model get on the RotKITTI Registration Benchmark dataset | 81.1 |
| UCR Anomaly Archive | OC-SVM | Explainable Time Series Anomaly Detection using Masked Latent Generative Modeling | 2023-11-21T00:00:00 | https://arxiv.org/abs/2311.12550v5 | ["https://github.com/ml4its/timevqvae-anomalydetection"] | In the paper 'Explainable Time Series Anomaly Detection using Masked Latent Generative Modeling', what accuracy score did the OC-SVM model get on the UCR Anomaly Archive dataset | 0.088 |
| CSL-Daily | SlowFastSign | SlowFast Network for Continuous Sign Language Recognition | 2023-09-21T00:00:00 | https://arxiv.org/abs/2309.12304v1 | ["https://github.com/kaistmm/SlowFastSign"] | In the paper 'SlowFast Network for Continuous Sign Language Recognition', what Word Error Rate (WER) score did the SlowFastSign model get on the CSL-Daily dataset | 24.9 |
| ImageNet 64x64 | TarFlow | Normalizing Flows are Capable Generative Models | 2024-12-09T00:00:00 | https://arxiv.org/abs/2412.06329v2 | ["https://github.com/apple/ml-tarflow"] | In the paper 'Normalizing Flows are Capable Generative Models', what Bits per dim score did the TarFlow model get on the ImageNet 64x64 dataset | 2.99 |
| PACS | WAKD (Resnet-18) | Weight Averaging Improves Knowledge Distillation under Domain Shift | 2023-09-20T00:00:00 | https://arxiv.org/abs/2309.11446v1 | ["https://github.com/vorobeevich/distillation-in-dg"] | In the paper 'Weight Averaging Improves Knowledge Distillation under Domain Shift', what Average Accuracy score did the WAKD (Resnet-18) model get on the PACS dataset | 86.6 |
| ETTm2 (336) Multivariate | TSMixer | TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting | 2023-06-14T00:00:00 | https://arxiv.org/abs/2306.09364v4 | ["https://github.com/ibm/tsfm"] | In the paper 'TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting', what MSE score did the TSMixer model get on the ETTm2 (336) Multivariate dataset | 0.273 |
| MSR-VTT | HowToCaption | HowToCaption: Prompting LLMs to Transform Video Annotations at Scale | 2023-10-07T00:00:00 | https://arxiv.org/abs/2310.04900v2 | ["https://github.com/ninatu/howtocaption"] | In the paper 'HowToCaption: Prompting LLMs to Transform Video Annotations at Scale', what CIDEr score did the HowToCaption model get on the MSR-VTT dataset | 65.3 |
| IMDb-B | G-Tuning | Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns | 2023-12-21T00:00:00 | https://arxiv.org/abs/2312.13583v1 | ["https://github.com/zjunet/G-Tuning"] | In the paper 'Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns', what Accuracy (10-fold) score did the G-Tuning model get on the IMDb-B dataset | 74.30 |
| CNN / Daily Mail | Fourier Transformer | Fourier Transformer: Fast Long Range Modeling by Removing Sequence Redundancy with FFT Operator | 2023-05-24T00:00:00 | https://arxiv.org/abs/2305.15099v1 | ["https://github.com/lumia-group/fouriertransformer"] | In the paper 'Fourier Transformer: Fast Long Range Modeling by Removing Sequence Redundancy with FFT Operator', what ROUGE-1 score did the Fourier Transformer model get on the CNN / Daily Mail dataset | 44.76 |
| STS Benchmark | PromptEOL+CSE+OPT-2.7B | Scaling Sentence Embeddings with Large Language Models | 2023-07-31T00:00:00 | https://arxiv.org/abs/2307.16645v1 | ["https://github.com/kongds/scaling_sentemb"] | In the paper 'Scaling Sentence Embeddings with Large Language Models', what Spearman Correlation score did the PromptEOL+CSE+OPT-2.7B model get on the STS Benchmark dataset | 0.8833 |
| ASDiv-A | ATHENA (roberta-large) | ATHENA: Mathematical Reasoning with Thought Expansion | 2023-11-02T00:00:00 | https://arxiv.org/abs/2311.01036v1 | ["https://github.com/the-jb/athena-math"] | In the paper 'ATHENA: Mathematical Reasoning with Thought Expansion', what Execution Accuracy score did the ATHENA (roberta-large) model get on the ASDiv-A dataset | 91 |
| GSM8K | MathCoder-L-70B | MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning | 2023-10-05T00:00:00 | https://arxiv.org/abs/2310.03731v1 | ["https://github.com/mathllm/mathcoder"] | In the paper 'MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning', what Accuracy score did the MathCoder-L-70B model get on the GSM8K dataset | 83.9 |
| ISRUC-Sleep | NeuroNet (C4-A1 only) | NeuroNet: A Novel Hybrid Self-Supervised Learning Framework for Sleep Stage Classification Using Single-Channel EEG | 2024-04-10T00:00:00 | https://arxiv.org/abs/2404.17585v2 | ["https://github.com/dlcjfgmlnasa/NeuroNet"] | In the paper 'NeuroNet: A Novel Hybrid Self-Supervised Learning Framework for Sleep Stage Classification Using Single-Channel EEG', what Accuracy score did the NeuroNet (C4-A1 only) model get on the ISRUC-Sleep dataset | 77.05% |
| DomainNet | VDPG (CLIP, ViT-B/16) | Adapting to Distribution Shift by Visual Domain Prompt Generation | 2024-05-05T00:00:00 | https://arxiv.org/abs/2405.02797v1 | ["https://github.com/guliisgreat/vdpg"] | In the paper 'Adapting to Distribution Shift by Visual Domain Prompt Generation', what Average Accuracy score did the VDPG (CLIP, ViT-B/16) model get on the DomainNet dataset | 59.8 |
| GRAZPEDWRI-DX | YOLOv8+SA | YOLOv8-AM: YOLOv8 Based on Effective Attention Mechanisms for Pediatric Wrist Fracture Detection | 2024-02-14T00:00:00 | https://arxiv.org/abs/2402.09329v5 | ["https://github.com/ruiyangju/fracture_detection_improved_yolov8"] | In the paper 'YOLOv8-AM: YOLOv8 Based on Effective Attention Mechanisms for Pediatric Wrist Fracture Detection', what mAP score did the YOLOv8+SA model get on the GRAZPEDWRI-DX dataset | 64.3 |
| HumanML3D | ST-MLP | Guided Attention for Interpretable Motion Captioning | 2023-10-11T00:00:00 | https://arxiv.org/abs/2310.07324v2 | ["https://github.com/rd20karim/m2t-interpretable"] | In the paper 'Guided Attention for Interpretable Motion Captioning', what BLEU-4 score did the ST-MLP model get on the HumanML3D dataset | 25.0 |
| Fashion-MNIST | GECCO | A Single Graph Convolution Is All You Need: Efficient Grayscale Image Classification | 2024-02-01T00:00:00 | https://arxiv.org/abs/2402.00564v6 | ["https://github.com/geccoproject/gecco"] | In the paper 'A Single Graph Convolution Is All You Need: Efficient Grayscale Image Classification', what Percentage error score did the GECCO model get on the Fashion-MNIST dataset | 11.91 |
| ETTm1 (336) Multivariate | MoLE-DLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06786v3 | ["https://github.com/rogerni/mole"] | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the ETTm1 (336) Multivariate dataset | 0.38 |
| CropHarvest multicrop - Global | Input Fusion | Impact Assessment of Missing Data in Model Predictions for Earth Observation Applications | 2024-03-21T00:00:00 | https://arxiv.org/abs/2403.14297v2 | ["https://github.com/fmenat/missingviews-study-eo"] | In the paper 'Impact Assessment of Missing Data in Model Predictions for Earth Observation Applications', what Average Accuracy score did the Input Fusion model get on the CropHarvest multicrop - Global dataset | 0.738 |
| MVTec AD | MuSc (zero-shot) | MuSc: Zero-Shot Industrial Anomaly Classification and Segmentation with Mutual Scoring of the Unlabeled Images | 2024-01-30T00:00:00 | https://arxiv.org/abs/2401.16753v1 | ["https://github.com/xrli-U/MuSc"] | In the paper 'MuSc: Zero-Shot Industrial Anomaly Classification and Segmentation with Mutual Scoring of the Unlabeled Images', what Detection AUROC score did the MuSc (zero-shot) model get on the MVTec AD dataset | 97.8 |
| CiteSeer with Public Split: fixed 20 nodes per class | Graph-MLP + PGN | The Split Matters: Flat Minima Methods for Improving the Performance of GNNs | 2023-06-15T00:00:00 | https://arxiv.org/abs/2306.09121v1 | ["https://github.com/foisunt/fmms-in-gnns"] | In the paper 'The Split Matters: Flat Minima Methods for Improving the Performance of GNNs', what Accuracy score did the Graph-MLP + PGN model get on the CiteSeer with Public Split: fixed 20 nodes per class dataset | 74.73 ± 0.6% |
| ODinW Full-Shot 13 Tasks | MQ-GLIP-L | Multi-modal Queried Object Detection in the Wild | 2023-05-30T00:00:00 | https://arxiv.org/abs/2305.18980v2 | ["https://github.com/yifanxu74/mq-det"] | In the paper 'Multi-modal Queried Object Detection in the Wild', what AP score did the MQ-GLIP-L model get on the ODinW Full-Shot 13 Tasks dataset | 71.3 |
| PASCAL VOC 2012 val | CAUSE (DINOv2, ViT-B/14) | Causal Unsupervised Semantic Segmentation | 2023-10-11T00:00:00 | https://arxiv.org/abs/2310.07379v1 | ["https://github.com/ByungKwanLee/Causal-Unsupervised-Segmentation"] | In the paper 'Causal Unsupervised Semantic Segmentation', what Clustering [mIoU] score did the CAUSE (DINOv2, ViT-B/14) model get on the PASCAL VOC 2012 val dataset | 53.2 |
| SemanticKITTI | Symphonies (RGB input only) | Symphonize 3D Semantic Scene Completion with Contextual Instance Queries | 2023-06-27T00:00:00 | https://arxiv.org/abs/2306.15670v2 | ["https://github.com/hustvl/symphonies"] | In the paper 'Symphonize 3D Semantic Scene Completion with Contextual Instance Queries', what mIoU score did the Symphonies (RGB input only) model get on the SemanticKITTI dataset | 15.04 |
| Tiered ImageNet 5-way (5-shot) | CAML [Laion-2b] | Context-Aware Meta-Learning | 2023-10-17T00:00:00 | https://arxiv.org/abs/2310.10971v2 | ["https://github.com/cfifty/CAML"] | In the paper 'Context-Aware Meta-Learning', what Accuracy score did the CAML [Laion-2b] model get on the Tiered ImageNet 5-way (5-shot) dataset | 98.8 |
| REBUS | InstructBLIP | REBUS: A Robust Evaluation Benchmark of Understanding Symbols | 2024-01-11T00:00:00 | https://arxiv.org/abs/2401.05604v2 | ["https://github.com/cvndsh/rebus"] | In the paper 'REBUS: A Robust Evaluation Benchmark of Understanding Symbols', what Accuracy score did the InstructBLIP model get on the REBUS dataset | 0.6 |
| PeMSD7(M) | PM-DMNet(P) | Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction | 2024-08-12T00:00:00 | https://arxiv.org/abs/2408.07100v1 | ["https://github.com/wengwenchao123/PM-DMNet"] | In the paper 'Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction', what 12 steps MAE score did the PM-DMNet(P) model get on the PeMSD7(M) dataset | 2.61 |
| VoxCeleb | ReDimNet-B6-SF2-LM-ASNorm (15.0M) | Reshape Dimensions Network for Speaker Recognition | 2024-07-25T00:00:00 | https://arxiv.org/abs/2407.18223v2 | ["https://github.com/IDRnD/ReDimNet"] | In the paper 'Reshape Dimensions Network for Speaker Recognition', what EER score did the ReDimNet-B6-SF2-LM-ASNorm (15.0M) model get on the VoxCeleb dataset | 0.37 |
| CausalGym | Random | CausalGym: Benchmarking causal interpretability methods on linguistic tasks | 2024-02-19T00:00:00 | https://arxiv.org/abs/2402.12560v1 | ["https://github.com/aryamanarora/causalgym"] | In the paper 'CausalGym: Benchmarking causal interpretability methods on linguistic tasks', what Log odds-ratio (pythia-6.9b) score did the Random model get on the CausalGym dataset | 0.01 |
| nuScenes | FocalFormer3D-L | FocalFormer3D : Focusing on Hard Instance for 3D Object Detection | 2023-08-08T00:00:00 | https://arxiv.org/abs/2308.04556v1 | ["https://github.com/NVlabs/FocalFormer3D"] | In the paper 'FocalFormer3D : Focusing on Hard Instance for 3D Object Detection', what NDS score did the FocalFormer3D-L model get on the nuScenes dataset | 0.73 |
| ToxCast | GIT-Mol(G+S) | GIT-Mol: A Multi-modal Large Language Model for Molecular Science with Graph, Image, and Text | 2023-08-14T00:00:00 | https://arxiv.org/abs/2308.06911v3 | ["https://github.com/ai-hpc-research-team/git-mol"] | In the paper 'GIT-Mol: A Multi-modal Large Language Model for Molecular Science with Graph, Image, and Text', what AUC score did the GIT-Mol(G+S) model get on the ToxCast dataset | 0.668 |
| MVTEC AD textures | CSE | CSE: Surface Anomaly Detection with Contrastively Selected Embedding | 2024-03-04T00:00:00 | https://arxiv.org/abs/2403.01859v1 | ["https://github.com/SimonThomine/CSE"] | In the paper 'CSE: Surface Anomaly Detection with Contrastively Selected Embedding', what Detection AUROC score did the CSE model get on the MVTEC AD textures dataset | 99.8 |
| J-HMDB | SOC (Video-Swin-B) | SOC: Semantic-Assisted Object Cluster for Referring Video Object Segmentation | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.17011v1 | ["https://github.com/RobertLuo1/NeurIPS2023_SOC"] | In the paper 'SOC: Semantic-Assisted Object Cluster for Referring Video Object Segmentation', what Precision@0.5 score did the SOC (Video-Swin-B) model get on the J-HMDB dataset | 0.969 |
| VietMed | GMM-HMM Mono | VietMed: A Dataset and Benchmark for Automatic Speech Recognition of Vietnamese in the Medical Domain | 2024-04-08T00:00:00 | https://arxiv.org/abs/2404.05659v2 | ["https://github.com/leduckhai/multimed"] | In the paper 'VietMed: A Dataset and Benchmark for Automatic Speech Recognition of Vietnamese in the Medical Domain', what Dev WER score did the GMM-HMM Mono model get on the VietMed dataset | 71.7 |
| Zero-Shot Video Question Answer on EgoSchema (fullset) | HCQA | HCQA @ Ego4D EgoSchema Challenge 2024 | 2024-06-22T00:00:00 | https://arxiv.org/abs/2406.15771v2 | ["https://github.com/hyu-zhang/hcqa"] | In the paper 'HCQA @ Ego4D EgoSchema Challenge 2024', what Accuracy score did the HCQA model get on the Zero-Shot Video Question Answer on EgoSchema (fullset) dataset | 75 |
| Synthetic Dynamic Networks | Size-cohort Dynamic Features + Static Features | Learning the mechanisms of network growth | 2024-03-31T00:00:00 | https://arxiv.org/abs/2404.00793v3 | ["https://github.com/LourensT/DynamicNetworkSimulation"] | In the paper 'Learning the mechanisms of network growth', what Accuracy score did the Size-cohort Dynamic Features + Static Features model get on the Synthetic Dynamic Networks dataset | 98.06 |
| Amazon-Google | gpt4-0613_fewshot-10 | Entity Matching using Large Language Models | 2023-10-17T00:00:00 | https://arxiv.org/abs/2310.11244v4 | ["https://github.com/wbsg-uni-mannheim/matchgpt"] | In the paper 'Entity Matching using Large Language Models', what F1 (%) score did the gpt4-0613_fewshot-10 model get on the Amazon-Google dataset | 85.21 |
| Gardens Point | CLIP | AnyLoc: Towards Universal Visual Place Recognition | 2023-08-01T00:00:00 | https://arxiv.org/abs/2308.00688v2 | ["https://github.com/AnyLoc/AnyLoc"] | In the paper 'AnyLoc: Towards Universal Visual Place Recognition', what Recall@1 score did the CLIP model get on the Gardens Point dataset | 42.5 |
| Cityscapes to Foggy Cityscapes | AT (ResNet50-FPN) | Align and Distill: Unifying and Improving Domain Adaptive Object Detection | 2024-03-18T00:00:00 | https://arxiv.org/abs/2403.12029v2 | ["https://github.com/justinkay/aldi"] | In the paper 'Align and Distill: Unifying and Improving Domain Adaptive Object Detection', what mAP@0.5 score did the AT (ResNet50-FPN) model get on the Cityscapes to Foggy Cityscapes dataset | 63.3 |
| Actor | HiGNN | Learn from Heterophily: Heterophilous Information-enhanced Graph Neural Network | 2024-03-26T00:00:00 | https://arxiv.org/abs/2403.17351v2 | ["https://github.com/zylMozart/HiGNN"] | In the paper 'Learn from Heterophily: Heterophilous Information-enhanced Graph Neural Network', what Accuracy score did the HiGNN model get on the Actor dataset | 37.21 ± 1.35 |
| Urban100 - 4x upscaling | SwinIR (DUKD) | Data Upcycling Knowledge Distillation for Image Super-Resolution | 2023-09-25T00:00:00 | https://arxiv.org/abs/2309.14162v4 | ["https://github.com/yun224/dukd"] | In the paper 'Data Upcycling Knowledge Distillation for Image Super-Resolution', what PSNR score did the SwinIR (DUKD) model get on the Urban100 - 4x upscaling dataset | 26.43 |
| Hateful Memes | RGCL | Improving Hateful Meme Detection through Retrieval-Guided Contrastive Learning | 2023-11-14T00:00:00 | https://arxiv.org/abs/2311.08110v3 | ["https://github.com/JingbiaoMei/RGCL"] | In the paper 'Improving Hateful Meme Detection through Retrieval-Guided Contrastive Learning', what ROC-AUC score did the RGCL model get on the Hateful Memes dataset | 0.870 |
| ImageNet | Swin-T+SSA | The Information Pathways Hypothesis: Transformers are Dynamic Self-Ensembles | 2023-06-02T00:00:00 | https://arxiv.org/abs/2306.01705v1 | ["https://github.com/shamim-hussain/ssa"] | In the paper 'The Information Pathways Hypothesis: Transformers are Dynamic Self-Ensembles', what Top 1 Accuracy score did the Swin-T+SSA model get on the ImageNet dataset | 81.89% |
INRIA Aerial Image Labeling | UANet(ResNet50) | Building Extraction from Remote Sensing Images via an Uncertainty-Aware Network | 2023-07-23T00:00:00 | https://arxiv.org/abs/2307.12309v1 | [
"https://github.com/henryjiepanli/uncertainty-aware-network"
] | In the paper 'Building Extraction from Remote Sensing Images via an Uncertainty-Aware Network', what IoU score did the UANet(ResNet50) model get on the INRIA Aerial Image Labeling dataset
| 82.17 |
PascalVOC-20 | HyperSeg | HyperSeg: Towards Universal Visual Segmentation with Large Language Model | 2024-11-26T00:00:00 | https://arxiv.org/abs/2411.17606v2 | [
"https://github.com/congvvc/HyperSeg"
] | In the paper 'HyperSeg: Towards Universal Visual Segmentation with Large Language Model', what mIoU score did the HyperSeg model get on the PascalVOC-20 dataset
| 92.1 |
DEplain-web-sent | mBART (trained on DEplain-APA-sent & DEplain-web-sent) | DEPLAIN: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification | 2023-05-30T00:00:00 | https://arxiv.org/abs/2305.18939v1 | [
"https://github.com/rstodden/deplain"
] | In the paper 'DEPLAIN: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification', what SARI (EASSE>=0.2.1) score did the mBART (trained on DEplain-APA-sent & DEplain-web-sent) model get on the DEplain-web-sent dataset
| 34.828 |
MUTAG | BoP | From Primes to Paths: Enabling Fast Multi-Relational Graph Analysis | 2024-11-17T00:00:00 | https://arxiv.org/abs/2411.11149v1 | [
"https://github.com/kbogas/PAM_BoP"
] | In the paper 'From Primes to Paths: Enabling Fast Multi-Relational Graph Analysis', what Accuracy score did the BoP model get on the MUTAG dataset
| 91.17 |
VisA | AnomalyDINO-S (4-shot) | AnomalyDINO: Boosting Patch-based Few-shot Anomaly Detection with DINOv2 | 2024-05-23T00:00:00 | https://arxiv.org/abs/2405.14529v2 | [
"https://github.com/dammsi/AnomalyDINO"
] | In the paper 'AnomalyDINO: Boosting Patch-based Few-shot Anomaly Detection with DINOv2', what Detection AUROC score did the AnomalyDINO-S (4-shot) model get on the VisA dataset
| 92.6 |
AIDA/testc | SpEL-base (2023) | SpEL: Structured Prediction for Entity Linking | 2023-10-23T00:00:00 | https://arxiv.org/abs/2310.14684v1 | [
"https://github.com/shavarani/spel"
] | In the paper 'SpEL: Structured Prediction for Entity Linking', what Micro-F1 strong score did the SpEL-base (2023) model get on the AIDA/testc dataset
| 73.7 |
TrackingNet | MITS | Integrating Boxes and Masks: A Multi-Object Framework for Unified Visual Tracking and Segmentation | 2023-08-25T00:00:00 | https://arxiv.org/abs/2308.13266v3 | [
"https://github.com/yoxu515/mits"
] | In the paper 'Integrating Boxes and Masks: A Multi-Object Framework for Unified Visual Tracking and Segmentation', what Precision score did the MITS model get on the TrackingNet dataset
| 84.6 |
Sun80 - 4x upscaling | Extracter-rec | EXTRACTER: Efficient Texture Matching with Attention and Gradient Enhancing for Large Scale Image Super Resolution | 2023-10-02T00:00:00 | https://arxiv.org/abs/2310.01379v1 | [
"https://github.com/esteban-rs/extracter"
] | In the paper 'EXTRACTER: Efficient Texture Matching with Attention and Gradient Enhancing for Large Scale Image Super Resolution', what PSNR score did the Extracter-rec model get on the Sun80 - 4x upscaling dataset
| 30.02 |
IMDB-BINARY | R-GIN + PANDA | PANDA: Expanded Width-Aware Message Passing Beyond Rewiring | 2024-06-06T00:00:00 | https://arxiv.org/abs/2406.03671v2 | [
"https://github.com/jeongwhanchoi/panda"
] | In the paper 'PANDA: Expanded Width-Aware Message Passing Beyond Rewiring', what Accuracy score did the R-GIN + PANDA model get on the IMDB-BINARY dataset
| 72.09 |
Cityscapes test | CAUSE (DINOv2, ViT-B/14) | Causal Unsupervised Semantic Segmentation | 2023-10-11T00:00:00 | https://arxiv.org/abs/2310.07379v1 | [
"https://github.com/ByungKwanLee/Causal-Unsupervised-Segmentation"
] | In the paper 'Causal Unsupervised Semantic Segmentation', what mIoU score did the CAUSE (DINOv2, ViT-B/14) model get on the Cityscapes test dataset
| 29.9 |
KITTI 2015 | MoCha-Stereo | MoCha-Stereo: Motif Channel Attention Network for Stereo Matching | 2024-04-10T00:00:00 | https://arxiv.org/abs/2404.06842v3 | [
"https://github.com/zyangchen/mocha-stereo"
] | In the paper 'MoCha-Stereo: Motif Channel Attention Network for Stereo Matching', what D1-all All score did the MoCha-Stereo model get on the KITTI 2015 dataset
| 1.53 |
WADI | CARLA | CARLA: Self-supervised Contrastive Representation Learning for Time Series Anomaly Detection | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09296v4 | [
"https://github.com/zamanzadeh/CARLA"
] | In the paper 'CARLA: Self-supervised Contrastive Representation Learning for Time Series Anomaly Detection', what precision score did the CARLA model get on the WADI dataset
| 0.185 |
Atari 2600 Defender | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07T00:00:00 | https://arxiv.org/abs/2305.04180v3 | [
"https://github.com/xinjinghao/color"
] | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score score did the ASL DDQN model get on the Atari 2600 Defender dataset
| 37026.5 |
Cityscapes | DiffSeg (512) | Diffuse, Attend, and Segment: Unsupervised Zero-Shot Segmentation using Stable Diffusion | 2023-08-23T00:00:00 | https://arxiv.org/abs/2308.12469v3 | [
"https://github.com/google/diffseg"
] | In the paper 'Diffuse, Attend, and Segment: Unsupervised Zero-Shot Segmentation using Stable Diffusion', what mIoU score did the DiffSeg (512) model get on the Cityscapes dataset
| 21.2 |
MVBench | PPLLaVA (7b) | PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance | 2024-11-04T00:00:00 | https://arxiv.org/abs/2411.02327v2 | [
"https://github.com/farewellthree/ppllava"
] | In the paper 'PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance', what Avg. score did the PPLLaVA (7b) model get on the MVBench dataset
| 59.2 |