| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
| REDDIT-BINARY | GCN + PANDA | PANDA: Expanded Width-Aware Message Passing Beyond Rewiring | 2024-06-06T00:00:00 | https://arxiv.org/abs/2406.03671v2 | ["https://github.com/jeongwhanchoi/panda"] | In the paper 'PANDA: Expanded Width-Aware Message Passing Beyond Rewiring', what Accuracy score did the GCN + PANDA model get on the REDDIT-BINARY dataset | 80.69 |
| RLBench | 3D-LOTUS | Towards Generalizable Vision-Language Robotic Manipulation: A Benchmark and LLM-guided 3D Policy | 2024-10-02T00:00:00 | https://arxiv.org/abs/2410.01345v1 | ["https://github.com/vlc-robot/robot-3dlotus"] | In the paper 'Towards Generalizable Vision-Language Robotic Manipulation: A Benchmark and LLM-guided 3D Policy', what Succ. Rate (18 tasks, 100 demo/task) score did the 3D-LOTUS model get on the RLBench dataset | 81.5 |
| SIM10K to Cityscapes | ALDI++ (ResNet50-FPN) | Align and Distill: Unifying and Improving Domain Adaptive Object Detection | 2024-03-18T00:00:00 | https://arxiv.org/abs/2403.12029v2 | ["https://github.com/justinkay/aldi"] | In the paper 'Align and Distill: Unifying and Improving Domain Adaptive Object Detection', what mAP@0.5 score did the ALDI++ (ResNet50-FPN) model get on the SIM10K to Cityscapes dataset | 78.2 |
| MATH | OpenChat-3.5-1210 7B | OpenChat: Advancing Open-source Language Models with Mixed-Quality Data | 2023-09-20T00:00:00 | https://arxiv.org/abs/2309.11235v2 | ["https://github.com/imoneoi/openchat"] | In the paper 'OpenChat: Advancing Open-source Language Models with Mixed-Quality Data', what Accuracy score did the OpenChat-3.5-1210 7B model get on the MATH dataset | 28.9 |
| ETTm1 (336) Multivariate | RLinear | Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping | 2023-05-18T00:00:00 | https://arxiv.org/abs/2305.10721v1 | ["https://github.com/plumprc/rtsf"] | In the paper 'Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping', what MSE score did the RLinear model get on the ETTm1 (336) Multivariate dataset | 0.37 |
| ImageNet | AIMv2-H | Multimodal Autoregressive Pre-training of Large Vision Encoders | 2024-11-21T00:00:00 | https://arxiv.org/abs/2411.14402v1 | ["https://github.com/apple/ml-aim"] | In the paper 'Multimodal Autoregressive Pre-training of Large Vision Encoders', what Top 1 Accuracy score did the AIMv2-H model get on the ImageNet dataset | 87.5% |
| D&D | Graph-JEPA | Graph-level Representation Learning with Joint-Embedding Predictive Architectures | 2023-09-27T00:00:00 | https://arxiv.org/abs/2309.16014v2 | ["https://github.com/geriskenderi/graph-jepa"] | In the paper 'Graph-level Representation Learning with Joint-Embedding Predictive Architectures', what Accuracy score did the Graph-JEPA model get on the D&D dataset | 78.64% |
| DomainNet | POEM | POEM: Polarization of Embeddings for Domain-Invariant Representations | 2023-05-22T00:00:00 | https://arxiv.org/abs/2305.13046v1 | ["https://github.com/josangyoung/official-poem"] | In the paper 'POEM: Polarization of Embeddings for Domain-Invariant Representations', what Average Accuracy score did the POEM model get on the DomainNet dataset | 44.0 |
| NYU Depth v2 | ComPtr (Swin-B) | ComPtr: Towards Diverse Bi-source Dense Prediction Tasks via A Simple yet General Complementary Transformer | 2023-07-23T00:00:00 | https://arxiv.org/abs/2307.12349v1 | ["https://github.com/lartpang/comptr"] | In the paper 'ComPtr: Towards Diverse Bi-source Dense Prediction Tasks via A Simple yet General Complementary Transformer', what Mean IoU score did the ComPtr (Swin-B) model get on the NYU Depth v2 dataset | 55.5% |
| ETTh1 (336) Multivariate | DiPE-Linear | Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting | 2024-11-26T00:00:00 | https://arxiv.org/abs/2411.17257v1 | ["https://github.com/wintertee/dipe-linear"] | In the paper 'Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting', what MSE score did the DiPE-Linear model get on the ETTh1 (336) Multivariate dataset | 0.424 |
| FC100 5-way (5-shot) | MSENet | Enhancing Few-Shot Image Classification through Learnable Multi-Scale Embedding and Attention Mechanisms | 2024-09-12T00:00:00 | https://arxiv.org/abs/2409.07989v1 | ["https://github.com/FatemehAskari/MSENet"] | In the paper 'Enhancing Few-Shot Image Classification through Learnable Multi-Scale Embedding and Attention Mechanisms', what Accuracy score did the MSENet model get on the FC100 5-way (5-shot) dataset | 66.27 |
| TrackingNet | ODTrack-B | ODTrack: Online Dense Temporal Token Learning for Visual Tracking | 2024-01-03T00:00:00 | https://arxiv.org/abs/2401.01686v1 | ["https://github.com/gxnu-zhonglab/odtrack"] | In the paper 'ODTrack: Online Dense Temporal Token Learning for Visual Tracking', what Accuracy score did the ODTrack-B model get on the TrackingNet dataset | 85.1 |
| GTA-to-Avg(Cityscapes,BDD,Mapillary) | DIFF | Diffusion Features to Bridge Domain Gap for Semantic Segmentation | 2024-06-02T00:00:00 | https://arxiv.org/abs/2406.00777v2 | ["https://github.com/Yux1angJi/DIFF"] | In the paper 'Diffusion Features to Bridge Domain Gap for Semantic Segmentation', what mIoU score did the DIFF model get on the GTA-to-Avg(Cityscapes,BDD,Mapillary) dataset | 57.15 |
| IllusionVQA | GPT4-Vision 4-shot | IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models | 2024-03-23T00:00:00 | https://arxiv.org/abs/2403.15952v3 | ["https://github.com/csebuetnlp/illusionvqa"] | In the paper 'IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models', what Accuracy score did the GPT4-Vision 4-shot model get on the IllusionVQA dataset | 46 |
| Long Video Dataset | READMem-STCN (sr=1) | READMem: Robust Embedding Association for a Diverse Memory in Unconstrained Video Object Segmentation | 2023-05-22T00:00:00 | https://arxiv.org/abs/2305.12823v2 | ["https://github.com/Vujas-Eteph/READMem"] | In the paper 'READMem: Robust Embedding Association for a Diverse Memory in Unconstrained Video Object Segmentation', what J&F score did the READMem-STCN (sr=1) model get on the Long Video Dataset dataset | 80.8 |
| GRAZPEDWRI-DX | YOLOv8+SE | Pediatric Wrist Fracture Detection Using Feature Context Excitation Modules in X-ray Images | 2024-10-01T00:00:00 | https://arxiv.org/abs/2410.01031v2 | ["https://github.com/ruiyangju/fce-yolov8"] | In the paper 'Pediatric Wrist Fracture Detection Using Feature Context Excitation Modules in X-ray Images', what mAP score did the YOLOv8+SE model get on the GRAZPEDWRI-DX dataset | 67.07 |
| SAFIM | CodeLlama-13b-hf | Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks | 2024-03-07T00:00:00 | https://arxiv.org/abs/2403.04814v3 | ["https://github.com/gonglinyuan/safim"] | In the paper 'Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks', what Algorithmic score did the CodeLlama-13b-hf model get on the SAFIM dataset | 41.41 |
| UAV123 | LoRAT-g-378 | Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance | 2024-03-08T00:00:00 | https://arxiv.org/abs/2403.05231v2 | ["https://github.com/litinglin/lorat"] | In the paper 'Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance', what AUC score did the LoRAT-g-378 model get on the UAV123 dataset | 0.739 |
| CIFAR-FS 5-way (1-shot) | CAML [Laion-2b] | Context-Aware Meta-Learning | 2023-10-17T00:00:00 | https://arxiv.org/abs/2310.10971v2 | ["https://github.com/cfifty/CAML"] | In the paper 'Context-Aware Meta-Learning', what Accuracy score did the CAML [Laion-2b] model get on the CIFAR-FS 5-way (1-shot) dataset | 83.3 |
| MVTec AD | ReConPatch WRN-101 | ReConPatch : Contrastive Patch Representation Learning for Industrial Anomaly Detection | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.16713v3 | ["https://github.com/travishsu/ReConPatch-TF"] | In the paper 'ReConPatch : Contrastive Patch Representation Learning for Industrial Anomaly Detection', what Detection AUROC score did the ReConPatch WRN-101 model get on the MVTec AD dataset | 99.62 |
| MICCAI 2015 Multi-Atlas Abdomen Labeling Challenge | MERIT-GCASCADE | G-CASCADE: Efficient Cascaded Graph Convolutional Decoding for 2D Medical Image Segmentation | 2023-10-24T00:00:00 | https://arxiv.org/abs/2310.16175v1 | ["https://github.com/SLDGroup/G-CASCADE"] | In the paper 'G-CASCADE: Efficient Cascaded Graph Convolutional Decoding for 2D Medical Image Segmentation', what Avg DSC score did the MERIT-GCASCADE model get on the MICCAI 2015 Multi-Atlas Abdomen Labeling Challenge dataset | 84.54 |
| MIMIC-CXR | SEI-1 | Structural Entities Extraction and Patient Indications Incorporation for Chest X-ray Report Generation | 2024-05-23T00:00:00 | https://arxiv.org/abs/2405.14905v1 | ["https://github.com/mk-runner/sei"] | In the paper 'Structural Entities Extraction and Patient Indications Incorporation for Chest X-ray Report Generation', what BLEU-2 score did the SEI-1 model get on the MIMIC-CXR dataset | 0.247 |
| BanglaBook | Logistic Regression (word 2-gram + word 3-gram) | BanglaBook: A Large-scale Bangla Dataset for Sentiment Analysis from Book Reviews | 2023-05-11T00:00:00 | https://arxiv.org/abs/2305.06595v3 | ["https://github.com/mohsinulkabir14/banglabook"] | In the paper 'BanglaBook: A Large-scale Bangla Dataset for Sentiment Analysis from Book Reviews', what Weighted Average F1-score score did the Logistic Regression (word 2-gram + word 3-gram) model get on the BanglaBook dataset | 0.8964 |
| VideoInstruct | LITA-13B | LITA: Language Instructed Temporal-Localization Assistant | 2024-03-27T00:00:00 | https://arxiv.org/abs/2403.19046v1 | ["https://github.com/nvlabs/lita"] | In the paper 'LITA: Language Instructed Temporal-Localization Assistant', what Correctness of Information score did the LITA-13B model get on the VideoInstruct dataset | 2.94 |
| dbp15k fr-en | UMAEA (w/o surf) | Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment | 2023-07-30T00:00:00 | https://arxiv.org/abs/2307.16210v2 | ["https://github.com/zjukg/umaea"] | In the paper 'Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment', what Hits@1 score did the UMAEA (w/o surf) model get on the dbp15k fr-en dataset | 0.873 |
| OASIS+NACC+ICBM+ABIDE+IXI | ResNet-18 | Ordinal Classification with Distance Regularization for Robust Brain Age Prediction | 2023-10-25T00:00:00 | https://arxiv.org/abs/2403.10522v2 | ["https://github.com/jaygshah/Robust-Brain-Age-Prediction"] | In the paper 'Ordinal Classification with Distance Regularization for Robust Brain Age Prediction', what Mean absolute error score did the ResNet-18 model get on the OASIS+NACC+ICBM+ABIDE+IXI dataset | 2.56 |
| WaterScenes | Achelous-FV-RDF-S2 | Achelous: A Fast Unified Water-surface Panoptic Perception Framework based on Fusion of Monocular Camera and 4D mmWave Radar | 2023-07-14T00:00:00 | https://arxiv.org/abs/2307.07102v1 | ["https://github.com/GuanRunwei/Achelous"] | In the paper 'Achelous: A Fast Unified Water-surface Panoptic Perception Framework based on Fusion of Monocular Camera and 4D mmWave Radar', what mIoU score did the Achelous-FV-RDF-S2 model get on the WaterScenes dataset | 79.6 |
| CIFAR-10 | ABNet-2G-R2 | ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities | 2024-11-28T00:00:00 | https://arxiv.org/abs/2411.19213v1 | ["https://github.com/dvssajay/New_World"] | In the paper 'ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities', what Percentage correct score did the ABNet-2G-R2 model get on the CIFAR-10 dataset | 95.900 |
| PascalVOC-20b | FC-CLIP | Convolutions Die Hard: Open-Vocabulary Segmentation with Single Frozen Convolutional CLIP | 2023-08-04T00:00:00 | https://arxiv.org/abs/2308.02487v2 | ["https://github.com/bytedance/fc-clip"] | In the paper 'Convolutions Die Hard: Open-Vocabulary Segmentation with Single Frozen Convolutional CLIP', what mIoU score did the FC-CLIP model get on the PascalVOC-20b dataset | 81.8 |
| PH2 | MobileUNETR | MobileUNETR: A Lightweight End-To-End Hybrid Vision Transformer For Efficient Medical Image Segmentation | 2024-09-04T00:00:00 | https://arxiv.org/abs/2409.03062v1 | ["https://github.com/osupcvlab/mobileunetr"] | In the paper 'MobileUNETR: A Lightweight End-To-End Hybrid Vision Transformer For Efficient Medical Image Segmentation', what Average Dice score did the MobileUNETR model get on the PH2 dataset | 95.70 |
| Set5 - 3x upscaling | DRCT | DRCT: Saving Image Super-resolution away from Information Bottleneck | 2024-03-31T00:00:00 | https://arxiv.org/abs/2404.00722v5 | ["https://github.com/ming053l/drct"] | In the paper 'DRCT: Saving Image Super-resolution away from Information Bottleneck', what PSNR score did the DRCT model get on the Set5 - 3x upscaling dataset | 35.18 |
| Lipogram-e | GPT-2-fine-tuned-20-epochs | Most Language Models can be Poets too: An AI Writing Assistant and Constrained Text Generation Studio | 2023-06-28T00:00:00 | https://arxiv.org/abs/2306.15926v1 | ["https://github.com/hellisotherpeople/constrained-text-generation-studio"] | In the paper 'Most Language Models can be Poets too: An AI Writing Assistant and Constrained Text Generation Studio', what Ignored Constraint Error Rate score did the GPT-2-fine-tuned-20-epochs model get on the Lipogram-e dataset | 0.3% |
| BigEarthNet-S1 (official test set) | MAE (ViT-S/16) | Feature Guided Masked Autoencoder for Self-supervised Learning in Remote Sensing | 2023-10-28T00:00:00 | https://arxiv.org/abs/2310.18653v1 | ["https://github.com/zhu-xlab/fgmae"] | In the paper 'Feature Guided Masked Autoencoder for Self-supervised Learning in Remote Sensing', what mAP (micro) score did the MAE (ViT-S/16) model get on the BigEarthNet-S1 (official test set) dataset | 81.3 |
| NeedForSpeed | ARTrackV2-L | ARTrackV2: Prompting Autoregressive Tracker Where to Look and How to Describe | 2023-12-28T00:00:00 | https://arxiv.org/abs/2312.17133v3 | ["https://github.com/miv-xjtu/artrack"] | In the paper 'ARTrackV2: Prompting Autoregressive Tracker Where to Look and How to Describe', what AUC score did the ARTrackV2-L model get on the NeedForSpeed dataset | 0.684 |
| DocRED-IE | REXEL | REXEL: An End-to-end Model for Document-Level Relation Extraction and Entity Linking | 2024-04-19T00:00:00 | https://arxiv.org/abs/2404.12788v1 | ["https://github.com/amazon-science/e2e-docie"] | In the paper 'REXEL: An End-to-end Model for Document-Level Relation Extraction and Entity Linking', what Avg F1 score did the REXEL model get on the DocRED-IE dataset | 86.74 |
| NExT-QA (Open-ended VideoQA) | VideoChat | VideoChat: Chat-Centric Video Understanding | 2023-05-10T00:00:00 | https://arxiv.org/abs/2305.06355v2 | ["https://github.com/opengvlab/ask-anything"] | In the paper 'VideoChat: Chat-Centric Video Understanding', what Accuracy score did the VideoChat model get on the NExT-QA (Open-ended VideoQA) dataset | 56.6 |
| HumanML3D | MCM | MCM: Multi-condition Motion Synthesis Framework | 2024-04-19T00:00:00 | https://arxiv.org/abs/2404.12886v1 | ["https://github.com/fluide1022/MCM"] | In the paper 'MCM: Multi-condition Motion Synthesis Framework', what FID score did the MCM model get on the HumanML3D dataset | 0.053 |
| Bongard-OpenWorld | InstructBLIP + ChatGPT + Neuro-Symbolic | Bongard-OpenWorld: Few-Shot Reasoning for Free-form Visual Concepts in the Real World | 2023-10-16T00:00:00 | https://arxiv.org/abs/2310.10207v5 | ["https://github.com/joyjayng/Bongard-OpenWorld"] | In the paper 'Bongard-OpenWorld: Few-Shot Reasoning for Free-form Visual Concepts in the Real World', what 2-Class Accuracy score did the InstructBLIP + ChatGPT + Neuro-Symbolic model get on the Bongard-OpenWorld dataset | 55.5 |
| WinoGrande | Branch-Train-MiX 4x7B (sampling top-1 expert) | Branch-Train-MiX: Mixing Expert LLMs into a Mixture-of-Experts LLM | 2024-03-12T00:00:00 | https://arxiv.org/abs/2403.07816v1 | ["https://github.com/Leeroo-AI/mergoo"] | In the paper 'Branch-Train-MiX: Mixing Expert LLMs into a Mixture-of-Experts LLM', what Accuracy score did the Branch-Train-MiX 4x7B (sampling top-1 expert) model get on the WinoGrande dataset | 70.6 |
| PASCAL-5i (1-Shot) | QCLNet (ResNet-50) | Quaternion-valued Correlation Learning for Few-Shot Semantic Segmentation | 2023-05-12T00:00:00 | https://arxiv.org/abs/2305.07283v3 | ["https://github.com/zwzheng98/qclnet"] | In the paper 'Quaternion-valued Correlation Learning for Few-Shot Semantic Segmentation', what Mean IoU score did the QCLNet (ResNet-50) model get on the PASCAL-5i (1-Shot) dataset | 64.3 |
| EQ-Bench | OpenAI text-davinci-001 | EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06281v2 | ["https://github.com/eq-bench/eq-bench"] | In the paper 'EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models', what EQ-Bench Score score did the OpenAI text-davinci-001 model get on the EQ-Bench dataset | 15.19 |
| rt-inod-bias | Llama2 | Benchmarking Llama2, Mistral, Gemma and GPT for Factuality, Toxicity, Bias and Propensity for Hallucinations | 2024-04-15T00:00:00 | https://arxiv.org/abs/2404.09785v1 | ["https://github.com/innodatalabs/innodata-llm-safety"] | In the paper 'Benchmarking Llama2, Mistral, Gemma and GPT for Factuality, Toxicity, Bias and Propensity for Hallucinations', what Best-of score did the Llama2 model get on the rt-inod-bias dataset | 0.34 |
| AFAD | ResNet-50-Unimodal-Concentrated | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10T00:00:00 | https://arxiv.org/abs/2307.04570v3 | ["https://github.com/paplhjak/facial-age-estimation-benchmark"] | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-Unimodal-Concentrated model get on the AFAD dataset | 3.20 |
| miniF2F-valid | Lyra + GPT-4 | Lyra: Orchestrating Dual Correction in Automated Theorem Proving | 2023-09-27T00:00:00 | https://arxiv.org/abs/2309.15806v4 | ["https://github.com/chuanyang-zheng/lyra-theorem-prover"] | In the paper 'Lyra: Orchestrating Dual Correction in Automated Theorem Proving', what Pass@100 score did the Lyra + GPT-4 model get on the miniF2F-valid dataset | 52.0 |
| RES-Q | QurrentOS-coder + Llama 3 70b | RES-Q: Evaluating Code-Editing Large Language Model Systems at the Repository Scale | 2024-06-24T00:00:00 | https://arxiv.org/abs/2406.16801v2 | ["https://github.com/qurrent-ai/res-q"] | In the paper 'RES-Q: Evaluating Code-Editing Large Language Model Systems at the Repository Scale', what pass@1 score did the QurrentOS-coder + Llama 3 70b model get on the RES-Q dataset | 20.0 |
| HumanEva-I | GLA-GCN (T=27, GT) | GLA-GCN: Global-local Adaptive Graph Convolutional Network for 3D Human Pose Estimation from Monocular Video | 2023-07-12T00:00:00 | https://arxiv.org/abs/2307.05853v2 | ["https://github.com/bruceyo/GLA-GCN"] | In the paper 'GLA-GCN: Global-local Adaptive Graph Convolutional Network for 3D Human Pose Estimation from Monocular Video', what Mean Reconstruction Error (mm) score did the GLA-GCN (T=27, GT) model get on the HumanEva-I dataset | 9.2 |
| UrduDoc | ContourNet [69] | UTRNet: High-Resolution Urdu Text Recognition In Printed Documents | 2023-06-27T00:00:00 | https://arxiv.org/abs/2306.15782v3 | ["https://github.com/abdur75648/UTRNet-High-Resolution-Urdu-Text-Recognition"] | In the paper 'UTRNet: High-Resolution Urdu Text Recognition In Printed Documents', what Precision score did the ContourNet [69] model get on the UrduDoc dataset | 86.99 |
| NAS-Bench-201, ImageNet-16-120 | DiNAS | Multi-conditioned Graph Diffusion for Neural Architecture Search | 2024-03-09T00:00:00 | https://arxiv.org/abs/2403.06020v2 | ["https://github.com/rohanasthana/dinas"] | In the paper 'Multi-conditioned Graph Diffusion for Neural Architecture Search', what Accuracy (Test) score did the DiNAS model get on the NAS-Bench-201, ImageNet-16-120 dataset | 45.41 |
| SVAMP | MMOS-CODE-7B(0-shot) | An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning | 2024-02-23T00:00:00 | https://arxiv.org/abs/2403.00799v1 | ["https://github.com/cyzhh/MMOS"] | In the paper 'An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning', what Execution Accuracy score did the MMOS-CODE-7B(0-shot) model get on the SVAMP dataset | 76.4 |
| SSC | Event-SSM | Scalable Event-by-event Processing of Neuromorphic Sensory Signals With Deep State-Space Models | 2024-04-29T00:00:00 | https://arxiv.org/abs/2404.18508v3 | ["https://github.com/Efficient-Scalable-Machine-Learning/event-ssm"] | In the paper 'Scalable Event-by-event Processing of Neuromorphic Sensory Signals With Deep State-Space Models', what Accuracy score did the Event-SSM model get on the SSC dataset | 88.4 |
| IBims-1 | Marigold + E2E FT(zero-shot) | Fine-Tuning Image-Conditional Diffusion Models is Easier than You Think | 2024-09-17T00:00:00 | https://arxiv.org/abs/2409.11355v1 | ["https://github.com/VisualComputingInstitute/diffusion-e2e-ft"] | In the paper 'Fine-Tuning Image-Conditional Diffusion Models is Easier than You Think', what % < 11.25 score did the Marigold + E2E FT(zero-shot) model get on the IBims-1 dataset | 69.9 |
| CDD Dataset (season-varying) | CGNet | Change Guiding Network: Incorporating Change Prior to Guide Change Detection in Remote Sensing Imagery | 2024-04-14T00:00:00 | https://arxiv.org/abs/2404.09179v1 | ["https://github.com/chengxihan/cgnet-cd"] | In the paper 'Change Guiding Network: Incorporating Change Prior to Guide Change Detection in Remote Sensing Imagery', what F1-Score score did the CGNet model get on the CDD Dataset (season-varying) dataset | 94.73 |
| EQ-Bench | Koala 13B | EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06281v2 | ["https://github.com/eq-bench/eq-bench"] | In the paper 'EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models', what EQ-Bench Score score did the Koala 13B model get on the EQ-Bench dataset | 24.92 |
| CATH 4.2 | Knowledge-Design | Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement | 2023-05-20T00:00:00 | https://arxiv.org/abs/2305.15151v4 | ["https://github.com/A4Bio/OpenCPD"] | In the paper 'Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement', what Sequence Recovery %(All) score did the Knowledge-Design model get on the CATH 4.2 dataset | 60.77 |
| RefCOCO+ val | HIPIE | Hierarchical Open-vocabulary Universal Image Segmentation | 2023-07-03T00:00:00 | https://arxiv.org/abs/2307.00764v2 | ["https://github.com/berkeley-hipie/hipie"] | In the paper 'Hierarchical Open-vocabulary Universal Image Segmentation', what Overall IoU score did the HIPIE model get on the RefCOCO+ val dataset | 73.9 |
| Peptides-func | CKGCN | CKGConv: General Graph Convolution with Continuous Kernels | 2024-04-21T00:00:00 | https://arxiv.org/abs/2404.13604v2 | ["https://github.com/networkslab/ckgconv"] | In the paper 'CKGConv: General Graph Convolution with Continuous Kernels', what AP score did the CKGCN model get on the Peptides-func dataset | 0.6952 |
| SIDD | CGNet | CascadedGaze: Efficiency in Global Context Extraction for Image Restoration | 2024-01-26T00:00:00 | https://arxiv.org/abs/2401.15235v2 | ["https://github.com/Ascend-Research/CascadedGaze"] | In the paper 'CascadedGaze: Efficiency in Global Context Extraction for Image Restoration', what PSNR (sRGB) score did the CGNet model get on the SIDD dataset | 40.39 |
| MM-Vet | InternVL2-26B (SGP, token ratio 35%) | A Stitch in Time Saves Nine: Small VLM is a Precise Guidance for Accelerating Large VLMs | 2024-12-04T00:00:00 | https://arxiv.org/abs/2412.03324v2 | ["https://github.com/NUS-HPC-AI-Lab/SGL"] | In the paper 'A Stitch in Time Saves Nine: Small VLM is a Precise Guidance for Accelerating Large VLMs', what GPT-4 score score did the InternVL2-26B (SGP, token ratio 35%) model get on the MM-Vet dataset | 63.20 |
| Atari 2600 Name This Game | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07T00:00:00 | https://arxiv.org/abs/2305.04180v3 | ["https://github.com/xinjinghao/color"] | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score score did the ASL DDQN model get on the Atari 2600 Name This Game dataset | 16535.4 |
| Distinctions-646 | DCAM | Dual-Context Aggregation for Universal Image Matting | 2024-02-28T00:00:00 | https://arxiv.org/abs/2402.18109v1 | ["https://github.com/windaway/dcam"] | In the paper 'Dual-Context Aggregation for Universal Image Matting', what SAD score did the DCAM model get on the Distinctions-646 dataset | 31.27 |
| SPED | DINOv2 SALAD | Optimal Transport Aggregation for Visual Place Recognition | 2023-11-27T00:00:00 | https://arxiv.org/abs/2311.15937v2 | ["https://github.com/serizba/salad"] | In the paper 'Optimal Transport Aggregation for Visual Place Recognition', what Recall@1 score did the DINOv2 SALAD model get on the SPED dataset | 92.1 |
| fake | FTTransformer + RoBERTa fintune | PyTorch Frame: A Modular Framework for Multi-Modal Tabular Learning | 2024-03-31T00:00:00 | https://arxiv.org/abs/2404.00776v1 | ["https://github.com/pyg-team/pytorch-frame"] | In the paper 'PyTorch Frame: A Modular Framework for Multi-Modal Tabular Learning', what AUROC score did the FTTransformer + RoBERTa fintune model get on the fake dataset | 0.96 |
| fake | FTTransformer + OpenAI embedding | PyTorch Frame: A Modular Framework for Multi-Modal Tabular Learning | 2024-03-31T00:00:00 | https://arxiv.org/abs/2404.00776v1 | ["https://github.com/pyg-team/pytorch-frame"] | In the paper 'PyTorch Frame: A Modular Framework for Multi-Modal Tabular Learning', what AUROC score did the FTTransformer + OpenAI embedding model get on the fake dataset | 0.911 |
| SFCHD | YOLOv8 | Large, Complex, and Realistic Safety Clothing and Helmet Detection: Dataset and Method | 2023-06-03T00:00:00 | https://arxiv.org/abs/2306.02098v2 | ["https://github.com/lijfrank-open/SFCHD-SCALE"] | In the paper 'Large, Complex, and Realistic Safety Clothing and Helmet Detection: Dataset and Method', what mAP@0.50 score did the YOLOv8 model get on the SFCHD dataset | 77.9 |
| MSLS | ProGEO | ProGEO: Generating Prompts through Image-Text Contrastive Learning for Visual Geo-localization | 2024-06-04T00:00:00 | https://arxiv.org/abs/2406.01906v1 | ["https://github.com/chain-mao/progeo"] | In the paper 'ProGEO: Generating Prompts through Image-Text Contrastive Learning for Visual Geo-localization', what Recall@1 score did the ProGEO model get on the MSLS dataset | 84.9 |
| BIG-bench (Causal Judgment) | PaLM 2 (few-shot, k=3, Direct) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2 (few-shot, k=3, Direct) model get on the BIG-bench (Causal Judgment) dataset | 62.0 |
| AE-110k | ft-GPT-3.5-json-val | ExtractGPT: Exploring the Potential of Large Language Models for Product Attribute Value Extraction | 2023-10-19T00:00:00 | https://arxiv.org/abs/2310.12537v5 | ["https://github.com/wbsg-uni-mannheim/extractgpt"] | In the paper 'ExtractGPT: Exploring the Potential of Large Language Models for Product Attribute Value Extraction', what F1-score score did the ft-GPT-3.5-json-val model get on the AE-110k dataset | 86 |
| MAPS | YourMT3+ (YPTF+S, unseen) | YourMT3+: Multi-instrument Music Transcription with Enhanced Transformer Architectures and Cross-dataset Stem Augmentation | 2024-07-05T00:00:00 | https://arxiv.org/abs/2407.04822v3 | ["https://github.com/mimbres/yourmt3"] | In the paper 'YourMT3+: Multi-instrument Music Transcription with Enhanced Transformer Architectures and Cross-dataset Stem Augmentation', what Onset F1 score did the YourMT3+ (YPTF+S, unseen) model get on the MAPS dataset | 88.37 |
| STS Benchmark | PromptEOL+CSE+OPT-13B | Scaling Sentence Embeddings with Large Language Models | 2023-07-31T00:00:00 | https://arxiv.org/abs/2307.16645v1 | ["https://github.com/kongds/scaling_sentemb"] | In the paper 'Scaling Sentence Embeddings with Large Language Models', what Spearman Correlation score did the PromptEOL+CSE+OPT-13B model get on the STS Benchmark dataset | 0.8856 |
| DAVIS-S | BiRefNet (DUTS, HRSOD, UHRSD) | Bilateral Reference for High-Resolution Dichotomous Image Segmentation | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03407v6 | ["https://github.com/zhengpeng7/birefnet"] | In the paper 'Bilateral Reference for High-Resolution Dichotomous Image Segmentation', what S-measure score did the BiRefNet (DUTS, HRSOD, UHRSD) model get on the DAVIS-S dataset | 0.975 |
| MMBench | Video-LaVIT | Video-LaVIT: Unified Video-Language Pre-training with Decoupled Visual-Motional Tokenization | 2024-02-05T00:00:00 | https://arxiv.org/abs/2402.03161v3 | ["https://github.com/jy0205/lavit"] | In the paper 'Video-LaVIT: Unified Video-Language Pre-training with Decoupled Visual-Motional Tokenization', what GPT-3.5 score score did the Video-LaVIT model get on the MMBench dataset | 67.3 |
| nuscenes Camera-Radar | CR3DT | CR3DT: Camera-RADAR Fusion for 3D Detection and Tracking | 2024-03-22T00:00:00 | https://arxiv.org/abs/2403.15313v2 | ["https://github.com/eth-pbl/cr3dt"] | In the paper 'CR3DT: Camera-RADAR Fusion for 3D Detection and Tracking', what AMOTA score did the CR3DT model get on the nuscenes Camera-Radar dataset | 0.355 |
| HellaSwag | PaLM 2-L (1-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-L (1-shot) model get on the HellaSwag dataset | 87.4 |
| Weather2K1786 (192) | MoLE-DLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06786v3 | ["https://github.com/rogerni/mole"] | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the Weather2K1786 (192) dataset | 0.601 |
| MM-Vet | VOLCANO 13B | Volcano: Mitigating Multimodal Hallucination through Self-Feedback Guided Revision | 2023-11-13T00:00:00 | https://arxiv.org/abs/2311.07362v4 | ["https://github.com/kaistai/volcano"] | In the paper 'Volcano: Mitigating Multimodal Hallucination through Self-Feedback Guided Revision', what GPT-4 score score did the VOLCANO 13B model get on the MM-Vet dataset | 38.0 |
| Food-101 | HPT | Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06323v1 | ["https://github.com/vill-lab/2024-aaai-hpt"] | In the paper 'Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models', what Harmonic mean score did the HPT model get on the Food-101 dataset | 91.01 |
| IWSLT2014 German-English | DRDA | Deterministic Reversible Data Augmentation for Neural Machine Translation | 2024-06-04T00:00:00 | https://arxiv.org/abs/2406.02517v1 | ["https://github.com/BITHLP/DRDA"] | In the paper 'Deterministic Reversible Data Augmentation for Neural Machine Translation', what BLEU score score did the DRDA model get on the IWSLT2014 German-English dataset | 37.95 |
| PascalVOC-20 | LaVG | In Defense of Lazy Visual Grounding for Open-Vocabulary Semantic Segmentation | 2024-08-09T00:00:00 | https://arxiv.org/abs/2408.04961v1 | ["https://github.com/dahyun-kang/lazygrounding"] | In the paper 'In Defense of Lazy Visual Grounding for Open-Vocabulary Semantic Segmentation', what mIoU score did the LaVG model get on the PascalVOC-20 dataset | 82.5 |
| Matterport3D | SFSS-MMSI (RGB+Depth+Normal) | Single Frame Semantic Segmentation Using Multi-Modal Spherical Images | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09369v1 | ["https://github.com/sguttikon/SFSS-MMSI"] | In the paper 'Single Frame Semantic Segmentation Using Multi-Modal Spherical Images', what Validation mIoU score did the SFSS-MMSI (RGB+Depth+Normal) model get on the Matterport3D dataset | 39.26 |
| COIN | HERMES | HERMES: temporal-coHERent long-forM understanding with Episodes and Semantics | 2024-08-30T00:00:00 | https://arxiv.org/abs/2408.17443v3 | ["https://github.com/joslefaure/HERMES"] | In the paper 'HERMES: temporal-coHERent long-forM understanding with Episodes and Semantics', what Accuracy (%) score did the HERMES model get on the COIN dataset | 93.5 |
| MSCOCO | SIA-OVD (RN50x4) | SIA-OVD: Shape-Invariant Adapter for Bridging the Image-Region Gap in Open-Vocabulary Detection | 2024-10-08T00:00:00 | https://arxiv.org/abs/2410.05650v1 | ["https://github.com/pku-icst-mipl/sia-ovd_acmmm2024"] | In the paper 'SIA-OVD: Shape-Invariant Adapter for Bridging the Image-Region Gap in Open-Vocabulary Detection', what AP 0.5 score did the SIA-OVD (RN50x4) model get on the MSCOCO dataset | 41.9 |
| SPKL | ResNet50 | Revising deep learning methods in parking lot occupancy detection | 2023-06-07T00:00:00 | https://arxiv.org/abs/2306.04288v3 | ["https://github.com/eighonet/parking-research"] | In the paper 'Revising deep learning methods in parking lot occupancy detection', what F1-score score did the ResNet50 model get on the SPKL dataset | 0.6674 |
Set14 - 4x upscaling | ATD | Transcending the Limit of Local Window: Advanced Super-Resolution Transformer with Adaptive Token Dictionary | 2024-01-16T00:00:00 | https://arxiv.org/abs/2401.08209v2 | [
"https://github.com/labshuhanggu/adaptive-token-dictionary"
] | In the paper 'Transcending the Limit of Local Window: Advanced Super-Resolution Transformer with Adaptive Token Dictionary', what PSNR score did the ATD model get on the Set14 - 4x upscaling dataset
| 29.24 |
ogbg-molpcba | GatedGCN-HSG | Next Level Message-Passing with Hierarchical Support Graphs | 2024-06-22T00:00:00 | https://arxiv.org/abs/2406.15852v2 | [
"https://github.com/carlosinator/support-graphs"
] | In the paper 'Next Level Message-Passing with Hierarchical Support Graphs', what Test AP score did the GatedGCN-HSG model get on the ogbg-molpcba dataset
| 0.3129±0.0020 |
MS-COCO (10-shot) | CD-ViTO | Cross-Domain Few-Shot Object Detection via Enhanced Open-Set Object Detector | 2024-02-05T00:00:00 | https://arxiv.org/abs/2402.03094v4 | [
"https://github.com/lovelyqian/CDFSOD-benchmark"
] | In the paper 'Cross-Domain Few-Shot Object Detection via Enhanced Open-Set Object Detector', what AP score did the CD-ViTO model get on the MS-COCO (10-shot) dataset
| 35.3 |
CloudEval-YAML | GPT-4 Turbo | CloudEval-YAML: A Practical Benchmark for Cloud Configuration Generation | 2023-11-10T00:00:00 | https://arxiv.org/abs/2401.06786v1 | [
"https://github.com/alibaba/cloudeval-yaml"
] | In the paper 'CloudEval-YAML: A Practical Benchmark for Cloud Configuration Generation', what ACC score did the GPT-4 Turbo model get on the CloudEval-YAML dataset
| 0.561 |
Peptides-struct | GatedGCN-HSG | Next Level Message-Passing with Hierarchical Support Graphs | 2024-06-22T00:00:00 | https://arxiv.org/abs/2406.15852v2 | [
"https://github.com/carlosinator/support-graphs"
] | In the paper 'Next Level Message-Passing with Hierarchical Support Graphs', what MAE score did the GatedGCN-HSG model get on the Peptides-struct dataset
| 0.2421±0.0007 |
PASCAL VOC | TinyissimoYOLO-v8 | Ultra-Efficient On-Device Object Detection on AI-Integrated Smart Glasses with TinyissimoYOLO | 2023-11-02T00:00:00 | https://arxiv.org/abs/2311.01057v2 | [
"https://github.com/eth-pbl/tinyissimoyolo"
] | In the paper 'Ultra-Efficient On-Device Object Detection on AI-Integrated Smart Glasses with TinyissimoYOLO', what Parameters(K) score did the TinyissimoYOLO-v8 model get on the PASCAL VOC dataset
| 839 |
ScanNet | SuperCluster | Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering | 2024-01-12T00:00:00 | https://arxiv.org/abs/2401.06704v2 | [
"https://github.com/drprojects/superpoint_transformer"
] | In the paper 'Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering', what PQ score did the SuperCluster model get on the ScanNet dataset
| 58.7 |
SYSU-CD | RCTNet | Relating CNN-Transformer Fusion Network for Change Detection | 2024-07-03T00:00:00 | https://arxiv.org/abs/2407.03178v1 | [
"https://github.com/nust-machine-intelligence-laboratory/rctnet"
] | In the paper 'Relating CNN-Transformer Fusion Network for Change Detection', what F1 score did the RCTNet model get on the SYSU-CD dataset
| 83.01 |
MovieLens 100K | WMLFF | Weighted Multi-Level Feature Factorization for App ads CTR and installation prediction | 2023-08-03T00:00:00 | https://arxiv.org/abs/2308.02568v1 | [
"https://github.com/knife982000/recsys2023challenge"
] | In the paper 'Weighted Multi-Level Feature Factorization for App ads CTR and installation prediction', what RMSE (u1 Splits) score did the WMLFF model get on the MovieLens 100K dataset
| 0.928 |
COCO-20i -> Pascal VOC (5-shot) | MSDNet (ResNet-101) | MSDNet: Multi-Scale Decoder for Few-Shot Semantic Segmentation via Transformer-Guided Prototyping | 2024-09-17T00:00:00 | https://arxiv.org/abs/2409.11316v1 | [
"https://github.com/amirrezafateh/msdnet"
] | In the paper 'MSDNet: Multi-Scale Decoder for Few-Shot Semantic Segmentation via Transformer-Guided Prototyping', what Mean IoU score did the MSDNet (ResNet-101) model get on the COCO-20i -> Pascal VOC (5-shot) dataset
| 76.4 |
MBPP | MapCoder (GPT-4o) | MapCoder: Multi-Agent Code Generation for Competitive Problem Solving | 2024-05-18T00:00:00 | https://arxiv.org/abs/2405.11403v1 | [
"https://github.com/md-ashraful-pramanik/mapcoder"
] | In the paper 'MapCoder: Multi-Agent Code Generation for Competitive Problem Solving', what Accuracy score did the MapCoder (GPT-4o) model get on the MBPP dataset
| 89.7 |
MM-Vet | Dragonfly (Llama3-8B) | Dragonfly: Multi-Resolution Zoom-In Encoding Enhances Vision-Language Models | 2024-06-03T00:00:00 | https://arxiv.org/abs/2406.00977v2 | [
"https://github.com/togethercomputer/dragonfly"
] | In the paper 'Dragonfly: Multi-Resolution Zoom-In Encoding Enhances Vision-Language Models', what GPT-4 score score did the Dragonfly (Llama3-8B) model get on the MM-Vet dataset
| 35.9 |
Deep Noise Suppression (DNS) Challenge | MP-SENet | Explicit Estimation of Magnitude and Phase Spectra in Parallel for High-Quality Speech Enhancement | 2023-08-17T00:00:00 | https://arxiv.org/abs/2308.08926v2 | [
"https://github.com/yxlu-0102/MP-SENet"
] | In the paper 'Explicit Estimation of Magnitude and Phase Spectra in Parallel for High-Quality Speech Enhancement', what SI-SDR-WB score did the MP-SENet model get on the Deep Noise Suppression (DNS) Challenge dataset
| 21.03 |
Hopper-v4 | MEow | Maximum Entropy Reinforcement Learning via Energy-Based Normalizing Flow | 2024-05-22T00:00:00 | https://arxiv.org/abs/2405.13629v2 | [
"https://github.com/ChienFeng-hub/meow"
] | In the paper 'Maximum Entropy Reinforcement Learning via Energy-Based Normalizing Flow', what Average Return score did the MEow model get on the Hopper-v4 dataset
| 3332.99 |
Columbia | Early Fusion | MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization | 2023-12-04T00:00:00 | https://arxiv.org/abs/2312.01790v2 | [
"https://github.com/idt-iti/mmfusion-iml"
] | In the paper 'MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization', what Average Pixel F1(Fixed threshold) score did the Early Fusion model get on the Columbia dataset
| 0.888 |
REDS4- 4x upscaling | EvTexture | EvTexture: Event-driven Texture Enhancement for Video Super-Resolution | 2024-06-19T00:00:00 | https://arxiv.org/abs/2406.13457v1 | [
"https://github.com/dachunkai/evtexture"
] | In the paper 'EvTexture: Event-driven Texture Enhancement for Video Super-Resolution', what PSNR score did the EvTexture model get on the REDS4- 4x upscaling dataset
| 32.79 |
Musk v1 | Snuffy | Snuffy: Efficient Whole Slide Image Classifier | 2024-08-15T00:00:00 | https://arxiv.org/abs/2408.08258v2 | [
"https://github.com/jafarinia/snuffy"
] | In the paper 'Snuffy: Efficient Whole Slide Image Classifier', what AUC score did the Snuffy model get on the Musk v1 dataset
| 0.989 |
Chest X-ray images | MSTP | Efficient and Accurate Pneumonia Detection Using a Novel Multi-Scale Transformer Approach | 2024-08-08T00:00:00 | https://arxiv.org/abs/2408.04290v2 | [
"https://github.com/amirrezafateh/multi-scale-transformer-pneumonia"
] | In the paper 'Efficient and Accurate Pneumonia Detection Using a Novel Multi-Scale Transformer Approach', what Accuracy score did the MSTP model get on the Chest X-ray images dataset
| 92.79 |