| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
| CIFAR-100 | SPICE-BPA | The Balanced-Pairwise-Affinities Feature Transform | 2024-06-25 | https://arxiv.org/abs/2407.01467v1 | https://github.com/danielshalam/bpa | In the paper 'The Balanced-Pairwise-Affinities Feature Transform', what Accuracy score did the SPICE-BPA model get on the CIFAR-100 dataset? | 0.550 |
| Texas (60%/20%/20% random splits) | HH-GCN | Half-Hop: A graph upsampling approach for slowing down message passing | 2023-08-17 | https://arxiv.org/abs/2308.09198v1 | https://github.com/nerdslab/halfhop | In the paper 'Half-Hop: A graph upsampling approach for slowing down message passing', what 1:1 Accuracy score did the HH-GCN model get on the Texas (60%/20%/20% random splits) dataset? | 71.89 ± 3.46 |
| ParaMAWPS | GPT-3 text-babbage-001 (6.7B) | Math Word Problem Solving by Generating Linguistic Variants of Problem Statements | 2023-06-24 | https://arxiv.org/abs/2306.13899v1 | https://github.com/starscream-11813/variational-mathematical-reasoning | In the paper 'Math Word Problem Solving by Generating Linguistic Variants of Problem Statements', what Accuracy (%) score did the GPT-3 text-babbage-001 (6.7B) model get on the ParaMAWPS dataset? | 3.21 |
| MORPH Album2 (SE) | ResNet-50-Regression | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10 | https://arxiv.org/abs/2307.04570v3 | https://github.com/paplhjak/facial-age-estimation-benchmark | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-Regression model get on the MORPH Album2 (SE) dataset? | 2.83 |
| Peptides-struct | DRew-GCN+LapPE | DRew: Dynamically Rewired Message Passing with Delay | 2023-05-13 | https://arxiv.org/abs/2305.08018v2 | https://github.com/bengutteridge/drew | In the paper 'DRew: Dynamically Rewired Message Passing with Delay', what MAE score did the DRew-GCN+LapPE model get on the Peptides-struct dataset? | 0.2536±0.0015 |
| CCTSDB-AUG | YOLO-CCSPNet | CCSPNet-Joint: Efficient Joint Training Method for Traffic Sign Detection Under Extreme Conditions | 2023-09-13 | https://arxiv.org/abs/2309.06902v4 | https://github.com/haoqinhong/ccspnet-joint | In the paper 'CCSPNet-Joint: Efficient Joint Training Method for Traffic Sign Detection Under Extreme Conditions', what Averaged Precision score did the YOLO-CCSPNet model get on the CCTSDB-AUG dataset? | 0.917 |
| ParaMAWPS | DeBERTa (VM) | Math Word Problem Solving by Generating Linguistic Variants of Problem Statements | 2023-06-24 | https://arxiv.org/abs/2306.13899v1 | https://github.com/starscream-11813/variational-mathematical-reasoning | In the paper 'Math Word Problem Solving by Generating Linguistic Variants of Problem Statements', what Accuracy (%) score did the DeBERTa (VM) model get on the ParaMAWPS dataset? | 79.1 |
| MassSpecGym | Random chemical generation | MassSpecGym: A benchmark for the discovery and identification of molecules | 2024-10-30 | https://arxiv.org/abs/2410.23326v1 | https://github.com/pluskal-lab/massspecgym | In the paper 'MassSpecGym: A benchmark for the discovery and identification of molecules', what Top-1 Accuracy score did the Random chemical generation model get on the MassSpecGym dataset? | 0.00 |
| LVIS v1.0 val | GLEE-Pro | General Object Foundation Model for Images and Videos at Scale | 2023-12-14 | https://arxiv.org/abs/2312.09158v1 | https://github.com/FoundationVision/GLEE | In the paper 'General Object Foundation Model for Images and Videos at Scale', what box AP score did the GLEE-Pro model get on the LVIS v1.0 val dataset? | 55.7 |
| GSM8K | OpenMath-Mistral-7B (w/ code, SC, k=50) | OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | 2024-02-15 | https://arxiv.org/abs/2402.10176v2 | https://github.com/kipok/nemo-skills | In the paper 'OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset', what Accuracy score did the OpenMath-Mistral-7B (w/ code, SC, k=50) model get on the GSM8K dataset? | 86.9 |
| GSM8K | WizardMath-13B-V1.0 | WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct | 2023-08-18 | https://arxiv.org/abs/2308.09583v1 | https://github.com/nlpxucan/wizardlm | In the paper 'WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct', what Accuracy score did the WizardMath-13B-V1.0 model get on the GSM8K dataset? | 63.9 |
| CAS-VSR-W1k (LRW-1000) | SyncVSR (Word Boundary) | SyncVSR: Data-Efficient Visual Speech Recognition with End-to-End Crossmodal Audio Token Synchronization | 2024-06-18 | https://arxiv.org/abs/2406.12233v1 | https://github.com/KAIST-AILab/SyncVSR | In the paper 'SyncVSR: Data-Efficient Visual Speech Recognition with End-to-End Crossmodal Audio Token Synchronization', what Top-1 Accuracy score did the SyncVSR (Word Boundary) model get on the CAS-VSR-W1k (LRW-1000) dataset? | 58.2 |
| MPI-INF-3DHP | KTPFormer | KTPFormer: Kinematics and Trajectory Prior Knowledge-Enhanced Transformer for 3D Human Pose Estimation | 2024-03-31 | https://arxiv.org/abs/2404.00658v2 | https://github.com/JihuaPeng/KTPFormer | In the paper 'KTPFormer: Kinematics and Trajectory Prior Knowledge-Enhanced Transformer for 3D Human Pose Estimation', what AUC score did the KTPFormer model get on the MPI-INF-3DHP dataset? | 85.9 |
| iNaturalist 2018 | LIFT (ViT-L/14@336px) | Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts | 2023-09-18 | https://arxiv.org/abs/2309.10019v3 | https://github.com/shijxcs/lift | In the paper 'Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts', what Top-1 Accuracy score did the LIFT (ViT-L/14@336px) model get on the iNaturalist 2018 dataset? | 87.4% |
| EBD | D-DFFNet | Depth and DOF Cues Make A Better Defocus Blur Detector | 2023-06-20 | https://arxiv.org/abs/2306.11334v1 | https://github.com/yuxinjin-whu/d-dffnet | In the paper 'Depth and DOF Cues Make A Better Defocus Blur Detector', what MAE score did the D-DFFNet model get on the EBD dataset? | 0.084 |
| STCrowd | MMPedestron | When Pedestrian Detection Meets Multi-Modal Learning: Generalist Model and Benchmark Dataset | 2024-07-14 | https://arxiv.org/abs/2407.10125v1 | https://github.com/BubblyYi/MMPedestron | In the paper 'When Pedestrian Detection Meets Multi-Modal Learning: Generalist Model and Benchmark Dataset', what AP score did the MMPedestron model get on the STCrowd dataset? | 74.9 |
| amazon-ratings | GCN | Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification | 2024-06-13 | https://arxiv.org/abs/2406.08993v2 | https://github.com/LUOyk1999/tunedGNN | In the paper 'Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification', what Accuracy (%) score did the GCN model get on the amazon-ratings dataset? | 53.80 ± 0.60 |
| Atari 2600 Amidar | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07 | https://arxiv.org/abs/2305.04180v3 | https://github.com/xinjinghao/color | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score did the ASL DDQN model get on the Atari 2600 Amidar dataset? | 2232.3 |
| PA-100K | APTM | Towards Unified Text-based Person Retrieval: A Large-scale Multi-Attribute and Language Search Benchmark | 2023-06-05 | https://arxiv.org/abs/2306.02898v4 | https://github.com/Shuyu-XJTU/APTM | In the paper 'Towards Unified Text-based Person Retrieval: A Large-scale Multi-Attribute and Language Search Benchmark', what Accuracy score did the APTM model get on the PA-100K dataset? | 80.17 |
| CIFAR-10 | SeCu | Stable Cluster Discrimination for Deep Clustering | 2023-11-24 | https://arxiv.org/abs/2311.14310v1 | https://github.com/idstcv/secu | In the paper 'Stable Cluster Discrimination for Deep Clustering', what Accuracy score did the SeCu model get on the CIFAR-10 dataset? | 0.93 |
| BSD100 - 4x upscaling | DRCT-L | DRCT: Saving Image Super-resolution away from Information Bottleneck | 2024-03-31 | https://arxiv.org/abs/2404.00722v5 | https://github.com/ming053l/drct | In the paper 'DRCT: Saving Image Super-resolution away from Information Bottleneck', what PSNR score did the DRCT-L model get on the BSD100 - 4x upscaling dataset? | 28.16 |
| SIDER | elEmBERT-V1 | Structure to Property: Chemical Element Embeddings and a Deep Learning Approach for Accurate Prediction of Chemical Properties | 2023-09-17 | https://arxiv.org/abs/2309.09355v3 | https://github.com/dmamur/elembert | In the paper 'Structure to Property: Chemical Element Embeddings and a Deep Learning Approach for Accurate Prediction of Chemical Properties', what AUC score did the elEmBERT-V1 model get on the SIDER dataset? | 0.778 |
| LAGENDA gender | MiVOLO-V2 | Beyond Specialization: Assessing the Capabilities of MLLMs in Age and Gender Estimation | 2024-03-04 | https://arxiv.org/abs/2403.02302v3 | https://github.com/wildchlamydia/mivolo | In the paper 'Beyond Specialization: Assessing the Capabilities of MLLMs in Age and Gender Estimation', what CS@5 score did the MiVOLO-V2 model get on the LAGENDA gender dataset? | 74.48 |
| AIM-500 | DiffMatte | Diffusion for Natural Image Matting | 2023-12-10 | https://arxiv.org/abs/2312.05915v2 | https://github.com/yihanhu-2022/diffmatte | In the paper 'Diffusion for Natural Image Matting', what SAD score did the DiffMatte model get on the AIM-500 dataset? | 16.31 |
| Kvasir-SEG | ProMISe | ProMISe: Promptable Medical Image Segmentation using SAM | 2024-03-07 | https://arxiv.org/abs/2403.04164v3 | https://github.com/xinkunwang111/promise | In the paper 'ProMISe: Promptable Medical Image Segmentation using SAM', what mean Dice score did the ProMISe model get on the Kvasir-SEG dataset? | 0.911 |
| ETTh1 (192) Multivariate | RLinear | Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping | 2023-05-18 | https://arxiv.org/abs/2305.10721v1 | https://github.com/plumprc/rtsf | In the paper 'Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping', what MSE score did the RLinear model get on the ETTh1 (192) Multivariate dataset? | 0.404 |
| HMDB51 | OST | OST: Refining Text Knowledge with Optimal Spatio-Temporal Descriptor for General Video Recognition | 2023-11-30 | https://arxiv.org/abs/2312.00096v2 | https://github.com/tomchen-ctj/OST | In the paper 'OST: Refining Text Knowledge with Optimal Spatio-Temporal Descriptor for General Video Recognition', what Top-1 Accuracy score did the OST model get on the HMDB51 dataset? | 55.9 |
| ActivityNet-1.3 | ActionMamba (InternVideo2-6B) | Video Mamba Suite: State Space Model as a Versatile Alternative for Video Understanding | 2024-03-14 | https://arxiv.org/abs/2403.09626v1 | https://github.com/opengvlab/video-mamba-suite | In the paper 'Video Mamba Suite: State Space Model as a Versatile Alternative for Video Understanding', what mAP IOU@0.5 score did the ActionMamba (InternVideo2-6B) model get on the ActivityNet-1.3 dataset? | 62.43 |
| REBUS | LLaVa-1.5-13B | REBUS: A Robust Evaluation Benchmark of Understanding Symbols | 2024-01-11 | https://arxiv.org/abs/2401.05604v2 | https://github.com/cvndsh/rebus | In the paper 'REBUS: A Robust Evaluation Benchmark of Understanding Symbols', what Accuracy score did the LLaVa-1.5-13B model get on the REBUS dataset? | 1.8 |
| ADE20K-847 | FC-CLIP | Convolutions Die Hard: Open-Vocabulary Segmentation with Single Frozen Convolutional CLIP | 2023-08-04 | https://arxiv.org/abs/2308.02487v2 | https://github.com/bytedance/fc-clip | In the paper 'Convolutions Die Hard: Open-Vocabulary Segmentation with Single Frozen Convolutional CLIP', what mIoU score did the FC-CLIP model get on the ADE20K-847 dataset? | 14.8 |
| CATT | GPT-4 | CATT: Character-based Arabic Tashkeel Transformer | 2024-07-03 | https://arxiv.org/abs/2407.03236v3 | https://github.com/abjadai/catt | In the paper 'CATT: Character-based Arabic Tashkeel Transformer', what DER(%) score did the GPT-4 model get on the CATT dataset? | 9.515 |
| BEA-2019 (test) | GRECO (voting+ESC) | System Combination via Quality Estimation for Grammatical Error Correction | 2023-10-23 | https://arxiv.org/abs/2310.14947v1 | https://github.com/nusnlp/greco | In the paper 'System Combination via Quality Estimation for Grammatical Error Correction', what F0.5 score did the GRECO (voting+ESC) model get on the BEA-2019 (test) dataset? | 80.84 |
| RES-Q | QurrentOS-coder + GPT-4o | RES-Q: Evaluating Code-Editing Large Language Model Systems at the Repository Scale | 2024-06-24 | https://arxiv.org/abs/2406.16801v2 | https://github.com/qurrent-ai/res-q | In the paper 'RES-Q: Evaluating Code-Editing Large Language Model Systems at the Repository Scale', what pass@1 score did the QurrentOS-coder + GPT-4o model get on the RES-Q dataset? | 46.0 |
| Cityscapes | FC-CLIP | Convolutions Die Hard: Open-Vocabulary Segmentation with Single Frozen Convolutional CLIP | 2023-08-04 | https://arxiv.org/abs/2308.02487v2 | https://github.com/bytedance/fc-clip | In the paper 'Convolutions Die Hard: Open-Vocabulary Segmentation with Single Frozen Convolutional CLIP', what mIoU score did the FC-CLIP model get on the Cityscapes dataset? | 56.2 |
| UrduDoc | PSENet [67] | UTRNet: High-Resolution Urdu Text Recognition In Printed Documents | 2023-06-27 | https://arxiv.org/abs/2306.15782v3 | https://github.com/abdur75648/UTRNet-High-Resolution-Urdu-Text-Recognition | In the paper 'UTRNet: High-Resolution Urdu Text Recognition In Printed Documents', what Precision score did the PSENet [67] model get on the UrduDoc dataset? | 78.11 |
| ICDAR2015 | CLIP4STR-L* | An Empirical Study of Scaling Law for OCR | 2023-12-29 | https://arxiv.org/abs/2401.00028v3 | https://github.com/large-ocr-model/large-ocr-model.github.io | In the paper 'An Empirical Study of Scaling Law for OCR', what Accuracy score did the CLIP4STR-L* model get on the ICDAR2015 dataset? | 92.6 |
| ScanNet | PonderV2 + SparseUNet | PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm | 2023-10-12 | https://arxiv.org/abs/2310.08586v3 | https://github.com/OpenGVLab/PonderV2 | In the paper 'PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm', what test mIoU score did the PonderV2 + SparseUNet model get on the ScanNet dataset? | 78.5 |
| Amazon-Google | gpt-4o-mini-2024-07-18 | Fine-tuning Large Language Models for Entity Matching | 2024-09-12 | https://arxiv.org/abs/2409.08185v1 | https://github.com/wbsg-uni-mannheim/tailormatch | In the paper 'Fine-tuning Large Language Models for Entity Matching', what F1 (%) score did the gpt-4o-mini-2024-07-18 model get on the Amazon-Google dataset? | 59.20 |
| PACS | SPG (CLIP, ViT-B/16) | Soft Prompt Generation for Domain Generalization | 2024-04-30 | https://arxiv.org/abs/2404.19286v2 | https://github.com/renytek13/soft-prompt-generation-with-cgan | In the paper 'Soft Prompt Generation for Domain Generalization', what Average Accuracy score did the SPG (CLIP, ViT-B/16) model get on the PACS dataset? | 97.0 |
| ImageNet-1k vs Textures | NAC-UE (ResNet-50) | Neuron Activation Coverage: Rethinking Out-of-distribution Detection and Generalization | 2023-06-05 | https://arxiv.org/abs/2306.02879v3 | https://github.com/bierone/ood_coverage | In the paper 'Neuron Activation Coverage: Rethinking Out-of-distribution Detection and Generalization', what AUROC score did the NAC-UE (ResNet-50) model get on the ImageNet-1k vs Textures dataset? | 97.9 |
| GSM8K | OpenMath-CodeLlama-34B (w/ code) | OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | 2024-02-15 | https://arxiv.org/abs/2402.10176v2 | https://github.com/kipok/nemo-skills | In the paper 'OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset', what Accuracy score did the OpenMath-CodeLlama-34B (w/ code) model get on the GSM8K dataset? | 80.7 |
| WHU-CD | C2FNet | C2F-SemiCD: A Coarse-to-Fine Semi-Supervised Change Detection Method Based on Consistency Regularization in High-Resolution Remote Sensing Images | 2024-04-22 | https://arxiv.org/abs/2404.13838v1 | https://github.com/chengxihan/c2f-semicd-and-c2f-cdnet | In the paper 'C2F-SemiCD: A Coarse-to-Fine Semi-Supervised Change Detection Method Based on Consistency Regularization in High-Resolution Remote Sensing Images', what F1 score did the C2FNet model get on the WHU-CD dataset? | 94.36 |
| MBPP | PaLM 2-S* (few-shot) | PaLM 2 Technical Report | 2023-05-17 | https://arxiv.org/abs/2305.10403v3 | https://github.com/eternityyw/tram-benchmark | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-S* (few-shot) model get on the MBPP dataset? | 50 |
| PACS | PromptStyler (CLIP, ViT-L/14) | PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization | 2023-07-27 | https://arxiv.org/abs/2307.15199v2 | https://github.com/zhanghr2001/promptta | In the paper 'PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization', what Average Accuracy score did the PromptStyler (CLIP, ViT-L/14) model get on the PACS dataset? | 98.6 |
| SHS100K-TEST | CoverHunter-256 | CoverHunter: Cover Song Identification with Refined Attention and Alignments | 2023-06-15 | https://arxiv.org/abs/2306.09025v1 | https://github.com/Liu-Feng-deeplearning/CoverHunter | In the paper 'CoverHunter: Cover Song Identification with Refined Attention and Alignments', what mAP score did the CoverHunter-256 model get on the SHS100K-TEST dataset? | 0.875 |
| MedConceptsQA | epfl-llm/meditron-70b | MEDITRON-70B: Scaling Medical Pretraining for Large Language Models | 2023-11-27 | https://arxiv.org/abs/2311.16079v1 | https://github.com/epfllm/meditron | In the paper 'MEDITRON-70B: Scaling Medical Pretraining for Large Language Models', what Accuracy score did the epfl-llm/meditron-70b model get on the MedConceptsQA dataset? | 25.360 |
| nuScenes | HyDRa | Unleashing HyDRa: Hybrid Fusion, Depth Consistency and Radar for Unified 3D Perception | 2024-03-12 | https://arxiv.org/abs/2403.07746v2 | https://github.com/phi-wol/hydra | In the paper 'Unleashing HyDRa: Hybrid Fusion, Depth Consistency and Radar for Unified 3D Perception', what AMOTA score did the HyDRa model get on the nuScenes dataset? | 0.584 |
| ASSET | GPT-175B (15 SARI-selected examples, random ordering) | Metric-Based In-context Learning: A Case Study in Text Simplification | 2023-07-27 | https://arxiv.org/abs/2307.14632v1 | https://github.com/nlp-ku/metric-based-in-context-learning | In the paper 'Metric-Based In-context Learning: A Case Study in Text Simplification', what SARI (EASSE>=0.2.1) score did the GPT-175B (15 SARI-selected examples, random ordering) model get on the ASSET dataset? | 47.94 |
| FSS-1000 (1-shot) | DCAMA (DifFSS, ResNet-50) | DifFSS: Diffusion Model for Few-Shot Semantic Segmentation | 2023-07-03 | https://arxiv.org/abs/2307.00773v3 | https://github.com/TrinitialChan/DifFSS | In the paper 'DifFSS: Diffusion Model for Few-Shot Semantic Segmentation', what Mean IoU score did the DCAMA (DifFSS, ResNet-50) model get on the FSS-1000 (1-shot) dataset? | 88.4 |
| ACDC | EMCAD | EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation | 2024-05-11 | https://arxiv.org/abs/2405.06880v1 | https://github.com/sldgroup/emcad | In the paper 'EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation', what Dice Score did the EMCAD model get on the ACDC dataset? | 0.9212 |
| Beatles | Beat This! | Beat this! Accurate beat tracking without DBN postprocessing | 2024-07-31 | https://arxiv.org/abs/2407.21658v1 | https://github.com/CPJKU/beat_this | In the paper 'Beat this! Accurate beat tracking without DBN postprocessing', what F1 score did the Beat This! model get on the Beatles dataset? | 88.8 |
| MMNeedle | Gemini Pro 1.5 | Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context | 2024-03-08 | https://arxiv.org/abs/2403.05530v4 | https://github.com/dlvuldet/primevul | In the paper 'Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context', what 1 Image, 2*2 Stitching, Exact Accuracy score did the Gemini Pro 1.5 model get on the MMNeedle dataset? | 90.34 |
| Perception Test | TraveLER | TraveLER: A Modular Multi-LMM Agent Framework for Video Question-Answering | 2024-04-01 | https://arxiv.org/abs/2404.01476v2 | https://github.com/traveler-framework/traveler | In the paper 'TraveLER: A Modular Multi-LMM Agent Framework for Video Question-Answering', what Accuracy (Top-1) score did the TraveLER model get on the Perception Test dataset? | 50.2 |
| Impact Plate | HCMT | Learning Flexible Body Collision Dynamics with Hierarchical Contact Mesh Transformer | 2023-12-19 | https://arxiv.org/abs/2312.12467v3 | https://github.com/yuyudeep/hcmt | In the paper 'Learning Flexible Body Collision Dynamics with Hierarchical Contact Mesh Transformer', what Rollout RMSE-all [1e3] Position score did the HCMT model get on the Impact Plate dataset? | 20.71±0.57 |
| S3DIS Area5 | PPT + SparseUNet | Towards Large-scale 3D Representation Learning with Multi-dataset Point Prompt Training | 2023-08-18 | https://arxiv.org/abs/2308.09718v2 | https://github.com/Pointcept/Pointcept | In the paper 'Towards Large-scale 3D Representation Learning with Multi-dataset Point Prompt Training', what mIoU score did the PPT + SparseUNet model get on the S3DIS Area5 dataset? | 72.7 |
| CAMELYON16 | CAMIL (CAMIL-G) | CAMIL: Context-Aware Multiple Instance Learning for Cancer Detection and Subtyping in Whole Slide Images | 2023-05-09 | https://arxiv.org/abs/2305.05314v3 | https://github.com/olgarithmics/ICLR_CAMIL | In the paper 'CAMIL: Context-Aware Multiple Instance Learning for Cancer Detection and Subtyping in Whole Slide Images', what AUC score did the CAMIL (CAMIL-G) model get on the CAMELYON16 dataset? | 0.95 |
| WOST | CLIP4STR-L | CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model | 2023-05-23 | https://arxiv.org/abs/2305.14014v3 | https://github.com/VamosC/CLIP4STR | In the paper 'CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model', what 1:1 Accuracy score did the CLIP4STR-L model get on the WOST dataset? | 88.8 |
| FGVC-Aircraft | RPO | Read-only Prompt Optimization for Vision-Language Few-shot Learning | 2023-08-29 | https://arxiv.org/abs/2308.14960v2 | https://github.com/mlvlab/rpo | In the paper 'Read-only Prompt Optimization for Vision-Language Few-shot Learning', what Harmonic mean score did the RPO model get on the FGVC-Aircraft dataset? | 35.70 |
| LRS3 | RTFS-Net-6 | RTFS-Net: Recurrent Time-Frequency Modelling for Efficient Audio-Visual Speech Separation | 2023-09-29 | https://arxiv.org/abs/2309.17189v4 | https://github.com/spkgyk/RTFS-Net | In the paper 'RTFS-Net: Recurrent Time-Frequency Modelling for Efficient Audio-Visual Speech Separation', what SI-SNRi score did the RTFS-Net-6 model get on the LRS3 dataset? | 16.9 |
| Cifar100-B0(10 tasks)-no-exemplars | SEED | Divide and not forget: Ensemble of selectively trained experts in Continual Learning | 2024-01-18 | https://arxiv.org/abs/2401.10191v3 | https://github.com/grypesc/seed | In the paper 'Divide and not forget: Ensemble of selectively trained experts in Continual Learning', what Average Incremental Accuracy score did the SEED model get on the Cifar100-B0(10 tasks)-no-exemplars dataset? | 61.7 |
| COCO 10% labeled data | MixPL | Mixed Pseudo Labels for Semi-Supervised Object Detection | 2023-12-12 | https://arxiv.org/abs/2312.07006v1 | https://github.com/czm369/mixpl | In the paper 'Mixed Pseudo Labels for Semi-Supervised Object Detection', what mAP score did the MixPL model get on the COCO 10% labeled data dataset? | 44.6 |
| MBPP | o1-mini + MapCoder (Hamming.ai) | MapCoder: Multi-Agent Code Generation for Competitive Problem Solving | 2024-05-18 | https://arxiv.org/abs/2405.11403v1 | https://github.com/md-ashraful-pramanik/mapcoder | In the paper 'MapCoder: Multi-Agent Code Generation for Competitive Problem Solving', what Accuracy score did the o1-mini + MapCoder (Hamming.ai) model get on the MBPP dataset? | 93.2 |
| DAVIS 2017 (val) | DEVA (ReferFormer) | Tracking Anything with Decoupled Video Segmentation | 2023-09-07 | https://arxiv.org/abs/2309.03903v1 | https://github.com/hkchengrex/Tracking-Anything-with-DEVA | In the paper 'Tracking Anything with Decoupled Video Segmentation', what J&F 1st frame score did the DEVA (ReferFormer) model get on the DAVIS 2017 (val) dataset? | 66.3 |
Natural Questions | DPA-RAG | Understand What LLM Needs: Dual Preference Alignment for Retrieval-Augmented Generation | 2024-06-26T00:00:00 | https://arxiv.org/abs/2406.18676v2 | [
"https://github.com/dongguanting/dpa-rag"
] | In the paper 'Understand What LLM Needs: Dual Preference Alignment for Retrieval-Augmented Generation', what EM score did the DPA-RAG model get on the Natural Questions dataset
| 59.19 |
Eynsham | BoQ (ResNet-50) | BoQ: A Place is Worth a Bag of Learnable Queries | 2024-05-12T00:00:00 | https://arxiv.org/abs/2405.07364v3 | [
"https://github.com/amaralibey/bag-of-queries"
] | In the paper 'BoQ: A Place is Worth a Bag of Learnable Queries', what Recall@1 score did the BoQ (ResNet-50) model get on the Eynsham dataset
| 91.3 |
RSICD | RemoteCLIP | RemoteCLIP: A Vision Language Foundation Model for Remote Sensing | 2023-06-19T00:00:00 | https://arxiv.org/abs/2306.11029v4 | [
"https://github.com/chendelong1999/remoteclip"
] | In the paper 'RemoteCLIP: A Vision Language Foundation Model for Remote Sensing', what Mean Recall score did the RemoteCLIP model get on the RSICD dataset
| 36.35% |
AISHELL-1 | BAT | BAT: Boundary aware transducer for memory-efficient and low-latency ASR | 2023-05-19T00:00:00 | https://arxiv.org/abs/2305.11571v1 | [
"https://github.com/alibaba-damo-academy/FunASR"
] | In the paper 'BAT: Boundary aware transducer for memory-efficient and low-latency ASR', what Word Error Rate (WER) score did the BAT model get on the AISHELL-1 dataset
| 4.97 |
Replica | OpenMask3D | OpenMask3D: Open-Vocabulary 3D Instance Segmentation | 2023-06-23T00:00:00 | https://arxiv.org/abs/2306.13631v2 | [
"https://github.com/OpenMask3D/openmask3d"
] | In the paper 'OpenMask3D: Open-Vocabulary 3D Instance Segmentation', what mAP score did the OpenMask3D model get on the Replica dataset
| 13.1 |
LingOly | Gemma 7B | LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low-Resource and Extinct Languages | 2024-06-10T00:00:00 | https://arxiv.org/abs/2406.06196v3 | [
"https://github.com/am-bean/lingOly"
] | In the paper 'LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low-Resource and Extinct Languages', what Exact Match Accuracy score did the Gemma 7B model get on the LingOly dataset
| 4.9% |
USNA-Cn2 (short-duration) | Minute Climatology | Effective Benchmarks for Optical Turbulence Modeling | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03573v1 | [
"https://github.com/cdjellen/otbench"
] | In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the Minute Climatology model get on the USNA-Cn2 (short-duration) dataset
| 0.452 |
TinyImageNet | ResNet50 | Guarding Barlow Twins Against Overfitting with Mixed Samples | 2023-12-04T00:00:00 | https://arxiv.org/abs/2312.02151v1 | [
"https://github.com/wgcban/mix-bt"
] | In the paper 'Guarding Barlow Twins Against Overfitting with Mixed Samples', what average top-1 classification accuracy score did the ResNet50 model get on the TinyImageNet dataset
| 51.84 |
MultiRC | PaLM 2-S (one-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | [
"https://github.com/eternityyw/tram-benchmark"
] | In the paper 'PaLM 2 Technical Report', what F1 score did the PaLM 2-S (one-shot) model get on the MultiRC dataset
| 84.0 |
Manga109 - 4x upscaling | DRCT-L | DRCT: Saving Image Super-resolution away from Information Bottleneck | 2024-03-31T00:00:00 | https://arxiv.org/abs/2404.00722v5 | [
"https://github.com/ming053l/drct"
] | In the paper 'DRCT: Saving Image Super-resolution away from Information Bottleneck', what PSNR score did the DRCT-L model get on the Manga109 - 4x upscaling dataset
| 33.14 |
REBUS | BLIP2-FLAN-T5-XXL | REBUS: A Robust Evaluation Benchmark of Understanding Symbols | 2024-01-11T00:00:00 | https://arxiv.org/abs/2401.05604v2 | [
"https://github.com/cvndsh/rebus"
] | In the paper 'REBUS: A Robust Evaluation Benchmark of Understanding Symbols', what Accuracy score did the BLIP2-FLAN-T5-XXL model get on the REBUS dataset
| 0.9 |
Amazon-Google | Meta-Llama-3.1-70B-Instruct | Fine-tuning Large Language Models for Entity Matching | 2024-09-12T00:00:00 | https://arxiv.org/abs/2409.08185v1 | [
"https://github.com/wbsg-uni-mannheim/tailormatch"
] | In the paper 'Fine-tuning Large Language Models for Entity Matching', what F1 (%) score did the Meta-Llama-3.1-70B-Instruct model get on the Amazon-Google dataset
| 51.44 |
MOSE | DEVA (no OVIS) | Tracking Anything with Decoupled Video Segmentation | 2023-09-07T00:00:00 | https://arxiv.org/abs/2309.03903v1 | [
"https://github.com/hkchengrex/Tracking-Anything-with-DEVA"
] | In the paper 'Tracking Anything with Decoupled Video Segmentation', what J&F score did the DEVA (no OVIS) model get on the MOSE dataset
| 60.0 |
MSCOCO | CLIPSelf | CLIPSelf: Vision Transformer Distills Itself for Open-Vocabulary Dense Prediction | 2023-10-02T00:00:00 | https://arxiv.org/abs/2310.01403v2 | [
"https://github.com/wusize/clipself"
] | In the paper 'CLIPSelf: Vision Transformer Distills Itself for Open-Vocabulary Dense Prediction', what AP 0.5 score did the CLIPSelf model get on the MSCOCO dataset
| 44.3 |
NeedForSpeed | HIPTrack | HIPTrack: Visual Tracking with Historical Prompts | 2023-11-03T00:00:00 | https://arxiv.org/abs/2311.02072v2 | [
"https://github.com/wenruicai/hiptrack"
] | In the paper 'HIPTrack: Visual Tracking with Historical Prompts', what AUC score did the HIPTrack model get on the NeedForSpeed dataset
| 0.681 |
WebQuestions | PaLM 2-L (one-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | [
"https://github.com/eternityyw/tram-benchmark"
] | In the paper 'PaLM 2 Technical Report', what EM score did the PaLM 2-L (one-shot) model get on the WebQuestions dataset
| 28.2 |
WHU-CD | RFL-CDNet | RFL-CDNet: Towards Accurate Change Detection via Richer Feature Learning | 2024-04-27T00:00:00 | https://arxiv.org/abs/2404.17765v1 | [
"https://github.com/hhaizee/rfl-cdnet"
] | In the paper 'RFL-CDNet: Towards Accurate Change Detection via Richer Feature Learning', what F1 score did the RFL-CDNet model get on the WHU-CD dataset
| 91.39 |
voraus-AD | MVT-Flow | The voraus-AD Dataset for Anomaly Detection in Robot Applications | 2023-11-08T00:00:00 | https://arxiv.org/abs/2311.04765v1 | [
"https://github.com/vorausrobotik/voraus-ad-dataset"
] | In the paper 'The voraus-AD Dataset for Anomaly Detection in Robot Applications', what Avg. Detection AUROC score did the MVT-Flow model get on the voraus-AD dataset
| 93.6 |
Astock | SRL&SDPG | FinReport: Explainable Stock Earnings Forecasting via News Factor Analyzing Model | 2024-03-05T00:00:00 | https://arxiv.org/abs/2403.02647v1 | [
"https://github.com/frinkleko/finreport"
] | In the paper 'FinReport: Explainable Stock Earnings Forecasting via News Factor Analyzing Model', what Accuracy score did the SRL&SDPG model get on the Astock dataset
| 66.10 |
VisA | GLAD | GLAD: Towards Better Reconstruction with Global and Local Adaptive Diffusion Models for Unsupervised Anomaly Detection | 2024-06-11T00:00:00 | https://arxiv.org/abs/2406.07487v3 | [
"https://github.com/hyao1/glad"
] | In the paper 'GLAD: Towards Better Reconstruction with Global and Local Adaptive Diffusion Models for Unsupervised Anomaly Detection', what Detection AUROC score did the GLAD model get on the VisA dataset
| 99.5 |
SOTS Outdoor | MixDehazeNet | MixDehazeNet : Mix Structure Block For Image Dehazing Network | 2023-05-28T00:00:00 | https://arxiv.org/abs/2305.17654v1 | [
"https://github.com/ameryxiong/mixdehazenet"
] | In the paper 'MixDehazeNet : Mix Structure Block For Image Dehazing Network', what PSNR score did the MixDehazeNet model get on the SOTS Outdoor dataset
| 36.50 |
FSS-1000 (1-shot) | Annotation-free FSS (Without Annotation,ResNet-50) | Self-supervised Few-shot Learning for Semantic Segmentation: An Annotation-free Approach | 2023-07-26T00:00:00 | https://arxiv.org/abs/2307.14446v1 | [
"https://github.com/mindflow-institue/annotation_free_fewshot"
] | In the paper 'Self-supervised Few-shot Learning for Semantic Segmentation: An Annotation-free Approach', what Mean IoU score did the Annotation-free FSS (Without Annotation,ResNet-50) model get on the FSS-1000 (1-shot) dataset
| 85 |
HierText | Hi-SAM | Hi-SAM: Marrying Segment Anything Model for Hierarchical Text Segmentation | 2024-01-31T00:00:00 | https://arxiv.org/abs/2401.17904v2 | [
"https://github.com/ymy-k/hi-sam"
] | In the paper 'Hi-SAM: Marrying Segment Anything Model for Hierarchical Text Segmentation', what F-score (average) score did the Hi-SAM model get on the HierText dataset
| 81.87 |
CSL-Daily | MSKA-SLR | Multi-Stream Keypoint Attention Network for Sign Language Recognition and Translation | 2024-05-09T00:00:00 | https://arxiv.org/abs/2405.05672v1 | [
"https://github.com/sutwangyan/MSKA"
] | In the paper 'Multi-Stream Keypoint Attention Network for Sign Language Recognition and Translation', what Word Error Rate (WER) score did the MSKA-SLR model get on the CSL-Daily dataset
| 27.8 |
ETTh1 (336) Multivariate | GPHT | Generative Pretrained Hierarchical Transformer for Time Series Forecasting | 2024-02-26T00:00:00 | https://arxiv.org/abs/2402.16516v2 | [
"https://github.com/icantnamemyself/gpht"
] | In the paper 'Generative Pretrained Hierarchical Transformer for Time Series Forecasting', what MSE score did the GPHT model get on the ETTh1 (336) Multivariate dataset
| 0.430 |
GSM-Plus | GPT-4 | GSM-Plus: A Comprehensive Benchmark for Evaluating the Robustness of LLMs as Mathematical Problem Solvers | 2024-02-29T00:00:00 | https://arxiv.org/abs/2402.19255v2 | [
"https://github.com/qtli/gsm-plus"
] | In the paper 'GSM-Plus: A Comprehensive Benchmark for Evaluating the Robustness of LLMs as Mathematical Problem Solvers', what 1:1 Accuracy score did the GPT-4 model get on the GSM-Plus dataset
| 85.6 |
SemanticKITTI | Mask4Former | Mask4Former: Mask Transformer for 4D Panoptic Segmentation | 2023-09-28T00:00:00 | https://arxiv.org/abs/2309.16133v2 | [
"https://github.com/YilmazKadir/Mask4Former"
] | In the paper 'Mask4Former: Mask Transformer for 4D Panoptic Segmentation', what LSTQ score did the Mask4Former model get on the SemanticKITTI dataset
| 68.4 |
DSEC-SEG | EventSAM | Segment Any Events via Weighted Adaptation of Pivotal Tokens | 2023-12-24T00:00:00 | https://arxiv.org/abs/2312.16222v1 | [
"https://github.com/happychenpipi/eventsam"
] | In the paper 'Segment Any Events via Weighted Adaptation of Pivotal Tokens', what mIoU score did the EventSAM model get on the DSEC-SEG dataset
| 0.38 |
BACE | G-Tuning | Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns | 2023-12-21T00:00:00 | https://arxiv.org/abs/2312.13583v1 | [
"https://github.com/zjunet/G-Tuning"
] | In the paper 'Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns', what ROC-AUC score did the G-Tuning model get on the BACE dataset
| 84.79 |
DeLiVER | GeminiFusion | GeminiFusion: Efficient Pixel-wise Multimodal Fusion for Vision Transformer | 2024-06-03T00:00:00 | https://arxiv.org/abs/2406.01210v2 | [
"https://github.com/jiadingcn/geminifusion"
] | In the paper 'GeminiFusion: Efficient Pixel-wise Multimodal Fusion for Vision Transformer', what mIoU score did the GeminiFusion model get on the DeLiVER dataset
| 66.9 |
GSM8K | DART-Math-Llama3-70B-Prop2Diff (0-shot CoT, w/o code) | DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving | 2024-06-18T00:00:00 | https://arxiv.org/abs/2407.13690v1 | [
"https://github.com/hkust-nlp/dart-math"
] | In the paper 'DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving', what Accuracy score did the DART-Math-Llama3-70B-Prop2Diff (0-shot CoT, w/o code) model get on the GSM8K dataset
| 89.6 |
MSU SR-QA Dataset | TOPIQ trained on FLIVE | TOPIQ: A Top-down Approach from Semantics to Distortions for Image Quality Assessment | 2023-08-06T00:00:00 | https://arxiv.org/abs/2308.03060v1 | [
"https://github.com/chaofengc/iqa-pytorch"
] | In the paper 'TOPIQ: A Top-down Approach from Semantics to Distortions for Image Quality Assessment', what SROCC score did the TOPIQ trained on FLIVE model get on the MSU SR-QA Dataset dataset
| 0.34092 |
LES-AV | RRWNet | RRWNet: Recursive Refinement Network for Effective Retinal Artery/Vein Segmentation and Classification | 2024-02-05T00:00:00 | https://arxiv.org/abs/2402.03166v4 | [
"https://github.com/j-morano/rrwnet"
] | In the paper 'RRWNet: Recursive Refinement Network for Effective Retinal Artery/Vein Segmentation and Classification', what Accuracy score did the RRWNet model get on the LES-AV dataset
| 0.9481 |
E2E | self-mem + new data (fixed) | Self-training from Self-memory in Data-to-text Generation | 2024-01-19T00:00:00 | https://arxiv.org/abs/2401.10567v1 | [
"https://github.com/hoangthangta/stsm"
] | In the paper 'Self-training from Self-memory in Data-to-text Generation', what METEOR score did the self-mem + new data (fixed) model get on the E2E dataset
| 46.07 |
SAMSum | Llama 3 8B | Hierarchical Prompting Taxonomy: A Universal Evaluation Framework for Large Language Models Aligned with Human Cognitive Principles | 2024-06-18T00:00:00 | https://arxiv.org/abs/2406.12644v4 | [
"https://github.com/devichand579/HPT"
] | In the paper 'Hierarchical Prompting Taxonomy: A Universal Evaluation Framework for Large Language Models Aligned with Human Cognitive Principles', what ROUGE-1 score did the Llama 3 8B model get on the SAMSum dataset
| 0.29996 |
CARLA | ThinkTwice | Think Twice before Driving: Towards Scalable Decoders for End-to-End Autonomous Driving | 2023-05-10T00:00:00 | https://arxiv.org/abs/2305.06242v1 | [
"https://github.com/opendrivelab/thinktwice"
] | In the paper 'Think Twice before Driving: Towards Scalable Decoders for End-to-End Autonomous Driving', what Driving Score score did the ThinkTwice model get on the CARLA dataset
| 67 |
NYU Depth v2 | SGACNet (R34-NBt1D) | Spatial-information Guided Adaptive Context-aware Network for Efficient RGB-D Semantic Segmentation | 2023-08-11T00:00:00 | https://arxiv.org/abs/2308.06024v1 | [
"https://github.com/mvme-hbut/sgacnet"
] | In the paper 'Spatial-information Guided Adaptive Context-aware Network for Efficient RGB-D Semantic Segmentation', what Mean IoU score did the SGACNet (R34-NBt1D) model get on the NYU Depth v2 dataset
| 49.4% |