| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
| dacl10k v1 testdev | FPN EfficientNet-B4 w/ Aux loss | dacl10k: Benchmark for Semantic Bridge Damage Segmentation | 2023-09-01 | https://arxiv.org/abs/2309.00460v1 | https://github.com/phiyodr/dacl10k-toolkit | In the paper 'dacl10k: Benchmark for Semantic Bridge Damage Segmentation', what mIoU score did the FPN EfficientNet-B4 w/ Aux loss model get on the dacl10k v1 testdev dataset | 0.414 |
| MagnaTagATune (clean) | EAsT-KD + PaSST | Audio Embeddings as Teachers for Music Classification | 2023-06-30 | https://arxiv.org/abs/2306.17424v1 | https://github.com/suncerock/EAsT-music-classification | In the paper 'Audio Embeddings as Teachers for Music Classification', what ROC-AUC score did the EAsT-KD + PaSST model get on the MagnaTagATune (clean) dataset | 91.5 |
| Temp8 | HAHE | HAHE: Hierarchical Attention for Hyper-Relational Knowledge Graphs in Global and Local Level | 2023-05-11 | https://arxiv.org/abs/2305.06588v2 | https://github.com/lhrlab/hahe | In the paper 'HAHE: Hierarchical Attention for Hyper-Relational Knowledge Graphs in Global and Local Level', what MRR score did the HAHE model get on the Temp8 dataset | 0.368 |
| TurkCorpus | GPT-175B (6 SARI-selected examples, high/low) | Metric-Based In-context Learning: A Case Study in Text Simplification | 2023-07-27 | https://arxiv.org/abs/2307.14632v1 | https://github.com/nlp-ku/metric-based-in-context-learning | In the paper 'Metric-Based In-context Learning: A Case Study in Text Simplification', what SARI (EASSE>=0.2.1) score did the GPT-175B (6 SARI-selected examples, high/low) model get on the TurkCorpus dataset | 43.46 |
| IMDb | Llama-2-70b-chat (0-shot) | LlamBERT: Large-scale low-cost data annotation in NLP | 2024-03-23 | https://arxiv.org/abs/2403.15938v1 | https://github.com/aielte-research/llambert | In the paper 'LlamBERT: Large-scale low-cost data annotation in NLP', what Accuracy score did the Llama-2-70b-chat (0-shot) model get on the IMDb dataset | 95.39 |
| PCam | TURTLE (CLIP + DINOv2) | Let Go of Your Labels with Unsupervised Transfer | 2024-06-11 | https://arxiv.org/abs/2406.07236v1 | https://github.com/mlbio-epfl/turtle | In the paper 'Let Go of Your Labels with Unsupervised Transfer', what Accuracy score did the TURTLE (CLIP + DINOv2) model get on the PCam dataset | 52.0 |
| ADE20K Labels-to-Photos | USIS-Wavelet | Wavelet-based Unsupervised Label-to-Image Translation | 2023-05-16 | https://arxiv.org/abs/2305.09647v1 | https://github.com/GeorgeEskandar/USIS-Unsupervised-Semantic-Image-Synthesis | In the paper 'Wavelet-based Unsupervised Label-to-Image Translation', what mIoU score did the USIS-Wavelet model get on the ADE20K Labels-to-Photos dataset | 16.95 |
| EPIC-KITCHENS-100 | CAST-B/16 | CAST: Cross-Attention in Space and Time for Video Action Recognition | 2023-11-30 | https://arxiv.org/abs/2311.18825v2 | https://github.com/khu-vll/cast | In the paper 'CAST: Cross-Attention in Space and Time for Video Action Recognition', what Action@1 score did the CAST-B/16 model get on the EPIC-KITCHENS-100 dataset | 49.3 |
| Cityscapes to ACDC | HALO | Hyperbolic Active Learning for Semantic Segmentation under Domain Shift | 2023-06-19 | https://arxiv.org/abs/2306.11180v5 | https://github.com/paolomandica/HALO | In the paper 'Hyperbolic Active Learning for Semantic Segmentation under Domain Shift', what mIoU score did the HALO model get on the Cityscapes to ACDC dataset | 71.9 |
| DAVIS-S | BiRefNet (DUTS, UHRSD) | Bilateral Reference for High-Resolution Dichotomous Image Segmentation | 2024-01-07 | https://arxiv.org/abs/2401.03407v6 | https://github.com/zhengpeng7/birefnet | In the paper 'Bilateral Reference for High-Resolution Dichotomous Image Segmentation', what S-measure score did the BiRefNet (DUTS, UHRSD) model get on the DAVIS-S dataset | 0.975 |
| Pittsburgh-250k-test | SelaVPR | Towards Seamless Adaptation of Pre-trained Models for Visual Place Recognition | 2024-02-22 | https://arxiv.org/abs/2402.14505v3 | https://github.com/Lu-Feng/SelaVPR | In the paper 'Towards Seamless Adaptation of Pre-trained Models for Visual Place Recognition', what Recall@1 score did the SelaVPR model get on the Pittsburgh-250k-test dataset | 95.7 |
| LSMDC | COSA | COSA: Concatenated Sample Pretrained Vision-Language Foundation Model | 2023-06-15 | https://arxiv.org/abs/2306.09085v1 | https://github.com/txh-mercury/cosa | In the paper 'COSA: Concatenated Sample Pretrained Vision-Language Foundation Model', what text-to-video R@1 score did the COSA model get on the LSMDC dataset | 39.4 |
| TAP-Vid-DAVIS-First | LocoTrack-B | Local All-Pair Correspondence for Point Tracking | 2024-07-22 | https://arxiv.org/abs/2407.15420v1 | https://github.com/ku-cvlab/locotrack | In the paper 'Local All-Pair Correspondence for Point Tracking', what Average Jaccard score did the LocoTrack-B model get on the TAP-Vid-DAVIS-First dataset | 64.8 |
| Winograd Schema Challenge | PaLM 2-L (1-shot) | PaLM 2 Technical Report | 2023-05-17 | https://arxiv.org/abs/2305.10403v3 | https://github.com/eternityyw/tram-benchmark | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-L (1-shot) model get on the Winograd Schema Challenge dataset | 86.9 |
| RTE | PaLM 2-S (1-shot) | PaLM 2 Technical Report | 2023-05-17 | https://arxiv.org/abs/2305.10403v3 | https://github.com/eternityyw/tram-benchmark | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-S (1-shot) model get on the RTE dataset | 78.7% |
| TVBench | Gemini 1.5 Pro | Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context | 2024-03-08 | https://arxiv.org/abs/2403.05530v4 | https://github.com/dlvuldet/primevul | In the paper 'Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context', what Average Accuracy score did the Gemini 1.5 Pro model get on the TVBench dataset | 47.1 |
| DVS128 Gesture | S-TLLR | S-TLLR: STDP-inspired Temporal Local Learning Rule for Spiking Neural Networks | 2023-06-27 | https://arxiv.org/abs/2306.15220v4 | https://github.com/mapolinario94/s-tllr | In the paper 'S-TLLR: STDP-inspired Temporal Local Learning Rule for Spiking Neural Networks', what Accuracy (%) score did the S-TLLR model get on the DVS128 Gesture dataset | 97.72 |
| OpenLane-V2 val | TopoLogic | TopoLogic: An Interpretable Pipeline for Lane Topology Reasoning on Driving Scenes | 2024-05-23 | https://arxiv.org/abs/2405.14747v1 | https://github.com/franpin/topologic | In the paper 'TopoLogic: An Interpretable Pipeline for Lane Topology Reasoning on Driving Scenes', what mAP score did the TopoLogic model get on the OpenLane-V2 val dataset | 33.2 |
| VoxCeleb | ReDimNet-B4-LM (6.3M) | Reshape Dimensions Network for Speaker Recognition | 2024-07-25 | https://arxiv.org/abs/2407.18223v2 | https://github.com/IDRnD/ReDimNet | In the paper 'Reshape Dimensions Network for Speaker Recognition', what EER score did the ReDimNet-B4-LM (6.3M) model get on the VoxCeleb dataset | 0.51 |
| DALES | SuperCluster | Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering | 2024-01-12 | https://arxiv.org/abs/2401.06704v2 | https://github.com/drprojects/superpoint_transformer | In the paper 'Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering', what mIoU score did the SuperCluster model get on the DALES dataset | 77.3 |
| DomainNet | PromptStyler (CLIP, ViT-L/14) | PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization | 2023-07-27 | https://arxiv.org/abs/2307.15199v2 | https://github.com/zhanghr2001/promptta | In the paper 'PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization', what Average Accuracy score did the PromptStyler (CLIP, ViT-L/14) model get on the DomainNet dataset | 65.5 |
| NCBI-disease | SpanModel + SequenceLabelingModel | Comparing and combining some popular NER approaches on Biomedical tasks | 2023-05-30 | https://arxiv.org/abs/2305.19120v1 | https://github.com/flyingmothman/bionlp | In the paper 'Comparing and combining some popular NER approaches on Biomedical tasks', what F1 score did the SpanModel + SequenceLabelingModel model get on the NCBI-disease dataset | 89.6 |
| HellaSwag | LLaMA2-7b | GIFT-SW: Gaussian noise Injected Fine-Tuning of Salient Weights for LLMs | 2024-08-27 | https://arxiv.org/abs/2408.15300v1 | https://github.com/On-Point-RND/GIFT_SW | In the paper 'GIFT-SW: Gaussian noise Injected Fine-Tuning of Salient Weights for LLMs', what Accuracy (%) score did the LLaMA2-7b model get on the HellaSwag dataset | 76.68 |
| InfographicVQA | Gemini Ultra (pixel only) | Gemini: A Family of Highly Capable Multimodal Models | 2023-12-19 | https://arxiv.org/abs/2312.11805v4 | https://github.com/valdecy/pybibx | In the paper 'Gemini: A Family of Highly Capable Multimodal Models', what ANLS score did the Gemini Ultra (pixel only) model get on the InfographicVQA dataset | 80.3 |
| MATH | OpenChat-3.5 7B | OpenChat: Advancing Open-source Language Models with Mixed-Quality Data | 2023-09-20 | https://arxiv.org/abs/2309.11235v2 | https://github.com/imoneoi/openchat | In the paper 'OpenChat: Advancing Open-source Language Models with Mixed-Quality Data', what Accuracy score did the OpenChat-3.5 7B model get on the MATH dataset | 28.6 |
| Penn Treebank | Hashing + XLNet | To be Continuous, or to be Discrete, Those are Bits of Questions | 2024-06-12 | https://arxiv.org/abs/2406.07812v1 | https://github.com/speedcell4/parserker | In the paper 'To be Continuous, or to be Discrete, Those are Bits of Questions', what F1 score did the Hashing + XLNet model get on the Penn Treebank dataset | 96.43 |
| NYU Depth v2 | DFormer-S | DFormer: Rethinking RGBD Representation Learning for Semantic Segmentation | 2023-09-18 | https://arxiv.org/abs/2309.09668v2 | https://github.com/VCIP-RGBD/DFormer | In the paper 'DFormer: Rethinking RGBD Representation Learning for Semantic Segmentation', what Mean IoU score did the DFormer-S model get on the NYU Depth v2 dataset | 53.6% |
| InterHand2.6M | EANet | Extract-and-Adaptation Network for 3D Interacting Hand Mesh Recovery | 2023-09-05 | https://arxiv.org/abs/2309.01943v1 | https://github.com/jkpark0825/eanet | In the paper 'Extract-and-Adaptation Network for 3D Interacting Hand Mesh Recovery', what MPJPE Test score did the EANet model get on the InterHand2.6M dataset | 5.88 |
| InfoSeek | RA-VQAv2 w/ PreFLMR | PreFLMR: Scaling Up Fine-Grained Late-Interaction Multi-modal Retrievers | 2024-02-13 | https://arxiv.org/abs/2402.08327v2 | https://github.com/linweizhedragon/retrieval-augmented-visual-question-answering | In the paper 'PreFLMR: Scaling Up Fine-Grained Late-Interaction Multi-modal Retrievers', what Accuracy score did the RA-VQAv2 w/ PreFLMR model get on the InfoSeek dataset | 30.65 |
| AfriSenti | SACL-XLMR | UCAS-IIE-NLP at SemEval-2023 Task 12: Enhancing Generalization of Multilingual BERT for Low-resource Sentiment Analysis | 2023-06-01 | https://arxiv.org/abs/2306.01093v1 | https://github.com/zerohd4869/sacl | In the paper 'UCAS-IIE-NLP at SemEval-2023 Task 12: Enhancing Generalization of Multilingual BERT for Low-resource Sentiment Analysis', what weighted-F1 score did the SACL-XLMR model get on the AfriSenti dataset | 0.589 |
| UTKFace | BayesAgg-MTL | Bayesian Uncertainty for Gradient Aggregation in Multi-Task Learning | 2024-02-06 | https://arxiv.org/abs/2402.04005v2 | https://github.com/ssi-research/bayesagg_mtl | In the paper 'Bayesian Uncertainty for Gradient Aggregation in Multi-Task Learning', what delta_m score did the BayesAgg-MTL model get on the UTKFace dataset | -2.23 |
| GTZAN | Beat This! | Beat this! Accurate beat tracking without DBN postprocessing | 2024-07-31 | https://arxiv.org/abs/2407.21658v1 | https://github.com/CPJKU/beat_this | In the paper 'Beat this! Accurate beat tracking without DBN postprocessing', what F1 score did the Beat This! model get on the GTZAN dataset | 78.3 |
| FSC147 | SSD | Learning Spatial Similarity Distribution for Few-shot Object Counting | 2024-05-20 | https://arxiv.org/abs/2405.11770v1 | https://github.com/CBalance/SSD | In the paper 'Learning Spatial Similarity Distribution for Few-shot Object Counting', what MAE(val) score did the SSD model get on the FSC147 dataset | 9.73 |
| CSIQ | UNIQA | You Only Train Once: A Unified Framework for Both Full-Reference and No-Reference Image Quality Assessment | 2023-10-14 | https://arxiv.org/abs/2310.09560v2 | https://github.com/barcodereader/yoto | In the paper 'You Only Train Once: A Unified Framework for Both Full-Reference and No-Reference Image Quality Assessment', what SRCC score did the UNIQA model get on the CSIQ dataset | 0.964 |
| ARC (Challenge) | PaLM 2 (few-shot, CoT, SC) | PaLM 2 Technical Report | 2023-05-17 | https://arxiv.org/abs/2305.10403v3 | https://github.com/eternityyw/tram-benchmark | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2 (few-shot, CoT, SC) model get on the ARC (Challenge) dataset | 95.1 |
| Chameleon | M2M-GNN | Sign is Not a Remedy: Multiset-to-Multiset Message Passing for Learning on Heterophilic Graphs | 2024-05-31 | https://arxiv.org/abs/2405.20652v1 | https://github.com/Jinx-byebye/m2mgnn | In the paper 'Sign is Not a Remedy: Multiset-to-Multiset Message Passing for Learning on Heterophilic Graphs', what Accuracy score did the M2M-GNN model get on the Chameleon dataset | 75.20 ± 2.3 |
| GRAZPEDWRI-DX | YOLOv8+ResCBAM | YOLOv8-AM: YOLOv8 Based on Effective Attention Mechanisms for Pediatric Wrist Fracture Detection | 2024-02-14 | https://arxiv.org/abs/2402.09329v5 | https://github.com/ruiyangju/fracture_detection_improved_yolov8 | In the paper 'YOLOv8-AM: YOLOv8 Based on Effective Attention Mechanisms for Pediatric Wrist Fracture Detection', what mAP score did the YOLOv8+ResCBAM model get on the GRAZPEDWRI-DX dataset | 65.8 |
| Atari 2600 Robotank | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07 | https://arxiv.org/abs/2305.04180v3 | https://github.com/xinjinghao/color | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score did the ASL DDQN model get on the Atari 2600 Robotank dataset | 65.8 |
| PIQA | PaLM 2-S (1-shot) | PaLM 2 Technical Report | 2023-05-17 | https://arxiv.org/abs/2305.10403v3 | https://github.com/eternityyw/tram-benchmark | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-S (1-shot) model get on the PIQA dataset | 82.2 |
| ImageNet 64x64 | PaGoDA | PaGoDA: Progressive Growing of a One-Step Generator from a Low-Resolution Diffusion Teacher | 2024-05-23 | https://arxiv.org/abs/2405.14822v2 | https://github.com/sony/pagoda | In the paper 'PaGoDA: Progressive Growing of a One-Step Generator from a Low-Resolution Diffusion Teacher', what Inception Score did the PaGoDA model get on the ImageNet 64x64 dataset | 76.47 |
| Lipogram-e | GPT-2-no-fine-tuning | Most Language Models can be Poets too: An AI Writing Assistant and Constrained Text Generation Studio | 2023-06-28 | https://arxiv.org/abs/2306.15926v1 | https://github.com/hellisotherpeople/constrained-text-generation-studio | In the paper 'Most Language Models can be Poets too: An AI Writing Assistant and Constrained Text Generation Studio', what Ignored Constraint Error Rate score did the GPT-2-no-fine-tuning model get on the Lipogram-e dataset | 28.2% |
| VisDA2017 | PDA (CLIP, ResNet-101) | Prompt-based Distribution Alignment for Unsupervised Domain Adaptation | 2023-12-15 | https://arxiv.org/abs/2312.09553v2 | https://github.com/baishuanghao/prompt-based-distribution-alignment | In the paper 'Prompt-based Distribution Alignment for Unsupervised Domain Adaptation', what Accuracy score did the PDA (CLIP, ResNet-101) model get on the VisDA2017 dataset | 86.4 |
| Harmonix | Beat This! | Beat this! Accurate beat tracking without DBN postprocessing | 2024-07-31 | https://arxiv.org/abs/2407.21658v1 | https://github.com/CPJKU/beat_this | In the paper 'Beat this! Accurate beat tracking without DBN postprocessing', what F1 score did the Beat This! model get on the Harmonix dataset | 90.7 |
| TXL-PBC: a freely accessible labeled peripheral blood cell dataset | yolov8n | TXL-PBC: a freely accessible labeled peripheral blood cell dataset | 2024-07-18 | https://arxiv.org/abs/2407.13214v1 | https://github.com/lugan113/TXL-PBC_Dataset | In the paper 'TXL-PBC: a freely accessible labeled peripheral blood cell dataset', what mAP50 score did the yolov8n model get on the TXL-PBC: a freely accessible labeled peripheral blood cell dataset dataset | 0.97 |
| SAMSum | Mistral 7B + SigExt | Salient Information Prompting to Steer Content in Prompt-based Abstractive Summarization | 2024-10-03 | https://arxiv.org/abs/2410.02741v2 | https://github.com/amazon-science/SigExt | In the paper 'Salient Information Prompting to Steer Content in Prompt-based Abstractive Summarization', what ROUGE-1 score did the Mistral 7B + SigExt model get on the SAMSum dataset | 44.1 |
| SVHN (40 Labels, ImageNet-100 Unlabeled) | UnMixMatch | Scaling Up Semi-supervised Learning with Unconstrained Unlabelled Data | 2023-06-02 | https://arxiv.org/abs/2306.01222v2 | https://github.com/shuvenduroy/unmixmatch | In the paper 'Scaling Up Semi-supervised Learning with Unconstrained Unlabelled Data', what Accuracy score did the UnMixMatch model get on the SVHN (40 Labels, ImageNet-100 Unlabeled) dataset | 72.9 |
| KoNViD-1k | ReLaX-VQA (finetuned on KoNViD-1k) | ReLaX-VQA: Residual Fragment and Layer Stack Extraction for Enhancing Video Quality Assessment | 2024-07-16 | https://arxiv.org/abs/2407.11496v1 | https://github.com/xinyiw915/relax-vqa | In the paper 'ReLaX-VQA: Residual Fragment and Layer Stack Extraction for Enhancing Video Quality Assessment', what PLCC score did the ReLaX-VQA (finetuned on KoNViD-1k) model get on the KoNViD-1k dataset | 0.8668 |
| WinoGrande | phi-1.5-web 1.3B (zero-shot) | Textbooks Are All You Need II: phi-1.5 technical report | 2023-09-11 | https://arxiv.org/abs/2309.05463v1 | https://github.com/knowlab/bi-weekly-paper-presentation | In the paper 'Textbooks Are All You Need II: phi-1.5 technical report', what Accuracy score did the phi-1.5-web 1.3B (zero-shot) model get on the WinoGrande dataset | 74.0 |
| VeRi-776 | CA-Jaccard | CA-Jaccard: Camera-aware Jaccard Distance for Person Re-identification | 2023-11-17 | https://arxiv.org/abs/2311.10605v2 | https://github.com/chen960/ca-jaccard | In the paper 'CA-Jaccard: Camera-aware Jaccard Distance for Person Re-identification', what mAP score did the CA-Jaccard model get on the VeRi-776 dataset | 81.4 |
| USNA-Cn2 (short-duration) | Hybrid Air-Water Temperature Difference | Effective Benchmarks for Optical Turbulence Modeling | 2024-01-07 | https://arxiv.org/abs/2401.03573v1 | https://github.com/cdjellen/otbench | In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the Hybrid Air-Water Temperature Difference model get on the USNA-Cn2 (short-duration) dataset | 0.303 |
| BEA-2019 (test) | Majority-voting ensemble on best 7 models | Pillars of Grammatical Error Correction: Comprehensive Inspection Of Contemporary Approaches In The Era of Large Language Models | 2024-04-23 | https://arxiv.org/abs/2404.14914v1 | https://github.com/grammarly/pillars-of-gec | In the paper 'Pillars of Grammatical Error Correction: Comprehensive Inspection Of Contemporary Approaches In The Era of Large Language Models', what F0.5 score did the Majority-voting ensemble on best 7 models model get on the BEA-2019 (test) dataset | 81.4 |
| Actor | GESN | Addressing Heterophily in Node Classification with Graph Echo State Networks | 2023-05-14 | https://arxiv.org/abs/2305.08233v2 | https://github.com/dtortorella/addressing-heterophily-gesn | In the paper 'Addressing Heterophily in Node Classification with Graph Echo State Networks', what Accuracy score did the GESN model get on the Actor dataset | 34.56 ± 0.76 |
| Tox21 | GIT-Mol(G+S) | GIT-Mol: A Multi-modal Large Language Model for Molecular Science with Graph, Image, and Text | 2023-08-14 | https://arxiv.org/abs/2308.06911v3 | https://github.com/ai-hpc-research-team/git-mol | In the paper 'GIT-Mol: A Multi-modal Large Language Model for Molecular Science with Graph, Image, and Text', what AUC score did the GIT-Mol(G+S) model get on the Tox21 dataset | 0.759 |
| APPS | CodeChain+WizardCoder-15b | CodeChain: Towards Modular Code Generation Through Chain of Self-revisions with Representative Sub-modules | 2023-10-13 | https://arxiv.org/abs/2310.08992v3 | https://github.com/SalesforceAIResearch/CodeChain | In the paper 'CodeChain: Towards Modular Code Generation Through Chain of Self-revisions with Representative Sub-modules', what Introductory Pass@1 score did the CodeChain+WizardCoder-15b model get on the APPS dataset | 26.29 |
| WOST | CLIP4STR-H (DFN-5B) | CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model | 2023-05-23 | https://arxiv.org/abs/2305.14014v3 | https://github.com/VamosC/CLIP4STR | In the paper 'CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model', what 1:1 Accuracy score did the CLIP4STR-H (DFN-5B) model get on the WOST dataset | 90.9 |
| MATH | OpenMath-CodeLlama-70B (w/ code, SC, k=50) | OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | 2024-02-15 | https://arxiv.org/abs/2402.10176v2 | https://github.com/kipok/nemo-skills | In the paper 'OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset', what Accuracy score did the OpenMath-CodeLlama-70B (w/ code, SC, k=50) model get on the MATH dataset | 60.4 |
| KITTI Test (Offline Methods) | MCTrack | MCTrack: A Unified 3D Multi-Object Tracking Framework for Autonomous Driving | 2024-09-23 | https://arxiv.org/abs/2409.16149v2 | https://github.com/megvii-research/mctrack | In the paper 'MCTrack: A Unified 3D Multi-Object Tracking Framework for Autonomous Driving', what HOTA score did the MCTrack model get on the KITTI Test (Offline Methods) dataset | 82.75 |
| OntoGUM | MTL-coref | Incorporating Singletons and Mention-based Features in Coreference Resolution via Multi-task Learning for Better Generalization | 2023-09-20 | https://arxiv.org/abs/2309.11582v1 | https://github.com/yilunzhu/coref-mtl | In the paper 'Incorporating Singletons and Mention-based Features in Coreference Resolution via Multi-task Learning for Better Generalization', what Avg F1 score did the MTL-coref model get on the OntoGUM dataset | 68.2 |
| SYSU-CD | MaskCD | MaskCD: A Remote Sensing Change Detection Network Based on Mask Classification | 2024-04-18 | https://arxiv.org/abs/2404.12081v1 | https://github.com/ericyu97/maskcd | In the paper 'MaskCD: A Remote Sensing Change Detection Network Based on Mask Classification', what F1 score did the MaskCD model get on the SYSU-CD dataset | 82.17 |
| nuScenes | FBMNet (Ours) | Multi-Modal 3D Object Detection by Box Matching | 2023-05-12 | https://arxiv.org/abs/2305.07713v1 | https://github.com/happinesslz/fbmnet | In the paper 'Multi-Modal 3D Object Detection by Box Matching', what NDS score did the FBMNet (Ours) model get on the nuScenes dataset | 0.721 |
| SICK | PromptEOL+CSE+OPT-13B | Scaling Sentence Embeddings with Large Language Models | 2023-07-31 | https://arxiv.org/abs/2307.16645v1 | https://github.com/kongds/scaling_sentemb | In the paper 'Scaling Sentence Embeddings with Large Language Models', what Spearman Correlation score did the PromptEOL+CSE+OPT-13B model get on the SICK dataset | 0.8206 |
| ADE20K | DAT-B++ | DAT++: Spatially Dynamic Vision Transformer with Deformable Attention | 2023-09-04 | https://arxiv.org/abs/2309.01430v1 | https://github.com/leaplabthu/dat | In the paper 'DAT++: Spatially Dynamic Vision Transformer with Deformable Attention', what Validation mIoU score did the DAT-B++ model get on the ADE20K dataset | 51.5 |
| Cityscapes | TTD (TCL) | TTD: Text-Tag Self-Distillation Enhancing Image-Text Alignment in CLIP to Alleviate Single Tag Bias | 2024-03-30 | https://arxiv.org/abs/2404.00384v2 | https://github.com/shjo-april/TTD | In the paper 'TTD: Text-Tag Self-Distillation Enhancing Image-Text Alignment in CLIP to Alleviate Single Tag Bias', what mIoU score did the TTD (TCL) model get on the Cityscapes dataset | 32.0 |
| WSJ0-2mix | SPGM | SPGM: Prioritizing Local Features for enhanced speech separation performance | 2023-09-22 | https://arxiv.org/abs/2309.12608v2 | https://huggingface.co/yipjiaqi/spgm | In the paper 'SPGM: Prioritizing Local Features for enhanced speech separation performance', what SI-SDRi score did the SPGM model get on the WSJ0-2mix dataset | 22.1 |
| VNHSGE-Chemistry | Bing Chat | VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models | 2023-05-20 | https://arxiv.org/abs/2305.12199v1 | https://github.com/xdao85/vnhsge | In the paper 'VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models', what Accuracy score did the Bing Chat model get on the VNHSGE-Chemistry dataset | 52.5 |
| MM-Vet | IXC-2.5-7B | InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output | 2024-07-03 | https://arxiv.org/abs/2407.03320v1 | https://github.com/internlm/internlm-xcomposer | In the paper 'InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output', what GPT-4 score did the IXC-2.5-7B model get on the MM-Vet dataset | 51.7 |
| Caltech-101 | ZLaP* | Label Propagation for Zero-shot Classification with Vision-Language Models | 2024-04-05 | https://arxiv.org/abs/2404.04072v1 | https://github.com/vladan-stojnic/zlap | In the paper 'Label Propagation for Zero-shot Classification with Vision-Language Models', what Accuracy score did the ZLaP* model get on the Caltech-101 dataset | 83.1 |
| A2D Sentences | SgMg (Video-Swin-B) | Spectrum-guided Multi-granularity Referring Video Object Segmentation | 2023-07-25 | https://arxiv.org/abs/2307.13537v1 | https://github.com/bo-miao/sgmg | In the paper 'Spectrum-guided Multi-granularity Referring Video Object Segmentation', what Precision@0.5 score did the SgMg (Video-Swin-B) model get on the A2D Sentences dataset | 0.843 |
| NExT-QA | GF | Glance and Focus: Memory Prompting for Multi-Event Video Question Answering | 2024-01-03 | https://arxiv.org/abs/2401.01529v1 | https://github.com/byz0e/glance-focus | In the paper 'Glance and Focus: Memory Prompting for Multi-Event Video Question Answering', what Accuracy score did the GF model get on the NExT-QA dataset | 58.83 |
| Long Video Dataset | READMem-QDMN (sr=1) | READMem: Robust Embedding Association for a Diverse Memory in Unconstrained Video Object Segmentation | 2023-05-22 | https://arxiv.org/abs/2305.12823v2 | https://github.com/Vujas-Eteph/READMem | In the paper 'READMem: Robust Embedding Association for a Diverse Memory in Unconstrained Video Object Segmentation', what J&F score did the READMem-QDMN (sr=1) model get on the Long Video Dataset dataset | 84.3 |
| LSUN Bedroom 64 x 64 | WGAN-GP + TTUR + Alex-Adam | Fundamental Benefit of Alternating Updates in Minimax Optimization | 2024-02-16 | https://arxiv.org/abs/2402.10475v2 | https://github.com/hanseuljo/alex-gda | In the paper 'Fundamental Benefit of Alternating Updates in Minimax Optimization', what FID score did the WGAN-GP + TTUR + Alex-Adam model get on the LSUN Bedroom 64 x 64 dataset | 6.3 |
| Texas | GraphSAGE + UniGAP | UniGAP: A Universal and Adaptive Graph Upsampling Approach to Mitigate Over-Smoothing in Node Classification Tasks | 2024-07-28 | https://arxiv.org/abs/2407.19420v1 | https://github.com/wangxiaotang0906/unigap | In the paper 'UniGAP: A Universal and Adaptive Graph Upsampling Approach to Mitigate Over-Smoothing in Node Classification Tasks', what Accuracy score did the GraphSAGE + UniGAP model get on the Texas dataset | 86.52 ± 4.8 |
| TapCorrect | Beat This! | Beat this! Accurate beat tracking without DBN postprocessing | 2024-07-31 | https://arxiv.org/abs/2407.21658v1 | https://github.com/CPJKU/beat_this | In the paper 'Beat this! Accurate beat tracking without DBN postprocessing', what F1 score did the Beat This! model get on the TapCorrect dataset | 93.0 |
| MM-Vet | VisionZip (Retain 64 Tokens) | VisionZip: Longer is Better but Not Necessary in Vision Language Models | 2024-12-05 | https://arxiv.org/abs/2412.04467v1 | https://github.com/dvlab-research/visionzip | In the paper 'VisionZip: Longer is Better but Not Necessary in Vision Language Models', what GPT-4 score did the VisionZip (Retain 64 Tokens) model get on the MM-Vet dataset | 31.7 |
| USNA-Cn2 (short-duration) | GBRT | Effective Benchmarks for Optical Turbulence Modeling | 2024-01-07 | https://arxiv.org/abs/2401.03573v1 | https://github.com/cdjellen/otbench | In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the GBRT model get on the USNA-Cn2 (short-duration) dataset | 0.299 |
| WikiTableQuestions | SynTQA (RF) | SynTQA: Synergistic Table-based Question Answering via Mixture of Text-to-SQL and E2E TQA | 2024-09-25 | https://arxiv.org/abs/2409.16682v2 | https://github.com/siyue-zhang/SynTableQA | In the paper 'SynTQA: Synergistic Table-based Question Answering via Mixture of Text-to-SQL and E2E TQA', what Accuracy (Dev) score did the SynTQA (RF) model get on the WikiTableQuestions dataset | / |
| MusicCaps | FLUXMusic | FLUX that Plays Music | 2024-09-01 | https://arxiv.org/abs/2409.00587v1 | https://github.com/feizc/fluxmusic | In the paper 'FLUX that Plays Music', what FAD VGG score did the FLUXMusic model get on the MusicCaps dataset | 1.43 |
| LaSOT-ext | SAMURAI-L | SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory | 2024-11-18 | https://arxiv.org/abs/2411.11922v2 | https://github.com/yangchris11/samurai | In the paper 'SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory', what AUC score did the SAMURAI-L model get on the LaSOT-ext dataset | 61.0 |
| FGVC Aircraft | Real-Guidance + CAL | Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation? | 2023-05-22 | https://arxiv.org/abs/2305.12954v1 | https://github.com/zhengli97/dm-kd | In the paper 'Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation?', what Harmonic mean score did the Real-Guidance + CAL model get on the FGVC Aircraft dataset | 34.5 |
| PubTabNet | MuTabNet | Multi-Cell Decoder and Mutual Learning for Table Structure and Character Recognition | 2024-04-20 | https://arxiv.org/abs/2404.13268v2 | https://github.com/JG1VPP/MuTabNet | In the paper 'Multi-Cell Decoder and Mutual Learning for Table Structure and Character Recognition', what TEDS (all samples) score did the MuTabNet model get on the PubTabNet dataset | 96.87 |
| DiDeMo | PAU | Prototype-based Aleatoric Uncertainty Quantification for Cross-modal Retrieval | 2023-09-29 | https://arxiv.org/abs/2309.17093v3 | https://github.com/leolee99/pau | In the paper 'Prototype-based Aleatoric Uncertainty Quantification for Cross-modal Retrieval', what text-to-video R@1 score did the PAU model get on the DiDeMo dataset | 48.6 |
| Nightrain | Turtle | Learning Truncated Causal History Model for Video Restoration | 2024-10-04 | https://arxiv.org/abs/2410.03936v2 | https://github.com/Ascend-Research/Turtle | In the paper 'Learning Truncated Causal History Model for Video Restoration', what PSNR score did the Turtle model get on the Nightrain dataset | 29.26 |
| STS14 | PromptEOL+CSE+OPT-13B | Scaling Sentence Embeddings with Large Language Models | 2023-07-31 | https://arxiv.org/abs/2307.16645v1 | https://github.com/kongds/scaling_sentemb | In the paper 'Scaling Sentence Embeddings with Large Language Models', what Spearman Correlation score did the PromptEOL+CSE+OPT-13B model get on the STS14 dataset | 0.8534 |
| HO-3D v3 | Hamba | Hamba: Single-view 3D Hand Reconstruction with Graph-guided Bi-Scanning Mamba | 2024-07-12 | https://arxiv.org/abs/2407.09646v2 | https://github.com/humansensinglab/Hamba | In the paper 'Hamba: Single-view 3D Hand Reconstruction with Graph-guided Bi-Scanning Mamba', what PA-MPJPE score did the Hamba model get on the HO-3D v3 dataset | 6.9 |
| ImageNet-LT | ProCo (ResNeXt50) | Probabilistic Contrastive Learning for Long-Tailed Visual Recognition | 2024-03-11 | https://arxiv.org/abs/2403.06726v2 | https://github.com/leaplabthu/proco | In the paper 'Probabilistic Contrastive Learning for Long-Tailed Visual Recognition', what Top-1 Accuracy score did the ProCo (ResNeXt50) model get on the ImageNet-LT dataset | |
| 58.0 |
CrowdPose | BUCTD-W48 (w/cond. input from PETR, and generative sampling) | Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity | 2023-06-13T00:00:00 | https://arxiv.org/abs/2306.07879v2 | [
"https://github.com/amathislab/BUCTD"
] | In the paper 'Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity', what AP score did the BUCTD-W48 (w/cond. input from PETR, and generative sampling) model get on the CrowdPose dataset
| 78.5 |
MassSpecGym | Random chemical generation | MassSpecGym: A benchmark for the discovery and identification of molecules | 2024-10-30T00:00:00 | https://arxiv.org/abs/2410.23326v1 | [
"https://github.com/pluskal-lab/massspecgym"
] | In the paper 'MassSpecGym: A benchmark for the discovery and identification of molecules', what Top-1 MCES score did the Random chemical generation model get on the MassSpecGym dataset
| 28.59 |
This is not a Dataset | Flan-T5-xxl | This is not a Dataset: A Large Negation Benchmark to Challenge Large Language Models | 2023-10-24T00:00:00 | https://arxiv.org/abs/2310.15941v1 | [
"https://github.com/hitz-zentroa/this-is-not-a-dataset"
] | In the paper 'This is not a Dataset: A Large Negation Benchmark to Challenge Large Language Models', what Accuracy score did the Flan-T5-xxl model get on the This is not a Dataset dataset
| 94.1 |
MM-Vet | LLaVA-1.5+MMInstruct (Vicuna-7B) | MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Diversity | 2024-07-22T00:00:00 | https://arxiv.org/abs/2407.15838v2 | [
"https://github.com/yuecao0119/mminstruct"
] | In the paper 'MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Diversity', what GPT-4 score score did the LLaVA-1.5+MMInstruct (Vicuna-7B) model get on the MM-Vet dataset
| 34.4 |
Market-1501 | PCL-CLIP (L_pcl+L_id) | Prototypical Contrastive Learning-based CLIP Fine-tuning for Object Re-identification | 2023-10-26T00:00:00 | https://arxiv.org/abs/2310.17218v1 | [
"https://github.com/RikoLi/PCL-CLIP"
] | In the paper 'Prototypical Contrastive Learning-based CLIP Fine-tuning for Object Re-identification', what Rank-1 score did the PCL-CLIP (L_pcl+L_id) model get on the Market-1501 dataset
| 95.9 |
MM-Vet | LLaVolta | Efficient Large Multi-modal Models via Visual Context Compression | 2024-06-28T00:00:00 | https://arxiv.org/abs/2406.20092v2 | [
"https://github.com/beckschen/llavolta"
] | In the paper 'Efficient Large Multi-modal Models via Visual Context Compression', what GPT-4 score score did the LLaVolta model get on the MM-Vet dataset
| 30.7 |
MM-Vet | LLaVA-v1.5 (7B, w/ STIC) | Enhancing Large Vision Language Models with Self-Training on Image Comprehension | 2024-05-30T00:00:00 | https://arxiv.org/abs/2405.19716v2 | [
"https://github.com/yihedeng9/stic"
] | In the paper 'Enhancing Large Vision Language Models with Self-Training on Image Comprehension', what GPT-4 score score did the LLaVA-v1.5 (7B, w/ STIC) model get on the MM-Vet dataset
| 32.6 |
HumanML3D | GUESS | GUESS:GradUally Enriching SyntheSis for Text-Driven Human Motion Generation | 2024-01-04T00:00:00 | https://arxiv.org/abs/2401.02142v2 | [
"https://github.com/xuehao-gao/guess"
] | In the paper 'GUESS:GradUally Enriching SyntheSis for Text-Driven Human Motion Generation', what FID score did the GUESS model get on the HumanML3D dataset
| 0.109 |
Cityscapes val | Resnet50 | MRFP: Learning Generalizable Semantic Segmentation from Sim-2-Real with Multi-Resolution Feature Perturbation | 2023-11-30T00:00:00 | https://arxiv.org/abs/2311.18331v2 | [
"https://github.com/airl-iisc/MRFP"
] | In the paper 'MRFP: Learning Generalizable Semantic Segmentation from Sim-2-Real with Multi-Resolution Feature Perturbation', what mIoU score did the Resnet50 model get on the Cityscapes val dataset
| 34.66 |
Intel Image Classification | ResNet-18 | Vision Eagle Attention: a new lens for advancing image classification | 2024-11-15T00:00:00 | https://arxiv.org/abs/2411.10564v2 | [
"https://github.com/MahmudulHasan11085/Vision-Eagle-Attention"
] | In the paper 'Vision Eagle Attention: a new lens for advancing image classification', what Accuracy score did the ResNet-18 model get on the Intel Image Classification dataset
| 90.93 |
FSC147 | CACViT | Vision Transformer Off-the-Shelf: A Surprising Baseline for Few-Shot Class-Agnostic Counting | 2023-05-08T00:00:00 | https://arxiv.org/abs/2305.04440v2 | [
"https://github.com/Xu3XiWang/CACViT"
] | In the paper 'Vision Transformer Off-the-Shelf: A Surprising Baseline for Few-Shot Class-Agnostic Counting', what MAE(val) score did the CACViT model get on the FSC147 dataset
| 10.63 |
DAVIS 2017 (test-dev) | Cutie (base, MEGA) | Putting the Object Back into Video Object Segmentation | 2023-10-19T00:00:00 | https://arxiv.org/abs/2310.12982v2 | [
"https://github.com/hkchengrex/Cutie"
] | In the paper 'Putting the Object Back into Video Object Segmentation', what J&F score did the Cutie (base, MEGA) model get on the DAVIS 2017 (test-dev) dataset
| 86.1 |
CIFAR-10-LT (ρ=50) | SURE(ResNet-32) | SURE: SUrvey REcipes for building reliable and robust deep networks | 2024-03-01T00:00:00 | https://arxiv.org/abs/2403.00543v1 | [
"https://github.com/YutingLi0606/SURE"
] | In the paper 'SURE: SUrvey REcipes for building reliable and robust deep networks', what Error Rate score did the SURE(ResNet-32) model get on the CIFAR-10-LT (ρ=50) dataset
| 9.78 |
CARLA Leaderboard | TF++ WP | Hidden Biases of End-to-End Driving Models | 2023-06-13T00:00:00 | https://arxiv.org/abs/2306.07957v2 | [
"https://github.com/autonomousvision/carla_garage"
] | In the paper 'Hidden Biases of End-to-End Driving Models', what Driving Score score did the TF++ WP model get on the CARLA Leaderboard dataset
| 66.32 |
LRS2 | SyncVSR | SyncVSR: Data-Efficient Visual Speech Recognition with End-to-End Crossmodal Audio Token Synchronization | 2024-06-18T00:00:00 | https://arxiv.org/abs/2406.12233v1 | [
"https://github.com/KAIST-AILab/SyncVSR"
] | In the paper 'SyncVSR: Data-Efficient Visual Speech Recognition with End-to-End Crossmodal Audio Token Synchronization', what Word Error Rate (WER) score did the SyncVSR model get on the LRS2 dataset
| 28.9 |