| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
| Office-Home | PGA (ViT-L/14) | Enhancing Domain Adaptation through Prompt Gradient Alignment | 2024-06-13T00:00:00 | https://arxiv.org/abs/2406.09353v2 | ["https://github.com/viethoang1512/pga"] | In the paper 'Enhancing Domain Adaptation through Prompt Gradient Alignment', what Accuracy score did the PGA (ViT-L/14) model get on the Office-Home dataset | 89.4 |
| USPTO-50k | NAG2G (reaction class as prior) | Node-Aligned Graph-to-Graph (NAG2G): Elevating Template-Free Deep Learning Approaches in Single-Step Retrosynthesis | 2023-09-27T00:00:00 | https://arxiv.org/abs/2309.15798v2 | ["https://github.com/dptech-corp/nag2g"] | In the paper 'Node-Aligned Graph-to-Graph (NAG2G): Elevating Template-Free Deep Learning Approaches in Single-Step Retrosynthesis', what Top-1 accuracy score did the NAG2G (reaction class as prior) model get on the USPTO-50k dataset | 67.2 |
| Urban100 - 3x upscaling | HMA† | HMANet: Hybrid Multi-Axis Aggregation Network for Image Super-Resolution | 2024-05-08T00:00:00 | https://arxiv.org/abs/2405.05001v1 | ["https://github.com/korouuuuu/hma"] | In the paper 'HMANet: Hybrid Multi-Axis Aggregation Network for Image Super-Resolution', what PSNR score did the HMA† model get on the Urban100 - 3x upscaling dataset | 31.00 |
| ImageNet | KD++(T:ViT-B, S:resnet18) | Improving Knowledge Distillation via Regularizing Feature Norm and Direction | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.17007v1 | ["https://github.com/wangyz1608/knowledge-distillation-via-nd"] | In the paper 'Improving Knowledge Distillation via Regularizing Feature Norm and Direction', what Top-1 accuracy % score did the KD++(T:ViT-B, S:resnet18) model get on the ImageNet dataset | 71.84 |
| SVAMP | SYRELM (GPT-J) | Frugal LMs Trained to Invoke Symbolic Solvers Achieve Parameter-Efficient Arithmetic Reasoning | 2023-12-09T00:00:00 | https://arxiv.org/abs/2312.05571v2 | ["https://github.com/joykirat18/syrelm"] | In the paper 'Frugal LMs Trained to Invoke Symbolic Solvers Achieve Parameter-Efficient Arithmetic Reasoning', what Execution Accuracy score did the SYRELM (GPT-J) model get on the SVAMP dataset | 40.1 |
| PDBbind | PAMNet | A Universal Framework for Accurate and Efficient Geometric Deep Learning of Molecular Systems | 2023-11-19T00:00:00 | https://arxiv.org/abs/2311.11228v1 | ["https://github.com/XieResearchGroup/Physics-aware-Multiplex-GNN"] | In the paper 'A Universal Framework for Accurate and Efficient Geometric Deep Learning of Molecular Systems', what RMSE score did the PAMNet model get on the PDBbind dataset | 1.263 |
| Winoground | MiniGPT-4-7B (GPTScore) | An Examination of the Compositionality of Large Generative Vision-Language Models | 2023-08-21T00:00:00 | https://arxiv.org/abs/2308.10509v2 | ["https://github.com/teleema/sade"] | In the paper 'An Examination of the Compositionality of Large Generative Vision-Language Models', what Text Score score did the MiniGPT-4-7B (GPTScore) model get on the Winoground dataset | 24.50 |
| classification benchmark | ETran | ETran: Energy-Based Transferability Estimation | 2023-08-03T00:00:00 | https://arxiv.org/abs/2308.02027v1 | ["https://github.com/mgholamikn/ETran"] | In the paper 'ETran: Energy-Based Transferability Estimation', what Kendall's Tau score did the ETran model get on the classification benchmark dataset | 0.562 |
| GSM8K | OpenMath-CodeLlama-70B (w/ code, SC, k=50) | OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | 2024-02-15T00:00:00 | https://arxiv.org/abs/2402.10176v2 | ["https://github.com/kipok/nemo-skills"] | In the paper 'OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset', what Accuracy score did the OpenMath-CodeLlama-70B (w/ code, SC, k=50) model get on the GSM8K dataset | 90.8 |
| ScanNet | KPConvX-L | KPConvX: Modernizing Kernel Point Convolution with Kernel Attention | 2024-05-21T00:00:00 | https://arxiv.org/abs/2405.13194v1 | ["https://github.com/apple/ml-kpconvx"] | In the paper 'KPConvX: Modernizing Kernel Point Convolution with Kernel Attention', what val mIoU score did the KPConvX-L model get on the ScanNet dataset | 76.3 |
| Mol-Instruction | SLM4CRP | A Self-feedback Knowledge Elicitation Approach for Chemical Reaction Predictions | 2024-04-15T00:00:00 | https://arxiv.org/abs/2404.09606v1 | ["https://github.com/ai-hpc-research-team/slm4crp"] | In the paper 'A Self-feedback Knowledge Elicitation Approach for Chemical Reaction Predictions', what METEOR score did the SLM4CRP model get on the Mol-Instruction dataset | 0.993 |
| OoDIS | UGainS | UGainS: Uncertainty Guided Anomaly Instance Segmentation | 2023-08-03T00:00:00 | https://arxiv.org/abs/2308.02046v1 | ["https://github.com/kumuji/ugains"] | In the paper 'UGainS: Uncertainty Guided Anomaly Instance Segmentation', what AP score did the UGainS model get on the OoDIS dataset | 25.19 |
| ImageNet-C | ResNet-50 (PushPull-Conv) + PRIME | PushPull-Net: Inhibition-driven ResNet robust to image corruptions | 2024-08-07T00:00:00 | https://arxiv.org/abs/2408.04077v2 | ["https://github.com/bgswaroop/pushpull-conv"] | In the paper 'PushPull-Net: Inhibition-driven ResNet robust to image corruptions', what mean Corruption Error (mCE) score did the ResNet-50 (PushPull-Conv) + PRIME model get on the ImageNet-C dataset | 49.95 |
| WDC-PAVE | SU-OpenTag | Using LLMs for the Extraction and Normalization of Product Attribute Values | 2024-03-04T00:00:00 | https://arxiv.org/abs/2403.02130v4 | ["https://github.com/wbsg-uni-mannheim/wdc-pave"] | In the paper 'Using LLMs for the Extraction and Normalization of Product Attribute Values', what F1-Score score did the SU-OpenTag model get on the WDC-PAVE dataset | 60.44 |
| 3DPW | SMPLer-L | SMPLer: Taming Transformers for Monocular 3D Human Shape and Pose Estimation | 2024-04-23T00:00:00 | https://arxiv.org/abs/2404.15276v1 | ["https://github.com/xuxy09/smpler"] | In the paper 'SMPLer: Taming Transformers for Monocular 3D Human Shape and Pose Estimation', what PA-MPJPE score did the SMPLer-L model get on the 3DPW dataset | 43.4 |
| MM-Vet | InternVL2-26B (SGP, token ratio 64%) | A Stitch in Time Saves Nine: Small VLM is a Precise Guidance for Accelerating Large VLMs | 2024-12-04T00:00:00 | https://arxiv.org/abs/2412.03324v2 | ["https://github.com/NUS-HPC-AI-Lab/SGL"] | In the paper 'A Stitch in Time Saves Nine: Small VLM is a Precise Guidance for Accelerating Large VLMs', what GPT-4 score score did the InternVL2-26B (SGP, token ratio 64%) model get on the MM-Vet dataset | 65.60 |
| PubMed (60%/20%/20% random splits) | Graph-MLP + SAF | The Split Matters: Flat Minima Methods for Improving the Performance of GNNs | 2023-06-15T00:00:00 | https://arxiv.org/abs/2306.09121v1 | ["https://github.com/foisunt/fmms-in-gnns"] | In the paper 'The Split Matters: Flat Minima Methods for Improving the Performance of GNNs', what 1:1 Accuracy score did the Graph-MLP + SAF model get on the PubMed (60%/20%/20% random splits) dataset | 90.64 ± 0.46% |
| SAFIM | gpt-3.5-turbo-0301 | Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks | 2024-03-07T00:00:00 | https://arxiv.org/abs/2403.04814v3 | ["https://github.com/gonglinyuan/safim"] | In the paper 'Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks', what Algorithmic score did the gpt-3.5-turbo-0301 model get on the SAFIM dataset | 31.24 |
| Gardens Point | AnyLoc-VLAD-DINOv2 | AnyLoc: Towards Universal Visual Place Recognition | 2023-08-01T00:00:00 | https://arxiv.org/abs/2308.00688v2 | ["https://github.com/AnyLoc/AnyLoc"] | In the paper 'AnyLoc: Towards Universal Visual Place Recognition', what Recall@1 score did the AnyLoc-VLAD-DINOv2 model get on the Gardens Point dataset | 95.5 |
| roman-empire | GCN | Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification | 2024-06-13T00:00:00 | https://arxiv.org/abs/2406.08993v2 | ["https://github.com/LUOyk1999/tunedGNN"] | In the paper 'Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification', what Accuracy (%) score did the GCN model get on the roman-empire dataset | 91.27±0.20 |
| VoxCeleb1 | ReDimNet-B0-LM-ASNorm (1.0M) | Reshape Dimensions Network for Speaker Recognition | 2024-07-25T00:00:00 | https://arxiv.org/abs/2407.18223v2 | ["https://github.com/IDRnD/ReDimNet"] | In the paper 'Reshape Dimensions Network for Speaker Recognition', what EER score did the ReDimNet-B0-LM-ASNorm (1.0M) model get on the VoxCeleb1 dataset | 1.07 |
| ColonINST-v1 (Seen) | ColonGPT (w/ LoRA, w/o extra data) | Frontiers in Intelligent Colonoscopy | 2024-10-22T00:00:00 | https://arxiv.org/abs/2410.17241v1 | ["https://github.com/ai4colonoscopy/intelliscope"] | In the paper 'Frontiers in Intelligent Colonoscopy', what Accuracy score did the ColonGPT (w/ LoRA, w/o extra data) model get on the ColonINST-v1 (Seen) dataset | 94.02 |
| EQ-Bench | OpenAI gpt-4-0314 | EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06281v2 | ["https://github.com/eq-bench/eq-bench"] | In the paper 'EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models', what EQ-Bench Score score did the OpenAI gpt-4-0314 model get on the EQ-Bench dataset | 53.39 |
| NYU Depth v2 | AsymFormer | AsymFormer: Asymmetrical Cross-Modal Representation Learning for Mobile Platform Real-Time RGB-D Semantic Segmentation | 2023-09-25T00:00:00 | https://arxiv.org/abs/2309.14065v7 | ["https://github.com/Fourier7754/AsymFormer"] | In the paper 'AsymFormer: Asymmetrical Cross-Modal Representation Learning for Mobile Platform Real-Time RGB-D Semantic Segmentation', what Mean IoU score did the AsymFormer model get on the NYU Depth v2 dataset | 55.3% |
| FSS-1000 (1-shot) | HSNet (DifFSS, ResNet-50) | DifFSS: Diffusion Model for Few-Shot Semantic Segmentation | 2023-07-03T00:00:00 | https://arxiv.org/abs/2307.00773v3 | ["https://github.com/TrinitialChan/DifFSS"] | In the paper 'DifFSS: Diffusion Model for Few-Shot Semantic Segmentation', what Mean IoU score did the HSNet (DifFSS, ResNet-50) model get on the FSS-1000 (1-shot) dataset | 86.2 |
| MOT20 | UCMCTrack | UCMCTrack: Multi-Object Tracking with Uniform Camera Motion Compensation | 2023-12-14T00:00:00 | https://arxiv.org/abs/2312.08952v2 | ["https://github.com/corfyi/ucmctrack"] | In the paper 'UCMCTrack: Multi-Object Tracking with Uniform Camera Motion Compensation', what IDF1 score did the UCMCTrack model get on the MOT20 dataset | 77.4 |
| FGVC-Aircraft | PromptKD | PromptKD: Unsupervised Prompt Distillation for Vision-Language Models | 2024-03-05T00:00:00 | https://arxiv.org/abs/2403.02781v5 | ["https://github.com/zhengli97/promptkd"] | In the paper 'PromptKD: Unsupervised Prompt Distillation for Vision-Language Models', what Harmonic mean score did the PromptKD model get on the FGVC-Aircraft dataset | 45.17 |
| COCO-20i (1-shot) | QCLNet (ResNet-50) | Quaternion-valued Correlation Learning for Few-Shot Semantic Segmentation | 2023-05-12T00:00:00 | https://arxiv.org/abs/2305.07283v3 | ["https://github.com/zwzheng98/qclnet"] | In the paper 'Quaternion-valued Correlation Learning for Few-Shot Semantic Segmentation', what Mean IoU score did the QCLNet (ResNet-50) model get on the COCO-20i (1-shot) dataset | 42.3 |
| TXL-PBC: a freely accessible labeled peripheral blood cell dataset | yolov8m | TXL-PBC: a freely accessible labeled peripheral blood cell dataset | 2024-07-18T00:00:00 | https://arxiv.org/abs/2407.13214v1 | ["https://github.com/lugan113/TXL-PBC_Dataset"] | In the paper 'TXL-PBC: a freely accessible labeled peripheral blood cell dataset', what mAP50 score did the yolov8m model get on the TXL-PBC: a freely accessible labeled peripheral blood cell dataset dataset | 0.974 |
| LVIS v1.0 | CLIM (RN50x64) | CLIM: Contrastive Language-Image Mosaic for Region Representation | 2023-12-18T00:00:00 | https://arxiv.org/abs/2312.11376v2 | ["https://github.com/wusize/clim"] | In the paper 'CLIM: Contrastive Language-Image Mosaic for Region Representation', what AP novel-LVIS base training score did the CLIM (RN50x64) model get on the LVIS v1.0 dataset | 32.3 |
| TriviaQA | Branch-Train-MiX 4x7B (sampling top-2 experts) | Branch-Train-MiX: Mixing Expert LLMs into a Mixture-of-Experts LLM | 2024-03-12T00:00:00 | https://arxiv.org/abs/2403.07816v1 | ["https://github.com/Leeroo-AI/mergoo"] | In the paper 'Branch-Train-MiX: Mixing Expert LLMs into a Mixture-of-Experts LLM', what EM score did the Branch-Train-MiX 4x7B (sampling top-2 experts) model get on the TriviaQA dataset | 57.1 |
| Bongard-OpenWorld | SNAIL | Bongard-OpenWorld: Few-Shot Reasoning for Free-form Visual Concepts in the Real World | 2023-10-16T00:00:00 | https://arxiv.org/abs/2310.10207v5 | ["https://github.com/joyjayng/Bongard-OpenWorld"] | In the paper 'Bongard-OpenWorld: Few-Shot Reasoning for Free-form Visual Concepts in the Real World', what 2-Class Accuracy score did the SNAIL model get on the Bongard-OpenWorld dataset | 64.0 |
| CIFAR-10 | ABNet-2G-R3-Combined | ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities | 2024-11-28T00:00:00 | https://arxiv.org/abs/2411.19213v1 | ["https://github.com/dvssajay/New_World"] | In the paper 'ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities', what Percentage correct score did the ABNet-2G-R3-Combined model get on the CIFAR-10 dataset | 96.378 |
| Oxford-IIIT Pets | TURTLE (CLIP + DINOv2) | Let Go of Your Labels with Unsupervised Transfer | 2024-06-11T00:00:00 | https://arxiv.org/abs/2406.07236v1 | ["https://github.com/mlbio-epfl/turtle"] | In the paper 'Let Go of Your Labels with Unsupervised Transfer', what Accuracy score did the TURTLE (CLIP + DINOv2) model get on the Oxford-IIIT Pets dataset | 92.3 |
| ISRUC-Sleep (single-channel) | NeuroNet (C4-A1 only) | NeuroNet: A Novel Hybrid Self-Supervised Learning Framework for Sleep Stage Classification Using Single-Channel EEG | 2024-04-10T00:00:00 | https://arxiv.org/abs/2404.17585v2 | ["https://github.com/dlcjfgmlnasa/NeuroNet"] | In the paper 'NeuroNet: A Novel Hybrid Self-Supervised Learning Framework for Sleep Stage Classification Using Single-Channel EEG', what Accuracy score did the NeuroNet (C4-A1 only) model get on the ISRUC-Sleep (single-channel) dataset | 77.05% |
| SMAC 3s5z_vs_4s6z | VDN | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04T00:00:00 | https://arxiv.org/abs/2306.02430v1 | ["https://github.com/j3soon/dfac-extended"] | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the VDN model get on the SMAC 3s5z_vs_4s6z dataset | 47.16 |
| ImageNet | AIMv2-3B | Multimodal Autoregressive Pre-training of Large Vision Encoders | 2024-11-21T00:00:00 | https://arxiv.org/abs/2411.14402v1 | ["https://github.com/apple/ml-aim"] | In the paper 'Multimodal Autoregressive Pre-training of Large Vision Encoders', what Top 1 Accuracy score did the AIMv2-3B model get on the ImageNet dataset | 88.5% |
| ImageNet | KD++(T:resnet-152 S:resnet-101) | Improving Knowledge Distillation via Regularizing Feature Norm and Direction | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.17007v1 | ["https://github.com/wangyz1608/knowledge-distillation-via-nd"] | In the paper 'Improving Knowledge Distillation via Regularizing Feature Norm and Direction', what Top-1 accuracy % score did the KD++(T:resnet-152 S:resnet-101) model get on the ImageNet dataset | 79.15 |
| AFAD | ResNet-50-OR-CNN | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10T00:00:00 | https://arxiv.org/abs/2307.04570v3 | ["https://github.com/paplhjak/facial-age-estimation-benchmark"] | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-OR-CNN model get on the AFAD dataset | 3.16 |
| arXiv-year | DJ-GNN | Diffusion-Jump GNNs: Homophiliation via Learnable Metric Filters | 2023-06-29T00:00:00 | https://arxiv.org/abs/2306.16976v1 | ["https://github.com/AhmedBegggaUA/TFM"] | In the paper 'Diffusion-Jump GNNs: Homophiliation via Learnable Metric Filters', what Accuracy score did the DJ-GNN model get on the arXiv-year dataset | 49.21±0.20 |
| CropHarvest multicrop - Global | Ensemble strategy | Impact Assessment of Missing Data in Model Predictions for Earth Observation Applications | 2024-03-21T00:00:00 | https://arxiv.org/abs/2403.14297v2 | ["https://github.com/fmenat/missingviews-study-eo"] | In the paper 'Impact Assessment of Missing Data in Model Predictions for Earth Observation Applications', what Average Accuracy score did the Ensemble strategy model get on the CropHarvest multicrop - Global dataset | 0.715 |
| NAS-Bench-201, ImageNet-16-120 | IS-DARTS | IS-DARTS: Stabilizing DARTS through Precise Measurement on Candidate Importance | 2023-12-19T00:00:00 | https://arxiv.org/abs/2312.12648v1 | ["https://github.com/hy-he/is-darts"] | In the paper 'IS-DARTS: Stabilizing DARTS through Precise Measurement on Candidate Importance', what Accuracy (Test) score did the IS-DARTS model get on the NAS-Bench-201, ImageNet-16-120 dataset | 46.34 |
| MORPH Album2 (SE) | ResNet-50-OR-CNN | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10T00:00:00 | https://arxiv.org/abs/2307.04570v3 | ["https://github.com/paplhjak/facial-age-estimation-benchmark"] | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-OR-CNN model get on the MORPH Album2 (SE) dataset | 2.83 |
| fake | LightGBM + RoBERTa embedding | PyTorch Frame: A Modular Framework for Multi-Modal Tabular Learning | 2024-03-31T00:00:00 | https://arxiv.org/abs/2404.00776v1 | ["https://github.com/pyg-team/pytorch-frame"] | In the paper 'PyTorch Frame: A Modular Framework for Multi-Modal Tabular Learning', what AUROC score did the LightGBM + RoBERTa embedding model get on the fake dataset | 0.954 |
| IWSLT 2017 | Llama 3 8B | Hierarchical Prompting Taxonomy: A Universal Evaluation Framework for Large Language Models Aligned with Human Cognitive Principles | 2024-06-18T00:00:00 | https://arxiv.org/abs/2406.12644v4 | ["https://github.com/devichand579/HPT"] | In the paper 'Hierarchical Prompting Taxonomy: A Universal Evaluation Framework for Large Language Models Aligned with Human Cognitive Principles', what BLEU score did the Llama 3 8B model get on the IWSLT 2017 dataset | 0.23539 |
| COCO-20i (5-shot) | QCLNet (ResNet-50) | Quaternion-valued Correlation Learning for Few-Shot Semantic Segmentation | 2023-05-12T00:00:00 | https://arxiv.org/abs/2305.07283v3 | ["https://github.com/zwzheng98/qclnet"] | In the paper 'Quaternion-valued Correlation Learning for Few-Shot Semantic Segmentation', what Mean IoU score did the QCLNet (ResNet-50) model get on the COCO-20i (5-shot) dataset | 50 |
| GRAZPEDWRI-DX | YOLOv7 | Enhancing Wrist Fracture Detection with YOLO | 2024-07-17T00:00:00 | https://arxiv.org/abs/2407.12597v2 | ["https://github.com/ammarlodhi255/pediatric_wrist_abnormality_detection-end-to-end-implementation"] | In the paper 'Enhancing Wrist Fracture Detection with YOLO', what mAP score did the YOLOv7 model get on the GRAZPEDWRI-DX dataset | 61.00 |
| DSO-1 | Early Fusion | MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization | 2023-12-04T00:00:00 | https://arxiv.org/abs/2312.01790v2 | ["https://github.com/idt-iti/mmfusion-iml"] | In the paper 'MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization', what Average Pixel F1 (Fixed threshold) score did the Early Fusion model get on the DSO-1 dataset | 0.869 |
| Refer-YouTube-VOS (2021 public validation) | SOC (Joint training, Video-Swin-B) | SOC: Semantic-Assisted Object Cluster for Referring Video Object Segmentation | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.17011v1 | ["https://github.com/RobertLuo1/NeurIPS2023_SOC"] | In the paper 'SOC: Semantic-Assisted Object Cluster for Referring Video Object Segmentation', what J&F score did the SOC (Joint training, Video-Swin-B) model get on the Refer-YouTube-VOS (2021 public validation) dataset | 67.3±0.5 |
| VisDA2017 | RCL | Empowering Source-Free Domain Adaptation with MLLM-driven Curriculum Learning | 2024-05-28T00:00:00 | https://arxiv.org/abs/2405.18376v1 | ["https://github.com/Dong-Jie-Chen/RCL"] | In the paper 'Empowering Source-Free Domain Adaptation with MLLM-driven Curriculum Learning', what Accuracy score did the RCL model get on the VisDA2017 dataset | 93.2 |
| San Francisco Landmark Dataset | BoQ | BoQ: A Place is Worth a Bag of Learnable Queries | 2024-05-12T00:00:00 | https://arxiv.org/abs/2405.07364v3 | ["https://github.com/amaralibey/bag-of-queries"] | In the paper 'BoQ: A Place is Worth a Bag of Learnable Queries', what Recall@1 score did the BoQ model get on the San Francisco Landmark Dataset dataset | 93.6 |
| Hopper-v2 | TLA | Optimizing Attention and Cognitive Control Costs Using Temporally-Layered Architectures | 2023-05-30T00:00:00 | https://arxiv.org/abs/2305.18701v3 | ["https://github.com/dee0512/Temporally-Layered-Architecture"] | In the paper 'Optimizing Attention and Cognitive Control Costs Using Temporally-Layered Architectures', what Mean Reward score did the TLA model get on the Hopper-v2 dataset | 3458.22 |
| PCQM4Mv2-LSC | TIGT | Topology-Informed Graph Transformer | 2024-02-03T00:00:00 | https://arxiv.org/abs/2402.02005v1 | ["https://github.com/leemingo/tigt"] | In the paper 'Topology-Informed Graph Transformer', what Validation MAE score did the TIGT model get on the PCQM4Mv2-LSC dataset | 0.0826 |
| ZINC-500k | GPTrans-Nano | Graph Propagation Transformer for Graph Representation Learning | 2023-05-19T00:00:00 | https://arxiv.org/abs/2305.11424v3 | ["https://github.com/czczup/gptrans"] | In the paper 'Graph Propagation Transformer for Graph Representation Learning', what MAE score did the GPTrans-Nano model get on the ZINC-500k dataset | 0.077 |
| Traffic (192) | PRformer | PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting | 2024-08-20T00:00:00 | https://arxiv.org/abs/2408.10483v1 | ["https://github.com/usualheart/prformer"] | In the paper 'PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting', what MSE score did the PRformer model get on the Traffic (192) dataset | 0.372 |
| GSM8K | ToRA-Code 13B | ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving | 2023-09-29T00:00:00 | https://arxiv.org/abs/2309.17452v4 | ["https://github.com/microsoft/tora"] | In the paper 'ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving', what Accuracy score did the ToRA-Code 13B model get on the GSM8K dataset | 75.8 |
| MVTec AD | CPR-faster(TensorRT) | Target before Shooting: Accurate Anomaly Detection and Localization under One Millisecond via Cascade Patch Retrieval | 2023-08-13T00:00:00 | https://arxiv.org/abs/2308.06748v1 | ["https://github.com/flyinghu123/cpr"] | In the paper 'Target before Shooting: Accurate Anomaly Detection and Localization under One Millisecond via Cascade Patch Retrieval', what FPS score did the CPR-faster(TensorRT) model get on the MVTec AD dataset | 1016 |
| KIT Motion-Language | MoMask | MoMask: Generative Masked Modeling of 3D Human Motions | 2023-11-29T00:00:00 | https://arxiv.org/abs/2312.00063v1 | ["https://github.com/EricGuo5513/momask-codes"] | In the paper 'MoMask: Generative Masked Modeling of 3D Human Motions', what FID score did the MoMask model get on the KIT Motion-Language dataset | 0.204 |
| IMDb Movie Reviews | Space-XLNet | Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs | 2024-01-30T00:00:00 | https://arxiv.org/abs/2401.16638v1 | ["https://github.com/stepantita/space-model"] | In the paper 'Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs', what Accuracy (2 classes) score did the Space-XLNet model get on the IMDb Movie Reviews dataset | 0.9488 |
| fake | FTTransformer + RoBERTa embedding | PyTorch Frame: A Modular Framework for Multi-Modal Tabular Learning | 2024-03-31T00:00:00 | https://arxiv.org/abs/2404.00776v1 | ["https://github.com/pyg-team/pytorch-frame"] | In the paper 'PyTorch Frame: A Modular Framework for Multi-Modal Tabular Learning', what AUROC score did the FTTransformer + RoBERTa embedding model get on the fake dataset | 0.936 |
| ScanNetV2 | Point-GCC+TR3D | Point-GCC: Universal Self-supervised 3D Scene Pre-training via Geometry-Color Contrast | 2023-05-31T00:00:00 | https://arxiv.org/abs/2305.19623v2 | ["https://github.com/asterisci/point-gcc"] | In the paper 'Point-GCC: Universal Self-supervised 3D Scene Pre-training via Geometry-Color Contrast', what mAP@0.25 score did the Point-GCC+TR3D model get on the ScanNetV2 dataset | 73.1 |
| SALECI | SUM | SUM: Saliency Unification through Mamba for Visual Attention Modeling | 2024-06-25T00:00:00 | https://arxiv.org/abs/2406.17815v2 | ["https://github.com/Arhosseini77/SUM"] | In the paper 'SUM: Saliency Unification through Mamba for Visual Attention Modeling', what KL score did the SUM model get on the SALECI dataset | 0.473 |
| MM-Vet | FAST (Vicuna-7B) | Visual Agents as Fast and Slow Thinkers | 2024-08-16T00:00:00 | https://arxiv.org/abs/2408.08862v2 | ["https://github.com/guangyans/sys2-llava"] | In the paper 'Visual Agents as Fast and Slow Thinkers', what GPT-4 score score did the FAST (Vicuna-7B) model get on the MM-Vet dataset | 31.0 |
| Bongard-OpenWorld | BLIP-2 + ChatGPT (Fine-tuned) | Bongard-OpenWorld: Few-Shot Reasoning for Free-form Visual Concepts in the Real World | 2023-10-16T00:00:00 | https://arxiv.org/abs/2310.10207v5 | ["https://github.com/joyjayng/Bongard-OpenWorld"] | In the paper 'Bongard-OpenWorld: Few-Shot Reasoning for Free-form Visual Concepts in the Real World', what 2-Class Accuracy score did the BLIP-2 + ChatGPT (Fine-tuned) model get on the Bongard-OpenWorld dataset | 63.3 |
| Gait3D | HSTL | Hierarchical Spatio-Temporal Representation Learning for Gait Recognition | 2023-07-19T00:00:00 | https://arxiv.org/abs/2307.09856v1 | ["https://github.com/gudaochangsheng/HSTL"] | In the paper 'Hierarchical Spatio-Temporal Representation Learning for Gait Recognition', what Rank-1 score did the HSTL model get on the Gait3D dataset | 61.30 |
| SportsMOT | AED | Associate Everything Detected: Facilitating Tracking-by-Detection to the Unknown | 2024-09-14T00:00:00 | https://arxiv.org/abs/2409.09293v1 | ["https://github.com/balabooooo/aed"] | In the paper 'Associate Everything Detected: Facilitating Tracking-by-Detection to the Unknown', what HOTA score did the AED model get on the SportsMOT dataset | 79.1 |
| MM-Vet | mPLUG-Owl3 | mPLUG-Owl3: Towards Long Image-Sequence Understanding in Multi-Modal Large Language Models | 2024-08-09T00:00:00 | https://arxiv.org/abs/2408.04840v2 | ["https://github.com/x-plug/mplug-owl"] | In the paper 'mPLUG-Owl3: Towards Long Image-Sequence Understanding in Multi-Modal Large Language Models', what GPT-4 score score did the mPLUG-Owl3 model get on the MM-Vet dataset | 40.1 |
| CausalGym | DAS | CausalGym: Benchmarking causal interpretability methods on linguistic tasks | 2024-02-19T00:00:00 | https://arxiv.org/abs/2402.12560v1 | ["https://github.com/aryamanarora/causalgym"] | In the paper 'CausalGym: Benchmarking causal interpretability methods on linguistic tasks', what Log odds-ratio (pythia-6.9b) score did the DAS model get on the CausalGym dataset | 9.95 |
| Winograd Schema Challenge | PaLM 2-M (1-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-M (1-shot) model get on the Winograd Schema Challenge dataset | 88.1 |
| Birdsnap | TURTLE (CLIP + DINOv2) | Let Go of Your Labels with Unsupervised Transfer | 2024-06-11T00:00:00 | https://arxiv.org/abs/2406.07236v1 | ["https://github.com/mlbio-epfl/turtle"] | In the paper 'Let Go of Your Labels with Unsupervised Transfer', what Accuracy score did the TURTLE (CLIP + DINOv2) model get on the Birdsnap dataset | 68.1 |
| CORD | RORE (GeoLayoutLM) | Modeling Layout Reading Order as Ordering Relations for Visually-rich Document Understanding | 2024-09-29T00:00:00 | https://arxiv.org/abs/2409.19672v1 | ["https://github.com/chongzhangFDU/ROOR"] | In the paper 'Modeling Layout Reading Order as Ordering Relations for Visually-rich Document Understanding', what F1 score did the RORE (GeoLayoutLM) model get on the CORD dataset | 98.52 |
| GrabCut | ViT-B+MST+CL | MST: Adaptive Multi-Scale Tokens Guided Interactive Segmentation | 2024-01-09T00:00:00 | https://arxiv.org/abs/2401.04403v2 | ["https://github.com/hahamyt/mst"] | In the paper 'MST: Adaptive Multi-Scale Tokens Guided Interactive Segmentation', what NoC@90 score did the ViT-B+MST+CL model get on the GrabCut dataset | 1.48 |
| CIFAR-100 | resnet8x4 (T: resnet32x4 S: resnet8x4) | LumiNet: The Bright Side of Perceptual Knowledge Distillation | 2023-10-05T00:00:00 | https://arxiv.org/abs/2310.03669v2 | ["https://github.com/ismail31416/luminet"] | In the paper 'LumiNet: The Bright Side of Perceptual Knowledge Distillation', what Top-1 Accuracy (%) score did the resnet8x4 (T: resnet32x4 S: resnet8x4) model get on the CIFAR-100 dataset | 77.50 |
| Bongard-HOI | SVM-Mimic (frozen CLIP RN-50) | Support-Set Context Matters for Bongard Problems | 2023-09-07T00:00:00 | https://arxiv.org/abs/2309.03468v2 | ["https://github.com/nraghuraman/bongard-context"] | In the paper 'Support-Set Context Matters for Bongard Problems', what Avg. Accuracy score did the SVM-Mimic (frozen CLIP RN-50) model get on the Bongard-HOI dataset | 72.45 |
| DUT-OMRON | BiRefNet (DUTS, HRSOD) | Bilateral Reference for High-Resolution Dichotomous Image Segmentation | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03407v6 | ["https://github.com/zhengpeng7/birefnet"] | In the paper 'Bilateral Reference for High-Resolution Dichotomous Image Segmentation', what MAE score did the BiRefNet (DUTS, HRSOD) model get on the DUT-OMRON dataset | 0.040 |
| MOT17 | C-TWiX | Learning Data Association for Multi-Object Tracking using Only Coordinates | 2024-03-12T00:00:00 | https://arxiv.org/abs/2403.08018v1 | ["https://github.com/Guepardow/TWiX"] | In the paper 'Learning Data Association for Multi-Object Tracking using Only Coordinates', what MOTA score did the C-TWiX model get on the MOT17 dataset | 78.1 |
| CUHK-PEDES | PLIP-RN50 | PLIP: Language-Image Pre-training for Person Representation Learning | 2023-05-15T00:00:00 | https://arxiv.org/abs/2305.08386v2 | ["https://github.com/zplusdragon/plip"] | In the paper 'PLIP: Language-Image Pre-training for Person Representation Learning', what R@1 score did the PLIP-RN50 model get on the CUHK-PEDES dataset | 69.23 |
| MM-Vet | VOLCANO 7B | Volcano: Mitigating Multimodal Hallucination through Self-Feedback Guided Revision | 2023-11-13T00:00:00 | https://arxiv.org/abs/2311.07362v4 | ["https://github.com/kaistai/volcano"] | In the paper 'Volcano: Mitigating Multimodal Hallucination through Self-Feedback Guided Revision', what GPT-4 score score did the VOLCANO 7B model get on the MM-Vet dataset | 32.0 |
| PKLot | ResNet50 | Revising deep learning methods in parking lot occupancy detection | 2023-06-07T00:00:00 | https://arxiv.org/abs/2306.04288v3 | ["https://github.com/eighonet/parking-research"] | In the paper 'Revising deep learning methods in parking lot occupancy detection', what F1-score score did the ResNet50 model get on the PKLot dataset | 0.9926 |
| IITB Corridor | VideoPatchCore | VideoPatchCore: An Effective Method to Memorize Normality for Video Anomaly Detection | 2024-09-24T00:00:00 | https://arxiv.org/abs/2409.16225v5 | ["https://github.com/SkiddieAhn/Paper-VideoPatchCore"] | In the paper 'VideoPatchCore: An Effective Method to Memorize Normality for Video Anomaly Detection', what AUC score did the VideoPatchCore model get on the IITB Corridor dataset | 76.4% |
| ImageNet | EViT (fuse) | Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09372v3 | ["https://github.com/tobna/whattransformertofavor"] | In the paper 'Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers', what Top 1 Accuracy score did the EViT (fuse) model get on the ImageNet dataset | 81.96% |
| LRS2 | RTFS-Net-6 | RTFS-Net: Recurrent Time-Frequency Modelling for Efficient Audio-Visual Speech Separation | 2023-09-29T00:00:00 | https://arxiv.org/abs/2309.17189v4 | ["https://github.com/spkgyk/RTFS-Net"] | In the paper 'RTFS-Net: Recurrent Time-Frequency Modelling for Efficient Audio-Visual Speech Separation', what SI-SNRi score did the RTFS-Net-6 model get on the LRS2 dataset | 14.6 |
| Sintel-final | Ef-RAFT | Rethinking RAFT for Efficient Optical Flow | 2024-01-01T00:00:00 | https://arxiv.org/abs/2401.00833v1 | ["https://github.com/n3slami/Ef-RAFT"] | In the paper 'Rethinking RAFT for Efficient Optical Flow', what Average End-Point Error score did the Ef-RAFT model get on the Sintel-final dataset | 2.60 |
ETTh1 (336) Multivariate | CATS | CATS: Enhancing Multivariate Time Series Forecasting by Constructing Auxiliary Time Series as Exogenous Variables | 2024-03-04T00:00:00 | https://arxiv.org/abs/2403.01673v1 | [
"https://github.com/LJC-FVNR/CATS"
] | In the paper 'CATS: Enhancing Multivariate Time Series Forecasting by Constructing Auxiliary Time Series as Exogenous Variables', what MSE score did the CATS model get on the ETTh1 (336) Multivariate dataset
| 0.423 |
RWTH-PHOENIX-Weather 2014 | SlowFastSign | SlowFast Network for Continuous Sign Language Recognition | 2023-09-21T00:00:00 | https://arxiv.org/abs/2309.12304v1 | [
"https://github.com/kaistmm/SlowFastSign"
] | In the paper 'SlowFast Network for Continuous Sign Language Recognition', what Word Error Rate (WER) score did the SlowFastSign model get on the RWTH-PHOENIX-Weather 2014 dataset
| 18.3 |
ImageNet-1k vs SUN | SCALE (ResNet50) | Scaling for Training Time and Post-hoc Out-of-distribution Detection Enhancement | 2023-09-30T00:00:00 | https://arxiv.org/abs/2310.00227v1 | [
"https://github.com/kai422/scale"
] | In the paper 'Scaling for Training Time and Post-hoc Out-of-distribution Detection Enhancement', what FPR95 score did the SCALE (ResNet50) model get on the ImageNet-1k vs SUN dataset
| 23.27 |
iNaturalist 2018 | ProCo (ResNet50) | Probabilistic Contrastive Learning for Long-Tailed Visual Recognition | 2024-03-11T00:00:00 | https://arxiv.org/abs/2403.06726v2 | [
"https://github.com/leaplabthu/proco"
] | In the paper 'Probabilistic Contrastive Learning for Long-Tailed Visual Recognition', what Top-1 Accuracy score did the ProCo (ResNet50) model get on the iNaturalist 2018 dataset
| 75.8% |
VietMed | GMM-HMM SAT | VietMed: A Dataset and Benchmark for Automatic Speech Recognition of Vietnamese in the Medical Domain | 2024-04-08T00:00:00 | https://arxiv.org/abs/2404.05659v2 | [
"https://github.com/leduckhai/multimed"
] | In the paper 'VietMed: A Dataset and Benchmark for Automatic Speech Recognition of Vietnamese in the Medical Domain', what Dev WER score did the GMM-HMM SAT model get on the VietMed dataset
| 52.6 |
Fishyscapes | FlowEneDet | Concurrent Misclassification and Out-of-Distribution Detection for Semantic Segmentation via Energy-Based Normalizing Flow | 2023-05-16T00:00:00 | https://arxiv.org/abs/2305.09610v1 | [
"https://github.com/gudovskiy/flowenedet"
] | In the paper 'Concurrent Misclassification and Out-of-Distribution Detection for Semantic Segmentation via Energy-Based Normalizing Flow', what AP score did the FlowEneDet model get on the Fishyscapes dataset
| 67.8 |
PASCAL-5i (1-Shot) | SCCAN (ResNet-50) | Self-Calibrated Cross Attention Network for Few-Shot Segmentation | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09294v1 | [
"https://github.com/sam1224/sccan"
] | In the paper 'Self-Calibrated Cross Attention Network for Few-Shot Segmentation', what Mean IoU score did the SCCAN (ResNet-50) model get on the PASCAL-5i (1-Shot) dataset
| 66.8 |
MSRVTT-CTN | GIT | GiT: Towards Generalist Vision Transformer through Universal Language Interface | 2024-03-14T00:00:00 | https://arxiv.org/abs/2403.09394v1 | [
"https://github.com/haiyang-w/git"
] | In the paper 'GiT: Towards Generalist Vision Transformer through Universal Language Interface', what CIDEr score did the GIT model get on the MSRVTT-CTN dataset
| 32.43 |
Amazon Fashion | RMHA-4 | Positional encoding is not the same as context: A study on positional encoding for Sequential recommendation | 2024-05-16T00:00:00 | https://arxiv.org/abs/2405.10436v1 | [
"https://github.com/researcher1741/position_encoding_srs"
] | In the paper 'Positional encoding is not the same as context: A study on positional encoding for Sequential recommendation', what Hit@10 score did the RMHA-4 model get on the Amazon Fashion dataset
| 0.7726 |
MSMT17 | PCL-CLIP (L_pcl) | Prototypical Contrastive Learning-based CLIP Fine-tuning for Object Re-identification | 2023-10-26T00:00:00 | https://arxiv.org/abs/2310.17218v1 | [
"https://github.com/RikoLi/PCL-CLIP"
] | In the paper 'Prototypical Contrastive Learning-based CLIP Fine-tuning for Object Re-identification', what Rank-1 score did the PCL-CLIP (L_pcl) model get on the MSMT17 dataset
| 89.2 |
HICO-DET | SOV-STG (Swin-L) | Focusing on what to decode and what to train: Efficient Training with HOI Split Decoders and Specific Target Guided DeNoising | 2023-07-05T00:00:00 | https://arxiv.org/abs/2307.02291v2 | [
"https://github.com/cjw2021/sov-stg"
] | In the paper 'Focusing on what to decode and what to train: Efficient Training with HOI Split Decoders and Specific Target Guided DeNoising', what mAP score did the SOV-STG (Swin-L) model get on the HICO-DET dataset
| 43.35 |
Structured3D | SFSS-MMSI (RGB+Depth) | Single Frame Semantic Segmentation Using Multi-Modal Spherical Images | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09369v1 | [
"https://github.com/sguttikon/SFSS-MMSI"
] | In the paper 'Single Frame Semantic Segmentation Using Multi-Modal Spherical Images', what Validation mIoU score did the SFSS-MMSI (RGB+Depth) model get on the Structured3D dataset
| 73.78 |
Cityscapes Labels-to-Photo | USIS-Wavelet | Wavelet-based Unsupervised Label-to-Image Translation | 2023-05-16T00:00:00 | https://arxiv.org/abs/2305.09647v1 | [
"https://github.com/GeorgeEskandar/USIS-Unsupervised-Semantic-Image-Synthesis"
] | In the paper 'Wavelet-based Unsupervised Label-to-Image Translation', what mIoU score did the USIS-Wavelet model get on the Cityscapes Labels-to-Photo dataset
| 42.32 |
UAV123 | LoRAT-L-378 | Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance | 2024-03-08T00:00:00 | https://arxiv.org/abs/2403.05231v2 | [
"https://github.com/litinglin/lorat"
] | In the paper 'Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance', what AUC score did the LoRAT-L-378 model get on the UAV123 dataset
| 0.725 |
Willow Object Class | GMT-BBGM | GMTR: Graph Matching Transformers | 2023-11-14T00:00:00 | https://arxiv.org/abs/2311.08141v2 | [
"https://github.com/jp-guo/gm-transformer"
] | In the paper 'GMTR: Graph Matching Transformers', what matching accuracy score did the GMT-BBGM model get on the Willow Object Class dataset
| 0.9813 |
THuman2.0 Dataset | Human-VDM | Human-VDM: Learning Single-Image 3D Human Gaussian Splatting from Video Diffusion Models | 2024-09-04T00:00:00 | https://arxiv.org/abs/2409.02851v1 | [
"https://github.com/Human-VDM/Human-VDM"
] | In the paper 'Human-VDM: Learning Single-Image 3D Human Gaussian Splatting from Video Diffusion Models', what CLIP Similarity score did the Human-VDM model get on the THuman2.0 Dataset dataset
| 0.9235 |
ACDC Scribbles | ScribFormer | ScribFormer: Transformer Makes CNN Work Better for Scribble-based Medical Image Segmentation | 2024-02-03T00:00:00 | https://arxiv.org/abs/2402.02029v1 | [
"https://github.com/HUANGLIZI/ScribFormer"
] | In the paper 'ScribFormer: Transformer Makes CNN Work Better for Scribble-based Medical Image Segmentation', what Dice (Average) score did the ScribFormer model get on the ACDC Scribbles dataset
| 88.8% |