dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
COCO-20i (5-shot) | SCCAN (ResNet-50) | Self-Calibrated Cross Attention Network for Few-Shot Segmentation | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09294v1 | [
"https://github.com/sam1224/sccan"
] | In the paper 'Self-Calibrated Cross Attention Network for Few-Shot Segmentation', what Mean IoU score did the SCCAN (ResNet-50) model get on the COCO-20i (5-shot) dataset
| 53.9 |
VideoInstruct | PPLLaVA-7B-dpo | PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance | 2024-11-04T00:00:00 | https://arxiv.org/abs/2411.02327v2 | [
"https://github.com/farewellthree/ppllava"
] | In the paper 'PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance', what Correctness of Information score did the PPLLaVA-7B-dpo model get on the VideoInstruct dataset
| 3.85 |
VoxCeleb1 | ReDimNet-B0-LM (1.0M) | Reshape Dimensions Network for Speaker Recognition | 2024-07-25T00:00:00 | https://arxiv.org/abs/2407.18223v2 | [
"https://github.com/IDRnD/ReDimNet"
] | In the paper 'Reshape Dimensions Network for Speaker Recognition', what EER score did the ReDimNet-B0-LM (1.0M) model get on the VoxCeleb1 dataset
| 1.16 |
ColonINST-v1 (Seen) | MobileVLM-1.7B (w/ LoRA, w/ extra data) | MobileVLM : A Fast, Strong and Open Vision Language Assistant for Mobile Devices | 2023-12-28T00:00:00 | https://arxiv.org/abs/2312.16886v2 | [
"https://github.com/meituan-automl/mobilevlm"
] | In the paper 'MobileVLM : A Fast, Strong and Open Vision Language Assistant for Mobile Devices', what Accuracy score did the MobileVLM-1.7B (w/ LoRA, w/ extra data) model get on the ColonINST-v1 (Seen) dataset
| 93.64 |
SIQA | LLaMA-3 8B+MoSLoRA (fine-tuned) | Mixture-of-Subspaces in Low-Rank Adaptation | 2024-06-16T00:00:00 | https://arxiv.org/abs/2406.11909v3 | [
"https://github.com/wutaiqiang/moslora"
] | In the paper 'Mixture-of-Subspaces in Low-Rank Adaptation', what Accuracy score did the LLaMA-3 8B+MoSLoRA (fine-tuned) model get on the SIQA dataset
| 81.0 |
WinoGrande | PaLM 2-L (1-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | [
"https://github.com/eternityyw/tram-benchmark"
] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-L (1-shot) model get on the WinoGrande dataset
| 83.0 |
SMAC MMM2_7m2M1M_vs_8m4M1M | DMIX | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04T00:00:00 | https://arxiv.org/abs/2306.02430v1 | [
"https://github.com/j3soon/dfac-extended"
] | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the DMIX model get on the SMAC MMM2_7m2M1M_vs_8m4M1M dataset
| 63.35 |
WebVid | VideoFactory | Swap Attention in Spatiotemporal Diffusions for Text-to-Video Generation | 2023-05-18T00:00:00 | https://arxiv.org/abs/2305.10874v4 | [
"https://github.com/daooshee/hd-vg-130m"
] | In the paper 'Swap Attention in Spatiotemporal Diffusions for Text-to-Video Generation', what FVD score did the VideoFactory model get on the WebVid dataset
| 292.35 |
ASQP | MvP (multi-task) | MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction | 2023-05-22T00:00:00 | https://arxiv.org/abs/2305.12627v1 | [
"https://github.com/ZubinGou/multi-view-prompting"
] | In the paper 'MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction', what F1 (R15) score did the MvP (multi-task) model get on the ASQP dataset
| 52.21 |
SIM10K to Cityscapes | SADA (ResNet50-FPN) | Align and Distill: Unifying and Improving Domain Adaptive Object Detection | 2024-03-18T00:00:00 | https://arxiv.org/abs/2403.12029v2 | [
"https://github.com/justinkay/aldi"
] | In the paper 'Align and Distill: Unifying and Improving Domain Adaptive Object Detection', what mAP@0.5 score did the SADA (ResNet50-FPN) model get on the SIM10K to Cityscapes dataset
| 71.8 |
ETIS-LARIBPOLYPDB | EMCAD | EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation | 2024-05-11T00:00:00 | https://arxiv.org/abs/2405.06880v1 | [
"https://github.com/sldgroup/emcad"
] | In the paper 'EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation', what mean Dice score did the EMCAD model get on the ETIS-LARIBPOLYPDB dataset
| 0.9229 |
MATH | GPT-4-code model (w/ code) | Solving Challenging Math Word Problems Using GPT-4 Code Interpreter with Code-based Self-Verification | 2023-08-15T00:00:00 | https://arxiv.org/abs/2308.07921v1 | [
"https://github.com/kipok/nemo-skills"
] | In the paper 'Solving Challenging Math Word Problems Using GPT-4 Code Interpreter with Code-based Self-Verification', what Accuracy score did the GPT-4-code model (w/ code) get on the MATH dataset
| 69.7 |
TerraIncognita | GMDG (ResNet-50, SWAD) | Rethinking Multi-domain Generalization with A General Learning Objective | 2024-02-29T00:00:00 | https://arxiv.org/abs/2402.18853v1 | [
"https://github.com/zhaorui-tan/GMDG_cvpr2024"
] | In the paper 'Rethinking Multi-domain Generalization with A General Learning Objective', what Average Accuracy score did the GMDG (ResNet-50, SWAD) model get on the TerraIncognita dataset
| 53.0 |
BIG-bench (Sports Understanding) | PaLM 2(few-shot, k=3, CoT) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | [
"https://github.com/eternityyw/tram-benchmark"
] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2(few-shot, k=3, CoT) model get on the BIG-bench (Sports Understanding) dataset
| 98 |
CHILI-3K | GCN | CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning | 2024-02-20T00:00:00 | https://arxiv.org/abs/2402.13221v2 | [
"https://github.com/UlrikFriisJensen/CHILI"
] | In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what F1-score (Weighted) score did the GCN model get on the CHILI-3K dataset
| 0.496 ± 0.001 |
BEHAVE | CONTHO | Joint Reconstruction of 3D Human and Object via Contact-Based Refinement Transformer | 2024-04-07T00:00:00 | https://arxiv.org/abs/2404.04819v1 | [
"https://github.com/dqj5182/contho_release"
] | In the paper 'Joint Reconstruction of 3D Human and Object via Contact-Based Refinement Transformer', what Precision score did the CONTHO model get on the BEHAVE dataset
| 0.754 |
CausalGym | LDA | CausalGym: Benchmarking causal interpretability methods on linguistic tasks | 2024-02-19T00:00:00 | https://arxiv.org/abs/2402.12560v1 | [
"https://github.com/aryamanarora/causalgym"
] | In the paper 'CausalGym: Benchmarking causal interpretability methods on linguistic tasks', what Log odds-ratio (pythia-6.9b) score did the LDA model get on the CausalGym dataset
| 0.27 |
P3M-10k | StyleMatte | Adversarially-Guided Portrait Matting | 2023-05-04T00:00:00 | https://arxiv.org/abs/2305.02981v2 | [
"https://github.com/chroneus/stylematte"
] | In the paper 'Adversarially-Guided Portrait Matting', what SAD score did the StyleMatte model get on the P3M-10k dataset
| 6.97 |
ModelNet40 | PointMLS | ModelNet-O: A Large-Scale Synthetic Dataset for Occlusion-Aware Point Cloud Classification | 2024-01-16T00:00:00 | https://arxiv.org/abs/2401.08210v1 | [
"https://github.com/fanglaosi/pointmls"
] | In the paper 'ModelNet-O: A Large-Scale Synthetic Dataset for Occlusion-Aware Point Cloud Classification', what Overall Accuracy score did the PointMLS model get on the ModelNet40 dataset
| 94.0 |
HOST | CLIP4STR-L | CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model | 2023-05-23T00:00:00 | https://arxiv.org/abs/2305.14014v3 | [
"https://github.com/VamosC/CLIP4STR"
] | In the paper 'CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model', what 1:1 Accuracy score did the CLIP4STR-L model get on the HOST dataset
| 82.7 |
RefCOCO testA | MagNet | Mask Grounding for Referring Image Segmentation | 2023-12-19T00:00:00 | https://arxiv.org/abs/2312.12198v2 | [
"https://github.com/yxchng/mask-grounding"
] | In the paper 'Mask Grounding for Referring Image Segmentation', what Overall IoU score did the MagNet model get on the RefCOCO testA dataset
| 78.24 |
YouTube-VIS 2021 | UniVS(Swin-L) | UniVS: Unified and Universal Video Segmentation with Prompts as Queries | 2024-02-28T00:00:00 | https://arxiv.org/abs/2402.18115v2 | [
"https://github.com/minghanli/univs"
] | In the paper 'UniVS: Unified and Universal Video Segmentation with Prompts as Queries', what mask AP score did the UniVS(Swin-L) model get on the YouTube-VIS 2021 dataset
| 57.9 |
RefCOCO+ test B | MaskRIS (Swin-B, combined DB) | MaskRIS: Semantic Distortion-aware Data Augmentation for Referring Image Segmentation | 2024-11-28T00:00:00 | https://arxiv.org/abs/2411.19067v1 | [
"https://github.com/naver-ai/maskris"
] | In the paper 'MaskRIS: Semantic Distortion-aware Data Augmentation for Referring Image Segmentation', what Overall IoU score did the MaskRIS (Swin-B, combined DB) model get on the RefCOCO+ test B dataset
| 62.83 |
SUN-RGBD val | V-DETR | V-DETR: DETR with Vertex Relative Position Encoding for 3D Object Detection | 2023-08-08T00:00:00 | https://arxiv.org/abs/2308.04409v1 | [
"https://github.com/yichaoshen-ms/v-detr"
] | In the paper 'V-DETR: DETR with Vertex Relative Position Encoding for 3D Object Detection', what mAP@0.25 score did the V-DETR model get on the SUN-RGBD val dataset
| 68.0 |
GSM8K | Qwen2-72B-Instruct-Step-DPO (0-shot CoT) | Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs | 2024-06-26T00:00:00 | https://arxiv.org/abs/2406.18629v1 | [
"https://github.com/dvlab-research/step-dpo"
] | In the paper 'Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs', what Accuracy score did the Qwen2-72B-Instruct-Step-DPO (0-shot CoT) model get on the GSM8K dataset
| 94.0 |
VDD | Segformer-B2 | VDD: Varied Drone Dataset for Semantic Segmentation | 2023-05-23T00:00:00 | https://arxiv.org/abs/2305.13608v3 | [
"https://github.com/RussRobin/VDD"
] | In the paper 'VDD: Varied Drone Dataset for Semantic Segmentation', what mIoU score did the Segformer-B2 model get on the VDD dataset
| 85.75 |
Atari 2600 Gopher | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07T00:00:00 | https://arxiv.org/abs/2305.04180v3 | [
"https://github.com/xinjinghao/color"
] | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score did the ASL DDQN model get on the Atari 2600 Gopher dataset
| 103514.4 |
CHILI-3K | EdgeCNN | CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning | 2024-02-20T00:00:00 | https://arxiv.org/abs/2402.13221v2 | [
"https://github.com/UlrikFriisJensen/CHILI"
] | In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what F1-score (Weighted) score did the EdgeCNN model get on the CHILI-3K dataset
| 0.632 ± 0.009 |
VSPW | UniVS(Swin-L) | UniVS: Unified and Universal Video Segmentation with Prompts as Queries | 2024-02-28T00:00:00 | https://arxiv.org/abs/2402.18115v2 | [
"https://github.com/minghanli/univs"
] | In the paper 'UniVS: Unified and Universal Video Segmentation with Prompts as Queries', what mIoU score did the UniVS(Swin-L) model get on the VSPW dataset
| 59.8 |
TACRED-Revisited | LLM-QA4RE (XXLarge) | Aligning Instruction Tasks Unlocks Large Language Models as Zero-Shot Relation Extractors | 2023-05-18T00:00:00 | https://arxiv.org/abs/2305.11159v1 | [
"https://github.com/osu-nlp-group/qa4re"
] | In the paper 'Aligning Instruction Tasks Unlocks Large Language Models as Zero-Shot Relation Extractors', what F1 score did the LLM-QA4RE (XXLarge) model get on the TACRED-Revisited dataset
| 53.4 |
MPI-INF-3DHP | W-HMR | W-HMR: Monocular Human Mesh Recovery in World Space with Weak-Supervised Calibration | 2023-11-29T00:00:00 | https://arxiv.org/abs/2311.17460v6 | [
"https://github.com/yw0208/W-HMR"
] | In the paper 'W-HMR: Monocular Human Mesh Recovery in World Space with Weak-Supervised Calibration', what MPJPE score did the W-HMR model get on the MPI-INF-3DHP dataset
| 83.2 |
ICBHI Respiratory Sound Database | AST (fine-tuning) | Patch-Mix Contrastive Learning with Audio Spectrogram Transformer on Respiratory Sound Classification | 2023-05-23T00:00:00 | https://arxiv.org/abs/2305.14032v4 | [
"https://github.com/raymin0223/patch-mix_contrastive_learning"
] | In the paper 'Patch-Mix Contrastive Learning with Audio Spectrogram Transformer on Respiratory Sound Classification', what Sensitivity score did the AST (fine-tuning) model get on the ICBHI Respiratory Sound Database dataset
| 41.97 |
MSCOCO | Cooperative Foundational Models | Enhancing Novel Object Detection via Cooperative Foundational Models | 2023-11-19T00:00:00 | https://arxiv.org/abs/2311.12068v3 | [
"https://github.com/rohit901/cooperative-foundational-models"
] | In the paper 'Enhancing Novel Object Detection via Cooperative Foundational Models', what AP 0.5 score did the Cooperative Foundational Models model get on the MSCOCO dataset
| 50.3 |
Flickr30k | 3SHNet | 3SHNet: Boosting Image-Sentence Retrieval via Visual Semantic-Spatial Self-Highlighting | 2024-04-26T00:00:00 | https://arxiv.org/abs/2404.17273v1 | [
"https://github.com/xurige1995/3shnet"
] | In the paper '3SHNet: Boosting Image-Sentence Retrieval via Visual Semantic-Spatial Self-Highlighting', what Image-to-text R@1 score did the 3SHNet model get on the Flickr30k dataset
| 87.1 |
MATH | MuggleMATH-70B | MuggleMath: Assessing the Impact of Query and Response Augmentation on Math Reasoning | 2023-10-09T00:00:00 | https://arxiv.org/abs/2310.05506v3 | [
"https://github.com/ofa-sys/gsm8k-screl"
] | In the paper 'MuggleMath: Assessing the Impact of Query and Response Augmentation on Math Reasoning', what Accuracy score did the MuggleMATH-70B model get on the MATH dataset
| 35.6 |
CIFAR-100 | ResNet8×4 | LumiNet: The Bright Side of Perceptual Knowledge Distillation | 2023-10-05T00:00:00 | https://arxiv.org/abs/2310.03669v2 | [
"https://github.com/ismail31416/luminet"
] | In the paper 'LumiNet: The Bright Side of Perceptual Knowledge Distillation', what Accuracy score did the ResNet8×4 model get on the CIFAR-100 dataset
| 77.50 |
ShapeNet Chair | DiT-3D | DiT-3D: Exploring Plain Diffusion Transformers for 3D Shape Generation | 2023-07-04T00:00:00 | https://arxiv.org/abs/2307.01831v1 | [
"https://github.com/DiT-3D/DiT-3D"
] | In the paper 'DiT-3D: Exploring Plain Diffusion Transformers for 3D Shape Generation', what 1-NNA-CD score did the DiT-3D model get on the ShapeNet Chair dataset
| 51.99 |
QVHighlights | video-mamba-suite | Video Mamba Suite: State Space Model as a Versatile Alternative for Video Understanding | 2024-03-14T00:00:00 | https://arxiv.org/abs/2403.09626v1 | [
"https://github.com/opengvlab/video-mamba-suite"
] | In the paper 'Video Mamba Suite: State Space Model as a Versatile Alternative for Video Understanding', what mAP score did the video-mamba-suite model get on the QVHighlights dataset
| 45.18 |
minesweeper | GCN | Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification | 2024-06-13T00:00:00 | https://arxiv.org/abs/2406.08993v2 | [
"https://github.com/LUOyk1999/tunedGNN"
] | In the paper 'Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification', what AUCROC score did the GCN model get on the minesweeper dataset
| 97.86 ± 0.24 |
PASTIS | Exchanger+Unet | Revisiting the Encoding of Satellite Image Time Series | 2023-05-03T00:00:00 | https://arxiv.org/abs/2305.02086v2 | [
"https://github.com/TotalVariation/Exchanger4SITS"
] | In the paper 'Revisiting the Encoding of Satellite Image Time Series', what Mean IoU (test) score did the Exchanger+Unet model get on the PASTIS dataset
| 66.8 |
SVOX-Overcast | BoQ (ResNet-50) | BoQ: A Place is Worth a Bag of Learnable Queries | 2024-05-12T00:00:00 | https://arxiv.org/abs/2405.07364v3 | [
"https://github.com/amaralibey/bag-of-queries"
] | In the paper 'BoQ: A Place is Worth a Bag of Learnable Queries', what Recall@1 score did the BoQ (ResNet-50) model get on the SVOX-Overcast dataset
| 97.8 |
VTAB-1k(Natural<7>) | GateVPT(ViT-B/16_MAE_pretrained_ImageNet-1K) | Improving Visual Prompt Tuning for Self-supervised Vision Transformers | 2023-06-08T00:00:00 | https://arxiv.org/abs/2306.05067v1 | [
"https://github.com/ryongithub/gatedprompttuning"
] | In the paper 'Improving Visual Prompt Tuning for Self-supervised Vision Transformers', what Mean Accuracy score did the GateVPT(ViT-B/16_MAE_pretrained_ImageNet-1K) model get on the VTAB-1k(Natural<7>) dataset
| 47.61 |
ADE20K | DeBiFormer-B (IN1k pretrain, Upernet 160k) | DeBiFormer: Vision Transformer with Deformable Agent Bi-level Routing Attention | 2024-10-11T00:00:00 | https://arxiv.org/abs/2410.08582v1 | [
"https://github.com/maclong01/DeBiFormer"
] | In the paper 'DeBiFormer: Vision Transformer with Deformable Agent Bi-level Routing Attention', what Validation mIoU score did the DeBiFormer-B (IN1k pretrain, Upernet 160k) model get on the ADE20K dataset
| 52.0 |
NTU RGB+D 120 | IPP-Net (Parsing + Pose) | Integrating Human Parsing and Pose Network for Human Action Recognition | 2023-07-16T00:00:00 | https://arxiv.org/abs/2307.07977v1 | [
"https://github.com/liujf69/ipp-net-parsing"
] | In the paper 'Integrating Human Parsing and Pose Network for Human Action Recognition', what Accuracy (Cross-Subject) score did the IPP-Net (Parsing + Pose) model get on the NTU RGB+D 120 dataset
| 90.0 |
ShanghaiTech | MULDE-frame-centric-micro | MULDE: Multiscale Log-Density Estimation via Denoising Score Matching for Video Anomaly Detection | 2024-03-21T00:00:00 | https://arxiv.org/abs/2403.14497v1 | [
"https://github.com/jakubmicorek/MULDE-Multiscale-Log-Density-Estimation-via-Denoising-Score-Matching-for-Video-Anomaly-Detection"
] | In the paper 'MULDE: Multiscale Log-Density Estimation via Denoising Score Matching for Video Anomaly Detection', what AUC score did the MULDE-frame-centric-micro model get on the ShanghaiTech dataset
| 81.3% |
RES-Q | QurrentOS-coder + DeepSeek-Coder-V2 | RES-Q: Evaluating Code-Editing Large Language Model Systems at the Repository Scale | 2024-06-24T00:00:00 | https://arxiv.org/abs/2406.16801v2 | [
"https://github.com/qurrent-ai/res-q"
] | In the paper 'RES-Q: Evaluating Code-Editing Large Language Model Systems at the Repository Scale', what pass@1 score did the QurrentOS-coder + DeepSeek-Coder-V2 model get on the RES-Q dataset
| 29.0 |
NYU Depth v2 | HAPNet | HAPNet: Toward Superior RGB-Thermal Scene Parsing via Hybrid, Asymmetric, and Progressive Heterogeneous Feature Fusion | 2024-04-04T00:00:00 | https://arxiv.org/abs/2404.03527v2 | [
"https://github.com/LiJiahang617/HAPNet"
] | In the paper 'HAPNet: Toward Superior RGB-Thermal Scene Parsing via Hybrid, Asymmetric, and Progressive Heterogeneous Feature Fusion', what Mean IoU score did the HAPNet model get on the NYU Depth v2 dataset
| 55.0 |
ETIS-LARIBPOLYPDB | ProMISe | ProMISe: Promptable Medical Image Segmentation using SAM | 2024-03-07T00:00:00 | https://arxiv.org/abs/2403.04164v3 | [
"https://github.com/xinkunwang111/promise"
] | In the paper 'ProMISe: Promptable Medical Image Segmentation using SAM', what mIoU score did the ProMISe model get on the ETIS-LARIBPOLYPDB dataset
| 0.750 |
iNaturalist 2018 | GML (ResNet-50) | Long-Tailed Recognition by Mutual Information Maximization between Latent Features and Ground-Truth Labels | 2023-05-02T00:00:00 | https://arxiv.org/abs/2305.01160v3 | [
"https://github.com/bluecdm/Long-tailed-recognition"
] | In the paper 'Long-Tailed Recognition by Mutual Information Maximization between Latent Features and Ground-Truth Labels', what Top-1 Accuracy score did the GML (ResNet-50) model get on the iNaturalist 2018 dataset
| 74.5% |
Stanford2D3D Panoramic | SFSS-MMSI (RGB+Depth+Normal) | Single Frame Semantic Segmentation Using Multi-Modal Spherical Images | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09369v1 | [
"https://github.com/sguttikon/SFSS-MMSI"
] | In the paper 'Single Frame Semantic Segmentation Using Multi-Modal Spherical Images', what mIoU score did the SFSS-MMSI (RGB+Depth+Normal) model get on the Stanford2D3D Panoramic dataset
| 59.43% |
Cholec80 | LoViT | LoViT: Long Video Transformer for Surgical Phase Recognition | 2023-05-15T00:00:00 | https://arxiv.org/abs/2305.08989v3 | [
"https://github.com/MRUIL/LoViT"
] | In the paper 'LoViT: Long Video Transformer for Surgical Phase Recognition', what F1 score did the LoViT model get on the Cholec80 dataset
| 90.24 |
Office-31 | PDA (CLIP, ViT-B/16) | Prompt-based Distribution Alignment for Unsupervised Domain Adaptation | 2023-12-15T00:00:00 | https://arxiv.org/abs/2312.09553v2 | [
"https://github.com/baishuanghao/prompt-based-distribution-alignment"
] | In the paper 'Prompt-based Distribution Alignment for Unsupervised Domain Adaptation', what Accuracy score did the PDA (CLIP, ViT-B/16) model get on the Office-31 dataset
| 91.2 |
ROOR | LayoutLMv3-GlobalPointer (base) | Modeling Layout Reading Order as Ordering Relations for Visually-rich Document Understanding | 2024-09-29T00:00:00 | https://arxiv.org/abs/2409.19672v1 | [
"https://github.com/chongzhangFDU/ROOR"
] | In the paper 'Modeling Layout Reading Order as Ordering Relations for Visually-rich Document Understanding', what Segment-level F1 score did the LayoutLMv3-GlobalPointer (base) model get on the ROOR dataset
| 68.60 |
Real20 | RDNet | Reversible Decoupling Network for Single Image Reflection Removal | 2024-10-10T00:00:00 | https://arxiv.org/abs/2410.08063v1 | [
"https://github.com/lime-j/RDNet"
] | In the paper 'Reversible Decoupling Network for Single Image Reflection Removal', what PSNR score did the RDNet model get on the Real20 dataset
| 25.58 |
CROHME 2019 | PosFormer | PosFormer: Recognizing Complex Handwritten Mathematical Expression with Position Forest Transformer | 2024-07-10T00:00:00 | https://arxiv.org/abs/2407.07764v1 | [
"https://github.com/sjtu-deepvisionlab/posformer"
] | In the paper 'PosFormer: Recognizing Complex Handwritten Mathematical Expression with Position Forest Transformer', what ExpRate score did the PosFormer model get on the CROHME 2019 dataset
| 62.22 |
COCO val2017 | U2Seg | Unsupervised Universal Image Segmentation | 2023-12-28T00:00:00 | https://arxiv.org/abs/2312.17243v1 | [
"https://github.com/u2seg/u2seg"
] | In the paper 'Unsupervised Universal Image Segmentation', what PQ score did the U2Seg model get on the COCO val2017 dataset
| 11.1 |
QNLI | LM-CPPF RoBERTa-base | LM-CPPF: Paraphrasing-Guided Data Augmentation for Contrastive Prompt-Based Few-Shot Fine-Tuning | 2023-05-29T00:00:00 | https://arxiv.org/abs/2305.18169v3 | [
"https://github.com/amirabaskohi/lm-cppf"
] | In the paper 'LM-CPPF: Paraphrasing-Guided Data Augmentation for Contrastive Prompt-Based Few-Shot Fine-Tuning', what Accuracy score did the LM-CPPF RoBERTa-base model get on the QNLI dataset
| 70.2% |
Oxford 102 Flower | HPT | Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06323v1 | [
"https://github.com/vill-lab/2024-aaai-hpt"
] | In the paper 'Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models', what Harmonic mean score did the HPT model get on the Oxford 102 Flower dataset
| 87.16 |
THUMOS 2014 | CASE | Revisiting Foreground and Background Separation in Weakly-supervised Temporal Action Localization: A Clustering-based Approach | 2023-12-21T00:00:00 | https://arxiv.org/abs/2312.14138v1 | [
"https://github.com/qinying-liu/case"
] | In the paper 'Revisiting Foreground and Background Separation in Weakly-supervised Temporal Action Localization: A Clustering-based Approach', what mAP@0.1:0.7 score did the CASE model get on the THUMOS 2014 dataset
| 46.2 |
ClinTox | ChemBFN | A Bayesian Flow Network Framework for Chemistry Tasks | 2024-07-28T00:00:00 | https://arxiv.org/abs/2407.20294v1 | [
"https://github.com/Augus1999/bayesian-flow-network-for-chemistry"
] | In the paper 'A Bayesian Flow Network Framework for Chemistry Tasks', what ROC-AUC score did the ChemBFN model get on the ClinTox dataset
| 99.18 |
Weather2K79 (96) | MoLE-DLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06786v3 | [
"https://github.com/rogerni/mole"
] | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the Weather2K79 (96) dataset
| 0.555 |
Dark Zurich | CoDA | CoDA: Instructive Chain-of-Domain Adaptation with Severity-Aware Visual Prompt Tuning | 2024-03-26T00:00:00 | https://arxiv.org/abs/2403.17369v3 | [
"https://github.com/Cuzyoung/CoDA"
] | In the paper 'CoDA: Instructive Chain-of-Domain Adaptation with Severity-Aware Visual Prompt Tuning', what mIoU score did the CoDA model get on the Dark Zurich dataset
| 61.2 |
enwik8 | Transformer+SSA | The Information Pathways Hypothesis: Transformers are Dynamic Self-Ensembles | 2023-06-02T00:00:00 | https://arxiv.org/abs/2306.01705v1 | [
"https://github.com/shamim-hussain/ssa"
] | In the paper 'The Information Pathways Hypothesis: Transformers are Dynamic Self-Ensembles', what Bit per Character (BPC) score did the Transformer+SSA model get on the enwik8 dataset
| 1.024 |
VizWiz 2020 VQA | Video-LaVIT | Video-LaVIT: Unified Video-Language Pre-training with Decoupled Visual-Motional Tokenization | 2024-02-05T00:00:00 | https://arxiv.org/abs/2402.03161v3 | [
"https://github.com/jy0205/lavit"
] | In the paper 'Video-LaVIT: Unified Video-Language Pre-training with Decoupled Visual-Motional Tokenization', what overall score did the Video-LaVIT model get on the VizWiz 2020 VQA dataset
| 56.0 |
SIM10K to Cityscapes | UMT (ResNet50-FPN) | Align and Distill: Unifying and Improving Domain Adaptive Object Detection | 2024-03-18T00:00:00 | https://arxiv.org/abs/2403.12029v2 | [
"https://github.com/justinkay/aldi"
] | In the paper 'Align and Distill: Unifying and Improving Domain Adaptive Object Detection', what mAP@0.5 score did the UMT (ResNet50-FPN) model get on the SIM10K to Cityscapes dataset
| 58.7 |
ChinaOpen-1k | GVT | ChinaOpen: A Dataset for Open-world Multimodal Learning | 2023-05-10T00:00:00 | https://arxiv.org/abs/2305.05880v2 | [
"https://github.com/dong03/GenerativeVideo2Text"
] | In the paper 'ChinaOpen: A Dataset for Open-world Multimodal Learning', what BLEU4 score did the GVT model get on the ChinaOpen-1k dataset
| 17.7 |
Winoground | BLIP (VisualGPTScore, α-tuned) | Revisiting the Role of Language Priors in Vision-Language Models | 2023-06-02T00:00:00 | https://arxiv.org/abs/2306.01879v4 | [
"https://github.com/linzhiqiu/visual_gpt_score"
] | In the paper 'Revisiting the Role of Language Priors in Vision-Language Models', what Text Score score did the BLIP (VisualGPTScore, α-tuned) model get on the Winoground dataset
| 36.5 |
VNHSGE-Geography | Bing Chat | VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models | 2023-05-20T00:00:00 | https://arxiv.org/abs/2305.12199v1 | [
"https://github.com/xdao85/vnhsge"
] | In the paper 'VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models', what Accuracy score did the Bing Chat model get on the VNHSGE-Geography dataset
| 85.5 |
INSPIRE-AVR (LUNet subset) | LUNet | LUNet: Deep Learning for the Segmentation of Arterioles and Venules in High Resolution Fundus Images | 2023-09-11T00:00:00 | https://arxiv.org/abs/2309.05780v1 | [
"https://github.com/aim-lab/LUNet"
] | In the paper 'LUNet: Deep Learning for the Segmentation of Arterioles and Venules in High Resolution Fundus Images', what Average Dice score did the LUNet model get on the INSPIRE-AVR (LUNet subset) dataset
| 75.6 |
Hainsworth | Beat This! | Beat this! Accurate beat tracking without DBN postprocessing | 2024-07-31T00:00:00 | https://arxiv.org/abs/2407.21658v1 | [
"https://github.com/CPJKU/beat_this"
] | In the paper 'Beat this! Accurate beat tracking without DBN postprocessing', what F1 score did the Beat This! model get on the Hainsworth dataset
| 91.9 |
Amazon-Google | Meta-Llama-3.1-8B-Instruct_fine_tuned | Fine-tuning Large Language Models for Entity Matching | 2024-09-12T00:00:00 | https://arxiv.org/abs/2409.08185v1 | [
"https://github.com/wbsg-uni-mannheim/tailormatch"
] | In the paper 'Fine-tuning Large Language Models for Entity Matching', what F1 (%) score did the Meta-Llama-3.1-8B-Instruct_fine_tuned model get on the Amazon-Google dataset
| 50.00 |
CIFAR-10 (partial ratio 0.3) | ILL | Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations | 2023-05-22T00:00:00 | https://arxiv.org/abs/2305.12715v4 | [
"https://github.com/hhhhhhao/general-framework-weak-supervision"
] | In the paper 'Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations', what Accuracy score did the ILL model get on the CIFAR-10 (partial ratio 0.3) dataset
| 96.26 |
Texas | MGNN + Hetero-S (8 layers) | The Heterophilic Snowflake Hypothesis: Training and Empowering GNNs for Heterophilic Graphs | 2024-06-18T00:00:00 | https://arxiv.org/abs/2406.12539v1 | [
"https://github.com/bingreeky/heterosnoh"
] | In the paper 'The Heterophilic Snowflake Hypothesis: Training and Empowering GNNs for Heterophilic Graphs', what Accuracy score did the MGNN + Hetero-S (8 layers) model get on the Texas dataset
| 93.09 |
FSS-1000 (5-shot) | Annotation-free FSS (Without Annotation,ResNet-50) | Self-supervised Few-shot Learning for Semantic Segmentation: An Annotation-free Approach | 2023-07-26T00:00:00 | https://arxiv.org/abs/2307.14446v1 | [
"https://github.com/mindflow-institue/annotation_free_fewshot"
] | In the paper 'Self-supervised Few-shot Learning for Semantic Segmentation: An Annotation-free Approach', what Mean IoU score did the Annotation-free FSS (Without Annotation,ResNet-50) model get on the FSS-1000 (5-shot) dataset
| 86.8 |
ICBHI Respiratory Sound Database | BTS | BTS: Bridging Text and Sound Modalities for Metadata-Aided Respiratory Sound Classification | 2024-06-10T00:00:00 | https://arxiv.org/abs/2406.06786v2 | [
"https://github.com/kaen2891/bts"
] | In the paper 'BTS: Bridging Text and Sound Modalities for Metadata-Aided Respiratory Sound Classification', what ICBHI Score did the BTS model get on the ICBHI Respiratory Sound Database dataset
| 63.54 |
SemEval-2010 Task 8 | LLM-QA4RE (XXLarge) | Aligning Instruction Tasks Unlocks Large Language Models as Zero-Shot Relation Extractors | 2023-05-18T00:00:00 | https://arxiv.org/abs/2305.11159v1 | [
"https://github.com/osu-nlp-group/qa4re"
] | In the paper 'Aligning Instruction Tasks Unlocks Large Language Models as Zero-Shot Relation Extractors', what F1 score did the LLM-QA4RE (XXLarge) model get on the SemEval-2010 Task 8 dataset
| 43.5 |
JARVIS-DFT | PotNet | Efficient Approximations of Complete Interatomic Potentials for Crystal Property Prediction | 2023-06-12T00:00:00 | https://arxiv.org/abs/2306.10045v9 | [
"https://github.com/divelab/AIRS/tree/main/OpenMat/PotNet"
] | In the paper 'Efficient Approximations of Complete Interatomic Potentials for Crystal Property Prediction', what MAE score did the PotNet model get on the JARVIS-DFT dataset
| 0.0294 |
CropHarvest - Brazil | Hybrid fusion with LSTM | In the Search for Optimal Multi-view Learning Models for Crop Classification with Global Remote Sensing Data | 2024-03-25T00:00:00 | https://arxiv.org/abs/2403.16582v2 | [
"https://github.com/fmenat/optimal-multiview-crop-classifier"
] | In the paper 'In the Search for Optimal Multi-view Learning Models for Crop Classification with Global Remote Sensing Data', what Average Accuracy score did the Hybrid fusion with LSTM model get on the CropHarvest - Brazil dataset
| 0.974 |
WikiTableQuestions | Mix SC | Rethinking Tabular Data Understanding with Large Language Models | 2023-12-27T00:00:00 | https://arxiv.org/abs/2312.16702v1 | [
"https://github.com/Leolty/tablellm"
] | In the paper 'Rethinking Tabular Data Understanding with Large Language Models', what Accuracy (Dev) score did the Mix SC model get on the WikiTableQuestions dataset
| / |
CiteSeer with Public Split: fixed 20 nodes per class | OKDEEM | Graph Entropy Minimization for Semi-supervised Node Classification | 2023-05-31T00:00:00 | https://arxiv.org/abs/2305.19502v1 | [
"https://github.com/cf020031308/gem"
] | In the paper 'Graph Entropy Minimization for Semi-supervised Node Classification', what Accuracy score did the OKDEEM model get on the CiteSeer with Public Split: fixed 20 nodes per class dataset
| 73.53 |
PACS | MoA (OpenCLIP, ViT-B/16) | Domain Generalization Using Large Pretrained Models with Mixture-of-Adapters | 2023-10-17T00:00:00 | https://arxiv.org/abs/2310.11031v2 | [
"https://github.com/KU-CVLAB/MoA"
] | In the paper 'Domain Generalization Using Large Pretrained Models with Mixture-of-Adapters', what Average Accuracy score did the MoA (OpenCLIP, ViT-B/16) model get on the PACS dataset
| 97.4 |
MOT17 | ContrasTR | Contrastive Learning for Multi-Object Tracking with Transformers | 2023-11-14T00:00:00 | https://arxiv.org/abs/2311.08043v1 | [
"https://github.com/pfdp0/ContrasTR"
] | In the paper 'Contrastive Learning for Multi-Object Tracking with Transformers', what MOTA score did the ContrasTR model get on the MOT17 dataset
| 73.7 |
AudioCaps | Auffusion-Full | Auffusion: Leveraging the Power of Diffusion and Large Language Models for Text-to-Audio Generation | 2024-01-02T00:00:00 | https://arxiv.org/abs/2401.01044v1 | [
"https://github.com/happylittlecat2333/Auffusion"
] | In the paper 'Auffusion: Leveraging the Power of Diffusion and Large Language Models for Text-to-Audio Generation', what FAD score did the Auffusion-Full model get on the AudioCaps dataset
| 1.76 |
ColonINST-v1 (Seen) | LLaVA-Med-v1.0 (w/o LoRA, w/ extra data) | LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day | 2023-06-01T00:00:00 | https://arxiv.org/abs/2306.00890v1 | [
"https://github.com/microsoft/LLaVA-Med"
] | In the paper 'LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day', what Accuracy score did the LLaVA-Med-v1.0 (w/o LoRA, w/ extra data) model get on the ColonINST-v1 (Seen) dataset
| 97.35 |
Criteo | CETN | CETN: Contrast-enhanced Through Network for CTR Prediction | 2023-12-15T00:00:00 | https://arxiv.org/abs/2312.09715v2 | [
"https://github.com/salmon1802/cetn"
] | In the paper 'CETN: Contrast-enhanced Through Network for CTR Prediction', what AUC score did the CETN model get on the Criteo dataset
| 0.8148 |
Peptides-struct | TIGT | Topology-Informed Graph Transformer | 2024-02-03T00:00:00 | https://arxiv.org/abs/2402.02005v1 | [
"https://github.com/leemingo/tigt"
] | In the paper 'Topology-Informed Graph Transformer', what MAE score did the TIGT model get on the Peptides-struct dataset
| 0.2485 |
MOT20 | IMM-JHSE | One Homography is All You Need: IMM-based Joint Homography and Multiple Object State Estimation | 2024-09-04T00:00:00 | https://arxiv.org/abs/2409.02562v2 | [
"https://github.com/Paulkie99/imm-jhse"
] | In the paper 'One Homography is All You Need: IMM-based Joint Homography and Multiple Object State Estimation', what MOTA score did the IMM-JHSE model get on the MOT20 dataset
| 72.82 |
FRMT (Portuguese - Brazil) | Google Translate | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | [
"https://github.com/eternityyw/tram-benchmark"
] | In the paper 'PaLM 2 Technical Report', what BLEURT score did the Google Translate model get on the FRMT (Portuguese - Brazil) dataset
| 80.2 |
The Pile | GPT-2 Large 774M (test-time training on nearest neighbors) | Test-Time Training on Nearest Neighbors for Large Language Models | 2023-05-29T00:00:00 | https://arxiv.org/abs/2305.18466v3 | [
"https://github.com/socialfoundations/tttlm"
] | In the paper 'Test-Time Training on Nearest Neighbors for Large Language Models', what Bits per byte score did the GPT-2 Large 774M (test-time training on nearest neighbors) model get on the The Pile dataset
| 0.85 |
YouTube-VIS validation | DVIS++(VIT-L, Online) | DVIS++: Improved Decoupled Framework for Universal Video Segmentation | 2023-12-20T00:00:00 | https://arxiv.org/abs/2312.13305v1 | [
"https://github.com/zhang-tao-whu/DVIS_Plus"
] | In the paper 'DVIS++: Improved Decoupled Framework for Universal Video Segmentation', what mask AP score did the DVIS++(VIT-L, Online) model get on the YouTube-VIS validation dataset
| 67.7 |
GOT-10k | ODTrack-L | ODTrack: Online Dense Temporal Token Learning for Visual Tracking | 2024-01-03T00:00:00 | https://arxiv.org/abs/2401.01686v1 | [
"https://github.com/gxnu-zhonglab/odtrack"
] | In the paper 'ODTrack: Online Dense Temporal Token Learning for Visual Tracking', what Average Overlap score did the ODTrack-L model get on the GOT-10k dataset
| 78.2 |
CMU-MOSEI | ConCluGen | Multi-Task Multi-Modal Self-Supervised Learning for Facial Expression Recognition | 2024-04-16T00:00:00 | https://arxiv.org/abs/2404.10904v2 | [
"https://github.com/tub-cv-group/conclugen"
] | In the paper 'Multi-Task Multi-Modal Self-Supervised Learning for Facial Expression Recognition', what Accuracy score did the ConCluGen model get on the CMU-MOSEI dataset
| 66.48 |
MultiviewX | EarlyBird | EarlyBird: Early-Fusion for Multi-View Tracking in the Bird's Eye View | 2023-10-20T00:00:00 | https://arxiv.org/abs/2310.13350v1 | [
"https://github.com/tteepe/EarlyBird"
] | In the paper 'EarlyBird: Early-Fusion for Multi-View Tracking in the Bird's Eye View', what IDF1 score did the EarlyBird model get on the MultiviewX dataset
| 82.4 |
DEplain-web-doc | long-mBART (trained on DEplain-APA-doc) | DEPLAIN: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification | 2023-05-30T00:00:00 | https://arxiv.org/abs/2305.18939v1 | [
"https://github.com/rstodden/deplain"
] | In the paper 'DEPLAIN: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification', what SARI (EASSE>=0.2.1) score did the long-mBART (trained on DEplain-APA-doc) model get on the DEplain-web-doc dataset
| 43.087 |
Split Fashion M-NIST | Model with negotiation paradigm | Negotiated Representations to Prevent Forgetting in Machine Learning Applications | 2023-11-30T00:00:00 | https://arxiv.org/abs/2312.00237v1 | [
"https://github.com/nurikorhan/negotiated-representations-for-continual-learning"
] | In the paper 'Negotiated Representations to Prevent Forgetting in Machine Learning Applications', what Percentage Average accuracy - 5 tasks score did the Model with negotiation paradigm model get on the Split Fashion M-NIST dataset
| 54.8 |
PROTEINS | GCN + PANDA | PANDA: Expanded Width-Aware Message Passing Beyond Rewiring | 2024-06-06T00:00:00 | https://arxiv.org/abs/2406.03671v2 | [
"https://github.com/jeongwhanchoi/panda"
] | In the paper 'PANDA: Expanded Width-Aware Message Passing Beyond Rewiring', what Accuracy score did the GCN + PANDA model get on the PROTEINS dataset
| 76 |
ISIC 2018 | PVT-GCASCADE | G-CASCADE: Efficient Cascaded Graph Convolutional Decoding for 2D Medical Image Segmentation | 2023-10-24T00:00:00 | https://arxiv.org/abs/2310.16175v1 | [
"https://github.com/SLDGroup/G-CASCADE"
] | In the paper 'G-CASCADE: Efficient Cascaded Graph Convolutional Decoding for 2D Medical Image Segmentation', what DSC score did the PVT-GCASCADE model get on the ISIC 2018 dataset
| 91.51 |
PubMedQA | Med-PaLM 2 (ER) | Towards Expert-Level Medical Question Answering with Large Language Models | 2023-05-16T00:00:00 | https://arxiv.org/abs/2305.09617v1 | [
"https://github.com/m42-health/med42"
] | In the paper 'Towards Expert-Level Medical Question Answering with Large Language Models', what Accuracy score did the Med-PaLM 2 (ER) model get on the PubMedQA dataset
| 75.0 |
nuScenes Camera Only | Far3D | Far3D: Expanding the Horizon for Surround-view 3D Object Detection | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09616v2 | [
"https://github.com/megvii-research/far3d"
] | In the paper 'Far3D: Expanding the Horizon for Surround-view 3D Object Detection', what NDS score did the Far3D model get on the nuScenes Camera Only dataset
| 68.7 |
VDD | UperNet(Swin-T) | VDD: Varied Drone Dataset for Semantic Segmentation | 2023-05-23T00:00:00 | https://arxiv.org/abs/2305.13608v3 | [
"https://github.com/RussRobin/VDD"
] | In the paper 'VDD: Varied Drone Dataset for Semantic Segmentation', what mIoU score did the UperNet(Swin-T) model get on the VDD dataset
| 84.73 |