| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
| MAS3K | MAS-SAM | MAS-SAM: Segment Any Marine Animal with Aggregated Features | 2024-04-24 | https://arxiv.org/abs/2404.15700v2 | ["https://github.com/drchip61/mas-sam"] | In the paper 'MAS-SAM: Segment Any Marine Animal with Aggregated Features', what S-measure score did the MAS-SAM model get on the MAS3K dataset | 0.887 |
| CAMO | ZoomNeXt-ResNet-50 | ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection | 2023-10-31 | https://arxiv.org/abs/2310.20208v4 | ["https://github.com/lartpang/zoomnext"] | In the paper 'ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection', what MAE score did the ZoomNeXt-ResNet-50 model get on the CAMO dataset | 0.065 |
| MM-Vet | Video-LaVIT | Video-LaVIT: Unified Video-Language Pre-training with Decoupled Visual-Motional Tokenization | 2024-02-05 | https://arxiv.org/abs/2402.03161v3 | ["https://github.com/jy0205/lavit"] | In the paper 'Video-LaVIT: Unified Video-Language Pre-training with Decoupled Visual-Motional Tokenization', what GPT-4 score did the Video-LaVIT model get on the MM-Vet dataset | 33.2 |
| MCubeS | MMSFormer (RGB-A-D) | MMSFormer: Multimodal Transformer for Material and Semantic Segmentation | 2023-09-07 | https://arxiv.org/abs/2309.04001v4 | ["https://github.com/csiplab/mmsformer"] | In the paper 'MMSFormer: Multimodal Transformer for Material and Semantic Segmentation', what mIoU score did the MMSFormer (RGB-A-D) model get on the MCubeS dataset | 52.05% |
| EC-FUNSD | LayoutLMv3 (large) | Rethinking the Evaluation of Pre-trained Text-and-Layout Models from an Entity-Centric Perspective | 2024-02-04 | https://arxiv.org/abs/2402.02379v1 | ["https://github.com/chongzhangFDU/ROOR"] | In the paper 'Rethinking the Evaluation of Pre-trained Text-and-Layout Models from an Entity-Centric Perspective', what F1 score did the LayoutLMv3 (large) model get on the EC-FUNSD dataset | 83.88 |
| CARS196 | EfficientDML-VPTSP-G/512 | Learning Semantic Proxies from Visual Prompts for Parameter-Efficient Fine-Tuning in Deep Metric Learning | 2024-02-04 | https://arxiv.org/abs/2402.02340v2 | ["https://github.com/noahsark/parameterefficient-dml"] | In the paper 'Learning Semantic Proxies from Visual Prompts for Parameter-Efficient Fine-Tuning in Deep Metric Learning', what R@1 score did the EfficientDML-VPTSP-G/512 model get on the CARS196 dataset | 91.2 |
| COVERAGE | Late Fusion | MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization | 2023-12-04 | https://arxiv.org/abs/2312.01790v2 | ["https://github.com/idt-iti/mmfusion-iml"] | In the paper 'MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization', what Average Pixel F1 (Fixed threshold) score did the Late Fusion model get on the COVERAGE dataset | 0.641 |
| SUN397 | HPT | Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models | 2023-12-11 | https://arxiv.org/abs/2312.06323v1 | ["https://github.com/vill-lab/2024-aaai-hpt"] | In the paper 'Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models', what Harmonic mean score did the HPT model get on the SUN397 dataset | 80.88 |
| Weather (96) | SCNN | Disentangling Structured Components: Towards Adaptive, Interpretable and Scalable Time Series Forecasting | 2023-05-22 | https://arxiv.org/abs/2305.13036v3 | ["https://github.com/JLDeng/SCNN"] | In the paper 'Disentangling Structured Components: Towards Adaptive, Interpretable and Scalable Time Series Forecasting', what MSE score did the SCNN model get on the Weather (96) dataset | 0.142 |
| PASCAL VOC | OTSeg+ | OTSeg: Multi-prompt Sinkhorn Attention for Zero-Shot Semantic Segmentation | 2024-03-21 | https://arxiv.org/abs/2403.14183v2 | ["https://github.com/cubeyoung/OTSeg"] | In the paper 'OTSeg: Multi-prompt Sinkhorn Attention for Zero-Shot Semantic Segmentation', what Transductive Setting hIoU score did the OTSeg+ model get on the PASCAL VOC dataset | 94.4 |
| FLIR | MiPa | MiPa: Mixed Patch Infrared-Visible Modality Agnostic Object Detection | 2024-04-29 | https://arxiv.org/abs/2404.18849v2 | ["https://github.com/heitorrapela/mipa"] | In the paper 'MiPa: Mixed Patch Infrared-Visible Modality Agnostic Object Detection', what mAP50 score did the MiPa model get on the FLIR dataset | 81.3% |
| RTE | PaLM 2-L (1-shot) | PaLM 2 Technical Report | 2023-05-17 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-L (1-shot) model get on the RTE dataset | 79.3% |
| ETTm1 (96) Multivariate | MoLE-DLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11 | https://arxiv.org/abs/2312.06786v3 | ["https://github.com/rogerni/mole"] | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the ETTm1 (96) Multivariate dataset | 0.286 |
| CIFAR-100 (partial ratio 0.01) | ILL | Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations | 2023-05-22 | https://arxiv.org/abs/2305.12715v4 | ["https://github.com/hhhhhhao/general-framework-weak-supervision"] | In the paper 'Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations', what Accuracy score did the ILL model get on the CIFAR-100 (partial ratio 0.01) dataset | 75.31 |
| KITTI Test (Offline Methods) | BiTrack | BiTrack: Bidirectional Offline 3D Multi-Object Tracking Using Camera-LiDAR Data | 2024-06-26 | https://arxiv.org/abs/2406.18414v1 | ["https://github.com/kemo-huang/bitrack"] | In the paper 'BiTrack: Bidirectional Offline 3D Multi-Object Tracking Using Camera-LiDAR Data', what HOTA score did the BiTrack model get on the KITTI Test (Offline Methods) dataset | 82.70 |
| SPair-71k | DIFT + CleanDIFT (Zero-Shot) | CleanDIFT: Diffusion Features without Noise | 2024-12-04 | https://arxiv.org/abs/2412.03439v1 | ["https://github.com/CompVis/cleandift"] | In the paper 'CleanDIFT: Diffusion Features without Noise', what PCK score did the DIFT + CleanDIFT (Zero-Shot) model get on the SPair-71k dataset | 61.4 |
| ZINC | CKGCN | CKGConv: General Graph Convolution with Continuous Kernels | 2024-04-21 | https://arxiv.org/abs/2404.13604v2 | ["https://github.com/networkslab/ckgconv"] | In the paper 'CKGConv: General Graph Convolution with Continuous Kernels', what MAE score did the CKGCN model get on the ZINC dataset | 0.059 |
| VisDA2017 | TransAdapter | TransAdapter: Vision Transformer for Feature-Centric Unsupervised Domain Adaptation | 2024-12-05 | https://arxiv.org/abs/2412.04073v1 | ["https://github.com/enesdoruk/TransAdapter"] | In the paper 'TransAdapter: Vision Transformer for Feature-Centric Unsupervised Domain Adaptation', what Accuracy score did the TransAdapter model get on the VisDA2017 dataset | 91.2 |
| VideoInstruct | BT-Adapter | BT-Adapter: Video Conversation is Feasible Without Video Instruction Tuning | 2023-09-27 | https://arxiv.org/abs/2309.15785v2 | ["https://github.com/farewellthree/BT-Adapter"] | In the paper 'BT-Adapter: Video Conversation is Feasible Without Video Instruction Tuning', what mean score did the BT-Adapter model get on the VideoInstruct dataset | 2.19 |
| EconLogicQA | GPT-3.5-Turbo | EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning | 2024-05-13 | https://arxiv.org/abs/2405.07938v2 | ["https://github.com/yinzhu-quan/lm-evaluation-harness"] | In the paper 'EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning', what Accuracy score did the GPT-3.5-Turbo model get on the EconLogicQA dataset | 0.3769 |
| GSM8K | MathCoder-CL-13B | MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning | 2023-10-05 | https://arxiv.org/abs/2310.03731v1 | ["https://github.com/mathllm/mathcoder"] | In the paper 'MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning', what Accuracy score did the MathCoder-CL-13B model get on the GSM8K dataset | 74.1 |
| Winograd Schema Challenge | PaLM 2-S (1-shot) | PaLM 2 Technical Report | 2023-05-17 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-S (1-shot) model get on the Winograd Schema Challenge dataset | 84.6 |
| ImageNet 256x256 | PaGoDA | PaGoDA: Progressive Growing of a One-Step Generator from a Low-Resolution Diffusion Teacher | 2024-05-23 | https://arxiv.org/abs/2405.14822v2 | ["https://github.com/sony/pagoda"] | In the paper 'PaGoDA: Progressive Growing of a One-Step Generator from a Low-Resolution Diffusion Teacher', what FID score did the PaGoDA model get on the ImageNet 256x256 dataset | 1.56 |
| CHILI-100K | GraphSAGE | CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning | 2024-02-20 | https://arxiv.org/abs/2402.13221v2 | ["https://github.com/UlrikFriisJensen/CHILI"] | In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what MSE score did the GraphSAGE model get on the CHILI-100K dataset | 0.064 ± 0.001 |
| CiteSeer with Public Split: fixed 20 nodes per class | EEM | Graph Entropy Minimization for Semi-supervised Node Classification | 2023-05-31 | https://arxiv.org/abs/2305.19502v1 | ["https://github.com/cf020031308/gem"] | In the paper 'Graph Entropy Minimization for Semi-supervised Node Classification', what Accuracy score did the EEM model get on the CiteSeer with Public Split: fixed 20 nodes per class dataset | 72.63 |
| ImageNet | DeBiFormer-S | DeBiFormer: Vision Transformer with Deformable Agent Bi-level Routing Attention | 2024-10-11 | https://arxiv.org/abs/2410.08582v1 | ["https://github.com/maclong01/DeBiFormer"] | In the paper 'DeBiFormer: Vision Transformer with Deformable Agent Bi-level Routing Attention', what Top 1 Accuracy score did the DeBiFormer-S model get on the ImageNet dataset | 83.9% |
| ogbn-mag | RpHGNN+LP+CR (LINE embs) | Efficient Heterogeneous Graph Learning via Random Projection | 2023-10-23 | https://arxiv.org/abs/2310.14481v2 | ["https://github.com/CrawlScript/RpHGNN"] | In the paper 'Efficient Heterogeneous Graph Learning via Random Projection', what Test Accuracy score did the RpHGNN+LP+CR (LINE embs) model get on the ogbn-mag dataset | 0.5773 ± 0.0012 |
| COCO-Text | CLIP4STR-L | CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model | 2023-05-23 | https://arxiv.org/abs/2305.14014v3 | ["https://github.com/VamosC/CLIP4STR"] | In the paper 'CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model', what 1:1 Accuracy score did the CLIP4STR-L model get on the COCO-Text dataset | 81.9 |
| ScanObjectNN | PCP-MAE | PCP-MAE: Learning to Predict Centers for Point Masked Autoencoders | 2024-08-16 | https://arxiv.org/abs/2408.08753v2 | ["https://github.com/aHapBean/PCP-MAE"] | In the paper 'PCP-MAE: Learning to Predict Centers for Point Masked Autoencoders', what Overall Accuracy score did the PCP-MAE model get on the ScanObjectNN dataset | 90.35 |
| S3DIS Area5 | PonderV2 + SparseUNet | PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm | 2023-10-12 | https://arxiv.org/abs/2310.08586v3 | ["https://github.com/OpenGVLab/PonderV2"] | In the paper 'PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm', what mIoU score did the PonderV2 + SparseUNet model get on the S3DIS Area5 dataset | 73.2 |
| UHD-IQA | QualiCLIP | Quality-Aware Image-Text Alignment for Real-World Image Quality Assessment | 2024-03-17 | https://arxiv.org/abs/2403.11176v2 | ["https://github.com/miccunifi/qualiclip"] | In the paper 'Quality-Aware Image-Text Alignment for Real-World Image Quality Assessment', what SRCC score did the QualiCLIP model get on the UHD-IQA dataset | 0.770 |
| Electricity (96) | MoLE-DLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11 | https://arxiv.org/abs/2312.06786v3 | ["https://github.com/rogerni/mole"] | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the Electricity (96) dataset | 0.131 |
| VideoInstruct | BT-Adapter | BT-Adapter: Video Conversation is Feasible Without Video Instruction Tuning | 2023-09-27 | https://arxiv.org/abs/2309.15785v2 | ["https://github.com/farewellthree/BT-Adapter"] | In the paper 'BT-Adapter: Video Conversation is Feasible Without Video Instruction Tuning', what Correctness of Information score did the BT-Adapter model get on the VideoInstruct dataset | 2.68 |
| CiteSeer with Public Split: fixed 20 nodes per class | GCN | Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification | 2024-06-13 | https://arxiv.org/abs/2406.08993v2 | ["https://github.com/LUOyk1999/tunedGNN"] | In the paper 'Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification', what Accuracy score did the GCN model get on the CiteSeer with Public Split: fixed 20 nodes per class dataset | 73.14 ± 0.67 |
| CHILI-100K | GraphSAGE | CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning | 2024-02-20 | https://arxiv.org/abs/2402.13221v2 | ["https://github.com/UlrikFriisJensen/CHILI"] | In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what F1-score (Weighted) score did the GraphSAGE model get on the CHILI-100K dataset | 0.195 ± 0.007 |
| GigaSpeech TEST | Zipformer+pruned transducer w/ CR-CTC (no external language model) | CR-CTC: Consistency regularization on CTC for improved speech recognition | 2024-10-07 | https://arxiv.org/abs/2410.05101v3 | ["https://github.com/k2-fsa/icefall"] | In the paper 'CR-CTC: Consistency regularization on CTC for improved speech recognition', what Word Error Rate (WER) score did the Zipformer+pruned transducer w/ CR-CTC (no external language model) model get on the GigaSpeech TEST dataset | 10.03 |
| CREMA-D | Vertically long patch ViT | Accuracy enhancement method for speech emotion recognition from spectrogram using temporal frequency correlation and positional information learning through knowledge transfer | 2024-03-26 | https://arxiv.org/abs/2403.17327v1 | ["https://github.com/kjy7567/speech_emotion_recognition_from_log_Mel_spectrogram_using_vertically_long_patch"] | In the paper 'Accuracy enhancement method for speech emotion recognition from spectrogram using temporal frequency correlation and positional information learning through knowledge transfer', what Accuracy score did the Vertically long patch ViT model get on the CREMA-D dataset | 9 |
| MVTec AD | SCL-VI | SCL-VI: Self-supervised Context Learning for Visual Inspection of Industrial Defects | 2023-11-11 | https://arxiv.org/abs/2311.06504v2 | ["https://github.com/wangpeng000/visualinspection"] | In the paper 'SCL-VI: Self-supervised Context Learning for Visual Inspection of Industrial Defects', what Detection AUROC score did the SCL-VI model get on the MVTec AD dataset | 95.81 |
| ScanNetV2 | ImGeoNet (RGB only) | ImGeoNet: Image-induced Geometry-aware Voxel Representation for Multi-view 3D Object Detection | 2023-08-17 | https://arxiv.org/abs/2308.09098v1 | ["https://github.com/ttaoREtw/ImGeoNet"] | In the paper 'ImGeoNet: Image-induced Geometry-aware Voxel Representation for Multi-view 3D Object Detection', what mAP@0.25 score did the ImGeoNet (RGB only) model get on the ScanNetV2 dataset | 54.8 |
| RParis (Medium) | HED-N-GAN | Dark Side Augmentation: Generating Diverse Night Examples for Metric Learning | 2023-09-28 | https://arxiv.org/abs/2309.16351v2 | ["https://github.com/mohwald/gandtr"] | In the paper 'Dark Side Augmentation: Generating Diverse Night Examples for Metric Learning', what mAP score did the HED-N-GAN model get on the RParis (Medium) dataset | 76.6 |
| REBUS | CogVLM | REBUS: A Robust Evaluation Benchmark of Understanding Symbols | 2024-01-11 | https://arxiv.org/abs/2401.05604v2 | ["https://github.com/cvndsh/rebus"] | In the paper 'REBUS: A Robust Evaluation Benchmark of Understanding Symbols', what Accuracy score did the CogVLM model get on the REBUS dataset | 0.9 |
| IllusionVQA | LLaVA-1.5-13B | IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models | 2024-03-23 | https://arxiv.org/abs/2403.15952v3 | ["https://github.com/csebuetnlp/illusionvqa"] | In the paper 'IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models', what Accuracy score did the LLaVA-1.5-13B model get on the IllusionVQA dataset | 40 |
| AM-2K | StyleMatte | Adversarially-Guided Portrait Matting | 2023-05-04 | https://arxiv.org/abs/2305.02981v2 | ["https://github.com/chroneus/stylematte"] | In the paper 'Adversarially-Guided Portrait Matting', what SAD score did the StyleMatte model get on the AM-2K dataset | 9.602 |
| Refer-YouTube-VOS (2021 public validation) | UniVS (Swin-L) | UniVS: Unified and Universal Video Segmentation with Prompts as Queries | 2024-02-28 | https://arxiv.org/abs/2402.18115v2 | ["https://github.com/minghanli/univs"] | In the paper 'UniVS: Unified and Universal Video Segmentation with Prompts as Queries', what J&F score did the UniVS (Swin-L) model get on the Refer-YouTube-VOS (2021 public validation) dataset | 58.0 |
| Mol-Instruction | SLM4CRP | A Self-feedback Knowledge Elicitation Approach for Chemical Reaction Predictions | 2024-04-15 | https://arxiv.org/abs/2404.09606v1 | ["https://github.com/ai-hpc-research-team/slm4crp"] | In the paper 'A Self-feedback Knowledge Elicitation Approach for Chemical Reaction Predictions', what METEOR score did the SLM4CRP model get on the Mol-Instruction dataset | 0.95 |
| nuScenes | SemanticFormer | SemanticFormer: Holistic and Semantic Traffic Scene Representation for Trajectory Prediction using Knowledge Graphs | 2024-04-30 | https://arxiv.org/abs/2404.19379v3 | ["https://github.com/boschresearch/nuscenes_knowledge_graph"] | In the paper 'SemanticFormer: Holistic and Semantic Traffic Scene Representation for Trajectory Prediction using Knowledge Graphs', what MinADE_5 score did the SemanticFormer model get on the nuScenes dataset | 1.14 |
| Occ3D-nuScenes | DAOcc | DAOcc: 3D Object Detection Assisted Multi-Sensor Fusion for 3D Occupancy Prediction | 2024-09-30 | https://arxiv.org/abs/2409.19972v2 | ["https://github.com/alphaplustt/daocc"] | In the paper 'DAOcc: 3D Object Detection Assisted Multi-Sensor Fusion for 3D Occupancy Prediction', what mIoU score did the DAOcc model get on the Occ3D-nuScenes dataset | 53.82 |
| Wisconsin (60%/20%/20% random splits) | HH-GAT | Half-Hop: A graph upsampling approach for slowing down message passing | 2023-08-17 | https://arxiv.org/abs/2308.09198v1 | ["https://github.com/nerdslab/halfhop"] | In the paper 'Half-Hop: A graph upsampling approach for slowing down message passing', what 1:1 Accuracy score did the HH-GAT model get on the Wisconsin (60%/20%/20% random splits) dataset | 83.53 ± 3.84 |
| nuScenes | PointBeV | PointBeV: A Sparse Approach to BeV Predictions | 2023-12-01 | https://arxiv.org/abs/2312.00703v2 | ["https://github.com/valeoai/pointbev"] | In the paper 'PointBeV: A Sparse Approach to BeV Predictions', what IoU veh - 224x480 - No vis filter - 100x100 at 0.5 score did the PointBeV model get on the nuScenes dataset | 39.9 |
| 50 Salads | BaFormer | Efficient Temporal Action Segmentation via Boundary-aware Query Voting | 2024-05-25 | https://arxiv.org/abs/2405.15995v1 | ["https://github.com/peiyao-w/baformer"] | In the paper 'Efficient Temporal Action Segmentation via Boundary-aware Query Voting', what F1@10% score did the BaFormer model get on the 50 Salads dataset | 89.3 |
| MVBench | LinVT-Qwen2-VL (7B) | LinVT: Empower Your Image-level Large Language Model to Understand Videos | 2024-12-06 | https://arxiv.org/abs/2412.05185v2 | ["https://github.com/gls0425/linvt"] | In the paper 'LinVT: Empower Your Image-level Large Language Model to Understand Videos', what Avg. score did the LinVT-Qwen2-VL (7B) model get on the MVBench dataset | 69.3 |
| FUNSD | RORE (GeoLayoutLM) | Modeling Layout Reading Order as Ordering Relations for Visually-rich Document Understanding | 2024-09-29 | https://arxiv.org/abs/2409.19672v1 | ["https://github.com/chongzhangFDU/ROOR"] | In the paper 'Modeling Layout Reading Order as Ordering Relations for Visually-rich Document Understanding', what F1 score did the RORE (GeoLayoutLM) model get on the FUNSD dataset | 88.46 |
| SMAC 3s5z_vs_4s6z | DPLEX | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04 | https://arxiv.org/abs/2306.02430v1 | ["https://github.com/j3soon/dfac-extended"] | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Average Score did the DPLEX model get on the SMAC 3s5z_vs_4s6z dataset | 14.99 |
| WebApp1K-React | o1-mini | A Case Study of Web App Coding with OpenAI Reasoning Models | 2024-09-19 | https://arxiv.org/abs/2409.13773v1 | ["https://github.com/onekq/webapp1k"] | In the paper 'A Case Study of Web App Coding with OpenAI Reasoning Models', what pass@1 score did the o1-mini model get on the WebApp1K-React dataset | 0.939 |
| UEA | Primal.+Trans. | Primal-Attention: Self-attention through Asymmetric Kernel SVD in Primal Representation | 2023-05-31 | https://arxiv.org/abs/2305.19798v2 | ["https://github.com/yingyichen-cyy/PrimalAttention"] | In the paper 'Primal-Attention: Self-attention through Asymmetric Kernel SVD in Primal Representation', what ACC score did the Primal.+Trans. model get on the UEA dataset | 73.1 |
| ScanObjectNN | Point-GN | Point-GN: A Non-Parametric Network Using Gaussian Positional Encoding for Point Cloud Classification | 2024-12-04 | https://arxiv.org/abs/2412.03056v2 | ["https://github.com/asalarpour/Point_GN"] | In the paper 'Point-GN: A Non-Parametric Network Using Gaussian Positional Encoding for Point Cloud Classification', what Accuracy (%) score did the Point-GN model get on the ScanObjectNN dataset | 86.4 |
| SVTP | DTrOCR 105M | DTrOCR: Decoder-only Transformer for Optical Character Recognition | 2023-08-30 | https://arxiv.org/abs/2308.15996v1 | ["https://github.com/arvindrajan92/DTrOCR"] | In the paper 'DTrOCR: Decoder-only Transformer for Optical Character Recognition', what Accuracy score did the DTrOCR 105M model get on the SVTP dataset | 98.6 |
| MassSpecGym | DeepSets + Fourier features | MassSpecGym: A benchmark for the discovery and identification of molecules | 2024-10-30 | https://arxiv.org/abs/2410.23326v1 | ["https://github.com/pluskal-lab/massspecgym"] | In the paper 'MassSpecGym: A benchmark for the discovery and identification of molecules', what Hit rate @ 1 score did the DeepSets + Fourier features model get on the MassSpecGym dataset | 6.56 |
| ColonINST-v1 (Seen) | LLaVA-Med-v1.0 (w/o LoRA, w/o extra data) | LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day | 2023-06-01 | https://arxiv.org/abs/2306.00890v1 | ["https://github.com/microsoft/LLaVA-Med"] | In the paper 'LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day', what Accuracy score did the LLaVA-Med-v1.0 (w/o LoRA, w/o extra data) model get on the ColonINST-v1 (Seen) dataset | 93.52 |
| Tokyo247 | BoQ | BoQ: A Place is Worth a Bag of Learnable Queries | 2024-05-12 | https://arxiv.org/abs/2405.07364v3 | ["https://github.com/amaralibey/bag-of-queries"] | In the paper 'BoQ: A Place is Worth a Bag of Learnable Queries', what Recall@1 score did the BoQ model get on the Tokyo247 dataset | 98.1 |
| MS-COCO (10-shot) | RISF (SWIN-Large) | Re-Scoring Using Image-Language Similarity for Few-Shot Object Detection | 2023-11-01 | https://arxiv.org/abs/2311.00278v1 | ["https://github.com/INFINIQ-AI1/RISF"] | In the paper 'Re-Scoring Using Image-Language Similarity for Few-Shot Object Detection', what AP score did the RISF (SWIN-Large) model get on the MS-COCO (10-shot) dataset | 25.5 |
| RTE | OPT-1.3B | Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization | 2024-05-24 | https://arxiv.org/abs/2405.15861v3 | ["https://github.com/ZidongLiu/DeComFL"] | In the paper 'Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization', what Test Accuracy score did the OPT-1.3B model get on the RTE dataset | 60.89% |
| TAP-Vid-DAVIS | CoTracker | CoTracker: It is Better to Track Together | 2023-07-14 | https://arxiv.org/abs/2307.07635v3 | ["https://github.com/facebookresearch/co-tracker"] | In the paper 'CoTracker: It is Better to Track Together', what Average Jaccard score did the CoTracker model get on the TAP-Vid-DAVIS dataset | 65.9 |
| HateXplain | BERT-base | Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs | 2024-01-30 | https://arxiv.org/abs/2401.16638v1 | ["https://github.com/stepantita/space-model"] | In the paper 'Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs', what Accuracy (2 classes) score did the BERT-base model get on the HateXplain dataset | 0.6588 |
| Lip Reading in the Wild | SyncVSR | SyncVSR: Data-Efficient Visual Speech Recognition with End-to-End Crossmodal Audio Token Synchronization | 2024-06-18 | https://arxiv.org/abs/2406.12233v1 | ["https://github.com/KAIST-AILab/SyncVSR"] | In the paper 'SyncVSR: Data-Efficient Visual Speech Recognition with End-to-End Crossmodal Audio Token Synchronization', what Top-1 Accuracy score did the SyncVSR model get on the Lip Reading in the Wild dataset | 93.2 |
| DTU | MVSFormer++ | MVSFormer++: Revealing the Devil in Transformer's Details for Multi-View Stereo | 2024-01-22 | https://arxiv.org/abs/2401.11673v1 | ["https://github.com/maybelx/mvsformerplusplus"] | In the paper 'MVSFormer++: Revealing the Devil in Transformer's Details for Multi-View Stereo', what Acc score did the MVSFormer++ model get on the DTU dataset | 0.3090 |
| YouCook2 | COSA | COSA: Concatenated Sample Pretrained Vision-Language Foundation Model | 2023-06-15 | https://arxiv.org/abs/2306.09085v1 | ["https://github.com/txh-mercury/cosa"] | In the paper 'COSA: Concatenated Sample Pretrained Vision-Language Foundation Model', what BLEU-4 score did the COSA model get on the YouCook2 dataset | 10.1 |
| ScanNet200 | OpenIns3D | OpenIns3D: Snap and Lookup for 3D Open-vocabulary Instance Segmentation | 2023-09-01 | https://arxiv.org/abs/2309.00616v5 | ["https://github.com/Pointcept/OpenIns3D"] | In the paper 'OpenIns3D: Snap and Lookup for 3D Open-vocabulary Instance Segmentation', what mAP score did the OpenIns3D model get on the ScanNet200 dataset | 15.9 |
| Video Waterdrop Removal Dataset | RainMamba | RainMamba: Enhanced Locality Learning with State Space Models for Video Deraining | 2024-07-31 | https://arxiv.org/abs/2407.21773v2 | ["https://github.com/TonyHongtaoWu/RainMamba"] | In the paper 'RainMamba: Enhanced Locality Learning with State Space Models for Video Deraining', what PSNR score did the RainMamba model get on the Video Waterdrop Removal Dataset | 37.21 |
| COCO-20i (1-shot) | SCCAN (ResNet-101) | Self-Calibrated Cross Attention Network for Few-Shot Segmentation | 2023-08-18 | https://arxiv.org/abs/2308.09294v1 | ["https://github.com/sam1224/sccan"] | In the paper 'Self-Calibrated Cross Attention Network for Few-Shot Segmentation', what Mean IoU score did the SCCAN (ResNet-101) model get on the COCO-20i (1-shot) dataset | 48.2 |
| GSM8K | GaC (Qwen2-72B-Instruct + Llama-3-70B-Instruct) | Breaking the Ceiling of the LLM Community by Treating Token Generation as a Classification for Ensembling | 2024-06-18 | https://arxiv.org/abs/2406.12585v2 | ["https://github.com/yaoching0/gac"] | In the paper 'Breaking the Ceiling of the LLM Community by Treating Token Generation as a Classification for Ensembling', what Accuracy score did the GaC (Qwen2-72B-Instruct + Llama-3-70B-Instruct) model get on the GSM8K dataset | 90.91 |
| UCR Anomaly Archive | Matrix Profile SCRIMP | Explainable Time Series Anomaly Detection using Masked Latent Generative Modeling | 2023-11-21 | https://arxiv.org/abs/2311.12550v5 | ["https://github.com/ml4its/timevqvae-anomalydetection"] | In the paper 'Explainable Time Series Anomaly Detection using Masked Latent Generative Modeling', what accuracy score did the Matrix Profile SCRIMP model get on the UCR Anomaly Archive dataset | 0.416 |
| CHILI-100K | Most Frequent Class | CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning | 2024-02-20 | https://arxiv.org/abs/2402.13221v2 | ["https://github.com/UlrikFriisJensen/CHILI"] | In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what F1-score (Weighted) score did the Most Frequent Class model get on the CHILI-100K dataset | 0.192 |
| 1 | CMA_R | CMA-R: Causal Mediation Analysis for Explaining Rumour Detection | 2024-02-13 | https://arxiv.org/abs/2402.08155v1 | ["https://github.com/ltian678/cma-r"] | In the paper 'CMA-R: Causal Mediation Analysis for Explaining Rumour Detection', what 0..5sec score did the CMA_R model get on the 1 dataset | 1 |
| FGVC-Aircraft | ProMetaR | Prompt Learning via Meta-Regularization | 2024-04-01 | https://arxiv.org/abs/2404.00851v1 | ["https://github.com/mlvlab/prometar"] | In the paper 'Prompt Learning via Meta-Regularization', what Harmonic mean score did the ProMetaR model get on the FGVC-Aircraft dataset | 40.25 |
| ACE 2005 | UniNER-7B | UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition | 2023-08-07 | https://arxiv.org/abs/2308.03279v2 | ["https://github.com/emma1066/retrieval-augmented-it-openner"] | In the paper 'UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition', what F1 score did the UniNER-7B model get on the ACE 2005 dataset | 86.69 |
| MassSpecGym | GNN | MassSpecGym: A benchmark for the discovery and identification of molecules | 2024-10-30 | https://arxiv.org/abs/2410.23326v1 | ["https://github.com/pluskal-lab/massspecgym"] | In the paper 'MassSpecGym: A benchmark for the discovery and identification of molecules', what Cosine Similarity score did the GNN model get on the MassSpecGym dataset | 0.19 |
| SALMon | TWIST 7B | Textually Pretrained Speech Language Models | 2023-05-22 | https://arxiv.org/abs/2305.13009v3 | ["https://github.com/slp-rl/spokenstorycloze"] | In the paper 'Textually Pretrained Speech Language Models', what Speaker Consistency score did the TWIST 7B model get on the SALMon dataset | 71.0 |
The Pile | Llama-3.2 1B | Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs | 2024-10-10T00:00:00 | https://arxiv.org/abs/2410.08020v2 | [
"https://github.com/jonhue/activeft"
] | In the paper 'Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs', what Bits per byte score did the Llama-3.2 1B model get on the The Pile dataset
| 0.697 |
ImageNet V2 | Discrete Adversarial Distillation (ViT-B, 224) | Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models | 2023-11-02T00:00:00 | https://arxiv.org/abs/2311.01441v2 | [
"https://github.com/lapisrocks/DiscreteAdversarialDistillation"
] | In the paper 'Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models', what Top 1 Accuracy score did the Discrete Adversarial Distillation (ViT-B, 224) model get on the ImageNet V2 dataset
| 71.7 |
MultiScan | UniDet3D | UniDet3D: Multi-dataset Indoor 3D Object Detection | 2024-09-06T00:00:00 | https://arxiv.org/abs/2409.04234v1 | [
"https://github.com/filapro/unidet3d"
] | In the paper 'UniDet3D: Multi-dataset Indoor 3D Object Detection', what mAP@0.25 score did the UniDet3D model get on the MultiScan dataset
| 64.2 |
ImageNet 256x256 | RDM | Relay Diffusion: Unifying diffusion process across resolutions for image synthesis | 2023-09-04T00:00:00 | https://arxiv.org/abs/2309.03350v1 | [
"https://github.com/THUDM/RelayDiffusion"
] | In the paper 'Relay Diffusion: Unifying diffusion process across resolutions for image synthesis', what FID score did the RDM model get on the ImageNet 256x256 dataset
| 1.99 |
MUSES: MUlti-SEnsor Semantic perception dataset | Mask2Former (Swin-T) | MUSES: The Multi-Sensor Semantic Perception Dataset for Driving under Uncertainty | 2024-01-23T00:00:00 | https://arxiv.org/abs/2401.12761v4 | [
"https://github.com/timbroed/MUSES"
] | In the paper 'MUSES: The Multi-Sensor Semantic Perception Dataset for Driving under Uncertainty', what mIoU score did the Mask2Former (Swin-T) model get on the MUSES: MUlti-SEnsor Semantic perception dataset dataset
| 70.74 |
VNHSGE-Civic | Bing Chat | VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models | 2023-05-20T00:00:00 | https://arxiv.org/abs/2305.12199v1 | [
"https://github.com/xdao85/vnhsge"
] | In the paper 'VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models', what Accuracy score did the Bing Chat model get on the VNHSGE-Civic dataset
| 85.5 |
MedQA | Med-PaLM 2 (5-shot) | Towards Expert-Level Medical Question Answering with Large Language Models | 2023-05-16T00:00:00 | https://arxiv.org/abs/2305.09617v1 | [
"https://github.com/m42-health/med42"
] | In the paper 'Towards Expert-Level Medical Question Answering with Large Language Models', what Accuracy score did the Med-PaLM 2 (5-shot) model get on the MedQA dataset
| 79.7 |
COCO 2014 | 3SHNet | 3SHNet: Boosting Image-Sentence Retrieval via Visual Semantic-Spatial Self-Highlighting | 2024-04-26T00:00:00 | https://arxiv.org/abs/2404.17273v1 | [
"https://github.com/xurige1995/3shnet"
] | In the paper '3SHNet: Boosting Image-Sentence Retrieval via Visual Semantic-Spatial Self-Highlighting', what Image-to-text R@1 score did the 3SHNet model get on the COCO 2014 dataset
| 67.9 |
GSM8K | OpenMath-CodeLlama-70B (w/ code) | OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | 2024-02-15T00:00:00 | https://arxiv.org/abs/2402.10176v2 | [
"https://github.com/kipok/nemo-skills"
] | In the paper 'OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset', what Accuracy score did the OpenMath-CodeLlama-70B (w/ code) model get on the GSM8K dataset
| 84.6 |
COCO-Stuff-81 | CAUSE-TR (ViT-S/8) | Causal Unsupervised Semantic Segmentation | 2023-10-11T00:00:00 | https://arxiv.org/abs/2310.07379v1 | [
"https://github.com/ByungKwanLee/Causal-Unsupervised-Segmentation"
] | In the paper 'Causal Unsupervised Semantic Segmentation', what mIoU score did the CAUSE-TR (ViT-S/8) model get on the COCO-Stuff-81 dataset
| 21.2 |
MATH | OpenMath-CodeLlama-34B (w/ code) | OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | 2024-02-15T00:00:00 | https://arxiv.org/abs/2402.10176v2 | [
"https://github.com/kipok/nemo-skills"
] | In the paper 'OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset', what Accuracy score did the OpenMath-CodeLlama-34B (w/ code) model get on the MATH dataset
| 48.3 |
USNA-Cn2 (short-duration) | Air-Water Temperature Difference | Effective Benchmarks for Optical Turbulence Modeling | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03573v1 | [
"https://github.com/cdjellen/otbench"
] | In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the Air-Water Temperature Difference model get on the USNA-Cn2 (short-duration) dataset
| 0.910 |
CIFAR-100-LT (ρ=10) | LIFT (ViT-B/16, ImageNet-21K pre-training) | Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts | 2023-09-18T00:00:00 | https://arxiv.org/abs/2309.10019v3 | [
"https://github.com/shijxcs/lift"
] | In the paper 'Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts', what Error Rate score did the LIFT (ViT-B/16, ImageNet-21K pre-training) model get on the CIFAR-100-LT (ρ=10) dataset
| 8.7 |
HInt: Hand Interactions in the wild | Hamba | Hamba: Single-view 3D Hand Reconstruction with Graph-guided Bi-Scanning Mamba | 2024-07-12T00:00:00 | https://arxiv.org/abs/2407.09646v2 | [
"https://github.com/humansensinglab/Hamba"
] | In the paper 'Hamba: Single-view 3D Hand Reconstruction with Graph-guided Bi-Scanning Mamba', what PCK@0.05 (New Days) All score did the Hamba model get on the HInt: Hand Interactions in the wild dataset
| 48.7 |
EQ-Bench | Anthropic Claude2 | EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06281v2 | [
"https://github.com/eq-bench/eq-bench"
] | In the paper 'EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models', what EQ-Bench Score score did the Anthropic Claude2 model get on the EQ-Bench dataset
| 52.14 |
ISIC2018 | EMCAD | EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation | 2024-05-11T00:00:00 | https://arxiv.org/abs/2405.06880v1 | [
"https://github.com/sldgroup/emcad"
] | In the paper 'EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation', what mean Dice score did the EMCAD model get on the ISIC2018 dataset
| 0.9096 |
PascalVOC-20 | MAFT+ | Collaborative Vision-Text Representation Optimizing for Open-Vocabulary Segmentation | 2024-08-01T00:00:00 | https://arxiv.org/abs/2408.00744v2 | [
"https://github.com/jiaosiyu1999/MAFT-Plus"
] | In the paper 'Collaborative Vision-Text Representation Optimizing for Open-Vocabulary Segmentation', what mIoU score did the MAFT+ model get on the PascalVOC-20 dataset
| 96.5 |
DEplain-web-doc | long-mBART (trained on DEplain-APA-doc & DEplain-web-doc) | DEPLAIN: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification | 2023-05-30T00:00:00 | https://arxiv.org/abs/2305.18939v1 | [
"https://github.com/rstodden/deplain"
] | In the paper 'DEPLAIN: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification', what SARI (EASSE>=0.2.1) score did the long-mBART (trained on DEplain-APA-doc & DEplain-web-doc) model get on the DEplain-web-doc dataset
| 49.745 |
ImageNet | KD++(T: ResNet-34 S:ResNet-18) | Improving Knowledge Distillation via Regularizing Feature Norm and Direction | 2023-05-26T00:00:00 | https://arxiv.org/abs/2305.17007v1 | [
"https://github.com/wangyz1608/knowledge-distillation-via-nd"
] | In the paper 'Improving Knowledge Distillation via Regularizing Feature Norm and Direction', what Top-1 accuracy % score did the KD++(T: ResNet-34 S:ResNet-18) model get on the ImageNet dataset
| 72.07 |
CUB-200-2011 | ZLaP | Label Propagation for Zero-shot Classification with Vision-Language Models | 2024-04-05T00:00:00 | https://arxiv.org/abs/2404.04072v1 | [
"https://github.com/vladan-stojnic/zlap"
] | In the paper 'Label Propagation for Zero-shot Classification with Vision-Language Models', what Accuracy score did the ZLaP model get on the CUB-200-2011 dataset
| 64.3 |
InsPLAD | RD++ (SENet-ResNet-18) | Attention Modules Improve Image-Level Anomaly Detection for Industrial Inspection: A DifferNet Case Study | 2023-11-05T00:00:00 | https://arxiv.org/abs/2311.02747v2 | [
"https://github.com/andreluizbvs/insplad"
] | In the paper 'Attention Modules Improve Image-Level Anomaly Detection for Industrial Inspection: A DifferNet Case Study', what Detection AUROC score did the RD++ (SENet-ResNet-18) model get on the InsPLAD dataset
| 90.32 |