| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
| ImageNet 256x256 | MAR-B, Diff Loss | Autoregressive Image Generation without Vector Quantization | 2024-06-17T00:00:00 | https://arxiv.org/abs/2406.11838v3 | ["https://github.com/lth14/mar"] | In the paper 'Autoregressive Image Generation without Vector Quantization', what FID score did the MAR-B, Diff Loss model get on the ImageNet 256x256 dataset | 2.31 |
| Camouflaged Animal Dataset | ZoomNeXt-PVTv2-B5 | ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection | 2023-10-31T00:00:00 | https://arxiv.org/abs/2310.20208v4 | ["https://github.com/lartpang/zoomnext"] | In the paper 'ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection', what S-measure score did the ZoomNeXt-PVTv2-B5 model get on the Camouflaged Animal Dataset dataset | 0.757 |
| CausalGym | PCA | CausalGym: Benchmarking causal interpretability methods on linguistic tasks | 2024-02-19T00:00:00 | https://arxiv.org/abs/2402.12560v1 | ["https://github.com/aryamanarora/causalgym"] | In the paper 'CausalGym: Benchmarking causal interpretability methods on linguistic tasks', what Log odds-ratio (pythia-6.9b) score did the PCA model get on the CausalGym dataset | 1.81 |
| ScanObjectNN | KPConvX-L | KPConvX: Modernizing Kernel Point Convolution with Kernel Attention | 2024-05-21T00:00:00 | https://arxiv.org/abs/2405.13194v1 | ["https://github.com/apple/ml-kpconvx"] | In the paper 'KPConvX: Modernizing Kernel Point Convolution with Kernel Attention', what Overall Accuracy score did the KPConvX-L model get on the ScanObjectNN dataset | 89.3 |
| Assembly101 | CHASE(CTR-GCN) | CHASE: Learning Convex Hull Adaptive Shift for Skeleton-based Multi-Entity Action Recognition | 2024-10-09T00:00:00 | https://arxiv.org/abs/2410.07153v1 | ["https://github.com/Necolizer/CHASE"] | In the paper 'CHASE: Learning Convex Hull Adaptive Shift for Skeleton-based Multi-Entity Action Recognition', what Actions Top-1 score did the CHASE(CTR-GCN) model get on the Assembly101 dataset | 28.03 |
| ogbn-products | GraphSAGE | Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification | 2024-06-13T00:00:00 | https://arxiv.org/abs/2406.08993v2 | ["https://github.com/LUOyk1999/tunedGNN"] | In the paper 'Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification', what Test Accuracy score did the GraphSAGE model get on the ogbn-products dataset | 0.8389 ± 0.0036 |
| RWTH-PHOENIX-Weather 2014 T | Signformer | Signformer is all you need: Towards Edge AI for Sign Language | 2024-11-19T00:00:00 | https://arxiv.org/abs/2411.12901v1 | ["https://github.com/EtaEnding/Signformer"] | In the paper 'Signformer is all you need: Towards Edge AI for Sign Language', what BLEU-4 score did the Signformer model get on the RWTH-PHOENIX-Weather 2014 T dataset | 23.43 |
| Near-OOD | ISH (ResNet50) | Scaling for Training Time and Post-hoc Out-of-distribution Detection Enhancement | 2023-09-30T00:00:00 | https://arxiv.org/abs/2310.00227v1 | ["https://github.com/kai422/scale"] | In the paper 'Scaling for Training Time and Post-hoc Out-of-distribution Detection Enhancement', what ID ACC score did the ISH (ResNet50) model get on the Near-OOD dataset | 76.74 |
| EQ-Bench | Qwen/Qwen-14B-Chat | EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06281v2 | ["https://github.com/eq-bench/eq-bench"] | In the paper 'EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models', what EQ-Bench Score score did the Qwen/Qwen-14B-Chat model get on the EQ-Bench dataset | 43.76 |
| CARLA | InteractionNet | InteractionNet: Joint Planning and Prediction for Autonomous Driving with Transformers | 2023-09-07T00:00:00 | https://arxiv.org/abs/2309.03475v1 | ["https://github.com/fujiawei0724/interactionnet"] | In the paper 'InteractionNet: Joint Planning and Prediction for Autonomous Driving with Transformers', what Driving Score score did the InteractionNet model get on the CARLA dataset | 51 |
| DSEC | CAFR | Embracing Events and Frames with Hierarchical Feature Refinement Network for Object Detection | 2024-07-17T00:00:00 | https://arxiv.org/abs/2407.12582v2 | ["https://github.com/hucaofighting/frn"] | In the paper 'Embracing Events and Frames with Hierarchical Feature Refinement Network for Object Detection', what mAP score did the CAFR model get on the DSEC dataset | 38.0 |
| USNA-Cn2 (long-term) | Macro Meteorological | Effective Benchmarks for Optical Turbulence Modeling | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03573v1 | ["https://github.com/cdjellen/otbench"] | In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the Macro Meteorological model get on the USNA-Cn2 (long-term) dataset | 1.217 |
| CHILI-3K | PMLP | CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning | 2024-02-20T00:00:00 | https://arxiv.org/abs/2402.13221v2 | ["https://github.com/UlrikFriisJensen/CHILI"] | In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what MSE score did the PMLP model get on the CHILI-3K dataset | 0.359 +/- 0.017 |
| Waymo Open Dataset | MCTrack | MCTrack: A Unified 3D Multi-Object Tracking Framework for Autonomous Driving | 2024-09-23T00:00:00 | https://arxiv.org/abs/2409.16149v2 | ["https://github.com/megvii-research/mctrack"] | In the paper 'MCTrack: A Unified 3D Multi-Object Tracking Framework for Autonomous Driving', what MOTA/L2 score did the MCTrack model get on the Waymo Open Dataset dataset | 0.7344 |
| GigaSpeech DEV | Zipformer+pruned transducer w/ CR-CTC (no external language model) | CR-CTC: Consistency regularization on CTC for improved speech recognition | 2024-10-07T00:00:00 | https://arxiv.org/abs/2410.05101v3 | ["https://github.com/k2-fsa/icefall"] | In the paper 'CR-CTC: Consistency regularization on CTC for improved speech recognition', what Word Error Rate (WER) score did the Zipformer+pruned transducer w/ CR-CTC (no external language model) model get on the GigaSpeech DEV dataset | 9.95 |
| FP-O-M | GeoTransformer | GeoTransformer: Fast and Robust Point Cloud Registration with Geometric Transformer | 2023-07-25T00:00:00 | https://arxiv.org/abs/2308.03768v1 | ["https://github.com/qinzheng93/geotransformer"] | In the paper 'GeoTransformer: Fast and Robust Point Cloud Registration with Geometric Transformer', what Recall (3cm, 10 degrees) score did the GeoTransformer model get on the FP-O-M dataset | 22.07 |
| KITTI 2015 | MoCha-Stereo | MoCha-Stereo: Motif Channel Attention Network for Stereo Matching | 2024-04-10T00:00:00 | https://arxiv.org/abs/2404.06842v3 | ["https://github.com/zyangchen/mocha-stereo"] | In the paper 'MoCha-Stereo: Motif Channel Attention Network for Stereo Matching', what D1-all score did the MoCha-Stereo model get on the KITTI 2015 dataset | 1.53 |
| arXiv-year | GESN | Addressing Heterophily in Node Classification with Graph Echo State Networks | 2023-05-14T00:00:00 | https://arxiv.org/abs/2305.08233v2 | ["https://github.com/dtortorella/addressing-heterophily-gesn"] | In the paper 'Addressing Heterophily in Node Classification with Graph Echo State Networks', what Accuracy score did the GESN model get on the arXiv-year dataset | 48.80 ± 0.22 |
| Atari 2600 Centipede | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07T00:00:00 | https://arxiv.org/abs/2305.04180v3 | ["https://github.com/xinjinghao/color"] | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score score did the ASL DDQN model get on the Atari 2600 Centipede dataset | 3899.8 |
| DWIE | REXEL | REXEL: An End-to-end Model for Document-Level Relation Extraction and Entity Linking | 2024-04-19T00:00:00 | https://arxiv.org/abs/2404.12788v1 | ["https://github.com/amazon-science/e2e-docie"] | In the paper 'REXEL: An End-to-end Model for Document-Level Relation Extraction and Entity Linking', what Avg. F1 score did the REXEL model get on the DWIE dataset | 95.12 |
| Atari 2600 Frostbite | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07T00:00:00 | https://arxiv.org/abs/2305.04180v3 | ["https://github.com/xinjinghao/color"] | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score score did the ASL DDQN model get on the Atari 2600 Frostbite dataset | 8616.4 |
| MSRVTT-QA | JustAsk+ | Open-vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09363v1 | ["https://github.com/mlvlab/ovqa"] | In the paper 'Open-vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models', what Accuracy score did the JustAsk+ model get on the MSRVTT-QA dataset | 0.418 |
| LAGENDA gender | MiVOLO-D1 | MiVOLO: Multi-input Transformer for Age and Gender Estimation | 2023-07-10T00:00:00 | https://arxiv.org/abs/2307.04616v2 | ["https://github.com/wildchlamydia/mivolo"] | In the paper 'MiVOLO: Multi-input Transformer for Age and Gender Estimation', what Accuracy score did the MiVOLO-D1 model get on the LAGENDA gender dataset | 97.36 |
| Cora (48%/32%/20% fixed splits) | GESN | Addressing Heterophily in Node Classification with Graph Echo State Networks | 2023-05-14T00:00:00 | https://arxiv.org/abs/2305.08233v2 | ["https://github.com/dtortorella/addressing-heterophily-gesn"] | In the paper 'Addressing Heterophily in Node Classification with Graph Echo State Networks', what 1:1 Accuracy score did the GESN model get on the Cora (48%/32%/20% fixed splits) dataset | 86.04 ± 1.01 |
| 20NewsGroups | vONTSS | vONTSS: vMF based semi-supervised neural topic modeling with optimal transport | 2023-07-03T00:00:00 | https://arxiv.org/abs/2307.01226v2 | ["https://github.com/xuweijieshuai/vONTSS"] | In the paper 'vONTSS: vMF based semi-supervised neural topic modeling with optimal transport', what C_v score did the vONTSS model get on the 20NewsGroups dataset | 0.69 |
| Penn94 | DJ-GNN | Diffusion-Jump GNNs: Homophiliation via Learnable Metric Filters | 2023-06-29T00:00:00 | https://arxiv.org/abs/2306.16976v1 | ["https://github.com/AhmedBegggaUA/TFM"] | In the paper 'Diffusion-Jump GNNs: Homophiliation via Learnable Metric Filters', what Accuracy score did the DJ-GNN model get on the Penn94 dataset | 84.84±0.34 |
| Deep Noise Suppression (DNS) Challenge | MFNET | A Mask Free Neural Network for Monaural Speech Enhancement | 2023-06-07T00:00:00 | https://arxiv.org/abs/2306.04286v1 | ["https://github.com/ioyy900205/mfnet"] | In the paper 'A Mask Free Neural Network for Monaural Speech Enhancement', what SI-SDR-WB score did the MFNET model get on the Deep Noise Suppression (DNS) Challenge dataset | 20.31 |
| PASCAL-5i (5-Shot) | SCCAN (ResNet-101) | Self-Calibrated Cross Attention Network for Few-Shot Segmentation | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09294v1 | ["https://github.com/sam1224/sccan"] | In the paper 'Self-Calibrated Cross Attention Network for Few-Shot Segmentation', what Mean IoU score did the SCCAN (ResNet-101) model get on the PASCAL-5i (5-Shot) dataset | 71.5 |
| DBP15k zh-en | UMAEA (w/o surf) | Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment | 2023-07-30T00:00:00 | https://arxiv.org/abs/2307.16210v2 | ["https://github.com/zjukg/umaea"] | In the paper 'Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment', what Hits@1 score did the UMAEA (w/o surf) model get on the DBP15k zh-en dataset | 0.856 |
| Perception Test | Siam-FC | Perception Test: A Diagnostic Benchmark for Multimodal Video Models | 2023-05-23T00:00:00 | https://arxiv.org/abs/2305.13786v2 | ["https://github.com/deepmind/perception_test"] | In the paper 'Perception Test: A Diagnostic Benchmark for Multimodal Video Models', what Average IOU score did the Siam-FC model get on the Perception Test dataset | 0.66 |
| cifar10 | ResNet50 | Guarding Barlow Twins Against Overfitting with Mixed Samples | 2023-12-04T00:00:00 | https://arxiv.org/abs/2312.02151v1 | ["https://github.com/wgcban/mix-bt"] | In the paper 'Guarding Barlow Twins Against Overfitting with Mixed Samples', what average top-1 classification accuracy score did the ResNet50 model get on the cifar10 dataset | 93.89 |
| ActivityNet-QA | COSA | COSA: Concatenated Sample Pretrained Vision-Language Foundation Model | 2023-06-15T00:00:00 | https://arxiv.org/abs/2306.09085v1 | ["https://github.com/txh-mercury/cosa"] | In the paper 'COSA: Concatenated Sample Pretrained Vision-Language Foundation Model', what Accuracy score did the COSA model get on the ActivityNet-QA dataset | 49.9 |
| CULane | FENetV2 | FENet: Focusing Enhanced Network for Lane Detection | 2023-12-28T00:00:00 | https://arxiv.org/abs/2312.17163v6 | ["https://github.com/hanyangzhong/fenet"] | In the paper 'FENet: Focusing Enhanced Network for Lane Detection', what F1 score score did the FENetV2 model get on the CULane dataset | 80.19 |
| ICBHI Respiratory Sound Database | AST (Patch-Mix CL) | Patch-Mix Contrastive Learning with Audio Spectrogram Transformer on Respiratory Sound Classification | 2023-05-23T00:00:00 | https://arxiv.org/abs/2305.14032v4 | ["https://github.com/raymin0223/patch-mix_contrastive_learning"] | In the paper 'Patch-Mix Contrastive Learning with Audio Spectrogram Transformer on Respiratory Sound Classification', what ICBHI Score score did the AST (Patch-Mix CL) model get on the ICBHI Respiratory Sound Database dataset | 62.37 |
| ActivityNet-1.3 | RDFA-S6 (InternVideo2-6B) | Enhancing Temporal Action Localization: Advanced S6 Modeling with Recurrent Mechanism | 2024-07-18T00:00:00 | https://arxiv.org/abs/2407.13078v1 | ["https://github.com/lsy0882/RDFA-S6"] | In the paper 'Enhancing Temporal Action Localization: Advanced S6 Modeling with Recurrent Mechanism', what mAP IOU@0.5 score did the RDFA-S6 (InternVideo2-6B) model get on the ActivityNet-1.3 dataset | 64.1 |
| ScanObjectNN | Point-JEPA | Point-JEPA: A Joint Embedding Predictive Architecture for Self-Supervised Learning on Point Cloud | 2024-04-25T00:00:00 | https://arxiv.org/abs/2404.16432v4 | ["https://github.com/Ayumu-J-S/Point-JEPA"] | In the paper 'Point-JEPA: A Joint Embedding Predictive Architecture for Self-Supervised Learning on Point Cloud', what OBJ-BG (OA) score did the Point-JEPA model get on the ScanObjectNN dataset | 92.9±0.4 |
| HIDE (trained on GOPRO) | M3SNet | A Mountain-Shaped Single-Stage Network for Accurate Image Restoration | 2023-05-09T00:00:00 | https://arxiv.org/abs/2305.05146v1 | ["https://github.com/Tombs98/M3SNet"] | In the paper 'A Mountain-Shaped Single-Stage Network for Accurate Image Restoration', what PSNR score did the M3SNet model get on the HIDE (trained on GOPRO) dataset | 31.49 |
| RefCoCo val | HIPIE | Hierarchical Open-vocabulary Universal Image Segmentation | 2023-07-03T00:00:00 | https://arxiv.org/abs/2307.00764v2 | ["https://github.com/berkeley-hipie/hipie"] | In the paper 'Hierarchical Open-vocabulary Universal Image Segmentation', what Overall IoU score did the HIPIE model get on the RefCoCo val dataset | 82.8 |
| nuScenes | HVDetFusion | HVDetFusion: A Simple and Robust Camera-Radar Fusion Framework | 2023-07-21T00:00:00 | https://arxiv.org/abs/2307.11323v1 | ["https://github.com/hvxlab/hvdetfusion"] | In the paper 'HVDetFusion: A Simple and Robust Camera-Radar Fusion Framework', what NDS score did the HVDetFusion model get on the nuScenes dataset | 0.674 |
| Refer-YouTube-VOS (2021 public validation) | ViLLa | ViLLa: Video Reasoning Segmentation with Large Language Model | 2024-07-18T00:00:00 | https://arxiv.org/abs/2407.14500v2 | ["https://github.com/rkzheng99/villa"] | In the paper 'ViLLa: Video Reasoning Segmentation with Large Language Model', what J&F score did the ViLLa model get on the Refer-YouTube-VOS (2021 public validation) dataset | 66.5 |
| COCO test-dev | GLEE-Pro | General Object Foundation Model for Images and Videos at Scale | 2023-12-14T00:00:00 | https://arxiv.org/abs/2312.09158v1 | ["https://github.com/FoundationVision/GLEE"] | In the paper 'General Object Foundation Model for Images and Videos at Scale', what box mAP score did the GLEE-Pro model get on the COCO test-dev dataset | 62.3 |
| ImageNet-1k vs NINCO | ViT-B-384 Mahalanobis (pre-trained on IN-21k) | In or Out? Fixing ImageNet Out-of-Distribution Detection Evaluation | 2023-06-01T00:00:00 | https://arxiv.org/abs/2306.00826v1 | ["https://github.com/j-cb/ninco"] | In the paper 'In or Out? Fixing ImageNet Out-of-Distribution Detection Evaluation', what AUROC score did the ViT-B-384 Mahalanobis (pre-trained on IN-21k) model get on the ImageNet-1k vs NINCO dataset | 95.0 |
| UFBA-425 | BB-UNet | Instance Segmentation and Teeth Classification in Panoramic X-rays | 2024-06-06T00:00:00 | https://arxiv.org/abs/2406.03747v1 | ["https://github.com/devichand579/Instance_seg_teeth"] | In the paper 'Instance Segmentation and Teeth Classification in Panoramic X-rays', what Dice Coef score did the BB-UNet model get on the UFBA-425 dataset | 86.15 |
| DFFD | FasterThanLies | Faster Than Lies: Real-time Deepfake Detection using Binary Neural Networks | 2024-06-07T00:00:00 | https://arxiv.org/abs/2406.04932v1 | ["https://github.com/fedeloper/binary_deepfake_detection"] | In the paper 'Faster Than Lies: Real-time Deepfake Detection using Binary Neural Networks', what Accuracy score did the FasterThanLies model get on the DFFD dataset | 0.9895 |
| Peptides-func | GraphGPS + HDSE | Enhancing Graph Transformers with Hierarchical Distance Structural Encoding | 2023-08-22T00:00:00 | https://arxiv.org/abs/2308.11129v4 | ["https://github.com/luoyk1999/hdse"] | In the paper 'Enhancing Graph Transformers with Hierarchical Distance Structural Encoding', what AP score did the GraphGPS + HDSE model get on the Peptides-func dataset | 0.7156±0.0058 |
| BorealTC | CNN | Proprioception Is All You Need: Terrain Classification for Boreal Forests | 2024-03-25T00:00:00 | https://arxiv.org/abs/2403.16877v2 | ["https://github.com/norlab-ulaval/BorealTC"] | In the paper 'Proprioception Is All You Need: Terrain Classification for Boreal Forests', what Accuracy (5-fold) score did the CNN model get on the BorealTC dataset | 93.96 |
| BIG-bench (Temporal Sequences) | PaLM 2 (few-shot, k=3, CoT) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2 (few-shot, k=3, CoT) model get on the BIG-bench (Temporal Sequences) dataset | 100 |
| GigaSpeech TEST | Zipformer+CR-CTC (no external language model) | CR-CTC: Consistency regularization on CTC for improved speech recognition | 2024-10-07T00:00:00 | https://arxiv.org/abs/2410.05101v3 | ["https://github.com/k2-fsa/icefall"] | In the paper 'CR-CTC: Consistency regularization on CTC for improved speech recognition', what Word Error Rate (WER) score did the Zipformer+CR-CTC (no external language model) model get on the GigaSpeech TEST dataset | 10.28 |
| SPOT-10 | ResNet50V2 Distiller | SPOTS-10: Animal Pattern Benchmark Dataset for Machine Learning Algorithms | 2024-10-28T00:00:00 | https://arxiv.org/abs/2410.21044v1 | ["https://github.com/amotica/spots-10"] | In the paper 'SPOTS-10: Animal Pattern Benchmark Dataset for Machine Learning Algorithms', what Accuracy score did the ResNet50V2 Distiller model get on the SPOT-10 dataset | 79.03 |
| Coauthor CS | GraphSAGE | Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification | 2024-06-13T00:00:00 | https://arxiv.org/abs/2406.08993v2 | ["https://github.com/LUOyk1999/tunedGNN"] | In the paper 'Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification', what Accuracy score did the GraphSAGE model get on the Coauthor CS dataset | 96.38±0.11 |
| SUN-RGBD | DFormer-L | DFormer: Rethinking RGBD Representation Learning for Semantic Segmentation | 2023-09-18T00:00:00 | https://arxiv.org/abs/2309.09668v2 | ["https://github.com/VCIP-RGBD/DFormer"] | In the paper 'DFormer: Rethinking RGBD Representation Learning for Semantic Segmentation', what Mean IoU score did the DFormer-L model get on the SUN-RGBD dataset | 52.5% |
| Urban100 - 4x upscaling | Extracter-rec | EXTRACTER: Efficient Texture Matching with Attention and Gradient Enhancing for Large Scale Image Super Resolution | 2023-10-02T00:00:00 | https://arxiv.org/abs/2310.01379v1 | ["https://github.com/esteban-rs/extracter"] | In the paper 'EXTRACTER: Efficient Texture Matching with Attention and Gradient Enhancing for Large Scale Image Super Resolution', what PSNR score did the Extracter-rec model get on the Urban100 - 4x upscaling dataset | 26.04 |
| STS16 | PromptEOL+CSE+OPT-2.7B | Scaling Sentence Embeddings with Large Language Models | 2023-07-31T00:00:00 | https://arxiv.org/abs/2307.16645v1 | ["https://github.com/kongds/scaling_sentemb"] | In the paper 'Scaling Sentence Embeddings with Large Language Models', what Spearman Correlation score did the PromptEOL+CSE+OPT-2.7B model get on the STS16 dataset | 0.8591 |
| PASCAL-5i (1-Shot) | MSDNet (ResNet-101) | MSDNet: Multi-Scale Decoder for Few-Shot Semantic Segmentation via Transformer-Guided Prototyping | 2024-09-17T00:00:00 | https://arxiv.org/abs/2409.11316v1 | ["https://github.com/amirrezafateh/msdnet"] | In the paper 'MSDNet: Multi-Scale Decoder for Few-Shot Semantic Segmentation via Transformer-Guided Prototyping', what Mean IoU score did the MSDNet (ResNet-101) model get on the PASCAL-5i (1-Shot) dataset | 64.7 |
| PASCAL Context | PlainSeg (EVA-02-L) | Minimalist and High-Performance Semantic Segmentation with Plain Vision Transformers | 2023-10-19T00:00:00 | https://arxiv.org/abs/2310.12755v1 | ["https://github.com/ydhonghit/plainseg"] | In the paper 'Minimalist and High-Performance Semantic Segmentation with Plain Vision Transformers', what mIoU score did the PlainSeg (EVA-02-L) model get on the PASCAL Context dataset | 71.0 |
| GoPro | MLWNet | Efficient Multi-scale Network with Learnable Discrete Wavelet Transform for Blind Motion Deblurring | 2023-12-29T00:00:00 | https://arxiv.org/abs/2401.00027v2 | ["https://github.com/thqiu0419/mlwnet"] | In the paper 'Efficient Multi-scale Network with Learnable Discrete Wavelet Transform for Blind Motion Deblurring', what PSNR score did the MLWNet model get on the GoPro dataset | 33.83 |
| MSVD-Indonesian | X-CLIP (Cross-Lingual) | MSVD-Indonesian: A Benchmark for Multimodal Video-Text Tasks in Indonesian | 2023-06-20T00:00:00 | https://arxiv.org/abs/2306.11341v1 | ["https://github.com/willyfh/msvd-indonesian"] | In the paper 'MSVD-Indonesian: A Benchmark for Multimodal Video-Text Tasks in Indonesian', what text-to-video R@1 score did the X-CLIP (Cross-Lingual) model get on the MSVD-Indonesian dataset | 32.3 |
| ETTh1 (720) Multivariate | MoLE-DLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06786v3 | ["https://github.com/rogerni/mole"] | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the ETTh1 (720) Multivariate dataset | 0.505 |
| OVIS validation | UniVS(Swin-L) | UniVS: Unified and Universal Video Segmentation with Prompts as Queries | 2024-02-28T00:00:00 | https://arxiv.org/abs/2402.18115v2 | ["https://github.com/minghanli/univs"] | In the paper 'UniVS: Unified and Universal Video Segmentation with Prompts as Queries', what mask AP score did the UniVS(Swin-L) model get on the OVIS validation dataset | 41.7 |
| KITTI-360 | Symphonies | Symphonize 3D Semantic Scene Completion with Contextual Instance Queries | 2023-06-27T00:00:00 | https://arxiv.org/abs/2306.15670v2 | ["https://github.com/hustvl/symphonies"] | In the paper 'Symphonize 3D Semantic Scene Completion with Contextual Instance Queries', what mIoU score did the Symphonies model get on the KITTI-360 dataset | 18.58 |
| BoolQ | PaLM 2-L (1-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-L (1-shot) model get on the BoolQ dataset | 90.9 |
| MSVD | CoCap (ViT/L14) | Accurate and Fast Compressed Video Captioning | 2023-09-22T00:00:00 | https://arxiv.org/abs/2309.12867v2 | ["https://github.com/acherstyx/CoCap"] | In the paper 'Accurate and Fast Compressed Video Captioning', what CIDEr score did the CoCap (ViT/L14) model get on the MSVD dataset | 121.5 |
| nuScenes | FocalFormer3D-TTA | FocalFormer3D : Focusing on Hard Instance for 3D Object Detection | 2023-08-08T00:00:00 | https://arxiv.org/abs/2308.04556v1 | ["https://github.com/NVlabs/FocalFormer3D"] | In the paper 'FocalFormer3D : Focusing on Hard Instance for 3D Object Detection', what NDS score did the FocalFormer3D-TTA model get on the nuScenes dataset | 0.74 |
| SMAC 26m_vs_30m | DMIX | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04T00:00:00 | https://arxiv.org/abs/2306.02430v1 | ["https://github.com/j3soon/dfac-extended"] | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the DMIX model get on the SMAC 26m_vs_30m dataset | 81.82 |
| EuroSAT | ProMetaR | Prompt Learning via Meta-Regularization | 2024-04-01T00:00:00 | https://arxiv.org/abs/2404.00851v1 | ["https://github.com/mlvlab/prometar"] | In the paper 'Prompt Learning via Meta-Regularization', what Harmonic mean score did the ProMetaR model get on the EuroSAT dataset | 85.30 |
| FRMT (Portuguese - Portugal) | Google Translate | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what BLEURT score did the Google Translate model get on the FRMT (Portuguese - Portugal) dataset | 75.3 |
| Waymo Open Dataset | DetZero | DetZero: Rethinking Offboard 3D Object Detection with Long-term Sequential Point Clouds | 2023-06-09T00:00:00 | https://arxiv.org/abs/2306.06023v2 | ["https://github.com/pjlab-adg/detzero"] | In the paper 'DetZero: Rethinking Offboard 3D Object Detection with Long-term Sequential Point Clouds', what MOTA/L2 score did the DetZero model get on the Waymo Open Dataset dataset | 0.7505 |
| MM-Vet | ConvLLaVA | ConvLLaVA: Hierarchical Backbones as Visual Encoder for Large Multimodal Models | 2024-05-24T00:00:00 | https://arxiv.org/abs/2405.15738v1 | ["https://github.com/alibaba/conv-llava"] | In the paper 'ConvLLaVA: Hierarchical Backbones as Visual Encoder for Large Multimodal Models', what GPT-4 score score did the ConvLLaVA model get on the MM-Vet dataset | 45.9 |
| UCFRep | ESCounts | Every Shot Counts: Using Exemplars for Repetition Counting in Videos | 2024-03-26T00:00:00 | https://arxiv.org/abs/2403.18074v2 | ["https://github.com/sinhasaptarshi/EveryShotCounts"] | In the paper 'Every Shot Counts: Using Exemplars for Repetition Counting in Videos', what RMSE score did the ESCounts model get on the UCFRep dataset | 1.972 |
| Structured3D | SFSS-MMSI (RGB Only) | Single Frame Semantic Segmentation Using Multi-Modal Spherical Images | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09369v1 | ["https://github.com/sguttikon/SFSS-MMSI"] | In the paper 'Single Frame Semantic Segmentation Using Multi-Modal Spherical Images', what Validation mIoU score did the SFSS-MMSI (RGB Only) model get on the Structured3D dataset | 71.94 |
| PCam | Virchow | Virchow: A Million-Slide Digital Pathology Foundation Model | 2023-09-14T00:00:00 | https://arxiv.org/abs/2309.07778v5 | ["https://github.com/Paige-AI/paige-ml-sdk"] | In the paper 'Virchow: A Million-Slide Digital Pathology Foundation Model', what Accuracy score did the Virchow model get on the PCam dataset | 0.933 |
| MSU SR-QA Dataset | TOPIQ FACE | TOPIQ: A Top-down Approach from Semantics to Distortions for Image Quality Assessment | 2023-08-06T00:00:00 | https://arxiv.org/abs/2308.03060v1 | ["https://github.com/chaofengc/iqa-pytorch"] | In the paper 'TOPIQ: A Top-down Approach from Semantics to Distortions for Image Quality Assessment', what SROCC score did the TOPIQ FACE model get on the MSU SR-QA Dataset dataset | 0.59564 |
| CIFAR-10 | EDM-AOT (unconditional) | Improving Diffusion-Based Generative Models via Approximated Optimal Transport | 2024-03-08T00:00:00 | https://arxiv.org/abs/2403.05069v1 | ["https://github.com/large-scale-kim/EDM-AOT"] | In the paper 'Improving Diffusion-Based Generative Models via Approximated Optimal Transport', what FID score did the EDM-AOT (unconditional) model get on the CIFAR-10 dataset | 1.88 |
| DEplain-APA-doc | long-mBART (trained on DEplain-web-doc) | DEPLAIN: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification | 2023-05-30T00:00:00 | https://arxiv.org/abs/2305.18939v1 | ["https://github.com/rstodden/deplain"] | In the paper 'DEPLAIN: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification', what SARI (EASSE>=0.2.1) score did the long-mBART (trained on DEplain-web-doc) model get on the DEplain-APA-doc dataset | 35.02 |
| OK-VQA | HYDRA | HYDRA: A Hyper Agent for Dynamic Compositional Visual Reasoning | 2024-03-19T00:00:00 | https://arxiv.org/abs/2403.12884v2 | ["https://github.com/ControlNet/HYDRA"] | In the paper 'HYDRA: A Hyper Agent for Dynamic Compositional Visual Reasoning', what Accuracy score did the HYDRA model get on the OK-VQA dataset | 48.6 |
| RefCOCO | GLEE-Pro | General Object Foundation Model for Images and Videos at Scale | 2023-12-14T00:00:00 | https://arxiv.org/abs/2312.09158v1 | ["https://github.com/FoundationVision/GLEE"] | In the paper 'General Object Foundation Model for Images and Videos at Scale', what IoU score did the GLEE-Pro model get on the RefCOCO dataset | 80.0 |
| Kinetics | OST | OST: Refining Text Knowledge with Optimal Spatio-Temporal Descriptor for General Video Recognition | 2023-11-30T00:00:00 | https://arxiv.org/abs/2312.00096v2 | ["https://github.com/tomchen-ctj/OST"] | In the paper 'OST: Refining Text Knowledge with Optimal Spatio-Temporal Descriptor for General Video Recognition', what Top-1 Accuracy score did the OST model get on the Kinetics dataset | 75.1 |
| ASAP | Beat This! | Beat this! Accurate beat tracking without DBN postprocessing | 2024-07-31T00:00:00 | https://arxiv.org/abs/2407.21658v1 | ["https://github.com/CPJKU/beat_this"] | In the paper 'Beat this! Accurate beat tracking without DBN postprocessing', what F1 score did the Beat This! model get on the ASAP dataset | 61.2 |
| TVBench | Tarsier-34B | Tarsier: Recipes for Training and Evaluating Large Video Description Models | 2024-06-30T00:00:00 | https://arxiv.org/abs/2407.00634v2 | ["https://github.com/bytedance/tarsier"] | In the paper 'Tarsier: Recipes for Training and Evaluating Large Video Description Models', what Average Accuracy score did the Tarsier-34B model get on the TVBench dataset | 54.3 |
| CIRR | VISTA (base) | VISTA: Visualized Text Embedding For Universal Multi-Modal Retrieval | 2024-06-06T00:00:00 | https://arxiv.org/abs/2406.04292v1 | ["https://github.com/flagopen/flagembedding"] | In the paper 'VISTA: Visualized Text Embedding For Universal Multi-Modal Retrieval', what (Recall@5+Recall_subset@1)/2 score did the VISTA (base) model get on the CIRR dataset | 75.9 |
| VisDrone- 1% labeled data | SSOD + Crop (L + U) | Density Crop-guided Semi-supervised Object Detection in Aerial Images | 2023-08-09T00:00:00 | https://arxiv.org/abs/2308.05032v1 | ["https://github.com/akhilpm/dronessod"] | In the paper 'Density Crop-guided Semi-supervised Object Detection in Aerial Images', what COCO-style AP score did the SSOD + Crop (L + U) model get on the VisDrone- 1% labeled data dataset | 17.21 |
SICE-Grad | CIDNet | You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement | 2024-02-08T00:00:00 | https://arxiv.org/abs/2402.05809v3 | [
"https://github.com/fediory/hvi-cidnet"
] | In the paper 'You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement', what Average PSNR score did the CIDNet model get on the SICE-Grad dataset
| 13.446 |
Nardo-Air R | CLIP | AnyLoc: Towards Universal Visual Place Recognition | 2023-08-01T00:00:00 | https://arxiv.org/abs/2308.00688v2 | [
"https://github.com/AnyLoc/AnyLoc"
] | In the paper 'AnyLoc: Towards Universal Visual Place Recognition', what Recall@1 score did the CLIP model get on the Nardo-Air R dataset
| 61.97 |
ETTh1 (192) Multivariate | DiPE-Linear | Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting | 2024-11-26T00:00:00 | https://arxiv.org/abs/2411.17257v1 | [
"https://github.com/wintertee/dipe-linear"
] | In the paper 'Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting', what MSE score did the DiPE-Linear model get on the ETTh1 (192) Multivariate dataset
| 0.407 |
CIFAR-100 | Balanced Mixture | Balanced Mixture of SuperNets for Learning the CNN Pooling Architecture | 2023-06-21T00:00:00 | https://arxiv.org/abs/2306.11982v1 | [
"https://github.com/mehravehj/Balanced-Mixture-of-SuperNets"
] | In the paper 'Balanced Mixture of SuperNets for Learning the CNN Pooling Architecture', what Accuracy (% ) score did the Balanced Mixture model get on the CIFAR-100 dataset
| 79.61 |
VNHSGE-Biology | ChatGPT | VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models | 2023-05-20T00:00:00 | https://arxiv.org/abs/2305.12199v1 | [
"https://github.com/xdao85/vnhsge"
] | In the paper 'VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models', what Accuracy score did the ChatGPT model get on the VNHSGE-Biology dataset
| 58 |
VisA | AdaCLIP | AdaCLIP: Adapting CLIP with Hybrid Learnable Prompts for Zero-Shot Anomaly Detection | 2024-07-22T00:00:00 | https://arxiv.org/abs/2407.15795v1 | [
"https://github.com/caoyunkang/adaclip"
] | In the paper 'AdaCLIP: Adapting CLIP with Hybrid Learnable Prompts for Zero-Shot Anomaly Detection', what Detection AUROC score did the AdaCLIP model get on the VisA dataset
| 85.8 |
Human3.6M | GLA-GCN (T=243, CPN) | GLA-GCN: Global-local Adaptive Graph Convolutional Network for 3D Human Pose Estimation from Monocular Video | 2023-07-12T00:00:00 | https://arxiv.org/abs/2307.05853v2 | [
"https://github.com/bruceyo/GLA-GCN"
] | In the paper 'GLA-GCN: Global-local Adaptive Graph Convolutional Network for 3D Human Pose Estimation from Monocular Video', what Average MPJPE (mm) score did the GLA-GCN (T=243, CPN) model get on the Human3.6M dataset
| 44.4 |
ImageNet 256x256 | MAR-H, Diff Loss | Autoregressive Image Generation without Vector Quantization | 2024-06-17T00:00:00 | https://arxiv.org/abs/2406.11838v3 | [
"https://github.com/lth14/mar"
] | In the paper 'Autoregressive Image Generation without Vector Quantization', what FID score did the MAR-H, Diff Loss model get on the ImageNet 256x256 dataset
| 1.55 |
STAR Benchmark | GF(uns) | Glance and Focus: Memory Prompting for Multi-Event Video Question Answering | 2024-01-03T00:00:00 | https://arxiv.org/abs/2401.01529v1 | [
"https://github.com/byz0e/glance-focus"
] | In the paper 'Glance and Focus: Memory Prompting for Multi-Event Video Question Answering', what Average Accuracy score did the GF(uns) model get on the STAR Benchmark dataset
| 53.86 |
Aria Everyday Objects | ImVoxelNet | EFM3D: A Benchmark for Measuring Progress Towards 3D Egocentric Foundation Models | 2024-06-14T00:00:00 | https://arxiv.org/abs/2406.10224v1 | [
"https://github.com/facebookresearch/efm3d"
] | In the paper 'EFM3D: A Benchmark for Measuring Progress Towards 3D Egocentric Foundation Models', what mAP score did the ImVoxelNet model get on the Aria Everyday Objects dataset
| 15 |
Cityscapes val | CSFNet-2 | CSFNet: A Cosine Similarity Fusion Network for Real-Time RGB-X Semantic Segmentation of Driving Scenes | 2024-07-01T00:00:00 | https://arxiv.org/abs/2407.01328v1 | [
"https://github.com/Danial-Qashqai/CSFNet"
] | In the paper 'CSFNet: A Cosine Similarity Fusion Network for Real-Time RGB-X Semantic Segmentation of Driving Scenes', what mIoU score did the CSFNet-2 model get on the Cityscapes val dataset
| 76.36 |
UMVM-oea-d-w-v2 | UMAEA (w/o surf & iter ) | Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment | 2023-07-30T00:00:00 | https://arxiv.org/abs/2307.16210v2 | [
"https://github.com/zjukg/umaea"
] | In the paper 'Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment', what Hits@1 score did the UMAEA (w/o surf & iter ) model get on the UMVM-oea-d-w-v2 dataset
| 0.948 |
AgeDB | ResNet-50-OR-CNN | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10T00:00:00 | https://arxiv.org/abs/2307.04570v3 | [
"https://github.com/paplhjak/facial-age-estimation-benchmark"
] | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-OR-CNN model get on the AgeDB dataset
| 5.78 |
NC4K | ZoomNeXt-ResNet-50 | ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection | 2023-10-31T00:00:00 | https://arxiv.org/abs/2310.20208v4 | [
"https://github.com/lartpang/zoomnext"
] | In the paper 'ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection', what S-measure score did the ZoomNeXt-ResNet-50 model get on the NC4K dataset
| 0.874 |
METR-LA | STD-MAE | Spatial-Temporal-Decoupled Masked Pre-training for Spatiotemporal Forecasting | 2023-12-01T00:00:00 | https://arxiv.org/abs/2312.00516v3 | [
"https://github.com/jimmy-7664/std-mae"
] | In the paper 'Spatial-Temporal-Decoupled Masked Pre-training for Spatiotemporal Forecasting', what MAE @ 12 step score did the STD-MAE model get on the METR-LA dataset
| 3.40 |
MSVD-QA | All-in-one+ | Open-vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09363v1 | [
"https://github.com/mlvlab/ovqa"
] | In the paper 'Open-vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models', what Accuracy score did the All-in-one+ model get on the MSVD-QA dataset
| 0.438 |
Atari 2600 HERO | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07T00:00:00 | https://arxiv.org/abs/2305.04180v3 | [
"https://github.com/xinjinghao/color"
] | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score score did the ASL DDQN model get on the Atari 2600 HERO dataset
| 26578.5 |
QVHighlights | LLMEPET | Prior Knowledge Integration via LLM Encoding and Pseudo Event Regulation for Video Moment Retrieval | 2024-07-21T00:00:00 | https://arxiv.org/abs/2407.15051v3 | [
"https://github.com/fletcherjiang/llmepet"
] | In the paper 'Prior Knowledge Integration via LLM Encoding and Pseudo Event Regulation for Video Moment Retrieval', what mAP score did the LLMEPET model get on the QVHighlights dataset
| 40.33 |
SYNTHIA | MRFP+(Ours) Resnet50 | MRFP: Learning Generalizable Semantic Segmentation from Sim-2-Real with Multi-Resolution Feature Perturbation | 2023-11-30T00:00:00 | https://arxiv.org/abs/2311.18331v2 | [
"https://github.com/airl-iisc/MRFP"
] | In the paper 'MRFP: Learning Generalizable Semantic Segmentation from Sim-2-Real with Multi-Resolution Feature Perturbation', what mIoU score did the MRFP+(Ours) Resnet50 model get on the SYNTHIA dataset
| 30.22 |