Schema (8 columns per record):
  dataset      string (length 0-82)
  model_name   string (length 0-150)
  paper_title  string (length 19-175)
  paper_date   timestamp[ns]
  paper_url    string (length 32-35)
  code_links   list (length 1)
  prompts      string (length 105-331)
  answer       string (length 1-67)
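The schema above can be read as a record type. The sketch below is illustrative only — the `Record` dataclass and its field types are assumptions, not part of the dump — populated verbatim with the dump's first row to show how one record maps onto the eight columns.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class Record:
    dataset: str           # benchmark name, e.g. "GOT-10k"
    model_name: str        # model evaluated, e.g. "ODTrack-B"
    paper_title: str
    paper_date: datetime   # timestamp[ns] in the source schema
    paper_url: str         # arXiv abstract URL
    code_links: List[str]  # always a single-element list in this dump
    prompts: str           # question about the paper's reported score
    answer: str            # the score, kept as a string (formats vary)

# First record of the dump, transcribed verbatim:
first = Record(
    dataset="GOT-10k",
    model_name="ODTrack-B",
    paper_title="ODTrack: Online Dense Temporal Token Learning for Visual Tracking",
    paper_date=datetime(2024, 1, 3),
    paper_url="https://arxiv.org/abs/2401.01686v1",
    code_links=["https://github.com/gxnu-zhonglab/odtrack"],
    prompts=("In the paper 'ODTrack: Online Dense Temporal Token Learning for "
             "Visual Tracking', what Average Overlap score did the ODTrack-B "
             "model get on the GOT-10k dataset"),
    answer="77.0",
)
print(first.dataset, first.answer)  # -> GOT-10k 77.0
```

Note that `answer` is a string rather than a float: some rows carry units, percent signs, or uncertainty ranges (e.g. "0.572 +/- 0.017"), so numeric parsing is left to the consumer.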
GOT-10k
ODTrack-B
ODTrack: Online Dense Temporal Token Learning for Visual Tracking
2024-01-03T00:00:00
https://arxiv.org/abs/2401.01686v1
[ "https://github.com/gxnu-zhonglab/odtrack" ]
In the paper 'ODTrack: Online Dense Temporal Token Learning for Visual Tracking', what Average Overlap score did the ODTrack-B model get on the GOT-10k dataset
77.0
ImageNet 64x64
GDD
Diffusion Models Are Innate One-Step Generators
2024-05-31T00:00:00
https://arxiv.org/abs/2405.20750v2
[ "https://github.com/Zyriix/GDD" ]
In the paper 'Diffusion Models Are Innate One-Step Generators', what FID score did the GDD model get on the ImageNet 64x64 dataset
1.42
PASCAL-5i (1-Shot)
MIANet (ResNet-50)
MIANet: Aggregating Unbiased Instance and General Information for Few-Shot Semantic Segmentation
2023-05-23T00:00:00
https://arxiv.org/abs/2305.13864v1
[ "https://github.com/aldrich2y/mianet" ]
In the paper 'MIANet: Aggregating Unbiased Instance and General Information for Few-Shot Semantic Segmentation', what Mean IoU score did the MIANet (ResNet-50) model get on the PASCAL-5i (1-Shot) dataset
68.72
DUT-OMRON
BiRefNet (HRSOD, UHRSD)
Bilateral Reference for High-Resolution Dichotomous Image Segmentation
2024-01-07T00:00:00
https://arxiv.org/abs/2401.03407v6
[ "https://github.com/zhengpeng7/birefnet" ]
In the paper 'Bilateral Reference for High-Resolution Dichotomous Image Segmentation', what MAE score did the BiRefNet (HRSOD, UHRSD) model get on the DUT-OMRON dataset
0.040
Wisconsin
MGNN + Hetero-S (6 layers)
The Heterophilic Snowflake Hypothesis: Training and Empowering GNNs for Heterophilic Graphs
2024-06-18T00:00:00
https://arxiv.org/abs/2406.12539v1
[ "https://github.com/bingreeky/heterosnoh" ]
In the paper 'The Heterophilic Snowflake Hypothesis: Training and Empowering GNNs for Heterophilic Graphs', what Accuracy score did the MGNN + Hetero-S (6 layers) model get on the Wisconsin dataset
88.77
MSVD-QA
MA-LMM
MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding
2024-04-08T00:00:00
https://arxiv.org/abs/2404.05726v2
[ "https://github.com/boheumd/MA-LMM" ]
In the paper 'MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding', what Accuracy score did the MA-LMM model get on the MSVD-QA dataset
0.606
SVTP
CLIP4STR-L
CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model
2023-05-23T00:00:00
https://arxiv.org/abs/2305.14014v3
[ "https://github.com/VamosC/CLIP4STR" ]
In the paper 'CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model', what Accuracy score did the CLIP4STR-L model get on the SVTP dataset
97.4
CIFAR-100-LT (ρ=50)
LIFT (ViT-B/16, CLIP)
Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts
2023-09-18T00:00:00
https://arxiv.org/abs/2309.10019v3
[ "https://github.com/shijxcs/lift" ]
In the paper 'Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts', what Error Rate score did the LIFT (ViT-B/16, CLIP) model get on the CIFAR-100-LT (ρ=50) dataset
16.9
ETTh2 (96) Multivariate
DiPE-Linear
Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting
2024-11-26T00:00:00
https://arxiv.org/abs/2411.17257v1
[ "https://github.com/wintertee/dipe-linear" ]
In the paper 'Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting', what MSE score did the DiPE-Linear model get on the ETTh2 (96) Multivariate dataset
0.275
KIT Motion-Language
ParCo
ParCo: Part-Coordinating Text-to-Motion Synthesis
2024-03-27T00:00:00
https://arxiv.org/abs/2403.18512v2
[ "https://github.com/qrzou/parco" ]
In the paper 'ParCo: Part-Coordinating Text-to-Motion Synthesis', what FID score did the ParCo model get on the KIT Motion-Language dataset
0.453
SICK
Rematch
Rematch: Robust and Efficient Matching of Local Knowledge Graphs to Improve Structural and Semantic Similarity
2024-04-02T00:00:00
https://arxiv.org/abs/2404.02126v1
[ "https://github.com/osome-iu/Rematch-RARE" ]
In the paper 'Rematch: Robust and Efficient Matching of Local Knowledge Graphs to Improve Structural and Semantic Similarity', what Spearman Correlation score did the Rematch model get on the SICK dataset
0.6772
CIFAR-100
ZLaP
Label Propagation for Zero-shot Classification with Vision-Language Models
2024-04-05T00:00:00
https://arxiv.org/abs/2404.04072v1
[ "https://github.com/vladan-stojnic/zlap" ]
In the paper 'Label Propagation for Zero-shot Classification with Vision-Language Models', what Accuracy score did the ZLaP model get on the CIFAR-100 dataset
74
ImageNet-1k vs iNaturalist
NAC-UE (ResNet-50)
Neuron Activation Coverage: Rethinking Out-of-distribution Detection and Generalization
2023-06-05T00:00:00
https://arxiv.org/abs/2306.02879v3
[ "https://github.com/bierone/ood_coverage" ]
In the paper 'Neuron Activation Coverage: Rethinking Out-of-distribution Detection and Generalization', what AUROC score did the NAC-UE (ResNet-50) model get on the ImageNet-1k vs iNaturalist dataset
96.52
CHILI-100K
EdgeCNN
CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning
2024-02-20T00:00:00
https://arxiv.org/abs/2402.13221v2
[ "https://github.com/UlrikFriisJensen/CHILI" ]
In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what F1-score (Weighted) score did the EdgeCNN model get on the CHILI-100K dataset
0.572 +/- 0.017
CDD Dataset (season-varying)
SGSLN/256
Exchanging Dual Encoder-Decoder: A New Strategy for Change Detection with Semantic Guidance and Spatial Localization
2023-11-19T00:00:00
https://arxiv.org/abs/2311.11302v1
[ "https://github.com/walking-shadow/Semantic-guidance-and-spatial-localization-network" ]
In the paper 'Exchanging Dual Encoder-Decoder: A New Strategy for Change Detection with Semantic Guidance and Spatial Localization', what F1-Score did the SGSLN/256 model get on the CDD (season-varying) dataset
96.24
COCO-Stuff
OTSeg+
OTSeg: Multi-prompt Sinkhorn Attention for Zero-Shot Semantic Segmentation
2024-03-21T00:00:00
https://arxiv.org/abs/2403.14183v2
[ "https://github.com/cubeyoung/OTSeg" ]
In the paper 'OTSeg: Multi-prompt Sinkhorn Attention for Zero-Shot Semantic Segmentation', what Transductive Setting hIoU score did the OTSeg+ model get on the COCO-Stuff dataset
49.8
MSP-Podcast (Activation)
wav2small-Teacher
Wav2Small: Distilling Wav2Vec2 to 72K parameters for Low-Resource Speech emotion recognition
2024-08-25T00:00:00
https://arxiv.org/abs/2408.13920v4
[ "https://github.com/dkounadis/wav2small" ]
In the paper 'Wav2Small: Distilling Wav2Vec2 to 72K parameters for Low-Resource Speech emotion recognition', what CCC score did the wav2small-Teacher model get on the MSP-Podcast (Activation) dataset
0.7620181
Traffic (96)
TSMixer
TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting
2023-06-14T00:00:00
https://arxiv.org/abs/2306.09364v4
[ "https://github.com/ibm/tsfm" ]
In the paper 'TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting', what MSE score did the TSMixer model get on the Traffic (96) dataset
0.356
Charades-STA
SG-DETR (w/ PT)
Saliency-Guided DETR for Moment Retrieval and Highlight Detection
2024-10-02T00:00:00
https://arxiv.org/abs/2410.01615v1
[ "https://github.com/ai-forever/sg-detr" ]
In the paper 'Saliency-Guided DETR for Moment Retrieval and Highlight Detection', what R@1 IoU=0.5 score did the SG-DETR (w/ PT) model get on the Charades-STA dataset
71.10
LEVIR-CD
C2FNet
C2F-SemiCD: A Coarse-to-Fine Semi-Supervised Change Detection Method Based on Consistency Regularization in High-Resolution Remote Sensing Images
2024-04-22T00:00:00
https://arxiv.org/abs/2404.13838v1
[ "https://github.com/chengxihan/c2f-semicd-and-c2f-cdnet" ]
In the paper 'C2F-SemiCD: A Coarse-to-Fine Semi-Supervised Change Detection Method Based on Consistency Regularization in High-Resolution Remote Sensing Images', what F1 score did the C2FNet model get on the LEVIR-CD dataset
91.83
RLBench
RVT
RVT: Robotic View Transformer for 3D Object Manipulation
2023-06-26T00:00:00
https://arxiv.org/abs/2306.14896v1
[ "https://github.com/NVlabs/RVT" ]
In the paper 'RVT: Robotic View Transformer for 3D Object Manipulation', what Succ. Rate (18 tasks, 100 demo/task) score did the RVT model get on the RLBench dataset
62.9
D4RL
Primal.+DT
Primal-Attention: Self-attention through Asymmetric Kernel SVD in Primal Representation
2023-05-31T00:00:00
https://arxiv.org/abs/2305.19798v2
[ "https://github.com/yingyichen-cyy/PrimalAttention" ]
In the paper 'Primal-Attention: Self-attention through Asymmetric Kernel SVD in Primal Representation', what Average Reward score did the Primal.+DT model get on the D4RL dataset
77.5
SAFIM
deepseek-coder-33b-base
Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks
2024-03-07T00:00:00
https://arxiv.org/abs/2403.04814v3
[ "https://github.com/gonglinyuan/safim" ]
In the paper 'Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks', what Algorithmic score did the deepseek-coder-33b-base model get on the SAFIM dataset
60.78
IFEval
AutoIF (Qwen2 72B)
Self-play with Execution Feedback: Improving Instruction-following Capabilities of Large Language Models
2024-06-19T00:00:00
https://arxiv.org/abs/2406.13542v3
[ "https://github.com/QwenLM/AutoIF" ]
In the paper 'Self-play with Execution Feedback: Improving Instruction-following Capabilities of Large Language Models', what Prompt-level strict-accuracy score did the AutoIF (Qwen2 72B) model get on the IFEval dataset
80.2
SEPE 8K
DiQP on AV1 with QP 255
Reversing the Damage: A QP-Aware Transformer-Diffusion Approach for 8K Video Restoration under Codec Compression
2024-12-12T00:00:00
https://arxiv.org/abs/2412.08912v1
[ "https://github.com/alimd94/DiQP" ]
In the paper 'Reversing the Damage: A QP-Aware Transformer-Diffusion Approach for 8K Video Restoration under Codec Compression', what Average PSNR (dB) score did the DiQP on AV1 with QP 255 model get on the SEPE 8K dataset
34.868
ZINC
CIN++-500k
CIN++: Enhancing Topological Message Passing
2023-06-06T00:00:00
https://arxiv.org/abs/2306.03561v1
[ "https://github.com/twitter-research/cwn" ]
In the paper 'CIN++: Enhancing Topological Message Passing', what MAE score did the CIN++-500k model get on the ZINC dataset
0.077
SMAP
ContextFlow++ (Glow-based)
ContextFlow++: Generalist-Specialist Flow-based Generative Models with Mixed-Variable Context Encoding
2024-06-02T00:00:00
https://arxiv.org/abs/2406.00578v1
[ "https://github.com/gudovskiy/contextflow" ]
In the paper 'ContextFlow++: Generalist-Specialist Flow-based Generative Models with Mixed-Variable Context Encoding', what Precision score did the ContextFlow++ (Glow-based) model get on the SMAP dataset
88.64
MS COCO
BUCTD (PETR, with generative sampling)
Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity
2023-06-13T00:00:00
https://arxiv.org/abs/2306.07879v2
[ "https://github.com/amathislab/BUCTD" ]
In the paper 'Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity', what APM score did the BUCTD (PETR, with generative sampling) model get on the MS COCO dataset
74.2
NYU Depth v2
CAINet (MobileNet-V2)
Context-Aware Interaction Network for RGB-T Semantic Segmentation
2024-01-03T00:00:00
https://arxiv.org/abs/2401.01624v1
[ "https://github.com/yinglv1106/cainet" ]
In the paper 'Context-Aware Interaction Network for RGB-T Semantic Segmentation', what Mean IoU score did the CAINet (MobileNet-V2) model get on the NYU Depth v2 dataset
52.6%
CUB 200 5-way 1-shot
PT+MAP+SF+BPA (transductive)
The Balanced-Pairwise-Affinities Feature Transform
2024-06-25T00:00:00
https://arxiv.org/abs/2407.01467v1
[ "https://github.com/danielshalam/bpa" ]
In the paper 'The Balanced-Pairwise-Affinities Feature Transform', what Accuracy score did the PT+MAP+SF+BPA (transductive) model get on the CUB 200 5-way 1-shot dataset
95.80
Pittsburgh-30k-test
BoQ
BoQ: A Place is Worth a Bag of Learnable Queries
2024-05-12T00:00:00
https://arxiv.org/abs/2405.07364v3
[ "https://github.com/amaralibey/bag-of-queries" ]
In the paper 'BoQ: A Place is Worth a Bag of Learnable Queries', what Recall@1 score did the BoQ model get on the Pittsburgh-30k-test dataset
93.7
BDD100K val
DSNet-head64
DSNet: A Novel Way to Use Atrous Convolutions in Semantic Segmentation
2024-06-06T00:00:00
https://arxiv.org/abs/2406.03702v1
[ "https://github.com/takaniwa/dsnet" ]
In the paper 'DSNet: A Novel Way to Use Atrous Convolutions in Semantic Segmentation', what mIoU score did the DSNet-head64 model get on the BDD100K val dataset
62.6 (172.2 FPS, 4090)
MBPP
DeepSeek-Coder-Base 1.3B (few-shot)
DeepSeek-Coder: When the Large Language Model Meets Programming -- The Rise of Code Intelligence
2024-01-25T00:00:00
https://arxiv.org/abs/2401.14196v2
[ "https://github.com/deepseek-ai/DeepSeek-Coder" ]
In the paper 'DeepSeek-Coder: When the Large Language Model Meets Programming -- The Rise of Code Intelligence', what Accuracy score did the DeepSeek-Coder-Base 1.3B (few-shot) model get on the MBPP dataset
46.2
COCO minival
GLEE-Pro
General Object Foundation Model for Images and Videos at Scale
2023-12-14T00:00:00
https://arxiv.org/abs/2312.09158v1
[ "https://github.com/FoundationVision/GLEE" ]
In the paper 'General Object Foundation Model for Images and Videos at Scale', what box AP score did the GLEE-Pro model get on the COCO minival dataset
62.0
Refer-YouTube-VOS (2021 public validation)
EPCFormer (ViT-H)
EPCFormer: Expression Prompt Collaboration Transformer for Universal Referring Video Object Segmentation
2023-08-08T00:00:00
https://arxiv.org/abs/2308.04162v1
[ "https://github.com/lab206/epcformer" ]
In the paper 'EPCFormer: Expression Prompt Collaboration Transformer for Universal Referring Video Object Segmentation', what J&F score did the EPCFormer (ViT-H) model get on the Refer-YouTube-VOS (2021 public validation) dataset
65
OTB-2015
PiVOT-L
Improving Visual Object Tracking through Visual Prompting
2024-09-27T00:00:00
https://arxiv.org/abs/2409.18901v1
[ "https://github.com/chenshihfang/GOT" ]
In the paper 'Improving Visual Object Tracking through Visual Prompting', what Precision score did the PiVOT-L model get on the OTB-2015 dataset
0.946
ETTm2 (720) Multivariate
MoLE-DLinear
Mixture-of-Linear-Experts for Long-term Time Series Forecasting
2023-12-11T00:00:00
https://arxiv.org/abs/2312.06786v3
[ "https://github.com/rogerni/mole" ]
In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the ETTm2 (720) Multivariate dataset
0.399
LM-KBC 2023
VE-BERT
Expanding the Vocabulary of BERT for Knowledge Base Construction
2023-10-12T00:00:00
https://arxiv.org/abs/2310.08291v1
[ "https://github.com/MaastrichtU-IDS/LMKBC-2023" ]
In the paper 'Expanding the Vocabulary of BERT for Knowledge Base Construction', what F1 score did the VE-BERT model get on the LM-KBC 2023 dataset
0.362
MS-COCO (30-shot)
RISF (Resnet-101)
Re-Scoring Using Image-Language Similarity for Few-Shot Object Detection
2023-11-01T00:00:00
https://arxiv.org/abs/2311.00278v1
[ "https://github.com/INFINIQ-AI1/RISF" ]
In the paper 'Re-Scoring Using Image-Language Similarity for Few-Shot Object Detection', what AP score did the RISF (Resnet-101) model get on the MS-COCO (30-shot) dataset
24.4
DAVIS 2017 (val)
HyperSeg
HyperSeg: Towards Universal Visual Segmentation with Large Language Model
2024-11-26T00:00:00
https://arxiv.org/abs/2411.17606v2
[ "https://github.com/congvvc/HyperSeg" ]
In the paper 'HyperSeg: Towards Universal Visual Segmentation with Large Language Model', what J&F 1st frame score did the HyperSeg model get on the DAVIS 2017 (val) dataset
71.2
CUHK-PEDES
RDE
Noisy-Correspondence Learning for Text-to-Image Person Re-identification
2023-08-19T00:00:00
https://arxiv.org/abs/2308.09911v3
[ "https://github.com/QinYang79/RDE" ]
In the paper 'Noisy-Correspondence Learning for Text-to-Image Person Re-identification', what R@1 score did the RDE model get on the CUHK-PEDES dataset
75.94
SUN-RGBD val
Point-GCC+TR3D+FF
Point-GCC: Universal Self-supervised 3D Scene Pre-training via Geometry-Color Contrast
2023-05-31T00:00:00
https://arxiv.org/abs/2305.19623v2
[ "https://github.com/asterisci/point-gcc" ]
In the paper 'Point-GCC: Universal Self-supervised 3D Scene Pre-training via Geometry-Color Contrast', what mAP@0.25 score did the Point-GCC+TR3D+FF model get on the SUN-RGBD val dataset
69.7
RST-DT
Bottom-up Llama 2 (7B)
Can we obtain significant success in RST discourse parsing by using Large Language Models?
2024-03-08T00:00:00
https://arxiv.org/abs/2403.05065v1
[ "https://github.com/nttcslab-nlp/rstparser_eacl24" ]
In the paper 'Can we obtain significant success in RST discourse parsing by using Large Language Models?', what Standard Parseval (Span) score did the Bottom-up Llama 2 (7B) model get on the RST-DT dataset
78.2
EvalCrafter Text-to-Video (ECTV) Dataset
Show-1
Show-1: Marrying Pixel and Latent Diffusion Models for Text-to-Video Generation
2023-09-27T00:00:00
https://arxiv.org/abs/2309.15818v2
[ "https://github.com/showlab/show-1" ]
In the paper 'Show-1: Marrying Pixel and Latent Diffusion Models for Text-to-Video Generation', what Visual Quality score did the Show-1 model get on the EvalCrafter Text-to-Video (ECTV) dataset
53.74
CNRPark+EXT
EfficientNet-P
Revising deep learning methods in parking lot occupancy detection
2023-06-07T00:00:00
https://arxiv.org/abs/2306.04288v3
[ "https://github.com/eighonet/parking-research" ]
In the paper 'Revising deep learning methods in parking lot occupancy detection', what F1-score did the EfficientNet-P model get on the CNRPark+EXT dataset
0.9683
Caltech-101
ProMetaR
Prompt Learning via Meta-Regularization
2024-04-01T00:00:00
https://arxiv.org/abs/2404.00851v1
[ "https://github.com/mlvlab/prometar" ]
In the paper 'Prompt Learning via Meta-Regularization', what Harmonic mean score did the ProMetaR model get on the Caltech-101 dataset
96.16
MLT17
MRM
MRN: Multiplexed Routing Network for Incremental Multilingual Text Recognition
2023-05-24T00:00:00
https://arxiv.org/abs/2305.14758v3
[ "https://github.com/simplify23/MRN" ]
In the paper 'MRN: Multiplexed Routing Network for Incremental Multilingual Text Recognition', what Acc score did the MRM model get on the MLT17 dataset
78.4
S3DIS Area5
SuperCluster
Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering
2024-01-12T00:00:00
https://arxiv.org/abs/2401.06704v2
[ "https://github.com/drprojects/superpoint_transformer" ]
In the paper 'Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering', what PQ score did the SuperCluster model get on the S3DIS Area5 dataset
50.1
MCubeS
ShareCMP (B2 RGB-A-D)
ShareCMP: Polarization-Aware RGB-P Semantic Segmentation
2023-12-06T00:00:00
https://arxiv.org/abs/2312.03430v2
[ "https://github.com/lefteyex/sharecmp" ]
In the paper 'ShareCMP: Polarization-Aware RGB-P Semantic Segmentation', what mIoU score did the ShareCMP (B2 RGB-A-D) model get on the MCubeS dataset
50.99%
MM-Vet
LLaVA-1.5-7B (VG-S)
ProVision: Programmatically Scaling Vision-centric Instruction Data for Multimodal Language Models
2024-12-09T00:00:00
https://arxiv.org/abs/2412.07012v2
[ "https://github.com/jieyuz2/provision" ]
In the paper 'ProVision: Programmatically Scaling Vision-centric Instruction Data for Multimodal Language Models', what GPT-4 score did the LLaVA-1.5-7B (VG-S) model get on the MM-Vet dataset
40.4
MELD
ConCluGen
Multi-Task Multi-Modal Self-Supervised Learning for Facial Expression Recognition
2024-04-16T00:00:00
https://arxiv.org/abs/2404.10904v2
[ "https://github.com/tub-cv-group/conclugen" ]
In the paper 'Multi-Task Multi-Modal Self-Supervised Learning for Facial Expression Recognition', what Weighted Accuracy score did the ConCluGen model get on the MELD dataset
60.03
WDC Products-80%cc-seen-medium
Llama3.1_8B_structured_explanations
Fine-tuning Large Language Models for Entity Matching
2024-09-12T00:00:00
https://arxiv.org/abs/2409.08185v1
[ "https://github.com/wbsg-uni-mannheim/tailormatch" ]
In the paper 'Fine-tuning Large Language Models for Entity Matching', what F1 (%) score did the Llama3.1_8B_structured_explanations model get on the WDC Products-80%cc-seen-medium dataset
74.13
FSC147
CounTX (uses text descriptions instead of visual exemplars)
Open-world Text-specified Object Counting
2023-06-02T00:00:00
https://arxiv.org/abs/2306.01851v2
[ "https://github.com/niki-amini-naieni/countx" ]
In the paper 'Open-world Text-specified Object Counting', what MAE(val) score did the CounTX (uses text descriptions instead of visual exemplars) model get on the FSC147 dataset
17.10
UCF101
VFIMamba
VFIMamba: Video Frame Interpolation with State Space Models
2024-07-02T00:00:00
https://arxiv.org/abs/2407.02315v2
[ "https://github.com/mcg-nju/vfimamba" ]
In the paper 'VFIMamba: Video Frame Interpolation with State Space Models', what PSNR score did the VFIMamba model get on the UCF101 dataset
35.45
Atari 2600 Assault
ASL DDQN
Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity
2023-05-07T00:00:00
https://arxiv.org/abs/2305.04180v3
[ "https://github.com/xinjinghao/color" ]
In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score did the ASL DDQN model get on the Atari 2600 Assault dataset
14372.8
nuScenes LiDAR only
LION
LION: Linear Group RNN for 3D Object Detection in Point Clouds
2024-07-25T00:00:00
https://arxiv.org/abs/2407.18232v1
[ "https://github.com/happinesslz/LION" ]
In the paper 'LION: Linear Group RNN for 3D Object Detection in Point Clouds', what NDS score did the LION model get on the nuScenes LiDAR only dataset
73.9
Wikidata5M
KGT5 + Description
Friendly Neighbors: Contextualized Sequence-to-Sequence Link Prediction
2023-05-22T00:00:00
https://arxiv.org/abs/2305.13059v2
[ "https://github.com/uma-pi1/kgt5-context" ]
In the paper 'Friendly Neighbors: Contextualized Sequence-to-Sequence Link Prediction', what MRR score did the KGT5 + Description model get on the Wikidata5M dataset
0.381
UCSD Ped2
SD-MAE
Self-Distilled Masked Auto-Encoders are Efficient Video Anomaly Detectors
2023-06-21T00:00:00
https://arxiv.org/abs/2306.12041v2
[ "https://github.com/ristea/aed-mae" ]
In the paper 'Self-Distilled Masked Auto-Encoders are Efficient Video Anomaly Detectors', what AUC score did the SD-MAE model get on the UCSD Ped2 dataset
95.4%
ImageNet
KD++(T: ViT-S, S:resnet18)
Improving Knowledge Distillation via Regularizing Feature Norm and Direction
2023-05-26T00:00:00
https://arxiv.org/abs/2305.17007v1
[ "https://github.com/wangyz1608/knowledge-distillation-via-nd" ]
In the paper 'Improving Knowledge Distillation via Regularizing Feature Norm and Direction', what Top-1 accuracy % score did the KD++(T: ViT-S, S:resnet18) model get on the ImageNet dataset
71.46
HIDE (trained on GOPRO)
CAPTNet
Prompt-based Ingredient-Oriented All-in-One Image Restoration
2023-09-06T00:00:00
https://arxiv.org/abs/2309.03063v2
[ "https://github.com/Tombs98/CAPTNet" ]
In the paper 'Prompt-based Ingredient-Oriented All-in-One Image Restoration', what PSNR (sRGB) score did the CAPTNet model get on the HIDE (trained on GOPRO) dataset
31.86
CATH 4.2
StructGNN
Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement
2023-05-20T00:00:00
https://arxiv.org/abs/2305.15151v4
[ "https://github.com/A4Bio/OpenCPD" ]
In the paper 'Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement', what Sequence Recovery %(All) score did the StructGNN model get on the CATH 4.2 dataset
35.91
ScanNet200
OpenIns3D (3d only)
OpenIns3D: Snap and Lookup for 3D Open-vocabulary Instance Segmentation
2023-09-01T00:00:00
https://arxiv.org/abs/2309.00616v5
[ "https://github.com/Pointcept/OpenIns3D" ]
In the paper 'OpenIns3D: Snap and Lookup for 3D Open-vocabulary Instance Segmentation', what mAP score did the OpenIns3D (3d only) model get on the ScanNet200 dataset
8.8
GSM8K
DART-Math-DSMath-7B-Prop2Diff (0-shot CoT, w/o code)
DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving
2024-06-18T00:00:00
https://arxiv.org/abs/2407.13690v1
[ "https://github.com/hkust-nlp/dart-math" ]
In the paper 'DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving', what Accuracy score did the DART-Math-DSMath-7B-Prop2Diff (0-shot CoT, w/o code) model get on the GSM8K dataset
86.8
VideoInstruct
SlowFast-LLaVA-34B
SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models
2024-07-22T00:00:00
https://arxiv.org/abs/2407.15841v2
[ "https://github.com/apple/ml-slowfast-llava" ]
In the paper 'SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models', what mean score did the SlowFast-LLaVA-34B model get on the VideoInstruct dataset
3.32
ScanNetV2
OneFormer3D
OneFormer3D: One Transformer for Unified Point Cloud Segmentation
2023-11-24T00:00:00
https://arxiv.org/abs/2311.14405v1
[ "https://github.com/oneformer3d/oneformer3d" ]
In the paper 'OneFormer3D: One Transformer for Unified Point Cloud Segmentation', what PQ score did the OneFormer3D model get on the ScanNetV2 dataset
71.2
AudioCaps
EnCLAP-large
EnCLAP: Combining Neural Audio Codec and Audio-Text Joint Embedding for Automated Audio Captioning
2024-01-31T00:00:00
https://arxiv.org/abs/2401.17690v1
[ "https://github.com/jaeyeonkim99/enclap" ]
In the paper 'EnCLAP: Combining Neural Audio Codec and Audio-Text Joint Embedding for Automated Audio Captioning', what CIDEr score did the EnCLAP-large model get on the AudioCaps dataset
0.8029
Financial PhraseBank
FiLM
Exploring the Impact of Corpus Diversity on Financial Pretrained Language Models
2023-10-20T00:00:00
https://arxiv.org/abs/2310.13312v1
[ "https://github.com/deep-over/film" ]
In the paper 'Exploring the Impact of Corpus Diversity on Financial Pretrained Language Models', what Accuracy score did the FiLM model get on the Financial PhraseBank dataset
86.25
RefCOCOg-val
VATEX
Vision-Aware Text Features in Referring Image Segmentation: From Object Understanding to Context Understanding
2024-04-12T00:00:00
https://arxiv.org/abs/2404.08590v2
[ "https://github.com/nero1342/VATEX_RIS" ]
In the paper 'Vision-Aware Text Features in Referring Image Segmentation: From Object Understanding to Context Understanding', what mIoU score did the VATEX model get on the RefCOCOg-val dataset
69.73
SARDet-100K
MSFA (GFL+R50)
SARDet-100K: Towards Open-Source Benchmark and ToolKit for Large-Scale SAR Object Detection
2024-03-11T00:00:00
https://arxiv.org/abs/2403.06534v2
[ "https://github.com/zcablii/sardet_100k" ]
In the paper 'SARDet-100K: Towards Open-Source Benchmark and ToolKit for Large-Scale SAR Object Detection', what box mAP score did the MSFA (GFL+R50) model get on the SARDet-100K dataset
53.7
DomainNet
PromptStyler (CLIP, ViT-B/16)
PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization
2023-07-27T00:00:00
https://arxiv.org/abs/2307.15199v2
[ "https://github.com/zhanghr2001/promptta" ]
In the paper 'PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization', what Average Accuracy score did the PromptStyler (CLIP, ViT-B/16) model get on the DomainNet dataset
59.4
AgeDB
ResNet-50-SORD
A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark
2023-07-10T00:00:00
https://arxiv.org/abs/2307.04570v3
[ "https://github.com/paplhjak/facial-age-estimation-benchmark" ]
In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-SORD model get on the AgeDB dataset
5.81
Wildtrack
EarlyBird
EarlyBird: Early-Fusion for Multi-View Tracking in the Bird's Eye View
2023-10-20T00:00:00
https://arxiv.org/abs/2310.13350v1
[ "https://github.com/tteepe/EarlyBird" ]
In the paper 'EarlyBird: Early-Fusion for Multi-View Tracking in the Bird's Eye View', what IDF1 score did the EarlyBird model get on the Wildtrack dataset
92.3
Atari 2600 Atlantis
ASL DDQN
Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity
2023-05-07T00:00:00
https://arxiv.org/abs/2305.04180v3
[ "https://github.com/xinjinghao/color" ]
In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score did the ASL DDQN model get on the Atari 2600 Atlantis dataset
947275
ChartQA
UniChart
UniChart: A Universal Vision-language Pretrained Model for Chart Comprehension and Reasoning
2023-05-24T00:00:00
https://arxiv.org/abs/2305.14761v3
[ "https://github.com/vis-nlp/unichart" ]
In the paper 'UniChart: A Universal Vision-language Pretrained Model for Chart Comprehension and Reasoning', what 1:1 Accuracy score did the UniChart model get on the ChartQA dataset
66.24
Office-Home
GMDG (ResNet-50)
Rethinking Multi-domain Generalization with A General Learning Objective
2024-02-29T00:00:00
https://arxiv.org/abs/2402.18853v1
[ "https://github.com/zhaorui-tan/GMDG_cvpr2024" ]
In the paper 'Rethinking Multi-domain Generalization with A General Learning Objective', what Average Accuracy score did the GMDG (ResNet-50) model get on the Office-Home dataset
70.7
MVTec LOCO AD
ComAD+RD4AD
Component-aware anomaly detection framework for adjustable and logical industrial visual inspection
2023-05-15T00:00:00
https://arxiv.org/abs/2305.08509v1
[ "https://github.com/liutongkun/comad" ]
In the paper 'Component-aware anomaly detection framework for adjustable and logical industrial visual inspection', what Avg. Detection AUROC score did the ComAD+RD4AD model get on the MVTec LOCO AD dataset
88.2
CIFAR-10-LT (ρ=100)
GCL
Long-tailed Visual Recognition via Gaussian Clouded Logit Adjustment
2023-05-19T00:00:00
https://arxiv.org/abs/2305.11733v1
[ "https://github.com/keke921/gclloss" ]
In the paper 'Long-tailed Visual Recognition via Gaussian Clouded Logit Adjustment', what Error Rate score did the GCL model get on the CIFAR-10-LT (ρ=100) dataset
17.32
PACS
Crafting-Shifts(ResNet18)
Crafting Distribution Shifts for Validation and Training in Single Source Domain Generalization
2024-09-29T00:00:00
https://arxiv.org/abs/2409.19774v1
[ "https://github.com/nikosefth/crafting-shifts" ]
In the paper 'Crafting Distribution Shifts for Validation and Training in Single Source Domain Generalization', what Accuracy score did the Crafting-Shifts(ResNet18) model get on the PACS dataset
70.37
Occluded-DukeMTMC
BoT+UFFM+AMC
Enhancing Person Re-Identification via Uncertainty Feature Fusion and Auto-weighted Measure Combination
2024-05-02T00:00:00
https://arxiv.org/abs/2405.01101v4
[ "https://github.com/chequanghuy/Enhancing-Person-Re-Identification-via-UFFM-and-AMC" ]
In the paper 'Enhancing Person Re-Identification via Uncertainty Feature Fusion and Auto-weighted Measure Combination', what mAP score did the BoT+UFFM+AMC model get on the Occluded-DukeMTMC dataset
61.0
ScanNetV2
Metric3Dv2 (g2, In-domain)
Metric3Dv2: A Versatile Monocular Geometric Foundation Model for Zero-shot Metric Depth and Surface Normal Estimation
2024-03-22T00:00:00
https://arxiv.org/abs/2404.15506v3
[ "https://github.com/yvanyin/metric3d" ]
In the paper 'Metric3Dv2: A Versatile Monocular Geometric Foundation Model for Zero-shot Metric Depth and Surface Normal Estimation', what % < 11.25 score did the Metric3Dv2 (g2, In-domain) model get on the ScanNetV2 dataset
77.8
Weather (192)
DiPE-Linear
Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting
2024-11-26T00:00:00
https://arxiv.org/abs/2411.17257v1
[ "https://github.com/wintertee/dipe-linear" ]
In the paper 'Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting', what MSE score did the DiPE-Linear model get on the Weather (192) dataset
0.187
PASTIS
Exchanger+Unet+PaPs
Revisiting the Encoding of Satellite Image Time Series
2023-05-03T00:00:00
https://arxiv.org/abs/2305.02086v2
[ "https://github.com/TotalVariation/Exchanger4SITS" ]
In the paper 'Revisiting the Encoding of Satellite Image Time Series', what SQ score did the Exchanger+Unet+PaPs model get on the PASTIS dataset
80.3
KonIQ-10k
UNIQA
You Only Train Once: A Unified Framework for Both Full-Reference and No-Reference Image Quality Assessment
2023-10-14T00:00:00
https://arxiv.org/abs/2310.09560v2
[ "https://github.com/barcodereader/yoto" ]
In the paper 'You Only Train Once: A Unified Framework for Both Full-Reference and No-Reference Image Quality Assessment', what SRCC score did the UNIQA model get on the KonIQ-10k dataset
0.926
ColonINST-v1 (Unseen)
Bunny-v1.0-3B (w/ LoRA, w/ extra data)
Efficient Multimodal Learning from Data-centric Perspective
2024-02-18T00:00:00
https://arxiv.org/abs/2402.11530v3
[ "https://github.com/baai-dcai/bunny" ]
In the paper 'Efficient Multimodal Learning from Data-centric Perspective', what Accuracy score did the Bunny-v1.0-3B (w/ LoRA, w/ extra data) model get on the ColonINST-v1 (Unseen) dataset
75.08
PeMS04
Cy2Mixer
Enhancing Topological Dependencies in Spatio-Temporal Graphs with Cycle Message Passing Blocks
2024-01-29T00:00:00
https://arxiv.org/abs/2401.15894v2
[ "https://github.com/leemingo/cy2mixer" ]
In the paper 'Enhancing Topological Dependencies in Spatio-Temporal Graphs with Cycle Message Passing Blocks', what 12 Steps MAE score did the Cy2Mixer model get on the PeMS04 dataset
18.14
COCO 2017
DAT-S++
DAT++: Spatially Dynamic Vision Transformer with Deformable Attention
2023-09-04T00:00:00
https://arxiv.org/abs/2309.01430v1
[ "https://github.com/leaplabthu/dat" ]
In the paper 'DAT++: Spatially Dynamic Vision Transformer with Deformable Attention', what AP score did the DAT-S++ model get on the COCO 2017 dataset
50.2
Stackoverflow
HP-CDE
Hawkes Process Based on Controlled Differential Equations
2023-05-09T00:00:00
https://arxiv.org/abs/2305.07031v2
[ "https://github.com/kookseungji/Hawkes-Process-Based-on-Controlled-Differential-Equations" ]
In the paper 'Hawkes Process Based on Controlled Differential Equations', what Accuracy score did the HP-CDE model get on the Stackoverflow dataset
0.452±0.001
BorealTC
Mamba
Proprioception Is All You Need: Terrain Classification for Boreal Forests
2024-03-25T00:00:00
https://arxiv.org/abs/2403.16877v2
[ "https://github.com/norlab-ulaval/BorealTC" ]
In the paper 'Proprioception Is All You Need: Terrain Classification for Boreal Forests', what Accuracy (5-fold) score did the Mamba model get on the BorealTC dataset
93.68
HRSOD
BiRefNet (HRSOD, UHRSD)
Bilateral Reference for High-Resolution Dichotomous Image Segmentation
2024-01-07T00:00:00
https://arxiv.org/abs/2401.03407v6
[ "https://github.com/zhengpeng7/birefnet" ]
In the paper 'Bilateral Reference for High-Resolution Dichotomous Image Segmentation', what S-Measure score did the BiRefNet (HRSOD, UHRSD) model get on the HRSOD dataset
0.956
WikiTableQuestions
SynTQA (Oracle)
SynTQA: Synergistic Table-based Question Answering via Mixture of Text-to-SQL and E2E TQA
2024-09-25T00:00:00
https://arxiv.org/abs/2409.16682v2
[ "https://github.com/siyue-zhang/SynTableQA" ]
In the paper 'SynTQA: Synergistic Table-based Question Answering via Mixture of Text-to-SQL and E2E TQA', what Test Accuracy score did the SynTQA (Oracle) model get on the WikiTableQuestions dataset
77.5
AIDA-CoNLL
SpEL-base (2023)
SpEL: Structured Prediction for Entity Linking
2023-10-23T00:00:00
https://arxiv.org/abs/2310.14684v1
[ "https://github.com/shavarani/spel" ]
In the paper 'SpEL: Structured Prediction for Entity Linking', what Micro-F1 strong score did the SpEL-base (2023) model get on the AIDA-CoNLL dataset
88.1
ANLI test
PaLM 2-M (one-shot)
PaLM 2 Technical Report
2023-05-17T00:00:00
https://arxiv.org/abs/2305.10403v3
[ "https://github.com/eternityyw/tram-benchmark" ]
In the paper 'PaLM 2 Technical Report', what A1 score did the PaLM 2-M (one-shot) model get on the ANLI test dataset
58.1
URMP
YourMT3+ (YPTF.MoE+M)
YourMT3+: Multi-instrument Music Transcription with Enhanced Transformer Architectures and Cross-dataset Stem Augmentation
2024-07-05T00:00:00
https://arxiv.org/abs/2407.04822v3
[ "https://github.com/mimbres/yourmt3" ]
In the paper 'YourMT3+: Multi-instrument Music Transcription with Enhanced Transformer Architectures and Cross-dataset Stem Augmentation', what Onset F1 score did the YourMT3+ (YPTF.MoE+M) model get on the URMP dataset
81.79
Hawkins
CLIP
AnyLoc: Towards Universal Visual Place Recognition
2023-08-01T00:00:00
https://arxiv.org/abs/2308.00688v2
[ "https://github.com/AnyLoc/AnyLoc" ]
In the paper 'AnyLoc: Towards Universal Visual Place Recognition', what Recall@1 score did the CLIP model get on the Hawkins dataset
33.05
Persona-Chat
P5
P5: Plug-and-Play Persona Prompting for Personalized Response Selection
2023-10-10T00:00:00
https://arxiv.org/abs/2310.06390v1
[ "https://github.com/rungjoo/plug-and-play-prompt-persona" ]
In the paper 'P5: Plug-and-Play Persona Prompting for Personalized Response Selection', what R20@1 score did the P5 model get on the Persona-Chat dataset
0.875
ACMPS
MobileNetV2
Revising deep learning methods in parking lot occupancy detection
2023-06-07T00:00:00
https://arxiv.org/abs/2306.04288v3
[ "https://github.com/eighonet/parking-research" ]
In the paper 'Revising deep learning methods in parking lot occupancy detection', what F1-score score did the MobileNetV2 model get on the ACMPS dataset
0.9971
ScanNetV2
V-DETR
V-DETR: DETR with Vertex Relative Position Encoding for 3D Object Detection
2023-08-08T00:00:00
https://arxiv.org/abs/2308.04409v1
[ "https://github.com/yichaoshen-ms/v-detr" ]
In the paper 'V-DETR: DETR with Vertex Relative Position Encoding for 3D Object Detection', what mAP@0.25 score did the V-DETR model get on the ScanNetV2 dataset
77.8
Ego4D
EgoVideo
EgoVideo: Exploring Egocentric Foundation Model and Downstream Adaptation
2024-06-26T00:00:00
https://arxiv.org/abs/2406.18070v4
[ "https://github.com/opengvlab/egovideo" ]
In the paper 'EgoVideo: Exploring Egocentric Foundation Model and Downstream Adaptation', what Overall (Top5 mAP) score did the EgoVideo model get on the Ego4D dataset
7.21
Turbulence
GPT-3.5-Turbo
Turbulence: Systematically and Automatically Testing Instruction-Tuned Large Language Models for Code
2023-12-22T00:00:00
https://arxiv.org/abs/2312.14856v2
[ "https://github.com/shahinhonarvar/turbulence-benchmark" ]
In the paper 'Turbulence: Systematically and Automatically Testing Instruction-Tuned Large Language Models for Code', what CorrSc score did the GPT-3.5-Turbo model get on the Turbulence dataset
0.617
Weather (192)
SCNN
Disentangling Structured Components: Towards Adaptive, Interpretable and Scalable Time Series Forecasting
2023-05-22T00:00:00
https://arxiv.org/abs/2305.13036v3
[ "https://github.com/JLDeng/SCNN" ]
In the paper 'Disentangling Structured Components: Towards Adaptive, Interpretable and Scalable Time Series Forecasting', what MSE score did the SCNN model get on the Weather (192) dataset
0.188