dataset: string (length 0–82)
model_name: string (length 0–150)
paper_title: string (length 19–175)
paper_date: timestamp[ns]
paper_url: string (length 32–35)
code_links: list (length 1)
prompts: string (length 105–331)
answer: string (length 1–67)
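The columns above can be sketched as a single Python record with a small type check. This is a minimal illustration, not part of the dataset: the values are copied from the first row below, the `validate` helper is hypothetical, and `timestamp[ns]` is approximated here with a plain `datetime`.

```python
from datetime import datetime

# One record under the schema above (values from the first row in this file).
record = {
    "dataset": "MOSE",
    "model_name": "Cutie",
    "paper_title": "Putting the Object Back into Video Object Segmentation",
    "paper_date": datetime(2023, 10, 19),  # stored as timestamp[ns] in the dataset
    "paper_url": "https://arxiv.org/abs/2310.12982v2",
    "code_links": ["https://github.com/hkchengrex/Cutie"],
    "prompts": ("In the paper 'Putting the Object Back into Video Object "
                "Segmentation', what J&F score did the Cutie model get on "
                "the MOSE dataset"),
    "answer": "68.3",
}

def validate(rec):
    """Hypothetical helper: check a record against the column types listed above."""
    assert isinstance(rec["dataset"], str)
    assert isinstance(rec["model_name"], str)
    assert isinstance(rec["paper_title"], str)
    assert isinstance(rec["paper_date"], datetime)
    assert rec["paper_url"].startswith("https://arxiv.org/abs/")
    # code_links always has length 1 per the schema stats
    assert isinstance(rec["code_links"], list) and len(rec["code_links"]) == 1
    assert isinstance(rec["prompts"], str) and isinstance(rec["answer"], str)
    return True

validate(record)
```

Note that `answer` is a string even when the value is numeric (e.g. "68.3" or "0.6354 ± 0.0121"), since scores in this dataset carry units, percent signs, or error bars.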
MOSE
Cutie
Putting the Object Back into Video Object Segmentation
2023-10-19T00:00:00
https://arxiv.org/abs/2310.12982v2
[ "https://github.com/hkchengrex/Cutie" ]
In the paper 'Putting the Object Back into Video Object Segmentation', what J&F score did the Cutie model get on the MOSE dataset
68.3
FLoRes-200
ALMA-13B
A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models
2023-09-20T00:00:00
https://arxiv.org/abs/2309.11674v2
[ "https://github.com/fe1ixxu/alma" ]
In the paper 'A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models', what BLEU score did the ALMA-13B model get on the FLoRes-200 dataset
18.0
PeMSD4
PM-DMNet(P)
Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction
2024-08-12T00:00:00
https://arxiv.org/abs/2408.07100v1
[ "https://github.com/wengwenchao123/PM-DMNet" ]
In the paper 'Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction', what 12 steps MAE score did the PM-DMNet(P) model get on the PeMSD4 dataset
18.34
IC19-Art
CLIP4STR-L
CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model
2023-05-23T00:00:00
https://arxiv.org/abs/2305.14014v3
[ "https://github.com/VamosC/CLIP4STR" ]
In the paper 'CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model', what Accuracy (%) score did the CLIP4STR-L model get on the IC19-Art dataset
85.9
DreamBooth
Emu2 SDXL v1.0
Generative Multimodal Models are In-Context Learners
2023-12-20T00:00:00
https://arxiv.org/abs/2312.13286v2
[ "https://github.com/baaivision/emu" ]
In the paper 'Generative Multimodal Models are In-Context Learners', what Concept Preservation (CP) score did the Emu2 SDXL v1.0 model get on the DreamBooth dataset
0.528
BIRD (BIg Bench for LaRge-scale Database Grounded Text-to-SQL Evaluation)
DAIL-SQL + GPT-4
Text-to-SQL Empowered by Large Language Models: A Benchmark Evaluation
2023-08-29T00:00:00
https://arxiv.org/abs/2308.15363v4
[ "https://github.com/beachwang/dail-sql" ]
In the paper 'Text-to-SQL Empowered by Large Language Models: A Benchmark Evaluation', what Execution Accuracy % (Test) score did the DAIL-SQL + GPT-4 model get on the BIRD (BIg Bench for LaRge-scale Database Grounded Text-to-SQL Evaluation) dataset
57.41
DomainNet
SWG
Combining inherent knowledge of vision-language models with unsupervised domain adaptation through strong-weak guidance
2023-12-07T00:00:00
https://arxiv.org/abs/2312.04066v4
[ "https://github.com/ThomasWestfechtel/SWG" ]
In the paper 'Combining inherent knowledge of vision-language models with unsupervised domain adaptation through strong-weak guidance', what Accuracy score did the SWG model get on the DomainNet dataset
66.1
AFAD
ResNet-50-Cross-Entropy
A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark
2023-07-10T00:00:00
https://arxiv.org/abs/2307.04570v3
[ "https://github.com/paplhjak/facial-age-estimation-benchmark" ]
In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-Cross-Entropy model get on the AFAD dataset
3.14
spider
MSc-SQL
MSc-SQL: Multi-Sample Critiquing Small Language Models For Text-To-SQL Translation
2024-10-16T00:00:00
https://arxiv.org/abs/2410.12916v1
[ "https://github.com/layer6ai-labs/msc-sql" ]
In the paper 'MSc-SQL: Multi-Sample Critiquing Small Language Models For Text-To-SQL Translation', what Execution Accuracy (Test) score did the MSc-SQL model get on the spider dataset
84.7
HKU-IS
M3Net-R
M$^3$Net: Multilevel, Mixed and Multistage Attention Network for Salient Object Detection
2023-09-15T00:00:00
https://arxiv.org/abs/2309.08365v1
[ "https://github.com/I2-Multimedia-Lab/M3Net" ]
In the paper 'M$^3$Net: Multilevel, Mixed and Multistage Attention Network for Salient Object Detection', what MAE score did the M3Net-R model get on the HKU-IS dataset
0.026
MM-Vet
VisionZip (Retain 128 Tokens, fine-tuning)
VisionZip: Longer is Better but Not Necessary in Vision Language Models
2024-12-05T00:00:00
https://arxiv.org/abs/2412.04467v1
[ "https://github.com/dvlab-research/visionzip" ]
In the paper 'VisionZip: Longer is Better but Not Necessary in Vision Language Models', what GPT-4 score did the VisionZip (Retain 128 Tokens, fine-tuning) model get on the MM-Vet dataset
32.9
PASCAL VOC 2012 val
CIM + Mask R-CNN
Complete Instances Mining for Weakly Supervised Instance Segmentation
2024-02-12T00:00:00
https://arxiv.org/abs/2402.07633v1
[ "https://github.com/ZechengLi19/CIM" ]
In the paper 'Complete Instances Mining for Weakly Supervised Instance Segmentation', what mAP@0.25 score did the CIM + Mask R-CNN model get on the PASCAL VOC 2012 val dataset
68.7
NYU Depth v2
PolyMaX(ConvNeXt-L)
PolyMaX: General Dense Prediction with Mask Transformer
2023-11-09T00:00:00
https://arxiv.org/abs/2311.05770v1
[ "https://github.com/google-research/deeplab2" ]
In the paper 'PolyMaX: General Dense Prediction with Mask Transformer', what Mean IoU score did the PolyMaX(ConvNeXt-L) model get on the NYU Depth v2 dataset
58.08%
GoogleGZ-CD
CGNet
Change Guiding Network: Incorporating Change Prior to Guide Change Detection in Remote Sensing Imagery
2024-04-14T00:00:00
https://arxiv.org/abs/2404.09179v1
[ "https://github.com/chengxihan/cgnet-cd" ]
In the paper 'Change Guiding Network: Incorporating Change Prior to Guide Change Detection in Remote Sensing Imagery', what F1 score did the CGNet model get on the GoogleGZ-CD dataset
85.89
Mapillary val
BoQ (ResNet-50)
BoQ: A Place is Worth a Bag of Learnable Queries
2024-05-12T00:00:00
https://arxiv.org/abs/2405.07364v3
[ "https://github.com/amaralibey/bag-of-queries" ]
In the paper 'BoQ: A Place is Worth a Bag of Learnable Queries', what Recall@1 score did the BoQ (ResNet-50) model get on the Mapillary val dataset
91.2
WHU-CD
DDLNet
DDLNet: Boosting Remote Sensing Change Detection with Dual-Domain Learning
2024-06-19T00:00:00
https://arxiv.org/abs/2406.13606v1
[ "https://github.com/xwmaxwma/rschange" ]
In the paper 'DDLNet: Boosting Remote Sensing Change Detection with Dual-Domain Learning', what F1 score did the DDLNet model get on the WHU-CD dataset
90.56
MSU SR-QA Dataset
TOPIQ
TOPIQ: A Top-down Approach from Semantics to Distortions for Image Quality Assessment
2023-08-06T00:00:00
https://arxiv.org/abs/2308.03060v1
[ "https://github.com/chaofengc/iqa-pytorch" ]
In the paper 'TOPIQ: A Top-down Approach from Semantics to Distortions for Image Quality Assessment', what SROCC score did the TOPIQ model get on the MSU SR-QA dataset
0.57341
PASCAL VOC 2007
GKGNet
GKGNet: Group K-Nearest Neighbor based Graph Convolutional Network for Multi-Label Image Recognition
2023-08-28T00:00:00
https://arxiv.org/abs/2308.14378v3
[ "https://github.com/jin-s13/gkgnet" ]
In the paper 'GKGNet: Group K-Nearest Neighbor based Graph Convolutional Network for Multi-Label Image Recognition', what mAP score did the GKGNet model get on the PASCAL VOC 2007 dataset
96.8
URMP
MT3
YourMT3+: Multi-instrument Music Transcription with Enhanced Transformer Architectures and Cross-dataset Stem Augmentation
2024-07-05T00:00:00
https://arxiv.org/abs/2407.04822v3
[ "https://github.com/mimbres/yourmt3" ]
In the paper 'YourMT3+: Multi-instrument Music Transcription with Enhanced Transformer Architectures and Cross-dataset Stem Augmentation', what Onset F1 score did the MT3 model get on the URMP dataset
77
PASCAL-5i (5-Shot)
MIANet (VGG-16)
MIANet: Aggregating Unbiased Instance and General Information for Few-Shot Semantic Segmentation
2023-05-23T00:00:00
https://arxiv.org/abs/2305.13864v1
[ "https://github.com/aldrich2y/mianet" ]
In the paper 'MIANet: Aggregating Unbiased Instance and General Information for Few-Shot Semantic Segmentation', what Mean IoU score did the MIANet (VGG-16) model get on the PASCAL-5i (5-Shot) dataset
71.99
LibriSpeech test-clean
Zipformer+pruned transducer w/ CR-CTC (no external language model)
CR-CTC: Consistency regularization on CTC for improved speech recognition
2024-10-07T00:00:00
https://arxiv.org/abs/2410.05101v3
[ "https://github.com/k2-fsa/icefall" ]
In the paper 'CR-CTC: Consistency regularization on CTC for improved speech recognition', what Word Error Rate (WER) score did the Zipformer+pruned transducer w/ CR-CTC (no external language model) model get on the LibriSpeech test-clean dataset
1.88
SUN-RGBD
DFormer-B
DFormer: Rethinking RGBD Representation Learning for Semantic Segmentation
2023-09-18T00:00:00
https://arxiv.org/abs/2309.09668v2
[ "https://github.com/VCIP-RGBD/DFormer" ]
In the paper 'DFormer: Rethinking RGBD Representation Learning for Semantic Segmentation', what Mean IoU score did the DFormer-B model get on the SUN-RGBD dataset
51.2%
Cityscapes test
EAGLE (DINO, ViT-S/8)
EAGLE: Eigen Aggregation Learning for Object-Centric Unsupervised Semantic Segmentation
2024-03-03T00:00:00
https://arxiv.org/abs/2403.01482v4
[ "https://github.com/MICV-yonsei/EAGLE" ]
In the paper 'EAGLE: Eigen Aggregation Learning for Object-Centric Unsupervised Semantic Segmentation', what mIoU score did the EAGLE (DINO, ViT-S/8) model get on the Cityscapes test dataset
19.7
CROHME 2016
TAMER
TAMER: Tree-Aware Transformer for Handwritten Mathematical Expression Recognition
2024-08-16T00:00:00
https://arxiv.org/abs/2408.08578v2
[ "https://github.com/qingzhenduyu/tamer" ]
In the paper 'TAMER: Tree-Aware Transformer for Handwritten Mathematical Expression Recognition', what ExpRate score did the TAMER model get on the CROHME 2016 dataset
60.26
CUB-200-2011
LDM Correspondences
Unsupervised Semantic Correspondence Using Stable Diffusion
2023-05-24T00:00:00
https://arxiv.org/abs/2305.15581v2
[ "https://github.com/ubc-vision/LDM_correspondences" ]
In the paper 'Unsupervised Semantic Correspondence Using Stable Diffusion', what Mean PCK@0.05 score did the LDM Correspondences model get on the CUB-200-2011 dataset
61.6
CelebA 64x64
DDIM+CS
Compensation Sampling for Improved Convergence in Diffusion Models
2023-12-11T00:00:00
https://arxiv.org/abs/2312.06285v1
[ "https://github.com/hotfinda/Compensation-sampling" ]
In the paper 'Compensation Sampling for Improved Convergence in Diffusion Models', what FID score did the DDIM+CS model get on the CelebA 64x64 dataset
2.11
COCO-Stuff-27
PriMaPs+STEGO (DINO ViT-B/8)
Boosting Unsupervised Semantic Segmentation with Principal Mask Proposals
2024-04-25T00:00:00
https://arxiv.org/abs/2404.16818v2
[ "https://github.com/visinf/primaps" ]
In the paper 'Boosting Unsupervised Semantic Segmentation with Principal Mask Proposals', what Accuracy score did the PriMaPs+STEGO (DINO ViT-B/8) model get on the COCO-Stuff-27 dataset
57.9
Natural Questions
PaLM 2-L (one-shot)
PaLM 2 Technical Report
2023-05-17T00:00:00
https://arxiv.org/abs/2305.10403v3
[ "https://github.com/eternityyw/tram-benchmark" ]
In the paper 'PaLM 2 Technical Report', what EM score did the PaLM 2-L (one-shot) model get on the Natural Questions dataset
37.5
VOT2020
ODTrack-L
ODTrack: Online Dense Temporal Token Learning for Visual Tracking
2024-01-03T00:00:00
https://arxiv.org/abs/2401.01686v1
[ "https://github.com/gxnu-zhonglab/odtrack" ]
In the paper 'ODTrack: Online Dense Temporal Token Learning for Visual Tracking', what EAO score did the ODTrack-L model get on the VOT2020 dataset
0.605
CIFAR-100
ABNet-2G-R0
ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities
2024-11-28T00:00:00
https://arxiv.org/abs/2411.19213v1
[ "https://github.com/dvssajay/New_World" ]
In the paper 'ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities', what Percentage correct score did the ABNet-2G-R0 model get on the CIFAR-100 dataset
73.930
EQ-Bench
OpenAI gpt-3.5-turbo-0301
EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models
2023-12-11T00:00:00
https://arxiv.org/abs/2312.06281v2
[ "https://github.com/eq-bench/eq-bench" ]
In the paper 'EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models', what EQ-Bench Score score did the OpenAI gpt-3.5-turbo-0301 model get on the EQ-Bench dataset
47.61
YouTube-VOS 2019
DEVA
Tracking Anything in High Quality
2023-07-26T00:00:00
https://arxiv.org/abs/2307.13974v1
[ "https://github.com/jiawen-zhu/hqtrack" ]
In the paper 'Tracking Anything in High Quality', what Overall score did the DEVA model get on the YouTube-VOS 2019 dataset
86.2
GQA test-dev
Video-LaVIT
Video-LaVIT: Unified Video-Language Pre-training with Decoupled Visual-Motional Tokenization
2024-02-05T00:00:00
https://arxiv.org/abs/2402.03161v3
[ "https://github.com/jy0205/lavit" ]
In the paper 'Video-LaVIT: Unified Video-Language Pre-training with Decoupled Visual-Motional Tokenization', what Accuracy score did the Video-LaVIT model get on the GQA test-dev dataset
64.4
ogbl-ppa
GCN (node embedding)
Can GNNs Learn Link Heuristics? A Concise Review and Evaluation of Link Prediction Methods
2024-11-22T00:00:00
https://arxiv.org/abs/2411.14711v1
[ "https://github.com/astroming/GNNHE" ]
In the paper 'Can GNNs Learn Link Heuristics? A Concise Review and Evaluation of Link Prediction Methods', what Test Hits@100 score did the GCN (node embedding) model get on the ogbl-ppa dataset
0.6354 ± 0.0121
OVAD benchmark
BLIP
Open-ended VQA benchmarking of Vision-Language models by exploiting Classification datasets and their semantic hierarchy
2024-02-11T00:00:00
https://arxiv.org/abs/2402.07270v2
[ "https://github.com/lmb-freiburg/ovqa" ]
In the paper 'Open-ended VQA benchmarking of Vision-Language models by exploiting Classification datasets and their semantic hierarchy', what Contains w. Synonyms score did the BLIP model get on the OVAD benchmark dataset
45.70
Materials Project
PotNet
Efficient Approximations of Complete Interatomic Potentials for Crystal Property Prediction
2023-06-12T00:00:00
https://arxiv.org/abs/2306.10045v9
[ "https://github.com/divelab/AIRS/tree/main/OpenMat/PotNet" ]
In the paper 'Efficient Approximations of Complete Interatomic Potentials for Crystal Property Prediction', what MAE score did the PotNet model get on the Materials Project dataset
18.8
UZLF
VascX
VascX Models: Model Ensembles for Retinal Vascular Analysis from Color Fundus Images
2024-09-24T00:00:00
https://arxiv.org/abs/2409.16016v2
[ "https://github.com/eyened/rtnls_vascx_models" ]
In the paper 'VascX Models: Model Ensembles for Retinal Vascular Analysis from Color Fundus Images', what Average Dice (0.5*Dice_a + 0.5*Dice_v) score did the VascX model get on the UZLF dataset
80.6
Avazu
CETN
CETN: Contrast-enhanced Through Network for CTR Prediction
2023-12-15T00:00:00
https://arxiv.org/abs/2312.09715v2
[ "https://github.com/salmon1802/cetn" ]
In the paper 'CETN: Contrast-enhanced Through Network for CTR Prediction', what AUC score did the CETN model get on the Avazu dataset
0.7962
WinoGrande
PaLM 2-M (1-shot)
PaLM 2 Technical Report
2023-05-17T00:00:00
https://arxiv.org/abs/2305.10403v3
[ "https://github.com/eternityyw/tram-benchmark" ]
In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-M (1-shot) model get on the WinoGrande dataset
79.2
MSVD-CTN
GIT
GiT: Towards Generalist Vision Transformer through Universal Language Interface
2024-03-14T00:00:00
https://arxiv.org/abs/2403.09394v1
[ "https://github.com/haiyang-w/git" ]
In the paper 'GiT: Towards Generalist Vision Transformer through Universal Language Interface', what CIDEr score did the GIT model get on the MSVD-CTN dataset
45.63
WHU-CD
CDMaskFormer
Rethinking Remote Sensing Change Detection With A Mask View
2024-06-21T00:00:00
https://arxiv.org/abs/2406.15320v1
[ "https://github.com/xwmaxwma/rschange" ]
In the paper 'Rethinking Remote Sensing Change Detection With A Mask View', what F1 score did the CDMaskFormer model get on the WHU-CD dataset
91.56
Amazon-Google
Meta-Llama-3.1-8B-Instruct
Fine-tuning Large Language Models for Entity Matching
2024-09-12T00:00:00
https://arxiv.org/abs/2409.08185v1
[ "https://github.com/wbsg-uni-mannheim/tailormatch" ]
In the paper 'Fine-tuning Large Language Models for Entity Matching', what F1 (%) score did the Meta-Llama-3.1-8B-Instruct model get on the Amazon-Google dataset
49.16
CHILI-3K
Mean
CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning
2024-02-20T00:00:00
https://arxiv.org/abs/2402.13221v2
[ "https://github.com/UlrikFriisJensen/CHILI" ]
In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what MSE score did the Mean model get on the CHILI-3K dataset
0.265
WebApp1K-React
deepseek-v2.5
A Case Study of Web App Coding with OpenAI Reasoning Models
2024-09-19T00:00:00
https://arxiv.org/abs/2409.13773v1
[ "https://github.com/onekq/webapp1k" ]
In the paper 'A Case Study of Web App Coding with OpenAI Reasoning Models', what pass@1 score did the deepseek-v2.5 model get on the WebApp1K-React dataset
0.834
VITON-HD
IDM-VTON
Improving Diffusion Models for Authentic Virtual Try-on in the Wild
2024-03-08T00:00:00
https://arxiv.org/abs/2403.05139v3
[ "https://github.com/yisol/IDM-VTON" ]
In the paper 'Improving Diffusion Models for Authentic Virtual Try-on in the Wild', what FID score did the IDM-VTON model get on the VITON-HD dataset
6.290
FreeSolv
ChemBFN
A Bayesian Flow Network Framework for Chemistry Tasks
2024-07-28T00:00:00
https://arxiv.org/abs/2407.20294v1
[ "https://github.com/Augus1999/bayesian-flow-network-for-chemistry" ]
In the paper 'A Bayesian Flow Network Framework for Chemistry Tasks', what RMSE score did the ChemBFN model get on the FreeSolv dataset
1.418
CHILI-100K
EdgeCNN
CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning
2024-02-20T00:00:00
https://arxiv.org/abs/2402.13221v2
[ "https://github.com/UlrikFriisJensen/CHILI" ]
In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what MSE score did the EdgeCNN model get on the CHILI-100K dataset
0.030 ± 0.001
SMAC 27m_vs_30m
DPLEX
A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning
2023-06-04T00:00:00
https://arxiv.org/abs/2306.02430v1
[ "https://github.com/j3soon/dfac-extended" ]
In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the DPLEX model get on the SMAC 27m_vs_30m dataset
90.62
IEMOCAP
CORECT (4-class)
Conversation Understanding using Relational Temporal Graph Neural Networks with Auxiliary Cross-Modality Interaction
2023-11-08T00:00:00
https://arxiv.org/abs/2311.04507v3
[ "https://github.com/leson502/CORECT_EMNLP2023" ]
In the paper 'Conversation Understanding using Relational Temporal Graph Neural Networks with Auxiliary Cross-Modality Interaction', what F1 score did the CORECT (4-class) model get on the IEMOCAP dataset
0.846
MBPP
DeepSeek-Coder-Instruct 6.7B (few-shot)
DeepSeek-Coder: When the Large Language Model Meets Programming -- The Rise of Code Intelligence
2024-01-25T00:00:00
https://arxiv.org/abs/2401.14196v2
[ "https://github.com/deepseek-ai/DeepSeek-Coder" ]
In the paper 'DeepSeek-Coder: When the Large Language Model Meets Programming -- The Rise of Code Intelligence', what Accuracy score did the DeepSeek-Coder-Instruct 6.7B (few-shot) model get on the MBPP dataset
65.4
DUTS-TE
BiRefNet (DUTS)
Bilateral Reference for High-Resolution Dichotomous Image Segmentation
2024-01-07T00:00:00
https://arxiv.org/abs/2401.03407v6
[ "https://github.com/zhengpeng7/birefnet" ]
In the paper 'Bilateral Reference for High-Resolution Dichotomous Image Segmentation', what MAE score did the BiRefNet (DUTS) model get on the DUTS-TE dataset
0.019
ARMBench
RISE (VIT-B)
Robot Instance Segmentation with Few Annotations for Grasping
2024-07-01T00:00:00
https://arxiv.org/abs/2407.01302v1
[ "https://github.com/mkimhi/RISE" ]
In the paper 'Robot Instance Segmentation with Few Annotations for Grasping', what AP50 score did the RISE (VIT-B) model get on the ARMBench dataset
86.37
ModelNet40
Point-RAE
Regress Before Construct: Regress Autoencoder for Point Cloud Self-supervised Learning
2023-09-25T00:00:00
https://arxiv.org/abs/2310.03670v1
[ "https://github.com/liuyyy111/point-rae" ]
In the paper 'Regress Before Construct: Regress Autoencoder for Point Cloud Self-supervised Learning', what Overall Accuracy score did the Point-RAE model get on the ModelNet40 dataset
94.1
THUMOS' 14
MAT (Ours) Trans
Memory-and-Anticipation Transformer for Online Action Understanding
2023-08-15T00:00:00
https://arxiv.org/abs/2308.07893v1
[ "https://github.com/echo0125/memory-and-anticipation-transformer" ]
In the paper 'Memory-and-Anticipation Transformer for Online Action Understanding', what mAP score did the MAT (Ours) Trans model get on the THUMOS' 14 dataset
71.6
DUTS-TE
BiRefNet (HRSOD, UHRSD)
Bilateral Reference for High-Resolution Dichotomous Image Segmentation
2024-01-07T00:00:00
https://arxiv.org/abs/2401.03407v6
[ "https://github.com/zhengpeng7/birefnet" ]
In the paper 'Bilateral Reference for High-Resolution Dichotomous Image Segmentation', what MAE score did the BiRefNet (HRSOD, UHRSD) model get on the DUTS-TE dataset
0.020
StrategyQA
PaLM 2 (few-shot, CoT, SC)
PaLM 2 Technical Report
2023-05-17T00:00:00
https://arxiv.org/abs/2305.10403v3
[ "https://github.com/eternityyw/tram-benchmark" ]
In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2 (few-shot, CoT, SC) model get on the StrategyQA dataset
90.4
S3DIS
SuperCluster
Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering
2024-01-12T00:00:00
https://arxiv.org/abs/2401.06704v2
[ "https://github.com/drprojects/superpoint_transformer" ]
In the paper 'Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering', what Mean IoU score did the SuperCluster model get on the S3DIS dataset
75.3
COCO minival
GLEE-Plus
General Object Foundation Model for Images and Videos at Scale
2023-12-14T00:00:00
https://arxiv.org/abs/2312.09158v1
[ "https://github.com/FoundationVision/GLEE" ]
In the paper 'General Object Foundation Model for Images and Videos at Scale', what box AP score did the GLEE-Plus model get on the COCO minival dataset
60.4
PACS
GMDG (RegNetY-16GF, SWAD)
Rethinking Multi-domain Generalization with A General Learning Objective
2024-02-29T00:00:00
https://arxiv.org/abs/2402.18853v1
[ "https://github.com/zhaorui-tan/GMDG_cvpr2024" ]
In the paper 'Rethinking Multi-domain Generalization with A General Learning Objective', what Average Accuracy score did the GMDG (RegNetY-16GF, SWAD) model get on the PACS dataset
97.9
NExT-QA
PAXION
Paxion: Patching Action Knowledge in Video-Language Foundation Models
2023-05-18T00:00:00
https://arxiv.org/abs/2305.10683v4
[ "https://github.com/mikewangwzhl/paxion" ]
In the paper 'Paxion: Patching Action Knowledge in Video-Language Foundation Models', what Accuracy score did the PAXION model get on the NExT-QA dataset
56.9
TabFact
Chain-of-Table
Chain-of-Table: Evolving Tables in the Reasoning Chain for Table Understanding
2024-01-09T00:00:00
https://arxiv.org/abs/2401.04398v2
[ "https://github.com/google-research/chain-of-table" ]
In the paper 'Chain-of-Table: Evolving Tables in the Reasoning Chain for Table Understanding', what Test score did the Chain-of-Table model get on the TabFact dataset
86.61
MSMT17
CA-Jaccard
CA-Jaccard: Camera-aware Jaccard Distance for Person Re-identification
2023-11-17T00:00:00
https://arxiv.org/abs/2311.10605v2
[ "https://github.com/chen960/ca-jaccard" ]
In the paper 'CA-Jaccard: Camera-aware Jaccard Distance for Person Re-identification', what Rank-1 score did the CA-Jaccard model get on the MSMT17 dataset
86.2
Office-31
EUDA
EUDA: An Efficient Unsupervised Domain Adaptation via Self-Supervised Vision Transformer
2024-07-31T00:00:00
https://arxiv.org/abs/2407.21311v1
[ "https://github.com/a-abedi/euda" ]
In the paper 'EUDA: An Efficient Unsupervised Domain Adaptation via Self-Supervised Vision Transformer', what Accuracy score did the EUDA model get on the Office-31 dataset
92
AIOZ-GDANCE
GCD
Controllable Group Choreography using Contrastive Diffusion
2023-10-29T00:00:00
https://arxiv.org/abs/2310.18986v2
[ "https://github.com/aioz-ai/GCD" ]
In the paper 'Controllable Group Choreography using Contrastive Diffusion', what FID score did the GCD model get on the AIOZ-GDANCE dataset
31.16
SVT-P
ABINet-LV+TPS++
TPS++: Attention-Enhanced Thin-Plate Spline for Scene Text Recognition
2023-05-09T00:00:00
https://arxiv.org/abs/2305.05322v1
[ "https://github.com/simplify23/tps_pp" ]
In the paper 'TPS++: Attention-Enhanced Thin-Plate Spline for Scene Text Recognition', what Accuracy score did the ABINet-LV+TPS++ model get on the SVT-P dataset
89.6
ImageNet 256x256
TiTok-B-64
An Image is Worth 32 Tokens for Reconstruction and Generation
2024-06-11T00:00:00
https://arxiv.org/abs/2406.07550v1
[ "https://github.com/bytedance/1d-tokenizer" ]
In the paper 'An Image is Worth 32 Tokens for Reconstruction and Generation', what FID score did the TiTok-B-64 model get on the ImageNet 256x256 dataset
2.48
UMVM-dbp-zh-en
UMAEA (w/o surf & iter)
Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment
2023-07-30T00:00:00
https://arxiv.org/abs/2307.16210v2
[ "https://github.com/zjukg/umaea" ]
In the paper 'Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment', what Hits@1 score did the UMAEA (w/o surf & iter) model get on the UMVM-dbp-zh-en dataset
0.800
MNIST-full
VMM
The VampPrior Mixture Model
2024-02-06T00:00:00
https://arxiv.org/abs/2402.04412v2
[ "https://github.com/astirn/vampprior-mixture-model" ]
In the paper 'The VampPrior Mixture Model', what NMI score did the VMM model get on the MNIST-full dataset
0.920
nuScenes
GPT-Driver
GPT-Driver: Learning to Drive with GPT
2023-10-02T00:00:00
https://arxiv.org/abs/2310.01415v3
[ "https://github.com/pointscoder/gpt-driver" ]
In the paper 'GPT-Driver: Learning to Drive with GPT', what L2 score did the GPT-Driver model get on the nuScenes dataset
0.48
DSO-1
Late Fusion
MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization
2023-12-04T00:00:00
https://arxiv.org/abs/2312.01790v2
[ "https://github.com/idt-iti/mmfusion-iml" ]
In the paper 'MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization', what AUC score did the Late Fusion model get on the DSO-1 dataset
0.958
GRAZPEDWRI-DX
YOLOv8x
Enhancing Wrist Fracture Detection with YOLO
2024-07-17T00:00:00
https://arxiv.org/abs/2407.12597v2
[ "https://github.com/ammarlodhi255/pediatric_wrist_abnormality_detection-end-to-end-implementation" ]
In the paper 'Enhancing Wrist Fracture Detection with YOLO', what mAP score did the YOLOv8x model get on the GRAZPEDWRI-DX dataset
77.00
Weather (96)
MoLE-DLinear
Mixture-of-Linear-Experts for Long-term Time Series Forecasting
2023-12-11T00:00:00
https://arxiv.org/abs/2312.06786v3
[ "https://github.com/rogerni/mole" ]
In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the Weather (96) dataset
0.147
MassSpecGym
FFN Fingerprint
MassSpecGym: A benchmark for the discovery and identification of molecules
2024-10-30T00:00:00
https://arxiv.org/abs/2410.23326v1
[ "https://github.com/pluskal-lab/massspecgym" ]
In the paper 'MassSpecGym: A benchmark for the discovery and identification of molecules', what Cosine Similarity score did the FFN Fingerprint model get on the MassSpecGym dataset
0.25
MM-Vet
LLaVA-65B (Data Mixing)
An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models
2023-09-18T00:00:00
https://arxiv.org/abs/2309.09958v1
[ "https://github.com/haotian-liu/LLaVA" ]
In the paper 'An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models', what GPT-4 score did the LLaVA-65B (Data Mixing) model get on the MM-Vet dataset
36.4
MLO-Cn2
Linear Forecast
Effective Benchmarks for Optical Turbulence Modeling
2024-01-07T00:00:00
https://arxiv.org/abs/2401.03573v1
[ "https://github.com/cdjellen/otbench" ]
In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the Linear Forecast model get on the MLO-Cn2 dataset
0.930
BDD100K val
Resnet50
MRFP: Learning Generalizable Semantic Segmentation from Sim-2-Real with Multi-Resolution Feature Perturbation
2023-11-30T00:00:00
https://arxiv.org/abs/2311.18331v2
[ "https://github.com/airl-iisc/MRFP" ]
In the paper 'MRFP: Learning Generalizable Semantic Segmentation from Sim-2-Real with Multi-Resolution Feature Perturbation', what mIoU score did the Resnet50 model get on the BDD100K val dataset
31.44
ImageNet
WTTM (T: ResNet-34 S:ResNet-18)
Knowledge Distillation Based on Transformed Teacher Matching
2024-02-17T00:00:00
https://arxiv.org/abs/2402.11148v2
[ "https://github.com/zkxufo/TTM" ]
In the paper 'Knowledge Distillation Based on Transformed Teacher Matching', what Top-1 accuracy % score did the WTTM (T: ResNet-34 S:ResNet-18) model get on the ImageNet dataset
72.19
ModelNet40
Point-FEMAE
Towards Compact 3D Representations via Point Feature Enhancement Masked Autoencoders
2023-12-17T00:00:00
https://arxiv.org/abs/2312.10726v1
[ "https://github.com/zyh16143998882/aaai24-pointfemae" ]
In the paper 'Towards Compact 3D Representations via Point Feature Enhancement Masked Autoencoders', what Overall Accuracy score did the Point-FEMAE model get on the ModelNet40 dataset
94.5
3DPW
ZeDO (S=1,J=17)
Back to Optimization: Diffusion-based Zero-Shot 3D Human Pose Estimation
2023-07-07T00:00:00
https://arxiv.org/abs/2307.03833v3
[ "https://github.com/ipl-uw/ZeDO-Release" ]
In the paper 'Back to Optimization: Diffusion-based Zero-Shot 3D Human Pose Estimation', what PA-MPJPE score did the ZeDO (S=1,J=17) model get on the 3DPW dataset
40.3
DramaQA
LLaMA-VQA
Large Language Models are Temporal and Causal Reasoners for Video Question Answering
2023-10-24T00:00:00
https://arxiv.org/abs/2310.15747v2
[ "https://github.com/mlvlab/Flipped-VQA" ]
In the paper 'Large Language Models are Temporal and Causal Reasoners for Video Question Answering', what Accuracy score did the LLaMA-VQA model get on the DramaQA dataset
84.1
MM-Vet
VisionZip (Retain 128 Tokens)
VisionZip: Longer is Better but Not Necessary in Vision Language Models
2024-12-05T00:00:00
https://arxiv.org/abs/2412.04467v1
[ "https://github.com/dvlab-research/visionzip" ]
In the paper 'VisionZip: Longer is Better but Not Necessary in Vision Language Models', what GPT-4 score did the VisionZip (Retain 128 Tokens) model get on the MM-Vet dataset
32.6
MM-Vet v2
InternVL-Chat-V1-5
How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites
2024-04-25T00:00:00
https://arxiv.org/abs/2404.16821v2
[ "https://github.com/opengvlab/internvl" ]
In the paper 'How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites', what GPT-4 score did the InternVL-Chat-V1-5 model get on the MM-Vet v2 dataset
51.5±0.2
MVTec AD
ReConPatch Ensemble
ReConPatch : Contrastive Patch Representation Learning for Industrial Anomaly Detection
2023-05-26T00:00:00
https://arxiv.org/abs/2305.16713v3
[ "https://github.com/travishsu/ReConPatch-TF" ]
In the paper 'ReConPatch : Contrastive Patch Representation Learning for Industrial Anomaly Detection', what Segmentation AUROC score did the ReConPatch Ensemble model get on the MVTec AD dataset
98.67
ETTm2 (192) Multivariate
RLinear
Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping
2023-05-18T00:00:00
https://arxiv.org/abs/2305.10721v1
[ "https://github.com/plumprc/rtsf" ]
In the paper 'Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping', what MSE score did the RLinear model get on the ETTm2 (192) Multivariate dataset
0.219
MNIST
fKAN
fKAN: Fractional Kolmogorov-Arnold Networks with trainable Jacobi basis functions
2024-06-11T00:00:00
https://arxiv.org/abs/2406.07456v1
[ "https://github.com/alirezaafzalaghaei/fKAN" ]
In the paper 'fKAN: Fractional Kolmogorov-Arnold Networks with trainable Jacobi basis functions', what Accuracy score did the fKAN model get on the MNIST dataset
99.228
ISIC 2018
ProMISe
ProMISe: Promptable Medical Image Segmentation using SAM
2024-03-07T00:00:00
https://arxiv.org/abs/2403.04164v3
[ "https://github.com/xinkunwang111/promise" ]
In the paper 'ProMISe: Promptable Medical Image Segmentation using SAM', what DSC score did the ProMISe model get on the ISIC 2018 dataset
92.10
FB15k-237
KERMIT
KERMIT: Knowledge Graph Completion of Enhanced Relation Modeling with Inverse Transformation
2023-09-26T00:00:00
https://arxiv.org/abs/2309.14770v2
[ "https://github.com/lirt1231/kermit" ]
In the paper 'KERMIT: Knowledge Graph Completion of Enhanced Relation Modeling with Inverse Transformation', what MRR score did the KERMIT model get on the FB15k-237 dataset
0.359
EuroSAT
ZLaP*
Label Propagation for Zero-shot Classification with Vision-Language Models
2024-04-05T00:00:00
https://arxiv.org/abs/2404.04072v1
[ "https://github.com/vladan-stojnic/zlap" ]
In the paper 'Label Propagation for Zero-shot Classification with Vision-Language Models', what Accuracy score did the ZLaP* model get on the EuroSAT dataset
62.7
Nordland
BoQ
BoQ: A Place is Worth a Bag of Learnable Queries
2024-05-12T00:00:00
https://arxiv.org/abs/2405.07364v3
[ "https://github.com/amaralibey/bag-of-queries" ]
In the paper 'BoQ: A Place is Worth a Bag of Learnable Queries', what Recall@1 score did the BoQ model get on the Nordland dataset
90.6
BIRD (BIg Bench for LaRge-scale Database Grounded Text-to-SQL Evaluation)
MSc-SQL
MSc-SQL: Multi-Sample Critiquing Small Language Models For Text-To-SQL Translation
2024-10-16T00:00:00
https://arxiv.org/abs/2410.12916v1
[ "https://github.com/layer6ai-labs/msc-sql" ]
In the paper 'MSc-SQL: Multi-Sample Critiquing Small Language Models For Text-To-SQL Translation', what Execution Accuracy % (Dev) score did the MSc-SQL model get on the BIRD (BIg Bench for LaRge-scale Database Grounded Text-to-SQL Evaluation) dataset
65.6
VoxCeleb1
ReDimNet-B3-LM-ASNorm (3.0M)
Reshape Dimensions Network for Speaker Recognition
2024-07-25T00:00:00
https://arxiv.org/abs/2407.18223v2
[ "https://github.com/IDRnD/ReDimNet" ]
In the paper 'Reshape Dimensions Network for Speaker Recognition', what EER score did the ReDimNet-B3-LM-ASNorm (3.0M) model get on the VoxCeleb1 dataset
0.47
The Pile
Llama-3.2-Instruct 3B
Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs
2024-10-10T00:00:00
https://arxiv.org/abs/2410.08020v2
[ "https://github.com/jonhue/activeft" ]
In the paper 'Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs', what Bits per byte score did the Llama-3.2-Instruct 3B model get on The Pile dataset
0.737
Inverse-Text
DeepSolo (ResNet-50)
DeepSolo++: Let Transformer Decoder with Explicit Points Solo for Multilingual Text Spotting
2023-05-31T00:00:00
https://arxiv.org/abs/2305.19957v2
[ "https://github.com/vitae-transformer/deepsolo" ]
In the paper 'DeepSolo++: Let Transformer Decoder with Explicit Points Solo for Multilingual Text Spotting', what F-measure (%) - No Lexicon score did the DeepSolo (ResNet-50) model get on the Inverse-Text dataset
48.5
Replica
Open3DIS
Open3DIS: Open-Vocabulary 3D Instance Segmentation with 2D Mask Guidance
2023-12-17T00:00:00
https://arxiv.org/abs/2312.10671v3
[ "https://github.com/VinAIResearch/Open3DIS" ]
In the paper 'Open3DIS: Open-Vocabulary 3D Instance Segmentation with 2D Mask Guidance', what mAP score did the Open3DIS model get on the Replica dataset
18.1
Mol-Instruction
BioT5+
BioT5+: Towards Generalized Biological Understanding with IUPAC Integration and Multi-task Tuning
2024-02-27T00:00:00
https://arxiv.org/abs/2402.17810v2
[ "https://github.com/QizhiPei/BioT5" ]
In the paper 'BioT5+: Towards Generalized Biological Understanding with IUPAC Integration and Multi-task Tuning', what Exact score did the BioT5+ model get on the Mol-Instruction dataset
0.257
LVIS v1.0
DE-ViT
Detect Everything with Few Examples
2023-09-22T00:00:00
https://arxiv.org/abs/2309.12969v4
[ "https://github.com/mlzxy/devit" ]
In the paper 'Detect Everything with Few Examples', what AP novel-LVIS base training score did the DE-ViT model get on the LVIS v1.0 dataset
34.3
ogbn-proteins
LD+GAT
Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias
2023-09-26T00:00:00
https://arxiv.org/abs/2309.14907v1
[ "https://github.com/MIRALab-USTC/LD" ]
In the paper 'Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias', what Test ROC-AUC score did the LD+GAT model get on the ogbn-proteins dataset
0.8942 ± 0.0007
CATT
Sakhr
CATT: Character-based Arabic Tashkeel Transformer
2024-07-03T00:00:00
https://arxiv.org/abs/2407.03236v3
[ "https://github.com/abjadai/catt" ]
In the paper 'CATT: Character-based Arabic Tashkeel Transformer', what DER(%) score did the Sakhr model get on the CATT dataset
13.841
Nature
Zhu et al.
Reversible Decoupling Network for Single Image Reflection Removal
2024-10-10T00:00:00
https://arxiv.org/abs/2410.08063v1
[ "https://github.com/lime-j/RDNet" ]
In the paper 'Reversible Decoupling Network for Single Image Reflection Removal', what PSNR score did the Zhu et al. model get on the Nature dataset
26.04
GSM8K
WizardMath-7B-V1.0
WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct
2023-08-18T00:00:00
https://arxiv.org/abs/2308.09583v1
[ "https://github.com/nlpxucan/wizardlm" ]
In the paper 'WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct', what Accuracy score did the WizardMath-7B-V1.0 model get on the GSM8K dataset
54.9