Schema (8 columns per record):
- dataset: string (length 0–82)
- model_name: string (length 0–150)
- paper_title: string (length 19–175)
- paper_date: timestamp[ns]
- paper_url: string (length 32–35)
- code_links: list (length 1)
- prompts: string (length 105–331)
- answer: string (length 1–67)
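A minimal sketch of how one row of this dump could be represented and sanity-checked in Python, assuming each run of eight lines below is one record in schema order. The `Record` class name is a hypothetical choice, not part of the dataset; the field values are copied verbatim from the first record.

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    # Field names follow the schema above.
    dataset: str
    model_name: str
    paper_title: str
    paper_date: str          # ISO-8601 timestamp string in this dump
    paper_url: str
    code_links: list = field(default_factory=list)
    prompts: str = ""
    answer: str = ""

# First record of the dump, reassembled into one object.
row = Record(
    dataset="Cityscapes test",
    model_name="PriMaPs-EM + STEGO (DINO ViT-B/8)",
    paper_title=("Boosting Unsupervised Semantic Segmentation "
                 "with Principal Mask Proposals"),
    paper_date="2024-04-25T00:00:00",
    paper_url="https://arxiv.org/abs/2404.16818v2",
    code_links=["https://github.com/visinf/primaps"],
    prompts=("In the paper 'Boosting Unsupervised Semantic Segmentation with "
             "Principal Mask Proposals', what mIoU score did the PriMaPs-EM + "
             "STEGO (DINO ViT-B/8) model get on the Cityscapes test dataset"),
    answer="21.6",
)

# Each prompt embeds both the paper title and the dataset name,
# which makes a cheap consistency check possible per row.
assert row.paper_title in row.prompts
assert row.dataset in row.prompts
assert row.paper_url.startswith("https://arxiv.org/abs/")
```

The same checks can be run over every record when the dump is parsed eight lines at a time.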
Cityscapes test
PriMaPs-EM + STEGO (DINO ViT-B/8)
Boosting Unsupervised Semantic Segmentation with Principal Mask Proposals
2024-04-25T00:00:00
https://arxiv.org/abs/2404.16818v2
[ "https://github.com/visinf/primaps" ]
In the paper 'Boosting Unsupervised Semantic Segmentation with Principal Mask Proposals', what mIoU score did the PriMaPs-EM + STEGO (DINO ViT-B/8) model get on the Cityscapes test dataset
21.6
CUHK-PEDES
RDE
Noisy-Correspondence Learning for Text-to-Image Person Re-identification
2023-08-19T00:00:00
https://arxiv.org/abs/2308.09911v3
[ "https://github.com/QinYang79/RDE" ]
In the paper 'Noisy-Correspondence Learning for Text-to-Image Person Re-identification', what Rank-1 score did the RDE model get on the CUHK-PEDES dataset
74.46
Set5 - 2x upscaling
DRCT
DRCT: Saving Image Super-resolution away from Information Bottleneck
2024-03-31T00:00:00
https://arxiv.org/abs/2404.00722v5
[ "https://github.com/ming053l/drct" ]
In the paper 'DRCT: Saving Image Super-resolution away from Information Bottleneck', what PSNR score did the DRCT model get on the Set5 - 2x upscaling dataset
38.72
ETTm1 (96) Multivariate
DiPE-Linear
Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting
2024-11-26T00:00:00
https://arxiv.org/abs/2411.17257v1
[ "https://github.com/wintertee/dipe-linear" ]
In the paper 'Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting', what MSE score did the DiPE-Linear model get on the ETTm1 (96) Multivariate dataset
0.309
Atari 2600 Time Pilot
ASL DDQN
Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity
2023-05-07T00:00:00
https://arxiv.org/abs/2305.04180v3
[ "https://github.com/xinjinghao/color" ]
In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score score did the ASL DDQN model get on the Atari 2600 Time Pilot dataset
12071
CVC-ClinicDB
ADSNet
Adaptation of Distinct Semantics for Uncertain Areas in Polyp Segmentation
2024-05-13T00:00:00
https://arxiv.org/abs/2405.07523v1
[ "https://github.com/vinhhust2806/ADSNet" ]
In the paper 'Adaptation of Distinct Semantics for Uncertain Areas in Polyp Segmentation', what mean Dice score did the ADSNet model get on the CVC-ClinicDB dataset
0.938
COCO-Stuff-27
DynaSeg - FSF (ResNet-18 FPN)
DynaSeg: A Deep Dynamic Fusion Method for Unsupervised Image Segmentation Incorporating Feature Similarity and Spatial Continuity
2024-05-09T00:00:00
https://arxiv.org/abs/2405.05477v4
[ "https://github.com/ryersonmultimedialab/dynaseg" ]
In the paper 'DynaSeg: A Deep Dynamic Fusion Method for Unsupervised Image Segmentation Incorporating Feature Similarity and Spatial Continuity', what Accuracy score did the DynaSeg - FSF (ResNet-18 FPN) model get on the COCO-Stuff-27 dataset
81.1
Oxford RobotCar Dataset
AnyLoc-VLAD-DINOv2
AnyLoc: Towards Universal Visual Place Recognition
2023-08-01T00:00:00
https://arxiv.org/abs/2308.00688v2
[ "https://github.com/AnyLoc/AnyLoc" ]
In the paper 'AnyLoc: Towards Universal Visual Place Recognition', what Recall@1 score did the AnyLoc-VLAD-DINOv2 model get on the Oxford RobotCar Dataset dataset
98.95
Oxford 102 Flower
DePT
DePT: Decoupled Prompt Tuning
2023-09-14T00:00:00
https://arxiv.org/abs/2309.07439v2
[ "https://github.com/koorye/dept" ]
In the paper 'DePT: Decoupled Prompt Tuning', what Harmonic mean score did the DePT model get on the Oxford 102 Flower dataset
86.46
HJDB
Beat This!
Beat this! Accurate beat tracking without DBN postprocessing
2024-07-31T00:00:00
https://arxiv.org/abs/2407.21658v1
[ "https://github.com/CPJKU/beat_this" ]
In the paper 'Beat this! Accurate beat tracking without DBN postprocessing', what F1 score did the Beat This! model get on the HJDB dataset
98.2
CodeContests
CodeChain + WizardCoder-15B
CodeChain: Towards Modular Code Generation Through Chain of Self-revisions with Representative Sub-modules
2023-10-13T00:00:00
https://arxiv.org/abs/2310.08992v3
[ "https://github.com/SalesforceAIResearch/CodeChain" ]
In the paper 'CodeChain: Towards Modular Code Generation Through Chain of Self-revisions with Representative Sub-modules', what Test Set pass@1 score did the CodeChain + WizardCoder-15B model get on the CodeContests dataset
2.35
DocVQA test
PaLI-3 (w/ OCR)
PaLI-3 Vision Language Models: Smaller, Faster, Stronger
2023-10-13T00:00:00
https://arxiv.org/abs/2310.09199v2
[ "https://github.com/kyegomez/PALI3" ]
In the paper 'PaLI-3 Vision Language Models: Smaller, Faster, Stronger', what ANLS score did the PaLI-3 (w/ OCR) model get on the DocVQA test dataset
0.886
ModelNet40
ExpPoint-MAE
ExpPoint-MAE: Better interpretability and performance for self-supervised point cloud transformers
2023-06-19T00:00:00
https://arxiv.org/abs/2306.10798v3
[ "https://github.com/vvrpanda/exppoint-mae" ]
In the paper 'ExpPoint-MAE: Better interpretability and performance for self-supervised point cloud transformers', what Overall Accuracy score did the ExpPoint-MAE model get on the ModelNet40 dataset
94.2
UCF101
OST
OST: Refining Text Knowledge with Optimal Spatio-Temporal Descriptor for General Video Recognition
2023-11-30T00:00:00
https://arxiv.org/abs/2312.00096v2
[ "https://github.com/tomchen-ctj/OST" ]
In the paper 'OST: Refining Text Knowledge with Optimal Spatio-Temporal Descriptor for General Video Recognition', what Top-1 Accuracy score did the OST model get on the UCF101 dataset
79.7
ChEBI-20
GIT-Mol-caption
GIT-Mol: A Multi-modal Large Language Model for Molecular Science with Graph, Image, and Text
2023-08-14T00:00:00
https://arxiv.org/abs/2308.06911v3
[ "https://github.com/ai-hpc-research-team/git-mol" ]
In the paper 'GIT-Mol: A Multi-modal Large Language Model for Molecular Science with Graph, Image, and Text', what BLEU score did the GIT-Mol-caption model get on the ChEBI-20 dataset
75.6
dacl10k v1 testdev
DeepLabv3+ EfficientNet-B4
dacl10k: Benchmark for Semantic Bridge Damage Segmentation
2023-09-01T00:00:00
https://arxiv.org/abs/2309.00460v1
[ "https://github.com/phiyodr/dacl10k-toolkit" ]
In the paper 'dacl10k: Benchmark for Semantic Bridge Damage Segmentation', what mIoU score did the DeepLabv3+ EfficientNet-B4 model get on the dacl10k v1 testdev dataset
0.411
Color FERET
VGG based
IdentiFace : A VGG Based Multimodal Facial Biometric System
2024-01-02T00:00:00
https://arxiv.org/abs/2401.01227v2
[ "https://github.com/MahmoudRabea13/IdentiFace" ]
In the paper 'IdentiFace : A VGG Based Multimodal Facial Biometric System', what 5-class test accuracy score did the VGG based model get on the Color FERET dataset
99.2%
COCO test-dev
LeYOLO-Medium
LeYOLO, New Scalable and Efficient CNN Architecture for Object Detection
2024-06-20T00:00:00
https://arxiv.org/abs/2406.14239v1
[ "https://github.com/LilianHollard/LeYOLO" ]
In the paper 'LeYOLO, New Scalable and Efficient CNN Architecture for Object Detection', what box mAP score did the LeYOLO-Medium model get on the COCO test-dev dataset
39.3
BanglaBook
XGBoost (char 2-gram + char 3-gram)
BanglaBook: A Large-scale Bangla Dataset for Sentiment Analysis from Book Reviews
2023-05-11T00:00:00
https://arxiv.org/abs/2305.06595v3
[ "https://github.com/mohsinulkabir14/banglabook" ]
In the paper 'BanglaBook: A Large-scale Bangla Dataset for Sentiment Analysis from Book Reviews', what Weighted Average F1-score score did the XGBoost (char 2-gram + char 3-gram) model get on the BanglaBook dataset
0.8723
ENZYMES
GIN + PANDA
PANDA: Expanded Width-Aware Message Passing Beyond Rewiring
2024-06-06T00:00:00
https://arxiv.org/abs/2406.03671v2
[ "https://github.com/jeongwhanchoi/panda" ]
In the paper 'PANDA: Expanded Width-Aware Message Passing Beyond Rewiring', what Accuracy score did the GIN + PANDA model get on the ENZYMES dataset
46.2
VisA
AnomalyDINO-S (1-shot)
AnomalyDINO: Boosting Patch-based Few-shot Anomaly Detection with DINOv2
2024-05-23T00:00:00
https://arxiv.org/abs/2405.14529v2
[ "https://github.com/dammsi/AnomalyDINO" ]
In the paper 'AnomalyDINO: Boosting Patch-based Few-shot Anomaly Detection with DINOv2', what Detection AUROC score did the AnomalyDINO-S (1-shot) model get on the VisA dataset
87.4
SportsMOT
MeMOTR
MeMOTR: Long-Term Memory-Augmented Transformer for Multi-Object Tracking
2023-07-28T00:00:00
https://arxiv.org/abs/2307.15700v3
[ "https://github.com/mcg-nju/memotr" ]
In the paper 'MeMOTR: Long-Term Memory-Augmented Transformer for Multi-Object Tracking', what HOTA score did the MeMOTR model get on the SportsMOT dataset
70.0
ImageNet
ViT-B/16
Kolmogorov-Arnold Transformer
2024-09-16T00:00:00
https://arxiv.org/abs/2409.10594v1
[ "https://github.com/Adamdad/kat" ]
In the paper 'Kolmogorov-Arnold Transformer', what Top 1 Accuracy score did the ViT-B/16 model get on the ImageNet dataset
79.1
ETTm2 (96) Multivariate
TSMixer
TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting
2023-06-14T00:00:00
https://arxiv.org/abs/2306.09364v4
[ "https://github.com/ibm/tsfm" ]
In the paper 'TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting', what MSE score did the TSMixer model get on the ETTm2 (96) Multivariate dataset
0.164
Mip-NeRF 360
Compact3D
CompGS: Smaller and Faster Gaussian Splatting with Vector Quantization
2023-11-30T00:00:00
https://arxiv.org/abs/2311.18159v3
[ "https://github.com/ucdvision/compact3d" ]
In the paper 'CompGS: Smaller and Faster Gaussian Splatting with Vector Quantization', what PSNR score did the Compact3D model get on the Mip-NeRF 360 dataset
27.16
ETH (trained on 3DMatch)
GeoTransformer
GeoTransformer: Fast and Robust Point Cloud Registration with Geometric Transformer
2023-07-25T00:00:00
https://arxiv.org/abs/2308.03768v1
[ "https://github.com/qinzheng93/geotransformer" ]
In the paper 'GeoTransformer: Fast and Robust Point Cloud Registration with Geometric Transformer', what Recall (30cm, 5 degrees) score did the GeoTransformer model get on the ETH (trained on 3DMatch) dataset
4.91
MSR-VTT
TF-T2V
A Recipe for Scaling up Text-to-Video Generation with Text-free Videos
2023-12-25T00:00:00
https://arxiv.org/abs/2312.15770v1
[ "https://github.com/ali-vilab/i2vgen-xl" ]
In the paper 'A Recipe for Scaling up Text-to-Video Generation with Text-free Videos', what FID score did the TF-T2V model get on the MSR-VTT dataset
8.19
MixATIS
MISCA
MISCA: A Joint Model for Multiple Intent Detection and Slot Filling with Intent-Slot Co-Attention
2023-12-10T00:00:00
https://arxiv.org/abs/2312.05741v1
[ "https://github.com/vinairesearch/misca" ]
In the paper 'MISCA: A Joint Model for Multiple Intent Detection and Slot Filling with Intent-Slot Co-Attention', what Micro F1 score did the MISCA model get on the MixATIS dataset
90.5
TID2013
UNIQA
You Only Train Once: A Unified Framework for Both Full-Reference and No-Reference Image Quality Assessment
2023-10-14T00:00:00
https://arxiv.org/abs/2310.09560v2
[ "https://github.com/barcodereader/yoto" ]
In the paper 'You Only Train Once: A Unified Framework for Both Full-Reference and No-Reference Image Quality Assessment', what SRCC score did the UNIQA model get on the TID2013 dataset
0.953
nuScenes Camera Only
SeaBird
SeaBird: Segmentation in Bird's View with Dice Loss Improves Monocular 3D Detection of Large Objects
2024-03-29T00:00:00
https://arxiv.org/abs/2403.20318v1
[ "https://github.com/abhi1kumar/seabird" ]
In the paper 'SeaBird: Segmentation in Bird's View with Dice Loss Improves Monocular 3D Detection of Large Objects', what NDS score did the SeaBird model get on the nuScenes Camera Only dataset
59.7
CIFAR-10
CAF
Constant Acceleration Flow
2024-11-01T00:00:00
https://arxiv.org/abs/2411.00322v1
[ "https://github.com/mlvlab/CAF" ]
In the paper 'Constant Acceleration Flow', what FID score did the CAF model get on the CIFAR-10 dataset
1.39
BIG-bench (Sports Understanding)
PaLM 2 (few-shot, k=3, Direct)
PaLM 2 Technical Report
2023-05-17T00:00:00
https://arxiv.org/abs/2305.10403v3
[ "https://github.com/eternityyw/tram-benchmark" ]
In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2 (few-shot, k=3, Direct) model get on the BIG-bench (Sports Understanding) dataset
90.8
California Housing Prices
Binary Diffusion
Tabular Data Generation using Binary Diffusion
2024-09-20T00:00:00
https://arxiv.org/abs/2409.13882v2
[ "https://github.com/vkinakh/binary-diffusion-tabular" ]
In the paper 'Tabular Data Generation using Binary Diffusion', what Parameters(M) score did the Binary Diffusion model get on the California Housing Prices dataset
1.5
RLBench
ARP+
Autoregressive Action Sequence Learning for Robotic Manipulation
2024-10-04T00:00:00
https://arxiv.org/abs/2410.03132v3
[ "https://github.com/mlzxy/arp" ]
In the paper 'Autoregressive Action Sequence Learning for Robotic Manipulation', what Succ. Rate (18 tasks, 100 demo/task) score did the ARP+ model get on the RLBench dataset
86.0
LRS2
RTFS-Net-4
RTFS-Net: Recurrent Time-Frequency Modelling for Efficient Audio-Visual Speech Separation
2023-09-29T00:00:00
https://arxiv.org/abs/2309.17189v4
[ "https://github.com/spkgyk/RTFS-Net" ]
In the paper 'RTFS-Net: Recurrent Time-Frequency Modelling for Efficient Audio-Visual Speech Separation', what SI-SNRi score did the RTFS-Net-4 model get on the LRS2 dataset
14.1
EconLogicQA
GPT-4-Turbo
EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning
2024-05-13T00:00:00
https://arxiv.org/abs/2405.07938v2
[ "https://github.com/yinzhu-quan/lm-evaluation-harness" ]
In the paper 'EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning', what Accuracy score did the GPT-4-Turbo model get on the EconLogicQA dataset
0.5692
DEplain-APA-sent
mBART (trained on DEplain-APA-sent)
DEPLAIN: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification
2023-05-30T00:00:00
https://arxiv.org/abs/2305.18939v1
[ "https://github.com/rstodden/deplain" ]
In the paper 'DEPLAIN: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification', what SARI (EASSE>=0.2.1) score did the mBART (trained on DEplain-APA-sent) model get on the DEplain-APA-sent dataset
34.818
MedConceptsQA
epfl-llm/meditron-7b
MEDITRON-70B: Scaling Medical Pretraining for Large Language Models
2023-11-27T00:00:00
https://arxiv.org/abs/2311.16079v1
[ "https://github.com/epfllm/meditron" ]
In the paper 'MEDITRON-70B: Scaling Medical Pretraining for Large Language Models', what Accuracy score did the epfl-llm/meditron-7b model get on the MedConceptsQA dataset
23.787
ToolLens
COLT
Towards Completeness-Oriented Tool Retrieval for Large Language Models
2024-05-25T00:00:00
https://arxiv.org/abs/2405.16089v2
[ "https://github.com/quchangle1/colt" ]
In the paper 'Towards Completeness-Oriented Tool Retrieval for Large Language Models', what COMP@ score did the COLT model get on the ToolLens dataset
84.55
SFCHD
FCOS+SCALE
Large, Complex, and Realistic Safety Clothing and Helmet Detection: Dataset and Method
2023-06-03T00:00:00
https://arxiv.org/abs/2306.02098v2
[ "https://github.com/lijfrank-open/SFCHD-SCALE" ]
In the paper 'Large, Complex, and Realistic Safety Clothing and Helmet Detection: Dataset and Method', what mAP@0.50 score did the FCOS+SCALE model get on the SFCHD dataset
76.3
ETTm2 (192) Multivariate
SCNN
Disentangling Structured Components: Towards Adaptive, Interpretable and Scalable Time Series Forecasting
2023-05-22T00:00:00
https://arxiv.org/abs/2305.13036v3
[ "https://github.com/JLDeng/SCNN" ]
In the paper 'Disentangling Structured Components: Towards Adaptive, Interpretable and Scalable Time Series Forecasting', what MSE score did the SCNN model get on the ETTm2 (192) Multivariate dataset
0.221
GOT-10k
SAMURAI-L
SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory
2024-11-18T00:00:00
https://arxiv.org/abs/2411.11922v2
[ "https://github.com/yangchris11/samurai" ]
In the paper 'SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory', what Average Overlap score did the SAMURAI-L model get on the GOT-10k dataset
81.7
ZINC-500k
N2-GNN
Extending the Design Space of Graph Neural Networks by Rethinking Folklore Weisfeiler-Lehman
2023-06-05T00:00:00
https://arxiv.org/abs/2306.03266v3
[ "https://github.com/jiaruifeng/n2gnn" ]
In the paper 'Extending the Design Space of Graph Neural Networks by Rethinking Folklore Weisfeiler-Lehman', what MAE score did the N2-GNN model get on the ZINC-500k dataset
0.059
MM-Vet
DeepSeek-VL
DeepSeek-VL: Towards Real-World Vision-Language Understanding
2024-03-08T00:00:00
https://arxiv.org/abs/2403.05525v2
[ "https://github.com/deepseek-ai/deepseek-vl" ]
In the paper 'DeepSeek-VL: Towards Real-World Vision-Language Understanding', what GPT-4 score score did the DeepSeek-VL model get on the MM-Vet dataset
41.5
Criteo
TF4CTR
TF4CTR: Twin Focus Framework for CTR Prediction via Adaptive Sample Differentiation
2024-05-06T00:00:00
https://arxiv.org/abs/2405.03167v2
[ "https://github.com/salmon1802/tf4ctr" ]
In the paper 'TF4CTR: Twin Focus Framework for CTR Prediction via Adaptive Sample Differentiation', what AUC score did the TF4CTR model get on the Criteo dataset
0.8150
ImageNet
M2-Encoder
M2-Encoder: Advancing Bilingual Image-Text Understanding by Large-scale Efficient Pretraining
2024-01-29T00:00:00
https://arxiv.org/abs/2401.15896v2
[ "https://github.com/alipay/Ant-Multi-Modal-Framework/tree/main/prj/M2_Encoder" ]
In the paper 'M2-Encoder: Advancing Bilingual Image-Text Understanding by Large-scale Efficient Pretraining', what Accuracy (Private) score did the M2-Encoder model get on the ImageNet dataset
88.5
MM-Vet
JanusFlow
JanusFlow: Harmonizing Autoregression and Rectified Flow for Unified Multimodal Understanding and Generation
2024-11-12T00:00:00
https://arxiv.org/abs/2411.07975v1
[ "https://github.com/deepseek-ai/janus" ]
In the paper 'JanusFlow: Harmonizing Autoregression and Rectified Flow for Unified Multimodal Understanding and Generation', what GPT-4 score score did the JanusFlow model get on the MM-Vet dataset
30.9
Elephant
Snuffy
Snuffy: Efficient Whole Slide Image Classifier
2024-08-15T00:00:00
https://arxiv.org/abs/2408.08258v2
[ "https://github.com/jafarinia/snuffy" ]
In the paper 'Snuffy: Efficient Whole Slide Image Classifier', what ACC score did the Snuffy model get on the Elephant dataset
0.923
CHILI-100K
GIN
CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning
2024-02-20T00:00:00
https://arxiv.org/abs/2402.13221v2
[ "https://github.com/UlrikFriisJensen/CHILI" ]
In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what F1-score (Weighted) score did the GIN model get on the CHILI-100K dataset
0.336 +/- 0.005
Squirrel
M2M-GNN
Sign is Not a Remedy: Multiset-to-Multiset Message Passing for Learning on Heterophilic Graphs
2024-05-31T00:00:00
https://arxiv.org/abs/2405.20652v1
[ "https://github.com/Jinx-byebye/m2mgnn" ]
In the paper 'Sign is Not a Remedy: Multiset-to-Multiset Message Passing for Learning on Heterophilic Graphs', what Accuracy score did the M2M-GNN model get on the Squirrel dataset
63.60 ± 1.7
SODA-D
CFINet
Small Object Detection via Coarse-to-fine Proposal Generation and Imitation Learning
2023-08-18T00:00:00
https://arxiv.org/abs/2308.09534v1
[ "https://github.com/shaunyuan22/cfinet" ]
In the paper 'Small Object Detection via Coarse-to-fine Proposal Generation and Imitation Learning', what mAP@0.5:0.95 score did the CFINet model get on the SODA-D dataset
30.7
WeiboPolls
UniPoll
UniPoll: A Unified Social Media Poll Generation Framework via Multi-Objective Optimization
2023-06-12T00:00:00
https://arxiv.org/abs/2306.06851v2
[ "https://github.com/X1AOX1A/UniPoll" ]
In the paper 'UniPoll: A Unified Social Media Poll Generation Framework via Multi-Objective Optimization', what ROUGE-1 score did the UniPoll model get on the WeiboPolls dataset
49.6
MSR-VTT
CoCap (ViT/L14)
Accurate and Fast Compressed Video Captioning
2023-09-22T00:00:00
https://arxiv.org/abs/2309.12867v2
[ "https://github.com/acherstyx/CoCap" ]
In the paper 'Accurate and Fast Compressed Video Captioning', what CIDEr score did the CoCap (ViT/L14) model get on the MSR-VTT dataset
57.2
LIVE-FB LSVQ
OneAlign
Q-Align: Teaching LMMs for Visual Scoring via Discrete Text-Defined Levels
2023-12-28T00:00:00
https://arxiv.org/abs/2312.17090v1
[ "https://github.com/q-future/q-align" ]
In the paper 'Q-Align: Teaching LMMs for Visual Scoring via Discrete Text-Defined Levels', what PLCC score did the OneAlign model get on the LIVE-FB LSVQ dataset
0.886
CULane
CLRKDNet (ResNet-18)
CLRKDNet: Speeding up Lane Detection with Knowledge Distillation
2024-05-21T00:00:00
https://arxiv.org/abs/2405.12503v1
[ "https://github.com/weiqingq/CLRKDNet" ]
In the paper 'CLRKDNet: Speeding up Lane Detection with Knowledge Distillation', what F1 score score did the CLRKDNet (ResNet-18) model get on the CULane dataset
79.66
BIG-bench (Disambiguation QA)
PaLM 2 (few-shot, k=3, CoT)
PaLM 2 Technical Report
2023-05-17T00:00:00
https://arxiv.org/abs/2305.10403v3
[ "https://github.com/eternityyw/tram-benchmark" ]
In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2 (few-shot, k=3, CoT) model get on the BIG-bench (Disambiguation QA) dataset
77.6
Mapillary val
DINOv2 SALAD
Optimal Transport Aggregation for Visual Place Recognition
2023-11-27T00:00:00
https://arxiv.org/abs/2311.15937v2
[ "https://github.com/serizba/salad" ]
In the paper 'Optimal Transport Aggregation for Visual Place Recognition', what Recall@1 score did the DINOv2 SALAD model get on the Mapillary val dataset
92.2
GigaSpeech DEV
Zipformer+CR-CTC (no external language model)
CR-CTC: Consistency regularization on CTC for improved speech recognition
2024-10-07T00:00:00
https://arxiv.org/abs/2410.05101v3
[ "https://github.com/k2-fsa/icefall" ]
In the paper 'CR-CTC: Consistency regularization on CTC for improved speech recognition', what Word Error Rate (WER) score did the Zipformer+CR-CTC (no external language model) model get on the GigaSpeech DEV dataset
10.15
Market-1501
BoT+UFFM+AMC
Enhancing Person Re-Identification via Uncertainty Feature Fusion and Auto-weighted Measure Combination
2024-05-02T00:00:00
https://arxiv.org/abs/2405.01101v4
[ "https://github.com/chequanghuy/Enhancing-Person-Re-Identification-via-UFFM-and-AMC" ]
In the paper 'Enhancing Person Re-Identification via Uncertainty Feature Fusion and Auto-weighted Measure Combination', what Rank-1 score did the BoT+UFFM+AMC model get on the Market-1501 dataset
96.2
CIFAR-100
ABNet-2G-R2
ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities
2024-11-28T00:00:00
https://arxiv.org/abs/2411.19213v1
[ "https://github.com/dvssajay/New_World" ]
In the paper 'ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities', what Percentage correct score did the ABNet-2G-R2 model get on the CIFAR-100 dataset
80.354
CIFAR-100-LT (ρ=100)
LIFT (ViT-B/16, CLIP)
Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts
2023-09-18T00:00:00
https://arxiv.org/abs/2309.10019v3
[ "https://github.com/shijxcs/lift" ]
In the paper 'Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts', what Error Rate score did the LIFT (ViT-B/16, CLIP) model get on the CIFAR-100-LT (ρ=100) dataset
18.3
Fishyscapes L&F
Mask2Anomaly
Unmasking Anomalies in Road-Scene Segmentation
2023-07-25T00:00:00
https://arxiv.org/abs/2307.13316v1
[ "https://github.com/shyam671/mask2anomaly-unmasking-anomalies-in-road-scene-segmentation" ]
In the paper 'Unmasking Anomalies in Road-Scene Segmentation', what AP score did the Mask2Anomaly model get on the Fishyscapes L&F dataset
46.04
RealCQA
vlt5 - 11th ep FineTune
RealCQA: Scientific Chart Question Answering as a Test-bed for First-Order Logic
2023-08-03T00:00:00
https://arxiv.org/abs/2308.01979v1
[ "https://github.com/cse-ai-lab/RealCQA" ]
In the paper 'RealCQA: Scientific Chart Question Answering as a Test-bed for First-Order Logic', what 1:1 Accuracy score did the vlt5 - 11th ep FineTune model get on the RealCQA dataset
0.310618012706403
ETTm1 (192) Multivariate
MoLE-DLinear
Mixture-of-Linear-Experts for Long-term Time Series Forecasting
2023-12-11T00:00:00
https://arxiv.org/abs/2312.06786v3
[ "https://github.com/rogerni/mole" ]
In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the ETTm1 (192) Multivariate dataset
0.328
GRAZPEDWRI-DX
YOLOv9-E
YOLOv9 for Fracture Detection in Pediatric Wrist Trauma X-ray Images
2024-03-17T00:00:00
https://arxiv.org/abs/2403.11249v2
[ "https://github.com/ruiyangju/yolov9-fracture-detection" ]
In the paper 'YOLOv9 for Fracture Detection in Pediatric Wrist Trauma X-ray Images', what mAP score did the YOLOv9-E model get on the GRAZPEDWRI-DX dataset
65.62
CIFAR-100
ZLaP*
Label Propagation for Zero-shot Classification with Vision-Language Models
2024-04-05T00:00:00
https://arxiv.org/abs/2404.04072v1
[ "https://github.com/vladan-stojnic/zlap" ]
In the paper 'Label Propagation for Zero-shot Classification with Vision-Language Models', what Accuracy score did the ZLaP* model get on the CIFAR-100 dataset
74.2
ASDiv-A
MMOS-CODE-7B(0-shot)
An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning
2024-02-23T00:00:00
https://arxiv.org/abs/2403.00799v1
[ "https://github.com/cyzhh/MMOS" ]
In the paper 'An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning', what Execution Accuracy score did the MMOS-CODE-7B(0-shot) model get on the ASDiv-A dataset
78.6
CHILI-100K
GCN
CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning
2024-02-20T00:00:00
https://arxiv.org/abs/2402.13221v2
[ "https://github.com/UlrikFriisJensen/CHILI" ]
In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what F1-score (Weighted) score did the GCN model get on the CHILI-100K dataset
0.275 +/- 0.002
MATH
MathCoder-CL-34B
MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning
2023-10-05T00:00:00
https://arxiv.org/abs/2310.03731v1
[ "https://github.com/mathllm/mathcoder" ]
In the paper 'MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning', what Accuracy score did the MathCoder-CL-34B model get on the MATH dataset
45.2
ETTh2 (720) Multivariate
MoLE-DLinear
Mixture-of-Linear-Experts for Long-term Time Series Forecasting
2023-12-11T00:00:00
https://arxiv.org/abs/2312.06786v3
[ "https://github.com/rogerni/mole" ]
In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the ETTh2 (720) Multivariate dataset
0.605
DUTS-TE
BiRefNet (DUTS, HRSOD)
Bilateral Reference for High-Resolution Dichotomous Image Segmentation
2024-01-07T00:00:00
https://arxiv.org/abs/2401.03407v6
[ "https://github.com/zhengpeng7/birefnet" ]
In the paper 'Bilateral Reference for High-Resolution Dichotomous Image Segmentation', what MAE score did the BiRefNet (DUTS, HRSOD) model get on the DUTS-TE dataset
0.018
SA-1B
SAM
Segment Anything without Supervision
2024-06-28T00:00:00
https://arxiv.org/abs/2406.20081v1
[ "https://github.com/frank-xwang/unsam" ]
In the paper 'Segment Anything without Supervision', what Average Precision score did the SAM model get on the SA-1B dataset
38.9
ETTm1 (192) Multivariate
DiPE-Linear
Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting
2024-11-26T00:00:00
https://arxiv.org/abs/2411.17257v1
[ "https://github.com/wintertee/dipe-linear" ]
In the paper 'Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting', what MSE score did the DiPE-Linear model get on the ETTm1 (192) Multivariate dataset
0.339
BIG-bench (Date Understanding)
PaLM 2 (few-shot, k=3, CoT)
PaLM 2 Technical Report
2023-05-17T00:00:00
https://arxiv.org/abs/2305.10403v3
[ "https://github.com/eternityyw/tram-benchmark" ]
In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2 (few-shot, k=3, CoT) model get on the BIG-bench (Date Understanding) dataset
91.2
AIFB
BoP
From Primes to Paths: Enabling Fast Multi-Relational Graph Analysis
2024-11-17T00:00:00
https://arxiv.org/abs/2411.11149v1
[ "https://github.com/kbogas/PAM_BoP" ]
In the paper 'From Primes to Paths: Enabling Fast Multi-Relational Graph Analysis', what Accuracy score did the BoP model get on the AIFB dataset
92.22
Atari 2600 Bowling
ASL DDQN
Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity
2023-05-07T00:00:00
https://arxiv.org/abs/2305.04180v3
[ "https://github.com/xinjinghao/color" ]
In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score score did the ASL DDQN model get on the Atari 2600 Bowling dataset
62.4
IEMOCAP
CORECT (6-class)
Conversation Understanding using Relational Temporal Graph Neural Networks with Auxiliary Cross-Modality Interaction
2023-11-08T00:00:00
https://arxiv.org/abs/2311.04507v3
[ "https://github.com/leson502/CORECT_EMNLP2023" ]
In the paper 'Conversation Understanding using Relational Temporal Graph Neural Networks with Auxiliary Cross-Modality Interaction', what F1 score did the CORECT (6-class) model get on the IEMOCAP dataset
0.702
SemanticKITTI
PPT+SparseUNet
Towards Large-scale 3D Representation Learning with Multi-dataset Point Prompt Training
2023-08-18T00:00:00
https://arxiv.org/abs/2308.09718v2
[ "https://github.com/Pointcept/Pointcept" ]
In the paper 'Towards Large-scale 3D Representation Learning with Multi-dataset Point Prompt Training', what val mIoU score did the PPT+SparseUNet model get on the SemanticKITTI dataset
71.4%
LoveDA
AerialFormer-B
AerialFormer: Multi-resolution Transformer for Aerial Image Segmentation
2023-06-12T00:00:00
https://arxiv.org/abs/2306.06842v2
[ "https://github.com/UARK-AICV/AerialFormer" ]
In the paper 'AerialFormer: Multi-resolution Transformer for Aerial Image Segmentation', what Category mIoU score did the AerialFormer-B model get on the LoveDA dataset
54.1
SIMMC2.0
PaCE
PaCE: Unified Multi-modal Dialogue Pre-training with Progressive and Compositional Experts
2023-05-24T00:00:00
https://arxiv.org/abs/2305.14839v2
[ "https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/pace" ]
In the paper 'PaCE: Unified Multi-modal Dialogue Pre-training with Progressive and Compositional Experts', what BLEU score did the PaCE model get on the SIMMC2.0 dataset
34.1
UCSD Ped2
MULDE-object-centric-micro
MULDE: Multiscale Log-Density Estimation via Denoising Score Matching for Video Anomaly Detection
2024-03-21T00:00:00
https://arxiv.org/abs/2403.14497v1
[ "https://github.com/jakubmicorek/MULDE-Multiscale-Log-Density-Estimation-via-Denoising-Score-Matching-for-Video-Anomaly-Detection" ]
In the paper 'MULDE: Multiscale Log-Density Estimation via Denoising Score Matching for Video Anomaly Detection', what AUC score did the MULDE-object-centric-micro model get on the UCSD Ped2 dataset
99.7%
CUB-200-2011
EfficientDML-VPTSP-G/512
Learning Semantic Proxies from Visual Prompts for Parameter-Efficient Fine-Tuning in Deep Metric Learning
2024-02-04T00:00:00
https://arxiv.org/abs/2402.02340v2
[ "https://github.com/noahsark/parameterefficient-dml" ]
In the paper 'Learning Semantic Proxies from Visual Prompts for Parameter-Efficient Fine-Tuning in Deep Metric Learning', what R@1 score did the EfficientDML-VPTSP-G/512 model get on the CUB-200-2011 dataset
88.5
ActivityNet Captions
VTimeLLM
VTimeLLM: Empower LLM to Grasp Video Moments
2023-11-30T00:00:00
https://arxiv.org/abs/2311.18445v1
[ "https://github.com/huangb23/vtimellm" ]
In the paper 'VTimeLLM: Empower LLM to Grasp Video Moments', what CIDEr score did the VTimeLLM model get on the ActivityNet Captions dataset
27.6
SVT
NRTR+TPS++
TPS++: Attention-Enhanced Thin-Plate Spline for Scene Text Recognition
2023-05-09T00:00:00
https://arxiv.org/abs/2305.05322v1
[ "https://github.com/simplify23/tps_pp" ]
In the paper 'TPS++: Attention-Enhanced Thin-Plate Spline for Scene Text Recognition', what Accuracy score did the NRTR+TPS++ model get on the SVT dataset
94.6
PASCAL VOC
TFOC
Training-free Object Counting with Prompts
2023-06-30T00:00:00
https://arxiv.org/abs/2307.00038v2
[ "https://github.com/shizenglin/training-free-object-counter" ]
In the paper 'Training-free Object Counting with Prompts', what mRMSE score did the TFOC model get on the PASCAL VOC dataset
0.0084
PeMSD7
STD-MAE
Spatial-Temporal-Decoupled Masked Pre-training for Spatiotemporal Forecasting
2023-12-01T00:00:00
https://arxiv.org/abs/2312.00516v3
[ "https://github.com/jimmy-7664/std-mae" ]
In the paper 'Spatial-Temporal-Decoupled Masked Pre-training for Spatiotemporal Forecasting', what 12 steps MAE score did the STD-MAE model get on the PeMSD7 dataset
18.31
Cooperative Vision-and-Dialogue Navigation
VLN-PETL
VLN-PETL: Parameter-Efficient Transfer Learning for Vision-and-Language Navigation
2023-08-20T00:00:00
https://arxiv.org/abs/2308.10172v1
[ "https://github.com/yanyuanqiao/vln-petl" ]
In the paper 'VLN-PETL: Parameter-Efficient Transfer Learning for Vision-and-Language Navigation', what dist_to_end_reduction score did the VLN-PETL model get on the Cooperative Vision-and-Dialogue Navigation dataset
6.13
OpenMIC-2018
EAsT-KD + PaSST
Audio Embeddings as Teachers for Music Classification
2023-06-30T00:00:00
https://arxiv.org/abs/2306.17424v1
[ "https://github.com/suncerock/EAsT-music-classification" ]
In the paper 'Audio Embeddings as Teachers for Music Classification', what mean average precision score did the EAsT-KD + PaSST model get on the OpenMIC-2018 dataset
0.852
IMDB-Clean
MiVOLO-V2
Beyond Specialization: Assessing the Capabilities of MLLMs in Age and Gender Estimation
2024-03-04T00:00:00
https://arxiv.org/abs/2403.02302v3
[ "https://github.com/wildchlamydia/mivolo" ]
In the paper 'Beyond Specialization: Assessing the Capabilities of MLLMs in Age and Gender Estimation', what Average mean absolute error score did the MiVOLO-V2 model get on the IMDB-Clean dataset
3.97
FSS-1000 (5-shot)
Annotation-free FSS (With Annotation,ResNet-50)
Self-supervised Few-shot Learning for Semantic Segmentation: An Annotation-free Approach
2023-07-26T00:00:00
https://arxiv.org/abs/2307.14446v1
[ "https://github.com/mindflow-institue/annotation_free_fewshot" ]
In the paper 'Self-supervised Few-shot Learning for Semantic Segmentation: An Annotation-free Approach', what Mean IoU score did the Annotation-free FSS (With Annotation,ResNet-50) model get on the FSS-1000 (5-shot) dataset
87.9
MeerKAT: Meerkat Kalahari Audio Transcripts
animal2vec
animal2vec and MeerKAT: A self-supervised transformer for rare-event raw audio input and a large-scale reference dataset for bioacoustics
2024-06-03T00:00:00
https://arxiv.org/abs/2406.01253v2
[ "https://github.com/livingingroups/animal2vec" ]
In the paper 'animal2vec and MeerKAT: A self-supervised transformer for rare-event raw audio input and a large-scale reference dataset for bioacoustics', what AP score did the animal2vec model get on the MeerKAT: Meerkat Kalahari Audio Transcripts dataset
0.91
VoiceBank + DEMAND
Schrödinger bridge (PESQ loss)
Investigating Training Objectives for Generative Speech Enhancement
2024-09-16T00:00:00
https://arxiv.org/abs/2409.10753v1
[ "https://github.com/sp-uhh/sgmse" ]
In the paper 'Investigating Training Objectives for Generative Speech Enhancement', what PESQ score did the Schrödinger bridge (PESQ loss) model get on the VoiceBank + DEMAND dataset
3.70
CATT
Shakkala
CATT: Character-based Arabic Tashkeel Transformer
2024-07-03T00:00:00
https://arxiv.org/abs/2407.03236v3
[ "https://github.com/abjadai/catt" ]
In the paper 'CATT: Character-based Arabic Tashkeel Transformer', what DER(%) score did the Shakkala model get on the CATT dataset
13.494
GTAV-to-Cityscapes Labels
CMFormer
Learning Content-enhanced Mask Transformer for Domain Generalized Urban-Scene Segmentation
2023-07-01T00:00:00
https://arxiv.org/abs/2307.00371v5
[ "https://github.com/BiQiWHU/CMFormer" ]
In the paper 'Learning Content-enhanced Mask Transformer for Domain Generalized Urban-Scene Segmentation', what mIoU score did the CMFormer model get on the GTAV-to-Cityscapes Labels dataset
55.3
STAR Benchmark
SeViLA
Self-Chained Image-Language Model for Video Localization and Question Answering
2023-05-11T00:00:00
https://arxiv.org/abs/2305.06988v2
[ "https://github.com/yui010206/sevila" ]
In the paper 'Self-Chained Image-Language Model for Video Localization and Question Answering', what Average Accuracy score did the SeViLA model get on the STAR Benchmark dataset
64.9
ReCoRD
PaLM 2-S (one-shot)
PaLM 2 Technical Report
2023-05-17T00:00:00
https://arxiv.org/abs/2305.10403v3
[ "https://github.com/eternityyw/tram-benchmark" ]
In the paper 'PaLM 2 Technical Report', what F1 score did the PaLM 2-S (one-shot) model get on the ReCoRD dataset
92.1
LIVE-VQC
ReLaX-VQA
ReLaX-VQA: Residual Fragment and Layer Stack Extraction for Enhancing Video Quality Assessment
2024-07-16T00:00:00
https://arxiv.org/abs/2407.11496v1
[ "https://github.com/xinyiw915/relax-vqa" ]
In the paper 'ReLaX-VQA: Residual Fragment and Layer Stack Extraction for Enhancing Video Quality Assessment', what PLCC score did the ReLaX-VQA model get on the LIVE-VQC dataset
0.8079
ANLI test
ChatGPT
A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets
2023-05-29T00:00:00
https://arxiv.org/abs/2305.18486v4
[ "https://github.com/ntunlp/chatgpt_eval" ]
In the paper 'A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets', what A1 score did the ChatGPT model get on the ANLI test dataset
62.3
SFCHD
FCOS
Large, Complex, and Realistic Safety Clothing and Helmet Detection: Dataset and Method
2023-06-03T00:00:00
https://arxiv.org/abs/2306.02098v2
[ "https://github.com/lijfrank-open/SFCHD-SCALE" ]
In the paper 'Large, Complex, and Realistic Safety Clothing and Helmet Detection: Dataset and Method', what mAP@0.50 score did the FCOS model get on the SFCHD dataset
76.4
MBPP
DeepSeek-Coder-Base 33B (few-shot)
DeepSeek-Coder: When the Large Language Model Meets Programming -- The Rise of Code Intelligence
2024-01-25T00:00:00
https://arxiv.org/abs/2401.14196v2
[ "https://github.com/deepseek-ai/DeepSeek-Coder" ]
In the paper 'DeepSeek-Coder: When the Large Language Model Meets Programming -- The Rise of Code Intelligence', what Accuracy score did the DeepSeek-Coder-Base 33B (few-shot) model get on the MBPP dataset
66