dataset      string (length 0–82)
model_name   string (length 0–150)
paper_title  string (length 19–175)
paper_date   timestamp[ns]
paper_url    string (length 32–35)
code_links   list (length 1–1)
prompts      string (length 105–331)
answer       string (length 1–67)
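A record in this dump can be represented and checked against the schema above with a minimal Python sketch. The field names come from the column list; the values are copied from the first row below; the `validate` helper is illustrative, not part of the dataset's tooling.

```python
from datetime import datetime

# One record from the dump below, keyed by the schema's column names.
record = {
    "dataset": "CUHK-Shadow",
    "model_name": "SDDNet (MM 2023) (256x256)",
    "paper_title": "SDDNet: Style-guided Dual-layer Disentanglement Network for Shadow Detection",
    "paper_date": datetime.fromisoformat("2023-08-17T00:00:00"),
    "paper_url": "https://arxiv.org/abs/2308.08935v2",
    "code_links": ["https://github.com/rmcong/sddnet_acmmm23"],
    "prompts": "In the paper 'SDDNet: Style-guided Dual-layer Disentanglement Network for Shadow Detection', what BER score did the SDDNet (MM 2023) (256x256) model get on the CUHK-Shadow dataset",
    "answer": "8.66",
}

def validate(rec: dict) -> bool:
    """Check a record against the column schema: six string fields,
    one timestamp field, and a single-element list of code links."""
    string_fields = ["dataset", "model_name", "paper_title",
                     "paper_url", "prompts", "answer"]
    ok = all(isinstance(rec[f], str) for f in string_fields)
    ok = ok and isinstance(rec["paper_date"], datetime)
    ok = ok and isinstance(rec["code_links"], list) and len(rec["code_links"]) == 1
    return ok

print(validate(record))  # True
```

Each subsequent 8-line block in the dump maps onto the same dict shape.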
CUHK-Shadow
SDDNet (MM 2023) (256x256)
SDDNet: Style-guided Dual-layer Disentanglement Network for Shadow Detection
2023-08-17T00:00:00
https://arxiv.org/abs/2308.08935v2
[ "https://github.com/rmcong/sddnet_acmmm23" ]
In the paper 'SDDNet: Style-guided Dual-layer Disentanglement Network for Shadow Detection', what BER score did the SDDNet (MM 2023) (256x256) model get on the CUHK-Shadow dataset
8.66
SemTabNet
T5
Statements: Universal Information Extraction from Tables with Large Language Models for ESG KPIs
2024-06-27T00:00:00
https://arxiv.org/abs/2406.19102v1
[ "https://github.com/ds4sd/semtabnet" ]
In the paper 'Statements: Universal Information Extraction from Tables with Large Language Models for ESG KPIs', what average Tree Similarity Score score did the T5 model get on the SemTabNet dataset
81.76
CIFAR-10-LT (ρ=10)
VS + ADRW + TLA
A Unified Generalization Analysis of Re-Weighting and Logit-Adjustment for Imbalanced Learning
2023-10-07T00:00:00
https://arxiv.org/abs/2310.04752
[ "https://github.com/wang22ti/DDC" ]
In the paper 'A Unified Generalization Analysis of Re-Weighting and Logit-Adjustment for Imbalanced Learning', what Error Rate score did the VS + ADRW + TLA model get on the CIFAR-10-LT (ρ=10) dataset
8.18
View-of-Delft (val)
HyDRa
Unleashing HyDRa: Hybrid Fusion, Depth Consistency and Radar for Unified 3D Perception
2024-03-12T00:00:00
https://arxiv.org/abs/2403.07746v2
[ "https://github.com/phi-wol/hydra" ]
In the paper 'Unleashing HyDRa: Hybrid Fusion, Depth Consistency and Radar for Unified 3D Perception', what mAP score did the HyDRa model get on the View-of-Delft (val) dataset
60.9
PerSeg
P^2SAM
Part-aware Personalized Segment Anything Model for Patient-Specific Segmentation
2024-03-08T00:00:00
https://arxiv.org/abs/2403.05433v1
[ "https://github.com/Zch0414/P2SAM" ]
In the paper 'Part-aware Personalized Segment Anything Model for Patient-Specific Segmentation', what mIoU score did the P^2SAM model get on the PerSeg dataset
95.66
KITTI Odometry Benchmark
SCIPaD
SCIPaD: Incorporating Spatial Clues into Unsupervised Pose-Depth Joint Learning
2024-07-07T00:00:00
https://arxiv.org/abs/2407.05283v1
[ "https://github.com/fengyi233/SCIPaD" ]
In the paper 'SCIPaD: Incorporating Spatial Clues into Unsupervised Pose-Depth Joint Learning', what Absolute Trajectory Error [m] score did the SCIPaD model get on the KITTI Odometry Benchmark dataset
20.83
GRAZPEDWRI-DX
YOLOv5x
Enhancing Wrist Fracture Detection with YOLO
2024-07-17T00:00:00
https://arxiv.org/abs/2407.12597v2
[ "https://github.com/ammarlodhi255/pediatric_wrist_abnormality_detection-end-to-end-implementation" ]
In the paper 'Enhancing Wrist Fracture Detection with YOLO', what mAP score did the YOLOv5x model get on the GRAZPEDWRI-DX dataset
69.00
USNA-Cn2 (short-duration)
Persistence
Effective Benchmarks for Optical Turbulence Modeling
2024-01-07T00:00:00
https://arxiv.org/abs/2401.03573v1
[ "https://github.com/cdjellen/otbench" ]
In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the Persistence model get on the USNA-Cn2 (short-duration) dataset
0.821
WOST
CLIP4STR-B
CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model
2023-05-23T00:00:00
https://arxiv.org/abs/2305.14014v3
[ "https://github.com/VamosC/CLIP4STR" ]
In the paper 'CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model', what 1:1 Accuracy score did the CLIP4STR-B model get on the WOST dataset
87.0
COCO-20i (1-shot)
Matcher(DINOv2)
Matcher: Segment Anything with One Shot Using All-Purpose Feature Matching
2023-05-22T00:00:00
https://arxiv.org/abs/2305.13310v2
[ "https://github.com/aim-uofa/matcher" ]
In the paper 'Matcher: Segment Anything with One Shot Using All-Purpose Feature Matching', what Mean IoU score did the Matcher(DINOv2) model get on the COCO-20i (1-shot) dataset
52.7
MM-Vet
Dynamic-LLaVA-7B
Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Context Sparsification
2024-12-01T00:00:00
https://arxiv.org/abs/2412.00876v2
[ "https://github.com/osilly/dynamic_llava" ]
In the paper 'Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Context Sparsification', what GPT-4 score score did the Dynamic-LLaVA-7B model get on the MM-Vet dataset
32.2
MATH
ToRA-Code 13B (w/ code)
ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving
2023-09-29T00:00:00
https://arxiv.org/abs/2309.17452v4
[ "https://github.com/microsoft/tora" ]
In the paper 'ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving', what Accuracy score did the ToRA-Code 13B (w/ code) model get on the MATH dataset
48.1
COCO minival
GLEE-Lite
General Object Foundation Model for Images and Videos at Scale
2023-12-14T00:00:00
https://arxiv.org/abs/2312.09158v1
[ "https://github.com/FoundationVision/GLEE" ]
In the paper 'General Object Foundation Model for Images and Videos at Scale', what box AP score did the GLEE-Lite model get on the COCO minival dataset
55.0
UK Biobank Brain MRI
NeuroPath
NeuroPath: A Neural Pathway Transformer for Joining the Dots of Human Connectomes
2024-09-26T00:00:00
https://arxiv.org/abs/2409.17510v3
[ "https://github.com/Chrisa142857/neuro_detour" ]
In the paper 'NeuroPath: A Neural Pathway Transformer for Joining the Dots of Human Connectomes', what Accuracy score did the NeuroPath model get on the UK Biobank Brain MRI dataset
99.59
NExT-QA (Open-ended VideoQA)
Flash-VStream
Flash-VStream: Memory-Based Real-Time Understanding for Long Video Streams
2024-06-12T00:00:00
https://arxiv.org/abs/2406.08085v2
[ "https://github.com/IVGSZ/Flash-VStream" ]
In the paper 'Flash-VStream: Memory-Based Real-Time Understanding for Long Video Streams', what Accuracy score did the Flash-VStream model get on the NExT-QA (Open-ended VideoQA) dataset
61.6
Caltech-101
RPO
Read-only Prompt Optimization for Vision-Language Few-shot Learning
2023-08-29T00:00:00
https://arxiv.org/abs/2308.14960v2
[ "https://github.com/mlvlab/rpo" ]
In the paper 'Read-only Prompt Optimization for Vision-Language Few-shot Learning', what Harmonic mean score did the RPO model get on the Caltech-101 dataset
96.03
CropDisease
RFS+MLP
Improving Cross-domain Few-shot Classification with Multilayer Perceptron
2023-12-15T00:00:00
https://arxiv.org/abs/2312.09589v1
[ "https://github.com/BaiShuanghao/CDFSC-MLP" ]
In the paper 'Improving Cross-domain Few-shot Classification with Multilayer Perceptron', what 5 shot score did the RFS+MLP model get on the CropDisease dataset
89.68
MSCOCO
LP-OVOD
LP-OVOD: Open-Vocabulary Object Detection by Linear Probing
2023-10-26T00:00:00
https://arxiv.org/abs/2310.17109v2
[ "https://github.com/vinairesearch/lp-ovod" ]
In the paper 'LP-OVOD: Open-Vocabulary Object Detection by Linear Probing', what AP 0.5 score did the LP-OVOD model get on the MSCOCO dataset
40.5
Weather (192)
TSMixer
TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting
2023-06-14T00:00:00
https://arxiv.org/abs/2306.09364v4
[ "https://github.com/ibm/tsfm" ]
In the paper 'TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting', what MSE score did the TSMixer model get on the Weather (192) dataset
0.191
ARC (Challenge)
PaLM 2-S (1-shot)
PaLM 2 Technical Report
2023-05-17T00:00:00
https://arxiv.org/abs/2305.10403v3
[ "https://github.com/eternityyw/tram-benchmark" ]
In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-S (1-shot) model get on the ARC (Challenge) dataset
59.6
PASCAL VOC
OneNete,4-S
OneNet: A Channel-Wise 1D Convolutional U-Net
2024-11-14T00:00:00
https://arxiv.org/abs/2411.09838v1
[ "https://github.com/shbyun080/onenet" ]
In the paper 'OneNet: A Channel-Wise 1D Convolutional U-Net', what mAP0.5 score did the OneNete,4-S model get on the PASCAL VOC dataset
52.75
FGVC-Aircraft
ZLaP
Label Propagation for Zero-shot Classification with Vision-Language Models
2024-04-05T00:00:00
https://arxiv.org/abs/2404.04072v1
[ "https://github.com/vladan-stojnic/zlap" ]
In the paper 'Label Propagation for Zero-shot Classification with Vision-Language Models', what Accuracy score did the ZLaP model get on the FGVC-Aircraft dataset
29.1
S2Looking
CGNet
Change Guiding Network: Incorporating Change Prior to Guide Change Detection in Remote Sensing Imagery
2024-04-14T00:00:00
https://arxiv.org/abs/2404.09179v1
[ "https://github.com/chengxihan/cgnet-cd" ]
In the paper 'Change Guiding Network: Incorporating Change Prior to Guide Change Detection in Remote Sensing Imagery', what F1-Score score did the CGNet model get on the S2Looking dataset
64.33
ADE20K training-free zero-shot segmentation
COSMOS ViT-B/16
COSMOS: Cross-Modality Self-Distillation for Vision Language Pre-training
2024-12-02T00:00:00
https://arxiv.org/abs/2412.01814v1
[ "https://github.com/ExplainableML/cosmos" ]
In the paper 'COSMOS: Cross-Modality Self-Distillation for Vision Language Pre-training', what mIoU score did the COSMOS ViT-B/16 model get on the ADE20K training-free zero-shot segmentation dataset
17.7
Kvasir-SEG
ADSNet
Adaptation of Distinct Semantics for Uncertain Areas in Polyp Segmentation
2024-05-13T00:00:00
https://arxiv.org/abs/2405.07523v1
[ "https://github.com/vinhhust2806/ADSNet" ]
In the paper 'Adaptation of Distinct Semantics for Uncertain Areas in Polyp Segmentation', what mean Dice score did the ADSNet model get on the Kvasir-SEG dataset
0.92
PPI
GCN + SAF
The Split Matters: Flat Minima Methods for Improving the Performance of GNNs
2023-06-15T00:00:00
https://arxiv.org/abs/2306.09121v1
[ "https://github.com/foisunt/fmms-in-gnns" ]
In the paper 'The Split Matters: Flat Minima Methods for Improving the Performance of GNNs', what F1 score did the GCN + SAF model get on the PPI dataset
99.38 ± 0.01%
ActivityNet Captions
CM²
Do You Remember? Dense Video Captioning with Cross-Modal Memory Retrieval
2024-04-11T00:00:00
https://arxiv.org/abs/2404.07610v1
[ "https://github.com/ailab-kyunghee/cm2_dvc" ]
In the paper 'Do You Remember? Dense Video Captioning with Cross-Modal Memory Retrieval', what METEOR score did the CM² model get on the ActivityNet Captions dataset
8.55
FreiHAND
WiLoR
WiLoR: End-to-end 3D Hand Localization and Reconstruction in-the-wild
2024-09-18T00:00:00
https://arxiv.org/abs/2409.12259v1
[ "https://github.com/rolpotamias/WiLoR" ]
In the paper 'WiLoR: End-to-end 3D Hand Localization and Reconstruction in-the-wild', what PA-MPVPE score did the WiLoR model get on the FreiHAND dataset
5.1
CrowdPose
RTMO-l
RTMO: Towards High-Performance One-Stage Real-Time Multi-Person Pose Estimation
2023-12-12T00:00:00
https://arxiv.org/abs/2312.07526v2
[ "https://github.com/open-mmlab/mmpose" ]
In the paper 'RTMO: Towards High-Performance One-Stage Real-Time Multi-Person Pose Estimation', what mAP @0.5:0.95 score did the RTMO-l model get on the CrowdPose dataset
83.8
BSD100 - 4x upscaling
WaveMixSR
WaveMixSR: A Resource-efficient Neural Network for Image Super-resolution
2023-07-01T00:00:00
https://arxiv.org/abs/2307.00430v1
[ "https://github.com/pranavphoenix/WaveMixSR" ]
In the paper 'WaveMixSR: A Resource-efficient Neural Network for Image Super-resolution', what SSIM score did the WaveMixSR model get on the BSD100 - 4x upscaling dataset
0.7605
COCO-20i (5-shot)
MSDNet (ResNet-101)
MSDNet: Multi-Scale Decoder for Few-Shot Semantic Segmentation via Transformer-Guided Prototyping
2024-09-17T00:00:00
https://arxiv.org/abs/2409.11316v1
[ "https://github.com/amirrezafateh/msdnet" ]
In the paper 'MSDNet: Multi-Scale Decoder for Few-Shot Semantic Segmentation via Transformer-Guided Prototyping', what Mean IoU score did the MSDNet (ResNet-101) model get on the COCO-20i (5-shot) dataset
55.3
BanglaBook
Multinomial NB (BoW)
BanglaBook: A Large-scale Bangla Dataset for Sentiment Analysis from Book Reviews
2023-05-11T00:00:00
https://arxiv.org/abs/2305.06595v3
[ "https://github.com/mohsinulkabir14/banglabook" ]
In the paper 'BanglaBook: A Large-scale Bangla Dataset for Sentiment Analysis from Book Reviews', what Weighted Average F1-score score did the Multinomial NB (BoW) model get on the BanglaBook dataset
0.8564
ICFG-PEDES
MARS
MARS: Paying more attention to visual attributes for text-based person search
2024-07-05T00:00:00
https://arxiv.org/abs/2407.04287v1
[ "https://github.com/ergastialex/mars" ]
In the paper 'MARS: Paying more attention to visual attributes for text-based person search', what mAP score did the MARS model get on the ICFG-PEDES dataset
44.93
MVTec AD
AnomalyDINO-S (4-shot)
AnomalyDINO: Boosting Patch-based Few-shot Anomaly Detection with DINOv2
2024-05-23T00:00:00
https://arxiv.org/abs/2405.14529v2
[ "https://github.com/dammsi/AnomalyDINO" ]
In the paper 'AnomalyDINO: Boosting Patch-based Few-shot Anomaly Detection with DINOv2', what Detection AUROC score did the AnomalyDINO-S (4-shot) model get on the MVTec AD dataset
97.7
RST-DT
DMRST
Bilingual Rhetorical Structure Parsing with Large Parallel Annotations
2024-09-23T00:00:00
https://arxiv.org/abs/2409.14969v1
[ "https://github.com/tchewik/bilingualrsp" ]
In the paper 'Bilingual Rhetorical Structure Parsing with Large Parallel Annotations', what Standard Parseval (Span) score did the DMRST model get on the RST-DT dataset
78.7 ± 0.4
TVBench
PLLaVA-13B
PLLaVA : Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning
2024-04-25T00:00:00
https://arxiv.org/abs/2404.16994v2
[ "https://github.com/magic-research/PLLaVA" ]
In the paper 'PLLaVA : Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning', what Average Accuracy score did the PLLaVA-13B model get on the TVBench dataset
36.4
RefCOCO+ testA
Florence-2-large-ft
Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks
2023-11-10T00:00:00
https://arxiv.org/abs/2311.06242v1
[ "https://github.com/retkowsky/florence-2" ]
In the paper 'Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks', what Accuracy (%) score did the Florence-2-large-ft model get on the RefCOCO+ testA dataset
95.3
MATH
DART-Math-Llama3-70B-Uniform (0-shot CoT, w/o code)
DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving
2024-06-18T00:00:00
https://arxiv.org/abs/2407.13690v1
[ "https://github.com/hkust-nlp/dart-math" ]
In the paper 'DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving', what Accuracy score did the DART-Math-Llama3-70B-Uniform (0-shot CoT, w/o code) model get on the MATH dataset
54.9
MAWPS
ATHENA (roberta-base)
ATHENA: Mathematical Reasoning with Thought Expansion
2023-11-02T00:00:00
https://arxiv.org/abs/2311.01036v1
[ "https://github.com/the-jb/athena-math" ]
In the paper 'ATHENA: Mathematical Reasoning with Thought Expansion', what Accuracy (%) score did the ATHENA (roberta-base) model get on the MAWPS dataset
92.2
ChEBI-20
TGM-DLM
Text-Guided Molecule Generation with Diffusion Language Model
2024-02-20T00:00:00
https://arxiv.org/abs/2402.13040v1
[ "https://github.com/deno-v/tgm-dlm" ]
In the paper 'Text-Guided Molecule Generation with Diffusion Language Model', what Text2Mol score did the TGM-DLM model get on the ChEBI-20 dataset
58.1
WSJ0-2mix
TD-Conformer (XL) + DM
On Time Domain Conformer Models for Monaural Speech Separation in Noisy Reverberant Acoustic Environments
2023-10-09T00:00:00
https://arxiv.org/abs/2310.06125v1
[ "https://github.com/jwr1995/pubsep" ]
In the paper 'On Time Domain Conformer Models for Monaural Speech Separation in Noisy Reverberant Acoustic Environments', what SI-SDRi score did the TD-Conformer (XL) + DM model get on the WSJ0-2mix dataset
21.2
Cora
CGT
Mitigating Degree Biases in Message Passing Mechanism by Utilizing Community Structures
2023-12-28T00:00:00
https://arxiv.org/abs/2312.16788v1
[ "https://github.com/nslab-cuk/community-aware-graph-transformer" ]
In the paper 'Mitigating Degree Biases in Message Passing Mechanism by Utilizing Community Structures', what Accuracy score did the CGT model get on the Cora dataset
87.10±1.53
AI2D
Gemini Ultra
Gemini: A Family of Highly Capable Multimodal Models
2023-12-19T00:00:00
https://arxiv.org/abs/2312.11805v4
[ "https://github.com/valdecy/pybibx" ]
In the paper 'Gemini: A Family of Highly Capable Multimodal Models', what EM score did the Gemini Ultra model get on the AI2D dataset
79.5
WinoGrande
LLaMA3 8B+MoSLoRA
Mixture-of-Subspaces in Low-Rank Adaptation
2024-06-16T00:00:00
https://arxiv.org/abs/2406.11909v3
[ "https://github.com/wutaiqiang/moslora" ]
In the paper 'Mixture-of-Subspaces in Low-Rank Adaptation', what Accuracy score did the LLaMA3 8B+MoSLoRA model get on the WinoGrande dataset
85.8
Synapse multi-organ CT
SegFormer3D
SegFormer3D: an Efficient Transformer for 3D Medical Image Segmentation
2024-04-15T00:00:00
https://arxiv.org/abs/2404.10156v2
[ "https://github.com/osupcvlab/segformer3d" ]
In the paper 'SegFormer3D: an Efficient Transformer for 3D Medical Image Segmentation', what Avg DSC score did the SegFormer3D model get on the Synapse multi-organ CT dataset
82.15
MixSNIPS
BiSLU
Joint Multiple Intent Detection and Slot Filling with Supervised Contrastive Learning and Self-Distillation
2023-08-28T00:00:00
https://arxiv.org/abs/2308.14654v1
[ "https://github.com/anhtunguyen98/bislu" ]
In the paper 'Joint Multiple Intent Detection and Slot Filling with Supervised Contrastive Learning and Self-Distillation', what Micro F1 score did the BiSLU model get on the MixSNIPS dataset
97.2
The Pile
Phi-3 3.8B
Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs
2024-10-10T00:00:00
https://arxiv.org/abs/2410.08020v2
[ "https://github.com/jonhue/activeft" ]
In the paper 'Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs', what Bits per byte score did the Phi-3 3.8B model get on The Pile dataset
0.679
MVTec LOCO AD
PUAD-S
PUAD: Frustratingly Simple Method for Robust Anomaly Detection
2024-02-23T00:00:00
https://arxiv.org/abs/2402.15143v1
[ "https://github.com/LeapMind/PUAD" ]
In the paper 'PUAD: Frustratingly Simple Method for Robust Anomaly Detection', what Avg. Detection AUROC score did the PUAD-S model get on the MVTec LOCO AD dataset
93.1
VulScribeR
Reveal Model - Tested on Reveal (Training on Devign + VulScribeR 20K + Extra Cleans)
Exploring RAG-based Vulnerability Augmentation with LLMs
2024-08-07T00:00:00
https://arxiv.org/abs/2408.04125v2
[ "https://github.com/VulScribeR/VulScribeR" ]
In the paper 'Exploring RAG-based Vulnerability Augmentation with LLMs', what F1 Score score did the Reveal Model - Tested on Reveal (Training on Devign + VulScribeR 20K + Extra Cleans) model get on the VulScribeR dataset
26.18
CSL-Daily
AdaBrowse
AdaBrowse: Adaptive Video Browser for Efficient Continuous Sign Language Recognition
2023-08-16T00:00:00
https://arxiv.org/abs/2308.08327v1
[ "https://github.com/hulianyuyy/adabrowse" ]
In the paper 'AdaBrowse: Adaptive Video Browser for Efficient Continuous Sign Language Recognition', what Word Error Rate (WER) score did the AdaBrowse model get on the CSL-Daily dataset
30.6
QuerYD
TESTA (ViT-B/16)
TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding
2023-10-29T00:00:00
https://arxiv.org/abs/2310.19060v1
[ "https://github.com/renshuhuai-andy/testa" ]
In the paper 'TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding', what text-to-video R@1 score did the TESTA (ViT-B/16) model get on the QuerYD dataset
83.4
CUHK-PEDES
RaSa
RaSa: Relation and Sensitivity Aware Representation Learning for Text-based Person Search
2023-05-23T00:00:00
https://arxiv.org/abs/2305.13653v1
[ "https://github.com/flame-chasers/rasa" ]
In the paper 'RaSa: Relation and Sensitivity Aware Representation Learning for Text-based Person Search', what R@1 score did the RaSa model get on the CUHK-PEDES dataset
76.51
kickstarter
LightGBM + RoBERTa embedding
PyTorch Frame: A Modular Framework for Multi-Modal Tabular Learning
2024-03-31T00:00:00
https://arxiv.org/abs/2404.00776v1
[ "https://github.com/pyg-team/pytorch-frame" ]
In the paper 'PyTorch Frame: A Modular Framework for Multi-Modal Tabular Learning', what AUROC score did the LightGBM + RoBERTa embedding model get on the kickstarter dataset
0.767
HIV dataset
SMA
Self-Guided Masked Autoencoders for Domain-Agnostic Self-Supervised Learning
2024-02-22T00:00:00
https://arxiv.org/abs/2402.14789v1
[ "https://github.com/johnathan-xie/sma" ]
In the paper 'Self-Guided Masked Autoencoders for Domain-Agnostic Self-Supervised Learning', what AUC score did the SMA model get on the HIV dataset
0.789
Amazon Games
ProxyRCA
Proxy-based Item Representation for Attribute and Context-aware Recommendation
2023-12-11T00:00:00
https://arxiv.org/abs/2312.06145v1
[ "https://github.com/theeluwin/ProxyRCA" ]
In the paper 'Proxy-based Item Representation for Attribute and Context-aware Recommendation', what Hit@10 score did the ProxyRCA model get on the Amazon Games dataset
0.809
TSS
SD+DINO (Zero-shot)
A Tale of Two Features: Stable Diffusion Complements DINO for Zero-Shot Semantic Correspondence
2023-05-24T00:00:00
https://arxiv.org/abs/2305.15347v2
[ "https://github.com/Junyi42/sd-dino" ]
In the paper 'A Tale of Two Features: Stable Diffusion Complements DINO for Zero-Shot Semantic Correspondence', what Average PCK@0.05 score did the SD+DINO (Zero-shot) model get on the TSS dataset
79.7
SMAC MMM2_7m2M1M_vs_9m3M1M
VDN
A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning
2023-06-04T00:00:00
https://arxiv.org/abs/2306.02430v1
[ "https://github.com/j3soon/dfac-extended" ]
In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the VDN model get on the SMAC MMM2_7m2M1M_vs_9m3M1M dataset
75.00
MM-Vet
SeVa-13B
Self-Supervised Visual Preference Alignment
2024-04-16T00:00:00
https://arxiv.org/abs/2404.10501v2
[ "https://github.com/Kevinz-code/SeVa" ]
In the paper 'Self-Supervised Visual Preference Alignment', what GPT-4 score score did the SeVa-13B model get on the MM-Vet dataset
41.0
Microsoft COCO dataset
VTON-IT
VTON-IT: Virtual Try-On using Image Translation
2023-10-06T00:00:00
https://arxiv.org/abs/2310.04558v2
[ "https://github.com/shuntos/viton-it" ]
In the paper 'VTON-IT: Virtual Try-On using Image Translation', what SSIM score did the VTON-IT model get on the Microsoft COCO dataset
0.93
UCF101
ZLaP
Label Propagation for Zero-shot Classification with Vision-Language Models
2024-04-05T00:00:00
https://arxiv.org/abs/2404.04072v1
[ "https://github.com/vladan-stojnic/zlap" ]
In the paper 'Label Propagation for Zero-shot Classification with Vision-Language Models', what Accuracy score did the ZLaP model get on the UCF101 dataset
76.3
SIM10K to Cityscapes
PT (ResNet50-FPN)
Align and Distill: Unifying and Improving Domain Adaptive Object Detection
2024-03-18T00:00:00
https://arxiv.org/abs/2403.12029v2
[ "https://github.com/justinkay/aldi" ]
In the paper 'Align and Distill: Unifying and Improving Domain Adaptive Object Detection', what mAP@0.5 score did the PT (ResNet50-FPN) model get on the SIM10K to Cityscapes dataset
70.6
MSR-VTT
HiGen
Hierarchical Spatio-temporal Decoupling for Text-to-Video Generation
2023-12-07T00:00:00
https://arxiv.org/abs/2312.04483v1
[ "https://github.com/ali-vilab/VGen" ]
In the paper 'Hierarchical Spatio-temporal Decoupling for Text-to-Video Generation', what FID score did the HiGen model get on the MSR-VTT dataset
8.60
SAFIM
codegen-350M-multi
Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks
2024-03-07T00:00:00
https://arxiv.org/abs/2403.04814v3
[ "https://github.com/gonglinyuan/safim" ]
In the paper 'Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks', what Algorithmic score did the codegen-350M-multi model get on the SAFIM dataset
16.30
LibriSpeech test-clean
FAdam
FAdam: Adam is a natural gradient optimizer using diagonal empirical Fisher information
2024-05-21T00:00:00
https://arxiv.org/abs/2405.12807v10
[ "https://github.com/lessw2020/fadam_pytorch" ]
In the paper 'FAdam: Adam is a natural gradient optimizer using diagonal empirical Fisher information', what Word Error Rate (WER) score did the FAdam model get on the LibriSpeech test-clean dataset
1.34
MATH
DART-Math-Mistral-7B-Uniform (0-shot CoT, w/o code)
DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving
2024-06-18T00:00:00
https://arxiv.org/abs/2407.13690v1
[ "https://github.com/hkust-nlp/dart-math" ]
In the paper 'DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving', what Accuracy score did the DART-Math-Mistral-7B-Uniform (0-shot CoT, w/o code) model get on the MATH dataset
43.5
SIR^2(Wild)
RDNet
Reversible Decoupling Network for Single Image Reflection Removal
2024-10-10T00:00:00
https://arxiv.org/abs/2410.08063v1
[ "https://github.com/lime-j/RDNet" ]
In the paper 'Reversible Decoupling Network for Single Image Reflection Removal', what PSNR score did the RDNet model get on the SIR^2(Wild) dataset
27.7
MAESTRO
hFT-Transformer
Automatic Piano Transcription with Hierarchical Frequency-Time Transformer
2023-07-10T00:00:00
https://arxiv.org/abs/2307.04305v1
[ "https://github.com/sony/hft-transformer" ]
In the paper 'Automatic Piano Transcription with Hierarchical Frequency-Time Transformer', what Onset F1 score did the hFT-Transformer model get on the MAESTRO dataset
97.44
MPI-INF-3DHP
MotionAGFormer-L (T=81)
MotionAGFormer: Enhancing 3D Human Pose Estimation with a Transformer-GCNFormer Network
2023-10-25T00:00:00
https://arxiv.org/abs/2310.16288v1
[ "https://github.com/taatiteam/motionagformer" ]
In the paper 'MotionAGFormer: Enhancing 3D Human Pose Estimation with a Transformer-GCNFormer Network', what AUC score did the MotionAGFormer-L (T=81) model get on the MPI-INF-3DHP dataset
85.3
ScanNet200
Open-YOLO 3D
Open-YOLO 3D: Towards Fast and Accurate Open-Vocabulary 3D Instance Segmentation
2024-06-04T00:00:00
https://arxiv.org/abs/2406.02548v2
[ "https://github.com/aminebdj/openyolo3d" ]
In the paper 'Open-YOLO 3D: Towards Fast and Accurate Open-Vocabulary 3D Instance Segmentation', what mAP score did the Open-YOLO 3D model get on the ScanNet200 dataset
24.7
HumanML3D
ParCo
ParCo: Part-Coordinating Text-to-Motion Synthesis
2024-03-27T00:00:00
https://arxiv.org/abs/2403.18512v2
[ "https://github.com/qrzou/parco" ]
In the paper 'ParCo: Part-Coordinating Text-to-Motion Synthesis', what FID score did the ParCo model get on the HumanML3D dataset
0.109
LaSOT
ODTrack-L
ODTrack: Online Dense Temporal Token Learning for Visual Tracking
2024-01-03T00:00:00
https://arxiv.org/abs/2401.01686v1
[ "https://github.com/gxnu-zhonglab/odtrack" ]
In the paper 'ODTrack: Online Dense Temporal Token Learning for Visual Tracking', what AUC score did the ODTrack-L model get on the LaSOT dataset
74.0
DomainNet
UniDG + CORAL + ConvNeXt-B
Towards Unified and Effective Domain Generalization
2023-10-16T00:00:00
https://arxiv.org/abs/2310.10008v1
[ "https://github.com/invictus717/UniDG" ]
In the paper 'Towards Unified and Effective Domain Generalization', what Average Accuracy score did the UniDG + CORAL + ConvNeXt-B model get on the DomainNet dataset
59.5
TriviaQA
PaLM 2-S (one-shot)
PaLM 2 Technical Report
2023-05-17T00:00:00
https://arxiv.org/abs/2305.10403v3
[ "https://github.com/eternityyw/tram-benchmark" ]
In the paper 'PaLM 2 Technical Report', what EM score did the PaLM 2-S (one-shot) model get on the TriviaQA dataset
75.2
MATH
MMIQC-72B
Augmenting Math Word Problems via Iterative Question Composing
2024-01-17T00:00:00
https://arxiv.org/abs/2401.09003v4
[ "https://github.com/iiis-ai/iterativequestioncomposing" ]
In the paper 'Augmenting Math Word Problems via Iterative Question Composing', what Accuracy score did the MMIQC-72B model get on the MATH dataset
45.0
PF-PASCAL
SD+DINO (Supervised)
A Tale of Two Features: Stable Diffusion Complements DINO for Zero-Shot Semantic Correspondence
2023-05-24T00:00:00
https://arxiv.org/abs/2305.15347v2
[ "https://github.com/Junyi42/sd-dino" ]
In the paper 'A Tale of Two Features: Stable Diffusion Complements DINO for Zero-Shot Semantic Correspondence', what PCK score did the SD+DINO (Supervised) model get on the PF-PASCAL dataset
93.6
ICBHI Respiratory Sound Database
AFT on Mixed-500
Adversarial Fine-tuning using Generated Respiratory Sound to Address Class Imbalance
2023-11-11T00:00:00
https://arxiv.org/abs/2311.06480v1
[ "https://github.com/kaen2891/adversarial_fine-tuning_using_generated_respiratory_sound" ]
In the paper 'Adversarial Fine-tuning using Generated Respiratory Sound to Address Class Imbalance', what ICBHI Score score did the AFT on Mixed-500 model get on the ICBHI Respiratory Sound Database dataset
61.79
WebApp1k-Duo-React
o1-mini
A Case Study of Web App Coding with OpenAI Reasoning Models
2024-09-19T00:00:00
https://arxiv.org/abs/2409.13773v1
[ "https://github.com/onekq/webapp1k" ]
In the paper 'A Case Study of Web App Coding with OpenAI Reasoning Models', what pass@1 score did the o1-mini model get on the WebApp1k-Duo-React dataset
0.667
DotPrompts
SantaCoder
Guiding Language Models of Code with Global Context using Monitors
2023-06-19T00:00:00
https://arxiv.org/abs/2306.10763v2
[ "https://github.com/microsoft/monitors4codegen" ]
In the paper 'Guiding Language Models of Code with Global Context using Monitors', what Compilation Rate score did the SantaCoder model get on the DotPrompts dataset
59.79
ZINC
BoP
From Primes to Paths: Enabling Fast Multi-Relational Graph Analysis
2024-11-17T00:00:00
https://arxiv.org/abs/2411.11149v1
[ "https://github.com/kbogas/PAM_BoP" ]
In the paper 'From Primes to Paths: Enabling Fast Multi-Relational Graph Analysis', what MAE score did the BoP model get on the ZINC dataset
0.297
ADE-OoD
DOoD
Diffusion for Out-of-Distribution Detection on Road Scenes and Beyond
2024-07-22T00:00:00
https://arxiv.org/abs/2407.15739v1
[ "https://github.com/lmb-freiburg/diffusion-for-ood" ]
In the paper 'Diffusion for Out-of-Distribution Detection on Road Scenes and Beyond', what AP score did the DOoD model get on the ADE-OoD dataset
63.03
ImageNet-LT
APA (SE-ResNet-50)
Adaptive Parametric Activation
2024-07-11T00:00:00
https://arxiv.org/abs/2407.08567v2
[ "https://github.com/kostas1515/aglu" ]
In the paper 'Adaptive Parametric Activation', what Top-1 Accuracy score did the APA (SE-ResNet-50) model get on the ImageNet-LT dataset
57.9
WDC Products-80%cc-seen-medium
gpt-4o-mini-2024-07-18_structured_explanations
Fine-tuning Large Language Models for Entity Matching
2024-09-12T00:00:00
https://arxiv.org/abs/2409.08185v1
[ "https://github.com/wbsg-uni-mannheim/tailormatch" ]
In the paper 'Fine-tuning Large Language Models for Entity Matching', what F1 (%) score did the gpt-4o-mini-2024-07-18_structured_explanations model get on the WDC Products-80%cc-seen-medium dataset
84.38
REDDIT-BINARY
R-GCN + PANDA
PANDA: Expanded Width-Aware Message Passing Beyond Rewiring
2024-06-06T00:00:00
https://arxiv.org/abs/2406.03671v2
[ "https://github.com/jeongwhanchoi/panda" ]
In the paper 'PANDA: Expanded Width-Aware Message Passing Beyond Rewiring', what Accuracy score did the R-GCN + PANDA model get on the REDDIT-BINARY dataset
80.2
Stanford Cars
ProMetaR
Prompt Learning via Meta-Regularization
2024-04-01T00:00:00
https://arxiv.org/abs/2404.00851v1
[ "https://github.com/mlvlab/prometar" ]
In the paper 'Prompt Learning via Meta-Regularization', what Harmonic mean score did the ProMetaR model get on the Stanford Cars dataset
76.72
MATH
OpenMath-Llama2-70B (w/ code, SC, k=50)
OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset
2024-02-15T00:00:00
https://arxiv.org/abs/2402.10176v2
[ "https://github.com/kipok/nemo-skills" ]
In the paper 'OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset', what Accuracy score did the OpenMath-Llama2-70B (w/ code, SC, k=50) model get on the MATH dataset
58.3
LAGENDA age
MiVOLO-D1
MiVOLO: Multi-input Transformer for Age and Gender Estimation
2023-07-10T00:00:00
https://arxiv.org/abs/2307.04616v2
[ "https://github.com/wildchlamydia/mivolo" ]
In the paper 'MiVOLO: Multi-input Transformer for Age and Gender Estimation', what MAE score did the MiVOLO-D1 model get on the LAGENDA age dataset
3.99
RTE
PaLM 2-M (1-shot)
PaLM 2 Technical Report
2023-05-17T00:00:00
https://arxiv.org/abs/2305.10403v3
[ "https://github.com/eternityyw/tram-benchmark" ]
In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-M (1-shot) model get on the RTE dataset
81.9%
FGVC-Aircraft
ZLaP*
Label Propagation for Zero-shot Classification with Vision-Language Models
2024-04-05T00:00:00
https://arxiv.org/abs/2404.04072v1
[ "https://github.com/vladan-stojnic/zlap" ]
In the paper 'Label Propagation for Zero-shot Classification with Vision-Language Models', what Accuracy score did the ZLaP* model get on the FGVC-Aircraft dataset
29
DTD
ZLaP
Label Propagation for Zero-shot Classification with Vision-Language Models
2024-04-05T00:00:00
https://arxiv.org/abs/2404.04072v1
[ "https://github.com/vladan-stojnic/zlap" ]
In the paper 'Label Propagation for Zero-shot Classification with Vision-Language Models', what Accuracy score did the ZLaP model get on the DTD dataset
51.2
SMAC 3s5z_vs_4s6z
QMIX
A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning
2023-06-04T00:00:00
https://arxiv.org/abs/2306.02430v1
[ "https://github.com/j3soon/dfac-extended" ]
In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Average Score score did the QMIX model get on the SMAC 3s5z_vs_4s6z dataset
13.09
SHD - Adding
LIF-SNN
The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks
2023-06-14T00:00:00
https://arxiv.org/abs/2306.16922v3
[ "https://github.com/AaronSpieler/elmneuron" ]
In the paper 'The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks', what Accuracy (%) score did the LIF-SNN model get on the SHD - Adding dataset
FAIL
RefCOCOg-val
HyperSeg
HyperSeg: Towards Universal Visual Segmentation with Large Language Model
2024-11-26T00:00:00
https://arxiv.org/abs/2411.17606v2
[ "https://github.com/congvvc/HyperSeg" ]
In the paper 'HyperSeg: Towards Universal Visual Segmentation with Large Language Model', what Overall IoU score did the HyperSeg model get on the RefCOCOg-val dataset
79.4
SID SonyA7S2 x300
LRD
Towards General Low-Light Raw Noise Synthesis and Modeling
2023-07-31T00:00:00
https://arxiv.org/abs/2307.16508v2
[ "https://github.com/fengzhang427/LRD" ]
In the paper 'Towards General Low-Light Raw Noise Synthesis and Modeling', what PSNR (Raw) score did the LRD model get on the SID SonyA7S2 x300 dataset
36.03
LibriSpeech 100h test-clean
Branchformer + GFSA
Graph Convolutions Enrich the Self-Attention in Transformers!
2023-12-07T00:00:00
https://arxiv.org/abs/2312.04234v5
[ "https://github.com/jeongwhanchoi/gfsa" ]
In the paper 'Graph Convolutions Enrich the Self-Attention in Transformers!', what Word Error Rate (WER) score did the Branchformer + GFSA model get on the LibriSpeech 100h test-clean dataset
9.6
VisDA2017
SFDA2++
SF(DA)$^2$: Source-free Domain Adaptation Through the Lens of Data Augmentation
2024-03-16T00:00:00
https://arxiv.org/abs/2403.10834v1
[ "https://github.com/shinyflight/sfda2" ]
In the paper 'SF(DA)$^2$: Source-free Domain Adaptation Through the Lens of Data Augmentation', what Accuracy score did the SFDA2++ model get on the VisDA2017 dataset
89.6
AudioCaps
Consistency TTA (Single-step generation)
ConsistencyTTA: Accelerating Diffusion-Based Text-to-Audio Generation with Consistency Distillation
2023-09-19T00:00:00
https://arxiv.org/abs/2309.10740v3
[ "https://github.com/Bai-YT/ConsistencyTTA" ]
In the paper 'ConsistencyTTA: Accelerating Diffusion-Based Text-to-Audio Generation with Consistency Distillation', what FAD score did the Consistency TTA (Single-step generation) model get on the AudioCaps dataset
2.18
PCQM4Mv2-LSC
Graphormer + GFSA
Graph Convolutions Enrich the Self-Attention in Transformers!
2023-12-07T00:00:00
https://arxiv.org/abs/2312.04234v5
[ "https://github.com/jeongwhanchoi/gfsa" ]
In the paper 'Graph Convolutions Enrich the Self-Attention in Transformers!', what Validation MAE score did the Graphormer + GFSA model get on the PCQM4Mv2-LSC dataset
0.0860
SIR^2(Objects)
Zhu et al.
Reversible Decoupling Network for Single Image Reflection Removal
2024-10-10T00:00:00
https://arxiv.org/abs/2410.08063v1
[ "https://github.com/lime-j/RDNet" ]
In the paper 'Reversible Decoupling Network for Single Image Reflection Removal', what SSIM score did the Zhu et al. model get on the SIR^2(Objects) dataset
0.931
LAM(line-level)
HTR-VT
HTR-VT: Handwritten Text Recognition with Vision Transformer
2024-09-13T00:00:00
https://arxiv.org/abs/2409.08573v1
[ "https://github.com/yutingli0606/htr-vt" ]
In the paper 'HTR-VT: Handwritten Text Recognition with Vision Transformer', what Test CER score did the HTR-VT model get on the LAM(line-level) dataset
2.8
HumanML3D
AttT2M
AttT2M: Text-Driven Human Motion Generation with Multi-Perspective Attention Mechanism
2023-09-02T00:00:00
https://arxiv.org/abs/2309.00796v1
[ "https://github.com/zcymonkey/attt2m" ]
In the paper 'AttT2M: Text-Driven Human Motion Generation with Multi-Perspective Attention Mechanism', what FID score did the AttT2M model get on the HumanML3D dataset
0.112