dataset: string (length 0–82)
model_name: string (length 0–150)
paper_title: string (length 19–175)
paper_date: timestamp[ns]
paper_url: string (length 32–35)
code_links: list (length 1)
prompts: string (length 105–331)
answer: string (length 1–67)
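The schema above can be illustrated with one record from the rows below (a minimal sketch; the `Record` dataclass is an assumption for illustration, and the field values are copied from the first row):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class Record:
    # Fields mirror the column listing in the schema header.
    dataset: str
    model_name: str
    paper_title: str
    paper_date: datetime      # timestamp[ns] in the source schema
    paper_url: str
    code_links: List[str]     # always a single-element list here
    prompts: str
    answer: str

# First row of the preview, transcribed verbatim.
row = Record(
    dataset="WebApp1K-React",
    model_name="mistral-large-2",
    paper_title="Insights from Benchmarking Frontier Language Models "
                "on Web App Code Generation",
    paper_date=datetime.fromisoformat("2024-09-08T00:00:00"),
    paper_url="https://arxiv.org/abs/2409.05177v1",
    code_links=["https://github.com/onekq/webapp1k"],
    prompts="In the paper 'Insights from Benchmarking Frontier Language "
            "Models on Web App Code Generation', what pass@1 score did the "
            "mistral-large-2 model get on the WebApp1K-React dataset",
    answer="0.7804",
)

# The length bounds stated in the schema header hold for this row.
assert 0 <= len(row.dataset) <= 82
assert 32 <= len(row.paper_url) <= 35
assert len(row.code_links) == 1
```

Note that `answer` is stored as a string, not a number: some rows carry units or error bars (e.g. "32.1%", "84.96 ± 5.0"), so numeric comparison requires parsing.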
WebApp1K-React
mistral-large-2
Insights from Benchmarking Frontier Language Models on Web App Code Generation
2024-09-08T00:00:00
https://arxiv.org/abs/2409.05177v1
[ "https://github.com/onekq/webapp1k" ]
In the paper 'Insights from Benchmarking Frontier Language Models on Web App Code Generation', what pass@1 score did the mistral-large-2 model get on the WebApp1K-React dataset
0.7804
DEplain-APA-sent
mBART (trained on DEplain-APA-sent & DEplain-web-sent)
DEPLAIN: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification
2023-05-30T00:00:00
https://arxiv.org/abs/2305.18939v1
[ "https://github.com/rstodden/deplain" ]
In the paper 'DEPLAIN: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification', what SARI (EASSE>=0.2.1) score did the mBART (trained on DEplain-APA-sent & DEplain-web-sent) model get on the DEplain-APA-sent dataset
34.904
SVAMP
MMOS-DeepSeekMath-7B(0-shot)
An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning
2024-02-23T00:00:00
https://arxiv.org/abs/2403.00799v1
[ "https://github.com/cyzhh/MMOS" ]
In the paper 'An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning', what Execution Accuracy score did the MMOS-DeepSeekMath-7B(0-shot) model get on the SVAMP dataset
79.3
MM-Vet
LLaVA-1.5-13B (+ MMFuser)
MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding
2024-10-15T00:00:00
https://arxiv.org/abs/2410.11829v1
[ "https://github.com/yuecao0119/MMFuser" ]
In the paper 'MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding', what GPT-4 score did the LLaVA-1.5-13B (+ MMFuser) model get on the MM-Vet dataset
36.6
GTSRB
TURTLE (CLIP + DINOv2)
Let Go of Your Labels with Unsupervised Transfer
2024-06-11T00:00:00
https://arxiv.org/abs/2406.07236v1
[ "https://github.com/mlbio-epfl/turtle" ]
In the paper 'Let Go of Your Labels with Unsupervised Transfer', what Accuracy score did the TURTLE (CLIP + DINOv2) model get on the GTSRB dataset
48.4
Cornell
H2GCN + UniGAP
UniGAP: A Universal and Adaptive Graph Upsampling Approach to Mitigate Over-Smoothing in Node Classification Tasks
2024-07-28T00:00:00
https://arxiv.org/abs/2407.19420v1
[ "https://github.com/wangxiaotang0906/unigap" ]
In the paper 'UniGAP: A Universal and Adaptive Graph Upsampling Approach to Mitigate Over-Smoothing in Node Classification Tasks', what Accuracy score did the H2GCN + UniGAP model get on the Cornell dataset
84.96 ± 5.0
SMAC 27m_vs_30m
QPLEX
A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning
2023-06-04T00:00:00
https://arxiv.org/abs/2306.02430v1
[ "https://github.com/j3soon/dfac-extended" ]
In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the QPLEX model get on the SMAC 27m_vs_30m dataset
78.12
ETTh1 (336) Multivariate
CPNet
Boosting MLPs with a Coarsening Strategy for Long-Term Time Series Forecasting
2024-05-06T00:00:00
https://arxiv.org/abs/2405.03199v2
[ "https://github.com/nannanbian/cpnet" ]
In the paper 'Boosting MLPs with a Coarsening Strategy for Long-Term Time Series Forecasting', what MSE score did the CPNet model get on the ETTh1 (336) Multivariate dataset
0.479
allrecipes.com
LLaVA-Chef
LLaVA-Chef: A Multi-modal Generative Model for Food Recipes
2024-08-29T00:00:00
https://arxiv.org/abs/2408.16889v1
[ "https://github.com/mohbattharani/LLaVA-Chef" ]
In the paper 'LLaVA-Chef: A Multi-modal Generative Model for Food Recipes', what BLEU score did the LLaVA-Chef model get on the allrecipes.com dataset
6.0
BKAI-IGH NeoPolyp-Small
RaBiT
RaBiT: An Efficient Transformer using Bidirectional Feature Pyramid Network with Reverse Attention for Colon Polyp Segmentation
2023-07-12T00:00:00
https://arxiv.org/abs/2307.06420v1
[ "https://github.com/nguyenhoangthuan99/RaBiT" ]
In the paper 'RaBiT: An Efficient Transformer using Bidirectional Feature Pyramid Network with Reverse Attention for Colon Polyp Segmentation', what Average Dice score did the RaBiT model get on the BKAI-IGH NeoPolyp-Small dataset
0.94
MPI-INF-3DHP
Regular Splitting Graph Network
Regular Splitting Graph Network for 3D Human Pose Estimation
2023-05-09T00:00:00
https://arxiv.org/abs/2305.05785v1
[ "https://github.com/nies14/rs-net" ]
In the paper 'Regular Splitting Graph Network for 3D Human Pose Estimation', what AUC score did the Regular Splitting Graph Network model get on the MPI-INF-3DHP dataset
53.2
WildDESED
CRNN
WildDESED: An LLM-Powered Dataset for Wild Domestic Environment Sound Event Detection System
2024-07-04T00:00:00
https://arxiv.org/abs/2407.03656v3
[ "https://github.com/swagshaw/wilddesed" ]
In the paper 'WildDESED: An LLM-Powered Dataset for Wild Domestic Environment Sound Event Detection System', what PSDS1 (-5dB) score did the CRNN model get on the WildDESED dataset
0.017
AudioSet
EAT
EAT: Self-Supervised Pre-Training with Efficient Audio Transformer
2024-01-07T00:00:00
https://arxiv.org/abs/2401.03497v1
[ "https://github.com/cwx-worst-one/eat" ]
In the paper 'EAT: Self-Supervised Pre-Training with Efficient Audio Transformer', what Test mAP score did the EAT model get on the AudioSet dataset
0.486
ETTm1 (336) Multivariate
TSMixer
TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting
2023-06-14T00:00:00
https://arxiv.org/abs/2306.09364v4
[ "https://github.com/ibm/tsfm" ]
In the paper 'TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting', what MSE score did the TSMixer model get on the ETTm1 (336) Multivariate dataset
0.365
QVHighlights
NumPro
Number it: Temporal Grounding Videos like Flipping Manga
2024-11-15T00:00:00
https://arxiv.org/abs/2411.10332v2
[ "https://github.com/yongliang-wu/numpro" ]
In the paper 'Number it: Temporal Grounding Videos like Flipping Manga', what mAP score did the NumPro model get on the QVHighlights dataset
40.54
ScanObjectNN
Point-FEMAE
Towards Compact 3D Representations via Point Feature Enhancement Masked Autoencoders
2023-12-17T00:00:00
https://arxiv.org/abs/2312.10726v1
[ "https://github.com/zyh16143998882/aaai24-pointfemae" ]
In the paper 'Towards Compact 3D Representations via Point Feature Enhancement Masked Autoencoders', what Overall Accuracy score did the Point-FEMAE model get on the ScanObjectNN dataset
90.22
Atari 2600 Fishing Derby
ASL DDQN
Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity
2023-05-07T00:00:00
https://arxiv.org/abs/2305.04180v3
[ "https://github.com/xinjinghao/color" ]
In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score did the ASL DDQN model get on the Atari 2600 Fishing Derby dataset
35.1
DocVQA test
PaLI-3
PaLI-3 Vision Language Models: Smaller, Faster, Stronger
2023-10-13T00:00:00
https://arxiv.org/abs/2310.09199v2
[ "https://github.com/kyegomez/PALI3" ]
In the paper 'PaLI-3 Vision Language Models: Smaller, Faster, Stronger', what ANLS score did the PaLI-3 model get on the DocVQA test dataset
0.876
STS12
PromptEOL+CSE+OPT-13B
Scaling Sentence Embeddings with Large Language Models
2023-07-31T00:00:00
https://arxiv.org/abs/2307.16645v1
[ "https://github.com/kongds/scaling_sentemb" ]
In the paper 'Scaling Sentence Embeddings with Large Language Models', what Spearman Correlation score did the PromptEOL+CSE+OPT-13B model get on the STS12 dataset
0.8020
EuroSAT-SAR
FG-MAE (ViT-S/16)
Feature Guided Masked Autoencoder for Self-supervised Learning in Remote Sensing
2023-10-28T00:00:00
https://arxiv.org/abs/2310.18653v1
[ "https://github.com/zhu-xlab/fgmae" ]
In the paper 'Feature Guided Masked Autoencoder for Self-supervised Learning in Remote Sensing', what Overall Accuracy score did the FG-MAE (ViT-S/16) model get on the EuroSAT-SAR dataset
85.9
BTAD
URD
Unlocking the Potential of Reverse Distillation for Anomaly Detection
2024-12-10T00:00:00
https://arxiv.org/abs/2412.07579v1
[ "https://github.com/hito2448/urd" ]
In the paper 'Unlocking the Potential of Reverse Distillation for Anomaly Detection', what Segmentation AUROC score did the URD model get on the BTAD dataset
98.1
horse2zebra
CycleGANAS
CycleGANAS: Differentiable Neural Architecture Search for CycleGAN
2023-11-13T00:00:00
https://arxiv.org/abs/2311.07162v1
[ "https://github.com/antaegun20/CycleGANAS" ]
In the paper 'CycleGANAS: Differentiable Neural Architecture Search for CycleGAN', what Frechet Inception Distance score did the CycleGANAS model get on the horse2zebra dataset
38.06
S3DIS
PPT + SparseUNet
Towards Large-scale 3D Representation Learning with Multi-dataset Point Prompt Training
2023-08-18T00:00:00
https://arxiv.org/abs/2308.09718v2
[ "https://github.com/Pointcept/Pointcept" ]
In the paper 'Towards Large-scale 3D Representation Learning with Multi-dataset Point Prompt Training', what Mean IoU score did the PPT + SparseUNet model get on the S3DIS dataset
78.1
HumanML3D
MMM (predict length)
MMM: Generative Masked Motion Model
2023-12-06T00:00:00
https://arxiv.org/abs/2312.03596v2
[ "https://github.com/exitudio/MMM" ]
In the paper 'MMM: Generative Masked Motion Model', what FID score did the MMM (predict length) model get on the HumanML3D dataset
0.080
LingOly
Gemini 1.5 Pro
LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low-Resource and Extinct Languages
2024-06-10T00:00:00
https://arxiv.org/abs/2406.06196v3
[ "https://github.com/am-bean/lingOly" ]
In the paper 'LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low-Resource and Extinct Languages', what Exact Match Accuracy score did the Gemini 1.5 Pro model get on the LingOly dataset
32.1%
DomainNet
SFDA2
SF(DA)$^2$: Source-free Domain Adaptation Through the Lens of Data Augmentation
2024-03-16T00:00:00
https://arxiv.org/abs/2403.10834v1
[ "https://github.com/shinyflight/sfda2" ]
In the paper 'SF(DA)$^2$: Source-free Domain Adaptation Through the Lens of Data Augmentation', what Accuracy score did the SFDA2 model get on the DomainNet dataset
68.3
ETTh1 (336) Multivariate
Pathformer
Pathformer: Multi-scale Transformers with Adaptive Pathways for Time Series Forecasting
2024-02-04T00:00:00
https://arxiv.org/abs/2402.05956v5
[ "https://github.com/decisionintelligence/pathformer" ]
In the paper 'Pathformer: Multi-scale Transformers with Adaptive Pathways for Time Series Forecasting', what MSE score did the Pathformer model get on the ETTh1 (336) Multivariate dataset
0.454
LibriSpeech test-clean
Branchformer + GFSA
Graph Convolutions Enrich the Self-Attention in Transformers!
2023-12-07T00:00:00
https://arxiv.org/abs/2312.04234v5
[ "https://github.com/jeongwhanchoi/gfsa" ]
In the paper 'Graph Convolutions Enrich the Self-Attention in Transformers!', what Word Error Rate (WER) score did the Branchformer + GFSA model get on the LibriSpeech test-clean dataset
2.11
Texas (60%/20%/20% random splits)
HH-GAT
Half-Hop: A graph upsampling approach for slowing down message passing
2023-08-17T00:00:00
https://arxiv.org/abs/2308.09198v1
[ "https://github.com/nerdslab/halfhop" ]
In the paper 'Half-Hop: A graph upsampling approach for slowing down message passing', what 1:1 Accuracy score did the HH-GAT model get on the Texas (60%/20%/20% random splits) dataset
80.54 ± 4.80
MM-Vet
VisionZip (Retain 64 Tokens, fine-tuning)
VisionZip: Longer is Better but Not Necessary in Vision Language Models
2024-12-05T00:00:00
https://arxiv.org/abs/2412.04467v1
[ "https://github.com/dvlab-research/visionzip" ]
In the paper 'VisionZip: Longer is Better but Not Necessary in Vision Language Models', what GPT-4 score did the VisionZip (Retain 64 Tokens, fine-tuning) model get on the MM-Vet dataset
30.2
CBVS
UniCLIP
CBVS: A Large-Scale Chinese Image-Text Benchmark for Real-World Short Video Search Scenarios
2024-01-19T00:00:00
https://arxiv.org/abs/2401.10475v2
[ "https://github.com/QQBrowserVideoSearch/CBVS-UniCLIP" ]
In the paper 'CBVS: A Large-Scale Chinese Image-Text Benchmark for Real-World Short Video Search Scenarios', what PNR score did the UniCLIP model get on the CBVS dataset
3.069
COCO minival
GLEE-Pro
General Object Foundation Model for Images and Videos at Scale
2023-12-14T00:00:00
https://arxiv.org/abs/2312.09158v1
[ "https://github.com/FoundationVision/GLEE" ]
In the paper 'General Object Foundation Model for Images and Videos at Scale', what mask AP score did the GLEE-Pro model get on the COCO minival dataset
54.2
Amazon-Beauty
HetroFair
Heterophily-Aware Fair Recommendation using Graph Convolutional Networks
2024-01-31T00:00:00
https://arxiv.org/abs/2402.03365v2
[ "https://github.com/nematgh/hetrofair" ]
In the paper 'Heterophily-Aware Fair Recommendation using Graph Convolutional Networks', what NDCG@20 score did the HetroFair model get on the Amazon-Beauty dataset
0.2308
Data3D-R2N2
LRGT
Long-Range Grouping Transformer for Multi-View 3D Reconstruction
2023-08-17T00:00:00
https://arxiv.org/abs/2308.08724v1
[ "https://github.com/liyingcv/long-range-grouping-transformer" ]
In the paper 'Long-Range Grouping Transformer for Multi-View 3D Reconstruction', what 3DIoU score did the LRGT model get on the Data3D-R2N2 dataset
0.696
RLBench
RVT-2
RVT-2: Learning Precise Manipulation from Few Demonstrations
2024-06-12T00:00:00
https://arxiv.org/abs/2406.08545v1
[ "https://github.com/NVlabs/RVT" ]
In the paper 'RVT-2: Learning Precise Manipulation from Few Demonstrations', what Succ. Rate (18 tasks, 100 demo/task) score did the RVT-2 model get on the RLBench dataset
81.4
VideoInstruct
CAT-7B
CAT: Enhancing Multimodal Large Language Model to Answer Questions in Dynamic Audio-Visual Scenarios
2024-03-07T00:00:00
https://arxiv.org/abs/2403.04640v1
[ "https://github.com/rikeilong/bay-cat" ]
In the paper 'CAT: Enhancing Multimodal Large Language Model to Answer Questions in Dynamic Audio-Visual Scenarios', what Correctness of Information score did the CAT-7B model get on the VideoInstruct dataset
3.08
MATH
OpenMath-Llama2-70B (w/ code)
OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset
2024-02-15T00:00:00
https://arxiv.org/abs/2402.10176v2
[ "https://github.com/kipok/nemo-skills" ]
In the paper 'OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset', what Accuracy score did the OpenMath-Llama2-70B (w/ code) model get on the MATH dataset
46.3
Arxiv HEP-TH citation graph
SRformer-BART
Segmented Recurrent Transformer: An Efficient Sequence-to-Sequence Model
2023-05-24T00:00:00
https://arxiv.org/abs/2305.16340v3
[ "https://github.com/yinghanlong/SRtransformer" ]
In the paper 'Segmented Recurrent Transformer: An Efficient Sequence-to-Sequence Model', what ROUGE-1 score did the SRformer-BART model get on the Arxiv HEP-TH citation graph dataset
42.99
SMAC 6h_vs_9z
QPLEX
A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning
2023-06-04T00:00:00
https://arxiv.org/abs/2306.02430v1
[ "https://github.com/j3soon/dfac-extended" ]
In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Average Score did the QPLEX model get on the SMAC 6h_vs_9z dataset
14.84
PubMed with Public Split: fixed 20 nodes per class
GEM
Graph Entropy Minimization for Semi-supervised Node Classification
2023-05-31T00:00:00
https://arxiv.org/abs/2305.19502v1
[ "https://github.com/cf020031308/gem" ]
In the paper 'Graph Entropy Minimization for Semi-supervised Node Classification', what Accuracy score did the GEM model get on the PubMed with Public Split: fixed 20 nodes per class dataset
78.48
Sleep-EDFx (single-channel)
NeuroNet (Fpz-Cz only)
NeuroNet: A Novel Hybrid Self-Supervised Learning Framework for Sleep Stage Classification Using Single-Channel EEG
2024-04-10T00:00:00
https://arxiv.org/abs/2404.17585v2
[ "https://github.com/dlcjfgmlnasa/NeuroNet" ]
In the paper 'NeuroNet: A Novel Hybrid Self-Supervised Learning Framework for Sleep Stage Classification Using Single-Channel EEG', what Accuracy score did the NeuroNet (Fpz-Cz only) model get on the Sleep-EDFx (single-channel) dataset
85.24%
LRS2
RTFS-Net-12
RTFS-Net: Recurrent Time-Frequency Modelling for Efficient Audio-Visual Speech Separation
2023-09-29T00:00:00
https://arxiv.org/abs/2309.17189v4
[ "https://github.com/spkgyk/RTFS-Net" ]
In the paper 'RTFS-Net: Recurrent Time-Frequency Modelling for Efficient Audio-Visual Speech Separation', what SI-SNRi score did the RTFS-Net-12 model get on the LRS2 dataset
14.9
Atari 2600 Solaris
ASL DDQN
Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity
2023-05-07T00:00:00
https://arxiv.org/abs/2305.04180v3
[ "https://github.com/xinjinghao/color" ]
In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score did the ASL DDQN model get on the Atari 2600 Solaris dataset
3506.8
Sphere Simple
HCMT
Learning Flexible Body Collision Dynamics with Hierarchical Contact Mesh Transformer
2023-12-19T00:00:00
https://arxiv.org/abs/2312.12467v3
[ "https://github.com/yuyudeep/hcmt" ]
In the paper 'Learning Flexible Body Collision Dynamics with Hierarchical Contact Mesh Transformer', what Rollout RMSE-all [1e3] Position score did the HCMT model get on the Sphere Simple dataset
30.41±1.71
CHILI-100K
PMLP
CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning
2024-02-20T00:00:00
https://arxiv.org/abs/2402.13221v2
[ "https://github.com/UlrikFriisJensen/CHILI" ]
In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what MSE score did the PMLP model get on the CHILI-100K dataset
0.486 ± 0.014
IMDb-M
G-Tuning
Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns
2023-12-21T00:00:00
https://arxiv.org/abs/2312.13583v1
[ "https://github.com/zjunet/G-Tuning" ]
In the paper 'Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns', what Accuracy (10-fold) score did the G-Tuning model get on the IMDb-M dataset
51.80
MSD (Mirror Segmentation Dataset)
SAM2-UNet
SAM2-UNet: Segment Anything 2 Makes Strong Encoder for Natural and Medical Image Segmentation
2024-08-16T00:00:00
https://arxiv.org/abs/2408.08870v1
[ "https://github.com/wzh0120/sam2-unet" ]
In the paper 'SAM2-UNet: Segment Anything 2 Makes Strong Encoder for Natural and Medical Image Segmentation', what MAE score did the SAM2-UNet model get on the MSD (Mirror Segmentation Dataset) dataset
0.022
LIDC-IDRI
GVAE
Variational Autoencoders for Feature Exploration and Malignancy Prediction of Lung Lesions
2023-11-27T00:00:00
https://arxiv.org/abs/2311.15719v1
[ "https://github.com/benkeel/vae_lung_lesion_bmvc" ]
In the paper 'Variational Autoencoders for Feature Exploration and Malignancy Prediction of Lung Lesions', what Accuracy score did the GVAE model get on the LIDC-IDRI dataset
93.1
DTD
SaSPA + CAL
Advancing Fine-Grained Classification by Structure and Subject Preserving Augmentation
2024-06-20T00:00:00
https://arxiv.org/abs/2406.14551v2
[ "https://github.com/eyalmichaeli/saspa-aug" ]
In the paper 'Advancing Fine-Grained Classification by Structure and Subject Preserving Augmentation', what 8-shot Accuracy score did the SaSPA + CAL model get on the DTD dataset
54.8
PECC
Llama-3-8B-Instruct
PECC: Problem Extraction and Coding Challenges
2024-04-29T00:00:00
https://arxiv.org/abs/2404.18766v1
[ "https://github.com/hallerpatrick/pecc" ]
In the paper 'PECC: Problem Extraction and Coding Challenges', what Pass@3 score did the Llama-3-8B-Instruct model get on the PECC dataset
3.1
LeukemiaAttri
AttriDet
A Large-scale Multi Domain Leukemia Dataset for the White Blood Cells Detection with Morphological Attributes for Explainability
2024-05-17T00:00:00
https://arxiv.org/abs/2405.10803v1
[ "https://github.com/intelligentMachines-ITU/Blood-Cancer-Dataset-Lukemia-Attri-MICCAI-2024" ]
In the paper 'A Large-scale Multi Domain Leukemia Dataset for the White Blood Cells Detection with Morphological Attributes for Explainability', what mAP 50-95 score did the AttriDet model get on the LeukemiaAttri dataset
28.2
CVC-ClinicDB
Yolo-SAM 2
Self-Prompting Polyp Segmentation in Colonoscopy using Hybrid Yolo-SAM 2 Model
2024-09-14T00:00:00
https://arxiv.org/abs/2409.09484v1
[ "https://github.com/sajjad-sh33/yolo_sam2" ]
In the paper 'Self-Prompting Polyp Segmentation in Colonoscopy using Hybrid Yolo-SAM 2 Model', what mean Dice score did the Yolo-SAM 2 model get on the CVC-ClinicDB dataset
0.951
RICH
WHAM (ViT)
WHAM: Reconstructing World-grounded Humans with Accurate 3D Motion
2023-12-12T00:00:00
https://arxiv.org/abs/2312.07531v2
[ "https://github.com/yohanshin/WHAM" ]
In the paper 'WHAM: Reconstructing World-grounded Humans with Accurate 3D Motion', what MPJPE score did the WHAM (ViT) model get on the RICH dataset
80
DND
DRANet
Dual Residual Attention Network for Image Denoising
2023-05-07T00:00:00
https://arxiv.org/abs/2305.04269v1
[ "https://github.com/WenCongWu/DRANet" ]
In the paper 'Dual Residual Attention Network for Image Denoising', what Average PSNR score did the DRANet model get on the DND dataset
39.64
MS COCO
HyperSeg
HyperSeg: Towards Universal Visual Segmentation with Large Language Model
2024-11-26T00:00:00
https://arxiv.org/abs/2411.17606v2
[ "https://github.com/congvvc/HyperSeg" ]
In the paper 'HyperSeg: Towards Universal Visual Segmentation with Large Language Model', what mIoU score did the HyperSeg model get on the MS COCO dataset
77.2
RealBlur-J
ALGNet
Learning Enriched Features via Selective State Spaces Model for Efficient Image Deblurring
2024-03-29T00:00:00
https://arxiv.org/abs/2403.20106v2
[ "https://github.com/Tombs98/ALGNet" ]
In the paper 'Learning Enriched Features via Selective State Spaces Model for Efficient Image Deblurring', what SSIM (sRGB) score did the ALGNet model get on the RealBlur-J dataset
0.946
GSM8K
OpenMath-CodeLlama-13B (w/ code, SC, k=50)
OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset
2024-02-15T00:00:00
https://arxiv.org/abs/2402.10176v2
[ "https://github.com/kipok/nemo-skills" ]
In the paper 'OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset', what Accuracy score did the OpenMath-CodeLlama-13B (w/ code, SC, k=50) model get on the GSM8K dataset
86.8
Atari 2600 Alien
ASL DDQN
Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity
2023-05-07T00:00:00
https://arxiv.org/abs/2305.04180v3
[ "https://github.com/xinjinghao/color" ]
In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score did the ASL DDQN model get on the Atari 2600 Alien dataset
6955.2
ImageNet
GTP-ViT-B-Patch8/P20
GTP-ViT: Efficient Vision Transformers via Graph-based Token Propagation
2023-11-06T00:00:00
https://arxiv.org/abs/2311.03035v2
[ "https://github.com/ackesnal/gtp-vit" ]
In the paper 'GTP-ViT: Efficient Vision Transformers via Graph-based Token Propagation', what Top 1 Accuracy score did the GTP-ViT-B-Patch8/P20 model get on the ImageNet dataset
85.8%
Tedlium
Whispering-LLaMa-7b
HyPoradise: An Open Baseline for Generative Speech Recognition with Large Language Models
2023-09-27T00:00:00
https://arxiv.org/abs/2309.15701v2
[ "https://github.com/hypotheses-paradise/hypo2trans" ]
In the paper 'HyPoradise: An Open Baseline for Generative Speech Recognition with Large Language Models', what Word Error Rate (WER) score did the Whispering-LLaMa-7b model get on the Tedlium dataset
4.6
DTD
Real-Guidance + CAL
Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation?
2023-05-22T00:00:00
https://arxiv.org/abs/2305.12954v1
[ "https://github.com/zhengli97/dm-kd" ]
In the paper 'Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation?', what 8-shot Accuracy score did the Real-Guidance + CAL model get on the DTD dataset
50.6
Yelp2018
NESCL
Neighborhood-Enhanced Supervised Contrastive Learning for Collaborative Filtering
2024-02-18T00:00:00
https://arxiv.org/abs/2402.11523v1
[ "https://github.com/PeiJieSun/NESCL" ]
In the paper 'Neighborhood-Enhanced Supervised Contrastive Learning for Collaborative Filtering', what Recall@20 score did the NESCL model get on the Yelp2018 dataset
0.0743
CUHK-Shadow
SDDNet (MM 2023) (512x512)
SDDNet: Style-guided Dual-layer Disentanglement Network for Shadow Detection
2023-08-17T00:00:00
https://arxiv.org/abs/2308.08935v2
[ "https://github.com/rmcong/sddnet_acmmm23" ]
In the paper 'SDDNet: Style-guided Dual-layer Disentanglement Network for Shadow Detection', what BER score did the SDDNet (MM 2023) (512x512) model get on the CUHK-Shadow dataset
7.65
MAESTRO
YourMT3+ (YPTF.MoE+M) noPS
YourMT3+: Multi-instrument Music Transcription with Enhanced Transformer Architectures and Cross-dataset Stem Augmentation
2024-07-05T00:00:00
https://arxiv.org/abs/2407.04822v3
[ "https://github.com/mimbres/yourmt3" ]
In the paper 'YourMT3+: Multi-instrument Music Transcription with Enhanced Transformer Architectures and Cross-dataset Stem Augmentation', what Onset F1 score did the YourMT3+ (YPTF.MoE+M) noPS model get on the MAESTRO dataset
96.98
Image Denoising on SID x300
ExposureDiffusion (UNet+paired data)
ExposureDiffusion: Learning to Expose for Low-light Image Enhancement
2023-07-15T00:00:00
https://arxiv.org/abs/2307.07710v2
[ "https://github.com/wyf0912/ExposureDiffusion" ]
In the paper 'ExposureDiffusion: Learning to Expose for Low-light Image Enhancement', what PSNR (Raw) score did the ExposureDiffusion (UNet+paired data) model get on the Image Denoising on SID x300 dataset
36.82
SMAP
CARLA
CARLA: Self-supervised Contrastive Representation Learning for Time Series Anomaly Detection
2023-08-18T00:00:00
https://arxiv.org/abs/2308.09296v4
[ "https://github.com/zamanzadeh/CARLA" ]
In the paper 'CARLA: Self-supervised Contrastive Representation Learning for Time Series Anomaly Detection', what precision score did the CARLA model get on the SMAP dataset
0.3944
Weather2K79 (720)
MoLE-DLinear
Mixture-of-Linear-Experts for Long-term Time Series Forecasting
2023-12-11T00:00:00
https://arxiv.org/abs/2312.06786v3
[ "https://github.com/rogerni/mole" ]
In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the Weather2K79 (720) dataset
0.535
CIFAR-10
PFGM++ + CS
Compensation Sampling for Improved Convergence in Diffusion Models
2023-12-11T00:00:00
https://arxiv.org/abs/2312.06285v1
[ "https://github.com/hotfinda/Compensation-sampling" ]
In the paper 'Compensation Sampling for Improved Convergence in Diffusion Models', what FID score did the PFGM++ + CS model get on the CIFAR-10 dataset
1.5
ShapeNet Car
DiT-3D
DiT-3D: Exploring Plain Diffusion Transformers for 3D Shape Generation
2023-07-04T00:00:00
https://arxiv.org/abs/2307.01831v1
[ "https://github.com/DiT-3D/DiT-3D" ]
In the paper 'DiT-3D: Exploring Plain Diffusion Transformers for 3D Shape Generation', what 1-NNA-CD score did the DiT-3D model get on the ShapeNet Car dataset
51.04
ETTh2 (192) Multivariate
MoLE-RLinear
Mixture-of-Linear-Experts for Long-term Time Series Forecasting
2023-12-11T00:00:00
https://arxiv.org/abs/2312.06786v3
[ "https://github.com/rogerni/mole" ]
In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-RLinear model get on the ETTh2 (192) Multivariate dataset
0.336
Atari 2600 Q*Bert
ASL DDQN
Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity
2023-05-07T00:00:00
https://arxiv.org/abs/2305.04180v3
[ "https://github.com/xinjinghao/color" ]
In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score did the ASL DDQN model get on the Atari 2600 Q*Bert dataset
24548.8
CCTSDB2021
YOLO-CCSPNet
CCSPNet-Joint: Efficient Joint Training Method for Traffic Sign Detection Under Extreme Conditions
2023-09-13T00:00:00
https://arxiv.org/abs/2309.06902v4
[ "https://github.com/haoqinhong/ccspnet-joint" ]
In the paper 'CCSPNet-Joint: Efficient Joint Training Method for Traffic Sign Detection Under Extreme Conditions', what mAP@0.5 score did the YOLO-CCSPNet model get on the CCTSDB2021 dataset
95.8
MedConceptsQA
epfl-llm/meditron-70b
MEDITRON-70B: Scaling Medical Pretraining for Large Language Models
2023-11-27T00:00:00
https://arxiv.org/abs/2311.16079v1
[ "https://github.com/epfllm/meditron" ]
In the paper 'MEDITRON-70B: Scaling Medical Pretraining for Large Language Models', what Accuracy score did the epfl-llm/meditron-70b model get on the MedConceptsQA dataset
25.262
CHILI-100K
PMLP
CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning
2024-02-20T00:00:00
https://arxiv.org/abs/2402.13221v2
[ "https://github.com/UlrikFriisJensen/CHILI" ]
In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what F1-score (Weighted) score did the PMLP model get on the CHILI-100K dataset
0.191 ± 0.000
CompCars
Resnet50 + PMAL
Progressive Multi-task Anti-Noise Learning and Distilling Frameworks for Fine-grained Vehicle Recognition
2024-01-25T00:00:00
https://arxiv.org/abs/2401.14336v1
[ "https://github.com/dichao-liu/anti-noise_fgvr" ]
In the paper 'Progressive Multi-task Anti-Noise Learning and Distilling Frameworks for Fine-grained Vehicle Recognition', what Accuracy score did the Resnet50 + PMAL model get on the CompCars dataset
99.1%
S3DIS
Point-GCC+TR3D
Point-GCC: Universal Self-supervised 3D Scene Pre-training via Geometry-Color Contrast
2023-05-31T00:00:00
https://arxiv.org/abs/2305.19623v2
[ "https://github.com/asterisci/point-gcc" ]
In the paper 'Point-GCC: Universal Self-supervised 3D Scene Pre-training via Geometry-Color Contrast', what mAP@0.5 score did the Point-GCC+TR3D model get on the S3DIS dataset
56.7
ADE20K training-free zero-shot segmentation
GEM (CLIP)
Grounding Everything: Emerging Localization Properties in Vision-Language Transformers
2023-12-01T00:00:00
https://arxiv.org/abs/2312.00878v3
[ "https://github.com/walbouss/gem" ]
In the paper 'Grounding Everything: Emerging Localization Properties in Vision-Language Transformers', what mIoU score did the GEM (CLIP) model get on the ADE20K training-free zero-shot segmentation dataset
15.7
SRD
ShadowMaskFormer (arXiv 2024) (256x256)
ShadowMaskFormer: Mask Augmented Patch Embeddings for Shadow Removal
2024-04-29T00:00:00
https://arxiv.org/abs/2404.18433v2
[ "https://github.com/lizhh268/shadowmaskformer" ]
In the paper 'ShadowMaskFormer: Mask Augmented Patch Embeddings for Shadow Removal', what RMSE score did the ShadowMaskFormer (arXiv 2024) (256x256) model get on the SRD dataset
4.69
ASDiv-A
MMOS-CODE-34B(0-shot)
An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning
2024-02-23T00:00:00
https://arxiv.org/abs/2403.00799v1
[ "https://github.com/cyzhh/MMOS" ]
In the paper 'An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning', what Execution Accuracy score did the MMOS-CODE-34B(0-shot) model get on the ASDiv-A dataset
85.1
MS-COCO (1-shot)
UniFS
UniFS: Universal Few-shot Instance Perception with Point Representations
2024-04-30T00:00:00
https://arxiv.org/abs/2404.19401v3
[ "https://github.com/jin-s13/unifs" ]
In the paper 'UniFS: Universal Few-shot Instance Perception with Point Representations', what AP score did the UniFS model get on the MS-COCO (1-shot) dataset
12.7
NAS-Bench-201, CIFAR-100
IS-DARTS
IS-DARTS: Stabilizing DARTS through Precise Measurement on Candidate Importance
2023-12-19T00:00:00
https://arxiv.org/abs/2312.12648v1
[ "https://github.com/hy-he/is-darts" ]
In the paper 'IS-DARTS: Stabilizing DARTS through Precise Measurement on Candidate Importance', what Accuracy (Test) score did the IS-DARTS model get on the NAS-Bench-201, CIFAR-100 dataset
73.51
MVTEC AD textures
Mixed-Teacher
MixedTeacher : Knowledge Distillation for fast inference textural anomaly detection
2023-06-16T00:00:00
https://arxiv.org/abs/2306.09859v1
[ "https://github.com/SimonThomine/MixedTeacher" ]
In the paper 'MixedTeacher : Knowledge Distillation for fast inference textural anomaly detection', what Detection AUROC score did the Mixed-Teacher model get on the MVTEC AD textures dataset
99.8
COCO-Stuff-27
PriMaPs+HP (DINO ViT-S/8)
Boosting Unsupervised Semantic Segmentation with Principal Mask Proposals
2024-04-25T00:00:00
https://arxiv.org/abs/2404.16818v2
[ "https://github.com/visinf/primaps" ]
In the paper 'Boosting Unsupervised Semantic Segmentation with Principal Mask Proposals', what Accuracy score did the PriMaPs+HP (DINO ViT-S/8) model get on the COCO-Stuff-27 dataset
57.8
MMConv
PaCE
PaCE: Unified Multi-modal Dialogue Pre-training with Progressive and Compositional Experts
2023-05-24T00:00:00
https://arxiv.org/abs/2305.14839v2
[ "https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/pace" ]
In the paper 'PaCE: Unified Multi-modal Dialogue Pre-training with Progressive and Compositional Experts', what Inform score did the PaCE model get on the MMConv dataset
34.5
Set14 - 4x upscaling
DAT+
Dual Aggregation Transformer for Image Super-Resolution
2023-08-07T00:00:00
https://arxiv.org/abs/2308.03364v2
[ "https://github.com/zhengchen1999/dat" ]
In the paper 'Dual Aggregation Transformer for Image Super-Resolution', what PSNR score did the DAT+ model get on the Set14 - 4x upscaling dataset
29.29
Vinoground
InternLM-XC-2.5
InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output
2024-07-03T00:00:00
https://arxiv.org/abs/2407.03320v1
[ "https://github.com/internlm/internlm-xcomposer" ]
In the paper 'InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output', what Text Score score did the InternLM-XC-2.5 model get on the Vinoground dataset
28.8
WDC Products-80%cc-seen-medium
Llama3.1_8B_error-based_example_selection
Fine-tuning Large Language Models for Entity Matching
2024-09-12T00:00:00
https://arxiv.org/abs/2409.08185v1
[ "https://github.com/wbsg-uni-mannheim/tailormatch" ]
In the paper 'Fine-tuning Large Language Models for Entity Matching', what F1 (%) score did the Llama3.1_8B_error-based_example_selection model get on the WDC Products-80%cc-seen-medium dataset
74.37
ImageNet 512x512
EDM2-XXL w/ guidance interval
Applying Guidance in a Limited Interval Improves Sample and Distribution Quality in Diffusion Models
2024-04-11T00:00:00
https://arxiv.org/abs/2404.07724v2
[ "https://github.com/kynkaat/guidance-interval" ]
In the paper 'Applying Guidance in a Limited Interval Improves Sample and Distribution Quality in Diffusion Models', what FID score did the EDM2-XXL w/ guidance interval model get on the ImageNet 512x512 dataset
1.40
SAFIM
starcoderbase
Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks
2024-03-07T00:00:00
https://arxiv.org/abs/2403.04814v3
[ "https://github.com/gonglinyuan/safim" ]
In the paper 'Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks', what Algorithmic score did the starcoderbase model get on the SAFIM dataset
44.11
MSMT17
CLIP-ReID Baseline + UFFM + AMC
Enhancing Person Re-Identification via Uncertainty Feature Fusion and Auto-weighted Measure Combination
2024-05-02T00:00:00
https://arxiv.org/abs/2405.01101v4
[ "https://github.com/chequanghuy/Enhancing-Person-Re-Identification-via-UFFM-and-AMC" ]
In the paper 'Enhancing Person Re-Identification via Uncertainty Feature Fusion and Auto-weighted Measure Combination', what Rank-1 score did the CLIP-ReID Baseline + UFFM + AMC model get on the MSMT17 dataset
83.8
Peptides-struct
CIN++-500k
CIN++: Enhancing Topological Message Passing
2023-06-06T00:00:00
https://arxiv.org/abs/2306.03561v1
[ "https://github.com/twitter-research/cwn" ]
In the paper 'CIN++: Enhancing Topological Message Passing', what MAE score did the CIN++-500k model get on the Peptides-struct dataset
0.2523
Casia V1+
Early Fusion
MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization
2023-12-04T00:00:00
https://arxiv.org/abs/2312.01790v2
[ "https://github.com/idt-iti/mmfusion-iml" ]
In the paper 'MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization', what AUC score did the Early Fusion model get on the Casia V1+ dataset
0.929
ETTh1 (720) Multivariate
RLinear
Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping
2023-05-18T00:00:00
https://arxiv.org/abs/2305.10721v1
[ "https://github.com/plumprc/rtsf" ]
In the paper 'Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping', what MSE score did the RLinear model get on the ETTh1 (720) Multivariate dataset
0.442
DTD
TURTLE (CLIP + DINOv2)
Let Go of Your Labels with Unsupervised Transfer
2024-06-11T00:00:00
https://arxiv.org/abs/2406.07236v1
[ "https://github.com/mlbio-epfl/turtle" ]
In the paper 'Let Go of Your Labels with Unsupervised Transfer', what Accuracy score did the TURTLE (CLIP + DINOv2) model get on the DTD dataset
57.3
LRS2
TDFNet (MHSA + Shared)
TDFNet: An Efficient Audio-Visual Speech Separation Model with Top-down Fusion
2024-01-25T00:00:00
https://arxiv.org/abs/2401.14185v1
[ "https://github.com/spkgyk/TDFNet" ]
In the paper 'TDFNet: An Efficient Audio-Visual Speech Separation Model with Top-down Fusion', what SI-SNRi score did the TDFNet (MHSA + Shared) model get on the LRS2 dataset
15.0
mini WebVision 1.0
LRA-diffusion (CLIP ViT)
Label-Retrieval-Augmented Diffusion Models for Learning from Noisy Labels
2023-05-31T00:00:00
https://arxiv.org/abs/2305.19518v2
[ "https://github.com/puar-playground/lra-diffusion" ]
In the paper 'Label-Retrieval-Augmented Diffusion Models for Learning from Noisy Labels', what Top-1 Accuracy score did the LRA-diffusion (CLIP ViT) model get on the mini WebVision 1.0 dataset
84.16
Charades-STA
UnLoc-B
UnLoc: A Unified Framework for Video Localization Tasks
2023-08-21T00:00:00
https://arxiv.org/abs/2308.11062v1
[ "https://github.com/google-research/scenic" ]
In the paper 'UnLoc: A Unified Framework for Video Localization Tasks', what R@1 IoU=0.5 score did the UnLoc-B model get on the Charades-STA dataset
58.1
Lost and Found
Mask2Anomaly
Unmasking Anomalies in Road-Scene Segmentation
2023-07-25T00:00:00
https://arxiv.org/abs/2307.13316v1
[ "https://github.com/shyam671/mask2anomaly-unmasking-anomalies-in-road-scene-segmentation" ]
In the paper 'Unmasking Anomalies in Road-Scene Segmentation', what AP score did the Mask2Anomaly model get on the Lost and Found dataset
86.59
LSUN Bedroom
BOSS
Bellman Optimal Stepsize Straightening of Flow-Matching Models
2023-12-27T00:00:00
https://arxiv.org/abs/2312.16414v3
[ "https://github.com/nguyenngocbaocmt02/boss" ]
In the paper 'Bellman Optimal Stepsize Straightening of Flow-Matching Models', what clean-FID score did the BOSS model get on the LSUN Bedroom dataset
12.13
GQA test-dev
CuMo-7B
CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts
2024-05-09T00:00:00
https://arxiv.org/abs/2405.05949v1
[ "https://github.com/shi-labs/cumo" ]
In the paper 'CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts', what Accuracy score did the CuMo-7B model get on the GQA test-dev dataset
64.9