Columns (name: type, value range):
  dataset       string, 0–82 chars
  model_name    string, 0–150 chars
  paper_title   string, 19–175 chars
  paper_date    timestamp[ns]
  paper_url     string, 32–35 chars
  code_links    list, 1–1 items
  prompts       string, 105–331 chars
  answer        string, 1–67 chars
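The column metadata above can be modeled as a typed record; below is a minimal sketch, assuming each row is loaded into plain Python (the `BenchmarkRecord` name is hypothetical, and `paper_date` is kept as an ISO-8601 string rather than a nanosecond timestamp for simplicity). The values are taken from the first row of the preview.

```python
from dataclasses import dataclass, field

@dataclass
class BenchmarkRecord:
    """One row of the benchmark-QA dataset; fields mirror the columns above."""
    dataset: str
    model_name: str
    paper_title: str
    paper_date: str          # ISO-8601 string stand-in for timestamp[ns]
    paper_url: str
    code_links: list = field(default_factory=list)
    prompts: str = ""
    answer: str = ""

# First row from the preview
row = BenchmarkRecord(
    dataset="CUB 200 5-way 5-shot",
    model_name="PT+MAP+SF+BPA (transductive)",
    paper_title="The Balanced-Pairwise-Affinities Feature Transform",
    paper_date="2024-06-25T00:00:00",
    paper_url="https://arxiv.org/abs/2407.01467v1",
    code_links=["https://github.com/danielshalam/bpa"],
    prompts=(
        "In the paper 'The Balanced-Pairwise-Affinities Feature Transform', "
        "what Accuracy score did the PT+MAP+SF+BPA (transductive) model get "
        "on the CUB 200 5-way 5-shot dataset"
    ),
    answer="97.12",
)
print(row.answer)
```

Note that `answer` is stored as a string (length 1–67), not a number, since values like `84.59±4.53` or `82.54%` appear in the data.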
CUB 200 5-way 5-shot
PT+MAP+SF+BPA (transductive)
The Balanced-Pairwise-Affinities Feature Transform
2024-06-25T00:00:00
https://arxiv.org/abs/2407.01467v1
[ "https://github.com/danielshalam/bpa" ]
In the paper 'The Balanced-Pairwise-Affinities Feature Transform', what Accuracy score did the PT+MAP+SF+BPA (transductive) model get on the CUB 200 5-way 5-shot dataset
97.12
ImageNet 128x128
TarFlow
Normalizing Flows are Capable Generative Models
2024-12-09T00:00:00
https://arxiv.org/abs/2412.06329v2
[ "https://github.com/apple/ml-tarflow" ]
In the paper 'Normalizing Flows are Capable Generative Models', what FID score did the TarFlow model get on the ImageNet 128x128 dataset
5.03
Stanford Cars
RPO
Read-only Prompt Optimization for Vision-Language Few-shot Learning
2023-08-29T00:00:00
https://arxiv.org/abs/2308.14960v2
[ "https://github.com/mlvlab/rpo" ]
In the paper 'Read-only Prompt Optimization for Vision-Language Few-shot Learning', what Harmonic mean score did the RPO model get on the Stanford Cars dataset
74.69
PACS
GMDG (e RegNetY-16GF)
Rethinking Multi-domain Generalization with A General Learning Objective
2024-02-29T00:00:00
https://arxiv.org/abs/2402.18853v1
[ "https://github.com/zhaorui-tan/GMDG_cvpr2024" ]
In the paper 'Rethinking Multi-domain Generalization with A General Learning Objective', what Average Accuracy score did the GMDG (e RegNetY-16GF) model get on the PACS dataset
97.3
CROHME 2019
ICAL
ICAL: Implicit Character-Aided Learning for Enhanced Handwritten Mathematical Expression Recognition
2024-05-15T00:00:00
https://arxiv.org/abs/2405.09032v4
[ "https://github.com/qingzhenduyu/ical" ]
In the paper 'ICAL: Implicit Character-Aided Learning for Enhanced Handwritten Mathematical Expression Recognition', what ExpRate score did the ICAL model get on the CROHME 2019 dataset
60.51
ETTm1 (720) Multivariate
TSMixer
TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting
2023-06-14T00:00:00
https://arxiv.org/abs/2306.09364v4
[ "https://github.com/ibm/tsfm" ]
In the paper 'TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting', what MSE score did the TSMixer model get on the ETTm1 (720) Multivariate dataset
0.416
EconLogicQA
Mistral-7B-Instruct-v0.2
EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning
2024-05-13T00:00:00
https://arxiv.org/abs/2405.07938v2
[ "https://github.com/yinzhu-quan/lm-evaluation-harness" ]
In the paper 'EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning', what Accuracy score did the Mistral-7B-Instruct-v0.2 model get on the EconLogicQA dataset
0.3154
ImageNet
ViT-S
Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers
2023-08-18T00:00:00
https://arxiv.org/abs/2308.09372v3
[ "https://github.com/tobna/whattransformertofavor" ]
In the paper 'Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers', what Top 1 Accuracy score did the ViT-S model get on the ImageNet dataset
82.54%
Uber-Text
CLIP4STR-L (DataComp-1B)
CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model
2023-05-23T00:00:00
https://arxiv.org/abs/2305.14014v3
[ "https://github.com/VamosC/CLIP4STR" ]
In the paper 'CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model', what Accuracy (%) score did the CLIP4STR-L (DataComp-1B) model get on the Uber-Text dataset
92.2
Moving MNIST
PredFormer
PredFormer: Transformers Are Effective Spatial-Temporal Predictive Learners
2024-10-07T00:00:00
https://arxiv.org/abs/2410.04733v2
[ "https://github.com/yyyujintang/predformer" ]
In the paper 'PredFormer: Transformers Are Effective Spatial-Temporal Predictive Learners', what MSE score did the PredFormer model get on the Moving MNIST dataset
11.62
RWTH-PHOENIX-Weather 2014 T
SlowFastSign
SlowFast Network for Continuous Sign Language Recognition
2023-09-21T00:00:00
https://arxiv.org/abs/2309.12304v1
[ "https://github.com/kaistmm/SlowFastSign" ]
In the paper 'SlowFast Network for Continuous Sign Language Recognition', what Word Error Rate (WER) score did the SlowFastSign model get on the RWTH-PHOENIX-Weather 2014 T dataset
18.7
Mini-Imagenet 5-way (1-shot)
PT+MAP+SF+BPA (transductive)
The Balanced-Pairwise-Affinities Feature Transform
2024-06-25T00:00:00
https://arxiv.org/abs/2407.01467v1
[ "https://github.com/danielshalam/bpa" ]
In the paper 'The Balanced-Pairwise-Affinities Feature Transform', what Accuracy score did the PT+MAP+SF+BPA (transductive) model get on the Mini-Imagenet 5-way (1-shot) dataset
85.59
AMZ Photo
GraphSAGE
Half-Hop: A graph upsampling approach for slowing down message passing
2023-08-17T00:00:00
https://arxiv.org/abs/2308.09198v1
[ "https://github.com/nerdslab/halfhop" ]
In the paper 'Half-Hop: A graph upsampling approach for slowing down message passing', what Accuracy score did the GraphSAGE model get on the AMZ Photo dataset
95.03%
InfographicVQA
PaLI-3
PaLI-3 Vision Language Models: Smaller, Faster, Stronger
2023-10-13T00:00:00
https://arxiv.org/abs/2310.09199v2
[ "https://github.com/kyegomez/PALI3" ]
In the paper 'PaLI-3 Vision Language Models: Smaller, Faster, Stronger', what ANLS score did the PaLI-3 model get on the InfographicVQA dataset
57.8
GoPro
ID-Blau (Stripformer)
ID-Blau: Image Deblurring by Implicit Diffusion-based reBLurring AUgmentation
2023-12-18T00:00:00
https://arxiv.org/abs/2312.10998v2
[ "https://github.com/plusgood-steven/id-blau" ]
In the paper 'ID-Blau: Image Deblurring by Implicit Diffusion-based reBLurring AUgmentation', what PSNR score did the ID-Blau (Stripformer) model get on the GoPro dataset
33.66
ImageNet
ReViT-B
ReViT: Enhancing Vision Transformers Feature Diversity with Attention Residual Connections
2024-02-17T00:00:00
https://arxiv.org/abs/2402.11301v2
[ "https://github.com/adiko1997/revit" ]
In the paper 'ReViT: Enhancing Vision Transformers Feature Diversity with Attention Residual Connections', what Top 1 Accuracy score did the ReViT-B model get on the ImageNet dataset
82.4
Hockey
MSQNet
Actor-agnostic Multi-label Action Recognition with Multi-modal Query
2023-07-20T00:00:00
https://arxiv.org/abs/2307.10763v3
[ "https://github.com/mondalanindya/msqnet" ]
In the paper 'Actor-agnostic Multi-label Action Recognition with Multi-modal Query', what Accuracy score did the MSQNet model get on the Hockey dataset
3.05
The Pile
Test-Time Fine-Tuning with SIFT + Llama-3.2 (3B)
Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs
2024-10-10T00:00:00
https://arxiv.org/abs/2410.08020v2
[ "https://github.com/jonhue/activeft" ]
In the paper 'Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs', what Bits per byte score did the Test-Time Fine-Tuning with SIFT + Llama-3.2 (3B) model get on The Pile dataset
0.557
ChEBI-20
InstructMol-GS
InstructMol: Multi-Modal Integration for Building a Versatile and Reliable Molecular Assistant in Drug Discovery
2023-11-27T00:00:00
https://arxiv.org/abs/2311.16208v1
[ "https://github.com/idea-xl/instructmol" ]
In the paper 'InstructMol: Multi-Modal Integration for Building a Versatile and Reliable Molecular Assistant in Drug Discovery', what BLEU-2 score did the InstructMol-GS model get on the ChEBI-20 dataset
47.5
MATH
OpenMath-CodeLlama-13B (w/ code)
OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset
2024-02-15T00:00:00
https://arxiv.org/abs/2402.10176v2
[ "https://github.com/kipok/nemo-skills" ]
In the paper 'OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset', what Accuracy score did the OpenMath-CodeLlama-13B (w/ code) model get on the MATH dataset
45.5
MM-Vet
SPHINX-Plus
SPHINX-X: Scaling Data and Parameters for a Family of Multi-modal Large Language Models
2024-02-08T00:00:00
https://arxiv.org/abs/2402.05935v2
[ "https://github.com/alpha-vllm/llama2-accessory" ]
In the paper 'SPHINX-X: Scaling Data and Parameters for a Family of Multi-modal Large Language Models', what GPT-4 score did the SPHINX-Plus model get on the MM-Vet dataset
47.9
Texas
CoED
Improving Graph Neural Networks by Learning Continuous Edge Directions
2024-10-18T00:00:00
https://arxiv.org/abs/2410.14109v1
[ "https://github.com/hormoz-lab/coed-gnn" ]
In the paper 'Improving Graph Neural Networks by Learning Continuous Edge Directions', what Accuracy score did the CoED model get on the Texas dataset
84.59±4.53
ICDAR2013
CLIP4STR-B
CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model
2023-05-23T00:00:00
https://arxiv.org/abs/2305.14014v3
[ "https://github.com/VamosC/CLIP4STR" ]
In the paper 'CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model', what Accuracy score did the CLIP4STR-B model get on the ICDAR2013 dataset
98.3
LSMDC
vid-TLDR (UMT-L)
vid-TLDR: Training Free Token merging for Light-weight Video Transformer
2024-03-20T00:00:00
https://arxiv.org/abs/2403.13347v2
[ "https://github.com/mlvlab/vid-tldr" ]
In the paper 'vid-TLDR: Training Free Token merging for Light-weight Video Transformer', what text-to-video R@1 score did the vid-TLDR (UMT-L) model get on the LSMDC dataset
43.1
PeMSD7(L)
STD-MAE
Spatial-Temporal-Decoupled Masked Pre-training for Spatiotemporal Forecasting
2023-12-01T00:00:00
https://arxiv.org/abs/2312.00516v3
[ "https://github.com/jimmy-7664/std-mae" ]
In the paper 'Spatial-Temporal-Decoupled Masked Pre-training for Spatiotemporal Forecasting', what 12 steps MAE score did the STD-MAE model get on the PeMSD7(L) dataset
2.64
Something-Something V2
CAST-B/16
CAST: Cross-Attention in Space and Time for Video Action Recognition
2023-11-30T00:00:00
https://arxiv.org/abs/2311.18825v2
[ "https://github.com/khu-vll/cast" ]
In the paper 'CAST: Cross-Attention in Space and Time for Video Action Recognition', what Top-1 Accuracy score did the CAST-B/16 model get on the Something-Something V2 dataset
71.6
Action-Camera Parking
EfficientNet-P
Revising deep learning methods in parking lot occupancy detection
2023-06-07T00:00:00
https://arxiv.org/abs/2306.04288v3
[ "https://github.com/eighonet/parking-research" ]
In the paper 'Revising deep learning methods in parking lot occupancy detection', what F1-score did the EfficientNet-P model get on the Action-Camera Parking dataset
0.9125
ImageNet - 1% labeled data
SimMatch + EPASS (ResNet-50)
Debiasing, calibrating, and improving Semi-supervised Learning performance via simple Ensemble Projector
2023-10-24T00:00:00
https://arxiv.org/abs/2310.15764v1
[ "https://github.com/beandkay/epass" ]
In the paper 'Debiasing, calibrating, and improving Semi-supervised Learning performance via simple Ensemble Projector', what Top 5 Accuracy score did the SimMatch + EPASS (ResNet-50) model get on the ImageNet - 1% labeled data dataset
87.6
Now You're Cooking!
LLaVA-Chef
LLaVA-Chef: A Multi-modal Generative Model for Food Recipes
2024-08-29T00:00:00
https://arxiv.org/abs/2408.16889v1
[ "https://github.com/mohbattharani/LLaVA-Chef" ]
In the paper 'LLaVA-Chef: A Multi-modal Generative Model for Food Recipes', what Perplexity score did the LLaVA-Chef model get on the Now You're Cooking! dataset
2.6
INRIA Aerial Image Labeling
UANet(PVT-V2-B2)
Building Extraction from Remote Sensing Images via an Uncertainty-Aware Network
2023-07-23T00:00:00
https://arxiv.org/abs/2307.12309v1
[ "https://github.com/henryjiepanli/uncertainty-aware-network" ]
In the paper 'Building Extraction from Remote Sensing Images via an Uncertainty-Aware Network', what IoU score did the UANet(PVT-V2-B2) model get on the INRIA Aerial Image Labeling dataset
83.34
Words in Context
PaLM 2-M (one-shot)
PaLM 2 Technical Report
2023-05-17T00:00:00
https://arxiv.org/abs/2305.10403v3
[ "https://github.com/eternityyw/tram-benchmark" ]
In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-M (one-shot) model get on the Words in Context dataset
52.0
ETTh2 (720) Univariate
AutoCon
Self-Supervised Contrastive Learning for Long-term Forecasting
2024-02-03T00:00:00
https://arxiv.org/abs/2402.02023v2
[ "https://github.com/junwoopark92/self-supervised-contrastive-forecsating" ]
In the paper 'Self-Supervised Contrastive Learning for Long-term Forecasting', what MSE score did the AutoCon model get on the ETTh2 (720) Univariate dataset
0.177
VP-Air
CLIP
AnyLoc: Towards Universal Visual Place Recognition
2023-08-01T00:00:00
https://arxiv.org/abs/2308.00688v2
[ "https://github.com/AnyLoc/AnyLoc" ]
In the paper 'AnyLoc: Towards Universal Visual Place Recognition', what Recall@1 score did the CLIP model get on the VP-Air dataset
36.59
LaSOT
SAMURAI-L
SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory
2024-11-18T00:00:00
https://arxiv.org/abs/2411.11922v2
[ "https://github.com/yangchris11/samurai" ]
In the paper 'SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory', what AUC score did the SAMURAI-L model get on the LaSOT dataset
74.2
ETTm1 (192) Multivariate
TSMixer
TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting
2023-06-14T00:00:00
https://arxiv.org/abs/2306.09364v4
[ "https://github.com/ibm/tsfm" ]
In the paper 'TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting', what MSE score did the TSMixer model get on the ETTm1 (192) Multivariate dataset
0.333
TAO
AED (RegionCLIP)
Associate Everything Detected: Facilitating Tracking-by-Detection to the Unknown
2024-09-14T00:00:00
https://arxiv.org/abs/2409.09293v1
[ "https://github.com/balabooooo/aed" ]
In the paper 'Associate Everything Detected: Facilitating Tracking-by-Detection to the Unknown', what TETA score did the AED (RegionCLIP) model get on the TAO dataset
37.0
YouCook2
Norton
Multi-granularity Correspondence Learning from Long-term Noisy Videos
2024-01-30T00:00:00
https://arxiv.org/abs/2401.16702v1
[ "https://github.com/XLearning-SCU/2024-ICLR-Norton" ]
In the paper 'Multi-granularity Correspondence Learning from Long-term Noisy Videos', what Cap. Avg. R@1 score did the Norton model get on the YouCook2 dataset
75.5
Haze4k
MixDehazeNet
MixDehazeNet : Mix Structure Block For Image Dehazing Network
2023-05-28T00:00:00
https://arxiv.org/abs/2305.17654v1
[ "https://github.com/ameryxiong/mixdehazenet" ]
In the paper 'MixDehazeNet : Mix Structure Block For Image Dehazing Network', what PSNR score did the MixDehazeNet model get on the Haze4k dataset
35.64
MRR-Benchmark
Idefics-80B
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
2023-06-21T00:00:00
https://arxiv.org/abs/2306.16527v2
[ "https://github.com/huggingface/obelics" ]
In the paper 'OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents', what Total Column Score did the Idefics-80B model get on the MRR-Benchmark dataset
139
ANLI test
PaLM 2-S (one-shot)
PaLM 2 Technical Report
2023-05-17T00:00:00
https://arxiv.org/abs/2305.10403v3
[ "https://github.com/eternityyw/tram-benchmark" ]
In the paper 'PaLM 2 Technical Report', what A1 score did the PaLM 2-S (one-shot) model get on the ANLI test dataset
53.1
QNLI
GOLD (T5-base)
GOLD: Generalized Knowledge Distillation via Out-of-Distribution-Guided Language Data Generation
2024-03-28T00:00:00
https://arxiv.org/abs/2403.19754v1
[ "https://github.com/mgholamikn/GOLD" ]
In the paper 'GOLD: Generalized Knowledge Distillation via Out-of-Distribution-Guided Language Data Generation', what Accuracy score did the GOLD (T5-base) model get on the QNLI dataset
91.7
NIR2RGB VCIP Challenge Dataset
ColorMamba
ColorMamba: Towards High-quality NIR-to-RGB Spectral Translation with Mamba
2024-08-15T00:00:00
https://arxiv.org/abs/2408.08087v1
[ "https://github.com/alexyangxx/colormamba" ]
In the paper 'ColorMamba: Towards High-quality NIR-to-RGB Spectral Translation with Mamba', what PSNR score did the ColorMamba model get on the NIR2RGB VCIP Challenge Dataset
24.56
GOT-10k
MITS
Integrating Boxes and Masks: A Multi-Object Framework for Unified Visual Tracking and Segmentation
2023-08-25T00:00:00
https://arxiv.org/abs/2308.13266v3
[ "https://github.com/yoxu515/mits" ]
In the paper 'Integrating Boxes and Masks: A Multi-Object Framework for Unified Visual Tracking and Segmentation', what Average Overlap score did the MITS model get on the GOT-10k dataset
80.4
DiDeMo
vid-TLDR (UMT-L)
vid-TLDR: Training Free Token merging for Light-weight Video Transformer
2024-03-20T00:00:00
https://arxiv.org/abs/2403.13347v2
[ "https://github.com/mlvlab/vid-tldr" ]
In the paper 'vid-TLDR: Training Free Token merging for Light-weight Video Transformer', what text-to-video R@1 score did the vid-TLDR (UMT-L) model get on the DiDeMo dataset
72.3
MassSpecGym
Precursor m/z
MassSpecGym: A benchmark for the discovery and identification of molecules
2024-10-30T00:00:00
https://arxiv.org/abs/2410.23326v1
[ "https://github.com/pluskal-lab/massspecgym" ]
In the paper 'MassSpecGym: A benchmark for the discovery and identification of molecules', what Cosine Similarity score did the Precursor m/z model get on the MassSpecGym dataset
0.15
SVTP
CLIP4STR-L (DataComp-1B)
CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model
2023-05-23T00:00:00
https://arxiv.org/abs/2305.14014v3
[ "https://github.com/VamosC/CLIP4STR" ]
In the paper 'CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model', what Accuracy score did the CLIP4STR-L (DataComp-1B) model get on the SVTP dataset
98.1
3DPW
SMPLer-X
SMPLer-X: Scaling Up Expressive Human Pose and Shape Estimation
2023-09-29T00:00:00
https://arxiv.org/abs/2309.17448v3
[ "https://github.com/caizhongang/SMPLer-X" ]
In the paper 'SMPLer-X: Scaling Up Expressive Human Pose and Shape Estimation', what MPJPE score did the SMPLer-X model get on the 3DPW dataset
75.2
SPKL
CFEN
Revising deep learning methods in parking lot occupancy detection
2023-06-07T00:00:00
https://arxiv.org/abs/2306.04288v3
[ "https://github.com/eighonet/parking-research" ]
In the paper 'Revising deep learning methods in parking lot occupancy detection', what F1-score did the CFEN model get on the SPKL dataset
0.5367
Wisconsin
TE-GCNN
Transfer Entropy in Graph Convolutional Neural Networks
2024-06-08T00:00:00
https://arxiv.org/abs/2406.06632v1
[ "https://github.com/avmoldovan/Heterophily_and_oversmoothing-forked" ]
In the paper 'Transfer Entropy in Graph Convolutional Neural Networks', what Accuracy score did the TE-GCNN model get on the Wisconsin dataset
87.45 ± 3.70
WikiSQL
CABINET
CABINET: Content Relevance based Noise Reduction for Table Question Answering
2024-02-02T00:00:00
https://arxiv.org/abs/2402.01155v3
[ "https://github.com/sohanpatnaik106/cabinet_qa" ]
In the paper 'CABINET: Content Relevance based Noise Reduction for Table Question Answering', what Denotation accuracy (test) score did the CABINET model get on the WikiSQL dataset
89.5
FRMT (Chinese - Taiwan)
PaLM
PaLM 2 Technical Report
2023-05-17T00:00:00
https://arxiv.org/abs/2305.10403v3
[ "https://github.com/eternityyw/tram-benchmark" ]
In the paper 'PaLM 2 Technical Report', what BLEURT score did the PaLM model get on the FRMT (Chinese - Taiwan) dataset
68.6
SSC
SNN with Dilated Convolution with Learnable Spacings
Learning Delays in Spiking Neural Networks using Dilated Convolutions with Learnable Spacings
2023-06-30T00:00:00
https://arxiv.org/abs/2306.17670v3
[ "https://github.com/thvnvtos/snn-delays" ]
In the paper 'Learning Delays in Spiking Neural Networks using Dilated Convolutions with Learnable Spacings', what Accuracy score did the SNN with Dilated Convolution with Learnable Spacings model get on the SSC dataset
80.69
VideoInstruct
TS-LLaVA-34B
TS-LLaVA: Constructing Visual Tokens through Thumbnail-and-Sampling for Training-Free Video Large Language Models
2024-11-17T00:00:00
https://arxiv.org/abs/2411.11066v1
[ "https://github.com/tingyu215/ts-llava" ]
In the paper 'TS-LLaVA: Constructing Visual Tokens through Thumbnail-and-Sampling for Training-Free Video Large Language Models', what mean score did the TS-LLaVA-34B model get on the VideoInstruct dataset
3.38
CHASE_DB1
PVT-GCASCADE
G-CASCADE: Efficient Cascaded Graph Convolutional Decoding for 2D Medical Image Segmentation
2023-10-24T00:00:00
https://arxiv.org/abs/2310.16175v1
[ "https://github.com/SLDGroup/G-CASCADE" ]
In the paper 'G-CASCADE: Efficient Cascaded Graph Convolutional Decoding for 2D Medical Image Segmentation', what F1 score did the PVT-GCASCADE model get on the CHASE_DB1 dataset
0.8251
ImageNet
TURTLE (CLIP + DINOv2)
Let Go of Your Labels with Unsupervised Transfer
2024-06-11T00:00:00
https://arxiv.org/abs/2406.07236v1
[ "https://github.com/mlbio-epfl/turtle" ]
In the paper 'Let Go of Your Labels with Unsupervised Transfer', what NMI score did the TURTLE (CLIP + DINOv2) model get on the ImageNet dataset
88.2
DAVIS 2017 (val)
UniVS(Swin-L)
UniVS: Unified and Universal Video Segmentation with Prompts as Queries
2024-02-28T00:00:00
https://arxiv.org/abs/2402.18115v2
[ "https://github.com/minghanli/univs" ]
In the paper 'UniVS: Unified and Universal Video Segmentation with Prompts as Queries', what J&F 1st frame score did the UniVS(Swin-L) model get on the DAVIS 2017 (val) dataset
59.4
COCO-20i (1-shot)
MIANet (VGG-16)
MIANet: Aggregating Unbiased Instance and General Information for Few-Shot Semantic Segmentation
2023-05-23T00:00:00
https://arxiv.org/abs/2305.13864v1
[ "https://github.com/aldrich2y/mianet" ]
In the paper 'MIANet: Aggregating Unbiased Instance and General Information for Few-Shot Semantic Segmentation', what Mean IoU score did the MIANet (VGG-16) model get on the COCO-20i (1-shot) dataset
45.69
PCQM4M-LSC
Graphormer + GFSA
Graph Convolutions Enrich the Self-Attention in Transformers!
2023-12-07T00:00:00
https://arxiv.org/abs/2312.04234v5
[ "https://github.com/jeongwhanchoi/gfsa" ]
In the paper 'Graph Convolutions Enrich the Self-Attention in Transformers!', what Validation MAE score did the Graphormer + GFSA model get on the PCQM4M-LSC dataset
0.1193
PeMS07
PM-DMNet(P)
Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction
2024-08-12T00:00:00
https://arxiv.org/abs/2408.07100v1
[ "https://github.com/wengwenchao123/PM-DMNet" ]
In the paper 'Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction', what MAE@1h score did the PM-DMNet(P) model get on the PeMS07 dataset
19.35
Cityscapes to Foggy Cityscapes
MIC (ResNet50-FPN)
Align and Distill: Unifying and Improving Domain Adaptive Object Detection
2024-03-18T00:00:00
https://arxiv.org/abs/2403.12029v2
[ "https://github.com/justinkay/aldi" ]
In the paper 'Align and Distill: Unifying and Improving Domain Adaptive Object Detection', what mAP@0.5 score did the MIC (ResNet50-FPN) model get on the Cityscapes to Foggy Cityscapes dataset
61.7
3DPW
CycleAdapt (w/ 2D GT)
Cyclic Test-Time Adaptation on Monocular Video for 3D Human Mesh Reconstruction
2023-08-12T00:00:00
https://arxiv.org/abs/2308.06554v1
[ "https://github.com/hygenie1228/cycleadapt_release" ]
In the paper 'Cyclic Test-Time Adaptation on Monocular Video for 3D Human Mesh Reconstruction', what PA-MPJPE score did the CycleAdapt (w/ 2D GT) model get on the 3DPW dataset
39.9
Synapse multi-organ CT
EMCAD
EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation
2024-05-11T00:00:00
https://arxiv.org/abs/2405.06880v1
[ "https://github.com/sldgroup/emcad" ]
In the paper 'EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation', what Avg DSC score did the EMCAD model get on the Synapse multi-organ CT dataset
83.63
GRAZPEDWRI-DX
YOLOv8+GE
Pediatric Wrist Fracture Detection Using Feature Context Excitation Modules in X-ray Images
2024-10-01T00:00:00
https://arxiv.org/abs/2410.01031v2
[ "https://github.com/ruiyangju/fce-yolov8" ]
In the paper 'Pediatric Wrist Fracture Detection Using Feature Context Excitation Modules in X-ray Images', what mAP score did the YOLOv8+GE model get on the GRAZPEDWRI-DX dataset
34.01
CV-Cities
CV-Cities
CV-Cities: Advancing Cross-View Geo-Localization in Global Cities
2024-11-19T00:00:00
https://arxiv.org/abs/2411.12431v1
[ "https://github.com/gaoshuang98/cvcities" ]
In the paper 'CV-Cities: Advancing Cross-View Geo-Localization in Global Cities', what Recall@1 score did the CV-Cities model get on the CV-Cities dataset
82.91
GSM8K
MathCoder-CL-34B
MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning
2023-10-05T00:00:00
https://arxiv.org/abs/2310.03731v1
[ "https://github.com/mathllm/mathcoder" ]
In the paper 'MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning', what Accuracy score did the MathCoder-CL-34B model get on the GSM8K dataset
81.7
CelebA-HQ 256x256
RDM
Relay Diffusion: Unifying diffusion process across resolutions for image synthesis
2023-09-04T00:00:00
https://arxiv.org/abs/2309.03350v1
[ "https://github.com/THUDM/RelayDiffusion" ]
In the paper 'Relay Diffusion: Unifying diffusion process across resolutions for image synthesis', what FID score did the RDM model get on the CelebA-HQ 256x256 dataset
3.15
COCO Captions
BLIP-FuseCap
FuseCap: Leveraging Large Language Models for Enriched Fused Image Captions
2023-05-28T00:00:00
https://arxiv.org/abs/2305.17718v2
[ "https://github.com/RotsteinNoam/FuseCap" ]
In the paper 'FuseCap: Leveraging Large Language Models for Enriched Fused Image Captions', what CLIPScore score did the BLIP-FuseCap model get on the COCO Captions dataset
78.5
OVIS validation
GRAtt-VIS (Swin-L)
GRAtt-VIS: Gated Residual Attention for Auto Rectifying Video Instance Segmentation
2023-05-26T00:00:00
https://arxiv.org/abs/2305.17096v1
[ "https://github.com/tanveer81/grattvis" ]
In the paper 'GRAtt-VIS: Gated Residual Attention for Auto Rectifying Video Instance Segmentation', what mask AP score did the GRAtt-VIS (Swin-L) model get on the OVIS validation dataset
45.7
MathToF
GPT-4 (Teaching-Inspired)
Teaching-Inspired Integrated Prompting Framework: A Novel Approach for Enhancing Reasoning in Large Language Models
2024-10-10T00:00:00
https://arxiv.org/abs/2410.08068v1
[ "https://github.com/sallytan13/teaching-inspired-prompting" ]
In the paper 'Teaching-Inspired Integrated Prompting Framework: A Novel Approach for Enhancing Reasoning in Large Language Models', what Accuracy score did the GPT-4 (Teaching-Inspired) model get on the MathToF dataset
89.2
ASDiv-A
OpenMath-CodeLlama-70B (w/ code)
OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset
2024-02-15T00:00:00
https://arxiv.org/abs/2402.10176v2
[ "https://github.com/kipok/nemo-skills" ]
In the paper 'OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset', what Execution Accuracy score did the OpenMath-CodeLlama-70B (w/ code) model get on the ASDiv-A dataset
84.7
BSDS300
PaddingFlow
PaddingFlow: Improving Normalizing Flows with Padding-Dimensional Noise
2024-03-13T00:00:00
https://arxiv.org/abs/2403.08216v2
[ "https://github.com/adamqlmeng/paddingflow" ]
In the paper 'PaddingFlow: Improving Normalizing Flows with Padding-Dimensional Noise', what CD score did the PaddingFlow model get on the BSDS300 dataset
0.495
VLCS
UniDG + CORAL + ConvNeXt-B
Towards Unified and Effective Domain Generalization
2023-10-16T00:00:00
https://arxiv.org/abs/2310.10008v1
[ "https://github.com/invictus717/UniDG" ]
In the paper 'Towards Unified and Effective Domain Generalization', what Average Accuracy score did the UniDG + CORAL + ConvNeXt-B model get on the VLCS dataset
84.5
Set14 - 4x upscaling
AESOP
Auto-Encoded Supervision for Perceptual Image Super-Resolution
2024-11-28T00:00:00
https://arxiv.org/abs/2412.00124v1
[ "https://github.com/2minkyulee/aesop-auto-encoded-supervision-for-perceptual-image-super-resolution" ]
In the paper 'Auto-Encoded Supervision for Perceptual Image Super-Resolution', what PSNR score did the AESOP model get on the Set14 - 4x upscaling dataset
27.421
Breakfast
LTContext
How Much Temporal Long-Term Context is Needed for Action Segmentation?
2023-08-22T00:00:00
https://arxiv.org/abs/2308.11358v2
[ "https://github.com/ltcontext/ltcontext" ]
In the paper 'How Much Temporal Long-Term Context is Needed for Action Segmentation?', what F1@10% score did the LTContext model get on the Breakfast dataset
77.6
WHU-CD
SRC-Net
SRC-Net: Bi-Temporal Spatial Relationship Concerned Network for Change Detection
2024-06-09T00:00:00
https://arxiv.org/abs/2406.05668v2
[ "https://github.com/Chnja/SRCNet" ]
In the paper 'SRC-Net: Bi-Temporal Spatial Relationship Concerned Network for Change Detection', what F1 score did the SRC-Net model get on the WHU-CD dataset
92.06
KIT Motion-Language
ST-MLP
Guided Attention for Interpretable Motion Captioning
2023-10-11T00:00:00
https://arxiv.org/abs/2310.07324v2
[ "https://github.com/rd20karim/m2t-interpretable" ]
In the paper 'Guided Attention for Interpretable Motion Captioning', what BLEU-4 score did the ST-MLP model get on the KIT Motion-Language dataset
24.4
Something-Something V1
TAdaConvNeXtV2-B
Temporally-Adaptive Models for Efficient Video Understanding
2023-08-10T00:00:00
https://arxiv.org/abs/2308.05787v1
[ "https://github.com/alibaba-mmai-research/TAdaConv" ]
In the paper 'Temporally-Adaptive Models for Efficient Video Understanding', what Top 1 Accuracy score did the TAdaConvNeXtV2-B model get on the Something-Something V1 dataset
60.7
CIFAR-10
TRADES-ANCRA/ResNet18
Enhancing Robust Representation in Adversarial Training: Alignment and Exclusion Criteria
2023-10-05T00:00:00
https://arxiv.org/abs/2310.03358v2
[ "https://github.com/changzhang777/ancra" ]
In the paper 'Enhancing Robust Representation in Adversarial Training: Alignment and Exclusion Criteria', what Attack: AutoAttack score did the TRADES-ANCRA/ResNet18 model get on the CIFAR-10 dataset
59.70
Fashion-MNIST
ResNet-18
Vision Eagle Attention: a new lens for advancing image classification
2024-11-15T00:00:00
https://arxiv.org/abs/2411.10564v2
[ "https://github.com/MahmudulHasan11085/Vision-Eagle-Attention" ]
In the paper 'Vision Eagle Attention: a new lens for advancing image classification', what Percentage error score did the ResNet-18 model get on the Fashion-MNIST dataset
7.72
WDC Products-80%cc-seen-medium
Llama3.1_8B
Fine-tuning Large Language Models for Entity Matching
2024-09-12T00:00:00
https://arxiv.org/abs/2409.08185v1
[ "https://github.com/wbsg-uni-mannheim/tailormatch" ]
In the paper 'Fine-tuning Large Language Models for Entity Matching', what F1 (%) score did the Llama3.1_8B model get on the WDC Products-80%cc-seen-medium dataset
53.36
VTAB-1k(Structured<8>)
GateVPT(ViT-B/16_MAE_pretrained_ImageNet-1K)
Improving Visual Prompt Tuning for Self-supervised Vision Transformers
2023-06-08T00:00:00
https://arxiv.org/abs/2306.05067v1
[ "https://github.com/ryongithub/gatedprompttuning" ]
In the paper 'Improving Visual Prompt Tuning for Self-supervised Vision Transformers', what Mean Accuracy score did the GateVPT(ViT-B/16_MAE_pretrained_ImageNet-1K) model get on the VTAB-1k(Structured<8>) dataset
36.80
GAP
Maverick_incr
Maverick: Efficient and Accurate Coreference Resolution Defying Recent Trends
2024-07-31T00:00:00
https://arxiv.org/abs/2407.21489v1
[ "https://github.com/sapienzanlp/maverick-coref" ]
In the paper 'Maverick: Efficient and Accurate Coreference Resolution Defying Recent Trends', what Overall F1 score did the Maverick_incr model get on the GAP dataset
91.2
S2Looking
C2FNet
C2F-SemiCD: A Coarse-to-Fine Semi-Supervised Change Detection Method Based on Consistency Regularization in High-Resolution Remote Sensing Images
2024-04-22T00:00:00
https://arxiv.org/abs/2404.13838v1
[ "https://github.com/chengxihan/c2f-semicd-and-c2f-cdnet" ]
In the paper 'C2F-SemiCD: A Coarse-to-Fine Semi-Supervised Change Detection Method Based on Consistency Regularization in High-Resolution Remote Sensing Images', what F1-Score did the C2FNet model get on the S2Looking dataset
62.83
LLVIP
MiPa
MiPa: Mixed Patch Infrared-Visible Modality Agnostic Object Detection
2024-04-29T00:00:00
https://arxiv.org/abs/2404.18849v2
[ "https://github.com/heitorrapela/mipa" ]
In the paper 'MiPa: Mixed Patch Infrared-Visible Modality Agnostic Object Detection', what AP score did the MiPa model get on the LLVIP dataset
0.665
Aria Synthetic Environments
EVL
EFM3D: A Benchmark for Measuring Progress Towards 3D Egocentric Foundation Models
2024-06-14T00:00:00
https://arxiv.org/abs/2406.10224v1
[ "https://github.com/facebookresearch/efm3d" ]
In the paper 'EFM3D: A Benchmark for Measuring Progress Towards 3D Egocentric Foundation Models', what MAP score did the EVL model get on the Aria Synthetic Environments dataset
75
LSUN Bedroom 256 x 256
LFM
Flow Matching in Latent Space
2023-07-17T00:00:00
https://arxiv.org/abs/2307.08698v1
[ "https://github.com/vinairesearch/lfm" ]
In the paper 'Flow Matching in Latent Space', what FID score did the LFM model get on the LSUN Bedroom 256 x 256 dataset
4.92
ETTh1 (336) Multivariate
ATFNet
ATFNet: Adaptive Time-Frequency Ensembled Network for Long-term Time Series Forecasting
2024-04-08T00:00:00
https://arxiv.org/abs/2404.05192v1
[ "https://github.com/yhyhyhyhyhy/atfnet" ]
In the paper 'ATFNet: Adaptive Time-Frequency Ensembled Network for Long-term Time Series Forecasting', what MSE score did the ATFNet model get on the ETTh1 (336) Multivariate dataset
0.514
EuroSAT
TURTLE (CLIP + DINOv2)
Let Go of Your Labels with Unsupervised Transfer
2024-06-11T00:00:00
https://arxiv.org/abs/2406.07236v1
[ "https://github.com/mlbio-epfl/turtle" ]
In the paper 'Let Go of Your Labels with Unsupervised Transfer', what Accuracy score did the TURTLE (CLIP + DINOv2) model get on the EuroSAT dataset
96.6
Office-Home
MoA (OpenCLIP, ViT-B/16)
Domain Generalization Using Large Pretrained Models with Mixture-of-Adapters
2023-10-17T00:00:00
https://arxiv.org/abs/2310.11031v2
[ "https://github.com/KU-CVLAB/MoA" ]
In the paper 'Domain Generalization Using Large Pretrained Models with Mixture-of-Adapters', what Average Accuracy score did the MoA (OpenCLIP, ViT-B/16) model get on the Office-Home dataset
90.6
MM-Vet
InternLM-XComposer2
InternLM-XComposer2: Mastering Free-form Text-Image Composition and Comprehension in Vision-Language Large Model
2024-01-29T00:00:00
https://arxiv.org/abs/2401.16420v1
[ "https://github.com/internlm/internlm-xcomposer" ]
In the paper 'InternLM-XComposer2: Mastering Free-form Text-Image Composition and Comprehension in Vision-Language Large Model', what GPT-4 score score did the InternLM-XComposer2 model get on the MM-Vet dataset
51.2
Comic2k
CDDMSL
Semi-Supervised Domain Generalization for Object Detection via Language-Guided Feature Alignment
2023-09-24T00:00:00
https://arxiv.org/abs/2309.13525v1
[ "https://github.com/sinamalakouti/CDDMSL" ]
In the paper 'Semi-Supervised Domain Generalization for Object Detection via Language-Guided Feature Alignment', what mAP score did the CDDMSL model get on the Comic2k dataset
45.9
ACOS
MvP
MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction
2023-05-22T00:00:00
https://arxiv.org/abs/2305.12627v1
[ "https://github.com/ZubinGou/multi-view-prompting" ]
In the paper 'MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction', what F1 (Laptop) score did the MvP model get on the ACOS dataset
43.92
CNRPark+EXT
VGG-19
Revising deep learning methods in parking lot occupancy detection
2023-06-07T00:00:00
https://arxiv.org/abs/2306.04288v3
[ "https://github.com/eighonet/parking-research" ]
In the paper 'Revising deep learning methods in parking lot occupancy detection', what F1-score score did the VGG-19 model get on the CNRPark+EXT dataset
0.9629
ETTh2 (192) Multivariate
TSMixer
TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting
2023-06-14T00:00:00
https://arxiv.org/abs/2306.09364v4
[ "https://github.com/ibm/tsfm" ]
In the paper 'TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting', what MSE score did the TSMixer model get on the ETTh2 (192) Multivariate dataset
0.33
VisDA2017
SFDA2++
SF(DA)$^2$: Source-free Domain Adaptation Through the Lens of Data Augmentation
2024-03-16T00:00:00
https://arxiv.org/abs/2403.10834v1
[ "https://github.com/shinyflight/sfda2" ]
In the paper 'SF(DA)$^2$: Source-free Domain Adaptation Through the Lens of Data Augmentation', what Accuracy score did the SFDA2++ model get on the VisDA2017 dataset
89.6
SHD - Adding
ELM Neuron
The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks
2023-06-14T00:00:00
https://arxiv.org/abs/2306.16922v3
[ "https://github.com/AaronSpieler/elmneuron" ]
In the paper 'The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks', what Accuracy (%) score did the ELM Neuron model get on the SHD - Adding dataset
82
VLCS
GMDG (ResNet-50, SWAD)
Rethinking Multi-domain Generalization with A General Learning Objective
2024-02-29T00:00:00
https://arxiv.org/abs/2402.18853v1
[ "https://github.com/zhaorui-tan/GMDG_cvpr2024" ]
In the paper 'Rethinking Multi-domain Generalization with A General Learning Objective', what Average Accuracy score did the GMDG (ResNet-50, SWAD) model get on the VLCS dataset
79.6
MS COCO
BUCTD (PETR, with generative sampling)
Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity
2023-06-13T00:00:00
https://arxiv.org/abs/2306.07879v2
[ "https://github.com/amathislab/BUCTD" ]
In the paper 'Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity', what AP score did the BUCTD (PETR, with generative sampling) model get on the MS COCO dataset
77.8
VibraVox (headset microphone)
ECAPA2
Vibravox: A Dataset of French Speech Captured with Body-conduction Audio Sensors
2024-07-16T00:00:00
https://arxiv.org/abs/2407.11828v2
[ "https://github.com/jhauret/vibravox" ]
In the paper 'Vibravox: A Dataset of French Speech Captured with Body-conduction Audio Sensors', what Test EER score did the ECAPA2 model get on the VibraVox (headset microphone) dataset
0.0026
GSM8K
MuggleMATH 7B
MuggleMath: Assessing the Impact of Query and Response Augmentation on Math Reasoning
2023-10-09T00:00:00
https://arxiv.org/abs/2310.05506v3
[ "https://github.com/ofa-sys/gsm8k-screl" ]
In the paper 'MuggleMath: Assessing the Impact of Query and Response Augmentation on Math Reasoning', what Accuracy score did the MuggleMATH 7B model get on the GSM8K dataset
69.8